Proficiency Is Tiered and Other Lies We Tell Ourselves

Ryan James Spencer

Tiered categorisations of knowledge and proficiency are fundamentally flawed: they rest on the notion that all knowledge can eventually be obtained, retained, and divvied up amongst n-many categories. Such categorisations also ignore the fact that most skills rely on overlapping knowledge from various domains. In this article I propose a way to evaluate subject matter with an eye to prioritising the question ‘what should I learn next?’ I’ll also offer up an approach to evaluating others that isn’t based on ‘skill level’ (admittedly regurgitated from Amy Cuddy).

A Tagging System

Instead of suggesting that knowledge from a domain of expertise can be chunked and tagged in toto, consider an alternative tagging system where we sort knowledge under three kinds of labels and, most importantly, accept that the fringes are fuzzy:

  • Fundamentals
    • There is usually a corpus of knowledge that everyone can agree is pivotal to ‘being competent/dangerous’ in a particular subject matter. These skills may overlap with other domains, and it may be unclear exactly where they overlap, but what matters is that they are relatively obvious in the domain at hand.
  • Nice-to-Haves
    • This is knowledge that is worth a bit of time, as it refines and builds on the fundamentals to introduce more powerful techniques and practices. The only clarity here is that these skills are definitely not fundamentals and definitely not esoteric.
  • Trivia
    • This is the stuff you probably don’t need to know, like the fact that some AIX machines have a weird bug where inputting uppercase characters at certain prompts will cause the machine to reboot, or that earlier, alternative architectures supported 7-bit bytes. These tidbits of information (sometimes not so minuscule!) are probably very costly to pick up and don’t give you much in return.

These tags map closely to the progression of learning a subject: when you start out, everything is rough and unclear, so you should focus on exposing yourself to as much knowledge as possible even if you don’t quite understand everything. This ‘5yo’ view of the world builds the framework wherein we can fill in further details as we step towards the nice-to-haves. But instead of becoming an ‘expert’ by picking up trivia, we try to avoid it; if a piece of trivia were truly important, it would fall back into the nice-to-haves. This is the important caveat about learning in general that I’m trying to make here: mastering a subject has nothing to do with knowing absolutely everything there is to know.

The practice of using these tags is simple: whenever you’re faced with a variety of options, pick fundamentals over nice-to-haves and nice-to-haves over trivia, and, on that last point, prefer picking up fundamentals and nice-to-haves across a variety of subject matter over amassing a collection of specific trivia for a single subject. A code sketch of this rule follows.
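To make this concrete, here’s a minimal sketch of the tags as code. The `Tag` and `Topic` names, and the backlog entries, are hypothetical, invented purely for illustration:

```rust
/// Hypothetical labels for pieces of knowledge in some domain.
/// Deriving `Ord` encodes the priority directly: earlier variants
/// compare as smaller, so fundamentals sort first and trivia last.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Tag {
    Fundamental,
    NiceToHave,
    Trivia,
}

/// A piece of knowledge paired with its (fuzzy, revisable) tag.
struct Topic {
    name: &'static str,
    tag: Tag,
}

fn main() {
    // A made-up learning backlog for some imaginary domain.
    let mut backlog = vec![
        Topic { name: "7-bit byte architectures", tag: Tag::Trivia },
        Topic { name: "powerful refinements of the basics", tag: Tag::NiceToHave },
        Topic { name: "the obvious, load-bearing basics", tag: Tag::Fundamental },
    ];

    // Picking what to learn next is just a sort on the tag:
    // fundamentals over nice-to-haves, nice-to-haves over trivia.
    backlog.sort_by_key(|topic| topic.tag);

    for topic in &backlog {
        println!("{:?}: {}", topic.tag, topic.name);
    }
}
```

Deriving `Ord` from the variant order keeps the priority rule in exactly one place, and if a piece of trivia turns out to matter after all, re-tagging it is a one-line change.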

I liken this to the Pareto principle, which effectively states that input effort is usually disproportionate to output gains, or, as the common quote goes, “20% of the effort for 80% of the output” (though it’s perfectly feasible for the opposite split to occur). This roughly implies that most initial upfront work is high leverage and that driving towards ‘expertise’ may have little return on investment. What I like about this proposal is that it accepts that knowledge categorisation is messy and that there’s probably no one in the world who knows everything.
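As a back-of-the-envelope illustration, here’s the leverage implied by taking the 80/20 quote at face value; the split is the only input, and everything else is invented for the example:

```rust
fn main() {
    // The canonical 80/20 split: the first 20% of effort is said to
    // yield 80% of the output; the remaining 80% buys the last 20%.
    let (early_effort, early_output) = (0.20_f64, 0.80_f64);
    let (late_effort, late_output) = (0.80_f64, 0.20_f64);

    // Leverage, measured as output per unit of effort.
    println!("early leverage: {}x", early_output / early_effort); // 4x
    println!("late leverage: {}x", late_output / late_effort); // 0.25x
}
```

On those numbers, the early fundamental work is sixteen times as leveraged per unit of effort as the final push towards ‘expertise’.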

Evaluation of Others

One problem with the above proposal is that it doesn’t consider the most common use of tiered categorisations: evaluating others’ skill sets. Amy Cuddy proposes that most people judge you on your competency only after they’ve judged whether or not they can trust you. I propose we drop the skill-evaluation-at-the-moment-of-evaluation tactic and instead evaluate others on two primary metrics: trust, and the ability to advance one’s skills over any given period of time. Some people call this “hiring for the slope rather than the y-intercept”.

That said, I’m predominantly a software engineer, and in my experience the former use of this proposal, prioritising my own learning, is the one I care about the most. Determining what’s appropriate for ourselves is a far more valuable use of engineering time than trying to divvy people up into boxes. Further, evaluating people on the merits of their enthusiasm, their ability and desire to continually learn, and their capacity to work both alone and in teams is worth far more than whether someone is a self-proclaimed 10x engineer capable of cranking out a lot of (read: complicated, unmaintainable, badly scoped) code.