Why measuring soft skills is hard
Soft skills are notoriously difficult to measure, and for good reason. Unlike technical skills — where you can administer a certification exam or track output metrics — soft skills manifest in complex, context-dependent ways. A person's communication skill isn't a single number; it varies by audience, medium, emotional state, and subject matter.
Traditional measurement approaches fail for several reasons. Self-assessments are unreliable because people are poor judges of their own interpersonal abilities — the Dunning-Kruger effect is especially pronounced in soft skills. One-time evaluations capture a snapshot rather than a trajectory. And purely qualitative feedback ("she's a good communicator") lacks the specificity needed to drive improvement.
But the difficulty of measurement doesn't excuse skipping it. Organizations spend billions annually on soft-skills training programs with no way to determine whether they work. The result is a cycle of expensive interventions, vague satisfaction surveys, and no evidence of lasting impact.
The solution isn't to find a single perfect metric — it's to build a framework that combines multiple signals across different time horizons. Leading indicators tell you whether people are engaged. Behavioral indicators tell you whether habits are changing. Business outcomes tell you whether it matters. Together, they paint a picture that no single metric could provide.
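The multi-signal idea above can be sketched as a small data structure that keeps each category of signal separate rather than collapsing everything into one score. This is an illustrative sketch only: the metric names and values below are hypothetical, not drawn from any particular program.

```python
from dataclasses import dataclass

@dataclass
class SignalReading:
    name: str      # e.g. "session_attendance" (hypothetical metric name)
    category: str  # "leading", "behavioral", or "outcome"
    value: float   # normalized to 0.0-1.0 for comparability

def composite_view(readings: list[SignalReading]) -> dict[str, float]:
    """Average each category separately, preserving the three time horizons."""
    buckets: dict[str, list[float]] = {}
    for r in readings:
        buckets.setdefault(r.category, []).append(r.value)
    return {cat: sum(vals) / len(vals) for cat, vals in buckets.items()}

readings = [
    SignalReading("session_attendance", "leading", 0.90),
    SignalReading("peer_feedback_frequency", "behavioral", 0.55),
    SignalReading("team_retention", "outcome", 0.70),
]
print(composite_view(readings))
```

Reporting the three categories side by side, rather than averaging them into a single number, preserves the framework's core claim: an engaged cohort (high leading indicators) with flat behavioral and outcome signals tells a very different story than uniformly middling scores.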