Executives like to treat hiring as a people decision. In practice it is capital allocation under uncertainty. The firm is buying a stream of problem solving, coordination, and judgment that will convert salary outlay into delivery. Yet most interview processes still privilege theatre over throughput. That mispricing feeds the productivity puzzle that boards complain about, because selection filters reward presentation over the capacity to move real work across the line inside complex systems.
The first structural error is a reliance on proxies that once correlated with capability but now mostly describe access. School brand, recognizable employer logos, and smooth narrative performance remain the strongest predictors of an offer in many markets. These signals travel well across geographies and search committees, which is why they persist. They also compress diversity of operating styles and underweight emergent skills that do not sit on a transcript. When capital is abundant, this bias hides inside budget slack. When rates and risk premia rise, it turns into delayed projects, crowded governance, and duplicated headcount to compensate for weak delivery.
A second error sits in the evaluation mechanics. Interviews still test memory and composure under an artificial time limit. Panelists listen for neatly arced stories, consensus language, and polite dissent. That might help screen for client-facing roles in old-line consultancies. Inside a regional bank migrating core systems, or a supply chain operator rewriting demand forecasts, the winning variable is different. The firm needs people who reduce coordination cost, who can decompose an ambiguous brief without waiting for a perfect dataset, and who can write clearly enough to align five functions with conflicting incentives. None of these is captured by the standard playbook of conversational interviews, quick puzzles, and a final culture chat that measures similarity more than contribution.
AI has quietly widened the gap between interview performance and on-the-job value. Candidates arrive with polished answers trained on public question sets. Writing samples and case prompts now benefit from widely available tooling. That does not make interviews useless. It does make them less informative unless the task mirrors the real production environment. A thirty-minute chat on stakeholder management will not reveal whether a candidate can constrain scope, write a decision memo that survives legal review, or convert a vague executive ask into a deliverable timeline with tradeoffs spelled out.
The labor market distortion is broader than any single company. Mis-hiring and slow sorting show up in macro data as persistent vacancies alongside muted productivity gains. Firms call it a skills shortage. In many cases it is a measurement failure. When processes over-index on performance in controlled conversations, they miss the skill most correlated with value creation in modern work: the ability to push work through multi-constraint systems without supervision. That is not charisma. It is applied judgment plus written clarity, repeated.
Geography matters, but not in the way most assume. Singapore, the Gulf, and Hong Kong share a deep bench of credentialed talent and a policy push toward skills-based economies. Yet enterprise hiring still leans on brand and interview poise because these signals travel across multinational committees and satisfy auditability. The result is a comfort premium. Boards believe they are reducing risk by choosing familiar packaging. In reality they are buying delayed execution that later requires additional program management layers to fix.
What would a better assessment look like? It would bring the real work inside the evaluation window and measure coordination, not personality. A strong process asks for a work sample anchored to the firm’s actual constraints. Give the candidate a messy source set, a conflicting brief from two stakeholders, and a policy boundary that matters. Ask for a one-page decision memo, a two-hour time box, and a short live defense that clarifies tradeoffs. Calibrate scorers on three variables only. Can the person narrow the problem without permission? Can they write in a way that reduces future meetings? Can they surface and price the risks that matter to legal, finance, and operations? The answers travel across roles and regions without relying on pedigree.
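For teams that want to operationalize that rubric, a minimal sketch in Python helps make calibration concrete. Everything in it is an illustrative assumption: the 0-4 scale, the equal weights, the field names, and the divergence threshold are starting points for a hiring committee to argue over, not a validated instrument.

```python
# A minimal sketch of the three-variable rubric as a scoring artifact.
# Scale, weights, and names are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class WorkSampleScore:
    """Scores on a 0-4 scale from one calibrated scorer."""
    narrows_without_permission: int  # decomposed the brief unprompted?
    writing_reduces_meetings: int    # memo stands alone without a walkthrough?
    prices_material_risks: int       # surfaced legal/finance/ops exposures?

    def total(self) -> float:
        # Equal weights: the point is comparability across roles,
        # not a finely tuned model.
        return (self.narrows_without_permission
                + self.writing_reduces_meetings
                + self.prices_material_risks) / 3

# Two scorers rate the same candidate; divergence above a threshold
# triggers a calibration discussion rather than a quiet averaging.
scorer_a = WorkSampleScore(3, 4, 2)
scorer_b = WorkSampleScore(2, 3, 2)
if abs(scorer_a.total() - scorer_b.total()) > 1.0:
    print("Calibration session needed before a decision.")
else:
    print(f"Consensus score: {(scorer_a.total() + scorer_b.total()) / 2:.2f}")
```

The design choice worth defending is the disagreement check: forcing scorers to reconcile before averaging is what keeps the rubric from drifting back toward likability.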
There is a second measure with high predictive power that few companies record with discipline. It is the candidate’s history of repeated delivery under constraint. Not the one large success that rides a brand, but a pattern of medium-sized wins where the constraint is visible. Early in a career, that might look like high-quality internship outputs inside real teams, not competitions. Mid-career, it looks like clear ownership of a process that used to leak value and now does not. Late in a career, it looks like policy or architecture decisions that removed recurring blockers. This is not glossy storytelling. It is documented before-and-after change, with stakeholders who would confirm it if asked.
Some fear that writing-heavy assessments will bias toward native English speakers and extroverted analysts. The concern is valid only if writing is graded for flourish. The aim is transmission efficiency. A clear email that aligns legal, data, and product beats a charismatic town hall that leaves delivery teams guessing. In cross-border environments, the cost of ambiguity compounds. Every unclear paragraph creates another meeting and another week lost to clarification. Firms that hire for crisp written alignment see lower program risk and faster compounding of institutional knowledge.
Standard interview formats also ignore a skill that is quietly precious when budgets tighten. It is the ability to build trust quickly across functions that do not share incentives. This is not charm. It is evidence of reliable follow-through on small promises inside short cycles. A process can test this without theatrics. Give the candidate a scheduled check-in during the work sample window. Ask them to request one dependency they need from a mock stakeholder. Observe whether the ask is clear, whether the follow-up is timely, and whether they adjust when the dependency arrives late. That simple loop reveals more about future coordination cost than any abstract question about leadership style.
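The loop is cheap to encode as three observable checks. Again, a sketch under assumptions: the two-hour follow-up threshold, the field names, and the pass bands are placeholders a hiring team would set for itself.

```python
# A sketch of the dependency loop as three observable checks.
# Thresholds and names are assumptions for illustration only.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class DependencyLoop:
    ask_was_specific: bool       # named the artifact, the owner, and a deadline
    follow_up_delay: timedelta   # time from missed deadline to follow-up
    adjusted_plan_on_slip: bool  # reworked the time box when input arrived late

    def coordination_signal(self) -> str:
        timely = self.follow_up_delay <= timedelta(hours=2)
        passed = sum([self.ask_was_specific, timely, self.adjusted_plan_on_slip])
        return {3: "strong", 2: "adequate"}.get(passed, "weak")

# Example: a specific ask, a 45-minute follow-up, and a revised plan.
print(DependencyLoop(True, timedelta(minutes=45), True).coordination_signal())
```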
Policy environments can help or hinder the shift. Governments can signal for skills by funding mid-career conversions that produce public work samples, by recognizing micro-credentials only when attached to verified outputs, and by publishing hiring guidelines that elevate work simulation over conversational scoring. Large employers can accelerate the change by publishing their assessment rubrics. The point is not to remove discretion. It is to move discretion away from the comfort of similarity and toward evidence of throughput under ambiguity.
There is a predictable objection from senior operators. Interviews protect against team damage that a bad hire can cause. No memorandum can capture values and fit. That is correct, but incomplete. Values are visible in the work sample when the task includes a stakeholder who loses out and needs to be handled with respect. Fit is visible in whether the candidate’s writing reduces or increases cognitive load for the reader. If a candidate can move a conflicted decision forward and keep colleagues informed without noise, that is fit in any high velocity organization.
The capital markets angle is straightforward. Hiring that misprices delivery risk raises the cost of execution. Projects take longer, governance layers thicken, and strategic pivots stall while leaders search for mythical senior fixes. Investors read this as management weakness or structural headwinds. Sometimes it is both. Often it is simply selection tuned to the wrong variables. Correct the assessment inputs, and the same payroll buys more throughput with less managerial glue. In a slow growth world where rates stay higher than the last decade, that efficiency advantage becomes a competitive moat.
None of this requires radical tooling. It requires discipline and the willingness to be audited by outcomes. Track post-hire delivery against the assessment rubric. If teams that used work samples and decision memos deliver faster with fewer revisions, keep going. If the new process fails to predict, adjust the tasks rather than retreat to conversational comfort. The market does not pay you for interviews that feel right. It pays you for systems that deliver under pressure without constant executive intervention.
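The audit itself is one line of arithmetic once the data exists. A minimal sketch, assuming rubric totals at hire and post-hire delivery cycle times pulled from your own project tracker; the numbers below are invented for illustration, and `statistics.correlation` requires Python 3.10 or later.

```python
# A minimal audit sketch: does the assessment score predict delivery?
# Data is hypothetical; in practice, pull cycle times from your tracker.
from statistics import correlation  # Python 3.10+

assessment_scores = [2.3, 3.7, 3.0, 1.8, 3.9, 2.6]  # rubric totals at hire
cycle_time_days   = [41,  22,  28,  55,  19,  37]    # median post-hire delivery cycle

# Negative correlation is the success condition: higher scores, shorter
# cycles. A value near zero says redesign the tasks, not retreat.
r = correlation(assessment_scores, cycle_time_days)
print(f"score vs. cycle time: r = {r:.2f}")
```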
Job interviews aren’t evaluating the right skills. Boards that accept this and redesign the assessment to mirror real work will buy more execution per dollar, reduce coordination tax, and close a piece of the productivity gap that balance sheets have been carrying for years. The result reads modest on paper. Inside the operating rhythm it feels like compound interest.