Companies over-rely on interviews when hiring, even though interviews have been shown to be poor predictors of future performance and introduce opportunities for bias. As an alternative, try giving candidates who make it past an initial screening a small test of the primary skill the job requires. For instance, ask a coder to complete a small coding project. This “minimally viable demonstration of competence,” paired with a follow-up discussion that debriefs the exercise, can be a powerful tool for moving beyond the resume to find qualified candidates whom hiring bots might have passed over.
As a hiring manager, you want to bring on the “best” person for a job (whatever that means for the given role), but how do you know who’s right?
It’s a simple question with no easy answer and high downside risk. The cost of leaving a role unfilled ranges from delayed schedules to eroding service levels, or worse. The cost of a wrong hire is often estimated at 30% to 50% of that person’s salary, or more. Traditional approaches to screening and hiring have always been imprecise, but given the pace of change in the very nature of work itself, these tools are becoming even more limited.
Startups create a minimum viable product, or MVP, to test consumer demand for a concept before investing in building a polished version. What if you could use a similar tactic to get better insight into your candidates, declutter your hiring process, better match true competency to the job, and increase the diversity of your hires? We believe you can.
To do so, however, you’ll have to be willing to think differently about the job interview, which arose in its present form out of the psychological screening of soldiers during World War I. Even when interview questions are relevant, the interview is a poor predictor of future performance. It demonstrates someone’s ability to answer questions, recall theory, and prioritize information, all of which may or may not correlate with what they need to do on the job.
The traditional interview also makes it more likely that we hire someone in our own image, the “mini me” cognitive error. We can’t help it, however objective we feel we are. (Organizational psychologist Adam Grant calls this the “I’m not biased” bias.) Think about it: When someone walks out of your office, or off a Zoom call, what makes you think it was a good interview? Usually, it’s a spark of connection, which happens when you find something in common, not something different. Simply because we’re human, we all tend to treat this feeling of connectedness as a proxy for competency. In a world increasingly focused on DEI, the interview is a tool that likely makes those objectives harder to achieve.
Many organizations have little idea what should replace the interview process, even when they know it’s not working. Personality tests often attempt to judge competency based on temperament. Bot screening can help whittle down a mountain of resumes, but is clumsy and misses a lot, especially as jobs are becoming less codified. Previous work experience is frequently a poor predictor of future performance. Some gamification shows promise, but to date has mostly measured work ethic, emotional intelligence, and situational judgment. Pre-hire assessment technologies have been called a “wild west,” and may have unintended consequences with respect to protected traits like race, sex, or national origin.
But there is a better way. We suggest an approach we call minimally viable demonstrations of competence. What’s that? It means boiling down any path forward to the smallest testable hypothesis, and taking action to see what happens. If it goes well, keep going; if it doesn’t, correct course. We believe this method holds great promise in the hiring process.
One of us (Jeff) spent several years hiring writers for our firm. He used a scenario-driven writing assignment, administered after a short introductory call, to assess skills. Many publications use writing or editing tests for job candidates, but Jeff approached the task more analytically than most: After receiving the assignment, he conducted a follow-up conversation to understand not just what was on the page, but the choices the candidate made in crafting it. Not only did this give us a sense of how a candidate would perform, but the candidate also got a much better sense of the job itself, as we related elements of the task to actual role expectations. By using the same exercise repeatedly, Jeff also built a database of responses over time, a positive feedback loop that made it easier to assess the next candidate. “This approach to hiring was a game-changer, and I ended up with several rock stars,” Jeff says. “The exercise also allowed me to hire from truly nontraditional backgrounds, because I didn’t have to worry much about job specifics on the resume. The process itself told me if someone would be good.”
Let’s focus for a moment on our term for this process: minimally viable demonstrations of competence. “Minimally viable” means making the test as unfussy and brief as possible while still giving the evidence you need. “Demonstrations of competence” means, quite simply, that the test actually shows skills in action that are essential to the job. It won’t do if the test demonstrates something interesting and important, but not core to the job or not predictive of performance.
Minimally viable demonstrations of competence is a mouthful, so we sometimes use a shorthand term for this process: the reveal. It refers to the moment in poker when players turn over their hands and show what they’re holding. For many types of roles, a minimally viable demonstration of competence can offer the same revelatory moment.
Of course, working conditions vary wildly. It’s easy to ask an aspiring writer for a mock assignment, an aspiring consultant to join a project team for a workshop, or an aspiring saleswoman to execute a cold call. But what about manufacturing, where physical movement and the ability to adapt to the unforeseen are paramount? Or wildfire firefighting, where you are battling something unpredictable in conditions that can be hard to imagine? Well, it works there too. Through scenario planning and AI-enabled VR tools, we can approximate almost any real-life situation these days. But human reaction to those situations can’t be predicted, only witnessed through the reveal.
This process can also be industrialized; after all, looking for five people to write is very different from hiring thousands. One helpful tool is the “pre-hire assessment,” usually some kind of test, which has been in use since the imperial Han dynasty in ancient China. Partly because it can be deployed impersonally and at scale, it rose in popularity at the start of the pandemic as a way of getting 10,000 resumes down to 1,000. Some of these assessments measure dimensions such as emotional intelligence, work ethic, or basic aptitude (not always a great proxy for future performance). But when well designed, a pre-hire assessment can be one useful tool as part of a larger process.
Deloitte (where we work) is doing this today with full-stack engineers, taking a minimally viable approach by asking applicants to actually build something. This is partly because of the rapid evolution of software development and the consequent imprecision of work histories. Someone may claim to be a software engineer, but do they really have the capability to do full-stack development in a modern DevOps manner? Today, Deloitte is driving more targeted interviews based on applicants’ actual coding abilities.
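To make that concrete, here is a minimal sketch of what such an exercise might look like. This is not Deloitte’s actual assessment, which we don’t detail here; the prompt, the function name summarize_orders, and the tests are hypothetical. The point is that the task is small, self-contained, and gives the debrief something real to dig into.

```python
# Hypothetical take-home prompt: "Given a list of (customer, amount) orders,
# return per-customer totals and flag any customer whose total exceeds a
# credit limit." The candidate writes both the function and the tests;
# the follow-up conversation explores the choices behind them.

from collections import defaultdict
import unittest


def summarize_orders(orders, credit_limit=1000.0):
    """Return ({customer: total}, set of customers whose total exceeds the limit)."""
    totals = defaultdict(float)
    for customer, amount in orders:
        totals[customer] += amount
    flagged = {customer for customer, total in totals.items() if total > credit_limit}
    return dict(totals), flagged


class TestSummarizeOrders(unittest.TestCase):
    def test_totals_are_summed_per_customer(self):
        totals, _ = summarize_orders([("acme", 400.0), ("acme", 250.0), ("globex", 90.0)])
        self.assertEqual(totals, {"acme": 650.0, "globex": 90.0})

    def test_customers_over_limit_are_flagged(self):
        _, flagged = summarize_orders([("acme", 800.0), ("acme", 300.0)], credit_limit=1000.0)
        self.assertEqual(flagged, {"acme"})

    def test_empty_input_returns_empty_results(self):
        self.assertEqual(summarize_orders([]), ({}, set()))


if __name__ == "__main__":
    unittest.main()
```

The code itself matters less than the conversation it enables: why that data structure, why those tests, what should happen with a negative amount. That debrief is the reveal.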
Human evaluation still plays a vital role. One of us (Steve) once hired a candidate for a consulting role who didn’t fit our firm’s protocol for hiring new graduates. Ellen had taken five years to finish her degree, was not in a business program, and studied at a school where we don’t typically recruit. However, she was an engineer with experience running a hydrocarbon plant, and her cover letter was clear about why she was making a career shift, signaling maturity. And she traveled two hours to meet with us. When we met, she set to work on a problem and quickly came to an elegant, smarter answer that we and hundreds of others had missed. A bot would have rejected Ellen, but today she’s a managing director in our M&A practice.
Relying on human judgment and a carefully devised reveal is more targeted and predictive than the “pack the pipeline and see what makes it through” approach many companies use today. Few companies practice this on any wide scale yet, but the good news is that some appear to be starting to challenge the orthodoxies of the traditional hiring process. Steve and Geoff’s friend and mentor, Joe Fuller, recently wrote convincingly about the trend in HBR. We imagine a world where it becomes the norm for the “interview” to evolve into a targeted set of minimally viable steps, culminating perhaps in a one- or two-day “ride-along”: the corporate equivalent of dating without any notion of obligation or strings attached. Better yet, why not turn the gig economy into a permanent pipeline: Gig out a job and invite the best performers in for full-time roles at premium rewards?
It’s an approach more companies should try.