Your Hiring Process Is Showing the Market Who You Are
I’ve spent enough years building engineering organizations to know that how you hire tells the market exactly what kind of company you are. Not what your careers page says. Not what your recruiter promises. How you actually treat people during the process.
Two patterns keep showing up in technology hiring that should concern anyone responsible for building a team - unpaid labor disguised as interviews, and automated screening tools that discriminate by design. Both are legally dangerous. Both repel senior talent. And both tell the market more about the company than the company realizes.
The Catch-22 of Unpaid Technical Work
A recurring problem in technology hiring is the normalization of practices that violate U.S. labor law, often because hiring methods are copied from other countries without understanding the legal differences.
In the United States, wages are governed by the Fair Labor Standards Act (29 U.S.C. 201-219). The law defines employment broadly: under 29 U.S.C. 203(g), to “employ” includes to “suffer or permit to work.” Once a company receives productive work that benefits it, that work generally must be compensated under 29 U.S.C. 206, which requires payment of at least minimum wage. Department of Labor regulations (29 C.F.R. 785.11-785.12) reinforce that employers cannot accept the benefit of someone’s work without paying for that time.
This is why many unpaid “working interviews” and take-home projects are legally dangerous. If a candidate is writing production-quality code, implementing real features, or fixing defects that resemble the company’s actual work, the activity crosses the line from evaluation into uncompensated labor. The Supreme Court addressed this distinction in Walling v. Portland Terminal Co., 330 U.S. 148 (1947), explaining that unpaid training is only lawful when the employer receives no immediate advantage from the worker’s activity. Once the employer benefits, the worker is likely covered by wage law.
Here is the part that companies don’t think through: this creates an inescapable catch-22. If the candidate isn’t producing production-quality work during the exercise, what are you actually evaluating? You’re watching someone write code you’d never ship. That tells you almost nothing about whether they can do the job. But if the candidate is producing work at the level you’d actually want to hire for - code you could use, features you could ship, bugs that get fixed - then you are receiving economic value from their labor, and the FLSA applies.
For the interview to tell you what you need to know, the candidate has to perform at a level where the law says you owe them compensation. There is no version of this that works in the company’s favor. Either the exercise is too artificial to be useful, or it is useful enough to be illegal.
Companies that hand candidates a repository with real bugs and frame it as an “interview exercise,” or that assign week-long take-home projects and then ship the result - these practices are visible to the market. Engineers talk to each other. Word gets around about which companies treat the interview process as a way to extract free labor from people who feel they can’t say no because they need the job.
The LeetCode Confusion
LeetCode-style problems were originally designed for personal practice and skill evaluation, not for extracting unpaid work. They are legally safe in interviews because they are abstract exercises that produce no economic value for the employer. A candidate solving a graph traversal problem on a whiteboard is demonstrating knowledge. Nobody is shipping that solution to production.
It is also worth noting that many of these platforms explicitly prohibit their use in hiring processes. Their terms of service were written for individual learners, not for companies looking to build an interview pipeline on someone else’s problem bank. When organizations ignore those terms, they are not just misusing the format - they are violating the agreements they accepted when they started using the platform.
Problems arise when companies go further and misuse the format itself. When a “coding exercise” is actually a disguised feature request, a bug fix from a real repository, or a project that mirrors production work, the legal protection disappears. The exercise is no longer abstract. The company is now receiving economic value from unpaid labor, and the FLSA applies.
The distinction matters, and it is not subtle. If your interview exercise produces something the company could use, you have crossed the line.
Automated Screening and the Disability Problem
A separate but related issue is the rise of automated screening and AI-based proctoring systems used during technical assessments. As someone who has built and evaluated complex systems across healthcare, defense, and consumer platforms, I can tell you that the failure modes in these tools are not edge cases. They are not accidents. They are the predictable result of intentional design choices made to save money.
Building an assessment system that works correctly for candidates who use screen readers, magnification tools, alternative input devices, or who simply interact with a screen differently than an algorithm expects is expensive. It requires research, testing across a genuine range of human bodies and abilities, and ongoing iteration. Most vendors choose not to do that work. Most companies that buy these tools choose not to ask whether it was done. The result is a system that was designed from the start to work for one kind of person and to fail - quietly, invisibly, and deniably - for everyone else.
Automated monitoring tools used during technical assessments - eye tracking, screen monitoring, behavior analysis - flag looking away from the screen, irregular eye movement, or the use of assistive technology as suspicious. For candidates with disabilities, these systems misinterpret normal working behavior as cheating. A qualified engineer gets flagged - not for misconduct, but for having a body that doesn’t match the model’s assumptions. That engineer never gets a chance to show what they can build. They are eliminated by a system that was never designed to see them as a valid candidate in the first place.
In the United States, disability discrimination in hiring is governed by the Americans with Disabilities Act (42 U.S.C. 12101 et seq.). Employers must provide reasonable accommodations and cannot use hiring systems that systematically exclude or penalize disabled candidates. When automated assessment tools misclassify accessibility behavior as misconduct, they create discriminatory barriers that violate these obligations.
This is not an edge case in terms of population either. According to the CDC, roughly 26% of adults in the United States have some type of disability. One in four. When your hiring funnel includes a tool that penalizes people for how their bodies work, you are not screening for talent. You are telling a quarter of your candidate pool that they are not welcome, and you are doing it with a system specifically designed to make sure you never have to say it to their face.
And you are not just losing a candidate. You are losing what that candidate would have built for you. The engineer you just silently eliminated might have been the one who stays for a decade, learns every corner of your system, and becomes the person the entire team depends on. That capability, that passion, that stability - it does not disappear. It goes to your competitor. They get someone who wants to learn everything, who wants to put down roots and build something lasting, while you keep churning through a revolving door of hires who leave after eighteen months because your organization has already demonstrated - starting with the hiring process - that it does not care about people.
Recruiters have a term for this. They call it candidate experience. And every time you hand that experience to a third-party vendor or an automated system that you cannot audit, you are failing at it. Not partially. Completely. Because the candidate’s experience of your company is now determined by a tool that does not know them, cannot accommodate them, and was never designed to treat them as a human being worth evaluating fairly. That is the first impression your company makes, and for the candidates you most want to hire, it is often the last.
Any engineering leader evaluating their hiring pipeline should be asking whether these tools have been tested against the full range of candidates they claim to assess - and if the vendor can’t answer that question, that is your answer. But more than that, any leader who cares about the people they serve should be asking themselves whether they are comfortable with a process that silently eliminates qualified human beings because it was cheaper to build the tool that way.
What This Actually Signals to the Market
Here is the practical reality many companies ignore: even if a questionable hiring practice were technically legal, it still signals poor integrity. And integrity is not an abstraction. It is the thing that determines whether the best people in the industry want to build with you or avoid you.
Strong engineers do not just evaluate compensation and technology stack. They evaluate the people. They are looking for leaders worth following, teams worth joining, and missions worth committing years of their life to. When they encounter a hiring process that treats candidates as disposable labor or funnels them through systems designed to exclude rather than discover, they withdraw - not because the process is inconvenient, but because it tells them exactly how that organization treats the people inside it. What remains is a filtered pool of candidates who either lack alternatives or are willing to tolerate questionable practices. That is not how you build a team capable of solving hard problems.
The same applies to automated screening, but the signal is even worse. When an organization relies on opaque proctoring tools to evaluate technical candidates, it is admitting something about its own leadership: they do not have enough technical depth to fairly evaluate engineers themselves. They have outsourced that judgment to a vendor they cannot meaningfully audit, because they lack the capability to build or maintain that evaluation process in-house.
For a competent senior engineer, this is a disqualifying signal. Engineers at this level want to work under leaders they can learn from - people with enough technical depth to recognize good work when they see it. When leadership has already decided that a third-party proctoring vendor is better equipped to evaluate engineering talent than they are, they are telling every strong candidate two things: first, that the leadership lacks the technical competence to assess the people they are hiring; and second, that technical brilliance brought into the company will not be valued, because the organization has already committed to vendor-driven evaluation over building that capability internally.
A company that outsources its candidate evaluation to an algorithm that systematically disadvantages a quarter of the adult population is not just creating legal exposure. It is advertising that its technical leadership cannot do one of the most fundamental parts of their job - recognizing talent - and has no intention of learning how.
I’ve spent my career building systems at companies where engineering decisions have real consequences - healthcare compliance, defense systems, platforms used by millions of people. The companies worth working for are the ones that apply the same rigor to how they hire as they do to how they build. The ones that don’t are showing the market exactly who they are, whether they realize it or not.
The Boundary Is Not Complicated
Interview exercises should evaluate knowledge and reasoning without producing economic value for the employer. Candidates should not be asked to perform unpaid labor. Hiring systems should not introduce discriminatory barriers through poorly designed automation. These are not high bars. They are the minimum.
Why I Care About This
I am trying to build something. Not a product - a team. The kind of team that solves problems most organizations cannot touch. Healthcare systems that keep people alive. Defense platforms where failure has real consequences. Infrastructure that serves millions of people who will never know your name but whose lives are better because you built it right.
You cannot build that team if your hiring process is broken. You cannot attract the people capable of doing that work if the first thing they see is a company cutting legal corners on unpaid labor or outsourcing its judgment to a tool that silently eliminates qualified candidates because inclusion was too expensive to engineer. The people who build things that matter are the same people who notice when an organization does not take its obligations seriously - because they take their own obligations seriously, and they are looking for leaders who do the same.
I wrote about fiduciary duty in fractional leadership because I believe that when someone trusts you with their organization, you owe them more than competence - you owe them integrity. Hiring is no different. When a candidate walks into your process, they are trusting you with their time, their effort, and their dignity. Companies that treat that trust as something to exploit rather than something to honor are not just creating legal exposure. They are breaking a promise they made the moment they posted the job listing. And the people who would have made them great are watching.
If you are building a hiring process and want to make sure it actually attracts senior talent instead of filtering it out, or if you are building a team that needs to solve hard problems at scale and you want leadership that takes these obligations seriously, let’s talk.
Disclaimer: I write about hiring practices from the perspective of engineering leadership, not as legal counsel. The statutes and case law referenced in this essay are matters of public record, but nothing here constitutes legal advice. If you have questions about the legal dimensions of your hiring process, consult an attorney.