When Everyone Codes, Who Maintains the Mess?

LLMs make coding easy. That’s the pitch, and leadership is buying it. But we’ve seen this movie before; it doesn’t end well.

There is a growing push to let rank-and-file employees write code with large language models (LLMs). If LLMs can generate Python, JavaScript, or SQL on the fly, the logic goes, then PMs, designers, and analysts should all be empowered (read: forced) to ship code. But empowerment without accountability merely shifts the burden, and engineers are the ones who have to clean up the mess.

History repeats itself

In the olden days, we built software by iteratively throwing code over the fence from development to QA and back until it bounced into operations’ lap. Developers had little incentive to make sure the code worked beyond passing tests. DevOps got rid of that model with one rule: “You build it, you own it.” Suddenly, developers were on-call for their own mess, and, voilà, they cared more about maintainability, observability, scalability, and resilience. If you create a mess today, you are also the one who has to clean it up when a production system crashes on a Sunday at 2AM. DevOps put accountability in the hands of the people responsible for creating the software.

The rise of the janitor engineer

In the hands of a seasoned engineer, an LLM can provide clear benefits: it lets them build tools outside their core expertise and generate boilerplate far more efficiently. Crucially, a seasoned engineer can also gauge the quality of the LLM’s output, spotting issues most people would never notice.

Now people with zero knowledge of software engineering can spit out code that looks acceptable and sort of works, except it lacks authentication and authorization (authn/z), exposes hardcoded API keys, drags in a pile of unnecessary dependencies, has almost no unit tests or CI/CD integration, offers no observability, and does not scale at all. It’s not professional, maintainable software. It’s a brittle script. Why? Because LLMs have been trained on open-source code, most of which was never meant to run in production.
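To make that concrete, here is a minimal, hypothetical sketch of the kind of script I mean; the Flask endpoint, API key, database, and query below are all invented for illustration, not taken from any real system.

    # A hypothetical example of the "sort of works" script described above;
    # the endpoint, key, database, and table are invented for illustration.
    import sqlite3

    from flask import Flask, jsonify

    app = Flask(__name__)

    API_KEY = "sk-live-0a1b2c3d4e5f"  # hardcoded third-party secret, one push away from a public repo

    @app.route("/report")  # no authentication or authorization on the endpoint
    def report():
        rows = sqlite3.connect("prod.db").execute(
            "SELECT * FROM orders"
        ).fetchall()  # pulls the whole table into memory; no pagination, no error handling
        return jsonify(rows)  # no logging, metrics, or tests anywhere in sight

    if __name__ == "__main__":
        app.run(debug=True)  # the debug server, exposed as if it were production

A dozen lines, and most of the failure modes listed above are already present.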

You can ask the LLM to address each issue separately and hope for the best. But why? Authn/z, secrets managers, tests, sensible abstractions, modularity, CI/CD, and observability are all par for the course for any decent software engineer. They are part of the job. Except now non-engineers can build fragile yet shiny tools, and when these break, engineers get paged. When everyone codes, software engineers become glorified janitors.

We have seen this pattern in data engineering, too. Data scientists and machine learning engineers outsource the dull parts of their jobs to data engineers, who must clean up the mess someone else created in the source systems and ensure compliance with vague rules dictated by rogue data scientists. It is one of the reasons so many machine learning initiatives around the world end in failure.

Productivity theatre

The truth is that software development is rarely the bottleneck in organizations: coordination, prioritization, tech and product debt, and getting agreement among stakeholders are. AI is also expected to revolutionize product management, usually by people who have never been product managers themselves! Sure, it might, but only if you ignore the fact that anything of substance requires effort, patience, and experience:

  • Crafting a vision that is backed by actual data, not made-up figures that look reasonable but are completely off.
  • Prioritizing features when there are hard trade-offs to make and tough discussions to be had with customers as a consequence.
  • Dropping features that no longer serve most customers or the business.
  • Managing stakeholders and getting support from leadership.
  • Coming up with the right problems to focus on with clear problem statements in the first place.

LLMs are overconfident interns who have read the first paragraph of every Wikipedia entry and forked every weekend hobby project on GitHub: they sound smooth, appear to know a lot, but they are at best generalists and at worst bullshit artists. They externalize the cost of their own hallucinations.

The moment you ask for depth or references, they produce figures that look reasonable but are partially or completely made up, and you won’t know which unless you spend more time checking than it would have taken to do the work yourself. The alleged productivity gains from LLMs never include the extra work of verifying their outputs.

Intriguingly, using AI carries a social cost: employees who rely on it are often perceived by their peers as lazy, less competent, or less diligent.

Power without accountability

The job most ripe for automation? The one pushing for AI everywhere: the CEO. Many hold multiple jobs, sit on several boards, and rarely show up at the office. Yet they mandate AI and a return to the office, because mandates are wonderful! Except for the people affected by them.

Golf aside, the CEO’s workload can easily be handed off to a Python script or a Slack bot (a tongue-in-cheek sketch follows the list):

  • Announce 5–10% layoffs every year or so—unless your last name rhymes with “cluckerberg”, then it’s every six months.
  • Shout random buzzwords from MBA programmes every now and then, and throw in AI, crypto, blockchain, and quantum to sound more tech-y.
  • Ask the executive assistant to schedule a luncheon with a key client to promise new features on an infeasible timeline without consulting with or informing anyone on the team.
  • Request a 50-slide status deck every Monday before dawn.
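In the same tongue-in-cheek spirit, a minimal sketch of that script; every figure, buzzword, and sentence below is invented for the joke, and wiring it to an actual Slack webhook is left as an exercise.

    # A tongue-in-cheek sketch of the "CEO bot"; all numbers and buzzwords are made up.
    import random
    from datetime import date

    BUZZWORDS = ["AI", "crypto", "blockchain", "quantum", "synergy", "platform play"]

    def quarterly_announcement() -> str:
        layoffs = random.randint(5, 10)  # 5-10% layoffs, per the list above
        vision = " and ".join(random.sample(BUZZWORDS, 2))
        return (f"{date.today():%d %B %Y}: we are reducing headcount by {layoffs}% "
                f"to double down on {vision}. Also, see you all in the office on Monday.")

    if __name__ == "__main__":
        print(quarterly_announcement())  # posting this to Slack is left as an exercise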

Rapid prototyping

Cue the standard argument that LLMs make people prototype faster. Sure, if guardrails exist and ownership is clear. But how often do you actually take a prototype that was launched with duct tape and prayers (and no version control except perhaps for an “Initial commit” message) offline so that engineers have enough time to re-engineer it rather than firefight its ongoing operations? Yeah, I didn’t think so either. Especially since the person who built the illusion has already been promoted and handed more power to do more damage. It is the engineers who inherit such digital time bombs. And if customers are exposed to fragile AI-generated tools, they may not think much of your company afterwards. Then again, if every company does the same, customers have no choice.

Accountability

Speed is not the enemy. Speed without accountability is. LLMs are chaos amplifiers and tech debt accelerators, not a free lunch.

AI can write code in seconds, but you may spend years regretting it. The result is fragile systems and invisible toil, or rather toil outsourced to janitor engineers. Or even to judges.

It is very simple: if everyone is empowered to build, everyone must be accountable for what they build. And if an LLM can build your entire business, then perhaps your business was not worth much to begin with. Until then, engineers (and product managers) are not freed by AI. They’re being buried under it by their executives.