Will AI eat my software eng job?
Plus other fun and exciting questions about the future of engineering
Disclaimer: this post is for software engineers. If you don’t build software for your job, you might get bored.
As I’ve built more with AI, I’ve become fascinated with developer workflows that maximize productivity via AI. An FAQ-style list of observations and opinions follows, starting with everyone’s favorite question…
Will AI eat my software engineering job?
That depends on you. If you learn to use AI to write code more efficiently, you’ll be more valuable than ever. If not, another engineer using AI will eat your job.
AI coding agents are already more productive than most junior engineers. But a strong engineer using AI is way more productive than AI wielded by a non-engineer.
If you’ve tried AI coding agents and they slowed you down, I understand, but you need to try harder. These agents make step-change improvements monthly. You will fall behind if you don’t learn.
If your company doesn’t pay for AI coding agents… One, that’s super dumb. Two, too bad. You need to pay for it out of pocket.
Zooming out, if every capable engineer goes AI-first, we’ll unlock more build capacity than ever, leading to an explosion of companies and products we can’t yet imagine.
But, in the far future, will AI eat my job no matter what I do?
No one knows. But if I were a gambling man – and you know I am – I’d bet human software engineering sticks around for a long time.
But the role of software engineer will change, shifting toward work like:
Supplementing where the models fall short, at least for a while (eg, adding context, reviewing code)
Aligning AI context and prompts with business and user goals
Selecting models, managing AI builds to align with company standards, and managing compute cost
Integrating pre-AI legacy code with post-AI code
Long-term system architecture that deeply aligns with company mission and vision
This is a good thing. Engineering talent is the top constraint within software companies. Many good ideas aren’t built because there aren’t enough engineers. With AI, we’ll build a lot more of them, and you’ll continue to have a job as long as you embrace building AI-first.
What do I need to do to become an AI-first engineer? How do I get on the winning side?
Let’s break it down: from zero AI usage to full mastery.
To start:
Use an AI coding agent to interface with your code. Pick your favorite.
Ask questions of the code. Anytime you’d ask another engineer for context, ask the coding agent first.
Ask increasingly complex questions. Eg, what are alternative backend frameworks I should consider? Explain your thinking.
The goal is to get value as quickly as possible. You’re trying to red-pill yourself.
Once you’re pilled:
Start using the AI coding agent to write new lines of code (not just tab completion). Review the code yourself until you’re comfortable.
Create a thoughtful approach to your own development workflow. Here’s a great example from a recent Gauntlet AI grad.
Write more docs so you can give AI better context in fewer words and keep the context window manageable.
Set up user rules so the AI adheres to your style and your company’s best practices (see the sketch after this list).
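To make that concrete, here’s a minimal, made-up example of a rules file. The exact filename depends on your tool (Claude Code reads a CLAUDE.md at the repo root; Cursor has its own rules files), and every line below is illustrative, not a recommendation:

CLAUDE.md (illustrative)
Stack: TypeScript + Node, strict mode, no any types.
Structure: each feature lives in src/<feature>/ with its own tests and a short README.
Workflow: run npm test and npm run lint before declaring a task done.
Never modify generated code under src/gen/.

A file like this typically gets pulled into the agent’s context automatically, so you’re not re-explaining the same conventions in every prompt.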
Once you have a strong AI workflow:
AI gets the first shot at writing all code. It’s always the starting point.
As you improve, AI writes nearly 100% of all lines of code. Anthropic reportedly writes 90% of code with Claude. Many new YC startups write all code with AI.
You run multiple agents in parallel (eg, on separate containers) because waiting on one agent is too slow. See the sketch after this list.
Or as a good friend likes to say, this is the “Let Jesus Take the Wheel” phase.
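For concreteness, here’s a rough Python sketch of the parallel setup, assuming you’ve already created one git worktree (or container) per task and have a terminal-based agent installed. I’m using Claude Code’s non-interactive print mode (claude -p) as the example command; the worktree paths and prompts are made up, and depending on your tool’s permission settings you may prefer separate interactive sessions instead:

import subprocess

# Hypothetical worktree paths and task prompts; substitute your own.
tasks = {
    "../myrepo-auth": "Implement the login endpoint described in docs/auth.md",
    "../myrepo-billing": "Add CSV invoice export described in docs/billing.md",
}

# Launch one agent per worktree so the tasks run concurrently instead of queuing.
procs = [
    subprocess.Popen(["claude", "-p", prompt], cwd=worktree)
    for worktree, prompt in tasks.items()
]

# Wait for both to finish before reviewing their diffs.
for proc in procs:
    proc.wait()

Separate worktrees or containers keep the agents from stepping on each other’s uncommitted changes.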
Now that you’re a master, your role is to:
Spend more time planning, documenting, and prototyping to reduce revision loops.
Frame work as closed loops. Eg, prompt AI to build an API endpoint that meets success criteria x, y, and z and to iterate until it succeeds. You’re setting up AI so it doesn’t need human intervention (see the sketch after this list).
You write unit tests with AI, but you supervise, since AI tends to write tests it can already pass :)
You spend more time on code review, but that’s because you’re able to deliver more code in less time.
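Here’s a rough sketch of what a closed loop can look like in practice, assuming a Python project with a pytest suite and an agent CLI on the PATH. The claude -p call, the endpoint, and the test path are illustrative, not a prescription:

import subprocess

def tests_pass() -> bool:
    # Success criterion: the test suite is green.
    return subprocess.run(["pytest", "-q"]).returncode == 0

def run_agent(prompt: str) -> None:
    # Illustrative: invoke your coding agent non-interactively with the task.
    subprocess.run(["claude", "-p", prompt])

task = (
    "Add a /health endpoint that returns 200 with a JSON body. "
    "Success criteria: tests/test_health.py passes and lint is clean."
)

# Bounded loop: the agent keeps iterating against the criteria with no human in the middle.
for attempt in range(5):
    if tests_pass():
        break
    run_agent(task + " The suite is currently failing; keep fixing until pytest passes.")

The exact script doesn’t matter; the point is that the success criteria live in the tests and the prompt, so the agent can grind on them without you.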
This interview with the creator of Claude Code is a great watch for advanced tips and the philosophy behind building AI-first.
Bonus points:
Teach others. Model the way. Set the best practices for how AI agents adhere to your company’s standards.
Which tool should I start with? I haven’t tried any of them.
This space moves fast, so this list will be out of date within a couple of months. Here’s what I’d try based on current tools:
Experiment with Claude Code, Cursor, Replit, and v0. Keep an eye on Gemini CLI and all new credible competitors.
I bucket all tools into two categories:
For technical users / for writing production-grade code
For non-technical users / for prototyping
Within bucket 1:
Most popular options: Cursor, Windsurf, Claude Code, Gemini CLI. Cursor and Claude Code are the top choices right now.
There are two dominant UX patterns: IDE-based (eg, Cursor) and terminal-based (eg, Claude Code)
They all use the same models under the hood. Claude Sonnet is most popular, but you can toggle the model.
There are differences in balancing cost, usage, and intelligence:
Cursor puts guardrails in place to lower compute cost and encourage human oversight. Its most popular plan is $20/month.
Claude Code has relaxed constraints, higher intelligence, and higher compute cost. It expects less human oversight. The $100/month Max plan is most common for full-time engineers.
To a degree, it feels like we’re in the days of $0.50 Uber rides because these companies are all eating compute cost for heavy users.
Within bucket 2:
Most popular options: Replit, v0
Replit builds full stack apps, while v0 is designed for frontend prototypes.
Replit is an all-in-one solution for shipping: code gen, turnkey integrations, hosting, auth, security, and more. It’s also dumber than all the bucket 1 options.
Over time, I think these options will ship increasingly complex production ready apps, but they won’t match the intelligence of bucket 1 agents.
How do you expect these tools to evolve over time?
As models improve, they’ll need humans less per feature delivered. Interestingly, this favors tools at both extremes, for non-technical (eg, Replit) and highly technical users (eg, Claude Code).
Tools that encourage human-in-the-loop workflows are at a disadvantage. Eg, this is existential for Cursor, and Cursor is adapting.
It’s hard to say what will win out. I expect a range of options, but the big winners will be at the two extremes.
When should I work within the AI coding agent vs outside like in ChatGPT?
Generally speaking, new feature planning should happen outside of your development environment. Conversational LLMs encourage open-ended thinking. Being untethered from the code helps.
For existing features, it’s your call. Planning in your development environment has the advantage of referencing the code base but can also tie you to sub-optimal legacy choices.
What’s the most surprising thing you’ve found during your AI exploration?
The most capable engineers I know let AI run on “autopilot” more often than other engineers. They’re also more optimistic about its ability to improve over time.
That’s surprising. Most great engineers don’t easily give up control.
In reality, the most capable engineers work the hardest and smartest to set up AI to be successful, so they get the best results.
How should we think about the cost of building with AI?
Engineers, buy whatever helps you. Most plans cost $20–$100/month. If your company won’t pay, it’s still worth it.
Companies should treat AI tools as headcount, not SaaS. AI tooling costs less than 1% of an engineer’s salary (a $100/month plan is $1,200 a year against six-figure fully loaded comp) and should make that engineer at least 2x as productive. It’s a no-brainer.
Final thoughts
To be on the winning side of history, you should view AI as both a threat and an opportunity. It should drive a sense of urgency to up-level your ability via AI. When you do that, you’ll be more in demand than ever.