Every time someone mentions AI and coding in the same sentence, there’s that little voice in the back of your head whispering, “Am I about to be replaced?” The more we hear about AI’s capabilities, the more that voice gets louder. But after working extensively with GitHub Copilot and other AI coding tools, I want to have a frank conversation with you about what’s really happening in our industry. The short version? AI isn’t taking your job … unless you ignore it.
Let me explain why.
The chasm of success
Really good AI is like magic when it works. It seems to effortlessly create quality code that might have taken hours or days to write by hand. I’ve been able to build complex demos in a few hours with minimal prompting. It’s a very powerful accelerator when used correctly, easily giving you a 10x - 20x productivity boost in the right circumstances. It’s honestly become a big part of how I work. With the right prompts, I’ve generated entire projects and applications – thousands of lines of working code – in just a few hours. It even helps me write up ideas and proposals.
Like magic.
In the right hands, that magic is the key to working faster and more efficiently. As you understand how it works, it can even unlock new solutions and ways of thinking about problems.
“Vibe coding”
That same magic often leads developers and non-developers to try “vibe coding” – letting AI write all of the code because it feels fast and efficient, without really understanding what’s being generated. It’s like having a really confident intern who always volunteers to help but sometimes misses the mark in ways you don’t notice until later. You appreciate the help, but you still have to double-check the work. If you don’t, you may find out that what seems to work is actually doing something very different!
Take something as basic as stack-based logic. You remember learning about stacks in your early programming days, right? You had to master that logic – push, pop, last-in-first-out (LIFO). At the moment, most AI models still struggle with building parsing logic that properly manages a stack. They’ll generate code that looks right at first glance, but when you dig deeper, you realize they’ve missed crucial edge cases or implemented the logic backwards. It’s easy enough to correct the code or to provide prompts that help correct the logic, but it still requires your knowledge to understand what’s right and what’s wrong.
It’s not just about picking the right model
Another thing that catches people off guard is that different AI models excel at different things. What works great for generating boilerplate React components might struggle with complex database optimization. You need to know which tool to reach for when, and more importantly, you need to recognize when the tool you’re using isn’t up to the task. Pick the wrong model, and you could be spending hours prompting the code in circles.
This isn’t like choosing between a hammer and a screwdriver. It’s more nuanced than that. Learning which models work well – and recognizing the right choices – is a skill that takes some time to develop. Thankfully, there are articles that can help shorten that learning curve.
When AI gets it “right” but wrong
Here’s where things get really interesting. I’ve seen AI tools complete refactoring tasks that technically work, but the results are questionable. Let me give you an example:
You want to modernize a function. A good developer might replace the old function entirely and update all the calling code to use the new approach. AI might take a different approach. It might create a new function and replace the old function with a “shim” to avoid having to update the remaining code. Technically viable, but it may miss the entire point of the refactoring. In some cases, it might create backup copies of the code or even multiple implementations – one for each refining prompt.
AI can also sometimes modify files that don’t need touching, such as altering unit tests so that they now reflect the mistakes they’ve introduced in the source code. Models will frequently declare a job “done” when there’s still obvious work remaining (and sometimes even cite the remaining work). I’ve even seen AI models ignore direct instructions because they strongly believed a different approach was better.
In short, sometimes it defines success differently that you.
When this happens, you may have to adjust your prompt to be more explicit about expectations (“only modify these files” or “change the method signature and update all calling code”). In some cases, you might need to switch models for that task to get the right flexibility.

The hard-coded trap
AI tools also have a tendency to write code with hard-coded values where any experienced developer would immediately see the need for standard constants or for outputs that might vary in real world use.
You’ll ask for a function that processes user input, and instead of writing something flexible that incorporates the function parameters into the results, it might return code that uses fixed responses. It’s like asking someone to build you a calculator that can process values like 2+2=4 and getting a device that always returns 4. This happens most frequently with larger, more complex logic where the AI loses track of something important in the context.
A good developer sees these patterns and knows when something should be configurable, extensible, or parameterized. As a result, they can recognize when AI may have an incomplete answer. They can then either correct the code or the prompt, depending on which one requires less time.
The complexity ceiling
Here’s something else I’ve noticed: as your interactions with the prompt get longer, AI tools may hit some walls. Especially if you are making changes that span multiple files and require understanding the broader system architecture, they can struggle.
The problem is token limits and context windows. As your context or discussion gets longer, the AI has to start “summarizing” earlier conversations since it can only work with a predefined amount of data at once. During this process, it might lose important details or context. It’s like playing a game of “telephone”, where each iteration loses a little bit of nuance until you’re dealing with solutions that no longer fit the requirements.
Critical Requirements Met:
- ✅ Whitespace preservation verified - The code renders with proper line breaks and spaces.
- ✅ Test compatibility - 412/427 tests pass (15 cosmetic differences only; the expected and actual results are different)
In this case, you may need to break your task into smaller, more manageable pieces. For each piece, reintroduce the key requirements and steps to ensure that the AI has all the information it needs to generate a complete and accurate response. For more continuous AI projects – like using agents that process in the background – try giving it the incremental tasks to perform, being very clear on success criteria a nd when to proceed. For example, “Refactor each method to use string inputs. After refactoring one file, run the unit tests and validate the output. If any test is failing, the code is not working and must be corrected before continuing. When all tests pass …”. You can even include “the most important thing for this task is …” to help the context be focused on a specific outcome.
Why companies that bet on AI-only will lose
I have heard companies ask if AI means they can eliminate more developers. think this is where some companies are making a huge mistake. They look at these AI tools and think, “Great! We can replace half our development team and save a fortune.”
This is like thinking you can replace all your chefs with really good recipe websites. Sure, you might be able to follow directions and produce something edible, but when things go wrong —- and they will go wrong —- you need someone who actually understands cooking. When something tastes or smells off, they will recognize it and understand how to resolve the situation.
Companies that downsize their development teams in favor of AI are setting themselves up for a world of hurt:
- Quality problems - Without experienced developers to review and guide the AI-generated code, you’ll end up with solutions that work… until they don’t.
- Security vulnerabilities - AI doesn’t have the security and operations mindset that comes from years of experience with how things can go wrong.
- Technical debt - AI focuses on solving immediate problems, but does not consider long-term maintainability or business goals.
- No talent pipeline - If you eliminate junior developer positions, where are your future senior developers going to come from? Companies often assume that they can just hire more senior staff, but they have to come from somewhere. The top companies grow and mature their future staff internally.
Meanwhile, their competitors who embrace AI as a productivity multiplier will be delivering faster, more secure, and higher-quality solutions. Rather than considering a 10x productivity increase as a reason to eliminate 9 developers, they view AI as a way to turn 10 developers into 100. That’s more features, more security, and faster deliveries from happier, more productive developers.
The ones who will be left behind
Several years ago there was a company moving into the cloud from on-premises data centers. One of their operations team asked me for their future. He explained that his role was to watch on-screen dashboards. If anything turned red, he would begin the process of paging appropriate staff. He didn’t want to learn about the cloud – he had always worked monitoring data centers on premises. Despite the offer of training, he refused.
A year later, the company released him.
Now we’re seeing the pattern start to repeat again. Developers who refuse to engage with AI tools are also setting themselves up for trouble. If you’re still writing boilerplate code by hand when your colleague next to you is using AI to generate it in seconds, guess who’s going to be more productive? Who will be promoted, and who may be replaced?
The developers who thrive in this new world will be the ones who:
- Learn to effectively direct and collaborate with others and with AI tools
- Develop the skills to quickly review and improve AI-generated code
- Understand the limitations of AI and how to use prompts effectively to work around (or take advantage of) them
- Understand when to use AI and when to do the work themselves
- Can recognize good solutions from poor ones, regardless of who (or what) generated them
What the new DevOps world looks like
So what does this all mean for DevOps? I think we’re heading into an AI-augmented world where every part of the software development lifecycle gets elevated:
Planning - Using AI to better understand requirements and potential pitfalls, then using AI to define plans and document results.
Development - Developers using AI to generate code faster, but with human oversight ensuring quality and architecture alignment.
Testing - AI helping to generate test cases, identify edge cases, and even create automated tests that humans might miss. AI using tools to analyze, evaluate, and suggest improvements.
CI/CD - AI monitoring pipelines, identifying patterns in failures, and suggesting optimizations. AI reviewing the deployment process to recognize and address anomalies.
Operations - AI analyzing logs and metrics to spot problems before they become outages, predicting capacity needs, and automating responses to common issues.
Security - AI scanning for vulnerabilities, identifying suspicious patterns, and helping with threat response.
The key word here is “augmented.” AI isn’t replacing the human element – it’s making the human element more powerful. And it’s changing some of the sapects of what they focus on for work.
The bottom line
At the end of the day, AI tools like GitHub Copilot are incredibly powerful, but they’re also tools, not replacements. The companies that understand this distinction and invest in upskilling their teams will dominate their markets. The ones that try to replace human expertise with AI alone will struggle with quality, security, and innovation. Over time, understanding of AI will become an expectation, not an option.
And for you personally? Embrace these tools, but don’t become dependent on them. Learn to use them effectively, but maintain your critical thinking skills. The future belongs to developers who can dance with AI, not developers who either fear it or blindly trust it. It’s a copilot, so it needs guidance and direction.
The new DevOps world is going to be amazing for those who approach it thoughtfully. AI will handle the repetitive tasks, freeing us up to focus on architecture, problem-solving, and innovation. Reaching that future requires developers who understand how to make the most of both the power and the limitations of these tools.
So your job isn’t disappearing. It’s evolving. If you can evolve with it, you’ll find yourself more valuable than ever. Are you ready to embrace the AI-augmented future of DevOps?
(And yes, I used AI to help me with this article and the illustrations!)