How do you envision the role of AI in software development evolving in the future?
Last Updated: 21.06.2025 09:31

In the past 3 years there have been 3 pivotal moments:
the introduction of code completion tools (GitHub Copilot etc.), which liberate devs from memorizing precise syntax,
conversational LLM agents (ChatGPT, Claude, etc.), which can accelerate research, simulate brainstorming and perform small technical tasks,
agent-centric IDEs (Cursor, Windsurf, Claude Code…), which empower agents to reason over an entire codebase, provide more actionable answers and perform more useful tasks.
We are entering a new phase of uncertainty. In the late 2010s/early 2020s (the “pre-Copilot era”), the developer experience was consolidating around a few tools with wide adoption. Now the market for these tools is fragmented again.
We’re still waiting to see how the dust is going to settle, IMO.
The trends I expect to continue are:
developers will spend less time typing code and more time thinking about code, i.e. describing their projects and discussing what they want to achieve with an agent, which requires reasoning about and formalizing what they want to accomplish.
a larger share of the code in codebases is going to be generated. This doesn’t mean that a large portion of the tasks once handled by humans can be entirely delegated to AI; rather, in a typical commit, an increasingly large proportion of the changed lines of code will be written automatically.
developers will spend more time on quality assurance, both upstream and downstream: thinking about how a piece of code should integrate into the larger whole, what the signals are that it’s broken, and what logs, testing, monitoring and alerting to put in place (see the sketch after this list).
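To make the quality assurance point concrete, here is a minimal, purely illustrative Python sketch of those upstream and downstream signals: the function logs what it does, the tests pin down the expected behaviour, and a tiny threshold function stands in for an alerting rule. All names (process_order, error_rate_alert, etc.) are invented for this example, not taken from any real codebase.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")


def process_order(quantity: int, unit_price: float) -> float:
    """Return the total price of an order; reject non-positive quantities."""
    if quantity <= 0:
        # Upstream signal: a broken caller shows up in the logs, not just in a stack trace.
        log.warning("rejected order: quantity=%d", quantity)
        raise ValueError("quantity must be positive")
    total = quantity * unit_price
    log.info("processed order: quantity=%d total=%.2f", quantity, total)
    return total


def error_rate_alert(errors: int, requests: int, threshold: float = 0.05) -> bool:
    """Downstream signal: stand-in for an alerting rule on the error rate."""
    return requests > 0 and errors / requests > threshold


# Tests that make the expected behaviour explicit (runnable with pytest).
def test_process_order_computes_total():
    assert process_order(3, 2.5) == 7.5


def test_process_order_rejects_zero_quantity():
    try:
        process_order(0, 2.5)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass


def test_error_rate_alert_fires_above_threshold():
    assert error_rate_alert(errors=6, requests=100)
    assert not error_rate_alert(errors=1, requests=100)
```

In a real project the alerting rule would of course live in a monitoring system rather than in application code; the point is only that the signals of breakage are decided alongside the code itself.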
In the “pre-Copilot era” there was a general push towards code quality, in the sense that developers were nudged into writing code that was easier for their fellow developers to maintain. Code quality is going to evolve into: code that AI agents find easy to work with. Those two things are not incompatible, but it means things like more comments and more tests, as sketched below.
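As a rough illustration of what “code that AI agents find easy to work with” could mean, here is a hypothetical before/after in Python: both functions compute the same thing, but the second spells out intent, units and the valid input range in names, types, a docstring and comments, so an agent (or a colleague) does not have to infer them from call sites. Every identifier here is made up for the example.

```python
from dataclasses import dataclass


# Terse version: correct, but intent, units and the valid input range are implicit.
def d(p, r):
    return p * (1 - r)


# More agent-friendly version: the same logic with the assumptions written down.
@dataclass
class Discount:
    """A relative discount; rate=0.2 means 20% off."""
    rate: float  # expected to lie in [0, 1]


def discounted_price(price_eur: float, discount: Discount) -> float:
    """Return the price in euros after applying the discount.

    Raises ValueError if the discount rate is outside [0, 1], since such a
    rate is a caller bug rather than a valid pricing rule.
    """
    if not 0 <= discount.rate <= 1:
        raise ValueError("discount rate must be between 0 and 1")
    return price_eur * (1 - discount.rate)


# Both versions agree on the result; only the explicitness differs.
assert d(100, 0.25) == discounted_price(100, Discount(rate=0.25)) == 75.0
```

The specific style matters less than the principle: the information an agent needs is stated in the code rather than implied.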
I think that “vibe coding”, i.e. giving a brief description of what you want to achieve and getting fully functional code as a result, is going to have very limited impact. It works, yes, but only in very specific cases; it doesn’t scale well, and the economies it creates are not worth the trouble in the general case.