How I Stopped Worrying and Learned to Love the Boom
Part 1: The Trajectory of AI LLM Tools in Software Development
"I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal." - HAL 9000, in 2001: A Space Odyssey
The Main Idea
It's not coming, it's here. Software development leaders must develop and maintain a deep understanding of the capabilities, limitations, and potential of AI Large Language Models (LLMs) in software development. The breathtaking pace at which these tools are evolving demands that we learn, respond, and adapt our decision-making cycles across virtually every aspect of software production. Are they to be used as tools or collaborated with as partners?
Imagining the Strategic Frontier
Here's a vignette of how I imagine the future for a software development team that has learned to leverage the opportunity, with you in the driver's seat.
To set the stage, let's use an analogy of how we iteratively generate images with AI today. You start with a goal in mind, and through iterations, ideas morph and your prompts build upon the new information introduced by AI. With each iteration you get pulled a bit further into a surprising mélange of shared ideas, the boundaries between them ever fuzzier. By the time you select a winner, you stop for a moment and wonder to what degree your original vision was reshaped by the contributions of AI. You didn't just get work done faster and at lower cost; you got ideas out of the exchange. Information was created that didn't come from you. Many of the ideas were discarded, but ultimately you netted new information that can't be fully attributed to your prompt writing.
Now picture how this might work with software generation. You provide a couple of prompts: one for the app and one giving context for the backing data. A minute passes and four fully formed repos are built and deployed into isolated runtime environments. Tests, test data, users, and documentation are created. You tap away, play with the four solutions, then discard the ones that missed the mark. You pick two and update your prompt to suggest a merger of two ideas into one. A minute later, the next iteration of four apps is presented. This time, the AI took an unexpected left turn. It may have misunderstood you, but it cheerfully unveils a dazzling departure. Intrigued, you pin that one and start to layer on the next batch of ideas, pressing the system to suggest features from complementary product types.
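To make that loop concrete, here is a minimal sketch of the iteration cycle in Python. Everything in it is hypothetical: the generator, sandbox deployment, and prompt merging are stubbed placeholders for end-to-end tooling that does not exist in this form today.

```python
# A minimal sketch, not a real system: generate_variant, generate_round, and
# merge_prompts are stand-ins for hypothetical future AI tooling.
from dataclasses import dataclass

@dataclass
class Variant:
    idea: str
    repo_url: str
    sandbox_url: str

def generate_variant(app_prompt: str, data_prompt: str, seed: int) -> Variant:
    # Placeholder for "build a repo with tests, data, users, and docs, then deploy it."
    return Variant(
        idea=f"interpretation {seed} of: {app_prompt}",
        repo_url=f"https://git.example/variants/{seed}",
        sandbox_url=f"https://sandbox.example/{seed}",
    )

def generate_round(app_prompt: str, data_prompt: str, n: int = 4) -> list[Variant]:
    # One round: n independent interpretations, each in its own isolated sandbox.
    return [generate_variant(app_prompt, data_prompt, i) for i in range(n)]

def merge_prompts(base_prompt: str, keepers: list[Variant]) -> str:
    # The next prompt builds on the previous one plus the ideas worth keeping.
    kept = "; ".join(v.idea for v in keepers)
    return f"{base_prompt}\nMerge these directions into a single app: {kept}"

app_prompt = "A team retrospective app with anonymous voting"
data_prompt = "Historical retro notes, action items, and sentiment tags"

round_1 = generate_round(app_prompt, data_prompt)
keepers = [round_1[0], round_1[2]]  # the two that survived a hands-on test drive
round_2 = generate_round(merge_prompts(app_prompt, keepers), data_prompt)
```

The interesting part isn't the glue code, of course; it's that the human stays in the selection seat while the system does the building.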
At this stage of the hypothetical evolution of AI LLMs, the boundaries between us and our tools have shrunk to a vanishing point.
How far off is this scenario? Is it hyperbole or a naïve underestimation?
The Hype Cycle Undersold Us, For Once
Generations of software developers have enjoyed a rather modest amount of noise from software tool and technology marketers. Think about the big moments in software development history. Big ideas such as IDEs, version control, containerization, open source, public cloud, PaaS, and programming frameworks emerged slowly, with incremental changes dribbling out over multi-year cycles. The IDE of today (sans Copilot) looks nearly identical to the IDE of 10 years ago.
We are understandably wary of new fads after so many fizzled hype cycles. Blockchain, VR, AR, and IoT have wolfed down investment capital and returned only a tiny fraction of it in value.
This all seems rather quaint when viewed through the lens of the current juggernaut of AI LLMs. The spokespeople of the emerging leaders in this segment seem a bit puzzled and uncertain about where this is going. The braggadocio of hype marketers has been replaced with a chilling form of humility mixed with a sense of apprehension. Are they afraid they got it right this time? If they did get it right, how can we be the beneficiaries rather than the victims of this mega-trend?
"Now, look, boys, I ain't much of a hand at makin' speeches. But I got a pretty fair idea that something doggone important is goin' on back there." - Major T.J. "King" Kong, in Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb
Is “Partnership” the Right Word for Where This is Going?
A partnership denotes a relationship in which both humans and AI contribute to achieving common goals and creative fluidity flows in both directions. The nature of this creativity isn't relevant to the concept of partnership; it is in the incremental information exchange between the parties that new value is created. There is, in a sense, mutual benefit in this partnership: the humans make progress in efficiently developing software, and the AI LLM feeds itself in an unending cycle of learning.
The bar must be high to be called a partner. The relationship isn’t there today, but I believe it’s heading in that direction. Before we look at the evolution towards strategic partnership, let’s go back to our image generation analogy to discuss the iterative accretion of ideas that should emerge from a partnership.
AI-Powered Image Generation and the Information Loop
Our experiences using AI image generation tools, even in their relatively nascent state, vividly illustrate this evolution and the collaborative process it supports. They demonstrate a dynamic interplay of information exchange: we input a prompt, the AI generates an image, and the results often inspire us to refine the prompt or explore new ideas, leading to a richer, more creative output. Think about how AI influences that refinement cycle. The result is ultimately some degree of departure between our original idea and the final chosen image. This cycle of interaction and mutual influence mirrors the way I expect AI LLMs to evolve beyond their current state and towards strategic partnership.
When Does Information Become Ideas?
To be considered a strategic partner, AI must contribute information that shifts the strategic direction of a project. It's critical that the information coming from the LLM be seen as ideas the human didn't have in the first place, and wouldn't have had without the introduction of new insights or perspectives. The human interprets this new information in the context of their strategic goal, and this drives them in a slightly adjusted direction. Their next prompt builds upon the combination of the previous prompt plus the new information. The subsequent combined prompts and the images co-evolve, yielding still more opportunity.
The Trajectory: Toys, Tools, and Tactics in Software Development
Starting from where we are today and projecting forward in time, let’s consider the trajectory of utility when engaging with AI LLMs and predict if a partnership may be coming.
This trajectory, unsurprisingly non-linear, starts slowly, gradually accelerates as basic tools deliver utility, and then evolves exponentially to assist in delivering tactical value. We don't know when, but ultimately these AI partners will become entrenched in advanced organizations, delivering complete strategic initiatives. At that stage, continuing to call them tools will sound trite and antiquated.
Toy
A toy feels novel. It's engaging and fun to play with. But beyond entertainment and early probing research, it has low value to software development teams. Since these early investigations consumed effort only to yield the correct assessment that the technology wasn't yet helpful, they were arguably value-subtracting. AI LLMs at this stage were a bit like Tesla's first attempts at "self driving": a promise of utility that didn't deliver, just brief moments of joy interrupted by frustrating setbacks.
Basic Tool - Coding Assistant
A tool has true utility. It helps accelerate and simplify completion of a single activity. But utterly lacking context, memory, or an ability to expand from point A to B, it has the heft of a dumb but useful hammer. When tools do their job consistently well, people re-engineer their work processes to harvest maximum utility. But there are limits.
During this stage, quick leaps forward in the capabilities of these tools are beginning to get organizations' attention. Pilot programs are widening and developers are reporting some productivity gains. As organizations work to understand the capabilities of AI LLMs, they struggle to develop best practices for their use and to imagine how to integrate these tools into their training and development processes. But since workable code is being generated, we are entering a period of rapid utility increase. Rather than being considered partners, they are more apt to be called assistants at this stage. Scrolling nostalgically back through my conversations with my coding assistant, I see that roughly one in three questions returned incorrect information, often with hallucinated code sections that were non-functional.
“You are correct. My apologies for the confusion...” - ChatGPT 3.5, June 2023
Tactical Coding Collaborator
What is tactical software development? Tactics are the discrete engagements we undertake on the path toward our strategic goals. They are the activities required to tackle small chunks of the bigger problem which, when constructed and aligned correctly, push the product and company towards the strategic goal. Each step is guided by a human who understands the overall strategy. Partner isn't the right term for this stage, since these systems lack autonomy and the ability to predict and suggest what should be done next.
This phase represents a tipping point where the practical applications of AI LLMs in software development become abundantly obvious. The productivity gains from automating code generation, bug fixing, documentation, and complex tasks like database design and complete API test coverage begin to be realized. This leads to significant improvements in efficiency, reductions in cost, and opportunities for humans to spend more time on strategic work.
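As one small illustration of what tactical delegation might look like, here is a sketch of asking a model to draft API test coverage from an OpenAPI spec. The llm_complete callable is a placeholder for whatever provider SDK a team actually uses, and the prompt wording is purely illustrative.

```python
# A sketch only: llm_complete stands in for a real LLM client call, and a human
# still reviews whatever comes back before it is merged.
import json
from typing import Callable

def draft_api_tests(openapi_spec: dict, llm_complete: Callable[[str], str]) -> str:
    """Ask the model for pytest-style tests covering every path and verb in the spec."""
    endpoints = [
        f"{verb.upper()} {path}"
        for path, verbs in openapi_spec.get("paths", {}).items()
        for verb in verbs
    ]
    prompt = (
        "Write pytest tests using the requests library that cover these endpoints, "
        "including success and 4xx cases:\n"
        + "\n".join(endpoints)
        + "\n\nFull OpenAPI spec:\n"
        + json.dumps(openapi_spec, indent=2)
    )
    return llm_complete(prompt)
```

The point isn't the twenty lines of glue; it's that a chore like exhaustive test coverage becomes something you delegate and review rather than write by hand.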
Strategic Coding Partner
Eventually, the utility of AI LLMs in software development approaches the strategic frontier.
Multiple AI-powered partners will collaborate within the human-driven software development environment. Perhaps one is in the role of a creative supporter of ideation, pushing the limits of creativity. The other is a devil's advocate, seeking ways to counter-argue the various ideas in the room and acting as a conservative counter-balance. We present a strategic product goal and they spin up ideas and counter-ideas. We iterate and see our vision refined and tuned with their help. Our partners never sleep. We wake up in the morning and check in with them. Overnight, PRs await our final check, with variations on ideas there for the picking. Security vulnerability patches, dependency library updates, and new feature ideas all stack up in our PR review list. We glance at the prioritized issue backlog and notice that it has been re-ranked based on data from an attempted break-in by a bad actor (probably also AI). We delegate tactical decision making to our contrarian, looking to toss cold water on the ideas submitted by our cheerful AI partner. We see comments and counter-comments from them in our issues. We make the call, but are thankful for the perspectives of our partners. We pick a winner, see the PR produced, and in moments it's deployable thanks to solid test automation. Our AI helper spins up an A/B test and suggests a plan, and after we give the green light, our partner monitors results, letting us know when things hit statistical significance.
Lather, rinse, repeat.
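Even the last step in that scene is ordinary statistics under the hood. Here is a minimal sketch of the kind of check such a partner might run continuously, a simple two-proportion z-test; the conversion counts are invented for illustration.

```python
# A minimal sketch of an A/B significance check; the counts below are made up.
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z, two-sided p-value) comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                    # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))      # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))       # normal-approximation p-value
    return z, p_value

z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
if p < 0.05:
    print(f"Variant B looks significant (z={z:.2f}, p={p:.4f})")  # time to wake the human
```

In the scenario above, the human never runs this by hand; the partner does, and only surfaces the result when a decision is actually needed.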
Then What Do People Do?
What will we humans spend our time on during this stage? I see the collaboration as an engaging creative experience, taking up a busy workday iterating on ideas, developing variations, deploying, and testing. We will require fewer software engineers as we redeploy capital towards tooling, first at the center and then around the edges of the full software development cycle. I imagine that a company spending 50% of opex on engineering talent today may cut that by half within the next several years.
Supply and Demand: Pricing the Future of Talent
Thanks to the expected higher productivity (more product code deployed per developer per unit time), developer staffing requirements will drop during the tactical stage of the evolution of AI LLMs. AI LLMs don't need to be providing strategic value for us to see this shift. Salaries will follow staffing levels: lower demand and greater developer supply mean a lower price per developer. The shift will take longer to reach top-tier product developers with strategic skills. Developers who respond by shifting their skills towards deep capability with AI tools will continue to command a premium, but that pool of talent will be much smaller, and the divide in pay will be great. Former developers without strength in this area will be analogous to the previous generation of data entry techs: a low-priced commodity. New jobs will be created, but as in a factory run partially by robots, the human caretakers of the robots aren't the same staff they replaced.
"Most people don't believe something can happen until it already has. That's not stupidity or weakness, that's just human nature." -Jurgen Warmbrunn, World War Z
Conclusion: Remember the Humans
200 years ago, the military theorist Carl von Clausewitz wrote a series of books on military strategy, published posthumously. In On War, Clausewitz breaks with tradition and shifts significant focus toward human ingenuity and creativity, and away from tools, blueprints, and arm's-length attempts at rote doctrine. He suggests that talent (he uses the term genius) is the greatest driver of outcomes. It's not the tools, it's the humans, he tells us. And not just any humans: he means those who are strategically capable.
“It was a case of handling a material substance, a unilateral activity, and was basically nothing but a gradual rise from a craft to a refined mechanical art. It was about as relevant to combat as the craft of the swordsmith to the art of fencing. It did not yet include the use of force under conditions of danger, subject to constant interaction with an adversary, nor the efforts of spirit and courage to achieve a desired end.” - Carl von Clausewitz, On War
Here Clausewitz distinguishes between the evolution of military technology, the human genius behind the creative use of those tools, and the crucible of decision-making pressure that comes with pursuing a strategic goal under real-world friction. Software development tooling has risen to a "refined art." Today, we wield the tools of AI, and it's our ability to harness them now that gives us the opportunity to bring these tools into the realm of strategic relevance. Human genius will navigate the way forward.
Next Up
Keep reading Part 2: Ten Leadership Adaptation Imperatives