Introduction
In my most recent post on navigating the change curve with GenAI, I mentioned that, in its current state, GenAI is an abstraction layer. It’s a tool that takes us one step further away from manipulating the bits directly. I’ve been reflecting on this more over the past few weeks and I’d like to dig a little deeper into the idea of abstraction layers, their purpose, and their relationship with GenAI.
In this reflection I argue that humans need abstractions to make sense of a complex world. I build on this by introducing the idea that programming languages are abstractions that help us extract value from computers, and I make the case that GenAI is the latest abstraction to serve that purpose. I then delve into the idea that GenAI doesn’t share our cognitive limitations and therefore doesn’t need the same abstractions we do. Lastly, I explore how to identify the boundary between AI and AGI through the lens of abstraction layers.
Cognition and abstractions
The history of the software engineering industry is a history of abstraction layers. Each abstraction layer has simultaneously let us accomplish more with less, and moved us one additional step away from the bits. Essentially, abstraction layers have let us find more and more effective ways to get computers to do things for us.
Take programming languages, for example. Initially, bits were manipulated directly with punch cards and machine code. After machine code came assembly language, which abstracted away lower-level details and provided a few basic instructions. After that came languages like Fortran, Lisp, and COBOL, which abstracted away the underlying hardware and provided human-readable variable names and functions. Then came C, C++, Java, and scripting languages, all of which accelerated the pace of innovation by adding abstraction layers. For example, Java abstracts away memory management, freeing developers to focus on solving business problems. You see the same evolution within a company as well: as a company evolves, complex cross-cutting concerns tend to be pushed down lower into the organization. In software companies, these concerns land with the platform teams.
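To make the payoff of each layer concrete, here is a tiny, single-language sketch. Python is used purely for readability; the real historical jumps were between machine code, assembly, and high-level languages. The first version manages the bookkeeping by hand, the second lets the language do it.

```python
# Toy illustration of abstraction levels within one language (Python),
# standing in for the much larger jumps between machine code, assembly,
# and high-level languages.

data = [3, 1, 4, 1, 5, 9]

# Lower-level flavor: manage the index, the accumulator, and the loop by hand.
total = 0
i = 0
while i < len(data):
    total += data[i]
    i += 1

# Higher-level flavor: the language abstracts the iteration away entirely.
assert total == sum(data)
print(total)
```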
All of these abstractions have helped us focus more and more of our time on solving problems, creating more and more value for each hour of input along the way. Can you imagine writing a modern-day application in assembly or machine code? Of course not; the only reason to do that would be for entertainment.
I would argue that the main reason we need these abstraction layers is the limits of our cognitive abilities. We need abstractions so that we can package up complexity into manageable concepts; otherwise we would be stuck in analysis paralysis, unable to make decisions efficiently. We simply can’t process and make sense of the countless micro-details that are part of everyday life. By packaging complex concepts and functionality into an abstraction layer, we can ignore the underlying complexity and free our cognitive space for other things. As I argued in my previous post, GenAI is yet another abstraction, adding another layer between us and the bits. Instead of writing code by hand, we can now prompt GenAI and it will write the code we want, in any language we want.
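As a concrete (if simplified) sketch of that new layer, here is roughly what prompting looks like in code, using the OpenAI Python client as one possible backend; the model name and prompt below are illustrative assumptions, not recommendations.

```python
# A minimal vibe-coding sketch: describe what you want in plain English
# and let the model produce the code. Assumes the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY in the environment; the
# model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a Python function that takes a list of order totals "
    "and returns the average, ignoring any negative values."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[{"role": "user", "content": prompt}],
)

# The "program" we care about is the natural-language prompt above;
# the generated code is just the artifact we happen to run.
print(response.choices[0].message.content)
```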
Let’s summarize what we’ve discussed so far:
- Humans need abstractions to make sense of a complex world
- Programming languages offer abstractions that have allowed us to tell the computer what to do more efficiently
- GenAI is a new abstraction layer that gives us the ability to tell the computer what to do with natural language prompts
Implications for “vibe coding”
Vibe coding refers to the practice of using natural language prompts to generate code, letting developers focus on the “vibe”, or the overall feel and functionality they want to achieve, rather than getting bogged down in the syntax and specifics of programming languages. But if programming languages were created to simplify human-computer interaction, and we can now interact with the computer through natural language prompts, what are the implications for programming languages as we know them today? To help understand what the future might look like, let’s step into the “mind” of AI.
Let’s do that by contemplating a few questions. What would the world look like if we humans had perfect memories, could answer complex questions in nanoseconds, didn’t have emotions, and didn’t need to sleep? Would we still need maps that remind us of every turn on the way to a destination? Would we have needed to create programming languages to abstract away lower-level computer resources? What would our abstractions look like in this strange new world? What would our conversations be like? If we had infallible memories, never forgetting a single detail, and could communicate instantly with every other human on the planet, what would that look like? If programming languages and their respective abstractions exist to aid humans, would GenAI still need programming languages?
When it comes to vibe coding, the objective is to focus more on what you want to achieve and less on the technical details. But if programming languages were created by humans to help us navigate our own cognitive limitations, and AI doesn’t have those same limitations, is using natural language to write code really the most effective way to achieve our objective? I think there is a reason GenAI writes such messy code with no apparent regard for engineering excellence: programming languages, in their current form, aren’t the most natural and effective way for an AI to achieve the stated objective. My hunch is that the next abstraction layer to emerge won’t solve for our human cognitive limitations; it will be built by AI to solve for the limitations we’ve imposed on it. What does this mean for vibe coding? My sense is that vibe coding won’t be a durable and lasting way to achieve our objectives.
The evolution of abstraction layers
So what new types of abstraction layers might emerge, and how do they relate to AGI? It’s difficult to say for sure, but if AI doesn’t share our human cognitive limitations, the one thing we can count on is that they will look very different from what we’re used to. When it comes to programming languages, perhaps instead of a fixed set of instructions like in traditional code, a system might reason through a feedback loop of goal setting and action simulation, choosing actions based on learned reward patterns. Such a system would be inherently dynamic, continually optimizing its reward function.
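As a toy sketch of what such a loop might look like (entirely hypothetical; the goal, reward function, and actions below are stand-ins, not a description of any real system), consider an agent that proposes candidate actions, simulates their outcomes, and keeps whichever scores best against its reward function.

```python
import random

# Hypothetical goal/simulate/act loop driven by a reward signal.
# The goal, reward function, and actions are all illustrative stand-ins.

def reward(state: float, goal: float) -> float:
    """Higher is better: negative distance from the goal."""
    return -abs(goal - state)

def simulate(state: float, action: float) -> float:
    """Predict the next state if we took this action."""
    return state + action

def choose_action(state: float, goal: float, candidates: list[float]) -> float:
    """Pick the candidate whose simulated outcome scores highest."""
    return max(candidates, key=lambda a: reward(simulate(state, a), goal))

state, goal = 0.0, 10.0
for step in range(20):
    # Propose a handful of candidate actions, simulate each, keep the best.
    candidates = [random.uniform(-2, 2) for _ in range(8)]
    action = choose_action(state, goal, candidates)
    state = simulate(state, action)
    if abs(goal - state) < 0.1:
        break

print(f"reached state {state:.2f} after {step + 1} steps")
```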
Other potential evolutions might involve a convergence of data and code. This would produce dynamic entities that merge behavior and state, resulting in systems that more closely resemble organisms in the real world, adapting and changing as the environment around them changes. And like a living organism, perhaps what’s happening inside the system is invisible to us, like a black box. In this world, debugging becomes more like brain science: we’ll need new tools, analogous to X-rays and MRIs, to get a sense of what’s happening in the system. And instead of interpreting logic and syntax, we’ll have to identify patterns, weights, and probabilities. Understanding the system becomes less about reading code and more about observing behavior and inspecting inputs and outputs. Early in the development process we’re likely to spend more time considering the right reward functions and less time thinking about control flow.
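To give a feel for that shift, here is a small hypothetical sketch: we treat the system as an opaque function and characterize it purely by probing inputs and summarizing the outputs, rather than by reading its logic.

```python
import statistics

# Hypothetical sketch: learn about an opaque system only by probing inputs
# and observing outputs, much as we might inspect a trained model whose
# internals we can't meaningfully read.

def black_box(x: float) -> float:
    """Stand-in for a system whose internal logic is invisible to us."""
    return 3.0 * x + 0.5 if x >= 0 else 0.1 * x

# Probe a range of inputs and record the behavior.
probes = [x / 10 for x in range(-50, 51)]
outputs = [black_box(p) for p in probes]

# Behavioral "diagnostics": instead of reading code, we look for patterns.
print("output range:", round(min(outputs), 2), "to", round(max(outputs), 2))
print("mean response:", round(statistics.mean(outputs), 2))
print("largest jump between neighboring probes:",
      round(max(abs(b - a) for a, b in zip(outputs, outputs[1:])), 2))
```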
In terms of identifying the boundary between AI and AGI, I think a good litmus test is how well we can reason about the black box. Once AI is building abstractions that exceed our cognitive limits, we have likely crossed the boundary into AGI. For example, if systems are inherently dynamic and evolving independently as they optimize their various reward functions, the pace of change within them will likely be rapid: constantly shifting like water in the ocean, never fully static. Making sense of such a complex system will be difficult, if not impossible. In general, I think when we collectively reach a point of confusion and can no longer keep up with the pace of change, we’ve likely crossed the AGI boundary. Like many things in life, I suspect the transition from AI to AGI will involve many small incremental changes over time, eventually culminating in the realization and acceptance that AGI has arrived.
Conclusion
I find exploring answers to these questions fascinating. It’s unclear what the future holds, but I’m excited to see what new abstractions unfold as things evolve. My sense is that vibe coding will be a short-lived fad that leads towards a more general “vibe prompting” approach to solving complex real-world problems. Perhaps GenAI will be able to create abstractions for complex challenges like climate change, geopolitics, or other societal issues. The question is, will we be able to understand those abstractions? What do we do when we don’t understand how GenAI creates these abstractions and solves these problems? Does it matter? How much trust should we put in things we don’t understand? Answering these questions is beyond my cognitive limits.