AI is eating software

In 2011 Marc Andreessen wrote a now-famous essay, “Why Software Is Eating the World”, arguing that software would eat the world. In his words: “we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy.”

My theory is that software did eat much of the world, but now AI models are going to eat the software. More precisely, they will eat the interfaces between us and the software, abstracting applications away behind a layer of intelligent agents. They will do this because AI will simply be an easier, more efficient way to get things done.

This isn’t unprecedented. Computing has always progressed through abstraction. Since the 1950s, we’ve steadily abstracted away layers of complexity. Programmers no longer write in machine code; they use high-level languages. They don’t manually manage memory or handle low-level I/O. Virtual machines abstracted away specific CPU architectures. Cloud platforms like AWS abstracted away physical servers. Kubernetes and Terraform abstracted away entire datacenters. At each level, we gained productivity by forgetting how the layer below actually worked.

The same pattern has played out for end users. In the 1990s, computer users had to think constantly about the machine’s constraints: storage space, memory, file formats, driver compatibility. Today’s users think almost exclusively about tasks and outcomes: edit a photo, share a document, join a call. The plumbing is invisible.

This abstraction has accelerated beyond computing. We’ve moved from planning logistics to expressing intent. We used to need shopping lists, cash on hand, paper maps, and detailed plans. Now we simply state what we want, “I need this item,” “I want to go there,” and the system invisibly coordinates global logistics networks, financial rails, and satellite navigation to make it happen.

Cars follow the same trajectory. There was a time when drivers needed to understand how engines worked and how to repair them. Today most of us can’t change our own oil. The endgame is Waymo: we get in and tell it where to go. Driving itself is abstracted away.

The AI layer we’re building now is the next step in this progression. And there’s reason to believe this transition will happen much faster than previous ones, because AI is improving at unprecedented speed.

Consider what’s already happening with Excel. We can all use it a little, but Excel is famously difficult to master. If you invest months learning its advanced features, you can earn good money solving complex problems for companies. But with AI assistance we can now use Excel in sophisticated ways without really knowing Excel very well at all. Nate Jones demonstrates this well in “Excel AI Will Replace Finance Teams by 2026, Here’s Why (And What to Do)”.

This seems great for Excel: more users can access its power. But there is an uncomfortable question. Once AI models sit between us and spreadsheets, why does it matter whether we’re using Excel, Google Sheets, or an open-source alternative? If we always ask the AI to “create the monthly P&L” and it handles the details, we will stop caring which spreadsheet application it uses. The task is what matters. The tool becomes invisible.

This is abstraction in action. Excel doesn’t disappear, the functionality remains necessary, but the interface, the learned expertise, the user habits that create Excel’s moat… those start to dissolve.
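To make the abstraction concrete, here is a minimal sketch of the pattern: the user expresses intent, and an agent layer picks whichever spreadsheet backend is cheapest or most available. Every name here (`SpreadsheetBackend`, `build_monthly_pnl`, the per-call costs) is invented for illustration; real agents would call real application APIs.

```python
# Hypothetical sketch: once an agent layer owns the task, the backing
# spreadsheet application becomes a swappable implementation detail.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SpreadsheetBackend:
    name: str
    cost_per_call: float                      # assumed pricing, for illustration
    build_report: Callable[[dict], str]

def excel_pnl(data: dict) -> str:
    return f"P&L built in Excel: revenue={data['revenue']}, costs={data['costs']}"

def sheets_pnl(data: dict) -> str:
    return f"P&L built in Google Sheets: revenue={data['revenue']}, costs={data['costs']}"

BACKENDS = [
    SpreadsheetBackend("Excel", cost_per_call=0.02, build_report=excel_pnl),
    SpreadsheetBackend("Google Sheets", cost_per_call=0.01, build_report=sheets_pnl),
]

def build_monthly_pnl(data: dict) -> str:
    # The agent chooses on price/availability, not on the user's learned habits.
    backend = min(BACKENDS, key=lambda b: b.cost_per_call)
    return backend.build_report(data)

print(build_monthly_pnl({"revenue": 120_000, "costs": 90_000}))
```

The user never specifies, or even sees, which backend ran; swapping Excel for an open-source alternative changes nothing about the request.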

Now scale this pattern across all software. In January 2025, Anthropic’s CEO Dario Amodei outlined the company’s vision at Davos: to create a “virtual collaborator” by the end of the year (“Anthropic CEO: More confident than ever that we’re ‘very close’ to powerful AI capabilities”). This isn’t marketing hype; the capabilities are already emerging. In just the past year, AI models have gained the fundamental abilities needed to abstract away software:

  • Computer use. Models can now control our existing applications the same way humans do: clicking, typing, navigating interfaces. Claude, GPT-5, and Gemini can all operate desktop software directly.
  • Autonomous operation. Models can work on complex tasks for hours without human oversight, reviewing their own work, making decisions, and correcting mistakes. The latest models can maintain focus on a single problem for up to 30 hours.
  • Direct software integration. Through protocols like the Model Context Protocol (MCP), AI can communicate directly with application APIs, bypassing user interfaces entirely. OpenAI just launched a feature where we can interact with apps from within ChatGPT. That’s the roadmap.
  • Contextual awareness. Models are learning to understand our communications, schedules, and work patterns, not just responding to explicit commands but anticipating what we need.

These aren’t separate features; they’re the building blocks of Dario’s “virtual collaborator.” And they point to a fundamental shift in how we interact with computers.
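The “direct software integration” point is easiest to see in miniature. MCP builds on JSON-RPC 2.0 and exposes application capabilities as named tools the model can call. The sketch below shows that shape with an invented tool (`create_calendar_event`) and an invented in-process handler; it is a simplified illustration of the pattern, not the actual protocol implementation.

```python
# Simplified sketch of the MCP-style pattern: the model emits a structured
# tool call (JSON-RPC 2.0), and the application handles it via its API
# rather than its user interface. Tool name and handler are hypothetical.
import json

def make_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request in the 'tools/call' shape MCP uses."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Hypothetical server side: the app publishes its capabilities as tools.
TOOLS = {
    "create_calendar_event": lambda args: f"Booked '{args['title']}' at {args['time']}",
}

def handle(request_json: str) -> str:
    req = json.loads(request_json)
    result = TOOLS[req["params"]["name"]](req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

resp = handle(make_tool_call("create_calendar_event",
                             {"title": "Q3 review", "time": "14:00"}))
print(json.loads(resp)["result"])
```

No pixels, no clicking, no UI: the model and the application exchange structured requests directly.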

Consider a common but complex task: generating a monthly report for your manager. This typically involves:

  1. Gathering data from multiple sources (databases, emails, spreadsheets)
  2. Cleaning and manipulating that data to extract relevant metrics
  3. Creating visualisations: charts and graphs
  4. Building a presentation deck
  5. Writing an executive summary
  6. Sending drafts to stakeholders for review
  7. Incorporating feedback
  8. Answering follow-up questions about the data
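The steps above can be sketched as a single orchestration function. Every step here is a hypothetical stand-in (`gather_data`, `review`, and the rest are invented for illustration); in a real agent each would dispatch to a tool such as a database client, a charting library, or an email API, and the review step would loop on actual stakeholder feedback.

```python
# Minimal sketch of an agent orchestrating the report workflow.
# All step functions are hypothetical stand-ins for real tool calls.
def gather_data():            return {"revenue": [100, 110, 125]}   # 1. gather
def clean(data):              return {k: v for k, v in data.items() if v}  # 2. clean
def visualise(data):          return f"chart({data['revenue']})"    # 3. charts
def build_deck(chart):        return ["Title slide", chart]         # 4. deck
def summarise(data):          return f"Revenue grew to {data['revenue'][-1]}."  # 5. summary
def review(deck, summary):    return []   # 6. stand-in: no feedback this round

def generate_monthly_report():
    data = clean(gather_data())
    deck = build_deck(visualise(data))
    summary = summarise(data)
    for note in review(deck, summary):    # 7. incorporate any feedback
        summary += f" ({note})"
    return {"deck": deck, "summary": summary}

report = generate_monthly_report()
print(report["summary"])
```

The point is not the individual steps, each is trivial in isolation, but the orchestration: deciding what to call, in what order, and what to do with the result. That coordination is exactly the layer humans currently provide.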

Until 2025, the only entity capable of orchestrating this kind of multi-step, multi-tool workflow was a human. This is why hundreds of millions of us sit in offices. We’re the general intelligence that can bridge between specialised tools to accomplish complex goals. But AI models are now general intelligences too. And unlike humans, they’re native to the digital environment. They don’t need to learn keyboard shortcuts or remember where menu items live. They can interface with software at whatever level is most efficient, whether that’s manipulating a user interface, calling an API directly, or even generating code on the fly. The report generation task I just described? Current models can already handle significant portions of it. Within a year or less, they’ll likely handle all of it, start to finish, with minimal human input beyond “I need the Q3 performance report by Friday.”

This is where the abstraction becomes consequential for software companies. When AI models sit between users and applications, several things happen simultaneously.

First, the user interface, often a company’s primary moat, loses its value. Users no longer need to learn your app, remember where features live, or develop habits around your particular workflow. The AI handles all of that.

Second, switching costs evaporate. If I’ve spent six months mastering Excel, I’m unlikely to switch to Google Sheets. But if I’ve spent six months asking Claude to “analyse my sales data,” I have no loyalty to whatever spreadsheet application Claude happened to use. The AI can switch between tools transparently, choosing based on price, performance, or availability, not my learned preferences.

Third, brand becomes less visible. When you use Uber’s app, you see their logo, their design language, their driver ratings system. You develop a relationship with the service. But when you tell your AI “I need to get to the airport” and it handles everything behind the scenes, do you even know which service it used? Do you care?

This doesn’t mean all software companies face the same fate. The impact depends on what kind of value they provide. Consider Uber. It won’t cease to exist. It has real infrastructure: a network of drivers, dispatch algorithms, insurance arrangements, regulatory relationships, payment processing systems. These remain valuable. But Uber becomes more like AWS than like the Uber we know today: infrastructure that AI models consume on behalf of users, chosen on price and performance rather than consumer brand preference. Uber survives, but as a B2B commodity provider rather than a consumer brand with pricing power.

The same logic applies differently to different types of software:

  • Complex, specialised tools may resist longest. AutoCAD, advanced audio production software, scientific analysis tools: these require such deep domain expertise that even powerful AI models may not fully abstract them away by 2030. But even here, AI will likely handle 80% of common tasks, while we get hands-on for specialised use cases.
  • Middleware and APIs may actually benefit. If AI models are making thousands of API calls instead of humans clicking through interfaces, the companies providing those APIs might see increased usage, although they’ll face new competitive pressure from models that can dynamically choose between providers.
  • Data and networks remain valuable. Uber’s driver network, Google’s search index, social media connection graphs: these are assets AI models need to access, not just interfaces they can replace. But the companies that own them may find they’re selling wholesale to AI providers rather than retail to consumers.

The timeline varies, but the direction is clear. If you’re starting a software company today and expecting humans to directly use your interface in five years, you’re betting against a powerful trend. Maybe your particular domain is complex enough that it takes seven years instead of five. Maybe only three. But ask yourself: what happens when your users discover they can accomplish their goals faster by describing them to an AI than by learning your app?

By 2028, I expect the majority of routine business software tasks (scheduling, data analysis, document creation, basic communications) will happen through AI intermediaries rather than direct application use. Many of us will still have computers, but increasingly those computers will be running AI agents that run applications on our behalf. And where the UI is focused on reviewing work rather than making it, we can likely use smaller devices like phones and tablets. By 2030, I expect the question “which app do you use for X?” will sound as dated as “which DOS command do you use to copy files?” sounds today.
We’ll think in terms of tasks and outcomes. The applications that handle them will be implementation details we rarely consider. For software companies, this means the next five years require a fundamental rethinking of value creation. User interface and user experience, historically the primary moats, are becoming less important. What matters is:

  • Network effects and data that can’t be easily replicated,
  • Complex domain expertise that resists full automation,
  • Infrastructure and reliability that make you the API of choice for AI models.

Companies that recognise this and rebuild their businesses around AI-as-customer rather than human-as-customer will survive and potentially thrive. Those that continue optimising their UI for human users while ignoring the coming abstraction layer are optimising for a world that’s disappearing. Software ate the world. Now AI is eating the software. And the companies that understand this earliest will be the ones that still exist into the 2030s.