For most of the history of software, we lived with a permanent trade-off. We could build the best possible experience, or we could build something affordable. Rarely both.

In the early days of personal computing, this was obvious. Different machines, different operating systems, different toolchains meant that supporting multiple platforms required paying for multiple platforms. Native applications delivered the best experience, but they were expensive to build and painful to maintain. So the industry looked for shortcuts: shared frameworks, cross-platform toolkits, abstraction layers, anything that promised "write once, run anywhere." They rarely lived up to the slogan, but they were good enough, and good enough had a powerful economic logic behind it.

Then came the web. With browsers, distribution became trivial. You could deploy once and reach everyone. For enterprise software in particular, this was transformative. No installs, no updates, no platform wars, just a URL. The experience was often inferior to native software, but the economics were unbeatable, and convenience won. The industry accepted good enough as a standard not reluctantly but deliberately, because the alternative was too expensive to justify.

Mobile repeated the same story. iOS and Android reintroduced fragmentation, hybrid frameworks promised salvation, and once again experience was compromised in favor of cost and reach. Over and over, the industry chose abstraction over quality. Not because abstraction was better, but because it was cheaper. The economics made the decision before the designers had a chance to.

The Shift We Are Living Through

Something fundamental is changing in how software gets built, and it is not a marginal improvement. With modern coding agents and AI-assisted development, the cost structure of implementation is being rewritten. In greenfield development especially, generating a working application, even a sophisticated one, is becoming routine. The machine writes the code. What the machine cannot do is decide what the code should mean.

That is where the constraint has moved. Clarity, intent, structure, meaning: these are now the expensive parts. In an agent-driven world, ambiguity is no longer absorbed by creative developers who figure things out along the way. Vague requirements produce unusable output. The quality of the specification determines the quality of the software, which means the discipline of articulating what you want has become more consequential than the discipline of writing code to express it.

This is not a small change. It shifts where value lives and, as a consequence, it shifts what kinds of decisions become rational again.

From Writing Code to Writing Intent

For decades, development teams spent most of their energy on implementation. Architecture, modeling, and specification were important, but often secondary to getting something working. You could figure things out in code, and most teams did.

That approach is becoming less viable. When machines implement, humans must explain. Design systems, interaction models, domain concepts, state transitions, error semantics, behavioral rules: these are no longer optional niceties at the edges of a project. They are the primary inputs. The better you define what you want, the better the output. The less you define it, the more you are generating noise at scale.

In effect, we are learning to write software twice. First in meaning, then in materialization. AI handles the second part. We remain responsible for the first. And once you accept that shift, something interesting follows about where the economics of experience end up.

When Implementation Is Cheap, Optimization Makes Sense Again

Historically, native development lost the platform wars not because it was bad but because it was expensive. Native software still offers advantages that browsers and generic frameworks struggle to match: performance, responsiveness, deep platform integration, reliable offline behavior, input paradigms suited to real work. These things matter, especially in complex or cognitively demanding applications. But for years, improving them was rarely worth the cost. The calculation was simple. You could build one web client or three native ones, and most teams chose one.

As AI-assisted development reduces implementation cost, that calculation changes. If generating a macOS client, a Windows client, a mobile client, and a web interface becomes relatively cheap, the old compromise starts to look unnecessary rather than sensible. The constraint that drove two decades of decisions is weakening, and the decisions that followed from it no longer need to hold. When coding is no longer the bottleneck, experience becomes rational again.

This Is Not About Declaring Winners

It is worth being clear about what this argument is not. Browser-based software will remain essential for distribution, for lightweight access, for occasional users, for cross-organizational workflows, for contexts where installation is not viable. The web is often exactly the right answer, and for many use cases it already delivers everything users need.

This is not a case against the web. It is a case for genuine choice. For decades, teams chose web or cross-platform primarily because they had to, not because it was ideal but because it was affordable. As that constraint weakens, teams regain the ability to decide based on user value rather than development economics. Sometimes that will mean web. Sometimes native. Sometimes something in between. The difference is that it becomes a deliberate decision rather than a forced one, which is precisely the kind of freedom the industry has been lacking for thirty years.

The Client as a Projection of Intent

This shift fits into a broader pattern worth naming. We are moving away from software as handcrafted artifact and toward software as compiled intent. When you define the domain model, the interaction model, the state model, the design system, and the behavioral rules clearly enough, those definitions can be translated into concrete implementations across different platforms and form factors. The underlying meaning remains stable while the surface varies by context.

In that world, user interfaces are no longer the primary product. They are projections of intent. And once that is true, generating a high-quality native client is no longer a heroic engineering effort requiring months of platform-specific work. It is a mechanical consequence of having defined what you mean clearly enough. The craftsmanship moves upstream, into modeling and semantics and structure, which is where it arguably always should have been.
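To make the idea concrete, here is a minimal, hypothetical sketch in TypeScript of what "intent defined once, projected everywhere" might look like. The names (`Task`, `transition`, the state machine itself) are illustrative, not drawn from any real framework; the point is only that the meaning lives in the model, while each client renders it in its own way.

```typescript
// A platform-neutral "intent" definition for a small task-tracking domain.
// All names here are hypothetical and purely illustrative.

type TaskState = "open" | "in_progress" | "done";

interface Task {
  id: string;
  title: string;
  state: TaskState;
}

// A behavioral rule expressed once, in the model, rather than
// re-implemented in each client: which state transitions are legal.
const allowed: Record<TaskState, TaskState[]> = {
  open: ["in_progress"],
  in_progress: ["done", "open"],
  done: [],
};

function transition(task: Task, next: TaskState): Task {
  if (!allowed[task.state].includes(next)) {
    throw new Error(`Illegal transition: ${task.state} -> ${next}`);
  }
  return { ...task, state: next };
}

// Any client -- web, macOS, mobile -- draws its own surface over this,
// but what states exist and which moves are legal is fixed here.
const t = transition(
  { id: "1", title: "Write the spec", state: "open" },
  "in_progress"
);
console.log(t.state); // "in_progress"
```

The design choice worth noticing is that nothing in this definition mentions a screen, a widget, or a platform. That separation is what makes the surface free to vary while the meaning stays stable.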

A Second-Order Effect of Clarity

Most of the current conversation about AI in software focuses on productivity: faster delivery, more features, smaller teams. That matters, but it is not the most interesting consequence of what is happening. The deeper shift is that clarity becomes the central discipline.

When you must be explicit about what you mean, you are forced to confront complexity early rather than defer it. You are forced to understand your own system before you build it. That discipline has a second-order effect that extends beyond process efficiency. When intent is clearly defined, form can vary freely. When meaning is stable, implementation becomes flexible. When implementation is flexible, quality becomes affordable. That is how native experiences come back, not as nostalgia, not as rebellion against the web, but as a natural consequence of the industry learning to think more carefully about what software is supposed to do.

From Scarcity to Deliberate Design

For thirty years, much of software engineering was fundamentally about managing scarcity: scarcity of skills, of time, of budget, of platform support. The response was rational. Optimize for reuse, choose abstraction, target lowest common denominators, accept the compromise because the alternative was too expensive. That made sense when implementation was the binding constraint.

We are entering a period of relative abundance in implementation capacity. A good specification can become working software far faster than it could five years ago, and that gap will continue to widen. What remains scarce is not code but understanding: understanding of the domain, of the user, of the problem worth solving. In that world, the competitive advantage shifts away from writing code faster and toward knowing more clearly why you are writing it. Experience stops being something you apologize for and becomes something you design deliberately, because for the first time in a long time, you can afford to.

Returning Without Going Back

We are not returning to the 1990s. Not to artisanal desktop development, not to handwritten platform APIs, not to a world before the web changed distribution forever. The web solved a real problem, and that solution will not be unlearned.

We are moving forward into a different configuration, one where humans define meaning and machines implement structure, where interfaces adapt to context rather than forcing users to adapt to constraints, where quality is affordable because implementation is no longer the scarce resource. Native clients will likely become more common again, not because they are fashionable, but because teams can finally afford to choose them when they are right. In a world where intent comes first, form is free. That is not a return to the past. It is a consequence of growing up.