13 Feb 2026 Blog Kevin Bell, Advisor, PR

AI in 2026: from models to mandates

Kevin Bell, advisor at Miltton Sweden, attended Techarena 2026 in Stockholm and came away with a clear takeaway: Europe is drifting into the wrong AI debate. While much of the attention still fixates on who can build the biggest models, the conversations on stage pointed to a different reality: AI is rapidly becoming infrastructure, shaped as much by trust, energy and sovereignty as by code.


For two years, the public debate has obsessed over one metric: who can build the biggest models. But in 2026, that race is no longer the main story. The real competition is shifting fast, and it is far less technical than many leaders still assume.

The new question is this: who will earn the mandate to scale AI?

Because AI is no longer “just innovation.” It is becoming society-scale infrastructure. And infrastructure does not scale on performance alone. It scales on permission: regulatory, political, and public.

At Techarena 2026, in Stockholm, Sweden, the clearest signal wasn’t about the next model architecture. It was about the next operating reality: AI sits at the intersection of compute, energy, data sovereignty, security, and legitimacy. For businesses operating in Europe, the winners will be the ones who can build not only systems, but trust.

This is where communications, public affairs, and stakeholder engagement move from “support functions” to strategic infrastructure.

AI has entered its infrastructure era

In 2026, AI is no longer abstract. It is physical.

Every serious deployment depends on physical infrastructure and political choices. Advanced compute remains concentrated, supply chains are fragile, and dependencies are increasingly geopolitical. In other words, compute is becoming sovereignty.

At Techarena 2026, Prime Minister Ulf Kristersson’s presence on stage, and French artificial intelligence startup Mistral’s €1.2bn data center announcement in Sweden, made the point: AI is now a competitiveness and sovereignty issue.

If your AI strategy has no answers on compute, energy, and dependencies, it’s not a strategy. It’s a pilot.

Energy is the new gravity of AI

For decades, digital infrastructure followed data. Now AI infrastructure follows energy.

As inference scales, electricity becomes a strategic constraint. This is why you now see business leaders moving into power agreements and long-term capacity planning, moves that would have seemed extreme only a few years ago.

For the Nordics, this creates a real geopolitical and industrial opportunity. Countries with stable grids, fossil-free supply, predictable permitting, and trusted institutions will attract AI investment not because they support tech, but because they can support megawatts.

Sweden’s position is particularly strategic: the ability to convert energy into intelligence is a new value chain. Not just exporting electrons, but exporting capability. 

Europe’s edge won’t be scale – it will be trust

Europe is often criticized for regulating early. But the deeper issue is not regulation. It is legitimacy.

The EU AI Act, widely described as the world’s first comprehensive AI law, turns this logic into a risk-based framework, including specific transparency duties for generative AI. The timeline is already real: prohibited practices and AI literacy obligations have applied since 2 February 2025, with full applicability from 2 August 2026.

AI will not scale in Europe if citizens perceive it as something done to them rather than for them. And that legitimacy test will be practical, not philosophical.

People will ask: Why does this project deserve energy capacity? Why here? Why now? What do we get in return? Who is accountable?

The companies that win will be the ones who can answer those questions without defensiveness and without hype.

Europe’s potential advantage is to build the most credible AI ecosystem, one where transparency is real, governance is operational, and responsible AI is not a slogan but a measurable practice. 

The enterprise shift: from AI optimism to AI operational realism

A second signal is just as important: the market has matured.

Most organizations do not need to train new foundation models. The value is increasingly created through inference at scale, AI agents embedded in workflows, secure access to proprietary data, governance that enables speed without losing control, and compounding use cases tied to business outcomes.

In 2026 and beyond, the winners will not be those running the most pilots. They will be those who can turn AI into repeatable operations, and show the line from tokens to KPIs to business value.

But operational AI has a communications consequence. Once AI touches decisions, customers, employees, and critical systems, it becomes reputationally material. Your AI program will be judged not only on efficiency, but on fairness, safety, accountability and impact.

The communication challenge is no longer “make AI sound exciting.” It is “make AI understandable, governable, and trustworthy.”

What business leaders need to have under control in 2026  

AI strategy is no longer a technology roadmap. It is a trust roadmap. Executives will increasingly be expected to answer:

  • What is our public value case for AI, beyond productivity claims?
  • What is our governance model when things go wrong?
  • How do we communicate AI impacts credibly to employees, customers, investors, media and policymakers? 

Interested?

Let’s talk!