Power & Grid Strategy for AI Data Centres

27 Apr 2026

When Compute Scales Faster than the Grid 

AI is transforming the data centre industry at an extraordinary speed. But while much of the conversation focuses on GPUs, model architecture, and cooling technologies, power is emerging as a quieter constraint on project viability. 

Across the UK, and increasingly in other mature markets, national grids are struggling to keep pace with the sudden, concentrated demand created by AI workloads. Projects are being delayed not because the technology isn’t ready, but because electricity simply cannot be delivered quickly enough. In this environment, power strategy is no longer a technical consideration. It is shaping where, how, and even whether AI data centres get built at all. 

The Scale of the Challenge

AI data centres are driving a step change in the amount of power being requested from electricity networks. 

Historically, many data centre developments were sized in the range of 10–20MW. Today, AI‑driven projects are being delivered at a different scale. It is increasingly common to see campuses made up of multiple buildings, each requiring 50MW or more, with the total power demand of a site running into the hundreds of megawatts. 
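The scale shift can be made concrete with some back‑of‑the‑envelope arithmetic. The figures below are illustrative assumptions consistent with the ranges above, not data from any specific project:

```python
# Hypothetical illustration of the scale shift described above.
# All figures are assumptions, not data from any specific project.
legacy_site_mw = 15          # typical historical build: 10-20 MW
ai_building_mw = 50          # a single AI-oriented building
buildings_per_campus = 6     # a multi-building campus

campus_total_mw = ai_building_mw * buildings_per_campus
print(f"Legacy site: ~{legacy_site_mw} MW")
print(f"AI campus:   ~{campus_total_mw} MW "
      f"(~{campus_total_mw // legacy_site_mw}x a legacy site)")
```

Even with conservative assumptions, a single AI campus can request an order of magnitude more power than a legacy site, which is why individual connection requests now stress entire local networks.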

At the same time, the number of operators seeking this level of capacity is growing. Rather than a small number of data centre providers expanding steadily, grids are now facing a surge of large, simultaneous connection requests, all competing for limited distribution capacity. At the generation level, most countries have the capacity required; in many regions, however, growing demand simply exceeds what the local network can accommodate. 

This creates a fundamental distribution capacity squeeze. Even where individual projects are technically viable, local grids often cannot provide the volume of power being requested at the pace required.  

Connection queues are lengthening and delivery timelines are extending well beyond what developers expect. 

Grid investment takes years to plan and deliver. AI has rapidly increased both the size and concentration of demand, exposing a growing gap between the scale of data centre ambition and the physical limits of existing power infrastructure. In many markets, that gap is now the defining constraint on AI data centre development.

A Power‑First Approach to Location 

For much of the data centre industry’s history, location strategy followed a familiar set of priorities: proximity to users, strong connectivity, access to talent, and suitable land. Power was assumed to follow; that assumption no longer holds. 

In many established data centre hubs, grid capacity is constrained or already fully allocated. Developers are discovering that sites which appear highly attractive on paper can involve multi‑year delays for capacity, or substantial and unexpected reinforcement costs. In practice, this turns power availability into a delivery risk rather than an operational detail. 

As a result, site selection is quietly changing. Location strategy is shifting from network-first to power-first. Developers are increasingly forced to balance latency, ecosystem access, and market proximity against a more fundamental question: how quickly can reliable power be delivered? This shift is pushing some AI projects toward regions with stronger grid capacity, fewer competing loads, or clearer upgrade pathways, even if those locations were previously considered secondary. 

Latency and connectivity still matter, but they are no longer absolute drivers. For AI data centres, speed to power, certainty of delivery, and the ability to scale have become equally important.

Engaging with the Grid (Not Just Connecting to It) 

Grid access once appeared very straightforward, with developers applying for capacity, connecting, and then scaling as needed. That model is now under significant strain. 

In today’s environment, securing capacity is less a one‑time transaction and more an ongoing process. Long connection queues, changing rules, and uncertain timelines mean data centre operators must engage much earlier, and more actively, with grid operators than in the past. Capacity discussions now shape design assumptions from the very start of a project. 

This is not simply a regulatory challenge. It is about managing uncertainty. Flexible connections, phased energisation, and demand management are increasingly common tools. In effect, the grid is no longer just a utility at the edge of the project. It is a stakeholder whose constraints directly influence architecture, phasing, and even commercial models.
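The mechanics of a flexible (non‑firm) connection can be sketched simply: the site draws what it needs up to a firm import limit, and anything above that must be shed, shifted, or self‑supplied. The function and figures below are illustrative assumptions, not any grid operator's actual product terms:

```python
# Minimal sketch of a flexible (non-firm) grid connection, assuming a firm
# import limit and an hourly site demand profile in MW. The function name
# and all figures are illustrative assumptions.
def apply_flexible_connection(demand_mw, firm_limit_mw):
    """Clamp each hourly demand value to the firm import limit.

    Returns (served profile in MW, total curtailed energy in MWh)."""
    served = [min(d, firm_limit_mw) for d in demand_mw]
    curtailed_mwh = sum(d - s for d, s in zip(demand_mw, served))
    return served, curtailed_mwh

# Example: a site whose demand peaks above its 40 MW firm limit.
demand = [30, 35, 45, 50, 48, 38]      # hourly demand, MW
served, curtailed = apply_flexible_connection(demand, firm_limit_mw=40)
print(served)     # peaks are clipped to the 40 MW limit
print(curtailed)  # energy the operator must shed, shift, or self-supply
```

This is why capacity discussions shape design from day one: the size of that curtailed remainder determines how much demand management, storage, or on‑site generation the architecture must absorb.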

Renewables, PPAs, and the Physical Reality of Power

Sustainability remains a central pillar of data centre strategy, but AI is exposing the gap between carbon reporting and physical power delivery. 

Long‑term renewable power purchase agreements play an important role in managing emissions and price risk. However, they do not guarantee that power can be delivered where and when it is required. In congested networks, the difference between procuring renewable energy and physically accessing capacity becomes critical. 

AI workloads also bring uncertainty. Training intensity, inference patterns, utilisation rates, and hardware efficiency are all evolving rapidly. That makes long‑term commitments harder to plan with confidence. The result is a more nuanced approach to renewable procurement, one that recognises complexity rather than smoothing it over.

On‑Site Power: From Backup to Core Infrastructure 

Perhaps the most visible shift in AI data centre design is the evolving role of on‑site generation. 

Traditionally, on‑site power was synonymous with resilience, an insurance against failure rather than a primary supply. Today, it is often being used as a capacity‑bridging tool. Gas generators, temporary plants, and hybrid configurations are enabling sites to operate at scale while grid upgrades catch up. In some cases, this “temporary” phase lasts several years. 
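The capacity‑bridging role can be expressed as a simple year‑by‑year gap calculation: on‑site generation covers whatever site demand exceeds the grid capacity delivered so far. All figures below are hypothetical assumptions for illustration:

```python
# Illustrative capacity-bridging calculation: how much on-site generation a
# site needs each year while grid reinforcement catches up with demand.
# Every figure here is a hypothetical assumption.
grid_capacity_mw = {2026: 50, 2027: 50, 2028: 120, 2029: 200}   # firm grid supply
site_demand_mw  = {2026: 100, 2027: 150, 2028: 180, 2029: 200}  # campus ramp-up

onsite_gap_mw = {
    year: max(0, site_demand_mw[year] - grid_capacity_mw[year])
    for year in site_demand_mw
}
print(onsite_gap_mw)  # the "temporary" plant may run for several years
```

In this sketch the on‑site fleet still supplies 60MW three years in, which is exactly how an interim solution drifts toward becoming a permanent fixture.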

The appeal is clear. On‑site power delivers speed, control, and predictability in an increasingly constrained environment. But it also introduces complexity. Fuel logistics, emissions management, operational oversight, and planning scrutiny all become more significant. There is also a risk that interim solutions quietly become permanent fixtures. 

Despite these challenges, for many AI projects on‑site power is no longer optional. It is becoming a core part of baseline design rather than a contingency.

Planning, Communities, and Visibility 

As AI data centres grow larger and more energy‑intensive, they are also becoming harder to ignore. 

Local authorities and communities are paying closer attention to power consumption, on‑site generation, and environmental impact. Infrastructure that was once largely invisible (generators, fuel storage, new substations) is now shaping how data centres are perceived. 

For operators, social responsibility increasingly sits alongside technical feasibility. Transparency, early engagement, and credible local benefit narratives are becoming essential parts of project delivery rather than optional extras. 

Future‑Proofing for What Comes Next

The greatest challenge with AI infrastructure is uncertainty. Nobody knows precisely what the next wave of demand will look like. 

Model architectures are changing and hardware generations are evolving. The safest assumption is continued volatility rather than stability. 

That reality favours flexibility. Modular power systems, hybrid supply models, and designs that can adapt to different grid outcomes are emerging as the most resilient approaches. There is no single winning strategy. Only combinations that balance speed, cost, resilience, and risk. 

In the AI era, power infrastructure is no longer just about keeping the lights on. It is about building systems that can respond as fast as the technology they support. For data centres, that may be the most important challenge of all. 

Contact Us

For more details about Colt's data centre services, please contact the Colt team here. We would be happy to support your digital infrastructure requirements.