November 24, 2025


Artificial intelligence (AI) is expanding rapidly, penetrating many aspects of our daily lives, from content generation to customer-service chatbots. Behind this is enormous growth in data processing, much of which requires powerful computing infrastructure. AI models must be trained before they are ready for use, and once deployed they perform inference; both workloads are normally carried out in advanced data centers.

Inside a modern data center, one would typically find many thousands of high-performance servers, each requiring significant energy for both operation and cooling. With such widespread, rapid growth, energy consumption in the AI data center sector is ballooning, raising concerns about the sustainability and environmental impact of this technological revolution.

The International Energy Agency (IEA) estimates that data centers accounted for 1.5% of total electricity demand in 2024, approximately 415 terawatt-hours (TWh). Consumption is expected to more than double to around 945 TWh by 2030, roughly a 3% share. And because data centers are often clustered together, they can put significant strain on local areas of the power grid.

Engineering the Future: Adapting Data Center Power Architecture for AI

Compared to typical web usage such as search, the energy needed for AI is much higher, often by a factor of ten. This is primarily due to the powerful graphics processing units (GPUs) required, each of which can consume hundreds of watts. Training is especially power-hungry: training GPT-4 reportedly required 25,000 NVIDIA A100 GPUs running for three months, consuming some 50 gigawatt-hours (GWh) of energy and costing $100 million, according to OpenAI.
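
As a rough sanity check on that figure, the sketch below multiplies GPU count, per-GPU power draw, and training time, then applies a facility overhead factor. The ~400 W average draw per A100 and the 1.5x overhead for cooling, CPUs, and networking are illustrative assumptions, not figures from the article:

```python
# Rough sanity check of the GPT-4 training-energy figure.
NUM_GPUS = 25_000          # A100 GPUs (from the article)
GPU_POWER_W = 400          # assumed average draw per A100, in watts
TRAINING_HOURS = 90 * 24   # ~3 months of continuous training (from the article)
OVERHEAD = 1.5             # assumed multiplier for cooling, CPUs, networking

gpu_energy_gwh = NUM_GPUS * GPU_POWER_W * TRAINING_HOURS / 1e9  # Wh -> GWh
total_energy_gwh = gpu_energy_gwh * OVERHEAD

print(f"GPU-only energy: {gpu_energy_gwh:.1f} GWh")    # ~21.6 GWh
print(f"With overheads:  {total_energy_gwh:.1f} GWh")  # ~32 GWh, same order as 50 GWh
```

Even with conservative assumptions, the result lands in the same order of magnitude as the cited 50 GWh.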

Far from slowing down, AI is doubling its power consumption roughly every six months, with the industry already consuming as much energy as a small nation. At this scale, losses are a real concern. As electricity moves through transmission and distribution, up to 6% of the energy is lost to resistance in the cables. Power is then converted more than four times on its way from the grid to the GPU, losing on average a further 12% of the energy.
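
That 12% figure is consistent with a chain of conversion stages that are each individually quite efficient. A minimal sketch, assuming an illustrative 97% efficiency per stage (not a figure from the article), shows how four cascaded stages compound:

```python
# Cascaded conversion stages multiply: even efficient stages compound.
STAGE_EFFICIENCY = 0.97   # assumed efficiency of each conversion stage
NUM_STAGES = 4            # grid-to-GPU conversions (from the article)

end_to_end = STAGE_EFFICIENCY ** NUM_STAGES
print(f"End-to-end efficiency: {end_to_end:.1%}")      # ~88.5%
print(f"Energy lost:           {1 - end_to_end:.1%}")  # ~11.5%, close to the cited ~12%
```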

With each of the thousands of servers able to consume up to 40 kilowatts (kW), a heavy-duty busbar is used to move power to the racks. The standard 12 V direct current (VDC) bus has evolved into 48 VDC to reduce currents, but to meet the energy demands of AI, a higher ±400 VDC bus architecture will likely be required.
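
The motivation for higher bus voltages is Ohm's law: for a given power, current scales inversely with voltage, and conduction loss scales with the square of the current (P_loss = I²R). The sketch below compares an illustrative 40 kW load across the three bus voltages, using an assumed 1 mΩ busbar resistance that is not from the article:

```python
# Why higher bus voltages help: I = P / V, and conduction loss = I^2 * R.
LOAD_W = 40_000      # illustrative 40 kW load (from the article)
BUS_R_OHM = 0.001    # assumed 1 milliohm busbar resistance (illustrative)

for bus_v in (12, 48, 400):
    current = LOAD_W / bus_v         # amps drawn at this bus voltage
    loss = current ** 2 * BUS_R_OHM  # watts dissipated in the busbar
    print(f"{bus_v:4d} V bus: {current:7.0f} A, conduction loss {loss:8.1f} W")
```

In this illustrative scenario, the 12 V busbar alone would dissipate over a quarter of the delivered power, which is why high-power racks moved to 48 V and are now heading toward ±400 V.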

Figure 1: Data centers require multiple power conversion stages

Power semiconductors are essential for efficiently converting power to meet the needs of AI processors and GPUs. Silicon carbide (SiC) and gallium nitride (GaN) are replacing silicon because they enable more compact and energy-efficient power converters, significantly improving the total cost of ownership (TCO) of data centers.

Innovative Solutions for Efficiency and Sustainability

Data center power delivery from the grid to the GPU rack involves many power conversions, and intelligent SiC and silicon (Si) power solutions are instrumental in each branch of the power tree. Power first passes through a solid-state transformer (SST) and an automatic transfer switch (ATS), which is backed up by a diesel generator. The 20 kVAC line is converted to three-phase 400 VAC and then goes through an uninterruptible power supply (UPS); EliteSiC discretes and power modules can deliver higher efficiency and power density at this entry point into the data center. The power distribution unit (PDU) then breaks the three-phase 400 VAC out into single-phase 230 VAC feeds at the rack level.
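
Laid end to end, the stages described here and in the following paragraphs form a ladder of voltage levels. The sketch below simply encodes that ladder as data; the stage names, groupings, and the ~1 VDC endpoint are an illustrative paraphrase of the article, not a formal specification:

```python
# The grid-to-GPU power tree, as a list of (stage, input, output) steps.
# Illustrative paraphrase of the article, not a spec.
POWER_TREE = [
    ("Solid-state transformer + ATS", "20 kVAC",              "400 VAC three-phase"),
    ("UPS",                           "400 VAC three-phase",  "400 VAC three-phase"),
    ("Power distribution unit",       "400 VAC three-phase",  "230 VAC single-phase"),
    ("Rack PSU (AC-DC)",              "230 VAC single-phase", "48 VDC"),
    ("Intermediate bus converter",    "48 VDC",               "12 VDC"),
    ("Voltage regulator (Vcore)",     "12 VDC",               "~1 VDC at the processor"),
]

for stage, vin, vout in POWER_TREE:
    print(f"{stage:32s} {vin:22s} -> {vout}")
```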

The rest of the power conversion happens at the rack where the GPU servers are located. Within the power supply unit (PSU) and battery backup unit, the combination of SiC cascode JFETs and PowerTrench T10 Si MOSFETs is ideal for high-power AC-DC solutions. High-current SiC cascode JFETs are essential for the transition from 3 kW to 5 kW PSUs required in next-generation hyperscale architectures.
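
To see why that transition pushes component requirements, consider the input current of a single PSU. The power factor and efficiency values below are assumptions for illustration, not figures from the article:

```python
# Input current of a rack PSU at 230 VAC: I = P_out / (V * PF * efficiency).
LINE_V = 230        # single-phase input voltage (from the article)
PF = 0.99           # assumed power factor (illustrative)
EFFICIENCY = 0.975  # assumed PSU efficiency (illustrative)

for psu_w in (3_000, 5_000):
    amps = psu_w / (LINE_V * PF * EFFICIENCY)
    print(f"{psu_w / 1000:.0f} kW PSU draws ~{amps:.1f} A from the 230 VAC line")
```

Moving from 3 kW to 5 kW pushes the input current from roughly 13.5 A to 22.5 A under these assumptions, which is where the high-current SiC devices earn their place.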

Along the power flow, EliteSiC 650 V MOSFETs and T10 MOSFETs from onsemi are used to convert the 230 VAC line voltage first to 48 VDC and then to 12 VDC. Conversion efficiency is key here to maintaining the Open Rack V3 (ORV3) specification of 97.5% peak efficiency; this high efficiency reduces wasted energy and helps lower operating expenses and cooling demands. T10 Si MOSFETs and power management ICs are also used in the intermediate bus converter (IBC) that steps the 48 V down to 12 V to power the Vcore (CPU core voltage) branch of the power tree. Moreover, for 400/800 V bus architectures, SiC JFETs and SiC Combo JFETs offer reliable overcurrent protection for the hot-swap/e-Fuse function ahead of the IBC stage.
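
The operating-cost impact of that efficiency figure compounds across a whole rack. The sketch below compares the heat dissipated at the ORV3 97.5% peak efficiency against an assumed 94% baseline for an illustrative 100 kW rack; both the rack power and the baseline are assumptions, not article figures:

```python
# Wasted heat per rack at two conversion efficiencies.
RACK_POWER_W = 100_000  # illustrative 100 kW AI rack (assumption)

def waste_watts(efficiency: float) -> float:
    """Input power needed to deliver RACK_POWER_W, minus the delivered power."""
    return RACK_POWER_W / efficiency - RACK_POWER_W

for eff in (0.94, 0.975):
    print(f"{eff:.1%} efficient: {waste_watts(eff) / 1000:.2f} kW dissipated as heat")
```

Under these assumptions, the higher-efficiency conversion sheds almost 4 kW less heat per rack, and every kilowatt of conversion loss avoided is also a kilowatt the cooling plant never has to remove.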

The Future of Power Management in AI Data Centers 

Efficiency is the most critical power parameter in AI data centers. Losses must therefore be minimized wherever possible, not least because cooling can consume up to 50% of the power used in a data center, with the other half going to the IT equipment itself: servers, storage systems, and the power infrastructure.
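
In industry terms, this overhead is captured by power usage effectiveness (PUE), the ratio of total facility power to IT power. A minimal sketch, assuming for simplicity that cooling is the only overhead; the 20% comparison case is illustrative, not from the article:

```python
# Power usage effectiveness: total facility power / IT equipment power.
def pue(it_power_kw: float, cooling_power_kw: float) -> float:
    """PUE, assuming (for simplicity) that cooling is the only overhead."""
    return (it_power_kw + cooling_power_kw) / it_power_kw

# If cooling consumes as much as the IT load (the 50/50 split cited above):
print(f"PUE at a 50/50 split: {pue(1000, 1000):.2f}")  # 2.00
# A facility with lower cooling overhead, for comparison:
print(f"PUE at 20% overhead:  {pue(1000, 200):.2f}")   # 1.20
```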

onsemi is a leader in AI data center solutions and is one of the few suppliers that can meet the needs of the entire power tree from the grid to the GPU. The future will require advanced wide-bandgap technologies, such as EliteSiC and vertical GaN from onsemi, for robust power conversion at higher frequencies and higher efficiencies that permit more compact designs. These devices can also operate reliably at higher temperatures, requiring less cooling, enabling more compact solutions, and reducing operating costs.

Additional resources: