Schneider Electric’s latest quarter shows AI data center cooling, and liquid cooling in particular, moving from an optional upgrade to a capital priority as hyperscalers accelerate AI build-outs and operators face higher rack densities, power constraints, and tighter energy-reporting requirements.
The company’s latest quarterly results have sharpened investor attention on data center cooling, with CETA System Co., Limited analyzing the figures as evidence that thermal management is becoming central to AI infrastructure economics. The French energy-management group reported 11.2% organic revenue growth to $11.4 billion in the most recent quarter, with data center cooling systems contributing strongly to performance. Hyperscalers have also allocated more than $600 billion to AI infrastructure deployments in the current annual investment cycle.
Key Takeaways
- Schneider Electric’s quarterly results show AI data center cooling is now a priority due to increased AI workloads and rack densities.
- The company reported 11.2% organic revenue growth, with significant contributions from data center cooling systems.
- Liquid cooling systems can handle higher thermal loads, addressing limitations of traditional air cooling as rack densities rise.
- Regulatory pressures and rising energy costs compel operators to reassess cooling systems as essential for operational efficiency and risk management.
- CETA System projects substantial growth in the AI data center cooling sector, reflecting a shift towards more energy-efficient and resilient cooling solutions.
AI Data Center Cooling Moves From Operations Issue to Strategic Priority
The results point to a broader change in how data center operators are evaluating cooling infrastructure. Schneider Electric’s data center category recorded triple-digit year-on-year growth, while data centers and networks accounted for 30% of the group’s end-market exposure in the latest annual order mix. Its finance leadership has projected that the vertical will represent more than 24% of group revenue within twelve months, reinforcing the extent to which AI workload growth is influencing industrial energy-management demand.
Schneider Electric’s acquisition of a 75% controlling interest in Motivair for $948.7 million has also expanded attention on liquid-cooling capability. Motivair’s technology supports rack densities exceeding 140 kilowatts, with architectural provisions for future one-megawatt-per-rack deployments. Market Decipher projects the global AI data center cooling and liquid-cooling sector to grow from $4.1 billion at present to $20.2 billion by 2036, reflecting the pressure that denser AI infrastructure is placing on legacy thermal designs.
Describing the shift, Lee Tsz-Hin, CETA System’s Chief Executive Officer, called it “a practical reassessment of cooling as a financial, operational and resilience issue, not just a facilities issue.” Cooling systems consume approximately 40% of data center energy, making thermal management a direct operating-cost concern for owners, operators, and investors assessing long-term asset performance.

Rising Rack Density Is Driving Liquid Cooling Adoption
That reassessment is being driven by rack-density increases that are testing conventional cooling approaches. Standard configurations have moved from an average of 8.4 kilowatts toward projected 30-kilowatt deployments by 2027, while AI workloads can exceed 100 kilowatts per rack. Air cooling reaches practical limits at 20 to 25 kilowatts, with thermal-capacity constraints near 70 kilowatts, pushing operators toward direct-to-chip cooling, rear-door heat exchangers, and immersion systems where higher densities require them.
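The density thresholds above can be read as a simple decision rule. The sketch below is illustrative only: the function name and exact cutoffs are assumptions, chosen to match the ranges cited in this article (air cooling practical to roughly 20 kilowatts, liquid-assisted approaches up to the ~70-kilowatt ceiling, direct liquid cooling beyond that).

```python
# Illustrative mapping from rack power density to the cooling approaches
# the article describes. Cutoffs are assumptions drawn from the cited
# ranges, not an engineering standard.
def cooling_approach(rack_kw: float) -> str:
    if rack_kw <= 20:
        return "air"                        # conventional air cooling suffices
    if rack_kw <= 70:
        return "rear-door heat exchanger"   # liquid-assisted air at the rack
    return "direct-to-chip or immersion"    # dense AI racks need direct liquid

for kw in (8.4, 30, 140):
    print(f"{kw} kW -> {cooling_approach(kw)}")
```

In practice the choice also depends on facility water availability, floor loading, and retrofit constraints, which is why the article frames these as ranges rather than hard limits.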
CETA System views vendor-agnostic integration as important for mixed-infrastructure environments, where operators need to connect existing building-management systems and data center infrastructure-management tools without full hardware replacement. Such integration allows cooling performance to be considered alongside equipment-health analytics, helping operators assess degradation patterns before failures escalate. Lee called advisory-first deployment “the practical model for critical infrastructure, where AI can inform decisions while operators retain approval and oversight.”
Cost Efficiency and Operational Risk Management
For data center operators, the issue is not only whether cooling can meet higher technical loads, but whether it can do so with predictable cost and outage exposure. Energy costs represent 25% to 30% of data center operational budgets, while cooling systems consume 30% to 40% of facility power. Traditional air-cooling architectures lose efficiency at higher rack densities, while liquid-cooling technologies are being evaluated for consumption reductions of up to 30%.
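The savings figures above compound: a 30% cut applies to cooling’s share of facility power, not to the whole facility. A back-of-envelope sketch, taking the article’s ranges at their midpoints (both inputs are assumptions for illustration):

```python
# Back-of-envelope calculation of facility-level savings from the
# article's figures, using midpoints of the quoted ranges.
cooling_share = 0.35      # cooling uses 30-40% of facility power (midpoint)
cooling_reduction = 0.30  # liquid cooling evaluated for up to 30% reduction

facility_saving = cooling_share * cooling_reduction
print(f"facility-level energy saving: {facility_saving:.1%}")
```

On these assumptions the facility-level saving is about 10.5% of total power, which is why the reduction is material to operating budgets where energy is 25% to 30% of spend.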
Resilience is another core factor. Thermal failures are identified as the second leading cause of data center downtime after power-system failures, making predictive maintenance relevant to both engineering and financial risk. Platforms that detect equipment degradation weeks before critical failure can reduce unplanned downtime by 50% across industrial deployments. Continuous monitoring of vibration patterns, thermal anomalies, and acoustic emissions can provide earlier warning indicators for critical cooling components, supporting planned maintenance rather than emergency response.
Regulation and Reporting Add Investment Pressure
Regulatory and reporting pressures are adding further weight to the investment case. In Europe, data centers with rated power above 500 kilowatts must report energy performance annually, including power usage effectiveness, temperature set points, waste-heat utilization, water consumption, and renewable-energy deployment. Globally, data centers account for 415 terawatt-hours of electricity consumption, with projections reaching 945 terawatt-hours by 2030. In Hong Kong, the Buildings Energy Efficiency Ordinance is expanding coverage to 15 building types and shortening energy-audit cycles to every five years, with disclosure requirements covering floor area, twelve-month energy analysis, and cost-benefit assessments.
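Power usage effectiveness, one of the metrics the European reporting rules cover, is total facility energy divided by the energy delivered to IT equipment; a ratio closer to 1.0 means less overhead lost to cooling and power distribution. A minimal sketch, with hypothetical figures:

```python
# Power usage effectiveness (PUE): total facility energy over IT energy.
# The site figures below are hypothetical, for illustration only.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# A site drawing 1,400 MWh overall to serve 1,000 MWh of IT load:
print(round(pue(1_400_000, 1_000_000), 2))  # 1.4
```

Because cooling is the largest non-IT load, reductions in cooling energy show up almost directly in a reported PUE.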
Capital Discipline in the AI Infrastructure Cycle
Capital discipline is becoming equally important as AI infrastructure spending rises. Global data center investment totaled $58.2 billion in the latest annual tally, while Bain estimates annual capital expenditure could reach $477.1 billion by 2030, requiring $1.9 trillion in revenue to justify deployment. Against that backdrop, platforms focused on energy optimization, condition-based maintenance, and operator-supervised advisory controls offer a way to link technical performance with capital efficiency.
Schneider Electric’s performance highlights the commercial momentum behind AI-era cooling, but the wider implication is that data center operators must manage density, energy cost, resilience, and compliance as connected issues. CETA System reads these conditions as supporting sustained demand for vendor-agnostic thermal-management platforms that help operators optimize cooling, monitor critical assets, and reduce avoidable operational risk as AI data center cooling and workloads reshape infrastructure requirements.
About CETA System
CETA System Co., Limited is a Hong Kong-incorporated technology company founded in 2017. It delivers artificial-intelligence solutions for data center infrastructure, spanning HVAC and chiller-plant energy optimization, predictive maintenance for critical assets, and advisory-first integration with existing building-management and DCIM environments, serving colocation, enterprise, and hyperscale operators across Asia-Pacific and beyond.