AI & Critical Infrastructure: Gartner Warns of Potential Shutdowns by 2028

February 16, 2026 | Lisa Park | Tech
Original source: computerworld.com

The increasing reliance on artificial intelligence to manage critical infrastructure carries a significant, and surprisingly near-term, risk. A new report from Gartner predicts that misconfigured AI will lead to a shutdown of national critical infrastructure in a G20 country by 2028. This isn’t a scenario involving malicious actors deliberately targeting systems, but rather a failure stemming from the inherent complexities of AI operating within cyber-physical systems (CPS).

Gartner defines CPS as “engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world (including humans).” This broad definition encompasses a vast range of technologies, including operational technology (OT), industrial control systems (ICS), industrial automation and control systems (IACS), the Industrial Internet of Things (IIoT), robots, drones, and Industry 4.0 technologies. Essentially, it’s anything where software directly controls physical processes.

The core concern, according to Gartner, isn’t the widely discussed potential for AI “hallucinations” – where AI generates incorrect or nonsensical outputs – but the possibility that these systems will fail to recognize subtle changes that a human operator with experience would readily identify. In the context of critical infrastructure, even seemingly minor errors can quickly cascade into major disasters. This is particularly concerning as operators increasingly grant machine learning systems the authority to make real-time decisions, prioritizing efficiency gains over constant human oversight.

The shift towards autonomous control isn’t inherently flawed, but it introduces new vulnerabilities. A seemingly insignificant alteration in settings, a flawed software update, or even inaccurate data input can trigger unpredictable responses with potentially devastating consequences. Unlike traditional software bugs that might crash a server, errors in AI-driven control systems can directly impact the physical world, causing equipment failures, forcing shutdowns, or destabilizing entire supply chains.

The speed of AI adoption is exacerbating the risk. As more critical infrastructure systems are integrated with AI, the potential for misconfiguration and unforeseen consequences grows exponentially. The complexity of these systems, combined with the lack of comprehensive testing and validation procedures, creates a fertile ground for errors. The report highlights that the issue isn’t about *if* a failure will occur, but *when* and *where*.

The implications are far-reaching. Critical infrastructure encompasses essential services like energy grids, water treatment facilities, transportation networks, and communication systems. A disruption to any of these systems could have severe economic and social consequences. The Gartner report specifically points to a G20 nation being affected, suggesting the scale of potential disruption is significant.

The challenge lies in the nature of AI itself. Traditional software operates on deterministic rules: given the same input, it will always produce the same output. AI systems, particularly machine learning models, are probabilistic. They learn from data and make predictions based on patterns, but they are not guaranteed to be correct in every situation. This inherent uncertainty, coupled with the complexity of CPS, makes it difficult to predict and prevent potential failures.
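The distinction can be illustrated with a small sketch (all function names, thresholds, and values here are hypothetical, not from the Gartner report): a deterministic control rule always maps the same reading to the same action, while a toy stand-in for a learned model attaches a confidence score that degrades as inputs drift away from the range it was trained on.

```python
def deterministic_rule(pressure_kpa: float) -> str:
    """Classic control logic: same input, same output, every time."""
    return "SHUTDOWN" if pressure_kpa > 800.0 else "OK"

def probabilistic_model(pressure_kpa: float,
                        training_range=(100.0, 600.0)) -> tuple[str, float]:
    """Toy stand-in for an ML model: returns a decision plus a confidence
    that decays for readings outside the 'training' range."""
    lo, hi = training_range
    if lo <= pressure_kpa <= hi:
        confidence = 0.95
    else:
        # Confidence falls with distance from the training distribution.
        distance = min(abs(pressure_kpa - lo), abs(pressure_kpa - hi))
        confidence = max(0.1, 0.95 - distance / 1000.0)
    decision = "SHUTDOWN" if pressure_kpa > 800.0 else "OK"
    return decision, confidence

print(deterministic_rule(750.0))   # always "OK" for this input
print(probabilistic_model(750.0))  # "OK", but with reduced confidence
```

The point of the sketch: both paths return "OK" for a 750 kPa reading, but only the learned model can tell an operator how little it trusts that answer.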

Moreover, the data used to train these AI systems may not accurately reflect all possible real-world scenarios. A model trained on historical data may be unable to handle unexpected events or novel situations. This is particularly problematic in critical infrastructure, where conditions can change rapidly and unpredictably.

Addressing this risk requires a multi-faceted approach. Gartner suggests a focus on robust testing and validation procedures, as well as improved monitoring and anomaly detection capabilities. It’s crucial to develop systems that can identify and flag potentially problematic behavior before it leads to a catastrophic failure. This includes investing in tools and techniques for explainable AI (XAI), which can help operators understand *why* an AI system made a particular decision.
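As a rough illustration of the kind of anomaly monitoring Gartner recommends, the sketch below flags sensor readings that deviate sharply from recent history before a controller acts on them. The window size and z-score threshold are illustrative assumptions, not values from the report.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyMonitor:
    """Flags readings that deviate sharply from a rolling window of history."""

    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def check(self, reading: float) -> bool:
        """Return True if the reading looks anomalous vs. recent history."""
        anomalous = False
        if len(self.history) >= 5:  # need a minimum baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(reading)
        return anomalous

monitor = AnomalyMonitor()
normal = [monitor.check(50.0 + i * 0.1) for i in range(20)]  # slow drift: fine
spike = monitor.check(500.0)  # sudden jump: flagged for human review
print(any(normal), spike)
```

A real deployment would route flagged readings to an operator rather than letting the control loop act on them automatically.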

However, XAI is not a panacea. Even with explainable AI, it can be difficult to fully understand the complex interactions within a CPS. It’s also essential to maintain a degree of human oversight, particularly in critical applications. Operators should be empowered to override AI decisions when necessary, and they should have access to the information they need to make informed judgments.
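One common way to preserve that oversight is a human-in-the-loop dispatch pattern, sketched below with hypothetical names and thresholds: the AI proposes an action, routine high-confidence proposals are applied automatically, and everything else is escalated to an operator who makes the final call.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    action: str
    confidence: float
    impact: str  # "low" or "high"

def dispatch(proposal: Proposal, operator: Callable[[Proposal], str]) -> str:
    """Auto-apply only routine, high-confidence actions; escalate the rest."""
    if proposal.impact == "low" and proposal.confidence >= 0.9:
        return proposal.action      # safe to automate
    return operator(proposal)       # human makes the final call

# Example operator policy: hold anything the model is not near-certain about.
def cautious_operator(p: Proposal) -> str:
    return p.action if p.confidence >= 0.99 else "HOLD"

print(dispatch(Proposal("adjust_valve", 0.95, "low"), cautious_operator))    # automated
print(dispatch(Proposal("plant_shutdown", 0.95, "high"), cautious_operator)) # escalated
```

The design choice here is that impact, not just confidence, decides what gets automated: a high-impact action is always escalated, however sure the model is.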

The Gartner report serves as a stark warning. The benefits of AI in critical infrastructure are undeniable, but they must be weighed against the potential risks. Proactive measures are needed to mitigate these risks and ensure the reliability and resilience of these essential systems. The window for action is shrinking, with the report predicting a critical failure within the next two years. CIOs and infrastructure operators must prioritize the secure and responsible implementation of AI to avoid a potentially devastating outcome.

© 2026 News Directory 3. All rights reserved.
