Optimizing Real-Time Systems at the Hardware-Software Intersection
Artificial intelligence is reshaping how real-time systems are designed and optimized, moving beyond its traditional role in executing workloads to actively improving the infrastructure that supports them. A recent position paper published in Real-Time Systems on May 26, 2025, explores this shift, proposing that AI techniques could replace or augment classical methods like Mixed-Integer Linear Programming (MILP) in the development of real-time systems. The paper, titled “To MILP or not to MILP? On AI techniques for the design and optimization of real-time systems”, argues that AI—particularly supervised and reinforcement learning—can enhance both the design and runtime configuration of systems where speed, efficiency, and reliability are critical.
AI as a Tool for Real-Time Systems Optimization
The paper introduces the concept of “AI-for-RT,” a complementary approach to the more commonly discussed “RT-for-AI.” While much research has focused on ensuring real-time guarantees for AI workloads, this work flips the perspective, examining how AI can optimize the real-time systems themselves. The authors highlight scenarios where AI could outperform traditional techniques, particularly in complex environments where hardware and software interactions are dynamic and resource management must adapt in real time.
One of the key challenges in real-time systems is balancing mixed-criticality workloads, in which tasks with strict timing guarantees share resources with best-effort tasks. Traditional methods like MILP have been effective for static or predictable systems, but they struggle with recursive real-time analysis and dynamic resource allocation. The paper suggests that AI-driven approaches, such as reinforcement learning, could provide more flexible and adaptive solutions, particularly in edge computing and embedded systems where conditions change rapidly.
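As a point of contrast, the classical analyses that MILP-style design flows build on are deterministic and closed-form. A minimal sketch of the Liu and Layland rate-monotonic utilization test (standard real-time theory, not drawn from the paper; the task set is illustrative) shows the kind of static check involved:

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling.

    tasks: list of (wcet, period) pairs. Returns True if total
    utilization stays below the n * (2**(1/n) - 1) bound.
    The test is sufficient but not necessary: passing guarantees
    schedulability, failing is inconclusive.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Three periodic tasks: (worst-case execution time, period).
tasks = [(1, 4), (1, 5), (2, 10)]
print(rm_schedulable(tasks))  # utilization 0.65 vs bound ~0.78 -> True
```

Checks like this work when worst-case execution times and periods are fixed up front; the paper's argument is that learned approaches become attractive precisely when those parameters shift at runtime.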
Potential Applications and Future Directions
The authors outline several use cases where AI could improve real-time systems, including:

- Recursive real-time analysis, where systems must continuously evaluate and adjust their own performance.
- Complex hardware-software interactions, such as those in autonomous vehicles or industrial automation, where components must coordinate seamlessly.
- Dynamic resource management, particularly in edge computing environments where workloads fluctuate unpredictably.
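The dynamic-resource-management case above can be sketched as a toy online learner that picks among candidate resource configurations by observed reward (say, deadline-hit rate) rather than a static model. Everything here is an illustrative assumption, not from the paper: the epsilon-greedy strategy, the three hypothetical configurations, and their success probabilities.

```python
import random

def epsilon_greedy_allocator(reward_fn, n_configs, steps=2000, eps=0.1, seed=0):
    """Toy online learner choosing among n resource configurations.

    Tracks a running average reward per configuration and mostly
    exploits the best-looking one, exploring at rate eps. Returns
    the index of the configuration with the highest estimate.
    """
    rng = random.Random(seed)
    counts = [0] * n_configs
    values = [0.0] * n_configs
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_configs)          # explore
        else:
            arm = max(range(n_configs), key=lambda a: values[a])  # exploit
        r = reward_fn(arm, rng)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
    return max(range(n_configs), key=lambda a: values[a])

# Hypothetical deadline-hit probabilities for three configurations.
def reward(arm, rng):
    return 1.0 if rng.random() < [0.6, 0.9, 0.7][arm] else 0.0

best = epsilon_greedy_allocator(reward, 3)
print(best)
```

A production scheme would need far more care (state, safety constraints, bounded worst-case behavior), which is exactly the validation gap the authors flag.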
The paper does not present finished solutions but instead aims to establish a “Roadmap Towards Learning-Enabled and Learning-Assisted Real-Time Systems.” This roadmap is intended to guide future research, encouraging collaboration between AI and real-time systems communities to develop practical implementations. The authors emphasize that while AI holds promise, its integration into real-time systems will require rigorous validation to ensure safety and reliability.
Broader Implications for Edge Computing and Embedded Systems
The shift toward AI-driven optimization aligns with broader trends in edge computing and embedded systems. As industries adopt software-defined solutions, the demand for real-time adaptability is growing. For example, Intel’s Time-Coordinated Computing (TCC) framework, designed for edge and 5G applications, highlights the need for systems that can handle both real-time and best-effort workloads on the same hardware. AI could play a role in dynamically allocating resources to meet these demands, particularly in environments where manual configuration is impractical.
Similarly, the rise of digital twins—virtual replicas of physical systems—relies on real-time data to simulate and optimize performance. AI could enhance these models by improving their ability to predict and respond to changes, reducing latency and improving efficiency in manufacturing, healthcare, and IoT applications.
Challenges and Open Questions
Despite its potential, the integration of AI into real-time systems is not without challenges. The paper acknowledges that AI techniques must be carefully validated to ensure they meet the strict timing and safety requirements of real-time applications. Unlike traditional methods, which rely on deterministic models, AI-driven systems introduce probabilistic elements that could complicate certification and compliance, particularly in safety-critical industries like aerospace or medical devices.
Another concern is the computational overhead of AI models. Real-time systems often operate under tight resource constraints, and deploying large AI models could introduce latency or require specialized hardware. The authors suggest that lightweight, edge-optimized AI techniques—such as federated learning or tiny machine learning (TinyML)—could mitigate these issues, but further research is needed to refine these approaches.
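To make the overhead concern concrete, a back-of-the-envelope estimate of a dense network's parameter memory (the layer widths below are hypothetical, not from the paper) shows why int8-quantized, TinyML-scale models can fit on a microcontroller where float32 models may not:

```python
def mlp_footprint_bytes(layers, bytes_per_weight=4):
    """Rough parameter-memory estimate for a dense MLP.

    layers: layer widths, e.g. [16, 32, 8]. Counts weights plus
    biases for each pair of adjacent layers.
    bytes_per_weight: 4 for float32, 1 for int8 quantization.
    """
    params = sum(i * o + o for i, o in zip(layers, layers[1:]))
    return params * bytes_per_weight

# A small hypothetical sensor net: 16 inputs, 32 hidden, 8 outputs.
widths = [16, 32, 8]
print(mlp_footprint_bytes(widths, 4))  # float32: 3232 bytes
print(mlp_footprint_bytes(widths, 1))  # int8:    808 bytes
```

Even this tiny model costs kilobytes in float32; activation buffers and inference latency add further pressure, which is why the authors point toward edge-optimized techniques.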
Industry and Academic Collaboration
The paper is part of a special issue in Real-Time Systems focused on learning-enabled real-time systems, signaling growing interest in this intersection. The authors call for greater collaboration between AI researchers and real-time systems engineers to address the technical and theoretical challenges. This includes developing new benchmarks, simulation tools, and validation frameworks to ensure AI-driven systems meet the rigorous standards of real-time computing.

For companies building large-scale compute infrastructure, the shift toward AI-for-RT could open new avenues for innovation. As AI models become more integrated into the underlying systems that power applications—from autonomous vehicles to smart grids—the ability to optimize these systems in real time will be a competitive advantage. The paper’s roadmap suggests that the next generation of real-time systems may rely as much on AI for their design and operation as they do on traditional engineering methods.
Conclusion
The position paper from Real-Time Systems presents a compelling vision for the future of real-time computing, where AI is not just a workload but a tool for optimization. While the work is still in its early stages, it highlights the potential for AI to address longstanding challenges in dynamic resource management, hardware-software interaction, and recursive analysis. As industries continue to push the boundaries of real-time systems, the integration of AI could unlock new levels of efficiency and adaptability—provided the technical and safety hurdles are overcome.
For now, the paper serves as a call to action for researchers and engineers to explore this emerging field, laying the groundwork for a future where AI and real-time systems evolve in tandem.
