Intel Benchmark Cheating: Geekbench Takes Action
- Intel is facing scrutiny over the use of its Binary Optimization Tool, known as iBOT, which has been linked to inconsistent performance results in industry-standard benchmarks.
- The controversy centers on the new Core Ultra 200S Plus chips, specifically the Core Ultra 7 270K Plus and 250K Plus CPUs.
- John Poole of Geekbench issued a warning on March 25, 2026, stating that the performance results from these chips cannot be trusted at this time.
Intel is facing scrutiny over the use of its Binary Optimization Tool, known as iBOT, which has been linked to inconsistent performance results in industry-standard benchmarks. The tool, designed to boost Instructions Per Clock (IPC) by manipulating instructions at the hardware level, has triggered warnings from Primate Labs, the developers of Geekbench.
The controversy centers on the new Core Ultra 200S Plus chips, specifically the Core Ultra 7 270K Plus and 250K Plus CPUs. These processors utilize iBOT to squeeze additional performance out of the Arrow Lake architecture, but the method of optimization has raised questions regarding the fairness and accuracy of reported benchmark scores.
Geekbench Performance Discrepancies
John Poole of Geekbench issued a warning on March 25, 2026, stating that the performance results from these chips cannot be trusted at this time. The primary concern is that there is currently no way for users or benchmark developers to identify whether the Binary Optimization Tool is enabled or disabled during a specific benchmark run.
According to Poole, the impact of iBOT on Geekbench 6 is significant. Individual workload scores on the affected chips can increase by as much as 40% when the tool is enabled, which translates to an overall Geekbench 6 score improvement of up to 8%.
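The gap between a 40% workload gain and an 8% overall gain is consistent with how composite benchmark scores are typically aggregated: Geekbench combines workload scores using a geometric mean, so boosting only a subset of workloads moves the overall score far less than the headline per-workload figure. The sketch below illustrates the arithmetic; the workload count and the number of boosted workloads are illustrative assumptions, not figures from the article.

```python
import math

def geomean(xs):
    # Geometric mean: the aggregation Geekbench uses for workload scores.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical baseline of 13 equally weighted workload scores
# (the count is an assumption for illustration).
baseline = [100.0] * 13

# Suppose a binary optimizer boosts 3 of the 13 workloads by 40%.
boosted = list(baseline)
for i in range(3):
    boosted[i] *= 1.40

ratio = geomean(boosted) / geomean(baseline)
print(f"overall gain: {(ratio - 1) * 100:.1f}%")  # ≈ 8.1%
```

With a geometric mean, boosting a fraction k/N of workloads by a factor f lifts the composite by f^(k/N), so a handful of 40% workload gains can plausibly account for the reported ~8% overall improvement.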
> "Since the tool modifies the benchmark, and it is unclear to both Primate Labs and the general public how these changes occur [...]"
>
> — John Poole, Geekbench
Technical analysis suggests that iBOT achieves these gains through vectorization. By replacing or modifying code to use vectorized instructions, the tool can artificially inflate scores in both single-threaded and multi-threaded tests.
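To make the vectorization claim concrete, the sketch below contrasts a scalar loop (one element processed per iteration, as in unoptimized machine code) with a vectorized equivalent that processes many elements per operation. This is a conceptual analogy in Python/NumPy, not a reconstruction of what iBOT actually does; both functions compute the same result, but the vectorized path maps onto SIMD instructions.

```python
import numpy as np

def scalar_sum_of_squares(xs):
    # Scalar form: one multiply-add per loop iteration, the way
    # un-vectorized code walks through data one element at a time.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def vectorized_sum_of_squares(xs):
    # Vectorized form: np.dot dispatches to SIMD-capable library code
    # that handles several elements per instruction -- the same class
    # of rewrite a binary optimizer performs when it substitutes
    # vector instructions for a scalar loop.
    return float(np.dot(xs, xs))

data = np.random.rand(100_000)
a = scalar_sum_of_squares(data)
b = vectorized_sum_of_squares(data)
print(f"scalar={a:.4f} vectorized={b:.4f}")
```

The results agree to floating-point precision, but the vectorized version is typically many times faster. The controversy is not that such rewrites work, but that applying them selectively to benchmark binaries, without disclosure, makes the resulting scores unrepresentative of ordinary code.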
Lack of Transparency and Documentation
A major point of contention for Primate Labs is the lack of public documentation regarding the techniques iBOT employs to optimize code. Without this information, it is difficult for independent testers to determine how these optimizations translate to real-world applications outside of synthetic benchmarks.

This lack of transparency makes it impossible for the user base to understand the actual performance delta between a chip running with iBOT and one running without it. Because the tool modifies the benchmark itself, the resulting scores may not reflect the general processing capabilities of the hardware.
Industry Implications
The situation has led to discussions within the tech community regarding whether such optimizations constitute a legitimate performance boost or a form of benchmark manipulation. Some analysts argue that hand-optimizing or replacing benchmark code to achieve higher scores undermines the purpose of independent testing.
The use of iBOT highlights a growing tension between hardware manufacturers seeking to maximize architectural efficiency and the need for standardized, transparent benchmarking that allows consumers to make informed purchasing decisions based on consistent data.
As of early April 2026, the industry remains focused on how benchmark software can adapt to identify and account for hardware-level binary optimizations to ensure a level playing field between competing processor architectures.
