AI Safety Audits: Former OpenAI Chief Launches New Institute

January 15, 2026 · Victoria Sterling, Business Editor

Miles Brundage, a well-known former policy researcher at OpenAI, is launching an institute dedicated to a simple idea: AI companies shouldn’t be allowed to grade their own homework.

Today Brundage formally announced the AI Verification and Evaluation Research Institute (AVERI), a new nonprofit aimed at pushing the idea that frontier AI models should be subject to external auditing. AVERI is also working to establish AI auditing standards.

The launch coincides with the publication of a research paper, coauthored by Brundage and more than 30 AI safety researchers and governance experts, that lays out a detailed framework for how independent audits of the companies building the world’s most powerful AI systems could work.

Brundage spent seven years at OpenAI as a policy researcher and an advisor on how the company should prepare for the advent of human-like artificial general intelligence. He left the company in October 2024.

“One of the things I learned while working at OpenAI is that companies are figuring out the norms of this kind of thing on their own,” Brundage told Fortune. “There’s no one forcing them to work with third-party experts to make sure that things are safe and secure. They kind of write their own rules.”

That creates risks. Although the leading AI labs conduct safety and security testing and publish technical reports on the results of many of these evaluations, some of which they conduct with the help of external “red team” organizations, right now consumers, businesses, and governments simply have to trust what the AI labs say about these tests. No one is forcing them to conduct these evaluations or report them according to any particular set of standards.

Brundage said that in other industries, auditing is used to provide the public (including consumers, business partners, and to some degree regulators) assurance that products are safe and have been tested in a rigorous way.

“If you go out and buy a vacuum cleaner, you know, there will be components in it, like batteries, that have been tested by independent laboratories according to rigorous safety standards to make sure it isn’t going to catch on fire,” he said.

New institute will push for policies and standards

Brundage said that AVERI was interested in policies that would encourage the AI labs to move to a system of rigorous external auditing, as well as researching what the standards should be for those audits, but was not interested in conducting audits itself.

“We’re a think tank. We’re trying to understand and shape this transition,” he said. “We’re not trying to get all the Fortune 500 companies as customers.”

He said existing public accounting, auditing, assurance, and testing firms could move into the business of auditing AI safety, or that startups would be established to take on this role.

AVERI said it has raised $7.5 million toward a goal of $13 million to cover 14 staff and two years of operations. Its funders so far include Halcyon Futures, Fathom, Coefficient Giving, former Y Combinator president Geoff Ralston, Craig Falls, Good Forever Foundation, Sympatico Ventures, and the AI Underwriting Company.

The organization says it has also received donations from current and former non-executive employees of frontier AI companies. “These are people who know where the bodies are buried” and “would love to see more accountability,” Brundage said.

Insurance companies or investors could force AI safety audits

Brundage said that there could be several mechanisms that would encourage AI firms to begin to hire independent auditors. One is that big businesses that are buying AI models may demand audits to have some assurance that the AI models they are buying will function as promised and don’t pose hidden risks.

Insurance companies may also push for the establishment of AI auditing. For instance, insurers offering business continuity insurance to large companies that use AI models for key business processes could require auditing as a condition of underwriting.

Summary of the Article: The Growing Need for Independent AI Auditing

This article discusses the increasing calls for independent auditing of the safety and security of AI models, especially those being developed by fast-growing startups like OpenAI and Anthropic. Here’s a breakdown of the key points:

* Why Auditing is Needed: As AI labs potentially go public, they face increased legal and financial risk. A failure to proactively assess and mitigate AI risks through independent audits could lead to shareholder lawsuits or SEC prosecution if the AI causes harm and impacts share prices.
* Regulatory Landscape:
  * US: Currently lacks federal AI regulation. The Trump administration is discouraging state-level AI regulation without proposing a national standard.
  * EU: The EU AI Act, while not explicitly requiring audits, leans toward them. Its “Code of Practice” mandates external evaluation access for high-risk models. The act also requires “conformity assessments” for AI used in high-risk applications (loans, benefits, healthcare).
* Proposed Framework: AI Assurance Levels: AVERI proposes a tiered system of “AI Assurance Levels” (1-4), with Level 4 offering the highest level of security, suitable for international agreements.
* Challenges to Implementation:
  * Lack of Qualified Auditors: Finding individuals with the necessary combination of technical AI expertise and governance/audit experience is challenging. Those with the skills are often recruited by the AI companies themselves.
  * Building Expertise: The proposed solution is to create “dream teams” combining experts from audit firms, cybersecurity, AI safety nonprofits, and academia.
* Proactive vs. Reactive Approach: The author hopes to establish auditing infrastructure before a major AI-related crisis occurs, learning from the history of regulation in industries like nuclear power and food safety.

In essence, the article argues that independent AI auditing is becoming increasingly crucial for responsible AI development, risk management, and future legal/financial stability, and that proactive steps are needed to build the necessary infrastructure and expertise.
