Israel’s Safe Superintelligence Raises $2B

March 10, 2025 | Catherine Williams, Chief Editor | Tech

Safe Superintelligence Eyes $2 Billion Funding, Valuation Soars to $30 Billion


Safe Superintelligence (SSI), the artificial intelligence startup co-founded by Ilya Sutskever, is reportedly in talks to raise $2 billion in funding. This investment could value the company at a staggering $30 billion, solidifying its position among the most valuable AI startups globally.

Sutskever’s Vision for Safe Superintelligence

Ilya Sutskever, 38, formerly the chief scientist at OpenAI, departed the San Francisco-based AI research institution in May 2024 following disagreements with CEO Sam Altman. Sutskever was a key figure in developing the technology behind ChatGPT.

His new venture, Safe Superintelligence (SSI), operates with a high degree of secrecy from its offices in Silicon Valley and Tel Aviv. Unlike competitors such as Google, OpenAI, and Anthropic, SSI has a unique approach: the company has stated it will not release any products until it achieves superintelligence, defined as AI that surpasses human experts in nearly all fields.

While other companies focus on releasing consumer chatbots and business applications to generate revenue, Sutskever is pursuing a different path. He has indicated to associates that he is not using the same methods as OpenAI to develop advanced AI. Rather, he believes he has identified a “different mountain to climb” that shows early promise.

James Cham, a partner at Bloomberg Beta, a San Francisco-based venture firm, notes the industry’s curiosity:

Everyone is curious about exactly what he’s pushing and exactly what the insight is. It’s super-high risk, and if it works out, maybe you have the potential to be part of someone who is changing the world.
James Cham, Bloomberg Beta

Secrecy and Investment

SSI operates with exceptional secrecy, contrasting with most AI startups, which aggressively seek publicity. The company’s website is minimal, containing only a brief mission statement. SSI has approximately 20 employees, considerably fewer than OpenAI or Anthropic, and staff members are discouraged from mentioning the company on their LinkedIn profiles.

Job candidates attending in-person interviews must place their phones in a Faraday cage before entering SSI’s offices, preventing cellular and Wi-Fi signals from entering or leaving the room.

Despite the secrecy, top Silicon Valley investors, including Sequoia Capital and Andreessen Horowitz, have invested heavily in SSI. The latest financing round, led by Greenoaks Capital, marks a significant increase from SSI’s $5 billion valuation in September 2024.

Ilya Sutskever: From AGI Visionary to SSI Founder

Born in the former Soviet Union and raised in Israel, Sutskever gained recognition as a graduate student in Canada for his work on deep-learning AI algorithms. He later joined OpenAI in 2015, drawn by the vision of developing artificial general intelligence (AGI) for public benefit.

At OpenAI, Sutskever was known for contemplating the implications of AGI and how to prevent catastrophic outcomes. He stated at OpenAI’s 2022 holiday party, “Our goal is to make a mankind-loving AGI.”

Following the success of ChatGPT, OpenAI shifted its focus towards becoming a product and revenue-driven company. This transition reportedly reduced resources for Sutskever’s team to study advanced AI safety risks.

His relationship with Altman deteriorated, leading to Sutskever informing Altman in November 2023 that the board was firing him. This decision backfired, resulting in hundreds of employees threatening to quit. Altman was reinstated within a week, and while Sutskever remained employed, he resigned in May 2024.

Sutskever later founded SSI with former OpenAI researcher Daniel Levy and investor Daniel Gross. By focusing exclusively on creating safe superintelligence, the company aims to avoid the conflicts between product development and research that characterized OpenAI.

After securing initial seed funding, SSI raised $1 billion in September 2024. In December, at the NeurIPS AI conference in Vancouver, Sutskever discussed his vision for superintelligence, suggesting that such systems could become unpredictable, self-aware, and possibly desire rights.

He added:

It’s not a bad end result if you have AIs, and all they want is to coexist with us.
Ilya Sutskever, NeurIPS AI Conference, December 2024

Safe Superintelligence (SSI): Your Questions Answered

Safe Superintelligence (SSI) is making waves in the AI world. Co-founded by Ilya Sutskever, a key figure behind ChatGPT, SSI aims to achieve superintelligence safely. This Q&A provides a comprehensive overview of SSI, its mission, and its potential impact on the future of AI.

What is Safe Superintelligence (SSI)?

Safe Superintelligence (SSI) is an artificial intelligence startup founded by Ilya Sutskever, Daniel Levy, and Daniel Gross. Its primary goal is to develop superintelligence—AI that surpasses human experts in nearly every field—while ensuring its safety and alignment with human values. Unlike many AI companies, SSI does not plan to release products until it achieves this superintelligence milestone.

Why did Ilya Sutskever leave OpenAI to start SSI?

Ilya Sutskever left OpenAI in May 2024 after disagreements with CEO Sam Altman regarding the company’s direction. Sutskever reportedly felt that OpenAI’s focus had shifted from AI safety research to product development and revenue generation following the success of ChatGPT, leading to decreased resources for advanced AI safety studies. This divergence in vision prompted him to create SSI, an organization solely dedicated to safe superintelligence.

How is SSI different from other AI companies like OpenAI and Anthropic?

SSI differentiates itself from other AI companies in several key ways:

  • Focus: SSI’s sole focus is achieving safe superintelligence, whereas companies like OpenAI and Anthropic balance research with product development and revenue generation.
  • Product release strategy: SSI aims to refrain from releasing any products until it has achieved superintelligence, to ensure safe deployment. Most AI companies release consumer chatbots and business applications.
  • Methodology: Sutskever has indicated that SSI is pursuing a different approach to achieving superintelligence, claiming to have identified a distinct and promising path.
  • Secrecy: SSI operates with a high degree of secrecy, while most AI startups actively seek publicity.

What does “superintelligence” mean?

Superintelligence refers to a hypothetical level of artificial intelligence that surpasses human intelligence in nearly every aspect, including general wisdom, problem-solving, creativity, and social skills. In essence, a superintelligent AI would be smarter than the best human minds in practically every field.

How is SSI addressing the “safe” aspect of superintelligence?

While the specifics of SSI’s approach to AI safety are closely guarded, the company’s core mission is to develop AI in a way that aligns with human values and prevents catastrophic outcomes. Ilya Sutskever has a long history of contemplating the implications of AGI and how to ensure its safe development.

How much funding has SSI raised, and what is its valuation?

As of March 2025, SSI is reportedly in talks to raise $2 billion in funding, which could value the company at $30 billion. Its last valuation, in September 2024, was $5 billion after raising $1 billion in seed funding.

Who are the key investors in SSI?

Top Silicon Valley investors have invested heavily in SSI, including:

  • Sequoia Capital
  • Andreessen Horowitz
  • Greenoaks Capital (led the latest funding round)

What is SSI’s approach to secrecy?

SSI maintains a far higher level of secrecy than most startups:

  • Minimal website: Its website contains only a brief mission statement.
  • Employee confidentiality: Staff members are discouraged from mentioning SSI on their LinkedIn profiles.
  • Faraday cages: Job candidates attending in-person interviews must place their phones in a Faraday cage to prevent signal leakage.

Who are the co-founders of SSI besides Ilya Sutskever?

Ilya Sutskever co-founded SSI with:

  • Daniel Levy (former OpenAI researcher)
  • Daniel Gross (investor)

Where are SSI’s offices located?

SSI has offices in:

  • Silicon Valley
  • Tel Aviv

What are the key takeaways about SSI?

  • Focuses exclusively on safe superintelligence.
  • Currently refrains from releasing products.
  • Operates with high secrecy.
  • Founded by renowned AI researcher Ilya Sutskever.
  • Backed by prominent Silicon Valley investors.

Safe Superintelligence: Key Information at a Glance

| Feature | Description |
| --- | --- |
| Mission | Develop safe superintelligence |
| Founders | Ilya Sutskever, Daniel Levy, Daniel Gross |
| Funding (reported March 2025) | Seeking $2 billion |
| Valuation (potential, March 2025) | $30 billion |
| Key difference | Sole focus on safety; no product releases until superintelligence is achieved |
| Secrecy level | High; limited public information, employee confidentiality, Faraday cages during interviews |
| Offices | Silicon Valley and Tel Aviv |
