Artificial Intelligence & Machine Learning · Industry

Meta Attempts to Acquire Safe Superintelligence in $32 Billion AI Talent Bid

By Ze Research Writer · 9 min read
Meta has approached Safe Superintelligence (SSI), the artificial intelligence startup founded by former OpenAI chief scientist Ilya Sutskever, with an acquisition offer that would value the company at approximately $32 billion, according to reports emerging on June 19, 2025. The approach represents one of the largest attempted acquisitions in the AI sector and signals the intensifying competition among technology giants for top AI research talent.

What Happened

The sequence of events leading to the acquisition attempt began with Safe Superintelligence's founding in June 2024. Sutskever, who served as OpenAI's chief scientist and co-founder, left the organization following a period of internal turmoil that included the brief removal and reinstatement of CEO Sam Altman in November 2023.

SSI raised $1 billion in its initial funding round in September 2024, with investors including Andreessen Horowitz, Sequoia Capital, and DST Global. The company established offices in Palo Alto and Tel Aviv, maintaining a deliberately small team focused on fundamental research rather than product development.

By early 2025, SSI's valuation had grown substantially. Reports in June 2025 indicated the company was valued at approximately $32 billion, representing a more than sixfold increase from its September 2024 valuation. The growth reflected investor confidence in Sutskever's research direction and the broader market enthusiasm for AI companies.

Meta's approach to SSI occurred against the backdrop of the company's aggressive AI investment strategy. Meta has committed billions of dollars to AI infrastructure and research, competing directly with OpenAI, Google DeepMind, and Anthropic for dominance in large language models and generative AI applications.

According to CTech, Sutskever rejected Meta's acquisition offer. The report indicated that, following the rejection, Meta shifted its focus to recruiting Daniel Gross, SSI's CEO and co-founder. Gross previously led machine learning initiatives at Apple and later co-founded Pioneer, a startup accelerator.

Key Claims and Evidence

The $32 billion valuation figure appeared in multiple reports, though the exact terms of Meta's offer have not been publicly disclosed. CTech reported that Meta made a direct acquisition approach to SSI, which Sutskever declined.

TechCrunch confirmed the acquisition attempt and Meta's subsequent interest in hiring Gross. The publication noted that the approach represented part of a broader pattern of Meta seeking to acquire AI talent and companies during this period.

PYMNTS reported that Meta approached multiple AI startups for potential acquisition, indicating that the SSI approach was not an isolated incident but part of a systematic strategy. The publication did not identify the other startups Meta approached.

Safe Superintelligence has not issued a public statement regarding the acquisition attempt as of the time of reporting. Meta has also not commented publicly on the matter.

The $32 billion valuation, if accurate, would make SSI one of the most valuable AI startups globally, despite having no commercial products and a relatively small team. The valuation reflects the market's assessment of Sutskever's research capabilities and the potential value of SSI's work on artificial general intelligence.

Pros and Opportunities

For Meta, acquiring SSI would have provided immediate access to one of the most respected AI researchers in the field. Sutskever's experience at OpenAI, where he contributed to the development of GPT models and other foundational AI systems, represents institutional knowledge that cannot be easily replicated.

The acquisition would have also brought SSI's research direction under Meta's umbrella. SSI's focus on AI safety aligns with growing regulatory and public concern about the risks of advanced AI systems, potentially providing Meta with credibility in safety-focused AI development.

From a competitive standpoint, acquiring SSI would have prevented rivals from accessing Sutskever's expertise. The AI talent market remains extremely constrained, with a limited number of researchers possessing experience in training frontier AI systems.

For investors in SSI, a $32 billion acquisition would have represented substantial returns on the $1 billion invested in September 2024. The premium reflects the scarcity value of top AI research talent and the strategic importance of AI capabilities to major technology companies.

Cons, Risks, and Limitations

The acquisition attempt raises questions about the independence of AI safety research. SSI was founded explicitly to pursue safe artificial general intelligence outside the commercial pressures of major technology companies. An acquisition by Meta would have fundamentally altered that mission.

Critics of large-scale AI acquisitions argue that consolidation reduces diversity in AI research approaches. Independent research organizations can pursue directions that may not align with the commercial interests of major technology companies, potentially leading to important safety insights that might otherwise be overlooked.

The $32 billion valuation also raises concerns about AI market dynamics. SSI has no commercial products and generates no revenue, making the valuation entirely dependent on future potential. Such valuations can create distortions in the AI talent market and make it difficult for smaller organizations to compete for researchers.

For Meta, the failed acquisition represents a setback in its AI talent acquisition strategy. The company's subsequent approach to Gross suggests a willingness to pursue individual talent when company-level acquisitions fail, but this approach may face similar resistance from researchers committed to independent work.

How the Technology Works

Safe Superintelligence's research focus centers on developing artificial general intelligence (AGI) with built-in safety properties. Unlike narrow AI systems designed for specific tasks, AGI would possess general reasoning capabilities applicable across domains.

The company's approach, as described in public statements, emphasizes safety as a core design principle rather than an afterthought. Sutskever has argued that safety and capability can be developed together, rejecting the notion that safety research necessarily slows capability development.

SSI's technical approach has not been publicly detailed, consistent with the company's research-focused rather than product-focused orientation. The company has indicated it will not release products until it achieves its safety and capability goals, distinguishing it from competitors that release incremental products.

Technical context: AGI development involves solving fundamental challenges in machine learning, including generalization across domains, reasoning under uncertainty, and alignment with human values. Current large language models demonstrate impressive capabilities in specific domains but lack the general reasoning abilities associated with AGI. SSI's research presumably addresses these fundamental challenges, though specific technical details remain proprietary.

Broader Industry Implications

The acquisition attempt reflects the strategic importance of AI talent in the current technology landscape. Major technology companies are competing intensively for a limited pool of researchers with experience in frontier AI development, driving valuations and compensation to unprecedented levels.

Meta's approach also signals the company's recognition that internal AI development may not be sufficient to maintain competitive position. Despite substantial investments in AI research, Meta has lagged behind OpenAI and Google in deploying frontier AI systems, prompting the company to seek external capabilities.

The AI safety focus of SSI adds another dimension to the acquisition dynamics. As regulatory scrutiny of AI increases globally, companies with credible safety research programs may gain advantages in navigating regulatory requirements. Acquiring SSI would have provided Meta with both technical capabilities and regulatory credibility.

The failed acquisition may influence how other AI startups approach potential acquirers. SSI's rejection demonstrates that not all AI researchers prioritize financial returns over research independence, potentially encouraging other researchers to maintain independence despite lucrative offers.

What Remains Unclear

Several aspects of the acquisition attempt remain unconfirmed as of June 19, 2025. The exact terms of Meta's offer, including the proposed deal structure and any conditions attached, have not been publicly disclosed.

The status of Meta's discussions with Daniel Gross remains unclear. Reports indicate Meta has approached Gross about joining the company, but whether those discussions are ongoing, concluded, or have resulted in any agreement has not been confirmed.

Meta's broader AI acquisition strategy, including which other startups the company has approached, remains largely undisclosed. PYMNTS reported that Meta approached multiple AI startups, but the identities of those companies and the outcomes of those approaches have not been revealed.

SSI's response to the acquisition attempt beyond Sutskever's reported rejection has not been publicly detailed. Whether the company engaged in negotiations before rejecting the offer, or declined immediately, is not known.

What to Watch Next

Observers should monitor Meta's AI hiring announcements for indications of whether the company successfully recruited Gross or other SSI personnel. Any departures from SSI would signal the effectiveness of Meta's alternative talent acquisition strategy.

SSI's next funding round, if any, will provide updated valuation data and indicate investor confidence following the acquisition attempt. A higher valuation would suggest the market views SSI's independence as valuable; a lower valuation might indicate concerns about the company's ability to compete without major technology company backing.

Regulatory developments regarding AI acquisitions merit attention. Antitrust authorities in the United States and European Union have shown increasing interest in AI market concentration, and large acquisitions in the sector may face enhanced scrutiny.

Meta's AI product announcements will indicate whether the company's internal development efforts are sufficient to maintain competitive position, or whether the failed SSI acquisition represents a significant strategic setback.

Sources

  1. CTech - "Ilya Sutskever rejected Meta's bid for his $32B startup, so Zuckerberg is hiring his CEO instead" - June 20, 2025
  2. TechCrunch - "After trying to buy Ilya Sutskever's $32B AI startup, Meta looks to hire its CEO" - June 20, 2025
  3. PYMNTS - "Meta Approached Multiple AI Startups for Potential Acquisition" - June 24, 2025

Related Topics

artificial-intelligence · meta · safe-superintelligence · ilya-sutskever · acquisitions