
OpenAI co-founder Ilya Sutskever's new safety-focused AI startup SSI raises $1B

The startup aims for breakthroughs in safe artificial intelligence amidst rising industry concerns.

Ilya Sutskever's SSI Secures $1B to Revolutionize AI Safety

F. Schubert

A humanist first, passionate about human interactions, AI, space, and human life, and a DJ. Twenty years of experience in team management at BBAS3 and founder of Estudio1514.com. Based in São Paulo, Brazil.


Summary

The tech industry is buzzing with the latest development surrounding Safe Superintelligence (SSI), a company co-founded by Ilya Sutskever, who previously served as chief scientist at OpenAI. According to the New York Post, SSI recently secured a staggering $1 billion in funding to develop safe artificial intelligence systems intended to significantly exceed human capabilities. This substantial backing demonstrates investors' continued belief in foundational AI research despite broader hesitance to finance companies that may not yield immediate profits.

SSI's Ambitious Plans

With a current workforce of just 10 employees, SSI's immediate strategy is to use this massive infusion of cash to acquire essential computing power and attract premier talent in the field. The company will focus on building a small, tightly knit team of researchers and engineers split between Palo Alto, California, and Tel Aviv, Israel. Although the organization has refrained from publicly stating its valuation, insiders suggest it stands at around $5 billion.

Enduring Investor Confidence

This level of investment reflects a trend where certain venture capital firms are willing to place significant bets on exceptional talent within AI research. Notably, prominent firms such as Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel have committed funds to SSI. Additionally, NFDG—an investment partnership helmed by Nat Friedman, alongside SSI's CEO Daniel Gross—has also joined the funding round.

"It's crucial for us to surround ourselves with investors who understand, respect, and support our mission," Gross stated in an interview. He highlighted their objective to make rapid advancements toward achieving safe superintelligence while dedicating multiple years to research and development before entering the market.

The Importance of AI Safety

As AI continues to evolve, so do concerns regarding its potential risks. The discourse surrounding AI safety—which aims to mitigate potential harm caused by autonomous systems—has gained traction, fueled by fears that unregulated AI could operate against humanity's best interests or, in worst-case scenarios, lead to human extinction. In light of these discussions, a California bill proposing safety regulations has stirred controversy within the industry. While some companies like OpenAI and Google oppose these measures, others, including Anthropic and Elon Musk's xAI, advocate for them.

Ilya Sutskever (middle) co-founded Safe Superintelligence, which has raised $1 billion in cash to help develop safe AI systems. Credits: Reuters

Sutskever's Vision and Background

At only 37 years old, Ilya Sutskever remains one of the most influential figures in the AI domain. He co-founded SSI in June 2024 with Daniel Gross, who formerly led Apple's AI initiatives, and Daniel Levy, a former OpenAI researcher. Within the organization, Sutskever holds the position of chief scientist, Levy serves as principal scientist, and Gross oversees fundraising and computing power.

Sutskever expressed that his decision to launch SSI stemmed from identifying a unique challenge distinct from his previous work. His prior tenure at OpenAI included involvement in significant corporate decisions, notably the controversial ousting of CEO Sam Altman due to alleged communication breakdowns. Just days later, he retracted his stance, aligning himself with fellow employees advocating for Altman's return.

Sutskever was part of the board that voted to oust OpenAI CEO Sam Altman (above) last year. Credits: NYP and Reuters

However, this series of events ultimately diminished Sutskever's influence within OpenAI, leading to his removal from the board and subsequent departure from the company in May. After his exit, OpenAI dismantled his "Superalignment" team, dedicated to ensuring AI systems align with human values as they become more advanced.

Corporate Structure and Hiring Philosophy

In contrast to OpenAI's unconventional corporate design—established for AI safety but which contributed to Altman's dismissal—SSI adopts a standard for-profit structure. This choice reflects SSI's commitment to building a cohesive culture.

Gross emphasized the importance of hiring individuals with exemplary character, spending considerable time vetting candidates to ensure they possess extraordinary capabilities rather than simply focusing on credentials. "What excites us is discovering people genuinely interested in the work and not merely the hype surrounding it," he remarked.

Future Collaborations and Scaling Approaches

Looking ahead, SSI plans to partner with cloud service providers and semiconductor manufacturers to fulfill its computing power needs. While specific collaborations have yet to be finalized, many AI startups typically rely on partnerships with established firms like Microsoft and Nvidia for their infrastructure requirements.

Sutskever has long been a proponent of the scaling hypothesis—the idea that the performance of AI models improves with access to vast amounts of computational resources. His insights into scaling have spurred substantial investments in chips, data centers, and energy, facilitating breakthroughs in generative AI technologies, including ChatGPT.
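To make the hypothesis concrete: empirical scaling-law studies (for example, Kaplan et al., 2020, cited here as general background rather than as part of the Post's reporting) found that a language model's loss tends to fall roughly as a power law in the training compute budget:

$$L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}$$

where $C$ is training compute, and $C_c$ and $\alpha_C$ (on the order of 0.05 in that study) are empirically fitted constants. Riding this curve is what drives the enormous spending on chips, data centers, and energy described above.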

Notably, Sutskever intends to approach scaling differently than he did at OpenAI. He raised pertinent questions about what aspects of AI need scaling, emphasizing that mere endurance and repetitive efforts are not sufficient for remarkable innovation. Instead, he believes that taking a different approach can pave the way for unprecedented accomplishments.

FAQs

What is Safe Superintelligence (SSI)? Safe Superintelligence is a newly established AI startup focused on developing safe artificial intelligence systems, co-founded by Ilya Sutskever.

How much funding did SSI raise? SSI raised $1 billion to further its research and development efforts in AI safety.

Where is SSI located? The company has teams based in Palo Alto, California, and Tel Aviv, Israel.

Who are the key investors in SSI? Notable investors include Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel, and NFDG.

What is the mission of SSI? SSI aims to create safe superintelligence through significant research and development over the coming years.

As we look at the evolving landscape of AI safety, it's evident that Ilya Sutskever's SSI is making strides toward a future where artificial intelligence aligns more closely with human values. The financial backing and strategic vision set forth by the company reflect a fresh perspective on navigating the complexities of advanced AI technologies. Information from the New York Post.




Source

New York Post

Tags

AI safety, Ilya Sutskever, SSI, AI investment, artificial intelligence

