9 Best Ilya Sutskever Quotes on AI

Written by Aiifi Staff
Last updated on April 28, 2026

OpenAI co-founder Ilya Sutskever has made one argument since 2019: AI safety and AGI must be built together. His December 2024 claim that pre-training will unquestionably end came after he quit OpenAI to found Safe Superintelligence, a lab with safety as its only product.

Between 2022 and 2025, Sutskever moved from arguing inside OpenAI that safety and capability research must run in parallel to founding a lab built on that single conclusion. His May 2024 departure was the moment the argument became the product.

1. Why Does Ilya Sutskever Say Pre-Training Will Unquestionably End?

"Pre-training as we know it will unquestionably end."
Ilya Sutskever, The Verge, December 2024

Sutskever argued at NeurIPS 2024 that pre-training is constrained by a hard physical limit: there is only one internet, and training data from it has been largely consumed. The December 2024 keynote in Vancouver was his first major public appearance since founding SSI six months earlier. The timing was sharp: OpenAI and Google had just announced synthetic data and test-time compute as their replacement scaling methods.

2. Why Did Ilya Sutskever Claim Large Neural Networks May Be Slightly Conscious?

"it may be that today's large neural networks are slightly conscious."
Ilya Sutskever, Futurism, February 2022

Sutskever posted the claim on Twitter (now X) in February 2022, when ChatGPT was still nine months from launch. Yann LeCun replied it was not true "even for small values of 'slightly conscious,'" and cognitive scientist Stanislas Dehaene described current networks as implementing "mostly nonconscious operations." Sutskever never retracted it, and Anthropic later launched model welfare research grounded in the same question.

3. How Does Ilya Sutskever Expect Humans to Merge With AI?

"One possibility—something that may be crazy by today's standards but will not be so crazy by future standards—is that many people will choose to become part AI."
Ilya Sutskever, MIT Technology Review, October 2023

Sutskever frames merging with AI as a technology adoption curve: a fringe choice at first, normal within a generation. The October 2023 interview came weeks before the board crisis and seven months before his resignation. Dario Amodei at Anthropic chose a different path, building safety into commercial model development rather than pursuing a safety-only lab.

4. Why Does Ilya Sutskever Say SSI Will Pursue Safe Superintelligence in a Straight Shot?

"We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product."
Ilya Sutskever, Interesting Engineering, June 2024. Excerpt.

Sutskever's June 2024 launch statement defined SSI by what it would cut: no product diversification, no commercial timeline attached to the safety research. He posted it to X on June 19, 2024, five weeks after his exit from OpenAI. SSI launched with fewer than 20 researchers and no product release date, a headcount and scope chosen to give the research team a single mandate.

5. How Did Ilya Sutskever Announce His Departure From OpenAI?

"After almost a decade, I have made the decision to leave OpenAI. The company's trajectory has been nothing short of miraculous."
Ilya Sutskever, TechCrunch, May 2024. Excerpt.

Sutskever announced his OpenAI departure in a May 14, 2024 post on X, brief and positive, with no details about what came next. Jakub Pachocki, OpenAI's research director, was named chief scientist within hours. Sutskever introduced Safe Superintelligence Inc. one month later, alongside co-founders Daniel Gross and Daniel Levy. Mira Murati, OpenAI's CTO, followed him out in September 2024.

6. What Did Ilya Sutskever Say After Voting to Fire Sam Altman?

"I deeply regret my participation in the board's actions. I never intended to harm OpenAI."
Ilya Sutskever, Fortune, November 2023. Excerpt.

Sutskever's regret statement appeared on X on November 20, 2023, three days after the Friday board vote to remove Sam Altman. More than 700 OpenAI employees had already signed a letter threatening to resign unless Altman returned. Altman was reinstated four days after the firing. The reinstatement ended Sutskever's governance role at OpenAI; he resigned six months later.

7. What Did Ilya Sutskever Predict About AI's Effect on Every Person's Life?

"...whether you like it or not, your life is going to be affected by AI to a great extent."
Ilya Sutskever, biocomm.ai transcript, June 2025. University of Toronto honorary doctorate address. Excerpt.

The June 2025 commencement was an unusual venue: a general audience of graduating students rather than AI researchers. The University of Toronto ceremony came a year into SSI's operation. Sutskever completed his PhD at the same institution under Geoffrey Hinton, whose foundational deep learning work underlies the systems SSI now exists to make safe.

8. Why Did Ilya Sutskever Say He Needed a New Company for His Vision?

"Ultimately, I had a big new vision. And it felt more suitable for a new company."
Ilya Sutskever, Calcalist Tech, October 2025

Sutskever testified that his vision for safe superintelligence required a structure free from product deadlines, something OpenAI's capped-profit model could not offer. The deposition came as part of Elon Musk's lawsuit against OpenAI and Sam Altman, in proceedings over the company's for-profit conversion. SSI raised more than $3 billion by March 2025 and reached a $32 billion valuation, with no public product yet.

9. What Does Ilya Sutskever Mean When He Says AI Is Back in the Age of Research?

"We are squarely an age of research company."
Ilya Sutskever, Dwarkesh Patel Podcast, November 2025

The Dwarkesh Patel interview was Sutskever's longest public conversation since founding SSI, recorded eleven months after his NeurIPS keynote on pre-training's end. Same diagnosis, sharper conclusion: with internet-scale training data largely consumed, capital scaling is no longer the bottleneck for AI development; research is. SSI is the bet that follows from it.
