"Robots haven't taken to the streets yet, and most of us don't spend all day talking to AI. People still die from disease, we can't easily access space, and there's still much in the universe we don't understand.
Yet we've recently built systems that are in many ways smarter than humans, and that can significantly amplify the output of those who use them. The hardest part is over; the scientific breakthroughs that have brought us to systems like GPT-4 and o3 haven't come easily, but they will take us much further."
"In an important sense, ChatGPT is already more powerful than any human who has ever existed. Hundreds of millions of people rely on it every day to perform increasingly important tasks; a small increase in capability can have a huge positive impact; and a small deviation, if amplified by hundreds of millions of people, can also have a serious negative impact."
So wrote OpenAI CEO Sam Altman in an essay published on his personal website on June 11. According to Altman, humanity may have entered the early stages of the "singularity", the point at which artificial intelligence surpasses human intelligence. He argues that humanity has crossed a critical turning point, an "event horizon", marking the beginning of a new era of digital superintelligence. "We have already crossed this event horizon, and takeoff has begun," he wrote. "Humanity is close to creating digital superintelligence, and so far it is far less strange than imagined."

Altman's remarks come as leading AI developers warn that artificial general intelligence (AGI) could soon replace vast swathes of jobs and disrupt the global economy, advancing at a pace that even governments and institutions struggle to keep up with. The "singularity" is a theoretical moment when artificial intelligence surpasses human intelligence, triggering rapid and unpredictable technological leaps that could profoundly transform society. The "event horizon" marks the tipping point beyond which the trajectory of AI development can no longer be reversed.

Altman believes we are entering a "gentle singularity": a gradual, controlled transition to powerful digital intelligence rather than a sudden, drastic rupture. The takeoff has already begun, but it remains comprehensible and, in his view, positive. As evidence, he cites the rapid adoption of ChatGPT since its public release in 2022: "Hundreds of millions of people rely on it every day, for increasingly important tasks," he says. The data supports his assertion: by May 2025, ChatGPT reportedly had 800 million weekly active users. Despite ongoing legal disputes with writers and media outlets, and calls for a moratorium on AI development, OpenAI shows no signs of slowing down. Altman emphasized that even small advances in capability can bring huge benefits.
However, at the scale of serving hundreds of millions of users, even slight deviations can have serious consequences. To address these risks, he offered several suggestions: ensure that AI systems align with long-term human goals rather than simply satisfying short-term impulses; avoid centralized control by any one person, company, or country; and begin a global conversation now to clarify the values and boundaries that should guide the development of powerful AI.

Altman noted that the next five years will be a critical period for AI development. "2025 will see the emergence of 'agents' capable of performing truly cognitive tasks; the way we write computer code will never be the same," he said. "2026 may bring systems that can discover new insights, and 2027 may bring robots that can complete tasks in the real world." He predicted that by 2030, intelligence itself, the ability to generate and execute ideas, will be ubiquitous. "We already live with powerful digital intelligence, and after the initial shock, most people have become accustomed to its presence," he said, noting how quickly awe at AI gives way to expectation.

As the world awaits artificial general intelligence and the singularity, Altman believes the most astonishing breakthroughs will not feel revolutionary; they will become commonplace, the minimum standard for market entry. "That's how the singularity manifests itself: miracles become routine, then essential," he says. "The pace at which new miracles materialize will be astonishing. Today, it's hard to imagine what we will have discovered by 2035. Perhaps we will solve a high-energy physics problem one year and begin space colonization the next. Perhaps a breakthrough in materials science one year will lead to a true high-bandwidth brain-computer interface the next.
Many people will continue to live their lives much as before, but at least some may choose to plug in." Looking ahead, all of this may sound hard to fathom. But experiencing it firsthand will likely feel both awe-inspiring and manageable. Seen in relative terms, the singularity unfolds bit by bit, and the convergence proceeds slowly. We are climbing the long arc of exponential technological progress; from the front it looks like a vertical wall, from behind it looks flat, but in fact it is one smooth curve. (Recall how crazy it sounded in 2020 to claim we would be close to artificial general intelligence by 2025, and how familiar the reality of the past five years has actually felt.)
He concludes his article: "May our journey toward superintelligence be smooth, exponential, and uneventful."