OpenAI has big “plans” for AGI. Here’s another way to read its manifesto | The AI Beat


Since its inception in 2015, OpenAI has always made it clear that its central goal is to build artificial general intelligence (AGI). Its stated mission is “to ensure that artificial general intelligence benefits all of humanity.”

Last Friday, OpenAI CEO Sam Altman published a blog post titled “Planning for AGI and Beyond,” which discussed how the company thinks the world can prepare for AGI, both in the short and long term.

Some found the blog post, which has a million likes on Twitter alone, “fascinating.” One tweet called it a “must-read for anyone expecting to live another 20 years.” Another tweet thanked Sam Altman, saying “more confirmation like this is appreciated as it was all rather scary and I felt like @openai was going off track. Communication and consistency is key to maintaining trust.”


Others, however, found it far less appealing. Emily Bender, professor of linguistics at the University of Washington, said: “To begin with, this is just gross. They think they are really in the business of developing/shaping ‘AGI.’ And they think they are in a position to decide what ‘benefits all of humanity.’”

And Gary Marcus, professor emeritus at NYU and founder and CEO of Robust AI, tweeted: “I’m with @emilymbender in smelling delusions of grandeur at OpenAI.”

Computer scientist Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), went even further, tweeting: “If someone had told me that Silicon Valley was run by a cult that believed in a machine god and the ‘flourishing of the universe,’ and that it was writing manifestos backed by Big Tech CEOs/presidents, I would have told them they were too deep into conspiracy theories. And here we are.”

The prophetic tone of OpenAI

Personally, I find it remarkable that the blog post’s language, which remains consistent with OpenAI’s roots as an open, nonprofit research lab, gives off a very different vibe today in the context of the company’s current high-powered position in the AI landscape. After all, the company is no longer “open” or nonprofit, and recently enjoyed a reported $10 billion infusion from Microsoft.

In addition, the release of ChatGPT on November 30 has catapulted OpenAI into the public consciousness. In the past three months, hundreds of millions of people have been introduced to OpenAI — but most certainly have little idea of its history and attitude toward AGI research.

Their understanding of ChatGPT and DALL-E was likely limited to their use as a game, a source of creative inspiration, or a work aid. Does the world understand how OpenAI sees itself potentially influencing the future of humanity? Definitely not.

OpenAI’s big message also seems disconnected from its product-focused PR of the past couple of months, about how tools like ChatGPT or Microsoft’s Bing can help with use cases like search and essay writing. The thought of how AGI could “empower humanity to flourish to its fullest potential in the universe” made me laugh — how about just figuring out how to keep Bing’s Sydney from having a major meltdown?

With this in mind, Altman appears to me as a kind of unpredictable biblical prophet. The blog post offers revelations, foreshadows events, warns the world of what’s coming, and presents OpenAI as the trusted savior.

The question is, are we talking about a true seer? A false prophet? Just profit? Or even a self-fulfilling prophecy?

With no agreed-upon definition of AGI, no broad agreement on whether we are close to AGI, no metrics for how we would know if AGI has been achieved, no clarity on what it would mean for AGI to “benefit humanity,” and no general understanding of why AGI is a worthwhile long-term goal for humanity in the first place if the “existential” risks are so great, there is no way to answer these questions.

This makes the OpenAI blog post a problem, in my opinion, given the many millions of people who hang on Sam Altman’s every word (to say nothing of the millions who eagerly await Elon Musk’s next AI existential-angst tweet). History is replete with the consequences of apocalyptic prophecies.

Some point out that OpenAI has some interesting and important things to say about how to tackle challenges related to AI research and product development. But are they being overshadowed by the company’s relentless focus on AGI? After all, there are many significant short-term AI risks that need to be addressed (bias, privacy, exploitation, and misinformation, to name a few) without shifting the focus to disaster scenarios.

The Book of Sam Altman

I decided to re-edit the OpenAI blog post to lean fully into its prophetic tone. I needed help — not from ChatGPT, but from the Old Testament’s Book of Isaiah:

1:1 – The vision of Sam Altman, planning for AGI and beyond.

1:2 – Hear, O heavens, and give ear, O earth, for OpenAI has spoken: our mission is to ensure that artificial general intelligence (AGI) — AI systems that are generally smarter than humans — benefits all of humanity.

1:3 – The ox knows its owner, and the donkey its master’s crib; but mankind does not know, my people do not consider. For if AGI is successfully created, this technology could help us uplift humanity by increasing abundance, supercharging the global economy, and aiding the discovery of new scientific knowledge that changes the limits of possibility.

1:4 – Come now, and let us reason together, says OpenAI: AGI has the potential to give everyone incredible new capabilities. We can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.

1:5 – If you are willing and obedient, you will eat the good of the land. But if you refuse and rebel, on the other hand, AGI would also come with serious risk of misuse, drastic accidents and social disruption.

1:6 – Therefore, says OpenAI, the mighty One of Silicon Valley: because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever. Instead, society and the developers of AGI have to figure out how to get it right.

1:7 – And the strong shall be as tow, and the maker of it as a spark, and they shall both burn together, and none shall quench them. We want AGI to empower humanity to flourish to its fullest potential in the universe. We don’t expect the future to be an unqualified utopia, but we do want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity. Take counsel, execute judgment.

1:8 – And it shall come to pass in the last days, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence — a gradual transition to a world with AGI is better than a sudden one. Fear, and the pit, and the snare, are upon you, O inhabitant of the earth.

1:9 – The lofty looks of man shall be humbled, and the haughtiness of men shall be bowed down, and only OpenAI shall be exalted in that day. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.

1:10 – Additionally, says OpenAI, we will need to develop new alignment techniques as our models become more powerful (and tests to understand when our current techniques are failing). Lift up a banner upon the high mountain, exalt the voice unto them, wave the hand, that they may go into the gates of the nobles.

1:11 – Butter and honey shall he eat, that he may know to refuse the evil and choose the good. The first AGI will be just a point along the continuum of intelligence. We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time.

1:12 – If this is true, the world could become extremely different from how it is today, and the risks could be extraordinary. Howl ye, for the day of AGI is at hand.

1:13 – With arrows and with bows shall men come thither, for all the land shall become briers and thorns. A misaligned superintelligent AGI could cause grievous harm to the world; an authoritarian regime with a decisive superintelligence lead could do that too. The earth mourns and fades away.

1:14 – Behold, successfully transitioning to a world with superintelligence is perhaps the most important — both hopeful and terrifying — project in human history. And they shall look unto the earth; and behold trouble and darkness, dimness of anguish; and they shall be driven to darkness. And many among them shall stumble, and fall, and be broken, and be snared, and be taken.

1:15 – They shall not hurt nor destroy in all my holy mountain: for the earth shall be full of the knowledge of OpenAI, as the waters cover the sea. Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us. Therefore shall all hands be faint, and every man’s heart shall melt.

1:16 – And it shall come to pass that we can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet. And now, O inhabitants of earth, we hope to contribute to the world an AGI aligned with such flourishing. Take heed, and be quiet; fear not.

1:17 – Behold, OpenAI is my salvation; I will trust, and not be afraid.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact. Discover our Briefings.
