Kevin Kelly, author of Out of Control and founding editor-in-chief of Wired magazine, believes that for the next 5,000 days, and indeed the next 50 years, AI will be the world’s most important keyword, becoming an infrastructure as ubiquitous as the Internet.

And OpenAI, in his view the most successful AI company, is “a great case of a disruptor.”

Since ChatGPT’s hasty launch late last year and its unexpected explosion in popularity, the company and its founding team have been propelled to center stage by the technology frenzy sweeping the globe.

Where did OpenAI come from and where is it going? In a recent lengthy article in Wired magazine, renowned tech journalist Steven Levy offers an in-depth discussion of OpenAI’s growth history and the company’s vision.

Altman’s Choice

Before leading OpenAI, Sam Altman was CEO of Y Combinator, the world’s most famous tech incubator. For him, the point of cashing in on all those unicorns wasn’t to fill his partners’ wallets, but to fund change at the species level.

He set up a research arm in the hope of funding ambitious projects to solve the world’s biggest problems. But to him, AI was the innovation that would disrupt everything: a superintelligence that could solve human problems better than humans can.

He even had thoughts of running for governor of California. But he realized that he was perfectly capable of something bigger - leading a company that would transform humanity itself.

At a dinner in California, he and Musk hit it off.

Musk had been arguing with Google co-founder Larry Page at the time. Musk believed that human consciousness was preciously unique, while Page held that machines and humans were equals: if machines really became conscious and displaced humanity, that was simply the law of natural evolution. Page even accused Musk of being a “speciesist.”

So Musk resolved to spend some money to give the “human team” a fighting chance.

Altman, who cares about both technological change and AI safety, naturally became his ideal partner.

When asked what could attract top talent to a brand-new AI research organization, Altman’s answer was the crazy vision of AGI.

AGI stands for artificial general intelligence: AI that can handle complex tasks the way humans do. In the days when Altman was still CEO of YC, computers could already pull off amazing feats through deep learning and neural networks, such as labeling photos, translating text, and optimizing complex ad networks.

These advances convinced him that AGI was truly within reach for the first time. However, putting it in the hands of large companies worried him. He believed that these companies would be too focused on their own products to seize the opportunity to develop AGI as quickly as possible, and, if they did create it, they might be reckless enough to release it to the public without taking the necessary precautions. That’s why there needed to be someone else to keep them in check.

Altman’s most important principle for screening recruits was that they had to be believers in AGI. With his own and Musk’s appeal, and the tantalizing prospect of exploring AGI, Altman scooped up the likes of Stripe CTO Greg Brockman and Google Brain core scientist Ilya Sutskever.

In December 2015, OpenAI was officially founded.

In 2021, he told reporters: “AGI can only be built once. And there aren’t many people who can run OpenAI well. I’ve been fortunate to have a series of experiences in my life that have really actively prepared me for this.”

A period of confusion

Despite its crazy, grand vision, OpenAI was clueless about how to get there.

Altman recalls that when the original small team had no office and gathered in Brockman’s apartment, he kept wondering: “What are we going to do?”

Things didn’t get much better until more than a year after the company was founded. The company didn’t have a clear direction; it was just trying random things: building a system that played video games, pouring energy into robotics, and putting out a few papers.

Recalling the company at that time, Altman says: “We knew what we wanted to do. We knew why we wanted to do it. But we didn’t know how.”

But they believed. Their optimism was bolstered by the steady improvement of artificial neural networks trained with deep learning techniques, and Sutskever says that chasing AI “isn’t completely crazy. It’s just moderately crazy.”

It wasn’t until 2016 that OpenAI landed the legendary AI researcher Alec Radford, who, after accepting OpenAI’s offer, told his high school magazine that taking on the new position was “a little bit like joining a graduate program” - an open, low-pressure environment for researching AI.

Radford, an introverted, low-key researcher, didn’t accept the author’s invitation for a face-to-face interview, but instead wrote a long email describing his work at OpenAI.

His biggest interest was getting neural networks to have clear conversations with humans. This was a departure from the traditional scripted approach to building chatbots, an approach used, with mediocre results, in everything from the original ELIZA to the popular Siri and Alexa. He writes: “Our goal is to see if there is any task, any environment, any domain, anything, where a language model can come in handy.”

At the time, he explains, language models were seen as novelty toys that could only occasionally generate a meaningful sentence, and only if you really squinted your eyes. His first experiment was to scan 2 billion Reddit comments to train a language model.

Like many of OpenAI’s early experiments, this one failed. That’s OK. The 23-year-old was given license to move on and fail again, Brockman says: “We were like, Alec’s great, let him do his thing.”

The turning point

In early 2017, an advance copy of a research paper co-authored by eight Google researchers appeared, but it didn’t get much attention. The paper’s official title was “Attention Is All You Need,” but it became known as the “Transformer paper” - a name chosen both to reflect the game-changing nature of the idea and to honor the toy that transforms from a truck into a giant robot.

Transformers enable neural networks to understand and generate language far more efficiently. They analyze the corpus in parallel to work out which elements are worth paying attention to. This greatly streamlined the process of generating coherent text in response to prompts.
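To make that idea concrete, here is a minimal sketch in Python with NumPy of the scaled dot-product attention at the core of the Transformer (an illustration only, not code from the paper or from OpenAI): every token scores its relevance to every other token in parallel, and those scores decide which elements get “paid attention to” when producing each output.

    # Minimal, illustrative scaled dot-product attention (not OpenAI's code).
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)   # subtract the max for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        """Q, K, V: (sequence_length, d) arrays of query, key, and value vectors."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)          # every token scores every other token, in parallel
        weights = softmax(scores, axis=-1)     # each row is a distribution over what to attend to
        return weights @ V                     # outputs are attention-weighted mixes of the values

    # Toy self-attention example: 4 tokens with 8-dimensional embeddings (Q = K = V).
    rng = np.random.default_rng(0)
    tokens = rng.normal(size=(4, 8))
    print(attention(tokens, tokens, tokens).shape)   # -> (4, 8)

Real Transformers stack many such attention layers, with learned projections for the queries, keys, and values, but the parallel “score everything against everything” step above is the piece the paper’s title refers to.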

Eventually, researchers realized that the same technique could also generate images and even video. While the paper has since been called the catalyst for the current AI frenzy, at the time Ilya Sutskever was one of only a few people who understood how powerful the breakthrough was.

Brockman recalls that when Ilya saw the Transformer emerge, he exclaimed, “This is what we’ve been waiting for.” That was OpenAI’s strategy - work hard on the problem and have faith that the team, or someone in the field, would figure out the missing piece.

After that, Alec Radford started experimenting with the Transformer architecture. He said he made more progress in two weeks than he had in the previous two years. It became clear to him that the key to getting the most out of the new model was to scale it up - to train it on super-sized datasets. This idea was dubbed “Big Transformer” by his colleague Rewon Child.

This approach required a change in OpenAI’s previously fragmented, everyone-does-their-own-thing culture: team resources had to be pulled together to focus on a single breakthrough. As Quora CEO Adam D’Angelo, who sits on OpenAI’s board of directors, explained to the author: “To capitalize on the Transformer’s strengths, you need to scale it up. You need to run it more like an engineering organization. You can’t have every researcher doing their own thing, training their own models, making elegant things that you can publish. You have to do all this more tedious, less elegant work.”

Radford and his collaborators named the model they created the “generatively pretrained transformer” - GPT for short. Eventually, this class of model came to be commonly known as “generative AI.” To build the model, they collected 7,000 unpublished books, many of them in the romance, fantasy, and adventure genres, and refined it with thousands of passages from Quora Q&As and middle and high school exams. The model contains 117 million parameters, or variables, and outperformed all previous models in understanding language and generating answers.

But the most striking result is that, after processing such a large amount of data, the model can deliver results beyond its training, showing competence in entirely new areas - handling tasks, for example, that it was never explicitly trained to do. These unplanned capabilities are known as “zero-shot” abilities. They continue to baffle researchers - which is part of why many in the field are uneasy about these so-called large language models.

Commercialization

OpenAI’s early funding came largely from Musk. But in 2018, Tesla began looking into using AI for Autopilot, just as OpenAI was making significant technological breakthroughs.

Musk had always regarded OpenAI as essentially his own company, so he proposed at the time that he simply take over the whole thing - merging OpenAI directly into Tesla. But the proposal was flatly rejected by Altman and the other executives, so the two sides cut ties: Musk withdrew his entire investment and announced at a town hall meeting that he was leaving.

At the meeting, he predicted that OpenAI would fail, and called at least one of the researchers “stupid”.

With no revenue coming in, Musk’s withdrawal was an existential crisis. While OpenAI’s research was some of the hippest AI work in Silicon Valley, its status as a non-profit organization sharply limited its appeal to investors.

In March 2019, OpenAI’s executives came up with a bizarre solution: create a for-profit entity while remaining a nonprofit. But there was a cap on the returns of this for-profit arm - a number that wasn’t made public, though from the company’s charter it has been speculated to run as high as trillions of dollars (OpenAI also reasoned that if returns ever actually reached that number, it would surely have built a usable AGI by then). Once that cap is reached, everything else the for-profit entity earns flows back to the non-profit lab.

So, with its new corporate structure, OpenAI managed to bring in a number of venture capital firms, including Sequoia. But awkwardly for OpenAI, even billions of dollars of venture capital is small change when AI R&D is a bottomless pit. The Big Transformer approach to building large language models demands massive hardware, and each iteration of the GPT series requires exponentially more computing power - power that only a handful of companies can afford.

So OpenAI quickly locked in on Microsoft. Altman told reporters that this was because Microsoft CEO Satya Nadella and CTO Kevin Scott were bold enough - after spending more than 20 years and billions of dollars building a supposedly cutting-edge AI research division - to admit that their work was a mess, and then to bet on a small company that was only a few years old.

Microsoft initially contributed $1 billion in return for computing time on its servers. But the deal grew in size as both sides gained confidence. Now, Microsoft has poured $13 billion into OpenAI.

Microsoft has also secured a big payday for itself, not only owning a “non-controlling stake” in OpenAI’s for-profit division - reportedly 49 percent - but also obtaining an exclusive license to commercialize OpenAI’s technology. Moreover, it managed to make its cloud computing platform Azure the exclusive cloud provider for OpenAI. In other words, Microsoft’s huge investment not only secures a powerful partner, but also locks in one of the world’s hottest new companies as a customer for its Azure cloud service.

Furthermore, under the terms of the deal, some of OpenAI’s original ideals - providing equal access for all - appear to have been tossed in the trash.

Over the course of the deal, OpenAI gradually took on the nature of a for-profit organization, which turned off some employees and led to the subsequent departure of several executives, who argued that OpenAI had become too commercialized and strayed from its original mission.

The Future of OpenAI

As the crazy vision of AGI gets closer to reality, Sam Altman and his team face mounting pressure to deliver a revolution with every product cycle, meet the business needs of their investors, and stay ahead of the competition. More critically, they have also been cast as quasi-saviors, tasked with preventing AI from wiping out humanity.

OpenAI has changed a great deal along the way, but the vision of building a safe AGI remains unchanged and still drives the company forward. OpenAI’s leaders are confident they will create an AI system smart enough, and safe enough, to usher humanity into an era of unimaginable abundance.