- OpenAI, the company behind ChatGPT, forms a specialized team to align superintelligent AI with human values.
- Ilya Sutskever and Jan Leike lead the charge, dedicating 20 percent of OpenAI's compute power.
- The ambitious plan aims to navigate superintelligence alignment challenges in just four years.
Decoding AI alignment
At its core, AI alignment seeks to ensure that artificial intelligence systems act in accordance with human objectives, ethics, and preferences. An AI that acts in harmony with these principles is termed 'aligned'; conversely, an AI that veers away from these intentions is 'misaligned'.
The conundrum of AI alignment isn't new. In 1960, cybernetics pioneer Norbert Wiener highlighted the necessity of ensuring that machine-driven objectives align with genuine human desires. The alignment process involves two main hurdles: correctly specifying the system's purpose (outer alignment) and ensuring the AI robustly adopts that specification (inner alignment).
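The outer-alignment half of the problem can be sketched with a toy example: an objective we write down is often only a proxy for what we actually want, and an optimizer that faithfully maximizes the proxy can still miss the intended goal. The data and names below are purely hypothetical, for illustration only.

```python
# Toy illustration of an outer-alignment failure: the specified reward
# ("maximize clicks") is a proxy for the true goal ("show useful articles").
# All articles and numbers here are made up.

articles = [
    {"title": "Clickbait shocker!", "clicks": 95, "usefulness": 10},
    {"title": "In-depth analysis",  "clicks": 40, "usefulness": 90},
]

def proxy_reward(article):
    """What we wrote down (the outer specification)."""
    return article["clicks"]

def true_objective(article):
    """What we actually wanted."""
    return article["usefulness"]

# An optimizer that perfectly maximizes the proxy picks the wrong article:
best_by_proxy = max(articles, key=proxy_reward)
best_by_intent = max(articles, key=true_objective)

print(best_by_proxy["title"])   # → Clickbait shocker!
print(best_by_intent["title"])  # → In-depth analysis
```

The system here is "perfectly obedient" to its stated objective and still misaligned with the intent behind it; inner alignment is the further question of whether a learned system even pursues the stated objective at all.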
It is this unsolved problem that makes some people afraid of superintelligent AI.
OpenAI's mission: Superalignment within four years
OpenAI, the organization behind ChatGPT, is spearheading this mission. Their goal? To devise a human-level automated alignment researcher. This means not only creating a system that understands human intent but also ensuring that it can keep evolving AI technologies in check.
Under the leadership of Ilya Sutskever, OpenAI's co-founder and Chief Scientist, and Jan Leike, Head of Alignment, the company is rallying the best minds in machine learning and AI.
"If you’ve been successful in machine learning, but you haven’t worked on alignment before, this is your time to make the switch", they write on their website.
"Superintelligence alignment is one of the most important unsolved technical problems of our time. We need the world’s best minds to solve this problem."
We need AI to solve problems
This is another example of why it is counterproductive to "pause" AI progress. AI gives us new tools to understand and create with. Out of those come tonnes of opportunities, like designing new proteins, but also new problems.
If we "pause" AI progress we won't get the benefits, and the problems will also be much harder to solve, because we won't have the tools to address them. Pausing development to solve problems first is therefore not a viable path.
One such problem is that we don't understand exactly how tools like ChatGPT arrive at their answers. But OpenAI used its latest model, GPT-4, to work on exactly that problem.
Now OpenAI is repeating that approach to solve what some believe is an existential threat to humanity.