Finally, there is some real pushback against the AI doomerism: Why AI Will Save the World
Marc Andreessen is the co-creator of Mosaic, the first widely used web browser, and a co-founder of Netscape. He was one of the six inaugural inductees into the World Wide Web Hall of Fame, and is now a co-founder of the venture capital firm Andreessen Horowitz.
You should read the whole thing, but it's long, so here is a summary (made, of course, with AI).
Why AI Can Make Everything We Care About Better
"The era of Artificial Intelligence is here, and boy are people freaking out."
AI has the potential to significantly augment human intelligence, improving myriad life outcomes. Its application ranges from assisting in education and work, to driving scientific and creative advancements.
As AI evolves, everyone will have a personalized, knowledgeable AI companion, boosting economic growth, promoting creativity, and enhancing decision-making, even potentially reducing wartime casualties.
Beyond these practical benefits, AI also has a humanizing effect, facilitating artistic expression and offering emotional support.
The proliferation of AI is akin to major technological advancements like electricity and microchips, representing not a risk, but an obligation towards a better future.
So Why The Panic?
Contrasting the optimistic view of AI, fear and paranoia pervade public discourse, with predictions of AI-induced disasters and societal collapse.
This phenomenon isn't new; every impactful technology, from electric lighting to the internet, has caused a moral panic – an irrational fear that it would destroy society.
While new technologies can bring negative outcomes alongside benefits, these panics often inflate legitimate concerns to hysteria, making it harder to address serious issues.
We are currently experiencing a full-blown moral panic about AI. Many individuals are advocating for AI restrictions and regulations, presenting themselves as defenders of the public good while capitalizing on, and exacerbating, the panic.
The question is, are their intentions genuinely for the public good, and are their concerns valid?
The Baptists And Bootleggers Of AI
In the realm of AI, two categories of actors parallel those in the prohibition movement of the 1920s.
- "Baptists," true believers, genuinely think AI restrictions are necessary to avoid societal disaster.
- "Bootleggers," on the other hand, stand to profit from such restrictions by insulating themselves from competition.
Some seemingly genuine advocates might be Bootleggers in disguise, fostering AI panic for financial gain.
Often, the crafty Bootleggers outmaneuver the ideologically driven Baptists, securing regulatory advantages while the Baptists' social-reform intentions go awry. A prime example is the Dodd-Frank Act after the 2008 financial crisis: instead of breaking up the "too big to fail" banks, it allowed them to grow even larger.
This dynamic is unfolding in the current push for AI regulations.
However, it's essential not just to identify these actors and question their motives, but to evaluate their arguments on their merits.
AI Risk #1: Will AI Kill Us All?
The pervasive fear that technology could rise up and annihilate humanity, seen in myths and popular culture, might stem from a desire to caution against potential risks of new technologies. But this fear often inflates the potential harm and overlooks the vast benefits of technological progress.
The idea that AI will decide to annihilate humanity is a significant misunderstanding. AI, essentially a tool built and controlled by humans, doesn't have motivations or desires. It won't "come alive" like living beings, shaped by evolution to survive and thrive.
Nonetheless, there are "Baptists" who fervently warn that AI will become a killer and propose drastic restrictions on its development. These warnings are often unsupported by scientific reasoning, lack testable hypotheses, and remain vague about the impending danger. Their extreme stance, bordering on conspiracy theories about math and code, and their stated openness to violence as a remedy, raise questions about their motives.
Three possible motivations might explain their behavior. Some may be dramatizing their work's significance. Others, posing as "Baptists," may be "Bootleggers" who profit from fueling AI doomsday narratives. Lastly, the "AI risk" discourse has taken on the characteristics of a cult, exhibiting typical millenarian, apocalyptic beliefs.
While such cults may be intriguing, their extreme beliefs shouldn't dictate the future of AI laws and society.
AI Risk #2: Will AI Ruin Our Society?
The second AI risk is that it could disrupt society by spreading hate speech or misinformation. This concern, termed "AI alignment," asks whose values AI should reflect.
The debate mirrors social media content regulation, where restrictions can snowball into widespread censorship.
As debates over "AI alignment" continue, remember that their outcome will shape society profoundly. AI is expected to become the control layer for many future systems, wielding vast influence, so the rules governing its operation may matter more than anything else. It is therefore crucial that these regulatory debates include diverse voices and do not entrench a narrow ideology.
AI Risk #3: Will AI Take All Our Jobs?
AI taking all our jobs is a recurring fear, but each technological advance in history has led to more jobs at higher wages.
The automation-kills-jobs viewpoint is based on a fallacy that there's a fixed amount of work to be done. In reality, technology-driven productivity growth reduces costs, increases demand, creates new industries, and raises wages.
Even if AI could replace all human labor, it would lead to unprecedented economic growth and new job opportunities.
AI Risk #4: Will AI Lead To Crippling Inequality?
The fear that AI will lead to massive wealth inequality is based on the Marxist notion that the owners of technology will hoard the profits, leaving regular people with nothing.
However, in reality, technology owners are motivated to sell to as many people as possible to maximize profits, inevitably driving down prices. This has been seen with cars, electricity, computers, and now AI.
Consequently, technology ends up empowering its users, not centralizing wealth.
The real risk of AI and inequality is not AI causing more inequality, but us not allowing AI to be used to reduce it, particularly in sectors like housing, education, and healthcare.
AI Risk #5: Will AI Lead To Bad People Doing Bad Things?
While the first four commonly feared AI risks may not hold water, the fifth — that AI could enable malicious acts — is valid. Like all technology, AI can be used for both good and bad. Some suggest banning AI to prevent misuse, but AI isn't a scarce physical material that can be locked away. It's math and code, widely accessible and impossible to effectively control without totalitarian oppression.
There are two better ways to address this risk:
- Use existing laws: Most potential malicious uses of AI are already illegal — hacking, theft, bioweapon creation, terrorism. We should focus on preventing and prosecuting these crimes.
- Leverage AI for defense: The same capabilities making AI potentially dangerous also make it a potent tool for good.
In short, instead of trying to ban AI, we should be focusing on using it effectively for protection and defense.
The Actual Risk Of Not Pursuing AI With Maximum Force And Speed
The gravest AI risk lies in its use by authoritarian regimes, particularly China's Communist Party, for population control. China's AI agenda, open and aggressive, intends to extend beyond its borders through initiatives like 5G networks, Belt and Road loans, and apps such as TikTok. The risk escalates if China achieves global AI dominance before the West.
The optimal strategy is a Reagan-esque approach: "We win, they lose." Instead of being hamstrung by unfounded AI fears, the West should vigorously pursue AI, aiming to achieve global technological superiority before China.
By integrating AI into our economy and society rapidly, we can enhance productivity and human potential, counteract real AI risks, and ensure our way of life prevails over China's authoritarian vision.
What Is To Be Done?
- Large AI companies should be permitted to develop AI aggressively, but they shouldn't be allowed to achieve regulatory capture or establish a government-protected cartel on the strength of false claims of AI risk.
- AI startups should be allowed to build and compete freely without government intervention. Their success or failure would keep larger companies on their toes, benefiting society and the economy.
- Open-source AI should be allowed to proliferate freely, providing a valuable resource to students worldwide and ensuring AI accessibility irrespective of socioeconomic status.
- Governments and private sectors should collaborate to utilize AI in strengthening societal defensive capabilities, addressing potential risks as well as broader issues like malnutrition, disease, and climate change.
- To counteract the risk of China's global AI dominance, the West should leverage its private sector, scientific community, and government to ensure its AI technology leads globally, including within China.
In summary, harness the power of AI to solve global challenges.
It's time to build.