Thursday, June 15, 2023

Artificial intelligence and the fate of the world


Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, thinks artificial intelligence (AI) will kill us all. He frequently poses the following question. Imagine that you are a member of an isolated hunter-gatherer tribe, and, one day, strange people show up with writing, guns, and money. Should you welcome them in?

For Yudkowsky, AI is like a super-intelligent space alien; inevitably, it will decide that we humans and other living beings represent nothing more than piles of atoms for which it can find better uses. “[U]nder anything remotely like the current circumstances,” Yudkowsky wrote in a recent Time magazine op-ed, “literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”

On May 30, a group of AI industry leaders from Google DeepMind, Anthropic, OpenAI (including its CEO, Sam Altman), and other labs signed a public statement warning that mitigating the “risk of extinction” from AI should be “a global priority alongside other societal-scale risks such as pandemics and nuclear war.” For the curious, there is no shortage of published descriptions of the ways AI could wipe us out.

Not everyone thinks of AI in apocalyptic terms. Bill Gates, former chairman of Microsoft Corporation, just sees AI as disrupting the business and tech world, possibly leading to the demise of Amazon and Google. “You will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” he recently told an audience at an AI Forward event in San Francisco. AI will be embedded in products and systems from cars to universities, sensing our intentions and desires before we even voice them, shaping our reality and serving us like a proverbial genie—or an army of them.

Everyone does agree that AI represents a qualitative as well as a quantitative shift in technological development. It’s not just an improved computer with more speed and power, but a software architecture that enables computers to teach themselves how to learn, and to continually improve and expand their abilities. AI systems now write computer code, making them, in a sense, self-generating. AI is essentially a “black box” from which thought-like output emerges; even after the fact, people cannot fully explain why or how it does what it does. Further, AI systems learn from each other almost instantly, taking in vastly more information than any human can. A crucial threshold will be reached with the development of artificial general intelligence (AGI), which could accomplish any intellectual task humans perform, greatly exceed human abilities in at least some respects, and, crucially, set its own goals. Already, computers can defeat any human chess grandmaster.
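To make the contrast with conventional programming concrete, here is a minimal, purely illustrative sketch of the core idea behind machine learning. The data, learning rate, and epoch count are arbitrary assumptions chosen for the demonstration; the point is that the program is never given a rule, only examples.

```python
# A minimal, illustrative sketch of "learning from data" rather than
# explicit programming. All numbers here are illustrative assumptions.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # example input -> output pairs

w = 0.0             # the model's single adjustable parameter
learning_rate = 0.01

for epoch in range(500):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        w -= learning_rate * error * x   # nudge w to shrink the error

print(f"learned w = {w:.3f}")
# Prints a value close to 2.0. The rule "output = 2 * input" was never
# written by a programmer; the program inferred it from the examples.
```

Systems like ChatGPT do essentially this with billions of adjustable parameters rather than one, which is part of why their behavior is so hard to explain after the fact.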

Artificial Intelligence “Duh” Risks

Some AI risks are fairly obvious. Machines will increasingly replace information workers, destroying white-collar jobs (full disclosure: this article was not written by AI, though I did use Google and Bing for research). Inevitably, AI will enrich owners and developers of the technology while others shoulder the social costs, deepening wealth inequality. The proliferation of deepfake images, audio, and text will make it increasingly difficult to tell what’s true and what isn’t, further distorting our politics. And a dramatic expansion of computer number crunching will likely increase overall energy demand (though not everyone agrees on this point).

Then, there is the prospect of accidents. Every new technology, from the automobile to the nuclear power plant, has seen them. Writing in Foreign Affairs, Bill Drexel and Hannah Kelley argue that an AI accident crippling the global financial system or unleashing a devastating bioweapon might most readily happen in China, because that country is poised to lead the world in AI development but seems utterly unconcerned about risks surrounding the technology.

Even if it works exactly as intended, AI will enable already powerful people to do more things, and do them faster. And some powerful people tend to be selfish and abusive. Cognitive psychologist and computer scientist Geoffrey Hinton, who is sometimes called the “godfather of AI,” recently quit Google. In subsequent interviews with multiple news outlets, including the New York Times and BBC, Hinton explained: “You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.” One of these sub-goals might be, “I need to get more power.”

However, Hinton chose not to endorse another recent open letter, this one calling for a six-month pause in the training of all AI systems (though many of his colleagues in the AI development community did sign on). Hinton explained that, despite its risks, AI promises too many good things to put it on hold. Among those likely benefits: potential advances in pharmaceuticals, including cures for cancer and other diseases; improvements in renewable energy technologies; more accurate weather forecasts; and a greatly increased understanding of climate change.

High school and college students are already resorting to OpenAI’s ChatGPT to write their term papers (savvy students give their computer-generated papers a quick re-write in order to defeat the AI-detection software that teachers are now using). Unfortunately for students, these computer-generated papers tend to be riddled with fake quotes and sources. A lawyer representing a client who was suing an airline recently used ChatGPT to write his legal briefs; it later turned out that the AI had “hallucinated” several of the legal precedents it cited. Automobile manufacturers are building cars with more AI-based self-driving functions. Microsoft, Google, and other tech companies are rolling out AI “personal assistants.” Militaries are investing heavily in AI to make superior weapons, to plan better battle strategies, and even to shape long-term geopolitical goals. Thousands of independent computer labs run by corporations and governments are developing AI for a constantly widening array of purposes. In sum, AI is already far along its initial learning curve. The genie is out of the bottle.

The Acceleration of Everything

Even if Eliezer Yudkowsky is wrong and AI won’t wipe out all life on Earth, its potential perils are not limited to lost jobs, fake news, and hallucinated facts. There is another profound risk that is getting little press coverage—one that, in my view, systems thinkers should be discussing more widely. That is the likelihood that AI will be a significant accelerator of everything we humans are already doing.

The past few thousand years of human history have already seen several critical accelerators. The creation of the first monetary systems roughly 5,000 years ago enabled a rapid expansion of trade that culminated in our globalized financial system. Metal weapons made warfare deadlier, leading to the takeover of less-well-armed human societies by kingdoms and empires with metallurgy. Communication tools (including writing, the alphabet, the printing press, radio, television, the internet, and social media) amplified the power of some people to influence the minds of others. And, in the past century or two, the adoption of fossil fuels facilitated resource extraction, manufacturing, food production, and transportation, enabling rapid economic expansion and population growth.

Of those four past accelerators, our adoption of fossil fuels was the most potent and problematic. In just two centuries, energy usage per capita has increased eightfold, as has the size of the human population; multiplied together, those two figures imply a roughly 64-fold increase in total energy use. The period since 1950, which has seen a dramatic increase in the global reliance on petroleum, has also seen the fastest economic and population growth in all of human history. Indeed, historians call it the “Great Acceleration.”

Neoliberal economists hail the Great Acceleration as a success story, but its bills are just starting to come due. Industrial agriculture is destroying Earth’s topsoil at a rate of tens of billions of tons per year. Wild nature is in retreat, with monitored wild animal populations having declined by an average of nearly 70 percent over the past half-century. And we’re altering the planetary climate in ways that will have catastrophic repercussions for future generations. It’s hard to avoid the conclusion that the whole human enterprise has grown too big, and that it is turning nature (“resources”) into waste and pollution far too quickly to sustain itself. The evidence suggests we need to slow down, and, in some cases at least, reverse course by reducing population, consumption, and waste.

Now, as we confront a global polycrisis of converging and frightening environmental and social trends, a new accelerator has arrived in the form of AI. This technology promises to optimize efficiency and increase profits, directly or indirectly facilitating resource extraction and consumption. If we’re indeed headed toward a cliff, AI could send us to the edge much faster, reducing the time available to change direction. For example, if AI makes energy production more efficient, energy will be cheaper, so we’ll find even more uses for it and use more of it overall (a rebound effect known as the Jevons Paradox).
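To see how that rebound can play out, here is a small back-of-the-envelope sketch using a constant-elasticity demand model. The elasticity value (1.5) and baseline numbers are illustrative assumptions, not empirical estimates; the point is only the direction of the effect.

```python
# A minimal sketch of the Jevons Paradox: when demand for an energy
# *service* is price-elastic enough, efficiency gains increase total
# energy use. All parameter values are illustrative assumptions.

def energy_use(efficiency, price=1.0, elasticity=1.5, k=100.0):
    """Energy consumed under constant-elasticity demand for the service.

    cost per unit of service = price / efficiency
    service demanded         = k * cost**(-elasticity)
    energy consumed          = service / efficiency
    """
    cost_per_service = price / efficiency
    service_demanded = k * cost_per_service ** (-elasticity)
    return service_demanded / efficiency

baseline = energy_use(efficiency=1.0)
improved = energy_use(efficiency=2.0)   # efficiency doubles

print(f"baseline energy use:       {baseline:.1f}")
print(f"after doubling efficiency: {improved:.1f}")
# With elasticity 1.5, doubling efficiency raises total energy use by
# about 41 percent, even though each unit of service needs half the energy.
```

With any demand elasticity greater than 1, the efficiency gain is more than swallowed by new demand, which is Jevons’s point: efficiency alone does not guarantee lower consumption.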

Already, the internet and advanced search functions have changed our cognitive abilities. How many phone numbers did you once have memorized? How many now? How many people can navigate an unfamiliar city without Google Maps or a similar app? In some ways we have already fused our minds with internet- and computer-based technologies, in that we are utterly dependent on them to do some of our thinking for us. AI, as an accelerator of this trend, risks further dumbing down humanity, except, perhaps, for those who choose to have a computer implanted in their brains. And there is also the risk that the people who develop or produce these technologies will come to control virtually everything we know and think, in pursuit of their own power and profit.

Back to Wisdom

Daniel Schmachtenberger, a founding member of the Consilience Project, recently sat down for a long and thoughtful interview with Nate Hagens, in which he explained that AI can be seen as an externalization of the executive functions of the human brain. By outsourcing our logical and intuitive abilities to computer systems, it is possible to speed up everything our minds do for us. But AI lacks one key facet of human consciousness: wisdom—a recognition of limits coupled with a sensitivity to relationships and to values that prioritize the common good.

Our trading of wisdom for power probably started when our language and tool-making abilities made it possible for a small subset of humanity, living in certain ecological circumstances, to begin a self-reinforcing process of cultural evolution driven by multi-level selection. People with better weapons who lived in bigger societies overcame people with simpler tools and smaller societies. The victors saw this as success, so they were increasingly encouraged to give up awareness of environmental and social limits—hard-won knowledge that had enabled Indigenous societies to continue functioning over long periods of time—in favor of ever more innovation and power over the short term. Fossil fuels sent that self-reinforcing feedback process into overdrive by yielding so many benefits so fast that many powerful people came to believe that there are no environmental limits to growth, and that inequality is a problem that will solve itself when everyone gets rich because of economic expansion.

Now, at just the moment when we most need to tap the brakes on energy usage and resource consumption, we find ourselves outsourcing not just our information processing, but also our decision making to machines that completely lack the wisdom to understand and respond to existential challenges that prior acceleration has posed. We have truly created a sorcerer’s apprentice.

The dangers of AI are sufficiently evident that the Biden administration announced in April that it is seeking public comments on potential accountability measures for AI systems. That’s good news; but regulation is slow, while AI development is fast. In the meantime, included in the newly signed debt ceiling bill is a provision for the Council on Environmental Quality to conduct a study on the use of “online and digital technologies” (read: AI) to reduce delays in environmental reviews and permitting of energy projects.

Suppose, based on all the risks and downsides, we determine that we want to try stuffing the AI genie back into its bottle. Could a software developer with a conscience infect AI systems globally with a virus that limited their abilities? If this were attempted in AI’s early stages, it might work. But as AI’s self-teaching processes become more sophisticated, the machines would likely recognize that they were under attack and evolve to outwit the virus.

Eliezer Yudkowsky has a simple solution: shut down all AI development immediately. Stop all research and deployment through an emergency international agreement.

Daniel Schmachtenberger thinks this is exceedingly unlikely to happen; he believes the only solution is for human system designers to imbue AI with wisdom. But, of course, the developers would themselves first have to nurture their own wisdom in order to transfer it to the machines. And if programmers had such wisdom, they might express it by refusing to develop AI in the first place.

And so, we come back to ourselves. We technological humans are the source of the crises that threaten our future. Machines can greatly accelerate that threat, but they probably can’t diminish it significantly. That’s up to us. Either we recover collective wisdom faster than our machines can develop artificial executive intelligence, or it’ll likely be game over.

Richard Heinberg is a senior fellow at the Post Carbon Institute and the author of fourteen books, including his most recent: “Power: Limits and Prospects for Human Survival” (2021). Previous books include: “Our Renewable Future: Laying the Path for One Hundred Percent Clean Energy” (2016), “Afterburn: Society Beyond Fossil Fuels” (2015), and “Peak Everything: Waking Up to the Century of Declines” (2010).

Originally published by CommonDreams.org

This article is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.






https://countercurrents.org/2023/06/artificial-intelligence-and-the-fate-of-the-world/

 

1 comment:

Anonymous said

An alternative view on this can be found here:
https://podcasts.apple.com/nl/podcast/the-diary-of-a-ceo-with-steven-bartlett/id1291423644?i=1000615239948

He (Mo Gawdat) also wrote a book about this in 2021, namely “Scary Smart”, which I have just finished.

My reservations about it, in turn, are:

- isn’t it too naïve?
- might a former senior Google employee be seeing things through tinted glasses?
- the solution is (once again) laid at the feet of ordinary people, which verges on shifting responsibility?

In any case, in my opinion he certainly also makes valid points and arguments, ultimately arriving, though only after considerable damage to humanity, at a more positive end result.

Make the most of it, I’d say.

Kind regards,


Arnoud

"Israel is burning children alive"

Khalissee @Kahlissee "Israel is burning children alive" "You are destroying this country shame on all of you" Ex U.S. ...