Wednesday, July 28, 2021

Avoiding the Robot Apocalypse

TOMGRAM: John Feffer, Avoiding the Robot Apocalypse

[Note for TomDispatch Readers: As many of you may remember, Dispatch Books has long been publishing John Feffer’s Splinterlands trilogy, his dystopian novels that foresaw so much that’s since engulfed us. His first volume, Splinterlands, was published in 2016, Frostlands in 2018, and the final must-read book, Songlands, is now out. Of it, Adam Hochschild has written: “An intriguing conclusion to a worthy trilogy. Feffer leaps far into the future in this book, but his view of it is enriched by a quirky, sensitive understanding of our world as it is — both its dangers and its possibilities.” Make sure, at the very least, to order yourself a copy. Any of you who might, however, like to support TomDispatch in return for your own signed, personalized Songlands, should go to our donation page and contribute at least $100 (or, if you live outside the U.S.A., $125) and it’ll be yours. Truly, you won’t regret it. In fact, given the ever-hotter world we find ourselves in, it couldn’t be a more appropriate book to read! Tom]

In my younger years, I had significant experience with futuristic worlds, sometimes of the grimmest sort. After all, I went to the moon with Jules Verne; saw London being destroyed with H.G. Wells; met my first robot with Isaac Asimov; faced the apocalyptic world of those aggressively poisonous plants, the Triffids, with John Wyndham; and met Big Brother with George Orwell. Yet, from pandemics to climate change, social media to the robotization of the planet that TomDispatch regular John Feffer describes today, nothing that I read once upon a time, no matter how futuristic, no matter how strange or apocalyptic, prepared me for the everyday world I now find myself in at age 77.

Back in the days of the pen and manual typewriter (remember, I’ve been an editor most of my life), if you had told me that, were I someday to mistakenly spell “life” as “kife,” the spell-check program on my computer (yes, an actual computer!) would promptly underline it in red to let me know that I had goofed, I would never have believed you. I, edited incessantly by a machine? Not on your life, or perhaps I should say: not until it became part of my seldom-thought-about everyday life. Nor, of course, could you have convinced me that someday I would be able to carry my total communications system in my pocket and more or less talk to anyone I know anywhere, anytime. Had you suggested that, then, I would undoubtedly have laughed you out of the room.

And yet here I am, living in an online world I barely grasp in a version of everyday life that’s left more youthful thoughts about the future in the dust. And now, Feffer has the nerve to fill me in on a future world to be in which, functionally, a robot may be carrying the equivalent of me around in its pocket or simply leave beings like me in a ditch somewhere along the way. Apocalypse then? I shudder to think. Read his piece and see if you don’t shudder, too. Tom

Artificial Intelligence Wants You (and Your Job)

We’d Better Control Machines Before They Control Us

My wife and I were recently driving in Virginia, amazed yet again that the GPS technology on our phones could guide us through a thicket of highways, around road accidents, and toward our precise destination. The artificial intelligence (AI) behind the soothing voice telling us where to turn has replaced passenger-seat navigators, maps, even traffic updates on the radio. How on earth did we survive before this technology arrived in our lives? We survived, of course, but were quite literally lost some of the time.

My reverie was interrupted by a toll booth. It was empty, as were all the other booths at this particular toll plaza. Most cars zipped through with E-Z passes, as one automated device seamlessly communicated with another. Unfortunately, our rental car didn’t have one.

So I prepared to pay by credit card, but the booth lacked a credit-card reader.

Okay, I thought, as I pulled out my wallet, I’ll use cash to cover the $3.25.

As it happened, that booth took only coins and who drives around with 13 quarters in his or her pocket?

I would have liked to ask someone that very question, but I was, of course, surrounded by mute machines. So, I simply drove through the electronic stile, preparing myself for the bill that would arrive in the mail once that plaza’s automated system photographed and traced our license plate.

In a thoroughly mundane fashion, I’d just experienced the age-old conflict between the limiting and liberating sides of technology. The arrowhead that can get you food for dinner might ultimately end up lodged in your own skull. The car that transports you to a beachside holiday contributes to the rising tides — by way of carbon emissions and elevated temperatures — that may someday wash away that very coastal gem of a place. The laptop computer that plugs you into the cyberworld also serves as the conduit through which hackers can steal your identity and zero out your bank account.

In the previous century, technology reached a true watershed moment when humans, harnessing the power of the atom, also acquired the capacity to destroy the entire planet. Now, thanks to AI, technology is hurtling us toward a new inflection point.

Science-fiction writers and technologists have long worried about a future in which robots, achieving sentience, take over the planet. The creation of a machine with human-like intelligence that could someday fool us into believing it’s one of us has often been described, with no small measure of trepidation, as the “singularity.” Respectable scientists like Stephen Hawking have argued that such a singularity will, in fact, mark the “end of the human race.”

This will not be some impossibly remote event like the sun blowing up in a supernova several billion years from now. According to one poll, AI researchers reckon that there’s at least a 50-50 chance that the singularity will occur by 2050. In other words, if pessimists like Hawking are right, it’s odds on that robots will dispatch humanity before the climate crisis does.

Neither the artificial intelligence that powers GPS nor the kind that controlled that frustrating toll plaza has yet attained anything like human-level intelligence — not even close. But in many ways, such dumb robots are already taking over the world. Automation is currently displacing millions of workers, including those former tollbooth operators. “Smart” machines like unmanned aerial vehicles have become an indispensable part of waging war. AI systems are increasingly being deployed to monitor our every move on the Internet, through our phones, and whenever we venture into public space. Algorithms are replacing teaching assistants in the classroom and influencing sentencing in courtrooms. Some of the loneliest among us have already become dependent on robot pets.

As AI capabilities continue to improve, the inescapable political question will become: to what extent can such technologies be curbed and regulated? Yes, the nuclear genie is out of the bottle, as are other technologies — biological and chemical — capable of causing mass destruction of a kind previously unimaginable on this planet. With AI, however, that day of singularity is still in the future, even if a rapidly approaching one. It should still be possible, at least theoretically, to control such an outcome before there’s nothing to do but play the whack-a-mole game of non-proliferation after the fact.

As long as humans continue to behave badly on a global scale — war, genocide, planet-threatening carbon emissions — it’s difficult to imagine that anything we create, however intelligent, will act differently. And yet we continue to dream that some deus in machina, a god in the machine, could appear as if by magic to save us from ourselves.

Taming AI?

In the early 1940s, science fiction writer Isaac Asimov formulated his famed three laws of robotics: that robots were not to harm humans, directly or indirectly; that they must obey our commands (unless doing so violates the first law); and that they must safeguard their own existence (unless self-preservation contravenes the first two laws).
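Asimov’s laws are, in effect, a strict precedence hierarchy: each law applies only insofar as it doesn’t conflict with the laws above it. For readers who think in code, here is a minimal sketch of that ordering in Python. It is purely illustrative, every name in it is invented for this example, and no real machine reasons this simply:

    # A toy rendering of the precedence in Asimov's three laws.
    # All names here are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool       # First Law concern: direct or indirect harm
        ordered_by_human: bool  # Second Law concern: was this commanded?
        self_destructive: bool  # Third Law concern: does it endanger the robot?

    def permitted(action: Action) -> bool:
        if action.harms_human:        # First Law overrides everything below
            return False
        if action.ordered_by_human:   # Second Law: obey, the First Law being satisfied
            return True
        return not action.self_destructive  # Third Law: otherwise, self-preserve

    # An order that endangers the robot is still obeyed:
    # the Second Law outranks the Third.
    print(permitted(Action(harms_human=False,
                           ordered_by_human=True,
                           self_destructive=True)))  # True

The sketch captures only the structure: each rule is consulted only after every higher rule has been satisfied. As the updates below suggest, the hard part has never been writing such rules down; it has been getting anyone to build machines that obey them.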

Any number of writers have attempted to update Asimov. The latest is legal scholar Frank Pasquale, who has devised four laws to replace Asimov’s three. Since he’s a lawyer, not a futurist, Pasquale is more concerned with controlling the robots of today than hypothesizing about the machines of tomorrow. He argues that robots and AI should help professionals, not replace them; that they should not counterfeit humans; that they should never become part of any kind of arms race; and that their creators, controllers, and owners should always be transparent.

Pasquale’s “laws,” however, run counter to the artificial-intelligence trends of our moment. The prevailing AI ethos mirrors what could be considered the prime directive of Silicon Valley: move fast and break things. This philosophy of disruption demands, above all, that technology continuously drive down labor costs and regularly render itself obsolescent.

In the global economy, AI indeed helps certain professionals — like Facebook’s Mark Zuckerberg and Amazon’s Jeff Bezos, who just happen to be among the richest people on the planet — but it’s also replacing millions of us. In the military sphere, automation is driving boots off the ground and eyes into the sky in a coming robotic world of war. And whether it’s Siri, the bots that guide increasingly frustrated callers through automated phone trees, or the AI that checks out Facebook posts, the aim has been to counterfeit human beings — “machines like me,” as Ian McEwan called them in his 2019 novel of that title — while concealing the strings that connect the creation to its creator.

Pasquale wants to apply the brakes on a train that has not only left the station but is no longer under the control of the engine driver. It’s not difficult to imagine where such a runaway phenomenon could end up, and techno-pessimists have taken a perverse delight in describing the resulting cataclysm. In his book Superintelligence, for instance, Nick Bostrom writes about a sandstorm of self-replicating nanorobots that chokes every living thing on the planet — the so-called grey goo problem — and an AI that seizes power by “hijacking political processes.”

Since they would be interested only in self-preservation and replication, not protecting humanity or following its orders, such sentient machines would clearly tear up Asimov’s rulebook. Futurists have leapt into the breach. For instance, Ray Kurzweil, who predicted in his 2005 book The Singularity Is Near that a robot would attain sentience by about 2045, has proposed a “ban on self-replicating physical entities that contain their own codes for self-replication.” Elon Musk, another billionaire industrialist who’s no enemy of innovation, has called AI humanity’s “biggest existential threat” and has come out in favor of a ban on future killer robots.

To prevent the various worst-case scenarios, the European Union has proposed to control AI according to degree of risk. Some products that fall in the EU’s “high risk” category would have to get a kind of Good Housekeeping seal of approval (the Conformité Européenne). AI systems “considered a clear threat to the safety, livelihoods, and rights of people,” on the other hand, would be subject to an outright ban. Such clear-and-present dangers would include, for instance, biometric identification that captures personal data by such means as facial recognition, as well as versions of China’s social credit system where AI helps track individuals and evaluate their overall trustworthiness.

Techno-optimists have predictably lambasted what they consider European overreach. Such controls on AI, they believe, will put a damper on R&D and, if the United States follows suit, allow China to secure an insuperable technological edge in the field. “If the member states of the EU — and their allies across the Atlantic — are serious about competing with China and retaining their power status (as well as the quality of life they provide to their citizens),” writes entrepreneur Sid Mohasseb in Newsweek, “they need to call for a redraft of these regulations, with growth and competition being seen as at least as important as regulation and safety.”

Mohasseb’s concerns are, however, misleading. The regulators he fears so much are, in fact, now playing a game of catch-up. In the economy and on the battlefield, to take just two spheres of human activity, AI has already become indispensable.

The Automation of Globalization

The ongoing Covid-19 pandemic has exposed the fragility of global supply chains. The world economy nearly ground to a halt in 2020 for one major reason: the health of human workers. The spread of infection, the risk of contagion, and the efforts to contain the pandemic all removed workers from the labor force, sometimes temporarily, sometimes permanently. Factories shut down, gaps widened in transportation networks, and shops lost business to online sellers.

A desire to cut labor costs, a major contributor to a product’s price tag, has driven corporations to look for cheaper workers overseas. For such cost-cutters, eliminating workers altogether is an even more beguiling prospect. Well before the pandemic hit, corporations had begun to turn to automation. By 2030, up to 45 million U.S. workers will be displaced by robots. The World Bank estimates that they will eventually replace an astounding 85% of the jobs in Ethiopia, 77% in China, and 72% in Thailand.

The pandemic not only accelerated this trend, but increased economic inequality as well because, at least for now, robots tend to replace the least skilled workers. In a survey conducted by the World Economic Forum, 43% of businesses indicated that they would reduce their workforces through the increased use of technology. “Since the pandemic hit,” reports NBC News,

“food manufacturers ramped up their automation, allowing facilities to maintain output while social distancing. Factories digitized controls on their machines so they could be remotely operated by workers working from home or another location. New sensors were installed that can flag, or predict, failures, allowing teams of inspectors operating on a schedule to be reduced to an as-needed maintenance crew.”

In an ideal world, robots and AI would increasingly take on all the dirty, dangerous, and demeaning jobs globally, freeing humans to do more interesting work. In the real world, however, automation is often making jobs dirtier and more dangerous by, for instance, speeding up the work done by the remaining human labor force. Meanwhile, robots are beginning to encroach on what’s usually thought of as the more interesting kinds of work done by, for example, architects and product designers.

In some cases, AI has even replaced managers. A contract driver for Amazon, Stephen Normandin, discovered that the AI system that monitored his efficiency as a deliveryman also used an automated email to fire him when it decided he wasn’t up to snuff. Jeff Bezos may be stepping down as chief executive of Amazon, but robots are quickly climbing its corporate ladder and could prove at least as ruthless as he’s been, if not more so.

Mobilizing against such a robot replacement army could prove particularly difficult as corporate executives aren’t the only ones putting out the welcome mat. Since fully automated manufacturing in “dark factories” doesn’t require lighting, heating, or a workforce that commutes to the site by car, that kind of production can reduce a country’s carbon footprint — a potentially enticing factor for “green growth” advocates and politicians desperate to meet their Paris climate targets.

It’s possible that sentient robots won’t need to devise ingenious stratagems for taking over the world. Humans may prove all too willing to give semi-intelligent machines the keys to the kingdom.

The New Fog of War

The 2020 war between Armenia and Azerbaijan proved to be unlike any previous military conflict. The two countries had been fighting since the 1980s over a disputed mountain enclave, Nagorno-Karabakh. Following the collapse of the Soviet Union, Armenia proved the clear victor in the conflict that followed in the early 1990s, occupying not only the disputed territory but parts of Azerbaijan as well.

In September 2020, as tensions mounted between the two countries, Armenia was prepared to defend those occupied territories with a well-equipped army of tanks and artillery. Thanks to its fossil-fuel exports, Azerbaijan, however, had been spending considerably more than Armenia on the most modern version of military preparedness. Still, Armenian leaders often touted their army as the best in the region. Indeed, according to the 2020 Global Militarization Index, Armenia was second only to Israel in terms of its level of militarization.

Yet Azerbaijan was the decisive winner in the 2020 conflict, retaking possession of Nagorno-Karabakh. The reason: automation.

“Azerbaijan used its drone fleet — purchased from Israel and Turkey — to stalk and destroy Armenia’s weapons systems in Nagorno-Karabakh, shattering its defenses and enabling a swift advance,” reported the Washington Post’s Robyn Dixon. “Armenia found that air defense systems in Nagorno-Karabakh, many of them older Soviet systems, were impossible to defend against drone attacks, and losses quickly piled up.”

Armenian soldiers, notorious for their fierceness, were spooked by the semi-autonomous weapons regularly hovering above them. “The soldiers on the ground knew they could be hit by a drone circling overhead at any time,” noted Mark Sullivan in the business magazine Fast Company. “The drones are so quiet they wouldn’t hear the whir of the propellers until it was too late. And even if the Armenians did manage to shoot down one of the drones, what had they really accomplished? They’d merely destroyed a piece of machinery that would be replaced.”

The United States pioneered the use of drones against various non-state adversaries in its war on terror in Afghanistan, Iraq, Pakistan, Somalia, and elsewhere across the Greater Middle East and Africa. But in its 2020 campaign, Azerbaijan used the technology to defeat a modern army. Now, every military will feel compelled not only to integrate increasingly powerful AI into its offensive capabilities, but also to defend against the new technology.

To stay ahead of the field, the United States is predictably pouring money into the latest technologies. The new Pentagon budget includes the “largest ever” request for R&D, including a down payment of nearly a billion dollars for AI. As TomDispatch regular Michael Klare has written, the Pentagon has even taken a cue from the business world by beginning to replace its war managers — generals — with a huge, interlinked network of automated systems known as the Joint All-Domain Command-and-Control (JADC2).

The result of any such handover of greater responsibility to machines will be the creation of what mathematician Cathy O’Neil calls “weapons of math destruction.” In the global economy, AI is already replacing humans up and down the chain of production. In the world of war, AI could in the end annihilate people altogether, whether thanks to human design or computer error.

After all, during the Cold War, only last-minute interventions by individuals on both sides ensured that nuclear “missile attacks” detected by Soviet and American computers — which turned out to be birds, unusual weather, or computer glitches — didn’t precipitate an all-out nuclear war. Take the human being out of the chain of command and machines could carry out such a genocide all by themselves.

And the fault, dear reader, would lie not in our robots but in ourselves.

Robots of Last Resort

In my new novel Songlands, humanity faces a terrible set of choices in 2052. Having failed to control carbon emissions for several decades, the world is at the point of no return, too late for conventional policy fixes. The only thing left is a scientific Hail Mary pass, an experiment in geoengineering that could fail or, worse, have terrible unintended consequences. The AI responsible for ensuring the success of the experiment may or may not be trustworthy. My dystopia, like so many others, is really about a narrowing of options and a whittling away of hope, which is our current trajectory.

And yet, we still have choices. We could radically shift toward clean energy and marshal resources for the whole world, not just its wealthier portions, to make the leap together. We could impose sensible regulations on artificial intelligence. We could debate the details of such programs in democratic societies and in participatory multilateral venues.

Or, throwing up our hands because of our unbridgeable political differences, we could wait for a post-Trumpian savior to bail us out. Techno-optimists hold out hope that automation will set us free and save the planet. Laissez-faire enthusiasts continue to believe that the invisible hand of the market will mysteriously direct capital toward planet-saving innovations instead of SUVs and plastic trinkets.

These are illusions. As I write in Songlands, we have always hoped for someone or something to save us: “God, a dictator, technology. For better or worse, the only answer to our cries for help is an echo.”

In the end, robots won’t save us. That’s one piece of work that can’t be outsourced or automated. It’s a job that only we ourselves can do.

Follow TomDispatch on Twitter and join us on Facebook. Check out the newest Dispatch Books, John Feffer’s new dystopian novel, Songlands (the final one in his Splinterlands series), Beverly Gologorsky’s novel Every Body Has a Story, and Tom Engelhardt’s A Nation Unmade by War, as well as Alfred McCoy’s In the Shadows of the American Century: The Rise and Decline of U.S. Global Power and John Dower’s The Violent American Century: War and Terror Since World War II.
