
Can Robots Evolve Into Machines of Loving Grace?


Nobody could say exactly when the robots arrived. They seemed to have been smuggled onto campus during the break without any official announcement, explanation, or warning. There were a few dozen of them in total: six-wheeled, ice-chest-sized boxes with little yellow flags on top for visibility. They navigated the sidewalks around campus using cameras, radar, and ultrasonic sensors. They were there for the students, ferrying deliveries ordered via an app from university food services, but everyone I knew who worked on campus had some anecdote about their first encounter.


These stories were shared, at least in the beginning, with amusement or a note of performative exasperation. Several people complained that the machines had made free use of the bike paths but were ignorant of social norms: They refused to yield to pedestrians and traveled slowly in the passing lane, backing up traffic. One morning a friend of mine, a fellow adjunct instructor who was running late to his class, nudged his bike right up behind one of the bots, intending to run it off the road, but it just kept moving along on its course, oblivious. Another friend discovered a bot trapped helplessly in a bike rack. It was heavy, and she had to enlist the help of a passerby to free it. “Thankfully it was just a bike rack,” she said. “Just wait till they start crashing into bicycles and moving cars.”

Among the students, the only problem was an excess of affection. The bots were often held up during their delivery runs because the students insisted on taking selfies with the machines outside the dorms or chatting with them. The robots had minimal speech capacities—they were able to emit greetings and instructions and to say “Thank you, have a nice day!” as they rolled away—and yet this was enough to endear them to many people as social creatures. The bots often returned to their stations with notes affixed to them: Hello, robot! and We love you! They inspired a proliferation of memes on the University of Wisconsin–Madison social media pages. One student dressed a bot in a hat and scarf, snapped a photo, and created a profile for it on a dating app. Its name was listed as Onezerozerooneoneone, its age 18. Occupation: delivery boi. Orientation: asexual robot.

Around this time autonomous machines were popping up all over the country. Grocery stores were using them to patrol aisles, searching for spills and debris. Walmart had introduced them in its supercenters to keep track of out-of-stock items. A New York Times story reported that many of these robots had been christened with nicknames by their human coworkers and given name badges. One was thrown a birthday party, where it was given, among other gifts, a can of WD-40 lubricant. The article presented these anecdotes wryly, for the most part, as instances of harmless anthropomorphism, but the same instinct was already driving public policy. In 2017 the European Parliament had proposed that robots should be deemed “electronic persons,” arguing that certain forms of AI had become sophisticated enough to be considered responsible agents. It was a legal distinction, made within the context of liability law, though the language seemed to summon an ancient, animist cosmology wherein all kinds of inanimate objects—trees and rocks, pipes and kettles—were considered nonhuman “persons.”

It made me think of the opening of a 1967 poem by Richard Brautigan, “All Watched Over by Machines of Loving Grace”:

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

Brautigan penned these lines during the Summer of Love, from the heart of the counterculture in San Francisco, while he was poet in residence at the California Institute of Technology. The poem’s subsequent stanzas elaborate on this enchanted landscape of “cybernetic forests” and flowerlike computers, a world in which digital technologies reunite us with “our mammal brothers and sisters,” where man and robot and beast achieve true equality of being. The work evokes a particular subgenre of West Coast utopianism, one that recalls the back-to-the-land movement and Stewart Brand’s Whole Earth Catalog, which envisioned the tools of the American industrial complex repurposed to bring about a more equitable and ecologically sustainable world. It imagines technology returning us to a more primitive era—a premodern and perhaps pre-Christian period of history, when humans lived in harmony with nature and inanimate objects were enchanted with life.


Echoes of this dream can still be found in conversations about technology. It is reiterated by those, like MIT’s David Rose, who speculate that the internet of things will soon “enchant” everyday objects, imbuing doorknobs, thermostats, refrigerators, and cars with responsiveness and intelligence. It can be found in the work of posthuman theorists like Jane Bennett, who imagines digital technologies reconfiguring our modern understanding of “dead matter” and reviving a more ancient worldview “wherein matter has a liveliness, resilience, unpredictability, or recalcitrance that is itself a source of wonder for us.”

“I like to think” begins each stanza of Brautigan’s poem, a refrain that reads less as poetic device than as mystical invocation. This vision of the future may be just another form of wishful thinking, but it is a compelling one, if only because of its historical symmetry. It seems only right that technology should restore to us the enchanted world that technology itself destroyed. Perhaps the very forces that facilitated our exile from Eden will one day reanimate our garden with digital life. Perhaps the only way out is through.

Brautigan’s poem had been on my mind for some time before the robots arrived. Earlier that year I’d been invited to take part in a panel called Writing the Nonhuman, a conversation about the relationship between humans, nature, and technology during the Anthropocene.

My talk was about emergent intelligence in AI, the notion that higher-level capacities can spontaneously appear in machines without having been designed. I’d focused primarily on the work of Rodney Brooks, who headed up the MIT Artificial Intelligence Lab in the late 1990s, and his “embodied intelligence” approach to robotics. Before Brooks came along, most forms of AI were designed like enormous disembodied brains, as scientists believed that the body played no part in human cognition. As a result, these machines excelled at the most abstract forms of intelligence—calculus, chess—but failed miserably when it came to the kinds of activities that children found easy: speech and vision, distinguishing a cup from a pencil. When the machines were given bodies and taught to interact with their environment, they did so at a painfully slow and clumsy pace, as they had to constantly refer each new encounter back to their internal model of the world.

Brooks’ revelation was that it was precisely this central processing—the computer’s “brain,” so to speak—that was holding it back. While watching one of these robots clumsily navigate a room, he realized that a cockroach could accomplish the same task with more speed and agility despite requiring less computing power. Brooks began building machines that were modeled after insects. He used an entirely new system of computing he called subsumption architecture, a form of distributed intelligence much like the kind found in beehives and forests. In place of central processing, his machines were equipped with several different modules that each had its own sensors, cameras, and actuators and communicated minimally with the others. Rather than being programmed in advance with a coherent picture of the world, they learned on the fly by directly interacting with their environment. One of them, Herbert, learned to wander around the lab and steal empty soda cans from people’s offices. Another, Genghis, managed to navigate rough terrain without any kind of memory or internal mapping. Brooks took these successes to mean that intelligence did not require a unified, knowing subject. He was convinced that these simple robot competencies would build on one another until they evolved something that looked very much like human intelligence.
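For readers who want to see the shape of the idea, here is a minimal sketch, in Python, of how a subsumption-style controller can be organized: independent sense-act layers, no shared world model, and higher-priority layers simply overriding lower ones. The layer names and sensor readings are invented for illustration; Brooks’ actual robots ran many such behaviors in parallel on dedicated hardware, not in a loop like this.

```python
# Illustrative sketch of a subsumption-style controller.
# Layer names and sensor values are hypothetical, not Brooks' code.

class Behavior:
    """A simple sense-act module with no shared world model."""
    def active(self, sensors):
        raise NotImplementedError
    def act(self, sensors):
        raise NotImplementedError

class Wander(Behavior):
    # Lowest layer: move forward by default.
    def active(self, sensors):
        return True
    def act(self, sensors):
        return "drive forward"

class AvoidObstacle(Behavior):
    # Higher layer: subsumes (overrides) Wander when something is close.
    def active(self, sensors):
        return sensors["range_cm"] < 30
    def act(self, sensors):
        return "turn away"

def control_step(layers, sensors):
    """Highest active layer wins; no central planner, no map."""
    for behavior in layers:  # ordered from highest to lowest priority
        if behavior.active(sensors):
            return behavior.act(sensors)

layers = [AvoidObstacle(), Wander()]
print(control_step(layers, {"range_cm": 22}))   # -> "turn away"
print(control_step(layers, {"range_cm": 150}))  # -> "drive forward"
```

Even in this toy, the point of the design is visible: nothing in the program represents “the room.” Whatever competence the robot has lives entirely in the coupling between sensing and acting.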

Brooks and his team at MIT were essentially trying to re-create the conditions of human evolution. If it’s true that human intelligence emerges from the more primitive mechanisms we inherited from our ancestors, then robots should similarly evolve complex behaviors from a series of simple rules. With AI, engineers had typically used a top-down approach to programming, as though they were gods making creatures in their image. But evolution depends on bottom-up strategies—single-cell organisms develop into complex, multicellular creatures—which Brooks came to see as more effective. Abstract thought was a late development in human evolution, and not as important as we liked to believe; long before we could solve differential equations, our ancestors had learned to walk, to eat, to move about in an environment. Once Brooks realized that his insect robots could achieve these tasks without central processing, he moved on to creating a humanoid robot. The machine was just a torso without legs, but it convincingly resembled a human upper body, complete with a head, a neck, shoulders, and arms. He named it Cog. It was equipped with over 20 actuated joints, plus microphones and sensors that allowed it to distinguish between sound, color, and movement. Each eye contained two cameras that mimicked the way human vision works and enabled it to saccade from one place to another. Like the insect robots, Cog lacked central control and was instead programmed with a series of basic drives. The idea was that through social interaction, and with the help of learning algorithms, the machine would develop more complex behaviors and perhaps even the ability to speak.

Over the years that Brooks and his team worked on Cog, the machine achieved some remarkable behaviors. It learned to recognize faces and make eye contact with humans. It could throw and catch a ball, point at things, and play with a Slinky. When the team played rock music, Cog managed to beat out a passable rhythm on a snare drum. Occasionally the robot did display emergent behaviors—new actions that seemed to have evolved organically from the machine’s spontaneous actions in the world. One day, one of Brooks’ grad students, Cynthia Breazeal, was shaking a whiteboard eraser and Cog reached out and touched it. Amused, Breazeal repeated the act, which prompted Cog to touch the eraser again, as though it were a game. Brooks was stunned. It appeared as though the robot recognized the idea of turn-taking, something it had not been programmed to understand. Breazeal knew that Cog couldn’t understand this—she had helped design the machine. But for a moment she seemed to have forgotten and, as Brooks put it, “behaved as though there was more to Cog than there really was.” According to Brooks, his student’s willingness to treat the robot as “more than” it actually was had elicited something new. “Cog had been able to perform at a higher level than its design so far called for,” he said.

Brooks knew that we are more likely to treat objects as persons when we are made to socially engage with them. In fact, he believed that intelligence exists only in the relationships we, as observers, perceive when watching an entity interact with its environment. “Intelligence,” he wrote, “is in the eye of the observer.” He predicted that, over time, as the systems grew more complex, they would evolve not only intelligence but consciousness as well. Consciousness was not some substance in the brain but rather emerged from the complex relationships between the subject and the world. It was part alchemy, part illusion, a collaborative effort that obliterated our standard delineations between self and other. As Brooks put it, “Thought and consciousness will not need to be programmed in. They will emerge.”

The AI philosopher Mark A. Bedau has argued that emergentism, as a theory of mind, “is uncomfortably like magic.” Rather than looking for distinct processes in the brain that are responsible for consciousness, emergentists believe that the way we experience the world—our internal theater of thoughts and feelings and beliefs—is a dynamic process that cannot be explained in terms of individual neurons, just as the behavior of a flock of starlings cannot be accounted for by the movements of any single bird. Although there is plenty of evidence of emergent phenomena in nature, the idea becomes more elusive when applied to consciousness, something that cannot be objectively observed in the brain. According to its critics, emergentism is an attempt to get “something from nothing,” by imagining some additional, invisible power that exists within the mechanism, like a ghost in the machine.
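The starling analogy can be made concrete with a toy simulation. In the sketch below, each simulated bird follows only local rules—drift toward nearby birds, match their heading—with no global coordination, yet the group as a whole coheres. The rules and parameters are arbitrary illustrations, loosely in the spirit of Craig Reynolds’ classic “boids,” not a model of actual starlings.

```python
# Toy illustration of emergence: each "bird" sees only its neighbors,
# yet the flock as a whole aligns. All parameters are arbitrary.

import random

def step(birds, neighbor_radius=10.0):
    new = []
    for x, v in birds:
        # Look only at nearby birds -- no bird knows the whole flock.
        nbrs = [(x2, v2) for x2, v2 in birds if abs(x2 - x) < neighbor_radius]
        cx = sum(x2 for x2, _ in nbrs) / len(nbrs)   # local center
        cv = sum(v2 for _, v2 in nbrs) / len(nbrs)   # local mean heading
        v = 0.9 * v + 0.1 * cv + 0.01 * (cx - x)     # align and cohere
        new.append((x + v, v))
    return new

birds = [(random.uniform(0, 50), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(200):
    birds = step(birds)

# Headings typically converge toward a common value that no rule specified.
headings = [v for _, v in birds]
print("heading spread after 200 steps:", max(headings) - min(headings))
```

No line of the program mentions a flock; the flock is only visible to the observer watching the whole. That is the sense in which emergentists say the behavior cannot be accounted for by any single bird—and the sense in which critics say the explanation stops just where it gets interesting.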

Some have argued that emergentism is just an updated version of vitalism, a popular theory throughout the 18th and 19th centuries that proposed that the world was animated by an elusive life force that permeates all things. Contrary to the mechanistic view of nature that was popular at that time, vitalists insisted that an organism was more than the sum of its parts—that there must exist, in addition to its physical body, some “living principle,” or élan vital. Some believed that this life force was ether or electricity, and scientific efforts to discover this substance often veered into the ambition to re-create it artificially. The Italian scientist Luigi Galvani performed well-publicized experiments in which he tried to bring dismembered frog legs to life by zapping them with an electrical current. Reports of these experiments inspired Mary Shelley’s novel Frankenstein, whose hero, the mad scientist, is steeped in the vitalist philosophies of his time.

When reading about Brooks and his team at MIT, I often got the feeling they were engaged in a kind of alchemy, carrying on the legacy of those vitalist magicians who inspired Victor Frankenstein to animate his creature out of dead matter—and flirting with the same dangers. The most mystical aspect of emergentism, after all, is the implication that we can make things that we don’t completely understand. For decades, critics have argued that artificial general intelligence—AI that is equivalent to human intelligence—is impossible, because we don’t yet know how the human brain works. But emergence in nature demonstrates that complex systems can self-organize in unexpected ways without being intended or designed. Order can arise from chaos. In machine intelligence, the hope persists that if we put the pieces together the right way—through ingenuity or accident—consciousness will emerge as a side effect of complexity. At some point nature will step in and finish the job.

It seems impossible. But then again, aren’t all creative undertakings rooted in processes that remain mysterious to the creator? Artists have long understood that making is an elusive endeavor, one that makes the artist porous to larger forces that seem to arise from outside herself. The philosopher Gillian Rose once described the act of writing as “a mix of discipline and miracle, which leaves you in control, even when what appears on the page has emerged from regions beyond your control.” I have often experienced this strange phenomenon in my own work. I always sit down at my desk with a vision and a plan. But at some point the thing I have made opens its mouth and starts issuing decrees of its own. The words seem to take on their own life, such that when I am finished, it is difficult to explain how the work became what it did. Writers often speak of such experiences with wonder and awe, but I’ve always been wary of them. I wonder whether it is a good thing for an artist, or any kind of maker, to be so porous, even if the intervening god is nothing more than the laws of physics or the workings of her unconscious. If what emerges from such efforts comes, as Rose puts it, “from regions beyond your control,” then at what point does the finished product transcend your wishes or escape your intent?

Later that spring I learned that the food-delivery robots had indeed arrived during the break. A friend of mine who’d spent the winter on campus told me that for several weeks they had roamed the empty university sidewalks, learning all the routes and mapping important obstacles. The machines had neural nets and learned to navigate their environment through repeated interactions with it. This friend was working in one of the emptied-out buildings near the lake, and he said he’d often looked out the window of his office and seen them zipping around below. Once he caught them all congregated in a circle in the middle of the campus mall. “They were having some kind of symposium,” he said. They communicated dangers to one another and remotely passed along information to help adapt to new challenges in the environment. When construction began that spring outside one of the largest buildings, word spread through the robot network—or, as one local paper put it, “the robots remapped and ‘told’ each other about it.”
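The remapping behavior the paper described follows a familiar pattern in fleet robotics: each robot plans its route over a shared map, and any robot that detects a blockage publishes the update for all the others. The sketch below is schematic only; the campus graph, names, and protocol are invented, since the vendor’s actual system is not public.

```python
# Schematic of fleet-shared mapping: one robot reports a blockage,
# and every robot's planner immediately routes around it.
# Graph and names are invented; the real delivery fleet's protocol
# is not public.

import heapq

class SharedMap:
    """Fleet-wide obstacle knowledge; a server in practice, a dict here."""
    def __init__(self, edges):
        self.edges = dict(edges)      # (a, b) -> traversal cost
        self.blocked = set()

    def report_blockage(self, a, b):
        self.blocked.add((a, b))
        self.blocked.add((b, a))

    def neighbors(self, node):
        for (a, b), cost in self.edges.items():
            if a == node and (a, b) not in self.blocked:
                yield b, cost

def shortest_path(shared, start, goal):
    """Plain Dijkstra over whatever the fleet currently believes."""
    dist, prev, seen = {start: 0}, {}, set()
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in shared.neighbors(node):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    return None

# A toy campus sidewalk graph with symmetric edges.
edges = {}
for a, b, c in [("dorm", "mall", 2), ("mall", "union", 2),
                ("dorm", "lakepath", 3), ("lakepath", "union", 3)]:
    edges[(a, b)] = edges[(b, a)] = c

campus = SharedMap(edges)
print(shortest_path(campus, "dorm", "union"))  # via the mall
campus.report_blockage("mall", "union")        # one robot hits construction
print(shortest_path(campus, "dorm", "union"))  # the whole fleet reroutes
```

In a pattern like this, no individual robot “tells” another anything in a conversational sense; the knowledge lives in the shared map, which is part of why the symposium on the campus mall looked more uncanny than it was.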


One day I was passing through campus on my way home from the library. It was early evening, around the time the last afternoon classes let out, and the sidewalks were crowded with students. I was waiting at a light to cross the main thoroughfare—a busy four-lane street that bifurcated the campus—along with dozens of other people. Farther down the street there was another crosswalk, though this one did not have a light. It was a notoriously dangerous intersection, particularly at night, when the occasional student would make a wild, last-second dash across it, narrowly escaping a rush of oncoming traffic. As I stood there waiting, I noticed that everyone’s attention was drawn to this other crosswalk. I looked down the street, and there, waiting on the corner, was one of the delivery robots, looking utterly bewildered and forlorn. (But how? It did not even have a face.) It was trying to cross the street, but each time it inched out into the crosswalk, it sensed a car approaching and backed up. The crowd emitted collective murmurs of concern. “You can do it!” someone yelled from the opposite side of the street. By this point several people on the sidewalk had stopped walking to watch the spectacle.

The road cleared momentarily, and the robot once again began inching forward. This was its one shot, though the machine still moved tentatively—it wasn’t clear whether it was going to make a run for it. Students began shouting, “Now, now, NOW!” And magically, as though in response to this encouragement, the robot sped across the crosswalk. Once it arrived at the other side of the street—just missing the next bout of traffic—the entire crowd erupted into cheers. Someone shouted that the robot was his hero. The light changed. As we began walking across the street, the crowd remained buoyant, laughing and smiling. A woman who was around my age—subsumed, like me, in this sea of young people—caught my eye, identifying an ally. She clutched her scarf around her neck and shook her head, looking somewhat stunned. “I was really worried for that little guy.”

Later I learned that the robots were observed at all times by a human engineer who sat in a room somewhere in the bowels of the campus, watching them all on computer screens. If one of the bots found itself in a particularly hairy predicament, the human controller could override its systems and control it manually. In other words, it was impossible to know whether the bots were acting autonomously or being maneuvered remotely. The most eerily intelligent behavior I had observed in them may have been precisely what it appeared to be: evidence of human intelligence.


From the book God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning, by Meghan O’Gieblyn. Published by Doubleday, a division of Penguin Random House LLC.


