Bringing religion to the Artificial Intelligence lab
Anne Foerst, resident theologian at MIT's Artificial Intelligence Laboratory
Interview by John Zollinger
Anne Foerst is a study in contrasts. A minister, she holds a doctorate in theology as well as degrees in computer science and philosophy. Such a mixed background might at first glance seem to reflect a dilettante's indulgences. For the 34-year-old German researcher, however, the clash of science and religion is far from a light-hearted affair, and she has carried the issue to one of the most ambitious science projects ever undertaken: the quest to create a robot capable of human learning and interaction.
As the resident theologian for the Massachusetts Institute of Technology's Artificial Intelligence Laboratory, Foerst aims to illustrate that spirituality plays just as big a part as servos and software when it comes to creating robots.
At the invitation of the lab's director, she's taking part in the Cog (www.ai.mit.edu/projects/cog) and Kismet (www.ai.mit.edu/projects/kismet) programs. The former is an attempt to build a robot that can approximate the sensory and motor dynamics of the human body. The latter, a subdivision of the Cog project, revolves around the creation of a robotic head capable of articulating a host of basic human emotional expressions with its eyes, mouth, and ears. Through these experiments and other ancillary projects, the researchers are exploring the hypothesis that acquiring humanoid intelligence requires humanoid interactions with the world.
Departing from the traditional focus on robot-to-robot interactions, the group is looking to build robots that engage in meaningful social exchanges with humans. In theory, such interactions could make it possible for a human to assist the robot in acquiring more sophisticated communication skills and to help it learn the meaning these acts have for others—just as human babies learn from their parents.
So where does a theologian fit in this scientific endeavor? Foerst is examining the role religion plays in the lab on multiple planes. Through observation and interaction with the researchers, she underscores the fact that they often bring religious underpinnings to their work—be it consciously or unconsciously. In addition, she probes issues beyond the purely technical front, such as what the essence of being a robot—and by extension, being a human—truly is.
Besides her lab work, Foerst has sought to heighten the discourse on the schism between religion and science that has evolved since the age of the Enlightenment. Having created the "God and Computers" lecture series in 1997, she's brought numerous religious and AI scholars together to debate both the conflicts and confluences of the two fields.
While her work has garnered praise from many colleagues, it has also earned the wrath of peers who believe that the division between science and religion should be strictly preserved. Marvin Minsky, the MIT professor who founded the Artificial Intelligence lab in 1959, has labeled Foerst's teachings as nothing less than evangelistic.
In a telephone interview with Networker @USC, Foerst reflected on everything from the golem—mythic men made of mud found in the ancient Jewish tradition—to the quest some modern scientists have of attaining immortality by downloading their brains onto the Internet.
Networker: What do you do as the resident theologian at MIT's Artificial Intelligence Laboratory?
Anne Foerst: I'm working in three different directions. The first direction is to bring theological insight into the AI community, and that has two aspects. The first aspect is to analyze the religious underpinnings and existential questions underlying people's research—in AI this is particularly the desire to build artificial humans and, secondly, the desire to analyze everything that's going on in us and therefore to get rid of a lot of our problems. In many ways, many researchers also wish to attain eternal life through technology, avoiding death by building and rebuilding themselves artificially. So this is one aspect of the work.
The other aspect of the work is to bring concepts like the dignity of a person into a completely mechanized, functionalistic understanding of what it is to be human.
AI researchers have to assume that we are nothing but machines, because otherwise they might as well give up their research. That's a fine working assumption, and it allows them to develop a lot of great research, but it becomes dangerous when it turns into an ontological statement: when, instead of being an assumption, it is a statement about what humans really are. As soon as that happens, it becomes very dangerous, because concepts like dignity and personhood cannot be explained in an objective or scientific way. These are concepts that we assign to one another.
I raise questions like "When would you assign personhood to a robot?" or "When would you hesitate to switch it off?" and questions like that which make those people think about their own assumptions and about their own concepts of dignity.
NW: What motivated you to make a link between theology and computers?
AF: I was studying theology and I was studying computer science. I love the field of theology and I'm completely amazed by it because you learn so much about humans: about who we are, about how chaotic we are, about how wonderful but so weird we are at the same time. I also liked the people who did computer science a lot, but they thought completely differently from the people in theology.
NW: What are the basic conflicts between the people who approach Artificial Intelligence as empirical research and people who wish to approach it from a theological stance?
AF: This is the point of my work within the lab—some people are not aware of their theological underpinnings. They really think they are doing their research in an objective way. And this is where it becomes interesting. Because when I point out, "Don't you realize that you're going back to very old myths here?" some of the people are not willing to acknowledge that. They think, "Well, I have no hidden agenda. I'm purely objective. Anybody who tells me something different is wrong." This is the big inherent danger. Because when you think you have no hidden agenda and everything is just fine and dandy, then what you do in your research becomes ontological.
When you're doing it in a very pragmatic manner and say, "Okay, I assume that we are nothing but machines so let's just see how far we can get," that would be fine. But as a matter of fact, people who are interested in that sort of research field usually don't approach it that way. They usually have a hidden agenda behind that.
For instance, in my research group [the goal] is to get rid of a disembodied understanding of intelligence and to include social interaction and embodiment. That's great. But it can lead to, "Then we will ultimately build robots which are exactly like us and ultimately we will make ourselves superfluous." Then there is no reason for us to exist anymore. And at this moment it becomes a religious statement.
But some of the people are not aware of the schism. Well, at least now they are aware, but they weren't. I mean, for them, being nothing but machines was a dogma and a completely objective statement. And [if] anyone said, "What about personal dignity?" they thought that was silly or said, "We can reduce all that to brain cells or mechanisms or whatever."
For me, the statement is a religious statement as soon as it is not universally valid anymore. That means, for instance, if people claim to overcome death by being able to download the brain's contents into a computer and therefore have the person live forever, they forget that this is perhaps not the understanding of eternal life some people want to adopt. I wouldn't want to live without my body in a computer even though it might be technologically possible.
NW: That anticipates my question, what is your take on the idea of someone downloading their brain contents onto the Web? Were it possible, what implications do you see arising?
AF: The argument against that is not entirely theological; I also have a lot of scientific arguments against it. That is what my lab in particular is doing: pointing out that there is no split between body and mind. Mind is body. We are biological beings, biological creatures who developed over the course of evolution. To deny that and say, "We are so great and so abstract that it's really only the mind that counts," is basically, sorry to use the word, bullshit.
We know so much about [the fact] that the body has local memory, that the body is absolutely crucial for the way we see things. We know, for instance, that our eyes are not cameras that record the world; what we see is actually matched against expectations created in the brain, against pictures created by the brain which shape the way we see. And this is culturally embedded. So even our sensory apparatus is tied to the body that we live in and in which we were raised. This of course ties into the society, the cultural values, the cultural stories, all that in which we were raised. There is so much scientific evidence for this that, just from that perspective, the idea of downloading the brain seems completely unfeasible.
NW: What else are you doing in the lab?
AF: The second thing I'm doing is working to fight prejudices against and fears of these technologies. Basically, people think that if we are building robots which are like us, that make eye contact, which react socially, which trigger emotions in us of tenderness or whatever, it means that we are nothing but machines, right? That's the conclusion they always draw, and it makes them afraid, and therefore they don't want to know about it. And I always point out, "Wait a moment. From a purely logical point of view this is not true." We assume that we are nothing but machines and we build robots on that assumption, and a project can never prove its own assumption. So when we build a robot that is like us, the only thing it tells us is that it is possible to build a robot that is like us entirely on mechanistic and functionalistic assumptions. But that doesn't mean those assumptions are true.

So the question I try to raise is "Why do people give so much power to technology?" Why do they give technology power over their own selves, in a sense? They do it with technology and they do it with science as well. Why does evolution mean that we are not created by God? The one is a scientific theory and the other is a religious issue, a religious commitment, a religious faith. They answer different questions in human life, but they are nonetheless enormously connected. I think that's wonderful. Why do they give science so much power over their lives that they think evolutionary theory is correct and the whole creation story is wrong?
[Regarding] technology, it has become so crucial that people have really learned to understand themselves as machines. All these machine metaphors, like "I couldn't store it" or "I couldn't process it," are ways that people put themselves into the same category as computers. I really question that, and I think all these fears of AI are very much due to the fact that people do that. And that is due to the fact that our society believes in the objectivity of the truth of science. This is what I address in my work here at the lab, by pointing out hidden assumptions and hidden agendas.
The third thing in my work—and this is actually the one which is most dear to me—is that I want to go back to theology and bring all of my insights about what society is about and what technology is all about back into theology. I want to bring the insights about embodiment and social interaction back into theology. Because my tradition, the Christian tradition, has often bought into a very disembodied, mind-and-rationality-centered sort of theology. And I would like to bring embodiment and spirituality and the emotions and all that back in from the scientific side. That is what I'm working on right now.
NW: Describe your involvement with Cog.
AF: I started with Cog because Cog was the only social robot. What fascinated me about Cog was that it's as if all the fears and hopes of AI—all the myths and everything—were collected in one single project: the whole idea of recreating us. Cog was the first explicit attempt to rebuild a human.
There has been a long history of myths about that, and a long history of hopeful statements about building artificial humans, but no one had ever actually tried to do it in an academic lab. And here the people actually tried. In the Jewish and Christian traditions there is a whole ambiguity about rebuilding ourselves. On the one hand, this is very often connected with hubris; with Frankenstein and stories like that, it's hubris to try to be like God. But there are actually a couple of aspects that challenge that very one-sided interpretation of those projects.
One is the Frankenstein story itself, and the other is the golem tradition in Jewish mysticism and the cabala. The cabalists who, in the 15th and 16th centuries, supposedly built artificial men from clay called golems actually understood this work as prayer. And they understood it as prayer because they said, "We are created in the image of God; that means we have God-given creativity. So each time we use our creativity, we are adoring and celebrating God and God's creativity in us. And since we are the crown of creation, when we actually rebuild ourselves we are using our creativity the most, hence celebrating God the most." So for them, building an artificial man was actually prayer and service.
Consider what we experience with Cog. Cog is supposed to be a newborn baby, and every newborn baby is so far ahead of Cog, [even though] the best engineers and computer scientists in the world have been trying to build that robot since '93. It's still probably one of the best robots in the world, but compared to even a newborn it's pathetic. So basically Cog raises enormous respect for the human machinery. In that sense, it is in a way service. I think most of my lab comrades would never formulate it in that spiritual way, but they all agree that they have learned to respect humans much more since they tried to build one. This is what we can learn from the Jewish tradition, which I think is just wonderful; it turns the whole issue around.
The second thing is with Frankenstein. When you look at the monster and its bad development, the reason it goes berserk in the end is that it never got a name, it never was accepted in the community, it never had a friend. It was basically an outcast. If our theories are right, our robots are treated as part of the community from the moment we switch them on. So if they ever reach a certain level, they will have been raised in a community, they will be part of a community, and that seems to make it very unlikely that they will actually turn against us.
NW: If Cog matures to the point where it's learning, are you going to be the one that imbues it with religious beliefs?
AF: It is already learning a lot. It is learning to move its body; it's learning body-arm-eye coordination, like a newborn does. But I think you have to distinguish between theological advice and spiritual advice. I am not here to do anything with Cog, because it is not feasible that anything will come out of Cog within the next years for which such a spiritual adviser would be necessary. Being a theological adviser means that I have raised the questions and made people aware of questions which they thought weren't there.
Some think they still are not there, and here we come to Kismet. Kismet is in part a result of my participation in the group. Out of all the discussions we had had about what it means to be human, Cynthia Breazeal, the engineer who built Kismet, had the idea to build a robot which is entirely dependent on interaction: to really build a robot which is like a newborn baby, completely dependent on people treating it nicely and interacting with it. Kismet has this extremely cute face; it's just an amazing robot.

We are using a lot of hard-wired mechanisms in our own brains. We have, for instance, in-built facial-recognition apparatus. We can assume that all humans who interact with Kismet interpret its facial expressions in the same way, or in similar ways, and we've actually done experiments which have shown that. The second thing is that we also have in-built what is called in the literature the "baby scheme," which means we react to certain features: a round face, a high forehead, big eyes, a sucking mouth, a general sense of cuteness, a big head versus a small body, small extremities, those kinds of things. We can't help but act enormously protective whenever we see something like that—Hollywood uses that a lot—and so when you sit down in front of Kismet, you can't help but react emotionally. You're tied emotionally to that robot. If it looks sad, you think, "Oh, how can I make you happy?" And you really try very hard to entertain the robot, to make it look happy, to make it look content. It's the same mechanism that works between babies and their parents at the beginning. It's in-built; we can't help it.

Kismet is basically the first robot which uses our human, hard-wired social mechanisms to learn, to create an environment for itself in which it can learn, which is really how human babies do it.
NW: Is there any notion of having Kismet go beyond just having its cognitive processes grow, actually giving it another body at some point?
AF: Since Cog and Kismet are built on the same architecture the dream certainly is to put the two together, thus giving Kismet a body. But right now, these things that we are doing here are so cutting edge and so complicated, it will take years before we can really think about going beyond what we have right now.
NW: Shifting a bit, could you elaborate on the reaction at MIT when you first offered your "God and Computers" series?
AF: There has never been a class like that in the engineering department, and it really says a lot for MIT that they allowed me to teach it. A couple of days before the term started, I sent out the kind of three-paragraph class description that everyone sends out. It was two days before registration day and I just wanted the faculty to know. Unfortunately, I sent it not just to the faculty but to a list called "ALL AIs," which went to probably 1,500 people. And so Marvin Minsky answered and basically said that the course was an evangelical enterprise, that it should not be taught in an academic environment, that I would brainwash the students, that there was just no way he would agree to it, and that it should be stopped immediately. And of course, since it was Marvin Minsky, a lot of people immediately replied and said yes, he's absolutely right, and then they said things like "Anne is pathologically deluded; how can anyone believe in God these days?" The main argument was always that MIT has always tried so hard to be objective, and now this theologian comes along and destroys all that.
At first, it was really hard to deal with, because I was personally attacked in a way I hope never to experience again. That was really nasty. But after a while I thought about it and decided it was actually great, because these people proved exactly my point. Why did they react so frantically? Why did all these people who claim to be so busy write up to 10 emails a day? They were so vehemently opposed to a class which really was not at all dogmatic; it just brings together various concepts about what it means to be human. They didn't even look at that. They heard the word religion, said "Whaaa!" and became aggressive. That [reaction] was so religious in itself, even dogmatic in itself, even fundamentalist in its way.

Then, when the public lecture series I had organized on "God and Computers" began, there were 350 people there the first time, because people just wanted to know what was going on. Since I had invited speakers from the region who are extremely famous, extremely good, people realized I was not out to convert anyone. I had Jews, I had Christians, I had Jewish atheists, I had Christian atheists, I had a Hindu, I had a Muslim, I had a Buddhist, and it was really cool. And because it was such a success, I did it the next year, and again I got at least 200 to 250 people each week. It was a major success. This year it was run together with the dean's office, so it became more of a whole-MIT sort of thing.
NW: What are your plans for the future, what directions would you like to go?
AF: This is the question I'm debating within myself right now. I think I would very much like to continue the line of reasoning I'm pursuing. But, on the other hand, I'm so eager to go back to my own field. I just want so badly to go back to theology and bring all my insights back into it. So I guess what I will end up doing is going back to a theology faculty. This fall I'm giving a major public lecture series at Columbia University. For me, this lecture series is an attempt to see if I can still talk to theologians, which I very much hope will be the case.