E-Intentionality: Weekly Meetings
from: http://www.cogs.susx.ac.uk/projects/e-int/aigod.html
Notes from meeting 12 Jan 2001
Presenter: Ron Chrisley - "AI and God"
News/notices:
Presentation: "AI and God"
In anticipation of my invited presentation to the Society of Ordained Scientists Consultation on AI, to be held at Windsor next month, I discussed two of my ideas on the connection between AI and God or the spiritual. First, I brought out two inter-related issues: 1) the difficulty that naturalists have in making the artificial/natural distinction, and thus in making sense of *artificial* intelligence, and 2) the fact that the concept of AI is traceable back to some of the ancient creation stories, in particular the Judeo-Christian creation story. Here is a relevant excerpt from the introduction to the set I edited, Artificial Intelligence: Critical Concepts:
"Another conceptual difficulty concerns the natural/artificial distinction itself. That it is crucial to the concept of artificial intelligence is without doubt: for example, the distinction is required in order to prevent sexual reproduction from being classified as achieving artificial intelligence (although perhaps an investigation, informed by theories of gender politics and power, into why such an exclusion is seen to be of paramount importance, is called for here). Yet most of the participants in the field of artificial intelligence today are naturalists: they join with Boyle in rejecting the Aristotelian distinction between natural and artificial; they think that there is no special dualism between humans and animals, mind and matter; there is no supernatural soul. Yet if humans are natural, then surely artefact construction is itself a natural activity. And so the distinction between artificial and natural intelligence seems to be an untenable, arbitrary one, on which nothing can hinge.
This may be the case, but it works in favour of artificial intelligence research, not against it. It is only the critic of the possibility of artificial intelligence that needs to make a sharp, principled distinction between it and natural intelligence, so that the former, unlike the latter, can be shown to be impossible. For the naturalist, the contrast between artificial intelligence and natural intelligence becomes one between natural intelligence brought about in a (particular) new way and natural intelligence brought about in the traditional way.
Perhaps a more serious difficulty with the concept of artificial intelligence is an apparent contradiction in the very notion. "Artificial intelligence" is an oxymoron, since intelligence implies, and artefactuality is inconsistent with, autonomy. Take the latter point first: Aristotle held that an artefact is defined in terms of the purpose or function that it is intended to have. If it does not have an intended function, then (at least for Aristotle) it is not an artefact. Contrast this with our notion of thinking, as expressed by Morris: "Something whose 'success' is too closely dependent upon the intervention of a designer or supervisor does not count as a thinker, because it is not responsible for what it does... The notion of responsibility used here... is meant just to capture whatever is required to make sense of the idea that a person is the doer of her deeds. Someone who is responsible in this sense is not always to be praised or blamed for what is done - if praise or blame is due at all. But, where praise or blame is due, the doer of a deed should get it, unless there is some other person to whom it can be passed. And we cannot make sense of a person being a responsible subject in this sense, if she could never be properly praised or blamed." (Morris, M. (1992) The Good and the True. Oxford: OUP, p 206. [Later note: Maggie Boden makes a similar point in "AI: A contradiction in terms?" in Purposive Explanation in Psychology]).
So for an artefact to be truly thinking, it must be that it itself, and not its designer, is responsible for its actions. And yet inasmuch as it is itself responsible, it is to the same degree difficult to talk of it being an artefact, with a pre-given function or purpose. For responsibility seems to require the ability to determine one's own purposes and goals. In this respect, this conceptual conundrum reflects the centuries-old debate concerning the problem of evil. If God created us (if we are artefacts), then He is responsible for what we do (an untenable conclusion, since some of the things we do are evil, and yet God is omnibenevolent). If we are to be given responsibility for our actions (if we are to be thinkers), then it must be that we are not God's artefacts, we are not His creations (which violates the Judaeo-Christian view of God as sole creator of "all that is, seen and unseen").
A technical response to this situation would be to notice that Morris speaks only of thinking, not intelligence. This leaves room for the possibility of a kind of non-autonomous intelligence that is less than thinking, but which does not contradict artefactuality. This is cold comfort, however, to the majority of the people interested in artificial intelligence; for them, nothing less than a fully autonomous, thinking artefact will do. Another option is to reject the Aristotelian idea that an artefact must have a designer-intended purpose or function, thus leaving room for artefacts which are autonomous, independent of us, and thus candidates for true intelligence. But then we no longer have a handle on what makes something an artefact. Simply being the end product of causal sequences which we initiate is not sufficient, since that would include natural childrearing. But at this point the dialectic returns to familiar ground, and we are back at the end of the discussion of the artificial/natural distinction. It may be that no such distinction is necessary in order to do artificial intelligence, but recognising that the distinction is arbitrary may change one's view of the origins of the concept of artificial intelligence, and one's expectations for its future course."
Second, I mentioned the idea that the explorations in cognitive science into the conditions that must be met in order for mental concepts to be naturalised, or else eliminated, can be applied to the case of spiritual concepts as well. Thus, since we now know that reduction is too strong a requirement for the naturalisation of mental states, we should not jump to eliminating the spiritual if we conclude that it is irreducible to the physical, as it no doubt is. But then what constraints do we put in the place of reduction in order to keep mentality and rule out, say, astrology? And do spiritual concepts meet these requirements? Or is a naturalised spirituality, following Boden, a "contradiction in terms"?
Ron Chrisley