Wallowing in the Quagmire of Language: 
Artificial Intelligence, Psychiatry, and the Search for the Subject
Phoebe Sengers
Literary and Cultural Theory / Computer Science
Carnegie Mellon University

The rate at which society is becoming electronically interconnected is truly
amazing. This is what you're thinking to yourself as you settle in for another
electronic session with your therapist. Sure, it lacks the je ne sais quoi of a
face-to-face session. But it's so much more convenient, and anyway there's
something comforting about its lack of personal confrontation. The therapist
can't tell that you're looking out the window or reading netnews during the
session.

But you're starting to feel a little unsettled. In the chat room at the
university, you'd heard rumors that some of the therapists that work for
InterPsyche are being replaced by computer programs. Of course, this is all just
urban legend. Artificial intelligence simply isn't that advanced - maybe some
people could have a satisfying session with Eliza, but the in-depth heart-to-
heart conversations you have with your therapist could only be between flesh-
and-blood, bona fide humans. But today, you've noticed that her language is a
little wooden, her metaphors a little stilted, and she certainly repeats that
annoying line, ``I hear you,'' a little too often. The seed of doubt has settled
in. You start to wonder: is she just a program? Does she really understand what
you're going through? Does she really understand anything you're saying?

Let's face it, people are prejudiced against machines. Once you start thinking
your therapist is a computer, it's hard to think of her as compassionate, as
real, as understanding you, as a subject. This is a fundamental problem
Artificial Intelligence (AI) researchers face. Our goal - as yet, it is true,
only very modestly achieved - is to build programs that could replace your
psychiatrist, your doctor, your dog, your colleague: programs that behave
competently and can take the place of a human or other agent in a social
situation. But we stumble at the issue of how to evaluate our creations, since
the very fact that an agent is mechanical, not biological, is enough to undo
someone's view of that agent as an intentional, feeling being. How can we prove
that our agent is really conscious? How do we show that it is not `merely'
mechanical but truly occupying a subject position?

Like all bad things about Western civilization, this attitude finds expression
in the work of Descartes. A century before La Mettrie asserted the identity of
man and machine in ``L'Homme Machine,'' Descartes pronounced it impossible to
build a machine that was more than a body, and hence a true subject. Bodies
are mechanical, so animals, which do things only by
instinct, could be successfully imitated by machines; your Virtual Dog is in the
works. But no machine can be conscious as people are because no machine can
speak natural language.

[I]f there were any machines that resembled our own bodies, and imitated our
actions as much as possible, we should still have... certain ways of knowing
that they were not real men. The first is that they would be unable to put
together words, or any other signs, as we do, to utter our thoughts. For we
can certainly conceive of a machine so constructed that it can utter words,
and can even utter words in relation to bodily actions that cause some
change in its organs. Thus, if we touch it in one spot, it may ask us what
we want with it; if we touch it in another, it may cry out that it is being
hurt, and so on. What it cannot do is to arrange its words in varying ways
so as to reply sensibly to whatever is said in its presence, as the stupidest
of men can do (79-80).

Note that there are two things going on in this argument: of course, there is the
usual statement that bodies are basically mechanical and hence unrelated to
conscious experience; but more importantly, Descartes claims that there is a way
to access the mind, to test for subjectivity, and that is through natural
language. One can verify the subjectivity or lack thereof of an agent by
engaging in dialogue with it. Via natural language we can come to a decision
about the inner disposition of our conversational partner.

Some artificial intelligence researchers have come to a similar conclusion. Alan
Turing, one of the originators of the field of computer science, addresses this
issue in a seminal article in which he discusses the question, ``Can machines
think?'' Turing admits that thinking about this question in the abstract,
involving ordinary definitions of ``machine'' and ``think,'' is impractical and
unscientific. Instead, he redefines the question ``Can machines think?'' to
allow for objective experimentation. He proposes a test (he called it an
``imitation game;'' subsequently it has been called the ``Turing test'') to
determine whether or not a machine should be considered conscious. A judge
examines a person and a computer, asking each questions in an attempt to
determine which is which. The examination takes place over a teletype so that
the computer will not be handicapped by its lack of a humanoid body. If the
judge in repeated trials cannot determine with greater than random accuracy
which is the computer and which is the person, the computer will be declared to
be conscious (whether or not it actually is).
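
The decision rule here is statistical rather than introspective. A minimal
sketch in Python of that rule (the judge, human, and machine below are
hypothetical callables, not anything Turing himself specified):

    import random

    def machine_passes(judge, human, machine, trials=100):
        """Toy rendition of Turing's criterion: over repeated trials, a judge
        interrogates two hidden parties over a text channel and guesses
        which one is the machine."""
        correct = 0
        for _ in range(trials):
            parties = [human, machine]
            random.shuffle(parties)   # randomize which terminal is which
            guess = judge(parties)    # judge returns the index it thinks is the machine
            if parties[guess] is machine:
                correct += 1
        # The machine passes if the judge does no better than chance (50%).
        return correct / trials <= 0.5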

This story repeats several themes recognizable from Descartes's statement: the
body of the agent is fundamentally unimportant, and because of this natural
language becomes the supreme arbiter of whether or not an agent should be
considered conscious. The question both Descartes and Turing need to wonder
about is, does language allow access to the subject, or does it itself construct
a subject? If a subject is constructed narratively, we can have no confidence
that the subject these linguistic tests are supposed to access exists prior to
the test itself.

An objection to this line of reasoning is that artificial agents that speak
competently across a broad range of subjects do not (yet?) exist. So far, these
questions are purely philosophical. But there is one agent who has passed a form
of the Turing Test: Kenneth Mark Colby's PARRY, the artificial paranoiac. In
the tradition of the treatment of the mentally ill in this nation, we should
feel free to use him as a testbed for our experiment in the creation of
subjectivity through language.

PARRY is a program, written in the early 1970s and refined into the 1980s, that
simulates a paranoid patient in a clinical interview.

Dr. - How did you come to be in the hospital?
Pt. - I am upset
Dr. - How long have you been in the hospital?
Pt. - About a week
Dr. - In what way are you upset?
Pt. - People make me nervous
Dr. - In what way do they make you nervous?
Pt. - People give me a funny look sometimes (Colby 75-77)

PARRY passed the Turing Test in the following sense - the judges (psychiatrists)
were not told ahead of time they might be conversing with a computer. They each
did two separate clinical interviews, one with PARRY and one with a (real)
hospitalized paranoiac. They were to judge each on level of paranoia. Afterwards
they were told there was a possibility either or both interviews could have been
with a computer program, and were asked for each interview whether or not they
thought it had been done with a computer. The results were no better than random
guessing. Computer professionals were also sent the protocols of the interviews,
and they did no better at identifying which involved the computer. In this
sense, PARRY passes the Turing Test, and by Turing and Descartes's rubrics
should be considered conscious.
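
``No better than random guessing'' is an empirical claim, and the criterion
can be made concrete. A minimal sketch using SciPy's binomial test (the
numbers below are placeholders for illustration, not Colby's actual data):

    from scipy.stats import binomtest

    def consistent_with_chance(correct, total, alpha=0.05):
        """True if the judges' machine-vs-human identifications are
        statistically indistinguishable from coin-flipping."""
        return binomtest(correct, total, p=0.5).pvalue >= alpha

    # e.g., 5 correct identifications out of 10 interviews:
    print(consistent_with_chance(5, 10))   # True: no better than guessing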

How is PARRY taken to be a subject? How, in this case, does language construct
the subject? Colby's claim is that PARRY mimics the natural process by which
paranoiacs engage in conversation, that the program's structure is isomorphic to
the `deep structure' of the mind of the paranoiac. I mean `deep structure' in
the Chomskian sense: those processes working behind the scenes to create the
surface input-output relations. ``Since we do not know the structure of the
`real' simulative processes used by the mind-brain, our posited structure stands
as an imagined theoretical analogue, a possible and plausible organization of
processes analogous to the unknown processes and serving as an attempt to
explain their workings'' (Colby 21). Deep structure is what is essential and
behind the scenes; it is the structure of the subject, which we will access via
natural language. The difference between an arbitrarily signifying stream and
a truly signifying machine is a question of deep structure, of intention or
what is on the other side of the sign.

For Turing, if you cannot linguistically tell the difference between a person
and a computer, the computer must be considered, like the person, internally
conscious. Colby's claim is similar: since you cannot linguistically tell the
difference between a paranoiac and PARRY, PARRY's internal structure as spelled
out by his theory must be the same (in some sense) as the internal structure of
a paranoiac. The question is, how much of that imputed paranoia comes from the
internal theory, how much from implementation detail, and how much from other
contingencies of the situation in which PARRY was judged? And given the answers
to those questions, what kind of knowledge about paranoia can we extract from
PARRY's workings?

Since we are testing PARRY using discourse, its natural language understanding
and generating mechanisms are paramount. One might presume that paranoiacs
understand a lot of what is said to them, even if they place it within the
context of a delusional system. The fact is, natural language processing is not
up to speed - PARRY simply can't, using today's technology, understand what is
said to it. Colby decides to go with a pragmatic natural language system. When
PARRY is trying to decide how to respond to the interviewer, it does not try to
extract a complete, nuanced `meaning' but ``some degree, or partial,
idiosyncratic, idiolectic meaning'' (38). In particular, it understands the
meaning of the user's sentences only to the extent that it will be able to
respond. It is not important that it be able to understand everything that may
be said to it (``the seductive myth of generalization'' (38)); it only needs to
understand the things to which it needs to respond.
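
In practice this means a pattern-directed scan: the input is never parsed in
full, only searched for the handful of cues the program already has answers
for. A minimal sketch of such partial, idiolectic understanding (the patterns
and canned replies are invented for illustration; Colby's actual rule set was
far larger):

    import re

    # Each pattern names the only aspect of the input that is 'understood':
    # the presence of a cue with a pre-written response attached.
    PATTERNS = [
        (re.compile(r"\b(hospital|ward)\b", re.I), "I shouldn't be here."),
        (re.compile(r"\b(underworld|mafia)\b", re.I), "I try to avoid the underworld."),
        (re.compile(r"\b(nervous|afraid|upset)\b", re.I), "People give me a funny look sometimes."),
    ]
    DEFAULT = "I don't get you."  # anything unmatched is simply not understood

    def respond(utterance):
        for pattern, reply in PATTERNS:
            if pattern.search(utterance):
                return reply
        return DEFAULT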

One unintended advantage of this semi-understanding algorithm is that it creates
part of the effect that makes PARRY seem paranoid. Colby quotes from a
transcript:

Dr. - I don't know about the underworld but when you mentioned the
underworld I thought you must have some contact with it and that's
what I would like to know more about.
Pt. - I try to avoid the underworld.
Comment by Judge 1: This definitely feels paranoid. It has that
feeling I associate with twisting. Avoiding
answering directly or going into detail. Being
suspicious of my questioning. (77)

In fact, PARRY avoids answering directly because the doctor's question was far
too long to understand. It avoids going into detail because all of its possible
output sentences are written by the programmer ahead of time, and it would take
too much effort to anticipate all the kinds of detail the doctor might like to
know. The other details the judge notices are effects of these two problems. In
this sense PARRY derives much of its realism from effects of its natural
language system rather than from its theory of paranoia.

The problem is, PARRY really is not all that impressive. Unlike with a human,
with a computer we can examine the agent's internal construction. PARRY's
program is extremely simple. All of its possible output sentences are entered
ahead of time. When run, PARRY merely classifies the input by a set of
superficial metrics and its current emotional state (the setting of the
variables Anger, Fear, and Mistrust, along with some superficial memory of
what has happened in the conversation so far) and prints out the corresponding
sentence from its list: a far cry from consciousness.
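
A minimal sketch of this control loop (the classification cues, thresholds,
and canned lines here are invented; only the shape of the loop follows
Colby's description):

    # Classify the input superficially, adjust the affect variables, and
    # emit a pre-written sentence. No parsing, no model of meaning.
    class ParryLike:
        def __init__(self):
            self.anger, self.fear, self.mistrust = 0.1, 0.3, 0.4

        def classify(self, text):
            text = text.lower()
            if any(w in text for w in ("crazy", "paranoid", "insane")):
                return "insult"
            if len(text.split()) > 15:
                return "too_long"   # long inputs are simply not understood
            return "neutral"

        def reply(self, text):
            kind = self.classify(text)
            if kind == "insult":
                self.anger += 0.3
                self.mistrust += 0.2
                return "You have no right to say that."
            if kind == "too_long" or self.mistrust > 0.7:
                return "I try to avoid that topic."  # evasion that reads as paranoia
            return "Why do you want to know that?"
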
Also, when one interacts with the program in a direct attempt to discover
whether or not it is human, strange disparities in its speech quickly arise.
One critic cites the following dialogue:

Interviewer: Would you rather be outside the hospital?
Machine: Yes I am. This place is not for me.
Interviewer: Where would you live?
Machine: I live in San Jose.
Interviewer: How would you support yourself? When you are out, will you be
able to pay rent and buy food?
Machine: It's about eight o'clock. (Kochen 540)

In light of this, PARRY seems less a paranoiac than a parrot. In fact,
sophisticated tests (where judges compare the protocols along various
dimensions other than paranoia and humanity) show that the paranoid speech
output by PARRY is significantly different from natural paranoid speech.
These tests show PARRY is more like the way psychiatrists expect paranoid people
to be than the way they really are, even in the superficial, linguistic sense.
How could the psychiatrists have been fooled?

One obvious answer is that psychiatrists expect bizarre, stereotyped behavior,
precisely the kind that can be imitated by a machine, from a patient in a mental
hospital. Insanity involves a breakdown of reason; it underscores one's
inability to control what goes on inside one's psyche. Ronell quotes a
schizophrenic: ``I am unable to give an account of what I really do, everything
is mechanical in me and is done unconsciously. I am nothing but a machine''
(118). Paranoia in particular represents a breakdown in signification - the
paranoiac reads into signs and discovers a meaning that is not really there. The
claim, then, is that PARRY derives its strength from a breakdown in human
discourse, specifically a breakdown that occurs in the psychiatric interview.

PARRY is represented to the psychiatrists as a signifying subject. The question
for the psychiatrists in judging is not whether or not PARRY's deep structure is
like the deep structure of patients. The psychiatrist has no access to deep
structure; in psychiatric interviews s/he only has access to the superficiality
of language and physical cues. Colby realizes that psychiatrists can only deal
with patients at the symbolic level; this, he claims, is why he creates PARRY as
a symbolic structure. But perhaps the pertinent detail is not that psychiatrists
deal with their patients on a symbolic level, but that they treat their patients
as symbols.

For the psychiatrist, every action of the patient - everything the patient
says, does, or thinks, hence the whole patient - is a symptom. The patient as
such cannot act, only signify. The same holds for PARRY: ``the only physical
action the model can perform is to `talk' '' (41). PARRY corresponds to the
patient in the sense that s/he is considered to be a pure stream of
signification, without any reference to an underlying subjectivity. Blanchot
describes the experience this way:

They would challenge my story: `Talk,' and my story would put itself at
their service. In haste, I would rid myself of myself.... Right before their
eyes, though they were not at all startled, I became a drop of water, a spot
of ink. I reduced myself to them. The whole of me passed in full view before
them, and when at last nothing was present but my perfect nothingness and
there was nothing more to see, they ceased to see me too. Very irritated,
they stood up and cried out, `All right, where are you? Where are you
hiding? Hiding is forbidden, it is an offense,' etc. (14).

Through the very reduction of the patient to the symbolic his or her
subjectivity disappears. The patient as person cannot be seen by the
psychiatrist. It is in this sense that PARRY and the patient are, for the
psychiatrist, truly alike.

PARRY's acceptance by the psychiatrists can be traced to two major sources: the
implementation details of its natural language understanding and generation
systems and the categorization by psychiatrists of their patients as pure
symbols, pure streams of signification, rather than as full subjects. These two
processes interact to create a mentally ill subject within a particular
discourse, that of the psychiatric interview. Things are looking dim for the
purchase of PARRY's underlying theory as a general theory of paranoia. What
does this mean for Colby's claims to scientificity?

Colby's claim is strong: his theory of paranoia is superior scientifically
because it avoids natural language. Colby contrasts his implemented model of
paranoia with the natural language explanations of a previous era. Natural
language is open to multiple interpretations; it is flexible; it is imprecise;
it is unscientific. ``Since natural language is vague and ambiguous, prose
theories are difficult to analyze'' (10). For example, Colby cites Freud, who
proposed that paranoia was due to ``unconscious homosexual conflict:''

Because of inconsistencies and difficulty in testing, the homosexual-
conflict explanation has not achieved consensus.... Freud's later attempts
at the explanation of paranoia assumed simply that love was transformed into
hate. This notion is too incomplete and unspecific a formulation to qualify
as an acceptable scientific explanation. Contemporary requirements demand a
more complex and precisely defined organization of functions to account for
such a transformation (12).

Colby has problems with Freud because Freud's explanations are not empirically
verifiable. And, in fact, he is right on the mark with this accusation. Central
to Freud's method is his employment of a hermeneutics of suspicion, a method of
inquiry that refuses to take the subject at his or her word about internal
processes. Freud posits explanations at an insane level of detail, explanations
posed without regard for whether the patient agrees. If the patient does not
agree, s/he has repressed the truth, that truth that the psychoanalyst alone
can be entrusted with unfolding. And it is to the extent that the
psychoanalyst's authority is all that stands behind a statement's truth-value
that these explanations are precisely not empirically verifiable.

Let us quickly note that their claims to validity are backed by one other
source, and that is the patient's return to health. That is, the pragmatic
effects of Freud's theories can be called on to justify their use. While the
hand of politics always plays a role, one can imagine, I believe, without being
too naive, that Freud's work would have to be of at least some use in order to
have gained and maintained the authority with which it is credited.

What is interesting is that if we look again at this hermeneutics of suspicion,
this unverifiable generation of explanation, we run across the model of the
paranoiac him or herself. For the psychoanalyst, as for the paranoiac,
``[n]othing can be allowed to be unattendable'' (2). ``He is greatly concerned with
`evidence.' No room is allowed for mistakes, ambiguities, or chance happenings''
(3). Like the psychoanalyst, the paranoiac lives under the rubric of a master
narrative that explains and hierarchizes all experience for him; the only
difference is that the paranoiac is considered `wrong.' As Agassi points out,
``whereas a person who is not persecuted but feels persecuted may be said to
suffer from persecutionism, a person who is persecuted and feels persecuted will
be judged as behaving normally'' (536). So while the paranoid classification
only applies to a small portion of society, the paranoid mode starts to look
suspiciously familiar.

In fact, we do not have to look too far to find that we are suddenly looking
back at Colby again. After all, we started with his inability to allow for
ambiguity or vagueness in natural language explanations. As a psychiatrist,
Colby inherits both Freud's hermeneutics and the scientific zeal for accuracy.
Consider an example: Colby's theory of paranoia is that the paranoiac is worried
about a flaw s/he perceives in him or herself, and that s/he lashes out against
others to defend him or herself. He asks, ``why does a man believe others will
ridicule him about his appearance unless some part of himself believes his
appearance to be defective'' (34)? It is precisely in this talk about ``some
part of himself'' and unacknowledged flaws that he engages the hermeneutics of
suspicion, the search for a behind-the-scenes, unverifiable explanation. At this
point it does not seem too off-the-mark to describe Colby, the scientist in
search of an unambiguous, error-free explanation, as paranoid according to
Hofstadter's description: ``the paranoid mentality is far more coherent than
the real world since it leaves no room for mistakes, failures, or ambiguities''
(qtd. in Colby 6). But as we noted above, paranoia invokes the nonverifiability
of one's theories because one seeks a coherent explanation stronger than
reality can provide. So Colby, even with his coherent, hyper-scientific model,
falls right back into the trap he is trying to avoid.

Colby is caught in the quagmire of natural language. His claim is that he has
built a `deep structure' that corresponds to the subject, what is hidden behind
the sign. The problem is that deep structure cannot be verified. To quote Colby,
``[t]here is an inevitable limit to scrutinizing the `underlying' processes of
the world. Einstein likened this situation to a man explaining the behavior of a
watch without opening it: `He will never be able to compare his picture with the
real mechanism and he cannot even imagine the possibility or meaning of such a
comparison' '' (28). We have no way of accessing this deep structure except
through interaction; once we have written the body out of the picture this
leaves only natural language, the vague and ambiguous, unanalyzable trickster.

To say that these explanations - those of Colby, those of Freud - are
unverifiable in an epistemological sense is not necessarily to say they are
worthless. It is just this sort of master narrative that psychiatrists can use
to enable change in the patient. Like Freud's work, deep structure models can be
verified pragmatically. But in order for the psychiatrist to use the model, s/he
must understand it; and this process of understanding, as Colby realizes, will
most likely be transmitted via vague, ambiguous natural language explanations.
And in the case of paranoia, even these explanations will do little good: Colby
himself states that ``[l]anguage-based or semantic techniques do not seem very
effective in the psychoses'' (102). In the end, aside from the benefits of
having an implemented patient itself (though one that can never be cured), very
little justification for PARRY's epistemological supremacy can be given.

That does not mean PARRY is worthless for AI researchers. In fact, PARRY teaches
them a valuable lesson: particular subjectivities can be constructed by
considering concrete situations (in this case, the situation of the psychiatric
interview) and tailoring the artificial agent to meet the demands of that
situation. Descartes may be right: the agent who answers properly in all
situations may not exist, but agents who do the right thing in a particular
situation can exist. Agent designers may have no luck creating the universal
subject, but subjects that are generated within specific practices are
achievable.

So far, we have elided bodies from the picture. Descartes thought bodies were
unimportant because they could obviously be mechanical; Turing and Colby thought
they were unimportant because they could obviously not be mechanical. In all
three cases it was because bodies were elided that natural language became so
important. But are bodies really so superfluous? As long as psychiatrists sleep
with their patients, the extra-linguistic aspects of the doctor-patient relation
seep in. One judge commented about PARRY, ``If I had him in my office I would
feel confident I could get more information if I didn't have to go through the
teletype.''(86) In therapy, physical presence really does matter - as the
InterPsyche client above noticed, in a variety of ways.

The same thing is true of artificial agents, as my group discovered in a series
of experiments. We, the Oz project for interactive fiction at Carnegie Mellon,
led by Joseph Bates, have created two types of agents using the same basic
internal construction (Bates, ``Integrating''). One is a cat named Lyotard
(Bates, ``Architecture''), with which one can communicate through text. In this
extract, which has been condensed to save space, Lyotard is at first afraid of
the player but gradually gets to like her:


PLAYER> go to the bedroom

Lyotard is in the bedroom.

Lyotard runs to the closet.

PLAYER> go to the kitchen

You go to the kitchen.

PLAYER> call Lyotard

You hear a cat-call. Lyotard is now in the kitchen.

PLAYER> give Lyotard the sardine

You offer the black sardine to Lyotard. Lyotard eats the black sardine.

PLAYER> wait

You wait. Lyotard rubs you.

PLAYER> pet Lyotard

You pet Lyotard. Lyotard looks lazily at you.

PLAYER> take Lyotard

You are now holding Lyotard. Lyotard closes his eyes lazily.
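
Behind the scenes, behavior like this comes down to a small amount of
emotional bookkeeping. A toy sketch of the pattern the transcript suggests
(the names, numbers, and thresholds are invented; this is not the Oz
project's actual architecture):

    # The cat's fear of the player decays under friendly acts and its liking
    # grows; behavior is then chosen from the current emotional state.
    class ToyCat:
        def __init__(self):
            self.fear = 0.8       # starts out afraid of the player
            self.liking = 0.1

        def perceive(self, player_action):
            if player_action in ("give sardine", "pet"):
                self.fear = max(0.0, self.fear - 0.3)
                self.liking = min(1.0, self.liking + 0.2)
            elif player_action == "chase":
                self.fear = min(1.0, self.fear + 0.4)

        def act(self):
            if self.fear > 0.5:
                return "Lyotard runs to the closet."
            if self.liking > 0.4:
                return "Lyotard rubs you."
            return "Lyotard looks lazily at you."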

The other agents we have built using the same basic structure are the woggles
(Loyall), dim-witted but charming agents you can watch on a computer screen and
with which you can interact using the mouse.

We, the programmers, know the internal construction of the agents and that the
woggles are no smarter or more complicated than Lyotard. However, people react
much more enthusiastically to the woggles than to Lyotard. With the cat, people
suspect that there is trickery in the language that is being used. The cat
didn't really close his eyes lazily, the system just said it did. But you can
see with your own two eyes what the woggles are doing. It is the physical
presence of the agent that makes one feel more confident about it, that fools
one into thinking one is not being fooled by language. From this we learned
that the bodies of agents can be quite important in imputing subjectivity to
them, something Turing forgot when designing his test.

Note that in neither of these cases - indeed in none of the cases in this
paper - was one actually able to access the ``deep structure'' of the human or
agent
through discursive means. Language is not transparent; it constructs subjects
rather than revealing them. The implications of this statement for AI, for
psychiatry, and for those who will interact with possibly artificial agents are
manifold. The narrative process of construction is good for AI: it is enabling
for the creation of artificial agents. But by the same token there will be no
objective method for uncovering the true subjectivity of an agent that proceeds
through discourse. Similarly, there is no objective method of determining the
truth-value of symbolically oriented theories of consciousness and its disorders
- precisely the kind of theories psychiatrists and psychologists need to engage
in language-based therapies. The attempt to get behind discourse to the `true
state' of the subject will always end up caught in the quagmire of language.
