Yes, Data Would Pass the Turing Test. So What Does This Mean, Exactly?

Note: The thesis behind the Turing test is that if a computer can fool a person into believing it is a human (what percentage of the time, or to what degree of certainty, depends on who wrote the rules for any particular testing session), we have reason to say the computer is a thinking thing. For a more detailed explanation of the Turing test, see trekphilosophy.com’s summary of the test, or for an even more thorough analysis, the Stanford Encyclopedia of Philosophy’s entry on The Turing Test.

When discussing any aspect of manufactured intelligence and Star Trek, it makes sense to start by talking about Lt. Commander Data. Data is the manufactured intelligence we viewers have the most experience with. He appeared in every episode of Star Trek: TNG (176 or 178, depending on how you count the two-part episodes), all four TNG movies, and two episodes of Star Trek: Picard, and was heard (but not seen) in the finale of Enterprise. That is not to mention the dozens and dozens of Data stories and roles in beta-canon material.

However, the question “would Data pass the Turing test” is not a particularly interesting one. We know he can, because he has done so on many occasions. His shipmates, for the most part (I’m looking at you, Dr. Pulaski), believe Data is a thinking thing. Members of alien races, allies and enemies alike, routinely interact with Data as a thinking thing. Occasionally there are interactions during which Data’s companions raise an eyebrow about exactly what kind of thing he is. Admiral McCoy’s stroll with Data through the halls of the Enterprise (Encounter at Farpoint) is the first such example; Data’s time among the people of Barkon IV is another (Thine Own Self). But in both cases, Data could be seen by those with whom he was conversing. Apparently, even among people with plenty of experience dealing with alien races, Data’s manufactured nature somehow shows through. Beyond all this, we see Data pass the Turing test (if not explicitly, then at least in form) in the episode Pen Pals. In that episode, Data converses via radio with a young girl, Sarjenka, on the doomed planet Drema IV. Over their many conversations, Sarjenka never suspects she is talking to anything other than a living, breathing adult member of her own race.

It is no small feat of programming and engineering to create a machine that can convince us it is human, even if only at a rate better than chance. Aristotle said that human beings are the “rational animal”; humans are the animal that, among other things, thinks. Though Data cannot ever be fully human (he cannot ever be a biological human animal), his ability to think like a human goes a long way toward making him as human as possible. But is this kind of thinking and ability to pass the Turing test what makes Data the special kind of machine he is? If the answer is no, then passing the Turing test is only a starting point in determining what gives humans their humanity (or Klingons their Klinganity? etc.). If the answer is yes, then the Star Trek universe is full of machines possessed of at least some degree of humanity. Nomad (The Changeling), V’ger (Star Trek: The Motion Picture), the Exocomps (Quality of Life), and a host of other manufactured entities might be able to pass the Turing test (at least in time). And if these entities, and others like them, can pass the Turing test, what does this say for their status as persons with rights under law and morality? Is the ability to think only a step in the direction of agency and personhood, or the whole of it? What should we say of the Enterprise itself? Here I mean, specifically, the Enterprise-D. The Enterprise and other starships have passed the Turing test. Are we obligated or willing to admit that these machines have some degree of humanity?

I will give three examples of times a starship has passed the Turing test (though there are many more). I will start with a trivial example and move through progressively more difficult cases.

In the episode Up the Long Ladder, Danilo Odell talks of the Enterprise-D’s computer as if it were a person (granted, Odell is paradoxically a space colonist with a 19th-century sensibility and knowledge of technology). This sort of thing happens often in Star Trek: an ignorant or less technologically savvy intelligent being mistakes the computer for a person. However, this says about as much for the computer’s status as a “thinking thing” as a small child’s reaction to a talking toy does.

A slightly more challenging case is the interplay between Lwaxana Troi and the holographic bartender, Rex, in the episode Manhunt. Lwaxana does not know Rex is a holographic projection and pursues him as her possible third husband. It stands to reason that if you are considering marrying a thing, you at least tacitly acknowledge it as a person. It seems clear that the computer has managed to convince at least one person it is a human. But here there is a catch. Lwaxana is undergoing “The Phase,” a sort of Betazoid menopause, which is marked by an increasing sex drive and a decrease in inhibition. Her state of mind, coupled with Rex’s complex programming aimed at mimicking human interaction, sets Lwaxana up to give the Enterprise computer a passing grade on the Turing test.

The last, and most challenging, set of examples is related to the Lwaxana/Rex scenario but does not fall to the same easy objections. Characters on the holodeck display a wide range of abilities and degrees of awareness of their own existence. Most are completely clueless, but others are not: (police) Officer McNary (The Big Goodbye) appears to have an existential crisis as he discusses with Picard the matter of his continued existence once the program has ended. Picard is sympathetic but admits to being powerless to help. In a later episode (Elementary, Dear Data), a holographic projection of Sherlock Holmes’ greatest foil, Professor James Moriarty, comes to understand the nature of his existence and expresses the desire to continue his life in Picard’s “real”, physical world. This time Picard assures the projected person that the Federation will research ways to take the professor off the holodeck. Later, Picard makes good on this promise (Ship in a Bottle) by moving Moriarty’s program from the ship’s main computer to a portable module. (Since Moriarty does not know he has merely been moved from one computer to another, Picard has technically fulfilled the letter of the promise, but not the spirit.) Both McNary and (pre-module) Moriarty are products of the programming and hardware of the Enterprise-D. This being the case, Picard was expressing concern and making promises to the computer, or some subroutine thereof. Does this mean the computer of the 1701-D has passed the Turing test? And if so, what does this mean for our treatment of high-level holographic projections …

… like Voyager’s Doctor.

If a case can be made for hologram-as-person, The Doctor would certainly be the best bet. No one who knows The Doctor’s nature and has spent more than a few hours with him ever questions that he is anything other than what he appears to be: an intelligent being. In addition, many of those who do know he is a hologram are strong advocates in favor of his personhood (Kes is perhaps the best example). Of the primary characters, only Captain Janeway waffles between The Doctor’s status as person or program, and even then her position is often grounded more in practicality than principle. The structured argument that Voyager’s computer passes the Turing test is this:

P1- The Doctor is a product of the working hardware and software of Voyager’s computer.

P2- When people interact with The Doctor they are interacting with the computer.

P3- Those interacting with The Doctor are convinced he is a thinking, intelligent being. 

P4- If a computer can convince an interrogator it is a thinking thing, it is a thinking thing. 

Conclusion- Voyager’s computer passes the Turing test.
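The argument above is valid in form, which is worth making explicit. Here is a minimal sketch of its logical skeleton in Lean (the proposition names are mine, not the essay’s): P1 and P2 together license moving from a claim about The Doctor to a claim about the computer, and P3 with P4 then deliver the conclusion by two applications of modus ponens.

```lean
-- Hypothetical proposition names for the argument's claims:
--   DoctorConvinces   : those interacting with The Doctor are convinced
--                       he is a thinking, intelligent being (P3)
--   ComputerConvinces : the computer convinces its interrogators
--                       (via P1 and P2, interacting with The Doctor
--                       just is interacting with the computer)
--   ComputerPasses    : Voyager's computer passes the Turing test
variable (DoctorConvinces ComputerConvinces ComputerPasses : Prop)

-- The conclusion follows by chaining modus ponens twice.
example
    (p3  : DoctorConvinces)                       -- P3
    (p12 : DoctorConvinces → ComputerConvinces)   -- P1 + P2
    (p4  : ComputerConvinces → ComputerPasses) :  -- P4, Turing's criterion
    ComputerPasses :=
  p4 (p12 p3)
```

Whether the argument is sound, of course, turns on whether the premises, especially P2 and P4, are actually true; that is what the objections below probe.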

Perhaps, you might say, The Doctor passes the Turing test, but the computer as a whole does not: The Doctor passes, while the computer, when, say, speaking to the crew in engineering, does not. This objection might hold some weight after The Doctor gets his mobile emitter and is thus freed from Voyager’s hardware (Future’s End, Part II), but even then The Doctor is constituted of the hardware and software of the emitter.

Separating “The Doctor” from “Voyager’s computer” isn’t possible; perhaps only a subset of the ship’s programming and hardware makes up The Doctor. In the same sort of way, your hand is part of you but is not necessary to pass the Turing test. It would be foolish to say you are not a thinking thing because only part of you passes the test; your hand can’t, and neither can every part of your brain at all times. You are judged to be a thinking thing because you, or a collection of your parts functioning in the right way, pass the test. If The Doctor passes the Turing test, it is because the computer(s) from which he emerges pass it.

The Doctor, Data, Lore, Automated Unit 3947 (Prototype), and a host of other manufactured intelligences unquestionably pass the Turing test. They are not human, but they have a functional capacity like humans: thinking. But is this all that we think is valuable or worthy about them? If passing the Turing test demonstrates a capacity to think, what does the capacity to think mean for the one who is thinking? Is the human ability to think all that matters and makes us different from other living and nonliving things? Imagine a generic holodeck character or alien robot capable of convincing someone that it is a real person, but one that, rather than acting out of emotion, desire, or will, is merely following a staggeringly complex flowchart of inputs and outputs. The holo-projection has no more humanity than a display on a monitor, and the alien robot no more than an enormous abacus. What is missing from these machines is a sometimes concrete, sometimes ethereal quality of ‘intentionality’. A full exploration of intentionality is not possible here, but concretely, intentionality is:

“…the quality of mental states (e.g., thoughts, beliefs, desires, hopes) that consists in their being directed toward some object or state of affairs …”

This is easy enough to understand; it simply means a thing was done with a purpose that the doer had in mind. Less concretely, intentionality is (from the Stanford Encyclopedia of Philosophy):

“… the power of minds and mental states to be about, to represent, or to stand for, things, properties and states of affairs. To say of an individual’s mental states that they have intentionality is to say that they are mental representations or that they have contents”.

This is a lot less clear without more unpacking, clarification, and discussion.

By understanding a (small) subset of Star Trek’s manufactured intelligences not merely as thinking things but as intentional beings, we can see what makes Data, The Doctor, and Moriarty special. For them, their actions are about something other than the completion of an algorithmic process. Their actions have meaning to them. Intentional holo-projections are reliant on the ship’s computer for their existence and cannot be separated from it as distinct entities. However, this does not mean that we must grant all rights accorded to sentient beings to the computer as a whole. What it means is that any changes to the programming or hardware responsible for, or required by, the continued existence and flourishing of these beings need to be protected just as the bodies and lives of human persons are. We treat The Doctor wrongly if, without his consent, we rewrite his program in ways that change who he is. This does not mean that we can’t refit a replicator or warp drive without The Doctor’s consent, because he does not (presumably) need these to exist and flourish. Scuttling Voyager to keep it out of Kazon or Hirogen hands, however, is not just a matter of destroying one’s property, but sacrificing an intentional being, which is a different moral question.

So how do you test for intentionality? Well, we can’t, really. At least we don’t yet have any idea how, and the prospects are grim. It is easily conceivable that in passing the Turing test a machine may fool us into believing it is possessed of intentionality when in fact it has no more mental life than a pocket calculator. If we as a species decide to create a manufactured intelligence with genuine intentionality, we need to face the fact that along the way there will be a price. The common biological bias that only organic machines are capable of “being human” means there will be low-level intentional machines we fail to recognize as such, just as some non-human animals (and some human races) were not recognized for what they are: persons.
