Teaching Idea: “Build-a-brain” and the role of the emotions in consciousness

A still from Ex Machina. Image found here.

As part of my Comp II class, I assign groups of readings on various kinds of technologies and their potential impact on traditional notions of what it means to be human–in terms of both the self and our varied and various relationships with other people. One set of those readings is on Artificial Intelligence: whether strong AI (which is to say, a genuinely thinking, understanding machine intelligence) is even possible; whether it’s a good idea to pursue this line of research; what the building of robots to serve as companions for humans implies for how we value (or not) interactions with others; the already-occurring displacement of workers by robots and machines performing tasks once done by people, and what to do about that displacement; etc. I also assign them this short video in which Rosalind Picard of MIT provides what amounts to a short introduction to the need for and value of creating the capacity for emotional intelligence in robots that will be interacting with humans. Picard is very precise here: she says that “emotional intelligence” is not the same as “emotion” and that we cannot yet build a machine with self-generated emotions. She does, though, sort of wonder aloud what such a machine might look like, which is one of the ideas dramatized (to, I think, substantive and powerful effect) in this spring’s Ex Machina. I had also been showing WALL-E as a way of showing what emotional intelligence might look like in robots (and of dramatizing a possible set of consequences of a post-human society for us as individuals and as communities); beginning this fall, though, I’ll be giving Her a try: the fact that we never see Samantha in the film, but only Theodore’s responses to, um, her, gives us more to chew on, I think, regarding this distinction between emotional intelligence and emotion. Without giving too much of the plot of Ex Machina away, that film raises the same issue, but seeing Ava and Caleb physically interacting with each other complicates matters considerably–for me, anyway.

That’s a lot of context just to be getting to the part in this post where I say: a couple of those readings have gotten a bit long in the tooth and/or are too obscure for my goals, so when I ran across Michael Graziano’s thought-experiment/essay “Build-a-brain” a couple of days ago, I was glad to see that it can serve as a short, clear, and fairly sophisticated introduction to how we might achieve consciousness in a robot. I think it might work equally well, by the way, as a way of helping students think critically–in this instance, to say, “Okay–I see what Graziano’s getting at . . . but is anything left out of his discussion of consciousness? And, for that matter, even though Graziano says that consciousness is a real thing and not illusory, does he really believe that?”

More below the fold.

Graziano uses this essay to explain and defend a theory of the mechanism he says lies at the heart of consciousness–a mechanism that AI engineers could build into intelligent machines–which he calls Attention Schema theory. Consciousness, his theory goes, is not merely self-awareness or awareness of environment but a fusion of the two: the point at which a person–or a machine–can say, “I am in mental possession of” something. Awareness of environment is based on what he calls internal models, a generalized but imprecise gathering of information on a given object or set of objects. Graziano’s thought-experiment uses tennis balls, but we know that people are training computers and robots to recognize physical objects as well as images of things on the Web–that kind of research is an example of what Graziano is talking about. Here’s how he sums up that portion of his discussion: “That focussing [sic] is called attention. I confess that I don’t like the word attention. It has too many colloquial connotations. What neuroscientists mean by attention is something specific, something mechanistic. A particular internal model in the brain wins the competition of the moment, suppresses its rivals, and dominates the brain’s outputs.” (“Colloquial connotations” and “a particular internal model” are phrases I want to come back to in a little bit.)
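If it helps to see that winner-take-all idea in concrete form, here is a quick toy sketch of my own–emphatically not Graziano’s model, and nothing like a real attention system–in which a handful of “internal models” compete and the most salient one suppresses its rivals and dominates the output. All of the names and numbers below are made up for illustration.

```python
# A purely illustrative toy of the "competition among internal models" idea as
# I understand it from Graziano's essay -- my own sketch, not his model, and
# nothing like a real attention system.

from dataclasses import dataclass

@dataclass
class InternalModel:
    label: str        # e.g. "tennis ball"
    salience: float   # how strongly this model is activated right now
    description: str  # the blurry, simplified information the model carries

def attend(models):
    """Winner-take-all: the most salient internal model suppresses its rivals
    and dominates the system's output for the moment."""
    return max(models, key=lambda m: m.salience)

models = [
    InternalModel("tennis ball", 0.9,
                  "small, fuzzy, yellow-green, about fist-sized"),
    InternalModel("background hum", 0.2,
                  "steady low noise, probably the air conditioner"),
]

focus = attend(models)
print(f"Attending to: {focus.label} -- {focus.description}")
```

Note that the “winning” model carries only a blurry, simplified description–which, as Graziano goes on to argue, is exactly the point.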

So far, so good.  Where the essay gets interesting for me in a critical-thinking sort of way is when Graziano discusses self-awareness.  He notes that just because a computer or robot has information about a given object, or even an internal model for a whole set of objects, that doesn’t mean that it is aware that it has that information.  Awareness requires knowledge of the self: things such as memory/autobiography, present location, and something Graziano calls the body schema, “the brain’s internal model of the physical self: how it moves, what belongs to it, what doesn’t, and so on. This is a complex and delicate piece of equipment, and it can be damaged. [. . . T]here are many ways in which brain damage can disrupt one or another aspect of the self-model. It’s frighteningly easy to throw a spanner in the works.”

Yes.  And then, without further ado, much less explanation, we watch Graziano throw a spanner:

Now that our build-a-brain has a self-model as well as a model of the ball, let’s ask it more questions.

We say: ‘Tell us about yourself.’

It replies: ‘I’m a person. I’m standing at this location, I’m this tall, I’m this wide, I grew up in Buffalo, I’m a nice guy,’ or whatever information is available in its internal self-model.

At first, this didn’t bother me too much; after all, he’s arguing that Attention Schema theory explains how human brains work, too, so why not, in the thought-experiment, have his mechanical brain respond in a human way?  But later on, this appears:

For example, [our build-a-brain] might describe attention as a mental possession of something, or as something that empowers you to react. It might describe it as something located inside you. All of these are general properties of attention. But this internal model probably wouldn’t contain details about such things as neurons, or synapses, or electrochemical signals – the physical nuts and bolts. The brain doesn’t need to know about that stuff, any more than it needs a theoretical grasp of quantum electrodynamics in order to call a red ball red. To the visual system, colour is just a thing on the surface of an object. And so, according to the information in this internal model, attention is a thing without any physical mechanism.

[snip]

We built the robot, so we know why it says [that attention is not a product of a physical mechanism]. It says that because it’s a machine accessing internal models, and whatever information is contained in those models it reports to be true. And it’s reporting a physically incoherent property, a non-physical consciousness, because its internal models are blurry, incomplete descriptions of physical reality.

We know that, but it doesn’t. It possesses no information about how it was built. Its internal models don’t contain the information: ‘By the way, we’re a computing device that accesses internal models, and our models are incomplete and inaccurate.’ It’s not even in a position to report that it has internal models, or that it’s processing information at all.

Just to make sure, we ask it: ‘Are you positive you’re not just a computing machine with internal models, and that’s why you claim to have awareness?’

The machine, accessing its internal models, says: ‘No, I’m a person with a subjective awareness of the ball. My awareness is real and has nothing to do with computation or information.’

The theory explains why the robot refuses to believe the theory. And now we have something that begins to sound spooky. We have a machine that insists it’s no mere machine. It operates by processing information while insisting that it doesn’t. It says it has consciousness and describes it in the same ways that we humans do. And it arrives at that conclusion by introspection – by a layer of cognitive machinery that accesses internal models. The machine is captive to its internal models, so it can’t arrive at any other conclusions. (Emphases are the author’s.)

Sure: the machine knows only what it knows, and, as I tell my students all the time, the same statement is true of humans. But one of the things we know that we know is that we’re human. We’re not always aware of the many mechanisms that make us human, but sooner or later during the course of the day, in various ways and several times over, we’ll be reminded of our humanness. So I’m wondering why Graziano doesn’t have his hypothetical brain be aware that it is a machine. If “attention” as Graziano has been describing it here can be described mechanistically and something called machine consciousness is possible, is it possible to achieve that in a machine that is aware that it’s a machine, or does the machine have to be programmed to “believe” it’s human? And, for that matter, does his choosing to program a lie about its being into the machine suggest that, his protestations to the contrary notwithstanding, Graziano believes that human consciousness is illusory as well? I assure you again, by the way, that he doesn’t explain his choice in this essay, or else I would have included the explanation.
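Here is another toy sketch of my own–every name and answer in it is hypothetical, and none of it comes from Graziano’s essay–of the question I’m asking: the same “answer only from your internal self-model” machinery, run against two different self-models, only one of which happens to contain the fact that it is a machine with incomplete models.

```python
# Another hypothetical sketch of my own (not Graziano's): the same "answer only
# from your internal self-model" machinery, run against two different
# self-models. The only difference is whether the model happens to contain the
# fact that the system is a machine with blurry, incomplete models.

def report(self_model, question):
    """The system can report only what its internal model contains; whatever
    is missing from the model simply does not exist for it."""
    return self_model.get(question, "I have no information about that.")

human_style_model = {
    "what are you?":
        "I'm a person with a subjective awareness of the ball.",
    "is your awareness just computation?":
        "No, my awareness is real and has nothing to do with computation.",
}

machine_aware_model = {
    "what are you?":
        "I'm a computing device that accesses internal models.",
    "is your awareness just computation?":
        "My models are blurry and incomplete, so 'awareness' is how that "
        "machinery describes itself from the inside.",
}

for model in (human_style_model, machine_aware_model):
    print(report(model, "what are you?"))
    print(report(model, "is your awareness just computation?"))
```

Nothing in the machinery itself seems to require the first self-model rather than the second–which is why Graziano’s choice of the first one interests me so much.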

I think it’s time now to return to something I said way the heck up there that I was going to come back to. Graziano writes, “I confess that I don’t like the word attention. It has too many colloquial connotations. What neuroscientists mean by attention is something specific, something mechanistic. A particular internal model in the brain wins the competition of the moment, suppresses its rivals, and dominates the brain’s outputs.” That’s true, so far as it goes. But what Graziano has left out of his discussion is something else in the brain that is both cause and symptom of self-awareness and that, at least according to Picard, contributes to our paying attention to and remembering whatever we notice about our surroundings: the emotions. I genuinely wonder why Graziano leaves out any discussion of the emotions, even if only to dismiss them. Does he regard the emotions as being of no value in the discussion of machine consciousness; and, if that’s the case, what are the implications for how the emotions are regarded when discussing consciousness in humans? Are the emotions a kind of blind spot within the context of neuroscience? Or is there a sort of sleight-of-hand (intentional or unintentional on Graziano’s part) going on here: if a machine responds as if it is human, does that lead us to impute to it an emotional life as well, just as we do when we meet a new person? Whatever the case is, it’s clearer to me now why Graziano doesn’t like the word “attention”–its colloquial connotations, as he describes them, complicate rather than simplify the questions he seeks to answer with his Attention Schema theory.

Attention is more than the fact that we notice something, or how we notice it. It’s also why we notice the things we do in the first place, and the hierarchical or positive/negative values we assign to them. Surely the emotions play a role in all of this in humans.
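To make that point concrete one last time: here is a final hypothetical tweak to my earlier toy sketch–mine, not Graziano’s or Picard’s–in which an invented “emotional weight” is folded into the competition, so that what wins attention depends on what this particular self happens to care about.

```python
# One last hypothetical sketch (mine, not Graziano's or Picard's): the same
# winner-take-all competition, but with an "emotional weight" folded into the
# score -- roughly the missing piece I'm pointing at in this post.

models = [
    {"label": "tennis ball", "salience": 0.5},
    {"label": "email notification", "salience": 0.6},
]

# How much each thing matters to this particular self, for whatever
# autobiographical reasons -- say, fond memories of playing tennis.
emotional_weight = {"tennis ball": 0.9, "email notification": 0.1}

def attend(models, weights):
    """The winner is no longer just the most salient signal; it is the most
    salient signal to a self that cares about some things more than others."""
    return max(models, key=lambda m: m["salience"] * (1 + weights.get(m["label"], 0.0)))

winner = attend(models, emotional_weight)
print("Attending to:", winner["label"])  # the ball wins despite lower raw salience
```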

What might Picard say about Graziano’s thought-experiment? And what would either of them have to say about the AIs in Her and Ex Machina? Another way of putting all of this: to what extent would our saying that certain machines are self-aware be something we can genuinely, empirically assess, and to what extent would such a statement be our (emotional?) projection onto those machines?
