I have just finished my first read-through of PrairyErth and will shortly begin working my way through it again, this time looking at notes I’ve made in the book beside passages that might serve well as jumping-off points for writing assignments for my class this fall. So, this seems like a good time to take stock of the academic work I’ve accomplished so far this summer.
As part of my Comp II class, I assign groups of readings on various kinds of technologies and their potential impact on traditional notions of what it means to be human–in terms of both the self and our varied and various relationships with other people. One set of those readings is on Artificial Intelligence: whether strong AI (which is to say, a genuinely thinking, understanding machine intelligence) is even possible; whether it’s a good idea to pursue this line of research; what the building of robots to serve as companions for humans implies for how we value (or don’t value) interactions with others; the already-occurring displacement of workers by robots and machines performing tasks once done by people, and what to do about that displacement; etc.

I also assign them this short video in which Rosalind Picard of MIT provides what amounts to a brief introduction to the need for and value of creating the capacity for emotional intelligence in robots that will be interacting with humans. Picard is very precise here: she says that “emotional intelligence” is not the same as “emotion” and that we cannot yet build a machine with self-generated emotions. She does, though, wonder aloud what such a machine might look like, which is one of the ideas dramatized (to, I think, substantive and powerful effect) in this spring’s Ex Machina. I had also been showing WALL-E as a way of dramatizing a version of what emotional intelligence might look like in robots (as well as a possible set of consequences of a post-human society for us as individuals and as communities); beginning this fall, though, I’ll be giving Her a try: the fact that we never see Samantha in the film, but only Theodore’s responses to, um, her, gives us more to chew on, I think, regarding this distinction between emotional intelligence and emotion.
Without giving away too much of the plot of Ex Machina, that film raises the same issue, but seeing Ava and Caleb physically interacting with each other complicates matters considerably–for me, anyway.
That’s a lot of context just to get to the part of this post where I say: a couple of those readings have gotten a bit long in the tooth and/or are too obscure for my goals, so when I ran across Michael Graziano’s thought-experiment/essay “Build-a-brain” a couple of days ago, I was glad to see that it can serve as a short, clear, and fairly sophisticated introduction to how we might achieve consciousness in a robot. I think it might work equally well, by the way, as a way of helping students think critically–in this instance, to ask, “Okay–I see what Graziano’s getting at . . . but is anything left out of his discussion of consciousness? And, for that matter, even though Graziano says that consciousness is a real thing and not illusory, does he really believe that?”
More below the fold.