Fall Semester greetings, and an image to ponder

Happy new academic year to my students, and to whoever else might happen upon this post.  If you are a student of mine, I especially want to affirm you: the URL for this site may appear on your syllabus, but you’re otherwise not required to visit here.  If you’re here, then, it’s because you have some inclination toward going above and beyond what is asked of you.  That trait–something in you that no assessment test will ever be able to measure–will nevertheless stand you in good stead as few other traits or abilities will, and for years after your days of formal education are past.

[Image: "Why we should train workers," chart from Brookings]

This image came to me via someone in my Twitter feed (whose post, unfortunately, I didn't make note of).  Here is the article itself, from Brookings.

I am a couple of weeks late in getting this post up; the semester got up and running (it's been good so far), but the work of doing that, plus some family illnesses at home, has cut into spare time for writing here.

As it happens, though, the work in class that I'm most proud of so far seems to me to run counter to the implications of the assertion in the image you see here.  We've done precious little thus far that overtly prepares you for work, much less prepares you as we would prepare intelligent machines for the work they do, and I'm quite proud of that: this past week, we've looked at some paintings and talked about some poems in our Comp I classes, and in Comp II we've talked about rhetorical appeals.  The rest of the semester, once we begin working on writing and research projects, will indeed have some value to you in your future careers and in your lives away from work; but, again, I won't be training you as though you are machine-learning algorithms.  There are two pretty simple, obvious reasons for that: you already possess such an algorithm (though we still don't quite understand how it works); and, for that matter, you're already a far superior information processor, one that even the fastest computers can only begin to approach in ability.  There's also a third, more existential reason: you are, or should be, more than the work you will be hired to do.

It’s for these reasons that the assertion that accompanies the image is both deeply weird and more than a little lacking in awareness of what a good education should do for students.



Mid-July work

I have just now finished my first read-through of PrairyErth and will shortly begin working my way through it again, this time revisiting the notes I've made beside passages that might serve well as jumping-off points for writing assignments in my class this fall.  So this seems like a good time to take stock of the academic-related work I've accomplished so far this summer.


Teaching Idea: “Build-a-brain” and the role of the emotions in consciousness


A still from Ex Machina. Image found here.

As part of my Comp II class, I assign groups of readings on various kinds of technologies and their potential impact on traditional notions of what it means to be human, in terms of both the self and our varied and various relationships with other people.  One set of those readings is on Artificial Intelligence: whether strong AI (which is to say, a genuinely thinking, understanding machine intelligence) is even possible; whether it's a good idea to pursue this line of research; what the building of robots to serve as companions for humans implies for how we value (or not) interactions with others; the already-occurring displacement of workers by robots and machines performing tasks once done by people, and what to do about that displacement; etc.  I also assign them this short video in which Rosalind Picard of MIT provides what amounts to a brief introduction to the need for and value of creating the capacity for emotional intelligence in robots that will be interacting with humans.  Picard is very precise here: she says that "emotional intelligence" is not the same as "emotion," and that we cannot yet build a machine with self-generated emotions.  She does, though, wonder aloud what such a machine might look like, which is one of the ideas dramatized (to, I think, substantive and powerful effect) in this spring's Ex Machina.

I had also been showing WALL-E as a way of presenting a version of what emotional intelligence might look like in robots (as well as dramatizing a possible set of consequences of a post-human society for us as individuals and as communities); beginning this fall, though, I'll be giving Her a try: the fact that we never see Samantha in that film, but only Theodore's responses to, um, her, gives us more to chew on, I think, regarding this distinction between emotional intelligence and emotion.  Without giving too much of the plot of Ex Machina away, that film raises the same issue, but seeing Ava and Caleb physically interacting with each other complicates matters considerably, for me, anyway.

That's a lot of context just to get to the point of this post: a couple of those readings have gotten a bit long in the tooth and/or are too obscure for my purposes, so when I ran across Michael Graziano's thought-experiment/essay "Build-a-brain" a couple of days ago, I was glad to see that it can serve as a short, clear, and fairly sophisticated introduction to how we might achieve consciousness in a robot.  I think it might work equally well, by the way, as a way of helping students think critically; in this instance, to ask, "Okay, I see what Graziano's getting at . . . but is anything left out of his discussion of consciousness?  And, for that matter, even though Graziano says that consciousness is a real thing and not illusory, does he really believe that?"

More below the fold.
