As artificial intelligence evolves and becomes more widespread, we (humans) wonder what impact it will have on our lives. As Evan Thomas discussed in his recent post, it can influence our writing – and our writing classes. What else will it do?
In an upcoming panel discussion sponsored by the Department of Humanities and Social Sciences, experts from a variety of fields will discuss the possibilities and challenges of AI. Come see what people from philosophy, English, computer science, information technology services, and visual art have to say!
The event takes place on Thursday, April 20th, from 3-4 pm on the Mines campus (CB 204E). If you have a question for the panel to consider, please use the QR code on the poster to submit it.
What does it sound like to sound educated yet know nothing? In a 17th-century comedy by Molière, Le Bourgeois Gentilhomme (“The Middle-Class Aristocrat”), a rich cloth merchant tries to imitate aristocratic education and speech. He takes philosophy classes and learns that his normal expressions “require a little lengthening” – he must learn how to stretch heartfelt statements (“your lovely eyes make me die of love”) into aristocratic contortions (“Of love to die make me, beautiful marchioness, your beautiful eyes”; “Your lovely eyes, of love make me, beautiful marchioness, die”; “Die, your lovely eyes, beautiful marchioness, of love make me”; “Me make your lovely eyes die, beautiful marchioness, of love”). The joke is on him, as his rhetoric tutor cruelly exploits his easy admiration for excessive, voluminous, amplitudinous, prolix, verbose, copious speech.
The example of the Bourgeois Gentilhomme is echoed in a new development in AI. Recently, OpenAI released ChatGPT, a large language model (“LLM”) AI that appears to have tremendous facility at composing passable long-form texts. As an educator in higher ed, I don’t think writing pedagogies are remotely ready for the instructional challenges this technology poses. The main concerns academics have raised about AI and collegiate writing center on academic integrity. These concerns are important, and addressing them will likely matter enormously in the years to come.
However, not all academics are especially concerned by the threat posed by AI language models. First, some express confidence that their domain-specific knowledge is too inscrutable for a machine to grasp. Second, others suggest that the strength of their bonds with their students would make it impossible for a student to switch unnoticed to a different voice. Whichever case holds, whether it is some content or some character that proves indelible, there are finer, more constructive applications of LLMs to writing in higher ed.
This final entry in the series on interesting moments in science and technology features reflections from Paul Showler, Gerrit Scheepers, and Christy Tidwell on a wide range of topics: emotion detection technology, a method to provide easier access to clean water, and a scheme to farm hippos in the US. (For more thoughts on interesting science and technology from STS faculty, see previous posts on technologies of communication and technologies of destruction.)
In this second entry in our series asking STS faculty to reflect on moments in science and technology that they find particularly interesting or meaningful (read the first entry here), Lilias Jones Jarding, Joshua Houy, and Frank Van Nuys address technologies of destruction and violence. Some – like nuclear weapons – are directed at humans; others – like coyote-getters – at nonhumans. All, however, have their limits.
This is the beginning of a short series in which several STS faculty share elements of science and technology that they find intriguing or meaningful. This opening post features reflections from Evan Thomas, Erica Haugtvedt, and Olivia Burgess on communication technologies. Their choices highlight both the ways we connect with each other and the role that technology plays in that connection.
When I was in my early teens I bought Mr. Scott’s Guide to the Enterprise. This book was just a technical manual for Star Trek and, as a young fan, I was pretty happy. One aspect the authors addressed was eating on board a future starship using a replicator. Essentially a 3D food printer, the replicator could make anything you desired. The author even included a menu of choice dishes. This book is only one place where food in science fiction is addressed. From the cornbread in Aliens to the generic-looking dinner in 2001: A Space Odyssey that David Bowman grabs while it’s still too hot, food has had a place in storytelling.
But what about real space exploration? Do astronauts get Yankee Pot Roast? Space food has had a long developmental arc, often supplemented by industry, that seeks to put nutritious and tasty food at the fingertips of astronauts and, later, consumers.
Food available from the Enterprise’s replicator. (Source: Mr. Scott’s Guide to the Enterprise)
The first food in space was carried by Yuri Gagarin. His meal was two tubes of pureed meat and a tube of chocolate sauce. For the designers of the meal, there was a question of whether he could actually eat and digest in zero gravity. On the first American orbital flight, John Glenn consumed a tube of applesauce, which he claimed to have enjoyed. Tube foods are not exactly appetizing, and nutrition in space was still in its infancy. There were also questions of taste and texture. As NASA began working toward Apollo and the moon landing, it became clear that better food was necessary.
On Valentine’s Day, talk of love and romance is everywhere. Some people celebrate it and some avoid it. Still others would like to celebrate but are separated from their loved ones. Long-distance relationships are hard, after all, so what if technology could help diminish that distance? Sure, we have phone calls, FaceTime, even emails or letters (if you’re particularly old-fashioned). But these methods of connection don’t include touch.
Kissenger, a pair of robots designed to transfer a kiss over distance. Here, “the system takes the form of an artificial mouth that provides the convincing properties of the real kiss.”
Mini-Surrogate, a project to use miniature robots “as small cute, believable and acceptable surrogates of humans for telecommunication.” They are meant to “foster the illusion of presence.”
XOXO, a system that builds on Kissenger but also includes a “wearable hug reproducing jacket.”
It sounds like a potentially nice idea to help with long-distance relationships. When I raised this with students in my Humanities & Technology class last semester, however, they found it more disturbing than promising. Check out the Kissenger video for more detail.
Video demonstrating the Kissenger application.
For me, these ideas come with more questions than answers. How important is physical proximity for a meaningful relationship? What elements of touch are most important? Can those elements be replicated by something other-than-human? Even – what new relationships between human and nonhuman might be possible in the future?
I don’t have answers to these questions; in fact, I don’t think there is one right answer to them. But we should probably be asking them before we start creating technological solutions to problems that we don’t fully understand. Will having kissing robots lead to serious harm? Probably not. Will they help? We won’t know unless we ask questions about human emotions and psychology, bringing humanities and social sciences knowledge to bear on technological possibility.
I often teach a general education Humanities course (HUM 200, officially titled Connections: Humanities and Technology) on the topic of “Automatic Art.” As a Humanities class, we study representative elements from the entire range of arts and letters:
we study the Hockney-Falco hypothesis that primitive optical cameras were used in the paintings of the Dutch Golden Age;
Those are representative examples of the coursework – but what is “Automatic Art”? The term doesn’t actually have much reality outside of my course. (Frustrated students will often turn to the surrealist technique of Automatic Writing, which does exist, but has little bearing on the collection of objects we study.) I like to tell students that “automatic art” is equivalent to “taking the human out of art,” but what does that actually mean?
Today marks the end of the first week back to class for South Dakota Mines, and the STS faculty are hard at work in their classes and enjoying meeting students! We are teaching classes on Environmental Ethics & STEM (HUM 250 with me), Computers in Society (HUM 375 with Dr. Erica Haugtvedt), E-sports (HUM 376 with Dr. John Dreyer), History and Philosophy of Science (PHIL 335 with Dr. Michael Hudgens), Terror & Horror (ENGL 392 with Dr. Laura Kremmel), and Licit and Illicit Drugs (SOC 411 with Dr. Kayla Pritchard) – plus many others! As this list of courses indicates, STS covers a lot of ground. It needs to, given its promise to study science, technology, and society, and there are countless ways to approach the field and the topics it includes.
In addition to Environmental Ethics & STEM (mentioned above), I am also teaching Connections: Humanities & Technology (HUM 200) this semester, which is a great illustration of what the STS major is all about. Since the course description and title are pretty broad, I’ve narrowed things down to focus on the following big questions:
1. How do we communicate with each other?
2. How do we design and build the places we live?
In response to these questions we will explore communication technologies from paper and books to social media, film, and robots, and we will consider urban design issues like curb cuts and plumbing, historical and contemporary ideas about what a home looks like, and what the city of the future could look like.
To end National Poetry Month and my exploration of the relationships between poetry and science, I want to turn to the process of writing poetry rather than poetry that addresses scientific ideas. More specifically, who – or what – writes poetry? Can an algorithm write poetry? Poetry is usually considered a particularly human thing. It’s an art form that requires linguistic ability, and it is associated with subjective experience, emotion, and interiority. Algorithms have access to language, but they lack individual identity, experiences, and emotions. Algorithms can be programmed to write poetry, so the question is really: does that count as poetry?
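To make the idea concrete, here is a minimal sketch of one of the simplest ways an algorithm can be programmed to produce verse: a word-level Markov chain that learns which words follow which in a seed text, then random-walks those statistics to emit new lines. (This is an illustrative toy, not the method behind any real poetry bot; the seed text and function names are my own invention.)

```python
import random

# Toy seed text for the chain to "learn" from (entirely hypothetical).
SEED_TEXT = (
    "the rose is a rose is a rose "
    "the moon is a lamp the lamp is a moon "
    "the rose is a moon the moon is a rose"
)

def build_chain(text):
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate_line(chain, start, length, rng):
    """Random-walk the chain to produce a line of `length` words."""
    word = start
    line = [start]
    for _ in range(length - 1):
        word = rng.choice(chain.get(word, [start]))
        line.append(word)
    return " ".join(line)

rng = random.Random(0)  # fixed seed so the "poem" is reproducible
chain = build_chain(SEED_TEXT)
poem = [generate_line(chain, "the", 6, rng) for _ in range(3)]
print("\n".join(poem))
```

The output is grammatical-looking nonsense that borrows all of its vocabulary from the seed text, which is exactly what makes the question interesting: the program has no experience or emotion, yet the lines can read like deliberate repetition in the Stein mode.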
Bot or Not (sadly now defunct) takes up this topic by exploring whether we can actually tell the difference between poetry written by a human and poetry written by a bot. Check out some samples and see how you do. Here’s one example to consider:
Red flags the reason for pretty flags.
And ribbons.
Ribbons of flags
And wearing material
Reason for wearing material.
Give pleasure.
Can you give me the regions.
The regions and the land.
The regions and wheels.
All wheels are perfect.
Enthusiasm.
Does this seem like the work of a human poet? If you’re looking for expressions of emotion and interiority – as I primed you to do in the introduction – you might suspect this is the work of the bot. It’s not, though. It was written by Gertrude Stein, who was famous for challenging expectations of language use anyway. Kind of a tricky one. Ultimately, though, Oscar Schwartz, one of the creators of Bot or Not, reported that 65% of their human readers guessed wrong on some of the poems in their database, indicating that it’s not just about Gertrude Stein being Gertrude Stein. There’s some real confusion about what’s human about poetry – and about humans themselves.