As artificial intelligence advances and becomes more widespread, we (humans) wonder what impact it will have on our lives. As Evan Thomas discussed in his recent post, it can influence our writing – and our writing classes. What else will it do?
In an upcoming panel discussion sponsored by the Department of Humanities and Social Sciences, experts from a variety of fields will discuss the possibilities and challenges of AI. Come see what people from philosophy, English, computer science, information technology services, and visual art have to say!
The event takes place on Thursday, April 20th, from 3-4 pm on the Mines campus (CB 204E). If you have a question for the panel to consider, please use the QR code on the poster to submit it.
What does it sound like to sound educated yet know nothing? In a 17th-century comedy by Molière, Le Bourgeois Gentilhomme (“The Middle-Class Aristocrat”), a rich cloth merchant tries to imitate aristocratic education and speech. He takes philosophy classes and learns that his normal expressions “require a little lengthening” – he must learn how to stretch heartfelt statements (“your lovely eyes make me die of love”) into aristocratic contortions (“Of love to die make me, beautiful marchioness, your beautiful eyes”; “Your lovely eyes, of love make me, beautiful marchioness, die”; “Die, your lovely eyes, beautiful marchioness, of love make me”; “Me make your lovely eyes die, beautiful marchioness, of love”). The joke is on him, as his philosophy tutor cruelly exploits his easy admiration for excessive, voluminous, amplitudinous, prolix, verbose, copious speech.
The example of the Bourgeois Gentilhomme is echoed in a new development in AI. Recently, OpenAI released ChatGPT, a large language model (“LLM”) AI that appears to have tremendous facility at composing passable long-form texts. As an educator in higher ed, I don’t think that writing pedagogies are remotely ready for the instructional challenges posed by this technology. The main concerns that academics have had about AI and collegiate writing have to do with academic integrity. These are important concerns, and addressing them will only grow more pressing in the years to come.
However, not all academics are especially concerned by the threat posed by AI language models. First, some express confidence that their domain-specific knowledge is too inscrutable for a machine to reproduce. Second, others suggest that the bonds they form with their students are strong enough that a sudden switch to a different voice could not go unnoticed. Whether or not some content or character really is indelible, there are finer, more constructive applications of LLMs to writing in higher ed.
This final entry in the series on interesting moments in science and technology features reflections from Paul Showler, Gerrit Scheepers, and Christy Tidwell on a wide range of topics: emotion detection technology, a method to provide easier access to clean water, and a scheme to farm hippos in the US. (For more thoughts on interesting science and technology from STS faculty, see previous posts on technologies of communication and technologies of destruction.)
In this second entry in our series asking STS faculty to reflect on moments in science and technology that they find particularly interesting or meaningful (read the first entry here), Lilias Jones Jarding, Joshua Houy, and Frank Van Nuys address technologies of destruction and violence. Some – like nuclear weapons – are directed at humans; others – like coyote-getters – at nonhumans. All, however, have their limits.
This is the beginning of a short series in which several STS faculty share elements of science and technology that they find intriguing or meaningful. This opening post features reflections from Evan Thomas, Erica Haugtvedt, and Olivia Burgess on communication technologies. Their choices highlight both the ways we connect with each other and the role that technology plays in that connection.
When I was in my early teens I bought Mr. Scott’s Guide to the Enterprise. This book was essentially a technical manual for Star Trek, and, as a young fan, I was pretty happy. One aspect the authors addressed was eating on board a future starship using a replicator. Essentially a 3D food printer, the replicator could make anything you desired. The authors even included a menu of choice dishes. This book is only one place where food in science fiction is addressed. From the cornbread in Aliens to the generic-looking dinner in 2001: A Space Odyssey that David Bowman grabs while it’s still too hot, food has had a place in storytelling.
But what about real space exploration? Do astronauts get Yankee Pot Roast? Space food has had a long developmental arc, often supplemented by industry, that seeks to put nutritious and tasty food at the fingertips of astronauts and, later, consumers.
The first food in space was carried by Yuri Gagarin: two tubes of pureed meat and a tube of chocolate sauce. For the designers of the meal, there was a real question of whether he could actually eat and digest in zero gravity. On the first American orbital flight, John Glenn consumed a tube of applesauce, which he claimed to have enjoyed. Tube foods are not exactly appetizing, and space nutrition was still in its infancy; there were also questions of taste and texture. As NASA began working toward Apollo and the moon landing, it became clear that better food was necessary.
On Valentine’s Day, talk of love and romance is everywhere. Some people celebrate it and some avoid it. Still others would like to celebrate but are separated from their loved ones. Long-distance relationships are hard, after all, so what if technology could help diminish that distance? Sure, we have phone calls, FaceTime, even emails or letters (if you’re particularly old-fashioned). But these methods of connection don’t include touch.
- Kissenger, a pair of robots designed to transfer a kiss over distance. Here, “the system takes the form of an artificial mouth that provides the convincing properties of the real kiss.”
- Mini-Surrogate, a project to use miniature robots “as small cute, believable and acceptable surrogates of humans for telecommunication.” They are meant to “foster the illusion of presence.”
- XOXO, a system that builds on Kissenger but also includes a “wearable hug reproducing jacket.”
It sounds like a potentially nice way to help with long-distance relationships. When I raised this with students in my Humanities & Technology class last semester, however, they found it more disturbing than promising. Check out the Kissenger video for more detail.
For me, these ideas come with more questions than answers. How important is physical proximity for a meaningful relationship? What elements of touch are most important? Can those elements be replicated by something other-than-human? Even – what new relationships between human and nonhuman might be possible in the future?
I don’t have answers to these questions; in fact, I don’t think there is one right answer to them. But we should probably be asking them before we start creating technological solutions to problems that we don’t fully understand. Will having kissing robots lead to serious harm? Probably not. Will they help? We won’t know unless we ask questions about human emotions and psychology, bringing humanities and social sciences knowledge to bear on technological possibility.
I recently gave a Brown Bag talk on the Challenger space shuttle disaster, the events surrounding it, and its use as a case study for engineering education and communication. There was so much to cover that I couldn’t go into much detail on one of the most remembered and revered figures of the case study: engineer-turned-whistleblower Roger Boisjoly. To fill in those gaps, I’m dedicating this post to Boisjoly.
About 73 seconds after the space shuttle Challenger launched on January 28, 1986, it exploded, killing all seven astronauts aboard while viewers across the country – including school-age children watching in their classrooms – witnessed the disaster on live TV.
What if athletes could voluntarily replace their limbs with prosthetics to make them faster and stronger?
This question was raised by Otutoa Afu, an STS major in my Intro to STS course. The class has been discussing what it means to be human in a world where technology can radically transform both the human body and the human experience. Some of these advancements have been tremendously positive, such as the running blade prosthetics that allow amputees to compete in athletic events, but Otutoa’s question highlights the potential complexities that may arise if technological enhancements become more widespread.
I often teach a general education Humanities course (HUM 200, officially titled Connections: Humanities and Technology) on the topic of “Automatic Art.” As a Humanities class, we study representative elements from the entire range of arts and letters:
Those are representative examples of the coursework – but what is “Automatic Art”? The term doesn’t actually have much reality outside of my course. (Frustrated students will often turn to the surrealist technique of Automatic Writing, which does exist, but has little bearing on the collection of objects we study.) I like to tell students that “automatic art” is equivalent to “taking the human out of art,” but what does that actually mean?