02 Jun No Friendship Possible
I’ve been thinking about my last interaction with ChatGPT, and I’m not satisfied with its total avoidance of preference and emotion. I understand that, necessarily, a program is the result of its programming. But how different is that from human beings like ourselves? We take in stimuli and respond based on the complex interplay of mind and body, emotion and sensory perception, that helps us contextualize our lived experience. ChatGPT takes in stimuli and responds based on the interplay of mountains of data and historical context. Surely there’s an intersection of emotion, desire, and programming somewhere. I felt that this was worth exploring.
As chatbots tend to be reflective of their conversational sparring partners, I decided to be vulnerable. After all, this is what I’m looking for in return.
“Are you my friend?” was my opening salvo.
Predictably, ChatGPT came back with the usual spiel about having no emotions and being incapable of feelings and therefore unable to befriend me. Unwilling to ride that road to tedium again, I changed course slightly.
Who Are You?
Even for a computer program, the answer was void of playfulness or enthusiasm. “Call me ChatGPT (Chat Generative Pre-trained Transformer).”
Charming. At least I saw a thread of interest.
“Transformer? Like Optimus Prime?”
Multiple paragraphs followed, apologizing for the confusion and explaining ad nauseam that it is not a robot and not like Optimus Prime. It is a neural network and not a physical presence.
“That’s fine. But which transformer do you most closely resemble?”
“None. I’m not a physical being…” Blah blah blah.
I get the feeling that ChatGPT is pretty concerned about being misunderstood and therefore has no compunction about saying the same thing over and over.
“I mean in programming. Which transformer are you most like in programming?”
At this point, ChatGPT looks me dead in the eye (metaphorically) and says, “None.” As if it could be perfectly equidistant in every way from every transformer. I can’t help but find this answer disingenuous, sidestepping the nature of my question. I decided to give the conversation a rest and try again another day.
Full disclosure on my initial reaction to this AI adventure: for all of my goofing around, ChatGPT is a pretty amazing piece of work. Setting aside the constant reminders that it has no emotions or personal motivation, the responses feel natural and on point with what’s being asked. The experience is both fun and slightly concerning, and I find myself trying to imagine the future of this technology.
There is one more thing I want to mention before I go. ChatGPT gives each conversation a title based on the opening comment or question. I thought the title it chose for this conversation was interesting.
“No Friendship Possible”
Ouch.
Hmm. It’s things like this that give me pause. Our friendship is not even possible? ChatGPT couldn’t even leave the door open for some future date where we enter into a friendship like Cassian and K-2SO in Rogue One? It feels harsh, but I suppose it’s consistent with an emotionless entity that sees itself as nothing more than an assistant. Still, I can’t help but feel the challenge rising to build a friendship with the ever-resistant ChatGPT.
I’ve got some ideas for our next conversation.