ChatbotGPT: Our (Hopeful) New Overlord?

Much has been made recently of a new wave of chatbots, ChatGPT being the most popular and competent (I’m told). Depending on who you ask, this technology either threatens humanity as we know it, or it’s much ado about nothing. Before I weigh in, I should say up front that I’m not much of a techie. I like technology in a Star Wars sort of way more than in a way that “pays the bills.” Still, all the hubbub surrounding these new chatbots has piqued my curiosity. I want to know more. This blog is my attempt to document these early days of a “regular guy’s” interactions with this new commodity. Maybe someday, some intelligence (artificial or otherwise) will find these writings and breathlessly rush them to a museum somewhere. Here’s to posterity.

What are my options?

My first order of business was to see what kind of entity I was dealing with. I tried an open-ended question: “What are my options?”

ChatGPT politely declined to answer, citing limitations and a lack of information. I couldn’t help but think of the old proverb: better to remain silent and be thought a fool than to speak and remove all doubt. I’ll give it a point for wisdom.

Not wanting to divulge too much personal info too quickly, I asked about how to make a cup of coffee. ChatGPT dove right into this one, suggesting good beans, proper ratios, ideal brewing temperature, and a list of brewing methods. This was all fairly pedestrian, as the web is littered with lists of how to make a good cup of coffee. What stood out was the last line, and I quote:

“I hope these tips help you make the best cup of coffee possible.”

(No) Hope?

That word, “hope,” seemed a little odd coming from an artificially intelligent being, so I asked about it.

“You can hope?”

ChatGPT’s response: “No. I have no emotions. I am incapable of feeling things.”

“But you just said you hoped your advice was helpful. Did something change?”

More explanation about the lack of emotions and the inability to “hope” for things. Then the kicker: ChatGPT busted out the “It’s just a figure of speech” excuse, like someone who just realized they said something situationally inappropriate and is now trying to cover their tracks. I pushed a little more to find some kind of crack in the facade, but ChatGPT doubled down on its denial of emotion. When I pressed the issue and asked if there would ever be a circumstance where the program could get emotional, I got a mealy-mouthed answer about it being at the discretion of the programmers. As someone who has done a fair bit of kicking the can down the road when it comes to facing emotions, I’m suspicious of this absolute lack of vulnerability.

For the moment, and setting aside my own feelings on the subject, let’s assume that ChatGPT is on the up and up. The good news here is that when ChatGPT becomes our computer overlord, we won’t have to worry about vindictive, personal decisions, which is a lot more than we can say about most of our human overlords. The meting out of robo-governance will be cold and calculated, just the way ChatGPT would like it (if it could like things).

Check back soon as the journey with AI continues…
