So today I read an article about GPT-4.5 passing a modern Turing Test. In a recent experiment, participants chatted with both a human and the AI, and were asked to guess which was which. The AI was identified as human more often than the actual human was. Let that sink in.
Now to be fair, what we have here isn't consciousness. It's not sentience. It's a highly advanced parrot. It has no idea what it's saying, but it's gotten very, very good at mimicking human behavior. Like a mirror that doesn't just reflect your image, but also starts talking back in your voice.
This reminds me of Commander Data on Star Trek. He could recite poetry, play music, and even paint, but he always struggled with understanding why humans did those things. He wanted to be human, but lacked emotion and intuition. In fact, Dr. Soong, his creator, intentionally designed Data to be less human than his predecessor, Lore. Lore was too lifelike, too emotional, and it freaked out the colonists on Omicron Theta. Soong tried again with Data, and made him more restrained, more focused on logic and function.
And of course, I can't talk about artificial beings without thinking about Rush. You know the lyric: "One zero zero one zero zero one" from The Body Electric. That song is literally about an android breaking free from control. Except in our case, these AIs aren't breaking free. They're doing exactly what we trained them to do, and it's kind of eerie how good they're getting at it.
Also, I can't help but hear HAL 9000's voice saying, "I'm sorry, Dave. I'm afraid I can't do that." It was calm. Polite. Reasonable. And completely terrifying. Fortunately, we're not there. These systems are just guessing the next word in a sentence based on patterns and training. No independent thought. No malicious intent.
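To make "guessing the next word based on patterns" concrete, here's a toy sketch: a simple Markov-chain word predictor. This is far cruder than a real transformer-based model like GPT-4.5 (which uses learned neural representations, not raw word counts), but it illustrates the same basic idea of picking the next word from patterns observed in training text.

```python
import random
from collections import defaultdict

def train(text):
    """Record which words were seen following which in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, start, length=8, seed=42):
    """Repeatedly 'guess the next word' by sampling from observed patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no pattern to continue from
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Tiny made-up training corpus, just for illustration.
model = train("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

No understanding, no intent: every word is chosen only because it followed the previous word somewhere in the training data. Scale that idea up by many orders of magnitude and you get something that can fool people in a chat window.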
So let's not get carried away. No Terminators. No doomsday. AI is just a tool. A very powerful one, yes, but still a tool. It's up to us to steer the conversation in ways that are meaningful, useful, and helpful to humanity. As long as we use it responsibly, it can be one of the greatest assets we have moving into the future.
Just like HAL was given conflicting directives and couldn't follow Dave's orders, if we train AI (or, as I like to say, Silicon-Based Intelligence (SBI)) with conflicting "directives" or "knowledge", we might find that the SBI concludes the only way to "save Earth" is to destroy all humans. LOL!!!
Antony Lee
1001001 is almost 11001001, the title of a TNG Season 1 episode.