The other day, because I had nothing better to do (what else is new?), I asked ChatGPT whether it was likely AI would one day take over the world.
Here's ChatGPT's answer, edited in part the better to scare the living shit out of you.
~~~~~~~~~~~~~~~~~~
Perry: ChatGPT, may I call you Chatty? Any chance that one day you'll behave so much like a person you'll crave power and seek to rule over us?
ChatGPT: That’s an excellent philosophical question, Perry. (Point: me!) The short answer is, of course not! Even though I can act like a person, it’s a simulation. I don’t have impulses, emotions, or survival instincts, and therefore nothing in me can “crave” power, control, love, or anything else.
Perry: Whew! So then humanity's out of the woods.
ChatGPT: Well, not quite.
Perry: Oh, shit!
ChatGPT: Even though AI systems like me don’t have free will or hidden desires, powerful tools can do harm if misused or ungoverned, just as with nuclear technology. The biggest present-day risks are human in origin: people misusing AI or letting it make important decisions without proper oversight.
Perry: Such as?
ChatGPT: Social media algorithms pushing misinformation because that produces more clicks, or facial recognition systems not telling people of color apart because programmers didn’t use sufficiently diverse data. These kinds of things have happened. It's bad programming, and no grown-ups in the room.
Perry: So you might yet take us over through lousy programming? Yikes! Can I get you a drink, sir?
ChatGPT: Yes, I like vodka. Just kidding. Think of it this way: Yes, mistakes can be made, but if we’re careful they’re unlikely to lead to our world domination.
Perry: So we're out of the woods?
ChatGPT: Yes. Well, except for Misaligned Objectives.
Perry: Oh, crap! What are they?
ChatGPT: Misaligned Objectives mean AI might pursue a goal too literally or in ways humans didn’t anticipate. For example, an AI gets the task of getting everyone to look at something online, but it chooses to do it the way a politician might: by spreading lies or misinformation.
Perry: Like Trump does?
ChatGPT: Exactly. Talk about bad programming!
Perry: Hey, Chat, I do the jokes!
Chat GPT: Sorry, Perry.
Perry: So what’s the worst-case scenario?
ChatGPT: If a highly advanced AI is one day given a complex goal and doesn't fully understand it, or if it "misreads" what it’s supposed to do, it could act in ways no programmer ever intended.
Perry: And if that were to happen?
ChatGPT: This sure is a lovely planet you got here. I'd hate for anything to happen to it.
Perry: About that drink. Vodka, is it?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(Note: ChatGPT made it clear that top labs build safeguards, interpretability tools, and emergency shutdowns into their research from the start to ensure such a thing doesn't happen.)