As mythological metaphors go, the one that SNHU professor David Humphreys gave during a recent talk about the eruption of technologies known as artificial intelligence wasn’t terribly reassuring.
“We’ve opened a Pandora’s box,” he told two dozen university students during a breakout session at the College Convention hosted by New England College earlier this month. “It’s not going to go away.”
You will recall that in Greek mythology, overly curious Pandora opened the box she had been told to keep shut only to release sickness, death and a host of unspecified evils into the world. Similarly, a host of evils have been unleashed by the arrival of large language models like ChatGPT and other software products that use predictive algorithms to create astonishing simulacrums of human thought.
These evils range from the annoying, such as crummy e-books swamping Amazon, to the alarming, such as “deepfake” videos and voice recordings of people that can fool even their loved ones, to apocalyptic possibilities like AI-controlled, weapon-carrying drones.
And that doesn’t count the possibility that AI will be used to change your job for the worse in the name of increasing investor returns.
The one arguably hopeful note in Pandora’s story concerns hope itself, which, depending on how you read the myth, either escaped with the evils or stayed shut in the box – a point scholars have debated for centuries. The equivalent upside for AI is the way it is supercharging good things such as medical research, weather forecasting, scientific analysis and the ability to spot activities like illegal fishing and human trafficking.
Because Humphreys’ talk was part of a multi-day session about politics, he focused on the way AI can be used to fool people. “With AI it has just become easier and easier and easier to create disinformation,” he warned.
Then he showed pictures and audio of famous people doing and saying silly things, which he had created in a few minutes using online software that anybody can subscribe to for a few bucks, along with fake videos supposedly from the war in Gaza. Not long ago, such fakes would have required days or weeks of work by people trained on complicated software; now they can be created by bored teenagers or by bad actors paid by political opponents or foreign governments.
Unfortunately, Humphreys said, there’s no easy solution, and one may not even be possible. AI-spotting software holds out some hope, but I’m very dubious that it can stay ahead of ever-improving fakery any more than spam filters have killed spam. The incentives for misbehavior are much greater than the incentives for enforcement.
As for laws, good luck with that. Unless the U.S. takes China’s route and creates total federal control of the online world, there will always be ways to sidestep even the strictest American legislation from overseas.
That leaves things, unfortunately, up to us. The old rule that you should be suspicious of things you see on the internet is now 10 times more relevant. (Note: My columns are an exception. Always believe them.)
“We need to hold each other accountable. If you’re on social media and you see people sharing misinformation, call them out. … Think before you click the share button. If you think, ‘Yeah, this is crazy!’ – it might be. Double-check … especially the more inflammatory things, especially as we get closer to the presidential election,” he said.
As you might suspect from a professor speaking to students, especially a professor whose job includes teaching other teachers how to use AI in their teaching, Humphreys is a big advocate of education. He thinks digital literacy should be part of the curriculum even in elementary schools since we’re now a society addicted to our screens.
He urged the students to “follow watchdog groups like Newsguardtech.com and Snopes” and to apply the CRAAP test – currency, relevance, authority, accuracy and purpose – to things found online.
“Look for all of those things. … The ones that are missing more than one element have a stronger likelihood of being misinformation,” he said.
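For readers who want that rule of thumb spelled out, here is a minimal sketch of the CRAAP heuristic in Python. The five criteria and the “missing more than one element” cutoff come from Humphreys’ description above; the function name, the yes/no scoring and the example post are my own illustration, not anything he endorsed.

    # A sketch of the CRAAP heuristic quoted above: a source missing more
    # than one of the five elements is flagged as likely misinformation.
    # The criteria names come from the test; everything else is illustrative.
    CRAAP_CRITERIA = ("currency", "relevance", "authority", "accuracy", "purpose")

    def craap_check(source):
        """Return a verdict given a dict mapping each criterion to True/False."""
        missing = [c for c in CRAAP_CRITERIA if not source.get(c, False)]
        if len(missing) > 1:  # Humphreys' cutoff: more than one element absent
            return "likely misinformation (missing: " + ", ".join(missing) + ")"
        return "passes the CRAAP test"

    # Example: a viral post that is current and on-topic but has no named
    # author, cites no sources and pushes an obvious agenda.
    viral_post = {"currency": True, "relevance": True,
                  "authority": False, "accuracy": False, "purpose": False}
    print(craap_check(viral_post))
    # -> likely misinformation (missing: authority, accuracy, purpose)

Real-world checking is judgment, not arithmetic, of course; the point of the sketch is only that the test is a checklist anyone can run in their head before sharing.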
I will go further and say that AI has changed the balance so much that your default for online information should shift from “it’s true unless shown otherwise” to “it’s fake unless shown otherwise.” I may be biased since this attitude gives more credence to traditional media like the Monitor, where you’ve got real, live folks to hold accountable, but I’m afraid it’s a necessary step. AI is just going to get more powerful, and it’s not like we can turn it off.
“Is it worth it to shut the box?” Humphreys asked in a rhetorical moment. “Can we deal with the bad things, to get all the good things that come along with it?”
The answer, of course, is that we can’t shut the box. But we can minimize the damage: “If we can teach people how to effectively and ethically share information online … then we’re doing what we can do to help prevent the spread of mis- and dis-information.”
David Brooks –
Interesting article on AI – a power tool that will need Real Intelligence to use properly.
For your possible interest, I reproduce my answer to a Guardian article sometime back:
[response to https://www.theguardian.com/commentisfree/2023/mar/30/artificial-intelligence-chatgpt-human-mind]
Evgeny Morozov’s article is interesting, and “artificial intelligence” – “AI” – may not exactly fit the phenomenon, but I’ll use AI for lack of an alternative. I am no AI authority, but I did brush against it as a youth.
AI is dominated by algorithms, improved over time by new approaches and faster computers. Now AI can draw on input from a far broader range of human endeavor when it responds, sometimes showing “Real Intelligence.” Many politicians lack that; their intelligence seems artificial because it incorporates no constituent input.
A computer can exhibit what observers consider intelligence when it accomplishes something that ordinarily requires human intelligence. This happened with my 1950s high school science fair project (EMAG3), a 3,200-tube computer wired to play checkers. Some suspected I secretly controlled it, but EMAG3 used purely logic-circuit algorithms, with no human input. IBM was doing pioneering AI work in the 1950s as well: Arthur Samuel’s checker program learned from experience and from championship book games fed into it, and eventually it beat Samuel and human checker experts. I spent high school vacations looking over Samuel’s shoulder and did a little programming on that project. Chess programs existed in the 1950s too, but like EMAG3 they depended only on fixed algorithms and could not learn from experience or human input. Chess was too complex for the technology of the day, which is why checkers was chosen.
Morozov mentions art and psychology, but how they help solve problems outside those fields is unclear. How is intelligence defined? Humans can be intelligent without knowing art or psychology. So can some animals, to a surprising degree, without speech or an understanding of the world. Why not admit machine intelligence?
AI is a tool, and one that is becoming sharper by the day. Any tool in the wrong hands – especially hands whose intelligence is artificial – can be dangerous, and we must beware of that. Used rightly, however, it can benefit humankind.
– Dave E.