

Timing is everything, and the time to keep the technologies called A.I. from spinning out of our control is now.

That’s the argument that Jim Isaak makes, and I’m afraid I agree with him.

The technologies known as artificial intelligence are so powerful and developing so quickly, Isaak argues, that we're in big trouble if the future is left to frenzied competition among companies releasing poorly tested updates to the public for fear of losing future profits.

“Clearly we’ve gone past a tipping point and the question is, when and how do we finally decide to try and manage it?” said Isaak, a New Hampshire software engineer I’ve known for years. “A lot of high-tech folks, including myself, say it’s time to start putting the regulatory mechanisms in place before we hit the next tipping point and it’s too late.”

“It’s real, and I think there’s an opportunity to get a handle on it. A year from now, that might not be possible.”

Rest assured that Isaak, who lives in Bedford, is no technophobe, nor is he basing his opinion on superficial knowledge (that’s my role).

His career goes back to Digital Equipment Corp., a name that will bring nostalgic smiles to many a local geek, and he has held a variety of positions with IEEE, the huge professional organization for engineers in electronics fields, including chair of the IEEE-USA Committee on Communications Policy and vice-chair of the Society on Social Implications of Technology.

Isaak is in good company with his fears. Plenty of experienced software engineers, many of whom helped develop the technologies that led to generative A.I., are all but begging lawmakers to clamp down before this iteration of "move fast and break things" breaks society. China, which worries about societal breakdown, has already implemented controls, and the European Union may soon follow suit.

What’s the danger? “We’re not talking about Arnold Schwarzenegger in ‘Terminator,’ ” said Isaak. Nor is he talking about it taking some jobs or otherwise unsettling the economy.

The concern, he said, is that “all of a sudden the systems have learned how to manipulate people to the point that we no longer have the freedom of action we think we have.”

To make his case, Isaak points to social media platforms like Facebook, Twitter and TikTok, which use algorithms (interconnected mathematical rules so dense that few people really understand them) to hold our attention so they can sell ads.
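Stripped to its essentials, the idea is simple enough to sketch in a few lines of code. This is a toy illustration only; the post fields and weights below are invented, and real platforms combine thousands of signals. But the goal is the same: put whatever holds your attention at the top.

```python
# Toy feed ranking: score posts by predicted engagement, show the
# highest-scoring first. Fields and weights are invented for illustration.

def engagement_score(post):
    """Estimate how well a post will hold a user's attention."""
    return (2.0 * post["shares"]      # outrage and amusement travel farthest
            + 1.5 * post["comments"]  # arguments keep people on the page
            + 1.0 * post["likes"])    # mild approval counts for less

def rank_feed(posts):
    """Order the feed so the most attention-grabbing posts come first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"title": "Cute dog photo",     "shares": 3,  "comments": 1,   "likes": 40},
    {"title": "Inflammatory rumor", "shares": 80, "comments": 200, "likes": 25},
])
print([post["title"] for post in feed])  # the rumor comes out on top
```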

Drawing attention to sell ads doesn't sound so bad, since newspapers have been trying to draw readers to our ads for a century, but as we all know, social media in the hands of a few companies has manipulated millions of people and generated fault lines in society. It has infected our thinking to the point that the Surgeon General has warned about damage being done to the mental health of America's youth.

A.I. is worse, Isaak says: it is social media on steroids. It can be used to “learn” what manipulates people much more quickly than current algorithms do. Since manipulation of the public drives profits, you can be sure companies will follow that route as fast as they can, unless we control them.
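What "learning to manipulate" looks like in practice can be sketched as a feedback loop: try variations on a message, measure which one hooks people, and double down on the winner. The sketch below uses a basic epsilon-greedy strategy; the headlines and click rates are made up for illustration, but this kind of loop is the engine behind much automated content optimization.

```python
import random

# Toy feedback loop: show headline variants, track which gets clicked,
# and increasingly favor the winner. The variants and their true click
# rates are invented for illustration; the system doesn't know them.

variants = {
    "Calm, factual headline":        0.02,
    "Alarming, misleading headline": 0.08,
}
shown  = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

for _ in range(10_000):
    if random.random() < 0.1:   # occasionally explore a random variant
        choice = random.choice(list(variants))
    else:                       # otherwise exploit the best performer so far
        choice = max(variants, key=lambda v: clicks[v] / max(shown[v], 1))
    shown[choice] += 1
    if random.random() < variants[choice]:  # simulated reader reaction
        clicks[choice] += 1

for v in variants:
    print(f"{v}: shown {shown[v]:,} times")  # the alarming one dominates
```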

Isaak doesn't think companies should abandon research and development of generative A.I., large language models and the other advances that have combined to create ChatGPT and similar programs. But he says they must be made to limit the technology's uncontrolled spread.

“We’re not asking them to stop development, to cut back on things. We’re asking them to not let it out of the lab. Let medical labs, research labs have it, just don’t let it get out where people who want to build bombs might have access,” he said.

“The government needs to call together corporate leaders, protect them from antitrust violation, say OK guys let’s identify how to get control of this now, what guidelines we should have in the future, and who we have to get together to hammer out those guidelines,” Isaak said.

“I don’t think it’s too late. I think there’s enough interest in corporate leadership to do that.”

There are people who think fears like Isaak's are overblown since there's no "intelligence" in artificial intelligence. They're correct: Many A.I. technologies are basically extreme versions of auto-correct, using probabilistic models built from massive collections of past writing to choose which word comes next.
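To make the auto-correct comparison concrete, here is the idea in miniature: tally which words follow which in a pile of past writing, then predict the likeliest next word. The training text below is a throwaway stand-in, and real language models use neural networks with billions of parameters rather than a lookup table, but the underlying task (predict the next word) is the same.

```python
from collections import Counter, defaultdict

# Miniature "extreme auto-correct": count which word follows which
# in some past writing, then predict the most likely next word.

text = ("the time to act is now . the time to regulate is now . "
        "the time is short .")

follows = defaultdict(Counter)
words = text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("time"))  # -> "to"   (seen twice, vs. "is" once)
print(predict_next("is"))    # -> "now"  (seen twice, vs. "short" once)
```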

But so what? They work, work really well, and are improving at breakneck speed. They pass the Turing test with flying colors and can sound more human than some people I've worked with. Combine them with deepfake video (another technology that needs regulatory oversight) or voice simulation and you've got a powerful, uncontrollable weapon.

An entertaining but unsettling examination of the power of ChatGPT, the best-known current A.I. model, can be heard on the NPR podcast "Planet Money." The hosts set out to see whether ChatGPT could create an entire podcast episode; at the start they were laughing about it, and by the end they were worried about their jobs.
