The decision by New Hampshire to allow AI-powered software as a “tutor” in public schools is a great reminder that the technology we call “artificial intelligence” is both wonderful and worrisome at the same time.

It’s also a reminder that handing public services to private companies is equally wonderful and worrisome.

The decision, which I learned about from a story in the independent news organization New Hampshire Bulletin, adds New Hampshire to a long-ish list of states and school districts contracting with the private organization Khan Academy for a program called Khanmigo. The contract involves $2.3 million in federal money; I don’t think any state money is used.

Khanmigo is basically a large language model, or LLM, that does the sort of cool things we first saw in ChatGPT. Using algorithms and databases controlled by Khan Academy, it can answer questions and do original writing (depending on your definition of “original”), including helping solve pre-calculus math problems.

Khanmigo, which launched last year, is cleverly designed. For example, not only does it provide straightforward tutoring help with homework and essays, it also lets students ask questions of certain historical figures like Thomas Jefferson or text with literary characters like Eeyore (although the way the world is going, I’d rather chat with Pooh), creating conversations that pass the Turing Test with ease.

Khanmigo looks like it can really help a lot of kids, especially members of the COVID generation who are more comfortable texting than talking face-to-face. It can also free up teachers’ time for more in-depth and imaginative work, helping with things like lesson plans and grading. That’s the wonderful part.

The worrisome part? ChatGPT and its ilk are infamous for making up stuff in their answers, such as sourcing books that never existed or making up laws of physics or creating realistic photos of imaginary things without telling you they’re imaginary. This is so common there’s a term for it – “hallucinations” – and it’s a hard-to-avoid aspect of large language models.

LLMs are basically prediction machines. You feed them lots of stuff that humans have written or said or pictured, and they use mathematical analysis to find patterns in it. When responding to a query, the model uses those patterns to predict what a person would write or say or draw.
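That prediction idea can be illustrated with a toy sketch. This is nothing like Khanmigo’s or ChatGPT’s actual implementation (those use neural networks with billions of parameters); it just shows the core move of predicting the next word from patterns counted in training text:

```python
from collections import Counter, defaultdict

# A toy "language model": tally which word follows which in the training
# text, then predict the most frequent successor. Real LLMs do this at
# vastly larger scale, but the core idea -- predict the next token from
# patterns in what the model was fed -- is the same.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

successors = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    if word not in successors:
        return None  # the model "knows" nothing it wasn't fed
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))    # "cat" -- seen twice, more than any rival
print(predict_next("xyzzy"))  # None -- never appeared in the training data
```

Note what the last line demonstrates: the model’s answers are only as good as the text it was trained on, which is exactly where the trouble starts.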

The weak link is the “feed them lots of stuff” bit, which AI people call “training” because it sounds better.

LLMs require unimaginably huge amounts of text, entire libraries full of it, to make good predictions. OpenAI, the company behind ChatGPT whose technology Khan Academy uses, gets all that text by grabbing everything it can get its hands on for free. That means downloading – “scraping” – huge swaths of the internet.

We all know that the internet is full of errors, lies and balderdash. If those are scraped and added to the database that the LLM is trained on, the LLM will spew out errors, lies and balderdash because it doesn’t know any better. (It doesn’t really know anything, but that’s another topic.)

The only ways to avoid AI hallucinations are to train the model on a pristine database or to improve the algorithm so it steers around the crappy bits. Khan Academy says it has safeguards to remove hallucinations from Khanmigo, which is why we are paying them instead of telling students to use plain old ChatGPT. But do they?

Khan Academy is a respected organization with a good track record in online education, and as a nonprofit they can avoid the worst temptations of the marketplace. They’ve also been vetted by respected groups. There’s no reason to doubt them.

But there’s no way to check them, either, and that’s where we get to the worrisome part of letting private companies perform public services.

Private companies can usually operate more quickly, more cheaply and with more imagination than government bodies. But they never operate more openly. This is particularly true in software. Firms use nondisclosure agreements, digital-rights management laws and that all-powerful word “proprietary” to keep us from looking under the hood.

For example, maybe the Shakespeare database in Khanmigo has somehow diluted the effect of Shylock’s religion and Othello’s skin color to avoid controversy. Or maybe it has accented them to increase customer interaction. Or maybe it has done neither. We have no way of finding out. We have to trust Khan Academy, and if the past decade has taught us anything it’s that we shouldn’t trust tech companies: “Don’t be evil” becomes “sell out your customers” in the blink of a quarterly report.

There’s also no guarantee that Khan Academy won’t go sour. If it decides to cash out and sell to the economic devil’s spawn known as private equity, look out.

If Khan Academy were a government agency it couldn’t sell out, and the public could look under the hood by filing a Freedom of Information Act request, or getting legislators to hold hearings, or having the governor grab it by the scruff of the neck. It wouldn’t be easy – any reporter can tell stories about bureaucrats impeding the flow of information – but it would be possible in a way that isn’t possible with private companies or groups.

That’s the drawback of privatization. Wonderful but worrisome, just like technology.

At this point, any Robert Heinlein fan will nod and say “TANSTAAFL.” You can ask Khanmigo what that means.
