Take a look at the state’s brand-new Code of Ethics For Generative Artificial Intelligence. If you have certain literary preferences, which you probably do if you’re reading this column, something will look familiar.
“I’m so glad you mentioned that,” said Ken Weeks, the state’s chief information security officer, after I pointed to a line in the Code of Ethics saying “AI Systems should neither cause nor exacerbate harm or otherwise adversely affect human beings and the natural environment.”
Was that inspired by the Three Laws of Robotics from science fiction writer Isaac Asimov? I asked.
“Of course it was!” he said, laughing. “Anybody who went into this and tells you they didn’t take into effect the Three Laws of Robotics, they’re not (admitting) it.”
And that’s a good thing, he said. “Just because the application and the technology is new, it’s important to remember there has been thinking about this for decades, even if it was speculative.”
New Hampshire’s state government, like every other government, organization, business, entity or human being, is trying to figure out how to respond to the various systems known as generative AI. The Code of Ethics and the resulting executive branch policy are the first of what will undoubtedly be many such efforts.
“What sparked this for me was back in late April, I was attending a meeting of the National Association of County Information Officers and several discussions during those meetings about how to improve services for citizens, how to automate certain functions … and it almost all centered around using artificial intelligence setups,” said Denis Goulet, commissioner for the state’s Department of Information Technology.
“Vermont has done pretty extensive work on AI with a task force – the good, the bad and the other – and I thought we should start thinking about something like this in New Hampshire, to get ahead of the problem before we have agencies just diving right in. We already had some (agencies) looking for use cases,” said Goulet. “We leveraged work extensively that Vermont had done.”
Most of the resulting three-page Code of Ethics (it is linked from the department’s home page at www.doit.nh.gov) could apply to any technology, saying things like it shouldn’t be biased against any individual or group, shouldn’t infringe on rights, and must be transparent and accountable.
And then there’s this: “Automated final decision systems should not be used by any government organization in the State of New Hampshire. … In New Hampshire, humans interacting with AI Systems must be able to keep full and effective self-determination over themselves and be able to partake in the democratic process.”
To me, this is the key because it’s the point where generative AI differs from websites or chatbots or other similar systems. AI isn’t actually intelligent but sure seems like it is, creating a strong temptation to hand it the reins and let the software make the decisions. It’s so much more efficient!
But so much more dangerous.
Goulet emphasized those dangers several times in our talk.
“We didn’t want any resident or New Hampshire business, or visitor for that matter – any human being interacting with the state of New Hampshire – to be put in a position where a software code was making a determination for them that affected their rights. A human always has to do that,” he said. For example, he talked about a request for unemployment insurance, which requires gathering and correlating a bunch of information before it can be approved or denied.
“The case manager can actually set a bunch of tasks at 4:30 … allow those to run overnight, then come back in the morning and that information is gathered and presented in a way that enables very efficient decision making. You end up with the state working 24/7,” he said. “Theoretically you could let the machine decide the fate of this individual. … But in the end, according to our Code of Ethics, you have to have a human say yes or no, not the AI.”
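To make that pattern concrete, here is a minimal sketch of a human-in-the-loop workflow in Python. Everything in it is hypothetical – the Claim record, the function names, the facts being gathered – and it is not the state’s actual system, only an illustration of the rule that automation may gather and present, but never decide.

```python
# Hypothetical sketch of a human-in-the-loop workflow: automated code may
# gather and summarize a case, but only a named person records the decision.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    claimant: str
    summary: Optional[str] = None     # filled in by the overnight automated run
    decision: Optional[str] = None    # only ever set by a human reviewer
    decided_by: Optional[str] = None

def overnight_prepare(claim: Claim, gathered_facts: list[str]) -> None:
    # The automated step stops at presentation; it never touches `decision`.
    claim.summary = "; ".join(gathered_facts)

def human_decide(claim: Claim, reviewer: str, approve: bool) -> None:
    # The only code path that records a decision requires a human reviewer.
    claim.decision = "approved" if approve else "denied"
    claim.decided_by = reviewer

claim = Claim(claimant="Jane Q. Public")
overnight_prepare(claim, ["employment verified", "wages on file",
                          "no disqualifying separation"])
human_decide(claim, reviewer="case_manager_17", approve=True)
print(claim.decision, "by", claim.decided_by)  # approved by case_manager_17
```

The design choice is the whole point: there is simply no function through which the software can set the outcome on its own.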
That human also has the responsibility to know what datasets were used because generative AI is brilliant at presenting complete nonsense in a form that is very, very believable. The software is the ultimate bull artist.
Goulet said that with something as unsettled as AI, handling these questions through executive policy is preferable. This new policy, and possibly the Code of Ethics behind it, is likely to change, perhaps often, as the technology matures.
New laws will also be needed, partly because policies from the Department of Information Technology cover only the executive branch, not the Legislature or the courts. But laws are slower to create and harder to tweak as the perils and promise of AI become clear. If I were writing an AI law right now, in fact, I’m not sure what I’d say beyond “don’t be evil,” and we know how well that has gone for Google.
So maybe Asimov’s Three Laws of Robotics – first, robots can’t hurt people; second, they must obey orders; third, they can protect themselves only if the first two rules have been met – aren’t a bad guide.
“I think he got it pretty right,” said Weeks. “They’re simple, easy to understand. It just kind of makes sense.”
Why just a summary of Asimov’s 3 laws, when the original text is so elegant? From Wikipedia:
The Three Laws, presented to be from the fictional “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:
The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
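Read as a specification, the Laws are a strict priority ordering: the First dominates the Second, which dominates the Third. Here is a toy Python sketch of that precedence – the Action fields and the scenario are invented for illustration, and the “through inaction” clause is omitted for brevity:

```python
# Toy encoding of the Three Laws as a lexicographic preference.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool      # First Law: may not injure a human being
    obeys_order: bool      # Second Law: must obey human orders
    self_preserving: bool  # Third Law: must protect its own existence

def choose(candidates: list[Action]) -> Action:
    # Python compares the key tuples left to right, so avoiding harm
    # outranks obedience, which outranks self-preservation.
    return max(candidates,
               key=lambda a: (not a.harms_human, a.obeys_order, a.self_preserving))

options = [
    Action("obey the order, harming someone", True, True, True),
    Action("refuse the order, harming no one", False, False, True),
]
print(choose(options).name)  # refuse the order, harming no one
```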
Of course the beauty of Asimov’s robot stories was that, having constrained himself with these immutable laws, he could explore what kinds of unintended consequences were still possible. I’m afraid that, despite our best efforts to anticipate problems, we too will experience all sorts of unintended consequences in our use of generative AI.
Asimov’s laws were created to be violated. … That’s the basis of his related stories, so they were intentionally flawed … but even if they were “complete” (Gödel would have something to say about that possibility) … there would likely be unintended consequences.
Today’s GenAI (generative AI) systems seriously lack the sentient interaction with the environment needed to develop common sense, nor are they designed to distinguish fact from fiction. Hopefully we won’t have to wait until 2058 to develop more appropriate guidelines.
A very interesting document. My professional society has been working on both AI policy recommendations (https://ieeeusa.org/committees/aipc/ ) and AI ethics standards (https://standards.ieee.org/industry-connections/ec/autonomous-systems/ ) for some time. It is good that the EU work is being used as a basis; more has been done there than in the U.S. along policy lines.
There are some flaws in the current guidelines, or perhaps points of incompleteness. As indicated already, the distinction between regular systems and AI systems is a gray scale, and a distinct line cannot be drawn. The hope of “accuracy” for generative AI systems is highly optimistic: these systems create content based on prior examples with no tests for correctness. (One listed the date of my memorial service as Saturday, Jan. 22, 2023, which was a Sunday … not to mention that I am not yet dead.) Some of the systems are based on content from before a 2021 cutoff and so cannot obtain current data; others are trained on data sets that are not disclosed; and some are trained on data sets containing substantial copyright-protected content, the implications of which have yet to be adjudicated. It is very useful to have initial guidelines, but some form of feedback to address issues with them should be included.
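That day-of-week slip, incidentally, is exactly the kind of claim a few lines of deterministic code can verify where a generative model may confabulate; Python’s standard library settles it:

```python
import datetime
print(datetime.date(2023, 1, 22).strftime("%A"))  # Sunday, not Saturday
```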