
It sometimes seems that every chemical ever used in every product ever made turns out to be toxic, which we only realize after it has spread throughout the world and made people sick.

Wouldn’t it be nice to know how toxic a chemical really is before we start using it?

This, it turns out, is the whole idea behind a relatively new field (new to me, certainly): predictive toxicology. It tries to move beyond the time-consuming laboratory and field studies that have long been used to determine toxicity, making quicker decisions based on more easily obtained information such as a substance's molecular structure.

“Just like machine learning is being used to predict what we might buy, predictive toxicology is an emerging field to generate data in the lab and compare it across chemical structures, using big data to better predict the toxic effects that a chemical might have,” said Britton Goodale, Ph.D., a research scientist and toxicologist at Geisel School of Medicine at Dartmouth.

If nothing else, she said, predictive toxicology can start prioritizing which chemicals should be studied.
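
For readers who want a concrete picture of what that kind of prioritization can look like, here is a minimal sketch in Python: train a model on chemicals whose toxicity is already known, using numbers derived from molecular structure, then rank untested chemicals by predicted risk. The descriptors, labels and chemicals below are made up for illustration; none of this comes from Goodale's lab.

```python
# A toy version of the "predictive toxicology" idea: learn from chemicals
# with known toxicity, then rank untested ones so the riskiest get studied first.
# All descriptor values and labels here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: made-up structural descriptors for one chemical
# (say, molecular weight, log P, count of reactive groups).
X_known = np.array([
    [151.2, 1.2, 0],
    [290.4, 4.8, 2],
    [78.1,  2.1, 1],
    [410.9, 5.6, 3],
])
y_known = np.array([0, 1, 0, 1])  # 0 = not toxic, 1 = toxic (made-up labels)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_known, y_known)

# Untested chemicals: structural descriptors only, no lab data yet.
X_untested = np.array([
    [305.0, 5.1, 2],
    [120.3, 0.8, 0],
])

# Predicted probability of toxicity becomes a prioritization score.
scores = model.predict_proba(X_untested)[:, 1]
for i, score in enumerate(scores):
    print(f"Untested chemical {i}: predicted toxicity risk = {score:.2f}")
```

Real efforts use far richer structural data and lab screening results, but the logic is the same: the model's output isn't a verdict, it's a ranking of which chemicals deserve lab time first.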

Huge chemical backlog

There are currently about 85,000 substances in the EPA’s inventory under the Toxic Substances Control Act, and about 30,000 are thought to be used in wide commercial application. That doesn’t include, according to a 2011 study in the journal “Science of the Total Environment,” 8,600 food additives, 3,400 cosmetic ingredients, 1,800 pharmaceuticals and 1,000 pesticides that are regulated under federal agencies other than the EPA.

At that time, the study said, about 10,000 substances were being considered for an EPA priority testing program, yet high-quality data was available for only about one-quarter of them, and one-third of them had virtually no toxicology information at all.

Being able to make useful predictions about effects based on chemical structure and other existing data would be really helpful. Hence the excitement about predictive toxicology.

But that new approach is still – well, new. Most decisions continue to be made based on traditional science.

How much arsenic?

I approached Goodale about it because her specialty is arsenic, which is in the news at the moment. The New Hampshire Department of Environmental Services has proposed lowering the acceptable level of arsenic in our drinking water based on updated science, from 10 parts per billion to five parts per billion. In other words, each drop of our water should be 99.9999995 percent arsenic-free instead of just 99.999999 percent arsenic-free.
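
If you want to check that string of nines yourself, the conversion is simple arithmetic; here it is spelled out, with nothing assumed beyond the two proposed limits.

```python
# Convert a parts-per-billion limit into the "percent arsenic-free" figure.
def percent_free(ppb):
    return 100.0 - (ppb / 1_000_000_000) * 100.0

print(f"{percent_free(10):.7f}")  # 10 ppb -> 99.9999990 percent arsenic-free
print(f"{percent_free(5):.7f}")   #  5 ppb -> 99.9999995 percent arsenic-free
```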

(Arsenic is usually a byproduct of our geology, by the way. This is one of many cases where it does not help that the chemical is “natural.”)

That proposal made me curious about the scientific processes by which we decide that one incredibly diluted amount of a chemical is dangerous while a slightly more diluted amount is safe. Hence my call to Goodale.

The complication is that you can’t do controlled experiments on human health. You can’t subject a random group of people to a potential toxin for a number of months or years to see what happens compared to a control group.

So, Goodale explained, we have to make intelligent guesses based on three things, all of which have shortcomings: epidemiological studies, analysis of human cells in labs, and animal studies.

Animal studies involve exposing lab rats or their equivalent to doses of a chemical, while lab work does the same thing to isolated human cells in a Petri dish or the equivalent. Both take the results and extrapolate to what would happen if the exposure had been done to human beings, an extrapolation that leaves plenty of room for uncertainty.

You’re probably more familiar with epidemiology. You know how it goes: A community that seems to have an unusual amount of some disease gets studied to see if an environmental cause can be found. If there’s more cancer than the national average and more of a chemical around than is usual, did the chemical cause the disease, or is it coincidence, or is there some other factor?

This is more direct, since it looks at actual humans, but much more complicated. Determining how much a specific agent caused a change in disease over a period of time is an exercise in extreme biology and statistics, with a hefty dose of data-gathering uncertainty added.

“Epidemiological studies can show correlation but cannot provide causation, because it’s not a controlled exposure,” she said.

One in a million

As Goodale explained it, the process starts with a benchmark dose at which experience or studies indicate that effects are likely to start showing up. For cancer, the usual benchmark is a dose that isn’t expected to cause more than a one-in-a-million risk of cancer over a human lifetime.

The acceptable dose limit is then lowered by various factors.

“Usually it’s divided by 10 to factor in people who might be more sensitive, to account for that uncertainty. Another factor of 10 if extrapolating from animal data to human data. Another factor of 10 if the study data are not chronic – if they’re shorter-term, which is often the case in animal studies,” she said. “Sometimes they add another uncertainty factor, depending on circumstances.”

“Then there’s an evaluation of the limitations of the data – things that are considered are things like the risk of bias of the study, the quality of the study, the confidence in how the exposure was assessed, and the like,” she said.

Ten times ten times ten times ten is 10,000 – which means that a benchmark dose of one part per million based on limited studies can turn into a regulated limit of one part per 10 billion.
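
Here is that arithmetic spelled out as a short sketch. The benchmark dose and the particular set of factors are illustrative, echoing Goodale's description above; this is not an actual regulatory calculation for arsenic or anything else.

```python
# The uncertainty-factor arithmetic described above: the benchmark dose is
# divided by 10 for each source of uncertainty that applies.
benchmark_ppm = 1.0  # hypothetical benchmark dose: one part per million

uncertainty_factors = {
    "more-sensitive people": 10,
    "animal-to-human extrapolation": 10,
    "short-term rather than chronic data": 10,
    "additional case-by-case uncertainty": 10,
}

combined = 1
for factor in uncertainty_factors.values():
    combined *= factor

limit_ppm = benchmark_ppm / combined

print(f"Combined uncertainty factor: {combined:,}")  # 10,000
print(f"Regulated limit: {limit_ppm} ppm")           # 0.0001 ppm
print(f"= {limit_ppm * 1000} ppb")                   # 0.1 ppb, one part per 10 billion
```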

Combine this with an ever-increasing ability of technology to detect and measure chemicals in minuscule amounts and you see how we get to mind-boggling levels like 70 parts per trillion – trillion! – of the chemicals known as PFAS in southern New Hampshire groundwater. (To help grasp what that number means, consider: One million seconds is 11 days, but one trillion seconds is 30,000 years – yes, years.)
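
(Checking that seconds comparison is a one-liner, if you're the type who likes to verify such things.)

```python
# The million-vs-trillion seconds comparison used above, worked out.
SECONDS_PER_DAY = 60 * 60 * 24
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

print(f"One million seconds  = {1e6 / SECONDS_PER_DAY:.1f} days")      # ~11.6 days
print(f"One trillion seconds = {1e12 / SECONDS_PER_YEAR:,.0f} years")  # ~31,688 years
```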

There’s another complication: You can’t assume that health effects are linear. In other words, if a substance causes X problems at a certain dose, cutting that dose by a factor of 10 doesn’t necessarily mean that the effects will also be cut by a factor of 10. The effects might disappear entirely if you get below a cutoff level, or they might go down by the expected factor of 10, or the effects might not go down much at all.

“Endocrine disruptors are a good example of that. Acting like hormones, they can often have different effects at lower levels that you wouldn’t predict if you just extrapolated down from high-level exposure,” Goodale said.
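
A toy sketch makes the three possibilities in that paragraph concrete. The curves below are made up and model no real chemical; they just show how the same factor-of-10 cut in dose can mean very different things depending on the shape of the response.

```python
# Three made-up dose-response curves, evaluated at a high dose and at
# one-tenth of that dose. None of these numbers describe a real chemical.

def linear(dose):
    return 0.5 * dose  # effect scales directly with dose

def threshold(dose, cutoff=2.0):
    return 0.5 * dose if dose >= cutoff else 0.0  # no effect below a cutoff

def non_monotonic(dose):
    # extra low-dose effect, loosely in the spirit of endocrine disruptors
    return 0.5 * dose + 3.0 / (1.0 + dose)

high, low = 10.0, 1.0  # cutting the dose by a factor of 10

for name, curve in [("linear", linear), ("threshold", threshold), ("non-monotonic", non_monotonic)]:
    print(f"{name:14s} effect at dose {high}: {curve(high):5.2f}, at dose {low}: {curve(low):5.2f}")
```

The linear curve drops by the expected factor of 10, the threshold curve's effect disappears entirely, and the non-monotonic one barely budges.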

So what’s to be done? One possibility is to take the precautionary principle to the extreme, demanding certainty that no future harm will result before producing any new chemical or using an existing chemical in a new way. The difficulty of proving a negative means this would basically cripple all new industrial production. While that may not seem too bad an idea to some folks, it’s never going to happen.

Instead, we need to double down on science and research to help inform ourselves as much as possible. And we need to accept the reality of regulation and oversight that will sometimes get in the way of our wishes and desires and business plans; we are grownups, after all.

As for Goodale, she’s cautiously optimistic.

“With more health effects being taken into account in design, and as methods improve to determine which chemicals will be bad, there’s a lot of hope that we can develop chemicals that will not be harmful to human health or the environment,” she said.
