Geoff Russell

Merchants of doubt and merchants of fear

Updated: Jan 7, 2023

Climate scientist Matthew England recently interviewed Naomi Oreskes on Australia’s ABC Science Show. Oreskes, with Erik Conway, wrote “Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming” (MOD), a terrific book on the history of the campaign to discredit climate science in the eyes of the general public.

I’m waiting for the sequel: “Merchants of Fear: How a Handful of Oncoblind Activists Obscured the Truth about Radiation and Delayed Effective Action on Climate Change”. English rewilding and environmental activist George Monbiot contributed relevant material in various articles in 2011, starting with Evidence Meltdown, which began:


Over the past fortnight I’ve made a deeply troubling discovery. The anti-nuclear movement to which I once belonged has misled the world about the impacts of radiation on human health. The claims we have made are ungrounded in science, unsupportable when challenged and wildly wrong. We have done other people, and ourselves, a terrible disservice.

I know exactly how he felt; more on that later. Even if you don’t finish this article, I’ll be happy if you just read that first article of Monbiot’s.


The holy trinity of climate books

MOD appeared in 2010, but I don’t quite know when I read it. I associate it with Michael Mann’s equally terrific book, The Hockey Stick and the Climate Wars, which came out a couple of years later. Mann’s book is more personal. The attacks on him get a mention in Merchants of Doubt, but Mann’s detailed blow-by-blow account had a much deeper impact on me.


Both books followed James Hansen’s “Storms of my Grandchildren: the truth about the coming climate catastrophe and our last chance to save humanity” which appeared in 2009. Hansen is a retired (but still publishing) NASA climate scientist with an extraordinary grasp of both the big picture and the tiniest of details in a range of fields.


I’d started compulsively reading Hansen’s climate science publications in 2005, shortly after being convinced that global warming was “for real”. Hansen is one of those rare scientists whose academic work is written clearly enough to be comprehensible to anybody with basic scientific literacy and a generous dose of persistence. “Storms” is also beautifully written, but don’t expect to read it like a novel. The getting of knowledge isn’t like getting wet from standing under a shower.


But good science can be disturbing, as well as enlightening

For me, the years between realising that climate change was for real and reading the Oreskes, Mann and Hansen books were a period of intense introspection, watching the debate between climate change deniers and climate scientists play out in the public space. I remember a 2009 debate between Professors Barry Brook and Ian Plimer at Adelaide University. I had been impressed by Plimer’s withering critiques of creationism, so it was both fascinating and depressing watching him flounder so irrationally over climate questions. Plimer’s climate denial book, “Heaven and Earth”, also appeared in 2009. Mathematician Ian Enting gave it a page-by-page shellacking; Plimer’s mistakes ranged from trivial to big enough to park an ice-shelf on. Going through Enting’s list was a revelation. Watching a superficially convincing piece of writing crumble under the scrutiny of real expertise was one of those “Wow!” moments in my life. Being present at the actual debate between Plimer and Brook, comparing Plimer’s hand-waving rhetoric with Brook’s clear, measured and rational arguments, was also hugely instructive.


Having been an animal activist for years, it was a little confronting to recognise basic flaws in many climate denial positions mirrored in my own arguments. Despite degrees in both Philosophy and Mathematics, I was still eminently capable of mounting bad arguments; that was tough to accept. Recognising such flaws in Plimer didn’t make finding them in myself any less disturbing. Like many animal activists, I used some of the same dodgy arguments as climate change deniers. Being desperate to change people’s minds makes it easy to overplay claims about how unhealthy meat is, or to make elaborate claims about vegan diets. I had cherry-picked data and ignored contrary or inconvenient evidence. I undertook to do better!

Rational argument is complex, detailed and bloody hard work.


Scientific journals force authors to consider contrary evidence. You simply won’t get published if you ignore it, but that makes for dry reading. In-person or in-print debates aren’t the same thing. They typically consist of two sides slaughtering facts and arguments for entertainment. The side with the best jokes can easily sound like the winner. That makes debating a very risky way of exploring serious issues.


Eventually, I concluded that when marketing ideas, as distinct from forming them, it is acceptable to cut corners, but only with solid backing from the best available science. Otherwise, facts lose and bullshit wins. Plimer’s book was a far better seller than its critiques.

Oreskes and Conway’s book details the history of climate denial propaganda and links it to the earlier campaigns defending tobacco. The climate denial propaganda tricks came straight out of the tobacco industry playbook.


History can be irrelevant or even misleading

Strictly speaking, all of this history is irrelevant, even when it makes for compelling reading. Merchants is terrifically entertaining and some good science is imparted during its account. But, and it’s a big but, there is something both wrong and disturbing about attempting to dismiss an argument by associating it with detestable people and groups. Maybe that wasn’t Oreskes’ intention, but the book will be viewed that way by many. How often have you heard somebody say: “You can’t trust that claim, it comes from BP, or Big Oil, or the cattle industry, or Big Pharma, or Big Solar, or [insert your favourite enemy]”? Or even worse, “You can’t believe that, it was in The Telegraph, The Australian, The Age, The Guardian … or [insert any news source you like to hate]”.


This kind of bad argument has its own name, ‘ad hominem’, and it’s beloved by all sides of politics.


Scientists understand that ad hominem arguments are junk, but they also understand that financial interests, among other things, can bias judgement. Researchers sometimes make heroic efforts to counter bias. For example, they may do sham surgeries, where people with a medical problem are split into two groups; one group is given the surgical procedure being tested and the other undergoes pretend surgery, getting the cut and the stitches but with nothing happening under the skin. The evaluating team doesn’t know who has had the real surgery and who has had the sham surgery. The patient doesn’t even know! Why? Because if you are a surgeon who has developed a procedure you think is terrific, your judgement on its efficacy may not be reliable. But even sham surgery has methodological flaws. Getting stuff right is always a matter of multiple pieces of research rather than any single experiment.


So the best scientific journals require authors to declare interests, and subject papers to anonymous review where reviewers don’t know who they are reviewing. But, provided interests are declared, openly and honestly, anybody, regardless of employer, can publish in the best journals on the planet; and they do.


[Note added 7th Jan 2023. For people interested in how scientists in the meat, dairy and egg industries publish studies designed to confuse or totally mislead people about their products, Michael Greger puts them under the microscope frequently ... dairy, eggs, processed meat, meat generally, and again meat in general but specifically about spreading doubt, part ii, cholesterol. ]


Conflicts and declarations of interest

The declaration of interests is a warning to readers to beware of subtle mistakes and influences; it is not a guarantee that the paper is bullshit; nor is having no conflicts of interest a guarantee of reliability. A sound argument is a sound argument regardless of whether it comes from a paid tobacco industry lobbyist or James Hansen.


However, current journal requirements to declare interests are far too weak to capture all the kinds of ways that people’s judgement can be compromised.


In Australia, for example, cattle generate more warming than all our fossil fuelled power stations, so if you love beef, might that compromise your objectivity? I’ll provide an example shortly. So why aren’t authors required to disclose their dietary habits where relevant?

I once emailed James Hansen and asked why scientific papers ignore meat reduction in their list of things we should do to reduce our climate footprint. The essence of his reply was humility … saying I’d raised “a very good point, which I usually forget to mention – thanks for reminding me”. Hansen went on to publicly state that eating lower on the food chain, “in terms of individual action, is perhaps the best thing you can do”.


These days, numerous scientific papers (e.g., insert example/s) make it clear that dietary reform – meaning less meat, particularly ruminant meat and dairy products – is an essential component of meeting climate targets.


Contrast Hansen’s response with that of Prof. Tim Flannery, one of Australia’s most high-profile climate change campaigners for over 15 years. His reply to a similar email was that he was a “proud eater of flesh”. Flannery’s denial of the science on animal agriculture and the climate (not to mention biodiversity impacts) has been (and, I imagine, still is) extreme, recommending an expansion of the global livestock industry in the world’s “rangelands”. Flannery also supports holistic grazing despite it being panned by climate scientists. Again, Monbiot nails it. He has changed his mind a few times about meat-eating, both on ethical and environmental grounds, eventually writing a piece about being wrong about being wrong. When you decide to follow evidence, you can become reluctant to make decisions, always waiting for the next study, and the one after that. Or you occasionally change your mind as you learn more. That’s Monbiot’s way; he calls it like he sees it and isn’t afraid to admit to being wrong. Hansen is the same. As is Barry Brook.


How are we to assess the objectivity of climate scientists who don’t list their dietary habits in their disclosures of interest in journal papers? How do we handle reviewer and editorial bias due to dietary preference? All tough questions.


Ultimately, regardless of the strength and breadth of disclosure requirements, we still have to evaluate arguments on their merits. If we were flawless at this, we wouldn’t need disclosures. But we aren’t, so when we are warned about potential biases, it helps us focus and orient our bullshit detectors.


What are Oreskes’ biases and how can we detect them?


Oreskes and techno-fideism

The stated topic of the interview was “Human impact on and response to a changing climate”. But a large part of the interview was background. Oreskes has an interesting history, some of it in my city of Adelaide, working as a geologist with Western Mining. She had plenty to do with Roxby Downs, more famous in some circles for the 4,000 tonnes of uranium it produces annually than for the 230,000 tonnes of copper.


When the interview finally arrives at climate change, she begins by describing what she calls “techno-fideism”, the blind faith in technology to solve the problem.


She uses the term in relation to nuclear fusion (as opposed to fission, i.e., conventional nuclear power) and to any other belief in a magic bullet; except for the one she believes in.

Here’s her argument:


techno-fideism is a form of climate change denial, it’s a way of kicking the can down the road and saying, oh don’t worry, technology will solve it. … we’ve already wasted 30 years, so we need to mobilise the technologies that we have right here right now, which is essentially renewable energy, efficiency, storage, those are the big ones.

Let me repeat for emphasis: “...mobilise the technologies that we have right here right now…”. But she ignores nuclear power. What?


Here’s a chart of US electricity production in 2021. As you can see, nuclear is the biggest clean energy source in the US.




It’s the same in Europe. Nuclear is by far the biggest clean energy source in both Europe as a whole and the European Union in particular. Here are the EU figures:



Clearly, nuclear power is a well-understood, current technology. Oreskes is committing a typical climate denier sin: ignoring data which contradicts her views.

What of wind and solar? It’s easy to grow at a high rate when you are tiny. Not so easy when you get bigger. Maybe they will scale better than they have, but so far, their rollout speed has been glacially slow.


What of storage? Oreskes puts it in the category of technologies we have here and now. We may have the technology, but do we have the mines and factories needed to produce them at the scale which renewables, but not nuclear, require?


Oreskes should have looked at the production predictions for batteries: about 2.7 terawatt-hours of annual production by 2030, which isn’t even enough for about half the electric vehicles (EVs) we’d better be producing by then, let alone grid storage. Using Li-ion batteries to firm up the grid is as wasteful as burning old-growth forest for electricity.
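To see why 2.7 terawatt-hours doesn’t go far, here’s a rough back-of-envelope sketch; the vehicle count and pack size are round assumptions of mine, not figures from anything cited above:

```python
# Back-of-envelope: how far does ~2.7 TWh/year of battery production go?
# Assumptions: ~80 million new vehicles per year worldwide, ~60 kWh per EV pack.

battery_production_twh = 2.7      # predicted annual production by 2030
vehicles_per_year = 80e6          # assumption: global new-vehicle output
kwh_per_ev_pack = 60              # assumption: average EV battery size

twh_needed_for_all_evs = vehicles_per_year * kwh_per_ev_pack / 1e9  # kWh -> TWh
share_covered = battery_production_twh / twh_needed_for_all_evs

print(f"Batteries needed if every new vehicle were an EV: {twh_needed_for_all_evs:.1f} TWh/yr")
print(f"Share covered by predicted production: {share_covered:.0%}, leaving nothing for grid storage")
```

On those assumptions, predicted production covers a little over half of new vehicles and leaves nothing at all for the grid.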


Oreskes’ faith in a non-existent technology

But Oreskes’ biggest problem isn’t her renewables and batteries; it’s her lack of any mention of the technology they require which isn’t here and now.


To her, like most renewable advocates, this technology is invisible: the electricity grid.

Apart from ignoring conflicting data, she breaks her own prohibition by expressing blind faith in a non-existent technology.


Nobody has ever built a renewable-only grid at the kind of scale we need, and renewable industry experts are very clear that the difficulties increase rapidly as penetration increases.

Here is an image from the (June 2021) paper just cited:




Note the terms: “largely unsolved”, “partially solved” (which means not yet solved) and “largely solved”. Why don’t they just say “solved” for short term variability? Probably because these are actual experts, not just renewable advocates.


Our Australian Energy Market Operator (AEMO) used more colourful language in a recent (2022) report, the Engineering Roadmap to 100% Renewables (ERM100):


"Progressing this [renewable transformation], while simultaneously operating a real, gigawatt-scale power system is akin to “rebuilding a plane while flying it.".

I’d say it’s more like turning a plane into a helicopter without landing.


And yet Oreskes assumes, by omission, that we have this technology “here and now”. We don’t.


So Oreskes both ignores the biggest, most proven technology we have and also puts her faith in something which according to renewable experts is full of unsolved problems.


The image above is broad brush; it gains its power by coming from some of the best renewable experts in the best renewable energy labs in the US. Let’s flesh out some details.

Here is a small selection of the (many) problems mentioned in the recent AEMO Operations Technology Roadmap.


  1. Lack of dynamic models for DER, DPV, BESS (distributed energy resources, distributed photovoltaics, battery energy storage systems … AEMO acronym soup)

  2. Lack of EFCS modelling (emergency frequency control system)

  3. Lack of participant dynamic response and secondary control systems

  4. Lack of important operational data

  5. Multiple models managed by different teams

  6. Different functions using different model variations

The first two should be show-stoppers.


Can you imagine building a sky-scraper when you can’t model its behaviour in a wind?


Now imagine building a sky-scraper when you don’t even know how the components behave.


Engineers have tables and equations that describe steel, concrete and all the other structural components they use. But in the evolving renewable grid, the components simply don’t all behave as per specification.


Inverters are the devices that connect wind and solar generators with the grid. There are standards on how these must behave. But the standards keep changing as problems arise. These are not minor changes but changes critical to prevent inverter behaviour causing a cascading outage that could bring down the entire grid. Almost a decade ago, Germany recognised the problem of inverter disconnection and reversed standards that required them to disconnect when they detected a high frequency on the grid.


Our inverter standards were changed as recently as 2020 to address this same class of problem. And, despite the critical nature of inverters in any grid, many don’t work the way they are supposed to. There are “high levels of non-compliance”. What’s a “high level”? The Renewable Integration Study: Stage 1 (RIS) estimated that 40% of inverters at the time (in Australia) weren’t compliant with specifications.


Put simply, as the penetration of renewables increases, engineers keep finding problems they didn’t foresee, or didn’t imagine would be serious.



For example, grids have protection schemes to selectively black out areas when demand exceeds supply by some set amount. It’s called load-shedding: sacrificing the supply to some users to avoid everybody being blacked out.


These have to act incredibly quickly to prevent the entire grid collapsing. Add a large number of photovoltaic (PV) panels and these schemes can backfire.


Think about it. I’ve given you everything you need to predict the problem, but you probably won’t. Think some more and then keep reading.


When plenty of houses have 10kW systems on their roofs, the area the current systems automatically select to black out may be a net source of electricity. So a current load shedding system could accentuate rather than solve the underlying problem. Working out in real time which geographical areas are net sources and sinks isn’t simple. Load shedding systems will have to be redesigned and the data they need to function will have to be available.
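To make the failure mode concrete, here’s a minimal toy sketch; the feeder names and megawatt numbers are invented for illustration, and this is not AEMO’s actual scheme:

```python
# Toy illustration of why a load-shedding scheme tuned on historical demand
# can backfire once rooftop PV is widespread. All numbers are invented.

feeders = [
    # (name, historical_peak_demand_MW, demand_now_MW, rooftop_pv_now_MW)
    ("suburb_A", 90, 70, 110),   # sunny afternoon: net EXPORTER of 40 MW
    ("suburb_B", 60, 55, 10),    # net load 45 MW
    ("suburb_C", 40, 35, 5),     # net load 30 MW
]

def net_load_now(feeder):
    _, _, demand, pv = feeder
    return demand - pv           # negative means the feeder feeds the grid

# Legacy scheme: shed the feeder with the biggest historical demand block.
legacy = max(feeders, key=lambda f: f[1])
print(f"Legacy scheme sheds {legacy[0]}: relief = {net_load_now(legacy)} MW")
# Shedding suburb_A disconnects its rooftop PV as well, so "relief" is -40 MW:
# the supply deficit gets worse, not better.

# PV-aware scheme: shed by real-time net load instead (needs live data).
aware = max(feeders, key=net_load_now)
print(f"PV-aware scheme sheds {aware[0]}: relief = {net_load_now(aware)} MW")
```

The point of the sketch is the data requirement: the smarter choice needs real-time knowledge of which areas are net sources and which are net sinks, which is exactly the data current schemes don’t have.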


Communications standards for inverters are evolving. Unlike the Internet core protocols, which were designed in one place in one country and then slowly spread around the planet, there are far more fingers in the grid design pie and the control problems are worse; so it will take longer.


For decades, the standard behaviour of any electrical machine sophisticated enough to monitor its electricity supply was to simply disconnect if it detected a supply outside some limits. Such behaviour can be fatal in a large grid with active rather than passive devices and cause what engineers call a “cascading outage”; you don’t need a technical understanding to get the picture.


The question isn’t so much how to model it, but can it be modelled and modelled fast enough?


Modelling large complex systems rapidly almost always requires that components be represented as linear devices over their operating range. Electricity grids aren’t like this. When things happen like trees falling on powerlines or lightning strikes, voltage and current surges can be large. These are modelled using non-linear methods. The physics is non-linear and that’s what the models have to handle. Modelling large collections of non-linear devices is an engineer’s second-worst nightmare. It graduates to their worst nightmare when the devices don’t even operate to specification.
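Here’s a tiny numerical illustration of the problem, again with invented numbers; real grid simulators are vastly more sophisticated, but the flavour is the same:

```python
# A constant-power inverter draws current I = P/V. A linearised model fitted
# around normal voltage (1.0 per-unit) works for small disturbances but is
# badly wrong once the device current-limits or trips. Numbers are invented.

def linear_model(v_pu, p_pu=1.0):
    # First-order expansion of I = P/V around V = 1.0 pu: I ~ P * (2 - V)
    return p_pu * (2.0 - v_pu)

def actual_device(v_pu, p_pu=1.0, i_max=1.2, trip_below=0.5):
    if v_pu < trip_below:            # ride-through limit exceeded: disconnects
        return 0.0
    return min(p_pu / v_pu, i_max)   # constant power, hard current limit

for v in (1.0, 0.9, 0.7, 0.4):       # normal, mild sag, deep sag, fault
    print(f"V={v:.1f} pu  linear={linear_model(v):.2f}  actual={actual_device(v):.2f}")
```

Near normal voltage the two agree; during a deep sag the linear model predicts ever more current while the real device limits and then drops out entirely, which is exactly the kind of behaviour that can turn a local fault into a cascading outage.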


The problem isn’t so much large wind farms or large solar farms. Firstly, their electrical equipment isn’t consumer-level junk made to hit a price point rather than a performance specification, and secondly, many core grid assumptions remain unaffected. But they do require massive additional transmission resources and storage.


Rooftop PV or any other kind of small distributed energy supply causes much bigger problems. It totally breaks current grid models and functioning by disrupting the basic assumptions used to identify and isolate faults.


Large wind and solar farms can fit into the current grid model, but they need to work like existing large providers. They need to handle perceived voltage and current changes without doing a dummy spit.


Why anybody wants such environmentally destructive energy systems is a mystery when there is a class of much more eco-friendly systems. Consider Hinkley Point C, the new tiny nuclear plant being built in the UK. I call it tiny because that’s what it is compared to any kind of renewable alternative. Here’s an image showing the area of land required to supply an equal amount of electricity annually using solar PV in the UK. Breaking that up into 407 smaller solar farms of 100 hectares each might fool some people about the impacts, but they won’t change. Why don’t UK environmentalists care about the English countryside?
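If you want to check the land-use arithmetic yourself, here’s a back-of-envelope sketch; the plant capacity, capacity factor and solar yield per hectare are plausible round assumptions of mine rather than official figures:

```python
# Rough sanity check of the Hinkley Point C vs UK solar land-use comparison.
# All inputs below are assumptions chosen as plausible round numbers.

hinkley_capacity_gw = 3.26        # assumed: two EPR units of ~1.63 GW each
capacity_factor     = 0.90        # assumed typical for modern nuclear
hours_per_year      = 8760

annual_output_gwh = hinkley_capacity_gw * capacity_factor * hours_per_year
print(f"Hinkley C annual output ~ {annual_output_gwh:,.0f} GWh")

# Assumed annual yield of UK ground-mounted solar per hectare of land.
solar_yield_gwh_per_ha = 0.63

land_needed_ha = annual_output_gwh / solar_yield_gwh_per_ha
print(f"Equivalent solar land ~ {land_needed_ha:,.0f} ha "
      f"~ {land_needed_ha / 100:,.0f} farms of 100 ha each")
```

With those assumptions the answer lands around 400-odd farms of 100 hectares, consistent with the figure shown in the image.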




Perhaps many people do care about the English countryside, but there is no shortage of academics in the UK building electricity models which ignore the problems mentioned above but which purport to show that nuclear isn’t cost-effective. One such study featured recently in New Scientist. The model has a default anti-nuclear position based on “issues such as safely storing nuclear waste” and “hazards from low-probably [sic] but high-impact accidents”.

They don’t feel the need to quantify these issues and hazards. Their default position is that nuclear is obviously bad, so all we need to do is show we can do it cheaper with wind and solar. Looking at the image above, I’d say there has never been a nuclear accident anywhere on the planet which would make that land-use trade-off rational … unless, of course, you simply hate the English countryside.


In case you missed it, this post has a concise list of why nuclear is more eco-friendly than renewables, and this post looks at the extraordinary problems of nuclear waste.


Merchants of fear


Oreskes understands the selling of doubt; she wrote a book about it. But she seems blind to the large quantities of nuclear fear she’s loaded into her shopping trolley during her lifetime. Where did it come from? Who funded its dissemination? That’d make a great book, and she’s the perfect author to write it. But I’m more interested in the truth and logic of the facts and arguments than in where they came from.


In 2020 Foreign Affairs asked a panel of 41 experts to assess the claim: “A global expansion of nuclear energy should be a central piece of the fight against climate change”; they were asked about their level of agreement or disagreement and most also contributed a statement in support of their judgement.


Oreskes begins with a clearly rhetorical flourish.


We have nearly 70 years of experience with nuclear power generation. During that time, only one country in the world has generated the lion’s share of their electricity from nuclear power without a major accident, and that country is France.

This is what I call a “funnel statement”. It leads you down a hole to a misleading implication. Here’s another funnel statement: “No country which has generated the majority of its electricity with nuclear power has ever had a major accident.” It’s true because the only country which has ever generated the majority of its electricity with nuclear power is France.


Oreskes’ statement implies that many countries have generated the lion’s share of their electricity from nuclear but only one hasn’t had a major accident, meaning all the others have.


How many have generated “the lion’s share”? I’d say one – France, though Sweden has had years with nuclear being its largest source of electricity. But why make a statement dependent on how you define something as vague as “lion’s share”?


Ukraine, at the time of Chernobyl, was generating just 22% of its electricity with nuclear. Nuclear expanded considerably in Ukraine after Chernobyl.


Why not just consider the ratio of accidents to operating hours? Or gigawatt-hours? Or something sensible. That’s how we do it with everything else. Nobody talks about the number of air crashes in countries where the lion’s share of travel is by air. We look at crashes per operating hour, or passenger kilometre or something sensible.
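Here’s what that kind of metric looks like; the reactor-year total is a round assumption of mine about cumulative worldwide commercial operation, not a precise figure:

```python
# Judge major accidents per unit of operation, the way aviation is judged.
# The operating-experience figure is an assumed round number.

reactor_years_worldwide = 20_000   # assumption: rough cumulative commercial experience
major_accidents = 1                # counting only Chernobyl at that scale (see below)

rate = major_accidents / reactor_years_worldwide
print(f"~ {rate:.5f} major accidents per reactor-year "
      f"(one per {reactor_years_worldwide:,} reactor-years of operation)")
```

A per-operating-time rate like this is comparable across technologies in a way that counting countries by “lion’s share” is not.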


And how do we judge when an accident is “major”? Does it have to kill more than some minimum number of people? Plenty of things kill 5 people, or 10. What makes an accident “major”? A hundred?


Think of plane accidents, bus rollovers, gas explosions, dam failures and the like. Are they major accidents? Definitely. But nobody marches in the street calling for bans on planes, dams and buses, and such calls for bans on gas are very recent. Why are nuclear plants singled out and even more tightly regulated than aeroplanes, which are much bigger killers?


How about accidents which make many people ill? Do they count as major?


Consider food poisoning. Wikipedia provides a useful list. The worst was a processed-meat Listeria contamination in South Africa which killed a couple of hundred people and infected over a thousand. The worst in the US was also a Listeria outbreak, which killed about 50 people and infected 86.


There has only ever been one nuclear accident on anything like the scale which could be called a “major accident”, Chernobyl.


Am I ignoring Three Mile Island? Of course not; it didn’t kill anybody or make anybody sick. It was expensive, that’s all.


Fukushima?


A triple meltdown is certainly expensive, but what was the death and illness toll from the accident, as opposed to the incompetent and unnecessary evacuation? Zero or, arguably, four (three drowned when the tsunami hit and one cancer case got a compensation pay-out). Had the Japanese Government followed the IAEA guidelines, the Criteria for Use in Preparedness and Response for a Nuclear or Radiological Emergency (IAEAGUIDE), the death toll would have been limited to the drownings. Why blame nuclear power for incompetent actions by politicians? Prime Minister Naoto Kan was so heroically ill-informed that he thought the Chernobyl accident was a nuclear explosion! This is according to the ABC’s Mark Willacy in his book about Fukushima. It would be funny if his ignorance hadn’t caused so much distress, suicide and death.


We judge other accidents by considering the benefits of the technology, while taking reasonable steps to minimise their incidence. For nuclear power, apart from the benefits of electricity and heat, there has been the avoidance of 1.8 million premature deaths from the displacement of fossil fuels. That’s without adding in the lives saved by the various nuclear plant seawalls in Japan; all the other seawalls on that stretch of coast failed spectacularly.

Oreskes goes on to say that nuclear power is “notorious for being very slow to build”. She omits the obvious qualifier “sometimes”.


Here’s a graph of the build times of Japanese reactors. As you can see, the median build time was under 4 years. When you are talking about something that supplies more energy than 10-20 really large solar farms, and provides not just electricity but dispatchable electricity, this isn’t slow at all. As a historian with technical qualifications, Oreskes would be well placed to ask what is so screwed up in the US that their build times are almost double this. Are US engineers just stupid? Of course not.




So why does Oreskes say that nuclear build times are “notoriously” long?


Perhaps she’s not familiar with much outside the US or perhaps she’s suffering a little confirmation bias and simply hasn’t looked.


Am I cherry picking the country with the fastest build times? Of course.


I’m trying to demonstrate that slow build times are not intrinsic to the technology, but to other national factors.


What about other countries? Here is South Korea; median build time 4.9 years.




China; median build time 5.6 years.




Germany is interesting; median build time 6.2 years. Are German engineers as stupid as US ones? (NB: a few reactors here have no "Start of Construction" date in the database; I've no idea why.)



The German graph has the same basic shape as the US graph; the US median build time is 6.9 years, with some really long recent builds blowing out the average to 8.2 years.


Germany is home to the German Greens. The forced closure of German reactors after the Fukushima meltdowns led to the premature deaths of about 9,000 Germans, thanks to increases in coal use and pollution. The German Greens, like other anti-nuclear groups, simply don’t understand cancer and its relationship to radiation. I’d be confident that Oreskes has never studied either topic closely, or perhaps at all.


Oncoblindness and risk


Oreskes constructed the word "techno-fideism", so I figured I'd make up a word too: "oncoblindness". It denotes people who don't see the huge sea of cancers we live in and so have no reference point for any kind of risk assessment. Oncology is a fascinating subject, but fear tends to put people off learning about it. Here's an analogy. Tell people that shampoos have chemicals in them and some will stop using them. The word "chemical" has one meaning for anybody with the slightest scientific literacy, but another meaning for more than a few other people: it means poison. The word "radiation" works the same way, but for far more people. It can even scare the hell out of physicists, because they understand the link with cancer, but it may be the only such link they've ever heard of, so they can't compare. Anybody watching the HBO "Chernobyl" series may have noticed how medical experts were either treated as figures of fun or just written out of the history. This is because physicists are used to being "the smartest guys/gals in the room", when the reality is that most know bugger all about cancer.


But let's ignore that tangent ...


In the past 50 years, the Japanese population has increased about 15% while the number of bowel cancers annually has gone from 20,000 to 148,000. Wow! Say that nice and slowly and calculate how many people get that terrifying diagnosis daily.
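Here’s that arithmetic, using only the figures just quoted; the population adjustment is nothing more than a division:

```python
# Quick arithmetic on the quoted Japanese bowel cancer figures.

cases_then, cases_now = 20_000, 148_000   # annual diagnoses, ~50 years apart
population_growth = 1.15                  # ~15% increase over the same period

per_day_now = cases_now / 365
raw_increase = cases_now / cases_then
adjusted_increase = raw_increase / population_growth   # crude population adjustment

print(f"Diagnoses per day now: ~{per_day_now:.0f}")
print(f"Raw increase: {raw_increase:.1f}x; population-adjusted: ~{adjusted_increase:.1f}x")
```

Roughly 400 diagnoses every single day, and better than a six-fold rise even after allowing for population growth.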


That’s what happens when you increase the exposure to significant carcinogens (red and processed meat in this case). The same surge occurred in the US with lung cancer and tobacco during the early years of the last century.


Here is a question for Prof. Oreskes, or anybody else who thinks nuclear power is risky: how many Chernobyl-sized accidents annually would you need to produce a surge in cancers like that?


Oreskes thinks nuclear plants are “risky” because she has no reference point. That’s what oncoblindness means. When somebody says Chernobyl caused 6,000 thyroid cancers over 36 years, people are shocked. That’s about 166 cancers per year among the 162,000 cancers in Ukraine each year. Thyroid cancers certainly aren’t desirable, but they are manageable and almost never kill people. If it comes to a choice between falling off a roof while cleaning a solar panel and getting thyroid cancer, it’s an easy choice; unless you think wheelchairs look like fun. There have been some 15 deaths in total, over 36 years. On the other hand, the 5-year survival rate for bowel cancer is about 65% in the US, so 166 bowel cancers per year for 36 years would result in at least a couple of thousand deaths. Compare those thyroid cancers with the 148,000 bowel cancers every year in Japan. How many deaths would you get? About 60,000 a year, every year.
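Here’s the comparison made explicit, using only the figures quoted above. Applying the US 5-year survival rate to Japan’s caseload is crude and lands a little under the ~60,000, which presumably reflects actual Japanese mortality statistics rather than this shortcut:

```python
# Chernobyl thyroid cancers vs Japanese bowel cancers, using the quoted figures.

years = 36
thyroid_cases_total = 6_000          # quoted: Chernobyl-attributed thyroid cancers
thyroid_deaths_total = 15            # quoted deaths over the same period

bowel_survival_5yr = 0.65            # quoted US 5-year survival for bowel cancer
cases_per_year = thyroid_cases_total / years

# If those same ~166 cases per year had been bowel cancers instead:
hypothetical_bowel_deaths = cases_per_year * years * (1 - bowel_survival_5yr)

japan_bowel_cases_per_year = 148_000
japan_bowel_deaths_crude = japan_bowel_cases_per_year * (1 - bowel_survival_5yr)

print(f"Thyroid: ~{cases_per_year:.0f} cases/yr, {thyroid_deaths_total} deaths in {years} years")
print(f"Same caseload as bowel cancer: ~{hypothetical_bowel_deaths:.0f} deaths in {years} years")
print(f"Japan's annual bowel cancer deaths (crude estimate): ~{japan_bowel_deaths_crude:,.0f}")
```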


You could have thousands of Chernobyl-sized accidents, every year, and not get close to the deaths caused by bowel cancer increases in just a single country – Japan, let alone all the other countries with rising bowel cancer rates as people eat more red and processed meat or get fatter.


Oreskes’ description of nuclear as risky is a pretty clear indication of oncoblindness; she evidently knows nothing at all about the big carcinogens discovered in the past 50 years.


She’s been studying different kinds of history and has simply missed 50 years of both oncology and DNA biology.


The Germans are big wurst eaters but worried about nuclear accidents; that’s like an alcoholic smoker worrying about the carcinogenicity of red lollies.


I’ve written about this in more detail in other posts.


How cheap and fast to build could reactors be with sensible safety regulations? That’s the big question everybody should be asking.


Particularly in the US. Despite its astonishing capacity in many areas, it can’t control guns, or drug prices, or nutter SCOTUS judges; it can’t build nuclear plants or a functional health care system; and its voting system looks like it was designed by a 10-year-old. The “notorious” slow build rate isn’t a technology issue, it’s a regulatory issue, and it’s clearly at its most serious in the US and Germany.


Appendix: What about new reactor designs?

Nuclear experts, and advocates like me, have been prone to praising various nuclear technologies that have never been built at scale. This is definitely a silver bullet risk. When I first realised I’d been wrong about nuclear power, I was enamoured with these new designs. But over time, as I learned more about cancer and radiation, I realised we don’t need new designs; certainly not because of any safety issues.


So while I’d certainly support continuing work on better designs, and also fusion, we should definitely not bet the farm on them. We should look at the current international reactor fleet and decide which designs have worked well and can be built as fast and as cost-effectively as possible. We need to be thinking less about the science of nuclear reactors and more about production engineering.


The delicate policy task is to balance the size of the effort devoted to these two arms of activity.

