Geoff Russell


Part III: Is nuclear power really expensive, or just much better?


Abbreviations:

  • AEMO: Australian Energy Market Operator

  • CCGT: combined-cycle gas turbine, the most efficient type of gas turbine

  • CCS: Carbon Capture and Storage

  • CEM: capacity expansion model

  • CSIRO: Commonwealth Scientific and Industrial Research Organisation

  • EMT: electromagnetic transient model

  • GENCOST: GenCost, CSIRO's annual report on electricity generation costs

  • IAEA: International Atomic Energy Agency

  • IEA: International Energy Agency

  • ISP: Integrated System Plan

  • MIT: Massachusetts Institute of Technology

  • NEM: National Electricity Market

  • NREL: United States National Renewable Energy Laboratory

  • OCGT: open-cycle gas turbine, the cheapest and least efficient type of gas turbine

  • PV: photovoltaic

  • UNECE: United Nations Economic Commission for Europe

Minimising system costs

This is the third post in a series on nuclear power costs; the previous two are here and here.

In this part, we’ll return to where we started and dig a bit deeper into why a US study, with lead author Paul Denholm from the United States National Renewable Energy Laboratory (NREL), found that total grid costs can be minimised by including nuclear reactors in a grid.


They allowed for other types of dispatchable power, like burning forests and gas with Carbon Capture and Storage (CCS), but I’ll focus on nuclear because of its much better eco-footprint.


Debates in Australia about nuclear power and its cost are usually short-circuited by somebody citing the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Gencost (GENCOST) report and declaring the matter settled. It’s the dietary equivalent of citing the price of donuts and telling people to stop banging on about grains, beans and leafy greens.


But wind and solar power simply don’t deliver all the things that a reliable, robust grid needs; they deliver only cheap energy, with none of the other essential grid “nutrients”.


GENCOST doesn’t model grid operation or take into account the operational specifics of various energy sources. By citing it to argue against nuclear power, you are assuming that cobbling together the cheapest energy sources will magically give you the cheapest grid. It doesn’t work for a diet nor for an electricity grid.


Capacity expansion models


But it isn’t just the GENCOST report that is simplistic and irrelevant.


There is a whole body of obsolete electricity modelling literature obscuring the decarbonisation challenge. These are capacity expansion models: more sophisticated than GENCOST, but still overly simplistic. They match electricity demand against generating capacity to tell you how much of the latter you need for the former.


As the Denholm paper says:


However, CEMs [capacity expansion models] simplify power-system operation to maintain tractability and therefore cannot analyze in detail the operational reliability and adequacy of the resulting generation mixes.

They go on to mention that there are at least 181 capacity expansion model (CEM) studies of high penetration (>95%) levels of renewables, and they are all, by implication, misleading.

CSIRO’s GENCOST isn’t even a CEM; it’s even more simplistic and misleading.


There’s an old saying in modelling: “all models are wrong, but some are useful”. The corollary is that many models are useless, or worse, grossly and dangerously misleading. For example, a much-cited study by Stanford Professor Mark Jacobson claimed to show how to supply 100% of energy with 100% renewables in 139 countries. When Imperial College researchers Clara Heuberger and Niall MacDowell carefully tested the configuration it claimed would work in the UK, using a more detailed model, they found it would produce frequent blackouts, with about 10 percent of demand simply unmet. The Jacobson model is a bit like testing a cancer drug in a Petri dish with mouse cells and asserting it will work in people.


The Heuberger model captures more detail than the Jacobson model and demonstrates that the Jacobson model gives wrong answers; misleading answers. It is dangerously worse than just useless.


In essence, the Heuberger paper lifted the modelling bar. Nevertheless, when I last checked, Jacobson’s simplistic and misleading paper had been cited 489 times; mostly positively. It’s like when a newspaper gets something wrong on the front page and later issues an apology on page 75. The headline can live on and have a wide and abiding influence.


Nonetheless, the Heuberger model was well short of the electromagnetic transient (EMT) models discussed in my modelling post.


Just to refresh your memory, an EMT model predicts the precise voltage and frequency changes in each part of an electricity grid in response to some event, like a tree falling on a power line or a transmission line being hit by lightning. The Heuberger model isn’t an EMT model, but it’s far more sophisticated than Jacobson’s.


The Denholm paper cites other studies modelling grid operation and reliability problems using more physical detail than either Heuberger or Jacobson. These more detailed papers also demonstrate why the Jacobson work and, by implication, the other 180 studies like it, are profoundly misleading.


Key among these improved studies is a pair that appeared in 2018, the results of PhD work at the Massachusetts Institute of Technology (MIT).


The first is this one with lead author Nesta Sepulveda. The second is this one with lead author Jesse Jenkins. The Sepulveda paper uses some of the work of the Jenkins paper, and both lead authors created an extraordinary computing tool for modelling grids: GenX (see documentation here).


The Sepulveda paper used GenX to incorporate critical technical details of all of the technologies into their modelling. Among the attributes the model can consider, for each technology, when calculating minimal-cost configurations are:

  1. maximum stable output,

  2. minimum stable output,

  3. how fast it can increase output,

  4. how fast it can decrease output,

  5. how long it takes to start,

  6. carbon emissions as a function of output.

In addition, it can model transmission grid geometry and properties, battery characteristics and locations; it can even model demand-response impacts, where a company (for a price) takes its operations offline to reduce load.


GenX allows you to throw all this detail into the mix along with all manner of complicated cost scenarios and combinations of those scenarios. You can answer hypotheticals: what if nuclear turns out to be expensive but solar continues to get cheaper? Or nuclear is cheap but solar’s rapid price reductions level off?


In all, they considered 19 cost-combination scenarios, each an assumption about the future costs of wind+solar and storage, nuclear, and biomass.


These scenarios were considered in conjunction with a range of emission reduction targets. As we said in Part I, the Australian Energy Market Operator (AEMO) Integrated System Plan (ISP) has carbon dioxide emission levels of roughly 60 g-CO2/kWh. What if you set a target of 20 g-CO2/kWh? Or 0? The Sepulveda paper models targets of 200, 100, 50, 10, 5, 1 and 0 g-CO2/kWh, combined with all the other possibilities.
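The arithmetic behind these targets is worth seeing. A rough sketch, with an assumed figure of about 370 g-CO2/kWh for unabated combined-cycle gas (a typical literature value, not a number from the Sepulveda paper):

```python
CCGT_INTENSITY = 370.0  # g-CO2/kWh for unabated CCGT; an assumed typical value

def grid_intensity(mix):
    """Weighted-average emission intensity of a generation mix.
    mix: list of (energy_share, g_co2_per_kwh) pairs."""
    return sum(share * g for share, g in mix)

def max_unabated_gas_share(target_g_per_kwh):
    """Largest unabated-gas energy share consistent with a target,
    assuming everything else in the mix is zero-emission."""
    return target_g_per_kwh / CCGT_INTENSITY
```

On these assumed numbers, a grid at the ISP's roughly 60 g-CO2/kWh can still get about 16 percent of its energy from unabated gas; at a 10 g-CO2/kWh target that falls below 3 percent, which is why the tight targets reshape the optimal mix so dramatically.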


Their range of construction costings for nuclear included three levels: $US4,200/kW, $US4,700/kW, and $US7,000/kW. The top figure is well above the International Energy Agency (IEA)’s working assumption of $US5,000/kW, and even the bottom figure is much higher than the IEA’s estimate of the current Chinese build price of $US2,800/kW.


They ran their optimisation using historical weather and demand data for every hour in two US geographical regions; a northern cold region (about the same size as our National Electricity Market (NEM) grid) and a southern warmer region (about 3 times the size of the NEM grid). They also modelled what happens if you connect these two grids.


The final outcome of all this was 912 distinct scenarios, plus another 380 exploring the impacts of transmission capacity and demand-management strategies.


The goal was to investigate the landscape of how costs varied in response to all these factors. This contrasts with the CEM studies which not only ignore the intricacies of how various technologies work, but only ever shine a light on one, or a few, points on a landscape and jump to conclusions.


The GenX engine, the computer code which makes it possible to formulate and solve this multidimensional problem, was also written by Jenkins and Sepulveda. For any geeks reading, they wrote the code in Julia, a language particularly suited to mathematical optimisation of complex systems.


CSIRO is Australia’s foremost scientific research agency. Their name is on the GENCOST report. What tools and methodology did that report use?


GENCOST used a spreadsheet. Not that I’ve anything against spreadsheets. Some of my best friends use them.


GENCOST calculated “levelised costs of electricity”. As explained in Part I, this is analogous to, and just as crude as, calculating the cost per calorie of foods and thinking you can build a balanced diet by combining the cheapest foods per calorie.
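For the record, here is the levelised-cost formula in miniature: discounted lifetime costs divided by discounted lifetime output. This is the generic textbook calculation with made-up numbers, not GENCOST's actual inputs. Note that the timing and reliability of the output appear nowhere in it.

```python
def lcoe(capex, annual_opex, annual_mwh, years, discount_rate):
    """Levelised cost of electricity: discounted costs / discounted output.
    capex is paid up front; opex and output recur each year for `years`."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy

# Two hypothetical plants can share an identical LCOE while one delivers on
# demand and the other only when the weather cooperates; the formula
# cannot tell them apart.
```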


What’s the difference between GENCOST and the MIT modelling? Tesla vs billy cart. Too rude? I think not.


To be fair to the researchers involved, I’m sure they were constrained by their employers or the lobby and industry groups that are typically called stakeholders in today’s world where science can become merely another gun for hire.


What did the Sepulveda et al. study find about levelised costs of electricity?

“… even in regions with abundant renewable resources, firm low-carbon resources [nuclear/CCS] can lower the cost of deep decarbonization significantly, even if the firm resources have much higher levelized costs than do variable renewables, and even if very-low-cost battery energy storage technologies are available.”

It’s worth repeating: even when nuclear has much higher levelised costs and storage is very cheap, adding nuclear still lowers your total grid cost.


Results


Full details of the Sepulveda team’s modelling are available for free on the web. But for those wanting just a feel for the results, here is one small graph. I picked the modelling of the hotter southern area of the US as being more relevant to Australia. I’ve extracted only the panel relating to scenarios where firm resources are allowed, ignoring the more costly scenarios which prohibited them.


It shows the lowest cost mix of technologies under a range of cost assumptions and emission targets. It doesn’t represent all the modelling results; just some important ones.

The top labels “All Conservative”, “All Mid-Range” and “All Very Low” refer to costs: highest to lowest, left to right. On the right, nuclear is missing from the mix, priced out by biogas. Along the bottom are carbon dioxide emissions from the mix: 200, 100, … down to 0.


Our AEMO ISP is close to the 50 g-CO2/kWh case, and you can see that for such a target, gas is cheaper than nuclear (note: combined-cycle gas turbine (CCGT) and open-cycle gas turbine (OCGT) are both natural gas turbines). As you target lower emissions, gas with CCS starts to displace gas without CCS in the minimal-cost mix.


The height of each bar is the combined maximum power output of the generators in the mix, in megawatts; usually called the capacity. The dotted line is the maximum electricity demand. With a high level of wind and solar you need plenty of excess capacity to cover times when there is little wind. If your target is zero emissions, then at high (Conservative) costs, there is plenty of firm capacity (nuclear and biomass) in the mix and you need little spare capacity.

This modelling doesn’t consider the ecological impacts of biomass or any other technology, other than on the climate, but the authors do note that biomass is problematic for many reasons. Nor does it consider full lifecycle emissions of any technology; just the fuel emissions. Using the United Nations Economic Commission for Europe (UNECE) life-cycle emission figures, it’s reasonably obvious that the nuclear share of the mix would increase at low emission targets.


Figure 1. Southern System with reliable technology allowed

The corresponding panel for the scenarios where firm technologies, like nuclear, are excluded, shows that the required construction effort is larger. You need a much bigger wind and solar overbuild, and considerable amounts of electricity are simply thrown away.


Other graphs in the study present the model results in a way that allows you to compare costs. In this southern grid, the renewables-only configurations cost from 11% to 105% more.


Conclusion


CSIRO and everybody else have had four years to consider the MIT research. People from three of the best US renewable-energy research labs have looked at it and are citing the results. GenX is now widely used.


Where new tools and modelling approaches show that older, simpler methods give false results, the only reasonable response is to dump the old models.


So what are we doing in Australia? We are still pretending that the “levelised” cost of electricity is not just a good metric, but the only one we need, when it is neither.

GENCOST does little more than estimate levelised costs. It uses these to predict the global future energy mix and then pulls some grid integration costs out of a hat by assuming that just adding some transmission lines and batteries will do the job; it’s a very Australian kind of “she’ll be right mate” approach.


GENCOST used just three scenarios and incorporated no technical knowledge about the engineering characteristics of the technologies, other than capacity factors. And for nuclear power, it doesn’t even get those right.


What are capacity factors? Skip the next paragraph if you understand this concept.

Suppose you have a 10 megawatt wind turbine. This means that 10 megawatts is its maximum power. If the turbine ran at this power continuously, it would produce 10x24x365=87600 megawatt-hours of electricity per year. Wind turbines typically produce 30-40 percent of this maximum figure. That percentage, whatever it is, is called the capacity factor. It changes according to the weather for wind and solar. Nuclear reactors are typically only taken offline for refuelling or repairs. The refuelling outages are a product of the design. Some reactors can keep running while refuelling.
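The paragraph above reduces to one line of arithmetic; a sketch using the text's own 10 megawatt example (the 35 percent figure is just an illustrative value from the typical 30-40 percent range):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def capacity_factor(actual_mwh_per_year, nameplate_mw):
    """Actual annual output as a fraction of the theoretical maximum."""
    return actual_mwh_per_year / (nameplate_mw * HOURS_PER_YEAR)

# The 10 MW turbine from the text: running flat out all year it would
# produce 10 * 8760 = 87,600 MWh. At a 35% capacity factor it actually
# delivers 30,660 MWh.
```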


For nuclear power, GENCOST used a capacity factor of 80 percent in its “Low cost” scenario and 60 percent in its “High cost” scenario. If you look at the International Atomic Energy Agency (IAEA) database, the capacity factor for US reactors was 94.9 percent in 2021, with Chinese reactors achieving 89 percent. Why does GENCOST deliberately hobble nuclear with a low number? Maybe the stakeholders know.


Imagine if they’d just subtracted 15 percentage points from the measured capacity factor of solar photovoltaics (PV). There wouldn’t be much left.


The next post will dig deeper into exactly how nuclear or other reliable generators can reduce system costs. The great thing about good optimisation is that it can reveal solutions you may never have considered. It's like when a car's satnav tells you to go a different way than you'd thought and you find it's quicker and shorter.


Keep in mind that none of this includes the costs of building a grid that can handle intermittency. A recent IEA report estimated that the grid to handle wind and solar would cost as much again as the panels and turbines; not counting the environmental and wildlife costs, always the last in line for consideration.


END PART III. Next post: Nuclear load following and grid costs
