So I bet you weren’t expecting that title, were you? I must confess I’ll find any excuse to draw a Harry Potter analogy for things in my life, and climate models are no exception. They really are the stuff of a J.K. Rowling book: a sprinkling of mathematical magic and a big glug of compute power, combined with the knowledge and wisdom of the climate science wizards. But how do these mystical creatures actually work? What are they used for? Where are they set to go next in their evolution? I am no climate model wizard myself, but I aim to answer these questions in basic, easy-to-understand terms. Hopefully this post sheds light on a technology you have met, and will continue to meet, throughout the climate change narrative that is increasingly (and positively so!) prominent in our lives. As with pretty much all technologies, there is still much ground to cover, and some Voldemort-ish limitations lurk within them. But with a Dumbledore’s Army’s worth of value already being delivered in anticipating future climate risks, you just know good will prevail in the end.
Do forgive the littering of Hogwarts analogies throughout this piece. Now, all aboard from Platform 9¾ – anything from the trolley?
How do they work?
Although there are multiple types of climate model with different focuses, they share the same basic ingredients.
Gridding. Firstly, they map a grid of 3D cells across the globe, representing every part of the Earth from the ocean depths up to the layers of the atmosphere. In this context, size matters: the smaller the grid cells, the higher the resolution of the model, and the better it is at making projections at a regional level. Different gridding techniques are used, such as ones that avoid the squashing of cells as they approach the poles.
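To make the gridding idea concrete, here is a minimal sketch in Python of how a 3D grid of cells might be laid out. The resolutions and level count are illustrative numbers I have picked, not those of any real model:

```python
import numpy as np

# Hypothetical resolutions, purely illustrative: real models use far
# more sophisticated grid spacings and vertical level schemes.
lat = np.arange(-90, 90, 2.5)     # 2.5-degree latitude bands
lon = np.arange(0, 360, 2.5)      # 2.5-degree longitude bands
level = np.arange(20)             # 20 vertical levels, ocean depths up to atmosphere

# One value of one variable (e.g. temperature) per 3D cell
temperature = np.zeros((len(level), len(lat), len(lon)))
print(temperature.shape)          # (20, 72, 144) -> ~200,000 cells per variable
```

Notice that halving the horizontal cell size quadruples the number of cells per level, which is one reason higher resolution is so computationally expensive.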
Equations. Over the years, climate models have layered on more and more equations to represent the physics of our complex climate system. This enables them to better simulate the external factors that alter the climate, known as ‘forcings’. These range from equations expressing the transfer of energy from the sun to us on Earth (incoming solar radiation, or ‘insolation’) through to how reflective a given surface is (its ‘albedo’ – not to be confused with ‘libido’…).
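For a flavour of the kind of equation involved, here is a toy ‘zero-dimensional’ energy balance that uses both of those ingredients, balancing absorbed insolation against outgoing heat. It is a textbook simplification, a world away from a real model’s full equation set:

```python
# Toy zero-dimensional energy balance: absorbed sunlight = emitted heat.
SOLAR_CONSTANT = 1361.0   # incoming solar radiation, W/m^2
ALBEDO = 0.3              # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

# Balance S(1 - albedo)/4 = sigma * T^4 and solve for T.
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4   # /4 spreads sunlight over the sphere
temperature = (absorbed / SIGMA) ** 0.25
print(f"{temperature:.0f} K")                  # ~255 K: Earth without the greenhouse effect
```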
Compute power. It takes Hagrid-esque giant machines, sometimes the size of a tennis court, to run a climate model’s complex equations (Carbon Brief, 2018). The models themselves are built up over many years and can contain over a million lines of code. Nerdy fact: the widely used Met Office Hadley Centre climate models run on three Cray XC40 supercomputers, which combined are capable of processing 14,000 trillion calculations a second – bloody hell, Harry!
Scenario based. Climate models differ from weather models in that they produce projections rather than predictions. That may sound like semantics, but it is important. The climate system itself is chaotic, and there is a huge unknown in how human activity may alter the future climate, so models produce a sample of possible pathways. To go some way towards addressing the latter, the Intergovernmental Panel on Climate Change (IPCC) has defined four key scenarios for future emissions (known as Representative Concentration Pathways, or RCPs), with varying assumptions on how we act, or don’t act, to reduce greenhouse gas emissions.
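To illustrate what the RCP labels mean: each scenario’s number is roughly the extra radiative forcing, in watts per square metre, it implies by 2100. Here is a back-of-envelope sketch converting forcing to warming with a single illustrative climate sensitivity parameter – a real model derives this response from the physics rather than assuming one number:

```python
# The RCP label is the approximate extra radiative forcing (W/m^2) by 2100.
rcp_forcing = {"RCP2.6": 2.6, "RCP4.5": 4.5, "RCP6.0": 6.0, "RCP8.5": 8.5}

# Illustrative climate sensitivity parameter (K of warming per W/m^2 of
# forcing); chosen for this sketch, not taken from any particular model.
LAMBDA = 0.5

for scenario, forcing in rcp_forcing.items():
    print(f"{scenario}: roughly {LAMBDA * forcing:.1f} K of warming")
```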

What are they used for?
- Better know the past. Hindcasting is a common use of climate models: feeding in data from ‘paleoclimates’ (previous climate eras) to understand which forcings played a role in bringing them about, and to what extent – useful for the future, of course. Simulating the past also gives scientists confidence in their model if it outputs results similar to the actual historical data. I suppose you could call these experiments ‘the Philosopher’s Stone’ runs…!
- Understand impacts of the present. I’ve touched on attribution science briefly in another post. Essentially, scientists simulate a recently occurred extreme weather event under different scenarios: control runs that exclude the human greenhouse gas contribution – in other words, taking greenhouse gas concentrations, oceanic factors and more from pre-industrial times – alongside runs where human activity is included. Comparing the two gives an attribution score for how closely each scenario relates to the actual event, i.e. how much human-caused climate change contributed to it (see the sketch after this list). For example, experiments on Hurricane Katrina found that simulations with the greater sea level rise caused by human-induced global warming produced more intense cyclones with higher wind speeds than the control scenarios (Irish et al., 2014).
- Assess risks and take action for the future. Climate models can run hundreds to thousands of years into the future. They can isolate different forcings, vary the rates at which those forcings increase or decrease, and observe the impacts across different geographical ranges. They have also become much better at capturing feedbacks in the climate system: changes that either speed up or slow down climate change, and are then further amplified by the change they cause. One example is ice melt: as ice melts, it reduces the cooling effect ice has in reflecting the sun’s warmth back into space (the ice albedo). That raises temperatures, which leads to even more ice melt – a self-reinforcing spiral (a toy version follows below). The range of outputs within a given model’s results, as well as across different models, further increases the confidence in risk factors. And that’s important, because this feeds into how political leaders put in place actions to either mitigate or adapt to climate change. It’s how we’ve come to know that 1.5°C of warming is tolerable, and 2°C is a different story entirely.
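Here is the attribution sketch promised above: a toy version of the ‘fraction of attributable risk’ (FAR) metric often used in event attribution studies, with made-up ensemble counts standing in for real model output:

```python
# Event attribution sketch (invented numbers): compare how often an
# extreme event of a given intensity occurs in two model ensembles.
natural_only_runs = 1000      # control runs: pre-industrial forcings only
all_forcings_runs = 1000      # runs including human greenhouse gas emissions

events_natural = 20           # runs producing an event at least this extreme
events_all = 80

p_natural = events_natural / natural_only_runs   # probability without us
p_all = events_all / all_forcings_runs           # probability with us

far = 1 - p_natural / p_all   # fraction of attributable risk
print(f"FAR = {far:.2f}")     # 0.75: three quarters of the risk attributable
```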
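And the toy feedback loop: a cartoon of ice-albedo amplification, with invented numbers chosen purely to show how each round of melt feeds the next:

```python
# Toy ice-albedo feedback: warming melts ice, which lowers the albedo,
# which absorbs more sunlight, which warms things further. All numbers
# are invented to illustrate the mechanism, not taken from any model.
temperature_anomaly = 1.0   # initial warming, degrees C
ice_fraction = 0.10         # fraction of surface covered by reflective ice

for step in range(5):
    melt = 0.01 * temperature_anomaly            # warmer -> more ice melts
    ice_fraction = max(0.0, ice_fraction - melt)
    temperature_anomaly += 0.5 * melt            # less ice -> extra warming
    print(f"step {step}: ice={ice_fraction:.3f}, warming={temperature_anomaly:.3f} C")
```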

Excerpt image: global surface temperature change under the RCP 2.6 scenario (close to 1.5°C of warming only)
What’re their limitations? (they who shall now be named)
Yes, we haven’t mentioned Voldemort yet, but alas a bit of the ‘dark lord’ still exists in these miraculous models:
Parameters. As mentioned, a ludicrous number of equations are calculated as the models run, which takes time. Sometimes, instead of an equation representing a given process, an assumed fixed value is entered into the code – a practice called parameterization (and breathe). Parameters are used for different reasons: to reduce the compute power needed, as a ‘best guess’ for processes not yet well understood, or because the processes are too difficult to measure. By and large they are a weak point, especially given they tend to vary across models too. According to McGuffie and Henderson-Sellers (2014), the resolution needed to do away with parameterizations of turbulence and cloud cover ‘is still well beyond what will be possible with current computing capabilities or those that are likely to be available for some time to come’.
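As a cartoon of what parameterization looks like in code, here is a hypothetical example where the bulk effect of clouds – which form at scales far smaller than a grid cell – is folded into a single tuned constant rather than simulated explicitly:

```python
# Cartoon of parameterization: the bulk effect of a sub-grid process
# (here, clouds) is reduced to one tuned constant instead of being
# simulated explicitly. The value 0.25 is a hypothetical 'best guess'.
CLOUD_REFLECTION_PARAM = 0.25

def absorbed_sunlight(incoming_wm2: float) -> float:
    """Sunlight absorbed in a grid cell after the parameterized cloud effect."""
    return incoming_wm2 * (1 - CLOUD_REFLECTION_PARAM)

print(absorbed_sunlight(340.0))  # 255.0 W/m^2
```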
Data. Ah, this old chestnut – the accuracy of the initial data fed into the model. Given the timeframe and geographical span of the data used in modelling, there are, of course, some nasty horcruxes. The equipment used to measure variables has changed over time, and in some instances data is simply missing for certain locations or periods. Well-known historical measurement changes have been overcome, and ‘parallel measurements’ are now used to rationalise away the effect of a change in instrumentation, but these aren’t a magic wand that removes data inaccuracies entirely.
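The parallel measurements idea fits in a few lines: run the old and new instruments side by side for an overlap period, estimate the offset between them, and use it to splice the older record onto the new scale. A simplified sketch with invented readings:

```python
# Simplified homogenisation via parallel measurements (invented readings):
# the mean offset over the side-by-side overlap period is used to adjust
# the older record onto the new instrument's scale.
old_overlap = [14.2, 15.1, 13.8, 14.9]   # old instrument, overlap years
new_overlap = [14.6, 15.5, 14.1, 15.3]   # new instrument, same years

offset = sum(n - o for n, o in zip(new_overlap, old_overlap)) / len(old_overlap)

old_record = [13.0, 13.4, 13.9]          # pre-overlap years, old instrument only
adjusted = [round(t + offset, 2) for t in old_record]
print(f"offset {offset:.2f} C -> adjusted record {adjusted}")
```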
Epistemic uncertainty. Helmholtz defines this as ‘a lack of knowledge about the system or phenomenon of interest’ (Helmholtz, 2022). For a model, this means uncertain factors that affect the results but are not known: the structural integrity of the model (e.g. unknown biases in how it is coded), or fundamental gaps in the physics the model draws upon – perhaps because we are yet to make key scientific discoveries. There will always be a degree of epistemic uncertainty, no getting away from it, but what scientists can do is quantify it in order to decipher how ‘confident’ they can be in the model results.
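One common way to express that uncertainty is simply to measure the spread across an ensemble of models: the ensemble mean gives the headline projection, and the standard deviation says how confident to be in it. A toy version with made-up projections:

```python
# Toy uncertainty quantification: summarise the spread across an ensemble
# of (made-up) warming projections from different models.
import statistics

projections = [2.1, 2.6, 1.9, 3.0, 2.4, 2.8]  # warming by 2100, degrees C

mean = statistics.mean(projections)
spread = statistics.stdev(projections)
print(f"ensemble mean {mean:.1f} C, +/- {spread:.1f} C (1 standard deviation)")
```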

Wrapping up…
So, if Dumbledore and Voldemort walked into a bar and made something to represent climate change, you’d get a climate model. At the risk of going full geek here, they really are a magical work of genius, but that doesn’t mean they should be treated as absolute truth. Horcruxes aside, the confidence in their larger findings is pretty sound; it is just the magnitudes that vary. That is why assimilating findings across several models is so powerful: it reinforces the trend projections, and – a role I surmise will grow in importance, particularly when thinking about international funding – it strengthens the attribution science that quantifies the role of climate change in extreme events that have already happened.
As we get increasingly better at understanding these processes and representing them in models, our visions of future risks will get clearer too, whilst still remaining ‘possibilities’ given the complexity of the climate system. And after all, good investing and policymaking is always risk-based, never a certainty. Ironically, the biggest unknown forcing variable is something we have the collective power to define – what we do.
References:
- Irish, J.L., A. Sleath, M.A. Cialone, T.R. Knutson and R.E. Jensen, ‘Simulations of Hurricane Katrina (2005) under sea level and climate conditions for 1900’, Climatic Change 122(4), 2014, pp.635–649.
- Anon., ‘How do climate models work?’, Met Office, 2012, available here.
- Anderson, T.R., E. Hawkins and P.D. Jones, ‘CO2, the greenhouse effect and global warming: from the pioneering work of Arrhenius and Callendar to today’s Earth system models’, Endeavour 40(3), 2016, pp.178–187.
- Hargreaves, J.C. and J.D. Annan, ‘Can we trust climate models?’, Wiley Interdisciplinary Reviews: Climate Change 5(4), 2014, pp.435–440.
- Jones, P.D. and T.M.L. Wigley, ‘Estimation of global temperature trends: what’s important and what isn’t’, Climatic Change 100, 2010, pp.59–69.
- Anon., ‘Q&A: How do climate models work?’, Carbon Brief, 2018, available here.
- McGuffie, K. and A. Henderson-Sellers, The Climate Modelling Primer (West Sussex: John Wiley & Sons, 2014), Section 2.5.
- Friedlingstein, P., M. Meinshausen, V.K. Arora, C.D. Jones, A. Anav, S.K. Liddicoat and R. Knutti, ‘Uncertainties in CMIP5 climate projections due to carbon cycle feedbacks’, Journal of Climate 27(2), 2014, pp.511–526.
- Anon., ‘Timeline: the history of climate modelling’, Carbon Brief, 2018, available here.
- Buis, A., ‘Study confirms climate models are getting future warming projections right’, NASA, 2020, available here.
- ‘Climate Model: Temperature change (RCP 2.6) 2006–2100’, NOAA GFDL, available here.
- Pörtner, H.-O., D.C. Roberts, V. Masson-Delmotte, P. Zhai, M. Tignor, E. Poloczanska, K. Mintenbeck, A. Alegría, M. Nicolai, A. Okem, J. Petzold, B. Rama and N.M. Weyer (eds.), ‘IPCC Special Report on the Ocean and Cryosphere in a Changing Climate’, IPCC, 2019, available here.
- Anon., ‘Types of uncertainty’, Helmholtz uncertainty quantification, 2022, available here.
