CLIMATE MODELS

Climate models are based on physics and run on supercomputers. They are indispensable for understanding climate: they have been tested, and they work.

------------------

MODEL INPUTS INCLUDE NATURAL AND HUMAN EFFECTS

These models use both natural (e.g. solar) and anthropogenic (human-caused) inputs. Here is a graphic of the inputs to the climate models; note that both natural and anthropogenic effects are included.

Below is a graph of the time dependence of the various forcings relative to 1880, used as input to the NASA/GISS models:

[Figure: time dependence of the climate forcings relative to 1880 (input to NASA/GISS models)]

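To make the idea of a "forcing" input concrete, here is a minimal Python sketch (not the NASA/GISS code) of how individual forcing series might be combined into the net forcing a model takes as input. The simplified CO2 expression F = 5.35 ln(C/C0) W/m² is the standard approximation (Myhre et al., 1998); the CO2 path and solar cycle below are placeholder numbers for illustration only.

```python
import numpy as np

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2."""
    return 5.35 * np.log(c_ppm / c0_ppm)

years = np.arange(1880, 2021)
co2_ppm = 290.0 + 0.6 * (years - 1880)                    # placeholder CO2 path (ppm)
solar = 0.05 * np.sin(2 * np.pi * (years - 1880) / 11.0)  # toy ~11-year solar cycle (W/m^2)

# Net forcing = anthropogenic + natural, expressed relative to the 1880 value
net = co2_forcing(co2_ppm) + solar
net = net - net[0]
print(f"Net forcing in 2020 relative to 1880: {net[-1]:.2f} W/m^2")
```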

--------------------------

MODEL TESTS FROM TEMPERATURE DATA

The temperature is an output of the models, not the result of a "fit" to parameters. As shown in the graphics below, the climate models have been successfully backtested against the last 100 years of temperature data (black lines), both at the global level and at the continental level. The output of the models including all effects (anthropogenic/human and natural) is shown in red, and it follows the data, including the recent global warming of the last 30 years. Natural effects only, in blue, are NOT sufficient to produce agreement with the data, especially in the last 30 years. Model uncertainties are exhibited by the bands. For all the supporting details, see the report.

[Figures: observed temperatures (black) vs. model output with all forcings (red) and with natural forcings only (blue), global and continental]

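To illustrate the logic of this attribution test, here is a minimal Python sketch with made-up stand-in series (not real observations or model output): compare an "all forcings" run and a "natural only" run against the observed record and see which tracks the data better.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square difference between two temperature series."""
    return np.sqrt(np.mean((a - b) ** 2))

years = np.arange(1900, 2001)
# Stand-in series (degrees C anomaly). In practice, 'observed' comes from the
# instrumental record, and the model series come from ensembles of runs made
# with and without anthropogenic forcings.
observed     = 0.007 * (years - 1900) + 0.1 * np.sin(years / 5.0)
all_forcings = 0.007 * (years - 1900)    # model run: natural + anthropogenic
natural_only = 0.001 * (years - 1900)    # model run: natural forcings only

print("RMSE vs data, all forcings :", round(rmse(observed, all_forcings), 3))
print("RMSE vs data, natural only :", round(rmse(observed, natural_only), 3))
```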

--------------------------

MODEL FORECASTING

Below is the graphic for the model forecasts for the average global temperature over the next 100 years. Forecasts require scenarios. A scenario represents a set of assumptions about human behavior in the future. All the forecasts predict increasing global temperatures, at a rate that depends on the scenario: rapidly increasing temperatures under "business as usual" (A2, red), and less rapidly increasing temperatures if action is taken against global warming (B1 blue, A1B green). For each scenario, there are uncertainties in model behavior, shown in the vertical "error bars" (gray) on the right. The baseline (orange) is a hypothetical reference case only, since it assumes no further increase in greenhouse gases.

Note that the graph is in degrees Celsius (a change of 1 °C = 1.8 °F).

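To see why forecasts require scenarios, consider a toy zero-dimensional energy-balance model, C dT/dt = F(t) − λT. This is only an illustrative Python sketch, not a GCM; the heat capacity, feedback parameter, and forcing paths are round, assumed numbers. The point is that the same physics produces different temperature paths under different assumed future forcings.

```python
import numpy as np

# Zero-dimensional energy balance model:  C * dT/dt = F(t) - lam * T
C   = 8.0e8     # effective heat capacity (J per m^2 per K) -- assumed round number
lam = 1.2      # climate feedback parameter (W per m^2 per K) -- assumed round number
SECONDS_PER_YEAR = 3.15e7

def run_scenario(forcing):
    """Integrate the model with a simple Euler step of one year."""
    T = 0.0
    for F in forcing:
        T += (F - lam * T) * SECONDS_PER_YEAR / C
    return T

years = np.arange(2000, 2101)
business_as_usual = 0.05 * (years - 2000)               # forcing keeps rising
mitigation        = 0.04 * (years - 2000).clip(max=50)  # forcing levels off after 2050

print(f"Warming by 2100, business as usual: {run_scenario(business_as_usual):.1f} C")
print(f"Warming by 2100, with mitigation:   {run_scenario(mitigation):.1f} C")
```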

-------------------------- 

MODEL CONSTRUCTION

From this reference:

In my study of the climate modelers at the UK Met Office Hadley Centre, I had identified a list of potential success factors that might explain why the climate modelers appear to be successful (i.e. to the extent that we are able to assess it, they appear to build good quality software with low defect rates, without following a standard software engineering process). My list was:

  • Highly tailored software development process – software development is tightly integrated into scientific work;
  • Single Site Development – virtually all coupled climate models are managed and coordinated at a single site once they become sufficiently complex [edited – see Bob’s comments below], usually a government lab, as universities don’t have the resources;
  • Software developers are domain experts – they do not delegate programming tasks to programmers, which means they avoid the misunderstandings of the requirements common in many software projects;
  • Shared ownership and commitment to quality, which means that the software developers are more likely to make contributions to the project that matter over the long term (in contrast to, say, offshored software development, where developers are only likely to do the tasks they are immediately paid for);
  • Openness – the software is freely shared with a broad community, which means that there are plenty of people examining it and identifying defects;
  • Benchmarking – there are many groups around the world building similar software, with regular, systematic comparisons on the same set of scenarios, through model inter-comparison projects (this trait could be unique – we couldn’t think of any other type of software for which this is done so widely);
  • Unconstrained Release Schedule – as there is no external customer, software releases are unhurried, and occur only when the software is considered stable and tested enough.

At the workshop we identified many more distinguishing traits, any of which might be important:

  • A stable architecture, defined by physical processes: atmosphere, ocean, sea ice, land scheme,…. All GCMs have the same conceptual architecture, and it is unchanged since modeling began, because it is derived from the natural boundaries in physical processes being simulated [edit: I mean the top level organisation of the code, not the choice of numerical methods, which do vary across models – see Bob’s comments below]. This is used as an organising principle both for the code modules, and also for the teams of scientists who contribute code. However, the modelers don’t necessarily derive some of the usual benefits of stable software architectures, such as information hiding and limiting the impacts of code changes, because the modules have very complex interfaces between them.
  • The modules and integrated system each have independent lives, owned by different communities. For example, a particular ocean model might be used uncoupled by a large community, and also be integrated into several different coupled climate models at different labs. The communities who care about the ocean model on its own will have different needs and priorities than each of the communities who care about the coupled models. Hence, the inter-dependence has to be continually re-negotiated. Some other forms of software have this feature too: Audris mentioned voice response systems in telecoms, which can be used stand-alone, and also in integrated call centre software; Lionel mentioned some types of embedded control systems onboard ships, where the modules are used independently on some ships, and as part of a larger integrated command and control system on others.
  • The software has huge societal importance, but the impact of software errors is very limited. First, a contrast: for automotive software, a software error can immediately lead to death, or huge expense, legal liability, etc., as cars are recalled. What would be the impact of software errors in climate models? An error may affect some of the experiments performed on the model, with perhaps the most serious consequence being the need to withdraw published papers (although I know of no cases where this has happened because of software errors rather than methodological errors). Because there are many other modeling groups, and scientific results are filtered through processes of replication, and systematic assessment of the overall scientific evidence, the impact of software errors on, say, climate policy is effectively nil. I guess it is possible that systematic errors are being made by many different climate modeling groups in the same way, but these wouldn’t be coding errors – they would be errors in the understanding of the physical processes and how best to represent them in a model.
  • The programming language of choice is Fortran, and this is unlikely to change, for very good reasons. The reasons are simple: there is a huge body of legacy Fortran code, everyone in the community knows and understands Fortran (and for many of them, only Fortran), and Fortran is ideal for much of the work of coding up the mathematical formulae that represent the physics. Oh, and performance matters enough that the overhead of object oriented languages makes them unattractive. Several climate scientists have pointed out to me that it probably doesn’t matter what language they use, the bulk of the code would look pretty much the same – long chunks of sequential code implementing a series of equations. Which means there’s really no push to discard Fortran.
  • Existence and use of shared infrastructure and frameworks. An example used by pretty much every climate model is MPI. However, unlike Fortran, which is generally liked (if not loved), everyone universally hates MPI. If there was something better they would use it. [OpenMP doesn’t seem to have any bigger fanclub]. There are also frameworks for structuring climate models and coupling the different physics components (more on these in a subsequent post). Use of frameworks is an internal constraint that will distinguish some species of software engineering, although I’m really not clear how it will relate to choices of software development process. More research needed.
  • The software developers are very smart people. Typically with PhDs in physics or related geosciences. When we discussed this in the group, we all agreed this is a very significant factor, and that you don’t need much (formal) process with very smart people. But we couldn’t think of any existing empirical evidence to support such a claim. So we speculated that we needed a multi-case case study, with some cases representing software built by very smart people (e.g. climate models, the Linux kernel, Apache, etc), and other cases representing software built by …. stupid people. But we felt we might have some difficulty recruiting subjects for such a study (unless we concealed our intent), and we would probably get into trouble once we tried to publish the results 🙂
  • The software is developed by users for their own use, and this software is mission-critical for them. I mentioned this above, but want to add something here. Most open source projects are built by people who want a tool for their own use, but that others might find useful too. The tools are built on the side (i.e. not part of the developers’ main job performance evaluations) but most such tools aren’t critical to the developers’ regular work. In contrast, climate models are absolutely central to the scientific work on which the climate scientists’ job performance depends. Hence, we described them as mission-critical, but only in a personal kind of way. If that makes sense.
  • The software is used to build a product line, rather than an individual product. All the main climate models have a number of different model configurations, representing different builds from the codebase (rather than say just different settings). In the extreme case, the UK Met Office produces several operational weather forecasting models and several research climate models from the same unified codebase, although this is unusual for a climate modeling group.
  • Testing focuses almost exclusively on integration testing. In climate modeling, there is very little unit testing, because it’s hard to specify an appropriate test for small units in isolation from the full simulation. Instead the focus is on very extensive integration tests, with daily builds, overnight regression testing, and a rigorous process of comparing the output from runs before and after each code change (a minimal sketch of this comparison step appears after this list). In contrast, most other types of software engineering focus instead on unit testing, with elaborate test harnesses to test pieces of the software in isolation from the rest of the system. In embedded software, the testing environment usually needs to simulate the operational environment; the most extreme case I’ve seen is the software for the international space station, where the only end-to-end software integration was the final assembly in low earth orbit.
  • Software development activities are completely entangled with a wide set of other activities: doing science. This makes it almost impossible to assess software productivity in the usual way, and impossible even to estimate the total development cost of the software. We tried this as a thought experiment at the Hadley Centre, and quickly gave up: there is no sensible way of drawing a boundary to distinguish some set of activities that could be regarded as contributing to the model development, from other activities that could not. The only reasonable path to assessing productivity that we can think of must focus on time-to-results, or time-to-publication, rather than on software development and delivery.
  • Optimization doesn’t help. This is interesting, because one might expect climate modelers to put a huge amount of effort into optimization, given that century-long climate simulations still take weeks/months on some of the world’s fastest supercomputers. In practice, optimization, where it is done, tends to be an afterthought. The reason is that the model is changed so frequently that hand optimization of any particular model version is not useful. Plus the code has to remain very understandable, so very clever designed-in optimizations tend to be counter-productive.
  • There are very few resources available for software infrastructure. Most of the funding is concentrated on the frontline science (and the costs of buying and operating supercomputers). It’s very hard to divert any of this funding to software engineering support, so development of the software infrastructure is sidelined and sporadic.
  • …and last but not least, A very politically charged atmosphere. A large number of people actively seek to undermine the science, and to discredit individual scientists, for political (ideological) or commercial (revenue protection) reasons. We discussed how much this directly impacts the climate modellers, and I have to admit I don’t really know. My sense is that all of the modelers I’ve interviewed are shielded to a large extent from the political battles (I never asked them about this). Those scientists who have been directly attacked (e.g. Mann, Jones, Santer) tend to be scientists more involved in creation and analysis of datasets, rather than GCM developers. However, I also think the situation is changing rapidly, especially in the last few months, and climate scientists of all types are starting to feel more exposed.
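
As a concrete (and purely hypothetical) illustration of the before/after comparison step mentioned in the testing item above, here is a Python sketch; a real harness would load the model's actual output files rather than generate stand-in fields, and the tolerances shown are arbitrary.

```python
import numpy as np

def compare_runs(before, after, rtol=0.0, atol=0.0):
    """Compare two output fields; rtol = atol = 0 demands bit-for-bit identity."""
    if np.allclose(before, after, rtol=rtol, atol=atol):
        return "PASS"
    return f"FAIL (max abs difference = {np.max(np.abs(before - after)):.3e})"

# Stand-in output fields; a real harness would load the model's output files
# from the runs before and after the code change.
rng = np.random.default_rng(0)
before = rng.standard_normal((180, 360))   # e.g. a 1-degree global field
after_identical = before.copy()
after_perturbed = before + 1e-7

print("Unchanged code   :", compare_runs(before, after_identical))
print("Perturbed output :", compare_runs(before, after_perturbed))
print("Within tolerance :", compare_runs(before, after_perturbed, atol=1e-6))
```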


-------------------------- 

WHERE CAN I GET MORE INFORMATION ON MODELS?

A thorough description of climate models, with references, can be found in the IPCC Science Report. For specific global models see, e.g., the Princeton GFDL/NOAA site and the GISS/NASA site.

CLICK HERE for the new 2013 IPCC Science report with updated climate model results.


--------------------------

HERE is an informative VIDEO on climate modeling by Peter Sinclair, with testimony by Jim Hansen.

HERE is a great TED talk video by Gavin Schmidt explaining climate models, "The Emergent Patterns of Climate Change".

The transcript of the talk is HERE.


--------------------------

RESPONSES TO FAQ from RealClimate

RealClimate, the approachable scientific climate website run by professional climatologists, has responses to common Frequently Asked Questions (FAQ) about climate models.

FAQ Part I

What is the difference between a physics-based model and a statistical model?

Are climate models just a fit to the trend in the global temperature data? (Answer = NO)

Why are there ‘wiggles’ in the output?

What is robust in a climate projection and how can I tell?

How have models changed over the years?

What is tuning?

How are models evaluated?

Are the models complete? That is, do they contain all the processes we know about?

Do models have global warming built in?  (Answer = NO)

How do I write a paper that proves that models are wrong?

Can GCMs predict the temperature and precipitation for my home?

Can I use a climate model myself? (Answer = YES)

FAQ Part II

What are parameterisations?

How are the parameterisations evaluated?

Are clouds included in models? How are they parameterised?

What is being done to address the considerable uncertainty associated with cloud and aerosol forcings?

Do models assume a constant relative humidity? (Answer = NO)

What are boundary conditions?

Does the climate change if the boundary conditions are stable?

Does the climate change if boundary conditions change?

What is a forcing then?

What are the differences between climate models and weather models?

How are solar variations represented in the models?

What do you mean when you say a model has “skill”?

How much can we learn from paleoclimate?


--------------------------

N.B.: Graphs from the 2007 report (model inputs, tests, and forecasts exhibited above) will be updated to the new 2013 IPCC report.

Last edit: 13Sep2017
