  • The Climate Change models are bad: too crude in their scale and reliant on feedback mechanisms whose uncertainties remain unresolved after decades of research. Even the scientists admit that many of them “run hot”.
  • The basic GHG emission volumes fed into them are screwed, because of utterly unrealistic assumptions and large uncertainties about the global human population, the degree and pace of industrialisation, the technologies used, and the estimates of energy production and consumption.
  • The global temperature measurements against which the models are compared are also seriously flawed, due to poor station siting and management, statistical processing that cannot correct for that, and outright scientific corruption.

The problems with so much of climate science have been exposed in big, public predictions that have failed (see the chart above) – and the failures are ongoing.

Fun With Climate Science – History
Fun With Climate Science – Bad Outputs

The Antarctic ice sheet is growing, not shrinking (“surprising”):

The Great Barrier Reef is growing, not shrinking:

  • “The Great Barrier Reef is doomed by 2030 without immediate action,” Time in 2013.
  • “Great Barrier Reef damage is ‘irreversible’ unless radical action taken,” The Guardian in 2014.

But the same problems are appearing in the more detailed predictions and forecasts of the scientific world.

The problems with the public claims can be put down to the political and ideological fights that have allowed activists to vastly exaggerate the scientific predictions. Unfortunately that’s been aided by some of the Climate Change scientists, like Michael Mann and Gavin Schmidt, getting very political themselves.

But the problems with the actual science come down to three factors:

  1. The Climate Change computer models run hot.
  2. Crude and inaccurate Greenhouse Gas (GHG) emissions scenarios.
  3. The measurements against which the models are tested are bad.

======================

The Climate Change computer models run hot.

It’s always been acknowledged that the models had problems, but the assurance was that the early ones in the 1980s were just too crude and that things would improve as they became more complex. By 2022 even the scientists were starting to realise that wasn’t the case, as four of them warned in a Nature magazine article, Climate simulations: recognize the ‘hot model’ problem. The four were desperate to make it clear that they were still very much part of the “consensus” – “There is no serious disagreement that continued emissions will lead to dangerous levels of warming” – and wrote very carefully to avoid giving aid and comfort to climate-change sceptics. Even so:

Users beware: a subset of the newest generation of models are ‘too hot’ and project climate warming in response to carbon dioxide emissions that might be larger than that supported by other evidence. Some suggest that doubling atmospheric CO2 concentrations from pre-industrial levels will result in warming above 5 °C, for example. This was not the case in previous generations of simpler models.

 As models become more realistic, they are expected to converge. 

Shorter story: they’re not converging, which means there’s something wrong with the basic science going into them. The scientists’ suggestion was to weight each model based on how well it reproduced past global temperatures, but that’s not a great solution either when they admit many of them don’t:

Numerous studies have found that these high-sensitivity models do a poor job of reproducing historical temperatures over time and in simulating the climates of the distant past.

I’ve built computer models – not for Climate Change, but for other things – and one basic rule is that a model must accurately reproduce the past, over the same time frame you wish to use for the future, before you turn it toward predicting that future. If it can’t reproduce the history of its limited context (e.g. financial trading patterns), then it doesn’t matter whether the conclusion is that something not yet understood has changed in the real world or that the model is missing something: either way the model is useless.
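To make that rule concrete, here’s a minimal sketch of the check I mean – nothing to do with any actual climate model, just invented anomaly series and an arbitrary tolerance – which also shows, in its simplest possible form, the kind of hindcast-skill weighting the Nature authors suggested (the 1/RMSE scheme below is one possible choice, not theirs):

```python
import numpy as np

def hindcast_rmse(model_run, observations):
    """Root-mean-square error of a model's hindcast against the observed
    record over the same historical window (illustrative check only)."""
    model_run = np.asarray(model_run, dtype=float)
    observations = np.asarray(observations, dtype=float)
    return float(np.sqrt(np.mean((model_run - observations) ** 2)))

# Hypothetical anomaly series: an "observed" record and three model hindcasts.
observed = np.array([0.10, 0.15, 0.22, 0.30, 0.41])
models = {
    "model_a": np.array([0.12, 0.16, 0.24, 0.33, 0.44]),  # tracks history closely
    "model_b": np.array([0.10, 0.25, 0.45, 0.70, 1.00]),  # runs hot over the same period
    "model_c": np.array([0.05, 0.14, 0.20, 0.27, 0.38]),  # slightly cool but close
}

rmse = {name: hindcast_rmse(run, observed) for name, run in models.items()}

# One possible weighting scheme (assumed, for illustration): weight shrinks
# as hindcast error grows, so a hot model contributes little to the ensemble.
weights = {name: 1.0 / (err + 1e-6) for name, err in rmse.items()}
total = sum(weights.values())
for name in models:
    print(f"{name}: hindcast RMSE {rmse[name]:.2f} °C, ensemble weight {weights[name] / total:.2f}")
```

With these made-up numbers the hot model ends up carrying only a few percent of the ensemble weight – which is the whole point of checking against history first.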

So what are the internal problems (yes, plural) with the models? The first models simply doubled CO2, and that generated a global temperature increase of about 1°C by the year 2100, which didn’t get anybody excited since we’d had that much warming since the end of the Little Ice Age. But then something magical happened:

So how do so many models predict 4.5 degrees or more? Two words: feedback effects. That is, changes in atmospheric water vapor (clouds, which both trap and reflect heat), wind patterns, ocean temperatures, shrinkage of ice caps at the poles, and other dynamic changes in ecosystems on a large scale.

Add CO2, get a bit of warmth, which then evaporates water off oceans, seas, lakes and rivers, and that causes the real warming, because H2O is the primary greenhouse gas (GHG) in the atmosphere. Not because it’s the most powerful; molecule for molecule it’s weaker at trapping infrared radiation than CO2, which in turn is weaker than methane. H2O is more powerful because there’s far more of it. The same is true in reverse for methane, which constitutes a vastly smaller slice of the atmosphere than CO2 and is not increasing – which alone should be a reason for New Zealand to dump that crap, although we’re the idiots who mainly contributed to the IPCC methane emission rules.
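In the models this shows up as a feedback factor that amplifies the direct, no-feedback warming via the standard relation ΔT = ΔT0 / (1 − f). A minimal sketch – the no-feedback figure of roughly 1.2°C for doubled CO2 is the commonly cited value, while the feedback sums below are purely illustrative – shows how the assumed feedback strength takes you from about 1°C to the 4–5°C headline numbers:

```python
def equilibrium_warming(no_feedback_warming_c, feedback_sum):
    """Amplify the direct (no-feedback) warming by the standard feedback
    relation dT = dT0 / (1 - f).  As f approaches 1 the result blows up,
    which is why uncertain feedbacks dominate the spread between models."""
    if feedback_sum >= 1.0:
        raise ValueError("feedback sum >= 1 implies a runaway response")
    return no_feedback_warming_c / (1.0 - feedback_sum)

dt0 = 1.2  # approximate no-feedback warming for doubled CO2 (°C), commonly cited

# Illustrative feedback sums only -- the whole argument is that these are uncertain.
for f in (0.0, 0.4, 0.6, 0.75):
    print(f"feedback sum {f:.2f} -> equilibrium warming ~{equilibrium_warming(dt0, f):.1f} °C")
```

Small changes in the assumed feedback sum move the answer from about 1°C to nearly 5°C, which is why the feedback question matters so much.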

Feedbacks are a great theory and many computer models include them, but it’s incredibly hard to model climate feedbacks because there are so many uncertainties involved – which is one of the dirty but open secrets of the IPCC:

The huge uncertainties in the models (especially for the most important factor—clouds) are always candidly acknowledged in the voluminous technical reports the U.N.’s Intergovernmental Panel on Climate Change (IPCC) issues every few years, but few people—and no one in the media—bother to read the technical sections carefully.

I’ve read good chunks of the technical stuff in every IPCC report except AR1 and AR6 (the latest), and in all that time the cloud factor has remained stubbornly fixed at neutral, meaning decades of research still hasn’t determined whether clouds are a positive (warming) or a negative (cooling) feedback on global warming!

There’s a second big internal problem:

Climate models are surprisingly crude, as they divide up the atmosphere into 100 km x 100 km grids, which are then stacked like pancakes from the ground to the upper atmosphere. Most climate models have one million atmospheric grid squares, and as many as 100 million smaller (10 sq. km) grid squares for the ocean. The models then attempt to simulate what happens within each grid square and sum the results. It can take up to two months for the fastest supercomputers to complete a model “run” based on the data assumptions input into the model.

The thing is that a lot of important climate phenomena – including that cloud feedback – happen at scales smaller than that 100 km grid spacing, which naturally limits the accuracy of the model predictions. They could reduce the grids down to 10 km squares, but even the fastest supercomputers would then take on the order of a century to run the model.
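Here’s the back-of-the-envelope arithmetic behind that claim, assuming (as is roughly true for explicit atmospheric solvers) that refining the horizontal grid also forces a proportionally shorter time step, so total work grows with about the cube of the refinement factor; the two-month baseline is the figure quoted above:

```python
def runtime_scaling(coarse_km, fine_km, baseline_months):
    """Rough cost scaling for refining a horizontal grid: the number of
    columns grows with the square of the refinement factor, and the time
    step must shrink roughly in proportion (a CFL-type constraint), giving
    ~cubic growth in total work.  Illustrative arithmetic only."""
    refinement = coarse_km / fine_km
    cost_multiplier = refinement ** 3
    return cost_multiplier, baseline_months * cost_multiplier

multiplier, months = runtime_scaling(coarse_km=100, fine_km=10, baseline_months=2)
print(f"~{multiplier:.0f}x the work: a 2-month run becomes ~{months / 12:.0f} years")
```

That comes out at roughly a thousand times the work – a two-month run stretching to well over a century.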

The final problem is an input problem, and it relates straight back to the very thing at the heart of the Climate Change movement – how much GHG humans are going to produce in the future. Even the Nature article admitted that this is the largest uncertainty.

======================

Crude and inaccurate Greenhouse Gas (GHG) emissions scenarios.

[Nuclear physicist] Herman Kahn explained that “scenarios are simply a more or less imaginative sequence of events that are put together so that each event forms a context for the other events and so that there is some continuity over time in the ‘narrative.’” 

Back in 1990, when the IPCC first cranked up, demographers reckoned that by the year 2050 the human population of the Earth would be about 15 billion. You’d have thought there would have been some doubts about that even at the time, since the first estimates in the early-to-mid 1970s were for 25 billion. The trend was down, and the steady decline in birth rates, even in high birth-rate places like Africa, should have alerted people that the trend would continue. The 2050 population is now estimated to come in at around 9 billion, with decline to follow.

Where this feeds into Climate Change is that these emissions scenarios, which the IPCC calls Representative Concentration Pathways or RCPs, depend on a whole series of nested assumptions and calculations:

  • Human population
  • Degree of Industrialisation
  • Technologies employed
  • Energy Consumption and Production
  • Volumes of GHG produced

In other words, if the human population estimate alone is off the mark, then GHG emission calculations will be off as well – and that’s before we get into the assumptions and estimates involved with trying to figure out how fast the undeveloped world is going to develop industry, the technologies used, and thus the levels of energy production and consumption.
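The nesting is easiest to see in the Kaya-style decomposition that sits underneath these scenarios, where emissions are the product of population, affluence, energy intensity and carbon intensity. A minimal sketch with invented numbers – the values and the uniform 20% overshoot are purely illustrative – shows how modest errors in each term compound multiplicatively:

```python
def annual_emissions(population_bn, gdp_per_capita, energy_per_gdp, co2_per_energy):
    """Kaya-style decomposition: emissions are the product of population,
    affluence, energy intensity and carbon intensity.  Units are arbitrary
    here; the point is the multiplicative structure, not the values."""
    return population_bn * gdp_per_capita * energy_per_gdp * co2_per_energy

# Invented 'true' end-of-century values, and a scenario that overshoots each term by 20%.
true_terms     = dict(population_bn=9.0, gdp_per_capita=30.0, energy_per_gdp=0.004, co2_per_energy=0.05)
scenario_terms = {key: value * 1.2 for key, value in true_terms.items()}

overestimate = annual_emissions(**scenario_terms) / annual_emissions(**true_terms)
print(f"overestimate factor: {overestimate:.2f}x")   # 1.2^4, i.e. roughly double
```

Four terms each overestimated by a modest 20% yield emissions more than double the “true” figure – before any of it reaches a climate model.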

Also understand that this is as much social science and economics as physical science, with plenty of space for false assumptions and massive miscalculations, each building on the earlier ones. Put it this way: the most accurate part of those estimates for the future is the simplest – human population, which has just two key factors, birth rates and death rates – and even those forecasts have dropped a lot over the last fifty years.

And all this is before those GHG emission volume estimates are then entered into the already flawed Climate Change models!

It should therefore come as no surprise that in 1990 the IPCC figured that by the year 2100 GHGs would be at 1,200 parts per million (ppm) CO2 equivalent, with a radiative forcing (a measure of the greenhouse effect) of 10 watts per square meter (W/m2).

By the year 2000 they’d calmed down a bit, eventually settling on RCP8.5 – meaning 8.5 W/m2. But buried within that was the assumption of 15 billion humans by 2050, a six-fold growth in global coal consumption per capita, and other linked absurdities. Even at the time there were economists and scientists in the IPCC who argued that this scenario had to be dropped because it was utterly unrealistic. The late climate expert Stephen Schneider argued for including likelihoods in the scenarios, which would have been sensible. He was ignored.
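For reference, the link between a CO2-equivalent concentration and a forcing number like 8.5 W/m2 is usually made with the widely cited simplified expression ΔF ≈ 5.35 ln(C/C0) W/m2. A minimal sketch – treating everything as CO2-equivalent is itself a simplification, and the 280 ppm baseline and 420 ppm present-day figure are just the conventional round numbers:

```python
import math

PREINDUSTRIAL_PPM = 280.0  # conventional pre-industrial CO2 baseline

def forcing_w_m2(co2_equiv_ppm):
    """Simplified logarithmic forcing expression dF = 5.35 * ln(C / C0),
    applied here to a CO2-equivalent concentration (a simplification)."""
    return 5.35 * math.log(co2_equiv_ppm / PREINDUSTRIAL_PPM)

def concentration_for_forcing(target_w_m2):
    """Invert the same expression: what CO2-equivalent concentration does a
    stated forcing level imply?"""
    return PREINDUSTRIAL_PPM * math.exp(target_w_m2 / 5.35)

print(f"~420 ppm today   -> ~{forcing_w_m2(420):.1f} W/m2 from CO2 alone")
print(f"8.5 W/m2 by 2100 -> ~{concentration_for_forcing(8.5):.0f} ppm CO2-equivalent")
```

That inversion lands near 1,370 ppm CO2-equivalent, which is roughly how RCP8.5 is usually described – about five times the pre-industrial level.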

It was not until the last couple of years that the IPCC decided to back off, and even now it has done so merely with new names – Shared Socioeconomic Pathways (SSPs), particularly SSP3-7.0 and SSP5-8.5 – that still feed into the old RCP forcing levels and, in turn, the climate models.

Why so stubborn in the face of reality? Why not apply likelihoods? The article How Climate Scenarios Lost Touch With Reality gives a good explanation of the history, the reasons for the scenario choices, and the problems that have arisen as a result:

Rather, it selected RCP8.5 in part to facilitate continuity with scenarios of past IPCC reports, both SRES and earlier baseline scenarios, so that results of climate modeling research across decades could be comparable. It also chose RCP8.5 to help climate modelers explore the differences between climate behavior under hypothesized extreme conditions of human-caused climate forcing and natural variability.

These decisions might be justifiable if climate models were simply scientific tools aimed at exploring a variety of conditions as a way to test hypotheses and researchers’ understanding of the climate system. But scientists, policymakers, the media, environmentalists, and the public now widely justify and interpret climate models as providing predictive information about plausible futures. By choosing RCP8.5 as one of only four forcing scenarios to be used by modelers, and compounding this choice by labeling it as the business-as-usual scenario, the IPCC promoted a scenario useful for scientific exploration but highly misleading when applied to projecting the future to inform decision-making. 

It’s a larger version of the same problem: our MSM, political leaders and activists scream at us based on their readings of the IPCC summary reports, not the technical reports. The authors of that article argue that none of the IPCC’s scenarios are plausible.

And everything else is built on them!

Even so, model problems are to be expected. What you do to improve them is measure their forecasts against history and the current world.

But what if those measurements are also off?

======================

The measurements against which the models are tested are bad.

Gavin Schmidt is another climate scientist in the mould of Michael Mann, which is to say that he’s aggressive, political and utterly unbending in defending the temperature data at the NASA Goddard Institute for Space Studies (GISS), of which he is the Director. The GISS global temperature index is compiled from hundreds of data sets from around the world, including satellite measurements, which he argues agree with the ground-based and ocean measurements.

Two scientists, Dr. John Christy and Dr. Roy Spencer, run the only global temperature dataset that is independent of government compilation and reporting methods – the satellite-derived global temperature record from the University of Alabama in Huntsville (UAH) – and they don’t agree. The disagreement is widening:


To be fair, the UAH satellite dataset doesn’t give a full and accurate picture of global surface temperature either, because of the limitations of the satellite system. At present the system measures the atmospheric temperature of the lower troposphere, at about 26,000 feet (8 kilometres) altitude, rather than at the surface.

But that restriction pales in comparison to the mishmash that is the ground-based temperature measurement producing the GISS index. In the USA those measurements largely come from a network of weather stations called the Cooperative Observer Program (COOP), established in 1891, which took over from an ad hoc network that had been operated by the US Army Signal Corps since 1873.

The United Kingdom is a particularly sad example given its long history in the field:

William Happer, emeritus professor of physics at Princeton, describes the Central England Temperature (CET) record as a “world treasure”, since it provides continuous recordings from 1659 – over 350 years. It shows a rise of just over 1°C from the depths of the Little Ice Age to the present day.

Impressive, but now there are problems:

We discovered that nearly eight out of 10 Met Office measuring stations across the United Kingdom were sited in near-junk class 4 and junk class 5….Class 1 sites number just 24 and make up only 6.3% of the total.

In case you think The Daily Sceptic is being harsh in calling these measuring stations “junk”, understand that Class 4 sites have official ‘uncertainties’ of up to 2°C, while Class 5 sites have ‘uncertainties’ of up to 5°C. They’re not even very good for measuring the daily weather, let alone being part of a global temperature measurement system. The following story is now a well-known laugh item in Britain:

[The Met Office] continues to promote a 60 second spike to 40.3°C at RAF Coningsby at 3.12pm on July 19 in 2022 as a U.K. temperature record, despite the known presence of three Typhoon jets attempting to land around the same time.

You cannot go around claiming, as the Met Office did in 2023, that the year was only 0.06°C cooler than the ‘record’ year of 2022, when the majority of your measurement devices carry uncertainties thirty to eighty times greater than that difference.
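For a sense of scale, here’s a minimal sketch of the statistics involved – the station count, the 2°C per-station figure and the 0.5°C shared bias are assumed, illustrative values, not the Met Office’s actual network numbers. Averaging shrinks the random component of station error, but any systematic siting bias shared across stations does not average away, and that is exactly the component the class ratings are flagging:

```python
import math

def standard_error_of_mean(per_station_sigma, n_stations):
    """Reduction you get from averaging if (and only if) station errors are
    independent random noise."""
    return per_station_sigma / math.sqrt(n_stations)

# Illustrative numbers only -- not the Met Office's actual network statistics.
n_stations      = 300    # assumed station count
random_sigma    = 2.0    # Class 4 style per-station uncertainty (°C)
systematic_bias = 0.5    # assumed shared siting bias that averaging cannot remove (°C)

print(f"random component after averaging : ~{standard_error_of_mean(random_sigma, n_stations):.2f} °C")
print(f"systematic component remaining   : ~{systematic_bias:.2f} °C")
print(f"claimed year-to-year distinction : 0.06 °C")
```

Unless that systematic component is known and corrected, distinguishing years by 0.06°C is an exercise in false precision.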

And this is before we get into the statistical processes that crunch the data together to produce the temperature reconstructions that become the GISS index. These global surface temperatures – rounded, adjusted and compromised readings – are what the Climate Change model forecasts are measured against.

Then there’s the outright corruption of science at the government level, as has been observed from the Pentagon to Health and Human Services, with politicised activists like Mann and Schmidt maintaining control over their areas in the same way Dr Fauci did over virus research, and the political cabals of modellers and measurers reinforcing each other.

That corruption even extends to changing historic data, like lessening the heat peak of the 1930s so as to make recent temperature increases look like the worst ever.

And more recently:

It shows how the surface data issued by NASA’s Goddard Institute of Space Studies (GISS), the two green lines, does not match the satellite data at all. While the satellite data shows no warming this entire century, the GISS data shows steady rising in the surface data. Other slides by Heller show that this rise comes solely from data adjustments and the extrapolation of imagined temperature data in places where no data exists, neither of which has been explained in any manner by the scientists at GISS.

What is most damning however is the change Heller documents between GISS’s November 2016 and December 2016 data sets. For reasons that are simply unjustified by any scientific measure, GISS somehow found it necessary to adjust its entire data set upward in one month about 0.03 of a degree. The only reason I can find for such a change in such a short period of time is a desire by the scientists at GISS to create the illusion that the climate is warming, and warming fast. They don’t have any real data to show this, so they make it up.

Those “adjustments” have never been justified in any way. Nor has Gavin Schmidt, the man who heads GISS, ever done anything to correct them. Moreover, when his office was accused of this tampering in 2016 he not only refused to fix or justify the changes, he responded by claiming “planetary warming does not care about the election.” Very Fauci!

Back to Britain, where the same shit is happening with historic data:

Science writer Paul Homewood last year discovered considerable tampering in 2022 with the recent CET record. He initially found that in version one, the summer of 1995 had been 0.1°C warmer than 2018. In version 2, the two years swapped places with 1995 cooled by 0.07°C and 2018 warmed by 0.13°C…Homewood then found that the years from 1970 to 2003 had been cooled markedly, followed by significant rises to 2022. 
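Just to spell out the arithmetic in that quote (the absolute values below are hypothetical, since only the adjustments are reported; the differences are what matter):

```python
# Hypothetical baseline values consistent with the differences quoted above.
summer_1995_v1, summer_2018_v1 = 17.40, 17.30      # version 1: 1995 warmer by 0.10 °C
summer_1995_v2 = summer_1995_v1 - 0.07             # 1995 cooled in version 2
summer_2018_v2 = summer_2018_v1 + 0.13             # 2018 warmed in version 2

print(f"v1 gap (1995 minus 2018): {summer_1995_v1 - summer_2018_v1:+.2f} °C")
print(f"v2 gap (1995 minus 2018): {summer_1995_v2 - summer_2018_v2:+.2f} °C")
# A combined shift of 0.20 °C is all it takes to flip which summer counts as the warmer one.
```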

The people running these systems need to be fired. As the Trump Administration has found in most of the bureaucracy, there are people, including scientists, who are putting their political and ideological activism ahead of their profession, let alone their “public service”.

India is already cracking down:

India has cancelled the license of the Centre for Policy Research (CPR) to obtain international funding, the interior ministry said, about a year after it suspended the top think tank’s permit for allegedly violating norms for foreign grants.

Well yeah: in 2022 it got about three-quarters of its grant funding from outfits like the Bill & Melinda Gates Foundation. An article in Nature bitched about the cancellation, which, given that journal’s stance on Climate Change, means that Modi is doing the right thing. As science writer Bob Zimmerman puts it:

CPR routinely advocates leftist policy positions. When a leftwing government is in power, its policy papers will glow with pride about the achievements of government. When a rightwing government is in power — such as the Modi administration — its policy papers will be suddenly “objective and honest” and hard-hitting, attacking the government for daring to challenge its assumptions about “climate change, social and economic policy, governance and infrastructure.”

Trump needs to do the same, and it appears he’s made a small start, cancelling the New York building lease for GISS, an institute that has existed since 1961. But frankly Schmidt needs to be fired, along with anybody he hired.

======================

In Summary

  • The Climate Change models are bad: too crude in their scale and reliant on feedback mechanisms whose uncertainties remain unresolved after decades of research. Even the scientists admit that many of them “run hot”.
  • The basic GHG emission volumes fed into them are screwed, because of utterly unrealistic assumptions and large uncertainties about the global human population, the degree and pace of industrialisation, the technologies used, and the estimates of energy production and consumption.
  • The global temperature measurements against which the models are compared are also seriously flawed, due to poor station siting and management, statistical processing that cannot correct for that, and outright scientific corruption.

========

Fun With Climate Science – History
Fun With Climate Science – Bad Outputs
The Myth that 97% of Scientists Agree About Dangerous Climate Change

Rigorous international surveys conducted by German scientists Dennis Bray and Hans von Storch—most recently published in Environmental Science & Policy in 2010—have found that most climate scientists disagree with the consensus on key issues such as the reliability of climate data and computer models. They do not believe that climate processes such as cloud formation and precipitation are sufficiently understood to predict future climate change.