In the Wake of the News

Not-so-subtle Influences – Search Engine Bias

Google and other search engines are our modern-day paths to knowledge on an extremely broad range of subjects. However, at least on the topic of climate science and climate scientists, they are a not-so-subtly guided path toward information consistent with the current climate consensus.

Searches for information on Dr. Roy Spencer, Dr. Judith Curry, Dr. Richard Lindzen, Dr. John Christy and Dr. David Legates all begin with links to their individual websites, followed by a link to a Wikipedia page. However, these are immediately followed by one or two links to websites such as skepticalscience.com, desmogblog.com, thinkprogress.org and facingsouth.org, typically referring to them as climate “deniers” or climate “misinformers”, or linking them to conservative or industry funding sources.

Searches for information on Dr. James Hansen, Dr. Gavin Schmidt, Dr. Michael Mann, Dr. Kevin Trenberth and other members of the consensed climate science community follow a similar pattern initially, but contain no links to opposition websites. Rather, they typically contain links to recent studies they have completed or recent public comments they have made.

The companies which operate search engines use several criteria to determine the material to which they provide links; and, the order in which those links appear. The consistent “first list” provision of links to opposition websites in response to searches for information regarding skeptical climate scientists and the complete absence of such links in the case of searches for information on members of the consensed climate science community suggests several possible explanations:

  • the operator of the search engine prefers or supports one position over another;
  • the early-listed website owners somehow influence their positioning in the lists; or,
  • there are no opposition websites taking positions against the climate consensus and the members of the consensed climate science community.

Regardless, the effect is to include negative references to the scientists or their work prior to providing links to their work or to positions they take regarding climate issues. This makes finding links to their work more difficult, while not-so-subtly suggesting that their work might be of poor quality, or of little value, or inaccurate.

Similarly, searches for recent climate change research typically return very few links to skeptical research and skeptical scientists. Much of this is the result of the massive disparity between the funding levels for research supporting the global and national government climate consensus and the funding levels available for skeptical research. The first lists provided by such searches typically contain multiple links to NASA and EPA webpages and references to government-funded research.

Searches for skeptical climate change research typically contain multiple links to sites critical of the skeptical position; and, few if any links to actual skeptical research results.

Success in finding skeptical research results and commentary essentially requires that you know where to look for such information, since the most used search engines will not make the search easy or very productive. Even searches for skeptical climate change websites return first lists containing links to opposition websites, including skepticalscience.com and the Union of Concerned Scientists.

 

Tags: Silencing the Skeptics, Bias

Diminishing Impact of Increased CO2

“It doesn't matter how beautiful your theory is, it doesn't matter how smart you are.  If it doesn't agree with experiment, it's wrong.” --Richard P. Feynman

The impact of incremental increases in atmospheric CO2 concentrations diminishes as the CO2 concentration increases, as illustrated by this graph.

Heating Effect of CO2

The author notes that “the first 20 ppm accounts for over half of the heating effect to the pre-industrial level of 280 ppm”. Equally important is that the impact of increased atmospheric CO2, from whatever source, is approaching zero asymptotically.
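As a rough numerical illustration of this logarithmic behavior, the short Python sketch below uses the widely cited simplified forcing approximation ΔF ≈ 5.35 ln(C/C0) W/m² (an assumption on my part; it is not the source of the graph above) to show how each additional 20 ppm of CO2 contributes less forcing than the previous 20 ppm.

```python
# Illustrative sketch only: assumes the widely used simplified expression
# dF = 5.35 * ln(C / C0) W/m^2 for CO2 radiative forcing; it is not the
# source of the graph above, but it shows the diminishing incremental
# effect of each additional 20 ppm of CO2.
import math

def forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m^2) relative to c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in range(300, 441, 20):
    step = forcing(c) - forcing(c - 20)
    print(f"{c - 20} -> {c} ppm adds ~{step:.3f} W/m^2")
```

Each successive 20 ppm step in this toy calculation adds slightly less forcing than the one before it, which is the behavior the graph above illustrates.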

The climate models included in the Coupled Model Intercomparison Project, Phase 5 (CMIP5) largely do not illustrate an asymptotic approach to a limiting anomaly value, as would be expected, as illustrated in this graph. However, both the HadCRUT4 near-surface temperature anomaly and the UAH Lower Atmosphere temperature anomaly appear to demonstrate the beginning of an asymptotic approach to a limiting anomaly value, which would be substantially lower than might be indicated by most of the climate models. This apparent asymptotic approach coincides with the “hiatus” or “pause” which followed the 1998 super El Nino.

90 CMIP5 Climate Models vs. Observations

Numerous recent research papers have suggested far lower climate sensitivity to increased atmospheric CO2 concentrations than the climate sensitivity estimates used by the CMIP5 climate model scenarios shown above. Nic Lewis, co-author of the paper “Lewis N and Curry J A: The implications for climate sensitivity of AR5 forcing and heat uptake estimates, Climate Dynamics (2014)”, explains their study here. The results reported in these papers are consistent with the appearance of an approach to a limiting climate anomaly in both the near-surface and satellite anomalies, as shown above. While the duration of the flattening of the anomaly curves above is too short to be considered a change in climate, it is certainly an indication that there are effects occurring in the climate which are not predicted by the models.

The time frame shown in the graph above begins shortly after the beginning of the satellite era. A longer time frame is available in the graph below, produced by Dr. James E. Hansen of NASA GISS in 1988.

Annual Mean Global Temperature Change

The lowest line on the graph, Scenario “C”, assumes drastically lower emissions beginning between 1990 and 2000. The point “x1” indicates the value of the UAH satellite anomaly as of June 2017. The point “x2” indicates the value of the HadCRUT4 anomaly as of June 2017. The calculated anomalies are consistent with Scenario “C”, though actual emissions history is consistent with Scenario “A”; and, are also consistent with lower climate sensitivity.

The consensed climate science community has begun to acknowledge the widening gap between the climate scenarios produced by the CMIP5 climate models and the anomalies calculated by NOAA / NCEI, NASA GISS, HadCRUT, UAH and RSS. However, that has not yet ended the persistent claim that “the science is settled” and that “the time for debate is over”, nor has it ended EPA Administrator Pruitt’s call for a Red Team / Blue Team exercise on climate change.

“It ain’t over till it’s over.” --Yogi Berra, American philosopher

Tags: Climate Models, CO2 Emissions

Highlighted Article - The Science Police

From the summer 2017 issue of ISSUES IN SCIENCE AND TECHNOLOGY

The Science Police

By: Keith Kloor

" On highly charged issues, such as climate change and endangered species, peer review literature and public discourse are aggressively patrolled by self-appointed sheriffs in the scientific community."

 

Tags: Highlighted Article

Clean Power Plan Redux

EPA Administrator Scott Pruitt has signed an order rescinding the Clean Power Plan. This action will trigger an Advance Notice of Proposed Rulemaking regarding potential future emissions reductions and reduction strategies. It will also likely trigger a plethora of lawsuits by environmental groups; and, perhaps, by several states. However, these lawsuits are likely to be fought out in the courts, rather than resolved through the “sue and settle” approach for which the Obama Administration EPA became famous.

The Clean Power Plan was frequently characterized as “a war against coal”, though it was actually a war against fossil fuel use for electric power generation. The Clean Power Plan established CO2 emissions levels for power plants which could not be met by any commercially available technology for burning coal to produce electricity. However, this approach left the door open for further reductions in the permitted emissions levels, which would eventually have precluded natural gas simple-cycle and combined-cycle gas turbines as well. Natural gas was viewed as a “bridge” fuel, useful to displace coal for power generation until it, in turn, could be replaced by renewables.

It is important to recognize that, under President Obama’s Climate Action Plan, the Clean Power Plan was the Obama Administration’s primary tool to move the electric sector of the US energy economy to zero net CO2 emissions by the end of the twenty-first century, if not before. The Obama Administration’s Corporate Average Fuel Economy (CAFE) standards for light duty vehicles and Regulations for Greenhouse Gas Emissions from Commercial Trucks and Buses focused on gasoline and diesel emissions from the transportation sector. The intent of these regulations was to move the transportation market towards electric vehicles.

The Climate Action Plan also discussed industrial, commercial and residential energy efficiency initiatives, but did not discuss setting emissions standards for those energy markets which would ultimately result in the elimination of fossil fuel use for industrial and commercial processes, or for commercial and residential space heating, water heating, laundry drying, food preparation, etc. However, the ultimate intent was to shift all direct energy use in these sectors to electricity.

Each of these federal actions was part of an overall plan to shift the US energy economy to total reliance on electricity; and, ultimately, to total reliance on electricity generated by renewable energy sources, primarily hydro, geothermal, wind and solar, plus other renewable sources which might become economically competitive over time, including ocean thermal energy conversion, wave energy and dry hot rock geothermal energy.

The Obama Administration used tax policy and direct subsidies to encourage utilities and their customers to adopt renewable technologies and hybrid and electric vehicles. Numerous states supported this effort with tax policies and direct subsidies, as well as indirect subsidies, including net metering of electricity for residential and commercial customers who implemented on-site renewable energy systems, primarily solar photovoltaic electric generating systems.

All of this activity stemmed from the 2009 EPA Endangerment Finding. Recent research has questioned whether the information used to justify the Endangerment Finding was accurate. It is likely that the current Administration will seek to overturn the 2009 Endangerment Finding, though this is perceived to be a very difficult challenge.

 

Tags: EPA Endangerment Finding, Clean Power Plan

Climate Change Debate

Professor Michael E. Mann, Distinguished Professor of Atmospheric Science at Penn State University, creator of the (in)famous hockey stick and self-appointed spokesperson of the consensed climate science community, apparently has no interest in participating in the Red Team / Blue Team exercise regarding climate change proposed by EPA Administrator Scott Pruitt. Mann recently opined, during a lecture on (of all things) academic and intellectual freedom at the University of Michigan, that climate was not debatable. It may be just as well that Mann chooses not to participate, since arguably he should not be allowed to participate: he has failed to share the data and calculations underlying his seminal “contribution” to climate science; and has, in fact, aggressively used the court system to avoid disclosing them.

The Red Team / Blue Team debate should focus on:

  • the degree to which the near-surface temperature data currently being collected represent climate, as opposed to the effects of localized heat islands;
  • the legitimacy and objectivity of the processes being used to “adjust” the data;
  • the frequency of recalibration of the sensors used to collect the data;
  • the influence of data “infilling” and “homogenization”;
  • the justification for periodic “reanalysis” of historic data;
  • recent research results for climate sensitivity;
  • recent research regarding cloud formation and cloud forcing;
  • recent research regarding solar influences on earth’s climate;
  • the causes of the recent temperature “hiatus” or “pause”;
  • the causes of the recent 12 year major landfalling hurricane respite;
  • the causes of the decrease in major tornado frequency and intensity;
  • changes in drought and flood frequency and magnitude;
  • the difference between the land-based and satellite sea level rise measurements;
  • the growing disparity between measured and modeled anomalies;
  • the Social Cost of Carbon;
  • recent research on the social benefits of carbon; and,
  • the influences of natural phenomena on climate (El Nino, La Nina, AMO, PDO).

All the above issues are clearly debatable; and, are subjects of active debate, even within the consensed climate science community.

Perhaps the most difficult aspect of the proposed debate will be the necessity to separate fact from belief in the presentations of various positions. We know that CO2, Methane and several other gases are “greenhouse gases”. We know that human activities result in the emissions of these gases. We know that the effects of these gases in the atmosphere are logarithmic, with declining effect as concentrations increase. We know that earth’s atmospheric, near-surface and sea surface temperatures have increased.

However, we do not know the sensitivity of earth’s climate to a doubling of atmospheric CO2 concentration. We do not know the magnitude of climate forcings and feedbacks. We do not have a model which allows us to know what climate will be like in the future. We do not have accurate temperature measurements of the near-surface, with the exception of the United States Climate Reference Network. There is even recent disagreement between the two primary sources of satellite temperature measurements. There is also continuing disagreement between the surface-based and satellite sea level measurements.

Dr. Mann was correct when he stated that “climate is not debatable”. Earth has a climate. He would still have been right if he had stated that climate change was not debatable. Climate has clearly changed throughout earth’s history. He might even have been correct if he had stated that some human contribution to climate change is not debatable. However, he was almost certainly not correct in stating that climate change “is human-caused”, since that would exclude any involvement of natural variation, which clearly continues.

 

Tags: Michael Mann, Red Team Blue Team Debate, Settled Science, Climate Change Debate

The Boy Who Cried Wolf - Hurricanes and Climate Change

Hurricane Harvey was one of the most extensively and most accurately tracked and reported hurricanes in history. Predictions of its track, timing, intensity at landfall and expected precipitation amounts were extremely accurate. Federal and Texas government preparation for its aftermath appears to have been exemplary. The federal and state governments recommended that Houston be evacuated. However, the mayor of Houston elected not to call for evacuation; and, many Houston residents decided not to self-evacuate.

Evacuation would not have reduced the physical devastation to property and infrastructure in Houston, but it would likely have reduced the incidence of injury and death resulting from the storm. Evacuation would also likely have reduced the need for the extensive formal and informal rescue operations which followed the storm. One tends to wonder why people and politicians decided not to evacuate. I suspect one reason is the tendency of the National Weather Service, the National Hurricane Center and the media to over-hype weather events which then prove far less severe than the hype.

I suspect that much of the climate science community shares some responsibility for the public’s tendency to ignore warnings of impending disaster. Much of the climate science community has been consistently and aggressively incautious in its creation of worst case scenarios regarding potential future climate change. Movies such as Al Gore’s An Inconvenient Truth and An Inconvenient Sequel and Roland Emmerich’s The Day After Tomorrow have created an aura of unreality regarding climate change.

The climate science community has generally been cautious about blaming Harvey’s severity on climate change, but some climate scientists have stated unequivocally that climate change made Harvey stronger and more damaging. Other climate scientists have stated that there is no scientific basis for such claims.

Hurricanes have been a fact of life in the southeastern US throughout our history. There is a Saffir-Simpson scale for hurricane intensity because the intensity of hurricanes varies significantly, though the underlying reasons for these variations are not well understood. The satellite era has allowed meteorologists to detect tropical depressions earlier and then monitor their intensity as they develop into tropical storms and hurricanes, or decay, over time. Similar technology has been applied to the identification and tracking of tornados as well.

Regardless of the assertions by much of the climate science community, there has been no increase in hurricane frequency or intensity over the past seventy years. There has also been no documented increase in tornado frequency and intensity, or flooding and drought frequency and intensity. Sea levels have been rising since the trough of the Little Ice Age; and, have been rising at a relatively consistent rate throughout the period of the instrumental record.

The technology we have available to track these storms and predict their futures is very impressive. However, it is essential that those who use this technology use it responsibly and report what they learn from the technology clearly and carefully, so that the public and public officials can respond appropriately to the information they provide.

 

Tags: Climate Science, Severe Weather

Well Imagine That – HadCRUT4 Global Temperature Record

The graph below shows the HadCRUT4 Northern Hemisphere, Southern Hemisphere and Global temperature history for the period of the instrumental temperature record from 1850 to the present.

HadCRUT4 Temperature Anomaly

The global average temperature anomaly has varied from approximately -0.3°C to approximately 0.7°C over the period, relative to the HadCRUT 1961-1990 reference period, or a total variation of approximately 1.0°C, on an annual basis. The variation has been greater in the Northern Hemisphere (~1.1°C) than in the Southern Hemisphere (~0.8°C), largely because the Northern Hemisphere has a larger land area, which changes temperature more rapidly than the sea area.

In a recent blog entry at the Real Climate web site, Dr. Gavin A. Schmidt, Director of the NASA Goddard Institute of Space Studies, made the following observation regarding absolute temperatures and temperature anomalies.

“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) 287.96±0.502K, and then using the second, that reduces to 288.0±0.5K. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.” (emphasis mine)

Absolute zero is -273.15°C, so 287.4 K is 14.25°C (~57.6°F). According to Dr. Schmidt, this temperature is known to +/-0.5°C (+/-0.9°F).

The HadCRUT anomaly data show a total variation in global average temperature of approximately 1.0°C over the period from 1850-2016, which is approximately equal to the width of the confidence range (+/-0.5°C) asserted by Dr. Schmidt for the absolute value of global average temperature. Therefore, the absolute value of the global average temperature is approximately 14.5°C +/- 0.5°C over the period; and, the change in that absolute value over the period is of questionable statistical significance. That is a far cry from unprecedented warming.
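For readers who want to check the arithmetic in Dr. Schmidt’s quotation, the sketch below reproduces it under the assumption that the “first rule” referred to is root-sum-square combination of independent uncertainties; the input numbers are taken from the quotation itself.

```python
# Sketch of the arithmetic in the quoted passage. Assumes the "first rule"
# mentioned is root-sum-square combination of independent uncertainties.
import math

clim_k, clim_unc = 287.4, 0.5    # 1981-2010 climatology, K
anom_c, anom_unc = 0.56, 0.05    # 2016 GISTEMP anomaly vs. that baseline, deg C

abs_k = clim_k + anom_c                          # 287.96 K
abs_unc = math.sqrt(clim_unc**2 + anom_unc**2)   # ~0.502 K
abs_c = abs_k - 273.15                           # kelvin to Celsius
abs_f = abs_c * 9 / 5 + 32                       # Celsius to Fahrenheit

print(f"2016 absolute estimate: {abs_k:.2f} +/- {abs_unc:.3f} K")
print(f"  = {abs_c:.2f} deg C (~{abs_f:.1f} deg F), i.e. 288.0 +/- 0.5 K after rounding")
```

The combined uncertainty is dominated by the ±0.5 K climatology term, which is why the absolute values for 2014, 2015 and 2016 are indistinguishable, as the quotation notes.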

Dr. Schmidt expresses the anomaly value, in the observation quoted above, to two decimal place precision, as is common in climate science. However, the anomalies are not measured at this level of precision. Rather, the measurements are made to a single decimal place and then “adjusted”. The additional decimal place results from application of the Law of Large Numbers. However, in the case of global temperature anomaly values, the application of this statistical principle is questionable, since the numerical values used to calculate the anomalies have been “adjusted” and, thus, any errors in the numbers should not be considered random.
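The statistical principle at issue can be illustrated with a short sketch using synthetic data: purely random measurement errors shrink roughly as 1/√N when averaged, which is what justifies reporting an extra decimal place; systematic errors introduced by “adjustment” would not shrink this way, which is the objection raised above. All numbers below are hypothetical.

```python
# Synthetic illustration of the Law of Large Numbers argument: purely random
# errors average out roughly as 1/sqrt(N); systematic "adjustment" errors
# would not. All numbers here are hypothetical.
import random
import statistics

random.seed(0)
true_value = 14.5   # hypothetical global mean temperature, deg C
sigma = 0.1         # hypothetical per-measurement random error, deg C

for n in (10, 100, 1000, 10000):
    sample_means = [
        statistics.mean(true_value + random.gauss(0, sigma) for _ in range(n))
        for _ in range(200)
    ]
    print(f"N = {n:5d}: spread of the mean ~ {statistics.stdev(sample_means):.4f} deg C "
          f"(theory {sigma / n ** 0.5:.4f})")
```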

 

Tags: Global Temperature, Temperature Record, HadCRUT

Temperature and Anomalies

Recently, there has been renewed discussion of absolute temperatures and temperature anomalies in climate science. This discussion is made more complex by the fact that “there is no universally accepted definition for Earth’s average temperature”; and, consequently, there is also no universally accepted definition for the earth’s average temperature anomaly.

I have written here previously about temperature measurement issues; and, about their effects on temperature anomalies. Global temperature measurement is confounded by numerous issues, including: measurement instrument and enclosure changes; measuring station relocation; changes in the area surrounding the measuring station; areas with sparse or non-existent measuring stations; and, missing station data. In addition, an internet search reveals no links to information regarding the periodic recalibration of the measuring instruments used to measure global near-surface temperatures.

Global temperature history is further confounded by the fact that the actual temperature measurements are “adjusted” for a variety of reasons; and, by the fact that the “adjusted” temperature measurements are periodically subject to “reanalysis”, which changes the previously recorded “adjusted” temperatures. These “reanalysis” efforts are conducted by numerous agencies, resulting in varying reanalysis results. These “adjustment” and “reanalysis” efforts lead to questions regarding the validity of the global temperature record.

Therefore, any calculation of global average near-surface temperature is based on estimates of what the measured temperatures might have been, had they been measured timely at properly selected, calibrated, sited, installed, and maintained temperature measuring instruments. The calculations must also account for changes in the number and location of active measuring stations; and, changes in the characteristics of their surroundings. The agencies which calculate global average near-surface temperature claim precision of +/-0.5°C for their calculations, which is greater than half the magnitude of the calculated global average near-surface temperature change over the period of the instrumental temperature record.

Climate science typically focuses on temperature anomalies, rather than absolute temperatures. This approach is largely based on the assumption that, while the actual temperature measurements might be inaccurate, the differences between measurements taken at those stations over time are more accurate, since the measurement instruments and stations are assumed to be unchanged over the measurement period. However, the continuing need to “adjust” the temperature measurements and the periodic need to perform “reanalysis” of the “adjusted” temperature measurements might cause one to question that assumption.

The agencies which calculate the global near-surface temperature anomalies claim precision of +/-0.01°C. To put this claim in perspective, it is essential to understand precisely what the calculated anomaly represents. The anomaly is the calculated difference between the average of the “adjusted” and “reanalyzed” global average near-surface temperatures over a reference 30-year base period and the “adjusted” global average near-surface temperature in the current period. This is typically either a month-to-month or a year-to-year comparison.

The situation is further complicated by the fact that the agencies which calculate the global average near-surface temperature anomaly use different base periods as their reference; and, they select separately from the available temperature measurements and then perform their own “adjustments” to those measured temperatures. In one case (NASA GISS), the agency also “infills” missing temperature measurements with synthetic estimated temperatures.
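A minimal sketch of the anomaly calculation itself, using hypothetical temperatures, shows how the choice of base period shifts every anomaly by a constant offset without changing the shape of the series; the agencies’ actual station selection, “adjustment” and “infilling” procedures are, of course, far more involved.

```python
# Hypothetical illustration: an anomaly is the difference between a value and
# the mean over a chosen 30-year base period, so different base periods shift
# every anomaly by a constant offset. The temperatures below are toy data.
temps = {year: 14.0 + 0.01 * (year - 1950) for year in range(1950, 2018)}

def anomaly(year, base_start, base_end):
    baseline = sum(temps[y] for y in range(base_start, base_end + 1)) / (base_end - base_start + 1)
    return temps[year] - baseline

for base_start, base_end in ((1961, 1990), (1981, 2010)):
    print(f"2016 anomaly vs. {base_start}-{base_end} base: "
          f"{anomaly(2016, base_start, base_end):+.3f} deg C")
```

With these toy data, the two base periods produce anomalies that differ by a fixed 0.2°C, which is why anomaly values from different agencies cannot be compared directly without re-baselining.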

It is hardly surprising that there is renewed discussion of the accuracy of absolute temperatures and temperature anomalies in climate science. It is long past time to establish “tiger teams” to investigate all aspects of these processes.

 

Tags: Global Temperature, Temperature Record

Unfinding Endangerment - Rescinding the Endangerment Finding

In a discussion on a recent comment thread, one commenter stated that “EPA had the evidence” when it issued its 2009 Endangerment Finding regarding CO2 and other “greenhouse gases”. Now, eight years later, the Endangerment Finding is being questioned by a new federal Administration and a new EPA Administrator. The courts, including the US Supreme Court, were involved in the determination that US EPA had the authority to regulate CO2 and other “greenhouse gases”  (GHGs) under the Clean Air Act. It is a virtual certainty that the courts, including the US Supreme Court, would be involved in any effort to rescind the Endangerment Finding.

This raises two obvious questions regarding the “evidence”:

  1. What evidence did EPA present in support of its Finding?
  2. Is that evidence still valid?

The evidence presented in support of the Endangerment Finding is described in the EPA Technical Support Document (TSD) published on December 7, 2009 and the reference documents listed in the document and its appendices. The primary evidence, based on observations, is that:

  • GHGs trap heat in the atmosphere.
  • Atmospheric concentrations of GHGs have increased.
  • Average ambient temperatures have increased.
  • Average sea surface temperatures have increased.
  • Average sea levels have increased.

From that evidence, the document proceeds to lay out “Projections of Future Climate Change With Continued Increases in Elevated GHG Concentrations”. These projections are largely based on scenarios produced by general circulation models of the global environment, which were not then and are not now verified.

There is no question that the primary evidence, as listed above, is factual. However, there is reason to question whether the details of that evidence, particularly the temperature and sea level evidence, are factual. The near-surface temperature data are routinely “adjusted”, for a variety of reasons, before they are used to produce the various near-surface temperature anomaly products. The sea surface temperature data are also “adjusted”; and, temperature estimates are “infilled”, where actual data are not available.

There are far greater bases to question the “projections” made and suggested in the TSD. The potential future “endangerment” envisioned in the TSD is based on the historical rates of change of near-surface and sea surface temperatures and of sea levels; and, “projections” of future rates of change of these temperatures and sea levels, based on projections of future GHG emissions and their residence times in the global atmosphere.

The most widely recognized general circulation model, at the time of the Endangerment Finding, was the model developed by Dr. James E. Hansen of NASA GISS. The data input to this model is now 30 years old, the typical period identified by the World Meteorological Organization as the timeframe for “climate”. Therefore, it forms a reasonable basis for determining the accuracy of the “Predictions of Future Climate Change …” contained in the EPA TSD. A review of the scenarios produced by Hansen’s climate model can be found here.

Clearly, while GHG emissions have followed Hansen’s Scenario A, the temperature anomaly response has more nearly followed Hansen’s Scenario C, which envisioned a rapid cessation of global GHG emissions. Therefore, the “endangerment” envisioned in the 2009 Endangerment Finding is far less than was portrayed in the document. As a result, the regulations imposed and proposed by EPA might well be far more stringent than is justified by the actual impending “endangerment”, if “endangerment” actually impends.

 

Tags: EPA, EPA Endangerment Finding

Red Team / Blue Team – Public Climate Debate

EPA Administrator Scott Pruitt has proposed a formal Red Team / Blue Team exercise regarding climate change. However, the recent experiences with hurricanes Harvey and Irma have spontaneously initiated an informal Red Team / Blue Team exchange in the media and the blogosphere.

The informal Blue Team struck first, assisted by the media, with Dr. Michael Mann carefully opining that the hurricanes, while not caused by anthropogenic climate change, were at least made more severe as a result. Primary blame was assigned to warmer air and water temperatures, which would be expected to increase the quantity of moisture in the atmosphere, thus increasing the potential rainfall produced by hurricanes. Blame was also attributed to increased sea levels, which would be expected to increase the extent and impact of storm surge. Mann failed to distinguish between natural and anthropogenic climate change, though there is no indication that the relatively steady sea level rise over the past 150+ years is attributable to anthropogenic causation.

 

The media and numerous non-climate scientists were less careful, in one case (Eric Holthaus) declaring that “Harvey and Irma aren’t natural disasters. They’re climate change disasters.” Many were quick to criticize the Administration for withdrawing the US from the Paris Accords. The media, however, largely failed to mention that there had been a twelve-year period, prior to Harvey, during which no category 3 or greater hurricanes had made landfall in the US. They did, however, repeat frequent claims that climate change would make hurricanes more frequent, though these claims are based on unverified climate models.

The informal Red Team responded quickly, with publication of an e-book by Dr. Roy Spencer, a blog post by Dr. Neil Frank, and statements by Dr. Judith Curry and Joseph Bastardi of Weatherbell Analytics, among others. Their messages were basically that climate change does not cause hurricanes; and, that there is no clearly established linkage between anthropogenic climate change and hurricane frequency or severity.

The NOAA Geophysical Fluid Dynamics Laboratory (GFDL), while it refers to several potential changes to tropical cyclone frequency and intensity in the future, based on modeled scenarios, provides the following conclusion based on current research:

“It is premature to conclude that human activities–and particularly greenhouse gas emissions that cause global warming–have already had a detectable impact on Atlantic hurricane or global tropical cyclone activity. That said, human activities may have already caused changes that are not yet detectable due to the small magnitude of the changes or observational limitations, or are not yet confidently modeled (e.g., aerosol effects on regional climate).”

NOAA GFDL goes on to discuss potential future impacts, based on unverified climate models. They suggest that global tropical cyclone intensity might increase by 2-11% by the end of the century. Even if this were to occur, it suggests that any existing change in global tropical cyclone intensity is likely minuscule and probably undetectable. This stands in stark contrast to assertions such as “Harvey is what climate change looks like.”

As interesting as this informal Red Team / Blue Team exercise has been, it strongly illustrates the importance of a formal Red Team / Blue Team exercise. The truth lies somewhere; and, it would be nice to know where.

Tags: Red Team Blue Team Debate

Ground Rules for a Red Team / Blue Team Climate Debate

There is growing interest in a very public “Red Team / Blue Team” evaluation of the current state of climate science. The EPA Administrator has recently suggested that climate scientists participate in a televised debate regarding the state of the science. The “Blue Team”, the consensed climate science community, has dominated the climate change discussion and has largely refused to debate those who question or oppose the consensus. An open and rigorous debate of the issues regarding the science is long overdue. However, there is a need to establish a firm set of ground rules for the debate.

Dr. Judith Curry has recently presented ideas for framing the debate. She suggests that the debate must not be limited to anthropogenic climate change, but rather must be broadened to include all influences on climate, to the extent that they are known. This also implies a recognition of known unknowns and unknown unknowns, to paraphrase former US Secretary of Defense Donald Rumsfeld.

Perhaps the most crucial aspect of any set of ground rules for such a debate is the separation of fact from opinion, belief, and projection. In this debate, the facts include original data, data “adjustment” methods, data analysis methods, analytical models, and their supporting documentation. The ground rules should stipulate that nothing be accepted as fact that has not been freely available for analysis by other than the original analysts for at least one year prior to the debate.

The collection, “adjustment”, and analysis of data and the development and exercise of climate models by the “Blue Team” have been funded by the US federal government and other government agencies, including the IPCC. The members of the “Red Team” should not be expected to review and analyze the material developed by the “Blue Team” at their own expense. Rather, their efforts should be funded, as required, by the same agencies which funded the “Blue Team” efforts. Also, the “Red Team” must have sufficient time and resources to conduct a thorough analysis once all of the required information has been made available to them.

Refusal to provide unrestricted access to any body of work conducted by any researcher, or team of researchers, should be grounds to preclude any portion of that body of work from being introduced into the debate; and, should also preclude any of those researchers from participating in the debate. There is absolutely no excuse for refusal to provide unrestricted access to research funded by the government, at the request of the government, in pursuit of a government effort to establish the validity of the research results.

I question the potential value of a televised debate, in that even television news has degenerated into a collection of “soundbites” and “bumper sticker” slogans. TV panels made up of those with opposing views frequently descend into shouting matches, with the participants talking over each other, both to make their points and to deter their opponents from making theirs. The result is all too frequently “full of sound and fury, signifying nothing”.

A debate regarding the efficacy of billions of dollars of research and of public policy potentially affecting trillions of dollars of future investment in a thorough revision of the global economic system should not be permitted to degenerate into a shouting match loaded with unsupported opinion and innuendo. The taxpayers who have funded the research and would ultimately fund the investment deserve far better.

 

Tags: Climate Change Debate, EPA, Red Team Blue Team Debate, Taxpayer Funded Data and Studies

Hansen Revisited – Were the Climate Models Right?

The World Meteorological Organization typically defines climate as average weather over a period of 30 years. In climate science, then, it is useful to compare observed weather over a 30 year period with the scenarios of future climate produced by climate models, to assess the accuracy and predictive skills of the models.

Perhaps the most widely recognized climate model scenarios covering the period of 30 years ago until the present are the model scenarios produced by Dr. James E. Hansen of NASA GISS in the mid-1980s. These model scenarios were the subject of the now infamous Wirth / Hansen “warm hearing room” presentation to the US Congress in 1988. The graph presented by Dr. Hansen at this hearing is reproduced below.

Scenario A: Continued annual emissions growth of ~1.5% per year

Scenario B: Continued emissions at current (mid-1980s) rates

Scenario C: Drastically reduced emissions rates from 1990 – 2000

The ‘x’ labeled ‘1’, located at approximately June 2017 at 0.46°C, is the current 0.21°C UAH satellite global tropospheric temperature anomaly added to the anomaly of approximately 0.25°C shown in the graph above for 1980.

The ‘x’ labeled ‘2’, located at approximately June 2017 at 0.65°C, is the current HadCRUT4 near-surface temperature anomaly.

Global annual CO2 emissions have continued to increase at approximately the 1.5% per year rate assumed by Dr. Hansen for his Scenario A, rather than leveling off at 1980 rates, as assumed for Scenario B, or declining drastically, as assumed for Scenario C. Global annual temperature anomalies, however, continue to be below the continued 1980s emissions level Scenario B (HadCRUT) or below the drastic reduction Scenario C (UAH).

The HadCRUT anomaly is currently approximately 0.40°C below the Scenario B level and approximately 0.6°C below the Scenario A level. The UAH anomaly is approximately 0.05°C below the Scenario C level, 0.6°C below the Scenario B level and 0.9°C below the Scenario A level. Clearly, the models and model inputs used by Dr. Hansen produced future climate scenarios significantly warmer than the actual climate for the period 1987 – 2017. However, we only have the luxury of that knowledge 30 years (one climate period) after the scenarios were produced.
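The re-baselining and gap arithmetic described above can be laid out in a short sketch; the scenario values for mid-2017 are approximate readings from the reproduced 1988 graph, not exact model outputs, so the computed gaps only roughly match the figures quoted in the text.

```python
# Sketch of the comparisons described in the text. The mid-2017 scenario
# values are approximate readings from Hansen's 1988 graph, not exact model
# outputs, so the gaps only roughly match the figures quoted above.
scenario_2017 = {"A": 1.3, "B": 1.05, "C": 0.5}   # approx. anomaly, deg C

uah_2017 = 0.21 + 0.25    # UAH anomaly re-baselined by adding the ~0.25 deg C 1980 offset
hadcrut_2017 = 0.65       # HadCRUT4 anomaly, June 2017

for name, level in scenario_2017.items():
    print(f"Scenario {name}: {level - uah_2017:+.2f} deg C above UAH, "
          f"{level - hadcrut_2017:+.2f} deg C above HadCRUT4")
```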

It is likely that climate models have improved since the models used by Dr. Hansen in the mid-1980s. However, we will not be able to verify any improvement until scenarios produced by those models reach 30 years of age and can be compared against that same 30 years of observations. The model mean of the CMIP5 ensemble is still substantially warmer than any of the near-surface or satellite temperature anomaly products.

There is still no verified climate model; and, there is no climate model which has demonstrated predictive skill. Therefore, there is still no climate model which forms a reliable basis for major global or national climate change or economic policy. Clearly, that has not prevented, or even discouraged, the UN and most of its members from pursuing a modestly aggressive CO2 emissions reduction effort, with the goal of achieving zero net global annual CO2 emissions, if not by 2050, then certainly by the end of the century.

 

Tags: Climate Models

Who Stole My Warming – Problems With the Models

 “It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong.” Richard P. Feynman

“Over 95% of climate models agree: the Observations must be wrong.” Roy Spencer

“Everything should be made as simple as possible, but not simpler.” Albert Einstein

A recent paper in Nature Geoscience by a gaggle of co-authors, including several well-known members of the consensed climate science community, analyzed “causes of differences in model and satellite tropospheric warming rates”.

Abstract

In the early twenty-first century, satellite-derived tropospheric warming trends were generally smaller than trends estimated from a large multi-model ensemble. Because observations and coupled model simulations do not have the same phasing of natural internal variability, such decadal differences in simulated and observed warming rates invariably occur. Here we analyse global-mean tropospheric temperatures from satellites and climate model simulations to examine whether warming rate differences over the satellite era can be explained by internal climate variability alone. We find that in the last two decades of the twentieth century, differences between modelled and observed tropospheric temperature trends are broadly consistent with internal variability. Over most of the early twenty-first century, however, model tropospheric warming is substantially larger than observed; warming rate differences are generally outside the range of trends arising from internal variability. The probability that multi-decadal internal variability fully explains the asymmetry between the late twentieth and early twenty-first century results is low (between zero and about 9%). It is also unlikely that this asymmetry is due to the combined effects of internal variability and a model error in climate sensitivity. We conclude that model overestimation of tropospheric warming in the early twenty-first century is partly due to systematic deficiencies in some of the post-2000 external forcings used in the model simulations.

The full paper is available from Nature Geoscience, but is behind a paywall, so it is not freely accessible.

The graphic below, from Dr. Roy Spencer, illustrates the situation discussed in the Abstract reproduced above.

Climate Models vs. Observations

Note that both the UAH Lower Troposphere and HadCRUT surface temperature trends begin diverging from the modeled trends after 1998; the HadCRUT trend diverges more dramatically beginning in 2007. Dr. Spencer’s critique of an earlier Santer paper on the divergence is here.

It appears that the models were “fitted” to the “adjusted” surface anomalies through approximately 2000, though the authors do not state that this is the case. Beyond 2000, the “adjusted” surface temperature anomalies decline through 2012, then begin rising due to the effects of the 2015/2016 super El Nino.

The authors are careful to avoid use of the terms “hiatus” and “pause” in describing the measured temperature anomaly trends after 2000, even though the consensed climate science community has provided more than 60 potential explanations for the “hiatus”.

The abstract combined with the above graph make several very important points:

  • modeled warming is greater than measured warming in all but 2 models;
  • the difference between modeled warming and measured warming is very unlikely to be the result of natural variability in the climate alone, but is a clear indication that natural variability is at play;
  • the difference between modeled warming and measured warming is also unlikely to be the result of only natural variability plus a model error in climate sensitivity; and,
  • the difference between modeled warming and measured warming is most likely the result of a combination of natural variability, inaccurate climate sensitivity and inaccurate external forcings in the model simulations.

The clear conclusion is that the current climate models do not actually model the real climate. The authors acknowledge that this is likely the result of a combination of sensitivity and forcings errors in the model simulations. Numerous recent papers have suggested climate sensitivities near or below the lower end of the range of climate sensitivities identified by the IPCC. Also, Dr. Spencer has previously suggested that cloud forcing, assumed by the IPCC to be positive, is more likely negative. However, it is also likely that several aspects of climate, which are not well understood and therefore not included in the current models, are also at play in the differences between the measured and modeled anomalies.

A recent paper suggests that the “adjustments” to the near-surface temperature anomalies have increased their values by approximately 0.1°C. Were these “adjustments” reversed and the actual observed anomalies used in the analysis, the gap between the modeled anomalies and the observed anomalies would widen from approximately 0.3°C to approximately 0.4°C.

It is also clear that the current model ensemble demonstrates no significant predictive ability; and, thus, should not form the basis for establishment of national or global climate policy. Further, it is clear that there is no “hockey stick” present in the temperature anomalies, which is interesting since Dr. Michael Mann is a co-author of the paper.

 

“Climate Science is the science of data that aren’t and models that don’t.” Ed Reid

 

Tags: Climate Models

Cost / Benefit Analysis in the Regulatory Process

The US federal government has taken numerous actions to require cost / benefit analyses, or cost effectiveness analyses, regarding federal rulemaking activities. The intent of these actions is to assure that the rulemaking activities provide real benefits at acceptable costs. However, this intent is violated when the regulatory agencies analyze only the costs, or only the benefits, of proposed actions.

One example of this violation of intent is the federal effort to establish the “Social Cost of Carbon”, specifically the supposed costs of increased atmospheric carbon dioxide concentrations on society. This effort has totally ignored the social benefits of increased atmospheric carbon dioxide concentrations, despite the well documented effects of enhanced carbon dioxide concentrations on the rate and extent of growth of the field crops used to produce food for people and animals. This effort has also ignored the greening of the globe, largely resulting from increased atmospheric carbon dioxide concentrations, recently documented by NASA, as well as the improvement of many plants’ ability to use available moisture efficiently.

Recent congressional testimony by Dr. Patrick J. Michaels, Director of the Center for the Study of Science at the Cato Institute, suggests that the social benefits of increased atmospheric carbon dioxide concentrations might well exceed the social costs, now and for the foreseeable future. If Dr. Michaels is correct, the recent federal efforts to establish the “Social Cost of Carbon” have been misguided, arguably deceptive and, ultimately, worse than useless.

Another example is provided in a recent article by Professor Michael Giberson of Texas Tech and Megan Hansen, Director of Policy at Strata. The federal government has focused heavily on the benefits of wind and solar generation as part of its climate change efforts; and, has provided substantial subsidies and incentives to encourage wider implementation of these technologies. However, relatively little effort has been made to identify the costs of these efforts, both direct and indirect.

The article highlights the renewable industry reaction to a recent study of electric grid reliability requested by Secretary of Energy Rick Perry. The study will examine the costs to electric utilities and their customers resulting from early retirements of baseload generating facilities and from the investments required to adapt the electric utility grid to increased reliance on intermittent renewable sources of electricity. The ability of the electric utility grid to operate reliably as the share of intermittent renewable electricity increases is dependent upon the availability of economical and reliable grid-scale electricity storage technology, which is not currently commercially available.

The intent of cost / benefit analysis requirements can also be violated by assigning unreasonable and/or unsupportable costs to activities or emissions. Perhaps the classic example of this type of violation is the US EPA estimate of the “Societal Cost of a Life Unnecessarily Shortened” at ~$9 million, regardless of the age of the person whose life is shortened, to justify new or more stringent environmental regulations. Such a determination is unsupportable, if for no other reason, because there is no basis on which to judge the relative cost to society of the premature death of an infant and of an elderly person. The use of an estimated societal cost of this magnitude makes it possible to “justify” extremely costly solutions to relatively trivial or non-existent issues.

Cost / benefit analyses must be comprehensive and objective to be useful. Apparently, much of recent cost / benefit analysis effort does not pass this test.

 

Tags: Cost of Carbon, Solar Energy, Wind Energy, Regulation

“It’s the Law of the Land” – UN Agencies Recognizing the Palestinian Authority

The United States Congress passed legislation in 1990 (Public Law 101-246) and 1994 (Public Law 103-236) prohibiting funding for United Nations “specialized agencies” and “affiliated organizations”. This legislation was signed into law by Presidents George H. W. Bush and William J. Clinton respectively.

The UN was aware of these laws when it extended participation in UNESCO (a UN “specialized agency”) to the Palestinian Authority in 2011. This UN action led to termination of US funding to UNESCO which represented ~22% of the UNESCO budget.

The UN was also aware of these laws when it extended membership in the UNFCCC (a UN “affiliated organization”) to the Palestinian Authority in 2016. This UN action, however, did not lead to termination of US funding to UNFCCC, as required by law. Rather, the Obama Administration requested $13 million in funding for the UNFCCC in 2017; and, provided $1 billion in funding for the UNFCCC’s Green Climate Fund, without specific congressional authorization and appropriation.

That was then. This is now. The “climate” regarding climate change has changed.

Now that the US has announced its withdrawal from the Paris Agreement, which is a creature of the UNFCCC, there appears to be no compelling reason for the US Administration not to follow US law and defund the UNFCCC and the associated Green Climate Fund. Arguably, there is no compelling reason for continued US participation in the UNFCCC, since its sole focus is implementing the Paris Agreement and the associated Green Climate Fund.

The UN appears to need to be reminded periodically that it is not a global government with sovereignty over the sovereign nations of the world. The UN also appears to need to be reminded periodically that its actions have consequences when they conflict with the laws in place in its member nations.

It is long past time to instill a sense of humility into the UN bureaucracy. Defunding “specialized agencies” and “affiliated organizations” as required by US law is a necessary, though likely not a sufficient, first step in the process.

Tags: United Nations, Paris Agreement, Green Climate Fund