
In the Wake of the News

Climate Science “The time has come, …”

"The time has come," the Walrus said,
"To talk of many things:
Of shoes--and ships--and sealing-wax--
Of cabbages--and kings--
And why the sea is boiling hot--
And whether pigs have wings."

The Walrus and The Carpenter, Lewis Carroll

Climate science is important. Climate science is a mess. Climate science must be fixed. The new Environmental Protection Agency (EPA) Administrator and the soon-to-be-appointed new National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) Administrators have an obligation either to make the science worth the money being spent on it, or to reduce the money being spent on it to a level commensurate with its apparent value.

The Climategate scandals of 2009 and 2011 began the very public questioning of climate scientists and the conduct of climate science. Climategate exposed Frantic Researchers Adjusting Unsuitable Data: attempting to control peer review, attempting to control or discredit peer-reviewed journals, attempting to prevent certain scientists and their research from being published, and attempting to ruin the careers of certain scientists. The recent revelations about the conduct of NOAA researchers in the preparation and publication of ‘Possible Artifacts of Data Biases in the Recent Global Surface Warming Hiatus’ by Karl et al. (Science, 4 June 2015), and the refusal of the NOAA Administrator to provide materials to a US House committee, have rekindled the issue.

The new EPA, NOAA and NASA Administrators should immediately initiate a thorough, joint investigation of the acquisition, analysis and application of climate data by their agencies. This investigation should then expand to include those activities by other nations, the Intergovernmental Panel on Climate Change (IPCC) and the United Nations Framework Convention on Climate Change (UNFCCC). The initial focus of the investigation should be on data quality and data integrity. This should be followed by a focus on climate modeling and the application of climate models, particularly the need to verify climate models before using them as vehicles for producing questionable future scenarios.       

All the agencies using US and global near-surface temperature data to track global warming “adjust” the data because they either KNOW, or have strong reason to BELIEVE, that the data are flawed. The new NOAA and NASA Administrators should immediately question why the agencies have chosen to continue adjusting flawed data, rather than taking steps to ensure that the data they collect for analysis are accurate. Instrument calibration schedules and instrument enclosure and site maintenance schedules should be reviewed and amended as necessary to ensure data quality. Data archiving and storage procedures should also be reviewed and amended as required to ensure data integrity and accessibility. The new Administrators should establish firm policies regarding data and information access, with the express intent of eliminating the need for FOIA requests to obtain access.

These investigations and reviews should not be conducted by agency personnel, but rather by “tiger teams” of outside experts in all of the technical fields of concern, with support from agency personnel. Agency personnel should be notified that refusal to cooperate honestly, fully and in a timely manner with the tiger teams would be considered a notice of resignation from the agency and processed accordingly.

There is general recognition that the data available from satellites are more comprehensive than the data available from surface and near-surface sources. However, there is continuing disagreement regarding the relationship between satellite data and surface and near-surface data. The Administrators should redirect funds currently spent on studies applying unverified models to produce future scenarios of questionable value. These funds should be used instead to fund studies to resolve the differences between the various data sources, since such studies would likely result in an enhanced understanding of the atmosphere and the processes occurring within it.

Tags: Bad Science, Climate Science, 2016 election, Temperature Record, Climate Models

Oxymoron Alert (Carbon Taxes)

The Climate Leadership Council has just released a new study, “The Conservative Case for Carbon Dividends”, and presented it to the Trump Administration for its consideration. The study identifies Four Pillars of a Carbon Dividends Plan:

  • a gradually increasing carbon tax;
  • carbon dividends paid to all Americans;
  • border carbon adjustments; and,
  • significant regulatory rollback.
Interestingly, the study focuses on the carbon dividend in its title, rather than on the carbon tax. I would argue that a carbon tax is hardly “conservative”. I would also argue that it is not a “market mechanism”, though it would rely on market mechanisms to adapt to the market distortions caused by the tax. The magnitude of the market distortion which would be caused by a carbon tax of ~$40 per ton of CO2 emissions is estimated to be ~$500 per US social security cardholder. It would manifest in the economy as an increase in the cost of every good and service: a tax-driven, cost-push inflation of prices.

The study begins with the premise that global CO2 emissions are undesirable and must be reduced; that is, it assumes that continued CO2 emissions would lead to catastrophic anthropogenic global warming (CAGW). The study also begins with the premise, shared by many economists, that a carbon tax is the most efficient approach to reducing carbon emissions. The study suggests that the recommended carbon tax would be a Pigouvian tax; that is, a tax intended to discourage activities which lead to negative externalities. However, this suggestion ignores the positive externalities associated with CO2 emissions, though such externalities exist and might well dominate at present. It is more likely that the recommended tax is simply a “sin” tax, particularly since it is intended to increase over time until the identified “sin” (CO2 emissions) is eliminated.

A carbon tax at a given level does not reliably produce a CO2 emissions reduction of a specific magnitude or percentage. This is particularly true in the short term, as the market response to the tax is affected both by the availability of economical alternative technologies and by the remaining economically useful life of existing equipment in service. The carbon tax does have the flexibility to be increased progressively, as required, to drive the desired emissions reduction. Clearly, the ultimate goal of the tax is to drive CO2 emissions in the US to zero, which would essentially require a transition to an all-electric energy economy, with all electricity provided by nuclear and/or renewables. It is unclear how high the carbon tax would have to be to achieve that objective. It is clear the tax would penalize the current transition from coal to natural gas as the predominant fuel for electric power generation, a transition which has been largely responsible for the reductions in US CO2 emissions over the past several years. It might also constrain that transition in the short term, by discouraging investment in the additional natural gas pipeline and storage capacity required to supply the growing natural gas generation base.

The carbon dividend is a thinly disguised form of income redistribution, since it would provide a uniform quarterly dividend to all US social security cardholders, regardless of the amount of carbon tax, if any, directly or indirectly paid by that social security cardholder. The carbon dividend would likely be popular, particularly among the ~70% of social security cardholders expected to receive a dividend greater than their tax expense. The carbon tax would be imposed upstream of the consumer and would probably appear to be a price increase imposed by suppliers and service providers, rather than as a flow-through tax. Therefore, many dividend recipients would blame suppliers and service providers for the price increases, though they would thank the federal government for the dividend.

The border carbon adjustments process would be extremely complex and costly. There would likely be massive disagreements over the “carbon content” of manufactured goods and over the impacts of any carbon emissions reduction efforts on the part of the government of the country in which the imported goods were manufactured. The number of products and the number of countries from which they are imported, combined with technology changes over time, would make this a long-term, ongoing process.

The “significant regulatory rollback” would fly in the face of the regulatory agencies’ imperative to survive and grow. Certainly the tax would be preferable to the current and growing command-and-control “rat's nest” which is environmental regulation. However, in this instance, neither the regulatory morass nor the tax appears to be necessary, absent near-religious belief in the scenarios produced by unverified climate models.

In my early youth, I believed in Santa Claus, the Easter Bunny and the Tooth Fairy. In my advancing maturity, I find it impossible to believe in a revenue neutral tax. The history of “cap and trade” (cap and tax) legislation in the US strongly suggests that the Congress is incapable of producing such a tax.



Tags: Carbon Tax

Highlighted Article: At What Cost? Examining The Social Cost Of Carbon

Here is an excellent article we would like to highlight from Cato Institute's Patrick J. Michaels.

At What Cost? Examining The Social Cost Of Carbon

"My testimony concerns the selective science that underlies the existing federal determination of the Social Cost of Carbon and how a more inclusive and considered process would have resulted in a lower value for the social cost of carbon."

Tags: Cost of Carbon, Cato Institute

More Anomalous Anomalies

The three primary producers of global near-surface temperature anomalies, NASA GISS (Goddard Institute for Space Studies), NOAA NCEI (National Centers for Environmental Information) and HadCRUT, all begin the process with access to the same data. They then select a subset of the data, “adjust” the data to what they believe the data should have been, and calculate the current “adjusted” anomalies from the previous “adjusted” anomalies. NASA GISS alone “infills” temperature estimates where no data exist. Each producer then independently prepares its global temperature anomaly product.

The calculated anomaly in any given month relates directly to a global average temperature for that month. The difference between the calculated anomalies in any pair of months is thus the same as the difference between the calculated global average temperatures for those months. However, the differences reported by the three primary producers of global average temperature anomaly products from month to month, or year to year, are rarely the same; and, the changes are not always even in the same direction, warming or cooling.

The global average temperature anomaly differences reported by the three primary producers of global near-surface temperature anomalies for the months of November and December 2016 are an interesting case in point. NASA GISS reported a decrease of 0.12°C for the period. NOAA NCEI reported an increase of 0.04°C. HadCRUT reported an increase of 0.07°C. Each producer estimates a confidence range of +/-0.10°C for its reported anomalies. (NASA GISS: -0.22°C / -0.12°C / -0.02°C; NOAA NCEI: -0.06°C / +0.04°C / +0.14°C; HadCRUT: -0.03°C / +0.07°C / +0.17°C) Therefore, the confidence ranges overlap, suggesting that the differences among the anomaly estimates are not statistically significant. However, it is clear that the global average near-surface temperature did not both increase and decrease from November to December.
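The overlap among those confidence ranges can be checked mechanically. Below is a minimal Python sketch (an illustration, not any producer's methodology) using the central values and the +/-0.10°C range quoted above:

```python
# Reported November-to-December 2016 anomaly changes, in °C (values quoted above).
changes = {
    "NASA GISS": -0.12,
    "NOAA NCEI": +0.04,
    "HadCRUT":   +0.07,
}
CONF = 0.10  # +/- confidence range claimed by each producer, in °C

# Build (low, high) confidence intervals around each central value.
intervals = {name: (c - CONF, c + CONF) for name, c in changes.items()}

def overlaps(a, b):
    """True if intervals a and b share at least one point."""
    return a[0] <= b[1] and b[0] <= a[1]

names = list(intervals)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(f"{x} vs {y}: overlap = {overlaps(intervals[x], intervals[y])}")
```

All three pairs overlap, which is the sense in which the reported changes are not statistically distinguishable, even though the signs of the central values disagree.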

The second decimal place in each of these reported anomalies is not the result of temperature measurement accuracy, but rather of the numerical averaging of less accurate “adjusted” temperature estimates. The Law of Large Numbers permits a calculated average to be expressed to greater precision than the precision of the numbers being averaged, but only if the errors in the individual numbers are random. However, the nature of the factors which cause individual temperature measurements to be inaccurate, and thus to require “adjustment”, suggests that the resulting errors are not random. Certainly, the “adjustments” made to the data are not random. Therefore, it is highly unlikely that reporting global average near-surface temperatures to greater precision than the underlying “adjusted” temperature data is appropriate.
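The random-versus-systematic distinction can be illustrated with a toy simulation. Everything here is invented for illustration (the true temperature, reading count, noise level and the 0.3°C common bias are hypothetical); it shows only that averaging shrinks random errors but passes a shared bias straight through:

```python
import random

random.seed(42)
TRUE_TEMP = 15.0      # hypothetical true temperature, °C
N_READINGS = 10_000   # hypothetical number of measurements being averaged
NOISE_SD = 0.5        # hypothetical random error, standard deviation in °C
COMMON_BIAS = 0.3     # hypothetical systematic (non-random) error in °C

# Case 1: errors purely random -- the average converges on the true value.
random_only = [TRUE_TEMP + random.gauss(0, NOISE_SD) for _ in range(N_READINGS)]
random_error = abs(sum(random_only) / N_READINGS - TRUE_TEMP)

# Case 2: same random noise plus a shared bias -- averaging cannot remove it.
biased = [TRUE_TEMP + COMMON_BIAS + random.gauss(0, NOISE_SD) for _ in range(N_READINGS)]
biased_error = abs(sum(biased) / N_READINGS - TRUE_TEMP)

print(f"error of average, random noise only: {random_error:.3f} °C")
print(f"error of average, with common bias:  {biased_error:.3f} °C")
```

The first error shrinks toward zero as readings accumulate; the second settles near the 0.3°C bias no matter how many readings are averaged.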

Tags: Temperature Record, Global Temperature

Hottest Year Time Again

The near-surface global temperature anomaly data for 2016 have been collected, selected, infilled, “adjusted” and analyzed. The results are in; and, again, they are anomalous. NASA GISS (Goddard Institute for Space Studies) and NOAA NCEI (National Centers for Environmental Information) both report the average anomaly for 2016 as 0.99°C. This represents an increase of 0.13°C for the NASA GISS anomaly, compared with 2015, but an increase of 0.09°C for the NOAA NCEI anomaly. Both NASA GISS and NOAA NCEI place the confidence limits on their reported anomalies at +/-0.10°C, approximately the same magnitude as the reported year-to-year global average anomaly change. Both agencies estimate that the influence of the 2015/2016 El Nino contributed 0.12°C to the increase in the reported anomaly for 2016: 0.01°C less than the global average anomaly increase reported by NASA GISS and 0.03°C more than the increase reported by NOAA NCEI. That is, essentially all of the 2016 global average temperature anomaly increase reported by both agencies was the result of the influence of the 2015/2016 El Nino, which was very similar in magnitude to the 1997/1998 El Nino; the two are the strongest El Ninos recorded in the instrumental temperature record. HadCRUT reported an average anomaly of 0.774°C, an increase of 0.14°C from the 2015 average anomaly, with similar confidence limits and a similar El Nino contribution.

All of the near-surface temperature anomaly products reported dramatic drops in their anomalies beginning in the Spring of 2016, though these drops were from record high monthly peaks driven by the El Nino. The NASA GISS anomaly dropped from a high of 1.36°C to a December 2016 level of 0.81°C, a change of -0.55°C. The NOAA NCEI anomaly dropped from a high of 1.22°C to 0.79°C, a change of -0.43°C. The HadCRUT4 anomaly dropped from a high of 1.08°C to 0.59°C, a change of -0.49°C. This is a spread of 0.12°C among the near-surface temperature anomaly products, approximately equal to the magnitude of the reported 2016 anomaly increases and the estimated impact of the 2015/2016 El Nino, and half the confidence range claimed for the reported anomalies.
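The drops and the 0.12°C spread reduce to simple arithmetic; this sketch merely recomputes them from the peak and December values quoted above:

```python
# Peak and December 2016 anomaly values, in °C, as quoted above.
peaks =    {"NASA GISS": 1.36, "NOAA NCEI": 1.22, "HadCRUT4": 1.08}
december = {"NASA GISS": 0.81, "NOAA NCEI": 0.79, "HadCRUT4": 0.59}

# Change from peak to December for each product.
drops = {name: round(december[name] - peaks[name], 2) for name in peaks}
print(drops)  # {'NASA GISS': -0.55, 'NOAA NCEI': -0.43, 'HadCRUT4': -0.49}

# Spread between the largest and smallest drop.
spread = round(max(drops.values()) - min(drops.values()), 2)
print(f"spread: {spread} °C")  # spread: 0.12 °C
```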

University of Alabama Huntsville (UAH) and Remote Sensing Systems (RSS) both reported that 2016 was 0.02°C warmer than 1998, which both sources still report as the previous warmest year in the satellite temperature record. Dr. Roy Spencer of UAH stated that the increase in the reported temperature anomaly between 1998 and 2016 would have had to be ~0.10°C to be statistically significant. The UAH anomaly dropped from a 2016 high of 0.83°C to a December 2016 level of 0.24°C, a change of -0.59°C. The RSS anomaly dropped from a 2016 high of 1.0°C to a December level of 0.23°C, a change of -0.77°C. Both the UAH and RSS anomalies show the dramatic impact of the 2015/2016 El Nino. Both anomalies suggest that the “Pause” has returned, since they show no statistically significant warming since at least 1998.

The question now is whether there will be a La Nina in 2017; and, if so, the extent to which it will further reduce the post El Nino anomalies.

Tags: Warmest, Global Temperature, Temperature Record

Highlighted Article: Climate Models for the Layman

Here is an excellent paper on climate models by Dr. Judith Curry and The Global Warming Policy Foundation (GWPF).

Climate Models for the Layman

"This report attempts to describe the debate surrounding GCMs to an educated but nontechnical audience."

Tags: Climate Models

Opening the Kimono

Steve Goreham, the author of the book Climatism! Science, Common Sense, and the 21st Century's Hottest Topic coined the term Climatism, which he defines as “the belief that man-made greenhouse gas emissions are destroying the Earth's climate". Arguably, the definition should include the assertion “that man-made greenhouse gas emissions are destroying the Earth's climate", even absent belief in the assertion.

Ari Halperin manages the blog Climatism. Early in 2016, Halperin authored a guest essay on Watts Up With That entitled “Who unleashed Climatism”, in which he discusses the origins of climate alarmism at length. He concludes that: “Climatism is a foreign assault on America. The aggressor is not another nation-state, but an alliance of UN agencies and environmental NGOs.”

Leo Goldstein has recently posted two guest essays on Watts Up With That, in which he has coined two new terms: Climate Alarmism Governance (CAG) and Climintern (Climatist International). Goldstein traces the history of CAG from the founding of the Climate Action Network (CAN) in 1989. CAN now has 1100+ members globally. CAN, the UN and numerous foundations provide CAG and are referred to collectively as the Climintern.

The Climintern refers to the analyses provided by Halperin and Goldstein as conspiracy theories. Others might refer to them as conspiracy exposés, or histories. The documentation they provide in these essays clearly establishes the nature and scope of CAG and the influence of the Climintern. Their analyses are well worth reading. They provide a historical record of how climatism has proceeded from its earliest days to today, and of how it works to influence global and national climate policy through the UNFCCC and the UN IPCC.

“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.” (HT: James Whitcomb Riley)

Tags: Climate Skeptics

Another One Bites the Dust – Judith Curry Resigns

Dr. Judith Curry has resigned her position at Georgia Tech, in frustration over “the politics and propaganda that beset climate science”. Dr. Curry explained,

“the deeper reasons have to do with my growing disenchantment with universities, the academic field of climate science and scientists… I no longer know what to say to students and postdocs regarding how to navigate the CRAZINESS in the field of climate science. Research and other professional activities are professionally rewarded only if they are channeled in certain directions approved by a politicized academic establishment — funding, ease of getting your papers published, getting hired in prestigious positions, appointments to prestigious committees and boards, professional recognition, etc.”
“How young scientists are to navigate all this is beyond me, and it often becomes a battle of scientific integrity versus career suicide.”

Dr. Curry’s concerns regarding university climate science education do not bode well for the future of climate science over the next 30-40 years.

Dr. Curry follows Dr. Roger Pielke, Jr., who did not resign his faculty position at the University of Colorado, but has redirected his research efforts away from climate science.

Other climate scientists, including Dr. David Legates (formerly Delaware State Climatologist) and Dr. Noelle Metting (formerly US DOE), were terminated for failing to adhere to the political climate change narrative.


Dr. Wei-Hock Soon and Dr. Sallie Baliunas, both of the Harvard-Smithsonian Center for Astrophysics, have been under attack since 2003 for their work on the solar contribution to climate change. (Dr. Baliunas has since retired.)

The Climategate e-mails released in 2009 and 2011 exposed efforts on the part of members of the consensed climate science community to destroy the careers of other climate scientists, including Dr. Chris de Freitas, Dr. Christopher Landsea and Dr. Patrick Michaels. Fortunately, these efforts were unsuccessful.

The life of a non-consensed climate scientist is hardly a bed of roses.

Tags: Consensus

Trump’s Corruption Mandate

Donald Trump’s astonishing election victory was in part a backlash against increasingly corrupt American politics.

Transparency International publishes an annual Corruption Perceptions Index, ranking all nations from most to least clean in their political conduct. The United States entered the twenty-first century by falling out of the top ten. Scandinavian nations such as Finland, Denmark, and Sweden along with Commonwealth nations such as New Zealand, Canada, and the United Kingdom dominated the top spots, while the USA was ranked fourteenth.

Since then the USA has declined further in the Index's rankings.

Both corruption and the perception of corruption increased during the tenures of Bill and Hillary Clinton, George W. Bush, and Barack Obama. Examples included the use of the IRS to bully political enemies, government bailout funds going to politically-connected crony businesses, the use of high office to enrich one’s private foundation, and presidents and their appointees to regulatory bodies using their discretionary power indiscriminately.

Given this, one understands the joke that the wildly popular television show House of Cards is really a documentary.

Yet there is a danger in that joke. While corruption occurs in all governments, there is a huge difference between cultures in which corruption is normalized and implicitly tolerated and those in which corruption is condemned, vigilantly monitored, and forced to go underground.

So Trump’s astonishing election victory may be a healthy reaction against the increasing corruption -- and therefore even more astonishing because his own character seems imbued with significant elements of personal and business-political corruption -- and because the combination of his presidency with his personal financial holdings is fraught with conflicts of interest (as this Wall Street Journal graphic shows). Yet since conflict-of-interest rules apply differently to the president and vice-president, according to 18 U.S. Code § 208, it is unclear how many conflicts will actually be avoided.

Of course some Trump supporters argue that it takes a beast to fight a beast, but what we really need is a political culture that does not lend itself to amoral animal metaphors.

Political leadership is a human endeavor, and effective human leadership in the free and open democratic republic we aspire to be requires both integrity and the widespread perception of integrity. We are a rich country economically, so we can recover from billions of dollars of loss. But the erosion of character among our leadership is much more expensive. It encourages cynicism among the citizenry. It imposes demoralization and disengagement costs upon them. It discourages the morally principled from seeking political office. And it attracts the even-more-corrupt to the corridors of power. No democratic republic can survive that downward cycle for long.

So while I did not vote for Trump, I am encouraged that his administration is following up on at least one of his campaign promises: for example, a five-year ban on lobbying for all transition and administration officials. We can debate the morality and likely effectiveness of that particular anti-corruption policy, but as a post-election statement of intent its seriousness is evident and positive.

Nations always have a choice. A century ago Argentina was among the top ten most prosperous and clean nations in the world, but it has declined precipitously and is now relatively much poorer and ranked in the bottom half of nations for bribery and related corruptions. South Africa was only moderately corrupt a generation ago and has also declined sadly.

Yet some countries have cleaned up their corruption impressively. Botswana improved dramatically in one generation, as did Chile -- both overcoming the widespread stereotype of irredeemable business-as-usual-corruption in African and Latin American politics.

A banana-republic destiny is avoidable for the USA. President Trump’s character -- with its odd mix of obviousness and unpredictability -- will be decisive, as will the vigilance of the rest of us and our commitment to putting the animals back in their cages and cleaning up their messes.

Tags: Corruption, Politics, Donald Trump

“Mann Overboard”

Much has been written this year about the 2015/2016 El Nino and about the apparent record global temperature anomalies. Professor Michael Mann of Penn State University was quick to provide his opinion that the El Nino contributed only about 0.1°C, or about 15%, to the 2015/2016 global average temperature anomaly increase. Others provided estimates ranging from 0.07°C to 0.2°C. The balance of the temperature anomaly increases was attributed to the continuing increase in atmospheric CO2 concentrations as the result of human fossil fuel combustion.

However, the 2015/2016 El Nino is now over; and, global temperature anomalies have dropped sharply: by approximately 0.4°C overall; and, by approximately 1.0°C over land only. The sea surface temperature anomalies are expected to decrease further, although more slowly, especially if a significant La Nina develops in 2017. The equatorial Pacific is in a weak La Nina condition at present, but La Nina conditions appear to be weakening.

Regardless, Mann and others who minimized the potential contribution of the 2015/2016 El Nino to the rapid global temperature anomaly increases in those years are now faced with explaining the large, rapid decreases in the global average anomalies following the end of the El Nino. It would be difficult enough to explain rapid anomaly increases in association with slow increases in atmospheric CO2 concentrations; it is even more difficult to explain rapid anomaly decreases in association with slow increases in atmospheric CO2 concentrations.

Tags: Temperature Record, Global Temperature

Political protest in a "post-fact era"

“Everyone is entitled to his own opinion, but not to his own facts” (Senator Daniel Patrick Moynihan)


A protester was shot at the University of Washington during a clash between rival factions — one faction physically blocking an audience from hearing a speech, the other faction seeking to hear a rabble-rousing orator.

The orator was Milo Yiannopoulos, a leading spokesman for the alt-right movement, a revitalized and muscularized version of the nationalist and populist politics long submerged in American political life.

Outside the auditorium, blocs of red-wearing Trump supporters and black-wearing anarchists and others faced each other (unconsciously updating Stendhal's novel The Red and the Black). The man who was shot was apparently a peacemaker, placing himself in the middle of the verbally abusive, pushing-and-shoving factions.

The victim's positioning was unfortunate, as there is little "middle" left in our polarized political times.

And it is symbolic that the shooting took place at a university, because it was precisely at universities where the battle for civility has been lost.

A generation ago in universities we had vigorous debates about truth, justice, freedom, and equality. The governing premise was that through argument rational people could fine-tune their grasp of the facts and test the logic of their theories. The process would often be contentious. Yet with professors and students committed to a baseline civility, it would be cognitively progressive.

But the leading professors of the new era — Michel Foucault, Jacques Derrida, and Richard Rorty among them -- undercut that entire process. Facts, they argued, are merely subjective constructs and masks for hidden power agendas. Over the next generation the words "truth," "justice," "freedom," and "equality" began to appear exclusively in ironic scare quotes.

"Everything," declared post-modern professor Fredric Jameson, "is political." And absent facts, argued post-modernist Frank Lentricchia, the professor's task is transformed from truth-seeker to political activist: in the classroom he should "exercise power for the purpose of social change."

We live in the resulting postmodern intellectual culture, with an entire generation (mis-)educated to see politics not as a cooperative quest to solve economic problems and protect human rights — but as a ceaseless clash of adversarial groups, each committed to its own subjectivist values. Feminist groups versus racial groups versus wealth groups versus ethnic groups versus sexuality groups versus an open-ended number of increasingly hostile and Balkanized subdivisions.

Thus we have a generation populated with biologically mature people who lack the psychological maturity to handle debate and occasional political loss — at the same time convinced of the absolute subjective necessity of asserting their goals in a hostile, victimizing social reality.

As reasonable discussion declined in universities, physicalist tactics quickly replaced it. Arguments about principles were replaced with routine ad hominem attacks. Letters of invitation to guest lecturers prompted threats of violence. The heckling of speakers turned to shouting them down. Picketing protests became intentional obstruction.

And now we get the inevitable backlash as other, rival factions learn the new rules and steel themselves for engagement.

Yiannopoulos himself is a product of post-modern culture, as it was he who exultingly coined the phrase "post-fact era" to describe how politics now works. He is proving himself to be an effective player of that brand of political activism.

Yet the governing ethic of our political culture is not a lost cause, as large swathes of the American populace are still committed to the core democratic-republican civic virtues of intellectually honest debate, free speech, tolerance — and of being both a good loser and a good winner. A fractious election brought out many of the worst among us. But journalistic headlines aside, our choice is not only between the tactics of post-modern political correctness and those of alt-right populism. Our leading intellectuals, especially those within universities who are nurturing the next generation of leaders, must also teach the genuinely liberal-education alternative.

Tags: Free Speech, College Students, Protests

“Just the facts, ma’am.”

“Politics is a battle of ideas; in the course of a healthy debate, we’ll prioritize different goals, and the different means of reaching them. But without some common baseline of facts; without a willingness to admit new information, and concede that your opponent is making a fair point, and that science and reason matter, we’ll keep talking past each other, making common ground and compromise impossible.”

– President Obama (Jan. 10, 2017)

Simple Definition of fact

  • something that truly exists or happens; something that has actual existence
  • a true piece of information

Source: Merriam-Webster's Learner's Dictionary


“Just the facts, ma’am.” #1

Some simple words are often used imprecisely. In discussions related to climate science, the simple word “fact” is a case in point. For example, a temperature measurement taken by an observer from a particular instrument, in a particular enclosure, at a particular location and at a particular time is frequently referred to as a “fact”. However, it is a “fact”, as defined above, only in those particular circumstances. It is not necessarily, and not even likely, “a true piece of information” in the broader sense, since it is affected by those circumstances.

Temperature measurements taken “near-surface” are “selected” for inclusion in the temperature record; and, are then “adjusted” to account for the particular instrument, enclosure, location and time of observation. These “adjusted” measurements are not “something that truly exists or happens”, but rather an estimate of something that might “truly exist or happen”.

An “ideal” near-surface temperature measurement site is identified as follows: Climate Reference Network (CRN) Site Information Handbook

Class 1:

  • Flat and horizontal ground surrounded by a clear surface with a slope below 1/3 (<19°).
  • Grass/low vegetation ground cover <10 centimeters high.
  • Sensors located at least 100 meters from artificial heating or reflecting surfaces, such as buildings, concrete surfaces, and parking lots.
  • Far from large bodies of water, except if representative of the area, and then located at least 100 meters away.
  • No shading when the sun elevation is >3 degrees.

Such a site is estimated to be able to produce a near-surface temperature measurement with an error of less than 1°C, assuming proper instrument selection and calibration, proper enclosure and timely reading. Such a measurement is a “fact”, subject to those limitations.

Climate science deals with these errors of “fact” regarding near-surface temperature measurements by using temperature anomalies, or the differences between temperature measurements at a particular site and a long-term baseline average for that same site. These anomalies are “facts” only if there have been no changes in any of the circumstances which affect the measurements; and, they cease to be facts if the measurements are “adjusted”, rendering them merely estimates.
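The anomaly arithmetic can be sketched as follows. This is a minimal illustration, not any agency's actual procedure; the station values and the baseline period below are hypothetical, chosen only to show the calculation.

```python
# Illustrative only: computing a temperature anomaly for one site.
# An anomaly is the departure of a measurement from that site's own
# long-term baseline mean, so a fixed site bias cancels -- but only if
# the circumstances of measurement never change.

# Hypothetical July mean temperatures (deg C) at one station, 1981-2010.
baseline_years = [24.1, 23.8, 24.5, 24.0, 24.2, 23.9, 24.3, 24.1, 24.4, 24.0,
                  24.2, 24.1, 23.7, 24.6, 24.0, 24.3, 24.1, 23.9, 24.2, 24.4,
                  24.0, 24.1, 24.3, 23.8, 24.2, 24.5, 24.0, 24.1, 23.9, 24.2]

baseline_mean = sum(baseline_years) / len(baseline_years)

# A later July mean at the same station, same instrument, same siting.
observed = 24.9

anomaly = observed - baseline_mean
print(f"baseline mean: {baseline_mean:.2f} C, anomaly: {anomaly:+.2f} C")
```

Note that the subtraction removes a constant site bias only because the same bias is present in both the baseline and the later observation; if the instrument, enclosure or siting changes between the two, the cancellation fails.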



“Just the facts, ma’am.” #2

Above I discussed the limitations on “facts”; and, the difference between facts and estimates related to individual temperature measurements, whether analyzed as discrete temperatures or temperature anomalies.

Once near-surface temperature measurements have been recorded, selected and “adjusted”, the next step in the process is to combine these selected, “adjusted” temperature estimates, expressed as anomalies from previous selected, “adjusted” temperature estimates, into an estimated global average near-surface temperature anomaly. While it might be argued that errors in the recorded temperature measurements are random, it cannot be argued that the selection of the measurements to be included in the global average calculation, or the “adjustments” made to those measurements, are random: there could be no rational justification for making random “adjustments” to measurements.

The estimated global average surface temperature anomaly is reported to two decimal place “precision”; and, used to calculate decadal rates of temperature increase to three decimal place “precision”. This level of “precision” is highly questionable, bordering on ridiculous, considering the inaccuracy of the underlying temperature measurements. The underlying measurements are estimated to be in absolute error by an average of more than 2°C in the US, where the stations have been surveyed and their siting compared to the US CRN1 siting requirements. The expected inaccuracy of the remaining global temperature measuring stations is assumed to be similar, though they have not been surveyed against those requirements.
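The distinction underlying this argument, that averaging many stations shrinks random error but cannot remove an error shared by the stations, can be shown with a small Monte Carlo sketch. The error magnitudes below are assumptions for illustration, not measured values.

```python
# Illustrative only: averaging reduces random error but not shared bias.
import random

random.seed(42)
n_stations = 1000
true_temp = 15.0          # hypothetical true regional mean (deg C)
bias = 0.5                # hypothetical systematic bias shared by all sites

# Random errors (sd = 2 deg C) largely cancel in a 1000-station average...
random_only = [true_temp + random.gauss(0, 2.0) for _ in range(n_stations)]
# ...but a bias common to every station survives the averaging intact.
biased = [t + bias for t in random_only]

mean_random = sum(random_only) / n_stations
mean_biased = sum(biased) / n_stations
print(f"mean with random error only: {mean_random:.2f}")
print(f"mean with added shared bias: {mean_biased:.2f}")
```

The first average lands close to the true value; the second is offset by the full bias, no matter how many stations are averaged. This is why non-random selection and non-random “adjustment” matter more than random reading error.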

Finally, the estimated “adjusted” global average temperature is reported to two decimal place “precision”. This estimate is reported as a “fact”, though the particular circumstances under which the estimate might have been a “fact” are ignored.



“Just the facts, ma’am.” #3

“Just the facts, ma’am.” (1 & 2) discussed “facts” in the context of individual near-surface temperature measurements and global average temperature anomaly calculations. The final step in the climate change analysis process is the creation of future climate change scenarios using climate models.

There are numerous climate models, none of which have been verified. Therefore, it cannot be said that there is a climate model which is a “fact”, in the sense that it accurately models “something that truly exists or happens”, rather than hypothesizes something which might happen if the model were accurate.

The climate models are run using a range of inputs for climate sensitivity and climate forcings, because there are no specific, verified values for the various sensitivities and forcings. Therefore, not only are the climate models not “facts” (“something that truly exists or happens”), the inputs which feed the models are not “facts” either, in the sense that they are “a true piece of information”. It is not even a “fact” that the actual climate sensitivity or actual climate forcings are within the ranges of the values used as inputs to the models.
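The effect of running models over input ranges rather than verified values can be illustrated with a toy zero-dimensional relation, equilibrium warming = sensitivity parameter × forcing. Both ranges below are hypothetical placeholders, not the values used by any actual model.

```python
# Illustrative only: a toy equilibrium-warming relation dT = lam * dF,
# run over assumed ranges of a sensitivity parameter (lam, deg C per W/m^2)
# and forcing (dF, W/m^2), to show how input ranges widen the output range.

lambdas = [0.4, 0.8, 1.2]      # hypothetical sensitivity range
forcings = [2.6, 4.5, 8.5]     # hypothetical forcing range

results = [(lam, f, lam * f) for lam in lambdas for f in forcings]
low = min(dt for _, _, dt in results)
high = max(dt for _, _, dt in results)
print(f"projected warming spans {low:.1f} to {high:.1f} deg C")
```

Even in this toy case, a threefold range on each input produces nearly a tenfold spread in output, which is why unverified inputs yield scenarios rather than predictions.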

Therefore, the modeled scenarios of future climate change (temperature change) are not “facts”, or arguably even based on facts. Rather, they are estimates, based on estimates, of the potential change in current estimates over some future period.

Based on the “facts”, as discussed in these commentaries, there is only a tenuous basis for concern about catastrophic anthropogenic climate change.

That’s “Just the facts, ma’am.” (HT: Sgt. Joe Friday, LAPD)

Tags: Temperature Record, Estimates as Facts, Global Temperature

Energy Efficiency as a Climate Change Reform Strategy: Are we just throwing money at the problem?

(For an introduction to this E3 blog series click here)

Put to one side for a moment whether we need to do anything about climate change.  Assume it is real and we need to do “something.”  There are a wide variety of “somethings” that we can do.  Indeed, right now we are in the “throw spaghetti on the wall and see what sticks” phase.

But let’s face it, we have many more problems than climate change (even assuming it is a real problem).  There is, if you will, strong competition for scarce public resources to solve problems.  I would think it hardly controversial to state that we should spend public tax dollars in the most cost effective way possible.  Bjorn Lomborg makes the point that we don’t want to just feel good, we want to DO good!

For example, if there are two competing proposals to reduce a certain amount of greenhouse gases, all other things being equal, the one that does it cheapest should be chosen.  Similarly, if there are two competing proposals that will save lives, we should choose the one that saves lives for lower costs.  What if one proposal is to save a life a century from now and one to save it today?  More difficult, what if one is to buy mosquito netting for developing countries to mitigate malaria and another to slow the increase in global temperature in 50 years?  These all involve difficult trade-offs on how to use scarce resources.

Nearly every discussion of remedies for climate change discusses the enormous opportunity for energy efficiency.  Based on engineering models, rather optimistic claims are made for the potential for investments in energy efficiency to cure a variety of what ails us, often called a win-win-win situation.  We would use less energy.  Our total energy bills would be reduced.  We would emit fewer greenhouse gases.  We would need to build fewer electric power plants.  And best of all, the return on investment would rival that of Bernie Madoff’s and it would be tax free.

To be fair, the US does have an energy efficiency problem.  If energy prices (either gasoline or electricity) are distorted, then by definition we are not using energy efficiently.  This has possible environmental, energy security, and economic implications.

Both liberal and conservative energy analysts agree that the way that electricity is priced in the US leaves lots of room for improvement.  Broadly, we set prices that are too low in the peak period and too high in the off-peak period.  Additionally, many states set electricity rates in a way that gives electric utilities incentives to build new power plants rather than to invest in electric efficiency technologies.
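The pricing distortion described above can be made concrete with a small sketch comparing a flat rate to the underlying time-varying cost. The usage split and marginal costs below are hypothetical; real tariffs vary widely.

```python
# Illustrative only: how a single flat rate mutes the peak/off-peak signal.
# Hypothetical monthly usage and true marginal costs of supply.

peak_kwh, offpeak_kwh = 300, 700            # kWh used in each period
peak_cost, offpeak_cost = 0.30, 0.08        # true marginal cost, $/kWh

# A flat rate set to recover the same total revenue across all usage:
total_cost = peak_kwh * peak_cost + offpeak_kwh * offpeak_cost
flat_rate = total_cost / (peak_kwh + offpeak_kwh)

print(f"flat rate: ${flat_rate:.3f}/kWh")
print(f"peak power is underpriced by ${peak_cost - flat_rate:.3f}/kWh")
print(f"off-peak power is overpriced by ${flat_rate - offpeak_cost:.3f}/kWh")
```

Under the flat rate, consumers see no reason to shift usage off-peak, even though doing so would lower the true cost of serving them, which is exactly the distorted signal both camps agree needs fixing.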

The key disagreement between market oriented analysts and many liberals is the technique that should be adopted to correct these problems.  Market analysts promote the use of competition and market forces and market-based regulatory approaches to achieve better pricing signals for consumers.  Once prices are “efficient,” then let the consumer choose how to make the myriad trade-offs as to how to spend their money.  Many liberals promote a much more command-and-control regulatory approach to rectify the distortions created by bad pricing.  In essence, they don’t believe the consumer will make the “right” choices and thus adopt policies that force correct choices.

In 2015, a dramatic study was released by several professors from the University of Chicago and the University of California at Berkeley.  The study found that the engineering model most often used to project the costs and benefits of energy efficiency technologies was seriously flawed. It found that the model’s projections seriously overestimated the energy savings that would result from investing in a given technology. This is important because public monies are often used to fund investments in energy efficiency. If the study is correct, many projects that are funded do not meet the standard of being cost beneficial.
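The cost-benefit test at stake can be sketched as a simple present-value comparison. All figures below are hypothetical, including the realization rate, which stands in for the gap between projected and realized savings; the point is only that a program justified on projected savings can fail the same test on realized savings.

```python
# Illustrative only: projected vs realized savings for an efficiency subsidy.
# All figures are hypothetical assumptions, not the study's numbers.

upfront_cost = 5000.0          # $ per household retrofit
projected_savings = 600.0      # $/year, per an engineering model
realization_rate = 0.45        # assumed fraction of projection achieved
years, discount = 15, 0.05

def present_value(annual, years, r):
    """Discounted value of a level annual saving over `years` years."""
    return sum(annual / (1 + r) ** t for t in range(1, years + 1))

pv_projected = present_value(projected_savings, years, discount)
pv_realized = present_value(projected_savings * realization_rate, years, discount)

print(f"PV of projected savings: ${pv_projected:,.0f} (passes: {pv_projected > upfront_cost})")
print(f"PV of realized savings:  ${pv_realized:,.0f} (passes: {pv_realized > upfront_cost})")
```

In this sketch the retrofit looks cost-beneficial on the engineering projection but fails once the assumed realization rate is applied, which is the shape of the problem the study identified.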

The study caused quite a stir in the energy policy community. It threatened to slaughter one of the sacred cows of progressives. But the study is significant because it is the first of its kind to comprehensively compare projected energy savings to the savings actually realized after the fact. If the study is correct, it severely undercuts one of the main arguments often used to justify significant public investment in energy efficiency.

Some who are climate skeptics will no doubt tout the study as evidence for a variety of propositions, e.g., wasteful government programs, the unreliability of engineering models, the triumph of good intentions over good policy.  But even for those genuinely concerned about climate change, if the study is correct and we do have a real climate change problem, energy efficiency strategies will fail to deliver the intended reductions.  This means we are not really addressing climate change.  We are just throwing scarce public resources at the problem.

Tags: Efficiency Standards


One of the principal concerns raised regarding climate change is its potential effects on agriculture. There is continuing discussion that the potential combination of increased temperatures with drought or increased rainfall might result in reduced crop yields or crop failure in some or all of the traditional crop production regions. There is also continuing discussion of the perceived need to reduce meat consumption, so that grazing land could be converted to food production.

There is little discussion of the likelihood that production of these food crops would move to areas which have been too cold or had too short growing seasons in the past. There is also little recognition of the contribution of plant genetics to increased plant tolerance and yield.

However, in the face of all of this concern about food production, at least in the United States, the number one cash crop in ten US states is marijuana, as shown in the table below. Marijuana is among the top five cash crops in a total of 39 of the 50 states; and, among the top ten cash crops in all but two states.

This is not to suggest that marijuana is a major crop in any of these states by volume or weight, or that it is crowding out production of other crops for human or animal consumption, including export. Rather, it is to suggest that a very high value has been placed on a crop which has no food value (even when baked into brownies), in the face of vocal concern about the adequacy of future food production.

Current efforts to legalize the consumption of marijuana for other than medicinal purposes will likely increase demand for the product and the acreage dedicated to growing it, though legalization might also reduce its commercial value. One has to question whether this agricultural product should have such a high priority.


Marijuana Rank as Cash Crop, by State

Alabama 1 Louisiana 6 Ohio 4
Alaska NA Maine 1 Oklahoma 3
Arizona 3 Maryland 5 Oregon 4
Arkansas 4 Massachusetts 2 Pennsylvania 5
California 1 Michigan 5 Rhode Island 1
Colorado 4 Minnesota 6 South Carolina 3
Connecticut 1 Mississippi 3 South Dakota 9
Delaware 3 Missouri 4 Tennessee 1
Florida 2 Montana 4 Texas 6
Georgia 3 Nebraska 9 Utah 2
Hawaii 1 Nevada 2 Vermont 2
Idaho 5 New Hampshire 2 Virginia 1
Illinois 4 New Jersey 7 Washington 5
Indiana 3 New Mexico 2 West Virginia 1
Iowa 4 New York 2 Wisconsin 6
Kansas 6 North Carolina 5 Wyoming 8
Kentucky 1 North Dakota ?    


Source: NORML (USDA data)

Tags: Agriculture

Energy Policy and the Presidential Election

(For an introduction to this E3 blog series click here)

Professor Richard Muller of the University of California, Berkeley, a PhD in physics, wrote a fascinating book in 2012, Energy for Future Presidents: The Science Behind the Headlines.  He wrote it as if it were a memo to the next president.  Though written for the 2012 presidency, it stands the test of time and is still a good read for the next president, assuming the president also reads my critiques of some of the conclusions.

I just found the book but I wish I had found it sooner.  Several things are fascinating.

First, the book is comprehensive in its discussion of current energy policy issues.  He is remarkably lucid considering the complexity of the subject matter.  To be completely honest, I learned a lot about the underlying physics of energy that will prove helpful in making future E3  policy recommendations.

Second, four years have passed, so we have more information now than Professor Muller had.  But he was remarkably prescient in his insights about many energy issues.  For example, fracking for shale oil was in its infancy, while fracking for natural gas was going gangbusters.  He correctly predicted that if shale-oil fracking panned out, it would radically change global dynamics related to energy as well as foreign policy.  He even predicted that it could mean the end of OPEC.  I am ready to concede that he is correct on this one.

Third, he predicates much of his analysis of energy on two issues: oil security and climate change.  I call each of these issues a Golden Thread.  If his Threads are correct, his recommendations create an elegant garment.  But if you pull both of the Golden Threads from the garment, it unravels into rags. Future blog postings will discuss these Golden Threads and the attendant recommendations.  Since I disagree with the importance of the Golden Threads, much of my analysis will be critical of an otherwise very insightful book.

Fourth, I highly recommend the book because I am very impressed with Professor Muller’s intellectual integrity.  My career has been in energy policy specifically and domestic economic policy broadly.  I have been at it for almost 4 decades.  I am a lawyer by training but have mostly “practiced” economics and the implementation of free market policies in energy.  I worked in the Reagan and Bush Administrations for nearly 11 years on radically changing policy relating to natural gas.  In hindsight, we succeeded beyond our wildest imaginations, as natural gas transitioned from energy basket case in the 1970s to the sharpest arrow in the energy quiver in the 2000s.  Many predicted at the time of the reforms that this paradigm based on market reliance would result in complete failure. (The late Senator Howard Metzenbaum called the Wellhead Decontrol Act of 1989 “one of the most anti-consumer pieces of legislation we have produced in a long time.”)

Accordingly, I have had my share of arguments and disagreements about energy policy.  It is rare to find someone whose arguments do not correspond with their self-interest.  Self-interest doesn’t necessarily mean you are wrong; it just means your arguments have to be taken with a grain of salt.  I have had only a few discussions with persons with no self-interest, a deep understanding of the issues, and intellectual integrity.  Thus it was a pleasant surprise to find Professor Muller’s book.

He does not share my world view.  First, he is by his own admission a physicist and concedes that he does not have training in the many other disciplines that are required for sound energy policy.  But even I would concede that my recommendations must be consistent with sound physics; thus, his insights must be taken seriously.  Second, he has much more faith in government solutions than I do, though he is thoughtful in his consideration of policy alternatives as compared to conventional liberal wisdom.  Third, as noted above, he believes in the Golden Threads, and I disagree with him on a number of points relating to those Threads.

Despite our disagreements, I have found his ability to decimate some of the Left’s sacred cows absolutely compelling and brutally honest.  Given that he is a member of their tribe and I am not, he presumably has some credibility in his observations that a “hack” like me would lack.  Several examples follow:

  1. Nuclear power is safe.
  2. Energy accidents are overblown.
  3. A radical carbon reduction strategy is unsound.
  4. Electric cars are not a sound solution to any of our energy problems.
  5. Many assumptions made about solar, wind, geothermal, and electric storage are pie in the sky from a physics perspective and should be taken with a grain of salt.

Thus, his book is an excellent starting point for deeper discussions of energy policy in future blog postings as we enter the cycle of re-envisioning energy policy in a new Administration.

(Truth in advertising, I was Dr. Ben Carson’s energy and environment advisor.)

Tags: Book Review