31 December 2012

Politics: A New Catastrophe Reinsurance Risk

In a new report out today on the state of the catastrophe reinsurance industry, Willis Re has this interesting passage (here in PDF):
Superstorm Sandy has yet again demonstrated the danger of overreliance on catastrophe models, due to the complexity of the original loss as well as the emergence of a new and as yet unmodeled uncertainty: the politicization of policy form interpretation. Buyers who have relied on covers with indexed, non-indemnity triggers may yet find their reinsurance protections do not respond to Superstorm Sandy loss recoveries in the way they had planned.
What is "the politicization of policy form interpretation"?

This is of course the issue of the "hurricane deductible" that I have discussed recently on this blog. The way for reinsurers to address the politicization of policy form interpretation is to develop covers that rely on indexed, non-indemnity triggers which are robust to political meddling. Such risks are not ever going to be usefully modeled by cat modelers (who have a hard enough time as it is).

In plain English this means that the industry should rely on triggers that are unambiguous, verifiable and, ideally, independent from governmental decision making.
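
A minimal sketch of what such an unambiguous, indexed (parametric) trigger might look like in contract logic. The threshold, index source, and payout amount here are illustrative assumptions, not terms from any actual reinsurance cover:

```python
def parametric_payout(measured_wind_mph, threshold_mph=74, limit=10_000_000):
    """Pay a pre-agreed limit when an independently measured index (here,
    peak wind speed at an agreed observation point) meets a pre-agreed
    threshold. No loss adjustment and no government classification of the
    storm enters the calculation -- the trigger is verifiable and
    independent of political decision making."""
    return limit if measured_wind_mph >= threshold_mph else 0

print(parametric_payout(80))   # 10000000 -- index met, full payout
print(parametric_payout(70))   # 0 -- index not met, no payout
```

The point of the sketch is that reclassifying the storm after the fact changes nothing: the payout depends only on the measured index value.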

Updated: Normalized Hurricane Losses 1900-2012

The graph above shows an updated estimate of the 1900 to 2012 normalized hurricane losses for the United States. The normalization methodology is described in Pielke et al. 2008 (here in PDF), and the data presented in the graph come from the ICAT Damage Estimator, which extends the analysis of Pielke et al. through 2011.
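
For readers unfamiliar with the methodology, here is a sketch of the general form of such a normalization: a historical damage figure is scaled up for inflation, real wealth per capita, and the population of the affected counties. The adjustment ratios below are hypothetical, not values from the paper:

```python
def normalize_loss(base_loss, inflation_ratio, wealth_ratio, population_ratio):
    """Scale a historical damage figure to present-day terms by adjusting
    for inflation, real wealth per capita, and affected-county population --
    the general form of the Pielke et al. 2008 normalization. The ratios
    passed in are hypothetical, for illustration only."""
    return base_loss * inflation_ratio * wealth_ratio * population_ratio

# A hypothetical $100M historical loss with illustrative adjustment ratios:
print(normalize_loss(100e6, inflation_ratio=10.0, wealth_ratio=3.0,
                     population_ratio=5.0))   # 15000000000.0, i.e., $15 billion
```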

Today Willis Re released a report (here in PDF) on the state of the reinsurance industry, which presents an estimated $20-25 billion in insured losses for Sandy. As is conventionally done, I have doubled the $25 billion insured figure to arrive at an estimated $50 billion total loss for Sandy in 2012. Please note that the final loss estimate, apples-to-apples with the normalized loss database (kept by NOAA/NHC), may wind up being higher or lower. In addition, I have added placeholders of $5 billion in losses for each of the 3 other storms (besides Sandy) which made landfall as post-tropical cyclones of hurricane strength, in 1904, 1924 and 1925 (the losses from these four storms are depicted in grey in the graph above). In 2013 we will develop a rigorous basis for estimating these losses, which I'd guess have a good chance of winding up larger than the placeholders I have entered for now.
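
The loss arithmetic described above is simple enough to reconstruct directly; a sketch, with all figures in $ billions as given in the paragraph:

```python
# All figures in $ billions, as described in the text.
insured_sandy = 25                 # upper end of Willis Re's $20-25B insured estimate
total_sandy = 2 * insured_sandy    # conventional doubling of insured to total losses
print(total_sandy)                 # 50

# Placeholder losses for the three other post-tropical landfalls of
# hurricane strength (1904, 1924, 1925) entered into the historical record:
placeholders = [5, 5, 5]
print(sum(placeholders))           # 15
```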

In case you are curious, there is no trend in the normalized data. This makes sense, as there is also no trend in hurricane frequency or intensity at landfall in the United States over the same period. The lack of trend is insensitive to the removal of the 4 post-tropical cyclones.

What does the reinsurance market say about all this? Willis Re explains:
"most reinsurers are still within their annual catastrophe budgets for 2012 and not facing any capital impact... In the absence of Superstorm Sandy, reinsurers would have found it difficult to resist buyer pressure for further concessions. As such, Sandy’s impact has helped to stabilize market pricing on an overall basis and reinsurers have largely delivered to their clients in terms of capacity and continuity."
In other words, thank goodness for Sandy. 

26 December 2012

Is This Progress?

The graph above shows the number of people living on less than $2.00 per day, in apples-to-apples PPP dollars (see below for details), in 1820 and 2005. There are more than 250% more people today living at this very low level of income than almost 200 years ago.

Is that progress?

The details:

The figure of 2.6 billion people worldwide living on less than $2.00 per day in 2005 comes from a 2008 World Bank paper by Chen and Ravallion (here in PDF), specifically their Table 5. The $2.00 threshold is expressed in 2005 international dollars (i.e., PPP adjusted).

The figure for 1820 comes from the Maddison global GDP dataset, and actually works out to $1.82 in 1990 international dollars (i.e., PPP-adjusted values, which will be closer to $2.00 in 2005 dollars). Maddison estimated total global population in 1820 to be 1.04 billion, so at most about 1 billion people -- essentially the entire population -- could have been living on less than $2.00 per day. The actual value is no doubt less than this, but by how much is not known.

Thus, my conclusion that the number of people living on less than $2.00 per day has increased by 250% over almost 200 years is conservative -- it could easily be 300% or 400%.

Some more details: From 1820 to 2005 global population increased by 620% and per capita GDP increased by 1,045% (again, PPP-adjusted dollars, both figures from the Maddison dataset).
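
The arithmetic behind these figures can be checked directly. A sketch (the 65% share in the last line is a hypothetical assumption introduced to show how much larger the increase could be, not a number from the sources):

```python
poor_2005 = 2.6    # billions living under $2.00/day in 2005 (Chen & Ravallion, Table 5)
pop_1820 = 1.04    # Maddison's estimate of 1820 world population, in billions

# Conservative upper bound: assume everyone in 1820 was below the line.
conservative_ratio = poor_2005 / pop_1820
print(round(conservative_ratio, 2))   # 2.5 -- the "250%" figure in the text

# If, hypothetically, only 65% of the 1820 population was below the line:
print(round(poor_2005 / (0.65 * pop_1820), 2))   # 3.85 -- closer to 400%
```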

24 December 2012

The Work of a Nation

Readers of this blog who have followed my discussions of innovation, productivity and economic growth will recognize two competing perspectives among economists on the role of innovation as a source of our current economic difficulties, neither of which I think adequately captures the innovation challenge facing the US economy.

One of these competing views, represented by the work of Robert Gordon and most recently expressed in this weekend's WSJ, is that we have reached the end of innovation and productivity growth, and thus economic growth will slow to a crawl. Gordon writes that "the future of American economic growth is dismal," following from what he says is a decelerating trend in rates of economic growth and innovation over the past half century. As I have shown in a series of posts, the data do not support Gordon's argument -- US per capita economic growth rates are largely unchanged since 1870.

At the opposite pole from Gordon are those economists who argue that rather than reaching the end of innovation and productivity growth, we in fact have far too much [UPDATE: See below for a response from Brynjolfsson]. Erik Brynjolfsson and Andrew McAfee argue that we "need to start preparing for a technology-fueled economy that’s ever-more productive, but that just might not need a great deal of human labor." Similar to Gordon, they base their argument on an analysis of recent trends in labor productivity as compared to employment, finding a "great divergence." Here as well, I don't think that the data actually support their argument.
The graph above provides further evidence that the productivity hawks and doves are both reading into the data signs that just are not there. The graph shows data on GDP per hour worked in the economies of the US, Germany and Japan (data from BLS). There is no indication of a rapid acceleration or deceleration in output per hour worked in any of these three economies. (The one-time effect of the reunification of Germany in 1991 does stand out.) What this graph suggests is that changes in productivity are not the source of more fundamental issues associated with employment, income, equality and other economic outcomes that we care about.
The graph above, also from the BLS data, tells an interesting story. It shows the total hours worked annually across the entire economies of each of the US, Germany and Japan. The graph shows that Japanese workers were performing the same total amount of work in 2011 as in 1970. We can deduce from this that all gains in Japan's GDP since 1970 are a result of productivity gains. Germany has seen its total work increase by 11% since 1970, but with essentially no change since reunification. Since 1970 Germany has seen an almost 30% reduction in hours worked per employed person (Japan saw a 15% decline over the same period). Germany has policies focused on maintaining high employment, which lead to lower rates of change in GDP per employed person but have not penalized GDP growth per hour worked, as shown in the first graph above. Germany provides good evidence that productivity gains and employment need not be closely related, and that employment policies matter a great deal for aggregate levels of employment.

The United States, interestingly, did the same total amount of work in 2011 as it did in 1998, despite seeing overall population grow by 36 million and employment grow by 8.5 million. The average US worker now works about 8% fewer hours than in 1970, with most of this decrease occurring since 1998. With global GDP increasing by 133% since 1998, the world obviously has not been suffering for a lack of work to be done. Indeed, according to the McKinsey Global Institute the world added 1.1 billion new jobs from 1980 to 2010, with 164 million in so-called "advanced economies." Unlike Japan and Germany, the US does not have a population that is decreasing (Japan) or slowly increasing (Germany).
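
The deductions in the paragraphs above rest on a simple identity -- GDP equals total hours worked times output per hour -- so any GDP growth without growth in hours must come from productivity. A sketch with illustrative index values (the numbers are assumptions for illustration, not the BLS data):

```python
def gdp_index(hours_index, productivity_index):
    """GDP index = hours index x productivity index (each 1970 = 100),
    since GDP = total hours worked x output per hour."""
    return hours_index * productivity_index / 100

# Japan: total hours flat since 1970 (index 100). With a hypothetical
# productivity index of 250, all GDP growth is productivity growth:
print(gdp_index(100, 250))   # 250.0

# Germany: hours up 11% since 1970, same hypothetical productivity gain:
print(gdp_index(111, 250))   # 277.5
```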

It seems that at the root of US economic problems lies a challenge of stagnant work, a condition that cannot be attributed to too much or too little productivity growth -- the work that is being done is improving its productivity as it always has. Further, the stagnation in work appears to have pre-dated the recent economic crisis, but was no doubt exacerbated by it.

In this context, reducing unemployment would mean sharing work more broadly (e.g., the average German worker works 20% fewer hours per year than the average American worker) or simply creating or finding more work to be done. Sharing work makes a lot of sense in a slowly growing or shrinking workforce, as in Japan or Germany. Such policies may make sense in the US, but they are not long-term solutions to the challenges of work in an economy that is seeing significant population growth. In that case more work is needed. Creating or finding more work means innovating -- specifically, by competing more effectively in existing markets or by creating new markets to compete in.

In an era of globalization, we have learned that competing more effectively in existing markets can be difficult and, as has been the case in manufacturing, may lead to continued economic growth unmatched by growth in employment. By contrast, the prospect of creating and competing in new markets, or even leveling the playing field in old markets, offers potential for creating more work. More fundamentally, as the world has become wealthier there is increasing demand for economic necessities such as infrastructure, energy and food. The United States is well-positioned on these fronts to put the nation to work. However, doing so requires a forward-looking approach to innovation and a renewed commitment to globalization and economic growth around the world.

UPDATE 12/25: Erik Brynjolfsson sends this comment:
One correction: Andy and I don't think there is "far too much" innovation and productivity growth.  We are huge fans of both and say so repeatedly in our book and our other writings.  In fact, we'd like to see more innovation in work practices and organizations to keep up with the technological advances, aka learn how to better race with machines.   Some people mistakenly lump us with the luddites, but that's about 180 degrees from where we are, even if we don't think technology always and everywhere helps all people evenly.

21 December 2012

Most Read Posts of 2012: Superheroes, Hooters Girls and the Bullsh*t Button

What were the most read posts at this blog in 2012?

1. Chief Scientists are no Superheroes
A critique of a House of Lords report on departmental Chief Scientific Advisors:
The basic flaw in the report is the assumption that science can be cleanly separated from the political process by empowering a heroic individual with special influence on policy making.
2. Hooters Girls are not Hicks Neutral
A part of an ongoing series of discussions on innovation, technology and the economy that is the subject of a new book:
This post is also part of an argument that the discipline of economics has failed to provide an account of innovation in the economy that helps to guide 21st century policy making and, more fundamentally, that economics is simply incapable of providing such an account. A big claim, sure to raise the hackles of card-carrying economists. So fasten your seatbelt.
3. A Handy Bullshit Button on Disasters and Climate Change
Who knows how high and deep the nonsense on disasters and climate change will continue to be piled -- way high is my guess. Keep yourself grounded in the scientific literature with this handy button:
With this post I am creating a handy bullshit button on this subject (pictured above). Anytime that you read claims that invoke disasters loss trends as an indication of human-caused climate change, including  the currently popular "billion dollar disasters" meme, you can simply call "bullshit" and point to the IPCC SREX report.
Other posts getting a lot of traffic included initial ruminations on the Peter Gleick impersonation/forgery affair, a critique of IPCC shenanigans before the US Congress and the IPCC's tortured response to an internally-motivated effort to correct errors in AR4 on disasters.

Thanks for reading and happy holidays! 

20 December 2012

Why is NOAA Acting Like it has Something to Hide?

UPDATE 12/21: Following an inquiry to the NWS and NOAA, Christopher Vaccarro of NOAA External Relations confirms to me by email that NOAA has not asked its Sandy Assessment Team members to sign a nondisclosure agreement, as alleged by Rep. Paul Broun in his letter sent earlier this week, where he wrote: "the new charter requires that participants sign non-disclosure agreements." NOAA has some questions to answer, to be sure, but to remain a credible overseer the House Science Committee needs to have its own facts straight. I've struck out the resulting incorrect text below.

NOAA -- the National Oceanic and Atmospheric Administration and parent agency to the cooperative institute where I am a Fellow here at the University of Colorado -- is acting like an agency with something to hide.

Last week I described the problematic nature of the so-called "hurricane deductible" and how it placed NOAA, and its National Hurricane Center in particular, in a very difficult situation. Specifically, the NHC's preliminary (and not yet final) characterization of the meteorological characteristics of Sandy at landfall implicated tens of billions of dollars in winners and losers among those who suffered property damage from the storm. Predictably, the NHC's characterization of Sandy has attracted the interest of politicians, who have pressured NOAA to make its decisions about the storm's status at landfall in a politically preferred manner.

When NOAA empaneled a post-Sandy assessment team (a routine action) and then immediately terminated it (far from routine), it raised eyebrows. NOAA then reconstituted an assessment team comprised only of government officials who, we have now learned, have been required to sign non-disclosure agreements and to work outside of the provisions of the Federal Advisory Committee Act (FACA). Both steps create the appearance of wanting to keep something secret. They certainly do not suggest transparency. Having served as an outside member on a politically sensitive post-disaster NOAA assessment (Red River Floods, 1997), I do know how such investigations work.

After NOAA started acting strangely on its Sandy investigation, Representative Paul Broun (R-GA), chairman of the Investigations and Oversight Subcommittee of the House Committee on Science, Space and Technology, sent Jane Lubchenco, NOAA administrator, a letter with some pointed questions (PDF). NOAA responded to this letter last week with a vacuous and non-responsive letter (PDF), a step almost guaranteed to focus even more attention on the agency.

Rep. Broun apparently did not like the fact that NOAA completely ignored his oversight, because yesterday he sent a second, strongly worded letter (PDF). In it Rep. Broun asks NOAA the same questions, requiring a point-by-point response, and he also challenges NOAA's commitment to transparency and openness.

Broun's new letter makes note of some new details that should catch the attention of anyone interested in that messy space where science and politics meet (including the bloggers and media who have thus far completely ignored this issue, but I digress):
  • NOAA dramatically scaled back the scope of its post-Sandy assessment; the NHC is not even mentioned in the new assessment charter, yet it was the focus of the original charter;
  • The new assessment does not need approval by NOAA offices; rather, sign-off on the new assessment is closely held by the NOAA administration;
  • NOAA cites FACA as a reason for limiting participation in the assessment -- contrary to FACA's intended purpose, and contrary to recent recommendations for such assessments by the National Research Council;
  • NOAA cites a need for timeliness as an excuse not to include outsiders; however, the internal assessment is not due to start until after the holidays, some 10 weeks after Sandy;
  • Remarkably, participants in the new assessment are required to sign non-disclosure agreements (this fact alone is remarkable -- what could NOAA conceivably want to remain non-disclosed?).
Rep. Broun concludes his letter:
I remain concerned that the NWS Sandy service Assessment lacks sufficient independence as non-governmental participation has been scaled back, confidentiality clauses have been added, and management influence has grown. NOAA has also narrowed the assessment to the point that it may not substantively inform future agency actions.
Rep. Broun's letter makes no mention of the "hurricane deductible," but it would be surprising if it were not in some way related to NOAA's strange behavior. Acting like you have something to hide -- whether you do or not -- is a sure way to get the attention of congressional overseers. NOAA's response is requested by 4 January 2013. Stay tuned.

18 December 2012

Jaws of the Snake

Writing in the New York Times last week, Erik Brynjolfsson and Andrew McAfee argue that “a wonderful ride” began to unravel in the late 1990s when employment growth became “decoupled” from productivity growth.

Their contention is that something fundamental has changed in the economy over the past decade, illustrated in the following graph by the increasing gap between gains in productivity and employment, described ominously as the “jaws of the snake.”

A closer look at the data shows, however, that the “jaws of the snake” have been open for more than 30 years. More fundamentally, rather than a “great decoupling” between trends in employment and productivity, it is clear that productivity and employment have been diverging for a very long time as the composition of the economy has changed.
And here is how it concludes:
So when we look into the “jaws of the snake” we are not seeing the perverse consequences of rapid technological change on the economy -- the influence Brynjolfsson and McAfee have in mind when they write that “we also need to start preparing for a technology-fueled economy that’s ever-more productive, but that just might not need a great deal of human labor.”

To the contrary, an analysis with more appropriate data suggests that the rise of the machines is not the main reason for our current economic challenges, and a world where labor is less needed is not yet upon us.
For the meat in the sandwich, head on over to the BTI, and feel welcome to come back here and comment and critique.

17 December 2012

The De Facto Privatization of State Universities

Writing in the Wall Street Journal over the weekend, Scott Thurm took an in-depth look at the University of Colorado, where I am a professor, as a case study in how state governments are moving out of the business of subsidizing university attendance for their residents.

Thurm writes:
For generations of Americans, public colleges and universities offered an affordable option for earning a college degree. Now, cash-strapped states across the country are cutting funding for colleges and directing scarce resources to primary and secondary schooling, Medicaid and prisons. That is shifting more of the cost of higher education to students and their families.

Public higher education in the U.S. dates to the 1795 establishment of the University of North Carolina. In 1862, Congress passed the Morrill Act, which gave land to the states to establish colleges "to promote the liberal and practical education of the industrial classes." The 108 so-called land-grant colleges still form the backbone of the U.S. public higher-education system.

Public-college enrollment exploded after World War II and the adoption of the GI Bill. As recently as 1951, more Americans were enrolled in private universities than public ones. Sixty years later, more than 15 million students were enrolled at the nation's 678 public colleges and universities, nearly three times the number attending private ones, according to the Education Department.
The article points to a recent PhD dissertation  by Brian Burnett, a vice chancellor at the University of Colorado-Colorado Springs. Burnett writes in his dissertation that there has been a reversal of roles between the state and students over the past four decades (here in PDF at p. 152; the graph above is mine, and comes from this blog post):
These data clearly illustrate that over the past 40 years, the costs of a college education in Colorado for resident undergraduate students has fundamentally shifted from the majority of the expenses funded by taxpayers to the majority now funded by students. As Colorado State University System‘s Chief Financial Officer Richard Schweigert pointed out, "We are cost shifting to students—that is all we are doing due to the state basically ignoring its 'economic engine‘—higher education." This shift has occurred not only due to increasing costs to provide postsecondary educational services but also due to limits of the state‘s general fund to support student growth and other demands on the state‘s general fund. Such a change has had a profound impact on the equation of public funding and the state policy that was in place in 1980 when the student was expected to pay 25% of the cost of an undergraduate education while the state covered 75%. This equation, for many public institutions in Colorado looks more like the equation in Figure 4.14 than it did in 1980, where in some cases today, students are paying approximately75% of the cost of their education while the state support has declined to just 25% of the cost.
What is happening here in Colorado is not unique. Thurm writes:
State subsidies for these public colleges and universities fell 21%, on a per-student, inflation-adjusted basis between 2000-01 and 2010-11, according to the State Higher Education Executive Officers Association, a national research and advocacy group. Over that same period, tuition at two- and four-year public colleges rose an inflation-adjusted 45% to $4,774 in 2010-11, according to the association. At public four-year colleges this year, tuition averages $8,655, according to the College Board.

But education experts say wrenching decisions on the state level about how to allocate limited public resources are having a very big effect.
The result is a more market-oriented campus, which has both positives and negatives. On the positive side, market incentives help to motivate innovation and performance, with a focus on meeting the demands of students. On the negative side, the push for pure economic value leads to what are arguably pathological responses, such as the emphasis on "cash cows" in the form of international or out-of-state students willing to pay extra to attend a state school.

Thurm explains:
CU has long attracted lots of out-of-state students, who pay higher tuition. Non-Coloradans currently pay $31,559 for tuition, not counting room and board. The higher nonresident fees go "a long way toward keeping the lights on," says Mr. Gleeson.

In 2010, officials persuaded lawmakers to exclude foreign students from the cap on out-of-staters—currently 45% of freshmen—arguing that the foreigners would add more global perspective. But they also covet the additional revenue, which officials estimate at $30 million a year. This year, CU is dispatching recruiters to more than a dozen countries, from Latin America to the Middle East.

Others are pursuing the same strategy. At Purdue University, 17% of undergraduates are from outside the U.S., mostly from China, up from 9% in 2009. At the University of Illinois, 13% of this year's freshmen are foreign students.
None of these various arguments will be news to long-time readers of this blog. If we are smart, then the end result of this wrenching process will be to do away with the two-tiered in-state/out-of-state tuition system -- which is a fiction -- and move to a market-based approach. The state subsidy for in-state students need not go away, as the state government could still cut a check for in-state students and send it to them directly. Further, a levelized tuition model might not make sense for every state university, but it almost certainly does for flagship campuses.

The de facto privatization of state universities is likely to continue. Profound changes are likely to be coming down the road, as business as usual is no longer an option.

16 December 2012

Why Strong Science Assessors Matter

In my latest Bridges column I connect Hurricane Sandy to Mad Cow disease through the repeatedly-learned lesson that effective use of science in decision making depends upon having strong institutions. Here is how it starts:
Last month in Berlin, I participated in the 10th anniversary conference of the German Federal Institute for Risk Assessment – the Bundesinstitut für Risikobewertung (BfR). The BfR is one of a number of European organizations that Catherine Geslain-Lanéelle, executive director of the European Food Safety Authority (EFSA), characterized at the conference as "the children of Mad Cow disease." This group of siblings includes the EFSA, departmental chief scientific advisors in the UK, and others. These organizations, and the conditions under which they were created, remind us that if science is to be well used in policy and politics, then strong institutions are necessary. This is a lesson continuously relearned, most recently in the United States in the aftermath of Hurricane Sandy.
You can read the rest here, and for background on the science and politics of the "hurricane deductible" see this post from last week. No doubt I will have occasion to re-visit this subject later this week.

12 December 2012

Science, Politics and the "Hurricane Deductible"

The idea that coastal property owners should bear some of the risks of the exposure to hurricanes is not particularly controversial. However, implementing policies to align risks with economic incentives can be challenging, and "Hurricane" Sandy provides a vivid case study in the importance of institutions where science meets politics.

Some 18 states implement what is called a "hurricane deductible" as part of insurance policies. While a normal deductible (i.e., the amount the homeowner must pay in the event of a loss, before the insurance kicks in) for property damage might be set at $2,000, the "hurricane deductible" says that if the event causing the loss is a "hurricane" then the deductible is instead set at a much higher level, such as $25,000.

The "hurricane deductible" became important following Sandy because just about one hour before the storm made landfall, the National Hurricane Center re-categorized the storm from a "hurricane" to a "post-tropical cyclone." Because insurance is regulated at the state level, different states have different "triggers" for the application of the "hurricane deductible." Some of these triggers are tied to wind speed, some to the issuance of hurricane warnings by the NHC, some to the storm's categorization, and so on. New York, for example, does not have a single trigger for the state but allows each company to set up a trigger (the mish-mash of approaches can be seen here in PDF).
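
The financial stakes of the classification decision can be sketched in a few lines. The dollar amounts echo the illustrative figures above; note that many real policies express the hurricane deductible as a percentage of the insured value rather than a fixed amount:

```python
def applicable_deductible(hurricane_trigger_met, standard=2_000, hurricane=25_000):
    """Return the deductible that applies to a property claim, depending on
    whether the policy's hurricane trigger (wind speed, NHC warning, storm
    classification, etc.) was met. Amounts are the illustrative figures
    from the text, not from any actual policy."""
    return hurricane if hurricane_trigger_met else standard

# For a hypothetical $60,000 loss, the classification decision shifts
# $23,000 between the homeowner and the insurer:
loss = 60_000
print(loss - applicable_deductible(True))    # 35000 paid by the insurer
print(loss - applicable_deductible(False))   # 58000 paid by the insurer
```

Multiplied across hundreds of thousands of claims, this single branch in the logic is what puts tens of billions of dollars on the NHC's classification.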

For a storm like Sandy the invocation of the "hurricane deductible" is a decision with tens of billions of dollars in consequences, as losses were spread over hundreds of thousands of homes. Either individual homeowners would bear these costs (if the deductibles were invoked) or insurance companies would (if they were not). Given the massive stakes, not surprisingly in the immediate aftermath of the storm politicians were quick to act.

In New Jersey Governor Chris Christie, a Republican, issued an executive order defining Sandy as a post-tropical cyclone, invoking the NHC re-categorization:
In light of the National Weather Service’s categorization of Sandy as a post-tropical storm, it shall be a violation of N.J.A.C. 11:2-42.7 for any insurer to apply a mandatory or optional hurricane deductible to the payment of claims for property damage attributable to Sandy.
US Senator Charles Schumer, a Democrat, sent a letter to NOAA (the parent agency of the NHC) reminding them of the political consequences of their storm categorization. He explained why to a local radio station:
“Today, we’ve sent a letter to NOAA, the weather agency, as well as to the insurance companies that we’re looking over their shoulder. We want NOAA to keep this classified as a tropical storm and to save homeowners in New York and Long Island thousands of dollars and we don’t want the insurance companies to play any games.” 
A few years back, when politics-science issues were more fashionable, there might have been outrage from scientists and other observers at the idea of a US Senator "looking over the shoulder" of a federal science agency and telling it how to make a scientific judgment. But I digress.

Issues associated with the "hurricane deductible" likely played a role in NOAA's immediate reversal in setting up an assessment team to evaluate the agency's performance on Sandy. NOAA initially established an assessment team, to be co-chaired by Mike Smith, a widely respected and accomplished private-sector meteorologist who has also been critical of NOAA at times in the past. Involving an outsider as co-chair made good sense from the standpoint of the credibility of the assessment. However, NOAA terminated the assessment almost as soon as it was created.

In his initial work on the assessment as co-chair, Smith had identified some key questions to investigate:
  • Was there a decision not to call Sandy a "hurricane" regardless of its meteorological characteristics? If this decision was made, was it made Friday (October 26th) or Saturday morning? If so, who made the decision and why?
  • Was this decision the reason hurricane warnings, in spite of a large and dangerous hurricane moving toward the coast, were never issued?
  • Given that an obvious large and powerful hurricane was headed for the U.S. coast, why wasn't that decision reconsidered? For example, Barry Myers, the CEO of AccuWeather, urged (on the AccuWeather.com website) the immediate issuance of hurricane warnings about eight hours before landfall. Others also urged the lack of hurricane warnings to be reconsidered.
Immediately after the termination of the assessment, Rep. Paul Broun (R-GA), Chairman of the House Subcommittee on Investigations and Oversight, sent NOAA a letter asking for a range of specific details about the curious decision (letter here in PDF, the replies from NOAA are due back to Rep. Broun this Friday).
For its part, NOAA has since reconstituted a team, comprised only of government employees and chaired by a non-meteorologist, to investigate its performance on Sandy. Smith was purged as co-chair, and so too were any explicit questions about Sandy's meteorological status at landfall, prompting Smith to write yesterday:
They are ignoring the elephant in the room. Was Sandy a hurricane at landfall?
All of this still matters because the NHC still has not rendered a final determination on Sandy's actual status at landfall. Such determinations are always re-evaluated in the months after a hurricane season, when there is more time and a break from the pressures of an operational forecasting environment, as the NHC explained when it announced the re-categorization just before Sandy's landfall.
While I have every confidence in the scientists at NHC, can you imagine the consequences if they were to re-categorize Sandy as a hurricane at landfall? The implications would be enormous and the political fallout immense.

We have here a situation where state and federal policy makers have already made judgments about what the science should say (for a longer list of state actions see this), yet the actual science is not yet fully in. This situation illustrates that the NHC is not well structured to play a key role in regulatory-type decision making. This is not the fault of NOAA or the NHC, but of the policy makers who put the NHC in such a position. Accompanying the questions about Sandy's status at landfall is the messy prospect of NOAA appearing to cancel a partially independent assessment for fear of the questions being asked, and replacing it with a plain-vanilla committee with a plain-vanilla mandate.

I have no opinion on (or much interest in, actually) the substantive judgment about whether Sandy should have triggered "hurricane deductibles" or not -- I have heard arguments from experts on both sides. The larger point I am focused on is the process through which hurricane risks are translated into homeowner incentives, as a case study in the institutional design of processes meant to bring science into policy making. Unfortunately, it appears that in this case weak policy design at the science-policy interface means that the role of science is far smaller than might have been hoped, with political considerations instead driving the process. Worse still, NOAA faces the prospect of a scandal of some sort based on its efforts to escape the political battles. All around, not good.

10 December 2012

Salad for Ethiopia: How Climate Policy Keeps Poor People Poor

The dirty secret of climate policy is that it works against the interests of poor people around the world who want to become rich like you and me. I have discussed how this works in terms of the definitions used in international assessments: one way to create scenarios that have both carbon dioxide stabilized at 450 ppm and a dramatic expansion of energy access around the world is to define "energy access" at a very low level -- such as 2% of the amount that Americans consume in their households every year.

In this manner, international officials can make statements like the following:
Bringing electricity to everyone by 2030 would require electricity generation in 2030 to be only 3% higher than generation in our Reference Scenario . . . the increase in energy-related global CO2 emissions would be a mere 0.9% by 2030.
Such claims sound great -- energy for everyone, hardly any more carbon dioxide emissions. Of course, behind the numbers lies the ugly reality of poor people staying mostly poor, with very little energy access -- at least not of the kind that we have available.

The doublespeak is bad enough, but as Todd Moss of the Center for Global Development explains in a very hard-hitting post, such language and thinking gets translated into actual policies:
Imagine the United States sending low-calorie food aid to Ethiopia in response to the global obesity epidemic. Absurd, right? Even if global waistline trends are worrisome, Ethiopians didn't create the problem. Such a policy would be futile since it would have no noticeable impact on the global aggregate.

Worse, while obesity may be a very real concern, Ethiopians are understandably more focused on undernourishment. The United States should aim instead to increase caloric intake in that part of the world. To punish those we should be helping when we can't even tackle the obesity problem at home makes the policy not only misguided, but also morally dubious.

Sadly, that is pretty much what the United States does on energy. In response to rising global carbon dioxide emissions, the U.S. government put restrictions on the Overseas Private Investment Corporation, a federal agency that is a principal tool for promoting investment in poor countries. A recent rule, added in response to a lawsuit brought by Friends of the Earth and Greenpeace, imposes blind caps on the total CO2 emissions in OPIC's portfolio, which ends up barring the agency from nearly all non-renewable electricity projects.

Even if global carbon emissions are worrisome, it seems misplaced to ask people in poor countries to bear the costs of a problem they didn't create. Ethiopians emit less than 1 percent of what Americans emit on a per capita basis, and Americans still get most of their electricity from non-renewable coal and natural gas. The scale of energy poverty is such that sizable populations will still require old-school grid power.

OPIC's carbon cap is also largely pointless since it could have no conceivable impact on global emissions. While climate change is a very real concern, Africans are understandably more focused on the problem that seven in ten people living on the continent have no electricity at all. Because energy poverty is harmful to health, education, and prosperity, the United States should aim to increase access to electricity in Africa. To punish those we should be helping when we can't even implement a carbon cap at home makes the policy not only misguided, but also morally dubious.
Moss proposes lifting the OPIC carbon cap for the poorest countries, pointing out that the US has no such cap at home, yet refuses to extend to poor countries the level of energy access that we enjoy, as a means of keeping their energy sources from emitting carbon dioxide. "Morally dubious" seems a generous term.

We have decided -- not explicitly -- that we value carbon emissions more than energy access. Such choices are made all the time in democracies, but this one is made largely out of sight. The frustrating irony, of course, is that if we were to truly take on the challenge of global energy access, it might provide one way to stimulate much more progress on energy innovation and move us beyond the dead end that is current climate policy, which is not reducing emissions yet is keeping poor people poor.

An explicit debate over energy access consequences of climate policy is one worth having. It would reveal the true scale of the global energy challenge (more on that soon), but also would bring out into the open the ugly, morally dubious reality that the policies we have chosen for dealing with carbon dioxide come at the expense of poor people around the world.

05 December 2012

Global Tropical Cyclone Landfalls 2012

Earlier this year, Jessica Weinkle, Ryan Maue and I published a paper in the Journal of Climate on trends in global landfalling hurricanes (a PDF can be found here as well). At the global level the data is good from 1970. Our analysis covered through 2010. With 2012 almost in the books I recently asked Ryan if he could provide an initial tabulation of the 2012 data (note that the data could be revised from these initial estimates, and 2012 is still not quite over). The tracks of the fourteen landfalling storms of 2012 can be seen in the graphic at the top of this post.

Below is the dataset from 1970 first presented in our paper, updated to include 2011 and 2012. In short, 2012 is just about an average year: 14 total landfalls (the average is 15.4), of which 4 were major (initially, though this could change; the average is 4.6).
Here are some updated factoids summarized from the data:
  • Over 1970 to 2012 the globe averaged about 15 TC landfalls per year (Category 1-5)
  • Of those 15, about 5 are intense (Category 3, 4 or 5) 
  • 1971 had the most global landfalls with 30, far exceeding the second place, 25 in 1996
  • 1978 had the fewest with 7
  • 2011 tied for second place for the fewest global landfalls with 10 (and 3 were intense, tying 1973, 1981 and 2002)
  • Five years share the most intense TC landfalls with 9, most recently 2008.
  • 1981 had the fewest intense TC landfalls with zero
  • The US is currently in the midst of the longest streak ever recorded without an intense hurricane landfall 
  • The past 4 years have seen 12 major landfalling hurricanes, very low but not unprecedented -- 1984-1987 had just 11. The most is 35 (2005-2008).
  • The past 4 years have seen 51 total landfalling hurricanes, on the low end -- the least is 41 (1978-1981) and the most is 80 (four periods, most recently 2004-2007).
  • There have been frequent four-year periods with more than 25 landfalling major hurricanes -- more than double what has been observed over the past 4 years. 
Anyone who'd like to argue that the world is experiencing a "new normal" with respect to tropical cyclones is simply mistaken. Over the past 4 years, the world is actually in the midst of a very low period in tropical cyclone landfalls -- at least as measured over the past 43 years.
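Factoids like those above can be tallied mechanically from a table of annual landfall counts. A minimal sketch follows; the counts used here are illustrative placeholders, not the actual dataset from our paper:

```python
# Sketch of tallying landfall factoids from annual counts.
# NOTE: these counts are ILLUSTRATIVE PLACEHOLDERS, not the real
# dataset from Weinkle, Maue and Pielke (2012).
counts = {1970: 18, 1971: 30, 1972: 12, 1973: 14}  # year -> total landfalls

mean_landfalls = sum(counts.values()) / len(counts)   # long-term average
busiest = max(counts, key=counts.get)                  # year with most landfalls
quietest = min(counts, key=counts.get)                 # year with fewest

# Rolling 4-year totals, as used for the "past 4 years" comparisons:
years = sorted(counts)
rolling = {y: sum(counts[x] for x in years if y - 3 <= x <= y)
           for y in years[3:]}
```

The same few lines, run over the full 1970-2012 table, reproduce the averages, extremes, and four-year comparisons listed above.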

There is even evidence in our paper (see our Figure 2) that the period before 1970 saw more intense hurricane landfalls than the period since. Older data from the North Atlantic and Western North Pacific (which together represent 64% of all global intense landfalling hurricanes 1970-2010 and 69% of all hurricanes) indicate that intense hurricanes made landfall in these two basins at a 40% higher rate during 1950-1969 than during 1970-2010. There were 9 intense landfalls in 1964 and 1965 in just these two basins, which equals the post-1970 global record for all basins.

What we can glean from this data is that in terms of US and global damage, things will get much worse when the statistics return to the "old normal" (and this is independent of whether you think such a return is due to natural variability, human-caused climate change or the prophesies of Nostradamus). It will happen -- and you can take that to the bank.

For those interested in the details, here from Ryan are the preliminary details for 2012 to date:
** with the appropriate caveats about operational TC tracks subject to revision, which is expected in the post-season analyses.

My list of 2012 storms includes a total of 14 Hurricane Strength Landfalls of which 4 were major. 

Western Pacific:  8 total Typhoon landfalls of which 3 were Major(+)
+Vicente (09W)
Saola (10W)
Damrey (11W)
Kai-Tak (14W)
+Tembin (15W)
Sanba (17W)
Son-Tinh (24W)
+Bopha (26W)

Eastern Pacific:  1 minor hurricane landfall
Carlotta (03E)

North Atlantic:  3 minor hurricane landfalls:  
Ernesto (05L)
Isaac (09L)
Sandy (18L)*  
* Sandy was rapidly deepening prior to landfall in Cuba at 95 knots, 1-knot below major threshold.  Thus, strong possibility it could be upgraded in final best-tracks.

Southern Indian Ocean:  1 major and 1 minor Cyclone landfall
Giovanna (12S) -- major over Madagascar
Lua (17S) -- minor over Australia, was 95 knots, so could be an upgrade.
Lots more great data and graphs, including the one below on total global tropical cyclone activity (not just storms that make landfall), are available at his website here.

04 December 2012

An Alternative View on Sandy's Lessons: A Guest Post

Editor's note: This is a guest post by Stéphane Hallegatte, Senior Economist, Sustainable Development Network, Office of the Chief Economist, The World Bank, offering a response to my recent WSJ op-ed on "Hurricane" Sandy. Comment welcomed -- Thanks Stéphane!

The storm Sandy is not unprecedented. Similar events have occurred in the past, and this storm is not a consequence of climate change. The Intergovernmental Panel on Climate Change stresses that it is impossible to make a causal link between one event and climate change (1). Roger Pielke Jr., in an op-ed in the Wall Street Journal (2), pushes the argument further and claims that Sandy should not influence the discussion on climate change.

His argument is convincing: he has shown that population growth, economic growth, and migration toward risky areas explain the trend in disaster losses in the US (3). Because current land regulations allow new development in flood-prone areas, which has led to the recent increase in damages, he concludes that we should focus on them, not on climate change.

The facts are correct, but I would like to discuss the policy conclusion. In the US, the leading death risk factor is smoking, with more than 400,000 annual deaths attributed to it (4). But does that mean nothing should be done about other factors, such as unbalanced diets? Of course not. Doing one thing wrong (land regulation or smoking) does not mean you must do everything wrong (energy policy or eating habits).

First, risk factors play through different channels. Smoking and eating too much have different effects and increase death probabilities in different ways, exactly as climate change and inappropriate land regulations increase disaster losses through different channels. The former increases the likelihood of high water levels; the latter increases the consequences of those high water levels.

Second, risk factors do not overlap completely. Smoking does not only increase death probability; it also reduces physical performance. In the same way, climate change does not only increase disaster risk; it also threatens biodiversity, food production, and the landscapes we know. Inappropriate land regulations do not only increase damages from hurricanes; they also increase dependence on cars and foreign oil, and raise the cost of providing infrastructure services. There are thus good reasons to improve both climate change policies and land regulations.

And finally, risk factors are not independent. Smoking may be more dangerous for an overweight person. In the same way, bad land regulations are more costly if climate change is not mitigated, and climate change will be much more dangerous if land regulations are not fixed. So the different drivers interact closely, and need to be considered together in policy design.

In the same way that a healthy lifestyle includes both a balanced diet and not smoking, the best policy to reduce disaster losses will use both land regulation (and hard defenses) and climate policies. There is, moreover, no reason to search for the dominant driver of future disaster losses (climate change or bad land regulation?). Indeed, the balance between actions on the two drivers will depend not on which one is responsible for most of the increase in damages, but on their relative cost-effectiveness in reducing future losses.

The timescales of these two options are different: emission reductions need a few decades to influence disaster risk, but then the effect lasts for centuries. Hard protection and land regulation can be quicker, but there is only so much they can do, and they require maintenance forever. In the absence of additional emission reductions to limit global warming, the most intense hurricanes are likely to become more frequent, and sea level will keep rising over the 21st century, possibly reaching more than one meter over the 22nd and 23rd century (1). What kind of land regulations and coastal defenses will New York City need by then?

There are limits to what coastal defenses and land regulations can achieve. Some like the idea that we build in risky areas because of “wrong incentives”, namely flood insurance subsidies through the National Flood Insurance Program. According to them, removing these incentives would solve the problem.

“Wrong incentives” exist and play a role – this is obvious – but unfortunately they alone cannot explain the current trend in risk exposure. Flood losses are on the rise in almost all countries, including those that have no flood insurance system (5). And while insurance claims help pay for rebuilding, they cannot compensate for all the losses. Getting flooded is a tragedy, with or without insurance.

People move toward risky areas because that is where the better jobs and higher incomes are. And those are there because growing sectors are concentrated in coastal areas – driven by harbors and global trade – and in cities, which are usually located next to rivers and coasts and thus in flood-prone areas. Would financial sector professionals quit their high-wage jobs in Manhattan in the absence of flood insurance? Would their employers move their headquarters to the Great Plains? Would the beach club owner in New Jersey move her business two miles inland?

Better land regulations may be able to decrease flood exposure, but not in a significant manner – absent a large-scale buy-out and demolition program that appears extremely unlikely (6).

Flood exposure will not disappear anytime soon, even if land regulations are improved and bad incentives are removed. So even if climate change is not the dominant driver of hurricane and flood losses in the future, it may well be an efficient lever to reduce future damages, especially over the long term. Better risk management is badly needed in most of the world. But it would be absurd to use only one tool when we can use two.


(1) Intergovernmental Panel on Climate Change, Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX), 2011.
(2) Roger Pielke: Hurricanes and Human Choice. The Wall Street Journal. October 31, 2012.
(3) Vranes, K. and Pielke, R. (2009). ”Normalized Earthquake Damage and Fatalities in the United States: 1900–2005.” Nat. Hazards Rev., 10(3), 84–101. doi:10.1061/(ASCE)1527-6988(2009)10:3(84) (PDF)
(4) Danaei G, Ding EL, Mozaffarian D, Taylor B, Rehm J, et al. (2009) The Preventable Causes of Death in the United States: Comparative Risk Assessment of Dietary, Lifestyle, and Metabolic Risk Factors. PLoS Med 6(4): e1000058. doi:10.1371/journal.pmed.1000058.
(5) Hallegatte, S., 2011. How economic growth and rational decisions can make disaster losses grow faster than wealth, Policy Research Working Paper 5617, The World Bank; and the Vox-Eu column,
(6) The French experience after the storm Xynthia on its Atlantic coast is not encouraging – the retreat strategy (more than 700 houses were supposed not to be rebuilt) was largely disrupted by political issues. If you read French, see Przyluski and Hallegatte (2012), Gestion des risques naturels, Quae Editions.

03 December 2012

Record US Intense Hurricane Drought Continues

The graph above provides an update to data on the remarkable ongoing US "intense hurricane drought." When the Atlantic hurricane season starts next June 1, it will have been 2,777 days since the last time an intense (that is a Category 3, 4 or 5) hurricane made landfall along the US coast (Wilma in 2005). Such a prolonged period without an intense hurricane landfall has not been observed since 1900.
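The 2,777-day figure can be checked directly; a quick sketch, taking Wilma's 24 October 2005 US landfall as the start of the streak:

```python
from datetime import date

# Hurricane Wilma's US landfall (24 October 2005) was the last intense
# (Category 3+) hurricane landfall; the next Atlantic season opens 1 June 2013.
wilma_landfall = date(2005, 10, 24)
season_start = date(2013, 6, 1)

drought_days = (season_start - wilma_landfall).days
print(drought_days)  # 2777
```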

Some thoughts:

Even with hurricane Sandy and its wide impacts, things will indeed get worse. The US coastlines as a whole have actually been very lucky with respect to hurricanes since 2005, with aggregate damage (even including aggressive estimates for Sandy) over 2006-2012 falling at or below the historical average. Sandy made landfall as a post-tropical cyclone of hurricane strength -- a phenomenon that has been documented only 3 times since 1900 (1904, 1924, 1925 -- later this week I'll have a post on Sandy damage estimates).

The long-term intense hurricane drought means that a mere "regression to the mean" will see more hurricane landfalls and considerably higher damage in the years to come. The fashionable talk these days of a "new normal" is of course utter bullsh*t. Just wait until we return to the "old normal" -- I know that it may be hard to believe, but both hurricane damage and climate hype are set to increase dramatically in the years to come.

28 November 2012

Against "Modern Energy Access"

Access to energy is one of the big global issues that has hovered around the fringes of international policy discussions such as the Millennium Development Goals or climate policy, but which has been getting more attention in recent years. In my frequent lectures on climate policy I point out that 1.3 billion people worldwide lack any access to electricity, and 2.6 billion more cook with wood, charcoal, tree leaves, crop residues and animal waste (an additional 400 million cook with coal).

The "success" scenarios of climate advocates hoping to power the world with carbon-free energy almost always leave a billion or more people in the dark and several billion cooking with dirty fuels. Sometimes, magic is invoked to suggest that "electricity can be brought to everyone" without appreciably increasing carbon emissions. Of course, if we could bring electricity to the 1.3 billion without access while having no effect on emissions, then we could probably do the same for 6 billion others.

There is a devil in the details that helps keep the energy poor out of view while we debate issues important to rich people, like climate change: the very definition of "energy access." The International Energy Agency explains some of the difficulties in defining energy access and gives its definition as follows:
There is no single internationally-accepted and internationally-adopted definition of modern energy access. For our energy access projections to 2030, the World Energy Outlook (WEO) defines modern energy access as “a household having reliable and affordable access to clean cooking facilities, a first connection to electricity and then an increasing level of electricity consumption over time to reach the regional average”. By defining it at the household level, it is recognised that some other categories are excluded, such as electricity access to businesses and public buildings that are crucial to economic and social development, i.e. schools and hospitals.

Access to electricity involves more than a first supply connection to the household; our definition of access also involves consumption of a specified minimum level of electricity, the amount varies based on whether the household is in a rural or an urban area. The initial threshold level of electricity consumption for rural households is assumed to be 250 kilowatt-hours (kWh) per year and for urban households it is 500 kWh per year. The higher consumption assumed in urban areas reflects specific urban consumption patterns. Both are calculated based on an assumption of five people per household. In rural areas, this level of consumption could, for example, provide for the use of a floor fan, a mobile telephone and two compact fluorescent light bulbs for about five hours per day. In urban areas, consumption might also include an efficient refrigerator, a second mobile telephone per household and another appliance, such as a small television or a computer.
I have found when you start talking in terms of "kilowatt-hours per year" people's eyes glaze over. And when I am lecturing about "energy access" students might look up from their smart phone, tablet or laptop to register a look of understanding: "Energy access -- yeah, I have that, gotcha."

Actually I want to tell them, you have wayyyyy more than that. To better explain this issue I have made up the following graph.
When "energy access" is used by organizations like the IEA, they mean something very different than what you, I or my students might take the term to mean in common parlance. (And note, this is no critique of the IEA, they have done excellent work on energy access issues.) The graph above provides a comparison of the 500 kWh per year household threshold for "energy access" used by the IEA to a comparable number for the United States (both numbers are expressed in per capita terms, so 100 kWh per person from IEA and data on US household electricity consumption here and people per household here).

A goal to secure for 1.3 billion people access to 2.2% of the electricity that the average American uses might be characterized as an initial step toward more ambitious goals, but it is not a stopping point (and again, the IEA recognizes that energy access is a process, but this gets lost in broader discussions).

We do not label those who live on $1 per day as having "economic access" -- rather, they are desperately poor. Everyone understands that $1 a day is not much. Very few people get that 100 kWh per year is a pitifully small amount of energy. Therefore, I suggest that we start talking in terms of "energy poverty," measured as a percentage of the average American (or European, Japanese, Australian, or whatever energy-rich baseline you prefer; the results will be qualitatively the same). To use the IEA numbers, one would be in "energy poverty" with access to less than 2% of the energy access enjoyed by those in the rich world.
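The arithmetic behind the roughly 2% figure is simple. A minimal sketch follows; the US numbers below are rough round-figure assumptions, so the result is approximate rather than the exact published value:

```python
# Sketch of the ~2% "energy poverty" arithmetic.
# ASSUMPTIONS: ~11,500 kWh per US household per year and ~2.6 people
# per household are rough illustrative figures, not official statistics.
iea_threshold_per_capita = 500 / 5   # IEA urban: 500 kWh/household, 5 people -> 100 kWh

us_household_kwh = 11_500            # assumed US annual household consumption
us_people_per_household = 2.6        # assumed average US household size
us_per_capita = us_household_kwh / us_people_per_household  # ~4,400 kWh/person

energy_poverty_pct = 100 * iea_threshold_per_capita / us_per_capita
print(energy_poverty_pct)            # a bit over 2 percent of the US figure
```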

It is bad enough that the energy poor are largely ignored in our rich world debates over issues like climate change. It is perhaps even worse that our "success stories" often mean creating scenarios where the energy poor attain just 2% of the access to energy that we enjoy on a daily basis. The frustrating irony of course is that the issues that rich world environmentalists most seem to care about might be best addressed by putting energy poverty first, but that is a subject for another time.

27 November 2012

US Hurricane Intensity 1900-2012

The figure above comes courtesy of Chris Landsea of the US National Hurricane Center. It shows the annual intensity of US landfalling hurricanes from 1900 to 2012. The figure updates a graph first published in Nature in 2005 (Figure 2 here in PDF, details described there).

The red bars show the annual data. The grey straight line is the linear trend (no trend) and the black line shows the five-year average. The most recent five years have the lowest landfalling hurricane intensity of any five-year period back to 1900. By contrast 2004 and 2005 saw the most intense seasons of landfalling storms.

The data shown above include both hurricanes and post-tropical cyclones that made landfall at hurricane strength (i.e., storms like Sandy). In addition to Sandy, 3 other such storms have made landfall, in 1904, 1924 and 1925. Adding these storms does not significantly change the graph.

26 November 2012

Inequity Among Nations in the Climate Negotiations: A Guest Post

Editor's Note: This is a guest post by Heike Schroeder, University of East Anglia, and Max Boykoff, University of Colorado, who along with Laura Spiers of PwC have co-authored a new piece in Nature Climate Change on the international climate negotiations (available here in PDF). Please feel free to comment on their paper or the climate negotiations more generally, as this is likely to be the only post here on them. Thanks!

Another round of climate negotiations is starting today. On the agenda are two main objectives: implementing a second commitment period under the Kyoto Protocol, to start right away – on 1 January 2013 – and making progress toward a new climate agreement to be finalised by 2015. Issues to be discussed include, among others, adaptation finance, strengthening mitigation efforts by developed countries and reducing deforestation.

While it may be viewed as good news that the Kyoto Protocol is moving into a new phase, only the EU countries, Australia and likely Norway and Switzerland will take part in this second commitment period, covering only some 10-12 percent of global emissions. Thus, Kyoto raises the age-old conundrum: focus on a few willing countries to lead, even if their efforts are wiped out by massive emission rises elsewhere, or wait until a critical mass of countries is ready to mitigate seriously.

Our study in the current issue of Nature Climate Change (PDF) looks into embedded questions of who represents the interests of a global populace, by way of considerations regarding who attends and participates in climate negotiations. Based on our results, we argue that a restructuring of UN rules and practices around state representation at UN climate conferences is urgently needed. Current practice, which gives countries a free hand to send as many delegates as they wish, representing mainly vested national interests, results in serious differences in negotiating power between rich and poor countries. Overall participation increased from 757 individuals representing 170 countries at the first Conference of the Parties (COP) in 1995 in Berlin to an all-time high of 10,591 individuals from 194 countries at COP-15 in 2009 in Copenhagen (a 14-fold increase).

Because there are so many parallel negotiating tracks and so much technical detail, small delegations cannot participate in every session while larger delegations can. We also find significant differences in delegation composition across countries. Moving forward, we recommend that countries consider capping national delegations at a level that allows broad representation across government departments and sectors of society while maintaining a manageable overall size. We also argue for a stronger role for constituencies in the UNFCCC (e.g. business, environmental non-governmental organizations, local government, indigenous peoples, youth and so on). Finally, formal and informal arenas – negotiations and side events on specific topics at COPs, for example adaptation finance or addressing drivers of deforestation – could be joined up in innovative ways to facilitate exchange of ideas and foster dialogue among various stakeholders.

21 November 2012

Science Academies and the L'Aquila Earthquake Trial

The science academies of the US and UK have responded very differently than several of their European counterparts to the recent verdict in an Italian court against government scientists involved in the L'Aquila affair. The French, German and Italian academies have adopted a much more sophisticated -- and ultimately more constructive -- approach to understanding the implications of the lawsuit for the practice of science advice in government. This contrasts with the ill-informed snap judgement offered by the US and UK academies. This post provides some details on the different approaches.

The US National Academy of Sciences and the UK Royal Society were quick to criticize the Italian court verdict in somewhat hyperbolic terms. Here is the statement in full:
Oct. 25, 2012
Joint Statement Regarding the Recent Conviction of Italian Earthquake Scientists
by Ralph J. Cicerone, President, U.S. National Academy of Sciences, and Sir Paul Nurse, President, The Royal Society (U.K.)

The case of six Italian scientists sentenced to be jailed for failing to warn of the L'Aquila earthquake in Italy in 2009 highlights the difficult task facing scientists in dealing with risk communication and uncertainty.

We deal with risks and uncertainty all the time in our daily lives. Weather forecasts do not come with guarantees and despite the death tolls on our roads we continue to use bikes, cars, and buses. We have also long built our homes and workplaces in areas known to have a history of earthquakes, floods, or volcanic activity.

Much as society and governments would like science to provide simple, clear-cut answers to the problems that we face, it is not always possible. Scientists can, however, gather all the available evidence and offer an analysis of the evidence in light of what they do know. The sensible course is to turn to expert scientists who can provide evidence and advice to the best of their knowledge. They will sometimes be wrong, but we must not allow the desire for perfection to be the enemy of good.

That is why we must protest the verdict in Italy. If it becomes a precedent in law, it could lead to a situation in which scientists will be afraid to give expert opinion for fear of prosecution or reprisal. Much government policy and many societal choices rely on good scientific advice and so we must cultivate an environment that allows scientists to contribute what they reasonably can, without being held responsible for forecasts or judgments that they cannot make with confidence.
As I explained two days before the statement above, the idea that the scientists were being punished for a failure to predict did not reflect the actual complexities of the case.

Fortunately, the Italian, German and French science academies have taken a more measured look at this situation. The Italian Academy has set up a commission to examine the issues raised by the L'Aquila lawsuit, and the French and German academies offered the following statement in support of the Italian commission.

Here is the full statement from the French and German academies, issued last week:
Statement on the handling of risk situations by scientists

In late October, Italian scientists were sentenced for supposedly having failed to warn sufficiently of the severe 2009 L'Aquila earthquake. On the occasion of this verdict, the German National Academy of Sciences Leopoldina and the French Académie des sciences have published a statement concerning the handling of risk situations by scientists. We forward the statement in its exact wording.

Joint Statement of the German National Academy of Sciences Leopoldina and the French Académie des sciences, 12 November 2012

On the science-based communication of risks following the recent sentencing of Italian scientists

On 22 October 2012, a court in L'Aquila sentenced seven members of the Italian National Commission for the Forecast and Prevention of Major Risks to prison terms of several years. The verdict has sparked a worldwide discussion on the legal aspects of the accountability of scientists who advise government institutions. Scientists must participate in this discussion actively and as objectively as possible. The German National Academy of Sciences Leopoldina and the French Académie des sciences therefore expressly support the Accademia Nazionale dei Lincei, the Italian National Academy of Sciences, in its endeavours to set up an independent expert commission of geologists and legal experts. The role of this commission will be to examine the scientific and legal aspects of the L'Aquila verdict.

Scientific research is substantially motivated by the aim of providing greater protection against natural disasters. In the case of uncontrollable events such as cyclones, earthquakes and volcanic eruptions, scientific forecasting methods are becoming increasingly important. Scientists and representatives of state institutions must work together with mutual trust in order to inform the public responsibly, and on the basis of reliable data, about possible risks.

In their risk forecasts, scientists assess the probabilities of future events. Probability-based statements are per se fraught with uncertainty. At all times, scientists must communicate this fundamental fact as clearly as possible. This is no easy task when it involves communicating with public-sector decision-makers and concerned members of the public who expect clear forecasts. However, scientists cannot – and should not – absolve themselves of this responsibility.

It is very unfortunate when the trust between scientists, state institutions and the affected members of the public is profoundly damaged. This occurred as a result of the devastating earthquake in L'Aquila on 6 April 2009.

It is thus in the interests of all those involved that the events are reconstructed comprehensively, precisely and objectively. Only in this way is it possible to evaluate on a reliable basis whether the persons involved performed their duties appropriately in the situation in question.

The scientific community must also take an active part in the necessary examination process from the start. The decision of the Accademia Nazionale dei Lincei to set up an independent expert commission to examine the L'Aquila verdict is a clear and decisive signal in this regard.
It is not too late for the National Academy of Sciences and Royal Society to join the German and French academies in offering support for the Italian commission, and to correct their earlier misinterpretation of the L'Aquila lawsuit. There are difficult and complex issues involved in this case, and scientists everywhere will benefit from the lessons that can be drawn from it.

20 November 2012

Anne Glover on EU Science Policy

Today, I had the pleasure to meet Anne Glover, Chief Scientific Advisor to the European Union, in Berlin at an interesting science policy workshop organized by the German Federal Institute for Risk Assessment. Like just about every science advisor to governments that I have met, she is an impressive individual. 

Here are a few comments that she made in oral testimony to Parliament in the UK a few weeks ago:
I started as CSA and was the first person to take up that post in the European Commission in January of this year. I will finish at the end of 2014; so I have three years. I will start off in a slightly light-hearted way. I would say that in the first week or two at the European Commission I set myself the target that at the end of two weeks I would understand how the Commission worked. I now realise that if I can understand part of it by the end of 2014 I’ll be very lucky. There is a lot involved in understanding procedure and how the Commission and Parliament works, and that, in itself, has an impact on what I hope to achieve.

The one single thing that I think would be very important to achieve is how people regard evidence and policy making. For me, that is absolutely central. I would like to develop that a little bit more. From my point of view, science has an obligation to generate the knowledge and the evidence that can be used in policy making. That should be the fundamental platform on which policy is built. That is just as appropriate for every member state as it is for the European Commission.

At the moment, although the policy making process in the European Commission is very robust (if I look at how it is structured, how evidence is gathered and how impact is assessed, it is very impressive), when it gets to the stage where individual member states look at it and Parliament addresses it, the evidence is often unpicked and bits of it are removed in order to find consensus around a particular policy. Although that is part of the democratic process and so I think and expect that that would happen, there is not a great deal of transparency around why the evidence is not being followed.

At the end of 2014 I would like there to be an understanding that, if the evidence is not adhered to in policy making, there would be a statement to say that we accept the evidence, that it is robust and that the evidence is true, but for various reasons we are reducing our reliance on evidence; and that could be social, economic, ethical or whatever. We need that transparency and also accountability so that, if people vote against something where clearly the evidence supports it, there should be a degree of accountability there, and then, for me, we would be in a much better place. At the moment, I think, sometimes evidence is disregarded in policy and, quite rightly, citizens would feel that there is something wrong with the evidence then, and that is not the case in many instances. For me, that is a very important thing.

The second thing would be to try and raise more awareness across Europe about just how impressive the knowledge is that we generate in Europe. In my mind it is really second to none. If you look at the impact of the knowledge that we generate, the infrastructures that we have and the things that we can do as a European Union that no individual member state or indeed any other nation outside Europe could deliver (I am thinking there of things such as the Large Hadron Collider at CERN or the European Fusion for Energy project, for example, with the European Space Agency), they are all examples of where Europe absolutely excels. I would feel that we were in a much better position if citizens understood that and also could appreciate that science is culture. It is not accessible enough and we don't celebrate it enough. I would like every one of us to be less modest about our achievement in science, engineering and technology in Europe because it is one thing we can truly shout about, claim we are the best and actually be the best.
My views on the strengths and limitations of any science advisor to governments can be found here.