Rome Didn't Fall in A Day.









Objective Truth Exists and is Accessible to Everyone.

All Human Problems can be Solved with Enough Knowledge, Wealth, Social Cooperation and Time.


Photo: Rusty Peak, Anchorage, Alaska



Thursday, December 6, 2012

Job Recovery after the Great Recession: It's Different This Time


On the following unemployment chart, it appears that the 2008 recession was no different than earlier recessions.  There is the same saw-toothed pattern on the unemployment chart as seen in earlier recessions.   But a closer look at the data reveals some troubling issues with the recovery.    It is indeed different, this time.

In terms of job recovery and GDP growth, the “Great Recession” of 2008 was deeper and longer than any other recession since World War II.    Recovery from the 2008 recession continues to be weak, and is unlikely to restore either employment or GDP to pre-recession trends before the next recession.

The rate of job recovery after recent recessions has been weaker than following earlier recessions.   The weakness is seen clearly in three recessions since 1990, and may indicate structural changes in the economy.   A closer look shows that the recent weakness is part of a longer-term trend; job recovery following recessions has been declining since World War II.
-----
Let’s look at some graphs.

Employment Recovery from the 2008 Recession

The most striking thing about the current recovery is that employment has not recovered to pre-recession levels, even three and one-half years after the official end of the recession.    Another interesting observation from this chart is the progressive change in the shape of the recovery from the 1981 recession to the 2008 recession.   Employment recovery from each successive recession is slower, as indicated by flatter curves following the bottom of the recession.
Here is another chart showing the depth and duration of the 2008 employment recession.   Job losses and recovery are measured in percent of peak employment prior to the recession.    Recoveries from the 1990, 2001, and current recessions are flatter and broader than following earlier recessions.
A time-series chart also shows the depth and duration of employment recessions since 1959.  Recent recessions are broader, showing a slower recovery of jobs than after earlier recessions.
Long-term unemployment is dramatically higher than at any time since World War II. 
Not surprisingly, the quality of jobs has also deteriorated, with higher numbers of workers accepting part-time rather than full-time employment.

GDP

Persistent unemployment has been a drag on GDP.    Although GDP is now increasing at about the same rate as before the recession, there is a gap of about $1 trillion between actual GDP and potential GDP, as calculated by the Congressional Budget Office. 

 A graph presented by the Washington Post shows the growth rates required to restore GDP to the previous trend.  At growth rates of only 2%, the gap will not close.   Bill Gross, recognized as one of the brightest financial experts of our times, recently stated that he expects that 2% growth and persistent unemployment are the “New Normal” for the United States economy.
The entire WP slide show is worth seeing:  http://www.washingtonpost.com/wp-srv/business/the-output-gap/index.html

Structural Change in the Economy

We’ve seen how job recovery after recent recessions has been weaker than following previous recessions, indicated by flatter curves in figures 1 & 2.    Let’s look at another presentation of figure 2, centering the curves on the bottom of the job recession.  
 All recessions prior to 1981 had full job recovery in less than 11 months; the latest three recessions have flatter recovery profiles, and require much longer to reach full recovery.    If we calculate the rate of job recovery (slope of the positive line) from the chart above, we see a long-term trend.    The rate of job recovery following recessions has been in a secular decline since World War II.  
 This trend would seem to indicate a progressive structural change in the economy since World War II.  Jobs which disappear during recessions are becoming harder to replace.  The manufacturing sector, the workhorse of the American economy, is shrinking.  Manufacturing jobs are disappearing as work is outsourced overseas and American factories are increasingly automated.  New jobs are increasingly sophisticated, and require time for workers to acquire specialized training and skills, leading to the progressively slower job recovery we have seen following recessions.
The Hamilton Project created a useful interactive graphic showing the job growth and time required to return the country to full employment.   The graphic allows the viewer to choose a rate of job growth, and see the time required to recover the jobs lost in the recession.   Data from the most recent Dept. of Labor reports show that employment growth averaged 153,000 jobs/month in 2011, and 157,000 jobs/month so far in 2012.   If we assume a long-term trend of 155,000 jobs/month, and place this figure in the Hamilton Project calculator, we see that the jobs gap will not close before the year 2025.  I recommend trying a few scenarios on this interactive calculator (http://www.hamiltonproject.org/jobs_gap/).
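Here is a rough Python sketch of the same arithmetic.  The job growth rate comes from the figures above; the size of the jobs gap and the rate of labor-force growth below are illustrative assumptions of mine, not figures from this post.

```python
# Rough sketch of the "jobs gap" arithmetic behind the Hamilton Project calculator.
# ASSUMPTIONS (not from the post): a jobs gap of ~11 million and labor-force
# growth of ~90,000 new workers per month.

JOBS_GAP = 11_000_000        # assumed jobs needed to return to full employment
LABOR_FORCE_GROWTH = 90_000  # assumed new labor-force entrants per month
JOB_GROWTH = 155_000         # jobs added per month (2011-2012 average, from the post)

net_gain = JOB_GROWTH - LABOR_FORCE_GROWTH     # jobs closing the gap each month
months_to_close = JOBS_GAP / net_gain
print(f"Months to close the gap: {months_to_close:.0f}")          # ~169 months
print(f"Gap closes around: {2012 + months_to_close / 12:.0f}")    # roughly the mid-2020s
```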

The most recently reported job growth is 146,000, for November 2012; October jobs were revised downward from 171,000 new jobs to 138,000 jobs.  It's clear we are falling short of the roughly 220,000 jobs per month needed to restore the jobs lost in the recession by the year 2020.
The average interval between recessions since WWII has been 5 years, 9 months; the maximum interval between recessions was 10 years, between 1990 and 2000.   We are already 3 years and 6 months past the official end of the Great Recession.  Considering the very real economic headwinds facing the nation, it is extremely unlikely that we will restore the jobs lost in the Great Recession before the beginning of the next recession.  The job recovery following the 2008 recession is indeed different this time.
----

References and Credits:

Charts prepared by the blogger at http://www.CalculatedRiskBlog.com were especially helpful in preparing this blog post.

Charts at these sites were also particularly useful in providing insights into the recovery from the 2008 Recession:
    The Washington Post graphic "Why it doesn't feel like a recovery"
    Hamilton Project jobs creation graphic
    The Center on Budget and Policy Priorities, Chart Book: The Legacy of the Great Recession


2008 Recession recovery: GDP loss during the 2008 recession was deeper than in previous recessions, and the recovery has been slower than after previous recessions.
“Why it doesn’t feel like a recovery” (Washington Post graphic)
Jobs Creation Interactive Graphic (Hamilton Project); most recent data: October, 171,000 jobs added
Women in the Workforce
Recession unemployment rate by demographics (NYT interactive graphic)
Unemployment graph by different recessions
US Labor Demographics and Forecast
Urbanomics Blog: job growth, charts, etc.
Bill Gross interview: the “New Normal” for the US economy is 2% growth and persistent unemployment.  Workers, already displaced by cheap Asian labor, are now being replaced by machines.
US GDP growth in 2012 Q3 was 1.9%, excluding inventory growth.
Employment growth averaged 153,000 jobs/month in 2011, and 157,000 jobs/month so far in 2012.
November job growth was 146,000; October jobs were revised downward from 171,000 to 138,000.






Monday, October 1, 2012

Shale Gas Production, Resources, and Prices


Gas production from conventional reservoirs has been falling since 1973, exactly as predicted by M. K. Hubbert in his landmark “Peak Oil” paper published in 1956.   Offshore natural gas production (1970–2000) moderated the decline but did not reverse the trend. 

Unconventional gas reservoirs (“tight” sands and coal-bed methane) began to contribute significant production about 1980.   Significant volumes of shale gas production began in the early 2000s, and grew from 2 TCF/year in 2008 to over 4 TCF/year in 2010.   Some of the production growth was the result of the “land-rush” of lease acquisition.  Leasing terms often required immediate drilling to maintain the lease.  Many wells were drilled simply to satisfy leasing terms, without regard for the underlying economics of the project.  This created a supply bubble, driving prices below the equilibrium for profitability.
The contribution of shale gas helped to push total gas production to about 25 TCF per year, well above the Hubbert Peak of 1973 (22.7 TCF/year).  (Note: EIA gas production volumes are somewhat higher than data from Jean Laherrère, probably due to different handling of volumes re-injected for gas storage and oil recovery.   The EIA gives current production as over 27 TCF per year.)

Production from unconventional sources is simply the continuation of the long-term trend of diminishing returns in producing oil and gas.  Technology and economics allow profitable production of low-productivity wells, and many more wells are needed to supply the market. 
The growth in supply caused a decline in average well-head prices from over $8 per mcf (thousand cubic feet)  to about $2.50 per mcf.   Current prices have recovered somewhat to about $3.00 per mcf.
The shale gas revolution is the product of two technologies.  The first is horizontal drilling, which was developed in the 1980s; the second is the ability to controllably fracture the rock surrounding the horizontal wellbore, which was developed in the early 2000s.   Hydrofracturing a well to stimulate production is an old technology, widely used since the 1970s.   The innovation that allows commercial production of shale gas is the technology to distribute the fractures evenly along the entire length of the wellbore.  Previously, fracturing only occurred at the weakest point, leaving 90% of the horizontal wellbore unproductive. 

The new technology opened up huge prospective areas in the United States.   Initial estimates of total potential were staggering.  In 2010, estimates from the EIA for technically recoverable resource from lower 48 shales were over 800 TCF (trillion cubic feet).  Of that volume, the EIA attributed 410 TCF to the Marcellus shale of the Appalachian states.

By 2011, however, some of the bloom was off the rose.  Every exploration play tends to go through phases.  It is almost a law of nature that the best wells are drilled first.   Geologists drill prospects with the best potential before drilling average prospects.   Predictably, engineers, management and financial analysts extrapolate the results of those early wells to the entire field or play.  No one pays attention to the warnings of the geologist until the disappointing wells are drilled.

Disappointing results from shale gas began to be documented in the financial media in 2011.

In 2012, two major studies by the USGS reduced expectations for shale gas
First, the USGS published its assessment of technically recoverable resources in the Marcellus shale, with a mean estimate of 84 TCF.   This was an increase from the estimate of 2 TCF published by the USGS in 2002, but substantially below the 410 TCF published by the EIA.  (This compares to about 22 TCF consumed annually in the United States.)  Conflict between the two agencies is almost palpable in the USGS press release, which noted testily:  “USGS is the only provider of publicly available estimates of undiscovered technically recoverable oil and gas resources of onshore lands and offshore state waters.”


Secondly, the USGS published a study of well productivity across all of the shale gas plays in the United States.  The USGS estimated the average EUR (Estimated Ultimate Recovery) per shale gas well at 1.1 BCF/well, substantially less than major operators, who published estimates of 4 to 5 BCF/well. 


The major operators (e.g. Chesapeake) are probably doing better than average.   The leading companies are using the technology appropriately and have come up the learning curve on drilling and producing these wells.  By contrast, many of the inexperienced competitors who jumped into this play lack the engineering expertise to perform well.   The USGS “average” reflects the experience of both groups.  Still, the USGS numbers are sobering, and suggest that a slow-down in the growth of production is likely, until the issues of productivity are clearly settled.

Despite the downgrade in expectations, the EIA still expects shale gas production to expand to nearly one-half of U.S. gas production by 2035. 
The low gas prices that have hovered at or below $3/mcf have depressed exploration for gas in both conventional and unconventional plays.   Anecdotally, drillers in the Gulf of Mexico have stopped pursuing new gas, both in deep plays on the continental shelf and in prospects in deep water.   Offshore gas is simply not profitable at $3/mcf.   However, I expect the current high production rates to keep prices in the range of $3 to $4 for one or two years, until excess production and excess gas in storage are worked off.   In a longer-range outlook, I expect gas prices to rise to the $5 to $6 range in the medium term of 3 to 5 years. 

The energy equivalence of a unit of gas (mcf) to a barrel of oil is about 6 to 1.   In other words, 6 thousand cubic feet (mcf) of gas produces about 6 million Btu (British Thermal Units) of energy, which is roughly equivalent to the energy content of a barrel of oil.  By comparison to oil, energy from gas is incredibly cheap.  At today’s prices ($3.51/mcf gas, and $91.58/barrel oil), natural gas is only 23% of the cost of oil.  Natural gas prices could double, triple or quadruple and still represent a savings with respect to a barrel of oil.
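The arithmetic is easy to check.  Here is a quick sketch using the prices and the 6:1 equivalence quoted above:

```python
# Energy-equivalence comparison of natural gas and oil prices,
# using the ~6:1 mcf-to-barrel ratio and the prices quoted in the post.

GAS_PRICE = 3.51      # $/mcf (thousand cubic feet)
OIL_PRICE = 91.58     # $/barrel
MCF_PER_BARREL = 6    # ~6 mcf of gas holds roughly the energy of one barrel of oil

gas_cost_per_barrel_equivalent = GAS_PRICE * MCF_PER_BARREL   # about $21
ratio = gas_cost_per_barrel_equivalent / OIL_PRICE            # about 0.23
print(f"Gas costs {ratio:.0%} of oil, per unit of energy")    # ~23%
```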

-----
General Information

Shale Gas plays in the lower 48 (USGS)

Warnings on disappointing Shale Gas results:

USGS issues new estimate of technically recoverable reserves for Marcellus Shale
commentary:

USGS revised estimates of well production downwards, relative to claims by major producers:
commentary:

Thursday, September 20, 2012

The Silly Season and the US Federal Budget


We are now in the silly season of American presidential politics.   
Both parties have nominated their candidates for president, written their platforms, held their conventions, and made endless pleas for donations to run political advertisements.  Heeding the advice of Quintus Cicero*, they have distorted and disparaged the records of their opponents, avoided clear discussion of the issues, promised everything to everybody, and lied liberally about everything.

Let’s look at the United States’ Federal budget for 2012.
Tax receipts, according to the budget, are expected to be 2.469 trillion dollars.   A small increment of additional funding is found in the budget as line item 950, Undistributed Offsetting Receipts, adding about 98 billion dollars.   Total funding is thus expected to be about $2.568 trillion, or about 17 percent of GDP.
Spending is shown in the following chart.   Total spending is $3.894 trillion, or 26% of GDP.
Social Security and Medicare represent about 32% of total Federal spending.  Defense and Veterans’ benefits are about 22%.  Interest on existing debt is 6%.  Together, these items are about 15.5% of GDP.

The total Federal deficit for 2012 is about 1.3 trillion dollars.  Federal spending exceeds tax revenue by an astonishing 51%.

Federal debt held by investors (“public debt”) is now about $11 trillion, or about 72 percent of GDP.   Public debt is growing at a rate of about 20 percent per year, a rate that would double the debt in about 3 ½ years.  Foreign investors hold $5.3 trillion, or nearly half of the public debt.   Obligations between government agencies, such as the social security trust fund, amount to an additional $4.7 trillion, bringing total Federal debt to $16 trillion (107% of GDP).

Against this backdrop, Republicans propose an amendment to the constitution requiring a balanced budget.  Mitt Romney proposes to cut marginal tax rates by 20 percent.  He also proposes raising Defense spending by about 5% (excluding war expenses), adding 100,000 members of the military, modernizing weapons, and building more ships and aircraft.   He said that budgetary changes will not affect current retirees or veterans.   

Defense, Social Security, Medicare, and interest are about 60 percent of Federal spending.  If all tax revenues were applied to these categories with no reductions, a balanced budget, as proposed by Republicans, would require eliminating about 83% of all other government functions.  This is without even considering the proposed tax cut.

We cannot eliminate 7/8ths of the functions of the Federal government to pay for defense, medical subsidies and transfer payments to the elderly.   It is absurd.
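As a rough check on the arithmetic, here is a short sketch using the figures quoted above.  With the rounded 60% share, the cut comes out near 85%, in the same ballpark as the 83% cited above.

```python
# Checking the budget arithmetic, using the 2012 figures quoted in this post
# (all values in trillions of dollars).

revenue  = 2.568                 # total receipts, including offsetting receipts
spending = 3.894                 # total outlays
protected_share = 0.60           # Defense, Social Security, Medicare, and interest

deficit = spending - revenue
print(f"Deficit: ${deficit:.2f} trillion; "
      f"spending exceeds revenue by {spending / revenue - 1:.0%}")   # ~51-52%

protected = spending * protected_share     # spending left untouched
other = spending - protected               # everything else the government does
left_for_other = revenue - protected       # what a balanced budget leaves over
cut_needed = 1 - left_for_other / other
print(f"Share of other functions eliminated: {cut_needed:.0%}")      # roughly 85%
```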

Democrats have been equally negligent in providing a clear blueprint toward a solution to the crisis.
Both sides are simply promising everything to everybody.   

It is clear that any solution to the budget crisis will require a combination of increased taxes, cuts to defense spending, and reductions in elder care. 
 And these changes must be implemented sooner rather than later.
---------- 

* Quintus Cicero; letter to Marcus Cicero regarding Marcus’s campaign for Consul of the Roman Republic, published as How to Win an Election, translated by P. Freeman.

Federal Budget


Thursday, May 3, 2012

The Envelope, Probability Theory, and the Black Swan


The Envelope
Test pilots have a term that describes the known operating characteristics of an airplane: “The Envelope”.   The Envelope is the range of known, safe, operating parameters, such as top speed, stall speed, angle of attack, maximum bank, etc.  When the test pilot tries to extend the range of operating parameters, he is operating “outside of the Envelope”, and the airplane may perform in unexpected or dangerous ways.  There are abrupt discontinuities in the physics of flight, and test pilots’ reputation for courage is well-deserved. 

The concept of “The Envelope” applies to our total experience, as individuals, as businesses, as nations, or as a species.     When we are within the envelope of our experience, events unfold more or less as we encountered them before.  When we are outside the envelope of our experience, our theories about probability break down.   We are in a place where established rules do not apply, and where our experience is irrelevant, or worse, misleading.  We are in the realm of the Black Swan.   We will return to the Black Swan, but first we need to step back and consider the history of probability.
---------
Probability Theory
Our lives are ruled by probability.  Will the stock market rise or fall?  Will tomorrow be sunny or rainy?  Will I get a raise at work?   Will the Dodgers win the pennant?  Will I have an automobile accident?  Will I win the lottery?  Will I be late to the airport?  Will she get pregnant?  Will I die of cancer?   From the mundane to the life-shattering, every day is filled with uncertainty about events, and throughout our lives, we develop ways to understand those probabilities and to anticipate the results.

A Priori Knowledge and Discrete Outcomes
Gamblers were the first to take an interest in probability theory, for obvious reasons.  In the 16th and 17th centuries, gamblers and mathematicians (often one and the same) began to develop the theory of probability in the simplest cases.  They learned how to calculate the odds for any event with a set of known a priori parameters and discrete outcomes.  A priori (“from earlier”) means that we inspected the dice before the throw, or counted the cards before the deal.   Discrete outcomes are critical; when we throw dice we do not expect the outcome to be eight and three-quarters, or the king of spades.  If we consider all of the possible discrete outcomes, we can calculate precisely the odds of winning or losing (assuming that the dealer isn’t cheating). 

Non-A Priori Cases
But what about situations without a priori knowledge? 
In the eighteenth century, mathematicians considered the problem of a bag filled with an unknown number of white balls and black balls.  We are not given the opportunity to inspect the bag, or count the balls.   The probability of drawing either white or black can only be discovered through experience.  As our experience grows, we gain confidence about the ratio of black to white balls in the bag, although we can never be completely certain of the total probability until we have drawn the final ball from the bag.  Modern sampling theory can quantify the degree of confidence in the probability as a function of how many balls we have drawn (i.e. the extent of our experience).
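As a rough illustration (not from the original post), here is a sketch of how sampling theory narrows our confidence as the draws accumulate, using the simple normal approximation and assuming draws with replacement:

```python
import math

# Minimal sketch: how a confidence interval for the fraction of white balls
# tightens as the number of draws grows.  Draws are assumed to be made with
# replacement, a simplification of the bag problem described above.

def white_ball_interval(k, n, z=1.96):
    """Approximate 95% confidence interval for the fraction of white balls,
    given k white balls observed in n draws (normal approximation)."""
    p = k / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

for n in (10, 100, 1000):
    k = round(0.7 * n)                 # imagine 70% of the draws came up white
    low, high = white_ball_interval(k, n)
    print(f"{n:5d} draws: estimate 0.70, 95% interval ({low:.2f}, {high:.2f})")
```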
--------
Bayes’ Theorem
Bayes’ Theorem, developed in the eighteenth century, applies to probability problems without knowledge of a priori conditions.   Bayes’ Theorem is used to combine a subjective estimate of probability with previous experience of the actual rate of occurrence.

Suppose a bird watcher claims to have seen a rare species of duck.  Previous experience shows that in nature, the common bird is observed 99% of the time; the rare bird 1% of the time.  And suppose our experience also shows that amateur bird watchers accurately identify birds 90% of the time, and mis-identify birds 10% of the time. 
Bayes’ theorem gives the probability that the bird watcher actually saw the rare species as only about 8%.   (Bayesian calculator:  http://psych.fullerton.edu/mbirnbaum/bayes/BayesCalc.htm)

Consider the case of the Ivory-Billed Woodpecker, which is generally regarded as extinct since the last confirmed sighting in 1944.   In 2004, scientists from Cornell University claimed a sighting of the bird, creating a swarm of media interest.   Here is a blurry 2-second video which documents their sighting:
Is this the long-missing Ivory-Billed Woodpecker, or the similar, but relatively common Pileated Woodpecker?  What does Bayes’ Theorem say?


The actual population of the Ivory-Billed (assuming it exists) must be very low, compared to the Pileated Woodpecker.  Let’s assume there may be a total population of 10 Ivory-Billed Woodpeckers, and about 10,000 Pileated Woodpeckers.   Further, let’s assume that our scientists, paddling a canoe through a swamp, can identify a flying bird correctly 85% of the time.  Then we can do the math:  The probability that the observed bird was actually the rare Ivory-Billed Woodpecker is only 5 out of 1000.
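For readers who want to check the numbers, here is a minimal sketch of the two-hypothesis Bayes calculation used in both examples (the helper function name is mine):

```python
def posterior_rare(prior_rare, accuracy):
    """Probability the bird really is the rare species, given that the observer
    identified it as rare (Bayes' theorem with two hypotheses)."""
    true_positive  = prior_rare * accuracy              # rare bird, correctly identified
    false_positive = (1 - prior_rare) * (1 - accuracy)  # common bird, misidentified as rare
    return true_positive / (true_positive + false_positive)

# Amateur birder and the rare duck: 1% prior, 90% accuracy -> about 8%
print(f"Rare duck: {posterior_rare(0.01, 0.90):.1%}")

# Ivory-Billed vs. Pileated Woodpecker: 10 of ~10,010 birds, 85% accuracy -> about 0.6%,
# in line with the 5-in-1000 figure above.
print(f"Ivory-Billed Woodpecker: {posterior_rare(10 / 10_010, 0.85):.2%}")
```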

This result brings to mind a rule of scientific analysis from Cornell’s most famous scientist, Carl Sagan:  “Extraordinary claims require extraordinary evidence.”   Bayes’ Theorem is the mathematical expression of that principle.

Bayesian Theory in Geological Estimates of Success for Petroleum Prospects
One application of Bayes’ theorem is to calculate the true probability of an event, using subjective guesses about the event and the actual rate of occurrence.   The method is essentially a “force-fit” of subjective guesses to the actual rate of occurrence, using the historical accuracy of previous guesses.   

We can extend the Bayesian method to subjective estimates of success used by petroleum geologists in prospect appraisal.  

In oil exploration, geologists assign a probability of commercial success to each prospect.  Over time, the cumulative experience of success and failure provides a means to review the accuracy of the predictions, provided that the predictions were made using the same methodology.

The following chart shows 74 exploration prospects drilled between 1996 and 2000.  The probability of success is the vertical axis, and prospects are shown in rank order, according to geological chance of success.  Successes are color coded in red, and failures in blue.  The chart demonstrates that geologists' estimates have merit; successful prospects generally occur on the left side of the chart.  But let's take a closer look.   Are the estimates quantitatively correct?  Can the estimates be improved with the Bayesian technique?


The second chart shows the prospect portfolio, roughly in thirds according to risk.  The upper third (highest chance of success) slightly outperformed the estimates, with 57% actual success, compared to 48% estimated success.  The middle group moderately underperformed the estimates, with 17% actual success compared to 28% in estimates.  And the bottom third substantially underperformed the estimates, with only 7% success, compared to 18% predicted success.



We can look at the performance of the entire portfolio by summing the estimated probability of success, to create a curve showing the cumulative predicted number of discoveries across the portfolio (blue curve).  We can compare the actual cumulative discoveries (red curve), which rises by integer steps over the successful prospects.  Actual results closely parallel the predictions to the midpoint (about 30% chance of success).  Actual results then trail the predictions to the bottom third (about 20% chance of success), where the actual results go flat, showing no success corresponding to prospects estimated at less than 20% chance of success.

Using Excel, we can run a regression on the curve representing the cumulative actual discoveries, relating the rate of actual discoveries to the rate of predicted discoveries.  This function adjusts the estimated probabilities to actual results, and provides a predictive means to forecast future probabilities.  
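Here is a rough sketch of that calibration step in Python rather than Excel, using made-up prospect data for illustration; the actual 74-prospect portfolio is not reproduced here, and applying the fitted slope to a single prospect is just one simple way to use the calibration.

```python
import numpy as np

# Illustrative calibration of pre-drill success estimates against drilling results,
# using synthetic data in place of the real 74-prospect portfolio.
rng = np.random.default_rng(0)
estimated_pos = np.sort(rng.uniform(0.05, 0.6, 74))[::-1]   # chance of success, rank order
outcomes = rng.random(74) < estimated_pos * 0.8             # pretend estimates ran optimistic

predicted_cum = np.cumsum(estimated_pos)   # "blue curve": expected number of discoveries
actual_cum = np.cumsum(outcomes)           # "red curve": actual discoveries, integer steps

# Fit a simple linear calibration from predicted to actual cumulative discoveries.
slope, intercept = np.polyfit(predicted_cum, actual_cum, 1)
print(f"Calibration: actual ~= {slope:.2f} * predicted + {intercept:.2f}")

def adjusted_probability(p_estimate):
    # Rescale a new prospect's estimate by the calibration slope (intercept ignored).
    return max(0.0, min(1.0, slope * p_estimate))

print(f"A prospect estimated at 30% adjusts to {adjusted_probability(0.30):.0%}")
```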











The same function can be applied in a predictive fashion to a new portfolio of prospects.   The second group shares characteristics of the first group of prospects.   Success is concentrated in the lower-risk part of the portfolio.  Actual success is greater than predicted in the low-risk part of the spectrum, and success is almost absent in the higher-risk part of the spectrum.  The function derived from the first prospect group is not a perfect fit, but improves the fit of pre-drill estimates to actual results.  The cumulative success for the program using the Bayesian-adjusted probability of success is very close to the actual success of the program.

Tracking predictions and results, combined with Bayesian methods, allows the calculation of true probabilities from subjective estimates.  The method can be used iteratively to adapt to changes in exploration technology or improved estimates by the geological staff.  The process provides a way to obtain quantitatively better risk-adjusted investment decisions, and to concentrate attention on the prospects most likely to yield success.
-------
Estimation of Low-Probability Events
People are very good at estimating probabilities in the middle range.  We fairly accurately assess the flip of a coin, the outcome of a college football game, or even the chance of a full house in five-card poker (about 0.14%). But people are terrible at estimating chances at the ends of the probability spectrum.  We are usually unable to discern the difference between events with probabilities of 1:100, 1:1,000, or even 1:10,000.   The same phenomenon occurs at the other end of the spectrum, for events of very high likelihood, but less than certainty.  We are simply unable to sense or quantify the difference.   

An example of this problem is shown in the risk assessments for the Space Shuttle program.  Richard Feynman wrote a stinging critique of the NASA risk estimates following the Challenger disaster.
Feynman's report includes this quote:
(Engineers at Rocketdyne, the manufacturer, estimate the total probability [of catastrophic failure] as 1/10,000.  Engineers at Marshall estimate it as 1/300, while NASA management, to whom these engineers report, claims it is 1/100,000.  An independent engineer consulting for NASA thought 1 or 2 per 100 a reasonable estimate.)
The actual rate of failure was 2 disasters out of 135 missions, or about 1/67.

How is it possible that such wildly differing estimates existed, regarding the safety of such an important project?

Part of the problem is sampling.  For rare events, we must make a large number of observations to detect and quantify a possibility.  If we walk across a lake on thin ice ten times and do not fall through the ice, we can conclude that the chance of falling through the ice is probably less than 1:10.  It does not mean that walking on thin ice is safe, or that we can safely cross the lake 100 times.   For rare events, we simply cannot gain enough experience to adequately grasp the true probability.  This is particularly problematic for risky and dangerous events.
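One standard way to quantify this (not discussed in the post) is the exact binomial upper bound on a failure rate when no failures have been observed, sketched below:

```python
# How little ten safe crossings actually tell us: the exact binomial 95% upper
# bound on the failure probability when zero failures have been observed is
# 1 - 0.05**(1/n), roughly the "rule of three" (about 3/n) for larger n.

def upper_bound_95(n_trials):
    return 1 - 0.05 ** (1 / n_trials)

for n in (10, 100, 1000):
    print(f"{n:5d} safe trials: the failure rate could still be as high as "
          f"{upper_bound_95(n):.1%}")
```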

There are many other facets to the problem, including self-interest of management, and various other sources of bias which Feynman discusses in his report.

But the simple lesson that I take away is that people simply cannot understand risk in the range of low probability events.
------------
Excuse me, I will finish this post soon!
Black Swan Events
Author Nassim Nicholas Taleb introduced the term Black Swan into the modern vocabulary of risk in his book "Fooled by Randomness" (first published in 2001).       



Wednesday, April 18, 2012

Modeling Global CO2 Cycles

This article is the fifth post in a series about Global CO2 trends and seasonal cycles.

1)  The Keeling Curve
      http://dougrobbins.blogspot.com/2011/05/keeling-curve.html
2)  The Keeling Curve and Seasonal Carbon Cycles
3)   Seasonal Carbon Isotope Cycles
4)   Long-Term Trends in Atmospheric CO2
5)   Modeling Global CO2 Cycles

In this post, I generate a model for the global CO2 record from 1971 - 2009.  Inputs to the model include agricultural biomass, fossil fuel emissions, absorption of excess CO2 by carbon sinks, and atmospheric mixing between the Northern and Southern Hemispheres.
The final model begins in the year 1971, and yields a set of CO2 curves, by latitude, that closely matches the actual record.

The important thing is that this model is built entirely from data on human influences.   Natural factors also exist and clearly influence global CO2.  But the quantitative influence of agriculture and fossil fuel use is more than enough to model the annual cycles and long-term rise of CO2 in the atmosphere.

The ease with which the model was created, and the lack of any reasonable, quantifiable alternatives, indicate that changes in atmospheric CO2 are primarily the result of human activity.

Modeling
Modeling performs an important function in science.   After gathering data and making observations about pertinent parameters of a problem, it is necessary to put the pieces together in a quantitative model, to see if the parameters are acting as we expect.

The model “talks back” to us, in a way.   The model will show a fit to real-world data when our assumptions are reasonable, and a mis-fit to the data when the model is built on incorrect assumptions.   In this case, the model shows that the seasonal cyclicity of the Keeling curve is not the result of seasonal fossil fuel use, but instead is the result of seasonal photosynthesis and oxidation.  There is a good quantitative and geographic fit between the observed carbon cycles and the volume of carbon in agricultural biomass.   The increase of amplitude in CO2 cycles in proportion to population growth also supports the idea that the observed annual cycles are largely the result of agriculture.

This model considers separately the CO2 flux in the Northern Hemisphere and the Southern Hemisphere, and matches observations for the rate of mixing between the hemispheres.  A more complex model could be built, perhaps at increments of 5 or 10 degrees of latitude, and more closely identify the location of agriculture and fossil-fuel emissions, and that might be useful to addressing deeper questions.  But I believe a model should be simple in essence; sufficiently complex to answer the question at hand, and not any more complex.  This model is intended to answer the question of human influences on global CO2, and the division of the globe into two hemispheres is sufficient to answer that question.

Fossil Fuel Use and Annual Cycles
Fossil-fuel use has a cyclicity with the appropriate seasonal peaks for the Northern Hemisphere.  I spent quite a bit of time finding data on global fossil-fuel usage, assuming that it was a major factor in the Keeling curve annual cycles.  I extrapolated the seasonal patterns of use for coal, natural gas, and oil in the United States to global figures from the International Energy Agency (IEA) for the three commodities.  (I made an adjustment for the summer "air-conditioning"  bump in coal and natural gas, when extrapolating to the entire world.)
I converted the volumes (in gigatonnes) of CO2 emitted from fossil fuels to atmospheric CO2 concentrations in parts per million for the northern hemisphere and southern hemisphere.
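The post does not list its conversion factors; a back-of-envelope version using standard values for the mass and composition of the atmosphere looks like this:

```python
# Back-of-envelope conversion from gigatonnes of CO2 to parts per million,
# using standard values for the mass and molar masses of the atmosphere
# (these constants are not quoted in the post).

ATMOSPHERE_MASS_KG = 5.15e18     # total mass of the atmosphere
MOLAR_MASS_AIR = 28.97           # g/mol, dry air
MOLAR_MASS_CO2 = 44.01           # g/mol

moles_of_air = ATMOSPHERE_MASS_KG * 1000 / MOLAR_MASS_AIR
gt_co2_per_ppm_global = moles_of_air * 1e-6 * MOLAR_MASS_CO2 / 1e15   # grams -> gigatonnes
gt_co2_per_ppm_hemisphere = gt_co2_per_ppm_global / 2   # each hemisphere holds half the air

print(f"1 ppm CO2 is about {gt_co2_per_ppm_global:.1f} Gt CO2 globally")          # ~7.8 Gt
print(f"1 ppm CO2 is about {gt_co2_per_ppm_hemisphere:.1f} Gt CO2 per hemisphere") # ~3.9 Gt
```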


In my last post,
http://dougrobbins.blogspot.com/2012/03/long-term-trends-in-atmospheric-co2.html, we saw that fossil fuel emissions are responsible for the long-term increase of atmospheric CO2.  However, the peak of fossil fuel usage is in the months of December and January, while the steepest gains of CO2 occur in September through November.   In the fall months in high northern latitudes, CO2 concentrations rise by about 15 ppm.  The sum of fossil fuel emissions in those months totals only about 1.5 ppm, an order of magnitude lower than the rise seen in the seasonal data.  Factors other than fossil fuels are causing most of the seasonal fluctuation of atmospheric CO2.

The Model

We can build a simple model to help identify the significant parameters driving global CO2 cycles, and quantify parameters wherever possible.   From our earlier observations, we can construct the model using the following parameters:
  • CO2 taken up by Plants during the growing season
  • Oxidation  of carbon in plants following the growing season
  • CO2 emissions from Fossil Fuels
  • Absorption of CO2 by carbon sinks (e.g. oceans)
  • Exchange of CO2 between Northern and Southern Hemispheres.
We already observed that the behavior of CO2 cycles differs greatly by latitude.  The Northern Hemisphere, with 68% of the world’s landmass, 88% of the world’s population, and 83% of the world’s GDP, has a cycle showing very large seasonal fluctuations in CO2.  The Southern Hemisphere shows much less annual CO2 fluctuation.  The sharpest and largest fluctuations in CO2 occur in the summer and fall months of the Northern Hemisphere.  In the summer, plants are taking up carbon through photosynthesis, and atmospheric CO2 declines.  Immediately following the growing season, CO2 concentration rebounds sharply, as plants give CO2 back to the atmosphere through oxidation.



I constructed a model for annual CO2 uptake through photosynthesis, beginning with the volume of biomass generated through agriculture.  Agriculture generates about 140 gigatonnes of biomass every year.
http://www.unep.or.jp/Ietc/Publications/spc/WasteAgriculturalBiomassEST_Compendium.pdf
Adjustments for moisture content (50%), carbon content (45%), and conversion to CO2 (3.67x) result in roughly 116 gigatonnes of CO2 globally, or about 96 gigatonnes removed from the Northern Hemisphere atmosphere annually after the 83% Northern Hemisphere allocation described below.   Keep in mind that the Northern Hemisphere holds only half of the air on the planet.  Thus, during the growing season, CO2 in the Northern Hemisphere falls sharply.

In the model, I distributed agricultural carbon and fossil fuel use according to economic output by hemisphere.  The Northern Hemisphere represents 83% of global economic output, and the Southern Hemisphere represents 17% of global economic output.
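Reproducing that arithmetic (my reading, from the numbers, is that the 96-gigatonne figure is the Northern Hemisphere's 83% share of a roughly 116-gigatonne global total):

```python
# Biomass-to-CO2 arithmetic, using the factors quoted in the post.

biomass_gt = 140           # annual agricultural biomass, gigatonnes (wet)
dry_fraction = 0.50        # moisture adjustment
carbon_fraction = 0.45     # carbon content of dry biomass
co2_per_carbon = 3.67      # molecular weight ratio, CO2 / C (44/12)
nh_share = 0.83            # Northern Hemisphere share, allocated by economic output

co2_global = biomass_gt * dry_fraction * carbon_fraction * co2_per_carbon   # ~116 Gt CO2
co2_northern = co2_global * nh_share                                        # ~96 Gt CO2
print(f"Global agricultural CO2 uptake: {co2_global:.0f} Gt")
print(f"Northern Hemisphere share:      {co2_northern:.0f} Gt")
```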

I assigned the 96 gigatonnes of agricultural CO2 uptake to the summer growing months, as shown in the following graph.  For the oxidation part of the cycle, we can observe a very sharp rebound in CO2 in the data during the fall months.  It is possible that some of the rebound is from CO2 sinks, seeking equilibrium after the change during the growing season.  However, isotope data show an equally sharp rebound. (http://dougrobbins.blogspot.com/2012/03/seasonal-carbon-isotope-cycles.html)   It appears to me that vegetation is giving back to the atmosphere the very same CO2 that was absorbed during the summer.
I adopted an oxidation/respiration model to return the CO2 to the atmosphere as a zero-sum annual exchange.  I tried an exponential decline for the oxidation part of the cycle, then tweaked it to match the annual CO2 cycles of the Northern Hemisphere high latitudes (with the long-term trend removed).

This model produced a surprisingly easy fit to the high latitude data of the Northern Hemisphere (see below).
Note that the long-term rising CO2 trend has been removed from the real world data, and there are no CO2 emissions from fossil fuels in the model at this point.

The total volume of vegetation includes both natural and agricultural biomass.  I found a single estimate for the net global annual uptake by plants of about 60 gigatonnes of carbon, or about 220 gigatonnes of CO2.   According to these estimates, agriculture represents 52% of the total annual carbon uptake by plants.  Of the total annual carbon cycle, some carbon is exchanged with carbon sinks (soil) and some is exchanged with the atmosphere.  The fit of the model to observed data shows that the NET amount of carbon taken out of the atmosphere by plants, and returned by oxidation, is very close to the volume of carbon taken up by agricultural activity.

I followed a similar procedure to model the Southern Hemisphere.  I found that 17% of global agricultural biomass produced CO2 fluctuations that were far too large to match the data in the Southern Hemisphere. I found a good match by using only 5% of global agricultural biomass.   The chart below shows the model parameters.
The following chart shows the match of the model to the data from the Southern Hemisphere.

Annual cycles from intermediate latitudes have lower amplitude than cycles from the high northern latitudes.  This was the topic of an earlier post: http://dougrobbins.blogspot.com/2012/03/keeling-curve-and-seasonal-carbon.html.  The Northern Hemisphere, with its large CO2 fluctuations, dominates global CO2 cycles.  CO2 cycles from low latitudes in the Southern Hemisphere (pink) follow the seasonal pattern of the Northern Hemisphere, showing the range and influence of atmospheric mixing between Northern and Southern Hemispheres.
I tried a simple mixing model to represent the cycles observed in intermediate latitudes.   The chart below shows a 50%-50% mixture at the equator, and 70% - 30% mixtures at intermediate latitudes.   As shown in the data above, the cycles of intermediate latitude (pink line) in the Southern Hemisphere follow the seasonal pattern of the Northern Hemisphere.
A more sophisticated model could be created, using greater detail in the location of agriculture by latitude, but I think this model demonstrates that atmospheric mixing between the Northern and Southern Hemispheres can reasonably explain the range of amplitude in CO2 cycles in intermediate latitudes.






Global CO2 data show distinctive characteristics of annual cyclicity and a long-term rising trend ("the Keeling Curve").   Subtle aspects of the curve include a rising rate of increase, and an increase in the amplitude of the cycles.

The final model runs from the year 1971 to 2009.  As a starting point, the model used values for the average CO2 concentration of the Northern and Southern Hemispheres in 1971, of 327 and 325 parts per million CO2, respectively.

The photosynthetic model, which was developed for the year 2009, was adjusted for earlier years as a function of global population. This resulted in cycles with increasing amplitude through the range of the model.
Agricultural production was assumed to vary directly as a function of population, but incremental agriculture was assumed to displace natural vegetation.  Growth of CO2 intake through photosynthesis was increased at a rate of 50% of incremental agricultural output (back-calculated from the 2009 model).

Carbon dioxide from fossil fuel emissions was added, according to estimates from IEA and the BP statistical review of world energy.  Annual figures given in these reports were scheduled on a monthly basis, by analogy to US monthly consumption of coal, natural gas, and oil, as described above.  As noted in a previous post, about 40% of fossil fuel CO2 emissions are absorbed by carbon sinks, including the ocean.  This fraction of new carbon emissions was removed from the model on a monthly basis.

The Northern Hemisphere receives the bulk of fossil fuel CO2 emissions, and modeled CO2 rises rapidly in the Northern Hemisphere unless a transfer to the Southern Hemisphere is allowed.  In an earlier post, we saw that rising CO2 in the Southern Hemisphere lags CO2 in the Northern Hemisphere by a period of about 22 months.  In the model, I transferred half of the excess CO2 of the Northern Hemisphere to the Southern Hemisphere, using a lag of 22 months to represent the necessary mixing time.
Despite the general simplicity of the model, the resulting CO2 curve shows a reasonable correlation to actual data recorded across the global range of latitudes, and after 38 years of CO2 addition and subtraction, the model concludes at the appropriate concentrations of CO2 across the globe.
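For readers who want to see the bookkeeping, here is a minimal, schematic version of the two-hemisphere loop.  The seasonal schedules and the flat fossil-fuel emission rate below are my own illustrative placeholders, not the post's actual monthly inputs from the IEA and BP data, so the output is only meant to show the structure of the accounting.

```python
# Schematic two-box (NH/SH) monthly CO2 bookkeeping, 1971-2009.
# Placeholder inputs: invented seasonal schedules that sum to the 96 Gt uptake
# figure, and an assumed flat ~22 Gt CO2/yr average for fossil emissions.

GT_PER_PPM_HEMI = 3.9      # Gt CO2 per ppm in one hemisphere (see conversion above)
MIX_LAG_MONTHS = 22        # NH -> SH mixing lag, from the post
SINK_FRACTION = 0.40       # share of fossil CO2 absorbed by carbon sinks
FOSSIL_GT_PER_YEAR = 22.0  # assumed average global emissions over the period

# Northern Hemisphere biological exchange by month, Gt CO2 (zero-sum over the year):
nh_uptake    = [0, 0, 0, 5, 15, 25, 25, 15, 6, 3, 1, 1]    # growing-season drawdown, ~96 Gt
nh_oxidation = [4, 2, 1, 0, 0, 0, 0, 0, 10, 40, 25, 14]    # post-season rebound, ~96 Gt

nh, sh = 327.0, 325.0                       # 1971 starting concentrations, ppm
in_transit = [0.0] * MIX_LAG_MONTHS         # CO2 (ppm) queued for NH -> SH mixing

for month in range(12 * 38):                # 1971 through 2009
    m = month % 12
    nh += (nh_oxidation[m] - nh_uptake[m]) / GT_PER_PPM_HEMI    # seasonal biology
    retained = FOSSIL_GT_PER_YEAR / 12 * (1 - SINK_FRACTION)    # fossil CO2 left in the air
    nh += retained / GT_PER_PPM_HEMI                            # added to the NH, for simplicity
    in_transit.append(0.5 * retained / GT_PER_PPM_HEMI)         # half heads south...
    moved = in_transit.pop(0)                                   # ...arriving 22 months later
    nh -= moved
    sh += moved

print(f"Modeled 2009 concentrations: NH {nh:.0f} ppm, SH {sh:.0f} ppm")
```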

Conclusions:
1)  A model can be generated which provides a very good match to the long-term global CO2 record.  The model includes estimated fossil fuel use, absorption of CO2 by carbon sinks, carbon accumulation in agricultural biomass, and oxidation of agricultural biomass.  The volume of agricultural biomass was varied in the model according to world population growth.
2)  Surprisingly, fossil fuel use does not have a significant effect on seasonal CO2 cycles.   Known volumes and timing of fossil-fuel emissions do not match the cyclicity in CO2 observations.
3)  Photosynthesis in the Northern Hemisphere dominates the seasonal cycles.  The volume of CO2 absorbed through agriculture closely matches the net volume of CO2 taken up by both natural and agricultural photosynthesis.
4)  Oxidation of vegetation occurs quickly.  Three quarters of the net plant biomass is oxidized in the first three months following the growing season.  It seems likely to me that burning of agricultural waste accounts for some of the rapid oxidation following the growing season.
5)  The Northern Hemisphere dominates both seasonal and long-term trends in atmospheric CO2.
6)  Mixing between the hemispheres accounts very well for the gradation of cyclicity observed at intermediate latitudes.

The model shows that the long-term trend of rising CO2 is attributable to fossil-fuel emissions.  Fossil fuel emissions account quantitatively for the rise in CO2 over the last 38 years, and fit the data with regard to differences in concentration in the Northern and Southern Hemispheres.
The model also shows that the annual cyclicity of the biologic cycle is strongly influenced by agriculture.  Agricultural biomass alone can be used to model  and match observed data for seasonal CO2 cyclicity.

And finally, the uptake of CO2 through agriculture clearly outpaces emissions of CO2 from fossil fuels, at least on a seasonal basis.  As a tool for the management of CO2 concentrations, policy-makers should consider banning the burning of agricultural waste, and consider options for disposal of agricultural waste as a means of sequestering significant volumes of carbon.
--------------
Global CO2 concentration data in this report are credited to C. Keeling and others at the Scripps Institution of Oceanography, as well as Gaudry et al., Ciattaglia et al., Columbo and Santaguida, and Manning et al.  The data can be found at the Carbon Dioxide Information Analysis Center.
 http://cdiac.ornl.gov/trends/co2/
Data for CO2 released by fossil fuels are available from the IEA report CO2 Emissions from Fuel Combustion:
http://www.iea.org/co2highlights/co2highlights.pdf
And the BP Statistical Review of World Energy:
http://www.bp.com/sectionbodycopy.do?categoryId=7500&contentId=7068481

Monthly data for US fossil fuel consumption were taken from the EIA website:

Global population figures from 1970 - 2010 were taken from Wikipedia.
The estimate for annual global biomass, circa 2009 was taken from a UN report:
http://www.unep.or.jp/Ietc/Publications/spc/WasteAgriculturalBiomassEST_Compendium.pdf