The ASPO Plot - Jean Laherrère replies

Jean Laherrère is one of the major figures who have contributed to the understanding of Peak Oil over the years. He usually provides the more detailed analyses that are given at meetings (such as the one he gave in Lisbon recently).  He replies to the discussion relating to Rembrandt's piece:
Subject: Re: [energyresources] New study on peak oil
Date: Sat, 10 Sep 2005 18:50:05 +0200

dear Murray
Koppelaar has a peak around 2015 as [has] my model (no demand constraint), but his peak is higher and his decline steeper, as you can see [in] the comparison.

I do not deal with [single] projects as Chris [does], as I do not have the data; furthermore, to get the world total, adding each individual project should be done with a probability ratio for each to be on time.

I am a retired geologist in the country with just access to the technical field reserves, and I have three papers to finish before the end of the month, one for presentation at CERN in Geneva.

best regards
jean

Re: "each individual project should be done with a probability ratio for each to be on time"

It's easy to agree with Laherrère here, but thinking about it, I see no way to associate a meaningful probability with each project in advance.

Yes, I am having trouble with the same thing.

I suppose one could start with probabilities of similar projects in the same area of the world, and use those as a guide, but that only helps a little.  You could have political instability, weather, or any one of a number of other issues that could get in the way.

For all I know you could have supplies and/or manpower diverted to help with Katrina related repairs.

ericy and Dave are both right, and this whole notion of the role of probability is why I keep running around waving my arms like a nutjob and yelling, "Nobody knows anything!  We're all guessing!"

There's always a degree of probability estimation in economic issues (and we are talking about economics--the allocation of scarce resources--even when we're not explicitly mentioning money).  But the current situation is very unsettling because the range of possible outcomes is so wide, and nearly all of them are worse than our current market conditions.  If everything goes our way, the oil and NG infrastructure will come back in fits and starts, and we'll have to pay prices around the current level or slightly higher, but there won't be outright shortages.  But if something goes wrong--a huge NG problem is discovered, a terrorist attack takes out a major Saudi production facility, the Bush admin. decides to invade Iran, etc.--then the situation changes dramatically, and not for the better.

Lou,
You define the problem perfectly.  Everything is based on best-case scenarios.  I am more concerned about what might happen if the best-case goals are not met.  Use more moderate expectations for models to see what the downside is under less rosy conditions.  Why is this not discussed more in economic circles?

Thanks for bringing some realism into the debate from the economic side.

lou-
Point taken. There is an analytical approach that might work here: a brute-force Monte Carlo analysis. Build up a model (say, country by country) and add in explicitly stated probabilities. We could even add in unknowns like a terrorist attack (as an X% chance that output would decline to zero for a given place and period). After the model is set, run every single combination and permutation. It will produce an average curve, and a confidence interval based on the probabilities that were input.
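To make that concrete, here is a minimal Python sketch of what one run of such a simulation might look like. The countries, capacities, and disruption probabilities are all invented, and it samples randomly rather than enumerating every combination:

import random

# Hypothetical illustration only: made-up countries, capacities (mbd), and
# per-period disruption probabilities. The point is the mechanics, not the numbers.
countries = {
    "A": {"capacity": 10.0, "p_disruption": 0.05},
    "B": {"capacity": 8.0,  "p_disruption": 0.15},
    "C": {"capacity": 5.0,  "p_disruption": 0.30},
}

def one_run():
    """One Monte Carlo draw of total output for a single period."""
    total = 0.0
    for c in countries.values():
        if random.random() > c["p_disruption"]:  # country stays online this period
            total += c["capacity"]
    return total

runs = sorted(one_run() for _ in range(100_000))
mean = sum(runs) / len(runs)
p5, p95 = runs[int(0.05 * len(runs))], runs[int(0.95 * len(runs))]
print(f"mean {mean:.1f} mbd, 90% interval [{p5:.1f}, {p95:.1f}] mbd")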

But I'm not convinced this would get us any closer to the truth because of the garbage in-garbage out problem. We'd still be taking bad production or reserve data, and essentially multiplying it by probability assumptions that are also unproven.

Given the shaky data and methods we have now, it still comes down to a matter of faith: what do you believe? And I can't see a clear way to advance beyond that at the present time. It's a hell of a way to run the world economy, isn't it?

Given a database of past projects and their delay history, we'd have a population of delays to sample from in a Monte Carlo. This database of past projects is also what we'd need for evaluating/correcting the Type2-Type3 conflation problem with Rembrandt's analysis. It would also allow us to do proper cross-training analysis of any methodology. A good test of any extrapolation method is to run it on the partial history up to date X and see how well it does at predicting X+1, X+2, etc. up to the present year. If it does a pretty good job 2-3 years into the future and then sucks, you know how far to trust it when you train it on the whole history and try to project into the future. If it sucks at extrapolating even one year past the end of whatever training sequence you give it, you know it's a worthless method.
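To sketch that backtest loop in Python (the straight-line extrapolator and the annual figures below are purely illustrative stand-ins for whatever method and data we actually have):

def fit_and_project(history, n_ahead):
    """Toy extrapolator: straight line through the last few points."""
    k = min(5, len(history))
    recent = history[-k:]
    slope = (recent[-1] - recent[0]) / (k - 1)
    return [history[-1] + slope * (i + 1) for i in range(n_ahead)]

def backtest(series, cutoff, horizon):
    """Train on series[:cutoff], then compare projections with what actually happened."""
    actual = series[cutoff:cutoff + horizon]
    projected = fit_and_project(series[:cutoff], len(actual))
    return [p - a for p, a in zip(projected, actual)]  # error at +1yr, +2yr, ...

production = [65, 66, 68, 70, 72, 74, 75, 74, 73, 72]  # made-up annual figures
for cutoff in range(5, len(production) - 1):
    print(cutoff, [f"{e:+.1f}" for e in backtest(production, cutoff, 3)])

If the errors stay small a couple of years out and then blow up, that tells you how far past the training data to trust the method.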
Another method would be to actually build a probability function that works like this:

All new projects P start with probability PROB = 1. Then start subtracting. Rick, you had mentioned political stability in West Africa and noted that it is low. So, let's call that PS. Is it offshore or land-based? Deep or shallow water drilling if offshore? Let's call that T. Is it conventional oil or some unconventional source requiring unproven technology? Let's call that S. What type of subsurface geological properties are we dealing with? Let's call that G. Each factor is calculated on its own as a function using relevant parameters, e.g. G(P) = something based on the porosity or whatever factors are involved. Functions would be defined based on past experience, as Stuart talks about.

So, PROB(P) = 1 - PS(P) - T(P) - S(P) - G(P) - ... = 0.N, N < 1. The closer to 1, the more weight we'd give the project. Our PROB function could be evaluated by extrapolation into the future, again as Stuart points out.
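As a rough Python sketch of that scoring (the penalty values are placeholders, and the real penalty functions would have to be fitted to past project outcomes as discussed above):

def project_weight(political_risk, technical_risk, source_risk, geology_risk):
    """PROB(P) = 1 - PS(P) - T(P) - S(P) - G(P), floored at zero."""
    return max(1.0 - political_risk - technical_risk - source_risk - geology_risk, 0.0)

# Hypothetical West African deepwater project: large political and technical
# penalties, conventional oil, decent reservoir.
w = project_weight(political_risk=0.30, technical_risk=0.20,
                   source_risk=0.05, geology_risk=0.10)
print(w)  # 0.35 -> this project's scheduled barrels get weighted by 0.35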
Your error bars would swamp the model.  You'd be better off either (a) throwing darts, or (b) just deciding what answer you're looking for and writing it down.

Far better would be to look at existing reserves and try to estimate the quantities of potential oil production at each price point remaining in the reported reserves.  How many of the gigabarrels that are left are $10-30 oil?  How many are $40 oil?  How many are $50 oil?  If we wanted 90mbd, where does that put us on a curve of potential supply?  What about 115mbd?

I mean, seriously - price is EVERYTHING.  Sure, economists overlook lots of detailed factors and risks and stuff, but geologists always seem to ignore PRICE.  If aliens came down and said they'd hit us with an ugly stick and we were all gonna die in three years unless the world pumped 150mbd by 2008, we'd figure out how to do it.  It's not impossible.  We might end up spending $250 per barrel for the last few mbd, but we could do it - even 150mbd would only be about 55 Bby, against global known remaining reserves of perhaps a trillion barrels.  A trillion friggin' barrels!
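To make the price-bucket idea concrete, here is a toy Python sketch; the cost tiers and reserve figures are invented, scaled so the total comes to roughly a trillion barrels:

supply_buckets = [  # (marginal cost $/bbl, remaining reserves in billion bbl)
    (20, 300),
    (40, 300),
    (50, 200),
    (80, 200),
]

def marginal_price(annual_demand_bby, years):
    """Cheapest price tier that covers cumulative demand over the period."""
    needed = annual_demand_bby * years
    cumulative = 0
    for price, reserves in supply_buckets:
        cumulative += reserves
        if cumulative >= needed:
            return price
    return None  # demand exceeds everything listed

print(marginal_price(33, 20))  # ~90 mbd sustained for 20 years
print(marginal_price(42, 20))  # ~115 mbd sustained for 20 years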

The US is the most mature region and the one for which we have the longest data. If you look at a history of production, it looks like a big old logistic peak. If you plot the price, which has varied by easily 10x over that history, it causes only little wiggles on the back of the peak. Price is much less important in oil production than you think.
I was just thinking about this this morning and it occurs to me - please correct me if I'm wrong - that using US production as a model for global production is invalid because the US is not a closed market system: US production is price sensitive (+), so US production responds to changes in world prices, and thus declining US production is NOT a reflection of absolute limits on US supply, but rather a consequence of competition with low-cost foreign producers.  

It's like saying that the decline of the US consumer electronics industry is a model for the impending decline of the global consumer electronics industry.  Except that, quite obviously, US electronics makers declined because they have been under-cut by low-cost foreign producers and protected "national champion" electronics firms, first in Japan, then Korea and Taiwan, and now China.  And, equally obviously, total production of consumer electronics has and will continue to rise.

 - EvT (aka Silent E)

------

+ FN.1: Indeed, given that:
(a) Texas is the home of a large share of available drilling capacity and oil services,
(b) the US has the oldest and most mature oil industry in the world, and
(c) the US has less government intervention in the oil market than anywhere else,
the US oil industry is more price sensitive than nearly anywhere else in the world.

EvT,
Thank you for your contributions here.  Your alternative perspective on this issue is good for the debate.  

I have to disagree strongly with the argument that U.S. production declined due to the comparative advantage of foreign producers.  The price of crude from the date of U.S. peak production in 1970 through the price collapse in '86 was more than sufficient to increase domestic supply, if that domestic supply were available.  The U.S. has essentially been picked clean beyond ANWR and GOMEX.  

Price is important, but it isn't everything.  Price signals will encourage conservation, exploration, and R&D, but they cannot get us to do the impossible.  If a Martian threatened to vaporize me with his P38 Space Modulator unless I made monkeys fly out of my butt, I'd be in trouble.  Similarly, the U.S. is not about to reverse a 35-year decline trend at any price.  

I haven't had the opportunity to read the article that you cite regarding reserve growth.  However, I will make two comments:  (1) this past correlation is not necessarily a predictor of the future, and (2) my understanding is that energy firms pace their reporting of proven reserves to show steady growth, rather than reporting what they know to be proven in the year they have the reservoir data and market price to justify it.  This old habit apparently got Shell in trouble last year.  

I have to disagree strongly with the argument that U.S. production declined due to the comparative advantage of foreign producers.  The price of crude from the date of U.S. peak production in 1970 through the price collapse in '86 was more than sufficient to increase domestic supply, if that domestic supply were available.

I agree that there must be a peak, followed by a plateau, and a decline.  However, the length of the plateau and the speed of the decline are contingent on the relative costs of extraction.  But I think we should look at oil as discrete products based on price (I'll use constant 2000 $US).  We're seeing the peak of $20 oil - it may have already happened.

But if there is a LOT more $40 oil in the world than $20 oil, we could see prices hover around $40 for 20 years or more, with supplies rising steadily until the $40 oil is gone and we have to start pumping $50 oil.

My point is that using the US as a model for world depletion may over-estimate the rate of decline, simply because the US experience was not isolated.  US production of $10 oil peaked in 1970.  Total production was 3.51 Bby - but by 1985, high prices had led total US production back to 3.2 Bby and it was still rising.  If the Saudis had not re-opened the taps and crashed the world price in 1985-6, US production might easily have surpassed the 1970 level.

I was wondering what the probability is of a forecast field production figure being correct, because you don't know your real production number until the oil is flowing from the first wells and you are a few months into production. Simmons is saying that a lot of the production figures are just "paper barrels" that have not been validated by costly appraisal wells and sample core analysis.
Regardless of how one looks at it, the business of forecasting production is fraught with difficulties, and it seems that all of these approaches could use what we in meteorology refer to as "hindcasting," i.e., critically examining forecast FAILURES for the past and using them to correct BIASES in future forecasts.  

 Take, for example, the Gulf of Mexico (GOM), where despite hurricanes, etc., at least the data is reasonable.   For instance, the AAPG in 2003 issued forecasts for GOM gas production ( http://www.aapg.org/explorer/2003/02feb/ocs_report.cfm  ) whose worst-case scenario for 2005 has GOM gas production at 5 Tcf.  Now, even before Katrina, the latest monthly reports ( http://www.gomr.mms.gov/homepg/fastfacts/pbpa/pbpamaster.asp ) show production in the 8.6-9.3 bcf/d range, i.e., somewhere short of 3.5 Tcf annualized.

Another example is the MMS itself.  Its 2004 projection report (MMS report 2004-065) suggested total GOM oil production of 1.8 million bbl/d for 2005.  However, pre-Katrina, the actual level was 1.25-1.45 million bbl/d.  A similar story for gas: the MMS forecast for 2005 was 13 bcf/d (same report as shown above), and even pre-Katrina the actual numbers were way off from that, as noted above.

What I don't see is a frank discussion by anyone as to how accurate their past forecasts are, and because of that, I have difficulty understanding how to weight future projections.  To evaluate an effort like Rembrandt's, it seems as if going backwards is the first step to making progress.  To be blunt, I don't believe that anyone has a good handle on the long term (5 years) dynamics of production in the US GOM right now, at least based upon the difficulties outlined above.
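A back-of-envelope Python sketch of what such a hindcast correction could look like (the forecast/actual pairs below are placeholders, not the real AAPG or MMS series):

past = [
    # (forecast, actual) pairs in consistent units, e.g. Tcf/yr of GOM gas
    (5.0, 3.4),
    (4.8, 3.6),
    (4.6, 3.5),
]

# Average realization ratio: how much of the forecast actually showed up.
bias = sum(actual / forecast for forecast, actual in past) / len(past)
print(f"average realization ratio: {bias:.2f}")

new_forecast = 4.5
print(f"bias-corrected expectation: {new_forecast * bias:.2f}")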

A note to Rembrandt if you are reading:  Both Mad Dog and Holstein in the GOM were actually online last year, and presumably added to Q1 2005 production somewhere in the 100,000-150,000 bbl/d range.