Projecting US Oil Production

I began a discussion of US oil production in Four US Linearizations, and then continued in Predicting US Production with Gaussians, and Linearizing a Gaussian. This post wraps up my analysis of the US URR (at least for now).

The post that follows below the fold is rather technical, and filled with large images. Here's the executive summary for those who would like to skip the details.

Update [2006-1-13 3:22:37 by Stuart Staniford]: After spending more time thinking during the waking hours, I decided my approach had a flaw, which I've corrected. It changes my URR estimate very marginally from the original 218 ± 8gb to 219 ± 8gb. Details in another update inside the post. Apologies for any confusion.

  • I estimate that the ultimately recoverable resource for US field crude production as measured by the EIA will be 219 ± 8gb. Since the EIA believes we've used 192gb so far, the remaining balance is 27 ± 8gb. These are intended as two-sigma error bars, and the figures exclude NGL (see EIA definitions).
  • This conclusion comes from fitting both Gaussian and logistic curves to the data in two different ways each, doing extensive stability analysis, and making judgements about the level of agreement throughout the regions of stable prediction in all methods.
  • I show that on the US data, Hubbert linearization is the most broadly stable prediction technique of the four I considered (despite the fact that the Gaussian actually fits the data better than the logistic).
  • In particular, the linearization is the only one of the techniques considered that has any significant domain of reliability before the peak.
  • However, the Gaussian is likely more accurate now that it is well constrained by a long history of data.
  • The caveat to this extrapolation is that, while these models seem to fit US production amazingly well, we still lack a deep understanding of why this is true. Therefore, there is some risk that the conditions which cause them to fit well might change in the future, thus breaking the projections. I would be surprised if this happens in the case of the US, however.
Recall that the other day I was making pictures like the one to the right, where I explore what happens when we try to extrapolate US production via a linearization, but we start to mess around with where we start and end the linear fit. The specific one shown is Hubbert linearization of EIA field crude production with linear fit from 1958-2005.

Long experience has taught us that the linearization generally does a bad job in the early part of the history (though I didn't know why till now), so we don't usually start at the beginning. But then the question becomes how sensitive our answer is to where we start. Deffeyes picks 1958, but why, and is that a good place? Additionally, we'd like to know how sensitive it is to where we end the fit. In the past, we had less data and so had to stop the fit sooner. If our answer has been changing a lot in the recent past, we can't have much confidence that it won't continue to be volatile in the future.

Next, Khebab made the very nice density plot to the right, where he explored a fairly large space of possible starting and ending points. The way to read his graph is as follows. The ending year is on the X-axis, and the starting year is on the Y-axis. For each point on the graph, he has done a linearization, and predicted the final US URR (ultimately recoverable resource). A color-code denotes the answer, and you can see it's been fairly close to 220gb-230gb for quite a while (the right hand side of the picture), with only modest fluctuations. Encouraging.

I wanted to move the ball a little further down the field. I had two major goals. One was to come up with a quantitative error bar for the URR estimate. The second was to explore which of various prediction techniques does the best job. I noted in a piece on Predicting US Production with Gaussians that the Gaussian actually fits the US data better, especially in the early stages. Indeed, that seems to be the reason why the linearization doesn't quite work at early times in the production history - the early tail is not matching a logistic well, but is matching a Gaussian well. So I wanted to explore more thoroughly projection with Gaussians too.

Thus in my analysis I explore predicting URR via four different fit/extrapolate techniques (a minimal code sketch of each follows the list). They are:

  1. Making a linearization of the data, and then extrapolating the straight line fit out to the X-axis to get the URR.
  2. Directly fitting a Hubbert peak (first derivative of the logistic) to the production data in the P/t (production versus time) domain.
  3. Fitting a quadratic to the log of the production data as a way to estimate the Gaussian parameters.
  4. Directly fitting a Gaussian to the production data in the P/t domain.
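For concreteness, here is a minimal sketch of these four fits in code. This is not the actual script used for the plots below; it assumes `t` is an array of years, `P` annual production (gb/yr), and `Q` cumulative production (gb, positive over the fit window), and the starting guesses for the non-linear fits are just plausible round numbers of my own.

```python
import numpy as np
from scipy.optimize import curve_fit

def urr_linearization(P, Q):
    """1. Hubbert linearization: fit P/Q = a*Q + b and extrapolate the straight
    line out to the x-axis; the intercept with P/Q = 0 is URR = -b/a."""
    a, b = np.polyfit(Q, P / Q, 1)
    return -b / a

def hubbert_peak(t, urr, k, t_peak):
    """First derivative of the logistic, in the production-versus-time domain."""
    e = np.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

def urr_logistic_direct(t, P):
    """2. Direct non-linear fit of the Hubbert peak to P(t)."""
    popt, _ = curve_fit(hubbert_peak, t, P, p0=(220.0, 0.06, 1975.0), maxfev=20000)
    return popt[0]

def urr_quadratic_gaussian(t, P):
    """3. A Gaussian is a quadratic in log space, so fit log(P) with a
    quadratic and convert the coefficients back to Gaussian parameters."""
    c2, c1, c0 = np.polyfit(t, np.log(P), 2)   # c2 < 0 if there is a peak
    sigma2 = -1.0 / (2.0 * c2)                 # width^2 of the Gaussian
    t_peak = c1 * sigma2                       # location of the peak
    peak_rate = np.exp(c0 + t_peak ** 2 / (2.0 * sigma2))
    return peak_rate * np.sqrt(2.0 * np.pi * sigma2)   # URR = area under the curve

def gaussian(t, urr, t_peak, sigma):
    return urr / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-(t - t_peak) ** 2 / (2.0 * sigma ** 2))

def urr_gaussian_direct(t, P):
    """4. Direct non-linear fit of a Gaussian to P(t)."""
    popt, _ = curve_fit(gaussian, t, P, p0=(220.0, 1975.0, 25.0), maxfev=20000)
    return popt[0]
```

Each of these returns a URR estimate in gb for whatever window of (t, P, Q) it is handed, which is what gets repeated over the grid of start and end years below.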
The first and third involve linear fits, while the second and fourth require non-linear iterative fits. However, modern computing equipment and software being what they are, the difference is barely noticeable any more. For all of these techniques, I repeated the fit at a sizeable range of starting and ending years. The following plots are the result.

These are analogous to Khebab's plot above. Notice that in the far back there is a region where the end year is before the start year, which doesn't make any sense. So I just set the answer to always be zero there. In the foreground of the plots, there is a more-or-less flat horizontal area which I refer to as the zone of stable prediction. Typically it involves having the end time fairly recent, and the start time reasonably early (but the exact nature of the stable region depends on the technique). As you move around in that area, the answer doesn't change too much. However, as you move back into the plot, things go haywire. The curtain across the middle is the area where the start and end time are a very small number of years apart. Clearly, when that's true, the fit is unlikely to work well as it becomes very susceptible to the noise in the data.

Stability assessments of estimates for ultimately recoverable US oil production (URR). In each case, URR is plotted against the start and end of the range of years used for fitting. Click to enlarge each figure. Four prediction techniques are used: top left is Hubbert linearization. Top right is direct fitting of the logistic curve in the production versus time domain. Lower left is a Gaussian model based on fitting a quadratic to the log of production. Lower right is direct fitting of a Gaussian in the production versus time domain. The underlying data is from the EIA estimate of field crude production.

Now, the most interesting thing to me is that the Hubbert linearization technique (top left) produces the largest zone of stability - there's a noticeably larger smooth area towards the front of the plot, and as you move back towards the middle, it goes haywire less quickly. The other three techniques are all of roughly similar quality to one another.

This is particularly interesting given that the Gaussian actually models the data better. However, there's a big difference between making pretty fits of past data and actually being able to predict new data robustly. In general, making pretty fits to past data is best served by having a model with a lot of parameters in it. That means the curve has more degrees of freedom with which to wiggle itself around to the nicest shape lying along the data. However, lots of parameters means there are likely to be more ways for the model to get close to the data, which allows for greater uncertainty in what the parameters actually are, and so they can end up further from the truth. Models with too many parameters can suffer from overfitting, in which the regression chooses a model that is essentially optimized to the particular noise in the data, losing touch with the true dynamics that would allow it to extrapolate successfully outside the range where the data is.

A simple model (ie fewer parameters), even if it doesn't actually fit the data as well, may do better just because the few parameters it has are better constrained. The linearization trick has the merit of removing one parameter from the situation (the date of the peak), which then means we are in a better position to estimate the others (at least better when we have only a marginally adequate part of the data history). At least, that's my best guess as to what's going on.

The next four plots are essentially the same thing for the same four techniques, except drawn as contour plots rather than rendered as three dimensional surfaces. This allows us to see what the zones of stability are like a little more quantitatively. In each case, blue corresponds to a URR of zero (or undefined), red corresponds to 300gb or more, and green is the 150gb point. The contours are 3gb apart.

Stability assessments of estimates for ultimately recoverable US oil production (URR). In each case, URR is plotted against the start (x-axis) and end (y-axis) of the range of years used for fitting. The zone of stability is generally in the upper left of each plot. Click to enlarge each figure. Blue is URR=0, Red is URR>=300gb, and contours are 3gb apart. Four prediction techniques are used: top left is Hubbert linearization. Top right is direct fitting of the logistic curve in the production versus time domain. Lower left is a Gaussian model based on fitting a quadratic to the log of production. Lower right is direct fitting of a Gaussian in the production versus time domain. The underlying data is from the EIA estimate of field crude production.

So the first thing to become clear again is that the linearization (top left) has the largest region of approximate stability. Most importantly, it's the only technique that is approximately reliable before one actually hits peak (in the mid seventies). J.H. Laherrère in his paper The Hubbert Curve: Its Strengths and Weaknesses makes several Gaussian extrapolations from early on that fail, but here we can see more systematically that the linearization works best for early predictions.

However, another thing becomes clear too - its stability region is not as flat as the smaller stability regions of the other techniques - in particular the Gaussian techniques. I think what's going on here is that once there really is enough history to constrain the parameters well, the Gaussian technique starts to do better because it is actually a better model of the data. As we know, in the linearization the early data is always drifting upwards from the straight line, and this tends to distort the estimate unless we make the start date late - and then we cannot take advantage of as much data to fix the parameters as the Gaussian technique can. By using more data, the Gaussian can effectively bridge across the (rather lumpy) noise.

To come up with the URR and error estimates, I took a triangular region at the top left of each picture and grabbed all the different URR estimates out of them. I then got averages and standard deviations of those estimates. I used a larger triangle for the linearization than the other three techniques. Those numbers come out at:

Case                 Triangle side (yrs)   Mean URR (gb)   Edge URR (gb)   Sigma (gb)
Linearization        40                    215             225             10
Direct Logistic      30                    232             236             5
Quadratic Gaussian   30                    218             220             4
Direct Gaussian      30                    217             218             4

You have to look at these in the context of the pictures above. The linearization URR trend is still going up, whereas the others are flat or wandering as they approach 2005. So I think the linearization is headed up towards the direct logistic estimate of 230gb or so. So the question becomes: do we believe that answer, or the Gaussian answer? I prefer the Gaussian at this stage, since it's well constrained with this much of the curve in view, and it seems to do a significantly better job of fitting overall, especially in the early tail. Presumably, there is some central limit type reason for this (though I wish we knew exactly how that worked), and if so, we'd expect the late tail to be Gaussian also.

The main difference in the late tail is going to be as follows. The logistic curve has a decline rate that asymptotically approaches K. The Gaussian has a decline rate that keeps increasing linearly with time, without bound. So in the late tail, the Gaussian starts to decline a lot faster than the logistic. I believe this is why the Gaussian URR estimates are a little lower than the logistic ones.
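To spell that difference out (my own algebra, using nothing beyond the standard forms of the two curves): write the logistic production as the derivative of Q(t) = U/(1 + e^{-k(t - t_m)}) and the Gaussian as a bell curve of width sigma centred on t_m. The instantaneous decline rates are then:

```latex
P_{\text{logistic}}(t) = \frac{U k\, e^{-k(t-t_m)}}{\bigl(1+e^{-k(t-t_m)}\bigr)^{2}},
\qquad
-\frac{d\ln P}{dt} = k - \frac{2k\, e^{-k(t-t_m)}}{1+e^{-k(t-t_m)}} \;\longrightarrow\; k
\quad \text{as } t \to \infty

P_{\text{gauss}}(t) = P_0\, e^{-(t-t_m)^{2}/2\sigma^{2}},
\qquad
-\frac{d\ln P}{dt} = \frac{t-t_m}{\sigma^{2}}
\quad \text{(keeps growing linearly with time)}
```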

Given all this, I take as my estimate the Gaussian estimates. What's in the table is the standard deviation, but what I quote above is a two-sigma error bar: 218 ± 8gb. Note, I am intentionally keeping the standard deviation here as the error bar rather than reducing it according to the number of observations since the noise here looks very lumpy rather than iid random. Therefore, I'm not assuming any potential for it to cancel (to be conservative).

My estimate can be contrasted with that of Deffeyes (228gb, based on linearization in Beyond Oil), and that of Bartlett (222gb, using Gaussians). Also, Khebab quotes a figure of 222gb and has some very interesting discussion, but doesn't quote an error bar. I don't quite agree with his technique there because he's effectively assuming random uncorrelated noise, and the real noise doesn't look like that - it's lumpy and nasty.

Update [2006-1-13 3:22:37 by Stuart Staniford]: I decided on reflection that there's a problem with estimating the URR by averaging over the triangle - it means that the most recent years in the production profile are underweighted in the overall average. Thus we fail to take account of the most recent data properly, which should inform us most. So I added an extra column to the table for the Edge URR, which is just averaged over the leading (most recent) edge of the triangle. I still use the fluctuations in the full triangle for my error bar estimate, however. Otherwise the reasoning is unchanged. So my estimate is now 219 ± 8gb.
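For concreteness, here is roughly what that bookkeeping looks like in code. This is only a sketch of the procedure, not the actual script used; the grid of estimates is assumed to live in urr[i, j] for start_years[i] and end_years[j], and the triangle anchored at the earliest-start / latest-end corner is my shorthand for the stable region described above.

```python
import numpy as np

def triangle_stats(urr, start_years, end_years, side):
    """Mean URR over the triangular stable corner, the mean over its leading
    (most recent end-year) edge, and the standard deviation over the triangle."""
    first_start, last_end = start_years.min(), end_years.max()
    triangle, edge = [], []
    for i, s in enumerate(start_years):
        for j, e in enumerate(end_years):
            # Inside the triangle: close to both the earliest start and the latest end
            if (s - first_start) + (last_end - e) < side:
                triangle.append(urr[i, j])
                if e == last_end:          # leading edge: fits ending in the last year
                    edge.append(urr[i, j])
    triangle = np.array(triangle)
    return triangle.mean(), np.mean(edge), triangle.std()

# e.g. mean_urr, edge_urr, sigma = triangle_stats(urr, start_years, end_years, side=30)
```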

Finally, one of the interesting discoveries I made in writing this post was that in making the plot to the right, I actually got very lucky. That is a pretty decent extrapolation that is sitting in a saddle between several areas of quite poor predictions. So sensitivity analysis is always a good idea.

So when you favour the Gaussian estimate, are you also saying that you favour the Gaussian's increasing decline rate?  If so, how does that compare with your earlier calculation of a constant, low decline rate?
For world production, as opposed to US production, we are at or close to peak (I believe). So decline rates will be quite low under either model for quite some time and we won't be able to tell the difference in the different decline rate behavior for decades. However, the US is a lot more advanced in its overall production profile.
So what do the Hubbert and Gaussian curves look like for the world so far?  Given the huge 'noise' in the production graph in the early 80s, how do these curves fit the actual data?
I'm planning to get there in stages.
Stuart - could you redo the world logistic but frame it in a per capita way? You can get the data from EarthTrends
It would be interesting to see the shape turn from a gaussian curve to more of a steeper drop on the right side.

When people are shown a Hubbert curve for the first time (or second or 3rd) they seem to think that 2030 will be like 1975, which wasn't too bad - but there will be 2.5x the people by then compared to 1975. Just wondering if you'd graphically done that before...

So you're even more pessimistic than Deffeyes and Bartlett...
$150-a-barrel crude prices, a $5.32 pump price for gas
-a certainty if we nuke Iran in March.

Teaching differs from simply broadcasting information in that the teacher must modify their behaviour, at some cost, to assist a naïve observer (my edit - me) to learn more quickly.
This is what Franks and Richardson found - follower ants would indeed find food faster when tandem running than when simply searching for it alone, but at a cost to the teacher (my edit - TOD), who would normally reach the food about four times faster if foraging alone.

Journal reference: Nature (DOI: 10.1038/439153a)

 Output fell to 4.01 million barrels of oil equivalent a day, 2.2 percent less than the 4.1 million reported in the year-earlier period, London-based BP said today in a statement. BP said it would cut more than 1,000 jobs in Europe to reduce costs.

The Texas City plant, the third-biggest refinery in the U.S., remains closed after a March explosion and damage from the storms and is a setback for BP at a time of high oil prices.

BP said fourth-quarter costs would include $130 million, partly to repair Thunder Horse (my edit-at least $110 m partly and TH doesn't get into the GOM until 07).

So BP's Texas City is down, 3 refineries east of NO are down along with Pascagoula, and the heavy-crude Valero plants can't process without hydrogen - the Airgas plant producing the hydrogen was destroyed outside NO.

That's 6 refineries down, still, or producing at much less than capacity.

Speaking of Thunder Horse...any word, even a rumor, of what the problem was?  BP is being very close-mouthed.  They said it wasn't the hurricane and it wasn't hull damage and it wasn't the computer-controlled ballast system.  Then what was it??
BP is being very close-mouthed-

I saw a short closely cropped video flash by on CNBC/Bloomberg.

If this was TH, the superstructure cranes were crushed into the decking.  BP said earlier that TH had suffered 10% damage.

That's how I came up with the $110 million pricetag-10% plus overage.

So many intelligent oil people out there.  So little info.

Conoco's 247,000 bpd Alliance refinery is not expected to restart until December or even January

Local environmental activists were alarmed that residents were even visiting the area. EPA tests of the air in mid-September had detected unsafe levels of benzene. "People shouldn't even have been given an option to go back in," says Wilma Subra, an environmental chemist in New Iberia, La., who has served on EPA advisory committees.

My information is that Thunder Horse had technical troubles with the valves of its ballast system, causing it to lean over out of balance. I can provide nice photos of it.

Another concern was that it proved difficult to secure it to its anchors in the deepwater current.

What's happening with Thunder Horse is a real-time example of how EROI will impact the energy world going forward. Getting deepwater oil has a higher energy cost to begin with - if companies start anticipating and including higher depreciation values on machinery, downtime, transportation, hurricanes, insurance etc, at what point do the majors say 'let's make oil from coal instead of getting it from GOM deepwater'? EROI of 5-1, 3-1? Somewhere certainly...
(translation: a repeat of the 2005 hurricane season in 2006 will cause large approvals of Fischer-Tropsch plants)
Great work Stuart (rain in San Fran correlates to detailed oil analysis on TOD).

I am going to do a quick check on the major 20 or so oil companies in US to look at what they claim to have as proven reserves in US - your analysis assumes we have 218-192 = approx 26 billion barrels left. My gut tells me adding the proven reserves up will be way higher than that.

Which gets back to EROI. Is it possible that one of the mystery reasons why linearization works so well (as compared to other methods) is that it implicitly accounts for eventually reaching a point of EROI of 1-1 - even though there is 'oil' left, it just doesn't make energetic sense to get it? Whereas other more aggressive approaches see 'geologic' oil and just assume it will be pumped?

Hubbert pre-dated EROI but that may be an underlying principle he observed on individual wells and areas.

Incidentally, need I point out that 26 gb doesn't last long when the country in question is using 8gb a year...

I meant proven and PROBABLE - proven reserves for US are about 22gb
I tend to just ignore reserve numbers because of all the various problems. However, 22gb does not sound wildly inconsistent with my estimate.
EROI: doesn't it depend on the type of energy invested? For example, might it not be worth a boe (barrel of oil equivalent) of solar to create a barrel of highly useful liquid fuel? Obviously it's not worth burning a barrel of oil to gain a barrel of oil; that would be consuming capital to no net effect. But isn't using solar/wind more like living off income? (conveniently ignoring CO2 and climate chaos considerations...)
Yes it depends on the energy invested, but the value of that liquid fuel is likely to be very high in the marketplace if the EROEI is less than 1. So, for instance, it might make sense for certain uses to have and continue using some petroleum fuel products even after EROEI drops below 1.0, but the general public won't be that consumer. It would likely be very restricted to the ultra wealthy and/or government services that absolutely had to have such fuel to operate (military?). The net effect to the economy is the same - the gasoline-powered, automobile-driven suburban culture we've created can't be sustained in that scenario without switching to an entirely different energy base.
Thanks Stuart for that post! For confidence interval estimates I tried to apply a bootstrapping technique to the Hubbert linearization fit using the R software:

Bootstrapping Technique Applied to the Hubbert Linearization

The results are the following for the US production:

larger image
For the [1936, 2004] interval I find the following confidence intervals:

URR(50%)= [220.39 222.65] Gb
URR(90%)= [218.62 225.24] Gb
URR(95%)= [218.19 228.21] Gb

The figure below is the corresponding histogram of the URR estimates from the bootstrap replicates:

larger image

There are more details in the first link above (with the R source code also). That's it for now, I have to go, I will post more comments later.

I apologize for the bad image display - I put 100% in the width attribute but then it does not rescale properly! The links (larger image) are the original images, which are actually smaller. I wish we had an edit button!
After staring at my pictures some more, I think I'm going to do a slight update tonight. I think estimating the URR from the average across the stability region is not quite the best approach because it doesn't take all the latest information fully into account - basically the newest production data is underweighted in the average. I don't have time to do it now, but I think I'm going to estimate my URR from just the leading edge of the stability region, but continue to estimate my error bars by the deviation across the stability region. I'll take a look at your new post tonight too.
Hm... looks like the lower-95% line in 2005 is higher than the upper-95% line in 1985. This implies that the method is broken, or at least unreliable.

In fact, I'm seeing a definite upward trend from 1980 on. I'm not sure that the trend is asymptotic to anything.

Hm, maybe the abiotic oil is seeping in, at a rate that looks like about 750 MB/yr.

YES I'M JOKING!

But something funny is going on.

Chris

Yes - I'm concerned that Khebab's method right now underestimates the error bars because his bootstrap procedure assumes the noise is uncorrelated - which it clearly isn't.  I speculate that what you're seeing is probably a consequence of that.  You're right that that upward trend higher than the error bars is a sign of trouble.

Khebab:  One rough and ready bootstrap approach you could take is the following.  Fit the model to the actual data.  Obtain the residuals curve (data minus model).  Chop the residual curve up into sections where the end of a section is always a point where the residual curve crosses the x-axis.  Thus each section will be a little bump where the data is strictly above the model, or a little bump where the data is strictly below the model.  The bumps, I believe, should alternate one up and then one down (some bumps may only have one year in).  Now create a series of random permutations of the bumps that preserve the alternating property of them.  Create a new replicate by adding this permuted-bump residual curve to the original model.  Then fit a new model to that replicate.  I believe the permuted bump replicate of the residual curve should have roughly similar autocorrelation structure to the original residual curve.  Repeat for many replicates, and use the resulting histogram of parameters for your confidence interval estimation.  You'd better actually plot the residual curve, some replicates, and some autocorrelation vs lag graphs and make sure this looks sensible in practice, however.
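In code, the idea would look roughly like this (just a sketch of the suggestion, untested; fit_model and model_curve are hypothetical stand-ins for whichever fit is being bootstrapped):

```python
import numpy as np

def residual_bumps(residuals):
    """Split the residual curve into maximal runs of one sign ('bumps')."""
    signs = np.sign(residuals)
    bumps, current = [], [residuals[0]]
    for r, s_prev, s in zip(residuals[1:], signs[:-1], signs[1:]):
        if s == s_prev:
            current.append(r)
        else:
            bumps.append(np.array(current))
            current = [r]
    bumps.append(np.array(current))
    return bumps

def permute_bumps(bumps, rng):
    """Shuffle the up-bumps among themselves and the down-bumps among
    themselves, so the up/down alternation of the original is preserved."""
    pos = [b for b in bumps if b.mean() > 0]
    neg = [b for b in bumps if b.mean() <= 0]
    pos = [pos[k] for k in rng.permutation(len(pos))]
    neg = [neg[k] for k in rng.permutation(len(neg))]
    out, i, j = [], 0, 0
    for b in bumps:                        # walk the original alternation pattern
        if b.mean() > 0:
            out.append(pos[i])
            i += 1
        else:
            out.append(neg[j])
            j += 1
    return np.concatenate(out)

def bump_bootstrap(t, P, fit_model, model_curve, n_rep=1000, seed=0):
    """Refit the model to many permuted-bump replicates of the data."""
    rng = np.random.default_rng(seed)
    params = fit_model(t, P)               # fit to the actual data
    resid = P - model_curve(t, params)     # data minus model
    bumps = residual_bumps(resid)
    reps = []
    for _ in range(n_rep):
        P_rep = model_curve(t, params) + permute_bumps(bumps, rng)
        reps.append(fit_model(t, P_rep))
    return np.array(reps)                  # histogram these for confidence intervals
```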

Another thing that would be worthwhile with your current iid-bootstrap replicates is to do the start-end sensitivity analysis for one of them (since you already have the code).  I suspect you'll find the prediction is much more stable than the real data (ie the bootstrap is not producing replicates with enough fidelity to the true bumpiness of the data).

Stuart, the bootstrap procedure does not require generating new noise samples but uses an in-place resampling procedure. For more details check the following document:

John Fox. Bootstrapping Regression Models, Appendix to an R and S-plus Companion to Applied Regression

In contrast, the nonparametric bootstrap allows us to estimate the sampling distribution of a statistic empirically without making assumptions about the form of the population, and without deriving the sampling distribution explicitly. The essential idea of the nonparametric bootstrap is as follows: We proceed to draw a sample of size n from among the elements of S (the initial dataset), sampling with replacement.

If you want to play with the procedure, you can use the R source code (R is free open source software) I posted on po.com (Bootstrapping Technique Applied to the Hubbert Linearization).

I'm familiar with bootstrapping, but I took a look at your link just in case. Let me restate what I think you're doing, and you can tell me if I'm wrong. You take the collection of N (P,t) data points. From that pool of N observations, you repeatedly sample one, with replacement. You do that N times, and that gives you a new collection of (P,t) points - some of the originals are missing, and some are duplicated. You then fit the model to that replicate. Repeat ad nauseam, and then use the distribution of obtained model parameters to estimate confidence intervals.
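In code, that case-resampling procedure would look roughly like this (my own sketch, not Khebab's R script; fit_model again stands in for the linearization fit):

```python
import numpy as np

def pairs_bootstrap(t, P, fit_model, n_rep=1000, seed=0):
    """Resample the (P,t) observations with replacement, refit each replicate,
    and return the distribution of fitted parameters."""
    rng = np.random.default_rng(seed)
    n = len(t)
    replicates = []
    for _ in range(n_rep):
        idx = rng.integers(0, n, size=n)      # N points sampled with replacement
        replicates.append(fit_model(t[idx], P[idx]))
    return np.array(replicates)               # e.g. take percentiles for CIs
```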

The second procedure in the document - what he calls fixed x-resampling - is what I assumed you were doing last night - taking the residuals from the model, and resampling from those. The issue with applying bootstrapping here - under either of those approaches - is that because the resampling is independent, the fluctuations it introduces relative to the original data have no autocorrelations year to year. Hence these fluctuations are very unlike the original noise. In particular, they are likely to move the regression fit around less. The original noise, because it is quite autocorrelated (bumpy) can have multiple years conspire together to throw the prediction off. The deviations from the model introduced by the bootstrapping do not have this property.

I suggested my residual-bump-permutation idea because it seems to me the resulting replicates would have the right kind of time structure (lumpiness), but preserve the non-parametric sampling aspect that is so attractive about bootstrapping. I could be wrong though - it's just an idea at this point.

Thinking about it now, if you construct the sequence of residual bumps, you could also resample from those, instead of permuting them. I doubt it would make much difference either way.

You take the collection of N (P,t) data points. From that pool of N observations, you repeatedly sample one, with replacement. You do that N times, and that gives you a new collection of (P,t) points - some of the originals are missing, and some are duplicated. You then fit the model to that replicate. Repeat ad nauseam, and then use the distribution of obtained model parameters to estimate confidence intervals.

That's correct. I didn't have a chance to look at the fixed x-resampling yet. I don't know if iid noise is a requirement of the bootstrapping approach. My guess is that it's not a requirement because the method is a non-parametric way to evaluate the error pdf. If I understand correctly you are proposing a non-uniform resampling to take into account the noise "bumpiness".
Hmmm.  What can I say more than, "trust me, it is a problem".  Think about a simpler situation.  Suppose we have 10 observations x_i drawn from an unknown distribution and we'd like to create a confidence interval around the mean.  So we assiduously start resampling the 10 observations to see how much the mean jiggles around and we create a confidence interval.  Suppose in scenario 1 that the x_i were in fact iid picks from a normal distribution.  Then our bootstrap procedure is useful, and the error bar it generates should pretty much map to the error bar you'd expect from dividing the sample deviation by sqrt(10).  Now suppose in scenario 2 that the experimenter who took the ten data points tells us that he believes they are not independent, and in fact he expects that the autocorrelation R^2 in successive observations is 99%.  Should we still trust our error bars?  Clearly not, right?  We have only slightly more than one independent observation.  The data contain no useful information that would allow us to estimate an error bar and our bootstrap confidence interval procedure is worthless.

Does that more extreme example make the general issue clearer?
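To see it numerically, here is a toy simulation of that scenario (my own illustration, not anything from the analysis above):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 10, 0.99          # 10 observations, very strong autocorrelation

def ar1_series(n, rho):
    """A short, strongly autocorrelated series with unit stationary variance."""
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = rho * x[i - 1] + np.sqrt(1.0 - rho ** 2) * rng.normal()
    return x

# How much the sample mean really varies across independent realisations
true_sd = np.std([ar1_series(n, rho).mean() for _ in range(2000)])

# What an iid bootstrap on a single realisation thinks the uncertainty is
x = ar1_series(n, rho)
boot_sd = np.std([rng.choice(x, size=n, replace=True).mean() for _ in range(2000)])

print(true_sd, boot_sd)    # the bootstrap figure comes out far too small
```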

Ok, I got your point. There is maybe a pertinent technique called importance resampling that can be used with bootstrapping, where you assign non-uniform weights to your data.

There is also maybe another phenomenon when we do the following regression:

P/Q = aQ + b

then we estimate K and the URR:

K = b

URR = -b/a

The URR will be dependent on the error (bias and variance) on both a and b. Put another way, there is a negative correlation between the estimates for K and the URR. For instance, the figure below shows a scatterplot of the bootstrap replications of the URR and K coefficients for the BP data (the concentration ellipses are drawn at the 50, 90, and 99% levels from the covariance matrix of the coefficients):

In particular, if K is overestimated post-peak then the URR will be underestimated, and conversely.

This makes sense in terms of the original graph. If we imagine taking principal components in the above picture (are you familiar with PCA?), the dominant component (the axis along the long thin dimension of your confidence interval) is related to uncertainty in the slope of the straight line (in the original P/Q vs Q plot). Changes in the slope change U and K together, but inversely to one another (as you rotate the line slightly, U increases and k decreases, or vice versa). By contrast, the orthogonal direction in your plot above represents uncertainty in the vertical position of the line, which also changes U and k together, but in the same direction (as the line moves up with no change in angle in the P/Q vs Q graph, both U and k increase). The data constrain the vertical position better than the slope, so your confidence region is long and thin in the "slope" principal component, and narrow in the "vertical offset" principal component. I think this feature of the graph is not an artifact of the independent bootstrap, but would be present in a less biased error analysis also.
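To put a little algebra behind that geometric picture (my own quick check, using the notation above, K = b and U = -b/a with a < 0):

```latex
\text{Vertical shift } (\Delta a = 0):\qquad
\Delta K = \Delta b, \qquad
\Delta U = -\frac{\Delta b}{a} > 0 \text{ for } \Delta b > 0
\quad\text{(both move together)}

\text{Rotation about the data centroid } Q=\bar Q \;(\Delta b = -\bar Q\,\Delta a):\qquad
\Delta K = -\bar Q\,\Delta a, \qquad
\Delta U = \frac{a\bar Q + b}{a^{2}}\,\Delta a
         = \frac{b\,(1-\bar Q/U)}{a^{2}}\,\Delta a
```

Since Q̄ < U, the last factor is positive, so a rotation moves U and K in opposite directions (the long axis of the ellipse), while a pure vertical shift moves them together (the short axis).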
Hm... looks like the lower-95% line in 2005 is higher than the upper-95% line in 1985. This implies that the method is broken, or at least unreliable.

Not necessarily - any estimator has a bias and a variance that hopefully converge toward 0 as the size of the dataset increases. We can see clearly that the estimator variance is going down with time.
Yes, but while the size of your confidence intervals is quite stable over time, it is systematically too small relative to how much the prediction actually moves.
That's true; it should mean that the Hubbert linearization is biased, with a slowly converging bias compared to the variance, which is converging faster. I will have to think about it.
Presumably, there is some central limit type reason for this (though I wish we knew exactly how that worked), and if so, we'd expect the late tail to be Gaussian also.

The requirement for a Gaussian seems to be a lot of identical and independent production profiles. I started a little experiment on peakoil.com about that (Convergence of the sum of many oil field productions). If the production profiles are not identical (random URR, depletion rate, growth rate), the curve becomes skewed and has a tendency to become a Gamma distribution (the Gaussian is actually a limiting case of the Gamma). Part of the answer lies in the Central Limit Theorem convolution formulation or, equivalently, in the manipulation of the characteristic functions (the Fourier transform of a pdf).
Two questions:

  1. The sum of N logistic functions is supposed to be what? a Gaussian or a new Logistic?

  2. What made you assume the production of a single field to be a triangular distribution function? Can you reference some work on single field production?

Thanks, and keep up the good work.
  1. A Gaussian for sure, if the N logistic functions have exactly the same parameters and are simply randomly shifted. If you add some randomness in the shape (for instance the area under the curve) you will have a slightly skewed Gaussian.

  2. Because it's the simplest (dumbest?) unimodal function that you can design. You can easily control the shape of the curve (upward and downward slopes), it's also a finite-support function (there is a beginning and an end), and it's very different from a Gaussian. (A quick simulation in this spirit is sketched below.)
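Here is a quick sketch of that kind of experiment (a reconstruction in the same spirit, not the actual script posted on peakoil.com): sum a few hundred randomly shifted and randomly shaped triangular profiles and compare the total to a Gaussian with the same area, mean, and variance.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0, 0.25)      # years, arbitrary units
dt = 0.25

def triangular_profile(t, start, ramp_up, ramp_down, peak_rate):
    """Production ramps linearly up to peak_rate, then linearly back to zero."""
    up = np.clip((t - start) / ramp_up, 0.0, 1.0)
    down = np.clip(1.0 - (t - start - ramp_up) / ramp_down, 0.0, 1.0)
    return peak_rate * np.minimum(up, down)

total = np.zeros_like(t)
for _ in range(500):                  # 500 "fields" with random parameters
    total += triangular_profile(
        t,
        start=rng.normal(80.0, 20.0),       # random start dates
        ramp_up=rng.uniform(3.0, 10.0),     # random growth-phase length
        ramp_down=rng.uniform(5.0, 20.0),   # random decline-phase length
        peak_rate=rng.lognormal(0.0, 0.5),  # random field size
    )

# Gaussian with the same area, mean and variance as the summed profile
mean = np.sum(t * total) / np.sum(total)
var = np.sum((t - mean) ** 2 * total) / np.sum(total)
gauss = np.sum(total) * dt / np.sqrt(2.0 * np.pi * var) * np.exp(-(t - mean) ** 2 / (2.0 * var))
# Plotting total and gauss together shows a roughly Gaussian sum, slightly
# skewed to the right when the decline phases are longer than the growth phases.
```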
However, another thing becomes clear too - its stability region is not as flat as the smaller stability regions of the other techniques - in particular the Gaussian techniques.

I looked at your 3D surfaces and I'm not convinced of that. For the Hubbert linearization, the domain delimited by end year > 1990 seems very flat and stable to me. If you take the line of URR values for the end year equal to 2005, it's a very flat line compared to the other methods, which have a bumpy behavior when start year > 1975. Because the Hubbert linearization increasingly dampens the effect of noise for mature years (>1975), the fit will always be more robust for ending years > 1975. Too bad we can't perform a Gaussian fit in the P/Q vs Q domain to see if this effect also benefits the Gaussian.
There are certain start years (around 1960 and again around 1980) where there are ridges of high stability (running up and down the picture above). However, between those ridges (say if you pick your start year significantly before 1960, or around 1970), you're in a valley that appears to be rising. Eg if you had a start year of around 1950, then the URR estimate is rising around 3gb with each decade of additional end-year. If you pick 1930 as your start year, then you have a lower URR estimate right now, but now it's rising around 5gb-6gb per decade. These trends have been approximately steady for at least a couple of decades. So it's acting like the whole thing is heading up towards the ridges, which are at around 230 gb - higher than the average of the approximately stable region. So my bet is that the linearization estimator will stabilize not too far from there, but that the actual URR will be lower because the data curve will do the Gaussian die-out and sneak down a little bit at the end (assuming there isn't some wild-card of unexpected major new discovery).