The Fake Fire Brigade Revisited #3 - The Biggest Part of Business As Usual - Electricity
Below the fold is the 3rd in a series of follow-up posts providing analysis on the difficulties of maintaining our current energy paradigm with renewable energy (generally, 'the fake fire brigade'). The main authors are Hannes Kunz, President of the Institute for Integrated Economic Research (IIER), and Stephen Balogh, a PhD student at SUNY-ESF and Senior Research Associate at IIER. IIER is a non-profit organization that integrates research from the financial/economic system, energy and natural resources, and human behavior with the objective of developing/initiating strategies that result in more benign trajectories after global growth ends. The authors have written an extensive follow-up to the questions raised in the original posting, and I've broken it into 5 pieces for readability - the 3rd installment, with a focus on electricity generation in an energy transition, is below the fold. This installment has been delayed a few weeks due to Hannes taking time off to get married....
The Biggest Part of Business As Usual - Electricity
In this third installment in this series, we want to put some emphasis on one of the most important enablers of human civilization of the 20th century: electricity. Its ubiquitous availability from every power plug is something we take for granted, despite the fact that stable electricity production is probably one of the most complex continuous endeavors of mankind, and one where many poorer countries fail.
In this post we would like to provide an overview of some of the properties of electricity, describe its nature (as a flow based system), and explain what challenges it faces in the future – especially those related to maintaining current delivery patterns once we have to increasingly rely on inputs no longer coming from fossil fuels that can be stored and burned mostly at our discretion, but from increasingly stochastic, largely uncorrelated flows such as solar or wind.
Electricity is a core topic of IIER’s research, because for us, maintaining anything that more or less resembles our current advanced economies is synonymous with uninterrupted, reliable electricity which mostly comes as a discretionary service to the user. Users, in this case, aren’t just private consumers, but also industrial and commercial applications, which are part of any advanced society.
Electric power is also the area of greatest debate, greatest hope and greatest investment, and the area where IIER thinks that societies face challenges with all their current attempts. Presently, OECD countries are targeting electricity generation as a means to meet carbon emission reduction goals, while simultaneously encouraging the development of non-fossil-fuel-based transportation (e.g. electric vehicles) and other moves away from coal and oil in industrial applications. They do this – so we think – without a robust plan for how to maintain today's delivery security. All plans aim at combining wind, solar, geothermal, nuclear, and super- and smart grids into one new robust delivery system, and there seems to be general agreement that this will actually work. But after thorough and unbiased research into the characteristics of electricity delivery systems, the parameters of those new technologies and the discrepancies between assumptions and reality, we are now skeptical as to whether societies will be able to provide stable electricity at acceptable prices going forward. We realize that this statement will be considered almost a sacrilege.
Below, we will try to explain our concerns step by step, and why we fear that investing hundreds of billions in an electricity system that is far more complex and far less reliable will lead us in the wrong direction, given the details of our current situation. Once again, a clarification: we are not arguing the fact that we slowly have to move away from fossil fuels and start using more renewable sources to provide our energy needs. However, we disagree with the common notion that societies can make this renewable energy transition and still receive the same services as today: stable and affordable electricity not just for private consumption, but for all uses that are part of an advanced industrialized society.
IIER’s Electricity Availability Index
In our first post, we introduced IIER’s Electricity Availability Index. It measures the availability of electricity in a country based on penetration (% of population with electricity) and reliability (outages and duration of outages per average customer).
Figure 1 – IIER Electricity availability index
Some commenters questioned the relationship between electricity and wealth (measured in purchasing-power adjusted GDP per capita). This was the first hypothesis we tested when developing the EAI metric. The chicken-and-egg question can, as we think, be resolved quite easily by testing in which direction we find the outliers. If the assumption that "wealth is possible without stable electricity" were correct, there should be countries with low electricity availability that are still quite rich (measured in GDP per capita). However, these do not exist; the "richest" outlier is resource-rich Botswana (diamonds, copper, nickel) with close to $14,000 per capita and an EAI of only 21.9%. On the other hand, we do find rather poor countries with almost 90% electricity availability (such as the Philippines and Mongolia, with a per capita GDP of around $3,500), which leads to the conclusion that the correlation is unidirectional. In other words: you don't have to be rich to have stable electricity, but your country needs stable electricity to become (or stay) rich.
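The direction-of-outliers test described above can be sketched in a few lines. The mini-dataset below merely restates the examples named in the text, so treat this as an illustration of the logic, not a reproduction of IIER's full dataset.

```python
# Outlier test for the claim "wealth requires stable electricity": if wealth
# were possible without it, some country should combine a low EAI with a
# high GDP per capita. Data points restate the article's named examples.
countries = {
    "Botswana":    {"eai": 21.9, "gdp_pc": 14_000},
    "Philippines": {"eai": 90.0, "gdp_pc": 3_500},
    "Mongolia":    {"eai": 90.0, "gdp_pc": 3_500},
}

# Rich (> $15,000 per capita) but with unreliable electricity (EAI < 50%)?
rich_without_power = [name for name, d in countries.items()
                      if d["gdp_pc"] > 15_000 and d["eai"] < 50]
print(rich_without_power)  # -> []: no such outlier among these examples
```

The asymmetry (poor countries with high EAI exist; rich countries with low EAI do not) is what supports the unidirectional reading of the correlation.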
The benefits of electricity
There are two discrete aspects of electricity's importance to society: the benefit of its ubiquitous on-demand availability, and the severe side-effects of power interruptions. Let's look at a simple illustration. Few companies in OECD countries install backup power for desktop computers, despite the risk of data loss during a power outage. The reason is economic – outages are so rare that the cost of buying, maintaining and operating backup equipment outweighs the risk of an outage, which is why only servers and data centers are deemed worth the investment in power backup solutions. In emerging or developing countries, backup systems are commonplace, but only for businesses that can afford them. Most local businesses cannot, which makes backup primarily an option for international corporations, while local companies are at a disadvantage.
Other applications, particularly of an industrial nature, can't even operate with backups; they simply need a power guarantee. The pots of an aluminum smelter require uninterrupted power 24/7, 365 days a year. If power is lost for more than a few hours, not only does the process stop, but after a short while the aluminum begins to solidify, with the consequence that the entire pot has to be scrapped, incurring costs of millions of dollars. Or think of a shopping mall that suddenly goes dark: no lights except for emergency lighting, no access to transaction services to process a credit or debit card, no elevators or escalators, and ultimately no sales. There are multiple studies on the cost of "reliability events" in power grids, each reporting very significant losses (a lot of research has been done at Berkeley Lab; documents can be found at: http://certs.lbl.gov/CERTS_P_Reliability.html). So while – as many people correctly say – power outages are just a nuisance to private households as long as they don't exceed the time a fridge or freezer can hold its temperature, they are a threat to all the more complex industrial and commercial activities that make our societies "advanced" and require the humming of electricity-driven machinery almost around the clock.
This now ties back to the Electricity Availability Index – many things are either impossible or economically not feasible in environments where grid stability becomes an issue. And even for applications where it is theoretically possible to ramp them up and down without efficiency or material losses based on energy availability, there are significant social costs associated with unpredictability. If there is no power, should we send all the workers home for a week, and call them again at 1am on the Sunday when supply comes back? We can certainly do this, but in reality we would probably rather cease many of those activities, because the opportunity cost of underutilized equipment and labor becomes so big that the final objective no longer makes economic sense.
What is electricity and how is it delivered
There are two ways that electricity is supplied. In smaller, poorer, or more remote areas, electricity is produced by standalone solutions that provide comfort or capabilities to those able to afford them. Often this means diesel generators which can produce electricity as required, or standalone hydro, coal or natural gas power plants which serve a local area or industrial activity. Increasingly, solar panels combined with batteries provide this service, or wind turbines in conjunction with oil-based generators. The key characteristic of this type of delivery system is usually a very high cost per delivered kWh.
In richer economies or even in urban areas almost all around the world, electricity is delivered via a centrally managed grid, which balances inputs and outputs effectively to ensure that demand is always met. In poorer countries, this often does not work out, with the consequence of regular grid breakdowns. In OECD countries, however, we are so used to the grid’s reliability that even small power outages regularly make the news headlines. Below, we will mostly focus on grid based systems, as only those are capable of delivering the basic industrial and commercial services for societies we are used to receiving.
What we get from our power sockets as "electricity" is the product of an electric current that is converted into useful work by an appliance. To make sure that those appliances work, particularly more fragile ones involving electronics, voltage and frequency must be standardized across entire regions (for example 120V/60Hz in North America or 230V/50Hz in Europe).
An electricity delivery system can be compared to a complex set of water pipes where water (electricity) enters at multiple points and is withdrawn at hundreds of thousands of faucets. Unlike a water delivery system, however, these electrical 'pipes and faucets' are so fragile that they almost immediately burst or collapse when too much or too little water is in the system. In other words, electricity is a fully flow-based system, where inputs and outputs have to be matched at any point in time with deviations of less than 0.5% between supply and demand (see ENTSO-E manuals for more detail: https://www.entsoe.eu/index.php?id=57, particularly the one on "Emergency Procedures").
Figure 2: Grid based system (Source)
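The tightness of that balance constraint can be made concrete with a toy check. The 0.5% tolerance is the figure cited above; the megawatt numbers are purely illustrative, not real grid data.

```python
# Toy illustration of the flow-balance constraint: supply must track demand
# within a very tight band (the article cites <0.5% deviation) before
# protective systems start disconnecting load or generation.
TOLERANCE = 0.005  # 0.5% of demand

def grid_ok(supply_mw, demand_mw):
    """True if the supply/demand mismatch is within the tolerance band."""
    return abs(supply_mw - demand_mw) / demand_mw <= TOLERANCE

print(grid_ok(50_100, 50_000))  # 0.2% oversupply: within tolerance -> True
print(grid_ok(50_500, 50_000))  # 1.0% oversupply: out of band      -> False
```

On a 50 GW system, the entire permissible mismatch is only about 250 MW, roughly one mid-sized power plant, which is why every input and output must be continuously managed.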
Currently, this system is fully supply-controlled (i.e. production is following expected and actual demand), which is why it has become so beneficial to society. It delivers seemingly unlimited and unrestricted amounts of energy to each room in our homes, offices and factories, and except for heavy loads in an industry or computing (server farms), there is no user-level planning required before flipping a switch, plugging in a heater, turning on a computer. Electricity just flows according to one’s needs. Later, we will examine demand side flexibility, but first, we want to focus on the supply side, which is where electricity systems are controlled today.
Figure 3 – schematic delivery system (current status)
Demand, which follows the cycles of human ecosystem patterns (days, nights, work/non-work days, heat, cold), is today matched by a combination of power sources that together form a highly flexible supply system, which also includes reserves to cover unexpected demand spikes or sudden supply-side failures, for example when a power plant experiences an emergency shutdown. We will dive into the different load patterns and reserve provisions a little further down, but the key characteristic of the vast majority of inputs today is that they are fully predictable and mostly controllable. Inputs come partly from steady flows (like a running river), but by a large majority from stock-based resources that can be consumed whenever there is a need, such as coal, natural gas, stored water or nuclear power (the latter could, for reasons to be discussed further down, also be seen as a steady flow). So in essence, what we have built is a highly complex system that converts steady flows and stocks into a well-managed, demand-driven flow of electric current.
Figure 4 – types of inputs into electricity grids
What most OECD countries plan to do is to replace some of those steady flows or stocks on the supply side by adding more and more renewables with erratic flows. Currently, those stochastic, non-controllable flows from solar and wind power account for a maximum of 5% of total power production in each of the interconnected grid systems we are aware of [see Table 1 for the U.S. (combining the Western and Eastern interconnections for lack of data) and for the European interconnected grid system – ENTSO-E], but by 2030, most countries in the Western world plan for 20 or 30% of electricity to be delivered from those two sources alone, accompanied by other new technologies.
Table 1: wind and solar power share in 2009/10 for major grid systems (EIA 2010, ENTSOE 2010)
In Europe, the almost 5% of solar and wind is very unevenly distributed, with some countries close to 0% and others (Denmark) already experiencing up to 20% from those renewable sources. All the countries with high shares manage their problems with significant help from their neighbors. Very small Denmark, for example, uses the comparably huge hydropower systems of Norway and Sweden to buffer its heavily variable wind output.
This grand plan – to maintain something that is already highly complex by adding multiple layers of complexity – is something we are very concerned about. The overarching challenge is to keep a flow-based demand system working while stochastic, non-controllable flows gain a significant share of supply, and to do so without jeopardizing grid stability, and at a price which is still affordable. We believe that most people underestimate this challenge and that it actually may be insurmountable. Important: "affordable" in this case doesn't just mean that individual households can pay for their relatively small amount of required electricity – they may well be able to bear 20 or 25 cents per kWh – but that an entire industrialized society can afford to provide all the goods and services that make it what is considered "advanced".
Figure 5 – shift to larger amounts of stochastic flows
What is an acceptable price for electricity?
What a high cost of oil does to societies has been well researched and documented in a number of papers (see: http://www.iiasa.ac.at/Research/ECS/IEW2005/docs/ppt/IEW2005_Maeda.ppt). High oil prices seem to be a clear inhibitor of economic growth and an early indicator of coming recessions. The reason is that the higher the cost of energy, the less of our effort can go towards discretionary spending (Hall, Powers and Schoenberg 2008). It is an inherent property of EROI: the energy and money we spend to procure and extract energy are unavailable for discretionary and non-discretionary investment and consumption.
There is no reason why the situation should be different for energy inputs other than oil, as higher energy costs always lead to this diversion away from consumption and investment. However, creating a benchmark is not easy, as electricity rates remained relatively steady during the times when oil prices fluctuated heavily, which gives us no past reference.
Using oil, where a relatively solid research base exists, we wanted to create a benchmark for "tolerable" electricity prices. Some papers suggest that oil prices growing from 25 to 35 dollars have a negative impact of 0.3-0.5% on GDP in various countries (http://www.iea.org/papers/2004/high_oil_prices.pdf). We are currently at around $80/barrel, and still in the middle of a bad crisis, which only looks less bad because governments have started to run up deficits at a breathtaking pace. At $150/barrel, in 2008, the current recession began with a vengeance, and many researchers suggest that high oil prices played their part in triggering it.
So based on the experience of 2008, we can probably assume that oil prices around $150 per barrel choke many economic activities, as the marginal cost becomes unbearable for private and commercial consumers alike. Even at the current price of approximately $80/bbl, transportation and other energy-intensive sectors are under heavy pressure, and oil prices push commodity prices up. As a reminder: during the past 50 years, the median price for oil stood at about $25/bbl (inflation adjusted to current dollars). If we look at the energy content of a barrel of oil (6.1 GJ or about 1700 kWh), a price of $150 translates to 8.8 cents per kWh of oil, while $25 translates to 1.5 cents per kWh.
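The barrel-to-kWh arithmetic above is simple enough to verify directly, using the article's rounded figure of about 1700 kWh per barrel:

```python
# Convert an oil price in $/barrel into cents per kWh of raw energy content.
# One barrel holds ~6.1 GJ, which the article rounds to about 1700 kWh.
KWH_PER_BARREL = 1700

def oil_price_to_cents_per_kwh(usd_per_barrel):
    return usd_per_barrel / KWH_PER_BARREL * 100  # dollars -> cents

print(f"$150/bbl -> {oil_price_to_cents_per_kwh(150):.1f} cents/kWh")  # ~8.8
print(f"$25/bbl  -> {oil_price_to_cents_per_kwh(25):.1f} cents/kWh")   # ~1.5
```

Note this is the cost of raw energy content only; it says nothing yet about the relative usefulness of a kWh of oil versus a kWh of electricity, which the next paragraphs address.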
The difficulty now comes in finding a meaningful comparison between oil and electricity. Oil is a high quality and high density raw energy source with excellent properties with respect to transportation, storage and processing, while electricity provides a distributed service at a comparably high quality. We assume that the same energy content in electricity is of higher value to society when compared to oil, which thus can bear a higher cost for the same amount of energy (this was also part of the Divisia index developed by Cleveland et.al.: http://www.eoearth.org/article/Net_energy_analysis).
One method would be to compare the ability to convert each source to heat (http://www.eia.doe.gov/cneaf/electricity/epa/epat5p4.html). To produce the same amount of useful heat, about three times as much oil is required as electricity. So while the lower limit would be a direct 1:1 comparison, a "bonus" factor of three for electricity sets the upper limit. However, heat is today no longer the key use of oil; heat can be produced with natural gas or coal at much lower cost (less than a third that of oil). In the predominant applications for crude oil today, transportation fuels and chemicals, electricity is at a clear disadvantage. We therefore decided to assume a bonus for electricity in the middle of the two possible values, at 200%, i.e. we attribute twice as much value to a kWh of electricity as to a kWh of crude oil, and equally set the threshold for economic trouble at twice that of oil.
Table 2: relative prices of electricity and oil
Under this assumption, we see in Table 2 that electricity prices become critical at around 9 cents per kWh, equivalent to about $70/barrel of oil, and then unbearable at 15-18 cents (equivalent to $130-150 oil). This is an average value for an entire industrial society, as wealthy private consumers can tolerate rates even higher than 20 cents per kWh.
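Applying the assumed 200% quality bonus to the oil price thresholds reproduces the figures behind Table 2. The bonus factor is the article's assumption, not a measured quantity, so the sketch below only shows the arithmetic:

```python
# Electricity price thresholds derived from oil price thresholds, using the
# article's assumed 2x quality bonus for electricity over raw oil energy.
KWH_PER_BARREL = 1700  # article's rounded energy content of a barrel
BONUS = 2.0            # assumed value premium of electricity over crude oil

def electricity_threshold_cents(oil_usd_per_barrel):
    return oil_usd_per_barrel / KWH_PER_BARREL * 100 * BONUS

# "Unbearable" range, derived from the $130-150/bbl oil analogue:
lo = electricity_threshold_cents(130)
hi = electricity_threshold_cents(150)
print(f"unbearable above roughly {lo:.0f}-{hi:.0f} cents/kWh")
```

The output reproduces the 15-18 cents/kWh band the text cites as the danger zone for an industrial society.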
But unfortunately, a society doesn’t just consist of consumers; it also needs to produce goods and services, and there, a cost of 15-18 cents will definitely be unacceptable. Given that Chinese manufacturers often operate with final electricity cost between 4-5 cents per kWh, even the 2008 average price paid for industrial electricity of 6.83 cents puts domestic U.S. companies at a significant disadvantage. At today’s electricity levels, highly energy-intensive applications are no longer competitive, which is already visible in industrial trends – it is not only labor-intensive work that is going abroad, energy-intensive industries such as aluminum smelting and steel manufacturing are leaving areas with high electricity cost.
Another way to create a metric for "acceptable" electricity prices is to look at the ratio of electricity cost to total GDP. At the average rate of 9.74 cents per kWh of delivered electricity, all electricity consumption costs the United States about 2.6% of GDP. If we separate out the industrial portion of GDP ($2,737bn in 2008), a similar portion (2.5%) is spent on electricity, at an average price of 6.83 cents. Should this price – for example – triple to 20 cents, suddenly 7.4% of industrial GDP would go towards electricity. This is far more than the profit margins of most energy-intensive industries.
For the U.S., where a large portion of heavy industry has already been cut back due to the relatively high cost of labor and energy compared to other places, such an increase may seem bearable. But what if China were to operate under the same regime, replacing current low-cost electricity from coal with expensive new sources? In China, electricity alone totals approximately 3.5% of GDP at an average cost of 5 cents/kWh; quadrupling the cost per kWh to the same 20 cents would demand that the country divert 13.8% of its GDP to electricity. This is not feasible, as it – together with oil, coal and natural gas – would divert more than 25% of total GDP towards energy alone, representing a society-level EROI of 4:1. One of the reasons why China fares so badly here is that the country provides a lot of the cheap energy Western societies no longer have, which the West then imports embedded in goods.
Table 3 – electricity price sensitivity U.S. and China
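The sensitivity arithmetic behind Table 3 is linear: holding consumption constant, the GDP share spent on electricity scales with the rate. The base shares and rates below are taken from the text; the small differences from the article's figures (7.4%, 13.8%) come from rounding in those base shares.

```python
# Back-of-the-envelope sketch of the electricity-cost-to-GDP sensitivity in
# Table 3. Treat the outputs as illustrations, not precise statistics.
def share_at_new_rate(base_share_pct, base_rate_cents, new_rate_cents):
    # Holding consumption constant, cost share scales linearly with the rate.
    return base_share_pct * new_rate_cents / base_rate_cents

# U.S. industrial sector: ~2.5% of industrial GDP at 6.83 cents/kWh
us = share_at_new_rate(2.5, 6.83, 20)
print(f"U.S. industry at 20 c/kWh: {us:.1f}% of industrial GDP")

# China: ~3.5% of GDP at 5 cents/kWh (article's Table 3 shows 13.8%)
cn = share_at_new_rate(3.5, 5.0, 20)
print(f"China at 20 c/kWh:         {cn:.1f}% of GDP")
```

The asymmetry is the point: the same 20-cent rate costs China roughly twice the GDP share it costs the U.S., because China's economy is far more electricity-intensive per unit of output.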
If we want to run a complete industrial society, looked at on a global scale, energy prices above certain levels are not sustainable, as they reduce available surpluses for consumption and investment. And unfortunately, those cost levels of 15-20 cents per kWh on average are exactly where societies are headed with the planned changes. We will cover those aspects in more detail further below, when looking at individual technologies.
Meeting demand – in more detail
In order to understand what we need and what we receive from multiple technologies, it seems important to split out the various types of load grid operators have to deal with.
Base load – defined as the long-term minimum demand expected in a region – is usually provided by technologies with relatively low cost, high reliability and limited ability to modulate output. This includes nuclear power plants, lignite coal plants and run-of-river hydroelectric plants. These plants typically have to operate continuously at relatively stable loads, as otherwise their efficiency is reduced significantly, leading to higher cost per unit of output. Also, re-starting these power plants is relatively time-consuming and inefficient. In most countries, base load capacity is capable of covering approximately 100% of low demand (during nights and weekends).
Intermediate or cyclical load – the foreseeable portion of variation in load over a day – is provided by load-following sources that can modulate to higher or lower output levels, or be turned off and on almost entirely within a relatively short time. However, these sources usually require some lead time to grow or reduce output, for example some coal power plants. Today, natural gas provides a significant portion of cyclical load.
Peak load – usually required within very short periods of time for a few hours a day – can be provided only from sources that can be turned on and off within minutes, this typically includes gas and small oil power plants as well as stored hydropower (dams or pumped hydro). Peak capacity can be provided by spinning reserve plants (e.g. running plants that can increase capacity quickly) or by non-spinning sources, which can be turned on within minutes.
Beyond technology limitations that make it difficult or uneconomic to ramp capacity up or down quickly, the key factor in a technology's suitability for peak, cyclical or base load use is the split between capital investment and fuel cost. The higher the fuel cost share, the more suitable a technology is for peak power; the higher the investment share, the more operating hours are required to arrive at an acceptable average price per kWh. We will look at this issue further below, but this, for example, is the main reason why nuclear power is such a poor load-following or peak source.
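The capital-versus-fuel logic can be made concrete with a toy average-cost calculation: annualized capital cost is spread over however many hours the plant actually runs, while fuel is paid per kWh produced. The cost figures below are round illustrative numbers, not actual plant data.

```python
# Minimal sketch of why capital-heavy plants make poor peakers: average cost
# per kWh = annualized capital cost spread over the hours run, plus fuel.
def avg_cost_cents_per_kwh(annual_capital_usd_per_kw, fuel_cents_per_kwh,
                           hours_per_year):
    # 1 kW of capacity running h hours produces h kWh, so capital cost per
    # kWh is the annual $/kW figure (in cents) divided by the hours run.
    capital_cents = annual_capital_usd_per_kw * 100 / hours_per_year
    return capital_cents + fuel_cents_per_kwh

# "Nuclear-like": high capital, tiny fuel cost. "Gas-like": the reverse.
for hours in (8000, 1000):  # baseload duty vs. peaking duty
    nuclear_like = avg_cost_cents_per_kwh(400, 0.5, hours)
    gas_like = avg_cost_cents_per_kwh(60, 5.0, hours)
    print(f"{hours} h/yr: nuclear-like {nuclear_like:.1f} c/kWh, "
          f"gas-like {gas_like:.1f} c/kWh")
```

At baseload utilization the two are comparable, but restricted to 1000 peaking hours per year the capital-heavy plant's average cost explodes while the fuel-heavy plant's rises only modestly, which is exactly the dispatch logic the paragraph describes.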
Demand flexibility has a (high) cost
Another point has to do with the flexibility of electricity use, i.e. the possibility of turning something on when supply is abundant and off when power is scarce. The problem lies with the nature of most uses: many applications are simply inflexible, like those that need to run 24 hours a day – data centers are among them, and so are some key industrial processes. Lighting is not flexible, nor are heavy household uses of electricity such as cooking, electronics or most kitchen appliances. We also want hot water and cool air when we need them, and usually we don't want to schedule our laundry because someone tells us to, even though this is probably the easiest part. Some applications, particularly heating (air and water) and cooling (air and goods), do have some flexibility potential. We can run a freezer or air conditioner that produces ice to bridge supply gaps, or build a water heater which produces enough hot water to get us through the day – a very common application today in Switzerland, where night rates are often half of daytime rates even for households. However, such a time shift comes with tradeoffs: any application that uses storage instead of directly converting electricity into the desired output (heat or cold here) ultimately adds cost, for several reasons.
Making equipment flexible comes at a cost, either the cost of information transfer (for price-regulated markets) or the cost of storing the required energy for later use. France has been quite active in experimenting with contracts that allow regulating delivery according to supply, where customers pay less for power that can be cut off at any point in time. This is especially important in France because of the inflexible nature of its generation technology mix, with almost 70% coming from nuclear power. Yet the flexibility French grid operators were able to elicit from that market mechanism, despite the heavy incentives, was around 2-3% of total peak demand (according to RTE, the French grid operator). Most users obviously prefer the inconvenience of higher prices to the inconvenience of service interruptions, even for things that are not mission-critical. This fact leaves us with approaches that actively shift energy consumption without affecting the end-user. Mostly, this translates to some kind of storage, which has a number of disadvantages.
Every piece of equipment that includes a storage mechanism is significantly more complex than one that operates without, and because of that complexity becomes more expensive, more energy-intensive in its manufacturing, and more exposed to failure. Additionally, each storage process incurs losses. If we produce hot water at night that should last through the entire day, some of the heat dissipates, dependent on how well insulated the storage tank is (again this is dependent on cost and effort, as well as space). The same is true for air-conditioners or freezers that use ice produced at night as buffer – they are less energy efficient overall. Both applications can still be economical for the end user and society as a whole if they use cheap base-load power at night and avoid using peak electricity during the day. Ice-based air-conditioning systems are quite common in office buildings in some parts of the U.S., where utilities charge different rates between night and day. But there is a caveat: all those approaches are geared at balancing two almost steady systems with fully predictable 24 hour cycles, nightly base load production and daily usage patterns with a peak or two. Thus, the maximum storage time required is 10-15 hours, which reduces system complexity as well as conversion and storage losses to acceptable levels. Now with renewable energy supplies, we are suddenly confronted with irregular patterns that can include days to weeks of over- and undersupply. In those cases, storage and conversion losses beyond a few days become almost insurmountable hurdles, as cumulative losses grow quickly over time.
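How quickly cumulative standing losses mount can be sketched with one line of arithmetic. The 3%-per-day loss rate below is an assumption chosen for illustration; real rates vary widely with insulation and storage technology.

```python
# Cumulative standing losses for a thermal store: if a few percent of the
# stored energy dissipates per day, overnight buffering is cheap, but
# bridging a week-long supply lull gets expensive fast.
DAILY_LOSS = 0.03  # assumed 3% of stored energy lost per day (illustrative)

def remaining_fraction(days):
    return (1 - DAILY_LOSS) ** days

for days in (0.5, 1, 7, 14):
    pct = remaining_fraction(days) * 100
    print(f"after {days:>4} days: {pct:.0f}% of stored energy left")
```

Over the 10-15 hour cycle of today's night/day buffering the loss is barely noticeable, but over the multi-day gaps that stochastic renewables can produce, a fifth to a third of the stored energy simply dissipates before it is used, on top of conversion losses.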
So in a nutshell – there are technical solutions for many of these problems, but often the outcome no longer makes economic sense – neither for the individual user nor for a society.
Moore’s law and receding horizons
A key assumption of many forward projections for renewable energy production is that the technology will become cheaper and cheaper over time. Unfortunately, this isn’t true for many technologies, especially as fossil fuel inputs become more expensive.
One of the most often cited rules in energy discussions is Moore's law, which describes the fast advancement of capacity improvements (and price decreases) in computing power. It says that the density of computing power can double every two years, a pace that has been achieved relatively consistently since 1970. This is why a smartphone today has more capacity than a large mainframe computer of the early Seventies.
However, outside electronics, Moore's law does not apply and has never applied to anything. A physical structure remains a physical structure, and does not have the multiplication potential that comes from miniaturization. We may be able to raise the efficiency of a photovoltaic panel from 18 to 20%, but not double it every two years no matter what we do, given the physical limits. The same is true for the materials used in its manufacturing; we might reduce them, but often only by 10-20%, and sometimes at the cost of more complex tools and purer materials (which also require energy). And erecting a modern wind turbine always requires steel, concrete and many advanced materials, which won't change no matter how much we optimize it.
For normal industrial goods, price curves often show an asymptotic form. When a technology is new, neither its production nor its outputs are focused on efficiency; production facilities are small and processes involve a lot of manual labor. Also, new technologies often get produced in advanced economies with higher labor and energy cost. With maturing manufacturing technologies, more efficient and scaled up factories, and the inclusion of lower cost labor and energy from – for example – China, production becomes cheaper and prices fall. Eventually, when labor and production costs become optimized, the decline in price of the product slows, until it reaches a stable retail price more dependent on the raw materials and energy required to produce and transport the good.
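This asymptotic shape can be sketched with a simple learning-curve model in which the manufacturing share of the price shrinks with every doubling of cumulative output, while raw materials and energy set a floor. All parameters below are invented to show the shape; they are not fitted to any real product.

```python
import math

# Illustrative asymptotic price curve: learning effects shrink the
# manufacturing share of a product's price, but raw-material and energy
# inputs set a floor the price cannot fall below.
def unit_price(cumulative_units, initial_manufacturing=80.0,
               learning=0.8, floor=20.0, base_units=1000):
    # Each doubling of cumulative output cuts manufacturing cost by 20%.
    doublings = math.log2(max(cumulative_units / base_units, 1))
    return floor + initial_manufacturing * learning ** doublings

for units in (1_000, 8_000, 64_000, 1_000_000):
    print(f"{units:>9,} units: price index {unit_price(units):.1f}")
```

The decline is steep at first and then flattens toward the floor; and if the floor itself rises with input prices, the "dotted line" reversal described below takes over regardless of how much manufacturing improves.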
In many cases, the picture for raw materials and raw-material-driven products begins to look like the dotted line, despite rapidly growing output:
Figure 6 - Marginal cost curve for supply-constrained resources
During the past few years, we have seen a reversal in this key underlying trend, one that briefly visited our economies in 2008 when, with rising resource prices, everything from food to fuels suddenly became more expensive. Thanks to the economic crisis and reduced demand, this phenomenon has partially disappeared, but for some key commodities (such as copper, iron ore, coking coal and some others), we are already back to pre-crisis levels or higher. This is the "glass-half-full" problem, which applies to almost all natural resources, but first and foremost energy: even if we – as many people correctly state – have enough of something in the ground, getting it out becomes more difficult, has to happen further away and in geopolitically riskier places, and so on.
This is confirmed by the cost of new power plants: estimates have recently gone up due to higher input costs for almost everything, from nuclear to coal to wind towers. Even for solar panels, the steady reductions of the past stalled between 2003 and 2008, despite rapidly growing production. The last important cost reduction began around 2006, when Chinese manufacturers entered the market, bringing low-cost (mostly coal-based) production energy into the game - not truly a sustainable model. In 2009, overcapacity and sharply lower raw material prices brought costs down again, and there may be room for some further reductions, but this story ends once input prices rise again.
Figure 7 - Cost of solar panels (PDF warning)
If that core trend of higher energy cost, particularly at the historically lowest-priced end, cannot be reversed - and we doubt it can be - this has implications for everything that uses those inputs: prices rise with the cost of the raw materials and energy that go into them. This effect might, in turn, effectively end the trend of ever-lower prices for everything, including energy generation technology of any kind.
Figure 8 - The "old" trend
Figure 9 - The "new" trend
Base load power – a real problem
Except for solar and wind, most of the technologies currently seen as potential future output providers deliver base load power. This is true for biomass, for geothermal, for nuclear, and to a certain extent for coal. All those generation approaches have only limited load following capabilities, for very different reasons.
Now stochastic renewable sources (mostly wind) are coming into play, often with priority grid access ("right of way"), i.e. no limits on selling into the grid at a preferred price. Whoever comes next only gets to sell when there is still demand, and - in a free electricity market like those in most OECD countries - that means prices for coal, nuclear and other base load outputs without preferred status (biomass mostly has that status) drop sharply. Some analysts have even considered this a positive phenomenon, but it is not. Because wind is given preference, it pushes the marginal price (but not the cost) of those steady sources down, making base load generation economically unattractive: less steady demand at lower prices simply translates into an unacceptable risk for investors. Spot markets are among the key reasons why no new nuclear and hardly any coal power plants were built in Western economies during the past decade.
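This merit-order effect can be sketched with a toy dispatch model: when zero-marginal-cost wind gets priority access, the market-clearing price received by everyone else drops. All plant types, costs and capacities below are invented for illustration and are not taken from any real market.

```python
# Toy merit-order dispatch: offers are dispatched cheapest-first, and the
# marginal cost of the last unit needed to meet demand sets the price for all.
# All numbers are illustrative placeholders, not real market data.

def clearing_price(demand_mw, offers):
    """offers: list of (marginal_cost_per_mwh, capacity_mw) tuples."""
    remaining = demand_mw
    for cost, capacity in sorted(offers):
        remaining -= capacity
        if remaining <= 0:
            return cost  # this plant's marginal cost clears the market
    raise ValueError("demand exceeds total offered capacity")

offers_no_wind = [(20, 400), (35, 300), (60, 200)]  # e.g. nuclear, coal, gas
offers_windy = [(0, 250)] + offers_no_wind          # add 250 MW wind at ~zero marginal cost

print(clearing_price(600, offers_no_wind))  # coal sets the price
print(clearing_price(600, offers_windy))    # wind displaces coal; nuclear now sets a lower price
```

With these made-up numbers the clearing price falls from 35 to 20 $/MWh once wind enters, even though the base load plants' actual costs are unchanged: exactly the squeeze on base load economics described above.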
In a future electricity system, we will see an increasing disparity between a growing pool of inflexible (for cost or technology reasons) base load power, a mission-critical pool of peak and cyclical load capacity, and that new, unpredictable pool of sources that deliver whenever they deliver, irrespective of demand.
A new electricity mix
If we use some currently available numbers for various electricity generation techniques, we might come up with the following for generation capacity in the United States, without any subsidies:
Table 4 – Cost and suitability of various generation technologies
We are aware that the numbers above are disputed, which is why we have included broad ranges. Precision is not the point we are trying to make – the point is that incremental replacement of fossil-fuel-based plants, especially cheap coal, with more expensive technologies has the potential to lead to large increases in the price of electricity.
Now, on top of the generation cost shown in Table 4, we have to bear the cost of maintaining and operating the electricity grid that delivers the power to homes, offices and factories, plus metering and some profit margin for the utility companies. For the U.S. today, where the grid does not have to do much more than transmit electricity generated according to demand, this adds between 2 and 7 cents per kWh.
Table 5 – Approximate share of final electricity cost (multiple sources, IIER calculations)
When looking at the cost ranges, it becomes quite obvious that even the lowest-cost new sources bring the total price of electricity dangerously close to what industrial users can afford.
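As a back-of-the-envelope sketch of how generation cost and the grid adder combine into a final price: the 2-7 cents/kWh range is the one cited above, while the example generation cost is a placeholder, not a value from Table 4.

```python
# Delivered-price estimate: generation cost plus the grid/metering/margin
# adder of 2-7 cents per kWh mentioned in the text.

GRID_ADDER_RANGE = (0.02, 0.07)  # $/kWh, per the text above

def retail_range(generation_cost_per_kwh):
    """Return the (low, high) delivered price for a given generation cost."""
    lo, hi = GRID_ADDER_RANGE
    return (generation_cost_per_kwh + lo, generation_cost_per_kwh + hi)

# A hypothetical 5 cents/kWh source would land at roughly 7-12 cents delivered
print(retail_range(0.05))
```

Even a fairly cheap hypothetical source ends up, after delivery, in a range that starts to matter for energy-intensive industry.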
What really matters is “useful energy”
And now comes the challenge: Only power that meets someone’s demand has a positive price. If I am asleep and someone offers me free power to light my entire house like a Christmas tree, I don’t care. On the other hand, when the food in my freezer starts to thaw, I would probably be ready to pay a very high price for the few kWh it needs to keep that device going. The same is true in aggregate. Spot electricity prices go as low as 0-3 cents during the night (or even negative; see http://www.scribd.com/doc/27816762/Negative-Prices-in-Electricity-Market), and up to 12, 15, sometimes even 50 cents at peak times during the day.
Now what we need to measure in order to understand the entire delivery system is not so much about the prices paid for one kWh of electricity produced, but instead the cost of electricity delivered according to demand. We want to determine how much it costs to provide a kWh from a particular source to supply our human energy demand patterns, and if that doesn’t work in a straightforward manner, we have to estimate the extra cost required to either shift it to the right time, or to shift demand to the time of production. Only once that has been factored in, do we know how expensive a kWh of electricity from a particular source really is.
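A toy calculation makes this point concrete: two sources producing the same number of kWh over a day can earn very different average prices depending on when they produce. The hourly price curve and output profiles below are invented for illustration only.

```python
# Demand-weighted value of electricity: revenue depends on *when* a source
# produces, not just how much. All numbers are made up for illustration.

hourly_price = [0.03] * 8 + [0.08] * 8 + [0.12] * 4 + [0.06] * 4  # $/kWh over 24h

flat_output = [1.0] * 24                              # baseload: 1 kWh every hour (24 kWh total)
peaking_output = [0.0] * 8 + [2.0] * 12 + [0.0] * 4   # daytime only, same 24 kWh total

def avg_price_received(prices, output):
    """Average price earned per kWh, weighted by when the output occurs."""
    revenue = sum(p * q for p, q in zip(prices, output))
    return revenue / sum(output)

print(avg_price_received(hourly_price, flat_output))     # roughly 6.7 cents/kWh
print(avg_price_received(hourly_price, peaking_output))  # roughly 9.3 cents/kWh
```

Same energy, roughly 40% more revenue per kWh for the daytime profile: which is why a kWh "delivered according to demand" is the meaningful unit of comparison, not a kWh in isolation.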
Sources with little flexibility, such as coal and nuclear or run-of-river hydro plants, mostly produce around the clock. Given their low average cost, the average prices received are profitable, despite the fact that during the night they sell below full cost, but usually above marginal (fuel) cost. The remaining costs (power plant investment, fixed operations cost) are incurred irrespective of plant output. Thus, adjusting output to more closely meet demand would incur even higher cost (or efficiency losses, or both), put stress on the equipment and require higher operations and maintenance efforts.
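A quick numerical sketch of why night sales below full cost can still make sense for a baseload plant: as long as the night price covers marginal (fuel) cost, every night kWh still contributes something toward fixed costs. All figures below are invented for illustration.

```python
# Toy baseload economics: selling below full cost at night is still rational
# if the night price exceeds marginal cost, and the 24h average price covers
# full cost. All numbers are illustrative placeholders.

full_cost = 0.05      # $/kWh, including capital and fixed O&M
marginal_cost = 0.02  # $/kWh, fuel only

night_price, day_price = 0.03, 0.09  # $/kWh
night_hours, day_hours = 12, 12      # constant output around the clock

avg_price = (night_price * night_hours + day_price * day_hours) / 24
night_margin = night_price - marginal_cost  # positive: night sales help pay fixed costs

print(avg_price, night_margin)
```

Here the plant sells at 3 cents against a 5-cent full cost at night, yet the 6-cent average over the day keeps it profitable, and curtailing at night would simply forgo the 1-cent contribution margin.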
If we had to run our grids with just those base load sources, electricity would be more expensive, either from those efficiency losses, from lost overproduction during the night (to still meet peak demand), or from additional measures to shift demand, such as incentives and storage (either in the network or in end-user appliances, as described above). This would add to the basic generation cost. After including these extra efforts, electricity generated in coal or nuclear plants (see section below) would have to sell at a higher price than just the generation plus distribution cost.
Other sources, mostly dammed hydro, oil, and natural gas, are generally able to deliver exactly on time (hydro only to a limited extent, as certain minimum flows must be maintained to keep river ecosystems below the dam intact). In general, we can turn them up when demand rises and cut production back as soon as less power is needed. Those sources do not require extra cost on top of their generation cost and the basic effort of operating a grid. A kWh of electricity produced from natural gas thus usually costs approximately 6-10 cents (as long as natural gas prices don’t change).
For sources that don’t have the characteristics described above, things become trickier. We wouldn’t be talking about smart grids, high voltage DC lines, storage in EVs, and more, if it wasn’t for the fact that most of the sources we want to add to our grid are unpredictable beyond the reach of our weather forecasts. For sources that are capable of producing everything between 0% and 100% of total nameplate capacity at any given time, irrespective of demand, we need very different approaches to make them work, and none come cheaply.
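To illustrate the 0-100%-of-nameplate problem: even installing stochastic capacity well above peak demand leaves nearly all of that demand needing dispatchable backup in the worst hour. The simulation below uses a crude uniform-random output (real wind is not uniform) and invented capacity figures, purely to show the shape of the problem.

```python
# Toy illustration: a source producing a random fraction of nameplate each
# hour, irrespective of demand. Demand and capacity figures are invented;
# the uniform distribution is a deliberate simplification, not a wind model.
import random

random.seed(42)
demand_mw = 1000
wind_nameplate_mw = 1500  # 1.5x demand, for illustration

# hourly output over a year, as a random fraction of nameplate
output_fraction = [random.random() for _ in range(8760)]
residual = [demand_mw - f * wind_nameplate_mw for f in output_fraction]

backup_needed = max(residual)                 # dispatchable MW we must still keep
surplus_hours = sum(r < 0 for r in residual)  # hours when wind alone exceeds demand

print(round(backup_needed), surplus_hours)
```

In this sketch the worst hour still requires backup close to the full 1000 MW of demand, while thousands of other hours overproduce: both sides of that mismatch cost money, which is the extra cost the following paragraphs are about.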
So overall, as with all energy sources, there are limits to the electricity costs that can be borne. And not just by us rich people who plan future energy systems, but by everybody, and by the industries that manufacture the goods we all use.
To be continued...
Next week, we will go through the currently available technologies for generation, transmission and storage, review the feasibility and total cost of each (including transmission and grid management), highlight some trends for the future, and ultimately provide our assessment of whether these technologies will be able to deliver what we need to keep grids going.
Previously in this series:
The Fake Fire Brigade - How We Cheat Ourselves about our Energy Future