CNet: Power could cost more than servers, Google warns

We have had our problems with Google over the past few months, but it is good to hear their engineers speaking some sense about energy efficiency...(link).
The largest growth in residential and commercial electricity consumption has been in space-cooling (air conditioning). In the US, 30% of the overall power generated goes to space-cooling.

This has the side effect of making cities hotter. Mix that in with global warming, and space-cooling will continue to be the largest growth factor for electricity consumption in the years ahead.

Here is an article I wrote detailing this:
Power Struggle: Plugged in to Global Cooling

The two are not unrelated.  Computers throw off a lot of heat, especially if you have a lot of them, and they are more vulnerable to heat than human beings.

My office did not have air-conditioning until we computerized. I bought an air-conditioner for my computer room because I was afraid the heat would burn it out during the summer.  Before I got a computer, I lived here 10 years without an air-conditioner.

Google, however, is in a VERY unique position.  Their applications require insane multiprocess throughput, but each process can be very VERY weak.

Thus Google, much more than anybody else, can really benefit from heavily multicore/multithreaded chips: such designs are all about THROUGHPUT/watt on multiple tasks, rather than LATENCY/watt (throughput on single tasks).

As such, their design space ends up being very different from many other servers where latency is as important as throughput.
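
To put some toy numbers on that (the core counts, request rates, and wattages below are invented purely for illustration, not real chip specs):

```python
# Toy throughput/watt vs. latency/watt comparison.
# All of these figures are hypothetical, for illustration only.

def requests_per_watt(cores, requests_per_sec_per_core, watts):
    """Total requests served per second, per watt of chip power."""
    return cores * requests_per_sec_per_core / watts

# A hypothetical fast single-core server chip: great latency per request.
fast_chip = requests_per_watt(cores=1, requests_per_sec_per_core=1000, watts=100)

# A hypothetical heavily threaded chip: each thread is much slower
# (worse latency), but there are many of them at similar total power.
wide_chip = requests_per_watt(cores=32, requests_per_sec_per_core=100, watts=80)

print(f"fast chip: {fast_chip:.0f} requests/s per watt")  # 10
print(f"wide chip: {wide_chip:.0f} requests/s per watt")  # 40
```

For a search-style workload made of many small, independent requests, only the second number matters; for a workload where one request has to finish fast, the first one does.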

It wouldn't surprise me if Google is experimenting with the Cell processor or Sun's 32-thread processors.  Google already uses custom systems (custom motherboards and custom software), and with the throughput/watt being so vastly superior on these heavily threaded architectures, they could benefit from them.

The Cell looks particularly interesting from Google's point of view, because it is designed to be high throughput & cheap.

I listened to a really good presentation (an mp3 on the web somewhere) last year about Google's hardware architecture.

They built a parallel computer using the cheapest possible components, and with software to provide redundancy and error recovery.

Picture a regional center with long rows of naked motherboards on shelves with just memory and hard disks.  A low-$ grunt would show up once a month to swap out dead disks and put in new ones.  Motherboards were not fixed, just dropped from the network until they were ultimately replaced with newer, faster, hardware.
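
The "software to provide redundancy and error recovery" part can be sketched in a few lines. This is only my guess at the general shape of such a thing, not anything Google actually runs:

```python
import random

# A hypothetical pool of worker boards; dead ones simply stop answering
# and get skipped until somebody eventually swaps the hardware.
NODES = ["node-%03d" % i for i in range(1000)]

def run_on_node(node, task):
    """Pretend to run a task on cheap, unreliable hardware."""
    if random.random() < 0.05:                 # ~5% of attempts fail
        raise IOError(f"{node} did not respond")
    return f"result of {task!r} from {node}"

def run_with_retry(task, attempts=5):
    """Software-level redundancy: just retry the task on other nodes."""
    for _ in range(attempts):
        node = random.choice(NODES)
        try:
            return run_on_node(node, task)
        except IOError:
            continue                           # drop that board, try another
    raise RuntimeError("all attempts failed")

print(run_with_retry("index shard 42"))
```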

It's a fantastic way to build the cheapest possible supercomputer in the shortest possible time, but as processors grew hotter the electric (and AC) bill had to climb.

I'd love to hear details of Google's evolution ... the best possible thing for the rest of us would be if the commodity motherboards they have been using shifted to more efficient processors.  I've been out of it for a while, but a couple years ago the Pentium-M was just starting to get a crossover desktop following, after making its name in notebooks.

I think that's the ticket (sorry Sun), commodity low-power notebook processors moving over to commodity desktop/server systems.

So it seems to me that Google needs to find some place in the far north with a fast-moving cold river (or tidal bore), run fat fiberoptic lines to it from all over the US, build a big hydroelectric plant and a big(!) data center, and use the cold water for cooling. After all, the only things that need to get to and from their site are bits, and those are easy to transport.
Maybe Google should build superserver centers beside the oilsands projects...  They need every bit of heat they can get.

Co-location, however, is the obvious answer.  If Google could see its way to installing more server arrays in places that actually have heat demand, particularly low-density heat demand, this might put a dent in the cooling side of their number-crunching costs.  Institutional and commercial buildings in much of Canada, the northern US and Europe are good candidates for co-location.  They may not need heat all the time, and some conventional and inefficient outside heat dumping may be necessary, but a 50% saving is better than 0% any day...

But I have to admit that there's something appealing about living in the woods and working as a network administrator...

What will it cost to get engineers and technicians to move from sunny California to the frozen north?  We recently looked at real estate in North Dakota and found many places where the highest asking price for a large house with acreage was under $100k. Whole towns could be bought for the price of one Silicon Valley McMansion. North Dakota also has abundant cheap wind energy resources.
I often see the assumption that expensive energy will make computers go away: We'll have to learn to entertain ourselves when we can no longer watch DVD's.

I don't think that's the case. My laptop uses about 30 watts, so a two-hour DVD takes about 0.06 kWh. At today's prices, that's half a penny. If electricity got 100 times as expensive, it would still cost less than a dollar.

My network connection says I've downloaded about 400 megabytes in the last ten days, just in email and web surfing. That's literally too cheap to meter today. ISP's today are selling data transfer at less than $1 per gigabyte. If electricity got 100 times as expensive, data would still be less than ten cents per megabyte. Downloading half a megabyte per day could still be too cheap to meter. That's plenty for text email and news. Downloading a new song at near-CD quality would cost a fraction of a dollar. VOIP would cost pennies per minute.
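
For the record, here's the arithmetic behind those numbers in one place (the ~8 cents/kWh electricity rate is my assumption; the $1/GB transfer price is from above, and the 100x case pessimistically scales the whole transfer price with electricity):

```python
# Back-of-envelope: what home computing costs in electricity, today and
# if electricity were 100x more expensive.
# Assumed rates: ~8 cents/kWh for electricity, $1/GB for data transfer.

PRICE_PER_KWH = 0.08      # dollars, assumed residential rate
LAPTOP_WATTS  = 30
DVD_HOURS     = 2

dvd_kwh  = LAPTOP_WATTS / 1000 * DVD_HOURS    # 0.06 kWh per movie
dvd_cost = dvd_kwh * PRICE_PER_KWH            # about half a penny
print(f"one DVD: {dvd_kwh} kWh = ${dvd_cost:.3f}, "
      f"x100 electricity = ${dvd_cost * 100:.2f}")

# Pessimistic case: assume the entire $1/GB transfer price scales with electricity.
per_mb = 1.0 / 1024                           # ~$0.001 per megabyte today
print(f"data: ${per_mb:.4f}/MB today, ${per_mb * 100:.2f}/MB at 100x")
```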

My MP3 player uses about 0.1 watt (runs 15 hours on one AAA). That could be supplied with about 200 cm^2 of solar cells, costing about 30 cents.

The cost of the computer itself would go up some, but electronics have such high value-density that shipping must be a tiny fraction of their cost. In fact, the precipitous decline in price of outdated technology argues that almost all the cost is in making new designs, not building the hardware. If as much as 1/10 the cost is energy, and if I'm willing to use older technology, then if energy went up 100 times I could still buy a computer for under $1,000.
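
To make that last step explicit, here is the same estimate with numbers plugged in (the $90 base price for an older-technology machine and the 1/10 energy share are assumptions, just to show the shape of the calculation):

```python
# If only a small fraction of a computer's price is energy, even a huge
# energy price increase has a modest effect on the total.
base_price      = 90.0    # assumed price of an older-technology machine
energy_fraction = 0.10    # assume 1/10 of that price is energy
multiplier      = 100     # energy becomes 100x more expensive

new_price = (base_price * (1 - energy_fraction)
             + base_price * energy_fraction * multiplier)
print(f"${base_price:.0f} computer becomes ${new_price:.0f}")   # $981
```

With these numbers the answer comes in just under $1,000, which is the point: when energy is a small slice of the price, even a huge energy increase leaves the total in the same ballpark.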

Watching DVD's by candlelight, or doing manual agricultural labor while wearing an MP3 player connected to solar cells on your hat, or using email to keep in daily touch with family members a month's travel away, are actually quite plausible. Actually, the most implausible part of this is the candle. White LED's are far better than candles.

This also means that the core communications infrastructure of a modern society could remain more or less intact, as long as society held together well enough to maintain it. News could still be delivered globally. Long distance calls could cost less than $1.00 per minute even with 100X electricity increase.

All of these prices will continue to drop exponentially for as long as the oil boom lasts. Even five years could make a substantial improvement.

Chris

Part of what you point out is true: consumer electronics can be run with solar panels and the proper power supply/transformer, but I don't see consumer electronics in our future.  They are so prolific today because we have cheap energy.  Producing electronics is an energy-intensive process: mining materials, refining materials, running production facilities, etc.

They are cheap because we have cheap energy, allowing the current scale of production.  Electronics will probably be manufactured for years or decades to come, but I don't think they will be made on the same scale they are today.

I know they cost money to produce, but I already thought of that.

  1. They do last a few years, and we'll have our existing inventory at the time the lights go dim.

  2. Looking at the rapid decline in price of less modern models, it seems that energy is a small part of the cost.

Here's a hypothetical: Plot a curve of price vs. the logarithm of CPU speed for computers sold new today. Extrapolate that line down toward near-zero CPU speed and read off the price where it lands. That will more or less show the cost of the non-IP component of the computer, including materials, labor, and energy (including transportation). I haven't done this, but I suspect that energy will turn out to be less than 1/10 the cost of the computer. If that's true, and energy increases by 10X, computer cost will less than double.
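
If anyone wants to actually try it, the mechanics would look something like this; the price points are invented and the 0.3 GHz extrapolation target is arbitrary, so this only shows the method, not a real estimate:

```python
import numpy as np

# Invented (GHz, dollars) points for machines on sale -- illustration only.
speed = np.array([1.5, 2.0, 2.6, 3.0, 3.4])
price = np.array([490, 535, 575, 595, 615])

# Fit price as a linear function of log(CPU speed)...
slope, intercept = np.polyfit(np.log(speed), price, 1)

# ...and extrapolate the line down toward a near-zero CPU speed
# (0.3 GHz here, an arbitrary stand-in for "barely any CPU at all").
floor = slope * np.log(0.3) + intercept
print(f"extrapolated non-IP cost: about ${floor:.0f}")   # roughly $250 with these made-up points
```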

Chris

I'm having trouble finding a reference, but I used to work at a memory manufacturer, and if I recall correctly, electrical power alone was well over 10% of manufacturing costs.  
  • Silicon for chip manufacture has to be crystalline, which means melting the pure silicon and letting it cool slowly into a 12-inch-wide cylinder.
  • Memory fabs run 24x7x365 with precise temperature and humidity control, and in very large clean rooms (filtered air).  From a bare wafer to a die that can be packaged into a chip takes months.  A power outage will ruin the batch.
  • Chip manufacture takes large amounts of water.  The company I worked at spent millions a year recycling water from production.

There are people out there who know a lot more about this than I do (I was a Perl programmer!) but you get the idea.  I do know that CPU manufacturers (Intel, AMD) and other VLSI chip manufacturers use the same basic production methods.  There are a lot of these chips in your computer.  The cheaper memory and CPUs we have now result (mostly) from figuring out how to get more circuits onto an area of silicon, and from bigger silicon wafers (up to 12 inches now).  The cost to produce each wafer doesn't change so much.

Writing this down makes me realize just how complex this system of production is.  Increased electrical costs will definitely increase the cost of the electronics.  However, what strikes me as more important is the vulnerability of the process to social disruption.  If you are running a batch of dies that takes 60 days to complete, and on the 59th day the electrical grid goes down, or the workers don't show up for one shift, you throw away those dies, worth millions of dollars, and start over.  If it happens more than a few times, the company goes under.  No paychecks for the workers and no memory for your Xbox 360.

I would rather check the wholesale price of the cheapest computers, etc., that are still in production, to get a lower limit for the manufacturing cost of a computer.

Consumer electronics could easily be built to last for 20-30 years, and be built with replaceable buttons, screens, connectors, etc. to make repairs easy.

What might be lost, perhaps only for an uncomfortable decade, is the rapid progress in capacity.

A reasonable fraction of today's electronics production would be enough to keep at least today's rich population supplied with computers, internet, TVs, radios, and MP3 players, and, more importantly, the gadgets needed in the control systems for the grid, waterworks, home heating, etc.

Brook, I just tried to find a reference, and I also had trouble. Finally found one (see below): looks like energy cost may be as low as 1% of sales revenue, for AMD.

Magnus, your method would also work, though it would give some over-estimate because of profits (which don't have to scale with costs).

Yes, the continuing availability of infrastructure is a key implication of this line of reasoning, and it's why I've gone into so much detail (and may post a pointer in the next open thread). But don't discount the importance of communications relative to industrial infrastructure. Communications are crucial to government infrastructure, and also to various accountability mechanisms.

On power use: First I found this:
http://ismi.sematech.org/modeling/iem/docs/SilSymp2002.pdf
Figure 1 shows manufacturing cost in $/cm^2, capital investment, and several other curves (transistors per chip, $/transistor, etc) on the same scale. Manufacturing cost is a small fraction of capital investment--and it's a log scale!

According to this:
http://www.micromagazine.com/archive/05/08/reality.html
a modern fab costs $2 to $3 billion; capital expenditures in 2004 were $49 billion; device revenue $220 billion. So 22% of revenues are spent on new construction each year. Hm. If fabs last 3 years, then a fab can produce over 10X its construction cost.

Here's another way to approach it. According to the previous cite, 400 fabs produced $220 billion; half a billion dollars per year apiece. If 10% of that were spent on energy at 5c/kWh, it would buy about a billion kWh per year per fab, or something over 100 MW of continuous power. Hm, that seems a bit high for a single fab. Or maybe not...
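
Spelled out (the 10% energy share is the hypothesis being tested; the 5 cents/kWh rate is assumed):

```python
# Back-of-envelope fab economics from the figures above.
fabs          = 400
industry_rev  = 220e9          # dollars per year
capex         = 49e9           # dollars per year
price_per_kwh = 0.05           # assumed industrial rate

rev_per_fab = industry_rev / fabs                    # ~$550 million/yr
print(f"revenue per fab: ${rev_per_fab / 1e6:.0f} million/yr")
print(f"capex share of revenue: {capex / industry_rev:.0%}")   # ~22%

# Hypothesis to test: 10% of a fab's revenue goes to electricity.
energy_budget = 0.10 * rev_per_fab                   # ~$55 million/yr
kwh_per_year  = energy_budget / price_per_kwh        # ~1.1 billion kWh/yr
avg_megawatts = kwh_per_year / (365 * 24) / 1000     # ~125 MW continuous
print(f"implied use: {kwh_per_year / 1e9:.1f} billion kWh/yr, ~{avg_megawatts:.0f} MW")
```

Since a working fab seems to draw more like the 50 MW I use below, the 10% share looks like an overestimate, which points the same way as the AMD numbers.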

Hah, finally found a reference! AMD uses about a terawatt-hour per year.
http://www.amd.com/us-en/Corporate/AboutAMD/0,,51_52_531_12132%5E12135,00.html
Net sales in 2004 were $5 billion.
http://www.amd.com/04copdf p. 16
If they pay 5 cents per kWh, that TWh costs only $50 million. That's only 1% of $5 billion.
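
Same arithmetic, just as a sanity check (the 5 cents/kWh rate is assumed, as above):

```python
# AMD: ~1 TWh/yr of electricity vs. ~$5 billion in 2004 net sales.
twh_per_year  = 1.0
price_per_kwh = 0.05          # assumed industrial rate
net_sales     = 5e9

energy_bill = twh_per_year * 1e9 * price_per_kwh     # 1 TWh = 1e9 kWh
print(f"energy bill: ${energy_bill / 1e6:.0f} million "
      f"= {energy_bill / net_sales:.1%} of sales")   # $50 million, 1.0%
```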

I know that uninterrupted electrical power is crucial to a fab. But if interruptions are a significant risk, they can build their own power supply. If a fab costs three billion dollars, uses 50 MW, and produces $12 billion per year, surely it wouldn't be hard to just slap that generating capacity onto the project. There's regulation, and the fact that power is still reliable... but 50 MW is what, a couple of mid-sized gas turbines?

If there are serious social disruptions, you'll have workers begging you to house them onsite, and relaxation of regulations to allow you to do it. You can put up tents in the parking lot that's going unused because no one can afford to drive to work. :-)

Chris

OK, more numbers...
http://www.future-fab.com/documents.asp?grID=208&d_ID=2304
Look under "energy metrics" and "water metrics"
Processing a square centimeter of wafer is supposed to take around 0.5 kWh, and 8-10 liters of water.

That's really negligible, considering the number of transistors you get in a square centimeter. Rising energy costs would increase capital costs somewhat, of course. But I just can't see energy being a major limiter of semiconductor production.
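
For scale (the 5 cents/kWh rate and the ~10^8 transistors per square centimeter are my rough assumptions for chips of this era):

```python
# Electricity cost per square centimeter of finished wafer, and per transistor.
kwh_per_cm2     = 0.5
price_per_kwh   = 0.05        # assumed industrial rate
transistors_cm2 = 100e6       # rough order of magnitude for a 2005-era logic chip

cost_per_cm2 = kwh_per_cm2 * price_per_kwh           # 2.5 cents/cm^2
cents_per_million = cost_per_cm2 / (transistors_cm2 / 1e6) * 100
print(f"{cost_per_cm2 * 100:.1f} cents of electricity per cm^2, "
      f"or about {cents_per_million:.3f} cents per million transistors")
```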

For energy use: they've already put a Palm Pilot into a wristwatch. Granted, it's a big wristwatch, and it has to be recharged frequently, but still...

When I remember what we did with 20 MHz 286 computers running DOS (uphill both ways), it is obvious that there is a vast amount of inefficiency in computers today that doesn't need to be there. Many apps are written in scripting languages that cost an order of magnitude in performance, on top of operating systems that cost another order of magnitude. Probably 99.9% of your computer's cost (manufacturing and operating) is spent in retroactively saving time for the designers. If energy starts to get expensive enough to affect computers, we'll easily give back at least one of those orders of magnitude.
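
As a crude illustration of one of those orders of magnitude, the kind of gap I mean looks like this (a plain interpreted loop vs. the same sum done in compiled code; the exact ratio varies a lot by machine and workload, so treat the output as illustrative only):

```python
import time
import numpy as np

N = 10_000_000
data = np.arange(N, dtype=np.float64)

t0 = time.perf_counter()
total = 0.0
for x in data:                # interpreted: one bytecode dispatch per element
    total += x
t1 = time.perf_counter()

t2 = time.perf_counter()
total_np = data.sum()         # the same sum in compiled, vectorized code
t3 = time.perf_counter()

print(f"python loop: {t1 - t0:.2f}s, compiled sum: {t3 - t2:.4f}s, "
      f"ratio ~{(t1 - t0) / (t3 - t2):.0f}x")
```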

Somewhere in Google, someone is probably already studying how to run their algorithms on FPGA's instead of CPU's, for a 90-99% energy savings...

Chris