CNet: Power could cost more than servers, Google warns
Posted by Prof. Goose on December 10, 2005 - 1:14pm
We have had our problems with Google over the past few months, but it is good to hear their engineers speaking some sense about energy efficiency...(link).
This has the side effect of making cities hotter. Mix that in with global warming, and space cooling will continue to be the largest growth factor for electricity consumption in the years ahead.
Here is an article I wrote detailing this:
Power Struggle: Plugged in to Global Cooling
My office did not have air-conditioning until we computerized. I bought an air-conditioner for my computer room because I was afraid the heat would burn it out during the summer. Before I got a computer, I lived here 10 years without an air-conditioner.
Thus Google, much more than anybody else, can really benefit from heavily multicore/multithreaded chips: such designs are all about THROUGHPUT per watt across many tasks, rather than LATENCY per watt (single-task speed).
As such, their design space ends up being very different from many other servers where latency is as important as throughput.
It wouldn't surprise me if Google is experimenting with the Cell processor or Sun's 32-thread processors. Google already uses custom systems (custom motherboards and custom software), and with the throughput/watt being so vastly superior on these heavily threaded architectures, they could benefit from them.
The Cell looks particularly interesting from Google's point of view, because it is designed to be high throughput & cheap.
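A toy way to see that throughput-per-watt point (every number below is invented for illustration; neither chip corresponds to a real part):

```python
# Toy comparison of two hypothetical chip designs for a throughput-bound
# workload like search. All figures are invented for illustration.

def requests_per_watt(cores, requests_per_sec_per_core, watts):
    """Aggregate requests/sec divided by package power."""
    return cores * requests_per_sec_per_core / watts

# A couple of fast cores: best single-request latency, modest aggregate throughput.
fast_chip = requests_per_watt(cores=2, requests_per_sec_per_core=400, watts=100)

# Many slow hardware threads: each request finishes later, but far more of them
# complete per joule, which is the regime Cell- and Niagara-style designs target.
threaded_chip = requests_per_watt(cores=32, requests_per_sec_per_core=60, watts=80)

print(f"fast chip:     {fast_chip:.0f} requests/sec per watt")      # ~8
print(f"threaded chip: {threaded_chip:.0f} requests/sec per watt")  # ~24
```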
They built a parallel computer using the cheapest possible components, and with software to provide redundancy and error recovery.
Picture a regional center with long rows of naked motherboards on shelves with just memory and hard disks. A low-$ grunt would show up once a month to swap out dead disks and put in new ones. Motherboards were not fixed, just dropped from the network until they were ultimately replaced with newer, faster, hardware.
It's a fantastic way to build the cheapest possible supercomputer in the shortest possible time, but as processors grew hotter the electric (and AC) bill had to climb.
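In miniature, "software provides the redundancy" looks something like the sketch below; the node names and failure rate are made up, and this is not Google's actual code:

```python
import random

# Rough sketch of redundancy in software: work units are retried on another
# box when a cheap node dies, and dead nodes are simply dropped from the pool.
# Node names and the failure rate are invented for illustration.

nodes = {f"shelf-{i}": True for i in range(8)}    # True means alive

def run_on(node, work):
    """Pretend to run a work unit; cheap hardware fails now and then."""
    if random.random() < 0.1:                     # simulated hardware failure
        nodes[node] = False                       # drop the node from the pool
        raise RuntimeError(f"{node} died")
    return f"{work} finished on {node}"

def run_with_redundancy(work):
    """Retry on live nodes until the work unit completes."""
    while any(nodes.values()):
        live = [n for n, alive in nodes.items() if alive]
        try:
            return run_on(random.choice(live), work)
        except RuntimeError:
            continue                              # error recovery: pick another box
    raise RuntimeError("no live nodes left")

print(run_with_redundancy("index shard 42"))
print("dead boards awaiting the monthly swap:",
      [n for n, alive in nodes.items() if not alive])
```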
I'd love to hear details of Google's evolution ... the best possible thing for the rest of us would be if the commodity motherboards they have been using shifted to more efficient processors. I've been out of it for a while, but a couple years ago the Pentium-M was just starting to get a crossover desktop following, after making its name in notebooks.
I think that's the ticket (sorry Sun), commodity low-power notebook processors moving over to commodity desktop/server systems.
Co-location, however, is the obvious answer. If Google could see its way to installing more server arrays in places that actually have heat demand, particularly low-density heat demand, this might put a dent in the cooling side of their number-crunching costs. Institutional and commercial buildings in much of Canada, the northern US and Europe are good candidates for co-location. They may not need heat all the time, and some conventional and inefficient outside heat dumping may be necessary, but a 50% saving is better than 0% any day...
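A rough sense of what co-located waste heat is worth (the rack power, heating season, and displaced-heat price below are my own assumptions, not Google figures):

```python
# Back-of-envelope value of server waste heat in a heating-dominated climate.
# All three inputs are assumptions for illustration.
rack_power_kw      = 5        # continuous draw of one server rack
heating_hours      = 5000     # hours per year the host building wants heat
heat_value_per_kwh = 0.04     # $ value of a thermal kWh displaced from a furnace

useful_heat_kwh = rack_power_kw * heating_hours          # 25,000 kWh(th) per year
total_heat_kwh  = rack_power_kw * 8760                   # 43,800 kWh per year
fraction_reused = useful_heat_kwh / total_heat_kwh       # ~57%, i.e. the rough
                                                         # "50% saving" above
print(f"heat reused: {fraction_reused:.0%}, "
      f"worth about ${useful_heat_kwh * heat_value_per_kwh:,.0f} per rack per year")
```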
But I have to admit that there's something appealing about living in the woods and working as a network administrator...
I don't think that's the case. My laptop uses about 30 watts, which means it takes about 0.06 kWh to watch a two-hour DVD. At today's prices, that's half a penny. If electricity got 100 times as expensive, it would still cost less than a dollar.
My network connection says I've downloaded about 400 megabytes in the last ten days, just in email and web surfing. That's literally too cheap to meter, today. ISP's today are selling data transfer at less than $1 per gigabyte. If electricity got 100 times as expensive, data would still be less than ten cents per megabyte. Downloading half a megabyte per day could still be too cheap to meter. That's plenty for text email and news. Downloading a new song at near-CD quality would cost a fraction of a dollar. VOIP would cost pennies per minute.
My MP3 player uses about 0.1 watt (runs 15 hours on one AAA). That could be supplied with about 200 cm^2 of solar cells, costing about 30 cents.
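The arithmetic behind those three estimates, treating the electricity rate, the bandwidth price, and the solar-cell efficiency as rough assumptions:

```python
# Checking the back-of-envelope numbers above. The 8 cents/kWh rate, $1/GB
# bandwidth price, and 15% cell efficiency are rough assumptions.

price_per_kwh = 0.08

# Laptop: 30 W for a two-hour DVD.
dvd_kwh  = 0.030 * 2                                  # 0.06 kWh
dvd_cost = dvd_kwh * price_per_kwh                    # about half a cent
print(f"DVD: {dvd_cost * 100:.2f} cents today, ${dvd_cost * 100:.2f} at 100x prices")

# Data: at under $1/GB, a megabyte is well under a tenth of a cent even before
# you ask how much of that price is energy.
per_mb = 1.0 / 1024
print(f"data: ${per_mb:.4f}/MB today, ${per_mb * 100:.3f}/MB if it scaled 100x")

# MP3 player: ~0.1 W average draw (15 hours on one AAA cell).
panel_cm2 = 200
peak_w    = panel_cm2 / 1e4 * 1000 * 0.15             # ~3 W in full sun
print(f"{panel_cm2} cm^2 of cells: ~{peak_w:.0f} W peak vs. 0.1 W average draw")
```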
The cost of the computer itself would go up some, but electronics have such high value-density that shipping must be a tiny fraction of their cost. In fact, the precipitous decline in price of outdated technology argues that almost all the cost is in making new designs, not building the hardware. If as much as 1/10 the cost is energy, and if I'm willing to use older technology, then if energy went up 100 times I could still buy a computer for under $1,000.
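Spelling out that last step (the $90 price for a few-generations-old machine is an illustrative assumption, not a quote):

```python
# If at most 1/10 of a computer's cost is energy, a 100-fold energy price rise
# multiplies the total price by at most 0.9 + 0.1 * 100 = 10.9.
energy_share = 0.10
multiplier   = (1 - energy_share) + energy_share * 100      # 10.9

old_tech_price = 90      # assumed price today for a few-generations-old machine
print(f"worst case after a 100x energy rise: ${old_tech_price * multiplier:,.0f}")
# -> about $981, still under $1,000
```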
Watching DVD's by candlelight, or doing manual agricultural labor while wearing an MP3 player connected to solar cells on your hat, or using email to keep in daily touch with family members a month's travel away, are actually quite plausible. Actually, the most implausible part of this is the candle. White LED's are far better than candles.
This also means that the core communications infrastructure of a modern society could remain more or less intact, as long as society held together well enough to maintain it. News could still be delivered globally. Long distance calls could cost less than $1.00 per minute even with 100X electricity increase.
All of these prices will continue to drop exponentially for as long as the oil boom lasts. Even five years could make a substantial improvement.
Chris
They are cheap because we have cheap energy, allowing the current scale of production. Electronics will probably be manufactured for years or decades to come, but I don't think they will be made on the same scale they are today.
Chris
Writing this down makes me realize just how complex this system of production is. Increased electrical costs will definitely increase the cost of the electronics. However, what strikes me as more important is the vulnerability of the process to social disruption. If you are running a batch of dies that takes 60 days to complete, and on the 59th day the electrical grid goes down, or the workers don't show up for one shift, you throw away those dies, worth millions of dollars, and start over. If it happens more than a few times, the company goes under. No paychecks for the workers and no memory for your Xbox 360.
Consumer electronics could easily be built to last for 20-30 years, with replaceable buttons, screens, connectors, etc. to make repairs easy.
What might be lost, perhaps only for an uncomfortable decade, is the rapid progress in capacity.
A reasonable fraction of today's electronics production would be enough to keep at least today's rich populations supplied with computers, internet, TVs, radios, and MP3 players, and, more importantly, the gadgets needed in the control systems for the grid, waterworks, home heating, etc.
Magnus, your method would also work, though it would give some over-estimate for profits (which don't have to scale with costs).
Yes, the continuing availability of infrastructure is a key implication of this line of reasoning, and it's why I've gone into so much detail (and may post a pointer in the next open thread). But don't discount the importance of communications relative to industrial infrastructure. Communications are crucial to government infrastructure, and also to various accountability mechanisms.
On power use: First I found this:
http://ismi.sematech.org/modeling/iem/docs/SilSymp2002.pdf
Figure 1 shows manufacturing cost in $/cm^2, capital investment, and several other curves (transistors per chip, $/transistor, etc) on the same scale. Manufacturing cost is a small fraction of capital investment--and it's a log scale!
According to this:
http://www.micromagazine.com/archive/05/08/reality.html
a modern fab costs $2 to $3 billion; capital expenditures in 2004 were $49 billion; device revenue $220 billion. So 22% of revenues are spent on new construction each year. Hm. If fabs last 3 years, then a fab can produce over 10X its construction cost.
Here's another way to approach it. According to the previous cite, 400 fabs produced $220 billion, or roughly half a billion dollars per year apiece. If 10% of that were spent on energy at 5c/kWh, it would buy about a billion kWh per year per fab, which works out to something like 100 MW of continuous power. That's probably more than a fab actually draws, which would put energy well under 10% of revenue. But that's just a guess...
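The same back-of-envelope laid out in one place; the 2004 figures are the ones cited above, while the three-year fab life and the 8,760-hour year are my own assumptions:

```python
# Industry-wide 2004 figures cited above, plus an assumed three-year fab life.
capex_2004   = 49e9        # $ spent on fabs and equipment
revenue_2004 = 220e9       # $ device revenue
fab_count    = 400
fab_life_yrs = 3

print(f"capex share of revenue: {capex_2004 / revenue_2004:.0%}")            # ~22%
print(f"lifetime revenue per $ of capex: "
      f"{revenue_2004 / capex_2004 * fab_life_yrs:.0f}x")                    # ~13x

# What a 10%-of-revenue energy budget would buy per fab at 5 cents/kWh:
rev_per_fab  = revenue_2004 / fab_count                    # ~$550M per year
kwh_per_year = 0.10 * rev_per_fab / 0.05                   # ~1.1 billion kWh
print(f"energy budget as continuous power: {kwh_per_year / 8760 / 1000:.0f} MW")
```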
Hah, finally found a reference! AMD uses about a terawatt-hour per year.
http://www.amd.com/us-en/Corporate/AboutAMD/0,,51_52_531_12132%5E12135,00.html
Net sales in 2004 were $5 billion.
http://www.amd.com/04copdf p. 16
If they pay 5 cents per kWh, that TWh costs only $50 million. That's only 1% of $5 billion.
I know that uninterrupted electrical power is crucial to a fab. But if interruptions are a significant risk, they can build their own power supply. If a fab costs three billion dollars, uses 50 MW, and produces $12 billion per year, surely it wouldn't be hard to just slap that generating capacity onto the project. There's regulation, and the fact that grid power is still reliable... but 50 MW is only a handful of industrial gas turbines.
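Putting the AMD figure and the self-generation question together; the 50 MW load and the 10 MW turbine size are my guesses, not anything from AMD:

```python
# AMD, 2004: roughly 1 TWh of electricity against roughly $5B in net sales.
amd_kwh    = 1e9                    # 1 TWh = 1 billion kWh
amd_sales  = 5e9
power_bill = amd_kwh * 0.05         # $50M at 5 cents/kWh
print(f"electricity as a share of sales: {power_bill / amd_sales:.0%}")    # ~1%

# On-site generation for one large fab, assuming a 50 MW continuous load and
# industrial gas turbines of about 10 MW each (both figures are assumptions).
fab_load_mw = 50
turbine_mw  = 10
print(f"turbines needed: ~{fab_load_mw // turbine_mw} units of {turbine_mw} MW")
print(f"annual energy at that load: {fab_load_mw * 8760 / 1000:.0f} GWh")  # ~438
```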
If there are serious social disruptions, you'll have workers begging you to house them onsite, and relaxation of regulations to allow you to do it. You can put up tents in the parking lot that's going unused because no one can afford to drive to work. :-)
Chris
http://www.future-fab.com/documents.asp?grID=208&d_ID=2304
Look under "energy metrics" and "water metrics"
Producing a square centimeter of finished silicon is supposed to take around 0.5 kWh and 8-10 liters of water.
That's really negligible, considering the number of transistors you get in a square centimeter. Rising energy costs would increase capital costs somewhat, of course. But I just can't see energy being a major limiter of semiconductor production.
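To put that 0.5 kWh/cm^2 in perspective; the transistor density is my rough figure for a 2005-era 90 nm part, not something from the cited page:

```python
# Electricity embodied in a square centimeter of finished silicon.
kwh_per_cm2  = 0.5
cost_per_cm2 = kwh_per_cm2 * 0.05            # 2.5 cents at 5 cents/kWh
print(f"electricity per cm^2: {cost_per_cm2 * 100:.1f} cents today, "
      f"${cost_per_cm2 * 100:.2f} at 100x prices")

# Spread over an assumed ~10^8 transistors per cm^2 (rough 90 nm era density),
# the energy per transistor is vanishingly small.
transistors_per_cm2 = 1e8
print(f"energy per transistor: {kwh_per_cm2 / transistors_per_cm2:.0e} kWh")  # 5e-09
```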
For energy use: they've already put a Palm Pilot into a wristwatch. Granted, it's a big wristwatch, and it has to be recharged frequently, but still...
When I remember what we did with 20 MHz 286 computers running DOS (uphill both ways), it is obvious that there is a vast amount of inefficiency in today's computers that doesn't need to be there. Many apps are written in scripting languages that cost an order of magnitude in efficiency, running on top of operating systems that cost another order of magnitude. Probably 99.9% of your computer's cost (manufacturing and operating) is spent retroactively saving time for the designers. If energy starts to get expensive enough to affect computers, we'll easily give back at least one of those orders of magnitude.
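One of those orders of magnitude is easy to measure directly; the sketch below times the same arithmetic as an interpreted Python loop and as a compiled numpy kernel (the exact ratio depends on the machine):

```python
# Same arithmetic two ways: an interpreted loop vs. a compiled kernel (numpy).
# The exact ratio varies by machine; a 10-100x gap is typical.
import timeit
import numpy as np

data = list(range(1_000_000))
arr  = np.arange(1_000_000, dtype=np.int64)

def interpreted_sum_of_squares():
    total = 0
    for x in data:
        total += x * x
    return total

def compiled_sum_of_squares():
    return int(np.dot(arr, arr))

t_loop = timeit.timeit(interpreted_sum_of_squares, number=10)
t_vec  = timeit.timeit(compiled_sum_of_squares, number=10)
print(f"interpreted: {t_loop:.3f} s, compiled: {t_vec:.3f} s, "
      f"ratio ~{t_loop / t_vec:.0f}x")
```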
Somewhere in Google, someone is probably already studying how to run their algorithms on FPGA's instead of CPU's, for a 90-99% energy savings...
Chris