Mission Earth

The seminar "Mission Earth - Modeling and Simulation for a Sustainable Future" (http://www.inf.ethz.ch/personal/fcellier/AGS/AGSME_2009.html) was held in Zurich on Jan 26, 2009, organized by Francois Cellier and Andreas Fischlin. It was a rare occasion for a truly interdisciplinary meeting where people from different fields of modeling were given a chance to present their work and exchange views. Climate modelers and resource modelers haven't interacted very much so far; however, resource depletion will surely have a strong effect on the future of Earth's climate. While we are still far from integrated world models that take into account all factors, economic as well as environmental, this seminar was a first attempt at understanding what issues are involved.

The “Mission Earth” meeting was about three kinds of models: climate models, world models, and resource exploitation models.

Climate models are designed to predict the evolution of the Earth's climate system, mainly in view of the forcing caused by emissions of greenhouse gases such as CO2. The results of these models form the basis of the reports periodically published by the Intergovernmental Panel on Climate Change (IPCC). These are very sophisticated models, the result of decades of work by thousands of scientists. They take into account the physical interactions of the various elements of the atmosphere, geosphere, hydrosphere, and so on. The latest versions can simulate the climate down to the detail of single clouds. Normally, the emissions of greenhouse gases from fossil fuels are supplied as a parameter external to the core of the model.

Resource exploitation (or depletion) models are specifically designed to describe the production cycle of a resource, often a geological one such as crude oil. In their simplest form, these models are very rough: little more than a sum of what is believed to be extractable from a geological point of view. More sophisticated models have their ancestor in the well-known "Hubbert model", which assumes that the production of the resource will follow a bell-shaped curve. Although very simplified, the Hubbert model is robust and has been used for predictions of production trends that have often turned out to be accurate over a range of several years into the future. The Hubbert curve is, actually, a stripped-down version of more complex models of resource exploitation that can be built using system dynamics.
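
For readers who like to see the shape in code, here is a minimal sketch of the Hubbert curve written as the derivative of a logistic function. All the parameters (ultimately recoverable resource, peak year, curve width) are invented for the illustration, not fitted to any real data.

```python
# Minimal sketch of the Hubbert curve: production as the derivative of a
# logistic cumulative-extraction curve. All parameter values are
# illustrative placeholders, not real estimates.
import math

def hubbert_production(t, urr=2000.0, t_peak=2005.0, width=15.0):
    """Production rate at time t for a symmetric Hubbert curve.

    urr    -- ultimately recoverable resource (assumed value)
    t_peak -- year of peak production (assumed value)
    width  -- steepness of the curve in years (assumed value)
    """
    x = math.exp(-(t - t_peak) / width)
    return (urr / width) * x / (1.0 + x) ** 2

# Peak production equals urr / (4 * width) and occurs exactly at t_peak;
# the curve is symmetric around the peak.
print(hubbert_production(2005.0))
```

Real-world applications of course require estimating these parameters from production data; the point here is only that the whole bell shape follows from one cumulative-extraction assumption.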

World models put together all parameters, including climate and resources, and try to predict the evolution of the whole world system, including not only the environment but also the economy, agriculture, pollution, and the human population. The best known of these models is the "World3" model developed by the MIT group of Meadows and based on the work of Jay Forrester in the 1960s. The results of the first comprehensive study performed using this model were published in 1972 in the report called "The Limits to Growth". This model remains in use with some modifications, and the latest results of the world simulations were published in 2004 by the same authors as the first report. In this, as in other similar models, the physical elements of the system are defined in terms of "stocks" (e.g. mineral reserves), and much attention is paid to the dynamic evolution of stocks according to a complex system of feedbacks that depend on assumptions about people's behavior (e.g., population growth). Although historically older than climate models, world models are far less sophisticated. That is due to the decline in interest in the 1980s, when these models were subjected to a wave of politically oriented criticism. As a result, funding for this field of research disappeared and, still today, is extremely difficult to find. At present, the situation appears to be changing, but progress in this field is still slow.
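
The stock-and-flow idea can be illustrated with a deliberately toy model (nothing like the real World3, whose structure and calibration are far richer): a single capital stock grows by drawing down a finite resource stock, and the feedback between the two is enough to generate growth, overshoot, and decline.

```python
# Toy "stocks and flows" model in the system dynamics spirit. This is NOT
# World3; all stocks and rate constants are invented for illustration.
# Extraction gets easier when both resource and capital are abundant, and
# capital wears out at a fixed rate.

def run(steps=600, dt=0.05):
    resource, capital = 1.0, 0.01   # normalized initial stocks
    production = []                 # extraction flow per unit time
    for _ in range(steps):
        extraction = resource * capital   # abundant resource -> easy extraction
        depreciation = 0.3 * capital      # capital wears out
        resource += -extraction * dt                   # stock drawn down
        capital += (extraction - depreciation) * dt    # stock reinvested
        production.append(extraction)
    return production

production = run()
peak_at = production.index(max(production))
# The bell-shaped production curve emerges from the feedbacks alone:
assert 0 < peak_at < len(production) - 1
assert production[-1] < max(production) / 10
```

Note that the bell shape is not assumed anywhere; it emerges from the depletion feedback between the two stocks, which is the essential insight behind system dynamics world models.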

At the meeting in Zurich, we could see how large the distance, actually a chasm, between climate modelers and world/resource modelers is. A climate scientist present at the discussion said he was quite surprised to hear that energy production was going to peak and decline; he had never heard of such a thing. On the other hand, as a resource modeler, I confess that I was amazed by the level of sophistication reached by climate modelers. I had not even imagined that you could model the world's climate at the level of single clouds!

As you may imagine, the core of the discussion at the "Mission Earth" meeting was how dynamic models of resource depletion would affect climate models. As always, a model, no matter how sophisticated, cannot be more reliable than the data it receives as input. This is not to say that climate modelers are naïve in this respect; on the contrary, the people responsible for the IPCC reports made a considerable effort in describing emission scenarios according to state-of-the-art data and economic models. The problem is that state-of-the-art models of the economy don't include collapse, whereas that is exactly what world models predict. So it was Dolores Garcia, an independent scientist based in Brighton, who connected the dots with her integrated model. She showed a version of the World3 model that incorporates greenhouse gas emissions. Her results suggest that the concentration of CO2 might stabilize at around 600 ppm by the end of the 21st century - much lower than the standard results of the models described in the IPCC reports. Qualitatively, these results are similar to those recently reported by De Sousa and Mearns on TOD and to those published by Nel and Cooper in "Energy Policy" (2009).

Obviously, the uncertainty in this kind of estimate is enormous. First, the collapse of industrial society can't be seen as an actual prediction; it is just a possible scenario, very uncertain both in quantitative terms and in its timing. Second, collapse - if and when it occurs - will certainly lower CO2 emissions, but the final CO2 concentration and its effects on temperature can't be calculated except as a rough approximation. Finally, there is little comfort in knowing that we need an industrial collapse to be saved from catastrophic global warming. One day, we may develop truly integrated world models where economic and climatic factors are taken into account together at a level of detail comparable to today's purely climatic models. But we are not there yet. For the time being, we must make do with the models we have and follow the old principle that says you can never predict the future exactly and that, therefore, you must always be prepared for the worst.


Personally, I am a fan of simple and robust models, and at the seminar I presented a study in which my coworker Alessandro Lavacchi and I developed what we call a "mind-sized" dynamic model of resource depletion. We hope to be able to place our results online very soon. The paper by Dolores Garcia should also appear soon on TOD. I would like to thank Francois Cellier and Andreas Fischlin, who did excellent work with this seminar. They should do it again.


Nel W. P. and C. J. Cooper, "Implications of fossil fuel constraints on economic growth and global warming", Energy Policy 37, 166 (2009).

Mearns, E. and L. De Sousa, "Fossil fuel ultimates and CO2 emission scenarios", The Oil Drum, 2008, http://europe.theoildrum.com/node/4807

I'm glad to hear about this conference given how much disconnect there seems to be between the Peak Oil and climate change communities.

Ugo, can you tell me to what extent these world models try to forecast positive feedback climate effects? That's one of the main reasons the IPCC FAR is widely considered too conservative - it does not try to model feedbacks.

But many scientists, such as Joe Romm, believe the 600 ppm figure mentioned above would be enough to cause a massive permafrost thaw, which would send enough carbon into the atmosphere to send us soaring toward 1000 ppm.

So if that's true, and the modelled 600 doesn't include this feedback possibility, then it would still be reckless to assume Peak Oil will automatically cap carbon concentrations at a non-catastrophic level.

(This is something I think about a lot, since I'm very concerned about climate change, and I'd love to believe that Peak Oil means, in effect, that I don't need to worry about it anymore, but so far that just looks like speculation without much model data behind it.)

I asked exactly the same question of the climate modelers at the seminar. The answer I got is that some of the feedback effects are taken into account, but not all of them - not all the time, at least. But it is a huge field and I can't speak for the real experts. My impression is - anyway - that standard climate models could neglect some important feedbacks - from methane hydrates, for instance.

What about (?)sulfuryl fluorides(?)? I've never heard of them until recently; they are supposedly used as insecticides (against termites) in enclosed structures.

I find it hard to believe, but the claim is that these chemicals have (?)4,000(?) times more greenhouse effect than CO2 does... and last about 40 years in the atmosphere.

The comment that some climate change models include some feedbacks is correct, but none of those used in the IPCC report do, according to one of the principal authors, Dr. David Karoly (formerly of the University of Oklahoma, now back in his home country of Australia), because the feedbacks could not be replicated in modelling. Some of the models were subsequently tweaked to include anticipated feedbacks, but these were not included in the IPCC report, have not been publicized, and are not considered as reliable as the models that replicated actual climate given the influencing variables that were considered. BTW, Dr. Karoly was not at all optimistic in the two presentations I saw.

claim is that these chemicals have (?)4,000(?) times more greenhouse effect than CO2 does...

Per kg, yes. But those gases are millions of times less abundant than CO2, so their net effect is minor.
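
A rough back-of-the-envelope calculation supports this point. Treating the quoted GWP as a simple per-mass weighting factor (it is really a time-integrated metric) and plugging in approximate, assumed concentrations:

```python
# Order-of-magnitude check: weight each gas's abundance by its claimed
# per-kilogram greenhouse potency. All numbers are approximate values
# assumed for this example, not measured figures.

GWP_SF = 4000.0      # claimed potency per kg relative to CO2 (from the thread)
C_CO2 = 385e-6       # CO2 mixing ratio, roughly 385 ppm in the late 2000s
C_SF = 1.5e-12       # sulfuryl fluoride mixing ratio, ~1.5 ppt (assumed)
M_CO2, M_SF = 44.0, 102.0   # molar masses (g/mol), to convert mixing ratio to mass

# Relative contribution ~ potency x (mixing ratio x molar mass)
ratio = (GWP_SF * C_SF * M_SF) / (C_CO2 * M_CO2)
print(f"sulfuryl fluoride vs CO2, weighted effect ratio: {ratio:.1e}")
```

With these assumed numbers the weighted effect comes out around a few hundred-thousandths of CO2's, consistent with the comment above that the net effect is minor.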

I would be remiss if I didn't point out that the WORLD3 and LTG models did not incorporate net energy (i.e., the energy cost of energy procurement accelerating over time).
I think people will increasingly realize that the limiting variable is not flow rates/resources but flow rates/affordability. And we can improve affordability by increasing externalities.

Serious funding in terms of time, expertise and capital needs to be put towards these interdisciplinary conferences/pow-wows by governments. In viewing energy and finance these past 6 or 7 years it has become clear to me that the generalists (outside of the industry) have in general had a much better bead on what is happening ahead of time. Specialists are great, but their noses are so close to the cheese they miss the pizza.

Yes, but generalists are not funded in universities. They are actually ferociously discriminated against, and interdisciplinary research is actively discouraged. One of those small problems.....

Why is that, Ugo? As you know, I am 'new' to academia, but I sense what you say is true. Can you give me some history/explanation?

Well, I have been in academia for more than 30 years.... lessee... an example? We had a nice interdisciplinary PhD program on materials that was shared between engineering, physics, and chemistry. It was very nice, really interdisciplinary; students learned and shared a lot. It lasted a few years; then it was closed. I never understood why but, as you say, it is something that can be "sensed". I also sense that the situation is particularly perverse in Italy, but it is not much different in the rest of the world. Academia rewards specialization, not interdisciplinary research. As we are talking about the LTG study, that is also an example: it was a very interdisciplinary study, and zillions of researchers felt threatened because their hold on their specialized fields of expertise was at stake. You know how the study and its authors were shredded to pieces - fortunately only metaphorically.

On the reasons for this: well, there should be some dynamic modeling that explains why organizations tend to veer all the time towards hyperspecialization. It happens also for biological organisms (by the way, today is Darwin's bicentennial). In academia, in particular, the process works through a series of elements that include the peer review process, such atrocious distortions as the "impact factor" that determines a scientist's career, and, in general, the fact that it is easy to keep plodding along in the same little field. On the other hand, interdisciplinarity is very difficult, and it is very easy to make a fool of yourself when you try to enter a new field. So, that's the way things are. With the economic crisis, interdisciplinarity will be even more penalized, I am afraid.

In this podcast on positive psychology (which I recommend watching in any case), the Harvard professor mentions that peer-reviewed journal articles that get published now get an average of SEVEN total reads. What good is that doing anyone, unless it is some super obscure but important fact that is later picked up on?

Seven. Amazing if true. (and I think one of the seven was the author)

A good illustration of Sturgeon's law, which says, if I remember correctly, "90% of everything is crap".

I have noticed that much of the interesting resource depletion work is coming out of Applied Maths departments. I think B. Michel is an applied mathematician, and of course others like Bartlett are as well, I believe. Perhaps Applied Mathematics is the only real interdisciplinary field left, but where do they publish to get any more than a few reads?

the Harvard professor mentions that current peer-reviewed journal articles that get published now get an average of SEVEN total reads. What good is that doing anyone, unless it is some super obscure but important fact that is later picked up on?

That is a (somewhat) shocking figure if true. Such data are only sporadically available (for instance, some journals publish download stats for the .pdf versions of publications), and a better indicator might be citation metrics. Each time a publication is cited, it has presumably been read at least several times.

With the increasing emphasis on citation rates in scientific career progression any publication that is not ultimately cited at least a couple of times per year was demonstrably not worth the effort of writing it in the first place. The scientific journals all know this of course and prefer to accept only the (anticipated to be) highest-impact manuscripts submitted to them. The lowest-impact stuff thus ends up filling journals which few people read and again in terms of career progression there is little reward in writing stuff that ends up in such journals (better to bin the manuscript and try again).

The shelf life of work is rarely considered but is also important. I did taxonomic classification work (describing and discriminating among species, mapping their distributions, compiling their ecological characteristics, etc.) and while none of this entailed a lot of breakthrough theory, it was all useful from a practical point of view and will be looked at over and over again. I know this because I sometimes relied on the work of predecessors that could be up to 200 years old. Even if it is not cited, hundreds of people will directly look at these papers, and they will remain the standard for many years if not decades. Citations may be in field reports from conservation NGOs, national park staff, etc., that are never found by citation search engines.

And if seven is the average, and you consider that some people and articles can safely be assumed to get hundreds of reads, if not more, it is fairly safe to say that a lot don't get any reads after peer review.

I was always told that, in academia, science is 'incremental.' When I was working on my thesis, my advisor said that I had two distinct ideas here. Which one did I want to write on? I had to ‘hone it down’ to one specific thing I wanted to study. I assume this is because it is easier to test something if there are few variables. Interdisciplinary studies are too complicated to devise workable experiments. How do you falsify a theory in a dynamic system?


It's appropriate that you mention Darwin, not only in connection with biological specialization, but also because he was such a fine example of what's so rare today, a scientist whose scientific endeavor was one part of a broader philosophical endeavor.

This issue always makes me think of Nietzsche, who was prophetic on this as on so many issues. He foresaw what a danger hyperspecialization would be for thought in general, as well as for how much we could even trust scientific results.

He figured the time would come when no one outside a specialization, even other scientists within the same branch of science, would be able to judge that specialization's results, and that we'd simply have to have faith in the integrity of peer review within each specialty.

That's obviously a problem, yet I guess there's no helping it.

This is why it is always so interesting to read Vaclav Smil's interdisciplinary works.

Ugo, I'm not so sure that the incoming crisis will discourage interdisciplinarity. I suppose that hyper-specialized research will fall more ...

I wish you were right, Franco, but I am afraid that academia will die hyperspecialized.

Interesting observation that specialization is seen in ecological systems as well as human society; H. T. Odum had some brilliant insights on that very subject.

In a nutshell, he understood that systems self-organize around available resources and he observed that when a critical resource, especially energy, becomes available in abundance a system will experience a period of fast, frenzied consumption and growth characterized by "weedy" fast-growing but short-lived structure. Not unlike our current industrial civilization.

As the system matures, the fast weedy growth, having consumed the abundance, dies back and is replaced with slower-growing and much longer-lived structure characterized by recycling of resources and, interestingly, high levels of specialization.

Speaking of Odum, I couldn't help but notice you were remiss in not mentioning his significant contribution to systems modeling, not least the world models built using his Energy Systems Language (Modeling for all scales: an introduction to systems simulation). As Nate rightly pointed out, the Systems Dynamics language suffers from the lack of an explicit basis in the universal energy hierarchy, which Odum elucidated as the basis for ALL complex systems, both living and non-living, including human society.

Odum's Energy Systems language, OTOH, lacks the explicit mechanisms found in Systems Dynamics for setting "policies" which regulate rates and flows, and by extension storages.

It would be interesting to see an effort made to combine the two languages, drawing on the strengths of both, and present the result in a way that allows people to explore fun, game-like simulations of the world and possible future scenarios that respect the realities of the universal energy hierarchy.

Personally I believe that there has been no greater power in the entire history of human civilization than the one we have now, available for the first time using computers and complex simulations, in the ability to ask in a meaningful and realistic way:

What if?


Regarding Nate's comments about World3 and net energy, Charlie can correct me if I'm wrong, but the World3 model indirectly accounts for net energy in other ways.

These are interesting insights about climate modelers' blind spots regarding peak oil. Generalists who model global dynamics, such as the Club of Rome group and Odum, try to look at both issues. Odum was much more concerned about peak oil than the climate problem. When asked which one would be the bigger problem, his opinion was that fossil fuels would be the more immediate and limiting factor, rather than climate.

Yes, I have Odum's book. His formalism is very interesting and I have often used it for didactic purposes - it is more pictorially descriptive than the standard diagrams that you get out of s.d. software such as Vensim. Odum's concept of transformity is equivalent to that of EROI/EROEI but more sophisticated and more useful. But most people don't even know what EROI is; imagine trying to explain transformity to them! By the way, at the meeting in Zurich I met one self-styled expert on CO2 sequestration who had never heard of EROEI!

True. The same happened to me. I had a discussion with a CCS expert (possibly the same?) about this very issue.

I proposed the following thought experiment to him. He should take his car, drive it for a mile, and calculate how much CO2 he emitted into the atmosphere while doing so. Then he should store his car in his garage, not drive it for 20 years, then take it to a gas station for new gas, and drive the car for another mile.

I told him that he was now emitting more CO2 into the atmosphere than 20 years earlier, because of the shrinking EROEI of the fuel he was using.

According to Charlie Hall, the EROEI of oil was somewhere around 100 when oil exploration started around 1930. You drilled a hole and the oil came gushing out. Now, you get many more dry holes, and the deposits that you drill into (if and when successful) are usually smaller, i.e., you need to drill again sooner. By now, the average EROEI of oil has shrunk by at least a factor of five.

Hence you are spending more energy producing one barrel of oil than you did earlier, and in calculating the emitted CO2, you need to also take into account the energy used to produce your car and the energy used to produce its fuel.
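
The point above can be put into rough numbers with a very simple accounting sketch. This is one of several possible EROEI accounting conventions, and it assumes the energy invested in production comes from fuels of the same carbon intensity as the fuel itself; the EROEI values follow the figures quoted in the comment.

```python
# Simplified sketch of how declining EROEI inflates the CO2 emitted per unit
# of fuel energy delivered: burning the fuel itself, plus the energy that
# was invested to produce it. A crude convention, for illustration only.

def emissions_multiplier(eroei):
    """CO2 per unit of delivered fuel energy, relative to burning
    the fuel alone."""
    return 1.0 + 1.0 / eroei

early = emissions_multiplier(100.0)   # EROEI ~ 100 in the early days (as quoted)
today = emissions_multiplier(20.0)    # down by a factor of five (as quoted)
increase = (today / early - 1.0) * 100.0
print(f"extra CO2 per mile driven: about {increase:.0f}%")
```

Under this simplified convention the increase is modest, a few percent; the multiplier only blows up as EROEI approaches 1, when producing the fuel costs nearly as much energy as it delivers.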

I believe I finally got this idea anchored in his head, but I am not sure that I succeeded.

...hyperspecialization. It happens also for biological organisms (by the way, today is Darwin's bicentennial).

It's mid-20th century thinking that selection usually or invariably pushes a niche generalist towards specialization, and that the converse rarely or never happens. The fact is that selection fosters adaptation as readily in one direction as in the other. Abundant examples exist of highly specialized species becoming trophic generalists as environmental conditions change.

As for academic specialization, it depends on the discipline. In the Ecology & Evolution program where I did my graduate work, scholastic generalization was highly encouraged. The encyclopedic knowledge of some of the profs was awe-inspiring. I was expected to learn everything from techniques of molecular phylogenetics to tropical lotic ecology pretty thoroughly. My general ignorance of plants & of inverts has been a professional liability.

Jellyfish, rats, humans: we have recently been bombarded with information that says generalists are now trying to fill all the niches of the species being eliminated.

Darwin's first edition of 'Origin' has been my current slog. I noticed he tended to describe the 'perfection' of new varieties as their newly specializing to take advantage of the 'conditions of life' available, thus gaining advantage and edging out their less perfected ancestors. But his description of the process always allows for the expanding, less specialized forms to essentially shear the specialist from its roots as the specialist confines itself to a narrower set of survival conditions; he just never seems to describe it happening in this way. Considering the tree he was building out of the fragments he (and, independently, Wallace) had pieced together, one can hardly consider his not describing such a process a great omission. Although his examples usually imply that 'perfection' of a variety means developing a more specialized adaptivity, he never equates 'perfection' with specialization per se. In its finest distillation, Darwin's reasoning seemed never to falter.

Modern science has been mostly reductionist since it proved such a powerful technique over the last couple centuries. Unfortunately, this is reflected in hyperspecialization in academia and there have been strong institutional pressures to keep scientists from "straying" into cross-disciplinary pursuits. The excellent observation that this specialization is analogous to what biological species tend to do is just the sort of comment that one needs an overview to perceive.

Several times I have put together cross-disciplinary teams of scientists to work on specific mission goals, and these would only last as long as funding and strong direction could be supplied; there was a strong tendency for the participants to lose focus and revert to hyperspecialization and tenure tracks. Ironically, making them more well-known for the cross-disciplinary work had the effect of exposing them to more universities and institutes, generating ever more lucrative offers until they were quickly peeled from the work that had made them famous. This amounts to an error-correction mechanism in a system designed for specialization, and there are many of them.

Only sophisticated generalists, who are able to move smoothly between disciplines and describe patterns, will IMO be very useful as advocates and planners. There are relatively few of them, and certainly in the past it was not really possible to get a respected degree in such a thing, so generalists initially had to specialize to get the societal recognition of several degrees and then expand from there.

Some of the most clueless people I've ever met have had advanced degrees, but have been so specialized as to need keepers to feed and dress them. They will toil at their data until death stills their hearts, and work hard every day. But they are clueless about most things, and I don't look to academia to save us. Indeed, by soaking up the "science" funding, they may do more harm than good.

For some reason my mind drifts to scientists working on the ITER fusion reactor, which if all goes well MIGHT provide commercial power in 40 years. The high probability that we will not have the resources in 40 years to build out and maintain a network of giant tokamak fusion reactors is not only not discussed, it would be threatening to their lives. I'm sure we could come up with thousands more such examples.

An enlightening post. I would like to share a positive interdisciplinary science experience. In the late 1980s, in response to the growing resource challenges faced by industrial society, a mature group of academics in Australia began the Bachelor of Applied Science in Coastal Resource Management course. This was an interdisciplinary course integrating the life and earth sciences together with computer science and economics.

As fate would have it, I was one of the first on board in 1989. At the time, the institution was a college of advanced education. The academic environment was small and intimate. The dons there seemed to relish the cross-pollination between the disciplines presented in the course. It also came at a time when there were many scientific breakthroughs - climate change being one that I remember well. My study years certainly left an indelible stamp on me.

Soon after, the institution evolved into a university (Southern Cross). The course now has multiple strands, is offered on three campuses, and is thriving.

The seeds (graduates) of this course are scattered all over the world working with diverse, complex resource management issues like tourism management on coral reefs, and fisheries resource allocation. Some have left the machine and then raged against the machine to become activists (including myself).

I think part of the problem is that interdisciplinarity doesn't fit into the way universities run. Consequently, there's no "career structure". You may well fit in at your particular centre, which was opened for whatever reason (probably by a specialist who later branched out and could persuade someone to give them some money). But when that goes, or when you want to move on, where will you go? The chance of there being another centre into which you fit is very slim, so you're left competing for jobs in specialisms, and by definition, you're not a specialist. You may have half of what they're looking for, but you will be competing against people who have all of what they are looking for.


Those close to academia are certainly better qualified than me to answer, but IMO specialization often leads one to be defensive of his/her turf. A 'generalist' who points out the pizza has both pepperoni and cheese (I like your analogy) gets a negative reaction from both the cheese specialist and the pepperoni specialist, because the generalist hasn't sufficiently 'paid their dues' in either pepperoni or cheese.

This attitude is poisonous to collaboration in many respects, but academia often rewards deep specialists who self-promote to the top of their professions.

Yes, and also note that departments have budgets. When budgets get tight, those efforts not directly supporting a department get cut. Protecting rice bowls is another form of social endowment survivalism.

I tried to make it as a generalist in academia and the going was tough. I did get a great collaboration going, some grant monies, etc., but then the institution didn't quite understand what I was up to and it wasn't part of their "core" so when funding got tight (post 9/11) they immediately looked at me. By this time I was a bit fed up by it all and left amicably, even while dear colleagues had pledged to take me in.

I really like this post, by the way. I made much the same case and tried to start this sort of conversation some years ago.

If interested see this presentation: http://coexploration.org/biodiversity/html/EarthDay2004_files/v3_documen...

Scroll down to the slide "The Case of the IPCC"

Good presentation; a nice, dense treatment of data and paradigms.

I am not as pessimistic as you are, Ugo.

When I was young (I have been working in Academia for about as many years as you have), I would write a small research proposal to the Swiss National Science Foundation, and more often than not, these proposals were actually being funded.

This approach no longer works. By now, most of the research foundations operate almost exclusively on panel reviews, and they are only funding large research efforts that involve multiple universities ... and usually are (and actually have to be) quite interdisciplinary.

The problem is, the research proposals must still be well focused. They must clearly identify a well-defined research question, and they must document believably why the research team stands a good chance of solving the identified problem.

What I have seen on many occasions is that research proposals are overly ambitious. Almost invariably, the result is that one large research proposal is simply a collection of a dozen or so individual smaller research proposals with a poor match between them. This approach is generally doomed to fail.

Maybe what you mean is less that there is no money available for interdisciplinary research than that there is no money available to paint with a wide brush.

I am glad to hear that, Francois. But our different viewpoints probably derive from the fact that we work in different fields. Apart from moonlighting with dynamic models, my main activity is in materials science. There, having large partnerships doesn't help improve interdisciplinarity. True, occasionally in a large project there is space for a little budget for doing the funny things. But, on the whole, I see that opportunities for doing something interesting and innovative are disappearing. I may be wrong - I hope so!

Ugo, I was interested in learning more about Dolores Garcia's World3 model, though I didn't see much on Google. Do you know of any papers or more information online anywhere?

Doly has submitted her paper to TOD. Hopefully, it will appear soon.

I have attempted, without success, to suggest an integrated model that not only models, but takes on the aspect of a global internet game, a.k.a. an MMORPG. Though messy, I will let the e-mails do the talking:

Hello XXXX,

Below is stuff I have culled from e-mails with a couple of other list members. The origins of my and my wife's thinking were us brainstorming on how to do CC and PO outreach. She came up with the idea to use a game. From there my ideas got grandiose immediately. It's not as if distributed models don't run now. There are SETI and that climate change model - I used to run it, but can't remember now what it's called - running on PCs all over the world. Massively multiplayer online role-playing games do the same. Second Life is an example of a virtual online community. What we don't have now is an integrated tool, but we do have all the elements running in one way or another.

I should back up and explain my concept. I see an MMORPG-style modeling program that would work with realistic scenarios to help identify real possible solutions to The Perfect Storm. I envision participation on a world-wide scale, hopefully including everyone from gov't officials to everyday people. Real people making real decisions in fast-forward mode that produce realistic scenarios. That software already exists in two forms: one I knew of, one XXXX introduced me to recently.

The first is SEAS out of Purdue University.

It has been used for gov't simulations before. The software team (yes, one already exists, but I think this might involve a much-expanded team) codes a scenario, and real people are agents in the game. There are other nodes (agents) that are run as part of the model and represent people. The goal, of course, is to have millions of real people actually acting in the simulations. That is where Purdue might need some help, unless they can automate the process. They may already have that. I know they have a virtual presentation like an MMORPG.

The second is the T21 program out of the Millennium Institute.

Now, it looks like ASPO might have beat me to it:

T21-North America User Interface Released!

Arlington, VA, October 18, 2007 - The T21-North America (T21-NA) model user interface was released today. The user interface allows the model to be opened, modified, and simulated on any Windows-based computer. The T21-NA model project is a collaboration with the Association for the Study of Peak Oil and Gas-USA and the State University of New York's College of Environmental Science and Forestry to examine energy issues in the context of an integrated framework that incorporates the relations of the energy sector to the broader economic, social, and environmental framework. The project is part of ASPO-USA's Global Energy Modeling project.

Except that their software is a package run by eggheads, so far as I can tell. That is, it is not an MMORPG- or SEAS-style package that real people are involved in; it's based on a few people inputting what they think is important rather than real people acting as they, hopefully, would in reality. I think. XXXX may be able to tell us more if he's of a mind.

[EDITORIAL NOTE: He apparently wasn't.]

The core needed is a scenario package wedded to a climate model, or with a robust climate module. The PO scenario would be rather easy to do, it seems to me, as it's a fairly straightforward issue, and would be added as a basic module. I could be wrong about that.

As for implementation, I'd be hoping to have this supported by serious money and made available world-wide for free so there are no restrictions. Ideally, we'd capture the imagination of some important people and be able to make a big deal of the launch and subsequent modeling with an eye toward 1. raising awareness of how serious the times are and 2. actually modelling a solution or two that might be viable, or at least move us towards some solutions.

I am BCCing this to XXXX and XXXX. Hope you don't mind. I'd like to get some points of view on how viable this idea is, then, if something worthwhile can be sketched out, look at a grant proposal or presenting something eventually to... I don't know... ASPO? Gore's new outfit? Our new presidente? Bill Gates' foundation?

Note: I have no programming skills. I'm an idea man, not a programmer.



Here are cut-and-pasted e-mails that you can sort through if it strikes your fancy. There may be some thoughts in there I missed here.

This is a role-player program to an extent, but most of the nodes or "agents" are dummies. It's been used with the US gov't.

We are thinking the same thing: MMORPG. To my reading, that's what SEAS does. It takes scenarios then plugs in real people as the agents. They make real decisions and create real outcomes; no different than the MMORPG. The difference would be that the programmed dummy nodes/agents would also be live people. I'm certain I've read before that this is possible with this software.

Since they are doing scenarios already, the people at Purdue should have no problem putting the various scenarios together into one all-encompassing scenario. Marrying it to something like Climate Prediction (I used to do that, but my RAM is too light) would be a challenge, of course. It might be easier to simply write the module in. After all, the program doesn't actually NEED to run a real GCM, but just have climate scenarios as variables coded in, no?

I think this would be a good way to create awareness on a global level of the perfect storm. I have no doubt a room of good programmers could get this up and running in months - certainly less than a year.

Now, perhaps the thing to do is to create a crude simulation that works to use as a demo to present the idea... make it a viral hit and get someone with money to back a high quality version.

But I'm not a programmer; just a guy with an idea.

That and $5 will get me some Starbucks sewage.

I've not had time to look over your stuff, but will do so this weekend.
My aim is much higher, actually. The idea is to have a global model, many models, actually, using real people with real scenario creation. The software that can do this is SEAS out of Purdue University.

The home page: http://seas.mgmt.purdue.edu/index.html

What they do: http://seas.mgmt.purdue.edu/projects.html

Here's some detail:

I believe the software can be used to model millions of users. For our purposes, it would have to be wedded to GCM(s), with energy and economy components perhaps done through the existing software.

This would be released to the web with real people signing up and being a node on each run. I imagine (as in dream of) multiple models of various time lengths running concurrently.

Might even be done for profit, but my preference is to get funding from somewhere and have this running as a free, real-world model of what we face. Runs with positive outcomes would hopefully provide clues as to which way to go with public policy and/or individual efforts.

I see this as a real way, and perhaps the only realistic way, to create a mass movement of people that are fully informed on all aspects of The Perfect Storm.
Hey, fellow climate-change Doomer, way to fight the good fight at TOD. My name's XXXXX, and I've been thinking about doing some sort of climate modeling, just for fun. I don't have any particular qualifications (EE degree), but the problem domain is fascinating. I've been trying to imagine how to make a playable game with realistic climate dynamics. Something like the classic old game Global Effect. The new game Spore lets you terraform planets, but this amounts to pointing a ray gun at the planet for a while.
If you take T21, the SEAS scenario modeling software, and the idea of MMORPGs (massively multiplayer online role-playing games), such as Second Life, then you've got my idea for finding a global solution while at the same time creating global awareness.

Maybe you can broach this idea to someone at Millennium. Actually, SEAS may be enough to do the job, just scaled up massively and put online with multiple (hundreds? thousands?) models running simultaneously.

The key is to have real people involved, not just "leaders." Millions of them.
Our idea - my wife thought of the original as an educational tool for AGW - is to take this way beyond that. It would model reality as closely as possible. You'd have the module idea that T21 goes with, I suppose, but with AGW and Peak Oil added in. Might have to mate the thing with the outputs from some of the climate models, but that info should be easy enough to get from NASA, etc. Or, it might have to be integrated with a watered-down - or even full - climate model.

As many nodes as possible would be live, with all dummy nodes (agents?) filled in with a variety of "persons" shaped to match world population, preferably proportionately.

Live players would be able to shape their behaviors directly, and dummy nodes would be shaped by parameters and changes that occur in-game. Millions of nodes are needed to make it a globally realistic scenario, I'd think.

On the other hand, it would be cool to have all communities be real communities in the world. You'd have to sign in where you live. If you're the only one, everyone else would be averaged for your location. Etc., etc.

The beauty of this idea is it would be a model that might actually end up modeling a solution, or at least give a strong hint about which direction(s) we might take. Let people see it unfold, with their input... and not a game, a simulation.

Could be powerful stuff.

You could have many going at the same time with different input parameters. Players might even set up their own scenarios and start them. Perhaps they'd have to have a critical mass of real people to actually start up.

Another beautiful thing is, it takes the policy research and data out from behind the doors and halls of TPTB and gets them into the hands of Everyman.

This sounds like a very worthwhile project and I am hoping it will be expanded. Climate change and resource depletion are very important; on the other hand, to be truly predictive, you also need a way of modeling technological progress and political dynamics.

There's been quite a lot of work done on historical modeling of technology (e.g. Korotayev), which generally concludes that it's doubly hyperbolic until the physical base that supports it hits some kind of limit of growth and causes technology to regress.

Political systems are much trickier; nonetheless, as we see on the Oil Drum geopolitics are probably going to become very baneful influences on our civilization as energetic decline sets in. Thinking of a good way to model these is pretty hard though...

Curious as to what "doubly hyperbolic" implies. There is hyperbolic in terms of sinh and cosh functions, and then you have hyperbolic as in hyperbolas.

modeling technological progress and political dynamics

I believe this is taken care of by two elements. 1. If you check out the link to the SEAS software, it is a scenario-driven model which appears to have been geared to top-down modeling of emergency responses. 2. The use of real people from all levels of society, including gov't, as noted above.


Psychohistory anyone?

"Curious as to what 'doubly hyperbolic' implies."

I meant to write quadratic hyperbolic, sorry. Also, it was world GDP growth, not technology (which is merely hyperbolic). It's been a while since I read it.

The idea is that Y = k / (t0 - t)^2, where Y is gross world product, k is a constant, t is time, and t0 is a "singularity point" where Y approaches infinity. A conclusion was that the above equation described growth (as proxied by GDP per capita) extremely well until it ceased being hyperbolic in the 1970s. This is not surprising, since about that time it was realized that there exist limits to growth both physically (e.g. the oil shocks) and psychologically (the appearance of mass environmental movements, the book Limits to Growth, etc.).
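As a quick numeric sketch of that equation (the constants k and t0 below are placeholders for illustration, not values fitted to real GDP data):

```python
# Quadratic-hyperbolic growth, Y = k / (t0 - t)**2.
# k and t0 are placeholder values, not fitted constants.
def gross_world_product(t, k=1.0e6, t0=2005.0):
    """Blows up as t approaches the singularity point t0."""
    return k / (t0 - t) ** 2

# The constant k cancels in ratios, so the acceleration is driven
# entirely by proximity to t0:
ratio = gross_world_product(1970.0) / gross_world_product(1900.0)
print(ratio)  # (105/35)**2 = 9: a ninefold rise over those 70 years
```

The hallmark of this functional form is that the doubling time itself shrinks as t0 nears, which is what distinguishes hyperbolic from plain exponential growth.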

It's not "psychohistory", it's sometimes called "mathematical history".

My immediate thought when I read Ugo's post was that we need a collaborative, distributed, live dynamic model that "learns" as it gathers more information from many different sources.

One of the problems I can see, however, is that, particularly in relation to politics and economics, the model itself could become part of the feedback loop, effectively becoming a self-fulfilling prophecy if enough faith was placed in it.

My immediate thought when I read Ugo's post was that we need a collaborative, distributed, live dynamic model that "learns" as it gathers more information from many different sources.


the model itself could become part of the feedback loop, effectively becoming a self-fulfilling prophecy if enough faith was placed in it.

Any run results should be considered guidelines to possible solutions, hints, if you will, rather than definitive solutions.


My gut feeling is that climate change and resource depletion are fairly loosely coupled, especially in the direction of climate change affecting resource depletion: current trajectories are to use up all the climate-altering resources, and the only thing climate might do is slightly modify the time scale of this process. Going the other way, the total amount of carbon burned obviously has an important effect on CC, but this could be adequately modeled by a one-way exchange of data. Of greater importance, both ways, would be if, and how much, geoengineering might be done to combat CC. This ties in to resource depletion because of the need to devote resources to whatever schemes are attempted. But it will also be affected by politics, which I think makes it extremely uncertain.

In industry I saw some of the same pro-specialization forces operating. The department effect is probably most important, as the interdisciplinary guy can easily fall between the funding and recognition gaps.

... The Hubbert curve is, actually, a stripped down version of more complex models of resource exploitation that can be created using system dynamics. ...

I think this characterization of Hubbert's work is quite unfair and misleading. In normal usage, a stripped-down version is something that comes after, in time, a more serious effort at a model. In fact, prior to Hubbert there were no serious - i.e., mathematically and physically sound - models. His was the first serious model in that he built into it the fact that the resource is finite, albeit without any fixed idea as to its numerical size. He used the Verhulst equation because it was first introduced (by Verhulst) to improve on Malthus' model of 'geometric' growth of human population. It is very much in the tradition of serious scientific modeling: first identify the features of the situation that you know must pertain, but that you have no idea how to measure. It was a very good piece of work. Complex models that have followed have not added much, IMHO. Human Nature seems to demand that a 'good model' must be too complicated for a mere human to understand. Human Nature is mistaken.

And, Hubbert is the Patron Saint of TOD. You blaspheme! ;-)

The Verhulst equation assumes things procreate and die. Oil does not do this. It just so happens that the logistic sigmoid can also be derived from a dispersion model, which has nothing to do with birth-death dynamics.

Hubbert was a good scientist but his use of a logistic was a pure heuristic. I don't care if stating this is blasphemous or not.

Geek7, I stand by what I wrote, even though I know I am committing blasphemy against our Patron Saint! You are right that you can base a very simple depletion model on the Verhulst or logistic equation. But if you use system dynamics, you have a much more detailed and flexible model of what is going on. You say that complex models have not added much to Hubbert's model, but I have to disagree. Think, for instance, of the many historical cases in which the depletion curve has not been symmetric. To simulate a non-symmetric curve, you need a relatively complex model - you can't do it with a logistic (at least not with the standard definition of a logistic curve). But I agree that system dynamics is often over-complexified. It seems that some people can't believe in a model unless they don't understand it :-)
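The symmetry limitation being discussed here can be checked directly. A minimal sketch, with illustrative (not fitted) values for the ultimately recoverable resource, steepness, and peak year:

```python
import math

def hubbert_production(t, urr=2000.0, k=0.05, t0=1970.0):
    """Yearly production implied by a logistic cumulative curve:
    Q(t) = URR / (1 + exp(-k*(t - t0))), with P(t) = dQ/dt.
    urr, k, and t0 are illustrative placeholders, not fitted values."""
    e = math.exp(-k * (t - t0))
    return urr * k * e / (1.0 + e) ** 2

# The bell curve is exactly symmetric about the peak year t0:
# production 20 years before the peak equals production 20 years after.
before = hubbert_production(1950.0)
after = hubbert_production(1990.0)
print(abs(before - after) < 1e-9)  # True
```

That built-in symmetry is precisely why a skewed historical decline cannot be reproduced by a plain logistic, and why a fuller system-dynamics model earns its extra complexity in those cases.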

My objection was to characterizing the model as 'stripped down'. Since it was prior in time, it could not be stripped down. When he did his work there was no complex model from which he could strip out the excess complexity. Presumably, the system dynamics crowd knew about Hubbert's work before they started their own work on this topic. The ones with intellectual integrity surely cite Hubbert in their reviews of the field. It would have been surprising had they not made improvements, given the data and computers that have become available since Hubbert's time. But 'stripped down' is an uncalled-for denigration, and is factually untrue. IMHO.

I would like to see a complex model in which various aspects of the complexity can be switched on and off. With such an arrangement, one might gain some insight into what is more important and what is less important. And insight into the precision of the predictions and the sensitivity of the predictions to changes in the adjustable parameters.

But we know that the trend is downward. We would not believe a model that predicted a drop in production to half of peak, and then a long burst of exponential growth. So what are we looking for in more complex models? And are current models capable of meeting our needs? It's all very interesting.

There is a documentary called 'Crude' (an ABC-BBC collaboration, if I recall rightly), which explains that fossil fuels, especially crude, were created mostly during peaks in global warming, when phytoplankton blooms created anoxic conditions, causing immense die-offs, the cadavers of which sank unconsumed to the bottom of the sea, thereby sequestering enormous amounts of carbon and lowering the mean temperature of the world's climate.
The documentary made the point that by using fossil fuels, we are now releasing into the air all the carbon that was sequestered millions of years ago, thus recreating the conditions that caused the creation of oil in the first place.

Inevitably, we will release much, but not all, of the carbon that was sequestered in oil, coal, and gas fields.
How much of an impact would this shortfall have on the global warming we can still cause? Has anyone got numbers we could get a handle on, comparing atmospheric and geological carbon amounts?

Yes, we all know what is happening. You can make all the super-complex system analyses your tinkering heart desires. You can predict the collapse to the day or hour and know exactly why and where we went wrong. But can we do f*ck all to stop, help, or mitigate in any way the rising shit storm? I don't hold out much hope; humanity has momentum, a force and direction that seems to have been building for hundreds or thousands of years. Now, on the cusp, do you think we can avoid reaping what was sown? Also, a large part of the world is not industrialized, lives in poverty, and doesn't enjoy any of the benefits of FF. There is lots of speculation here about what PO will look like; I don't think you have to look far. When we speak of collapse we're probably talking about a minority. We built a world we can't maintain - a big house on a shoddy foundation. All us oil-subsidized people are in deep sh*t (US), but we have been stockpiling guns, so I don't expect we will go down without a fight. A heavily armed, strung-out addict... scary. Love this site - it has the most sophisticated analysis of a train wreck, and I can't take my eyes off it.

But can we do f*ck all to stop, help, or mitigate in any way the rising shit storm?

The point of the idea I posted above re: modeling is to do just that: model solutions, not necessarily the problems. Millions of real people engaged in a model of what they really do/might really do.


The issue of how to model contemporaneous climate destabilization and resource depletion/substitution may be helpful in keeping analysts off the streets, but the sheer range and qualitative diversity of variables implies that cogent forecasts for the coming decades are likely to be rare - like hens' teeth.

Consider just two factors within the present 'problematique'.

1/. In 2003 Nature published a paper by one Dr Freeman of Aberystwyth Uni, regarding the impact of elevated CO2 in boosting one micro-organism, on which another in turn thrived, while generating an enzyme that decays the peat bogs they inhabit.
This explained a puzzle going back to the observation, in the early 1960s, of the annual ~6% increase of Dissolved Organic Carbon [DOC] in watercourses flowing from peatbogs worldwide, whence the carbon outgasses as CO2.
It was remarked by the author that unless the process is controlled, peatbogs' DOC will be emitting an annual output of carbon equalling total 2003 anthro-output by about 2060.

2/. In 2005 the EU funded research that developed a new catalyst for processing wood-sourced syngas (CO + H2) to methanol (CH3OH - an exceptionally clean-burning liquid fuel) at a conversion rate of ~65% by weight - i.e., ~650 kg of methanol from 1,000 kg of dry wood.
Notably, wood has about half the hydrogen needed for a perfect ratio with its carbon content for methanol production -
meaning that unless cheap hydrogen is somehow available, there is a large mass (>200 kg) of surplus carbon per tonne of dry wood processed.
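A rough mass balance makes the surplus-carbon figure plausible. Note the ~50 wt% carbon content of dry wood below is a textbook ballpark I am assuming, not a number from the comment; the 65 wt% methanol yield is from the comment:

```python
# Rough mass balance behind the ">200 kg surplus carbon" figure.
WOOD_KG = 1000.0
WOOD_C_FRACTION = 0.50               # ASSUMED dry-wood carbon content
METHANOL_YIELD = 0.65                # ~650 kg methanol per tonne of wood
METHANOL_C_FRACTION = 12.0 / 32.0    # CH3OH: 12 of 32 g/mol is carbon

carbon_in_wood = WOOD_KG * WOOD_C_FRACTION                            # 500 kg
carbon_in_methanol = WOOD_KG * METHANOL_YIELD * METHANOL_C_FRACTION   # ~244 kg
surplus_carbon = carbon_in_wood - carbon_in_methanol                  # ~256 kg

print(surplus_carbon > 200.0)  # True: consistent with the claim
```

Under these assumptions roughly a quarter-tonne of carbon per tonne of dry wood is left over, which is the stream proposed for biochar below.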
The wise use of that carbon is undoubtedly as biochar in the modern version of the ancients' practice of Terra Preta, where charcoal is buried in agricultural soils. This can raise tropical soils' yields by 2 or 3 times or better, while also removing carbon from the forest-to-atmosphere cycle at least for millennia.
Moreover, in helping to restore the airborne CO2 concentration from the present 388 ppmv toward the pre-industrial 265 ppmv, this use of forest produce would help decelerate the GW feedback loops, including that of DOC from peatbogs.

Either of the above two factors may transform the outcome we'll see in the coming decades - For instance, planting sufficient forestry for coppices to supply methanol, biochar and raised crop yields, would itself add to the biochar's recovery of a significant annual gigatonnage of airborne carbon, while the raised food supply, and home-grown liquid fuel supply (potentially wherever trees grow well) could have restorative effects on society's stability.

Yet there is as yet little recognition of the DOC threat - and it is only one of many feedbacks that have taken off or may yet do so - just as there is as yet no official global effort for the requisite reforestation for methanol & biochar, which of course is only one of a myriad of mitigation options, albeit an exemplar of integrated multi-yield mitigation policy.
What effect these two factors, among too many others, will have in the coming decades thus seems simply wide open.

Perhaps the key unknown is just what "Treaty of the Atmospheric Commons" may become possible, and the extent to which it embodies the global climate policy framework of "Contraction & Convergence." IMHO, without such a treaty, it looks rather unlikely that we shall avoid energising the feedback loops to the point that they defenestrate our society.

So how should we begin to model the potent interactions of these destabilized organic systems reliably, with their (almost) entirely unknown characteristics & capacities ?
Ugo helpfully remarked on the relatively high sophistication of the climate models compared with the resource-depletion models shown at the conference. Thus it is worth noting that it is the IPCC that has refrained from attempting to model the climate feedbacks.



The results of the first comprehensive study performed using this model was published in 1972 in the form of the report called “The Limits to Growth”. ... Although historically older than climate models, world models are far less sophisticated. That is due to the decline in interest in the 1980s, when these models where subjected to a wave of politically oriented criticism.

I had heard about The Limits to Growth, which came out just when I was entering college. I was worried that they might have found serious obstacles to progress, so when it showed up in the college book store I leafed through a copy to see if it might be worth buying. Just as a quick sanity check I decided to verify that they acknowledged that we could never run out of the most common elements, like aluminum. When they failed that test there was no hope of my ever having any trust in the information in the report, and I dismissed it from my mind. No politics involved at all.

Hell, education was wasted on you...

Do you think every resource is a stand-alone entity?
What use is all the metal ore we could ever use if there were no coal, or oil, or gas? Or if our energy supply peaked and declined?

Read that and understand what energy is required to manufacture aluminium. See if you can come to grips with what limits that resource and many others.

It takes a lot of energy to refine aluminum, and always will. What does that have to do with treating bauxite as the only possible ore? Whatever the additional cost would be to refine aluminum from kaolinite would not be enough to stop aluminum from being used in those cases where it has a clear superiority. Treating the future world aluminum supply as if it would run out once all the bauxite is gone is scaremongering at best.

If mining aluminum ore depends on having sufficient oil, which I don't think it really does, then what difference would it make how large the bauxite reserves are? We'll almost certainly run out of oil first anyhow.

Luckily there is plenty of energy available to the world, either from solar or nuclear. I can't see energy being a long term constraint at all.

Ambulator, it is not a very good idea to dismiss a book after reading just a few pages of it. What you stumbled on was a table that stated how long the known reserves would last if used at current rates, or on a hypothesis of exponentially growing exploitation. If you had read (and understood) the text, you would have seen that they were speaking of RESERVES, not of aluminum as a chemical element. Please note that there is a difference between reserves and chemical abundance in the crust. It is - literally - a world of difference, and you missed it completely. It is not enough to read; one must understand what one reads. If you had made the effort of reading (and understanding) the book, instead of reading (and misunderstanding) just one table of it, you would have discovered that - curiously - the group of scientists from MIT who wrote the book were not so completely feebleminded as it seemed obvious to you. They understood very well that we would never run out of aluminum, as we will never run out of copper, or zinc, or any mineral: after all, we can't mine ourselves out of the Earth's crust. No matter how strange you may find it, even MIT professors can understand such a concept. What we will run out of is minerals that we can profitably extract. Another world of difference.

I was wondering about the impact of PO on coal production. Although mechanization has replaced lots of manpower, there are still lots of coal miners. With drastically higher diesel costs, lots of open-pit mining could get pretty expensive; what with excavation costs and diesel train transport costs, it might not be worthwhile to get out a lot of dirty coal with low energy content (brown coal). Electricity is also in heavy use in deep pits, and that could feed from the coal itself.

Some nice pictures and data here.


Of course after PO, labour could get cheap so maybe coal production will climb, not fall drastically. A climb in coal production would exacerbate GW whereas a fall would ameliorate it. The energy intensity of coal mining and its priority in a post PO society would determine which case would occur.

So regardless of models a bit of common sense would seem necessary beforehand to determine whether PO causes Peak Coal or even a steep coal crash saving the world from GW.

There have been a lot of very interesting comments here. Some went in depth on models: what models are, how we should use them, which kind, etcetera. That is a fascinating subject, worth a dozen posts.

I have a comment on one of Nate Hagens' first comments, about EROEI and the LTG (better said, system dynamics) models. It is true that there is no EROEI in those models - not explicitly, at least. However, it is nearly unavoidable. S.D. models are based on the concepts of stock and flow, whereas thermodynamics is based on the concept of state function. The two kinds of concepts just don't mix well. Basically, we don't have a formal thermodynamic description of the kinetic behavior of a system, even though we have it for its initial and final states. We have something similar in the relation of chemical thermodynamics and chemical kinetics: we need both for understanding chemistry, but the two are separate fields with different formalisms and methods. Anyway, without going too much in depth here, it is possible to make the EROEI explicit in the system dynamics equations. When we succeed in completing our "mind sized" version of the Hubbert model, you'll see how EROEI is linked to the bell curve.
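Ugo doesn't give the equations of the "mind sized" model here, so as a hedged illustration only, here is one generic two-stock (resource plus capital) formulation of the kind system dynamics uses; the equations and parameters are my invented placeholders, not the model from the post:

```python
# Generic two-stock system-dynamics sketch: a finite resource R and
# extraction capital K. NOT the "mind sized" model referred to above;
# equations and parameters are illustrative assumptions.
def simulate(r0=1.0, k0=0.001, dt=0.1, steps=2000):
    r, k = r0, k0          # remaining resource, extraction capital
    production = []
    for _ in range(steps):
        p = r * k                  # extraction needs both stock and capital
        dk = 0.5 * p - 0.2 * k     # reinvested share minus capital decay
        r -= p * dt
        k += dk * dt
        production.append(p)
    return production

prod = simulate()
# Production rises, peaks, and declines: a Hubbert-like bell curve.
# A crude EROEI proxy, output per unit of capital (p/k = r), falls
# steadily as the stock depletes -- the linkage Ugo alludes to.
print(0 < prod.index(max(prod)) < len(prod) - 1)  # peak is interior: True
```

The qualitative point survives wide parameter changes: as long as reinvestment depends on production and production depends on the dwindling stock, the output curve is bell-shaped and returns per unit of effort decline monotonically.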

That brings me back to the interdisciplinarity problem - the fact that I can't get any funding for studying these things. But, on the other hand, everyone knows that things done illegally are done most efficiently :-)

Ugo, while I have a real respect for the groundbreaking effort behind “Limits to Growth” and its update, I’m concerned that we now face conditions which lack a precedent and for which we lack data to formulate effective models.
Technically, we are dealing with the conundrum of an organic system's recognizable but unpredictable responses to perturbation.

I refer both to the interaction of GW feedback loops, which have been advancing unexpectedly fast (e.g. Arctic summer ice loss forecast for 2100 seen in 2007)
and to the necessarily rapid application of novel mitigation techniques whose outcome will be a matter of trial and error.

In a post above I outlined the Coppice Methanol + Biochar option which may, quite possibly, provide significant liquid fuel supply & diminish global food shortages & steadily reduce airborne carbon; but then again it may not.

Similarly, the loss of the Arctic ice cap may well so reduce Arctic albedo as to cause massive sudden melting of the Greenland ice cap (with its ~20 ft of SLR), greatly accelerate the melting of permafrost and the collapse of methane hydrates (with their vast stores of CH4 & CO2), and further encourage the northward boom of the pine bark beetle that is already setting up vast conflagrations of dead conifers in the boreal forests.

Then again, the loss of the Arctic Ice Cap may cause only a marginal incremental advance of any or all of these feedbacks. We simply do not know. We are in uncharted waters.

The IPCC has refrained from modelling the feedbacks’ interaction precisely because they lack the data to provide credible models, despite having, as you remark, far more advanced modelling capacities to deploy than the resource-depletion fraternity.

In terms of the laudable 'Mission Earth' objective, I suggest that without useful climate stability projections there are no reliable forecasts of global food production, nor of political stability, nor of society's throughput of raw materials; and in having to rely on novel techniques such as biochar (in addition to intentional fossil fuel contraction), we cannot even give reliable projections of the rate of anthro-mitigation of airborne carbon.

There are of course further issues, such as the ongoing disruption of the planet’s major carbon sinks, for which we again lack reliable development data.

Perhaps you could explain how you see these difficulties being overcome?



A thought just occurred to me: We don't need to model climate. Hang with me here...

If we simply accept a few parameters as TRUE/FALSE, then we can deal with the precursors rather than needing to model impossible levels of complexity.

1. CO2 is driving ACC = TRUE

2. Rising temps will destabilize ice caps over timeframe N = TRUE

3. Rising temps will decrease food production beginning at time point T = TRUE

4. Rising temps will increase catastrophic weather and weather-related events = TRUE

5. Personal and societal scales of change must occur to arrest the rise of GHGs = TRUE

6. Since we do not know at what point in time, or at what accumulation of GHGs, tipping points are reached (they may already have been crossed, may be crossing now, or soon will be), the risk is so huge it must be avoided at virtually any cost = TRUE

7. Gov't policy alone can arrest the rise in GHGs = FALSE


In this case, the actual effects of climate change need not be modeled. What needs modeling is reductions in GHG emissions and increases in sequestration.
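The parameter framing above can be sketched in a few lines. This is only an illustrative sketch (the variable names and the decision rule are my own shorthand for points 1-7, not anything from a real climate model):

```python
# Sketch of the TRUE/FALSE precursor framing above (names are illustrative).
PARAMS = {
    "co2_drives_acc": True,             # 1. CO2 is driving ACC
    "ice_caps_destabilize": True,       # 2. Rising temps destabilize ice caps
    "food_production_falls": True,      # 3. Rising temps decrease food production
    "extreme_weather_increases": True,  # 4. More catastrophic weather events
    "societal_change_needed": True,     # 5. Personal/societal change required
    "tipping_risk_unacceptable": True,  # 6. Tipping-point risk must be avoided
    "policy_alone_suffices": False,     # 7. Gov't policy alone can arrest GHGs
}

def what_to_model(params):
    """If all the precursors are accepted as given, the modeling effort
    shifts from climate effects to emissions cuts and sequestration."""
    accepted = all(v for k, v in params.items() if k != "policy_alone_suffices")
    if accepted and not params["policy_alone_suffices"]:
        return "model GHG emission cuts and sequestration"
    return "model climate effects"

print(what_to_model(PARAMS))  # -> model GHG emission cuts and sequestration
```

The point of the sketch is simply that once the booleans are fixed, the only open question left to model is the mitigation pathway, not the climate response itself.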



So GW effects are a horror story to make us change our behaviour. Critical, however, is the leverage of PO on the system, which could kill the whole release of GHGs: costs for oil, gas, and coal production would skyrocket, while car use and general electrical production would plummet (less coal and nuclear fuel mined, as mining is energy intensive).

Surfer dude,

This idea that there is some savior in the form of PO's effects on productivity is wishful thinking. It's something anti-ACC people grasp at to justify their bizarre beliefs. To think demand destruction will save us, you must ignore that there is more CO2 in the air ALREADY, and that it is ALREADY causing devastation. With it growing in atmospheric abundance every year at an accelerating rate, currently about 2.2 ppm per year, the future effects are only being amplified.

Now, if we calculate with a steady rate, not an increasing one, we get to 450 ppm by 2037, and to 550 ppm by about 2083. That first target is not going to be greatly affected without very large changes in use of FFs. Will the downturn be enough? Perhaps. But are you willing to bet your life (rather, your descendants' lives) on it? I'm not.

But let's say it does bring about serious reductions in the rate of emissions. Let's say it drops to a steady quarter of current rates (emissions from humans will never go to zero): 0.55 ppm per year instead of the current 2.2. We still hit 450 ppm by 2123. That may seem far off, but my grandchildren, if I have any, could still be alive. Certainly my great-grandchildren would be likely to be, and absolutely my great-great-grandchildren would be.
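For what it's worth, the arithmetic above can be checked in a few lines. The baseline is an assumption on my part: roughly 387 ppm in 2009, the approximate Mauna Loa reading at the time of writing; constant-rate accumulation is of course a simplification:

```python
# Back-of-envelope check of the CO2 timelines above.
# Assumed baseline: ~387 ppm atmospheric CO2 in 2009.
BASELINE_PPM = 387.0
BASELINE_YEAR = 2009

def year_reached(target_ppm, rate_ppm_per_year):
    """Year a target concentration is reached at a constant accumulation rate."""
    return BASELINE_YEAR + (target_ppm - BASELINE_PPM) / rate_ppm_per_year

print(int(year_reached(450, 2.2)))    # 2037, at the current ~2.2 ppm/yr
print(int(year_reached(550, 2.2)))    # 2083
print(int(year_reached(450, 0.55)))   # 2123, at a quarter of today's rate
```

Under that assumed baseline, the simple linear projection reproduces the 2037, 2083, and 2123 figures quoted above.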

A recent study pointed out that temps will be rising for quite some time and that the effects will last a thousand years just from what is currently in the atmosphere. Here's some commentary on that study:

No, we cannot hope that a recession or depression is going to solve our problems, because it won't - even in the best of circumstances.

Let's not forget other systems and other gases that affect these things. Methane emissions are rising and will continue to be added to the atmosphere through clathrate and permafrost releases, possibly in huge volumes, given that there is more carbon in the tundra and clathrates than is already in the atmosphere.



Thanks for the reality check. I am not against GW, but I want to put it into perspective in terms of the modeling. If PO crashes GHG release down to 25% and a lot of FFs never get used due to above-ground cataclysms, then there might be some hope, though not for what is already in the air unless we can make a lot of biochar to soak it all up. If PO kills the system in ten years and emissions went down to 25% of current levels (cows and modern industrial ag are heavy contributors too), maybe we would have time to adjust to the new life instead of being almost instantly killed off (Greenland etc. going wildly unstable by 2020-30, say). I just want to be realistic. Maybe I can have a sliver of hope and not go around like Lovelock. Unfortunately, even if we went down to zero emissions today it might just be too late: the last time CO2 was at this level, there was some 50 meters more of water on the earth; it just takes time to change WAIS and Greenland stability.

I'll just have to hope for a terror attack on Abqaiq followed by nuclear war and mass starvation. Quick megadeath is more efficient than a long, drawn-out destruction of everything. Here's hoping for the worst.

ccpo -

while I'd strongly agree that we need to model the potential for cuts in GHG output and for growth in carbon sequestration,
the inability to model climate reliably, including those changes, means flying blind with regard to policy.

For instance, a notional modern version of Stalin (aka Cheney?) may feel justified in planning to leave a billion or two to starve by 2020,
given that despite great efforts at reform, we may, quite possibly, lack a climate to keep them fed in the 2030s,
and their early absence would improve the remaining population's prospects.

False logic of course: just the damage done to agriculture during extreme famine is itself massive.
But I hope this illustrates the point that without the ability to model climate,
policy planning is open to massive banditry as much as to well-meant profound error,
on a scale that is unprecedented.

Thus modelling the feedbacks' interactions despite the lack of data on their future conduct
is perhaps the highest of research priorities, but appears simply unachievable.

In terms of where we've got to so far, the link below shows a pair of graphics
juxtaposing world food production by $ value of national output
alongside the current distribution of drought, extreme drought & historic drought.




while I'd strongly agree that we need to model the potential for cuts in GHG output and for growth in carbon sequestration, the inability to model climate reliably, including those changes, means flying blind with regard to policy.

Not at all. Your points here ignore one thing: it doesn't matter what future CO2 is or isn't; it's already too high. Regardless of what we model, we already know levels are too high and need to be stabilized, if not reversed.

Thus, the primary question really isn't what happens if we succeed, or don't succeed, it's how do we stabilize GHGs. Everything else hinges on that. The points you raise are more important in the arena of adaptation, I'd say.


I wish I had an answer for that. One thing that I know is that we have better and better models. But we are also discovering that the climate system is more complicated than we thought. Still, even the most complex system must obey the laws of physics, and if we put a thermal forcing on the complex climate system, we know that this extra heat has to go somewhere. Surely it will not be a linear process; maybe it will oscillate wildly; maybe it will become chaotic. But warm up it must.

The problem is that we are discovering that the social and political systems are impervious to warnings. So, humans are able to build these beautiful models and, at the same time, are unable to act on the results. It is a bit humiliating for a species that liked to use the term "sapiens" for its name that we might only be saved by our inability to extract and burn everything we would like to extract and burn.

Ugo -

thanks for your response. I'd observe that while modelling is improving, and while even the most complex of systems
must obey the laws of physics, we are significantly ignorant of how those laws may work
with regard to utterly complex organic systems.

For instance, given the ongoing development of a potential multi-billion-tonne fuel store in the Boreal Forests,
which year will provide the low ground moisture and high lightning incidence
to put that much carbon into the atmosphere?
2012? 2020? 2028? Or would it be a small volume per year?

It appears that while the organic system must obey the laws of physics, it chooses when it will do so quite randomly.

Hoping that you & your colleagues may find a way to resolve this conundrum,



A curious feature of the boreal system that no model seems yet to have taken into account is what substantially decreasing ice will do to the far northern cloud cover. I live smack in the middle of the boreal forest and have been looking at world satellite composites for ten years--nothing too sophisticated, and memory is all I have to go by. Summer weather up here has shown considerable influence from the open arctic seas north of Siberia the last couple of years; before that, our wet summer weather almost always came from the south Asia monsoons drifting north and flowing eastward atop the northern portion of the Siberian land mass. A ten- or twenty-year animation of cloud cover and ice edge would be instructional. I would love to see it.

When five to seven million acres are burning within a couple hundred miles to a couple of miles of your house in all directions, it gets a little smoky--sometimes down to 100-foot visibility. We see it all first up here. Of course, for long-term evidence of climate change the North Slope is hard to beat as well: hundreds upon hundreds of miles of tundra-covered deep gravel plains. The amount of snow that fell in what is now the arid arctic must have been phenomenal to leave such a huge area of glacial outwash. Obviously something big eventually happens to cloud cover when the arctic ice all melts.

Thanks, Ugo, for your kind words and nice report.

The Mission Earth workshop was organized as part of the annual meeting of the Alliance for Global Sustainability (AGS), a collaborative research effort of Chalmers University (Sweden), ETH (Switzerland), M.I.T. (USA), and the University of Tokyo (Japan).

The AGS was originally (in the early 1990s) funded by the Max Schmidheiny Foundation. The original grant has meanwhile dried up, but the four partner universities have been able to keep the effort alive by independently acquiring additional research funding.

This year's AGS conference was placed under the theme Urban Futures: The Challenge of Sustainability. It was a high-level event with active participation by the Presidents of the four universities. Many interesting presentations were made. These will eventually be made available on the web in streaming video format, but this hasn't happened yet.

This year's AGS conference was accompanied by two workshops: the Mission Earth workshop that is the topic of this thread, and a second workshop organized by the World Student Community for Sustainable Development. That workshop, too, was very interesting, with excellent presentations by Colin Campbell and Bernard Lietaer, among many others. These presentations will also eventually be made available in streaming video format.

I started preparing for our own Mission Earth workshop two years ago during the AGS meeting held in Barcelona.