Information and Crude Complexity

This is a guest post by WebHubbleTelescope.
Abstract (please read this as a set of squished-together PowerPoint bullet points):

People become afraid when you mention theory. Everyone talks about entropy without actually understanding it. Simplicity can come out of complexity. "Knowledge" remains a slippery thing. We think that science flows linearly as previous knowledge gets displaced by new knowledge. Peak oil lies in this transition, much like plate tectonics at one time existed outside of the core knowledge. We define knowledge by whatever the scientific community currently believes. “Facts are not knowledge. Facts are facts, but how they form the big picture, are interconnected and hold meaning, creates knowledge. It is this connectivity, which leads to breakthroughs …” You will either think you understand the following post, or know for a fact that you don't.


Scientific theories get selected for advancement much like evolution promotes the strongest species to survive. New theories have to co-exist with current ones, battling with each other to prove their individual worth [Ref 1]. That may partly explain why the merest mention of "theory" will tune people out, as it reminds them of the concept of biological evolution, which they either don't believe in or consider debatable at best. Generalize this a bit further and you can understand why they might also reject the scientific method. If we admit that this is a chronic problem, not soon solved, the idea of accumulating knowledge seems to hold a kind of middle ground, and doesn't necessarily cause the knee-jerk reaction that pushing a particular theory would.1 So, what kinds of things do we actually want to know? For one, I will assert that all of us would certainly want to know that we haven't unwittingly taken a sucker's bet, revealing that someone has played us. I suspect that many of the diehard TOD readers, myself included, want to avoid this kind of situation. In my mind, knowledge remains the only sure way to navigate the minefield of confidence schemes. In other words, you essentially have to know more than the next guy, and the guy after that, and then the other guy, etc. TOD does a good job of addressing this, as we constantly get fed unconventional insights that explain our broader economic situation.

Ultimately we could consider knowledge as a survival tactic -- which boils down to the adage of eat or be eaten. If I want to sound even more pedantic, I would suggest that speed or strength works to our advantage in the wild but does not translate well to our current reality. It certainly does not work in the intentionally complex business world, or even with respect to our dynamic environment, as we cannot outrun or outmuscle oil depletion or climate change without putting our thinking hats on.

This of course presumes that we know anything in the first place. Nate Hagens posted on TOD earlier this year on the topic "I Don't Know". I certainly don't profess to have all the answers, but I want to know enough not to get crushed by the BAU machine. So, in keeping with the traditions of the self-help movement, we first admit what we don't know and build from there. That becomes part of the scientific method, which a few of us want to apply.

As a rule, I tend to take a nuanced analytical view of the way things may play out. I will use models of empirical data to understand nagging issues and stew over them for long periods of time. The stewing is usually over things I don't know. Of course, this makes no sense for timely decision making. If I morphed into a Thomson's gazelle with a laptop, cranking away on a model under a shady baobab tree on the Serengeti, I would quickly get eaten. Such a strategy does not necessarily sound prudent or timely.

Nate suggests that the majority of people use fast and frugal heuristics to make day-to-day decisions (the so-called cheap heuristics that we all appreciate). He has a point, insofar as we don't always require a computational model of reality to map our behaviors or understanding. As Nancy Cartwright noted:
This is the same kind of conclusion that social-psychologist Gerd Gigerenzer urges when he talks about “cheap heuristics that make us rich.” Gigerenzer illustrates with the heuristic by which we catch a ball in the air. We run after it, always keeping the angle between our line of sight and the ball constant. We thus achieve pretty much the same result as if we had done the impossible—rapidly collected an indefinite amount of data on everything affecting the ball’s flight and calculated its trajectory from Newton’s laws.
This points out the distinction between conventional wisdom and knowledge. A conventionally wise person will realize that he doesn't have to hack some algorithm to catch a ball. A knowledgeable person will realize that he can (if needed) algorithmically map a trajectory to know where the ball will land. So some would argue, from the standpoint of timely decision making, that having extra knowledge doesn't make a lot of sense. In many cases, if you have some common sense and pick the right conventional wisdom, it just might carry you in your daily business.
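As a small aside, here is what the "knowledgeable" route looks like in code -- a minimal sketch of my own (not anything from Gigerenzer or Cartwright), with invented numbers. Given Newton's laws and the launch conditions, the landing point falls out in closed form, using exactly the inputs the gaze heuristic never needs:

```python
# A minimal sketch of the "knowledgeable" route: given Newton's laws and
# the launch conditions, the landing point falls out in closed form.
# (The gaze heuristic needs none of these inputs.) Numbers are invented.

g = 9.81  # gravitational acceleration, m/s^2

def landing_point(vx, vz, x0=0.0, z0=0.0):
    """Downrange landing point of a drag-free ball launched at (x0, z0)."""
    # Time aloft: positive root of z0 + vz*t - 0.5*g*t^2 = 0
    t_flight = (vz + (vz**2 + 2 * g * z0) ** 0.5) / g
    return x0 + vx * t_flight

print(landing_point(vx=15.0, vz=20.0))  # ~61.2 m downrange, ~4.1 s aloft
```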

But then you look at the current state of financial wheeling and dealing. In no way will conventional wisdom help guide us through the atypical set of crafty financial derivatives (unless you stay away from them in the first place). Calvin Trillin wrote recently in the NY Times that the prospect of big money attracted the smartest people from the Ivy Leagues to Wall Street during the last two decades, thus creating an impenetrable fortress of opaque financial algorithms, with the entire corporate power structure on board. Trillin contrasted that to the good old days, when most people aiming for Wall St careers didn't know much and didn't actually try too hard.
I reflected on my own college class, of roughly the same era. The top student had been appointed a federal appeals court judge — earning, by Wall Street standards, tip money. A lot of the people with similarly impressive academic records became professors. I could picture the future titans of Wall Street dozing in the back rows of some gut course like Geology 101, popularly known as Rocks for Jocks.
I agree with Trillin that the knowledge structure has become inverted; somehow the financial quants empowered themselves to create a world where no one else could gain admittance.  And we can't gain admittance essentially because we don't have the arcane knowledge of Wall Street's inner workings. Trillin relates:

"That’s when you started reading stories about the percentage of the graduating class of Harvard College who planned to go into the financial industry or go to business school so they could then go into the financial industry. That’s when you started reading about these geniuses from M.I.T. and Caltech who instead of going to graduate school in physics went to Wall Street to calculate arbitrage odds."

“But you still haven’t told me how that brought on the financial crisis.”

“Did you ever hear the word ‘derivatives’?” he said. “Do you think our guys could have invented, say, credit default swaps? Give me a break! They couldn’t have done the math.”

If you can believe this, it appears that the inmates have signed a rent-controlled lease on the asylum and have created a new set of rules for everyone to follow. We have set in place a permanent thermocline that prevents any new ideas from penetrating the BAU of the financial industry.

I need to contrast this to the world of science, where one can argue that we have more of a level playing field. In the most pure forms of science, we accept, if not always welcome, change in our understanding. And most of our fellow scientists won't permit intentional hiding of knowledge. Remarkably, this happens on its own, largely based on some unwritten codes of honor among scientists. Obviously some of the financial quants have gone over to the dark side, as Trillin's MIT and Caltech grads do not seem to share their secrets too readily. By the same token, geologists who have sold their soul to the oil industry have not helped our understanding either.

Given all that, it doesn't surprise me that we cannot easily convince people that we can understand finance or economics or even resource depletion like we can understand other branches of science. Take a look at any one of the Wilmott papers featuring negative probabilities or Ito calculus, and imagine a quant using the smokescreen that "you can't possibly understand this because of its complexity". The massive pull of the financial instruments, playing out in what Steve Ludlum calls the finance economy, does often make me yawn in exasperation at the enormity of it all. Even the domain of resource depletion suffers from a sheen of complexity due to its massive scale -- after all, the oil economy essentially circles the globe and involves everyone in its network.

Therein lies the dilemma: we want and need the knowledge but find the complexity overbearing. Thus the key to applying our knowledge: we should not fear complexity, but embrace it. Something might actually shake out.

Complexity

The word complexity, in short order, becomes the sticking point. We could perhaps get the knowledge but then cannot breach the wall of complexity.

I recently came across a description of the tug-of-war between complexity and simplicity when I picked up a provocative book called "The Quark and the Jaguar: Adventures in the Simple and the Complex" by the physicist Murray Gell-Mann. I discovered this book while researching the population size distribution of cities. One population researcher, Xavier Gabaix, who I believe has a good handle on why Zipf's law holds for cities, cites Gell-Mann and his explanation of power laws. Gell-Mann's book came out fifteen years ago but it contains a boat-load of useful advice for someone who wants to understand how the world works (pretentious as that may sound).

I can take a couple of bits of general advice from Gell-Mann. First, when a behavior gets too complex, certain aspects of the problem can become simpler. We can, rather counter-intuitively, actually simplify the problem statement, and often the solution. Secondly, when you peel the onion, everything can start to look the same. For example, the simplicity of many power-laws may work to our advantage, and we can start to apply them to map much of our current understanding2. As Gell-Mann states concerning the study of the simple and complex in the preface to the book:
It carries with it a point of view that facilitates the making of connections, sometimes between facts or ideas that seem at first glance very remote from each other. (Gell-Mann p. ix)
He calls people who practice this approach "Odysseans" because they "integrate" ideas from those who "favor logic, evidence, and a dispassionate weighing of evidence" with those "who lean more toward intuition, synthesis, and passion" (Gell-Mann p. xiii). This becomes a middle ground between Nate's intuitive cognitive (belief system) approach and my own practiced analysis. Interesting how Gell-Mann moved from Caltech (one of Trillin's sources for wayward quants) to co-founding the Santa Fe Institute, where he could pursue out-of-the-box ideas3. He does caution that at least some fundamental and basic knowledge underlies any advancements we will achieve.
Specialization, although a necessary feature of our civilization, needs to be supplemented by integration of thinking across disciplines. One obstacle to integration that keeps obtruding itself is the line separating those who are comfortable with the use of mathematics from those who are not. (Gell-Mann p.15)
I appreciate that Gell-Mann does not treat the soft sciences as beneath his dignity, and he pursues an understanding of them as seriously as he does deep physics. He sees nothing wrong with the way the softer sciences should work in practice; he just has problems with the current practitioners and their methods (some definite opinions that I will get to later).

For now, I will describe how I use Gell-Mann and his suggestions as a guide to understand problems that have confounded me. His book serves pretty well as a verification blueprint for the way that I have worked out my analysis. As it turns out, I happily crib from most of what Gell-Mann states regarding complexity, which allows me to play the appeal-to-authority card to rationalize my understanding. (For this TOD post I was told not to use math, and since Gell-Mann claims that his book is "comparatively non-technical", I am obeying some sort of transitive law4 here.) As a warning, since Gell-Mann deals first and foremost in the quantum world, his ideas don't necessarily come out intuitively.

That becomes the enduring paradox -- simplicity does not always relate to intuition. This fact weighs heavily on my opinion that cheap heuristics likely will not provide the necessary ammunition that we will need to make policy decisions.

BAU (business as usual) ranks as the world's most famous policy heuristic. A heuristic describes some behavior, and a simple heuristic describes it in the most concise language possible. So, BAU says that our environment will remain the same (as Nate would say, "NOT making a decision IS making a decision"). Yet we all know that this does not work. Things will in fact change. Do we simply use another heuristic? Let's try dead reckoning instead. This means that we plot the current trajectory (as Cartwright described) and assume it will chart our course for the near future. But we all know that doesn't work either, as it will project CERA-like optimistic, never-ending growth.

Only the correct answer, not a heuristic, will effectively guide policy. Watch how climate change science works in this regard, as climate researchers don't rely on the Farmer's Almanac heuristics to predict climate patterns.  Ultimately we cannot disprove a heuristic -- how can we if it does not follow a theory? -- yet we can replace it with something better if it happens to fit the empirical data. We only have to admit to our sunk cost investment in the traditional heuristic and then move on.

In other words, even if you can't "follow the trajectory" with your eye, you can enter a different world of abstraction and come up with a simple, but perhaps non-intuitive, model to replace the heuristic. So we get some simplicity but it leaves us without a perfectly intuitive understanding. The most famous example that Gell-Mann provides involves Einstein's reduction of Maxwell's four famous equations by half, to two short concise relations; Einstein accomplished this by invoking the highly non-intuitive notion of the space-time continuum. Gell-Mann specializes in these abstract realms of science, yet uses concepts such as "coarse graining" to transfer from the quantum world to the pragmatic tactile world, with the name partially inspired by the idea of a grainy photograph (Gell-Mann p.29). In other words, we may not know the specifics but we can get the general principles, like we can from a grainy photograph.
Hence, when defining complexity it is always necessary to specify a level of detail up to which the system is described, with finer details being ignored. (Gell-Mann p.29)
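To make the Maxwell example concrete (this is the standard covariant form found in any electrodynamics text, not a quote from the book): bundle the electric and magnetic fields into the space-time field tensor $F^{\mu\nu}$, and Maxwell's four equations collapse into two:

$$ \partial_\mu F^{\mu\nu} = \mu_0 J^\nu \qquad\qquad \partial_{[\alpha} F_{\beta\gamma]} = 0 $$

The first packs together Gauss's and Ampere's laws; the second packs the no-monopole condition and Faraday's law. The physics stays the same; the non-intuitive abstraction of space-time does all the compressing.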
The non-intuitive connection that Gell-Mann triggers in me involves the use of probabilities in the context of disorder and randomness. Not all people understand probabilities, in particular how we apply them in the context of statistics and risk (except in sports betting, of course), and they don't routinely get used in the practical domains that would benefit from their use. How probabilities work in terms of complexity I consider mind-blowingly simple, primarily due to our old friend Mr. Entropy.

Never mind that entropy ranks as a most anti-intuitive concept.

Simplicity

Reading Gell-Mann's book, I became convinced that applying a simple model should not immediately raise suspicions. Lots of modeling involves building up an artifice of feedback-looped relationships (see the Limits to Growth system dynamics model for an example), yet that should not serve as the acid test for acceptance. In actuality, the large models that work consist of smaller models built up from sound principles; just ask Intel how they verify their microprocessor designs.

My approach consists of independent research and then forays into what I consider equally simple connections to other disciplines, essentially the Odyssean thinking that Gell-Mann supports.

I would argue that the fundamental trajectory of oil depletion provides one potentially simplifying area to explore. I get the distinct feeling that no one has covered this, especially in terms of exactly why the classical heuristic, i.e. Hubbert's logistic curve, often works. So I have merged that understanding with the fact that I can use it to also understand related areas such as:
  1. Popcorn popping times
  2. Anomalous transport
  3. Network TCP latencies
  4. Reserve growth
  5. Component reliability
  6. Fractals and the Pareto law
I collectively use these to support the oil dispersive discovery model -- yet it does bother me that no one has happened across this relatively simple probability formulation. You would think someone would have discovered all the basic mathematical principles over the course of the years, but apparently this one has slipped through the cracks.

Gell-Mann predicted in his book that this unification among concepts would occur if you continue to peel the onion. To understand the basics behind the simplicity/complexity approach, consider the complexity of the following directed graphs of interconnected points. Gell-Mann asks us which graphs we would consider simple and which ones we would consider complex. His answer relates to how compactly or concisely we can describe the configurations. So even though (A) and (B) appear simple and we can describe them simply, the graph in (F) borders on the ridiculously simple, in that we can describe it as "all points interconnected". This points to the conundrum of a complex, perhaps highly disordered, system that we can fortunately describe very concisely. The fact that we humans can do some pattern recognition allows us to actually discern the regularity within the disorder.


Figure 1:  Gell-Mann's connectivity patterns.

However, what exactly the pattern means may escape us. As Gell-Mann states:
We may find regularities, predict that similar regularities will occur elsewhere, discover that the prediction is confirmed, and thus identify a robust pattern: however, it may be a pattern that eludes us. In such a case we speak of an "empirical" or "phenomenological" theory, using fancy words to mean basically that we see what is going on but do not yet understand it. There are many such empirical theories that connect together facts encountered in everyday life. (Gell-Mann p.93)
That may sound a bit pessimistic, but Gell-Mann gives us an out in terms of always considering the concept of entropy and applying the second law of thermodynamics (the disorder in an isolated system will tend to increase over time until it reaches an equilibrium). Many of the patterns, such as the graph in Figure 1(F), have their roots in disordered systems. Entropy essentially quantifies the amount of disorder, and that becomes our "escape hatch" in how to simplify our understanding.
In fact, however, a system of very many parts is always described in terms of only some of its variables, and any order in those comparatively few variables tends to get dispersed, as time goes on, into other variables where it is no longer counted as order. That is the real significance of the second law of thermodynamics. (Gell-Mann p.226)
One area where I have recently applied this formulation has to do with the distribution of human travel times. Brockmann et al reported in Nature a few years ago a scalability study that provoked some scratching of heads (one follow-on paper asked the question "Do humans walk like monkeys?"). The data seemed very authentic, as at least one other group could reproduce and better it, even though they could not explain the mechanism. The general idea, which I have further described here, amounts to nothing more than tracking individual travel times over a set of distances, and thus deriving statistical distributions of travel time, by either following the cookie trails of paper money transactions (Brockmann) or cell phone calls (Gonzalez). This approach provides a classic example of a "proxy" measurement; we don't measure the actual person with sensors, but we use a very clever approximation to it. Proxies can take quite a beating in other domains, such as historical temperature records, but this set of data seems very solid. You will see this in a moment.


Figure 2: Human travel connectivity patterns, from Brockmann, et al.

Note that this figure resembles the completely disordered directed graph shown in Figure 1(F). This gives us some hope that we can actually derive a simple description of the phenomenon of travel times. We have the data, thus we can hypothetically explain the behavior. As the data has only become available recently, likely no one had thought of applying the simplicity-out-of-complexity principles that Gell-Mann describes.

So how to do the reduction5 to first principles? Gell-Mann brings up the concept of entropy as ignorance. We actually don't know (or remain ignorant of) the spread or dispersion of velocities or waiting times of individual human travel trajectories, so we do the best we can.  We initially use the hint of representing the aggregated travel times -- the macro states -- as coarse-grained histories, or mathematically in terms of probabilities.
Now suppose the system is not in a definite macrostate, but occupies various macrostates with various probabilities. The entropy of the macrostates is then averaged over them according to their probabilities. In addition, the entropy includes a further contribution from the number of bits of information it would take to fix the macrostate. Thus the entropy can be regarded as the average ignorance of the microstate within a macrostate plus the ignorance of the macrostate itself. (Gell-Mann p.220)

In the way Gell-Mann stated it, I interpret this to mean that we can apply the Maximum Entropy Principle to probability distributions. In the simplest case, if we only know the average velocity and don't know the variance, we can assume a damped exponential probability density function (PDF). Since the velocities in such a function follow a pattern of many slow velocities and progressively fewer fast velocities, with the mean invariant, the unit-normalized distribution of transit probabilities for a fixed distance looks like the figure to the right (see link for derivation). To me it actually looks very simple, although people virtually never look at exponentials this way, as it violates their intuition. What may catch your eye in particular is how slowly the curve reaches the asymptote of 1 (which indicates power-law behavior). If normal statistics acted on the velocities, the curve would look much more like a "step" function, as most of the transits would complete at around the mean, instead of getting spread out in the entropic sense.
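To spell out the reasoning (my paraphrase of the linked derivation, not a quote from Gell-Mann): if the mean velocity $\bar{v}$ is the only thing we know, the Maximum Entropy Principle selects the damped exponential

$$ p(v) = \frac{1}{\bar{v}} \, e^{-v/\bar{v}} $$

and the probability that a transit over a fixed distance $d$ has completed by time $t$ is simply the probability that the velocity exceeded $d/t$:

$$ P(t \mid d) = \int_{d/t}^{\infty} \frac{1}{\bar{v}} \, e^{-v/\bar{v}} \, dv = e^{-d/(\bar{v} t)} $$

For large $t$ this behaves like $1 - d/(\bar{v}t)$, which is the slow power-law crawl toward the asymptote of 1 mentioned above, rather than the abrupt step that normal statistics would produce.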

Further, since the underlying exponentials describe specific classes of travel, such as walking, biking, driving, and flying, each with its own mean, the smearing of these probabilities leads to a characteristic single-parameter function that fits the data as precisely as one could desire. The double averaging of the microstate plus the macrostate effectively leads to a very simple scale-free law, as shown by the blue and green maximum entropy lines I added in Figure 3.

Figure 3: Dispersion of mobility for human travel. The green line indicates agreement with a truncated Maximum Entropy estimate; the blue dots indicate no truncation.
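For readers who prefer simulation to algebra, here is a small Monte Carlo sketch of the smearing idea. The travel classes and their mean speeds are invented for illustration, not fitted to the Brockmann or Gonzalez data:

```python
import random

# Monte Carlo sketch of maximum-entropy "smearing" over travel classes.
# Each class (walk, bike, drive, fly) has exponentially dispersed speeds
# around its own mean; mixing them produces the fat-tailed, scale-free
# behavior described in the text. Class means are invented for illustration.

random.seed(1)
mean_speeds = {"walk": 5.0, "bike": 15.0, "drive": 80.0, "fly": 700.0}  # km/day
distance = 300.0   # fixed trip length, km
n = 100_000

times = []
for _ in range(n):
    cls = random.choice(list(mean_speeds))          # pick a travel class
    v = random.expovariate(1.0 / mean_speeds[cls])  # MaxEnt (exponential) speed
    times.append(distance / v)

# Fraction of trips complete by time t: note the slow, power-law-like
# crawl toward 1 instead of a sharp step at the mean.
for t in (1, 3, 10, 30, 100, 300, 1000):
    done = sum(1 for x in times if x <= t) / len(times)
    print(f"t = {t:5d} days: fraction complete = {done:.3f}")
```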

I present the complete derivation here and the verification here. If you decide to read in more depth, keep in mind that it really boils down to a single-parameter fit -- and this over a good 5 orders of magnitude in one dimension and 3 orders in the other dimension.  Consider this agreement in the face of someone trying to falsify the model; they would  essentially have to disprove entropy of dispersed velocities.
It has often been emphasized, particularly by the philosopher Karl Popper, that the essential feature of science is that its theories are falsifiable. They make predictions, and further observations can verify those predictions. When a theory is contradicted by observations that have been repeated until they are worthy of acceptance, that theory must be considered wrong. The possibility of failure of an idea is always present, lending an air of suspense to all scientific activity. (Gell-Mann p.78)
Further, this leads to a scale-free power law that looks exactly like the Zipf-Mandelbrot law that Gell-Mann documents, which also describes ecological diversity (the relative abundance distribution) and the distribution of population sizes of cities, from which I found Gell-Mann in the first place.
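For reference, the Zipf-Mandelbrot law in its standard rank-frequency form (generic textbook notation, not Gell-Mann's) says the frequency of the item of rank $k$ falls off as

$$ f(k) \propto \frac{1}{(k + q)^{s}} $$

with $q$ and $s$ fitted constants; plain Zipf's law is the special case $q = 0$, $s = 1$.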

Since we invoke the name of Mandelbrot, we need to state that the observation of fractal self-similarity on different scales applies here. Yet Gell-Mann states:
Zipf's law remains essentially unexplained, and the same is true of many other power laws. Benoit Mandelbrot, who has made really important contributions to the study of such laws (especially their connection to fractals), admits quite frankly that early in his career he was successful in part because he placed more emphasis on finding and describing the power laws than on trying to explain them (In his book The Fractal Geometry of Nature he refers to his "bent for stressing consequences over causes.").  (Gell-Mann p.97)
Gell-Mann of course made this statement before Gabaix came up with his own proof for city size, and obviously before I presented the variant for human travel (not that he would have read my blog or this blog in any case). 

Setting aside the fact that it hasn't gone through a rigorous scientific validation, why does this formulation seem to work so well at such a concise level? Gell-Mann provides an interesting sketch showing how order/disorder relates to effective complexity; see Figure 4 below. At the left end of the spectrum, where minimum disorder exists, it takes very little effort to describe the system. As in Figure 1(A), "no dots connected" describes that system. In contrast, at the right end of the spectrum, where we have maximum disorder, we can also describe the system very simply -- as in Figure 1(F), "all dots connected". The problem child sits in the middle of the spectrum, where enough disorder exists that the system becomes difficult to describe, and thus we can't solve the general problem easily.


Figure 4: Gell-Mann's complexity estimator. "the effective complexity of the observed system (can have) more to do with the particular observer's shortcomings than with the properties of the system observed." (Gell-Mann p.56)

So in the case of human transport, we have a simple grid where all points get connected (we can't control where cell phones go) and we have maximum entropy in travel velocities and waiting times. The result becomes a simple explanation of the empirical Zipf-Mandelbrot Law [wiki]. The implication of all this is that, through the use of cheap oil for powering our vehicles, we as humans have dispersed almost completely over the allowable range of velocities. It doesn't matter that one car is of a particular brand or that an airliner is prop or jet; the spread in velocities under maximized entropy is all that matters. Acting as independent entities, we have essentially reached an equilibrium where the ensemble behavior of human transport obeys the second law of thermodynamics concerning entropy.
Entropy is a useful concept only when a coarse graining is applied to nature, so that certain kinds of information about the closed system are regarded as important and the rest of the information is treated as unimportant and ignored. (Gell-Mann p.371)
Consider one implication of the model. As the integral of the distance-traveled curve in Figure 3 relates via a proxy to the total distance traveled by people, the only direction that the curve can go in an oil-starved country is to shift to the left. Proportionally more people moving slowly means that proportionally fewer will move quickly -- easy to state but not necessarily easy to intuit. That is just the way entropy works.


Figure 5: Assuming that human travel statistics follow the maximum entropy velocity dispersion model, a reduction in total travel will likely result in a shift as shown by the dotted blue curve.

But that does not end the story. Recall that Gell-Mann says all these simple systems have huge amounts of connectivity. Since one disordered system can look like another, as committed Odysseans we can make many analogies to other related systems. He refers to this process as "peeling the onion". Figuratively, as one peels the onion, another layer reveals itself that looks much like the one above it. I took the dispersive travel velocities way down to the core of the onion in a study I did recently on anomalous transport in semiconductors.
Often in physics, experimental observations are termed "anomalous" before they are understood.
-- Richard Zallen, "The physics of amorphous solids", Wiley-VCH, 1998
If you can stomach some serious solid-state physics, take a peek at the results -- it's not as though you will see the face of Jesus, but the anomalous behavior does not seem so anomalous anymore. As Gell-Mann states, these simple ideas connect all the way through the onion to the core.

Scaling

The big sweet Vidalia onion that I want to peel is oil depletion. All the other models I work out indirectly support the main premise and thesis. They range from the microscopic scale (semiconductor transport) to the human scale (travel times) and now to the geologic scale. I assert that in the Popper sense of falsifiability, one must disprove all the other related works to disprove the main one, which amounts to a scientific form of circumstantial evidence, not quite implying certainty but substantiating much of the thought process. It also becomes a nerve-wracking prospect; if one of the models fails, the entire artifice can collapse like a house of cards. Thus the "air of suspense to all scientific activity" that Gell-Mann refers to.

So consider rate dispersion in the context of oil discovery. Recall that velocities of humans become dispersed in the maximum entropy sense. Well, the same holds for prospecting for oil. I suggest that like human travel, all discovery rates have maximum dispersion subject to an average current-day-technology rate.

A real eye-opener for me occurred when I encountered Gell-Mann's description of depth of complexity. I consider this a rather simple idea because I had used it in the past, actually right here on TOD (see the post Finding Needles in a Haystack, where I called it "depth of confidence"). It again deals with the simplicity/complexity duality, but more directly in terms of elapsed time. Gell-Mann explains the depth of complexity by invoking the "monkeys typing at a typewriter" analogy. If we set a goal for the monkeys to type out the complete works of Shakespeare, one can predict that due solely to probability arguments they would eventually finish their task. It would look something like the following figure, with the depth D representing a crude measure of generating the complete string of letters that comprises the text.


Figure 6: Gell-Mann's Depth (D) is the cumulative Probability (P) that one can gain a certain level of information within a certain Time (T).

No pun intended: Gell-Mann coincidentally refers to D as a "crude complexity" measure; I use the same conceptual approach to arrive at the model of dispersive discovery of crude oil. The connection invokes (1) the dispersion of prospecting rates (varying speeds of the monkeys typing at their typewriters) and (2) a varying set of sub-volumes (the different sizes of Shakespeare's works). Again confirming the essential simplicity/complexity duality, the connectivity we see lies more in the essential simplicity of describing the disorder than in anything else.

The final connection (3) involves the concept of increasing the average rate of speed of the typewriting monkeys over a long period of time. We can give the monkeys faster tools without changing the relative dispersion in their collective variability6. If this increase turns out to be an exponential acceleration in typing rates (see Figure 10), the shape of the Depth curve naturally changes. This idea leads to the dispersive discovery sigmoid shape, as our increasing prospecting skill analogizes to a speedier group of typewriting monkeys. See the figure to the right (click image to see larger version) for a Monte Carlo simulation of the monkeys at work [link].
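A compact version of such a simulation might look like the following (a sketch under my own invented parameters, not the exact code behind the linked figure). Each monkey draws a maximum-entropy dispersed search rate and a randomly sized sub-volume, and the mean rate accelerates exponentially over time:

```python
import math
import random

# Monte Carlo sketch of the typing-monkey analogy for dispersive discovery.
# Rates and sub-volumes are exponentially (maximum entropy) dispersed, and
# the mean rate accelerates exponentially over time (better tools).
# All parameters are invented for illustration.

random.seed(7)
n = 20_000
growth = 0.08       # exponential acceleration of the mean rate, per year
mean_rate = 0.001   # initial mean search rate (volume units per year)
mean_volume = 1.0   # mean size of each searcher's sub-volume

rates = [random.expovariate(1.0 / mean_rate) for _ in range(n)]
volumes = [random.expovariate(1.0 / mean_volume) for _ in range(n)]

for year in range(0, 121, 10):
    # Cumulative effort per unit rate: integral of e^(growth*t) up to 'year'
    effort = (math.exp(growth * year) - 1.0) / growth
    found = sum(1 for r, v in zip(rates, volumes) if r * effort >= v)
    print(f"year {year:3d}: fraction of total volume searched = {found/n:.3f}")
```

The S-shape emerges with no Verhulst-style feedback anywhere in the code; the sigmoid is purely a consequence of dispersed rates meeting dispersed volumes under accelerating effort.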

It doesn't matter that one oil reservoir has a particular geology that somehow deflects the overall trajectory, as we would have to account for if we took a complete bottom-up accounting approach. I know this may disturb many of the geologists and petroleum engineers who hold to the conventional wisdom about such pragmatic concerns, but that essentially describes how a thinker such as Gell-Mann would work the problem. The crude complexity suggests that we turn technology into a coarse-grained "fuzzy" measurement and accelerate it to see how oil depletion plays out. So if you always thought that the oil industry essentially flailed away like so many monkeys at typewriters, you would approximate reality more closely than if you believed that they followed some predetermined Verhulst-generated story-line. So this model embraces the complexity inherent in the bottom-up approach, while ignoring the finer details and dismissing out of hand the notion that determinism plays a role in describing the shape.

Luis de Sousa gives a short explanation here of how the deterministic Verhulst equation leads to the Logistic, and it remains the conventional heuristic wisdom that one will find on Wikipedia concerning the Hubbert Peak Oil curve. However, Verhulst-generated determinism does not make sense in a world of disorder and fat-tail statistics, as only stochastic measures can explain the spread in discovery rates. This becomes the mathematical equivalent of "not seeing the forest for the trees". Pragmatically, the details of the geology do not matter, just as the details of the car or bicycle or aircraft you travel in do not matter for modeling Figure 3.
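For reference, the Verhulst rate law and its logistic solution (standard textbook form) are

$$ \frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right) \quad\Longrightarrow\quad P(t) = \frac{K}{1 + \frac{K - P_0}{P_0}\, e^{-rt}} $$

Note that every symbol here is deterministic; there is no probability anywhere in the formulation, which is exactly the complaint being made above.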

This approach encapsulates the gist of Gell-Mann's insights on gaining knowledge from complex phenomena. His main idea is the astounding observation that complexity can lead to simplicity. I am starting to venture onto some thin, abstract ice here, but the following figure represents where I think some of the models reside on the complexity mountain.


Figure 7: Abstract representation of our understanding of resource depletion.

Notice that I place the "Limits to Growth" System Dynamics model right in the middle of the meatiest complexity region. That model has perhaps too many variables and so will mine the swamps of complexity without adding much insight (or, in more jaded terms, any insight that you happen to require). Many people assume that the Verhulst equation, used to model predator-prey relationships and the naive Hubbert formula of oil depletion, is complex since it describes a non-linear relation. However, the Verhulst equation actually proves too simple, as it includes no disorder and doesn't really explain anything but a non-linear control law. The only reason it looks like it works is that the truly simple model has a fortuitous equivalence to the simplified-complex model7, which exists as the dispersive discovery model on the right-hand side of the spectrum. On the other hand, consider that the export land model (ELM) remains simple yet starts to include real complexity, approaching the bottom-up models that many oil depletion analysts typically use.

Further to the left, I suggest that the naive heuristics such as BAU and dead reckoning don't fit on this chart. They assume an ordered continuance of the current state, yet one can't argue heuristics in the scientific sense, as they have no formal theory to back them up8. The complementary effect at the far right suggests enough disorder that we can't even predict what may happen, the so-called Black Swan theory proposed by Taleb.

On the bulk of the right side, we have all the dispersive models that I have run up the flag-pole for evaluation. These all basically peel the onion, and follow Gell-Mann's suggestion that all reductive fundamental behaviors will show similarities at a coarse-graining level. This includes one variation that I refer to as the dispersive aggregation model for reservoir sizing. This has some practicality for estimating URR, and it comes with its own linearization technique along the same lines as Hubbert Linearization (HL). You may ask: if this is purely an entropic system, why would reservoirs become massive?
Sometimes people who for some dogmatic reason reject biological evolution try to argue that the emergence of more and more complex forms of life somehow violates the second law of thermodynamics. Of course it does not, any more than the emergence of more complex structures on a galactic scale. Self-organization can always produce local order.  (Gell-Mann p.372)
Gell-Mann used the example of earthquakes and the relative scarcity of very large earthquakes to demonstrate how phenomena can appear to "self-organize". Laherrere has used a parabolic fractal law, a pure heuristic, to model the sizing of reservoirs (and earthquakes), whereas I use the simple dispersive model shown below.


Figure 8: Dispersed velocities suggest a model of aggregation, much like Gabaix suggests for the aggregation of cities: very few large reservoirs and many small ones, just as in the distribution of cities.
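To see how entropic dispersion can generate a few-large/many-small size distribution, here is a generic sketch (a textbook-style mechanism offered as an illustration, not the exact dispersive aggregation formulation): entities that grow at a steady rate for an exponentially dispersed length of time end up with power-law distributed sizes.

```python
import math
import random

# Generic sketch: steady exponential growth sustained for an exponentially
# dispersed (maximum entropy) length of time yields a Pareto power-law tail:
# very few large reservoirs, many small ones. Illustration only; parameters
# are invented, and this is not the exact dispersive aggregation model.

random.seed(3)
r = 0.5      # growth rate
lam = 1.0    # dispersion parameter of the growth durations
n = 200_000

sizes = [math.exp(r * random.expovariate(lam)) for _ in range(n)]

# Theory: P(size > s) = s^(-lam/r), a Pareto tail with exponent 2 here.
for s in (1, 2, 5, 10, 20, 50):
    frac = sum(1 for x in sizes if x > s) / n
    print(f"P(size > {s:2d}) = {frac:.4f}   (theory: {s ** (-lam / r):.4f})")
```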

These dispersive forms all fit together as tight as a drum. That essentially explains why I think we can use simple models to explain complex systems. I admit that I have tried to take this to some rather unconventional analogies, yet it seems to still work. I keep track of these models at http://mobjectivist.blogspot.com


Figure 9: Popcorn popping kinetics follows the same dispersive dynamics [link]

Discussion

I found many other insights in Gell-Mann's book that expand the theme of this post and so seem worthwhile to point out. I wrote this post with the intention of referencing Gell-Mann heavily, because many TOD comments in the past have criticized me for not incorporating a popular-science angle into the discussion. I consider Gell-Mann close to Carl Sagan in this regard (w/o the "billions" of course). I essentially used the book as an interactive guide, trying to follow his ideas by comparing them to models that I had worked on.
Evidently, the main function of the book is to stimulate thought and discussion.
Running through the entire text is the idea of the interplay between the fundamental laws of nature and the operation of chance. (Gell-Mann p.367)
The role of chance, and therefore probabilities, seems to rule above all else. Not surprising from a quantum mechanic.

Gell-Mann has quite a few opinions on the state of multi-disciplinary research, with interesting insight in regards to different fields of study. He treats the problems seriously as he believes certain disciplines have an aversion to accommodating new types of knowledge. And these concerns don't sit in a vacuum, as he spends the last part of the book discussing sustainability and ways to integrate knowledge to solve problems such as resource depletion.
The Informational Transition

Coping on local, national, and transnational levels with environmental and demographic issues, social and economic problems, and questions of international security as well as the strong interactions among all of them, requires a transition in knowledge and understanding and in the dissemination of that knowledge and understanding. We can call it the informational transition. Here natural science, technology, behavioral science, and professions such as law, medicine, teaching, and diplomacy must all contribute, as, of course, must business and government as well. Only if there is a higher degree of comprehension, among ordinary people as well as elite groups, of the complex issues facing humanity is there any hope of achieving sustainable quality.

It is not sufficient for that knowledge and understanding to be specialized. Of course, specialization is necessary today. But so is the integration of specialized understanding to make a coherent whole, as we discussed earlier. It is essential, therefore, that society assign a higher value than heretofore to integrative studies, necessarily crude, that try to encompass at once all the important features of a comprehensive situation, along with their interactions, by a kind of rough modeling or simulation. Some early examples of such attempts to take a crude look at the whole have been discredited, partly because the results were released too soon and because too much was made of them. That should not deter people from trying again, but with appropriately modest claims for what will necessarily be very tentative and approximate results.

An additional defect of those early studies, such as Limits to Growth, the first report to the Club of Rome, was that many of the critical assumptions and quantities that determined the outcome were not varied parametrically in such a way that a reader could see the consequences of altered assumptions and altered numbers. Nowadays, with the ready availability of powerful computers, the consequences of varying parameters can be much more easily explored. (Gell-Mann p. 362)
Gell-Mann singles out geology, archaeology, cultural anthropology, most parts of biology, and many of the softer sciences for criticism, not necessarily because the disciplines lack potential, but because they suffer from a massive sunk-cost resistance to accepting new ideas. He gives the example of distinguished members of the geology faculty of Caltech "contemptuously rejecting the idea of continental drift" for many years into the 1960's (Gell-Mann p. 285). This extends beyond academics, as I recently came across some serious arguments about whether geologists actually understand the theory behind geostatistics and the use of a technique called "kriging" to estimate mineral deposits from bore-hole sampling (just reporting the facts). And then Gell-Mann relates this story on practical modeling within the oil industry:
Peter Schwartz, in his book "The Art of the Long View", relates how the planning team of the Royal Dutch Shell Corporation concluded some years ago that the price of oil would soon decline sharply and recommended that the company act accordingly. The directors were skeptical, and some of them said they were unimpressed with the assumptions made by the planners. Schwartz says that the analysis was then presented in the form of a game and that the directors were handed the controls, so to speak, allowing them to alter, within reason, inputs they thought were misguided. According to his account, the main result kept coming out the same, whereupon the directors gave in and started planning for an era of lower oil prices. Some participants have different recollections of what happened at Royal Dutch Shell, but in any case the story beautifully illustrates the importance of transparency in the construction of models. As models incorporate more and more features of the real world and become correspondingly more complex, the task of making them transparent, of exhibiting the assumptions and showing how they might be varied, becomes at once more challenging and more critical. (Gell-Mann p. 285)

Trying to understand why some people tend toward a very conservative attitude, Gell-Mann has an interesting take on the word "theory" and the fact that theorists in many of these fields get treated with little respect.

"Merely Theoretical" -- Many people seem to have trouble with the idea of theory because they have trouble with the word itself, which is commonly used in two quite distinct ways. On the one hand, it can mean a coherent system of rules and principles, a more or less verified or established explanation accounting for know facts or phenomena. On the other hand, it can refer to speculation, a guess or conjecture, or an untested hypothesis, idea or opinion. Here the word is used with the first set of meanings, but many people think of the second when they hear "theory" or "theoretics".  (Gell-Mann p.90)

Unfortunately, I do think that this meme marginalizing peak oil "theory" will gain momentum over time, particularly in terms of whether peak oil theory has any real formality behind it; certainly no one in academic geology besides Hubbert9 has really addressed the topic. Gell-Mann suggests that many disciplines simply believe they don't need theorists. TOD commenter SamuM provided some well-founded principles to consider when mounting a theoretical approach, especially in responding to countervailing theories, i.e. in debunking the debunkers. I am all for continuing this as a series of technical posts.10

In the field of economics, Barry Ritholtz has also recently suggested a more scientific approach in The Hubris of Economists, yet he doesn't think that modeling necessarily works in economics (huh?). He might well consider that economics and finance modeling assumes absolutely no entropic dispersion. Taleb suggests that they should include fat tails. The amount of effort placed in applying normal statistics has proven a colossal failure. We get buried daily in discussions on how best to generate a course-correction within our economy, balanced between a distinct optimism and a bleak pessimism. At least part of the pessimism stems from the fact that we think the economy will forever stay conveniently complex, beyond our reach. I would suggest that simple models may help just as well, and that they allow us to understand when non-cheap heuristics and complex models work against our best interests (i.e. when we have been played).

The "cost of information" addresses the fact that people may not know how to make reasonable free market decisions (for instance about purchases) if they don't have the necessary facts or insights. (Gell-Mann p.325)

Above all, Gell-Mann asks the right questions and provides some advice on how to move forward.

If the curves of population and resource depletion do flatten out, will they do so at levels that permit a reasonable quality of human life, including a measure of freedom, and the persistence of a large amount of biological diversity, or at levels that correspond to a gray world of scarcity, pollution, and regimentation, with plants and animals restricted to a few species that co-exist easily with mankind? (Gell-Mann p.349)

We are all in a situation that resembles driving a fast vehicle at night over unknown terrain that is rough, full of gullies, with precipices not far off. Some kind of headlight, even a feeble and flickering one, may help to avoid some of the worst disasters. (Gell-Mann p.366)
I hope that I have illustrated how I have attempted to separate the simple from the complex. If this has involved too much math, I apologize.

I admit that we still don't understand economics though.

References

  1. Murray Gell-Mann, "The Quark and the Jaguar :  Adventures in the Simple and the Complex", 1995, Macmillan
  2. Calvin Trillin, "Wall Street Smarts", NY Times, October 13, 2009
  3. Nassim Nicholas Taleb, "The Black Swan: The Impact of the Highly Improbable", 2007, Random House
  4. D. Brockmann, L. Hufnagel & T. Geisel, "The scaling laws of human travel", Nature, Vol 439, 26 January 2006.
  5. Marta C. González, César A. Hidalgo & Albert-László Barabási, "Understanding individual human mobility patterns", Nature, Vol 453, 779-782, 5 June 2008.


Figure 10: A damped exponential contains a maximum entropy amount of information, such as the decay of radioactive material. The rising exponential usually occurs due to a degree of feedback reinforcing some effect, such as technology advances.

Notes

1 The TV pundit Chris Matthews regularly asks his guests to "tell me something I don't know". That sounds reasonable enough until you realize that satisfying it would require something beyond mind reading.
2 Power laws are also fat-tail laws, which has importance with respect to Black Swan theory.
3 See this Google video of Gell-Mann in action talking about creative ideas. Watch the questions at the end where he does not suffer fools gladly.
4 Unfortunately the transitive law is a mathematical law which means that we can never escape math.
5 In terms of coarse graining, explaining the higher level in terms of the lower is often called "reduction".
6 In marathon races, the dispersion in finishing times has remained the same fraction even as the winners have gotten faster.
7 See this link with regard to Fermi-Dirac statistics. That also looks similar but comes about through a different mechanism.
8 Excepting perhaps short-term Bayes. Bayesian estimates use prior data to update the current situation. BAU is a very naive Bayes (i.e. no change) whereas dead reckoning is a first order update, the derivative.
9 Who was more of a physicist and did it more out of curiosity than anything else.
10 I also see many TOD comments that show analogies to other phenomena that basically don't hold any water at all. We do need to continue to counter these ideas if they don't go anywhere, as they just add to the noise.

'There is nothing so practical as a good theory.' -- Kurt Lewin, psychologist.

Lewin had a good attitude. He also said that "Research that produces nothing but books will not suffice". He wanted research to be directly put into practice. The practical applications I see are primarily in terms of policy decisions. I admit to the fact that nothing here will lead to some great new invention but we really need to occasionally put stakes in the ground to anchor our understanding.

Think of the whole theory of global warming. Some would argue that AGW theory has no practical benefit, yet in practice it has had profound implications for policy decisions. I would love it if the global warming skeptics, who put hours into fact-checking (see climateaudit.org), would match that effort in critiquing resource depletion.

In other words, as a practical matter, we need the theory to attract the cockroaches.

'Lewin had a good attitude. He also said that "Research that produces nothing but books will not suffice". He wanted research to be directly put into practice.'

This is a very dangerous attitude toward research. Plenty of research, especially in pure mathematics, may have no immediate use but is later, sometimes much later, used by others in further developments.

There is a distinction to be made here, as the field of Applied Mathematics exists to serve a lot of practical concerns. I would agree that Theoretical Mathematics often doesn't hold much immediate value.

The one person from an Applied Math department who has been doing interesting research on oil depletion is B. Michel. He wrote a paper called "Oil Production: A probabilistic model of Hubbert's curve", published in ASPO-5, 2006. I don't see why this would be considered "dangerous"; all he did was present his ideas directly to a practitioners' conference.

While knowledge without action is not going to do anyone much good or much harm, action without knowledge is quite likely to do harm.

As a culture we fetishize action, to the detriment of ourselves and our surroundings.

Connecting the Dots

WHT,

I must confess that at the moment,
I can empathize with how my dog feels
after I have lectured it on some basic fundamental of quantum physics.

My dog would wag its tail,
kind of say that all sounded very impressive although I have no idea what any of it meant;
and then again, where's the beef?

Because without getting some treat at the end of all that listening, what's the point?

Now given all that, and also given your explanation about showing a bunch of random dots on a planar sheet of paper and then drawing straight lines through some or all of them so that we can allegedly divine a grander understanding from the exercise, where do all the lines (or hyperbolic curves) of your discussion point to?

What path of discussion does it lead us towards?
It all kind of feels like waves of confusion washing over oneself while sun bathing on a beach located at one of the entropic corners of the world.

This might be an appropriate time to discuss Nicholas Georgescu-Roegen and his attempted formulation of a Fourth Law of Thermodynamics, applying the Entropy Law to materials. He garnered some publicity during the 70's with books and articles such as The Entropy Law and the Economic Process. One could argue that the geologic deposition of minerals represented a gift of low entropy. Materials were then mined and dispersed. Recycling slowed the entropic dispersion but required energy and other materials. I sometimes found it simpler to minimize complexity and consider the entropic dispersion of a single metal such as copper or silver.

http://homepage.newschool.edu/het//profiles/georgescu.htm

http://dieoff.org/page148.htm

http://college.holycross.edu/eej/Volume12/V12N1P3_25.pdf

I am not too familiar with his work but it looks like he was trying to bridge economic theory with entropy considerations.

I like how you say that the geologic deposition of minerals is a gift of low entropy. It's kind of non-intuitive, but these regions of order can exist within the greater disorder, and we really do take advantage of the high-grade mineral deposits.

It sounds like his 4th law of thermodynamics says that perpetual motion is impossible, not because energy will cease to be available, but because the matter required to generate energy will not forever be available. It seems a very pragmatic type of law that makes a lot of sense but would have a hard time getting accepted because it relies on practical considerations.

One place it leads us to is to being able to defend ourselves and our point of view in the future.

This may seem a bit presumptuous (as if our opinion matters at all), but if you look at the very recent brouhaha over the global warming e-mails, I contend that it is only a matter of time before those same skeptics/deniers will come after the energy depletion folks.

I have been following the skeptics on that side quite closely, and they do have masses and masses of very qualified statisticians who have been obsessively poring over the data for the last several years. Face it, in comparison we have little to show on the depletion side of the fence. The armies of laymen that sit on the AGW-skeptic side of the fence will eventually come after the peak oil crowd.

Look at what the WSJ says in their opinion piece "Revenge of the Climate Laymen"

Mr. McIntyre offers what many in the field do not: rigor.

http://online.wsj.com/article/SB1000142405274870433590457449685093984671...

Look at that statement, they are essentially turning logic upside down. The skeptical side of the AGW fence is now promoting rigor? They will basically stop short of nothing in preventing any change in BAU. If we have nothing but heuristics, good luck. That is my opinion, and look at how it is playing out.

The "climate laymen" in the WSJ case is Steve McIntyre from climateaudit.org and all his followers. Again, what is amazing is that the opinion pieces are now pushing "rigor"? And that all this rigor that McIntyre offers is just fact-checking of bits of data that don't seem to fit (a bit like cherry picking in reverse). The opinion piece even points out that McIntyre has no climate model of his own.

I have been saying this all along. The rigor from the oil depletion side is all from us, the amateur sleuths on TOD and elsewhere. There is no "rigor" at all from places like the IEA or EIA. They have no fact-checking and no openness. We have interpretations of the data and some rather deep and fundamental oil depletion analyses. I suggest that it is only a matter of time before the data from the IEA and EIA becomes completely marginalized and some other collective gives a more rigorous appraisal of what is going on.

Heck, if the media can claim that the research of real credentialed climate scientists lacks rigor, that leaves us an opening we can drive a Mack truck through. There are no geologists or petroleum engineering researchers or economists (save for Rembrandt or Nate or others on TOD) looking at peak oil. I bet that the whole discipline of "petroleum engineering" will be the first ever dropped from a mainstream engineering curriculum. As Black Swan author Taleb says, the economics profession is so insular a field that I conclude they would never even acknowledge something as pedantic as oil depletion. There is a real knowledge gap here that will get filled one way or another. I just hope it gets filled by the people on the progressive side of the equation, and that we don't let the WSJ corporatists control the framing.

I contend that it is only a matter of time before those same skeptics/deniers will come after the energy depletion folks.

And well they should. Energy does not deplete. Conventional oil depletes, and in such a manner that makes it possible to draw Hubbert's curve and apply it to the world. When abstract concepts such as energy are used interchangeably with oil, it drives me up the wall.

Clear thinking requires clear meanings for words. The mind cannot handle mixing the concrete and the abstract.

The abstract is not subject to scientific investigation. It is a construct of convenience for the human mind and does not exist in the real world.

We can analize corn, soybeans or rice and reach some conclusions about each. But analyzing grain leads us nowhere because of the differences. Likewise we can scientifically analize iron, gold or uranium. But analyzing metal is a dead end.

And we can analize oil, coal or ethanol and reach some conclusions. But analyzing energy is also a dead end. The differences in the forms are as real as the differences between corn and soybeans or iron and gold.

Those who "come after" energy depletion folks are well justified.

I agree with your concern regarding the meaning of words, especially in regard to fundamentals such as the laws of thermodynamics.

However, I'm concerned that you're pushing this analizing thing too far. Are you talking about sheep, or other defenceless farm animals?

Listen, I don't have a problem with this kind of activity among consenting adults, but I think you should make it clear that you're not pushing an alternative lifestyle on minors or even the barnyard.

Yes indeed, I made a mistake in referring to it as "energy depletion" instead of "oil/natural gas/uranium/coal/whale oil/etc depletion" or "non-renewable energy depletion". I used a shorthand where I likely shouldn't have.

Not to over-analize :) too much, but so much for my goal of being completely anally retentive.

Actually, even "oil" and "coal" are not energy, but rather only half of the combustion equation:

CnHm + (n + m/4) O2 ---> n CO2 + (m/2) H2O + concentrated heat which entropy will disperse
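For a concrete instance, take methane (n = 1, m = 4; my worked example, not from the original comment):

CH4 + 2 O2 ---> CO2 + 2 H2O + heat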

The oxygen part of the "energy" equation comes from:

Sunlight + photosynthesizer + CO2 + H2O ---> CxHyOz + O2

By destroying all those "free" photosynthesizers out there, we are depleting the energy supply line.

I think the best way to look at it is that we can use exploitable potential energy, whether mechanical, chemical, atomic, etc.

Unlike methodology debates, semantics discussions had better stay on the safe side; otherwise they risk becoming dispersive.

> Renewables EROI equals lightspeed squared divided by price.

One of my grumblings from contribution 5965 that I had kept private until now, having considered it too harsh to post there.

They will basically stop at nothing to prevent any change in BAU.

Then at some point they will become completely irrelevant, because BAU will change regardless.
They can no more stop BAU from changing than they can stop gravity from acting on an apple after it separates from its stem up in a tree.

Though I think your point is more along the lines that TPTB will marshal all their forces to keep up the illusion that BAU can continue in its present form, and to make sure that society at large continues to believe that it can and that it will. No surprise there; their "salaries" depend on it.

So how do you change paradigms? Thomas Kuhn, who wrote the seminal book about the great paradigm shifts of science, has a lot to say about that. In a nutshell, you keep pointing at the anomalies and failures in the old paradigm, you come yourself, loudly, with assurance, from the new one, you insert people with the new paradigm in places of public visibility and power. You don't waste time with reactionaries; rather you work with active change agents and with the vast middle ground of people who are open-minded.

Donella Meadows: Leverage Points - Places To Intervene In A System

Of course, being able to insert people with the new paradigm into places of public visibility and power does not mean that it will be by any means an easy or painless power struggle; quite frankly, I expect it to become rather ugly in the not too distant future.

Though I do have a smidgen of hope that some of these active change agents are already beginning to have some effect. AlanfromBigEasy comes to mind as an example of this.

Alan definitely has been doing good work in the transport realm.

Since many seem not willing to read the whole thing or find it very complicated, ... I'll try to summarize what I understood, having read both Gell-Mann's book (and having met him =) ... irrelevant but cool), and WHT's full post.

The main question behind the article is: Should we model oil depletion/discovery and if so, how?
The motivation is: We need good rigorous tools (mathematical ones) to model future behavior and be able to have analytical discussions about it.

Gell-Mann is relevant because he says: Sometimes you don't need very complex models even if the thing you are studying is incredibly complicated. That's precisely the situation WHT faces.
People say: What can your models predict, for goodness' sake? Look at economic factors, enhanced oil recovery, technology, field distribution, unknown and unexplored areas of the world ... how could a simple probability distribution grasp all that stuff?

Gell-Mann says: sometimes a lot of unknowns at different levels create a coarse, simple picture. For example ... how many people travel at 6 miles/hour in the US? Nobody knows ... yet a rough fit matches the data (this fit has interesting assumptions inside: that there is a lot of uncertainty, that there is some efficiency in the allocation of possible speeds, etc. ... in a way related to the entropy of the system).

The conclusion is: Some smart cookies (Gell-Mann) find that some simple models account well for complex phenomena. Some of WHT's methods apply to other complex objects (travel, semiconductors, etc). This method might well apply to oil discovery/consumption until proven otherwise.

Now, WHT, feel free to grill my summary! :)

I think you covered it well, thanks.

I would agree, but add that sometimes remarkably simple questions about what else might be going on can also fill out the rough picture a lot. For exploration questions with limits, you might ask what the explorers will run into that will alter their motives, and what evidence would signal the approach of that threshold and the coming change of strategy before it arrives. Predicting reversals in the direction of progressions, and watching for how the systems approach them, can be very useful too.

Thanks for this excellent analysis.

I think this new book may be helpful for tackling our communication issues:

The Psychology of Climate Change Communication
http://cred.columbia.edu/guide/

It is nominally about the communication of climate change, but since peak oil shares the same challenge of communicating a complex and disputed topic, I think it is very helpful for communicating the limits of supply growth.

The book can be purchased or downloaded here:
http://cred.columbia.edu/guide/pdfs/CREDguide_full-res.pdf

That is a very nice guide and you are correct in that the info in there is very general and can be used to communicate in other areas besides climate change. It has all the rhetorical and psychological arguments such as framing and confirmation bias clearly explained. I think the authors have read Taleb's Black Swan and they also know the political playbook. They have a big section just dealing with uncertainties, which I think is very valuable.

Whether one believes in climate change or not, a lot of ideas can be shared. It seems as if TOD is somewhat divided on the subject of climate change science, but remember that the other side (the BAU folks) are not divided at all. When, or if, they finish trying to marginalize AGW, they will come after the PO crowd, IMO. We have to be ready with good science, and just standing behind a bunch of heuristics with no discussion of uncertainties and all the other things that Taleb talks about won't necessarily cut it.

As someone with a short attention span I think I'm qualified to assess what the general public will be able to digest. Don't know if your paper mentioned the old rule of thumb that, if you want to reach the average person, write at a 5th grade level. Since that was coined we've invented texting, too, so include lots of pop up Flash animation...

The LATOC front page is the most punchy peak oil presentation around. Matt realizes the public at large knows the answers to oil depletion in advance, or thinks it does, and he's ready to prick those optimistic bubbles as they come: biofuels, electric cars, oil shale, etc. It's riddled with factual inaccuracies of course, but the framework can't be improved upon. When you go to the TOD front page, where is the document explaining why we can't just grow enough ethanol to replace gasoline? "Search the archives" or "Read the primers" you say? OK, we're Doomed.

JD's blog is organized properly as well. People have better things to do than become energy experts and conduct hours of independent research. Gore's visual of the scissor lift ascending to the heavens to get to the top of the hockey stick is how you put asses in seats; take cues from a successful politician.

He was just as successful in becoming a watchword for his opponents to castigate, of course. I don't care if the heat is turned up in the anti-depletion camp, as we will have irrefutable demonstration of the correctness of our position in short order, if the bottom up data is correct. Shortages will be the equivalent of seawater running into the Manhattan financial district, good luck building models countering that trend.

It isn't just the 5th grade level thing. Obama is an intelligent guy, but you wouldn't present something like this to him; there would be an executive summary, possibly followed by this paper.

Perhaps not Obama, but I would present it to the senator from my state, Al Franken. Seriously, Mr. Franken is probably the smartest politician around and eats up policy discussion. He also attracts a lot of smart, young followers who want to make a difference.
What would at least happen is that Franken would send a staffer off to read the paper, and the staffer would write the executive summary that Franken would read. That is the way policy change works.

Weren't there enough comedians in the Democratic Party already?

Predictable that conservationist pops in here to attack my own personal opinions but fails completely to mount a challenge to any of the science I describe. Ultimately, I find these kinds of posts effective in revealing the true character of the conservative mindset. They have little intellectual curiosity in exploring the fundamentals of how nature works. (It helps to read Lies and the Lying Liars to understand why the discussion always reverts to covering the horse-race and not the policy ...)

Sure, and they get most of their material from observing people like you.

To be a good comedian, you need to have smarts.

To be a clown, you merely need to be a conserva...

Well, most people get the picture.

_______________________________
What me exit stage left?

I understand what you are saying and I have heard that argument before. That is part of the reason that I incorporated the writings of Murray Gell-Mann into this article. People always want a writer who brings in some "popular science" aspect to the info; otherwise they deem it doomed to failure. Gell-Mann has the credibility and he is a great read, so I encourage people to pick up the book and see how far it can get them. (Perhaps not a 5th grader, but you never know.)

On the other hand, here is what can go wrong when you have a popular science writer discussing things he knows nothing about. Take the case of best-selling writer Malcolm Gladwell, whose most recent book got a devastating review in the NY Times -- http://www.nytimes.com/2009/11/15/books/review/Pinker-t.html

He ... quotes an expert speaking about an "igon value" (that's eigenvalue, a basic concept in linear algebra). In the spirit of Gladwell, who likes to give portentous names to his aperçus, I will call this the Igon Value Problem: when a writer's education on a topic consists in interviewing an expert, he is apt to offer generalizations that are banal, obtuse or flat wrong.

So we can always go that route and be ridiculed and humiliated for not knowing what we are talking about (one commentary I found put it bluntly: "Gladwell misuses technical terms in a way that suggests he has no idea what he's talking about").

So of course this is a completely debatable argument. I could argue that the reason climate change awareness was even able to grab any kind of foothold, was that the deep climate science research came first, and only later did the marketing campaigns come out. Without the fundamental science you basically have nothing.

So I would only ask that we come up with a set of non-heuristic scientific theories and supporting experimental confirmation of those theories that we should parade around to achieve the kind of impact that an Inconvenient Truth could have.

People have better things to do than become energy experts and conduct hours of independent research.

People also have better things to do than be misled by poor research. I wish it weren't the case, but we must face the fact that all we have is M. King Hubbert, who basically did his work as a hobby, and then the rest of the crew, such as Deffeyes, Laherrere, etc., who basically pound out heuristics.

Yes, this certainly is another masterpiece of inscrutability from WHT. For those whose eyes glazed over after the first paragraph, I can pretty much summarize the entire article in one sentence: "My model is better than theirs, especially those Limits to Growth dorks".

While I'm sure the directive from the TOD editors to "not use math" was well intentioned, it utterly and completely misses the point that one must first learn to use concise, plain English when writing for a general audience.

Cheers,
Jerry

Thanks for the vote of confidence, or a vote for simplicity.

I don't do it in comments a lot because it is hard work, but I try to use as active a voice as possible when I write. If you read my blog at http://mobjectivist.blogspot.com you will really notice this. I started to write in an active voice within a few months of starting the blog several years ago. So you will rarely see constructs built on "is", "was", "being", etc. I decided to do this to make sure that every statement I made had a source or destination; I wanted every idea to have ownership and not introduce ambiguity. The problem with the passive construct is that it leaves ideas hanging, e.g. "there is a theory which ...". (See the E-Prime language for more of the rationale: http://en.wikipedia.org/wiki/E-Prime)

This reliance on active-voice writing results in the occasional problem of sounding pretentious, e.g. "my theory states ...".

So Jerry's summary fits. I would only rephrase his suggestion that "my model is better than theirs". I would never say this; instead I would say that "my model improves on the model of LTG because ..."
Get the hang of it?

I wouldn't call this latest one inscrutable, but I do agree your writing style's a bit hard to parse casually. A collaborator might help - a Jeffery to your Sam. Although "Dr. Foucher", as someone on a financial blog termed him, is a more than serviceable writer; and, truth be told, Jeff tends to repeat himself over and over, saying the same thing. Does he have a hotkey for phrases like "In Sam's best case model"?

The WHT Battle Royal between you and memmel was pretty funny too, seeing as how memmel is the sum of all of your literary shortcomings multiplied a few times, plus I'm at a near-total loss as to what you're getting at in your models. It was like watching two of Godzilla's opponents destroy each other. We need "Dispersive Discovery for Dummies." I do own "Statistics Demystified", which I parse on occasion to try and get past the stage of knowing what lognormal distributions and deviations from the mean are. But I have a lot on my plate, and if I'm going to be waiting in the freezing cold a year from now to cash in my food stamps for a cup of lukewarm beans, why should I devote any time to sussing out statistical arcana?

I do agree that those who can should continue analyzing depletion on a level suitable for peer-reviewed academia; those who can, do. But you'll need Gores and Hansens and Manns and Schmidts and Flannerys and Lynases to convey the message to us hoi polloi. You already have McIntyres aplenty, perhaps including Gore and Hansen - they push the message that we can either dispense with carbon in a decade if we really want to, or alternatively switch to other sources of it than oil. Neither vision jibes with reality if you ask me.

You know that it only takes one person to figure out intent. Then that person repeats it and someone else understands it. This process multiplies. I will never try to write to the level at which no information gets transferred because of some difficulty of interpretation. I assume that someone will eventually understand.

I would likely surmise that 99% of people couldn't understand Gell-Mann's book on general science that I referenced either. And of the 1% who understood that, another 99% couldn't understand his work on fundamental particle (quark) theory. So I made an effort to understand what Gell-Mann is trying to say, just like someone else will make an effort in trying to understand what I am trying to say.

Such is the way of progress.

It is important to remember that people have been through the tech bubble (when somewhat incomprehensible jargon and reasoning were used to promote equities) and the Greenspan/Goldman/Bernanke era (when similar incomprehensible jargon and reasoning were used to promote derivatives, including basically worthless securities masquerading as valuable assets). So many readers simply aren't as open to assuming that incomprehensible paragraphs actually reflect an extreme intelligence that they lack the ability to comprehend (hence the need for an accurate and concise summary). Nowadays, if the summary isn't there, many readers will just assume you are spewing B/S, even if you aren't.

I just noticed that a TOD editor added the abstract that I supplied with my submitted article. So there is your executive summary.

Still, I don't think it will change your mind as to whether it is B/S or not.

I just listened to Ralph Nader discuss his latest book. He said that people love to read about dirt being dished, such as bad car designs and safety problems, but their eyes glaze over when the talk turns to solutions. As his latest book is about solutions, he made the decision to write it as a fictionalized account of a group that came up with some solutions. Talk about having to bend over backwards to gain an audience; he basically wrote a novel!

Well, we do love a good story, could be why ol' Odysseus popped in ?-) Decreasing exponential and damped exponential are one and the same? I ran into nothing but more complexity as I chased around the web trying to ferret out the meaning of the latter about a month ago. I'll probably try to bang at the meat of your post again, but toilforoil cracked me up so hard I hadn't the heart to go back today. As entropy is critical, I'm guessing the boys who have calculated we can at best trace the universe back to Planck time have run their calculations the other way as well. Does it play out kind of like your mountain graphs, in that we can only calculate the time required for maximizing the entropy of our ever-expanding universe to within 1 Planck unit of when that would occur?

Damped and decreasing exponential are the same shorthand in engineering jargon.
The converse would be accelerating or increasing exponential for the curve that blows up over time.
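In symbols (my notation, with y0 the starting value and tau > 0 the time constant):

y(t) = y0 * exp(-t/tau)    (damped / decreasing exponential)
y(t) = y0 * exp(+t/tau)    (accelerating / increasing exponential, which blows up)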

You can use entropy ideas without having to assume equilibrium, so I don't understand the point of the last question.

Even if the outing won't raise my user standing with United Americans, I fell for this review, at the risk of appearing contrarian to the argument WHT is making for whiz-bang peak oil.

Curiously enough, the book implicitly thematises frequency variation patterns akin to those used here for oil discovery and depletion.

Unscientifically yours,
Serge

And one author blogs.

That review says that the authors claim that "bloggers can't save us".

Everyone misses Carl Sagan (and Johnny Carson ... "billions and BILLIONS").

Well, I think I can reply to Nate's idea of why "theory" has such a bad name, before fully reading WHT's approach. I think we frequently overlook one of the major cognitive defects of theory that could be the central reason it turns out to actually be a very unreliable way to represent the physical world.

Theory in the modern tradition is almost entirely devoted to representing the physical world with our invented rules. In particular, that has us representing the developing changes in economies and ecologies and other complex systems as numeric variables connected by formulas... They clearly don't work that way. Economies of all kinds are clearly composed of learning processes of groups and individuals, each individually exploring their environments for something to do.

By and large, theory posits a deterministic world of equations where an actively learning world is continually inventing new ways to respond to things in real time... That's a major problem. I've been fascinated by it for a long time, and set out to solve it many years ago. I think I've made at least a little more progress in doing so than in attracting others to ask the same question, though...

When you say "theory posits a deterministic world of equations ..." that partly addresses the crux of the problem. I don't know if you intended to frame it like that, but a "deterministic world of equations" is definitely different than a "world of deterministic equations".

I think for a certain class of problems we can do the former, because we have a chance of including uncertainty (i.e. stochastic, not deterministic) by setting up the problem correctly; it just becomes harder when we try to do it for socioeconomic kinds of problems. That's partly why I throw up my hands at ever trying to understand economics completely, but we can understand the class of problems that fall under resource depletion and other mechanical or bean-counting types of exercises, because they do follow some relatively simple stochastic principles.

I thought I would toss this demonstration up here as an additional piece. For example, the idea of the human mobility plot that I refer to is exceedingly simple to demonstrate. I have the stochastic equation that gives the probabilities of how far people have moved in a certain time. To simulate that is beyond simple. You simply have to draw from a uniform random distribution for distance (x) and then draw another number for a random time span (t). You then divide the two to arrive at a random velocity, i.e. v=x/t. Nothing is simpler than this formula or formulation. In the original figure, the histogram of the simulated points falls right on the stochastic formula drawn in red.
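A minimal sketch of that simulation in Python (the unit uniform ranges are my illustrative assumptions; for these, the ratio v = x/t has the known density f(v) = 1/2 for v <= 1 and 1/(2v^2) for v > 1):

import numpy as np

# Minimal sketch of the v = x/t simulation described above.
# Unit uniform ranges are illustrative assumptions.
rng = np.random.default_rng(42)
N = 1_000_000

x = rng.uniform(0.0, 1.0, N)   # random distance traveled
t = rng.uniform(0.0, 1.0, N)   # random time span
v = x / t                      # derived random velocity

# Ratio of two unit uniforms: f(v) = 1/2 for v <= 1, 1/(2 v^2) for v > 1
bins = np.logspace(-2, 2, 41)
hist, edges = np.histogram(v, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
analytic = np.where(centers <= 1.0, 0.5, 0.5 / centers**2)
for c, h, a in zip(centers[::8], hist[::8], analytic[::8]):
    print(f"v = {c:8.3f}   simulated {h:8.4f}   analytic {a:8.4f}")

Note the fat 1/v^2 tail that falls out of nothing more than two smeared uniform draws.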

I don't know why people have problems accepting the simplicity of such an interpretation. These are not really invented rules; it is plain pragmatism in trying to understand how the world around us works.

So theorizing about economics and all the black swans and gray swans that encompass that discipline will definitely have to follow an empirical and adaptive approach, but other things can be formulated on a solid footing.

Well, I think the question is what is "a solid footing" for relating to a world full of uncontrolled systems that keep changing the situation you're in. For me it's only "representational theory", the kind that represents itself as reality, that has such a problem with that.

When people are fixated on old theories and apply them to new situations the theories no longer fit, one symptom is a visit from a "black swan". Today we still use the perpetual-growth model in a new situation of vanishing resources, for example. The believers are left to only observe, "oh gosh, I don't know why that didn't work, must have been an act of God...".

So, what is "nonrepresentational theory", then? I guess it means using theory as you would any other tool, as the situation calls for, rather than as a stand-in for the world itself. I use theory by reading the situation at hand, importantly by watching the learning systems to see what they're learning.

For economies, the most general rules seemed to reverse about 50 years ago, as the situation switched from multiplying to diminishing returns. It was noticed in all kinds of ways, of course, but not as a sign we might need to change our theory. Everyone just decided we needed more of it.

So that's also how I'd use Monte Carlo simulations: to raise good questions where they might apply, to help me study what non-statistical developments are taking place.

Being fixated on old theories is what Taleb refers to as the "narrative fallacy". It is hard to disabuse people of their beliefs, partly due to sunk-value costs and partly due to human nature.

So that's also how I'd use Monte Carlo simulations: to raise good questions where they might apply, to help me study what non-statistical developments are taking place.

Umm, I am afraid that you cannot use Monte Carlo in a non-statistical fashion. In that case, it would be a Monte Carlo run of one trial with all parameters known and fixed, which is the definition of deterministic.

Other than that, I would agree that MC is useful to produce some insight and to verify the analytical forms that you have derived. Unless that is exactly what you are trying to say?

I think the old theories are based on an incomplete data set.

Well, it matters a lot whether or not misreading the present as the past costs you 10 years of opportunity for every year of denial. That seems likely to be the true case, with our pulling out all the stops to accelerate our resource use and continue increasing our dependency on unsustainable systems, really.

There is a long discussion in the Gestalt psychology literature on "functional fixity", and lots of reasons for ideas that once made sense to cling on till they make no sense. People seem to want to believe all kinds of things that "just ain't so" whenever they think they can get away with it.

The big one we're dealing with is the idea of perpetual exponential doubling of the economy as our paradigm of stability, and the model on which we run the finances for all our institutions... What happens with physical growth systems is that growth reaches a point of diminishing returns. That our cultural models are not like that seems to mean they are based only on expectations for a fictional world, just projections of theories, and not models of physical systems.

I think if you smooth out the bumps we now see in the oil curves, the first inflection point, the point of diminishing returns for increasing investments, was in the 1950's for finding new reserves, and in the 1970's for total production, right? What we might have looked for was which models the real experience of oil supply was diverging from. I think even in the 1960's we should have seen in the curves what we are just now finding out, that relying on oil for growth is a mistake.

The concern I have with relying on models is that they invariably omit whatever they will run into that invalidates them. If one asks what a model will run into, the surprising thing is that it's often really obvious, but the question is just not being asked. Maybe your statistical models for oil search could be compared with the actual progression, to detect, or even predict, the coming threshold of disinvestment in oil exploration.

There's the point, not far ahead by all indications, where the net energy return on energy investment will be insufficient to sustain an economic system built for cheap energy. Our world policy to improve the efficiency of everything, to sustain growth mostly it seems, is rapidly accelerating the approach of that point, is my firm conclusion and a big worry.

As far as using models to detect trends goes, I will have to admit that I often look at data from what some people refer to as reciprocal space. If you have ever worked with diffraction and signal processing, you will understand. Disorder is easy to detect with diffraction, and a trained eye can pick up dispersive effects in all kinds of data. Essentially, the model exists to aggregate all the randomness in a way that the human brain can't process. So when you talk in these deterministic terms about looking at thresholds, etc., I am in a completely different mindset. Dispersed effects often don't know anything about thresholds. I don't know if that answer pertains to what you are wondering about, just that this behavior has to be considered, and that is what this whole post is about. Taleb warns about the effects of significant randomness for the entirety of The Black Swan book.
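For a toy illustration of what spotting disorder in reciprocal space means, here is a sketch (entirely my own construction; the signal and the jitter level are arbitrary assumptions):

import numpy as np

# Toy "reciprocal space" look: an ordered signal shows a sharp spectral
# peak; accumulating phase disorder broadens it. All values invented.
rng = np.random.default_rng(7)
t = np.linspace(0.0, 100.0, 2048)

ordered = np.sin(2 * np.pi * 0.5 * t)                # single clean frequency
jitter = np.cumsum(rng.normal(0.0, 0.05, t.size))    # accumulating phase disorder
disordered = np.sin(2 * np.pi * 0.5 * t + jitter)

for name, sig in (("ordered", ordered), ("disordered", disordered)):
    spectrum = np.abs(np.fft.rfft(sig))
    print(f"{name:10s} peak/total = {spectrum.max() / spectrum.sum():.3f}")

The ordered case concentrates its spectral weight into a sharp peak; the dispersed case smears it out, which is exactly what the trained eye picks up.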

"Reciprocal space" sounds like a good general name for what I'm referring to, but for me one "reciprocal space" is the physical world of natural processes, and the other is the information space our models are part of that we design to either substitute for, or to help us navigate, the reciprocal space of the physical world. So, the reciprocals are "information space" and "physical space". Does that "compute"?

One of the more reliable things I do is just look at patterns in the physical phenomenon of progressive divergence from the models. In physical space that generally implies the emergence of physical system change. For example, the evidence in the 1950's, compared to the theory of perpetual multiplying reserves, could have signaled the end of systematically growing rates of discovery for new oil reserves.

I then go one step beyond the usual comparison of just the physical system measure that a model is supposed to predict. I also look at things like what the people involved are learning about, whether it's about changing environmental responses or not, and what kind. Because what we're dealing with in growth systems is a complex search process, I study how it's going and what's being found.

That might include corollary indicators, such as whether speculators are jumping in, running up prices, and able to make them stick, as OPEC did in the 70's. That helps me understand the whole system. It takes learning how to treat natural systems both as what we learn from (physical space) and as what we define (information space).

It seems to identify some useful physical space questions that are as answerable as any in information space.

entirely devoted to representing the physical world with our invented rules

PF,

I think you and I are viewing things from similar frames of reference.

The way I like to phrase it is this:

Mother Nature is deaf.
Mother Nature is mute.

She does not listen to the chatterings of us monkeys.
She does not pull out a megaphone to pronounce her "laws".

Yes indeed, it is we monkeys who "invent" our own set of rules (or "false narratives" as Nassim Taleb calls them), and we monkeys who insist that these "laws" were handed down to us at the mountaintop from Mother Nature herself, just like the 10 Commandments were handed over to Moses in the Judeo-Christian Bible.

I did admire what Taleb wrote in The Black Swan. However, I find it endlessly fascinating that he cannot explain some of the most simple power-laws. He essentially states that no one can explain anything like the Zipf law or Pareto law, for reasons he does not care to relate. Instead he thinks we ought to just study things empirically and we will get the hang of it. Talk about somebody basically punting the ball!

In fact some people can explain the power-laws quite simply, and they appear mainly to have to do with heterogeneous randomness, well beyond the narrow Gaussian randomness that Taleb rightly rails against. So Taleb is on the right track but does not care to put the final nail in the coffin, and won't show how you can generate a distribution with a fat tail, even in an appendix. My problem with many of the scientists is that they want to make it needlessly complex or make it consultant-safe (maybe Taleb does not want to divulge his real secrets, even though he lauds openness in others). Thus scientists would rather come up with theories like the Continuous Time Random Walk (CTRW) model instead of KISS.
http://mobjectivist.blogspot.com/2009/06/dispersive-transport.html
CTRW is a very difficult model to follow, and they slip in some empirical rules to reach the "and then a miracle occurred" stage. It is interesting to discuss the love/hate relationship we have with theory and the gyrations that scientists perform to keep some aspects of their work intentionally obfuscated.
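To show how little machinery a fat tail actually needs, here is a minimal sketch (my own construction in the spirit of heterogeneous randomness, not Taleb's derivation and not CTRW): draw each sample from an exponential process whose rate is itself exponentially dispersed.

import numpy as np

# Heterogeneous randomness sketch: exponential samples whose rates are
# themselves exponentially distributed. Scale choices are illustrative.
rng = np.random.default_rng(0)
N = 1_000_000

rates = rng.exponential(scale=1.0, size=N)   # dispersed rates (the heterogeneity)
x = rng.exponential(scale=1.0 / rates)       # one thin-tailed draw per rate

# Marginalizing over the rates gives f(x) = 1/(1+x)^2, so P(X > x) = 1/(1+x):
# a Pareto-like power-law tail from purely thin-tailed ingredients.
for q in (1, 10, 100, 1000):
    print(f"P(X > {q:4d}): simulated {np.mean(x > q):.5f}   analytic {1/(1+q):.5f}")

No critical transition is needed; the power law falls out of nothing more than smeared rates.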

Well, I think network scientists get fairly close to identifying the origin of power-law distributions in networks. They seem to come about as an artifact of the growth and development processes, when redundant connections atrophy or are absorbed by some central hub offering superior connectivity.

In economies you might see that in the mutual advantages to businesses of the same kind flocking together, or the result of a technology radiation being the dominance of one as the others are relegated to niche positions. It makes sense that system networks organized around developing energy flows would accumulate efficient design, but why would entropic decay process? I don't know.

but why would entropic decay process?

I agree with everything you said, but I don't quite grok that last combination of words. Did you mean to say "why would entropy decay the process?" or "why would entropy delay the process?"

I have a feeling that your spelling checker is picking the wrong word.

I guess the models for increasing organization in nature are mostly based on entropy and random decay processes. I think noise and entropy cause organizations and other concentrations to disperse, quite the opposite of causing them to develop.

The bad syntax is not my spell checker malfunctioning, though, but my brain checker. It should have been "...but why would AN entropic decay process?", referring back to the first subject, "accumulate efficient design". I think growth is different from decay.

What power law distributions often seem to reflect is the accumulation of efficient designs, as in evolution or network development. I've always been mystified by why evolution theory would posit that random disordering would develop efficient design...

I see now. So you are asking why would an entropic process create order? Because these get aggregated into lower energy states. Reservoirs accumulate in low energy configurations. Why can a crystal lattice generate order from the liquid or gas phase? Pretty much the same reason.

I suggest that there are perhaps differences between power-laws as they relate to critical transitions and to those that relate to heterogeneous randomness. I am interested in the latter, yet most physicists aren't because they want to find some new revolutionary physical process. I just want to model some pragmatic bits of reality.

Evolution and intelligent design aren't even on my radar. I would suggest doing the obvious problems first.

To me the apparent problem is that the most common order creation process is system development by developmental growth, in weather, biology, society or economic systems, including resource exploitation, etc. Those developmental cascades are not random trial-and-error things at all, but non-linear flowing processes of progressive change with distinct start-ups and conclusions.

I think it comes down to whether systems are developing some internal organization along paths of opportunity for their development within their environments or not. One might call them "exploratory systems" maybe. The question is whether that is different from the alternative, being bumped into least energy states by unordered impacts from their environments.

Both styles of ordering may have pattern development consequences. I think it distinguishes two very distinctly different classes of order development, though: those centered on the development of internal systems and those that aren't.

I tried to talk to the Santa Fe complexity community for years, and was very involved in one of their forums. They only considered order to be what they could describe in equations, and I'm approaching it empirically. I simply could not persuade them to consider that.

My main empirical observation seems to be that where highly ordered things are developing, a storm, an organism or a business say, there is always a particularly large mismatch between the outside information and the complexity of the inside process. You ever notice that?

Describing something empirically is equivalent to taking a photograph. It may be interesting, but where is the understanding? I may be missing something. The Santa Fe people obviously like to do equations because they are mathematicians and physicists.

Yes indeed, you need to have a way of using observation to give you understanding.

I found a way to use the continuity laws as an observation tool, to employ the way a craftsperson might learn about the systems of their interest. They use observation to understand how the materials respond directly. What I do is use changes in the continuity of a process as information on what to look for happening within the process, or between a process and its environment, and I regularly find it.

That approach, learning to recognize behaviors in physical systems directly, before articulating an abstract theory to substitute for them, seems very productive in some areas. That's what my formal methods of observing natural systems amplify and systematize.

The two approaches, observational discovery and representational modeling, tend to be done by different people, though. It creates a need for those with better skills and interests in each to consider each other as being "stakeholders" in a process to which different kinds of knowledge contribute.

The Santa Fe folks are resolutely opposed to going down that sort of path, and confident beyond all reason that their theories are inclusive, is sort of what I found. Of course, it is also decidedly difficult to distinguish between words that refer to physical things and words that refer to our explanations or images for them sometimes. eco at synapse9.com

Yes indeed, nature does speak very quietly. What is just a little easier to notice through our self-entertaining images is what you might call the usual "disturbance in the force"...? of real change, the imperceptibly slow and then surprisingly rapid appearance of altogether new relationships.


Two comments, perhaps slightly tangential.

One. Take peak oil and climate change. The first is simple -- a finite resource depletes. There are some unknowns and legitimate counter-arguments, but the fundamental issue is very simple. Climate change, however, is much more complicated. Why? Because it involves the behavior of a very complicated and dynamic system, with many interacting cycles. (I don't doubt the reality of human-induced climate change -- I'm simply saying it's a complicated issue.)

This is typical. Western science developed first in observing the heavens, where there was predictability and order, whereas we are still far from understanding an amoeba in its entire operation as a whole.

Two. Some stubborn reactionaries have a very important role to play in science. Einstein was the clearest example of this in relation to quantum mechanics. He could not accept that quantum mechanics was complete. He more than any other pointed out its irreconcilability with the classical world view. But QM has held up, and its irreconcilability continues to manifest itself and mystify in ever subtler ways.

There are other reactionaries -- Dawkins with neo-Darwinism (selectionism explains all), Chomsky with his rationalism (universal grammar). Both have taken certain theoretical ideas completely out to lunch, i.e. to their logical conclusions. But without doing that, progress cannot be made. In science, people who ride their bombs all the way down perform a service (sometimes at least).

In that respect, I disagree with the TOD quote-of-the-day by Matt Simmons about data trumping theory ten times out of ten (or some such). Wrong theories are (sometimes) immensely important. Without a theory, one doesn't even bother to go out and collect data. BTW, Einstein was sure his theory of gravity was right before there was any evidence for it -- he thought it was too beautiful not to be true!

What's almost never useful is the personalization of an issue, not useful for science itself. But it makes for interesting history. Scientists are human beings.

Yes the fundamental idea behind peak oil is so very simple, although no one has really tried to formalize it. The model of dispersive discovery covers the discovery peak, and the oil shock model covers the production peak. That is all we have and all we will need in its simplistic glory, if it can only gain some consensus. However, the idea behind climate change is indeed a lot more complicated, and the formalization in climate science largely reflects that situation.
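To show the skeleton of that formalization, here is a hedged sketch in Python (the Gaussian discovery input and the 10-year phase constants below are purely illustrative assumptions, not fitted values; the real models work from fitted discovery data):

import numpy as np

# Oil shock model skeleton: production = discovery passed through a
# cascade of first-order exponential lags (fallow, construction,
# maturation, extraction). All numbers below are illustrative only.
years = np.arange(150)
discovery = np.exp(-((years - 40) / 15.0) ** 2)   # toy discovery curve

def exp_lag(signal, tau):
    # convolve with an (approximately normalized) exponential response of mean tau
    kernel = np.exp(-years / tau) / tau
    return np.convolve(signal, kernel)[:len(signal)]

production = discovery
for tau in (10.0, 10.0, 10.0, 10.0):   # four phases, 10-year means each
    production = exp_lag(production, tau)

print("discovery peaks at year", years[np.argmax(discovery)])
print("production peaks at year", years[np.argmax(production)])

The production peak lags the discovery peak by roughly the sum of the phase time constants, which is the essence of the shock model.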

Good story on Einstein. He actually won his Nobel Prize for the photoelectric effect, which led to much of the work on QM. He was also acknowledged for his work on Brownian motion, which essentially helped to open up the field of statistical mechanics. He was a renaissance man who could really play the devil's advocate on any problem that came before him.

I wonder, what about people who just differ with the popular wisdom of the time? I happen to think that if Einstein helped invent statistical physics, he probably accepted and understood the uncertainty of events, so his complaint seems like it might have been more about the discontinuity of events posited by QM.

For me the conservation laws imply that all energy transfers require a process of energy transfer, and that would agree with Einstein but not QM. QM doesn't actually need to ask or answer that, however, since QM is not really about observables anyway, is it? If any quantum event were to take place by any process, or not, it seems like it might be of no real interest. That the macro world absolutely requires process continuity for events to transpire, then, seems oddly inexplicable.

Still, I think that keeps QM from being "complete". No doubt it seems to be a very effective way to use the data available. Does it do any more than what all other science have proved to be so useful, being careful to only ask the simple questions your available data can answer?

I'll just go ahead and pick this post to comment.
I completely agree with you in your assessment that we are trying to pin down a moving target.
The learning process itself is going to skew any results.
Every new level of understanding opens up new possibilities.

The more I know the more I don't know.

The more I know the more I don't know.

I find comfort in the knowledge that total ignorance is a convenient lower bound.

Of course Taleb has a name for all these observations. This one he calls the Ludic Fallacy (aka the Nerd Fallacy). This is the belief that the future can be perfectly modeled, analyzed and predicted.

There is also the modeling paradox (which I cannot recall the name of) which supposedly happens in economics and effectively skews the results. In this case, having a good model to predict the future trends would be immediately nullified as the market would adjust to the perfect knowledge that the model would provide you. Of course, this one can never be proven or disproven.

I must confess that I have never read anything that Taleb wrote; I have just surmised, based on what I have heard here and there.
I don't think that any of what we are talking about is new.
By the way I admire your thoroughgoing method of evaluation.
I have never been as rigorous as you seem to be.

In which case I have to mention Taleb's "Expert Problem". It has to do with always questioning an expert's confidence, or overconfidence.

Apparently, Taleb has all the bases covered.

No one, not even you, has all the bases covered.

Well, I think the markets may actually not be "inscrutable". Have you seen the 100-year energy use and market value curves? This one is from Charlie Hall, with a note I added.

See how wildly detached from the relatively smooth physical progression of the economy "the market" is, *always*? I think the markets seem to be SO detached from reality that it appears every one just follows each other, i.e. doing close to a true random walk... That's some "really smart guys"! ;-) Could it be the short term predictor of what the guy down the hall is doing is the only one they know how to follow?

Note the 1967 inflection point of primary energy use for the US economy, where the curve switched curvature from exponential to asymptotic. That is the end of physical growth for the US economy. It indicates the point in time when real exponents for the US economy went from greater than 1 to less than 1. For what people think economics is about, I think 1967 is the year when the laws of economics reversed meaning for the US...

Since then the world economic boom seems like it was powered by the illusion that the US was still growing exponentially because our debt was. This side of peak oil, and with the collapse, there's no more of that funny money available. So I expect the world's energy use inflection point, marking its physical point of diminishing returns, will turn out to be in the 90's sometime, once things settle down.

@pfhenshaw,

a quick explanation of why Einstein's refusal to accept QM is not strictly related to conservation laws or continuity of processes.

He was the first physicist to take discretized energy (and momentum) seriously (Planck used it more as a mathematical trick to solve the black-body radiation problem and didn't quite believe it applied at first). Following upon this belief, he studied the photo-electric effect.

Now the incompleteness of quantum mechanics has to do with more subtle concepts such as non-locality and entanglement. Let's see what they mean with a simple example. Imagine an atom emits two smaller and equal particles (say electrons) flying in opposite directions.
A <== boom ==> B

And you say, "OK, I'm going to measure some properties of these particles."
For example, I'm going to measure the positions and speeds of A and B at some point.

But there is a rule in Quantum Mechanics called the uncertainty principle, which says something like: the more you know about the position of a particle, the less you know about its speed (see Heisenberg's uncertainty principle). This strict rule says you will never know both the speed and the position exactly; it's forbidden!
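In symbols, the standard textbook statement reads (with delta-x and delta-p the spreads in position and momentum, and hbar the reduced Planck constant; nothing here is specific to this thought experiment):

delta-x * delta-p >= hbar / 2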

You say, "Aha!" "I just had a brilliant idea which will prove you wrong!"

You do the experiment, the atom in the middle doesn't recoil, so the particles left in opposite directions with the same speed,
A <== bang! ==> B

And now, in a witty way you measure:
The position of A (it flew 3 meters)
The speed of B (it was going at 2m/s),

And you say, well, if they flew in opposite directions at the same speed, then obviously particle A also had speed -2 m/s. EUREKA! Now you know the speed and position of "A".

"A" flew 3 meters at -2m/s. I found out both!!

And that's when the weirdness of quantum physics kicks in! Quantum Mechanics says, I'm sorry mate, if someone measured the speed of "B", then the speed of "A", even if it was a mile away, changed instantly. (this instantaneous effect at a distance is what entanglement is about ... roughly, of course).

If this phenomenon were real, then things could affect each other at a distance to preserve that ignorance of position and momentum at once. This is the kind of conceptual jump Einstein was faced with. (By the way, in the 80s some experiments showed that entanglement does seem to exist, but that's another long story.)

Thanks, I hadn't heard that version of it before. Maybe the question is about projecting our information to theorize about things we can't observe. If it seems to imply contradictions we can't resolve, are those contradictions then what is physically happening? Does "physical" still mean anything? One is kind of stuck if you can't check your information with anything but your information. It certainly leaves you to treat the situation as it seems, though.

When I've discussed this with others before it always seemed to go there. It seems those contradictions should make a difference to the laws and behaviors we do find observable, but apparently do not. So I think it seems possible that we just don't really understand it. That's consistent with both QM and traditional physics, right?

Bohr seemed to think Einstein was simply unable to renounce the ideals of continuity and causality... I think it may have been more directly a reluctance to say the content of our information is the physical reality science studies. For QM it seems we have no choice but to treat what we are observing as having no physical reality, though. God knows this can be confusing!


The first is simple -- a finite resource depletes.

So should we be worried about the sun running out of hydrogen?

Well, I know one thing we apparently don't have to worry about running out of.

I seem to remember that in Lovelock's Gaia hypothesis, the sun is slowly ageing (burning through its hydrogen) and getting hotter. However, negative feedbacks in the earth's biosphere have fine-tuned the atmosphere to balance the net greenhouse effect against the rising net solar radiation over a period of billions of years. This is the core of the hypothesis.

However, there are limits. He estimates, even without human interference, that in about another billion years, the increasing solar radiation will be sufficient to overwhelm gaia into a runaway greenhouse effect. If humans manage to drive themselves to extinction and take out most of the megafauna with them in an orgy of AGW, evolution will probably only get one more chance at producing intelligent life before all life is snuffed out.

So yes, we need to worry about the sun running out of hydrogen.

I'd be interested to see this approach applied to the housing bubble.
It's a complex system with lots of good data.
I'd argue that this approach can readily model the housing bubble.

However, we happen to know that underlying the housing bubble were economic policies that allowed money to be steadily lent to people less and less able to repay the loans.

The driver was not real economic expansion and real wealth but, for all intents and purposes, a massive Ponzi scheme spanning decades.

In my case I realized that disinformation or lies follow the same mathematics as the truth.

The divergence becomes obvious at the end, when the lies become blatant, as with the ridiculous lending that happened at the peak of the housing bubble. But I'd argue that studying the data alone does not suffice to elucidate the chain of lies that allowed the bubble to grow over many decades. At each step of the process, the incremental weakening of lending standards led to a system slightly different from the one that would have existed if standards had remained stronger.

At some point, of course, the system was forced to either cease weakening its lending standards and deal with the resulting deflation, or make even larger and less prudent changes to keep growing. Telling the truth, if you will, became an ever more painful option, and one we have yet to really take in the case of the housing bubble.

Regardless, I see no easy way to use this approach to elucidate the difference between truth and fiction in complex systems, as the incremental lie, if you will, follows similar statistics.

For oil, of course, we have the blatant increases in OPEC reserves, which are obviously incorrect. What we don't have is the kind of knowledge we have for housing, where the steady divergence of rental rates and mortgages allowed us to detect the bubble.

Indirectly, of course, at the macroeconomic level oil usage is related to economic growth, and the credit bubble was widespread, not just in housing, so there is at least a loose coupling between oil production and the macroeconomics.

It's anyone's guess as to what this relationship means, but the housing bubble should at least make you a bit suspicious about what's really happening with oil production, since oil was intrinsic to allowing the expansion of the McMansion/SUV lifestyle with expanding credit.

Obviously I've come to the conclusion that they are indeed closely related, with very similar underlying injections of incremental lies.

Regardless, I see absolutely no way to differentiate truth from fiction using the existing data sets and complexity models. The underlying problem is that incremental lies ape or perturb the truth ever so slightly, and once incorporated and treated as fact, further lies simply perturb the new mixed set of truths and lies further.

Without the ability to fact-check the data set, as we can using rents vs. mortgages for housing, I argue the models simply can't pierce or see the truth in a complex system that has steadily incorporated ever larger falsehoods as fact.

What's a bit funny, given your paper, is that good old common sense seems to have been the savior of most who did not get caught up in the housing bubble. They did not buy into the new "rules".

What's ironic is that this approach is in a sense too powerful: disinformation or lies seem to follow the same general mathematics as a natural complex system, and in fact this is why you can successfully execute the creation of a complex system based ever more on lying.

I'd love to see if you can use your approach to elucidate the housing bubble without resorting to rents to pierce the lie. I was not able to see how it can be done.

Incorporating lies and untruths puts the ideas into the realm of game theory. You can have truths disguised as lies, lies disguised as truths, and all sorts of reverse-psychology tricks. From what I have read, the recent research indicates that many game theory problems are essentially unsolvable.

Read "What computer science can teach economics"
http://www.physorg.com/news176978473.html

So you do the best you can with the numbers that you have. That is why scientists hate frauds and poseurs more than anything else. They basically waste everyone's time if they purposely generate false data or results. That is probably the reason that many economics problems are unsolvable. Lies are uncountable or innumerable and don't figure as concrete metrics; unlike barrels of oil, they are essentially unobservable. The premise of observability is one of the first things you learn in engineering.

Yet in the end, a modeler shouldn't really care if any one country lies about their oil data. Unless the conspiracy is vast, the macro model will filter out the effects. I only offer sympathy for those that choose to use a bottom-up model that puts faith in all the data from, say, Saudi Arabia. For example, dispersive discovery will only use that suspect data as a Bayesian weighting factor, not placing all the eggs in one basket. In other words, the lies have to reach a certain tipping point before they affect a stochastic model.

The climate change scientists are being accused of lying about some of their data. The skeptics do not understand that the individual lies have to reach some tipping point before they make a difference.

In general this is a difficult and perhaps moot issue to address, since you shouldn't just give up your modeling effort because you think someone is lying. That is the scientific equivalent of punting.
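To make the tipping-point claim concrete, here is a toy Monte Carlo sketch (all numbers invented, not drawn from any real reserve data): one country inflating its reported figure barely moves the aggregate until the inflation becomes blatant.

    # Toy sketch: effect of one country's inflated number on a macro aggregate.
    import numpy as np

    rng = np.random.default_rng(42)
    true_reserves = rng.lognormal(mean=3.0, sigma=1.0, size=50)  # 50 countries

    for inflation in (1.0, 2.0, 5.0):      # 1x = honest, 5x = blatant lie
        reported = true_reserves.copy()
        reported[0] *= inflation           # a single country fudges its data
        bias = reported.sum() / true_reserves.sum()
        print(f"inflation {inflation}x -> aggregate off by {100 * (bias - 1):.1f}%")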

Yet in the end, a modeler shouldn't really care if any one country lies about their oil data. Unless the conspiracy is vast, the macro model will filter out the effects. I only offer sympathy for those that choose to use a bottom-up model that puts faith in all the data from, say, Saudi Arabia. For example, dispersive discovery will only use that suspect data as a Bayesian weighting factor, not placing all the eggs in one basket. In other words, the lies have to reach a certain tipping point before they affect a stochastic model.

Exactly my concern: has the data for oil been fudged to the point that it's equal to selling a half-million-dollar house to a strawberry picker?

As far as vast goes, well, the housing bubble was a multi-trillion-dollar Ponzi scheme across the world, played out by numerous people for personal gain. A vast conspiracy does not require a secret cabal, just willing participants to play the game.

The key to the fact that the housing bubble was a vast conspiracy was the divergence of rents and mortgages over time. If oil is subject to a similar sort of situation, then there must also be a key exposing the divergence between truth and fiction.

This key may be impossible to find or it could be hiding in plain sight.

I watched the same relationship between rents and the cost of owning the same property diverge as well.
Rents will always reflect the ability to pay because, after all, the landlord can't securitize his tenants and sell them to some unsuspecting fool of a fund manager.
So since the canary in the coal mine for the housing bubble was the landlord, who had to make good leases, maybe there is some analogous data in the oil industry.
I am not familiar with oil at all, but maybe some production numbers that just can't be fudged could be a leading indicator.
Other than that, price at given production levels seems good enough.
Right now the price is saying that demand is outdoing supply at lower prices.
If we don't see prices fall with lower demand, or if prices keep creeping higher at present production numbers, then that seems like confirmation.
What kind of divergence are you looking for?

I agree that you need to latch on to some observable (production levels) or a good proxy (prices) that can substitute for the observable. Or else use some combination of the two, like backdated discovery numbers.

All models start from somewhere.
Why is 80-buck oil not good enough to validate Peak Oil?

That is precisely what Steve from Virginia proposes. Based on prices, I think Steve nails the PO time circa 1998.

http://economic-undertow.blogspot.com/2009/11/conversation-with-michael-...

you trying to get rid of me?

Did I accidentally mention gold somewhere? sorry.

Nice try, but I am not that paranoid or desperate.
But maybe I should Be?

I try to use price as much as I can; however, it's not a good predictor of future behavior, as price is set on marginal demand. A fantastic example, of course, is the recent price collapse from $140 to $30, then the steady rise back to around $80 now. Obviously, even in my doomer scenario, at any point along this price roller coaster supply did not vary that much on a month-to-month basis. Oil is fairly close to a just-in-time production/consumption system, with in general less than 30 days of storage, so it's a system that's easily swamped by excess production. Even though we burn 70+ mbpd of oil, an excess of even a few hundred million barrels is more than enough to swamp the system.

Take the recent economic crash: if you think about it as the entire world effectively taking a few days off over the period of a couple of months, and if production remained constant, you have 100 million or more barrels of oil that now clog the system. It's just a few days' worth of production, but it's enough to seriously depress prices for quite some time.
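The back-of-the-envelope arithmetic (rounded, illustrative numbers) looks like this:

    # Rough glut arithmetic: a modest demand drop quickly clogs storage.
    consumption_mbpd = 70.0   # world burn rate cited above, million barrels/day
    storage_days = 30.0       # rough just-in-time storage buffer

    demand_drop = 0.05        # "the world takes a few days off": -5% ...
    duration_days = 60        # ... for a couple of months

    excess_mb = consumption_mbpd * demand_drop * duration_days  # 210 million barrels
    buffer_mb = consumption_mbpd * storage_days                 # ~2100 million barrels

    # A ~10% overhang on the whole storage buffer, from demand that barely
    # moved -- enough to depress prices for quite some time.
    print(excess_mb, excess_mb / buffer_mb)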

By the same token, as long as the market is well supplied, especially after a crash, price will probably not respond quickly even as storage is drained. Especially, of course, if some key players are lying about their true storage levels :)

Now, the fairly quick rebound in prices after the crash suggests that we do have serious underlying production problems. But even so, price is a function of supply and demand, and we could well see another price crash from demand collapsing even as underlying production falls. Or we could not.

My opinion is that as the economy shrinks, remaining oil demand will be ever more inelastic, so price crashes probably either won't happen or won't be of the same magnitude as the one we have just seen. But this is entirely dependent on the demand side of the equation, not the production side.

The fact that the price of oil has steadily risen even as the unemployment rolls have swollen is suggestive of a serious underlying production problem, but it's not predictive.

In fact, using price, we can only know for sure what's happening after the fact. If prices continue to steadily increase and remaining demand really is increasingly inelastic, then price will serve as a good proxy for production. If further demand collapses occur and prices fall, then we don't know for sure what the real relationship is between changing demand, production, and price. If production numbers are suspect, then the actual demand that equals production at a given price is also suspect; i.e., we simply don't know how to balance all the variables.

I'm certainly not dismissing price, and given that I don't believe the production numbers, I don't have a lot of other variables to use. But that does not make price a good predictive indicator of future production, just one suggesting that the truth and what's reported may not be in alignment. The potential variability in demand, however, prevents price alone from being the sort of smoking-gun indicator one would need to elucidate real production, much less guess at future production levels.

So basically, if price is high, supply is low.

Or try integrating price over a time span. Price fluctuations, to first order, are a derivative of production with respect to time.

I posted this a while ago

To model this:

http://www.theoildrum.com/node/5853#comment-549151

As a way to read the tea leaves, let's not forget that the classic "up and down" squiggles that we see in gasoline and natural gas prices are often caused by the time derivative of an underlying supply/demand bubble.

The blue line below is the scaled time derivative of the red line:

[Figure: supply/demand bubble (red) and its scaled time derivative (blue)]

Compare that against the two curves that Gail shows:

Note that right around October 2008 we saw the inflection point of the transient demand bubble (due to the credit crunch?). My point is that these indicators are potentially just derivatives (in the mathematical sense and otherwise) of other indicators. A real derivative can reach a very high value simply due to the steepness of the underlying change. In general, that is why these curves are so noisy. Derivatives act as high-pass filters, and they pass all the high-frequency noise and jitter right through.
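Here is a minimal numeric sketch of that claim (bubble shape and noise level invented for illustration):

    # Price squiggle as the scaled time derivative of a supply/demand bubble.
    import numpy as np

    t = np.linspace(0.0, 10.0, 500)            # time axis, arbitrary units
    bubble = np.exp(-((t - 5.0) ** 2) / 2.0)   # smooth transient demand bubble
    squiggle = np.gradient(bubble, t)          # first-order time derivative

    # The derivative peaks where the bubble is steepest, not where it is
    # largest, and it amplifies any high-frequency jitter riding on the bubble.
    noisy = bubble + 0.01 * np.random.default_rng(0).normal(size=t.size)
    noisy_squiggle = np.gradient(noisy, t)     # visibly noisier than squiggle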

Take that FWIW because I don't know how far you can go down this path.

So in the short term a lag or accordion effect.

Yes, since you have to integrate, you will have to pay the price (no pun intended) of a potential lag.

You know.
When you look at all this you can't help but feel that we can do better.

Well, the one I latched onto a few years ago that doesn't seem to be appreciated yet is price sensitivity. For necessities in high demand, any rigidity in supply will be jumped on by speculators to send the prices way up. Otherwise, short supply is jumped on by investors to send the supply up.

The latter is just what did not happen, worldwide, for all the commodities markets. It started (retrospectively) in about 2002. I didn't notice it till it made the news in terms of the collision between food and fuel commodities and their conflicting demand rigidities acting out in the corn fields of Iowa in 2006-7.

I wrote about what I think that really means to us in "Profiting from scarcity". That was before I understood that peak conventional oil had already occurred, now seeming to many to have been in 2005. If this commodities bubble was the economic response to permanently rigidifying oil supply conditions... the response seems to have begun about 2002. I think that looks like quite enough of a "black swan" to upset some century-old betting models based on the opposite continuing forever... ;-\

Anyway, price stability might indicate that investors know what the future supply will be, growing or not, and instability might indicate that people were caught off guard by either supply or demand rigidities or collapses, etc. At the moment, with the relatively high oil price, are speculators just waiting around for the next commodities bubble, which they now know to just wait for? Or are they pricing oil realistically, with the price level likely to be stable for a while? I don't know...

There are an infinite number of ways to lie but only one way to tell the truth.

The truth is often the outlier or Black Swan that people have yet to discover.

The whole Black Swan thing to me is just that we simply don't live long enough to accumulate the amount of data necessary to evaluate it.
Granted, there are databases, but they are not accumulated by one person.
We just don't know enough to predict.
It is the law of large numbers.

I would love to see one human mind live for One Thousand Years...........................

I would love to see one human mind live for One Thousand Years...........................

Well, maybe we can sum up the resulting knowledge gained by looking at the one-thousand-day lifespan of Taleb's trusting Thanksgiving Day turkey. This is my rough reconstruction of his graph... ;-)

[Figure: rough reconstruction of Taleb's Thanksgiving turkey graph]

Gobble, gobble, gobble.

Doesn't it count that the earth appeared to switch permanently, sometime in the 1960s, from being ever more yielding to us to becoming ever more complicating for us?

The projected beginning of the housing bubble was in about the 1970s, I think, coinciding with when speculation started to so outpace incomes as a source of income. Isn't that consistent with natural resistance to development being a reason for the "smart money" to go into bidding up fixed assets as an alternative to rewarding people for earning money by employing them?

I watched that one, it is a good short video about 15 minutes. I also watched a longer video that lasts over an hour, which is Gell-Mann lecturing on the incubation of creative ideas. The Q&A is really fun because he starts to get cranky and you can watch his attacks on people like Michael Crichton.
http://video.google.com/videoplay?docid=1181750045682633998&ei=6D4IS7H9O...

I can see now that the content of this post is what was on your mind the other day when you were responding on the Enter the Elephant Campfire.

Good call. The Elephant was Nate's metaphor for the beast that can have a mind of its own. At the time of posting to the campfire, I was pushing for the premise that we still need the intelligent Driver to try to control The Elephant.

I also like Taleb's metaphor for The Cemetery. No one thinks about The Cemetery (uncertain data and outcomes) because it is too unpleasant.

I like his thanksgiving turkey analogy the best.
Plus it is in season!

Not that I'm likely to read Taleb; 20 pages of excerpts gives me enough of where he can and can't go. As to the cemetery, I will counter that he is dead wrong: we are always thinking about the cemetery, and that is why we so heavily weight the here and now.

I agree.
I call it the 1000 year mind in the 100 year body.

WHT - Thanks for the thoughtful post. Not sure I completely understand the entire post, but regarding Figure 7 - and assuming my analogy processor is working - isn't this summary similar to the basic QM problem of predicting both position and velocity? It seems like you are stating a Heisenberg Uncertainty Principle that applies to all models of the observed world. On the left is the classical deterministic world of the single particle that follows Newton's and Kepler's equations, and on the right the probabilistic behavior of many particles. I realize that you are also including progressively more variables as well (which accounts for the hump in the middle?). Is that correct, or am I barking up the wrong tree here?

Close, but not quite a cigar. There is a bit of a problem in scaling the Uncertainty Principle into the macro world. Actually, what happens is that because of the law of large numbers, things can become more deterministic. Say that you had a single molecule that eventually would comprise a block of wood. One can apply the uncertainty principle and say that you don't know the exact location and velocity of the molecule at the same time. Yet if you put all these molecules together, you would have absolute confidence in the position and velocity of the block of wood. So that idea doesn't necessarily scale.

My (and Gell-Mann's and Taleb's) idea of uncertainty simply states that we don't know the absolute size and position of some unknown block of wood. Blocks of wood come in many different sizes and shapes and can occur anywhere. The idea of informational entropy states that all these blocks of wood wouldn't come in exactly the same size or occupy only one region. In other words, the population is dispersed, and one can use maximum entropy arguments to quantify this based on extensive attributes such as mean density, etc.

Put that in the perspective of finding an oil reservoir, and that gives you my general thesis. If we knew all the locations of the oil reservoirs a priori we would be at the left side of the diagram, and if we only knew moments, we are at the right side.

This points out the eventuality of pinpointing the location of all known reservoirs; we will by definition reside in a state of low entropy, and the model would be complex only in the sense that you would have to exhaustively list every known location. Unfortunately this would result in a boring model that lists tens of thousands of book-keeping entries. It remains complex because it connects every last dot.

Stuff in the middle of the diagram has a mix of some complexity and some simplicity. This is essentially a bottom-up model with a lot of unknowns.
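As a minimal sketch of the maximum entropy step (the mean is the only constraint, and the number is invented): constraining just the mean size of the blocks makes the exponential the maximum entropy distribution over positive sizes.

    # Maximum entropy with a mean-size constraint: the exponential distribution.
    import numpy as np

    rng = np.random.default_rng(0)
    mean_size = 100.0                           # assumed mean volume, arbitrary units
    sizes = rng.exponential(mean_size, 10_000)  # dispersed population of sizes

    print(sizes.mean())      # close to 100, the single constraint we imposed
    print(np.median(sizes))  # ~ mean * ln(2): many small blocks, few giants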

WHT has made this point several times and it bears repeating: the oil extraction/consumption discipline needs good models. It needs them so that practical forecasting can be improved. It also needs them to counter the arguments of vested interests, which constantly tout their book but nothing else.

The take-away is this:

It has often been emphasized, particularly by the philosopher Karl Popper, that the essential feature of science is that its theories are falsifiable. They make predictions, and further observations can verify those predictions. When a theory is contradicted by observations that have been repeated until they are worthy of acceptance, that theory must be considered wrong. The possibility of failure of an idea is always present, lending an air of suspense to all scientific activity. (Gell-Mann p.78)

This is the crisis in economics, as most of the establishment practitioners' models (theories) failed to predict the arrival of a blatantly obvious onrushing financial disruption, and also failed to predict its severity and duration. The same models are failing now, the revised models are also failing, the new practitioners' models are failing, and the failures generally are taking place because enduring prejudice/embedded interests constrain model inputs. Gell-Mann is correct: the falsifiable theories have failed, period. Garbage in means garbage out.

Heuristics can be correct as often as otherwise. Resource depletion is not a sensation; when it becomes one it will be so unpleasant, which is why the business community and their lackeys and sock puppets don't want to touch it with a bargepole. Of course, it will by then be too late to do anything about it. I learned this common sense from my mother: "Don't use capital for operating expenses." Our culture has been burning through its capital for fifty years, as fast and as 'efficiently' as possible.

Why does everyone out there in the greater world hate scolds such as the peak climate and peak oil crowds, again?

This article speaks to me about languages, which are linear in effect:

'Rational' <------------------------------------------------------------> 'Hysterical'

At one end of the scale is Art => Naturalistic description => mathematical (language) description => scientific theory => Applied technology => propaganda => statistics => rumors => leading back to art.

Ideas get personalized because it's easier to follow names. The ideas themselves become fixed, as in: Leonardo Da Vinci's idea was to create artworks which, by his act of creation, described observable characteristics of nature. That idea is embodied in 'Da Vinci', which then becomes a 'Code' ... then a major motion picture:

Leonardo Da Vinci described nature => John Stuart Mill described society => Carl Gauss described electrostatic forces => James Lovelock observes the relationship between ecosystems => Thomas Edison related electric force and mechanics => Samuel Goldwyn took the mechanics and sold its lifestyle products => Paul Krugman today gives us comforting lies about the process => Rush Limbaugh gives us disquieting lies about Krugman and others like him => the end product of the system is Andy Warhol, who is personally as enigmatic but equally descriptive as Da Vinci, and as disquieting as Limbaugh.

Thomas Eakins => Adam Smith => Albert Einstein => James Crick => Genghis Khan => Joseph Goebbels => Milton Friedman => Father Coughlin leading up to James Dean!

None of these 'actors' have any connection save in the minds of those who would follow the 'arc of conceptual reasoning' from art to observation to description to use to lies and ultimate fantasy. At the same time, the actors' primary contributions all have equal validity; they are internally coherent to the degree that they defy criticism. The logic framework of prior art cannot be applied to them. It would be hard for Louis Pasteur to critique Francis Crick's work, although he would certainly understand it, and Crick stands upon Pasteur's shoulders. The critique of Poussin cannot be made of Jeff Koons (... or vice versa, but that is another discussion). Yet prior-art claims are launched against the sciences constantly; Galileo just missed being burned at the stake by the 'Earth-centric' crowd. Here's more, you can do this at home:

Eric Hoffer => John Locke => Enrico Fermi => Georges Lemaitre => Johannes Gutenberg => Leni Riefenstahl => George Gallup => Orson Welles => H. G. Wells.

Jackson Pollock => Charles Darwin => Pierre Fermat => Sigmund Freud => Eli Whitney => Conde Nast
... eventually leads to Thomas Nast and a corpulent Santa Claus.

It's not a hard game to play, start with:

Edward Gibbon => Carl Menger => Euclid => Karl Marx => Steve Jobs => climate change deniers => Freeman Dyson => Peak Oilers => Adam Sandler. You start with James Madison, who describes (human) nature, toward different forms of observation, toward reasoning, toward application, toward rationalization, to advertising and profits, where you wind up with Nathan Rothschild ... to Sir Evelyn de Rothschild, to Rothschild wannabes, and ending up with Groucho Marx and Sarah Palin. The last two are both artists of sorts. Andy Warhol is usually parked at the beginning and the end of the 'circle of descriptive rationalizations'. Warhol's descriptions (colored photographs) of his environment are as valid as Palin's evocation of hers.

Both are equally 'gay' too, BTW. See how it's done? The reduction of complexity into derogatory simplification: wanna watch me do it again?

What can be taken away from all this is the distracting folly of discrediting others' models by discrediting those who are interchangeable with them. Everyone constantly does it; I do, and that's human nature. We are all as competitive as race-car drivers and don't care if the other crashes! 'Ms. Gayness' Palin can cry all the way to the bank with the $20 million she's made from her gay-bashing, neo-Nazi 'disciples' over the past six months! Good grief! I look better in a pink chiffon dress than she does! I'm also a far better shot. Where's MY $20 mil?

All the models are valid which are rendered internally consistent; Gell-Mann's process is the forge of internal linearization, or the crafting of a narrative by absorbing components from the outside. The models gain integrity. Blaming the ills of the world on Jews may not be factual or historically accurate - certainly not geologically accurate - but the internal consistency of the approach that rationalizes this way cannot be easily argued against. It is always true, to a mathematical certainty, that wherever there is some trouble there is a Jew within a couple of thousand miles. The quid pro quo is as clear as the nose on my face; at the same time, how do you model absurdities? It's not required when the model is self-reinforcing. The components of absurdities are as inwardly rational as laws of science. Scientific method is irrelevant to show business. Both Freeman Dyson's and James Hansen's imaginations of climate are superficially coherent, differentiated in ways that are quibbling and hard to quantify, but the AGW argument is fatally undermined when Dyson becomes 'Mr. Climate Change'.

This happens here! Whaddya going to do? Arguing with Dyson now is arguing with science itself. All the deniers' rigor follows from that point.

How will Peak Oil ever transcend Matt Simmons? Simmons is the 'Mr. Peak Oil Dude'; that's all that matters. The issue becomes whether he beats his dog. The next step is for Joe Isuzu to take that dog-beating Matt Simmons' place. When that happens, peak oil as a concept disintegrates.

Until reality arrives, that is ...

Louis Farrakhan called Hitler a 'Great Man'; what did he mean by 'great'? Did that word mean a great ... disaster? Germany's surrender in May of 1945 spared Berlin or Dresden or Leipzig the atomic bomb. Hitler was a catastrophe to Germany first! Yet the Germans were 'under a spell', and denial and unreality held sway until the end; we Americans are treading a well-worn path that ends in the same place in spirit if not in form. Yet today, in the New York Times, Bob Herbert measures the American cultural and gestalt center of the Auto Age as a 'Dresden Equivalent':

What you’ll see are endless acres of urban ruin, block after block and mile after mile of empty and rotting office buildings, storefronts, hotels, apartment buildings and private homes. It’s a scene of devastation and disintegration that stuns the mind, a major American city that still is home to 900,000 people but which looks at times like a cross between postwar Berlin and the ruin of an ancient civilization.

The personalities elbow out the concepts. Language ultimately short-circuits itself. Reality is its own master; Dyson will die a fool, as is/was the undying Joe Isuzu. Pitiless and remorseless reality triumphs; witness the demise of 'Joe Camel'.

[Image: Joe Camel]

Like 'Mr. Penis Head', ideas become easily consumed 'quanta'. The consumer can pass down the aisle with a shopping cart and fill it with a helping of Federalism from James Madison and a box of repressed sexuality from Freud, a container of Jew-hatred from Billy Graham and several servings of paranoia from Franz Kafka and Glenn Beck. What are you 'up for'? Marshall McLuhan is still right: the medium is the message.

Paul Krugman is wrong and will be wrong to his grave, but he's still the 'world's best economist' because he resides @ the New York Times. He is in the 'statistical' category, as in Will Rogers' "Lies, damned lies and statistics". What are you going to do? Wait for that 'Bus Named Desire' with both anticipation and dread, as it is going to run over all of us. It doesn't matter if we are sitting at the stop or not.

The container for all the arguments is language, whatever your language happens to be. It is the supermarket in which the consumers of lies and truths jostle for position. Where does WHT fit into all this? Asking him to make an argument without using math is like asking Willie Shoemaker to win a horse-race without a horse! Take Einstein's chalkboard away from him and you have a haircut. Nevertheless, WHT's points are well made and certainly valid. I appreciate the masterful effort.

I had to reproduce Steve's name/idea associations in table format, as it makes them more explicit:

Art | Naturalistic description | Math (language) description | Scientific theory | Applied technology | Propaganda | Statistics | Rumors | Art
Leonardo Da Vinci | John Stuart Mill | Carl Gauss | James Lovelock | Thomas Edison | Samuel Goldwyn | Paul Krugman | Rush Limbaugh | Warhol
Thomas Eakins | Adam Smith | Albert Einstein | James Crick | Genghis Khan | Joseph Goebbels | Milton Friedman | Father Coughlin | James Dean
Eric Hoffer | John Locke | Enrico Fermi | Georges Lemaitre | Johannes Gutenberg | Leni Riefenstahl | George Gallup | Orson Welles | H. G. Wells
Jackson Pollock | Charles Darwin | Pierre Fermat | Sigmund Freud | Eli Whitney | Conde Nast | - | Thomas Nast | Santa Claus
Edward Gibbon | Carl Menger | Euclid | Karl Marx | Steve Jobs | Climate change deniers | Freeman Dyson | Peak Oilers | Adam Sandler

The bottom line that Steve never mentioned, and is intending for us to figure out, is that both the Climate Change Deniers and the Peak Oilers have no "Big Name" associated with them. It is only a matter of time before some name gets attached to that brand of propaganda or that rumor. And which name gets associated with these slots is still up in the air.

Will these names be Freeman Dyson and Matt Simmons? We don't know yet, or at least until reality arrives.

deleted: irrelevant

Oops ...

It just dawned on me that some people might be upset that I used Joe Camel out of context. For anyone who was or is offended by this, I'm truly sorry ...

;-)

Nice book report, despite the occasional logical fallacy. I point to your pitfall of the bifurcation. Claiming that I will either think I understand the post or know for a fact that I won't just doesn't wash. Try this: WebHubbleTelescope has tried, but failed, to convince me that we ought to buy yet another new term, crude complexity, and that nonlinear mathematics is more powerful than simple models that seem to work, such as the logistic equation.

We can dream up new terms and refined models, striving for more detail, but unless the theory behind it has predictive skill, it doesn't matter. We all know things get complicated. The derivative of acceleration is jerk, and of jerk, maybe a wild chaotic mess. And on and on, to, what, simplicity? You seem to be saying things reduce to certain universals like pi or the Feigenbaum number, or zero, like opinions in a democracy. Okay, agreed.

Carl Friedrich Gauss coined the term "complex" to denote a "complete" mathematical treatment, a way of thinking that accounts for the imaginary plane, a phase shift off from the real. It has given us stunning power in explaining such things as magnetism and even humor. Using the term "crude complexity" appears to be a misuse, going from precise to crude language.

Sorry to be such a picky curmudgeon but wading through theory about theory seems to be an unnecessary effort unless, to borrow from William James, the theory has a little "cash value,"--answers that matter, that are demonstrated to refine Hubbert's simplicity.

Let me go through your points:

WebHubbleTelescope has tried, but failed, to convince me that we ought to buy yet another new term, crude complexity, and that nonlinear mathematics is more powerful than simple models that seem to work, such as the logistic equation.

I did not say that "nonlinear mathematics is more powerful than simple models that seem to work, such as the logistic equation". I essentially said that a simple model can be used to explain the logistic, without having to resort to a non-linear model such as the Verhulst equation. Your rephrasing is about 180 degrees wrong in interpretation from what I was trying to say. If you want to read about my derivation, go to a previous post I wrote for TOD, The Derivation of Logistic-Shaped Discovery.
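As a minimal numeric sketch of that derivation's punchline (parameters invented for illustration): a dispersive search with exponentially accelerating effort reproduces the logistic exactly, with no nonlinear Verhulst dynamics required.

    # Dispersive discovery with exponential search effort -> logistic curve.
    import numpy as np

    L, A, k = 2000.0, 1.0, 0.05     # asymptote (URR-like), effort scale, growth rate
    t = np.linspace(0.0, 300.0, 301)

    effort = A * np.exp(k * t)                  # exponentially accelerating search
    D = 1.0 / (1.0 / L + 1.0 / effort)          # dispersive-discovery cumulative

    # Algebraically, D(t) = L / (1 + (L/A) * exp(-k*t)): exactly the logistic.
    logistic = L / (1.0 + (L / A) * np.exp(-k * t))
    assert np.allclose(D, logistic)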

Using the term "crude complexity" appears to be a misuse, going from precise to crude language.

Just to be clear, that was Murray Gell-Mann's phrase, not mine. I kind of liked it because it had the word crude in it and it showed the contradiction that Gell-Mann was trying to convey -- that you can often use crude calculations to understand complexity.

Sorry to be such a picky curmudgeon but wading through theory about theory seems to be an unnecessary effort unless, to borrow from William James, the theory has a little "cash value,"--answers that matter, that are demonstrated to refine Hubbert's simplicity.

I hate to inform you, but Hubbert had no real formal theory. If you are after a refinement of what Hubbert had to say in terms of his empirical observations, then I don't understand your beef. As I said above, the Dispersive Discovery model will reproduce the Logistic curve. The Logistic curve was a heuristic to match the empirical observations, and Dispersive Discovery is a theory that justifies the observations.

We have a huge case of semantic disconnect here.

I think there's a fallacy that sticking to "knowledge" is a useful option. It seems impossible for a human mind to hold knowledge without eventually surrendering to the drive to frame it in at least some theory. Theory is far more ubiquitous than generally assumed. Our lives are utterly pervaded by it, from the theory that a chair will support us, and the theory that we have functioning hands, to the theory that there is something useful to be found at this website.

I have a very peculiar biographical relationship with theories. Up to age 28 I was of the view that theories were without value, as any old fool could produce any number of (rubbish) theories. (Well, certainly I could, as I have an overflowing creative capacity.) But then late one night I accidentally asked a question. Could it be that autism, high IQ and genius were linked by all being variations on a deficiency of innate prejudices? And having asked that question I was then compelled to explore it further. The conception developed into "general impairment of gene-expression manifesting as raised IQ (in moderation), autism (in excess) and genius (in a critical intermediate threshold range)". Oh heck, I had produced a theory (the first of a number)!

Trashy theories, lacking sound causal logic or with defective relationship to evidence, are indeed two a penny. But it turns out that decent scientific theories which actually do have sound reasoning and evidential support are on the contrary extremely rare. "Robin P Clarke is one of those rare souls..." - wrote Bernard Rimland no less.

In the absence of the sound theory, people draw false inferences from their "knowledge" such that it handicaps as much as it helps. But armed with my now superior understanding, I was able to know things that no one else did. Thanks to my theory I have long confidently known that autistics would be more rational, that body symmetry would correlate with IQ, that molecules binding to DNA and blocking transcription would cause autism. And all those predictions and more have now been confirmed by others. Similarly my Alzheimer's (AD) theory tells me that those who hope to cure AD or live forever are questing after the impossible, because the cause of dementia is the brain running out of capacity. You could only usefully enjoy a 300-year lifespan if you could expand your brain with additional neuro-modules (i.e. fat chance), or else do a personality/intellect "reformat" once in a while.

In the present case of energy decline, the "leaders" may indeed have the facts before them, but whether they have the understanding that this is not just some "contemporary" problem that will pass, is another matter.

Quite what was in the mind of the person who chopped down the last tree on Easter Island? And quite what was in the minds of the Australian senate this week when they voted not to have peak oil? Knowledge without the sound theory for the understanding thereof?

Billionaire politicians are the bosses and physicists merely obey their orders! We order more energy to be produced!

What was in the mind of the person who chopped down the last tree on Easter Island?

"Oh hear my prayers Oh Great Economy God whose Stone Head I worship daily by the shore. Accept this stimulus offering and renew our Island Nation to the Glory that the Founding Fathers foresaw for it. Surely we are now at the bottom of the U-shaped recovery and it is all uphill from now and forever more. Accept this, my stimulus offering, oh heavenly father. In the name of the stone, the rock and the pebble I now fell thee. Amen."

Web,

Great geek stuff. Nothing like a dose of non-linear dynamical systems to put a smile on my day.

The exposition, sure, it could use some more cartoons and handholds for the reader. Even my eyes glazed over at points, and I just love a nice juicy real world dose of applied mathematics.

I liked the (admittedly heuristic) description of how complex societies fail in "The Great Wave". Got that one from Stoneleigh. The author is a historian, and he talks about how the process has worked, or rather failed to keep working, the last few times. His conclusion is that in developed economies, societies don't actually bump up against Malthusian limits. What happens is that as the society approaches resource-constrained limits, the financial system always oscillates and then breaks first. What follows is that the society becomes increasingly unable to mobilize and coordinate to deal with challenges (a string of bad harvests, the plague, invaders, etc.). Population falls, not only due to catastrophic events, but because people perceive that times are not good, marry at older ages and have fewer children.

Thanks for the tip about Gell-Mann's book. I'll have to check that out.

By the way, I heard James Woolsey, former head of the CIA, speak recently. He was essentially talking about the implications of dwindling oil supplies for our place in world politics. It sounded good until he blew it at the end, where he proposed that used french fry grease, algae and electricity would solve our transportation fuel problems and our dependence on people who don't like us, and how we have to get right on that.

One thing I have never done is try to model how a society will fail. I know enough not to go there. I will egg other people on if they want to try it, but I would rather work with the simpler problems. The real complexity and Black Swan territory of that kind of failure model makes the stuff I work on look like bean-counting exercises.

World Energy, Population, Economic Trends

Oil
Timing - Crude oil production peaked in May 2005 and has shown no growth since then, despite a dramatic doubling in price and a surge in exploration activity.
Decline Rate - The US has been in decline since 1971. The giant Cantarell field in Mexico is losing production at rates approaching 20% per year.
Globally, the annual decline rate is around 5%.
The Net Export Problem – As higher Oil prices stimulate Oil Exporters' economies, they are using more Oil internally, thus reducing the amount of Oil available for Export.

Natural Gas
The supply situation with natural gas is very similar to that of oil. While oil and gas will both exhibit a production peak, the slope of the post-peak decline for gas will be significantly steeper. As with oil, we found and drilled the big ones first. The peak of world gas production may not occur until 2025, but two things are sure: we will have even less warning than we had for Peak Oil, and the subsequent decline rates may be shockingly high.

Coal
The ugly stepsister of fossil fuels. It has a terrible environmental reputation. Most coal today is used to generate electricity. Coal may also peak around 2025.

Nuclear
Given their usual lifespans, many reactors are nearing the end of their useful life. Given the likely level of decommissioning and of proposed new reactors to be built, it is likely that we have already seen the Peak of Nuclear power.

Hydro
If coal is the ugly stepsister, hydro is one of the fairy godmothers of the energy story; this form of energy production may be set to increase.

Renewable Energy
While I do not subscribe to the pessimistic notion that renewables will make little significant contribution, it's equally unrealistic to expect that they will achieve a dominant position in the energy marketplace. This is primarily because of their late start relative to the imminent decline of oil, gas and nuclear power, as well as their continued economic disadvantage relative to coal.

Fossil Fuels have been by far the most important contributors to the world's current energy mix, but all three are set for rapid declines, whilst Hydro and renewables are set to make respectable contributions.

In an overall context, this shortfall contains an ominous message for our future: energy shortages are coming!

The Effect of Energy Decline on Population
Human population growth has been enabled by the growth in our Cheap & Abundant energy supply.

The Historical and Current Situation - The world's population has increased by a similar amount in that time, from 200 million in 1 CE to 6.6 billion today. There is of course a great disparity in global energy consumption. The combined populations of China, India, Pakistan and Bangladesh (2.7 billion) today use an average of just 0.8 toe (Tonnes of Oil Equivalent) per person per year, compared to the global average of 1.7 and the American consumption of about 8.0.
Long-Term and Aggregate Effects - The net oil export crisis may well be the defining geopolitical event of the next decade.

The Population Model - It is likely that we will see things such as major regional food shortages, a spread of disease due to the loss of urban medical and sanitation services, and an increase in deaths due to exposure to heat and cold.

Effects of Ecological Damage
There are two ecological concepts that are the keys to understanding humanity's situation on our planet today. The first is Carrying Capacity, the second is Overshoot.

Carrying Capacity – The carrying capacity of an environment is established by the quantity of resources available to the population that inhabits it. The usual limiting resource is assumed to be the food supply.

Overshoot - Populations in serious overshoot always decline. This is seen in wine vats when the yeast cells die after consuming all the sugar from the grapes and bathing themselves in their own poisonous alcoholic wastes. Another example is the death of the oceans, where 90% of all large fish species are now at risk, and most fish species will be at risk within 40 years.

As our supply of energy (and especially that one-time gift of fossil fuels) begins to decline, this mask will be gradually peeled away to reveal the true extent of our ecological depredations. As we have to rely more and more on the unassisted bounty of nature, the consequences of our actions will begin to affect us all.
It is impossible to say with certainty how deep into overshoot humanity is at the moment. Some calculations point to an overshoot of 25%, others hint that it may be much greater than that.

Conclusion
All the research I have done for this paper has convinced me that the human race is now out of time. We are staring at hard limits on our activities and numbers, imposed by energy constraints and ecological damage. There is no time left to mitigate the situation, and no way to bargain or engineer our way out of it. It is what it is, and neither Mother Nature nor the Laws of Physics are open to negotiation.
We have come to this point so suddenly that most of us have not yet realized it. While it may take another twenty years for the full effects to sink in, the first impacts from oil depletion (the net oil export crisis) will be felt within five years. Given the size of our civilization and the extent to which we rely on energy in all its myriad forms, five years is far too short a time to accomplish any of the unraveling or re-engineering it would take to back away from the precipice. At this point we are committed to going over the edge into a major population reduction.

We need to start now to put systems, structures and attitudes in place that will help them cope with the difficulties, find happiness where it exists and thrive as best they can. We need to develop new ways of seeing the world, new ways of seeing each other, new values and ethics.
Link –
http://www.countercurrents.org/chefurka201109.htm
================
I have edited & abbreviated the above article to make it more readable, as it is quite long. That said, I recommend it be read fully & that the relevant graphs be examined, as it certainly provides some strong pointers for the future direction of Market segments, entire markets & beyond!

We often hear the phrase “Market Fundamentals”; you cannot get any more fundamental than the information contained in this article. But, of course, there are also numerous other factors which will impact the future, including the relatively recent massive increases in Private & Government DEBT and the relationship of the DEBT-to-GDP Ratio, as well as other very significant factors!

There are some issues in the article where I am not in full agreement, but I believe it is fairly close to one possible reality and all that goes with it.

There may be other possible Realities!

Globally, the annual decline rate is around 5%.

Increase production by 5% to make up for the decline. Wow, that was a hard one.

You will either think you understand the following post, or know for a fact that you don't.

Sorry for reading only the first thousand or so lines. Just a comment on the "cheap heuristic" notion I find relevant after doing so.

Unlike information, which is of both biological and mathematical essence, heuristics is a more or less mathematical concept that, whatever it may be applied to, seems alien to scientific descriptions of human operation, even at a psychological level.

I tend to stick to explanations of the psyche and neurobiology at the expense of others when I do need to understand something pertaining to that domain. And I think that fluency in the latter science helps.

The referred paper asks a good question.

Its conclusion kept me from reading it too :-)

I mainly brought these ideas in to the article to establish some continuity with what Nate had been discussing in previous TOD posts. I am thinking about the question from the Nancy Cartwright paper that you and I both refer to -- this one?

Which scientific account is right for which system in which circumstances?

The answer I think is fairly obvious. The correct one.

So, for example, you can either buy into my dispersion-based theoretical models and find them practical and apply them ... or not. That is all there is to it. At some point the meta-theorizing does get a bit tiresome and you have to go with something.

Right.

The interesting part is:

This is a difficult question: evidence that may support a scientific claim in one context may not support it in another.

Maybe the following assertion comes close?

Correctness, and more abstractly truth, are an art form.

Moreover, for a material explanation of the latter concept, neurobiology comes to the rescue.

At some point the meta-theorizing does get a bit tiresome and you have to go with something.

My criticism of the paper would be that while order may be required to set up a scientific account, it plays no exceptional role in its judging, and becomes a hindrance when contexts start to shift. Organization for instance seems to be a valid drop-in replacement.

In mathematics there's a huge gulf between their respective definitions. If I remember well, organization is a concept more or less within information theory while the definition of order is one of the foundations of set theory.

Edit: And the correct word would have been "words" instead of "lines" in the thread starter.

I guess I can't tell what you are trying to say in those few words, just as you probably can't interpret what I am trying to say if you have read only 1000 words of the post.

So I will say no more given the insufficient context.

Fits nicely with the block quote in the thread starter.

By now the grasp should be sufficient for a clarification attempt of my previous post though. (I learned that thermodynamical entropy is hot)

I was understanding from:

> I am thinking about the question from the Nancy Cartwright paper

that a development thereof had been asked for. My apology if I did misread that. If I didn't, sorry that a rigorous development ain't possible atm, only to add that the statement is somewhat related to the

Self-organization can always produce local order.

citation. Incidentally, I had considered putting the phrase "Topology theory might be a candidate for a place where both meet." at the end of the previous post. It didn't happen because I never managed to really get behind the topology math.

Next, a development on one of the Gell-Mann quotes:

In fact, however, a system of very many parts is always described in terms of only some of its variables,

Ok, some system variables could be redundant, but this pre-supposes the concept of transient systems. There's only one Wikipedia match for it, a funny one, but not very descriptive. "Open system" did it then to help out the other non-initiated folk. "Physical system" too.

and any order in those comparatively few variables

Ok.

tends to get dispersed, as time goes on, into other variables where it is no longer counted as order.

I start to resent that some get thrown out of physics courses when thermo comes up without being told what's in there, and would be grateful for a pointer available to a notation both ergonomic and, umm, systematic, able to represent the stuff. Also I was wondering whether such an endeavour could be achieved with the learning formulas used by some of the more abstract neural net formalisms?

Furthermore, neural net systems could well be used to counter-check dispersive discovery for oil depletion. Actually, I would speculate that's what the majors are using for their internal guesswork.

The idea of informational entropy states that all these blocks of wood wouldn't come in exactly the same size or occupy only one region. In other words, the population is dispersed, and one can use maximum entropy arguments to quantify this based on extensive attributes such as mean density, etc.

I couldn't follow the argument well enough to understand how the model organizes that attribute heap to derive the quantification.

That keeps me wondering whether neural net learning might even be integrated into the dispersive discovery models without creating a mess.

would be grateful for a pointer available to a notation both ergonomic and, umm, systematic, able to represent the stuff

... Ergonomic relates to human comfort. The probability math term Ergodic indicates that all states are generally accessible and visited. It is getting to the point that I can't discern the context of your requests at all. Do you want a mathematical notation within your comfort zone, or do you want to systematically make sure that all the states get visited?

I think we are really talking past one another. Sorry if it is just a natural language problem. If you prefer, just write down some equations or math. That is more of a universal language.

Human Travel Question

I have generally used the so-called Gravity Model to estimate travel density between city pairs. That is, the travel density is inversely proportional to the distance between them and proportional to the population of City A times the population of City B, with "dark matter" added for special cities (Las Vegas is a premier example).

Not enough time to read the linked papers, but how does the Gravity Model work with their empirical findings?

Thanks,

Alan

This is a great practical question.

The Gravity Model seems to establish a steady-state density of traffic but it doesn't say anything about the velocity of the traffic.
density(r) = population(X)*population(Y)/r

One thing nice about the Dispersive Transport model is that you can work out the velocity probabilities as a function of elapsed time or separated distance, so the cumulative looks like:
P(r,t) = beta/(beta + r/t)
and the partial derivatives look like:
dP/dt = beta*r/(beta*t+r)^2
dP/dr = -beta*t/(beta*t+r)^2

A two-dimensional plot of the dispersion at point separation plotted against time separation looks like the following (this has an extra knee in the profile that I explain here):

[Figure: two-dimensional dispersion plot of point separation vs. time separation]

So the way that you would use this with the Gravity Model is that you would look at the distance (r) between the two cities, and you can then look up the probability of a specific velocity that a person would travel at for a particular time (t) duration between the points (i.e. velocity and time are related for a fixed distance, and velocity and distance are related for a fixed time). This is very general, so with this approach you can generate averages, modes, medians, or higher-order moments from the distribution. Eventually the Dispersive Transport model together with the Gravity Model would provide a useful approach for policy planning. You can generate densities and density flows, and it would all have to work out consistently.
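As a rough sketch of how the two models could be glued together (the function names and numbers are mine, purely illustrative):

    # Gravity Model density plus the dispersive-transport cumulative above.
    def gravity_density(pop_a, pop_b, r):
        # Steady-state travel density between two cities a distance r apart.
        return pop_a * pop_b / r

    def dispersive_cumulative(r, t, beta):
        # P(r,t) = beta / (beta + r/t): read here as the probability that a
        # trip covering distance r is completed within elapsed time t, with
        # beta setting the scale of the dispersed velocities.
        return beta / (beta + r / t)

    # Example: cities of 1.0e6 and 5.0e5 people, 300 km apart, beta = 60 km/h.
    density = gravity_density(1.0e6, 5.0e5, 300.0)
    p_within_3h = dispersive_cumulative(300.0, 3.0, 60.0)   # ~0.375
    print(density, p_within_3h)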

I know that you do quite a bit of alternative transit proposals, and I would suggest that this might prove useful for attracting interest to your ideas. It never hurts to work with additional knowledge that the BAU crowd doesn't have.

This touches on several empirical observations.

1) Urban Rail is necessary for high modal share by rail for inter-city travel.

Example: Take the Red Line of DC Metro to Union Station, go to New York City's Penn Station, and transfer to the subways there. Far less "friction" than going by air or by car (through Philadelphia).

2) People will ride trains for 3 hours, but generally prefer air for trips much longer than that.

3) People will ride trains for trips of 200 miles or more, but generally prefer cars for trips shorter than that.

H'mm

Alan

The difficulty will always be to emulate the legs ordinarily dedicated to air. Those are the segments in the tail of the power-law curve.

I got lost halfway through WHT's post. Sparked some thoughts... Long...

Science is a moving, developing, dare I say evolving, object. All three of its components - that is, 1) its content, and its method, divided into two parts, i.e. 2) conceptual, abstract schemes (hypothesis testing, mathematics, argument, etc.) and 3) the material supports or tools (thermometer, computer, brain imaging, etc.) used in its pursuit - are all subject to change.

2) is often believed to be static at a certain point in time. That point in time is usually the present, as it is quite obvious that the ‘scientific method’ has itself changed over time, has a history. We are now under the sway of a hypothetical-deductive scheme which rests, on the logical end, on a certain type of abstraction - on propositional logic (if p then q, and so on) - and somewhat more shakily on parametric, interval-scale statistical procedures, which often furnish us with the ‘facts’ that can then be arranged into some kind of ‘logics’ or ‘deductive’ (also inductive, etc.) frame.

(This approach has proved acceptable for describing part of the physical world (19th, 20th cent.) but not so felicitous for understanding what is often called complexity, meaning simply a topic or state of affairs that is not understood, so-called ‘complex’ - too many variables interacting, which ones are pertinent, what in fact is one trying to explain? How to prove one’s intuition or hunch? How to convince others? Etc.)

The paths to true knowledge (true only as an approximation, a novelty, a better fit with human desires) don’t run smooth, and new ideas, or new paradigms, or scientific revolutions, or a new societal consensus, can arise from any of the three branches, as well as from others not listed here.

Society, by contrast, is run through power, influence, coercion, and cooperation, altruism, sharing; novelty spikes, glamour, propaganda, posturing, etc. Those seemingly inherent or individual, particular, local (exceptional events, people, etc.) characteristics are all, always, deployed in, and acted upon, groups that are to some degree soldered by a common culture and common, supra-ordinate aims, which may, at the same time, be subject to rifts, overt or covert, created by sub-groups who seek personal advantage, to be accrued in various forms (money, position, etc.). Science provides part of the feed-in - novelty and advancement (mainly technology) - habits of thought, conventional action and analysis, as well as justification (the math guru or the ‘quant’ says it is so!). Science is thus also a tool, a shaper of opinion, a guide, a recipe for action - thus a topic that people can legitimately have an opinion about.

Dissidence against “Science” was, in the recent past, 1800, 1850, or 1920, in our ‘world’ more or less confined to different ‘scientific schools’ (method, facts, fields, etc.); balking at societal changes (Luddites) or seemingly based on religion, explanatory or moral / ethical schemes that seek to fix morality, the proper relations of Man to God, others and the natural world, with some sort of super-ordinate scheme.

Boundaries have shifted recently, become blurred. (Science becoming politically oriented, or funded by corporations, Gvmts becoming more centralized and technocratic, etc.) So while the roots of ‘dissidence’ are perhaps the same, they have become amplified and are instrumentalized in the service of broader ideologies.

For example, a ‘core’ of US Christian nutters contests the validity (pertinence? interest?...) of evolution, global warming, and peak oil. It appears they do so in favor of a cluster of arguments that at the same time exalts humans as Supreme Beings and sees them as powerless - que sera, sera. In effect, they are supporting BAU and the present power structure in the US (which still favors whites, rural communities, etc.). Well-meaning ‘leftists’ (Democrats, socialists in the EU) support global warming, and believe that ‘alternatives’ and ‘technology’ will compensate for the depletion (always relative) of fossil fuels... as humans, in their view, through concerted action, can accomplish, well, just about anything! (Just some lame and obvious exs.)

Seen through the lens of a discussion of this kind, one can question whether Peak Oil belongs to Science, in several ways:

1) It is fact, and not a *theory*, though ‘scientific’ tools are used to establish it as fact. Not: a speculative explanatory scheme that leads to testable hypotheses (See the indeterminate and shifting characteristics of Science above)

Modeling it, ‘proving’ it is another matter, subject to ‘scientific’ debate on varied matters...

2) Peak Oil is a cultural meme which has gained some traction. It will wax and wane. Propaganda, Go!

3) Science’s remit is not to discuss or direct human behavior, only to prove ultimate truths. (Not my pov.)

Hoping this disjointed, too-broad, off-da-cuff post is of some interest or use.

Good thoughts. If oil depletion does not belong to science, it at least belongs to the world of applied mathematics. It is essentially a math-class word problem that, up to now, no one has thought important enough to solve. All the student needs is elementary probability with the hint of entropy as a premise.
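To make that word problem concrete, here is a minimal sketch in Python (my own illustration of the premise, not a transcription of any published model): if all we know about a positive quantity such as an extraction rate is its mean, the maximum-entropy, least-biased probability density for it is the exponential. Averaging an exponential decline over exponentially dispersed rates then gives a slow hyperbolic decline, 1/(1 + R*t), instead of the naive single-rate exponential.

    # Minimal sketch of the "entropy as a premise" word problem (illustrative only).
    # Max-entropy premise: knowing only the mean rate R, assume rates r follow
    # p(r) = exp(-r/R)/R.  Averaging exp(-r*t) over p(r) gives 1/(1 + R*t) in
    # closed form, a much fatter tail than the single-rate exp(-R*t).
    import numpy as np

    R = 1.0                        # assumed mean extraction rate (arbitrary units)
    t = np.linspace(0.0, 10.0, 6)  # a few points in time

    dispersed = 1.0 / (1.0 + R * t)   # rate-dispersed (max-entropy) decline
    single = np.exp(-R * t)           # naive single-rate decline

    for ti, d, s in zip(t, dispersed, single):
        print(f"t={ti:4.1f}  dispersed={d:.3f}  single-rate={s:.3f}")

The point of the exercise is how little input the premise demands: a single mean value, with entropy filling in the rest of the distribution.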

Yes, applied Math, I agree.

Applied math, however, is idiosyncratic, non-rigorous, and most often an outcome of cherry picking for particular uses, be it simple calculation algorithms, habits of book-keeping, ‘tallying’, ‘communicating’, heh, ‘taxing’, or even obligatory, advised, or conventionally sanctioned more complex procedures (equations). Math is at once a useful, pragmatic tool and a means of oppression, co-opted to serve the ruling elite... Swindlers put it to good use as well.

Not that it could be otherwise.

While a Platonic realm of pure mathematical objects may exist - if only in the minds of some, such as 90% of mathematicians - and that must be accepted as a general belief - on the ground you have credit card interest rates, and more impenetrable matters such as mathematical transliterations of ‘risk’ and ‘uncertainty’ in the banking world. These are based on peculiar hybrids of probability theory and physical-world metaphors, such as Brownian motion (? not sure) or others, constructed in the hushed, or harried, halls of corporate meetings.

Over take-out pizza or at formal lunch, with proper silverware and napkins, mathematical monstrosities are hammered out. :)
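As a toy illustration of the kind of physical-world metaphor being poked at here (all parameters invented; this is a sketch of the generic textbook recipe, not any bank's actual model): geometric Brownian motion, the random-walk construct underlying much of the standard risk and option-pricing machinery.

    # Toy sketch of a Brownian-motion "risk" recipe (illustrative parameters only).
    import numpy as np

    rng = np.random.default_rng(0)
    S0, mu, sigma = 100.0, 0.05, 0.2   # start price, drift, volatility (assumed)
    T, n = 1.0, 252                    # one year of daily steps
    dt = T / n

    # Geometric Brownian motion path:
    # S_{k+1} = S_k * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z_k)
    Z = rng.standard_normal(n)
    path = S0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z))
    print(f"final simulated price: {path[-1]:.2f}")

A dozen lines of a physics metaphor, and a number comes out that someone will treat as a measurement of ‘risk’.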

So I like how this little "Applied Math" technique I have come up with helps us understand previously misunderstood problems such as dispersive transport in amorphous semiconductors, reliability problems such as the bathtub curve, the spread in TCP network latencies, human transport velocity distributions, and all the oil depletion scenarios that face us. Many of these came with adjectives such as "anomalous" and "enigmatic", indicating that people have long been puzzled as to their derivation.
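As a hedged, self-contained check of what that dispersion looks like in the latency case (my own toy numbers, not the author's published fits): if individual delays are exponential but their rates are themselves exponentially dispersed, the observed survival probability follows 1/(1 + t/tau), a fat tail that reads as "anomalous" to anyone expecting a single exponential.

    # Toy Monte Carlo check of max-entropy rate dispersion producing fat-tailed
    # delays (illustrative only; tau and the sample size are assumptions).
    import numpy as np

    rng = np.random.default_rng(1)
    tau = 1.0                                               # assumed characteristic time
    rates = rng.exponential(scale=1.0 / tau, size=100_000)  # dispersed rates, mean 1/tau
    delays = rng.exponential(scale=1.0 / rates)             # each delay ~ Exp(rate)

    for t in (1, 5, 10, 50):
        empirical = (delays > t).mean()
        predicted = 1.0 / (1.0 + t / tau)
        print(f"P(delay > {t:2d}): simulated={empirical:.4f}  1/(1+t/tau)={predicted:.4f}")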

Quite a good application of cherry picking and serving the ruling elite I would say. Murray Gell-Mann would say it was just part of the process of sparking creative ideas.
http://video.google.com/videoplay?docid=1181750045682633998&ei=6D4IS7H9O...

WEB -- Sorry to show up late to the party...logging a well all weekend. Didn't even have time to read your entire post but will take it with me back to the rig tonight.

Cherry picking: the mother of all reservoir analysis screw-ups, IMHO. And so common that it's relatively accepted as SOP. Thus whenever I've tried to apply anything close to statistical reality, it's been rejected. My peers tend to regard me as the most pessimistic geologist they've worked with... even more pessimistic than an engineer (that's considered a real slap in the world of geologists). Perhaps the prevailing attitude is just a reflection of how difficult it is to generate a truly viable drilling project or reserve analysis. Imagine if your stock broker said he had a buy recommendation for you but it had only a 40% chance of being correct. I doubt he would have a very long client list (although the clients he did have might think he was the best thing to come along since sliced bread).

I'm sure I'll have a ton of questions/clarifications requests for you in a few days. Thanks again for your contribution.
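A quick expected-value aside on ROCKMAN's broker analogy (all numbers invented for illustration): a 40% success rate can still be a winning proposition when the payoff is asymmetric, which is why those few remaining clients might indeed love him.

    # Toy expected-value check on the 40%-broker analogy (invented numbers).
    p_success = 0.40   # assumed chance the recommendation pays off
    payoff = 5.0       # assumed return multiple when it does
    cost = 1.0         # normalized stake

    ev = p_success * payoff - cost
    print(f"expected value per unit risked: {ev:+.2f}")  # +1.00 with these numbers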

I've put Mr. Gell-Mann on my shamefully long reading list.

.."part of the process of sparking creative ideas. " I agree, and, really any new descriptive tool throws up a lot. There's probably not enough effort made along that line. (Your post and a few others on TOD are exceptions.) I was thinking about what I know of the history of mathematics (1) / use of mathematics outside of 'pure' mathematical constructs from Sumer on.

1. More properly, logical-cum-quantitative methods of representation of certain aspects of 'reality', as well as the accounting books used for exchanges.