The Gulf Deepwater Oil Spill - Was Complexity a Factor?

Kurt Cobb in a recent post raises what is to me a good point: Was complexity a major factor in the Deepwater Horizon blowout and oil spill? According to Cobb, the approach is simple:

It is a strategy as old as civilization. Assign each person to do a part of the entire job, and the job will get done faster and better as each member of the work team hones skills and learns tricks to improve his or her performance with each repetition of the task. It's called the division of labor, and as it spreads and intensifies, it leads to greater and greater complexity in society.

But this means that parts of responsibility are assigned to several different contractors, and parts of responsibility are assigned to company employees. And regulators play a role as well, approving many of the major steps in the process.

Within a company like BP, there is also a division of labor. One division of labor relates to different job functions. But there is also a division of labor related to level of experience. An oil company will have quite a few inexperienced employees, plus a relative handful of employees with 30 or more years of experience. To try to make up for this lack of experience, the more experienced employees are typically located in a central location, like Houston, and can be consulted in difficult situations. But this takes time, and the step may be skipped, especially if there seem to be many others involved who also have responsibility and seem to know what they are doing.

When there are many employees and contractors with partial responsibility, it is all too easy for things to slip through the cracks. Part of the problem is that not very many people know the complete story--it is just too complicated. Hopefully, each person knows enough about the situation to make the decisions he or she is supposed to make--but there can be slip-ups. And if there are a lot of different people who have somewhat shared responsibility, there can be the assumption that others will be looking out for problems, so a person doesn't have to be quite as vigilant.

And this division of labor seems to be an issue. Notice that Art Berman's post yesterday indicates that one of the issues was the well plan BP was working from:

What can be addressed now is the larger issue that a flawed, risky well plan for the MC 252 well was approved by the MMS, and BP, Anadarko and Mitsui management.

As long as everyone else seemed to be looking at what needed to be done, there might have been less concern about examining the plan closely, to make certain it was complete enough to detect all problems.

Also, today's Wall Street Journal has an article, "BP Tries to Shift Blame to Transocean":

The two BP executives read from a part of Transocean's Emergency Response Manual for the rig, emphasizing sections that stated that Transocean's offshore installation manager was "fully responsible" for activities onboard the rig, and BP's representative was there to "assist." "For obvious reasons," the manual said, "only one person can be in charge at any one time."

The manual also said it was the responsibility of Transocean's driller to shut in the well upon detecting an intrusion of oil or natural gas.

In response to BP's claims, Transocean provided The Wall Street Journal with a complete copy of the manual's well-control section. The document suggested the responsibility for decision-making was less clear-cut than what BP highlighted.

The well-control section stated that top managers on the rig for both BP and Transocean were supposed to jointly decide whether the situation was deteriorating to a point where they might lose control of the well. Moreover, while Transocean's top official was atop the chain of command, BP's senior representative was supposed to consult with shore-based management in Houston to "decide appropriate well control procedures" if rig crews had trouble handling a serious problem.

According to Cobb,

The broader question is how such a system of oil exploration became subject to such a catastrophic failure.

One answer is that offshore drilling, specifically deepwater drilling, is an exceedingly complex enterprise. And, the more complex an operation is, the greater the chances of a breakdown. Counterintuitively, the safer we try to make such operations, the more the operators of such rigs will likely push the limits of what those rigs are capable of doing and thereby invite additional disasters. (We already know that automobile drivers take more risks as cars and roadways are made safer, something known as the rebound effect.)
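Cobb's "more complexity, more chance of breakdown" intuition can be made concrete with a toy probability model (my sketch, not anything from Cobb's post): if an operation depends on n independent steps, each with a small chance of going wrong, the chance that at least one fails grows rapidly with n.

```python
def p_any_failure(n_steps, p_fail=0.001):
    """Probability that at least one of n_steps independent steps fails,
    each with probability p_fail. A toy model only: real failures on a
    rig are neither independent nor equally likely."""
    return 1 - (1 - p_fail) ** n_steps

# 10 steps: roughly a 1% chance something goes wrong
# 1,000 steps: roughly a 63% chance something goes wrong
```

The independence assumption understates the danger where failures cascade, and overstates it where redundancies catch errors; the point is only the direction of the trend as complexity grows.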

Cobb goes on to talk about Joseph Tainter's theory of collapse:

Joseph Tainter, author of The Collapse of Complex Societies, the seminal work on the fall of entire civilizations, explains that increases in complexity in a society are natural responses to challenges to survival. For a time, sometimes a long time, increased complexity succeeds in aiding the expansion and success of a society. The primary manifestations are the ever greater division of labor (often in the form of additional layers of managers, technical experts and government regulators) and the ever greater technical complexity of the methods and devices deployed.
. . .

But there comes a time, Tainter cautions, when the returns from additional complexity begin to diminish and ultimately turn negative--that is, additional complexity can result in a reduction of resources, safety, security and other measures of societal well-being. When he wrote his book in 1988, Tainter already believed that our global society was experiencing diminishing returns on additional complexity. Might we now be reaching the point where additional complexity brings negative returns?

This additional complexity can bring breakdowns, like the blowout and oil spill. The easiest fixes, like more regulation, are likely counterproductive, if the result is that decision-makers take more risks, assuming someone else is now protecting them from risk.

According to Tainter's theory, it takes additional energy to keep an increasingly complex system going. Let's think about how additional energy might be used in this case to make the system safer:

• Regulators are normally not very highly paid individuals. Pay scales could be raised, so that experienced workers from the oil industry could be hired as regulators.

• More experienced people can be hired in the oil field (not certain where they would come from, however) or additional training can be given to company employees, so that company employees know the jobs of independent contractors, and can act as a double check on them.

• Less experienced company people can be tied even more closely to experienced employees. (But even this doesn't help if the inexperienced person doesn't realize the possibility of a problem, and so doesn't check.)

• Company incentives can be changed to reward accident free operation, rather than speed of drilling and completing wells.

All of these changes would be more expensive--that is, they would use more energy, in one way or another, if only to pay employees more. But no one is likely to approve the higher-cost approaches needed to support an increasingly complex system. Instead, the lessons from the auto industry will be ignored, and new regulations adopted, which only seem to make the industry safer.

The oil industry isn't alone in its complexity. One can think of a lot of other complex industries. For example, the electrical industry is terribly complex, especially after deregulation, and the breaking apart of electric utilities into smaller competing parts. Many people assume that the electrical industry is one that people can fall back on, if oil supply is inadequate. Given at least equivalent complexity in the electrical industry, and the unwillingness of governments/regulators to throw lots of energy ($$) at offsetting the problems that arise with increasing complexity, it seems to me that the electrical industry is at least as likely as the oil industry to be reaching the limits of complexity. We just haven't been following the electrical industry as closely, so don't understand the situation as well. Perhaps I can run some updated versions of electrical posts that were run earlier, one of these days.

Cutting down a tree, chopping it up and drying the wood is pretty simple.
No chance of environmental disasters.

True: making things simple will not be sufficient to avert environmental disasters. It may, however, be a necessary condition. Intuitively, Kurt Cobb's thesis that complexity is a factor, perhaps even the most significant factor, is a strong one.

I worked in an office in IT for 20 years. When everyone understood what was happening and why and who the boss was, we might complain, but things would get done and we'd get paid. When no one quite knew what we were doing or why, or who was in charge, or when people would ask questions that no one knew the answers to, or the questions were brushed off, or no one bothered to ask questions in the first place, you knew you were headed for trouble. The KISS principle ("keep it simple, stupid") applies here. You don't want to make things any more complicated than you have to -- and if you have to, then you may be in a predicament.

Peak oil has created the impetus to take riskier and riskier actions. One of those risks is driving complexity to the limit, and one consequence of that is an increased chance of failures just like the BP oil spill.

I appreciate this analysis. Take a look at the make-up of a nuclear powered aircraft carrier. This has got to be an exceedingly "complex" system. Yet the average age of all the people on board is an absurd 19!

Keep in mind that the average age of sailors on board is 19. Yes, teenagers are the engine behind this 1000 foot aircraft carrier. It says a lot about the power of training, procedures, and accountability.

However, I would guess that teenagers' role in creating the carrier was probably quite limited. That required an army of miners, welders, etc. in their 20s, 30s, and 40s. And a few nuclear scientists, who probably did their best work in their 20s, but would still be quite competent for several more decades.

A co-worker of mine did a stint as a nuke engineer in the Navy back when he was young. He said his job was the most boring one on the ship -- it consisted of sitting in the control room and watching all these gauges which never moved at all. Operating a properly configured water-cooled reactor is incredibly simple. It just sits there and cranks out power, adjusting to load changes all by itself. Follow the rulebook in shutting it down or starting it up and nothing happens.

Cutting too many trees can of course cause environmental disaster by the removal of the forest. However the act of cutting a tree seldom kills 11 men. Usually one at the most.

I dunno.... Complexity is an issue - but it is an issue all over the industry. I don't know how much complexity in and of itself is a problem, provided that there is good communication among all players.

I think it is likely more about what Rockman mentioned re: the temperament of the people who end up out on the rigs, and communication. IME the guys that are really, really good at what they do aren't necessarily good at listening and having nice, calm discussions about things when under a lot of pressure.

Plus, I'm guessing that having big-wigs out on the ship wasn't helping anybody's ability to focus on the job at hand, especially given that the 'job at hand' should have been in 'safe' mode.

Demographics IS a problem, however. I have seen issues w/ newbies making poor decisions when nobody with more experience was there to help. However, we don't really know if that was a problem in this particular case. For all any of us know, BP had a team on the well who all were 20+ year guys.

"Plus, I'm guessing that having big-wigs [out on the ship wasn't] *ISN'T* helping anybody's ability to focus on the job at hand, especially given that the 'job at hand' should have been in 'safe' mode."

Actually that comment is an accurate description of the current attempt to remediate the situation at hand. The engineers and techs dealing with this mess can hardly be insulated from the s*** rolling downhill. I was struck by the fact that our live feed is almost always inaccessible while the BP cam is and has been up and running close to 100%.

There needs to be a networked real-time feed of tech data (from all involved disciplines) at one accessible site. No comments, raw data feed only, and let the public figure it out or ask TOD for interpretation. And yes, this could be done.

I heard on one of the press conferences that they are working on getting more live data. My impression was that it might be posted on a BP site.

This is really a stretch. It's typical when something like this happens which fills the news, everyone tries to piggyback their pet theories on to it.

The reason the blowout happened was drilling into a high pressure oil formation. The first oil wells always resulted in gushers because the drilling system did not prevent it. All the technology and complexity added since has made drilling for oil relatively safe; in 99.99% of cases gushers/blowouts do not happen, and the incidence of such events is rare compared to the number of wells drilled. That would suggest the complexity introduced is actually highly effective.

And, the more complex an operation is, the greater the chances of a breakdown.

That statement is not just wrong, but very obviously wrong. The logic of the argument is seriously flawed.

Very well said Bob.

Cobb has written quite a meaningless analysis that adds very little value. Complexity really has nothing to do with what otherwise is a basic risk versus reward analysis. Why decide to treat complexity as the whipping boy when these are simple human decisions on a riskiness scale?

I will give a couple of counter-examples.

First, look at our nuclear defense systems. At some point the complexity of any one MAD scenario hinged on the fact that one individual could press a button and unleash the hounds. We could have created bombs that are very simple, but it ultimately hinged on human decision making.
Simplicity could have killed us.

Then you look at biological systems, which everyone agrees are very complex. Some mutation could similarly unleash the hounds and generate a disease that some population has little defense against. Yet that same complexity, in the sheer variety of lifeforms, could provide a defense mechanism that serves to prevent that disease from propagating. The internet kind of works that way.
Complexity could save us.

Since Gail is the actuarian, I have a straightforward question. How would complexity factor into an actuarial algorithm? If it were an obvious metric, we would likely use it when creating insurance policies. In that case, no one would insure the internet because of its complexity, yet we all know that many businesses don't even consider this as a risk/reward factor.

Some of this stuff is not really worth dwelling over.

True. However, we are dealing with different life experiences and different kinds of systems when we talk about "complexity." The disputed statement:

And, the more complex an operation is, the greater the chances of a breakdown.

. . . would be regarded by anyone who has worked in an office for a long time as so obvious as not to require defense. We are talking about human systems, not mechanical systems. A mechanical system, such as a computer program, can be tested before it is implemented. It's much harder to do this with a human system. Kurt Cobb is talking about human systems, as is evident in the very first block quote from his article above.

Certain systems can't be tested before bringing them up. How could an offshore oil rig be tested before it is implemented?

Everything is a human system if it is designed by humans. Testers are human too, which means that even a computer program is a human system: it was designed by humans, and the testers can't be relied on to achieve complete test coverage. I get your point, but this is all lost in anecdotal information. And Cobb's points are OK but usually vapid.

I think denial is a real issue here. People read about complexity, but somehow can't connect it to their situation today. If the breakdown in job functions (and between company and contractors) doesn't cause complexity, what does?

By the way, I am an actuary not an "actuarian".

I have been saying that the actuarial models are wrong. They depend too much on assuming what happened in the past will continue. The future will be different--sharp turning points are missed.

The obvious result of increased complexity is that we are likely to eventually get to overshoot and collapse. Everyone is so busy forecasting increased growth from the past, that they completely miss collapse when energy (food, oil or whatever) is not sufficient to maintain the current system. When we are on the downslope of oil production, net energy from oil is clearly dropping, and this makes it harder and harder to maintain the current system.

Sorry. You can now call me an engineerian or an engineerialist.

(But why is not a librarian called a library?)


I am so stealing that.
I've just found religion.

According to Wikipedia by the 1980s that number had reached zero. Not being subject to the attofox problem, the number is final, I believe.

We are on the downslope of CONVENTIONAL oil production....not the downfall of western civilization. Might we get to the point where we tip the whole shooting match over? Yep! Is this blowout a symptom of that? Not IMHO.

What causes things to fail if not a breakdown in job function? IME it's more likely breakdown in communication. If all players take the time to listen and understand each other, things usually go OK.

If people:
a) don't listen - but impose their idea regardless


b) take silence as consent (or shut up instead of making sure that they are understood).


c) assume that the other parties know what they are talking about

then things are more likely to go to hell. Note BP's continuing comments that they 'assume' there was good 'collaborative discussion' going on on the rig as to whether to move forward with the riser displacement, vs. the 60 Minutes interview about 'chest bumping'. Makes me wonder how much OTHER good collaborative discussion was going on. Like when the mud logger ended up w/ rubber in his hands, or when it took multiple tries to get the well seal to hold, etc., etc., etc.

What causes things to fail if not a breakdown in job function? IME it's more likely breakdown in communication.

This is true, but deliberately cutting corners can also cause a system to fail. Companies are cutting corners all over the place to reduce cost. They have been for several years now. Sometimes, in hindsight, they cut things they maybe should not have cut.

Also true. Cut enough corners and you can shoot your foot off. But that isn't an issue inherent in the systems being complex.

I guess you could make an argument that because they are complex, they are expensive, and therefore prone to cost cutting in inappropriate places. But I wouldn't tend to go there. At some point, you are just in a complex system.

You shouldn't try to make it more complex than it needs to be - but it needs to be as complex as it has to be.

(I think I need to go have some coffee now or something...).

The problem with complexity is not that it is inherently unstable or that it can't exist, but rather that it becomes difficult if not impossible to determine the consequences of particular actions (or inaction). Complex life works because the actions (mutations) that don't work are weeded out by attrition -- over a long time. Right now, medical research is at the point where we can come up with a clever idea to modulate the expression of a particular gene only to discover that there are unforeseen (often fatal) consequences. Economies are like this too.

This doesn't mean that complexity was an important factor in this case. Industrial accidents happen all the time, usually due to some simple mistake or lack of attention. But to design a safe system or process, one has to factor in all of the possible situations that could arise, along with possible human actions. Anybody who has tried to code an internet application that works on all browsers has tasted complexity. But you can't solve complex problems without some measure of complexity in the solution. It's the Einstein quote "as simple as possible...but no simpler".

Complexity makes the task of risk vs. reward analysis more difficult. You know what the reward is, but you don't know all the risks because you can't foresee all the problems.

Then you look at biological systems which everyone argues as very complex.

I read somewhere that if you inflict random damage on a complex organism, it can be as much as 96% intact before it ceases to be viable and dies.

Scale that up a bit and I can easily imagine a failure in one of our critical systems, such as reliable electrical power, abundant supplies of liquid fuels, or a stable financial market, rendering vast amounts of our infrastructure useless.

Now that's what I would call sunk costs.


Here is my pet theory:

There is simply a lack of oil field professionals in the 45 to 55 age group today. In the years 1986 to 1991 there were few new hires, and the professionals in their young 20's were all laid off in '86 and went on to find other fields of interest. Also during this time all the old farts were given packages and retired. Many of the stories of what can happen because of this or that were never told. A good story will be burned into a guy's brain for future reference, as opposed to a classroom and whiteboard with PowerPoint. Today they are learning the hard way.

You may have a very good point there.
For my last project, up in NE BC, I needed well logs at a small scale, 1:1200, because I was evaluating several thousand feet of shale section in an area 60 miles X 60 miles.
I very specifically and very obviously (SHOUTING ON) DID NOT WANT (SHOUTING OFF) logs at a scale of "Print to File 8 1/2" X 11" paper".
The (DELETE EXPLETIVE) computer geek from the service company providing the paper copies of the logs (who I obviously will not name) had never heard of the concept of "constant scale" on logs or maps. He was very impressed with the idea. "So ... they'd all be the same on more than one X-section?"
And no, he said resignedly, I'm not exaggerating.
I hear the Chinese and the Egyptians have independently introduced the Next Big Thing to replace computers.
It's something called "paper". I like the concept but I personally prefer Mylar.

See, Harry, your problem is that you're seriously not With It :(

If the computer can print every sheet at a different scale, that demonstrates the software's manager- and purchasing-agent-friendly superior extensive sophisticated flexible advanced multipurpose up-to-date capabilities ;)

A computer geek who doesn't know from maps, I take it. (Probably should have had maps and scaling explained to him in small words.)

Harry, I feel your pain. Nearly every day.

The last space shuttle disaster was arguably generated by PowerPoint.

As a geoscientist, I find the most thoroughly objectionable feature of PowerPoint (other than its ubiquity) is its default behavior of scaling every figure to fit into a standard space, using independent X and Y scaling factors.

I wonder how many PowerPoint presentations were used in planning the MC252 well, and whether any of them led to the Deepwater Horizon blowout?

That statement is not just wrong, but very obviously wrong. The logic of the argument is seriously flawed.

I wouldn't be so quick to judge; there are two ways to look at it.

Think of a fishing net. Each line of the warp is knotted to every line of the weft that it crosses. Cut one knot and the adjacent knots immediately take up the slack. That system has an extremely high degree of resilience, mostly through the vast amount of redundancy embodied in the knots.

Now think of dominoes randomly standing on a chessboard. The fewer the number of dominoes, the greater the space between them, and knocking over any one domino will have little likelihood of knocking over other dominoes. Double the number of dominoes, double it again, and now you have an extremely brittle system where knocking over one domino has a very high likelihood of triggering a catastrophic cascade failure that knocks over all the dominoes.
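The domino picture can be sketched numerically (a toy Monte Carlo of my own, not something from the comment): scatter n dominoes at random on a board, topple one, and let any domino within "reach" of a fallen one fall too. At low density almost nothing happens; past a critical density one topple takes down nearly everything.

```python
import random

def cascade_fraction(n_dominoes, board=8.0, reach=1.0, trials=100):
    """Mean fraction of dominoes knocked over when one domino is
    toppled and falls propagate to any domino within `reach`."""
    total = 0.0
    for _ in range(trials):
        # scatter dominoes uniformly at random on a board x board square
        pts = [(random.uniform(0, board), random.uniform(0, board))
               for _ in range(n_dominoes)]
        fallen, frontier = {0}, [0]   # topple domino 0
        while frontier:
            xi, yi = pts[frontier.pop()]
            for j, (xj, yj) in enumerate(pts):
                if j not in fallen and (xi - xj) ** 2 + (yi - yj) ** 2 <= reach ** 2:
                    fallen.add(j)
                    frontier.append(j)
        total += len(fallen) / n_dominoes
    return total / trials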

The critical factor is the cost. Any fisherman can tell you that maintaining a net is very labor intensive; damage must be continually repaired with new lines and new knots, or the system will eventually degrade to the point that it fails completely. What would it take to make the dominoes more resilient? Certainly an added layer of complexity that would carry its own high costs.

Making systems more resilient comes with a high price. Gail is right. Anyone maintaining a complex system that is either unwilling or unable to pay the price of making the system more resilient will eventually experience the domino effect.

Complexity is not a "whipping boy". Everyone is totally missing the point that adding complexity to a system returns great benefits, at least at first, but eventually results in diminishing marginal returns. Once that inflection point is reached then any effort to "solve" problems with increased complexity only serves to exact a higher and higher cost relative to any benefit gained.

That question of cost is universally ignored. Every discussion I see about so-called "solutions" to our predicament, and I mean every single one, utterly fails to ask that one simple but absolutely critical question: "At what cost?". Talk of new energy sources in particular almost always completely fails to consider costs, especially in terms of energy and environment as well as economic (not to mention the equally important companion question "At what rate?").

Continual growth is the driver of collapse. Resources are ever more difficult and expensive to find and extract, and wastes are ever more difficult and expensive to mitigate. As demand for those services and the resulting costs to provide them continues to grow it will divert ever greater amounts of productive capital from the rest of society. Eventually we will hit the inflection point and experience the full effects of diminishing marginal returns on a global scale. This is the observation made by the much maligned Limits To Growth report.

A system is only as resilient as its weakest part. Also known as Liebig's law of the minimum. One could not imagine a more spectacular example of this principle at work than the Deepwater Horizon. Whatever part failed on that rig, be it downhole or on the BOP, its cost most certainly paled to insignificance compared to the enormous cost and complexity of the entire rig. And yet, the failure of that part destroyed the entire rig, taking down untold amounts of productive capital and embodied energy, and exacting an as yet untold amount of damage on the surrounding environment.

All for want of a little resilience.


"That question of cost is universally ignored. Every discussion I see about so-called "solutions" to our predicament, and I mean every single one, utterly fails to ask that one simple but absolutely critical question: "At what cost?". Talk of new energy sources in particular almost always completely fails to consider costs, especially in terms of energy and environment as well as economic (not to mention the equally important companion question "At what rate?")."

The elephantine cost that you, and everyone else, cannot bring yourselves to mention is the complexity pressure of overpopulation.

Mired in technological detail, you forget this is all really about governance, and as long as government denies the Elephant, we will have overpopulation-driven complexity to the point of failure.

Remember that the quantum unit of governance is the human, already a complex, and, to use Freud's term, polymorphously perverse unit indeed. Multiply the problem by just the sheer number of humans, and that's what puts you over the tipping point of societal collapse called war.

Humans are impossible to govern in these very large groups without destructive ego pressures (Civilization and Its Discontents). I think we're doomed, of course. So sue me.

There's always someone who comments that the issues aren't looked at broadly enough, and then balks at overpopulation.

If TheOilDrum cannot bring itself to consider overpopulation, it's just angels on pin-heads, isn't it?

Hi Bob, I think there is a function of complexity there. There is a greater need to go after oil in more difficult places. By its nature, there will be a learning curve for understanding the problems that might arise.

So, complexity is context driven, and as you said, early oil wells were more prone to blowouts. Systems in traditional settings have now evolved enough to manage the complexity.

Well, I think I've made that as clear as mud!

Seems obvious to me that if you have more cars on the road traveling at higher speeds, you have a more complex situation than when you have fewer cars traveling at slower speeds. And it seems apparent that when you have a more complex traffic situation, there are more accidents and the accidents cause more harm.

Seems obvious to me that new cars with computer controls are more likely to break down and harder to repair than older models. Seems obvious to me, as many have mentioned here, that old bicycles without all the fancy gears are less likely to break down than new ones.

Amazing how different obvious can be.

In fact I think that greater complexity (especially when failure might be catastrophic) is recognized to have more risk, and therefore controls and procedures are put in place to try to prevent that failure. More controls are put in place, for instance, for nuclear bombs than for handguns. More controls are in place for vehicles, especially public buses, than for bicycles. But controls and duplicate controls are costly, and when poorly supervised by those who are charged with the public welfare, they are often skipped.

Flying is safer than driving per mile of travel, but pilots are better trained than auto drivers; mandatory inspection of planes is in place, but few states require any mandatory inspection of autos; traffic controllers monitor planes to avoid crashes, while for most car travel it is up to the individual driver. Basically, isn't this an acknowledgement that planes, being more complex, need more controls to keep them safe?

Flying is safer than driving per mile of travel, but pilots are better trained than auto drivers; mandatory inspection of planes is in place, but few states require any mandatory inspection of autos; traffic controllers monitor planes to avoid crashes, while for most car travel it is up to the individual driver. Basically, isn't this an acknowledgement that planes, being more complex, need more controls to keep them safe?

The modern world of aviation is a good parallel for deepwater drilling and the potential for accidents. After every aviation crash there is an accident investigation team that is assigned to discover what led to the accident. In modern aviation, systems from airplane mechanical inspections and design to airmen certifications and flight procedures are woven with redundancies so that if an operator error occurs in one area, there are back-ups designed to compensate. In my experience with aircraft accidents there is seldom (if ever) one single factor that resulted in an aircraft accident. It is always a combination of mistakes or shortfalls which leads to the tragedy.

After the accident investigation team publishes their findings there are recommendations made for procedures (increasing complexity) to solve the problem in the future. Case in point:

Aeroméxico Flight 498, registration XA-JED, was a Douglas DC-9-32 en route from Mexico City, Mexico to Los Angeles International Airport, Los Angeles, California, United States (with stops in Guadalajara, Loreto, and Tijuana) on August 31, 1986. N4891F was a privately-operated Piper PA-28-181 Archer owned by the Kramer family en route from Torrance to Big Bear City, California. The two aircraft collided in mid-air over Cerritos, California, killing all 67 aboard both aircraft and 15 people on the ground. In addition, 8 persons on the ground sustained minor injuries from the crash.

What caused the crash, and who was at fault? According to published procedures, no party was at fault. The PA-28 (Piper Cherokee, a single-engine private aircraft) was flying VFR (visual flight rules) northeast at approximately 7,000 ft., crossing above the Los Angeles TCA (Terminal Control Area) with no Mode C (altitude-encoding) equipment and no two-way radio communication established. If Mode C equipment had been aboard, it would have given the air controller an opportunity to issue traffic information to the DC-9 jet entering the Los Angeles TCA on an IFR (Instrument Flight Rules) clearance from the southeast. With over 10 miles of visibility at altitude, the two aircraft fatally intercepted.

What was done as a result: the FAA changed the rules for flying VFR aircraft within 25 miles of a Stage III TCA. The ceiling of the TCA climbed from an altitude of 7,000 ft. to 12,500 ft.: "All aircraft flying within 25 miles of a Stage III TCA shall be equipped with Mode C altitude-encoding equipment."

The cost to private aviation was enormous. Altitude-encoding equipment and other on-board equipment put a lot of weekend flyers out of business in Southern CA. However, the needs of the public outweighed those of private individuals.

In the case of the Horizon disaster, they will design systems and procedures to prevent another similar disaster from occurring. I think the days of cowboy wildcatters (yes, I know that's not fair to all of the professionals in the industry) running deepwater oil rigs are at an end. It will be a lot more expensive in the future to drill deepwater wells, but the threat to the commons far outweighs any financial burden placed on the industry, as well as on consumers like you and me having to pay higher fuel costs.


You say it's obviously wrong yet fail to say why. Always a telling point.

The more complex something is, the more interrelated components it has. It is trivially obvious that the more components something has, the more things can break. Also, the more connections in something, the more that can fail in one go if one part breaks.
It's also more likely for an underlying problem to be missed, as other parts of a complex system can take its place, until they fail and the whole thing collapses in one go.

Trying to kill bacteria is harder than killing a human. Cars are less reliable than bicycles, and if one breaks when driving at a decent speed, you are seriously screwed. Two examples off the top of my head.

Well, here we are. Almost Star Trek. You wanna know a secret? God's masturbation is US! Failing, but keep ON failing!

If you crunched the numbers, I bet you would find the petroleum industry is generally safe and these events are very rare. The problem is that when failures do occur, they can cause regional or global problems. The suggestion that complexity is the root cause is not borne out by the facts: all, or at least some, of those other wells were just as complex, and those other wells did not release petroleum into the environment. Apparently, abnormal actions by nature and personnel combined to cause this disaster. It is the actions of both that must be reviewed. At the end of the day, it was a failure of human imagination. Key people must not have imagined this could occur.

Yes ... I've heard numerous industry analysts and experts describe 40 years of drilling, 30,000 wells, and "nothing of this sort" taking place in the Gulf (see the recent John Hofmeister interview on Charlie Rose). Well, this isn't even close to correct. Mexico is also drilling in the Gulf (the Ixtoc I blowout was the second largest in history and lasted nine months), and MMS and NOAA both have extensive databases listing blowouts, accident investigations, and incidents in shipping and other material releases or losses of life that are chronic and routine in the industry. I see no drop-off in incidents, or any indication that the industry is getting safer over time:

I count 26 incidents on the NOAA archive page for the first five months of 2010 alone!

First – thanks for the link, it’s in my bookmarks, a very interesting set of data.

In the interest of using factual information, rather than implying this all associated with offshore oil, we should review the 26 incidents.

5 of the 26 incidents had something to do with the Gulf of Mexico; 2 were definitely associated with the offshore oil industry:

The Deepwater Horizon incident
A large refueling spill (2,500 gal) between two offshore work boats.

There were two “sheens” of unknown origin spotted in areas used by the offshore industry. One of them was reported to be a 1/4 mile long sheen with an estimated spill of 1/2 CUP of unknown oil. The other had no information as to source or size.

There was a pipeline in a Louisiana swamp wildlife refuge that was presumably hit by a boat and spilled an unknown amount of what appears in the photo to be crude oil. As normally no offshore work boats are allowed in a refuge, it was probably a sport fishing boat.

Of the 26 incidents, 8 did not involve any spill; 8 were various fishing boats that sank, went aground, or caught fire on both coasts of the US; and 7 were shipping incidents not related to the oilfield.

1 was a drill in Maine for volunteer training, not an actual incident.

1 was a report of tarballs on a beach in Guam.

1 was a helicopter crash in Canada that might have allowed fuel to reach the US.

1 was overfilling a storage tank in Alaska.

1 was a frozen valve on an onshore oil site in Louisiana (see, I told you there was no global warming – and this proves it) that spilled about 8 bbls.

1 was a broken gauge on a pipeline in Florida that released an unknown quantity to the atmosphere.

Apparently, the people in charge of investigating this disaster have not read the USCG investigation of the Java Sea disaster. More than one person was in charge on the Java Sea, and consequently no decisions were made. The crew started to rely more and more on the Houston office for advice, but Houston was not on site and could not possibly see the situation. Later, the ship capsized with the superintendent still talking to Houston.


BP and Governors of LA plus FLA were provided with the INSTANT 8 HOUR SOLUTION for sealing the runaway well using a FOOLPROOF 1991 PATENT from a Mr. Branko R. Babic in Oxford, United Kingdom. Total time making the Counter Pressure Plug perhaps 1 hour in a welding shop provided you give them a 6 foot section of 18 inch OD thick wall steel pipe with a special beveled FLANGE holding two or three 8 inch sections of ANNULAR type rubber and welding the INSERTION side beveled flange while allowing compression with the TOP flange which would allow full flow of oil, gas, water out into the sea past the shutoff valve and possibly adding a flange so BP could hook up and draw contents to the surface AFTER UNIT WAS WELDED IN PLACE.



Thanks for allowing me to come aboard.
Ralph Charles Whitley, Sr. CFC032631 Former Professional Welder and Former Shipyard Worker !
052210 @ 10:08 AM Eastern

Hi Ralph,

I seem to remember that welding at 5,000 feet under sea level is not possible. Pressures are just too high. Someone correct me if I'm wrong.

Simple. Welding at that depth is not possible...but WELDING is.

WELDING underwater!!?
- On a system rated for 15,000 psi!!?
- At 2,200 psi ambient pressure!!?
- Remotely via ROV!!?

I'd like to meet a welder who thinks his/her certifications and/or experience are up to those specifications. I don't THINK I'd like to spend much TIME with 'em, but there would be a certain curiosity in MEETING 'em!

Perhaps more relevantly, I'd be interested in examining and pressure-testing a weld made by this individual under such circumstances.

Sorry. Thanks for the idea, but don't be surprised if nobody takes you up on it.

I thought that you can do tig (GTAW) welding underwater.

All that arc welding is is an arc plasma whose heat melts the metals, fusing them together.

I do tig welding as a hobby to make jewelry and I can see the simplicity of it.

What difference would pressure make? As long as you have an arc, you can melt the metal and fuse it together.

Or is it because water is more conductive to electricity at higher pressures than at the surface?

The welder him/herself would not have to be down there. The ROVs can hold the torch, and the welder should be able to use the video link to see what they are doing.

I can see problems with doing gas welding because of the pressures, but an electric arc should be no problem.

Or am I missing something glaringly obvious?


Mark Allyn

Welding technology research for underwater use has been carried out since the 1930s, starting with the Navy (wet welding) and progressing into the offshore oilfield. Some of the main researchers were Taylor Diving (merged into Brown and Root) and COMEX, along with several others pursuing dry hyperbaric welding, and Chicago Bridge and Iron (CBI) doing wet welding. I was personally involved heavily in the CBI program for most of my 10 years with that company.

Welding research pretty much stopped at about 1,000 feet as that is also the practical limit for divers.

There are significant effects of pressure on almost all aspects of welding. The arc tends to constrict and is often much more susceptible to magnetism (arc blow). The metallurgy and mechanical properties of the weld are affected by the density of the background and cover gases and cooling rates are greatly increased. There are other problems including welder/diver safety that together make underwater welding substantially more difficult as the pressure increases.

Dry welding is practical down to about 1,000 feet, and TIG welding has played a major role in the success of very deep welding qualifications and jobs.

Qualified wet welding to AWS standards is pretty much limited to depths of less than 150 feet. The deepest wet weld I know of was at about 680 feet to weld up a leak in a dry habitat so it could hold the helium background gas to do a dry weld. And it was not pretty.

I seriously doubt an ROV could hold a welding torch steady enough to weld. They are large pieces of machinery hovering in place with their thrusters and subject to any stray current. Just watch a video of them trying to insert a hydraulic plug. But they are certainly capable of landing an automatic welding machine if there was one capable of welding at 5,000 feet. Unfortunately such a machine does not exist and a R&D effort to make one would probably take several years.

I went to the website cited in the original comment that started this thread to see if there might be anything helpful. I couldn't find any additional documentation or sketches beyond what was posted on TOD, but he has fired off a lot of letters to various politicians, the media, etc., many filled with various accusations of criminal negligence and even murder.

He is also working on a couple other documents:

Sworn Notarized Affidavit of Felony Murder/Grand Larceny/Perjury of Barack Hussein Obama and others.

Dangers of Sonic Boom damage to residences and concrete even underwater from SHUTTLE RETURNING TO ATMOSPHERE AND MILITARY JET GOING SONIC WITH ASSOCIATED BOOMS.

Never ascribe to complexity that which can be adequately explained by incompetence. I'm paraphrasing from a comment today in another TOD node which said "Never ascribe to conspiracy . . . "

A super essay. Read it.

Not a very good essay link IMO, except for this point

A counterpoint to a lot of the invective too. It is very unlikely that there is any easy way to lay blame. Humans like to lay blame.

The only actual difference between Challenger and Columbia is in trying to deduce the cause from the forensic evidence. Contrary to what everyone says, no one actually knew what caused the Challenger to fail; it is all still conjecture. Blame was laid on the O-rings when it was just as likely that torque on the launch pad caused the failure. That was the judgment at the time, and this ambiguity has since been lost, because something has to be the root cause to meet the requirements for empathetic human closure.

Columbia, on the other hand, had an external camera pointed at the wing, and everyone could see the damage on the surface. If you believe that a piece of "foam" caused the damage, that is one thing. Yet once that damage occurred, it was almost a 100% certainty that something was going to burn up on reentry.

Complexity was the same on both space shuttles. Or you could say Columbia was more complex because it had several more years of technology to build upon.

So the reason I think that is a bad essay is that I don't get why this disaster is closer to Columbia than Challenger. The use of anecdotes taken several steps too far.

If one believes in Bayesian probability analysis, we now have data points for future risk analysis. The probability of failure for future Space Shuttle flights is essentially 2 out of the number of launches so far. It is no longer 1 in 100,000, because Bayes says otherwise.

Same thing for future oil rig catastrophes. The probability of future risk is essentially:
P(Catastrophic Failure) = (Number of catastrophic failures observed)/(Total number of rigs constructed)

... give or take. I still don't understand why Gail does not talk about risk from this perspective. She is the actuary. Is it because it is too dry a topic and would bore readers?
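The counting argument above can be sketched in a few lines. The shuttle-style numbers below are illustrative (roughly 2 losses in about 130 flights at the time), and the Laplace "rule of succession" variant is one standard way to avoid claiming zero risk for something that has never failed:

```python
# Naive frequency estimate of catastrophic failure risk, as the comment
# suggests: P = failures / trials.  The Laplace rule-of-succession
# variant, (failures + 1) / (trials + 2), stays sensible when there are
# few or no trials, where the naive ratio is zero or undefined.

def naive_risk(failures, trials):
    return failures / trials

def laplace_risk(failures, trials):
    # Bayesian estimate assuming a uniform prior on the failure rate
    return (failures + 1) / (trials + 2)

if __name__ == "__main__":
    print(round(naive_risk(2, 130), 4))    # 0.0154
    print(round(laplace_risk(2, 130), 4))  # 0.0227
    # Before any flights at all, Laplace still gives a 50/50 prior:
    print(laplace_risk(0, 0))              # 0.5
```

Both estimates are orders of magnitude above the 1-in-100,000 figure the comment mentions, which is the commenter's point.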

What is the probability of failure for something that hasn't been tried before? Tried once (either successful or not)? At what point (how many samples) do we make decisions based on what the statistics (Bayesian or otherwise) say?

Statistics is good if you want to decide whether it makes sense to keep doing something again and again, based on the risk of failure vs. the reward. But a single number, the probability of failure, doesn't tell you why something failed, and it tells you even less for small sample sizes. You can look for commonalities, and then analyze those statistically for an explanation. So you make a change based on what you find. What is the probability of failure then? Hopefully less, but maybe the change had an unforeseen consequence (complexity again). So does the prior information have any remaining relevance, or are you back to square one?

(Be gentle, as I have just started reading Jaynes' book.)
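The questions above have a textbook Bayesian answer worth sketching: keep a Beta distribution over the failure rate and update it with each trial. This is a minimal sketch with illustrative numbers only; the half-weight discounting at the end is just one plausible way to handle "does the prior information have any remaining relevance after a design change":

```python
# Beta-Binomial updating: start from a Beta(a, b) prior over the failure
# rate; after f failures in n trials the posterior is Beta(a+f, b+n-f).

def posterior(a, b, failures, trials):
    return a + failures, b + (trials - failures)

def mean(a, b):
    # Posterior mean estimate of the failure rate
    return a / (a + b)

if __name__ == "__main__":
    a, b = 1, 1                # uniform prior: never tried before
    print(mean(a, b))          # 0.5 -- maximal uncertainty
    a, b = posterior(a, b, failures=0, trials=10)
    print(round(mean(a, b), 3))   # 0.083 after ten clean trials
    # After a design change, one option is to discount the old evidence
    # rather than discard it, e.g. keep half its weight:
    a, b = 1 + (a - 1) / 2, 1 + (b - 1) / 2
    print(round(mean(a, b), 3))   # 0.143 -- partway back to square one
```

The point the formalism makes explicit is that with small samples the prior dominates, and how much old data survives a change is itself a modeling judgment, not something the statistics decide for you.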

RE statistics (as in lies, damned lies and ...)

I get tired of hearing someone tell me that "flying is safer than driving." Or, more likely and even more false, "Your chances of dying in a plane crash are less than your chances of dying in a car wreck."


In simplest terms, the statistical odds assigned to a group apply to that group and only that group. The odds do not apply to individuals in the group. The odds of crashing and dying while driving my car are not realistically calculable.

I'm sure WHT can explain the details to anyone who doubts this.

Obviously, the chance of success of an outcome for a particular operation, based on historical information, is suspect for determining the risk of a catastrophic event such as we have had. But it is safe to say that using probability distributions on the factors that go into drilling and containing a blowout can certainly show that a disaster like the one we have seen has a very low chance of occurring. Even with a high probability that somebody might make an error in a step, the overall odds of disaster are low. Even with a low expected value of loss or negative outcome, the worst case (or reasonable worst case) is planned for, which is why there is a regional (practiced) spill plan. Too bad the quants, who put the chance of a nationwide housing crash too low due to limited historical samples, did not prepare a response for the catastrophic case (or did they?). I do think that if drilling were too risky, Anadarko, Mitsui and Transocean would not be able to get insurance.

A story: despite the jokes about engineers not being able to think in probability distributions compared to explorers, I remember a very senior exec (a geologist) who questioned the whole use of risked chance of success throughout the company: "I don't know how you can do all this stuff. These wells are all 50%; they will find oil or they will not." Uncertainty and risk, ain't they fun?

Now that the disaster has occurred, the updated probabilities have increased in value, both in terms of the real math and in terms of people's expectations of it potentially happening again.

The general consensus on Wall Street is that the probability of the 2008 crash happening again has also increased. That's the way Bayesian inferencing works in practical terms. Nothing really mysterious as it is partially guided by human intuition.

The bookish answer is that you need at least 30 samples before a normal approximation holds; that is when you can start trusting the statistics.

WebHubbleTelescope -

I didn't particularly care for the essay, either. This fellow Cobb seems to fall into the same trap that many academically inclined people who post articles here do ..... and that is a tendency to try to force-fit the facts of an incident or problem to conform with one's pet theory or area of research.

I also think it is a fundamentally flawed conjecture that increased complexity automatically results in increased risk of injury, loss of life, or environmental damage. To see how this is flawed, all one has to do is look at what industry and transportation were like in, say, 1900 and compare that to the industry and transportation of today. It is an irrefutable fact that it was far more dangerous being involved in industry and transportation back then as compared to now, whichever way one wants to measure it.

But first let's go back even further. A square-rigged sailing ship is less complex by many orders of magnitude than a modern container ship. Yet being a sailor was a very dangerous occupation back then. Not only did one have to worry about falling to one's death from up in the rigging or getting washed overboard in a storm, but also about succumbing to various diseases of dietary deficiency, such as scurvy.

Moving ahead a bit, in the early days of rail transportation, boilers used to explode right and left, often killing and maiming passengers and crew. Ditto for early steamships. Yet they were quite simple relative to a modern diesel locomotive or jet liner.

How about the dark satanic mills of Dickens' time? Or coal mines in the 19th century? Or even oil drilling during the early wildcatting days?

Factories and chemical plants of the early 20th Century had minimal health and safety protection, and it was quite common for workers to be killed or maimed on a routine basis, not to mention developing long-term chronic health problems from excessive exposure to all sorts of hazardous chemicals. Yet, these operations were far less complex than a modern petrochemical complex, for instance. The environmental impact of those old plants was also quite atrocious.

Even farming in that period was more dangerous than it is now. Not that long ago it wasn't unusual to see old farmers missing at least one finger, or even much worse permanent injuries.

I think what we have here, though, is a situation where some of these operations are so large that even if they are relatively safe, when things do turn to shite, the consequences are quite profound.

Take the following hypothetical scenario: a 10-megaton nuclear device has a 16-digit trigger code. The device is placed smack in the middle of midtown Manhattan. The trigger code is connected to an electronic random number generator that generates a new 16-digit random number once every second. If a random number comes up and corresponds to the code, then goodbye New York City. Now, on the face of it, this is a very safe situation, as the probability of the thing going off in your lifetime is next to nil. However, the consequences of that remote event happening are hardly trivial to say the least. So, the question becomes: would you still want your family to live in NYC under those conditions?
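The thought experiment above is easy to quantify, taking the ideal random number generator at face value: a 16-digit code means a 1-in-10^16 chance per second, and an 80-year "lifetime" is a few billion seconds. (The naive expression 1 - (1 - p)^n loses precision in floating point for p this small, so the sketch uses `log1p`/`expm1` instead.)

```python
import math

def p_at_least_one(p_single, trials):
    # 1 - (1 - p)**n, computed stably for tiny p via log1p/expm1
    return -math.expm1(trials * math.log1p(-p_single))

if __name__ == "__main__":
    p = 1 / 10**16                  # one 16-digit code out of 10**16
    seconds_in_80_years = 80 * 365.25 * 24 * 3600
    risk = p_at_least_one(p, seconds_in_80_years)
    print(f"{risk:.2e}")            # about 2.5e-07, roughly 1 in 4 million
```

So the per-lifetime risk really is "next to nil" by any everyday standard, which is exactly why the scenario works as a probe of risk intuition rather than of arithmetic.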

So, the question becomes: would you still want your family to live in NYC under those conditions?

And what makes the question really interesting as a hypothetical is that people go ballistic over one-in-ten-million lifetime risks, and remain heedless of one-in-ten risks. And I still think that's often not really about risk at all, it's about cherry-picking risk numbers to argue for pre-existing political stances. So if you're a self-righteous prig arguing that people shouldn't eat sausages, because your glands work themselves into a lather over the cute pigs or whatever, then best to argue that someone who eats sausages will die of it, or is killing the planet (as if such a thing were within reach) rather than argue from your own capricious and arbitrary aesthetic preference.

" The trigger code is connected to an electronic random number generator that generates a new 16-digit random number once every second. "

I'm not so sure that I'd put a lot of faith in a truly random electronic number generator. Maybe they've improved since I last worked with and on them. Ten or twelve years ago I was working on a product that required high volumes of data, and we used random number generators (several different ones) to supply test data. After running several hundred tests we noticed that under certain conditions (lots of math transforms involved) the end results showed an abnormal pattern of occurrences detected by our test software (note: there should have been no pattern at all). Looking at the problem much deeper, we reached a dead end. The dead end? Cryptic messages from the generator that said, in essence, that further use of this generator in the mode we were attempting was prohibited by DOD security policies.

Since that time I've taken the output of random number generators with a grain of salt.
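For what it's worth, the kind of "abnormal pattern" check described above is usually formalized as a chi-square goodness-of-fit test against the counts a uniform source should produce. A minimal sketch, using fixed example counts rather than real generator output:

```python
# Chi-square goodness-of-fit statistic against a uniform expectation:
# sum over categories of (observed - expected)^2 / expected.
# Large values flag a pattern a uniform source should not produce.

def chi_square(observed):
    expected = sum(observed) / len(observed)  # uniform expectation
    return sum((o - expected) ** 2 / expected for o in observed)

if __name__ == "__main__":
    print(chi_square([10, 10, 10, 10]))  # 0.0 -- perfectly uniform counts
    print(chi_square([20, 0, 10, 10]))   # 20.0 -- far beyond the ~7.8
                                         # 5% critical value for 3 degrees
                                         # of freedom, so clearly biased
```

In practice one compares the statistic to the chi-square distribution with (categories - 1) degrees of freedom; a library routine such as SciPy's `chisquare` does the same computation with the p-value attached.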

JoshK -

I am well aware that it is extremely difficult, if not impossible, to create a truly random random number generator. (Which is an interesting problem in itself, because it speaks to a number of very deep mathematical and philosophical problems.)

However, I was speaking abstractly, or in other words presenting a sort of thought experiment that assumed an ideal perfect random number generator. A platonic ideal random number generator, if you will.

This whole thing raises the fundamental question: in this universe, is there such thing as total randomness, or are there always patterns present, no matter how subtle, in everything created or capable of being observed by man?

As an engineer, I find myself over my head in this sort of stuff, and I will leave it to other people to ponder.

Still, I wouldn't care to live in New York City under the circumstances described, regardless of how good the random number generator was or how remote the possibility of total annihilation was.

I guess the point I was trying to make was that people generally don't make life-or-death decisions based on what the actual probabilities are, but rather on what scenarios they feel comfortable with.

Gotta say, I think you, WebHubbleTelescope and Bob are wrong: It is down to complexity. Since this is coming from practical experience of large complexity projects (including submarines as referenced above), not academic theories, maybe you will consider it.

Firstly, no new system is built fresh. Each system is based on what came before, using the rules of thumb that were used before. To create the new system, either two old systems are joined, or an old system is patched or extended to meet new capability targets. Bringing in new technology to cut complexity is rare, and usually difficult, since it's usually used in the old, understood way for a generation at least.

Second, as the number of elements increases, the number of potential interactions grows exponentially. Single-point failures rapidly get weeded out, but two-or-more-point failures tend not to be. Obvious or common combinations are found, but not all, and not all modes either. From point 1, the fix tends to be a patch, an additional system added on top. Then point 2 comes into play again... round and round the loop.

Third, many systems/processes/etc. use people as machines, performing rote operations over and over again. That is particularly true of 'monitoring', where someone is supposed to check something that will be fine 99.999% of the time. This is like using a screwdriver to knock in a nail: humans are horribly bad at rote operations and boredom. Use them in this mode in a system and you deserve what you get. Humans are good at intelligent adaptation WHEN the system is simple enough for them to understand. Again, use them outside these bounds and it's your fault.

Fourth, a new system is there to do new things. When you patch something up to achieve this, you tend to focus on what's new, giving less attention to the old stuff. But new circumstances test the entire system in new ways, so it breaks in new ways.

Lastly, repetition of the usage of a system tends to increase the perception of its reliability as obvious problems get worked out. However, there is an entire sea of low-probability failures waiting to bite you, and all that has happened with a 'tested' system is that you have ensured you are now swimming in that sea, while you think it's a 'tested' solution. Mix that in with point 4 and guess what happens.

In practical reality, system designs have a lifetime. When a design is new it is simple, effective, and usually much cheaper than what it replaces. Over time it ages and gets slower, more expensive, and more 'trouble' as the environment around it changes and it gets 'patched'. Eventually a jump is made to a new approach, and the cycle begins again. This lifecycle is totally based in complexity and how systems evolve.

Where we are is a drilling paradigm which, at a system-of-systems level, has NEVER really jumped to a new approach, and which has accreted more and more patches to deal with new environments and objectives over the decades. Those are not only to deal with instances like drilling in deep water, but to 'create' safety by patching on checks and systems to prevent blowouts, leaks, etc. Without even looking at the complexity that postings such as Rockman's describe, you can predict that this is an old and creaking paradigm that will be stuffed full of complexity and patches, such that the system is never fully under control. It's run by rule of thumb, which effectively says "it hasn't blown up like this before".

When you step back and look at the entirety, you can see that its fitness for purpose is poor. Not only is it horrifically expensive (making much of the remaining reserves uneconomic to recover), but when the purpose includes "no leaks, ever", its basic setup, an approach that's essentially "stick a pin in a balloon, then put your finger over the hole", starts from a complex tree of failure modes.

The problem here is very much complexity: the complexity that comes from continually extending an approach into new domains and operating modes that weren't there at the start. You can't fix that by sticking another layer of patches on top; all that happens is you get a new sea of failure modes and a cost that means you drill no new wells anyway. Instead, the solution lies in new paradigms that simplify the process, cut the complexity by an order of magnitude, are cheap, and maintain pressure control of the reservoir and oil by default.

This is why I don't agree with Tainter: it IS possible to cut the complexity level whilst still keeping the benefits, but you have to be willing to make a generational leap in HOW it's done and organised. That's true in drilling, and in society.
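The combinatorial growth in the second point above is easy to put numbers on: with n components, the number of possible two-point failure combinations is n-choose-2 and three-point combinations n-choose-3, so the single-point failures you can exhaustively test become a vanishing fraction of the failure space. A minimal sketch:

```python
# How multi-point failure combinations scale with component count:
# pairs grow quadratically, triples cubically.
from math import comb

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, comb(n, 2), comb(n, 3))
    # n       pairs     triples
    # 10      45        120
    # 100     4950      161700
    # 1000    499500    166167000
```

A hundredfold increase in parts buys roughly a ten-thousandfold increase in two-point failure modes, which is why "we found the obvious combinations" gives so little assurance about the rest of the sea.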

Mills and coal mines of yesteryear were not more dangerous because they were less complex. They were more dangerous because they had less regulation. It took lots of strikes and lives lost to get even the most minor regulations in place.

Some farmers used to lose fingers; some didn't. Some were more careful than others. The more careful ones self-regulated. Unfortunately, in mills self-regulation wouldn't work, as the owner was always pushing you to work faster and in more dangerous conditions. Steep fines from OSHA have eliminated the worst of those conditions (unless you live in one of the countries where we offshore our work, say India, where the Bhopal accident happened; it is unlikely it would have happened here).

Put together complex operations WITH the ignoring or lobbying away of regulations and you have a Deepwater Horizon spill.

Incompetency at riding a bicycle is quite different from incompetency at piloting a space shuttle.

Increasing complexity requires increasing competency.

Come on. Deepwater drilling is difficult and technically challenging, but it's not particularly complex. Not space-shuttle complex, which is what's implied.

I also listened to one of yesterday's press conferences.

Among other things, a team of scientists has been put together to better determine the flow rate of the well. There is no specific date by which the team will produce its results, but a reasonable guess is some time next week. The team emphasized that their response is not limited by the 5,000 barrel a day estimate.

At this point, the oil skimming and burning operations are going well. An estimate was given that skimmers were at least temporarily getting 50% to 60% of the oil that came to the surface. Burning is going well too. The dispersants are reducing the amount that gets to the surface.

Another point was that most of the places where oil has gotten onshore are beaches. The approach for handling tar balls and other oil on beaches is to remove it using shovels and rakes.

There is one marsh where oil has gotten on shore. The approach there is different. There is an attempt to box it in so it doesn't spread further, and try to make use of tides to get rid of it. Each ecological area has its own plan for oil control, trying to avoid further damage.

50% to 60% of what? 5000 bpd? How about some actual numbers, possibly?

That is a major part of the problem. Nobody, including BP, has any real hard data on the volume of the leak, the amount of oil rising to the surface, the amount suspended in the water, the amount eaten by microbes and absorbed into the natural environment, the exact amount on beaches, the amount burned, etc., etc., etc. That said, I'm sure BP has a better range of data than anyone else, and I would be delighted if they made it public.

This is an exceedingly frustrating situation for engineers and scientists, who want data to the 6th or 16th decimal place and can't even get it to 1 decimal point. Not to mention the general public, who would just like to know what is going on. Or at least that portion of the general public who haven't already made up their minds and don't want to be confused with facts.

I wrote a post last night that was basically a response to the outlandish claims by the Purdue professor about the flow at the end of the riser being over 70,000 bpd (+/- 20%).

In that I found at least 6 errors he had made, all of which contribute to the uncertainty of the flow. And I am not a process engineer or fluid dynamics specialist. I'm sure people with those backgrounds could find many more parameters blocking access to a definitive answer.

Using the professor's particle-flow measurement as a gospel baseline, I can construct a defensible engineering estimate that the oil outflow from the riser is 8,000 bpd. I can construct a just-as-valid estimate that it is 25,000 bpd.
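The spread between those two defensible numbers is easy to reproduce: oil flow in barrels per day is velocity times cross-sectional area times the oil fraction of the stream, and every input has a wide range. The pipe diameter, velocities and oil fractions below are illustrative assumptions for the sketch, not measured values:

```python
# Illustrative only: the velocity, oil-fraction and pipe dimensions here
# are assumed ranges, not measurements.  The point is how fast the
# input uncertainties multiply into the final estimate.
import math

BBL_PER_CUFT = 0.1781      # barrels per cubic foot
SECONDS_PER_DAY = 86400

def oil_bpd(velocity_fps, inner_diameter_ft, oil_fraction):
    area = math.pi * (inner_diameter_ft / 2) ** 2   # pipe cross-section
    return velocity_fps * area * oil_fraction * SECONDS_PER_DAY * BBL_PER_CUFT

if __name__ == "__main__":
    d = 19.5 / 12                    # assumed ~19.5 in inner diameter
    low = oil_bpd(1.0, d, 0.25)      # slower flow, mostly gas/water
    high = oil_bpd(1.5, d, 0.50)     # faster flow, half oil
    print(round(low), round(high))   # roughly 8,000 vs 24,000 bpd
```

A 50% spread in velocity and a factor of two in oil fraction, both entirely plausible from video alone, already cover a threefold range in the answer, before any of the other six error sources mentioned above come into play.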

Shelburn: Thanks for your expansive posts on this and other strands. I do have a different take on BP's public approach, but it is just an opinion from my experiences in and around a lot of E/P companies ... that was a while ago.

As far as the flow, Admiral Allen said they specifically did not want to use special tools and more ROVs to spend time calculating a flow, and decided to go with the BP and surface measurements, always saying there was uncertainty. As you have so eloquently pointed out, it is crowded down there, with lots of ROVs doing lots of tasks that are time-consuming and important. It is, in my opinion, naive to think that BP did not clear their number through the Coast Guard, MMS, Chu, and others, and explain to them how they calculated it. (I still believe that if they thought it was way higher, they would not be going down the path they have chosen.) If the CG or others knew the number was not as good as any, it would be out and known. Those who accuse BP of wanting to low-ball the number are often the ones who would benefit politically and financially from a bigger number.

BP knew all along that there would be a more accurate estimate at some point. This whole thing will go on a long time. Because of the Unified Command structure, it has been way more transparent than I would have thought it would be. If they can find ways to stop it, whether it is 5,000 or 35,000, that will be good. Of course, think about it: they could say it is 100,000 BOPD, and then if they stop it they can brag that, for the first time ever, they stopped a massive flow with a new technique a mile under the ocean. Much more heroic than putting out a puny 5,000 BOPD well!

I think BP could easily take a measurement of the velocity of the flow, and I would not be the least bit surprised if they have done that already.

Current measurement equipment is part of an ROV's standard tool kit, and they seem to have dedicated a couple of the lesser ROVs (I will probably get hammered for that very catty remark) to monitor the leaks. They could easily take that measurement, or could have prior to inserting the RITT. It would probably be too risky to push an instrument into the flow now and take the chance of displacing the RITT.

BP, the MMS, and the USCG also have hours of high-definition video. After seeing Professor Wereley's slides, his method of measuring particle velocity is not all that sophisticated. BP could easily duplicate his efforts with much better subject matter, and they also have a better handle on all the other parameters and the engineering understanding to know what is critical to the calculation.

But with all that they still have a lot of unknown factors and I doubt they can get any better than 40% accuracy. For instance a range of 12,000 to 28,000 bpd. See my previous post for some of the problems.

A similar calculation could be done at the kink and that combined with the recovery from the RITT would give a reasonable range for the flow.

Even though you cannot get accurate flow rates from any available information, you should be able to get much more accurate information on the rate of change of the flow. That would give them daily or hourly information as to how quickly the flow is increasing (or decreasing) as a percentage of the original flow measurement.
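A minimal sketch of that idea, assuming the opening geometry stays constant so the unknown area and oil-fraction terms cancel out of the ratio (the velocity readings below are invented, not real measurements):

```python
# Sketch: even with unknown absolute flow, repeated velocity measurements
# through the same (unchanged) opening give the *relative* change in flow,
# since the unknown area and oil-fraction factors cancel in the ratio.

def relative_change(velocities):
    """Percent change of each measurement relative to the first."""
    baseline = velocities[0]
    return [100.0 * (v - baseline) / baseline for v in velocities]

# Hypothetical daily velocity readings (ft/s) from a monitoring ROV:
daily = [1.50, 1.53, 1.58, 1.66]
print(relative_change(daily))
```

So even if nobody agrees whether the baseline is 5,000 or 25,000 bpd, everyone could agree whether the leak grew by 10% this week.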

"marshes fall victim"

Listen to this and watch the images.

Not so ... it's all in the Breton Sound area, Fourchon, etc. There are only a few monitors ... it's a huge area. Go out in a boat. Have a look.

No mention of how much oil might be sinking to the bottom, where it may ruin the shellfish harvest for decades. Oh right, out of sight, out of mind. Burn it, use dispersants, and shovel it off the beaches. Nothing here to see, all is well.

Was complexity what brought down the Roman Empire?

No. Too general. Complexity is often the "solution" to an oversized "problem." Historians disagree on what did the Romans in. Some say it was a failure of their nerve, others say corruption, others the rise of Christianity, and so on. Read Edward Gibbon (The Decline and Fall of the Roman Empire) for ultimate answers--if you have the time . . . and the stomach for it.

It is also difficult to determine what is meant by fall. Did the British fall because they lost/gave up their colonies? Nothing is permanent, and nations are very tenuous creations based on the collective usefulness of a central body combined with the ability to extend authority which is respected. When nations lose their power and usefulness, they dissolve to form other nations. It's not necessarily the complexity of a governing system that causes the fall. It's simply the changing dynamic of usefulness and power that precipitates the constant alteration of identities.

Gibbon makes a convincing case for Christianity.
Obviously a major factor, but as you point out, it's messy.

The fall of the Roman Empire was caused by lead in the wine & wine bottles. Same problem happened in USA, only the lead was in the gasoline. Drinking and/or breathing lead makes people really stupid.

Riiiiiight... pity all those historians who have looked into the matter carefully and delineated a variety of contributing factors. No, there was one single, exclusive magical cause, which by awesome coincidence just happens to square perfectly with the neurosis du jour over trace elements.

We know this because we validated it: after all, the USA took the great bulk of the lead out of the gasoline more than 30 years ago. Since then, the new generations, relieved of the burden, have undergone a vast intellectual revolution. By all measures, including test scores, they're now light-years ahead of poor old Europe, where many countries only took the lead out of gasoline quite recently...

Lead is considered to be particularly harmful for women's ability to reproduce. Lead(II) acetate (also known as sugar of lead) was used by the Roman Empire as a sweetener for wine, and some consider this to be the cause of the dementia that affected many of the Roman Emperors.

The ancient Romans mined lead on a huge scale, mostly from deposits in England and Spain, spinning it into vast subterranean waterworks, hammering it into pewter tableware, cheaply doping their silver coins, kohl-lining their eyes.

A bad harvest year? Boil the sour grapes in lead vessels, and the release of lead acetate would sweeten the wine. (In fact, the sweet flavor of some lead compounds is thought to aggravate the danger of lead-painted toys, adding the temptation of sweetness to young children already inclined to explore the world with their mouths.)

Ban on Leaded Gas Linked to Drop in Crime

Tainter's theory is that complex societies collapse when they reach a point of declining marginal returns. Complexity in the beginning is a benefit to societies, but eventually it takes more and more complexity, with more and more energy invested, to make tiny improvements for society. An example would be that science once gave us penicillin, which had large benefits to society. Now, in order to get a new medicine that makes a small difference for society, huge numbers of scientists and huge amounts of money must be used. Another is the need for increased time and money in education to function in a complex society. For money you can substitute energy. While he doesn't use the example, perhaps one could also point to how much energy it takes to make an American happy: more and more is expended, and little if any happiness is gained.

Whether or not you end up agreeing with Tainter, his book The Collapse of Complex Societies is well worth reading.

Hi Gail,

Complexity is too generic an issue to overlay onto this disaster. I suggest there are typically only two predominant factors at the core of disasters of this type.

First, an intentional lack of money to manage risk effectively. Good people and good equipment cost money. That money is controlled by the top executives and major shareholders of the company; it comes out of their pockets. Saying "their pockets," though, implies they have a "right" to that money, which is absolutely false. No one has the right to intentionally shortchange safety.

If BP, Transocean, and any other corporations who may be at fault cut corners by using low wage scales (resulting in unqualified workers) or substandard equipment or poor maintenance practices, this is not a complexity issue, it is a criminal issue.

The second factor is a lack of understanding of an existing flaw in an evolving system. The final investigative report will detail one or more flaws that were not fully understood before this. In hindsight they may be easy to see. We can look to the commercial aviation system for examples of this. Each time the Transportation Safety Board completes their accident report, there are typically specific changes made to prevent discovered flaws in the system from repeating.

There is no excuse for the first factor; it is simply corporate greed. The only way to stop it is to make the punishment too expensive for them to take the risk. If Tony from BP had to give up his personal fortune and live on the public dole for the rest of his life because of this, you can bet that would get the attention of his peers, and investment in safety throughout the industry would go up dramatically.

The second factor is just a fact of life, we learn from our mistakes, correct them the best we can, and try again. We continually look for the weak link in the strong chain as it evolves.

We are quite capable of running a safe oil industry at its current level of complexity and at increased levels of complexity in the future if we have the willingness to dramatically increase the punishment of those at the top who benefit from taking calculated risks at the expense of their employees, minor shareholders, the public, and the environment in order to increase their personal wealth.

"a lack of understanding of an existing flaw in an evolving system"

Zymurgy's First Law of Evolving Systems Dynamics: Once you open a can of worms, the only way to recan them is to use a bigger can.

Or go fishing ;-)

One could make the contrary argument that complex systems help PREVENT accidents, since the people in each subunit comprising the system can focus on risks and safety issues just in their little area of expertise, and not have to worry about the system as a whole. Complexity itself is not intrinsically the problem, no more than the stage on which actors act is part of the drama, or the brush strokes make up the painting. It's just the framework on which the system is built.

I am not an expert on deep water drilling; however, was there a BOP on the gulf floor, or was BP saving money and only put one on the drill deck? How many wells have been spudded in the Gulf without a hint of a blowout? Deep water drilling is for experienced exploration companies, and big does not make them better or more experienced.

MMS approved a flawed plan ... the fox is guarding the chicken coop. Nothing complex about that; it's simple.

Gail, I assume an actuary is a specialized mathematician. In the world of math, can the complexity of drilling for oil be described in terms of chaos theory? From what little I know about the subject, the theory states that small differences in initial conditions yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible in general. I see that as the same thing as a system that is too complex to control. I also know that theories fall in, out, and maybe even back into favor. From what little I encounter on the subject, chaos theory is still in vogue, but chaos control is being used with positive outcomes.
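The "small differences in initial conditions" point can be demonstrated with the logistic map, a textbook chaotic system; this is a sketch of the mathematical idea, not a model of drilling:

```python
# Sketch of "sensitive dependence on initial conditions" using the logistic
# map x -> r*x*(1-x) at r=4, a standard chaotic regime. Two trajectories
# starting a hair apart diverge until they are effectively uncorrelated.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # tiny perturbation of the start

for n in (0, 10, 30, 50):
    print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.3e}")
```

The two runs start 0.0000000001 apart and end up as far apart as two unrelated numbers; no amount of measurement precision fixes this, which is why "chaos control" is about stabilizing the system, not predicting it.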

I would suggest forgetting about chaos.

The actuarial approach to this is to figure out how many rigs have been constructed and how long they operated before going out of commission. Then figure out how many catastrophic failures have occurred in this set. The ratio of the latter to the former is the Bayesian probability for future mission failures assuming prior information. You can do this per year or per rig.

So we have had 2 major spills in the GOM (the one in Mexico that I read about included). Someone can work out the numbers, because I don't know the total number of rigs built so far.
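A minimal sketch of that actuarial calculation, using a Beta-Binomial (rule-of-succession) estimate; the rig-year and failure counts below are placeholders, since the actual totals aren't given above:

```python
# Sketch of the actuarial estimate described above, using a Beta-Binomial
# model (Laplace's rule of succession). The counts are placeholders,
# NOT real industry statistics.

def failure_rate_posterior(failures, rig_years, alpha=1.0, beta=1.0):
    """Posterior mean of the per-rig-year failure probability.

    Uses a Beta(alpha, beta) prior (uniform by default); the posterior
    mean is (failures + alpha) / (rig_years + alpha + beta).
    """
    return (failures + alpha) / (rig_years + alpha + beta)

# Hypothetical: 2 catastrophic blowouts observed over 10,000 rig-years.
p = failure_rate_posterior(failures=2, rig_years=10_000)
print(f"estimated failure probability per rig-year: {p:.6f}")
```

The prior keeps the estimate sensible even with very few observed failures; with real rig-year totals plugged in, this gives the per-year or per-rig probability the comment describes.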

I would suggest forgetting about chaos.

Science tried that--- It didn't work in the real world.

You have to crawl before you can walk.
Chaos is deterministic and so it can't be used to predict anything if you lack any knowledge about the initial conditions.

Probability theory has at least the benefit of known priors that one can use to predict future outcomes.

That is what I am saying. All the power to you if you can harness chaos theory.

Chaos theory is the study of dynamical systems. It is entirely relevant here.

I have seen many comments over the years about too much regulation and the need for deregulation, etc. Does anyone here on TOD not see the need for EFFECTIVE regulation at this point? The regulators need to be adequately compensated and trained, and until the needed personnel can be trained to adequately evaluate and supervise all plans for deep water operations, I think that any new operations should not be started. I would think, in the future, there should be regulatory personnel on the drill platform/ship to be sure that the plans are followed as designed and that alterations to that plan are reasonable.

Well, as I said below, getting effective regulation in the real world seems to be a problem. It's too early to really tell, but it seems possible that had existing rules been heeded, the spill might have been avoided or have been much smaller. But regulation is a political matter ultimately supervised by politicians, so the vote-garnering - i.e. important - thing is to be seen as "doing something". But "doing something" that's truly useful, well, that's certainly an issue, but at the top levels it's quite secondary.

Then there's that pesky problem that if you pack the rig (or any other workplace) with a throng of people doing little but peering over each other's shoulders, you're likely to find everyone tacitly assuming that someone else must already have seen any possible problem. Not much incentive to really look, nothing new to see. Not much incentive to speak up if you see something, someone in the throng is surely more expert than you and if you raise a false alarm you may feel like an idiot.

Even the minimum unavoidable required automatic measurement and alarm equipment can have a similar side-effect. Have I ever mentioned the subway-train drivers ("motormen") who would roll into the stations with their faces buried in the New York Daily News, the paper that prided itself on its third-grade reading level? They figured the automatic trip-stop would take care of any problem. Usually it did - as evidenced by the deafening sledgehammering of the flat spots on the wheels of trains that had been stopped that way and had yet to be repaired.

I think the bottom line is that it is still too early to tell what really caused this disaster. Given the lack of honesty and transparency, we may never know for sure. Speaking of complexity, when will Microsoft come up with an operating system that doesn't crash constantly? I have Windows 7 and haven't seen any improvement.

Never, although XP seemed to be a big improvement over 98/ME.

For one thing, this is a cousin of the Halting Problem, which is provably not, in general, solvable. That is, most often the only way to prove what will happen with a computer program is simply to run it.
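The classic argument that a perfect halting checker cannot exist can be sketched in a few lines; the `halts()` oracle below is a placeholder for illustration, not a real function:

```python
# The classic contradiction behind the Halting Problem, sketched in code.
# Suppose a perfect halts(f) existed (it cannot, in general):

def halts(f):
    """Hypothetical oracle: True iff calling f() eventually halts."""
    raise NotImplementedError("provably impossible in general")

def paradox():
    # If halts() were correct, both answers contradict it:
    # - halts(paradox) == True  -> paradox() loops forever (doesn't halt)
    # - halts(paradox) == False -> paradox() returns (halts)
    if halts(paradox):
        while True:
            pass
    return "halted"

# Any attempt to consult the oracle just raises, since no correct
# implementation can exist:
try:
    halts(paradox)
except NotImplementedError as e:
    print(e)
```

This is why, as the comment says, the only fully general way to find out what a program does is to run it.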

For another thing, while it's possible to impose enough restrictions on software to make it provable, as a practical matter the process is difficult enough to probably push the software back to a 1980 level or earlier. Since most people boot the computer daily, the issue isn't big enough to make them want to go far enough backwards to get provability.

And for yet another thing, the big driver of crashes and security holes is advertising. Get rid of all those aspects of Object-Oriented Programming that involve self-loading code, and a vast array of security and crash problems simply disappear. However, advertisers want the fanciest plug-ins available in order to "cut through the clutter" (their jargon, not mine, for that zero-sum game) without waiting years for users to download them or to get a forced "upgrade" when the old computer dies.

This just illustrates once again that in the real world, where we will never attain Utopia, people must accept some degree of risk, or else just about strap themselves in bed. Things are tough all over, deal with it because it's not going away.

Microsoft's crashing operating systems are not a good example of failure due to complexity because their operating systems crash by design. Poor quality products are a consequence of an unregulated monopoly.

I am troubled by talk about complexity. A recent surge in the use of this word seems to me to be a new trend in techno-babble. I won't define techno-babble because it is also something that troubles me.

I can imagine several different definitions, none of them well-thought-out positions on what the term ought to mean in order for it to be technically or scientifically useful. So I keep reading stuff about complexity, hoping to sort out what is going on in the community mind.

So far, it seems, not much. Complexity seems to be considered something that is intuitively obvious. I am not aware of anything in science or engineering education that is allowed to remain intuitively obvious past the introductory week of a course of study. Where are these definitions? Where are the glossaries of technical terms? Or are there so many competing definitions that the field of complexity is just too complex for serious study?

It means complicated, not simple. It indicates something with intricate parts, pieces, events, systems, and the like.

Don't be troubled by complexity talk, techno-babble, community mind, or missing definitions. Listen carefully then find out for yourself what there is to know. Pay attention and learn. The beginning of wisdom is knowing you don't know --Socrates.

TOD is a great forum, which means it is a font of thought, information, opinion, feelings, rhetoric, and even poetic language. Only some of it is true--but don't let that get you down.

I can imagine several different definitions, none of them well-thought-out positions on what the term ought to mean in order for it to be technically or scientifically useful.

Funny how every time someone posts an article on TOD with the word "complexity" it always brings howls of complaint that no one can define the word.

I think it was either Catton or Odum (I don't recall) who had a good definition. They made the brilliant observation that complexity in a system can be indirectly measured by the number of "artifacts" and/or by the number of specialists or niche roles involved.

One example being a simple hunter/gatherer society: few specialized roles and a relatively small number of artifacts. Compare that to a modern army, or other such industrial organization, with very large numbers of specialized roles and a truly vast number of artifacts.


I think Tainter is really onto something, but I've never seen what I'd consider to be a rigorous definition of "complexity" in the sense he invokes it. It seems a bit along the lines of pornography.... you know it when you see it. It tends to be described rather than defined....

Of course, I'm posting this hoping someone will post a concise definition which jibes with Tainter's use of it.


I'm surprised I got this far down the comments before Tainter got a mention! He makes a clear-cut case for diminishing returns on complexity, and I see no reason why this wouldn't apply to energy production as well.

"Complexity is generally understood to refer to such things as the size of a society, the number and distinctiveness of its parts, the variety of specialized social roles that it incorporates, the number of distinct social personalities present, and the variety of mechanisms for organizing these into a coherent, functioning whole. Augmenting any of these dimensions increase the complexity of a society" Joseph Tainter

This is the definition Tainter uses and it is tailored to societies. I am sure it could be easily rewritten to define software, drilling operations, etc. but as far as I can tell from a quick flip through of his book that is the definition he uses. Not very concise.

However if you take some operation like various ways of drilling for oil, "Complexity is generally understood to refer to such things as the size of a drilling operation, the number and distinctiveness of its parts, the variety of specialized job roles that it incorporates, the number of distinct parts in the equipment required, and the variety of mechanisms for organizing these into a coherent, functioning whole. Augmenting any of these dimensions increase the complexity of a drilling operation"

Maybe someone else could do a better job of translating it to drilling than I did?

Complexity seems to be considered something that is intuitively obvious.

When in doubt ... Wiki it:

The key problems of complex systems are difficulties with their formal modeling and simulation

Instead, the lessons from the auto industry will be ignored, and new regulations adopted, which only seem to make the industry safer.

Ummm... well, duh. Regulatory agencies are ultimately run by... politicians. Politicians derive fame, fortune, votes and power from scaring people and then making a great show of "doing something" - however ineffective - to assuage the fears they have created.

In practice that involves endless tub-thumping speeches, and piling on ever more rules until the only way to avoid complete paralysis is to ignore them. It never means enforcing the rules already there. That would lack the proper mediagenicity; it's just too far under the radar to scare anyone into voting for you.

It all reminds me yet again of the labeling for sodium chloride, obtained from a chemical house instead of the grocery store. You have to look twice and then look again to be reassured that, oh, it's just salt, it's not rat poison. Or, as I've heard more than one lawyer say, when everything is in a red folder, then nothing is really in a red folder.

But more directly to the point, I'm remembering a guy in Belgium - which is one giant cat's plaything of red tape - standing on a tall free-standing ladder in the curb lane, reaching up and changing out a street-light bulb, no traffic cones or anything. The principle, which we ought to meditate upon as the lust for revenge gives way to the hunt for scapegoats in the case at hand, might have been:

Rules: imported by the chariot-load going all the way back to ancient Rome; vast gouts added by the dump-truck load in modern times.

Rules heeded: zero.

One might wonder how the vast overabundance of red tape, shouting labels, hysterical media blathering and eschatological punditry that makes real risks utterly indistinguishable from negligible ones might affect what work is asked for and how it gets done - and neither for the better.

One might equally well wonder about the polar opposite, absurd oversimplification. Consider the substantial potential for high silliness in the overwrought use of "complexity" or other one-dimensional metrics as Theories Of Everything...

In answer to when Microsoft will have an OS that doesn't crash: never.
It has made me almost dislike the color blue.

Gates is getting into nuclear power.
That may up the stakes on The Blue Screen Of Death.

Haha - I never thought of the BSOD as just a synonym for Cherenkov Radiation

Thanks for the link-- it does seem apropos.

"when will Microsoft have an os that doesn't crash"

Correct answer: when all device drivers are properly tested, or when Microsoft makes its own closed PC, or when Oracle (or any other s/w maker) creates a real commercial app that doesn't crash.

We used to have an old saying: all non-obsolete s/w has bugs.

From a recent speech by Steve Ballmer:

The saga of our Windows product is probably one of the better chronicles, and I'm sure many people went through a cycle either at home or at work with our Vista product. It was just not executed well, not the product itself, but we went a gap of about five, six years without a product.

I think back now, and I think about thousands of man-years and it wasn't because we were wrong-minded and thinking bad thoughts and not pushing innovation. We tried too big a task, and in the process wound up losing essentially thousands of man-years of innovation capability. And so a discipline and execution around the innovation process, I think, is essential.

Cheer up Steve, at least you did not crash the GOM.

It is rather enlightening reading engineers' take on complexity.
Engineers solve problems by moving dirt and making machines - the bigger and more complex, usually the better!

The thought that simplicity and a "great leap backward" might be the wise approach is not considered.

Have you read Tainter?

Yes, and I mostly agree with his concept. However, trying to apply that to a specific thing like a drilling rig failure is a stretch. Tainter's idea of complexity is simple (sorry, I just had to write that) - in a period with few resource limits and growth a society responds to challenges by adding complex social systems and infrastructure. At some point the costs of increasing complexity outweigh the benefits, creating a need to simplify - which often happens catastrophically because the existing system is not capable of simplification. You could apply this to our society's dependence on fossil fuels and the increasing costs and difficulties associated with trying to get it in the quantities and rates required.

There are many enormous costs associated with our use of fossil fuels, but we've spent centuries structuring our societies and infrastructure around them and so we must continue, regardless of the falling return on the investments required - until it just is not possible anymore.

So this disaster is a good example of how the costs of maintaining our lifestyle are increasing, but Tainter's ideas are not really useful in analyzing the specifics of why this drilling operation failed.

I was wrong; they had a BOP, but it failed to function correctly. Maybe it was Chinese. Look here:

Yes, there is a BOP on the ocean floor manufactured by Cameron International as reported in Heading Out's Oil Spill Update - Including Oil Spill Discussion - May 7. It appears to be incorporated in the USA. Cameron sold the BOP to Transocean (the operator and owner of the Deepwater Horizon Drilling rig) apparently in 2001.

If the BOP had functioned properly, we wouldn't be having this discussion. I do not know the reliability record of BOPs; however, I would guess they are very reliable and this is just one of very few failures. The only complexity factor is why it failed. Cleaning up and shutting in the well is another story. BP has made a good one; it is just a little wild right now.

"If the BOP had of functioned properly we wouldn't be having this discussion."

The above quote is not necessarily a given. More facts are needed to determine the exact current path of the reservoir fluids to the surface. We might be looking at a broach to the surface and a wilder blowout even if the BOPs had worked.

And does the quick fix lie in the realm of simplicity? In my slice of process engineering I become very circumspect when too many PhDs get involved in trying to work out a pragmatic solution to an engineering problem.

SO ... cut and remove the damaged existing pipe, reducing the problem to one leak. Then why is it not possible to install a rolling mill and an automatic pipe-welder system on a couple of barges and commence prefabbing pipe sections of an appropriate diameter to surround the wellhead (say ~4' dia?), secure it in a vertical-axis guide, and feed it down, continuously welding on, say, 50'-long sections until the fabrication is seated into the seafloor bed, then concrete it in place. This arrangement would avoid induction of seawater and the consequent peculiar things that happen on the oil-water phase diagram at **Kpsi, allowing the oil to be managed at sea level while the diversion wells are drilled. It's actually not that big a fabrication job...

Complexity was a factor, but that statement of fact is insufficient to shed any light on this recent disaster.

Now, for a workable definition of complexity. A system can be said to have complexity if the system produces either unexpected or unpredicted events.

The regulations are there. Read the CFRs. Every inch of a rig has a regulation. The problem is finding regulators to enforce the CFRs. Apparently, everyone in this country can fall to corruption.

Someone in the comment string mentioned the need for complexity definitions. The following definitions are adapted from several sources, including Richard Bookstaber's A Demon of Our Own Design, Charles Perrow's Normal Accidents, and the draft of a dynamic stability glossary:

Interactive complexity (Bookstaber) — a measure of the way the components of a system connect and relate. An interactively complex system is one whose components can interact in unexpected or varied ways, where there is feedback that can lead the components to differ in their state or their relationship with the rest of the system from one moment to the next, and where the possible states and interactions are not clearly apparent or cannot be readily anticipated or computed for prediction purposes. Systems with high levels of interactive complexity are subject to failures that seem to come out of nowhere or that appear unfathomably improbable (Taleb calls them Black Swan events).

Loose Coupling — Loose coupling means that the component failures of a linear system can be considered as having independent probabilities and thus joint probabilities are computable.

Tight coupling — Tight coupling means that components of a process are critically interdependent and thus non-Gaussian. They are linked with little room for error or time for recalibration or adjustment. A process that is tightly coupled can be prone to accidents even if it is not complex. The term can also be applied to a time-driven system in which one event leads to another in short order, i.e. a chain accident with no time to correct the actions. Other system failures grow out of a chain of errors and mishaps over time without the operators discovering the source of the misinformation, as was the case for the Three Mile Island accident.
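The practical difference between the two coupling definitions can be sketched numerically: with loose coupling, joint failure probabilities simply multiply, while a shared cause breaks that arithmetic. All probabilities below are illustrative, not real equipment statistics:

```python
# Sketch: why independence matters for redundant systems. With loose
# coupling, two redundant components failing together has the product of
# their individual failure probabilities; a common cause destroys that.

def independent_joint(p1, p2):
    """P(both fail) when failures are independent."""
    return p1 * p2

def common_cause_joint(p1, p2, p_common):
    """P(both fail) when a shared cause can take out both at once.

    p1, p2 are the independent failure probabilities; p_common is the
    probability of the shared cause striking both components together.
    """
    return p_common + (1 - p_common) * p1 * p2

p = 0.01  # each component fails 1% of the time (illustrative)
print(f"independent:  {independent_joint(p, p):.6f}")
print(f"common cause: {common_cause_joint(p, p, 0.005):.6f}")
```

Even a small common-cause probability dominates the joint failure rate, which is why redundancy alone is not enough when all the redundant channels share identical features.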

Complex system failures can be classified by type:

Common-cause failure — a specific condition which may result in a single failure event and which would be capable of causing each element or channel of a redundant system to fail.

Critical period — Multiple failures through common cause of elements of a redundant system will result in overall system failure if they occur within a time interval known as the critical period.

Non-concurrency — When multiple failures through common cause of elements of a redundant system occur over a given time interval greater than the critical period, the individual elements are said to be non-concurrent. System failure in non-concurrency mode is capable of prevention by human intervention or scheduled changes in process operation (mediation).

Common mode failure — In technical facilities, for example nuclear power plants and commercial aircraft, redundant systems are used to prevent random failures from disabling the complete system function. However, although this redundancy concept is adequate to cope with random failures in single redundancies, its applicability is limited in the case of multiple failures due to a systematic failure cause to which all redundancies are subjected because of their identical features. Some general considerations have been formulated to rule out the occurrence of such common mode failures (CMF) in redundant systems under certain circumstances. CMF means that in more than one redundancy the systematic failure cause is activated at the same time, or within the same frame of time (e.g. during the mission time for an accident).

Cascade Failure — a failure in a system of interconnected parts in which the failure of one part can trigger the failure of successive parts. Cascade failures occur in power grids, computer networks, and vessel capsizings involving impaired stability. System failure in non-concurrency mode can be prevented by human intervention, such as the USCG delivering a dewatering pump to a fishing vessel taking on water. In financial systems, the risk of cascading failures of financial institutions is referred to as systemic risk: the failure of one financial institution may cause other financial institutions (its counterparties) to fail, cascading throughout the system. Institutions that are believed to pose systemic risk are deemed "too big to fail" (TBTF).

Independent failures vs. dependent failures — The overriding problem in risk assessment is to differentiate between the dependent failure and that of the independent failure. It is much simpler to predict the frequency of independent failures for which the probabilities are knowable. Dependent failures involve conditional probabilities which are much more complex. To this end there have been many modeling methods, mostly theoretically based, which lack a practicable engineering approach to a satisfactory understanding and solution.
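
The gap between the two cases can be made concrete with a small numerical sketch, assuming a simple beta-factor model for common-cause failure in a two-channel redundant system. All figures below are hypothetical, chosen only to illustrate the point:

```python
# Hypothetical illustration of independent vs. dependent (common-cause)
# failures in a two-channel redundant system, using a beta-factor model.
# All numbers are invented for illustration.

p_channel = 1e-3   # probability a single channel fails during a mission
beta = 0.1         # assumed fraction of channel failures sharing a common cause

# Independent-failure approximation: both channels must fail separately.
p_independent = ((1 - beta) * p_channel) ** 2   # about 8.1e-07

# Common-cause contribution: one shared cause takes out both channels at once.
p_common_cause = beta * p_channel               # 1e-04

p_system = p_independent + p_common_cause
print(p_independent, p_common_cause, p_system)
```

Even with a modest assumed beta, the common-cause term dominates the independent term by more than two orders of magnitude, which is why redundancy alone does not deliver the reliability that the independent calculation suggests.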

Normal Accident (Perrow) — The cascade of failure that occurs in systems that share tight coupling and complexity gives rise to what are called "normal accidents." As ironic as the term sounds, normal accidents are accidents that are to be expected; they are an unavoidable result of the structure of the process. The more complex and tightly coupled the system, the greater the frequency of normal accidents. The probability of any one event is small enough to be dismissed, but taken together, with so many possible permutations and combinations beyond comprehension, the odds of one or another happening are high.
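
That last sentence is just arithmetic, and a one-liner makes it vivid. Assume, purely for illustration, a system with 10,000 distinct failure sequences, each with a one-in-a-million chance per year, and (unrealistically) treat them as independent:

```python
# Many individually negligible failure modes still add up.
# Both numbers are invented for illustration.
p_single = 1e-6       # per-year probability of any one failure sequence
n_sequences = 10_000  # number of distinct possible sequences

# Probability that at least one sequence occurs in a given year.
p_any = 1 - (1 - p_single) ** n_sequences
print(p_any)  # roughly 0.01, i.e. about a 1% chance per year
```

Each sequence alone is dismissible; together they amount to roughly a 1% chance per year, and in a real tightly coupled system the dependencies between sequences would push the number higher still.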

Note also that a normal accident sequence involving potential loss of life can be interrupted by lucky human intervention. These involve non-computable probabilities such as having an experienced glider pilot at the controls of a commercial aircraft struck in both engines by birds. Examples of non-interrupted accident sequences: capsizing caused by being in the wrong place (i.e. the focal point of a rogue wave or microburst) at the wrong time (when it breaks or strikes) even if the vessel satisfies existing stability criteria.

Normal accidents include rare events which are frequently called Black Swan events in the popular press. These events lie outside the realm of regular expectations, because nothing in the past convincingly points to their possibility. Second, they carry an extreme impact. Third, in spite of their outlier status, human nature makes us concoct explanations for their occurrence after the fact, making them seem explainable and predictable. Tight-coupling system failures are generally rare, previously not experienced, and involve unknown but non-zero retrospective probabilities. (See definitions in the Probability and Statistics section, not included in this post.) Examples: the penetration of the Maginot Line in 1940; the 9/11/2001 terrorist attacks; the consequences of the Indian Ocean tsunami of December 2004; the Gulf oil spill of 2010; and so on.

Note: these examples may be normal accidents to knowledgeable persons in particular disciplines who recognize that such tightly coupled events have non-zero probabilities which are not computable. In fact, the majority of normal accidents arise from non-repeating sequences of events. These events can be analyzed using statistical hindcasts (a weather term) to modify assumed "fat-tail" probabilities, but they cannot be accurately forecast, since each new sequence of events has never happened before. Note the different sequences in the market crashes of 1987 and 2008 and in the accidents involving the space shuttles Columbia and Challenger.

I don't disagree with any of that and I appreciate your points.

But to me a simple system can fail just as easily and spectacularly as a complex one. To me complexity is a function of perspective in that what might appear simple to one person could be complex to the next.

As a CPA I appreciate the simplicity of the double entry bookkeeping system but to others it can seem complex - it certainly did to me at first until my education exceeded the level of understanding required.

Want to see complexity and failure happen day-to-day? Look at financial systems around the world, starting with our own. I'm surprised this topic doesn't come up more often on TOD, but then it's not called TOD for nothing. Maybe we need a blog site where we can debate the risks and failures in our financial system and educate people the way TOD does. Or am I dreaming?

I'd certainly voice my opinion that any time a system grows in complexity beyond the understanding of those responsible for overseeing it, the risk of failure increases accordingly.

Or is that too simple? :-)

I agree about simple system failures. From the tight-coupling definition above: "A process that is tightly coupled can be prone to accidents even if it is not complex." However, without the tight-coupling criterion, simple systems are less likely to fail unexpectedly unless operator error is involved. For example, a brake override on the accelerator pedal is a simple system not prone to acceleration-runaway accidents. Having no mechanical brake override but relying on complex "fail safe" software logic converts a simple mechanical system into a complex software system, as Toyota found out.

Interesting post from

by "peakoilerrrr"

Here is an idea currently in practice at nuke power plants, and more specifically at the one where I have been last 10 years.

There is a fair effort made to announce the incidence of major "accidents" and "near-misses" to educate ALL WORKERS. Any degrading commentary about classes of workers is discouraged...whether engineers, electricians, or laborers, all are virtually equal in their capacity to cause BAD WIDESPREAD effects through "accidents".

Because it seems that ALWAYS, prior to major failures, someone had questions and either did not speak up or did not insist his concerns get fairly considered, here is a ...

KEY RULE: ANY worker has the right to STOP any operation if he has a question about safety.

How the worker's concerns are handled may not be so nice, but the operation is at least momentarily stopped, and an opportunity for review occurs that would not exist if no one spoke up...at least there are always witnesses and peer pressure not to err.

Read that again if it seems trite, or cute or just "eyewash". In practice, that rule has prevented some horrific "accidents".

I've done it many times, and was either proved right or quickly corrected and acknowledged for my concerns or, a few times, had to bypass lower management and go as high as one step before going to the NucRegCommission, which is my option. Usually there was no backlash from other crew or management. Sometimes there was; I even got fired once and laid off a few times, but I always got rehired [so far] without rancor after all the matters were handled, [and the company made changes without having to admit error].

While on paper there was not supposed to be any intimidation for "whistleblowing" on any safety matter, there were some large unpleasantries.

So two suggestions for rules: try to treat others with respect, and encourage their communication about safety-related matters, with official policy, in writing, making any intimidation or threats actionable. I think the oil/gas industry does not have this RULE, perhaps because BAD WIDESPREAD effects have not been so anticipated.

[edit: NUKworkers have written, Federal protections about safety-related matters...not perfect, but at least it is in-writing.]

Good post. Without naming names, there are some oil/gas companies where safety has been brought to the forefront of all processes and decisions, to the point of being a big part of annual performance appraisals and bonuses. Perhaps not as rigorous as at a power plant, but it was institutionalized down to the secretaries and not just for show.

Interesting. At my company we do the same thing but the stakes are not nearly as high. Our drivers are faced with many situations, such as bad weather, which only they can evaluate. Dispatchers can't see the road conditions. We have found by giving every driver the authority to stop when they determine it is not safe to continue, tensions are lower and safety is higher. Since we are an extremely time sensitive delivery company, stopping has tremendous negative consequences to our customers. Still, it is better to get there late than not at all.

The airline industry found this out the hard way in the cockpit, starting with the two 747s that collided at Tenerife. It took many years of crew training to move from a climate of "the Captain is not questioned" to one in which the First Officer and Flight Engineer felt they could freely speak up.

I think one of the best examples of "complexity" could be the US Tax Code.

But I digress.

There are two successive activities involved in well drilling: 1) drilling and completing the well, and 2) mitigating any leak, which may result in spillage over a significant period of time.

If enough care is taken to do the first one properly, the chances of having to do the second one can be made very small.

That said, while the chances of any ONE well leaking may be quite small, given the quantity of wells to be drilled, the chances of a leak occurring "someday" are significant.

What BP apparently wasn't adequately prepared for was the latter. A better plan than the one they had was obviously needed. A generic (oil+gas) leak on the sea floor is a contingency BP (together with other drillers?) should have prepared for better than it did.

The research plan would have been simple. Starting in 500 feet of water, develop a method of making sure the released gas takes a direct path (constrained within a duct or not) to the surface without a lot of sea water mixed in. If this can be achieved, any oil released will follow the gas to the point where it breaks the surface, from which point it can be collected relatively easily. A "small" spill of jet fuel could then be carried out to confirm this.

Next, do the same thing in 1000' of water...then 2000'.

From this I think you can catch my drift...

Hi everyone, another long term megalurker signing up to say thanks for all the chitchat.

And make sure you all see this, courtesy of Penultimate Straw at b3ta

Coming from the IT industry, most really poorly executed scenarios I have seen come from adding multiple contractors. The contractors do this to spread risk/liability. Watching BP, Transocean, and Haliburton execs testify in front of Congress, that seems to be their strategy. It doesn't work well when everyone is under oath and in the same room.

I have a few fairly simple questions.
---Is deep sea drilling controllable? This whole episode makes me assume it isn't. This has gone on for a month now, and what I assume are the best minds in the industry have not been able to stop it.
---Is it that there is no reliable solution for dealing with this sort of problem?
---From afar, it would appear that BP is more interested in preserving their investment rather than taking draconian steps to shut this off. Could the well be blown up and shut off?
---What was the expected, lifetime production from this well?

1. Relief wells will work. This is not a question of if they can control the well, it's a question of when they can control the well. The sooner the better, but you have to define the acceptable time period (which is 0 days, going by people's online opinions) to say whether disasters like this are controllable. There are many, many successful deep-sea wells, so it is obviously not impossible to drill one safely.
2. Yes, relief wells. However, this being unacceptable to the public, new solutions will have to be developed.
3. There is no preservation of this well. Once it is brought under control, it will be killed and then cemented up, never to produce again. Besides the damage that has occurred downhole, the government has strict guidelines to prevent the conflict of interest you indicate. Once you lose control of a well, consider your investment gone.
4. The reservoir volume is in the millions of barrels (I really don't know if it was 1 million, 10 million, or 100 million). With no artificial drive, it could probably produce 5-20% of whatever the reservoir volume is just from the pressure differential.
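
A quick sketch of the range implied by answer 4, using the commenter's own bracketing numbers (reservoirs of 1, 10, or 100 million barrels; 5-20% primary recovery), all of which are guesses rather than data:

```python
# Range of recoverable oil implied by the answer above.
# Reservoir sizes and recovery factors are the commenter's guesses.
for reservoir_bbl in (1e6, 10e6, 100e6):
    low = 0.05 * reservoir_bbl   # 5% primary recovery
    high = 0.20 * reservoir_bbl  # 20% primary recovery
    print(f"{reservoir_bbl:,.0f} bbl reservoir -> {low:,.0f} to {high:,.0f} bbl")
```

So the uncertainty spans roughly 50 thousand to 20 million recoverable barrels, a factor of 400, which is why nobody quotes a single lifetime-production number for the well.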

I hope that answers your questions.

1 It must be controllable if they have done it 500 times up til now.

2 There appears to be no solution, yes.

3 I am so tired of hearing this. By law, they are required to kill the well permanently, so the motivation to recover it as a source of income exists only in the imagination of ignorant haters. Someone please explain again how Macondo will never produce anything no matter what.

4 Look at Thunder Horse. Lifetime projected: billions of dollars. Lifetime actual: fractions of billions of dollars.

---Is deep sea drilling controllable?

Answer 1: Yes - It has been over 40 years and more than 30,000 offshore wells drilled in the USA since the last uncontrolled offshore blowout, a 99.997% safety record.
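
As a back-of-the-envelope check of that 99.997% figure, treating it as one uncontrolled blowout in 30,000 wells (my reading of the claim, not an official statistic):

```python
# Back-of-the-envelope check of the quoted safety record.
wells = 30_000  # offshore wells drilled in the USA over ~40 years
blowouts = 1    # uncontrolled offshore blowouts in that span
safety_record = (1 - blowouts / wells) * 100
print(round(safety_record, 3))  # 99.997
```

The arithmetic supports the quoted figure, though, as the glossary above notes, a per-well base rate says little about the fat-tailed consequences of the one failure that does occur.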

Answer 2: No - We have a horrific uncontrolled blowout that may not be stopped for months; what other evidence do you need?

---Is it that there is no reliable solution for dealing with this sort of problem?

A: Drilling a relief well is the standard and accepted procedure for stopping a blowout like this. The problem is that it takes at least a couple of months, sometimes much longer. In this case BP is drilling two, but many are suggesting they should go for three or four. On the Ixtoc 1 blowout in the Mexican waters of the Gulf of Mexico, which was similar in size, or larger as it was flowing totally open, it took almost 10 months to complete the relief well. All the other methods they are talking about (top kill, junk shot, etc.) have been used occasionally on land but have never been hooked up underwater before.

---From afar, it would appear that BP is more interested in preserving their investment rather than taking draconian steps to shut this off.

A: BP lost their investment in this well before the blowout hit the drill floor; there is nothing for them to save. Even any oil they recover from the RITT will return far less than the cost of recovering it. The cost of the recovery vessel alone, without any of the supporting vessels and ROVs, is probably about $900,000 per day. The response is already larger and more expensive than the Exxon Valdez response, and even if the well were stopped today, I'm sure the ultimate bill will make this the most expensive cleanup in the world.

---Could the well be blown up and shut off?

A: No. It is common to use explosives to snuff out the fire of a burning well, but not to actually blow up the well. Any explosives would probably make things worse.

---What was the expected, lifetime production from this well?

A: Outside my pay grade but I would guess 50,000,000 barrels per well drilled in the field, possibly more.

I think I need help with something that may be obvious to everyone else.

You say "BP lost their investment in this well before the blowout hit the drill floor".

Is that to say they had lost it even before the fire and the sinking of the rig?

What if they were able to prevent the fire and the sinking of the rig? That would mean that there would be no leak to the ocean, right?

If that is the case, then couldn't they have somehow collected the oil and gotten it to market?

Maybe I am that dumb about the oil business (I am a Linux programmer and a light artist, but I am very curious about this oil business). Isn't the rig there to capture the oil once they 'hit it' and get it onto tankers and then to market (the nearest refinery)?

I do agree with the answers regarding the use of explosives and the relief wells. Those seem to make sense to my limited logic in this game.

Respectfully yours,

Mark Allyn

No...the rig is just there to get the hole in the ground and ascertain whether hydrocarbons are present. That rig was in the process of moving off so that another rig could come in at some point and finish or 'complete' the well, i.e. get it set up to produce oil.

I don't really know if they even finished appraisal for this prospect. I'm guessing not, as this well was labeled a 'discovery', i.e. the first find in that particular potential field.

So it may have been YEARS before they were ready to bring the well on production.


If they had gotten the rig disconnected from the riser prior to the blowout, it sounds to me like what some people are saying implies it may have ended up being a wellhead blowout (i.e. hydrocarbons coming from the wellhead seals). I have been thinking that might actually be worse from an oil-spill perspective (although certainly better from a loss-of-human-life perspective).

I may be way off base with that, though.

you are thinking correctly.

Here's another thing I was wondering: Presumably they couldn't disconnect the burning rig from the riser since it was too dangerous. But if they could have, the blowout would be on the surface, and we could all just watch it burn. Problem solved until the riser collapses?

The emergency procedure (for extremely bad weather or a failure in the positioning equipment, for example) is to disconnect the riser from the stack after closing the appropriate rams and shearing off the drill pipe, then drift or drive away with the riser hanging below the rig, leaving the well shut in on the subsea stack on the seafloor to be re-entered later. If everything holds together, you get a second chance to re-enter the well. In a blowout situation, the subsea stack may or may not be able to contain the reservoir fluids, depending on the path of the fluids, the mud weight, and many other factors. Hope this helps.

Wow, so they would disconnect the riser at the BOP and figure it out later. I hadn't heard that anywhere before. So you're saying the rig can support a mile of riser hanging off it? Can a mile of riser stand up by itself? With help from chain anchors? Or is that never done?

The pipes in the riser are coated with flotation material for near-neutral buoyancy.


Thanks. I have learned more with this exchange than I have learned from the media over the last month.

This site is a real gem.

Coming from the IT industry, most really poorly executed scenarios I have seen come from adding multiple contractors. The contractors do this to spread risk/liability.

Coming from the IT industry, I definitely do not share this experience. Contractors are added mostly because ramp up time to get full time employees is too high - also there may not be enough work for full time employees once the project finishes.

Evnow, have you seen a very large project work well with multiple contractors? Typically, where there is a general contractor with subs, things have a chance of working. Where the client chooses to be the general contractor, there have frequently been problems.

I agree with your points about ramp up and ramp down.

Another post from peakoilerrr, from the gCaptain website:

In the field of nuclear power[electrical] generation, the term "safety-related" essentially covers any equipment/activity that could affect the ability to shut-down the reactor.

Note this involves ANY consequences that COULD affect THE PUBLIC, as distinct from a worker getting a splinter or crushed finger. E.g., that is why basic emergency planning begins by pre-supposing a 20-mile radius effect.

Also note that ANYONE can call the USG NucRegCom 24/7 to report any safety-related matter..."anonymously" or not. It will get attention.

Seriously? Well, there are many hundreds of carbines/cameras routinely on-site 24/7 in safety-related areas. In this industry, dangers and complexity have a long experiential history...which is also why there is recognition that no activity is ever the same as before, because time and conditions never exactly repeat.

E.g.--only after long experience under special conditions was Flow-Accelerated Corrosion suddenly recognized as causing pipe walls to become thinner. It is a phenomenon different from the usual corrosion syndromes: liquid flow can "wipe" atomic layers off a metal wall, highlighting the always-present imperfections [as distributions of atoms], at the micro level, in the metal lattice of metal alloys.

Perfectly spec'ed pipe suddenly has random thin spots. Surprise! So unknown events [surprises] are planned for in some fashion so as to limit consequences. Deepwater drilling in the GOM needs a new planning mindset to match possible consequences [like fail-safe BOP-type arrangements, pre-drilled kill wells, and engineered hardware that accepts emergency control mechanisms].

The GOM crisis is a CONTROL-FAILURE.

Of course … when you increase the complexity of a task (absent regulatory checks) you also increase the likelihood of mistakes and accidents, but our societies and industries undertake complex tasks all the time. I believe the issue here is "economics" plain and simple, and cost cutting. The break even point for oil production in the Gulf is near the current price of oil:

The only way this works is with extensive tax credits, direct and indirect subsidies, and other federal/taxpayer support for the industry. The MMS conducted a study on the use of acoustic triggers on BOP devices in the Gulf and determined they would not be cost-effective for the region. These are required in Norway, Brazil and Canada. And cost cutting has been cited as a contributing factor in BP accidents at a Texas refinery and in an Alaska pipeline leak. We know they were behind schedule on the Deepwater Horizon, ruined one well because they went too fast (and had to drill a second), and rushed to remove the mud from the well bore prior to capping the well (which likely contributed to a pressure release in the riser pipe). It also appears they were hasty in retrofitting the BOP device for use on this well, since there was miscommunication and confusion regarding technical schematics that delayed the response to the spill. This was an exploratory test well, which nobody seems to highlight; the goal was to drill it as quickly as possible, get back results on pressures and other tests, cap it (for "temporary abandonment," as BP described it in Congressional testimony) and move on.

BP is competing with other locations where oil is far more affordable to produce (i.e., nearly every other location on the planet). If we want it done more safely in the Gulf, we have to be willing to pay more for it at the pump (and a lot more, since these costs are spread out thinly across the whole industry). And there may be more affordable and less polluting sources of energy than domestic oil and gas from offshore in the Gulf.

Thank you for a great post. I could not read all the great comments so far. I posted some comments on complexity on my blog at

Thanks everyone for a very informative site!! A very basic question: how is a relief well supposed to stop the flow? And why should this well intersect the main well at 13,000' instead of a faster and easier target at, say, 10,000' or 5,000' depth? My natural way of thinking about fluidics is in terms of electrical analogies: the well is pressurized at 13,000 psi, analogous to a voltage across a pore (resistor). Adding a parallel resistor can reduce the flow (if there's a common series resistance), but it won't eliminate the leak until the reservoir is exhausted.
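
The commenter's electrical analogy can be sketched numerically: reservoir pressure plays the role of a voltage source, flow paths are resistors. With a shared series resistance (the reservoir's own flow resistance), a parallel path reduces but does not eliminate flow through the leak, exactly as stated. All values below are arbitrary illustrative numbers, not well data:

```python
# Electrical analogy for a relief well: pressure = voltage, flow = current,
# flow resistance = electrical resistance. All values are arbitrary.

V = 13000.0     # "reservoir pressure" (the voltage source)
R_series = 5.0  # shared resistance between reservoir and the branch point
R_leak = 10.0   # resistance of the leaking well bore
R_relief = 10.0 # resistance of the hypothetical relief path

# Without the relief path: all flow goes through the leak.
flow_leak_alone = V / (R_series + R_leak)

# With the relief path in parallel with the leak:
R_parallel = (R_leak * R_relief) / (R_leak + R_relief)
V_branch = V * R_parallel / (R_series + R_parallel)
flow_leak_with_relief = V_branch / R_leak

print(flow_leak_alone)        # higher
print(flow_leak_with_relief)  # lower, but still non-zero
```

This is why the actual kill procedure is not just to open a parallel path but to pump heavy mud down the relief well: the mud column's hydrostatic head opposes the reservoir pressure directly, driving the leak flow toward zero rather than merely diverting part of it.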

Also, about blowing up the well: nukes and big bombs are silly. But isn't a 13,000' oil well a somewhat complex, unnatural system? As such, there should be a million ways to break it -- and stop the flow -- rather than fix it to regain control of the flow. A relatively small explosion, e.g. <100 lbs of explosives detonated a few hundred feet or 1,000' from the bore 10,000' underground, shouldn't cause landslides or any effect on the ocean floor. But the shock wave, hitting a discontinuity in the speed of sound at the pipes and liquid in the well, should create huge shear forces that would implode the well. Seismic reverberations should further send debris down the crushed bore. Again, there are a billion ways this should shake down and settle into a lower-energy, higher-entropy state that seals the well, without risking a massive release of oil. This should also be far quicker and easier than trying to score a bullseye hit on the existing bore.

Thanks again!!

The procedure (per Rockman, our resident expert) is to change the drill bit to a steel milling bit for the last foot, cut through the steel, and inject very high-density "mud" into the wild well.

Once oil is out of the cap (or trap) formation, it wants to go up. The steel used is of exceptional quality and designed for very high pressures. With high internal pressures, collapsing such a well would be difficult.


Hi Alan,

Is it typical that the entire 13,000' of the well is steel-lined? My knowledge of oil wells is near squat ... my water wells in So. Cal. are only cased near the top; the lower parts are rock/dirt, native material. That stuff has got to be brittle and inelastic, so under a nearby shock wave it would crumble.