Kurt Vonnegut Sums Up the Situation of Humans and Other Life on Earth. Use the Main Menu (below-right) for desired topics → ...

Monday, May 16, 2016

Ingenious or Misleading Rationalization for the "Pause" in Global Warming?

[Left] The temperature of a glass of ice water "Pauses" at freezing until nearly all the ice melts.
[Right] Top: The ice water effect is due to the Heat of Fusion of water. Does that effect apply to the melting of polar ice caps, and explain the statistical "Pause"? Bottom: The IPCC's climate theory produced climate models that grossly over-estimated warming and failed to predict the "Pause". 
"A glass of ice water in a hot place is certainly warming," said the confident questioner, "despite the thermometer 'pausing' at freezing until most of the ice has melted. Don't look at the thermometer to detect the warming, watch the ice cubes melt!"

"Seriously," he continued, "we should watch the alarming melting of glaciers and polar sea ice rather than the 'Pause' in Global Warming according to thermometer readings."

When I give talks about climate science to intelligent audiences, my general theme is that Global Warming is REAL, and partly due to human activities, but it is NOT a big DEAL:
  • Yes, the Atmospheric "Greenhouse" effect is real. It is responsible for the Earth being about 33⁰C (60⁰F) warmer than it would be absent "Greenhouse" gasses in the Atmosphere.
  • Yes, Carbon Dioxide (CO2) is a key "Greenhouse" gas, second only to Water Vapor (H2O).
  • Yes, CO2 has increased by about a third during the past century (from 300 to 400 parts per million), mostly due to unprecedented burning of large quantities of coal, oil, and natural gas.
  • Yes, temperatures have gone up by about 0.8⁰C (1.5⁰F) over the past century.
  • But warming is mostly natural, due to Earth's recovery from the depths of the last ice age, some 18,000 years ago.
  • No matter what we do, the Earth will warm for hundreds or thousands of years, then plunge into the next ice age. Of course this will not happen monotonically. There will be multi-decade periods of warming and of cooling, just as the Medieval Warm Period (1000-1200s) was considerably warmer than today, and the Little Ice Age (1600-1700s) was much colder.
  • IPCC climate theory and computer models have failed to match actual satellite temperature data. Alarming predictions have not come to pass. They totally missed the statistical warming "Pause" of the early 2000s. [The IPCC is the Intergovernmental Panel on Climate Change]
  • [See the lower right section of the figure] For several periods, even the lowest edge of the Yellow error band is warmer than the highest edge of the Blue band! [These error bands are 5%-95% statistical confidence limits, which means there is less than 1 chance in 20 any point outside a band is due to random error. Thus, there is less than 1 chance in 20 x 20 = 400 that any point in the White space between the Yellow and Blue bands is due to random error. Either the NASA satellite sensor systems are badly out of order or the IPCC climate models are terribly wrong!]
  • The gross failure of the IPCC models to correctly predict warming, despite a significant increase in CO2, proves that the models, and the underlying IPCC climate theories, are wrong. 
  • The most generous explanation is that the IPCC climate scientists simply over-estimated the sensitivity of climate to CO2 increase by a factor of two to three. 
  • The most likely explanation is that their climate theory is either incomplete or totally wrong, so their models failed. Either that, or, for political purposes, they purposely jiggered the model parameters to create alarming projections and keep research funding flowing from us taxpayers to their organizations.
Rationalizations for what happened to the excess heat due to human-made CO2:
  • The Oceans absorbed it! 
  • The melting Ice Caps absorbed it!
How can the world's leading climate theorists and modelers still be considered competent if they did not know about the heat capacity of the oceans? (Or, apparently, even the Ice Water Experiment! :^)

The Abstract for the recently published study by Michael "Nature Trick - Hockey Stick" Mann, et al., admits the reality of the "Pause", which it calls a "temporary slowdown". Guess what he blames it on?:
The temporary slowdown in large-scale surface warming during the early 2000s has been attributed to both external and internal sources of climate variability. Using semiempirical estimates of the internal low-frequency variability component in Northern Hemisphere, Atlantic, and Pacific surface temperatures in concert with statistical hindcast experiments, we investigate whether the slowdown and its recent recovery were predictable. We conclude that the internal variability of the North Pacific, which played a critical role in the slowdown, does not appear to have been predictable using statistical forecast methods... [emphasis mine]
In other words, the unpredictable "internal variability of the North Pacific" ate my alarmist projection! (A variation on the old "dog ate my homework" excuse :^)

Why was it not predictable?
  • Because statistical forecast methods are weak? 
  • Because the alarmist climate theory is wrong? 
  • Because they knew better but did not dare to rein in their catastrophic predictions for fear of losing research grants?
I find it amazing that so many of my friends (who are otherwise intelligent people) cling to their firm belief in a coming human-caused climate catastrophe. Their confidence is based on the alarming predictions rooted in IPCC climate theory and computer models.

Yet, like the confident questioner I mentioned in the first paragraph, they seem to acknowledge that the IPCC theorists did not know about the relatively simple concepts of ocean heat capacity, or even the temperature profile of ice water due to the Heat of Fusion!

If these models could not correctly predict a near-term event, such as the "Pause", why put any credence in their catastrophic predictions for 50 or 100 years hence?

How Does the Ice Water Experiment Relate to Earth's Proportion of Ice to Liquid Water?

To satisfy my own curiosity, I decided to do some research and figure out how much the melting of glaciers, sea ice, and ice sheets might have reduced Global Warming since 1979. This period includes the statistical "Pause" (or "temporary slowdown in large-scale surface warming during the early 2000s" as Mann refers to it).

The Ice Water Temperature Pause Experiment works for two reasons:
  1. It takes nearly 80 times as much energy to melt a given mass of ice as it does to raise an equivalent mass of water 1⁰C (1.8⁰F). (This is called the heat of fusion associated with the state transition of water from solid to liquid form.)
  2. The Ice Cubes make up a substantial percentage of the total mass of the ice water mixture. (When the ice cubes melt down to a small proportion of the water, the temperature does rise.)
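For readers who want to check those two conditions, here is a short energy-balance sketch of the Ice Water Experiment in Python (a simplified model of a well-stirred glass, using the standard values of 334 Joules per gram for the Heat of Fusion of ice and 4.186 Joules per gram per degree C for the specific heat of liquid water; note that their ratio is the "nearly 80 times" factor mentioned above):

```python
# Simplified energy balance for a glass of ice water receiving heat.
HEAT_OF_FUSION = 334.0   # J per gram to melt ice at 0 deg C
SPECIFIC_HEAT = 4.186    # J per gram per deg C for liquid water
# Note: HEAT_OF_FUSION / SPECIFIC_HEAT is about 80, the factor above.

def temperature_after(heat_joules, ice_g, water_g):
    """Return (temperature in deg C, remaining ice in grams) after adding heat."""
    melt_energy = ice_g * HEAT_OF_FUSION
    if heat_joules <= melt_energy:
        # All heat goes into melting ice; the thermometer "pauses" at 0 deg C.
        return 0.0, ice_g - heat_joules / HEAT_OF_FUSION
    # Ice is gone; leftover heat warms the (now all-liquid) water.
    leftover = heat_joules - melt_energy
    total_water_g = ice_g + water_g
    return leftover / (total_water_g * SPECIFIC_HEAT), 0.0

# 100 g of ice in 200 g of water: 10 kJ is not enough to finish melting,
# so the temperature is still 0 deg C with about 70 g of ice left...
print(temperature_after(10_000, 100, 200))
# ...but 50 kJ melts all the ice and then warms the glass above 0 deg C.
print(temperature_after(50_000, 100, 200))
```

The "Pause" in the model ends exactly when the ice runs out, which is the whole point of the questioner's analogy.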
So, what is the percentage of ice to liquid water on Earth, and has enough of it melted to account for the failure of the IPCC models since 1979, or during the "Pause"?

According to Debenedetti, Pablo G. & H. Eugene Stanley, "Supercooled and Glassy Water," Physics Today, Vol. 56, No. 6 (June 2003): 40 (quoted by http://hypertextbook.com/facts/2000/HannaBerenblit.shtml), here is what we need to know about the Earth's ice and water:
  • 1,300 x 10⁶ km³ of water in the oceans [10⁶ km³ = millions of cubic kilometers]
  •      33 x 10⁶ km³ of ice in the polar ice caps
    •   3 x 10⁶ km³ in the Greenland ice shelf and
    • 30 x 10⁶ km³ in the Antarctic ice shelf
  •     0.2 x 10⁶ km³ of ice in glaciers
  •     0.1 x 10⁶ km³ of water in lakes
  •     0.0012 x 10⁶ km³ of water in rivers
  •     0.22 x 10⁶ km³ of water in annual precipitation
We can see from the above that virtually all of the Earth's liquid water is in the oceans and virtually all the ice is in the polar caps. Even if we froze all the water in lakes and rivers, along with a full year's precipitation, and combined that with the ice in glaciers, the total would be about 0.52 x 10⁶ km³, less than 2% of the polar ice caps and about 0.04% of the total water on Earth!

If both the Arctic/Greenland and Antarctic Ice were to melt, that would account for a reduction in warming of about 33 x 80 / 1300 = 2⁰C (3.6⁰F). Wow! That seems substantial, and there certainly would be catastrophic flooding in some low-lying places if all the Earth's ice melted.

However, actual ice melt rates are much, much, much less, according to https://nsidc.org/cryosphere/sotc/ice_sheets.html
... best estimates of mass balance changes per year for 1992 through 2011: Greenland: lost 142 ± 49 gigatons; East Antarctica: gained 14 ± 43 gigatons; West Antarctica: lost 65 ± 26 gigatons; Antarctic Peninsula: lost 20 ± 14 gigatons. [net annual melt loss 213 gigatons]
Conveniently, 1 gigaton is the weight of one cubic kilometer (km³) of fresh water. So, 213 gigatons is equal to 213 km³ of ice (momentarily ignoring the fact that 1 km³ of ice weighs a bit less than 1 km³ of sea water). Lacking more specifics, let us assume an average annual melt rate of 213 km³ is at least roughly representative of average annual melt rates from 1979 to 2015. Thus, the total melt for 1979-2015 would be 213 x 36 = 7,668 km³, which we will round up to 8,000 km³ to more than make up for the difference in weight of ice and sea water.

So how much does all that melting amount to in terms of delayed temperature increase? 80 x 8000 / 1,300,000,000 = 0.000492⁰C, which we may round up to 0.0005⁰C (0.0009⁰F) of the warming since 1979, and even less of the missing warming during the "Pause".
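The whole chain of arithmetic in this section is easy to reproduce; here is the same calculation in Python (all numbers taken directly from the figures quoted above):

```python
# Back-of-envelope check of the ice-melt arithmetic quoted above.
OCEAN_VOLUME = 1_300 * 10**6   # km^3 of liquid water in the oceans
FUSION_FACTOR = 80             # melting 1 kg of ice ~ warming 80 kg of water 1 deg C

annual_melt = 213              # km^3/yr, NSIDC net ice-sheet loss 1992-2011
years = 2015 - 1979            # 36 years
total_melt = annual_melt * years          # 7,668 km^3
total_melt_rounded = 8_000                # generous rounding (ice vs. sea water)

# Warming "delayed" by the heat of fusion of the actual melt, in deg C.
delayed_warming = FUSION_FACTOR * total_melt_rounded / OCEAN_VOLUME
print(round(delayed_warming, 4))          # ~0.0005 deg C

# For comparison: if ALL polar ice (33 million km^3) melted.
all_ice = 33 * 10**6
print(round(FUSION_FACTOR * all_ice / OCEAN_VOLUME, 1))   # ~2.0 deg C
```

The two print statements reproduce the 0.0005⁰C and 2⁰C figures in the text.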

So, total Earth ice melt accounts for less than 0.09% of the warming missing from the IPCC's alarming projection. Not so impressive, is it?

So, if anyone hits you with the Ice Cube Temperature Pause Experiment, congratulate them on being 0.09% right (and thus 99.91% wrong :^)

Ira Glickstein

Sunday, April 24, 2016

Tips for VISUALIZING Science and Technology

I recently presented a well-received talk on "VISUALIZING Science and Technology" to a general audience at the Civil Discourse Club in The Villages, FL. I've also presented more technically detailed material to the Science and Technology Club.

The key to UNDERSTANDING (at least for me) is the ability to "picture" the information, almost as if you had a physical model you could "touch and feel".

In my talk, I discussed several techniques, including animation, "blick" graphics, and others, and used them to illustrate:
  • Einstein's Relativity (based on his 1905 theory of "Special Relativity" and 1915 "General Relativity"), 
  • A "Nash Bargain" (if you saw the 2001 hit movie "A Beautiful Mind", you know that John Nash earned the 1994 Nobel Memorial Prize in Economics despite having paranoid schizophrenia), 
  • Bayesian "Inverse Probability" (based on the work of Rev. Thomas Bayes, published in 1763), and 
  • Global Warming (Real but not a big Deal).


How to UNDERSTAND the relativistic effects of Kinetic Energy (Special Relativity) and Gravitational Potential Energy (General Relativity)?

One of the relativistic effects is "Time Dilation". Clocks (of all kinds, including biological clocks) slow down when they are at high levels of either Kinetic or Gravitational Potential Energy.

Yes, I said it was CLOCKS that slow down (not TIME), and that it is "Kinetic Energy" (not "Speed") and "Gravitational Potential Energy" (not "Gravity") that cause these relativistic effects. Some readers may think it is TIME that slows down and that it is Relative Speed and Gravity/Acceleration that cause it, but I plan to persuade you otherwise!

Please refer to the above graphic

The image above is a screen capture from an animated "thought experiment" involving four identical, extraordinarily accurate clocks and an idealized Earth that has a conveniently-located, friction-free Tunnel, cut through the Center, from the left Surface to the right Surface.

Initially, all clocks are "at rest". The RED and BLUE clocks are at the left edge of the image, very far from the gravitational attraction of the Earth. The GREEN clock is fixed in position at the Earth Surface, and the GREY clock is fixed at the Center.

OK, let us set the BLUE and RED clocks to zero. Keep the BLUE clock fixed in place and release the RED clock.

Release the RED clock

The RED clock slowly "falls" towards the Earth, accelerating as it gets closer. As the RED clock enters the left end of the Tunnel and passes the GREEN clock at the Surface, each clock makes a record of its reading and that of the other. As the RED clock passes the GREY clock at the Center, they each record both readings. The RED clock continues out the right end of the tunnel, decelerates as it gets further from the Earth, momentarily stops moving very far from the Earth, and then "falls" back to the Earth.

Compare Intervals recorded by each clock

As the RED clock passes the GREY and GREEN clocks, and as it stops momentarily next to the BLUE clock, they exchange readings and figure out the interval between their first encounter and this one. According to Einstein, the RED clock will measure a shorter interval than the GREY clock, the GREEN clock, and the BLUE clock.

This difference may be expressed in picoseconds per second (picosec/sec): how many trillionths of a second shorter, for each second of elapsed time, the interval indicated on the RED clock is than the interval measured by the other clocks.

Using the BLUE clock as a reference, the table in the image above indicates that the GREEN clock, at the Earth Surface, will measure a "Time Dilation" of 696 picosec/sec, and the GREY clock, at the Center, will measure 1044 picosec/sec. (These values agree with data found on Wikipedia and other sources.)
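The 696 and 1044 picosec/sec figures can be reproduced with a few lines of Python, using the weak-field approximation in which the fractional clock slowdown equals the magnitude of the gravitational potential divided by c² (a sketch with standard values for G, Earth's mass, and Earth's radius; the factor of 1.5 at the Center comes from the textbook idealization of a uniform-density Earth):

```python
# Weak-field gravitational time dilation: fractional slowdown = |Phi| / c^2.
G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24    # kg, mass of Earth
R = 6.371e6     # m, mean radius of Earth
C = 2.998e8     # m/s, speed of light

# Potential relative to a clock "at rest" far from Earth (the BLUE clock).
surface_ps = (G * M / (R * C**2)) * 1e12   # picoseconds per second
center_ps = 1.5 * surface_ps               # uniform-density sphere: Phi(center) = 1.5 x Phi(surface)

print(round(surface_ps))   # ~696 picosec/sec for the GREEN clock at the Surface
print(round(center_ps))    # ~1044 picosec/sec for the GREY clock at the Center
```

Both printed values match the table in the figure.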

So, is it Gravity or Gravitational Potential Energy that causes Time Dilation?

As the graph above shows, Gravity is about zero at the far left, when the RED and BLUE clocks are far from the gravitational attraction of the Earth. Gravity increases to a maximum (negative) value of 32 feet per second squared (32 ft/s²) at the Surface, and then DECREASES to zero at the Center. Gravity is considered negative because it points to the center of the Earth.

Gravitational Potential Energy (also considered negative) is about zero at the far left, continually increases in (negative) magnitude as the RED clock falls toward and passes the Surface, and continues to increase in magnitude, reaching its maximum (negative) magnitude at the Center (unlike Gravity, which decreases in magnitude to zero at the Center).

In mathematical terms, Gravitational Potential Energy increases "monotonically" from the far left to the Center, while Gravity increases and then decreases, so it is "non-monotonic" between these points.

Well, Time Dilation is about zero at the far left, and MONOTONICALLY increases in magnitude from the far left to the Center. Therefore, Time Dilation is more like Gravitational Potential Energy than it is like Gravity. So, it is not Gravity, per se, that causes Time Dilation but it is Gravitational Potential Energy.

The graph indicates that the speed of the RED clock is about zero at the start, increases to 25,000 mph at the Surface, and then to 31,000 mph at the Center. (These values agree with data found on Wikipedia and other sources.)

Kinetic Energy increases and decreases more like Time Dilation than does Speed. (Kinetic Energy varies as the square of the Speed as does Time Dilation). So, it is not Speed, per se, that causes Time Dilation but it is Kinetic Energy.
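The kinetic (Special Relativity) side can be checked the same way: at low speeds the fractional slowdown is v²/2c². Because the RED clock fell from rest far away, its Kinetic Energy at each point exactly equals the (negative) Gravitational Potential Energy there, so its kinetic dilation reproduces the same 696 and 1044 picosec/sec, and the speeds come out at the quoted 25,000 and 31,000 mph (a sketch using the same standard constants and the uniform-density tunnel idealization):

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24    # kg, mass of Earth
R = 6.371e6     # m, mean radius of Earth
C = 2.998e8     # m/s, speed of light
MPH = 2.23694   # conversion factor, m/s to miles per hour

# Falling from rest far away: Kinetic Energy gained = |Potential Energy| lost.
v_surface = math.sqrt(2 * G * M / R)   # escape velocity, reached at the Surface
v_center = math.sqrt(3 * G * M / R)    # uniform-density tunnel model, at the Center

print(round(v_surface * MPH, -3))      # ~25,000 mph at the Surface
print(round(v_center * MPH, -3))       # ~31,000 mph at the Center

# Low-speed kinetic time dilation, v^2 / (2 c^2), in picosec/sec.
print(round(v_surface**2 / (2 * C**2) * 1e12))   # ~696, same as gravitational
print(round(v_center**2 / (2 * C**2) * 1e12))    # ~1044
```

The equality of the kinetic and gravitational figures is exactly why Kinetic Energy, not Speed, tracks Time Dilation.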

(NOTE: This topic is not complete, I intend to add more material)

Ira Glickstein

Wednesday, April 20, 2016



The Atmospheric "Greenhouse" Effect is Real, and the Earth is Warming. How Much Global Warming is due to Human Activities?
Einstein's Special Relativity (1905) and General Relativity (1915) Revolutionized Physics Forever

Practical "Artificial Intelligence Advisers" YOU Can Use - Computer-Aided Decision Making
Hierarchy Theory - "The Magical Number Seven (plus or minus two) - Optimal Span


Wednesday, April 13, 2016



John Nash won the 1994 Nobel in Economics for his work on what came to be known as “Nash Equilibrium”, where two or more competing entities “cooperate” (without illegally colluding) to reach a “Nash Bargain”. A Nash Bargain is reached when two or more competitors produce optimal quantities of the same or similar product or service to maximize their own self-interest, assuming others are rational and will do the same. The book and movie “A Beautiful Mind” dramatized Nash’s life story and work.
A relatively simple Excel-based tool helps you calculate a Nash Bargain in a competitive situation. It is available for FREE.

What Can the Nash Bargain Advisor Do For You?

Whether you are in a competitive business situation or not, it is important to understand how producers and consumers may come to an effective, mutually-beneficial market solution. If a producer is selling a product or service in an “elastic” market, where demand increases with reduced price, profit may sometimes be maximized by reducing prices to increase sales.
The Nash Bargain Advisor can handle a competitive situation where you know: 1) The “Demand Curve” data for the market, namely the relationship between market price and quantity of product on the market, 2) Your own Cost Structure, namely the non-recurring investment to set up your production facilities and the recurring production cost of each item sold, and 3) An estimate of your competitor(s) Cost Structures.
The Nash Bargain Advisor will compute the optimal quantity, market price, and estimated profit (or loss) you are likely to make if you follow the given advice, and if your competitors independently do the same.

About “Elastic” Markets

Let us start with a monopoly, the simplest case. Say a given producer is the only source for some unique product or service. Of course, if it is “a necessity of life” they can charge anything they want for it. On the other hand, if people can do without it, the market will be elastic and the monopoly producer will have to set the price and quantity to obtain the highest profit.
It is a myth that the best way to increase profits is to increase prices. Often reducing prices will increase sales and reduce unit production costs such that overall profits increase. The figure below illustrates that case.
The heavy black line is the Demand Curve that indicates how the market price declines from about $12 per unit to $4 when the quantity on the market increases from 10 million to 100 million units. (You can change the Demand Curve by entering different numbers on the SETUP sheet of the Nash Bargain Advisor.)
The thin red and blue curves indicate the production Cost Structures per unit for two alternate production facilities, as a function of the number of units produced. (You can change the Cost Structures by entering different numbers on the SETUP sheet of the Nash Bargain Advisor.)
A producer (whether a monopoly or not) has to decide the optimum level of capital investment. Capital investment in more automated production facilities will increase initial, non-recurring costs, but may reduce incremental production costs by a sufficient amount to pay back the investment -or not- depending upon the number of units eventually sold and the market price when they are sold. The Nash Bargain Advisor allows you to enter and compare two different sets of Cost Structures.
In the graph above, the thin red curve represents a more highly-automated producer we’ll call “Alpha” and the thin blue dashed curve a less-automated alternative we’ll call “Beta”. Note that, if you produce a smaller number of units, the production cost for each (which is the recurring, incremental cost per unit plus the share of the non-recurring costs) will be higher than if you produce a larger number of units. A more automated facility, corresponding to higher non-recurring investment, will be at a relative disadvantage for low production quantities but may gain an advantage for larger production quantities, as indicated by the thin red and blue curves.
The heavy red and dashed blue curves indicate the profit per unit as a function of the number of units produced. You might think the maximum overall profit occurs when the profit per unit is maximized, but you would be wrong! The figure below illustrates the overall profit (or loss) for Alpha and Beta alternatives as a function of quantity produced. 
The overall profit for the alternative Cost Structures is maximized with a market quantity of 65 million for Alpha and 62 million for Beta, assuming each is a monopoly in a given market. This corresponds to a market price of $7.20 to $7.36 per unit. If the monopoly produces too few units, say 10 to 14 million, it will get $11.68 to $12 per unit, but will lose money overall. On the other hand, if it produces too many units, say over 70 million, there will be a glut on the market and overall profits will go down substantially.
Different Demand Curve and production Cost Structures that you can experiment with on the SETUP sheet of the Nash Bargain Advisor may result in situations where overall profits increase or decrease monotonically as production quantities increase. However, it is much more typical for profits to maximize with a moderate number of units on the market and for there to be lower profits (or net losses) for very low or very high production quantities.
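You can reproduce the monopoly optimum with a brute-force scan in Python (a sketch of the calculation, not the Advisor spreadsheet itself; I assume a straight-line fit to the Demand Curve above, $12 at 10 million units falling to $4 at 100 million, so the computed prices come out close to, but not exactly equal to, the figures quoted):

```python
def price(q_millions):
    """Assumed linear demand: $12 at 10M units falling to $4 at 100M units."""
    return 12.0 - 8.0 * (q_millions - 10.0) / 90.0

def monopoly_profit(q, fixed_m, unit_cost):
    """Overall profit in $M for q million units sold at the market price."""
    return q * (price(q) - unit_cost) - fixed_m

# Scan integer quantities (millions) for the profit-maximizing output.
best_alpha = max(range(10, 101), key=lambda q: monopoly_profit(q, 150, 1.40))
best_beta = max(range(10, 101), key=lambda q: monopoly_profit(q, 120, 1.90))
print(best_alpha, round(price(best_alpha), 2))   # 65 million units for Alpha
print(best_beta, round(price(best_beta), 2))     # 62 million units for Beta
```

The scan lands on the same 65 and 62 million unit optima quoted above.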

About Competitive Markets

The insight John Nash brought to Economics and that gained him the Nobel in Economics for 1994 is that the situation is the same for multiple producers in a competitive marketplace. If two or more companies produce the same or similar products in an elastic market, such as Burger King and McDonalds or HP and Acer, it is to their advantage to collectively produce a certain number of units, neither too few nor too many.
If there are too few fast-food restaurants in a given geographic area, they may be able to charge a bit more per burger, but they will sell fewer as potential customers choose to eat at home or to go to full-service eateries. On the other hand, if there are too many fast-food places, they will have to reduce prices drastically to attract customers and their overall profits may decline or turn into losses.
The same is true for PC makers. As production quantities have multiplied, prices have come down sharply and features have improved dramatically. This, in turn, has increased sales to the point of nearly 100% market penetration in the US and other westernized countries. More and more people have at least one PC and some have a desktop plus a laptop, and other families have one for each member of the family. (My wife and I have one desktop plus three laptops between us.) With the economic slowdown, however, there may be too many units on the market and prices may drop to the point where some producers face losses and have to cut back production or drop out of the market.
So, how can competitors in an elastic market adjust production quantities such that they can each make a fair profit? Well, they could collude and fix quantities and prices and divide markets to increase their profits. However, that would be totally illegal!
Using game theory, John Nash came up with a way to reach “equilibrium” without illegal collusion. His solution is for each competitor to use their own Cost Structure and estimate the Cost Structures of competitors and calculate the quantity they should produce, assuming others are rational and will do the same. (The highlighted part of the previous sentence is the most important part. If competitors are not rational, or if they try to “cheat” by producing too many units, the Nash Bargain will not work.)
The Nash Bargain Advisor calculates the optimal quantities each competitor should produce to maximize their own self-interest, assuming others “cooperate” by doing the same in a rational way. The Nash Bargain Advisor also calculates the consequences if one or more producers “cheat” and produce more than their optimal quantity, or if one or more producers under-produce due to miscalculation or disruption in supplies or production facilities.
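For the curious, the textbook version of Nash's idea for two producers is the "Cournot duopoly": each producer repeatedly picks its best output given the other's last choice, until neither wants to change. The sketch below iterates best responses under the same assumed linear demand curve. Note this is the standard game-theory construction, not the Advisor's own algorithm; the plain Cournot equilibrium (about 45 and 39 million units) is less cooperative than the Advisor's 32 and 31 million:

```python
def price(total_q):
    """Assumed linear demand: $12 at 10M units falling to $4 at 100M units."""
    return 12.0 - 8.0 * (total_q - 10.0) / 90.0

def best_response(rival_q, unit_cost):
    """Best own output (million units) given the rival's output, by brute force."""
    def profit(q):
        return q * (price(q + rival_q) - unit_cost)
    return max(range(0, 101), key=profit)

# Iterate best responses until quantities stop changing (a Nash equilibrium:
# neither producer can gain by unilaterally changing its output).
q_alpha, q_beta = 0, 0
for _ in range(50):
    new_alpha = best_response(q_beta, 1.40)   # Alpha: $1.40 per unit
    new_beta = best_response(new_alpha, 1.90) # Beta: $1.90 per unit
    if (new_alpha, new_beta) == (q_alpha, q_beta):
        break
    q_alpha, q_beta = new_alpha, new_beta
print(q_alpha, q_beta, round(price(q_alpha + q_beta), 2))
```

Because each Cournot player ignores the harm its extra output does to the other's price, the pair over-produces relative to the cooperative Nash Bargain split, which is exactly the "cheating" temptation discussed below.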

How to Use the Nash Bargain Advisor

You have to enter only eight items of data. The SETUP sheet is shown below.
The first four entries define the Demand Curve. In the real world, that data would come from marketing surveys or actual experience with sales volume at various outlets with different prices. It is assumed we are working in the relatively linear portion of the Demand Curve, where market price and quantity available vary inversely.
Enter the minimum reasonable number of units that may be on the market in the first cell. In the above example, that number is 10 million units. With that number of units available, market demand will support a price of about $12 per unit, entered into the second cell.
Then, enter the maximum reasonable number of units and the estimated price market demand will support. In the example, 100 million units will drive the price down to about $4 per unit.
The next four entries define the Cost Structures for two different, competitive producers that we will call Alpha and Beta.
The first cell for each producer contains the non-recurring costs for setting up the production facility and various fixed costs that do not vary with production volume. In this case, Alpha is assumed to have invested $150 million and Beta $120 million.
The second cell for each producer contains the incremental production cost per unit, including supplies, factory, distribution, and sales costs. Alpha is assumed to produce units at $1.40 each and Beta at $1.90.
For this example, the Nash Bargain Advisor calculates that Alpha should optimally produce about 32 million units and Beta about 31 million, for a total of about 63 million units on the market.
The graphs on the SETUP sheet indicate the options Alpha and Beta have, assuming their competitor “cooperates” with the Nash Bargain. (A larger version of these graphs is available on the COOPERATION sheet.) The figure below shows the situation for Beta assuming Alpha produces their Nash Bargain quantity.
Note that Beta could increase their profits by producing about 40 million units, about 9 million more than the Nash Bargain calls for. However, doing so will reduce Alpha’s profits to nearly zero, making Alpha, in game theory terms, the “SUCKER”. That would most likely prompt Alpha to retaliate by also over-producing in the next production cycle. If Beta produces fewer than the Nash Bargain quantity, their profits will decrease. Beta will lose money if they decrease below about 24 million units. If Beta under-produces, Alpha, the cooperator, will see the market price go up and Alpha will earn greater profits.
The situation is similar for Alpha, assuming Beta produces their Nash Bargain quantity, see the figure below.
So, for their long-term self-interest, both Alpha and Beta should refrain from cheating and avoid getting into a “price war”.
The Nash Bargain Advisor also calculates nine examples for the cases where both Alpha and Beta cooperate and where one or both competitors over- or under-produce.
Here is a summary based on the SETUP data discussed in the previous paragraphs:
  • BOTH COOPERATE: Market price is $7.24, Alpha makes a net profit of $40M and Beta $46M.
  • ALPHA CHEATS (over-produces by 50%): Market price drops to $5.80, Alpha (Cheater) makes a hefty net profit of $64M and Beta (the SUCKER) is driven down to a profit of $1M.
  • BETA CHEATS (over-produces by 50%): Market price drops to $5.86, Beta (Cheater) makes a hefty net profit of $65M and Alpha (the SUCKER) is driven down to a loss of $5M.
  • BOTH CHEAT: Market Price drops to $4.42, both Alpha and Beta lose about $3M each.
  • ALPHA UNDER-PRODUCES (by 50%): Market price is driven up to $8.68. Beta (Cooperator) makes a hefty profit of $91M and Alpha (under-producer) takes a loss of $32M.
  • BETA UNDER-PRODUCES (by 50%): Market price is driven up to $8.62. Alpha (Cooperator) makes a hefty profit of $84M and Beta (under-producer) takes a loss of $15M.
  • BOTH UNDER-PRODUCE (by 50%): Market price is driven up to $10.06. Alpha takes a loss of $9M and Beta’s profits go down to only $7M.
  • ALPHA CHEATS (over-produces by 50%) and BETA UNDER-PRODUCES (by 50%): Market price is $7.18, Alpha (Cheater) makes a hefty profit of $130M and Beta (under-producer) takes a loss of $31M.
  • BETA CHEATS (over-produces by 50%) and ALPHA UNDER-PRODUCES (by 50%): Market price is $7.30, Beta (Cheater) makes a hefty profit of $132M and Alpha (under-producer) takes a loss of $54M.
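You can approximate the scenario table above with a few lines of Python (the straight-line fit to the Demand Curve is my assumption, since the Advisor's exact internal fit isn't given, so the computed prices and profits come out close to, but not exactly equal to, the figures listed):

```python
def price(total_q):
    """Assumed linear demand: $12 at 10M units falling to $4 at 100M units."""
    return 12.0 - 8.0 * (total_q - 10.0) / 90.0

def profits(q_alpha, q_beta):
    """Net profit in $M for each producer at the resulting market price."""
    p = price(q_alpha + q_beta)
    return (q_alpha * (p - 1.40) - 150,   # Alpha: $150M fixed, $1.40/unit
            q_beta * (p - 1.90) - 120)    # Beta: $120M fixed, $1.90/unit

ALPHA_Q, BETA_Q = 32, 31                  # Nash Bargain quantities from the post
print(profits(ALPHA_Q, BETA_Q))           # both cooperate: roughly $38M / $47M
print(profits(ALPHA_Q * 1.5, BETA_Q))     # Alpha cheats by 50%: Beta the SUCKER
print(profits(ALPHA_Q * 0.5, BETA_Q))     # Alpha under-produces by 50%
```

Swapping the quantities reproduces the mirror-image Beta scenarios the same way.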
Cheating by one or more competitors gluts the market with excess product and drives market price down. That, at least temporarily, benefits consumers. If a price war develops, consumers may benefit for several years. However, there is a risk that one or more competitors may be driven out of business by losses sustained in a price war and that may leave the market to one supplier, which may raise prices for the consumer.
If one or more competitors under-produces, that hurts consumers by reducing supplies below market needs and raising consumer prices.
If some of the competitors under-produce while others over-produce such that the net production quantity on the market is around the number called for by the Nash Bargain, that will neither hurt nor benefit the consumers. However, the under-producer will see a drop in profit and perhaps endure a loss, while the over-producer will see a hefty profit. Therefore, if one or more competitors is hit by a disruption in production, the best action is for the other competitor(s) to over-produce to make up for the disruption.
Therefore, it appears that the best long-term situation for consumers and producers is a competitive market where producers meet their Nash Bargain quantities and do not cheat. Consumers benefit from reasonable and relatively stable prices while producers make a fair profit.

How to Use Nash Bargain Advisor for More Than Two Producers

The Nash Bargain Advisor may be used for more than two producers by combining additional producers into Alpha or Beta. For example, say there are four producers in a given market and two have advanced production facilities while the other two have basic facilities. You could use Alpha for the two advanced facilities and Beta for the two basic facilities, combining their non-recurring costs and dividing their Nash Bargain quantities.

Wednesday, September 16, 2015

INTRODUCTION to Hierarchy Topics - Optimal Span

Optimal Span is at the AMAZING Intersection of Hierarchy Theory, Information Theory, and Complexity Theory!
I love Kurt Vonnegut's great poem that captures the very essence of human inquisitiveness:
Tiger got to hunt,
Bird got to fly,
Man got to sit and wonder 'WHY, WHY, WHY',

Tiger got to sleep,
Bird got to land,
Man got to tell himself he UNDERSTAND

My contribution to "UNDERSTAND" is my PhD dissertation, "Hierarchy Theory - Some Common Properties of Competitively-Selected Systems", System Science Department, Binghamton University, NY, 1996. If you wish to pursue further research in this area please contact me at ira@techie.com. A few copies of my dissertation are available.

The OPTIMAL SPAN HYPOTHESIS is at the heart of my dissertation. Using Hierarchy Theory, Information Theory, and Graph Theory, I proved that Optimal Span is about the same, between five and nine, for virtually all complex structures that have been competitively selected.

That includes:
  • The products of Natural Selection (Darwinian evolution) and 
  • The products of Artificial Selection (Human inventions that competed for acceptance by human society)
My hypothesis is supported by empirical data from varied domains and a derivation from Shannon’s Information Theory and Smith and Morowitz’s concept of intricacy.

You may download my PowerPoint Show that should run on any Windows PC here:


Most complex structures are compositional or control hierarchies:

  • An example of a compositional hierarchy is written language. A word is composed of characters. A simple sentence is composed of words. A paragraph is composed of simple sentences, and so on. 
  • An example of a control hierarchy is a management structure, where a manager controls a number of foremen or team leaders, and they, in turn, control a number of workers.

Hierarchy (from Greek: ἱερός — hieros, ‘sacred’, and ἄρχω — arkho, ‘rule’) originally denoted the holy ranking of the nine orders of angels, from God to the Seraphim and Cherubim and so on down to the Archangels and plain old Angels at the lowest level. Kind of like the organization of God’s Corporation!

The seminal book on this topic is Hierarchy Theory: The Challenge of Complex Systems[ Pattee, 1973 ]. This book includes chapters by distinguished academics, including:
  • Herbert A. Simon (Nobel laureate) on “The Organization of Complex Systems”.
  • James Bonner “Hierarchical Control Programs in Biological Development”
  • Howard H. Pattee “The Physical Basis and Origin of Hierarchical Control” and “Postscript: Unsolved Problems and Potential Applications of Hierarchy Theories”
  • Richard Levins “The Limits of Complexity” 
  • Clifford Grobstein “Hierarchical Order and Neogenesis”.
(Howard Pattee was the chairman of my PhD committee)

A more recent book, Complexity – The Emerging Science at the Edge of Order and Chaos, observes that the “hierarchical, building-block structure of things is as commonplace as air.” [ Waldrop, 1992 ]. Indeed, a bit of contemplation will reveal that nearly all complex structures are hierarchies.
There are two kinds of hierarchy. A few well-known examples will set the stage for more detailed examination of modern Hierarchy Theory:


1 - Management Structure (Control Hierarchy)

Workers at the lowest level are controlled by Team Leaders (or Foremen), teams are controlled by First-Level Managers who report to Second-Level managers and so on up to the Top Dog Executive. At each level, the Management Span of Control is the number of subordinates controlled by each superior. 

The diagram shows three different ways you might organize 49 workers. In (A) you have ONE manager and 48 workers, which is a BROAD hierarchy. Management experts would say a Management Span of Control of 48 is way too much for anyone to handle! In (B) you have THIRTEEN managers in a three-level management hierarchy and only 36 workers, which is a TALL hierarchy with an average Management Span of Control of only 3.3. Management experts would say this is way too inefficient with too many managers! In (C) you have SEVEN managers and 42 workers in a MODERATE hierarchy with an average Management Span of Control of about 6.5. Management experts would say this is about right for most organizations where the workers have to interact with each other. Optimal Span theory supports this common-sense belief!

2 - Software Package (Control Hierarchy)

The Main Line of a computer program controls Units (or Modules, etc.), and the Units control Procedures that control Subroutines that control Lines of Code. At each level, the Span of Control is the number of lower-level software entities controlled by a higher-level entity.

3 - Written Language (Containment Hierarchy)

Characters at the lowest level are contained in Words. Words are contained in Simple Sentences. Simple Sentences in Paragraphs, and so on up to Sections, Chapters and the Entire Document. At each level, the Span of Containment is the number of smaller entities contained by each larger one.

4 - “Chinese boxes” (Containment Hierarchy)

A Large Box contains a number of Smaller Boxes which each contain Still Smaller Boxes down to the Smallest Box. At each level, the Span of Containment is the number of smaller entities contained by each larger one.

Traversing a Hierarchy

Note that Examples 1 and 3 above were explained starting at the bottom of the hierarchy and traversing up to the top while Examples 2 and 4 were explained by starting at the top and traversing to the bottom.

Simple hierarchies of this type are called “tree structures” because you can traverse them entirely from the top or the bottom and cover all nodes and links between nodes.

“Folding” a “String”

A tree structure hierarchy can also be thought of as a one-dimensional “string” that is “folded” (or parsed) to create the tree structure. What does “folding” mean in this context?

As an amusing example, please imagine the Chief Executive of a Company at the head of a parade of all his or her employees. Behind the Chief Exec would be Senior Manager #1 followed by his or her First-Level Manager #1. Behind First-Level Manager #1 would be his or her employees. Behind the employees would be the First-level Manager #2 with his or her employees. After all the First-levels and their employees, Senior Manager #2 would join the parade with his or her First-Levels and their employees, and so on. If you took the long parade and called it a “string”, you could “fold” it at each group of employees, then again at each group of First-Level Managers, and again at the group of Senior Managers, and get the familiar management tree structure!

The above “parade” was described with the Chief Exec at the head of it, but you could just as well turn it around and have the lowest-level employees lead and the Chief Exec at the rear. When military hierarchies go to war, the lowest-level soldiers are usually at the front and the highest-level Generals well behind.

A more practical example is the text you are reading right now! It was transmitted over the Internet as a string of “bits” – “1” and “0” symbols. Each group of eight bits denotes a particular character. Some of the characters are the familiar numbers and upper and lower-case letters of our alphabet and others are special characters, such as the space that demarcates a word (and is counted as a character that belongs to the word), punctuation characters such as a period or comma or question mark, and special control characters that denote things like new paragraph and so on.

You could say the string of 1s and 0s is folded every eight bits to form a Character. The string is folded again at each Space Character to form Words. Each group of Words is folded yet again at each comma or period symbol that denotes a Simple Sentence. Each group of Simple Sentences is again folded to make Paragraphs, and so on.

You could lay out a written document as a tree structure, similar to a Management hierarchy. The Characters would be at the bottom, the Words at the next level up, the Simple Sentences next, the Paragraphs next, and so on up to the whole Section, Chapter, and Book.
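The folding described above can be sketched in a few lines of Python: split a sample paragraph into simple sentences, sentences into words, and words into characters, then report the span of containment at each level. The sample text and the simple punctuation-based splitting rule are assumptions for illustration.

```python
import re

# "Fold" a string into a containment hierarchy and measure the spans:
# sentences per paragraph, words per sentence, characters per word.

text = ("Tiger got to hunt. Bird got to fly. "
        "Man got to sit and wonder why.")

# Fold at sentence-ending punctuation, then at spaces.
sentences = [s for s in re.split(r"[.!?]\s*", text) if s]
words = [s.split() for s in sentences]

sentence_spans = [len(w) for w in words]               # words per sentence
word_spans = [len(word) for s in words for word in s]  # characters per word

print("words per sentence:", sentence_spans)           # [4, 4, 7]
print("average characters per word:",
      round(sum(word_spans) / len(word_spans), 1))     # 3.3
```

Even in this tiny sample, the spans cluster in the single digits, which is the pattern the Optimal Span hypothesis predicts for written language.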


With all these different types of hierarchical structures, each with its own purpose and use, you might think there is no common property they share other than their hierarchical nature. You might expect a particular Span of Control that is best for Management Structures in Corporations and a significantly different Span of Containment that is best in Written Language.

If you expected the Optimal Span to be significantly different for each case, you would be wrong!
According to System Science research and Information Theory, there is a single equation that may be used to determine the most beneficial Span. That optimum value maximizes the effectiveness of the resources. A Management Structure should have the Span of Control that makes the best use of the number of employees available. A Written Language Structure should have the Span of Containment that makes the best use of the number of characters (or bits in the case of the Internet) available, and so on.

The simple equation for Optimal Span derived by [ Glickstein, 1996 ] is:

So = 1 + D·e
(Where D is the degree of the nodes and e is the natural number, 2.71828…)

In the examples above, where the hierarchical structure may be described as a single-dimensional folded string where each node has two closest neighbors, the degree of the nodes is, D = 2, so the equation reduces to:

So = 1 + D·e = 1 + 2 × 2.71828 = 6.43656

“Take home message”: OPTIMAL SPAN, So = ~ 6.4

Also see Quantifying Brooks’ Mythical Man-Month (Knol), [Glickstein, 2003] and [Meijer, 2006] for the applicability of Optimal Span to Management Structures.

[Added 4 April 2013: The Meijer, 2006 link no longer works. His .pdf document is available at http://repository.tudelft.nl/assets/uuid:843020de-2248-468a-bf19-15b4447b5bce/dep_meijer_20061114.pdf ]

Examples of Competitively-Selected Optimal Span

Management Span of Control

Management experts have long recommended that Management Span of Control be in the range of five or six for employees whose work requires considerable interaction. Depending upon the level of interaction, experts recommend up to nine employees per department. This recommendation comes from experience with organizations having different Spans of Control: the most successful tend to have Spans in the recommended range of five to nine, an example of competitive selection.

When the lowest level consists of service-type employees, whose interaction with each other is less complex, there may be a dozen or two or more in a department, but there will usually be one or more foremen or team leaders to reduce the effective Management Span of Control to the range of five to nine. Corporate hierarchies usually have about the same range of first-level departments reporting to the next level up, and so on.

Say you had a budget for 49 employees and had to organize them to make most effective use of your human resources. Which of the following seems most reasonable?

(A) you have ONE manager and 48 workers, which is a BROAD hierarchy. Management experts would say a Management Span of Control of 48 is way too much for anyone to handle!

(B) you have a third-level chief executive, three executive-level managers, each with three department managers, totaling THIRTEEN managers in a three-level management hierarchy and only 36 workers, which is a TALL hierarchy with an average Management Span of Control of only 3.3. Management experts would say this is way too inefficient with too many managers!

(C) you have a second-level manager and six department managers, totaling SEVEN managers and 42 workers in a MODERATE hierarchy with an average Management Span of Control of about 6.5. Management experts would say this is about right for most organizations where the workers have to interact with each other. Optimal Span theory supports this common-sense belief!
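The arithmetic behind the three options can be checked with a short script (Python here, though the original tools are Excel-based), averaging the Span of Control across management levels as the text does:

```python
# Check the three ways to organize a budget of 49 people.
# (A) one manager over 48 workers
# (B) chief over 3 execs, each exec over 3 dept managers, each over 4 workers
#     -> 1 + 3 + 9 = 13 managers, 9 * 4 = 36 workers
# (C) one manager over 6 dept managers, each over 7 workers
#     -> 1 + 6 = 7 managers, 6 * 7 = 42 workers

def avg_level_span(spans_per_level):
    """Average the Span of Control over the levels of the hierarchy."""
    return round(sum(spans_per_level) / len(spans_per_level), 1)

print("(A) broad:   ", avg_level_span([48]))       # 48.0
print("(B) tall:    ", avg_level_span([3, 3, 4]))  # 3.3
print("(C) moderate:", avg_level_span([6, 7]))     # 6.5
```

Only option (C) lands in the five-to-nine range that both management experience and the Optimal Span formula recommend.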

Human Span of Absolute Judgement

Evolution and Natural Selection have produced the human brain and nervous system and our senses of vision, hearing, and taste. It turns out that these senses are generally limited to five to nine gradations that can be reliably distinguished. It is also the case that we can remember about five to nine chunks of information at any one time. This is another example of competitive-selection, where, over the eons of evolutionary development, biological organisms competed and those that best fit the environment were selected to survive and reproduce.

George A Miller wrote a classic paper titled The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information [ Miller, 1956 ]. He showed that human senses of sight, hearing, and taste were generally limited to five to nine gradations that could be reliably distinguished.

Why do we “chunk” things in groups of about seven – seven days of the week, seven seas, seven sins, etc.? The presentation I gave to the Philosophy Club in The Villages, FL, 14 March 2014 provides the theoretical answer. You may download a PowerPoint Show that should run on any Windows computer here: https://sites.google.com/site/iraclass/my-forms/PhiloMAGICALsevenMar2014.ppsx?attredirects=0&d=1

This is an easy-to-understand version of a more technical presentation I made to the Science-Technology Club in February; see http://tvpclub.blogspot.com/2014/02/optimal-span-amazing-intersection-of.html

George A. Miller's classic paper appeared way back in 1956 in the Psychological Review under the intriguing title The Magical Number Seven, Plus or Minus Two – Some Limits on Our Capacity for Processing Information. That paper was extremely important and influential and is still available online. It opens with a strange plea:
"My problem is that I have been persecuted by an integer seven plus or minus two … 
Miller’s paper continues as follows:

"For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals. This number assumes a variety of disguises, being sometimes a little larger and sometimes a little smaller than usual, but never changing so much as to be unrecognizable.

"The persistence with which this number plagues me is far more than a random accident …

"There is, to quote a famous senator, a design behind it, some pattern governing its appearances. Either there really is something unusual about the number or else I am suffering from delusions of persecution."

Miller’s paper is well worth reading and is available on the Internet at this link [Miller, 1956].

Miller presents the results of twenty experiments where human subjects were tested to determine what he calls our "Span of Absolute Judgment", that is, how many levels of a given stimulus we can reliably distinguish. Most of the results are in the range of five to nine, but some are as low as three or as high as fifteen. For example, our ears can distinguish five or six tones of pitch or about five levels of loudness. Our eyes can distinguish about nine different positions of a pointer in an interval. Using a vibrator placed on a person's chest, he or she can distinguish about four to seven different levels of intensity, location, or duration. The average Span of Absolute Judgment is 6.4 for Miller's twenty one-dimensional stimuli.

Miller also presents data for what he calls our "Span of Immediate Memory", that is, how many randomly presented items we can reliably remember. For example, we can remember about nine binary items, such as a series of "1" and "0", or about eight digits, or about six letters of the alphabet, or about five mono-syllabic words randomly selected out of a set of 1000.

At the end of his paper Miller rambles: 
...And finally, what about the magical number seven? What about the seven wonders of the world, the seven seas, the seven deadly sins, the seven daughters of Atlas in the Pleiades, the seven ages of man, the seven notes of the musical scale, and the seven days of the week? What about the seven-point rating scale, the seven categories for absolute judgment, the seven objects in the span of attention, and the seven digits in the span of immediate memory?

For the present, I prefer to withhold judgment.

Perhaps there is something deep and profound behind all these sevens, something just calling out for us to discover it. 
But I suspect that it is only a pernicious, Pythagorean coincidence. [my bold]
Well, it turns out that there IS something DEEP and PROFOUND behind "all these sevens" and I (Ira Glickstein) HAVE DISCOVERED IT. And, my insight applies not only to the span of human senses and memory, but also to the span of written language, management span of control, and even to the way the genetic "language of life" in RNA and DNA is organized. Furthermore, my discovery is not simply based on support from empirical evidence from many different domains, but has been mathematically derived from the basic Information Theory equation published in 1948 by Claude Shannon, and the adaptation of "Shannon Entropy" to the Intricacy of a biograph by Smith and Morowitz in 1982.

Glickstein’s Theory of Optimal Span

Miller’s number also pursued me (Ira Glickstein) until I caught it and showed, as part of my PhD research [Glickstein, 1996], that, based on empirical data from varied domains, the optimal span for virtually all hierarchical structures falls into Miller’s range, five to nine. Using Shannon’s information theory, I also showed that maximum intricacy is obtained when the Span for single-dimensional structures is So = 1 + 2e = 6.4 (where e is the natural number, 2.71828…). My “magical number” is not the integer 7, but 6.4, a more precise rendition of Miller’s number!

Hierarchy and Complexity

Howard H. Pattee, one of the early researchers in hierarchy theory, posed a serious challenge:

Is it possible to have a simple theory of very complex, evolving systems? Can we hope to find common, essential properties of hierarchical organizations that we can usefully apply to the design and management of our growing biological, social, and technological organizations? [Pattee, 1973]
Pattee was the Chairman of my PhD Committee and I took the challenge very seriously!

The hypothesis at the heart of my PhD dissertation is that the optimal span is about the same for virtually all complex structures that have been competitively selected. That includes the products of Natural Selection (Darwinian evolution) and the products of Artificial Selection (Human inventions that competed for acceptance by human society).

Weak Statement of Hypothesis

In the “weak” statement of the hypothesis, it is scientifically plausible to believe that diverse structures tend to have spans in the range of five to nine, based on empirical data from six domains plus a computer simulation.

The domains are:

Human Cognition: Span of Absolute Judgement (one, two and three dimensions), Span of Immediate Memory, Categorical hierarchies and the fine structure of the brain. These all conform to the hypothesis.

Written Language: Pictographic, Logographic, Logo-Syllabic, Semi-alphabetic, and Alphabetic writing. Hierarchically-folded linear structures in written languages, including English, Chinese, and Japanese writing. These all conform to the hypothesis.

Organization and Management of Human Groups: Management span of control in business and industrial organizations, military, and church hierarchies. These all conform to the hypothesis.

Animal and Plant Organization and Structure: Primates, schooling fish, eusocial insects (bees, ants), plants. These all conform to the hypothesis.

Structure and Organization of Cells and Genes: Prokaryotic and eukaryotic cells, gene regulation hierarchies. These all conform to the hypothesis.

RNA and DNA: Structure of nucleic acids. These all conform to the hypothesis.

Computer Simulations: Hierarchical generation of initial conditions for Conway’s Game of Life (two-dimensional). The results conform to the hypothesis.

Strong Statement of Hypothesis

Shannon’s information theory and the concept of intricacy of a graphical representation of a structure [Smith and Morowitz, 1982] can be used to derive a formula for the optimal span of a hierarchical graph.

This work extended the single-dimensional span concepts of management theory and Miller’s “seven plus or minus two” concepts to a general equation for any number of dimensions. I derived an equation that yields Optimal Span for a structure with one-, two-, three- or any number of dimensions!

The equation for Span (optimal) is:

So = 1 + D·e

(Where D is the degree of the nodes and e is the natural number, 2.71828…)

NOTE: For a one-dimensional structure, such as a management hierarchy or the span of absolute judgement for a single-dimensional visual, taste, or sound stimulus, the degree of the nodes is D = 2. This is because each node is a link in a one-dimensional chain or string and so each node has two closest neighbors.

For a two-dimensional structure, such as a 2D visual or the pitch and intensity of a sound or a mixture of salt and sugar, D = 4. Each node is a link in a 2D mesh and so each node has four closest neighbors.

For a 3D structure, D = 6 because each node is a link in a 3D egg crate and has six closest neighbors.
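A few lines of Python evaluate the formula for the one-, two-, and three-dimensional cases:

```python
import math

# Optimal Span So = 1 + D*e for structures of one, two, and three
# dimensions, using D = 2, 4, and 6 nearest neighbors respectively.

def optimal_span(degree):
    return 1 + degree * math.e

for dims, degree in [(1, 2), (2, 4), (3, 6)]:
    print(f"{dims}-D (D = {degree}): So = {optimal_span(degree):.2f}")
# 1-D: 6.44, 2-D: 11.87, 3-D: 17.31
```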

Some of the examples in Miller’s paper were 2D and 3D and his published data agreed with the results of the formula. The computer simulation was 2D and also conformed well to the hypothesis.

In normal usage, complexity and intricacy are sometimes used interchangeably. However, there is an important distinction between them according to [ Smith and Morowitz, 1982 ].

COMPLEXITY - Something is said to be complex if it has a lot of different parts, interacting in different ways. To completely describe a complex system you would have to completely describe each of the different types of parts and then describe the different ways they interact. Therefore, a measure of complexity is how long a description would be required for one person competent in that domain of knowledge to explain it to another.

INTRICACY - Something is said to be intricate if it has a lot of parts, but they may all be the same or very similar and they may interact in simple ways. To completely describe an intricate system you would only have to describe one or two or a few different parts and then describe the simple ways they interact. For example, a window screen is intricate but not at all complex. It consists of equally-spaced vertical and horizontal wires criss-crossing in a regular pattern in a frame where the spaces are small enough to exclude bugs down to some size. All you need to know is the material and diameter of the wires, the spacing between them, and the size of the window frame. Similarly, a field of grass is intricate but not complex.

If you think about it for a moment, it is clear that, given limited resources, they should be deployed in ways that minimize complexity to the extent possible, and maximize intricacy!

Using the [Smith and Morowitz, 1982] concept of intricacy, it is possible to compute the theoretical efficiency and effectiveness of a hierarchical structure. If it has the Optimal Span, it is 100% efficient, meaning that it attains 100% of the theoretical intricacy given the resources used. If not, the percentage of efficiency can be computed. For example, a one-dimensional tree structure hierarchy is 100% efficient (maximum theoretical intricacy) with a Span of 6.4. For a Span of five it is 94% efficient (94% of maximum theoretical intricacy), and it is also 94% efficient with a Span of nine. For a Span of four or twelve, it is 80% efficient.

In Chapter 6 of my novel, Jim and Luke wonder about the control structure for the 1600 scepter-holders:
... After a period of silence, Luke spoke up. “Sixteen hundred people are way too many for there not to be a hierarchical structure,” he began. “If the scepter-holder system was properly designed, according to system science theory at least, there would have to be several grades above the lowest class of scepter-holder.”

He took out his read-WINs and put them on.

“Luke,” I observed, “There’s no WIN coverage in this area …”
“Right,” answered Luke, “But there are processors and software in my read-WINs that allow them to operate independently. I’ve got a program for ‘optimal span’ – you know the ‘magical number seven plus or minus two.’”
“What the heck is that?” I asked, “And why would I care? Where are we going here?”
“Well, back about a century ago, a psychologist named Miller discovered that human perception, such as sight and smell and taste and memory and so on, is limited to five to nine gradations. He called it 'the magical number seven, plus or minus two' or, more scientifically, the 'span of human perception'."
“Another guy, an engineer named Glickstein, about sixty years ago, proved the optimal span for any structure is one plus the degree of the nodes times 2.71828, the natural number ‘e.’ For a one-dimensional string, the degree is two and the formula comes out to be around six and a third, or a little more. He also showed with Shannon’s information theory that the range five to nine was, at least theoretically, over ninety-six percent efficient and four to twelve was over eighty percent efficient. And that’s not just for control hierarchies like a management chain, but also containment hierarchies in all types of physical systems and even software systems like …”
“You just told me how to build a clock,” I laughed, interrupting Luke. “All I want to know is what time it is! Please, tell me why I give a hoot about the range five to nine or the number six and a third or a bit more?” 
“About forty years ago,” continued Luke, “A management expert rediscovered the optimal span theory and proclaimed that all management structures must adhere to it! Did you ever notice how nearly all departments at TABB have either six or seven workers to each manager? How each second-level manager has six or seven first-level managers working for him or her?” 
“Yeah, come to think of it,” I replied, “That’s how it is. On the other hand, when I worked in a factory as a college summer job, we had about a dozen guys and gals in our team.” 
“Well,” replied Luke, “The lowest level, like a platoon in the military, can have ten or twelve or sometimes a bit more. The theory only applies when the workers have to interact with each other in complex ways, not when they’re doing grunt work.” 
“If you’d quit interrupting, I’ll tell you,” Luke said good-naturedly, “According to the optimal span program in my read-WINs, sixteen-hundred scepter-holders would break down into about two-hundred-fifty first-level ‘departments,’ each with six or seven scepter-holders and one higher-level scepter-holder ‘managing’ them. The two-hundred-fifty second-level scepter-holders would report to thirty-six third-level scepter-holders who, in turn, would report to six fourth-level scepter-holders who would report to the top dog scepter-holder if there was one.” 
“Yeah,” replied Luke, “There should be thirty-six scepter-holders at the third level. What about it?” 
“Well,” I began, very seriously, “We have a tradition in Judaism that there are thirty-six ‘tzadikim’ or ‘righteous ones’ for whose sake the world exists. No one knows who they are. When one dies, he, or she I guess, is replaced by another, chosen by God. They are sometimes called the ‘Lamed Vovniks’ because, according to gematria, which we discussed some months ago, the Hebrew letter Lamed stands for thirty and the letter Vuv for six, which adds up to thirty-six.”
“So,” replied Luke with a level of interest that surprised me at the time, “There would be thirty-six especially powerful scepter-holders who would regulate the rest! And they do need regulation. I’m not one-hundred percent pleased with Stephanie’s ethics ..."

Ira Glickstein

Tuesday, September 15, 2015



How are decisions made where you work? In your personal life? In government and politics?

Are we really in control of our own decision-making?
Is it a completely objective, fact-based, unemotional process? 
(If you answered "YES" you are almost certainly WRONG!)

EMOTION generally plays a key role in virtually all decision-making. As I was taught in IBM Marketing School "You buy WHAT you like from WHO you like!"

The image is from a great TED talk that asks the question: "Are we in control of our own decisions?"

The answer is NO!

This TED talk clearly demonstrates how our emotions and other non-rational factors control our decision-making much more strongly than reasonable logic.

For example, the person on the far left is "Tom" and the one on the far right is "Jerry". The figure in the top middle is a distorted version of "Jerry" to make him look ugly. The middle bottom is an ugly version of "Tom".

When presented with the top form, and asked who they would date, most picked good-looking Jerry. When shown the bottom form, they picked good-looking Tom. Amazingly, the ugly choice totally changed the results of the selection process!

The TED presenter, Dan Ariely, uses several other examples to show how our decision process may be totally altered by the presentation of undesirable, non-chosen alternatives.


Well, if a decision is close between two alternatives, which is always the case for hard decisions in business (or the Supreme Court, where, by definition, cases are almost always close choices), a good strategy could be to introduce a slightly "ugly" version of the choice you want the deciders to make.

For example, a prosecutor could include the death penalty as an option, even if he or she thought a 20-year sentence most appropriate. The "ugly" death penalty option would make it more likely the jurors would settle on a long sentence. Given a choice between 10 years and 20 years, they might pick 10. If the death penalty were added to the menu, they would be more likely to choose 20 years.

The other lesson I take from this TED talk is that professionals should adopt methodologies that, to the extent possible, exclude emotional factors.


I have used Excel to construct several "AI Advisers" - Computer-Aided tools that you may use to reduce the emotional quotient in your personal and professional decision-making. They are all totally FREE (and well worth it :^).

Decision-Making In the Face of Uncertainty Utilizing Multiple Evaluation Factors and Weights

My Decision tool "forces" the deciders to consider multiple factors and weights in reaching a decision.

Bayesian Inference (Inverse Probability)


Nash Bargain (Setting the most advantageous price in a competitive market)


Management Span of Control Based on Hierarchy Theory and Information Theory


Visual Acuity

How big, and with what size fonts, should you design signs, PowerPoint presentations, and printed documents? How big should your new High Definition television display be in your living room, your den, your meeting room, etc.?

Ira Glickstein