
DayVectors

oct 2005 / last mod sep 2021 / greg goebel

* 21 entries including: Dawkins' THE BLIND WATCHMAKER, global water supply, the Black Death, resurgent volcanic calderas, 1918 flu deciphered, figuring out crowd crush, supercapacitors, banana horticulture, battle of Palmdale 1956, Afghan opium, virtual machines against digital decay, Japan rises, Democrats be careful, underground coal fires, London Underground map, Dick Feynman, Bretz floods, and suburban wildlife problems.

banner of the month


[MON 31 OCT 05] THE BLACK DEATH (1)
[FRI 28 OCT 05] THE BLIND WATCHMAKER (2)
[THU 27 OCT 05] 1918 FLU DECIPHERED
[WED 26 OCT 05] WALK DON'T RUN
[TUE 25 OCT 05] SUPERCAPACITORS
[MON 24 OCT 05] RESURGENT CALDERAS (3)
[FRI 21 OCT 05] THE BLIND WATCHMAKER (1)
[THU 20 OCT 05] NO BANANAS?
[WED 19 OCT 05] BATTLE OF PALMDALE
[TUE 18 OCT 05] POPPY FIELDS / VIRTUAL MACHINES
[MON 17 OCT 05] RESURGENT CALDERAS (2)
[FRI 14 OCT 05] LIQUID TREASURE (6)
[THU 13 OCT 05] SUNRISE
[WED 12 OCT 05] NO BUBBLY
[TUE 11 OCT 05] SMOLDERING EARTH
[MON 10 OCT 05] RESURGENT CALDERAS (1)
[FRI 07 OCT 05] LIQUID TREASURE (5)
[THU 06 OCT 05] LONDON UNDERGROUND MAP
[WED 05 OCT 05] FEYNMAN CONSIDERED
[TUE 04 OCT 05] MYSTERY OF THE MEGAFLOOD
[MON 03 OCT 05] BAMBI & COYOTES

[MON 31 OCT 05] THE BLACK DEATH (1)

* THE BLACK DEATH (1): As discussed in an article from some years back in SCIENTIFIC AMERICAN ("The Bubonic Plague" by Colin McEvedy, February 1988), in the year 1346, Europe, North Africa, and the Middle East had a population of about 100 million people. By 1352 a quarter of these people were dead, struck down by a wave of pestilence caused by bubonic plague. The outbreak from 1346 to 1352 became known as the "Great Dying" or the "Great Pestilence", but it has come down through history as the "Black Death".

vision of the Apocalypse

The plague had swept through Europe during the reign of the emperor Justinian, 800 years earlier, with milder recurrences for the two centuries after that; similar recurrences occurred for four centuries after the Black Death. The disease has not been a major health threat since then, though it still occurs infrequently in many parts of the world.

About three-quarters of those infected by bubonic plague during the Black Death died from it, usually within about five days. One of the early signs of the disease was the "buboes" that gave it its name: gross and painful swellings of the lymph nodes in the armpits, neck, and groin. Three days after the appearance of the buboes, victims were generally struck down by high fever, becoming delirious and afflicted with black splotches from hemorrhaging under the skin. The buboes continued to swell during the course of the disease, and if the victim lived long enough, they would burst, causing agonizing pain. That was actually an encouraging sign, because it meant the victim was still putting up a fight; half of those who died were gone before this phase.

The disease appeared in two forms. In "septicemic plague", the victim's blood was infected directly, leading to massive hemorrhaging, septic shock, and rapid death. In "pneumonic plague" the victim's lungs were infected, leading to quick collapse and death.

Nobody had any clear idea of what caused the Black Death. People were inclined to think of it as the consequence of unfavorable astrological combinations, or "miasmas" -- malignant vapors in the air. There were more paranoid explanations. Some believed that it was the result of spells cast by evil witches. Some Christians believed it was caused by poisons spread by Moslems, and some Moslems believed it was caused by poisons spread by Christians. Some Moslems and Christians blamed the Jews; in some cases, Jews were burned alive in their houses. It is to some credit of the Church, otherwise noted in medieval times for its persecution of Jewry, that it protested against such pogroms. The pope, who was corrupt but not inhumane, pointed out that the Jews were suffering as badly from the disease as the Gentiles and so were unlikely to have been its authors.

* The disease remained mysterious until 1894, when the French bacteriologist Alexandre Yersin discovered the cause, a rod-shaped bacterium that became known as Yersinia pestis after him. This bacterium infects wild rodent populations around the world, with the infection transmitted from one rodent to another through fleas.

The rat and its associated fleas were the primary disease vectors for the Black Death. If a flea bites an infected rat, the plague bacteria multiply in the flea's gut until it can no longer digest blood. The flea then goes wild, biting continuously in a futile attempt to obtain sustenance. This spreads the infection through the host rat, which then normally dies. The flea then moves on to another host. If the fleas can't find another rat, they attack other hosts, such as humans and their domesticated animals, which lived in close proximity to rats in medieval times. Humans could sometimes pass the disease to one another through the inhalation of infected respiratory droplets, but otherwise the plague was not particularly contagious as such. Once the rodents died off, so did the plague. [TO BE CONTINUED]


[FRI 28 OCT 05] THE BLIND WATCHMAKER (2)

* THE BLIND WATCHMAKER (2): Most critics concede the truth of simple examples of evolution such as the emergence of antibiotic resistance, but call them "microevolution". They insist it is a jump to think that modern evolutionary theory could account for all the elaborations of forms cited by Paley, or "macroevolution". There are those who simply refuse to believe it, claiming that there is no way organisms could spontaneously achieve such levels of elaboration. Some suggest that the only way macroevolution could happen is by supernatural means -- "supernatural" very literally meaning events not known in the rules of nature as we have observed and understand them.

Dawkins points to bats as an example of this seeming design elaboration. Many (though not all) bats rely on sonar for navigation, in which a sound is emitted and its echo then heard to range and identify an object. Radar works on the same general concept, though it uses radio waves instead of sound. The interesting thing is that bat sonar has so many technical similarities to human radar.

For example, if a jet fighter is operating a radar to search the sky for an opponent, it sends out radio pulses at a fairly long interval -- what radar engineers call a "low pulse repetition frequency (PRF)". This allows a pulse to go a long way before a second pulse is sent out, giving the radar more range. If a target is identified and locked, the PRF jumps up drastically, allowing the target to be "seen" in more detail and tracked closely. The common brown bat Myotis will chirp at a rate of about 10 times per second when searching for insects, but this rate will go to 200 times a second when an insect is spotted.
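The tradeoff between PRF and range follows from a simple relation: an echo has to get back to the radar before the next pulse goes out, so the maximum unambiguous range is the distance light covers in half a pulse interval. A minimal sketch in Python; the specific PRF values are illustrative assumptions, not figures from the article:

```python
# Maximum unambiguous range of a pulsed radar: the echo must return
# before the next pulse is transmitted, so R_max = c / (2 * PRF).
# The PRF values below are illustrative, not taken from the article.

C = 299_792_458.0  # speed of light in meters per second

def max_unambiguous_range(prf_hz: float) -> float:
    """Return the maximum unambiguous range in meters for a given PRF."""
    return C / (2.0 * prf_hz)

if __name__ == "__main__":
    for label, prf in [("search (low PRF)", 1_000.0),
                       ("track (high PRF)", 100_000.0)]:
        km = max_unambiguous_range(prf) / 1000.0
        print(f"{label}: PRF {prf:>9,.0f} Hz -> about {km:.1f} km")
```

Raising the PRF a hundredfold cuts the unambiguous range a hundredfold, which is why a search radar chirps slowly and a tracking radar chirps fast -- the same tradeoff Myotis makes.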

Similarly, a radar usually has a single antenna for both transmit and receive. Since the radio echo is faint, the receiver is sensitive and the powerful transmit pulse will tend to fry it. The trick is to use a "duplexer" that shuts off the receiver path while the transmit pulse is being sent. The Myotis bat has sensitive ears to pick up its ultrasonic chirps; a special bone mechanism shuts down the ear channels while a chirp is being emitted.

* There are several other ways in which bat sonar strongly resembles human radar systems, but these two examples get the point across. Is it even possible to imagine that something so sophisticated could arise by natural selection? Of course, if provable small changes can occur over a short period of time -- or maybe even not so small, such as the transition from wolf to pekinese -- then it's not unreasonable to think that many small changes over a long period of time could add up to big changes. Besides, what is improbable over a short period of time will become a certainty over a long period of time: if we didn't age and were indestructible, then sooner or later we could expect to be struck by lightning.

The critics use the failure to observe macroevolution in action as the basis for their accusation of the non-falsifiability of MET. Of course it isn't possible to observe it in action, because it would require observation over millions of years. Advocates reply that this is willfully setting the bar of truth to an unrealistically high level -- like insisting that astrophysicists know nothing about how stars work because they can't build a star, and claiming that small-scale experiments in particle accelerators are irrelevant. If modern evolutionary theory isn't science, then neither is astrophysics.

Certainly, it seems arbitrary to say that MET works up to a certain level but then stops abruptly -- with the level being conveniently adjusted higher as the evidence becomes more substantial. Dawkins created an interesting computer simulation that suggests just how powerful the force of natural selection really is. To simplify the discussion, consider my name spelled as follows:

   GREG GOEBEL

That string consists of 11 characters, including a space. If I were to write a computer program to simply throw together characters and see if the result matches my name, how long would it take to come up with a winner? Given 26 capital letters and a space character, the number of possible strings of 11 characters is 27^11 = 5.56E15. Assuming that a computer could generate and test a million strings a second, going through the entire sequence, which would be the worst case for finding my name, would take over 175 years. For every character added to the string to be searched for, the time would increase by a factor of 27.
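The arithmetic above is easy to reproduce; a few lines of Python confirm the article's figures:

```python
# Reproducing the brute-force arithmetic from the text: 27 symbols
# (A-Z plus space), 11 positions, tested at one million strings per second.

SYMBOLS = 27
LENGTH = 11
TESTS_PER_SECOND = 1_000_000

search_space = SYMBOLS ** LENGTH   # 5,559,060,566,555,523, about 5.56E15
worst_case_seconds = search_space / TESTS_PER_SECOND
worst_case_years = worst_case_seconds / (365.25 * 24 * 3600)

print(f"strings to test: {search_space:.2e}")       # 5.56e+15
print(f"worst case: about {worst_case_years:.0f} years")   # about 176 years
```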

Now let's try another approach: start with a random string of 11 characters; make a batch of copies of it, each with a small chance of any given character being changed at random; keep the copy that matches the most characters of the target; and repeat.
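A minimal sketch of such a selection loop in Python -- the population size and mutation rate are illustrative choices of mine, not parameters from Dawkins' original program:

```python
import random

# A minimal Dawkins-style "weasel" selection loop. Each generation, the
# current string is copied many times with a small per-character mutation
# rate, and the copy matching the most target characters becomes the
# parent of the next generation. Population size and mutation rate are
# illustrative assumptions.

TARGET = "GREG GOEBEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate: str) -> int:
    """Count positions where the candidate matches the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, randomizing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in parent)

def evolve(population: int = 100) -> int:
    """Run selection until the target is reached; return the generation count."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET:
        generation += 1
        parent = max((mutate(parent) for _ in range(population)), key=score)
    return generation

if __name__ == "__main__":
    print(f"reached '{TARGET}' after {evolve()} generations")
```

Selection is the whole trick: instead of restarting from scratch on every failure, each generation keeps whatever partial match it already has, so the search converges in a few hundred generations rather than quadrillions of trials.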

It converges to an answer in less than a minute. Even if it took an hour -- and it doesn't come close -- what's that compared to 175 years? This is what is known as a "genetic" algorithm, and it's nothing more than a crude simulation of a selection process -- natural or artificial, it works as well either way. This is a toy example, but the US National Aeronautics & Space Administration (NASA) used a genetic algorithm to determine the optimum design for a small radio antenna, ending up with a piece of bent wire that looked like something a hopelessly bored schoolkid would make out of a big paper clip.

It is certainly very improbable -- to be precise, 1 chance in 5.56E15 -- that a computer program could throw together a random combination of 11 characters and come up with the string GREG GOEBEL. However, it is straightforward for a computer to make small changes at random, test them, and quickly converge on the answer. Engineers call this a "cut and try" design approach. Dawkins takes the concept farther, writing a program to create animal-like graphical designs he calls "biomorphs", and observes their explosion into a range of forms as the program cycles on.

In any case, Dawkins' "weasel program" has become a classic simple example in evolutionary science, and to no surprise it has been endlessly criticized. One criticism is that it actually represents a model of artificial selection, not natural selection, because it works toward a prespecified goal. Actually, Dawkins pointed that out to begin with, saying the program was "misleading in important ways", most significantly in that it did work towards a prespecified goal, while evolution does not. The weasel program was never intended to be a realistic model of evolution; its purpose was simply to demonstrate the power of a selective process over random assembly -- the bogus "monkeys blindly pounding on typewriters" model of evolution.

More sophisticated programs have been created following the weasel program that model evolution much more accurately, but the critics remain unsatisfied and have come up with more criticisms -- one of the most popular being that the weasel program (and other evolutionary simulations) proves the work of a designer in nature, because the programs are themselves designed.

This sounds plausible for a few seconds -- until it is realized that it's simply reasoning by analogy, with no evidence to show the analogy is valid. This kind of reasoning confuses a representation of a thing with the thing itself, and ends up being something like the old cartoon gag of painting a tunnel mouth onto the side of a mountain -- only to have a train come thundering out. Any natural process, like a hurricane, can be simulated, but nobody could sensibly say that because the simulation was designed, hurricanes must be the work of a Designer and a complete mystery to science. Humans can make fires; that hardly implies that fires are Designed, and so the sciences cannot account for them. [TO BE CONTINUED]


[THU 27 OCT 05] 1918 FLU DECIPHERED

* 1918 FLU DECIPHERED: As discussed in an article from AAAS SCIENCE ("Resurrected Influenza Virus Yields Secrets Of Deadly 1918 Pandemic" by Jocelyn Kaiser, 7 October 2005), in 1918 and 1919, an influenza pandemic swept the globe, killing tens of millions of people. The 1918 flu was extremely virulent and destructive, reducing the lungs of victims to what examiners described as something like "red currant jelly" and, bizarrely, killing people in their prime much more readily than the sickly and elderly. Flu pandemics have occurred since then, but the 1918 pandemic remains something of a grim benchmark, raising the specter that a new flu virus might arise that would be even more destructive. That has made the nature of the 1918 flu virus an interesting subject for researchers.

Now a collaborative team of researchers -- from the US Centers for Disease Control (CDC) in Atlanta, Georgia; the US Armed Forces Institute of Pathology (AFIP) in Washington DC; Mount Sinai School of Medicine in New York City; and the US Department of Agriculture (USDA) -- have resurrected the 1918 flu virus and examined its effects. The project was begun by AFIP pathologist Jeffrey Taubenberger in 1995. The researchers obtained tissue samples from an Alaskan victim who had been buried in permafrost. The team used the samples to reconstruct the critical parts of the eight genes of the virus, which were then replicated and spliced into modern flu genes for replication in target cells, and used on lab mice.

The effort was conducted under a high level of biosecurity -- Biosafety Level 3 (BSL-3), with additional precautions -- as well it might be: the revived virus hit the lab mice in exactly the same ferocious way it swept through human populations in 1918 and 1919. The mice died in 3 to 5 days, and were found to have the gruesome lung inflammation described by medical researchers trying, without success, to fight the pandemic.

Analysis of the 1918 flu showed that it had an unusually effective hemagglutinin (HA) surface protein, which the virus uses to "lock on" to host cells for infection. Inserting the gene for that particular variant of HA into mild strains of flu virus made them just as nasty as the 1918 flu; splicing in other genes from the 1918 flu had little effect. However, the 1918 flu also was not dependent on host cells providing the protease (protein enzyme) trypsin to activate the HA protein before the virus was released from the victim cell. That implies that the 1918 flu was not limited to infecting trypsin-loaded lung cells.

Knowledge of the 1918 flu will help development of defenses against other violent flu strains in the future. Although there were suspicions that the 1918 flu strain was a swine flu, genetically it appears to have been a bird flu that managed to perform a species jump to humans. Understanding the specific mutations that allowed this flu virus to successfully jump to humans will help give some alert against future dangerous strains, as well as help in the development of countermeasures.

The important genetic data from the 1918 strain were published in the scientific press. There were concerns that this data might be used by developers of "doomsday bugs" to synthesize nastier biowarfare agents that could end up in the hands of terrorists, but a new US Federal review board judged that the benefits to researchers outweighed the risks. Suppressing data without leaks is difficult, and terrorists might not be too quick to make use of a virus if the ability to create countermeasures was obviously available.


[WED 26 OCT 05] WALK DON'T RUN

* WALK DON'T RUN: As discussed in an article in AAAS SCIENCE ("Directing The Herd: Crowds & The Science Of Evacuation by John Bohannon, 14 October 2005), on the morning of 11 September 2001, an airliner hijacked by Islamic terrorists flew into one of the towers of the World Trade Center in New York City. Another airliner soon followed, hitting the other tower. Less than two hours after the first impact, the towers collapsed. About 500 people were killed by the impacts themselves; about 1,500 died in the collapse of the towers, because they were unable to evacuate the buildings. Things might have been vastly worse: one estimate states that if both towers had their full capacity of 20,000 people each, the casualties would have run to about 14,000 dead.

This leads to the question of how to design buildings and create procedures so such structures can be evacuated quickly. Up to now, skyscrapers haven't been designed with full, rapid evacuation in mind, with regulations specifying only that the design factor in the evacuation of a few floors in response to a localized fire. That approach is now seen as inadequate.

One of the first tasks is to understand the group psychology and dynamics of crowds in an emergency. Two researchers from Monaco have been quietly filming pedestrian traffic in ten different cities around the world, trying to identify common elements as well as differences. They have found that pedestrians in London walk faster than pedestrians in New York City. One British researcher points out that even on busy dense city sidewalks, people rarely collide with each other, a group dynamic that invites further investigation.

When a panic occurs, dynamics change. For example, in an emergency everyone tries to dash for the exit. The end result is a traffic jam that prevents most of them from getting out; it turns out that the optimum speed for escape in an emergency is a brisk walk. People can become packed into a herd that flows right by marked exits. In the worst case, people will be suffocated by "crowd crush", packed together so tightly that they cannot breathe. Crowd crush can be so powerful that it will bend steel barriers, and since the victims can't breathe, they can't cry for help, so nobody realizes what is going on until it is too late. Incidentally, people are rarely trampled in such panics.

A team at the US National Institute of Standards & Technology (NIST) was ordered by the US Congress to investigate the evacuation of the Trade Towers on 9-11, while a British group under fire safety engineer Ed Galea at the University of Greenwich performed a similar study in parallel. Interviews were performed and computer models written. One interesting finding was that many people didn't try to evacuate until well after the emergency began: 77% of the survivors interviewed packed up and left within five minutes or so, 19% took up to an hour, and 4% took more than an hour -- people were uncertain and wanted to save their computers.

Galea wrote a computer model named EXODUS, which both teams used to model the evacuation. When NIST ran the model with the full occupancy of 40,000 people, they came up with the 14,000 deaths, mostly of people trapped in the stairwells. Nobody was surprised at this, since the stairwells weren't designed to handle that kind of traffic, and in fact no skyscraper is designed that way. NIST is pushing for new building codes for skyscrapers when they are reviewed in 2008. Builders are not enthusiastic, claiming that the 9-11 catastrophe was an event not likely to be repeated any time soon. NIST officials reply that since a skyscraper may well last a century or more, the probability of any one such structure eventually suffering a catastrophic disaster, from natural or man-made causes, is pretty high.

Obviously, even if new building codes are implemented, since skyscrapers last for a long time, we'll still be stuck with buildings not designed to those specifications. That means that over the short to mid term, the emphasis has to be on procedures and, where possible, retrofits. The evacuation rate from the second Trade Tower was much better than from the first, since the elevators were still working until the impact of the second airliner. New elevator systems can be installed that have their own power supplies and sensors so they won't open on floors where a fire is in progress. Models show that sky bridges to neighboring buildings would have also greatly aided an evacuation of the towers.

All sorts of other escape schemes have been proposed, such as external "slides" made of fabric, or sliding down exterior poles while wearing a vest with an electromagnetic braking system, or even "ballutes" -- balloon-parachutes in the shape of shallow cones -- that could pop open immediately after deployment. For the moment, however, the focus will have to be on modifying stairs and elevators, and taking fire drills much more seriously.


[TUE 25 OCT 05] SUPERCAPACITORS

* SUPERCAPACITORS: As discussed in an article from IEEE SPECTRUM ("Super Charged" by Glenn Zorpette, January 2005), engineers at the NessCap Company in Yongin, South Korea, have a bright idea: they want to replace batteries with "supercapacitors" that can hold far more charge than any capacitors built to date. NessCap now produces a supercapacitor about the size of a soda pop bottle with a capacitance of 5,000 farads at 2.7 volts, and company officials think they can do a lot better. Ultimately, they want to replace automotive batteries with supercapacitors.

The idea has a lot going for it, on paper at least. Supercapacitors can provide greater power on demand, can be charged rapidly, can function over wider ranges of temperatures, and compared to batteries have an effectively unlimited number of charge-discharge cycles. The only problem is that current supercapacitors are expensive and have energy densities -- energy stored per unit weight -- about 20 times smaller than that of, say, lead-acid batteries. This means that for the moment supercapacitors are only really applicable for niche applications, where their long life compared to batteries is an asset, only small amounts of electricity have to be stored, and the cost of the supercapacitor is small relative to the overall cost of the system.

Makers of high-end battery-operated stereo gear have used them to provide peak power for musical crescendos that can't be provided quickly enough by the battery itself. A particularly interesting application is in solar-powered lighting tiles that can be embedded into a walkway or staircase. The tiles acquire power during the day and use an LED to provide a safety guide at night; battery operation wouldn't last long enough to be workable. Since supercapacitors charge and discharge so fast, they are regarded as potentially ideal for hybrid-electric or fuel-cell cars.

* The traditional notion of a capacitor is two conducting plates separated by a nonconducting "dielectric", composed of molecules that can be electrically polarized. Opposite electric charges are placed on the two plates; the amount of charge is proportional to the area of the plates and the "dielectric constant" of the dielectric filler, with this constant giving how much more charge the capacitor can store than it would if there were simply empty space between the plates. In addition, reducing the spacing between the plates gives more charge storage for a given voltage.
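The relations behind the description above are textbook: parallel-plate capacitance grows with plate area and dielectric constant and shrinks with plate spacing, and the energy stored at a given voltage is half the capacitance times the voltage squared. A quick sketch in Python -- the 5,000 farad figure is from the article, while the 2.7 volt cell rating is an assumption on my part, typical for acetonitrile-electrolyte supercapacitors:

```python
# Two textbook relations behind the discussion:
#   C = eps0 * eps_r * A / d    (parallel-plate capacitance)
#   E = 0.5 * C * V**2          (energy stored at voltage V)
# The 5,000 F figure is from the article; the 2.7 V cell rating is an
# assumption (typical for acetonitrile-electrolyte supercapacitors).

EPS0 = 8.854e-12  # permittivity of free space, farads per meter

def parallel_plate_capacitance(eps_r: float, area_m2: float, gap_m: float) -> float:
    """Capacitance in farads of a parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

def stored_energy_joules(capacitance_f: float, volts: float) -> float:
    """Energy in joules stored in a capacitor charged to the given voltage."""
    return 0.5 * capacitance_f * volts ** 2

# A one-square-meter air-gap capacitor with a 1 mm gap: mere nanofarads.
print(f"{parallel_plate_capacitance(1.0, 1.0, 1e-3):.2e} F")

# A 5,000 F cell at 2.7 V: roughly 18 kJ, about five watt-hours.
e = stored_energy_joules(5000.0, 2.7)
print(f"{e:.0f} J = {e / 3600:.1f} Wh")
```

The contrast makes the appeal of activated carbon obvious: a conventional plate geometry yields nanofarads per square meter, so getting to thousands of farads requires the enormous effective surface area described next.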

In the 1960s, researchers at Standard Oil of Ohio (SOHIO) discovered that if granules of porous "activated" carbon were immersed in a conductive liquid solution or "electrolyte", the result was a really impressive capacitor. The reason was that the activated carbon granules had such an enormous surface area relative to their weight. Nippon Electric (NEC) of Japan signed a license with SOHIO for the technology in 1971; a decade later Panasonic of Japan picked it up in a big way, with the US Department of Energy (DOE) then performing a number of further studies.

The basic recipe for a supercapacitor is to take two metal-foil electrodes; coat them with activated carbon; sandwich a paper separator between the two foil layers; and immerse the sandwich in a liquid electrolyte -- these days, typically acetonitrile, which has excellent electrolytic properties but suffers from the flaw that when it burns, it can produce cyanide gas. Manufacturers would like to have something that works as well but is safer. Electrical leads are connected to each of the foil electrodes. If the supercapacitor is then connected to a DC voltage source, electrons will begin to accumulate on the negative electrode, while being depleted on the positive electrode.

The electrolyte provides positive and negative ions. The positive ions migrate to the carbon granules bonded to the negative electrode, while the negative ions migrate to the carbon granules bonded to the positive electrode. Although the paper separator is nonconductive, preventing the electrodes from shorting out, it is porous and ions migrate through it freely in both directions. The result is a thin capacitive layer on each carbon-coated electrode, and so supercapacitors are sometimes called "double-layer capacitors".

Few of the processing steps for building a supercapacitor are trivial. Obtaining high-quality activated carbon is a critical bottleneck. Currently, activated carbon has a surface area of about 1,500 square meters per gram. That sounds very impressive, but improving energy density means achieving much higher ratios of surface area to mass. One interesting but highly speculative approach is to build supercapacitors with electrodes coated with carbon nanotubes that are fabricated and laid down to optimize the ability to trap electrolyte ions.

If this scheme could be made to work, it might well represent a revolution in energy-storage technology. However, research is still focusing on basic technologies and hasn't even advanced to a proof-of-concept prototype yet. Even if it can be shown to work, cost-effectively producing such supercapacitors will be a challenge. In the meantime, supercapacitor makers like NessCap are trying to refine and cost-reduce their existing products and find new application niches to keep them afloat.


[MON 24 OCT 05] RESURGENT CALDERAS (3)

* RESURGENT CALDERAS (3): Evidence indicates that the catastrophic eruptions that form resurgent calderas are surprisingly short-lived. Studies of sea-floor cores containing ash layers from the eruption that created the 28 kilometer (17.4 mile) wide Atitlan caldera in Guatemala 84,000 years ago -- which spewed out 300 cubic kilometers (70 cubic miles) of ash -- indicate that the eruption lasted only 20 to 27 days. Apparently a plinian column rose and fell a number of times in that interval. Evidence also indicates that the eruption that created the huge Toba caldera spewed out more than 1,000 cubic kilometers (240 cubic miles) of ash in as little as 9 days.

After the eruption, typically a lake fills the new caldera; sediment erodes the caldera wall and accumulates at the bottom of the lake. Then the caldera floor begins its slow resurgence -- though the resurgence doesn't normally occur in the center of the caldera, and its motion is usually skewed from the vertical. In some cases, including Yellowstone, two separate centers of resurgence are present in one caldera.

It was the recognition that young lake sediments had been raised hundreds of meters that allowed van Bemmelen to show that the resurgence of the floor of the Toba caldera had formed the 640 square kilometer (250 square mile) island of Samosir. Since the Toba caldera is only about 75,000 years old, the resurgence may not yet be complete. At Cerro Galan, resurgence of more than one kilometer (0.6 mile) has raised the center of the caldera to an altitude of more than 6 kilometers (3.75 miles) above sea level, making it one of the highest mountains in Argentina.

Toba caldera

After resurgence come the final stages in the evolution of a caldera: the relatively quiet effusion of dacitic or rhyolitic lava from a necklace of vents along the ring fracture. Typically, the volume of material released is small, but the effusions continue intermittently long after the catastrophic caldera-forming eruption. At Long Valley, distinct episodes of effusion took place 500,000, 300,000 and 100,000 years ago.

The volcanic events associated with the formation of a caldera may continue with little violence for up to a million years. Hot springs and geysers, the result of geothermally heated water that finds its way to the surface, may be present for much longer than that.

* A caldera-forming eruption would be cataclysmic. Consider a 1,000 cubic kilometer (240 cubic mile) eruption. A tract of land with a surface area of possibly 500 square kilometers (195 square miles) would sink, and the resulting caldera would be filled entirely by ignimbrite. A surrounding area of up to 30,000 square kilometers (11,575 square miles) would also be covered by ignimbrite; the depth of the cover would range from more than 100 meters (330 feet) on the caldera rim to a few meters at the farthest extent of the ignimbrite. Destruction within this area would be complete.

Fine co-ignimbrite ashes would be dispersed over a major portion of the Earth's surface. This would interfere with ground and air traffic over the short run, and destroy at least a year's crops over the dispersal area. The ash would also modify the Earth's climate for a number of years.

A caldera-forming eruption might be predicted by the "leakage" of dacitic or rhyolitic magma to the surface as a ring fracture develops. Seismic signals might indicate the movement of magma into a chamber a few kilometers below the surface. The site of the chamber would be confirmed by a local anomaly in the Earth's gravitational field, since the density of magma is less than the density of solid crustal rock. Another important indicator would be the rise of the terrain and, particularly, an increase in the rate of rise. This could be revealed by conventional surveying techniques. Such signals do not indicate any timetable for an eruption, but they should not be ignored.

Resurgent calderas have benefits; the geysers and hot water may persist for two or three million years after the eruption and could be a useful source of energy. Resurgent calderas also may deposit useful minerals; the Kari Kari caldera eruption left silver lodes, whose mining made the city of Potosi on the caldera's rim the largest city in the Western Hemisphere in the 17th century.

It is fortunate that eruptions that form resurgent calderas are rare. We are beneficiaries of the catastrophes of the past. We may not feel so fortunate if such eruptions take place in the future. [END OF SERIES]


[FRI 21 OCT 05] THE BLIND WATCHMAKER (1)

* THE BLIND WATCHMAKER (1): I've long tried to ignore the squabbling over modern evolutionary theory (MET), but the squabbling got loud enough to make me finally wonder: do the criticisms have any basis in fact? That led me to a book, Richard Dawkins' THE BLIND WATCHMAKER, that I found stimulating enough to summarize here. It's roughly half on evolutionary biology and half on debunking creationism; despite being written in the mid-1980s, it is still much on the mark -- not surprising, since creationists always recycle the same old arguments.

* Dawkins introduces his work with a subtitle comment: "Why The Evidence Of Evolution Reveals A Universe Without Design". He takes as his starting point the book NATURAL THEOLOGY, an 1802 work by English theologian William Paley. The focus of Paley's book was to show that the elaborations of the forms and details of life on Earth implied the action of a higher power, a Designer. Dawkins gives Paley a respectful treatment: in 1802 his thinking was state-of-the-art, and his writing was clean and persuasive.

In 1859 Charles Darwin published ON THE ORIGIN OF SPECIES and overturned Paley's ideas. In Darwin's view, creatures weren't designed as such. Species were mutable, with individuals being born with small variations in traits. The traits of individuals that survived and propagated were passed on; the traits of those that died without leaving progeny disappeared. According to Darwin, this "natural selection" was enough to account for the entire diversity of nature. Later generations of researchers would elaborate on Darwin's ideas, but his work still remains at the core of MET.

Critics sometimes contrive arguments that MET is not scientific because it is "non-falsifiable", meaning that it is a simple assertion that cannot be tested and potentially proven wrong. However, much of what MET states is supported by plenty of evidence. The variety of dogs demonstrates the mutability of species and the way their forms can be modified by a process of selective breeding. They are all clearly human-made, or at least human-modified, forms: no Pekinese could survive in the wild. Crop plants have in some cases gone through similar drastic modifications: it is startling to realize that a few thousand years ago, corn looked pretty much like an ordinary sort of grass. Now it features hugely distorted heads with heavy stalks to support them. It can no longer disperse seeds on its own and would die out in a year or two if humans didn't take care of it.

Species are so mutable that zoo-keepers trying to preserve rare animals find it difficult to maintain captives that really match their wild cousins. Animals that are happy with being captives tend to breed much more easily than those that aren't, and so zoo animals tend to become increasingly tame from one generation to the next -- which still doesn't mean that it's a good idea to walk into a tiger cage. A Russian silver fox farm made an effort to breed more docile foxes, and within twenty years came up with animals that clearly seemed more like dogs than foxes in their appearance and behavior.

Of course, these are examples of artificial selection, but natural selection can be demonstrated easily enough. Take a culture of harmless bacteria, and then douse it with a toxin that kills the culture off 100% -- no survivors. Repeat this action as many times as desired and get the same results. Now take ten cultures of the same bacteria and administer the toxin in graded doses, from a very small dose to the maximum dose. Take the culture that was given the biggest dose for which some of the bacteria survived, then use these survivors to create ten more cultures, which are given the same treatment. Keep repeating this procedure, and eventually the result will be cultures of bacteria that shrug off the maximum dose of the toxin.
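The graded-dose procedure can be mimicked with a toy simulation. The sketch below is strictly illustrative -- the "resistance" number, the dose levels, and the mutation step are invented for the exercise, not real microbiology -- but it shows the ratchet at work: each generation is refounded from the survivors of the strongest survivable dose, and resistance climbs until the maximum dose is shrugged off.

```python
import random

def run_selection(generations=30, cultures=10, pop=1000, max_dose=10.0, seed=1):
    """Toy model of graded-dose selection. Each bacterium carries a
    'resistance' number and survives any dose below it; survivors of
    the largest survivable dose found the next generation."""
    rng = random.Random(seed)
    # starting population: low resistance, with small random variation
    population = [rng.gauss(1.0, 0.3) for _ in range(pop)]
    for gen in range(generations):
        doses = [max_dose * (i + 1) / cultures for i in range(cultures)]
        best_survivors = None
        for dose in doses:                      # graded doses, small to large
            survivors = [r for r in population if r > dose]
            if survivors:
                best_survivors = survivors      # largest dose with survivors
        if best_survivors is None:
            return gen, population              # culture wiped out entirely
        # refound the cultures from the survivors, with mutation
        population = [rng.choice(best_survivors) + rng.gauss(0, 0.4)
                      for _ in range(pop)]
    return generations, population

gens, final = run_selection()
print(gens, sum(final) / len(final))
```

With these made-up numbers, mean resistance climbs from about 1 to above the maximum dose of 10 within a few dozen generations; take out the mutation term, and the ratchet stalls at whatever variation the founding population happened to contain -- which is the point of the exercise.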

This is a lab experiment, but much the same can be seen in nature. The most notorious example is the emergence of "antibiotic-resistant bacteria". The introduction of antibiotic drugs in the middle of the 20th century provided medicine with a powerful set of weapons against dangerous bacterial infections, but even at the time the inventors of antibiotics knew that bacteria would eventually evolve to defeat them, and now we are suffering from an ever-rising tide of bacteria that shrug off drugs that would have killed them off neatly thirty years ago.

Ironically, one possible solution for this problem is an exercise in biological selection as well. The Soviet Union long bred strains of "bacteriophages" -- viruses that infect and kill bacteria -- to deal with bacterial infections, with new strains of bacteria dealt with by selective breeding of new strains of bacteriophages. The modern Russian state still uses the approach and has been trying to export it.

As far as more complicated organisms go, metal electric power towers that are clad in zinc to resist corrosion leave zinc deposits in the surrounding soil that should kill off normal grasses and other plants. In fact, most such plants grow perfectly well around the towers -- but on examination they prove to be strains that can tolerate high levels of zinc. Bring in plants that grew up far away from a tower, and they will die. [TO BE CONTINUED]

NEXT
BACK_TO_TOP

[THU 20 OCT 05] NO BANANAS?

* NO BANANAS? According to an ECONOMIST article ("Going Bananas", 22 October 2005), bananas are the fourth largest food crop in the world, after wheat, rice, and maize. Bananas are a staple of the diet of about 400 million people, and are commonly raised on small farms in the tropics. In the US, bananas are a sweet, something to slice onto breakfast cereal, but they mean survival in many areas of the world, with the starchy plantain banana in particular acting as an analogue to the potato.

bananas as a staple

Another interesting fact about bananas is how they grow. All banana strains in existence today are derived from two species, Musa acuminata, originally from what is now Malaysia, and Musa balbisiana, from what is now India. Bananas don't grow on trees; the plant is instead a very structurally robust herb, with a "pseudostem" erupting from the ground in a whorl of fronds that, eventually, end up as bunches of bananas. The acuminata banana is only about the size of an okra pod or a sweet pickle, with the fruit carrying peppercorn-sized seeds; the balbisiana banana is about four times bigger and the fruit is crammed with seeds. The plantain is a hybrid of the two types.

Somewhere along the line, a farming culture found a mutant line of bananas that had vestigial seeds and decided to cultivate it. The lack of seeds in domesticated bananas is a very important fact, because it means that banana plants have to be propagated by replanting cuttings taken from the base of a parent plant. That means that the genetic diversity of bananas is not very great, and so the global banana crop is highly vulnerable to diseases.

In the 1950s, nearly all bananas grown commercially were of a single strain, the Gros Michel, which was said to have been unusually tasty. It was all but wiped out by a fungus named "fusarium" that came from Panama. The Gros Michel now only survives in remote areas of Uganda and Jamaica. The current commercial banana strain is the Cavendish -- about the only type of banana sold in the West out of the wide range of strains, a bizarre situation compared to, say, apples -- but it is now threatened: a mutant strain of fusarium arose in 1992 and wiped out banana plantations in Malaysia. The disease remains local to Asia for now, but nobody expects this state of affairs to last much longer. Other strains of bananas grown as staples in Africa and Asia also face serious fungal and other threats.

Banana research is dominated by the Catholic University of Leuven in Belgium. Although Belgium might seem to be a strange place to be growing bananas, the university has cultivated 1,175 different strains, learning how to cryogenically freeze samples so they can be preserved. The location has advantages for banana research: since bananas are not grown there, the specimens can neither pass diseases nor pests to local crops, nor pick them up in turn.

Hybridization of domesticated banana strains is difficult, since they have little ability to reproduce sexually -- but they can do so with a lot of help in the form of researchers accumulating pollen from male plants to inseminate female plants, and then sorting through the fruit that results for seeds. A tonne of such bananas will produce a handful of seeds, only about half of which will germinate. Researchers there are also working on transferring genes between strains to improve disease resistance, and a disease-resistant strain is now being field-tested in Africa. There is a certain irony in that humans, having decided to preserve a plant line that would have died out on its own, are now forced to directly intervene in its evolution so that it does not die out in the future.

BACK_TO_TOP

[WED 19 OCT 05] BATTLE OF PALMDALE

* BATTLE OF PALMDALE: America during the height of the Cold War was a different place than it is now, with one of the particular features being the fact that the authorities could get away with a lot more. A case in point was the day when the US Air Force bombarded a southern California town.

During the 1950s, interceptor aircraft often used salvos of unguided rockets to shoot down intruders. At least, that was the plan; the idea wasn't all that practical, though during the Vietnam War there was a case or two in which US aircraft shot down North Vietnamese MiG fighters with unguided rockets. These incidents were pure freaks of chance: strike aircraft carrying unguided rockets for ground attack loosed them at a MiG to scare it off, and managed to score a hit.

The standard US unguided rocket, then and now, was the 70-millimeter (2.75-inch) diameter "Folding Fin Air Rocket (FFAR)", which was carried in pods, with the rockets popping out spring-loaded tailfins after launch. The rockets jinked all over until they stabilized, which wasn't too much of a problem for blasting a spread of rockets at a truck convoy or the like on the ground, but seemed unlikely to score a hit on an aircraft unless it was a big one. The usual comment of those familiar with supposed use of unguided rockets in interceptions was: "It was a wonder they could hit anything with them."

The Germans had used unguided rockets successfully during World War II to break up American bomber formations, but the target in that case was a large number of bombers flying in a small chunk of sky, huddled up for the mutual protection of their defensive guns. In the nuclear age of the 1950s, a single bomber could destroy an entire city, and the nice fat target of a big bomber formation was history.

However, there was a reason for going with unguided rockets in the 1950s: collision-course intercepts. Traditionally, to shoot down a bomber, an interceptor got on its tail and blew the intruder out of the sky with automatic cannon, but with the development of modern radar and electronics, it was possible to vector an interceptor on a "collision course" against an intruder from the front or side. The problem was that the two aircraft flashed past each other at high relative speed -- so quickly that bomber crews on the receiving end of practice intercepts might not even see the interceptor coming and going -- meaning that there was very little time to score hits with automatic cannon. The only way to do that at the time was to loose a salvo of unguided rockets at the target; one hit by a rocket, with its relatively big warhead, would be lethal. There was also the problem that combat aircraft could often soak up many cannon hits, and so using rockets even for traditional tail-chase intercepts made a certain amount of sense. The real solution was the guided air-to-air missile (AAM), but such weapons were only under development at the time. They wouldn't come into widespread service until the 1960s, and wouldn't become really effective until well into the 1970s.

In any case, in the 1950s there were several interceptor aircraft that were armed only with unguided rockets -- no cannon. I had heard a vague story that two such interceptors had tried to shoot down a target drone that had lost radio control -- "slipped the leash" -- and failed to destroy it despite using up all their rockets. I finally got all the details from an article in AIR & SPACE magazine ("Fast, Cheap, & Out Of Control" by Peter W. Merlin, August-September 2005). The story turned out to be interesting, and it's no surprise it was obscure: the military would have preferred to forget about it.

* On 16 August 1956, a Grumman F6F-5K Hellcat target drone was sent aloft from the Point Mugu Naval Air Station, in southern California up the Pacific coast from Los Angeles. The Hellcat had been one of the great US Navy fighters of the Second World War, but now it was reduced to the status of a target, painted red and flying under radio control.

Or at least it was supposed to be flying under radio control. Although the target area was over the Pacific, the Hellcat decided to head towards Los Angeles instead. Nobody wanted the unpiloted aircraft to crash into a school or whatever, so two US Air Force Northrop F-89D Scorpion interceptors were scrambled from Oxnard Air Force Base, not far from Point Mugu, to destroy the wandering drone. The Scorpion was a subsonic jet aircraft, with a two-man crew and a sophisticated (by the standards of the time) all-weather radar fire control system, armed with a pod of 52 FFARs on each wingtip. The two F-89Ds carried a total of 208 rockets.

F-89 Scorpion

The Hellcat actually passed over Los Angeles; of course, shooting it down over the city would have been lunacy, and the interceptor crews could only hold their breath and wait for the drone to get clear. It finally decided to orbit over the town of Santa Paula, giving the interceptor crews their first opportunity to take a shot at it. They were using an automatic fire-control mode, but as was not unusual for high-tech avionics in those days, the fire-control system malfunctioned, and they didn't get a single rocket off.

Then the Hellcat decided to meander for a time, eventually turning back towards Los Angeles. The Scorpion crews switched to manual fire control and loosed salvos of rockets at the drone. They missed the Hellcat, the rockets falling to the ground to start a raging brush fire. They tried again, with no better luck, starting two more brush fires, one of them fueled by oil rigs in the unintended target area. Finally, as the drone headed toward Palmdale, the Scorpions fired their last rockets at it. They missed again.

This time, the rockets fell into Palmdale. A piece of shrapnel smashed through the living-room window of one house, passing through a wall to end up in a cupboard. Another fragment passed through a garage and home. A car's front end was shredded when a rocket fell in front of it. Astoundingly, nobody was hurt. Explosive ordnance disposal teams from Edwards Air Force Base picked up 13 dud rockets from around Palmdale. It took hundreds of firefighters two days to put out the three brush fires, after the blazes had consumed hundreds of acres.

The Hellcat finally wandered over the Mojave Desert near Palmdale, where it ran out of fuel, falling to earth in an uninhabited area, though cutting three power lines in the process. By the records, the incident seems to have attracted very little public attention. Later models of the F-89 carried Falcon guided AAMs and were likely more effective. They could also carry the Genie missile, which was unguided but carried a small nuclear warhead -- leading to the passing thought that Palmdale might have got off lucky.

The author found fragments of the aircraft in that area in 1997. He also cited the names of the aircrews of the Scorpions in the article; I was reluctant myself to pass them on, but I do have to mention that the last name of one of them was "Einstein".

BACK_TO_TOP

[TUE 18 OCT 05] POPPY FIELDS / VIRTUAL MACHINES

* POPPY FIELDS: An article in THE ECONOMIST ("Not What The Doctor Ordered", 8 October 2005) points out that the international effort to destroy Afghanistan's opium poppy agriculture is trapped in an ugly dilemma: the majority of Afghan farmers don't have any other cash crop to raise, and it will take a long time to fundamentally change the system.

The French-based Senlis Council suggests an alternative: legalize growing opium poppies. This is not as backwards as it sounds, since opium poppies are legally grown in Australia, India, Turkey, and France for production of legitimate pharmaceutical opiates. Given that medicines are in short supply in poor countries, large-volume opium production from Afghanistan might be what the doctor ordered.

Afghan poppy fields

The problem is that, in countries where opium poppies are grown legally, the business is under very tight control. Nothing in Afghanistan is under much control at all. Since illegal drugs in general fetch a higher price than legal ones, the temptation to divert opium production to the black market would be very high. Senlis acknowledges this, but claims that legalization would be the best of all the bad alternatives. Westerners involved in opium eradication in Afghanistan fear that the idea will muddy the waters and are giving it short shrift.

* VIRTUAL MACHINES: An ECONOMIST article ("A New Way To Stop Digital Decay", 17 September 2005) reports on efforts to preserve data stored on digital media. It might seem like data stored on a plastic CD-ROM is much more permanent than that printed on paper, but technological progress means that in ten or twenty years, the technology to read that CD-ROM may be long obsolete and hard to come by. In 1986, for example, the BBC built a multimedia database for Britain and distributed it on laserdiscs that could be read by a BBC Micro home computer. This was essentially a proprietary system, and all the material was nearly lost; somebody spent over two years using a creaky old BBC Micro to port the database to a PC format.

The only way around this problem is to make sure that archival materials are updated to current media on a regular basis. However, even if that's done, there's another problem: file formats. For example, the popular GIF image file format is now being replaced by the more capable PNG format, and so in a decade or two or three, it may be hard to find anything that can read the GIF format.

The National Library of the Netherlands has a solution: file decoders written for a "virtual machine", essentially a computer implemented in software that runs on top of real computer hardware. The idea is not new; old videogames are often run on PCs without any changes to the game software, simply by implementing software that operates like the game machines that originally ran the games. If a new type of computer is introduced, all that has to be done to run the file decoders is to port the virtual machine to the computer.
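The core idea -- a complete machine implemented purely in software -- can be illustrated with a toy stack-based virtual machine. To be clear, the instruction set below is invented for the illustration and has nothing to do with the actual Dutch/IBM design; the point is the key property: any program written for the toy machine runs unchanged on any host that can run the interpreter, so porting the interpreter ports every program with it.

```python
# Toy stack-based virtual machine. Programs are lists of (opcode, args)
# tuples; the interpreter is the "machine", so decoders written for it
# never have to change when the underlying hardware does.
def run(program, stack=None):
    stack = [] if stack is None else list(stack)
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "jz":                # jump if top of stack is zero
            if stack.pop() == 0:
                pc = args[0]
                continue
        elif op == "halt":
            break
        else:
            raise ValueError(f"unknown opcode {op!r}")
        pc += 1
    return stack

# compute (2 + 3) * 4 on the virtual machine
result = run([("push", 2), ("push", 3), ("add",),
              ("push", 4), ("mul",), ("halt",)])
print(result)  # [20]
```

A real archival virtual machine would of course need I/O and a frozen, exhaustively documented specification -- the simplicity of the specification is exactly what makes the approach durable.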

The implementation is actually being performed by IBM. Work is now being done on decoders for GIF and JPEG image files, with a decoder for Adobe PDF document files next on the queue. IBM is also talking with pharmaceutical companies to develop decoders for data files storing results of clinical trials.

BACK_TO_TOP

[MON 17 OCT 05] RESURGENT CALDERAS (2)

* RESURGENT CALDERAS (2): It might seem the most likely place for an eruption that forms a resurgent caldera would be a subduction zone, the boundary on the surface of the Earth where a plate of oceanic crust slides under a continental plate and plunges into the underlying mantle. Subduction zones are normally sites of intense seismic and volcanic activity; the Toba caldera in Sumatra is near a subduction zone. However, it's not always that simple. Most of the younger calderas in the US, for example, are hundreds of kilometers from any modern subduction zone.

Still, resurgent calderas are not randomly distributed around the globe. The ignimbrites that characterize them result from the eruption of dacitic or rhyolitic magma, which is viscous, rich in silica, and typically produced where the continental crust is thick. As a result, resurgent calderas may form in regions of the continental crust where a thermal plume (a "hot spot") in the mantle of the Earth is large enough and long-lasting enough to melt huge volumes of rock. The plume does not melt the continental crust directly; instead, it melts part of the mantle to create a basaltic magma. The basaltic magma rises, melting the rock at shallower levels.

In the US, the Yellowstone caldera lies at the northeastern end of a trail of volcanic activity that begins in Idaho in the basaltic rock of the Snake River plain. Over the past 15 million years, the focus of volcanic activity has shifted along the trail to its present position in Wyoming, possibly in response to the movement of the plate that includes the North American continental crust over a stationary thermal plume in the mantle.

A number of other calderas, no more than a few tens of millions of years old, are found in a zone many hundreds of kilometers wide in Nevada, Arizona, Utah, and New Mexico. The youngest caldera in the group is on the flanks of the Rio Grande Rift, which runs for hundreds of kilometers northward through New Mexico into Colorado. It is thought the continental crust at the Rio Grande Rift has somehow been thinned, producing the rift itself. A similar process is thought to have caused rifts in the oceanic crust near many of the island arcs in the Pacific.

In Argentina and Bolivia, resurgent calderas have formed not only along the main volcanic cordillera of the Andes, but also in a second cordillera more than 200 kilometers farther inland. Here there is no evidence of crustal thinning; on the contrary, the continental crust may be as much as 40 to 50 kilometers (25 to 30 miles) thick under the Cerro Galan and Kari Kari calderas. It is thought magma conduits to the surface of the inland cordillera may have developed by fracturing of the crust caused by the pressure of moving magma.

* Formation of a typical caldera follows a series of steps: precaldera doming, caldera collapse, eruption of air-fall material and pyroclastic flows, postcaldera resurgence, and finally late-stage extrusions of lava.

Precaldera doming is the rise of the Earth's surface that precedes a massive eruption. It happens when a great volume of magma intrudes into a shallow level of the continental crust, creating a "pluton", or magma chamber, whose top may lie only a few kilometers beneath the surface. The doming causes stress that leads to the next step, the collapse of the caldera along a ring fracture -- it is thought either through upward pressure of the magma on the crust, or by subsidence of the crust into the magma chamber.

The magma at the top of the pluton has a temperature of 700 to 1,000 degrees Celsius (1,200 to 1,700 degrees Fahrenheit) and is rich in dissolved gases, mostly water vapor. The magma rises toward the surface along the newly-opened ring fracture. As it rises, the pressure on it lessens, until at a depth of about a kilometer the gases come out of solution, much as they do when the cork on a bottle of champagne is popped. Dacitic or rhyolitic magma is much more viscous than champagne (or even basaltic magma), however, and so the gases do not merely bubble away. Instead, they blow the magma apart. The magma escaping from the pluton toward the surface expands into pumice and explosively fragments into incandescent solid particles ranging in size from micrometers to meters.

If the rate of the eruption is great and the vent is relatively small -- possibly 50 or 100 meters (165 or 330 feet) in diameter -- an eruption column develops, rising tens of kilometers into the atmosphere. The eruption column of Mount St. Helens reached about 20 kilometers (12 miles). The pumice in the column is not simply blasted upward from the vent like buckshot from a shotgun. Directly above the vent, the upward velocities are hundreds of meters per second. As the pumice rises, however, it rapidly decelerates; it is slowed not only by gravity but by aerodynamic drag. Then a second process begins to add energy: the decelerating mass of incandescent pumice, ash, and gas captures and heats up air from around the column.

As a result, the mass becomes buoyant and begins to rise convectively; it may even accelerate upward again. Such convective eruption columns are known as "plinian columns", after Pliny the Younger, whose account of the eruption of Vesuvius in 79 AD is the first documented description of one. Convection can send a plinian column to as high as 50 kilometers (30 miles).

* Massive plinian columns may indicate the beginning of a catastrophic collapse that creates a caldera. As the eruption proceeds, however, the plinian columns typically give way to pyroclastic flows, which make up by far the greatest part of the volume of the eruption. The transition from plinian column to pyroclastic flows is caused by the increasing size of the vent as the eruption proceeds and loss of gas in the magma in the pluton, which robs the column of the heat needed to keep it going.

As the materials fall back to the ground, they surge forward rapidly. It is known from the distribution of the ignimbrites they deposit that they can sweep over hills as much as a kilometer high and travel distances of up to 150 kilometers (95 miles) at velocities of 100 meters per second (360 KPH / 225 MPH). The flow travels fast because it is in a partially fluidized condition, still loaded with hot gases and fine particles that act as a lubricant -- as well as the fact that the mass of the flow has fallen several kilometers from the sky, giving it considerable kinetic energy.

What remains of a pyroclastic flow is a blanket of pumice and smaller particles that can be several meters thick more than 50 kilometers (30 miles) from the vent. In addition, the fine particles trapped by the flow typically form a secondary ash cloud that rises many kilometers by convection. The subsequent fall of particles from the cloud can deposit a thin layer of ash over a region much larger than the one covered by the ignimbrite of the pyroclastic flow itself. In fact, the layer, which is called "co-ignimbrite" ash, can amount to as much as a third of the entire volume of the ignimbrite.

The eruptions that form resurgent calderas are always eruptions of dacitic or rhyolitic magma, not basaltic magma. Basaltic magma has lower viscosity, and gases coming out of it easily escape, preventing a catastrophic explosion. Basaltic magma also doesn't produce much in the way of fine ash; fine particles more easily lose heat than large ones because of the higher ratio of surface area to volume, and in the absence of such particles convective currents don't arise in the atmosphere. Basaltic magmas do not form plinian columns.
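The surface-to-volume point is simple geometry: for a sphere, area grows as the square of the radius and volume as the cube, so the ratio falls off as 3/r. A quick check, with particle sizes made up for the comparison:

```python
import math

def surface_to_volume(radius_m):
    """Surface-to-volume ratio of a sphere:
    (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r."""
    area = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return area / volume

# a 10-micrometer ash particle versus a half-meter-radius lava gobbet
fine_ash = surface_to_volume(10e-6)   # 300,000 per meter
gobbet = surface_to_volume(0.5)       # 6 per meter
print(fine_ash / gobbet)              # ratio of 50,000 to 1
```

Per unit of volume, the fine ash particle has tens of thousands of times more surface through which to shed heat, which is why fine ash cools quickly while basaltic gobbets stay liquid.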

The "fire fountains" seen on the active volcanoes of the Hawaiian Islands provide an excellent example. In a fire fountain, great volumes of lava are sprayed high into the air, but the lava, which is basaltic, emerges as large liquid gobbets, sometimes a meter or more across, and is simply spattered around the vent. [TO BE CONTINUED]

START | PREV | NEXT
BACK_TO_TOP

[FRI 14 OCT 05] LIQUID TREASURE (6)

* LIQUID TREASURE (6): There are success stories in the water supply business, including Chile, South Africa, and Australia. Chile has achieved near-universal water supply, with an efficient operation run under contract to Suez. Everyone is charged for the full costs of their water, with poor people able to obtain a rebate.

South Africa has been another star. Under the apartheid regime, water utilities were focused on providing water to the white elite in the cities, while blacks in the countryside were neglected. When the African National Congress took power in 1994, about 14 million South Africans, a third of the population, lacked access to clean water. Now 9 million of those people have access, and the rest are expected to obtain access by 2008. "Access" in this case is defined as piped water in cities, or an outlet within 200 meters (650 feet) of every home in villages.

South Africa passed a comprehensive water bill in 1998 that wiped the slate clean of the old water-rights laws, made water allocations temporary and tradeable -- more on these concepts below -- and dictated that all users, except for the poorest, bear the actual costs of their water. Originally the poor were given an allocation of 25 liters (6.6 US gallons) of water a day at a minimal rate, but even that was more than the poor could bear, so now the poor get 25 liters a day free. Although the government is clearly in control, private contractors are welcome and encouraged.

The real superstar is Australia. Water is clearly a problem Down Under, and originally the nation took something along the lines of the American route, building dams and providing water to farmers at subsidized rates. That has now all changed. Now the government claims ownership of all the water, with users granted access rights. A system of water trading was introduced, in which those with more water than they could use could transfer rights to it to others for a fee, discouraging waste and encouraging competition. In the cities, all water is metered, with users paying by volume instead of the wasteful flat rate. The system is designed to take into consideration all uses and costs of the water.

Laggards are starting to get wise as well. Even in California, at the very soul of tangled American water bureaucracy, people are taking a more realistic view of water use, and considering innovations such as water trading. More progress will require revising the laws that backed up the old system, but nobody thinks that will happen soon.

In India, where the government bureaucracy is at least as tangled but the resources are far more scarce, there are rays of light as well. A non-governmental organization (NGO) named "Tarun Bharat Sangh", led by Rajendra Singh, is working with villages to build up their "rainwater harvesting" infrastructure, building up systems using old cisterns and other hydrotechnologies linked with new installations to allow them to suffer through droughts with less pain. The enthusiastic Singh, called the "Water Man" by the media, claims to have helped set up thousands of rainwater harvesting systems not only in villages but in cities. He's strongly skeptical of the central government's scheme to interlink all the country's rivers, not just because he sees it as overkill but because it's a bureaucratic plan in which the "little people" have little say. Rainwater harvesting might not be the answer to all of India's water problems -- but in devolving authority and ownership to villages, it sets a precedent for the future.

* In summary, modern water planning is based on three assumptions: that water allocation is best handled by market mechanisms; that water should be universally metered; and that people should pay the actual costs of their water, including environmental costs.

Green activists don't like water misuse or big water projects, but they are very reluctant to accept these assumptions. Such attitudes promote waste, and, as discussed, in places where water prices are kept unrealistically low, the poor are usually not the beneficiaries, with the system focusing on the big users and lacking the funds or the interest to provide services to the small fish. Even if full rates are charged, there are ways of ensuring that the poor will not be shut out, such as the Chilean use of rebates and the South African provision of a free daily minimum.

Market mechanisms also mean cleaning up legal tangles over rights and ownership, and setting up trading mechanisms. Private firms will play a vital role in any market-based water system, though as has been repeatedly emphasized in this survey, private firms will not own the water and they will remain under overall government oversight.

Technology of course has a role to play. Drip irrigation, popular in Israel, is much more efficient than spray or flood irrigation, but it is not widely used in other desert lands yet. Big technologies such as dams still remain important, though they have to be handled carefully, and there is no sense in turning to an elaborate and expensive solution when a simpler and cheaper one will do the job. It is not clear if the Johannesburg goal of halving the number of people without access to water by 2015 will actually happen, but if it doesn't, it won't be for any lack of good ideas for making it happen. [END OF SERIES]

START | PREV
BACK_TO_TOP

[THU 13 OCT 05] SUNRISE

* SUNRISE: A survey in THE ECONOMIST of contemporary Japan ("The Sun Also Rises" by Bill Emmott, 8 October 2005) used the recent elections in Japan as a springboard. Koizumi Junichirou, prime minister since 2001, had tried to push a bill privatizing the Japan Post service through the Japanese Diet (Parliament), and when it was rejected he took drastic action, dissolving the Diet and calling for new elections. In the election, he did all he could to block the reelection of members of his Liberal Democratic Party (LDP) who had voted against the Japan Post bill, putting his own candidates in their place. There were those who suspected this was an act of political "seppuku" -- ritual disembowelment -- but this September Koizumi pulled it off.

Privatizing the postal service might not sound like the stuff of political revolutions. In itself, it wasn't, but it was a landmark in a process of gradual social and political turnaround that seems likely to continue. Why this is so requires a bit of background.

During the boom years of the 1980s, Japan's industries were the object of global admiration, as well as jealousy and fear. Even when things were at their best, however, there was more appearance than substance to the miracle. This was most true of the belief that wise government policies had created it. To be sure, the Japanese government did some things right, but success came more in spite of the political system than because of it. In truth, while Japan's industries were world-class, Japan's government was second-rate.

Politics were dominated by the LDP, creating an effective one-party state. Politics were almost completely cynical, based on the acquisition of power and influence, with factions in the LDP maneuvering against each other for control. When the economic bubble finally popped in the 1990s, the deficiencies of the government were laid bare: an addiction to pork-barrel contracts; politicians whose ethics would get them into big trouble in most Western states; and a financial system rotten with bad loans, granted mindlessly during the boom times and resulting in a mountain of debt.

There was high unemployment, even poverty and homelessness. Suicides soared, not too surprising in a nation where suicide is wired in various ways into cultural traditions, and crime rose somewhat too. The Japanese, once regarded as frugal, now save only about 5% of their income. Still, the economic changes were not all bad, at least on the macroscale, though not necessarily all that comfortable for the people on the bottom of the economic pyramid. In the good times, Japanese industry had a tradition of paternalistic "lifetime employment", though this was more of a custom than a hard and fast rule. Still, there was resistance to firing personnel even when they were surplus. In the new regime, companies hired on large numbers of "furiitaa (freeters)" -- a coinage from English "free" and German "arbeiter (worker)" -- resulting in much greater labor flexibility.

With the economy now finally starting to pick up again, thanks in good part to exports to China, full-time employment is starting to pick up, but the industries have become used to the idea of a more flexible approach to labor, and aren't unhappy about the fact that Japanese labor is, these days, not all that expensive. There is a looming problem that the population is graying and the labor market will shrink, but Japanese companies have been pathfinders in the use of industrial robots, and current social circumstances are encouraging for further automation. The companies may not find keeping up with a declining labor market all that difficult, ensuring a high GDP per head.

In the meantime, the ghastly debt, kept afloat by the reluctance of the government and banks to pull the plug on "zombie" companies that couldn't make good their repayments, has started to fall drastically, to less than half the level of 2001. This reduction was mostly due to mergers that finally allowed the bad loans to be properly written off. The banks are feeling much healthier.

Changing times have also meant macroscale improvements and microscale discomforts for businesses. Although Japan is stereotyped as over-regulated, formally speaking the regulatory environment has traditionally been loose, governed mostly by traditions and obnoxiously vague laws that government bureaucracies could implement as they saw fit. Now corporate entities are being required to conform to more specific laws ensuring transparency and accountability, like they would in the West. To be sure, the new laws are a mixed lot, with some making life easier for company presidents, and enforcement is still weak, but things seem to be headed in the right direction.

The government itself is undergoing a comparable form of discipline. Koizumi's privatization of Japan Post is one of the centerpieces. Japan Post, it turns out, is no mere postal service; it also operates as the biggest personal-deposits banker and personal insurer in the country. This not only meant the government obstructed the operations of private companies in those fields, it also gave LDP politicians an immense source of "black budget" that could be siphoned off without accountability. Japan Post was indeed a distinctively Japanese institution.

The reform that Koizumi pushed through was also distinctively Japanese, since it will privatize Japan Post over ten years, in gradual stages; the Japanese do not like to do things abruptly. There was unusually large turnout for the special election, with voters clearly preferring the gradual approach to the more drastic prescriptions of the Democratic Party of Japan (DPJ), the LDP's main rival: the DPJ was crushed in the polls.

The privatization of Japan Post was, however, somewhat symbolic of changes, many of which had already been underway. The government had been cutting pork-barrel contracts, in particular working to get public works projects under control. Building projects had been a particular source of patronage, with the absurd result of seeming to pave the country over completely, ruining much of Japan's natural splendor of mountains, streams, and forests. Now there's an effort emerging to deal with the worst eyesores. Finally, new laws are imposing greater accountability and transparency on the government itself, while changes in the LDP are undermining the power of party factionalism.

Finally, there is the question of Japan's relationship with the outside world. The alliance with the US remains strong, surprisingly so with the differences in mindset and sometimes unhappy history between the two nations; it gives Japanese some confidence in their dealings with their rising trade rivals -- China, Taiwan, and South Korea -- and potential enemies -- meaning the unbalanced North Korean regime. North Korea has pushed Japan to measures that would have been unthinkable in the 1980s, such as launching spy satellites, built with American blessing and support.

Indeed, there is a touch of belligerence in Japanese leadership these days. Koizumi has caused controversy by going to the Yasukuni shrine, a memorial to the nation's war dead. Being a war memorial is not a bad thing in itself, of course, but Yasukuni also enshrines convicted war criminals as heroes, which is grating, and even claims that Japan's war was a crusade of national liberation of Asian nations from Western colonialism. This claim merely annoys Americans and other Westerners, but generates rage in China, Korea, and the Philippines, where the war is generally remembered in terms of Japanese barbarities. There are alternate options for remembering the war dead that are less controversial. Japan still remains basically pacifistic, and in fact has been gradually working towards the construction of regional economic and security alliances.

Koizumi Junichirou

Koizumi plans to leave office for private life in September 2006. He is 63 and wants to have time to listen to music and go out to dinner. He is a political loner and will have little to keep him involved in politics once his time in office is done. The best-odds candidate for his successor is 51-year-old Chief Cabinet Secretary Abe Shinzou, a member of the politically prominent Abe clan and a strong supporter of Koizumi's reforms. Whether the momentum for reform accumulated under Koizumi will persist after he leaves office remains to be seen.

BACK_TO_TOP

[WED 12 OCT 05] NO BUBBLY

* NO BUBBLY: An essay in THE ECONOMIST ("Hold The Champagne", 8 October 2005) points out the generally quiet pleasure of Democrats in the Bush II administration's bad case of second-term blues: a troubled and unpopular intervention in Iraq, a flatfooted reaction to national disasters, a high official indicted in the investigation of the leak of a covert CIA agent's name to the media. Bush now is starting to get flak from the Right -- the Left has always sniped at him, that's not news -- since conservatives who have been unhappy about the Bush II administration's lack of fiscal discipline and efforts to override the other branches of government are not as willing as they were to swallow their misgivings.

That gives Democrats some hope that the next president won't be a Republican. However, as the essayist pointed out, that's getting way ahead of things. The Democrats do not seem to be benefiting much from the problems of the Republicans. As House Speaker J. Dennis Hastert put it in a recent meeting of fellow GOP politicians: "I submit to you that even today, as tough as things seem, it is much better to be us than them."

The problem is that simply having an adversary in disarray is not enough: the Democratic Party needs to have good ideas, and such good ideas seem to be in short supply, with the party unable to resolve differences of opinion between moderates -- willing to meet conservatives in the middle -- and extremists -- who regard conservatives as agents of evil and are energetic in denouncing them in their blogs.

Bill Clinton had his unarguable, even well-documented, faults, but he was able to win the presidency, twice, by moving the Democrats towards the center. This important lesson is lost on the extreme Democrats, who regard concessions to moderation as a betrayal of basic principles. So far, no Democratic leader has emerged who seems acceptable to both factions, much less to the generally moderate independents who increasingly hold the balance of power in presidential elections -- and unless somebody emerges who is acceptable, the chances of the Democrats of winning the White House in 2008 don't make for a good bet. [ED: Twaddle, Obama won handily in 2008.]

BACK_TO_TOP

[TUE 11 OCT 05] SMOLDERING EARTH

* SMOLDERING EARTH: I had heard stories of underground coal fires that went on for years, but it wasn't until I read an article in SMITHSONIAN ("Fire Hole" by Kevin Krajick, May 2005) that I learned the fantastical details of the phenomenon.

Welcome to Centralia, Pennsylvania, in coal-mining country. Many decades of mining left the town and its surrounding area sitting over a lattice of unused mining tunnels, many of which had caved in. In May 1962, local sanitation workers began to burn trash over the entrance to an old mine just outside of town. Soon afterward, smoke began to seep up out of the ground as fire slowly crept its way through the coal seams.

At first, firefighters tried to dig trenches to cut off the fires, but the fires infiltrated beyond the trenches. Then they tried boring holes and pouring down wet sand, gravel, slurries of concrete, and fly ash -- a technique known as "flushing" -- but flushing has a poor track record, and it didn't work in Centralia. State and Federal officials tried to dig another trench, a big one this time, and it didn't work either. Flooding was considered but rejected, because the area was too well drained. Digging a pit that would have had a good chance of isolating the entire fire area would have cost $660 million USD, more than all the property in Centralia was worth even if the ground hadn't been oozing up smoke.

The authorities gave up trying to put out the fire, and it is still burning. At last count, it covered 1.6 square kilometers (400 acres) and was spreading along four arms at 23 meters (75 feet) a year in each direction. It is believed that the fire will burn itself out after covering about 15 more square kilometers (3,700 acres) -- which could take over two centuries.

At first, the residents found the situation more strange than frightening; they could harvest tomatoes for Christmas in their heated gardens. Then people began to pass out in their homes from carbon monoxide poisoning. Sections of road began to subside into the ground as the coal beneath turned to ash, and in 1981 the ground opened up and swallowed a 12-year-old boy, who managed to save himself by grabbing onto a tree root. The Federal government finally bought out most of the 1,100 inhabitants of the town -- it was much cheaper than trying to stop the fire -- and leveled their residences with bulldozers. About 600 buildings were destroyed.

About a dozen die-hards, mostly old folks with no place else they want to go, cling on. The government doesn't want to evict them, though they could be poisoned in their homes, or even swallowed up into the ground along with their residences. Wild turkeys, deer, rabbits, and even the occasional bear walk down the streets where there were once houses; tourists drop in on occasion to see the little town out of the Twilight Zone.

Centralia's smoldering earth

There are dozens of other underground coal fires in the coal mining regions of the US. Pennsylvania has 38, the largest number for any one state; one's been burning since 1915. China has the most of any one country; one that had been burning for a century was just put out in 2003 after four years of effort. The coal fires amount to a substantial source of air pollution and a waste of an otherwise valuable resource, but in so many cases trying to extinguish the blazes has proven so difficult that people adopt the same solution as was applied to Centralia: let it burn.

BACK_TO_TOP

[MON 10 OCT 05] RESURGENT CALDERAS (1)

* RESURGENT CALDERAS (1): As discussed in an article from some years back in SCIENTIFIC AMERICAN ("Giant Volcanic Calderas" by Peter Francis, June 1983), the eruption of Mount St. Helens in southern Washington on 18 May 1980 ejected 0.6 cubic kilometers (0.14 cubic miles) of magma and volcanic ash, and left a crater two kilometers (1.25 miles) in diameter. It was regarded as an awesome event.

However, some 600,000 years ago, a volcanic eruption occurred 950 kilometers (590 miles) to the east, ejecting 1,000 cubic kilometers (240 cubic miles) of pumice and ash and leaving an elongated caldera (the name for a large volcanic crater) 70 kilometers (43 miles) across its longest dimension. The growth of vegetation and effects of glaciation have hidden the main features of that eruption; the most obvious remnant is the Old Faithful geyser in Yellowstone National Park. Yellowstone is a product of one of the largest-scale volcanic processes: it is a resurgent caldera, a caldera whose floor has slowly domed upward in the millennia since the eruption. Volcanic eruptions such as the one that formed the Yellowstone caldera are one of the greatest natural catastrophes, possibly equivalent to the impact of an asteroid.

Fortunately, they're rare. None have taken place through the few thousand years of recorded history, and in the US only three are known to have happened in the last million years. In addition to the one in Yellowstone, an eruption 700,000 years ago formed the Long Valley caldera in California, and an eruption a million years ago formed the Valles caldera in New Mexico.

Calderas of similar age may eventually be uncovered in other parts of the world. Still, it will probably turn out that in the past million years, no more than 10 eruptions of calderas have occurred world-wide. However, studies of the San Juan mountains in Colorado have revealed at least 18 calderas between 20 and 30 million years old, and many others of similar age have been recognized in southern New Mexico, Arizona, and Nevada. In the past decades, vulcanologists have made rapid progress in understanding the origins of giant, resurgent calderas and the catastrophic eruptions that form them.

* The fundamental mechanism of caldera formation was outlined early in the 20th century. The sudden ejection of large volumes of magma from a magma chamber a few kilometers below the surface of the Earth abruptly removes the support of the roof of the chamber, and the roof collapses, leaving a caldera at the surface. Calderas can range in diameter from a few kilometers to 50 kilometers (30 miles) or more.

Apart from size, the definitive feature of a resurgent caldera is the slow upheaval of its floor, which probably results from the intrusion of fresh magma into the magma chamber that created the caldera in the first place. The height of vertical resurgence can exceed a kilometer. This means that unlike an ordinary volcano, a resurgent caldera is a broad depression with a central massif.

Resurgence was first described in 1939 by the Dutch geologist R.W. van Bemmelen, during a study of the Toba caldera in northern Sumatra. Van Bemmelen estimated that the floor of the caldera had subsided by as much as two kilometers, forming a deep lake, but had later been heaved up hundreds of meters to create the island known as Samosir in the middle of the lake. Toba is the largest resurgent caldera known: its maximum dimension is nearly 100 kilometers (60 miles). Further details of resurgence were later worked out by geologists of the US Geological Survey who studied the calderas of the American Southwest, and who actually introduced the term "resurgent caldera".

One other feature of resurgent calderas relates to the nature of volcanic processes. In volcanic eruptions, magma reaching the surface can erupt in three ways: as lava, flowing out relatively gently; as air-fall material, ash and pumice blasted into the air to settle back over the landscape; or as a pyroclastic flow, a ground-hugging avalanche of hot gas and rock fragments.

Most volcanic eruptions produce lava and air-fall ashes; pyroclastic flows are less common. However, in eruptions that form a resurgent caldera, pyroclastic flows account for, by far, the greatest part of the ejecta. In fact, pyroclastic flows often accumulate in a newly formed caldera to a thickness of more than a kilometer. At the bottom of such a pile, the pumice clasts typically soften and weld together. Deposits laid down by pumiceous pyroclastic flows, whether the clasts are welded or not, are referred to as "ignimbrite": fire-cloud rock. Large thicknesses of welded ignimbrite are excellent evidence for the location of ancient calderas.

* Resurgent calderas are big and throw out a lot of material. Surprisingly, they're not easy to spot. One reason is that they're so big; two completely unknown large calderas in the Andes were only discovered by analysis of satellite photographs. The more impressive of the two is the Cerro Galan caldera in northwestern Argentina, which is 34 kilometers (21 miles) in diameter and surrounded by a spectacular apron of ignimbrite extending as far as 70 kilometers (43 miles) from the caldera rim itself. It's a young caldera, only 2.6 million years old.

The second caldera was less obvious. The satellite images suggested that the Kari Kari mountain massif in Bolivia, which is 5 kilometers (3 miles) high, probably represents the resurgent center of a large, old caldera. Earlier mapping of Kari Kari, however, had suggested that the massif was a "batholith": a great mass of coarsely crystalline igneous rock that solidified in the crust of the Earth and was later exposed by erosion. Field work at the site confirmed it was a resurgent caldera. The welded texture of the rock of the massif showed conclusively that it is ignimbrite. The resurgent center is the remaining evidence of a caldera that was originally about 36 kilometers (22 miles) across in its largest dimension. It is 20 million years old. [TO BE CONTINUED]

NEXT
BACK_TO_TOP

[FRI 07 OCT 05] LIQUID TREASURE (5)

* LIQUID TREASURE (5): It is hard, as such, to criticize the use of water to grow crops, in contrast to its use to grow lawns and such. Unfortunately, the devil lies in the details. Irrigation can have disastrous long-term effects on the land, such as salinization that renders the ground completely unsuited to growing crops, and in many places irrigation is wasteful and poorly managed.

The old Soviet Union wrote the book on waste and mismanagement. With regard to irrigation, this is blatantly obvious, even from orbit, in the case of the Aral Sea. In the 1950s Soviet planners decided to stop "wasting" the water flowing into the Aral Sea, and dammed the rivers that fed it. Now the Aral Sea is only a quarter the size it was, and is so salty that even the fish have died off. The land reclaimed by the shrinking sea is too salty to grow crops. The only benefit was the production of a relatively small and totally uncompetitive cotton crop.

Aral Sea

However, the capitalist United States has nothing to crow about in this regard. As the American West became settled, state and Federal governments decided to build dams to irrigate the vast stretches of desert in the region. California invested heavily in massive water projects to irrigate central California and the Imperial Valley area in southern California. The result is highly productive agriculture, to be sure -- along with incidental environmental damage and perverse use of resources. Does it really make sense to raise rice in what would be, in the absence of plentiful and artificially cheap water, a desert region?

There are similar stories elsewhere in the US. The American Midwest took a beating in the 1930s, when years of drought turned the region into the "Dust Bowl". The drought did end, but the real salvation of the region was the discovery of the huge Ogallala Aquifer, an underground "ocean" that stretches from South Dakota to west Texas. The problem is that water is being sucked out of the aquifer eight times faster than it is being replenished, and it could go dry within decades. It's a prescription for deferred disaster, but the politics of water in the American West are short-sighted, with states, companies, and individuals fighting over their piece of the water-rights pie. The laws are agonizingly complex, to the benefit of a small army of lawyers, heavily concentrated in Denver, Colorado.

In Florida, the US Army Corps of Engineers has spent decades trying to "control" the Everglades, with the result of all but destroying it. Now the Corps is trying to un-engineer some of what it has done. Much of the water diverted has gone to support Florida sugar growers, whose crop is also totally uncompetitive. The Soviets couldn't have done any worse.

There are similar stories elsewhere. In India, water and electricity prices are kept artificially low, which encourages waste. In some cases the waste is synergistic: farmers keep inefficient pumps running continuously to deplete groundwater. Saudi Arabia is sucking out groundwater and desalinizing seawater to grow, of all things, wheat, at a cost estimated to be a hundred times greater than the world market average. The desalinization effort is particularly inefficient. Desalinization isn't cheap, though its cost has fallen enough so that it is a workable solution for drinking water in places where usable water is hard to come by. Using it to irrigate crops is an absurdity. [TO BE CONTINUED]

START | PREV | NEXT
BACK_TO_TOP

[THU 06 OCT 05] LONDON UNDERGROUND MAP

* LONDON UNDERGROUND MAP: I was looking through the TV listings and found a short program on the Public Broadcasting Service on "The London Underground Map". I was intrigued, since the London "Tube" system is the world's oldest and one of the biggest; detailed information on its history is a bit hard to find, so anything I could obtain was all to the good. I was a bit surprised to find that the program, which was a BBC production, was strictly about the map of the London Underground -- and further surprised to find that the map was both ingenious and significant. In fact, a copy is displayed as an exhibit in New York's Museum of Modern Art.

* The London Underground covers 400 kilometers (250 miles) of track, with 273 stations. 2.5 million passengers ride the Tube daily; without the map, many would find it much harder to get to where they wanted to go.

The Tube system is the product of an evolution from the mid 19th century, initially consisting of different subway lines operated by different companies -- a fact that would tend to promote confusion in itself. In 1933, a London Passenger Transport Board was established to bring order to the chaos of the Underground (as well as surface buses). The first chief executive of the board, Frank Pick, was strong on design, and imposed his stamp on the system, from building new terminals using modern design concepts down to remaking the logo in up-to-date style, creating a distinctive and uniform corporate identity for the system.

One aspect of the makeover was the introduction of a map that changed everything. Ironically, Pick had nothing to do with it. Harry C. Beck, a 29-year-old engineering draftsman who worked for London Transport, came up with it on his own initiative while he was on temporary layoff. He submitted it to the board, only to have it rejected, the board members thinking it was "too revolutionary". Some of Beck's friends recognized that he had a good thing going and encouraged him to try again. The board shrugged and decided to give it a go on a trial basis, printing a few hundred copies. Much to their surprise, Beck's map was a smash hit. Maps couldn't be printed fast enough.

Previous maps of the Underground system had been maps in the traditional sense of the word. They were scaled representations of the city geography, with the subway routes drawn in appropriately. Beck was the sort of person -- by no means universal among engineers -- who could get his head into what somebody using his product actually needed, which is not a trivial feat since users may not be able to specifically articulate their needs. What did someone riding the Tube care about curves or literal distances? All the rider wanted to know was how to make a connection, get from one station to an ultimate destination. In other words, the real issue was the topology of the network.

Beck's map got rid of the winding spaghetti of the old maps with a nice, neat, color-coded "connection diagram", with the core of the network represented as a rectangular ring and spur lines at diagonals off the ring. Since the core was the most heavily traveled part of the system and the densest in terms of the number of stations and lines, the core was represented in a larger scale than the spur lines, to provide the most detail where it would do the most good.

The map was not only easy to use, it was esthetically pleasing. Frank Pick showed little enthusiasm over the map, but he had created an environment where it could be more easily accepted by encouraging modern art concepts in posters and other displays in the Underground. Modern art's main point of divergence from classical art is the belief that it is not necessary to provide a literal representation of objects, a notion that the map embodied. In fact, some have wondered if there were modern art influences on the map, for example the Dutch painter Piet Mondrian, whose works consisted of grids of rectilinear lines and blocks of color. That's not as silly as it sounds: modern magazine layouts and the like owe a lot to Mondrian, and a glance at reproductions of 19th-century periodicals shows how stilted their layouts were by comparison. However, any such influences on Harry Beck were indirect, since his real sources were engineering diagrams. Beck even made a joke of it, drawing up a version of his map labeled as if it were an electrical system diagram.

* Harry Beck was paid the magnificent sum of seven guineas for his map -- which made it an insane bargain for London Transport. Not only did subway riders like the map, it also helped London Transport improve utilization of the system. The spur lines to outlying regions of the system were underused except at peak commuting times; the reduced scale of the spur lines in Beck's map made the spur lines seem much shorter than they really were, helping advertising campaigns that encouraged people to take leisure trips on the Tube to the outlying areas.

Harry Beck kept tweaking the map for the next 26 years, adapting it to additions to the network and tweaking its style. He found it particularly difficult to figure out a way to represent station transfers, coming up with a series of different graphical devices. Ordinary stops on the line were originally represented by dark blobs, but Beck came up with the idea of using simple tick marks instead, much reducing clutter. He did all the work in his spare time, taking notes on pieces of scrap paper that his wife would find everywhere -- in his bedclothes, in odd corners of the house.

By 1949, Beck had finally come up with a map that he regarded as satisfactory. It was even more rectilinear than earlier maps; station transfers were designated by two circles with a "pipe" between them. In 1960, the London Underground dropped his map in favor of one drawn up under the direction of Harold Hutchison, boss of publicity at London Transport. However, Hutchison didn't have Beck's knack of making things clear, and Underground riders found his map confusing. Beck's map was restored in 1963, updated by a London Transport official named Paul Garbutt, who also did the work in his spare time. It gave him an appreciation of Beck's skill in design, since Garbutt found that it was impossible to make relatively minor changes without having to adjust the layout of the map as a whole.

London Underground map

Harry Beck died in 1974. By that time, his map was no longer unique. Anyone riding a subway system, trying to navigate through a sprawling airport complex, or even using a city bus system, will have encountered a schematic map that is clearly a descendant of Beck's Underground map. Beck's map continues to be updated as more changes are made to the Underground system, and it remains as valuable as ever in the 21st century.

[ED: The prominent arts / design guru Milton Glaser, famed for his "I <heart> New York" logo, was quoted in the program and made a very interesting comment: "You frequently have the most serious intentions, intellectually, and then you do something, and it turns out to look absolutely dreadful. And all design, basically, is a strange combination of the intelligence and the intuition -- where the intelligence only takes you so far, and then your intuition has to sort of reconcile some of the logic in some peculiar way."

I found this intriguing because I draw diagrams for my physics documents and the like, and go through much the same process. To be sure, sometimes I see a diagram in a book and then basically redraw it in my own style. However, some of the diagrams that may seem simple are products of a surprising amount of effort -- some of the stuff I did on relativistic physics paradoxes, and on Stirling-cycle engines. As drawings, they were nothing much, but figuring out what to draw was not easy, and sometimes during the struggle the insights just magically pop into my head. Anybody who's in a creative line of work understands this phenomenon perfectly well, and it's one of the major attractions of the life. There is a certain deep pleasure in waking up at 2:00 AM when the answer to a difficult problem simply materializes out of the darkness.]

BACK_TO_TOP

[WED 05 OCT 05] FEYNMAN CONSIDERED

* FEYNMAN CONSIDERED: I just finished reading James Gleick's 1992 book GENIUS: THE LIFE AND SCIENCE OF RICHARD FEYNMAN -- or maybe I should say finished skimming through it rapidly. It's a typical biography, giving me about three times more than I wanted to know. I'm not slamming Gleick for that, since he needed to be thorough, and the book wouldn't have been salable if it hadn't been: by their nature, biographies are usually overstuffed.

The book travels through Feynman's upbringing on Long Island, then his stint at MIT, graduate studies at Princeton with John Wheeler, duty at Los Alamos, and then postwar work that eventually landed him at Caltech. It goes through his construction of quantum electrodynamics, playing him off Julian Schwinger, Murray Gell-Mann, and Freeman Dyson -- then his work in superfluids, tinkerings with genetics and nanotech, his publication of popular books, and his service on the Challenger shuttle accident panel. He didn't die easy, suffering a sequence of cancers, some of them imaginative, that took him down in stages.

As far as his personal life goes, the book plays up his first marriage to Arline, who was tubercular when they tied the knot -- he didn't dare kiss her on the lips at the marriage ceremony -- and died in 1945. If she had held out about another year, she might have pulled through, since antibiotics were just becoming available that could deal with the disease. The marriage caused some problems with his family, since his mother Lucille had good reasons to oppose it and did so, but the rift was eventually patched up.

After that he became something of a rake -- I used to have a roommate along the same lines, it might look fun from a distance but up close it seems more tiresome and sordid, just as it does here -- with a four-year intermission of marriage to his second wife, Mary Lou, in the early 1950s. The divorce actually made the news, not because he was famous with the public at the time, but because Mary Lou cited "mental cruelty" in the divorce, claiming he played the bongos too much and did mathematics in his head when he went to bed. He finally settled down with his third wife, Gwyneth, a Briton, and raised a family.

There are a few anecdotes in GENIUS that didn't get much play elsewhere. When he was a grad student at Princeton, he was having an argument with a colleague about the mobility of sperm; Feynman went away for a bit and came back with a sample. "Surely you're joking, Mr. Feynman." Another interesting item was the colleague who said that following Feynman's thought processes was like listening to Chinese opera. One of my favorite bits was a side reference to Francis Crick, the blunt Briton who won the Nobel for determining the DNA double helix, and was later compelled by his admiring public to create the following form letter:

   Dr. Crick thanks you for your letter, but regrets that he is unable to
   accept your kind invitation to:

   [ ] send an autograph            [ ] help you in your project
   [ ] provide a photograph         [ ] read your manuscript
   [ ] cure your disease            [ ] deliver a lecture
   [ ] be interviewed               [ ] attend a conference
   [ ] talk on the radio            [ ] act as chairman
   [ ] appear on TV                 [ ] become an editor
   [ ] speak after dinner           [ ] write a book
   [ ] give a testimonial           [ ] accept an honorary degree

I had to laugh because I get a bit of this sort of thing over email myself -- and I'm not even famous!

The more I learn about Feynman, the more glad I am that I never met him -- definitely a genius, but still mouthy, cocky, obnoxious, a terror to students. It becomes more obvious that much of the Feynman myth was a creation of Feynman's own self-promotion.

BACK_TO_TOP

[TUE 04 OCT 05] MYSTERY OF THE MEGAFLOOD

* MYSTERY OF THE MEGAFLOOD: A recent episode of the US Public Broadcasting Service's (PBS) NOVA science series told the story of an ancient catastrophe whose results confounded geologists for decades.

I'm originally from Washington State, in the northwest corner of the US. Folks from back east often assume that it is usually rainy there, but in fact there is a mountain range, the Cascades, that cuts off the western third of the state, where damp Seattle resides, from the eastern two-thirds, which is relatively dry -- in fact, much of it is a sagebrush desert. The terrain grows more forested and mountainous in the states of Idaho and Montana further to the east, where the sprawling complex of mountain ranges called the Rockies rises.

The mighty Columbia River snakes down through central Washington, turning west to make up much of the state's southern border. In southeast Washington state, there is a stretch of very unusual terrain, showing evidence of severe erosional processes and with some geologically unique features. The settlers in the area descriptively called it the "Scablands", believing that the terrain had been injured in some way. The place features deep gorges; a huge dry waterfall, known simply as "Dry Falls"; hills of gravel; large round "pothole" ponds gouged out of stone; and enormous boulders dropped here and there.

the Washington Scablands

Geologists found the Scablands fascinating, but did not recognize at first just how unusual the place was. Before modern science, there was an assumption in Western culture that the Earth was shaped by great catastrophes, such as Noah's flood, but as the science of geology arose in the 19th century, scientists began to convert from "catastrophism" as the force shaping the Earth to "gradualism", the belief that changes took place over very long periods of time. The assumption in the early decades of the 20th century was that the Scablands were created in such a gradual fashion, by the action of rivers. The only problem with that idea was that there are no major rivers in the region, and there doesn't appear to have been any there for a long time. The Columbia has the power needed to reshape terrain in a big way, but it is 80 kilometers (50 miles) from the Scablands, and there is no evidence it ever flowed through there.

The big potholes in the rock floor of the Scablands were particularly puzzling. Rivers will gouge out potholes in their beds, but they're usually only a few meters across. The potholes in the Scablands are an order of magnitude bigger than that. The huge boulders didn't seem quite as puzzling. They're scattered erratically through the area -- in fact, geologists call them "erratics" -- and are clearly not native to the region: a granite boulder may be found on top of a bed of non-granitic rock. There is a known mechanism for scattering erratics around, however: glaciers. During the last Ice Age, about 20,000 years ago, fingers of glaciers probed down over what is now the Canadian border. Glaciers will pick up large boulders and carry them for long distances, with the boulders left when the glacier melts.

Another bit of evidence that suggested the action of glaciers in the Scablands was the presence of "hanging valleys" on the edges of canyons. When a glacier gouges out a canyon, it cuts through lots of little valleys on either side, leaving their exits into the gouge hanging far above the main valley floor. Since the glacier amounts to the new "floor" of the canyon, at least until it melts, the hanging valleys don't get any deeper. The problem with glaciers was that there was no evidence of their action outside the border of the Scablands. That didn't make any sense: the glaciers would have come down from the north or out of the mountains, and there was no trace that they had done so.

* In the 1920s, a geologist named J. Harlen Bretz came up with a spectacular theory to explain the origin of the Scablands. He was extremely familiar with the area, and he felt the only way to explain its features was through the action of one huge, catastrophic flood, a wall of water 275 meters (900 feet) high dumping a total of 2,000 cubic kilometers (500 cubic miles) of water in a very brief period of time.

On 27 January 1927, Bretz delivered a lecture on his ideas to a meeting of the Geological Society of America in Washington DC (which is on the other side of the US from Washington state, incidentally). The lecture did not go over well. Geology had embraced gradualism, for the good reason that it seemed to be how things generally happened, and Bretz's "megaflood", as it would later be known, was too much like the old catastrophism, something a Biblical literalist would have invented. Bretz did have a significant problem: he couldn't say where the huge volume of water had come from in the first place. However, according to the story, one of the geologists in the audience, Joseph T. Pardee, whispered to a colleague: "I know where Bretz's water came from." Pardee was more timid than Bretz, and did not come to his defense.

* Where it came from was Missoula, Montana, 400 kilometers (250 miles) to the east of the Scablands. In the area to the west of Missoula in Idaho there is clear evidence of glaciation -- and in fact it is obvious today that a glacier moved down through a valley that was the only outlet for a huge basin to the east. This glacier was about 800 meters (2,600 feet) high and 37 kilometers (23 miles) wide. With the glacier in place, what is now called the Clark Fork River -- named after William Clark, who mapped it during the famous Lewis & Clark exploration expedition of 1804-1806 -- began to fill up the basin behind the huge "ice dam".

In its prime, "Glacial Lake Missoula" was half the size of Lake Michigan, larger than Lake Erie and Lake Ontario combined, with a volume of about 2,170 cubic kilometers (520 cubic miles). The site where Missoula now sits was under 300 meters (a thousand feet) of water. Pardee knew that Missoula had once been underwater, but he assumed that the lake had been basically static. In the 1930s, however, he noticed huge ripples in the terrain there, the size of fair-sized hills, and came to the conclusion that the ripples were created by the rapid motion of huge volumes of water. Pardee noticed something else: the ripples pointed towards the Scablands.
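The lake comparisons can be sanity-checked with a little arithmetic. The lake volumes below are my own rough assumptions from commonly cited figures, not numbers from the NOVA program, so treat this as an illustrative sketch:

```python
# Rough sanity check of the Glacial Lake Missoula comparisons.
# All lake volumes are approximate, commonly cited figures (my own
# assumptions, not from the program).
MISSOULA_KM3 = 2_170   # Glacial Lake Missoula, from the text
ERIE_KM3 = 480         # Lake Erie, approximate
ONTARIO_KM3 = 1_640    # Lake Ontario, approximate
MICHIGAN_KM3 = 4_900   # Lake Michigan, approximate

# Larger than Erie and Ontario combined?
print(MISSOULA_KM3 > ERIE_KM3 + ONTARIO_KM3)
# Fraction of Lake Michigan's volume -- roughly "half"
print(MISSOULA_KM3 / MICHIGAN_KM3)
```

With these figures the combined Erie-plus-Ontario volume comes out a bit over 2,100 cubic kilometers, just under Missoula's, and the Michigan ratio lands in the mid-40-percent range, consistent with the "half the size" description.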

Pardee's discovery made Bretz's ideas seem more plausible, though there was still one major issue to be addressed: how could all that water have been released so quickly? As it turned out later, an ice dam under heavy water pressure doesn't just melt over a period of time: it will give way catastrophically. Water expands when it freezes. At the bottom of a glacier, the pressure is so great that the water can't expand, and so it remains liquid at several degrees below zero Celsius. This highly pressurized water begins to force its way into tiny cracks in the ice. The motion of the supercooled water creates friction and heat, melting cracks in the ice that grow bigger over time. Ultimately, the ice dam simply crumbles abruptly. A much smaller ice dam collapsed in this way in Iceland in 1996, creating a destructive flood.

As one geologist put it, when the ice dam plugging up Glacial Lake Missoula finally disintegrated in an explosion of ice fragments, the sound was beyond belief: "Imagine the loudest noise you've ever heard; multiply that by a thousand times." This huge wall of water swept through Idaho and into southeastern Washington. Experiments with scale models of the terrain show that the megaflood would have gouged out canyons and falls just like those seen in the Scablands.

What about the huge potholes? Other experiments show that when a high-velocity flow of water hits an outcropping of rock, it produces a vortex, an "underwater tornado", that arcs back into the bedrock, with the spiral of bubbles efficiently gouging a hole in the stone. Of course, explaining the erratic boulders was easy: they were embedded in chunks of glacial ice that were carried along with the megaflood. The whole process took about a day, with the flood finally draining through the Columbia basin and out into the Pacific, helping widen the dramatic Columbia Gorge through the Cascade range nearer the coast, and even dumping erratics into the Willamette Valley, on the west side of the Cascades to the south, in what is now the state of Oregon.

* Bretz was finally vindicated, receiving the Geological Society of America's Penrose Medal in 1980. However, geologists following in Bretz's footsteps are now wondering if he didn't go far enough. There's a canyon in the Scablands whose walls have many layered deposits. It was assumed that the layers were due to "pulses" in the megaflood, until somebody noticed a thin white layer in the stack. The Cascade mountain range has a number of semi-active volcanoes, Mount Saint Helens near the southern border of Washington being one of the best known. The white layer was composed of ash from an eruption of Mount Saint Helens about 15,000 years ago. There was no way the ash could have fallen into the flood and been deposited in such a nice neat layer; obviously, the ash fell on dry ground -- and then was overlaid by new layers of sediment.

That meant that there was more than one megaflood. A glacier would come down and produce a Glacial Lake Missoula, with the ice dam disintegrating in a monstrous flood, and then the cycle would begin all over again. Geologists are sifting through the layers to see what the chronology was: at present, they believe the bottom layer was deposited 20,000 years before the top layer.

Research continues. Geologists now realize that gradualism and catastrophism are complementary, not opposed, concepts, and that megafloods -- and huge eruptions, and meteor impacts -- have had a significant influence on the geological history of the Earth. Hopefully, researchers will not get an opportunity to observe such a catastrophe any time in the near future.

BACK_TO_TOP

[MON 03 OCT 05] BAMBI & COYOTES

* BAMBI & COYOTES: It is often assumed that human settlement of wild regions is necessarily hard on the wildlife, but this is not entirely the case. Large predators and free-ranging herbivores will suffer, but some other creatures will find the new order to their liking.

One species that has adapted very well to human settlement is deer. It is believed that the deer population of the US is now larger than it's ever been, since deer can live in surprisingly small open areas and have no large predators to cull their numbers. People who live on the outskirts of towns find them cute at first -- and they are; I remember playing with a captive fawn when I was an adolescent -- but once the deer start destroying their gardens, the citizens start to think of setting up booby traps.

Rising populations of deer also mean automotive accidents -- there are some rural roads near where I live where I'm very paranoid, driving in the dark, about what might jump out onto the road -- with US statistics suggesting about a billion USD in losses and a few hundred people killed each year in collisions with animals, mostly deer. There is plenty of justification for declaring energetic open hunting seasons on deer, though that isn't a real solution when residences are in the line of fire.

There have even been a series of attacks by bucks in rut on humans and dogs. One person who was badly gored in the face died a few weeks later of complications. Deer are normally frightened of dogs, but in some of the recent attacks the bucks have killed fairly large dogs. Bambi, with an attitude.

* Coyotes are another species that has adapted well to human habitation. A recent study of urban coyotes in Tucson, Arizona, found they were getting aggressive, following after people walking with dogs on a leash. Urban coyotes are fond of cats and small-to-middling dogs as prey; there are reports of attacks on small children as well.

coyote on the prowl

I live on the outskirts of Loveland, Colorado, a town on the prairie in the shadow of the Rockies, and on a walk before sunrise a week ago I heard the eerie wailing of the coyotes, having one of their "pow-wows" on nearby farmland. In the summers, when I kept the window open at night, I could hear them getting it on near a pond to the west. One night I heard the wailing and then: BAP BAP BAP BAP! BAP! BAP BAP! Somebody was taking pot shots at them. That was the end of their sessions at the pond; they're cagey creatures and decided to go serenade someplace else. I told my mother the story over the phone and she suggested: "He probably wanted to get rid of them so he could get some sleep."

When I was an adolescent, some neighbors at our lake place in north Idaho had a pet coyote. Coyotes really are wild animals, and the whole exercise ended up a little tragedy -- it's not sensible to keep one as a pet. However, it was an adorable creature, with some odd habits. I picked it up one time, wrapping my arms around its body, front legs to back, enjoying its deep soft fur. It gave me a "kiss" in return, which in coyote talk was to open its jaws very wide and gently frame my face with its fangs. I felt not the least fear; it was obviously anything but a hostile act. If it had meant to do me harm, it would have gone for my throat.

BACK_TO_TOP
< PREV | NEXT > | INDEX | GOOGLE | UPDATES | EMAIL | $Donate? | HOME