
DayVectors

jan 2007 / last mod jan 2022 / greg goebel

* 23 entries including: power plant infrastructure, medical infosystems, PC interfaces & networking, clean coal, Randi challenge, money cards, green business bubble, modifying intestinal flora, future energy tech, online jihadism, Japanese balloon maker, Bell Labs fumbled the microchip, my new Yaris, Airborne quackery, recycling airliners, emerging economies.

banner of the month


[WED 31 JAN 07] CLEAN COAL?
[TUE 30 JAN 07] TAKE THE CHALLENGE?
[MON 29 JAN 07] THE MEDICAL INFOSYSTEM CHALLENGE (1)
[FRI 26 JAN 07] INFRASTRUCTURE -- POWER PLANTS (6)
[THU 25 JAN 07] GIMMICKS & GADGETS
[WED 24 JAN 07] MONEY CARDS
[TUE 23 JAN 07] GREEN BOOM GREEN BUST?
[MON 22 JAN 07] PC INTERFACES & NETWORKING (4)
[FRI 19 JAN 07] INFRASTRUCTURE -- POWER PLANTS (5)
[THU 18 JAN 07] SILENT PARTNERS
[WED 17 JAN 07] ENERGY FUTURE
[TUE 16 JAN 07] JIHAD ONLINE
[MON 15 JAN 07] PC INTERFACES & NETWORKING (3)
[FRI 12 JAN 07] INFRASTRUCTURE -- POWER PLANTS (4)
[THU 11 JAN 07] BALLOON MAN
[WED 10 JAN 07] DEAD END
[TUE 09 JAN 07] YARIS BACK TO THE FUTURE
[MON 08 JAN 07] PC INTERFACES & NETWORKING (2)
[FRI 05 JAN 07] INFRASTRUCTURE -- POWER PLANTS (3)
[THU 04 JAN 07] READ THE FINE PRINT
[WED 03 JAN 07] SKY JUNK
[TUE 02 JAN 07] EMERGING ECONOMIES
[MON 01 JAN 07] ANOTHER MONTH

[WED 31 JAN 07] CLEAN COAL?

* CLEAN COAL? As discussed in an article from DISCOVER magazine ("Can Coal Come Clean?" by Tim Folger, December 2006), while there's been considerable worry about running short on fuel over the last few years, there's one source of energy that nobody worries about running out of any time soon: coal. It's not only common, it's also much cheaper than oil or natural gas, and is the primary source of electrical energy in the US -- which has 27% of all known coal reserves, with some coal enthusiasts calling the USA the "Saudi Arabia of coal". America has, at current consumption rates, about a 180-year supply of coal. To be sure, coal use is increasing rapidly here, with the US Department of Energy (DoE) estimating that 153 new coal-fired power plants will be built in the US by 2025. That's nothing compared to China and India, the second and third biggest coal producers respectively; China alone is expected to build 562 coal-fired plants by 2013. Some coal advocates believe that coal could also be used as a feedstock to produce gasoline and diesel fuel.

Coal has a catch, however: it's the dirtiest of all fossil fuels. It is not only dirty to burn, it is also dirty to dig out and process. To be sure, pollution control systems do a fairly good job of getting rid of the pollutants in the emissions of coal plants, and they could do an even better job in time. However, even with perfect pollution control systems, burning coal would still end up producing carbon dioxide -- a lot of carbon dioxide, twice as much per unit of energy produced as natural gas. Worries have been steadily increasing that rising atmospheric carbon dioxide concentrations will lead to disastrous global warming, with a wide range of unpredictable consequences. Expanded use of coal is certain to accelerate the process.

Coal advocates believe that the problems can be managed. Improved pollution control systems will be able to remove the toxic products of coal burning, while schemes for underground sequestration of carbon dioxide will head off global warming. A number of advanced-technology demonstrator coal plants are already in operation to point the way to a "clean coal" future.

* The Polk demonstration plant near Tampa in Florida provides a showcase for clean coal technology. The centerpiece of the scheme is "integrated gasification combined cycle (IGCC)". A conventional coal-burning power plant burns powdered coal to ensure thorough combustion; IGCC does one better by burning coal converted to gas, which not only provides efficient combustion but better control over exhaust products.

At the Polk plant, coal is still powdered but it is then mixed with water to form a slurry, which is fed into a "gasification unit" about 91 meters (300 feet) tall. The slurry is fed under about 14 atmospheres of pressure into a 9 meter (30 foot) tall vessel at the top of the unit, which contains 96% pure oxygen at 1,370 degrees Celsius (2,500 degrees Fahrenheit). Under such conditions, the coal doesn't burn; instead it decomposes into "coal gas" or "syngas" composed mostly of hydrogen and carbon monoxide, along with various impurities.

There is nothing new about coal gas synthesis, the scheme having been used to produce municipal gas for about a century. It is no longer much used in that role, since the carbon monoxide makes the gas very toxic; natural gas has replaced coal gas for home heating. At the Polk plant, the coal gas is pumped through filters that extract the sulfur, particulates, and other pollutants, with the purified output sent to the plant's power system. It is much more effective to nail the pollutants before combustion than after, and indeed the plant's operator, Tampa Electric, sells off the sulfur and other pollutants captured. There is an optimistic saying that pollution is simply a misplaced resource, and Tampa Electric officials claim the plant produces extremely small quantities of landfill waste.

At a traditional coal-burning power plant, the electric power is generated by boiling water and using the steam to drive a turbine. At the Polk plant, the coal gas is burned directly in a combustion turbine to generate 125 megawatts of power. Burning the coal gas in nearly pure oxygen leads to more complete combustion, resulting in improved efficiency, and suppresses the formation of toxic nitrogen oxides (NOx) that would be created in burning the gas in ordinary, nitrogen-laden air. However, the Polk plant doesn't completely part with tradition, with the hot exhaust from the combustion turbine used to boil water that drives a second steam turbine to produce another 125 megawatts. The Polk plant is about 15% more efficient than a conventional coal-burning power plant.
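
To put the arithmetic in one place, here's a minimal back-of-the-envelope sketch in Python of the combined-cycle arrangement described above. The two 125-megawatt figures and the "about 15% more efficient" claim come from the article; the 35% baseline efficiency for a conventional coal plant is an assumed typical value, not a figure from the piece.

```python
# Back-of-the-envelope look at the combined-cycle numbers quoted above.
# The 125 MW figures and the "15% more efficient" claim are from the article;
# the 35% baseline efficiency for a conventional plant is an assumed value.

gas_turbine_mw = 125.0     # combustion turbine burning the coal gas
steam_turbine_mw = 125.0   # steam turbine run off the hot exhaust
total_mw = gas_turbine_mw + steam_turbine_mw

baseline_eff = 0.35                 # assumed conventional-plant efficiency
igcc_eff = baseline_eff * 1.15      # "about 15% more efficient"

print(f"total electrical output: {total_mw:.0f} MW")
print(f"coal heat input, conventional: {total_mw / baseline_eff:.0f} MW thermal")
print(f"coal heat input, IGCC:         {total_mw / igcc_eff:.0f} MW thermal")
```

The point of the exercise is that the same 250 megawatts of electricity comes out of noticeably less coal, which also means less carbon dioxide to deal with.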

The IGCC coal-gas combustion scheme also makes it easier to capture carbon dioxide for sequestration, though the Polk plant doesn't do that -- it just dumps the carbon dioxide into the air. It would be straightforward to retrofit the plant, but Tampa Electric plans to replace it instead with a new 600 megawatt IGCC plant that will have carbon dioxide sequestration. Most IGCC plants built in the future will have carbon dioxide sequestration as a matter of course.

A report by the United Nations Intergovernmental Panel on Climate Change released in 2005 was very optimistic about carbon dioxide sequestration, estimating that trillions of tonnes of the gas could be stored in old coal mines, depleted oil and gas fields, and in various types of natural underground formations. That would be enough to swallow the carbon dioxide emissions of coal-burning power plants for centuries. Sequestration pilot projects are already underway. Statoil, the highly-regarded Norwegian national oil company, extracts thousands of tonnes of carbon dioxide from natural gas and pumps it through the floor of the North Sea, into a sandstone deposit covered by a thick layer of shale. Statoil experts believe that this formation could store all the carbon dioxide from Europe's coal-burning power plants for hundreds of years. In North America, EnCana Petroleum of Calgary, Alberta, is pumping carbon dioxide into oil fields in southern Saskatchewan to not only sequester the gas but to drive out otherwise unrecoverable oil. They've been doing it for six years and the gas has shown no sign of leaking out. A third project, at In Salah in Algeria, is pumping carbon dioxide into depleted natural gas wells.

* There's a big difficulty with the gleaming IGCC future. Only 9 of the 75 coal-burning power plants planned for the US over the next decade will be IGCC plants; the problem is that while they are about 15% more efficient than conventional coal-burning power plants, they also cost about 20% more, and at present the regulatory environment in the US doesn't give an electric power utility any sort of incentive to sequester carbon dioxide. Advocates of clean coal suggest, with more than a little plausibility, that the regulatory environment is very likely to change in a few years, and any electric power utility that isn't taking a good hard look at IGCC may end up being painfully blindsided.

There is also an issue that IGCC and carbon dioxide sequestration don't address in the slightest. Oil and natural gas are obtained from wells; coal is obtained from mines, either underground mines or surface mines. Underground coal mines are notoriously dangerous, not just because of the possibility of cave-ins but because coal generates explosively flammable gas and dust. China's underground coal mines have an infamously high fatality rate.

Surface mining is safer but more drastic. In Appalachia, the usual scheme is to find a mountain full of coal and simply dig it out with oversized earth-moving machines, using the tailings to fill up a valley. This involves destruction of natural scenery and forest, as well as displacement of small mountain communities. Poor environmental enforcement has led to considerable pollution of ground water as well. Coal mining is a problem that will have to be addressed. There was once a time that nuclear power was seen as the energy source of the future. The future has arrived, and it looks much more like coal than uranium. [ED: As of 2018, it doesn't look much like either.]

BACK_TO_TOP

[TUE 30 JAN 07] TAKE THE CHALLENGE?

* TAKE THE CHALLENGE? Ten years ago, well-known skeptic James Randi offered a one million USD prize to any psychic whose skills could survive a controlled examination. According to an article from WIRED.com ("Skeptic Revamps $1M Psychic Prize" by Kevin Poulsen), administering the "Million Dollar Challenge" has proven troublesome, and the team that does the job has been forced to streamline their processes.

Randi originally offered a $1,000 USD prize in 1964, then boosted it to $10,000 USD. Even that wasn't all that tempting, but in 1996 an unnamed donor contributed a million bucks to the cause, and the office of the James Randi Education Foundation in Fort Lauderdale, Florida, then ended up dealing with a steady stream of applicants. The office is supported by member contributions, grants, and the interest off the million-dollar prize.

The initial process required that an applicant submit a notarized application form, negotiate a test protocol with the foundation, then pass a preliminary test administered by independent local investigators. Randi himself would then observe the psychic in operation, and if Randi couldn't provide specific details of any fakery, the said psychic would win the prize.

In ten years, nobody's passed the preliminaries. The last test was in Stockholm in October 2006, when a Swedish medium named Carina Landin claimed to be able to identify the authors of 20 diaries just by touching the covers. The negotiated threshold for passing was 16 right guesses, but she only got 12 -- about what might be expected from flipping a coin. (There were some problems with the trial, and it will be repeated.) In July 2005, a Hawaiian psychic named Achau Nguyen, who claimed to be able to mentally project his thoughts, performed a trial in which he "transmitted" the readings from 20 cards -- each with a word written on it, selected from a pile of 30 -- to a "receiver" in another room. The receiver got all 20 readings wrong.
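
For the record, the coin-flip comparison checks out, under the assumption (implied by the article's phrasing) that each of the 20 guesses had roughly even odds of being right. The little calculation below is mine, not part of the foundation's protocol:

```python
# Binomial check of the Landin test figures, assuming a 50/50 chance per guess.
from math import comb

n, p = 20, 0.5

def prob_at_least(k):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"expected right by pure chance: {int(n * p)} of {n}")
print(f"P(12 or more right by chance): {prob_at_least(12):.2f}")   # ~0.25, unremarkable
print(f"P(16 or more right by chance): {prob_at_least(16):.4f}")   # ~0.006, the bar she had to clear
```

Twelve out of twenty is the kind of score a guesser gets about a quarter of the time; sixteen would have been a one-in-a-hundred-and-seventy fluke, which is why that was the negotiated threshold.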

It's actually rare to get to the preliminary stage. A Nevada man whose legal name was "The Prophet Yaweh" claimed he could summon two UFOs to Las Vegas, but the foundation canceled the preliminary test when he told them that he would bring armed guards to protect him from "negative personalities". Jeff Wagg, who administers the challenge, says: "One a week gets as far as a protocol negotiation, and then drops off."

That's where the changes come in. Says Randi: "We can't waste the hundreds of hours that we spend every year on the nutcases out there -- people who say they can fly by flapping their arms." It's a more troublesome issue than it sounds, since paranormal phenomena are by definition "beyond the fringe", and those who claim paranormal powers are liable to be dismissed as nutcases, even if they actually have them. On the other side of the coin, the foundation has to worry about exploiting or feeding the delusions of the mentally unbalanced. A San Francisco woman who insisted to the foundation that she wasn't human wasn't able to give a coherent explanation of why she thought so, but it appeared that the Secret Service was involved. Says Wagg: "If we get them to go to a challenge and they lose, we're exposing someone who had serious mental illness. That doesn't do us any good, and it doesn't do them any good. It doesn't prove anything."

From the appropriate date of 1 April 2007, the foundation will only accept applications from those who can prove a "media profile" -- television reports, newspaper articles or a reference in a book that chronicles his or her extraordinary abilities, with the claim then backed up by an academic authority.

Says Wagg: "We're not going to deal with unknown people who have silly claims. Let's say, somebody claims they can walk on water. We'll say, prove it to somebody else first. Get on the local news. Then bring it to us." Adds Randi: "They have to get some academic to endorse their claims, and that academic is not the local chiropractor or some such thing." The academic will be contacted by the foundation and asked to confirm the endorsement. On the positive side of the new protocol, the preliminary test will be dropped, the screening having been performed in an alternate fashion. The net effect will be to allow the foundation to pursue its real goal, the high-profile psychics who soak the gullible, and who so far won't touch the Million Dollar Challenge with a ten-foot dowsing rod.

The challenge is a carrot, but Randi is also interested in sticks, considering pressing cases of criminal fraud or civil suits against self-proclaimed psychics. The foundation plans to select a half-dozen or so high-profile targets each year, document their claims in detail, and then call them out. Says Wagg: "We're going to pick people every year and hammer on them. We're going to send certified mail, we're going to do advertising. We're going to pick a few people and say, we are actively challenging you. We may advertise in THE NEW YORK TIMES. This will make the challenge a better tool, to be what it is supposed to be."

The initial targets are author and talk-show darling Sylvia Browne, who claims to have precognition and see angels, and TV psychic John Edward. In 2001, in an appearance on LARRY KING LIVE, Randi goaded Browne into taking the challenge, but she later reneged, publishing an open letter on her website: "As the saying goes, my self worth is completely unrelated to your opinion of me, and I've worked far too hard for far too many years, and have far too much left to do, to jump through hoops in the hope of proving something you've staked your reputations on mocking. I have no interest in your $1 million or any intention of pursuing it." Foundation members were disappointed in Browne's refusal to take part in a scientific experiment that could change the basis of science as we know it. Edward, at least, was consistent, simply dismissing the challenge. Neither Browne nor Edward answered queries from the WIRED reporter.

Randi is annoyed at the clear complicity of the media in the psychic circus show: "People like Sylvia Browne have a very high profile, and she's always going to be on Montel Williams and she's going to be on Larry King. And they know what's going on, they're smart people. They know what's going on and they don't care."

James Randi

* The unfortunate difficulty with Randi's debunkery is that nobody with any sense needs it, and those who are taken in are too gullible to learn better. I've met such people; I like to call them "antiskeptical" for want of a better word -- the kind of folks who believe things because they are preposterous and not in spite of it, who see the unbelievable as the only thing that can be believed. Randi seems to understand this, and I am still sympathetic to what he's doing -- somebody needs to call "baloney" and at least put a bound on the reach of frauds, though I'm not keen on his ideas about using the courts to harass them.

WIRED.com allows reader commentary for articles, and this one was littered with angry remarks by Bill Perron, a self-described psychic whose attempt to claim the challenge broke down in the negotiation phase. A search demonstrated that Perron makes use of every venue available on the Internet to denounce Randi. It's amusing, since even if all the claims Perron makes are true, very few people would fail to be turned off by the ranting way he makes them.

One of the interesting things is that some of these folk, such as Achau Nguyen, are obviously sincere in their belief in their abilities, since it would be hard to understand why they would take the challenge if they weren't. As far as the obvious nutcases are concerned, a survey of the lunatic fringe on the web suggests in hindsight that it shouldn't have been a big surprise that offering a million dollars would have brought them out of the woodwork. I suspect that one of the unmentioned reasons for wanting to get rid of them was out of concern that some of them might be dangerous.

Incidentally, in a famous story James Randi was accused by a professor of being a fraud. Randi replied: "Yes indeed, I'm a trickster, I'm a cheat, I'm a charlatan, that's what I do for a living. Everything I've done here is by trickery." The professor replied: "That's not what I mean. You're a fraud because you're pretending to do these things through trickery, but you're actually using psychic powers and misleading us by not admitting it." I keep trying to visualize the look on Randi's face.

BACK_TO_TOP

[MON 29 JAN 07] THE MEDICAL INFOSYSTEM CHALLENGE (1)

* THE MEDICAL INFOSYSTEM CHALLENGE (1): As discussed in an article from IEEE SPECTRUM ("Dying for Data" by Robert N. Charette, October 2006), modern hospitals are full of leading-edge tech, but in general they are massively behind the times in the use of computing. Medical records of patients are still kept in scattered paper files, with no way to easily search them all. This means that considerable time and money is wasted in, say, repeating tests if records cannot be found. Worse, it leads to potentially disastrous misprescriptions, misuse of drugs, or erroneous treatments of patients.

The backwardness of the medical profession in information technology (IT) is all the more appalling because IT is in normal use almost everywhere else. However, there are economic and political problems restraining the use of IT in medicine; in addition, building a usable, interoperable medical IT system is a horrendous challenge. The US is now working towards a comprehensive medical IT system, with other countries -- including Australia, Canada, Denmark, Finland, Germany, and the United Kingdom -- also working on the concept. The Finns expect to have their system in place by 2007. The British, however, have been working on a medical IT system for four years and have accomplished very little.

The US effort is a private-sector project, supported by and with some funding from the US Federal government. The goal of the "National Health Information Network (NHIN)", as it's called, is to provide immediate access to a patient's full medical history, any time and any place. Not only will this help patients and hopefully reduce medical costs, it will also provide epidemiological data to show which treatments are effective and which are not. It would also be able to monitor the population to spot the emergence of pandemics or terrorist poisonings. The US government has set a target date for NHIN for 2014.

The problem in reaching that date is that building a comprehensive medical IT system is very difficult, with some estimates of the costs of NHIN running to hundreds of billions of dollars. It is not a trivial piece of technology, involving a network of hundreds of thousands of separate nodes that have to work more or less seamlessly to provide access to patient data. This vision immediately suggests potential security and privacy issues. Any serious analysis of how to build such a system immediately leads to a runaway proliferation of issues and design considerations -- and the worry that even if a system can be designed to more or less meet the spec, it will be too cumbersome and clumsy to be workable.

* The idea of a medical IT system has been around a long time, with efforts going back to the 1960s, and most of them ending in disaster. In the early, naively optimistic days of computer science, few had any idea of how difficult implementing such a complicated system would really be.

One of the first organizations to actually come up with a workable medical IT system was the Mayo Clinic of Rochester, Minnesota. The Mayo Clinic has actually long been a pioneer in medical IT systems: early in the 20th century, Dr. Henry Plummer, a partner in the clinic, decided that the traditional approach of doctors writing up patient data in a ledger book, in which daily entries for different patients were all entered together, was clumsy. Plummer came up with the idea of a "patient dossier", a file in which all the paperwork was consolidated, with the patient linked to the file by a unique registration number.

In 1993, the Mayo Clinic decided to automate the process. The effort involved upgrading the clinic's fiber-optic network, installing 16,000 workstations, and setting up a software system with a database and code written both by GE Healthcare and in-house. It went online in 2004, and is now being used to support the clinic's 1.5 million patient visits and 60,000 hospital admissions a year.

Each patient is, as before, assigned a unique registration number, which identifies the patient's account on the system. The record is updated by doctors and other staff for each patient visit. Test results are automatically put into the system and prescriptions are automatically routed to the hospital pharmacy, with checks automatically performed for conflicts between drugs and patient allergies. The system also is used to schedule visits, perform billing, and perform other administrative tasks. Cost of the system is estimated at $80 million USD, with savings of up to $40 million USD a year due to elimination of record-keeping overhead and improvement in service.
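
As a toy illustration of the kind of record-keeping the system automates -- a dossier keyed by a registration number, with new prescriptions checked against recorded allergies -- here's a minimal Python sketch. The class, the conflict table, and the sample data are all invented for illustration; they're not drawn from the Mayo or GE Healthcare software.

```python
# Toy sketch: patient dossiers keyed by registration number, with an
# automatic check of new prescriptions against recorded allergies.
# Names, the conflict table, and the sample data are all invented.
from dataclasses import dataclass, field

DRUG_ALLERGY_CONFLICTS = {          # hypothetical lookup table
    "penicillin": {"penicillin allergy"},
    "aspirin": {"aspirin allergy"},
}

@dataclass
class PatientRecord:
    registration_number: str
    allergies: set = field(default_factory=set)
    prescriptions: list = field(default_factory=list)

    def prescribe(self, drug: str) -> bool:
        conflicts = DRUG_ALLERGY_CONFLICTS.get(drug, set()) & self.allergies
        if conflicts:
            print(f"REJECTED {drug}: conflicts with {conflicts}")
            return False
        self.prescriptions.append(drug)
        return True

records = {}    # registration number -> dossier, as in Plummer's scheme
records["A12345"] = PatientRecord("A12345", allergies={"penicillin allergy"})
records["A12345"].prescribe("penicillin")   # flagged automatically
records["A12345"].prescribe("aspirin")      # accepted
```

The real system obviously does vastly more than this, but the core idea is the same: one key, one dossier, and the routine cross-checks done by software instead of by a harried pharmacist.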

The Mayo Clinic system was obviously not easy to build, since it integrated so many different sources of data and performed so many tasks, and it took time to train the 17,000 staff of the clinic to make use of the system. In fact, the Mayo Clinic has two other facilities, one in Jacksonville, Florida, and the other in Scottsdale, Arizona -- and though all three facilities are automated, they aren't interlinked. A doctor at one site can view data from another, but that's it. Attempting to interlink the three sites was simply judged too ambitious.

The US Veterans Administration also has a highly regarded medical IT system called "VistA" -- mentioned here some months back. There is an even bigger medical IT system for the active US military, covering over 9 million personnel, named the "Armed Forces Health Longitudinal Technology Application (AHLTA)" system. Right now, in Iraq and Afghanistan, a US Army medic can punch data on a wounded soldier into a commercially-available PDA to obtain important health data about the soldier, and also alert the nearest aid station to prepare to treat the soldier. All inputs to the system are entered into the soldier's medical record.

AHLTA not only integrates data for all field medical operations, but also the 70 hospitals, 411 medical clinics, and 417 dental clinics run by the US military worldwide. AHLTA's design faced three major challenges: scalability, integration, and system availability. It had to run seamlessly, reliably, and quickly on everything from hospital servers to laptops in field hospitals to PDAs in the hands of combat medics, with information being accessed at computing nodes all over the globe. Although AHLTA has used commercial software and hardware whenever possible to keep down costs, it still hasn't been cheap, running to $800 million USD for development and deployment, and $100 million USD in yearly upkeep. However, the military believes that every dollar spent on AHLTA saves $1.29 USD in other expenses, and it simply does a better job. [TO BE CONTINUED]

NEXT
BACK_TO_TOP

[FRI 26 JAN 07] INFRASTRUCTURE -- POWER PLANTS (6)

* INFRASTRUCTURE -- POWER PLANTS (6): It's somewhat ironic that coal remains America's main electric power source in the 21st century, since 50 years ago the general belief was that nuclear power was the future. Worries about reactor safety and radioactive waste disposal derailed that gleaming dream of the future, though atomic power still produces about 14% of the USA's electricity.

The basic principle of a nuclear reactor involves a "reactor core" full of refined uranium involved in an atomic chain reaction: the breakdown of uranium atoms releases neutrons that cause other uranium atoms to break down, and so on, with the breakdown of the atoms producing incidental heat. The fuel is in the form of cylindrical pellets of uranium oxide stored in zirconium alloy tubes, with these "fuel rods" inserted into the core to put it into operation.

The rate of the chain reaction is controlled by "control rods" containing materials such as cadmium that absorb neutrons and "dampen" the chain reaction. If the control rods are shoved all the way into the core, the chain reaction stops; they're then pulled out partway until the reaction starts up again. Pulling them out all the way is generally asking for serious trouble. An emergency set of control rods is automatically released in case of an emergency, with the emergency rods falling into the core under the influence of gravity.
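
A cartoon model shows why the control-rod balancing act matters: each generation of fissions yields some multiple "k" of the previous generation's neutrons, and the rods pull k down by soaking up neutrons. The numbers in the sketch below are made up purely to show the three regimes; it is nothing like a real reactor-physics code.

```python
# Toy chain-reaction model: neutron population over successive generations
# for a multiplication factor k set by the control rods. Values are invented.

def neutron_population(k, generations=6, start=1000.0):
    pop = [start]
    for _ in range(generations):
        pop.append(pop[-1] * k)
    return pop

for label, k in [("rods in far enough (k=0.9)", 0.9),
                 ("rods balanced      (k=1.0)", 1.0),
                 ("rods out too far   (k=1.1)", 1.1)]:
    print(label, [round(n) for n in neutron_population(k)])
# k < 1: the reaction dies away; k = 1: steady operation; k > 1: runaway growth.
```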

There are two general classes of nuclear reactors used for power generation in the US, the "pressurized water reactor (PWR)" and the "boiling water reactor (BWR)". In the PWR, water is circulated through the core in a loop so highly pressurized that the water never boils; the heat from the water in the loop is passed through heat exchangers to a secondary steam loop, conceptually not all that different from that of a coal-burning plant, to drive a steam turbine. In the BWR, there's only one loop: water is circulated through the core to be turned into steam, which drives a turbine directly. In both cases, the water also acts as the reactor core coolant.

Both types of reactors require support facilities, of course including the turbine and switchyard systems, much like those of a coal-fired plant, but they also require a fuel-rod handling structure, with the rods stored in racks at the bottom of a big "swimming pool" of very pure water to keep them cool and provide shielding. The radiation from the rods makes the water glow with a faint blue light called "Cerenkov radiation". There's also a control room, maintained at a positive pressure in order to keep out radioactivity in case of an accident.

In a PWR, there's only a containment vessel around the reactor itself, intended to seal it off in case of a catastrophic accident. Since a BWR includes the turbine system in the radioactive loop, much of a BWR site is inside a large containment building. A BWR containment building may have a very tall "chimney" with a tapering "hyperbolic" curve, intended to disperse trace releases of radioactive gases.

nuclear power plant cooling towers

This chimney is not related to the plant's cooling towers, which are intended to carry off waste heat. The stereotype of a nuclear power plant includes the squat, tapered "natural draft" cooling tower, with its ominous appearance, but a nuclear power plant may also use a much less prominent fan-driven cooling tower system, not much different from one that might be found at a liquid air plant. The natural draft tower is more expensive to build, but it doesn't require much power for operation, the tapered hyperbolic form promoting the natural flow of air upward. The fans for a fan-driven cooling tower system can eat up a great deal of electricity and so they have higher operating costs.

The natural draft cooling towers may look creepy, but as far as a nuclear power plant being a threat goes, they have little to do with it. All they do is dump heat into the atmosphere. Thermal power generation, whether the power is from coal or uranium, requires that the generation system have a source of heat at the input and a "cold sink" at the output. The greater the difference in temperature between the source and the sink, the greater the efficiency of the generation system. That means that there must be some way to get rid of the heat efficiently at the output, and the cooling towers do that job. The alternative would be to dump hot water into local rivers -- which would likely cook the fish and not do the local environment much good. [TO BE CONTINUED]
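
The source-and-sink point can be put in rough numbers with the Carnot limit on thermal efficiency, 1 - T_cold/T_hot with temperatures in kelvin. The steam and cooling temperatures below are assumed round values for illustration, not figures from any of the plants described above.

```python
# Carnot limit on thermal efficiency for a hot source and a cold sink.
# Temperatures are assumed round values, converted from Celsius to kelvin.

def carnot_limit(t_hot_c, t_cold_c):
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

print(f"steam at 550 C, sink at 30 C: {carnot_limit(550, 30):.0%} maximum efficiency")
print(f"steam at 550 C, sink at 90 C: {carnot_limit(550, 90):.0%} maximum efficiency")
# A warmer cold sink (poor cooling) directly eats into the possible efficiency,
# which is why the plant goes to such trouble to dump heat effectively.
```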

START | PREV | NEXT
BACK_TO_TOP

[THU 25 JAN 07] GIMMICKS & GADGETS

* GIMMICKS & GADGETS: According to an article from TECHNOLOGYREVIEW.com, a group of researchers at Tokyo University (Tokyo Daigaku or "Todai") has come up with a new scheme for charging battery-operated devices by induction -- without an electrical plug-in connection. This is already being done with devices like electric toothbrushes, but an electric toothbrush has to be put into a stand that places it in a highly specific orientation relative to the induction head that transmits the power.

The Todai device looks like a simple sheet that a battery-operated device can be laid down on. The sheet has two layers: one to sense the position of the device, the other to deliver induction power, but only at the position of the device. The sheet can deliver up to 30 watts of power, using a patterned array of copper coils about 10 millimeters in diameter, with organic transistors laid down by an inkjet printer switching electricity to the coils. Devices to be powered by the sheet will need a coil and associated charging circuitry. Placing a device on the sheet affects the inductance of the copper coils, allowing the device's position to be sensed. Power is then driven through the second layer, which consists of an array of silver-and-plastic switches and copper coils, with the switches pulsing power to the coils beneath the device.
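
My guess at the control logic implied by that description -- scan the sensing layer for an inductance change, then pulse power only to the coils under the device -- looks something like the Python sketch below. The grid size, threshold, and readings are invented; the actual Todai design details weren't given in the article.

```python
# Hypothetical sense-then-switch logic for a charging sheet: find the coils
# whose inductance shifted when a device was set down, power only those.

GRID = 8                  # 8 x 8 coil array, invented for illustration
SENSE_THRESHOLD = 0.2     # fractional inductance change that counts as "device present"

def find_device(inductance_shift):
    """Return coil coordinates whose inductance shifted past the threshold."""
    return [(r, c) for r in range(GRID) for c in range(GRID)
            if inductance_shift[r][c] > SENSE_THRESHOLD]

def drive_power(active_coils):
    """Close only the switches under the device; leave the rest of the sheet off."""
    for r, c in active_coils:
        print(f"pulsing coil ({r},{c})")

# toy reading: a gadget sitting over a 2 x 2 patch of coils
shift = [[0.0] * GRID for _ in range(GRID)]
for r, c in [(3, 4), (3, 5), (4, 4), (4, 5)]:
    shift[r][c] = 0.6
drive_power(find_device(shift))
```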

The scheme is regarded as interesting not merely for its functionality, but for the use of potentially low-cost electronic manufacturing technologies. Reliability of the components still leaves much to be desired, but the Todai researchers are confident that the problems will be solved.

* WIRED.com had a short survey of the newest technology for personal computers in 2007 worth summarizing here. One of the big innovations is the "flash-aided hard drive". The idea is to pair a hard disk drive with a flash-memory buffer that stores critical software, allowing the PC to boot faster and commonly-used applications to load faster. It will also improve battery life in portables.
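
The caching idea behind the flash-aided drive can be sketched in a few lines: recently-used blocks are served out of a small flash buffer instead of waiting on the spinning platter. The class below is a generic illustration of that principle, with invented sizes and a simple least-recently-used policy; it doesn't describe any particular vendor's implementation.

```python
# Generic read-cache sketch for a flash-aided drive: hot blocks live in a
# small flash buffer, everything else comes off the slower disk.
from collections import OrderedDict

class FlashAidedDrive:
    def __init__(self, disk, cache_blocks=4):
        self.disk = disk                # block number -> data (the slow path)
        self.cache = OrderedDict()      # small flash buffer, LRU eviction
        self.cache_blocks = cache_blocks

    def read(self, block):
        if block in self.cache:         # fast path: flash
            self.cache.move_to_end(block)
            return self.cache[block], "flash"
        data = self.disk[block]         # slow path: spinning platter
        self.cache[block] = data
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)
        return data, "disk"

drive = FlashAidedDrive({n: f"block{n}" for n in range(100)})
print(drive.read(7))   # ('block7', 'disk')  -- first access hits the platter
print(drive.read(7))   # ('block7', 'flash') -- boot-critical block now cached
```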

Another major innovation will be the wide-scale production of "quad-core CPUs" now being built by Intel and Advanced Micro Devices. The big improvement will be that software is now available to permit parallel multiprocessing using the four processors in the chip, providing a massive improvement in performance. Yet another "big thing" for 2007 will be the widespread introduction of high-speed wireless interfaces for PCs, including the new 802.11n wi-fi spec, capable of handling high-resolution video, and 3G broadband networks.

An interesting "little thing" that's now appearing is support for auxiliary displays and inputs for laptop computers, using the "SideShow" capability of the new Microsoft Vista OS. The auxiliary display and controls will be on the exterior of a laptop, giving the user some "pocket PC" style functionality while on the go. How useful that would be is hard to say, but at least it's cutesy.

* WIRED.com also described the solar power system that's been installed at Google's headquarters in Mountain View, in the California Bay Area. The power system consists of 9,000 photovoltaic panels that can produce 1.6 megawatts of electricity. There's nothing particularly innovative about the system -- except for the fact that a third of the panels are mounted on poles in the facility parking lot, with the rest mounted on rooftops.

Google solar campus

The parking-lot panels were installed by Energy Innovations of California. They not only provide electric power, they provide shade to keep vehicles from overheating on hot sunny days. Parking-lot panels are actually catching on, with a number of similar installations having been set up by Energy Innovations. Company officials say the parking-lot panels are much easier and cheaper to set up than rooftop units; placing solar panels on rooftops means working around elevator shafts, air conditioning units, and the like. Another company, Envision Solar of San Diego, is now planning to move into the residential market, selling "solar carports" that can be set up by homeowners. Prices were not discussed in the article.

* According to TECHNOLOGYREVIEW.com, a group of Swiss researchers at the Ecole Polytechnique Federale de Lausanne (EPFL) has come up with a solar panel power system to produce hydrogen by electrolysis of water. The critical item in the panels is iron oxide -- just plain old rust.

Iron oxide has been considered for such applications for some time, but although it will generate charge carriers to electrolyze water when exposed to sunlight, the charge carriers tend to be quickly reabsorbed. The EPFL researchers doped the iron oxide with silicon, which created cauliflower-like nanostructures with high surface area, to increase the acceptance of electrons from the surrounding water. The silicon also improved the conductivity of the material. The researchers further improved the scheme by adding cobalt, which catalyzed the electrolysis reaction.

The panels have a conversion efficiency of about 4% -- not very good. Further tinkering with dopants and nanostructures might well get the efficiency up to 10% or better. The technology could be, in principle, very cheap and easy to mass produce.

BACK_TO_TOP

[WED 24 JAN 07] MONEY CARDS

* MONEY CARDS: Previous articles in these pages on "near-field communications (NFC)" -- a technology related to RFID, used for financial transactions -- have shown how the idea is starting to catch on overseas, with NFC cards used for ticketing and small transactions, and some new cellphones also incorporating NFC technology for use as a smart wallet.

According to THE ECONOMIST ("Panhandlers Beware", 18 November 2006), the idea is starting to catch on stateside as well. MasterCard, Visa, and others have now introduced "contactless cards" for purchases under $25 USD, with the card simply waved over a transceiver at a checkout counter or on a vending machine to perform the transaction. The exchange of funds only takes a few seconds, meaning less time stuck in a queue. It is of course harder to steal such electronic money, and businesses don't have to deal with the hassle of tallying up the bills and coins, then taking them to the bank. Also, making transactions easier means consumers buy more on impulse.

Right now, there are only about 21 million contactless cards in circulation in the USA, compared to 1.5 billion credit cards. It's still early for the technology, but already contactless cards are being accepted at fast-food joints, movie houses, and sports arenas. The technology is likely to spread quickly, with work already underway to incorporate NFC technology into US cellphones for small transactions.

MasterCard and Visa have also begun to promote "prepaid card" lines. There's nothing new about prepaid cards, or "gift cards" as they are often called, but traditionally they have been associated with a particular company. Now they're available from the big credit-card companies, and can be used at any business that accepts normal MasterCard or Visa charge cards. These are handy for consumers who don't have bank accounts; Visa estimates the number of "unbanked" consumers in the USA at a surprising 80 million, and they pay $1.5 billion in check-cashing fees each year.

Some employers are now paying employees without bank accounts using prepaid cards, and dozens of US state governments are using MasterCard and Visa prepaid cards for handing out government benefits. The state of Ohio uses Visa prepaid cards to give out unemployment benefits and claims the scheme saves the state a tidy $2 million USD a year. The days of cash are clearly on the decline, though it may take some time for coins and bills to disappear completely.

[ED: 1.5 billion credit cards in the USA? Five for every man, woman, and child? Boggles the mind.]

BACK_TO_TOP

[TUE 23 JAN 07] GREEN BOOM GREEN BUST?

* GREEN BOOM GREEN BUST? It comes as no news that there's an enormous amount of activity going on over alternative energy sources these days. It's all very exciting, but anyone who remembers the similar fad of the 1970s still takes it with a grain of salt. An article in THE ECONOMIST ("Tilting At Windmills", 18 November 2006) gives the current burst of enthusiasm a careful looking-over and recommends a bit of caution.

Some die-hards who stayed with alternative energy after the 70s boom went bust are now feeling like they've gone to heaven, with venture capitalists in a mad rush to give them money. Driven by high oil prices, plus worries about energy independence and greenhouse warming, the level of investment has doubled or even quadrupled since 2004, with tens of billions of dollars being invested in green schemes. Everyone's infatuated with solar and wind, biofuels, and the ultimate hydrogen economy lurking over the horizon. Analysts say that growth in the field stands to run at 10% to 30% for another decade at least.

Jefferies, a British investment bank, ran a conference in which participants were asked when solar power would become competitive with traditional power sources: 2010? 2015? 2020? Later dates were not even mentioned. A similar event in San Jose, California, in the heart of Silicon Valley, was standing room only, with Governor Arnold Schwarzenegger visiting to announce: "I feel the energy! I feel the electricity! Clean energy is the future!"

Silicon Valley of all places ought to be suspicious of hype, having been hit the worst by the collapse of the dotcom bubble a few years back, but renewable energy advocates think they've found true love this time. They see it in the earnest efforts of government figures, Governor Schwarzenegger being the leader of the pack in greenery, with his highly successful reelection campaign banking on his dedication to a green future to overcome the doubts the public had accumulated over his earlier years in office. He has pushed CO2 emissions legislation. The state has also set up a very ambitious solar-power program named "One Million Solar Roofs", which involves $2.9 billion USD in rebates over the next decade to Californians who install solar systems in their homes and businesses -- with the Federal government adding in a tax credit of 30% of the cost of installation. California businesses, particularly wineries, have been installing solar panels at a fast clip.

By 2010, California plans to obtain 20% of its power from renewables. California isn't alone in such schemes: another 20 of the 50 US states also have "renewable portfolio standards". Maine has the most ambitious, set at 30%, though there's plenty of hydropower there and meeting such a spec shouldn't be troublesome. New Jersey wants 22.5% of its power to come from renewables by 2021, and in fact that state surprisingly is number-two in solar power usage, after sunny California.

The Federal and state governments have been enthusiastic about support for corn-based ethanol production as well, even though it's an unimpressive fuel source when the bottom line is examined. No matter: farmers get subsidies for growing the corn, refineries get subsidies for adding it into fuel blends, filling stations get subsidies for installing pumps for it, and consumers get subsidies for tanking up on it. Several states mandate gasoline-ethanol mixes to a specified level of ethanol, ensuring a market. It is a libertarian truism that the government can't give away anything without obtaining it from the taxpayers first, and one group has estimated all the giveaways came to $5 billion USD in 2006.

The European Union is even more determined to promote alternative energy, with EU standards specifying that 5.75% of transport fuels come from non-fossil sources by 2010, which means that European biodiesel producers have a potential market to justify expansion of production capability to the maximum extent possible. In addition, the EU wants to obtain 18% of its power from renewable sources by 2010. Some member states have even more ambitious plans.

Other government alternative-energy programs are being floated around the world. Advocates of alternative energy say, with some plausibility, that the massive subsidies being pumped into the field are just "pump priming", paying in advance for a resource that will pay itself back many times over in the near future: alternative energy will become cost-competitive and the subsidies will no longer be needed. Proponents of solar power, for example, point out that the historical record shows that the cost of solar power has decreased by 18% with every doubling of yearly production capacity. In Japan, they add, solar power is now competitive without any government subsidy.
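
The 18%-per-doubling learning-curve claim is easy to work through. Only the 18% figure comes from the article; the starting cost and the "competitive" target below are invented round numbers, just to show how many doublings of production the argument requires.

```python
# Experience-curve arithmetic: cost falls 18% with each doubling of production.

def cost_after_doublings(start_cost, doublings, learning=0.18):
    return start_cost * (1 - learning) ** doublings

start, target = 4.00, 1.00    # hypothetical $/watt today and $/watt "competitive"
d = 0
while cost_after_doublings(start, d) > target:
    d += 1
print(f"doublings of yearly production needed: {d}")                   # 7
print(f"cost after {d} doublings: ${cost_after_doublings(start, d):.2f}/watt")
```

Seven doublings is a 128-fold increase in yearly production, which gives a sense of why the optimists talk about decades of growth rather than a quick payoff.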

The critics are skeptical: cost-reduction curves can level out, and Japanese electric-power pricing is very high, meaning it's not any real feat to beat it with solar power systems. For now, the alternative-energy business is floating on subsidies, and it is likely to do so for years to come. The argument that this is just an advance payment on something we need to do does carry weight, but the problem is that investors pumping their money into alternative energy aren't concerned with such a consideration of principle -- they're after a return on their investment.

As the bust of the 1970s alternative-energy fad in the 1980s showed, what the governments give they can take away just as easily, and investors pumping money into heavily subsidized businesses may find themselves stranded if the political wind shifts again. Some analysts have noted the alternative-energy business would be in deep trouble if oil went below $50 USD a barrel again. To be sure, oil producers are trying to keep it above $60 USD a barrel, but expanded production and competition could undercut their attempts to hold up prices.

In the meantime, the alternative-energy business is booming. Wind turbine manufacturers can't ship product fast enough, and even the most aggressive forecasts for expansion of European biodiesel production don't see it as keeping up with the EU targets. Production of silicon solar cells is so intense these days that production of silicon itself is a bottleneck, with new investment in silicon production facilities to keep pace.

Few analysts think the push for green energy is a fraud, instead simply warning that investors pay more attention to the road and less to the roadmap. Production bottlenecks, technological difficulties, and political changes may make the highway bumpier than expected, and optimism has to be tempered by considerations of cold reality. Says an analyst of the solar power industry in a comment that could just as well apply to the entire alternative-energy business: "There's too much money chasing too few opportunities. How is it possible that this many solar companies are going to succeed? They're not." When the shakeout comes, dreams of riches from alternative energy may well end in an unpleasant wake-up in bankruptcy court.

[ED: It appears that spot oil prices have dropped below $55 USD a barrel. I was certainly astonished to see that prices at the pump have actually nudged below $2 USD a gallon -- I thought I'd never see it happen again in my lifetime. I wouldn't bet it will drop much lower, though, or even that it stays there long. We will see.]

BACK_TO_TOP

[MON 22 JAN 07] PC INTERFACES & NETWORKING (4)

* PC INTERFACES & NETWORKING (4): Once upon a time, serial connections to PCs were performed over what were called "RS-232" interfaces. All PCs had RS-232 connectors, and in general they could support data transfers of up to about 19.2 kilobits per second. That wasn't blazing fast, but fast enough for a printer or the like. RS-232 left a lot to be desired: there were several different connector schemes -- 9 pin and 25 pin, male and female, different wiring schemes -- and figuring out genders or wiring between connections could be a real pain. It could also be a pain to configure two interfaces to make sure they could talk at the same data rates, using the same protocols, and so on.

This got less painful in time as RS-232 implementations on PCs gradually became more standardized and predictable: it was possible to buy a cable to hook a PC up to a printer and actually feel confident it was the right one. However, RS-232 still remained a fairly dumb interface, and though it lingers on PCs, over the last decade PCs have moved to faster and much smarter serial interfaces.

* The best known is the "Universal Serial Bus (USB)". As it was originally conceived in the mid-1990s, USB was intended as a low-cost scheme to support low-speed peripheral devices, such as keyboards, mice, joysticks, printers, scanners, digital cameras, and so on. A single USB system can connect up to 127 devices at a maximum data rate of up to 12 megabits per second (MBPS). The devices are hooked up over a four-wire cable, providing a serial link and 5 volt DC power -- though at only 500 milliamps of current. The physical connection scheme is a tiered star: devices plug directly into the PC or into hub boxes, which can themselves be cascaded; in practice, it's usually hooked up as a simple star, with the PC at the center. Each cable segment can be up to 5 meters (16 feet 5 inches) in length. Peripherals can be "hot-plugged", or connected and disconnected without rebooting the PC host.
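
A little arithmetic puts those original USB numbers in perspective; the 10 MB file below is just an example, and the calculation ignores protocol overhead.

```python
# What the original USB figures work out to in practice.

link_mbps = 12.0            # full-speed USB signaling rate
file_mb = 10.0              # hypothetical file size, megabytes
seconds = file_mb * 8 / link_mbps
print(f"{file_mb:.0f} MB at {link_mbps:.0f} Mbps: about {seconds:.1f} seconds")

volts, milliamps = 5.0, 500.0
print(f"bus power available to devices: {volts * milliamps / 1000:.1f} W")   # 2.5 W
```

Fine for a mouse or a scanner, marginal for moving big files or spinning up a power-hungry external drive, which is exactly the niche the faster interfaces below were aimed at.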

USB was defined as an open standard without any royalty requirements, which along with its very low cost of implementation contributed to its success. It is now all but universal for keyboards and mice, and is used with a wide range of other devices, particularly scanners, digital cameras, and digital music pods. Another factor that helped make it succeed was the general transparency of operation: generally USB devices can be plugged into a PC, and typically will work without installation of a driver or performing any special configurations on the PC. Before USB, the notion of "plug & play" operation of devices was a joke; typically, the reaction of a user when introduced to USB was a surprised: "It just works!"

In 2000, the USB group introduced the "USB 2.0" specification, featuring a data rate of 480 MBPS, enough to support low-resolution video, and making USB a better solution for external mass storage devices. The new specification was compatible with the old, in that USB 1.0 devices could work with a USB 2.0 host or the reverse -- though in a mixed system of course the data rate was restricted to 12 MBPS. A "wireless USB" scheme is now emerging with similar data rates, but using a radio link based on "wi-fi" technology (discussed below) instead of cables.

* The introduction of USB was paralleled by the introduction of another serial interface scheme, known as "IEEE 1394" or "Firewire". It was intended mainly for video hookups, linking devices to a PC over a 6-wire cable that could be up to 4.5 meters (14 feet 9 inches) in length. It can be hooked up in daisy-chain or star configurations, with up to 63 devices per basic system element, expandable to up to 64,449 devices. It supports hot-plugging and plug & play operation. Unlike USB, Firewire can also be used without a computer host: two Firewire devices can be plugged together to transfer data on their own.

Initially, Firewire supported a maximum data rate of 400 MBPS. This was less than USB 2.0's data rate, but an "IEEE 1394b" spec was introduced to compete, providing 800 MBPS, with a possibility of doubling or quadrupling that rate -- though the original IEEE 1394a spec and the IEEE 1394b spec are not compatible. Firewire is commonly implemented in PCs, digital camcorders, and digital video systems, but it's a pretty good high-speed external disk drive connection as well. It is more expensive than USB and, unlike USB, using Firewire means paying a small royalty. Neither cost is very high in absolute terms, but USB 2.0 is definitely cheaper and 480 MBPS is more than enough performance for many devices; as a result, USB has dominated low-speed low-cost peripherals. However, Firewire's higher performance, as well as its ability to provide higher power output to devices, gives it a niche, though USB seems to be eating away at it.

* It might be worth mentioning here that PCs also have traditionally had a simple "parallel" or "Centronics" interface for linking a PC with a printer. It was a very simple, one might say stupid, 8-bit parallel bus hooked up over a 25-pin printer cable. Its simplicity meant that it actually was very cheap, as well as easy and reliable to use, certainly much easier in general than an RS-232 hookup, and it became very popular. A bidirectional derivative, "IEEE 1284", was introduced in the early 1990s, permitting use with external mass storage devices. USB is displacing the old parallel interface, but it hasn't died out yet.

* As far as PC networking goes, there is an enormous range of local area network (LAN) schemes -- twisted-pair wire, coaxial cable, fiber-optic link connections with bus, star, or ring architectures -- but only a small number are used in home-based PCs.

The traditional home LAN connection has been Ethernet or "802.3", which in its household form uses a twisted-pair link, and can typically operate at speeds from 10 to 100 MBPS. An elaborate set of specs and protocols is associated with 802.3, but from the point of view of a home PC user it's straightforward: plug two devices together with a standard cable with phone-type jacks, perform some simple configurations on the PC to define access rights and the like, and then the two nodes on the network can be accessed almost transparently.
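
Once the cable and configuration are in place, node-to-node communication really is about that transparent; the short Python sketch below shows two programs exchanging a message over TCP sockets, the way networked PCs on a home Ethernet would. The address and port are placeholders, and the example loops back to the same machine so it can run standalone.

```python
# Minimal two-node exchange over a LAN-style TCP connection.
import socket, threading, time

HOST, PORT = "127.0.0.1", 5050      # stand-in for another PC's address on the LAN

def serve_once():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))          # echo whatever arrives

threading.Thread(target=serve_once, daemon=True).start()
time.sleep(0.5)                                    # give the "other PC" time to listen

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from the PC in the other room")
    print(cli.recv(1024).decode())
```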

The problem with Ethernet is the cable. Unless a house is wired for networking, cables have to be strung from room to room, which can be an expensive nuisance. There are home networks available that operate over household power lines, but the solution favored at present is short-range wireless networking. There are a lot of different specs, but the "802.11" AKA "wi-fi" scheme is becoming a popular choice. It operates at a minimum of 1 MBPS. Another wireless spec, called "Bluetooth", provides similar functionality but at less range and slower data rates.

The traditional scheme for hooking a PC up to the greater Internet has been the dial-up modem, operating over phone lines. However, high-speed links are increasingly the norm, though there are a range of schemes: high-speed digital wiring, digital communications over cable TV links, fiber-optic links, and wireless links -- with some wireless links connected through communications satellites. No clear winner has emerged in the "broadband Internet" arena yet, and given the interactions between high-speed PC communications, cellphones, and "voice over Internet" technology, it promises to remain in flux for some time. [END OF SERIES]

START | PREV
BACK_TO_TOP

[FRI 19 JAN 07] INFRASTRUCTURE -- POWER PLANTS (5)

* INFRASTRUCTURE -- POWER PLANTS (5): The bus bars from the generator of a power plant lead to an electric "switchyard", a fenced-off area full of heavy-duty electrical equipment where the electricity is routed out to the power grid.

The voltage produced by the generator is on the order of a few tens of thousands of volts in amplitude; that sounds like a lot more than anyone would want to grab on to, and it is, but it's still not high enough for transmission over long-distance power lines. For reasons that will be explained in a later installment, it's most efficient to transmit power over long distances by raising it to as high a voltage as possible, and so a transformer in the switchyard raises the voltage to hundreds of thousands of volts. The switchyard also contains switches to route the power and circuit breakers to deal with electrical faults.
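
The reason for the step-up can be previewed with a little arithmetic: for a given amount of power, the current falls as the voltage rises, and the resistive loss in the line goes as the square of the current. The 500 megawatts and 10 ohms of line resistance below are assumed round numbers, not figures from any particular plant.

```python
# Resistive transmission loss versus line voltage, for a fixed power and
# line resistance (both assumed). Loss = I^2 * R, with I = P / V.

def line_loss_mw(power_mw, volts, line_ohms):
    amps = power_mw * 1e6 / volts
    return amps**2 * line_ohms / 1e6

power_mw, line_ohms = 500.0, 10.0
for kv in (20, 115, 345, 765):
    loss = line_loss_mw(power_mw, kv * 1e3, line_ohms)
    print(f"{kv:4d} kV: {loss:8.1f} MW lost ({loss / power_mw:.1%} of the power sent)")
# At generator-level voltages the loss would exceed the power being sent --
# which is exactly why the switchyard transformer steps the voltage up.
```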

A typical power plant uses from 4% to 7% of its own power to keep its pumps, fans, and other systems running. Many of these systems have to be running before the plant can be brought online, so the switchyard will provide power for startup drawn back in from the far-flung power grid. This leads to the problem of what might happen if the entire grid went down, but for decades nobody thought that was possible. It happened to the US Northeast on the night of 9 November 1965 and the entire power grid of the region was out for at least 12 hours. These days, plants generally have their own local backup systems to get restarted.

* While conventional coal-fired power plants use turbines driven by an external steam source, there are also power plants used for peak power generation that use combustion turbines, with the turbine containing a combustion chamber, just as with an aircraft turbojet, and burning natural gas. Such combustion turbines tend to be smaller and less efficient than steam turbines, but unlike steam turbines they can be turned on and off at will. Such combustion turbine plants have really impressive intake and exhaust systems, not only to handle the airflow but to suppress noise.

gas turbine spools

Experimental coal-fired power plants have also been built that use combustion turbines, with the coal converted into coal gas for burning. The advantage is that pollutants can be filtered out of the coal gas stream before it's burned, which is easier than doing so after it's burned. Such plants also use various schemes to provide greater efficiency than a conventional coal-fired power plant, but they're more expensive and so the idea hasn't caught on yet. [TO BE CONTINUED]

START | PREV | NEXT
BACK_TO_TOP

[THU 18 JAN 07] SILENT PARTNERS

* SILENT PARTNERS: Back in August 2006, an article here discussed the bacteria that live in our guts, which outnumber the cells in our body by an order of magnitude. I added a speculation that someday, probably not soon, we might be able to modify our intestinal bacterial ecology to give us new capabilities.

According to an article in SCIENTIFIC AMERICAN ("Digestive Decoys" by Christine Soares, October 2006), I was being pessimistic. A research team under James C. Paton at the University of Adelaide in Australia has developed strains of the common human colon bacterium, Escherichia coli, to neutralize the toxins produced by intestinal bacterial infections, such as those acquired by visitors to foreign lands where the water isn't safe for outsiders to drink.

The research team engineered an E. coli bacterium that featured surface receptors to lock onto toxins released by cholera bacteria, with each modified E. coli bacterium able to bind up to 5% of its own weight in toxin. In test-tube experiments, the bacteria neutralized all but a fraction of a percent of the cholera toxin in a solution. In a mouse test, eight of the dozen mice infected with cholera and given the modified bacteria survived, while all dozen mice in the untreated control group died. The treatment was effective even if it was administered hours after the infection.
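
As a sanity check on those mouse numbers, a one-sided Fisher exact calculation asks: if the treatment did nothing, how likely is it that all eight survivors would happen to fall in the treated group of twelve? The calculation below is my own arithmetic, not part of the Adelaide study.

```python
# One-sided Fisher exact probability for 8/12 treated survivors vs 0/12 controls.
from math import comb

treated, control, survivors = 12, 12, 8
p = comb(treated, survivors) * comb(control, 0) / comb(treated + control, survivors)
print(f"chance of this split if the treatment did nothing: {p:.4f}")   # ~0.0007
```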

E. coli is easy to grow, and so a treatment based on it is likely to be cheap. It can be easily administered in a flavored solution. It also doesn't attack the cholera bacteria itself, instead protecting the host, so the cholera won't build a resistance to the treatment through selection pressure. There are still concerns about Paton's modified E. coli bacteria, and human trials are not being conducted just yet. They could trigger a dangerous immune reaction, and there's also the possibility of political backlash against a treatment based on a genetically modified organism. Paton believes that such an obstacle could be overcome by administering killed bacteria: "It works when it's dead, not quite as well, but once it's dead it's no longer a 'genetically modified organism'."

BACK_TO_TOP

[WED 17 JAN 07] ENERGY FUTURE

* ENERGY FUTURE: The September 2006 issue of SCIENTIFIC AMERICAN was devoted to "Energy's Future Beyond Carbon" and was worth a survey. It mostly covered the usual subjects -- carbon emissions versus global warming, the Kyoto treaty and US noncompliance, more efficient cars and buildings, carbon sequestration at power plants, hydrogen as a fuel, and renewables in general. However, it did present a number of interesting factoids and ideas along the way:

* A loosely related recent article in BUSINESS WEEK provided a closeup on US farmers investing in biofuels and wind power, and though it also was mostly familiar stuff, it had some interesting details. Farmers who have invested in ethanol facilities are finding it highly profitable these days, though recent soft oil prices are cutting into their margins. Wind seems like an even bigger plus: farmers who lease sites for wind turbines get a few thousand dollars a year per turbine, and when they own the turbines themselves, they can make some really good money. Green power has reversed the decline of farming in some regions out on the prairies, and more profit for corn farmers means the government can cut farm subsidies.

Sounds like a win all around. The article did say another thing that caught my eye. It cited the ratio of "energy out" to "energy in" for corn ethanol as 1.5, which seemed about the median estimate from what else I've read -- and then added that the ratio is 3 for biodiesel. If so, that's surprising, two-thirds the ratio of gasoline, which is around 4.5 -- and biodiesel also has the advantage that it acts pretty much like ordinary diesel, not requiring that engines burning it be corrosion-resistant, as is the case for engines that burn high-proof ethanol or methanol fuel mixes. Then the article went on to say that cellulosic ethanol could ultimately have a ratio of up to 36; I just laughed, wondering how anyone could come up with such an estimate, since at the present time it's twice as expensive to get energy out of cellulosic materials as out of corn.
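
To put those ratios in perspective, here's a minimal back-of-the-envelope sketch in Python (my arithmetic, not the article's): for an energy-out-to-energy-in ratio r, only the fraction (r - 1)/r of the delivered fuel energy is a genuine gain, which is what makes 1.5 look marginal and 4.5 look comfortable.

   # Net-gain comparison for the energy-return ratios quoted above.
   # For a ratio r = (energy out) / (energy in), the fraction of the
   # delivered energy that is a genuine gain is (r - 1) / r.
   ratios = {"corn ethanol": 1.5, "biodiesel": 3.0,
             "gasoline": 4.5, "cellulosic ethanol (claimed)": 36.0}
   for fuel, r in ratios.items():
       net_fraction = (r - 1.0) / r
       print(f"{fuel:28s} ratio {r:4.1f} -> {net_fraction:.0%} of output is net gain")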

Along this line, SCIENTIFIC AMERICAN had a short article about cellulosic ethanol feedstocks. A team of economists and ecologists at the University of Minnesota has suggested that nitrogen-poor, degraded land planted with a mixture of perennial prairie grasses -- including goldenrod, Indian grass, big bluestem, and switchgrass -- can actually provide up to 238% more bio-energy than the same land planted with only one species. A plot planted only with switchgrass, the great hope so far of cellulosic ethanol, yields only a third as much bio-energy. Compared to cultivated corn grown on good land, the mix of prairie grasses can provide 51% more energy per hectare.

The really nice thing about the prairie grass mix is not just that it grows easily on land worthless for crops, but that the grasses have extensive root systems, meaning that they sequester more carbon than will be extracted by converting their stems and stalks into biofuels. Of course, right now this is of little practical use, since it's so much more expensive to produce cellulosic ethanol than corn-based ethanol. Critics also point out that if cellulosic ethanol processes become more cost-effective, then corn stover -- stalks and other plant waste -- will be convertible to fuel, making the economics of corn look better.

I was daydreaming once and thought that if I could have three wishes, I would ask for much better and cheaper technologies to convert sunlight to electricity, convert sunlight to fuel, and store electricity. The interesting part was to wonder if we could actually obtain all three such things if we threw billions of dollars at them. It would certainly improve the odds, but then again the laws of physics do not always accept bribes.

BACK_TO_TOP

[TUE 16 JAN 07] JIHAD ONLINE

* JIHAD ONLINE: A BBC.com article ("The Growth Of Online Jihadism" by Frank Gardner) took a peek at how "jihadis" -- Islamic militants -- make use of the Internet in their war against the infidels. The tour started at the Norwegian Defense Research Institute (NDRI) near Oslo, where a group of academics engage in research on terrorism in an interactive fashion.

The team members are fluent in Arabic and surf the web to sites run by jihadis, infiltrating under assumed Arabic names. One of the team, Brynjar Lia, who has written a popular book on Islamic militancy and the Internet, describes the virtual environment they visit:

BEGIN QUOTE:

Propaganda, calling people to jihad, is the primary purpose. It has always been like that from the beginning, but secondly it is to communicate to the internal community of jihadis with the message to continue to fight and build up the spirit of combat, and also internal communication with cell members and so on. This can be via e-mail or encrypted messages. Usually they don't use much encryption, they only use easy codes, simple codes that can be read by people but interpreted as something that doesn't have anything to do with terrorism.

Then there is also the external audience, those enemies who they want to frighten and terrorize. The idea is to produce videos that are very scary, like decapitations and other similar movies. Then there is also the electronic jihad part of it, which is to destroy enemy websites which are critical of the jihadi movement.

The last area is training. That could be anything from providing security instructions, how to withstand interrogations, how to evade surveillance but it could also be how to produce explosives, how to put together a mine, how to place the mine and so on.

END QUOTE

Over the last year, the Norwegian analysts say, the websites have been shifting their message to reach audiences in Europe, particularly Britain, which is seen as a high-priority and relatively soft target. Online videos of speeches, for example those by al-Qaeda chief strategist Dr. Ayman al-Zawahiri, are now subtitled in English, German, Spanish, Swedish, and other European languages.

The sophistication of the material is also improving, with front-line combat videos from Iraq and Afghanistan, along with well-made training videos on bomb-making and weapons handling. The jihadis have become very slick at reaching the youth market, setting up chat rooms to discuss jihadism, creating online games where players can blow away American soldiers, and providing flashy jihadi rap videos. According to Thomas Hegghammer of the NDRI, some forums are used to obtain the latest news from the jihadi community:

BEGIN QUOTE:

These forums are like the sort of town square of online jihadism, it's where people meet to collect information and discuss topics. If you look at the address it's quite anonymous, it's just numbers and this is because they move around all the time to avoid hackers and government agencies that try to take them down, so some of these sites move around on a weekly basis or even daily basis and the way you find these addresses is from other forums. So there is always a redundancy. So if one forum is shut down then you go to the other one to get the new address.

END QUOTE

Western governments have been slow to react to the phenomenon of online jihadism, but British intelligence now believes the Internet has become the main conduit for recruiting new troops to the cause. However, a London-based Arab journalist named Camille Tawil who keeps track of online jihadi activities believes the British are content to just monitor the web traffic for now, while the US has tried to infiltrate the sites and get data on users in hopes of foiling attacks and capturing jihadis: "The Americans are well ahead of the British in this."

Comments an official at the British Home Office, responsible for the UK's internal security: "We are doing a number of things, some overt and some covert. But we admit some of them are not working".

BACK_TO_TOP

[MON 15 JAN 07] PC INTERFACES & NETWORKING (3)

* PC INTERFACES & NETWORKING (3): Most folks pay much less attention to the interfaces used inside a PC to hook up disk drives than they do to plug-in card interfaces, but disk drive interfaces are a complicated subject of their own.

The main PC disk drive interface is the "AT Attachment (ATA)" bus, which goes back to the late 1980s and, as its name implies, was originally associated with the IBM PC/AT. In the early days, it used a 40-wire ribbon cable connector scheme with a 16-bit data bus, and could support up to 137 gigabytes of disk space in principle. Unlike earlier PC hard disk drive controllers, the drive controller chips were on the hard disk itself, and so ATA was also known as "Integrated Drive Electronics (IDE)". Multiple drives could be supported using a linked cabling scheme -- which those who had to occasionally deal with it could find a literal pain, since routing ribbon cables around in cramped PC chassis was often troublesome and could result in scraped knuckles, chipped fingernails, and the like.

ATA has been successively enhanced, with the "ATA Packet Interface (ATAPI)" AKA "ATA-4" enhancement introduced to support tape drives, CD-ROM drives, and the like; "ATA-5", with an 80-wire ribbon cable that used interleaved ground wires to permit boosted speeds; and "ATA-6", which expanded the addressing scheme to support about 144 million gigabytes (144 petabytes) of disk space.
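
Those capacity ceilings fall straight out of the sector-addressing arithmetic -- a quick check in Python, assuming the standard 512-byte sector and the 28-bit and 48-bit block addresses used by the older and newer specs (the addressing widths are my gloss, not spelled out above):

   # Where the ATA capacity ceilings come from: number of addressable
   # sectors (set by the width of the block address) times the standard
   # 512-byte sector.
   SECTOR_BYTES = 512
   for name, bits in [("28-bit block addresses (classic ATA)", 28),
                      ("48-bit block addresses (ATA-6)", 48)]:
       capacity_gb = (2 ** bits) * SECTOR_BYTES / 1e9
       print(f"{name}: about {capacity_gb:,.0f} gigabytes")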

"ATA-7" was a new scheme, "serial ATA (SATA)", using a serial interface configuration, originally running at 1.5 gigabits per second, but later boosted to 3 and now 6 gigabits per second. Each drive has its own dedicated SATA cable connector. Of course, traditional ATA is now referred to as "parallel ATA (PATA)". Incidentally, in both PATA and SATA power is provided to disk drives over a separate connector.

* The Shugart-derived "Small Computer System Interface (SCSI)" hard disk drive interface has also been around for a long time and is hanging in there handily. It is another parallel bus interface scheme and has been used on Apple Macs and Sun workstations -- but rarely on Windows-type PCs because of relative cost. For this reason it's not discussed in detail here, and it would be difficult anyway: as a standard, SCSI is a loose one, with a bewildering range of connector schemes and a massive range in performance. At the top end, it blows the doors off ATA schemes in performance, at the cost of a higher price, and so SCSI interfaces are often used on high-end workstations and server systems.

External memory devices are now often hooked up to PCs over the USB and Firewire serial interface systems, which are the subject of the next installment in this series.

* While poking around on HDD interfaces I had vague memories of the "Drive Bay Specification" that was floated around some years ago. It seemed like a good idea, a common spec that would allow disk drive makers to build plug-compatible disk drives, but on investigation the idea seems to have completely disappeared. I would guess that the disk drive business was just too cutthroat to permit anyone to agree on a standard, particularly one that might have added to expense even slightly. I've met people who worked for disk drive manufacturers; one described it as a "diverse environment -- the longer you work there, 'di verse' it gets." [TO BE CONTINUED]

START | PREV | NEXT
BACK_TO_TOP

[FRI 12 JAN 07] INFRASTRUCTURE -- POWER PLANTS (4)

* INFRASTRUCTURE -- POWER PLANTS (4): Once the steam goes through the power plant turbine system, it's cooled in a condenser to be returned through the boiler loop. Since the density of the flow is lower in the condenser than anywhere else in the loop, simple mass-flow considerations mean the condensers are necessarily big.

The cooling actually helps drive the flow through the turbine system, since the condensation of water results in a partial vacuum inside the condenser. The water has to be purified or "polished" before it's sent back through the loop again. If it weren't purified, it would deposit minerals on the inside of the boiler tubes, where they would act as an insulating layer and cause the tubes to overheat, with the threat of a disastrous rupture.

The polishing takes place in a number of steps, which are somewhat reminiscent of those that take place at a municipal water-treatment plant:

Since some water is lost from the loop, a bit of "makeup water" is added before the flow is pumped back into the boiler. Since the water is going from low pressure to much higher pressure, the "feedwater pumps" that do the job are hefty, the biggest pumps in the plant. They are either electrically driven or driven by steam turbines of their own, and they soak up 2% to 3% of the entire power output of the plant.

* All this manipulation of steam is of course to spin an electric generator. An electric generator consists of a "rotor", a spinning coil of wire that produces a continuously changing magnetic field. The spinning magnetic field "induces" a current in the surrounding fixed coils, or "stator", to produce electricity. In the case of generators for North America, the rate of rotation is 3,600 revolutions per minute (RPM), or 60 revolutions a second, which generates alternating current at 60 hertz (Hz). The rate of rotation has to be precisely controlled or the AC going out over the network from different power plants will be out of phase, causing unacceptably irregular power distribution over the power lines. Outside of North America, the AC rate is 50 hertz, corresponding to 3,000 RPM.
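
The RPM figures follow from the fact that utility generators of this sort are two-pole machines; in general, AC frequency works out to the number of poles times RPM divided by 120. A minimal sketch in Python (the two-pole assumption is mine, chosen to match the figures above):

   # AC frequency from generator shaft speed: frequency (Hz) = poles * RPM / 120.
   def ac_frequency_hz(rpm, poles=2):
       return poles * rpm / 120.0

   print(ac_frequency_hz(3600))   # 60.0 Hz -- North American grid
   print(ac_frequency_hz(3000))   # 50.0 Hz -- most of the rest of the world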

The rotor of the main generator uses electromagnets instead of permanent magnets, meaning that electricity has to be used to magnetize the rotor so it can produce electricity. That leads to the "chicken or egg" question of where the electricity needed by the main rotor comes from. It is in fact produced by a smaller generator, the "exciter", on the same driveshaft. However, the exciter rotor is also electromagnetic, making the source of its current another puzzle. The answer's simple: there's a third and still smaller generator, the "pilot exciter", on the driveshaft -- and it has a permanent-magnet rotor.

The generator system is very efficient, converting about 98% or 99% of the mechanical power coming in to electricity going out. However, one or two percent of hundreds of megawatts of power is still megawatts, and that means getting rid of a lot of heat. The main stator windings are made of copper tubing, with water pumped through them as a coolant. The rest of the generator is cooled by gaseous hydrogen. It's an effective coolant, though the explosive nature of hydrogen means the generator casing has to be sealed to prevent oxygen from getting in. The electric power is carried off by thick aluminum or copper "bus bars" as thick as tree limbs, capable of handling tens of thousands of amperes of current. [TO BE CONTINUED]

START | PREV | NEXT
BACK_TO_TOP

[THU 11 JAN 07] BALLOON MAN

* BALLOON MAN: The Japanese are stereotyped as buttoned-down and unimaginative, but on closer inspection this generally ends up seeming an illusion. As a case in point, consider Nishi Naoki of Show Corporation, profiled in THE ECONOMIST ("Flying High", 30 September 2006).

Nishi used to work in a firm connected with the Japanese auto industry. In the 1990s, the company hit hard times, and he was assigned to restructure the operation. Much to the surprise of his superiors, part of his proposal was that they fire him as excess, suggesting that Nishi is a brutally honest man. They did as he recommended.

Nishi was a hot-air balloon enthusiast and decided in 1993 to go into that line of business, cashing in his insurance and pension for funding. His initial strategy foundered, but he adapted, focusing on character balloons and expendable "flyaway balloons". Show Corporation now dominates the Japanese market for character balloons and even sells them to Disney Corporation for use worldwide, but the flyaway balloons are the company's big money-maker.

Flyaway balloons were nothing new when Nishi got into the business, but they were declining in popularity since they left plastic litter over the countryside. He developed flyaway balloons made of Japanese washi paper, a biodegradable material derived from mulberry bushes -- incidentally, the "bomb balloons" the Japanese built late in World War II to attack the USA were also made of washi paper. Nishi's washi balloons proved popular, with the company establishing a high public profile with events such as the release of flocks of flyaway balloons in the form of white doves at the beginning of the 1998 Winter Olympics in Nagano, Japan.

Show Corporation now has 16 employees in Japan and 71 in China, where the balloons are produced. These days they're made of a cheap rubber that degrades quickly in sunlight; up to a half million a month are sold in Japan alone, and the export business is picking up. The company is also diversifying, producing such items as a cushion to protect workers cleaning high ceilings from falls, and a life-vest for tsunami-prone areas that will protect the wearer when dashed against obstructions by a tidal wave.

BACK_TO_TOP

[WED 10 JAN 07] DEAD END

* DEAD END: Anybody with a background in the electronics industry knows about the heroes of the kingdom, such as Bob Noyce and Jack Kilby -- but few remember the name of Jack A. Morton of AT&T Bell Labs. As discussed in an article from IEEE SPECTRUM ("How Bell Labs Missed The Microchip" by Michael Riordan, December 2006), Morton's career promised to make him another hero of the industry, but in the end he made decisions that would consign him to obscurity.

Jack Morton was from Saint Louis, obtaining an electrical engineering degree from Wayne University in Detroit, where he was also on the football team. In 1936, he hired on at AT&T Bell Labs in New Jersey while pursuing a doctorate from Columbia University in New York. During World War II, Morton worked on radar and microwave technology, his work leading in the postwar period to a compact microwave vacuum tube that became a key component of the microwave telephone relay towers that sprouted across the USA.

In 1948, Morton took on responsibility for development of the newfangled solid-state transistor that promised to replace vacuum tubes. The transistor had been demonstrated in 1947 by three other Bell Labs researchers -- William Shockley, John Bardeen, and Walter Brattain. However, it was strictly a lab prototype device and not remotely a commercial product. As Morton said later, the performance of the transistor was likely to shift "if someone slammed a door."

Morton was sharp, determined, aggressive, and single-minded. By 1950, Western Electric, AT&T's manufacturing arm, had transistors in production. These were still crude "point contact transistors", with the more modern "junction transistor" being demonstrated in 1951. Morton saw its potential and got it into production in a year. These early devices were made of germanium, but some researchers like Shockley were convinced that silicon would do a better job, the problem being that silicon devices were more difficult to fabricate. In 1954, AT&T brass decided to develop the first "electronic switching system (ESS)" for telephone exchanges, and the decision fell to Morton as to whether it would be based on germanium transistors, then in reliable production, or the still-experimental silicon transistors. Morton grasped the nettle and chose silicon. The ESS-1 switching system, introduced in the mid-1960s, would prove an outstanding success.

* That would turn out to be the zenith of Morton's career. By the late 1950s, researchers like Noyce at Fairchild and Kilby at Texas Instruments were pioneering the first integrated circuits (ICs), which combined multiple transistors and other electronic components on a single silicon crystal. At the outset, like any other new technology, ICs were not particularly impressive, with a limited number of devices per chip and a hefty price tag. The US military, requiring compact electronics to fit into missiles, was funding the technology and willing to pay the price. AT&T didn't have to worry about lightweight electronics for their switching systems and the like, and so the company had no immediate motive to be concerned about ICs.

Morton had become a company vice-president in 1958. He was convinced that ICs weren't going to go much farther. His concerns were not unreasonable; the way he saw it, the more devices on a chip, the higher the probability that the chip would be defective. Ensuring reasonable production yields for a chip with a thousand transistors would require a failure rate per device of well under 1 in 1,000. He also believed that operational reliability would be a problem, and that tooling up IC production would be so expensive as to present an obstacle to technical improvement. Morton wasn't subtle in expressing his objections either, with one of his employees saying later: "Morton was such a strong, intimidating leader that he could make incorrect decisions and remain unchallenged because of his aggressive style." When competitors began to talk about "LSI" or "large scale integration" in the mid-1960s, Morton sneered at the competition as "large scale idiots".
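
The arithmetic behind that worry is easy to sketch: if each of n devices on a chip fails independently with probability p, the whole chip works with probability (1 - p)^n. A minimal illustration in Python (my gloss on the yield argument, not Morton's own numbers):

   # Naive yield model: a chip works only if every one of its n devices works,
   # each device failing independently with probability p.
   def chip_yield(n_devices, p_failure):
       return (1.0 - p_failure) ** n_devices

   for p in (1e-3, 1e-4, 1e-5):
       print(f"1,000 devices, per-device failure rate {p:g}: "
             f"yield {chip_yield(1000, p):.1%}")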

However, by the end of the decade the writing was on the wall, with competitors employing the new "metal oxide semiconductor (MOS)" technology to make ever denser chips. In the early 1960s, Morton had decided that MOS technology was unpromising and ordered work on the technology at AT&T abandoned. There's a saying that being on the rear of the advance means being in the forefront of the retreat, but Morton's predictions of a retreat turned out to be dead wrong and AT&T ended up in the rear, period. The reality was that the problem of chip yields, though hardly a simple issue, could be addressed even as chips began their rapid rise towards millions of devices. The reliability of a one-chip device was vastly greater than that of a comparable system wired together from simpler components; and though fabricating chips was and remains an expensive business, manufacturers never hesitated to update their processes as fast as they could. If they didn't, they went broke.

* By the end of the 1960s, Morton's failure was becoming apparent, though his prestige didn't suffer. LSI was still in its infancy at that time, so Morton could continue to think that he would be vindicated in the end -- but given his headstrong personality, he was frustrated at being stuck at the corporate VP level. He took to drinking more, his favorite hangout being the Neshanic Inn, located in Neshanic Station, New Jersey, not far from where he lived.

On the evening of 10 December 1971, Morton had returned from a trip to Europe and gone to the Neshanic Inn from the airport, driving his Volvo. The inn was closing when he got there, but two men, Henry Molka and Freddie Cisson, offered to give him a drink from a bottle in their car. The two men robbed Morton in the parking lot. Of course, Morton had to fight back, but he was outmatched. They beat him unconscious and threw him in the back seat of his Volvo. They drove it a block down the road and set it on fire. In the dark hours of the morning of 11 December 1971, firemen arrived to put out the fire, to then be shocked to find Morton's charred corpse in the back seat. An autopsy revealed Morton's lungs were singed, meaning that he was still breathing when the fire started. Molka and Cisson had murdered Morton for all of the $30 USD he had in his wallet. They got life in prison, but only served 18 years.

* The story of Jack Morton's life and death holds a certain morbid fascination in how chance and decisions can lead down a branching path of possibilities to a man's destruction. Organizationally, the story of Morton's failure to see the future suggests a less dramatic lesson that anyone who's ever worked for a big bureaucracy knows perfectly well: a good leader can be an SOB -- but there are a lot of SOBs in management, and the only ones that are any good also happen to be right. There's a subtler lesson as well: objections to a new course of action can be raised that seem perfectly sensible and valid at the time, but turn out to be completely wrong in hindsight.

BACK_TO_TOP

[TUE 09 JAN 07] YARIS BACK TO THE FUTURE

* YARIS BACK TO THE FUTURE: I finally got rid of my 1983 Toyota Tercel 2-door hatchback, obtaining a Toyota Yaris 2-door hatchback in its place. The Tercel had been so reliable that I could hardly have bought a car from anyone but Toyota. In fact, when the finance guy at the dealership -- an engaging Russian with the looks and personality of a middleweight boxer -- tried to sell me on an extended warranty, I basically replied: "What for? Toyota makes such a good car!" It was the perfect rejoinder, not merely blunting his push but leaving him beaming with pleasure.

In any case, the abrupt jump from a 1980s car to a 21st-century car turned out to be something like "back to the future", illuminating more than two decades of progress in automotive design. To be sure, the Yaris is a budget machine and not an exotic vehicle by any means, but even Toyota's bottom-of-the-line vehicle represents plenty of evolution.

Toyota Yaris

* First off is the streamlining, the Yaris featuring the smooth rounded contours and sharply raked windscreen often seen on modern cars. Sexy? Not really, but it is kind of cutesy. It reminds me a bit of the little toy cars that are wound up by rolling them backwards over a tabletop. I keep toying with the vague idea of putting a big windup key on the rear.

The headlights are interesting, consisting of one large lamp and one small lamp behind a curved transparent plastic fairing. The small lamp is "always on"; this appears to be common in new cars these days -- there's probably some regulation someplace requiring it, I would bet in California, but I don't know the details.

On the inside, the swept windscreen gives a surprising impression of spaciousness for such a small car. One of the few negative reviews of the Yaris that I read said it lacked headroom, but I'm six foot three (190 centimeters) tall and had plenty of leg and headroom. The back seats are cramped, but that's what I would expect for a subcompact.

The biggest change in the interior is that the instrument panel is not in front of the steering wheel, but is in front of the stick shift in the center. This gives a clearer view to the front and more space for storage -- it has a very nice and convenient set of storage compartments and niches. The critical review didn't like the centerline dashboard either, but it's a wash: instead of glancing down to get my speed, I glance to the side; the swept windscreen allows the dashboard to be set well forward, and it's not like I take my eyes off the road. I do wish it had a digital speedometer instead of an analog readout; since I suspect one's just about as cheap as the other, I would bet I'm a minority in that preference. It has a dual mileage counter along with the odometer. I was hoping it might have a "mileage reader" that gives current gas consumption, but no such luck.

Some of the surprises a new car can provide can be amusing. The first time I took the Yaris out in the dark and turned on the headlights, the dashboard went black. Huh? Oh, there's a dimmer control somewhere ... where's the dimmer?! I had to check the manual when I came to a stop and found out it was next to the driver's door, not exactly an intuitive spot when the dashboard is in the center.

The "fasten seat belts" alarm is persistent, rising to a frantic throb if I don't fasten my belt after a minute; it will go off eventually, but an alarm light on the dash keeps blinking at me. The alarm would also go off if I put something halfway heavy on the passenger seat, so I finally just put the seat belt in place there. One very peculiar feature is that I can't start the machine unless the clutch pedal is all the way down. Apparently it's a dodge to keep people from firing the thing up when it's still in gear, but I still stall the vehicle at stop lights by mishandling the clutch every rare now and then and if I get a bit flustered I can have trouble starting it again.

There were a number of features that everyone is familiar with these days but which were basically new to me, such as airbags and a rear-window wiper. As primitive as it might sound, this is the first car I've had with air conditioning; I'm at the age where I tire more quickly and drastically when it's hot, and so it's almost a necessity now. During the first long-range trip I took in it, when I started to get woozy I'd turn up the air conditioning until I literally chilled out, and it made a big difference. When I got to my destination, I didn't feel like I'd been run through a wringer. Now I can take long-range trips in the heat of summer, no problem.

The stereo system is state of the art, however, built around an MP3 CD player. It's fun to put a dozen or more CDs, or at least the high points of each, on a single disc and then play for hours. Trying to figure out how to navigate through the CD menus can be confusing at first, however; people talk about "hang up and drive", but in my case I had to remember a few times to "stop fiddling with the fancy stereo system and drive". There's also an input jack for using a music pod or the like, but I haven't tried it out yet.

The only really annoying feature of the Yaris is the spare tire. It has a temporary spare, and I detest such things, so the first thing I did was buy a real spare. I tried measuring the spare tire bay beforehand, but the full-sized spare still wouldn't fit in it. That annoyed me -- I can't stow a spare tire that fits the car? There's plenty of space for it in the rear, and I would have got rid of the despicable temporary spare in any case, but it's still klunky and annoying.

Incidentally, trying to dispose of a temporary spare was a nuisance. I even asked the dealership if they wanted it, and they said NO. It's so useless I can't even give it away. Trying to break the thing down for disposal proved another nuisance -- the tire was snugged up to the wheel with thick cable loops, a scheme which makes sense considering nobody ever thinks to change the tire on a temporary spare but which makes getting the tire off a real pain. I had to use heavy wire-cutters and a cold chisel to finally get the tire loose.

* Becoming cozy with a new car turns out to be something of a Zen experience. Driving a familiar vehicle is done in a Zen way, without much conscious thought, and it takes a learning curve to get to the point where a new vehicle becomes that familiar. To be sure, I've driven plenty of other vehicles on occasion, but it's not possible to drive a strange vehicle for a short time and feel knowledgeable about its quirks or fully comfortable driving it.

One of the issues is just the "situational awareness" of where the car is: how wide is it? I'm backing up, where's the back end? One of the funny things about driving the Yaris was when I would park it in a normal parking stall, I would leave the nose far from the head of the stall. It took me a little time to figure out it was because the nose of the Yaris was so much shorter than the nose of the Tercel.

The big adjustment comes in handling the rear view, tweaking the positioning of the mirrors and figuring out where the blind spots are. My situational awareness has got to the point where I can synthesize the rear view upstairs and not have to think about it much, though I'm still leery of spacing out and getting nicked on a lane change.

* I'm very satisfied with the car and found my dealings with John Elway Toyota very pleasant, at least in comparison with the sheer ripoff sales jobs more local dealerships tried to pull on me. I once observed that the only thing worse than a big stupid bureaucracy is a little stupid bureaucracy: at least the big bureaucracy has some standards. John Elway is part of the Autonation group and they have some rules. They didn't dicker with me too much, but then again their profit margin on the Yaris was so low that it didn't pay to try to drag things out. I gave them a reasonable deal, and it wasn't worth their time to do more than go through the motions of talking me down.

In fact, I had a lot more trouble with the Colorado motor vehicles department. Before I bought the car I called up to get an outline of what I needed to do to make my car legal, and the woman on the end of the line glibly fabricated a list of things where only half the entries had anything to do with reality. I ended up being worse off than if I hadn't called at all. If I'd just asked her one question, any question, I would have probably seen through her, but she had me conned.

Then I tried to order custom plates -- "RYO OKI", I wonder how many readers will know what that means, do a search on "cabbit" -- and they approved it as "RYO OK", even though I had listed three alternative spellings that all had an "I" on the end. They misspelled my name, too; I'm used to having my last name mangled, but not my first name. I sent back a correction. After a few weeks, I got a letter in the mail telling me to go down to the motor vehicles department and pick up the plates. I did so immediately and put the plates on right there. Enough messing with this thing.

* Incidentally, I read an article on the Yaris in a car magazine, and it pointed out that the old VW Rabbit, my first car, weighed only 860 kilos (1,900 pounds), while the Yaris weighs 1,090 kilos (2,400 pounds). Subcompacts have become heavier, up into the weight range of compacts -- apparently due to regulations on crash-worthiness and the like.

My little green Rabbit was not the best car I ever owned. The worst experience I had was when I was driving down the road one sunny day and smoke started pouring out from under the hood. I pulled off on a side street, parked, yanked open the hood, and all the insulation on the electrical system was sizzling away and burning. I snatched a sprinkler that was conveniently on the lawn of the house where I had parked to put out the flames, then grabbed a tool and pried off one of the cables to the battery to stop the sizzling.

The fiasco did have its bit of humor. All my frantic activity was witnessed by a young guy who was walking down the sidewalk when I parked. The fact that he wasn't very helpful was neither here nor there, it all happened so fast -- but then he had to tell me that he would have handled removal of the battery cable more effectively: "I would have just reached in and pulled it off!"

Right, you'd put your hand on a cable that was so hot it had melted its insulation off? I've always remembered that incident, and am glad I said nothing in reply, there being nothing good I could have said. By the way, I do now carry a fire extinguisher in the car where I can get at it quickly. And then there was the time the muffler got a hole in it and the thing sounded like a farm tractor. I do not remember the Rabbit with any fondness, but when I do think of it, I at least have one pleasant memory associated with it: the report of a person on a humor newsgroup about seeing a white Rabbit with the plates "IML8".

BACK_TO_TOP

[MON 08 JAN 07] PC INTERFACES & NETWORKING (2)

* PC INTERFACES & NETWORKING (2): About the time that the original PCI PC-plugin card system was introduced, a "mini-card" scheme designated PCMCIA was introduced for laptop computers. The original PCMCIA was developed by IBM in the late 1980s as a plug-in card spec for laptops, with the acronym standing for "Peripheral Component Microchannel Interconnect Architecture". In the meantime, the Japanese put together a plug-in memory card spec for laptops named after the "Japanese Electronic Industries Development Association (JEIDA)". Apparently IBM learned from their failure to be a good standards player with the MCA cards, and the two factions worked together to come up with a common standard. This emerged in 1991 as "JEIDA 4.1" or "PCMCIA 2.0", where the acronym was modified to mean "Personal Computer Memory Card International Association". The gag went around that it actually meant "People Can't Memorize Computer Industry Acronyms".

The original "Type I" PCMCIA card had a form factor with a length of 85.6 millimeters, a width of 54 millimeters, and a thickness of 3.3 millimeters. The Type I had a 16-bit data bus with a single row of pins and could support 5-volt or 3.3-volt operation, with cards being "keyed" to prevent them from being plugged into the wrong slot. They supported "hot plugging", meaning they could be yanked from or plugged into a PC without turning it off -- a notion so universal these days that it might come as a surprise that it wasn't true some years ago. The power connections were longer than the signal and data connections, meaning that plugging in or pulling out didn't lead to the card going wild on the bus. Type I cards were only used for memory expansion and died out after a few years.

Later types had the same length and width but had varying thicknesses. The "Type II" card was 5 millimeters thick and had two rows of pins supporting a 16-bit or 32-bit bus. The 32-bit bus was standardized in PCMCIA 2.1 as "Cardbus", operating at 33 MHz, which was keyed so it couldn't be plugged into a 16-bit slot. However, a 16-bit card could fit into a Cardbus slot in principle. The Type II / Cardbus scheme became the effective standard from the mid-1990s. The "Type III" card was 10.5 millimeters thick and was intended for plug-in hard drives. For a time, laptops were shipped with two Type II slots that could accommodate a single Type III card as well, but the Type III never proved very popular.

* A new spec, "ExpressCard", is now emerging to replace the PCMCIA / Cardbus scheme. This is more or less a next-generation PCMCIA-type card using a single PCI Express channel, providing 250 gigabit per second throughput -- it doesn't support more than one channel. The "more or less" qualification is that the ExpressCard bus also supports the "Universal Serial Bus (USB) 2.0" spec. USB, to be discussed in more detail later in this series, is the well-known PC serial interface, with the 2.0 spec supporting 480 megabit per second transfer rate -- less than a fifth of the PCI-E channel but adequate for ordinary sorts of devices.

Why the USB 2.0 interface was also included despite its lower throughput is a bit puzzling. It most likely was done to accommodate low-cost hosts that only feature a USB 2.0 controller chip and not a PCI-E controller chip, though it also will make it simple to build USB 2.0 converter boxes to use the cards. ExpressCards have no compatibility with PCMCIA Type II cards.

The ExpressCard connector is only 26 pins, in a single row. The standard card format is 5 millimeters thick, same as the Type II PCMCIA card, but at 75 millimeters length it is shorter than the PCMCIA card, and at 34 millimeters width is much narrower. A "big" ExpressCard format, 54 millimeters wide, same as the PCMCIA card, has also been defined, though the bus connector is still only 34 millimeters wide -- giving the big cards a tabbed appearance. The ExpressCard supports 1.5 volt and 3.3 volt power, and like PCMCIA, it is hot-pluggable. ExpressCard is being driven by the PCMCIA group, though anybody who wants the spec has to buy it. However, it appears to be catching on since its introduction by HP in 2004, and so the price of the spec doesn't seem to be much of an obstacle.

* As something of a footnote to the discussion of PCMCIA / ExpressCard, the PCMCIA spec also led to the popular "CompactFlash (CF)" memory card, which basically uses a 50-pin / two-row subset of the PCMCIA interface in a more compact package, 43 millimeters wide and 36 millimeters deep. There are two thicknesses, the 3.3 millimeter "CF1" format and the 5 millimeter "CF2" format. CompactFlash is the most venerable surviving flash-card format; it is the bulkiest, but it's not so big that the size is any problem for anything except pocket-sized gear, music pods, and the like. The bulk means that CF cards have the biggest memory capacity of the flash formats; the use of the PCMCIA interface means that all a CF card needs to fit into a Cardbus slot is a passive PCMCIA Type II adapter card.

Other flash cards are based on different interfaces:

There are variations on many of these flash cards, but further discussion would be getting further and further off the track of PC interfaces. [TO BE CONTINUED]

START | PREV | NEXT
BACK_TO_TOP

[FRI 05 JAN 07] INFRASTRUCTURE -- POWER PLANTS (3)

* INFRASTRUCTURE -- POWER PLANTS (3): The next item on the tour of a coal-fired powerplant is the stack, the big chimney that can be seen from far away. The stack was once used to generate a draft to pull air into the furnace, but with the convoluted pathway now present between air intake and exhaust, the draft has to be driven by blowers; the height of the stack now simply ensures dispersal of the exhaust. There may be a separate stack for each furnace at a plant, or the flues may be consolidated into a single stack. There may also be short stacks around the plant serving small auxiliary boilers.

Old-fashioned furnaces had boilers with tubes run through the middle of a firebrick-lined firebox. Fireboxes now run hot enough to destroy firebrick, so the water is instead run through the walls of the firebox, with the walls divided up internally into tubes called "risers". Incidentally, as hot as the firebox is, it's so heavily insulated that it's not uncomfortable to be right up next to it. The insulation is not just for comfort, but to prevent heat energy from leaking away and being wasted.

The tubes in the firebox walls are called "risers" because the water and steam rise in them, to be fed to a "steam drum" above that separates water from the steam in the flow using centrifugal principles, and pipes the steam off to the power turbine. The steam uses up its energy going through the turbine and heat exchangers, with the liquid water returned to the firebox through a set of "downcomer" tubes. The downcomers feed the risers through a junction over a "mud box", a sump that collects rust and scale for disposal when the furnace is shut down for maintenance. The entire boiler assembly is suspended from the ceiling, to give it room to change its dimensions as it heats up and cools down.

A modern boiler system will have automated valves and computer control, but it will also have a spring-loaded relief valve on the steam drum that will pop open when a maximum pressure level is reached. Such relief valves go back to the age of steam, when boiler explosions were common and often disastrous.

* The steam plant's turbine is not so different at the core from the turbine of a jet engine, though instead of generating its own gas flow in a combustion chamber, it's driven by the steam from the boiler. The turbine may be mounted on its own foundation, distinct from the rest of the powerplant structure, to ensure a tight alignment; the spinning turbine contains an enormous amount of rotational momentum and kinetic energy, and if it's not kept on axis it could shake itself to pieces disastrously in short order.

A typical turbine has a "spool" with stages of "rotor" vanes rotating on the driveshaft, with the moving rotor stages spaced by fixed "stator" vanes attached to the casing. The vanes are designed with their aerodynamics carefully considered, and they are built to the most exacting material standards, capable of tolerating high pressures, forces, and temperatures. A broken vane would lead to disaster.

steam turbine spool

There are usually three units in the turbine: a "high pressure" turbine that accepts the raw steam, its output then being driven through a reheater to kick its heat (if not its pressure) back up before being fed to an "intermediate pressure" turbine. The cooled output of the intermediate pressure turbine is finally driven through a "low pressure" turbine. Since the steam expands as the pressure drops, the casings for the three turbines go from relatively small for the high pressure turbine to relatively big for the low pressure turbine, just to maintain mass flow. The relative sizes are deceptive, since the high pressure turbine generates 60% of the power, while the much bigger low pressure turbine provides only 15%.

The lubricating system for the turbine system is large and complicated, with big pumps and motors to keep the oil flowing. A computer-controlled governor system keeps the turbine assembly from spinning too fast and destroying itself. [TO BE CONTINUED]

START | PREV | NEXT
BACK_TO_TOP

[THU 04 JAN 07] READ THE FINE PRINT

* READ THE FINE PRINT: Well-known debunker Michael Shermer had to admit in a recent SCIENTIFIC AMERICAN column that he'd recently been had himself. He was traveling on a tour through crowded airports and airliners over the winter holidays, and regretted not packing his "Airborne" tablets -- which, dropped into a glass of water, create a fizzy orange-flavored potion of herbs, vitamins, electrolytes, and whatnot that supposedly wards off colds.

By coincidence, while on the tour Shermer ran into a fellow who had actually investigated Airborne, and discovered some interesting things. Most notably, the product says in bold letters:

   Take at the FIRST sign of a cold symptom 
   or before entering crowded environments ... 
   repeat every three hours as necessary.

However, the labeling also says, in tiny print:

   These statements have not been evaluated by 
   the Food and Drug Administration.  This product
   is not intended to diagnose, treat, cure, or
   prevent any disease.

Further investigation showed that the makers of Airborne, Knight-McDowell Laboratories, have interesting qualifications for development of a cold remedy: Victoria Knight-McDowell is a schoolteacher and her husband, Rider McDowell, is a scriptwriter, which explains something about the verbiage on the product packaging. Indeed, their Web page actually boasts about their home-grown credentials, claiming that the product was CREATED BY A SECOND-GRADE SCHOOL TEACHER!

The website had originally featured a link to results of a clinical trial, but that was dropped, the explanation being that it "confused consumers". ABC News looked into the matter and found the clinical trial was conducted by GNG Pharmaceutical Services, "a two-man operation started up just to do the Airborne study. There was no clinic, no scientists, and no doctors."

Shermer dipped into his network of knowledgeable contacts and was informed there was no component of Airborne that was known to have any effect on colds, except for vitamin C, which some evidence suggests may be able to reduce the duration of a cold. The herbs and vitamins in the concoction will tend to provide a bit of a lift, but Airborne also has high levels of vitamin A, in fact high enough to violate safety guidelines if the nostrum is taken several times a day. Shermer's contact told him: "There's more evidence for chicken soup than for Airborne."

[ED: Shermer's misadventure with Airborne proves the truth that even a very sharp professional skeptic will be conned every now and then. Nobody has the time or energy to investigate every possible course of action in detail, and so we often have to do things on a degree of trust. Sometimes, of course, that trust is misplaced. As for Knight-McDowell Labs, it appears they have been doing a good business with Airborne, their website saying they have made hundreds of millions of dollars in sales. With the bad publicity piling up, that may not be for much longer -- alas, though this fad will run its course, its authors will still walk off with the money.]

BACK_TO_TOP

[WED 03 JAN 07] SKY JUNK

* SKY JUNK: Jetliners, like any other machine, eventually wear out and have to be disposed of, preferably before they do so on their own. Given how big they can get, this can obviously become an elaborate process. A BBC.com article ("Where Old Aeroplanes Go To Die" by Chris Legard) took a closer look at the details.

There was a big buildup of jetliners, including the then-new jumbo jets like the Boeing 747, in the early 1970s. Jetliners have a working life of roughly 30 years, and now these aircraft are being junked. When some operators simply dumped their junkers in the ocean, officials at Boeing became alarmed. In response, Boeing helped set up the "Aircraft Fleet Recycling Association (AFRA)", a consortium of recycling companies operating at two airports -- Chateauroux in France and Evergreen Air Center in Arizona.

The Beeb reporter paid a visit to Chateauroux and spoke with Martin Fraissignes, who operates the facility and is also the executive director of AFRA. The airport is littered with jetliners being dismantled; Fraissignes jokes that it's good that only freighters fly out of Chateauroux, since some airline passengers might find the sight of piles of wreckage unsettling.

The jetliners are broken down, with some components refurbished for resale and metals separated for sale as scrap. However, new jetliners are increasingly made of lightweight, strong carbon-epoxy composites; they will make up half the weight of Boeing's new 787 jetliner, now in development. Not to worry, however; the Milled Carbon factory in West Bromwich in the UK, another node in the AFRA network, has implemented processes that can quickly render down composite materials for re-use, with the end product good enough to be used in new aircraft.

The factory's boss, John Davidson, is a founding member and director of AFRA. According to Davidson, the consortium was set up as an exercise in "best processes", with an eye to the concern that somewhere down the road jetliners would be covered by "End Of Life Vehicle Regulations" that would force manufacturers like Boeing to cover the cost of ultimate disposal of their products. The general sense at Boeing and elsewhere was that it would be far preferable to take control of the issue before governments did and imposed solutions on industry that would be more painful than necessary.

Says Jim Toomey of the AFRA Evergreen Air Center in Arizona: "Why is AFRA going to be great? Number one, it's going to get the best practices established. Number two, it's going to keep us at the cutting edge of recycling technology. And number three, it's going to do it without government regulation and interference. We're going to do it before they tell us to do it, and we're going to come up with practices we can live with and which are better than maybe they can enforce because this is our business."

Boeing's arch-competitor Airbus is also working on the disposal issue through a program named the "Process for Advanced Management of End of Life Aircraft (PAMELA)". It's a research program right now, backed by 3.2 million Euros, some of that coming from the European Union. In recycling, it seems, the sky is not the limit.

BACK_TO_TOP

[TUE 02 JAN 07] EMERGING ECONOMIES

* EMERGING ECONOMIES: THE ECONOMIST ran an extended survey on emerging economies in the 16 September issue ("The New Titans" by Pam Woodall). The discussion was mostly in macroeconomic terms and a bit dry for anyone not into monetary exchange rates or labor balances, but there were interesting tidbits to report.

In 2005, the emerging economies finally caught up with the cumulative GDP of the rich-world economies, meaning that the balance of economic power is beginning to shift in a major way. It should be pointed out that the accounting was in terms of purchasing power, which credits money in poor countries with going farther; measured at market exchange rates, the emerging economies' contribution to global GDP is only about 30%. However, no matter how it's figured, the times are changing. In 1970, the share of world exports held by the emerging economies was only 20%; now it is 43%. They consume over half the world's energy, account for most of the growth in fuel demand, and hold 70% of the world's foreign-exchange reserves.

The four biggest emerging economies are Brazil, Russia, India, and China -- "BRIC" in the jargon -- with Mexico beginning to catch up with the pack. Ironically, three or four hundred years ago, countries like India and China were the world's economic powerhouses, but the industrial revolution left them behind. Now they are making a comeback. Over the past five years, the economies of India and China have grown by 7% a year, an unprecedented rate and far above the 2.3% growth average of rich countries over the same period. Forecasts envision the growth rates of emerging economies and rich countries following this track for the next five years. The first decade of the 21st century is experiencing the most rapid growth in wealth in history. If the emerging economies continue on the track of free-market economics and social reform, the boom is likely to continue.

The boom is being driven by international commerce, with the Internet providing a nervous system for communications and coordination. Growth should make almost everyone happy, with more of the world's citizens enjoying more prosperous lives and so providing new markets for those who want to sell them goods and services.

* However, in the nature of things, not everyone is turning out to be a winner. One of the biggest advantages of emerging economies is a low-cost, highly motivated workforce. That means a tendency to transfer production from rich countries to emerging economies. The fact that this implies layoffs for workers in rich countries is arguably less important than the fact that it gives management a powerful lever against labor, and one that management ends up being forced to wield even if they don't want to: high-priced labor means trouble from competitors and the danger of bankruptcy.

The bottom line is that in rich countries like the US, the real wages of unskilled workers have been gradually declining, if at a slow rate. However, in the new global economy, skilled labor and management are in demand. With global business booming, profits are running high, and in the US the tax system has tended to become less progressive -- so the rich are getting richer. America's top 1% of earners now pull in 16% of all US income; they were pulling in 8% in 1980. The same thing has been happening in Europe and Japan, though not to such an exaggerated level. Emerging economies are now starting to compete more in skilled and managerial jobs, though nobody expects them to make the same inroads as they have in unskilled jobs.

Protectionism isn't the solution: nations need to engage in the world economy to stay competitive, and overall wealth is increasing for all. Even unskilled workers whose pay is falling get some compensation from cheap goods imported from emerging economies. However, there is still a case for careful government intervention, tweaking the tax and benefits system, improving education, and helping workers change jobs. If real wages at the bottom continue to decline, the pressure for protectionism may well become politically irresistible, no matter what its drawbacks are.

* Another important aspect of a world increasingly dominated by emerging economies is the scramble for resources. Demand for materials is climbing rapidly in emerging economies, and they are still low on the curve. Some estimates indicate that China will import about 20 times more oil in 2020 than it does now, and over six times more copper. Right now, there are only two cars for every 100 Chinese, compared with 50 cars for every 100 Americans. By 2040, it is estimated there will be 29 cars for every 100 Chinese.

The result of the race for resources is pressure on commodity prices. The pressure is not uniform, however: there has been an increasing shift towards services in emerging economies like India, and services depend on human resources more than material resources. The demand and rising prices have also led to the discovery of more resources and new ways to make use of existing ones, as well as more recycling and investigation of alternatives.

The race to increase production has also strained the environment; China has 16 of the world's 20 most air-polluted cities. Many think that there is no way for the emerging economies to ever reach the levels of consumerism found in the rich countries; it seems more likely that the levels of consumerism in rich countries will be forced to decline as competition for resources increases. The interesting question is how this will affect the quality of life overall: a humming world economy means legions of inventors and manufacturers coming up with new ideas that can change our fundamental ways of doing things. The future has its promises as well as its threats.

BACK_TO_TOP

[MON 01 JAN 07] ANOTHER MONTH

* ANOTHER MONTH: According to that gold mine of the true hard facts, THE ONION News, the legislature of the state of Kansas has now banned the practice of evolution within its borders. The sweeping new law prohibits all living beings in the state from being born with random genetic mutations that could make them better suited to evade predators, attract a mate, or adapt to environmental changes. In addition, it bars any sexual reproduction or battles for survival that might lead, after several generations, to a more well-adapted species or subspecies.

Violators of the new law may face punishments that include jail time, stiff fines, and rehabilitative education to help organisms suspected of evolutionary tendencies change their ways. Repeat offenders could face sterilization. To enforce the law, Kansas state police will be trained to investigate and apprehend organisms who exhibit suspected signs of evolutionary behavior. Roadside spot checks with DNA monitors will be performed to ensure compliance.

Says Dr. Robert Hellenbaum, a chemist from Indiana University who helped draft the new law: "Barn swallows that develop lighter, more streamlined builds to enable faster migration, for example, could live out the rest of their brief lives in prison. And butterflies who try to mimic the wing patterns and colors of other butterflies for an adaptive advantage -- well, they better think twice about trying to pull con games like that from now on."

Enforcement will strongly focus on single-cell microorganisms, notorious for their rapid evolutionary adaptation. Says Hellenbaum: "These repeat offenders are at the root of the problem." However, enforcement of the law will be even-handed, a state police spokesman saying: "No species is exempt. Whether you're a human being or a fruit fly, any practice of natural selection will result in prosecution to the full extent of the law."

BACK_TO_TOP
< PREV | NEXT > | INDEX | GOOGLE | UPDATES | EMAIL | $Donate? | HOME