* 23 entries including: Cold War (series), CRISPR revolution (series), digital intelligence against terror (series), last universal common ancestor revealed by genomics, STELR turbojet, difficulties with foreign aid, trees give bats hollow spaces to roost, maternal vaccines, new brain map, 2015 hottest year ever, NASA work on nuclear thermal engines, and Allen Institute maps the brain.
* NEWS COMMENTARY FOR AUGUST 2016: An article from BLOOMBERG BUSINESSWEEK ("Trump Has One Approach To Trade" by Brendan Greeley, 11 July 2016) zeroed in on the trade policies being advocated by Republican presidential hopeful Donald Trump. They're straightforward, as Trump tells it.
According to Trump, existing US trade policy causes "poverty and heartache". He believes the USA is being taken to the cleaners by America's foreign trade partners. Trump says: "I'm not against trade -- I just want to make better deals." He says he'll direct the Commerce Department to "identify every violation of trade agreements a foreign country is currently using to harm our workers."
Julia Gray, an assistant professor at the London School of Economics, heard Trump make that promise on TV, and could say only one thing in reply: "OOH! I want that data!" Gray says she doesn't have it -- because it doesn't exist. Violations are very hard to determine, even given the best data and most perceptive analysis: "It's really hard to figure out what's one country's subsidy and what's another country's closely guarded health and safety standard."
This is very much along the lines of Trump's insistence that the Chinese are manipulating the value of the yuan to short the US on trade; there's actually no articulated policy by the Chinese government to do so, with the value of the yuan more obviously affected by the fact that the Chinese save more of their money than Americans do.
As far as trade deals go, Gray says her studies show that politicians rarely follow through on campaign threats to renegotiate or abandon trade agreements. However, she does concede that many agreements end up being what she calls "zombies" -- meaningless because they do nothing to change business as usual, having no enforcement capability to do so. That suggests a more hard-nosed attitude toward trade might indeed benefit the USA, by getting foreign trading partners to abide by the terms of trade agreements.
Alas, it is not clear that the hard-nosed attitude actually does much good. A recent paper examined 15 years of WTO cases, and failed to detect any significant change in the trade status quo as a consequence of them. That is, if the US isn't selling widgets to China and claims that China is blocking imports of said widgets, a decision by the WTO in favor of the USA may not result in one more widget sold to China. Such litigation does not, the paper concludes, "increase the size of the total trade pie".
* Of course, Trump is hardly alone in his assault on free trade. Under pressure from the Left by Bernie Sanders, Democratic presidential hopeful Hillary Clinton has also come out against the TPP. The recent vote of British citizens to leave the European Union (EU) was driven by the widespread perception that the stipulations, particularly on free movement of peoples, that went along with membership in the European free trade bloc were too much to bear. Former London Mayor Boris Johnson -- now the foreign secretary in the new government of Prime Minister Theresa May -- proclaimed during the Brexit campaign: "The European Court of Justice is now taking decisions on absolutely every sphere of political life in this country."
Nobody ever accused Boris Johnson of British understatement. In reality, according to Christina Davis -- who teaches trade policy at the Woodrow Wilson School of Public & International Affairs at Princeton -- fair trade requires both rules and enforcement.
The US represents a free trade bloc unto itself, with the Federal government regulating commerce between the 50 states; provincial protectionism is much more apparent in Canada. Although fighting over "states' rights" is a long-standing American tradition, nobody seriously challenges the US national internal free trade arrangement; overt economic warfare between US states simply isn't on.
There is certainly an element of absurdity in Brexit. British leadership still wants to maintain Britain's membership in the EU common market, but would like to exempt the UK from the free movement of peoples, which was the driving force behind the Brexit vote. EU leaders are holding the line, German Chancellor Angela Merkel saying: "No cherry-picking." In short, Britain can't have cake and eat it too, access to the EU common market implying certain obligations.
Princeton's Davis says that's exactly the way it should be: "Even medieval Europe found that trading in large merchant fairs required the development of legal institutions. Simply making a good deal is not so easy without the accompanying rules, and bureaucrats to manage the rules."
* It should be noted that US President Barack Obama is continuing to push forward on the TPP, even though both presidential hopefuls have come out against it. It doesn't seem a good bet that Hillary Clinton will follow through on that campaign promise, since it's so easily reversed -- and she's got every good reason to stay with the TPP, no good reason to dump it.
Clinton can appoint a special commission of experts with instructions to make sure that American interests, including the various campaign hot buttons, are protected in TPP negotiations. The negotiations leading up to signing the treaty can then be presented as an exercise in hard-nosed deal-making, and a triumph for the administration. The reality, of course, is that there will be no more than cosmetic changes in the TPP -- the US always had its national interests as the bottom line in the deal, to be compromised only to the extent that equitable bargaining demanded it. The challenge will only be to show to the American public that the US has much more to gain than to lose.
It is not so clear, possibly not even to the central participants, that the negotiations over Brexit will play out in the same way. Prime Minister May is driving forward to get the best deal for the UK with the EU; once the deal is in place, she can then call an election. If the deal looks as if it leaves Britain in a clearly worse position, which is likely to be the case, Parliament can then decide to stay with the EU after all -- a decision that the government cannot make itself. A recent poll shows that about half of Britons believe that implementing Brexit will take at least a year, or may not happen at all.
* As for Trump, it becomes something of an amusement to keep a watch out for what outrageous thing he will say next. An article from THE ECONOMIST ("Defend Me Maybe", 30 July 2016) focused on a comment Trump made in an interview on 20 July, in which he said the USA, under his leadership, would not intervene against a Russian attack on the Baltic states if he felt they had not met their "obligations" -- even though the Baltics are NATO members, and the US is required by the NATO arrangement to come to their aid.
The remark was a cause for dismay for anyone who had a grasp of sensible foreign policy -- though it might be strongly suspected that it gave satisfaction to Russian leader Vladimir Putin. Russian hostility to the Baltics and Poland is an evident fact; Russian war games regularly involve simulated attacks on them, on occasion rehearsing the use of tactical nuclear weapons.
It is very difficult to believe that Russia would seriously contemplate using nuclear weapons, and so making oblique threats of their use is, instead, intended to sow "fear, uncertainty, & doubt". Putin does not want to re-conquer the Baltics or Poland; he simply understands the Clausewitz dictum that "war is an extension of politics by other means". Putin's inclination is towards the "hybrid warfare" -- disinformation, political subversion, cyber-attacks, and covert military operations -- that Russia used to destabilize Ukraine. If it worked in Ukraine, then why can't it work against NATO? Undermining the strength of NATO would give Russia a free hand in its "sphere of influence", and intimidate NATO states when disagreements come up.
As discussed here in the spring, NATO is beefing up its forces in the Baltics, while the US is re-aligning forces to back up the defense there. Putin's military strength is actually no match for the US and NATO's, and so there's no reason to tolerate being pushed around by Russia. As far as the Baltics meeting their "obligations" goes, not too surprisingly they're investing sums in defense -- and, as Estonian President Toomas Hendrik Ilves pointed out, while Americans have never fought at the request of Estonia, Estonians have fought at the request of America. Estonian troops were committed to combat in Afghanistan and Iraq, and took disproportionately high casualties there. Reality, however, has never been Donald Trump's strong suit.
* BACK TO THE BEGINNING: As discussed by an item from AAAS SCIENCE NOW Online ("Our Last Common Ancestor Inhaled Hydrogen From Underwater Volcanoes" by Robert F. Service, 25 July 2016), we may never be able to have more than plausible but, in the absence of a time machine, unprovable models of the origins of life on Earth. However, some of our earliest ancestors -- including the "microbial Eve" from which all modern cells descended -- did leave behind traces in the genes they passed to their descendants.
To find these shared genes, geneticists have surveyed nearly 2000 genomes of modern microbes. Researchers have now mined this genomic data to obtain insights into the "last universal common ancestor (LUCA)" of all life on Earth -- showing that the LUCA was a heat-loving microbe that fed on hydrogen gas and lived in a world with no free oxygen, reinforcing strong suspicions that life on Earth formed in and around hydrothermal vents, such as those found near undersea volcanoes.
Today, all life on Earth falls into three broad groups: bacteria, archaea, and eukaryotes. The first two are the prokaryotes, cells without a nucleus. Long ago, bacteria and archaea symbiotically teamed up to produce the eukaryotes, the cells with a nucleus that make up all multicellular organisms, including plants and animals.
There are universal features of cells that necessarily reflect the organization of the LUCA. The LUCA must have stored genetic information using DNA; it also built proteins and used adenosine triphosphate as its currency for energy. Finding out more details has been difficult, however. One problem is that microbes not only pass down their genes to successive generations, they also swap them among their neighbors in a process called "horizontal gene transfer". That makes it harder to trace down the genomic tree to its roots in the LUCA.
Researchers led by William Martin, an evolutionary biologist at Heinrich Heine University in Dusseldorf, Germany, took a new approach to see if they could sort out signals from noise. Instead of looking for genes shared by a single species of bacteria and archaea, they searched for those shared by at least two species of bacteria and two archaea. That gave them an initial count of some 6 million genes grouped into more than 286,000 related gene families. Further winnowing-down revealed that only 355 of these gene families were broadly distributed across all modern organisms, and so might be likely LUCA candidates.
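The winnowing described above is, at heart, a presence/absence filter over gene families. Here is a minimal sketch of that kind of filter in Python -- the family names and stand-in species below are invented for illustration, and the real analysis of course ran over roughly 2000 sequenced genomes:

```python
# Toy sketch of the winnowing step: keep only gene families present in
# at least two bacterial species AND at least two archaeal species.
# All names here are invented for illustration.

def shared_families(families, bacteria, archaea, min_each=2):
    """families: dict mapping family name -> set of species carrying it."""
    kept = {}
    for name, species in families.items():
        if (len(species & bacteria) >= min_each and
                len(species & archaea) >= min_each):
            kept[name] = species
    return kept

bacteria = {"E.coli", "B.subtilis", "Clostridium"}
archaea = {"Methanococcus", "Sulfolobus", "Haloferax"}

families = {
    "ATP_synthase": bacteria | archaea,    # universal: present everywhere
    "H2_uptake": {"Clostridium", "E.coli", "Methanococcus", "Sulfolobus"},
    "Bact_only": {"E.coli", "B.subtilis"}, # fails the archaea test
}

kept = shared_families(families, bacteria, archaea)
print(sorted(kept))   # → ['ATP_synthase', 'H2_uptake']
```

The "at least two species on each side" threshold is what suppresses the noise from horizontal gene transfer: a gene that merely hopped once between a bacterium and an archaeon won't show the broad distribution a genuine LUCA inheritance would.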
Martin and his colleagues report that these genes are not haphazardly distributed through the genomes of modern organisms, but fall into distinct groups that give substantial clues to the nature of the LUCA. Most significantly, they show that the LUCA was an "anaerobe", living in an environment without free oxygen. That fits with what scientists know about what the Earth was like 4 billion years ago, a period known as the "late heavy bombardment". Not long after the planet's formation, meteors and comets slammed repeatedly into the Earth, with the seas periodically boiled away by the impacts; the atmosphere lacked oxygen.
The researchers also determined that the LUCA was very probably a heat-loving "thermophile" that fed on hydrogen gas (H2), which also fits with current thinking. Today, many microbes produce H2 -- but since the LUCA preceded them, it could not have used them as a source of H2. The LUCA would have necessarily had to associate with a geological source of hydrogen, such as a hydrothermal vent like those found near an undersea volcano.
James Lake, an evolutionary biologist at the University of California in Los Angeles, calls the new work "remarkable" and "an important step forward". Lake also notes that the LUCA has much in common with two groups of modern microbes: clostridium, a genus of anaerobic bacteria, and methanogens, a group of methane-producing archaea that live off H2 -- both of which, in different ways, must reflect what the LUCA was like.
* CRISPR EXCITEMENT (2): By the beginning of 2015, CRISPR papers were among the most cited in the biosciences. There has been a rush of commercial activity and investment, with large pharmaceutical companies eyeing the technology for research. Patent fights have erupted.
The technique has been applied to dozens of species, including zebrafish (a popular "lab rat" for developmental biologists), yeast, fruit flies, rabbits, pigs, rats, mice, and macaques -- macaques being the first primates to be genetically engineered with the technique. It has been used to cure mouse versions of muscular dystrophy and a rare liver disease. Ways have been found to make the technique more reliable, more versatile, and less likely to make cuts where it is not supposed to; further improvements are on the way, not least at the startup companies based on the technology.
One of CRISPR's great attractions is that it can be used to introduce, or remove, a number of different genes at a time. Most genetic disorders are not caused by just one gene going wrong; being able to manipulate many different genes in a plant or animal cell line opens new avenues for the study of conditions such as diabetes, heart disease, and autism, where a number of genes are involved, along with the environment. In the past a mouse with as few as three genes knocked out would have taken as many years to create; now it can be done in three weeks.
CRISPR is also letting researchers get more out of other technological breakthroughs -- notably the ability to make stem cells which can then be turned into the cells typical of any sort of tissue. George Church at Harvard is using CRISPR to edit the genomes of stem cells before turning them into nerve cells, so as to find the mechanisms behind a range of neurological disorders. Feng Zhang, a scientist at the nearby Broad Institute, has been using CRISPR to model Angelman syndrome, a neurological disorder.
Previous genetic-engineering technologies have tended to be species-specific; there have been lots of tools for manipulating E. coli and yeast, but they have often not been broadly applicable. This is another area where CRISPR excels; it can be used in organisms that have resisted previous attempts at engineering, permitting the development of new crop plants. Another biotech application is the use of CRISPR to build a "kill switch" which allows any genetic modifications made to bacteria to be removed after they have been used, either for safety or to protect intellectual property.
One particularly impressive, and worrying, application is "gene drive" -- the creation of genes that can spread themselves quickly through a population. In sexual reproduction, there is a balance between the "alleles", variants of the same gene, of two parents, with the allele effectively selected at random. Under gene drive, one allele is "selfish", ensuring that it is selected, no matter what. Such a technology might, advocates say, be used to make the mosquitoes that carry malaria, or dengue fever, unable to spread the organisms responsible for causing the disease.
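The asymmetry that makes a gene drive spread can be illustrated with a toy simulation: under ordinary Mendelian inheritance a heterozygote passes on either allele with even odds, while under drive the selfish allele is transmitted almost every time. The 95% transmission rate, population size, and starting frequency below are arbitrary assumptions for illustration, not figures from the article:

```python
import random

# Toy model: track the frequency of one allele across generations.
# A homozygote always transmits the allele; a heterozygote transmits it
# with probability 0.5 (Mendelian) or 0.95 (gene drive).

def next_generation(freq, pop, drive):
    """Return the allele frequency in the next generation."""
    transmit = 0.95 if drive else 0.5
    count = 0
    for _ in range(pop):
        a = random.random() < freq    # allele from parent 1
        b = random.random() < freq    # allele from parent 2
        if a and b:
            count += 1                # homozygote: always transmitted
        elif a or b:
            count += random.random() < transmit
    return count / pop

random.seed(1)
f_mendel = f_drive = 0.05             # allele starts rare: 5%
for generation in range(10):
    f_mendel = next_generation(f_mendel, 10_000, drive=False)
    f_drive = next_generation(f_drive, 10_000, drive=True)

print(f"after 10 generations: Mendelian ~{f_mendel:.2f}, drive ~{f_drive:.2f}")
```

Run it and the Mendelian allele just drifts around its starting 5%, while the drive allele sweeps to near-fixation within ten generations -- which is exactly why the technology is both impressive and worrying.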
Zhang says CRISPR has enormous potential for treating previously intractable diseases. For example, genome editing may make it possible to eliminate viral infections within the body, creating entirely new antiviral treatments. He also speculates that it might be possible to make red meat that is less harmful, or to engineer pig organs so that they could be transplanted into humans with much less risk of rejection. Church, for his part, has more controversially speculated about using gene editing to turn elephants into mammoths -- or to recreate Neanderthals. [TO BE CONTINUED]
* THE COLD WAR (125): Eisenhower continued to be distracted by election-year politics. Of course, the president backed Nixon in the race, but Eisenhower was an administrator, not really a politician. The first political office he had ever run for was the presidency, and neither of the two elections he had won had been close-run things. Although a Republican, Eisenhower had never really had much fondness for party politics -- while Nixon was a strong party man, as well as demonstrably aggressive where Eisenhower was not. There was a certain inevitable disconnect between the two men.
The news media tended to play up the disconnect. On 24 August, during a press conference, reporters hounded Eisenhower to tell them what administration decisions Nixon had played a prominent part in. Earlier, in the face of accusations that John Foster Dulles acted independently, Eisenhower had replied that Dulles worked closely with the president, and had never made decisions against the presidential will. In the same fashion, Eisenhower said that Nixon had always been involved in discussions of national policy, but that the final decision was the president's.
The reporters were not satisfied, and continued to press him to give an example of a presidential decision where Nixon had played a prominent role. Eisenhower, clearly irritated, finally snapped back: "If you give me a week, I might think of one. I don't remember."
A reporter replied: "Thank you, Mr. President." -- to end the press conference. Eisenhower, to his mortification, quickly realized he'd badly mis-spoken, and called Nixon to apologize. It was too late to do anything else; the news media spread the comment far and wide.
* As if to add to Eisenhower's frustrations, Khrushchev decided to come to New York City to set up shop at the United Nations for a time. The premier arrived in New York harbor on the liner BALTIKA -- a German vessel seized by the Soviets as war reparations, having been previously named the VYACHESLAV MOLOTOV, until Molotov had been unceremoniously sent away from Moscow -- on 19 September. The vessel had been painted up to make a good entrance, but unsurprisingly Khrushchev received no official welcome. He was, however, greeted by members of the International Longshoremen's Association, carrying placards such as:
ROSES ARE RED
VIOLETS ARE BLUE
STALIN DROPPED DEAD
HOW ABOUT YOU?
Khrushchev believed the US government had orchestrated the demonstration. The longshoremen, of course, were New Yorkers, and hardly needed the authorities to tell them to be rude and assertive. The longshoremen also refused to handle the BALTIKA, forcing the crew and passengers to offload the vessel themselves. PRAVDA reported that the weather was sunny and the premier was greeted by enthusiastic crowds -- but it was actually raining, and to the extent there was a crowd, it wasn't at all a friendly one.
Khrushchev spent 26 days in New York City, which was a remarkable length of time to be away from the Kremlin. He addressed the UN General Assembly; he was inclined to the disruptive when listening to the addresses of others, interrupting a speech by UN Secretary General Dag Hammarskjoeld by pounding his fists on his desk. The incident was caught on video, showing Foreign Minister Gromyko appearing a bit bewildered, then joining in with a nervous smile, with the rest of the Soviet delegation following.
High among Khrushchev's annoyances with Hammarskjoeld were the Secretary General's efforts to use the UN to restore order in the Congo, the premier having told an aide that Hammarskjoeld was "sticking his nose in important affairs that are none of his business", and promised to "make it really hot" for the Secretary General. Khrushchev proposed that the Secretary General position be abolished, to be replaced by a "troika" consisting of one representative from a Western country, one from a communist country, and one from a neutral country. There was little enthusiasm for the idea, not even much among the Soviet delegation, the general consensus being it was totally unrealistic.
When Harold Macmillan said in an address to the General Assembly that he regretted the failure of the Paris summit, Khrushchev got up and threw a tantrum; Macmillan asked Frederick Boland of Ireland, the assembly president: "I'd like that translated, if I may." When the Spanish delegation didn't applaud one of his addresses, Khrushchev almost seemed ready to come to blows with a Spanish diplomat, only breaking off the confrontation when UN and Soviet security men intervened. Towards the end of his stay, he was said to have actually taken off one of his shoes and pounded it on his desk to disrupt a session; it was in character with much else he did during his sojourn at the UN -- but though there's no dispute that something happened, there is dispute on when it happened, or exactly what happened. A faked photo of the premier with a shoe in his hand still circulates. [TO BE CONTINUED]
* WINGS & WEAPONS: The US Navy has been casting about for a number of years for a replacement for the Northrop Grumman C-2 Greyhound carrier onboard delivery (COD) aircraft -- that mission involving transfers of personnel and cargo back and forth from carrier task groups at sea.
The Navy has now settled on a derivation of the US Marines' Bell-Boeing MV-22B Osprey tilt-rotor transport, the COD version to be designated the "CMV-22B". The CMV-22B will feature more fuel tankage to extend range from 1,590 kilometers (990 miles / 860 NMI) to 2,125 kilometers (1,320 miles / 1,150 NMI); it will also feature a beyond-line-of-sight communications system and a public address system to stay in touch with the carrier group. A total of 44 will be built, with initial deliveries planned for 2020.
* Britain's "Selective Precision Effects At Range (SPEAR)" guided-munitions program was discussed here in 2014 -- that article focusing on the "SPEAR 3" weapon, a miniature cruise missile based on the British Brimstone anti-armor missile, itself a re-engineered derivative of the US Hellfire anti-armor missile.
While future weapons programs tend to come and go, SPEAR 3 looks to be on track, the British government having now awarded MBDA a contract to perform full development. SPEAR 3 is projected to go into service in the mid-2020s on British Lockheed Martin F-35 strike fighters. MBDA officials believe that SPEAR 3 has export potential, and could be adopted by other F-35 operators.
* I was poking around online concerning the "glide torpedoes", used by the US late in the Pacific War; they were air-dropped torpedoes attached to a glider airframe, originally developed for glide bombs, to give the torpedoes greater stand-off attack distance.
Much to my surprise, it turns out the glide torpedo is making a comeback, a 2013 note mentioning the "High-Altitude Anti-Submarine Warfare Weapon Capability (HAAWC)", an anti-submarine homing torpedo being developed by Boeing, fitted to a switchblade airframe derived from the one Boeing developed for the Small Diameter Bomb GPS-guided weapon. The HAAWC will allow the Navy P-8 Poseidon, and other anti-submarine-warfare aircraft, to perform attacks on adversary submarines from high altitude and long stand-off distance.
* STELR TURBOJET: Traditionally, the only jet engine that could propel missiles and such to Mach 3 and above was the ramjet, which amounts to little more than a "stovepipe" -- effective enough at high speeds, but not so much at low speeds, and requiring a rocket or other booster to get up to operating speed. As discussed by a note from AVIATION WEEK Online ("Turbine Engine Could Pave Way For Supersonic Cruise Missiles" by Guy Norris and Graham Warwick, 23 September 2015), the US is now conducting research into high-speed turbine engines.
The effort is being performed by the Rolls-Royce Liberty Works and Williams International under the US Air Force Research Laboratory's (AFRL) "Supersonic Turbine Engine for Long-Range (STELR)" program. A follow-on to the joint AFRL and US Defense Advanced Research Projects Agency (DARPA) "High-Speed Turbine Engine Demonstration (HISTED)" program, STELR is focused on the development of Mach 3-plus weapons and vehicles. These include long-range stand-off missiles, air-launched cruise missiles, and high-speed drones capable of sustaining flight at maximum Mach number for an hour, with a range of at least 3,200 kilometers (2,000 miles).
According to John Kusnierek, an official at the Rolls Liberty Works unit, Rolls-Royce's STELR engine has already operated for "more than two hours at Mach 2 to 2.5, and will run up to Mach 3.2 in the next few months." Although Rolls has applied lessons learned on its HISTED engine, the "YJ102R", to the STELR project, Kusnierek says the STELR item "is not the same engine."
A mock-up has been publicly displayed. The engine is similar in size to the YJ102R, which was earmarked for the canceled Lockheed Martin "Revolutionary Approach to Time-critical Long-Range Strike (RATLRS)" missile flight demonstrator. Just like the YJ102R, the STELR engine is non-afterburning, providing longer range at supersonic speed. While designed for expendable weapons, the STELR design could be enhanced for re-usability. It also may provide clues for the development of engines for hypersonic vehicles.
STELR is one of three active Air Force and National Aeronautics & Space Administration (NASA) high-speed propulsion efforts underway to support development of re-usable "turbine-based combined-cycle (TBCC)" engines. In these propulsion systems, a turbine engine would provide the power from takeoff to Mach 4, with a ramjet/supersonic-combustion ramjet (scramjet) taking over for higher-speed flight.
The second effort is the AFRL's "Medium-Scale Critical Components (MSCC)" program, which is exploring first-generation, larger-scale scramjet engine characteristics, following up those of the pioneering X-51A hypersonic demonstrator, which last flew in 2013. MSCC is designed to evaluate engines with ten times the airflow rate, performance, operability, and thermal management capabilities of the X-51. The Air Force's Aerodynamic and Propulsion Test Unit at the Arnold Engineering Development Center in Tennessee has been modified to conduct the first direct-connect tests of these larger scramjet engines, and calibration testing began in July. Combustor testing will begin at the site in February 2016.
In the third effort NASA, supported by funding from AFRL and DARPA, has been performing aerodynamic tests of a large-model TBCC under the "Combined-Cycle Large-scale Inlet Mode transition (CCE-LIMX)" program. Conducted in the 3 x 3 meter (10 x 10 foot) wind tunnel at NASA Glenn Research Center in Ohio, the test unit consists of a high-Mach turbine simulator or engine, paired with a scramjet simulator. A modified Williams WJ38-15 turbojet, similar in size to the company's XTE88 HISTED engine, was made available for the tests, though it was limited to Mach 3. Flow to the engines, depending on the operating speed and mode, is controlled via a set of low-speed and high-speed ramps and flowpaths.
STELR is also one of the propulsion options included in a NASA-funded Lockheed Martin study in support of the proposed "SR-72" hypersonic reconnaissance-strike aircraft. The study has been looking into the viability of a TBCC propulsion system with several combinations of "near-term turbine engine solutions" and a very-low-Mach-ignition dual-mode ramjet -- in which the ramjet would take over at Mach 2.5 at most.
* FOREIGN AID RECONSIDERED: As discussed by an article from THE ECONOMIST ("Foreign Aid Is A Shambles In Almost Every Way", 8 June 2016), while the notion of providing assistance to poor countries is an appealing one, foreign aid has a long history of failure.
Take Malawi, once a favorite of international assistance. The country is dirt poor and ravaged by AIDS. With a population of just 17 million, it seemed that even modest assistance might go a long way. The country is also more-or-less democratic, with President Joyce Banda being well-regarded by the leadership in the UK and the US. In 2012 Western countries pumped $1.17 billion USD into it, with foreign aid accounting for 28% of gross national income.
In 2013, corrupt officials, businessmen, and politicians looted at least $30 million USD from the Malawian treasury in just six months. A bureaucrat investigating the thievery was shot three times; he survived, just barely. Malawi still gets a lot of foreign aid -- $930 million in 2014 -- but donors try to make sure the money doesn't go through the government's hands.
Foreign aid has had its successes. It did much to raise South Korea and Taiwan to prosperity; helped wipe out smallpox in the 1970s; and has almost eliminated polio. However, only too often it enriches crooks. Aid can also burden weak bureaucracies, distort markets, prop up dictators, and help prolong internal conflicts. Aid donors have become paranoid, seeking the right formulas to provide help, without getting ripped off.
A decade ago governments rich and poor set out to define good aid, declaring a set of principles for providing it.
These are high-minded but sound principles. However, they are not being achieved. Aid is as coordinated as a demolition derby; much goes neither to poor people nor to well-run countries. Donors tend to work at cross purposes, and do not plan out their aid programs very well. How could so many smart, well-intentioned people make such a bad job of things?
Official development aid, which includes grants, loans, technical advice and debt forgiveness, is worth about $130 billion USD a year. The channels originating in Berlin, London, Paris, Tokyo and Washington DC are deep rivers; others are mere creeks, though the Nordic countries are generous for their size. More than two-fifths flows through multilateral organizations such as the World Bank, the UN, and the Global Fund. In 2015, 9% was spent on refugees in donor countries, reflecting the surge of migrants to Europe.
The flow of aid is very uneven. India is home to some 275 million people living on less than $1.90 USD a day. In 2014 it got $4.8 billion USD in "country programmable aid", the most routine kind -- about $17 USD per poor person. Vietnam also got $4.8 billion USD; but, because it is much smaller and proportionally better off, that works out to $1,658 USD per poor person. By that measure, South-East Asia and South America are doing especially well.
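The per-poor-person figures are simple division; here's a quick sanity check of the arithmetic using the article's numbers (the implied count of poor Vietnamese is back-calculated here, not given in the article):

```python
# Sanity check of the article's per-poor-person aid figures.
india_aid = 4.8e9      # USD, country programmable aid, 2014
india_poor = 275e6     # people living on less than $1.90/day
print(round(india_aid / india_poor))   # → 17 (USD per poor person)

vietnam_aid = 4.8e9    # USD, same headline amount as India
per_person = 1658      # USD per poor person, per the article
# Implied poor population in Vietnam, in millions:
print(round(vietnam_aid / per_person / 1e6, 1))   # → 2.9
```

So the same $4.8 billion USD spread over roughly 3 million poor Vietnamese, versus 275 million poor Indians, is the entire source of the hundredfold disparity.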
Western countries have mostly abandoned the Cold War-era habit of funneling aid to friendly regimes and former colonies; nonetheless, aid is still used more-or-less explicitly as a tool of foreign policy. Today, the enemy is not the Reds, but radical Islam. Afghanistan, Egypt, Jordan, Syria, and Turkey each got more net aid than Bangladesh in 2014, although none contains nearly as many poor people.
Donors do try to use aid to promote democratic reforms in the governments of recipient countries; donors also try to punish corruption and backsliding, as in Malawi, and have achieved some success in cleaning things up. However, much more money goes to strategically important states that are not always well-governed. Net foreign aid to Turkey, an increasingly autocratic country that is not poor, rose more than tenfold between 2004 and 2014, to $3.4 billion USD. On the other side of that coin, countries that do achieve stable democracies are then penalized by reduced donations, since they don't seem to need as much help any more. US aid to Peru followed that pattern.
The international flow of aid has become more complicated:
A decade ago, the cure for fragmentation was seen as providing aid to recipient governments with few strings attached, allowing the recipients to do what they pleased with it. This failed, not only due to corruption in the recipient countries, but because donor countries were then held publicly accountable for whatever misguided policies the recipient countries chose to enact. Aid, in short, is a sorry mess; the only good thing that can be said about it now is that it is generally recognized to be a mess.
* CRISPR EXCITEMENT (1): As discussed by an article from THE ECONOMIST ("The Age Of The Red Pen", 22 August 2015), from the time genetic engineers began to piece together their toolkits in the 1970s, one of their primary objectives has been to repair faulty genes. One in every ten-thousand or so children is born with gene defects -- some that leave them impaired, some that kill them in childhood.
The first clinical attempts at "gene therapy" began in the 1990s, with viruses used to insert genes into cells that lacked them. It didn't go well. The new genes could not be guaranteed to slot into the right place in the genome; that often meant they did not work as desired, and it also meant there was a risk that, by disrupting other genes, they could cause cancer. There were indeed cancers in some early trials, and there was notoriously a case in which a patient died of a lethal immune reaction to the virus used to carry the gene.
That reset, but did not stop, work on gene therapy, with researchers canvassing nature for ideas. Some years back, biologists discovered an odd feature in the genomes of some bacteria that was named "clustered, regularly interspaced short palindromic repeats (CRISPR)". Bacteria use them to make little bits of RNA, the molecule that mirrors genomic sequences made up of DNA for protein production and other tasks. A CRISPR RNA will bind to a piece of DNA that has a "complementary" sequence. A protein called "Cas9" -- an enzyme, working like a pair of molecular scissors -- recognizes the structure made when a CRISPR RNA binds to a piece of DNA, and reacts by cutting through the DNA at precisely that point.
Bacteria make CRISPR RNAs that recognize the DNA of viruses that infect them, marking that DNA for destruction by Cas9, and so protecting the bacteria from infection -- they're a component of the bacterial immune system. Scientists can make RNAs that target any sequence they want. And because of the way that cells repair broken DNA, if they put a new gene into a cell along with the CRISPR-Cas9 system, they can get that new gene to replace an old one. The effect of CRISPR-Cas9 is to give scientists something that works like the find-&-replace function on a word processor.
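The find-&-replace analogy can be sketched in code -- strictly a string-matching cartoon, with all the actual biochemistry stripped out, and with made-up toy sequences:

```python
# Toy illustration of the CRISPR-Cas9 "find & replace" idea: a guide
# sequence locates its complementary stretch of DNA, the DNA is "cut"
# there, and a new sequence is spliced in. Purely an analogy -- real
# Cas9 chemistry and cellular DNA repair are far more involved.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def complement(seq: str) -> str:
    """Return the base-pair complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)

def crispr_edit(genome: str, guide: str, replacement: str) -> str:
    """Find the site the guide binds (its complement), cut there, and
    splice in the replacement -- a cartoon of homology-directed repair."""
    target = complement(guide)        # the DNA stretch the guide RNA binds
    cut = genome.find(target)
    if cut < 0:
        return genome                 # no matching site: nothing edited
    return genome[:cut] + replacement + genome[cut + len(target):]

genome = "TTACGGATCCGTAA"
guide = complement("GGATCC")          # a guide that binds the GGATCC site
print(crispr_edit(genome, guide, "AAAAAA"))   # -> TTACAAAAAAGTAA
```

The point of the analogy is that the guide RNA is cheap to synthesize for any target, which is why CRISPR is so much easier than older editing techniques.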
When Jennifer Doudna -- of the University of California, Berkeley -- and Emmanuelle Charpentier -- who is now at the Helmholtz Centre for Infection Research in Germany -- and colleagues worked out how to turn the bacterial CRISPR system into a genome editor in 2012, there were already two other techniques for making specific and precise changes to genomes. However, CRISPR is far easier to use. Matthew Porteus, a pioneer in gene editing at Stanford University, says research that required a sophisticated molecular biology lab three years ago can now be done by a high-school student.
Because it's so simple and easy to use, CRISPR has generated huge excitement in the domains of molecular biology, medical research, commercial biotechnology -- and gene therapy. To date, gene therapies have been designed to fix cells such as those of the blood, or the retina, or the pancreas. CRISPR makes it possible to think about altering the genomes of gametes -- sperm and eggs -- or the genome of a fertilized embryo awaiting implantation in the womb. CRISPR would, in effect, banish a genetic defect from the next generation, and from all generations after that.
Such "germ-line" editing is widely seen as ethically troublesome. Some scientists and research organizations want a moratorium on any work aimed at engineering the germ line; others say basic research on such things should continue, but any moves to use it in the clinic should at the very least be widely debated by society as a whole. The US National Academies of Science convened a gathering last December to look at the options. [TO BE CONTINUED]
* THE COLD WAR (124): On 18 August, along with the briefing on Cuba, Eisenhower was briefed on events in the Congo. Following elections in May, Belgium had granted independence to the country at the end of June, which promptly led to a rebellion -- one leader, Moise Tshombe, taking control of the mineral-rich province of Katanga, declaring its independence, and striking a deal with the Belgians for exploitation of its resources.
The United Nations was getting involved in the effort to restore order; the US was entirely in favor of the UN effort, but not so happy about tales that the Soviets were funneling in military assistance. Khrushchev was busy making the usual threats, protesting UN involvement; the Leftist Congolese prime minister, Patrice Lumumba, was also telling the UN to stay out. Allen Dulles believed Lumumba was on the Soviet payroll.
Eisenhower saw Lumumba as another Castro or worse. A week later, in a meeting of the 5412 Committee, the president expressed his concerns about Lumumba, and suggested that actions be taken to deal with the problem. What sort of actions? The record, possibly by deliberate intent, does not say; and witnesses questioned in Congressional investigations some years later were unwilling or unable to say much more. Whatever the CIA might have planned to do, whatever Eisenhower told the agency to do, events would render it unnecessary, and ensure that later investigations into the agency's actions against Lumumba would be a literal dead end.
What was more significant in this White House debate was that it marked the start of American involvement, in pursuit of a Cold War agenda, in Africa. During the previous decade, country after country there had become independent, often with chaotic results. The US and the USSR were being drawn into the turmoil; neither they, nor the peoples of Africa, would benefit as a result.
Another crisis in a backwater country was of more concern to the Eisenhower Administration at the time. The Kingdom of Laos had achieved formal independence in 1953, with King Sisavang Vong at the head of a constitutional monarchy. The Americans backed the official government, but it was weak and ineffectual, under threat from communist Pathet Lao insurgents, as well as North Vietnamese meddling. Prince Souvanna Phouma attempted to establish a coalition government in 1957 that included the Pathet Lao -- led by his half-brother, Prince Souphanouvong -- but the arrangement quickly collapsed. Conditions, not good to begin with, deteriorated rapidly. The North Vietnamese came into Laos in force in the spring of 1959, with the objective of establishing a road link into South Vietnam -- what would become known as the "Ho Chi Minh Trail".
General Phoumi Nosavan, an anti-communist royalist, staged a coup on Christmas Day 1959 -- but on 10 August 1960, Captain Kong Le of the Royal Laotian Army overthrew Phoumi in turn, with Kong Le declaring a neutralist course, and working towards an alliance with the Pathet Lao. Not much attention had been paid to Laos by the rest of the world in the preceding years, but Kong Le's coup attracted global headlines. The Americans suspected he was another Castro, intending to paint Laos Red; the CIA worked against him from behind the scenes, supporting General Phoumi, with the Soviets backing Kong Le's government. [TO BE CONTINUED]
* Space launches for July included:
-- 07 JUL 16 / SOYUZ ISS 47S (ISS) -- A Soyuz-Fregat booster was launched from Baikonur at 0136 UTC (local time - 6) to put the "Soyuz ISS 47S" AKA "MS-01" crewed space capsule into orbit on an International Space Station (ISS) support mission. The all-rookie crew included Soyuz commander Anatoly Ivanishin of the RKA (first space flight), flight engineer Takuya Onishi of JAXA (first space flight), and astronaut Kate Rubins of NASA (first space flight).
The Soyuz capsule docked with the Rassvet module just after midnight UTC on Saturday, 9 July. The Soyuz crew was greeted by the ISS Expedition 48 crew of commander Jeffrey Williams, plus cosmonauts Alexey Ovchinin and Oleg Skripochka. Soyuz MS-01 was the first "Modernized Systems" Soyuz capsule, featuring:
Many of the MS upgrades were first installed and tested on the two launches of Progress supply capsules preceding this Soyuz launch. The traditional two-day Soyuz ascent was used instead of the current six-hour ascent to give time to shake down the new systems.
-- 16 JUL 16 / PROGRESS 64P (ISS) -- A Soyuz-U booster was launched from Baikonur at 2141 UTC (next day local time - 6) to put the "Progress MS-03" AKA "Progress 64P" tanker-freighter spacecraft into orbit on an International Space Station (ISS) supply mission. It docked with the ISS Pirs module two days after launch. It was the 64th Progress mission to the ISS.
-- 18 JUL 16 / SPACEX DRAGON CRS 9 -- A SpaceX Falcon 9 FT booster was launched from Cape Canaveral at 0445 UTC (local time + 4), carrying the ninth operational "Dragon" cargo capsule to the International Space Station (ISS). The Falcon 9 first stage performed a successful soft landing at Cape Canaveral. The Dragon capsule docked with the station's Harmony module two days later.
-- 28 JUL 16 / NROL-61 (USA 269) -- An Atlas 5 booster was launched from Cape Canaveral at 1237 UTC (local time + 4) to put a secret military payload into space for the US National Reconnaissance Office (NRO). The payload was designated "NROL-61". It was believed to be a data-relay satellite for low-Earth orbit surveillance spacecraft; it was placed in an inclined geostationary orbit. The booster was in the "421" configuration, with a 4-meter (13.1-foot) diameter payload fairing, two solid-rocket boosters, and a single Centaur engine on the upper stage.
* OTHER SPACE NEWS: As discussed by an item from NATURE.com ("Troubled Japanese Space Agency Seeks Fresh Start" by Alexandra Witze, 29 July 2016), the Japan Aerospace Exploration Agency (JAXA) had high hopes for its ASTRO-H / Hitomi X-ray astronomy satellite when it was launched in February -- the mission being discussed here at the time -- but hope collapsed into bitter disappointment when Hitomi suffered a catastrophic failure during its orbital checkout. Investigation showed the space platform had been the victim of a software glitch that threw it into an uncontrolled spin, with the spin finally ripping off the solar panels.
JAXA is considering whether to rebuild and relaunch a copy of Hitomi's primary payload, a US-built X-ray spectrometer, with help from NASA. On 5 August, representatives of the two space agencies will meet to discuss the possibility of resurrecting the instrument that was the heart of Hitomi's science. But whether JAXA can regain the confidence of the Japanese nation, and of its international partners, remains to be seen.
JAXA has pulled off surprising recoveries before. Agency engineers managed to coax the crippled Hayabusa spacecraft to bring dust from an asteroid back to Earth, and nudged its Akatsuki probe into orbit around Venus five years after an engine failure seemed to have derailed the mission. There is no recovering Hitomi -- it went to pieces -- but the Japanese are inclined to persistence, and are now trying to regain the lost ground. There has been a managerial house-cleaning, and a replacement mission is being developed by a small in-house team, along with the spacecraft manufacturer. The new mission is simpler than Hitomi, which is now perceived as having been too complicated to be workable.
JAXA officials were forthright about the failure of Hitomi, releasing detailed information on the failure to the public, which has done much to restore the agency's credibility. Nonetheless, selling a new big-ticket space science mission will be tricky in today's funding environment. The instrument suite for the new mission is still being considered, but the space platform will certainly carry the NASA-built spectrometer lost on Hitomi. Incidentally, effectively the same spectrometer was lost on two previous missions. NASA appears willing to provide another spectrometer, at a cost of $70 million to $90 million USD.
* AVAILABLE SPACE: As discussed by a note from THE ECONOMIST, ("Hollow Trees That Host Bats Benefit From Free Fertiliser", 27 June 2015), nobody finds the fact that trees may be hollow very surprising or interesting. Usually, the belief is that a hollow tree is dying -- but lots of trees in tropical forests remain alive long after their cores have rotted away. They're perfectly common.
Christian Voigt of the Leibniz Institute for Zoo & Wildlife Research in Berlin got to wondering if there was some evolutionary advantage for trees going hollow, and decided to see if that were the case. He started with three facts:
Were the trees providing homes for bats that then fertilized the tree? The idea was very suggestive -- but to check out that notion, Voigt went to Costa Rica and set up shop in La Selva, a biological reserve in the country's north. To validate his theory, he relied on a fourth fact: that nitrogen comes in two isotopic forms, the heavier of which, 15N, is more common in bat guano than in forest soils because of the way bats process nitrogenous compounds in their bodies.
He and his research team monitored ten artificial bat roosts scattered around the reserve. They collected soil from within these roosts and compared the nitrogen in it with that of soil collected ten meters (33 feet) from each roost. Nitrogen in the roost soil, they found, was 8.8% 15N, while the soil ten meters away from the roost was only 2.1% 15N.
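The logic behind the measurement is a simple two-source mixing calculation: if roost soil is a blend of background forest soil and guano-derived nitrogen, the measured 15N level tells you roughly how much of the nitrogen came from the bats. A back-of-envelope sketch, using the roost figures above and an invented guano end-member value purely for illustration:

```python
# Two-source isotope mixing: a sample's 15N level falls between the
# background level and the guano level, in proportion to how much of
# its nitrogen is guano-derived. The guano end-member below is a
# made-up placeholder, not a figure from the study.

def guano_fraction(sample: float, background: float, guano: float) -> float:
    """Fraction of nitrogen derived from guano, by linear mixing."""
    return (sample - background) / (guano - background)

roost_soil = 8.8    # % 15N measured inside the artificial roosts
forest_soil = 2.1   # % 15N ten meters away (background)
pure_guano = 12.0   # assumed % 15N of pure guano (hypothetical)

frac = guano_fraction(roost_soil, forest_soil, pure_guano)
print(f"roughly {frac:.0%} of roost-soil nitrogen is guano-derived")
```

With those (partly invented) numbers, about two-thirds of the nitrogen under a roost would trace back to the bats; the real study's inference ran along the same lines.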
Next, the researchers inspected natural roosts in hollow trees of a species named Dipteryx panamensis. They found 7.7% 15N inside the hollows, and 5.2% ten meters away. That is a smaller contrast than at the artificial roosts; the suggestion was that the trees were absorbing the bat guano, and distributing the 15N around their immediate environment.
To confirm that idea, the researchers analyzed the nitrogen content of seeds from the hollow Dipteryx panamensis specimens looked at in the study and compared it with that from seeds of uncolonized trees of similar size. The seeds from the colonized trees had more 15N in them and, moreover, the size of the surplus was correlated with the number of bats in the colony. However, the enhancement of the 15N was dependent on the species of bat, particularly Desmodus rotundus -- better known as the vampire bat. Why that is so remains to be understood.
Incidentally, the article stated: "It would ... be no surprise if hollowness is an adaptive trait rather than a useful accident." That seems a naive statement, the question in response being: "So what's the difference?"
* In other news from evolutionary science, the serpentine columbine plant is often threatened by the larvae of Heliothis phloxiphaga moths, which eat the plant's buds, flowers, and fruits. The plant has acquired an indirect way of fighting back, emitting a chemical signal that attracts dragonflies, beetles, and other insects. The plant has a sticky, hairy surface that traps the insects; they die, leaving the columbine coated with their corpses. The corpses attract spiders that can avoid getting stuck, with the spiders devouring moth larvae as well. Nobody can recall any other plant with a similar defense mechanism, but it seems likely more will be found.
* MATERNAL VACCINES: As discussed by an article from BLOOMBERG BUSINESSWEEK ("Vaccine Makers Target Pregnant Women", 18 July 2016), vaccine manufacturers are now interested in a new market: expectant mothers. There's long been a clear need for vaccines for mothers and the infants in the womb, but manufacturers have traditionally steered clear. According to Carol Baker of Baylor College of Medicine, who has studied a bacterium named "group B streptococcus" that can cause meningitis in infants: "It took me a while to figure out what the problem was. The problem was the word 'pregnancy'."
Now Big Pharma companies such as GlaxoSmithKline, Novavax, and Pfizer see baby-protecting vaccines for expectant mothers as potentially big business. They're working on inoculations against group B strep and "respiratory syncytial virus (RSV)", which infects the lungs and breathing passages of infants. The expectation is that maternal vaccines could become routine, with a market as big as that for pediatric vaccines.
The industry became more interested in maternal vaccines after the 2009 swine flu pandemic, when public-health authorities urged widespread immunization of pregnant women, and during later whooping-cough outbreaks, which saw upticks in expectant mothers getting tetanus-diphtheria-pertussis shots. According to Anne Schuchat, a senior official at the US Centers for Disease Control, the vaccinations proved highly effective: "We really had a sea change in the US in terms of pregnant women getting the flu vaccine."
Maternal vaccinations work because a pregnant woman is able to pass antibodies on to the fetus, offering newborns protection before they're old enough to be given pediatric vaccines. According to Ruth Karron, director of the Center for Immunization Research at Johns Hopkins University Bloomberg School of Public Health, there's been discussion of maternal RSV vaccines for decades, but Big Pharma was leery: "The companies had some concerns about the use of maternal vaccines in a litigious society."
In the 1960s, the US National Institutes of Health developed an RSV vaccine for young children that not only failed to protect them, but made them more susceptible to the disease; two children died. Karron says: "For a long time after that happened, there was no work on RSV vaccines. It was just completely set aside."
Today, vaccine makers are more confident they can produce safe vaccines, while public resistance to vaccines appears to be on the retreat -- though it's far from extinct. There's an obvious need for maternal vaccines. While the US Food and Drug Administration has never approved a vaccine specifically for safeguarding babies before birth, the agency is working to overcome perceptions regulators aren't supportive of manufacturers working on such inoculations. Marion Gruber, director of the FDA Office of Vaccines Research and Review, says: "We are open to discussing alternative trial designs and alternative endpoints."
Nonetheless, trials take time; there's no way around that. Glaxo is developing vaccines for RSV and group B strep. The company will start testing the RSV vaccine in 2016; it's modifying a group B strep vaccine it got through a deal with rival Novartis in 2015. It will take six to nine years to conduct trials and submit the vaccines to the FDA for approval.
Testing for the group B strep vaccine is likely to be particularly troublesome. It's not such a problem in the US; the bacteria can live in the birth canal and infect a baby as it's delivered, but American women are often screened for it and administered antibiotics if they're at risk. In poor parts of the world, such as sub-Saharan Africa, the infection is much more common. While several companies have run trials of maternal vaccines in African countries, the poor health infrastructure in Africa is a problem -- not just in administering vaccines, but in being able to track the history of test subjects, and sorting out the results from all the other health problems that the subjects might have.
Nonetheless, there's some confidence that maternal vaccines will be on the market eventually. Gregory Glenn, head of R&D at Novavax, says: "People were looking at us with interest in the topic six years ago. Today there's a huge amount of affirmation, support and optimism."
* DIGITAL INTELLIGENCE AGAINST TERROR (3): The authorities would like to have the means to read encrypted message traffic, but their proposals for doing so seem futile. Anybody can get their hands on encryption software that makes messages very difficult to crack; even a totalitarian government would be hard-pressed to ban and successfully suppress such technology, except through indiscriminate use of government terror. Protonmail -- a free secure email system, based in Switzerland, where privacy laws are very tough -- now has hundreds of thousands of users worldwide. Not only are the messages securely encrypted, but Protonmail doesn't store messages on their servers, or know the encryption keys of its users.
Lawful users have every valid reason to want secure encryption. The internet is badly infected by malicious operators; both individuals and organizations need secure encryption to protect themselves. Vendors know that security is a selling point, and go to considerable lengths to convince clients that data is safe. There is a particular need for a secure online transaction system to perform "trusted transactions" that eliminates the need to mail documents back and forth for signature -- a scheme that is not only antiquated and cumbersome, but hardly secure.
The authorities don't like the hands-off attitude of the vendors, saying it provides aid and comfort to terrorists; the vendors reply that weak security would provide holes in the banking system, the electrical grid, and almost any modern infrastructure that the malicious would be certain to exploit. If the authorities can crack security, then so can the Black Hats. Losses to cybercrime were estimated at well over a half-billion USD in 2014. America's Information Technology Industry Council, which represents giants such as Apple and Microsoft, said: "Weakening security with the aim of advancing security simply does not make sense."
Besides, although it may not be possible to crack an encrypted message, it still has to be decrypted to be read -- meaning the people on the two ends of the conversation become the weak link. With proper authorization, the authorities can bug their targets and wait for them to slip up. The authorities, in effect, are holding all the good cards; so how can they complain about secure encryption?
On the whole, the problem with the internet is not that it is too secure; it is that it is not secure enough. Malicious hackers break into corporate and government databases on a regular basis -- sometimes only too easily, by exploiting the incompetence and laziness of their targets. Western spymasters are beginning to concede the inevitable and admit that it is too late to suppress secure encryption, and that doing so would cause more harm than good even if it could be done. The suspicious wonder if such concessions are a smokescreen, that the spooks have developed means of readily breaking secure encryption.
The squabbling over encryption tends to override the other side of the digital security puzzle: anonymity. On the internet users can adopt any name they want when they open an e-mail or social-media account, write comments on a web page, or set up a website. People can buy and use a smartphone while giving flimsy or false personal details; or none at all.
Of course, a legitimate case can be made for such privacy, particularly for users who live in authoritarian societies -- but a good case can be made against it as well. Although people can reasonably claim a right to anonymity, they can also claim a right not to deal with people who won't identify themselves; we have little liking for people who knock at the door wearing a mask, or who approach us and then refuse to identify themselves. It's not safe to do business with people who don't want to identify themselves; and anyone familiar with the internet knows there are many people online who just like to make nuisances of themselves, vandalizing cyberspace to no good end, while concealing their identity.
In short, while encryption enhances online security, anonymity tends to undermine it. In the real world, we have no unrestricted right to anonymity. In most countries, it is not possible to drive a car without registration plates, a license, or insurance. Most countries require babies to be registered at birth, and issue numbers to track payments in and out of social-security systems. People do not expect to live in an anonymous house, draw an anonymous income or -- at least in the 21st century -- open an anonymous bank account.
ID, put simply, cuts both ways: we would like to preserve our anonymity online, but we also need trusted transactions for banking, online purchases, or whatever, and trusted transactions mean a loss of anonymity. The state is the ultimate guarantor of trusted transactions; one party to the transaction cannot deny the other the right to obtain government assistance if the transaction is disputed, while both parties cannot deny the right of the government to monitor the legality of transactions.
To be sure, only those with a legitimate "need to know" and legal authorization should have access to data on such transactions, but robust ID is still required. Many countries now require those who buy mobile devices to provide some form of identification -- Britain is an exception -- but these rules are weakly implemented and enforced. The rules are starting to be tightened:
The other problem with anonymity is that, as discussed here, it's easily penetrated with data-mining; if the authorities want to figure out who's doing what online, all they have to do is put the pieces together. We can readily get our hands on good encryption technology, but it's very hard to conceal our identity from those who have the means to figure it out.
Besides, in practice, much of the time we don't want to be anonymous. While people complain about the lack of privacy online, all but the most paranoid give away vast amounts of personal information; and hardly complain about the tracking of purchases to give them targeted ads and deals. Every website can record the details of the visitor's browser and computer settings that often make up a unique fingerprint. Only the most paranoid try to block such data collection, and they do so only with inconvenience.
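The browser-fingerprint idea mentioned above amounts to hashing together a handful of attributes any website can read. A minimal sketch -- the attribute names and values here are illustrative placeholders, not an exact browser API:

```python
# Sketch of browser fingerprinting: combine a few ordinary, readable
# browser attributes into a short identifier. Each attribute alone is
# common; together they are often close to unique per visitor.

import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash sorted attribute pairs into a short hex identifier."""
    blob = "|".join(f"{key}={value}" for key, value in sorted(attrs.items()))
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()[:16]

visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/48.0",
    "screen": "1920x1080x24",
    "timezone": "UTC-7",
    "fonts": "Arial,Courier,Helvetica,Times",
    "language": "en-US",
}
print(fingerprint(visitor))   # same settings produce the same ID on every visit
```

The point is that no cookies or logins are needed: as long as the visitor's configuration doesn't change, the same identifier comes back, which is why blocking such collection is inconvenient.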
After recent terror attacks, democratic societies can properly ask whether the right to remain anonymous, online or on the street, should remain near-absolute -- when it's both legally and technically a fiction anyway. As long as there is proper democratic oversight of those handling the data, Europeans will have to give up some anonymity to preserve the liberty and security that matter. In an open internet, the security of personal data and identities should be preserved with secure encryption; in an open Europe, personal safety will be safeguarded by police and intelligence services, who must be able to share data at least as easily as do terrorists. [END OF SERIES]
* THE COLD WAR (123): The intelligence returned by the CORONA satellites didn't help Eisenhower hold the line on defense spending, and in fact, under pressure from the Democrats, he was forced to raise it by a half billion dollars, even though he saw no necessity for additional weapons. The president managed to hold the line on calls for a national civil-defense program -- he felt that was a state and local issue, in which the Federal government should not get involved -- nor was he keen on increasing space spending. He took a dim view of Project Mercury, seeing it as an absurd stunt that wouldn't be worth the money pumped into it. He had even less use for talk of sending astronauts to the Moon.
Premier Khrushchev had no such hesitations about the USSR's space stunts. On 19 August, the Soviet Union put two dogs, "Belka (Squirrel)" and "Strelka (Arrow)" into orbit in a Vostok capsule, designated "Korabl Sputnik 2", publicly announced as "Sputnik 5". This was the second attempt to put dogs into orbit, an attempt on 15 July having ended abruptly when the R-7 booster exploded after liftoff -- but the second try worked like a charm, Belka and Strelka being recovered safely after a bit over a day in orbit. It was a remarkable technical achievement.
All the US had done to that time was send a few monkeys on short hops into space. The American Mercury astronauts feared the Soviets would put a man into space before the US did. They had good reason for that fear, but the Soviet "lead" in space was something of an illusion; although the USSR had beaten the USA in putting the first satellite into orbit, since that time the Americans had been launching satellites at about three times the rate of Soviet launches, with the American spacecraft covering a substantially wider range of practical applicability. The Soviets were simply ahead in stunts.
Not only was the US busily launching spy satellites, if with little success for the time being, but they had also launched navigation satellites named "Transit" -- a military project, one of the prime motivations in the program being to aid ballistic-missile submarines in determining their location, and so refining the targeting of their missiles -- and a demonstrator weather satellite named "Tiros" -- it was a NASA project, for civilian purposes, but it had been started by ARPA, and clearly had military applications.
NASA was also continuing to grow and become more formidable. In March, following an executive order from the president late the year before, the agency had acquired the Army Ballistic Missile Agency in Huntsville, with ABMA's technical boss, Wernher von Braun, becoming a NASA official, and the Huntsville organization being renamed the "Marshall Space Flight Center". The name was in honor of George C. Marshall, who had died the year before. Von Braun was focused on the development of a series of big "Saturn" boosters, with an eye towards ultimately using a Saturn to send astronauts to the Moon.
* The Moon was not much of a concern to Eisenhower. He was much more worried about Cuba, particularly since he wasn't sure how to deal with that problem. A meeting of the foreign ministers of the American states that took place in August in Costa Rica was a source of satisfaction, resulting in a statement warning against Sino-Soviet interference in the New World -- and also condemning Trujillo, calling for member states to break diplomatic relations with the Dominican Republic. The US soon did so.
On 18 August, Eisenhower met with Dulles, Gates, and Richard Bissell to see how the CIA's anti-Castro campaign was doing. Anti-Castro propaganda was underway, but attempts to set up a resistance movement in Cuba had gone nowhere, the regime enjoying considerable public support, while Castro's security apparatus was diligent and effective. Bissell was making better progress in setting up a paramilitary force of Cuban exiles, with a training camp established in Guatemala. Eisenhower authorized increased funding for the effort, and the use of US military personnel in training and support roles.
However, when the president asked: "Where's our government in exile?" -- he was told the anti-Castro Cubans were much too fractious to agree on a leader. To Eisenhower, that was a central issue; the idea was that the paramilitary force would land, establish itself on Cuban soil, and then set up a government -- which would request American support. The neat plan was a non-starter in the absence of a government; the president said: "Boys, if you don't intend to go through with this, let's stop talking about it." In reality, events continued to sputter along, with Eisenhower being unwilling or unable to simply shut down the half-baked exercise before it led to real trouble. [TO BE CONTINUED]
* GIMMICKS & GADGETS: Apple pre-announced its iOS 10 release for the iPhone in mid-June. While I don't own a smartphone, the features promoted in the intro video were at least amusing:
* In other smartphone news, new phones like the LG G5 and Huawei P9 have a noticeable innovation: dual camera lenses on the back. The scheme has the potential to address some of the limitations of smartphone cameras. For example, a larger imaging sensor generally means better images -- but that presents challenges in packaging and thermal management. Using two small sensors can provide many of the advantages of one bigger sensor at lower cost.
Dual cameras can also provide an optical zoom capability, as with the LG G5, with one camera giving a wide-angle view, and the other giving a conventional narrow-angle view. Other possibilities include better auto-focus, and even range-finding capabilities. The next generation of Apple iPhones is rumored to feature dual cameras. Who knows? Maybe someday, smartphones will have little arrays of four, or even more, cameras on the back.
* According to a note from WIRED Online, German engineering firm Thyssenkrupp was definitely thinking "out of the box" in devising the "Twin" elevator system -- which, as its name suggests, involves two elevators in the same shaft.
Say what? How does that work? It's hard to visualize, but Twin works by having users enter their destination on a display panel, with the system then allocating an elevator for the most efficient overall transfer. The magic is in the scheduling software, and it works: the firm says the system moves 40% more passengers than a conventional elevator -- and so reduces the footprint of the elevator banks needed for a particular building. If traffic is light, the bottom elevator is stowed at the bottom of the shaft to save power.
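For the curious, the general idea of "destination dispatch" can be sketched in a few lines. This toy model and its feasibility rule are my assumptions for illustration only -- Thyssenkrupp's actual Twin scheduling logic is proprietary:

```python
# Toy destination-dispatch sketch for two cars sharing one shaft.
# Riders declare their destination up front; the dispatcher picks the
# feasible car with the least empty travel to the pickup floor.
# Simplifying assumption: the upper car only serves trips wholly above
# the lower car's floor, and vice versa, since cars cannot pass.

def assign_car(upper_floor, lower_floor, origin, destination):
    """Return 'upper', 'lower', or None for a requested trip, given
    the current floors of the two cars in the shaft."""
    candidates = []
    if min(origin, destination) > lower_floor:    # upper car feasible
        candidates.append(("upper", abs(upper_floor - origin)))
    if max(origin, destination) < upper_floor:    # lower car feasible
        candidates.append(("lower", abs(lower_floor - origin)))
    if not candidates:
        return None
    # Least empty travel to the pickup floor wins.
    return min(candidates, key=lambda c: c[1])[0]
```

With the upper car at floor 10 and the lower car at floor 0, a trip from floor 2 to floor 5 goes to the lower car (it is already nearby), while a trip from 12 to 15 can only go to the upper car.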
* In other unusual technology news, WIRED Online discussed the "Polycom RealPresence Videoprotect 500" -- which is a videoconferencing phone intended primarily for correctional facilities. It has a handset, plus display and camera in a heavy-duty wall-mount box, protected against almost any assault by a convict. It has "Pin-Torx" screws that can't be removed by makeshift screwdrivers. All it needs to work is a broadband connection.
The Videoprotect 500 is useful for other applications as well, in places that need a durable outdoor video kiosk, such as shopping malls or courts. In prisons, the phone isn't just for personal calls and tele-visitations; it's used for video arraignments, attorney consultations, inmate testimonies, and bail hearings. It's also used when an inmate is judged a transportation risk: it's much cheaper to conduct a hearing by teleconference than to haul a prisoner around with armed guards.
Few would think the Videoprotect 500 to be suitable for the home, however. It weighs 25 kilograms (55 pounds), costs about $15,000, and only comes in one color: gray.
* NEW BRAIN MAP: As discussed by an item from AAAS SCIENCE NOW Online ("Updated Human Brain Map Reveals Nearly 100 New Regions" by Emily Underwood, 20 July 2016), in the early 1900s, neurologist Korbinian Brodmann drew some of the first diagrams of the human cortex by hand, working from differences in cellular architecture that he observed under a microscope. For more than a century, scientists have used Brodmann's maps, or maps that built on his work.
Now, neuroscientists at Washington University in Saint Louis MO have created a long-overdue new brain map, working from anatomical and functional brain data obtained by the Human Connectome Project -- a "big data" project to map the brain's functions and structures from hundreds of human samples. Previous attempts to rethink the map of the cortex -- the convoluted outermost layers of the brain responsible for sensory and motor processing, language, and reasoning -- have had mixed results, because some were based on small samples, while others focused on just one aspect of brain structure or function.
To create the new map, the team looked at four measures of structure and function -- including the thickness and number of folds in the cortex, and what activity different regions displayed in functional magnetic resonance imaging (fMRI) scans during a given task. They obtained data from 210 healthy adults, and then trained a machine-learning algorithm to detect distinct regional "fingerprints".
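To illustrate the general shape of such an approach -- this is a sketch under my own assumptions, not the team's actual machine-learning pipeline -- one could group points on the cortical surface by the similarity of their multi-measure "fingerprints":

```python
# Illustrative sketch only: each cortical vertex gets a feature vector
# (e.g. thickness, folding, task-fMRI response), and a simple learner
# groups vertices with similar "fingerprints" into candidate areas.
# Here, naive k-means with deterministic farthest-point seeding.
import numpy as np

def _seed_centers(features, k):
    """Farthest-point seeding: deterministic, good enough for a demo."""
    centers = [features[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[np.argmax(dists)])
    return np.array(centers)

def parcellate(features, n_areas, n_iter=50):
    """features: (n_vertices, n_measures) float array -> per-vertex labels."""
    centers = _seed_centers(features, n_areas)
    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # Assign each vertex to the nearest area "fingerprint" ...
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ... then update each fingerprint to the mean of its members.
        for k in range(n_areas):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels
```

The real work used far richer data and a trained classifier to draw sharp area borders; the sketch only conveys the core idea of clustering by regional fingerprint.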
According to a paper released by the research team, the program mapped out 180 distinct areas, including nearly 100 that have never been defined before. The sharper, multi-layered map will allow for more detailed comparisons between humans and other primates, shining light on how our brains evolved. In more directly practical terms, it should also prove highly useful to neurosurgeons working with patients suffering from brain injuries.
ED: Incidentally, THE GUARDIAN had an article on this research, with a casual comment that, among the regions defined, some were associated with consciousness. To a cognitive functionalist, that's absurd; consciousness is not a function in itself, it is an inherent aspect of a broad range of cognitive functions such as sight and other senses, memory, and judgement. It is not out of the question, but there is no reason to think that any particular parts of the cortex will be identified as associated with consciousness -- except to the extent that such regions support cognitive functions that have a conscious aspect.
* A related item from AAAS SCIENCE NOW Online ("Watching neurons talk in a living brain" by Jessica Boddy, 21 July 2016), discussed how Yale researchers have developed a scheme to track the activity of the brain's roughly 86 billion neurons. Neurons are long, slender cells, with many branching threadlike "dendrites" on the input connecting to a cell body, with a long "axon" snaking out from the cell body to disperse, at its end, into many branching "terminals". The terminals of one neuron bridge to the dendrites of others via gaps known as "synapses".
When a sufficient number of dendrites are activated, a neuron "fires", producing rapid changes in the concentrations of ions between the interior and exterior of the neuron, the result being that an electrical signal travels down the axon, to activate the terminals driving the dendrites of other neurons. The synaptic connections grow stronger the more times each is exercised; it is the synapses that provide memory capability.
The number of synapses that fire in one region, a measure known as "synaptic density", is a good indicator of brain health. Higher synaptic density means more signals being sent successfully; if there are significant interruptions in large sections of the neuron highway, many signals may never reach their destinations, leading to disorders such as Huntington disease.
Traditionally, the only way to investigate synaptic density has been to biopsy brain tissue removed from a patient, which is obviously of limited value in tracking the progress of brain conditions. The Yale researchers have come up with an "in vivo" scheme that can track synaptic density in living subjects; they call the procedure "synaptic density imaging (SDI)".
SDI uses a radioactive molecule that, when applied to brain tissue, selectively latches on to certain membranes. When paired with positron emission tomography (PET), a scan that measures nuclear radiation given off by the molecule, the selected areas light up on the image of the organ being studied: the brighter a region appears, the more of the target molecule is present there. When applied to the hundreds of trillions of synapses in the brain, the result is a real-time picture of synaptic density.
The researchers tested the scheme on baboons, injecting them with a radioactive tracer that had an affinity for a membrane protein in the brain known as "synaptic vesicle glycoprotein 2A (SV2A)". After conducting PET scans of the baboons, they compared the PET scans to autopsy results, demonstrating that SV2A really was a good marker for synaptic density. Given the SV2A marker, researchers could potentially tell whether some areas of the brain were affected by disorders like Parkinson's: a lack of synaptic firing would cause those areas to come up dark on a PET scan.
To confirm the results in humans, the researchers used SDI to inspect the operation of brains of people with temporal lobe epilepsy. The condition causes seizures through the loss of synaptic firing in the same area every time. PET scans showed the predicted areas of the brain were indeed dark. The researchers believe SDI can be used to follow a neurological disorder over a patient's lifetime -- not just to show brain malfunction, but also to see if medications are doing any good.
* 2015 HOTTEST YEAR EVER: As discussed by an article from THE GUARDIAN ("Environmental Records Shattered As Climate Change 'Plays Out Before Us'" by Oliver Milman, 2 August 2016), the yearly "State Of The Climate" report -- led by the US National Oceanic & Atmospheric Administration (NOAA), with input from hundreds of scientists from 62 countries, and now in its 26th year -- confirmed there was a "toppling of several symbolic mileposts" in heat, sea level rise, and extreme weather in 2015. According to Michael Mann, a well-known climatologist at Pennsylvania State University: "The impacts of climate change are no longer subtle. They are playing out before us, in real time. The 2015 numbers drive that home."
The annualized surface temperature of 2015 was the warmest on record, beating the previous record set in 2014 by 0.1 degrees Celsius. The world is now 1C warmer than it was in pre-industrial times, mostly due to human emissions of greenhouse gases. 2016 is projected to break the 2015 record in turn, after 14 straight months of elevated temperatures aided by an El Nino climatic event -- a weather event that typically raises global temperatures. Not incidentally, the El Nino event is over, but temperatures are still continuing their slow but steady upward climb. The oceans also reached a new record temperature, with sharp spikes in the El Nino-dominated eastern Pacific, which was 2C warmer than the long-term average, and the Arctic, where the temperature in August hit a remarkable 8C above average.
The thermal expansion of the oceans, compounded by melting glaciers, resulted in the highest global sea level on record in 2015. The oceans are around 7 centimeters higher than the 1993 average, which is when comprehensive satellite measurements of sea levels began. The seas are rising at an average rate of 3.3 millimeters a year, with the western Pacific and Indian Oceans experiencing the fastest increases. Assuming that the rate does not increase, that means seas will be 28 centimeters (almost a foot) higher than they are now by 2100.
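That projection is simple arithmetic on the quoted rate:

```python
# Back-of-the-envelope check of the article's figure: at a constant
# 3.3 mm/yr, how much higher are the seas by 2100?
def projected_rise_cm(rate_mm_per_year, start_year, end_year):
    """Linear sea-level rise over the period, in centimeters."""
    return rate_mm_per_year * (end_year - start_year) / 10.0

print(round(projected_rise_cm(3.3, 2015, 2100)))  # -> 28 cm, "almost a foot"
```

28 centimeters is about 11 inches, hence "almost a foot"; the linear extrapolation is of course a floor, since the rate is expected to increase.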
The rate is expected to increase. The temperature rise is being primarily driven by a CO2 concentration that surpassed the symbolic 400 parts per million mark at the Mauna Loa research station in Hawaii last year. The NOAA report states that the global CO2 level was actually slightly lower, at 399.4 PPM, an increase of 2.2 PPM compared to 2014. As CO2 concentrations rise, temperatures will rise in turn, and sea level rise will accelerate -- not just because of melting ice, but because, as noted above, warmer seas have greater volume.
NOAA said other "remarkable" changes in 2015 include the Arctic's lowest maximum sea ice extent in the 37-year satellite record, recorded in February 2015. Alpine glaciers recorded a net annual loss of ice for the 36th consecutive year, while the Greenland ice sheet exhibited melting over more than 50% of its surface. Were the entire Greenland ice sheet to melt, it would raise the world's oceans by 7 meters (23 feet). Severe heatwaves and droughts, along with extensive wildfires, occurred in 2015; such things have always happened, of course, but statistics show they are on the increase in number and severity.
* In other climate-change news, as discussed by an item from SCITECH DAILY Online ("NASA Study Reveals That Historical Records Miss a Fifth of Global Warming", 22 July 2016), a study by researchers at the US National Aeronautics & Space Administration (NASA) shows that almost one-fifth of the global warming that has occurred in the past 150 years has been missed by historical records, due to errors in how global temperatures were recorded. The study explains why projections of future climate based solely on historical records estimate lower rates of warming than predictions from climate models. With the corrections provided in the study, the models and observations largely agree on expected near-term global warming.
The Arctic is warming faster than the rest of the planet, but given its inaccessibility, there are fewer historic temperature readings from there than from lower latitudes. A data set with fewer Arctic temperature measurements is more likely to suffer errors. Since there's no way to get more data out of the past records, the NASA researchers instead built climate models to mimic the limited coverage in the historical record, which showed how limited samplings could have skewed estimates. The study also addressed two other issues:
These sources of error were not news, but nobody had ever generated a thorough analysis to deal with them. In any case, the bottom line is that earlier estimates lost about 19% of global air-temperature warming since the 1860s. That meant calculations generated from historical records alone were cooler than about 90% of the results from the climate models used by the Intergovernmental Panel on Climate Change (IPCC) for its assessments. With the corrections, the calculations from historical records were close to the middle of the range of calculations from the IPCC's suite of models.
Mark Richardson of NASA's Jet Propulsion Laboratory in Pasadena CA, the lead author of the study, says the errors are "quite small on their own, but they add up in the same direction. We were surprised that they added up to such a big effect." Richardson adds: "It had seemed like real-world data hinted that future global warming would be a bit less than models said. This mostly disappears in a fair comparison."
* DIGITAL INTELLIGENCE AGAINST TERROR (2): The appetite among Western nations for increased exploitation of digital intelligence to fight terror is, to precisely no surprise, strongest in France. It is weakest in Germany, which has notably strict rules for protecting data about its citizens: information can be shared only with the person's explicit consent, or with specific, and extraordinary, legal authorization.
Given the past history of Germans with Nazi and communist totalitarianism, that's not surprising either. Another factor is that Germany has not suffered terrorist attacks the way France has. Germans are inclined to believe that the low profile of German forces in foreign interventions has spared Germany from being a target; in addition, Germany's Muslims, mainly of Turkish extraction, are more secular and less alienated than those of Arab origin in France's banlieues.
One big factor that Germans don't make much of is that German intelligence -- backed up by the intelligence of other allies, notably America -- is very good, and has thwarted a number of plots. German sensitivity over surveillance ensures that this is kept quiet, that sensitivity having been inflamed when information released by Edward Snowden hinted that Chancellor Merkel's phone had been tapped by the NSA.
A German government investigation found no reason to believe that was so, but the damage had been done. A parliamentary inquiry digging into German intelligence gathering then highlighted the close relationship between US and German intelligence -- and also revealed that German intelligence had been collecting intelligence on other European governments, bodies such as the International Red Cross and Oxfam, and on individuals. While France is granting its spooks greater powers, Germany is reining them in -- a new and beefed-up parliamentary committee, staffed with experts, will oversee Germany's foreign and domestic spies.
Privacy advocates don't see that as going far enough, judging that all mass data collection should be banned. Europeans are inclined to see digital privacy as a fundamental human right, while the main concern in the US is consumer protection -- meaning that the right can be abridged if national security requires it. The idea that Passenger Name Record data can be shared among European governments, let alone the US, is contentious.
The European Court of Justice has struck down the EU's "Safe Harbour" agreement with the US, under which tech firms were allowed to transfer personal data across the Atlantic. The court handed down this ruling because of concerns that the NSA would snoop through it. The British are worried that the court may rule in a case this year that much of its electronic eavesdropping is simply illegal.
* Along with data collection, another problem is encryption. Citizens can now easily get their hands on data encryption software that makes it difficult to impossible for security services to crack encrypted messages, and they're complaining:
Law-enforcement officials are not at all happy with the idea of places they can't get into, even with search warrants. If they've got a warrant, they can go effectively anywhere to catch child-abusers, gangsters, and money-launderers; why should cyberspace be different? Security hawks want to counter the spread of encryption with four powers:
The fourth item has generated particularly loud screams from privacy advocates -- while the recent flap between the US authorities and Apple Corporation over cracking iPhones demonstrated that industry is not at all happy about giving the authorities a door into encrypted systems, perceiving that customers need and are entitled to encryption. There are serious questions about the practicality of such schemes, and whether it is possible, over the longer run, to defeat encryption. [TO BE CONTINUED]
* THE COLD WAR (122): President Eisenhower could take some encouragement in the fact that the CORONA program was finally getting on track: a test shot on 10 August was fully successful, the film bucket being recovered from the Pacific Ocean two days later. A fully operational satellite was launched on 18 August, with its bucket being recovered, the film having imaged a fifth of the USSR. Photo analysts were "flabbergasted" with the haul, finding the images "terrific, stupendous".
The US Navy had actually performed a successful spy satellite launch earlier, putting the "Galactic Radiation And Background (GRAB)" AKA "Solar Radiation (SOLRAD)" spacecraft into orbit on 22 June -- the names, of course, being covers for its real mission. It was much simpler than a CORONA satellite, being a signals intelligence (SIGINT) satellite that relayed radar and other persistent radio signals to ground stations for analysis. It had only the coarsest ability to actually locate such "emitters", and was not in a league of capability with the CORONA satellites.
With intelligence now being returned by space assets, Eisenhower moved to put it under civilian control. He had not been happy to find that Air Force brass were publicly leaking information about the spy satellite program to enhance the status of the service as America's "space force"; he set up a review committee in response that proposed the generals be taken out of the loop.
On 31 August 1960, the Eisenhower Administration established the "Office of Missile & Satellite Systems", which would become the "National Reconnaissance Office (NRO)" about a year later. The NRO was so secret that even its name was not publicly revealed until after the Cold War. The top leadership was provided by the Undersecretary of the Air Force, Joseph C. Charyk, a civilian, with a deputy from the CIA, who unsurprisingly turned out to be Richard Bissell, the father of CORONA. The spy satellite effort went completely secret. The CORONA satellites would soon become known by the "KEYHOLE (KH)" codename, with the initial spacecraft being "KH-1", which would be followed by the "KH-2" before the end of 1960 -- the KH-2 being bigger and having an improved camera system.
At roughly the same time, the photographic intelligence organizations of the Army, Air Force, Navy, and CIA were consolidated into the "National Photographic Interpretation Center (NPIC)", which reported to the CIA and so wasn't under the control of the generals, either. NPIC would take reconnaissance imagery provided by the NRO, analyze it, and pass it on to end users. The generals were not particularly happy with this arrangement.
The intelligence brought back by the spy satellites only confirmed what Eisenhower already knew, that Soviet talk of strategic superiority, or even parity, was just that -- talk. Unfortunately, the president could not publicly reveal the existence of CORONA, despite the fact that it would have taken the wind out of Democratic election-year posturing about the "missile gap". The problem wasn't the need to keep CORONA hidden from the Soviets; they would figure out what was going on soon enough, if they hadn't already. It was the need to keep everyone else from finding out about it. If the program were to become public knowledge, the Soviets would feel compelled to object, or even figure out how to destroy the satellites. The existence of CORONA could not be revealed until the Soviets had their own space reconnaissance capability, which would make them think twice before taking action against American spy satellites. [TO BE CONTINUED]
* SCIENCE NOTES: As discussed by a note from AAAS SCIENCE NOW Online, it's long been known that bees have communications capabilities, performing "waggle dances" in the hive to direct foragers to flowers, but researchers are now finding out just how sophisticated they are. Six years ago, scientists discovered that foraging European honey bees, Apis mellifera, make "stop signals" in the hive to provide warning that they've encountered, say, a spider on a flower. They head-butt individual bees and generate a brief, vibrating pulse, warning of danger. Now research has shown that the Asian honeybee (Apis cerana) has even better communications skills.
The Asian honeybee is preyed upon by a number of species of hornets, including the world's biggest hornet. The researchers set up experiments in which hornets attacked the bees, with the reactions of the bees observed. When victimized bees returned to their hives, they generated stop signals that increased in pitch according to the size of the predator, which inhibited the waggle dances of the other bees. If there were hornets threatening the nest, the guard bees and foragers coming back to the hive made distinctive and lengthy stop signals to announce that danger was right outside; and generated more stop signals if a giant hornet was invading the hive. Foragers then remained in place, while nest defenders formed a ball around the hornet, to kill it by raising its body heat.
* In other insect news, as also reported by AAAS SCIENCE NOW Online, it is well-known that moths are attracted to light, as per the saying "like a moth to the flame". Since most moths are nocturnal, global light pollution makes that attraction fatal to large numbers of them.
Two researchers got to wondering if that high level of mortality wasn't evolving the attraction out of moths. In 2007, they collected the larvae of 1,048 ermine moths (Yponomeuta cagnagella); 320 were from areas where the nights were dark, 728 were from light-polluted areas. They were raised in a lab, with 16 hours of daylight and 8 hours of darkness daily. Two to three days after emerging as moths, they were released in a flight cage with a fluorescent tube at one side.
Moths from high light pollution areas were 30% less attracted to the light than those from the darker areas. Those with less attraction to lights would have higher survival rates in urban areas, but that comes with a price: to avoid lights, the moths are flying less, so they aren't pollinating as many flowers, or feeding as many spiders and bats.
* As discussed by a note from AAAS SCIENCE NOW Online ("This Butterfly Has Extreme Color Vision" by Virginia Morell, 8 March 2016), while the compound eyes of butterflies can't come close to matching the resolution of the human eye, they do have tricks of their own: their field of view is larger, they're better at perceiving fast-moving objects, and they can distinguish ultraviolet and polarized light.
Now it turns out that a species of swallowtail butterfly from Australasia, the common bluebottle (Graphium sarpedon), known for its prominent blue-green markings, has eyes equipped with at least 15 different types of photoreceptors -- the light-detecting cells required for vision, like the rods and cones of the human eye. The researchers conducted experiments on the eyes of 200 male bluebottles, collected in Japan -- they were not able to collect enough female bluebottles for analysis. The researchers found that different colors stimulate each class of receptor. For instance, UV light stimulates one, while slightly different blue lights set off three others; and green lights trigger four more.
Most insect species have only three classes of photoreceptors. Even humans have only three types of cones, yet we still see millions of colors. Butterflies need only four receptor classes for color vision, including spectra in the UV region. So why should the bluebottles have 11 more? The researchers believe it may have something to do with the breeding habits of the bluebottle, allowing males to spot rivals and ignore butterflies with similar patterns. In other words, it's sexually selected -- something like the tail of the peacock, but in reverse.
* NUCLEAR THERMAL ROCKETS REVISITED: As discussed by an article from AVIATION WEEK Online ("NASA's Road Map Toward Possible Nuclear Rocket Flight Demo" by Guy Norris, 21 September 2015), the idea of the "nuclear thermal rocket (NTR)" goes back to the beginning of the space age.
The concept is straightforward. The NTR is built around a nuclear fission reactor, which heats liquid hydrogen (LH2) to generate high-velocity exhaust. An NTR has twice the "specific impulse" -- a measure of thrust efficiency -- of the best chemical rockets; an NTR is no good for lofting payloads into orbit, but it is seen as highly desirable for deep-space missions, to Mars and beyond.
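The payoff of doubled specific impulse can be seen from the ideal rocket equation. The Isp figures below are typical textbook values, assumed here for illustration rather than taken from the article:

```python
# Why doubling specific impulse matters: ideal delta-v follows the
# rocket equation, delta_v = Isp * g0 * ln(m0 / m1). Assumed typical
# figures: ~450 s for the best LOX/LH2 chemical engines, ~900 s for
# an NTR heating hydrogen.
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_seconds, mass_ratio):
    """Ideal velocity change (m/s) for a stage with given Isp and m0/m1."""
    return isp_seconds * G0 * math.log(mass_ratio)

# Same stage mass ratio of 3, chemical versus nuclear thermal:
chemical = delta_v(450, 3.0)   # about 4.85 km/s
nuclear  = delta_v(900, 3.0)   # about 9.70 km/s -- twice the delta-v
```

Equivalently, for a fixed delta-v the NTR stage needs far less propellant, which is why the concept keeps resurfacing for Mars missions.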
The idea has been studied for more than a half century, but it has never been put to operational use. The US National Aeronautics & Space Administration (NASA) is now becoming interested in NTR again. Sixty years after the US Atomic Energy Commission began Project ROVER to evaluate the first NTRs, NASA is funding the second phase of a technology development effort that it hopes will lead to the successful ground testing, and eventual flight, of a small NTR engine by 2025.
The most important issues concern the size of the demonstrator; where it would be ground-tested; and how it would be tested. Stan Borowski, technical lead for NTR at NASA's Glenn Research Center in Cleveland, Ohio, asks: "Is there a simple flight technology demonstration we can do, and what are the kinds of things that are required to do it in a 10-year time frame? Is it do-able at all? That's the key thing."
Primary tasks identified for the second phase include development, demonstration, and validation of graphite-composite nuclear fuel elements; conceptual design of a demonstrator; requirement definition for a small, but scalable, low-thrust engine; ground test options; plus devising an affordable development and test plan. The notional plan calls for these questions to be answered within the next two years, hopefully leading to a formal go-ahead late in fiscal 2017. Cost is a major consideration, with the effort focusing on proven technologies and procedures. Ground-testing of engines would run from 2022 to 2024, leading to selection of an engine, with a thrust of either 33.4 kN (3,400 kgp / 7,500 lbf) or 74.3 kN (7,575 kgp / 16,700 lbf), for flight tests. According to Borowski:
Our proposed demonstration mission is very simple. It is a one-burn lunar flyby which would be launched on a small vehicle. We don't need a lot of liquid hydrogen propellant, so the tank [would be] small. And seeing as it's one burn, then you don't have to worry about active cryogenic coolers for long-term storage. You do the one burn and you fly by the Moon. Three days later you flip around, mark the end of the demonstration with a photograph of Earth, and perform a lunar gravity assist maneuver that puts the demonstrator on a trajectory into deep space for disposal.
As currently envisioned, the lunar flyby demonstrator would be launched on a Delta IV-class booster. While the low-thrust NTR engine is sufficient for the demonstrator mission and for a variety of robotic science missions, the study tilts towards the high-thrust option because it is better suited to supporting future crewed exploration missions.
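As an aside, the mixed units quoted for the two candidate engines can be cross-checked with the standard conversion factors:

```python
# Sanity-checking the quoted thrust figures: kilonewtons against
# kilograms-force ("kgp") and pounds-force.
G0 = 9.80665          # m/s^2; defines the kilogram-force
LBF = 4.4482216153    # newtons per pound-force

def kn_to_kgf(kn):
    return kn * 1000.0 / G0

def kn_to_lbf(kn):
    return kn * 1000.0 / LBF

# 33.4 kN -> ~3,406 kgf / ~7,509 lbf; 74.3 kN -> ~7,576 kgf / ~16,703 lbf,
# matching the article's rounded 3,400 / 7,500 and 7,575 / 16,700.
```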
The high-thrust engine would leverage off the detail work performed on a comparable "small nuclear rocket engine (SNRE)" that emerged from NASA's "Nuclear Engine for Rocket Vehicle Applications (NERVA)" program in the 1960s. The SNRE was developed by Los Alamos National Laboratory; it was never flown or even ground-tested, but it was small enough to be placed in the shuttle cargo bay, to support deep-space robotic missions.
The high-thrust engine is also preferred because its design is much the same as that of larger engines. It could also be used for operational missions -- NASA analysis showing that a cluster of three such engines could support re-usable lunar cargo delivery, crewed landings, and asteroid survey missions. Borowski says:
Even human missions to Mars are possible with the smaller crew size and pre-positioning of assets currently being envisioned in NASA's evolvable Mars campaign. [NASA wants] to use solar electric propulsion to pre-deploy cargo in Mars orbit and, at the same time, reduce crew size from six to four. If we do that and use SEP to pre-deploy return propellant, we can use these three [high-thrust] engines on a Mars transfer vehicle; [we could also] return that vehicle to Earth orbit, and capture it for potential refueling with hydrogen.
... The evolvable Mars campaign right now is [reducing] the transit time to nine months. But with NTR, even with smaller engines, we are going out in six months and coming back in six months.
The NTR will be built around a compact fission reactor core, containing 93%-enriched uranium-235 fuel, with the reactor generating hundreds of megawatts of thermal power to heat the propellant. The LH2 will be driven through the engine by a turbopump, being split into two loops to cool engine assemblies. After passing through the dual loops, the flow will be combined again, now in the form of a heated gas, which will drive the turbopump and then pass through the reactor core. The superheated gas then expands out a high-area-ratio nozzle, of about 300:1, to produce thrust. Given the long history of work on NTR, few see any major technical hurdles; however, the past history also shows that maintaining momentum on NTR development is problematic.
* THE ALLEN INSTITUTE PROBES THE BRAIN: As discussed by an article from Nature.com ("Brain-Data Gold Mine Could Reveal How Neurons Compute" by Helen Shen, 13 July 2016), neuroscientists in Seattle, Washington, have spent four years mapping the neural activity of the mouse visual cortex, using the large-scale sky surveys with which astronomers explore the cosmos as inspiration. The Allen Brain Observatory's (ABO) first data release, on 13 July 2016, provides a publicly-accessible data set of unprecedented size and scope, as a stepping-stone to the ultimate goal of modeling and understanding the human brain.
The project is part of an ambitious ten-year brain-research plan announced in 2012 by the Allen Institute for Brain Science. The work is focused on cataloging neurons and their electrical characteristics in detail, to help understand how perception and cognition arise.
To create the ABO's first data set, researchers used a specialized microscope to record calcium waves that occur when neurons fire, sampling activity in 25 mice over 360 experimental sessions, while the animals viewed a battery of visual stimuli such as moving patterns of lines; images of natural scenes; and short movies. The data set so far includes 18,000 cells in four areas of the visual cortex. The set also includes information about each neuron's location and its expression of certain genetic markers. Even with only 18,000 cells, the data set runs to 30 terabytes -- but users can download a condensed processed data set, or explore online.
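The raw measurement behind the data set is a fluorescence trace per neuron, with calcium transients riding on a baseline; the standard first processing step is to normalize the trace as "delta-F over F" against an estimated baseline, then flag threshold crossings as candidate firing events. A minimal sketch of that idea on synthetic data -- the trace, baseline estimate, and threshold here are all illustrative assumptions, not the Allen Institute's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fluorescence trace (arbitrary units): a flat baseline
# plus noise and two step-like calcium transients. Purely
# illustrative -- not actual Allen Brain Observatory data.
n = 1000
trace = 100.0 + 0.3 * rng.standard_normal(n)
trace[200:260] += 20.0   # transient 1
trace[600:650] += 35.0   # transient 2

# Delta-F/F: deviation from an estimated baseline F0, here taken
# as a low percentile of the whole trace.
f0 = np.percentile(trace, 20)
dff = (trace - f0) / f0

# Flag samples where dF/F exceeds a threshold as candidate events.
events = np.flatnonzero(dff > 0.1)
print(f"baseline F0 ~ {f0:.1f}, samples above threshold: {events.size}")
```

Real pipelines use a running (windowed) baseline rather than a global percentile, since slow drift in the recording would otherwise corrupt the normalization.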
Other labs have collected similar data, but on a much smaller scale, with fewer animals or fewer neurons. The labs also used different species, techniques, or brain regions, making it more difficult to merge and compare data, with most of the data remaining in the possession of individual labs. To create the extensive ABO data set, more than 100 researchers developed, then used, standardized equipment and protocols for every stage of the experiment, permitting the methodical creation of the huge data set.
Allen Institute researchers now plan to monitor activity while the mice carry out behavioral tasks; they also want to add more recording techniques, as well as extend their sampling across the entire mouse visual cortex and, eventually, the rest of the brain. Christof Koch, president of the Allen Institute, expects that over the next 3 to 4 years, the project will evolve into a true observatory, with researchers able to request certain experiments, and the results made publicly available.
Ultimately, the Allen Institute wants its own researchers and others to be able to use the massive data set to help to unravel the fundamental computational principles that underlie cognition. This goal is shared by the US government's "Brain Research through Advancing Innovative Neurotechnologies (BRAIN)" Initiative, which was launched in 2013 -- being discussed here in 2015 -- with the Allen Institute among its private partners.
However, while the BRAIN Initiative has largely supported individuals and small groups of investigators with conventional grants, the Allen Institute has concentrated personnel and money on a small number of large projects. The aim is to create public research tools that individual labs would not have the resources to produce. The impact of the ABO does depend on whether the neuroscience community embraces the effort, but early response seems enthusiastic.
Koch believes that a sweeping survey of neural activity will allow theoreticians to put together more accurate models of brain function, and find better ways to test the validity of existing models. But he is also realistic about the challenges ahead: "We're under no illusions that now we have all this data that the solution will jump out at us."
* ANOTHER MONTH: As discussed by an article from THE ECONOMIST ("Hello Kitty, Goodbye Panda", 16 July 2016), Taiwan loves the Japanese "Hello Kitty" icon. This last spring, the first Hello Kitty train began service, with riders stealing almost all the head-rest covers on the first day. EVA Air, Taiwan's second-biggest airline, is ramping up flights of its Hello Kitty jetliner service to Paris. Taipei airport has a Hello Kitty check-in area, gift shop, and play area. Hello Kitty is everywhere in Taiwan, the kitty face displayed on an enormous range of goods.
The fad has a broader context, an embrace of Japanese "kawaii (cutesy)" culture. Although Japan once seized Taiwan as a colony, that era is no longer much in living memory, while China continues to insist that Taiwan must submit to Chinese rule sooner or later. Hello Kitty is Taiwan's way of discreetly thumbing the nose at China. Consider the Hello Kitty train: it is decorated with multinational Hello Kitty environments, but the Taiwanese Hello Kitty, which drinks tea under the Taipei Tower, is separated from the Chinese Hello Kitty -- who visits the Great Wall and schmoozes with pandas -- by a Hello Kitty in a kimono.
It appears the Hello Kitty revolution was started by McDonald's in August 1999, when the fast-food chain provided Hello Kitty toys with its Happy Meals. The supply of a half-million toys lasted less than a day. Later in 1999, Chunghwa Telecom sold out 50,000 Hello Kitty phone cards in less than five minutes.
There is a clear political context: this year Tsai Ing-wen, candidate of the independence-minded Democratic Progressive Party (DPP), defeated the pro-unification Kuomintang (KMT). One of the weapons in the DPP's campaign was a feline-oriented video featuring Mrs. Tsai, with the video narrative in Taiwanese, not Mandarin as the KMT has traditionally preferred. Although the Taiwanese did not much care for Japan's rule, with the passage of time, there has been a tendency among them to establish stronger cultural roots with Japan. That inclination may well be superficial; but it does announce that the Taiwanese would like to weaken their cultural connection with China.
* I don't often get reviews on my Kindle ebooks, but those I do get are usually flattering. Of course, I knew I would eventually get some negative reviews, and I finally got two in a row. I was more stressed by them than I thought I should be.
One was annoying because the criticism was that the book was too short and simple; the critic wanted a more elaborate book. Oh please, that's not useful -- but I made no response. If a reader has useful criticisms, they should be taken to heart; if they're not useful, they are shrugged off. Getting into confrontations with readers is a bad idea.
The other criticism did have something specific, a complaint about the format of the data tables I use in my aircraft documents. Yes, they're certainly plain, each consisting of a column of data, clarified by indentation and line spacing, bounded vertically by horizontal ruling lines. I originally came up with the scheme because it's very hard to have a fixed-format table that works on various sizes of tablets. In response to the criticism, I did a lot of tinkering with different approaches -- only to find I was worse off.
OK, I was back at square one, but there was one thing I could improve on. In between the peculiarities of OpenOffice Writer, which I use to draw up the ebooks, and the .MOBI ebook format, it turns out to be very hard to ensure consistent line spacing for the horizontal rules -- what looks properly spaced in Writer may be a line short or a line too long, giving a skewed appearance.
It appears that the line spacing is very sensitive to how the lines and neighboring text are inserted in Writer; it may look OK in Writer, but ends up wrong in .MOBI. I found I could carefully re-enter the lines and text, then delete the old lines and text, and get the vertical spacing right. It took some trial-&-error iterations to do it, but I eventually cleaned up all my ebooks.
The tables are still plain, but they do the job better than any other way I can think of, and most readers don't complain. I doubt one bad review on an ebook does me much harm, and since I mostly get good reviews, I can have some assurance that I won't get two bad reviews for one ebook. If I got three bad reviews on one ebook and no good reviews, that would be it for the ebook; nobody would ever buy it again. Ironically, after getting two bad reviews in a row, I got three good reviews in a burst, even though I don't get reviews more often than once every few months. I keep hoping the next ones will mask out the bad reviews, but I'm not holding my breath.
* And then, while I was tweaking my ebooks, Amazon Kindle refused to republish my ebook THE BOEING B-47 STRATOJET, claiming it wasn't my own work because they could find it online. That was exasperating because they'd done the same thing before to me with that ebook, and I'd had a hard time convincing them it was on the level. Getting nailed twice was a pain, but I was more confident I would prevail, and I did so after several explanations. Amazon has been, on the average, more than fair to me, so I can't complain too much -- but I hope they don't do it to me AGAIN.
In compensation, just after I got that confusion settled out, the Funimation anime website announced they had released an app for their video service on the Amazon Kindle Fire HD. I'd been using a notebook hooked up to my TV to run Funimation anime videos on a web browser; it worked OK, but it was a bit cumbersome. I promptly downloaded the app from Amazon.com and got it running -- much more convenient, and very satisfying given my irritable state of mind. It locks up every now and then, but downloads are like that for the time being. Give it ten years.
* Thanks to one reader for a donation to support the websites last month. That is very much appreciated.