
DayVectors

jan 2018 / greg goebel / follow "gvgoebel" on twitter

* This weblog provides an "online notebook" to provide comments on current events, interesting items I run across, and the occasional musing. It promotes no particular ideology. Remarks may be left on the site comment board; all sensible feedback is welcome.

banner of the month


[TUE 23 JAN 18] IMPROVING AFRICA'S CROPS
[MON 22 JAN 18] UNDERSTANDING AI (8)
[FRI 19 JAN 18] ONCE & FUTURE EARTH (22)
[THU 18 JAN 18] SPACE NEWS
[WED 17 JAN 18] RABIES VERSUS BRAIN CANCER
[TUE 16 JAN 18] HUMAN PHAGEOME
[MON 15 JAN 18] UNDERSTANDING AI (7)
[FRI 12 JAN 18] ONCE & FUTURE EARTH (21)
[THU 11 JAN 18] GIMMICKS & GADGETS
[WED 10 JAN 18] NO MICROBIOME?
[TUE 09 JAN 18] CANNY ROBOCARS?
[MON 08 JAN 18] UNDERSTANDING AI (6)
[FRI 05 JAN 18] ONCE & FUTURE EARTH (20)
[THU 04 JAN 18] SCIENCE NOTES
[WED 03 JAN 18] COSMIC LIGHTS SHOW
[TUE 02 JAN 18] MICROBIOME MEDICINE
[MON 01 JAN 18] ANOTHER MONTH

[TUE 23 JAN 18] IMPROVING AFRICA'S CROPS

* IMPROVING AFRICA'S CROPS: As discussed by an article from ECONOMIST.com ("No Crop Left Behind", 23 November 2017), Africans raise a number of crops unfamiliar or under-appreciated in developed countries: cassava and sweet potatoes; lablab beans and water berries; bitter gourds and sickle sennas; elephant ears and African locusts. Sweet potatoes are known everywhere, but elephant ears? They're a leafy vegetable. African locusts? Legumes that grow on trees. However familiar or unfamiliar these crops are, they have one thing in common: they're not big cash crops in developed countries, and so there hasn't been much interest in improving them.

Cereal crops like rice, wheat, and maize have had their genomes mapped out, and have been the focus of intensive crop improvement, making them far superior to their ancestors of only two centuries ago. The "orphan crops" on which Africans are heavily dependent have seen little such improvement. They aren't as nutritious as they need to be, the result being widespread malnourishment. A report from the World Health Organization estimates that almost a third of Africa's children, nearly 60 million of them, are stunted. Researchers at the World Bank reckon the effects of stunting have reduced Africa's GDP by a tenth.

Two recent, interrelated projects are working to improve orphan crops. They are both based in Nairobi, being conducted under the umbrella of the World Agroforestry Center -- an international non-governmental research organization. The first project is the "African Orphan Crops Consortium (AOCC)"; the other is the "African Plant Breeding Academy". The AOCC's mission is to obtain complete sequences of the DNA of 101 neglected food crops, while the academy's is to disseminate those sequences, and much other data, to young scientists from universities and other institutes around the continent.

The AOCC is largely the brainchild of Howard-Yana Shapiro. His official job is as chief agricultural officer of Mars, a well-known US candy-maker. As discussed here in 2012, Mars researchers once sequenced the genome of cacao, the source of chocolate, in order to improve one of the firm's most important raw materials. When Shapiro ran into Ibrahim Mayaki, the head of an African development agency known as "NEPAD", Mayaki suggested that other tropical crops should be sequenced. Shapiro liked the idea, with the duo then recruiting Tony Simons -- who runs the World Agroforestry Center -- and Rita Mumm -- a plant geneticist at the University of Illinois. In 2013, the group launched the consortium and the academy.

To date, AOCC has fully sequenced the genomes of ten orphan crop plants, and partially sequenced 27 others. Once complete genomes are available, the differences between those of different natural varieties of the same species, known as "landraces", can be identified. Very importantly, full sequencing allows maps of DNA markers within a genome to be constructed, with the markers then used to nail down the movement of blocks of DNA from parent to offspring when different landraces are crossed.

Traditionally, crop hybridization was basically a hit-or-miss affair: varieties with desired traits were crossed, with the offspring inspected to see if they exhibited the traits as hoped. With the traits linked to genetic markers, the offspring can instead be screened for the desired markers, with those that don't carry them being discarded. The end result is accelerated development of new hybrid crop varieties that have better yields -- because of virus, pest, or drought resistance, for example -- or better nutritional value -- such as enhanced vitamin content -- or both.
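
As a rough illustration of that screening step, here is a minimal sketch in Python -- with hypothetical marker names, not any real breeding program's data -- showing that marker-assisted selection amounts to keeping only the offspring whose DNA carries all of the markers tied to the desired traits:

    import random

    # Hypothetical marker names, for illustration only.
    DESIRED_MARKERS = {"virus_resistance_M1", "high_vitamin_A_M2"}
    ALL_MARKERS = sorted(DESIRED_MARKERS) + ["unrelated_M3", "unrelated_M4"]

    def simulated_offspring(count):
        # Each simulated offspring inherits a random subset of the parents' markers.
        return [{m for m in ALL_MARKERS if random.random() < 0.5} for _ in range(count)]

    offspring = simulated_offspring(200)
    # Marker-assisted selection: keep only seedlings carrying every desired marker.
    keepers = [genes for genes in offspring if DESIRED_MARKERS <= genes]
    print(f"kept {len(keepers)} of {len(offspring)} seedlings for field trials")

The real work, of course, lies in establishing which markers actually travel with which traits -- which is what the full genome sequences and landrace maps make possible.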

In the meantime, the academy has brought in 81 researchers from all over Africa for what amount to "master classes" from the world's top plant breeders. As part of their studies, the researchers are informed of the consortium's latest results, so they can then apply those results to their work.

As an example, Dr. Robert Mwanga -- a Ugandan at the International Potato Center who has long advocated improvement of African crops -- has worked on improving the sweet potato. The varieties of sweet potato available in Uganda and elsewhere in Africa in the 1980s were deficient in vitamin A, resulting in damage to the eyes and brain of children, as well as making them more vulnerable to disease.

Starting with Asian varieties that had more vitamin A in them, Mwanga bred a dozen strains richer in vitamin A and with more dry matter -- meaning more calorific value -- than African landraces. He then led a campaign to encourage local farmers to adopt the new varieties, the farmers proving enthusiastic. Mwanga won the World Food Prize in 2016 for his work. Mwanga is now working on virus resistance, viral infection being a big problem for sweet potatoes.

Other African crop researchers, graduates of the Nairobi academy, are following Mwanga's example:

As Mwanga understood, it wasn't enough to come up with new crop varieties: farmers had to be persuaded to plant them. Farmers, particularly those at a subsistence or near-subsistence level, are resistant to change. When they did initially adopt Mwanga's improved sweet potatoes, they only used them for animal fodder, since the new potatoes were orange while the old ones were white. Mwanga patiently encouraged them to grow such crops for human consumption. He also found it useful to work with seed companies, partnering with two Ugandan firms -- BioCrops and Senai -- to distribute the latest varieties of the new sweet potato.

At present, most of the work on crop improvement is focused on subsistence farming -- but, as Oselebe's work suggests, there is a potential for bigger markets. The developed world has long demonstrated an inclination to adopt new and exciting fruits and vegetables: bananas, mangoes, pineapples, and pawpaws are all tropical fruit that have gone global. Improving Africa's orphan crops may not just help Africans stay healthy; it also promises to make them more prosperous.

COMMENT ON ARTICLE
BACK_TO_TOP

[MON 22 JAN 18] UNDERSTANDING AI (8)

* UNDERSTANDING AI (8): As discussed by an article from TIME.com ("Google Wants to Give Your Computer a Personality" by Lisa Eadicicco, 16 October 2017), for a long time machines that could hold conversations with humans were lab toys, useless in the real world. Over the past few years, however, voice-enabled gadgets have become a red-hot technology. Android Assistant (A2) is available on phones from the likes of Samsung and LG, while Amazon.com offers its take, Alexa, on the firm's popular Echo speakers. Apple has built Siri into many of its iDevices, and Microsoft is putting its Cortana helper in everything from tablets to thermostats. Now, tens of millions of Americans use Assistant, Alexa, or another virtual butler at least once a month, with sales of smart speakers alone soaring into the billions of dollars a year.

Right now, these voice-enabled digital assistants are limited in their conversational capabilities: they're given orders, and carry them out. Google, which has built a monster business by providing "information on demand", sees truly conversational machines as a next big step in its services. To create machines with personalities, the company has assembled a team of creative types not traditionally seen as Googlers: fiction writers, film-makers, video-game designers, psychologists, and comedians.

Computer scientists had worked on giving machines conversational capabilities from the beginning. The first "virtual assistant" recognizable as such was Microsoft's "Clippy", an animated paper clip that was supposed to be helpful, introduced as an element of Microsoft Office in 1997. Clippy's primary usefulness turned out to be as a bad example, since people generally found him irritating, and rarely of much help. He was quietly dropped from Office in 2007.

The concept behind Clippy -- predicting what information a user might need next, providing tips when they seemed necessary -- actually wasn't a bad one. In 2011, Apple introduced Siri, which did the job much better. Siri also integrated voice to a degree, which was particularly handy in a smartphone, where keyboard input tends toward the troublesome. Soon, everyone was jumping on the virtual assistant bandwagon, with Google developing Assistant.

37-year-old Ryan Germick heads the Google team working on the A2's personality. He sees the mission in straightforward terms: "We want you to be able to connect with this character. Part of that is acknowledging the human experience and human needs. Not just information, but also how we relate to people."

This is a subtle and tricky task. One of the problems, in giving A2 a personality, is avoiding the inclination to make it pretend to be a human. In 1970, the Japanese roboticist Masahiro Mori coined the term "uncanny valley", observing that the more closely a machine imitates a human without quite getting there, the more unsettling people find it. People regard a robot like R2D2 as cute; they tend to find the robot Lincoln at Disneyland a curiosity at best, weird at worst. It is, by that token, not surprising that Google called their virtual assistant simply "Assistant" -- implying an obedient servant -- and did not give it a human name like "Siri" or "Alexa".

In other words, a virtual assistant has to be seen as a synthetic character, a sort of cartoon character like Mickey Mouse or Bugs Bunny, that people like, but don't confuse with a real human being. Emma Coats, the "character lead" of the A2 personality team, has plenty of experience in constructing cartoon characters. She worked for five years at Pixar Animation Studios on animated movies like MONSTERS UNIVERSITY, BRAVE, and INSIDE OUT.

With a conversational system like A2, the machine personality is all about responses -- after all, A2 never does anything on its own initiative. Coats points out the kinds of questions team members ask when crafting an appropriate response:

For example, she says, consider a user asking Assistant if it's afraid of the dark. It would be fake to say it was, and a conversational dead end to just reply NO, so what A2 says is: "I like the dark because that's when stars come out. Without the stars, we wouldn't be able to learn about planets and constellations."

Think of it as a taste of entertaining light philosophy. It's maybe not that much more than what one might get out of a fortune cookie, but people like fortune cookies. Coats says: "This is a service from Google. We want to be as conversational as possible without pretending to be anything we're not." [TO BE CONTINUED]

COMMENT ON ARTICLE
BACK_TO_TOP

[FRI 19 JAN 18] ONCE & FUTURE EARTH (22)

* ONCE & FUTURE EARTH (22): There is no doubt that greenhouse warming of the planet does take place, since otherwise the Earth would be an icebox. There are four principal greenhouse gases:

-- Water vapor, by a wide margin the biggest single contributor to the greenhouse effect.

-- Carbon dioxide (CO2), a distant second.

-- Methane (CH4), a strong absorber, but present only in traces.

-- Ozone (O3), likewise a minor contributor.

Anyone reading this list might have reason to feel puzzled, because it clearly shows that water vapor is the most important greenhouse gas. So why the fuss over CO2? The effect of CO2 is not negligible, ranging in possible value from about a tenth to a quarter of the whole -- and there are good reasons to believe CO2 is the "lever" in the system.

One of the characteristics of CO2 is that the "sinks" that draw it out of the atmosphere, most significantly through the photosynthetic operation of plants, operate slowly. That means that a rise in CO2 concentrations can take a long time to fall back down. Water, in contrast, simply falls out of the sky as precipitation, and water vapor concentrations can change very rapidly -- everybody knows that the weather can shift from humid to dry overnight. CO2 concentrations don't, can't fluctuate anywhere near that rapidly.

Obviously, water vapor is produced by evaporation, mostly from the seas, and also obviously, an increase in global temperatures means a higher rate of evaporation. That suggests a temperature increase due to a rise in CO2 concentrations could well be amplified by positive feedback from an increase in water vapor concentrations. As noted, the concentrations of CO2 in the atmosphere are small, a fraction of a percent. That means its effectiveness as a greenhouse gas is disproportionately greater than that of water vapor on the basis of the total mass of each gas, and so increments in the concentrations of CO2 could have a disproportionate effect.

All that said, water vapor occupies an ambiguous position in the global warming debate. Although it is certainly a greenhouse gas and helps trap heat, as water vapor concentrations rise it also produces more cloud cover -- and, in winter, more snowfields -- reflecting more sunlight back into space. To confuse matters further, clouds also reflect radiation from below, helping trap more heat; in addition, condensation of water droplets is an exothermic process, it releases energy, and so formation of clouds tends to produce local warming. After considerable discussion, the general consensus in the climate research community is that more water vapor means, overall, more warming.

* So what does real-world data show? Measurements made from the 1950s show the level of CO2 rose from 316 parts per million (PPM) in 1959 to 387 PPM in 2009. Indirect measurements suggest the rise began about 1750, starting from the 280 PPM that appears to have been the long-term average for the 10,000 years before that -- though everyone acknowledges that natural CO2 concentrations did tend to vary around that average.

The timing of the rise in CO2 concentration from 1750 tracks the rise in human population and industrialization. It is true that the relative proportion of human emissions of CO2 to natural emissions is small -- but while natural processes have been producing CO2 for a lot longer than humans have been around, they also provide sinks that soak up the CO2, keeping the levels roughly constant. Human activity has provided a persistent increment of CO2 emissions that natural processes can't quite keep up with, a trickle that is gradually leading to an overflow. Estimates suggest that humans produce CO2 in the range of 25 to 30 gigatonnes a year; the rate of growth needed to account for the parts-per-million changes observed is about 15 gigatonnes per year, which is roughly only half the human contribution.
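
A back-of-the-envelope check of those figures can be done in a few lines of Python, using standard round values for the mass and molar composition of the atmosphere (the constants below are textbook values, not numbers from the article):

    # Convert the observed CO2 growth rate from PPM per year to gigatonnes per year.
    AIR_MASS_KG = 5.14e18       # approximate total mass of the atmosphere
    AIR_MOLAR_MASS = 28.97      # grams per mole, dry air
    CO2_MOLAR_MASS = 44.01      # grams per mole

    # Gigatonnes of CO2 corresponding to one PPM (by volume) of the atmosphere.
    gt_per_ppm = AIR_MASS_KG * (CO2_MOLAR_MASS / AIR_MOLAR_MASS) * 1e-6 / 1e12
    print(round(gt_per_ppm, 1))          # about 7.8 gigatonnes of CO2 per PPM

    ppm_rise_per_year = 2.0              # roughly the recent observed growth rate
    print(round(gt_per_ppm * ppm_rise_per_year, 1))   # about 15-16 gigatonnes per year

In other words, a growth rate of about two PPM per year corresponds to roughly 15 gigatonnes of CO2 accumulating in the atmosphere annually, consistent with the figures above.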

But what is the actual effect of that overflow? It's not entirely clear from the data just how temperature rises with CO2 concentration, or in other words what the "sensitivity" of climate to CO2 concentration really is. Climate is a noisy phenomenon, making it hard to spot and track changes, and the oceans can absorb a good deal of heat, inserting considerable inertia into the system.

Climate records now available do support warming. There were protests that analyses that showed warming were biased, or suffered from confounding effects in measurements -- but all professional organizations that have analyzed the data, including national weather agencies, have come up with the same results, with all known confounding effects factored into the analyses. [TO BE CONTINUED]

COMMENT ON ARTICLE
BACK_TO_TOP

[THU 18 JAN 18] SPACE NEWS

* Space launches for December included:

-- 02 DEC 17 / LOTOS (COSMOS 2524) -- A Soyuz 2-1b booster was launched from Plesetsk at 1043 UTC (local time - 3) to put the "Cosmos 2524" satellite into orbit. It was believed to be the third "Lotos" electronic intelligence satellite, following the launch of a Lotos test satellite in November 2009 and a follow-up launch of the first Lotos payload in December 2014.

-- 03 DEC 17 / LKW 1 -- A Chinese Long March 2D booster was launched from Jiuquan at 0411 UTC (local time - 8) to put the "LKW 1" satellite into Sun-synchronous orbit. It was believed to be an optical surveillance satellite.

-- 10 DEC 17 / ALCOMSAT 1 -- A Chinese Long March 3B booster was launched from Xichang at 1640 UTC (next day local time - 8) to put the "Alcomsat 1" geostationary comsat into orbit for the government of Algeria. The satellite was based on the DFH-4 satellite design manufactured by the China Academy of Space Technology, with a launch mass of 5,225 kilograms (11,520 pounds), a 15-year service life, and a payload of 19 Ku-band / 12 Ka-band / 2 L-band transponders. It was placed in the geostationary slot at 24.8 degrees west longitude.

-- 12 DEC 17 / GALILEO 19:22 -- An Ariane 5 ES booster was launched from Kourou at 1836 UTC (local time + 3) to put four "Galileo" navigation satellites into orbit, bringing the total number of satellites in space up to 22. These were the 19th through 22nd Galileo satellites launched, being the 15th through 18th of the fully operational constellation (FOC) satellites.

The four satellites in this launch each weighed roughly 715 kilograms (1,575 pounds); they featured two passive hydrogen maser atomic clocks, two rubidium atomic clocks, a clock monitoring and control unit, a navigation signal generator unit, an L-band antenna for navigation signal transmission, a C-band antenna for uplink signal detection, two S-band antennas for telemetry and tele-commands, plus a search and rescue antenna. They were built by OHB Systems in Germany, with Surrey Satellite Technology of the UK supplying the navigation payloads. The complete Galileo constellation will consist of 30 satellites along three orbital planes in medium Earth orbit, including two spares per plane.

-- 15 DEC 17 / SPACEX DRAGON CRS 13 -- A SpaceX Falcon booster was launched from Cape Canaveral at 1536 UTC (local time + 5), carrying the 13th operational "Dragon" cargo capsule to the International Space Station (ISS). It docked with the ISS Harmony module two days after launch. The Falcon first stage performed a soft landing at Cape Canaveral; it had been launched and recovered on a previous mission. The Dragon capsule was also recycled from a previous mission.

-- 17 DEC 17 / SOYUZ ISS 53S (ISS) -- A Soyuz booster was launched from Baikonur at 0721 UTC (local time - 6) to put the "Soyuz ISS 53S" AKA "Soyuz MS-07" manned space capsule into orbit on an International Space Station (ISS) support mission. The crew included vehicle commander Anton Shkaplerov of the RKA (third space flight), flight engineer Scott Tingle of NASA (first space flight), and astronaut Norishige Kanai of JAXA (first space flight). The Soyuz capsule docked with the ISS Rassvet module two days after launch, the three spacefarers joining the ISS Expedition 54 commander Alexander Misurkin, and NASA astronauts Mark Vande Hei and Joe Acaba.

-- 23 DEC 17 / GCOM-C, SLATS -- A JAXA H-2A booster was launched from Tanegashima at 0126 UTC (local time - 9) to put the "Global Change Observation Mission-Climate (GCOM-C)" space platform and the "Super Low Altitude Test Satellite (SLATS)" into orbit for the Japan Aerospace Exploration Agency (JAXA).

GCOM-C AKA "Shikisai (Color)" had a launch mass of 2 tonnes (2.2 tons) and carried a wide-area global imaging instrument payload -- including a visible radiometer, a near-infrared radiometer, and an infrared scanner. During its five-year mission, it was to perform surface and atmospheric measurements related to the carbon cycle and radiation budget, such as clouds, aerosols, ocean color, vegetation, and snow and ice.

GCOM-C

SLATS AKA "Tsubame (Swallow)" was an experimental technology demonstration satellite carrying an ion engine; it flew in a "super low" orbit where it encountered greater air resistance than most spacecraft, with the ion engine maintaining it in orbit. It had a launch mass of 400 kilograms (880 pounds); it was aerodynamically designed, and featured a coating on its thermal insulation to protect it from abrasion. It carried an imager to take pictures of the Earth.

-- 23 DEC 17 / IRIDIUM NEXT 31:40 -- A SpaceX Falcon 9 booster was launched from Vandenberg AFB at 0127 UTC (previous day local time + 8) to put ten "Iridium Next" satellites into orbit. The launch left an awe-inspiring trail in the evening sky that attracted considerable public attention. The Falcon booster first stage had been flown before, but was not recovered.

-- 23 DEC 17 / LKW 2 -- A Chinese Long March 2D booster was launched from Jiuquan at 0414 UTC (local time - 8) to put an Earth observation payload designated "LKW 2" into orbit. It was announced as an Earth survey satellite, but was judged to be a military optical surveillance satellite.

-- 25 DEC 17 / YAOGAN 30 x 3 -- A Long March 2C booster was launched from Xichang at 1944 UTC (next day local time - 8) to put the secret "Yaogan 30" payloads into orbit. It was a triplet of satellites -- including Yaogan 30G, 30H, and 30I -- and may have been a naval signals intelligence payload.

-- 26 DEC 17 / ANGOSAT -- A Ukrainian Zenit booster was launched from Baikonur at 1900 UTC (next day local time - 6) to put the "AngoSat" geostationary communications satellite into orbit. Built by RSC Energia in Russia, AngoSat had a launch mass of 1,647 kilograms (3,631 pounds). It was Angola's first satellite; communications were lost on arrival into orbit, but ground controllers managed to regain contact.

COMMENT ON ARTICLE
BACK_TO_TOP

[WED 17 JAN 18] RABIES VERSUS BRAIN CANCER

* RABIES VERSUS BRAIN CANCER: Sometimes medical researchers almost seem to go out of their way to shock the public. As a case in point, consider the title of an article by Matt Blois, dated 10 February 2017, from SCIENCEMAG.org: "How To Stop Brain Cancer -- With Rabies".

Rabies is a particularly fearsome disease because it attacks the brain. It is adapted to do so, using its ability to infect nerve cells as a conduit through the "blood-brain" barrier that prevents other pathogens from reaching the brain via the bloodstream. That same blood-brain barrier also complicates treatment of brain cancer. South Korean researchers, inspired by a dark Zen, have seen an opportunity in the ability of the rabies virus to infiltrate the brain -- leveraging off the virus to ferry tumor-killing nanoparticles into brain tumors, targeting them for elimination.

Researchers have already packaged cancer-fighting drugs into nanoparticles coated with part of a rabies surface protein that lets the virus gain access into the central nervous system. Nanoparticle expert Yu Seok Youn and his team at Sungkyunkwan University in Suwon, South Korea, have taken that approach a step further, having engineered gold particles so that they have the same rodlike shape and size as the virus. Once the particles are coated with the surface protein, that shape improves their ability to bind with receptors on nerve cells, which then give them access into the brain. The nanoparticles don't carry drugs, the gold itself being the therapeutic agent; it absorbs infrared laser light that can penetrate into the brain, heating up and destroying the surrounding tissue.

To test the efficacy of their nanoparticles against tumors, Youn and his team first injected them into the tail veins of four mice with brain tumors. The nanoparticles quickly made their way to the brain, where they accumulated near the tumor sites. The researchers then illuminated the nanoparticles with a near-infrared laser, heating them to about 50 degrees Celsius (120 degrees Fahrenheit). The tumors shrank dramatically. In another experiment, the researchers used the same treatment on mice with tumor cells that had been injected into their flanks. Tumors on two of the mice disappeared after 7 days, whereas the other tumors shrank to about half their original size.

Youn is not certain as to how the nanoparticles targeted the tumor cells. That they did is not disputed, but there's no saying at this time that they're all that selective, and so the treatment might be damaging healthy tissues as well. Feng Chen, a materials scientist at the Memorial Sloan Kettering Cancer Center in New York City, also worries about toxicity. Large nanoparticles along the lines of those used in Youn's experiment often end up in the liver and take a long time to clear out.

However, Youn sees the approach as very promising, and believes that proper design of the nanoparticles will minimize side effects: "Researchers need to develop [nanoparticles] precisely and effectively to target tumors. That's my obligation."

COMMENT ON ARTICLE
BACK_TO_TOP

[TUE 16 JAN 18] HUMAN PHAGEOME

* HUMAN PHAGEOME: The bacteriophages -- viruses that infect bacteria -- were discovered late during World War I. As discussed by an article from SCIENCEMAG.org ("Does A Sea Of Viruses Inside Our Body Help Keep Us Healthy?" by Giorgia Guglielmi, 21 November 2017), researchers are now zeroing in on the role that "phages" play in keeping us alive and healthy.

There was interest after the First World War in using phages to treat bacterial infections. There was considerable activity in the Soviet Union, but the approach never caught on elsewhere. One of the reasons for the disinterest was a failure to truly appreciate the universality of phages; they're found everywhere, from oceans to soils.

Now a study suggests that humans absorb up to 30 billion phages a day through their intestines. The significance of that fact is, at present, unclear, but it reinforces growing interest among scientists who wonder what influence the vast numbers of phages in the human body, the "phageome", have on our physiology. Do phages regulate our immune system?

Phage researcher Jeremy Barr -- of Monash University in Melbourne, Australia, previously at San Diego State University -- says that traditionally, phages were seen as having little or no role in the body, being merely there to infect the bacteria we host: "Basic biology teaching says that phages don't interact with eukaryotic cells." Now he's concluded "that's complete BS."

Early on in his research, as mentioned here in 2013, Barr felt that the focus on phages as antibiotics was too narrow; he wanted to take a wider look and see what turned up. His initial studies of phages showed that they might be acting as complements to the human immune system, protecting us from bacterial pathogens. Investigating animals ranging from corals to humans, Barr and his team found that phages are four times as abundant in mucus layers, like those that protect our gums and gut, as they are in the adjacent environment. It turned out that the protein shell of a phage can bind "mucins" -- large secreted molecules that, together with water, make up mucus. The result was a minefield for bacteria, keeping bacterial pathogens from reaching the cells underneath.

Now Barr has found that phages in the gut's mucus can make their way into the body. In lab dish experiments, his team showed that human epithelial cells -- such as those that line our guts, lungs, and the capillaries surrounding the brain -- take up phages and transport them across their interior. The researchers haven't figured out how they do it yet, but it's certain they do it, since the phages could be found enclosed in vesicles within the cells. The transfer was effectively one-way as well, from an exterior surface -- like the gut lining -- to the interior. The researchers estimated the rate of transfer of phages in a typical human as about 30 billion a day.

Molecular biologist Krystyna Dabrowska -- of the Polish Academy of Sciences's Institute of Immunology and Experimental Therapy in Wroclaw -- warns that what happens in a lab dish is not necessarily what happens in the human body. She is nonetheless intrigued by Barr's research, since it poses a question of interest to her own research: what are the phages doing after they have been absorbed into the body?

In 2004, researchers led by Dabrowska reported that a specific type of phage can bind the membrane of cancer cells, damping tumor growth and spread in mice. A few years later, her graduate adviser, phage expert Andrzej Gorski, showed that phages can affect the mouse immune system when injected, ramping down T-cell proliferation and antibody production. In mice, they can even prevent the immune system from attacking transplanted tissues.

Barr himself suspects that in humans, a steady influx of the viruses creates an "intrabody phageome", which may modulate immune responses. Recent studies reinforce that idea:

Barr suspects the phageome might be an immune-system signal. For example, a particular bacterial infection would produce phages targeting that bacteria in response. Once these phages are presented to the human immune system, it would target the bacteria as well.

Barr cautions that he's only speculating, thinking of attractive paths for future research. He says we don't know enough yet, that "phage biology is an inch wide and a mile deep." Medical applications? He says they "are probably decades away."

COMMENT ON ARTICLE
BACK_TO_TOP

[MON 15 JAN 18] UNDERSTANDING AI (7)

* UNDERSTANDING AI (7): As discussed by an article from ECONOMIST.com ("The Latest AI Can Work Things Out Without Being Taught", 21 October 2017), one of the milestones in the history of artificial intelligence technology came in 1997, when IBM's Deep Blue computer defeated world chess champion Garry Kasparov. Chess, it turns out, is not the toughest traditional game to master: it wasn't until 2016 that a computer defeated master players of the Asian game of Go, with a program named "AlphaGo", developed by DeepMind, defeating Go master Lee Sedol.

Go is a deceptively simple game, with players occupying intersections on a board grid of 19 x 19 lines with white versus black stones as per simple rules; whoever controls the most territory when the game runs out of possibilities wins. However, the game requires an ability to think in depth and detail, leading to a "combinatorial explosion" that defeated a brute-force approach, and so long stymied computers.

AlphaGo used AI instead of the brute-force approach. The program learned to play Go by supervised learning, studying thousands of games between expert human opponents, extracting rules and strategies from those games, and then experimentally playing millions of matches against itself.

Although AlphaGo ended up able to defeat any human player, DeepMind researchers still felt it could be further refined. They moved on to an improved version, "AlphaGo Zero", which was more competent at the game, acquired expertise much more quickly, and didn't require as much computing horsepower. Most significantly, however, AlphaGo Zero learned how to become an expert Go player without reference to human players.

As with chess players, Go players focus on strategic visions and tactical "tropes" to play the game. Players talk of features such as "eyes" and "ladders", and of concepts such as "threat" and "life-and-death". The original AlphaGo, through supervised learning, acquired the visions and ploys of human players.

The problem with supervised learning is that giving an AI system an adequate set of examples to learn from is very laborious and expensive. It also limits the AI system to human ways of doing things. A computer doesn't have the built-in biases of the human brain; it may be able to come up with its own ways of thinking out tasks that work better.

AlphaGo Zero discarded supervised learning. The program started with only the rules of the game and a "reward function" -- which awarded a point for a win, and docked a point for a loss, the software's goal being to maximize wins. AlphaGo Zero originally just placed stones at random; but after one day it was playing at an advanced professional level, and after the second day, it could defeat the original AlphaGo.
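
To give a sense of how far "just the rules and a win/loss reward" can go, here is a minimal self-play sketch in Python. It uses a toy counter-taking game and simple Monte Carlo value updates -- a drastic simplification standing in for AlphaGo Zero's neural network and tree search, not DeepMind's actual method:

    import random
    from collections import defaultdict

    # Toy game: players alternately take 1-3 counters; whoever takes the last
    # counter wins. Both sides share one value table and improve by self-play,
    # rewarded +1 for a win and -1 for a loss -- nothing else is supplied.
    ALPHA, EPSILON, GAMES, START = 0.1, 0.2, 20000, 11
    Q = defaultdict(float)      # Q[(counters_left, move)] -> estimated value

    def choose(counters):
        moves = [m for m in (1, 2, 3) if m <= counters]
        if random.random() < EPSILON:                      # occasional exploration
            return random.choice(moves)
        return max(moves, key=lambda m: Q[(counters, m)])  # otherwise play greedily

    for _ in range(GAMES):
        counters, player, history = START, 0, []
        while counters > 0:
            move = choose(counters)
            history.append((player, counters, move))
            counters -= move
            player = 1 - player
        winner = 1 - player                 # the player who took the last counter
        for who, state, move in history:
            reward = 1.0 if who == winner else -1.0
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

    # The learned policy should settle on taking 3 from 11 -- leaving a multiple
    # of 4, which is the known optimal play for this toy game.
    print(max((1, 2, 3), key=lambda m: Q[(START, m)]))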

In simpler terms, AlphaGo Zero rediscovered for itself effectively everything humans had ever learned about playing Go. Its creators, observing its progress, sometimes found it strangely humanlike, making errors exactly like those of human novices, which it quickly outgrew. At other times, it went off in strategic directions that made no sense to a human player, discarding those that didn't work, retaining those that did. Expert Go players in combat against AlphaGo Zero found it alien, superhuman, as far beyond those expert human players as they were beyond skilled amateurs.

That gives fuel to those who fear AI, that it will eventually outpace humans in all respects. However, expert Go players don't seem unduly worried about AlphaGo Zero showing them up -- any more than weightlifters are concerned by the fact that a fork lift can pick up and carry around far more weight than any human can. That's precisely why there are fork lifts. In fact, expert Go players have found being beaten by AlphaGo Zero educational, since some of its strategies and tactics are completely unfamiliar, providing instruction to humans on how to play a tougher game.

Besides, as before, Go is a structurally simple game with a handful of clearly-defined rules, and relatively easy for a machine to get a grip on; not all difficult tasks that humans can accomplish are so well-structured. When taking on a poorly-defined task such as planning a vacation, humans draw on powers of reasoning and abstraction that so far elude AI software.

They may always do so; why would we want a machine to, say, tell us what kind of a vacation we want to take? At most, we could tell it where we want to go and what we want to do, then let it offer options and work out the specifics. We would nonetheless have to realize there is no game of pure strategy that a machine won't be able to beat us at, sooner or later.

* A follow-on article from WIRED.com discussed the next generation beyond AlphaGo Zero, named "AlphaZero". It was given the ability to handle a wider range of moves / rules, to then be (separately) programmed with the rules of Go, chess, and shougi (Japanese chess). AlphaZero took eight hours to exceed AlphaGo Zero at Go, four hours to top out at chess, and two hours to top out at shougi. DeepMind says that training AlphaZero took 5,000 of Google's custom machine-learning processors, known as "TPUs". WIRED suggested that the DeepMind team may focus on using AlphaGo, or a derivative, against the StarCraft online game -- which has a large number of pieces, and a large set of rules, making it a challenge for an AI system. [TO BE CONTINUED]

COMMENT ON ARTICLE
BACK_TO_TOP

[FRI 12 JAN 18] ONCE & FUTURE EARTH (21)

* ONCE & FUTURE EARTH (21): Although it seemed in the 1970s and 1980s that the air pollution challenge was being met, in the 1990s worries began to spread that human activities had an effect that promised to be much harder to deal with: global warming, leading to chaotic climate change.

In 1896, the Swedish chemist Svante Arrhenius published a paper in which he suggested that Ice Ages might be linked to atmospheric concentrations of CO2. The Sun pours light down on the Earth, heating it up; the warm Earth then produces infrared radiation, much of which escapes off into space. Atmospheric CO2 tends to "trap" infrared radiation, preventing it from escaping and making the Earth warmer; in modern terms, CO2 is a "greenhouse gas". The trapping effect is proportional to CO2 concentrations, and so low CO2 concentrations might have led to the Ice Ages.

There was concern at the time, and later, that the Earth was headed for another Ice Age, which would undoubtedly have a brutal impact on human population, but in a later book Arrhenius suggested: not to worry. Human industrial emissions of CO2 would be strong enough to prevent the Earth from slipping back into another Ice Age, and the warmer Earth that would result from these high CO2 levels would allow humans to grow more crops to feed an expanding population.

Climate scientists generally believed that Arrhenius was right in principle, but in the period after World War II there was actually a cooling trend. Some climate scientists -- not all, it seems not a majority -- even suggested that a new Ice Age might be imminent, and in fact as late as 1975 the US magazine NEWSWEEK ran an article titled "The Cooling World", which predicted that a disastrous Ice Age was then in the making.

However, the midcentury cooling trend, it turned out, was ironically also due to emissions -- of particulate pollutants, which reflected sunlight back into space and helped cool the world. Effective pollution control measures dropped the concentration of particulates, and the temperature began to climb again. By the 1990s, climatologists had become increasingly worried about what might happen to the Earth if CO2 concentrations continued their climb, and spoke out about their concerns.

At that time, there was considerable public skepticism over "anthropogenic global warming (AGW)" -- human-caused climate change -- with the climate research community accused of sloppy research, hysteria, even fraud. It would take about two decades to resolve the dispute.

The foundation of global climate theory is the simple fact, established by thermodynamics, that for a planet to maintain a constant temperature, the amount of energy absorbed from sunlight must be matched by the amount of energy the planet loses to space in the form of infrared thermal radiation, with the intensity of this radiation increasing with temperature. The Earth receives an average of 239 watts of sunshine per square meter; a simple body re-radiating that energy back into space would have an average temperature of -18 degrees Celsius -- about zero Fahrenheit.
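
That -18 degree figure follows directly from the Stefan-Boltzmann law; a quick check in Python, assuming only the standard value of the Stefan-Boltzmann constant:

    # Temperature of a body radiating away 239 watts per square meter:
    # T = (S / sigma) ** 0.25, from the Stefan-Boltzmann law.
    SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W / (m^2 * K^4)
    S = 239.0             # average solar input from the text, W per square meter

    t_kelvin = (S / SIGMA) ** 0.25
    print(round(t_kelvin - 273.15, 1))               # about -18 degrees Celsius
    print(round((t_kelvin - 273.15) * 9 / 5 + 32))   # about -1, roughly zero Fahrenheit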

Clearly, on the average the Earth is warmer than that, and the reason that is so is because the greenhouse gases, like CO2, in the Earth's atmosphere block the escape of infrared thermal radiation back into space by absorbing it and re-emitting it -- incidentally, in the tenuous upper atmosphere where greenhouse gases are too diffuse to have much of an effect, the average planetary temperature really is about -18 degrees Celsius. Increasing the concentration of greenhouse gases makes it harder for the heat to leak out, with the surface of the Earth and the lower atmosphere heating up. The rise in temperature alters the way the atmosphere transports energy from the warm equator to the cold poles, changing weather patterns. [TO BE CONTINUED]

COMMENT ON ARTICLE
BACK_TO_TOP

[THU 11 JAN 18] GIMMICKS & GADGETS

* GIMMICKS & GADGETS: As discussed by an article from BBC.com ("Robot Automation Will Take 800 Million Jobs By 2030", 29 November 2017), a study of 46 countries and 800 occupations by the McKinsey Global Institute found that up to 800 million global workers will lose their jobs by 2030 and be replaced by robotic automation. One-fifth of the global work force will be affected; one-third of the workforce in rich nations like Germany and the US may need to retrain for other jobs. Machine operators and food workers will be hit the hardest. Poorer countries that aren't capable of the same level of investment in automation won't be hurt as badly. India, for example, will only see about 9% of its workforce displaced by automation.

The report suggests tasks carried out by mortgage brokers, paralegals, accountants, and certain back-office staff are particularly vulnerable to automation. Occupations that can't be reduced to a routine -- doctors, lawyers, teachers, care workers, plumbers, gardeners, and bartenders -- are less vulnerable.

In developed countries, the pool of jobs not requiring higher education is going to shrink, while those that do require higher education will grow. In the US alone, 39 to 73 million jobs could be eliminated by 2030, but about 20 million of those displaced workers should be able to transfer readily to other industries.

* As discussed by an article from WIRED.com ("Soon Your Desk Will Be a Computer Too" by Elizabeth Stinson, 5 July 2017), back in the early 1990s the Xerox Palo Alto Research Center -- PARC, at the time a fountain of innovative ideas in computing -- demonstrated a concept called the "Digital Desk". It looked like an ordinary workstation built out of metal -- except for dual cameras hanging from a frame above the desk, with the cameras keeping track of a user's movements. The workstation also included a projector that cast a display image onto the desktop.

The Digital Desk was a dazzlingly impressive idea. A user might highlight text in a book or magazine, then drag & drop the text onto an electronic document. Expenses could be logged into a spreadsheet or the like by similarly reading the numbers from a paper receipt. However, the idea was too radical, too questionably practical to take off.

Now, researchers at Carnegie Mellon University (CMU) in Pittsburgh PA have revived the Digital Desk concept in a project with the somewhat unwieldy name of "Desktopography", the brainchild of CMU computer scientist Robert Xiao. Xiao says: "I really want to break interaction out of the small screens we use today and bring it out onto the world around us."

As with Digital Desk, Desktopography projects digital applications onto a desktop, where a user can swipe, tap, or otherwise interact with it. It is, not surprisingly, a technological generation ahead of its predecessor. Using a depth camera and pocket projector, Xiao built a module that people can screw directly into an ordinary lightbulb socket. Presumably, it communicates by wireless.

The camera maintains a continuously updated 3-D map of the desktop, keeping track of when objects move and when hands enter the scene. The camera relays inputs to a processor system, which can distinguish between different objects and observe actions on the desktop, interpreting the desktop as equivalent to a big touchscreen.

The big problem is that workspaces tend to be cluttered with books, papers, cups, the occasional trinket, and so on. The Desktopography software sorts through the clutter, maps what it sees, and then uses the map to figure out what to do. It will try to display an app in a flat, clear space, but it will make do as best it can if it can't find an optimum location. Move a book or magazine around, and the software will automatically reorganize and resize its apps.

The user interface is much like that on a tablet, involving tapping, pinching, and swiping -- though Xiao added some new tricks, like tapping with five fingers to bring up an application launcher, or lifting a hand to exit an app. It also will try to get a crisp fit of displays projected onto a tablet or phone. The goal is to integrate the camera and projection technology with a conventional LED light bulb, the target price being about $50 USD.

* The history of the shopping cart was discussed here in 2013, the technology being shown as mature, with little innovation. There is some, as discussed by an article from, of all places, the Target store website ("Caroline's Cart is Rolling into Target Stores Nationwide", 4 February 2016) -- case in point being "Caroline's Cart", which is a shopping cart for families who have to care for kids with special needs.

Caroline's cart

Bringing an impaired child in a wheelchair along to supermarket shopping is problematic. Caroline's Cart adds a rear-facing large seat for the child, with bicycle-type handlebars to each side of the cart instead of the usual crossbar handle. Target started testing the carts in early 2015, with them going into standard operation in 2016.

Caroline is actually the impaired daughter of Drew Ann Long, an inventor and stay-at-home mom from Alabama. When Caroline was small, Drew Ann could put her in the normal cart seat, but realized that wasn't going to work after Caroline got bigger. She came up with the idea for the cart, with her and her husband David founding Parent Solution Group LLC in 2008. They partner with Technibilt of Newton, North Carolina -- a manufacturer of commercial shelving, shopping carts, and such -- to produce Caroline's Cart.

COMMENT ON ARTICLE
BACK_TO_TOP

[WED 10 JAN 18] NO MICROBIOME?

* NO MICROBIOME? There's been much activity in the study of the microbiome, the set of microorganisms that co-exist with humans and other large organisms. As discussed by an article from SCIENCEMAG.org ("The Curious Case Of The Caterpillar's Missing Microbes" by Erin Ross, 18 May 2017), researchers have been somewhat startled to find out that some species, including caterpillars, do without them.

Tobin Hammer, an evolutionary ecologist at the University of Colorado in Boulder, investigated the intestinal microbes of 124 species of wild, leaf-eating caterpillars from the Americas by sequencing a gene commonly used to identify microorganisms. Hammer and his colleagues finally reported that they found no sign of what he calls "resident" microbes; the caterpillars do not have a permanent community of microorganisms.

Other studies have suggested that some animals and insects don't have microbiomes. Hammer says researchers have had trouble getting such papers into print, he thinks because it's hard to nail down a null result. Surely they simply overlooked something? Besides, it is known that herbivores such as cows need gut microbes to break down the fibers in plant cell walls. Since all of the caterpillars that Hammer studied ate leaves, Hammer thought at the outset they would similarly have a diverse and elaborate microbiome.

Hammer said that he was mistaken: "Caterpillars are not mini cows." Researchers inspecting cowflops find more microbe DNA than plant DNA; but Hammer found it was the opposite case with caterpillars. The few bacteria and viruses he found appeared to come from the insect's food and environment. To be thorough, he also hatched 72 tobacco hornworms (Manduca sexta), a common North American moth, and treated them with varying levels of antibiotics to "sterilize" them of microbes. He found that such treatments had no effect on the survival or health of the hornworms.

Other studies of caterpillars have come to similar negative conclusions about caterpillar microbiomes, but each of them only considered a handful of caterpillar species -- while Hammer's research examined more than a hundred. He also went beyond other studies in dosing the hornworms with antibiotics to kill off microorganisms.

Melissa Whitaker -- an ecologist who studies the relationships between caterpillars and bacteria at Harvard University in Cambridge, Massachusetts -- says the implications of Hammer's work are huge. There are about 180,000 known species of caterpillars; Whitaker says: "They're one of the largest groups of herbivores. If they're not relying on the bacteria in their guts to help with their diet, what are they relying on? It's got to be something entirely different. It's fascinating."

Similarly, Jon Sanders -- a postdoc at the University of California, San Diego, who investigates the co-evolution of microbes and their hosts -- performed a study of Peruvian ants, to find that some didn't have an intestinal microbiome. He had serious problems getting his paper published.

Entomologist Matan Shelomi, affiliated with the Max Planck Institute for Chemical Ecology in Jena, Germany, searched for a microbiome in the gut of herbivorous stick insects (Phasmatodea), to find nothing. However, he eventually found that Phasmatodea could break down pectin, another fiber found in plant cell walls, using genes stolen from bacteria early on in the insects' evolutionary history. He also had problems getting published.

Hammer adds: "Anecdotally, I've heard from researchers having similar problems in birds and fish." His own study included data from several vertebrate species as controls. He found that some, like goats, do have a gut microbiome, but he could find no such thing in droppings from geese and bats. Microbial symbionts are so common that it is hard to believe that some species don't have them, but there's no saying they all do. Whitaker says: "As a discipline we're really ready to claim that everything is related to the microbiome and every organism has one. It only takes one exception before all of that goes out the window."

COMMENT ON ARTICLE
BACK_TO_TOP

[TUE 09 JAN 18] CANNY ROBOCARS?

* CANNY ROBOCARS? As discussed by an article from WIRED.com ("To Survive the Streets, Robocars Must Learn to Think Like Humans" by Eric Adams, 19 October 2017), driving a car is a very complicated activity. In fast traffic, the driver has to keep an eye on the traffic flow; in neighborhoods, the driver has to watch out for cyclists, kids, pedestrians, and pets. A competent driver has a lifetime of experience and context to know how to handle the environment.

Trying to build a robocar to handle that environment is not trivial. To be sure, it is straightforward for a robocar to slow down or stop if something seems wrong -- but that might well mean a car that won't go anywhere. A timid robocar would suffer particularly in places like New York City, where the pedestrians are aggressive in challenging right-of-way against drivers. Anca Dragan -- who studies autonomy in UC Berkeley's electrical engineering and computer sciences department -- comments: "We call it the freezing robot problem. Anything the car could do is too risky, because there is some worst-case human action that would lead to a collision."

Researchers are now leveraging off artificial intelligence to teach robocars, through modeling and repetitive observation, how to interpret and appropriately react to events in their environment. According to Dragan:

BEGIN QUOTE:

Unlike, say, a tumbleweed moving along the street under the wind's effect, people move because they make decisions. They want to do something, and they act to achieve it. We're first looking into inferring what people want based on the actions they've been taking so far. So their actions are rational when seen from [that perspective], and would appear irrational when seen from the perspective of other intentions.

END QUOTE

In other words, the robocar doesn't work on the basis of psychology -- what a subject is thinking -- but on the basis of "intentionality" -- what a subject intends to do. If a robocar waits at a stop sign before a busy street and notices a car at the stop sign on the other side of the street, the robocar will assess the car's turn signal to determine if the driver intends to turn left, turn right, or cross the street. The robocar doesn't know what the driver is thinking, but doesn't need to know.

Suppose, for another example, the robocar spots a car coming up an on-ramp onto a freeway. The robocar knows that a competent driver will try to match speeds with the traffic flow and merge into the inner lane; the robocar will slow down, speed up, or move to another lane to compensate. In short, the robocar will have to do more than just log the individual elements of a scene and their actions; it will have to fit them into a "story" to know what to do next.

For yet another example, envision the robocar spotting a man walking towards a curb. If he seems to be getting ready to cross the street, the robocar is likely to stop; but if he's carrying car keys, it's more likely he's planning to get into his car. The robocar may slow down or move into another lane, but it won't stop.

According to Melissa Cefkin, a design anthropologist at Nissan's Silicon Valley R&D center: "The ways people move through the environment are already culturally and socially encoded. It's not always people-to-people interactions, but people interacting with things, too."

Cefkin adds that, unsurprisingly, things get more complicated when there's multiple agents involved in the scene: "If a pedestrian is going to cross in front of me, rather than looking at me they're just as likely to look out into traffic for a gap. So now I'm trying to figure out whether or not it's safe to keep going based on what the rest of the traffic is going to do."

Obviously the number of possible scenarios is very large, but those working in the field don't see training robocars to get along in the world as impossible. At the Delft University of Technology in the Netherlands, Dariu Gavrila is training learning systems to deal with factors such as road debris, traffic police, and things as unusual as someone pushing a cart down the middle of the street. Such dynamic factors interact with static factors, such as curbs, driveways, building entrances, and traffic signs.

A robocar must also factor itself into the equation -- it's part of the scene -- as well as deal with subtleties, such as a person's head looking one direction while the torso is pointing in another. Gavrila says: "Recognizing pedestrian intent can be a life saver. We showed in real vehicle demonstration that an autonomous system can react up to one second faster than a human, without introducing false alarms."

Of course, a learning system doesn't have much ability to plan far ahead: "Uncertainty in future pedestrian or cyclist position rapidly increases with the prediction horizon -- how many seconds in the future we're trying to model. Basic behavior models already stop being useful after one second. More sophisticated behavior models might give us up to two seconds of predictability."

To be sure, a robocar can follow a long-range trip plan, factoring in traffic and weather conditions -- but that isn't the same as the ability to assess the car's local environment. It wouldn't be cost-effective to have the robocar try to think very far ahead in those terms; the possibilities are too intractable, too open-ended. According to Jack Weast, Intel's chief systems architect for autonomous drive systems: "When you're essentially trying to predict the future, that's a massive computational task, and of course it just produces a probabilistic guess. So rather than throw a supercomputer into every car, we just want to ensure that the car's never going to hit any of those people anyway. It's a much more economically scalable way of doing things."

It won't be possible to allow robocars to drive themselves without supervision until they're at least as capable of driving safely as a human driver -- and likely well more so, since people have justifiable difficulties with the idea of cars driving themselves. However, once robocars arrive, continued learning from experience will drive the accident rate down towards zero. Indeed, once robocars become ubiquitous, they will cooperate with each other for safety, a robocar sending warnings via wireless to other robocars in the vicinity. A human driver can't read the minds of other drivers; a robocar will do it on a continuous basis -- the end result being that robocars, through distributed intelligence, will have a big advantage over human drivers.

COMMENT ON ARTICLE
BACK_TO_TOP

[MON 08 JAN 18] UNDERSTANDING AI (6)

* UNDERSTANDING AI (6): As discussed by an article from SCIENCEMAG.org ("Computers Are Starting To Reason Like Humans" by Matthew Hutson, 14 June 2017), humans are capable of "relational reasoning" -- putting together the pieces of the puzzle of a poorly-defined problem to see an answer from the whole. Traditionally, computers don't do well at such poorly-defined problems; but now researchers at Google's Deepmind office in London have figured out how to get an artificial intelligence system to use relational reasoning. Although the work isn't far along, results so far have been impressive.

Up to the present, there have been two classes of AI systems: machine-learning systems such as artificial neural networks (ANN), which are good at picking patterns out of raw data, but poor at reasoning about the relationships between the things they perceive; and symbolic systems, which can reason about relationships, but only by following rules laid down in advance by humans, with little ability to learn from examples.

The Deepmind researchers decided to try to bridge the gap with an ANN focused on relational processing. They designed a "relation network" that picked out all the pairings of elements in a scenario to examine their relationships. A team under Deepmind researcher Timothy Lillicrap began by using the relation network to examine an image containing a number of simple objects -- cubes, spheres, cylinders. The network was then posed questions such as: "There is an object in front of the blue thing; does it have the same shape as the tiny cyan thing that is to the right of the gray metal ball?"

They coupled the relational network to two other ANNs for the task: one ANN for recognizing objects in an image, another for interpreting the question. In trials, other machine-learning algorithms were right 42% to 77% of the time, while humans scored 92%. The relational network system was correct an impressive 96% of the time.
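
To make the pairing scheme a bit more concrete, here's a minimal sketch in Python -- my own illustration, not DeepMind's code; the dimensions and helper names are made up -- with two small multilayer perceptrons standing in for the trainable functions: one is applied to every pair of objects along with the question, the other to the sum of all those pairwise outputs:

   import numpy as np

   def relu(x):
       return np.maximum(x, 0.0)

   def mlp(x, weights):
       """A tiny two-layer perceptron: relu(x W1 + b1) W2 + b2."""
       W1, b1, W2, b2 = weights
       return relu(x @ W1 + b1) @ W2 + b2

   def relation_network(objects, question, g_weights, f_weights):
       """Score answers from all pairwise object relations, conditioned on a question.

       objects:  (n, d) array of object feature vectors (e.g. from a vision network)
       question: (q,)   array encoding the question (e.g. from a language network)
       """
       n = len(objects)
       pair_sum = 0.0
       for i in range(n):
           for j in range(n):
               # the "g" network examines one pair of objects together with the question
               pair = np.concatenate([objects[i], objects[j], question])
               pair_sum = pair_sum + mlp(pair, g_weights)
       # the "f" network turns the summed pairwise relations into answer scores
       return mlp(pair_sum, f_weights)

   # toy dimensions: 4 objects with 8 features each, a 6-dimensional question, 3 answers
   rng = np.random.default_rng(0)
   d, q, h, a = 8, 6, 16, 3
   g_weights = (rng.normal(size=(2 * d + q, h)), np.zeros(h),
                rng.normal(size=(h, h)), np.zeros(h))
   f_weights = (rng.normal(size=(h, h)), np.zeros(h),
                rng.normal(size=(h, a)), np.zeros(a))
   print(relation_network(rng.normal(size=(4, d)), rng.normal(size=q), g_weights, f_weights))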

The DeepMind team also tried its neural net on a language-based task, in which it received sets of statements such as: "Sandra picked up the football." -- and: "Sandra went to the office." These were followed by questions like: "Where is the football?" With the answer: "At the office." The system performed about as well as other AI algorithms on most questions, but it was a cut above the others on "inference questions", such as: "Lily is a swan. Lily is white. Greg is a swan. What color is Greg?" White, of course. On inference questions, the system scored an impressive 98% -- while the competition scored about 45%.

Finally, the system analyzed animations in which 10 balls bounced around, some connected by invisible springs or rods. Using the patterns of motion alone, it was able to identify more than 90% of the connections. It then used the same training to make out human forms represented by nothing more than sets of moving dots.

The scheme is actually simple, Lillicrap describing it as a "plug-&-play" approach, in which different subsystems with different competencies can be plugged together. The sky's the limit for applications; relation networks could someday help study social networks, inspect surveillance video, or guide robocars through traffic. However, getting more sophistication may require not just pairs of things -- but triplets; pairs of pairs; or, for efficiency, some pairs wisely selected from a larger set. We're still a long ways from reaching the capability of the human brain. [TO BE CONTINUED]

COMMENT ON ARTICLE
BACK_TO_TOP

[FRI 05 JAN 18] ONCE & FUTURE EARTH (20)

* ONCE & FUTURE EARTH (20): Over the past few million years, the modern human species emerged from earlier, less intelligent primates. Following the last ice age, humans gradually spread around the Earth -- taking up agriculture a few thousand years ago, leading to extended societies, and the growth of technical civilization. A few hundred years ago, humans began to industrialize, with their transformation of the Earth moving into higher gear.

Pollution from factory smokestacks and from the production of chemicals became widespread; it wasn't until after World War II, however, that awareness of environmental issues caught up. In December 1952, unusual weather conditions carpeted London with a smoky fog -- what would be given the name of "smog" -- for the better part of a week, killing thousands of people. There was also a growing awareness that pesticides and other chemicals might be having unexpected impacts on the environment.

From the mid-1960s, environmental regulations controlling the emission or dumping of toxic substances were established in industrialized nations. Companies that dumped toxic chemicals were fined, and improved techniques for disposal of public and industrial wastes were developed to prevent contamination of the soil and water. Remediation of wastes focused on incineration or chemical neutralization, or when that wasn't possible, disposal in secure toxic waste dumps. From the 1970s, there was also a push to reduce pollution from industrial smokestacks and automotive exhaust.

On inspection, environmental problems had a nasty tendency to appear out of nowhere. Early household refrigerators used noxious gases such as sulfur dioxide and ammonia as coolant fluids, with documented cases of families being killed by coolant leaks. In the 1930s, CFCs were introduced as a replacement coolant, and they seemed all but perfect for the job: they were effective, cheap, nonflammable, noncorrosive, and in particular nontoxic. A person can breathe CFCs and suffer no harm, except from oxygen deprivation.

By the 1970s, CFCs were not only in widespread production and use as coolant fluids in refrigerators and air conditioners, they were also used as "blowing agents" to bubble up foam plastic insulation, as cleaning agents, and as spray propellants. In 1974, however, researchers discovered that CFCs might well be depleting the ozone layer. Once released, the CFCs could migrate to high altitudes and be broken apart by ultraviolet radiation that didn't reach lower altitudes:

   CF2Cl2  --UV-->  CF2Cl  +  Cl

The reactive chlorine atoms would then react with ozone in a two-step process to produce oxygen:

   Cl  +  O3  -->  ClO  +  O2

   ClO  +  O  -->  Cl  + O2

The particularly unpleasant thing about this reaction was that it was catalytic: the chlorine was not consumed in the reaction, which meant that a small amount of chlorine could convert a vastly larger amount of ozone into diatomic oxygen. The argument was theoretical at the time, but by the early 1980s, satellite observations were showing a growing region of ozone depletion over the South Pole, with the hole getting bigger and bigger every winter. The matter became a public controversy.
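
The arithmetic of catalysis is what makes it so nasty. A toy bookkeeping sketch in Python -- my own illustration; the figure of 100,000 cycles is an often-quoted rough estimate, not from the article -- shows how one chlorine atom keeps destroying ozone until some other reaction finally locks it up:

   def ozone_destroyed_per_cl(cycles_before_removal):
       destroyed = 0
       for _ in range(cycles_before_removal):
           # Cl + O3  -->  ClO + O2   (one ozone molecule destroyed)
           # ClO + O  -->  Cl + O2    (chlorine regenerated, free to repeat)
           destroyed += 1
       return destroyed

   # if a chlorine atom survives roughly 100,000 cycles before being sequestered,
   # each CFC molecule released ultimately wipes out roughly 100,000 ozone molecules
   print(ozone_destroyed_per_cl(100_000))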

Not everyone agreed that CFCs were a real threat. There was no serious dispute that CFCs could cause ozone depletion, but the trick was that the depletion was strongly enhanced by cold temperatures, which was why the Antarctic ozone hole only appeared in the winter. On that basis, it was possible to argue that ozone depletion by CFCs would not amount to a threat at lower latitudes. However, on the basis of the notion of "better safe than sorry", in 1987 representatives from 43 nations signed the "Montreal Protocol", which mandated the gradual phaseout of CFCs.

CFCs were to be replaced by hydrofluorocarbons (HFCs). HFCs contain no ozone-depleting chlorine, though they are not as efficient as refrigerants, and are more expensive. However, HFCs fell out of favor in the next century, as discussed later. [TO BE CONTINUED]

COMMENT ON ARTICLE
BACK_TO_TOP

[THU 04 JAN 18] SCIENCE NOTES

* SCIENCE NOTES: As per an article from SCIENCEMAG.org ("Your Kitchen Sponge Harbors Zillions Of Microbes" by Giorgia Guglielmi, 28 July 2017), according to a recent study, an ordinary funky kitchen sponge is loaded with microbes -- including relatives of the bacteria that cause pneumonia and meningitis. One of the microbes, Moraxella osloensis, can infect people with weak immune systems, and is also known for making laundry stink -- which, it seems, explains the nasty odor of a kitchen sponge.

Researchers discovered this by sequencing the DNA from 14 used kitchen sponges. What was surprising was that microwaving sponges didn't help; indeed, some of the worst actors were predominant in sponges that had been cleaned, it seems because microwaving killed off less troublesome, but less hardy, competitors. Pathogens tend to be tougher than more benign bacteria, since they target organisms whose immune systems try to kill them off.

Analysis showed that a cubic centimeter of sponge could harbor 5E10 bacteria -- a density comparable to that in feces. What to do, then? Treat sponges as an expendable item, and throw them out once a week.

* As discussed by an article from SCIENCEMAG.org ("Why Whales Grew To Such Monster Sizes" by Elizabeth Pennisi, 23 May 2017), the blue whale is the biggest animal that has ever lived. From an evolutionary point of view, it is a puzzle as to why the great whales got so big. Certainly, it's easier to support such great bulk in water than it is on land, but what evolutionary advantage does sheer size provide? Were whales involved in an "evolutionary" arms race with predators that were getting bigger as well?

In 2010, Graham Slater, an evolutionary biologist now at the University of Chicago in Illinois, suggested that cetaceans -- dolphins and whales -- split into groups characterized by different scales in their history, perhaps 30 million years ago. Dolphins remained small, filter-feeding baleen whales became giants, and predatory beaked whales stayed mid-sized.

Nicholas Pyenson, a whale expert at the Smithsonian Institution's National Museum of Natural History in Washington DC, was skeptical. A few years ago Slater and Pyenson decided to settle matters by examining the museum's huge fossil collection. Pyenson had already surveyed living whale proportions, and determined that the size of the whale correlated with the width of its cheek bones; he then measured or obtained these data from skulls of 63 extinct whale species and of 13 modern species, to plot them on a whale family tree with timeline.

According to a study published by the researchers, the data suggested that whales didn't get really big early on, instead remaining only moderately large until about 4.5 million years ago. Slater says they then went "from relatively big to ginormous." A modern blue whale is 30 meters (98 feet) long; up to 4.5 million years ago, the biggest whales were about 10 meters (33 feet) long.

What happened 4.5 million years ago that might be correlated to the change? Slater and his colleagues investigated, to find that the baleen whales' growth spurt coincided with the beginning of the first ice ages. The researchers suspect that as glaciers expanded, spring and summer runoff poured nutrients into the coastal ocean, fueling explosive growth in krill and the small animals the whales consumed. Up to that time, prey had been plentiful all year round; the climate change meant a decline in overall ecosystem productivity, but the new seasonal runoff created a new pattern of food availability: seasonal bursts of very abundant food, spaced far apart over the course of the year.

Jeremy Goldbogen of Stanford University, who studies whale behavior and collaborated in the research efforts, thinks he knows why that change in environment drove bigger whales. Goldbogen's studies show that the more concentrated the food supply, the more efficient the feeding, especially in whales with really big mouths. In addition, larger whales can travel faster between patches of prey. In short, baleen whales that were bigger had a selective advantage over smaller baleen whales in moving between locales and feasting when the feeding was good, gradually driving the smaller baleen whales out of business.

The study is seen as persuasive; but selection pressures are rarely black and white, and some other researchers suspect that an arms race with big predators, like giant sharks, was a factor as well. Being higher up on the food chain, such predators would find it hard to keep up with prey that got too big for them, since it would become problematic for bigger sharks to stay fed.

* As discussed by another article from ECONOMIST.com ("Spider Bites", 18 March 2017), we tend to pay more attention to spiders than the other "creepy-crawlies" around us, because we find them, well, creepy. We generally do not wonder how common they are, and what their environmental impact is.

Martin Nyffeler of the University of Basel in Switzerland and Klaus Birkhofer of Lund University in Sweden decided to investigate, using estimates of the density of spiders per square meter in different environments and their food requirements to determine the total biomass of spiders, along with the mass of prey they collectively ate. They came up with a total spider mass of 25 million tonnes, and estimated that they ate from 400 million to 800 million tonnes of prey a year.

That puts spiders in a league with humans, who consume about 400 million tonnes of meat a year. The total biomass of humans is about 400 million to 500 million tonnes, so spiders are literally punching an order of magnitude above their weight. Without spiders, there would be a lot more of the other creepy-crawlies around.
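
As a quick sanity check on that claim -- using nothing more than the midpoints of the figures quoted above -- the prey-to-biomass ratios work out like so:

   spider_biomass = 25                  # million tonnes
   spider_prey    = (400 + 800) / 2     # million tonnes of prey per year (midpoint)
   human_biomass  = (400 + 500) / 2     # million tonnes
   human_meat     = 400                 # million tonnes of meat per year

   spider_ratio = spider_prey / spider_biomass   # ~24 times their own mass eaten yearly
   human_ratio  = human_meat / human_biomass     # ~0.9 times
   print(spider_ratio / human_ratio)             # spiders out-eat us ~27-fold, per unit mass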

COMMENT ON ARTICLE
BACK_TO_TOP

[WED 03 JAN 18] COSMIC LIGHTS SHOW

* COSMIC LIGHTS SHOW: As discussed by an article from SCIENCEMAG.org ("Merging Neutron Stars Generate Gravitational Waves And A Celestial Light Show" by Adrian Cho, 16 October 2017), four times since 2015, scientists in charge of gravitational-wave observatories have spotted gravitational waves, caused by cosmic cataclysms in the distant Universe.

At 1241 UTC on 17 August 2017, researchers at three gravitational-wave observatories -- the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO), with their 4-kilometer arms, in Hanford, Washington, and Livingston, Louisiana, and the 3-kilometer Virgo detector near Pisa, Italy -- spotted a fifth event that was unlike the previous four. The earlier events had lasted for only a few seconds, with gravitational waves at frequencies of tens of hertz. The new event went on for 100 seconds, with the signal going up to thousands of hertz.

The earlier signals were interpreted as generated by pairs of black holes, in a shrinking mutual orbit, finally merging into each other. The new signal was interpreted as due to the collision of a pair of neutron stars, with masses 1.1 and 1.6 times that of the Sun, falling into each other to form a black hole.

While the merger of the black holes left no trace observable by other means, the collision of the two neutron stars set off cosmic fireworks. Even as the gravitational-wave detectors were picking up the gravitational wave, NASA's orbiting Fermi Gamma-ray Space Telescope picked up a high-energy gamma-ray burst (GRB) in the distant skies.

Since three gravitational-wave detectors at different locations on Earth had picked up the signal, researchers were able to triangulate a location, within a 30-square-degree patch of sky -- about 60 times the apparent size of the Moon and much more precise than Fermi's localization. It took about an hour to get an alert out, with astronomers searching the target area. Before the day was out, five groups had identified a new source in the galaxy NGC 4993. The source faded from bright blue to dim red within a few days; about two weeks later, it began to emit X rays and radio waves.
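
The triangulation works off the tiny differences in arrival time at the separate detectors. As a rough sketch -- my own illustration, not the LIGO-Virgo pipeline, with a rounded baseline figure -- each pair of detectors confines the source to a ring on the sky, and the rings from the different pairs intersect in a small patch:

   import numpy as np

   c = 3.0e8          # speed of light, m/s
   baseline = 3.0e6   # rough Hanford-Livingston separation, about 3000 km

   def ring_angle(dt):
       """Angle between the source direction and the detector baseline,
       given the measured arrival-time difference dt (seconds)."""
       return np.degrees(np.arccos(np.clip(c * dt / baseline, -1.0, 1.0)))

   # a delay of 5 milliseconds puts the source on a cone about 60 degrees
   # from the Hanford-Livingston baseline; other detector pairs give other cones
   print(ring_angle(5e-3))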

More than 70 observatories kept watch on the event. Laura Cadonati -- a physicist at the Georgia Institute of Technology in Atlanta and deputy spokesperson for the LIGO collaboration -- commented: "This is the first time we have a 3D IMAX view of an astronomical event."

The comprehensive observations established at least three advances. First, they explained the origins of a subclass of GRBs. Since the 1990s, theorists have thought that bursts shorter than two seconds originate when neutron stars merge to create a black hole. Longer bursts, lasting minutes, are thought to be the result of the collapse of individual massive stars. The 17 August event confirmed the neutron-star mechanism.

Second, it confirmed the existence of an hypothetical object called a "kilonova", which briefly shines thousands of times brighter than an ordinary nova. As two neutron stars twirl together and rip each other apart, they should expel neutron-rich atomic nuclei, forming a shroud of matter amounting to a few percent of a solar mass. Those nuclei absorb neutrons rapidly, and then quickly radioactively decay. This "rapid neutron capture process" or "r-process" should make the shroud glow for a few days, with its light reddened by heavy elements that soak up blue wavelengths. That was precisely what was observed.
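
One way to see why the glow lasts days rather than hours or months -- a toy calculation of my own, not from the article, with an assumed spread of half-lives -- is to add up the decay heating from a mix of freshly-made radioactive nuclei; the short-lived species die off quickly, while the long-lived ones keep a dwindling glow going:

   import numpy as np

   half_lives_hours = np.logspace(-1, 3, 50)    # assumed spread: 0.1 hour to ~40 days
   decay_consts = np.log(2) / half_lives_hours

   def heating(t_hours):
       # each species contributes (decay rate) * exp(-rate * time); equal abundances assumed
       return np.sum(decay_consts * np.exp(-decay_consts * t_hours))

   for day in (0.1, 1, 3, 7, 14):
       print(f"day {day:>4}: heating relative to day 0.1 = {heating(day * 24) / heating(2.4):.3f}")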

Third, it provided insights into the origins of elements heavier than iron, such as silver, gold, and platinum. While elements up to iron are produced in the cores of stars, the production of heavier elements does not produce net energy -- it absorbs it -- meaning that once a heavy star produces an iron core, it can go no further, and collapses. There's been no well-accepted theory of how the heavier elements are formed; traditionally, it was believed they were formed in the collapse of a heavy star, but then merging neutron stars were proposed as a mechanism. The 17 August event shows that at least some heavy elements are produced by collisions of neutron stars.

The event has posed a few puzzles as well. Although the GRB was much closer than any other spotted previously, it was not very bright. That appears to have been because the black hole resulting from the neutron star collision generated highly energetic cosmic jets from its poles of rotation -- and the Earth was off-boresight from the rotational axis. The lag in emission of X-ray and radio signals supports that scenario; they were generated by the jet, which was narrow in its early days but then widened.

The 17 August event demonstrated the capability of the global astronomy network, which links together different types of observatories all over the world to allow them to quickly perform coordinated observations. Astronomers are eagerly awaiting another spectacular event so they can take further advantage of their experience.

COMMENT ON ARTICLE
BACK_TO_TOP

[TUE 02 JAN 18] MICROBIOME MEDICINE

* MICROBIOME MEDICINE: As discussed by an article from THE ECONOMIST ("No Guts, No Glory", 9 November 2017), the sequencing of the human genome in 2003 was rightly seen as opening new avenues for improving human health. That was not a false impression; since that time, researchers have made huge advances in understanding how the thousands of human genes affect our health. Those advances have, however, opened up a huge can of worms of complications -- one of the biggest, it turns out, being due to the fact that the human genome isn't the only genome responsible for our health.

We've long known that humans have a "microbiome" of microorganisms that live on or in us, particularly in the lower digestive tract. There was a traditional inclination to think the microbiome was mostly just a passive free-rider, causing us little harm but not doing us much good, either. It has become obvious that the microbiome is far from passive, our health being dependent on it in many ways. Exactly what ways is hard to say, since we don't understand the microbiome very well.

The microbiome is estimated to have, in total, 150 times as many genes as the human genome, and the interactions of its elements with the human organism are obscure. They are not, however, necessarily subtle. It is increasingly realized that "dysbiosis", an imbalance in the microbiome, can have troublesome effects, being linked to inflammatory bowel disease, autism, multiple sclerosis, obesity, diabetes and chronic-fatigue syndrome. It is certainly clear that heavily dosing patients with antibiotics can be counterproductive; the antibiotics will kill off the bad actors, but they can raise hell with the microbiome as well, with unpredictable effects. The clear influence of the microbiome suggests that editing it, by adding or subtracting microbial species, might help improve health. A number of firms are now investigating "microbiome medicine".

Microbiome medicine got kick-started through "fecal microbial transplants (FMT)", discussed here in 2014. The idea is to re-stock the intestinal microbiome of a patient using feces from a healthy subject. It's not such a new idea; Chinese doctors were using it centuries ago. In modern times, the primary use of FMTs is to deal with the nasty Clostridium difficile bacterium, which tends to afflict people who have been heavily dosed with antibiotics, unbalancing their microbiomes and allowing C. difficile to run wild. FMT has, with effort, become a highly effective means of dealing with C. difficile, and is making progress in treating other diseases.

The latest scheme for FMT involves use of capsules or "crapsules". They work, but there's a general inclination to think something better is needed. Rebiotix, a firm based in Roseville, Minnesota, is developing a more refined approach: a standardized liquid suspension of healthy gut bacteria. Most of the company's focus has been on C. difficile, but the firm is working on treatments for other diseases as well.

Other firms are investigating more selective treatments, only transplanting microbes they think are needed -- an approach sometimes called "bugs as drugs". For example, researchers at Seres Therapeutics, in Cambridge, Massachusetts, suspect that proper combinations of particular microbes may catalyze transformations in entire bacterial ecosystems, the objective being to restore an unhealthy microbiome to good working condition. The company is conducting clinical trials of such treatments for C. difficile infections, as well as ulcerative colitis.

The reverse strategy is to restore a microbiome to health by selectively knocking out the bad actors. To this end C3J Therapeutics, in Marina del Rey, near Los Angeles, is developing an antimicrobial peptide -- a small protein molecule -- that targets Streptococcus mutans, a microbe that lives in the mouth, believed to be the prime cause of cavities. The peptide actually has two components, one that targets S. mutans, joined to another that can attack a wide range of bacteria. Another approach to targeting the bad actors is to use bacteriophages, or viruses that target different types of bacteria. EpiBiome, in San Francisco, and Eligo Biosciences, in Paris, are both working on this approach.

A third approach is to introduce genetically-modified bacteria into the microbiome to enhance it, this scheme being investigated by Blue Turtle Bio, in San Francisco, and Synlogic, in Cambridge, Massachusetts. Both companies want to engineer gut bacteria to deliver a constant supply of such things as the enzymes missing in genetic diseases like phenylketonuria, in which the absent enzyme means a chemical called phenylalanine can build up to toxic levels. It's an elegant idea: if the body won't produce something it needs, tweak the microbiome to do it instead.

That idea suggests tweaking a healthy microbiome for enhancement, but that's not an idea that anyone takes all that seriously yet. Manipulating the microbiome implies having a better characterization of it, and an understanding of its effects on health. For example, Second Genome, in San Francisco, is investigating the possible connection between dysbiosis and autism. If a link can be found, that could lead to treatments for autism.

More remarkably, Isabelle de Cremoux -- head of Seventure, a French venture-capital firm that has many microbiome-based investments -- says that while research so far has primarily focused on gastroenterology, since that's connected to the part of the body where the microbes actually live, the number of scientific papers suggesting links between the microbiome and cancer has been growing rapidly.

Seventure has invested in two biotech firms performing research along such lines, including Enterome and Vedanta Biosciences, both in Cambridge, Massachusetts. Finding such linkages might well help prevent cancers; as a long shot, it might even help cure them. There's a long ways to go, and a lot to be learned.

COMMENT ON ARTICLE
BACK_TO_TOP

[MON 01 JAN 18] ANOTHER MONTH

* ANOTHER MONTH: According to an article from BBC.com ("Thai Fraudster Sentenced To 13,275 years In Prison", 29 December 2017), a Thai court has sentenced Pudit Kittithradilok, 34, to over 132 centuries in prison for running a Ponzi pyramid scheme. In a Ponzi pyramid, a scammer offers an investment with unrealistically high rates of return to bring in suckers, paying off the earlier suckers with the money coming in from new suckers, instead of from investment returns. Once the money stops coming in, the smart scammer takes the money and runs.

Pudit brought in more than $160 million USD from about 40,000 people in the exercise, but got caught before he managed to run off with the money. The court found him guilty of illicit lending, and of some 2,653 counts of fraud. Thanks to his confession, it halved his sentence to 6,637 years and six months. He actually won't serve more than 20 years, since the maximum sentence for each of the two charges against him is ten years. It appears the extremely long sentence was just what the letter of the law required.

The court fined his two front companies the equivalent of $20 million USD each. Pudit and the firms were ordered to repay around $17 million USD to the 2,653 identified victims, with 7.5% yearly interest.

* Another year gone by, time to review. Of course, here in the States, President Donald Trump has done very well in his central objective, to dominate the headlines -- good or bad, he doesn't care, as long as he amuses his core supporters. The Democrats universally can't stand him, but he doesn't care about them. The Republicans, for the most part, can't stand him either, but they're stuck with him, and for the most part have to be circumspect. Republicans in Congress like Senator Bob Corker of Tennessee, who are planning to return to the private sector after their terms end, haven't been at all circumspect, engaging in exchanges of insults with the president.

The GOP didn't do so well with trying to repeal ObamaCare, but they did finally manage to cobble together a tax bill of sorts. It remains to be seen, going into this year's midterm elections, whether that will help the GOP at all -- or whether, as seems more likely, America's turn toward Rightist populist nationalism is going to end up a passing fad.

As far as events overseas go, Brexit continues to hobble the UK; Emmanuel Macron rose in France; Angela Merkel declined in Germany; Vladimir Putin continues to be a global nuisance; Robert Mugabe got the boot as president of Zimbabwe; North Korea continues to bluster and threaten; while Islamic State is on the run. The fight against Islamic terror smolders along around the globe; most of the world's nations are trying to work together to deal with climate change, while the USA tries to pretend it will go away if it's ignored. All reliable evidence shows it won't; things are instead getting worse. No real worries, America will be back in a few years after resolving some technical difficulties: "Please stand by."

The global economy seems to be booming these days. Bitcoin and other cybercurrencies have gone crazy, the expectation being that the boom will end in a bust. The bust is likely to reflect badly on high-tech firms, already under severe public pressure for a number of reasons: the manipulation of social-media outlets like FaceBook by the Russians in 2016, personal misconduct by leaders at Uber, and the disturbingly anti-democratic dogmas of other leaders like Paypal's Peter Thiel. The bust in bitcoin seems likely to precede a general bust in markets -- if only on the basis of the odds; the last time a bull market lasted this long was up to 1928, which is an unsettling precedent.

As for technology, there weren't too many breakthroughs in consumer technology, though there were refinements, such as multiple cameras in smartphones; in industry, continued gains in 3D printing were big news. Social media firms like FaceBook and search-engine companies like Google enhanced their efforts to prevent bad players, like the Russians, from gaming the systems. The most high-profile technologies were robotic vehicles and artificial intelligence. AI, after decades of going nowhere in a hurry, is being put to work in one domain after the other -- with, as an absurdist side-effect, dire warnings circulated that AI presents a threat to human survival.

As for aviation, the big news is in drones, which are gradually becoming established, and also becoming a regulatory nuisance that has yet to be resolved. As something of an extension of drones, there's been a lot of work on electric / hybrid aircraft for "air taxi" use, though nobody thinks such machines are going to be fielded any time soon.

As for space technology, it's been mostly more of the same -- satellites for communications, surveillance, science, and so on, plus the ongoing International Space Station exercise -- but nanosatellites are a boom business, notably the popular CubeSat standard. CubeSats have been getting bigger, moving up from the popular triple CubeSat format, used on the Dove Earth-observation satellites and Lemur weather satellites, to six-unit or even larger formats. Small boosters for nanosatellites, like the Rocket Lab Electron, are being prepped for service, but aren't there yet. However, the SpaceX Falcon booster has been on a roll, with soft-landings of the first stage having become routine, and increasing numbers of relaunches of those first stages.

In science, the big news was that gravitational wave astronomy has come of age, with ongoing detection of the gravitational signatures of cataclysmic events in the distant Universe. On a smaller scale, the CRISPR-Cas9 genetic modification technique is also on a roll -- though ironically, public fear of genetic modification is hardly fading, and possibly growing.

In the end, however, in the US all the news was dominated, entirely by intent, by Donald Trump, with his unorthodox concepts of government. What happens with Trump next year remains to be seen, since it certainly can't be predicted. What happens in the two years after that isn't even worth thinking about for now.

* Moving from that to the RFN (real fake news) for December 2017, the month started out with the GOP in the Senate managing to cobble together a tax bill and, cooperating with the GOP in the House, voting it into reality. Although no Democrat or Independent voted for it, the general feeling was that the tax bill was more a muddle than a monstrosity. Some of the worst pitfalls were avoided -- for example, tax credits for renewable energy were retained. Republican senators from prairie states just love wind turbines. Maine Senator Susan Collins, who had broken ranks with the GOP on killing ObamaCare, voted YES on the basis that monies would be made available for public health care.

The alteration of the corporate tax system was actually not unwelcome. A cut from a corporate tax rate north of 35% was seen as necessary, since the system had encouraged big companies to offshore their profits. Cutting the rate to 21% was drastic, however, the general expectation having been that it would be hard to get under 25%. Rates for repatriation of offshored profits were also cut to a low level -- though that was mainly a concession to the reality that the profits wouldn't be repatriated otherwise. The general principle is that if, say, Apple makes profits in Germany, it's the Germans who tax Apple; in the same way that if BMW makes profits in the USA, it's the US government that taxes BMW.

Nonetheless, public disapproval of the tax bill was predominant. While it threw bones to the middle class, analysis showed they were temporary, while tax cuts for the wealthy were more persistent. The tax bill is also projected to pile up the budget deficit, with White House projections to the contrary being based on frivolous growth estimates. THE ECONOMIST's cartoonist, KAL, showed Donald Trump -- saying that rose-colored glasses wouldn't do -- fitting Uncle Sam with VR goggles presenting 3D imagery of leprechauns frolicking in a huge pot of gold.

In the meantime, a high-profile battle in Alabama for a seat in the US Senate between Republican Roy Moore and Democrat Doug Jones lit up the headlines -- to finally conclude with Jones beating Moore by a nose. Moore had been discredited by allegations that, in his 30s, he had picked up high-school girls; that might have been discarded as hearsay by a skeptic, but Moore had been documented as saying that America had gone wrong after the 10th Amendment to the Constitution, implying that things like banning slavery and giving women the vote were bad ideas. He tried to clarify, but why would anyone have said any such ridiculous thing in the first place?

Moore had been backed by Steve Bannon of the extremist Breitbart website; Moore's defeat led to recriminations between the extremists and the moderates in the GOP. It also gave GOP leadership more reason to fear the outcome of mid-term elections in 2018. Traditionally, as noted previously, Congress tends to change hands in mid-term elections; it's a fair bet the GOP will lose the House, and with the victory of Jones, an even bet the GOP will lose the Senate as well -- even though not many Republican senators are up for re-election in 2018.

Although Speaker of the House Paul Ryan made noises in the wake of the tax debate of cutting Medicare and Social Security, Senate Majority Leader Mitch McConnell demonstrated that he really does have some political sense, saying: "I think that Democrats will not be interested in entitlement reform. So I would not expect to see that on the agenda." McConnell, clearly with an eye to mid-term elections, said that he wanted to move on to things where a deal with the Democrats was possible, like infrastructure: "To do something in that area, we're going to have to have Democratic participation."

As far as a continued assault on ObamaCare goes -- ObamaCare having been greatly injured by the tax bill's revocation of the requirement that citizens have health insurance, but not killed off, despite Trump's claims of having done so -- McConnell said: "Well, we obviously were unable to completely repeal and replace with a 52:48 Senate. We'll have to take a look at what that looks like with a 51:49 Senate. But I think we'll probably move on to other issues."

In other encouraging news, an administration report on US national security strategy identified China and Russia as America's primary antagonists in the current era -- which seemed spot-on, and was reflected in the US handing a batch of Javelin anti-tank missiles to Ukraine. However, Trump then went right on with his public cozying-up to Vladimir Putin. Putin played along; however meaningless it was in the face of a chillier US position towards Russia, he had nothing to lose by pretending to be chums with Trump.

Trump followed up with a particularly windy speech, glorifying his administration's foreign policy successes while bashing Barack Obama. One John Kirby -- a retired US Navy rear admiral, previously a spokesman for the State and Defense Departments during the Obama Administration, now with CNN -- dissected the "whoppers and overstatements" in the speech, starting with Trump attempting to blame the North Korea crisis on Obama, fussing over the Iran nuclear deal, and boasting of successes in the war against IS:

BEGIN QUOTE:

This came at the top of Trump's speech amid a list of his grievances about the Obama administration. It is true Trump inherited a more dangerous and more advanced threat from Pyongyang, but that wasn't the result of anyone's neglect. It's indicative of how difficult the problem is to solve, precisely because no one nation can solve it peacefully.

As for the Iran deal, he can brag about decertifying it, but Congress chose not to follow suit. Iran is meeting its commitments. And no matter what you may think of its other destabilizing behavior, no problem in the Middle East is going to be easier to solve with a nuclear-armed Iran.

On ISIS, he conveniently forgets that in 2014, Secretary of State John Kerry fashioned together a coalition of more than 65 nations to beat back the terror group in Iraq and Syria. When Trump took office, ISIS was already a shadow of its former self. Iraqi forces had successfully reclaimed more than 60% of ISIS territory, including Mosul, and nearly 30% of ISIS territory in Syria had been lost. There's still work to do, but it's not like the fight against ISIS achieved no results before Trump. Without question, he has capitalized on what President Barack Obama set in motion.

END QUOTE

Kirby then took on Trump's claim that Obama "surrendered our sovereignty to foreign bureaucrats in faraway and distant capitals." -- to which Kirby replied, "not true":

BEGIN QUOTE:

Take a look at Obama's national security strategy from 2015: "In an interconnected world, there are no global problems that can be solved without the United States, and few that can be solved by the United States alone. American leadership remains essential for mobilizing collective action to address global risks and seize strategic opportunities."

Obama spoke of American leadership in multilateral organizations, because through that leadership we could best achieve outcomes beneficial to our interests. But there was altruism at work as well. It was the United States that led the international effort to stem the tide of the Ebola outbreak. It was the United States that fashioned together the International Syria Support Group. And it was the United States that led the international response to natural disasters, including the earthquake in Haiti, the tsunami in Japan and a typhoon in the Philippines. No surrender of sovereignty. Just leadership.

END QUOTE

In response to Trump's boasting about withdrawing from the Trans-Pacific Partnership and the Paris agreement on climate change:

BEGIN QUOTE:

By 2032, according to the US International Trade Commission, TPP would have increased employment in the United States by about 128,000 full-time equivalent jobs. And the real wage rate would have gone up by about 0.19%. As for the Paris accord, all he needs to do is change our emissions targets. The accord is not a deal. It's not a treaty. It's a voluntary commitment to curb emissions by levels that each nation determines for themselves. Nothing unfair or expensive about that.

END QUOTE

And so on. In sum, the Trump Administration has had its foreign policy successes -- but to the extent it has, it's been invariably due to continuation of policies from the Obama Administration, which in turn tended to follow those of the Bush II Administration. It hasn't gone well when Trump has followed his troglodyte interests, his denial of climate change being particularly oblivious. Incidentally, in response to a cold snap across the Northeast US, Trump tweeted that maybe we could use a little more global warming. Wearisome, yes, but not at all surprising -- and a suggestive note for how the next year is going to go.

Oh, and today the Defense Department starts officially accepting transgender recruits, as planned by the Obama Administration. The Federal judiciary resolutely rejected Trump's attempt to ban transgender troops; the White House could have gone to the Supreme Court, but the day has come, and reversing the policy would be difficult. Given Trump's single-minded focus on the theatrical, it appears that his attacks on transgender people were 100% for show, with no great concern for actually doing anything.

The irony is that Trump, through his clumsy attempt to persecute transgender people, has done more to legitimize them than any other individual. He can jam up the workings of government, but he can't turn back the clock. One year down; three more to go.

* Thanks to one reader for a very generous donation to support the website last month. That is very much appreciated.

COMMENT ON ARTICLE
BACK_TO_TOP
< PREV | NEXT > | INDEX | GOOGLE | UPDATES | EMAIL | $Donate?