
DayVectors

dec 2017 / last mod oct 2020 / greg goebel

* 21 entries including: once & future Earth (series), understanding AI (series), New Zealand pest extermination effort, Microsoft Sopris IOT security project, bat 1K research effort, Sweden uses data centers for heating, Latin America loves aerial cablecars, building to resist climate change disasters, evidence of ongoing human evolution, & robot shuttle buses.


[FRI 29 DEC 17] NEWS COMMENTARY FOR DECEMBER 2017
[THU 28 DEC 17] WINGS & WEAPONS
[WED 27 DEC 17] KILL THEM ALL
[TUE 26 DEC 17] MICROSOFT DOES IOT SECURITY
[MON 25 DEC 17] UNDERSTANDING AI (5)
[FRI 22 DEC 17] ONCE & FUTURE EARTH (19)
[THU 21 DEC 17] SPACE NEWS
[WED 20 DEC 17] BAT 1K
[TUE 19 DEC 17] KEEPING WARM
[MON 18 DEC 17] UNDERSTANDING AI (4)
[FRI 15 DEC 17] ONCE & FUTURE EARTH (18)
[THU 14 DEC 17] GIMMICKS & GADGETS
[WED 13 DEC 17] LATIN AMERICA LOVES SKYRIDES
[TUE 12 DEC 17] RESILIENT DESIGN
[MON 11 DEC 17] UNDERSTANDING AI (3)
[FRI 08 DEC 17] ONCE & FUTURE EARTH (17)
[THU 07 DEC 17] SCIENCE NOTES
[WED 06 DEC 17] CONTINUING EVOLUTION
[TUE 05 DEC 17] ROBOSHUTTLES
[MON 04 DEC 17] UNDERSTANDING AI (2)
[FRI 01 DEC 17] ANOTHER MONTH

[FRI 29 DEC 17] NEWS COMMENTARY FOR DECEMBER 2017

* NEWS COMMENTARY FOR DECEMBER 2017: As discussed by an article from REUTERS.com ("In Democracies, Voters Warm To Secret Services" by John Lloyd, 22 December 2017), in the dark hours of the morning of 19 December in the northern UK cities of Sheffield and Chesterfield, armed police blew open the doors of homes and a Muslim community center, arresting four men aged between 22 and 41. The police didn't give many details, speaking of a "Christmas bomb attack" that was presumably foiled. There was little doubt the police were working from a tipoff by British intelligence -- presumably the domestic intelligence service, MI5.

Britain suffered four Islamic militant attacks through 2017, with 35 people killed -- 22 of them at a concert in Manchester. The police say they have thwarted nine other attacks, some of them large-scale. Britons believe that the authorities are doing their best to keep the public safe -- a belief that has done much to raise the public esteem of security services like MI5.

That's contrary to the long-standing suspicion of the "spooks"; traditionally citizens have worried that they're abusing their authority, even operating outside the law. Suspicion was particularly inflamed by the 2013 revelations of Edward Snowden, who disclosed that the US National Security Agency was collecting the phone records of millions of Americans, and had tapped directly into the servers of international firms like Facebook, Google, Microsoft, and Yahoo.

Snowden's revelations were half-truths; the NSA wasn't reading emails, just tracking connections, with a warrant from a secret court required to actually zero in on a suspect. Half-truths were still half-uncomfortable; few liked the idea of being tracked, and a warrant from a secret court was not so reassuring. However, the US government, obligated to ensure public security, didn't give up the surveillance in the face of Snowden's revelations; President Barack Obama simply announced some tightening-up of oversight, and continued to march.

Activists have continued to complain bitterly, but Snowden has outlived his 15 minutes of fame. The general public agrees with Obama that the desire for privacy has to be balanced against the need for security. Public fear of terrorism is sky-high, out of proportion to the actual threat; security services have been the beneficiaries. In the US, President Donald Trump, in one of his ongoing exercises in irony, has further boosted the prestige of the US security services by attacking them for their investigations of Russian meddling in American political life -- of which Trump was a beneficiary.

In Britain, while the heads of British secret agencies have traditionally kept a low profile, they now go public from time to time. In October Andrew Parker, head of MI5, gave a speech warning that the threat from Islamic militants was on the increase, saying: "That threat is multidimensional, evolving rapidly, and operating at a scale and pace we've not seen before." That could be seen as self-serving, funding for intelligence agencies like MI5 having been on a boom in the age of terror -- but it's also perfectly believable.

France has suffered the most savage Islamic terrorist attacks in recent years. Consequently French spooks, once the targets of angry criticism, are now recognized as at the top of the game. They are aggressively recruiting communications specialists and linguists to help head off future attacks. The German government is pumping funds into both its domestic and foreign intelligence agencies, with a particular focus on communications intelligence. The Germans, thanks to German history, are particularly touchy about secret police and public surveillance -- but nonetheless the government has plenty of support for a more aggressive security stance. Italy, in the wake of a 2007 law to impose cooperation on a set of intelligence organizations often at odds with each other, now has a central Security Intelligence Department, reporting directly to the prime minister.

The shrill protests of activists against mass surveillance and other intelligence-gathering activities have nowhere to go. The irony, however, is that there does need to be oversight of secret intelligence services -- transparency, no, such organizations can't work that way, but they need to have rules and be kept to them -- and citizens do have some need and entitlement to privacy. Do companies like Apple have the right to give customers encryption that even Apple can't break? Does it make sense to improve public security by denying the right of citizens to the security they need to protect themselves? And does giving citizens that security honestly undermine the war on terror? Even with public acceptance of surveillance and the security services, there are still a lot of hard questions that need answers.

* As discussed by an article from THE ECONOMIST ("Deal Or No Deal?", 30 November 2017), the British government under Prime Minister Theresa May has been making a bit of progress in negotiations with the European Union over the "Brexit" of the UK from the alliance. Of course, to make progress, May had to make concessions to the EU -- for example, guaranteeing the rights of EU citizens in the UK. Progress in the deal is important, since Brexit will inevitably take effect on 29 March 2019. If there's no deal by then, Britain will walk out of the EU without an agreement. The EU is holding all the good cards in the negotiations, and the British government is accordingly going to make more concessions.

To hardcore Leavers, any concession at all is way too much. Why, they ask, should Britain have gone through the motions of "declaring independence" from the EU, and then still kowtow to EU rules? If continued negotiations lead to more concessions -- they will -- then, so the thinking goes, obviously the authorities will just be trying to sabotage Brexit. Better that the UK should take the "hard Brexit", and leave with no deal at all. Why, they ask, should anyone think that's a problem? It's in the interests of both the UK and the EU to have a constructive relationship, so won't they sensibly decide to have just that?

Only the hard-core Leavers pose that question; nobody has to be a die-hard Remainer to see it's nonsense, based on a position of bargaining strength Britain simply does not have. The EU is demanding a hefty exit bill -- alliances, after all, often have penalty clauses to discourage quitting, with such clauses defined from the outset -- and would have problems with the status of EU citizens in the UK thrown into doubt. EU leadership would see nothing constructive in being stiffed, and it's not likely they would be quick to make concessions of their own.

Consider in detail what walking out would mean in practice. One big factor is that the EU is a legal system as much as it is a political construct. If the UK simply walks out, that would equally mean walking out of all EU organizations, from Euratom to the European Medicines Agency (EMA). The European Court of Justice (ECJ) would lose jurisdiction.

The legal aspect lends a subtle twist to the economic aspect, which clearly spells trouble in itself. Oxford Economics conducted a study of Brexit with no deal, the bottom line being that it would cut a cumulative 2% off Britain's GDP by the end of 2020, amounting to about 40 billion GBP. The EU would suffer some pain, but nowhere near as much. The biggest problem would be a decline in trade. According to the Confederation of British Industry, hard Brexit would mean:

A study by the Resolution Foundation, a think-tank, and Sussex University, adds that food prices would rise about 2.7% for affected goods, with the poor suffering the most. A hard Brexit would very likely lead to a loss of confidence in the GBP, and a corresponding loss of the GBP's value relative to foreign currencies, enhancing the pain.

Now consider how the legal issues compound the pain of the trade issues. Dropping the existing customs arrangement would lead to chaos. The data systems underlying the customs system would have to be updated -- in the face of a quintupling of customs declarations to 250 million a year. Adding a few minutes' delay in hauling road freight into the UK would result in long line-ups of vehicles.

Okay, eventually that would be worked out, somehow. However, some industries would clearly suffer from a hard Brexit. Britain exports 80% of the cars made there, over half of them to the EU. The cars could lose their EU certification, and they would face 10% tariffs -- plus 2.5% to 4.5% tariffs on car components, which flow in both directions. Manufacturers don't like to keep a lot of parts inventory on hand, since it means more overhead in storage and handling; Honda only maintains a half-day's supply of EU-made components, and so disruption of parts flow would be painful. Aston Martin has said that losing EU certification might drive the company out of manufacturing.

Pharmaceutical and chemicals industries -- which together account for 10% of value added in British manufacturing -- would also suffer from a hard Brexit. Dropping out of the EMA and the REACH chemicals arrangements could make it impossible for firms in these industries to export to the EU, and in some cases make it impossible to continue to operate in Britain. Also, if Britain walks out of Euratom, not only would nuclear power stations have difficulties importing nuclear fuel, but there would be a mad scramble to obtain radioactive isotopes needed for cancer treatment, which aren't made in the UK.

British-based airlines are subject to EU rules through the European Aviation Safety Agency, which like all such agencies is regulated under the ECJ. Under hard Brexit, the rules by which air travel is legally conducted in and out of the UK would disappear. In addition, air travel between the US and the UK is currently under the EU legal umbrella, and that legal basis would disappear as well. The end of ECJ jurisdiction would also mean that banks would lose the passport that allows them to do business within the EU out of London, and would throw into question the legality of financial contracts between the UK and the EU.

Last but hardly least, security cooperation would be profoundly undermined, knocking Britain out of both Europol and the European Arrest Warrant (EAW), as well as ending access to many EU databases of suspected criminals and terrorists -- including the passenger-names record that Britain did much to promote. To be sure, intelligence-sharing would no doubt be performed on an ad-hoc basis; but being outside the EAW might well make Britain a safe haven for EU criminals, much like Spain in the 1960s and 1970s.

Would a hard Brexit be survivable? Of course; again, sooner or later everything would be worked out, one way or another. It's not that hard Brexit would be fatal, it's just a simple statement of fact that it would be painful. The only question is: how painful?

However, Leavers aren't detail thinkers; they continue to operate on the basis that, if they hide their heads under the bedcovers, the bogeyman will go away. The bogeyman is the unavoidable fact that a deal between the British government and the EU will mean, have to mean, soft Brexit. How could the EU see hard Brexit, making no concessions and just walking out, as a deal? Again, the EU has all the good cards.

Sometime before 29 March 2019, matters are going to come to a head: the government will announce a deal, and Parliament will pass judgement on it. If the verdict is a thumbs-down -- then what happens?

There will be a decision -- but who the hell knows what it will be? One can easily sympathize with Theresa May, who by all appearances is a conscientious public servant in a completely impossible position. Her political standing is inevitably weak, but there's good reason to think she's going to take Brexit to its conclusion, whatever that will be. Who else could do better in her ugly circumstances? And who with sense would want to trade places with her?

* Another article from THE ECONOMIST ("Local-Content Requirements Make For Appealing Slogans But Bad Policies", 23 November 2017) zeroed in on the economic nationalism underlying "buy local (BL)" laws, familiar here in the USA under the slogan of "Buy American".

Randy Kull of Illinois sells traffic signs -- all kinds of signs, for all sorts of buyers. When he sells signs to the US Department of Transportation, however, he is compelled by local-content rules to use US-made sign mounting brackets, and fill in a form certifying their source, which is a company in Arkansas. Kull finds it exasperating: "We live in a global economy."

To some that sentiment, in an age of nationalism, sounds almost treasonous. Shouldn't good patriotic Americans be buying American, and saving American jobs? US President Donald Trump says as much, and BL didn't start with him; US government BL regulations have been growing by leaps and bounds over the last decade. It's not just an American phenomenon either; countries around the world have been playing the same game.

A reasonable case can be made for encouraging, as opposed to mandating, buying locally. The problem is that it doesn't amount to much. In 1887, Britain established a legal requirement that goods "Made in Germany" were to be labeled as such. The law was meant to protect British industry -- but it became a badge of quality. Few prefer to buy locally if they can get the same or better thing more cheaply from elsewhere. That iron-clad market reality is what makes BL more of a burden than it's worth.

Governments come up with different ploys to justify the practice. In Argentina, where 30% of the music broadcast on local radio must be made locally, the practice is proclaimed as protecting national culture. In China, data-localization laws are justified on the basis of national security. Rules on locally produced sources of clean energy, coupled with subsidies, are often defended as environmental protection -- even when, say, wind turbines can be obtained more cheaply from elsewhere.

In reality, BL is a form of trade protectionism, providing cover for local businesses and the jobs those businesses provide. One study suggests it lowers global trade by $93 billion USD annually. The Left slams free trade as a capitalist plot; trade protectionism, in its various forms, is also a capitalist plot, it's just a question of which capitalists benefit. Randy Kull's supplier of brackets in Arkansas is happy with BL, while Kull isn't. In a typical irony, Donald Trump campaigned against overbearing and burdensome government regulation -- but he's perfectly happy with it when it suits his agenda.

Multiple studies show no evidence that BL laws promote innovation; instead, they feather-bed poorly-run companies. It is an economic truism that producers can only be protected at the expense of consumers; and since all producers are consumers as well, protectionism also tends to work against them across the board. BL not only means suffering through obnoxious and time-consuming paperwork, it means higher expenses. A study showed that the Obama Administration's BL requirements for steel cost the government about $5.7 billion USD; Canadian restrictions on wind turbines meant utilities in Ontario and Quebec spent $500 million USD more than if they had bought American ones.

That doesn't factor in the reality that, in the face of BL in the US, American trading partners implement BL as well, crimping exports of US firms. Yet another study suggested that the US economy would gain about 300,000 jobs if BL laws were discarded. To be sure, US industries are only subject to BL laws when dealing with the government, and the government is not the primary customer for many industries. However, the taxpayer would benefit as well if BL laws were thrown out.

Randy Kull is right: we live in a global economy, and denying its reality is self-defeating. Given the popular support for BL, and the current public distaste for free trade, it's a hard sell to dump BL rules. All we can hope for is a future in which there's a better public appreciation that trade protectionism benefits a minority, at the expense of everyone else. Once that happens, trade deals can then start rolling back BL laws. That doesn't seem likely to happen soon.


[THU 28 DEC 17] WINGS & WEAPONS

* WINGS & WEAPONS: As discussed by an article from AVIATIONWEEK.com ("Skunk Works Hints At SR-72 Demonstrator Progress" by Guy Norris, 6 June 2017), four years ago Lockheed Martin revealed plans to develop a Mach 7 strike and reconnaissance aircraft. The company now says that hypersonic technologies have matured to the level where the firm's secretive Skunk Works organization has begun work on a flight demonstrator.

It appears that the Skunk Works has been working for a decade or more on enabling technologies for hypersonic vehicles. In 2013, Lockheed Martin announced work on an "SR-72", a proposed successor to the classic SR-71 Mach 3 spy plane. However, nothing much has been said on the matter since then.

Rob Weiss -- Lockheed Martin's executive vice president and general manager for Advanced Development Programs -- is still not too forthcoming, but at a recent aviation forum, he did say things are moving along: "We've been saying hypersonics is two years away for the last 20 years, but all I can say is the technology is mature and we, along with DARPA and the services, are working hard to get that capability into the hands of our warfighters as soon as possible."

Weiss adds: "I can't give you any timelines or any specifics on the capabilities. It is all very sensitive. Some of our adversaries are moving along these lines pretty quickly, and it is important we stay quiet about what is going on. We can acknowledge the general capability that's out there, but any program specifics are off limits."

Weiss did say that work on combined-cycle (jet-rocket) propulsion and other essential technology has reached the level where a piloted "flight research vehicle (FRV)" can be developed, with initial flight in the early 2020s. The FRV is projected as about the same size as an F-22 Raptor fighter and powered by a fully-developed combined-cycle engine. If successful, it could be followed by the full-scale, twin-engine SR-72, which is projected to be about the size of the SR-71, with initial flight late in the 2020s.

* As discussed by an article from JANES.com ("MBDA Readies Enforcer" by Robin Hughes, 21 March 2017), MBDA Deutschland is beginning trials of the company's "Kleinflugkoerper (KFK / small missile)" Enforcer precision-guided weapon system later this year, to lead to final qualification and series production in 2018.

Enforcer is an 89-millimeter day-night, lightweight, disposable, shoulder-launched weapon system designed to perform precision attacks against a range of targets -- including light armored vehicles moving at speeds of up to 50 KPH (31 MPH), as well as targets behind cover in battlefield and urban environments. It was originally developed in response to a German Army Special Forces requirement for a light precision-guided munition, with the intent to supply it to regular infantry forces as well. It is intended to complement the Dynamit Nobel Defense RGW90 MATADOR / Wirkmittel 90 90-millimeter recoilless rifle.

MBDA Enforcer

The KFK Enforcer will have a range of up to two kilometers (1.25 miles); the range of the RGW90 weapon is half that at best, and it's not a guided weapon. The KFK Enforcer has a minimum range of 100 meters (330 feet), although the company is working to reduce it to half that.

The KFK Enforcer is compatible with the Hensoldt (previously Airbus Optronics, and before that Zeiss) Dynahawk clip-on "Feuerleitvisier (fire-control sight)" currently used with the RGW90 weapon. The Feuerleitvisier features an optical viewfinder with 5.5X magnification; a laser rangefinder accurate to a meter at maximum range; atmospheric sensors (pressure, air temperature, wind speed); and an electronic targeting system that permits automatic engagement of static or moving targets.

* As discussed by an article from DIGITALTRENDS.com ("Next-generation US Military Grenade Is Two Grenades In One" by Dallon Adams, 21 September 2016), the US Army Armament Research, Development, and Engineering Center (ARDEC) is developing the first new lethal hand grenade for the US military in more than 40 years.

Nonlethal grenades include gas, smoke, demolition, and flash-bang types. Lethal grenades fall into the fragmentation or concussion categories. A fragmentation grenade sends out fragments -- shrapnel or ball bearings -- and has a lethal radius of up to about 15 meters (50 feet). Concussion grenades don't throw out fragments, achieving effects by blast shock, and have a small lethal radius; they can be used in close-quarters combat. Currently US soldiers carry only the M67 fragmentation grenade. The MK3A2 concussion grenade was taken out of service in 1975 because of an asbestos hazard.

The new grenade -- the "Enhanced Tactical Multi-Purpose (ET-MP)" hand grenade -- provides both capabilities. Soldiers will have the option to switch between a fragmentation or concussion setting by simply flipping a lever on the device. The ET-MP also has other improvements, including ambidextrous use, the M67 being clumsy to arm for left-handers; plus a highly reliable electronic fuze timing system, with detonation time programmable down to milliseconds. The ET-MP should go into service in the next decade.


[WED 27 DEC 17] KILL THEM ALL

* KILL THEM ALL: As discussed by an article from NATURE.com ("Behind New Zealand's Wild Plan To Purge All Pests" by Brian Owens, 11 January 2017), New Zealand was once an antipodal world unto itself, with unique flora and fauna. Then humans arrived, making enough trouble for the local environment by themselves, and greatly compounding difficulties by bringing in invasive species that moved in on the locals.

Rats and rabbits made nuisances of themselves; stoats -- a species of weasel -- were introduced in the 1880s to deal with them, only to become nuisances as well. Cats went feral as well, while brushtail possums, introduced from Australia in the 1830s to raise for fur, also became troublemakers. Half of New Zealand's vertebrate species have disappeared. Enough is enough: James Russell of the University of Auckland, an expert on pest eradication, is now taking on the challenge of coordinating an effort to wipe out all the invasive pests by 2050.

It's not so wild an idea as it might seem. Around the world, more than a thousand islands have been cleared of invasive species, with New Zealand doing the job on more than 200 of them. However, the biggest island ever cleared was Australia's Macquarie Island, which covers about 128 square kilometers. In contrast, New Zealand's total area is about 268,000 square kilometers, and the country's cities and towns complicate eradication efforts.

It can't be done with current techniques. Russell and his colleagues want to conduct research on improved techniques that could do the job, such as new baits, species-specific poisons, and invasive organisms genetically modified to help destroy their own populations. That will cost money; in a 2015 paper, the research team estimated it would cost about $9 billion NZD, or about $6 billion USD.

That sounds like a lot, but the paper argued that was cheap at the price compared to the ongoing environmental damage and crop loss caused by invasive species. The government spends about $70 million NZD a year on mammalian pest control, but the pests still cost the country an estimated $3.3 billion NZD a year, primarily in agriculture -- though the environmental degradation also has a potential impact on the tourist trade, which is now bringing in more money than agriculture. Both the government and the public think killing the pests off completely is a great idea.

The normal procedure for wiping out rats and other invaders is to lace bait stations with a poison -- usually sodium fluoroacetate AKA "1080", or the anticoagulant brodifacoum -- and disperse the poison across the landscape by helicopter. The few survivors are trapped or shot. It only takes a few weeks, as long as it's planned out properly.

In 2011, after a four-year effort, all invasive mammals were eradicated from Rangitoto and Motutapu, two inhabited islands with a combined size of 38 square kilometers. The effort began with two years of planning and consultations with local people. Once it went into operation, rats were wiped out in less than a month, with extermination operations moving on to rabbits, stoats, hedgehogs, and feral cats.

The exercise was complicated by the need to get the cooperation of the local inhabitants, and by the proximity of the islands to Auckland, New Zealand's largest city, which provides a pool of potential re-invaders. So far, although ferries and boats regularly dock at the islands, they have remained pest-free. Hitchhiking rats and mice are intercepted about once a year.

Exterminating pests from all of New Zealand will demand improvements in technology. 1080 is an effective toxin, but it isn't very discriminating: it can kill game animals like pigs and deer, as well as local species. Toxins that only kill rats or mice would be very attractive; the brushtail possum's genome is being sequenced to see if selective toxins could be devised for it as well.

Smarter traps, requiring minimal human intervention, would also be useful. A New Zealand company named "Goodnature" already makes rat and possum traps with a skull-crushing piston, driven by compressed gas. It can reset itself 24 times, with clean-up provided by scavenging birds and cats. Wireless reporting would be one welcome addition to such traps. There's also interest in biosensors that could spot pests by their species-specific odorants. Drones with sensors could help track down pests as well.

However, the most excitement right now is in genetic modification of target species. The CRISPR-Cas9 gene-editing scheme can be used to put a "gene drive" in a species, in which specific alleles of genes are always passed on to progeny. For example, a susceptibility to a particular poison could be spread through a pest population with a gene drive; it only needs about ten generations to completely transform the population, setting it up for annihilation.
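The ten-generation figure can be illustrated with a back-of-the-envelope model -- an idealized sketch assuming a perfectly efficient homing drive and random mating, not a calculation from the article. A heterozygous carrier converts its wild-type allele, so it transmits the drive to essentially all offspring instead of half; under random mating, an allele at frequency p then reaches frequency p * (2 - p) in the next generation:

```python
def gene_drive_frequency(p0, generations):
    """Track an idealized homing gene-drive allele under random mating.

    With drive allele frequency p, genotype frequencies are p^2 (two
    copies), 2p(1-p) (one copy), (1-p)^2 (none); since carriers of one
    copy convert the other, the next generation's allele frequency is
    p^2 + 2p(1-p) = p * (2 - p).
    """
    freqs = [p0]
    p = p0
    for _ in range(generations):
        p = p * (2 - p)  # heterozygotes pass on the drive, not half-and-half
        freqs.append(p)
    return freqs

# Starting from just 1% of the population, the drive allele is near
# fixation after about ten generations:
for gen, p in enumerate(gene_drive_frequency(0.01, 10)):
    print(f"generation {gen:2d}: drive allele at {p:.1%}")
```

Real drives are less efficient, and resistance alleles can evolve, so this is a best-case curve; but it shows why a drive needs only on the order of ten generations, rather than hundreds, to sweep a population.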

Another genetic-modification approach, known as the "Trojan female technique", is being developed in New Zealand as part of Russell's project. Imagine producing a genetically-modified female rat or mouse with defective mitochondria -- the organelles the cells use to generate energy. Done properly, the impaired mitochondria won't harm a female, but they will effectively disable sperm, rendering males sterile. The beauty of the scheme is that the females will continue to be produced from matings with genetically-healthy males, gradually swamping the population with "Trojan females", until the male line finally dies out -- and then so do the females. Releasing a large number of Trojan females might be counterproductive in pest control, so it may be a better approach to wiping out the last of a population that has been cut down by other means.

The issue of public cooperation is also significant; all the citizens have to be helpful, allowing access to private property, or pests will have "sanctuaries" from which they might make comebacks. Russell doesn't think that's going to be a real problem: "We're in a relatively unique position in New Zealand, where people are really, really willing to kill for conservation. It's kind of a national pastime."

People do tend to be skittish of genetic modification of anything, but the GM schemes pose no threat, except to the target species -- that being the point of the exercise. There's also the matter of money: the government and philanthropic groups have committed to donate about $3 billion NZD by the 2050 deadline, but that's only a third of the estimated cost. Russell believes that future technologies will cut down the price tag, pointing out that, from the perspective of the first eradication efforts, nobody could have believed how far the work has come today: "We don't know how we'll do it in 2050 -- but back in 1960 we didn't know we'd be doing what we were doing in 1980 or 2010."


[TUE 26 DEC 17] MICROSOFT DOES IOT SECURITY

* MICROSOFT DOES IOT SECURITY: As discussed by an article from WIRED.com ("A Tiny New Chip Could Secure the Next Generation of IoT" by Lily Hay Newman, 7 December 2017), homes are gradually being infested by internet-enabled devices -- not just PCs, tablets, and smartphones, but also webcams, appliances, even toys, making up an "internet of things (IOT)". People are waking up to the fact that the IOT presents a massive security challenge. There may not be much advantage in taking over an internet-enabled toy in itself; but the toy could be enlisted to operate as part of a botnet, or be used to infiltrate other parts of a network.

Many vendors selling IOT devices haven't given much thought to data security in the past. They may not know much about the subject -- and besides, if they're selling a cheap product, run by a low-cost microcontroller, they may not be all that willing to add to cost or development time. That's why Microsoft Research is working on "Project Sopris", the goal of which is to develop a security system for microcontroller chips that doesn't add much cost, and provides tools for security that are easy to use. According to Galen Hunt, managing director of Project Sopris:

BEGIN QUOTE:

Everything you interact with that you don't typically think of as a computer has some kind of microcontroller in it, and over the next five to ten years we believe that those devices will all be replaced by versions of the devices that will be interconnected. The manufacturers of those devices are very woefully unprepared for the security challenges of the internet. So what we set out to do was see if we could figure out how to help those devices be secure, and also accelerate the learning of the manufacturers of the devices.

END QUOTE

The Project Sopris team is guided by a list of principles, the "Seven Properties of Highly Secure Devices", which include:

The Sopris microcontroller is built around a conventional ARM microcontroller core, but adds a "Pluton" security subsystem -- notably featuring an auxiliary security processor that handles much of the cryptographic overhead. The Pluton subsystem also has the ability to audit the system for anomalous behavior, then reset individual processes, or the whole device, as necessary. This ongoing surveillance and correction capability is important in routers, connected printers, and similar devices that are only infrequently rebooted: should such a device become infected with malware, the malware will otherwise persist until the next reboot, even if it can't embed itself permanently into the system.

Compartmentalization is another big feature in Pluton. Microcontroller software has traditionally been implemented as a single big program; malware that makes its way into such a system can get control over everything. Compartmentalization ensures that troublemakers are limited in the damage they can cause -- in much the same way that a badly-behaved app on a smartphone generally can be deleted, without disrupting the rest of the system. Once an intrusion is detected, the system will be able to restore itself to working order.
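In very rough terms, the audit-and-reset behavior described above might look like the following Python sketch. Everything here -- the class names, the restart limit, the escalation rule -- is invented for illustration; it is not the actual Sopris/Pluton design.

```python
# Hypothetical sketch of recovery by restart: a supervisor audits
# isolated "compartments", restarts any that misbehave, and escalates
# to a whole-device reset only as a last resort.

class Compartment:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.restarts = 0

    def restart(self):
        self.restarts += 1
        self.healthy = True      # assume a fresh start clears the fault

class Supervisor:
    MAX_RESTARTS = 3

    def __init__(self, compartments):
        self.compartments = compartments
        self.device_resets = 0

    def audit(self):
        """Check each compartment; restart offenders, escalate if needed."""
        for c in self.compartments:
            if not c.healthy:
                if c.restarts < self.MAX_RESTARTS:
                    c.restart()          # contain damage to one compartment
                else:
                    self.full_reset()    # whole-device reset as last resort
                    break

    def full_reset(self):
        self.device_resets += 1
        for c in self.compartments:
            c.restarts = 0
            c.healthy = True

comps = [Compartment("net"), Compartment("ui"), Compartment("update")]
sup = Supervisor(comps)
comps[0].healthy = False       # simulate malware knocking over one process
sup.audit()
print(comps[0].healthy, comps[0].restarts)   # True 1 -- only "net" was touched
```

The point of the sketch is the containment: the misbehaving compartment is restored without disturbing its neighbors, much as a misbehaving smartphone app can be removed without reinstalling the phone.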

So far, Sopris has proven highly resilient. A challenge set up by bug bounty facilitator HackerOne threw 150 security researchers at Sopris; they failed to crack it. Hunt says the team was actually disappointed that the penetration testers didn't find more flaws; better to find out under controlled conditions than in the wild. The team is planning a more aggressive security challenge.

One of these days, Hunt says, full schematics of the Sopris chip will be available as open source, reassuring potential users that they won't be held hostage to proprietary Microsoft technology. He believes that the chip overhead for added security is small, meaning a secure microcontroller won't be much more expensive than an ordinary one. Eventually, so the vision reads, the IOT will be secure, and everyone will have forgotten there was ever a problem. That might take some time.

COMMENT ON ARTICLE
BACK_TO_TOP

[MON 25 DEC 17] UNDERSTANDING AI (5)

* UNDERSTANDING AI (5): Other AI researchers are trying to rethink the technology to make it more transparent. A few years ago, Gupta began a project named "GlassBox", the idea being to develop transparent neural nets. Her fundamental principle is "monotonicity" -- that is, a predictable relationship between one variable and another. For example, all other things being equal, the price of a house rises with its square footage.

Gupta lists such monotonic relationships in sprawling databases called "interpolated lookup tables". Conceptually, they're not so different from the trig and log tables found in old dusty textbooks: a student could find where a particular number was between two entries in the table, then interpolate between the entries to get the proper log value. However, Gupta's lookup tables have millions of entries, corresponding to multiple "dimensions" in the input data -- as opposed to the one-dimensional operation of looking up a log value for a number. Gupta incorporates the tables into neural networks, adding a layer of determinism that she believes will make the network more controllable. The neural network will still be able to learn; but it will be constrained within the grid defined by the lookup tables.
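A one-dimensional toy version of such an interpolated lookup table, with the monotonicity constraint checked explicitly, might look like this in Python. The real GlassBox tables are learned and multi-dimensional; the house-price numbers here are made up for illustration.

```python
# A minimal "interpolated lookup table" with a monotonic constraint.
import numpy as np

# Keypoints of the table: house square footage -> price (thousands).
sqft_keys  = np.array([500.0, 1000.0, 1500.0, 2000.0, 3000.0])
price_vals = np.array([120.0, 180.0, 230.0, 290.0, 400.0])

# Monotonicity check: price entries must be non-decreasing in sqft.
assert np.all(np.diff(price_vals) >= 0), "table violates monotonicity"

def predict_price(sqft):
    # Interpolate between the two bracketing table entries,
    # exactly as one would with an old log table.
    return np.interp(sqft, sqft_keys, price_vals)

print(predict_price(1250.0))   # halfway between 180 and 230 -> 205.0
```

Because the keypoints are explicit and ordered, the behavior between any two of them is predictable by construction -- which is exactly the determinism the lookup-table layer is meant to add.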

Caruana, for his part, has turned to statistics to get neural networks under control. In the 1980s, statisticians pioneered a technique they called the "generalized additive model (GAM)". GAMs were derived from linear regression, a classic statistical technique in which data is fitted to a curve. GAMs, however, can establish "regression lines" through multiple operations -- for example, by squaring one set of variables while taking the log of another. Caruana uses machine learning to obtain those operations -- with the resulting system being much more transparent than the usual workings of a DNN. Caruana says: "To our great surprise, on many problems, this is very accurate."

Or at least, his GAMs are accurate for a certain subset of problems. They don't do well with unstructured data, such as images or sounds, which are the bread and butter of many DNN applications. However, they work well for any data that fits in the rows and columns of a spreadsheet, such as hospital records. Caruana turned one of his GAMs on his old pneumonia records, and found out why he had run into an anomaly: hospitals usually put asthmatics with pneumonia in intensive care, improving their outcomes. Seeing only their rapid improvement, and not factoring intensive care into the scenario, the AI would have recommended the patients be sent home. Caruana is now pushing his GAM systems to California hospitals.
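The additive idea can be sketched in a few lines of Python: each input variable gets its own shape function, the functions are fit jointly by least squares, and each one can then be inspected on its own. Real GAMs use smoothers and backfitting, and Caruana's version learns the shapes with machine learning; this fixed-basis toy is only illustrative.

```python
# A minimal additive model in the GAM spirit: the prediction is a sum
# of per-feature shape functions, so each feature's contribution is a
# separate, readable object.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0.5, 2.0, 200)
x2 = rng.uniform(0.5, 2.0, 200)
y  = x1**2 + np.log(x2)            # true additive structure

# Per-feature polynomial bases for f1(x1) and f2(x2), plus an intercept.
def basis(x):
    return np.column_stack([x, x**2, x**3])

X = np.column_stack([np.ones_like(x1), basis(x1), basis(x2)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Each shape function can now be examined or plotted independently:
f1 = lambda v: basis(v) @ coef[1:4]
f2 = lambda v: basis(v) @ coef[4:7]

pred = coef[0] + f1(x1) + f2(x2)
print(float(np.mean((pred - y)**2)))   # near-zero fit error
```

The transparency comes from the structure: to understand what the model says about square footage, one plots f1 alone, with no tangle of cross-connections to unravel.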

* Along with work on probes into neural nets or rethinking them, other AI researchers are using deep learning to probe into deep learning. As with many AI researchers, Mark Riedl -- director of the Entertainment Intelligence Lab at the Georgia Institute of Technology in Atlanta -- likes to use 1980s video games to test his tech. He particularly likes FROGGER, in which a frog tries to make his way across a busy highway to reach a pond. Training a neural net to play expert FROGGER is easy, but he found that figuring out how the neural network actually got things done was hard.

Riedl took a step back from the neural network itself, and had human subjects play FROGGER, giving a running commentary on their play. He recorded the comments: "There's a car coming -- I need to hop out of its way!" -- and correlated them with the state of the game. Riedl used a second neural network to translate between the game code and the corresponding comments. He then merged the two neural networks, to come up with a new neural network that would comment on its gameplay: "I can't hop forward, I'll have to wait for the next lane to clear." When trapped, the neural network would even complain: "Damn, I'm dead!"

Riedl calls this scheme "rationalization". It makes perfect sense: if we don't know why a neural network does something, why not have it tell us? It's very similar to a tracing capability in a computer program. He feels that rationalization may become a common capability in operational neural nets. Not sure of what a neural net is doing? Set it to "verbose" mode, and it will tell you.

* Uber's Yosinski also has been teaming up AIs to get more predictable results from his image recognition system, using what is known as a "generative adversarial network (GAN)", which involves two neural nets working against each other.

Given a training set of images, the "generator" learns rules about imagemaking and can create synthetic images. A second "adversary" network tries to determine whether the resulting pictures are real or fake, prompting the generator to try again. That feedback eventually results in crude images with features that humans can recognize.
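That feedback loop can be boiled down to a toy: a two-parameter generator learns to mimic samples from a Gaussian, with a logistic-regression "adversary" as the critic. This is a drastically simplified sketch of a GAN -- one-dimensional numbers rather than images, hand-derived gradients rather than a deep-learning framework -- but the alternating updates are the same idea.

```python
# Toy GAN: generator g(z) = a*z + b tries to mimic N(4, 1);
# discriminator D(x) = sigmoid(w*x + c) tries to tell real from fake.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

real_mean = 4.0
a, b = 1.0, 0.0          # generator parameters
w, c = 0.5, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    # --- discriminator update: push D(real) up, D(fake) down ---
    x_real = rng.normal(real_mean, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - s_real) * x_real - s_fake * x_fake)
    c += lr * np.mean((1 - s_real) - s_fake)

    # --- generator update: push D(fake) up (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    s_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - s_fake) * w * z)
    b += lr * np.mean((1 - s_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(float(np.mean(samples)))   # should drift toward 4.0
```

On its own, the generator starts out producing samples centered on zero; only the adversary's feedback drags its output toward the real distribution -- the 1-D analogue of the crude-to-convincing image progression described above.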

On its own, the generator couldn't produce convincing images; when coupled to the adversary, it learned to synthesize realistic-looking images of volcanoes in day or night, some erupting, some dormant. The volcano images might have flaws that would point to some gap in the AI's learning. Yosinski believes his GAN system is applicable to any image-analysis system -- helping AIs not only become smarter, but more reliable. [TO BE CONTINUED]

START | PREV | NEXT | COMMENT ON ARTICLE
BACK_TO_TOP

[FRI 22 DEC 17] ONCE & FUTURE EARTH (19)

* ONCE & FUTURE EARTH (19): The cycles of algal blooms did have one persistent effect: building up oxygen concentrations in the atmosphere. When the Earth got hot, the algae pumped out oxygen; when the planet froze over, the oxygen concentration remained roughly constant. By about 650 million years ago, oxygen concentrations had risen to near-modern levels, in a second Great Oxygenation Event, with a protective ozone layer screening out most solar ultraviolet.

During this time, life on Earth remained predominantly single-celled -- but the symbiotic combination of several different lines of single-celled organisms led to the "eukaryotic" cell, a cell with a nucleus. Although bacteria and archaea can form mats and films, they can't form true multicellular organisms. The eukaryotic cell could, with the first multicellular organisms arising during this era -- soft-bodied creatures, little more than shadows in stone, known as the "Ediacaran fauna".

In the meantime, further breakup of Rodinia, into chunks that would lead to the modern continents, meant the cessation of the long cycles of ice, followed by intense heat. One large component, named "Gondwana", remained; it stretched from the South Pole to above the equator. It would give birth to all the southern continents, plus a good part of Asia. Most of the Northern Hemisphere remained ocean, though there were components, most notably Laurentia, that would eventually come together to form North America, Europe, and Siberia.

The Phanerozoic Eon, beginning 542 million years ago, saw an apparent explosion of diversity of multicellular organisms, during the initial Cambrian period of that eon. This alleged "Cambrian explosion" may have been something of an artifact, with Cambrian organisms simply being larger and sometimes having shells, meaning they left more of a trace in the fossil record. The land, at the time, was still largely barren of life, but that was gradually changing; ground-based plant life was common by 400 million years ago, with small animals -- bugs, worms, and the like -- also present.

The first records of land vertebrates, in the form of fishlike amphibians, go back about 375 million years. At about this time, atmospheric oxygen concentrations began a slow rise to well exceed modern levels -- 30% or more -- it seems in part because of riotous growth in plant life, with leafy plants leading to primeval forests carpeting the Earth. The plant life accumulated, to be sequestered in the Earth, eventually being tapped by humans for coal and oil. Insects grew huge in this environment, with dragonflies the size of a plate.

Then, about 251 million years ago, in the Permian period, the system collapsed, in the greatest mass extinction of species in all of Earth's history. About 70% of land species died out; about 95% of sea species did as well. Nobody is sure what happened. There was no dramatic event like an asteroid impact, the dying happening over an extended period. Atmospheric oxygen concentrations declined from their high, back to a level of 20%, more like that of today. There was some cooling, though not to a level as there had been during "Snowball Earth"; and there was an uptick in volcanic activity. The problems seem to have been due to a system of causes, not any one difficulty.

The Earth recovered in tens of millions of years, with the Mesozoic Era bringing in the age of reptiles, the time of the dinosaurs and the other great reptilian beasts. The Mesozoic was not entirely a time of reptiles, however, since it saw the introduction of flowering plants; of birds -- which were really an offshoot of one branch of the dinosaurs, the theropods; and the mammals, though none from the era have been found that were much bigger than a housecat.

65 million years ago, the age of the great reptilians came to an end, punctuated by the impact of a giant meteor in what is now the Yucatan. By all evidence, the impact was only the endpoint of a series of events that disrupted the existing order of the Earth. In any case, in the aftermath, the mammals and birds became the predominant land species.

There were other mass extinctions in the "age of mammals", dating from 56, 37, and 34 million years ago. They were not so dramatic as the event that ended the age of dinosaurs, much less the Permian extinction, and their causes are not known. Twenty million years ago, the Earth entered into another era of climate instability, with at least eight ice ages occurring in that interval. [TO BE CONTINUED]

START | PREV | NEXT | COMMENT ON ARTICLE
BACK_TO_TOP

[THU 21 DEC 17] SPACE NEWS

* Space launches for November included:

-- 05 NOV 17 / BEIDOU x 2 -- A Chinese Long March 3B booster was launched from Xichang at 1145 UTC (local time - 8) to put two "Beidou" navigation satellites into orbit. These were placed in a medium Earth orbit with an altitude of 13,700 miles (22,000 kilometers) and an inclination of 55 degrees. They were the 24th and 25th Beidou satellites to be launched, bringing the constellation up to 15 operational satellites.

The completed Beidou constellation will consist of 27 satellites in medium Earth orbits, five in geostationary orbits around 22,300 miles (35,900 kilometers) over the equator, and three in inclined geosynchronous orbits that oscillate north and south of the equator.

-- 08 NOV 17 / MOHAMMED VI-A -- A Vega booster was launched from Kourou in French Guiana at 0142 UTC (previous day local time + 3) to put the "Mohammed VI-A" AKA "MN35 13" Earth observation satellite into orbit for the government of Morocco. The satellite, named after the King of Morocco, had a launch mass of 1,100 kilograms (2,450 pounds), and was built by a collaboration between Thales and Airbus. It was believed to be based on the Airbus AstroSat 1000 bus, like the two Pleiades optical observation satellites for France and the two similar Falcon Eye satellites for the UAE. Mohammed VI-A was intended for both civil and military observation. A "Mohammed VI-B" satellite was to follow.

-- 12 NOV 17 / CYGNUS 8 (OA-8) -- An Orbital Sciences Antares booster was launched from Wallops Island off the coast of Virginia at 1219 UTC (local time + 4) to put the eighth operational "Cygnus" supply capsule, designated "OA-8", into space on an International Space Station support mission. It docked with the ISS Unity module two days after launch.

Cygnus OA-8 at ISS

Along with supplies, the capsule also carried a set of CubeSats, to be released from a NanoRacks deployer.

-- 14 NOV 17 / FENGYUN 3D, HEAD 1 -- A Long March 4C booster was launched from Taiyuan at 1835 UTC (next day local time - 8) to put the "Fengyun 3D" polar-orbiting weather satellite into orbit for the China Meteorological Administration, along with the "HEAD 1" microsatellite for the HEAD Aerospace company of Beijing.

Fengyun 3D -- the name means "Wind & Cloud" -- had a launch mass of 2,450 kilograms (5,400 pounds) and a design life of five years. There were three instrument payloads on the satellite, for sounding, ozone studies, and imaging.

The HEAD 1 satellite, the first space platform in the HEAD Aerospace Skywalker constellation, had a launch mass of 45 kilograms (100 pounds) and featured 3-axis stabilization. It carried an Automatic Identification System (AIS) receiver to track maritime traffic. The HEAD Aerospace Skywalker constellation will ultimately consist of 30 satellites with different buses and payloads to provide a space-based observation and data-handling system.

-- 18 NOV 17 / JPSS 1 -- A Delta 2 booster was launched from Vandenberg AFB at 0947 UTC (local time + 8) to put the "Joint Polar Satellite System (JPSS) 1" satellite into orbit, the first of NOAA's next-generation series of polar-orbiting spacecraft. The launch also included five CubeSats.

The booster was in the "7920" configuration, with nine solid rocket boosters and no third stage.

-- 21 NOV 17 / JILIN 1 x 3 -- A Long March 6 booster was launched from Taiyuan at 0450 UTC (local time - 8) to put three "Jilin 1" Earth observation satellites into near-polar Sun-synchronous orbit. The three satellites -- named "Jilin 1-04", "1-05", and "1-06" -- were owned by Chang Guang Satellite Technology LTD, a commercial spinoff of the Chinese Academy of Sciences.

This launch brought the total number of satellites put into space by the company to eight, including six in the Jilin 1 video imaging constellation. Chang Guang plans to have 60 satellites in orbit by 2020, providing global coverage and capturing a view of any location in the world as often as every 10 minutes. At present, the primary customer is the Chinese government and military, but the firm wants to expand to commercial and mass-market clients.

This was the second launch of the Long March 6. The 29-meter (95-foot) tall Long March 6 booster is one of three new Long March-series satellite launchers introduced since 2015. The Long March 6 is a lightweight rocket, capable of hauling up to 500 kilograms (1,100 pounds) into Sun-synchronous orbit.

Long March 6 booster

The Long March 6's first stage is powered by a kerosene-fueled YF-100 main engine, which generates approximately 1,180 kN (120,000 kgp / 264,000 lbf) of thrust. A YF-115 engine provides propulsion for the second stage. The YF-100 and YF-115 are the same new-generation powerplants used on China's bigger Long March 5 and Long March 7 rockets.

-- 26 NOV 17 / YAOGAN 30-02 -- A Long March 2C booster was launched from Xichang at 1810 UTC (next day local time - 8) to put the secret "Yaogan 30-02" payload into orbit. It included three satellites, and may have been a naval signals intelligence payload.

-- 28 NOV 17 / METEOR M2-1 (FAILURE) -- A Soyuz 2-1b booster was launched from Vostochny at 0541 UTC (local time - 9) to put the "Meteor M2-1" polar-orbiting weather satellite and 18 smallsats into orbit. Meteor M2-1 had a launch mass of 2,750 kilograms (6,062 pounds) and carried four meteorological payloads.

The satellite also hosted a search-and-rescue radio transponder, plus a payload to relay data from remote weather stations and offshore buoys to Russian forecasters. Meteor M2-1 was the first Russian weather satellite equipped to receive emergency distress beacons through the international Cospas-Sarsat network.

Built by VNIIEM, a Moscow-based aerospace contractor, the Meteor M2-1 spacecraft was the third in a series of upgraded Meteor M weather satellites. Two Meteor M satellites launched in 2009 and 2014 are still functioning, according to VNIIEM.

LEO Vantage 2

The secondary payloads included:

The upper stage of the booster reached orbit, but unfortunately failed to deploy the payloads, and fell back to Earth.

COMMENT ON ARTICLE
BACK_TO_TOP

[WED 20 DEC 17] BAT 1K

* BAT 1K: As discussed by an article from NATURE.com ("Geneticists Hope To Unlock Secrets Of Bats' Complex Sounds" by Ramin Skibba, 19 November 2016), there are bats that sing or call much like birds, but the genetic basis of such abilities is unknown. Now, under the new "Bat 1K" program, researchers intend to sequence the genomes of more than 1,000 bat species to understand the genetic basis of their singing abilities -- as well as their ability to navigate in the dark through echolocation, their strong immune systems that can shrug off Ebola, and their relatively long lifespans.

According to Mirjam Knoernschild, a behavioral ecologist at Free University Berlin, Germany, some bats show "babbling behavior", such as barks, chatter, screeches, whistles, and trills. Young bats learn the songs and other sounds from older male tutors. The bats use these sounds during courtship and mating, when they obtain food, and as they defend their territory against rivals.

Scientists have investigated the vocal sounds of only about 50 bat species so far, and they know much less about bat communication than about that of birds. Four species of bats have so far been found to learn vocal sounds from each other -- from their fathers and other adult males -- just as a child gradually learns how to speak from its parents.

Bat vocalization is diverse, varying by geographic location, gender, and the age at which vocalization starts, as well as in the frequency and types of sounds. Genetic studies have identified at least one gene in bats linked to speech and language, named FOXP2. The gene is also known to have a role in how humans learn language, and in vocal learning in songbirds. The versions of FOXP2 found across most species are very similar, but bats are an exception, their FOXP2 genes being much more diverse in coding than those of people. Nobody's sure why.

Researchers working on the Bat 1K project expect to learn that other genes are involved in communication, and that many more bat species have the ability to learn songs, calls or other sounds. Knoernschild says: "It's not a rare trait. I'm becoming convinced that there's a whole continuum in bat vocal learning, and it's more widespread than just four species."

Although the echo-location ability of bats has been studied for many years, partly because of its applications to sonar and radar, researchers know very little about the acoustic communication and social behavior that drive how bats learn their songs and sound. Researchers say the study of vocal learning in bats is primitive, about on the level of the state of research into birdsong 60 years ago -- one big obstacle being that bats are harder to observe than birds.

Songbirds have been studied in detail, especially the zebra finch (Taeniopygia guttata), which is easy to breed in a lab, according to Tecumseh Fitch, a cognitive biologist at the University of Vienna. However, birds don't have a mammalian brain or use a larynx to make sounds. Some mammals, including elephants, whales, pinnipeds, and dolphins, display vocal learning, but bats are much more practical to study. Fitch says: "My hope is that bats will become the model species for vocal learning."

* A related article from NATURE.com ("Bat Banter Is Surprisingly Nuanced" by Ramin Skibba, 22 December 2016) focused on the communications in colonies of Egyptian fruit bats -- a research team having concluded the bats have elaborate communications skills, in particular an ability to communicate between individuals. It was tricky getting that far. According to Yossi Yovel -- a neuroecologist at Tel Aviv University in Israel who led the study -- it's hard to make out distinct calls in a bat colony: "If you go into a fruit-bat cave, you hear a cacophony."

The researchers conducted studies on 22 captive bats for 75 days, keeping the bats under audio and video observation. 15,000 vocalizations were obtained, with software linking particular vocalizations to particular incidents, such as disagreements over access to food. In the end, more than 60% of the calls were correlated to four classes of activity.

The software was able to determine which bat was "doing the talking" about 70% of the time, and determine who was being addressed about half the time. It turned out that the bats made slightly different sounds when communicating with different individuals, particularly when communicating with members of the opposite sex. Only a few other species, such as dolphins and some monkeys, are known to specifically address other individuals, instead of broadcasting generalized sounds, such as alarm calls.

The bats, to no real surprise, got particularly noisy when annoyed with each other. Yovel and his group are continuing to refine their research; he believes the communications of bats are much more sophisticated than they have traditionally seemed.

COMMENT ON ARTICLE
BACK_TO_TOP

[TUE 19 DEC 17] KEEPING WARM

* KEEPING WARM: As discussed by an article from BBC.com ("The City Where The Internet Warms People's Homes" by Erin Biba, 13 October 2017), everyone knows that data centers are power hogs. How could they not be? A building dedicated to racks of data server boxes, almost all of them churning away in support of the online cloud, not only takes power to run -- it also takes power to keep cool.

The pragmatic Swedes saw an opportunity in the heat generated by data centers, and are now using it to heat homes in Stockholm. The project is named "Stockholm Data Parks"; it's being run as a partnership between the city's government, Fortum Vaerme -- an arm of the Finnish Fortum energy group -- and others. A number of major Stockholm data centers are involved; businesses are enthusiastic about the scheme, since it not only burnishes their "green" credentials, but also pays off. The system now includes data centers run by cell network systems giant Ericsson, and clothing retail chain H&M.

The data centers are, as a rule, cooled by piping in cold water, which is used to generate cold air that is blown through the data center. The water gets heated up, to be piped on to Fortum's plants, where it is sent out for heating. Fortum pays for the hot water, and also pumps in the cold water gratis. The scheme is not unique to Sweden, with similar programs happening in Finland, the US, Canada, and France. However, the Swedes are implementing it on a much bigger scale.

Stockholm Data Parks expects to generate enough heat to warm 2,500 residential apartments by 2018 -- with the long-term goal being to meet 10% of the entire heating needs of Stockholm by 2035. It's hardly an unreasonable goal; according to Data Centers By Sweden -- which is setting up similar projects across the country -- only 10 megawatts (MW) of energy is needed to heat 20,000 modern residential apartments. The typical Facebook data center, to provide context, uses 120 MW.
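The arithmetic behind those figures is easy to check:

```python
# Back-of-the-envelope check of the figures above: 10 MW serving
# 20,000 apartments, versus a 120 MW data center's heat potential.
mw_per_20k_apartments = 10
apartments_per_mw = 20_000 / mw_per_20k_apartments     # 2,000 apartments/MW
watts_per_apartment = 10e6 / 20_000                    # 500 W each
facebook_dc_mw = 120
print(watts_per_apartment, facebook_dc_mw * apartments_per_mw)
# -> 500.0 240000.0 : one big data center's waste heat could in
# principle warm some 240,000 apartments
```

Of course, not all of a data center's electrical draw comes back out as recoverable hot water, so the real-world yield is lower -- but the order of magnitude explains the enthusiasm.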

At Interxion, a company whose data centers support mobile gaming apps and other cloud-based software, the cost-benefit analysis was so promising that they're building a new facility for heat capture. It makes perfect sense on an economic basis, with Peder Bank -- a managing director of development for the firm -- saying: "We're trying to turn it into a secondary business."

It's not just the economics, however; Interxion is willing to share engineering know-how with any data center that wants to set up shop in Stockholm. Bank says that the effort reflects "a global purpose", explaining: "If I'm able to protect the higher agenda and do my business, I should do that. If I am able to attract business to the region I should do that and then I should compete after. I don't see a mismatch. We're all living on the same planet."

Green thinking comes somewhat naturally to the Swedes, in part because the country doesn't have hydrocarbon resources of its own -- no coal mines, no oil wells. It has over 2,000 hydropower plants, which account for 40% of energy production. The rest comes from nuclear power, which is being phased out; and Sweden's sole coal plant, which is driven by Russian coal. The coal plant will be shut down in the next few years. Sweden plans to be running on 100% renewables by 2020.

The Swedes also recycle more than 99% of their household waste, with less than 1% ending up in landfills. Waste is burned to generate electricity; Sweden even imports garbage for burning. Still, Sweden isn't the greenest country in the world. Iceland tops the rankings, with 85% of its energy from renewables, the country being particularly rich in geothermal energy. Sweden does have some days when it is 100% fossil-fuel free -- but Denmark does so more regularly, and in fact exports some of its green energy to Sweden and other neighbors.

Sweden also has an advantage in extracting heat from data centers, in that the hot-water heating system was already in place. In the 1950s, homes in Stockholm were generally heated by oil, but Fortum Vaerme then began piping hot water to hospitals. When the first energy crisis hit in the 1970s, the heating system expanded; today, Fortum provides heat to about 12,000 buildings, or 90% of the city of Stockholm. The water used to be heated by coal, but now the primary source of energy is biofuel -- wood pulp left over from production by the country's massive forestry industry, brought in to Stockholm on ships.

The introduction of data centers into Stockholm, then, simply provided a new source of heat. It's not a small contribution, either; in fact, there are concerns that eventually data center construction will outstrip the need for heat obtained from them. Nonetheless, the Swedish central government is encouraging the growth of data centers by cutting the electricity tax. The Swedes, it seems, have less fear of the future than others.

COMMENT ON ARTICLE
BACK_TO_TOP

[MON 18 DEC 17] UNDERSTANDING AI (4)

* UNDERSTANDING AI (4): Modern neural nets are much more powerful than those Caruana used as a grad student, but they are not conceptually different. A neural net is trained with a huge and disorderly pile of data -- for example, millions of pictures of dogs -- with the data being labeled. That data is driven into a neural network with many layers, which learns to associate the data with its labels. The first layer establishes patterns from the data, the next patterns of patterns, and so on, until the final layer is able to tell the difference between, say, a terrier and a dachshund even without labels.

Given a small sample, the neural network does a poor job; but using a scheme known as "back-propagation", the network adjusts itself in response to PASS / FAIL evaluations of its effectiveness. Given a huge sample set and back-propagation, eventually the neural network becomes highly competent at its task. Caruana says: "Using modern horsepower and chutzpah, you can get these things to really sing."
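The training loop described above can be boiled down to a few dozen lines: a tiny two-layer network learns the XOR function by back-propagation, with the squared error playing the role of the PASS / FAIL evaluation. It's a toy compared to a million-image DNN, but the mechanics -- forward pass through layers of patterns, error propagated backward to adjust the weights -- are the same.

```python
# A tiny neural network learning XOR by back-propagation.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR labels

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda t: 1 / (1 + np.exp(-t))
lr = 1.0

losses = []
for epoch in range(10000):
    # forward pass: patterns -> patterns of patterns -> prediction
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # backward pass: propagate the error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(losses[0], losses[-1])   # error shrinks with training
```

With only four training examples the network learns quickly; scale the same loop up to millions of labeled images and many more layers, and -- given the "modern horsepower" Caruana mentions -- it really does sing.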

Nonetheless, uncertainty over the operation of DNNs remains. They work fine with inputs much like those they were trained on -- but the more unfamiliar the data, the less confidence there can be in the neural network's output. It is also impossible to know if some small tweak in the data might throw the neural network for a loop.

Indeed, there's been work in the production of "adversarial examples", meaning inputs deliberately tweaked to spoof the DNN. It turns out that AI can be used to hand a DNN an example, see how the DNN reacts, then tweak the example, progressively "walking" the DNN into a false match. It proved straightforward for MIT researchers to take a picture of a set of automatic weapons, and tweak it until a DNN thought it was an image of a helicopter -- even though any human looking at the picture would still see the weapons.

The researchers judged they could have tricked the DNN into recognizing the image as anything they wanted, just by nudging the tweaks in the appropriate direction. This approach to spoofing a DNN has practical implications, for example figuring out how to write spam so it can get through spam filters. Tightening up the spam filters to catch such trickery has the unfortunate result of making the spam filters more prone to "false positives", marking legitimate emails as spam.

No wonder AI researchers are worried about the interpretability problem. A great deal of effort is now being spent to develop tools for probing DNNs; to devise alternative schemes to DNNs with more transparency; or to even use deep learning itself to unravel the tangled ball of yarn of connections inside a DNN. Jason Yosinski calls this effort "AI neuroscience".

Marco Ribeiro, a graduate student at the University of Washington in Seattle, probes DNNs using what are called "counterfactual probes" -- a trick related to that used by the MIT researchers to spoof DNNs. The idea is to give the DNN inputs carefully varied over a wide range, and see which way the outputs of the DNN jump. For example, consider a neural network that reads in the text of movie reviews, and then flags those that give a movie a thumbs-up. To do this, the DNN would first be trained on reviews flagged as positive, plus reviews flagged as negative, with the DNN then using what it learned from these examples to flag reviews appropriately on its own.

It is obviously true that the more extensive the training, the better the DNN can evaluate reviews -- but that doesn't give us much insight into how it does so. Ribeiro's counterfactual probe program, called "Local Interpretable Model-Agnostic Explanations (LIME)", takes a review given a thumbs-up, subtly tweaks it by deleting or swapping out words, then resubmits it to see if it still gets a thumbs-up. Done hundreds or thousands of times, this makes it possible to find out what the DNN is paying attention to, and how much weight it places on each element. For a simple example, the word "crap" is clearly going to be associated with a negative review.
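The probing loop itself is simple enough to sketch: delete one word at a time and watch the score move. The "sentiment scorer" below is a made-up stand-in -- not a DNN, and not the actual LIME code -- since the point here is the counterfactual bookkeeping around the model, not the model itself.

```python
# Counterfactual word-deletion probe against a toy sentiment scorer.
def toy_score(words):
    # Stand-in for a trained model; the weights are invented.
    weights = {"great": 2.0, "fun": 1.0, "dull": -1.5, "crap": -3.0}
    return sum(weights.get(w, 0.0) for w in words)

review = "a great fun movie not dull".split()
base = toy_score(review)

influence = {}
for i, word in enumerate(review):
    perturbed = review[:i] + review[i + 1:]      # drop one word
    influence[word] = base - toy_score(perturbed)

# Words whose removal moves the score most are what the model "attends" to.
for word, delta in sorted(influence.items(), key=lambda kv: -abs(kv[1])):
    print(word, delta)
```

The probe never looks inside the scorer at all -- which is exactly the "model-agnostic" part of LIME's name: the same loop works against a black-box DNN.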

Mukund Sundararajan, another Google computer scientist, came up with a probe that takes a more methodical approach. Instead of tweaking the input in various ways, Sundararajan and his team simply blank out part of the text, and then restore it step-by-step. As they run each step through the network, they observe which way it jumps, and gradually map out how the DNN makes its decisions. Sundararajan regards the probing of the DNN as open-ended, making a comparison with his preschool child: "I have a 4-year-old who continually reminds me of the infinite regress of WHY?" [TO BE CONTINUED]

START | PREV | NEXT | COMMENT ON ARTICLE
BACK_TO_TOP

[FRI 15 DEC 17] ONCE & FUTURE EARTH (18)

* ONCE & FUTURE EARTH (18): The earliest free-living organisms were "chemotrophs", deriving their sustenance via chemical reactions with minerals. Compared to the solar energy flooding onto the Earth, that was a limited way of making a living. Certainly, organisms were obtaining at least some of their energy from the Sun from early on, if at the very least by soaking up heat. Gradually, organisms emerged whose metabolisms were based on solar energy via photosynthesis, drawing in carbon dioxide and water, and exhausting oxygen as a waste product. Modern "cyanobacteria" can be regarded as reasonable models for these early photosynthesizers -- though the scheme was "reinvented" by evolution multiple times, using different approaches.

In any case, the result was the first "Great Oxygenation Event (GOE)", which began about 2.5 billion years ago, at the beginning of the Proterozoic eon. It might not sound so great, since oxygen concentrations only reached about one percent of their present level over the following 300 million years. However, it would have massive effects.

The markers in the geological record of the GOE are distinctive: deposits of minerals older than 2.5 billion years that had been exposed to the atmosphere show no sign of oxidation, while oxidation becomes increasingly noticeable from that time on -- iron oxides, manganese oxides, hundreds of different oxides. The result was a slow mineralogical transformation of the surface of the Earth, with oxygen-laden ground water ensuring the transformation was more than merely skin-deep. The evolution would continue for a billion years, with each generation of minerals providing feedstocks for a subsequent generation emerging from it.

At the time, there was no life on land, except possibly "pond scum" on the oceanic edges, or blown inland by the winds. Given the prevalence of iron, the emerging continents acquired a red hue, from iron oxide. Oxygen helped boost weathering processes that slowly ground down rocky outcrops, resulting in growing sedimentary deposits.

The oxygen also began to form an "ozone layer" as solar radiation broke down the oxygen, O2, molecules in the upper atmosphere, forming ozone, O3. Up to that time, solar ultraviolet radiation had flooded down to the surface of the Earth, making existence difficult for such organisms as came above the protective surface of the sea. The ozone layer helped block the UV, though there was still too little oxygen for it to do more than lightly impede it.

For over a billion years, the Earth might well have seemed unchanging to any alien visitors that stopped by every now and then. The continental cratons shifted, new minerals arose, and single-celled organisms continued to evolve; but from space, it is unlikely that the planet's appearance changed much.

Changes started coming more rapidly about 850 million years ago. At that time, the movements of the continents had accumulated most of the cratons in a supercontinent named "Rodinia", surrounded by a global ocean named "Mirovia". Rodinia, however, then began to break up, a rift forming to shed off the Congo and Kalahari cratons -- which now form southern Africa. The West Africa craton then split off 800 million years ago.

By 750 million years ago, the breakup of Rodinia was in full swing, the supercontinent having split into north-south halves. The multiplication in coastline meant a multiplication of coastal erosion, with a comparable multiplication of oceanic mineral content and sediments. With more mineral nutrients available, photosynthetic microorganisms bloomed, increasing the oxygen content of the atmosphere. When these microorganisms died, they fell to the seafloor, to be buried in sediments -- sequestering their carbon, with the effect of a reduction in atmospheric CO2. The evidence of this process is thick limestone deposits from that era, generated with the help of the dead microorganisms, with the deposits featuring a distinctive isotopic signature.

The algal blooms had the result of depleting atmospheric CO2. In addition, the breakup of Rodinia led to shallow inland seas. Water evaporating from these shallow seas contributed to more rainfall, and more erosion to cause the weathering of rock. The weathering of rock also absorbs CO2. However it happened, atmospheric CO2 concentrations fell. CO2 traps infrared radiation, and so a reduction in CO2 concentrations means less heat trapped by the Earth's atmosphere.

In an era when the Sun was dimmer than it is today, that had dramatic consequences, in the form of a drastic ice age. Glaciers leave clear marks on the terrain, with these marks extensively revealed in geological deposits dating from 740 million to 580 million years ago. In 1998, geologist Paul Hoffman (born 1941) and Daniel Schrag (born 1966), both of Harvard University, published a paper titled "A Neoproterozoic Snowball Earth", suggesting that during that period, the Earth completely froze over at least three times.

According to Hoffman and Schrag, the depletion of CO2 led to growing icecaps. As the icecaps extended their range, more sunlight was reflected into space, further cooling the Earth. The icecaps extended towards the equator until, as the story goes, the entire planet was covered with ice averaging hundreds of meters thick.

The ice shut down the bulk of photosynthesis and much of the CO2 absorption through weathering. Volcanic eruptions belched CO2 into the atmosphere, much as they always do, but the sinks to remove CO2 were shut down, with the result that CO2 concentrations climbed to higher and higher levels. The CO2 trapped solar heating, with the ice melting; as more dark land was exposed, it helped absorb more solar energy, boosting the heating.

The heating was boosted still further by the action of methane-producing microorganisms -- "methanogens" -- and possibly by methane produced from the Earth's mantle. Methane is an order of magnitude more potent a greenhouse gas than CO2, and large quantities of methane released into the atmosphere would produce dramatic warming.

Incidentally, the production of methane in the mantle can also lead to production of more elaborate hydrocarbons, with suggestions that the Earth's oil was mostly produced from the mantle as well, and not from ancient deposits of biomass buried in the Earth. The notion of "abiotic oil" is not the conventional wisdom; it is not seen as an unreasonable idea in itself, but cranks are inclined to treat it as a solidly-established fact that defeats the conventional wisdom. The usual rationale for doing so is to suggest that oil supplies are effectively unlimited.

In any case, once the snow had melted off the ground, CO2 absorption by weathering started up again, as did algal blooms. Methane breaks down spontaneously into CO2 and water with a half-life of a few decades, so it didn't persist for very long after production fell off. The end result was that the Earth slid back towards cooling again, with the cycle repeating a number of times. Exactly how many is a matter of argument; in fact, it's a matter of argument whether the Earth ever froze over completely. Climate modelers have found it difficult to come up with simulations in which the planet completely freezes over, with a faction countering "Snowball Earth" with "Slushball Earth". The discussion continues. [TO BE CONTINUED]
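To put a rough number on that persistence -- assuming a round 30-year half-life as a stand-in for "a few decades" -- the fraction of a methane pulse remaining after t years is 0.5^(t/30), so after a century only about a tenth is left:

```python
# Decay of a methane pulse, assuming a 30-year half-life as a round-number
# stand-in for "a few decades" -- an illustrative figure, not a measurement.
HALF_LIFE_YEARS = 30.0

def fraction_remaining(years):
    return 0.5 ** (years / HALF_LIFE_YEARS)

century_left = fraction_remaining(100)   # comes out to roughly 10%
```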

START | PREV | NEXT | COMMENT ON ARTICLE
BACK_TO_TOP

[THU 14 DEC 17] GIMMICKS & GADGETS

* GIMMICKS & GADGETS: As discussed by an article from WIRED.com ("Your Browser Could Be Mining Cryptocurrency For a Stranger" by Lily Hay Newman, 20 October 2017), "cryptocurrencies" like bitcoin are generated by a process of "mining" -- a computation-intensive process that ensures new coins can only be produced through substantial use of computing power. Now, it turns out, sneaky cryptocurrency miners are leveraging the web browsers of users to do their mining for them.

This "cryptojacking" is not really a new idea; the Black Hats have long set up "zombienets" of malware-infected computers to do their bidding, and using a zombienet to perform mining was a natural step. However, browser cryptojacking is a bit slicker than that, since it doesn't require that malware make its way onto a mark's computer. Instead, a web page simply contains Javascript active content that goes to work when a user brings up that page.

The game got rolling in September 2017, when a company named Coinhive introduced a script that could start mining the cryptocurrency Monero when a webpage loaded. The Pirate Bay torrenting site quickly incorporated it to raise funds, and within weeks the script proliferated. Hackers figured out ways to sneak it into websites like Politifact.com and Showtime.

Website providers who were cryptojacked got wise when they saw their CPU load rising dramatically during user access to targeted pages. It was easy to find out what was going on by checking the page sources. Users can put a stop to cryptojacking with an adblocker; browser extensions to block the Coinhive script, and other miners, are also becoming available. Karl Sigler -- threat intelligence research manager at malware research organization SpiderLabs -- says:

BEGIN QUOTE:

We've seen malicious websites use embedded scripting to deliver malware, force ads, and force browsing to specific websites. We've also seen malware that focuses on either stealing cryptocurrency wallets or mining in the background. Combine the two together, and you have a match made in hell.

END QUOTE

Cybersecurity experts suggest that cryptojacking doesn't have to be malign. It could, for example, be used by charity websites to raise money, and in-browser miners could be an alternative to digital ads. Discussions with users suggest they like that idea, but worry that having multiple windows open on different websites would bog down their PCs. The big problem is transparency: users have to know that they are being mined, and have some level of control over the process.

Coinhive has accordingly introduced a new version of their script, named "AuthedMine", which requests an explicit opt-in from a user to run the miner. There are doubts that's a real fix, because even if users allow a miner to run, they don't have control over what the miner is doing. What if a miner loads down the PCs of a business or government organization? No matter how many rules were invented, miner operators would have an incentive to lie and take as much as they could get away with taking. The technology is still evolving, and possibly the difficulties will be ironed out. Then again, it might end up being yet another digital nightmare to deal with.

* As for bitcoin, the original cryptocurrency, it's been on a skyrocketing climb in value of late -- the general impression being that it's become a classic investment bubble, with big fortunes to be made until the bubble pops, leaving the suckers holding the bag. The frenzy obviously won't last too much longer; the interesting question is whether the collapse of bitcoin will lead to a collapse of the overheated stock market in general. There's no law that says it has to, but those who remember the dot-com bubble around the turn of the century can recall the general market hysteria that preceded the fall.

There's another disturbing aspect to bitcoin: it's an environmental disaster. Every time a bitcoin transaction takes place, the "blockchain" that stores the system state is updated in all the nodes of the network. That's estimated to demand the same energy as needed to keep nine average US homes running for a day. Much worse, the international network of computers that performs bitcoin mining is estimated to draw over 30 terawatt-hours of energy per year. Only a minority of nations on Earth draw that much power. The energy consumption is rapidly increasing; at the current rate of growth, bitcoin mining would demand the entire world's energy production by 2020. That is an impossibility.
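As a back-of-envelope check that these figures hang together -- the average US household draw of about 29 kWh per day is my assumption, not from the article:

```python
# Cross-checking the quoted bitcoin energy figures.
HOME_KWH_PER_DAY = 29.0                # assumed average US household draw
kwh_per_tx = 9 * HOME_KWH_PER_DAY      # "nine average US homes for a day"
network_kwh_per_year = 30e9            # 30 terawatt-hours, in kWh
tx_per_year = network_kwh_per_year / kwh_per_tx
tx_per_second = tx_per_year / (365 * 24 * 3600)
# The implied throughput comes out to a few transactions per second --
# which is in fact about what the bitcoin network can actually handle.
```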

Incidentally, bitcoin enthusiasts like to say the currency is backed by the energy used to produce it. The problem with that idea is that currencies don't require, and generally do not have, any backing. Money is just a controlled medium of exchange with no inherent value in itself, only worth as much as it can buy, with government central banks in charge of supply. Bitcoin is merely money that is expensive to produce; when -- not if, when -- its value collapses, all the resources used to generate it will never be recovered. If there was a seed of a good idea in bitcoin, it has become apparent that in practice, it's a fiasco.

* In gimmick news closer to home ... I have long been a user of the Mozilla Firefox web browser, but it had been getting elderly, in particular choking on websites with bumptious active content that gave the browser a flat tire, requiring that it be restarted. The old Internet Explorer was worse, Microsoft having abandoned it in favor of the new Edge -- but Edge was unacceptable to me, since I couldn't set background colors and fonts as I liked. That's a big point for me, since I get eyestrain and headaches if I can't set up the display to my liking.

Fortunately, Mozilla just came out with a new release named "Firefox Quantum" that is a big leap forward, able to run a dozen windows easily. I didn't like the control layout very much, but it was straightforward to rearrange it as per my preferences. It had new features, including an ability to do "whole page" screen captures -- that could be done with a plug-in before, but it wasn't for free.

Mozilla is working on blocking auto-run videos, too, but that appears to be technically tricky, and won't be available for a while. Firefox had been on the decline, with only about a 20th of the market for web browsers, but Firefox Quantum is likely to restore its fortunes to a degree. It is free for download, and isn't chained to an organization like Google, implying impartiality.

At the same time, I decided to update my email system. I had been using the Mozilla Thunderbird emailer, but I got to thinking that a webmail system would be better; I could use it on any computer without having to configure that system, all I'd need for portability is the webmail URL. Since email is pointless without an internet connection, there was no advantage in hosting an emailer on my PC.

I had thought of doing that before, but traditionally webmail was pretty weak. That was then; I did some poking around, and found out that Microsoft Outlook webmail was regarded as capable. What finally pushed me over the edge were the "q.com" email addresses I had been using, from the days when my ISP was Qwest. They were never all that satisfactory, not always being seen as valid email addresses, and the problem had been getting worse.

So I ended up scrapping my email system and reconstructing it. This is a pain, in particular having to jump through hoops to make sure that email addresses for my bank account, healthcare provider, Amazon.com, and so on were properly updated and validated. One acquires a lot of such contact listings after a time, and it is unlikely I will be able to remember them all. Oh well, the ones I miss, I'll just have to fix as they become problems.

As for Outlook webmail, it seems clean and it does everything I want it to do well enough, though the learning curve has been a bit troublesome. In time, I will be acclimated and comfortable with it. I just installed the Outlook web app for Android on my smartphone; it seems to work neatly with the default webmailer. Onwards to unification!

COMMENT ON ARTICLE
BACK_TO_TOP

[WED 13 DEC 17] LATIN AMERICA LOVES SKYRIDES

* LATIN AMERICA LOVES SKYRIDES: The use of aerial cable cars as mass-transit systems was discussed here in 2013. As discussed by an article from THE ECONOMIST ("Subways In The Sky", 26 October 2017), they're catching on big-time in Latin America.

Welcome to Ecatepec, a poor suburb of Mexico City. The citizens used to take the bus between San Andres de La Canada, at the top of the hill, and Santa Clara Coatitla at the bottom; the trip took 80 minutes one way. Now they have Mexicable, an aerial cable-car line 4.9 kilometers (3 miles) long, to make the trip. Its 185 cars haul 18,000 people a day. The line makes five stops, taking less than 20 minutes to get from one end of the line to the other. A passenger named Nelly Hernandez, riding with her delighted little girl, says Mexicable is far superior to the bus, the cable-car ride being "super-quick and much less stressful."

Aerial tramways do well in high-density urban areas; they are particularly attractive for the mountainous cities found in many locales of Latin America. Cable-car lines are relatively inexpensive and quick to build, and not so constrained by right-of-way issues because they have a small ground footprint -- using skyways, not roadways. The pioneer in Latin American aerial cable-car development was Medellin, the second city of Colombia, the exercise being an outgrowth of Colombia's long civil war. The countryside was at the mercy of FARC insurgents, so refugees crowded into Medellin's hillside districts. They overstressed the road networks; an aerial cable-car system seemed to be the most cost-effective and practical solution, with the first lines starting up in 2004.

Medellin aerial tramway

The idea caught on, with systems then set up in Cali, Colombia; Caracas, Venezuela; Rio de Janeiro, Brazil; Mexico City; and La Paz, Bolivia. The La Paz system is the highest and longest in the world. Aerial tramways are popular with the people not only because they are convenient, but because they are subsidized by governments. Mexicable charges seven pesos, about 37 cents, which is half the break-even price. The fact that there's a bit of fun and style to them helps as well. Their attractiveness isn't missed by politicians, who appreciate that an aerial cable-car line can be erected quickly enough for them to still be around to cut the opening ribbon.

It is still uncertain if aerial cable car systems are honestly cost-effective. A study of Medellin's system showed that crime fell and jobs increased in areas served by the cable cars -- but other investments had been made in those areas during that time. They do make citizens prouder of their communities. However, Rio's cable car system ended up being a bad example, its construction involving considerable graft and the state finally abandoning it, with the cable cars halted at last notice.

Rio was an exception. Other Latin American cities are charging forward. Bogota, Colombia's capital, will open its first line in 2018; in all, about 20 projects are in planning in the region. "Ole!"

ED: I remember my 2010 trip to Seattle, which went remarkably well -- it was pleasant and sunny in the spring, when it normally drizzles -- except for dealing with the traffic, the place being on the list of the ten US cities with the worst traffic. The heavy population density of Seattle is part of the problem, but it's also due to the fact that in the old core city, the streets tend to be narrow -- and even more so because it's so hilly, as well as broken up by lakes and inlets, helping to tangle the road network.

I keep thinking that an aerial cable car network would be a great solution there, and after reading this article I'm more certain of it. There would be a star hub downtown, at the center of a network of hubs paralleling the city's freeways. It wouldn't be very fast -- but it's not fast to drive around either, and it can be very nerve-wracking. It would be relatively cheap to implement, with the technology being well-developed and unlikely to lead to major cost overruns. In addition, although Seattle is afflicted with the "Not In My Back Yard" mindset, people tend to like aerial cable cars. They're like something from a theme park.

There's a security issue, but less of one than in other mass transit. There would only be a few riders per gondola, and the gondolas could have security cameras. If anybody makes trouble, there's no place to go until the car reaches a station, and the cops will be waiting. There would be more problems with troublemakers at the stations.

COMMENT ON ARTICLE
BACK_TO_TOP

[TUE 12 DEC 17] RESILIENT DESIGN

* RESILIENT DESIGN: As discussed by an article from Climate Desk / WIRED.com ("Museums Are Ready for the Next Natural Disaster. Are You?" by Eleanor Cummins, 31 October 2017), the writing is increasingly on the wall that climate change is going to make big trouble for everyone -- but there's been a lot of complacency. Museums, with their precious inventories, are not ignoring the potential for trouble.

When Hurricane Sandy hit the US Northeast in October 2012, the construction site of the Whitney Museum of American Art was flooded; had the facility been in service, it would have been devastated. The Rubin Museum of Art, a distance uptown, lost power. The facility had backup generators, since it needed to protect its artifacts, but they weren't intended for extended use. Executive director Patrick Sears says: "We thought if we do lose power, in the history of New York City, it would be for a day or two. No one really anticipated we could go without power for a week."

Sandy was a wake-up call. All along the Eastern Seaboard, from Miami to Manhattan, museums are taking extraordinary measures to protect their irreplaceable collections. In doing so, they are pioneering ideas and procedures for "resilient design" that may prove useful to vulnerable coastal communities everywhere.

John Stanley, chief operating officer of the Whitney, says that the museum was fortunate that Sandy happened when it did: construction was in its early phase, and the building's design could be modified to protect it from similar disasters in the future. Stanley says: "We searched the world for flood experts and engineers." The building designers got help from WTM Engineers of Hamburg, Germany, to come up with one of the most flood-hardened structures in NYC.

taking it easy at the Whitney

The Whitney is protected from any depth of storm surge that could be expected for the time being, thanks to its raised elevation and waterproofing via carefully selected materials. There's also a 150-meter (500-foot) emergency wall that can be put into place in seven hours, and a loading door that can withstand a bus thrown at it by a surge. Providing the storm reinforcement only added $10 million USD to a total final cost of $220 million USD. The Whitney hasn't been tested yet, but Stanley feels confident it will handle the next Big One unscathed.

When Hurricane Irma struck Saint Petersburg, Florida, in 2017, the Salvador Dali Museum was ready for trouble. The museum houses the biggest collection of Dali art in the world; loss of the collection to a storm would be a disaster. The Dali is protected by walls 45 centimeters (18 inches) thick, built to stand up to a Category 5 storm, and by fortified glass that can deal with Category 3 winds.

* The architectural features that protect the Whitney, Dali, and their kin have begun to proliferate, thanks to consumer demand and new municipal standards. One of the prominent examples is the new residential American Copper Buildings of NYC, on Manhattan's eastern shore, on the East River near the United Nations complex.

There's a long waiting list of people wanting to take up residence in the apartment towers, mainly because the $650 million USD buildings, which were started before Sandy hit, meet or exceed the city's latest resilient design codes. The fact that the copper-clad buildings, offering 760 apartments, are stylish is another plus.

Connected by a three-story skybridge, the two towers have an elevated lobby, allowing them to stay above the storm surge level. They also have rooftop backup generators that can drive the elevators, a fridge, and one electrical outlet for a week. To be sure, if a big storm is coming, all the occupants will be ordered to evacuate, but they'll have something to come back to when the storm passes. Considering that a studio apartment in the towers rents for about $4,000 USD a month, the building's owners want to give potential occupants every reassurance that they're safe.

JDS Development Group, which owns American Copper Buildings, is one of the leaders in the movement towards resilient design; the movement is catching on. After Sandy, the Mayor's Office of Recovery & Resiliency set about studying the metro area's weather and climate vulnerabilities, and crafting solutions. The city is now implementing new building codes, with all new construction now held to these updated resiliency standards.

* That's great for new construction; existing construction is more problematic. It may be difficult to do much to protect old structures -- two-thirds of NYC's buildings were put up before 1960 -- and to the extent it can be done, the incremental expense is much greater than it would be if the structure were resilient from the outset. Raising an existing single-family home on stilts, as many thousands of East Coasters have done since Sandy, can cost more than $100,000 USD on a house that's maybe only worth $400,000. Costs of local and Federal support programs have ballooned.

Fortunately, there's more to resilience than altering structures. The Rubin did not have the funds to fully update the building. Some money was put into improvements -- such as a stronger, waterproof roof -- but the museum has primarily focused on better training and communications. Patrick Sears says: "We're thinking about manual ways, simple ways, things you can buy on Amazon." One of his favorite investments is a crank-driven cellphone charger that doesn't require an electricity source.

With a little homework, anyone can devise a sensible disaster plan -- but a 2015 Federal Emergency Management Agency survey showed only 39% of Americans have their own plan in place. The Rubin's disaster plan, in contrast, is 153 pages in length. That plan is focused on protecting the museum's collection; museums are not in a position to act as public shelters in an emergency. Making a city resilient against natural disasters, as they become more common, is going to require far-sighted efforts by city planners.

COMMENT ON ARTICLE
BACK_TO_TOP

[MON 11 DEC 17] UNDERSTANDING AI (3)

* UNDERSTANDING AI (3): As discussed by an article from SCIENCEMAG.org ("How AI Detectives Are Cracking Open The Black Box Of Deep Learning" by Paul Voosen, 6 July 2017), big tech firms are now very interested in artificial intelligence, pumping vast sums into research on AI.

Welcome to Uber's headquarters in San Francisco, California. There Jason Yosinski, an Uber researcher, probes into a deep neural network (DNN), an electronic system modeled on the brain. This AI was trained, using a vast store of labeled images, to recognize a wide range of objects, from zebras and fire trucks to seat belts. With the DNN filtering an image of Yosinski and the author from a webcam, Yosinski is able to find a neuron in the network that apparently had learned to recognize the outlines of faces. Yosinski says: "This responds to your face and my face. It responds to different size faces, different color faces."

The strange thing is that nobody ever tried to teach the DNN to recognize faces. How it managed to do so is not clear. According to Yosinski: "We build amazing models, but we don't quite understand them. And every year, this gap is going to get a bit larger."

For decades, AI technology remained a largely academic exercise, early efforts to go commercial ending in disappointment. Now "deep learning" provided by DNNs is being put to use in one profession after another, and having a particularly profound influence in the sciences. DNNs can determine the best way to synthesize elaborate molecules, to sort out the effects of specific genes from genomes, to search images of deep space for interesting cosmic objects. However, DNNs pose a puzzle, in that nobody knows how they really work.

Sure, the architecture of a DNN is understood in detail; there's no mystery at all about how its elements work. The difficulty is that, given training from a huge stockpile of examples, there's little comprehension of how inputs get to specific outputs. Nobody has a good handle on properly sorting out exactly what the DNN is doing, as it mangles and tangles input data to get to the desired output data.

Of course, people who are using DNNs to solve particular problems may not care much about this "interpretability problem"; they know how a DNN works, it gets the results they want, and they don't care about exactly how it gets from here to there. So what if they don't know why the DNN does what it does? They run the DNN through test sets, and have confidence in it to the extent that the test sets are thorough and the DNN handles them competently. That isn't really different from any other software validation -- given elaborate software, we can only have confidence in it to the extent it's been tested thoroughly, and has been put to a lot of use.

However, those working on neural networks in both industry and academia regard the interpretability problem as a major issue. Given a bug in elaborate software, it can be traced down and fixed; given a bug in a DNN, all that can be done at present is add relevant training and hope the bug goes away. When Maya Gupta, a machine-learning researcher at Google in Mountain View CA, joined the company in 2012, she asked AI engineers about their concerns with the systems they were working on. They usually told her: "I'm not sure what it's doing. I'm not sure I can trust it."

Rich Caruana, a computer scientist at Microsoft Research in Redmond WA, had first-hand experience in that weakening of trust. In the 1990s, he was a graduate student at Carnegie Mellon University in Pittsburgh, Pennsylvania, a hotbed of AI research. There, he joined a team trying to see whether machine learning could help with the treatment of pneumonia patients. It's usually best for them to stay at home, since they could pick up other infections in a hospital -- but some patients, particularly those with complicating factors like asthma, need to be hospitalized soonest.

Caruana ran a data set of symptoms and outcomes provided by 78 hospitals through a neural net, and it appeared to work well. However, a simpler, more transparent model using the same data suggested that asthmatic patients be sent home, which was the wrong answer. He had no way of knowing if his neural net had picked up on the same bad answer. Caruana says: "Fear of a neural net is completely justified. What really terrifies me is what else did the neural net learn that's equally wrong?"

AI geeks aren't the only ones worried about the interpretability problem. A directive issued by the European Union demands that, starting in 2018, companies deploying algorithms that substantially influence the public create "explanations" for their models' internal logic. The Defense Advanced Research Projects Agency, the Pentagon's blue-sky research office, is pumping $70 million USD into a new program named "Explainable AI", for interpreting the deep learning that flies drones and obtains intelligence through data-mining operations. [TO BE CONTINUED]

START | PREV | NEXT | COMMENT ON ARTICLE
BACK_TO_TOP

[FRI 08 DEC 17] ONCE & FUTURE EARTH (17)

* ONCE & FUTURE EARTH (17): Following the Big Thwack, the metal core had separated from the peridotite-rich mantle, with partial meltings of the peridotite producing basalt -- forming the Earth's initial crust, and incidentally producing its early atmosphere and oceans. That initial crust trapped heat from the mantle below, resulting in melting of the bottom of the crust. This melt was affected by the presence of water, generating a new material with different properties from the peridotite from which it came -- richer in silicon, enhanced in sodium and potassium, incorporating water and dozens of trace elements.

This new material was lighter than its parent basalt -- only about 2.7 times denser than water -- and so forced its way to the surface, to become granite rock. Granite hosts four different mineral species:

This matrix is easily observed in any slab of polished granite. There are also dispersions of tiny grains of minerals, for example zircons. As noted, the zircons found in the Jack Hills deposits have been dated to over 4 billion years old; some of them incorporate quartz, a marker of granite, and so may be remnants of the oldest granite on the Earth.

Granite requires a good deal of heat to be formed, with the heat proportional to the size of the rocky world from which it arose. The smaller rocky worlds of the Solar System -- Mercury, the Moon, Mars -- couldn't produce such heat, and so they are generally lacking in granite. It played a much more significant role on Earth, creating great land masses and high mountains. Incidentally, like icebergs, most of the mass of a granitic mountain range is underground: while the peaks of the US Rocky Mountain range may exceed 4 kilometers in height, the roots of the mountains go 60 kilometers deep, or deeper.

At first, the elements of the new granite crust were small and isolated islands. How larger elements emerged is unclear: possibly asteroid impacts, much more common then, left scars that encouraged the emergence of more granite to the surface. In any case, the engine of plate tectonics then began to assemble the separate granite elements, rafting on the seafloor conveyor belts of basalt, into continents. By three billion years ago, the continents had emerged, though not in the configuration they are today.

Also by that time, single-celled life had emerged on Earth. The oldest undisputed fossils of microorganisms are about 3.4 billion years old; there are older candidates, but they remain more or less disputed. Exactly how these microorganisms arose is a matter under intense study. The details involve deep biochemistry, so they won't be taken up here. What can be said is that elementary building blocks of life were commonplace; and there was no shortage of sites on or under the ocean floor where volcanic venting could provide the energy to support intensive chemical activity.

Much is made today of the complexity of even the most humble single-celled organism that exists today, leading to the claim that life arising from nonlife -- "abiogenesis" -- is obviously impossible. Since the definition of "impossible" is "can't happen", and it did happen, then obviously it wasn't impossible. True, we don't have a solid handle on how it did happen, but those doing research in abiogenesis are confident they are making progress towards credible theories of how life started. They do agree that a single-celled organism could not have emerged from nonlife in a single step, instead envisioning a process of "chemical evolution" -- in which there was a sequence of "proto-life" systems, the first being very inefficient, with each subsequent generation being more efficient, and devouring the generation that came before it. Different variants of proto-life may have teamed up, the whole being more than the sum of the parts. [TO BE CONTINUED]

START | PREV | NEXT | COMMENT ON ARTICLE
BACK_TO_TOP

[THU 07 DEC 17] SCIENCE NOTES

* SCIENCE NOTES: As discussed by an article from SCIENCEMAG.org ("Watch These Tiny Parrots Reveal How Dinosaurs May Have Learned To Fly" by Ryan Cross, 17 May 2017), there's long been a discussion among paleontologists and evolutionary biologists as to how birds acquired the trick of flying. There are many animals that can glide -- flying squirrels as the stereotypical example -- but it is uncertain that gliding, by itself, could lead to true flight. One alternative concept is that flight began as a boost to running.

Now a group of researchers has come up with a new idea, after training four Pacific parrotlets (Forpus coelestis) -- small, colorful parrots about 13 centimeters (5 inches) long -- to jump and fly for millet seed rewards. The researchers built a cage with perches that also measured the birds' leg forces, and surrounded the cages with high-speed cameras to study the birds' wing beats as they moved between branches.

Pacific parrotlets

For short jumps, the parrotlets primarily used their legs, using the wings only for controlling touchdown. For longer jumps, they relied mostly on their wings. The researchers used the parrotlet data to build a software model to see how four feathered dinosaurs, which were obviously capable of gliding, might have obtained propulsion as well from their feathered arms. The model showed a distinction between the four:

The suggestion is that Archaeopteryx and Microraptor acquired an edge over other tree-foraging competitors by using jumping and wing flapping to minimize energy expenditure while foraging for food in their trees, hopping from branch to branch.

* As discussed by an article from NATURE.com ("Ant Colonies Flow Like Fluid To Build Tall Towers" by Laura Castells, 12 July 2017), to deal with streams or water currents, fire ants will hook together to form towers or rafts. Given the tower configuration, the question arises: how do the ants on the bottom of the tower keep from being crushed by the load of all the other ants above them? Researchers have now discovered how: the tower isn't static -- the ants circulate through it like particles in a fluid, each bearing a load for a time, then yielding its place.

Fire ants (Solenopsis invicta) have sticky pads on their feet that help them to link to each other. Researchers had already figured out how they made rafts: the ants joined to each other at their feet to form air pockets, making up a relatively uniform matrix of such air pockets to distribute the mass of the collective. A team co-led by Craig Tovey -- a modeling mathematician at the Georgia Institute of Technology in Atlanta -- then went on to investigate how they formed into towers.

In the lab, the researchers used high-speed cameras to observe how the ants assembled around a slippery teflon rod, and tagged half the colony with a radioactive tracer to observe the movements of the ants in the tower. It turned out they use a trial-&-error method, rebuilding weaker parts of the tower that collapse until they finally have a sound structure.

Each individual ant can support three other ants; when an ant is overloaded, it lets go and drops down the tower, until it emerges from the base at the bottom. The resulting tower is a bell-shaped dynamic structure with resemblance to a fluid, the ants ending up carrying balanced loads. According to Tovey: "The ants are circulating like a water fountain, in reverse."

The dynamic nature of such ant structures is not news, but nobody had ever observed it carefully. The researchers were able to predict the shape and growth rate of the towers using mathematical models. They already knew that fire ants form rafts using "swarm intelligence", each ant following a few simple rules on its own, with no central direction; the rules can be used as the basis of a mathematical model for the formation of the rafts. The researchers were surprised to find that the ants used the same rules for the formation of the towers. Tovey says: "The next step is to figure out how they build bridges."
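The load rule described above -- each ant can bear the weight of about three others, and lets go when overloaded -- is enough to check whether a given tower profile can stand at all. A minimal Python sketch (the layer-by-layer abstraction is ours, not the researchers' actual model):

```python
def tower_stands(widths_top_down):
    """Apply the three-ant rule layer by layer: each ant can bear the
    weight of at most three others, so every layer, taken together,
    must be able to carry all of the ants stacked above it."""
    ants_above = 0
    for width in widths_top_down:
        if 3 * width < ants_above:
            return False   # this layer is overloaded; its ants let go
        ants_above += width
    return True
```

A flared, bell-like profile passes the check, while the same tower inverted fails -- consistent with the bell-shaped towers the researchers observed, where overloaded ants drop down and re-enter at the wider base.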

* As discussed by an article from THE NEW YORK TIMES ("Ladybugs Pack Wings and Engineering Secrets in Tidy Origami Packages" by Joanna Klein, 18 May 2017), the ladybug is an endearing insect, and it also knows a few tricks. One of the most intriguing is its hind wings, which are four times its size. On landing after a flight, it folds the wings neatly and packs them away under its protective hard-shell forewings, the "elytra", which are normally decorated with polka dots. That's trickier than it sounds: imagine trying to fold two large tents, with poles that don't detach, stuck to your back beneath a plastic case -- and you have no hands to help you. A ladybug does it many times in a day.

Saito Kaito, an aerospace engineer at the University of Tokyo, works on deployable structures like large sails and solar power systems for spacecraft. He decided to conduct a study on how the ladybug -- in Japanese, "tentou mushi" -- manages to pack away its wings. Saito commented: "Ladybugs seem to be better at flying than other beetles because they repeat take-off and landing many times in a day. I thought their wing should have excellent transformation system."

The difficulty in the study was figuring out what happened underneath the elytra. Through microsurgery, Saito and colleagues swapped out the ladybug elytra with transparent plastic replacements, then observed the transformation with a high-speed camera, supported by high-resolution X-ray images.

The study revealed that, on landing, the ladybug closes its elytra and aligns them backward. Vertical movements of the abdomen pull the wings under the elytra, with tiny structures on the elytra and abdomen helping keep the wings in place through friction. The wings fold in and over, then tuck into a Z shape. The veins on the wings, springy like a tape measure, bend into a cylindrical shape, elastic under pressure. When the ladybug wants to take off again, it pops open the elytra, and the wings spring out spontaneously. Saito finds the process fascinating, and is impressed with its effectiveness: "The beetles can fold their wing without any mistakes from the first folding."

COMMENT ON ARTICLE
BACK_TO_TOP

[WED 06 DEC 17] CONTINUING EVOLUTION

* CONTINUING EVOLUTION: Genomics has now become a facet of "big science", with ever more ambitious analysis efforts sorting through mountains of genomics data. As a case in point, as discussed by an article from NATURE.com ("Massive Genetic Study Shows How Humans Are Evolving" by Bruno Martin, 6 September 2017), a study of the genomes of 215,000 people gave clues as to how humans are evolving over a few generations.

The study checked US and UK databases to see which mutations were associated with different age groups. According to Hakhamanesh Mostafavi, an evolutionary biologist at Columbia University in New York City who led the study: "If a genetic variant influences survival, its frequency should change with the age of the surviving individuals."

It's simple, if cold-blooded: if people have mutations that cause them to die at relatively young ages, those mutations get scarcer among older people. The researchers scanned for more than 8 million common mutations, and found two that appear to become less common with age:

Of course, if the subjects with such genes died after reproductive age, there's no reason to think those genes would be less common in the next generation. However, as the researchers point out, they could only find two troublesome genes; if bad actors weren't being weeded out by natural selection, they would have expected to see many more.
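Mostafavi's screening logic can be sketched with a toy simulated cohort -- all frequencies and lifespans below are invented for illustration: carriers of a variant that shortens life become rarer among the survivors in older age brackets, which is exactly the signature the study scanned for.

```python
import random
random.seed(2)

# Toy cohort, all numbers invented: carriers of a harmful variant
# tend to die somewhat younger than non-carriers.
people = []
for _ in range(50000):
    carrier = random.random() < 0.3
    age_at_death = random.gauss(70 if carrier else 78, 8)
    people.append((carrier, age_at_death))

def carrier_freq_among_survivors(age):
    """Frequency of the variant among individuals still alive at `age`."""
    alive = [c for c, died_at in people if died_at >= age]
    return sum(alive) / len(alive)

# The variant is noticeably rarer among older survivors -- a change in
# frequency with age, flagging the variant as influencing survival.
```

Real genomes complicate this enormously -- 8 million variants, tiny effect sizes, confounders like wealth and education -- but the core comparison is just frequency-by-age, as above.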

That leaves the question of how the weeding was performed. The authors suggest that for men, it might be that those who live longer can have more children, but they don't believe that's the whole story. There are two other possibilities:

The researchers also discovered that certain clusters of genetic mutations -- none of which represented much of a threat by themselves, but did so as a group -- were found less often in people with long lives. That included predispositions to asthma, high body mass index, and high cholesterol. More surprising was that sets of mutations that delay puberty and childbearing are more common in long-lived people.

According to Jonathan Pritchard -- a geneticist at Stanford University in California -- the link between longevity and late fertility has been spotted before, but those studies were confounded by the effects of wealth and education, since people with high levels of both tend to have children later in life. The genetic evidence uncovered in this study does hint at an evolutionary trade-off between fertility and longevity, a correlation that had previously been examined in other animals. Pritchard commented: "To actually find this in humans is really pretty cool."

COMMENT ON ARTICLE
BACK_TO_TOP

[TUE 05 DEC 17] ROBOSHUTTLES

* ROBOSHUTTLES: As discussed by an article from WIRED.com ("Self-Driving Shuttle Buses Might Be the Future of Transportation" by Aarian Marshall, 10 November 2017), in early November a collaboration of organizations -- multinational transportation company Keolis, French manufacturer Navya, and the American Automobile Association -- launched a small driverless vehicle service in Las Vegas, the roboshuttle carrying eight people in a loop around the Fremont Street Entertainment District. It had an attendant to keep an eye on things.

Only hours after beginning service, it was in an accident -- because it couldn't understand clueless humans. The vehicle spotted a truck backing out of an alley and obediently stopped; there was a vehicle behind it, so it couldn't back up, and it just sat there as the truck backed into it. It might have honked a warning, but the peculiarities of the truck's movements kept the robot vehicle from recognizing it as a threat. The shuttle was back in service the next day.

The Vegas roboshuttle service wasn't really practical transportation, mostly being a Vegas amusement. John Moreno, a spokesman for the AAA, says: "It's a fun, short experience, similar to an attraction you'd ride at a theme park."

Nonetheless, robotic shuttle vehicles are on the leading edge of autonomous vehicle technology. The Vegas roboshuttle is something of an innovation in the USA, but such vehicles are becoming established in Europe and Asia. Navya shuttles have been in operation in Switzerland and Singapore since the fall of 2016. London's Heathrow Airport has transported passengers in autonomous "pods" since 2011, while the Australian Intellibus completed a three-month pilot in 2016.

The Vegas experiment is not the only one in the USA, either. Navya started running self-driving shuttles on the University of Michigan campus before one showed up in Vegas; another company, TransDev, is working to put its electric minibus on the streets of a planned community in Florida; and EasyMile showed off its own little transporters in Arlington, Texas, during the summer.

The companies see these experiments not only as a way to get experience, but also as a way to show off the technology to the public. Local governments tend to be enthusiastic as well, getting good press by encouraging innovative technology. However, they're more than just publicity stunts, all involved believing the technology has potential -- to provide transport on college campuses, in retirement communities, in the suburbs. Maurice Bell, Keolis North America's head of mobility, says: "Most transit authorities are looking for opportunities to answer the 'first-mile, last-mile' question" -- that is, bridging the gap between transit hubs and people's final destinations.

Navya shuttlebus

According to Susan Shaheen, a civil engineer who studies mobility innovation at UC Berkeley: "Automated shuttles have the ability to reduce operational expenditures by lowering per mile costs, reducing labor expenditures, and offering a variety of flexible and on-demand public transportation services when paired with advanced algorithms and smartphone apps."

It is doubtful that such shuttles would be useful in central urban areas; they're slow, running at about a sprint, and simply create traffic congestion. The full-size autonomous bus is better suited to the high-density urban traffic environment, consolidating a good number of passengers and fast enough to keep up with other vehicles. Automated shuttles do have a role to play in the overall transport network, however, and they will also assist in the development of technologies useful for other elements of the network. The future whole will be more than the sum of its parts.

COMMENT ON ARTICLE
BACK_TO_TOP

[MON 04 DEC 17] UNDERSTANDING AI (2)

* UNDERSTANDING AI (2): Many of the basic concepts in AI go back to the middle of the last century. In the 1950s, researchers like Frank Rosenblatt, Bernard Widrow, and Marcian Hoff came up with models, based on mathematical procedures, for how the brain's neurons got things done. However, it takes a lot of neurons to get anything particularly useful done, and the field made little practical progress for decades.

Now the neural approach underlies most of the AI activities of major tech companies, from Google and Amazon to Facebook and Microsoft. In the mid-2000s, graphics processing unit (GPU) maker Nvidia concluded that its chips were well-suited for running neural networks, and began making it easier to use its hardware for AI applications. With faster and more elaborate neural networks available, AI actually started to amount to something.

A neural net is not programmed as such; it is instead trained, typically being fed a set of tagged samples of what the neural net is supposed to recognize. In 2009, AI researcher Fei-Fei Li published a database named ImageNet, which contained more than 3 million images with labels of what they were about. She thought that if these algorithms had more examples of the world to find patterns in, they could learn more complex ideas. She started an ImageNet competition in 2010, and by 2012, a team under researcher Geoff Hinton had used those millions of images to train a neural network that beat all other entrants by more than ten percentage points of accuracy. Hinton also moved on to "deep" neural networks, with neural layers stacked on top of each other, capable of "deep learning". Today, deep neural networks are almost synonymous with AI.
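The train-on-tagged-samples loop at the heart of all this is simple to sketch. Below, a tiny two-layer network -- nothing remotely like ImageNet scale -- learns the XOR function from four labeled examples by gradient descent; a minimal illustration of the principle, not any particular production system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four tagged samples of XOR -- a "labeled training set" in miniature.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two stacked layers of artificial "neurons": weights plus nonlinearities.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

losses = []
for step in range(2000):
    # Forward pass through the stacked layers.
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # output layer, sigmoid
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: gradients of the mean-squared error.
    dz2 = (2.0 / len(X)) * (p - y) * p * (1 - p)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)

    # Gradient-descent update -- this is the "training".
    lr = 0.5
    W2 -= lr * (h.T @ dz2); b2 -= lr * dz2.sum(0)
    W1 -= lr * (X.T @ dz1); b1 -= lr * dz1.sum(0)
```

The same scheme -- forward pass, error, backward pass, weight update -- scales up to the millions of labeled ImageNet photos; what changed in 2012 was depth, data volume, and GPU horsepower, not the basic loop.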

The tech industry was impressed, and the AI boom began. Researchers who had been working on deep learning for decades became superstars. By 2015, Google had more than a thousand projects that involved some sort of machine learning.

Along with the boom in AI, there's been a boom in hysteria over the technology. Won't the technology keep on improving until it results in superintelligences that will overthrow us -- even exterminate us? AI researchers consider such scenarios silly. Even if we were to build a general superintelligence that outstripped humans, it would have no incentive to become a threat. Humans were designed by evolution to get by in the world, and were not put together according to a formal specification; machines are designed by humans, using formal specifications, to serve humans. As Yann LeCun, head of Facebook's AI research, commented:

BEGIN QUOTE:

Behavior like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, preferring our next of kin to strangers ETC were built into us by evolution for the survival of the species. Intelligent machines will not have these basic behaviors unless we explicitly build these behaviors into them.

END QUOTE

We're not remotely close to building an AI that could even in principle compete with a human, and there's no reason to build one that could try to. After all, a scientific pocket calculator is "superintelligent", able to perform calculations with ease that would stymie even a math savant, but we don't feel threatened by pocket calculators. As Andrew Ng, a senior AI researcher at Google, likes to say: "The reason I say that I don't worry about AI turning evil is the same reason I don't worry about overpopulation on Mars."

That isn't saying AI poses no hazards. AI could, by data mining, undermine the privacy of citizens; keep close tabs on citizens for an authoritarian government; accumulate monopoly power unto corporations; and be corrupted by malware. There is also the subtler problem of taking the results generated by an AI system at face value, failing to realize those results may be affected by biases, possibly ones not known to the people who set up the system. AI researchers don't worry about overpopulation on Mars; they've got way too much in the here and now to worry about. [TO BE CONTINUED]

START | PREV | NEXT | COMMENT ON ARTICLE
BACK_TO_TOP

[FRI 01 DEC 17] ANOTHER MONTH

* ANOTHER MONTH: In ridiculous news for last month, one Mike Hughes, a 61-year-old California limousine driver, planned to launch himself from the Mojave desert to altitude in a homebuilt rocket so he could obtain proof that the Earth is flat. The authorities, fearing the exercise might end badly, told him he couldn't fly in his rocket over public land.

There's been a resurgence in Flat Eartherism (FE) over the last few years, somewhat in harmony with the current spirit of the era. This has led to the question as to whether FEs honestly believe the Earth is flat. That's naive: of course they do, people don't tilt at windmills unless they believe they're evil giants. The trick is that such folk have no interest in whether something is true or false, instead believing whatever they want. Their dishonesty is as real as that of a deliberate liar; it's just at a deeper level.

That leads to the next question: so why? That's not an easy question to answer, because it's trying to unravel broken thinking. There's a certain obvious conceit to it, along the lines of physics cranks who think they've refuted Einstein, even though they clearly know little or nothing about physics: "I'm smarter than EINSTEIN!" It's a kind of showing off, an expression of defiance. Arguing with them is futile; they're passive-aggressive, they intend to provoke, they start barking contests so they can out-bark the opposition. People who care about their credibility, who want to be regarded as grown-ups, do not play such games.

* In more widespread absurdity, talk-show host Jimmy Kimmel sent a video team out on the streets of Los Angeles to ask people: "Do you think Hillary Clinton should be impeached?" A number of people bit hard on that one: "Absolutely!" "She needs to be locked up for her crimes!" BENGHAZI! EMAILS! URANIUM-1! LOCK HER UP!

I have an old friend in Birmingham, Alabama, and I had to report to him that the one person who didn't bite was a Southern boy. He nibbled at the bait a bit, then said hey wait: "She's not in office!" Hillary C is last year's news; in weeks, she'll be the year after last's news.

* In the current Real Fake News, US President Donald Trump went on a Far East tour. It was nothing unprecedented, with Trump's griping about unfair trade with China, South Korea, and Japan somewhat overshadowed by continued bluster over North Korean missile tests. As crises go, this one is becoming tiresome.

At home, the main issue preoccupying the White House remained the Republican push for a tax cut bill -- but there was an overriding distraction from a widespread frenzy over sexual misconduct by the great and powerful. The most relevant focus was Judge Roy Moore, running in Alabama for a US Senate seat in a special election. Moore, who is very far to the Right, once had a habit of picking up high-school girls, and it caught up with him in a big way. Nobody's betting on it, but there is a chance a Democrat might even win the election, if enough Republican Alabamans decide not to vote.

Moore's problems are only part of a huge wave of sexual misconduct accusations being thrown around. In some cases, they seem only too justified, with Hollywood producer Harvey Weinstein being pilloried for well-known habits of sexual predation -- actresses saying they had to barricade their hotel room doors to keep him out. Indeed, TV sitcoms were making jokes about Weinstein's behavior well before the current furor.

In other cases, it's not easy to see if there's substance to the accusations. It does seem that the dust-up traces back to the presidential campaign and Trump's unfortunate comments about his regard, or lack thereof, for women. The fact that he then won the election aroused considerable female anger.

Other than that, it was quiet through the month, with the GOP in the Senate carefully keeping the tax cut exercise under cover. Well for them that they should, since the general perception is that it's a tax cut for the rich, with some small sops thrown out to placate the lower orders. Worse, it promises to ramp up the Federal budget deficit ferociously. Nobody feels too confident on betting whether the bill will pass or not -- but Trump didn't help his case by insisting that the bill also kill the "individual mandate" to buy health insurance that is needed to keep ObamaCare afloat.

Since the GOP got run through the mill in trying to kill ObamaCare earlier in the year, it seems foolish to have added that provision into the tax cut bill. The only way it makes sense is on the basis that the Senate didn't want to cross Trump -- not out of fear so much, they just want to make sure that, should the bill fail, Trump can't throw all the blame on Congress: "Hey, we did what you wanted!"

Trump, after insisting on the anti-ObamaCare provision, then stated he might not insist on it after all. However, everyone's long got used to Trump's clumsy smoke-&-mirrors, and knows not to pay any attention to what he says, instead watching what he does, which is confusing enough.

Indeed, one gets so used to the nonsense the president says as to automatically tune it out. Late in the month, Trump was in prime form. First, on 27 November he called Senator Liz Warren "Pocahontas" again -- but this time, at a commemoration of Navaho code-talkers of World War II. Yeah, the reaction is: "So what else is new?" Warren's tough-minded and can give as good as she gets. However, what might be greeted with a roll of the eyes and a shrug when coming from a dim-witted relative carries a lot more significance when it comes from the President of the United States.

On 28 November, he tweeted:

BEGIN QUOTE:

Meeting with 'Chuck and Nancy' [Democratic leaders Schumer and Pelosi in the Senate and House respectively] today about keeping government open and working. Problem is they want illegal immigrants flooding into our Country unchecked, are weak on Crime and want to substantially RAISE Taxes. I don't see a deal!

END QUOTE

To no surprise, Schumer and Pelosi responded that they weren't going to the meeting. Why should they, if Trump ruled out a deal with them? The president then petulantly sent out a photo of him sitting with a sullen demeanor in a White House meeting room, an empty chair on each side of him. On top of that, the next day Trump retweeted anti-Muslim videos produced by British trolls, leading to a protest from the British government. In response, Trump told the British government off.

Again, so what else is new? CNN's Stephen Collinson commented that such behavior raised questions about the president's competence. Questions? There is no doubt any longer that Trump is unqualified for the job. Unfortunately, given there's no prospect at present of removing him, we have to accept that he's going to be in the White House for three more years.

That hardly makes things easier to swallow, and nobody has to be Left of center to dislike the taste. Bill Kristol, a traditional moderate conservative pundit, tweeted on 21 November:

BEGIN QUOTE:

The GOP tax bill's bringing out my inner socialist. The sex scandals are bringing out my inner feminist. Donald Trump and Roy Moore are bringing out my inner liberal. WHAT IS HAPPENING?

END QUOTE

* Thanks to one reader for a very generous donation to support the website last month. That is very much appreciated.

COMMENT ON ARTICLE
BACK_TO_TOP
< PREV | NEXT > | INDEX | GOOGLE | UPDATES | EMAIL | $Donate? | HOME