* 20 articles including: US Constitution (series), understanding AI (series), once & future Earth (series), low-cost lidars, AI in astronomy, US infrastructure problems exaggerated, NASA Earth science decadal survey, Chinese citizens worry about online security, low-orbit comsat networks, and robocars not imminent.
* NEWS COMMENTARY FOR FEBRUARY 2018: As discussed by an article from ECONOMIST.com ("Putin's Syrian Gambit: It Is Not Going Well", 15 February 2018), in December 2017, Russian President Vladimir Putin used a surprise visit to Syria to announce that Russia's mission there was "basically accomplished". He had a nice story -- that Russia had saved the regime of its client, Bashar al-Assad, in a conflict that the bumbling Americans had not been able to handle. Russians could bask in the glory of the victory.
It seems Putin spoke too soon. In early February 2018, Iran sent a large surveillance drone into Israeli airspace; the Israelis promptly shot it down, and then conducted an airstrike on its control station, near Palmyra. An Israeli F-16 fighter returning from the strike was shot down by a Syrian surface-to-air missile -- with the Israelis replying savagely by destroying about a third of Syria's anti-aircraft defenses. Russian military advisors may have been killed in the airstrikes. It was the biggest Israeli air assault on Syria since 1982.
The incident had two lessons: first, Russia wasn't actually done with Syria; and second, the war was apparently entering a new, even more dangerously unpredictable phase. Neither Iran nor Israel, for all their mutual hatred, wants to get into a direct shooting match with the other, but confrontations are becoming more likely, since the Assad regime and the Iranian-backed militias that are its best ground troops have pushed rebel groups out of an area close to the Israeli-controlled Golan Heights.
Israeli commanders are bracing themselves for attacks by Iranian proxy forces. It is certain that any such attack will provoke a furious Israeli response. That puts Russia in a bind. Russia and Iran have drawn closer, particularly in their joint effort to save Bashar al-Assad's regime. However, Putin is also on good terms with Israeli Prime Minister Binyamin Netanyahu. Russia has been careful not to object to Israeli attacks on Hizbullah militias, backed by Iran, as long as the Israelis didn't attack the Syrian regime directly.
The reality is that Putin, for all his big talk about resolving the Syrian crisis, is no more in control there than anyone else. Russian-sponsored peace talks in Sochi in January were a bust: the opposition was largely a no-show, while the Syrian delegation blew off calls from the UN and Russia for a new constitution. Turkey and Iran had also backed the conference -- but then Iranian-backed militias shelled a Turkish convoy in Syria, with Russian acquiescence. Of course, the Turks responded with force themselves.
Along with supporting Russia's long-standing Syrian client, Putin intervened in Syria to impress on the Russian people that their country really is a great power. Public support has withered as casualties rise, with only a third of Russian citizens supporting continued intervention. Tales that scores of Russian contract soldiers were killed fighting American-led anti-Islamic State forces in eastern Syria in mid-February are unlikely to make the public any happier.
Russia, in short, is stuck in Syria, the sole satisfaction to Russia being that the Americans are only peripheral to the conflict -- having been drawn in simply to defeat Islamic State, while fearing to take on the potentially bottomless job of "regime change". However, that leaves the question of whether, for the Americans, discretion might have been the better part of valor. It is certainly true that it is not in American interests to let the conflict flame on endlessly, but if there's no sensible way to resolve it, at least it's Vladimir Putin who is more stuck with the impossible situation than Donald Trump.
* A related essay from ECONOMIST.com ("Russia's Dirty Tricks: How Putin Meddles In Western Democracies", 22 February 2018), surveyed Vladimir Putin's campaign to undermine the societies -- not the governments, the societies -- of the West.
The end of the Soviet Union led to cautious hopes in the West that the Russia emerging from the ruins would become a modern, progressive state, capable of real leadership on the international stage. Any remnants of such hopes were crushed on 16 February 2018, when American special counsel Robert Mueller, investigating Russian interference in the 2016 US election, indicted 13 Russians who had been implicated in the meddling.
Putin's motives in subversion are opaque; it seems he decided to attack the US because he believed the US was fomenting anti-Russian sentiment in Ukraine. To that end the Russian Internet Research Agency, backed by an oligarch with links to the Kremlin, set up a team of trolls, backed by a money machine and using false identities, to sow discord in America and prevent Hillary Clinton -- perceived as anti-Russian -- from becoming president of the USA.
For whatever other reasons, Putin's hackers also targeted Europe, with the Russians believed to have provided money to extremist politicians, raided computer systems, organized marches, and spread endless lies. There's no way of knowing how much malign influence the Russians have actually had, but that doesn't matter: Russian actions are pernicious and must be countered. To that end, Mueller's investigation has to be taken deadly seriously. So far, it has generated three lessons.
First, social media is a much more powerful tool for Russia's spooks than the mainstream media manipulated decades ago by the Soviet KGB in their misinformation campaigns. Social media has wide penetration; it's cheap to exploit, easy to game, and can be overrun with bots. Hackers breaking into the computers of Democratic bigwigs could spread the dirt far and wide. All it cost Putin was a bit over a million USD a month, to fund hackers -- mostly operating out of Saint Petersburg -- who used every dirty trick they could think up to do the job.
Second, the Russians did not manufacture controversies so much as they leveraged controversies that existed, or had been manufactured, right at home in the USA. Russian trolls zeroed in on the games of American trolls, identifying opportunities to make trouble, then amplifying the message in a methodical fashion. The Russians played the race card, encouraging black voters to regard Hillary Clinton as an enemy, and not vote for her, while stoking a white sense of grievance. The Russians also pumped up personal hatred against Clinton, helping the extreme Right and Left to criminalize her -- while obscuring Trump's far more blatant defects.
It's still going on. During February, there was a clear uptick in Russian bots trying to exploit the furor over gun control, following the massacre of students at Marjory Stoneman Douglas High School in Parkland, Florida. However, everybody's aware of what's happening, with trolls posting to online forums often blasted as Putin stooges. Of course, there's no way of telling if they're actually on the Russian payroll, or if they're simply "useful tools", but the distinction is immaterial.
In any case, the game is not as effective as it was, and resistance continues to build. The uproar over Russian meddling has already pressured social media firms to take the first steps to make sure they are no longer conduits for subversion by the Russians, or their witless Western accomplices.
Third, the Western response to Russian meddling has been indecisive. Barack Obama was reluctant to take action, clearly being worried that he would be seen as trying to manipulate the 2016 election. Donald Trump has been in denial over Russian meddling, since it calls the legitimacy of his election into question. Mueller's probe has taken place in the face of the active hostility of the White House, as well as the hostility of the president's allies in Congress.
However, Mueller's adversaries have been unable to derail the probe; in fact, their ineffectual attempts to do so may well be lending it strength. Trump has hinted at firing Mueller and his minders in the Justice Department. The president has been warned that would be a grave miscalculation, and so far he hasn't gone beyond the big talk that few take seriously any longer.
To get more traction against the Russians, Europe needs to step up formal investigation of Russian activities and raise its public profile. Yes, such actions do risk blowing the cover of intelligence sources, and it is easy to think the Russians take some pleasure in watching the ant's nest they have stirred up. However, tracking down the Russian money trail will clearly lead to more indictments.
Beyond investigation, there should be push-back. German Chancellor Angela Merkel warned Putin there would be consequences if he interfered in German elections; the Russians kept a low profile. In France, President Emmanuel Macron counter-trolled Russian hackers by planting fake e-mails among real ones, which discredited leaks when they were shown to contain false information. Finland teaches media literacy, while the country's press works together to purge fake news and correct misinformation.
For the moment, push-back from America is painfully inconsistent; while it's no surprise that Trump won't take on Putin, there are also Republican leaders in Congress who refuse to do so as well. As the USA heads for mid-term elections -- with the looming threat of further Russian meddling -- that failure could represent a dangerous electoral vulnerability for the Republicans. They can hardly continue to wrap themselves in the Red-White-&-Blue, while acting as if democracy isn't worth fighting for.
* An essay run here last month by Peter Marino -- founder and policy director of The Metropolitan Society for International Affairs, an NYC-based think tank, and senior researcher at the Global Narratives Institute -- discussed growing international distrust of China. A follow-on essay by Marino published by REUTERS.com ("China's Next Ideological Front", 16 February 2018) zeroed in on the vital US-China relationship.
In 2014, in statements by Chinese leadership and in government media outlets, Beijing began to talk of a "new type of Great Power relations" with Washington DC. The notion was grand but fuzzy, and the talk soon faded out. Nonetheless, relations between the two nations are evolving, and not in a good way. For decades, since the emergence of China onto the global stage, China and America focused on shared interests, downplaying differences in ideology and interests. Now the differences are moving to center stage. Beijing is clamping down on internal dissent, while the Americans are growing concerned about Chinese actions abroad.
It is fashionable to see the election of Donald Trump to the presidency of the USA as the turning point in relations, but the reality is that Trump is an inept and whimsical leader, little more than a mouth with no ideas of substance. It is Chinese President Xi Jinping, neither weak nor incoherent, who is making the decisive moves, following his consolidation of power after becoming General Secretary of the Chinese Communist Party (CCP) in 2012.
Under pressure from the CCP, WeChat -- the chat and social networking app -- has agreed to step on "distorted" versions of Chinese history that appear in private conversations on its service. Foreign universities operating in China are now being squeezed as well: institutions like New York University and Duke that were enticed to China with promises of academic freedom have now been forced to set up CCP units on their campuses, and give high-level decision-making powers to Party officials. American universities, in response, are becoming skeptical about Beijing's network of "Confucius Institutes" across US campuses. The US FBI has expressed concerns that Chinese students in America may be collecting intelligence for Beijing.
The Americans are now expressing unhappiness with China. In January 2018, Trump announced steep tariffs on Chinese solar panels in response to China's "unfair" trade practices. It's not always easy to see that Trump's actions make much sense -- but many American political wonks who had, until recently, promoted the incorporation of China into the global system are criticizing Beijing's values, motives, and behavior. Political scientist Joseph Nye of Harvard has coined the term "sharp power", with an eye to China: not brute-force "hard power", but power surgically applied, as with a scalpel.
Cooperation between the US and China was always based on overlooking fundamental ideological differences. Despite Trump's authoritarian theatrics, American political culture remains based on democracy, openness, civil liberties, and a noisy public debate. The CCP is committed to autocratic rule, authoritarian values, and control of information. The leadership of both countries -- again, Trump's theatrics aside -- sees their countries as having a dominant role in the rest of the world. The American perception is based firmly in the US role after World War II; the Chinese perception is less based in the present, with roots millennia in the past.
The clashes in ideology are now coming to the surface. The confrontation will outlast Trump, who is no more than a passing eccentricity in American history. The future of relations between the two countries promises to be tense.
At the end of the month, the ideological gap between China and the West was underlined when the CCP proposed to eliminate the constitutional provision that limits a Chinese president to no more than two terms in office. The CCP is largely Xi's creature; not only was this clearly done in accordance with his will, but there's little doubt the term limit will be struck down.
Xi, president for life? There's really no such thing, it's just another term for "dictator". Of course, China's never pretended to be anything but a "people's democracy", which to the extent it's actually defined, is a crazy-house mirror image of a Western democracy. In any case, in recent decades China was the iron fist in the velvet glove -- but Xi is seeing less need for the glove.
* LOW-COST LIDAR: As discussed by an article from THE ECONOMIST ("Eyes On The Road", 24 December 2016), work on self-driving cars is proceeding rapidly. One of the critical paths in the effort is the set of sensors a robocar uses to see the road and otherwise assess the environment around it. The sensors fall into four categories: daylight and infrared cameras; ultrasonic sonars; radars; and lidars, or light radars.
The first three are more or less mature technologies, being compact and relatively cheap; lidar is not, being bulky and expensive. In fact, it's about as costly as the car itself. A lidar is like a radar, in that it sends out a pulse of electromagnetic radiation -- infrared light for lidar, radio waves for radar -- towards a target, and times how long it takes the pulse to come back to determine the distance to the target. Lidar's beam is much more tightly focused than radar's, making it easier for lidar to build up an image of a target by scanning it with pulses.
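The time-of-flight principle both radar and lidar rely on can be sketched in a few lines; this is a minimal illustration of the math, with names invented for the example, not any vendor's code:

```python
# Time-of-flight ranging: distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0  # speed of light in vacuum, meters per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to a target, given the round-trip time of a pulse."""
    # The pulse covers the distance twice (out and back), hence the /2.
    return C * round_trip_seconds / 2.0

# An echo arriving about 1.668 microseconds after the pulse puts the target
# at roughly 250 meters -- the range quoted below for the Infineon MEMS lidar.
print(round(range_from_echo(1.668e-6)))  # prints 250
```

The same arithmetic also shows why lidar timing electronics must be fast: resolving targets a few centimeters apart means resolving pulse arrivals a few hundred picoseconds apart.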
Lidar technology was an outgrowth of the invention of the laser in the early 1960s, and traditionally has been used for scientific and military purposes. Mars orbiters, for example, have used lidar to obtain precision height maps of the Red Planet's surface. Traditionally, lidars use revolving mirrors to perform scans with a laser beam, which usually operates in the invisible near-infrared part of the spectrum.
Such lidars are elaborate, and cost tens of thousands of dollars. Lidar technology is now being dramatically refined, in particular due to the introduction of lidar "systems on a chip". Beta samples are now being delivered to major automotive-component suppliers, including Delphi and ZF, with low-cost lidar systems expected to be in mass production by the end of the decade.
The company bringing the new lidars to market is Infineon, a German chipmaker that has already established itself as a maker of chips for low-cost automotive radars. Early in this century, car radars cost thousands of dollars; Infineon brought the cost down by an order of magnitude. Radar sensors are now an essential part of robocars, and are increasingly used in conventional vehicles too, to provide safety features such as automatic emergency braking.
Some companies working on cheap lidar technology use a flash of laser light, not a beam, capturing the reflections on an imaging array. Infineon, however, is using a micro-electro-mechanical system (MEMS) -- this particular device having been developed by Innoluce, a Dutch firm which Infineon bought up in 2016. The device consists of an oval-shaped mirror, just 3x4 millimeters in size, fabricated in a silicon substrate as part of a chip. A laser beam is shined on the mirror; the mirror oscillates thanks to actuators on the chip, scanning the beam over the target.
Infineon officials point out that this ensures the full power of the beam is used for the scan, while in a flash-based system, the power is dispersed over the imaging array elements. The MEMS lidar can scan up to 5,000 data points from a scene per second, and has a range of 250 meters (820 feet). Infineon officials say that in mass production, it should cost about $250 USD, and could have other applications -- in robots and drones, for example.
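As a hedged illustration of how a scanned beam builds up an image: each return from the scan is an azimuth/elevation angle pair plus a range, which converts to a 3-D point. The axis convention below (x forward, y left, z up) is an assumption for illustration, not Infineon's actual output format:

```python
import math

def scan_point_to_xyz(azimuth_deg: float, elevation_deg: float,
                      range_m: float) -> tuple:
    """Convert one lidar return (angles + range) to Cartesian coordinates."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# A return dead ahead at the sensor's maximum 250-meter range:
print(scan_point_to_xyz(0.0, 0.0, 250.0))  # prints (250.0, 0.0, 0.0)
```

Accumulating such points over a scan yields the "point cloud" that robocar software segments into road, obstacles, and other vehicles.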
Lidar complements radar and cameras. Radar measures distance and speed precisely; it works in the dark and in fog, but it doesn't image small objects well. In addition, some materials, such as rubber, are poor radar reflectors, and so radar may not pick up chunks of tire lying in the road. Cameras don't do so well at night or in fog, but in reasonable lighting conditions, they can do as well as or better than the human eye, with image-recognition systems able to pick up obstacles like tire chunks in the road. Lidar is able to work in the dark like radar -- though it has trouble with fog -- while providing a camera-like ability to distinguish objects.
Google, Uber, and most carmakers investigating robocars already use lidar, and will appreciate Infineon's new technology. However, Elon Musk, the boss of electric vehicle company Tesla, doesn't care about lidar. Musk claims that the camera, radar, and ultrasonic sensors that support the Autopilot autonomous-driving mode in Tesla's vehicles can do the job. That remains to be seen.
* UNDERSTANDING AI (13): Different companies, to no real surprise, take different approaches on how to handle their AI staff. Some, like Microsoft and IBM, invest heavily in AI research and publish a large number of papers, with little immediate concern for working products -- the attitude being that they'll be spun out of such activity in time. At the other end of the spectrum, companies like Apple and Amazon publish little and are focused on product development. Google and Facebook fall between those endpoints.
The scramble for talent may force companies like Apple and Amazon to loosen up somewhat. According to Facebook's Yann LeCun: "If you tell them: 'Come work with us, but you can't tell anyone what you're working on.' -- then they won't come, because you'll be killing their career."
The Chinese giants are faced with the same trade-off as they attempt to establish outposts in the West and recruit American researchers. Baidu has set up two AI-oriented research labs in Silicon Valley, one in 2013 and the other in 2017. They are respected by Western AI researchers, but most such researchers prefer to work for the big American firms, partly because they are more transparent.
In any case, the big companies are mesmerized by the power of AI to supercharge their businesses. According to Benedict Evans of Andreessen Horowitz, AI is "like having a million interns" at one's disposal. Where AI turns out to be particularly useful is in determining what customers want. Automated recommendations and suggestions account for about three-quarters of what people watch on Netflix, for example, and more than a third of what people buy on Amazon. Facebook, which owns the popular app Instagram, uses machine learning to recognize the content of posts, photos, and videos -- then surfaces the most relevant to users, while filtering out spam.
In the past, Facebook ranked posts chronologically, but realized that users were concerned with relevance. According to Joaquin Candela, head of Facebook's applied AI group, Facebook would have never become an economic superpower without machine learning. Companies like Yahoo and Microsoft that were late to use AI in search ended up struggling.
Amazon and Google have been the most energetic in making use of AI. Amazon has about 80,000 robots in its fulfillment centers, and also uses AI to keep track of inventory, as well as determine how to efficiently distribute product to customers. For grocery ordering, it uses computer vision to see which strawberries and other fruits are ripe and fresh enough to be delivered to customers, and is famously working on aerial drone delivery.
As for Google, it uses AI to organize content on YouTube, making efforts to weed out objectionable material; and also uses AI to identify people and group them in its Google Photos app. AI is increasingly embedded in Android, Google's operating system, helping it to work more smoothly, and to predict which apps people want to use.
Google Brain is regarded in the field of AI as one of the best research groups at applying machine-learning advances to making money, for example by improving search algorithms. As for DeepMind, the British firm may not ever generate much direct revenue for Alphabet, but it has helped its parent save money by improving the energy efficiency of its global data centers. Even though DeepMind's AlphaGo wasn't a commercial product, it got a lot of great publicity -- while Alphabet management has little doubt such powerful technology will eventually more than just pay for itself.
David Kenny, the boss behind IBM's Watson system, sees the AI market splitting in two, with one branch, as with Facebook, using AI to hook up with consumers, and the other branch using AI as a lever for commerce with businesses. The two are intertwined, of course, with AI being peddled to businesses that use it to hook up with consumers.
This is apparent with the cloud-computing services of the tech giants. The three largest -- Amazon Web Services, Microsoft's Azure, and Google Cloud -- offer application-programming interfaces (APIs) that provide machine-learning capabilities to other companies. Azure, for example, helped Uber build a verification tool that asks drivers to take a selfie to confirm their identities when they work. Along the same general lines, Google Cloud offers a "jobs API", which helps companies match jobseekers with the best positions.
Firms in other industries, from retailing to media, also stand to benefit from what cloud business professionals call the "democratization" of AI. Providing AI services to companies that don't have the skills or scale to develop such themselves stands to be profitable in the $250 billion USD cloud market. However, such smaller players need APIs tailored for their needs, and it's not trivial to make that happen. Microsoft has a history of selling software to clients and then giving them support, and seems likely to profit in this area. Other players suggest that AI will become more flexible, and so users will not require as much hand-holding.
So far, the tech giants have mostly tried to apply AI to pull in profits from their existing operations. In the next few years, they hope that AI will open up new business frontiers. Virtual assistants are particularly hot at present. Apple started the rush when it bought Siri, the pioneering virtual assistant, in 2010. Amazon, Google, and Microsoft quickly followed; Samsung, Facebook, and Baidu are also now getting into the race. It's not clear if Amazon's push into smart speakers as virtual assistants represents a huge market opportunity -- but there's little doubt that people are moving beyond the keyboard to engage the internet. [TO BE CONTINUED]
* AMERICA'S CONSTITUTION (3): On inspection of the Articles of Confederation, the weaknesses of the government the document defines are apparent:
The inadequacy of the central government established under the Articles of Confederation was obvious from the outset. With the final settlement of the war with Britain, interest in collective action among the thirteen states faded -- but external challenges to the newborn United States remained. One of the most immediately troublesome was that Britain didn't live up to the terms of the peace agreement. If the United States couldn't stand up to Britain, it was inevitable that France, Spain, everybody would push the Americans around as well.
The more direct issue was that the states only obeyed the terms of the Articles as convenient. It wasn't just a question of ignoring requisitions by Congress; the states were also inclined to ignore the provisions of treaties established by Congress; to implement their own trade policies, and print their own currencies; to maintain standing forces, and to conduct extended wars with the native tribes without even consultation with Congress.
The obvious result of each state going its own way would be confrontations, even armed clashes, between the states, particularly over territorial expansion in the West -- with states enlisting foreign powers for backing. Instead of a United States, the future seemed to present a set of little countries, bickering and fighting with each other, to their collective detriment. The far-sighted could see this, and knew there was no future in it. As Franklin liked to say during the Revolution: "We must all hang together, or assuredly we shall all hang separately."
Matters came to a head in the summer of 1786, when a disgruntled veteran named Daniel Shays led thousands of citizens of Massachusetts -- mostly poor farmers, angry at state taxes, plus the insistence of the authorities and merchants on being paid in hard currency -- in protests. The protests escalated to violence on 25 January 1787, when Shaysites marched on the armory in Springfield; state militia defending the armory opened fire on them, killing four and wounding 20.
Fighting between the rebels and militia continued through February, with the rebellion finally suppressed by the end of the month. Most of the rebels were granted amnesty; two of the leaders were hanged, in part because they were found guilty of looting, though Shays would be set free the next year. There was already discussion among the American leadership class of moving beyond the Articles of Confederation. Shays' rebellion gave greater drive towards a new system of national government.
* In the summer of 1787, only six years after the Articles of Confederation had been ratified, twelve of the states -- excluding Rhode Island -- sent delegates to a conference in Philadelphia to discuss ideas for a better central government. In mid-September 1787, the conference released a proposal for ratification by the states; it was signed by 39 of America's most influential men, including George Washington, Benjamin Franklin, Roger Sherman, James Wilson, John Rutledge, James Madison, Alexander Hamilton, and Gouverneur Morris.
Coming up with the proposal had not been easy; some of the delegates had left in disgust in the course of the deliberations, others who stayed refused to endorse it. After all, they had come to Philadelphia to fix the Articles of Confederation; they had not been expecting to see the Articles torn up, with a much more formidable system of central government proposed in its place.
Under its own Article VII, the new Constitution needed the assent of nine states to come into effect -- a lower bar than the unanimity the Articles of Confederation demanded for amendments. With the release of the proposal, political discussion in the states went into high gear, with states forming conventions to decide whether to ratify or not; the conventions, not the state legislatures, had the power to decide, leaving the legislatures no direct say in the process. To help keep up momentum, James Madison, Alexander Hamilton, and John Jay wrote a series of essays, under the common alias of "Publius", that discussed issues relative to the new Constitution. The essays were published in the newspapers, to become known as the "Federalist Papers".
Delaware, Pennsylvania, New Jersey, Georgia, and Connecticut had signed up by early 1788; Massachusetts joined in February 1788, following an intense debate and a close vote. Maryland and South Carolina followed, the vote being much more enthusiastic in favor of the Constitution. Eight down, one more to go; New Hampshire became the ninth state to ratify in June 1788, with Virginia following right behind. The new United States was a going operation, with George Washington sworn in as its first president in 1789.
That still left three states out in the cold. They might well have gone their own ways, but the ratification of the Constitution put them under great pressure to join. New York signed up in July 1788; North Carolina held out until late 1789, with Rhode Island finally caving in by mid-1790.
* It might be noted that, while the copy of the Constitution written on parchment and signed at the Philadelphia Convention remains on protected display in the US National Archives, it was of course not the only copy in circulation in the period. There were copies printed to be distributed to the states for ratification, as well as copies printed for popular consumption. Different printings had minor differences due to printing errors and the like.
That has led to the question: so which copy is the "true" Constitution? Emotionally, there's been an attachment to the parchment in the National Archives. However, legally there's much more basis to tag the unsigned copies sent to the states for their consideration -- since they were the ones the states ratified or "enacted". It seems like a trivial distinction, but when it comes to the Constitution, there are people who take trivia to an absurd extreme. [TO BE CONTINUED]
* Space launches for January 2018 included:
-- 08 JAN 18 / ZUMA (USA 280 / FAILURE) -- A SpaceX Falcon booster was launched from Cape Canaveral at 0100 UTC (previous day local time + 5), to put the secret "Zuma" payload into orbit. The launch failed; SpaceX announced it was not a booster failure, but no other details were released. Apparently the satellite failed in orbit. The Falcon 9 first stage performed a successful soft landing.
-- 09 JAN 18 / SUPERVIEW 1-03, 1-04 -- A Long March 2D booster was launched from Taiyuan at 0324 UTC (local time - 8) to put the "Superview 1-03" and "Superview 1-04" Earth observation satellites into orbit for Beijing Space View Technology Company. Also named "GaoJing", the satellites had a launch mass of 560 kilograms (1,234 pounds) and provided sub-meter high-resolution images for civilian and commercial customers in China and internationally.
-- 11 JAN 18 / BEIDOU x 2 -- A Chinese Long March 3B booster was launched from Xichang at 2318 UTC (next day local time - 8) to put two "Beidou" navigation satellites into orbit. They were placed in a medium Earth orbit with an altitude of 21,500 kilometers (13,350 miles) and an inclination of 55 degrees. They were the 28th and 29th Beidou satellites to be launched, bringing the constellation up to 17 operational satellites.
The Beidou system is being developed and deployed in three phases:
The Beidou 1 system was based on a "Radio Determination Satellite Service (RDSS)", with a user location determined by a ground station using the round trip time of signals exchanged via GEO satellite. It also provided a short message service. RDSS persists in the current Beidou system, with enhancements.
Beidou 2 introduced the GPS / GLONASS-like "Radio Navigation Satellite Service (RNSS)", initially with China coverage, now being expanded to global coverage. The system is dual-use, including a civilian service with an accuracy of 10 meters in the user position, 20 centimeters per second for user velocity, and 50 nanoseconds in time accuracy; and a military / authorized user's service, providing greater accuracy.
The two new Phase 3 MEO satellites -- formally "Beidou 3 M7" and "Beidou 3 M8" -- had a launch mass of 1,014 kilograms (2,235 pounds). They featured a new bus with a phased array antenna for navigation signals; a laser retro-reflector for orbital tracking; and changes in signals:
-- 12 JAN 18 / CARTOSAT 2F -- An ISRO Polar Satellite Launch Vehicle (PSLV) was launched from Sriharikota at 0358 UTC (local time - 5:30) to put the ISRO "Cartosat 2F" Earth observation satellite into orbit. A set of 31 smallsats was flown as well. This was the first PSLV mission since a payload fairing separation failure in August 2017. The PSLV was in the "XL" configuration, with bigger solid rocket boosters.
Cartosat 2F was the seventh in the ISRO Cartosat 2 series. India's Cartosat constellation consists of a series of satellites in sun-synchronous orbit, obtaining panchromatic and multispectral images of the Earth's surface. The satellites are used for both civilian and military purposes, although ISRO has been notably more secretive about the later satellites in the series.
The original satellite in the series, Cartosat-2, was a primarily civilian satellite. It was deployed in January 2007, being followed into orbit by the military Cartosat-2A in April 2008. Cartosat-2B was deployed in July 2010. These initial satellites carried only a panchromatic imaging payload, with a multispectral imager being introduced with the upgraded Cartosat-2C, which was launched in June 2016. Cartosat-2D and 2E were launched in February and June of 2017.
Cartosat-2E was expected to be the last of the Cartosat 2 series. Cartosat-2F was originally referred to as Cartosat-2ER, suggesting that it may have been built as a ground spare.
The Cartosat-2 series is based on ISRO's IRS-2 bus. Each spacecraft has a mass of about 710 kilograms (1,570 pounds) and is designed for a five-year service life. The satellites are equipped with reaction wheels, magnetorquers, and hydrazine-fuelled reaction control thrusters to provide three-axis stabilization, while a pair of solar arrays generates almost a thousand watts of electrical power for the spacecraft.
Total mass of the smallsat payloads was about 600 kilograms (1,300 pounds). They included:
This was the 42nd PSLV launch.
-- 12 JAN 18 / NROL 47 (USA 281) -- A Delta 4 booster was launched from Vandenberg AFB at 2211 UTC (local time + 8) to put a secret military payload into space for the US National Reconnaissance Office (NRO). The payload was designated "NROL 47". It was believed to be a TOPAZ-class radar surveillance satellite. The booster was in the Medium+ (5,2) configuration with two solid rocket boosters.
-- 13 JAN 18 / LKW 3 -- A Chinese Long March 2D booster was launched from Jiuquan at 0720 UTC (local time - 8) to put an Earth observation payload designated "LKW 3" into orbit. It was announced as an Earth survey satellite, but was judged to be a military optical surveillance satellite.
-- 17 JAN 18 / ASNARO 2 -- A JAXA Epsilon booster was launched from Uchinoura at 2106 UTC (next day local time - 9) to put the "Advanced Satellite with New System Architecture for Observation 2 (ASNARO 2)" Earth observation satellite into orbit.
ASNARO 2 was developed by Nippon Electric (NEC) Corporation and was the second flight of the "NEXTAR" Standard Minisatellite Bus, devised by a collaboration between NEC and the Japanese space agency JAXA. The program was under the umbrella of Japan Space Systems, a government-chartered non-profit organization under contract to Japan's Ministry of Economy, Trade and Industry, with NEC hoping to sell low-cost observation satellites on the export market.
There are three size ranges in the NEXTAR series -- the "NEXTAR-100L", the "NEXTAR-300L", and the "NEXTAR-500L" -- with the relatively small sizes tailored to the new JAXA Epsilon booster. They consist of a standard bus containing satellite support systems, with well-defined interfaces for payloads, support for autonomous operations, and well-defined communications links and ground support systems.
The first ASNARO satellite carried a visible-range imager, while ASNARO 2 carried an X-band synthetic aperture radar. It was based on the NEXTAR-300L bus, with a launch mass of 570 kilograms (1,256 pounds). ASNARO 3 will carry a hyperspectral imager. This was the third launch of the Epsilon booster.
-- 19 JAN 18 / JILIN x 2, SMALLSATS -- A Long March 11 booster was launched from Jiuquan at 0412 UTC (local time - 8) to put the "Jilin 1-07" and "Jilin 1-08" high-resolution Earth observation satellites into orbit.
Jilin 1-01 and Jilin 1-02 were launched on 7 October 2015 by a Long March 2D booster, with Jilin 1-03 following on 9 January 2017 on a Kuaizhou 1A solid-fuel launch vehicle -- both launches being from Jiuquan. Jilin 1-04 through Jilin 1-06 were launched from Taiyuan on 21 November 2017 by a Long March 6 booster. The Jilin 1 satellites have a launch mass of 95 kilograms (210 pounds) and a design lifetime of three years; they capture video imagery with meter resolution. The Jilin constellation is being deployed in three phases:
There were four other small satellites in the launch:
The Long March 11 (Chang Zheng 11) is a four-stage, solid-fueled quick-reaction launch vehicle, developed by the China Academy of Launch Vehicle Technology (CALT) to be easy to operate, capable of remaining in storage for long periods, and able to perform a reliable launch on short notice.
-- 20 JAN 18 / SBIRS GEO 4 (USA 282) -- An Atlas 5 booster was launched from Cape Canaveral at 0048 UTC (previous day local time + 5) to put the fourth "Space Based Infrared System Geosynchronous (SBIRS GEO 4)" missile early-warning satellite into orbit for the Pentagon. The SBIRS GEO constellation was intended to replace the long-standing "Defense Support Program (DSP)" geostationary early-warning satellite network.
The three previous SBIRS satellites were launched on Atlas 5 rockets in 2011, 2013, and 2017. The fourth gave the SBIRS fleet global coverage. Four "SBIRS HEO" piggyback sensor packages have also been launched on classified NRO spy satellites; two must be operational at any given time. The Air Force has two more SBIRS satellites on order for launch in 2021 and 2022 to improve coverage, and eventually replace the first two SBIRS geosynchronous spacecraft. The booster was in the "411" vehicle configuration with a 4-meter (13.1-foot) fairing, one solid rocket booster, and a single-engine Centaur upper stage.
-- 21 JAN 18 / ELECTRON ST -- A Rocket Lab Electron light booster was launched from a facility on the Mahia Peninsula on New Zealand's North Island at 0143 UTC (next day local time - 11) on its second test flight, titled "Still Testing". The Electron is designed to carry small spacecraft into orbit. The booster carried three triple-unit CubeSats on this flight, including a single Planet Dove Earth-observation satellite, and two Spire Lemur weather-observation satellites; it also carried an inert "disco ball" reflector payload named "Humanity Star".
-- 25 JAN 18 / YAOGAN 30 -- A Long March 2C booster was launched from Xichang at 0539 UTC (next day local time - 8) to put the secret "Yaogan 30" payloads into orbit. It was a triplet of satellites and may have been a "flying triangle" naval signals intelligence payload. The launch also included the "NanoSat-1A" payload.
-- 25 JAN 18 / SES 14, AL YAH 3 -- An Ariane 5 ECA booster was launched from Kourou in French Guiana at 2220 UTC (local time + 3) to put the "SES 14" and "Al Yah 3" geostationary comsats into orbit.
Built by Airbus Defense and Space for SES of Luxembourg, SES 14 was based on the Airbus Eurostar 3000 satellite bus, and had a launch mass of 4,423 kilograms (9,751 pounds). It was placed in the geostationary slot at 47.5 degrees west longitude to provide aeronautical and maritime mobility connectivity, wireless communications, broadband delivery, and video and data services over North, Central and South America, the Caribbean, the North Atlantic and parts of Europe. It replaced the NSS-806 satellite; it also hosted NASA's "Global-Scale Observations of the Limb and Disk (GOLD)" payload to measure densities and temperatures in Earth's thermosphere and ionosphere.
Al Yah 3 was built by Orbital ATK for Yahsat of Abu Dhabi. It was based on the Orbital GEOStar 3 spacecraft bus, and had a launch mass of 3,795 kilograms (8,366 pounds). Al Yah 3 was placed in the geostationary slot at 20 degrees west longitude to support broadband internet and data services over Africa and Brazil. While there was a launch anomaly, both satellites boosted themselves to their proper orbits, effectively as planned.
-- 31 JAN 18 / GOVSAT 1 (SES 16) -- A SpaceX Falcon 9 booster was launched from Cape Canaveral in Florida at 2125 UTC (local time + 4) to put the "GovSat 1 (SES 16)" geostationary comsat into orbit for LuxGovSat, a joint venture between SES and the government of Luxembourg. The satellite was built by Orbital ATK and had a launch mass of 4,230 kilograms (9,325 pounds). GovSat 1 was placed in the geostationary slot at 21.5 degrees east longitude to provide secure military X-band and Ka-band communications links, helping support Luxembourg's NATO obligations. The Falcon 9 first stage, which had been flown on a previous mission, conducted a controlled splashdown in the Atlantic as a test; it was not recovered.
* SEARCH THE SKY WITH AI: As discussed by an article from WIRED Online ("Astronomers Deploy AI To Unravel The Mysteries Of The Universe" by Sarah Scoles, 6 March 2017), the sky is vast, with astronomers confronted with the task of sorting through and assessing ever-growing numbers of observations. 21st-century technology is making matters worse -- but that same technology can also help.
For example, astronomer Kevin Schawinski, of ETH Zurich in Switzerland, had a central interest in how massive black holes shape galaxies, but didn't like the idea of wading through all the data. He thought artificial neural net (ANN) technology could help with the job, but didn't have the expertise to make it work for him. Then a colleague got him in touch with computer scientist Ce Zhang, also of ETH Zurich, who did have the expertise. They have now released their first effort, an ANN that cleans up blurry, noisy astronomical images.
The work was based on what is known as a "generative adversarial network (GAN)", which consists of two systems, both based on ANNs: one that generates images according to certain criteria, the other which assesses them on the basis of such criteria. Traditionally, an ANN is "programmed" by learning, for example being fed images of cats, and being told they are images of cats. The various patterns of connections in the ANN that result from the cat images are associated with cats. Once a sufficient number of cat images have been fed into the ANN, the system can then recognize new cat images on its own. The reliability of the recognition improves with the number of diverse training images.
A GAN, in contrast, trains itself. One element of the GAN, the "generator", invents images; while the other, the "discriminator", accepts both real and invented images, and attempts to tell them apart, scoring itself on how well it does so. The discriminator becomes more adept with this practice; and also hands scoring back to the generator for the images it invented, allowing the generator to invent more convincing images.
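The generator-discriminator loop is simple enough to demonstrate in miniature. The toy sketch below -- an illustration of the general GAN idea, not the astronomers' image-restoration network -- trains a linear generator against a logistic-regression discriminator so that it learns to imitate samples from a 1-D Gaussian; all the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: a 1-D Gaussian.
REAL_MU, REAL_SIGMA = 4.0, 1.25
def real_batch(n): return rng.normal(REAL_MU, REAL_SIGMA, n)

def sigmoid(t): return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator G(z) = a*z + b, mapping noise to samples
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)

lr = 0.02
for step in range(2000):
    x = real_batch(32)               # real samples
    z = rng.normal(0.0, 1.0, 32)     # noise fed to the generator
    g = a * z + b                    # fake samples
    d_real, d_fake = sigmoid(w * x + c), sigmoid(w * g + c)

    # Discriminator ascends log D(x) + log(1 - D(G(z))).
    w += lr * np.mean((1 - d_real) * x - d_fake * g)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascends log D(G(z)) -- the non-saturating GAN objective.
    grad_g = (1 - d_fake) * w        # d log D(g) / d g
    a += lr * np.mean(grad_g * z)
    b += lr * np.mean(grad_g)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MU})")
```

After training, the generated samples should drift toward the real distribution; in an image-restoration GAN like Schawinski and Zhang's, the same adversarial scoring pushes the generator toward images the discriminator cannot tell from genuine clean observations.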
Ian Goodfellow -- a computer scientist at the non-profit organization OpenAI in San Francisco, California -- says: "You can think of the discriminator as a teacher that tells the generator how to improve." It can also be compared to a banker working with a counterfeiter to produce better counterfeits. Goodfellow came up with the idea of GANs in 2014, while he was a student of machine-learning pioneer Yoshua Bengio at the University of Montreal in Canada. According to Goodfellow, GANs are very efficient, able to achieve high competence on the basis of hundreds of training images, while current state-of-the-art image recognition typically requires tens of thousands.
Schawinski and Zhang's GAN starts out with a corrupted image of, say, a galaxy along with the proper image, with the generator and discriminator then figuring out how to get from the corrupted image to the real one. Having learned how to do so, the GAN can then accept a poor image of a cosmic object and, as Schawinski puts it, "make it better than it actually is." They see this as only a starting point, the long-range goal being to provide astronomers with tools they can just use, without knowing how they work under the hood.
They're not the only astronomers tinkering with AI. Another group of researchers at ETH Zurich used an ANN to recognize and then mask out the human-made radio interference that comes from satellites, airports, wi-fi routers, microwaves, and malfunctioning electric blankets. Computer scientist Max Welling at the University of Amsterdam is interested in using GANs to support the Square Kilometer Array (SKA), a radio-astronomy observatory to be built in South Africa and Australia. The SKA will produce such vast amounts of data that its images will need to be compressed into low-noise but patchy data. Generative AI models will help to reconstruct and fill in blank parts of those data, producing the images of the sky for astronomers to examine.
Of course, the refinement of AI technology for astronomy will open doors for other applications of AI in science. Goodfellow is enthusiastic, seeing AI as not only helping researchers, but also protecting the privacy of test subjects. There's nothing more valuable than patient data for medical research, but its use involves a loss of privacy. If an AI, certified with confidentiality in mind, searches through patient data, researchers never actually get their hands on patient records. AI analysis via data-mining presents a threat to privacy; but AI may end up being the tool that neutralizes the threat.
* CRUMBLING INFRASTRUCTURE? As discussed by an article from REUTERS.com ("Crumbling Bridges? Fret Not America, It's Not That Bad" by Jason Lange & Katanga Johnson, 30 January 2018), there's been a great deal of fuss over America's deteriorating civic infrastructure over the past several years. In his State of the Union address in January, President Donald Trump called for a major infrastructure program, to run to at least $1.5 trillion USD over 10 years. Trump did not advance much in the way of specific plans, with critics suggesting that his underlying idea was to shift the problem onto the states. However, there is a broad perception that the issue is serious: that US infrastructure is in a bad way, on the road to disaster. Other politicians, as well as business groups, have called for action.
Excitement over infrastructure got rolling in 2007, when the Interstate 35 bridge over the Mississippi River in Minneapolis collapsed during rush hour, with 13 people killed. However, Federal investigators blamed a design flaw and not deterioration for the collapse. A recent study conducted by Reuters suggests the "crisis" is overblown, that bridges and other road infrastructure are not in such bad shape, and that the fuss over the matter may be distracting us from more urgent problems. The study revealed:
In 2017, the state of Missouri rated four bridges with more than 200,000 daily crossings as structurally deficient. According to a Missouri transportation engineer, they don't present an imminent danger: "If they were at a point of being dangerous, they would be closed."
A 2014 study of bridge failures found that about 120 US bridges collapse or partly collapse every year -- but most do so because of floods, fires, and collisions, not structural failure. Furthermore, most of these bridges were little used, with fewer than 755 daily crossings, and only 4% of the failures involved fatalities. Wesley Cook -- a structural engineer at the New Mexico Institute of Mining and Technology who authored the 2014 study -- comments: "We, the public, should feel safe."
Other studies give US infrastructure generally good marks. According to the World Economic Forum's latest global competitiveness report, America's road network, including its bridges, is rated third among the largest advanced economies by company executives, slightly behind Japan and France, but ahead of those of Germany, Britain, Canada, and Italy.
State and local governments are always repairing highways and bridges; according to a 2017 study by the US Federal Highway Administration, a relatively modest increase in spending would cut the number of bridges needing repairs by about two-thirds by 2032.
There are certainly problems with American infrastructure. The American Society of Civil Engineers (ASCE) judges that US mass transit is in worse shape than any other infrastructure in terms of quality and funding. In a 2017 report, the ASCE also scored US dams, levees, and drinking water facilities as in worse condition than bridges. There is a consensus that public and private investment in US infrastructure should indeed be boosted by $1 trillion to $2 trillion USD over the next decade. However, such an effort will require a patient, long-term commitment and thorough planning -- things too dull to be the stuff of political speeches.
* UNDERSTANDING AI (12): As discussed by an article from ECONOMIST.com ("Battle Of The Brains", 7 December 2017), there was a time when artificial intelligence was an obscure game played by academics. Now it's in the headlines, with the world's tech giants scrambling to get an edge in AI technology.
The AI revolution is being driven by the growing floods of digital data, plentiful computing power, and ever-cleverer AI techniques. The West's biggest tech firms, including Alphabet (Google's parent), Amazon, Apple, Facebook, IBM and Microsoft are pumping huge sums into AI, as are their counterparts in China. In 2017, companies paid out over $20 billion USD in AI-related mergers and acquisitions, or over 25 times what was paid in 2015.
The main focus is on machine learning, with AI systems sifting through data to teach themselves to recognize patterns and to predict trends. Machine learning is now used in a wide range of applications in tech industries -- online ad targeting, product recommendations, augmented reality, and self-driving cars. Zoubin Ghahramani, head of AI research at Uber, plausibly believes that AI will be as transformative as the rise of computers.
For an example of the transformative power of AI, consider databases. From the 1980s, databases became the tool to store data, sort out insights, and perform tasks such as inventory management. AI promises to go one big step beyond databases to make software that absorbs data to make decisions on its own -- being "far more predictive and responsive", according to Frank Chen of Andreessen Horowitz, a venture-capital firm. For example, consider Google's Gmail, which scans the content of e-mails and suggests quick, one-touch replies on mobile devices. We're entering an age of smart software that learns from users and its work environment, to adapt itself accordingly.
As with past waves of new technology, such as the rise of personal computers and mobile telephony, AI has the prospect of radically altering how businesses work, changing the way they conduct operations, and providing options for new enterprises. That makes it both a promise and a threat. Jeff Wilke -- chief executive of "Worldwide Consumer" at Amazon, a lieutenant to Jeff Bezos -- says: "If you're a tech company and you're not building AI as a core competence, then you're setting yourself up for an invention from the outside."
In other words, if you're not with the steamroller, you're part of the road. This is well understood, with the result that the AI boom feels like the California gold rush. Although Chinese firms such as Baidu and Alibaba are investing in and making use of AI, the biggest players are the Western tech firms. Alphabet is the most prominent of all: it has been making money from AI for a number of years, and has an elite group of researchers. However, Alphabet is by no means without challengers, with rival firms competing for talent; attempting to gain a business edge by leveraging machine learning; and trying to use AI to open up new markets to exploit.
The grab for talent is particularly frantic, since brains are harder to come by than raw data or computing power. The demand for AI "builders" who can come up with new AI technology far exceeds the supply of students being generated by AI-oriented universities. According to Gurdeep Singh Pall of Microsoft, current AI systems are "idiot savants ... They are great at what they do, but if you don't use them correctly, it's a disaster."
Fending off disaster means hiring the right people, the result being the plundering of academic departments for professors, and the hiring of graduate students who haven't yet earned their degrees. Andrew Moore -- dean of Carnegie Mellon University's (CMU) school of computer science, a pioneering institution in AI, whose robotics department was famously plundered by Uber in 2015 -- says that job fairs now resemble frantic "Thanksgiving Black Friday sales at Walmart."
Academic conferences are now frequented by corporate head-hunters. The best recruiters are superstar academics, people like Facebook's Yann LeCun and Alphabet's Geoffrey Hinton. They're both professors who retain a university affiliation, who can attract others to sign up. If huge salaries aren't enough, inside knowledge can also be a big draw -- AI researchers are geeks, after all.
If push comes to shove, corporations simply buy startups. This trend emerged in 2014, when Google spent a cool half-billion dollars on DeepMind -- a London-based startup with no product and no revenue, but with some high-powered AI professionals. DeepMind, of course, went on to build the AlphaGo AI system that mastered the game of Go. Other big firms have also paid big money to snap up money-losing startups. The sale prices end up being rated not on future profits, but on the price per head for employees, running up to $5 million to $10 million USD. [TO BE CONTINUED]
* AMERICA'S CONSTITUTION (2): The ratification of the Articles of Confederation in 1781 did not change the status quo of America's flimsy governance. The document featured statements of high principles, but defined an ineffectual central government for the United States. The document emphasized the effective independence of the individual states:
Each state retains its sovereignty, freedom, and independence, and every power, jurisdiction, and right, which is not by this Confederation expressly delegated to the United States, in Congress assembled.
Given that premise, then what was the point of a United States government? According to the Articles:
The said States hereby severally enter into a firm league of friendship with each other, for their common defense, the security of their liberties, and their mutual and general welfare, binding themselves to assist each other, against all force offered to, or attacks made upon them, or any of them, on account of religion, sovereignty, trade, or any other pretense whatever.
The Articles specified free movement between states, US citizens being able to move unhindered from one state to another, with visitors having the same civic rights as the inhabitants of the state -- with an exception made for indigents and vagrants. A state to which a criminal fled was obligated to extradite the criminal back to the state in which the crime was committed.
The Confederation Congress of the United States consisted of delegations from each state, ranging from two to seven members, with the members of the delegation appointed by state legislatures on one-year terms; Connecticut and Rhode Island involved voters in the process. Somewhat strangely, no congressman could serve for more than three years out of a six-year interval -- that is, a congressman might serve for three years, be out of Congress for three years, come back to Congress for three years, leave again for three years, and so on. Each state got a single vote. Congress was empowered to appoint a president -- meaning a presiding officer, not chief executive -- for a term no longer than one year.
The Confederation Congress had the sole right to declare war, although the states could on their own initiative fight Indian tribes or pirates, with the states maintaining militias for their own defense. Congress could designate general officers and raise military forces from the states; the states could appoint officers of the rank of colonel and below. Congress could also authorize privateers -- in effect, legalized pirates -- with "letters of marque & reprisal". Congress also had the sole ability to conduct foreign diplomacy and establish political or commercial treaties. Such treaties were binding on all the states, and the states could not interfere in them.
Other rights and responsibilities of the Confederation Congress included:
Congressional decisions were to be made by a majority of no fewer than nine states. As for funding, Congress could only beg the states for money and resources, with the state legislatures responsible for their provision -- if they felt like it. The Articles closed with the declaration that they were "perpetual, and may be altered only with the approval of Congress and the ratification of all the state legislatures." [TO BE CONTINUED]
* GIMMICKS & GADGETS: As discussed by an article from REUTERS.com ("Amazon's Automated Grocery Store Of The Future Opens Monday", by Jeffrey Dastin, 21 January 2018), online retailer Amazon.com has now opened a checkout-free convenience store, after more than a year of testing. The Seattle store, called "Amazon Go", uses cameras and sensors to track what shoppers take from the shelves, and what they put back. Although customers have to come in through a turnstile and identify themselves with a smartphone app, there are no checkout lines and no cash registers; whatever customers take out is electronically billed to them when they go out the door.
The Go store, which has a footprint of 167 square meters (1,800 square feet), is located in an Amazon office building. Grocers are paying close attention; Amazon's purchase of the high-end supermarket chain Whole Foods Market last year for $13.7 billion USD made it clear the company was moving in on their turf. The Seattle store was open to Amazon employees all through 2017, but bugs in the system kept it from being opened to the public until now.
Cameras monitoring from above and weight sensors in the shelves help track what a customer picks up, with the item added to the customer's account; the item is withdrawn if the customer puts it back. One of the troubles was that the video system tended to confuse shoppers who were similar in appearance. Another was that children had a tendency to reshelve products in the wrong places. Gianna Puerini, vice president of Amazon Go, said in an interview: "This technology didn't exist. It was really advancing the state of the art of computer vision and machine learning."
Pointing to two different Starbucks drinks -- one regular, one with light cream -- Puerini noted that the system could tell them apart, even though a shopper might confuse them: "If you look at these products, you can see they're super similar."
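Amazon hasn't disclosed how Go fuses its cameras and shelf sensors, but the basic idea of combining weight and vision evidence can be sketched in a few lines. Everything below -- the product names, weights, scores, and the Gaussian weight model -- is hypothetical, purely to illustrate why fusion resolves items that either sensor alone would confuse.

```python
import math

# Hypothetical catalog: product -> listed weight in grams.
CATALOG_G = {
    "espresso_regular": 355.0,
    "espresso_light_cream": 358.0,
    "yogurt_cup": 150.0,
}

def weight_scores(delta_g, sigma=5.0):
    """Score each product by how well its weight explains the measured
    shelf-weight delta (Gaussian sensor-noise model, normalized)."""
    raw = {p: math.exp(-((delta_g - w) ** 2) / (2 * sigma ** 2))
           for p, w in CATALOG_G.items()}
    total = sum(raw.values())
    return {p: s / total for p, s in raw.items()}

def fuse(delta_g, vision_scores):
    """Multiply weight and vision evidence, pick the most likely product."""
    ws = weight_scores(delta_g)
    fused = {p: ws[p] * vision_scores.get(p, 1e-6) for p in CATALOG_G}
    return max(fused, key=fused.get)

# The two espresso drinks weigh nearly the same, so weight alone is
# ambiguous; here the (made-up) vision scores break the tie.
vision = {"espresso_regular": 0.30, "espresso_light_cream": 0.65,
          "yogurt_cup": 0.05}
picked = fuse(356.0, vision)
```

On weight alone, a 356-gram delta marginally favors the regular drink; once the vision classifier's preference is multiplied in, the light-cream drink wins, which is the kind of disambiguation the Go system needs for "super similar" products.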
Amazon did not comment on when more Go stores might be opened, but did say there were no plans to update the bigger and more elaborate Whole Foods stores.
* As discussed by an article from WIRED.com ("To Prevent Motorcycle Crashes, Make Riders 'Talk' to Cars" by Eric Adams, 16 June 2017), the age of intelligent vehicles promises to make them far safer -- or at least, it promises to make those with four wheels safer. What about motorbikes?
People are about 30 times more likely to be killed in a motorbike accident than a car accident. While cars have acquired seat belts, air bags, and other safety technologies, there's not much that can be done to protect motorbike riders in accidents. Improving motorbike safety, then, means preventing accidents from happening in the first place.
An Israeli company named "Autotalks" thinks that "vehicle to vehicle (V2V)" communications will be useful to that end. Autotalks is collaborating with industry supplier Bosch to develop and test a wi-fi-based communication system that can track vehicles in the vicinity of a motorbike, even those the rider can't see, and tip off the rider to a possible collision.
The Autotalk "bike to vehicle (B2V)" scheme relies on a module that exchanges location, speed, heading, braking mode, and other data with nearby vehicles fitted with a comparable module. The module will alert the rider to threats via audio or visual cues; the module in a car can similarly tip the car off to the motorbike. According to Autotalk co-founder Onn Haran: "It's a low-cost solution with a small form factor, which is critical for motorcycles. It can operate in a wide temperature range and be placed anywhere on the motorcycle."
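A minimal sketch of what a B2V module might do with those exchanged fields: extrapolate both trajectories forward under a constant-speed, constant-heading assumption, and warn the rider if the two vehicles close within a danger radius inside a short horizon. The message fields, thresholds, and geometry here are illustrative assumptions, not Autotalks' actual protocol.

```python
import math

def predict(x, y, speed, heading_deg, t):
    """Position after t seconds, assuming constant speed and compass
    heading (0 = north, 90 = east)."""
    h = math.radians(heading_deg)
    return x + speed * math.sin(h) * t, y + speed * math.cos(h) * t

def collision_warning(bike, car, horizon_s=5.0, danger_m=4.0, step=0.1):
    """Scan the horizon; return (True, t) at the first moment the two
    vehicles come within danger_m of each other, else (False, None)."""
    t = 0.0
    while t <= horizon_s:
        bx, by = predict(*bike, t)
        cx, cy = predict(*car, t)
        if math.hypot(bx - cx, by - cy) < danger_m:
            return True, t
        t += step
    return False, None

# Bike heading north at 15 m/s, 45 m south of a crossing; car 60 m east of
# it, heading west at 20 m/s -- both reach the crossing at about t = 3 s.
bike = (0.0, -45.0, 15.0, 0.0)     # x, y (m), speed (m/s), heading (deg)
car  = (60.0, 0.0, 20.0, 270.0)
warn, t_warn = collision_warning(bike, car)
```

A real module would refine this with braking state, map data, and sensor uncertainty, but the core value of B2V is exactly this: the warning fires seconds early, from data the rider's own eyes may not have.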
Early research by Bosch suggests that B2V could prevent about a third of all motorbike accidents in Germany. True, all vehicles will need standardized, and presumably mandated, V2V technology for the system to work optimally -- but as more vehicles with V2V hit the roads, riding will gradually become safer, so it's useful to get rolling on the standards today. Haran is bullish on the Autotalk scheme, seeing it ultimately as a normal feature of smartphones: "It can span to other vulnerable road users, like bicycles. It can even serve pedestrians, for example elderly people crossing remote streets."
* A column from BLOOMBERG BUSINESSWEEK ("Innovation: WindTree" by Nick Lieber, 20 March 2017) spotlighted the "WindTree", a product of French inventor Jerome Michaud-Lariviere and his NewWorldWind firm. The WindTree is an unorthodox wind turbine system, with 54 green-colored vertical turbines, each 90 centimeters (3 feet) tall, mounted on a 9.15-meter (30-foot) tall metal tree.
It costs over $50,000 USD and can generate at least 1,000 kilowatt-hours of energy a year, more in persistently breezy places. A number of prototype installations have been set up in Europe. It's hard to say how practical it is, but it certainly is cute; one could imagine adding pinwheels and LED toys to the WindTree to get an artistic and decidedly amusing effect.
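As a rough sanity check on the economics -- using the column's cost and output figures, plus a retail electricity price that is an assumption of ours, not from the column -- the simple payback period can be worked out in a few lines:

```python
# Back-of-the-envelope payback estimate for the WindTree, from the column's
# figures. The electricity price is an assumed typical retail rate, not a
# figure from the column.
COST_USD = 50_000.0
OUTPUT_KWH_PER_YEAR = 1_000.0        # "at least", per the column
PRICE_USD_PER_KWH = 0.12             # assumed retail rate

annual_value_usd = OUTPUT_KWH_PER_YEAR * PRICE_USD_PER_KWH
payback_years = COST_USD / annual_value_usd
print(f"simple payback: about {payback_years:.0f} years")
```

At roughly $120 USD of electricity per year, the payback runs to centuries -- which suggests the WindTree's appeal is more sculptural than economic.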
* EARTH SCIENCE DECADAL SURVEY: As reported by an article from NATURE.com ("Long-Awaited US Report Charts Course For Studies Of Earth From Space" by Alexandra Witze, 5 January 2018), the US National Academies of Sciences, Engineering, and Medicine released the latest "decadal survey" for US Earth observation satellite missions. The document defined efforts to improve weather forecasts, predict sea-level rise, and come to grips with ecosystem change as top priorities for the US National Aeronautics & Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), and the US Geological Survey (USGS) for the next decade.
The report comes at a critical time, since the White House and conservative Republicans in Congress have been making noises about getting NASA -- and by implication users of NASA Earth-observation satellites -- out of the Earth-observation business. According to Antonio Busalacchi, president of the University Corporation for Atmospheric Research (UCAR) in Boulder, Colorado: "This is a very important process, having the community speak up and come up with a consensus set of priorities. Congress reads these, staffers read them, agencies pay attention in a very serious way."
The last decadal survey, released in 2007, focused on spacecraft designs. The new survey focuses instead on science questions to be answered, emphasizing how Earth-observing missions benefit society and national security -- for example, helping farmers with drought assessments and providing support for military operations. Waleed Abdalati, director of the Cooperative Institute for Research in Environmental Sciences (CIRES) in Boulder, co-chair of the group that wrote the report, commented: "Earth information is a critical part of our lives."
The report listed 35 questions, including how the planet's water cycle is changing; learning why powerful storms occur where and when they do; and reducing the uncertainty in projections of future planetary warming. It identifies aerosol particles, clouds and changes in mass across Earth's surface as among the most significant environmental variables to study.
The decadal surveys are outlines, not commitments. NASA used the 2007 survey to decide which missions to fly, and to prioritize them. Shifting budgets and development difficulties meant that some projects didn't happen and some were delayed for years. For example, the "ICESat-2" mission to measure polar ice sheets, which the 2007 survey suggested be launched between 2010 and 2013, won't fly until well into 2018.
The 2018 survey recommends that NASA establish a 'designated' category of missions for the next decade to tackle the high-priority questions. It suggests the agency develop five of these spacecraft:
Along with these five projects, the report recommends development of three lower-priority "Earth System Explorer" missions, selected on a competitive basis from a list of seven missions, including:
On the lowest rung of priority, the report recommends flying two "Venture Continuity" missions, at $150 million USD each, leveraging off innovative technology to maintain observations from previous missions.
NASA has more than 15 Earth-science missions at various stages in the pipeline for launch by 2023. The agency currently spends about $1.9 billion USD a year on Earth sciences, the Obama Administration having been enthused about NASA's efforts in that field. The Trump White House is less enthusiastic, proposing a cut to $1.75 billion USD and targeting a set of missions for cancellation. Advocates believe the cuts aren't likely to be fully approved by Congress.
The report suggests that the recommended missions not already in the pipeline be funded, at a total estimated cost of $3.4 billion USD over the next decade. Trump's nominee for NASA administrator, Representative Jim Bridenstine (R-OK), has been inclined to climate-change denial in the past, but publicly pledged to follow the decadal survey. What actually happens, as is broadly the case in the Trump Administration, remains to be seen: watch what they do, not what they say, and remember that they don't always have a clear idea of what they're doing.
* PUSHING BACK: The inclination of China's authorities to use the internet as a tool of social control was discussed here in the summer. As discussed by an article from ECONOMIST.com ("Public Pushback", 25 January 2018), Chinese citizens are not entirely happy with the uncertainty of their online privacy.
The most visible public worry is not government intrusion but, as in the West, data security. A prominent case is that of Xu Yuyu, a poor young student who had all her family's money stolen from her, and then died of a heart attack. The scammer, Chen Wenhui, had paid a hacker to steal her personal details; he got life behind bars, the crime being theft of private information. In response to such stories, public awareness of and indignation over flimsy data security have been on the rise.
As porous as online privacy is in the West, it's worse in China. A man who talked on his mobile phone one day about picking strawberries claimed that when he used his phone the next day to open Toutiao -- a news aggregator driven by artificial intelligence -- his news was all about strawberries. Coincidence? Doubtful. The story went viral. Toutiao denied it was snooping, but admitted the story pointed to a growing public "awareness of privacy".
Chinese are traditionally and culturally not inclined to be overly concerned about privacy. The Chinese word for privacy, "yinsi", has a negative connotation of secrecy. Things that in the West are off-limits in conversation between strangers -- for example, asking how much a person makes -- are normal subjects of discussion in China. Surveys reinforce the perception that Chinese place a relatively low value on privacy. A 2015 study from HARVARD BUSINESS REVIEW concluded Chinese would pay less to protect data derived from their government-issued identification cards and credit cards than people from America, Britain, or Germany.
A survey in China determined that 60% of respondents allowed their mobile apps to share personal information with third parties. Chinese law didn't even define what counts as personal information until a cyber-security bill took effect in 2017. Two things are changing public attitudes:
Consumer pressure and practicalities are forcing Chinese companies to take data security more seriously. Nie Zhengjun -- Ant Financial's chief privacy officer; yes, they have one -- says that Chinese consumers are "no longer content with preventing information from being used for fraudulent purposes ... Now they want control in protecting their privacy."
There is little in this agitation that directly affects the Chinese government, but the government can't quite get clear of the issue. To have proper data security, the Chinese government will have to define and enforce it. However, that leads to the same quandary as exists over strong encryption in the USA -- the government can't have a "back door" to the personal data of citizens and keep out the Black Hats at the same time. In any case, citizens can find strong encryption tools online; trying to ban them would be heavy-handed and difficult.
In 2017, the government conducted an inspection campaign examining the privacy policies of ten internet firms. At least five were found to have enhanced data protection by making it easier for users to delete personal information. The government was able to proclaim that it was a defender of data protection. At the same time, however, the new Chinese cyber-security law requires that copies of all personal data gathered by operators of "critical information infrastructure" in mainland China must be stored in the country.
Of course, the suspicion is that the government wants access to that data, either covertly or by squeezing the data-storage companies for it. Apple of the US is now complying with the law by handing management of the data of iCloud customers in China to a state-owned company. Apple has declared that "no back doors will be created into any of our systems", and that it will ensure "strong data privacy".
For the time being, the Chinese government seems perfectly comfortable with clamping down on abuses of data privacy by non-state actors, while disregarding data privacy when it suits the convenience of the state. How long this disconnect can persist without leading to trouble remains to be seen.
* UNDERSTANDING AI (11): As discussed by an article from SCIENCEMAG.org ("Artificial Intelligence Just Made Guessing Your Password A Whole Lot Easier" by Matthew Hutson, 15 September 2017), artificial intelligence has now become more unsettling -- by figuring out how to crack passwords. Researchers used AI, combined with existing tools, to crack more than a quarter of the passwords from a set of more than 43 million LinkedIn profiles.
Password-cracking programs are not news; two of the most popular now available are "John the Ripper" and "hashCat". They use a number of techniques -- one being simple brute force, in which they try lots of randomly-generated passwords until they get the right one. That's inefficient; a password has to be 100% correct or it fails, meaning there's no way to converge on a solution; no way to tell if a password is just one character off, or if all the characters are off. Sensible login systems also use "throttling", which means slowing down acceptance of entries after a certain number of tries, and blocking entry completely if failed attempts persist.
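The throttling idea can be sketched in a few lines. This is a generic illustration, not the scheme of any particular login system; the class name and thresholds are invented:

```python
# Sketch of login throttling: the delay before the next attempt doubles
# with each consecutive failure, and the account locks after a limit.
class ThrottledLogin:
    MAX_ATTEMPTS = 10   # lock the account after this many failures

    def __init__(self, password):
        self._password = password
        self._failures = 0

    def delay_seconds(self):
        # required wait before the next attempt: 0, 1, 2, 4, 8, ... seconds
        return 0 if self._failures == 0 else 2 ** (self._failures - 1)

    def attempt(self, guess):
        if self._failures >= self.MAX_ATTEMPTS:
            raise RuntimeError("account locked")
        # a real system would enforce delay_seconds() here before checking
        if guess == self._password:
            self._failures = 0   # success resets the throttle
            return True
        self._failures += 1
        return False
```

By the ninth failure an attacker is waiting 256 seconds per guess, and one failure later the account locks outright -- which is why brute force against a live login prompt, as opposed to a stolen password database, rarely pays off.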
A smarter approach is to leverage off previously leaked passwords and probability methods to make educated guesses of passwords. However, programs smart enough to do that are not trivial to write. Researchers at Stevens Institute of Technology in Hoboken, New Jersey, decided to come up with a system that performed such educated guessing -- but got its education from machine learning, not brute-force coding.
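The educated-guessing approach can be sketched with a toy Markov model: train letter-transition statistics on a leaked-password list, then sample new candidates that share those statistics. This is a simplified illustration of the general principle, not the code of any actual cracking tool, and the ten "leaked" passwords are an invented sample:

```python
import random
from collections import defaultdict

random.seed(42)

# Invented stand-in for a leaked-password list.
LEAKED = ["password", "sunshine", "princess", "iloveyou", "monkey",
          "shadow", "master", "dragon", "football", "baseball"]

# Record which letter follows which, including start/end markers.
START, END = "^", "$"
transitions = defaultdict(list)
for word in LEAKED:
    chars = [START] + list(word) + [END]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def guess(max_len=12):
    """Sample one candidate password by walking the learned transitions."""
    out, state = [], START
    while len(out) < max_len:
        state = random.choice(transitions[state])
        if state == END:
            break
        out.append(state)
    return "".join(out)

# Candidates share the letter statistics of the training passwords,
# so they look password-like rather than like random strings.
candidates = [guess() for _ in range(5)]
print(candidates)
```

Guesses sampled this way are vastly more likely to hit real passwords than uniform random strings, which is why probability-ordered guessing beats blind brute force.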
They started with a generative adversarial network (GAN) -- mentioned earlier here in this series -- which used two ANNs: a "generator" to produce password candidates, and a "discriminator" to evaluate the candidates. They tuned each other until the generator became highly skilled at guessing passwords.
The Stevens team compared their GAN, which they named "PassGAN", with two versions of hashCat and one version of John the Ripper. Each was fed tens of millions of leaked passwords from a gaming site named "RockYou", with each of them generating lists of hundreds of millions of passwords in response. The sets of generated passwords were "graded" by counting up how many matches there were with a set of passwords obtained from LinkedIn.
On its own, PassGAN wasn't so impressive, scoring only a 12% match, with the best of the competition scoring 23%. However, a collaboration between PassGAN and hashCat managed to crack 27% of the passwords in the LinkedIn set. Giuseppe Ateniese -- a computer scientist at Stevens, one of the lead researchers in the project -- believes that PassGAN's ability to learn from experience will allow it to eventually pull ahead of hashCat, which is only extensible by adding more code. He says of PassGAN: "It's generating millions of passwords as we speak."
Ateniese compares PassGAN to AlphaGo, the Go-playing system developed by DeepMind, also discussed earlier in this series: "AlphaGo was devising new strategies that experts had never seen before, so I personally believe that if you give enough data to PassGAN, it will be able to come up with rules that humans cannot think about."
The Stevens researchers say that PassGAN should actually help security, by suggesting rules for constructing passwords that give it a hard time. To be sure, as long as people use flimsy passwords like "12345678", they're not going to care enough to construct strong ones; but nobody needs any computing horsepower to crack such passwords in the first place.
* As discussed by a related article from SCIENCEMAG.org ("Artificial Intelligence Can Evolve To Solve Problems" by Matthew Hutson, 11 January 2018), San Francisco-based Uber, the ride-hailing giant, is interested in robot vehicles, and is accordingly interested in AI. Uber researchers have published papers showing the company is investigating an evolutionary approach to AI known as "neuroevolution", having used it to play video games, solve mazes, and make a simulated robot walk.
Neuroevolution is a scheme of mutating artificial neural networks (ANNs), then selecting the best of the mutated batch -- with the process then repeated until a satisfactory solution is obtained. It's not a completely new idea, having been used to build ANNs that can compose music, control robots, and play the video game SUPER MARIO WORLD. However, these exercises either involved relatively easy tasks, or relied on programming tricks to simplify a task.
Uber researchers have developed neuroevolutionary systems that can handle more elaborate tasks, without relying on programming trickery. They began by seeking an alternative to "gradient descent" -- a popular machine-learning scheme, in which a system being trained attempts to match an input to a desired output, making adjustments to its ANN in response to the error to get a better match, with the process repeated until a solid match is obtained. Most methods of training ANNs use gradient descent.
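Gradient descent can be boiled down to a toy one-parameter example -- a generic illustration, not Uber's code. Here the "network" is a single weight w, nudged downhill against the error gradient until w * x matches the targets:

```python
# Toy gradient descent: fit a single weight w so that w * x matches y,
# by repeatedly stepping against the gradient of the mean squared error.
def train(xs, ys, lr=0.01, steps=500):
    w = 0.0
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad            # step downhill
    return w

xs = [1.0, 2.0, 3.0]
ys = [3.0, 6.0, 9.0]              # generated by the "true" w = 3
w = train(xs, ys)
print(round(w, 3))                # converges toward 3.0
```

Each step follows the single steepest downhill direction -- which is exactly the "one path towards improvement" property the evolutionary approach trades away.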
One Uber research team decided to take the less focused neuroevolutionary approach. A large collection of randomly programmed ANNs was tested on, say, an Atari game, with the best copied, with slight random mutations, replacing the previous generation, and the process repeated for several generations. Gradient learning takes one path towards improvement, at the expense of ignoring others that might be more productive over the long run; the neuroevolutionary approach, with a "population" of possible solutions, explores more options.
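The evolutionary loop just described can be sketched on a toy problem: keep a population of candidate parameters, copy the best with small random mutations, and repeat. Again a generic illustration under invented parameters, not the researchers' system, which evolves full ANNs scored on games rather than a single weight:

```python
import random

random.seed(1)

# Fitness: how closely w * x matches the targets (higher is better).
def fitness(w, xs, ys):
    return -sum((w * x - y) ** 2 for x, y in zip(xs, ys))

xs, ys = [1.0, 2.0, 3.0], [3.0, 6.0, 9.0]   # true answer: w = 3

# Start from a random population of candidate weights.
population = [random.uniform(-10, 10) for _ in range(50)]

for generation in range(100):
    # keep the fittest candidate, refill the rest with mutated copies
    best = max(population, key=lambda w: fitness(w, xs, ys))
    population = [best + random.gauss(0, 0.1) for _ in range(49)] + [best]

print(round(best, 2))   # evolves toward 3.0
```

No gradients are computed at all; the population of mutated copies explores many directions at once, which is the diversity advantage the article describes.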
The exploratory neuroevolutionary approach outscored other popular schemes for training ANNs on 5 of 13 Atari games. It also taught a virtual humanoid robot to walk, in the process coming up with ANNs a hundred times bigger than any previously developed through neuroevolution to control a robot. The researchers say their neuroevolutionary scheme was strictly a lab demonstration, and far from optimized; they have been adding improvements, with its performance being enhanced accordingly. Indeed, there is no reason that an evolutionary approach and gradient learning can't get along: evolution is good for finding diverse solutions, while gradient descent is good for refining each solution. [TO BE CONTINUED]
* AMERICA'S CONSTITUTION (1): The structure of the government of the United States of America (USA) is defined by the nation's Constitution, established in the late 18th century. The Constitution still provides a framework for governance, but it is not well-understood by the general public, indeed often willfully misconstrued by political partisans. This series provides an explanation and a clarification of the Constitution, with reference to supporting laws and judicial decisions.
* In Philadelphia, Pennsylvania, on the 4th of July 1776, the Second Continental Congress -- a convocation of representatives of the thirteen colonies of America -- signed the Declaration of Independence, throwing off British rule and establishing "the united States of America". Britain, of course, did not recognize the legitimacy of the United States, and was at that time engaged in combat operations to suppress the American rebellion. To conduct the war for independence, the thirteen colonies needed to establish something resembling a central government.
The notion of an alliance between the thirteen colonies was not completely new. In 1754, a number of colonies had sent representatives to a meeting in Albany, New York, primarily to coordinate military policies relevant to the French & Indian War, then in progress. Benjamin Franklin -- one of the most prominent citizens of Philadelphia, Pennsylvania -- and other members went farther, proposing a governing body to provide overall direction to the colonies. The colonial assemblies and the British Crown all rejected the proposal. Franklin commented: "The colonial assemblies and most of the people were narrowly provincial in outlook, mutually jealous, and suspicious of any central taxing authority." Franklin couldn't have been too surprised, but it was exasperating anyway.
Twenty years later, in the fall of 1774, with relations between the colonies and Britain headed for crisis, a Continental Congress assembled in Philadelphia, the primary goal being to assemble a petition of grievances, to be sent to British King George III. There was no great confidence of a breakthrough, and so provisions were made for a second Continental Congress, to be convened if the petition went nowhere.
Conditions continued to get worse, and so the Second Continental Congress met, again in Philadelphia, in the spring of 1775. By the summer of 1776, the drive towards independence was becoming unstoppable; Congress accordingly set up committees to draft a declaration of independence, a plan for a central government, and an associated plan for foreign relations.
The "Articles of Confederation", defining a central governing system, were not fully written up until the fall of 1777, to then be sent to the states -- as of 4 July 1776, they were no longer colonies, instead engaged in setting up their respective independent governments -- for ratification. Ratification was not completed until 1781, by which time the rebellion was moving toward a decision in favor of the Americans. In the meantime, the Continental Congress functioned as best it could as a national government, even though the states hadn't really granted Congress any legal authority.
Congress appointed ambassadors, signed treaties, raised military forces, appointed generals, obtained loans from Europe, and issued paper money -- called "Continentals". Congress had no real authority over the states, only being able to request money, supplies, and troops to support the war effort. Compliance with the requests was inconsistent. [TO BE CONTINUED]
* SCIENCE NOTES: As discussed here in the past, cockatoos are notably inventive and brainy birds. As further evidence of their intelligence, as reported by an article from THE NEW YORK TIMES ("Cockatoos Rival Children In Shape Recognition" by James Gorman and Christopher Whitworth, 21 November 2017), a new study has demonstrated their ability to match shapes.
Cornelia Habl, a master's student at the University of Vienna, and Alice M. I. Auersperg, a researcher at the University of Veterinary Medicine in Vienna, ran several experiments with captive cockatoos. They presented the birds with a "key box", a test apparatus along the lines of a baby toy. The birds had to put a square tile into a square hole and more elaborate, asymmetrical shapes into matching holes. If they succeeded, they were rewarded with a treat. The cockatoos were not only able to match the shapes to the holes, but did much better than monkeys or chimpanzees.
Key boxes are well-established in tests to determine milestones in child development. Babies can put a sphere into the right hole at age 1, but they can't place a cube until age 2, continuing to improve from there. Some primates can perform similar tasks, although they need considerable training to get up to speed on a key box. The cockatoos needed no training, and did better than primates.
It is puzzling that cockatoos are so smart at such tasks, since they haven't been observed using tools in the wild. Cockatoos, however, are adaptable, adjusting quickly to different diets and environments, indeed establishing themselves in urban areas. They had few problems figuring out how to game the tests, according to Habl: "They did figure out a couple of ways to trick the box -- but it was not counted as successful, because it was not what I wanted them to do."
Habl added that they sometimes outsmarted her, and that they are charming, but troublesome as pets: "They are escape artists ... they are very, very exhausting in a home environment."
* As discussed by an article from SCIENCEMAG.org ("City Trees Grow More Quickly Than Their Rural Cousins. Here's Why" by Lakshmi Supriya, 15 November 2017), a recent study shows that trees grow faster in the city. Researchers assessed the growth over 150 years of about 1,400 trees in 10 different cities -- including Paris; Houston, Texas; Santiago; and Sapporo, Japan -- by taking corings, then inspecting tree rings to estimate growth.
It turned out that climate change has been, on the whole, beneficial to trees, since warmer temperatures stimulate photosynthesis and stretch the growing season. Both rural and urban trees grew faster by up to 17% after 1960. However, urban trees grew even faster, by as much as 25%, compared with trees of the same age outside the cities. That may have been because cities tend to be warmer than the countryside, as per the "urban heat island" effect.
Puzzlingly, in subtropical cities -- such as Hanoi, Houston, and Brisbane -- urban trees grew much faster than rural trees before 1960, though the difference in their growth became negligible afterward. That might be because, in already-warm climates, warmer urban temperatures can no longer compensate for the downsides of city life, like air pollution, limited water supply, and constrained rooting space. That suggests a limit to the edge of urban trees over rural trees on a warming planet: eventually the downsides will slow, and possibly reverse, growth rates.
* As discussed by another article from SCIENCEMAG.org ("GM Banana Shows Promise Against Deadly Fungus Strain" by Erik Stokstad, 17 November 2017), bananas are one of the world's most popular fruits, a staple for more than 400 million people, as well as a huge export business. In the 1950s, a soil-dwelling fungus destroyed Latin American crops of the most popular variety of the time, the Gros Michel. It was replaced by a resistant banana, the Cavendish, which now makes up more than 40% of harvests worldwide. In the 1990s, the Cavendish came under assault as well, with the emergence in Southeast Asia of another fungus, "Fusarium wilt tropical race 4 (TR4)".
Fungicides don't hurt TR4, and disinfecting boots and farm tools can only slow down the spread of the fungus. It was detected in the Middle East in 2012, and appeared in Mozambique a year later. It has reached all banana-growing regions of China, and was discovered in Laos and Vietnam in 2017. It hasn't reached the Americas yet, but it's clearly only a matter of time before it does.
A field trial in Australia has shown that genetically modified banana trees can resist the fungus. Biotechnologist James Dale and colleagues at Queensland University of Technology in Brisbane, Australia, cloned a resistance gene named RGA2 from a variety of wild banana that shrugs off TR4, then inserted it into the Cavendish, creating six lines with varying numbers of RGA2 copies. They also created Cavendish lines with Ced9, a nematode gene known to provide resistance to many kinds of plant-killing fungi.
In 2012, the researchers planted their transgenic bananas, along with unmodified controls, at a farm near Darwin, Australia, where TR4 emerged 20 years ago. To make sure the plants would be exposed to TR4, the researchers buried infected material near each plant. In the three-year trial, the majority of control banana plants died or had yellow, wilting leaves and rotting trunks -- but several of the GM banana lines remained symptomless, and two lines -- one tweaked with RGA2, the other with Ced9 -- were completely invulnerable. The resistance genes did not reduce the yield of the plants.
Plant specialists are impressed by the exercise, though they have concerns that anti-GM sentiment will bog down fielding the GM banana plants. There are alternatives. Small-scale farmers often grow a range of non-Cavendish banana varieties for local consumption that can tolerate or resist TR4. Many larger farms in the Philippines, where TR4 arrived in 2000, have learned to prevent its spread and have begun planting disease-tolerant varieties of Cavendish.
One specialist, at an agricultural research corporation in Jaguariuna, Brazil, says it's important to consider the options, including varieties other than Cavendish: "There are many opportunities to diversify the industry with different bananas, including more nutritious and better-tasting ones."
* INTERNET IN THE SKY: There's been a lot of talk in recent years of putting an "internet in the sky" into orbit, with large constellations of satellites providing global data connectivity. An article from AVIATIONWEEK.com ("Eight Satellite Constellations Promising Internet Service From Space" by Thierry Dubois, 19 December 2017) gave a survey of current efforts in the field.
The pathfinder in the exercise was the Iridium constellation, what amounted to a global cellphone constellation, with launches beginning in 1997. Iridium went bankrupt, to be bought out by a new Iridium firm -- which is now replacing the original Iridium constellation with an "Iridium Next" constellation, which will go into full operation this year.
When complete, Iridium Next will consist of 66 operational satellites, plus 9 on-orbit spares, placed in a low Earth orbit (LEO) at an altitude of 780 kilometers (485 miles). The satellites are being built by Thales Alenia Space, in partnership with Orbital ATK, each having a launch mass of 860 kilograms (1,900 pounds), with the communications payload operating in the L and Ka bands, the satellites having crosslinks between them.
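For a sense of why dozens of satellites plus crosslinks are needed, the orbital period at that altitude can be checked with Kepler's third law: a satellite at 780 kilometers circles the Earth in roughly 100 minutes, so any one satellite passes over a given spot only briefly. A quick sketch, using standard constants; only the altitude comes from the article:

```python
import math

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
MU_EARTH = 3.986004418e14     # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371_000           # m, mean Earth radius

def orbital_period_minutes(altitude_km):
    a = R_EARTH + altitude_km * 1_000   # orbital radius in meters
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60

print(round(orbital_period_minutes(780), 1))   # roughly 100 minutes
```

The same formula explains the trade-off running through all these constellations: higher orbits like O3b's mean fewer satellites for coverage, but longer signal delays.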
While the original Iridium system could only handle voice-level data rates, Iridium Next will provide data rates of up to 1.4 MBPS. The company is targeting safety services for aircraft, but not wi-fi data connectivity. Each satellite carries a secondary payload that provides satellite-based tracking of aircraft -- under the Aireon brand -- as well as ships.
The LeoSat company plans to orbit a constellation of small, high-throughput Ka-band spacecraft to deliver internet services globally. The LeoSat constellation will have from 78 to 108 satellites, placed in LEO at an altitude of 1,400 kilometers (895 miles). The satellites will be built by Thales Alenia Space, each with a launch mass of 1,250 kilograms (2,755 pounds), interconnected by high-bandwidth laser links. Initial launch is slated for 2019, with the constellation to go into service in 2022.
The OneWeb Satellites company, a joint venture with Airbus, plans a similar constellation, differing in use of many more and smaller satellites -- a planned total of 900, with 648 operational, placed in LEO at an altitude of 1,200 kilometers (745 miles). Each satellite will have a launch mass of 150 kilograms (330 pounds), and operate on the Ka & Ku bands. It seems the target market is local ISP firms and organizations, not end users.
Elon Musk's SpaceX is pushing "Starlink", a constellation of 4,425 satellites, not counting on-orbit spares, operating in LEO from 1,110 to 1,325 kilometers (685 to 823 miles). The satellites will operate in the Ka and Ku bands, and will feature optical interlinks. Following launch of demonstrators, the first operational satellites are to be launched in 2019, leading to operational capability in 2024. Of course, people know it's wise to take Musk's plans with a grain of salt.
The "O3b" system was originally pushed as for the "Other Three Billion", being intended to provide internet connectivity for undeveloped countries. It was bought out by satcom giant SES in 2016, and is now focused more on business applications. O3b satellites have a launch mass of 700 kilograms (1,543 pounds); they've been flown since 2014, operating in medium Earth orbit (MEO) at an altitude of 8,000 kilometers (4,970 miles), with the base constellation of 20 satellites to be completed by 2019. The O3b satellites use spot beams to provide 1 GBPS connectivity to, say, a cruise ship. Service coverage ranges between 45 degrees north and south latitude. The current satellites will be followed by seven next-generation "O3b mPower" satellites, with 4,000 spot beams each.
Telesat Canada is planning a constellation known just as "Telesat LEO", with at least 117 satellites in polar orbits at an altitude of 1,000 kilometers (620 miles), and inclined orbits at 1,248 kilometers (775 miles). The satellites will operate in the Ka band and have optical interlinks. According to the company, Telesat LEO will target "busy airports; military operations on land, sea and air; major shipping ports; large, remote communities; and other areas of concentrated demand." Introduction to service is expected in 2021.
Samsung of Korea is promoting a network of 4,600 micro-satellites, to be placed in LEO at 1,500 kilometers (930 miles). The satellites would be interconnected and operate in the V band. Operational capability would begin in 2028. Boeing is considering a constellation of 2,956 satellites, placed in LEO at an altitude of 1,200 kilometers (745 miles), to provide broadband service using the V band. There's no in-service date yet, Boeing only saying that initial deployments will begin six years after the company gets a license.
* NOT SO FAST: As discussed by an article from WIRED.com ("After Peak Hype, Self-Driving Cars Enter the Trough of Disillusionment" by Aarian Marshall, 29 December 2017), back in 2014, Sweden's Volvo car firm began a program named "Drive Me" to develop autonomous vehicles. By 2017, so the vision had it, the company would have 100 robot SUVs in service on the roads for user evaluation. Erik Coelingh, a technical lead at Volvo, proclaimed: "The technology, which will be called Autopilot, enables the driver to hand over the driving to the vehicle, which takes care of all driving functions."
At the end of 2017, Volvo announced that the pilot program had been pushed back to 2021 ... and at the outset, the vehicles would not offer much more than the driving aids available in, say, the latest Tesla. Marcus Rothoff, Volvo's autonomous driving program director, explained: "On the journey, some of the questions that we thought were really difficult to answer have been answered much faster than we expected. And in some areas, we are finding that there were more issues to dig into and solve than we expected."
Over the past few years, optimism over robocars swelled; now it is being scaled back. In 2012, Google's Sergey Brin said autonomous vehicles would be widely available in five years -- but it didn't happen. Tesla's Enhanced Autopilot system has repeatedly slipped its schedule. New Ford CEO Jim Hackett says the company will indeed have "products" in 2021, but warns that expectations have been "overextended".
Research firm Gartner has defined a "hype cycle" for new technologies, which starts with an "innovation trigger" that gets things rolling, then makes headlines in a "peak of inflated expectations". That inevitably leads to a "trough of disillusionment", when all involved find out it's not going to be that easy.
In a blog post, Bryan Salesky -- boss of the Ford-backed autonomous vehicle outfit Argo AI -- described the obstacles his team had encountered. The first issue was sensors. Self-driving cars need at least three types of sensors:
The sensors cost money, lidars being particularly pricey right now; the sensor suite for a robocar might well run to tens of thousands of dollars at present. Even taking sensors for granted, it turns out to be difficult to perform "sensor fusion", linking all the sensor inputs together to give the robocar a coherent view of the world around it.
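The core of the sensor-fusion problem can be hinted at with a minimal example: combining two noisy measurements of the same quantity, each weighted by its confidence. Real robocar stacks fuse many sensors over time with far more machinery (Kalman filters and the like); all the numbers here are invented for illustration:

```python
# Inverse-variance weighted fusion of two estimates of the same quantity
# (say, distance to an obstacle from lidar and from radar). The more
# confident sensor (smaller variance) gets the larger weight, and the
# fused estimate is more confident than either input alone.
def fuse(est_a, var_a, est_b, var_b):
    """Return (fused estimate, fused variance)."""
    w_a, w_b = 1 / var_a, 1 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1 / (w_a + w_b)
    return fused, fused_var

# lidar: 25.2 m with tight 0.1 m^2 variance; radar: 24.0 m, looser 0.9 m^2
dist, var = fuse(25.2, 0.1, 24.0, 0.9)
print(round(dist, 2), round(var, 3))   # fused estimate hugs the lidar
```

Doing this consistently across cameras, radar, and lidar -- each with different failure modes, update rates, and coordinate frames -- is where the real difficulty lies.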
And then, there's the problem that driving is a very elaborate task, with drivers factoring road obstacles, other cars, bicyclists, pedestrians, and occasionally animals in a potentially fast-moving environment. They have to, for example, recognize when an ambulance is coming, and give it space. Salesky writes: "Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art, or committed to the safe deployment of the technology."
Technology developers failed to appreciate just how big a job the nitpicky details and corner cases are. According to Karl Iagnemma, boss of Nutonomy, a Boston-based self-driving car company now part of automotive supplier Delphi, they underestimated the tail end of the job: "Technology developers are coming to appreciate that the last 1% is harder than the first 99%. Compared to the last 1%, the first 99% is a walk in the park."
The technology issues interlock with business-model issues. Estimates suggest that autonomous vehicles could contribute trillions of dollars to the global economy by mid-century -- which makes it puzzling that few seem to have figured out how to make money with them. Those working in the field believe that robocars will initially be operated as urban taxi fleets, operating in a specific and generally well-understood environment, over a known set of routes. However, there hasn't been much appreciation of the fine details of how such a service would work. One might fetch a robotaxi with a smartphone, but then there are other questions, such as how passengers would report an emergency or a problem with the vehicle. On top of that, how much will such a service cost? Robocars won't come cheap. What happens in a crash? Who's liable, who insures, who pays?
There's been a push for alliances to tackle the issues, with companies such as Waymo, GM, Lyft, Uber, and Intel forming partnerships with potential rivals to develop robocars, and the infrastructure to support them. However, some developers are taking an independent approach, either providing parts of the solution to the big car companies, or coming up with niche products that don't need all the smarts of a fully autonomous robocar. Robot shuttle buses were discussed here previously; similarly, Optimus Ride, an MIT spinoff, is pushing for autonomous vehicles to help riders with disabilities.
There isn't going to be a robocar in every garage in the next decade. There will be robotaxis in specific neighborhoods of big cities like San Francisco, New York, or Phoenix, with these vehicles operating on specific, well-mapped routes. Instead of a robotaxi finding a rider, a rider will have to go to a predefined stop for a pickup. These vehicles will be smart, but they will likely have teleoperation capabilities, so a human driver can take over when necessary. That won't be the end of progress, of course; working from there, robocars will keep on getting smarter and cheaper -- passing from the "trough of disillusionment" to what Gartner called the "plateau of productivity".
However, nobody thinks automotive automation is a dead end. Early in the move towards automated transport, the general belief was that fully autonomous vehicles were several decades away; then the players got over-enthusiastic. The reset of expectations doesn't change the game board much. Cars are getting smarter and more capable; there's a lot that can be done short of a car that can drive itself. Drivers will have a car that allows them to, say, watch a video while cruising down the interstate in benign driving conditions, and that makes sure a driver who nods off gets a wake-up call. We basically have that technology now, and we can expect it to continue to be refined -- and few are going to be very upset that they don't have a car that can fully drive itself. We'll get there eventually.
* UNDERSTANDING AI (10): As discussed by an article from WIRED.com ("Apple's Machine Learning Engine Could Surface Your iPhone's Secrets" by Lily Hay Newman, 26 October 2017), Apple's new iOS 11 for the iPhone took a step forward by incorporating an AI subsystem named "Core ML". Core ML gives developers a set of machine-learning tools that allow an app to tailor itself to a specific user's preferences. However, Core ML's access to personal data, and its ability to dig into that data, has some security researchers worried that it might hand apps that shouldn't have it more information than a user would like.
Core ML provides tools like neural nets and decision trees to support tasks like image and facial recognition, natural language processing, and object detection. Yes, Apple is obsessively concerned with user privacy, and so, like other iOS apps, those using Core ML ask the user for permission to access data streams, like those associated with the microphone or calendar apps. The problem is that the flexibility of Core ML hands apps the ability to ferret out inferences about a user that could be misused.
Suman Jana -- a security and privacy researcher at Columbia University, with a focus on machine learning -- comments:
The key issue with using Core ML in an app from a privacy perspective is that it makes the [Apple] App Store screening process even harder than for regular, non-ML apps. Most of the machine learning models ... are hard to test for different corner cases. For example, it's hard to tell during App Store screening whether a Core ML model can accidentally or willingly leak or steal sensitive data.
The Core ML platform includes supervised learning algorithms. In service, they can be trained with sets of examples to build up a framework -- and then, say, search through a user's Photo Stream to pick out pictures of dogs, surfboards, or a document image. An untrustworthy app might access that framework to fulfill a user's request -- and then sneakily look for products the user likes or activities the user enjoys, to then use that information for targeted advertising. That's flatly against Apple App Store rules, but right now it's not clear how to screen against such sneaky apps.
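The supervised-learning idea behind such classifiers can be sketched in a few lines. The following is an illustrative toy in Python -- not Core ML's actual API, which is exposed through Swift -- showing a nearest-neighbor classifier "trained" on labeled feature vectors; the features and labels are invented for illustration:

```python
# Toy supervised learning: a 1-nearest-neighbor classifier. "Training" is
# just memorizing labeled examples; prediction tags a new feature vector
# with the label of its closest training example.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(training_set, features):
    """training_set: list of (feature_vector, label) pairs."""
    _, label = min(
        ((distance(vec, features), lbl) for vec, lbl in training_set),
        key=lambda pair: pair[0],
    )
    return label

# Invented two-feature examples: (fur_score, board_score) -> label.
training = [
    ((0.9, 0.1), "dog"),
    ((0.8, 0.2), "dog"),
    ((0.1, 0.9), "surfboard"),
    ((0.2, 0.8), "surfboard"),
]
print(predict(training, (0.85, 0.15)))  # closest examples are the dogs
```

The privacy worry follows directly: once an app holds a trained model plus access to the Photo Stream, nothing in the mechanism itself distinguishes a benign query from a snooping one.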
Of course, the Black Hats could sort through and leverage a user's photos before, but machine learning tools like Core ML -- or Google's similar TensorFlow Mobile -- could make it a lot faster and easier. The Black Hats are notoriously quick to exploit system vulnerabilities, and so Core ML might prove a treasure for marketers, spammers, and phishers. Machine learning tools present a painful screening challenge for both the iOS App Store and Google Play.
Apple is very security-conscious, and so Core ML does have privacy and security features built in. All the processing is done locally, without transfer of data across the internet, meaning sensitive data doesn't leave the phone. For example, a messaging tool might be able to incorporate emojis into text as per a user's custom -- the text could be sent out, but the machine learning that selects the emojis stays in the phone.
Since iOS apps are only now starting to make use of Core ML, it's not clear what its impact will be. One of the early adopter apps is "Nude", which uses Core ML to find compromising images on a phone, then move them to a secure vault. Of course, it is easy to see that a malicious app might zero in on compromising images for less benign reasons.
Security researchers emphasize that Core ML doesn't present a fundamentally new threat. According to Will Strafach, an iOS security researcher and the president of Sudo Security Group: "I suppose CoreML could be abused, but as it stands apps can already get full photo access. So if they wanted to grab and upload your full photo library, that is already possible if permission is granted." Nonetheless, Core ML can give leverage to the Black Hats -- and the Black Hats are notoriously clever and tricky. [TO BE CONTINUED]
* ONCE & FUTURE EARTH (24): The argument over AGW began to drop under the radar after 2010, all objections that had been raised by the critics having been addressed. There was no case in the science against it, and events were increasingly bearing out the science -- the rapid melting of the northern icecap and most mountain glaciers, along with rising sea levels, could no longer be honestly disputed. Unseasonably warm winter weather, if not a global phenomenon, became common enough to persuade most of the public that AGW was for real. The dispute lingers, but it is dissipating, the critics having been reduced to nit-picking, recycling thoroughly discredited claims, and generally making useless nuisances of themselves.
An international agreement to restrain climate change was hammered out in Paris in late 2015; the agreement was not seen as enough to address the problem, but it was seen as a significant step towards doing so. In 2017, the Trump Administration announced the United States would drop out of the agreement, which left the USA as the only nation not participating; the expectation is that the next US administration will rejoin. Indeed, the US can't formally exit the agreement until November 2020, the time of the next presidential election, and so there may not even be a formal lapse.
Incidentally, AGW put the brakes on the move to replace CFCs with HFCs in refrigeration systems -- since it turned out that HFCs are extremely potent greenhouse gases. Chemists proposed the use of "hydrofluoro-olefins (HFOs)" as an alternative, HFOs being implicated in neither ozone depletion nor climate change. There was considerable controversy over the suggestion, however, skeptics saying HFOs were an expensive solution that would primarily benefit the chemical companies, and proposing "natural" refrigerants instead: ammonia, propane or other hydrocarbons, high-pressure CO2, even dried air.
Use of ammonia would seem to be going full circle, but ammonia is still used as a refrigerant in industrial applications, where the refrigeration unit can be isolated and not present a health hazard. Its toxicity means that it is unlikely to be returned to general use. In addition, while HFOs are flammable, they are not strongly so, but hydrocarbons like propane tend towards the outright explosive in gaseous form; and though dried air is hard to beat as far as being environmentally benign in itself, it's not highly efficient, meaning more expensive and energy-hungry refrigeration systems. There isn't a simple answer to the problem, and in fact there may be several answers, to be exploited in parallel. Discussions continue.
* Some researchers have suggested that if AGW does become too immediate a threat, we might need to perform "geo-engineering" to cool the planet. The most exotic scheme proposed so far is the idea of placing a constellation of "sunshade" spacecraft at the inner Earth-Sun Lagrange point -- a location in space where the gravitational pulls of the Earth and Sun, along with orbital motion, balance out, so that spacecraft can be kept on station with relatively little effort. Each spacecraft would be about a meter across, using solar-powered thrusters for positioning. The spacecraft would be shot into space using a magnetic launcher; the total mass of the constellation would be about 20 million tonnes.
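For a sense of where such a sunshade would sit, the distance of the inner Lagrange point from Earth can be estimated with the standard Hill-radius approximation, using textbook values for the masses and the Earth-Sun distance:

```python
# Estimate the Earth-Sun L1 distance with the approximation
# r ~ R * (m_earth / (3 * m_sun))**(1/3), where R is the Earth-Sun distance.
M_SUN = 1.989e30      # kg
M_EARTH = 5.972e24    # kg
R_AU = 1.496e8        # mean Earth-Sun distance, km

r_l1 = R_AU * (M_EARTH / (3 * M_SUN)) ** (1 / 3)
print(f"L1 is roughly {r_l1:.2e} km sunward of Earth")  # about 1.5 million km
```

That's about four times the distance to the Moon -- far enough that the sunshade has to be enormous to cast a meaningful shadow, which is why the scheme runs to millions of tonnes.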
A second approach takes a hint from nature. As mentioned above, volcanic eruptions can throw particulates into the upper atmosphere that cause a cooling effect; a massive program could be started to inject harmless aerosols into the upper atmosphere to achieve the same effect. Others have suggested the scheme might be used locally, for example to help preserve the polar icecaps.
A third idea involves spraying droplets of seawater into the air to generate low-lying, highly reflective oceanic clouds. This scheme could be implemented by a fleet of unmanned vessels that could generate the sprays using wind power, with each vessel handling 10 kilograms of seawater a second. About 100 vessels would be needed to cool off the Earth, though only 50 would be needed once the climate was stabilized. The fleet could be dispatched to the North Atlantic in the summer to protect the Greenland ice sheets, and transfer to Antarctica six months later. Cooling clouds could be used to lower sea temperatures in tropical areas, and help prevent hurricanes from forming.
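Taking the quoted figures at face value, the fleet's throughput works out easily -- a back-of-the-envelope check, nothing more:

```python
# Back-of-the-envelope throughput for the cloud-spraying fleet,
# using the figures quoted above.
VESSELS = 100
KG_PER_SEC = 10                      # seawater sprayed per vessel
SECONDS_PER_YEAR = 365 * 24 * 3600   # about 3.15e7

kg_per_year = VESSELS * KG_PER_SEC * SECONDS_PER_YEAR
tonnes_per_year = kg_per_year / 1000
print(f"{tonnes_per_year:.2e} tonnes of seawater per year")  # about 31.5 million tonnes
```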
Other ideas have included seeding the ocean with nutrients, for example iron, to encourage the growth of photosynthetic plankton that would soak up carbon dioxide; planting fast-growing trees; setting up networks of CO2 scrubber stations; or covering deserts with reflective sheets.
There has been great skepticism over geo-engineering schemes, on the basis of their cost, practicality, and effectiveness. Some of the skeptics have opposed the idea on principle, that it encourages people to believe that there is a quick technological fix to the problem of climate change, discouraging efforts to take real action. However, advocates can point out that if matters go from bad to worse too rapidly, the technological fix might be the only thing available to hold off disaster, and so we should at least know what options are available.
* So what does the future hold for the Earth? The temperature and the seas are going to rise over the near term, but there's no reason to despair that humans will not be able to address the problems. Humanity has reached a threshold where it has become obvious the Earth's environment can't be taken for granted, and needs to be managed. Exactly how that works out remains to be seen. [END OF SERIES]
* ANOTHER MONTH: In the category of "unnecessary excitement", on 13 January, a Hawaii civil defense official got a bit mixed up in the face of a drill, and issued a live alert to the public:
EMERGENCY ALERT: BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.
In the face of current tensions over North Korea, this announcement had some plausibility, and it seems to have raised a degree of public excitement. There was a general shakeup in the alert system in the wake of the incident, with the employee being sacked.
* I discussed reorganizing my email system in December, which turned out to be a more troublesome job than I expected -- a project complicated by the fact that, for the second time in 2017, somebody ripped off my charge card number. I had to go through the rigamarole of getting a new card, updating accounts that use my charge card, and changing passwords.
However, on having to update all that, I went on to rethink some of my other computer-based activities. One was the use of my smartphone and my old Samsung tablet as nodes for the Berkeley Open Infrastructure for Network Computing (BOINC) project -- which is a framework for scientific distributed internet computing. It's really just a set of independent projects, connected through the BOINC app on Android or other platforms.
Trying to figure out what was available and would run on Android -- not all projects work on all platforms -- was a pain, until I discovered a page at: http://boinc.berkeley.edu/projects.php
-- which lists all the projects and the platforms they're supported on. Some of the projects require "VirtualBox", a virtual machine system used to run Linux-based distributed apps. VirtualBox doesn't run on Android; it's a bit puzzling that BOINC apps haven't been written in, say, Java to be platform-independent.
In any case, after earlier passes on getting BOINC to fly right, I finally got it down to a system. The key is the Android BOINC app: using the list page, all I had to do was tell the app what project I wanted, and use it to sign up with the project. That done, I logged into the project, and I was flying. It's really simple to use; but between the balkanization of BOINC projects, and the fact that the projects are driven by academics who aren't always familiar with the concept of "user-friendliness", it was a lot more work than it needed to be.
* Following that, my XBOX 360 / KINECT video game console had been gathering dust. I liked the interactive adventure games -- they gave me a nice workout and were fun, like having a theme-park ride in my living room -- but I couldn't find time to play it. I finally decided that on Tuesdays, instead of going for a walk in the morning as is my usual custom, I would play KINECT action games instead.
I couldn't get the game box to come up, however; after some extended poking around, I found that I had inserted a rear connector into the wrong socket. There was a recess that had two sockets, it not being clear from feel that there was more than one. It had worked the last time I played with it; I might have just knocked the connector out while cleaning and then absent-mindedly shoved it back in wrong. In any case, I got it running, and found the baseline KINECT games to be energizing.
I like RIVER RUSH, which involves riding a raft down a stream, collecting points and dodging obstacles; and I also like REFLEX RIDGE, which involves riding down a track, again collecting points and dodging obstacles. REFLEX RIDGE is more exhausting, because it sometimes demands jumping up with both feet to get over a barrier, then promptly squatting down to get under one. I decided to make that motion part of my daily workout. It's not easy for an old geezer -- yes, in a few weeks I qualify for a senior discount everywhere.
I'm thinking of getting DISNEYLAND ADVENTURES next; it's a virtual tour of Disneyland. I don't believe it recreates the rides as such -- each ride equates to a different "mini-game", for example fighting pirates in PIRATES OF THE CARIBBEAN.
* Getting the XBOX working led, by a pinball process, to another revision of my personal customs. When I was trying to get the XBOX to work, I also found that only one of the two sockets in the relevant wall AC outlet panel was working. I like to use these 2-plug-to-6-plug wall adapters, and I ended up shuffling around the ones I had as a workaround. It turned out I had forgotten that the lower socket on that outlet was activated by a wall switch in the hall -- a puzzling arrangement; apparently the idea was that a lamp would be plugged in there, to be activated by flipping the switch.
Anyway, I decided after shuffling the six-plug adapters around that I needed to get another one. I ordered one from Amazon.com, this one featuring twin USB outlets as well. When I got it, I used it with the sockets for my TV system in the living room. That weekend, I found out Amazon Prime was offering me DOCTOR WHO SEASON 10 on download, so I tried to turn on the TV -- to find it was dead.
The power supply seemed to be kaput, the pilot light not coming on even when I tried it on different sockets. Not so, fortunately. A few days later I shuffled the six-socket adapters around and retried the TV power supply, to find it worked. I guessed I'd tripped a protection circuit, and it took a while for the power supply to discharge.
In any case, since I couldn't watch the TV that evening, I watched DOCTOR WHO on my smartphone while lying in bed. I'd tried to watch live-action videos on the smartphone before and didn't think it satisfactory -- while the smartphone works well enough for animes, live-action video tends to be too visually busy for it. However, watching DOCTOR WHO on the smartphone went well enough.
I then got to thinking: if I had a ten-inch tablet at arm's length, that would have a wider field of view than the TV in my living room. So I promptly ordered a 10-inch ASUS tablet from Amazon.com. It cost me about $205 USD with a nice black bumper and a 32 GB flash chip. It worked as expected. I've been gradually loading it up with apps. As long as I was buying things, I also got a new display for my desktop PC, this one being 1920 pixels wide instead of 1280 pixels wide. It's a bit overwhelming to have that much visual space.
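The field-of-view hunch checks out with a little trigonometry. The screen widths and viewing distances below are guesses for the sake of illustration, not measurements of my own setup:

```python
# Angular width of a screen: 2 * atan(width / (2 * distance)).
# All sizes in inches; the specific numbers are hypothetical.
import math

def angular_width_deg(screen_width, viewing_distance):
    return math.degrees(2 * math.atan(screen_width / (2 * viewing_distance)))

tablet = angular_width_deg(8.5, 18)   # ~10-inch tablet held at arm's length
tv = angular_width_deg(35, 96)        # ~40-inch TV seen from 8 feet
print(f"tablet {tablet:.1f} deg, TV {tv:.1f} deg")  # the tablet wins
```

The tablet's screen is small, but it's so much closer that it fills more of the visual field.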
* Another little improvement came thanks to a syntax-checking website at: http://www.online-spellcheck.com/spell-check-url
-- I can give it a URL, and it then performs a spell / syntax check on the corresponding web page. Doing a spell check wasn't a big deal; what I had long needed was some way of checking syntax, to clean out persistent syntax errors that I can't see when I proofread my own documents.
I thought a syntax checker would be very hard to write, but I was overthinking matters. This syntax checker is dimwitted, but it flags anything that doesn't look right to it -- which means mostly false alarms, but it nonetheless catches the bugs. It only flags a small portion of the text, meaning it's not so hard to find the real errors. Sigh, now I'm going through all my ebooks and making sure most of the embarrassing errors are gone. That's something I've needed to have for years.
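A checker that dimwitted really isn't hard to write. As a sketch of the idea -- my own toy, not what online-spellcheck.com actually does -- the following just flags doubled words and unbalanced parentheses, tolerating false alarms:

```python
# A deliberately dimwitted prose checker: flag doubled words and
# unbalanced parentheses. False alarms are fine; the point is to
# narrow down where a human should look.
import re

def check(text):
    flags = []
    # Doubled words, e.g. "the the" -- a classic proofreading blind spot.
    for match in re.finditer(r"\b(\w+)\s+\1\b", text, re.IGNORECASE):
        flags.append(f"doubled word: {match.group(0)!r}")
    # Unbalanced parentheses.
    if text.count("(") != text.count(")"):
        flags.append("unbalanced parentheses")
    return flags

print(check("It was was the biggest (and longest strike."))
```

Crude rules like these miss a lot, but they never get bored -- which is exactly what a human proofreader can't claim.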
In yet another trivial refinement, I figured out how to use emojis on Windows 10 when posting to online comment sections and such. Right-clicking on the Windows Taskbar gives a menu; enabling "Show touch keyboard button" results in a little keyboard icon on the taskbar. Clicking on that gives a pop-up keyboard, with a set of menus of emojis -- one menu being for "most recently used" emojis, so I don't have to hunt through the hundreds of emojis to find a favorite. Commenting online is a bad habit, but I can't pass up twitting trolls. The emojis, I suspect, make the twitting more annoying. Good.
* However, the biggest improvement in my procedures was getting into YouTube music channels. I had been downloading YouTube music videos in a haphazard fashion, and had learned how to tweak them with a cheap audio-editor app. I finally decided to start subscribing to music channels; every morning, I run down the list for new video postings, checking them out and downloading the ones that seem interesting.
I originally started out with what I call "techno-pop" -- electronic-oriented pop tunes, run by channels like FLUIDIFIED, NOCOPYRIGHTSOUNDS, and XKITO -- the last having an anime-oriented bent. Some of it is hard to swallow after a bit; in some cases I can get along with the simple-minded drum-machine tracks, in others they wear on me very quickly. I had to branch out to find more channels.
I need to get more into the jazz channels. I like a cappella a lot, but I haven't found a dedicated channel for it yet, having to make do with individual ensembles, like "Perpetuum Jazzile" -- a choir-sized Slovenian group, with a remarkable ability to do US pop-soul tunes with perfect accents.
And so on. I was worried at the outset, when I started tracking channels, that I would gradually trim down what I was interested in listening to, and eventually drift away from it. That hasn't happened; instead, I've ended up expanding my interests. This takes me back to my youth, when I really liked exploring music. Now that I can play a keyboard pretty fair, that lends another dimension to the exercise. I may drift away eventually, but not in the immediate future; I keep adding channels, much more often than I drop them. For now, I'm downloading videos faster than I can tweak them into audio files.
* As for the Real Fake News in January ... it's typically quiet in the space between Christmas and New Year's Day, giving a hint that 2018 might be politically quieter than 2017. Well, that was unlikely, and the new year got off to a roar with the publication of Michael Wolff's FIRE & FURY, describing much of the Trump Administration's first year in office.
It did not give a positive read on matters, giving the impression that White House staffers regard President Trump as juvenile, intellectually negligible, and erratic. Worse, the impression was that his mental powers, not great to begin with, have been fading. Underlying the problems in the White House, according to Wolff, is the difficulty that Trump didn't really expect to win the election.
Since Trump is not a person who thinks in terms of detailed plans, it is hard to say that he planned to lose; it is, however, clear that he had a much better idea of what he was going to do if he lost than if he won. The election made him a media superstar; he planned, following the loss, to create his own TV network, which would have been to the Right of Fox News. From there, he would have sniped endlessly at President Hillary.
According to Wolff's book, Trump was as white as a ghost when he found out he had won, while his wife Melania wept in despair. Trump had no preparation for the job, no specific agenda for what he would do as president; he doesn't like the demands of the job, and certainly doesn't like the Justice Department probing into his business dealings. Of course not; although it seems unlikely there was any real plan of collusion between the Trump campaign and Russia, everyone knows Trump's business dealings are unlikely to tolerate close examination.
Comments by Trump's exiled advisor Steve Bannon dominated FIRE & FURY. Bannon was particularly incredulous that Trump publicly said nobody should investigate his finances. That was like putting up a big neon sign with blinking arrows and dancing bears, Bannon saying: "Don't look here! Let's tell a prosecutor what not to look at!"
Few honestly thought FIRE & FURY was objective; the problem for Trump is that it really didn't do much more than confirm the obvious, that Trump is the "un-president", a complete mismatch for the job. The United States has been effectively decapitated for the next three years.
* As an illustration of this reality, the Senate began to consider legislation to finally settle the issue of the "Dreamers" -- illegal immigrants brought to the USA as children and raised here, meaning the USA is the only home they really know. The general consensus in the Senate was that they should get a path to citizenship, with South Carolina Senator Lindsey Graham coming up with a bipartisan "gang of 6" to propose legislation. Trump was agreeable ... but only if Congress funded his absurd border wall.
However, Congress had to authorize a stopgap spending measure in mid-month; Democrats, joined by some Republicans, balked, with the government shut down from 20 to 22 January. After being given reassurances of talks by the GOP leadership, the Democrats relented. CNN's Chris Cillizza called the quick reversal a defeat for the Democrats -- but was it? The stopgap funding measure that was authorized will only last into early February, and so the game board hasn't changed.
The bottom line, it seems, is that Trump is trying to bully Congress, and Congress had to send a message to the White House: "We can fight dirty, too." Since a government shutdown makes everyone look bad, the Democrats didn't want to do more than send a message -- while realizing, of course, that a shutdown hurts Trump worse, since he has tweeted in the past that he thinks one would be a good idea.
One wonders if Cillizza was talking for the benefit of Trump, who pays close attention to CNN, while he keeps on blasting it. Cillizza strengthened the hand of Graham, who has declared he wants to form a "gang of 60" that will be able to get legislation through the Senate unchallenged. Graham did not seem particularly upset over the shutdown, saying that it had cleared the air. In any case, this particular game is not over yet, with the next round to take place in a week or two.
Incidentally, other CNN commentators targeted Trump's advisor Stephen Miller as the mastermind of the Trump Administration's cynical policy on immigration, saying that Trump was no more than an "empty suit". Of course, there's no reason to doubt the sincerity of such commentaries -- but everyone who's paying attention knows that a good way to drive a wedge between Trump and one of his advisors is to publicly let out that the advisor is really in charge. More than meets the eye?
* Near the end of the month, Trump went to the World Economic Forum in Davos, Switzerland. Some comedians thanked the Swiss for getting him out of the country, even if just for a short time. It seems the general attitude towards Trump in Davos was one of indifference; everyone knew that he had no constructive agenda, everyone was familiar with his noisy song-&-dance. When he started talking about "fake news" in a speech, he was greeted with boos.
In the meantime, Trump's "America First" economic policy is staggering drunkenly down the road. Boeing got shot down by the US International Trade Commission in its attempts to have punitive tariffs slapped on Canadian Bombardier C-Series jetliners. Only a short time later, however, the ITC approved tariffs on Chinese solar panels and South Korean appliances. The details of these tariffs are obscure, and it is unclear how this is going to play out. More significantly, talks are reaching the endgame between the US, Canada, and Mexico over the North American Free Trade Agreement (NAFTA). American farmers and automakers are screaming for the Trump Administration not to abrogate NAFTA; economists think it would be folly to do so -- but we'll see in a few weeks.
Along another track, government strategic analyses have been released identifying China and Russia as America's primary adversaries. A Reuters article from Moscow showed that the Russians, once happy to see Donald Trump defeat Hillary Clinton, are now regretting it. Relations between the US and Russia are worse than ever; the Russians feel that Clinton would at least have been more rational and consistent to deal with.
That's just too bad for the Russians. There are enough unintended consequences when people try to do the right thing; there are far more when they are simply trying to make trouble for its own sake. Call it justice. As mad as things are right now, there's still plenty of humor.