Thursday, April 13, 2017

The Dark Secret at the Heart of Artificial Intelligence



technologyreview |   No one really knows how the most advanced algorithms do what they do. That could be a problem.

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
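
To make that inversion concrete, here is a toy Python sketch: a tiny perceptron that derives the logical AND rule purely from example inputs and desired outputs. The data and function are hypothetical illustrations, not any system described in this article.

    # A minimal sketch of "the machine programs itself": no rule is hand-coded;
    # weights are adjusted until the examples map to the desired outputs.
    def train_perceptron(examples, labels, epochs=20, lr=0.1):
        n = len(examples[0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, y in zip(examples, labels):
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = y - pred                             # desired minus actual
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err                              # nudge, don't hand-code
        return w, b

    # Toy data: the program "discovers" the AND rule from examples alone.
    X, Y = [(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1]
    w, b = train_perceptron(X, Y)
    print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X])
    # -> [0, 0, 0, 1]

Deep learning scales this same loop up to millions of weights, which is exactly why the resulting models resist the kind of inspection Dudley wants.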

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

Is Artificial Intelligence a Threat to Christianity?


theatlantic |  While most theologians aren’t paying it much attention, some technologists are convinced that artificial intelligence is on an inevitable path toward autonomy. How far away this may be depends on whom you ask, but the trajectory raises some fundamental questions for Christianity—as well as religion broadly conceived, though for this article I’m going to stick to the faith tradition I know best. In fact, AI may be the greatest threat to Christian theology since Charles Darwin’s On the Origin of Species.

For decades, artificial intelligence has been advancing at breakneck speed. Today, computers can fly planes, interpret X-rays, and sift through forensic evidence; algorithms can paint masterpiece artworks and compose symphonies in the style of Bach. Google is developing “artificial moral reasoning” so that its driverless cars can make decisions about potential accidents.

“AI is already here, it’s real, it’s quickening,” says Kevin Kelly, a co-founder of Wired magazine and the author of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future. “I think the formula for the next 10,000 start-ups is to take something that already exists and add AI to it.”

Will Artificial Intelligence Redefine Human Intelligence?


theatlantic |  As machines advance and as programs learn to do things that were once only accomplished by people, what will it mean to be human?

Over time, artificial intelligence will likely prove that carving out any realm of behavior as unique to humans—like language, a classic example—is ultimately wrong. If Tinsel and Beau were still around today, they might be powered by a digital assistant, after all. In fact, it’d be a little weird if they weren’t, wouldn’t it? Consider the fact that Disney is exploring the use of interactive humanoid robots at its theme parks, according to a patent filing last week.

Technological history proves that what seems novel today can quickly become the norm, until one day you look back surprised at the memory of a job done by a human rather than a machine. By teaching machines what we know, we are training them to be like us. This is good for humanity in so many ways. But we may still occasionally long for the days before machines could imagine the future alongside us.

Wednesday, April 12, 2017

Why is the CIA WaPo Giving Space to Assange to Make His Case?


WaPo |  On his last night in office, President Dwight D. Eisenhower delivered a powerful farewell speech to the nation — words so important that he’d spent a year and a half preparing them. “Ike” famously warned the nation to “guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.” 

Much of Eisenhower’s speech could form part of the mission statement of WikiLeaks today. We publish truths regarding overreaches and abuses conducted in secret by the powerful.

Our most recent disclosures describe the CIA’s multibillion-dollar cyberwarfare program, in which the agency created dangerous cyberweapons, targeted private companies’ consumer products and then lost control of its cyber-arsenal. Our source(s) said they hoped to initiate a principled public debate about the “security, creation, use, proliferation and democratic control of cyberweapons.”

The truths we publish are inconvenient for those who seek to avoid one of the magnificent hallmarks of American life — public debate. Governments assert that WikiLeaks’ reporting harms security. Some claim that publishing facts about military and national security malfeasance is a greater problem than the malfeasance itself. Yet, as Eisenhower emphasized, “Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals, so that security and liberty may prosper together.” 

Quite simply, our motive is identical to that claimed by the New York Times and The Post — to publish newsworthy content. Consistent with the U.S. Constitution, we publish material that we can confirm to be true irrespective of whether sources came by that truth legally or have the right to release it to the media. And we strive to mitigate legitimate concerns, for example by using redaction to protect the identities of at-risk intelligence agents.

The Blockchain and Us


blockchain-documentary |  What is the Blockchain?

blockchain, NOUN /ˈblɒktʃeɪn/
A digital ledger in which transactions made in bitcoin or another cryptocurrency are recorded chronologically and publicly.
From en.oxforddictionaries.com/definition/blockchain

A mysterious white paper (Nakamoto, Satoshi, 2008, “Bitcoin: A Peer-to-Peer Electronic Cash System”) introduced the Bitcoin blockchain, a combination of existing technologies that ensures the integrity of data without a trusted party. It consists of a ledger that can’t be changed and a consensus algorithm—a way for groups to agree. Unlike existing databases in banks and other institutions, a network of users updates and supports the blockchain—a system somewhat similar to Wikipedia, which users around the globe maintain and double-check. The cryptocurrency Bitcoin is the first use case of the blockchain, but much more seems to be possible.
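
As a rough illustration of the “ledger that can’t be changed,” here is a minimal Python sketch. It assumes nothing about Bitcoin’s real block format and omits the consensus algorithm entirely; it shows only the hash-chaining that makes past entries tamper-evident.

    import hashlib, json, time

    def make_block(prev_hash, transactions):
        # each block commits to its predecessor via prev_hash
        block = {"time": time.time(), "prev": prev_hash, "tx": transactions}
        payload = json.dumps(block, sort_keys=True).encode()
        block["hash"] = hashlib.sha256(payload).hexdigest()
        return block

    chain = [make_block("0" * 64, ["genesis"])]
    chain.append(make_block(chain[-1]["hash"], ["alice -> bob: 1 BTC"]))

    # Tampering with an old block is detectable: its recomputed hash no
    # longer matches what the next block recorded.
    chain[0]["tx"] = ["mallory -> mallory: 1000 BTC"]
    recomputed = hashlib.sha256(json.dumps(
        {k: chain[0][k] for k in ("time", "prev", "tx")},
        sort_keys=True).encode()).hexdigest()
    print(recomputed == chain[1]["prev"])   # False: the ledger was altered

In the real network, the consensus algorithm decides whose version of the chain counts, and that is the part that removes the need for a trusted party.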

The Next Generation of the Internet
The first 40 years of the Internet brought e-mail, social media, mobile applications, online shopping, Big Data, Open Data, cloud computing, and the Internet of Things. Information technology is at the heart of everything today—good and bad. Despite advances in privacy, security, and inclusion, one thing is still missing from the Internet: Trust. Enter the blockchain.

The Blockchain and Us: The Project
When the Wright brothers invented the airplane in 1903, it was hard to imagine there would be over 500,000 people traveling in the air at any point in time today. In 2008, Satoshi Nakamoto invented Bitcoin and the blockchain. For the first time in history, his invention made it possible to send money around the globe without banks, governments or any other intermediaries. Satoshi is a mystery character, and just like the Wright brothers, he solved a problem long considered unsolvable. The concept of the blockchain isn’t very intuitive, but many people still believe it is a game changer. Despite its mysterious beginnings, the blockchain might be the airplane of our time.

Economist and filmmaker Manuel Stagars portrays this exciting technology in interviews with software developers, cryptologists, researchers, entrepreneurs, consultants, VCs, authors, politicians, and futurists from the United States, Canada, Switzerland, the UK, and Australia.
How can the blockchain benefit the economies of nations? How will it change society? What does this mean for each of us? The Blockchain and Us is not an explainer video about the technology. It offers a view of the topic far from the hype, makes it accessible, and starts a conversation. For a deep dive, see all full-length interviews from the film here.

The Content Of Sci-Hub And Its Usage


biorxiv |  Despite the growth of Open Access, illegally circumventing paywalls to access scholarly publications is becoming a more mainstream phenomenon. The web service Sci-Hub is amongst the biggest facilitators of this, offering free access to around 62 million publications. So far it is not well studied how and why its users are accessing publications through Sci-Hub. By utilizing the recently released corpus of Sci-Hub and comparing it to the data of ~28 million downloads done through the service, this study tries to address some of these questions. The comparative analysis shows that both the usage and the complete corpus are largely made up of recently published articles, with users disproportionately favoring newer articles and 35% of downloaded articles being published after 2013. These results hint that embargo periods before publications become Open Access are frequently circumvented using Guerilla Open Access approaches like Sci-Hub. On a journal level, the downloads show a bias towards some scholarly disciplines, especially Chemistry, suggesting increased barriers to access in these fields. Comparing the use and corpus on a publisher level, it becomes clear that only 11% of publishers are highly requested in comparison to the baseline frequency, while 45% of all publishers are accessed significantly less than expected. Despite this, the oligopoly of publishers is even more pronounced at the level of content consumption, with 80% of all downloads coming from only 9 publishers. All of this suggests that Sci-Hub is used by different populations and for a number of different reasons, and that there is still a lack of access to the published scientific record. Further analysis of these openly available data resources will undoubtedly be valuable for the investigation of academic publishing.

ISP Data Pollution: Hiding the Needle in a Pile of Needles?


theatlantic |  The basic idea is simple. Internet providers want to know as much as possible about your browsing habits in order to sell a detailed profile of you to advertisers. If the data the provider gathers from your home network is full of confusing, random online activity, in addition to your actual web-browsing history, it’s harder to make any inferences about you based on your data output.

Steven Smith, a senior staff member at MIT’s Lincoln Laboratory, cooked up a data-pollution program for his own family last month, after the Senate passed the privacy bill that would later become law. He uploaded the code for the project, which is unaffiliated with his employer, to GitHub. For a week and a half, his program has been pumping fake web traffic out of his home network, in an effort to mask his family’s real web activity.

Smith’s algorithm begins by stringing together a few words from an open-source dictionary and googling them. It grabs the resulting links in a random order, and saves them in a database for later use. The program also follows the Google results, capturing the links that appear on those pages, and then follows those links, and so on. The table of URLs grows quickly, but it’s capped around 100,000, to keep the computer’s memory from overloading.

A program called PhantomJS, which mimics a person using a web browser, regularly downloads data from the URLs that have been captured—minus the images, to avoid downloading unsavory or infected files. Smith set his program to download a page about every five seconds. Over the course of a month, that’s enough data to max out the 50 gigabytes of data that Smith buys from his internet service provider.

Although it relies heavily on randomness, the program tries to emulate user behavior in certain ways. Smith programmed it to visit no more than 100 domains a day, and to occasionally visit a URL twice—simulating a user reload. The pace of browsing slows down at night, and speeds up again during the day. And as PhantomJS roams around the internet, it changes its camouflage by switching between different user agents, which are identifiers that announce what type of browser a visitor is using. By doing so, Smith hopes to create the illusion of multiple users browsing on his network using different devices and software. “I’m basically using common sense and intuition,” Smith said.
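
Smith’s actual code is the GitHub project mentioned above; the sketch below is only a loose Python rendering of the behavior the article describes. The dictionary path, the search endpoint (a scrape-tolerant one is assumed, since scripted Google queries are often blocked), and the regex link harvesting are all illustrative assumptions.

    import random, re, time, urllib.parse, urllib.request

    WORDS = open("/usr/share/dict/words").read().split()  # an open-source word list
    MAX_URLS = 100_000                    # cap the table to bound memory use
    url_table = ["https://duckduckgo.com/html/?q=" +
                 urllib.parse.quote(" ".join(random.sample(WORDS, 3)))]

    while True:                           # runs indefinitely, like a daemon
        url = random.choice(url_table)    # revisit stored links in random order
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                page = resp.read(200_000).decode("utf-8", errors="ignore")
        except Exception:
            continue                      # dead links are just more noise
        links = re.findall(r'href="(https?://[^"]+)"', page)
        random.shuffle(links)             # grab resulting links in random order
        url_table.extend(links)
        del url_table[MAX_URLS:]          # keep the table capped
        time.sleep(5)                     # about one page every five seconds

A fuller version would also rotate user agents, skip images, and modulate its pace by time of day, as the article describes.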

Tuesday, April 11, 2017

Chasing Perpetual Motion in the Gig Economy


NYTimes |  The promises Silicon Valley makes about the gig economy can sound appealing. Its digital technology lets workers become entrepreneurs, we are told, freed from the drudgery of 9-to-5 jobs. Students, parents and others can make extra cash in their free time while pursuing their passions, maybe starting a thriving small business.

In reality, there is no utopia at companies like Uber, Lyft, Instacart and Handy, whose workers are often manipulated into working long hours for low wages while continually chasing the next ride or task. According to a growing body of evidence, these companies have discovered they can harness advances in software and behavioral science in service of old-fashioned worker exploitation, because their workers lack the basic protections of American law.

A recent story in The Times by Noam Scheiber vividly described how Uber and other companies use tactics developed by the video game industry to keep drivers on the road when they would prefer to call it a day, raising company revenue while lowering drivers’ per-hour earnings. One Florida driver told The Times he earned less than $20,000 a year before expenses like gas and maintenance. In New York City, an Uber drivers group affiliated with the machinists union said that more than one-fifth of its members earn less than $30,000 before expenses.

Gig economy workers tend to be poorer and are more likely to be minorities than the population at large, a survey by the Pew Research Center found last year. Compared with the population as a whole, almost twice as many of them earned under $30,000 a year, and 40 percent were black or Hispanic, compared with 27 percent of all American adults. Most said the money they earned from online platforms was essential or important to their families.

Since workers for most gig economy companies are considered independent contractors, not employees, they do not qualify for basic protections like overtime pay and minimum wages. This helped Uber, which started in 2009, quickly grow to 700,000 active drivers in the United States, nearly three times the number of taxi drivers and chauffeurs in the country in 2014.

Student Debt Bubble Ruins Lives While Sucking Life Out of the Economy


nakedcapitalism |  The Financial Times has a generally good update on the state of the student debt bubble in the US. The article is interesting not just for what it says but also for what goes unsaid. I’ll recap its main points with additional commentary. Note that many of the underlying issues will be familiar to NC readers, but it is nevertheless useful to stay current.
Access to student debt keeps inflating the cost of education. This may seem obvious but it can’t be said often enough. Per the article:
While the headline consumer price index is 2.7 per cent, between 2016 and 2017 published tuition and fee prices rose by 9 per cent at four-year state institutions, and 13 per cent at posher private colleges.
It wasn’t all that long ago that a year at an Ivy League college cost $50,000. Author Rana Foroohar was warned by high school counselors that the price tag for her daughter to attend one of them or a liberal arts college would be around $72,000 a year.
Spending increases are not going into improving education. As we’ve pointed out before, adjuncts are being squeezed into penury while the adminisphere bloat continues, as MBAs have swarmed in like locusts. Another waste of money is over-investment in plant. Again from the story:
A large chunk of the hike was due to schools hiring more administrators (who “brand build” and recruit wealthy donors) and building expensive facilities designed to lure wealthier, full-fee-paying students. This not only leads to excess borrowing on the part of universities — a number of them are caught up in dicey bond deals like the sort that sunk the city of Detroit — but higher tuition for students.
And there is a secondary effect. As education costs rise, students are becoming more mercenary in their choices, and not in a good way. This is another manifestation of what John Kay calls obliquity: in a complex system, trying to map a direct path will fail because it’s impossible to map the terrain well enough to identify one. Thus naive direct paths like “maximize shareholder value” do less well at achieving that objective than richer, more complicated goals.
The higher ed version of this dynamic is “I am going to school to get a well-paid job,” with the following results, per an FT reader:
BazHurl
After a career in equities, having graduated the Dreamy Spires with significant not silly debt, I had the pleasure of interviewing lots of the best and brightest graduates from European and US universities. Finance was attracting far more than its deserved share of the intellectual pie in the 90’s and Noughties in particular; so at times it was distressing to meet outrageously talented young men and women wanting to genuflect at the altar of the $, instead of building the Flux Capacitor. But the greater take-away was how mediocre and homogenous most of the grads were becoming. It seemed the longer they had studied and deferred entry into the Great Unwashed, the more difficult it was to get anything original or genuine from them. Piles and piles of CV’s of the same guys and gals: straight A’s since emerging into the world, polyglots, founders of every financial and charitable university society you could dream up … but could they honestly answer a simple question like “Fidelity or Blackrock – Who has robbed widows and orphans of more?”. Hardly. In short, few of them qualified as the sort of person you would willingly invite to sit next to you for fifteen hours a day, doing battle with pesky clients and triumphing over greedy competitors. All these once-promising 22 to 24 year old’s had somehow been hard-wired by the same robot and worse, all were entitled. Probably fair enough as they had excelled at everything that had been asked of them up until meeting my colleagues and I on the trading floors. Contrast this to the very different experience of meeting visiting sixth formers from a variety of secondary schools that used to tour the bank and with some gentle prodding, light up the Q&A sessions at tour’s end, fizzing with enthusiasm and desire. Now THESE kids I would hire ahead of the blue-chipped grads, most days. They were raw material that could be worked with and shaped into weapons. It was patently clear that University was no longer adding the expected value to these candidates and in fact was becoming quite the reverse. 
And for many grads, an investment in higher education now has a negative return on equity. A 2014 Economist article points out that the widely cited studies of whether college is worth the cost or not omit key factors that skew their results in favor of paying for higher education.

Navient: Student Loans Designed to Fail


NYTimes |  Ashley Hardin dreamed of being a professional photographer — glamorous shoots, perhaps some exotic travel. So in 2006, she enrolled in the Brooks Institute of Photography and borrowed more than $150,000 to pay for what the school described as a pathway into an industry clamoring for its graduates.

“Brooks was advertised as the most prestigious photography school on the West Coast,” Ms. Hardin said. “I wanted to learn from the best of the best.”

Ms. Hardin did not realize that she had taken out high-risk private loans in pursuit of a low-paying career. But her lender, SLM Corporation, better known as Sallie Mae, knew all of that, government lawyers say — and made the loans anyway.

In recent months, the student loan giant Navient, which was spun off from Sallie Mae in 2014 and retained nearly all of the company’s loan portfolio, has come under fire for aggressive and sloppy loan collection practices, which led to a set of government lawsuits filed in January. But those accusations have overshadowed broader claims, detailed in two state lawsuits filed by the attorneys general in Illinois and Washington, that Sallie Mae engaged in predatory lending, extending billions of dollars in private loans to students like Ms. Hardin that never should have been made in the first place.

“These loans were designed to fail,” said Shannon Smith, chief of the consumer protection division at the Washington State attorney general’s office.

New details unsealed last month in the state lawsuits against Navient shed light on how Sallie Mae used private subprime loans — some of which it expected to default at rates as high as 92 percent — as a tool to build its business relationships with colleges and universities across the country. From the outset, the lender knew that many borrowers would be unable to repay, government lawyers say, but it still made the loans, ensnaring students in debt traps that have dogged them for more than a decade.

While these risky loans were a bad deal for students, they were a boon for Sallie Mae. The private loans were — as Sallie Mae itself put it — a “baited hook” that the lender used to reel in more federally guaranteed loans, according to an internal strategy memo cited in the Illinois lawsuit.

The attorneys general in Illinois and Washington — backed by a coalition of those in 27 other states, who participated in a three-year investigation of student lending abuses — want those private loans forgiven.

Monday, April 10, 2017

Jeff Sessions Will Reinstate the War on Black Men Drugs


WaPo  |  Steven H. Cook and Sessions have also fought the winds of change on Capitol Hill, where a bipartisan group of lawmakers recently tried but failed to pass the first significant bill on criminal justice reform in decades.

The legislation, which had 37 sponsors in the Senate, including Sens. Charles E. Grassley (R-Iowa) and Mike Lee (R-Utah), and 79 members of the House, would have reduced some of the long mandatory minimum sentences for gun and drug crimes. It also would have given judges more flexibility in drug sentencing and made retroactive the law that reduced the large disparity between sentencing for crack cocaine and powder cocaine.

The bill, introduced in 2015, had support from outside groups as diverse as the Koch brothers and the NAACP. House Speaker Paul D. Ryan (R-Wis.) supported it as well. The path to passage seemed clear.

But then people such as Sessions and Cook spoke up. The longtime Republican senator from Alabama became a leading opponent, citing the spike in crime in several cities.

“Violent crime and murders have increased across the country at almost alarming rates in some areas. Drug use and overdoses are occurring and dramatically increasing,” said Sessions, one of only five members of the Senate Judiciary Committee who voted against the legislation. “It is against this backdrop that we are considering a bill . . . to cut prison sentences for drug traffickers and even other violent criminals, including those currently in federal prison.”

Cook testified that it was the “wrong time to weaken the last tools available to federal prosecutors and law enforcement agents.”

After Republican lawmakers became nervous about passing legislation that might seem soft on crime, Senate Majority Leader Mitch McConnell (R-Ky.) declined to even bring the bill to the floor for a vote.

“Sessions was the main reason that bill didn’t pass,” said Inimai M. Chettiar, the director of the Justice Program at the Brennan Center for Justice. “He came in at the last minute and really torpedoed the bipartisan effort.”

Now that he is attorney general, Sessions has signaled a new direction. As his first step, Sessions told his prosecutors in a memo last month to begin using “every tool we have” — language that evoked the strategy from the drug war of loading up charges to lengthen sentences.

And he quickly appointed Cook to be a senior official on the attorney general’s task force on crime reduction and public safety, which was created following a Trump executive order to address what the president has called “American carnage.”

“If there was a flickering candle of hope that remained for sentencing reform, Cook’s appointment was a fire hose,” said Kevin Ring, president of FAMM. “There simply aren’t enough backhoes to build all the prisons it would take to realize Steve Cook’s vision for America.”

Mass Incarceration: The Problem With the Standard Story


newyorker  |  So what makes for the madness of American incarceration? If it isn’t crazy drug laws or outrageous sentences or profit-seeking prison keepers, what is it? Pfaff has a simple explanation: it’s prosecutors. They are political creatures, who get political rewards for locking people up and almost unlimited power to do it.

 Pfaff, in making his case, points to a surprising pattern. While violent crime was increasing by a hundred per cent between 1970 and 1990, the number of “line” prosecutors rose by only seventeen per cent. But between 1990 and 2007, while the crime rate began to fall, the number of line prosecutors went up by fifty per cent, and the number of prisoners rose with it. That fact may explain the central paradox of mass incarceration: fewer crimes, more criminals; less wrongdoing to imprison people for, more people imprisoned. A political current was at work, too. Pfaff thinks prosecutors were elevated in status by the surge in crime from the sixties to the nineties. “It could be that as the officials spearheading the war on crime,” he writes, “district attorneys have seen their political options expand, and this has encouraged them to remain tough on crime even as crime has fallen.”

Meanwhile, prosecutors grew more powerful. “There is basically no limit to how prosecutors can use the charges available to them to threaten defendants,” Pfaff observes. That’s why mandatory-sentencing rules can affect the justice system even if the mandatory minimums are relatively rarely enforced. A defendant, forced to choose between a thirty-year sentence if convicted of using a gun in a crime and pleading to a lesser drug offense, is bound to cop to the latter. Some ninety-five per cent of criminal cases in the U.S. are decided by plea bargains—the risk of being convicted of a more serious offense and getting a much longer sentence is a formidable incentive—and so prosecutors can determine another man’s crime and punishment while scarcely setting foot in a courtroom. “Nearly everyone in prison ended up there by signing a piece of paper in a dingy conference room in a county office building,” Pfaff writes.

In a justice system designed to be adversarial, the prosecutor has few adversaries. Though the legendary Gideon v. Wainwright decision insisted that people facing jail time have the right to a lawyer, the system of public defenders—and the vast majority of the accused can depend only on a public defender—is simply too overwhelmed to offer them much help. (Pfaff cites the journalist Amy Bach, who once watched an overburdened public defender “plead out” forty-eight clients in a row in a single courtroom.)

Meanwhile, all the rewards for the prosecutor, at any level, are for making more prisoners. Since most prosecutors are elected, they might seem responsive to democratic discipline. In truth, they are so easily reëlected that a common path for a successful prosecutor is toward higher office. And the one thing that can cripple a prosecutor’s political ascent is a reputation, even if based on only a single case, for being too lenient. In short, our system has huge incentives for brutality, and no incentives at all for mercy.

Jeff Sessions Should Prosecute the Koch Bros. for Bribery


counterpunch |  On March 22, organizations led by Charles and David Koch, who have made tens of billions of dollars from the environmentally toxic business that they inherited from their father (Koch Industries), issued a lucrative offer to Republican congressmen: vote against Rep. Paul Ryan’s healthcare bill in exchange for generous 2018 campaign donations. Naturally, the flip-side of their offer was a threat: vote for the bill and we give you nothing.

The two multi-billionaires opposed Ryan/Trumpcare because of their libertarian, Social Darwinist belief that everybody, no matter how poor, is on his/her own and should not receive even the most minimal help from the government. This is an old American story – white plutocrats, deluded into thinking that they are self-made men rather than fantastically lucky beneficiaries of their parents’ wealth, opting to manipulate politicians into helping them keep as much of it as possible – and then helping them make even more to boot.

Aside from the Koch Brothers’ callousness, insatiable greed, and arrogant sense of entitlement, the real story here is that they just committed a serious white-collar crime: bribery. Bribery, as defined in federal statute 18 U.S.C. § 201, includes “directly or indirectly, corruptly giv[ing], offer[ing] or promis[ing] anything of value to any public official . . . with intent to influence any official act . . .”
For our purposes, the most important words in this statute are “offers” and “promises.” Even if the Koch Brothers were now to retract their offer or fail to follow through for any particular politician, they still issued it. In this sense, it’s like attempt or conspiracy. It does not require actual consummation – that is, an actual exchange of money for legislative action.

Many, if not most, Americans, including politicians and journalists, probably believe that this kind of “quid pro quo” – the exchange of a thing of value for an “official act” – though distasteful, is perfectly legal, especially after the Supreme Court’s Citizens United decision in 2010. But Citizens United did not legalize bribery. On the contrary, it said that bribery – “quid pro quo corruption” or its appearance – is the one thing that corporations may not engage in; pretty much everything else, including spending anonymous and unlimited “independent expenditures” on political advertisements, is constitutionally permitted. Of course, we know that this bribery still goes on all the time between candidates and Super PACs, but we rarely have hard evidence because they are generally smart enough to do all their bribing behind the scenes, not directly in front of the media like the Koch Brothers just did.

Sunday, April 09, 2017

Greer, Kunstler, Martenson, Morris, and Orlov Chew the Fat...,


Tulsi Gabbard Drops the Mic on .45's Ridiculous Syria Strike


foxnews |  Rep. Tulsi Gabbard, D-Hawaii, told Fox News' "Tucker Carlson Tonight" Friday that the American missile strike on a Syrian airfield was "an illegal and unconstitutional military strike" that drew the United States closer to military conflict with Russia.

Gabbard, an Iraq War veteran, also said the strike was "an escalation of a counterproductive regime change war in Syria that our country’s been waging for years, first through the CIA covertly, and now overtly."

In January, Gabbard met with Syrian President Bashar Assad in Damascus. When host Tucker Carlson asked if she believed Assad's forces to be responsible for the chemical weapons attack that precipitated the missile strikes, Gabbard answered, "It doesn’t matter what I believe or not. What matters is evidence and facts.

"If the Trump administration has the evidence, unequivocally proving this, then share it with the American people," Gabbard continued. "Share it with Congress. Come to Congress and make your case before launching an unauthorized, illegal military strike against a foreign government."
Gabbard also said that efforts to overthrow Assad would only strengthen extremist groups, and expressed concerns about Moscow's response to the missile strikes.

"Russia ... are very closely allied with Syria and ... have their own military operating [on] the ground there," the congresswoman said, "and when you consider the consequences of that, the United States and Russia being the two nuclear powers in the world, it should be a cause of great concern for everyone."

Miss Lindsey Graham Sucks Her Pearls and Plays the Fool with Tucker Carlson


media-ite |  Tucker Carlson spoke with Senator Lindsey Graham tonight and confronted him about his proposal to send 7000 troops into Syria.

Graham has been very complimentary of President Trump taking action, telling Carlson tonight he’s actually “proud” of the president for taking action where Barack Obama would not.

Carlson expressed heavy skepticism about what Graham was proposing, asking him if he’s calling for a “whole new war” and whether the U.S. should be getting into that fight in the first place.

He brought up Graham’s proposal and asked about the cost. Graham didn’t have a number ready, and Carlson asked, “Did you not think through what the cost might be?”

Graham responded that it’s “minimal compared to the threats we face” and that “our national security interest can’t be monetized.”

Saturday, April 08, 2017

A Kind of Thin-ness - Just Right for the Alzheimer's Demographic....,


radiolab |  We begin with a love story--from a man who unwittingly fell in love with a chatbot on an online dating site. Then, we encounter a robot therapist whose inventor became so unnerved by its success that he pulled the plug. And we talk to the man who coded Cleverbot, a software program that learns from every new line of conversation it receives...and that's chatting with more than 3 million humans each month. Then, five intrepid kids help us test a hypothesis about a toy designed to push our buttons, and play on our human empathy. And we meet a robot built to be so sentient that its creators hope it will one day have a consciousness, and a life, all its own.

Many-Worlds vs. Boltzmann Brains


nautilus |  In physics, the pressure, temperature, and volume of a gas are known as the state of a gas. In Boltzmann’s model, any arrangement of atoms or molecules that produces this state is known as a microstate of the gas. Since the state of a gas depends on the overall motion of its atoms or molecules, many microstates can produce the same state. Boltzmann showed that entropy can be defined in terms of the number of microstates a state has: the more microstates, the greater the entropy. This explains why the entropy of a system tends to increase. Over time, a gas is more likely to find itself in a state with lots of possible microstates than one with few microstates.
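
In symbols, this is Boltzmann’s relation, the formula engraved on his tombstone:

    S = k_B \ln W

where S is the entropy, W is the number of microstates, and k_B is Boltzmann’s constant. Because the logarithm grows with W, a state realizable by more microstates carries higher entropy, which is why systems drift toward such states.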

Since entropy increases over time, the early universe must have had much lower entropy. This means the Big Bang must have had an extraordinarily low entropy. But why would the primordial state of the universe have such low entropy? Boltzmann’s theory provides a possible answer. Although higher entropy states are more likely over time, it is possible for a thermodynamic system to decrease its entropy. For example, all the air molecules in a room could just happen to cram together in one corner of the room. It isn’t very likely, but, statistically, it is possible. The same idea applies to the universe as a whole: If the primordial cosmos was in thermodynamic equilibrium, there is a small chance that things came together to create an extremely low entropy state. That state then triggered the Big Bang and the universe we see around us.

However, if the low entropy of the Big Bang was just due to random chance, that leads to a problem. Infinite monkeys might randomly type out the Complete Works of Shakespeare, but they would be far more likely to type out the much shorter Gettysburg Address. Likewise, a low entropy Big Bang could arise out of a primordial state, but if the universe is a collection of microstates, a fluctuation producing a lone conscious state that merely thinks it is in a universe is far more likely than one producing the entire physical universe itself. That is, a Boltzmann brain existing is more probable than a universe existing. Boltzmann’s theory leads to a paradox, where the very scientific assumption that we can trust what we observe leads to the conclusion that we can’t trust what we observe.

Although it’s an interesting paradox, most astrophysicists don’t think Boltzmann brains are a real possibility. (The physicist Sean Carroll, for instance, mercilessly deems them “self-undermining and unworthy of serious consideration,” on account of their cognitive instability.) Instead they look to physical processes that would solve the paradox. The physical processes that give rise to the Boltzmann brain possibility are the vacuum energy fluctuations intrinsic to quantum theory—small energy fluctuations can appear out of the vacuum. Usually they aren’t noticeable, but under certain conditions these vacuum fluctuations can lead to things like Hawking radiation and cosmic inflation in the early universe. These fluctuations were in thermal equilibrium in the early universe, so they follow the same random Boltzmann statistics as the primordial cosmos, making them more likely to give rise to a Boltzmann brain than to the universe we seem to be in.

But it turns out that, since the universe is expanding, these apparent fluctuations might not be coming from the vacuum. Instead, as the universe expands, the edge of the observable universe causes thermal fluctuations to appear, much like the event horizon of a black hole gives rise to Hawking radiation. This gives the appearance of vacuum fluctuations, from our point of view. The true vacuum of space and time isn’t fluctuating, so it cannot create a Boltzmann brain. 

The idea, from Caltech physicist Kimberly Boddy and colleagues, is somewhat speculative, and it has an interesting catch. The argument that the true vacuum of the universe is stationary relies on a version of quantum theory known as the many-worlds formulation. In this view, the wave function of a quantum system doesn’t “collapse” when observed. Rather, different outcomes of the quantum system “decohere” and simply evolve along different paths. Where once the universe was a superposition of different possible outcomes, quantum decoherence creates two definite outcomes. Of course, if our minds are simply physical states within the cosmos, our minds are also split into two outcomes, each observing a particular result.

What is Artificial Intelligence?


bruegel |  The specific term “artificial intelligence” was first used by John McCarthy in the summer of 1956, when he held the first academic conference on the subject at Dartmouth. However, the traditional approach to AI was not really about independent machine learning. Instead the aim was to specify rules of logical reasoning and real world conditions which machines could be programmed to follow and react to. This approach was time-consuming for programmers and its effectiveness relied heavily on the clarity of rules and definitions.

For example, applying this rule-and-content approach to machine language translation would require the programmer to proactively equip the machine with all grammatical rules, vocabulary and idioms of the source and target languages. Only then could one feed the machine a sentence to be translated. As words cannot be reduced only to their dictionary definition and there are many exceptions to grammar rules, this approach would be inefficient and ultimately offer poor results, at least if we compare the outcome with a professional, human translator.

Modern AI has deviated from this approach by adopting the notion of machine learning. This shift follows in principle Turing’s recommendation to teach a machine to perform specific tasks as if it were a child. By building a machine with sufficient computational resources, offering training examples from real world data and by designing specific algorithms and tools that define a learning process, rather than specific data manipulations, machines can improve their own performance through learning by doing, inferring patterns, and hypothesis checking.

Thus it is no longer necessary to programme in advance long and complicated rules for a machine’s specific operations. Instead programmers can equip machines with flexible mechanisms that facilitate their adaptation to the task environment. At the core of this learning process are artificial neural networks, inspired by the networks of neurons in the human brain.

The article by The Economist provides a nice illustration of how a simple artificial neural network works. It is organized in layers: data is introduced to the network through an input layer; then come the hidden layers, in which information is processed; and finally an output layer, where results are released. Each neuron within the network is connected to many others, as both inputs and outputs, but the connections are not equal: they are weighted such that a neuron’s different outward connections fire at different levels of input activation. A network with many hidden layers can combine, sort or divide signals by applying different weights to them and passing the result to the next layer. The number of hidden layers is indicative of the ability of the network to detect increasingly subtle features of the input data. The training of the network takes place through the adjustment of neurons’ connection weights, so that the network gives the desired response when presented with particular inputs.
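
A minimal numerical sketch of such a layered network, with hypothetical sizes and random untrained weights (a real network would learn these from data):

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, W, b):
        # weighted sum of inputs, then a nonlinear activation (ReLU)
        return np.maximum(0.0, W @ x + b)

    x = rng.random(4)                                    # input layer: 4 features
    W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)    # hidden layer: 8 neurons
    W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)    # output layer: 3 values

    h = layer(x, W1, b1)          # hidden layer combines weighted signals
    out = W2 @ h + b2             # output layer releases the result
    print(out)

Training, as described above, amounts to nudging W1, b1, W2 and b2 until the network’s outputs match the desired responses for the training inputs.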

The goal of the neural network is to solve problems in the same way that a hypothesised human brain would, albeit without any “conscious” codified awareness of the rules and patterns that have been inferred from the data. Modern neural network projects typically work with a few thousand to a few million neural units and millions of connections, which are still several orders of magnitude less complex than the human brain and closer to the computing power of a worm (see the Intel AI Documentation for further details). While networks with more hidden layers are expected to be more powerful, training deep networks can be rather challenging, owing to the difference in speed at which every hidden layer learns.

By categorising the ways this artificial neuron structure can interact with the source data and stimuli, we can identify three different types of machine learning:
  • Supervised learning: the neural network is provided with examples of inputs and corresponding desired outputs. It then “learns” how to accurately map inputs to outputs by adjusting the weights and activation thresholds of its neural connections. This is the most widely used technique. A typical use would be training email servers to choose which emails should automatically go to the spam folder. Another task that can be learnt in this way is finding the most appropriate results for a query typed in a search engine.
  • Unsupervised learning: the neural network is provided with example inputs and then it is left to recognise features, patterns and structure in these inputs without any specific guidance. This type of learning can be used to cluster the input data into classes on the basis of their statistical properties. It is particularly useful for finding things that you do not know the form of, such as as-yet-unrecognised patterns in a large dataset.
  • Reinforcement learning: the neural network interacts with an environment in which it must perform a specific task, and receives feedback on its performance in the form of a reward or a punishment. This type of learning corresponds, for example, to the training of a network to play computer games and achieve high scores; a minimal sketch follows this list.
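
As a concrete taste of the third type, here is a minimal sketch of tabular Q-learning, with a lookup table standing in for the neural network: a hypothetical agent in a five-cell corridor is rewarded only at the goal and gradually learns to walk toward it.

    import random

    N_STATES, ACTIONS = 5, (-1, +1)        # cells 0..4; move left or right
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, eps = 0.5, 0.9, 0.2      # learning rate, discount, exploration

    for episode in range(200):
        s = 0
        while s != N_STATES - 1:
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0       # reward as feedback
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2

    # The learned policy should move right toward the reward in every cell:
    print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
    # expected -> [1, 1, 1, 1]

Games like Go are learned the same way in spirit, with a deep network replacing the lookup table.
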
Since artificial neural networks are based on a posited structure and function of the human brain, a natural question to ask is whether machines can outperform human beings. Indeed, there are several examples of games and competitions in which machines can now beat humans. By now, machines have topped the best humans at most games traditionally held up as measures of human intellect, including chess (recall for example the 1997 match between IBM’s Deep Blue and the champion Garry Kasparov), Scrabble, Othello, and Jeopardy!. Even in more complex games, machines seem to be quickly improving their performance through their learning process. In March 2016, the AlphaGo computer program from the AI startup DeepMind (which was bought by Google in 2014) beat Lee Sedol in a five-game match of Go – the oldest board game, invented in China more than 2,500 years ago. This was the first time a computer Go program had beaten a 9-dan professional without handicaps.

Friday, April 07, 2017

The Crush Control Chappelle Conspiracy Continues...,


esquire |  In 2014, before the sexual assault allegations against Bill Cosby went mainstream, a standup routine from Hannibal Buress went viral. "Bill Cosby has the fucking smuggest old black man public persona that I hate," Buress said during a set in Philadelphia. "'Pull your pants up, black people, I was on TV in the '80s. I can talk down to you because I had a successful sitcom.' Yeah, but you raped women, Bill Cosby. So, brings you down a couple notches." Buress's bit made headlines, prompting a procession of women to come forward with new allegations, which ultimately led to the undoing of Cosby the comedian—and Cosby the man.

Now skip forward two years.

"The '70s were a wild era, and while all this was going on, Bill Cosby raped 54 people. Holy shit, that's a lot of rapes, man! This guy's putting up real numbers. He's like the Steph Curry of rape." That's Dave Chappelle in 2017, likening Cosby's "400 hours of rape" to a Top Gun pilot. His first specials in 13 years—Netflix paid $60 million for three, the first two of which premiered last month on the streaming service—were considered his big comeback. Instead, they feel more like a throwback. In Age of Spin, Chappelle mimics flamboyant Hollywood producers, fears trans women cutting off their genitalia, and is in creases over a hypothetical superhero who rapes women to activate his powers.

No longer wiry like he once was, Chappelle is not only physically less nimble—he has also seemingly lost his nuance as a storyteller. His delivery is preachy, his punchlines banal. For Vice, Australian comic Patrick Marlborough writes that Chappelle's stand-up in the early '00s "had a sublime mastery of taking a taboo, reiterating it, guiding it to a point, flipping the meaning, and shooting it in the back of the head." As he watched the Netflix specials, however, he was forced to wait for the twist that never came. In its place stood a man who performed ignorance rather than questioning it, who had become trapped in the bubble of his own privilege—a world where the last 10 years of identity politics haven't really made much of a difference. ("The jokes were mean, they were lazy," Marlborough writes. "They were something I never thought I'd see: Dave Chappelle punching down.") Unfortunately that puts him out of touch with the cultural conversation at large, which has itself progressed and in turn shifted the way comedians tackle loaded topics like race, class, gender, and sexuality. In short, Dave Chappelle may not have progressed, but many of us have.

When Zakharova Talks Men Of Culture Listen...,

mid.ru  |   White House spokesman John Kirby’s statement, made in Washington shortly after the attack, raised eyebrows even at home, not ...