Tuesday, November 21, 2017

Enlighten Radio Podcasts: Resistance Radio -- Giving Thanks to the Resistance

Link: http://podcasts.enlightenradio.org/2017/11/resistance-radio-giving-thanks-to.html


Saturday, November 18, 2017

Artificial Intelligence Learns to Learn Entirely on Its Own



A new version of AlphaGo needed no human instruction to figure out how to clobber the best Go player in the world — itself.


A mere 19 months after dethroning the world's top human Go player, the computer program AlphaGo has smashed an even more momentous barrier: It can now achieve unprecedented levels of mastery purely by teaching itself. Starting with zero knowledge of Go strategy and no training by humans, the new iteration of the program, called AlphaGo Zero, needed just three days to invent advanced strategies undiscovered by human players in the game's multimillennial history. By freeing artificial intelligence from a dependence on human knowledge, the breakthrough removes a primary limit on how smart machines can become.

Earlier versions of AlphaGo were taught to play the game using two methods. In the first, called supervised learning, researchers fed the program 100,000 top amateur Go games and taught it to imitate what it saw. In the second, called reinforcement learning, they had the program play itself and learn from the results.
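
To make the distinction concrete, here is a minimal Python sketch of the two training signals, using a plain dictionary in place of AlphaGo's deep neural networks. The function names and data shapes are illustrative assumptions, not DeepMind's code.

    def supervised_update(policy, human_games):
        # Imitation: count how often experts chose each move in each
        # position, nudging the policy toward human play.
        for board, expert_move in human_games:
            moves = policy.setdefault(board, {})
            moves[expert_move] = moves.get(expert_move, 0.0) + 1.0

    def reinforcement_update(policy, self_play_moves, winner):
        # Self-play: reinforce every move the eventual winner made and
        # discourage the loser's, using only the game's outcome.
        for board, move, player in self_play_moves:
            sign = 1.0 if player == winner else -1.0
            moves = policy.setdefault(board, {})
            moves[move] = moves.get(move, 0.0) + sign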


AlphaGo Zero skipped the first step. The program began as a blank slate, knowing only the rules of Go, and played games against itself. At first, it placed stones randomly on the board. Over time it got better at evaluating board positions and identifying advantageous moves. It also learned many of the canonical elements of Go strategy and discovered new strategies all its own. "When you learn to imitate humans the best you can do is learn to imitate humans," said Satinder Singh, a computer scientist at the University of Michigan who was not involved with the research. "In many complex situations there are new insights you'll never discover."

After three days of training and 4.9 million training games, the researchers matched AlphaGo Zero against the earlier champion-beating version of the program. AlphaGo Zero won 100 games to zero.

To expert observers, the rout was stunning. Pure reinforcement learning would seem to be no match for the overwhelming number of possibilities in Go, which is vastly more complex than chess: You'd have expected AlphaGo Zero to spend forever searching blindly for a decent strategy. Instead, it rapidly found its way to superhuman abilities.
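
A back-of-the-envelope calculation shows why blind search looks hopeless. The branching factors and game lengths below are widely cited approximations, not figures from the article.

    # Rough game-tree sizes: average branching factor raised to the
    # power of a typical game length (approximate, well-known figures).
    chess_tree = 35 ** 80    # chess: ~35 moves per turn, ~80 plies
    go_tree = 250 ** 150     # Go: ~250 moves per turn, ~150 plies
    print(len(str(chess_tree)), len(str(go_tree)))  # ~124 vs ~360 digits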

The efficiency of the learning process owes to a feedback loop. Like its predecessor, AlphaGo Zero determines what move to play through a process called a "tree search." The program starts with the current board and considers the possible moves. It then considers what moves its opponent could play in each of the resulting boards, and then the moves it could play in response and so on, creating a branching tree diagram that simulates different combinations of play resulting in different board setups.
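
The branching structure the article describes can be sketched in a few lines of Python. The toy game below (three legal moves per turn, games ending after four plies) is a stand-in assumption, not Go.

    def legal_moves(state):
        # Toy rules: three choices per turn, the game ends after 4 plies.
        return [] if len(state) >= 4 else [0, 1, 2]

    def apply_move(state, move):
        return state + (move,)

    def build_tree(state=()):
        # Each node maps a candidate move to the subtree of possible
        # replies, mirroring the move/counter-move branching above.
        return {m: build_tree(apply_move(state, m)) for m in legal_moves(state)}

    tree = build_tree()  # 3**4 = 81 leaf positions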


Video (credit: DeepMind): David Silver, the lead researcher on the AlphaGo team, discusses how AlphaGo improves its Go strategy by playing against itself.

AlphaGo Zero can't follow every branch of the tree all the way through, since that would require inordinate computing power. Instead, it selectively prunes branches by deciding which paths seem most promising. It makes that calculation — of which paths to prune — based on what it has learned in earlier play about the moves and overall board setups that lead to wins.
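
According to the AlphaGo papers, the "which paths seem most promising" calculation is a PUCT-style rule that trades a learned value estimate against a learned prior and visit counts. The sketch below is a simplified rendering of that idea; the node fields Q, P, and N are assumed names.

    import math

    def select_child(children, c_puct=1.0):
        # children: {move: {"Q": value estimate, "P": prior, "N": visits}}
        # Prefer moves with high learned value (Q) plus an exploration
        # bonus favoring high-prior, rarely visited branches.
        total = sum(c["N"] for c in children.values())
        def score(c):
            return c["Q"] + c_puct * c["P"] * math.sqrt(total + 1) / (1 + c["N"])
        return max(children, key=lambda m: score(children[m]))

    best = select_child({
        0: {"Q": 0.1, "P": 0.5, "N": 10},
        1: {"Q": 0.3, "P": 0.3, "N": 2},
    })  # returns 1: decent value, barely explored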


Earlier versions of AlphaGo did all this, too. What's novel about AlphaGo Zero is that instead of just running the tree search and making a move, it remembers the outcome of the tree search — and eventually of the game. It then uses that information to update its estimates of promising moves and the probability of winning from different positions. As a result, the next time it runs the tree search it can use its improved estimates, trained with the results of previous tree searches, to generate even better estimates of the best possible move.
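
The loop can be shown end to end on a toy problem. In the self-contained sketch below, a lookup table of value estimates stands in for the neural network, a one-ply lookahead stands in for the full tree search, and the "game" is reaching a running total of exactly 10. Everything here is an illustrative assumption; only the shape of the loop (search, play, fold the outcome back into the estimates) mirrors AlphaGo Zero.

    import random

    MOVES = [1, 2, 3]

    def one_ply_search(state, values):
        # Stand-in for the tree search: pick the successor state with
        # the best value estimate learned so far.
        return max(MOVES, key=lambda m: values.get(state + m, 0.0))

    def play_and_learn(rounds=2000, lr=0.1, explore=0.2):
        values = {}
        for _ in range(rounds):
            state, visited = 0, []
            while state < 10:
                if random.random() < explore:
                    move = random.choice(MOVES)  # occasional exploration
                else:
                    move = one_ply_search(state, values)
                state += move
                visited.append(state)
            outcome = 1.0 if state == 10 else -1.0  # landing exactly on 10 wins
            for s in visited:
                # Update each visited state's estimate toward the result,
                # so the next round's search works from better estimates.
                values[s] = values.get(s, 0.0) + lr * (outcome - values.get(s, 0.0))
        return values

    print(play_and_learn().get(10))  # climbs toward +1.0 as play improves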

The computational strategy that underlies AlphaGo Zero is effective primarily in situations in which you have an extremely large number of possibilities and want to find the optimal one. In the Nature paper describing the research, the authors of AlphaGo Zero suggest that their system could be useful in materials exploration — where you want to identify atomic combinations that yield materials with different properties — and protein folding, where you want to understand how a protein's precise three-dimensional structure determines its function.

As for Go, the effects of AlphaGo Zero are likely to be seismic. To date, gaming companies have failed in their efforts to develop world-class Go software. AlphaGo Zero is likely to change that. Andrew Jackson, executive vice president of the American Go Association, thinks it won't be long before Go apps appear on the market. This will change the way human Go players train. It will also make cheating easier.

As for AlphaGo, the future is wide open. Go is sufficiently complex that there's no telling how good a self-starting computer program can get, and AlphaGo now has a learning method to match the expansiveness of the game it was bred to play.

--
John Case
Harpers Ferry, WV

The Winners and Losers Radio Show
7-9 AM Weekdays, The Enlighten Radio Player Stream
Sign up here to get the Weekly Program Notes.

Enlighten Radio Podcasts: Podcast: The Moose Turd Cafe -- HATS ARE COMING OFF THEIR HEADS!! -- SOME ADULT HUMOR INCLUDED

Link: http://podcasts.enlightenradio.org/2017/11/podcast-moose-turd-cafe-hats-are-coming.html


Enlighten Radio Podcasts: Podcast: Resistance Radio -- The Fight Against the Tax Fraud, and Misogyny Gone Crazy

Link: http://podcasts.enlightenradio.org/2017/11/podcast-resistance-radio-fight-against.html


Friday, November 17, 2017

Enlighten Radio Podcasts: Podcast: The Moose Turd Cafe -- News from China Energy Colony #12

Link: http://podcasts.enlightenradio.org/2017/11/podcast-moose-turd-cafe-news-from-china.html


Enlighten Radio Podcasts: Podcast: The Moose Turd Cafe -- Feel Replaced by your Exact Duplicate?

Link: http://podcasts.enlightenradio.org/2017/11/podcast-moose-turd-cafe-feel-replaced.html


Thursday, November 16, 2017

Millions fewer would get overtime protections if the overtime threshold were only $31,000

http://www.epi.org/blog/millions-fewer-would-get-overtime-protections-threshold-31000/

Federal law requires that people working more than 40 hours a week be paid 1.5 times their rate of pay for the extra hours, but exempts salaried workers who make above a certain salary threshold and are deemed to have "executive, administrative, or professional" duties. The salary threshold is meant to help protect salaried workers with little bargaining power—for example, low- or modestly compensated front-line supervisors at fast food restaurants—from being forced to work unpaid overtime. But, at $455 per week (the equivalent of $23,660 per year), the overtime threshold has been so eroded by inflation that it is now below the poverty line for a family of four. If the threshold had simply been adjusted for inflation since 1975, today it would be well over $50,000.
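
The arithmetic behind those figures is easy to check; the Python below uses our own helper names, but the numbers come straight from the rule.

    def annualize(weekly_salary, weeks=52):
        return weekly_salary * weeks

    def weekly_pay(hourly_rate, hours):
        # Time-and-a-half for hours past 40, for covered (non-exempt) workers.
        overtime_hours = max(0, hours - 40)
        return hourly_rate * 40 + hourly_rate * 1.5 * overtime_hours

    print(annualize(455))        # 23660 -- the current threshold, annualized
    print(weekly_pay(20.0, 50))  # 1100.0 = 40*20 + 30*10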

In 2016, the Department of Labor published a highly vetted, economically sound rule that would have increased the threshold to $913 per week ($47,476 per year). However, a district court judge in Texas ruled that the new overtime threshold was invalid. While the Trump DOL plans to appeal the judge's flawed ruling, it will not defend the $47,476 threshold. Instead, it intends to propose a new threshold, and has asked the court to stay the appeal while it engages in new rulemaking.

DOL officials have repeatedly indicated that they would prefer a salary threshold far below $47,476—rolling back protections for millions of workers. It is likely that they are considering proposing a new threshold of around $31,000.

Where does that number come from? In 2004, under George W. Bush, DOL increased the overtime threshold, but fell far short of fully adjusting it for inflation since its prior increase almost 30 years earlier, in 1975. The $31,000 figure is the 2004 threshold adjusted for inflation. Meanwhile, if the 1975 threshold had been adjusted for inflation, it would be well over $50,000. Labor Secretary Alexander Acosta has suggested that simply adjusting the weak 2004 threshold for inflation might be appropriate. Additionally, in a July Request for Information, DOL specifically asked for public comment on whether updating the 2004 salary level for inflation would be an appropriate basis for setting the salary level, further suggesting that this level is under strong consideration by the department.
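
For a sense of the calculation: scaling the 2004 threshold of $23,660 by the change in the consumer price index between 2004 and 2017 lands near $31,000. The index values below are approximate CPI-U annual averages, included only to illustrate the arithmetic.

    def inflate(amount, cpi_then, cpi_now):
        return amount * cpi_now / cpi_then

    # Approximate CPI-U annual averages: ~189 (2004), ~245 (2017).
    print(round(inflate(23660, 189, 245)))  # 30670, i.e. roughly $31,000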

EPI estimated that the 2016 rule, with a salary threshold of $47,476, would have provided new or strengthened overtime protections to 12.5 million workers. Using that same data, we find that a threshold of $31,000 would provide new or strengthened protections to only 3.4 million workers. In other words, 9.1 million workers—close to three-quarters of the 12.5 million—would be left out. The table below shows how many people in each state would not get new or strengthened overtime pay protections if the threshold were set at $31,000 instead of $47,476. Setting the salary threshold below the 2016 level would roll back a long overdue wage increase for American workers across the country.

Table 1: State-by-state counts of workers who would not get new or strengthened overtime protections at a $31,000 threshold (full table at the EPI link above).