Tuesday, July 18, 2017

Don't Touch the Computer


Under what circumstances should humans override algorithms?

From what I have read, I doubt that a hybrid team of human + AlphaGo would perform much better than AlphaGo itself. Perhaps worse, depending on the epistemic sophistication and self-awareness of the human. In hybrid chess it seems that the Elo rating of the human partner is not the main factor, but rather an understanding of the chess program, its strengths, and its limitations.

Unless I'm mistaken, the author of the article below sometimes comments here.
Don’t Touch the Computer
By Jason Collins

BehavioralScientist.org

... Some interpret this unique partnership to be a harbinger of human-machine interaction. The superior decision maker is neither man nor machine, but a team of both. As McAfee and Brynjolfsson put it, “people still have a great deal to offer the game of chess at its highest levels once they’re allowed to race with machines, instead of purely against them.”

However, this is not where we will leave this story. For one, the gap between the best freestyle teams and the best software is closing, if not closed. As Cowen notes, the natural evolution of the human-machine relationship is from a machine that doesn’t add much, to a machine that benefits from human help, to a machine that occasionally needs a tiny bit of guidance, to a machine that we should leave alone.

But more importantly, let me suppose we are going to hold a freestyle chess tournament involving the people reading this article. Do you believe you could improve your chance of winning by overruling your 3300-rated chess program? For nearly all of us, we are best off knowing our limits and leaving the chess pieces alone.

... We interfere too often, ... This has been documented across areas from incorrect psychiatric diagnoses to freestyle chess players messing up their previously strong position, against the advice of their supercomputer teammate.

For example, one study by Berkeley Dietvorst and friends asked experimental subjects to predict the success of MBA students based on data such as undergraduate scores, measures of interview quality, and work experience. They first had the opportunity to do some practice questions. They were also provided with an algorithm designed to predict MBA success and its practice answers—generally far superior to the human subjects’.

In their prediction task, the subjects had the option of using the algorithm, which they had already seen was better than them in predicting performance. But they generally didn’t use it, costing them the money they would have received for accuracy. The authors of the paper suggested that when experimental subjects saw the practice answers from the algorithm, they focussed on its apparently stupid mistakes—far more than they focussed on their own more regular mistakes.

Although somewhat under-explored, this study is typical of when people are given the results of an algorithm or statistical method (see here, here, here, and here). The algorithm tends to improve their performance, yet the algorithm by itself has greater accuracy. This suggests the most accurate method is often to fire the human and rely on the algorithm alone. ...
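A minimal simulation (illustrative numbers only, not from the Dietvorst paper) makes the point: if the human overrides the algorithm precisely when the two disagree most, overall accuracy drops toward the human's.

```python
# Illustrative simulation (numbers are made up): why overriding a better-calibrated
# algorithm usually costs accuracy.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
truth = rng.normal(size=n)                        # true outcomes (standardized)
algo  = truth + rng.normal(scale=0.6, size=n)     # algorithm: smaller error
human = truth + rng.normal(scale=1.0, size=n)     # human: larger error

# "Hybrid": the human overrides the algorithm whenever the two disagree a lot,
# i.e. whenever the algorithm's answer looks like a "stupid mistake" to the human.
override = np.abs(algo - human) > 1.0
hybrid = np.where(override, human, algo)

for name, pred in [("algorithm", algo), ("human", human), ("hybrid", hybrid)]:
    print(f"{name:10s} RMSE = {np.sqrt(np.mean((pred - truth) ** 2)):.3f}")
# Typical output: the pure algorithm wins; the override rule drags accuracy toward the human.
```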

Saturday, July 15, 2017

Genetic variation in Han Chinese population


The largest component of genetic variation is an N-S cline (the phenotypic N-S gradient is discussed here). The variance accounted for by the second (E-W) PC is much smaller, and the Han population is fairly homogeneous in genetic terms: ...while we revealed East-to-West structure among the Han Chinese, the signal is relatively weak and very little structure is discernible beyond the second PC (p.24).

Neandertal ancestry does not vary significantly across provinces, consistent with admixture prior to the dispersal of modern Han Chinese.
A comprehensive map of genetic variation in the world's largest ethnic group - Han Chinese
https://doi.org/10.1101/162982

As are most non-European populations around the globe, the Han Chinese are relatively understudied in population and medical genetics studies. From low-coverage whole-genome sequencing of 11,670 Han Chinese women we present a catalog of 25,057,223 variants, including 548,401 novel variants that are seen at least 10 times in our dataset. Individuals from our study come from 19 out of 22 provinces across China, allowing us to study population structure, genetic ancestry, and local adaptation in Han Chinese. We identify previously unrecognized population structure along the East-West axis of China and report unique signals of admixture across geographical space, such as European influences among the Northwestern provinces of China. Finally, we identified a number of highly differentiated loci, indicative of local adaptation in the Han Chinese. In particular, we detected extreme differentiation among the Han Chinese at MTHFR, ADH7, and FADS loci, suggesting that these loci may not be specifically selected in Tibetan and Inuit populations as previously suggested. On the other hand, we find that Neandertal ancestry does not vary significantly across the provinces, consistent with admixture prior to the dispersal of modern Han Chinese. Furthermore, contrary to a previous report, Neandertal ancestry does not explain a significant amount of heritability in depression. Our findings provide the largest genetic data set so far made available for Han Chinese and provide insights into the history and population structure of the world's largest ethnic group.
See also Large-Scale Psychological Differences Within China.
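For readers unfamiliar with how such clines are detected: here is a toy sketch of genotype-matrix PCA (simulated data with a deliberately exaggerated north-south allele-frequency gradient; not the paper's pipeline).

```python
# Toy sketch of the kind of PCA used to find population structure: a simulated
# genotype matrix (individuals x SNPs, coded 0/1/2) with an exaggerated N-S
# allele-frequency gradient. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp = 500, 2000
latitude = rng.uniform(0, 1, n_ind)                        # stand-in for N-S position
base_freq = rng.uniform(0.1, 0.9, n_snp)
cline = rng.normal(scale=0.2, size=n_snp)                  # per-SNP frequency gradient (exaggerated)
freqs = np.clip(base_freq + np.outer(latitude - 0.5, cline), 0.01, 0.99)
G = rng.binomial(2, freqs)                                 # genotypes

# Standardize each SNP (EIGENSTRAT-style), then take the top principal component.
p_hat = np.clip(G.mean(axis=0) / 2, 0.01, 0.99)
X = (G - 2 * p_hat) / np.sqrt(2 * p_hat * (1 - p_hat))
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = U[:, 0] * S[0]

print("corr(PC1, latitude) =", round(abs(np.corrcoef(pc1, latitude)[0, 1]), 2))
# With these (exaggerated) settings PC1 clearly tracks the latitude gradient;
# higher PCs carry much less structure, as in the Han Chinese data.
```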

The Loveless (1982) and Born to Run



The Loveless (free now on Amazon Prime) was the first film directed by Kathryn Bigelow (Point Break, Zero Dark Thirty) and also the first film role for a young Willem Dafoe. Dafoe has more leading-man star power in this role than in most of his subsequent work.

Loveless was shot in 22 days, when Bigelow was fresh out of Columbia film school. The movie could be characterized as a biker art film with some camp elements, but its overall mood is fairly dark and nihilistic. The video above is a fan mash-up of Loveless and Bruce Springsteen's Born to Run. It works well on its own terms, although Born to Run is more romantic than nihilistic, at least musically. The lyrics by themselves, however, fit the film rather well.
Born To Run

Bruce Springsteen

In the day we sweat it out on the streets of a runaway American dream
At night we ride through the mansions of glory in suicide machines
Sprung from cages out on highway nine,
Chrome wheeled, fuel injected, and steppin' out over the line
H-Oh, Baby this town rips the bones from your back
It's a death trap, it's a suicide rap
We gotta get out while we're young
`Cause tramps like us, baby we were born to run

Yes, girl we were

Wendy let me in I wanna be your friend
I want to guard your dreams and visions
Just wrap your legs 'round these velvet rims
And strap your hands 'cross my engines
Together we could break this trap
We'll run till we drop, baby we'll never go back
H-Oh, Will you walk with me out on the wire
`Cause baby I'm just a scared and lonely rider
But I gotta know how it feels
I want to know if love is wild
Babe I want to know if love is real

Oh, can you show me

Beyond the Palace hemi-powered drones scream down the boulevard
Girls comb their hair in rearview mirrors
And the boys try to look so hard
The amusement park rises bold and stark
Kids are huddled on the beach in a mist
I wanna die with you Wendy on the street tonight
In an everlasting kiss

One, two, three, four

The highway's jammed with broken heroes on a last chance power drive
Everybody's out on the run tonight
But there's no place left to hide
Together Wendy we can live with the sadness
I'll love you with all the madness in my soul
H-Oh, Someday girl I don't know when
We're gonna get to that place
Where we really wanna go
And we'll walk in the sun
But till then tramps like us
Baby we were born to run
Oh honey, tramps like us
Baby we were born to run
Come on with me, tramps like us
Baby we were born to run

Thursday, July 13, 2017

Super-human Relational Reasoning (DeepMind)



These neural nets reached super-human (better than an average human) performance on tasks requiring relational reasoning. See the short video for examples.
A simple neural network module for relational reasoning
https://arxiv.org/abs/1706.01427

Adam Santoro, David Raposo, David G.T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, Timothy Lillicrap (Submitted on 5 Jun 2017)

Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn. In this paper we describe how to use Relation Networks (RNs) as a simple plug-and-play module to solve problems that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering using the bAbI suite of tasks; and complex reasoning about dynamic physical systems. Then, using a curated dataset called Sort-of-CLEVR we show that powerful convolutional networks do not have a general capacity to solve relational questions, but can gain this capacity when augmented with RNs. Our work shows how a deep learning architecture equipped with an RN module can implicitly discover and learn to reason about entities and their relations.
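The core idea is simple enough to sketch. Below is a minimal Relation Network module as I read the paper: pair up object representations, score every ordered pair with a shared MLP g, sum the scores, and pass the sum through a second MLP f. (The real model also conditions g on a question embedding for VQA; layer sizes here are arbitrary, and this is my simplification, not the authors' code.)

```python
# Minimal sketch of the Relation Network idea (simplified reading of the paper).
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim, hidden=256, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects):                        # objects: (batch, n_obj, obj_dim)
        b, n, d = objects.shape
        oi = objects.unsqueeze(2).expand(b, n, n, d)   # object i
        oj = objects.unsqueeze(1).expand(b, n, n, d)   # object j
        pairs = torch.cat([oi, oj], dim=-1)            # all ordered pairs (i, j)
        relations = self.g(pairs).sum(dim=(1, 2))      # score each pair, sum over pairs
        return self.f(relations)

# Example: 8 objects (e.g. CNN feature-map cells) of dimension 32, batch of 4.
rn = RelationNetwork(obj_dim=32)
print(rn(torch.randn(4, 8, 32)).shape)                 # torch.Size([4, 10])
```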

Tuesday, July 11, 2017

Probing deep networks: inside the black box



See also AI knows best: AlphaGo "like a God":
Humans are going to have to learn to "trust the AI" without understanding why it is right. I often make an analogous point to my kids -- "At your age, if you and Dad disagree, chances are that Dad is right" :-) Of course, I always try to explain the logic behind my thinking, but in the case of some complex machine optimizations (e.g., Go strategy), humans may not be able to understand even the detailed explanations.

In some areas of complex systems -- neuroscience, genomics, molecular dynamics -- we also see machine prediction that is superior to other methods, but difficult even for scientists to understand. When hundreds or thousands of genes combine to control many dozens of molecular pathways, what kind of explanation can one offer for why a particular setting of the controls (DNA pattern) works better than another?

There was never any chance that the functioning of a human brain, the most complex known object in the universe, could be captured in verbal explication of the familiar kind (non-mathematical, non-algorithmic). The researchers that built AlphaGo would be at a loss to explain exactly what is going on inside its neural net...

Sunday, July 09, 2017

Yoel Romero, freak athlete



Romero is 40 years old! He is a former World Champion and Olympic silver medalist for Cuba in freestyle wrestling. Watch the video -- it's great! :-)

He lost a close championship fight yesterday in the UFC at 185lbs. The guy he lost to, Robert Whittaker, is a young talent and a class act. It's been said that Romero relies too much on athleticism and doesn't fight smart (this goes back to his wrestling days). He should have attacked Whittaker more ruthlessly after he hurt Whittaker's knee early in the fight with a kick.

Saturday, July 08, 2017

Politico: Trump could have won as a Democrat


I'm old enough to have been aware of Donald Trump since before the publication of Art of the Deal in 1987. In these decades, during which he was one of the best-known celebrities in America, he was largely regarded as a progressive New Yorker, someone who could easily pass as a rich Democrat. Indeed, he was friendly with the Clintons -- Ivanka and Chelsea are good friends. There were no accusations of racism, and he enjoyed an 11-year run (2004-2015) on The Apprentice. No one would have doubted for a second that he was an American patriot, the least likely stooge for Russia or the USSR. I say all this to remind people that the image of Trump promulgated by the media and his other political enemies since he decided to run for President is entirely a creation of the last year or two.

If you consider yourself a smart person, a rational person, an evidence-driven person, you should reconsider whether 30+ years of reporting on Trump is more likely to be accurate (during this time he was a public figure, major celebrity, and tabloid fodder: subject to intense scrutiny), or 1-2 years of heavily motivated fake news.

In the article below, Politico considers the very real possibility that Trump could have run, and won, as a Democrat. If you're a HATE HATE HATE NEVER NEVER TRUMP person, think about that for a while.
Politico: ... Could Trump have done to the Democrats in 2016 what he did to the Republicans? Why not? There, too, he would have challenged an overconfident, message-challenged establishment candidate (Hillary Clinton instead of Jeb Bush) and with an even smaller number of other competitors to dispatch. One could easily see him doing as well or better than Bernie Sanders—surprising Clinton in the Iowa caucuses, winning the New Hampshire primaries, and on and on. More to the point, many of Trump’s views—skepticism on trade, sympathetic to Planned Parenthood, opposition to the Iraq war, a focus on blue-collar workers in Rust Belt America—seemed to gel as well, if not better, with blue-state America than red. Think the Democrats wouldn’t tolerate misogynist rhetoric and boorish behavior from their leaders? Well, then you’ve forgotten about Woodrow Wilson and John F. Kennedy and LBJ and the last President Clinton.

There are, as with every what-if scenario, some flaws. Democrats would have deeply resented Trump’s ‘birther’ questioning of Barack Obama’s origins, and would have been highly skeptical of the former reality TV star’s political bona fides even if he hadn’t made a sharp turn to the right as he explored a presidential bid in the run up to the 2012 election. His comments on women and minorities would have exposed him to withering scrutiny among the left’s army of advocacy groups. Liberal donors would likely have banded together to strangle his candidacy in its cradle—if they weren’t laughing him off. But Republican elites tried both of these strategies in 2015, as well, and it manifestly didn’t work. What’s more, Trump did once hold a passel of progressive stances—and he had friendships all over the political map. As Bloomberg’s Josh Green notes, in his Apprentice days, Trump was even wildly popular among minorities. It’s not entirely crazy to imagine him outflanking a coronation-minded Hillary Clinton on the left and blitzing a weak Democratic field like General Sherman marching through Georgia. ...
See also Trump on Trump.

I voted twice for Bill Clinton and twice for Obama. Listen carefully: their positions on immigration, as expressed below, do not differ much in substance from Trump's.





Thursday, July 06, 2017

10 Years of GWAS Discovery


See post from 2012: Five years of GWAS discovery to see how far the field of human genomics has advanced in just a short time.
10 Years of GWAS Discovery: Biology, Function, and Translation

The American Journal of Human Genetics 101, 5–22, July 6, 2017
DOI: http://dx.doi.org/10.1016/j.ajhg.2017.06.005

Peter M. Visscher,1,2,* Naomi R. Wray,1,2 Qian Zhang,1 Pamela Sklar,3 Mark I. McCarthy,4,5,6 Matthew A. Brown,7 and Jian Yang1,2

Application of the experimental design of genome-wide association studies (GWASs) is now 10 years old (young), and here we review the remarkable range of discoveries it has facilitated in population and complex-trait genetics, the biology of diseases, and translation toward new therapeutics. We predict the likely discoveries in the next 10 years, when GWASs will be based on millions of samples with array data imputed to a large fully sequenced reference panel and on hundreds of thousands of samples with whole-genome sequencing data.

Background
Five years ago, a number of us reviewed (and gave our opinion on) the first 5 years of discoveries that came from the experimental design of the GWAS.1 That review sought to set the record straight on the discoveries made by GWASs because at that time, there was still a level of misunderstanding and distrust about the purpose of and discoveries made by GWASs. There is now much more acceptance of the experimental design because the empirical results have been robust and overwhelming, as reviewed here.

... GWAS results have now been reported for hundreds of complex traits across a wide range of domains, including common diseases, quantitative traits that are risk factors for disease, brain imaging phenotypes, genomic measures such as gene expression and DNA methylation, and social and behavioral traits such as subjective well-being and educational attainment. About 10,000 strong associations have been reported between genetic variants and one or more complex traits,10 where “strong” is defined as statistically significant at the genome-wide p value threshold of 5 × 10−8, excluding other genome-wide-significant SNPs in LD (r2 > 0.5) with the strongest association (Figure 2). GWAS associations have proven highly replicable, both within and between populations,11, 12 under the assumption of adequate sample sizes.

One unambiguous conclusion from GWASs is that for almost any complex trait that has been studied, many loci contribute to standing genetic variation. In other words, for most traits and diseases studied, the mutational target in the genome appears large so that polymorphisms in many genes contribute to genetic variation in the population. This means that, on average, the proportion of variance explained at the individual variants is small. Conversely, as predicted previously,1, 13 this observation implies that larger experimental sample sizes will lead to new discoveries, and that is exactly what has occurred over the last decade. ...
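For readers unfamiliar with how the "~10,000 strong associations" are counted: here is a sketch of the rule described above (my paraphrase, not the review's code) — keep hits below the genome-wide threshold, then prune anything in strong LD with a more significant hit.

```python
# Sketch of genome-wide-significance filtering plus LD pruning ("clumping").
def clump(snps, ld_r2, p_thresh=5e-8, r2_thresh=0.5):
    """snps: list of (snp_id, p_value); ld_r2: dict mapping frozenset({a, b}) -> r^2."""
    significant = sorted((s for s in snps if s[1] < p_thresh), key=lambda s: s[1])
    kept = []
    for snp_id, p in significant:
        # Drop this SNP if it is in LD with an already-kept, more significant SNP.
        linked = any(ld_r2.get(frozenset({snp_id, k}), 0.0) > r2_thresh for k, _ in kept)
        if not linked:
            kept.append((snp_id, p))
    return kept

# Toy example: rs2 and rs3 tag the same locus, so only the stronger hit is counted.
snps = [("rs1", 1e-9), ("rs2", 4e-8), ("rs3", 2e-10), ("rs4", 1e-6)]
ld = {frozenset({"rs2", "rs3"}): 0.9}
print(clump(snps, ld))   # [('rs3', 2e-10), ('rs1', 1e-09)]
```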

Tuesday, July 04, 2017

Building the Gadget: A Technical History of the Atomic Bomb


This is the best technical summary of the Los Alamos component of the Manhattan Project that I know of. It includes, for example, detail about the hydrodynamical issues that had to be overcome for successful implosion. That work drew heavily on von Neumann's expertise in shock waves, explosives, numerical solution of hydrodynamic partial differential equations, etc. A visit by G.I. Taylor alerted the designers to the possibility of instabilities in the shock front (Rayleigh–Taylor instability). Concern over these instabilities led to the solid-core design known as the Christy Gadget.
Critical Assembly: A Technical History of Los Alamos during the Oppenheimer Years, 1943-1945

... Unlike earlier histories of Los Alamos, this book treats in detail the research and development that led to the implosion and gun weapons; the research in nuclear physics, chemistry, and metallurgy that enabled scientists to design these weapons; and the conception of the thermonuclear bomb, the "Super." Although fascinating in its own right, this story has particular interest because of its impact on subsequent developments. Although many books examine the implications of Los Alamos for the development of a nuclear weapons culture, this is the first to study its role in the rise of the methodology of "big science" as carried out in large national laboratories.

... The principal reason that the technical history of Los Alamos has not yet been written is that even today, after half a century, much of the original documentation remains classified. With cooperation from the Los Alamos Laboratory, we received authorization to examine all the relevant documentation. The book then underwent a classification review that resulted in the removal from this edition of all textual material judged sensitive by the Department of Energy and all references to classified documents. (For this reason, a number of quotations appear without attribution.) However, the authorities removed little information. Thus, except for a small number of technical facts, this account represents the complete story. In every instance the deleted information was strictly technical; in no way has the Los Alamos Laboratory or the Department of Energy attempted to shape our interpretations. This is not, therefore, a "company history"; throughout the research and writing, we enjoyed intellectual freedom.

... Scientific research was an essential component of the new approach: the first atomic bombs could not have been built by engineers alone, for in no sense was developing these bombs an ordinary engineering task. Many gaps existed in the scientific knowledge needed to complete the bombs. Initially, no one knew whether an atomic weapon could be made. Furthermore, the necessary technology extended well beyond the "state of the art." Solving the technical problems required a heavy investment in basic research by top-level scientists trained to explore the unknown - scientists like Hans Bethe, Richard Feynman, Rudolf Peierls, Edward Teller, John von Neumann, Luis Alvarez, and George Kistiakowsky. To penetrate the scientific phenomena required a deep understanding of nuclear physics, chemistry, explosives, and hydrodynamics. Both theoreticians and experimentalists had to push their scientific tools far beyond their usual capabilities. For example, methods had to be developed to carry out numerical hydrodynamics calculations on a scale never before attempted, and experimentalists had to expand the sensitivity of their detectors into qualitatively new regimes.

... American physics continued to prosper throughout the 1920s and 1930s, despite the Depression. Advances in quantum theory stimulated interest in the microscopic structure of matter, and in 1923 Robert Millikan of Caltech was awarded the Nobel Prize for his work on electrons. In the 1930s and 1940s, Oppenheimer taught quantum theory to large numbers of students at the Berkeley campus of the University of California as well as at Caltech. Also at Berkeley in the 1930s and 1940s, the entrepreneurial Lawrence gathered chemists, engineers, and physicists together in a laboratory where he built a series of ever-larger cyclotrons and led numerous projects in nuclear chemistry, nuclear physics, and medicine. By bringing together specialists from different fields to work cooperatively on large common projects, Lawrence helped to create a distinctly American collaborative research endeavor - centered on teams, as in the industrial research laboratories, but oriented toward basic studies without immediate application. This approach flourished during World War II.

Sunday, July 02, 2017

Machine intelligence threatens overpriced aircraft carriers


The excerpt below is from a recent comment thread, arguing that the US Navy should de-emphasize carrier groups in favor of subs and smaller surface ships. Technological trends such as rapid advancement in machine learning (ML) and sensors will render carriers increasingly vulnerable to missile attack in the coming decades.
1. US carriers are very vulnerable to *conventional* Russian and PRC missile (cruise, ASBM) weapons.

2. Within ~10y (i.e., well within projected service life of US carriers) I expect missile systems of the type currently only possessed by Russia and PRC to be available to lesser powers. I expect that a road-mobile ASBM weapon with good sensor/ML capability, range ~1500km, will be available for ~$10M. Given a rough (~10km accuracy) fix on a carrier, this missile will be able to arrive in that area and then use ML/sensors for final targeting. There is no easy defense against such weapons. Cruise missiles which pose a similar threat will also be exported. This will force the US to be much more conservative in the use of its carriers, not just against Russia and PRC, but against smaller countries as well.

Given 1. and 2. my recommendation is to decrease the number of US carriers and divert the funds into smaller missile ships, subs, drones, etc. Technological trends simply do not favor carriers as a weapon platform.

Basic missile technology is old, well-understood, and already inexpensive (compared, e.g., to the cost of fighter jets). ML/sensor capability is evolving rapidly and will be enormously better in 10y. Imagine a Mach 10 robot kamikaze with no problem locating a carrier from 10km distance (on a clear day there are no countermeasures against visual targeting using the equivalent of a cheap iPhone camera -- i.e., robot pilot looks down at the ocean to find carrier), and capable of maneuver. Despite BS claims over the years (and over $100B spent by the US), anti-missile technology is not effective, particularly against fast-moving ballistic missiles.

One only has to localize the carrier to within a few × 10 km for the initial launch, letting the smart final targeting do the rest. The initial targeting location can be obtained through many methods, including aircraft/drone probes, targeting overflight by another kind of missile, LEO micro-satellites, or even (surreptitious) cooperation from Russia/PRC (or a commercial vendor!) via their satellite network.
Some relevant links, supplied by a reader. 1. National Air and Space Intelligence Center report on ballistic and cruise missile threats (note the large number of countries that can utilize basic missile technology; all they need is an ML/sensor upgrade...), and 2. Stop the Navy's carrier plan by Capt. Jerry Hendrix (ret.), director of the Defense Strategies and Assessments Program at the Center for a New American Security:
... the Navy plans to modernize its carrier program by launching a new wave of even larger and more expensive ships, starting with the USS Gerald Ford, which cost $15 billion to build — by far the most expensive vessel in naval history. This is a mistake: Because of changes in warfare and technology, in any future military entanglement with a foe like China, current carriers and their air wings will be almost useless and the next generation may fare even worse.

... most weapons platforms are effective for only a limited time, an interval that gets shorter as history progresses. But until the past few years, the carrier had defied the odds, continuing to demonstrate America’s military might around the world without any challenge from our enemies. That period of grace may have ended as China and Russia are introducing new weapons — called “carrier killer” missiles — that cost $10 million to $20 million each and can target the U.S.’s multibillion-dollar carriers up to 900 miles from shore.

... The average cost of each of the 10 Nimitz class carriers was around $5 billion. When the cost of new electrical systems is factored in, the USS Ford cost three times as much and took five years to build. With the deficit projected to rise considerably over the next decade, defense spending is unlikely to receive a significant bump. Funding these carriers will crowd out spending on other military priorities, like the replacement of the Ohio class ballistic missile submarine, perhaps the most survivable and important leg of our strategic deterrent triad. There simply isn’t room to fund an aircraft carrier that costs the equivalent of the entire Navy shipbuilding budget.

... The Navy’s decision on the carriers today will affect U.S. naval power for decades. These carriers are expected to be combat effective in 2065 — over 150 years since the idea of an aircraft carrier was first conceived. ...
See also Defense Science Board report on Autonomous Systems.

Thursday, June 29, 2017

How the brain does face recognition


This is a beautiful result. IIUC, these neuroscientists use the terminology "face axis" where machine learning would speak of variation along an eigenface or feature vector.
Scientific American: ...using a combination of brain imaging and single-neuron recording in macaques, biologist Doris Tsao and her colleagues at Caltech have finally cracked the neural code for face recognition. The researchers found the firing rate of each face cell corresponds to separate facial features along an axis. Like a set of dials, the cells are fine-tuned to bits of information, which they can then channel together in different combinations to create an image of every possible face. “This was mind-blowing,” Tsao says. “The values of each dial are so predictable that we can re-create the face that a monkey sees, by simply tracking the electrical activity of its face cells.”
I never believed the "Jennifer Aniston neuron" results, which seemed implausible from a neural architecture perspective. I thought the encoding had to be far more complex and modular. Apparently that's the case. The single neuron claim has been widely propagated (for over a decade!) but now seems to be yet another result that fails to replicate after invading the meme space of credulous minds.
... neuroscientist Rodrigo Quian Quiroga found that pictures of actress Jennifer Aniston elicited a response in a single neuron. And pictures of Halle Berry, members of The Beatles or characters from The Simpsons activated separate neurons. The prevailing theory among researchers was that each neuron in the face patches was sensitive to a few particular people, says Quiroga, who is now at the University of Leicester in the U.K. and not involved with the work. But Tsao’s recent study suggests scientists may have been mistaken. “She has shown that neurons in face patches don’t encode particular people at all, they just encode certain features,” he says. “That completely changes our understanding of how we recognize faces.”
Modular feature sensitivity -- just like in neural net face recognition:
... To decipher how individual cells helped recognize faces, Tsao and her postdoc Steven Le Chang drew dots around a set of faces and calculated variations across 50 different characteristics. They then used this information to create 2,000 different images of faces that varied in shape and appearance, including roundness of the face, distance between the eyes, skin tone and texture. Next the researchers showed these images to monkeys while recording the electrical activity from individual neurons in three separate face patches.

All that mattered for each neuron was a single-feature axis. Even when viewing different faces, a neuron that was sensitive to hairline width, for example, would respond to variations in that feature. But if the faces had the same hairline and different-size noses, the hairline neuron would stay silent, Chang says. The findings explained a long-disputed issue in the previously held theory of why individual neurons seemed to recognize completely different people.

Moreover, the neurons in different face patches processed complementary information. Cells in one face patch—the anterior medial patch—processed information about the appearance of faces such as distances between facial features like the eyes or hairline. Cells in other patches—the middle lateral and middle fundus areas—handled information about shapes such as the contours of the eyes or lips. Like workers in a factory, the various face patches did distinct jobs, cooperating, communicating and building on one another to provide a complete picture of facial identity.

Once Chang and Tsao knew how the division of labor occurred among the “factory workers,” they could predict the neurons’ responses to a completely new face. The two developed a model for which feature axes were encoded by various neurons. Then they showed monkeys a new photo of a human face. Using their model of how various neurons would respond, the researchers were able to re-create the face that a monkey was viewing. “The re-creations were stunningly accurate,” Tsao says. In fact, they were nearly indistinguishable from the actual photos shown to the monkeys.
This is the original paper in Cell:
The Code for Facial Identity in the Primate Brain

Le Chang, Doris Y. Tsao

Highlights
•Facial images can be linearly reconstructed using responses of ∼200 face cells
•Face cells display flat tuning along dimensions orthogonal to the axis being coded
•The axis model is more efficient, robust, and flexible than the exemplar model
•Face patches ML/MF and AM carry complementary information about faces

Summary
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.
200 cells is interesting because (IIRC) standard deep learning face recognition packages right now use a 128-dimensional feature space. These packages perform roughly as well as humans (or perhaps a bit better?).
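A toy version of the axis code (illustrative, not the paper's analysis): if each cell's firing rate is a linear projection of the face's coordinates in a ~50-dimensional feature space, then ~200 cells are more than enough to decode the face by simple least squares.

```python
# Toy "axis model": firing rate = linear projection of face coordinates onto a
# cell-specific axis; decode faces from ~200 noisy rates by least squares.
import numpy as np

rng = np.random.default_rng(2)
n_dims, n_cells, n_faces = 50, 200, 1000
axes = rng.normal(size=(n_cells, n_dims))          # each cell's preferred axis
faces = rng.normal(size=(n_faces, n_dims))         # faces as points in feature space
rates = faces @ axes.T + rng.normal(scale=0.5, size=(n_faces, n_cells))  # noisy rates

# Decode: least-squares inversion of the (tall) cell-axis matrix.
decoded, *_ = np.linalg.lstsq(axes, rates.T, rcond=None)
decoded = decoded.T
r = np.corrcoef(decoded.ravel(), faces.ravel())[0, 1]
print(f"correlation between decoded and true face features: {r:.3f}")  # close to 1
```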

Monday, June 26, 2017

Face Recognition applied at scale in China



The Chinese government is not the only entity that has access to millions of faces + identifying information. So do Google, Facebook, Instagram, and anyone who has scraped information from similar social networks (e.g., US security services, hackers, etc.).

In light of such ML capabilities it seems clear that anti-ship ballistic missiles can easily target a carrier during the final maneuver phase of descent, using optical or infrared sensors (let alone radar).
Terminal targeting of a moving aircraft carrier by an ASBM like the DF21D

Simple estimates: 10 min flight time means ~10 km uncertainty in the final position of a carrier (assume a speed of 20-30 mph) initially located by satellite. Missile course correction at distance ~10 km from the target allows ~10 s (assuming Mach 5-10 velocity) of maneuver, and requires only a modest angular correction. At this distance a 100 m-sized target has angular size ~0.01 radian, so it should be readily detectable from an optical image. (Carriers are visible to the naked eye from space!) Final targeting at distance ~km can use a combination of optical / IR / radar that makes countermeasures difficult.

So hitting a moving aircraft carrier does not seem especially challenging with modern technology.
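The same rough numbers, spelled out as a quick calculation (inputs are the post's assumptions, not engineering data):

```python
# Back-of-envelope numbers from the estimate above.
flight_time_s = 10 * 60                       # ~10 minutes from launch to arrival
carrier_speed = 30 * 0.447                    # 30 mph in m/s (upper end of 20-30 mph)
drift_km = carrier_speed * flight_time_s / 1000
print(f"position uncertainty at arrival: ~{drift_km:.0f} km")     # ~8 km, i.e. order 10 km

target_size_m = 100.0                         # rough transverse scale of a carrier
angular_size = target_size_m / 10_000         # radians, seen from 10 km away
print(f"angular size at 10 km: ~{angular_size:.2f} rad")          # ~0.01 rad, easy to resolve optically
```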

Friday, June 23, 2017

The Prestige: Are You Watching Closely?



2016 was the 10th anniversary of The Prestige, one of the most clever films ever made. This video reveals aspects of the movie that will be new even to fans who have watched it several times. Highly recommended!
Wikipedia: The Prestige is a 2006 British-American mystery thriller film directed by Christopher Nolan, from a screenplay adapted by Nolan and his brother Jonathan from Christopher Priest's 1995 novel of the same name. Its story follows Robert Angier and Alfred Borden, rival stage magicians in London at the end of the 19th century. Obsessed with creating the best stage illusion, they engage in competitive one-upmanship with tragic results. The film stars Hugh Jackman as Robert Angier, Christian Bale as Alfred Borden, and David Bowie as Nikola Tesla. It also stars Michael Caine, Scarlett Johansson, Piper Perabo, Andy Serkis, and Rebecca Hall.
See also Feynman and Magic -- Feynman was extremely good at reverse-engineering magic tricks.

Sunday, June 18, 2017

Destined for War? America, China, and the Thucydides Trap



Graham Allison was Dean of the Kennedy School of Government at Harvard and Assistant Secretary of Defense under Clinton. I also recommend his book on Lee Kuan Yew.

Thucydides: “It was the rise of Athens, and the fear that this inspired in Sparta, that made war inevitable.” More here and here.
Destined for War: Can America and China Escape Thucydides’s Trap?

In Destined for War, the eminent Harvard scholar Graham Allison explains why Thucydides’s Trap is the best lens for understanding U.S.-China relations in the twenty-first century. Through uncanny historical parallels and war scenarios, he shows how close we are to the unthinkable. Yet, stressing that war is not inevitable, Allison also reveals how clashing powers have kept the peace in the past — and what painful steps the United States and China must take to avoid disaster today.
At 1h05min Allison answers the following question.
Is there any reason for optimism under President Trump in foreign affairs?

[65:43] Harvard and Cambridge ... ninety-five percent of whom voted [against Trump] ... so we hardly know any people in, quote, real America, and we don't have any perception or understanding or feeling for this. But I come from North Carolina and my wife comes from Ohio ... in large parts of the country they have extremely different views than the New York Times or The Washington Post or, you know, the elite media ...

[67:11] I think part of what Trump represents is a rejection of the establishment, especially the political class and the elites, which are places like us, places like Harvard and others, who lots of people in our society don't think have done a great job with the opportunities that our country has had.

[67:33] ... Trump's willingness to not be orthodox, to not be captured by the conventional wisdom, to explore possibilities ...

[68:31] ... he's not beholden to the Jewish community, he's not beholden to the Republican Party, he's not become beholden to the Democratic Party ...

[69:26] ... I think I'm hopeful.

See also:
Everything Under the Heavens and China's Conceptualization of Power
Thucydides Trap, China-US relations, and all that

Friday, June 16, 2017

Scientific Consensus on Cognitive Ability?


From the web site of the International Society for Intelligence Research (ISIR): a summary of the recent debate involving Charles Murray, Sam Harris, Richard Nisbett, Eric Turkheimer, Paige Harden, Razib Khan, Bo and Ben Winegard, Brian Boutwell, Todd Shackelford, Richard Haier, and a cast of thousands! ISIR is the main scientific society for researchers of human intelligence, and is responsible for the Elsevier journal Intelligence.

If you click through to the original, there are links to resources in this debate ranging from podcasts (Harris and Murray), to essays at Vox, Quillette, etc.

I found the ISIR summary via a tweet by Timothy Bates, who sometimes comments here. I wonder what he has to say about all this, given that his work has been cited by both sides :-)
TALKING ABOUT COGNITIVE ABILITY IN 2017

[ Click through for links. ]

2017 has already seen more science-led findings on cognitive ability, and more public discussion about the origins and the social and moral implications of ability, than we have had in some time, which should be good news for those seeking to understand and grow cognitive ability. This post brings together some of these events, linking talk about differences in reasoning that are so near to our sense of autonomy and identity.

Middlebury
Twenty years ago, Dr Charles Murray co-authored a book with Harvard psychologist Richard Herrnstein that opened up a conversation about the role of ability in the fabric of society. In the process it made him famous for several things (most of which he didn't say), for which he and that book – The Bell Curve – came to act as lightning rods, as complex ideas and multiple people were compressed into simpler slogans. Twenty years on, the Middlebury campus showed this has made even speaking to a campus audience fraught with danger.

Waking Up
In the wake of this disrupted meeting, Sam Harris interviewed Dr Murray in a podcast listened to (and viewed on YouTube) by an audience of many thousands, creating a new audience and new interest in ideas about ability, its measurement, and its relevance to modern society.

Vox populi
The Harris podcast led in turn to a response, published in Vox, in which IQ, genetics, and social psychology experts Professors Eric Turkheimer, Paige Harden, and Richard Nisbett responded critically to the ideas raised (and those not raised) which they argue are essential for informed debate on group differences.

Quillette
And that led in turn to two more responses: the first by criminologists and evolutionary psychologists Bo and Ben Winegard, Brian Boutwell, and Todd Shackelford in Quillette, and a second post at Quillette, also supportive of the Murray-Harris interaction, from past president of ISIR and expert intelligence researcher Professor Rich Haier.

And that led to a series of planned essays by Professor Harden (the first of which is now published here) and Eric Turkheimer (here). Each of these posts contains a wealth of valuable information and links to original papers, and they are responsive to one another, addressing points made in the other posts with citations, clarifications, and productive disagreement where that still exists. They’re worth reading.

The answer, in 2017, may be a cautious “Yes – perhaps we can talk about differences in human cognitive ability.” And listen, reply, and perhaps even reach a scientific consensus.

[ Added: 6/15 Vox response from Turkheimer et al. that doesn't appear to be noted in the ISIR summary. ]
In a recent post, NYTimes: In ‘Enormous Success,’ Scientists Tie 52 Genes to Human Intelligence, I noted that scientific evidence overwhelmingly supports the following claims:
0. Intelligence is (at least crudely) measurable
1. Intelligence is highly heritable (much of the variance is determined by DNA)
2. Intelligence is highly polygenic (controlled by many genetic variants, each of small effect)
3. Intelligence is going to be deciphered at the molecular level, in the near future, by genomic studies with very large sample size
I believe that, perhaps modulo the word near in #3, every single listed participant in the above debate would agree with these claims.

(0-3) above take no position on the genetic basis of group differences in measured cognitive ability. That is where most of the debate is focused. However, I think it's fair to say that points (0-3) form a consensus view among leading experts in 2017.

As far as what I think the future will bring, see Complex Trait Adaptation and the Branching History of Mankind.

Thursday, June 15, 2017

Everything Under the Heavens and China's Conceptualization of Power



Howard French discusses his new book, Everything Under the Heavens: How the Past Helps Shape China's Push for Global Power, with Orville Schell. The book is primarily focused on the Chinese historical worldview and how it is likely to affect China's role in geopolitics.

French characterizes his book as, in part,
... an extended exploration of the history of China's conceptualization of power ... and a view as to how ... the associated contest with the United States for primacy ... in the world could play out.
These guys are not very quantitative, so let me clarify a part of their discussion that was left rather ambiguous. It is true that demographic trends are working against China, which has a rapidly aging population. French and Schell talk about a 10-15 year window during which China has to grow rich before it grows old (a well-traveled meme). From the standpoint of geopolitics this is probably not the correct or relevant analysis. China's population is ~4x that of the US. If, say, demographic trends limit this to only an effective 3x or 3.5x advantage in working age individuals, China still only has to reach ~1/3 of US per capita income in order to have a larger overall economy. It seems unlikely that there is any hard cutoff preventing China from reaching, say, 1/2 the US per capita GDP in a few decades. (Obviously a lot of this growth is still "catch-up" growth.) At that point its economy would be the largest in the world by far, and its scientific-technological workforce and infrastructure would be far larger than that of any other country.
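Spelling out that arithmetic with illustrative round numbers:

```python
# The arithmetic in the paragraph above, made explicit (illustrative round numbers).
population_ratio        = 4.0        # China vs US population, per the post
workforce_penalty       = 3.5 / 4.0  # aging cuts the effective ratio to ~3.5x
relative_per_capita_gdp = 1 / 3      # China at ~1/3 of US per-capita income

china_vs_us_gdp = population_ratio * workforce_penalty * relative_per_capita_gdp
print(f"China GDP / US GDP ~ {china_vs_us_gdp:.2f}")   # ~1.2: already the larger economy
```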




Gideon Rachman writes for the FT, so it's not surprising that his instincts seem a bit stronger when it comes to economics. He makes a number of incisive observations during this interview.

At 16min, he mentions that
I was in Beijing about I guess a month before the vote [US election], in fact when the first debates were going on, and the Chinese, I thought that official Chinese [i.e. Government Officials] in our meeting and the sort of semi-official academics were clearly pulling for Trump.
See also Trump Triumph Viewed From China.

Related: Thucydides trap, China-US relations, and all that.

Tuesday, June 13, 2017

Climate Risk and AI Risk for Dummies

The two figures below come from recent posts on climate change and AI. Please read them.

The squiggles in the first figure illustrate uncertainty in how climate will change due to CO2 emissions. The squiggles in the second figure illustrate uncertainty in the advent of human-level AI.



Many are worried about climate change because polar bears, melting ice, extreme weather, sacred Gaia, sea level rise, sad people, etc. Many are worried about AI because job loss, human dignity, Terminator, Singularity, basilisks, sad people, etc.

You can choose to believe in any of the grey curves in the AI graph because we really don't know how long it will take to develop human level AI, and AI researchers are sort of rational scientists who grasp uncertainty and epistemic caution.

You cannot choose to believe in just any curve in a climate graph because if you pick the "wrong" curve (e.g., +1.5 degree Celsius sensitivity to a doubling of CO2, which is fairly benign, but within the range of IPCC predictions) then you are a climate denier who hates science, not to mention a bad person :-(

Oliver Stone confronts Idiocracy



See earlier post Trump, Putin, Stephen Cohen, Brawndo, and Electrolytes.

Note to morons: Russia's 2017 GDP is less than that of France, Brazil, Italy, Canada, and just above that of Korea and Australia. (PPP-adjusted they are still only #6 in the world, between Germany and Indonesia: s-s-scary!) Apart from their nuclear arsenal (which they will struggle to pay for in the future), they are hardly a serious geopolitical competitor to the US and certainly not to the West as a whole. Relax! Trump won the election, not Russia.


This is a longer (and much better) discussion of Putin with Oliver Stone and Stephen Cohen. At 17:30 they discuss the "Russian attack" on our election.

Sunday, June 11, 2017

Rise of the Machines: Survey of AI Researchers


These predictions are from a recent survey of AI/ML researchers. See SSC and also here for more discussion of the results.
When Will AI Exceed Human Performance? Evidence from AI Experts

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI. Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans. These results will inform discussion amongst researchers and policymakers about anticipating and managing trends in AI.
Another figure:


Keep in mind that the track record for this type of prediction, even by experts, is not great:


See below for the cartoon version :-)



Wednesday, June 07, 2017

Complex Trait Adaptation and the Branching History of Mankind


A new paper (94 pages!) investigates signals of recent selection on traits such as height and educational attainment (proxy for cognitive ability). Here's what I wrote about height a few years ago in Genetic group differences in height and recent human evolution:
These recent Nature Genetics papers offer more evidence that group differences in a complex polygenic trait (height), governed by thousands of causal variants, can arise over a relatively short time (~ 10k years) as a result of natural selection (differential response to varying local conditions). One can reach this conclusion well before most of the causal variants have been accounted for, because the frequency differences are found across many variants (natural selection affects all of them). Note the first sentence above contradicts many silly things (drift over selection, genetic uniformity of all human subpopulations due to insufficient time for selection, etc.) asserted by supposed experts on evolution, genetics, human biology, etc. over the last 50+ years. The science of human evolution has progressed remarkably in just the last 5 years, thanks mainly to advances in genomic technology.

Cognitive ability is similar to height in many respects, so this type of analysis should be possible in the near future. ...
The paper below conducts an allele frequency analysis on admixture graphs, which contain information about branching population histories. Thanks to recent studies, they now have enough data to run the analysis on educational attainment as well as height. Among their results: a clear signal that modern East Asians experienced positive selection (~10kya?) for + alleles linked to educational attainment (see left panel of figure above; CHB = Chinese, CEU = Northern Europeans). These variants have also been linked to neural development.
Detecting polygenic adaptation in admixture graphs

Fernando Racimo∗1, Jeremy J. Berg2 and Joseph K. Pickrell1,2 1New York Genome Center, New York, NY 10013, USA 2Department of Biological Sciences, Columbia University, New York, NY 10027, USA June 4, 2017

Abstract
An open question in human evolution is the importance of polygenic adaptation: adaptive changes in the mean of a multifactorial trait due to shifts in allele frequencies across many loci. In recent years, several methods have been developed to detect polygenic adaptation using loci identified in genome-wide association studies (GWAS). Though powerful, these methods suffer from limited interpretability: they can detect which sets of populations have evidence for polygenic adaptation, but are unable to reveal where in the history of multiple populations these processes occurred. To address this, we created a method to detect polygenic adaptation in an admixture graph, which is a representation of the historical divergences and admixture events relating different populations through time. We developed a Markov chain Monte Carlo (MCMC) algorithm to infer branch-specific parameters reflecting the strength of selection in each branch of a graph. Additionally, we developed a set of summary statistics that are fast to compute and can indicate which branches are most likely to have experienced polygenic adaptation. We show via simulations that this method - which we call PhenoGraph - has good power to detect polygenic adaptation, and applied it to human population genomic data from around the world. We also provide evidence that variants associated with several traits, including height, educational attainment, and self-reported unibrow, have been influenced by polygenic adaptation in different human populations.

https://doi.org/10.1101/146043
From the paper:
We find evidence for polygenic adaptation in East Asian populations at variants that have been associated with educational attainment in European GWAS. This result is robust to the choice of data we used (1000 Genomes or Lazaridis et al. (2014) panels). Our modeling framework suggests that selection operated before or early in the process of divergence among East Asian populations - whose earliest separation dates at least as far back as approximately 10 thousand years ago [42, 43, 44, 45] - because the signal is common to different East Asian populations (Han Chinese, Dai Chinese, Japanese, Koreans, etc.). The signal is also robust to GWAS ascertainment (Figure 6), and to our modeling assumptions, as we found a significant difference between East Asian and non- East-Asian populations even when performing a simple binomial sign test (Tables S4, S9, S19 and S24).
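For those unfamiliar with it, here is a sketch of what such a binomial sign test looks like (my reading of the idea, with hypothetical counts; not the authors' code or data): among independent trait-increasing alleles from a GWAS, count how often the allele is at higher frequency in population A than in population B, and compare with the 50/50 expectation under no directional selection.

```python
# Sketch of a binomial sign test for a directional frequency shift.
from scipy.stats import binomtest

# Hypothetical counts: of 120 independent trait-increasing alleles, 78 are at
# higher frequency in population A than in population B.
n_loci, n_higher_in_A = 120, 78
result = binomtest(n_higher_in_A, n_loci, p=0.5, alternative="two-sided")
print(f"sign-test p-value: {result.pvalue:.4f}")   # a small p-value suggests a directional shift
```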

Sunday, June 04, 2017

Epistemic Caution and Climate Change

I have not, until recently, invested significant time in trying to understand climate modeling. These notes are primarily for my own use; however, I welcome comments from readers who have studied this issue in more depth.

I take a dim view of people who express strong opinions about complex phenomena without having understood the underlying uncertainties. I have yet to personally encounter anyone who claims to understand all of the issues discussed below, but I constantly meet people with strong views about climate change.

See my old post on epistemic caution Intellectual honesty: how much do we know?
... when it comes to complex systems like society or economy (and perhaps even climate), experts have demonstrably little predictive power. In rigorous studies, expert performance is often no better than random.  
... worse, experts are usually wildly overconfident about their capabilities. ... researchers themselves often have beliefs whose strength is entirely unsupported by available data.
Now to climate and CO2. AFAIU, the heating effect due to an increasing CO2 concentration is only a logarithmic function of that concentration (all the absorption is in a narrow frequency band). The main heating effects in climate models come from secondary effects, such as the water vapor distribution in the atmosphere, which are neither calculable from first principles nor under good experimental/observational control. Certainly any "catastrophic" outcomes would have to result from these secondary feedback effects.

The first paper below gives an elementary calculation of direct effects from atmospheric CO2. This is the "settled science" part of climate change -- it depends on relatively simple physics. The prediction is about 1 degree Celsius of warming from a doubling of CO2 concentration. Anything beyond this is due to secondary effects which, in their totality, are not well understood -- see second paper below, about model tuning, which discusses rather explicitly how these unknowns are dealt with.
Simple model to estimate the contribution of atmospheric CO2 to the Earth’s greenhouse effect
Am. J. Phys. 80, 306 (2012)
http://dx.doi.org/10.1119/1.3681188

We show how the CO2 contribution to the Earth’s greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the “climate sensitivity” (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere’s temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.
From Conclusions:
... The question of feedbacks, in its broadest sense, is the whole question of climate change: namely, how much and in which way can we expect the Earth to respond to an increase of the average surface temperature of the order of 1 degree, arising from an eventual doubling of the concentration of CO2 in the atmosphere? And what further changes in temperature may result from this response? These are, of course, questions for climate scientists to resolve. ...
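For concreteness, the "about 1 degree Celsius per doubling" figure can be reproduced from the standard logarithmic forcing approximation plus a simple Planck (no-feedback) response. These are textbook-level numbers, not taken from the paper above.

```python
# Rough reproduction of the no-feedback climate sensitivity estimate.
import math

def forcing(c_ratio):
    """Radiative forcing in W/m^2 for a CO2 concentration ratio c/c0 (Myhre et al. log form)."""
    return 5.35 * math.log(c_ratio)

# Planck (no-feedback) response: dT ~ dF / (4 sigma T_eff^3), with T_eff ~ 255 K.
sigma, T_eff = 5.67e-8, 255.0
planck = 4 * sigma * T_eff**3              # ~3.8 W/m^2 per K

dF = forcing(2.0)                          # doubling of CO2
print(f"forcing for 2x CO2: {dF:.2f} W/m^2")         # ~3.7 W/m^2
print(f"no-feedback warming: {dF / planck:.2f} K")    # ~1 K
```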
The paper below concerns model tuning. It should be apparent that there are many adjustable parameters hidden in any climate model. One wonders whether the available data, given their own uncertainties, can constrain this high-dimensional parameter space sufficiently to produce predictive power in a rigorous statistical sense.

The first figure below illustrates how different choices of these parameters can affect model predictions. Note the huge range of possible outcomes! The second figure below illustrates some of the complex physical processes which are subsumed in the parameter choices. Over longer timescales (e.g., decades), uncertainties such as the response of ecosystems (e.g., plant growth rates) to increased CO2 would also play a role in the models. It is obvious that we do not (may never?) have control over these unknowns.
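As a purely illustrative toy example (not a real climate model; every number below is invented), the following Python sketch shows how two adjustable parameters can compensate for each other: both settings reproduce a noisy "historical" record equally well, yet they diverge substantially when extrapolated forward. This is the underdetermination problem in miniature.

import numpy as np

# Toy illustration of parameter compensation: a "sensitivity" factor and an
# "aerosol" scaling trade off against one another. Both settings below fit the
# fictitious historical record equally well, but diverge in the extrapolation.
rng = np.random.default_rng(0)
t = np.arange(0, 121)          # fictitious years since 1900
ghg = 0.02 * t                 # invented greenhouse forcing trend
aerosol = -0.01 * t            # invented offsetting aerosol trend

def anomaly(sensitivity, aerosol_scale, ghg_f, aero_f):
    return sensitivity * (ghg_f + aerosol_scale * aero_f)

obs = anomaly(1.0, 0.5, ghg, aerosol) + rng.normal(0, 0.05, len(t))

# For the extrapolation, suppose the greenhouse trend continues to "2100" while
# the aerosol trend levels off at its final historical value.
ghg_2100, aero_2100 = 0.02 * 200, -0.01 * 120

for s, a in [(1.0, 0.5), (2.0, 1.25)]:
    rms = np.sqrt(np.mean((anomaly(s, a, ghg, aerosol) - obs) ** 2))
    proj = anomaly(s, a, ghg_2100, aero_2100)
    print(f"sensitivity={s}, aerosol_scale={a}: rms fit={rms:.3f}, projected anomaly={proj:.2f}")

Real tuning involves far more parameters and far more kinds of data, but the basic worry -- many parameter settings consistent with the observations, yet divergent in projection -- is the same.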
THE ART AND SCIENCE OF CLIMATE MODEL TUNING

Bulletin of the American Meteorological Society, March 2017

... Climate model development is founded on well-understood physics combined with a number of heuristic process representations. The fluid motions in the atmosphere and ocean are resolved by the so-called dynamical core down to a grid spacing of typically 25–300 km for global models, based on numerical formulations of the equations of motion from fluid mechanics. Subgrid-scale turbulent and convective motions must be represented through approximate subgrid-scale parameterizations (Smagorinsky 1963; Arakawa and Schubert 1974; Edwards 2001). These subgrid-scale parameterizations include coupling with thermodynamics; radiation; continental hydrology; and, optionally, chemistry, aerosol microphysics, or biology.

Parameterizations are often based on a mixed, physical, phenomenological and statistical view. For example, the cloud fraction needed to represent the mean effect of a field of clouds on radiation may be related to the resolved humidity and temperature through an empirical relationship. But the same cloud fraction can also be obtained from a more elaborate description of processes governing cloud formation and evolution. For instance, for an ensemble of cumulus clouds within a horizontal grid cell, clouds can be represented with a single-mean plume of warm and moist air rising from the surface (Tiedtke 1989; Jam et al. 2013) or with an ensemble of such plumes (Arakawa and Schubert 1974). Similar parameterizations are needed for many components not amenable to first-principle approaches at the grid scale of a global model, including boundary layers, surface hydrology, and ecosystem dynamics. Each parameterization, in turn, typically depends on one or more parameters whose numerical values are poorly constrained by first principles or observations at the grid scale of global models. Being approximate descriptions of unresolved processes, there exist different possibilities for the representation of many processes. The development of competing approaches to different processes is one of the most active areas of climate research. The diversity of possible approaches and parameter values is one of the main motivations for model inter-comparison projects in which a strict protocol is shared by various modeling groups in order to better isolate the uncertainty in climate simulations that arises from the diversity of models (model uncertainty). ...

... All groups agreed or somewhat agreed that tuning was justified; 91% thought that tuning global-mean temperature or the global radiation balance was justified (agreed or somewhat agreed). ... the following were considered acceptable for tuning by over half the respondents: atmospheric circulation (74%), sea ice volume or extent (70%), and cloud radiative effects by regime and tuning for variability (both 52%).
[Figures from the paper, omitted here: the spread of model projections under different parameter choices, and a schematic of the physical processes subsumed in the parameterizations.]
Here is Steve Koonin, formerly Obama's Undersecretary for Science at DOE and a Caltech theoretical physicist, calling for a "Red Team" analysis of climate science, just a few months ago (un-gated link):
WSJ: ... The outcome of a Red/Blue exercise for climate science is not preordained, which makes such a process all the more valuable. It could reveal the current consensus as weaker than claimed. Alternatively, the consensus could emerge strengthened if Red Team criticisms were countered effectively. But whatever the outcome, we scientists would have better fulfilled our responsibilities to society, and climate policy discussions would be better informed.

Note Added: In 2014 Koonin ran a one-day workshop for the APS (American Physical Society), inviting six leading climate scientists to present their work and engage in an open discussion. The APS committee responsible for reviewing the organization's statement on climate change was the main audience for the discussion. The 570+ page transcript, which is quite informative, is here. See Physics Today coverage, and an annotated version of Koonin's WSJ summary.

Below are some key questions Koonin posed to the panelists in preparation for the workshop. After the workshop he declared: "The idea that 'Climate science is settled' runs through today's popular and policy discussions. Unfortunately, that claim is misguided."
The estimated equilibrium climate sensitivity to CO2 has remained between 1.5 and 4.5 degrees Celsius in the IPCC reports since 1979, except for AR4, where it was given as 2-5.5.

What gives rise to the large uncertainties (factor of three!) in this fundamental parameter of the climate system?

How is the IPCC’s expression of increasing confidence in the detection/attribution/projection of anthropogenic influences consistent with this persistent uncertainty?

Wouldn’t detection of an anthropogenic signal necessarily improve estimates of the response to anthropogenic perturbations?
I seriously doubt that the process by which the 1.5 to 4.5 range is computed is statistically defensible. From the transcript, it appears that IPCC results of this kind are largely the result of "Expert Opinion" rather than a specific computation! It is rather curious that the range has not changed in 30+ years, despite billions of dollars spent on this research. More here.

Saturday, June 03, 2017

Python Programming in one video



Putting this here in hopes I can get my kids to watch it at some point 8-)

Please recommend similar resources in the comments!

Wednesday, May 31, 2017

The mystery of genius at Slate Star Codex


Three excellent posts at Slate Star Codex. Don't miss the comments -- there are over a thousand, many of them very good.

THE ATOMIC BOMB CONSIDERED AS HUNGARIAN HIGH SCHOOL SCIENCE FAIR PROJECT
A group of Manhattan Project physicists created a tongue-in-cheek mythology where superintelligent Martian scouts landed in Budapest in the late 19th century and stayed for about a generation, after which they decided the planet was unsuitable for their needs and disappeared. The only clue to their existence were the children they had with local women.

The joke was that this explained why the Manhattan Project was led by a group of Hungarian supergeniuses, all born in Budapest between 1890 and 1920. These included Manhattan Project founder Leo Szilard, H-bomb creator Edward Teller, Nobel-Prize-winning quantum physicist Eugene Wigner, and legendary polymath John von Neumann, namesake of the List Of Things Named After John Von Neumann.

The coincidences actually pile up beyond this. Von Neumann, Wigner, and possibly Teller all went to the same central Budapest high school at about the same time, leading a friend to joke about the atomic bomb being basically a Hungarian high school science fair project. ...
See also

HUNGARIAN EDUCATION II: FOUR NOBEL TRUTHS


and

HUNGARIAN EDUCATION III: MASTERING THE CORE TEACHINGS OF THE BUDAPESTIANS

... Laszlo Polgar studied intelligence in university, and decided he had discovered the basic principles behind raising any child to be a genius. He wrote a book called Bring Up Genius and recruited an interested woman to marry him so they could test his philosophy by raising children together. He said a bunch of stuff on how ‘natural talent’ was meaningless and so any child could become a prodigy with the right upbringing.

This is normally the point where I’d start making fun of him. Except that when he trained his three daughters in chess, they became the 1st, 2nd, and 6th best female chess players in the world, gaining honors like “youngest grandmaster ever” and “greatest female chess player of all time”. Also they spoke seven languages, including Esperanto.

Their immense success suggests that education can have a major effect even on such traditional genius-requiring domains as chess ability. How can we reconcile that with the rest of our picture of the world, and how obsessed should we be with getting a copy of Laszlo Polgar’s book? ...

Friday, May 26, 2017

Borges, blogging, and a vast circle of invisible friends


This blog gets about 100k page views per month. My sense is that there are a lot of additional views through RSS feeds and social media (FB, G+, etc.), but those are hard to track. Most of the hits are on the main landing page, with a smaller fraction going to a specific article. I'd guess that someone hitting the landing page looks at a few posts, so there are probably at least 200k article views per month. I write somewhat fewer than 20 posts per month, which suggests that a typical post is read ~10k times. Some outlier posts get a lot of traffic from inbound links and search engine results even years after they were written. These have far more than 10k cumulative views, according to logs. From cookies, I can see that there are many thousands of regular readers (i.e., who visit at least several times a month).

Is there any better way to estimate impact/reach than what I've described above?

For comparison, I was told that a serious non-fiction book on the NY Times Best Seller list might sell ~10k copies. So it seems possible my blog has a significantly greater reach than what I could expect from writing a book. I've thought about writing books at various times, but have always been too busy. I fantasize about writing more when I retire, or later in my career :-)

When I attend meetings or conferences, I often bump into people I don't know who tell me they read my blog. This seems to be true whether the participants are scientists, technologists, investors, or academics. I'm guessing that for every person who tells me that they're a reader, there must be many more who are readers but don't volunteer the information. If you ever see me in person, please come right up and say hello! :-)

I've been told by some people that they have tried to read this blog but find it hard to understand. I suppose that regular readers are mostly well above average in intelligence.

Borges once said

... the life of a writer is a lonely one. You think you are alone, and as the years go by, if the stars are on your side, you may discover that you are at the center of a vast circle of invisible friends whom you will never get to know, but who love you. And that is an immense reward.

Thursday, May 25, 2017

Von Neumann, in his head


From Robert Jungk's Brighter than a Thousand Suns: A Personal History of the Atomic Scientists.

The H-bomb project:
... Immediately after the White House directive the Theoretical Division at Los Alamos had started calculations for the new bomb.

... There was a meeting in Teller's office with Fermi, von Neumann, and Feynman ... Many ideas were thrown back and forth and every few minutes Fermi or Teller would devise a quick numerical check and then they would spring into action. Feynman on the desk calculator, Fermi with the little slide rule he always had with him, and von Neumann, in his head. The head was usually first, and it is remarkable how close the three answers always checked.
The MANIAC:
... When von Neumann released his last invention for use, it aroused the admiration of all who worked with it. Carson Mark, head of the Theoretical Division at Los Alamos, recollects that 'a problem which would have otherwise kept three people busy for three months could be solved by the aid of this computer, worked by the same three people, in about ten hours. The physicist who had set the task, instead of having to wait for a quarter of a year before he could get on, received the data he required for his further work the same evening. A whole series of such three months' calculations, narrowed down to a single working day, were needed for the production of the hydrogen bomb.

It was a calculating machine, therefore, which was the real hero of the work on the construction of the bomb. It had a name of its own, like all the other electronic brains. Von Neumann had always been fond of puns and practical jokes. When he introduced his machine to the Atomic Energy Commission under the high-sounding name of 'Mathematical Analyser, Numerical Integrator and Computer', no one noticed anything odd about this designation except that it was rather too ceremonious for everyday use. It was not until the initial letters of the six words were run together that those who used the miraculous new machine realized that the abbreviation spelled 'maniac'.

Wednesday, May 24, 2017

AI knows best: AlphaGo "like a God"


Humans are going to have to learn to "trust the AI" without understanding why it is right. I often make an analogous point to my kids -- "At your age, if you and Dad disagree, chances are that Dad is right" :-)  Of course, I always try to explain the logic behind my thinking, but in the case of some complex machine optimizations (e.g., Go strategy), humans may not be able to understand even the detailed explanations.

In some areas of complex systems -- neuroscience, genomics, molecular dynamics -- we also see machine prediction that is superior to other methods, but difficult even for scientists to understand. When hundreds or thousands of genes combine to control many dozens of molecular pathways, what kind of explanation can one offer for why a particular setting of the controls (DNA pattern) works better than another?

There was never any chance that the functioning of a human brain, the most complex known object in the universe, could be captured in verbal explication of the familiar kind (non-mathematical, non-algorithmic). The researchers that built AlphaGo would be at a loss to explain exactly what is going on inside its neural net...
NYTimes: ... “Last year, it was still quite humanlike when it played,” Mr. Ke said after the game. “But this year, it became like a god of Go.”

... After he finishes this week’s match, he said, he would focus more on playing against human opponents, noting that the gap between humans and computers was becoming too great. He would treat the software more as a teacher, he said, to get inspiration and new ideas about moves.

“AlphaGo is improving too fast,” he said in a news conference after the game. “AlphaGo is like a different player this year compared to last year.”
On earlier encounters with AlphaGo:
“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”

Monday, May 22, 2017

NYTimes: In ‘Enormous Success,’ Scientists Tie 52 Genes to Human Intelligence


The Nature Genetics paper below made a big splash in today's NYTimes: In ‘Enormous Success,’ Scientists Tie 52 Genes to Human Intelligence. The picture above is of a UK Biobank storage facility for blood (DNA) samples.

The results are not especially surprising to people who have been following the subject, but this is the largest sample of genomes and cognitive scores yet analyzed (~80k individuals). SSGAC has assembled a much larger dataset (~750k, soon to be over 1M; over 600 genome-wide significant SNP hits), but is working with a proxy phenotype for cognitive ability: years of education.
Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence

Nature Genetics (2017) doi:10.1038/ng.3869
Received 10 January 2017 Accepted 24 April 2017 Published online 22 May 2017

Intelligence is associated with important economic and health-related life outcomes [1]. Despite intelligence having substantial heritability [2] (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered [3-5]. Here we report a meta-analysis for intelligence of 78,308 individuals. We identify 336 associated SNPs (METAL P < 5 × 10^-8) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10^-6), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10^-6). Despite the well-known difference in twin-based heritability [2] for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (r_g = 0.89, LD score regression P = 5.4 × 10^-29). These findings provide new insight into the genetic architecture of intelligence.
Perhaps the most interesting aspect of this study is the further evidence it provides that many (the vast majority?) of the hits discovered by SSGAC are indeed correlated with cognitive ability (as opposed to other traits such as Conscientiousness, which might influence educational attainment without affecting intelligence):
To examine the robustness of the 336 SNPs and 47 genes that reached genome-wide significance in the primary analyses, we sought replication. Because there are no reasonably large GWAS for intelligence available and given the high genetic correlation with educational attainment, which has been used previously as a proxy for intelligence [7], we used the summary statistics from the latest GWAS for educational attainment [21] for proxy-replication (Online Methods). We first deleted overlapping samples, resulting in a sample of 196,931 individuals for educational attainment. Of the 336 top SNPs for intelligence, 306 were available for look-up in educational attainment, including 16 of the independent lead SNPs. We found that the effects of 305 of the 306 available SNPs in educational attainment were sign concordant between educational attainment and intelligence, as were the effects of all 16 independent lead SNPs (exact binomial P < 10^-16; Supplementary Table 14). ...
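The strength of the sign-concordance evidence in the excerpt above is easy to check with a back-of-the-envelope calculation: under the null hypothesis of no shared signal, each SNP's effect sign in the educational attainment GWAS would agree with the intelligence GWAS only by chance (probability 1/2). A minimal sketch in Python (the 305-of-306 count is taken from the excerpt; scipy's binomtest does the exact binomial test):

from scipy.stats import binomtest

# Under the null of no shared signal, each of the 306 SNPs has a 50% chance of
# being sign-concordant between the intelligence and educational-attainment GWAS.
result = binomtest(k=305, n=306, p=0.5, alternative="greater")
print(result.pvalue)  # astronomically small, consistent with the reported P < 10^-16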
Carl Zimmer did a good job with the Times story. The basic ideas, that
0. Intelligence is (at least crudely) measurable
1. Intelligence is highly heritable (much of the variance is determined by DNA)
2. Intelligence is highly polygenic (controlled by many genetic variants, each of small effect)
3. Intelligence is going to be deciphered at the molecular level, in the near future, by genomic studies with very large sample size 
are now supported by overwhelming scientific evidence. Nevertheless, they are and have been heavily contested by anti-Science ideologues.

For further discussion of points (0-3), see my article On the genetic architecture of intelligence and other quantitative traits.

Sunday, May 21, 2017

Contingency, History, and the Atomic Bomb

How Alexander Sachs, acting on behalf of Szilard and Einstein, narrowly convinced FDR to initiate the atomic bomb project. History sometimes hangs on a fragile thread: had the project been delayed a year, atomic weapons might not have been used in WWII. Had the project been completed a year earlier, the bombs might have been used against Germany.

See also A Brief History of the Future, as told to the Masters of the Universe.


Excerpts below are from Robert Jungk's Brighter than a Thousand Suns: A Personal History of the Atomic Scientists. (Note the book contains inaccuracies concerning the wartime role of German physicists such as Weizsacker and Heisenberg.)

Alexander Sachs:
... This international financier could always obtain entry to the White House, for he had often amazed Roosevelt by his usually astonishingly accurate forecasts of economic events. Ever since 1933 Sachs had been one of the unofficial but extremely influential advisers of the American President, all of whom had to possess, by F. D. R.'s own definition, 'great ability, physical vitality, and a real passion for anonymity'.


... It was nearly ten weeks before Alexander Sachs at last found an opportunity, on October 11, 1939, to hand President Roosevelt, in person, the letter composed by [Leo] Szilard and signed by [Albert] Einstein at the beginning of August [1939]. In order to ensure that the President should thoroughly appreciate the contents of the document and not lay it aside with a heap of other papers awaiting attention, Sachs read to him, in addition to the message and an appended memorandum by Szilard, a further much more comprehensive statement by himself. The effect of these communications was by no means so overpowering as Sachs had expected. Roosevelt, wearied by the prolonged effort of listening to his visitor, made an attempt to disengage himself from the whole affair. He told the disappointed reader that he found it all very interesting but considered government intervention to be premature at this stage.

Sachs, however, was able, as he took his leave, to extort from the President the consolation of an invitation to breakfast the following morning. "That night I didn't sleep a wink," Sachs remembers. "I was staying at the Carlton Hotel [two blocks north of the White House]. I paced restlessly to and fro in my room or tried to sleep sitting in a chair. There was a small park quite close to the hotel. Three or four times, I believe, between eleven in the evening and seven in the morning, I left the hotel, to the porter's amazement, and went across to the park. There I sat on a bench and meditated. What could I say to get the President on our side in this affair, which was already beginning to look practically hopeless? Quite suddenly, like an inspiration, the right idea came to me. I returned to the hotel, took a shower and shortly afterwards called once more at the White House."

Roosevelt was sitting alone at the breakfast table, in his wheel chair, when Sachs entered the room. The President inquired in an ironical tone:

"What bright idea have you got now? How much time would you like to explain it?"

Dr. Sachs says he replied that he would not take long.

"All I want to do is to tell you a story. During the Napoleonic wars a young American inventor came to the French Emperor and offered to build a fleet of steamships with the help of which Napoleon could, in spite of the uncertain weather, land in England. Ships without sails? This seemed to the great Corsican so impossible that he sent [Robert] Fulton away. In the opinion of the English historian Lord Acton, this is an example of how England was saved by the shortsightedness of an adversary. Had Napoleon shown more imagination and humility at that time, the history of the nineteenth century would have taken a very different course."

After Sachs finished speaking the President remained silent for several minutes. Then he wrote something on a scrap of paper and handed it to the servant who had been waiting at table. The latter soon returned with a parcel which, at Roosevelt's order, he began slowly to unwrap. It contained a bottle of old French brandy of Napoleon's time, which the Roosevelt family had possessed for many years. The President, still maintaining a significant silence, told the man to fill two glasses. Then he raised his own, nodded to Sachs and drank to him.

Next he remarked: "Alex, what you are after is to see that the Nazis don't blow us up?"

"Precisely."

It was only then that Roosevelt called in his attaché, [Brigadier] General [Edwin] "Pa" Watson, and addressed him—pointing to the documents Sachs had brought—in words which have since become famous:

"Pa, this requires action!"
More on the challenges:
Teller criticizes as follows one of these excessively rosy views of the early history of the American atom bomb: 'There is no mention of the futile efforts of the scientists in 1939 to awaken the interest of the military authorities in the atomic bomb. The reader does not learn about the dismay of scientists faced with the necessity of planned research. He does not find out about the indignation of engineers asked to believe in the theory and on such an airy basis to construct a plant.'

Wigner remembers the resistance. 'We often felt as though we were swimming in syrup,' he remarks. Boris Pregel, a radium expert, without whose disinterested loan of uranium the first experiments at Columbia University would have been impossible, comments: 'It is a wonder that after so many blunders and mistakes anything was ever accomplished at all.' Szilard still believes today that work on the uranium project was delayed for at least a year by the short-sightedness and sluggishness of the authorities. Even Roosevelt's manifest interest in the plan scarcely accelerated its execution. ...
