Tuesday, October 18, 2022
Bernanke v. Kindleberger: Which Credit Channel?
By Perry G. Mehrling
OCT 13, 2022 | MACROECONOMICS
In the papers of economist Charles Kindleberger, Perry Mehrling found notes on the paper that won Ben Bernanke his Nobel Prize.
In the 1983 paper cited as the basis for Bernanke’s Nobel award, the first footnote states: “I have received useful comments from too many people to list here by name, but I am grateful to each of them.” One of those unnamed commenters was Charles P. Kindleberger, who taught at MIT full-time until mandatory retirement in 1976 and then half-time for another five years. Bernanke himself earned his MIT Ph.D. in 1979, whereupon he shifted to Stanford as Assistant Professor. Thus it was natural for him to send his paper to Kindleberger for comment, and perhaps also natural for Kindleberger to respond.
As it happens, the carbon copy of that letter has been preserved in the Kindleberger Papers at MIT, and that copy is reproduced below as possibly of contemporary interest. All footnotes are mine, referencing the specific passages of the published paper, a draft copy of which Kindleberger is apparently addressing, and filling in context that would have been familiar to both Bernanke and Kindleberger but may not be to a modern reader. With these explanatory notes, the text speaks for itself and requires no further commentary from me.
“May 1, 1982
Dr. Ben Bernanke
Graduate School of Business
Stanford University
Stanford, CA 94305
Dear Dr. Bernanke,
Thank you for sending me your paper on the great depression. You ask for comments, and I assume this is not merely ceremonial. I am afraid you will not in fact welcome them.
I think you have provided a most ingenious solution to a non-problem.[1] The necessity to demonstrate that financial crisis can be deleterious to production arises only in the scholastic precincts of the Chicago school with what Reder called in the last JEL its tight priors, or TP.[2] If one believes in rational expectations, a natural rate of unemployment, efficient markets, exchange rates continuously at purchasing power parities, there is not much that can be explained about business cycles or financial crises. For a Chicagoan, you are courageous to depart from the assumption of complete markets.[3]
You wave away Minsky and me for departing from rational assumptions.[4] Would you not accept that it is possible for each participant in a market to be rational but for the market as a whole to be irrational because of the fallacy of composition? If not, how can you explain chain letters, betting on lotteries, panics in burning theatres, stock market and commodity bubbles as the Hunts in silver, the world in gold, etc… Assume that the bootblack, waiters, office boys etc of 1929 were rational and Paul Warburg who said the market was too high in February 1929 was not entitled to such an opinion. Each person hoping to get in an[d] out in time may be rational, but not all can accomplish it.
Your data are most interesting and useful. It was not Temin who pointed to the spread (your DIF) between governts [sic] and Baa bond yields, but Friedman and Schwartz.[5] Column 4 also interests me for its behavior in 1929. It would be interesting to disaggregate between loans on securities on the one hand and loans and discounts on the other.
Your rejection of money illusion (on the ground of rationality) throws out any role for price changes. I think this is a mistake on account at least of lags and dynamics. No one of the Chicago stripe pays attention to the sharp drop in commodity prices in the last quarter of 1929, caused by the banks, in their concern over loans on securities, to finance commodities sold in New York on consignment (and auto loans).[6] This put the pressure on banks in areas with loans on commodities. The gainers from the price declines were slow in realizing their increases. The banks of the losers failed. Those of the ultimate winners did not expand.
Note, too, the increase in failures, the decrease in credit and the rise in DIF in the last four of five months of 1931.[7] Much of this, after September 21, was the consequence of the appreciation of the dollar from $4.86 to $3.25.[8] Your international section takes no account of this because prices don’t count in your analysis. In The World in Depression, 1929-1939, which you do not list,[9] I make much of this structural deflation, the mirror analogue of structural inflation today from core inflation and the oil shock. But your priors do not permit you to think them of any importance.
Sincerely yours,
[Charles P. Kindleberger]”
References
Bernanke, Ben S. 1983. “Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression.” American Economic Review 73, no. 3 (June): 257–276.
Kindleberger, Charles P. 1973. The World in Depression, 1929-1939. Berkeley CA: University of California Press.
Kindleberger, Charles P. 1978. Manias, Panics and Crashes: A History of Financial Crises. New York: Basic Books.
Kindleberger, Charles P. 1985. Keynesianism vs. Monetarism and Other Essays in Financial History. London: George Allen and Unwin.
Kindleberger, Charles P., and Jean-Pierre Laffargue, eds. 1982. Financial Crises: Theory, History, and Policy. Cambridge: Cambridge University Press.
Mehrling, Perry. 2022. Money and Empire: Charles P. Kindleberger and the Dollar System. Cambridge: Cambridge University Press.
Notes
[1] Bernanke (1983, 258): “reconciliation of the obvious inefficiency of the depression with the postulate of rational private behavior”.
[2] Reder, Melvin W. “Chicago Economics: Permanence and Change.” Journal of Economic Literature 20, no. 1 (March 1982): 1–38. Bernanke (1983, 257) states explicitly, “the present paper builds on the Friedman-Schwartz work…”
[3] Bernanke (1983, 257): “The basic premise is that, because markets for financial claims are incomplete, intermediation between some classes of borrowers and lenders requires nontrivial market-making and information-gathering services.” And again at p. 263: “We shall clearly not be interested in economies of the sort described by Eugene Fama (1980), in which financial markets are complete and information/transactions costs can be neglected.”
[4] Bernanke (1983, 258): “Hyman Minsky (1977) and Charles Kindleberger (1978) have in several places argued for the inherent instability of the financial system, but in doing so have had to depart from the assumption of rational economic behavior.” It is perhaps relevant to observe that elsewhere Kindleberger takes pains to point out the limitations of the Minsky model for explaining the great depression: “it is limited to the United States; there are no capital movements, no exchange rates, no international commodity prices, nor even any impact of price changes on bank liquidity for domestic commodities; all assets are financial.” (Kindleberger 1985, 302) This passage appears in Kindleberger’s contribution to a 1981 conference sponsored by the Banca di Roma and MIT’s Sloan School of Management, which followed on a 1979 Bad Homburg conference that also included both men, the proceedings of which were published as Financial Crises: Theory, History, and Policy (Cambridge 1982).
[5] Bernanke (1983, 262): “DIF = difference (in percentage points) between yields on Baa corporate bonds and long-term U.S. government bonds”.
[6] It is exactly the sharp drop in commodity prices that Kindleberger puts at the center of his explanation of why the depression was worldwide since commodity prices are world prices. Kindleberger (1973, 104): “The view taken here is that symmetry may obtain in the scholar’s study, but that it is hard to find in the real world. The reason is partly money illusion, which hides the fact of the gain in purchasing power from the consumer countries facing lower prices; and partly the dynamics of deflation, which produce an immediate response in the country of falling prices, and a slow one, often overtaken by spreading deflation, in the country with improved terms of trade, i.e. lower import prices.”
[7] Bernanke’s Table 1 cites August-December DIF figures as follows: 4.29, 4.82, 5.41, 5.30, 6.49.
[8] September 21 is of course the date when the Bank of England took sterling off gold, see Kindleberger (1973, 167-170).
[9] The published version, Bernanke (1983), still does not list Kindleberger (1973), citing only Kindleberger (1978), Manias, Panics, and Crashes. Notably, the full title of that book also includes the words “A History of Financial Crises.” Kindleberger himself quite explicitly frames Manias as an extension of the Depression book, now including all of the international financial crises he can find. Later commentary, however, follows Bernanke in viewing Kindleberger (1978) instead as an extension of Minsky’s essentially domestic Financial Instability Hypothesis, which is not correct. On this point see footnote 4 and, more generally, Chapter 8 of my book Money and Empire (Cambridge 2022).
Friday, September 30, 2022
SCOTUS Mailbag talk: If you were creating a new constitutional order from scratch...?
Captured from Matt Yglesias’s Substack: an interesting conversation (avatars for names) on comparative approaches, mainly Canadian, to constitutional reform of the supreme judiciary.
Lost Future: How do you feel philosophically about judicial review being part of a country's political system? I remember when I learned in school that a majority of developed countries actually don't have true judicial review, they practice 'parliamentary sovereignty' and the legislature can just pass whatever they want.... Was pretty shocking. But, most of those countries are in the EU, so aren't they now all subject to the EU Court of Human Rights? So maybe that's no longer true, I dunno.
If you were creating a new constitutional order from scratch, would you empower a supreme judiciary to strike down 'unconstitutional' laws? Or is that too subjective & inherently partisan? I think we've all heard criticisms that the justices are just unelected politicians, etc. etc. One reasonable compromise (for the US) that I was thinking is that it should require a supermajority to declare a law unconstitutional- using a raw majority to determine what should be a fundamental question is pretty dumb. Also, individual judges should have a lot less power in our system. Open to hearing your thoughts though!
It’s important to distinguish between two separate ideas. One is judicial review of laws to assess their conformity with the constitution. The other is the idea that the courts should be the people who “go last” in an interbranch conflict.
I think the Canadian system — in which laws are absolutely reviewed by the judiciary for conformity with the Charter of Rights and Freedoms, but Parliament has the right to overrule the Supreme Court — is good. Overrides do happen under this system, but relatively rarely — the Court’s rulings are not a dead letter. One reason they are not a dead letter is that the Court has a decent amount of legitimacy. But one reason they preserve that legitimacy is the Supreme Court is not a locus of massive partisan conflict. And that’s because strong policy-demanders at odds with an important constitutional ruling have a more promising course of action than politicizing the judiciary — they can just push for parliamentary override. To me, it’s a good system.
But note that in the United States, a lot of the de facto power of the judiciary comes from non-constitutional cases. Because of bicameralism, presidentialism, and the filibuster, the stakes in judicial interpretation of statutes are very high here. If the Supreme Court of Canada rules that some Canadian air pollution regulation violates the law and Parliament feels they don’t like the outcome, they can just pass a new law that clarifies the point. In America, if the Supreme Court rules that the EPA can’t regulate greenhouse gas emissions, then that is a de facto guarantee that there will be no emissions regulation because the barrier to passing a new law is so high in our country.
This is why on some level, I think “judicial review” is the wrong thing to ask questions about. Obviously courts need to be able to do statutory interpretation. But what we have in the United States is an extremely low-productivity legislature that in practice devolves massive amounts of power…
Tuesday, September 27, 2022
Matt Yglesias: Beating climate change absolutely requires new technology
via Slow Boring at Substack
Beating climate change absolutely requires new technology
We have what we need to drastically cut emissions — but we're going to need much more
Matt Yglesias
I was thinking of writing about why I disagree with Farhad Manjoo’s column arguing that nuclear power still doesn’t make sense, but I saw some tweets about a failed carbon capture and sequestration (CCS) project and realized that I never replied to an email asking why Slow Boring has paid tens of thousands of dollars for direct air capture (DAC) carbon removal programs.
What makes these topics hard to write about is that they involve complicated technical questions, and the honest truth is that well-qualified technical experts disagree about all of them.
But what links these topics together is that if we want to navigate the climate crisis in a remotely acceptable way, the world is going to need to develop some technologies that are currently unproven.
The opposite view is rarely expressed in explicit terms (because it’s wrong), but it often implicitly backstops skepticism about things like nuclear, CCS, and DAC. People will point out, rightly, that these technologies are pretty uncertain and unproven and currently seem to involve very high costs. Then they point to the fact that solar and wind work just fine, are well understood, and are cheap at the current margin. They’ll say “why not just do that?”
And every once in a while, you see a take like Alejandro de la Garza’s article in Time arguing that “We Have The Technology to Solve Climate Change. What We Need Is Political Will.”
This is true in the trivial sense that we could dramatically reduce CO2 emissions with currently available technology if we were willing to accept a large decline in living standards and world population. But that’s not really a solution to the problem. We should absolutely deploy current technology more aggressively and thereby contribute to solving the problem — but we will also need some new technology in the future, and that means we need to keep an open mind toward investment.
Renewables are great (at the current margin)
There’s a weird internet cult of renewables haters, and also the strange case of the state of Texas, where renewables are the number two source of electricity but politicians pretend they hate them.
This article is about the limits of the all-renewables strategy, but it doesn’t come from a place of hate. The reason Texas — a very conservative state with a locally powerful oil and gas extraction industry — has so much renewable electricity is that large swathes of Texas are very windy, and building wind farms in windy places is a very cost-effective way to make electricity.
And in a place where overall electricity demand is rising, both due to population growth (in Texas) and due to ongoing electrification of vehicles and home heat, renewables buildout does an enormous amount to reduce CO2 emissions and air pollution. Powering electric cars with electricity from gas-fired power plants would have emissions benefits, as I understand it, because natural gas is less polluting than oil and because big power plants are regulated more stringently than car engines or home furnaces. But still, the emissions benefits are much larger if the electricity is partially or wholly generated by renewables.
But the “partially” here is actually really important. An electric car that’s powered 50% by renewables has lower emissions than one powered by 10% renewables and higher emissions than one that’s at 90% — the more renewables in the mix, the lower your emissions. You don’t have to get to 100% renewable power; the key thing is to use “more” renewable power. And when people say (accurately) that renewables are now cheap, they mean that it’s cheap at the current margin to add more renewable power to the mix.
That’s because electricity demand is growing, so a marginal addition of renewable electricity just gets tossed into the mix usefully. But it’s also because Texas (and other states) have all this fixed infrastructure for burning natural gas already. If you get extra wind power you can just burn less gas, and the gas is still there to use if you need it. California, probably the leading-edge state on renewables, actually built a small number of emergency gas generators that they turned on for the first time on September 4 of this year to meet rare peak demand and avoid blackouts.
An all-renewable grid is very challenging
That California experience illustrates two things, I think:
Democratic Party politicians are, in practice, much more pragmatic than unflattering conservative stereotypes of them would suggest.
Democrats who’ve actually had to wrestle with practical problems know that we are further from an all-renewables utopia than environmentalist rhetoric suggests.
Gavin Newsom knows perfectly well he can’t just have a statewide blackout once or twice a year and tell voters that’s a small price to pay for meeting emissions goals. Voters want to see action on climate change, but they have a very limited appetite for enduring personal sacrifice or inconvenience. Using renewables to dramatically reduce emissions while still counting on gas backup when needed? Great. Securing even deeper emissions by accepting that power sometimes doesn’t work? Not great.
Consider this data from my rooftop solar panels.
The first pane says that so far in 2022, the panels have generated 102% of our household electricity use — hooray!
The second pane shows that we generated a huge electricity surplus (blue lines below zero) during the spring when it was sunny and cool, but we are in a small deficit over the summer when it’s even sunnier but our air conditioning use surges.
The third pane shows that in September, whether we are in surplus or deficit on any given day hinges crucially on the weather. It’s going to end up being a deficit month largely because of a big rainy stretch.
So how much does this cost? Well, not very much, because the key thing about this scenario is that all the kilowatt-hours my panels generate get used. When I’m in surplus, that extra electricity goes “to the grid,” where it substitutes for other sources of power, and I earn credits that offset my electricity usage during deficit periods. If I had to throw away my surplus kilowatt-hours instead of selling them to the grid, my cost per kilowatt-hour would soar.
And if everyone had solar power, that’s the problem we would face. Who would we export the extra electricity to during surplus periods? At a small margin, we have the technology for this: instead of exporting power during the day and importing it at night, I could get a home battery and store daytime excess for use at night. That would raise my cost per kilowatt-hour, but only modestly since batteries aren’t that expensive. And you can add wind as well as solar to your grid so you have some resiliency against seasonal variations in sunlight.
The problem is that without fossil fuels for resilience, the cost per megawatt-hour of renewables soars because redundancy is expensive.
Wasting electricity is costly
Seasonal variation is a big problem here, for example.
Let’s say you have enough solar panels to cover 100 percent of your electricity needs on an average December day. That means you’re going to have way more panels than you need on an average June day, when the sun is shining for a much longer period of time. On a pure engineering basis, that’s fine — there are just some panels whose output is actually needed only for a few days per year in the dead of winter. But the cost per megawatt-hour of those panels is going to be astronomical because a solar panel is almost 100 percent fixed costs.
The same is true of random fluctuations in weather. If you’re like Texas and rely on a mix of gas and wind, then wind is cheap — you add some turbines and that means you burn less gas. If there’s some freak day when there’s very little wind, then you burn an unusually large amount of gas. As long as you’re using almost all the wind power you generate, the cost per megawatt-hour of your turbines is low. But if you try to build enough turbines to keep the lights on during low-wind days, you’re wasting wind on high-wind days. This means your cost per megawatt-hour rises.
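To put rough numbers on that logic, here is a minimal sketch of the overbuild arithmetic; the annualized cost and output figures are purely illustrative assumptions, not numbers from this article.

```python
# Illustrative sketch: why each *useful* megawatt-hour gets pricier as more
# output is curtailed. All numbers are hypothetical, not from the article.

def cost_per_useful_mwh(annual_fixed_cost, potential_mwh, utilization):
    """Fixed cost divided by the energy the grid actually absorbs."""
    return annual_fixed_cost / (potential_mwh * utilization)

# Hypothetical 1 MW wind turbine: $140,000/year in annualized fixed costs,
# 3,500 MWh/year of potential output.
capex, potential = 140_000, 3_500

for utilization in (1.00, 0.50, 0.25):
    price = cost_per_useful_mwh(capex, potential, utilization)
    print(f"{utilization:4.0%} of output used -> ${price:,.0f} per useful MWh")

# 100% of output used -> $40 per useful MWh   (the cheap current margin)
#  50% of output used -> $80 per useful MWh   (half the output curtailed)
#  25% of output used -> $160 per useful MWh  (heavily overbuilt for rare days)
```

Because a turbine is nearly all fixed cost, halving the share of output that gets used doubles the cost of every megawatt-hour that does; that is the sense in which overbuilding for rare low-wind days makes cheap wind expensive.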
Because massively overbuilding renewables would not only cost a lot of money but wastefully consume vast tracts of land, it seems like a better idea would be to use long-term batteries. If you had really big batteries that stored electricity for a long time, you could simply store surplus power in the high season and unleash it in the low season.
In fact, if you are lucky enough to have large hydroelectric dams at your disposal, you can probably use them as a seasonal storage vehicle. You can let the water pile up when renewables are at maximum capacity and then run it through the dam when you need it. Not coincidentally, politicians from the Pacific Northwest — where there’s tons of hydro — tend to be huge climate hawks.
But for the rest of us, it’s Hypothetical Storage Technology to the rescue.
I’m not saying anything here that renewables proponents aren’t aware of. They write articles about seasonal electricity storage all the time. There are plenty of ideas here that could work, ranging from ideas on the technological cutting edge to brute force engineering concepts like using pumps to create extra hydro capacity. Another idea is that maybe you could replace a lot of current fossil fuel use with burning hydrogen, and then you could manufacture hydrogen using renewable electricity while accepting seasonal variation in the level of hydrogen output. It might work!
The known unknowns
Speaking of hypothetical hydrogen applications, it’s also worth saying that while electricity, cars, and home heat together constitute a very large share of global emissions, they are not the whole picture.
You can build an electric airplane with current technology, but we absolutely do not have a zero-carbon replacement for conventional passenger airplanes at hand. Nor do we currently have the ability to manufacture steel, concrete, or various chemicals in a cost-effective way without setting fossil fuels on fire. These aren’t necessarily unsolvable problems, but they have not, in fact, been solved. It isn’t a lack of “political will” that has denied us the ability to do zero-carbon maritime shipping. Right now, the only proven way to power a large ship without CO2 emissions is to use one of the nuclear reactors from an aircraft carrier. But this is both illegal and insanely expensive. You could maybe do something with hydrogen here, or else it is possible that if the Nuclear Regulatory Commission ever decides to follow the law and establish a clear licensing pathway for small civilian nuclear reactors, the companies who think they can mass produce these things in a cost-effective way will be proven right.
And if they are, that would not only solve the container shipping problem but would make decarbonizing electricity much easier. And that’s true even if the microreactors never become as cheap as today’s marginal renewable electricity because we ultimately need to move beyond these margins. The same is true for geothermal power. Even if the most optimistic scenarios here don’t pan out and geothermal remains relatively expensive, a new source of baseline zero-carbon electricity would solve a lot of problems for a mostly-renewable grid.
By the same token, CCS doesn’t ever need to be cheap enough to use at a massive scale to be incredibly useful. Even a very expensive gas + CCS system could be a cost-effective way to backstop renewables rather than engaging in massive overbuilding.
With Direct Air Capture — sucking carbon out of the air with essentially artificial trees — not only would the West pay de facto climate reparations, but we could also achieve net zero without actually solving every technical problem along the way. You could make airlines (and private jets) pay an emissions tax and use the money to capture the CO2. Of course, with all these capture schemes there’s the question of what you actually do with the carbon once it’s captured. One idea is that the CO2 removed from the air could be used to manufacture jet fuel. Airlines would then burn it again and put it back out into the atmosphere, but this process would be a closed loop that wouldn’t add net new greenhouse gases.
The case for agnosticism
People on the internet love to cheerlead for and fight about their favorite technologies.
But everyone should try to focus on what the real tradeoffs are. When towns in Maine ban new solar farms to protect the trees, that is a genuine tradeoff with the development of renewable electricity. When California votes to keep Diablo Canyon open, by contrast, that does absolutely nothing to slow renewable buildout. And the idea that investments in hypothetical carbon capture technologies are preventing the deployment of already existing decarbonization technologies in the present day is just wrong.
The basic reality is that some new innovations are needed to achieve net zero, especially in the context of a world that we hope will keep getting richer.
These innovation paths require us, essentially, to keep something of an open mind. As a matter of really abstract physics, “use renewables to make hydrogen, use hydrogen for energy storage and heat” makes a lot of sense. As a matter of actual commercially viable technologies, though, it’s stacking two different unproven ideas on top of each other. Insisting that all work on cutting-edge industrial hydrogen projects be conducted with expensive green hydrogen throws sand in the gears of difficult and potentially very important work. And when you tell the world that all the problems have been solved except for political will, you unreasonably bias young people who worry about climate toward either paralysis or low-efficacy advocacy work. What we need instead is for more young people who are worried about climate to find ways to contribute on the technical side to actually solving these important problems.
Monday, September 26, 2022
The Ferocious Complexity Of The Cell
from Bharath Ramsundar via Brad DeLong
Decentralized Protocol and Deep Drug Discovery Researcher
Fifty years ago, the first molecular dynamics papers allowed scientists to exhaustively simulate systems with a few dozen atoms for picoseconds. Today, due to tremendous gains in computational capability from Moore’s law, and due to significant gains in algorithmic sophistication from fifty years of research, modern scientists can simulate systems with hundreds of thousands of atoms for milliseconds at a time. Put another way, scientists today can study systems tens of thousands of times larger, for billions of times longer, than they could fifty years ago. The effective reach of physical simulation techniques has expanded handleable computational complexity ten-trillion fold. The scope of this achievement should not be underestimated; the advent of these techniques along with the maturation of deep learning has permitted a host of start-ups (1, 2, 3, etc) to investigate diseases using tools that were hitherto unimaginable.
The dramatic progress of computational methods suggests that one day scientists should be able to exhaustively understand complete human cells. But, to understand how far contemporary science is from this milestone, we first require estimates of the complexity of human cells. One reference suggests that a human sperm cell has a volume of 30 cubic micrometers, while another reveals that most human cells have densities quite similar to that of water (a thousand kilograms per cubic meter). Using the final fact that the molar mass of water is 18 grams per mole, a quick calculation suggests that human sperm cells contain roughly a trillion atoms. For another datapoint, assuming a neuronal cell volume of 6000 cubic micrometers, the analogous number for neurons is roughly 175 trillion atoms per cell. In addition to this increased spatial complexity, cellular processes take much longer than molecular processes, with mitosis (cell division) occurring on the order of hours. Let’s assume that mitosis takes eight hours for a standard cell. Putting together the series of calculations that we’ve just done suggests that exhaustively understanding a single sperm cell would require molecular simulation techniques to support an increase in spatial complexity from hundreds of thousands of atoms to trillions of atoms, and from millisecond-scale simulations to multi-hour simulations. In sum, simulations of sperm cells will require a roughly 10-million-fold increase in spatial complexity and a roughly 10-million-fold increase in simulation length, for a total increase in complexity on the order of a hundred-trillion fold. Following the historical example from the previous paragraph, and assuming that Moore’s law continues in some form, we are at least 50 years of hard research away from achieving a thorough understanding of the simplest of human cells.
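As a sanity check, here is a minimal Python sketch of the same back-of-the-envelope arithmetic, using the paragraph’s own round numbers (cell volumes, water’s density, and an 18 g/mol molar mass) and treating a cell as pure water, so the outputs are order-of-magnitude only.

```python
# Order-of-magnitude check on the cell estimates above.
# All inputs are the article's round numbers; a cell is treated as pure water.

AVOGADRO = 6.022e23          # molecules per mole
MOLAR_MASS_WATER = 18.0      # grams per mole
DENSITY_G_PER_UM3 = 1e-12    # water: 1000 kg/m^3 = 1e-12 g per cubic micrometer

def molecules_in_cell(volume_um3):
    """Molecule count for a cell of the given volume, treated as pure water."""
    mass_g = volume_um3 * DENSITY_G_PER_UM3
    return mass_g / MOLAR_MASS_WATER * AVOGADRO

print(f"sperm cell (30 um^3): ~{molecules_in_cell(30):.0e}")    # ~1e12, "roughly a trillion"
print(f"neuron (6000 um^3):   ~{molecules_in_cell(6000):.0e}")  # ~2e14

# Complexity gap for one sperm cell relative to today's simulations:
spatial = 1e12 / 1e5          # trillion atoms vs. hundreds of thousands: ~1e7
temporal = 8 * 3600 / 1e-3    # eight hours of mitosis vs. milliseconds:  ~3e7
print(f"total increase: ~{spatial * temporal:.0e}-fold")  # ~3e14: the 'hundred-trillion fold' order
```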
Let’s extend the thought experiment a bit further. How far is human science from understanding a human brain? There are roughly a hundred billion neurons in the brain, and human learning occurs on the order of years. Assuming that each neuron contains 175 trillion atoms, the total brain contains roughly 2 * 10^25 atoms. To put this number in perspective, that’s roughly 29 moles of atoms! It follows that simulation techniques must support an increase in spatial complexity from today’s limits on the order of a billion-trillion fold (20 orders of magnitude improvement). As for time complexity, there are roughly 31 million seconds in a year, so simulations would need to run tens of billions of times longer than today’s millisecond computations to understand the dynamics of human learning. Combining both of these numbers, roughly 30 orders of magnitude of increase in handleable total complexity is required to understand the human brain. To summarize, assuming Moore’s law continues, we are at least 100 years of hard research away from achieving a thorough understanding of the human brain.
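The brain-scale numbers follow the same pattern; this short extension of the sketch above uses the article’s per-neuron figure of 175 trillion atoms and its one-year learning timescale.

```python
# Brain-scale extension, again using the article's round figures.
AVOGADRO = 6.022e23
brain_atoms = 1e11 * 1.75e14   # ~1e11 neurons x 175 trillion atoms each
print(f"brain: ~{brain_atoms:.0e} atoms, ~{brain_atoms / AVOGADRO:.0f} moles")  # ~2e25, ~29

spatial = brain_atoms / 1e5    # vs. today's ~1e5-atom simulations:   ~2e20
temporal = 31e6 / 1e-3         # a year of learning vs. milliseconds: ~3e10
print(f"total increase: ~{spatial * temporal:.0e}-fold")  # ~5e30: ~30 orders of magnitude
```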
An important point that I’ve finessed in the preceding paragraphs is that both estimates above are likely low. An important, but often ignored, fact about biological systems is that all nontrivial biomolecules exhibit significant quantum entanglement. The wavefunctions of biological macromolecules are quite complex, and until recently have remained beyond the reach of even the most approximate solvers of Schrödinger’s equation. As a result, most simulations of biomolecular systems use crude approximations to handle fundamental biological phenomena such as phosphorylation or ATP processing. More accurate simulations of cells will require extremely large quantum simulations on a scale far beyond today’s capabilities. We have to assume that a Moore’s law for quantum computing will emerge for our estimates to have any hope of remaining accurate. However, given the extreme difficulty of maintaining large entangled states, no such progress has yet emerged from the nascent quantum computing discipline.
To summarize our discussion, simple back-of-the-envelope calculations hint at the extraordinary complexity that undergirds biological systems. Singularitarians routinely posit that it will be possible to upload our brains into the cloud in the not-too-distant future. The calculations within this article serve as a reality check on such musings. It may well be possible one day to upload human brains into computing systems, but the computational requirements to simulate a brain (a necessary component of high-fidelity upload) are so daunting that even optimistic estimates suggest that a hundred years of hard research are required. Given that the US National Science Foundation is only 66 years old, the difficulty of planning research programs on the century time scale becomes more apparent.
None of this is to suggest that computational investigation of biological systems is useless. On the contrary, I believe that increased use of computation is the best path forward for science to comprehend the dazzling array of biological phenomena that support even the most pedestrian life forms. However, today’s scientific environment is polluted with AI hype, which naively suggests that artificial intelligence can solve all hard problems in short time frames. The danger of such thinking is that when learning systems run into their inevitable failures, disappointed backers will start questioning research plans and withdrawing funding. Cold breezes from AI winter have already started drifting into research buildings following the tragic death of Joshua Brown in a self-driving Tesla. Scientists need to combat the spread of hype and the proliferation of naive forecasts of AI utopia in order to create the solid research plans and organizations required to bridge the gap between modern science and revolutionary advances in biology.
Sunday, September 25, 2022
Dean Baker: Why should government funded research result in a private patent monopoly?
via Patreon
Dean Baker
I was glad to see Ezra Klein’s piece touting the Biden administration’s creation of ARPA-H. This is the Advanced Research Projects Agency-Health, a DARPA-type agency explicitly designed to promote the development of health-related innovations, like vaccines, drugs, and medical equipment.
Like Ezra, I’m a big fan of increased public funding for biomedical research. However, he goes a bit astray in his thinking near the end of the piece. He notes proposals, like those put forward by Bernie Sanders, for a cash prize to take the place of a patent monopoly. The government hands out $100 million, $500 million or $1 billion, and then allows the drug, vaccine, or whatever to be sold as a cheap generic. That likely means breakthrough cancer drugs selling for hundreds of dollars rather than hundreds of thousands of dollars.
“The government could identify, say, 12 conditions that it wants to see a drug developed for. The first group to develop and prove out such a drug would get a princely sum — $100 million, or $500 million, or a billion dollars, depending on the condition and the efficacy. In return, that drug would be immediately off-patent, available for any generic drug producer to manufacture for a pittance (and available for other countries, particularly poor countries, to produce immediately).”
I think the Sanders proposal is a great improvement over the current system. But coming in the middle of a discussion of a plan for more direct government funding, it makes the famous Moderna mistake: paying companies twice.
For folks who may have forgotten, we paid Moderna $450 million to develop its coronavirus vaccine. We then paid it another $450 million to cover the cost of the phase 3 testing needed for FDA approval. We then allowed it to claim intellectual property in the vaccine, which meant tens of billions of dollars in revenue. It also led to the creation of at least five Moderna billionaires. Tell me again how technology creates inequality.
It really shouldn’t sound too radical to say that companies only get paid once for their work. If the government pays for the research, it doesn’t also give you a patent monopoly. These are alternative funding mechanisms, not part of a smorgasbord that we throw at innovators to allow them to get incredibly rich at the expense of the rest of us.
Like Ezra, I applaud the Biden administration’s commitment to increase government support for developing new technologies, but we should not be doing this in a way that makes our inequality problem even worse. We can argue over the best mechanisms.
I personally prefer direct public funding to a Sanders-type prize system. The main reason is that we can require everything be fully open-sourced as quickly as possible under a direct funding system, allowing science to advance more quickly.
Also, I suspect that awarding the prizes will prove to be a huge legal and moral nightmare. It is not always clear who actually met the prize conditions and also who made the biggest contribution to getting there. For example, a researcher may make a huge breakthrough that allows pretty much anyone to come along and cross the finish line and claim the prize. Direct upfront funding removes this problem. (I discuss this issue in chapter 5 of the good book, Rigged [it’s free].)
In any case, we can debate the best mechanism through which public funding can take place, patent monopolies, prizes, or direct funding, but the idea that you only get paid once should not be controversial. It is unfortunate that Ezra doesn’t address this issue in his piece, since he does know better (he reads my stuff). There is a huge amount of money at stake in who gets the gains from innovation, likely more than $1 trillion a year, and it would be an incredible failing of the political process if the issue is not even discussed.
Saturday, September 17, 2022
Dean Baker: Do Workers Have to Take It on the Chin to Fight Inflation?
via Patreon
Do Workers Have to Take It on the Chin to Fight Inflation?
I have had many people ask me if there is not a better way to fight inflation than the current route of Federal Reserve Board rate hikes. Just to remind people, this route fights inflation by slowing the economy, throwing people out of work, and then forcing workers to take pay cuts.
The people who are most likely to lose jobs are the ones who already face the most discrimination in the labor market: Blacks, Hispanics, people with less education, and people with criminal records. Not only will millions of these workers lose jobs, but tens of millions of workers in the bottom half of the wage distribution will be forced to take pay cuts. They are the ones whose bargaining power is most sensitive to the level of unemployment in the economy, not doctors and lawyers.
It seems more than a bit absurd that, when we have all these various public and private initiatives that are supposed to improve the lot of disadvantaged groups, we can have an arm of the government, the Federal Reserve Board, just swamp these efforts by creating a tidal wave of unemployment. I don’t want to disparage well-meaning efforts to help the disadvantaged, but they can’t come close to offsetting the impact of a three or four percentage point rise in the unemployment rate for Blacks or Hispanics. And the hit could be much larger.
So, it seems like a good idea to think of an alternative path to lowering the rate of inflation. When I want to reduce inflationary pressures I naturally think of the ways in which we have structured the market to redistribute income upward, as discussed in Rigged [it’s free].
We can look to reduce some of the waste in the financial sector that makes some Wall Street types very rich at the expense of the rest of us. My toolbox includes a financial transactions tax, universal accounts at the Fed (which would eliminate tens of billions of dollars in annual bank fees), getting pension funds to stop throwing money away making private equity partners rich, and getting universities to stop making hedge fund partners rich managing their endowments.
These are all great things to do, but not the sorts of policies that can be implemented overnight to stem current inflation. We can maybe get a foot in the door, but even in a best-case scenario the benefits will only be felt several years down the road.
Then there is the plan to end the protectionism that benefits highly paid professionals, like doctors and dentists. If our doctors got paid roughly the same as their counterparts in Germany or Canada, it would save us around $100 billion a year, or $900 per family. This would mean setting up international standards to ensure quality, and then many years of foreign doctors coming to the U.S., as well as increased competition from nurse practitioners and other health care professionals, to bring our physicians’ pay structure into balance. Again, great policy, but not the sort of thing that will have a big effect in the immediate future.
Next, we have reducing the corruption in the corporate governance structure. As it is, the corporate boards, who are supposed to keep a lid on CEO pay, don’t even see this as part of their job description. They see their jobs as helping the CEO and other top management.
We can look to change the incentive structure for directors, so they actually do take an interest in putting a check on CEO pay. As I point out, this matters not just for the CEO, but bloated CEO pay affects pay structures at the top more generally, taking huge amounts of money out of the pockets of ordinary workers.
My favorite tool is to put some teeth in the “say on pay” votes by shareholders on CEO pay. I would have the directors lose their pay when a CEO pay package is voted down. That seems like a great way to get their attention, but again, a change that will take years to have an impact, not something that could address the current inflation.
The last item in my market restructuring has more promise. This is the system of government-granted patent and copyright monopolies, which has allowed Bill Gates and many others to get ridiculously rich. I have argued for radically downsizing the importance of these monopolies by increasing the role of public funding for research and creative work.
While the full agenda calls for largely replacing patents as a source of funding for innovation in the case of prescription drugs and medical equipment, and reducing their importance elsewhere, there are intermediate steps which can be taken to both reduce costs and get us further down this road. One is simply the sort of price controls on prescription drugs that were part of the Inflation Reduction Act.
These don’t begin to take effect until 2026 and only apply to a limited number of drugs, but we could in principle go much further. We will spend over $500 billion this year (almost $4,000 per family) for drugs that would likely sell for less than $100 billion in a free market. In other words, there is lots of room for inflation reduction here.
If we don’t like the government setting prices, even when government-granted monopolies made prices high in the first place, there is also the option of weakening the monopoly. We can require drug and medical equipment companies to issue compulsory licenses.
This means that anyone could produce a patented drug or medical device, but they would have to pay a modest licensing fee (say 5 percent) to the holder of the patent. This can be put in place now under the Defense Production Act, but going forward we can require companies to agree to this condition as a requirement for anyone benefitting from the fruits of government funded research.
All of these routes to curbing inflation involve more time than just having the Fed raise interest rates, but it seems much fairer to make those who benefitted from the upward redistribution of the last four decades pay the price for reducing inflation than those who have been the victims. It is also worth keeping in mind that the pandemic inflation may not be a one-time story.
We know that climate change is going to lead to more and larger weather disasters in the years ahead. If we see a major industry or agricultural crop knocked out by flooding, fires, or extreme heat, or alternatively if millions of people must be relocated due to such events, it will impose a serious strain on the economy. The supply-side impact could lead to another bout of inflation like we are seeing now.
It hardly seems fair that we again tell the Fed to throw the most vulnerable people out of work to get inflation under control. We can use other routes if we plan ahead.
I know that this program has pretty much zero chance in Washington. It means challenging the Great Big Lie, that inequality just happened, but we can still talk about these sorts of alternatives. And progressives who actually want to see less inequality will push them.
Monday, September 12, 2022
Dean Baker on Republican Glee Over a Recession - via Patreon
Republicans: Another Reason a Recession Would be Really Bad News
There is a large recession lobby in Washington these days that seems to view a recession as a positive good for the economy and society. The basic story is that we have seen a big jump in inflation, associated with the pandemic and the war in Ukraine. They argue that a recession will be needed to bring inflation back down to acceptable levels.
The logic is that the higher unemployment associated with a recession will force workers to take pay cuts. This will reduce inflationary pressures in the economy.
I, and others, have pointed out the enormous human costs associated with a recession. Unemployment is traumatic for everyone, but we know that the people who are most likely to lose their jobs in a recession are those who are most disadvantaged in the labor market, such as Blacks, Hispanics, people with less education, and people with a criminal record.
And, contrary to what you often hear in the media, the unemployed are not a relatively small group. While 5.0 percent or 6.0 percent unemployment might seem like a relatively small share of the population, most unemployment spells are relatively short (thankfully), as people move in and out of unemployment. But this means that two to three times this number of people may experience unemployment over the course of the year.
Furthermore, the reduced bargaining power, which is the whole point of this exercise, is experienced by tens of millions of workers, stretching up past the median worker. In our book, Getting Back to Full Employment, Jared Bernstein and I showed that the only times when the median worker saw sustained real wage growth in the years since 1980, were when we had very low rates of unemployment.
It seems more than a bit bizarre that we have all these private and public efforts intended to reduce inequality and overcome the effects of centuries of discrimination, and then we have a government agency, the Federal Reserve Board, acting to deliberately increase inequality. This doesn’t mean that we shouldn’t take the concerns with inflation seriously, just that we should be very reluctant to go the route of pushing unemployment up as the main tool to reduce it.
Many of us have argued that a big jump in unemployment will not be needed to restrain inflation. The pandemic and war-related factors that led to the jump in inflation are gradually being reversed, as seen most clearly with the plunge in gas prices the last three months.
With monthly inflation coming in at zero in July, and likely to do so again for August, the Fed has the luxury of adopting a wait-and-see approach before going ahead with more big rate hikes. Expectations of inflation are actually falling, so there is little immediate basis for fear about a self-fulfilling burst of higher wage and price increases leading to higher inflation.
But there is another factor that needs to be considered when assessing the urgency of rate hikes and unemployment compared to the benefits of the wait-and-see approach: the likely Republican takeover of at least one house of Congress this fall. This is not a partisan question about whether the Fed should be acting to favor Republicans or Democrats (its actions at this point will have little impact on the pre-election economy); rather, it is a recognition of how Republicans behave when there is a Democrat in the White House.
While political science on the whole probably has a worse track record than economics in terms of explaining its object of study, one item on which there is a high degree of certainty is that a weak economy is bad news for the incumbent president. The worse the economy looks in 2024, the poorer will be Joe Biden’s chances of getting re-elected.
Congressional Republicans certainly know this. They did everything they could to sabotage the economy after they gained control of the Congress in 2010. They insisted on large budget cuts, which slowed growth and kept the unemployment rate high.
Their story was the old joke about the deficit. When a Democrat is in the White House, the Republicans are the biggest worriers about deficits and debt anywhere. We saw that under both Clinton in the 1990s and Obama in the last decade. However, deficits don’t matter when Republicans like Reagan, Bush II, or Trump are in the White House and want big tax cuts.
This history means that if the Republicans take over the House this fall, we can be absolutely certain that they will block any efforts to stimulate the economy to boost it out of recession. In fact, they will most likely be demanding budget cuts to make the recession worse. And, given recent history, they will be threatening government shutdowns and debt defaults to get these cuts.
The Republicans are a party that does not give a damn about the well-being of the country; after all, their rich backers will be doing fine in a recession. They want political power. Mitch McConnell put this as clearly as possible after Obama was elected in 2008. He said that it was his job to make sure that Obama was a one-term president. (McConnell obviously failed on that one.)
This political picture has to be taken into account when considering the relative risks of the Federal Reserve Board’s interest rate policy. The inflation hawks do have a point: there is a risk (albeit a small one in my view) that we will see the sort of wage-price spiral we saw in the 1970s, with inflation rates getting ever higher, or at least remaining at levels that few of us would consider acceptable.
However, there is also the risk that Republican control of Congress will turn what would have been a relatively short and mild recession into a long and severe one. If there is one lesson we have learned well following the Great Recession, it is that it is much easier for the Fed to use monetary policy to slow the economy than to boost it out of a recession.
The Fed certainly has the ability to put the economy into a recession. If a Republican Congress wants to keep us in recession, there will not be much that the Fed can do to counter its actions.