Thursday, November 10, 2022

A good question, Maicol David Lynch, and an important one for socialists, especially Marxists. However, the Gramscian "specific conditions" obtaining for any particular socialist government suggest there may be more than one answer to your question.


It seems to me there are a number of variables to think about in finding a "model" for socialist leadership. This is my understanding:

Variable 1: What is socialism?

There is an economic answer and a political one.

For both Marxist and non-Marxist theories of socialism, the state -- or some non-profit-oriented proxy -- supplants private enterprise in degree and proportion based on (at least) these assumptions:

a) Abundant resources and the abundant production and financing capacities of industry make it possible to reduce the "prices" of the "means of life" toward zero, to the point where little or no profit, retained earnings, or money is required. Whatever costs remain are borne by general taxation.
b) The "means of life" become public goods, meaning their use is neither excludable nor rivalrous, so they can be provided universally and "free." The ratio of public to private goods is a good measure of the degree of economic socialization.
c) The "means of life" become increasingly expensive as both the physical and the social reproduction of human societies requires a rise in human capital -- the knowledge and capabilities needed to be fully productive in an advanced society. One can thus expect "abundance" to spread gradually, and perhaps never be fully achieved. Investments in education and training must be accompanied by cultural revolutions in work and leisure that are themselves expensive, controversial, and disruptive under the best of circumstances, and inevitably contrary to many 'traditions' as past gender, family, age, nationality, racial, and ethnic roles are challenged.

 

The point is: progress toward "abundance" will be gradual and measured, no matter how revolutionary the changes in political or state leadership may be. Progress in economic relations requires careful planning and development. Shifts can take decades to become pervasive in an economy even when they are fast-moving (consider the use and regulation of cell phones combined with the Internet). Many outcomes are impossible to predict in advance. All investments in the future carry risks of failure.

     





Of all the early expressions of socialism, only Marx's has stood the test of history. The countries that call themselves socialist are all strongly influenced by, and have contributed extensions and expansions of, Marx's key concepts. Among the key concepts …



For modern non-Marxists, "socialism" -- or its real-life expressions, "social democracy" and "democratic socialism" -- is pretty much summed up as an effort to perfect the democratic values promised in most, though not all, bourgeois revolutions, from the American Revolution forward.

The democratic socialist aspiration for "a more perfect union" struggles constantly against the inequities of developing capitalist relations, and strives to offset, compensate for, or restrain destructive social and political tendencies arising from market anarchy.

But it does not reject "market" relations outright, as in Utopian or Anarchist conceptions such as Robert Owen's 19th-century American experiments. Given the vast wealth being socially created by capitalism, such "rejection" was entirely idealist -- in contrast with Marx.

In earlier times most "social democratic" formations in Europe were Marxist in one form or another. Some saw Marx's effort to be "scientific" about socialism as meaning a natural, more or less smooth, evolution toward perfection requiring no extraordinary personal subjective effort.

The history of socialism as a political trend since Marx and Engels wrote the Communist Manifesto has often been consumed with debates over which approach to socialism was Utopian, and which was "scientific".

Tuesday, October 18, 2022



Bernanke v. Kindleberger: Which Credit Channel?

By Perry G. Mehrling

OCT 13, 2022 | MACROECONOMICS





In the papers of economist Charles Kindleberger, Perry Mehrling found notes on the paper that won Ben Bernanke his Nobel Prize.


In the 1983 paper cited as the basis for Bernanke’s Nobel award, the first footnote states: “I have received useful comments from too many people to list here by name, but I am grateful to each of them.” One of those unnamed commenters was Charles P. Kindleberger, who taught at MIT full-time until mandatory retirement in 1976 and then half-time for another five years. Bernanke himself earned his MIT Ph.D. in 1979, whereupon he shifted to Stanford as Assistant Professor. Thus it was natural for him to send his paper to Kindleberger for comment, and perhaps also natural for Kindleberger to respond.



As it happens, the carbon copy of that letter has been preserved in the Kindleberger Papers at MIT, and that copy is reproduced below as possibly of contemporary interest. All footnotes are mine, referencing the specific passages of the published paper, a draft copy of which Kindleberger is apparently addressing, and filling in context that would have been familiar to both Bernanke and Kindleberger but may not be to a modern reader. With these explanatory notes, the text speaks for itself and requires no further commentary from me.






“May 1, 1982



Dr. Ben Bernanke

Graduate School of Business

Stanford University

Stanford, CA 94305






Dear Dr. Bernanke,



Thank you for sending me your paper on the great depression. You ask for comments, and I assume this is not merely ceremonial. I am afraid you will not in fact welcome them.



I think you have provided a most ingenious solution to a non-problem.[1] The necessity to demonstrate that financial crisis can be deleterious to production arises only in the scholastic precincts of the Chicago school with what Reder called in the last JEL its tight priors, or TP.[2] If one believes in rational expectations, a natural rate of unemployment, efficient markets, exchange rates continuously at purchasing power parities, there is not much that can be explained about business cycles or financial crises. For a Chicagoan, you are courageous to depart from the assumption of complete markets.[3]



You wave away Minsky and me for departing from rational assumptions.[4] Would you not accept that it is possible for each participant in a market to be rational but for the market as a whole to be irrational because of the fallacy of composition? If not, how can you explain chain letters, betting on lotteries, panics in burning theatres, stock market and commodity bubbles as the Hunts in silver, the world in gold, etc… Assume that the bootblack, waiters, office boys etc of 1929 were rational and Paul Warburg who said the market was too high in February 1929 was not entitled to such an opinion. Each person hoping to get in an[d] out in time may be rational, but not all can accomplish it.



Your data are most interesting and useful. It was not Temin who pointed to the spread (your DIF) between governts [sic] and Baa bond yields, but Friedman and Schwartz.[5] Column 4 also interests me for its behavior in 1929. It would be interesting to disaggregate between loans on securities on the one hand and loans and discounts on the other.



Your rejection of money illusion (on the ground of rationality) throws out any role for price changes. I think this is a mistake on account at least of lags and dynamics. No one of the Chicago stripe pays attention to the sharp drop in commodity prices in the last quarter of 1929, caused by the banks, in their concern over loans on securities, to finance commodities sold in New York on consignment (and auto loans).[6] This put the pressure on banks in areas with loans on commodities. The gainers from the price declines were slow in realizing their increases. The banks of the losers failed. Those of the ultimate winners did not expand.



Note, too, the increase in failures, the decrease in credit and the rise in DIF in the last four of five months of 1931.[7] Much of this, after September 21, was the consequence of the appreciation of the dollar from $4.86 to $3.25.[8] Your international section takes no account of this because prices don’t count in your analysis. In The World in Depression, 1929-1939, which you do not list,[9] I make much of this structural deflation, the mirror analogue of structural inflation today from core inflation and the oil shock. But your priors do not permit you to think them of any importance.



Sincerely yours,



[Charles P. Kindleberger]”










References



Bernanke, Ben S. 1983. “Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression.” American Economic Review 73, no. 3 (June): 257-276.



Kindleberger, Charles P. 1973. The World in Depression, 1929-1939. Berkeley, CA: University of California Press.



Kindleberger, Charles P. 1978. Manias, Panics, and Crashes: A History of Financial Crises. New York: Basic Books.



Kindleberger, Charles P. 1985. Keynesianism vs. Monetarism and Other Essays in Financial History. London: George Allen and Unwin.



Kindleberger, Charles P., and Jean-Pierre Laffargue, eds. 1982. Financial Crises: Theory, History, and Policy. Cambridge: Cambridge University Press.



Mehrling, Perry. 2022. Money and Empire: Charles P. Kindleberger and the Dollar System. Cambridge: Cambridge University Press.




Notes


[1] Bernanke (1983, 258): “reconciliation of the obvious inefficiency of the depression with the postulate of rational private behavior”.

[2] Reder, Melvin W. “Chicago Economics: Permanence and Change.” Journal of Economic Literature 20 No. 1 (March 1982): 1-38. Bernanke (1983, 257) states explicitly, “the present paper builds on the Friedman-Schwartz work…”

[3] Bernanke (1983, 257): “The basic premise is that, because markets for financial claims are incomplete, intermediation between some classes of borrowers and lenders requires nontrivial market-making and information-gathering services.” And again at p. 263: “We shall clearly not be interested in economies of the sort described by Eugene Fama (1980), in which financial markets are complete and information/transactions costs can be neglected.”

[4] Bernanke (1983, 258): “Hyman Minsky (1977) and Charles Kindleberger (1978) have in several places argued for the inherent instability of the financial system, but in doing so have had to depart from the assumption of rational economic behavior.” It is perhaps relevant to observe that elsewhere Kindleberger takes pains to point out the limitations of the Minsky model for explaining the great depression: “it is limited to the United States; there are no capital movements, no exchange rates, no international commodity prices, nor even any impact of price changes on bank liquidity for domestic commodities; all assets are financial.” (Kindleberger 1985, 302) This passage appears in Kindleberger’s contribution to a 1981 conference sponsored by the Banca di Roma and MIT’s Sloan School of Management, which followed on a 1979 Bad Homburg conference that also included both men; the proceedings were published as Financial Crises: Theory, History, and Policy (Cambridge 1982).

[5] Bernanke (1983, 262): “DIF = difference (in percentage points) between yields on Baa corporate bonds and long-term U.S. government bonds”.

[6] It is exactly the sharp drop in commodity prices that Kindleberger puts at the center of his explanation of why the depression was worldwide since commodity prices are world prices. Kindleberger (1973, 104): “The view taken here is that symmetry may obtain in the scholar’s study, but that it is hard to find in the real world. The reason is partly money illusion, which hides the fact of the gain in purchasing power from the consumer countries facing lower prices; and partly the dynamics of deflation, which produce an immediate response in the country of falling prices, and a slow one, often overtaken by spreading deflation, in the country with improved terms of trade, i.e. lower import prices.”

[7] Bernanke’s Table 1 cites August-December DIF figures as follows: 4.29, 4.82, 5.41, 5.30, 6.49.

[8] September 21 is of course the date when the Bank of England took sterling off gold, see Kindleberger (1973, 167-170).

[9] The published version, Bernanke (1983), still does not list Kindleberger (1973), citing only Kindleberger (1978), Manias, Panics, and Crashes. Notably, the full title of that book includes also the words “A History of Financial Crises.” Kindleberger himself quite explicitly frames Manias as an extension of the Depression book, now including all of the international financial crises he can find. Later commentary however follows Bernanke in viewing Kindleberger (1978) as instead an extension of Minsky’s essentially domestic Financial Instability Hypothesis, which is not correct. On this point see footnote 4, and more generally, Chapter 8 of my book Money and Empire (Cambridge 2022).

Perry G. Mehrling is Professor of Economics at Boston University.


Friday, September 30, 2022

SCOTUS Mailbag talk: If you were creating a new constitutional order from scratch...?

 



Captured from Matt Yglesias's Substack: an interesting conversation (avatars for names) on comparative approaches -- mainly Canadian -- to constitutional reform of the supreme judiciary.


Lost Future: How do you feel philosophically about judicial review being part of a country's political system? I remember when I learned in school that a majority of developed countries actually don't have true judicial review, they practice 'parliamentary sovereignty' and the legislature can just pass whatever they want.... Was pretty shocking. But, most of those countries are in the EU, so aren't they now all subject to the EU Court of Human Rights? So maybe that's no longer true, I dunno.

If you were creating a new constitutional order from scratch, would you empower a supreme judiciary to strike down 'unconstitutional' laws? Or is that too subjective & inherently partisan? I think we've all heard criticisms that the justices are just unelected politicians, etc. etc. One reasonable compromise (for the US) that I was thinking is that it should require a supermajority to declare a law unconstitutional- using a raw majority to determine what should be a fundamental question is pretty dumb. Also, individual judges should have a lot less power in our system. Open to hearing your thoughts though!

It’s important to distinguish between two separate ideas. One is judicial review of laws to assess their conformity with the constitution. The other is the idea that the courts should be the people who “go last” in an interbranch conflict.

I think the Canadian system — in which laws are absolutely reviewed by the judiciary for conformity with the Charter of Rights and Freedoms, but Parliament has the right to overrule the Supreme Court — is good. Overrides do happen under this system, but relatively rarely — the Court’s rulings are not a dead letter. One reason they are not a dead letter is that the Court has a decent amount of legitimacy. But one reason they preserve that legitimacy is the Supreme Court is not a locus of massive partisan conflict. And that’s because strong policy-demanders at odds with an important constitutional ruling have a more promising course of action than politicizing the judiciary — they can just push for parliamentary override. To me, it’s a good system.

But note that in the United States, a lot of the de facto power of the judiciary comes from non-constitutional cases. Because of bicameralism, presidentialism, and the filibuster, the stakes in judicial interpretation of statutes are very high here. If the Supreme Court of Canada rules that some Canadian air pollution regulation violates the law and Parliament feels they don’t like the outcome, they can just pass a new law that clarifies the point. In America, if the Supreme Court rules that the EPA can’t regulate greenhouse gas emissions, then that is a de facto guarantee that there will be no emissions regulation because the barrier to passing a new law is so high in our country.

This is why on some level, I think “judicial review” is the wrong thing to ask questions about. Obviously courts need to be able to do statutory interpretation. But what we have in the United States is an extremely low-productivity legislature that in practice devolves massive amounts of power …

Tuesday, September 27, 2022

Matt Yglesias: Beating climate change absolutely requires new technology

 via Slow Boring at Substack

Beating climate change absolutely requires new technology

We have what we need to drastically cut emissions — but we're going to need much more

Matt Yglesias

 I was thinking of writing about why I disagree with Farhad Manjoo’s column arguing that nuclear power still doesn’t make sense, but I saw some tweets about a failed carbon capture and sequestration (CCS) project and realized that I never replied to an email asking why Slow Boring has paid tens of thousands of dollars for direct air capture (DAC) carbon removal programs.

What makes these topics hard to write about is that they involve complicated technical questions, and the honest truth is that well-qualified technical experts disagree about all of them.

But what links these topics together is that if we want to navigate the climate crisis in a remotely acceptable way, the world is going to need to develop some technologies that are currently unproven.

The opposite view is rarely expressed in explicit terms (because it’s wrong), but it often implicitly backstops skepticism about things like nuclear, CCS, and DAC. People will point out, rightly, that these technologies are pretty uncertain and unproven and currently seem to involve very high costs. Then they point to the fact that solar and wind work just fine, are well understood, and are cheap at the current margin. They’ll say “why not just do that?”

And every once in a while, you see a take like Alejandro de la Garza’s article in Time arguing that “We Have The Technology to Solve Climate Change. What We Need Is Political Will.”

This is true in the trivial sense that we could dramatically reduce CO2 emissions with currently available technology if we were willing to accept a large decline in living standards and world population. But that’s not really a solution to the problem. We should absolutely deploy current technology more aggressively and thereby contribute to solving the problem — but we will also need some new technology in the future, and that means we need to keep an open mind toward investment.

Renewables are great (at the current margin)

There’s a weird internet cult of renewables haters, and also the strange case of the state of Texas, where renewables are the number two source of electricity but politicians pretend they hate them.

This article is about the limits of the all-renewables strategy, but it doesn’t come from a place of hate. The reason Texas — a very conservative state with a locally powerful oil and gas extraction industry — has so much renewable electricity is that large swathes of Texas are very windy, and building wind farms in windy places is a very cost-effective way to make electricity.

And in a place where overall electricity demand is rising, both due to population growth (in Texas) and due to ongoing electrification of vehicles and home heat, renewables buildout does an enormous amount to reduce CO2 emissions and air pollution. Powering electric cars with electricity from gas-fired power plants would have emissions benefits, as I understand it, because natural gas is less polluting than oil and because big power plants are regulated more stringently than car engines or home furnaces. But still, the emissions benefits are much larger if the electricity is partially or wholly generated by renewables.

But the “partially” here is actually really important. An electric car that’s powered 50% by renewables has lower emissions than one powered by 10% renewables and higher emissions than one that’s at 90% — the more renewables in the mix, the lower your emissions. You don’t have to get to 100% renewable power; the key thing is to use “more” renewable power. And when people say (accurately) that renewables are now cheap, they mean that it’s cheap at the current margin to add more renewable power to the mix.
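To make that marginal logic concrete, here is a minimal back-of-the-envelope sketch (mine, not from the article) of how an EV's upstream emissions scale with the renewable share of its charging mix. The gas emission factor and the EV's energy use per mile are assumed illustrative round numbers, not measured values.

```python
# Sketch: EV upstream CO2 per mile as a function of the grid's renewable share.
# Assumed illustrative inputs (not from the article):
#   gas-fired generation ~0.4 kg CO2 per kWh, renewables ~0,
#   EV consumption ~0.30 kWh per mile.
GAS_KG_CO2_PER_KWH = 0.4
EV_KWH_PER_MILE = 0.30

def ev_grams_co2_per_mile(renewable_share: float) -> float:
    """Grams of CO2 per mile for an EV charged from a gas + renewables mix."""
    grid_kg_per_kwh = (1.0 - renewable_share) * GAS_KG_CO2_PER_KWH
    return grid_kg_per_kwh * EV_KWH_PER_MILE * 1000.0

for share in (0.10, 0.50, 0.90, 1.00):
    print(f"{share:4.0%} renewables -> {ev_grams_co2_per_mile(share):6.1f} g CO2/mile")
```

Under these assumed numbers the figure falls linearly with the renewable share, which is the whole point: every marginal renewable kilowatt-hour helps, whether or not the grid ever reaches 100%.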

That’s because electricity demand is growing, so a marginal addition of renewable electricity just gets tossed into the mix usefully. But it’s also because Texas (and other states) have all this fixed infrastructure for burning natural gas already. If you get extra wind power you can just burn less gas, and the gas is still there to use if you need it. California, probably the leading-edge state on renewables, actually built a small number of emergency gas generators that they turned on for the first time on September 4 of this year to meet rare peak demand and avoid blackouts.

An all-renewable grid is very challenging

That California experience illustrates two things, I think:

  • Democratic Party politicians are, in practice, much more pragmatic than unflattering conservative stereotypes of them would suggest.

  • Democrats who’ve actually had to wrestle with practical problems know that we are further from an all-renewables utopia than environmentalist rhetoric suggests.

Gavin Newsom knows perfectly well he can’t just have a statewide blackout once or twice a year and tell voters that’s a small price to pay for meeting emissions goals. Voters want to see action on climate change, but they have a very limited appetite for enduring personal sacrifice or inconvenience. Using renewables to dramatically reduce emissions while still counting on gas backup when needed? Great. Securing even deeper emissions by accepting that power sometimes doesn’t work? Not great.

Consider this data from my rooftop solar panels.

  • The first pane says that so far in 2022, the panels have generated 102% of our household electricity use — hooray!

  • The second pane shows that we generated a huge electricity surplus (blue lines below zero) during the spring when it was sunny and cool, but we are in a small deficit over the summer when it’s even sunnier but our air conditioning use surges.

  • The third pane shows that in September, whether we are in surplus or deficit on any given day hinges crucially on the weather. It’s going to end up being a deficit month largely because of a big rainy stretch.

So how much does this cost? Well, not very much, because the key thing about this scenario is that all my kilowatt-hours of electricity get used. When I’m in surplus, that extra electricity goes “to the grid” where it substitutes for other sources of power, and I earn credits that offset my electricity usage during deficit periods. If I had to throw away my surplus kilowatt-hours instead of selling them to the grid, my per-kilowatt-hour cost would soar.

And if everyone had solar power, that’s the problem we would face. Who would we export the extra electricity to during surplus periods? At a small margin, we have the technology for this: instead of exporting power during the day and importing it at night, I could get a home battery and store the daytime excess for use at night. That would raise my per-kilowatt-hour cost, but only modestly, since batteries aren’t that expensive. And you can add wind as well as solar to your grid so you have some resiliency against seasonal variations in sunlight.

The problem is that without fossil fuels for resilience, the cost per megawatt-hour of renewables soars, because redundancy is expensive.

Wasting electricity is costly

Seasonal variation is a big problem here, for example.

Let’s say you have enough solar panels to cover 100 percent of your electricity needs on an average December day. That means you’re going to have way more panels than you need on an average June day, when the sun is shining for a much longer period of time. On a pure engineering basis, that’s fine — there are just some panels whose output is, in practice, only needed for a few days per year in the dead of winter. But the cost per megawatt-hour of useful output from those panels is going to be astronomical, because a solar panel is almost 100 percent fixed costs.

The same is true of random fluctuations in weather. If you’re like Texas and rely on a mix of gas and wind, then wind is cheap — you add some turbines and that means you burn less gas. If there’s some freak day when there’s very little wind, then you burn an unusually large amount of gas. As long as you’re using almost all the wind power you generate, the cost per megawatt-hour from your turbines is low. But if you try to build enough turbines to keep the lights on during low-wind days, you’re wasting wind on high-wind days. This means your cost per megawatt-hour rises.
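The fixed-cost logic of the last few paragraphs (and of the rooftop-solar example above) can be put in one line of arithmetic: a wind farm or solar array costs about the same whether or not its output is used, so the cost per useful megawatt-hour is the fully-used cost divided by the fraction of output actually used. Here is a small sketch; the $60/MWh base cost is an assumed illustrative figure of mine, not the article's.

```python
# Sketch: cost per *useful* MWh when renewables are overbuilt and surplus is curtailed.
# Assumed illustrative base cost (not from the article): $60/MWh when every MWh is used.
BASE_COST_PER_MWH = 60.0

def cost_per_useful_mwh(fraction_used: float) -> float:
    """Renewables are almost all fixed cost, so wasted output still has to be paid for;
    only the fraction of output that is actually used carries the bill."""
    return BASE_COST_PER_MWH / fraction_used

# Overbuilding enough to cover low-wind days or midwinter sun means a large share
# of annual output is surplus the rest of the time.
for used in (1.00, 0.75, 0.50, 0.25, 0.10):
    print(f"{used:4.0%} of output used -> ${cost_per_useful_mwh(used):7.0f} per useful MWh")
```

At full utilization the assumed cost stays at $60 per useful MWh; at 10 percent utilization it is $600. That is the sense in which redundancy, rather than the panels or turbines themselves, is the expensive part.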

Because massively overbuilding renewables would not only cost a lot of money but wastefully consume vast tracts of land, it seems like a better idea would be to use long-term batteries. If you had really big batteries that stored electricity for a long time, you could simply store surplus power in the high season and unleash it in the low season.

In fact, if you are lucky enough to have large hydroelectric dams at your disposal, you can probably use them as a seasonal storage vehicle. You can let the water pile up when renewables are at maximum capacity and then run it through the dam when you need it. Not coincidentally, politicians from the Pacific Northwest — where there’s tons of hydro — tend to be huge climate hawks.

But for the rest of us, it’s Hypothetical Storage Technology to the rescue.

I’m not saying anything here that renewables proponents aren’t aware of. They write articles about seasonal electricity storage all the time. There are plenty of ideas here that could work, ranging from ideas on the technological cutting edge to brute force engineering concepts like using pumps to create extra hydro capacity. Another idea is that maybe you could replace a lot of current fossil fuel use with burning hydrogen, and then you could manufacture hydrogen using renewable electricity while accepting seasonal variation in the level of hydrogen output. It might work!

The known unknowns

Speaking of hypothetical hydrogen applications, it’s also worth saying that while electricity, cars, and home heat together constitute a very large share of global emissions, they are not the whole picture.

You can build an electric airplane with current technology, but we absolutely do not have a zero-carbon replacement for conventional passenger airplanes at hand. Nor do we currently have the ability to manufacture steel, concrete, or various chemicals in a cost-effective way without setting fossil fuels on fire. These aren’t necessarily unsolvable problems, but they have not, in fact, been solved. It isn’t a lack of “political will” that has denied us the ability to do zero-carbon maritime shipping. Right now, the only proven way to power a large ship without CO2 emissions is to use one of the nuclear reactors from an aircraft carrier. But this is both illegal and insanely expensive. You could maybe do something with hydrogen here, or else it is possible that if the Nuclear Regulatory Commission ever decides to follow the law and establish a clear licensing pathway for small civilian nuclear reactors, the companies who think they can mass produce these things in a cost-effective way will be proven right.

And if they are, that would not only solve the container shipping problem but would make decarbonizing electricity much easier. And that’s true even if the microreactors never become as cheap as today’s marginal renewable electricity because we ultimately need to move beyond these margins. The same is true for geothermal power. Even if the most optimistic scenarios here don’t pan out and geothermal remains relatively expensive, a new source of baseline zero-carbon electricity would solve a lot of problems for a mostly-renewable grid.

By the same token, CCS doesn’t ever need to be cheap enough to use at a massive scale to be incredibly useful. Even a very expensive gas + CCS system could be a cost-effective way to backstop renewables rather than engaging in massive overbuilding.

With Direct Air Capture — sucking carbon out of the air with essentially artificial trees — not only would the West pay de facto climate reparations, but we could also achieve net zero without actually solving every technical problem along the way. You could make airlines (and private jets) pay an emissions tax and use the money to capture the CO2. Of course, with all these capture schemes there’s the question of what you actually do with the carbon once it’s captured. One idea is that the CO2 removed from the air could be used to manufacture jet fuel. Airlines would then burn it again and put it back out into the atmosphere, but this process would be a closed loop that wouldn’t add net new greenhouse gases.

The case for agnosticism

People on the internet love to cheerlead for and fight about their favorite technologies.

But everyone should try to focus on what the real tradeoffs are. When towns in Maine ban new solar farms to protect the trees, that is a genuine tradeoff with the development of renewable electricity. When California votes to keep Diablo Canyon open, by contrast, that does absolutely nothing to slow renewable buildout. And the idea that investments in hypothetical carbon capture technologies are preventing the deployment of already existing decarbonization technologies in the present day is just wrong.

The basic reality is that some new innovations are needed to achieve net zero, especially in the context of a world that we hope will keep getting richer.

These innovation paths require us, essentially, to keep something of an open mind. As a matter of really abstract physics, “use renewables to make hydrogen, use hydrogen for energy storage and heat” makes a lot of sense. As a matter of actual commercially viable technologies, though, it’s stacking two different unproven ideas on top of each other. Insisting that all work on cutting-edge industrial hydrogen projects be conducted with expensive green hydrogen throws sand in the gears of difficult and potentially very important work. And when you tell the world that all the problems have been solved except for political will, you unreasonably bias young people who worry about climate toward either paralysis or low-efficacy advocacy work. What we need instead is for more young people who are worried about climate to find ways to contribute on the technical side to actually solving these important problems.

Monday, September 26, 2022

The Ferocious Complexity Of The Cell

 from Bharath Ramsundar via Brad DeLong

Decentralized Protocol and Deep Drug Discovery Researcher






The Ferocious Complexity Of The Cell

Fifty years ago, the first molecular dynamics papers allowed scientists to exhaustively simulate systems with a few dozen atoms for picoseconds. Today, due to tremendous gains in computational capability from Moore’s law and significant gains in algorithmic sophistication from fifty years of research, modern scientists can simulate systems with hundreds of thousands of atoms for milliseconds at a time. Put another way, scientists today can study systems tens of thousands of times larger, for billions of times longer, than they could fifty years ago. The effective reach of physical simulation techniques has expanded handleable computational complexity ten-trillion fold. The scope of this achievement should not be underestimated; the advent of these techniques, along with the maturation of deep learning, has permitted a host of start-ups to investigate diseases using tools that were hitherto unimaginable.

The dramatic progress of computational methods suggests that one day scientists should be able to exhaustively understand complete human cells. But to understand how far contemporary science is from this milestone, we first require estimates of the complexity of human cells. One reference suggests that a human sperm cell has a volume of 30 cubic micrometers, while another reveals that most human cells have densities quite similar to that of water (a thousand kilograms per cubic meter). Using the final fact that the molar mass of water is 18 grams per mole, a quick calculation suggests that human sperm cells contain roughly a trillion atoms. For another data point, assuming a neuronal cell volume of 6000 cubic micrometers, the analogous number for neurons is roughly 175 trillion atoms per cell. In addition to this increased spatial complexity, cellular processes take much longer than molecular processes, with mitosis (cell division) occurring on the order of hours. Let’s assume that mitosis takes eight hours for a standard cell. Putting together the series of calculations we’ve just done suggests that exhaustively understanding a single sperm cell would require molecular simulation techniques to support an increase in spatial complexity from hundreds of thousands of atoms to trillions of atoms, and from millisecond-scale simulations to multi-hour simulations. In sum, simulations of sperm cells will require a roughly ten-million-fold increase in spatial complexity and about a ten-million-fold increase in simulation length, for a total increase in complexity on the order of a hundred-trillion fold. Following the historical example from the previous paragraph, and assuming that Moore’s law continues in some form, we are at least 50 years of hard research from achieving a thorough understanding of the simplest of human cells.
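As a check on that arithmetic, here is a short sketch (mine, not the author's) that reproduces the back-of-the-envelope estimate from the stated inputs: a 30 cubic-micrometer sperm cell, water-like density, 18 g/mol, counting water molecules as the post's stand-in for atoms, and then comparing an eight-hour mitosis against today's roughly 10^5-atom, millisecond-scale simulations.

```python
# Sketch: reproduce the post's back-of-the-envelope cell-complexity estimate.
AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G_PER_MOL = 18.0
CELL_DENSITY_KG_PER_M3 = 1000.0   # ~water, as assumed in the post

def molecules_in_cell(volume_um3: float) -> float:
    """Order-of-magnitude molecule count for a cell treated as pure water."""
    mass_g = volume_um3 * 1e-18 * CELL_DENSITY_KG_PER_M3 * 1000.0
    return mass_g / WATER_MOLAR_MASS_G_PER_MOL * AVOGADRO

sperm = molecules_in_cell(30.0)      # ~1e12: "roughly a trillion"
neuron = molecules_in_cell(6000.0)   # ~2e14: same order as the post's 175 trillion

# Complexity gap vs. today's ~1e5-atom, ~1-millisecond simulations,
# for an eight-hour mitosis of a sperm-sized cell.
spatial_factor = sperm / 1e5                 # ~1e7 ("10 million fold")
time_factor = (8 * 3600) / 1e-3              # ~3e7 ("about 10 million fold")
print(f"sperm ~{sperm:.1e} molecules, neuron ~{neuron:.1e} molecules")
# Total gap lands on the order of 1e14, the post's "hundred-trillion fold":
print(f"total complexity increase ~{spatial_factor * time_factor:.0e}")
```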

Let’s extend the thought experiment a bit further. How far is human science from understanding a human brain? There are roughly a hundred billion neurons in the brain, and human learning occurs on the order of years. Assuming that each neuron contains 175 trillion atoms, the total brain contains roughly 2 * 10^25 atoms. To put this number in perspective, that’s roughly 29 moles of atoms! It follows that simulation techniques must support an increase in spatial complexity from today’s limits on the order of a billion-trillion fold (20 orders of magnitude improvement). As for time complexity, there are roughly 31 million seconds in a year, so simulations would need to run tens of billions of times longer than today’s millisecond computations to understand the dynamics of human learning. Combining both of these numbers, roughly 30 orders of magnitude of increase in handleable total complexity is required to understand the human brain. To summarize, assuming Moore’s law continues, we are at least 100 years of hard research from achieving a thorough understanding of the human brain.
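And a companion sketch (again mine, not the author's) for the brain-scale extrapolation, using the figures the post quotes: 10^11 neurons at roughly 175 trillion atoms each, a year of learning versus millisecond simulations, and a Moore's-law-style doubling rate calibrated from the post's claim of a ten-trillion-fold gain over the past fifty years.

```python
import math

# Sketch: the post's brain-scale extrapolation, using its own quoted figures.
ATOMS_PER_NEURON = 1.75e14        # ~175 trillion, from the neuron estimate above
NEURONS_IN_BRAIN = 1e11
SECONDS_PER_YEAR = 3.15e7

brain_atoms = ATOMS_PER_NEURON * NEURONS_IN_BRAIN       # ~2e25 atoms (~29 moles)
spatial_factor = brain_atoms / 1e5                      # vs. today's ~1e5-atom runs
time_factor = SECONDS_PER_YEAR / 1e-3                   # a year vs. ~1 ms
total_factor = spatial_factor * time_factor             # ~30 orders of magnitude

# Doubling time implied by the post's history: a 1e13-fold gain over 50 years.
doubling_years = 50 / math.log2(1e13)                   # ~1.2 years per doubling

cell_years = math.log2(2.9e14) * doubling_years         # ~55 yr ("at least 50")
brain_years = math.log2(total_factor) * doubling_years  # ~120 yr ("at least 100")
print(f"brain ~{brain_atoms:.1e} atoms ({brain_atoms / 6.022e23:.0f} moles)")
print(f"complexity gap ~{total_factor:.0e}; cell in ~{cell_years:.0f} yr, brain in ~{brain_years:.0f} yr")
```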

An important point that I’ve finessed in the preceding paragraphs is that both estimates above are likely low. An important but often ignored fact about biological systems is that all nontrivial biomolecules exhibit significant quantum entanglement. The wavefunctions of biological macromolecules are quite complex, and until recently have remained beyond the reach of even the most approximate solvers of Schrödinger’s equation. As a result, most simulations of biomolecular systems use crude approximations to handle fundamental biological phenomena such as phosphorylation or ATP processing. More accurate simulations of cells will require extremely large quantum simulations on a scale far beyond today’s capabilities. We have to assume that a Moore’s law for quantum computing will emerge for our estimates to have any hope of remaining accurate. However, given the extreme difficulty of maintaining large entangled states, no such progress has yet emerged from the nascent quantum computing discipline.

To summarize our discussion, simple back-of-the-envelope calculations hint at the extraordinary complexity that undergirds biological systems. Singularitarians routinely posit that it will be possible to upload our brains into the cloud in the not-too-distant future. The calculations in this article serve as a reality check on such musings. It may well be possible to one day upload human brains into computing systems, but the computational requirements to simulate a brain (a necessary component of high-fidelity upload) are so daunting that even optimistic estimates suggest that a hundred years of hard research are required. Given that the US National Science Foundation is only 66 years old, the difficulty of planning research programs on the century time scale becomes more apparent.

None of this is to suggest that computational investigation of biological systems is useless. On the contrary, I believe that increased use of computation is the best path forward for science to comprehend the dazzling array of biological phenomena that support even the most pedestrian life forms. However, today’s scientific environment is polluted with AI hype, which naively suggests that artificial intelligence can solve all hard problems in short time frames. The danger of such thinking is that when learning systems run into their inevitable failures, disappointed backers will start questioning research plans and withdrawing funding. Cold breezes from AI winter have already started drifting into research buildings following the tragic death of Joshua Brown in a self-driving Tesla. Scientists need to combat the spread of hype and the proliferation of naive forecasts of AI utopia in order to create the solid research plans and organizations required to bridge the gap between modern science and revolutionary advances in biology.

Written on June 3, 2016