https://conversableeconomist.com/2022/11/08/henry-adams-on-the-corruptions-of-power/
-- via my feedly newsfeed
A good question, Maicol David Lynch. And, an important one for socialists, especially Marxists. However, the Gramscian "specific conditions" obtaining for a particular socialist government suggest there may be more than one answer to your question.
a) Abundant resources, and the abundant production and financing capacities of industry, make it possible to reduce the "prices" of the "means of life" toward zero, to the point where little or no profit, retained earnings, or money is required. Whatever costs remain are borne by general taxation.
b) The "means of life" become public goods, meaning they neither exclude shared use nor have any rival for universal, "free" provision of the good. The ratio of public to private goods is a good measure of the degree of economic socialization.
c) The "means of life" becomes increasingly expensive as both physical and social reproduction of human societies requires a rise in human capital -- the knowledge and capabilities to be fully productive in advanced society. One can thus expect "abundance" to spread gradually, perhaps never fully satisfied. Investments in education and training must be accompanied by cultural revolutions in work and leisure life that also are expensive, controversial and disruptive under the best of circumstances, and inevitably contrary to many 'traditions' as past gender, family, age, nationality, racial and ethnic roles are challenged.
The point is: progress toward "abundance" will be gradual and measured, no matter how revolutionary the changes in political or state leadership may be. Progress in economic relations requires careful planning and development. Shifts can take decades to become pervasive in an economy even when they are fast moving. (Consider the use and regulation of cell phones combined with the Internet.) Many outcomes are impossible to predict in advance. All investments in the future carry risks of failure.
captured from Matt Yglesias's Substack: an interesting conversation (avatars for names) on comparative approaches -- mainly Canadian -- to constitutional reform of the supreme judiciary.
Lost Future: How do you feel philosophically about judicial review being part of a country's political system? I remember when I learned in school that a majority of developed countries actually don't have true judicial review, they practice 'parliamentary sovereignty' and the legislature can just pass whatever they want.... Was pretty shocking. But, most of those countries are in the EU, so aren't they now all subject to the EU Court of Human Rights? So maybe that's no longer true, I dunno.
If you were creating a new constitutional order from scratch, would you empower a supreme judiciary to strike down 'unconstitutional' laws? Or is that too subjective & inherently partisan? I think we've all heard criticisms that the justices are just unelected politicians, etc. etc. One reasonable compromise (for the US) that I was thinking is that it should require a supermajority to declare a law unconstitutional -- using a raw majority to determine what should be a fundamental question is pretty dumb. Also, individual judges should have a lot less power in our system. Open to hearing your thoughts though!
It’s important to distinguish between two separate ideas. One is judicial review of laws to assess their conformity with the constitution. The other is the idea that the courts should be the people who “go last” in an interbranch conflict.
I think the Canadian system — in which laws are absolutely reviewed by the judiciary for conformity with the Charter of Rights and Freedoms, but Parliament has the right to overrule the Supreme Court — is good. Overrides do happen under this system, but relatively rarely — the Court’s rulings are not a dead letter. One reason they are not a dead letter is that the Court has a decent amount of legitimacy. But one reason they preserve that legitimacy is the Supreme Court is not a locus of massive partisan conflict. And that’s because strong policy-demanders at odds with an important constitutional ruling have a more promising course of action than politicizing the judiciary — they can just push for parliamentary override. To me, it’s a good system.
But note that in the United States, a lot of the de facto power of the judiciary comes from non-constitutional cases. Because of bicameralism, presidentialism, and the filibuster, the stakes in judicial interpretation of statutes are very high here. If the Supreme Court of Canada rules that some Canadian air pollution regulation violates the law and Parliament feels they don’t like the outcome, they can just pass a new law that clarifies the point. In America, if the Supreme Court rules that the EPA can’t regulate greenhouse gas emissions, then that is a de facto guarantee that there will be no emissions regulation because the barrier to passing a new law is so high in our country.
This is why on some level, I think “judicial review” is the wrong thing to ask questions about. Obviously courts need to be able to do statutory interpretation. But what we have in the United States is an extremely low-productivity legislature that in practice devolves massive amounts of power…
via Slow Boring at Substack
Matt Yglesias
I was thinking of writing about why I disagree with Farhad Manjoo’s column arguing that nuclear power still doesn’t make sense, but I saw some tweets about a failed carbon capture and sequestration (CCS) project and realized that I never replied to an email asking why Slow Boring has paid tens of thousands of dollars for direct air capture (DAC) carbon removal programs.
What makes these topics hard to write about is that they involve complicated technical questions, and the honest truth is that well-qualified technical experts disagree about all of them.
But what links these topics together is that if we want to navigate the climate crisis in a remotely acceptable way, the world is going to need to develop some technologies that are currently unproven.
The opposite view is rarely expressed in explicit terms (because it’s wrong), but it often implicitly backstops skepticism about things like nuclear, CCS, and DAC. People will point out, rightly, that these technologies are pretty uncertain and unproven and currently seem to involve very high costs. Then they point to the fact that solar and wind work just fine, are well understood, and are cheap at the current margin. They’ll say “why not just do that?”
And every once in a while, you see a take like Alejandro de la Garza’s article in Time arguing that “We Have The Technology to Solve Climate Change. What We Need Is Political Will.”
This is true in the trivial sense that we could dramatically reduce CO2 emissions with currently available technology if we were willing to accept a large decline in living standards and world population. But that’s not really a solution to the problem. We should absolutely deploy current technology more aggressively and thereby contribute to solving the problem — but we will also need some new technology in the future, and that means we need to keep an open mind toward investment.
There’s a weird internet cult of renewables haters, and also the strange case of the state of Texas, where renewables are the number two source of electricity but politicians pretend they hate them.
This article is about the limits of the all-renewables strategy, but it doesn’t come from a place of hate. The reason Texas — a very conservative state with a locally powerful oil and gas extraction industry — has so much renewable electricity is that large swathes of Texas are very windy, and building wind farms in windy places is a very cost-effective way to make electricity.
And in a place where overall electricity demand is rising, both due to population growth (in Texas) and due to ongoing electrification of vehicles and home heat, renewables buildout does an enormous amount to reduce CO2 emissions and air pollution. Powering electric cars with electricity from gas-fired power plants would have emissions benefits, as I understand it, because natural gas is less polluting than oil and because big power plants are regulated more stringently than car engines or home furnaces. But still, the emissions benefits are much larger if the electricity is partially or wholly generated by renewables.
But the “partially” here is actually really important. An electric car that’s powered 50% by renewables has lower emissions than one powered by 10% renewables and higher emissions than one that’s at 90% — the more renewables in the mix, the lower your emissions. You don’t have to get to 100% renewable power; the key thing is to use “more” renewable power. And when people say (accurately) that renewables are now cheap, they mean that it’s cheap at the current margin to add more renewable power to the mix.
That’s because electricity demand is growing, so a marginal addition of renewable electricity just gets tossed into the mix usefully. But it’s also because Texas (and other states) have all this fixed infrastructure for burning natural gas already. If you get extra wind power you can just burn less gas, and the gas is still there to use if you need it. California, probably the leading-edge state on renewables, actually built a small number of emergency gas generators that they turned on for the first time on September 4 of this year to meet rare peak demand and avoid blackouts.
That California experience illustrates two things, I think:
1. Democratic Party politicians are, in practice, much more pragmatic than unflattering conservative stereotypes of them would suggest.
2. Democrats who’ve actually had to wrestle with practical problems know that we are further from an all-renewables utopia than environmentalist rhetoric suggests.
Gavin Newsom knows perfectly well he can’t just have a statewide blackout once or twice a year and tell voters that’s a small price to pay for meeting emissions goals. Voters want to see action on climate change, but they have a very limited appetite for enduring personal sacrifice or inconvenience. Using renewables to dramatically reduce emissions while still counting on gas backup when needed? Great. Securing even deeper emissions by accepting that power sometimes doesn’t work? Not great.
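To put the marginal-mix point in numbers, here is a minimal Python sketch of how an EV's effective charging emissions fall as the renewable share of the grid rises. The emissions intensities are round-number assumptions of mine, not figures from the article.

```python
# Illustrative only: effective charging emissions for an EV as the renewable
# share of the grid mix rises. The intensity figures are round-number
# assumptions, not data from the article.

GAS_KG_CO2_PER_KWH = 0.4        # assumed gas-fired generation intensity
RENEWABLE_KG_CO2_PER_KWH = 0.0  # treat wind/solar as zero at the margin

def charging_intensity(renewable_share: float) -> float:
    """kg CO2 per kWh of charging for a gas + renewables mix."""
    return (renewable_share * RENEWABLE_KG_CO2_PER_KWH
            + (1.0 - renewable_share) * GAS_KG_CO2_PER_KWH)

for share in (0.10, 0.50, 0.90):
    print(f"{share:.0%} renewables -> {charging_intensity(share):.2f} kg CO2/kWh")
# The relationship is linear: every marginal clean kWh helps,
# with no special threshold at 100 percent renewables.
```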
Consider this data from my rooftop solar panels.
The first pane says that so far in 2022, the panels have generated 102% of our household electricity use — hooray!
The second pane shows that we generated a huge electricity surplus (blue lines below zero) during the spring when it was sunny and cool, but we are in a small deficit over the summer when it’s even sunnier but our air conditioning use surges.
The third pane shows that in September, whether we are in surplus or deficit on any given day hinges crucially on the weather. It’s going to end up being a deficit month largely because of a big rainy stretch.
So how much does this cost? Well, not very much. Because the key thing about this scenario is that all my kilowatts of electricity get used. When I’m in surplus, that extra electricity goes “to the grid” where it substitutes for other sources of power, and I earn credits that offset my electricity usage during deficit periods. If I had to throw away my surplus kilowatts instead of selling them to the grid, my per-kilowatt cost would soar.
And if everyone had solar power, that’s the problem we would face. Who would we export the extra electricity to during surplus periods? At a small margin, we have the technology for this: instead of exporting power during the day and importing it at night, I could get a home battery and store daytime excess for use at night. That would raise my per-kilowatt cost, but only modestly since batteries aren’t that expensive. And you can add wind as well as solar to your grid so you have some resiliency against seasonal variations in sunlight.
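A rough sketch of why the treatment of surplus power matters so much for the per-kilowatt-hour cost. Every figure here (amortized system cost, annual output, surplus share, battery cost) is a hypothetical assumption chosen for illustration, not a number from the article.

```python
# Rough effective cost of rooftop solar under three treatments of surplus
# output: net-metering credits, curtailment, or a home battery.
# Every number here is an illustrative assumption.

ANNUALIZED_PANEL_COST = 1500.0    # $/year, assumed amortized system cost
ANNUAL_GENERATION_KWH = 10_000    # assumed yearly output
SURPLUS_SHARE = 0.40              # assumed share generated when not needed
ANNUALIZED_BATTERY_COST = 600.0   # $/year, assumed amortized battery cost

used_directly_kwh = ANNUAL_GENERATION_KWH * (1 - SURPLUS_SHARE)

# 1. Net metering: exported surplus earns credits, so every kWh counts.
net_metering = ANNUALIZED_PANEL_COST / ANNUAL_GENERATION_KWH

# 2. Curtailment: surplus is thrown away, so fixed costs spread over fewer kWh.
curtailment = ANNUALIZED_PANEL_COST / used_directly_kwh

# 3. Battery: surplus is recovered at night, but the storage hardware adds cost.
with_battery = (ANNUALIZED_PANEL_COST + ANNUALIZED_BATTERY_COST) / ANNUAL_GENERATION_KWH

print(f"net metering: ${net_metering:.3f}/kWh")
print(f"curtailment:  ${curtailment:.3f}/kWh")
print(f"with battery: ${with_battery:.3f}/kWh")
```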
The problem is that without fossil fuels for resilience, the cost per megawatt-hour of renewable electricity soars, because redundancy is expensive.
Seasonal variation is a big problem here, for example.
Let’s say you have enough solar panels to cover 100 percent of your electricity needs on an average December day. That means you’re going to have way more panels than you need on an average June day when the sun is shining for a much longer period of time. On a pure engineering basis, that’s fine — there are just some panels whose output is in practice only needed for a few days per year in the dead of winter. But the cost per megawatt-hour from those panels is going to be astronomical, because a solar panel is almost 100 percent fixed costs.
The same is true of random fluctuations in weather. If you’re like Texas and rely on a mix of gas and wind, then wind is cheap — you add some turbines and that means you burn less gas. If there’s some freak day with very little wind, you burn an unusually large amount of gas. As long as you’re using almost all the wind power you generate, the cost per megawatt-hour from your turbines is low. But if you try to build enough turbines to keep the lights on during low-wind days, you end up wasting wind on high-wind days, and your cost per megawatt-hour rises.
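The arithmetic behind both examples is the same and is easy to sketch. The fixed cost and utilization figures below are assumptions chosen only to show the shape of the problem: the cost per megawatt-hour of useful output scales inversely with how much of a unit's potential output actually gets used.

```python
# A generator is mostly fixed cost, so its cost per useful MWh scales
# inversely with utilization. The figures below are assumptions for
# illustration, not real project costs.

ANNUAL_FIXED_COST_PER_MW = 100_000.0  # assumed $/MW-year for panels or turbines
HOURS_PER_YEAR = 8760

def cost_per_useful_mwh(utilization: float) -> float:
    """$/MWh when only `utilization` of a unit's potential output is used."""
    return ANNUAL_FIXED_COST_PER_MW / (HOURS_PER_YEAR * utilization)

# A well-utilized panel or turbine vs. capacity built only for the worst
# season or the rare low-wind day.
print(f"well-utilized unit:    ${cost_per_useful_mwh(0.25):>6.0f}/MWh")
print(f"winter-only solar:     ${cost_per_useful_mwh(0.02):>6.0f}/MWh")
print(f"low-wind-day turbines: ${cost_per_useful_mwh(0.05):>6.0f}/MWh")
```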
Because massively overbuilding renewables would not only cost a lot of money but wastefully consume vast tracts of land, it seems like a better idea would be to use long-term batteries. If you had really big batteries that stored electricity for a long time, you could simply store surplus power in the high season and unleash it in the low season.
In fact, if you are lucky enough to have large hydroelectric dams at your disposal, you can probably use them as a seasonal storage vehicle. You can let the water pile up when renewables are at maximum capacity and then run it through the dam when you need it. Not coincidentally, politicians from the Pacific Northwest — where there’s tons of hydro — tend to be huge climate hawks.
But for the rest of us, it’s Hypothetical Storage Technology to the rescue.
I’m not saying anything here that renewables proponents aren’t aware of. They write articles about seasonal electricity storage all the time. There are plenty of ideas here that could work, ranging from ideas on the technological cutting edge to brute force engineering concepts like using pumps to create extra hydro capacity. Another idea is that maybe you could replace a lot of current fossil fuel use with burning hydrogen, and then you could manufacture hydrogen using renewable electricity while accepting seasonal variation in the level of hydrogen output. It might work!
Speaking of hypothetical hydrogen applications, it’s also worth saying that while electricity, cars, and home heat together constitute a very large share of global emissions, they are not the whole picture.
You can build an electric airplane with current technology, but we absolutely do not have a zero-carbon replacement for conventional passenger airplanes at hand. Nor do we currently have the ability to manufacture steel, concrete, or various chemicals in a cost-effective way without setting fossil fuels on fire. These aren’t necessarily unsolvable problems, but they have not, in fact, been solved. It isn’t a lack of “political will” that has denied us the ability to do zero-carbon maritime shipping. Right now, the only proven way to power a large ship without CO2 emissions is to use one of the nuclear reactors from an aircraft carrier. But this is both illegal and insanely expensive. You could maybe do something with hydrogen here, or else it is possible that if the Nuclear Regulatory Commission ever decides to follow the law and establish a clear licensing pathway for small civilian nuclear reactors, the companies who think they can mass produce these things in a cost-effective way will be proven right.
And if they are, that would not only solve the container shipping problem but would make decarbonizing electricity much easier. And that’s true even if the microreactors never become as cheap as today’s marginal renewable electricity because we ultimately need to move beyond these margins. The same is true for geothermal power. Even if the most optimistic scenarios here don’t pan out and geothermal remains relatively expensive, a new source of baseload zero-carbon electricity would solve a lot of problems for a mostly-renewable grid.
By the same token, CCS doesn’t ever need to be cheap enough to use at a massive scale to be incredibly useful. Even a very expensive gas + CCS system could be a cost-effective way to backstop renewables rather than engaging in massive overbuilding.
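A hedged back-of-the-envelope comparison makes the point. All of the numbers below (system demand, the share of energy the backstop must cover, the gas-plus-CCS cost, and the utilization of overbuilt renewables) are illustrative assumptions, not estimates from the article.

```python
# Compare two ways of covering the hardest few percent of annual demand:
# an expensive dispatchable gas + CCS backstop vs. overbuilt renewables
# that run usefully only a small fraction of the time.
# All numbers are illustrative assumptions.

ANNUAL_DEMAND_MWH = 1_000_000
BACKSTOP_SHARE = 0.05                        # last 5% of annual energy
GAS_CCS_COST_PER_MWH = 200.0                 # assumed: high, but paid only when run
OVERBUILD_COST_PER_MWH_IF_FULLY_USED = 40.0  # assumed cheap renewables
OVERBUILD_UTILIZATION = 0.05                 # extra capacity rarely needed

backstop_mwh = ANNUAL_DEMAND_MWH * BACKSTOP_SHARE

gas_ccs_total = backstop_mwh * GAS_CCS_COST_PER_MWH
overbuild_total = backstop_mwh * (OVERBUILD_COST_PER_MWH_IF_FULLY_USED
                                  / OVERBUILD_UTILIZATION)

print(f"gas + CCS backstop:   ${gas_ccs_total:,.0f}/year")
print(f"overbuilt renewables: ${overbuild_total:,.0f}/year")
```

Under these assumptions the expensive-but-dispatchable backstop comes out far cheaper, because the overbuilt capacity sits idle most of the year.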
With Direct Air Capture — sucking carbon out of the air with essentially artificial trees — not only would the West pay de facto climate reparations, but we could also achieve net zero without actually solving every technical problem along the way. You could make airlines (and private jets) pay an emissions tax and use the money to capture the CO2. Of course, with all these capture schemes there’s the question of what you actually do with the carbon once it’s captured. One idea is that the CO2 removed from the air could be used to manufacture jet fuel. Airlines would then burn it again and put it back out into the atmosphere, but this process would be a closed loop that wouldn’t add net new greenhouse gases.
People on the internet love to cheerlead for and fight about their favorite technologies.
But everyone should try to focus on what the real tradeoffs are. When towns in Maine ban new solar farms to protect the trees, that is a genuine tradeoff with the development of renewable electricity. When California votes to keep Diablo Canyon open, by contrast, that does absolutely nothing to slow renewable buildout. And the idea that investments in hypothetical carbon capture technologies are preventing the deployment of already existing decarbonization technologies in the present day is just wrong.
The basic reality is that some new innovations are needed to achieve net zero, especially in the context of a world that we hope will keep getting richer.
These innovation paths require us, essentially, to keep something of an open mind. As a matter of really abstract physics, “use renewables to make hydrogen, use hydrogen for energy storage and heat” makes a lot of sense. As a matter of actual commercially viable technologies, though, it’s stacking two different unproven ideas on top of each other. Insisting that all work on cutting-edge industrial hydrogen projects be conducted with expensive green hydrogen throws sand in the gears of difficult and potentially very important work. And when you tell the world that all the problems have been solved except for political will, you unreasonably bias young people who worry about climate toward either paralysis or low-efficacy advocacy work. What we need instead is for more young people who are worried about climate to find ways to contribute on the technical side to actually solving these important problems.
from Bharath Ramsundar via Brad DeLong
Decentralized Protocol and Deep Drug Discovery Researcher
Fifty years ago, the first molecular dynamics papers allowed scientists to exhaustively simulate systems of a few dozen atoms for picoseconds. Today, thanks to tremendous gains in computational capability from Moore’s law and significant gains in algorithmic sophistication from fifty years of research, modern scientists can simulate systems of hundreds of thousands of atoms for milliseconds at a time. Put another way, scientists today can study systems tens of thousands of times larger, for billions of times longer, than they could fifty years ago. The effective reach of physical simulation techniques has expanded the handleable computational complexity roughly ten-trillion fold. The scope of this achievement should not be underestimated; the advent of these techniques, along with the maturation of deep learning, has permitted a host of start-ups (1, 2, 3, etc.) to investigate diseases using tools that were hitherto unimaginable.
The dramatic progress of computational methods suggests that one day scientists should be able to exhaustively understand complete human cells. But to understand how far contemporary science is from this milestone, we first need estimates of the complexity of human cells. One reference suggests that a human sperm cell has a volume of 30 cubic micrometers, while another indicates that most human cells have densities quite similar to that of water (a thousand kilograms per cubic meter). Using the final fact that the molar mass of water is 18 grams per mole, a quick calculation suggests that human sperm cells contain roughly a trillion atoms. For another data point, assuming a neuronal cell volume of 6000 cubic micrometers, the analogous number for neurons is roughly 175 trillion atoms per cell. In addition to this increased spatial complexity, cellular processes take much longer than molecular processes, with mitosis (cell division) occurring on the order of hours. Let’s assume that mitosis takes eight hours for a standard cell. Putting together the series of calculations that we’ve just done suggests that exhaustively understanding a single sperm cell would require molecular simulation techniques to support an increase in spatial complexity from hundreds of thousands of atoms to trillions of atoms, and an increase in simulated time from milliseconds to multiple hours. In sum, simulating a sperm cell will require roughly a ten-million-fold increase in spatial complexity and about a ten-million-fold increase in simulation length, for a total increase in complexity on the order of a hundred-trillion fold. Following the historical example from the previous paragraph, and assuming that Moore’s law continues in some form, we are at least 50 years of hard research from achieving a thorough understanding of the simplest of human cells.
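As a sanity check, the same back-of-the-envelope calculation can be run in a few lines of Python. The 1e5-atom, 1-millisecond baseline and the roughly ten-trillion-fold-in-fifty-years historical pace come from the paragraphs above; everything else is rounded to order of magnitude.

```python
import math

# Reproduce the estimates above: particles per cell (treating the cell as
# water) and the scale-up needed versus today's molecular dynamics (MD).

AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G_PER_MOL = 18.0
DENSITY_G_PER_UM3 = 1e-12           # ~water: 1000 kg/m^3

def particles_in_cell(volume_um3: float) -> float:
    """Order-of-magnitude particle count for a water-density cell."""
    mass_g = volume_um3 * DENSITY_G_PER_UM3
    return mass_g / WATER_MOLAR_MASS_G_PER_MOL * AVOGADRO

sperm = particles_in_cell(30)       # ~1e12: "roughly a trillion"
neuron = particles_in_cell(6000)    # ~2e14: same order as the 175 trillion above

# Scale-up versus today's MD reach (~1e5 atoms for ~1 ms) to simulate
# an eight-hour mitosis of a sperm cell.
spatial = sperm / 1e5                      # ~1e7
temporal = (8 * 3600) / 1e-3               # ~3e7
total = spatial * temporal                 # ~1e14: "hundred-trillion fold"

# Historical pace: ~13 orders of magnitude of reach gained in ~50 years.
years_needed = 50 * math.log10(total) / 13

print(f"sperm cell ~{sperm:.1e} particles; neuron ~{neuron:.1e} particles")
print(f"total scale-up ~10^{math.log10(total):.0f}; "
      f"~{years_needed:.0f} years at the historical pace")
```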
Let’s extend the thought experiment a bit further. How far is human science from understanding a human brain? There are roughly a hundred billion neurons in the brain, and human learning occurs on the order of years. Assuming that each neuron contains 175 trillion atoms, the whole brain contains roughly 2 * 10^25 atoms. To put this number in perspective, that’s roughly 29 moles of atoms! It follows that simulation techniques must support an increase in spatial complexity over today’s limits on the order of a billion-trillion fold (a 20-orders-of-magnitude improvement). As for time complexity, there are roughly 31 million seconds in a year, so simulations would need to run tens of billions of times longer than today’s millisecond computations to capture the dynamics of human learning. Combining both of these numbers, roughly 30 orders of magnitude of additional handleable complexity is required to understand the human brain. To summarize: assuming Moore’s law continues, we are at least 100 years of hard research from achieving a thorough understanding of the human brain.
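And the brain-scale version of the same estimate, using only the figures quoted above:

```python
import math

# Whole-brain version of the estimate, using the figures from the text.

NEURONS = 1e11                     # ~a hundred billion neurons
ATOMS_PER_NEURON = 175e12          # per-neuron estimate from above
AVOGADRO = 6.022e23

brain_atoms = NEURONS * ATOMS_PER_NEURON          # ~1.75e25
moles = brain_atoms / AVOGADRO                    # ~29 moles

# Orders of magnitude beyond today's ~1e5-atom, ~1-ms simulations.
spatial_oom = math.log10(brain_atoms / 1e5)       # ~20
seconds_per_year = 365 * 24 * 3600                # ~31 million
temporal_oom = math.log10(seconds_per_year / 1e-3)  # ~10.5

print(f"{brain_atoms:.2e} atoms  (~{moles:.0f} moles)")
print(f"~{spatial_oom:.0f} spatial + ~{temporal_oom:.0f} temporal "
      f"= roughly 30 orders of magnitude of extra complexity")
```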
An important point that I’ve finessed in the preceding paragraphs is that both estimates above are likely low. An important but often ignored fact about biological systems is that all nontrivial biomolecules exhibit significant quantum entanglement. The wavefunctions of biological macromolecules are quite complex, and until recently have remained beyond the reach of even the most approximate solvers of Schrödinger’s equation. As a result, most simulations of biomolecular systems use crude approximations to handle fundamental biological phenomena such as phosphorylation or ATP processing. More accurate simulations of cells will require extremely large quantum simulations on a scale far beyond today’s capabilities. We have to assume that a Moore’s law for quantum computing will emerge for our estimates to have any hope of remaining accurate. However, given the extreme difficulty of maintaining large entangled states, no such progress has yet emerged from the nascent quantum computing discipline.
To summarize our discussion, simple back-of-the-envelope calculations hint at the extraordinary complexity that undergirds biological systems. Singularitarians routinely posit that it will be possible to upload our brains into the cloud in the not-too-distant future. The calculations in this article serve as a reality check on such musings. It may well be possible to one day upload human brains into computing systems, but the computational requirements of simulating a brain (a necessary component of a high-fidelity upload) are so daunting that even optimistic estimates suggest a hundred years of hard research will be required. Given that the US National Science Foundation is only 66 years old, the difficulty of planning research programs on the century time scale becomes apparent.
None of this is to suggest that computational investigation of biological systems is useless. On the contrary, I believe that increased use of computation is the best path forward for science to comprehend the dazzling array of biological phenomena that support even the most pedestrian life forms. However, today’s scientific environment is polluted with AI hype, which naively suggests that artificial intelligence can solve all hard problems on short time frames. The danger of such thinking is that when learning systems run into their inevitable failures, disappointed backers will start questioning research plans and withdrawing funding. Cold breezes from an AI winter have already started drifting into research buildings following the tragic death of Joshua Brown in a self-driving Tesla. Scientists need to combat the spread of hype and the proliferation of naive forecasts of AI utopia in order to build the solid research plans and organizations required to bridge the gap between modern science and revolutionary advances in biology.