Sunday, June 30, 2024

James Galbraith's Critique of the "Berlin Declaration"


Industrial Policy Is a Nostalgic Pipe Dream


To address the public’s anger after four decades of neoliberalism, progressive and center-left economists are calling for innovation to create wealth “for the many” and to deal with climate change, while also reducing market concentration and power. Unfortunately, they are mistaken about where the real problem lies.

SIRACUSA, ITALY – At a recent “summit” in Berlin, prominent center-left economists announced a “new consensus” on industrial policy. Their joint declaration was then published in full by the Columbia University economic historian Adam Tooze, who described it as “remarkable both for its capacious agreement on economic and industrial policy principles and the way they are embedded in a reading of the political and geopolitical risks of the moment.”

According to the Berlin declaration, those risks are of two types. There are “real risks” such as climate change, “unbearable inequalities,” and “major global conflicts.” And there are risks such as “dangerous populist policies” driven by “a widely shared experience of perceived loss of control … stemming from globalization and technological shifts.” This second category, we are told, follows from “decades of poorly managed globalization, overconfidence in the self-regulation of markets, and austerity [which] have hollowed out the ability of governments to respond to such crises effectively.”

The group has nine recommendations: to “reorient our policies” from upholding “economic efficiency above all” to focusing on “shared prosperity and secure quality jobs”; “develop industrial policies ... supporting new industries and direct innovation toward wealth-creation for the many”; direct industrial policy away from subsidies and toward innovation; design a “healthier form of globalization”; address “income and wealth inequalities”; “redesign climate policies” around carbon pricing and infrastructure investment; support the climate transition in developing countries; avoid austerity “while investing in an effective innovative state”; and “reduce market power in highly concentrated markets.”

As I have written before, a consensus of economists – even well-meaning progressives – is a dangerous thing. Consensus, by its nature, is the enemy of consistency and logic. My friends have taken a half-step away from the previous neoliberal consensus, but it is only a half-step, and they aren’t all marching in the same direction.

It is true that ordinary people are angry. Having been brought up on the promise of a middle-class democracy underpinned by stable industrial jobs, many find themselves toiling as serfs in the gig economy. They are ruled by oligarchs, and condescended to by entitled urban professionals, with economists among the worst offenders.

How did this happen? It may be comforting to blame China (or Mexico, or Japan, or even South Korea), but the story properly starts with the breach between labor and anti-war liberals that occurred within the US Democratic Party in the 1970s. That set the stage for President Ronald Reagan and Federal Reserve Chair Paul Volcker’s destruction of US manufacturing and associated unions, followed by the rise of Big Finance and Big Tech in the Clinton era.

Then came further militarization under George W. Bush, which was intended to consolidate US global power and control over resources, notably oil. The US economy, with Europe as an adjunct, came to rest on banks, bombs, bases, and informatics. Netting out gains and losses, hardly a single new manufacturing job has been created in America for four decades.

To address the public’s anger, my friends call for innovation to create wealth “for the many” and to deal with climate change, while also reducing market concentration and power. But innovation is the reason that market power becomes concentrated in the first place. It’s always about increasing wealth for the innovator and his financiers, and about doing more with fewer people, at less cost. That is how our tech oligarchs – Bill Gates, Jeff Bezos, Mark Zuckerberg, Elon Musk, Peter Thiel, Larry Ellison – came to be. Otherwise, we would never have heard of them.

Of course, addressing climate change is a noble goal. But one must not ignore the inconvenient realities standing in the way. The first is the Jevons paradox: increased energy efficiency allows for new energy uses, and thus tends to increase energy consumption. Just look at how much electricity cryptocurrency mining and AI models consume. Second, big renewable energy projects require big mines (which devour energy), vast new infrastructure (ditto), and – to be profitable – low, stable capital costs that are inconsistent with high interest rates. There is a reason why yesterday’s hot projects are now being downsized or canceled.

A third, decisive problem is that there is no connection between climate investment and the well-being of the larger population today or even in the near future. Will utility bills, taxes, or interest rates fall as a result? No, they will not. Will new products hit the market because hefty tariffs have kept out goods already produced in China? Of course not. The only way to distribute the wealth benefits of innovation to “the many” is to socialize the entire process. You would need a “soviet of engineers,” as Thorstein Veblen once proposed – like the Manhattan Project or the space program.

But to do any such thing requires state capacity, and the Berlin summiteers acknowledge that this has been “hollowed out” over 40 years of neoliberal neglect and predation. Who will supervise the new industrial policy? With no one home in today’s government, tariffs and corporate subsidies are the tools at hand, and the US Commerce Department has hired consultants from Wall Street to identify who should receive them. Good luck making that work.

The sad reality is that today’s advocates of industrial policy are often the same people who first advanced the idea more than 40 years ago to try to rescue the Democrats in the face of Reaganomics. Back then, at least, it was plausible. Yet now, as in the past, they seem unwilling to confront the banks, the military contractors, or the tech tycoons who now run the West. They do not call for definancialization, disarmament, or (as John Maynard Keynes once did) the socialization of new investment. They seek to rebuild state capacity while leaving in place all the forces that destroyed it.

Meanwhile, vast new political forces are filling the vacuum left by neoliberal policies in America and Europe. Given the damage done, there may be no way to assuage the anger driving those “dangerous populists” toward power. Alas, the kumbayas of an outdated idea are not likely to help very much.

James K. Galbraith, Professor of Government and Chair in Government/Business Relations at the University of Texas at Austin, is a former staff economist for the House Banking Committee and a former executive director of the Joint Economic Committee of Congress. From 1993-97, he served as chief technical adviser for macroeconomic reform to China’s State Planning Commission. He is the co-author (with Jing Chen) of the forthcoming Entropy Economics: The Living Basis of Value and Production (University of Chicago Press).

Dani Rodrik, Laura Tyson, Thomas Fricke, et al. -- on the "Berlin Declaration"

Via Project Syndicate.


From the Washington Consensus to the Berlin Declaration


Dozens of leading economists and practitioners convened in Berlin at the end of May for a summit organized by the Forum New Economy. Remarkably, the summit led to something resembling a new understanding that may replace the decades-long reign of neoliberal orthodoxy.

CAMBRIDGE/BERKELEY/BERLIN – Paradigm shifts in mainstream economic thinking usually accompany crises demanding new answers, as occurred when stagflation – low growth combined with high inflation – gripped advanced economies in the 1970s. And it may be happening again, as liberal democracies confront a wave of popular distrust in their ability to serve their citizens and address the multiple crises – ranging from climate change to unbearable inequalities and major global conflicts – that threaten our future.

The consequences can now be seen in the United States, where former President Donald Trump has a good chance of winning the presidential election in November. Similarly, a far-right government could take power in France after the coming snap election. To prevent dangerous populist policies that exploit voters’ anger, and to avert major damage to humanity and the planet, we must urgently address the root causes of people’s resentment.

With this imperative in mind, many leading economists and practitioners convened in Berlin at the end of May for a summit organized by the Forum New Economy. The “Winning Back the People” summit led to something resembling a new understanding that may replace the market-liberal “Washington Consensus,” which for four decades emphasized the primacy of free trade and capital flows, deregulation, privatization, and other pro-market shibboleths.

The Berlin Declaration published at the end of the gathering has since been signed by dozens of leading scholars, including Nobel laureate Angus Deaton, Mariana Mazzucato, and Olivier Blanchard, as well as by Thomas Piketty, Isabella Weber, Branko Milanovic, and many others.

The Washington Consensus has been wobbly for some time, challenged by abundant research documenting rising income and wealth inequality and its causes, as well as reassessments of the role of industrial policy and strategies to combat climate change. Recent crises, not to mention the danger of losing the fight for liberal democracy itself, have catalyzed an effort to translate all this research into a new common framework of policies to win back citizens.

The Berlin Declaration highlights widespread evidence that people’s distrust is to a large extent driven by the shared experience of a real or perceived loss of control over one’s own livelihood and the trajectory of societal changes. This sense of powerlessness has been triggered by shocks stemming from globalization and technological shifts, amplified by climate change, artificial intelligence, the recent inflation shock, and austerity.

This diagnosis logically leads to an equally clear conclusion. To win back people’s trust requires policies that restore confidence in their – and their governments’ – ability to respond effectively to the real problems they face. This means focusing policies on the creation of shared prosperity and good jobs, including policies that proactively address imminent regional disruptions by supporting new industries and directing innovation toward wealth creation for the many.

There is equally strong support for designing a healthier form of globalization, for coordinating climate policies, and for allowing national control over crucial strategic interests. Underlying these priorities is broad agreement that income and wealth inequalities must be narrowed.

As part of a new consensus, climate policies will need to combine reasonable carbon pricing with strong positive incentives and ambitious infrastructure investment. And there is widespread acceptance of the need for developing countries to get the financial and technological resources they need to embark on the climate transition. In sum, there is a new shared common sense that a new balance between markets and collective action needs to be established.

To agree on all of this would probably not have been possible five years ago. The large number of signatories, and the diversity of perspectives they represent, reflect how much the discussion has changed with the accumulation of more and more empirical evidence.

The signatories of the Berlin Declaration do not pretend to have all the answers; far from it. Rather, the Declaration’s purpose is to offer a statement of principles that obviously differ from the previous orthodoxy and to create a mandate to refine political concepts for practice. How to get industrial policy right must be defined in a national context, as well as in a cooperative international effort; the same is true of how governments can best incentivize climate-friendly behavior. How to re-frame globalization or most effectively reduce economic inequality also remain open questions.

Nevertheless, achieving a consensus on the principles that should guide policymakers is hugely important. Recognizing that markets on their own will neither stop climate change nor lead to a less unequal distribution of wealth is only one step toward devising optimal strategies that can effectively address the real challenges that confront us. A lot of progress has already been made on this front.

We now face a choice between a protectionist populist backlash, with all the conflict that this implies, and a new suite of policies that are responsive to people’s concerns. To preempt the populists, we need a new political consensus that focuses on the causes of citizens’ distrust, rather than on the symptoms.

A concerted effort to put citizens and their governments back in the driver’s seat and promote well-being for the many is needed to restore trust in our societies’ ability to overcome crises and secure a better future. To win back the people requires nothing more – and nothing less – than an agenda for the people.

The Berlin Declaration has also been signed by Adam Tooze, Gabriel Zucman, Jens Südekum, Mark Blyth, Catherine Fieschi, Xavier Ragot, Daniela Schwarzer, Robert Johnson, Dalia Marin, Jean Pisani-Ferry, Barry Eichengreen, Laurence Tubiana, Pascal Lamy, Ann Pettifor, Maja Göpel, Stormy-Annika Mildner, Francesca Bria, Katharina Pistor, and some 50 other researchers and practitioners.

Saturday, June 22, 2024

Bloomberg: Text Only: AI === Power


The Big Take | June 21, 2024

Like much of Northern Virginia, Loudoun County was once known for its horse farms and Civil War battle sites. But over the past 15 years, many of this community’s fields and forests have been cleared away to build the data centers that form the backbone of our digital lives. The rise of artificial intelligence is now turbocharging demand for bigger data centers, transforming the landscape even more and taxing the region’s energy grids. On a crisp afternoon this spring, the newest facility was nearing completion. When it’s done, this 200,000-square-foot building could use as much energy as 30,000 homes in the US.

But first, it needs to get enough power...

The energy supply can’t come soon enough for DataBank, the data center provider that owns the Virginia facility. An unnamed "big tech" client leased the entire facility and was so eager to tap into the complex to access computing resources for AI applications that it had servers ready in the building before DataBank was scheduled to have electricity for them.

“That’s the thing with AI. They need a lot of power and as soon as you have it, they want it right away,” said James Mathes, who manages some DataBank facilities. “Right now, it’s like a blank check for AI.”

The almost overnight surge in electricity demand from data centers is now outstripping the available power supply in many parts of the world, according to interviews with data center operators, energy providers and tech executives. That dynamic is leading to years-long waits for businesses to access the grid as well as growing concerns of outages and price increases for those living in the densest data center markets.

The dramatic increase in power demands from Silicon Valley’s growth-at-all-costs approach to AI also threatens to upend the energy transition plans of entire nations and the clean energy goals of trillion-dollar tech companies. In some countries, including Saudi Arabia, Ireland and Malaysia, the energy required to run all the data centers they plan to build at full capacity exceeds the available supply of renewable energy, according to a Bloomberg analysis of the latest available data.

By one official estimate, Sweden could see power demand from data centers roughly double over the course of this decade — and then double again by 2040. In the UK, AI is expected to suck up 500% more energy over the next decade. And in the US, data centers are projected to use 8% of total power by 2030, up from 3% in 2022, according to Goldman Sachs, which described it as “the kind of electricity growth that hasn’t been seen in a generation.”
Globally, there are more than 7,000 data centers built or in various stages of development, up from 3,600 in 2015. These data centers have the capacity to consume a combined 508 terawatt-hours of electricity per year if they were to run constantly, which is greater than the total annual electricity production of Italy or Australia. By 2034, global energy consumption by data centers is expected to top 1,580 TWh, about as much as is used by all of India.

These are only estimates and there remains a high degree of uncertainty about how the current AI frenzy will play out. There’s also a difference between the projections for how much electricity data center developers want and how much generation actually gets built.

While tech companies are quick to point out that data centers account for less than 2% of global energy use even with all the expansion, an April report from Goldman Sachs estimates that figure could rise to 4% by the end of the decade. Any percentage point increase is monumental, given overall electricity demand has remained almost flat for years — if not declining in some regions.

In the US, power demand is expected to grow by 40% over the next two decades, compared with just 9% growth over the past 20 years, according to John Ketchum, chief executive officer at NextEra Energy Inc., the world’s biggest builder of wind and solar energy that isn’t backed by a government. Data centers are the biggest reason for that demand boom, Ketchum said, citing electrification and manufacturing as other factors. Asked why data centers were suddenly sucking up so much power, his answer was blunt: “It’s AI,” he said, citing the energy needs for training models and also the inference process by which AI draws conclusions from data it hasn’t seen before. “It’s 10 to 15 times the amount of electricity.”

Altogether, data centers use more electricity than most countries: only 16 nations, including the US and China, consume more.

The biggest cloud services providers – Amazon.com Inc., Microsoft Corp. and Alphabet Inc.’s Google – have announced goals to run their data centers entirely on green energy: Amazon by next year, Google and Microsoft by 2030. All three say they are working on technological methods to use less power or balance the demand on the grid more efficiently. That can include wringing more efficiency from chips and servers, laying out equipment in ways that require less cooling and shifting loads to different areas based on where energy – particularly green energy – is available.

But some tech leaders like OpenAI CEO Sam Altman have said an energy breakthrough — likely from nuclear power — is needed to adapt to this new picture. Microsoft and Amazon are also betting on nuclear energy, with Amazon recently buying a nuclear-powered data center in Pennsylvania and indicating it’s open to more. For now, the path forward remains unclear. Microsoft recently admitted that its AI push is jeopardizing its long-held goal to be carbon negative by 2030.

“We need terawatts and terawatts more of traditional green energy, whether it’s wind or solar, and that’s across the globe,” said Amanda Peterson Corio, Google’s global head of data center energy, speaking broadly rather than solely about AI demands. For context, a single terawatt is equivalent to the output of about 1,000 nuclear power plants. “Of course we want to decarbonize ourselves, but if we’re just decarbonizing ourselves and not the whole grid, what’s the point?”

Global renewable energy supplies under pressure

[Chart: As new data centers are built, their energy usage could equal or exceed some countries’ renewable energy supply. Ireland’s data center energy usage was equal to 53% of its renewable energy supply in 2022. Series shown: Planned Consumption (2034), Live Consumption (2022), and Renewable Energy Supply (2022), for the 20 countries with the highest data center energy consumption relative to their renewable energy supply, including Saudi Arabia, South Korea, and South Africa. Sources: Bloomberg analysis of BloombergNEF and DC Byte data. Note: The latest available data for renewable electricity supply is for 2022.]

In today’s data centers, you might find thousands of Nvidia Corp.’s coveted H100 chips – the engine of the generative AI boom – each of which draws as much as 700 watts, or nearly eight times the power used by a typical 60-inch flat screen TV. Data centers built for training AI models require even more. Microsoft, for example, strung together tens of thousands of Nvidia processors inside a facility used to develop OpenAI’s technology. These are fed by networking and other types of chips which, combined with machinery in data centers used to prevent the gear from overheating, sap up even more power.
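The arithmetic behind these figures is easy to check. A minimal sketch, assuming the 1.5 average Power Usage Effectiveness (PUE) that Bloomberg's methodology note at the end of this article uses for facility overhead; the chip count is one the article reports for a Microsoft supercomputer, used here purely as an illustration:

```python
H100_WATTS = 700  # peak draw per Nvidia H100, per the article


def cluster_power_mw(num_chips, watts_per_chip=H100_WATTS, pue=1.5):
    """Rough facility power (MW) for an AI cluster: raw chip draw scaled by a
    Power Usage Effectiveness (PUE) factor covering cooling, networking and
    power-conversion overhead. PUE = 1.5 is the article's assumed average."""
    return num_chips * watts_per_chip * pue / 1e6


# A 14,400-chip H100 cluster at full load:
print(round(cluster_power_mw(14_400), 1))  # 15.1 (MW)
```

On these assumptions, a single training cluster of that size would draw on the order of 15 MW, which is why a multi-gigawatt campus request represents hundreds of such clusters' worth of power.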

And the conventional wisdom in Silicon Valley is that the amount of energy needed will only go up. Nvidia’s newest chip, the B100, can consume nearly twice as much power as the H100. Nvidia contends companies will be able to do more with fewer chips, but Ian Buck, the company’s head of accelerated computing, admits it’s likely AI deployments will increase. “People like to fill their data centers,” he said.

AI development is evolving fast, too, with a feverish push toward developing ever larger artificial intelligence models. The Microsoft supercomputer built in 2020 that trained OpenAI’s GPT-3 system used 10,000 of what was then the latest AI chip. A November 2023 Microsoft supercomputer relied on 14,400 top-of-the-line Nvidia H100 chips, and the next one, which is already being designed, will be 30 times more powerful, according to a May slide presentation by Microsoft Azure Chief Technology Officer Mark Russinovich.

Meanwhile, Nvidia CEO Jensen Huang said many nations will push to build their own “sovereign” AI systems to stay competitive and process data locally. The global battle for AI supremacy may well depend on which countries have enough data centers and power to support the technology. If so, Loudoun County is a vision of what’s to come for the rest of the world — and the challenges other countries will face keeping up.

The data center capitals of the world

Over the past five years, Dominion Energy Inc., the power company that services Loudoun County, also known as “data center alley,” has connected 94 data centers that consume about four gigawatts of electricity, combined. Now it’s fielding requests for data center campuses that would consume multiple gigawatts – enough to power hundreds of thousands of homes – two or three of which could use as much electricity combined as all the facilities hooked up since 2019.

The surge in demand is causing a backlog. Data center developers now have to wait longer to hook their projects up to the electric grid. “It could be as quick as two years, it could be four years depending on what needs to be built,” Dominion Energy Virginia president Edward Baine said in an interview.

Dominion is trying to build out the infrastructure to support it. New power lines hang from massive metal towers and run along roads and over creeks to feed electricity to these towering, windowless data centers. The company is building a large new wind farm off the coast and many solar farms, but coal- and gas-fired plants could also stay online longer.

In late 2022, Dominion filed a previously unreported letter to its regulators asking for permission to build new substations and power lines to serve “unprecedented” load growth. In the letter, Dominion said it experienced 18 load relief warnings in the spring of that year. These warnings occur when the grid operator tells the company that it might need to shed load, the technical term for the controlled interruption of power to customers, which could include rotating outages.

“This is far outside of the normal, safe operating protocol,” Dominion told regulators.

A DataBank location in Ashburn, Va. Photographer: Moriah Ratner/Bloomberg

Virginia is not alone in struggling to keep pace with demand. In West London, historically considered a hub for data centers, new facilities have to wait until 2030 to connect to the grid, according to David Bloom, chairman of UK-based data center operator Kao Data. “We are now being pushed into new locations,” he said. Likewise, in the south of Sweden, a strong market for renewables, there is so much demand to get connected that businesses may have to wait years. “We have one queue, and you need to get your ticket,” said Peter Hjalmar, German utility E.ON SE’s regional manager for southern Sweden. And across the US, many new AI data centers are expected to consume 100 megawatts each, according to a recent Morgan Stanley analysis, prompting the analysts to wonder “how all of the proposed data center projects will be powered in a timely manner.” Demand is so high that large tech companies are having bidding wars over data center sites with ready access to power, according to NextEra.

Data center growth, as it’s being forecast, may also run up against the limits of how much power can be carried through transmission lines, said Ali Farhadi, CEO of the Allen Institute for AI. “I don’t think we can move that much electricity around the globe, forget about generating it,” he said.

Data centers will get more efficient over time, energy analysts say, but they’re also getting significantly bigger. The average size of data center facilities worldwide is now 412,000 square feet, an almost fivefold increase from what it was in 2010, according to data from DC Byte, a market intelligence firm.
[Chart: More powerful data centers require more land. Includes building area in development stages.]

The surge in data center demand, combined with heavy investments from power companies like Dominion in new substations, transmission lines and other infrastructure to support it, is also increasing the likelihood that customers will see their energy prices go up, experts say. The costs of some upgrades are typically allocated among electricity customers in an entire region, showing up as a line item on everyone’s monthly utility bill.

Goldman Sachs estimates that US utility companies will have to invest roughly $50 billion in new power generation capacity to support data centers. “That’s going to raise energy prices for both wholesale energy and retail rates,” said power market analyst Patrick Finn of energy consultancy Wood Mackenzie.

Costs including grid improvements are divided among each customer class, from residential to industrial, based on how much it actually costs to serve each, according to a Dominion representative. As a result, residential customers have seen their share of transmission costs drop in recent years while data centers have seen their portion rise, the representative said.

In Ireland, another heavily saturated market, there are some early signs of rate increases. Blessed with a moderate climate, Ireland has attracted so many data centers from Microsoft, Amazon and others that these facilities are forecast to consume a third of the country’s energy by 2026, up from 18% in 2022.

Wholesale power prices in Ireland have been a third higher on average this year than the rest of Europe. Other factors, including the country’s geography, play a role, but Sarah Nolan, senior modeler at Cornwall Insight, said growing data center demand can contribute — and that’s in a country that took strong steps to limit buildout just before the AI craze kicked off.

To manage energy constraints, Ireland’s state-owned electricity operator imposed a moratorium in Dublin in early 2022 and set out conditions to connect new data centers to the grid, including a preference for those generating their own electricity. The operator has “comprehensive” plans to build out the grid, said Donal Travers, the head of technology, consumer and business services for IDA Ireland, the state agency tasked with attracting foreign direct investment. But he said restrictions on large new connections are expected to continue “probably until 2028 or thereabouts.”

In the meantime, other regions are all too eager to open their doors.

A new data center under construction in Gainesville, Va. on March 20, 2024. Photographer: Moriah Ratner/Bloomberg
The next Virginia

When Rangu Salgame looks at Malaysia, he sees the next Virginia “in the making.” Johor, the southernmost state in peninsular Malaysia, has a policy to speed up clearances for data centers. Crucially, it’s also a short drive to Singapore, a longtime data center hub that imposed a moratorium for several years on new facilities to manage energy growth on the tiny island.

Once a sleepy fishing village, the city of Johor Bahru now has suburbs marked by vast construction sites. Microsoft and Amazon are investing in the region, as is Salgame’s company, Princeton Digital Group (PDG). At Sedenak Tech Park, a sprawling complex about 40 miles south of Johor Bahru’s city center, towering cranes dot the sky. PDG’s new 150-megawatt data center occupies one corner of the park, across from similar facilities from other providers.

“We have gone from shovel to production in 12 months,” said Salgame, who expects to have 300 megawatts of capacity in Johor within two years. “Not all parts of the world can execute at this speed and scale.”

But even markets eager to streamline data center buildout face constraints. What’s missing in Johor, especially for an industry like tech that is known for its climate pledges, is renewable energy. The power supply at Sedenak comes from Tenaga Nasional Berhad, which uses coal or gas-fired plants. While Malaysia has ambitious goals to bolster renewables, including plans to build a 500-megawatt solar farm in Johor, today it relies on coal for more than a third of its generation. Most of Malaysia’s data center capacity is not in use yet, but factoring in everything under construction, the amount of electricity used just by data centers would exceed the country’s total renewable output in 2022, the latest year for which data is available, a Bloomberg analysis found.
[Chart: A data rush in Southeast Asia. Driven by Malaysia, the region has 153 new data centers that could potentially be built in the near future, adding a potential 5,419 MW of capacity.]

Like Malaysia, Texas has emerged as one of the fastest growing data center markets in the US, thanks in part to the promise of shorter wait times on its independent and deregulated grid. Texas sites can get on the grid in the one to two years it takes to build data centers, said Bobby Hollis, the vice president of energy at Microsoft, which is the largest player in Texas by megawatt, according to DC Byte.

Texas offers plentiful solar power and, in the state’s panhandle, some of the best access to wind power in the world, Hollis said. But a hotter climate and strained water supplies are pushing Microsoft and Google to try weaning their data center cooling gear off water. Alternate approaches, however, require more energy — about 5% more on average, according to Microsoft.

While power in Texas looks plentiful, there are limits. Solar panels and other gear needed for clean power are starting to see some supply constraints, Hollis said. The Electric Reliability Council of Texas also recently cautioned that it has underestimated demand in the San Antonio area, where Microsoft’s big data center campus is located, potentially causing cascading outages statewide in the future.

Back in Virginia, opposition to data centers is growing. At a March supervisors meeting in Prince William County, frustrated residents spoke out against a zoning change that would allow data centers on a specific plot of land to be built about twice as tall. One woman told officials that data centers were turning her quiet neighborhood into a “dystopian nightmare.”

A 48-year-old homemaker named Rachel Ellis spoke remotely to say the change would mean more demand on a grid that’s already strained. “It’s reckless governing to continue to approve data centers without knowing the full impacts of where this electricity will come from,” she said.

After hearing a dozen people speak against the zoning change, the supervisors voted. The bigger data centers were approved.

Bloomberg converted data center capacity values to energy consumption estimates using the following formula: MWh = capacity × hours in a year × utilization rate × Power Usage Effectiveness (PUE), where capacity is the installed IT capacity, the utilization rate is 70%, and PUE is 1.5 on average. The calculation assumes that data centers run at 70% of installed capacity on average and that their PUE, a ratio used to gauge a facility's efficiency, averages 1.5. These numbers can vary from facility to facility, but Bloomberg had energy experts review these calculations.

Bloomberg calculated the ratio of available renewable electricity (as of 2022, per latest data) to data center electricity consumption (estimate) for each country. In some countries, the data center electricity consumption estimate is nearly equal to, or greater than, the supply of renewable electricity.
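The conversion described above is straightforward arithmetic, and a minimal sketch can make the methodology concrete. The helper names below are my own, and the worked example reuses the 5,419 MW regional pipeline figure mentioned earlier purely for illustration; the renewable-supply figure passed to the ratio function is a placeholder, not a number from the article.

```python
# Sketch of Bloomberg's stated capacity-to-consumption conversion:
# MWh = capacity * hours_in_year * utilization_rate * PUE

HOURS_PER_YEAR = 8760   # 365 days * 24 hours
UTILIZATION = 0.70      # assumed average utilization rate
PUE = 1.5               # assumed average Power Usage Effectiveness

def annual_mwh(capacity_mw: float) -> float:
    """Estimate annual electricity consumption (MWh) for installed IT capacity (MW)."""
    return capacity_mw * HOURS_PER_YEAR * UTILIZATION * PUE

def renewable_ratio(renewable_supply_mwh: float, capacity_mw: float) -> float:
    """Ratio of available renewable electricity to estimated data center consumption.

    Values near or below 1.0 mean data centers alone would absorb
    (or exceed) the country's entire renewable output.
    """
    return renewable_supply_mwh / annual_mwh(capacity_mw)

# Illustrative example: 5,419 MW of potential capacity works out to
# roughly 49.8 million MWh (about 49.8 TWh) per year under these assumptions.
print(annual_mwh(5419))
```

Because utilization and PUE enter the formula multiplicatively, the estimate scales linearly in each: raising assumed PUE from 1.5 to 1.8, for instance, would raise every consumption estimate by 20%.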

Monday, June 17, 2024

Dani Rodrik: The Way Forward for Services-Led Economic Development




Today’s developing economies are in a bind, because innovation in manufacturing has taken a predominantly skill-biased form, reducing demand for workers with relatively low levels of education. This means that labor-absorbing services must become productive enough to support income growth.

BERLIN – The future of developing countries is in services. This may sound odd, given that industrialization has been the traditional road to growth and eventual prosperity, one traveled by all of today’s rich economies and by more recent successes such as South Korea, Taiwan, and China. Manufacturing seems even more essential given that industrial policies to revive it are back in fashion in the US and Europe.

But today’s manufacturing is different. Innovation in manufacturing has taken a predominantly skill-biased form, reducing demand for workers with relatively low levels of education. New technologies such as automation, robots, and 3D printing directly substitute physical capital for labor. While firms in developing countries have an incentive to use more labor-intensive techniques, competing in the global marketplace requires employing production techniques that cannot differ significantly from those used in the frontier economies, because the productivity penalty otherwise would be too high. The need to produce according to the exacting quality standards set by global value chains restricts how much unskilled labor can substitute for physical capital and skilled labor.

The rising skill- and capital-intensity of manufacturing thus means that globally competitive, formal segments of manufacturing in developing countries have lost the ability to absorb significant amounts of labor. They have effectively become “enclave” sectors, not too different from mining, with limited growth potential and few positive effects on the supply side of the rest of the economy.

This means that enhancing productivity in labor-absorbing services has become an essential priority, for reasons of both growth and equity. Since the bulk of jobs will be in services, these jobs need to be productive enough to support income growth. The conundrum is that we do not know much about how to raise productivity in labor-absorbing services.

While some services, such as banking, IT, and business-process outsourcing (BPO), are both productively dynamic and tradable, they will not be labor-absorbing, for the same reason that manufacturing is not. Even under the best circumstances, these relatively skill-intensive services will not provide the answer to the challenge of productive job creation. The challenge is to increase productivity in labor-absorbing services such as retail, care, and personal and public services, where we have had limited success, in part because such services have never been an explicit target of productive development policies.

In a new paper, we describe four strategies for expanding productive employment in services that create the most jobs in developing countries. The first focuses on established, large, and relatively productive incumbent firms, and entails incentivizing them to expand their employment, either directly or through their local supply chains. These firms could be large retailers, platforms such as ride-sharing services, or even manufacturing exporters (with potential to generate upstream linkages with service providers).

The second strategy focuses on small enterprises (which constitute the bulk of firms in developing countries) and aims to enhance their productive capabilities through the provision of specific public inputs. These inputs could be management training, loans or grants, customized worker skills, specific infrastructure, or technology assistance.

Given the heterogeneity of such firms, ranging from micro-enterprises and self-proprietorships to mid-size companies, policies in this domain require a differentiated approach that responds to their distinct needs. Moreover, given the numbers involved, policies often also require a mechanism for selecting among the most promising firms, since most are unlikely to become dynamic and successful.

The third strategy focuses on the provision, to workers directly or to firms, of digital tools or other forms of new technologies that explicitly complement low-skill labor. The objective here is to enable less educated workers to do (some of) the jobs traditionally reserved for more skilled professionals and to increase the range of tasks they can perform.

The fourth strategy also focuses on less-educated workers and combines vocational training with “wrap-around” services, a range of additional assistance programs for job seekers to enhance their employability, retention, and eventual promotion. Modeled after Project Quest, a US-based initiative, and other similar sectoral workforce development schemes, these training programs typically work closely with employers, both to understand their needs and to reshape their human-resource practices to maximize employment potential.

There are examples of these kinds of initiatives around the world, many of which have been rigorously evaluated and which we summarize in our paper. There is already a foundation of practice on what might be called “industrial policies for services” on which future efforts can build.

Regardless of the success of individual programs, it is important to bear in mind the scale of the challenge a services-oriented development strategy faces. A randomized policy intervention that increased earnings of low-income workers by, say, 20% would normally be judged a great success (assuming reasonable program costs). But even if it were successfully scaled up to the economy at large, this gain would not make up even 1% of the income gap that currently exists between a country like Ethiopia and the US. Real success will require greater ambition, continuous experimentation, and implementation of a very wide range of programs.

This commentary draws on the authors’ recent paper “Servicing Development: Productive Upgrading of Labor-Absorbing Services in Developing Countries.”