Monday, July 22, 2019

The Stock-Buyback Swindle [feedly]

Good summary and review of this latest bandit CEO scam....

by Jerry Useem

The Stock-Buyback Swindle
https://www.theatlantic.com/magazine/archive/2019/08/the-stock-buyback-swindle/592774/?utm_source=feed

In the early 1980s, a group of menacing outsiders arrived at the gates of American corporations. The "raiders," as these outsiders were called, were crude in method and purpose. After buying up controlling shares in a corporation, they aimed to extract a quick profit by dethroning its "underperforming" CEO and selling off its assets. Managers—many of whom, to be fair, had grown complacent—rushed to protect their institutions, crafting new defensive measures and lodging appeals in state courts. In the end, the raiders were driven off and their moneyman, Michael Milken, was thrown in prison. Thus ended a colorful chapter in American business history.

Or so it seemed. Today, another effort is under way to raid corporate assets at the expense of employees, investors, and taxpayers. But this time, the attack isn't coming from the outside. It's coming from inside the citadel, perpetrated by the very chieftains who are supposed to protect the place. And it's happening under the most innocuous of names: stock buybacks.

You've seen the phrase. It glazes the eyes, numbs the soul, makes you wonder what's for dinner. The practice sounds deeply normal, like the regularly scheduled maintenance on your car.

It is anything but normal. Before the 1980s, corporations rarely repurchased shares of their own stock. When they started to, it was typically a defensive move intended to fend off raiders, who were drawn to cash piles on a company's balance sheet. By contrast, according to Federal Reserve data compiled by Goldman Sachs, over the past nine years, corporations have put more money into their own stocks—an astonishing $3.8 trillion—than every other type of investor (individuals, mutual funds, pension funds, foreign investors) combined.

Corporations describe the practice as an efficient way to return money to shareholders. By reducing the number of shares outstanding in the market, a buyback lifts the price of each remaining share. But that spike is often short-lived: A study by the research firm Fortuna Advisors found that, five years out, the stocks of companies that engaged in heavy buybacks performed worse for shareholders than the stocks of companies that didn't.

One class of shareholder, however, has benefited greatly from the temporary price jumps: the managers who initiate buybacks and are privy to their exact scope and timing. Last year, SEC Commissioner Robert Jackson Jr. instructed his staff to "take a look at how buybacks affect how much skin executives keep in the game." This analysis revealed that in the eight days following a buyback announcement, executives on average sold five times as much stock as they had on an ordinary day. "Thus," Jackson said, "executives personally capture the benefit of the short-term stock-price pop created by the buyback announcement."

This extractive behavior has rightly been decried for worsening income inequality. Some politicians on the left—Bernie Sanders, Elizabeth Warren, Chuck Schumer—have lately gotten around to opposing buybacks on these grounds. But even the staunchest free-market capitalist should be concerned, too. The proliferation of stock buybacks is more than just another way of feathering executives' nests. By systematically draining capital from America's public companies, the habit threatens the competitive prospects of American industry—and corrupts the underpinnings of corporate capitalism itself.

The rise of the stock buyback began during the heyday of corporate raiders. In the early 1980s, an economist named Michael C. Jensen presented a paper titled "Reflections on the Corporation as a Social Invention." It attacked the conception of corporations that had prevailed since roughly the 1920s—that they existed to serve a variety of constituencies, including employees, customers, stockholders, and even the public interest. Instead, Jensen asserted a new ideology that would become known as "shareholder value." Corporate managers had one job, and one job alone: to increase the short-term share price of the firm.

The philosophy had immediate appeal to the raiders, who used it to give their depredations a fig leaf of legitimacy. And though the raiders were eventually turned back, the idea of shareholder value proved harder to dispel. To ward off hostile takeovers, boards started firing CEOs who didn't deliver near-term stock-price gains. The rolling of a few big heads—including General Motors' Robert Stempel in 1992 and IBM's John Akers in 1993—drove home the point to CEOs: They had better start thinking about shareholder value.

If their conversion to the enemy faith was at first grudging, CEOs soon found a reason to love it. One of the main tenets of shareholder value is that managers' interests should be aligned with shareholders' interests. To accomplish this goal, boards began granting CEOs large blocks of company stock and stock options.

The shift in compensation was intended to encourage CEOs to maximize returns for shareholders. In practice, something else happened. The rise of stock incentives coincided with a loosening of SEC rules governing stock buybacks. Three times before (in 1967, '70, and '73), the agency had considered such a rule change, and each time it had deemed the dangers of insider "market manipulation" too great. It relented just before CEOs began acquiring ever greater portfolios of their own corporate stock, making such manipulation that much more tantalizing.

Too tantalizing for CEOs to resist. Today, the abuse of stock buybacks is so widespread that naming abusers is a bit like singling out snowflakes for ruining the driveway. But somebody needs to be called out.

So take Craig Menear, the chairman and CEO of Home Depot. On a conference call with investors in February 2018, he and his team mentioned their "plan to repurchase approximately $4 billion of outstanding shares during the year." That day, he sold 113,687 shares, netting $18 million. The following day, he was granted 38,689 new shares, and promptly unloaded 24,286 shares for a profit of $4.5 million. Though Menear's stated compensation in SEC filings was $11.4 million for 2018, stock sales helped him earn an additional $30 million for the year.

By contrast, the median worker pay at Home Depot is $23,000 a year. If the money spent on buybacks had been used to boost salaries, the Roosevelt Institute and the National Employment Law Project calculated, each worker would have made an additional $18,000 a year. But buybacks are more than just unfair. They're myopic. Amazon (which hasn't repurchased a share in seven years) is presently making the sort of investments in people, technology, and products that could eventually make Home Depot irrelevant. When that happens, Home Depot will probably wish it hadn't spent all those billions to buy back 35 percent of its shares. "When you've got a mature company, when everything seems to be going smoothly, that's the exact moment you need to start worrying Jeff Bezos is going to start eating your lunch," the shareholder activist Nell Minow told me.

Then there's Merck. The pharmaceutical company was a paragon of corporate excellence through the second half of the 20th century. "Medicine is for people, not for profits," George Merck II declared on the cover of Time in 1952. "And if we have remembered that, the profits have never failed to appear." In the late 1980s, then-CEO Roy Vagelos, rather than sit on a drug that could cure river blindness in Africa but that no one could pay for, persuaded his board of directors to manufacture and distribute the drug for free—which, as Vagelos later noted in his memoir, cost the company more than $200 million. More recently, Merck has been using its massive earnings (its net income for 2018 was $6.2 billion) to repurchase shares of its own stock. A study by the economists William Lazonick and Öner Tulum showed that from 2008 to 2017 the company distributed 133 percent of its profits, through buybacks and dividends, to shareholders—including CEO Kenneth Frazier, who has sold $54.8 million in stock since last July. How is this sustainable? "It's not," Lazonick says. Merck insists it must keep drug prices high to fund new research. In 2018, the company spent $10 billion on R&D—and $14 billion on share repurchases and dividends.

Finally, consider the executives at Applied Materials, a maker of semiconductor-manufacturing equipment. As is the case at many companies, its CEO receives incentive pay based on certain metrics. One is earnings per share, or EPS, a widely used barometer of corporate performance. Normally, EPS is lifted by improving earnings. But EPS can be easily manipulated through a stock buyback, which simply reduces the denominator—the number of outstanding shares. At Applied Materials, earnings declined 3.5 percent last year. Yet the company still managed to eke out EPS growth of 1.9 percent. How? In part, by taking more than 10 percent of its shares off the market via buybacks. That move helped executives unlock more incentive compensation—which, these days, usually comes in the form of stock or stock options.
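To see the arithmetic, here is a minimal sketch in Python, using hypothetical round numbers rather than Applied Materials' actual figures:

# Minimal sketch (hypothetical numbers): how a buyback can turn an
# earnings decline into reported EPS "growth" by shrinking the denominator.

def eps(net_income: float, shares_outstanding: float) -> float:
    """Earnings per share: net income divided by shares outstanding."""
    return net_income / shares_outstanding

year1 = eps(net_income=1_000.0, shares_outstanding=1_000.0)  # EPS = 1.00

# Year 2: earnings fall 3.5 percent, but a buyback retires 10 percent of
# the shares. (Reported EPS uses a weighted-average share count, so a
# mid-year buyback shows up only partially; this sketch ignores timing,
# which is why the printed figure exceeds the 1.9 percent in the article.)
year2 = eps(net_income=1_000.0 * 0.965, shares_outstanding=1_000.0 * 0.90)

print(f"EPS 'growth' despite falling earnings: {year2 / year1 - 1:.1%}")
# -> EPS 'growth' despite falling earnings: 7.2%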

Corporations offer a variety of justifications for the practice of repurchasing stock. One is that buybacks are a more "flexible" way of returning money to shareholders than dividends, which (it's true) once raised are very hard to reduce. Another argument: Some companies just make more money than they can possibly put to good use. This likewise has a smidgen of truth. Apple may not have $1 billion worth of good bets to make or companies it wants to acquire. Though, if this were the real reason companies are repurchasing stock, it would imply that biotechnology, banking, and big retail—sectors that hold some of the biggest practitioners of buybacks—are nearing a dead end, idea-wise. CEOs will also sometimes make the case that their stock is undervalued, and that repurchases represent an opportunity to buy low. But in reality, notes Fortuna's Gregory Milano, companies tend to buy their stock high, when they're flush with cash. The 10th year of a bull market is hardly a time for bargain-hunting.

Capitalism takes many forms. But the variant that propelled America through the 20th century was, at its heart, a means of pooling resources toward a common endeavor, whether that was building railroads, developing new drugs, or making microwave ovens. There used to be a healthy debate about which of their stakeholders corporations ought to serve—employees, stockholders, customers—and in what order. But no one, not even Michael Jensen, ever suggested that a corporation should exist solely to serve the interests of the people entrusted to run it.

Many early stock certificates bore an image—a factory, a car, a canal—representing the purpose of the corporation that issued them. It was a reminder that the financial instrument was being put to productive use. Corporations that plow their profits into buybacks would be hard-pressed to put an image on their stock certificate today, other than, perhaps, the visage of their CEO.


 -- via my feedly newsfeed

Sunday, July 21, 2019

Chris Dillow: UK Centrists' failure [feedly]

(UK) Centrists' failure
https://stumblingandmumbling.typepad.com/stumbling_and_mumbling/2019/07/centrists-failure.html

 Simon's claim that Corbyn is the lesser of two evils has had a lot of pushback. Some say Corbyn is more antisemitic than we think, others that we do indeed have other choices*. All this, however, raises the question: how have we gotten into the position where the choices for future Prime Minister are so poor? The worse you think Corbyn is, the more force this question has.

The answer, to a large extent, is that centrists have failed. They had the ball – New Labour and the LibDems were in office for 18 years – and they dropped it. Johnson and Corbyn have risen as centrism has fallen. Centrists show an astounding lack of self-awareness about their own abject failure.

Corbyn is popular – insofar as he is – not because he is a political genius (he's not) nor because many of us have become antisemites or have lost our minds. His popularity – especially with young graduates – rests upon material economic conditions. The degradation of professional occupations and huge gap between the top 1% and others have radicalized young people in erstwhile middle-class jobs; financialization has made housing unaffordable for youngsters; and a decade of stagnant real wages – the product of inequality and the financial crisis - has increased demands for change.

There's nothing inevitable about young people being radical: in the 1987 election the Tories won the 25-34 year-old vote and only narrowly lost among 18-24 year-olds. Young people's support for Corbyn is because capitalism as it currently exists has failed them - and it has done so because centrists have let it.

The same can be said of Brexit. Austerity, at the margin, tipped the balance against Remainers. The marginal Brexit voter wasn't some gammon who thinks Dambusters should be the role model for our relations with the EU. It was someone who was cheesed off with living in a run-down area and being ignored by the (centrist) elite.

Centrists are oblivious to all this. The tiggers or cukkers or whatever they call themselves today have no awareness of the reality of capitalist stagnation, let alone have an answer to it. Corbyn became Labour leader because his rivals (with the partial exception of Liz Kendall) had no worthwhile material programme to offer. The LibDems acquiesced in the austerity that led to both Brexit and Corbynism. And the centre right merely wags its finger at voters to tell them they got it wrong, whilst showing no awareness that its own policies are responsible for Brexit, and no appreciation that the slogan "take back control" had so much power for so many voters.

Leftists often accuse centrists of being Blairite. This is a slur upon Blair. His genius was to recognise in the 1990s that the world had changed and that new policies were needed for new times. Centrists today are innocent of such insight. They are stuck in a 1990s timewarp. It is Corbyn who is the true heir to Blair, as he sees – albeit perhaps in the same way that a stuck clock is right twice a day – that our times require new policies. He at least offers a faint shadow of a glimmer of slim hope of an alternative to the neoliberal financialization that got us into this mess. Centrists offer jack-shit.

Even the IMF now acknowledges that inequality is a menace to economies. Whilst some individual centrists get this (I'm looking at you, Paddy and Tony), centrist politicians do not.

Now, some might object here that aspects of the LibDems' 2017 manifesto were more redistributive than Labour's, which was insufficiently hostile to the benefit freeze. But LibDem manifestos are a soft currency: the 2010 one said nothing about trebling tuition fees or supporting austerity, benefit cuts and the hostile environment policy. Redistribution is part of the soul of the Labour left; the same cannot be said for the LibDems. As I said in another context, if you've got yourself a reputation for laziness you need to work doubly hard if you are to lose it. The LibDems are not doing so.

You might also object that politics is about more than economics, and that the battleground now is about culture and identity rather than a few quid here or there. This misses the point. The great virtue of economic growth, as Ben Friedman showed, is that it creates a climate in which toleration and openness can thrive. Stagnation, by contrast, gives us closed minds, intolerance and fanaticism. If centrists are sincere in wanting a more civilized and tolerant politics, they must create the material conditions for these. In fact, in office they did the opposite.

It's common to claim that Corbyn has enabled antisemitism. But it is centrists who enabled Corbynism by creating the conditions in which he – and indeed the far right – can thrive. A serious politics requires an awareness that political behaviour has a structural basis and offers a way of creating the structures that generate the ideals they favour. Centrists, however, do none of this. All they have is a sense of their own entitlement to rule and a smug moral superiority.

Rather than wag the finger at those of us who feel compelled to make the tragic choice between two deeply flawed potential Prime Ministers, centrists should show more humility and more cognizance that it is their acquiescence in inequality and austerity that got us into this mess.

* I know we do: I voted Green in the euro elections, but Jonathan Bartley and Sian Berry aren't going to be Prime Minister.  



 -- via my feedly newsfeed

Saturday, July 20, 2019

Safety and accident analysis: Longford [feedly]

Safety and accident analysis: Longford
http://understandingsociety.blogspot.com/2019/07/safety-and-accident-analysis-longford.html

Andrew Hopkins has written a number of fascinating case studies of industrial accidents, usually in the field of petrochemicals. These books are crucial reading for anyone interested in arriving at a better understanding of technological safety in the context of complex systems involving high-energy and tightly-coupled processes. Especially interesting is his Lessons from Longford: The ESSO Gas Plant Explosion. The Longford refining plant suffered an explosion and fire in 1998 that killed two workers, badly injured others, and interrupted the supply of natural gas to the state of Victoria for two weeks. Hopkins is a sociologist, but has developed substantial expertise in the technical details of petrochemical refining plants. He served as an expert witness in the Royal Commission hearings that investigated the accident. The accounts he offers of these disasters are genuinely fascinating to read.

Hopkins makes the now-familiar point that companies often seek to lay responsibility for a major industrial accident on operator error or malfeasance. This was Esso's defense concerning its corporate liability in the Longford disaster. But, as Hopkins points out, the larger causes of failure go far beyond the individual operators whose decisions and actions were proximate to the event. Training, operating plans, hazard analysis, availability of appropriate onsite technical expertise -- these are all the responsibility of the owners and managers of the enterprise. And regulation and oversight of safety practices are the responsibility of state agencies. So it is critical to examine the operations of a complex and dangerous technology system at all these levels.

A crucial part of management's responsibility is to engage in formal "hazard and operability" (HAZOP) analysis. "A HAZOP involves systematically imagining everything that might go wrong in a processing plant and developing procedures or engineering solutions to avoid these potential problems" (26). This kind of analysis is especially critical in high-risk industries including chemical plants, petrochemical refineries, and nuclear reactors. It emerged during the Longford accident investigation that HAZOP analyses had been conducted for some aspects of risk but not for all -- even in areas where the parent company Exxon was itself already fully engaged in analysis of those risky scenarios. The risk of embrittlement of processing equipment when exposed to super-chilled conditions was one that Exxon had already drawn attention to at the corporate level because of prior incidents.
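To make this concrete, here is a minimal sketch of one row of a HAZOP-style worksheet as a Python data structure. The field names and the Longford-flavored example values are illustrative only, not Esso's or Hopkins's actual format:

# Illustrative sketch of a HAZOP worksheet entry. A HAZOP walks each
# process node through guide words ("no", "more", "less", "reverse", ...)
# applied to process parameters (flow, temperature, pressure, ...).

from dataclasses import dataclass

@dataclass
class HazopEntry:
    node: str          # the section of plant under review
    parameter: str     # process parameter, e.g. "temperature", "flow"
    guide_word: str    # deviation guide word, e.g. "no", "more", "less"
    cause: str         # what could produce the deviation
    consequence: str   # what happens if it does
    safeguard: str     # procedure or engineering control to adopt

# Roughly the Longford scenario as Hopkins describes it (hypothetical wording):
entry = HazopEntry(
    node="lean oil heat exchanger",
    parameter="temperature",
    guide_word="less",
    cause="loss of warm oil circulation",
    consequence="metal embrittlement; cracking if warm oil is reintroduced",
    safeguard="shutdown procedure forbidding rapid reheating of chilled metal",
)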

A factor that Hopkins judges to be crucial to the occurrence of the Longford Esso disaster is the decision made by management to remove engineering staff from the plant to a central location where they could serve a larger number of facilities "more efficiently".
A second relevant change was the relocation to Melbourne in 1992 of all the engineering staff who had previously worked at Longford, leaving the Longford operators without the engineering backup to which they were accustomed. Following their removal from Longford, engineers were expected to monitor the plant from a distance and operators were expected to telephone the engineers when they felt a need to. Perhaps predictably, these arrangements did not work effectively, and I shall argue in the next chapter that the absence of engineering expertise had certain long-term consequences which contributed to the accident. (34)
One result of this decision is the fact that when the Longford incident began there were no engineering experts on site who could correctly identify the risks created by the incident. Technicians therefore restarted the process by reintroducing warm oil into the super-chilled heat exchanger. The metal had become brittle as a result of the extremely low temperatures and cracked, leading to the release of fuel and subsequent explosion and fire. As Hopkins points out, Exxon experts had long been aware of the hazards of embrittlement. However, it appears that the operating procedures developed by Esso at Longford ignored this risk, and operators and supervisors lacked the technical/scientific knowledge to recognize the hazard when it arose.

The topic of "tight coupling" (the tight interconnection across different parts of a complex technological system) comes up frequently in discussions of technology accidents. Hopkins shows that the Longford case gives a new spin to this idea. In the case of the explosion and fire at Longford it turned out to be very important that plant 1 was interconnected by numerous plumbing connections to plants 2 and 3. This meant that fuel from plants 2 and 3 continued to flow into plant 1 and greatly extended the length of time it took to extinguish the fire. Plant 1 had to be fully isolated from plants 2 and 3 before the fire could be extinguished (or plants 2 and 3 could be restarted), and there were so many plumbing connections among them, poorly understood at the time of the fire, that it took a great deal of time to disconnect them (32).

Hopkins addresses the issue of government regulation of high-risk industries in connection with the Longford disaster. Written in 1999 or so, he recognizes the trend towards "self-regulation" in place of government rules stipulating the operating of various industries. He contrasts this approach with deregulation -- the effort to allow the issue of safe operation to be governed by the market rather than by law.
Whereas the old-style legislation required employers to comply with precise, often quite technical rules, the new style imposes an overarching requirement on employers that they provide a safe and healthy workplace for their employees, as far as practicable. (92)
He notes that this approach does not necessarily reduce the need for government inspections; but the goal of regulatory inspection will be different. Inspectors will seek to satisfy themselves that the industry has done a responsible job of identifying hazards and planning accordingly, rather than looking for violations of specific rules. (This parallels to some extent his discussion of two different philosophies of audit, one of which is much more conducive to increasing the systems-safety of high-risk industries; chapter 7.) But his preferred regulatory approach is what he describes as "safety case regulation". (Hopkins provides more detail about the workings of a safety case regime in Disastrous Decisions: The Human and Organisational Causes of the Gulf of Mexico Blowout, chapter 10.)
The essence of the new approach is that the operator of a major hazard installation is required to make a case or demonstrate to the relevant authority that safety is being or will be effectively managed at the installation. Whereas under the self-regulatory approach, the facility operator is normally left to its own devices in deciding how to manage safety, under the safety case approach it must lay out its procedures for examination by the regulatory authority. (96)
The preparation of a safety case would presumably include a comprehensive HAZOP analysis, along with procedures for preventing or responding to the occurrence of possible hazards. Hopkins reports that the safety case approach to regulation is being adopted by the EU, Australia, and the UK with respect to a number of high-risk industries. This discussion is highly relevant to the current debate over aircraft manufacturing safety and the role of the FAA in overseeing manufacturers.

It is interesting to realize that Hopkins is implicitly critical of another of my favorite authors on the topic of accidents and technology safety, Charles Perrow. Perrow's central idea of "normal accidents" brings along with it a certain pessimism about the ability to increase safety in complex industrial and technological systems; accidents are inevitable and normal (Normal Accidents: Living with High-Risk Technologies). Hopkins takes a more pragmatic approach and argues that there are engineering and management methodologies that can significantly reduce the likelihood and harm of accidents like the Esso gas plant explosion. His central point is that we don't need to be able to anticipate a long chain of unlikely events in order to identify the hazard in which these chains may eventuate -- for example, loss of coolant in a nuclear reactor or loss of warm oil in a refinery process. These final events of numerous different possible accident scenarios all require procedures in place that will guide the responses of engineers and technicians when "normal accidents" occur (33).

Hopkins highlights the challenge to safety created by the ongoing modification of a power plant or chemical plant; later modifications may create hazards not anticipated by the rigorous accident analysis performed on the original design.
Processing plants evolve and grow over time. A study of petroleum refineries in the US has shown that "the largest and most complex refineries in the sample are also the oldest ... Their complexity emerged as a result of historical accretion. Processes were modified, added, linked, enhanced and replaced over a history that greatly exceeded the memories of those who worked in the refinery." (33)
This is one of the chief reasons why Perrow believes technological accidents are inevitable. However, Hopkins draws a different conclusion:
However, those who are committed to accident prevention draw a different conclusion, namely, that it is important that every time physical changes are made to plant these changes be subjected to a systematic hazard identification process. ...  Esso's own management of change philosophy recognises this. It notes that "changes potentially invalidate prior risk assessments and can create new risks, if not managed diligently." (33)
(I believe this recommendation conforms to Nancy Leveson's theories of system safety engineering as well; link.)

Here is the causal diagram that Hopkins offers for the occurrence of the explosion at Longford (122).



The lowest level of the diagram represents the sequence of physical events and operator actions leading to the explosion, fatalities, and loss of gas supply. The next level represents the organizational factors identified in Longford's analysis of the event and its background. Central among these factors are the decision to withdraw engineers from the plant; a safety philosophy that focused on lost-time injuries rather than system hazards and processes; failures in the incident reporting system; failure to perform a HAZOP for plant 1; poor maintenance practices; inadequate audit practices; inadequate training for operators and supervisors; and a failure to identify the hazard created by interconnections with plants 2 and 3. The next level identifies the causes of the management failures -- Esso's overriding focus on cost-cutting and a failure by Exxon as the parent company to adequately oversee safety planning and share information from accidents at other plants. The final two levels of causation concern governmental and societal factors that contributed to the corporate behavior leading to the accident.

(Here is a list of major industrial disasters; link.)  

 -- via my feedly newsfeed

Krugman: Deficit Man and the 2020 Election [feedly]

Deficit Man and the 2020 Election
https://www.nytimes.com/2019/07/18/opinion/2020-trump-economy.html

I've seen a number of people suggest that the 2020 election will be a sort of test: Can a sufficiently terrible president lose an election despite a good economy? And that is, in fact, the test we'd be running if the election were tomorrow.

On one side, Donald Trump wastes no opportunity to remind us how awful he is. His latest foray into overt racism delights his base but repels everyone else. On the other side, he presides over an economy in which unemployment is very low and real G.D.P. grew 3.2 percent over the past year.

But the election won't be tomorrow, it will be an exhausting 15 months from now. Trump's character won't change, except possibly for the worse. But the economy might look significantly different.

So let's talk about the Trump economy.

The first thing you need to know is that the Trump tax cut caused a huge rise in the budget deficit, which the administration expects to hit $1 trillion this year, up from less than $600 billion in 2016. This tidal wave of red ink is even more extraordinary than it looks, because it has taken place despite falling unemployment, which usually leads to a falling deficit.



Strange to say, none of the Republicans who warned of a debt apocalypse under President Barack Obama have protested the Trump deficits. (Should we put Paul Ryan's face on milk cartons?) For that matter, even the centrists who obsessed over federal debt during the Obama years have been pretty quiet. Clearly, deficits only matter when there's a Democrat in the White House.

Oh, and the imminent fiscal crisis people like Erskine Bowles used to warn about keeps not happening: Long-term interest rates remain very low.

Now, the evidence on the effects of deficit spending is clear: It gives the economy a short-run boost, even when we're already close to full employment. If anything, the growth bump under Trump has been smaller than you might have expected given the deficit surge, perhaps because the tax cut was so badly designed, perhaps because Trump's trade wars have deterred business spending.

For now, however, Deficit Man is beating Tariff Man. As I said, we've seen good growth over the past year.

But the tax cut was supposed to be more than a short-run Keynesian stimulus. It was sold as something that would greatly improve the economy's long-run performance; in particular, lower corporate tax rates were supposed to lead to a huge boom in business investment that would, among other things, lead to sharply higher wages. And this big rise in long-run growth would supposedly create a boom in tax revenues, offsetting the upfront cost of tax cuts.



None of this is happening. Corporations are getting to keep a lot more of their profits, but they've been using the money to buy back their own stock, not raise investment. Wages are rising, but not at an extraordinary pace, and many Americans don't feel that they're sharing in the benefits of a growing economy.

And this is probably as good as it gets.

I'm not forecasting a recession. It could happen, and we're very badly positioned to respond if it does, but the more likely story is just a slowdown as the effects of the deficit splurge wear off. In fact, if you believe the "nowcasters" (economists who try to get an early read on the economy from partial data), that slowdown is already happening. For example, the Federal Reserve Bank of New York believes that the economy's growth was down to 1.5 percent in the second quarter.

And it's hard to see where another economic bump can come from. With Democrats controlling the House, there won't be another big tax cut. The Fed may cut interest rates, but those cuts are already priced into long-term interest rates, which are what matter for spending, and the economy seems to be slowing anyway.

Which brings us back to the 2020 election.

Political scientists have carried out many studies of the electoral impact of the economy, and as far as I know they all agree that what matters is the trend, not the level. The unemployment rate was still over 7 percent when Ronald Reagan won his 1984 landslide; it was 7.7 percent when Obama won in 2012. In both cases, however, things were clearly getting better.

That's probably not going to be the story next year. If we don't have a recession, unemployment will still be low. But economic growth will probably be meh at best — which means, if past experience is any guide, that the economy won't give Trump much of a boost, that it will be more or less a neutral factor.

And on the other hand, Trump's awfulness will remain.

Republicans will, of course, portray the Democratic nominee — whoever she or he may be — as a radical socialist poised to throw the border open to hordes of brown-skinned rapists. And one has to admit that this strategy might work, although it failed last year in the midterms. To be honest, I'm more worried about the effects of sexism if the nominee is a woman — not just the sexism of voters, but that of the news media, which still holds women to different standards.

But as far as the economy goes, the odds are that Trump's deficit-fueled bump came too soon to do him much political good.





 -- via my feedly newsfeed

Tim Taylor: Antitrust in the Digital Economy [feedly]

What is property in intangible exchange?

Antitrust in the Digital Economy
http://conversableeconomist.blogspot.com/2019/07/antitrust-in-digital-economy.html

Discussions of antitrust and the FAGA companies--that is, Facebook, Amazon, Google, and Apple--often sound like a person with a hammer who just wants to hit something. Here's Peggy Noonan writing in the Wall Street Journal (June 6, 2019):
But the mood in America is anti-big-tech. Everyone knows they're too powerful, too arrogant, loom too large in public life. ... Here's what they [Congress] should be thinking: Break them up. Break them in two, in three; regulate them. Declare them to be what they've so successfully become: once a pleasure, now a utility. It all depends on Congress, which has been too stupid to move in the past and is too stupid to move competently now. That's what's slowed those of us who want reform, knowing how badly they'd do it. Yet now I find myself thinking: I don't care. Do it incompetently, but do something.
When it comes to regulation of the big digital firms, we may end up with incompetence at the end of the day. But first, could we at least take a stab at thinking about what competent antitrust and regulation might look like? For guidance, I've seen three reports so far this year on antitrust and competition in the digital economy:
The three reports are "Unlocking Digital Competition," from the UK's Digital Competition Expert Panel chaired by Jason Furman; "Competition Policy for the Digital Era," written for the European Commission by Jacques Cremer, Yves-Alexandre de Montjoye, and Heike Schweitzer; and the market structure report of the Stigler Center's Committee on Digital Platforms, chaired by Fiona Scott Morton. All three are at least 100 pages in length. They are all written by prominent economists, and they have a lot of overlap with each other. Here, I'll just focus on some of the main points that caught my eye when reading them. But be aware that the topics and terms mentioned here typically come up in at least two of the three reports, and often in all three.

Antitrust Policy is Not a One-Size-Fits-All Solution for Everything

Antitrust policy is about attempting to ensure that competition arises. Pace Noonan, it's not about whether the firms or the people running them are powerful, arrogant, or large.

Digital technologies raise all sorts of broad social issues that are not directly related to competition. For example, the appropriate treatment of personal information would still be an issue if there were four Facebook-like firms and five Google-like firms all in furious competition with each other. Indeed, it's possible that a bunch of new firms competing in these markets might be more aggressive in hoovering up personal information and passing it along to advertisers and others. The question of if or when certain content should be blocked from the digital sites will still be an issue no matter how many firms are competing in these markets.

Digital firms use algorithms in many of their decisions about marketing and price, and such algorithms can end up building in various forms of bias. This is a problem that will exist with or without competition. The forces of corporate lobbying on the political process are a legitimate public issue, regardless of whether the lobbying comes from many smaller firms, or a larger firm, or an industry association.

In other words, the digital economy raises lots of issues of public interest. Responding to those issues doesn't just mean flailing around with an antitrust hammer, hoping to connect with a few targets; it means thinking about what problems need to be addressed and what policy tools are likely to be useful in addressing them.

What is the Antitrust Issue with Digital Firms?

In the US economy, at least, being big and being profitable are not violations of existing antitrust law. If a firm earns profits and becomes large by providing consumers with goods and services that they desire, at a price the consumers are willing to pay, that's fine. But if a firm earns profits and becomes large by taking actions that hinder and block the competition, those anticompetitive actions can be a violation of antitrust law. The challenge with big digital firms is where to draw the line.
When the internet was young, two decades ago, there was a widespread belief that it would not be a hospitable place for big companies. [T]he economic literature of the beginning of the 21st century assumed that competition between online firms would arise as consumers hopped from site to site, easily comparing their offers. The reality however quickly turned out to be very different. Very early in the history of the Internet, a limited number of "gateways" emerged. With the benefit of hindsight, this might not be too surprising. Users have limited time and need curators to help them navigate the long tail of websites to find what they are looking for. These curators then developed a tendency to keep users on their platform, and by the end of the 1990s, it was common place to speak about AOL's "walled garden". AOL's market power however rested in great part on its role as an Internet service provider and both competition in that domain and, according to some observers, strategic mistakes after its merger with Time Warner eroded its power.
Fast forwarding to today, a few ecosystems and large platforms have become the new gateways through which people use the Internet. Google is the primary means by which people in the Western world find information and contents on the Internet. Facebook/WhatsApp, with 2.6 billion users, is the primary means by which people connect and communicate with one another, while Amazon is the primary means for people to purchase goods on the Internet. Moreover, some of those platforms are embedded into ecosystems of services and, increasingly, devices that complement and integrate with one another. Finally, the influence of these gateways is not only economic but extends to social and political issues. For instance, the algorithms used by social media and video hosting services influence the types of political news that their users see while the algorithm of search engines determines the answers people receive to their questions.
As all of these reports note, it is now apparent that when a large digital company becomes established, entry can be hard. There is a "network effect," where a system that has more users is also attractive to more users. There's a "platform" effect, where most buyers and sellers head for Amazon because the most buyers and sellers are already on Amazon. There are economies of scale, where the startup costs of a new platform or network can be fairly high, but adding additional users has a marginal cost of near-zero. And there are issues involving the collection and analysis of data, where more users mean more attraction to advertisers and more data, which can then be monetized.

None of these issues are new for antitrust. But the big digital firms bring these issues together in some new ways. The markets become prone to "tipping," which means that when an established firm gets a certain critical mass, other firms can't attract enough users to survive. As economists sometimes say, it becomes a case where there is competition for the market, in the sense of which firm will become dominant in that market, but then there is little competition within the market.

One consequence is that when a big firm becomes established, entry is hard and future competition can be thwarted. Thus, there is reason to be concerned that consumers may suffer along various dimensions: price, quality, and innovation. In the area of innovation in particular, there is evidence that when a big firm becomes dominant, startup activity in that area slows down dramatically. From the Scott Morton et al. report:
By looking at the sub-industries associated with each firm—social platforms (Facebook), internet software (Google), and internet retail (Amazon)—a different trend emerges. Since 2009, change in startup investing in these sub-industries has fared poorly compared to the rest of software for Google and Facebook, the rest of retail for Amazon, and the rest of all VC for each of Google, Facebook, and Amazon. This suggests the existence of so-called "kill-zones," that is, areas where venture capitalists are reluctant to enter due to small prospects of future profits. In a study of the mobile app market, Wen Wen and Feng Zhu come to a similar conclusion: Big tech platforms do dampen innovation at the margin. Their study analyzed how Android app developers adjust their innovation strategies in response to entry (or threat of entry) by Google ...
Some Oddities of a Market with a Price of Zero

Big digital firms provide a remarkable array of services with a market price of zero to consumers. They make their money by attracting users and selling ads, or by charges to producers. Is this a case where the market price should actually be negative--that is, where the big tech firms should be paying consumers? The Scott Morton et al. report offers some interesting thoughts on these lines:
Barter is a common way in which consumers pay for digital services. They barter their privacy and information about what restaurants they would like to eat in and what goods they would like to buy in exchange for digital services. The platform then sells targeted advertising, which is made valuable by the bartered information. But, in principle, that information has a market price. It is not easy to see if the value of any one consumer's information is exactly equal to the value of the services she receives from the platform. However, many digital platforms are enormously profitable, and have been for many years, which suggests that in aggregate we do know the answer: the information is more valuable than the cost of the services. ...
Online platforms offer many services for zero monetary price while they try to raise participation in order to generate advertising revenue. Free services are prevalent on the internet in part because internet firms can harness multi-sided network externalities. While the low price can be a blessing for consumers, it has drawbacks for competition and market structure in a world where institutions have not arisen to manage negative prices. Because there is currently no convenient way to pay consumers with money, platforms are able to mark up the competitive price all the way to zero. This constraint can effectively eliminate price competition, shifting the competitive process to quality and the ability of each competitor to generate network externalities. Depending on the context this may favor or impede entry of new products. For example, entry will be encouraged when a price of zero leads to supra-competitive profits, and impeded when a zero price prevents entrants from building a customer base through low price. Moreover, unlike traditional markets where several quality layers may coexist at different price levels (provided that some consumers favor lower quality at low price), markets where goods are free will be dominated by the best quality firm and others may compete only in so far as they can differentiate their offers and target different customers. This strengthens the firm's incentive to increase quality through increasing fixed costs in order to attract customers (known as the Sutton sunk cost effect) and further pushes the market toward a concentrated market structure. ...
It is a puzzle that, to date, no entrepreneur or business has found a way to pay consumers for their data in money. For example, a consumer's wireless carrier could aggregate micropayments across all manner of digital destinations and apply the credit to her bill each month. ... Furthermore, a carrier that could bargain effectively with platforms on behalf of its subscribers for high payments would likely gain subscribers. Notice that an easy method to pay consumers, combined with price competition for those consumers, might significantly erode the high profits of many incumbent platforms. Platforms likely have no economic incentive to work diligently to operationalize negative prices.
Of course, this idea of a market in which consumers barter attention for zero-marginal-price services isn't new. Television and before that radio operate on the same basic business model.

Categories of Data

The ability to collect data from users, and then to collate it with other information and pass it along to advertisers, is clearly a central part of the business model for digital firms. There's a lot of quick-and-easy talk about "my" data and what "they" should or shouldn't be able to do with it. The Cremer et al. report offers some useful drilling down into different ways of acquiring data and different ways in which data might be used. Sensible rules will take these kinds of distinctions into account. They note:
Data is acquired through three main channels. First, some data is volunteered, i.e. intentionally contributed by the user of a product. A name, email, image/video, calendar information, review, or a post on social media would qualify as volunteered data. Similarly, more structured data—directly generated by an individual—like a movie rating, or liking a song or post would also fall in the volunteered data category. 
Second, some data is observed. In the modern era, many activities leave a digital trace, and "observed data" refers to more behavioural data obtained automatically from a user's or a machine's activity. The movement of individuals is traced by their mobile phone; telematic data records the roads taken by a vehicle and the behaviour of its driver; every click on a page web can be logged by the website and third party software monitors the way in which its visitors are behaving. In manufacturing, the development of the Internet of Things means that every machine produces reams of data on how it functions, what its sensors are recording, and what it is currently doing or producing.
Finally, some data is inferred, that is obtained by transforming in a non-trivial manner volunteered and/or observed data while still related to a specific individual or machine. This will include a shopper's or music fan's profiles, e.g. categories resulting from clustering algorithms or predictions about a person's propensity to buy a product, or credit ratings.  The distinction between volunteered, observed and inferred data is not always clear. ...
[W]e will also consider how data is used. We will define four categories of uses: non-anonymous use of individual-level data, anonymous use of individual level data, aggregated data, and contextual data.
The first category, non-anonymous use of individual-level data, would be any individual-level data (volunteered, observed, or inferred) that was used to provide a service to the individual. For instance, a music app uses data about the songs a user has listened to in order to provide recommendations for new artists he or she might enjoy. Similarly, a sowing app uses data from farm equipment to monitor the evolution of the soil. Access to individual-level data can often be essential to switch service or to offer a complementary service.
The second category, anonymous use of individual-level data, would include all cases when individual-level data was used anonymously. Access to the individual-level data is necessary but the goal is not to directly provide a service to the individual who generated the data in the first place. These would typically include cases of data being used to train machine-learning algorithms and/or data used for purposes unrelated to the original purposes for which the data has been collected. An example of this would be the use of skin image data to train a deep learning (Convolutional Neural Network) algorithm to recognise skin lesions or the use of location data for trading purposes. In specific cases, the information extracted, e.g. the trained algorithm, can then be used to provide a better service to some of the individuals who contributed data. For instance, film reviews are used collectively to provide every individual with better recommendations (collaborative filtering). For the anonymous use of individual-level data, access to a large dataset may be essential to compete.
The third category, aggregated data, refers to more standardised data that has been irreversibly aggregated. This is the case for e.g. sales data, national statistics information, and companies' profit and loss statements. Compared to anonymous use of individual-level data, the aggregation is standard enough that access to the individual-level data is not necessary.
Finally, contextual data refers to data that does not derive from individual-level data. This category typically includes data such as road network information, satellite data and mapping data.
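As a rough schematic, the report's taxonomy can be captured in two small Python enums; the identifier names below are my own shorthand for the report's categories, not the report's own terms:

# Sketch of the Cremer et al. data taxonomy (labels are mine, values
# paraphrase the report's definitions quoted above).

from enum import Enum

class AcquisitionChannel(Enum):
    VOLUNTEERED = "intentionally contributed by the user (posts, ratings)"
    OBSERVED = "captured automatically from activity (clicks, location traces)"
    INFERRED = "derived non-trivially from the other two (profiles, credit ratings)"

class DataUse(Enum):
    INDIVIDUAL_NON_ANONYMOUS = "individual-level data used to serve that individual"
    INDIVIDUAL_ANONYMOUS = "individual-level data used anonymously (e.g. model training)"
    AGGREGATED = "irreversibly aggregated data (sales totals, national statistics)"
    CONTEXTUAL = "data not derived from individuals (maps, satellite imagery)"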
It's interesting to consider whether people's objections about use of their data are rooted purely in principle, or are about not being paid. If firms used your observed data on location, shopping, and so on, but only sold aggregated versions of that data and paid you for the amount of data you contributed to the aggregate, would you still complain?

Some Policy Steps

After this probably over-long mention of what seemed to me like interesting points, what are the most useful potential margins for action in this area--at least if we want to take competent action? Here are two main categories of policies to consider: those related to mergers and anticompetitive behavior, and those related to data.

1) Big dominant firms deserve heightened scrutiny for actions that might affect entry and competition. This is especially true when a firm has a "bottleneck" position, where everyone (or almost everyone) needs to go through that firm to access a certain service. One particular concern here is that big dominant firms buy up smaller companies that might, if they had remained independent, have offered a form of competition or a new platform.

Furman et al. write:
There is nothing inherently wrong about being a large company or a monopoly and, in fact, in many cases this may reflect efficiencies and benefits for consumers or businesses. But dominant companies have a particular responsibility not to abuse their position by unfairly protecting, extending or exploiting it. Existing antitrust enforcement, however, can often be slow, cumbersome, and unpredictable.  ...
Acquisitions have included buying businesses that could have become competitors to the acquiring company (for example Facebook's acquisition of Instagram), businesses that have given a platform a strong position in a related market (for example Google's acquisition of DoubleClick, the advertising technology business), and data-driven businesses in related markets which may cement the acquirer's strong position in both markets (Google/YouTube, Facebook/WhatsApp). Over the last 10 years the 5 largest firms have made over 400 acquisitions globally. None has been blocked and very few have had conditions attached to approval, in the UK or elsewhere, or even been scrutinised by competition authorities.
But along with mergers, there are a variety of other actions which, when used by a dominant firm, may be anticompetitive. Another concern is when big dominant firms start offering a range of other services on their own, and then use their dominance in one market to favor their own services in other markets. Yet another issue is when big dominant firms choose a certain technological standard that seems more about blocking competition than advancing their own business. Some dominant platform firms use "best-price clauses," which guarantee that they will receive the lowest possible price from any provider. Such clauses also mean that if a new platform firm starts up, it cannot offer to undercut the original provider on price.

In other words, if a large dominant firm keeps your business (and your attention) by providing you with the quality and price of services you want, and earns high profits as a result, so be it. But if that firm is earning high profits by seeking out innovative ways to hamstring the potential competitors, it's a legitimate antitrust problem.

2) Data openness, portability, and interoperability.

Data openness just means that companies need to be open with you about what data of yours they have on hand--perhaps especially if that data was collected in some way that didn't involve you openly handing it over to them. Portability refers to the ability to move your data easily from one digital firm to another. For example, you might be more willing to try a different search engine, or email program, or a different bank or health care provider if all your past records could be ported easily to the new company. Interoperability refers to when technical standards allow your (email, shopping, health, financial or other) data to be used directly by two different providers, if you desire.

Again, the underlying theme here is that if a big digital firm gets your attention with an attractive package of goods and services, that's fine; but if it holds you in place because it would be a mortal pain to transfer the data it has accumulated on you, that's a legitimate concern.

Finally, I would add that it's easy to focus on the actions of big digital firms that one sees in the headlines, and as a result to pay less attention to other digital economy issues of equal or greater importance. For example, most of us face a situation of very limited competition when it comes to getting high-speed internet access. In many ways, the growth of big digital firms doesn't raise new topics, but it should push some new thinking about those topics. 

 -- via my feedly newsfeed

Thursday, July 18, 2019

Thai politics and Thai perspectives [feedly]

He had me at "non-doctrinaire" historical materialist, a.k.a., Marxism.


Thai politics and Thai perspectives
http://understandingsociety.blogspot.com/2019/07/thai-politics-and-thai-perspectives.html

One of the benefits of attending international conferences is meeting interesting scholars from different countries and traditions. I had that pleasure while participating in the Asian Conference on Philosophy of the Social Sciences at Nankai University in June, where I met Chaiyan Rajchagool. Chaiyan is a Thai academic in the social sciences. He earned his undergraduate degree in Thailand and received a PhD in sociology from the University of Manchester (UK). He has served as a senior academic administrator in Thailand and is now an associate professor of political science at the University of Phayao in northern Thailand. He is an experienced observer and analyst of Thai society, and he is one of dozens of Thai academics summoned by the military following the military coup in 2014 (link). I had numerous conversations with Chaiyan in Tianjin, which I enjoyed very much. He was generous to share with me his 1994 book, The rise and fall of the Thai absolute monarchy: Foundations of the modern Thai state from feudalism to peripheral capitalism, and the book is interesting in many different ways. Fundamentally it provides a detailed account of the political and economic transition that Siam / Thailand underwent in the nineteenth century, and it makes innovative use of the best parts of the political sociology of the 1970s and 1980s to account for these processes.

The book places the expansion of European colonialism in Southeast Asia at the center of the story of the emergence of the modern Thai state from the mid-nineteenth century to the end of the absolute monarchy in the 1930s. Chaiyan seeks to understand the emergence of the modern Siamese and Thai state as a transition from "feudal" state formation to "peripheral capitalist" state formation. He puts this development from the 1850s to the end of the nineteenth century succinctly in the preface:
In the mid-nineteenth century Siam was a conglomerate of petty states and principalities and did not exist as a single political entity.... Economically Siam was transformed into what may be termed, in accordance with our theoretical framework, peripheral capitalism. Accompanying this development was the transformation of old classes and the rise of new classes.... At the political level a struggle took place within the ruling class in Bangkok, and new institutional structures of power began to emerge. As a result the previously fragmented systems of power and administration were brought under unified centralized command in the form of the absolute monarchy. (xiii-xiv)
This is a subtle, substantive, and rigorous account of the politics and economy of modern Siam / Thailand from the high point of western imperialism and colonialism in Asia to the twentieth century. The narrative is never dogmatic, and it offers an original and compelling account of the processes and causes leading to the formation of the Thai state and the absolutist monarchy. Chaiyan demonstrates a deep knowledge of the economic and political realities that existed on the ground in this region in the nineteenth century, and equally he shows expert knowledge about the institutions and strategies of the colonial powers in the region (Britain, France, Germany). I would compare the book to the theoretical insights about state formation of Michael Mann, Charles Tilly, and Fernand Braudel.

Chaiyan's account of the development of the Thai state emphasizes the role of economic and political interests, both domestic and international. Fundamentally he argues that British interests in teak (and later tin) prompted a British strategy that would today be called "neo-colonial": using its influence in the mid-nineteenth century to create a regime and state that was favorable to its interests, without directly annexing these territories into its colonial empire. But there were internal determinants of political change as well, deriving from the conflicts between powerful families and townships over the control of taxes.
The year 1873-4 marks the beginning of a period in which the royalty attempted to take political control at the expense of the Bunnag nobility and its royal/noble allies. I have already noted that the Finance Office, founded in 1873, was to unify the collection of tax money from the various tax farmers under different ministries into a single office. To attain this goal, political enforcement and systematic administration were required. The Privy Council and the Council of State, established in June and August 1874, were the first high level state organizations. ... With the creation of a professional military force of 15,000 troops and 3,000 marines ... the decline of the nobility's power was irreversible, whereas the rise of the monarchy had never before had so promising a prospect. (85, 86)
Part of the development of the monarchy involved a transition from the personal politics of the feudal politics of the earlier period to a more bureaucratic-administrative system of governance:
Of interest in this regard was the fact that the state was moving away from the direct exercise of personal authority by members of the ruling class. This raises questions about the manner of articulation between the crown and the state. If direct personal control of the state characterizes a feudal state, the ruling class control of the state in a peripheral capitalist society takes the form of a more impersonal rule of law and administrative practice through which are mediated the influences of the politico-economic system and class interests. (88)
Chaiyan makes superb use of some of the conceptual tools of materialist social science and non-doctrinaire Marxism, framing his account in terms of the changes that were underway in Southeast Asia with respect to the economic structure of everyday life (property, labor, class) as well as the imperatives of British imperialism. The references include some of the very best sources in social and historical theory and non-doctrinaire Marxism that were available in the 1980s: for example, Ben Anderson, Perry Anderson, Norberto Bobbio, Fernand Braudel, Eugene Genovese, Michael Mann, Ralph Miliband, Nicos Poulantzas, James Scott, Theda Skocpol, Charles Tilly, and Immanuel Wallerstein, to name a few out of the hundreds of authors cited in the bibliography. Chaiyan offers a relevant quotation from Fernand Braudel that I hadn't seen before but that is more profound than any of Marx's own pronouncements about "base and superstructure":
Any highly developed society can be broken down into several "ensembles": the economy, politics, culture, and the social hierarchy. The economy can only be understood in terms of the other "ensembles", for it both spreads itself about and opens its own doors to its neighbours. There is action and interaction. That rather special and partial form of the economy that is capitalism can only be fully explained in light of these contiguous "ensembles" and their encroachments; only then will it reveal its true face.
Thus, the modern state, which did not create capitalism but only inherited it, sometimes acts in its favor and at other times acts against it; it sometimes allows capitalism to expand and at other times destroys its mainspring. Capitalism only triumphs when it becomes identified with the state, when it is the state....
So the state was either favorable or hostile to the financial world according to its own equilibrium and its own ability to stand firm. (Braudel Afterthoughts on Material Civilization and Capitalism, 64-65)
It is interesting to me that Chaiyan's treatment of the formation of a unified Thai state is attentive to the spatial and geophysical realities that were crucial to the expansion of central state power in the late nineteenth century.
A geophysical map, not a political one, would serve us better, for such a map reveals the mainly geophysical barriers that imposed constraints on the extension of Bangkok power. The natural waterways, mountains, forests and so on all helped determine how effectively Bangkok could claim and exert its power over townships. (2)
In actual fact, owing to geographical barriers and the consequent difficulty of communication, Bangkok exercised direct rule only over an area within a radius of two days travelling (by boat, cart, horse, or on foot). (3) 
Here is a map that shows the geophysical features of the region that eventually became unified Thailand, demonstrating the stark delineation between lowlands and highlands:

[map omitted]

This is a topic that received much prominence in the more recent writings of James Scott on the politics of Southeast Asia, and his concept of "Zomia" as a way of singling out the particular challenges of exercising central state power in the highlands of Southeast Asia (link, link). Here is a map created by Martin Lewis (link) intended to indicate the scope of the highland population (Zomia). The map is discussed in an earlier post.

[map omitted]

And here is a map of the distribution of ethnic and language groups in Thailand (link), another important element in Chaiyan's account of the consolidation of the Thai monarchy:

[map omitted]

It is an interesting commentary on the privilege, priorities, and limitations of the global academic world that Chaiyan's very interesting book has almost no visibility in western scholarship. In its own way it is the equal of some of Charles Tilly's writings about the origins of the French state; and yet Tilly is on all reading lists in comparative politics and Chaiyan is not. The book is not cited in one of the main English-language sources on the history of Thailand, A History of Thailand by Chris Baker and Pasuk Phongpaichit, published by Cambridge University Press, even though that book covers exactly the same period in chapter 3. Online academic citation databases permitted me to find only one academic article that provided substantive discussion or use of the book ("Autonomy and subordination in Thai history: the case for semicolonial analysis", Inter‐Asia Cultural Studies, 2007 8:3, 329-348; link). The author of this article, Peter Jackson, is an emeritus professor of Thai history and culture at the Australian National University. The book itself is on the shelves at the University of Michigan library, and I would expect it is available in many research libraries in the United States as well.

So the very interesting theoretical and historical treatment that Chaiyan provides of state formation in Thailand seems not to have received much notice in western academic circles. Why is this? It is hard to avoid the inference that academic prestige and impact follow nations, languages, universities, and publishing houses. A history of a small developing nation, authored by a Thai intellectual at a small university, published by a somewhat obscure Thai publishing company, is not destined to make a large splash in elite European and North American academic worlds. But this is to the great disadvantage of precisely those worlds of thought and knowledge: if we are unlikely to be exposed to the writings of insightful scholars like Chaiyan Rajchagool, we are unlikely as well to understand the large historical changes our world has undergone over the past two centuries.

Amazon comes in for a lot of criticism these days; but one thing it has contributed in a very positive way is the easy availability of books like this one for readers who would otherwise never be exposed to them. How many other intellectuals with the insights of a Tilly or a Braudel are there in India, Côte d'Ivoire, Thailand, Bolivia, or Barbados whom we will never interact with in a serious way because of the status barriers that exist in the academic world?

*   *   *

(It is fascinating to me that one of the influences on Chaiyan at the University of Manchester was Teodor Shanin. Shanin is a scholar whom I came to admire greatly at roughly the same time, when I was engaged in research in peasant studies in connection with Understanding Peasant China: Case Studies in the Philosophy of Social Science.)

 -- via my feedly newsfeed