Currency Wars by James Rickards, Part 3

Keynesianism

John Maynard Keynes died in 1946 and so never lived to see the errors committed in his name. His death came just one year before the publication of Samuelson’s Foundations of Economic Analysis, which laid the intellectual base for what became known as neo-Keynesian economics. Keynes himself used few equations in his writings, but did provide extensive analysis in clear prose. It was only in the late 1940s and 1950s that many of the models and graphs associated today with Keynesian economics came into existence. This is where the conceptual errors espoused under the name “Keynesian” are embedded; what Keynes would have thought of those errors had he lived is open to speculation.

Near the end of his life, Keynes supported a new currency, which he called the bancor, with a value anchored to a commodity basket including gold. He was, of course, a fierce critic of the gold exchange standard of the 1920s, but he was practical enough to realize that currencies must be anchored to something and, for this reason, preferred a global commodity standard to the dollar-and-gold standard that emerged from Bretton Woods in 1944.

Our purpose here is not to review the field of Keynesian economics at large, but rather to zero in on the flaw most relevant to the currency wars. In the case of monetarism, the flaw was the volatility of velocity as expressed in consumer choice. In Keynesianism, the flaw is the famous “multiplier.”

The Keynesian multiplier theory rests on the assumption that a dollar of government deficit spending can produce more than a dollar of total economic output after all secondary effects are taken into account. The multiplier is the Bigfoot of economics—something that many assume exists but is rarely, if ever, seen. The foundation of Keynesian public policy is aggregate demand: the total of all spending and investment in the domestic economy, excluding inventories. For example, if a worker is fired, he not only loses his income but also stops spending, causing others to lose income as well. The lost income and lost spending cause a drop in aggregate demand, which can feed on itself: more businesses fire more employees, who then spend less, and so on in a vicious circle. Keynesian theory says that government can step in and spend money that individuals cannot or will not spend, thereby increasing aggregate demand. This government spending can reverse the slide and contribute to renewed economic growth.

The problem with this theory of government spending to boost aggregate demand is that governments have no money of their own in the first instance. A government must print money, collect it in taxes, or borrow it from its citizens or from abroad. Printing money can produce nominal growth, but it can also produce inflation, leaving real growth unchanged over time. Taxing and borrowing may enable the government to spend more, but they leave less for the private sector to spend or invest, so it is not clear how aggregate demand increases at all. This is where the multiplier claims to play a role. The idea is that one dollar of government spending will stimulate additional spending by others and result in more than one dollar of increased output; this is the justification for taking the dollar from the private sector in the first place.
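For readers who want the mechanics, the textbook form of the multiplier (a simplification, and not spelled out in Rickards’s text) derives it from the marginal propensity to consume, MPC, the fraction of each new dollar of income that households spend rather than save:

\[
k = \frac{1}{1 - \mathrm{MPC}}, \qquad \Delta Y = k\,\Delta G .
\]

If households spend eighty cents of each new dollar (MPC = 0.8), the textbook multiplier is k = 1/(1 − 0.8) = 5, so a dollar of government spending ΔG would, in theory, raise total output ΔY by five dollars. The formula guarantees k > 1 whenever 0 < MPC < 1; the empirical dispute below is over whether taxes, borrowing and expectations push the real-world multiplier below one.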

How much more output is yielded by one dollar of government spending? Put differently, what is the size of the multiplier? In a famous study written just before the start of President Obama’s administration, two of Obama’s advisers, Christina Romer and Jared Bernstein, looked at the multiplier in connection with the proposed 2009 stimulus program. Romer and Bernstein estimated the multiplier at about 1.54 once the new spending was up and running. This means that for every $100 billion in the Obama spending program, Romer and Bernstein expected output to increase by $154 billion. Since the entire Obama program ended up at $787 billion, the “extra” output just from doing the stimulus program would amount to $425 billion—the largest free lunch in history. The purpose of this stimulus was to offset the effects of the depression that had begun in late 2007 and to save jobs.
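The $425 billion figure is simply the multiplier arithmetic applied to the full program:

\[
1.54 \times \$787\ \text{billion} \approx \$1{,}212\ \text{billion}, \qquad \$1{,}212\ \text{billion} - \$787\ \text{billion} \approx \$425\ \text{billion} .
\]

In other words, the claimed gain over and above the $787 billion actually spent is (1.54 − 1) × $787 billion ≈ $425 billion.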

The Obama administration ran U.S. fiscal year deficits of over $1.4 trillion in 2009 and $1.2 trillion in 2010. The administration projected further deficits of $1.6 trillion in 2011 and $1.1 trillion in 2012—an astounding total of over $5.4 trillion in just four years. In order to justify the $787 billion program of extra stimulus in 2009 with deficits of this magnitude, it was critical to show that America would be worse off without the spending. The evidence for the Keynesian multiplier had to be rock solid.

It did not take long for contrary evidence to arrive. One month after the Romer and Bernstein study, a far more rigorous study of the same spending program was produced by John B. Taylor and John F. Cogan of Stanford University and their colleagues. Central to Taylor and Cogan’s results is that all of the multipliers are less than one: each dollar of “stimulus” spending raises total output by less than a dollar, which means the amount of goods and services produced by the private sector actually declines. Taylor and Cogan employed a more up-to-date multiplier model, one that has attracted wider support among economists and that uses more realistic assumptions about the projected path of interest rates and about consumers’ expectations of higher tax burdens in the future. Their study put the multiplier effect of the Obama stimulus program at 0.96 in the early stages but showed it falling rapidly to 0.67 by the end of 2009 and to 0.48 by the end of 2010. By 2011, for each stimulus dollar spent, private sector output would fall by almost sixty cents. The Obama stimulus program was hurting the private sector and thereby handicapping its ability to create jobs.
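One way to see why a multiplier below one implies a shrinking private sector (a simplified accounting identity, not Taylor and Cogan’s full model): if each stimulus dollar raises total output by k dollars, and the government’s own dollar accounts for one of them, then private output changes by

\[
\Delta Y_{\text{private}} = \Delta Y - \Delta G = (k - 1)\,\Delta G .
\]

At Taylor and Cogan’s end-of-2010 estimate of k = 0.48, each stimulus dollar leaves private output 52 cents lower; as the multiplier falls further into 2011, the loss approaches the sixty cents per dollar cited above.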

The Taylor and Cogan study was not the only study to reach the conclusion that Keynesian multipliers are less than one and that stimulus programs destroy private sector output. John Taylor had reached similar conclusions in a separate 1993 study. Empirical support for Keynesian multipliers of less than one, in certain conditions, was reported in separate studies by Michael Woodford of Columbia University, Robert Barro of Harvard and Michael Kumhof of Stanford, among others. A review of the economic literature shows that the methods used by Romer and Bernstein to support the Obama stimulus program were outside the mainstream of economic thought and difficult to support except for ideological reasons.

Keynes’s theory that government spending could stimulate aggregate demand turns out to be one that works in limited conditions only, making it more of a special theory than the general theory he had claimed. Stimulus programs work better in the short run than the long run. Stimulus works better in a liquidity crisis than a solvency crisis, and better in a mild recession than a severe one. Stimulus also works better for economies that have entered recessions with relatively low debt levels at the outset. The seminal yet still underappreciated econometric work of Professor Carl F. Christ from the 1960s theorized that both Keynesian and monetarist tools work most powerfully for economies that have started with a balanced budget. Christ was the first to identify what he called the “government budget restraint,” a concept that seems to have been forgotten in the meantime. Christ wrote, “Results suggest forcefully that both the extreme fiscal advocates and the extreme monetary advocates are wrong: Fiscal variables strongly influence the effect of a given change in the . . . money stock, and open market operations strongly influence the effects of given changes in government expenditures and taxation.” Christ was saying that the impact of Keynesian stimulus could not be gauged independently of the deficit starting line.
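In one standard simplified form (Christ’s own treatment is more general), the government budget restraint says that every deficit must be financed by new money or new debt:

\[
G - T = \Delta M + \Delta B ,
\]

where G is government spending, T is tax revenue, ΔM is the change in the money stock and ΔB is net new bond issuance. The identity is what links the two camps: any fiscal choice about G − T forces a monetary choice about ΔM and ΔB, so neither fiscal nor monetary effects can be gauged in isolation, and an economy that starts near a balanced budget (G − T ≈ 0) has the most room to use either tool.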

None of the favorable conditions for Keynesian stimulus was present in the United States in early 2009. The country was heavily burdened with debt, was running huge deficits and was suffering from a severe solvency crisis that promised to continue for many years—exactly the wrong environment for Keynesian stimulus. The stimulus spending would increase the deficit and waste valuable resources, but not do much else.

Two years after the Romer and Bernstein study, the economic results were in, and they were devastating to their thesis. Romer and Bernstein had estimated total employment at over 137 million by the end of 2010. The actual number was only about 130 million. They had estimated GDP would increase 3.7 percent by late 2010; however, it had barely increased at all. They had also estimated that recession unemployment would peak at 8 percent; unfortunately, it peaked at 10.1 percent in October 2009. By every measure the economy performed markedly worse than Romer and Bernstein had anticipated using their version of the Keynesian multiplier. From the start, the Obama stimulus was little more than an ideological wish list of favored programs and constituencies dressed up in the academic robes of John Maynard Keynes.

The Romer-Bernstein plan almost certainly saved some jobs in the unionized government sector. But critics had never argued that the stimulus would produce no jobs, merely that its hidden costs were too high. The combination of deficit spending, monetary ease and bank bailouts did boost the economy in the short run. The problem was that the recovery was artificial and not self-sustaining, because it had been induced by government spending and easy money rather than by private sector consumption and investment. This led to a political backlash against further deficit spending and quantitative easing.

The increased debt from the failed Keynesian stimulus became a cause célèbre in the currency wars. These wars were primarily about devaluing a country’s currency, which is a form of default. A country defaults to its foreign creditors when their claims suddenly become worth less through devaluation. A country defaults to its own people through inflation and higher prices for imported goods. With debt in the hands of foreign investors reaching unprecedented levels, the international impact of devaluation was that much greater, so the currency wars would be fought that much harder.

Because debt and deficits are now so large, the United States has run out of dry powder. If the United States were struck by another financial crisis or a natural disaster of the magnitude of Hurricane Katrina or greater, its ability to resort to deficit spending would be impaired. If the United States were confronted with a major war in the Middle East or East Asia, it would not have the financial wherewithal to support a war effort as it had done in World War II. Vulnerability to foreign creditors is now complete. In the face of any one of these crises—financial, natural or military—the United States would be forced to resort to emergency measures, as had FDR in 1933 and Nixon in 1971. Bank closings, gold seizures, import tariffs and capital controls would be on the table. America’s infatuation with the Keynesian illusion has now resulted in U.S. power being an illusion. America can only hope that nothing bad happens. Yet given the course of events in the world, that seems a slim reed on which to lean.
