Tuesday, September 29, 2009

The G20 Surprises

The Pittsburgh summit of the G-20 was a pleasant surprise. Having begun as a common front to restore calm during the 2008 global financial crisis, the Leaders’ forum finally got down to work, delivering a reasonable work plan complete with deadlines for progress. The strongest signal of its seriousness was its designation of the G-20 as “the premier forum for our international economic cooperation,” effectively giving the major emerging market economies equal seats at the table and consigning the G-7/8 to security issues.


Canada will host the first meeting of this new configuration and has a major opportunity to set the tone. It should downgrade the smaller G-8 event and ensure the G-20 delivers on its commitments. Having one of the world’s soundest financial systems has gained Canada international credibility; some of this political capital might be used to push for completion of the Doha trade round next year.


The G-20 commitments at Pittsburgh are substantial but incomplete. The leaders acknowledge the need for a carefully phased withdrawal of government support as market forces once again drive growth, commit themselves to more realistic regulation of the financial sector and to reform of global governance, and confront the need to reduce the international imbalances that were a significant seed of the crisis. The commitments are incomplete because, while climate change received attention, there was only passing rhetorical commitment to completing the Doha trade round.


Many observers and business decision-makers will dismiss the 16-page communiqué as obfuscating hot air that papered over some strong disagreements, such as European resistance to giving up some of their clout in governing the IMF, the US administration’s insistence that its duty on Chinese tires merely “enforces existing agreements,” and China’s sensitivity to discussion of sustainable exchange rates.


Some issues are ripe for action in 2010-11. One is tightening financial regulation through higher capital and liquidity requirements on banks: consensus has emerged, and now the actual numbers need to be negotiated. Another is IMF reform: if emerging market governments have more say, they are more likely to heed the Fund’s analysis and listen to its advice.


Without stronger leadership other issues will disappoint: one is completion of the Doha trade negotiations in 2010. Business pressure has been notably absent as a driver of this round, in part because global supply chains and offshoring have reduced the impact of border barriers. On the climate change negotiations leading up to Copenhagen the prospects of a global compact by December are dim even though China provided an unexpected commitment to reduce the carbon intensity of its growth (mainly because of rising domestic pressures for cleaner air).


Deeper cooperation will be necessary to deliver tangible progress on reducing international imbalances. Is this possible in such a diverse group? Or will the majority of participants be free riders, leaving the hard bargaining to the United States and Europe or China?


I think deeper cooperation is possible. We tend to forget that G-7 governments engaged in policy coordination for a period from the late 1970s through the 1980s to reduce international imbalances and currency misalignments among Europe, Japan and the United States. International peer pressure played a role in encouraging governments to take unpopular policy actions that were in their own long-term interest. Central banks engaged in unprecedented cooperation to accelerate dollar depreciation against the yen once the trend began in currency markets.


Policy changes and institutional reforms that would reduce imbalances today are similarly in countries’ own long-term interests. Take the imbalances between the United States and China. The United States needs to save more and spend less while China needs to do the opposite. In the United States, the administration is under considerable domestic political pressure to deliver on a commitment to reduce its fiscal deficit, expected to total 13 percent of GDP in 2009, to a more sustainable 2-3 percent by 2013. This will require deficit-neutral health care reforms and tax increases on households, which have begun to repair their damaged balance sheets after a 30 percent decline in house prices. In the interim, continued Chinese purchases of US government securities are filling the gap, despite publicly expressed concerns by the Chinese Premier and other senior officials about the sustainability of US fiscal policies.


For their part, the Chinese authorities have acknowledged the unintended consequences of their thirty-year dash for growth. Much of this growth was driven by investing nearly half of GDP in capital- and energy-intensive industrial production aimed at export markets. Rising income inequality and pollution are the unintended consequences. Rebalancing would shift the growth model towards one driven by domestic consumption, in which services and less polluting, less capital- and energy-intensive production play larger roles. To encourage more consumer spending, the central government has increased its funding of health care, education and pensions, costs which households have had to cover from their own savings. But other reforms are needed to change the model towards one that produces more jobs. These include reducing the manufacturing and investment bias in production by eliminating subsidized energy prices, reducing tax subsidies for manufacturing, requiring state enterprises to pay dividends, and deregulating the service sector. More fundamental still would be allowing greater exchange rate flexibility, which would shift monetary policy away from exchange rate management, free up interest rates and force reforms in the government-owned banking system, which relies heavily for its profits on income from China’s administered interest rate spreads.

Reducing international imbalances will not be easy. Compared to US health care reform, which has dominated the headlines for months, China’s agenda is formidable, with inter-linked reforms of which exchange rate appreciation is one part. Little wonder that the Chinese authorities resist international pressure for exchange rate reform by itself. The package of social, industrial, monetary and financial reforms is very much in both China’s and the international economy’s long-term interests, however, and should -- like the US fiscal deficit -- be monitored and encouraged through peer review in the G-20 in the years ahead.

Tuesday, September 22, 2009

What should we expect the G20 Summit to accomplish?

On September 24-25, 2009 G20 leaders will hold their third summit in a single year. Officials’ attempts to play down expectations prompt questions about what the leaders should aim to accomplish. To their credit, they have established the priorities. Now they have to deliver with action on (a) global financial standards to make the international financial system safer; (b) a policy framework for reducing the global macroeconomic imbalances that contributed to the crisis; (c) direction to trade ministers to complete the Doha negotiations; and (d) reform of the International Monetary Fund to give the large developing countries more say.

As immediate growth prospects improve the focus is shifting to sustaining the recovery and applying the lessons from the crisis to improve the future functioning of the world economy.

The most tangible action at Pittsburgh will be on financial regulatory reforms to reduce the risks posed by large cross-border institutions whose failure, like Lehman Brothers’, would have such serious systemic consequences that taxpayers are forced to bail them out. US Treasury Secretary Geithner’s call for higher capital cushions and lower leverage ratios in banks will reduce leverage throughout the system. The Basel Committee on Banking Supervision’s proposals to strengthen national governments’ powers to intervene and cooperate in resolving problems are one step; forcing such institutions to prepare plans for their own windup in the event of insolvency, known as “living wills,” is another. Other steps are also needed to increase the transparency of derivatives transactions and oversight of credit rating agencies.

Government supervision of bank managers’ remuneration is also likely to be agreed, in part to paper over differences about centralizing global financial supervision, which French President Sarkozy pushed as a way to address the too-big-to-fail problem. The reality is that most non-European governments are unwilling to cede sovereignty to a global super-regulator, and for good reason. The size and reach of a global regulator cannot make up for the local knowledge and judgment of national regulators, who must be intimately familiar with the institutions they oversee. Nor is there any one model for a national financial supervisor. The UK model of a single independent regulator failed to prevent a crisis in which the banks had to be temporarily nationalized, while the decentralized arrangements in the United States had severe shortcomings as well. Large complex institutions like Citigroup, with an entire floor of supervisors onsite, were at the heart of the financial crisis.

Macroeconomic cooperation is the other important issue for Pittsburgh but one where little progress will be evident. Cooperation on fiscal and monetary stimulus prevented the collapse of the international financial system. Now attention must shift to reducing the large current account surpluses and deficits that contributed to the crisis. The “engine that could” -- the US consumer -- is now busy repairing balance sheets, saving more and spending less. Public and private saving will have to rise for the United States to dig out of its deficit hole, which in 2009 will be at least 13 percent of GDP. Future growth will have to come from the large exporters: Germany, Japan, China and the other East Asian economies, which must now import more. This kind of switch will require painful reforms, with Asians relying less on the export-oriented regional production system targeted at the US consumer and more on Chinese and other Asian consumers. The latter are habitually high savers, however, and have very different buying habits. Government rebate schemes have encouraged Chinese consumers to buy stoves, cars, and TVs, but these are one-time purchases. Social safety nets are needed in Asia to assure people they can save less and spend more, and these will take years to construct.

The third priority -- trade openness -- is where Pittsburgh is likely to be dismissed as a talk shop. The US duty on low-end Chinese tires imposed on September 18, while technically legal, signaled that support for US health care reform trumps openness. The only way the G20 can restore credibility on openness is to give a clear order to complete the Doha round by a specific date in 2010.

Global governance is the fourth priority and part of the plumbing that makes the global architecture work. G20 leaders rely on existing institutions like the International Monetary Fund to implement their decisions. The IMF suffers from credibility problems dating from the Asian crisis a decade ago, when it was perceived to be less than helpful to the affected economies in their time of need. Giving emerging market economies a greater say in its governance in line with their relative economic clout (and the Europeans less) will help to restore its de facto status as the international lender of last resort.

In short, Pittsburgh needs to act on financial reforms; it needs to kick off a new phase of international cooperation on the difficult medium-term actions needed to rebalance world growth away from heavy reliance on US consumption. Asian reforms will be central to this process – which is a good reason to hold the next summit in Asia. South Korea, which takes over the chairmanship of the G20 in 2010, is well placed to push forward the rebalancing agenda. But Pittsburgh must launch it.

Friday, September 11, 2009

Canadian Content Regulation: An Outdated Policy

I’m planning a few blog entries about the economics of government policy, and its relationship to changes in technology. The first issue I am going to take up is Canadian content regulations.


Canadian content regulations have long generated significant debate. I’m not going to get into the classic arguments over whether these are cultural guarantees or simple protectionism. Nor will I step into the long debate about what the proper definition of “Canadian enough” ought to be. Instead, I am going to argue that these regulations are simply from a different time, and not appropriate to the new technologies that produce and deliver media. With every passing day, the idea of regulating media content is less helpful to Canadian artists, more harmful to Canadian broadcasters, and, in a broader sense, becoming impossible to enforce.

Keep this thought in your head. The Canadian content regulation for television was first passed in 1959. Imagine what TV was like then. There were (at best) two broadcasters in a market. Producing content for TV was incredibly costly, and only a few large production companies could realistically compete. There was no alternative to TV; either you were played on TV, or you were not seen. In those days, the case could be made that the market was not very competitive, and that the returns to scale necessitated dominance by a few production companies selling to a few broadcasters. An economist might call it similar to a natural monopoly: a situation where the costs of doing business necessitate one, or few, producers. The economics textbook’s favourite example is utilities like electricity delivery. It was understandable, perhaps, to be concerned that Canadian content might be lost in the middle of these big players.

Radio wasn’t much different. It was the dominant mode of marketing music. Music production was focused at a few large publishing companies, and radio was king. Again, it might have seemed natural to worry about where the Canadian artist would fit in.

Contrast that with the state of affairs today. Music is delivered through the radio, but also through a variety of internet sources. People listen to songs directly on the web pages of the acts themselves, and sometimes buy directly from there. Developments in video technology mean that television production requires much less overhead, and there are now so many stations that some serve very narrow markets. There is a channel devoted to golf. The US supports a soccer channel, despite the sport’s low level of popularity there. It doesn’t take a huge market to generate a marketable channel. On top of the proliferation of cable channels, internet delivery of video, both televised material and material that has never reached the airwaves, is closely following the rollout of high speed internet connections capable of carrying that content. In the language of economics, technological change has reduced the “minimum efficient scale” of production. It is now possible to efficiently produce and distribute both music and video at a much smaller scale.

What do all of these changes mean for content regulation? They mean that content regulations are less effective at encouraging Canadian works, and more of a burden for Canadian broadcasters.

In music, radio stations that are subject to these regulations now compete against other sources which do not. Music subscription services, like napster.ca, offer unlimited streaming of a large library of songs for a fixed monthly fee. Since the listener controls the playlist, there is no control over the “Canadian-ness” of the music. A listener can hear about an artist from a friend, or their favourite music review webpage, or simply by asking the music service itself what songs are typically liked by people with similar tastes. Of course the listener could also stream any of a number of US radio stations, without regard to content restrictions, directly over the internet. These new distribution technologies are making it harder for a radio station burdened by the extra restriction of Canadian content to survive. Independent record labels and internet distribution offer a democratization of music that makes the content of Canadian radio stations far less relevant. Canadian content regulations only hasten the demise of radio in Canada: they add one more way in which radio can’t keep pace with the times.

If the competition from these new outlets seems like bad news for Canadian content, keep in mind that this reduction in minimum efficient scale is probably great news for exactly the kind of up and coming Canadian artists that content requirements seek to protect. As in many areas of industry, small is often associated with new. Content requirements mostly focus on radio stations, which by and large are entrenched outlets. That is probably why it is so hard to point to recent Canadian artists who became success stories as a result of radio airplay generated by content requirements, and why it is much easier to point to a few Canadian stars whose songs are repeated on Canadian radio to meet the content requirements. Rather than finding great new Canadian artists, the lumbering radio giants look to established Canadian stars that hardly need the exposure. This is nothing new to an economist: big firms are often not willing to take the risks that small firms will.

Some important rising Canadian artists, like Arcade Fire, never benefited from much Canadian airplay until they broke through on the independent music scene in North America generally. Their story is very telling: it illustrates the avenues that are now available to Canadian bands (holding aside the issue of whether Arcade Fire, a band from Montreal but with members from Texas, would formally “count” as Canadian). They benefited not from the content regulations but from all of the new technologies that make small-scale production and distribution feasible and offer an avenue for artists who are not yet a sure thing.

TV is headed in the same direction. Streaming video is taking North America, especially the US, by storm. Hulu offers free on-demand video content for many popular US shows. It is not yet formally available in Canada, but Canadian stations already offer a (diluted) version of on-demand, streaming content for shows through their webpages. Viewers choose which content plays. Even Hulu, although blocked to Canadian IP addresses, can be accessed easily through a proxy server service. Media is out there, and short of building a Great Firewall of Canada that would have to be even greater than the Great Firewall of China, there is no keeping it from Canadians. Content regulations only serve to hinder the Canadian broadcasters who must compete in this environment.

Just as technological change in music gives artists new tools for finding markets, the same applies to video. Websites like Funny or Die offer video that would never see the light of day in a world with only a few broadcast channels, with some videos reaching more than 60 million views. Between the internet and specialized cable channels, video is becoming democratized in the same way as music, offering exciting new outlets for artists who want to produce content and have it be seen.

The world is flat. More and more each day, media comes from everywhere, and goes everywhere. This is great news for Canadian artists who now have the tools to reach out both to Canadians and the rest of the world using these new technologies. But it also means that the idea of regulating Canadian content is starting to look as out of date as a 1959 TV set.

Thursday, September 10, 2009

The Worst Economic Times Since the Great Depression? A Reality Check.

This article is from the RIIB blog of May 1, 2009.

We hear often that these are the worst economic times since the Great Depression. Certainly, these are the worst economic times in a generation. If you’re under the age of 40, these likely are the worst economic times that you can remember. The last serious recession in North America occurred in the early 1980s. It was caused by a “credit crunch” engineered by North America’s central banks in their fight against entrenched inflationary expectations.

How does today’s credit-crunch-induced recession compare to the one of the 1980s? Let’s look at the numbers. In the early 1980s, Canada’s unemployment rate peaked at 13 percent, having risen 80 percent above the 7.2 percent rate of December 1980, when the recession began. In today’s recession, unemployment in March was 8 percent, up from 5.8 percent about a year ago. If the rise in unemployment were eventually to match the 1980s pattern, unemployment would climb much further, to about 10.5 percent.
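The proportional arithmetic above is easy to reproduce. A minimal sketch using only the figures quoted in the text:

```python
# Early-1980s recession: unemployment rose from 7.2 percent to a 13 percent peak.
start_1980s, peak_1980s = 7.2, 13.0
rise = peak_1980s / start_1980s - 1        # proportional rise, roughly 80 percent

# Applying the same proportional rise to the 5.8 percent pre-recession rate:
implied_peak = 5.8 * (1 + rise)
print(f"Proportional rise in the 1980s: {rise:.0%}")
print(f"Implied peak this time: {implied_peak:.1f} percent")  # about 10.5
```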

What about stock market values? During November 1980, the value of the TSE Index averaged approximately 2400. It fell almost to 1400 by July 1982, a decline of more than 40 percent, and did not return to 2400 until the beginning of 1985. This decline in the TSE Index is similar to the decline we have already seen in the past few months.
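The index decline quoted above checks out as well; a one-line calculation with the rounded figures from the text:

```python
# TSE Index: roughly 2400 in November 1980, about 1400 by July 1982.
before, after = 2400, 1400
decline = 1 - after / before
print(f"Peak-to-trough decline: {decline:.0%}")  # just over 40 percent
```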

While it’s too early yet to say much about the decline in real GDP in the current recession – it declined only in the last quarter of 2008 and likely will decline again this quarter – the recession of the 1980’s saw a 5 percent decline between June 1981 and December 1982. We will have to see quite substantial declines over the next several quarters to match this.

The other features of the early-1980s recession included extremely high interest rates – the Bank of Canada rate hit 21 percent in August 1981 – and extremely high inflation – the inflation rate in 1981 averaged more than 12 percent. Together these translate into extremely high real interest rates. In the current recession, both real and nominal interest rates are extremely low, as is inflation.
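The real-interest-rate claim follows from the Fisher relation (real rate is approximately the nominal rate minus inflation). A quick check with the figures above:

```python
nominal, inflation = 0.21, 0.12   # Bank rate and average inflation, 1981

# Simple approximation: nominal minus inflation.
approx_real = nominal - inflation
print(f"Approximate real rate: {approx_real:.1%}")   # about 9 percent

# Exact Fisher relation.
exact_real = (1 + nominal) / (1 + inflation) - 1
print(f"Exact real rate: {exact_real:.1%}")          # about 8 percent
```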

Finally, how do these indicators compare to the Great Depression? Over the four-year period from 1929 to 1933, real GDP dropped 43 percent and by 1933 the unemployment rate had hit 37 percent. The good news is that it is extremely unlikely that Canada’s economy will approach anything like these numbers. We can credit some of this to the fact that economists learned from the experiences of the Great Depression that expansion of the money supply is critical during times of financial crisis. Current US and Canadian monetary policies are based on this learning and aim to prevent the kind of economic and financial market meltdowns that occurred in the United States during the Great Depression.

Are these monetary policies paying off? Already, we are seeing some hopeful signs. The TSX has risen over 20 percent from its low about 2 months ago. The housing market in the US is beginning to stabilize, with both sales of new and existing homes rising and price declines slowing. In Canada, retail sales rose nearly 2 percent in January. While Canada is certainly not out of the woods, it looks as though we are on track for a recession of a magnitude that is similar to, or even less severe than, that of the early 1980s.

Much Ado About Stimulus: If it sounds too good to be true...

This article is from the RIIB blog of May 11, 2009.

We are continually assured by politicians and pundits that a massive economic stimulus package is needed to get the economy out of the current recession. The road to recovery, we are told, is built with government spending on bridges, highways, public transit and other large public infrastructure projects; this recovery road is paved with tens of billions of dollars (or in the case of the US hundreds of billions of dollars) of government debt. With the recent, less optimistic projections by the Bank of Canada, opposition MPs are calling for a second round of “stimulus spending” to save the faltering economy.

The power of government spending to fix our current economic problems seems almost magical. Like Rumpelstiltskin of fairy tale fame, who could spin straw into gold, the government seems able to borrow our otherwise worthless money and spin it into economic recovery. How does this magic work and is stimulus spending the appropriate remedy for our economic ills? In an insightful analysis of the current economic problems in the United States, John Cochrane, Myron S. Scholes Professor of Finance at the University of Chicago Booth School of Business, argues that debt-financed spending packages are no magic bullet and are not the cure for the United States’ current economic problems. (For the full text of Cochrane’s analysis, see his paper at the web site http://faculty.chicagobooth.edu/john.cochrane/research/Papers/fiscal2.htm).

While Cochrane’s analysis is specifically aimed at the US, it has valuable insights for Canada. The basic message comes in two parts. The first part can be summarised as follows. Imagine that the government borrows a dollar from you to spend on infrastructure projects. If that borrowed dollar would otherwise have gone to personal consumption, then the spending has no net impact on the economy – all that has happened is that the government has substituted a dollar of its spending for a dollar of your spending. Suppose, though, that the dollar you lend the government comes from your personal savings. If those savings were invested in private sector investment projects, either because you invested directly yourself or because your bank invested your savings for you, then again you are just substituting government investment for private investment and there is no net impact.

For the borrowing to have a net impact, one, or both, of two things must be happening. One is that you chose to keep your dollar out of both the bank and private sector investments. Some of this is surely going on in the US as households pull out of risky investments and move toward safer assets such as government bonds. This is not so obviously a problem in Canada. The other is that the bank chooses to hold on to your savings rather than invest them in private sector projects. There is evidence in the US that this, too, is at play; the evidence for Canada is less clear. In any event, the message is simple: it is only to the extent that the borrowed money would otherwise “sit idle” that the stimulus package has an effect.
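Cochrane's substitution logic can be put in stylized accounting terms. The sketch below is an illustration only; the `idle_share` figures are hypothetical numbers chosen for the example, not estimates from Cochrane's paper.

```python
def net_stimulus(borrowed: float, idle_share: float) -> float:
    """Stylized net demand impact of debt-financed government spending.

    borrowed:   dollars the government borrows and spends
    idle_share: fraction of those dollars that would otherwise have sat
                idle (neither consumed nor invested by banks)

    Dollars diverted from consumption or private investment substitute
    one-for-one for private spending, so only the idle share adds to
    total demand.
    """
    return borrowed * idle_share

package = 40e9  # a hypothetical $40 billion package
print(net_stimulus(package, 0.0))   # nothing was idle: zero net impact
print(net_stimulus(package, 0.25))  # a quarter was idle: $10 billion net
```

The point of the sketch is that the multiplier on the package depends entirely on how much of the borrowed money would otherwise have sat idle, which is an empirical question, not a given.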

The second part of Cochrane’s message is that, if the economy is weak because consumers and banks are holding on to money, then we need to ask what the appropriate remedy for this problem is, and whether that remedy is debt-financed government spending. If consumers are keeping money outside of banks for fear of bank failure, then the solution is to ensure the stability of the financial system, not a stimulus package. If consumers and banks are investing in “safe assets” and not risky private investment projects – the presumed problem for the US in the current situation – then, again, Cochrane argues that debt-financed public infrastructure projects are not the answer. The reason is that, while a government stimulus package “puts the money to work”, we leave it to bureaucrats and politicians to decide what are and are not valuable investment projects. In addition, we must live with the effects of massive future tax liabilities caused by the stimulus package.

Instead, Cochrane argues, the appropriate policy for the US is one that puts the “idle cash” to work in the private sector. His plan calls for the Fed and Treasury Department to use borrowed money to purchase high quality corporate and securitized debt in normal credit markets. In essence, the government fills the credit market role that would normally be played by banks and private lenders in the US until these parties are prepared to assume risk again. By substituting for private lenders, the government uses competition for its investment dollars to drive investment in the most valuable projects. Moreover, the government can avoid the overhang of future tax liabilities since it can sell these assets in the future to pay for its borrowing.

What does all of this say for the Canadian context? We need to ask ourselves how much of Canada’s economic problems are caused by “idle money” -- consumers and banks holding on to cash and government debt rather than making private-sector investments. If this is not the real problem, then a stimulus package will have little positive impact and will saddle us with large future tax liabilities. If the real causes of Canada’s economic ills are a $100-a-barrel drop in oil prices and a faltering US economy, as seems likely given that Alberta, Ontario and British Columbia have suffered the most significant job losses, then stimulus packages are largely irrelevant for improving the economy. Government spending isn’t a magic bullet that solves all economic problems.

Harmonizing the PST and GST

This article is from the RIIB blog of May 22, 2009.

One of the more unpopular taxes in Canada is the Federal Goods and Services Tax (GST). Stephen Harper and the Conservative Party of Canada recognized this and made a two-percentage-point cut to the GST part of their campaign platform in 2005/2006. Ontario Premier Dalton McGuinty promised in his most recent budget to harmonize the Provincial Sales Tax (PST) with the GST come 2010. He is now facing the backlash from this decision. Since taxes are an unavoidable consequence of government spending programs, how do we decide which taxes are “better” and which are “worse”?

One simple portrayal of tax policy is that taxation is a matter of taking money from one group’s pocket and transferring it into the pocket of another group. Under this view, one’s ideology determines what are good and bad taxes. An advocate for the poor, for instance, might view a “good” tax as one that takes money from the pockets of corporations and wealthy individuals and transfers that money to the poor via programs such as public housing, welfare, child care support and the like. Under this view, harmonization of the PST and the GST is bad policy because harmonization will require the PST to be imposed on various food and clothing items – necessities previously exempted from the PST – and so will impose additional burdens on the poor. An advocate for business, on the other hand, might see too much money coming from the pockets of business and believe that taxes on this group should be reduced. Such a reduction, the advocate might argue, would allow business to flourish and to create jobs that alleviate poverty. Under this view, harmonization of PST and GST is a good idea – harmonization will allow businesses to claim credits for PST payments on materials, thus reducing business costs and enhancing competitiveness.

If taxation really is a matter of transferring money from one set of pockets to another, then tax policy must come down to ideology. A “good tax” is one that takes money from pockets that are “too full” and transfers that money to pockets that are “too empty”. Depending on your point of view regarding full and empty pockets, a tax reduction for business may be competitiveness enhancing or mere corporate welfare; a tax increase may be a job killer or simply a matter of the wealthy paying their fair share.

Almost invariably, however, tax policy is more than a simple transfer of income from one group to another. Taxes and subsidies have impacts on the ways that resources are used in the economy and so on the amount of wealth the economy creates. As a consequence, different tax policies can result in the creation or destruction of more or less value in the economy. From an economist’s perspective, a “good tax policy” is one that either creates value or destroys as little value as possible. A “bad tax policy” is one that destroys significant value.

All of this, of course, raises the question of how this process of value creation or destruction works and how we decide what are better and worse tax policies.

The way that a tax creates or destroys value may be familiar. Think of cigarette taxes. Increased cigarette taxes have resulted in increased prices for cigarettes. As a result, fewer people are purchasing cigarettes from traditional outlets and fewer are smoking. This has resulted in reduced production of tobacco, reduced cigarette manufacturing and increased smuggling of untaxed cigarettes. In the long run, we should expect to see fewer cases of lung cancer. All of these changes have impacts on individuals and the society, some positive and some negative. The principles behind carbon taxes are the same. Increased taxes on carbon-based electricity consumption, gasoline consumption and the like will lead to increased prices and less consumption of these products. Carbon taxes will also lead to increased demands for products with a smaller carbon footprint. The result is that resources will move from production of carbon-based products to production of products with less carbon content and, in principle, less global warming.

These familiar ideas apply to virtually all taxes. An increase in income taxes reduces the returns from working and so reduces incentives for individuals, both rich and poor, to generate as much income. An increase in the capital gains tax reduces returns on investment and so reduces incentives for individuals to invest rather than consume. An increase in the sales tax increases the price of the taxed goods and so causes individuals to consume less of these goods and more of untaxed goods. In all cases, the principle is the same: A tax on a particular good raises the price of the good relative to the prices of other goods. This price increase results in relatively less consumption of the taxed good and relatively more consumption of other goods. This, in turn, results in resources flowing out of the taxed good sector and into other sectors.

These shifts in resources out of taxed sectors and into other sectors can either create value or destroy it. As market prices tend to reflect private, and not social, benefit, typically there will be too much consumption of goods that create pollution and other externalities – too many resources are in polluting sectors and so there is too much pollution. A tax on polluting goods that causes prices to reflect social benefit will result in resources moving out of polluting sectors, in less pollution and in greater value. When there are no externalities, prices reflect private benefit and then the tax-induced reallocation of resources will destroy value on net.

There is one exception to this principle. If all products are taxed the same, then there is no change in the price of one good relative to another. As a result, there is no change in the consumption of one good relative to another and so no reallocation of resources in one sector relative to another. Incentives for work, for consuming, for saving and investing are not distorted and so wealth and value are not destroyed. This is the idea behind the GST: If virtually everything is taxed equally, we destroy as little value as possible. This is why the Harper cuts to the GST are bad tax policy and why Premier McGuinty should be commended for harmonizing the GST and PST. What about the increased tax burdens on the poor? These can, and should, be corrected and the fix is an income-based GST/PST rebate.
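The logic behind uniform taxation can be illustrated with a toy consumer model. This is a stylized sketch of my own (Cobb-Douglas preferences and made-up tax rates, not figures from the article): a uniform tax leaves relative prices, and hence the consumption mix, unchanged, while a selective tax tilts consumption away from the taxed good.

```python
# A Cobb-Douglas consumer spends fixed budget shares, so demand for goods x
# and y at income m and prices px, py is x = a*m/px, y = (1-a)*m/py.
def demand(m, px, py, a=0.5):
    return a * m / px, (1 - a) * m / py

m, px, py = 100.0, 1.0, 1.0

x0, y0 = demand(m, px, py)                 # no tax
xu, yu = demand(m, px * 1.1, py * 1.1)     # uniform 10% tax on both goods
xs, ys = demand(m, px * 1.2, py)           # 20% tax on good x only

print(x0 / y0, xu / yu)   # uniform tax leaves the consumption ratio unchanged
print(xs / ys)            # selective tax tilts consumption away from x
```

The uniform tax still raises revenue (the consumer buys less of everything), but because the ratio of prices is unchanged it does not push resources from one sector into another.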

Cap-and-Trade, Carbon Taxes and Cash Incentives: Getting Rich and Going Green?

This article is from the RIIB blog of June 4, 2009

Last week the Ontario government announced that, in conjunction with the government of Quebec, it would launch a cap-and-trade system for carbon emissions. The details of this plan are yet to be determined. About a year ago, the government of British Columbia instituted a system of carbon taxes to limit greenhouse gas emissions. These taxes apply to the purchase of carbon-emitting fuels such as gasoline, fuel oil, natural gas and coal. Various governments in Canada have provided cash incentives both for energy conservation – tax credits for transit passes, cash for the purchase of energy efficient appliances and for building retrofits – and for the creation of alternative energy sources such as wind and solar energy.

How should we assess the relative merits of the plans for reducing carbon emissions and is there any reason to prefer one plan over another? Cash incentives certainly seem popular with voters, especially relative to carbon taxes. Stéphane Dion discovered this as his “Green Shift” shifted him right out of the leader’s job. But can we really go green and get rich at the same time? What do we make of cap-and-trade and is it better than a system of carbon taxes?

In a wide-ranging discussion on policies to combat global warming, Professor Paul Joskow of MIT and the Sloan Foundation provides answers to these questions. (To see a video of Professor Joskow’s discussion, click on the Speakers Series link on the RIIB web page and go to the October 28, 2008 presentation.) Joskow points out that a cap-and-trade system and a system of carbon taxes are very similar. Under a cap-and-trade system, limits are placed on total CO2 emissions and emitters are allocated emission permits. A company that can cheaply limit CO2 emissions will do so and emit less than its total permit allocation. This company can sell its remaining permits to companies for which emissions reductions are more expensive and which would otherwise emit more than their permitted allocation. The price for permits is a market price determined by the supply of and demand for permits. Because permits have a positive price, the cap-and-trade system not only limits emissions but also indirectly raises the cost of CO2-generating activities.
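The gains from permit trading can be made concrete with a two-firm example. All numbers here are my own illustrative assumptions, not figures from Joskow’s talk: constant per-tonne abatement costs and an arbitrary cap.

```python
# Two firms each emit 100 tonnes at baseline; the cap is 140 tonnes, so each
# firm is allocated 70 permits. Abatement costs are constant per tonne:
# the cheap firm can cut at $10/tonne, the expensive firm at $30/tonne.
baseline, permits = 100, 70
required = 2 * baseline - 2 * permits          # 60 tonnes must be cut in total
cost_cheap, cost_dear = 10, 30

# No trading: each firm must cut 30 tonnes itself.
no_trade = 30 * cost_cheap + 30 * cost_dear    # $1200

# Trading: the cheap firm cuts all 60 tonnes and sells its spare permits.
# Any permit price between $10 and $30 makes both firms better off.
with_trade = required * cost_cheap             # $600

print(no_trade, with_trade)  # same cap, half the total abatement cost
```

The cap fixes total emissions in both cases; trading simply reallocates the cutting to whoever can do it most cheaply, which is the efficiency Joskow emphasizes.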

Under a carbon tax system like that in BC, prices for CO2 emissions are increased directly via the tax. The increase in costs for carbon emitting activities provides incentives for emitters to seek ways of reducing their CO2 emissions. In this way, the direct cost raising effect of the carbon tax leads indirectly to a reduction in CO2 emissions.

What’s the difference between the two systems? Joskow points to two differences. First, the cap-and-trade system leads to certainty about the level of CO2 emissions since this level is fixed by the cap. Cap-and-trade, however, leads to uncertainty about the cost of doing business for CO2 emitters since this cost is ultimately determined by the market price for permits. This price is uncertain initially and can fluctuate over time. The tax system provides much more certainty about the current cost of doing business: the tax level is fixed by the policy and so CO2 emitters know how much emissions cost. The tax policy leads to uncertainty about CO2 emission levels since we have no good estimates of how much emission reduction a given price increase will induce. The tax system may still create future cost uncertainty, though, as tax levels may need to be adjusted to achieve CO2 emission targets.

The second difference that Joskow points to is in the ease of policy coordination across jurisdictions. Since global warming is, by definition, a global and not regional problem, a system that coordinates policies on emission restrictions globally is preferred to a system of piecemeal regional policies. The reason is simple. Just as some companies can more cheaply reduce CO2 emissions than others, some provinces, states and countries can more cheaply reduce CO2 emissions than others. A system that takes advantage of these differences worldwide will be able to reduce global warming more cheaply. A cap-and-trade system can easily operate large regional and global markets for emissions permits and so achieve these global efficiencies. To coordinate tax policies globally, Joskow argues, is a far greater challenge.

What about cash incentives for conservation and alternative energy production? Can we really go green and get rich at the same time? The answer is, “if it sounds too good to be true, it probably is”. These policies are the political equivalent of the flim-flam or hustle. To see why, imagine a policy that provides up to $3000 per homeowner toward the purchase of energy efficient appliances, window and insulation upgrades and the like. If one million households take advantage of the program, the cost to the government is three billion dollars. Where does the government find three billion dollars? It finds it by increasing taxes elsewhere. Remember, “There ain’t no such thing as a free lunch”.
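The back-of-envelope arithmetic above can be written out explicitly; the taxpayer count in the last step is my own hypothetical addition, just to show the burden being shifted.

```python
# The rebate program described above: $3000 per participating household,
# one million participating households.
rebate = 3_000
households = 1_000_000
program_cost = rebate * households
print(program_cost)              # the $3 billion that must be raised elsewhere

# Hypothetical: if 15 million taxpayers foot the bill through higher taxes,
# each pays an extra $200. The lunch is not free; it is just billed elsewhere.
taxpayers = 15_000_000
print(program_cost / taxpayers)
```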

The bottom line is that cash incentives programs operate by raising distortionary taxes on other activities, thereby raising the cost of these activities, in order to fund the cash-back scheme. By contrast, the cap-and-trade and carbon tax systems raise the prices of CO2 emitting activities so that they reflect the true social value. The revenues that these systems generate enable the government to cut distortionary taxes elsewhere in the economy.

In addition, systems that raise the price of CO2 emissions harness the creativity of the marketplace to determine the best ways to reduce global warming. The cash incentive schemes target particular activities that may or may not be the most effective at lowering CO2 emissions. Wind energy may prove not to be a particularly practical alternative energy source, but windmills will be built if the government provides sufficient cash to build them. Lowering the thermostat in wintertime may be a better way for some households to reduce CO2 emissions than replacing the furnace altogether. With no increase in heating prices but a subsidy for replacing the furnace, the household may not conserve at all and may not replace the furnace either. Even if it does replace the furnace, that may be the more costly way of reducing emissions.

In the end, as Joskow notes, reductions in CO2 emissions will result in higher energy prices and so higher prices for goods that utilize energy. This is a fact of life; we need to accept it.

If I Only Had a Brain: Economic Man and the Straw Man

This article is from the RIIB blog of June 24, 2009.

This past Saturday I happened to be listening to CBC radio’s science show “Quirks and Quarks”. The show was advertised as having a report on the latest research advances on the space elevator. Being a bit of a science junkie, I tuned in. Before I could hear about the space elevator, I was subjected to a report on the exciting new field of “Behavioral Economics”. The report began with the usual description of the “traditional economic approach”; namely, that economists have believed for most of the past 100 years that individuals make decisions fully aware of economic principles. This is the stereotype of the individual as “economic man”, the rational, calculating machine. The listener is then told that recently, thanks to research advances in behavioral economics, we have come to learn that individuals aren’t nearly so adept at making decisions as economists have traditionally believed. Individuals make mistakes, they employ rules-of-thumb and they are affected by framing and emotion. Our eyes have been opened, oh happy day!

I have heard this characterization of the traditional economic approach time and again. The characterization is complete rubbish. No proponent of the traditional economic approach believes that individuals are calculating machines who make decisions fully aware of economic principles. Such economists understand that individuals make mistakes, use rules-of-thumb and can be affected by emotions. The model of economic man as the “rational calculating machine” is an abstraction and not meant as an accurate description of human decision making. It is a simplification.

This admission, of course, raises the question, “If the description of human decision making is wrong, why use it?”

The object of economic study is not human decision making. The point of economics is to understand how large numbers of individual decision makers, interacting in markets of different sorts and faced with market prices, determine the allocation of goods and services in the economy. What economists observe and have systematic data on is not the decisions of each individual. Rather, economists observe market outcomes – prices and quantities. Economists observe the way that oil production and consumption change as oil prices rise; they observe the impact of taxes on investment, work and consumption. These sorts of market outcomes are the stuff of economic study, not individual decision making.

So where does economic man come in and why the emphasis on this sort of decision maker in traditional economics? As I pointed out above, economic man is a simple abstraction adopted by economists to provide a systematic means of getting from individual decision making to the variables of interest – the market outcomes. By forcing decision makers to be “rational, calculating machines”, economists provide discipline to their analysis that may otherwise be lacking.

I once attended a lecture where the speaker was trying to understand the behaviour of stock prices after an initial public offering (IPO). The speaker claimed that the standard pattern was for prices to rise significantly above the initial offer price and then, after a week or so, to decline to a value at or slightly below the initial price. The question was how to understand the initial price run-up. Why would people buy at these very high prices when we all know that prices will likely decline later? The answer the speaker gave to this question was “some people are just stupid”. While it may be right that some people don’t understand the market, this “explanation” presents a basic problem. If I am allowed to resort to justifications that rely on the idea that people are crazy or stupid or just like doing certain things, then, while I’ve justified the observed phenomenon, I really haven’t explained or understood it. Give the economist sufficient leeway on the ways that individuals make decisions and anything can be justified but little may be understood. Economic man forces economists to be more disciplined in their search for understanding.

That economic man provides discipline for economic analysis is, of course, not sufficient reason to cling to the assumption. Economists have persisted in their use of this abstraction for one simple reason: It has worked extremely well in helping economists understand and (qualitatively) predict market outcomes. Think of the recent financial meltdown in the US. The original driver of the meltdown was the large mortgage default. If you tell an economist that mortgage salespeople are paid bonuses for writing mortgages but not penalized upon mortgage default, the economist will tell you that you should expect many mortgages to be sold and defaulted upon. Specific salespeople may continue to be careful when providing mortgages but, on average, the incentive in the market (for the “self-interested maximizer of income”) is to sell mortgages and not worry about default.

There are, of course, situations in which the notion of economic man is not helpful. Economic man does not work well at explaining why someone gives $5 to the Canadian Cancer Society, why an individual votes in a national election or why a fast-food restaurant sells more hamburgers if the interior is red than if it is green. An alternative notion of individual decision making is required here because these are inherently not questions about market outcomes.

And if, one day, some alternative model of decision making proves more successful at explaining market outcomes, I guarantee that economists will adopt it. Economists are pragmatic, if nothing else. In the meantime, economists will continue to assume that economic decision makers are “rational”, not because economists believe that this is an accurate description of human decision making but for a very practical reason: Damned if it doesn’t work.

Health Care Reform: A Tale of Two Systems, One Problem

This article is from the RIIB blog of July 13, 2009.

The debate over health care reform in the United States is on and Canada is featuring prominently in it. From the US right come ads detailing claimed shortcomings of Canada’s health care system. From the Great White North comes the “Jack Layton Goes to Washington” show, with Layton and company seeking to debunk “myths” perpetrated by the right. Certain facts are not in dispute. Approximately 46 million Americans have no health insurance and health care expenditures are more than 16 percent of US GDP. In Canada all are insured and health care expenditures amount to only about 10.5 percent of GDP. In Jack Layton’s words, “The Canadian system produces better health outcomes, reaches everybody and is less expensive to operate than the U.S. system”.

Whether 16 percent of GDP is too much to spend on health care or 10.5 percent too little is open for debate. The fact that 46 million Americans are uninsured does seem to be a problem. Focussing narrowly on these issues, however, misses the real problem facing the US health care system, and the real reason reform is needed. Contrary to popular belief in Canada, the US does not have a fully private health care system. About 50 percent of health care payments there come from public sources. The Medicare and Medicaid programs in the US account for most of these payments. Unless the government gets health care costs under control, Medicare threatens to bankrupt the US. Estimates for Medicare’s unfunded liability – the additional tax revenue required today to cover future Medicare expenditures – range as high as $90 trillion. To cover this liability the US government would need either massive spending cuts or huge tax increases. As a practical matter, the unfunded liability is so large that neither one is realistic. The only real route open to the US is health reform that reduces the cost of health care.

As we give advice on health care reform to our neighbour to the South, we in Canada should not be smug about our own health care system. A recent study by Peter Dungan and Steve Murphy from RIIB’s Policy and Economic Analysis Program indicates that Canada too is facing large future health care costs. (An executive summary of the study can be found on the RIIB web page.) Dungan and Murphy construct population and demographic profile estimates for Canada out to the next century. They match these estimates with the 2006 National Health Expenditure Trends report from the Canadian Institute for Health Information (www.cihi.ca), which gives health care expenditures in 2006 by age group and by sex. This allows Dungan and Murphy to estimate what expenditures on health care would be today if the population demographics were those of, say, 2025.

Looking only at provincial health care expenditures, which account for about 65 percent of total expenditures in Canada, they find that, were the 2025 demographics in place today, expenditures would be almost 20 percent higher, coming in at $115 billion. With the demographics of 2050, provincial health care expenditures would be over 40 percent higher ($137 billion or 9 percent of GDP). These estimates are likely on the conservative side. The CIHI report estimates that real expenditures on health care by the provinces will have increased by an estimated $7 billion, or by about 10 percent, by 2008. Should anything like this trend continue into the future, real provincial expenditures on health care will be significantly higher than the Dungan and Murphy estimates.
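At heart, the Dungan-Murphy exercise is an accounting identity: multiply per-capita spending in each age group by the projected population of that group and sum. The sketch below collapses this to two age groups; the per-capita spending figures are my own illustrative choices, picked to roughly reproduce the article’s totals, not CIHI’s actual data, and the population size is held fixed to isolate the age-mix effect.

```python
# Approximate 2006 Canadian population, and assumed annual provincial health
# spending per capita for the under-65 and 65-and-over groups (illustrative).
pop = 32.6e6
spend_young, spend_old = 1_900, 10_000

def provincial_spend(share_65plus):
    """Total provincial health spending, in $ billions, for a given age mix."""
    old = pop * share_65plus
    young = pop - old
    return (young * spend_young + old * spend_old) / 1e9

base = provincial_spend(0.13)   # 2006 age mix: ~13% aged 65 and over
aged = provincial_spend(0.20)   # projected 2025 age mix: ~20% aged 65 and over
print(round(base), round(aged), round(100 * (aged / base - 1)))
```

With nothing changing but the age mix, spending rises from roughly $96 billion to roughly $115 billion, about 19 percent, which is in line with the article’s near-20-percent figure.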

Much of the increase in health care expenditures projected by Dungan and Murphy comes from the aging of the population. In 2006, approximately 13 percent of the population was aged 65 and over. Dungan and Murphy estimate that, by 2025, 20 percent of the population will be over 65. It is well known that health care expenditures fall disproportionately in the later years of life. This aging of the population also means that, at the same time as health costs are going up, the group of working age individuals who would pay these costs is shrinking. This combination of rising costs and a shrinking tax base is a serious problem. The system we have now, with doctor shortages, crowded emergency rooms and wait lists for care, is barely affordable. What will we face 15 years from now?

The long and the short of it is that the problem that Medicare faces in the US is a problem confronting Canada as well. Canada cannot afford to be complacent. Health care reform is not just a US problem!

Oil Shocks: Are We Headed for More of Them?

This article is from the RIIB blog of July 30, 2009.

Do you recognize any of the following news excerpts?


Four leaders of America’s energy business yesterday warned of a relentless rise in fuel prices, possible industrial collapse, massive electrical blackouts and a “fairly deep and continuous recession” without a commitment to such alternative energy sources as nuclear power or synthetic fuels.


Congressional Panel Forecasts Little Boost in World Oil Output

Conventional world oil production will show little or no increase over the next 20 years while US production will decline sharply, according to a congressional study released yesterday.

A Scramble for Crude Oil Boosts Prices


The scramble for crude oil within the United States among major oil companies, independent producers and refiners has triggered a “gas war in crude oil – only the prices are going up, not down”, says the oil purchasing director for one of the nation’s major oil companies.

Oil Supply Running Out

Warning that global oil reserves are running out at “an alarming rate”, Saudi Oil Minister Sheikh Yamani proposed yesterday an urgent international energy program which “could move our world away from the edge of an abyss”.

If these stories sound familiar, you might be thinking that they’re from last summer. In fact, all of these stories are from 1980. That same year, a respected oil industry analyst predicted that world oil prices would be somewhere in the $100 to $150 range by the year 2000. In fact, oil prices bottomed out at about $10 a barrel in late 1998 / early 1999 – a post-war low when adjusted for inflation – and averaged about $25 a barrel in the year 2000.
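Adjusting nominal prices for inflation, as in the figures above, is a simple CPI ratio. The index levels below are approximate US CPI values, included by me for illustration rather than taken from the article.

```python
# Approximate US consumer price index levels (1982-84 = 100).
cpi = {1980: 82.4, 1998: 163.0, 2000: 172.2, 2008: 215.3}

def to_2008_dollars(price, year):
    """Convert a nominal price from the given year into 2008 dollars."""
    return price * cpi[2008] / cpi[year]

print(round(to_2008_dollars(10, 1998), 2))   # the ~$10 low of 1998-99
print(round(to_2008_dollars(25, 2000), 2))   # the ~$25 average of 2000
```

Even in 2008 dollars, the late-1990s low comes out around $13 a barrel, which is why it ranks as a post-war low once inflation is taken into account.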

Why bring all of this up now? As the world’s economies come out of recession later this year and begin to grow again next year, predictions about the future path of oil prices will no doubt be making headlines once again. Will we see the return of $150 oil prices? Will prices be much higher, possibly $200 and beyond? Since none of us has a crystal ball, predicting the future is always a dicey business; however, there are some simple economic principles that we can use to guide our thinking about how oil prices might play out. The experiences of the last 30 years in the oil market are a useful illustration of these principles.

Until about 1974, post-war oil prices were quite stable in real terms, hovering around $22 a barrel (in 2008 dollars). Oil prices rose sharply through the late ‘70’s and early ‘80’s, with the average real price of oil topping out at almost $100 a barrel in 1980. Real oil prices remained above $50 a barrel through 1985. This sharp rise in prices had a dramatic impact on oil consumption. Whereas over the decade of the 1970’s, oil consumption rose about 40 percent, consumption declined over the first half of the 1980’s and didn’t return to its 1979 level until the end of the decade.

The simple economic principle at play here is this: a huge price run-up creates incentives for consumers to conserve. During the 1980’s, drivers bought more fuel-efficient automobiles, homeowners insulated their houses and purchased energy-efficient appliances and utilities turned to other energy sources to generate electricity. At the same time, the dramatic increase in prices created incentives for oil producers to find new sources of oil and to develop new technologies to recover oil more effectively. This is the second economic principle at play. Together, the increased supplier incentive to produce oil and the increased consumer incentive to conserve oil resulted in dramatic increases in oil reserves. In contrast to the 1970’s, when oil reserves essentially didn’t grow at all, over the decade of the 1980’s reserves increased by about 50 percent. The doomsayers and forecasters turned out to be wrong.

So what happened recently? Part of the explanation for the oil price run-up in 2007 – 2008 is just the other side of the same coin. Over the 1990’s oil prices declined steadily – not a surprise given the huge increase in reserves – and remained at or below $30 a barrel through the first several years of the 2000’s. The result of the low oil prices was that consumption rose sharply, increasing about 25 percent from 1990 to 2003. Incentives for conservation were diminished. In 1980, only about 15 percent of new vehicle sales in the US were light trucks – SUV’s, minivans and pick-up trucks. By 2003, virtually half of all new vehicles sold in the US were light trucks. As a consequence, the average fuel efficiency of all new vehicles sold in the US in 2003 was about the same as it was in 1980. Low energy prices also helped fuel the growth of China and other developing countries.

On the production side, the low prices meant that oil producers had little incentive to search for new sources of oil. Who’s going to search for oil when it’s selling for $10 a barrel? Combined with the growing consumption, this meant that, much as in the 1970’s, oil reserves barely increased at all through the decade of the 1990’s and into the early 2000’s. Not surprisingly, then, we saw upward pressure on prices beginning in 2004. It’s the same economic principles at work. Even with that, though, don’t forget that, as late as May 2007, oil prices averaged only about $60 a barrel. The doubling of prices one year later was not because of some huge spike in oil demand from China and India. There were other forces at play driving the price increase.

So where are prices headed in the future? We can all speculate on the answer to this question and intelligent people can come to different conclusions. One thing is sure, however. Rising prices will reduce consumption growth and increase growth in production. Together, these twin effects must ultimately put downward pressure on prices. So, don’t bet the farm on prices being high and rising forever. If oil prices are so high that no one can afford to buy oil, prices will come down.

Tuesday, September 8, 2009

Skills and the City: Who Makes Your City Run?

Much has been made of late of the importance of large cities to countries’ competitiveness and economic futures. The World Bank’s World Development Report, Entering the 21st Century, described cities as the “engines of growth” for countries. Closer to home, the City of Toronto’s Agenda for Prosperity notes that:

Toronto and Canada’s other major cities are the engines of regional, provincial and national economic growth. Their ability to generate wealth is critical to the future prosperity of all Canadians.

To be sure, there is substance to such claims. One of the facts about cities is that average wages and incomes in large cities are generally far higher than average wages and incomes in smaller cities and towns. This wage difference reflects the fact that worker productivity is generally much higher in large cities. Also, in the developed world, the majority of individuals live in cities. In the US, for instance, 68 percent of Americans occupy 1.8 percent of the total land area. With the service sector now the dominant source of employment and income in developed economies, the buzz is all about the city as a creative hub.

What does all of this mean for the future of large and small cities? Will governments focus on large cities and let smaller cities and towns languish and ultimately die? Would we all be better off living in large cities? Is there something smaller cities can do to remain prosperous?

In a recent series of papers, Will Strange and Bernardo Blum of the Business Economics group at Rotman and Marigee Bacolod of the University of California at Irvine dig underneath the simple averages and superficial facts about cities to look at who it is that selects into living in large cities and who really gains from this choice. They have some surprising results and their work sheds light on how different kinds of cities might compete and prosper. (A link to these papers can be found on the RIIB home page.)

In their study Bacolod, Blum and Strange identify the skill sets that different occupations require and use this information to infer the skills that individuals in these occupations possess. They distinguish three types of skills: cognitive skills (doing calculations, problem solving, following instructions, reading and writing); people skills (dealing with other people, accepting responsibility, influencing others); and motor skills (handling and manipulating equipment, driving, operating machinery, precision work). They also obtain measures of individuals’ inherent abilities or intelligence. Their aim is to investigate the skill and intelligence distributions of individuals living in large and small cities and the returns to different sorts of skills and intelligence in large and small cities.

What Bacolod, Blum and Strange find is that the superstars of both cognitive skills and people skills are more likely to live in larger cities than in smaller ones. This is not surprising given they also find that the returns, as measured by wages, to cognitive and people skills are higher in large cities. Those individuals with the highest innate ability / intelligence are also more likely to live in large cities as the return to intelligence is also higher there. The higher returns are a result of the fact that large cities yield greater opportunities to learn from others and to specialize in jobs that best exploit one’s skills and abilities. High end cognitive and people skill individuals are best positioned to take advantage of these opportunities and so they locate in large cities.

While these findings support the standard view of cities as engines of growth and creative hubs in the new economy, there’s much more to the story of cities. Bacolod, Blum and Strange find that those individuals with little in the way of people skills are also more likely to live in large cities. And before you say, “of course”, this isn’t because scientists, engineers, economists and mathematicians – the high cognitive skills types – have no people skills to speak of. As it turns out, people with low measures for basic intelligence / ability are more likely to live in large cities also. In general, Bacolod, Blum and Strange find that large cities are quite heterogeneous places when it comes to the distribution of skills and intelligence – large cities have a bigger share of the population at either end of the distribution – while smaller cities are much more homogeneous places, having fewer of both the very high end people and the low end individuals.

If large cities are places that generate high returns to basic intelligence, people skills and cognitive skills, why do we see this greater disparity in skills and abilities in large cities? Why are individuals at the bottom end of the distribution more likely to live in large cities? Recall that the way that the skills and intelligence superstars reap large rewards from their talents is by being highly specialized and spending time learning from others. To be able to do this, these individuals need someone to provide services like childcare, food preparation, cleaning, repair work, transportation and much more. A city can’t be prosperous and an engine of growth with only the skills and intelligence all-stars. These individuals can only reap the returns from being creative if others are in the trenches doing the work the all-stars don’t have time for. This is an important message that must not be lost in the debate on city structures. Large cities must make room for a very diverse set of people – blue collar, white collar and no collar – if they are to prosper. All are important for the city to be an engine of growth.

The other part of the story from Bacolod, Blum and Strange is that the return to motor skills is actually higher in smaller cities than in large ones. For motor skills, perhaps the value of learning from others or of specializing is not so large; it may be that high motor skill activities require significant investments in land and that land costs are lower in smaller cities. Whatever the reason, smaller cities are the locations where high motor skill individuals are most productive.

There’s a lesson here as well. Smaller cities aren’t going to compete for the cognitive and people skill superstars. These individuals are going to locate in large cities where the returns to their skills and abilities are much greater. Small cities, though, can compete for the motor skills superstars. And there’s the message: Instead of every smaller city trying to be a mini New York or Toronto, they should try to be very different. Success is more likely to come from manufacturing high quality products requiring skilled craftspeople and not from being the next Bohemia.