Paul Walker

Dr Paul Walker is an economist at the University of Canterbury. He has expertise in microeconomics, institutional economics and industrial organization. He blogs for The Dismal Science.

Access to higher education and the value of a university degree - The Dismal Science

Jan 09, 2015

Governments sometimes promote reforms that increase access to education for a large share of the population, often with little thought as to what the actual outcomes will be. These reforms may lower the returns to education by altering returns to skills, education quality, and peer effects. In a new column at VoxEU.org, Nicola Bianchi examines the case of a 1961 Italian reform that increased enrolment in university STEM majors among students who had previously been denied access. The reform ultimately failed to raise their incomes.

Bianchi writes,

In a recent working paper, I illustrate the effects of a 1961 Italian reform that led to a 216% increase in enrolment in university STEM (Science, Technology, Engineering, and Mathematics) programmes over a mere eight years (Bianchi 2014). I find that:
  • The reform increased enrolment in university STEM majors among students that had been previously denied access, but ultimately failed to raise their incomes.
  • The enrolment expansion lowered the value of a STEM education by crowding out university spending and generating negative peer effects.
  • Due to lower returns to a university STEM degree, some students with the potential to succeed in STEM turned to other university programs.
He continues,
Italian high schools offer different curricula. Until 1960, a student who graduated from a university-prep high school (licei) could enroll in university in any major. A student who graduated from a technical high school for industry-sector professionals (istituti industriali) could enroll in only a few majors and most often did not enroll in university at all. In 1961, the Italian government allowed graduates with a technical diploma to enroll in university STEM majors for the first time. Technical graduates embraced this opportunity to the extent that freshman enrolment in STEM programs had increased by 216% by 1968 [...]

To analyze the effects of this reform, I collected high school records, university transcripts, and income tax returns for the population of students that completed high school in Milan between 1958 and 1968. I chose Milan because it is Italy’s commercial capital and second largest city. It has the thickest market for university graduates and university-type jobs, and is believed to be the place where a university graduate can earn the highest returns.
The reforms resulted in higher university access but a lower value of education.
The reform was successful in increasing university access among students with a technical diploma. After 1961, many technical students enrolled in university and completed their degrees. However, I find little evidence that technical students gained positive returns to university STEM education. This is an important result for two reasons:
  • STEM degrees were leading to high-paying occupations and
  • the outside option of technical students was to enter the labour market with just a high school diploma.
To explain these findings, I lay out a simple framework in which enrolment expansions affect returns to education through three main channels:
  • higher supply leads to lower wages,
  • higher enrolment crowds university resources and decreases the quality of education,
  • learning is lower in classes with students from different types of high school.
The reform brought crowding of university spending, negative peer effects, and changes in major choices.
Several findings suggest that the enrolment expansion following the policy implementation lowered the returns to a STEM degree. To analyze changes in the value of a university education, I focus on the students who were not directly affected by the reform – the graduates from university-prep high schools. Among these students, returns to STEM education declined after 1961 to the point of erasing the pre-reform income premium associated with a STEM degree.

This decline can be partially explained by a lower amount of skills acquired in STEM majors after 1961. I find that human capital (measured by absolute grades) decreased more in STEM courses in which resources became more crowded and in which the entry of technical students had greater disruptive potential. Overall, lower resources per student can explain 31% of the income decline, while the change in class composition can explain another 37.3%. The remaining share can be attributed to higher supply of workers with a university STEM education or possibly other minor channels of general equilibrium effects.

By decreasing the value of STEM education, the reform might have deprived STEM majors of talented students. After 1961, many more students with a university-prep diploma decided to enroll in university majors that were still not accessible to technical students. This effect was concentrated among the students with higher high school grades.
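Bianchi's decomposition of the income decline is, in effect, a simple accounting identity. A stylised rendering of it (the shares are from the paper; the notation is mine, not Bianchi's):

```latex
% Decomposition of the post-1961 decline in the STEM income premium
\Delta Y \;=\;
\underbrace{\Delta Y_{\text{resources}}}_{\approx 31\%\ \text{of}\ \Delta Y}
\;+\;
\underbrace{\Delta Y_{\text{peers}}}_{\approx 37.3\%\ \text{of}\ \Delta Y}
\;+\;
\underbrace{\Delta Y_{\text{supply/GE}}}_{\approx 31.7\%\ \text{(residual)}}
```

Here \(\Delta Y\) is the total post-reform fall in the income premium to a STEM degree; the residual share is what Bianchi attributes to the larger supply of workers with a university STEM education and other general-equilibrium channels.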
Policy conclusions?
There are instances in which students should invest more in education. For students who do not have the resources to pay for education, public intervention is needed to improve access, but it should simply ease the financial constraints of students who are under-investing in education. Public intervention should not take the form of greatly expanded education provision by state-controlled universities. The inefficiencies in the public provision of education might be magnified by enrolment expansions and might limit the benefits for targeted students.

How effective is the minimum wage at supporting the poor? - The Dismal Science

Jan 08, 2015

This is an important question, since one good reason for supporting a minimum wage would be that it is an effective antipoverty policy. Such a belief rests on two assumptions: first, that raising the minimum wage will increase the incomes of poor families; and second, that the minimum wage imposes little or no public or social cost.

The policy debate over the minimum wage principally revolves around its effectiveness as an antipoverty program. A popular image used by both sides of the debate consists of families with breadwinners who earn low wages to support their children. Policies that raise the wages of these workers increase their earnings and contribute to their escaping poverty. As a counterbalance to this impact, opponents of the minimum wage argue that wage regulation causes some low-wage workers to lose their jobs and suffer income losses. The issue, then, becomes a tradeoff: some low-income breadwinners will gain and others will lose. Promoters of the minimum wage retort that employment losses are quite small and, consequently, the workers who gain far outnumber those who lose.

In addition to potential adverse employment effects, opponents of minimum wages further counter the belief that the minimum wage assists poor families by documenting that many minimum-wage workers are not breadwinners of low-income families. They are, instead, often teenagers, single heads of household with no children, or not even members of low-income families. Promoters of the minimum wage admit that some of these groups may also benefit from the wage increase, but since few workers lose jobs, they contend that the minimum wage still benefits low-income families with children.

The notion that the minimum wage can be increased with little or no economic cost underlies many advocates’ assessments of the effectiveness of the minimum wage in its antipoverty role. Most economists agree that imposing wage controls on labor will not raise total income in an economy; indeed, elementary economics dictates that such market distortions lead to reduced total income, implying that overall costs exceed benefits. If, however, one presumes that employment losses do not occur and total income does not fall, then the minimum wage debate becomes a disagreement over how it redistributes income. The efficacy of a minimum wage hike as an antipoverty program depends on who benefits from the increase in earnings and who pays for these higher earnings. Whereas a number of studies have documented who benefits, who pays is far less obvious. But someone must pay for the higher earnings received by the low-wage workers.

At the most simplistic level, the employer pays for the increase. However, businesses don't actually pay, for they are merely conduits for transactions among individuals. Businesses have three possible responses to the higher labor costs imposed by the minimum wage. First, they can reduce employment or adjust other aspects of the employment relationship (e.g., fewer fringe benefits or training opportunities), in which case some low-wage workers themselves pay, through the loss of their jobs or reduced non-salary benefits; second, firms can lose profits, in which case owners pay; and, third, employers can increase prices, in which case consumers pay.

Of these three sources, the possibility that low-wage workers bear any cost of the minimum wage has been largely dismissed by proponents in recent years, based on several (albeit much disputed) studies that found little or no job loss following historical increases in federal and state minimum wages. While the extra resources needed to cover higher labor costs could theoretically come out of profits, several factors suggest that this source is the least likely to bear costs. Capital and entrepreneurship are highly mobile and will eventually leave any industry that does not yield a return comparable to that earned elsewhere. This means that capital and entrepreneurship, and hence profits, will not bear any significant portion of a “tax” imposed on a particular factor of production. Stated differently, employers in low-wage industries are typically in highly competitive industries such as restaurants and retail stores, and the only option for these low profit margin industries becomes lowering exposure to low-wage labor or raising prices. With jobs presumed to be unaffected, this leaves higher prices as the most likely candidate for covering minimum wage costs. In fact, supporters of minimum and living wage initiatives often admit that slight price increases pay for higher labor costs following minimum wage hikes.
The above comes from a new paper, How Effective Is the Minimum Wage at Supporting the Poor?, by Thomas MaCurdy of the Department of Economics at Stanford University.

MaCurdy sets out to evaluate the redistributive effects of the minimum wage adopting the view implicitly held by its advocates, that is, the study examines the antipoverty effectiveness of this policy presuming that firms raise prices to cover the full amount of their higher labor costs induced by the rise in wages.
In particular, the analysis simulates the economy taking into account both who benefits and who pays for a minimum wage increase, assuming that its costs are all passed on solely in the form of higher consumer prices. The families bearing the costs of these higher prices are those consumers who purchase the goods and services produced with minimum-wage labor. In actuality, most economists expect that some of these consumers would respond to the higher prices by purchasing less, but such behaviors directly contradict the assertion of no employment effects since lower purchases mean that fewer workers would be needed to satisfy demand. Consequently, to keep faith with the view held by proponents, the simulations carried out in this study assume that consumers do not alter their purchases of the products and services produced by low-wage labor and that they bear the full cost of the minimum wage rise. This approach, then, maintains the assumption of a steady level of employment, the “best-case” scenario asserted by minimum-wage proponents. Although highly stylized and probably unrealistic, the following analysis demonstrates that the minimum wage can have unintended and unattractive distributional effects, even in the absence of the employment losses predicted by economic theory.
To evaluate the distributional impacts of an increase in the federal minimum wage, MaCurdy investigates the circumstances in the United States in the 1990s, when the federal minimum wage increased from US$4.25 in 1996 to US$5.15 in 1997. (In 2014 dollars, this increase corresponds to a change from US$6.40 to US$7.76.)
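The 2014-dollar figures are just a standard CPI rescaling. A minimal sketch of the conversion; the CPI-U index values below are approximate and are my assumption for illustration, not numbers from the paper:

```python
# Convert the 1996/97 nominal minimum wages into 2014 dollars via a CPI ratio.
# The index values are approximate annual-average CPI-U figures (assumed here).
CPI_1996 = 156.9   # approximate CPI-U, 1996
CPI_2014 = 236.7   # approximate CPI-U, 2014

def to_2014_dollars(nominal: float, cpi_then: float, cpi_now: float = CPI_2014) -> float:
    """Rescale a nominal dollar amount into 2014 dollars."""
    return nominal * cpi_now / cpi_then

print(round(to_2014_dollars(4.25, CPI_1996), 2))  # ~6.41, matching the quoted US$6.40
print(round(to_2014_dollars(5.15, CPI_1996), 2))  # ~7.77, close to the quoted US$7.76
```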
To identify families supported by low-wage workers and to measure effects on their earnings and income, this analysis uses data from waves 1-3 of the 1996 Survey of Income and Program Participation (SIPP). To translate the higher earnings paid to low-wage workers into the costs of the goods and services produced by them, this study relies on national input-output tables constructed by the Minnesota IMPLAN (Impact Analysis for Planning) Group, matched to a time period comparable with SIPP’s. To ascertain which families purchase the goods and services produced by low-wage workers and how much more they pay when prices rise to fund minimum wage increases, this study uses data from the Consumer Expenditure Survey (CES), again matched to the same time period as SIPP’s. The contribution of this study is not to estimate the distribution of benefits of the minimum wage, nor is it to estimate the effect on prices; both of these impacts have already been estimated in the literature. Instead the goal of this paper is to put the benefits and cost sides together to infer the net distributional impacts of the minimum wage on different categories of families and to translate this impact into a format readily accessible to economists and policymakers.

To provide an economic setting for evaluating the distributional measures presented here, this study develops a general equilibrium (GE) framework incorporating minimum wages. [Details of the GE model are given in the Appendix to the paper] This model consists of a two-sector economy with the two goods produced by three factors of production: low-wage labor, high-wage labor, and capital. A particular specification of this GE model justifies the computations performed in the analysis, and entertaining alterations in its behavioral elements permits an assessment of how results might change with alternative economic assumptions. The model proposed here goes well beyond what is currently available in the literature, which essentially relies on a Heckscher-Ohlin approach with fixed endowments (supplies) of labor and capital inputs. In contrast, the GE model formulated in this study admits flexible elasticities for both input supplies and for consumer demand, as well as a wide range of other economic factors.
As to results: recall that the exercise described in this paper simulates the distributional impacts of the rise in the federal minimum wage from US$4.25 to US$5.15 implemented in 1996-97.
Following the assumptions maintained by advocates, the simulation presumes (i) that low-wage workers earned this higher wage with no change in their employment or any reduction in other forms of compensation, (ii) that these higher labor costs were fully passed on to consumers through higher prices, and (iii) that consumers simply paid the extra amount for the goods produced by low-wage labor with no change in the quantities purchased. The cost of this increase is about 15 billion dollars, which was nearly half the amount spent by the federal government on such antipoverty programs as the federal EITC, AFDC/TANF, or Food Stamp program. The analysis assesses the extent to which various categories of families benefit from higher earnings, and how much more these groups pay as consumers through higher prices. Combining the two sides yields a picture of who gains and who pays for minimum wage increases, including the net effects for families.
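To make the mechanics concrete, here is a toy version of the accounting in that simulation. Everything in it is illustrative: the family records, wage gains, and spending figures are invented, not SIPP/CES data; only the logic (earnings gain minus price cost, with no employment response) follows the paper.

```python
# Toy net-effect accounting for a minimum wage rise, in the spirit of MaCurdy's
# "best-case" simulation: full pass-through to prices and no job losses.
# All numbers are invented; the real study uses SIPP, IMPLAN and CES data.

families = [
    # (income group, annual earnings gain from the wage rise,
    #  annual spending on goods produced with minimum-wage labour)
    ("bottom fifth", 400.0, 9_000.0),
    ("bottom fifth", 0.0, 8_000.0),    # most low-income families have no minimum-wage worker
    ("middle fifth", 250.0, 15_000.0),
    ("top fifth", 350.0, 30_000.0),    # high-income families contain low-wage workers too
]

PRICE_RISE = 0.005  # assumed 0.5% rise in the prices of low-wage-produced goods

for group, gain, spending in families:
    cost = spending * PRICE_RISE   # what the family pays through higher prices
    net = gain - cost
    print(f"{group:>12}: gain ${gain:6.2f}, price cost ${cost:6.2f}, net ${net:+7.2f}")
```

Run on numbers like these, the toy makes MaCurdy's point: a family with no minimum-wage worker is a net loser whatever its income, since it pays the higher prices without receiving any of the higher earnings.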

On the benefit distribution side, as other research has shown, the picture portrayed by this analysis sharply contradicts the view held by proponents of the minimum wage. Low-wage families are typically not low-income families. The increased earnings received by the poorest families are only marginally higher than those received by the wealthiest. One in four families in the top fifth of the income distribution has a low-wage worker, which is the same share as in the bottom fifth. Virtually as much money goes to the highest-income families as to the lowest. While advocates compare wage levels to the poverty threshold for a family to make the case for raising the minimum wage, less than $1 in $5 of the additional earnings goes to families with children that rely on low-wage earnings as their primary source of income. Moreover, as a pretax increase, 22% of the incremental earnings are taxed away as Social Security contributions and state and federal income taxes. The message of these findings is clear: contrary to conventional wisdom, raising the minimum wage targets the poor wastefully.

Turning to who pays the costs of an increase in the federal minimum wage through higher prices, the analysis reveals that the richest fifth of families do pay a much larger share (three times more) than those in the poorest fifth. This outcome reflects the fact that wealthier families simply consume much more. However, when viewed as a percentage of expenditures, the picture looks far less appealing. Expressed as a percentage of families’ total nondurable consumption, the extra costs from higher prices are slightly above 0.5% for families at large. The picture worsens further when one considers costs as a percentage of the types of consumption normally included in the calculation of state sales taxes, which exclude a number of necessities such as food and health care. Here, the implied costs approximately double as a percentage of expenditure. More important, minimum-wage costs as a share of “taxable” annual expenditures fall monotonically with family income. In other words, the costs imposed by the minimum wage are paid in a way that is more regressive than a sales tax.

On net, the minimum wage does redistribute income slightly in favor of lower-income families, with higher-income families paying more in increased prices than they gain from the rise in their earnings. However, adverse impacts occur within income groups. Whereas less than one in four low-income families benefit from a minimum wage increase of the sort adopted in 1996, all low-income families pay for this increase through higher prices, rendering three in four low-income families net losers. Meanwhile, many higher-income families are net winners.

Political support for the minimum wage largely depends on the apparent clarity of who benefits and the inability to trace who pays for the wage increase, irrespective of whether costs are paid through higher prices, lower profits, or cutbacks in jobs or employee benefits. As shown in this study, the benefits created by the minimum wage go to families spread essentially evenly across the income distribution; and, when minimum wage increases are paid for through higher prices, the induced rise in consumption expenditures mimics the imposition of a sales tax with a higher tax rate on the goods and services purchased disproportionately by low-income families. Consequently, a minimum-wage increase effectively emulates the imposition of a “national tax” that is more regressive than a typical sales tax, with its proceeds allocated to families without regard to their income. This characterizes the income transfer properties of the minimum wage, properties that many might not associate with an antipoverty program.
The highlighted sections are some of the more important takeaway bits from the study.

In summary, MaCurdy adopts a “best-case” scenario taken from minimum-wage advocates. His study projects the consequences of the increase in the national minimum wage instituted in 1996 for the redistribution of resources among rich and poor families. Under this scenario, the minimum wage increase acts like a regressive sales tax in its effect on consumer prices and is in fact even more regressive than a typical state sales tax. With the proceeds of this national sales tax collected to fund benefits, the 1996 increase in the minimum wage distributed the bulk of these benefits to the one in four families with a low-wage worker, spread nearly evenly across the income distribution. Far more poor families suffered reductions in resources than gained, and as many rich families gained as poor families. These income transfer properties of the minimum wage document its considerable inefficiency as an antipoverty policy.

(HT: thanks to Tim Worstall for pointing out the study.)

The Economics of World War I. 5 and 6 - The Dismal Science

Jan 08, 2015

Two more in the series of posts from The Economics of World War I at VoxEU.org

The US learned the wrong lessons from WWI
Hugh Rockoff, 04 October 2014
World War I profoundly altered the structure of the US economy and its role in the world economy. However, this column argues that the US learnt the wrong lessons from the war, partly because a halo of victory surrounded wartime policies and personalities. The methods used for dealing with shortages during the war were simply inappropriate for dealing with the Great Depression, and American isolationism in the 1930s had devastating consequences for world peace.

World War I: Why the Allies won
Stephen Broadberry, 11 November 2014
In the massive circumstances of total war, economic factors play the deciding role. Historians emphasise size in explaining the outcome of WWI, but this column argues that quality mattered as well as quantity. Developed countries mobilised resources in disproportion to their economic size – the level of development acted as a multiplier. With their large peasant sectors, the Central Powers could not maintain agricultural output as wartime mobilisation redirected resources from farming. The resulting urban famine undermined the supply chain behind the war effort.

Returns to innovation - The Dismal Science

Jan 07, 2015

Recently Tim Worstall has been reminding us of a 2004 paper, "Schumpeterian Profits in the American Economy: Theory and Measurement", by William D. Nordhaus. The point of the paper is that entrepreneurs capture less than 3% of the social returns to their innovation. The paper's abstract reads:

The present study examines the importance of Schumpeterian profits in the United States economy. Schumpeterian profits are defined as those profits that arise when firms are able to appropriate the returns from innovative activity. We first show the underlying equations for Schumpeterian profits. We then estimate the value of these profits for the non-farm business economy. We conclude that only a minuscule fraction of the social returns from technological advances over the 1948-2001 period was captured by producers, indicating that most of the benefits of technological change are passed on to consumers rather than captured by producers.
Back in 2004 Don Boudreaux blogged on the paper at the Cafe Hayek blog. He said,
In a recent NBER working paper – “Schumpeterian Profits in the American Economy: Theory and Measurement” – Yale economist William Nordhaus estimates that innovators capture a mere 2.2% of the total “surplus” from innovation. (The total surplus of innovation is, roughly speaking, the total value to society of innovation above the cost of producing innovations.) Nordhaus’s data are from the post-WWII period.

The smallness of this figure is astounding. If it is anywhere close to being an accurate estimate, the implication is that “society” pays a paltry $2.20 for every $100 worth of welfare it enjoys from innovating activities.

Why do innovators work so cheaply? One possible reason is alluded to by Nordhaus himself: excess optimism. Nordhaus suggests that over-optimism might explain the late 1990s tech-market equity bubble. The social gains from innovation were in fact very large, but the ability of investors to capture more than a small sliver of these gains – rather than see these gains flow to consumers in the form of lower prices and improved products – proved undoable.

Another possible explanation for why innovators work so cheaply is that the prospects, few as they might be, for capturing gargantuan shares of the gains from innovation are sufficiently attractive that even rational, well-informed entrepreneurs and investors perform and fund innovating activities, each hoping that he or she will be among the tiny but inordinately lucky handful of entrepreneurs and investors who personally do capture a much-much-greater-than-normal share of the value of their innovative endeavors.

Whatever the reason, Nordhaus’s empirical evidence supports (at least my) casual observation that innovative economic activity yields benefits that are both enormous and widespread.
Boudreaux has now added an Addendum to the above blog posting noting the implications of Nordhaus's paper for the current debate about income inequality. He writes,
Nordhaus’s findings are relevant also to discussions of income inequality. His findings show that successful entrepreneurs have already, in the very process of succeeding in the market and becoming wealthy, increased the wealth of ‘society’ – have ‘given’ to others – far more than each successful entrepreneur has increased his or her own individual wealth. This process of enhancing the economic well-being of countless others through successful market innovation is neither intended nor choreographed by government, but this fact doesn't make the results any less real or significant.

True, in a society in which people are not all equally innovative and driven and risk-tolerant, the measured monetary results of such successful innovation are that some individuals (the successful entrepreneurs) gain more wealth than is gained by other individuals (those who passively prosper simply by being a consumer and worker in an innovation-filled market economy). Measured monetary incomes, therefore, do become less equal.

But why do we so seldom hear from the fairness-obsessed, we’re-all-in-this-together crowd any expressions of concern about the great inequality of net contributions to total wealth? Where is the concern over this “unfairness”? Compared to successful market entrepreneurs, people who choose to consume much leisure or who remain consistently afraid to risk their wealth on entrepreneurial ventures enjoy over their lifetimes a higher ratio of wealth-increases to their own contributions-to-wealth. If we are to be concerned with cosmic fairness or “social justice” or “inequality,” why is this inequality one that is or ought to be ignored?
I still find, like Boudreaux, the smallness of the 2.2% figure astounding. It tells us that "society" gets a very good deal out of entrepreneurs, and thus instead of complaining about the absolute size, in terms of the number of dollars, of the 2.2% we should just be very happy with our 97.8%. Incentives matter, and such a small percentage is a small price to pay for the incentive it gives for generating innovation and growth.

Some proper economics research for a change: LBW decisions and bias by umpires. - The Dismal Science

Jan 06, 2015

One of the most important questions in economics has to do with whether pressure from home crowds affects decision making of sports officials. A new column at VoxEU.org investigates this problem using new data from cricket matches. The authors find that neutral umpires decrease the bias against away teams, making neutral officials very important for a fair contest.

'Leg before wicket' (LBW) decisions in cricket provide a fascinating setting in which to study bias in decision making by umpires.

Umpiring decisions in cricket provide a fascinating case study in which to study the issue. In the first place, decisions such as whether the batsman is out ‘leg before wicket’ (LBW) require significant judgement from the umpire in a very short period of time (less than 10 seconds). At least until recently, umpires have had complete discretion over these decisions, which can have crucial impacts on the outcome of matches (Chedzoy 1997). Unusually amongst professional sports, international cricket continues to use officials of the same nationality as the home team. Throughout most of the history of test cricket, both umpires were from the same country as the home team. In 1994, the regulations were changed and one of the umpires was required to be from a neutral country. From 2002, both umpires were required to be neutral. In One Day International (ODI) cricket, there is still one home and one neutral umpire in most matches. Unsurprisingly, cricket fans and sometimes players have long held suspicions that decisions by home umpires tend to favour the home team. The notorious altercation between the former England cricket captain Mike Gatting and Pakistani umpire Shakoor Rana in 1987 led to an international diplomatic incident, the ramifications of which were felt for many years.

Despite this, academic study of officials’ decision making in cricket has been limited to a handful of articles. An investigation by Sumner and Mobley in the New Scientist in 1981 was the first to focus on leg before wicket decisions against home and away teams, followed by Crowe and Middeldorp (1996) and Ringrose (2006). Although these articles broadly concluded that away teams suffer more leg before wicket decisions against them than do home teams, none was able to establish statistically meaningful links between neutrality of umpire and decisions against home and away teams.
Recent research by Ian Gregory-Smith, David Paton and Abhinav Sacheti takes a new look at the issue.
It was this issue that we sought to address in our recent article (Sacheti et al. 2014). We collected data from 1,000 test matches played between 1986 and 2012 from ESPNCricInfo. The changes to regulations about neutral umpires provided us with an ideal ‘natural experiment’; in our sample, around 20% of matches were umpired by home officials, 35% by one home and one neutral official, and 45% by two neutral officials. We also controlled for the quality of team, venue (as each pitch may have distinct characteristics making it more or less conducive to leg before wicket decisions than others), and even the experience of the umpires in the match, among other things. With these controls in place, we found striking results, as shown in Table 1 below.
  • During the period when there were two home umpires, home teams had a clear advantage.
Reading across the columns for the Home batting marginal effect in Table 1, batsmen in away teams were given out leg before wicket about 16% more often than batsmen in home teams.
  • However, with one neutral umpire, the bias against away teams receded to 10%, and in the matches with two neutral umpires there was no home advantage at all.
It would thus seem that having neutral officials is very important for a fair contest.

Table 1. Negative binomial model of number of leg before wicket (LBW) decisions per innings
Notes: (i) Robust standard errors in brackets, clustered by match; (ii) *Significant at the 10% level. **Significant at the 5% level. ***Significant at the 1% level; (iii) ‘Home marginal effect’ is calculated as the Average Marginal Effect (see Cameron and Trivedi 2010, p.576); (iv) Controls are umpire experience; log of overs; innings; country level dummies for each home team; batting team effects and bowling team effects; (v) For the full table of results and a battery of robustness checks see Sacheti et al. (2014).
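For readers who want to see the shape of such a model, a sketch of a negative binomial regression with match-clustered standard errors in Python's statsmodels follows. The data frame and column names are hypothetical; Sacheti et al. (2014) estimate their model on their own ESPNCricinfo data with the controls listed in the notes above.

```python
# Sketch of a negative binomial model of LBW decisions per innings, in the
# spirit of Sacheti et al. (2014). The file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lbw_innings.csv")  # hypothetical: one row per innings

# LBW count as a function of whether the batting side is the home team,
# interacted with the umpire regime (two home / one neutral / two neutral),
# plus a few of the controls mentioned in the notes to Table 1.
model = smf.negativebinomial(
    "lbw_count ~ home_batting * C(umpire_regime)"
    " + umpire_experience + log_overs + C(innings_number)",
    data=df,
)

# Cluster standard errors by match, as in the notes to Table 1.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["match_id"]})
print(result.summary())

# Average marginal effect of batting at home on expected LBW decisions.
print(result.get_margeff(at="overall").summary())
```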
So the next question is crowd pressure or favouritism?
An obvious question is whether the apparent bias in favour of home teams was caused by crowd pressure. We examined this by comparing results between the first two innings and the final two innings of test matches. The rationale is that crowds tend to be higher in the early stages of a test match and decline significantly later on (Hynds and Smith 1994). We found that the advantage to home teams from home umpires was strongest in the final two innings of the match. So, there is little evidence that bias towards home teams from home umpires was driven primarily by crowd pressure.
What, you may ask, of the decision review system (DRS)?
In our sample there were 71 matches in which the decision review system was in place. Leg before wicket appeals or decisions in these matches can be referred to a third umpire who has the benefit of watching a slow-motion replay of the appeal or decision. All these matches had two neutral umpires, so we cannot use these data to identify any effect of favouritism by home umpires. However, any differences between home and away teams in referred decisions could indicate favouritism by neutral umpires towards home (or away) teams. Out of the 389 referred leg before wicket decisions in our sample, almost exactly the same proportion went against the away team as against the home team. This is consistent with our main finding that neutral umpires do not display bias.
So, conscious or unconscious favouritism?
It is important to note that our results do not necessarily suggest that home umpires deliberately tended to favour their own team. It is possible that home umpires could favour home teams subconsciously. Our research does not attempt to examine the motivations of umpires. It is clear, however, that the introduction of neutral umpires in test cricket overcame the problem of home bias. This finding is important given the continued presence of home umpires in One Day Internationals and also because some commentators are suggesting a return to home umpires in test cricket on the grounds that new technology such as the decision review system makes it easier to reduce poor decision making. However, whilst the decision review system offers a ‘check’ of umpires’ decisions, it still allows some subjective decisions to stay in favour of the on-field umpire’s call. So in the light of our results, any proposal to revert to home umpires in test cricket should be treated with some caution.
Refs:
  • Cameron, A C and Trivedi, PK (2010), Microeconometrics using Stata, Texas: StataCorp LP.
  • Chedzoy, O B (1997), “The effect of umpiring errors in cricket”, The Statistician, 46, 529-540.
  • Crowe, S M and Middeldorp, J (1996), “A Comparison of Leg Before Wicket Rates Between Australians and Their Visiting Teams for Test Cricket Series Played in Australia, 1977-94”, The Statistician, 45, 255-262.
  • ESPNcricinfo (2010-12). Available from http://www.cricinfo.com (First accessed on December 5 2010).
  • Hynds, M and Smith, I (1994), “The demand for test match cricket”, Applied Economics Letters, 1, 103-106.
  • Ringrose, T J (2006), “Neutral umpires and leg before wicket decisions in test cricket”, Journal of the Royal Statistical Society: Series A (Statistics in Society), 169, 903-911.
  • Sacheti, A, Gregory-Smith, I and Paton, D (2014), “Home bias in officiating: evidence from international cricket”, Journal of the Royal Statistical Society: Series A (Statistics in Society).
  • Sumner, J and Mobley, M (1981), “Are cricket umpires biased?” New Scientist, 91, 29-31.

Management and productivity: An interview with Nicholas Bloom - The Dismal Science

Jan 05, 2015

The following comes from an interview with Stanford University economist Nicholas Bloom available online at "Econ Focus", the economics magazine of the Federal Reserve Bank of Richmond.

EF: Another branch of your research has focused on how management practices affect firm and country productivity. Why do you think management practices are so important?

Bloom: My personal interest was formed by working at McKinsey, the management consulting firm. I was there for about a year and a half, working in the London office for industrial and retail clients.

There's also a lot of suggestive evidence that management matters. For example, Lucia Foster, John Haltiwanger, and Chad Syverson found using census data that there are enormous differences in performance across firms, even within very narrow industry classifications. In the United Kingdom years ago, there was this line of biscuit factories — cookie factories, to Americans — that were owned by the same company in different countries. Their productivity variation was enormous, with these differences being attributed to variations in management. If you look at key macro papers like Robert Lucas' 1978 "span of control" model or Marc Melitz's 2003 Econometrica paper, they also talk about productivity differences, often linking this with management.

Economists have, in fact, long argued that management matters. Francis Walker, a founder and the first president of the American Economic Association, ran the 1870 U.S. census and then wrote an article in the first year of the Quarterly Journal of Economics, "The Source of Business Profits." He argued that management was the biggest driver of the huge differences in business performance that he observed across literally thousands of firms.

Almost 150 years later, work looking at manufacturing plants shows a massive variation in business performance; the 90th percentile plant now has twice the total factor productivity of the 10th percentile plant. Similarly, there are massive spreads across countries — for example, U.S. productivity is about five times that of India.

Despite the early attention on management by Francis Walker, the topic dropped down a bit in economics, I think because "management" became a bad word in the field. Early on I used to joke that when I turned up at seminars people would see the "M-word" in the seminar title and their view of my IQ was instantly minus 20. Then they'd hear the British accent, and I'd get 15 back. People thought management was quack doctor research — all pulp-fiction business books sold in airports.

Management matters, obviously, for economic growth — if we could rapidly improve management practices, we would quickly end the current growth slowdown. It also matters for public services. For example, schools that regularly evaluate their teachers, provide feedback on best practices, and use data to spot and help struggling students have dramatically better educational outcomes. Likewise, hospitals that evaluate nurses and doctors to provide feedback and training, address struggling employees, and reward high performers provide dramatically better patient care. I teach my Stanford students a case study from Virginia Mason, the famous Seattle hospital that put in place a huge lean-management overhaul and saw a dramatic improvement in health care outcomes, including lower mortality rates. So if I get sick, I definitely want to be treated at a well-managed hospital.

EF: How much of the productivity differences that you just discussed are driven by management?

Bloom: Research from the World Management Survey that Raffaella Sadun, John Van Reenen, and I developed suggests that management accounts for about 25 percent of the productivity differences between firms in the United States. This is a huge number; to give you a benchmark, IT or R&D appears to account for maybe 10 percent to 20 percent of the productivity spread based on firm and census data. So management seems more important even than technology or innovation for explaining variations in firm performance.

Coincidentally, you do the same exercise across countries and it's also about 25 percent. The share is actually higher between the United States and Europe, where it's more like a third, and it's lower between the United States and developing countries, where it's more like 10 to 15 percent.

Now, you may not be surprised to learn that there are significant productivity differences between India and the United States. But you look at somewhere like the United Kingdom, and it's amazing: Its productivity is about 75 percent of America's. The United Kingdom is a very similar country in terms of education, competition levels, and many other things. So what causes the gap? It is a real struggle to explain what it is beyond, frankly, management.

EF: What can policy do to improve management practices?

Bloom: I think policy matters a lot. We highlight five policies. One is competition. I think the key driver of America's management leadership has been its big, open, and competitive markets. If Sam Walton had been based in Italy or in India, he would have five stores by now, probably called "Sam Walton's Family Market." Each one would have been managed by one of his sons or sons-in-law. Whereas in America, Walmart now has thousands of stores, run by professional nonfamily managers. This expansion of Walmart has improved retail productivity across the country. Competition generates a lot of diversity through rapid entry and exit, and the winners get big very fast, so best practices spread rapidly in competitive, well-functioning markets.

The second policy factor is rule of law, which allows well-managed firms to expand. Having visited India for the work with Benn Eifert, Aprajit Mahajan, David McKenzie, and John Roberts, I can say this: The absence of rule of law is a killer for good management. If you take a case to court in India, it takes 10 to 15 years to come to fruition. In most developing countries, the legal system is weak; it is hard to successfully prosecute employees who steal from you or customers who do not pay their invoices, leading firms to use family members as managers and supply only narrow groups of trusted customers. This makes it very hard to be well managed — if most firms have the son or grandson of the founder running the firm, working with the same customers as 20 years ago, then it shouldn't be surprising that productivity is low. These firms know that their sons are often not the best manager, but at least they will not rampantly steal from the firms.

The third policy factor is education, which is strongly correlated with management practices. Educated and numerate employees seem to more rapidly and effectively adopt efficient management practices.

The fourth policy factor is foreign direct investment, as multinational firms help to spread management best practices around the world. Multinational firms are typically incredibly well run, and that spills over. It's even true in America, where its car industry has benefited tremendously from Honda, Toyota, Mitsubishi, and Volkswagen. When these foreign car manufacturers first came to America, they achieved far higher levels of productivity than domestic U.S. firms, which forced the American car manufacturers to improve to survive.

The fifth factor is labor regulation, which allows firms to adopt strong management practices unimpeded by government. In places like France, you can’t fire underperformers, and as a result, it's very hard to enforce proper management.

EF: Do you expect America's productivity advantage to continue?

Bloom: On the above five criteria, the United States scores an "A" on four of them; the exception is education, where we score a "C." The United States has a weak school system and poor education standards compared to a number of our competitors. For example, based on OECD Pisa [Programme for International Student Assessment] scores, the U.S. educational system ranks in the mid-20s on math, below many European and East Asian countries. So improving educational standards is the most obvious way to improve management and ultimately growth, because poor education makes it harder to manage our firms. Fixing U.S. education will take more funding. But most importantly, it will require dismantling the cobweb of restrictions that teachers unions and politicians have put on schools, like tenure and seniority-based pay.

If you fix these five drivers of management, you're 95 percent of the way there. Most other factors seem of secondary importance compared to the big five of competition, rule of law, education, foreign direct investment, and regulations.

EF: Management practices can be viewed as "soft" technologies, compared to so-called "hard" technologies such as information technology. Do you see anything special about the invention and adoption of these "soft" technologies relative to "hard" technologies?

Bloom: The only distinction is that hard technologies, like my Apple iPhone, are protected by patents, whereas process innovations are protected by secrecy.

The late Zvi Griliches, a famous Harvard economist, broke it down into two groups: process and product innovations. Most people who think of innovation think of product innovations like the shiny new iPhone or new drugs. But actually a lot of it is process innovations, which are largely management practices.

Good examples would be Frederick Winslow Taylor and scientific management 100 years ago, or Alfred Sloan, who turned a struggling General Motors into the world's biggest company. Sloan pushed power and decision-making down to lower-level individuals and gave them incentives — called the M-form firm. It seems perfectly standard now, but back then firms were very hierarchical, almost Soviet-style. And then there was modern human resources from the 1960s onward — the idea that you want to measure people, promote them, and give them rewards. Most recently, we have had "lean manufacturing," pioneered by Toyota from the 1990s onward, which is now spreading to health care and retail. This focused on data collection and continuous improvement.

These have been major milestones in management technologies, and they've changed the way people have thought. They were clearly identified innovations, and I don't think there's a single patent among them. These management innovations are a big deal, and they spread right across the economy.

In fact, there's a management technology frontier that's continuously moving forward, and the United States is pretty much at the front with firms like Walmart, GE, McDonald's, and Starbucks. And then behind the frontier there are a bunch of laggards with inferior management practices. In America, these are typically smaller, family-run firms.

EF: What are the key challenges for future research on management?

Bloom: One challenge is measurement. We want to improve our measurement of management, which is narrow and noisy.

The second challenge is identification and quantification: finding out what causes what and its magnitude. For example, can we quantify the causal impact of better rule of law on management? I get asked by institutions like the World Bank and national governments which policies have the most impact on management practices and what the size of that impact would be. All I can do is give the five-factor list I've relayed here; it's very hard to give any ordering, and there are definitely no dollar signs on them. I would love to be able to say that spending $100 million on a modern court system will deliver $X million in extra output per year.

One way to get around this — the way macroeconomists got around it — is to gather great data going back 50 years and then exploit random shocks to isolate causation. This is what we are trying to do with the World Management Survey. The other way is a bit more deliberate: to run field experiments by talking with specific firms across countries.
In the New Zealand context it's interesting to note Bloom's comments on the effects of foreign direct investment. Multinational firms help improve the standard of management in the host country and thus help improve productivity. Add this to the point that Eric Crampton noted about foreign firms paying their employees more, and foreign investment in New Zealand looks better than the anti-FDI crowd would have us believe. Bloom also highlights the advantages of a flexible labour market. Given the small size of the internal New Zealand market, Bloom's point about the importance of competition to good management practices emphasises the need to keep New Zealand open to trade, so that local producers face as much competition as possible from foreign firms.

Four principles for an effective state - The Dismal Science

Jan 05, 2015

The following four principles come from the 2014 Nobel prize winner in economics, Jean Tirole. The original column was posted at VoxEU.org on 16 July 2007 (reposted 13 October 2014). Tirole argues that for the French state to meet the expectations of its citizens, it will have to become more effective. This requires, in Tirole's view, a four-pronged approach: restructuring, competition, evaluation and accountability.

Restructuring
Many countries have undertaken fundamental governmental reforms based on a consensus between political parties and unions. In the 1990s, the Swedish Social Democratic government made large cuts in the civil service. Ministers, who formulate overall strategy and make decisions on resource allocation, have to rely on a small number of civil servants. Operational details must therefore be delegated to a large number of independent agencies, each of which can recruit and remunerate its employees as it chooses. These independent agencies operate under strict budgetary limits that ensure the sustained delivery of public services.

Around the same time, Canada cut government expenditure by 18.9% without social turmoil – and without greatly reducing health, justice, or housing programmes. They did this while maintaining tax levies, so the result was a reduced public deficit and falling public debt. Spending that could not be clearly justified in terms of the resulting service to the public was pruned. Subsidies for entrepreneurial projects and privatisation facilitated the elimination of one in six positions in the civil service. Indeed the sort of government reorganisation undertaken in Canada could only be dreamed of in France with its often nightmarish collection of laws and fiscal regulations. The Canadians have a single service for the calculation and collection of taxes and a one-stop-shop for government-business relations.
Competition
Contrary to common beliefs in France, head-on competition can produce high quality public services. In telecommunications, most countries, including France, have put a universal service obligation fund in place, which is compatible with competition between providers. It protects the smallest firms while ensuring that services are available in all regions of the country or to poor consumers.

When it comes to education, several countries (Belgium, the UK, Sweden) have tried voucher systems that give everyone access to education but create competition among schools for students. Such a system must be accompanied by clear and openly available information on schools so parents can make informed choices and “insider-ism” can be avoided (something that arose from the competition among the tracks in the French education system).

Competition can also be created via standardisation. In the healthcare realm, using more systematic comparisons between hospitals, or between the private and public sectors could help control costs. Sometimes the cost of treatment for a given disease varies by a factor of 2.5 with the variation having nothing to do with patient selection.
Evaluation
Every action of the State must be subject to a double independent evaluation. The first should be before the action: Is public intervention necessary? What are the costs and benefits? The second is after. Did it work? Was it cost effective? On this point, it would be necessary to require that the audit recommendations (for example, those of the Audit Court) be either followed according to a strict schedule, or rejected with a convincing justification.
Accountability
The 2001 Law (LOLF), adopted on the basis of a left-right consensus, is a small revolution in a country accustomed to the logic of budgetary processes. Embracing the logic of effectiveness, the law aims to transform public sector managers into true owners where their obligation to produce results goes hand in hand with the freedom to manage. Putting this principle into practice is certainly difficult. First of all, the objectives need to be clear and easily verifiable. Then, “accountability” must be introduced. For that, the objectives can’t be collective (as the failure of control of health expenses has shown), but must be the subject of rewards or sanctions. Lastly, one should be wary of the pernicious effects of “multi-tasking”. Incentives that are related to an easily measured objective (for example, the cost per student for a university, which can be easily reduced by teaching large numbers of students in large lecture halls) can cause one to ignore equally important objectives that one has neglected to measure (such as the quality of teaching or research). In other words, to construct good incentives, one has to evaluate actions comprehensively. That way, it’s clear that giving regulated enterprises more responsibility should go hand in hand with stricter safety and quality controls. The need for such controls is clear from the experience of British telecoms in 1984 and more recently, of British railways.
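Tirole's multi-tasking warning is essentially the Holmström-Milgrom point, and it can be put compactly. A stylised sketch, in my notation rather than Tirole's: a manager splits effort between a measured task \(e_1\) (say, cost per student) and an unmeasured one \(e_2\) (teaching quality), and is paid a bonus rate \(\beta\) on the measured output only:

```latex
% Stylised multitask moral hazard (after Holmström and Milgrom, 1991)
\max_{e_1,\, e_2}\; \beta e_1 - C(e_1, e_2),
\qquad
\frac{\partial^2 C}{\partial e_1\, \partial e_2} > 0
\quad (\text{the tasks compete for the manager's time}).
```

With no reward attached to \(e_2\), the manager sets it where its marginal private cost is zero, and because the tasks are substitutes, any rise in \(\beta\) that raises \(e_1\) pushes \(e_2\) down further. The optimal bonus on the measured objective is therefore muted whenever the unmeasured objective matters, which is exactly the comprehensive-evaluation point Tirole is making.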
The French state does have something of a bad history when it comes to effectiveness. For example, French SOEs have not always been run well. In 1997 the Economist magazine ('Banking's Biggest Disaster', vol. 344, issue 8024, July 5: 69-71) noted the near-bankruptcy of the then state-owned bank Credit Lyonnais. The magazine pointed out that the then French finance minister Dominique Strauss-Kahn had to admit that the bank had probably lost around Ffr100 billion (around US$17 billion). The bank had to be bailed out three times in the 1990s. The total cost to the French taxpayer of the whole debacle has been estimated at between US$20 and US$30 billion. Improving on such a record shouldn't be too difficult.

EconTalk from last week - The Dismal Science

Dec 24, 2014

Gary Marcus of New York University talks with EconTalk host Russ Roberts about the future of artificial intelligence (AI). While Marcus is concerned about how advances in AI might hurt human flourishing, he argues that truly transformative smart machin...

EconTalk for many, many weeks - The Dismal Science

Dec 13, 2014

Thomas Piketty of the Paris School of Economics and author of Capital in the Twenty-First Century talks to Econtalk host Russ Roberts about the book. The conversation covers some of the key empirical findings of the book along with a discussion of thei...

2014 Nobel Prize in economics: Jean Tirole - The Dismal Science

Oct 14, 2014

A number of people seem more excited by this award than me, see for example, A Fine Theorem and Tyler Cowen at Marginal Revolution.

Cowen does mention Tirole's survey article with Holmstrom on the theory of the firm which is well worth reading even if a few years old now. In a survey paper of mine on the theory of privatisation I say this about a paper by Laffont and Tirole (Jean-Jacques Laffont and Jean Tirole (1991). ‘Privatization and Incentives’, Journal of Law, Economics, & Organization, 7 (Special Issue) [Papers from the Conference on the New Science of Organization, January 1991]: 84-105.):

In the Laffont and Tirole (1991) model a firm is assumed to be producing a public good with a technology that requires investment by the firm’s manager. In the case of a public firm this investment can be diverted by the government to serve social ends. For example, the return on investment in a network could be reduced by the government if it were to allow ex post access to the general population. Such an action may be socially optimal but would expropriate part of the firm’s investment. A rational expectation of such an expropriation would reduce the incentives of a public firm’s manager to make the required investment. For a private firm, the manager’s incentives to invest are better given that both the firm’s owners and the manager are interested in profit maximisation. The cost of private ownership is that the firm must deal with two masters who have conflicting objectives: shareholders wish to maximise profits while the government pursues economic efficiency. Both groups have incomplete knowledge about the firm’s cost structure and have to offer incentive schemes to induce the manager to act in accordance with their interests. The game here is thus a multi-principal game, which dilutes incentives and yields low-powered managerial incentive schemes and low managerial rents: each principal fails to internalise the effects of contracting on the other principal and provides socially too few incentives to the firm’s management. The added incentive for the managers of a private firm to invest is countered by the low-powered incentive schemes those managers face. The net effect of these two insights is ambiguous with regard to the relative cost efficiency of public and private firms: Laffont and Tirole cannot identify conditions under which privatisation is better than state ownership.
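The incentive-dilution step in the middle of that summary has a well-known closed form in the linear-contract benchmark. The following is the common-agency result in the style of Dixit, offered as an illustration of the dilution logic rather than as Laffont and Tirole's actual model: \(n\) principals contract on the same output \(x = e + \varepsilon\), \(\varepsilon \sim N(0, \sigma^2)\), facing a CARA manager with risk aversion \(r\) and effort cost \(e^2/2\); principal \(i\) offers an incentive slope \(\beta_i\) and the manager responds to the sum \(\bar{\beta} = \sum_i \beta_i\):

```latex
\text{Nash equilibrium across principals:}\quad
\bar{\beta} = \frac{B}{1 + n\, r\sigma^2},
\qquad
\text{single unified principal:}\quad
\bar{\beta} = \frac{B}{1 + r\sigma^2}
```

where \(B\) is the total marginal benefit of managerial effort. With \(n > 1\) principals the equilibrium slope is strictly lower: each principal fails to internalise the effect of its contract on the other principals and so supplies too few incentives, which is the low-powered-schemes result described above.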
Cowen goes on to say,
It’s an excellent and well-deserved pick. One point is that some other economists, such as Oliver Hart and Bengt Holmstrom, may be disappointed they were not joint picks, this would have been the time to give them the prize too, so it seems their chances have gone down.
Hart and Holmstrom's chances may now have gone up, since a prize could be given for their separate and joint work on different aspects of the theory of the firm.