Paul Walker

Dr Paul Walker is an economist at the University of Canterbury. He has expertise in microeconomics, institutional economics and industrial organization. He blogs for The Dismal Science.

Is history more or less bunk? 2 - The Dismal Science

Apr 28, 2015

Further to my previous post I have now found a copy of the paper The Economist was talking about, “Lynchings, Labour and Cotton in the US South” (pdf) by Cornelius Christian.

In the paper Christian notes that in the short term the advantage of lynchings to whites came via the labour market. The evidence presented by Christian demonstrates that lynchings prevented black workers from fully participating in the labour market, to the advantage of white workers. Lynchings caused blacks to migrate away, not too surprisingly, lowering labour supply and increasing wages for white labourers.

Using the fact that world cotton prices are exogenous from a single county’s perspective, I find that cotton price shocks strongly predict lynchings. More precisely, a one standard deviation decrease in the world cotton price results in a 0.095 to 0.16 standard deviation increase in lynchings within a cotton-producing county. The findings are robust to the inclusion of controls, and to the use of tests with white-on-white lynchings and California lynchings. Cotton price shocks also do not predict legal executions of blacks, suggesting motives for lynching were different. These effects are more pronounced in counties that had railroads in 1890, suggesting that links to world markets and greater local labour demand had an impact on lynchings. Disenfranchisement attempts such as the poll tax and literacy tests do not strengthen this effect, suggesting the substitutability of informal violence with formal institutions as a way to control workers. All this is indicative that greater numbers of lynchings served, at least in part, as a way of controlling black workers.

Using these observations as a guide, I claim that lynchings had labour market effects that benefitted white workers. During years of low cotton prices, wages are low. When whites lynch blacks, this causes other blacks to migrate out of a county, thus reducing labour supply and increasing wages. I show in my data that lynchings predict greater black out-migration, and higher state-level agricultural wages. A one standard deviation increase in lynchings within a county leads to 6.5 to 8 % more black out-migration, and a 1.2 % increase in state-level wages.
Given these short-run effects, what are the long-run outcomes?
I then turn to the long-term effects of lynchings, starting with the Civil Rights era. Although lynchings became very rare in the 1930s, discrimination against blacks continued. I focus on the 1964 Mississippi Summer project, a campaign to register African Americans to vote - the campaign’s organisers encountered violence and discrimination throughout the summer. I show that Mississippi counties with more 1964 violence also had more lynchings in the past. Using data from the 2008-2012 American Community Survey, I also show that lynchings in the past predict white-black wage and income gaps today. This is robust to the inclusion of various controls and state fixed effects. Furthermore, I test the sensitivity of the coefficient estimates to control variables using Altonji, Elder, and Taber (2005) statistics. My results are shown to be robust to these tests, strongly suggesting that labour market discrimination has persisted from lynchings to the present day.
and
The modern-day and Mississippi Summer results suggest that the effects of lynchings persist up until the present day. This is consistent with a mechanism in which discrimination continues to affect African Americans. Such prejudice starts with lynchings of African Americans, and subsequently manifests in violence when Civil Rights community organisers went to Mississippi in 1964. It continues to affect contemporary black incomes, relative to their white neighbours.
If labour market discrimination today, driven by prejudice from the past, is the cause of the income gap between blacks and whites, then there are a couple of questions to ask. First, is there some social mechanism at work that perpetuates the discrimination? And second, what is preventing Becker-type effects from reducing the gap? Gary Becker pointed out many years ago that a competitive labour market provides strong incentives to keep our prejudices out of our business decisions. The force of competition will make even the most racist/sexist/homophobic employer see that by hiring only heterosexual men of Anglo-Saxon descent, they limit the talent pool accessible to them, which is not good business. What market imperfections are preventing such competitive forces from working in the South?
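Becker's argument can be made concrete with a toy calculation. The sketch below is my own illustration, not taken from Christian's paper or from Becker; all the numbers (productivity, wages, the discrimination coefficient) are assumptions chosen only to show the mechanism.

```python
# An illustrative sketch of Becker's taste-based discrimination argument.
# A prejudiced employer with discrimination coefficient d acts as if an
# equally productive black worker's wage were w * (1 + d), and so passes
# up profitable hires that an unprejudiced competitor happily makes.

def profit(n_workers, productivity, wage):
    """Profit from hiring n identical workers at a given wage."""
    return n_workers * (productivity - wage)

productivity = 100.0   # value each worker produces (assumed)
wage_black = 80.0      # market wage of black workers (assumed)
wage_white = 95.0      # market wage of white workers (assumed)
hires = 10             # workers each firm wants to hire

# The unprejudiced firm simply hires the cheapest equally productive labour.
unprejudiced_profit = profit(hires, productivity, wage_black)

# The prejudiced firm (d = 0.3) perceives the black wage as 80 * 1.3 = 104,
# above productivity, so it hires white workers instead.
d = 0.3
perceived_wage = wage_black * (1 + d)
prejudiced_profit = (profit(hires, productivity, wage_white)
                     if perceived_wage > productivity
                     else profit(hires, productivity, wage_black))

print(unprejudiced_profit)  # 200.0
print(prejudiced_profit)    # 50.0 -- prejudice costs the firm profit
```

In a competitive market the unprejudiced firm's cost advantage should let it expand, bidding up black wages and eroding the gap, which is exactly why the persistence of the gap points to some imperfection blocking this process.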

An interesting paper which shows history is not bunk and has relevance even today.

How long do firms live? - The Dismal Science

Apr 23, 2015

Many commentators, even today, argue that the economy and the nation are controlled by powerful, large, very long-lived corporations. John Kenneth Galbraith is perhaps the most (in)famous economist who argued along these lines. He argued that in the industrial sectors of the economy, which are composed of the largest corporations - think S&P 500 companies - the principal function of market relations is not to constrain the power of the corporate behemoths, but to serve as an instrument for the implementation of their power. Moreover, the power of these corporations extends into commercial culture and politics, allowing them to exercise considerable influence upon popular social attitudes and value judgements. That this power is exercised in the shortsighted interest of expanding commodity production and the status of the few - the 1% - is, in Galbraith's view, both inconsistent with democracy and a barrier to achieving the quality of life that the "new industrial state" with its affluence could provide to the many. Galbraith argued that we find ourselves living in a structured state controlled by these large and all-powerful corporations. Control over demand and consumers is exercised via the use of advertising, which creates a never-ending consumer "need" for products where no such "need" had existed before. In addition, as Princeton University Press said in its advertising for a new edition of Galbraith's "The New Industrial State",

The goal of these companies is not the betterment of society, but immortality through an uninterrupted stream of earnings.
I have always thought that an implication of these ideas is that large firms, e.g. those in the S&P 500, would be very long-lived. After all, given the amount of control these firms apparently have over their markets and the economy at large, it's hard to see how they could ever go bankrupt or be taken over. They are, after all, able to ensure "immortality through an uninterrupted stream of earnings." Thus these firms would have a long life.

Given this, I was interested to see this comment by Bourree Lam at The Atlantic:
[...] Richard Foster, a lecturer at the Yale School of Management, has found that the average lifespan of an S&P company dropped from 67 years in the 1920s to 15 years today. Foster also found that on average an S&P company is now being replaced every two weeks, and estimates that 75 percent of the S&P 500 firms will be replaced by new firms by 2027.
I just don't see how a 15-year (or even a 67-year) lifespan is in any way consistent with the story that Galbraith tried to tell. Such a short lifetime looks more like support for a Schumpeter-like "creative destruction" interpretation of the life cycle of business firms.

Just to show how short a lifespan of 15, or even 67, years is, note:
Cho and Ahn (2009: 160-1) state “The oldest company in the world is known to be a Japanese construction company, Kongo Gumi, which was founded in 578 and thus existed for 1431 years. [However a footnote at this point states “Kongo Gumi went bankrupt in 2006 and was acquired by Takamatsu group, thus depending on the definition of corporate death it may be excluded from a long-lived company” According to Wikipedia (http://en.wikipedia.org/wiki/Kong_Gumi), “As of December 2006, Kong Gumi continues to operate as a wholly owned subsidiary of Takamatsu”.] There are also several other companies which are reported to have existed over 1000 years such as Houshi Ryokan (Japan, Innkeeping, founded in 717), Stiftskeller St. Peter (Austria, restaurant, founded in 803), Chateau de Goulaine (France, vineyard, founded in 1000) and Fonderia Pontificia Marinelli (Italy, bell foundry, founded in 1000)”.
Ref.:
  • Cho, Dong-Sung and Se-Yeon Ahn (2009). ‘Exploring the Characteristics of the Founder and CEO Succession as Causes of Corporate Longevity: Findings from Korean Long-Lived Companies’, Journal of International Business and Economy, 10(2) Fall: 157-87.

Should we be spooked by deflation? - The Dismal Science

Apr 19, 2015

Concerns about deflation – falling prices of goods and services – have loomed large in many recent policy discussions. In such discussions deflation is seen as always and everywhere a bad phenomenon. But as I have discussed a number of times before, see for example here, here and here, you need to draw a distinction between good and bad deflation. The basic point is that we do indeed have two forms of deflation: the bad, driven by demand shrinking, and the good, caused by supply expanding. The good kind of deflation is the result of increases in productivity. Research and development means new technology, efficiency gains, cost-cutting, price-cutting and, yes, deflation. Productivity gains mean that businesses can afford to sell their products for less since it costs less to make them. The bad kind usually follows a collapse of aggregate demand. There is a severe drop in spending: producers have to cut prices to find buyers. This has the effect of causing recession, high unemployment and widening financial stress. This is the 1930s-type deflation that people fear.

The deflation debate is shaped by the deep-seated view that deflation, regardless of context, is an economic pathology that stands in the way of any sustainable and strong expansion. This view is largely based on the experience of the Great Depression. But a new VoxEU.org column, Should we be spooked by deflation? A look at the historical record, by Claudio Borio, Magdalena Erdem, Andrew Filardo and Boris Hofmann, argues that it is misleading to draw inferences about the costs of deflation from the Great Depression, as if it were the archetypal example.

Borio, Erdem, Filardo and Hofmann write,

The evidence from our historical analysis raises questions about the prevailing view that goods and services price deflations, even if persistent, are always pernicious. It suggests that asset price deflations, and particularly house price deflations in the postwar era, have been more damaging. And it cautions against presuming that the interaction between debt and goods and services price deflation, as opposed to debt’s interaction with property price deflations, has played a significant role in past episodes of economic weakness.

Inevitably, our results come with significant caveats. The data set could be further improved. We have focused on only a few drivers of output costs. We have only a few episodes of persistent deflation in the postwar period. And present debt levels are at, or close to, historical highs in relation to GDP. This should caution against drawing sweeping conclusions or firm inferences about the future.

Even so, the analysis does suggest a number of considerations relevant for policy.
  • First, it is misleading to draw inferences about the costs of deflation from the Great Depression, as if it was the archetypal example.
The episode was an outlier in terms of output losses; in addition, the scale of those losses may have had less to do with the fall in the price level per se than with other factors, including the sharp fall in asset prices and associated banking distress.
  • Second, and more generally, when calibrating a policy response to deflation, it is critical to understand the driving factors and, as always, the effectiveness of the tools at the authorities’ disposal.
This can help to better identify the benefits and risks involved.
  • Finally, there is a case for policymakers to pay closer attention than hitherto to the financial cycle – that is, to booms and busts in asset prices, especially property prices, alongside private sector credit [...].
So deflation is not a good reason for running round screaming that the sky is falling, as many commentators, journalists and politicians seem to want to do. Reality is more complex and subtle. The VoxEU.org column finds a link between output growth and asset price deflations, particularly postwar property price deflations; it finds no evidence that high debt has so far raised the cost of goods and services price deflations, the so-called debt deflations; and it finds that the most damaging interaction appears to be between property price deflations and private debt.

Modelling science as a contribution good 2 - The Dismal Science

Apr 18, 2015

Continuing on with the Kealey and Ricketts paper, Modelling science as a contribution good we see that in section 7.2 Kealey and Ricketts discuss "Science and the firm". They write,

The contribution good model requires that scientists are able to gain financial rewards from the common pool of science. The institutional mechanisms that enable these rewards to be claimed are not modelled explicitly but are simply assumed to exist. The contribution good model of science has direct relevance, therefore, for research programmes in business structure and organisation. In modern Institutional Economics the firm is seen (i) as a substitute for relatively high costs of transacting in the market, after Coase (1937); (ii) as a means of coping with uninsurable uncertainty and continual change, after Knight (1921); and (iii) as a vehicle for instigating technological innovation, after Schumpeter (1934, 1943). The conversion of scientific knowledge into new tradable goods and services confronts obvious transactional difficulties between scientists and technologists, technologists and entrepreneurs, and entrepreneurs and financiers. Cooperation between these elements entails high costs of transacting and is likely to involve the formation of firms with internal labour markets and specially designed incentive arrangements to mitigate them. Hansmann’s (1996) proposition that ownership rights tend to be assigned to the group that faces the highest transactions costs might suggest, for example, the development of scientist-owned firms or firms with significant control rights in the hands of the knowledge creators and users.
There are a number of reasons for thinking that scientist-owned firms could develop.

Within the property rights approach to the firm (also often referred to as the incomplete contracts approach), Brynjolfsson (1994) and Rabin (1993) show that there are adverse selection and moral hazard reasons why a scientist-entrepreneur may have to form their own firm to develop their ideas. Rabin (1993) shows that adverse selection problems can be such that, in some situations, an informed party (the scientist-entrepreneur in this case) has to take over or form a firm to show that their information is indeed useful. For Rabin, an informed party has information about how to make a firm more productive but can't reveal the information to the owners of a current firm. If the information is revealed, the current firm can produce using it without any payment to the informed party. If the information is not revealed, why should the firm believe the information is in fact useful? Within the Rabin framework it is suggested that parties are more likely to trade through markets when informed parties are also superior providers of productive services related to their information; if, on the other hand, information is a firm's only competitive advantage, the informed party is likely to obtain control over assets, possibly by buying the firms that currently own those assets or by setting up their own firm.

The Brynjolfsson (1994) model, on the other hand, works within a moral hazard framework. Brynjolfsson considers a situation where a scientist-entrepreneur has some expertise needed to run a firm but no value can be created without both the knowledge asset of the scientist-entrepreneur and the physical assets of a firm. He assumes that no comprehensive contract can be written between the entrepreneur and the firm. If the scientist-entrepreneur does not own the firm and he makes an investment in effort and creates value, he can be subject to hold-up by the other party since he needs the firm's physical assets. If the scientist-entrepreneur owns the firm then clearly the hold-up problem ceases to exist. The most obvious interpretation of the Brynjolfsson model is as a model of a labour-owned firm (a scientist-owned firm in this case). Brynjolfsson argues that it is optimal to give the entrepreneur ownership of the physical assets of the firm since he has information that is essential to its productivity. This result is an application of Hart and Moore's proposition that an agent who is 'indispensable' to an asset should own it (Hart and Moore 1990). Here, firms are owned by the indispensable human capital (a scientist) or, as is more usual, by a small section of the human capital, e.g. a partnership between a number of scientists.
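The hold-up logic can be seen in a stylised calculation. The sketch below is loosely in the spirit of Brynjolfsson (1994) and Hart and Moore (1990), but all the numbers (effort costs, values, the 50/50 bargaining split) are assumptions of mine for illustration, not taken from either paper.

```python
# The scientist-entrepreneur chooses high or low effort at a private cost;
# value is created only when the knowledge meets the firm's physical assets.
# Without ownership, ex-post 50/50 bargaining over the surplus means the
# scientist captures only half the return on effort, so high effort may not
# pay; ownership removes the hold-up and restores the efficient choice.

COST = {"low": 0.0, "high": 35.0}     # private cost of effort (assumed)
VALUE = {"low": 40.0, "high": 100.0}  # value created with the assets (assumed)

def scientist_payoff(effort, owns_firm):
    share = 1.0 if owns_firm else 0.5  # hold-up: 50/50 split if not the owner
    return share * VALUE[effort] - COST[effort]

def chosen_effort(owns_firm):
    # The scientist picks the effort level that maximises their own payoff.
    return max(COST, key=lambda e: scientist_payoff(e, owns_firm))

print(chosen_effort(owns_firm=False))  # low:  0.5*100 - 35 = 15 < 0.5*40 = 20
print(chosen_effort(owns_firm=True))   # high: 100 - 35 = 65 > 40
```

High effort is efficient here (a net surplus of 65 versus 40), yet the non-owning scientist chooses low effort, which is the underinvestment result that makes scientist ownership of the physical assets attractive.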

The above arguments support the Kealey and Ricketts notion that scientist-owned firms are a viable form of governance that allows scientists to capture the returns from their work. However, we should ask whether there are limits to such arguments. Walker (forthcoming) suggests there may be. In this model the reference point approach to contracts (Hart and Moore 2008) is applied to the modelling of a human-capital-based firm. First, a model of firm scope is offered which argues that the organisation of a human-capital-based firm depends on the "types" of human capital involved (a crude interpretation of a "type" in this context could be the kind of scientist involved in the project, e.g. chemist, microbiologist or maybe both). Having a homogeneous group of human capital leads to a different governance structure than that of a firm which involves a heterogeneous group of human capital. For a homogeneous group, say just chemists, a labour-owned (scientist-owned) firm is viable, but for a heterogeneous group, say chemists, microbiologists and physicists, ownership by the owners of the firm's non-human capital may be optimal (that is, an investor-owned firm may develop). This is because the more heterogeneous the human capital, the more likely it is that some groups will be "aggrieved" (a party is aggrieved when they do not receive the payoff they think they should) and will therefore "shade" on their performance (i.e. they put in a low level rather than a high level of performance), thereby creating deadweight losses. A firm which involves heterogeneous human capital will be more unstable due to the greater amount of aggrievement/shading and will therefore require some "glue", in the form of non-human capital of some kind, to keep the human capital together and thus keep the firm viable. Given the importance of this glue to the firm, ownership of the firm by the owner of the non-human capital is likely.

Thus while it is possible that Kealey and Ricketts are right that scientist-owned firms will develop, such a governance arrangement is not the only possibility. What is likely is that we would see what we see today: a range of different governance structures being utilised depending on the exact circumstances.

Refs.:
  • Brynjolfsson, E. (1994). Information assets, technology, and organization. Management Science, 40, 12, pp. 1645–62.
  • Hart, O.D. and Moore, J. (1990) Property rights and the nature of the firm. Journal of Political Economy 98(6): 1119–1158.
  • Hart, O.D. and Moore, J. (2008). Contracts as reference points, Quarterly Journal of Economics, 123(1), 1–48.
  • Rabin, M. (1993). Information and the control of productive assets, Journal of Law, Economics, and Organization, 9(1), 51–76.
  • Walker, P. (forthcoming). Simple Models of a Human-Capital-Based Firm: a Reference Point Approach, Journal of the Knowledge Economy.

Modelling science as a contribution good - The Dismal Science

Apr 18, 2015

is the title of a recent paper by Terence Kealey and Martin Ricketts in the journal Research Policy (Volume 43, Issue 6, July 2014, Pages 1014–1024).

The paper makes a contribution to "the new economics of science" in that it argues that science is not a pure public good, as is often believed, but is, rather, a contribution good. Pure public goods are both non-excludable and non-rival. A contribution good, in contrast, is like a club good in that it is non-rivalrous but at least partly excludable. The excludability is due to the fact that not everyone is a member of the "club". To be a member of the "club" you have to be able to understand the science at issue. Also, consumption is tied to contribution. If you want to be able to make use of the science you need to have mastered the underlying material, which normally means you have to be trained as a scientist - you are a member of the "club". This in turn means you will be contributing to the subject.

The important problem here is not, as is the case for public goods, that of free riding, but rather that of creating a critical mass of scientists. The club must be of a size large enough to generate both private and social gains.

The abstract reads

The non-rivalness of scientific knowledge has traditionally underpinned its status as a public good. In contrast we model science as a contribution game in which spillovers differentially benefit contributors over non-contributors. This turns the game of science from a prisoner's dilemma into a game of ‘pure coordination’, and from a ‘public good’ into a ‘contribution good’. It redirects attention from the ‘free riding’ problem to the ‘critical mass’ problem. The ‘contribution good’ specification suggests several areas for further research in the new economics of science and provides a modified analytical framework for approaching public policy.
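The abstract's central move, from a prisoner's dilemma to a coordination game, can be illustrated with a toy two-player calculation. This sketch is my own construction, not the authors' formal model; the payoff numbers are assumptions chosen only so that contributing is jointly worthwhile but individually unprofitable when spillovers are free.

```python
# Two scientists each choose whether to contribute research at cost c;
# each contribution is worth b to anyone who can use it (with b < c < 2b).
# If spillovers accrue to everyone, free riding is the best reply even to a
# contributor (prisoner's dilemma). If only contributors can tap the pool,
# contributing is the best reply to a contributor (pure coordination), and
# the problem becomes assembling a critical mass of contributors.

b, c = 3.0, 4.0  # benefit per contribution, cost of contributing (assumed)

def public_good(me, other):
    # Pure public good: everyone enjoys all contributions.
    return b * (me + other) - c * me

def contribution_good(me, other):
    # Contribution good: non-contributors are excluded from the pool.
    return b * (me + other) - c * me if me else 0.0

# Best response when the other scientist contributes (other = 1):
best_reply = {g.__name__: (1 if g(1, 1) > g(0, 1) else 0)
              for g in (public_good, contribution_good)}
print(best_reply)  # {'public_good': 0, 'contribution_good': 1}
```

With excludable spillovers, mutual contribution is an equilibrium, which is why the policy problem shifts from policing free riders to seeding enough trained scientists to get the club started.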

The exchange rate is just a price - The Dismal Science

Apr 17, 2015

How often must this be said?

Oliver Hartwich writes in the latest New Zealand Initiative Insights (Insights 12: 10 April 2015 ),

The Reserve Bank of Australia’s surprise decision not to cut interest rates only postponed the expected “parity party” between the Kiwi and the Aussie dollars. The way things are going, it is a matter of time until both currencies are of equal value.

The currency development leaves politicians and commentators divided. On Wednesday, The New Zealand Herald was jubilant (“Transtasman parity worth a celebration”) whereas the Waikato Times played the party-pooper (“Dollar parity bad news”).

Unsurprisingly, Prime Minister John Key claimed the strong Kiwi as an indication of a strong economy while his counterpart, Labour leader Andrew Little warned of negative side effects of our strong dollar.
But why are they talking about parity at all? The exchange rate is just a price like any other price in the economy. The exchange rate being talked about in this case is just the price of the Australian dollar, which, given it is floating, will go up and down when the demand for and supply of the two currencies change. Just like every other good in the economy. If there are changes in the supply and/or demand for bread, the price of bread changes, but we don't see stupid comments by politicians and newspaper editors about it. Why not? If changes in one price are worthy of comment, why not changes in all prices?

At best, changes in exchange rates may act as an indicator that something is amiss in some sector of the economy. But if this is so then the proper reaction should be to identify the problem and cure it at its source, not to go on about the exchange rate. Don't shoot the messenger.

Can these commentators just get over their unjustified obsession with exchange rates? Also, why are they getting excited about the nominal exchange rate without asking questions about what is happening to the real exchange rate?

Academic freedom at the University of Chicago and Princeton - The Dismal Science

Apr 13, 2015

The following comes from the website of The National Association of Scholars (NAS) who in turn got it from the Facebook page of NAS board of advisors member Robert P. George (McCormick Professor of Jurisprudence at Princeton University).

Every now and then sanity still manages to prevail. The worrying thing is that academics at these universities find it necessary to have to make such statements at all.

At campuses across the country, traditional ideals of freedom of expression and the right to dissent have been deeply compromised or even abandoned as college and university faculties and administrators have capitulated to demands for language and even thought policing. Academic freedom, once understood to be vitally necessary to the truth-seeking mission of institutions of higher learning, has been pushed to the back of the bus in an age of "trigger warnings," "micro-aggressions," mandatory sensitivity training, and grievance politics. It was therefore refreshing that the University of Chicago, one of the academic world's most eminent and highly respected institutions, in the face of all this issued a report ringingly reaffirming the most robust conception of academic freedom. The question was whether other institutions would follow suit.

Yesterday, the Princeton faculty, led by the distinguished mathematician Sergiu Klainerman, who grew up under communist oppression in Romania and knows a thing or two about the importance of freedom of expression, formally adopted the principles of the University of Chicago report. They are now the official policy of Princeton University. I am immensely grateful to Professor Klainerman for his leadership, and I am proud of my colleagues, the vast majority of whom voted in support of his motion.

At Chicago and Princeton, at least, academic freedom lives!

Here are the principles we adopted:
Education should not be intended to make people comfortable, it is meant to make them think. Universities should be expected to provide the conditions within which hard thought, and therefore strong disagreement, independent judgment, and the questioning of stubborn assumptions, can flourish in an environment of the greatest freedom' ... Because the University is committed to free and open inquiry in all matters, it guarantees all members of the University community the broadest possible latitude to speak, write, listen, challenge, and learn. Except insofar as limitations on that freedom are necessary to the functioning of the University, the University of Chicago fully respects and supports the freedom of all members of the University community “to discuss any problem that presents itself.” Of course, the ideas of different members of the University community will often and quite naturally conflict. But it is not the proper role of the University to attempt to shield individuals from ideas and opinions they find unwelcome, disagreeable, or even deeply offensive. Although the University greatly values civility, and although all members of the University community share in the responsibility for maintaining a climate of mutual respect, concerns about civility and mutual respect can never be used as a justification for closing off discussion of ideas, however offensive or disagreeable those ideas may be to some members of our community.

The freedom to debate and discuss the merits of competing ideas does not, of course, mean that individuals may say whatever they wish, wherever they wish. The University may restrict expression that violates the law, that falsely defames a specific individual, that constitutes a genuine threat or harassment, that unjustifiably invades substantial privacy or confidentiality interests, or that is otherwise directly incompatible with the functioning of the University. In addition, the University may reasonably regulate the time, place, and manner of expression to ensure that it does not disrupt the ordinary activities of the University. But these are narrow exceptions to the general principle of freedom of expression, and it is vitally important that these exceptions never be used in a manner that is inconsistent with the University’s commitment to a completely free and open discussion of ideas. In a word, the University’s fundamental commitment is to the principle that debate or deliberation may not be suppressed because the ideas put forth are thought by some or even by most members of the University community to be offensive, unwise, immoral, or wrong-headed. It is for the individual members of the University community, not for the University as an institution, to make those judgments for themselves, and to act on those judgments not by seeking to suppress speech, but by openly and vigorously contesting the ideas that they oppose.

Indeed, fostering the ability of members of the University community to engage in such debate and deliberation in an effective and responsible manner is an essential part of the University’s educational mission. As a corollary to the University’s commitment to protect and promote free expression, members of the University community must also act in conformity with the principle of free expression. Although members of the University community are free to criticize and contest the views expressed on campus, and to criticize and contest speakers who are invited to express their views on campus, they may not obstruct or otherwise interfere with the freedom of others to express views they reject or even loathe. To this end, the University has a solemn responsibility not only to promote a lively and fearless freedom of debate and deliberation, but also to protect that freedom when others attempt to restrict it.
Now, is it time for New Zealand's universities to think about adopting such principles?

John von Neumann documentary - The Dismal Science

Apr 12, 2015

From the Mathematical Association of America comes this video of a 1966 documentary on John von Neumann.

While a mathematician and physicist, von Neumann made three fundamental contributions to economics:

The first is a 1928 paper written in German that established von Neumann as the father of game theory. The second is a 1937 paper, translated in 1945, that laid out a mathematical model of an expanding economy and raised the level of mathematical sophistication in economics considerably. The third is a book coauthored with his Princeton colleague, economist Oskar Morgenstern, titled Theory of Games and Economic Behavior, after Morgenstern convinced von Neumann that game theory could be applied to economics.


Refs.:
  • 1928 "Zur Theorie der Gesellschaftsspiele", Mathematische Annalen 100 (1): 295–320. English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, p. 42. Princeton University Press.
  • 1944 (with Oskar Morgenstern). Theory of Games and Economic Behavior. Princeton: Princeton University Press.
  • 1945–1946. “A Model of General Equilibrium.” Review of Economic Studies 13: 1–9.

Are economists the only people who think like this? - The Dismal Science

Apr 12, 2015

This is from Diane Coyle at her The Enlightened Economist blog:

I wonder what Professor Kotz would have thought of the arrangement at dinner at the Royal Economic Society conference. There were two options for each course, and staff served each one to alternating places at the table. If you preferred the other, you had to exchange. Perfectly efficient and logical – surely only an economist could have thought of it? I’m tempted to do this every time I invite people round for a meal in future, unless that would be a bit neoliberal.
One assumes that side-payments may be necessary to clear the market. But who other than economists would use such a mechanism?