SciBlogs

Interesting blog bits Paul Walker Aug 29

  1. Carlos Vargas-Silva asks Are Migrants Good for the Economy?
    Two studies about the impact of migration on the UK economy have been published which – if media reports are to be believed – appear to contradict one another. A closer reading of these reports, however, shows that in fact they come to very similar economic conclusions.

  2. Chris Dillow on Optimum Deaths
    What is the optimum number of migrant deaths? The answer is not zero.

  3. Tim Worstall points out What glories this capitalist free market thing hath wrought
    There’s nothing worse than being exploited by some running lackey pig dog of a capitalist, as Deirdre McCloskey reminds us.

  4. Joel Waldfogel on Piracy Undermining Content Creation: Loch Ness Monster or Black Swan?
    Theory and common sense dictate that piracy should threaten new product creation. If it costs money to bring new works to market, then a reduction in revenue – all else constant – should render some projects uneconomic. So compelling is this theory that the content industries share it with lawmakers at every opportunity. Robert Solow once quipped, “You see the computer age everywhere but in the productivity statistics.” So it has been with piracy and content creation: one can see a negative impact of piracy everywhere except in the evidence about content creation. Maybe until now.

  5. Ed Dolan on Universal Basic Income and Work Incentives: What Can Economic Theory Tell Us?
    Everywhere you look, it seems, people are talking about a Universal Basic Income (UBI)—a monthly cash benefit paid to every citizen that would replace the existing means-tested welfare system.

  6. Tim Harford on Monopoly is a bureaucrat’s friend but a democrat’s foe
    “It takes a heap of Harberger triangles to fill an Okun gap,” wrote James Tobin in 1977, four years before winning the Nobel Prize in economics. He meant that the big issue in economics was not battling against monopolists but preventing recessions and promoting recovery.

  7. Andrea Prat asks How can we measure media power?
    The potential for political influence is what most people think of when they talk about the power of the media. A new media power index, proposed in this column, aggregates power across all platforms and focuses not on markets but on voters. It measures not actual media influence but rather its potential. Using the index, the author finds that the four most powerful media companies in the US are television-based and the absolute value of the index is high. This indicates that most American voters receive their news from a small number of news sources, which creates the potential for large political influence.

  8. Gabriel M. Ahlfeldt, Stephen Redding, Daniel M. Sturm and Nikolaus Wolf on The economics of density: Evidence from the Berlin Wall
    Economic activity is highly unevenly distributed across space. Understanding what drives the agglomeration and dispersion is important for many economic and policy questions. This column describes a theoretical model of internal city structure incorporating agglomeration and dispersion and heterogeneity in local fundamentals. The authors use the division and reunification of Berlin as a natural experiment. Their findings show that both heterogeneity in locational fundamentals and agglomeration forces are important in shaping a city’s internal structure.

  9. Tim Worstall on Companies are the cells of the economy
    It’s often pointed out that companies are little sections of a command economy and thus, some leap to say, obviously it’s possible to have a command economy because we actually do.

Landslide costs Eric Crampton Aug 29

Christchurch Council and the government will buy out some Christchurch properties at high risk of landslide. Here’s Chris Hutching at NBR:

The government and Christchurch City Council will buy 16 Port Hills properties.
“The latest council-commissioned GNS Science reports show 37 green-zoned homes are in areas where the risk to life from mass movement (sometimes called landslide) is considered intolerable,” according to a council media statement.
An intolerable risk is defined when “the risk to life from mass movement in any one year is equal to or greater than one in 10,000.”
Geonet identified 37 at risk properties in total.

Ok. Recall that the value of a statistical life for policy purposes in New Zealand, or at least the one used by MoT for transport planning and subsequently adopted elsewhere in policy, is $3.85 million per fatality.

Let’s work out whether the policy here makes sense.

The median 4-bedroom house in Redcliffs/Sumner, where most of this kind of risk obtains, rents for $690 per week. Let’s take that as the value of housing services over and above the value of services that would be provided by a park in the same spot: the rental costs understate the value of housing services of owner-occupied properties, but parks provide some value too. Let’s be safe and call it $500/week of extra value. Over a year that comes to about $26,000; round that down to $25,000 per year. We want an annual figure because the 1/10,000 risk is annualised. Alternatively, we could take the value of the house and the lifetime risk of landslide over the house’s life.

If a house has an intolerable risk at a 1/10,000 risk of landslide death, and if that risk is sufficient for buying out the home-owner and taking that property out of housing use, then we’re willing to forego $25,000 per year in housing services to avoid a 1/10,000 risk of landslide death.

Now let’s suppose that a 4-bedroom house, our valuation basis here, has 5 people in it.

$25,000 * 10,000 = $250,000,000.

$250,000,000 / 5 = $50,000,000

The Ministry of Transport is willing to spend up to about $3.85 million on roading improvements per statistical life saved.

The government here is spending at least $50 million per (ballpark) statistical life saved.
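For concreteness, here is the same back-of-the-envelope arithmetic as a short Python sketch. The figures are the rough assumptions used above (the $25,000 of annual housing services, the 1/10,000 annual risk, and the five-occupant household), not official estimates.

```python
# Rough sketch of the implied cost per statistical life of the buyout policy,
# using the post's own assumptions.

annual_housing_services = 25_000     # $500/week above park value * 52 ~ 26,000, rounded down
annual_death_risk = 1 / 10_000       # the "intolerable" threshold, per year
occupants_per_house = 5              # assumed for a 4-bedroom house

# Taking the house out of use forgoes the housing services but avoids
# (occupants * annual risk) expected deaths per year.
expected_deaths_avoided = occupants_per_house * annual_death_risk

implied_cost_per_statistical_life = annual_housing_services / expected_deaths_avoided

vsl_transport = 3_850_000            # MoT value of a statistical life
print(f"${implied_cost_per_statistical_life:,.0f} per statistical life")              # $50,000,000
print(f"{implied_cost_per_statistical_life / vsl_transport:.0f}x the transport VSL")  # ~13x
```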

I’m not sure that this makes a lot of sense where there are other projects that, for the same total cost, could save more expected statistical lives.

Unless we think that dying in a landslide is about thirteen times worse than dying in a car accident. I’m really rather sure I’d rather die in a car accident than in a landslide. Suppose a genie came to me and said, “Eric, you and your family, I know with certainty, have a 10% chance of dying in a bad car accident next year. It’ll be quick though. Would you like to trade that for a 0.8% chance of your family dying in a landslide? It’ll be pretty terrible.”

If you were given that choice, and you’d take the deal to get the landslide instead, then the government’s buyout doesn’t make sense. If you prefer the car accident, then the buyout can make sense.

Update: Note too that there’s an important difference between houses and roading investments. The government, in the latter case, makes investments to mitigate risk of death and accident for anyone using that road. While there may be some roads that risk-averse drivers avoid because they’re too terrifying, we all have reasonable expectations of safety on government-owned roads. If I choose to buy a house at the bottom of a very unstable hillside, I have demonstrated that I’m comfortable with that risk. While it’s true that the earthquakes reveal more about the actual risk and that some owners may have erred, that could be an argument for an insurance payout for the amount of capital loss, not for buying the owners out and barring future residential use. All we then need is the one-time compensation, plus a great big highlighted section at the front of the property’s LIM report noting the substantial landslide risk present at the property. If some prefer taking that risk for a lower-cost house, why should that be illegal?


EconTalk this week Paul Walker Aug 28

Terry Anderson, Distinguished Fellow at the Property and Environment Research Center (PERC) and Senior Fellow at the Hoover Institution, talks to EconTalk host Russ Roberts about free-market environmentalism, the dynamics of the Yellowstone ecosystem, …

The effects of patent trolls Paul Walker Aug 28

There is a new organisational form, called the non-practicing entity (NPE), in the world of intellectual property. NPEs have recently emerged as a major driver of IP litigation. The idea is that NPEs amass patents not for the sake of producing any actual product, but rather to prosecute infringements of their patent portfolios (rent-seeking?). The rise of NPEs has sparked a debate regarding their value and their impact on innovation. Proponents argue that imperfections in the legal system implicitly reward large, well-funded organisations, enabling them to infringe at will on small innovators’ IP, and that NPEs are there to protect small innovators from such abuse. Opponents cast NPEs as organisations that simply raise the costs of innovation by exploiting the fact that an imperfect legal system will rule in their favour sufficiently often—even if no infringement has actually occurred—that the credible threat of the legal process can yield rents from producing, innovative firms.

So what are the effects of these “patent trolls”? A new NBER working paper, Patent Trolls: Evidence from Targeted Firms by Lauren Cohen, Umit Gurun and Scott Duke Kominers, tries to find out. Cohen, Gurun and Kominers add to the debate on NPEs by providing the first large-sample evidence on precisely which corporations NPEs target in litigation, when NPE litigation occurs, and the impact of NPE litigation on the targeted firms’ innovative activity.

Cohen, Gurun and Kominers argue that there are two reasons that patent trolls can prevent welfare-increasing innovation from being brought to market.

  1. innovators with profitably commercialisable inventions but with a high enough probability of being sued to be deterred from production
  2. innovators that decide not to commercialise because the ex ante expected profitability of becoming a patent troll is higher than that of commercialisation

In their empirical work Cohen, Gurun and Kominers

[...] link patent-level data on NPEs and their activities to data on all publicly traded firms. Using this linked data, we show that NPEs behave opportunistically; that is, typically acting as patent trolls. Specifically: NPEs target firms that are flush with cash (controlling for all other characteristics) and firms that have had recent, positive cash shocks.

Indeed, a one standard-deviation increase in cash level increases the probability of being sued by an NPE by 11% (t = 6.84). Given that the mean probability is 2%, this is more than a fivefold increase.

In fact, NPEs even target conglomerate firms that earn all of their cash from segments having nothing to do with the allegedly infringing patents. For example, an NPE is likely to sue a firm regarding a technology patent even if the firm is earning all its revenue from a lumber division entirely unrelated to the allegedly infringing technology patent—even if the division holding that patent is unprofitable. Indeed, we find that profitability in unrelated businesses is almost as predictive of NPE infringement lawsuits as is profitability in the segment related to the allegedly infringing patent.

Consistent with our model, we also find that NPEs target firms against which they have a higher ex ante likelihood of winning. We demonstrate this fact using multiple measures of ex ante likelihood of lawsuit success. First, we show that NPEs are significantly more likely to target firms that are busy dealing with a number of other litigation events unrelated to intellectual property. Being tied up with outside litigation roughly doubles the probability (t = 2.87) of being sued by an NPE. Moreover, we show that, controlling for all other characteristics, firms with larger legal teams have a significantly lower probability of being targeted by NPEs, consistent with large legal teams serving as a deterrent.

Of course, the true prediction of our model is on the ex ante expected profitability of NPE litigation. To capture this, we interact our measures of expected cash payouts with our measures of expected lawsuit success. We find that, as the model predicts, NPEs systematically target those firms for which the ex ante expected profitability of litigation is large. In particular, the payout probability interaction terms are significant and economically large. Our finding suggests that nearly all the firms targeted by NPEs have large pools of cash for potential payouts and are ex ante more likely to pay off in some form (either an out-of-court settlement or an in-court loss). To further explore this connection, we construct a measure of the ex ante expected outcome if a targeted firm were to go to court. This measure relies on the assumption that defendants often make predictions about the likely outcome based on observations of other firms in the same industry and location. We find that the interaction term of this expected outcome and expected payout is again large and significant, providing further evidence that NPEs choose targets based on expected profitability: suits with high probability of payoff against firms with deep pockets.

Non-practicing entities don’t have a monopoly on IP litigation. Practicing entities (PEs), such as IBM and Intel, also sue each other for patent infringement. If our results are simply picking up general characteristics of IP litigation, then we might expect to see PEs behaving in much the same way as NPEs. In order to compare PE and NPE behavior, we hand-collected the universe of patent infringement cases brought by PEs against other PEs in the same period (2001–2011). However, we find the opposite. If anything, PEs are slightly less likely to sue firms with high cash balances and less likely to sue firms with many ongoing cases. All of the other determinants of NPE targeting have (statistically and economically) no impact on PE litigation behavior. This comparison suggests that our results on NPE litigation behavior are not just reflections of general characteristics of IP litigation. Rather, our findings are consistent with agent-specific motivations for NPEs in targeting firms flush with cash just when favorable legal outcomes are more likely.

Lastly, we examine the real impacts of NPEs’ litigation activity. Comparing firms that are sued by NPEs and go to court (and in this way controlling for selection of firms targeted by NPEs), we find that firms that lose in court have significantly lower post-litigation patenting activity and fewer citations to their marginal post-litigation patents, relative to firms whose cases are dismissed. Furthermore, after losing to NPEs, firms significantly reduce R&D spending—both projects inside the firm and acquiring innovative R&D projects outside the firm. Our evidence suggests that it really is the NPE litigation event that causes this decrease in innovation. Prior to litigation, firms that subsequently lose to NPEs are identical to those that subsequently have suits dismissed. They have the same R&D, patenting, and patent quality. Moreover, patents of firms developed pre-litigation continue to accrue citations at exactly the same rate after litigation, whether or not the suit was dismissed. This is in stark contrast to the divergent amount of citations of firms’ post-litigation patents.

In short, NPEs behave as patent trolls.

Competition and productivity Paul Walker Aug 28

An obvious and important question in industrial economics is whether competition raises productivity and, if so, through what mechanism. In a 2010 working paper, Does Competition Raise Productivity Through Improving Management Quality?, John Van Reenen sets…

Should we change the way we teach economics? Paul Walker Aug 27

This is a question that Professor and Nobel Prize winner Alvin Roth was asked by a reporter from Brazil. The questions and Roth’s answers follow:

1) Should the content of economics degrees change? Why? Why not?

I guess you mean should we change what we teach young economists, and of course the only answer is “of course!” What we teach young physicists and biologists and doctors and civil engineers changes as we learn more about those things, and economics is no different.

2) Has the criticism of economics been exaggerated after the 2008 crisis? To what extent is the current debate on content useful?

I think the 2008 crisis has been useful for pointing out that economics is, in many of its parts, still a very young science. For an analogy, think of medicine, which is the part of biology that we most often look to for advice, and is also a young science in many of its parts. Each year we worry that there might be an influenza epidemic due to whatever new strain of flu is observed in Asia that year, and each year vaccines are prepared, in an attempt to avert a disaster like the influenza pandemic of 1918. So far we’ve been lucky, but it’s not because we have a deep understanding of what could cause another epidemic or how to prevent it. But if another epidemic occurs, we’ll need to rely more on doctors and medicine, not less. So, while we need to understand epidemics better, that’s not a deep criticism of medicine, just an acknowledgement of some of its current limitations. Similarly for economic crises, and economics.

3) What changes should be made?

One of the things we’re devoting more attention to at Stanford is the kind of economic engineering called market design, which pays attention to the detailed rules by which particular marketplaces operate, and to experimental economics, which gives us a tool to better understand how people behave in economic environments.

4) Has economics teaching become too wedded to scientific pretension? Was excessive faith invested in abstract mathematical models?

Abstract mathematical models are very useful, in combination with other kinds of investigation. A lot of my work is devoted to market design, and my colleagues and I build a lot of marketplaces that have some of their ancestry in abstract mathematical models (including some of those explored by the famous Brazilian economist Marilda Sotomayor, in whose honor there is a conference next week). Mathematical models are becoming increasingly important as we start to explore really big data sets, since not only do you need mathematical tools to test hypotheses on data, you need models to even suggest what hypotheses you should be testing. Theory and observation work best in combination…

On this last point, Oliver Hart, one of the top economic theorists in the theory of the firm and contract theory, has noted (and this would be my answer to Matt Nolan’s recent Discussion Tuesday question):

Although theory may not be as prominent as it once was, it remains essential for understanding the (increasingly) complex world we live in. One cannot analyze the bewildering amount of data now available without the organizing framework that theory provides. I would also suggest that one cannot understand the extraordinary events that we have recently witnessed, such as the financial crisis, or make sensible policy recommendations in response to these events, without the organizing framework of theory.

So, for those who seem to think data can do everything and that we should therefore stop teaching theory: I don’t think so. Empirical work is only as good as the theory underlying it. So, no, running a million regressions and picking the one that confirms your prejudices isn’t how you do good economics.

When Hooton’s away, a Crampton will play Eric Crampton Aug 27

I filled in for Matthew Hooton in the NBR’s Opening Salvo this week. It’s only in the print edition; here’s a taste. [Update: here for subscribers]

We can count the costs of apartment storeys left unbuilt. In a well-functioning market, developers will build upwards until the cost of an additional storey roughly equals the extra revenue the developer gets from selling the extra floor space, unless we think that property developers do not really like money all that much. We have pretty good data on what it costs to build a five-storey apartment building as compared to a four-storey one. If a fifth storey left unbuilt because of height limits, whether due to viewshed protection or other regulation, could have sold for two to three times its construction cost, as the presented study found, the effective regulatory tax imposed by height limits is pretty high. If you add up the value of all the missing apartments, the total figure is going to be massive.

While urban planners often take a lot of stick for wishing to force people into compact city forms, and sometimes rightly so, urban height limits that artificially prevent density impose a regulatory tax that either pushes prices up or pushes cities out. Auckland’s metropolitan urban limit has been pretty binding and artificially restricts building out; regulations barring development upwards need at least as much attention.

The economists at these sessions used a similar method to estimate the regulatory tax implicit in zoning regulations in places like Epsom, Remuera, Point Chevalier and Grey Lynn. Add up the construction costs of a new house and the per-square-metre land cost. According to the study presented, which remains in the final polishing stages, mean house prices exceed those real costs by at least twelve percent in places like Epsom: it’s a regulatory zoning tax. The Greens’ Julie-Anne Genter was exactly on point when she excoriated ACT’s David Seymour in the Epsom candidates’ debate for opposing densification. What kind of free-marketer thinks it right and proper to give neighbours several houses away a veto right over what I might wish to do with my house? One that needs to win votes in Epsom.
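As a rough sketch of both calculations described in the excerpt, here is a short Python illustration. The per-storey costs, sale values, house price, and land cost are invented placeholders, not figures from the study presented at the sessions.

```python
# A minimal sketch, with invented placeholder numbers, of the two "regulatory
# tax" calculations described above.

# 1. Height limits: a developer adds storeys while the extra floor space sells
#    for more than it costs to build; storeys blocked by a height cap represent
#    foregone surplus.
marginal_cost_per_storey = [1.0, 1.1, 1.3, 1.6, 2.0]   # $m to add storeys 1..5 (hypothetical)
revenue_per_storey = 3.0                                 # $m of saleable floor space (hypothetical)
height_limit = 4

foregone_surplus = sum(
    revenue_per_storey - cost
    for storey, cost in enumerate(marginal_cost_per_storey, start=1)
    if storey > height_limit and revenue_per_storey > cost
)

# 2. Zoning: compare the mean house price with the real resource cost of
#    supplying a house (construction plus land); the gap is the implicit
#    regulatory tax. The study cited puts it at >= 12% in places like Epsom.
mean_house_price = 1_200_000      # placeholder
construction_cost = 600_000       # placeholder
land_cost = 456_000               # placeholder (section size x per-square-metre land value)

regulatory_tax = mean_house_price - (construction_cost + land_cost)
tax_share = regulatory_tax / mean_house_price

print(f"Surplus lost to the height cap: ${foregone_surplus:.1f}m")
print(f"Zoning regulatory tax: ${regulatory_tax:,.0f} ({tax_share:.0%} of price)")
# With these placeholders: $1.0m foregone on the blocked storey; $144,000, about 12% of price.
```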

Do get a copy that you might read the whole thing. For the Genter-Seymour debate in question, hit the 8:50 – 9:16 mark here.


The payoff from deregulation Donal Curtin Aug 27

There are still lots of people who are in two minds about our big burst of deregulation – ‘Rogernomics’ – in the second half of the 1980s and first half of the 1990s. Even the name suggests that it might have been the Frankenstein-like work of one indi…

Two interesting looking recent working papers Paul Walker Aug 26

First a paper on Thomas Piketty’s recent book:

The Rise and Fall of General Laws of Capitalism
Daron Acemoglu and James A. Robinson

Abstract
Thomas Piketty’s recent book, Capital in the Twenty First Century, follows in the tradition of the great classical economists, Malthus, Ricardo and Marx, in formulating “general laws” to diagnose and predict the dynamics of inequality. We argue that all of these general laws are unhelpful as a guide to understand the past or predict the future, because they ignore the central role of political and economic institutions in shaping the evolution of technology and the distribution of resources in a society. Using the economic and political histories of South Africa and Sweden, we illustrate not only that the focus on the share of top incomes gives a misleading characterization of the key determinants of societal inequality, but also that inequality dynamics are closely linked to institutional factors and their endogenous evolution, much more than the forces emphasized in Piketty’s book, such as the gap between the interest rate and the growth rate.

and then one on free banking and economic growth in Quebec:

Free Banking and Economic Growth in Lower Canada, 1817–1851
Mathieu Bedard and Vincent Geloso

Abstract
Generally, the historical literature presents the period from 1817 to 1851 in Lower Canada (modern day Quebec) as one of negative economic growth. This period also coincides with the rise of free banking in the colony. In this paper we propose to study the effects of free banking on economic growth using theoretical and empirical validations to study the issue of whether or not economic growth was negative. First of all, using monetary identities, we propose that given the increase in the stock of money and the reduction in the general price level, there must have been a positive rate of economic growth during the period. We also provide complementary evidence drawn from wages that living standards were increasing. It was hence impossible for growth to have been negative. Secondly, we propose that the rise of privately issued paper money under free banking in the colony had the effect of mitigating the problem of the abundance of poor quality coins in circulation which resulted from legal tender legislation. It also had the effect of facilitating credit networks and exchange. We link this conclusion to the emergence of free banking which must have been an important contributing factor. Although we cannot perfectly quantify the effect of free banking on economic growth in Lower Canada, we can be certain that its effect on growth was clearly positive.

Real decline? Eric Crampton Aug 26

Nolan rightly hits on a bit of chicanery in reporting on BERL’s policy costings for the Greens:

Investing to maintain real spending
This one is genuinely disappointing as it seems to be an almost explicit misinterpretation of Budget forecast figures.
The numbers for claiming falling real expenditure come straight from the Treasury forecasts here, but are then deflated. This sounds good on the face of it, and people do this all the time. However, it ignores that there is both unallocated spending, and allowances for additional spending in future Budgets – both of which largely get allocated to Health and Education on the day.
It is an “open” secret that the Health and Education numbers work this way – as both Labour and National want to announce increases in spending on these items on the day. [Note: It is just like "tax cuts to get rid of fiscal drag" - political marketing all the parties do].

Now BERL is likely aware that the Budget Economic and Fiscal Update’s line-by-line figures leave out that unallocated spending: it sits on its own line, right there in the darned table. Here:

So what do we have here? For each line, we have the expenditures by spending area. For example, health rises from $12,368m in 2009 (actual) to $15,274m in the 2018 forecast. BERL then goes and deflates that by expected inflation; the Greens then claim that there’s a real cut in spending.

Now take a look at the line reading “Forecast for future new spending”. That’s the line where Treasury makes its best wink-wink-nudge-nudge guess as to future operating spending announcements, some of which it’s possibly already had to cost for future government policy announcements, and some of which will be based on expectations of future inflation adjustments.

When BERL runs its inflation-adjusted accounting on Core Crown Expenditures, it finds a 9.9% nominal and 2.8% real spending increase over the next three years. That total Core Crown Expenditures category includes the future spending increases. Those future spending increases have not been allocated across spending categories. If they were allocated proportionately across all categories, the weighted average of the different categories’ increases would wind up being 2.8% real. But BERL doesn’t assume that. It just takes each line from the BEFU and inflation-adjusts it while ignoring the forecast future new spending.
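To see the mechanics, here is a stylised Python illustration with made-up numbers; nothing below is from the actual BEFU, but it shows why deflating each named line while ignoring the unallocated future-spending line can report real “cuts” even when total real spending rises.

```python
# Stylised illustration (made-up numbers) of the accounting problem: deflating
# each named spending line while ignoring the unallocated "Forecast for future
# new spending" line makes real spending look like it falls, even when it rises.

inflation = 0.02          # assumed annual inflation rate
years = 3
deflator = (1 + inflation) ** years

# Nominal forecasts: named lines held flat, with future increases parked in an
# unallocated line rather than spread across categories (as in the BEFU table).
named_now = {"health": 15_000, "education": 13_000}        # $m, illustrative only
named_forecast = {"health": 15_000, "education": 13_000}   # $m, illustrative only
unallocated_future_spending = 2_500                         # $m, illustrative only

# BERL-style calculation: deflate each named line, ignore the unallocated line.
real_change_by_line = {
    k: named_forecast[k] / deflator - named_now[k] for k in named_now
}

# Including the unallocated spending reverses the picture for the total.
real_change_total = (
    sum(named_forecast.values()) + unallocated_future_spending
) / deflator - sum(named_now.values())

print(real_change_by_line)        # every named line shows a spurious real "cut"
print(round(real_change_total))   # but total real spending rises
```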

Nolan is right. And it’s worse than that. Nolan points to the 2013 Note 8 adjustments to BEFU. The 2014 table above has the forecast increases right there in the same table where BERL would most likely have pulled its data. It would be really hard to miss it. And if you didn’t miss it, it would be really hard not to know that it would be really misleading to run a deflation adjustment without incorporating future expected spending increases where some of those increases would be to offset future inflation! 
The forecast new capital spending and unallocated contingencies are in the 2014 Note 8 adjustments; there’s another $2.5 billion in forecast new capital spending by 2018. None of that’s included in BERL’s accounting.
Nolan’s evaluation, noting that he’s an Infometrics economist who hadn’t worked on the report:

One thing I will point out, after reading the Infometrics report for the first time, is that they don’t say the things in the Greens’ summary – but if you do a costing for a party, that is the way they will sell it. The BERL tables on the other hand do imply what the Greens take from them – and that is very disappointing as they are misleading.

So the Greens put the best spin they could on the Infometrics numbers, as would any other party. But they could have been misled by the BERL tables. And that turned into some very erroneous headlines for the Greens, and some embarrassment when the Minister of Finance used a yellow highlighter to point them to what BERL failed to notice:

@RusselNorman @stevenljoyce Here it is. P119 BEFU “Forecast for future new spending” – the line BERL missed. pic.twitter.com/V2pnaYwXdq
— Bill English (@honbillenglish) August 19, 2014

I love that our Minister of Finance, or at least somebody in his office on his behalf (who knows whether he runs his own Twitter account), will come in and correct these kinds of things.

It looks like Russel Norman honestly believed that there was no provision for future spending. I wonder whether he’s satisfied with the advice he received.

