Filip Vachuda, Onehunga High School’s academic runner-up for 2017, began “DuxGate” when he wrote that he had missed out on dux because the winner “exempted herself from any math, science, or indeed, scholarship exams and extra subjects”.

Given that I was awarded dux twice in high school, having studied the sciences, the arts and the so-called “non-academic” subjects of painting and physical education, I believe I’m in a qualified position to comment on this issue.

Filip wonders why his school didn’t consider his “more demanding curriculum”, claiming that the “classically academic” subjects he studied were more academically rigorous than printmaking, English or theology.

The notion that it’s laughably easy to achieve highly in arts-based subjects is a damaging misconception perpetuated throughout society. In my experience at least, in subjects where there are clear correct and incorrect answers – high school physics, for example – it is easier to score more highly than in subjects that value abstract and creative thinking. Calculus is chock-a-block with complicated theorems and intricate calculations.

But history, sociology and English literature require abstract thinking, creativity, writing skills and the ability to communicate in a clear and engaging manner. Moreover, these subjects are, for lack of a better word, subjective. There’s no clear-cut way to succeed. Both calculus and English are challenging, but in vastly different ways.

In school, I was a “dramie”, a corpse-bride in our street performers’ troupe. Sure, dressing up in a frayed wedding dress and daubing ghostly paint on my face may have seemed practical, fun and stupidly easy to accumulate credits in. But drama also required me to submit numerous written portfolios and essays detailing every minute aspect of my various characters, my costume and my stage presence.

And perhaps NCEA year 13 painting was fun and interesting. But it also required that I spend virtually every afternoon in the art studio researching my artist models, writing up long and tedious essays about Salvador Dalí’s concept of time, mastering difficult painting techniques and employing a wide range of media. It was time-consuming, exhausting and difficult – despite being a so-called “non-academic” subject.

Filip argues the end goal of education is a well-paying job. Financial gain is not and should not be the sole reason for educating oneself. The motivations for studying a particular subject are as wide and varied as the students who take them. In my humble opinion, it is more fulfilling to pursue learning for the joy of it, rather than focusing on the rewards and recognition one might obtain.

Moreover, Filip’s assertion that certain subjects automatically lead to higher-paying jobs is inherently speculative, failing to take into account how the job market may evolve in the future. New Zealand’s economic and cultural landscape will not remain static forever. In short, Filip’s essay is symptomatic of a wider societal disrespect towards the arts and subjects such as music, drama and PE.

And what are the consequences of this? Neglecting the humanities and one’s cultural education will ultimately leave Kiwi high-schoolers ill-equipped to employ the self-reflection and self-criticism required of informed and critical citizens.

English literature, painting and Polynesian dance help us understand our fellow human beings, thereby fostering social justice, empathy and equality. History, sociology and gender studies teach us to weigh evidence sceptically and deal critically with complex, subjective and imperfect information.

Of course, calculus, physics and chemistry are and will remain vital and necessary to our country. But they must be balanced by subjects that value the study of humanity in all its manifestations. So can we please stop undervaluing and neglecting disciplines rooted in the arts, ideas and the celebration of cultural achievement?

Filip ends his essay by advising his sister to “load up on her photography, PE and Polynesian dance if she wishes to continue being a top scholar”.

Perhaps Filip should take his own advice.

**This article was originally published in the ODT on 21 December 2017.**

**Image: Photo by Davide Cantelli on Unsplash**


**Innovation is widely viewed as the engine of economic growth.**

**To maximize innovation and growth, all of our brightest youth should have the opportunity to become inventors. But a study we recently conducted, jointly with Neviana Petkova of the U.S. Treasury, paints a very different picture. We found that a child’s potential for future innovation seems to have as much to do with the circumstances of his or her family background as it does with his or her talent.**

We concluded that there are many “Lost Einsteins” in America – children who had the ability to innovate, but whose socioeconomic class or gender greatly reduced their ability to tap into the social networks and resources necessary to become inventors. Our analysis sheds light on how increasing these young people’s exposure to innovators may be an important way to reduce these disparities and increase the number of inventors.

Our first finding is that there are large differences in innovation rates by socioeconomic class, race and gender. Using new de-identified data that allows us to track 1.2 million inventors from birth to adulthood, we found that children born to parents in the top 1 percent of the income distribution are 10 times as likely to become inventors as those born to parents in the bottom half. Similarly, white children are three times as likely to become inventors as are black children. Only 18 percent of the youngest generation of inventors are female. Although the gender gap narrows somewhat each year, at the current rate of convergence, we won’t see gender balance until next century.

This is not to say that talent doesn’t play some role in determining who invents in America. In fact, math test scores for students even as young as third grade tell us a great deal about who will innovate. Unsurprisingly, inventors are typically found in the top tiers of math test scores. More concerning is that while high-achieving youth from privileged backgrounds go on to invent at high rates, many comparably talented children from more modest backgrounds do not. Even among the most talented kids, family background is still an important determinant of who grows up to invent.

The relative importance of privilege and skills changes as kids get older. And it does so in a way that suggests that differences in educational environment contribute to disparities in patent rates. Near the start of elementary school, we can identify many high-achieving students from less privileged backgrounds. But as these students get older, the difference in test scores between rich and poor becomes much more pronounced. By high school, youth from less privileged backgrounds who appeared to hold promise as future inventors when they were younger have fallen behind academically. Other recent research suggests that differences in schools and neighborhoods play a large role in this socioeconomic divergence in skills.

If we could somehow get all kids to grow up to invent at the same rate as white boys from America’s wealthiest families – that is, families with an income of $100,000 or more – we would have four times as many inventors in America. So what can be done to keep these “Lost Einsteins” in the pipeline to become innovators?

We found that increasing exposure to innovation may be a powerful tool to increase the number of inventors in America, particularly among women, minorities and children from low-income families. To test the importance of exposure, we first counted the number of inventors that lived in each child’s city when the child was young. We used this measure as a proxy for exposure to innovation. After all, a child’s chances of coming into contact with inventors increase when there are more inventors around. We found that growing up in a city with more inventors substantially increases the likelihood that a child will become an inventor as an adult. This is true even when we took kids who were the children of inventors out of the analysis. This suggests that it’s not just children of inventors who are likely to become inventors themselves.

We also found that kids who go on to become inventors tend to invent the same kinds of things as the inventors in the city where they grew up. For instance, among current Boston residents, those who grew up in Silicon Valley around computer innovators are most likely to invent computer-related technologies. On the other hand, Boston residents who grew up in Minneapolis – a hub for medical device companies – are more likely to invent new medical devices. These detailed patterns suggest that there is something specific about interactions with inventors during childhood that causes kids to follow in their footsteps.

The effects of growing up around inventors are large. Our estimates suggest that moving a child from an area at the 25th percentile of exposure to inventors, such as New Orleans, to one at the 75th percentile, such as Austin, Texas, would increase the child’s chances of growing up to invent a new technology by as much as 50 percent.

These effects are stronger when children are exposed to inventors with similar backgrounds. Girls who grow up in a city with more female inventors are more likely to invent, but growing up around adult male inventors has no effect on girls’ future innovation rates. Similarly, boys’ future innovation is influenced by the number of male rather than female inventors around them during childhood.

Since underrepresented groups are likely to have fewer interactions with inventors through their families and neighborhoods, differences in exposure play a large role in these disparities. Indeed, our findings suggest that if young girls were exposed to female innovators at the same rate as boys are to male innovators, half of the gender gap in innovation would be erased.

Together, our findings call for greater focus on policies and programs to tap into our country’s underutilized talents by increasing exposure to innovation for girls and kids from underprivileged backgrounds. It may be particularly beneficial to focus on children who do well in math and science at early ages.

Such policies could include mentoring programs, internships or even interventions through social networks. At a more personal level, those in positions to be mentors might give more thought to making sure students from underprivileged backgrounds have the guidance needed to follow them in their career paths. The more each of us does to help boys and girls from different backgrounds achieve their innovative potential, the more it will spur innovation and economic growth for us all.

**Alexander Bell, PhD Candidate, Economics, Harvard University; John Van Reenen, Professor of Applied Economics, Massachusetts Institute of Technology; Raj Chetty, Professor of Economics, Stanford University, and Xavier Jaravel, Assistant Professor of Economics, London School of Economics and Political Science**

**Image: New research concludes that there are many “Lost Einsteins” in America – children who had the ability to become inventors but didn’t because of where they were born. Shutterstock.com**

**This article was originally published on The Conversation. Read the original article.**

Supporting women who enter a science, technology, engineering and mathematics (STEM) career is just as important as presenting those same women as positive role models to engage young people with STEM. We think we’re striking a balance that helps to achieve both through our Series of Profiles of girls and women in (and involved with) science and technology, located on the Curious Minds website.

At least one new profile is published each week (and yes, I feature¹). But that’s not why I’m sharing this information. The profiles include some really inspirational women, all involved in STEM in some way. The latest profile is of insect and spider weaponry expert Dr Chrissie Painting, who has just returned to an academic career in New Zealand after a postdoc in Singapore.

Then there’s the recent profile of Dr Ocean Mercier, a Senior Lecturer at Te Kawa a Māui (School of Māori Studies), Victoria University of Wellington. She has a PhD in condensed matter physics and was the presenter of Project Mātauranga, a science series on Māori TV celebrating Māori innovation in the science sector.

Or Kate de Ridder, an Aeronautical Engineer for the Royal New Zealand Air Force, based in Auckland. The rest of the inspirational and fascinating profiles can be found here. All the currently profiled women (bar Chrissie, who is featured above) are pictured in the slideshow below.

Each profile is based on seven questions:

- What do you do on an average work day?
- What did you study at school? And after high school?
- Was your study directly related to what you do now?
- What would you like to share with young women who are thinking about their career choices right now?
- What are some of your career highlights so far?
- Why do you believe engaging in STEM – whether it’s working in the field, studying it or just educating oneself around the issues – is important to New Zealand?
- Why is it important to have more women working in STEM?

One aim of the profiles is to create positive role models that encourage young people, especially girls, to engage with science. Through learning each woman’s background, we hope they will see that they can identify with being a scientist and that there are many ways to connect with science. They will also see that there are many women working in STEM, beyond the more well-known ones.

We also hope young people who are considering science as a career option will see from the profiles that there are many different areas of science. In upcoming profiles we will also be showcasing women working in science-related careers; that is, careers of science and for science, held by women who haven’t come from a science background per se.

The low number of women in STEM has been a longstanding issue, as have the career experiences of women in STEM, which in turn contribute to the low numbers. It is indeed one of our ‘wicked problems’, as Dr Elena Bennett explains:

Pull at one thread and discover 10 more just as unsolvable. Science draws a lot of women, but can’t keep them.

One other aim in creating these profiles is to celebrate women in STEM and to raise their collective and individual profiles. This is one way our New Zealand women in STEM can feel more connected to each other. It also provides a visible presence that says: “We’re here, we’re doing cool stuff, this matters and it is being recognised on this page”.

February 11th was the first International Day of Women and Girls in Science. In my other blog, Ice Doctor, I wrote about my reflections on women in science and where we’ve been and where we’re going. This explains some context around why profiling some of the many faces of women in STEM is so important.

¹I happened to be the inaugural profile published. You can read more about what I do and my background here.

Regrettably, pi-day (March 14th, 2015, or 3.14.15) only works if you use the US system of recording dates. But fear not, e-day (2nd July 2018, or February 7th, 2018 if you're American) isn't so far away...


I would have thought that the obvious answer was 'in the same way that the opposition made it'. In other words, keep the scoreboard turning over nicely in the early stages, but without taking excessive risks, and then, with the wickets in hand (especially Gayle's), hit the accelerator at the end. Rather than taking 50 overs to reach the target, they looked like they were trying to do it in 40. It was never going to work. You can contrast this to the calm manner in which Sri Lanka reached England's 300+ score in the group stages. Nothing flash, no excessive risks, they just ticked the board over just short of the required run-rate, and then with wickets in hand they pushed on at the end to win easily. No fuss, no stupid shots. They knew exactly what was needed, and they achieved it.

A school of thought says, other things being equal, it's much better to bat second in a limited overs match, because you know exactly what you have to do. It clarifies your batting strategy: Choose whatever strategy maximizes your chances of reaching the target score within 50 overs. Yes, there are other considerations, our friends Duckworth and Lewis being one, but let's put those aside for the moment. You have 10 wickets and 50 overs, and a target score. You only need to beat that target by 1 run on the last ball, with one wicket left. What strategy will maximize your chance of success?

The team batting first has a much less defined problem. They don't know what a winning score is. True, they'll have some feel of what one will be, given the ground, the quality of opposition, the state of the pitch and so on, but fundamentally, they don't know what score is going to be good enough. For example, a team could probably secure a moderate score (let's say 280) with 80% probability, by batting conservatively. Or they could aim for a larger score (say 350) by taking more risks. They might get there with a probability of 25%, but then there's the possibility (maybe also 25%) they crash out on the way and end up with something more like 250. What should they do? Which strategy is better?
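One way to weigh those options is a quick expected-value calculation. The scores and probabilities below are the illustrative numbers from the paragraph above; the win-probability curve (`win_prob`), the assumed 'par' score of 300 and the middle outcome of 300 are entirely made-up assumptions, so the output shows how the comparison works, not which strategy is actually better.

```python
from math import exp

def win_prob(score, par=300, scale=25):
    # Hypothetical chance of defending `score`: a logistic curve rising
    # smoothly through 50% at an assumed "par" score of 300.
    return 1 / (1 + exp(-(score - par) / scale))

# Conservative plan: 280 with 80% probability; assume ~250 otherwise.
conservative = 0.8 * win_prob(280) + 0.2 * win_prob(250)

# Aggressive plan: 350 with 25% probability, a collapse to 250 with 25%,
# and (a further assumption) something in between, say 300, the rest of the time.
aggressive = 0.25 * win_prob(350) + 0.25 * win_prob(250) + 0.5 * win_prob(300)

print(f"conservative: {conservative:.3f}, aggressive: {aggressive:.3f}")
```

The point is that the answer hinges entirely on the assumed win-probability curve, which is exactly what the team batting first does not know.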

It's viewed as a criminal offence if the team batting first fails to bat out its 50 overs. If it's all out beforehand, it has obviously **taken too many risks**. But one could also say it's a criminal offence if it ends up with just a handful of wickets down. In that case it **hasn't been taking enough risk**. Balancing all that up, a typical strategy for a team batting first is to go at a moderate rate to start with, and slowly increase the rate as wickets allow. It seems to have stood the test of time.

It is possible to make this a bit more mathematical. What a team needs to do is to maximize its score, subject to the constraints that a. it only has 50 overs, and b. it only has 10 wickets. Since the rate of fall of wickets is certainly related to the scoring rate (the higher the runs-per-over, the higher the risk, usually, and the higher the wickets-per-over) we can write down some equations for the situation. [You'll be relieved to hear I won't try to do those here].
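Without writing those equations down, the flavour of the trade-off can be captured in a toy Monte Carlo sketch. Every number in it (runs per over, dismissal probability, and how both scale with 'aggression') is an invented assumption for illustration, not fitted to real cricket data:

```python
import random
import statistics

def simulate_innings(aggression, rng, overs=50, wickets=10):
    """One simulated innings. Higher aggression raises the runs scored per
    over, but also the chance of losing a wicket each over (toy model)."""
    score, lost = 0.0, 0
    for _ in range(overs):
        if lost >= wickets:          # all out: the innings ends early
            break
        score += max(0.0, rng.gauss(4.0 + 3.0 * aggression, 1.5))
        if rng.random() < 0.02 + 0.15 * aggression:
            lost += 1
    return score

rng = random.Random(1)
careful = [simulate_innings(0.2, rng) for _ in range(2000)]
gung_ho = [simulate_innings(0.9, rng) for _ in range(2000)]

print(f"careful: mean {statistics.mean(careful):.0f}, "
      f"spread {statistics.pstdev(careful):.0f}")
print(f"gung-ho: mean {statistics.mean(gung_ho):.0f}, "
      f"spread {statistics.pstdev(gung_ho):.0f}")
```

Under these made-up numbers the aggressive approach scores more on average, but its spread is far larger: the occasional total collapse is the price of the extra runs. Tuning the assumed risk-return relationship shifts that balance, which is exactly the optimization the text describes.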

It turns out to be an optimization problem, of which there are many in physics. We can tackle many of them with the Euler-Lagrange equation. For example, what shape does a chain have when it hangs under its own weight? Take a chain of length 2 metres, secure the ends to posts 1.5 metres apart. The chain clearly sags in the middle, but what shape is the resulting curve? The chain hangs in such a way as to minimize its gravitational potential energy, subject to the constraint that it has a constant length and that it has to start and end at fixed points. One can put this in equation form and solve for the shape. The resulting curve is called a 'catenary'.
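That worked example can be solved numerically in a few lines. The sketch below (Python, simple bisection) uses the standard catenary results y(x) = a·cosh(x/a) and arc length s(x) = a·sinh(x/a), and finds the parameter a for a 2 m chain spanning 1.5 m:

```python
from math import cosh, sinh

# A 2 m chain hung between posts 1.5 m apart. With the lowest point at
# x = 0 the curve is y(x) = a*cosh(x/a), and the arc length from 0 to x
# is a*sinh(x/a). So the parameter a must satisfy a*sinh(0.75/a) = 1
# (half the span, half the chain length).

def arc_mismatch(a):
    return a * sinh(0.75 / a) - 1.0

lo, hi = 0.05, 10.0           # bracket: mismatch is + at lo, - at hi
for _ in range(60):           # bisection: halve the bracket 60 times
    mid = 0.5 * (lo + hi)
    if arc_mismatch(mid) > 0:
        lo = mid
    else:
        hi = mid
a = 0.5 * (lo + hi)
sag = a * (cosh(0.75 / a) - 1)  # drop from the posts to the lowest point

print(f"a = {a:.4f} m, sag = {sag:.4f} m")
```

The same numerical answer drops out of the Euler-Lagrange machinery; here the known catenary form does the heavy lifting and only one constant needs solving for.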

Certainly, if one could write down the rate of loss of wickets as a function of scoring rate, wickets lost already and other factors, we should be able to have a go at tackling the cricket scoring rate problem. Given the degree of professionalism of the sport, it wouldn't surprise me if that has actually been done by someone. I imagine it would be a closely guarded secret.


A mathematician can say what he likes... A physicist has to be at least partly sane

J. Willard Gibbs

What is it that makes a physicist sane (if only in part)? Everything has to be related back to the 'real world', or the 'real universe'. That is, a physicist has to talk about how things work in the world or universe in which we live, not some hypothetical universe. That's how I think of it, and I know, having done a bit of research with some of my students, a lot of them think the same way. That's not to say mathematicians don't have a lot to say about this universe too. It's just that the constraints on them are somewhat less.

Another way of looking at it is that physicists work with dimensioned quantities. Most things of physical relevance have **dimensions**. For example, a book has a length, width and thickness. All of these are distances, and can be measured. The unit doesn't matter; we could use centimetres, inches or light-years – but the physical size of the object is determined by lengths. Also, the book has a mass (one could measure it in kilograms). It might find its way onto my desk at a particular time (measured, for example, in hours, minutes, seconds, millennia or whatever). Perhaps it is falling at a particular velocity – which describes what distance it travels in a particular time. All of these things are physical quantities, and they carry dimensions.

One of my pet hates as a physicist is reading physics material in which the dimensions have been removed. You can do this by writing lengths in terms of a 'standard' length, but then only quoting how many of the standard length it is. So we might talk about lengths in terms of the length of a piece of A4 paper (which happens to be 297 mm); a piece of A2 paper has length 2 standard-lengths, and an area of 4 standard-areas. The problem really comes when the discussion drops the 'standard-length' or 'standard-area' bit and we are left with statements such as a piece of A2 paper has a length of 2 and an area of 4. It is left to the reader to work out what this actually means in practice. A mathematician can get away with it – she can say what she likes, but not so the physicist.

Here's a question which illustrates the point: What is the length of a side of a cube whose volume is equal to its surface area? The over-zealous mathematics student blunders straight in there: Let the length be x. Then volume is x^3, and surface area is 6 x^2 (the area of a face is x^2, and there are six on a cube). So x^3 = 6 x^2; cancelling x^2 from both sides, we have x=6. Six what? Centimetres, inches, furlongs, parsecs? The point is that the volume of a cube can never be equal to its surface area. Volume and area are fundamentally different things.
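A program that tracked dimensions would refuse that comparison outright. Here's a minimal Python sketch (the `Quantity` class is made up purely for illustration, not an existing library; it tracks only a length exponent):

```python
class Quantity:
    """A value with a length-dimension exponent: 1 = length, 2 = area, 3 = volume."""
    def __init__(self, value, dim):
        self.value, self.dim = value, dim

    def __mul__(self, other):
        if isinstance(other, Quantity):
            # Multiplying quantities adds their dimension exponents.
            return Quantity(self.value * other.value, self.dim + other.dim)
        return Quantity(self.value * other, self.dim)  # pure numbers are dimensionless

    def can_equal(self, other):
        # Quantities are only comparable if their dimensions agree.
        return self.dim == other.dim

x = Quantity(6.0, dim=1)      # the student's "x = 6", with its units restored
volume = x * x * x            # 216 length-units cubed (dimension L^3)
area = x * x * 6              # six faces of 36 square units (dimension L^2)

print(volume.can_equal(area))  # False: a volume can never equal an area
```

The numbers 216 and 216 agree, but the dimensions do not, which is exactly the physicist's objection.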

The Wikipedia page on 'fundamental units', along with many textbooks, blunders in this way too. The authors should really know better. (Yes, I should fix it, I know...) For example:

A widely used choice is the so-called Planck units, which are defined by setting

ħ = c = G = 1

No, NO, NO! What is wrong with this? How can the speed of light 'c' be EQUAL to Newton's constant of gravitation 'G'? They are fundamentally different things. The speed of light is a speed (distance per unit time); Newton's constant of gravitation is... well... it's a length-cubed per mass per time-squared. It's certainly not a speed, so it can't possibly be equal to the speed of light. And neither can be equal to 1, which is a dimensionless number. What the statement should say is that c = 1 length-unit per time-unit, and G = 1 length-unit-cubed per mass-unit per time-unit-squared.

However, doing physics can be more complicated than this. A lot of physics is now done by computer. In writing a computer programme to do a physics calculation, we almost always don't keep an explicit record of the units or dimensions in our calculations. Our variables are just numbers. It's left to us to keep track of what units each of these numbers is in. Strictly speaking, I'd say it's rather slack. It would be nice to have a physics-programming language that actually keeps track of the units as well. However, I'm not aware of one. (If someone could enlighten me otherwise, that would be fascinating...) Otherwise, I'll have to have a go at constructing one.
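The bookkeeping itself is easy to sketch, at least. Here a dimension is a tuple of exponents (length, mass, time), which is enough to show that the c and G of the Planck-units example aren't even comparable. (The `Q` class below is a toy of my own for illustration, not an existing library.)

```python
class Q:
    """A numerical value tagged with exponents of (length, mass, time)."""
    def __init__(self, value, length=0, mass=0, time=0):
        self.value = value
        self.dims = (length, mass, time)

    def comparable(self, other):
        # Two quantities can only be equal if their dimensions agree.
        return self.dims == other.dims

# c is a speed: length per time.
c = Q(299_792_458, length=1, time=-1)
# G is a length-cubed per mass per time-squared.
G = Q(6.674e-11, length=3, mass=-1, time=-2)

print(c.comparable(G))  # False: "c = G = 1" equates incomparable things
```

A units-aware language would do this tagging automatically for every variable, so a statement like c = G would fail before any number was ever computed.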

What's prompted this little piece is that I've been reviewing a paper that has been submitted to a physics journal. The authors have standardized the dimensions out of existence, which makes it awfully hard for me to work out what things mean physically. Just how fast is a speed of 1.5? How many centimetres per second is it? While that might be an answer their computer programme spits out, the authors really should have made the effort of turning it back into something that relates to the real world. In a mathematics journal, they might get away with it. But not in a physics journal. At least, not if I'm a reviewer...

]]>

A mathematician can say what he likes… A physicist has to be at least partly sane

J. Willard Gibbs

What is it that makes a physicist sane (if only in part)? Everything has to be related back to the 'real world', or the 'real universe'. That is, a physicist has to talk about how things work in the world or universe in which we live, not some hypothetical universe. That's how I think of it, and I know, having done a bit of research with some of my students, a lot of them think the same way. That's not to say mathematicians don't have a lot to say about this universe too. It's just that the constraints on them are somewhat less.

Another way of looking at it is that physicists work with dimensioned quantities. Most things of physical relevance have **dimensions**. For example, a book has a length, width and thickness. All of these are distances, and can be measured. The unit doesn't matter; we could use centimetres, inches or light-years – but the physical size of the object is determined by lengths. Also, the book has a mass (one could measure it in kilograms). It might find its way onto my desk at a particular time (measured, for example, in hours, minutes, seconds, millennia or whatever). Perhaps it is falling at a particular velocity – which describes what distance it travels in a particular time. All of these things are physical quantities, and they carry dimensions.

One of my pet hates as a physicist is reading physics material in which the dimensions have been removed. You can do this by writing lengths in terms of a 'standard' length, but then only quoting how many of the standard length it is. So we might talk about lengths in terms of the length of a piece of A4 paper (which happens to be 297 mm); a piece of A2 paper has length 2 standard-lengths, and an area of 4 standard-areas. The problem really comes when the discussion drops the 'standard-length' or 'standard-area' bit and we are left with statements such as a piece of A2 paper has a length of 2 and an area of 4. It is left to the reader to work out what this actually means in practice. A mathematician can get away with it – she can say what she likes, but not so the physicist.

Here's a question which illustrates the point? What is the length of a side of a cube whose volume is equal to its surface area? The over-zealous mathematics student blunders straight in there: Let the length be x. Then volume is x^3, and surface area is 6 x^2 (the area of a face is x^2, and there are six on a cube). So x^3 = 6 x^2 ; cancelling x^2 from both sides, we have x=6. Six what? centimetres, inches, furlongs, parsecs? The point is that the volume of a cube can never be equal to its surface area. Volume and area are fundamentally different things.

The Wikipedia page on 'fundamental units' , along with many text books, blunders in this way too. The authors should really know better. (Yes, I should fix it, I know…) For example:

A widely used choice is the so-called Planck units, which are defined by setting

ħ=c=G= 1

No, NO, NO! What is wrong with this? How can the speed of light 'c' be EQUAL to Newton's constant of gravitation 'G'? They are fundamentally different things. The speed of light is a speed (distance per unit time); Newton's constant of gravitation is… well… it's a length-cubed per mass per time-squared. It's certainly not a speed, so it can't possibly be equal to the speed of light. And neither can be equal to 1, which is a dimensionless number. What the statement should say is that c = 1 length-unit per time-unit, and G = 1 length-unit-cubed per mass-unit per time-unit-squared.

However, doing physics can be more complicated than this. A lot of physics is now done by computer. In writing a computer programme to do a physics calculation, we almost always have no explicit record of the units or dimensions in our calculations. Our variables are just numbers. It's left to us to keep track of what units each of these numbers is in. Strictly speaking, I'd say it's rather slack. It would be nice to have a physics-programming language that actually keeps track of the units as well. However, I'm not aware of one. (If someone could enlighten me otherwise, that would be fascinating…) Otherwise, I'll have to have a go at constructing one.
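As a very rough sketch of what I mean (not a real library, just my own toy illustration), one could carry a (length, mass, time) exponent tuple around with every number, so the programme itself complains when you try to add a speed to a gravitational constant:

```python
# A minimal sketch of dimension tracking: each Quantity carries a
# (length, mass, time) exponent tuple alongside its numerical value.
# Adding quantities with mismatched dimensions raises an error;
# multiplying them adds the exponents.

class Quantity:
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims  # e.g. (1, 0, -1) for a speed: length / time

    def __add__(self, other):
        if self.dims != other.dims:
            raise ValueError(f"dimension mismatch: {self.dims} vs {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = tuple(a + b for a, b in zip(self.dims, other.dims))
        return Quantity(self.value * other.value, dims)

    def __repr__(self):
        return f"Quantity({self.value}, L^{self.dims[0]} M^{self.dims[1]} T^{self.dims[2]})"

c = Quantity(3.0e8, (1, 0, -1))        # a speed, in m/s
G = Quantity(6.674e-11, (3, -1, -2))   # length^3 per mass per time^2

print(c * c)   # dimensions (2, 0, -2): a speed squared, as it should be
# c + G        # would raise ValueError -- they're fundamentally different things
```

A real implementation would need division, powers and unit conversion too, but even this much would have caught the 'c = G = 1' blunder above.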

What's prompted this little piece is that I've been reviewing a paper that has been submitted to a physics journal. The authors have standardized the dimensions out of existence, which makes it awfully hard for me to work out what things mean physically. Just how fast is a speed of 1.5? How many centimetres per second is it? While that might be an answer their computer programme spits out, the authors really should have made the effort to turn it back into something that relates to the real world. In a mathematics journal, they might get away with it. But not in a physics journal. At least, not if I'm a reviewer…


I tried to find the Nerdist episode on YouTube so I could share it, but can't seem to find it. I did come across several YouTube videos by Danica McKellar which attempt to make maths fun (and succeed).

I absolutely loved the World Maths video although I must admit I found the one on percentages a little busy and hard to follow. I’d be interested to know what others think.


So, Sunday saw Benjamin and I get on the bicycle and go on a ditch-witch hunt. (We're going on a ditch-witch hunt... We're going to catch a BIG one...we're not scared...). And, much to my relief, we found them, resting quietly on Thompson Street.

But this entry isn't about ditch-witches or diggers or cranes or other large pieces of machinery, it's about what we saw on the way. On the front lawn of one house, there was a teenage boy practising 'barrel walking'. He was standing on a barrel, and rolling it forward and backwards around the garden. He was obviously reasonably skilled at this since he had some pretty good control of where he was going.

An interesting observation is that to get the barrel to roll **forwards**, the rider has to walk **backwards**. That must feel a little disconcerting. To get the barrel (and you) moving forward at say 2 km/h, you have to walk backwards at 2 km/h. That's because the bottom of the barrel, in contact with the ground, is instantaneously stationary, so if the centre is travelling at 2 km/h forwards, the top of the barrel must be going 4 km/h forwards relative to the ground. In order for you to go at the pace of the centre, 2 km/h forwards (and stay on top), you therefore need to go 2 km/h backwards with respect to the top of the barrel. In terms of mathematics: your speed relative to the ground = 2v - v = v, where the 2v is the speed of the top of the barrel, the '-v' is the speed of you relative to the barrel, and the 'v' is the speed of the centre. Got it?
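If it helps, here's that sum written out as a few lines of Python (a trivial sketch, with my own illustrative numbers):

```python
# Rolling without slipping: if the barrel's centre moves at v, the
# contact point is stationary and the top moves at 2v. To stay above
# the centre, the walker must move at -v relative to the barrel's top.

v = 2.0                      # speed of the barrel's centre, km/h
top_of_barrel = 2 * v        # speed of the top of the barrel over the ground
walker_relative_to_top = -v  # walking backwards along the barrel
walker_over_ground = top_of_barrel + walker_relative_to_top

print(walker_over_ground)    # 2.0 -- same as the centre, so the walker stays on top
```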

That kind of relationship crops up quite a bit in physics. I've talked about a case before - when a satellite in orbit loses energy because it hits air molecules, it speeds up. Uh! How does that work? It's because, as it loses energy, it drops to a lower orbit, one with less potential energy. But lower orbits have higher orbital speeds. It turns out that the loss in potential energy is exactly double the gain in kinetic energy. That is, if the satellite loses 100 J of energy, that's made up of a gain of 100 J of kinetic energy and a loss of 200 J of potential energy. It's another '2 - 1 = 1' sum.
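For a circular orbit this follows from KE = GMm/(2r) and PE = -GMm/r; here's a quick numerical check (a Python sketch; the satellite mass and radii are my own illustrative values):

```python
# For a circular orbit of radius r: KE = GMm/(2r), PE = -GMm/r,
# so total E = -GMm/(2r). Dropping to a smaller radius, the loss
# in PE is exactly twice the gain in KE -- the '2 - 1 = 1' sum.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of the Earth, kg
m = 100.0              # satellite mass, kg (illustrative)

def kinetic(r):
    return G * M * m / (2 * r)

def potential(r):
    return -G * M * m / r

r1, r2 = 7.0e6, 6.9e6  # metres: the satellite drops to a lower orbit

gain_in_ke = kinetic(r2) - kinetic(r1)
loss_in_pe = potential(r1) - potential(r2)
print(loss_in_pe / gain_in_ke)   # 2.0: the PE loss is double the KE gain
```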

There's also the neat but confusing case of a parallel plate capacitor at constant voltage. Let's say a capacitor consists of two large flat plates, a distance of 1 cm apart. The plates are maintained at a constant voltage of say 12 V by a power supply (e.g. a battery). This means that the plates have opposite charge, and so attract each other. (To hold them at a constant distance, you have to fix them in place somehow.) Now, consider pulling those plates apart. Since they attract each other, it is clear that you have to do work on the system to do this. One might therefore expect that the energy stored in the capacitor has gone up. But no. Do the calculation, and you'll see that the energy goes down. (Energy stored = capacitance times voltage squared, divided by two. The voltage stays the same, and since the capacitance is inversely proportional to plate separation, increasing the separation will **decrease** the stored energy.) Uh! Where does the energy go then? In this case, you have to consider the power supply. What happens is that you are putting energy back into the battery, by causing a current to flow backwards through it. It turns out in this case that the work you need to do is exactly half the energy that goes to the power supply. The other half comes from the loss in energy stored in the capacitor. So, if we put in 10 J of energy, we lose 10 J of stored energy in the capacitor, and we gain 20 J of energy in the power supply. So, again, we have the '2 - 1 = 1' sum.
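The same bookkeeping can be checked numerically (a Python sketch; the plate area and separations are my own illustrative values):

```python
# Parallel-plate capacitor held at constant voltage V. Energy stored
# is C V^2 / 2, with C inversely proportional to the separation d.
# Pulling the plates from d1 to d2 (> d1): the stored energy drops,
# charge flows back into the battery, and the mechanical work you do
# is half of what the battery receives -- '2 - 1 = 1' once more.

eps0 = 8.854e-12        # permittivity of free space, F/m
A = 0.1                 # plate area, m^2 (illustrative)
V = 12.0                # volts

def capacitance(d):
    return eps0 * A / d

def stored_energy(d):
    return 0.5 * capacitance(d) * V ** 2

d1, d2 = 0.01, 0.02     # metres: double the separation

delta_stored = stored_energy(d2) - stored_energy(d1)    # negative: capacitor loses energy
delta_charge = (capacitance(d2) - capacitance(d1)) * V  # negative: charge returns to battery
energy_to_battery = -delta_charge * V                   # positive
work_done = energy_to_battery + delta_stored            # energy balance

print(work_done / energy_to_battery)   # 0.5, up to floating-point rounding
```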

So, for every kilojoule of energy burned by the ditch-witch, does the toddler also burn a kilojoule, thus meaning 2 kilojoules of heat end up in the air? (As neat as it would be if that were true, I don't think the actual figures come close.)



Now, with beautiful clear skies, light winds, and frosty mornings, you'd be forgiven for thinking there's a big fat high pressure system sitting over us. But there isn't. For the last few days, we (by which I mean at this end of the country) have been in or around a saddle-point, in terms of pressure. There have been lows to the north and south, highs to the east and west, and somewhere in the middle over us. I note today that things have rotated a bit, so the lows now lie east and west, with a high to the north and another approximately south. Here's a picture I've stolen from the metservice website this morning (www.metservice.com, 18 July 2014, 11 am); it's the forecast for noon today. Note how NZ is sandwiched between two lows, but isn't really covered by either.

You can see the strength of the wind on this plot by the feathers on the arrow symbols. The more feathers, the stronger the winds. (The arrows point in the direction of the wind.) Note how the wind blows clockwise around the low pressures (and anticlockwise, less strongly, around the highs). Have a look just around Cape Reinga (for non-NZ dwellers, and I know there's a few of you out there, that's the northern-most tip of the North Island). There's a point where the wind (anthropomorphising) doesn't know what to do. It's in what's mathematically termed a saddle point. It's a point where locally there is no gradient in pressure, **but it is neither a high nor a low**. Winds are light. In two dimensions (this is what we have on the earth's surface) with a single variable such as pressure, there are those three possibilities where the gradient of pressure is zero - the maximum of a high, the minimum of a low, or a saddle.

In terms of terrain, a mountain pass is a saddle point. It's where one goes from valley to valley (low to low), between two mountains. On top of the pass, you are at a point where the gradient is zero. But it's neither a peak nor a trough. It's called a 'saddle' because the shape looks rather like a saddle for a horse - which is low on both flanks, but high at the front and back. A marble placed on top of a saddle should, if it were placed exactly at the equilibrium point with no vibrations, stay there.
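The standard mathematical example of a saddle is f(x, y) = x^2 - y^2: zero gradient at the origin, but uphill along one axis and downhill along the other. A quick numerical sketch (my own toy example, with a finite-difference gradient):

```python
# A saddle: f(x, y) = x^2 - y^2 has zero gradient at the origin,
# but it's a minimum along x and a maximum along y -- neither a
# peak nor a trough, just like the pressure field near Cape Reinga.

def f(x, y):
    return x ** 2 - y ** 2

def grad(x, y, h=1e-6):
    # central-difference estimate of the gradient
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return dfdx, dfdy

print(grad(0.0, 0.0))          # (0.0, 0.0): no gradient at the saddle
print(f(0.1, 0.0) > f(0, 0))   # True: uphill along x
print(f(0.0, 0.1) < f(0, 0))   # True: downhill along y
```

The tell-tale sign is that small displacements in different directions change f in opposite senses, which is exactly why the equilibrium is unstable.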

Saddle-points crop up in all kinds of dynamical systems (e.g. brain dynamics) where there's more than one variable involved. Such a point is termed an unstable equilibrium - any deviation from the equilibrium point will cause the system to move away from it. However, the change may not be terribly rapid. When there are lots of variables involved, such local equilibria may have very complicated dynamics associated with them - the range of possibilities gets very large and the dynamics can become very rich indeed.


That's great. We can now work on the much simpler stylized system, and make predictions about how it behaves. Transferring those predictions back to the real situation helps us to design suspension systems for real situations.

There are, however, some drawbacks. We have to be sure that our stylized system really does capture the essential features of the actual system; otherwise we can get predictions completely wrong. On the other hand, we don't want to make our model too complicated, otherwise there is no advantage in using the model. "A model should be as simple as possible, but not simpler," as Einstein might have said.

There's another trap for modellers, which is going outside the realm of applicability for the model. What do I mean by that? Well, some simplifications work really well, but only in certain regimes. For example, Newton's laws are a great simplification of relativistic mechanics. They are much easier to work with. However, if you use them when things are moving close to the speed of light, your answers will be incorrect. They may not even be close to what actually happens. We say that Newton's laws apply when velocities are much less than the velocity of light. When that's the case (e.g. traffic going down a road) they work just fine - you'd be silly to use relativity to improve car safety - but when that's not the case (e.g. physics of black holes) you'll get things very wrong indeed.
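To put some numbers on that regime boundary, here's a quick comparison of Newtonian and relativistic kinetic energy (a Python sketch; the mass and speeds are my own illustrative choices):

```python
import math

# Newtonian KE = m v^2 / 2 versus relativistic KE = (gamma - 1) m c^2.
# At everyday speeds the two agree very closely; near c the Newtonian
# answer is badly wrong -- it's outside its realm of applicability.

c = 3.0e8          # speed of light, m/s
m = 1.0            # mass, kg (illustrative)

def ke_newton(v):
    return 0.5 * m * v ** 2

def ke_relativistic(v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c ** 2

for v in [0.01 * c, 0.5 * c, 0.99 * c]:
    ratio = ke_relativistic(v) / ke_newton(v)
    print(f"v = {v:.3g} m/s: relativistic / Newtonian = {ratio:.4f}")
```

At one percent of c the ratio is within a fraction of a percent of 1; at 0.99c the Newtonian formula is wrong by more than a factor of ten.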

A trap for a modeller is to forget where the realm of applicability actually is. In the rush to make approximations and simplifications, just where the boundary is between reasonable and not reasonable can be forgotten. I've been reminded of this this week, while working with some models of the electrical behaviour of the brain. Rather than go into the detail of what that problem was, here's a (rather simpler!) example I came across some time ago now.

I was puzzling over some predictions made in a scientific paper, using a model. It didn't quite seem right to me, though I struggled for a while to put my finger on exactly what I didn't like about what the authors had done. Then I saw it. There were some complicated equations in the model, and to simplify them, they'd made a common mathematical approximation: 1/(1+x) is approximately equal to 1-x. That's a pretty reasonable approximation __so long as__ x is a small number (rather less than 1). We can see how large it's allowed to get by looking at the plot here. The continuous blue line shows y = 1/(1+x); the dotted line shows 1-x. (The inset is the same, at very small x.)

We can see that for very small x (smaller than 0.1 or so) there's not much difference, but when x gets above 0.5 there's a considerable difference between the two. When x gets larger still (above 1), the approximation 1-x goes negative, whereas the unapproximated 1/(1+x) stays positive. It's then a completely invalid approximation.
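For anyone who'd like to reproduce the plot's message numerically, here's a quick tabulation (a trivial Python sketch):

```python
# Compare 1/(1+x) with its small-x approximation 1-x.
# The two agree closely for x << 1, diverge around x ~ 0.5,
# and above x = 1 the approximation even has the wrong sign.

def exact(x):
    return 1.0 / (1.0 + x)

def approx(x):
    return 1.0 - x

for x in [0.01, 0.1, 0.5, 1.0, 2.0]:
    print(f"x = {x}: exact = {exact(x):.4f}, approx = {approx(x):.4f}")
```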

However, in this paper, the authors had made calculations and predictions using a large x. What they got was just, simply, wrong, because they were using the model outside the region where it was valid.

This kind of thing can be really quite subtle, particularly when the system being modelled is complicated (e.g. the brain) and we are desperate to make simplifications and approximations. There's a lot we can do that might actually go beyond what is reasonable, and a good modeller has to look out for this.

