**Isaac Newton is often thought to be the inventor of the apparently self-deprecating phrase ‘If I have seen further it is by standing on the shoulders of giants’, but he was not: actually it had been in use for over 500 years before he repeated it in 1675.** Of more significance is that Newton wrote it in a letter to one of his great scientific rivals… who may have been a hunchbacked dwarf.

Friday afternoons generally bring the welcome arrival of an email from the Science Media Centre (although today – the dreaded *Ides of March* – also brought terrible news from Christchurch; thoughts with the victims, their friends and families). That email, the weekly ‘Science Deadline’ round-up of news in science and technology, generally contains some pithy quote. So here is what appeared in the latest issue:

Now, Newton actually *wrote* something along those lines, rather than *said* it. That is important because we know precisely what he wrote (and to whom), and so we do not need to depend on hearsay.

But leaving pedantry aside, as I have noted in the heading of this piece it was not original to him by any means. The first-known fielding of that metaphor was apparently by Bernard of Chartres in the early 12th century (or possibly in the late 11th). The best-known employment of the statement is undoubtedly that of Sir Isaac Newton, though, and this seems to have led to the mistaken belief that he originated it.

What Newton wrote, specifically, was:

*“If I have seen further it is by standing on ye sholders [sic] of Giants”*

The letter, written by Newton to Robert Hooke in February 1675, is held in the collections of the Historical Society of Pennsylvania, in Philadelphia. The HSP has generously made a scan of that letter available to all in its digital library; the statement appears about two-thirds of the way down the first page of the letter.

The thing is this. Newton and Hooke were huge rivals, and fell out over priority for various scientific discoveries in optics, mechanics, and other areas of what we now term *physics*.

A little example. In high school and first-year university physics one learns *Hooke’s Law of Elasticity*. This says that the extension (of a spring, or other material) is directly related to the force/weight applied. Hooke published his discovery thus: *ceiiinosssttuu*. Make sense? Only if you know how to re-arrange the letters: *ut tensio, sic uis* (in Latin). In English: *as the tension, so the extension*. This was not only a significant scientific discovery, but also a matter of technological (and potentially economic) benefit: think about how spring weighing scales work, or how this might be vital in the quest to make a clock or watch that would keep time accurately on a ship (where a pendulum clock is useless).

Students tend to like learning about the above because of the way it is written as a physical law:

*Y=FL/eA*

Yes, **flea**. This applies to a metallic wire on which varying weights are hung. *Y* (Young’s Modulus of Elasticity) equals the force *F* (*i.e.* the weight) times the length of the wire *L* divided by the product of the extension of the wire that results *e* and the cross-sectional area of the wire *A*.
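To put some numbers on the flea formula, here is a small sketch (the figures are made up for illustration, and the value for steel is only a rough textbook one):

```python
# Young's modulus rearranged to give the extension of a wire: e = F*L / (Y*A).
import math

def extension(force_N, length_m, youngs_modulus_Pa, area_m2):
    """Extension e of a loaded wire, from Y = F*L / (e*A)."""
    return force_N * length_m / (youngs_modulus_Pa * area_m2)

# Illustrative numbers: a 2 m steel wire (Y roughly 2e11 Pa), 1 mm in
# diameter, loaded with a 10 kg mass (F = m*g, about 98 N).
A = math.pi * (0.5e-3) ** 2            # cross-sectional area in m^2
e = extension(98.0, 2.0, 2e11, A)
print(f"extension of the wire: {e * 1000:.2f} mm")   # about 1.25 mm
```

A millimetre or so of stretch in a two-metre wire: small, but easily measurable, which is exactly why the experiment is a laboratory staple.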

Back to Newton and Hooke. It is well-established that there was an intense rivalry between them (and indeed there were arguments with many other natural philosophers of the era, such as Edmond Halley), though in 1675 it seems that much of the Hooke-Newton animosity was still in the future. If one reads the letter in question, it appears quite cordial in tone, covering a variety of scientific matters as well as how each should proceed in terms of widely communicating the results of their investigations such that due credit be given to the proper discoverer. Newton engaged in numerous acrimonious disputes with others with regard to priority on scientific and mathematical discoveries, a prime example being the ‘invention’ of the calculus, Newton accusing Gottfried Wilhelm Leibniz of plagiarising his work. (Echoes of this dispute continue to this day, for example through the use of two different types of notation: a time-derivative may be written as *dx/dt*, or as *x-dot*, a single dot above the *x* for the first derivative [a speed], and two dots for the second derivative [an acceleration].)

As time went by, the relationship between Newton and Hooke decayed, and the former ascended to greater heights than the latter. Most people have heard of Newton; few know of Hooke. Whilst Hooke was seven years older than Newton, he died in 1703, more than two decades before Newton’s expiry, and in the very year that Newton became President of the Royal Society (of London), a post he held until his death in 1727. This is pivotal for the way that history recognises one over the other: Newton, with ‘PRS’ after his name, set about obliterating Hooke’s reputation. Indeed it is thought by many that Newton had all portraits of Hooke destroyed: several paintings of Newton remain, yet there are no contemporary portraits of Hooke.

Without a portrait of Hooke in oils painted at the time, we can know little about the way he appeared in real life. Whilst he was apparently not short, as such, neither was he tall; and he was of slight build. It is also known that from an early age he suffered from kyphosis, a severe convex curvature of the middle spine often termed ‘roundback’. Whilst he might not have been, technically, a hunchbacked dwarf, certainly he was not a giant, and his shoulders were not fit for anyone to stand on.

This realisation has led to some debate during recent decades over just what Newton intended to convey when he wrote to Hooke that “If I have seen further it is by standing on ye sholders of Giants.” Some suggest that it was a savage insult, a highly-personal put-down of Hooke, Newton effectively saying that his work was not based on anything that Hooke might have contributed. Others disagree. What we can be sure of is that neither of them were saints.

In the paragraph quoted by Kate Hannah, then, she is entirely correct to state that some of those whose shoulders we are standing on “were bloody awful.” In reading what I wrote above you may have developed some sympathy for poor deformed Robert Hooke, whose memory was despoiled by big bad Isaac Newton. Let me disabuse you of that notion.

There are many tomes dealing with Hooke in terms of the history of science and his contributions, but more-accessible accounts of the man himself are also available, and I would recommend *The Curious Life of Robert Hooke* (2003) by Lisa Jardine. A few points to ponder… Hooke engaged his niece Grace as a housekeeper, and had an incestuous relationship with her which he recorded in detail in his diary. He had sexual connections (being polite) with various other housekeepers, though he did not record those so diligently, at least one of these resulting in a child of which Hooke denied paternity. If you’ve ever read the unexpurgated version of Samuel Pepys’ Diary, of much the same era in England, you will know that these were days of ribaldry and lewdness in which pretty much anything and anybody were considered fair game, but Hooke’s diary could make even the most unshockable blush. I mean, he left us vivid accounts of his onanistic activities.

In summary: Hooke, like Newton, was a nasty piece of work. As Kate Hannah wrote, they were bloody awful, as people.

Let me now get back to the title of this blog post: *On the Shoulders of Giants?*

Testament to the recognition of the sentence in question (hereafter *SOTSOG* or *OTSOG*) comes from the many ways in which it has appeared in modern culture. This is by no means limited to science, although a book edited by Stephen Hawking did use it as a title:

Given the discussion above about Newton and Hooke, it seems entirely understandable that John and Mary Gribbin should have given their recent volume a related title and sub-title:

This meme mutates a little, but is a mainstay of the titles of books on the history of science:

Many other science books have exploited *OTSOG*, but let’s not be boring. It seems a natural for the autobiography of a basketball player, does it not?

Turns out that, if you do a search, you will find that there are dozens and dozens of books that use the OTSOG allusion. Two about different association football clubs ([Glasgow] Celtic, and Manchester United) were published in 2013 with the same title:

The term makes its way into other languages. Here it is in Italian, courtesy of Umberto Eco:

Deployment of the *shoulders of giants* metaphor is not limited to books; for example, consider the (rock music) albums by the UK group *Oasis*, and a Californian band known as *Tribe of Gypsies*, both released in the year 2000, it happens:

You may have noticed that the Oasis album title features only one shoulder, and that apparently stems from Noel Gallagher (one of the two brothers forming the basis of the band) noticing the inscription SOTSOG on the milled edge of a UK two-pound coin. He was drunk at the time (not an unusual state for the Gallaghers) and wrote it down on a cigarette packet, managing to miss the final letter in *SHOULDERS*.

That coin’s edge is shown in the heading for this blog post. Here is the reverse side of the coin:

When this coin was introduced into circulation in 1997, the Royal Mint (of which, by the way, Newton was Master for three decades) trumpeted how its design was intended to celebrate “the History of Technological Achievement”, therefore copying Newton’s statement on its perimeter (though with “Shoulders” containing a “u”). Only one problem. Count the interlocked cogs forming a circle around the centre of the design. There are 19. That means they will not budge: there has to be an even number if they are to be able to rotate in such an arrangement. Think about it.
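The cog argument can be sketched in a few lines: adjacent meshed gears must spin in opposite directions, so a closed ring of them can turn only if the direction-flips come out consistent after a full lap, which requires an even count. A minimal check:

```python
def ring_of_gears_can_turn(n):
    """A closed ring of n meshed gears can rotate iff n is even.

    Adjacent meshed gears must spin in opposite directions, so try to
    2-colour the ring (+1 / -1) with neighbours opposite; a cycle is
    2-colourable exactly when its length is even.
    """
    directions = [(-1) ** i for i in range(n)]   # alternate +1, -1 around the ring
    # the ring closes: gear n-1 also meshes with gear 0
    return all(directions[i] != directions[(i + 1) % n] for i in range(n))

print(ring_of_gears_can_turn(18))  # True  - an even ring turns freely
print(ring_of_gears_can_turn(19))  # False - the coin's 19 cogs would jam
```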

On the shoulders of giants? There’s a lot more that could be written. But, for now, I am out of space.

*The photograph of the stack of coins that forms the heading illustration for this blog post has an Attribution-NonCommercial-ShareAlike 2.0 Generic (CC BY-NC-SA 2.0) license; this means that it may be shared or indeed adapted for non-commercial purposes so long as an attribution/credit to the original source is given. The creator was user **reway2007**.*

Addendum March 21st: On re-reading this post I realise that I forgot to mention some points which I had intended to make in connection with Newton’s letter to Hooke. These thoughts are highly speculative, and some might even find them absurd.

(a) Newton seems to have used a capital letter to begin the word “Giants”, despite it being the last word in the sentence. If one were being uncharitable towards Newton, one might suggest that this was to accentuate that Hooke was by no means a giant (either physically or, in Newton’s estimation, intellectually).

(b) On the spelling of “sholder”: whilst one might note that word spellings can be fluid in any language and/or Newton may simply have slipped up (*e.g.* earlier in the letter he wrote “yeild”), and also that “sholder” is a known variant in Middle English, if one were again uncharitably-inclined towards Newton then one might interpret the missing letter *u* as indicating that he was saying ‘not you’ to Hooke. I would not indulge myself even further by wondering whether this should be regarded as being U or non-U word usage.


I notice a few regulars no longer allow public access to the site counters. This may happen accidentally when the blog format is altered. If your blog is unexpectedly missing please check this out. Send me the URL for your site meter and I can correct the information in the database.

Similarly, if your blog data in this list seems out of whack, please check your site meter. Usually, the problem is that for some reason your site meter is no longer working.

Sitemeter is no longer working so the total number of NZ blogs in this list has been drastically reduced. I recommend anyone with Sitemeter consider transferring to one of the other meters. See *NZ Blog Rankings FAQ.*

This list is composed automatically from the data in the various site meters used. If you feel the data in this list is wrong could you check to make sure the problem is not with your own site meter? I am of course happy to correct any mistakes that occur in the automatic transfer of data to this list but cannot be responsible for the site meters themselves. They do play up.

Every month I get queries from people wanting their own blog included. I encourage and am happy to respond to queries but have prepared a list of frequently asked questions (FAQs) people can check out. Have a look at *NZ Blog Rankings FAQ.* This is particularly helpful to those wondering how to set up sitemeters. Please note, the system is automatic and relies on blogs having sitemeters which allow public access to the stats.

Here are the rankings of New Zealand blogs with publicly available statistics for March 2018. Ranking is by visit numbers. I have listed the blogs in the table below, together with monthly visits and page view numbers. Meanwhile, I am still keen to hear of any other blogs with publicly available sitemeter or visitor stats that I have missed. Contact me if you know of any or wish help adding publicly available stats to your blog.

**This article was originally published on The Conversation. Read the original article.**

**Mathematics and art are generally viewed as very different disciplines – one devoted to abstract thought, the other to feeling. But sometimes the parallels between the two are uncanny.**

From Islamic tiling to the chaotic patterns of Jackson Pollock, we can see remarkable similarities between art and the mathematical research that follows it. The two modes of thinking are not exactly the same, but, in interesting ways, often one seems to foreshadow the other.

Does art sometimes spur mathematical discovery? There’s no simple answer to this question, but in some instances it seems very likely.

Consider Islamic ornament, such as that found in the Alhambra in Granada, Spain.

In the 14th and 15th centuries, the Alhambra served as the palace and harem of the Berber monarchs. For many visitors, it’s a setting as close to paradise as anything on earth: a series of open courtyards with fountains, surrounded by arcades that provide shelter and shade. The ceilings are molded in elaborate geometric patterns that resemble stalactites. The crowning glory is the ornament in colorful tile on the surrounding walls, which dazzles the eye in a hypnotic way that’s strangely blissful. In a fashion akin to music, the patterns lift the onlooker into an almost out-of-body state, a sort of heavenly rapture.

It’s a triumph of art – and of mathematical reasoning. The ornament explores a branch of mathematics known as tiling, which seeks to fill a space completely with regular geometric patterns. Math shows that a flat surface can be regularly covered by symmetric shapes with three, four and six sides, but not with shapes of five sides.
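That tiling fact can be checked directly: identical regular *n*-gons can meet around a point only when the interior angle, (n − 2) × 180 / n degrees, divides 360 exactly. A quick sketch:

```python
def regular_ngon_tiles_plane(n):
    """True iff some whole number k of regular n-gons can meet around a
    point with no gap, i.e. k * interior_angle == 360 degrees."""
    interior = (n - 2) * 180 / n
    k = 360 / interior
    return abs(k - round(k)) < 1e-9

# Triangles, squares and hexagons tile; pentagons (and everything else) do not.
print([n for n in range(3, 13) if regular_ngon_tiles_plane(n)])  # [3, 4, 6]
```

The pentagon fails because its 108-degree corner would need 3⅓ copies around each point, which is why five-sided regular tiles never appear in these designs.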

It’s also possible to combine different shapes, using triangular, square and hexagonal tiles to fill a space completely. The Alhambra revels in elaborate combinations of this sort, which are hard to see as stable rather than in motion. They seem to spin before our eyes. They trigger our brain into action and, as we look, we arrange and rearrange their patterns in different configurations.

An emotional experience? Very much so. But what’s fascinating about such Islamic tilings is that the work of anonymous artists and craftsmen also displays a near-perfect mastery of mathematical logic. Mathematicians have identified 17 types of symmetry: bilateral symmetry, rotational symmetry and so forth. At least 16 appear in the tilework of the Alhambra, almost as if they were textbook diagrams.

The patterns are not merely beautiful, but mathematically rigorous as well. They explore the fundamental characteristics of symmetry in a surprisingly complete way. Mathematicians, however, did not come up with their analysis of the principles of symmetry until several centuries after the tiles of the Alhambra had been set in place.

Stunning as they are, the decorations of the Alhambra may have been surpassed by a masterpiece in Persia. There, in 1453, anonymous craftsmen at the Darb-i Imam shrine in Isfahan discovered quasicrystalline patterns. These patterns have complex and mysterious mathematical properties that were not analyzed by mathematicians until the discovery of Penrose tilings in the 1970s.

Such patterns fill a space completely with regular shapes, but in a configuration which never repeats itself – indeed, is infinitely nonrepeated – although the mathematical constant known as the Golden Section occurs over and over again.
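As a rough illustration of how the Golden Section emerges: under repeated "inflation" of such a pattern, the counts of the two tile shapes grow in a Fibonacci-like way, so the ratio of consecutive counts tends to φ = (1 + √5)/2. The recurrence below is a sketch of that limit, not actual tiling code:

```python
# The Golden Section phi = (1 + sqrt 5) / 2, approached as the limiting
# ratio of a Fibonacci-style recurrence - the same ratio that governs the
# relative frequencies of the two tile types in a Penrose-like pattern.
import math

phi = (1 + math.sqrt(5)) / 2

a, b = 1, 1                  # Fibonacci-style tile counts
for _ in range(30):
    a, b = b, a + b

print(b / a)                 # converges to phi
print(phi)
```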

Dan Shechtman won the 2011 Nobel Prize in Chemistry for the discovery of quasicrystals, which obey this law of organization. This breakthrough forced scientists to reconsider their conception of the very nature of matter.

In 2005, Harvard physicist Peter James Lu showed that it’s possible to generate such quasicrystalline patterns relatively easily using girih tiles. Girih tiles combine several pure geometric shapes into five patterns: a regular decagon, an irregular hexagon, a bow tie, a rhombus and a regular pentagon.

Whatever the method, it’s clear that the quasicrystalline patterns at Darb-i Imam were created by craftsmen without advanced training in mathematics. It took several more centuries for mathematicians to analyze and articulate what they were doing. In other words, intuition preceded full understanding.

Geometric perspective made it possible to portray the visible world with a new verisimilitude and accuracy, creating an artistic revolution in the Italian Renaissance. One could argue that perspective also led to a major reexamination of the fundamental laws of mathematics.

According to Euclidean mathematics, two parallel lines will remain parallel into infinity and never meet. In the world of Renaissance perspective, however, parallel lines eventually do meet in the far distance at the so-called “vanishing point.” In other words, Renaissance perspective presents a geometry which follows regular mathematical laws, but is non-Euclidean.
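The vanishing point falls straight out of the projection arithmetic: a point (x, y, z) seen from an eye at the origin lands at (x/z, y/z) on the picture plane (taking the focal distance as 1), so two parallel lines receding into the scene are squeezed toward the same image point. A minimal sketch:

```python
# Perspective projection onto a picture plane at focal distance 1:
# a scene point (x, y, z) maps to the image point (x/z, y/z).
def project(x, y, z):
    return (x / z, y / z)

# Two parallel rails, 1 unit apart at eye height y = -1, running straight
# away from the viewer. As z grows, both projected points converge on the
# same spot - the vanishing point (0, 0).
for z in (1, 10, 100, 10000):
    left = project(-0.5, -1.0, z)
    right = project(+0.5, -1.0, z)
    print(z, left, right)
```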

When mathematicians first devised non-Euclidean mathematics in the early 19th century, they imagined a world in which parallel lines meet at infinity. The geometry they explored was, in many ways, similar to that of Renaissance perspective.

Non-Euclidean mathematics has since moved on to explore space which has 12 or 13 dimensions, far outside the world of Renaissance perspective. But it’s worth asking whether Renaissance art may have made it easier to make that initial leap.

An interesting modern case of art that broke traditional boundaries – and that has suggestive parallels with recent developments in mathematics – is that of the paintings of Jackson Pollock.

To those who first encountered them, the paintings of Pollock seemed chaotic and senseless. With time, however, we’ve come to see that they have elements of order, though not a traditional sort. Their shapes are simultaneously predictable and unpredictable, in a fashion similar to the pattern of dripping water from a faucet. There’s no way to predict the exact effect of the next drip. But, if we chart the pattern of drips, we find that they fall within a zone that has a clear shape and boundaries.

Such unpredictability was once out of bounds for mathematicians. But, in recent years, it has become one of the hottest areas of mathematical exploration. For example, chaos theory explores patterns that are not predictable but fall within a definable range of possibilities, while fractal analysis studies shapes that are similar but not identical.
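The logistic map makes a handy stand-in for the dripping faucet: fully deterministic, unpredictable in detail, yet confined to a definable band. A sketch (the parameter values are chosen purely for illustration):

```python
# The logistic map x -> r*x*(1 - x) with r = 3.9 is chaotic: two orbits
# from almost identical starting points diverge rapidly, yet every value
# stays inside the fixed band [0, 1] - wild in detail, confined in range.
def logistic_orbit(x0, r=3.9, steps=1000):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

orbit = logistic_orbit(0.2)
nearby = logistic_orbit(0.2000001)      # start a hair's breadth away
print(min(orbit), max(orbit))           # the orbit never leaves [0, 1]
print(max(abs(p - q) for p, q in zip(orbit[:100], nearby[:100])))  # yet diverges
```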

Pollock himself had no particular interest in mathematics, and little known talent in that arena. His fascination with these forms was intuitive and subjective.

Intriguingly, mathematicians have not been able to accurately describe what Pollock was doing in his paintings. For example, there have been attempts to use fractal analysis to create a numerical “signature” of his style, but so far the method has not worked – we can’t mathematically distinguish Pollock’s autograph work from bad imitations. Even the notion that Pollock employed fractal thoughts is probably incorrect.

Nonetheless, Pollock’s simultaneously chaotic and orderly patterns have suggested a fruitful direction for mathematics. At some point, it may well be possible to describe what Pollock was doing with mathematical tools, and artists will have to move on and mark out a new frontier to explore.

**Henry Adams, Ruth Coulter Heede Professor of Art History, Case Western Reserve University**

**This article was originally published on The Conversation. Read the original article.**

**Image: Is there a geometry lesson hidden in ‘The Last Supper’? Wikimedia Commons**

I recycled a talk that I’d given a couple of years ago on the role of mathematics in physics – specifically comparing and contrasting how practising physicists and students think about how maths works within physics.

My conclusion from the research I’ve done, based on interviewing students and physicists (you can read it in the Waikato Journal of Education here), was that many students find the statement ‘Physics is a science’ difficult. They would prefer to re-write it as ‘Physics is applied mathematics’.

Now, by science here, I mean a body of knowledge based on a systematic, empirical observation of the world. A body of knowledge that is able to generate testable predictions and then accept or reject or refine hypotheses in light of the results of experiments.

I (too naively) assumed that my audience wouldn’t need convincing that physics is a science. Actually, there was some debate on this. One person in particular, a physicist in fact, presented the view that physics is not a science. Biology and Chemistry fit my description of science – being based on experiment – but physics, in its actual outworking, does not. His argument was that the greatest advances in physics have been theoretical and not based on experiment. Quantum mechanics and general relativity are highly theoretical – drawing intensely from mathematics – and any experimental validation of them came long after the theory was accepted (and, in the case of Eddington’s eclipse data, quite possibly fudged). One might put the Higgs Boson into the same category – I suspect that most physicists never doubted that the Higgs Boson would eventually be discovered. That is to say the physics was not based on experiment – the experiments were merely confirming what physics ‘knew’ already. Who is the most famous physicist? Albert Einstein – who never did an experiment in his life. But clearly he was a physicist, not a mathematician.

BUT, his was not the only view. For example, Einstein, the theoretical physicist, obtained his Nobel Prize for his explanation of the photoelectric effect. This was an observed phenomenon that had puzzled physicists – results just didn’t fit with the understanding of the time. And what about the ultraviolet catastrophe? So theoretical approaches were not made in the absence of experiment – there were some uncomfortable phenomena around that were prompting thinking.

So, back to my point. “Physics is a science” being uncomfortable for students of physics. It is clearly not just students that find this uncomfortable. Is that a reason why, perhaps, the University of Western Australia has now moved ‘physics’ out of the Faculty of Science and put it in with engineering (which Waikato did many years ago)?

And, if physicists can’t agree on what physics is, what hope is there convincing students that they should study it? Maybe I should just surrender and become an engineer.

*Featured image: Simulated Higgs Boson data from the Large Hadron Collider / Wikimedia. *

**We have built a world of largely straight lines – the houses we live in, the skyscrapers we work in and the streets we drive on our daily commutes. Yet outside our boxes, nature teems with frilly, crenellated forms, from the fluted surfaces of lettuces and fungi to the frilled skirts of sea slugs and the gorgeous undulations of corals.**

These organisms are biological manifestations of what we call hyperbolic geometry, an alternative to the Euclidean geometry we learn about in school that involves lines, shapes and angles on a flat surface or plane. In hyperbolic geometry, the plane is not necessarily so flat.

Yet while nature has been playing with hyperbolic forms for hundreds of millions of years, mathematicians spent hundreds of years trying to prove that such structures were impossible.

But these efforts led to a realisation that hyperbolic geometry is logically legitimate. And that, in turn, led to the revolution that produced the kind of maths now underlying general relativity, and thus the structure of the universe.

Hyperbolic geometry is radical because it violates one of the axioms of Euclidean geometry, which long stood as a model for reason itself.

The fifth and final axiom of Euclid’s system – the so-called parallel postulate – turns out not to be correct. Or at least not necessarily so. If we accept it, we get Euclidean geometry, but if we abandon it, other geometries become possible, most famously the hyperbolic variety.

Here’s how the parallel postulate works. Consider a simple question: if I have a straight line, and a point outside the line, how many straight lines can I draw through the point that never meet the original line? Euclid said the answer is *one* and there couldn’t be any more, which feels intuitively right.

Mathematicians, being sticklers, wanted to prove this was true, but in the end, such efforts led them to see that there is a logically consistent geometric system in which the answer is infinity. We can represent the situation as follows.

This seems impossible and a first reaction is to say it’s cheating because the lines look curved. But they only look curved because we’re trying to project an image of a curved surface onto a flat plane.

It’s the same as when we’re trying to project an image of the surface of the Earth onto a flat map; the relationships get distorted. To really see countries relative to one another we have to look at a globe.

So also with hyperbolic geometry. To really see what’s going on we have to look at the curved surface itself, and here the lines are straight.

One way of understanding different geometries is in terms of their curvature. A flat, or Euclidean plane has zero curvature. The surface of a sphere (like a beach ball) has positive curvature, and a hyperbolic plane has negative curvature. It’s a geometric analogue of a negative number.
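One way to see the three curvatures is through how the circumference of a circle grows with its radius: exactly 2πr on a flat plane, less on a sphere of unit radius (2π sin r), and more on a hyperbolic plane of curvature −1 (2π sinh r). That surplus circumference is what forces the frilling. A quick check:

```python
# Circumference of a geodesic circle of radius r on three surfaces:
#   flat plane (zero curvature):       C = 2*pi*r
#   unit sphere (positive curvature):  C = 2*pi*sin(r)  - less room
#   hyperbolic plane (curvature -1):   C = 2*pi*sinh(r) - more room
import math

r = 1.0
flat = 2 * math.pi * r
spherical = 2 * math.pi * math.sin(r)
hyperbolic = 2 * math.pi * math.sinh(r)
print(spherical, flat, hyperbolic)   # strictly increasing
```

The hyperbolic surface has to fit that extra circumference in somehow, and buckling into ruffles is how a lettuce leaf or coral does it.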

When mathematicians discovered this aberrant geometry in the early 19th century they were nearly driven mad. “For God’s sake please give it up,” said the Hungarian mathematician Wolfgang Bolyai to his son János Bolyai, urging him to abandon his work on hyperbolic geometry.

Yet critters who’d never studied non-Euclidean geometry had meanwhile just been doing it. Along with corals, many other species of reef organisms have hyperbolic forms, including sponges and kelps.

Wherever there is an advantage to maximising surface area – such as for filter feeding animals – hyperbolic shapes are an excellent solution. There are hyperbolic structures in cells, hyperbolic cacti and hyperbolic flowers, such as calla lilies. In the film Avatar, there is a fabulous CGI grove of giant hyperbolic blooms that curl up when touched.

Hyperbolic surfaces can also be built at the molecular scale from carbon atoms. These carbon nanofoams were discovered in 1997 by physicist Andrei Rode and his colleagues at the Australian National University.

That year Cornell mathematician Daina Taimina also worked out how to model such surfaces using crochet, which was a big deal because it’s actually hard for humans to construct these forms.

For the past 10 years, I’ve been spearheading a project where we use hyperbolic crochet to make woolly simulations of coral reefs. Our Crochet Coral Reefs are an artistic response to the devastation of living reefs due to global warming and have been exhibited at art galleries and science museums around the world, including the Smithsonian.

Here, a ball of wool and a crochet hook become pedagogical tools bringing mathematics out of textbooks and taking it to people as a living tactile experience.

More than 8,000 women in a dozen countries (including Australia, the United States of America, and the United Arab Emirates) have participated in making these installations, which reside at the intersection of mathematics, marine biology, community art practice and environmentalism.

Once mathematicians realised that different geometrical spaces are possible, a question arose as to which one is realised in physical space. What is the shape of our universe?

Galileo Galilei and Isaac Newton founded modern physics on the assumption that space is Euclidean, but Albert Einstein’s equations of general relativity describe a universe that can have complex curved forms.

One of the major questions astronomers are trying to resolve, with instruments such as the Hubble Space Telescope, is what shape our universe has. While most of the large-scale evidence points to a Euclidean structure, there is some tantalising evidence that we might just live in a hyperbolic world.

**Margaret Wertheim, Vice-Chancellor’s Fellow in Science Communication, University of Melbourne**

**This article was originally published on The Conversation. Read the original article.**

*Featured image: Wikimedia CC, Toby Hudson.*

The 1-dimensional quantum harmonic oscillator problem is a textbook problem that gets inflicted on generations of students. I remember suffering the algebra that went with it. At the University of Waikato, we save our second year students the algebra by just talking about the solutions, but then spring it on them in third year. For those who like that kind of thing, it's an interesting analysis, but for those that don't, it really is quite horrible.

Perhaps that is what motivated Paul Dirac to come up with (in my opinion) a really elegant complementary approach to solving the 1-dimensional quantum harmonic oscillator problem. While his approach is easily found in text-books, what I haven't been able to track down is a description of **how** he came up with it. The same seems to be true of many of the analyses that get wheeled out to students. While they look clean and tidy when presented now, I'm left with the question "How did they come up with this?". That tends to be overlooked in favour of the end product. Did Dirac spend weeks pondering over this, thinking "there must be a better approach - the symmetry between p and x in this equation should surely be exploitable somehow...", was it a sudden revelation, did he try twenty different approaches till something worked, or what? My text books don't say.

What Dirac did was to reformulate the problem in terms of 'raising' and 'lowering' operators. He recast the problem as a ladder of energy levels, and showed rather elegantly that these energy levels were equally spaced. Moreover, some rather neat operators that he defined could move a quantum state 'up' or 'down' the ladder. That's a very creative way of looking at the problem, and has been taken much further since then. For example, when analyzing problems with many electrons (which generally means just about anything electronic) we can now formulate the problem in terms of operators that create and destroy electrons. Whether electrons really are being created and destroyed is a moot point, but the formulation is a neat one that helps us to analyze what is going on. Theoretical physicists consider it a really useful 'tool' of the trade, even though the history behind its construction tends to be overlooked when we teach it.
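Dirac's ladder can be seen numerically by writing the operators as matrices in a truncated basis. A sketch in units ħ = ω = 1, using the standard matrix elements a|n⟩ = √n |n−1⟩:

```python
# Dirac's ladder operators in a truncated Fock basis. The lowering
# operator a has matrix elements a[n-1, n] = sqrt(n); the Hamiltonian
# H = a†a + 1/2 then has the equally spaced eigenvalues 1/2, 3/2, 5/2, ...
# that Dirac's argument predicts (in units hbar = omega = 1).
import numpy as np

N = 8                                          # basis size (truncation)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # lowering operator
adag = a.T                                     # raising operator
H = adag @ a + 0.5 * np.eye(N)

energies = np.sort(np.linalg.eigvalsh(H))
print(energies)                                # 0.5, 1.5, 2.5, ...

# The raising operator moves a state one rung up: a†|0> = |1>.
ground = np.zeros(N)
ground[0] = 1.0
first = adag @ ground
print(first[1])                                # coefficient on |1>
```

The equal spacing drops out of the eigenvalues directly, with no Hermite-polynomial algebra in sight, which is exactly the elegance of the operator approach.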

So what is the point of me telling you this? Well, it's about teaching. Just how do you teach creativity, especially in something that is, on the face of it, as tedious as physics? Physics isn't actually tedious (if it were, I wouldn't be sitting here writing this) but we do tend to make it unnecessarily so at times. I wonder whether that's because it's the easiest path to take for undergraduate teaching. At PhD level and beyond, there's some really creative research going on, but do our undergraduates really see it? Likewise, from what I've seen at school science fairs, there's some great creativity at primary and intermediate school level, but it then vanishes late in secondary school in favour of 'content'. Somehow, we tend to smother creativity and elegance in favour of 'something-that-gets-the-job-done'. But truly great physicists, Dirac included, have never 'just-got-the-job-done'.

Open-ended projects are one way to go (and we manage to do this to some extent with our **engineering** students), but, as many readers know, we run into trouble with time, the need to prepare students for exams, fitting in with timetabling requirements, and so forth. The problem may go much deeper than we think - indeed, does the whole secondary and tertiary education structure smother out creativity in students (at least in physics)?

And with that, have a creative Christmas, and Happy New Year to you all! I'll be heading southward next week to the Canterbury hills - a part of the country I haven't been to before.

**How did you get involved in this research?**

We were interested in a way of modelling fish populations from an academic point of view. The idea of balanced harvesting was put forward in an article in *Science* and we realised we had a model we could test it with. The result was not really what we expected, but when you stop and think about it, it makes a lot of sense. That’s the power of mathematics, giving you insight into why you are seeing the results that you are. That was the first time that a group had combined balanced harvesting with a model that keeps track of how biomass flows through the population, from prey to predator to offspring.

**What is balanced harvesting, and what are the benefits?**

It means harvesting fish according to their natural productivity. Small fish tend to have higher productivity than big ones because they grow faster and there are more of them, so in practice balanced harvesting means catching more small fish. This allows you to net bigger catches, in the same way that eating plants is more efficient than eating meat, because it’s lower down the food chain. An adult fish can produce millions of eggs over its lifetime, so it’s worth protecting big fish as it helps ensure a sustainable population.

**We’ve been targeting big fish. What has that done to our oceans?**

It has massively depleted the numbers of large adults, with knock-on effects for the rest of the ecosystem. It has even caused fish to evolve to mature at smaller sizes. For example, adult Northeast Arctic cod are now almost 2 kg smaller than they were 70 years ago.

**What size fish should fishers be catching?**

At Te Pūnaha Matatini, we are investigating what would happen if fishers were free to target whatever size they wanted. Would their catches go up or down and would it be sustainable? The best size to catch depends on the species and it’s difficult to model species in isolation because they influence each other. We are working towards a model that looks at lots of species together, which is what we need to really understand the best way to fish sustainably at the same time as feeding a growing population.

**Is this happening anywhere in the real world, and is it likely to happen in New Zealand?**

There are examples of small-scale fisheries in Africa that catch small fish, yet are both high-yielding and sustainable. In New Zealand, anyone who’s eaten whitebait knows that small fish can taste good; and traditional Māori practice has often targeted the small and thrown back the large. Perhaps there are things we can learn from this when managing commercial stocks here.

**These interviews showcase researchers supported by the Marsden Fund which, since 1994, has been supporting fundamental, investigator-led research in New Zealand.**

There's a fine line between an excellently-performing robot and a disaster. To be fair to the students, they haven't come across control theory yet, so for them to identify what's going on when the robot veers off sideways and accelerates into the wall is often not easy. There's been one common problem that the groups have been tackling, namely instability in their tracking.

Most groups are using two sensors to look for the white line. Crudely speaking, if the robot veers off to the right, the left-hand sensor will cross the line, the robot will 'realize' this, and turn to the left. Conversely, if it goes too far to the left, the right-hand sensor crosses the line and the robot will respond by turning right. But getting the control system stable isn't as straightforward as that.

If the robot doesn't turn hard enough, what happens is that it fails to get round corners. It goes off the line completely, so that neither sensor now sees the line, and then it's doomed. However, if it turns too hard, it can over-adjust, so that it now veers off the line on the other side. What can happen now is an oscillation: the robot drifts off to the left - so it then corrects and moves hard to the right - but it goes too far right and now it needs to turn hard back left, and so on.

We can end up with the robot either wiggling along a perfectly straight line or, worse, progressively over-correcting until the corrections become so large it loses the line completely. The former is an example of a 'limit cycle', or 'attractor' - a systems-theory term for a stable but possibly rather complicated oscillation.
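You can see this limit-cycle behaviour in a toy simulation (a sketch of my own, not the students' actual code). The robot's lateral offset drifts with its heading, and a bang-bang rule turns it whenever a sensor (placed 0.1 units either side of centre) crosses the line. With no damping, the robot never settles - it keeps overshooting both sensors forever:

```python
def simulate(gain, steps=400, dt=0.05, v=1.0):
    # x: lateral offset from the line; th: heading angle (small-angle approximation)
    x, th = 0.13, 0.0
    xs = []
    for _ in range(steps):
        if x > 0.1:            # left sensor over the line -> steer left
            th -= gain * dt
        elif x < -0.1:         # right sensor over the line -> steer right
            th += gain * dt
        x += v * th * dt       # sideways drift at the current heading
        xs.append(x)
    return xs

tail = simulate(3.0)[200:]     # discard the initial transient
print(max(tail) > 0.1 and min(tail) < -0.1)   # still overshooting both ways: a limit cycle
```

A stable wiggle, in other words - the oscillation is bounded but permanent, because the controller only ever reacts after a sensor has already crossed the line.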

More amusing this morning was the poor robot that ended up going in ever-decreasing circles. Just what was happening with it I'm not sure, but it veered off the line, did a large-diameter circle, and continued in a circular orbit, gradually speeding up and reducing its diameter. It ended up spinning on the spot with the left wheel on maximum forward speed and the right wheel on maximum reverse. This is another (and more entertaining) example of a limit cycle - once it had got into the spinning state, that was where it was going to stay.

Preventing these things is a bit easier when you know some control theory (see here for example) and can apply the negative feedback in a sensible manner - but we teach them that later. For now, it's about the design process (and the entertainment value).

In the meantime, let’s consider two concepts this election hangs on - the so-called “**Wasted vote**” and the “**Overhang**.” The Wasted vote is the proportion of votes that go to parties that do not make it into parliament - that is, parties that neither cross the 5% Party-vote threshold **nor** win at least one electorate seat. The Overhang occurs when a party or parties win more electorate seats than their share of the Party vote entitles them to, in which case the size of parliament increases beyond its normal 120 seats (each seat normally corresponding to 1/120th of the effective Party vote).

The number of seats in parliament is crucial because it sets the number of seats a party or block of parties must win in order to form a governing majority: 61 seats for a 120- or 121-member parliament, 62 for 122 or 123, and 63 for a 124-member parliament.
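The arithmetic here is just “more than half the house”, which a one-liner makes plain (a purely illustrative sketch of my own):

```python
def majority(seats):
    # smallest number of seats that is more than half the house
    return seats // 2 + 1

print([majority(n) for n in range(120, 125)])  # [61, 61, 62, 62, 63]
```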

Two ideas about the Wasted vote are important:

*The Wasted vote supports the party already with the most votes the most*

**The Wasted vote could determine who governs!**

Let’s assume that 61 seats are necessary in a 120-seat parliament, i.e. a block needs 61/120ths of the party vote (50.83%) to govern. Crucially, this is NOT the percentage of the vote that the block gains on the night (which is what the polls try to predict). Rather, it is the “effective percentage” *after* the Wasted votes are taken into account. A scenario may help. Consider an election in which two parties cross the 5% threshold to get into parliament and all the remaining votes are wasted. Let’s call the two parties the Big, Rich and Totally Selfish (BRATS*) party and the Really After Total State (RATS**) party. Suppose there are 1 million voters. BRATS gets 450,000 votes on the night (45%), but 10% (100,000) of the vote is Wasted. The proportion of votes the BRATS get out of the non-wasted votes is then 450,000/900,000, giving an “effective percentage” of 50%, which would give them 60 seats in parliament. The RATS would have the same in this scenario. We can turn the question around and ask: how high a proportion of the total vote does the Wasted vote have to be for the BRATS’ “effective percentage” to cross the 50.83% needed to govern? This depends on the total proportion of votes the BRATS receive (in our example, 45%). The graph below illustrates this.
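The worked example above takes two lines to check (a sketch of my own; the function names are mine):

```python
def effective_share(party_votes, wasted_votes, total_votes):
    # seat allocation works off the non-wasted vote only
    return party_votes / (total_votes - wasted_votes)

def wasted_needed(party_share, target=61/120):
    # wasted fraction at which party_share of the total vote
    # becomes target of the effective (non-wasted) vote
    return 1 - party_share / target

# the BRATS scenario: 1,000,000 voters, 450,000 votes, 100,000 wasted
print(effective_share(450_000, 100_000, 1_000_000))  # 0.5 -> 60 of 120 seats
print(round(wasted_needed(0.45), 3))                 # 0.115 -> about 11.5% must be wasted
```

So on 45% of the total vote, the BRATS govern outright once a little over 11% of all votes are wasted.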

So, folks, if on the night your vote is in the waste basket, rest assured it will have an effect on the outcome of this election. *The only truly wasted vote is the one that is not cast!*

_______________________________________

*Led by Mr I.M. Wright

** Led by Mr M.Y. Tern


With dark evenings and mornings with us now :(, Benjamin's become interested in the dark. It's dark after he's finished tea, and he likes to be taken outside to see the dark, the moon, and stars, before his bath. "See dark" has become a predictable request after he's finished stuffing himself full of dinner. It's usually accompanied by a hopeful "Moon?" (pronounced "Moo") to which Daddy has had to tell him that the moon is now a morning moon, and it will be way past his bedtime before it rises.

I haven't yet explained that his request is an oxymoron. How can one see the dark? Given that dark is the lack of light, what we are really doing is not seeing. But there's plenty of precedent for treating the lack of something as an entity in itself, so 'seeing the dark' is quite a reasonable way of looking at it.

One can talk about cold, for example: "Feel how cold it is this morning." It is heat, a form of energy, that is the physical entity here. Cold is really the lack of heat, but we're happy to talk about it as if it were a thing in itself. Another example: in 1928 Paul Dirac interpreted the lack of electrons in the negative-energy states that arise from his description of relativistic quantum mechanics as anti-electrons, or positrons. In fact, this was a __prediction__ of the existence of anti-matter - the discovery of the positron didn't come until later.

In semiconductor physics, we have 'holes'. These are the lack of electrons in a valence band - a 'band' being a broad region of energy states where electrons can exist. If we take an electron out of the band we leave a 'hole'. This enables nearby electrons to move into the hole, leaving another hole, and in this way holes can move through a material. It's rather like one of those slidy puzzles - move the pieces one space at a time to create the picture.

Holes are a little bit tricky to teach at first. Taking an electron out of a material leaves it positively charged, so we say a hole has a positive charge. That's a bit confusing - some students will start off thinking that holes are protons. Holes will accelerate if an electric field is applied (because they have positive charge), and so we can attribute a mass to the hole. That's another conceptual jump: how can the lack of something have a mass? And holes, because they are the lack of an electron, tend to move to the highest available energy states, not the lowest. Once the idea is grasped, though, we can start talking about holes as real things, and that is pretty well what solid-state physics textbooks do. It works to treat them as positively charged particles. It's easy then to forget that we are talking about things that are really the lack of something, rather than something in themselves.
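The slidy-puzzle analogy is easy to sketch (a toy of my own, not real band-structure code): represent a 1-D band as sites that are either occupied or vacant; an electron hopping one way moves the hole the other way.

```python
# 1 = electron, 0 = the hole (a missing electron)
band = [1, 1, 1, 0, 1, 1]
hole = band.index(0)

# the electron to the right of the vacancy hops left into it...
band[hole], band[hole + 1] = band[hole + 1], band[hole]

# ...and the 'hole' has moved one site to the right
print(band.index(0))  # 4
```

Nothing positive ever moved - only electrons did - yet it's far more economical to describe the motion of the single vacancy than of the five electrons.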

A more recent example is being developed in relation to the mechanics of materials, as part of a Marsden-funded project by my colleague Ilanko. He's working with negative masses and stiffnesses on structures, as a way of facilitating the analysis of the vibrational states and resonances of a structure (e.g. a building). By treating the lack of something as a real thing, we often find our physics comes just a bit easier to work through.

So seeing the dark is not such a silly request, after all.
