# Physics Stop

## A light puzzle

*Marcus Wilson, Jul 13*

Here's a puzzling photograph that Hans Bachor showed me at the end of the NZ Institute of Physics conference last week. It comes from his public lecture on lasers a week ago. And we don't have the answer to it, so maybe you can enlighten us (pun intended).

The photo is of a demonstration of total internal reflection with a laser. Hans is holding a container of water, which has a small hole at the bottom. Consequently there is a jet of water emerging. A laser is held up to the container, and with careful orientation it can be made to shine down the stream of water. The light follows the water, due to total internal reflection at the boundary between the water and the air (rather like a fibre-optic). Actually, it's not TOTAL internal reflection – if it were we wouldn't see the light escaping from the stream of water, but a great proportion of it is contained within the water stream.
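As a rough check of the geometry (assuming refractive indices of about 1.33 for water and 1.00 for air), Snell's law gives the critical angle beyond which light striking the water-air boundary is totally internally reflected:

```python
import math

# Critical angle for total internal reflection at a water-air boundary.
# Assumed refractive indices: n_water ~ 1.33, n_air ~ 1.00.
n_water = 1.33
n_air = 1.00

# Snell's law with the refracted ray at 90 degrees gives
# sin(theta_c) = n_air / n_water.
theta_c = math.degrees(math.asin(n_air / n_water))
print(f"Critical angle: {theta_c:.1f} degrees")  # about 48.8 degrees
```

Rays travelling down the stream at grazing angles to its surface comfortably exceed this, which is why most (though not all) of the light stays inside.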

Now, in this case, Hans didn't quite get the hole the right size and shape, so the stream breaks up into discrete droplets, which you can see in the photograph. Here's the puzzle: look at the droplets and you can see that a couple of them are shining green – i.e. they appear to have laser light in them.

But how does that work? Light moves so much faster than water that one can consider the water to be 'frozen' in space as far as the light is concerned. While the laser light will happily travel along the water stream, when the stream breaks up into drops there is no total internal reflection anymore. The drops should not be glowing. Perhaps the light is jumping from drop to drop to drop. Unlikely – each drop will scatter the light considerably, so that very little will jump from one drop to the next – let alone across many drops.

As you think about this, you should bear in mind the conditions under which the photograph was taken. It's a flash photograph, but it's likely that the shutter is open for longer than the flash illuminates the scene. This might (or might not) be significant, since the flash will capture the position of the water stream, but the shutter will still be letting in light from the laser even after the flash has stopped. So the capturing of the 'green' laser light in the photograph is not completely synchronized with the capturing of the rest of the image.

Our best hypothesis is that the drops are illuminated directly by light emerging from the end of the stream – that is, the light leaves the stream, travels through the air, and hits a drop. In the spirit of Eugenia Etkina's ISLE approach, then: are there other hypotheses, and what experiments can we formulate to test them?

## High-tech, low-tech, planetary observations

*Marcus Wilson, Jul 01*

First the low-tech: the conjunction of Venus (the brighter one) and Jupiter as recorded by my very lousy cellphone camera just after sunset yesterday.

Now the high-tech: A day before that Pluto occulted a star. It moved in front of the star, rather like an eclipse. The significance of the event was that it allowed Pluto's atmosphere to be studied – by looking at the way the light moved through and around the atmosphere, various properties of the atmosphere can be inferred. The SOFIA project was in action, capturing the event, at the other end of the cost spectrum to my mobile phone.

There'll be more Pluto excitement coming as the New Horizons probe flies close by in just a couple of weeks.

P.S. I should add the conversation I had with my son (just turned 3) yesterday, after showing him the planets outside.

Benjamin: "I don't like planets"

Me: "Why not?"

Benjamin: "Because they're quite noisy"

Me: "How are they noisy?"

Benjamin: "Because Grandad says they're quite loud, actually."

Umm…. Work that one out!

## The equation of time strikes again

*Marcus Wilson, Jun 17*

Some of us are rather looking forward to getting to 22 June. That's when the days start getting longer again. Yes, the reality is that no-one's really going to notice much difference for a while, but it's encouraging to think that the days will be getting lighter again, if only by a little bit. Don't confuse that with temperatures getting warmer – the coldest day (on average, of course) lags the darkest day quite considerably. Here it's around the end of July.

But there's an interesting effect going on with sunrise and sunset. We've already had the darkest evening (hooray!) yet the darkest morning is still to come. Look at the sunrise and sunset times (for Hamilton) on the MetService website: today we've had sunrise at 7.32am and with cloudless skies the sun may stay out all the way to sunset at 5.07pm. But tomorrow sunset is recorded at 5.08pm (later!) and on Saturday sunrise has shifted to 7.33am (also later!). How can that be?

The point here is that the length of a day – meaning now the time between when the sun is at its highest and the next time the sun is at its highest – is only 24 hours on average (for some periods of the year it's greater, for others it's less), and isn't equal to the time it takes for the earth to spin once on its axis.

Let's take this last point first. It's solar midday, meaning that the sun is at its highest. Now, let the earth rotate exactly once on its axis. Do we get back to solar midday the next day? No. That's because, in the time taken for the earth to rotate once, it has also moved along its orbit (about 1/365th of the way around). That means it's got to spin a little bit more before the sun reaches its highest point. The time to spin once (the sidereal day) is about 23 hours 56 minutes – four minutes less than the mean solar day. Note that 4 minutes × 365 ≈ 24 hours – which means one more revolution than you might expect: the earth actually does about 366 and a quarter revolutions each year.
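The sidereal-day arithmetic above is easy to check (using the approximate figures of about 365.25 solar days, and hence about 366.25 rotations, per year):

```python
# Rough check of the sidereal-day arithmetic: 366.25 rotations share the
# same year as 365.25 mean solar days, so each rotation is a bit shorter.
solar_day_min = 24 * 60                                # mean solar day, minutes
sidereal_day_min = solar_day_min * 365.25 / 366.25

hours, minutes = divmod(sidereal_day_min, 60)
print(f"Sidereal day ~ {int(hours)} h {minutes:.0f} min")   # ~ 23 h 56 min
print(f"Shorter by ~ {solar_day_min - sidereal_day_min:.1f} minutes")  # ~ 3.9
```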

However, the movement along the earth's orbit in a day is only on average 1/365th of the orbit. When the earth is closest to the sun (called perihelion – 3 January at present) it moves faster. That's Kepler's second law. When it's further away (at this time of year) it moves slower. That would mean that in January, we should perceive that solar midday gets later every day by our watch (since the earth needs extra time to spin that extra bit more), but that in July, the solar midday should be getting earlier. However, that's not what is observed. Our prediction for January is true, but for July it's the other way around – solar midday actually gets later as measured by our watches.

There's another effect going on. This is because the earth is tilted on its axis. However, it's quite tricky to explain why that makes a difference. Consider the transition from winter to summer in the southern hemisphere. If we look at the position of the sun at sunrise and sunset, we see it move southward from one day to the next. What is significant is that at sunset the sun is further southward than it was at the previous sunrise. That gives us a shift in the measured time between one solar midday and the next. This effect is zero at the solstices and equinoxes, and goes through two cycles a year. Add this to the effect of Kepler's second law, and we get the odd-looking curve that is called 'the equation of time', which means that, at present, each solar day is slightly longer than 24 hours, giving both lighter evenings and darker mornings.
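A commonly quoted approximation captures both contributions in one formula – the number of minutes that sundial (solar) time runs ahead of clock time on day N of the year. (The sin(2B) term is the twice-a-year axial-tilt effect; the other terms roughly represent the eccentricity contribution. The coefficients are a standard fit, not derived here.)

```python
import math

# Approximate equation of time: minutes by which sundial time leads
# clock time on day n_day of the year (standard approximation).
def equation_of_time(n_day):
    b = math.radians(360.0 / 365.0 * (n_day - 81))
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# Around mid-June the value falls day by day: each solar day is a little
# longer than 24 hours, so solar midday drifts later against the clock.
for day in (165, 170, 175):
    print(day, round(equation_of_time(day), 2))
```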

You can see the net result displayed on the ground under the sundial in Hamilton Gardens. The elongated figure-of-eight is called an 'analemma'. It shows the position of the tip of the pole's shadow at a given time of day across the different times of year.

## A blatant plug for the NZIP2015 conference

*Marcus Wilson, May 06*

There's no hiding my conflicts of interest here. I'm on the New Zealand Institute of Physics 2015 conference organizing committee. I'm also the NZIP treasurer. And I'm a staff member at the host organization. So, to contribute to the New Zealand physics community's biennial event in Hamilton on 6–8 July, click on this link.

But why? Pick from the following:

a. Because you get to meet colleagues and actually talk with them.

b. Because you get to hear and discuss first hand about some of the exciting physics work that goes on in New Zealand

c. Because you get to meet, talk to, and learn from Eugenia Etkina, who is one of the most honoured and respected physics educators in the US. She has researched, in particular, student learning through practical experiments and how to maximize it. But she's also looked at the modern physics curriculum more generally. And she'll be here with us to share it all.

d. Because you get to celebrate the International Year of Light (which, by the way, was designated by UNESCO following lobbying from a handful of countries including New Zealand)

e. Because you get to experience practical examples of Bessel functions. (You may need to click here for an explanation).

So, no excuses. See you in The Tron in July.

## The difference between a theoretical physicist and a mathematician is…

*Marcus Wilson, Mar 13*

A mathematician can say what he likes… A physicist has to be at least partly sane

J. Willard Gibbs

What is it that makes a physicist sane (if only in part)? Everything has to be related back to the 'real world', or the 'real universe'. That is, a physicist has to talk about how things work in the world or universe in which we live, not some hypothetical universe. That's how I think of it, and I know, having done a bit of research with some of my students, a lot of them think the same way. That's not to say mathematicians don't have a lot to say about this universe too. It's just that the constraints on them are somewhat less.

Another way of looking at it is that physicists work with dimensioned quantities. Most things of physical relevance have dimensions. For example, a book has a length, width and thickness. All of these are distances, and can be measured. The unit doesn't matter; we could use centimetres, inches or light-years – but the physical size of the object is determined by lengths. Also, the book has a mass (one could measure it in kilograms). It might find its way onto my desk at a particular time (measured, for example, in hours, minutes, seconds, millennia or whatever). Perhaps it is falling at a particular velocity – which describes what distance it travels in a particular time. All of these things are physical quantities, and they carry dimensions.

One of my pet hates as a physicist is reading physics material in which the dimensions have been removed. You can do this by writing lengths in terms of a 'standard' length, but then only quoting how many of the standard length it is. So we might talk about lengths in terms of the length of a piece of A4 paper (which happens to be 297 mm); a piece of A2 paper has length 2 standard-lengths, and an area of 4 standard-areas. The problem really comes when the discussion drops the 'standard-length' or 'standard-area' bit and we are left with statements such as a piece of A2 paper has a length of 2 and an area of 4.  It is left to the reader to work out what this actually means in practice. A mathematician can get away with it – she can say what she likes, but not so the physicist.

Here's a question which illustrates the point: what is the length of a side of a cube whose volume is equal to its surface area? The over-zealous mathematics student blunders straight in there: let the length be x. Then the volume is x^3, and the surface area is 6x^2 (the area of a face is x^2, and there are six on a cube). So x^3 = 6x^2; cancelling x^2 from both sides, we have x = 6. Six what? Centimetres, inches, furlongs, parsecs? The point is that the volume of a cube can never be equal to its surface area. Volume and area are fundamentally different things.
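The unit-dependence is easy to see numerically: the "equality" only holds for one particular choice of unit.

```python
# A 6 cm cube: volume and surface area are numerically equal, but only
# because we happened to measure in centimetres.
side_cm = 6.0
print(side_cm**3, side_cm**2 * 6)     # 216.0 216.0 (but cm^3 vs cm^2!)

# The same physical cube measured in metres: the numbers no longer match,
# because volume and area scale differently under a change of unit.
side_m = side_cm / 100.0
print(side_m**3, side_m**2 * 6)       # no longer equal
```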

The Wikipedia page on 'fundamental units', along with many textbooks, blunders in this way too. The authors should really know better. (Yes, I should fix it, I know…) For example:

A widely used choice is the so-called Planck units, which are defined by setting ħ = c = G = 1

No, NO, NO! What is wrong with this? How can the speed of light 'c' be EQUAL to Newton's constant of gravitation 'G'? They are fundamentally different things. The speed of light is a speed (distance per unit time); Newton's constant of gravitation is… well… it's a length-cubed per mass per time-squared. It's certainly not a speed, so it can't possibly be equal to the speed of light. And neither can be equal to 1, which is a dimensionless number. What the statement should say is that c = 1 length-unit per time-unit, and G = 1 length-unit-cubed per mass-unit per time-unit-squared.

However, doing physics can be more complicated than this. A lot of physics is now done by computer. In writing a computer programme to do a physics calculation, we almost always have no explicit record of the units or dimensions in our calculations. Our variables are just numbers. It's left to us to keep track of what units each of these numbers is in. Strictly speaking, I'd say it's rather slack. It would be nice to have a physics-programming language that actually keeps track of the units as well. However, I'm not aware of one. (If someone could enlighten me otherwise, that would be fascinating…) Otherwise, I'll have to have a go at constructing one.
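As a toy sketch of what such unit-tracking might look like (this is purely illustrative – not an existing language or library): each quantity carries exponents of length, mass and time, multiplication and division combine them, and addition refuses to mix incompatible dimensions.

```python
# Toy dimensioned-quantity class: dims = (length, mass, time) exponents.
class Quantity:
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):
        # Adding metres to seconds is a physics error, so make it a type error.
        if self.dims != other.dims:
            raise TypeError("cannot add quantities with different dimensions")
        return Quantity(self.value + other.value, self.dims)

metre = Quantity(1.0, (1, 0, 0))
second = Quantity(1.0, (0, 0, 1))

speed = Quantity(3.0, (0, 0, 0)) * metre / second   # 3 m/s
print(speed.value, speed.dims)                       # 3.0 (1, 0, -1)

# A dimensional blunder is now caught, not silently computed:
try:
    speed + second
except TypeError as e:
    print(e)
```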

What's prompted this little piece is that I've been reviewing a paper that has been submitted to a physics journal. The authors have standardized the dimensions out of existence, which makes it awfully hard for me to work out what things mean physically. Just how fast is a speed of 1.5? How many centimetres per second is it? While that might be an answer their computer programme spits out, the authors really should have made the effort of turning it back into something that relates to the real world. In a mathematics journal, they might get away with it. But not in a physics journal. At least, not if I'm a reviewer…

## Seeing spots before my eyes

*Marcus Wilson, Jan 23*

"Doctor, Doctor, I keep seeing spots before my eyes"

"Have you ever seen an optician?"

"No, just spots".

The concept of seeing an optician floating across my field of view is a scary one indeed. However, the concept of seeing spots doing the same is one I'm coming to terms with.

I had a talk with an ophthalmologist about this last week, as part of an eye check-up. He was very good, I have to say, and we discussed in detail some optical physics, particularly with regard to the astigmatism in my right eye (and why no pair of glasses ever seems quite right). He also reassured me that seeing floaters is nothing, in itself, to be worried about. It's basically a sign of getting old. How nice. He did, though, talk about signs of a detached retina to look out for (pun intended) – and did a more extensive than usual examination.

So what are those floaty things I see? To use a technical biological phrase, they are small lumps of rubbish that are floating around in the vitreous humour of the eye. They are real things – not an illusion – although I don't 'see' them in the conventional manner that I would see other objects.

The eye is there to look at things outside it. Its lens focuses light from objects onto the retina, where light-sensitive cells convert the image to electrical signals that are interpreted by the brain. But given that the floaters are actually between the lens and the retina, how am I seeing them?

There are a couple of phenomena going on. First of all, a floater can cast a small shadow onto the retina. You can see this effect by using a lens to put an image of something (e.g. the scene outside) onto a piece of card, and then putting something between the lens and the image. Some of the light can't get to the card, and so part of the image is shadowed. The appearance of the shadow depends on how close the object is to the card – if it's right by the lens there will be very little effect, but if it's close to the card there'll be a tight, well-defined shadow. My experience is that these spots are definitely most noticeable in bright conditions – presumably because the shadows on the retina then appear in much greater contrast than under dull conditions.

Secondly, however, they can bend the light. Their refractive index will be different from that of the vitreous humour, and therefore when a light ray hits a floater it will bend a little. The consequence is a defocusing of a little bit of the image, which will be visible. If the floater stayed still, it would probably barely be noticeable, but when it moves, the little bit of blurriness moves with it, and the brain picks up the movement rather effectively.

The most interesting thing to me is that it just isn't possible to look at these things. When I try, my eyes move, and consequently these bits of rubbish flit out of view. Rather like quantum phenomena, you can't observe them without changing where they are and where they are moving to.

## Hawking radiation in the lab

*Marcus Wilson, Oct 21*

A highlight of the recent NZ Institute of Physics conference was the Dan Walls medal talk given by Matt Visser. Matt has been working on general relativity. That's not desperately unusual for a physicist, but Matt has been successful in working on some of the crazier aspects of relativity and getting it published – wormholes, dumb holes and the like. He gave an entertaining talk – perfect for closing the conference.

I was particularly taken by the description of the analogies between light and sound. It's unsurprising that there should be analogies between the physics of light and the physics of sound in that both are waves, but the extent to which the analogy can go surprised me. For example, it is possible to get Hawking radiation with sound.

Hawking radiation is predicted to be radiated from black holes. I say 'predicted' because experimental evidence is still scant. It allows black holes to 'evaporate' by emitting radiation from their event horizons (Within the event horizon nothing escapes the black hole – not even light. Once you've passed that boundary, you have a one-way ticket to a singularity). There's an analogy between the event horizon of the black hole and an acoustic shock-front (sonic boom) created by an object moving faster than sound. In the case of the former, once you are past the event horizon you can't get back out, and in the case of the latter, it's not possible for a perturbation that occurs behind the shock front to have an effect in front of it – in order to do so it would need to go faster than sound.

It turns out that many of the equations governing the situations are similar, including those necessary to produce Hawking radiation. The implication is that one should be able to create Hawking radiation from shock fronts created with supersonic fluid flow. And indeed it has been done – what one might consider an effect of general relativity demonstrated in a fairly simple lab experiment. Quite beautiful. Black holes (well, OK, certain aspects of them) on your lab bench.

## Precision Cosmology – Yeah, Right!

*Marcus Wilson, Sep 27*

We've just had our first session at the NZ Institute of Physics Conference. The focus was on astrophysics, and we heard from Richard Easther about 'Precision Cosmology' – measuring things about the universe accurately enough to test theories and models of the universe. We also heard about binary stars and supernovae, and evidence for the existence of dark matter from observing high-energy gamma rays.

Perhaps the most telling insight into cosmology was given in an off-the-cuff comment from one of our speakers, David Wiltshire. It went something like this. “In cosmology, if you have a model that fits all the experimental data then your model will be wrong, because you can guarantee that some of the data will be wrong.”

Testing models against experimental observation is a necessary step in their development. We call it validation. Take known experimental results for a situation and ask the model to reproduce them. If it can't (or can't get close enough) then the model is either wrong or it's missing some important factor(s). Of course, this relies on your experimental observations being correct. And, if they're not, you're going to struggle to develop good models and good understanding of a situation.

The problem with astrophysics and cosmology is that experimental data is usually difficult and expensive to collect. There's not a lot of it – you don't tend to have twenty experiments sitting in orbit all measuring the same thing to offer you cross-checks of results – so if something goes wrong it might not be immediately apparent. And if you can't cross-check, you can't be terribly sure that your results are correct. It's a very standard idea across all of science – don't measure something just once, or just twice (like so many of my students want to do); keep going until you are certain that you have agreement.

Little wonder that people have only very recently taken the words 'precision cosmology' at all seriously.

## Hotspot and Silicone Tape

*Marcus Wilson, Aug 09*


Well, today’s big story is just perfect for PhysicsStop. Cricket meets physics. What more could I ask for.

In case you’ve just arrived from Alpha Centauri, there have been accusations flying that both English and Australian batsmen have been trying to defeat the ‘Hot Spot’ detector by putting silicone tape on their bats. The allegations have been vigorously denied from both sides.

Hot Spot is used as part of a decision review system in professional cricket. The idea is that it will provide evidence as to whether the ball has hit the bat or not when assessing possible dismissals. It uses thermal imaging (infra-red) technology to look for the heat left behind when the ball makes contact with a surface. As the cricket ball just skims the edge of the bat, friction between the two will generate a small amount of heat at the point of contact. The thermal imagers can detect this heat and therefore prove whether the ball hit the bat or not. At least, that is the intention.

So how might silicone tape (a fairly innocuous medical product) give the batsman an advantage? The allegation being made is that a batsman would put tape on the outside edge of the bat, which reduces or eliminates the ‘hot spot’ left by a ball grazing the edge. Presumably they’d leave off the tape from the inside edge, so as to make sure that a fine edge on to their pads gets detected to counter any appeal for leg-before-wicket. (I admit that anyone who doesn’t know cricket will not have a clue what I’m talking about at this point, but hopefully you can still follow the physics part.)

Presumably the thinking is that silicone tape reduces the frictional forces between bat and ball, and therefore reduces the heat generated during a collision between the two. Would it work? One would need to try it out to be sure. But a quick glance at some values for coefficients of friction (e.g. here) will show that there is a vast range of values depending on the two materials. Some combinations of surfaces have much more potential for friction (and therefore heating) than others. So it’s plausible that a low-friction tape might have the effect. (Though one would think there might be more effective methods – e.g. spraying the edge of the bat with a lubricant spray. The thinking might be that applying tape to a bat is, bizarre as it might sound, actually legal in cricket.)

There’s been some discussion on the blogs that it has to do with thermal conductivity, though I’m not convinced by this argument. To defeat Hot Spot in this manner, one would need a material that gets rid of the heat very quickly by spreading it to other areas, so that a noticeable hot spot doesn’t persist. The problem is that the thermal diffusivities of everyday materials are too low for this to happen. Thermal diffusivity controls how quickly heat spreads out by conduction. Even very highly diffusive materials, with thermal diffusivities of around 100 mm²/s or so, would have a spot of heat spread out by only 10 mm in a second (the square root of the product of thermal diffusivity and time tells you roughly how far heat will spread in that time). The Hot Spot frame interval is much shorter than a second, so there’s no time for the heat to diffuse away.
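That back-of-envelope estimate is easy to reproduce. (The diffusivity of 100 mm²/s is the post's deliberately high-end figure; the few-millisecond frame interval is an illustrative assumption, not Hot Spot's actual specification.)

```python
import math

# Heat spreads roughly sqrt(diffusivity * time) by conduction.
D = 100.0   # thermal diffusivity in mm^2/s (deliberately high-end value)

for t in (1.0, 0.004):   # one full second vs an assumed ~4 ms frame interval
    spread_mm = math.sqrt(D * t)
    print(f"t = {t} s -> spread ~ {spread_mm:.2f} mm")  # 10.00 mm, then 0.63 mm
```

Even with a generous diffusivity, the heat moves well under a millimetre between frames – far too little to smear the hot spot away.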

But I can think of another mechanism by which the tape might fool Hot Spot. The amount of infra-red light emitted by a surface doesn’t just depend on its temperature. Some surfaces are better emitters than others. A perfect emitter is called a ‘black-body’ in physics. However, be warned – an object that emits infra-red really well doesn’t necessarily look black to the eye – and conversely don’t think that because something is white that it doesn’t emit infra-red well. Some materials have properties that are very dependent on wavelength. It is possible (I don’t know) that silicone tape has a lower emissivity than wood, and therefore the effect, as viewed by an infra-red camera, would be reduced. Possibly it’s a combination of reduced friction and reduced emissivity.

Then again, possibly this is just a media propaganda stunt to try to get some interest back into the last two Ashes tests. (Again, non-cricketers won’t have a clue about that sentence).

All this would make a great student project. I’m sure there’d be physics graduates queuing up to do a PhD in defeating cricket technology.

## What’s in a colour?

*Marcus Wilson, Jul 23*

When I was young (about six-ish)  I had a variety of ambitions. Some of them I shared with a lot of other boys of my age, such as being a train driver and playing cricket for England. Some were more particular to me, such as becoming a biologist and discovering a new colour.

Needless to say, I failed on all counts. One I got close to – being a physicist is not so far away from being a biologist. I’ve at least watched England play cricket (including an England v India match at Lord’s – in the members’ guests area – that was rather neat) and stood on the footplate of a steam engine. Discovering a new colour, however, is something I was never likely to achieve from the outset.

I had a vague idea that if I mixed enough paints together I’d hit on a combination that no-one had tried before (maybe purple and green with just a hint of orange) and, hey-presto, they’d mix together into some entirely new colour previously unknown to science. The colour would naturally be named after me, and become an instant hit with home decorators. Out would go ‘Magnolia’, in would come ‘Wilurple’.

I gave up on the ambition long before I found out why it was unlikely to work. The CIE colour chart encapsulates the situation neatly. There are only three different colour receptors (‘cones’) in the human eye. By having the ‘red’, ‘green’ and ‘blue’ cones stimulated differently, one sees different colours. The CIE chart puts all possible colours onto a 2D grid. One defines the variable ‘x’ as the fraction of the total stimulation that is accounted for by the red cones, and the variable ‘y’ as the fraction of the total accounted for by the green cones. (One could define ‘z’ in a similar way for the blue cones, but it is redundant, since x plus y plus z must equal 1.) Then ‘x’ and ‘y’ define a colour. The chart shows it.
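The x, y definition above can be written down directly. (This follows the post's simplified description, treating the three responses as raw cone stimulations; strictly, the CIE tristimulus values X, Y, Z are defined via standard-observer functions rather than cone responses.)

```python
# Chromaticity coordinates: each channel's fraction of the total response.
# z = Z/(X+Y+Z) is redundant because x + y + z = 1 by construction.
def chromaticity(X, Y, Z):
    total = X + Y + Z
    return X / total, Y / total

# Equal stimulation of all three channels lands at the white point:
x, y = chromaticity(1.0, 1.0, 1.0)
print(x, y)   # both 1/3
```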

All possible colours are shown on this chart. The outside of the curved space shows the colours of the spectrum – those stimulated by a pure wavelength of light. The others are due to combinations of wavelengths. At x=1/3, y=1/3 (and so z=1/3) there is white. It isn’t possible to go outside this chart, and therefore it contains all possible colours. D’oh.

But, there is hope. The response of the green cones of the eye is entirely overlapped by those of the red and the blue. This means it isn’t possible to find a wavelength of light that stimulates JUST the green cones. If, somehow, one could stimulate cells artificially, one might be able to trigger green cones to fire without any response from red and blue. And then the person would be seeing a colour they’ve never experienced before.
