SciBlogs

Posts Tagged light

A blatant plug for the NZIP2015 conference Marcus Wilson May 06

2 Comments

There's no hiding my conflicts of interest here. I'm on the New Zealand Institute of Physics 2015 conference organizing committee. I'm also the NZIP treasurer. And I'm a staff member at the host organization. So, to contribute to the New Zealand physics community's biennial event in Hamilton on 6–8 July, click on this link.

But why? Pick from the following:

a. Because you get to meet colleagues and actually talk with them. 

b. Because you get to hear about and discuss first hand some of the exciting physics work that goes on in New Zealand.

c. Because you get to meet, talk to, and learn from Eugenia Etkina, one of the most honoured and respected physics educators in the US. Her research has focused in particular on student learning through practical experiments, and how to maximize it, but she has also looked at the modern physics curriculum more generally. And she'll be here with us to share it all.

d. Because you get to celebrate the International Year of Light (which, by the way, was designated by UNESCO following lobbying from a handful of countries, including New Zealand).

e. Because you get to experience practical examples of Bessel Functions.  (You may need to click here for an explanation). 

So, no excuses. See you in The Tron in July. 

 

The difference between a theoretical physicist and a mathematician is… Marcus Wilson Mar 13

5 Comments

A mathematician can say what he likes… A physicist has to be at least partly sane

J. Willard Gibbs 

What is it that makes a physicist sane (if only in part)? Everything has to be related back to the 'real world', or the 'real universe'. That is, a physicist has to talk about how things work in the world or universe in which we live, not some hypothetical universe. That's how I think of it, and I know, having done a bit of research with some of my students, a lot of them think the same way. That's not to say mathematicians don't have a lot to say about this universe too. It's just that the constraints on them are somewhat less. 

Another way of looking at it is that physicists work with dimensioned quantities. Most things of physical relevance have dimensions. For example, a book has a length, width and thickness. All of these are distances, and can be measured. The unit doesn't matter; we could use centimetres, inches or light-years – but the physical size of the object is determined by lengths. Also, the book has a mass (one could measure it in kilograms). It might find its way onto my desk at a particular time (measured, for example, in hours, minutes, seconds, millennia or whatever). Perhaps it is falling at a particular velocity – which describes what distance it travels in a particular time. All of these things are physical quantities, and they carry dimensions.

One of my pet hates as a physicist is reading physics material in which the dimensions have been removed. You can do this by writing lengths in terms of a 'standard' length, and then quoting only how many standard lengths something is. So we might talk about lengths in terms of the length of a piece of A4 paper (which happens to be 297 mm); a piece of A2 paper has a length of 2 standard-lengths, and an area of 4 standard-areas. The problem really comes when the discussion drops the 'standard-length' or 'standard-area' bit and we are left with statements such as: a piece of A2 paper has a length of 2 and an area of 4. It is left to the reader to work out what this actually means in practice. A mathematician can get away with it – she can say what she likes – but not so the physicist.

Here's a question which illustrates the point: what is the length of a side of a cube whose volume is equal to its surface area? The over-zealous mathematics student blunders straight in there: let the length be x. Then the volume is x^3, and the surface area is 6x^2 (the area of a face is x^2, and there are six faces on a cube). So x^3 = 6x^2; cancelling x^2 from both sides, we have x = 6. Six what? Centimetres, inches, furlongs, parsecs? The point is that the volume of a cube can never be equal to its surface area. Volume and area are fundamentally different things.

The Wikipedia page on 'fundamental units', along with many textbooks, blunders in this way too. The authors should really know better. (Yes, I should fix it, I know…) For example:

A widely used choice is the so-called Planck units, which are defined by setting ħ = c = G = 1

No, NO, NO!  What is wrong with this? How can the speed of light 'c' be EQUAL to Newton's constant of gravitation 'G'? They are fundamentally different things. The speed of light is a speed (distance per unit time); Newton's constant of gravitation is… well… it's a length-cubed per mass per time-squared. It's certainly not a speed, so it can't possibly be equal to the speed of light. And neither can be equal to 1, which is a dimensionless number. What the statement should say is that c = 1 length-unit per time-unit, and G = 1 length-unit-cubed per mass-unit per time-unit-squared.

However, doing physics can be more complicated than this. A lot of physics is now done by computer, and in writing a computer programme to do a physics calculation we almost always have no explicit record of the units or dimensions in our calculations. Our variables are just numbers, and it's left to us to keep track of what units each of these numbers is in. Strictly speaking, I'd say it's rather slack. It would be nice to have a physics-programming language that actually keeps track of the units as well. However, I'm not aware of one. (If someone could enlighten me otherwise, that would be fascinating…) Otherwise, I'll have to have a go at constructing one.
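As a hint of what such bookkeeping might look like, here is a minimal Python sketch that carries dimensions alongside values. It's a toy illustration only – mature packages such as Pint or astropy.units do this properly – and the exponent convention (length, mass, time) is just a choice for the example.

```python
# A toy sketch of dimension-aware arithmetic. Each Quantity carries
# exponents of (length, mass, time) alongside its numerical value.

class Quantity:
    def __init__(self, value, dims):
        self.value = value       # numerical value in base units
        self.dims = tuple(dims)  # exponents of (length, mass, time)

    def __mul__(self, other):
        # Multiplication adds the dimension exponents.
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):
        # Adding quantities of unlike dimensions is exactly the error
        # that plain-number programmes let through silently.
        if self.dims != other.dims:
            raise TypeError(f"dimension mismatch: {self.dims} vs {other.dims}")
        return Quantity(self.value + other.value, self.dims)

c = Quantity(3.00e8, (1, 0, -1))     # a speed: length / time
G = Quantity(6.67e-11, (3, -1, -2))  # length^3 / (mass * time^2)

print((c * c).dims)   # (2, 0, -2): a speed squared, as expected

try:
    c + G             # dimensionally meaningless, and caught as such
except TypeError as err:
    print(err)
```

With the dimensions travelling along with the numbers, a statement like c = G simply cannot be evaluated without the program complaining.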

What's prompted this little piece is that I've been reviewing a paper that has been submitted to a physics journal. The authors have standardized the dimensions out of existence, which makes it awfully hard for me to work out what things mean physically. Just how fast is a speed of 1.5? How many centimetres per second is it? While that might be the answer their computer programme spits out, the authors really should have made the effort to turn it back into something that relates to the real world. In a mathematics journal, they might get away with it. But not in a physics journal. At least, not if I'm a reviewer…


Seeing spots before my eyes Marcus Wilson Jan 23

No Comments

"Doctor, Doctor, I keep seeing spots before my eyes"

"Have you ever seen an optician?"

"No, just spots".

The concept of seeing an optician floating across my field of view is a scary one indeed. However, the concept of seeing spots doing the same is one I'm coming to terms with. 

I had a talk with an ophthalmologist about this last week, as part of an eye check-up. He was very good, I have to say, and we discussed in detail some optical physics, particularly with regard to the astigmatism in my right eye (and why no pair of glasses ever seems quite right). He also reassured me that seeing floaters is nothing, in itself, to be worried about. It's basically a sign of getting old. How nice. He did, though, talk about signs of a detached retina to look out for (pun intended) – and carried out a more extensive examination than usual.

So what are those floaty things I see? To use a technical biological phrase, they are small lumps of rubbish that are floating around in the vitreous humour of the eye. They are real things – not an illusion – although I don't 'see' them in the conventional manner that I would see other objects. 

The eye is there to look at things outside it. Its lens focuses light from objects onto the retina, where light-sensitive cells convert the image to electrical signals that are interpreted by the brain. But given that the floaters are actually between the lens and the retina, how am I seeing them?

There are a couple of phenomena going on. First of all, a floater can cast a small shadow onto the retina. You can see this effect by using a lens to put an image of something (e.g. the scene outside) onto a piece of card, and then putting something between the lens and the image. Some of the light can't get to the card, and so part of the image is shadowed. The appearance of the shadow depends on how close the object is to the card – if it's right by the lens there will be very little effect – but if it's close to the card there'll be a tight, well-defined shadow. My experience is that these spots are definitely most noticeable in bright conditions – presumably because the shadows on the retina then appear in much greater contrast than under dull conditions.

Secondly, however, they can bend the light. Their refractive index will be different from that of the vitreous humour, and therefore when a light ray hits a floater it will bend, a little. The consequence is a defocusing of a little bit of the image, which will be visible. If the floater stayed still, it would probably barely be noticeable, but when it moves, the little bit of blurriness moves with it, and the brain picks up the movement rather effectively.

The most interesting thing to me is that it just isn't possible to look at these things. When I try, my eyes move, and consequently these bits of rubbish flit out of view. Rather like quantum phenomena, you can't observe them without changing where they are and where they are moving to.  


Hawking radiation in the lab Marcus Wilson Oct 21

2 Comments

A highlight of the recent NZ Institute of Physics conference was the Dan Walls medal talk given by Matt Visser. Matt has been working on general relativity. That's not desperately unusual for a physicist, but Matt has been successful in working on some of the crazier aspects of relativity and getting it published – wormholes, dumb holes and the like. He gave an entertaining talk – perfect for closing the conference.

I was particularly taken by the description of the analogies between light and sound. It's unsurprising that there should be analogies between the physics of light and the physics of sound in that both are waves, but the extent to which the analogy can go surprised me. For example, it is possible to get Hawking radiation with sound. 

Hawking radiation is predicted to be radiated from black holes. I say 'predicted' because experimental evidence is still scant. It allows black holes to 'evaporate' by emitting radiation from their event horizons (Within the event horizon nothing escapes the black hole – not even light. Once you've passed that boundary, you have a one-way ticket to a singularity). There's an analogy between the event horizon of the black hole and an acoustic shock-front (sonic boom) created by an object moving faster than sound. In the case of the former, once you are past the event horizon you can't get back out, and in the case of the latter, it's not possible for a perturbation that occurs behind the shock front to have an effect in front of it – in order to do so it would need to go faster than sound. 

It turns out that many of the equations governing the situations are similar, including those necessary to produce Hawking radiation. The implication is that one should be able to create Hawking radiation from shock fronts created with supersonic fluid flow. And indeed it has been done – what one might consider an effect of general relativity demonstrated in a fairly simple lab experiment. Quite beautiful. Black holes (well, OK, certain aspects of them) on your lab bench.

 

Precision Cosmology – Yeah, Right! Marcus Wilson Sep 27

No Comments

We've just had our first session at the NZ Institute of Physics Conference. The focus was on astrophysics, and we heard from Richard Easther about 'Precision Cosmology' – measuring things about the universe accurately enough to test theories and models of the universe. We also heard about binary stars and supernovae, and evidence for the existence of dark matter from observing high-energy gamma rays.

Perhaps the most telling insight into cosmology was given in an off-the-cuff comment from one of our speakers, David Wiltshire. It went something like this. “In cosmology, if you have a model that fits all the experimental data then your model will be wrong, because you can guarantee that some of the data will be wrong.”

Testing models against experimental observation is a necessary step in their development. We call it validation. Take known experimental results for a situation and ask the model to reproduce them. If it can't (or can't get close enough) then the model is either wrong or it's missing some important factor(s). Of course, this relies on your experimental observations being correct. And, if they're not, you're going to struggle to develop good models and good understanding of a situation.

The problem with astrophysics and cosmology is that experimental data is usually difficult and expensive to collect. There's not a lot of it – you don't tend to have twenty experiments sitting in orbit all measuring the same thing to offer you cross-checks of results – so if something goes wrong it might not be immediately apparent. And if you can't cross-check, you can't be terribly sure that your results are correct. It's a very standard idea across all of science – don't measure something just once or twice (as so many of my students want to do); keep going until you are certain that you have agreement.

Little wonder that people have only very recently taken the words 'precision cosmology' at all seriously.

Hotspot and Silicone Tape Marcus Wilson Aug 09

1 Comment

Well, today’s big story is just perfect for PhysicsStop. Cricket meets physics. What more could I ask for?

In case you’ve just arrived from Alpha Centauri, there have been accusations flying that both English and Australian batsmen have been trying to defeat the ‘Hot Spot’ detector by putting silicone tape on their bats. The allegations have been vigorously denied by both sides.

Hot Spot is used as part of a decision review system in professional cricket. The idea is that it will provide evidence as to whether the ball has hit the bat or not when assessing possible dismissals. It uses thermal imaging (infra-red) technology to look for the heat left behind when the ball makes contact with a surface. As the cricket ball just skims the edge of the bat, friction between the two will generate a small amount of heat at the point of contact. The thermal imagers can detect this heat and therefore prove whether the ball hit the bat or not. At least, that is the intention.

So how might silicone tape (a fairly innocuous medical product) give the batsman an advantage? The allegation being made is that a batsman would put tape on the outside edge of the bat, which reduces or eliminates the ‘hot spot’ left by a ball grazing the edge. Presumably they’d leave off the tape from the inside edge, so as to make sure that a fine edge on to their pads gets detected to counter any appeal for leg-before-wicket. (I admit that anyone who doesn’t know cricket will not have a clue what I’m talking about at this point, but hopefully you can still follow the physics part.)

Presumably the thinking is that silicone tape reduces the frictional forces between bat and ball, and therefore reduces the heat generated during a collision between the two. Would it work? One would need to try it out to be sure. But a quick glance at some values for coefficients of friction (e.g. here) will show that there is a vast range of values depending on the two materials. Some combinations of surfaces have much more potential for friction (and therefore heating) than others. So it’s plausible that a low-friction tape might have the effect. (Though one would think there might be more effective methods – e.g. spraying the edge of the bat with a lubricant spray. The thinking might be that applying tape to a bat is, bizarre as it might sound, actually legal in cricket.)

There’s been some discussion on the blogs that it has to do with thermal conductivity, though I’m not convinced by this argument. To defeat Hot Spot in this manner, one would need a material that gets rid of the heat very quickly by spreading it to other areas, so that a noticeable hot spot doesn’t persist. The problem is that the thermal diffusivities of everyday materials are too low for this to happen. Thermal diffusivity controls how quickly heat spreads out by conduction. Even very highly diffusive materials, with thermal diffusivities of around 100 mm2/s or so, would have a spot of heat spread out by only 10 mm in a second (the square root of the product of thermal diffusivity and time tells you roughly how far heat will spread in that time). The interval between Hot Spot frames is much shorter than a second, so there’s no time for the heat to diffuse away.
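That back-of-envelope estimate is easy to check numerically. The sketch below just evaluates √(αt) for a few time intervals; the 0.04 s value is an assumed, typical video frame interval, not a figure from the Hot Spot system itself.

```python
import math

# Rough diffusion length: heat spreads about sqrt(diffusivity * time).
alpha = 100.0  # thermal diffusivity in mm^2/s (already a very diffusive material)

for t in (1.0, 0.04, 0.001):  # seconds; 0.04 s is roughly one video frame
    d = math.sqrt(alpha * t)
    print(f"after {t:g} s, heat has spread roughly {d:.1f} mm")
```

Even for such a diffusive material, the spread over one frame interval is a couple of millimetres at most – far too little to smear a hot spot away.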

But I can think of another mechanism by which the tape might fool Hot Spot. The amount of infra-red light emitted by a surface doesn’t just depend on its temperature. Some surfaces are better emitters than others. A perfect emitter is called a ‘black-body’ in physics. However, be warned – an object that emits infra-red really well doesn’t necessarily look black to the eye – and conversely don’t think that because something is white that it doesn’t emit infra-red well. Some materials have properties that are very dependent on wavelength. It is possible (I don’t know) that silicone tape has a lower emissivity than wood, and therefore the effect, as viewed by an infra-red camera, would be reduced. Possibly it’s a combination of reduced friction and reduced emissivity.

Then again, possibly this is just a media propaganda stunt to try to get some interest back into the last two Ashes tests. (Again, non-cricketers won’t have a clue about that sentence).

All this would make a great student project. I’m sure there’d be physics graduates queuing up to do a PhD in defeating cricket technology. 


What’s in a colour? Marcus Wilson Jul 23

4 Comments

When I was young (about six-ish)  I had a variety of ambitions. Some of them I shared with a lot of other boys of my age, such as being a train driver and playing cricket for England. Some were more particular to me, such as becoming a biologist and discovering a new colour. 

Needless to say I failed on all counts. One I got close to – being a physicist is not so far away from being a biologist. I’ve at least watched England play cricket (including an England v India match at Lord’s – in the members’ guests area – that was rather neat) and stood on the footplate of a steam engine. Discovering a new colour, however, is something I was not likely to achieve from the outset.

I had a vague idea that if I mixed enough paints together I’d hit on a combination that no-one had tried before (maybe purple and green with just a hint of orange) and, hey-presto, they’d mix together into some entirely new colour previously unknown to science. The colour would naturally be named after me, and become an instant hit with home decorators. Out would go ‘Magnolia’, in would come ‘Wilurple’.

I gave up on the ambition long before I found out why it was unlikely to work. The CIE colour chart encapsulates the situation neatly. There are only three different colour receptors (‘cones’)  in the human eye. By having the ‘red’, ‘green’ and ‘blue’ cones stimulated differently, one sees different colours. The CIE chart puts all possible colours onto a 2d grid. One defines the variable ‘x’ as being the fraction of the total stimulation that is accounted for by the red cones; the variable ‘y’ as the fraction of the total that is accounted for by the green cones. (One could define ‘z’ in a similar way for the blue cones, but it is redundant since x plus y plus z must equal 1.) Then ‘x’ and ‘y’ defines a colour. The chart shows it. 

All possible colours are shown on this chart. The outside of the curved space shows the colours of the spectrum – those stimulated by a pure wavelength of light. The others are due to combinations of wavelengths. At x=1/3, y=1/3 (and so z=1/3) there is white. It isn’t possible to go outside this chart, and therefore it contains all possible colours. D’oh.
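The bookkeeping the chart is based on can be sketched in a few lines of Python. This is a toy illustration only: the real CIE 1931 x and y are computed from the X, Y, Z tristimulus values via standard colour-matching functions, not from raw cone responses as I've simplified here.

```python
# Toy chromaticity calculation in the spirit of the CIE chart:
# x and y are the 'red' and 'green' fractions of the total stimulation.

def chromaticity(r, g, b):
    total = r + g + b
    x = r / total
    y = g / total
    z = 1.0 - x - y   # redundant: the 'blue' fraction, since x + y + z = 1
    return x, y, z

# Equal stimulation of all three cone types sits at the white point.
print(chromaticity(1.0, 1.0, 1.0))   # close to (1/3, 1/3, 1/3)
```

Because z is fixed once x and y are known, two numbers really do pin down a colour – which is why the chart is flat, and why there is no room on it for 'Wilurple'.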

But, there is hope. The response of the green cones of the eye is entirely overlapped by those of the red and the blue. This means it isn’t possible to find a wavelength of light that stimulates JUST the green cones. If, somehow, one could stimulate cells artificially, one might be able to trigger green cones to fire without any response from red and blue. And then the person would be seeing a colour they’ve never experienced before. 

 

Seeing circular polarization Marcus Wilson Nov 22

No Comments

Physics World magazine is doing a ‘special feature’ this month on animal superheroes – those with rather unusual physical abilities.

The best of the lot (in my subjective opinion) is the featured-on-the-cover mantis shrimp. Not because of its ‘dactyl clubs’ that can produce a force of 700 N, but because of its eyesight.

The mantis shrimp can see circularly polarized light – something that no other animal is known to do. Polarization describes how the electric and magnetic fields in the light wave are oriented. For example, a horizontally-travelling light wave (say in the x-direction) might have its electric field pointing in the z-direction (vertically) and the magnetic field in the negative y-direction. In an electromagnetic wave, the electric field, magnetic field and direction of travel are all mutually perpendicular. We could call that a vertical, plane polarization.

In circular polarization, the electric field moves in a corkscrew-like shape as the wave travels. The corkscrew can spiral one of two ways – hence there are two distinct polarizations which we call left-handed and right-handed. The mantis shrimp can distinguish between the two. It does this by using its own version of a quarter-wave plate – made of a birefringent material – one that has a different refractive index in different directions. That converts a circular polarization to a linear polarization, which it detects via more conventional methods. (There are several animals that can ‘see’ linear polarization – bees are a famous example. There are plenty that don’t distinguish one  polarization from another at all, such as humans.)
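The quarter-wave-plate trick is neatly captured by Jones calculus. Here is a short numerical sketch – an illustration using conventional textbook sign choices (which vary between books), not a model of the shrimp's actual optics.

```python
import numpy as np

# Left- and right-circular polarization as Jones vectors:
left  = np.array([1,  1j]) / np.sqrt(2)
right = np.array([1, -1j]) / np.sqrt(2)

# Quarter-wave plate with its fast axis along x: it retards the
# y-component of the field by 90 degrees (a global phase is omitted).
qwp = np.array([[1, 0],
                [0, 1j]])

for name, state in (("left", left), ("right", right)):
    out = qwp @ state
    print(name, "->", out)
# The two outputs are linear polarizations at -45 and +45 degrees:
# states that ordinary polarization-sensitive receptors can tell apart.
```

So the circular-handedness question is converted into a which-diagonal question – exactly the kind the more conventional detection machinery can answer.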

The mysterious question is why? Bees use linear polarization to assist navigation (light from the sky is linearly polarized), but what use is distinguishing left-handed and right-handed circular polarizations to a shrimp? There’s a cool research question for someone’s PhD thesis.

 

Pinhole cameras and eclipses Marcus Wilson Nov 15

No Comments

Well, the eclipse yesterday was fun. There were enough patches of sky between the clouds to get some good views. I was pleased that the pinhole cameras I made out of miscellaneous cardboard tubes, tins, paper and tinfoil worked really well. Also, the trees around the front of the sciences building gave some nice natural pinholes as the sunlight worked its way through the gaps between the foliage – we could see lots of crescents projected onto the wall of the building. Not something you see every day.

The trick with the pinhole camera is to get the right combination of pinhole-to-screen distance and pinhole size (basically, the f-number in photography-speak). A long length means a larger image – but also a fainter one. To increase the brightness, we need to let more light through (a bigger pinhole), but the drawback of this is that it blurs the image. It takes a bit of experimenting – best done well before the eclipse that you want to see.
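To put rough numbers on that trade-off: the Sun subtends about 0.0093 radians (about half a degree), so the image size follows directly from the tube length. The tube lengths below are just example values.

```python
# Back-of-envelope pinhole-camera numbers for the Sun.
SUN_ANGLE = 0.0093  # radians, approximate angular diameter of the Sun

for length_m in (0.3, 1.0, 2.0):  # pinhole-to-screen distance, metres
    image_mm = SUN_ANGLE * length_m * 1000.0
    print(f"{length_m:.1f} m tube -> solar image about {image_mm:.1f} mm across")

# Brightness scales as (pinhole area) / (image area), so doubling the
# tube length quadruples the image area and makes the image four times
# dimmer - unless you enlarge the pinhole, which blurs the image.
```

Hence a metre-long tube gives an image around a centimetre across – big enough to see the crescent clearly, but already needing a reasonably generous pinhole to stay visible.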

On the subject of which…if you live in New Zealand…you don’t have a lot of opportunity for a while. We northerners get an iddy-biddy eclipse next May (10th) – sorry Mainlanders – you miss out – and then it’s nothing for ages before we get a few more feeble partials in the 2020s. BUT, as I said earlier, it’s then non-stop eclipse mayhem from 2028, with THREE total and THREE annular eclipses before 2045, for those of us who are still alive to see them. Details are all here courtesy of RASNZ.

There are a few videos up already from the Cairns region – here’s one. However, video does not do an eclipse justice, partly because of the difficulty of capturing on video parts of the corona at different luminances simultaneously. If you want to see the fainter, wispy stuff at the far edge of the corona, you end up well overexposing the brighter area nearer the moon. The naked eye does a far better job of capturing the totality phase than a camera.

http://www.youtube.com/watch?v=CTbIufApsSk

I note a fair amount of pink on the video – this is the chromosphere – a thin, cooler area of the sun, between the photosphere (the bright yellow bit that we normally see) and the corona.


Pepper’s Ghost Marcus Wilson Nov 01

No Comments

 Have a good look at the photo. The pretty rhododendron to the left of the chair looks a bit odd. That’s because it’s a ghost shrub. No, our garden isn’t haunted, and neither have I doctored the photo; it’s an example of Pepper’s Ghost – an illusion caused by reflections. The bush in question is off to the right, out of frame, and the camera is seeing its reflection in the window. Because the bush is well lit, but the background isn’t, it appears to be ‘real’. The effect looked even more stunning with polarizing sunglasses on.

