I wish you all a very Happy Christmas – PhysicsStop will be back in the New Year.
Apology (Dec 20)
With regard to the last post, I’d like to clarify why I used the word ‘girl’ and not ‘woman’ in the title. Many of you have pointed this out to me.
My thought when I began to write this short post was that I was addressing it to girls at school, who were thinking about whether to carry on with science subjects (and physics in particular). Hence the word ’girl’ rather than ‘woman’. However, on reading the article again, I can see that this intention isn’t clear.
I apologize if it has caused any offence.
[20 December 2010 - Please read the comment about the title here]
I’ll just throw out the following factoids from the Australian Institute of Physics congress. Maybe together they mean something. Let me know…
1. Not many women do physics at university. (That wasn’t from the congress - every physicist knows it)
2. Female students do (slightly) worse at some well-used standard multiple-choice tests such as the ‘Force Concept Inventory’.
3. If you ask the question ‘Are you Male or Female?’ at the beginning of a physics test, female students will do worse in the test. Male students are unaffected. (The hypothesis is that women suddenly think – I’m female – I’m not supposed to do well – then the subconscious takes over to ensure they don’t.)
4. A major reason why year 10 students (male or female) choose not to continue in science is that they cannot picture themselves being a scientist. (This has implications for doing outreach programmes aimed at this age group).
5. Eighty-three percent of women holding academic positions in physics (maybe in the US – but I’m not sure) have a husband / partner who is a scientist. (Schiebinger et al. (2008), Institute for Gender Research, Stanford University)
Consider the following perfectly reasonable sentences:
"It’s hot outside"
"The oven is heating up"
"Insulation helps keep a house warm"
Here we have physics words and concepts being used in everyday English in ways that are rather loose from a physics point of view. Does the conventional English use of words such as ‘heat’, ‘temperature’, ‘insulate’, etc. confuse students when they come to learn thermodynamics? For example, even a physicist would say "it’s hot today", when what he actually means is "the temperature is high today". In thermodynamics, heat and temperature are very precise concepts, and are not interchangeable as they often are in everyday English.
Anyway, a study of confusion amongst students caused by conventional English usage of thermodynamics words was the subject of Helen Georgiou’s short talk last week at the Australian Institute of Physics congress. Brief conclusion: Yes, there is confusion, and often students aren’t aware of where it’s coming from.
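Since the distinction trips students up, here is a minimal sketch (my own illustration, not taken from Georgiou’s study) of why heat and temperature can’t be interchanged: the heat Q needed for a temperature change ΔT depends on what you are heating, via Q = mcΔT.

```python
# Heat and temperature are distinct quantities: heat is energy
# transferred (in joules), temperature is a state variable (in kelvin).
# For simple heating, Q = m * c * dT relates the two.

def heat_required(mass_kg, specific_heat, delta_T):
    """Energy (J) needed to raise mass_kg of material by delta_T kelvin."""
    return mass_kg * specific_heat * delta_T

# Raising 1 kg of water (c = 4186 J/(kg K)) by 10 K takes ~42 kJ;
# raising 1 kg of copper (c = 385 J/(kg K)) by the same 10 K takes
# only ~3.9 kJ: the same temperature change, very different heat.
q_water = heat_required(1.0, 4186.0, 10.0)
q_copper = heat_required(1.0, 385.0, 10.0)
print(q_water, q_copper)
```

Same temperature rise, an order of magnitude difference in heat – exactly the distinction that everyday English blurs.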
No, the Large Hadron Collider hasn’t vanished. It might not be as prominent in the news as it was two years ago, but it is quietly colliding protons and generating lots of useful data for analysis.
Here are a couple of bits I gleaned in Melbourne.
1. What lies inside a quark (if anything)? We physicists are happy with the notion that at the centre of an atom lies a nucleus, consisting of protons and neutrons, and each proton and neutron contains three quarks. (For the proton, it’s two ‘up’ quarks and one ‘down’ quark; for the neutron, it’s two ‘down’ quarks and one ‘up’. Protons and neutrons are actually very, very similar things.) But is the quark made of anything? How could we tell? Basically, the way you do this is to collide protons together (i.e. 3 quarks on 3 quarks) and carefully analyze the statistics of the scattering. At what angles are the protons scattered? Is there fine structure in the scattering pattern? This is exactly what Rutherford did with Geiger and Marsden’s results on scattering alpha-particles from gold foil, to determine that an atom must have a nucleus. In the case of the gold foil, the structure in the pattern is pretty obvious. In the proton-proton case, it’s not. In fact, results from the ATLAS experiment at the LHC fail to indicate any structure at current energies (3.4 TeV). (In particle physics, higher energies equate to probing smaller distances.) So we can conclude that IF there is structure (and it’s a big if), it must appear at energy scales larger than 3.4 TeV. So far, the quark remains ‘fundamental’.
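To illustrate the sort of pattern analysis involved, here is a toy sketch (my own, not from the ATLAS analysis) of the Rutherford scattering formula for a point-like target: the rate falls smoothly as 1/sin⁴(θ/2), and any sub-structure in the target would show up as deviations from this curve.

```python
import math

def rutherford_pattern(theta_deg):
    """Relative scattering rate for a point-like (structureless)
    target: proportional to 1/sin^4(theta/2)."""
    theta = math.radians(theta_deg)
    return 1.0 / math.sin(theta / 2.0) ** 4

# A structureless target gives this smooth, steeply falling pattern;
# internal structure would show up as deviations (bumps, flattening)
# at large angles, i.e. at high momentum transfer.
for angle_deg in (10, 30, 60, 120):
    print(angle_deg, rutherford_pattern(angle_deg))
```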
2. Can we find dark matter? Dark matter is what is thought to make up 23% of the mass/energy of the universe. It has the annoying property that you can’t see it – in fact it doesn’t interact electromagnetically at all. So why do we think it’s there? If you study the way galaxies are moving, knowing what we know about gravity, you come to the conclusion that there simply isn’t enough visible mass in a galaxy to account for its movement. Galaxies seem to be more massive than we can account for by ‘counting’ the stars in them. This missing mass is called ‘dark matter’. (N.B. There’s also dark energy, which makes up about 73% of the universe; that’s another thing entirely, and I won’t go there today.) So, what is it? We don’t know, but there are theories. Moreover, these theories are testable – in that you can use them to make predictions about what might be observed at the LHC. So people are busy analysing the results of collisions to see whether there are features that can only be explained by the theories of dark matter. If there are, that’s strong evidence for the ‘discovery’ of dark matter. I have to say that, listening to a couple of talks, I was impressed at the size of the research effort on theories of dark matter – given that this stuff hasn’t actually been observed yet. It must take a bit of faith to spend your PhD studying something that might not even exist.
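The ‘missing mass’ argument fits in one line of algebra: for a star in a circular orbit, gravity supplies the centripetal force, so GM(r)/r² = v²/r, giving M(r) = v²r/G for the mass enclosed within radius r. A rough sketch with illustrative numbers (mine, not from the talks):

```python
# For a circular orbit, gravity supplies the centripetal force:
# G * M(r) * m / r^2 = m * v^2 / r, so M(r) = v^2 * r / G.
# A flat rotation curve (v roughly constant as r grows) therefore
# implies M(r) growing linearly with r: far more mass than the
# visible stars can account for.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def enclosed_mass(v, r):
    """Mass (kg) enclosed within radius r (m) for circular speed v (m/s)."""
    return v ** 2 * r / G

# Illustrative numbers: v ~ 220 km/s at r ~ 8 kpc (roughly the Sun's
# orbit in our galaxy) gives around 10^11 solar masses.
kpc = 3.086e19      # metres per kiloparsec
m_solar = 1.989e30  # kg
m = enclosed_mass(220e3, 8 * kpc)
print(f"{m:.2e} kg = {m / m_solar:.1e} solar masses")
```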
Experimenting (Dec 13)
[14 December 2010 - this post initially appeared truncated - sorry if it confused you. The full thing is now below]
One of my talks last week concerned a piece of work I’d done with my second year experimental physics class this year. Before going to Melbourne, I gave the talk a trial run at the University of Waikato’s ‘celebrating teaching’ day. It provoked a few comments then, and a few more in Melbourne, so I thought I’d give a summary of it here.
I’ve been teaching experimental physics for more or less the whole time I’ve been at the university (my divine punishment for navigating my own undergraduate studies by finding the path with the least practical work in it). I’ve noticed that few students do any planning before the lab. Some will turn up without even knowing which experiment they will be attempting. So this year I’ve tried to turn this around.
The great thing about the theory of tertiary education is that when there is a problem, the solution is often simple – and that is to pay attention to what you are assessing. "If you want to change student learning … change the assessment" (G. Brown, J. Bull and M. Pendlebury, Assessing Student Learning in Higher Education, Routledge, London and New York, 1997). The issue, I think, was that I was never actually getting the students to plan anything. They learn that they can get good marks without doing any preparation beforehand, because the instructions for the lab are provided to them pretty much in full.
So this year I’ve forced them to prepare for a couple of experiments, by removing the instructions. Instead, I gave them the task they had to do, and let them get on with working out how it should be done, using what equipment, etc. Since we use some moderately complicated lab equipment, I chose to ‘pair-up’ experiments – one week to introduce them to the equipment, the next to give them an experiment to do (without instructions) that used that equipment. That way, learning to drive the equipment did not become a distraction.
For the most part (around three quarters of them), students overcame their initial hesitation (horror?) and tackled this very well. Most enjoyed it, and thought the approach was beneficial. However, the other quarter really didn’t like it. Appraisal forms, a focus group, and casual conversations with the students in the lab tell me this.
I gave my talk and there was a fair bit of discussion afterwards. The audience (mostly secondary teachers and tertiary teachers with a strong interest in education) thought that the way these experiments were assessed needed very careful thought to get the most out of the students. Was I assessing the ‘planning’ task itself (and how?), the end results of the planning, or something else? I thought I was assessing ‘planning’, as well as how well the student carried out and documented the experiment after the planning, but possibly this was not transparent enough to some of the students. That’s worth working on for next year.
Also, was I concerned that students might get their experiment ‘planned’ by someone else – e.g. by consulting another student in the group who had done that experiment in a previous week? Personally, this doesn’t bother me – in fact, I would encourage such consultation, as it shows students are taking the task seriously. If a student finds it easier to learn from other students rather than from me, I have no problem with that. If the end result is that he or she learns (and I mean ‘learn’, not ‘parrot’) what I wish them to learn (which is more than just facts), then I have no problem with whatever route they take.
I was encouraged by a final comment by a lecturer who had done a similar thing with a large first-year class (in contrast to my small second-year class) and found very similar results – generally successful and well-liked by students, but with a significant minority that had strong views the other way.
Back on-line now after a week in Melbourne at the Australian Institute of Physics conference.
I have lots of good stuff to blog about, including optomechanics (using light to cause vibrations), physics education (lots on this), the Large Hadron Collider and complicated models of things that might not even exist, but I’ll do this one on climate change.
On Wednesday, we had a very colourful and dynamic plenary lecture by David Karoly, from the University of Melbourne. In short, it was a rant about Ian Plimer’s recent book about climate change, and how certain high-profile Australian politicians (e.g. leader of the opposition Tony Abbott) like to draw their science from unscientific sources.
Essentially, Prof Karoly went through Ian Plimer’s major items of evidence for non-human causes of recent climate change and rebutted them. He covered a lot of ground in a 45-minute talk, which was rather too fast to take in, but the take-home message was that the science says the bulk of recent warming of the earth’s climate is caused by human activities (and not by Plimeresque magic underwater volcanoes).
Anyway, the most useful part of the talk for me came in the ‘questions’ section at the end. Someone stood up and said something along these lines: "I am a physicist, but not a climate physicist. What can I do to tackle disinformation in this field?" Prof Karoly’s response was to say that we should help people to see what good science actually is. There has been a lot of good science done on climate change. Karoly suggested that non-climate physicists (like me) look at the booklet "Science of Climate Change – Questions and Answers", published by the Australian Academy of Science. (You may do the same, by clicking the link.)
Real Science in easy chunks (Dec 02)
Last week I took part in a ‘Science Sampler Day’ at Ruakura in Hamilton. The idea was to take some really good year 9 school children and give them a day’s exposure to some real science. It was run by another Hamilton-based scientist, Liz Carpenter, and I thought it was a great success.
Throughout the day, the children were in small groups and rotated around different science activities. It was intentionally pretty rapid-fire, with each activity lasting only about ten minutes. That gave them the opportunity to sample lots of different areas of science. Examples included the strength of materials, water quality, measuring a runner’s speed and using an infra-red camera to measure body temperature – very varied, but all very exciting too.
I took along some electroencephalogram (brain-wave) recording equipment. I think it turned out to be a good choice of activity – on the one hand it looks quite high-tech and impressive, and I think being wired up and having your brain waves monitored is quite fun – but in ten minutes I could also use it to illustrate that things the children are already learning at school have real application – notably electricity and circuits, and a bit of maths too.
I was certainly impressed with the range of questions and comments that I got – for example speculating about the uses of this (e.g. monitoring of anaesthesia) – and thoughts about ‘what would happen if..?’. There was some good thinking going on, which tells me that the day was a success.
Just which of these local children will turn out to be world-class scientists I can’t answer, but I would like to think that it has shown some of them that science covers a huge area of application. Thank you Liz for organizing it.
I’ll be conferencing next week so blogging might be a bit hit-or-miss, but I’m sure I’ll return with lots of blog-fodder for the holiday period.
Earth currents (Nov 29)
Five hundred and seventeen for one. That’s more like it. Looking forward to more of the same in Adelaide.
So, physics. Last week I was doing a bit of work in the lab with a student, trying to track down why his instrumentation wasn’t working. We’re still at it; what he’s trying to do is quite complicated, but we made some progress. One thing we noticed was that there were multiple ground points in the circuit.
What do I mean by this? A lot of electronic equipment is earthed (‘grounded’). It’s basically an electrical safety thing. It means that any metal on the outside of the equipment (the stuff you can touch) is connected, via the power cable and the earth pin on the plug, to a large piece of metal somewhere outside the building that is hammered well into the ground. This means the outside of the equipment sits at the same electric potential as your feet. If you touch the case with your hand, there is no potential difference between your hand and your feet (which are usually touching the ground), and so no electric current flows through you. If there’s an electrical fault and the outside case suddenly becomes live, a large current will flow through the earth wire to ground, blowing the fuse.
But a problem can happen if you are using many pieces of electrical equipment as part of an electrical circuit. For example, the negative input to an oscilloscope is often (but not always) connected to ground. When you measure the potential difference in your circuit with such an oscilloscope, the point where you connect the negative terminal becomes ‘grounded’. If you then used a second oscilloscope to monitor another potential difference, you could have two points in your circuit that are forced to a ‘ground’ potential.
That doesn’t work. What happens is that you have introduced a short-circuit between the two ground points. In effect, you’ve connected a wire between them. However, the ‘wire’ is slightly obscure – it journeys from the negative input of the first oscilloscope, down the earth wire of the power cord, where it probably joins somewhere with the earth wire of the power cord from the second oscilloscope, then it goes up that earth wire and to the negative terminal of the second oscilloscope. There could be a substantial current flowing around this loop. What it will do to your circuit will depend on what the circuit is, but for sure it will mean that the circuit won’t do what you want it to.
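Just how big can the current be? A back-of-the-envelope Ohm’s-law estimate (the numbers are illustrative, not measured):

```python
# A ground loop in miniature: two 'earthed' points in the circuit are
# effectively joined by a wire running down one power cord and up the
# other. Even a small potential difference driven around that
# low-resistance loop gives a current that can swamp a small signal.

def loop_current(potential_difference_V, loop_resistance_ohm):
    """Current (A) circulating around the earth loop, from Ohm's law."""
    return potential_difference_V / loop_resistance_ohm

# e.g. 50 mV of mains-induced difference around a 0.5 ohm earth loop:
i = loop_current(0.05, 0.5)
print(f"{i * 1000:.0f} mA")  # enormous next to a microvolt-level signal
```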
Usually, with a spot of thinking, you can reorganize what you’re plugging in to the circuit to measure it so that you avoid the problem.
The moral – you can only have one earth point on a circuit. Watch what you plug in to monitor the circuit – it may be earthed.
Analogue Computing (Nov 26)
What a dismal and predictable start to the Ashes. Turn your back on the computer screen for five minutes and suddenly England have lost three wickets.
Anyway, yesterday I was in Auckland, talking about research progress with a group that we’ve had strong links with in the past. (By ‘we’ I mean the Waikato cortical modelling group.) One of the talks we heard was a very enthusiastic affair on re-invigorating analogue computing.
The word ‘analogue’ has relatively recently come to mean simply ‘not digital’, but we were reminded of just where the usage came from. An analogue computer is an electrical analogue of another system. For example, we can create an electrical system whose underlying physics is entirely analogous to that of a physical system (e.g. a mechanical system) we wish to study. Building the electrical system and seeing how its voltages change with time will then tell us exactly how the mechanical system will change with time. So we have ‘solved’ the behaviour of the mechanical system just by building an electrical circuit.
The operational amplifier allows us to make small circuit elements that add, take logs and, very importantly, integrate or differentiate. This means we can put these elements together to make a system that obeys a set of differential equations. For example, if you wish to study the trajectory of a projectile in three dimensions (with air resistance), you can write the underlying physics as six coupled first-order differential equations (two for each of the x, y and z components). You can then make up a circuit with six operational amplifiers whose voltages obey the same equations. If you want to change, say, the coefficient of air resistance, you can do this by changing a few resistor values.
Let the circuit run, and, hey-presto, you have the solution to your problem. There is a degree of elegance here that is lost with a digital interpretation of the same problem. In the digital method, we would have to break down the problem into a series of small time steps, and ask ourselves the question – if my system is in state X now, what state will it be in at a small time in the future? Then, knowing what state it is in a small time in the future, we can ask the same question again – what state will it be in at another small step into the future? And so forth.
Of course, the real world doesn’t work in discrete time steps, but we have to introduce them to get a digital computation to work. We then try to balance accuracy with run-time – to get the most accurate solution we need small time steps, but if you have small time steps you need more of them, which takes longer to compute. In the analogue method there is no such problem. Your circuit just does its stuff in continuous time, just like a real system. Much more elegant.
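For contrast, here is what the digital method looks like for the projectile example above: a bare-bones Euler time-stepper (my own sketch, with made-up drag and launch values), where the choice of dt is exactly the accuracy-versus-run-time trade-off that the analogue circuit never has to make.

```python
# Digital solution of the projectile-with-air-resistance problem:
# per axis, dx/dt = v and dv/dt = a(v) -- the six coupled first-order
# equations -- advanced in discrete Euler steps of size dt. Shrinking
# dt improves accuracy but costs more steps; the analogue circuit
# avoids this trade-off by running in continuous time.

g = 9.81   # gravitational acceleration, m/s^2
k = 0.1    # linear drag coefficient per unit mass, 1/s (made up)

def euler_step(pos, vel, dt):
    """Advance position and velocity by one Euler time step."""
    acc = [-k * vel[0], -k * vel[1], -g - k * vel[2]]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    vel = [v + a * dt for v, a in zip(vel, acc)]
    return pos, vel

# Launch at about 20 m/s, 45 degrees in the x-z plane; step until landing.
pos, vel = [0.0, 0.0, 0.0], [14.14, 0.0, 14.14]
dt, t = 0.001, 0.0
while pos[2] >= 0.0:
    pos, vel = euler_step(pos, vel, dt)
    t += dt
print(f"range ~ {pos[0]:.1f} m, flight time ~ {t:.2f} s")
```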