SciBlogs

Posts Tagged experiment

Engineering, lego and line followers Marcus Wilson Aug 19

No Comments

In the last few weeks I've been working with some second-year software engineering students on a design project. Their particular task is to build (with Lego – but the high-tech variety) a robot that can follow a white line on a bench. It's fun to watch them play with different ideas and concepts – there's the occasional disaster when the robot roars off at high speed in an unexpected direction and falls off the bench top. 

To produce something that approximately works isn't that difficult. We can use a couple of lights and detectors, sitting either side of the white line. If the robot is tracking straight, neither detector gets much reflection. But if one records a high amount of reflected light (and so is on top of the line) we need to turn the robot one way; if it's the other that records a high amount, we need to turn the other way. Indeed, many, many years ago I made something very similar using analogue electronics (a few LEDs, photocells, transistors etc. and a couple of motors to turn the wheels). It approximately worked, but there were a lot of conditions that would fool it – give it shadows and sharp corners to deal with and it was lousy. 

The lego robots that the students have can be programmed – and as such there is a huge array of different options for their control. The exercise is just as much in the development of the software as the hardware. Indeed, since these students are software engineering students, that is the bit they are most familiar with. 

One thing we're trying to get them to think about is different concepts. It's easy to think of one solution and just go with it. But is it the best solution? In engineering we can't afford to develop just the first idea that comes into our heads. We don't really have much idea about what is 'best' until we think through other possibilities and assess them against relevant criteria. Too often we can be constrained by traditional thinking – "it has to be done that way" – without really considering novel options. Two light sensors might work, but would three (or even four) be better? How are they best placed? What about sensors that aren't rigidly mounted but can move (actively search for the line)? The possibilities are almost endless. 

But the hardware is only half the problem. How should the robot best respond to the input signals? Simply turning one way or the other is easy to implement, but can lead to excessive oscillation. There are smarter control schemes available (e.g. Proportional-Integral-Derivative, or PID, control), but at the cost of increased complexity. Is it worth pursuing them?
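For flavour, here's a minimal sketch of the PID idea in Python (the gains and the sensor-error convention are purely illustrative – nothing here comes from the students' actual robots). The steering command is built from three terms: the current error, its running sum, and its rate of change.

```python
class PID:
    """Minimal PID controller sketch. Gains and timestep are illustrative only."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error (the 'I' term)
        self.prev_error = 0.0    # last error, for the derivative ('D') term

    def update(self, error):
        """Return a steering command for the current error reading."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# For a line follower, a natural error signal is the difference between the
# two light-sensor readings; the output then sets the steering.
pid = PID(kp=1.0, ki=0.1, kd=0.05, dt=0.02)
print(pid.update(0.5))
```

The derivative term is what damps the excessive oscillation that simple turn-left/turn-right control produces: it opposes rapid changes in the error before the robot overshoots the line.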

These are questions that the students need to think about with their project. We can get them to do that (rather than just thinking up one solution that might work and considering nothing else) by setting the assessment tasks appropriately. So they are judged not just on how well their robots can follow the white line, but on what concepts they thought about, and whether they selected one appropriately using reasonable specifications and design criteria (i.e. how well they followed the established process for engineering design). In fact, following the design process well should ensure that the end result actually does do a good job of following the line accurately, repeatedly and quickly. 

There are still several weeks until the end of semester, when these line-following robots need to be perfected. They'll be tested at the Engineering Design Show where we'll find out how to build a good line-follower.

 

Going down the plughole Marcus Wilson Jul 04

8 Comments

Being a father of an active, outdoor-loving two-year-old, I am well acquainted with the bath. Almost every night: fill with suitable volume of warm water, check water temperature, place two-year-old in it, retreat to safe distance. He's not the only thing that ends up wet as he carries out various vigorous experiments with fluid flow. 

One that he's just caught on to is how the water spirals down the plug-hole. Often the bath is full of little plastic fish (from a magnetic fishing game), and if one of these gets near the plug hole it gets a life of its own. It typically ends up nose-down over the hole, spinning at a great rate as it gets driven round by the exiting water. 

The physics of rotating water is a little tricky. There are two key concepts thrown in together: first, the idea of circular motion, which involves a rotating piece of water having a force on it towards the centre (centripetal force); second, viscosity, in which a piece of water can have a shear force on it due to a velocity gradient in the water. Although viscosity has quite a technical definition, colloquially one might think of it as 'gloopiness'. [Treacle is more viscous than water; pitch is more viscous still, as the famous pitch-drop experiments demonstrate. The oft-quoted example of glass – that the windows of very old buildings are thicker at the bottom because the glass has flowed over the centuries – is, alas, a myth: glass at room temperature is an amorphous solid.] In rotational motion there's a subtle interplay between these two forces which can result in the characteristic water-down-plughole motion. 

In terms of mathematics, we can construct some equations to describe what is going on and solve them. We find, for a sample of rotating fluid, that two steady solutions are possible. 

The first solution is what you'd get if all the fluid rotated at the same angular rate – the velocity of the fluid is proportional to the radius. This is what you'd get if you put a cup of water on a turntable and rotated it – all the water rotates as if it were a solid.

The second solution has the velocity inversely proportional to the radius – so the closer the fluid is to the centre, the faster it is moving. This is like the plughole situation where a long way from the plug hole the fluid circulates slowly, but close in it rotates very quickly. Coupled with this is a steep pressure gradient – low pressure on the inside (because the water is disappearing down the hole) but higher pressure out away from the hole. Obviously this solution can't apply arbitrarily close to the rotation axis because then the velocity would be infinite. This is where vortices often occur. (Actually, Wikipedia has a nice entry and animations on this, showing the two forms of flow I've described above). 
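For those who like their fluid mechanics in symbols, the two steady profiles just described (writing $v$ for the speed of the water at radius $r$ from the axis, in standard notation) are:

```latex
v(r) = \Omega r \quad \text{(solid-body rotation, angular rate } \Omega\text{)}, \qquad
v(r) = \frac{\Gamma}{2\pi r} \quad \text{(free vortex, circulation } \Gamma\text{)}
```

The first grows linearly with radius; the second blows up as $r \to 0$, which is why the idealized free vortex can't hold arbitrarily close to the axis.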

A Couette viscometer exploits these effects to measure the viscosity of a fluid. Two coaxial cylinders are used, with the fluid between them. The outer cylinder is rotated while the inner one is kept stationary, and the torque required enables us to calculate the viscosity of the liquid.
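As a sketch of that calculation (assuming long cylinders, laminar flow and the standard textbook torque formula, with end effects neglected – all the numbers below are invented for illustration):

```python
import math

def couette_viscosity(torque, omega, r_inner, r_outer, length):
    """Infer dynamic viscosity (Pa.s) from the torque on a Couette viscometer.

    Uses the standard laminar-flow result for long coaxial cylinders:
        T = 4*pi*mu*L*Omega * ri^2 * ro^2 / (ro^2 - ri^2)
    rearranged for mu. End effects are neglected.
    """
    ri2, ro2 = r_inner**2, r_outer**2
    return torque * (ro2 - ri2) / (4.0 * math.pi * length * omega * ri2 * ro2)

# Illustrative numbers: 2 cm / 2.5 cm cylinders, 10 cm long, spun at 10 rad/s
mu = couette_viscosity(torque=1.4e-5, omega=10.0,
                       r_inner=0.02, r_outer=0.025, length=0.1)
print(mu)  # of order 1e-3 Pa.s, i.e. roughly water
```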

 

Dismantling the health and safety pyramid Marcus Wilson Jun 04

No Comments

A few days ago I was updating one of the lectures I do for my Experimental Physics course. I was putting in a bit more about safety and managing hazards, which are things that are associated with doing experiments for real. When I was a student, we didn't learn anything about this – my first introduction to the ideas behind hazard management came only when I joined an employer. Before then, I simply hadn't thought about the issues involved. 

One of the things that gets bandied about in Health and Safety discussions is Heinrich's pyramid, dating back to 1931. The basic idea is that accidents don't just happen out of the blue. For every fatal accident there are several non-fatal but major accidents; for every major accident there are several minor accidents; and for every minor accident there's a whole heap of incidents (things that could have been accidents if circumstances had been different). The implication is then that by addressing the minor things that crop up frequently, we make the workplace a safer place. I've seen various versions of the pyramid online, but here's one:

[Image: the Heinrich safety pyramid]

Diagram taken from http://www.enerpipeinc.com/HowWeDoIt/Pages/safety.aspx 

That all seems to make some sense. However, searching around for a good picture to include in my lecture notes, I came across this article by Fred Manuele:

http://www.asse.org/professionalsafety/pastissues/056/10/052_061_f2manuele_1011z.pdf

It calls into question the whole basis of this pyramid and its implications for health and safety in the workplace. Specifically, Manuele reports that:

1. No-one can trace Heinrich's original data.

2. If it exists, the extent to which data from the 1920s and '30s applies in today's workplace is dubious. 

3. The pyramid idea is counter-productive to ensuring a safe working environment, since it over-emphasizes the importance of minor non-compliance issues (not wearing one's lab coat) and focuses attention away from the major, systemic failings in senior management – and even government regulators and legislators – whence the really big events tend to come. [Think Pike River, where MBIE's own investigation points the finger at itself (in the form of its predecessor, the Ministry of Economic Development) for carrying out its regulatory function in a 'light-handed and perfunctory way'.]

There are some lovely statistics included on what a focus on reducing small incidents actually does. Here are some US figures on the reduction in accident-related insurance claims between 1997 and 2003 (F. Manuele, "State of the Line," National Council on Compensation Insurance, 2005, Boca Raton, FL):

Less than $2,000: down 37%

$2,000 – $10,000: down 23%

$10,000 – $50,000: down 11%

Above $50,000: down 7%

See the issue here? Focusing attention on small incidents and small accidents does wonders for reducing small incidents and small accidents, but very little for reducing the big accidents. That's because, as Manuele describes, they have different underlying causes. 

The paper's worth a read, and cuts against what I've been taught over several years about health and safety. One notable feature is that it actually draws on hard data, rather than myth – which is how Manuele labels Heinrich's work. 

And the consequence for my experimental physics students? I shan't be including that pyramid in their lectures.

 

 

 

The gearbox problem Marcus Wilson May 27

3 Comments

At afternoon tea yesterday we were discussing a problem regarding racing slot-cars (electric toy racing cars).  A very practical problem indeed! Basically, what we want to know is how do we optimize the size of the electric motor and gear-ratio (it only has one gear) in order to achieve the best time over a given distance from a stationary start?

There are lots of issues that come in here. First, let's think about the motors. A more powerful motor gives us more torque (and more force for a given gear ratio), but comes at the cost of more mass. That means more inertia and more friction. But given that the motor is not the total weight of the car, it is logical to think that stuffing in the most powerful motor we can will do the trick. 

Electric motors have an interesting torque against rotation-rate characteristic. They provide maximum torque at zero rotation rate (zero rpm), completely unlike petrol engines, which need a few thousand rpm to give their best torque – so electric motors give the best acceleration from a standing start. As the rotation rate increases, the torque decreases, roughly linearly, until a point is reached where the motor can provide no more torque. For a given gear ratio, the car therefore has a maximum speed – it's impossible to accelerate the car (on a flat surface) beyond this point. 

Now, the gear ratio. A low gear gives a high torque at the wheels, and therefore a high force on the car and high acceleration. That sounds great, but remember that a low gear means the engine rotates faster for a given speed of the car. Since the engine has a maximum rotation rate (where the torque goes to zero), a low gear gives the car good acceleration from a stationary start but a lower top speed. Will that win the race? That depends on how long the race is. It's clear (pretty much) that, to win a race over a straight, flat track with the most powerful engine, one needs either a low gear (best acceleration, for a short race) or a high gear (best maximum velocity, for a long race). The length of the race matters for choosing the best gear. Think about racing a bicycle: if the race is a short distance (e.g. a BMX track), you want good acceleration; if it's a long race (a pursuit race at a velodrome), you want to get up to a high speed, and hence a huge gear.  

One can throw some equations together, make some assumptions, and analyze this mathematically. It turns out to be quite interesting and not entirely straightforward. We get a second-order differential equation in time with a solution that's quite a complicated function of the gear ratio. If we optimize to find the 'best' gear, it turns out (from my simple analysis, anyway) that the best gear ratio grows as the square root of the time of the race. For tiny race times you want a tiny gear (= massive acceleration); for long race times, a high gear. If one quadruples the time of the race, the optimum gear doubles. Quite interesting, and I'd say not at all obvious. 
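For anyone who wants to play with this themselves, here's a rough numerical sketch (all parameter values are invented for illustration, and this is my own reconstruction of the model, not the analysis from afternoon tea). It assumes a motor whose torque falls linearly to zero at some maximum rotation rate, takes the gear ratio G as wheel revs per motor rev (so a big G means a 'high' gear), solves the resulting equation of motion in closed form, and scans over G to find the one covering the most distance in a given race time. Quadrupling the race time should roughly double the optimum gear.

```python
import numpy as np

# Invented slot-car parameters, for illustration only
tau0 = 0.01       # motor stall torque, N.m
omega0 = 1000.0   # motor no-load angular speed, rad/s
m = 0.1           # car mass, kg
r = 0.01          # wheel radius, m

def distance(T, G):
    """Distance covered in race time T with gear ratio G (wheel rev per motor rev).

    With torque tau = tau0*(1 - omega/omega0), the equation of motion is
    dv/dt = a - b*v, which integrates to x(T) = (a/b)*(T - (1 - exp(-b*T))/b).
    No air resistance; flat track; standing start.
    """
    a = tau0 / (m * G * r)                    # initial acceleration
    b = tau0 / (m * G**2 * r**2 * omega0)     # 1/b is the time constant; a/b is top speed
    return (a / b) * (T - (1.0 - np.exp(-b * T)) / b)

def best_gear(T, gears):
    """Gear ratio from the candidate list that maximizes distance in time T."""
    return gears[np.argmax([distance(T, G) for G in gears])]

gears = np.linspace(0.01, 2.0, 20000)
g1 = best_gear(1.0, gears)   # optimum for a 1-second race
g4 = best_gear(4.0, gears)   # optimum for a race four times as long
print(g1, g4, g4 / g1)       # the ratio should come out close to 2
```

In this model the square-root law drops out exactly: the dimensionless product b·T takes the same value at every optimum, and since b scales as 1/G², the best G scales as √T.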

The next step is to relax some of the assumptions (like zero air resistance, and a flat surface) and see how that changes things. 

What it means in practice is that when you're designing your car to beat the opposition, you need to think about the time-scales for the track you're racing on. Different tracks will have different optimum gears.

Seeing in the dark Marcus Wilson May 21

2 Comments

No, nothing to do with carrots and vitamin A I'm afraid. 

With dark evenings and mornings with us now :(, Benjamin's become interested in the dark. It's dark after he's finished tea, and he likes to be taken outside to see the dark, the moon, and stars, before his bath. "See dark" has become a predictable request after he's finished stuffing himself full of dinner. It's usually accompanied by a hopeful "Moon?"  (pronounced "Moo") to which Daddy has had to tell him that the moon is now a morning moon, and it will be way past his bedtime before it rises. 

I haven't yet explained that his request is an oxymoron. How can one see the dark? Given that dark is the lack of light, what we are really doing is not seeing. But there's plenty of precedent for attributing the lack of something to an entity in itself, so 'seeing the dark' is quite a reasonable way of looking at it.  

One can talk about cold, for example. "Feel how cold it is this morning". It is heat, a form of energy, that is the physical entity here. Cold is really the lack of heat, but we're happy to talk about it as if it were a thing in itself. Another example: Paul Dirac in 1928 interpreted the lack of electrons in the negative energy states that arise from his description of relativistic quantum mechanics as being anti-electrons, or positrons. In fact, this was a prediction of the existence of anti-matter – the discovery of the positron didn't come until later, in 1932.  

In semiconductor physics, we have 'holes'. These are the lack of electrons in a valence band – a 'band' being a broad region of energy states where electrons can exist. If we take an electron out of the band we leave a 'hole'. This enables nearby electrons to move into the hole, leaving another hole behind them. In this way holes can move through a material. It's rather like one of those slidy puzzles – move the pieces one space at a time to create the picture. Holes are a little bit tricky to teach, to start with. Taking an electron out of a material leaves it positively charged, so we say a hole has a positive charge. That's a bit confusing – some students will start off thinking that holes are protons. Holes will accelerate if an electric field is applied (because they have positive charge) and so we can attribute a mass to the hole. That's another conceptual jump: how can the lack of something have a mass? Holes, because they are the lack of an electron, tend to move to the highest available energy states, not the lowest. Once the idea is grasped, we can start talking about holes as real things, and that is pretty well what solid-state physics textbooks do. It works to treat them as positively charged particles. It's easy then to forget that we're talking about things that are really the lack of something, rather than things in themselves. 

A more recent example is being developed in relation to the mechanics of materials as part of a Marsden-funded project by my colleague Ilanko. He's working with negative masses and stiffnesses on structures, as a way of facilitating the analysis of the vibrational states and resonances of a structure (e.g. a building). By treating the lack of something as a real thing, we often find our physics comes just a bit more easily. 

So seeing the dark is not such a silly request, after all.

 

 

 

 

Don’t trust the machine Marcus Wilson Apr 28

No Comments

Back to blogging, after a nice holiday in Taranaki dodging the rain showers (and, as it turned out, the volcano, which we never even got a glimpse of) and a frantic week of lab work while the undergraduates were away. 

Both were very interesting, but it's the lab work I'll talk about here. 

Something that I've learned over the years is that if something looks dodgy, it probably is. Obviously, when doing experimental work, we don't know what results we are going to get. (If we did, we wouldn't bother doing the research). It is true that sometimes results can surprise us. Sometimes this is the start of a discovery of a new phenomenon, which will make the experimenter famous. But more usually, much more usually, it's because you've stuffed something up in your method or analysis. If your data just looks wrong, it probably is.

We had this with our conductivity measurements in the lab two weeks ago. We were using a moderately high-tech (approx 10k NZD or so) piece of equipment to measure electrical impedance of our samples of biological tissue. The results were odd – we had an unexpected jump in conductivity as we changed frequency. It took a while to track down what the problem was. First, I talked to a colleague who used to work for the company that made the equipment we were using. He hadn't seen anything like it before, and offered a few suggestions as to what we might do to track down what was going on. There was the suggestion that it might even be a calibration failure in one of the machine's internal circuits. 

We did a few tests, and were still puzzled. We tested progressively simpler and simpler things, trying to isolate the problem. It was a good exercise in troubleshooting, really, and it took a while. We ended up testing the impedance of just a single 10 ohm resistor. We didn't expect this to be an issue. But when the machine told us that its impedance was 8 ohms for frequencies lower than 121 Hz and 6 ohms for frequencies above 121 Hz, we knew something was terribly wrong somewhere. Then the machine refused to work altogether. At this point the thought of ten thousand New Zealand dollars going up in smoke in front of our eyes did cross my mind, but only momentarily, since suddenly it kicked into life again and started reading 10 ohms. Then I just touched the front and it was 8 ohms. A bit more experimenting quickly narrowed the problem down to a dodgy lead.

That was all it was. One of our coaxial cables had a dodgy connection on it. We replaced the lead, and suddenly the results were all perfectly believable again. Only two days of work for two people, completely wasted by a cable worth only a few dollars.

The moral of the story: It pays to do some really simple tests of the equipment every time you use it. Don't just blindly trust the readout on the machine. Check it's working first. I recall now the words of one of our (now retired) technicians here – "If you have a perplexing problem with something electronic, it's a fair bet it's simply a dodgy connection." How true. 

A fun experiment to try at your desk Marcus Wilson Mar 13

3 Comments

I received the latest PhysicsWorld magazine from the Institute of Physics yesterday. A quick flick through it reveals a fantastic demonstration you can do with kids (or grown-up kids) to show how strong friction can be. Take two telephone directories, and interleave the pages (so every page of book A has a page from book B above and below it, and vice versa). Admittedly this takes some dedication, but that's what graduate students are for. Then try to pull the directories apart. In fact, the photo in the magazine shows two such interleaved directories being used in the centre of a tug-of-war. I have got to try this with my students. 

In fact, you don't need the patience to turn page-by-page through two phonebooks to do this. I've spent a couple of minutes interleaving my copy of the 84-page University of Waikato Science and Engineering Graduate Handbook with the slightly larger University of Waikato Science and Engineering Undergraduate Handbook.  (Some might say the two make a lot more sense arranged in this manner….) It didn't take too long to do. I can't pull them apart.

It's simply down to the large surface area that the interleaved books have. They are A5 in size (approx 21 cm x 15 cm), with 86 surfaces in contact (84 pages plus the two inside covers). That gives, very approximately, 27,000 cm² of contact area – around two or three square metres. That's pretty sizeable. A pair of telephone directories could come in at about 30 square metres or so! Lots of surface area gives lots of frictional force. 
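The arithmetic, for anyone checking along at home:

```python
# Quick check of the contact-area estimate for the interleaved handbooks
page_area_cm2 = 21 * 15            # one A5 page, approx 21 cm x 15 cm
surfaces = 84 + 2                  # 84 pages plus the two inside covers
total_cm2 = page_area_cm2 * surfaces
print(total_cm2, total_cm2 / 1e4)  # ~27,000 cm^2, i.e. ~2.7 m^2
```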

What makes something show on radar? Marcus Wilson Mar 11

No Comments

One of the questions on everyone's lips at the moment is "How does a large passenger jet simply disappear from radar without trace?"  It is clearly very distressing for anyone with friends or relatives on board – not knowing what has happened. As I write this, there still seems to be a complete lack of clear evidence pointing one way or another. I'm not an aviation expert so I really can't add anything of value here. But I can turn the question around to one of more physics relevance, which is "What allows a plane to be detected with radar in the first place?"

Radar, in concept at least, is pretty simple physics. Its name comes from an acronym: RAdio Detection And Ranging. However, it has outgrown its acronym, since it now does more than simply detect and 'range' (tell the distance to), and 'radar' is now a word in itself, no longer spelled in uppercase. The basic idea is that a radio wave will reflect off a metal object (a plane, ship, your car…) and some of that wave will return to where it came from. To be pedantic, while we often think about the radio transmitter and detector being in the same place, this doesn't need to be the case – in fact the first radar systems had transmitters physically separate from the receivers. Anyway, we know the speed at which radio waves travel (pretty well the speed of light in air) and therefore, by timing the delay between transmitting and receiving, we know how far away the object is. By also knowing the direction the reflections come from, we can work out a position. 

It gets a bit more difficult in practice, since radio waves don't necessarily travel in straight lines, but can be bent by atmospheric conditions. And radar can tell us more than just position. For example, we can exploit the Doppler effect to measure how fast an object is travelling: waves reflecting from a moving object return with a different wavelength – measure the wavelength shift and you measure how quickly the object is moving towards or away from you. 
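The ranging and Doppler arithmetic is simple enough to sketch (the example numbers are invented for illustration):

```python
C = 299_792_458.0  # speed of light, m/s; radio waves in air travel at nearly this

def target_range(delay_s):
    """Round-trip echo delay to one-way distance: the pulse goes out and back,
    so the target distance is half the total path."""
    return C * delay_s / 2.0

def radial_speed(f_tx_hz, doppler_shift_hz):
    """For a reflecting target the Doppler shift is roughly 2*v*f/c
    (the factor 2 because the wave is shifted on the way out and back),
    so v = shift * c / (2*f)."""
    return doppler_shift_hz * C / (2.0 * f_tx_hz)

print(target_range(1e-3))         # a 1 ms echo puts the target ~150 km away
print(radial_speed(3e9, 4000.0))  # a 4 kHz shift at 3 GHz -> ~200 m/s closing speed
```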

So why does a metal object reflect radio waves? That's down to its high electrical conductivity ensuring that there must be no electric field at the surface.  The waves simply can't get into the material and are completely reflected. I won't bore you with the analysis of Maxwell's equations to show this – unless you happen to be in my third year electromagnetic waves class in which case I'll bore you with it – whoops, make that excite you with it – in a week's time.  Metal makes a pretty good shield for radio. 

Just what fraction of the power of the incident wave gets reflected back towards the transmitter can be tricky to calculate. It's encapsulated in a term known as the 'radar cross section' (RCS). The definition of RCS is a little tricky to wrap one's head around, but here it is: the radar cross section of an object, in a given direction and at a particular frequency, is the cross-sectional area of a perfectly reflecting sphere that would give the same power return as the object gives in that direction. In other words, imagine a large metal sphere that reflects the same amount of power as our plane does. Take its cross-sectional area (pi times the radius squared) and that's the RCS. A large RCS means a large amount of power returned.
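To make the definition concrete: in the optical limit (a sphere much larger than the wavelength), the RCS of a perfectly conducting sphere is just its geometric cross-section, so an RCS quoted in square metres maps directly onto an equivalent sphere size (the 100 m² figure below is an invented example, not a real aircraft's RCS):

```python
import math

def sphere_rcs(radius_m):
    """RCS of a large, perfectly conducting sphere: in the optical limit
    (radius >> wavelength) it equals the geometric cross-section pi*r^2."""
    return math.pi * radius_m**2

# An object with an RCS of ~100 m^2 returns as much power as a
# perfectly reflecting sphere of radius about 5.6 m
print(sphere_rcs(5.64))
```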

To some extent the RCS simply depends on how big an object is, but just as important is its shape. Geometry with right angles in it will cause large reflections back in the direction of the transmitter (think of a snooker table – if a ball bounces off two cushions its direction of travel is reversed, whatever the angle of incidence). Long edges also give large returns – they can act rather like antennas and re-radiate the incoming radiation. Unless you specifically set out to design an aircraft with a low RCS, the chances are you'll end up with something that has a pretty substantial RCS: the tail is at right angles to the fuselage, it has long straight wings, and it is made from highly reflective metal. 

And that means that a Boeing 777 isn't likely to vanish off a radar screen while it remains in one piece. 

 

 

Dem Cables Marcus Wilson Feb 21

No Comments

I've just been shifting around various bits of equipment and computers in our 2nd and 3rd year physics lab, to make way for an item that's shifting in there from a nearby lab. It's gone something like this…(rising in semitones, with apologies to the original performers) 

Da power socket is connected to da extension cord;

Da extension cord is connected to da monitor;

Da monitor is connected to da computer;

da computer is connected to da control box;

da control box is connected to da MRI machine;

da MRI machine is connected to da MRI-machine stand;  

da MRI-stand is connected to da floor*;  

Now why are there so many cables?

Dem cables, dem cables, dem power cables;

dem cables, dem cables, dem ethernet cables;

dem cables, dem cables, dem USB cables;

What a mess of knitting!  [Roll on wireless power transmission!]

 

*The MRI-stand is connected to the floor because we don't want the thing to move. The unit is calibrated for the position it's currently in; I'm not inclined to move it in a hurry. The other things, however, might be more sensibly located. 

Nanotechnology, asbestos and measurement Marcus Wilson Feb 04

No Comments

Last week I had a very interesting and useful visit to the Measurement Standards Laboratory in Lower Hutt. I went along with my summer scholarship student to discuss the measurement of electrical properties of biological tissue. While the procedure for measuring the conductivity of a solid sample is pretty well established, biological tissue is soft, squishy, easily damaged, reacts chemically with whatever you try to measure it with, and changes its properties considerably between 'alive' and 'dead' states. There's no clear-cut method here.  

We were also shown around some of the labs. What I found particularly interesting is the progress towards ditching the kilogram. By that I don't mean getting rid of the unit and using pounds and ounces, I mean doing away with the need to have a single, standard kilogram locked away in Paris. One can get away with this problematic beast by using a watt balance and defining Planck's constant. More on that later, I think.

But today's blog entry is about a discussion I had one evening in a cafe in Wellington train station, with a friend of mine who works for what is now known as Worksafe. As their name suggests, their purpose is to ensure workplaces are healthy and safe. (Not that this replaces the obligation on everyone to ensure a safe work environment, I should add.) My friend has been having some discussions around the health issues associated with nanotechnology. Engineering tiny things has opened a huge range of possibilities – intensely strong fibres, miniature motors, molecular-sized electronics – it's all possible, and it's going to become more commonplace. But have the risks of such technology been thought about? More specifically, are the monitoring processes keeping pace with the development of the technology? My friend refers to nanotechnology as 'the asbestosis of the future'. That might prove to be unfounded, but the point is we simply don't know. Asbestos was a wonder-material used intensively in the 20th century, and a huge number of buildings (including the one I'm sitting in as I write this) are loaded with the stuff. It makes a great fire retardant and insulator, with the teeny-weeny drawback that inhaling asbestos dust can kill you. It is a massive headache for Worksafe, as the whole country is full of the stuff – cue the story about the arguments between EQC, insurance companies and ACC regarding what to do with the great many earthquake-damaged houses found to contain (now exposed) asbestos.

And nanotechnology could follow. By definition, it consists of tiny, tiny particles. What will they do in someone's lungs for twenty years? Who knows? How does one monitor the exposure to nanotechnology? That's maybe a more useful question to ask, and one to pursue properly. We have measures of exposure to radiation, for example, that we can apply to those who work with it, so what about a practical measure of nanotechnology exposure, that can be implemented in a workplace? An open question. 

 

 

 
