Posts Tagged Newton’s laws

Static friction is something sticky (as is Scholarship physics) Marcus Wilson Feb 13


In January I had a go at the 2014 Scholarship Physics Exam, as I've done for the last couple of years. Sam Hight from the PhysicsLounge came along to help (or was it to laugh?). The idea of this collaboration is that I get filmed attempting the Scholarship paper for the first time. This means that, unlike some of the beautifully explained answers you can find on YouTube, you get my thoughts as I think about the question and how to answer it. Our hope is that this captures some of the underlying thinking behind the answers – e.g. how do you know you're supposed to start this way rather than that way? What are the key bits of information that I recognize as important – and why do I recognize them as such? So the videos (to be put up on PhysicsLounge) will demonstrate how I go about solving a physics problem (or, in some cases, making a mess of one), rather than providing model answers, which you can find elsewhere. We hope this is helpful. 

One of the questions for 2014 concerned friction. This is a slippery little concept. Make that a sticky little concept. We all have a good idea of what it is and does, but how do you characterize it? It's not completely straightforward, but a very common model is captured by the equation f = μN, where f is the frictional force on an object (e.g. my coffee mug on my desk), N is the normal force on the object due to whatever it's resting on, and μ (the Greek letter mu) is a proportionality constant called the coefficient of friction. 

What we see here is that if the normal force increases, so does the frictional force, in proportion to the normal force. In the case of my coffee mug on a flat desk*, that means that if I increase the weight of the mug by putting coffee in it, the normal force of the desk holding it up against gravity will also increase, and so will the frictional force, in proportion.

Or, at least, that's true if the cup is moving. Here we can be more specific and say that the constant μ is called the 'coefficient of kinetic friction': kinetic implying movement. But what happens when the cup is stationary? Here it gets a bit harder. The equation f = μN gets modified a bit: f ≤ μN. In other words, the maximum frictional force on a static object is μN, where μ is now the 'coefficient of static friction'. Another way of looking at that is that if the frictional force required to keep an object stationary is bigger than μN, then the object will not remain stationary. So in a static problem (nothing moving) this equation alone doesn't tell you what the frictional force is. If I tip my desk up so that it slopes, but not enough for my coffee mug to slide downwards, the magnitude of the frictional force acting on the mug due to the desk is determined by the component of gravity down the slope. The greater the slope, the greater the frictional force. If I keep tipping up the desk, eventually the frictional force needed to hold the cup there exceeds μN, and off slides the cup. 
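To make the tipping-desk picture concrete, here's a minimal Python sketch (my own illustration, with an assumed coefficient of static friction – the post doesn't give a value). The mug's mass cancels out, so the critical tilt angle depends only on the coefficient: the mug slides once tan(angle) exceeds it.

```python
import math

def mug_slides(theta_deg, mu_s):
    """True if a mug on a desk tilted at theta_deg slides.
    Friction needed to hold it: m*g*sin(theta).
    Maximum static friction:    mu_s * m*g*cos(theta).
    The mass m cancels, so sliding starts when tan(theta) > mu_s."""
    return math.tan(math.radians(theta_deg)) > mu_s

mu_s = 0.4  # assumed value for a mug on a wooden desk
critical_angle = math.degrees(math.atan(mu_s))
print(round(critical_angle, 1))                    # about 21.8 degrees
print(mug_slides(20, mu_s), mug_slides(25, mu_s))  # False True
```

Tilt the desk a degree past the critical angle and no amount of static friction can supply the force required, which is exactly the "off slides the cup" moment above.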

What this means is that when we are faced with friction questions, we have to think about whether we have a static or a kinetic case. Watch the videos (Q4) and you'll see how I forget this fact (I blame it on a poorly written question – that's my excuse anyway!). 


*N.B. I have just picked up a new pair of glasses, and consequently previously flat surfaces such as my desk have now become curved, and gravity fails to act downwards. I expect this local anomaly to sort itself out over the weekend. 

P.S. 17 February 2015. Sam now has the videos uploaded on PhysicsLounge.

Check those approximations Marcus Wilson Jul 15


A common technique in physics is 'modelling'. This is about constructing a description of a physical phenomenon in terms of physical principles. Often these can be encapsulated with mathematical equations. For example, it's common to model the suspension system of a car as two masses connected by springs to a much larger mass. Here, the large mass represents the car body, one of the small masses represents the wheel, and the other the tyre. The two springs represent the 'spring' in the suspension system (which on a car is usually a curly spring – though it can take other forms on trucks or motorbikes), and the tyre (which has springiness itself). We can then add in some damping effect (the shock absorber). What we've done is to reduce the actual system into a stylized system that maintains the essential characteristics of the original but is simpler and more suitable for making mathematical calculations. 

That's great. We can now work on the much simpler stylized system, and make predictions about how it behaves. Transferring those predictions back to the real situation helps us to design suspension systems for real cars. 

There are, however, some drawbacks. We have to be sure that our stylized system really does capture the essential features of the actual system; otherwise we can get predictions completely wrong. On the other hand, we don't want to make our model too complicated, otherwise there is no advantage in using the model. "A model should be as simple as possible, but not simpler", as Einstein might have said.

There's another trap for modellers, which is going outside the realm of applicability for the model. What do I mean by that? Well, some simplifications work really well, but only in certain regimes. For example, Newton's laws are a great simplification on relativistic mechanics. They are much easier to work with. However, if you use them when things are moving close to the speed of light, your answers will be incorrect. They may not even be close to what actually happens. We say that Newton's laws apply when velocities are much less than the velocity of light. When that's the case (e.g. traffic going down a road) they work just fine – you'd be silly to use relativity to improve car safety – but when that's not the case (e.g. physics of black holes) you'll get things very wrong indeed. 
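As a rough illustration of that realm of applicability (the numbers here are my own choosing, not from the post), compare Newtonian and relativistic kinetic energy for a 1 kg mass at an everyday speed and at near-light speeds:

```python
import math

c = 2.998e8   # speed of light, m/s

def ke_newton(m, v):
    """Newtonian kinetic energy."""
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    """Relativistic kinetic energy: (gamma - 1) * m * c^2."""
    gamma = 1 / math.sqrt(1 - (v / c)**2)
    return (gamma - 1) * m * c**2

for v in (30.0, 0.1 * c, 0.9 * c):   # motorway speed, then relativistic speeds
    print(f"v = {v:.3g} m/s: Newton {ke_newton(1, v):.4g} J, "
          f"relativistic {ke_relativistic(1, v):.4g} J")
```

At 30 m/s the two answers are indistinguishable for any practical purpose; at 0.9c the relativistic value is roughly three times the Newtonian one. That's the boundary of the regime in action.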

A trap for a modeller is to forget where the realm of applicability actually is. In the rush to make approximations and simplifications, just where the boundary lies between reasonable and not reasonable can be forgotten. I've been reminded of this this week, while working with some models of the electrical behaviour of the brain. Rather than go into the detail of that problem, here's a (rather simpler!) example I came across some time ago. 

I was puzzling over some predictions made in a scientific paper, using a model. It didn't quite seem right to me, though I struggled for a while to put my finger on exactly what I didn't like about what the authors had done. Then I saw it. There were some complicated equations in the model, and to simplify them, they'd made a common mathematical approximation: 1/(1+x) is approximately equal to 1−x. That's a pretty reasonable approximation so long as x is a small number (rather less than 1). We can see how large it's allowed to get by looking at the plot here. The continuous blue line shows y = 1/(1+x); the dotted line shows y = 1−x. (The inset shows the same, at very small x.) 




We can see for very small x (smaller than 0.1 or so) there's not much difference, but when x gets above 0.5 there's a considerable difference between the two. When x gets larger still (above 1) we have the approximation 1-x going negative, whereas the unapproximated 1/(1+x) stays positive. It's then a completely invalid approximation. 
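The comparison in the plot is easy to reproduce numerically; here's a quick sketch tabulating the exact expression against its first-order approximation:

```python
def exact(x):
    return 1 / (1 + x)

def approx(x):
    return 1 - x   # first-order Taylor expansion, valid only for small x

for x in (0.01, 0.1, 0.5, 1.5):
    print(f"x = {x}: 1/(1+x) = {exact(x):.4f}, 1-x = {approx(x):.4f}")
```

At x = 0.01 the two agree to about one part in ten thousand; at x = 1.5 the approximation has gone negative while the exact value is still positive – precisely the failure described above.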

However, in this paper, the authors had made calculations and predictions using a large x. What they got was just, simply, wrong, because they were using the model outside the region where it was valid. 

This kind of thing can be really quite subtle, particularly when the system being modelled is complicated (e.g. the brain) and we are desperate to make simplifications and approximations. There's a lot we can do that might actually go beyond what is reasonable, and a good modeller has to look out for this. 

Managing ignition timing Marcus Wilson Jun 13


I've just been at a great lecture by Peter Leijen as part of our schools-focused Osborne Physics and Engineering Day. He's an ex-student of ours, who did electronic engineering here at Waikato and graduated just a couple of years ago. He now works in the automotive electronics industry – an incredibly fast-growing one. So many of a car's systems are now driven by electronics rather than mechanics that being a car 'mechanic' now means being a car 'electronic engineer' just as much as it does being a mechanic. 

One interesting piece of electronics is the ignition timing system. The mechanism that produces the spark in the cylinders from a 12 volt battery is really old, standard technology: one uses a step-up transformer and kills the current to the primary coil by opening a switch. The sudden drop in current creates a sudden reduction in magnetic flux in the transformer, and these collapsing flux lines, cutting the secondary coil, create a huge voltage – enough for the spark plug to spark. That really is easy to do. The tricky thing is getting it to spark at the right time. 

One needs the fuel/air mix in the cylinder to be ignited at the optimum time, so that the resulting explosion drives the piston downwards. Ignite too early, while compression is still going on, and you'll simply stop the piston rather than increasing the speed of its motion. Ignite too late, and you won't get the full benefit of the explosion. It's rather like pushing a child on a swing: to get the amplitude of the motion to build, you need to push at the optimum time – just after they've started swinging away from you. 

All this is complicated by the fact that the explosion isn't instantaneous. It takes a small amount of time to happen. That means that at very high revolution rates one has to be careful about exactly when the ignition is triggered. It has to be earlier than at lower rates, particularly if the throttle setting is low, because the explosion then takes up a significant proportion of the period of the engine's cycle. This is called 'ignition advance'. 

On newer cars, this is done electronically. A computer simply 'looks up' the correct angle of advance for the rpm and throttle setting of the car, and applies it. The result: a well-running, efficient engine, using all the power available to it. Or so you might think.
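The look-up idea can be sketched in a few lines. The numbers in this map are entirely made up for illustration – real ECU tables are calibrated per engine – but the interpolate-between-calibration-points logic is the essence of it:

```python
import numpy as np

# Hypothetical advance map, in degrees before top dead centre.
# These values are invented for illustration, not from any real engine.
rpm_axis      = np.array([1000, 2000, 4000, 6000])
throttle_axis = np.array([0.2, 0.5, 1.0])
advance_map = np.array([      # rows: throttle setting, columns: rpm
    [12, 18, 26, 32],         # light throttle: more advance
    [10, 15, 22, 28],
    [ 8, 12, 18, 24],         # full throttle: less advance
])

def ignition_advance(rpm, throttle):
    """Bilinear look-up: interpolate along rpm within each throttle
    row, then interpolate between rows along throttle."""
    per_row = [np.interp(rpm, rpm_axis, row) for row in advance_map]
    return float(np.interp(throttle, throttle_axis, per_row))

print(ignition_advance(3000, 0.5))   # 18.5 (midway between 15 and 22)
```

Note the trend built into the table: more advance at high rpm and at light throttle, matching the reasoning in the previous paragraphs.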

But here's the revelation from Peter: car manufacturers can deliberately stuff up the timing. Why would they want to do that? Well, there's a market for selling different versions of the otherwise-same car. The high-end models have performance and features (and a price tag) that the low-end models don't. There's status in buying the high-end model (if you're the kind of person who cares about that – and the fact that these things sell says, yes, there are such people), but, alternatively, if that extra couple of horsepower doesn't bother you, you can get the lower-spec model for a lower price. Now, the manufacturers have worked out that making lots of different versions of the otherwise-same car is inefficient. It's far easier to have a production line that fires out identical cars. So how do you achieve the low-end to high-end specification spectrum? Easy. You build everything high-end, and then, to produce the low-end cars, you deliberately disable or tinker with features so they don't work or don't perform so well. That is, you make the car worse. 

Ignition timing is one example, says Peter. There are in fact companies who will take your low-end car and un-stuff-up your electronics for you – in effect reprogramme it to do what it should be doing. In other words, turn your low-end car back into a high-end one (which is how it started out life) without you having to pay the premium that the manufacturer would place on it for not stuffing it up in the first place. 

Who said free market economics resulted in the best outcome for consumers?


The gearbox problem Marcus Wilson May 27


At afternoon tea yesterday we were discussing a problem regarding racing slot-cars (electric toy racing cars).  A very practical problem indeed! Basically, what we want to know is how do we optimize the size of the electric motor and gear-ratio (it only has one gear) in order to achieve the best time over a given distance from a stationary start?

There are lots of issues that come in here. First, let's think about the motors. A more powerful motor gives us more torque (and more force for a given gear ratio), but comes at the cost of more mass. That means more inertia and more friction. But given that the motor is only part of the total weight of the car, it is logical to think that stuffing in the most powerful motor we can will do the trick. 

Electric motors have an interesting torque against rotation-rate characteristic. They provide maximum torque at zero rotation rate (zero rpm), completely unlike petrol engines. Electric motors give their best acceleration from a standing start – petrol engines need a few thousand rpm to give their best torque. As the rotation rate increases, the torque decreases, roughly linearly, until a point is reached where the motor can provide no more torque. For a given gear ratio, the car therefore has a maximum speed – it's impossible to accelerate the car (on a flat surface) beyond this point. 

Now, the gear ratio. A low gear leads to a high torque at the wheels, and therefore a high force on the car and high acceleration. That sounds great, but remember that a low gear ratio means that the engine rotates faster for a given speed of the car. Since the engine has a maximum rotation rate (where torque goes to zero) that means in a low gear the car has good acceleration from a stationary start, but a lower top-speed. Will that win the race? That depends on how long the race is. It's clear (pretty much) that, to win the race over a straight, flat track, one needs the most powerful engine and a low gear (best acceleration, for a short race) or a high gear (best maximum velocity, for a long race). The length of the race matters for choosing the best gear. Think about racing a bicycle. If the race is a short distance (e.g. a BMX track), you want a good acceleration – if it's a long race (a pursuit race at a velodrome), you want to get up to a high speed and hence a huge gear.  

One can throw some equations together, make some assumptions, and analyze this mathematically. It turns out to be quite interesting and not entirely straightforward. We get a second-order differential equation in time with a solution that's quite a complicated function of the gear ratio. If we maximize the distance covered to find the 'best' gear, it turns out (from my simple analysis, anyway) that the best gear ratio grows as the square root of the time of the race. For tiny race times you want a tiny gear (= massive acceleration); for long race times, a high gear. If one quadruples the time of the race, the optimum gear doubles. Quite interesting, and I'd say not at all obvious. 
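Here's a rough numerical version of that analysis – my own sketch, in made-up units, under the same assumptions (linear torque fall-off, flat track, no air resistance). Defining the 'gear' G so that car speed = G × motor rotation rate (bigger G = taller gear), the equation of motion integrates in closed form, and we can simply scan over G to find the gear that covers the most distance in a fixed race time:

```python
import numpy as np

def distance(G, t_race, T0=1.0, m=1.0, w_max=1.0):
    """Distance from a standing start after t_race, with 'gear' G
    defined so that car speed v = G * (motor rotation rate).
    Torque model: T(w) = T0 * (1 - w / w_max), so
        m dv/dt = (T0 / G) * (1 - v / (G * w_max)),
    which integrates to v(t) = G*w_max*(1 - exp(-t/tau)),
    with time constant tau = m * G**2 * w_max / T0."""
    tau = m * G**2 * w_max / T0
    return G * w_max * (t_race - tau * (1 - np.exp(-t_race / tau)))

def best_gear(t_race):
    """Scan a grid of gears for the one maximizing race distance."""
    gears = np.linspace(0.05, 5.0, 5000)
    return gears[np.argmax(distance(gears, t_race))]

g1, g4 = best_gear(1.0), best_gear(4.0)
print(g1, g4, g4 / g1)   # the ratio comes out very close to 2
```

Quadrupling the race time from 1 to 4 (arbitrary units) doubles the optimum gear, reproducing the square-root scaling numerically.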

The next step is to relax some of the assumptions (like zero air resistance, and a flat surface) and see how that changes things. 

What it means in practice is that when you're designing your car to beat the opposition, you need to think about the time-scales for the track you're racing on. Different tracks will have different optimum gears.

Apparent forces Marcus Wilson Apr 01


A couple of weeks ago I had the misfortune to be on a bus which had an accident. I wasn't hurt, because I was safely seated, which is more than I can say for one unfortunate passenger who was still on his way to his seat at the time. It wasn't a high-speed event – I'd guess we were doing about 10 km/h. We had just pulled away from a bus stop when a car that had been parked a few metres in front of the bus pulled out into the road right in front of us. The driver hit the brakes hard, and, as a result, my fellow passenger ended up in a heap on the floor at the front of the bus. 

While the cause of the crash I would say rests firmly with the driver of the car that pulled out, that's little comfort to the poor guy with blood dripping from a wound on his head, down the back of his shirt, which is probably now dyed a nice shade of maroon. Standing on buses is pretty dangerous, even at low speed. I do think the driver should have waited till everyone was seated before pulling away. 

So, from a physics perspective, what happened? One can explain this in two ways. There's the 'inertial' approach, as explained by the witness on the side of the road: The bus stopped, but the guy standing, who has inertia, carried on. Then there's my viewpoint, from inside the bus. Everything experiences a sudden acceleration forward. This causes the passenger to lose his balance, and down he goes. 

This forward acceleration, from the perspective of the person on the bus, is called an apparent force. It arises because the frame of reference, the bus, isn't an inertial frame. That is, it's accelerating (or, in this case, decelerating). It's called 'apparent' because the person on the side of the road wouldn't see it in this way; it only becomes apparent if the observer is in the accelerating frame of reference. It might be termed an 'apparent' force, but for the person on the bus it's a very real push forwards, one that splatted him on the floor and would have given the bus cleaners a more interesting job than usual. It's the same kind of thing as centrifugal force (yes, the 'f' word), which one experiences when going round corners. To the person in the object that is doing the moving, the force is a very real thing (ask the racing car driver). But to everyone else, it doesn't actually exist. 
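In numbers (invented for illustration – neither the passenger's mass nor the bus's deceleration was measured!), the apparent force in the decelerating frame is simply minus the mass times the frame's acceleration:

```python
# In the bus's non-inertial frame, everything feels a pseudo-force
# F' = -m * a_frame. Illustrative numbers, not from the incident:
m = 75.0       # passenger mass, kg (assumed)
a_bus = -4.0   # bus acceleration while braking, m/s^2 (assumed)

pseudo_force = -m * a_bus
print(pseudo_force)   # 300.0 N, directed toward the front of the bus
```

The roadside observer, in an inertial frame, needs no such force: to them the passenger simply kept moving while the bus slowed underneath him. Both descriptions predict the same heap on the floor.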

Apparent forces are pretty hard to teach (I've just been doing it), but I think the key is really to emphasize that they are there only to the observer who is in the accelerating frame. 

What happened to the passenger? Against the advice of everyone around him, including me, he refused to be taken to a medical centre, which was only a few hundred metres from the place of the incident, and insisted on carrying on the journey to his destination. Possibly if he'd been able to see the back of his head he might have thought differently. One shudders to think of the consequences at 50 km/h. Seat belts in buses? Yes please. 





Oh dear Mr Kohli Marcus Wilson Feb 11


Wow! That was a real nailbiting finish to the first test. Well done to the New Zealand bowlers for holding their nerve as India's batsmen got close. There was some great bowling, and also some great batting at times. Maybe the difference between the teams was that New Zealand in that final innings made fewer tactical blunders. 

I'm sure every armchair pundit has their own opinion of where the match was won and lost, but one moment that stands out for me is Virat Kohli's lapse of concentration against Neil Wagner. Aggressive batting is great to watch, but it has to give way to common sense if you want to stay at the crease. Trying a pull shot at a ball that isn't terribly high and w-i-d-e outside off stump would be a suspect choice of shot even in a Twenty20 game, a bad one in any Test match, and downright appalling in a Test as closely balanced as this one. What did he expect to happen? 

Anyone who has ever faced fast bowling will know that there are some basic laws of physics going on. What's of great importance in determining where the ball ends up after hitting your bat is the relative motion of the ball with respect to the bat, and the angle of incidence of the ball on the bat. The ball doesn't go in the direction that you hit it. Since it's carrying momentum (and a fair bit of it), what you do when you apply a force with the bat is change the ball's momentum. It's the change in momentum, not the final momentum itself, that's equal to the impulse (force times time) that you give the ball. These things are vector quantities – that is, they have directions. If you don't hit the ball in the exact opposite direction to where it is coming from, the final momentum of the ball won't be in the direction in which you hit it (apply a force on it). 

To pull a cricket ball through midwicket means that your bat's got to be pointing somewhere towards mid-on when you make contact with it. Try doing that when you're stretching out for a ball that's w-i-d-e outside off stump and you'll get the idea of why this shot was never going to work. Guiding it to the point boundary would have been a whole lot safer and more effective, but I'm sure Mr Kohli is well aware of that now. 
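A two-line vector sketch (with numbers I've invented – only the ball's mass is a real figure) shows why the ball doesn't leave in the direction the bat pushes it:

```python
import numpy as np

# Work in a 2D horizontal plane: x = down the pitch, y = leg side.
m = 0.16                         # cricket ball mass, kg
v_in = np.array([-30.0, 0.0])    # ball arriving at 30 m/s toward the batter
p_in = m * v_in                  # incoming momentum, kg*m/s

J = np.array([3.0, 4.0])         # assumed impulse from the bat, N*s

p_out = p_in + J                 # impulse equals the CHANGE in momentum
v_out = p_out / m
print(v_out)   # not parallel to J: the incoming momentum still matters
```

With these numbers the bat's impulse points forward of square, yet the ball leaves travelling backward of square – the incoming momentum drags the result around, which is exactly the pull-shot problem described above.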




Scholarship Physics, 2013-style Marcus Wilson Jan 24


Last year, Sam Hight and I made a collection of videos on tackling the 2012 Scholarship Physics exam. Well, to be precise, Sam did the videoing, editing, and distribution, and I just did the exam. The key thing, though, was that I did the exam 'live'. I was seeing the questions for the first time. I didn't give myself a few days to work out carefully composed and presented answers, like some of the slick model answers you find online. The idea was to give students an idea of what Scholarship is like to do (answer: hard!) but also how I think through a physics problem and come up with my solutions. Often what is lacking in a 'slick' model answer is any indication of how the writer 'knew' to tackle the question in the way she did. (Answer to that one: probably because she'd spent a few days looking at it, or wrote the exam question in the first place – neither terribly helpful to a student.)

By popular request, I did the same yesterday. Video camera in front of me, whiteboard, three hours with a scholarship paper. My conclusion? The 2013 paper was hard-as. (You can see for yourself here.) I'd rate it a good step up from the 2012 one. To be fair, different people have different strengths. It may have been that there was a 'bad' lot of questions for me in the 2013 paper, but it might have been a 'good' lot for someone else. I'd love to hear your thoughts on this one – do you think it's harder?

One thing I noticed was there was a lot of algebra and calculation in the 2013 paper, and there wasn't so much in 2012. The question about the A-frame ladder had a derivation involving three simultaneous equations to solve. But I got it! In the end. 

Sam and I will get the videos distributed in due course, probably via PhysicsLounge. If nothing else, you can watch me fumble around with a couple of questions which, 24 hours on, I realize weren't as difficult as I was trying to make them out to be. But if I showed you the answer today, it would be slick, and you'd be stuck wondering how I came up with it. 

Will they be helpful? You decide. If not, you can always have a good laugh at me squaring a number twice because I wasn't paying attention to what I'd written, and getting my notation in a muddle.  Whatever, I'd like to offer my congratulations to those who landed Scholarship Physics in 2013, because you most certainly deserve it!

Is it OK to bungle the science if the end message is good? Marcus Wilson Oct 23


On Saturday morning I held a session for school students preparing to sit the 2013 Scholarship Physics exam, with the intention of helping them prepare for it. It's a tough exam, aimed at rewarding the best school students in the various subjects. I talked through the principles behind answering various types of question – e.g. 'estimate' questions, mathematical questions, 'explain' questions and so forth – drawing heavily on previous exam papers. One of the questions we talked about was from the 2010 paper. Students were asked to critique the voice-over on a well-aired road-safety ad of the time. (You can find the ad here – isn't YouTube wonderful?)

I won't go into the physics here, partly because I've already done it in a previous blog entry. Suffice to say that the advert will get approximately zero out of ten for scientific accuracy. However, it does get its central message across rather well, I think: Excessive speed causes crashes. So, I think it's reasonable to ask the question: "Is this a good advert?". We had a brief discussion on this on Saturday. There are several points that could be made. In defence of the ad, it does, I think, what it is designed to do – get people to think about how fast they drive.

But does it do more harm than good? It certainly doesn't promote scientific literacy by using science concepts incorrectly. We've already seen numerous examples of how lack of science understanding among the public can lead to outrageous decisions being made by politicians who rely on the public vote: governments drag their feet on tackling climate change (coz it will hit the voters in their pockets – not a smart political move), and in Hamilton we've had a narrow squeak over fluoridation – fortunately in the latter case the science won, and a citizens' referendum has overturned a ridiculous decision made by the Hamilton City councillors. 

But the science can sometimes be hard to explain well. After I gave him what I thought was a clear, concise and accurate statement of what the advert should say, my father-in-law replied on Saturday afternoon, "no wonder they've done another explanation – it's easier to understand" (or words to that effect). Yes… but… it's not right.

Tricky one this.




The 2013 Nobel Prize in Physics goes to…. Marcus Wilson Oct 09


….Well, what do you think? No surprises this year.  Francois Englert and Peter Higgs have been awarded this year's Nobel Prize in physics for the theoretical 'discovery' of the Higgs mechanism. The citation, however, I find very interesting:

for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider.

First of all, can one 'discover' something theoretically? Sure, one can predict the existence of something theoretically, but can it be discovered by a piece of theoretical analysis? I'll let you debate the semantics of 'discovery'. 

Then, note how the prize isn't given for the discovery of the Higgs boson. The word 'boson' doesn't get a mention at all, in fact, though it is implied by the words 'predicted fundamental particle'. The boson is merely a piece of experimental evidence – a rather key piece, it has to be said, given it's the only thing about the Higgs mechanism that is really observable – but still only a piece of evidence for the Higgs mechanism. It is the explanation of the origin of mass that is the notable thing here.

Well, actually, not quite. Note how the citation is for "…a mechanism that contributes to our understanding of the origin of mass…" It stops short of saying that the Higgs mechanism explains it. Is there more to come?

Then finally the experimental credit is given. The Nobel Prize isn't generally awarded to large teams of people. The ATLAS and CMS teams are vast indeed (see the list of authors on the ATLAS and CMS Higgs Boson discovery papers here and here) but these teams are rightfully given credit for their part in confirming the Higgs mechanism.

So, well done to you all. 

Gravity goes downwards Marcus Wilson Aug 06


Yesterday afternoon I was engaged in a spot of DIY – putting up some shelves. Even for me, as someone who takes to DIY like a duck to mountaineering, it’s a fairly simple task, and I’m pleased to say that I got there without the ‘do’ in DIY turning into ‘destroy’. With the help of my trusty stud-finder (Karen – who has a knack of locating those invisible studs behind plasterboard walls just by tapping), I managed to locate two studs by drilling just three holes. The rest of the job took only four tools – a drill, a pencil, a screwdriver and the all-important spirit level.

I’ve always been fascinated by just how simple a tool the spirit level is. It does a fantastic job of getting things level (level enough for general domestic purposes, anyway), just by using a bubble of air in a liquid. The physical principle by which it works is hardly taxing: the bubble (the absence of liquid) rises to the highest point of its tube as the liquid sinks as low as possible to minimize its potential energy. A similarly simple method – the plumb line – gets things vertical, though a second tube on the spirit level, turned through 90 degrees, can do the same task. 

In fact, it is hard to imagine needing a complicated machine to find where ‘vertical’ is. If one assumes that ‘up’ is the direction opposite to the force of gravity, one simply has to measure the direction of the force of gravity, and hanging a weight on a string is the most obvious way to do it. Sure, one can get technical and enclose the thing in a pipe so that wind doesn’t get to it, and so forth, but the basic weight-on-a-string is simple and effective. 

There are some hiccups to think about, however. One needs to be sure what one actually means by ‘vertical’ and ‘horizontal’. The force of gravity isn’t precisely towards the centre of the earth at all places on the earth’s surface. A weight on a string will be affected by the presence of nearby mountains, or by large-scale variations in the geology underneath the surface. A quick estimate based on Newton’s law of gravitation and the size of Mount Te Aroha, for example, suggests that houses in Te Aroha town might have their vertical distorted by a few thousandths of a degree. Not a great deal, but enough to be detectable with half-decent equipment.
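Here's roughly how such an estimate can go. The density, size, and distance below are my own crude assumptions (not figures from the post): model the mountain as a sphere of rock and compare its sideways pull with g.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g = 9.81        # local gravitational acceleration, m/s^2

# Crude assumed model: a sphere of rock, density ~2500 kg/m^3,
# radius ~1 km, with its centre ~2 km from the plumb line.
rho, R, d = 2500.0, 1.0e3, 2.0e3

M = (4 / 3) * math.pi * R**3 * rho           # mountain mass, kg
a_side = G * M / d**2                        # sideways pull, m/s^2
deflection_deg = math.degrees(a_side / g)    # small-angle deflection
print(f"{deflection_deg:.4f} degrees")
```

With these numbers the deflection comes out around a thousandth of a degree – the same order of magnitude as the estimate quoted above.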

But is the vertical really out? If the definition of vertical is "the direction of the acceleration due to gravity" then, no, it isn’t. If one is putting up shelves in Te Aroha and wants them horizontal (so that a ball placed on the shelf stays on the shelf), one wants them at 90 degrees to the local force of gravity. If that means a few thousandths of a degree different from what you’d get in Hamilton, then so be it. It just depends on your definition of ‘up’.

[And, of course, it is more than a few thousandths of a degree different from Hamilton anyway – being 44 km away on a sphere of 6400 km radius, that's about 0.4 of a degree due to location alone.] 

