SciBlogs

Archive June 2010

Mindbullets goodness: The Avatar Wars aimee whitcroft Jun 28

5 Comments

Does anyone else out there read Mind Bullets?

If not, I’d thoroughly recommend it.  Or else wait for me to occasionally post on it (when it’s appropriate to this forum).  The most recent post certainly is.

Mind Bullets, for the uninitiated, is part of the FutureWorlds network.  Basically, a bunch of really interesting people like to cast their minds into the future and write about possible scenarios.  It’s a mental exercise, but one involving futurism.  Great fun, in other words.

Given my interest in the Singularity, I thought I’d share the following post:

[Image: the MindBullets post on the Singularity]

It’s certainly an interesting concept.  Of course, we’re some way off being able to clone people (and even if we could, waiting the years necessary for clones to reach adulthood sounds like an extreme exercise in patience).

As for the robots – slightly more likely.  Then again, if I could choose a robot body, I’m really not sure I’d want a copy of the one I’m currently inhabiting. aimee 2.0 could definitely be designed better. It does raise the question, though – who would the facsimile be for?  If it’s for family (a kind of macabre, living urn) then it makes sense that it should be as close to the original as possible.  Otherwise, I really don’t see the point.  Also, I’m assuming that the robots would get to know and mimic a person when they’re, well, aged.  Being aged for eternity (or close) seems somewhat depressing.

There’re a lot of other issues raised by this article, too.  Rather than going into them at any length here, I’m going to yell “discuss!” and stand back.  Further information (from MB) is provided below, for your edification.

Analysis -> Synthesis: How this scenario came to be:

From New Scientist:

Though there is little prospect of creating a genuinely conscious robo-clone in the foreseeable future, several companies are taking the first steps in that direction. Their initial goal is to enable you to create a lifelike digital representation, or avatar, that can continue long after your biological body has decomposed. This digitised “twin” might be able to provide valuable lessons for your great-grandchildren – as well as giving them a good idea of what their ancestor was like.

Ultimately, however, they aim to create a personalised, conscious avatar embodied in a robot – effectively enabling you, or some semblance of you, to achieve immortality. “If you can upload yourself into this digital form, it could live forever,” says Nick Mayer of Lifenaut, a US company that is exploring ways to build lifelike avatars. “It really is a way of avoiding death.”

An example of just how realistic avatars can be made is demonstrated in this video from the Emily Project by Image Metrics.

FUTURESTATES explores this theme more fully in the episode PIA: When a woman in mourning encounters a mysterious wandering service android, she is forced to redefine her conceptions of humanity, relationships, and family. Watch the full episode at http://futurestates.tv/episodes/pia.

Further links:

  1. Geminoid F: Hiroshi Ishiguro Unveils New Smiling Female Android – IEEE Spectrum, 3 April 2010
  2. Immortal avatars: Back up your brain, never die – New Scientist, 7 June 2010
  3. The avatar revolution: Here come the new humans – New Scientist, 7 June 2010
  4. Lifenaut website
  5. MindBullet: ROBOTIC GIRL ‘MURDERED’ BY JEALOUS HUMAN LOVER (Dateline 14 February 2018, Published 24 July 2008)
  6. MindBullet: SYNTHETIC LIFE MAKES THE JUMP FROM VIRTUAL WORLDS (Dateline 8 August 2028, Published 9 August 2007)

When art meets science – Wireless in your World aimee whitcroft Jun 25

2 Comments

This is what happens when art and science get together and make very beautiful motion infographic babies.

We’re used to the idea that we’re surrounded by wifi networks (well, I hope we are).  Still, have you ever actually tried to visualise them all, and how they surround you and overlap with each other?

I’ve not, to be honest.  Something about which I’m now slightly embarrassed.

Anyhoo, someone else has! Designer Timo Arnall has made a very beautiful video showing the interaction between the networks, overlaid on scenes from ordinary life.

The official description:

Utopian and radical architects in the 1960s predicted that cities in the future would not only be made of brick and mortar, but also defined by bits and flows of information. The urban dweller would become a nomad who inhabits a space in constant flux, mutating in real time. Their vision has taken on new meaning in an age when information networks rule over many of the city’s functions, and define our experiences as much as the physical infrastructures, while mobile technologies transform our sense of time and of space.

It’s a beautiful way of showing just how pervasive wifi technologies are, particularly in, * ahem * other countries.  And yes, I’m jealous.

Found this gem here, on Flowing Data, but before that it came from infosthetics (both of which may be some of my new favourite blogs).

Quantum memory improves (vastly) aimee whitcroft Jun 24

No Comments

Let’s hope I manage to explain this properly :)


Photons make up light.

Firstly, a shoutout to a local – a University of Otago researcher was actually involved in this.  Hooray!  It means I get, not only to geek out, but to have pride in doing so.  Always a nice feeling.

So, on with the post.  Well, in essence, quantum memory just got a lot better.  If that’s all the news you needed, you may stop reading now.

If, on the other hand, you have a multitude of questions, perhaps including ‘what is quantum memory?’, ‘how much better?’, the classic ‘why do we care?’ and so forth, then read on.

The first thing to understand is, probably, the concept of quantum*.  In this case, we’re talking about really, really small things.  On the subatomic level.  Smaller than atoms.  Electrons.  Photons.  Quarks (beware the quantum duck, haha), etc.

The next thing to understand is the idea of quantum communication networks. We’re interested in these because, if we can implement them, they’re very secure.  Very secure.  The problem with current encryption is that no encryption algorithm, as currently implemented, is perfectly unbreakable.  Many of the top encryption schemes use random numbers in the encryption process.  The problem with this is that a piece of software had to generate those random numbers, meaning that they’re only pseudo-random, not truly random.

The great thing about quantum communication networks is that true random number generation** is possible.  Hence the security.
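
To make the pseudo-random point concrete, here’s a tiny Python sketch (my own illustration, nothing to do with the research): seed a software random number generator and it will happily reproduce exactly the same ‘random’ bytes every time.  Deterministic, in other words, which is exactly what an attacker hopes for and what a quantum source avoids.

```python
import random

def keystream(seed, n):
    """Return n 'random' bytes from a seeded pseudo-random generator."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

# Same seed, same output, every single time -- deterministic, not random.
print(keystream(42, 8))
print(keystream(42, 8) == keystream(42, 8))  # True
```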

The thing with building quantum communication networks, though, is that there needs to be the ability to store this information. So that it can be coordinated, passed on, etc.  And retrieval of the information, once stored, needs to be on-demand.  One needs to be able to lift specific data out of the whole packet, for example.  Or get it when one needs it.

The wrinkle here is that the medium used is photons.  Little light bits.  So the storage device/medium needs to be able to store the quantum information in a light field.  Easy!  Or not, in fact. The thing with photons is that they’re quantum.  Which means that Heisenberg’s uncertainty principle applies to them.

Heisenberg’s uncertainty principle is brilliant.  In essence, it states that on a quantum level, one can know something’s position, or its velocity, but not both precisely at the same time (in the macro world we inhabit, happily, the limit is far too small to notice).  Why is this?  Because in order to see something, we have to bounce photons off it.  If you’re dealing with subatomic stuff, then even one photon is kinda big.  As an example, observing quantum-thing-A by bouncing just a single photon off of it changes its velocity in the same way that someone your size bumping into you while you’re walking does.  So you can see where it was at the time of the incident, but you can’t know where it was going.  Or at what speed.
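
For the formula-minded, the standard position-momentum statement of the principle is below (momentum being mass times velocity, which is why the ‘position or velocity’ framing works for a particle of fixed mass).  The two uncertainties multiply out to at least half the reduced Planck constant, which is absurdly tiny, hence us macro-sized things never noticing.

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```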

What does this mean?  Well, it means that the classic means of measuring and reconstructing this data don’t work.  The storage device has to simply imprint the light field’s characteristics – we can’t measure them when they’re input, because that changes the information. Rendering the exercise moot.

People have been playing with ways to achieve this storage for a while now.  I myself have read some great papers on the subject.  The problem is that, much like my memory sometimes, the highest recall allowed by any of these methods was low: 17% at maximum.  That’s not even nearly enough to make it practicable, particularly if one is pedantic enough to want practical transmission rates at the sorts of distances (>1,000km) over which quantum communication  would occur.  The minimum efficiency necessary, for various reasons, is 50%.
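
A back-of-the-envelope way to see why those efficiency numbers matter so much (my sketch, not the paper’s analysis): if a stored signal has to survive several memory nodes along a long link, and each node hands it back with efficiency eta, the odds of it making it through the whole chain fall off as eta to the power of the number of nodes (treating the nodes as independent).

```python
# Probability a stored signal survives a chain of n memory nodes,
# each returning it with recall efficiency eta (nodes assumed independent).
def chain_survival(eta, n_nodes):
    return eta ** n_nodes

for eta in (0.17, 0.50, 0.69):
    survival = [round(chain_survival(eta, n), 4) for n in (1, 3, 5, 10)]
    print(f"eta = {eta:.2f}: {survival}")
# At 17% efficiency the signal is effectively gone after a few nodes;
# at 69% there's actually something left to work with.
```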

These previous, low-efficiency means have used atomic vapours.  They’ve also only been able to use pretty weak quantum states, and an average photon number of around one.  Not great.

Now, however, a breakthrough has been achieved!  A team of international physics brains have done things rather differently.  Firstly, the quantum memory they’ve developed is solid state***. Which is brilliant.  As Jevon J Longdell (the Kiwi author) says:

We stored the light and then recalled from impurities in a crystal. When people are developing new technologies, ones that are based on solid systems tend to be easy to make, reliable and robust and easy to miniaturise. So the fact that we use a solid rather than atoms trapped in a vacuum chamber, is something of a selling point.

Secondly, they’ve gotten the recall efficiency higher.  A lot higher.  Up to, apparently, about 69%. And they reckon they could get it even higher too, given improved materials.

Thirdly, it works across a range of light levels: from the weak quantum states of around one photon used by the other, ‘classical’ memories, up to bright states containing 500 photons.  And for states of, on average, 30 photons or less, it was able to surpass the ‘no-cloning’ limit: this basically means that more information was retrieved from the input than was left behind or destroyed, which is good for maintaining the security of the communiqué.

In short – we’re one step closer, peeps, to truly secure communications.  The excitement of intelligence  (and commercial) communities around the globe is palpable.

For those who give not a jot about secure communications, it could also have other applications, such as the optical detection of ultrasound – useful both in health and in engineering (although for different things, obviously).

For anyone who’d like to wallow in all the details, the reference for the paper is below.

UPDATE: Jevon’s comment can be seen above.  More may be forthcoming…

—————————–

* On which books have been written.  This may merit a fuller post in the future.  ’Cause I really, really love quantum stuff.

** Oh dear.  Another post topic…

*** No, it’s not a flash drive.  It’s actually a sort of crystal.

Also:

Heisenberg’s Uncertainty Principle

No-cloning theorem

Reference:

Hedges, M., Longdell, J., Li, Y., & Sellars, M. (2010). Efficient quantum memory for light. Nature, 465(7301), 1052-1056. DOI: 10.1038/nature09081


Brief interlude: spoon aimee whitcroft Jun 24

1 Comment

So, because I realise I have been remiss in posting over the last few days*.


And also because I’m currently writing something somewhat more complex:  I bring you sciencey spoon-related humour.

A word of introduction.  Some time ago, in a country fairly far away, the BBC decided to implement a terrestrial version of the Hitchhiker’s Guide to the Galaxy: h2g2.  And they invited applications from all peoples, whether Earthian or not**.

It’s an absolutely fantastic way to spend an aeon or so, as entries have mounted over the years.  One of my favourites, however, was discovered a decade ago, and deals with the subject of spoons***.

‘What has this to do with science?’, rumble readers.  Well, this is what happens when someone who is familiar with these remarkable implements, and also has a science background of some sort, explains the concept.  They are not, it would appear, quite as self-explanatory as we human beans would think.

Here, to whet your appetite, is the first bit.

Spoons

A spoon is a hand tool used for transporting food to the mouth. For convenience, in this Entry, the material to be transported will be called the stuff.

The bowl is a structure designed to provide a local area of reduced gravitational potential, surrounded by a closed loop of greater gravitational potential. If used in a gravitational field the bowl thus constrains the content to remain within it unless the user imposes a force on the content such as to produce an acceleration large enough to overcome the gravity well. Increasing the potential difference between the bottom and sides of the bowl (by deepening the bowl) allows the user to accelerate the spoon more rapidly in a direction perpendicular to the applied field without spillage. This modification of the bowl (as well as a change in bowl/handle relationship, and often in the size of the bowl) can be seen in a related specialised tool, the ladle.

Structure

A spoon is made up of two parts, the bowl and the handle.

The handle is designed to allow the user to support and move the bowl in comfort, and so is usually reasonably rounded and of a size which is easily held in the hand. Some spoons have their bowl and handle made out of the same material, eg wood or metal. Many use different materials, as the differing desired characteristics of bowl and handle can often be best met by two different materials.

The rest, in which methods of use are covered, can be found here.

For other h2g2-related silliness, I present to you The Hitchhiker’s Guide to the Daleks

[Embedded video]

And because I know people will trawl through this – what are your favourite h2g2 entries?

———————————-

* It’s the recurrent insomnia makin’ me braindead.  Honest.

** I have no demographic data as to this split.  My apologies.

*** No, not spoons as in the spoon from that movie. Although the movement of spoons is indeed mentioned.

Sciencey goodness Pt I aimee whitcroft Jun 18

5 Comments

Sometimes I am able to write posts on Friday.  Sometimes not.


At least part of the reason for this is that I spend my Thursday afternoons and Fridays (or at least parts thereof) researching for and writing the SMC weekly newsletter.  Which is awesome.

And you should sign up!  Why?  Because it’s well interesting, of course.  Peter Griffin, also of the SMC, writes the feature articlets for it, and I get the fun of doing the rest.  Which means I get to wallow in all manner of science stories and research, choose stuff that I think’s cool, and send it out to a bunch of people.  Hopefully, some of you will shortly be one of that number.

You can also read back issues in our archive (reachable off the SMC homepage, on the left), which gets updated shortly after the newsletter goes out.

On the other hand, I also tend to find a lot of other interestingness in my forays through the world of science, and since, clearly, I don’t have time to blog about each one at length (I tried, but my to-write-about list has developed its own gravity well), I may also start writing short snippets about some of these discoveries.

Many of you will be aware of at least some of them.  Some of you may not. It’s not a competition (promise), and nonetheless, I hope it’s of interest.  Of course, comments always welcome!

Language preservation: it’s not a zero sum game

There’s quite a bit of consternation over the fact that the vast majority of the world’s languages are not spoken by all that many people.  That is to say, they lead a slightly precarious existence and one that, as the dominant languages continue their spread, looks to become even more so.

Why on earth would we care if some obscure language went extinct?  Because our languages encode a great deal of our cultural information, identity and history.  Previous models have predicted a somewhat apocalyptic end for many of our lesser-spotted lesser-spoken languages, but someone’s developed a new model which gives a little more hope.

One of the characteristics of previous models was that they didn’t take account of people who are bi- or multilingual.  Which seems a little odd, frankly.  Modelling language use as zero sum seems incredibly simplistic.

Yes, certainly, it has happened – for example, English has crushed some other languages. But it’s not always going to happen that way.  People don’t always just choose one language.  For example: in South Africa, where I come from, many people speak five plus languages.  Fluently.  (The vernacular is, um, rich back home)

Anyhoo, the new model has done away with that.  It allows bilingual people to exist within it, and in the process allows languages to co-exist and co-evolve.  Hooray.  Although, say the authors, it’s still  something of a delicate balance.  Still, at least there’s hope.
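
For anyone who wants to poke at the idea, here’s a toy Python sketch in the general spirit of such models (emphatically my own invention, with made-up rates, and not the authors’ actual model): two monolingual groups plus a bilingual group, with speakers drifting between the three compartments depending on each language’s prestige and how many people speak it.  Fiddle with the parameters and you can watch one language swallow the other, or both hang on via the bilingual pool.

```python
def step(x, y, b, s=0.55, c=0.1, a=1.3, dt=0.1):
    """One Euler step of a toy three-compartment language model.
    x, y: fractions speaking only language A or only B; b: bilinguals.
    s: relative prestige of A; c: overall transition rate; a: 'volatility'.
    The rates below are invented purely for illustration."""
    # Monolinguals pick up the other language (becoming bilingual)
    # at a rate driven by that language's prestige and prevalence...
    x_to_b = c * x * (1 - s) * (y + b) ** a
    y_to_b = c * y * s * (x + b) ** a
    # ...and bilinguals drop a language in favour of the more useful one.
    b_to_x = c * b * s * (x + b) ** a
    b_to_y = c * b * (1 - s) * (y + b) ** a
    dx = b_to_x - x_to_b
    dy = b_to_y - y_to_b
    db = (x_to_b + y_to_b) - (b_to_x + b_to_y)
    return x + dt * dx, y + dt * dy, b + dt * db

x, y, b = 0.45, 0.45, 0.10
for _ in range(5000):
    x, y, b = step(x, y, b)
print(f"A-only: {x:.2f}, B-only: {y:.2f}, bilingual: {b:.2f}")
```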

[Talking of which - has anyone else heard of jejemon?  I only came across it a couple of days ago, and am completely boggled linguistically]

Fertile people: you can live longer!

Ok, yes, that’s something of an overexcited headline.  It’s not quite that simple.

Scientists looking at longevity have found that, in the roundworm C. elegans* at least, longevity and  the reproductive system go hand in hand.  Or something. They knocked out a number of different genes – including one called Ash 2, which is a regulatory gene – involved in the germline, and noticed an interesting effect.

The knockout extended the lives of the roundworms by up to 30% (we’re not sure whether the lengthening happened in the period of life when one is partying heavily, sprogging, buying motorcycles, or needing Zimmer frames, sadly).  The researchers aren’t exactly sure why, though.

The catch? It only works if the worms are still fertile…

Change blindness modelled

Change blindness is the term used to describe our failure to notice changes in a scene.  For example, a bench changing place.  Or a wall changing colour.  Or something that’s been added/removed.  Etc. It’s tested by showing people before and after pictures, and asking them what’s different.

Up until now, scientists have been researching change blindness by manually changing pictures of scenes, which meant that they had to choose the features to be changed and how.  This, of course, adds the element of bias.

So some mathematically-inclined people got together and wrote an algorithm which allows a computer (man’s bestest friend) to make these decisions instead.  Bias free.  All the better to study the phenomenon with.

And they did experiments to test it.  As one does.  The experiments confirmed it can be used to test change blindness, but they also showed something else: that we detect removals or additions from a scene more easily than we detect whether something has changed colour.  While the scientists said they were expecting the opposite to be the case, I find I’m not terribly surprised – colour is important to how we navigate and interact with our surroundings, but the presence or absence of said objects is probably of more importance.  I, for example, am more likely to notice having just walked into a chair that wasn’t there previously, than to notice it’s changed from a delightful sandalwood to an even more delightful, um, something else.
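
By way of illustration only (and emphatically not the researchers’ algorithm), here’s the flavour of what a computer can do without a human pre-selecting the changes: difference a before/after pair of images and report how much of the scene has changed.  In this hypothetical example, an ‘object’ has been removed from the scene.

```python
import numpy as np

def changed_fraction(before, after, threshold=30):
    """Fraction of pixels that differ noticeably between two greyscale
    images (uint8 arrays of the same shape)."""
    diff = np.abs(before.astype(int) - after.astype(int))
    return (diff > threshold).mean()

# Tiny synthetic scene: a bright square (the 'object') gets removed.
before = np.zeros((100, 100), dtype=np.uint8)
before[40:60, 40:60] = 200
after = np.zeros((100, 100), dtype=np.uint8)

print(f"{changed_fraction(before, after):.1%} of the scene changed")  # 4.0%
```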

Anyhoo, it’s hoped the algorithm can be used to help develop things, like roadsigns, that we’re likely to notice.

Enough for now – have a wonderful weekend! * raises a toast *

———————————-

* C. elegans is the roundworm version of Drosophila Sophophora melanogaster.  A sort of geneticist’s playground.  The worms have a great deal in common with us, which means we’re able to learn much about our own genetics without having to directly play with people.  For ethical reasons etc.

References:

Greer, E., Maures, T., Hauswirth, A., Green, E., Leeman, D., Maro, G., Han, S., Banko, M., Gozani, O., & Brunet, A. (2010). Members of the H3K4 trimethylation complex regulate lifespan in a germline-dependent manner in C. elegans. Nature. DOI: 10.1038/nature09195

Verma, M., & McOwan, P. (2010). A semi-automated approach to balancing of bottom-up salience for predicting change detection performance. Journal of Vision, 10(6), 3. DOI: 10.1167/10.6.3


Distorted internal body maps, anyone? aimee whitcroft Jun 15

No Comments

Our brains’ internal representations of ourselves are not, it would appear, quite as accurate as one would have thought.


That, at least, is the conclusion of a paper which just came out in the dangerously-acronymed PNAS*.

To introduce the subject, then, let’s agree that it’s important for the brain to know where all our various physical bits are.  It stops us walking backwards into things, accidentally kicking people under the table, and so forth.  No doubt you, dear reader, can come up with a wealth of such instances (extreme data overload today precludes my ability to do so).

Oh, and it’s also very useful (perhaps most useful) for our ability to tell where our body parts are in relation to each other.

Further, whereas external sensory stimuli might tell us where bits of us are – aaarg, I know where my face is because I just walked into a door – we don’t generally get the sorts of stimuli which might tell us about the size and shape of said bits.  So, our brain needs to resort to internal models it’s built.

One would assume that such internal models are relatively accurate, particularly given that we are often able to see our bits. Or at least bits of our bits.  Intuitively, I would have thought that my brain would have assimilated this visual information into its internal models.  I’m sure such an assumption is shared by many other people.  And you know what?

We’re dead wrong.

The researchers looked, specifically, at people’s internal representations of their hands.  They got people to put their hands under a solid surface and then to point to where they thought the knuckles and tips of each finger were. These pointings-at were then used to build a map of people’s ‘internal’ hands.


The discrepancy between reality and model

And, interestingly, they found that pretty much across the board, people thought that their hands were wider, and their fingers shorter, than was actually the case.
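
Roughly, the comparison works like this (hypothetical numbers below, not the paper’s data): take where a participant points for each landmark, take where the landmark actually is, and compute simple judged-versus-actual ratios for finger length and hand width.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical landmark positions in cm: where they really are vs where
# a participant judged them to be when pointing above the covered hand.
actual = {"knuckle_index": (0.0, 0.0), "tip_index": (0.0, 8.0),
          "knuckle_little": (6.0, 0.0)}
judged = {"knuckle_index": (0.0, 0.0), "tip_index": (0.0, 5.5),
          "knuckle_little": (8.0, 0.0)}

finger = dist(judged["knuckle_index"], judged["tip_index"]) / \
         dist(actual["knuckle_index"], actual["tip_index"])
width = dist(judged["knuckle_index"], judged["knuckle_little"]) / \
        dist(actual["knuckle_index"], actual["knuckle_little"])

print(f"judged/actual finger length: {finger:.2f}")  # < 1: fingers 'feel' shorter
print(f"judged/actual hand width:    {width:.2f}")   # > 1: hand 'feels' wider
```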

Fascinatingly, this distortion also seems to have something in common with representations such as the Penfield homunculus. What’s that?  It’s a representation of the body where the size of the body part is in proportion with the number of connections between the brain and that bit.  Which makes for a very scary picture, but a very interesting way of immediately understanding how we’re wired up.  This paper suggests that the homunculus may provide not only a picture of our wiring, but also a picture (at least to some degree) of what our internal map looks like.


The Penfield homunculus

Ok, so given this, how is it we’re able to have such fine manual motor control? The authors put forward a couple of different possible hypotheses: maybe the motor system just thinks about the end point, not bothering much with the limb’s representation.  Or, maybe the motor system uses a different representational model of the body, perhaps by integrating visual inputs.  We’re not yet sure, basically.

I feel I do need to emphasize, though, that this distortion is in our implicit, internal model of our bodies. In other words, our conscious body image of ourselves tends to be pretty accurate – it’s the underlying internal map that’s not.  On the other hand, it might help explain why some people (anorexics, for example) have such skewed body images – their internal map has performed some sort of coup and displaced their body image…

Certainly it’s all very interesting.  And it comes with a new word, for all the linguaphiles out there: proprioception.  Or, the ability to know where your bits are without having to look.

For a fun game, go home, blindfold yourself (or a close friend), and see how accurate you can be :)

———————–

* Warning:  Be very careful when saying this acronym.  I have not always been, to much hilarity/embarrassment.

Reference:

Longo, M. R., & Haggard, P. (2010). An implicit body representation underlying human position sense. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1003483107


Lament on the perfect fuse aimee whitcroft Jun 14

4 Comments

Late last year, I was lucky enough to get to go to IRL to frolic in their archives.  For people like me, who like old paper and the smell of knowledge made incarnate, places like that are easy to get lost in…

[Image: a fuse]

UPDATE: This picture of a fuse is completely wrong.  Hat tip to one of me readers, who pointed out that it would, actually, have looked more like this

I also got to go visit the apple tree, growing in IRL’s grounds, which was grown from a graft of Newton’s apple tree.  Very cool.  And I got to touch it. * wiggles fingers *

(Also, did you know that a fragment from that most famous of apple trees has been to space?  The thought makes me smile…)

Anyway, at the end of the trip, I was given some old documents and photos.  Which I clutched gleefully all the way back to the office, intent on sharing with the world, and which have consequently sat on my desk for the past 6 months.

As it was inevitable they would.  Sigh.

While I’m finding out whether I’m allowed to scan and post them in their entirety, I thought I’d share a somewhat amusing poem, penned by an unknown author, dedicated to the subject of a certain fuse.  And what I’m starting to realise may be an ancient adversarial (sort of, at least) relationship between the Antipodean countries not including South Africa.  Heh.

Lament on the Perfect Fuse
(author unknown)

From the Ford Annexe, Wellington, 1943

There was chaos in the factory
For the word had got around
That a perfect fuse without a fault
By someone had been found.

They tested it with calipers
And gauges of the best
With shadowgraphs of costly make
But still it stood the test.

The safety cap was really safe
The shutter shuttered good
And every little working part
Worked just the way it should.

Directors, Experts, Engineers
All stood around and gazed
A miracle has been performed
They really were amazed.

A thing like that they all agreed
Might cause a revolution
And he who made it must be found
Was their firm resolution.

The manager expressed the view: -
“A criminal mishap
Which overlooked, might easy cause
The death of some poor Jap.”

Machinists, Fitters, Labourers
Were each in turn accused
But all denied one ounce of guilt
And any blame refused.

The Storeman next came on the mat
But proved beyond all doubt
That every time he issued tools
He gave the wrong ones out.

And next the draughtsman cleared his name,
And said his life he’d stake
No drawing ever left his pen
With less than one mistake.

Determined still to fix the guilt
They tried the Office staff
Who all declared their innocence
And proved it with a graph.

“This graph is true in every line”
They said “And we’re not skiting
It proves production’s down to nil
Since we started expediting.”

The D.P.L. then made this claim
And no one disbelieved
“With the kind of tools that we turn out
It couldn’t be achieved.”

The gauges then were sorted up
And tools were brought along
But not one piece in all the lot
Was less than ten thou. wrong.

So when they did eliminate
Each chance to fix the blame
They prayed “Oh, Lord, please punish him
Who sullied our good name.”

Directors, Foremen, Managers,
Then wrote with one accord
Petitions to the Government
Exonerating Ford.

With firm resolve they threw it out -
Their one sad dismal failure
When the dustman found that it was stamped
“A SAMPLE FROM AUSTRALIA”.

Haha.

For anyone interested, here’s the page itself.  I also found interesting the rhyme at the top of the page.  Certainly a clever pun, and a sentiment that, frankly, still applies to most endeavours, warlike or not, today…

[Click on picture to enlarge, as usual]

[Image: scan of the page]

Note: I realise I scanned it skew.  I am still, I am sure, going to be able to sleep tonight…

Wellington geeks/nerds: jooooooooiiiiiin uuuuuuus aimee whitcroft Jun 04

No Comments

Anyone who watches my twitter stream *snort* can now breathe: the details of the plotting mentioned can now be made public.


Brian Calhoun (SilverStripe) and I have decided to band bravely together in a novel, nerdy enterprise.  Sort of.  More an event, really.  And we need your help, and the help of everyone you know.  Think of this as a clarion call1.

More details before you commit, you say?  Not a problem, we reply!

It’s not a new idea, actually.  Called nerdnite, it started in the States (as so much great nerdiness/geekiness2 does) many years ago.  Not quite days of yore, but certainly a few.  Its purpose?  To bring together people who have obsessions, and get them to talk about said obsessions and, maybe, even talk to each other.  Gasp.

As one might expect, it happens in bars, since that’s where the beer/wine/etc3 is.

The format tends to take the shape of a few presentations on subjects ranging from different types of worms, to sports, to Wolverine’s various guises over the years, and anything in between.  There’s absolutely no limit to what people can talk about, or how they present it (apparently, music gets perpetrated too sometimes).

Since its inception in Boston, it’s spread like a cute, nerdy little virus all over the States and now into other countries.  Including Germany, of course4.  And so, we thought we’d organise one for Wellington.  Because Wellington’s brilliant, and we reckon there’s no shortage of people who’d love to get involved.

Please don’t prove us wrong.

What do we need from you?  Well, participants5!  If you have something you get all geeky/nerdy about, tell us.  Perhaps you might like to talk about it at some point, in front of some people.  Or, perhaps you have a friend/acquaintance/pet/deadly enemy who you think might like to get involved.

Or, of course, you think you’d like to come along and watch (we still want to hear from you), or have ideas around the perfect venue or some other way you’d like to contribute.

C’mon, it’ll be fun!  Watch out for the twitter hashtag #nerdnitewelly, and it’s easy enough to contact us by email, too (for me, for example, just look up the page)*. Should you prefer Twitter, we’re @teh_aimee and @unbrand (just guess who’s who).

I’ll leave you with the tagline: “It’s like the Discovery Channel… with beer!”


UPDATE: I just realised this was my 100th post (on Sciblogs, at least).  How appropriate :)

———————————–

Note: Non-nerds/geeks are also welcome. We enjoy fresh meat new people.  Ooh, and we’re not limiting it to Wellingtonians – it’s just going to be taking place here.

1 For the less linguaphilic of you, this means an emphatic call to action.

2 As previously mentioned, I’m not going to get into the debate.  Again, xkcd sums it up perfectly.  Which means that geeks are just as welcome as nerds.

3 The stuff that helps with the talking to (and in front of) each other.

4 I’m still saluting you German types for your support of the Wellington Declaration.

5 You were slightly worried we were going to ask for money or…other stuff…, weren’t you.

———————————–

I’m beginning to wonder whether I’m developing a footnote problem addiction.

* Ok, fiiiiine.  aimee dot whitcroft at gmail dot com.  And unbrand at gmail dot com. Or aimee at nerdnite dot com and brian at nerdnite dot com.  Enough choices?

The Creativity Machine aimee whitcroft Jun 04

2 Comments

Ah, the Creativity Machine – generator of a phrase I considered having tattooed onto my tender self.


I mentioned the Creativity Machine in the footnotes of a previous post.  And certainly, it’s not a new invention either.  But it is a fascinating one, if nothing else for its illustration of what near death experiences can do to software. Yes, you heard that correctly.

Some history:

In the 1980s, a man named Stephen Thaler was playing with neural networks.  Software which attempts to model the computational processes seen in brains.  At the time, it was part of his day job (he was also playing with lasers, lucky man).

And, at some point, he began to wonder what would happen if he killed his neural networks.  Very creator/destroyer of him.  The idea was that its death might let him know more about what happens in organic brains when they die.

So he did it.  He built another piece of software, appropriately entitled the Grim Reaper, to kill a neural network by killing off its ‘neurons’, one by one.  Basically, it disrupted the neural network’s synapses (connections).  But he didn’t just kill it.  Oh no.  There was a prelude (apt, given what he did).

First, he loaded up his network with the lyrics of a bunch of different Christmas carols, and then he introduced the Grim Reaper.  And he watched what happened as the neural network’s life flashed before its eyes – remember, this wasn’t a simple, quick shutdown, but rather a progressive disintegration*.

What he saw was fascinating.  As the damage deepened, the neural network went from reproducing the carols perfectly to recombining them using whatever was left. Now, given that creativity is often described as the ability to recombine things in ways others haven’t thought to, this means that the poor, panicked network was being creative.  It was hallucinating: making, from the remains of its shattered and fragmented memories, new carols.
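
You can get a feel for this with a toy sketch (mine, and using a Hopfield-style associative memory rather than anything Thaler actually built): store a few patterns in a little network, then zero out a growing fraction of its connections and watch the recalled state drift from faithful memory towards blends of everything it ever knew.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store three random binary 'memories' in a Hopfield-style network.
n = 64
memories = rng.choice([-1, 1], size=(3, n))
W = sum(np.outer(m, m) for m in memories) / n
np.fill_diagonal(W, 0)

def recall(weights, cue, steps=20):
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(weights @ s)
        s[s == 0] = 1
    return s

def overlap(a, b):
    return float(a @ b) / len(a)   # 1.0 = perfect recall of that memory

cue = memories[0]
for kill_fraction in (0.0, 0.3, 0.6, 0.9):
    damaged = W.copy()
    damaged[rng.random(W.shape) < kill_fraction] = 0.0   # the 'Grim Reaper'
    out = recall(damaged, cue)
    # As more connections die, recall of memory 0 degrades and the output
    # starts to resemble mixtures of the other stored patterns.
    print(kill_fraction, [round(overlap(out, m), 2) for m in memories])
```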

Its final line?

“All men go to good earth in one eternal silent night.”

That’s the line I considered turning into a tattoo.  I still do, sometimes.  It’s beautiful.  And, given the circumstances, incredible.

Of course, this spurred an idea.  Not in how to generate new and interesting word combinations, but in what other applications this sort of recombination could have.  Would it, he wondered, be possible to produce this sort of noise in a network without having to kill it?  So he tried, and it worked.  He was able to perturb a network’s connections, in the process changing it and generating ideas.  This time, his machine was able to invent new, ultrahard metals.  It’s the noise that’s key, it turns out, to the network’s ability to create.  Indeed, some biologists now believe that the same is true for the human brain.

In addition, creativity machines can optimise their work.  They have critic networks built into them which help them select the best ideas, enabling them to use those to generate even better ideas.

One of the most famous of the machine’s creations?  The Oral-B CrossAction toothbrush.  Yup.  And, apparently, there are a bunch of other commonly-used devices out there designed by one of these – the companies who sell them just aren’t terribly comfortable with revealing who/what designed them :)

It’s got a bunch of different applications, though – if you think about it, any space where a human brain’s ability to be creative is useful.  Naming new products and companies; designing weaponry; screening at airports; writing tabloid headlines, and so forth.

No, I don’t think we need to worry about this heralding the world domination of humans by machines (although, as previously expressed, I welcome our robot overlords).  But it is a fascinating story.  And a wonderful machine.

And, taken far enough, it’s one of the technologies which could one day help us ‘download’.  Something I, for one, would be very keen to do.

Here’s a video on the subject:

[Embedded video]

Also: I love the word “perceptron”.  And I’m not even going to go into STANNOs now.  Or the Singularity.

———————————————————-

Note: I’ve known this story for years.  It’s practically branded in my brain.  But I hope it’s still interesting :)

* Which reminds me of, but isn’t really related to, this marvellous TED talk, in which a neuroscientist discusses her experiences during a catastrophic stroke.

Towards evolvable circuitry aimee whitcroft Jun 03

No Comments

In the recent past, I stumbled across a paper entitled ‘The evolvability of programmable hardware’.  I was, as I’m sure would be the case for anyone, immediately fascinated.


An FPGA (see below)

This, then, is the account of that paper, dear reader.  But a word of background, perhaps, before we dive in.

Biological systems are known for their ability to tolerate faults, caused either externally (hostile environments) or internally (mutations).  Indeed, their ability to withstand mutations is a major component of their ability to evolve.

Manmade systems, on the other hand, are less robust – change or remove something, and they tend to fall over.  This limits their ability to gain new, useful adaptations (through random change, that is, rather than direct design).  Of course, it’s something which many people have been tackling since the early 90s at least.

So, how does the squishy biological stuff do it?

Certain properties are needed for this level of biological robustness – proteins are useful examples.  Proteins are built by assembling amino acids into long strings, like pearls.  The order of these amino acids then causes the protein to fold into a shape, which in turn determines its function.

There are 20 standard amino acids (22 if you count a couple of rare ones), and we can only synthesise about half of them ourselves; the rest have to come from food.  The order in which amino acids get strung together is, of course, encoded in our DNA, with three DNA bases (a codon) specifying each amino acid.  Since there are more possible three-base codons than there are amino acids to code for, you can imagine that each amino acid ends up with multiple codes.  This is important – it means minor mutations in a DNA sequence might, or might not, induce a change in the amino acid sequence.
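
The arithmetic behind that redundancy, for anyone who wants to check it:

```python
# Three bases per codon, four possible bases per position:
print(4 ** 3)      # 64 possible codons

# 61 of those 64 actually code for the 20 standard amino acids (the other
# three are 'stop' signals), so each amino acid gets about three synonymous
# codons on average -- which is why many single-base changes are silent.
print(61 / 20)     # ~3 codons per amino acid
```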

Further, even changes in that sequence might or might not induce a change in the protein’s structure, and therefore its function.  Hence robustness.  But also, the ability to change.

So now, the question that needs to be asked is whether programmable hardware – in this case, a certain class of electronic circuits  – can be designed to have the same ability, or whether there are intrinsic differences in biological vs technological organisation that would preclude it1.

As with biological systems, technological systems are also subject to 2 forms of change – external/environmental and internal component changes.  This paper focuses on the internal kind, because that is the route through which their biological counterparts get to innovation.

“Fault-tolerant” systems are not new. Neither is the design of circuitry which is able to adapt and evolve its function.  And, of course, there’s the use of evolutionary algorithms2 to solve very complex problems.  In evolvable hardware, the same principles – reproduction, mutation and selection – are used in electronics. Often, field-programmable gate arrays (FPGAs) are used.

What are FPGAs?  Simply put, they’re silicon based lifeforms logic circuits, built from transistors. What’s important to realise is that both the functions computed by each of the logic gates assembled in an array in the circuits, and the connections between these gates, can be altered.  Hence “field-programmable”.

FPGAs are used in image processing, digital signal processing and high-performance computing applications such as fast Fourier transforms.  Basically, they’re commercially valuable.  Which is one of the reasons why they’re the subject of the paper.
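
To make ‘evolving hardware’ a little less abstract, here’s a minimal sketch of the general technique (my own toy, vastly simpler than the paper’s simulation, and simulated in Python rather than on a real FPGA): a genotype is a small feed-forward array of two-input gates plus its wiring; the fitness is how many rows of a target truth table the circuit gets right; mutation rewires a connection or swaps a gate’s type; selection keeps the best.  Here the target is XOR, which none of the available gate types computes on its own, so the search has to wire one up from scratch; it usually manages within a couple of thousand generations.

```python
import random

GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}
GATE_NAMES = list(GATES)
N_GATES, N_INPUTS = 4, 2

def random_gate(i):
    # A gate may read the primary inputs or any earlier gate's output
    # (feed-forward wiring only), so gate i has N_INPUTS + i possible sources.
    n_sources = N_INPUTS + i
    return [random.choice(GATE_NAMES),
            random.randrange(n_sources), random.randrange(n_sources)]

def evaluate(circuit, a, b):
    signals = [a, b]
    for name, s1, s2 in circuit:
        signals.append(GATES[name](signals[s1], signals[s2]))
    return signals[-1]                   # the last gate drives the output

def fitness(circuit, target=lambda a, b: a ^ b):
    return sum(evaluate(circuit, a, b) == target(a, b)
               for a in (0, 1) for b in (0, 1))

def mutate(circuit):
    child = [gate[:] for gate in circuit]
    i = random.randrange(N_GATES)
    slot = random.randrange(3)
    if slot == 0:
        child[i][0] = random.choice(GATE_NAMES)          # swap gate function
    else:
        child[i][slot] = random.randrange(N_INPUTS + i)  # rewire one input
    return child

# A simple (1 + 4) evolutionary strategy. Offspring are listed first so that
# ties go to a mutant, letting the search drift across 'neutral' changes.
random.seed(1)
best = [random_gate(i) for i in range(N_GATES)]
for generation in range(2000):
    candidates = [mutate(best) for _ in range(4)] + [best]
    best = max(candidates, key=fitness)
    if fitness(best) == 4:
        break
print("generations:", generation, "fitness:", fitness(best), "circuit:", best)
```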

So, to the experiment itself:

Each circuit (which computes one function) has a specific structure, made up of the identity of, and connections between, its logic gates.  As in biology, let us call this its genotype.  Further, this structure determines its function – its phenotype. Each circuit can be mapped to one function.

And the questions to be asked?  I’m going to quote the authors here…

“How ‘robust’ is a typical circuit to changes in the wiring/configuration? Do neutral networks exist in this configuration space? Can circuits with significantly different configuration compute the same function? Does the organization of circuit space facilitate or hinder the adoption of novel phenotypes (logic function computations) through small numbers of gate changes?”

Basically: “if we poke it with a stick, what will it do?” [Yes, I know we're all thinking about those snails, but please, try to remain focused on the research at hand]

For the most part, they look at circuits with 4×4 nodes, although they did also look at 3×3 and 6×6 to see how circuit size might affect their findings. Think of all the possible permutations in structure and function of these circuits as inhabiting a phase space we’ll call a “circuit space”.  And woohoo, but it’s a big one.  Lots of zeroes.  Circuits are ‘neighbours’ if their structure is only minimally different.  That is, they live close to each other in circuit space.  It’s worth reminding you, dear reader, that small changes in structure might, or might not, mean a change in function.  As with changes in DNA or amino acid sequence.


Digital logic circuits and circuit space.

(a) Shows the standard symbols for logic gates along with the functions they represent. (b) Shows an example of a digital logic circuit comprising four (2 x 2) gates, akin to a field-programmable gate array. The circuit comprises four logic gates, represented by the symbols shown in (a). Each of the gates has two inputs and one output. The entire array has nI = 4 input ports and nO = 4 output ports; the array maps a Boolean function having four input variables to four output variables. The connections between the various columns or ‘levels’ in the array are ‘feed-forward’; i.e. the inputs to each element in a column of the array can come only from the outputs of any of the elements from previous columns. There are four outputs from the array, which can be mapped to any of the four gate outputs. (c) Illustrates the concept of neighbours in circuit space. The panel shows six circuits with 2 x 2 logic gates and two inputs and two outputs per circuit. The figure shows a circuit C1 (thick ellipse) and some of its neighbours in circuit space, that is circuits that differ from it in one of the four possible kinds of elementary circuit change. For example, C2 differs from C1 in internal wiring, C3 differs in the logic function computed by one of the four gates, C5 differs in an input mapping to one of the gates and C6, which differs in the output mapping. The circuit C4 differs from C1 in two elementary changes and is therefore not its neighbour in circuit space; however, it is a neighbour of C3 and C5. The differences between C1 and the other circuits are shown by shaded grey boxes. [Taken from paper; click on picture for bigger version]

Right, ok, what did they actually do?

Well, they built a simulation.  Obviously.  Because no one, no matter how obsessive they are, wants to sit and build squillions of circuits. To be slightly more specific, they built a vector representation, allowing them to easily compute the ‘distance’ between circuits (how different the output bits are from each other), which would denote a change in the ‘wiring’ of the FPGA.  They then chose a bunch of different permutations for input mappings, gate configurations and output mappings.

And they analysed 1,000 circuits.  To see how robust a circuit was, they caused one of its gates to ‘fail’, and watched what happened.

On to the findings!

(Or, the bit where people ask “right, so we know what you did, and have survived your interminable introduction, but why should we care?”)

Interestingly, they noticed that a small number of functions were computed by a large number of circuits, and vice versa.

Also, circuits computing any one function are linked in ‘neutral networks’ across this circuit space.  Basically, you can move from one circuit computing function x to another computing function x, simply by making a small change (but one which, obviously, preserves its function.  Just like what happens with proteins).  Of course, some functions have larger neutral networks than others.  The larger the neutral network, the more robust the circuits are to changes.

As circuits wander through their neutral networks, they will encounter different things.  Some of their neighbours will have novel functions.  Some will have the same function.  Of course, as they wander further from their original position out into their neutral networks, they are more likely to encounter other circuits with novel functions.  All of which makes good, intuitive sense as well (think about what happens when you travel).
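
To see what a ‘neighbour’ and a ‘neutral’ change actually look like, here’s a small self-contained sketch (again mine, and far cruder than the paper’s setup): take a hand-wired, minimal three-gate XOR circuit, enumerate every single change to a gate’s type or wiring, and count how many of those neighbours still compute XOR.  For a circuit this pared down the answer comes out at zero, which is a nice preview of the robustness-needs-complexity point a few paragraphs further on.

```python
import itertools

GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

def truth_table(circuit):
    """The circuit's phenotype: its output bit for each of the four input pairs."""
    rows = []
    for a, b in itertools.product((0, 1), repeat=2):
        signals = [a, b]                     # primary inputs first
        for name, s1, s2 in circuit:         # then each gate, feed-forward
            signals.append(GATES[name](signals[s1], signals[s2]))
        rows.append(signals[-1])             # the last gate drives the output
    return tuple(rows)

# A minimal 3-gate XOR: g0 = a OR b, g1 = a NAND b, g2 = g0 AND g1 (the output).
circuit = [("OR", 0, 1), ("NAND", 0, 1), ("AND", 2, 3)]
target = truth_table(circuit)

neutral = total = 0
for i, (name, s1, s2) in enumerate(circuit):
    # Neighbours that differ in the gate's logic function...
    for new_name in GATES:
        if new_name == name:
            continue
        total += 1
        variant = list(circuit)
        variant[i] = (new_name, s1, s2)
        neutral += truth_table(variant) == target
    # ...and neighbours that differ in one input wire (feed-forward only).
    for slot, current in ((1, s1), (2, s2)):
        for new_src in range(2 + i):
            if new_src == current:
                continue
            total += 1
            gate = [name, s1, s2]
            gate[slot] = new_src
            variant = list(circuit)
            variant[i] = tuple(gate)
            neutral += truth_table(variant) == target

print(f"{neutral} of {total} one-change neighbours still compute XOR")
```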

In essence, it shows that yes, electronic circuits can be designed which mimic biological robustness.  But this paper does quite a bit more.  Previous work has shown that specific circuit functions can be designed to be fault-tolerant.  For example, people have forced circuits to ‘learn’ how to better and better tolerate random gate failures and noise3.

What makes this paper interesting is that it’s not specific.  It shows that this ability to be fault-tolerant applies to many different types of circuits and functions, and that for each function, there will be circuits that are significantly more or less robust than others, without even needing things like redundant gates.

In other words – a more fault-tolerant circuit doesn’t necessarily need to be more complex.  In our increasingly energy-aware times, this is good.  Also, evolutionary algorithms (or other such brilliant things) can be used to find these uber-circuits.

Also: sometimes, one wants a circuit which can easily change to computing a new function.  Like in modular robots, which we use for deep sea mining, space exploration, search and rescue, and other environments hostile to squishy, meaty4 lifeforms such as ourselves.  In such instances, the ability of said robots to reconfigure their circuitry, for example for navigation, could be very handy.  It means the robots could learn5.  For this, it’s particularly useful that neighbours with novel functions exist in circuit space — to put it more simply, if you’re trying to redesign your circuitry to do something new, it would be nice to do so while making as few changes as possible.

Hell, it even means self-repairing circuitry could be designed, even if some parts fail!

So, what are the costs of reconfiguration?  Well, time, really, and reconfig data storage space, and of course, both depend on how much reconfig work is needed.  Also, if one designs everything as the authors did, so that only minimal (partial) reconfig is needed, then everything can go on working, rather than everyone having to stop to move around.  Sort of like closing one lane on a road, as opposed to closing a whole tunnel6 road.

Another general observation? The more complex a circuit (i.e. the more gates it has), the more robust it is.  This makes sense.  It’s why we can survive many changes to our genome and be just fine, whereas, for example, viruses can suffer severe loss of function if their genome changes much.

Also, the more complex the circuit, the larger its neutral network in circuit space.  And there will be some functions that little circuits can’t compute. (I have an image in my head of a small person being asked to do some outrageous piece of multiple division, heh).

An example, from the authors:

’A case in point is again the space of four-gate circuits. This space comprises 4.67 x 10^8 circuits. These circuits can compute only 4.05 x 10^6 different functions, a small fraction of the possible 1.8 x 10^19 Boolean functions with four inputs and four outputs…These considerations show that robustness and evolvability of programmable hardware have a price: increasing system complexity.’
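
That last figure is easy to sanity-check, if you’re so inclined:

```python
# A Boolean function with four inputs and four outputs is just four
# independent single-output functions, and a single-output function of
# four bits is one of 2**(2**4) possible truth tables.
per_output = 2 ** (2 ** 4)       # 65,536
print(f"{per_output ** 4:.2e}")  # 1.84e+19 -- the paper's 1.8 x 10^19
```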

Finally  – as in biology, so again tech: there are some components which aren’t used directly for the computation/function, but are nonetheless important.  They allow extra variability, and are key in the ability to develop new functions.   It’s also why very simple/small circuits may not be as ideal, even though they are simple and elegant — their pared-down nature means they don’t have these extra bits.

There are limitations, though, to the research: each silicon chip has minute differences, and external factors such as temperature DO play a role. And a computer simulation is, axiomatically, not the same thing as real life. And the authors aren’t sure what would happen if you scaled up to circuits containing thousands/millions of logic gates.  And, of course, to get around the mahoosive numbers involved anyway, it was necessary to perform sampling, rather than measuring everything.

So, there you have it.  It’s possible to design fault-tolerant, adaptive electronic circuitry.  Bring on our silicon-based overlords.

Disclaimer: until I read this, I had no idea what FPGAs are.  Or logic gates.  I am, of course, immensely ashamed.  However, it is again an example of how judicious use of search engines, and some curiosity, can bring wonderful things.  (Yes, this is aimed at people who are too lazy to look things up sometimes)

Also: you think this is long?  You should see the original paper…

—————————-

1 Earlier this year, some fascinating research was published which explained why E. coli bacteria crash less often than Linux.  Seriously.

2 Evolutionary algorithms.  Sigh. These are one of the reasons I began a genetics degree in the first place.  Algorithms behaving like little, sexually reproducing beasties.  It’s delicious.

3 Watch out for my upcoming post on the Creativity Machine

4 Bring on bionics

5 That’s a good thing

6 Wellingtonians will understand…

References:

UPDATE: It’s online, finally!

Raman, K., & Wagner, A. (2010). The evolvability of programmable hardware. Journal of The Royal Society Interface. DOI: 10.1098/rsif.2010.0212
