Homo electronicus – update Feb 28
This just in from Slate – an interview with a “cyborg”. Neil Harbisson, who has a device that allows him to sense colour, talks about some of the experiences and discrimination he has faced. He notes that the current situation is somewhat like that experienced by transsexuals, and that societal attitudes, as well as healthcare services, will need to change.
Future Foods Feb 26
Will farm livestock become endangered species? Social, economic and environmental drivers are converging not only to produce food more efficiently and sustainably, but also to stimulate new ways to produce meat, or to remove the need for it altogether. Such changes, if successful, could have substantial effects on New Zealand’s agricultural and economic landscapes.
Lab-grown meat has been worked on for a while, and convergence with other technologies is starting. Modern Meadow is aiming to print meat. In vitro production of meat still has a long way to go technically, economically and socially. There is scepticism that it will become economically viable and sufficiently scalable, or even appeal to consumers. But would it really be that different from currently available mechanically extracted meat products, insects, or some of the delights whipped up by molecular gastronomists?
Will our culinary future need meat? Bloomberg BusinessWeek notes the emergence of start-ups looking at plant proteins to replace meat and egg products. This goes way beyond that culinary favourite of yore – textured vegetable protein. What is particularly noteworthy about these developments is that some of the key backers are VCs who have also focused on clean technologies. Some of them are hoping that the food companies will give quicker returns on investments.
Like the “food pills” of yesteryear these new technologies may not come to pass, but they warrant careful consideration because of their potential to disrupt traditional practices. If a good cheap substitute for milk powder is created, what effect will that have on our milk producers? A premium price for real milk, or a quaint cottage industry?
Alongside vertical farming and technologies to enhance food safety, these developments signal a higher-tech approach to food production, and an increasing emphasis on production practices, that New Zealand may not be sufficiently prepared for.
Thinking Futures Workshop Feb 22
The NZ Futures Trust is running a workshop in Wellington on 6th March. The purpose is to help connect up future thinkers and to help identify how the Futures Trust can better support futures thinking in NZ. The workshop will:
• identify gaps in the way futuring works to support New Zealand businesses, communities, policy-making and decision-making
• identify and review potential “fixes” for the system gaps
• consider new ways to connect with other organisations, and
• work out how we can open up access to the resources held by NZFT and other organisations.
To find out more and register go here. Spaces are limited.
I’m involved in helping organise the workshop.
Homo electronicus Feb 22
If you could have bionic bits, would you, should you? Which ones? Currently your options are limited, but that seems likely to change over the next couple of decades. There is increasing activity in wiring up appendages and organs to the brain. As recently noted by Miguel Nicolelis, cyborgs rather than the Singularity seem more likely. Lots of money is going into research to better understand and manipulate the brain.
Pacemakers, defibrillators and cochlear implants have been around for a while now, and despite high costs are routinely inserted in more affluent countries.
The FDA has just approved a bionic eye implant that can restore some vision (it’s been available longer in Europe). The device, made by Second Sight Medical Products, costs US$100,000 and has 60 electrodes that need to be wired up. Future developments aim to increase connectivity, and hence vision quality. Costs will also probably fall. Similar devices are also being developed by others.
Researchers have also given rats a new sensory capability as they investigate how to rewire the brain.
DARPA is investing in developing a brain-computer interface to help handle lots of data, as well as other enhancements to improve soldiers’ physical and mental capabilities. The large numbers of military and civilian casualties from Afghanistan and Iraq are also stimulating development of advanced prosthetics and implants.
As far as I know, no one with normal hearing has had an implant to make them hear better, and rewiring your retina to have x-ray or eagle eye vision also seems unlikely for most people. More probably people will go for cheaper and less invasive external enhancements, like electronic contact lenses, or advanced Google glasses. Some implantable devices that can communicate with external devices or brain signals are also much easier and cheaper to insert, so have the potential to be more widely used.
Some people are interested in introducing RFID chips into themselves (or their children). Kevin Warwick has had several operations to introduce electronic components into himself as part of his research. However, some RFID recipients now regret their decisions or recognise that they weren’t sufficiently well informed.
The increasing number of chemical and electronic “enhancements” becoming available raises concerns about their ethics, equity, privacy, liability and long-term safety. What if your employer required or encouraged some enhancements to enable you to work more effectively, efficiently, or safely? Or if cyborgs were more likely to be employed? What if an implant to correct a medical condition results in other unforeseen enhancements?
There is a sense that we’ll become more intimate with electronics this century, through choice or obligation. That’s not a done deal; widespread public discussions on the issues are only just beginning.
In an earlier post I noted how optimistic some early 19th Century visions of the future were. I wondered then whether we are getting more pessimistic. Now there is some real data to play with. Brain Pickings has published an infographic from Giorgia Lupi called A visual timeline of the future based on famous fiction.
The figure characterises stories as having an overall positive, negative or neutral perspective about the time in which they are set, and tags each story’s theme as being primarily about the environment, science, technology, society, travel/adventure or politics. I don’t know what criteria they used to decide what was positive or negative, but I’ll take that at face value. Sixty-two stories (novels, short stories, and comics) are covered, so it isn’t a comprehensive review. Some of the most prolific authors (such as Isaac Asimov, Philip K. Dick, Arthur C. Clarke, and Robert A. Heinlein) only have a couple of stories in the graphic, and some well-known authors are absent (George Orwell, Ursula Le Guin). The analysis is also skewed, with a relatively large proportion of the stories published in the last two decades.
But what the hell, you can still extract superficial impressions. (And apologies for the graphs being on the small side; there is no Goldilocks zone for image size in WordPress.)
There are nearly three times as many “negative” (29) views of the future as “positive” (10), with the neutral stories (23) sitting in between. The 2000s seem a pretty glum time to be writing about the future based on this sample, while the 1950s produced a cheerier oeuvre. But overall, you can’t claim that science fiction has taken a more, or less, positive trajectory over the past 60 years.
Stories focused on the future environment and society tend to be more negative, while ones about travel to other planets have a more even-handed perspective. The degree of social dystopia isn’t surprising, but if you just watch sci-fi movies you may be surprised at the number of less negative stories about the future environment (and science & technology).
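Tallies like the ones in the infographic are easy to reproduce. A minimal sketch in Python, using made-up story records rather than Lupi’s actual dataset (the decades and sentiments below are invented purely for illustration):

```python
from collections import Counter

# Hypothetical records of (decade published, overall sentiment).
# Invented for illustration; not Giorgia Lupi's actual data.
stories = [
    (1950, "positive"), (1950, "neutral"), (1950, "positive"),
    (1980, "negative"), (2000, "negative"), (2000, "negative"),
    (2000, "neutral"), (2010, "negative"), (2010, "positive"),
]

# Overall sentiment counts across all stories.
overall = Counter(sentiment for _, sentiment in stories)

# Counts per (decade, sentiment) pair, for the decade-by-decade view.
by_decade = Counter(stories)
```

With a real dataset you would simply swap in the actual records; `Counter` does the rest.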
As a post by David Levine noted a couple of years ago, science fiction tends to mirror recent social issues, and its authors are mostly hopeless at predicting what will happen.
It’s great to see the government reaching out to the country to get feedback on what the big challenges facing NZ are, and the role that science can play in helping solve these. However, I’m dissatisfied with the possible challenges that they have put up on the website, for two reasons. Firstly, many of these seem to be some of the Transformational Research, Science & Technology topics identified by MoRST (one of the grandparents of the Science & Innovation group in MBIE) several years ago. Nothing wrong with that, it was good work. But not a lot of additional thinking about these seems to have gone on since then.
The more unsatisfactory aspect is that these potential challenges cover much of what is already funded under existing schemes – more of the same rather than something novel. Yes, they are examples only, and the Cabinet Paper does refer to the use of “straw men” in the public consultation. But I’d expect more effort to help the public consider what a good science challenge is or what some important specific issues are, rather than just providing examples of a range of research that is currently being undertaken. Who wouldn’t want to support developing cures for cancer, or better pest control? What makes a good challenge and why, and what are the criteria being used to decide? A bit more information, please. Maybe the TV advertising will tease this out; I hope so.
On the more positive side, the intent is to get more government agencies aligning policies and other efforts behind the challenges.
I have difficulty viewing what is currently available as challenges, or even “science projects”, because they lack specificity and measurable outcomes. Minister Joyce notes in the Cabinet Paper that the intention is rather to develop a scheme similar to CSIRO’s National Research Flagships, and that more government funding will migrate to the selected challenge areas in the longer term. For me a “Challenge” has more the flavour and intent of the DARPA, Grand Challenges in Global Health and Research UK challenge initiatives, where quite specific problems are being addressed and there are specific milestones and indicators of success along the way.
The Cabinet Paper notes the national science challenges are “aspirational”, so I see the scheme as more PR (not a bad thing) than focussing on some significant and hard specific problems facing NZ. However, time will tell. The process is to take the public responses and attempt to combine them with internal Ministry thinking based on sector consultation and a “Peak panel workshop” to identify the final set of challenges.
I would have preferred that they pick three challenges rather than 10, so that each one has a decent amount of money behind it. Money though isn’t the only “challenge”. Even with recent structural and funding changes, getting some institutions (as distinct from individual researchers) to collaborate nicely rather than compete will take some time (and a new generation of administrators).
It is fortuitous that a recent issue of Technology Review looks at challenges and asks “Why we can’t solve big problems“. The conclusion in the article is that solutions to technological problems require three factors:
- The public and politicians must care about the problem;
- There is institutional support for the solution (eg NASA and others supported going to the moon); and
- The problem is a technological one that we understand (the latter is an acknowledgement that we don’t yet know enough about, for example, some diseases to consider what an effective technological solution could be).
The DARPA and Global Health challenges tend to operate in this space. The National Science Challenges perhaps won’t meet all these criteria, particularly the third one. Reducing obesity or child poverty may have little to do with technological solutions. Improving environmental health may have a technological component, but societal and policy changes will also be necessary.
That isn’t a problem for the National Science Challenges because the initiative is framed as science rather than technological challenges. But the government (and the lucky “winners”) will need to be careful in explaining exactly what they are attempting to achieve to avoid getting a backlash in 4 years or so when the public asks “Has that challenge been solved?”
The European Commission has published a relevant report about Grand Challenges [PDF], noting the preference of the USA for technological or industrial challenges while Europe has focussed more on scientific challenges, with some Asian countries trying to cover the spectrum. The report warns against attempting to integrate or closely align research challenges with innovation ones (ie where the goal is to take an idea or technology through to commercial success), because such integration seldom works where the government doesn’t have a strong influence over industrial decision making. Grand challenges also do better when they have some independence from government, and when the organisations managing them see themselves as “change agents” rather than funding bodies.
So we aren’t going down the Grand Challenges path. But there are some opportunities to stimulate new research and encourage public interest and engagement. I’ll be putting a lot of thought into my proposals for good challenges.
Ericsson is promoting a “big data” approach to education with its Future of learning video and report [PDF]. They have a vested interest in promoting mobile-enabled learning, but there are interesting concepts in their report. They highlight how analytics can be used to tailor learning for each pupil (or staff member), drawing on adaptive learning platforms developed by firms such as Knewton, and the on-line teaching resources provided by the likes of the Khan Academy and iTunes U.
Firms selling adaptive learning programmes report very good learning improvement, but they also don’t appear to be able to assess long form answers or creativity well. One blog post also notes that such approaches focus on developing tools to improve passing existing tests. No doubt technologies will improve, but it would be wrong to think that technology will single handedly create a bright new learning future. Data and analytics help, but the fundamentals of the education system, not just the tools, also require refinement.
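To make the “adaptive” idea concrete, here is a toy sketch of the core loop such platforms run. It is a hypothetical illustration, not Knewton’s actual algorithm: serve the item whose difficulty best matches the current ability estimate, then nudge the estimate after each answer. Real systems use much richer statistical models (e.g. item response theory), and the function names and scale below are my own.

```python
def next_item(ability, items):
    """Pick the item whose difficulty is closest to the current ability estimate."""
    return min(items, key=lambda item: abs(item["difficulty"] - ability))

def update_ability(ability, correct, step=0.5):
    """Nudge the ability estimate up after a correct answer, down after a miss."""
    return ability + step if correct else ability - step

# Hypothetical item bank; difficulties are on an arbitrary scale.
bank = [{"id": i, "difficulty": d} for i, d in enumerate([0.0, 1.0, 2.0, 3.0])]

ability = 1.0
first = next_item(ability, bank)                 # matches difficulty 1.0
ability = update_ability(ability, correct=True)  # estimate rises to 1.5
remaining = [item for item in bank if item is not first]
second = next_item(ability, remaining)           # difficulty 2.0 is now closest
```

The limitation noted above falls straight out of this sketch: the loop only ever optimises against the item bank it is given, so it improves performance on existing test-style questions rather than assessing long-form answers or creativity.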
One of the commentators in the video gets too carried away, declaring that “knowing something is probably an obsolete idea”. Sure, the current education system may not be well aligned with today’s employment needs. But finding out stuff only when you need to know it is a purely utilitarian view of knowledge, and seems to leach out the creativity that education commentators like Ken Robinson identify as what is lacking in the current system.
Dale Stevens was interviewed on Nine to Noon, and he makes some good points about how US schools at least are underperforming. His book Hacking Your Education is soon to be published, and he founded UnCollege. He is someone who consciously dropped out of the American school system and taught himself – unschooled rather than home schooled – because he found the system didn’t meet his learning style or needs.
Dropping out of the system can work for a small well motivated and affluent minority. But the future of learning and society would be better served by changing the system rather than making it easier to opt out. As NZ’s Secretary of Education rightfully indicated, we should have high expectations that the school system works well for all.
How we learn is changing. Better technology has a place, but so too do people with a passion and skill to teach. We are inspired more by people than technology, and that seems to be the key to learning.
The Singularity Hub shows some 19th Century French Postcards (relax/sorry, nothing risqué) that depicted life in the year 2000. I’ve seen a couple of these previously (the blog site Paleofuture posted a few of them several years ago), but Singularity Hub shows a more extensive set, along with comments about how accurate some of them seem to have been. Robotic machines feature frequently.
The postcards were created after, and presumably inspired by, some of Jules Verne’s stories. The cards (originally 50 in total) apparently weren’t distributed – they were intended to be included as inserts in either some toys or cigarette packets, according to Isaac Asimov, who rediscovered them.
One of the striking things about the postcards (and other attempts at sketching the future) is not the accuracy (or lack thereof) of the predictions, but how the environment and clothing in the pictures usually remains unchanged. So not a fancy Roomba-like vacuum cleaner, just a semi-autonomous good old-fashioned scrubbing brush (wireless not yet invented). And 19th Century clothing and parquet flooring.
That illustrates some of the traps in foresighting – extrapolating from the current situation, and focussing on the technology rather than also considering how the environment in which it will sit will also change.
The postcard of the school of the future is also a delight – not quite what Google has in mind, I hope, for digitising books. Or how National Standards will play out.
Most of the postcards appear optimistic about the future. Not surprising, given Jules Verne’s techno-enthusiasm. These days popular culture (or at least things that end up in the cinema) tends to have a more pessimistic future outlook. Paleofuture also shows how some US children in 1976 imagined what 2076 would be like.
If my drawing was any good, I’d think about doing some postcards showing “solutions” for the National Science Challenges. Maybe some schools, and others, may like to give that a go to help inspire creativity in framing what the biggest issues facing NZ are and potential ways of overcoming them.
A few years ago DARPA’s little sibling IARPA (Intelligence Advanced Research Projects Activity) sought to improve the forecasting of future events through crowdsourcing. It established the Aggregative Contingent Estimation Program to “improve accuracy, precision and timeliness of forecasts for a broad range of events”. [Crowdsourcing refers to tapping into the insights of any and everyone with an interest to solve a problem, or tapping into their wallets to fund projects as Siouxsie has blogged about here at Sciblogs]
Following an earlier trial, this program has developed into the Global Crowd Intelligence website (run by Applied Research Associates Inc., which has the scent of the “Universal Exports” that a certain Mr Bond allegedly worked for). Here crowdsourcing is combined with “gamification” (see my earlier blog post on this). You get to select missions (be it predicting the likelihood of a future conflict, when the iPad mini will be launched, or whether Kim Kardashian’s divorce will be finalised before December).
An article on the BBC’s website advises that “Forecast topics are not related to actual intelligence operations.”
Should you choose to accept them, the more missions you take on the more experience you accrue, the better your reputation becomes, and the quicker you advance from being a humble analyst to something perhaps more suave and sophisticated.
The BBC report notes that earlier experiments indicated a 25% improvement in predictions compared to a non-crowdsourced control group. Not spectacular, but progress, which I’m sure IARPA will be seeking to improve upon. I’d be interested in their stunning failures as well as the successes. I’m not sure if the latest trial has a control. It would be good to pit crowdsourcing against data mining and experienced intelligence operatives for some scenarios, to see which is better and under what circumstances. A few sensible and knowledgeable heads may be more prescient than wishful or ill-informed thinking from a host of others.
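The usual argument for why pooling forecasts helps is that averaging cancels out individual noise. A toy illustration with made-up numbers (not IARPA’s data): by Jensen’s inequality, the Brier score (squared error) of the averaged forecast can never be worse than the average of the individual forecasters’ scores.

```python
def brier(prob, outcome):
    """Brier score for one binary event: squared error of the probability."""
    return (prob - outcome) ** 2

# Made-up forecasts from five analysts for one event that did occur (outcome = 1).
forecasts = [0.9, 0.4, 0.7, 0.2, 0.8]
outcome = 1

# Score the crowd's averaged forecast...
crowd_prob = sum(forecasts) / len(forecasts)
crowd_score = brier(crowd_prob, outcome)

# ...versus the average of the individual scores.
avg_individual = sum(brier(p, outcome) for p in forecasts) / len(forecasts)

# Jensen's inequality guarantees crowd_score <= avg_individual.
```

Of course, this only shows the average forecast can’t do worse; whether the crowd beats a few experienced analysts is exactly the empirical question raised above.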
Crowdsourcing predictions about defined events or scenarios is becoming common – see NZ’s iPredict. [The just announced proposal to trial a system to track the most vulnerable children isn’t crowdsourcing, but it has elements of it.] Success varies and, like fortune telling, will often be influenced by how precisely the scenario is worded. One problem with scenarios is that if you are just fixed on predicting their likelihood you may miss other things going on. I’m sure those smart folk at the CIA, MI6, and our own GCSB & SIS will have that covered though. Don’t you think?
Another issue is the signal to noise ratio you get when gathering lots of data. An earlier crowdsourcing challenge run by DARPA – to find a set of red balloons [PDF] scattered across America – illustrated how some strategies work better than others, and that a lot of effort is required to be able to verify or discount some of the incoming information. The latest project is designed to be able to detect rogue elements attempting to distort the outcomes.
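The winning MIT team in that balloon challenge used a recursive incentive scheme: the finder of a balloon received $2,000, whoever recruited the finder got $1,000, their recruiter $500, and so on, halving up the referral chain. A sketch of the payout arithmetic (the dollar amounts are from published accounts of the challenge; the code itself is my own illustration):

```python
def payouts(chain, finder_reward=2000.0):
    """Payouts along a referral chain, finder first: each person further up
    the chain gets half of what the person they recruited received."""
    amounts = []
    reward = finder_reward
    for person in chain:
        amounts.append((person, reward))
        reward /= 2
    return amounts

result = payouts(["finder", "recruiter", "recruiter's recruiter"])
total = sum(amount for _, amount in result)
```

Because the rewards form a geometric series, the total paid out per balloon stays below twice the finder’s reward ($4,000) no matter how long the chain gets, which is what made the incentive safe to offer.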
I expect IARPA will learn most about what types of scenarios are more or less successful at crowdsourced prediction, and they’ll get some useful insights into how to analyse information more effectively. Whether we could all be part of the GCSB in the future seems doubtful.
A quick follow up to my last post. Kathryn Ryan had an interesting interview with Hamish Gow this morning. He is the Director of the Centre for Agribusiness Policy & Strategy at Massey University. Hamish suggests that greater attention needs to be paid to the marketing of food products in other countries to help gain access to new markets and have more control over value. He is of the opinion that current marketing strategies have considerable room for improvement. Currently there is more focus on processing.
He also notes the changing capability requirements for farmers – advocating for greater international experience for young graduates before they return to help run corporate farms. There are increasing numbers of students enrolling in agri-economics, which is encouraging. Farming syndicates are becoming more common – see MyFarm.
Hamish also talks about his involvement in helping improve food safety and small scale farming in developing countries.
You can listen to the interview here - Nine To Noon interview with Hamish Gow