By Robert Hickson 07/09/2020

There have been several hyped technological developments in the last few weeks. Nothing unusual in that, but they provide useful examples of the need to adopt a critical mindset when considering the significance of developments and trends.

Some futurists seem to just scrape the headlines for content without questioning the reports. That just feeds an uncritical futures frenzy.

Others enjoy debunking them. But there is the risk of being too dismissive and overlooking the broader significance of developments, even if a particular event doesn’t actually show what it claims.


Too good to be true?

First up, reports of a cheap “game-changing” battery that lasts for hundreds of years, could power just about anything, and gets rid of nuclear waste. What’s not to like?

A company called NDB – Nano Diamond Battery – is promoting a self-charging battery that encases carbon-14 from nuclear power plant waste in synthetic diamond. The company is at the prototype stage and has been getting some positive media attention.

The obvious first question is “does it actually work?”

A YouTube video takes apart the claims the company makes. The video is overly long and repetitious, but notes that similar batteries – or, more accurately, energy harvesters – already exist. These produce only very small amounts (microwatts) of power. The video points out that the company doesn’t provide many technical details, and seems to have removed some earlier, more revealing, information that would help assess its claims.
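To see why microwatts matter, a quick back-of-envelope calculation helps. The figures below are illustrative assumptions (a 1-microwatt harvester, and roughly 10 watt-hours of daily smartphone use), not NDB’s actual specifications:

```python
# Rough sketch: how long would a 1-microwatt energy harvester take
# to supply the energy a smartphone uses in one day?
# Both figures are illustrative assumptions, not measured specs.

harvester_power_w = 1e-6   # ~1 microwatt, typical of existing betavoltaic devices
phone_energy_wh = 10.0     # assumed daily smartphone energy use, ~10 watt-hours

phone_energy_j = phone_energy_wh * 3600          # convert watt-hours to joules
seconds_needed = phone_energy_j / harvester_power_w
years_needed = seconds_needed / (3600 * 24 * 365)

print(f"Roughly {years_needed:,.0f} years to deliver one day of phone use")
```

On these assumptions the answer comes out at over a thousand years, which is why “could power just about anything” deserves a sceptical reading.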

This seems more like marketing than technological revolution at this stage. The nano diamond battery sits in the hype space. Something to keep an eye on but, like C-14, to treat with caution.

However, things like NDB can have value by alerting you to the emergence of different approaches to challenges. There is a lot of interesting and exciting energy-related technological development going on. You just have to be discerning about what is being promoted.


Neuroscience is harder than rocket science

Elon Musk’s recent demonstration of Neuralink’s “brain implant”, involving three little pigs, is closer to the “plausible tech” space.

Musk showed that the implant, with the help of machine learning, could predict an animal’s leg movements. Unlike his previous electric vehicle and rocket events, it relied more on tell than show. Though he did have a big robot on stage.

The critical question to ask here is what is the gap between the demonstration and medical (or entertainment) uses?

Mostly the demonstration seemed to be a combination of marketing and recruitment. At one point Musk promoted potential future medical applications, but then drifted towards such devices being used for playing video games, summoning your Tesla, or helping humans keep up with artificial intelligence.

“Neuroscience theatre,” according to Technology Review. “Solid engineering, but mediocre neuroscience” was the diagnosis of one neuroscientist.

Neuralink, according to some reports, appears to be in trouble, with several of the original neuroscientists leaving, and concerns over an internal culture characterised by hasty timelines and a “move fast and break things” attitude. Not something you’d want from a medical device company.

Brain-machine interfaces have been in development for over 50 years (think cochlear implants). Other companies and university labs have developed similar, or better, devices than Neuralink’s, so it pays to also consider some of the less flashy developments in academia.

Unlike NDB, Neuralink has shown proof of concept. But there is still a gap between the underlying science and the technology. We don’t know yet what the feasible and acceptable uses of brain implants will be, or when they’ll be more generally available. Or what the problems will be.

Still, it seems reasonable to consider that the future will have more devices that connect to our nervous systems. Now is a good time to think about the ethical and regulatory frameworks that we’ll need.


Top Gun AI

Recently, in DARPA’s AlphaDogfight trials, one of the artificial intelligence systems beat an F-16 pilot five-nil in a virtual dogfight.

Some commentators immediately jumped to a future of autonomous warfare, and maybe Skynet.

Others, usually aviators, focused on the artificiality of the trial, pointing out human skills that are also needed in aerial combat.

Jumping to an extreme state or highlighting the limitations are common responses to new developments. But the application may follow a different path.

This is reflected in analysis from a Navy pilot. His view is that it isn’t about replacing human pilots soon, but is a signal of increasing human-machine “symbiosis” in combat (and non-combat) situations. There is already a lot of automation in flying and warfare, so the use of more artificial intelligence in other rule-based operations is to be expected, if not welcomed.

The important question to consider in these cases is whether the technology will augment rather than replace people.

A related question for this particular case is whether we have good safeguards to cover both situations.

The US has policy that allows for the use of “lethal autonomous weapons systems”, though none appear operational yet. The policy requires that such systems “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” What “appropriate levels of human judgement” means allows for plenty of wriggle room.

It would be great to have some human-machine collaborations that stop us getting to the point of shooting at each other in the first place.


But will it really fly?

I wrote about flying cars in 2017, and several other companies have entered the air space since then. A Japanese company, SkyDrive, has joined the ranks of those who have test flown a flying car. They say they expect to have full autonomous flying by 2030. These things are often 10 years away.

However, flying cars have to contend not only with the laws of physics, but the laws of economics. Earlier this year Kitty Hawk’s Flyer was grounded because it “could not find a path to a viable business.” The company, though, isn’t giving up on flying vehicles. It is concentrating on a longer range flying taxi it calls Heaviside.

Like every other technology, flying vehicles sit within a bigger system. Futurists need to think about those other factors too, to make sense of the significance of individual developments.

A recent paper identified what it called “seven domains of interest” in relation to developing and regulating flying cars:

  • Safety

  • Training

  • Infrastructure

  • Environment

  • Logistics & Sustainability

  • Cybersecurity, and

  • Human Factors

Progress in all of these areas is still limited, so the operational feasibility of flying cars remains highly uncertain. Even for ground-based vehicles, semi-autonomous or “active driver assistance” capabilities, despite often years of development, “do not perform consistently, especially in real-world scenarios.” That will change but, as I wrote previously, may not be as quickly as software developers expect.

Flying cars will come, but whether as an important transport option or a minor one isn’t easy to tell yet.

Beyond the coolness and science-fiction-becomes-fact desire, advocates of flying cars emphasise speed, efficiency and independence. Much like early car and plane developers.

As PR man Rory Sutherland pointed out, perspective is everything. A more valued social objective would be to put more effort into making existing public transport options more appealing. For example, by improving their efficiency, comfort, safety, accessibility and affordability.

So, in addition to considering what else needs to happen, good futures thinkers need to look at what other things the technologies may impact or inhibit.


Tips for technology thinkers

So, when reading those technology headlines consider:

  1. Have they proven what they claim?

  2. What still needs to happen?

  3. What are more likely ways it will be used?

  4. What else may it affect?

Featured photo by Atul Vinayak on Unsplash