Is science becoming “AI-led”, as some venture capitalists suggest?
The short answer is no. A slightly longer response is that it's not the most important question to ask about the future of science.
A tool, not a solution
DeepMind's success in predicting quite accurate 3D protein structures in a competition made headlines last week. Rightly so, because it is an impressive achievement.
The company is understandably gung ho about the future scientific possibilities:
“The progress announced today gives us further confidence that AI will become one of humanity’s most useful tools in expanding the frontiers of scientific knowledge, and we’re looking forward to the many years of hard work and discovery ahead!”
It is easy, though, to get carried away with hype. Solving a protein’s structure is just one step (often a very important one) in understanding functions and interactions, and developing drugs.
The protein folding problem also hasn’t been “solved” by an algorithm. AlphaFold, and all other computational methods, predict. Protein scientists need to confirm structures experimentally.
As Vishal Gulati points out, just knowing more protein structures doesn’t lead to more drugs. You need to find structures that can be targeted by drugs.
But better predictions of protein structures will help with the study of protein-protein interactions and misfolded proteins, and inform the design of novel (or at least not yet identified) proteins.
Be cautious about AI hype
Some applications of AI haven't ended well. IBM's Watson healthcare system was quietly placed on sick leave in 2019 after over-promising and under-delivering.
Progress is often more gradual, or not progress at all.
You need to have a critical mindset. Technology Review suggested five questions to ask about AI news:
- What is the problem that needs to be solved?
- How is the company, or lab, approaching that problem with AI methods?
- How do they source the training data?
- Do they have processes for auditing the products and results?
- Should they be using AI methods to solve this problem?
DeepMind’s AlphaFold gives good answers to these questions.
Additional questions I would include:
- Do they explain their method(s) clearly for a more general audience?
- Do they discuss limitations and potential biases?
Overestimating short-term developments and underestimating long-term progress is as common with AI as with many other new technologies. So you can't assume that current successes and failures describe the future.
It’s not all hype
There are many examples of artificial intelligence methods being used in scientific research, and applications are increasing rapidly, often without much fanfare.
Last year The Royal Society produced a report highlighting the potential of AI in research. It provided examples of the roles it can play as an enabler of research and development in many fields.
AI methods are being used in a variety of ways in Covid-19 research. For example, to identify genes that interact with SARS-CoV-2, to find existing drugs that may be repurposed, to analyse the flood of research papers, or to process medical images.
A year ago, the use of AI would probably have featured in a paper's title. Now these methods are just part of the methodology section. That is a real indicator of progress.
Another article in Nature suggests that what is really going to help advance AI in research is better collaboration and transparency. Not sharing data sets and models creates barriers to progress rather than bridges. That’s not just an issue with AI. Many areas of science would benefit from more collaboration and sharing, as the pandemic has illustrated.
The most interesting question that The Royal Society posed in its report was:
“Is there a rigorous way to incorporate existing theory/knowledge into a machine learning algorithm, to constrain the outcomes to scientifically plausible solutions?”
This highlights that we often need to adapt new tools to suit the tasks, rather than adopt them without much thought. So it's not always a question of how AI will shape science, but of how research will, or should, shape AI applications.
Don’t consider AI in isolation
As I flagged at the start of this post, it’s unhelpful to focus too much on the role of artificial intelligence. That’s “singularity thinking”. Yes, AI is likely to become increasingly important in many areas of science.
In reality, many things are shaping the future of science, such as automation more generally. Arup produced a report in 2018 (lightly updated a couple of months ago) looking at the future of labs. It highlights not just automation, but also some of the social, political and financial factors influencing research over the coming decade.
An increasingly important aspect of science is how different knowledge systems are woven together. A Guide to Vision Mātauranga highlights the experiences of Māori researchers in the New Zealand science system, the challenges, and the opportunities for valuing Mātauranga Māori (Māori knowledge systems) alongside Western science. The Building cultural perspectives report from Superu describes how different streams of knowledge can both work alongside each other and work together.
As DeepMind's AlphaFold has shown, AI can be good at helping solve some types of puzzles. But science is about investigating mysteries too. These are the cases where there isn't a single answer or solution, and where the answer doesn't simply emerge from gathering all the facts.
If it is to continue to help us better understand the world, and do more good and less harm, science will need to become more socially intelligent – responsive to social expectations and needs – rather than just algorithmically more sophisticated.
Update 10 Dec: A paper just out in Nature describes natural language processing programs that analyse and summarise thousands of scientific papers. The next goal is to get programs to synthesise information from different papers.
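The systems the paper describes are far more sophisticated, but the basic idea behind extractive summarisation can be sketched with nothing more than word frequencies: score each sentence by how common its words are across the whole text, and keep the highest-scoring ones. This is a toy sketch of the general technique, not the method from the Nature paper; the function name and scoring scheme are my own.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score each sentence by the average corpus frequency of its
    words, then return the top n sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Word frequencies over the whole text act as a crude importance signal.
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Preserve the original reading order of the selected sentences.
    return [s for s in sentences if s in top]
```

Sentences packed with frequently occurring terms (the topic words of the text) rise to the top, while off-topic sentences score low. Real systems replace the frequency heuristic with learned language models, but the select-and-rank structure is the same.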