Could and should we be outsourcing our discovery to AI, and what does this mean for scientific discovery and creativity? How can society prepare for future AI and human collaboration?
AI and its applications are being hyped and discussed across a range of industries. These new technologies are helping researchers explore fundamental processes in chemistry and biology, from photosynthesis to the development of new molecules. As new technologies impact on scientific discovery and society more broadly, we will begin to see more interesting symbiotic relationships between AI and humans. In a session at Tech Foresight 2038, Professor Mimi Hii and Dr Mark Kennedy discussed whether we should outsource our discovery to AI and what it might mean for the future.
Mimi Hii: There is a saying: “Your conclusion is only as good as your data”. I will extend this to “artificial discovery is only as good as the data”.
Mark Kennedy: We can do a lot with data science to refine junky data and make it useful, but there are limits. By analogy to petroleum, refining your data can translate crude into something high octane, albeit at some cost in hassle and processing power. Even so, you can’t refine rubbish, only crude. So it’s still “garbage in, garbage out.”
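The distinction between “crude” and “rubbish” can be sketched in code. The following is a minimal, illustrative refining step (not from the speakers): malformed numeric readings can often be coerced into usable values, but entries carrying no signal at all simply have to be discarded. All names and data here are hypothetical.

```python
def refine(raw_readings):
    """Coerce messy string readings to floats; discard irrecoverable entries."""
    refined = []
    for value in raw_readings:
        if value is None:
            continue  # garbage: nothing to refine
        # "crude" fixes: strip whitespace, normalise decimal commas
        text = str(value).strip().replace(",", ".")
        try:
            refined.append(float(text))
        except ValueError:
            continue  # e.g. "n/a" — garbage out, no refining possible
    return refined

raw = ["3.14", " 2,71 ", None, "n/a", 42]
print(refine(raw))  # -> [3.14, 2.71, 42.0]
```

The hassle and processing cost Kennedy mentions live in the coercion rules; the `continue` branches are the hard limit where refining stops and discarding begins.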
MH: It depends on the question. If the question is black and white (if I do A, what happens to B, and what are the tangible consequences?), then the answer is yes: mathematically, this can be described as a problem with discrete variables, and the development of AI is (in part) about solving problems with continuous variables (the ‘what if’ questions). I can imagine an AI being able to spot inconsistencies or contradictions in human knowledge – for example, theory A conflicts with theory B and the conflict cannot be resolved by existing data – thus generating questions in that way. Questions that do not have a definitive answer (because they cannot be tested) may be difficult – for example, is Brexit a good idea? For these questions, the best a machine can do is provide statistical probabilities (which can be as reliable as the results of the last few elections!), so I think it will be left to a human to make the final judgement call, or to place further constraints and assumptions.
MK: I would say the skill of a scientist is less about asking the right question than it is about asking questions that matter. I was just having this conversation with a PhD prospect yesterday. I explained that most academics have to be quite focused in their work, so the job is to identify a question or topic that is focused enough that you can aspire to it in a way that genuinely advances knowledge – but with the proviso that we want to create knowledge that really matters. AI can help us assess the impact of ideas, but it’s not (yet) as useful for saying what will be important because that takes the savvy to participate in deciding what is important.
MH: It depends on what kind of ‘serendipity’ we are talking about. If these are ‘accidental’ discoveries made because the scientist/machine is doing something it is not supposed to (human/machine error), then I would expect these to be much reduced, assuming that machines are built not to make errors! If we are talking about serendipity associated with something that should not, in theory, be possible, but the scientist did it anyway (either because they do not believe the theory, or are just ignorant/unaware), then these will be pretty much eliminated. That said, much depends on how AI is developed – which is a challenge in itself.
MK: Data reduction techniques are already a stimulus for serendipity. When we use statistics and maths to reduce a complex high-dimensional data set to a smaller number of dimensions, the result is a whole set of models that suggest different ways the whole system is working. Most of these models are misleading epiphenomenal echoes of causal mechanisms, but some are clues to deeper insights.
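One common reduction technique of the kind Kennedy describes is principal component analysis. The sketch below (an illustration, not the speakers' own method) builds a synthetic 10-dimensional data set driven by two hidden factors, then uses a singular value decomposition to discover that almost all the variance lives in two components – the sort of hint at a simpler underlying mechanism that can spark serendipitous insight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, illustrative data: 200 samples in 10 dimensions,
# secretly generated from only 2 hidden factors plus a little noise.
factors = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
data = factors @ mixing + 0.05 * rng.normal(size=(200, 10))

# Centre the data, then take the SVD to find the principal directions.
centered = data - data.mean(axis=0)
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# Fraction of total variance explained by each component.
explained = singular_values**2 / np.sum(singular_values**2)
print(explained[:3])

# The low-dimensional "model" of the system: each sample as 2 numbers.
reduced = centered @ components[:2].T
```

Whether the two recovered dimensions correspond to real causal mechanisms or are just “epiphenomenal echoes” is exactly the judgement call the scientist still has to make.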
MH: There will always be a need for scientists so long as there is a need for human input, where ‘values’ are weighed differently, e.g. respect for privacy.
MK: Same as before, except we will have help with some of the more blue-collar tasks scientists do.
MH: If we learn something from history about disruptive technologies, AI is likely to take hold first in industries with low CAPEX – e.g. why invest in cars when you own a fleet of horses and have a workforce that knows how to direct horses (but not how to drive cars)? It will be the same story here – adoption of AI will be slowest in industries with the largest CAPEX investment. In my particular field of chemistry, this is likely to include the pharmaceutical manufacturing industry, which not only has significant capital investment around the world but also a large workforce trained in the ‘classical’ way (to be fair, they are justifiably conservative, as they can also least afford to make a mistake – it can literally mean life and death!).
So, my bets are on companies that now rely heavily on the ‘gig’ economy, i.e. mainly service industries that can change the nature of their workforce relatively easily without having to decommission a large manufacturing facility or fire lots of people. This inevitably includes services that rely heavily on demand management (in some cases they are already making an impact), e.g. cab and delivery services, certain financial services (e.g. accounting, investment) and – warning: this may be rather controversial – the provision of primary healthcare.
MK: I expect there will be significant impact in fields where adaptations of search and dimensionality reduction can turbo-charge design. That would be things as disparate as structural engineering and fashion, to name just a couple.