Large language models (LLMs), essentially algorithms that take in massive sets of data to predict and generate text, can be subject to “hallucination,” or making things up, Khan warned.
“We need to create … some guardrails around it, because as you can imagine, LLMs could amplify misinformation and that doesn’t help us,” he said.
As doctors, scientists and policymakers consider how best to use AI to track a possible pandemic, Dr. Kamran Khan, an infectious disease specialist and founder of BlueDot, says the first step is “to make sure we’re not creating any potential harm in the process.”
Khan said he founded BlueDot because he felt there was a need to be able to respond to infectious disease emergencies quickly and precisely, in ways that were not “necessarily possible in the academic arena.”
Shakeri, who is also a member of U of T’s Institute for Pandemics and director of the school’s HIVE Lab, added that while AI could help improve the readiness and resiliency of the health-care system, “it cannot be the only tool that we can use to come up with the final conclusion.”
But Shakeri says stronger leadership and governance are needed, with researchers, policymakers and stakeholders from different sectors coming together to address the issue, much as happened with the advent of nuclear power.
“We’ve got veterinarians, we have other people in public health sciences and then we’ve got to marry that with the data scientists, machine learning experts and the engineers who will build this whole infrastructure,” he said.