While artificial intelligence (AI) was used to detect and warn people about the recent pandemic, the same technology could spread misinformation if the right guardrails are not in place, says the founder of a Canadian company that was among the first to detect COVID-19.
As doctors, scientists and policymakers consider how best to use AI to track a possible pandemic, Dr. Kamran Khan, an infectious disease specialist and the founder of BlueDot, says the first step is “to make sure we’re not creating any potential harm in the process.”
Speaking to CTVNews.ca in June at the Collision tech conference in Toronto, where the potential dangers of artificial intelligence were among the most popular topics of conversation, Khan said, “this is a problem that’s not just a problem for a government alone,” but for the whole community.
Large language models (LLMs), essentially algorithms that draw on massive sets of data to predict and generate text, can be subject to “hallucination,” or making things up, Khan warned.
“We need to create … some guardrails around that because, as you can imagine, LLMs can amplify misinformation and that doesn’t help us,” he said.
BLUEDOT DETECTS COVID-19
Toronto-based BlueDot was noted for being among the first to detect the signs of what would later be called SARS-CoV-2, the coronavirus that causes the disease COVID-19.
The company achieved this by using artificial intelligence to crawl tens of thousands of articles each day in dozens of languages, leading its system to detect an article about a “pneumonia of unknown cause” on the morning of December 31, 2019.
BlueDot sent an alert to its customers the same day, nearly a week before the U.S. Centers for Disease Control and Prevention and the World Health Organization issued their own warnings.
In June, Harvard Public Health reported that after BlueDot sent its warning to customers, the company’s customer base grew by 475 percent.
USING AI TO GET AHEAD OF NEW DISEASES
Much has been written about the benefits of artificial intelligence, namely the speed with which it can help identify a new disease and issue early warnings.
Khan said he founded BlueDot because he felt there was a need to be able to respond to acute infectious diseases quickly and accurately in ways that weren’t “necessarily possible in the academic arena.”
“We should take advantage of the latest in technology and innovation to get ahead of this problem, which is not just one for Canada, but it’s actually one for the rest of the world,” he said.
But trying to do that is “rooted in trust, and there’s been a lot of erosion of trust in the last several years,” Khan added.
The Organization for Economic Co-operation and Development said in April 2020 that while AI is not a “silver bullet,” policymakers should encourage the sharing of medical, molecular and scientific data to help AI researchers build tools for the medical community, while also ensuring that AI systems are “trustworthy.”
“Instead of doing manual data analysis, or starting the data labeling, or spending some time consolidating data coming from different resources, we have our AI modules that can process the data and generate some insightful information for decision makers in the context,” Zahra Shakeri, an assistant professor of health informatics and information visualization at the University of Toronto, told CTVNews.ca in an interview Sunday.
‘INTEGRATED MIX OF EXPERTS’ REQUIRED
Shakeri, who is also a member of the U of T’s Institute for Pandemics and director of the school’s HIVE Lab, added that while AI could help improve health care preparedness and resilience, “it can’t be the only tool we can use to come to the final conclusion.”
Generative AI models, she said, work by trying to discover relationships between words, not necessarily what is factual.
And while certain texts can be flagged for AI as misinformation, not everything will be detected.
One solution could be to get experts from different fields to help determine what is true, or to make AI models better able to detect misinformation. An increased public awareness of the potential harms of the information produced by generative AI could also help.
But Shakeri says stronger leadership and governance are needed, with researchers, policymakers and stakeholders from different sectors coming together to solve the problem, much as they did with the rise of nuclear power.
“It might sound very straightforward to talk about these concepts, but when it comes to the implementation of the solutions, we really need more expertise, more support,” she said.
Khan also says we need an “integrated mix of experts who understand the problem.”
“Like myself as a doctor, I’m an epidemiologist. We have veterinarians, we have other people in public health science, and then we have to marry that with the data scientists, machine learning experts and the engineers who are going to build this whole infrastructure,” he said.
It’s a matter of “not being caught flat-footed” and preparing now, he added.
“And I don’t think we need to be in panic mode, but we need to use every day well because the clock is ticking.”