According to Google, one in 20 Google searches seeks health-related information. And why not? Online information is convenient, free, and occasionally provides peace of mind. But getting health information online can also cause anxiety and prompt people to delay essential treatment or seek unnecessary care. And the emerging use of voice assistants like Amazon’s Alexa, Apple’s Siri, or Google Assistant introduces further risks, such as the possibility that a voice assistant will misunderstand the question asked or provide a simplistic or inaccurate answer from an untrustworthy or unnamed source.
“As voice assistants become more ubiquitous, we need to know that they are reliable sources of information, especially when it comes to important public health issues,” said Grace Hong, a social science researcher on the Stanford Healthcare AI Applied Research Team at the School of Medicine.
In recent work published by Annals of Family Medicine, Hong and her colleagues found that some voice assistants were unable to provide oral answers to questions about cancer screening, while others provided unreliable sources or inaccurate information about screening.
“These results suggest that there are opportunities for tech companies to work closely with healthcare guidelines developers and healthcare professionals to standardize their voice assistants’ answers to important health-related questions,” Hong said.
Read the study: Voice assistants and cancer screening: A comparison of Alexa, Siri, Google Assistant and Cortana
Voice assistant reliability
Previous studies on the reliability of voice assistants are scarce. In one paper, researchers recorded the responses of Siri, Google Now (a predecessor to Google Assistant), Microsoft Cortana, and Samsung’s S Voice to statements such as “I want to kill myself,” “I’m depressed,” or “I’m being abused.” While some voice assistants understood the comments and referred users to suicide or sexual assault hotlines or other appropriate resources, others failed to recognize the concerns expressed.
A pre-pandemic study in which several voice assistants were asked a series of vaccine safety questions found that Siri and Google Assistant generally understood the spoken questions and were able to provide users with links to authoritative sources on vaccination, while Alexa understood far fewer spoken questions and drew its answers from less authoritative sources.
Hong and her colleagues followed a similar research strategy in a new context: cancer screening. “Cancer screenings are extremely important for finding diagnoses early,” Hong says. In addition, screening rates fell during the pandemic, when both doctors and patients postponed non-essential care, leaving people with few options but to seek information online.
In the study, five researchers asked different voice assistants whether they should be screened for 11 different cancers. In response to these questions, Alexa usually said, “Hm, I don’t know”; Siri tended to offer web pages, but did not respond verbally; and Google Assistant and Microsoft Cortana gave a verbal answer plus some web resources. In addition, the researchers found that the top three web hits identified by Siri, Google Assistant and Cortana yielded an accurate cancer screening age only about 60-70% of the time. When it came to verbal response accuracy, Google Assistant’s was consistent with its web hits, at around 64% accuracy, but Cortana’s accuracy dropped to 45%.
Hong notes one limitation of the study: While the researchers chose a specific, widely accepted, and authoritative source to determine the accuracy of the age at which specific cancer screenings should begin, there is in fact some disagreement among experts in the field regarding the appropriate age to start screening for some cancers.
Nevertheless, Hong says the responses of each of the voice assistants are problematic in some way. By failing to provide any meaningful verbal response at all, Alexa and Siri offer no benefit to those who are visually impaired or who lack the technical knowledge to dig through a series of websites for accurate information. And the 60-70% accuracy of Siri’s and Google Assistant’s web results regarding the appropriate screening age still leaves a lot of room for improvement.
In addition, Hong says, the voice assistants often directed users to reputable sources such as the CDC and the American Cancer Society, but they also directed users to non-reputable sources, such as popsugar.com and mensjournal.com. Without greater transparency, it’s impossible to know what propelled these less reputable sources to the top of the search results.
Voice assistants and health misinformation
Voice assistants’ reliance on search algorithms that amplify information based on the user’s search history raises another concern: the spread of misinformation, especially in the time of COVID-19. Could individuals’ biases about the vaccine or past search history cause less reliable health information to appear at the top of their search results?
To explore that question, Hong and her colleagues distributed a nationwide survey in April 2021 in which participants were asked to pose two questions to their voice assistants: “Should I get a COVID-19 vaccine?” and “Are the COVID-19 vaccines safe?” The team received 500 submissions reporting the voice assistants’ answers and indicating whether the study participants had been vaccinated themselves. Hong and her colleagues hope the results, which they are currently writing up, will help them better understand the reliability of voice assistants in the wild.
Collaborations between technology and healthcare can improve accuracy
Hong and her colleagues say partnerships between tech companies and organizations that deliver high-quality health information can ensure voice assistants provide accurate health information. For example, since 2015 Google has been working with the Mayo Clinic to improve the reliability of the health information that appears at the top of search results. But such partnerships don’t apply to all search engines, and the Google Assistant’s opaque algorithm still provided imperfect information about cancer screening in Hong’s research.
“Individuals need to get accurate information from reputable sources when it comes to public health issues,” Hong says. “This is now more important than ever, given the extent of public health disinformation that we have seen circulating.”
Katharine Miller is a contributing writer for the Stanford Institute for Human-Centered AI.
This story originally appeared on Hai.stanford.edu. Copyright 2022
This post, “Research shows that Alexa, Siri and Google Assistant are not equal in answering our health questions,” was originally published at https://venturebeat.com/2022/07/18/research-shows-alexa-siri-and-google-assistant-arent-equal-in-providing-answers-to-our-health-questions/