Google wants to make your cell phone a “doctor in your pocket” that relies on the company’s artificial intelligence.

But first, the tech giant will need to convince skeptical lawmakers and the Biden administration that its health AI isn’t a risk to patient privacy and safety — or a threat to its smaller competitors.

Google has assembled a potent lobbying team to influence the rules governing AI just as regulators start writing them. But members of Congress say they’re concerned that the company is using its advanced AI in health care before government has had a chance to draw up guardrails. Competitors worry Google is moving to corner the market. Both fear what could happen to patient privacy given Google's history of vacuuming personal data.

“There is great promise in many of these tools to save more lives,” said Sen. Mark Warner (D-Va.), adding that “they also have the potential to do exactly the opposite — harm patients and their data, reinforce human bias, and add burdens to providers as they navigate a clinical and legal landscape without clear norms."

Google’s AI scours medical records, research papers, imaging, and clinical guidelines to help doctors diagnose diseases and evaluate treatment options. The tech giant’s already selling these tools to hospitals. It’s inked a deal with the Mayo Clinic, for one — and it foresees much more, including direct-to-consumer applications.

Warner recently sent a letter to Google CEO Sundar Pichai, saying he was troubled that hospitals are using the company’s AI without sufficient vetting.

Mark Isakowitz, Google’s North American head of policy and government affairs and formerly Ohio GOP Sen. Rob Portman’s chief of staff, responded that the company’s technology isn’t trained on personal health information and that it’s deployed only in a limited capacity. Isakowitz said health systems control how it’s used and monitor its behavior.

President Joe Biden has tasked his agencies with figuring out how to ensure AI in health care serves patients as well as flesh-and-blood physicians do, if not better, but rules could be months or years away. Though the Food and Drug Administration approves medical devices that use AI, it has no rules governing advanced software-based tools.

Google isn’t waiting for the agency to write them. It’s now piloting its AI as an assistant to Mayo Clinic researchers. HCA Healthcare, which operates more than 2,000 hospitals and other health care facilities across the U.S. and U.K., is using it to write clinical notes for its physicians and nurses.

Bayer Pharmaceuticals is also trying it out within clinical trials, helping to craft communications. And electronic medical record company Meditech is using the technology to summarize patient history.

As lawmakers like Warner and regulatory agencies like the Food and Drug Administration ponder what to do, Google’s hiring former government health care regulators well positioned to advocate for the company.

The “doctor in your pocket” quote is from Karen DeSalvo, who oversaw health information technology regulation for President Barack Obama and who is now Google’s chief health officer.

Startups are concerned Google and other tech giants will push for regulations that squeeze smaller rivals.

The smaller firms want to head off reporting requirements Biden directed his agencies to create in an executive order on AI regulation in October. They think such requirements would be easy for titans like Google or Microsoft but difficult for less well-financed rivals.

“They're going to basically create a setup where we will be dependent just on them to move forward,” said Punit Soni, CEO of clinical note-taking company Suki AI, referring to big tech players in health care including Google.

A lesson learned

Google learned an important lesson about working with Washington in 2019.

That’s when its first effort to apply its prowess with “Big Data” to patient records — a deal with the St. Louis-based hospital chain Ascension to analyze tens of millions of health records — sparked a Department of Health and Human Services inquiry over privacy concerns.

This time, Google appears to be getting ahead of problems with regulators before they arise.

Google’s parent company has hired several former Food and Drug Administration officials, including Bakul Patel, the agency’s former chief digital health officer, who directed the FDA’s early thinking on AI. The agency, headed by a former Google parent company employee, Robert Califf, figures to lead the Biden administration’s rulemaking.

Google is also a member of the Coalition for Health AI, a group of health systems and technology companies working to shape AI standards in coordination with federal health agencies. The group published its inaugural “blueprint” for AI in health care this year. It also helped inform the National Academy of Medicine’s Health Care AI Code of Conduct.

In the wake of President Biden’s executive order on artificial intelligence, policymakers are trying to get up to speed on the technology — Biden has asked for reports and recommendations and set deadlines — and Google wants to help.

In mid-November, the company published a policy agenda for AI, calling for pro-innovation laws as well as infrastructure to support the advancement and adoption of AI.

The company sees itself as a partner to the government. Patel, now Google’s senior director of global digital health strategy and regulatory, said the company spends a lot of time explaining to officials how the technology works so they can set standards.

“We can’t tell them what to do, but we can educate them,” he said.



The new AI

The artificial intelligence Google is selling sits in a regulatory gray zone.

The FDA authorizes AI-enabled medical devices, but its reviews were designed for less advanced technology.

No one from the government is ensuring newer, software-based tools do what they promise.

Last year, the agency said it would seek to review some of those products, but it hasn’t yet done so. In the meantime, the agency has published an action plan for regulating AI and guidance around what algorithms fall within its scope and what information companies should include in applications for marketing clearance.

The Biden administration shared, via its executive order, stipulations for how agencies should monitor and collect data from advanced AI models that may have national security or public health impacts. Still, the executive order is largely a request for agencies to conduct studies to better understand how the technology should be safeguarded.

That is a preliminary step to eventually writing regulations that could be months or years away.

Meanwhile, Congress is slowly getting up to speed. Both the Senate Health, Education, Labor and Pensions Committee and the House Energy and Commerce Committee have held hearings exploring AI and how it might be regulated within health care.

Members asked questions about AI’s impact on everything from personal data collection to its use in developing bioweapons. But so far, no health care-specific legislation has emerged.

Senators led by John Thune (R-S.D.) and Amy Klobuchar (D-Minn.) have proposed the Artificial Intelligence Research, Innovation, and Accountability Act of 2023, but it is not specific to health care and has not advanced since its introduction last month.

Meanwhile, the FDA expects big growth in AI-enabled medical devices this year, up more than 30 percent from 2022.

Google is steadily launching artificial intelligence projects and products. The company is now licensing algorithms for detecting breast cancer, lung cancer and gene mutations. It also continues to test AI as a tool for diagnosing diabetic retinopathy, as well as for spotting anomalies in ultrasound images.

The rollout of Google’s Med-PaLM 2, a bot that can answer questions and pass medical licensing exams, builds on existing relationships with health care companies. Bayer partners with Google Cloud on AI for clinical trials and to jump-start early drug discovery. Before Google began piloting Med-PaLM 2 at the Mayo Clinic, the two had collaborated on AI that helps plan radiological treatment for head and neck cancers. HCA Healthcare has used Google Cloud since 2021.

Google’s vision for health care isn’t limited to health care companies. It has ambitions to play a role in consumer health. It already has Fitbit, a wearable that collects vital signs and other fitness metrics. And it has tested other consumer tools like DermAssist, which aims to diagnose skin conditions and is classified as a low-risk medical device in Europe.

DeSalvo sees smartphones as a key tool in the future of medicine.

“So much will happen just from a device that's honestly pretty inexpensive and in the hands of a huge chunk of the planet,” said DeSalvo. “I'm pretty keen to think about that as a platform for people, for consumers.”

Privacy and market power

The main concern from regulators, legal experts and startups is that AI will infiltrate health care before legislators can wrap their heads around it.

Mason Marks, senior fellow at Harvard Law’s Petrie-Flom Center, said it’s not just that there are no laws regulating the new AI. Old laws designed to protect patients don’t really hold up under the weight of this next-generation technology, he said.

Marks is worried that HIPAA, a decades-old law designed to protect patient privacy, falls apart in the face of large language models because HIPAA allows health systems and their vendors to use de-identified patient data.

“Once you remove certain personal identifiers, you can do whatever you want with it,” he said. Researchers point out that de-identified data can be re-identified if juxtaposed with additional data using AI.

Even if it isn’t, there are ethical considerations, Marks said. He points to an incident in which Crisis Text Line, a text-based teen mental health hotline, sold de-identified data for marketing purposes without user consent. That wasn’t illegal, but he questions whether it was ethical, especially when companies use such data to improve AI systems from which they profit.

Getting access to that data may also have antitrust implications, he said. “Companies like Google who invest heavily in AI systems for hospitals and health care systems can gain an enormous competitive advantage or monopoly position with all the data that they're generating,” said Marks.

Soni at Suki AI is also concerned about issues of competition and privacy, but from another perspective.

He’s worried regulators are drafting rules that would create compliance burdens without necessarily protecting against privacy violations or bias, while making it difficult for small innovators to compete with deep-pocketed firms like Google.

“I feel like we have doubled down on reporting, and not actually explained compliance,” he said. “I would rather we actually spend some time trying to figure out what is the AI version of HIPAA.”
