medicine Archives - AI News

Google AMIE: AI doctor learns to ‘see’ medical images
Fri, 02 May 2025

The post Google AMIE: AI doctor learns to ‘see’ medical images appeared first on AI News.

Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer).

Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your ECG printout. That’s what Google is aiming for.

We already knew AMIE showed promise in text-based medical chats, thanks to earlier work published in Nature. But let’s face it, real medicine isn’t just about words.

Doctors rely heavily on what they can see – skin conditions, readings from machines, lab reports. As the Google team rightly points out, even simple instant messaging platforms “allow static multimodal information (e.g., images and documents) to enrich discussions.”

Text-only AI was missing a huge piece of the puzzle. The big question, as the researchers put it, was “whether LLMs can conduct diagnostic clinical conversations that incorporate this more complex type of information.”

Google teaches AMIE to look and reason

Google’s engineers have beefed up AMIE using their Gemini 2.0 Flash model as the brains of the operation. They’ve combined this with what they call a “state-aware reasoning framework.” In plain English, this means the AI doesn’t just follow a script; it adapts its conversation based on what it’s learned so far and what it still needs to figure out.

It’s close to how a human clinician works: gathering clues, forming ideas about what might be wrong, and then asking for more specific information – including visual evidence – to narrow things down.

“This enables AMIE to request relevant multimodal artifacts when needed, interpret their findings accurately, integrate this information seamlessly into the ongoing dialogue, and use it to refine diagnoses,” Google explains.

Think of the conversation flowing through stages: first gathering the patient’s history, then moving towards diagnosis and management suggestions, and finally follow-up. The AI constantly assesses its own understanding, asking for that skin photo or lab result if it senses a gap in its knowledge.
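This kind of state-aware controller can be sketched as a small phase machine. Everything below is illustrative: the phase names, the required-artifact set, and the toy `next_action` policy are assumptions for the sketch, not AMIE's actual implementation.

```python
from enum import Enum, auto

class Phase(Enum):
    HISTORY = auto()
    DIAGNOSIS = auto()
    FOLLOW_UP = auto()

class DialogueState:
    """Toy controller: track known facts, request missing artifacts, advance phases."""

    def __init__(self, required):
        self.phase = Phase.HISTORY
        self.facts = {}
        self.required = set(required)

    def record(self, key, value):
        self.facts[key] = value

    def next_action(self):
        gaps = self.required - self.facts.keys()
        if gaps:
            # A gap in knowledge triggers a request, e.g. for a skin photo.
            return ("request", sorted(gaps)[0])
        if self.phase is Phase.HISTORY:
            self.phase = Phase.DIAGNOSIS
            return ("propose", "differential_diagnosis")
        if self.phase is Phase.DIAGNOSIS:
            self.phase = Phase.FOLLOW_UP
            return ("plan", "management_and_follow_up")
        return ("done", None)

state = DialogueState(required={"symptom_history", "skin_photo"})
state.record("symptom_history", "itchy rash, three days")
print(state.next_action())  # ('request', 'skin_photo')
state.record("skin_photo", "<image bytes>")
print(state.next_action())  # ('propose', 'differential_diagnosis')
```

The point of the pattern is that the conversation policy is driven by what is still unknown, rather than by a fixed script.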

To get this right without endless trial-and-error on real people, Google built a detailed simulation lab.

Google created lifelike patient cases, pulling realistic medical images and data from sources like the PTB-XL ECG database and the SCIN dermatology image set, adding plausible backstories using Gemini. Then, they let AMIE ‘chat’ with simulated patients within this setup and automatically check how well it performed on things like diagnostic accuracy and avoiding errors (or ‘hallucinations’).

The virtual OSCE: Google puts AMIE through its paces

The real test came in a setup designed to mirror how medical students are assessed: the Objective Structured Clinical Examination (OSCE).

Google ran a remote study involving 105 different medical scenarios. Real actors, trained to portray patients consistently, interacted either with the new multimodal AMIE or with actual human primary care physicians (PCPs). These chats happened through an interface where the ‘patient’ could upload images, just like you might in a modern messaging app.

Afterwards, specialist doctors (in dermatology, cardiology, and internal medicine) and the patient actors themselves reviewed the conversations.

The human doctors scored everything from how well history was taken, the accuracy of the diagnosis, the quality of the suggested management plan, right down to communication skills and empathy—and, of course, how well the AI interpreted the visual information.

Surprising results from the simulated clinic

Here’s where it gets really interesting. In this head-to-head comparison within the controlled study environment, Google found AMIE didn’t just hold its own—it often came out ahead.

The AI was rated as being better than the human PCPs at interpreting the multimodal data shared during the chats. It also scored higher on diagnostic accuracy, producing differential diagnosis lists (the ranked list of possible conditions) that specialists deemed more accurate and complete based on the case details.

Specialist doctors reviewing the transcripts tended to rate AMIE’s performance higher across most areas. They particularly noted “the quality of image interpretation and reasoning,” the thoroughness of its diagnostic workup, the soundness of its management plans, and its ability to flag when a situation needed urgent attention.

Perhaps one of the most surprising findings came from the patient actors: they often found the AI to be more empathetic and trustworthy than the human doctors in these text-based interactions.

And, on a critical safety note, the study found no statistically significant difference between how often AMIE made errors based on the images (hallucinated findings) compared to the human physicians.

Technology never stands still, so Google also ran some early tests swapping out the Gemini 2.0 Flash model for the newer Gemini 2.5 Flash.

Using their simulation framework, the results hinted at further gains, particularly in getting the diagnosis right (Top-3 Accuracy) and suggesting appropriate management plans.

While promising, the team is quick to add a dose of realism: these are just automated results, and “rigorous assessment through expert physician review is essential to confirm these performance benefits.”
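For context, “Top-3 accuracy” simply measures how often the correct diagnosis appears among the model's three highest-ranked candidates. A generic helper shows the idea; the example diagnoses are invented and this is not Google's evaluation code.

```python
def top_k_accuracy(ranked_lists, truths, k=3):
    """Fraction of cases whose true diagnosis appears in the top-k ranked candidates."""
    hits = sum(truth in ranked[:k] for ranked, truth in zip(ranked_lists, truths))
    return hits / len(truths)

# Each inner list is a differential diagnosis, ranked most- to least-likely.
preds = [
    ["eczema", "psoriasis", "tinea"],
    ["angina", "GERD", "costochondritis"],
    ["migraine", "tension headache", "sinusitis"],
]
truths = ["psoriasis", "costochondritis", "cluster headache"]
print(round(top_k_accuracy(preds, truths, k=3), 3))  # 0.667 (2 of 3 cases hit)
print(round(top_k_accuracy(preds, truths, k=1), 3))  # 0.0 (no top-ranked hit)
```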

Important reality checks

Google is commendably upfront about the limitations here. “This study explores a research-only system in an OSCE-style evaluation using patient actors, which substantially under-represents the complexity… of real-world care,” they state clearly. 

Simulated scenarios, however well-designed, aren’t the same as dealing with the unique complexities of real patients in a busy clinic. They also stress that the chat interface doesn’t capture the richness of a real video or in-person consultation.

So, what’s the next step? Moving carefully towards the real world. Google is already partnering with Beth Israel Deaconess Medical Center for a research study to see how AMIE performs in actual clinical settings with patient consent.

The researchers also acknowledge the need to eventually move beyond text and static images towards handling real-time video and audio—the kind of interaction common in telehealth today.

Giving AI the ability to ‘see’ and interpret the kind of visual evidence doctors use every day offers a glimpse of how AI might one day assist clinicians and patients. However, the path from these promising findings to a safe and reliable tool for everyday healthcare is still a long one that requires careful navigation.

(Photo by Alexander Sinn)

See also: Are AI chatbots really changing the world of work?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

MHRA pilots ‘AI Airlock’ to accelerate healthcare adoption
Wed, 04 Dec 2024

The Medicines and Healthcare products Regulatory Agency (MHRA) has announced the selection of five healthcare technologies for its ‘AI Airlock’ scheme.

AI Airlock aims to refine the process of regulating AI-driven medical devices and help fast-track their safe introduction to the UK’s National Health Service (NHS) and patients in need.

The technologies chosen for this scheme include solutions targeting cancer and chronic respiratory diseases, as well as advancements in radiology diagnostics. These AI systems promise to revolutionise the accuracy and efficiency of healthcare, potentially driving better diagnostic tools and patient care.

The AI Airlock, as described by the MHRA, is a “sandbox” environment—an experimental framework designed to help manufacturers determine how best to collect real-world evidence to support the regulatory approval of their devices.

Unlike traditional medical devices, AI models continue to evolve through learning, making the establishment of safety and efficacy evidence more complex. The Airlock enables this exploration within a monitored virtual setting, giving developers insight into the practical challenges of regulation while supporting the NHS’s broader adoption of transformative AI technologies.

Safely enabling AI healthcare innovation  

Laura Squire, the lead figure in MedTech regulatory reform and Chief Officer at the MHRA, said: “New AI medical devices have the potential to increase the accuracy of healthcare decisions, save time, and improve efficiency—leading to better outcomes for the NHS and patients across all healthcare settings. 

“But we need to be confident that AI-powered medical devices introduced into the NHS are safe, stay safe, and perform as intended through their lifetime of use.”

Squire emphasised that the AI Airlock pilot allows collaboration “in partnership with technology specialists, developers and the NHS,” facilitating the exploration of best practices and accelerating safe patient access to innovative solutions.

Government representatives have praised the initiative for its forward-thinking framework.

Karin Smyth, Minister of State for Health, commented: “As part of our 10-Year Health Plan, we’re shifting NHS care from analogue to digital, and this project will help bring the most promising technology to patients.

“AI has the power to revolutionise care by supporting doctors to diagnose diseases, automating time-consuming admin tasks, and reducing hospital admissions by predicting future ill health.”

Science Minister Lord Vallance lauded the AI Airlock pilot as “a great example of government working with businesses to enable them to turn ideas into products that improve lives.” He added, “This shows how good regulation can facilitate emerging technologies for the benefit of the UK and our economy.”

Selected technologies  

To be selected, AI-powered medical devices had to meet stringent criteria covering innovation, patient benefit, and readiness for regulatory challenges. The five technologies chosen for this inaugural pilot offer vital insights into healthcare’s future:

1. Lenus Stratify

Patients with Chronic Obstructive Pulmonary Disease (COPD) are among those who stand to benefit significantly from AI innovation. Lenus Stratify, developed by Lenus Health, analyses patient data to predict severe lung disease outcomes, reducing unscheduled hospital admissions. The system empowers care providers to adopt earlier interventions, affording patients an improved quality of life while alleviating NHS resource strain.  

2. Philips Radiology Reporting Enhancer

Philips has integrated AI into existing radiology workflows to enhance the efficiency and accuracy of critical radiology reports. This system uses AI to prepare the “Impression” section of reports, summarising essential diagnostic information for healthcare providers. By automating this process, Philips aims to minimise workload struggles, human errors, and miscommunication, creating a more seamless diagnostic experience.  

3. Federated AI Monitoring Service (FAMOS)

One recurring AI challenge is the concept of “drift,” when changing real-world conditions impair system performance over time. Newton’s Tree has developed FAMOS to monitor AI models in real time, flagging degradation and enabling rapid corrections. Hospitals, regulators, and software developers can use this tool to ensure algorithms remain high-performing, adapting to evolving circumstances while prioritising patient safety.  

4. OncoFlow Personalised Cancer Management

Targeting the pressing healthcare challenge of reducing waiting times for cancer treatment, OncoFlow speeds up clinical workflows through its intelligent care pathway platform. Initially applied to breast cancer protocols, the system later aims to expand across other oncology domains. Quicker access to tailored therapies should improve survival rates amidst mounting NHS pressures.

5. SmartGuideline

Developed to simplify complex clinical decision-making processes, SmartGuideline uses a large language model trained on official NICE medical guidelines. This technology allows clinicians to ask routine questions and receive verified, precise answers, eliminating the ambiguity associated with current AI language models. By integrating this tool, patients benefit from more accurate treatments grounded in up-to-date medical knowledge.
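The “drift” that FAMOS (tool 3 above) is built to catch can be illustrated with a minimal rolling-accuracy check. This is a deliberate simplification: a production monitoring service tracks many more signals (input distributions, calibration, latency) than raw accuracy, and nothing here reflects FAMOS's actual design.

```python
from collections import deque

class DriftMonitor:
    """Flag drift when rolling accuracy falls well below a model's validated baseline."""

    def __init__(self, baseline_accuracy, window=50, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # rolling window of right/wrong

    def record(self, prediction, truth):
        self.outcomes.append(prediction == truth)

    def drifting(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent evidence to judge
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

monitor = DriftMonitor(baseline_accuracy=0.90, window=50)
for i in range(50):
    monitor.record(prediction=1, truth=1 if i % 5 else 0)  # ~80% correct lately
print(monitor.drifting())  # True: 0.80 < 0.90 - 0.05
```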
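SmartGuideline's promise of “verified, precise answers” implies grounding the model in retrieved guideline text rather than letting it answer freely. One plausible shape for that retrieval step is sketched below with a toy keyword matcher; the guideline snippets are invented and this is not SmartGuideline's actual design.

```python
import re

def tokens(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, passages):
    """Return the guideline passage sharing the most words with the question."""
    q = tokens(question)
    return max(passages, key=lambda p: len(q & tokens(p)))

guidelines = [
    "Hypertension: offer lifestyle advice before starting drug treatment.",
    "Asthma: offer a short-acting beta agonist as reliever therapy.",
]
question = "What should be offered first for hypertension?"
passage = retrieve(question, guidelines)
# A production system would hand `passage` plus the question to the LLM and
# instruct it to answer only from the retrieved guideline text.
print(passage)  # the hypertension guideline
```

Grounding the answer in a specific retrieved passage is what makes the response auditable against the source guideline.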

Broader implications  

The influence of the AI Airlock extends beyond its current applications. The MHRA expects pilot findings, due in 2025, to inform future medical device regulations and create a clearer path for manufacturers developing AI-enabled technologies. 

The evidence derived will contribute to shaping post-Brexit UKCA marking processes, helping manufacturers achieve compliance with higher levels of transparency. By improving regulatory frameworks, the UK could position itself as a global hub for med-tech innovation while ensuring faster access to life-saving tools.

The urgency of these developments was underscored earlier this year in Lord Darzi’s review of health and care. The report outlined the “critical state” of the NHS, offering AI interventions as a promising pathway to sustainability. The work on AI Airlock by the MHRA addresses one of the report’s major recommendations for enabling regulatory solutions and “unlocking the AI revolution” for healthcare advancements.

While being selected into the AI Airlock pilot does not indicate regulatory approval, the technologies chosen represent a potential leap forward in applying AI to some of healthcare’s most pressing challenges. The coming years will test the potential of these solutions under regulatory scrutiny.

If successful, the initiative from the MHRA could redefine how pioneering technologies like AI are adopted in healthcare, balancing the need for speed, safety, and efficiency. With the NHS under immense pressure from growing demand, AI’s ability to augment clinicians, predict illnesses, and streamline workflows may well be the game-changer the system urgently needs.

(Photo by National Cancer Institute)

See also: AI’s role in helping to prevent skin cancer through behaviour change

AlphaProteo: Google DeepMind unveils protein design system
Fri, 06 Sep 2024

Google DeepMind has unveiled an AI system called AlphaProteo that can design novel proteins that successfully bind to target molecules, potentially revolutionising drug design and disease research.

AlphaProteo can generate new protein binders for diverse target proteins, including VEGF-A, which is associated with cancer and diabetes complications. Notably, this is the first time an AI tool has successfully designed a protein binder for VEGF-A.

The system’s performance is particularly impressive, achieving higher experimental success rates and binding affinities that are up to 300 times better than existing methods across seven target proteins tested:

[Chart: Google DeepMind’s AlphaProteo success rates across the target proteins (Credit: Google DeepMind)]

Trained on vast amounts of protein data from the Protein Data Bank and over 100 million predicted structures from AlphaFold, AlphaProteo has learned the intricacies of molecular binding. Given the structure of a target molecule and preferred binding locations, the system generates a candidate protein designed to bind at those specific sites.

To validate AlphaProteo’s capabilities, the team designed binders for a diverse range of target proteins, including viral proteins involved in infection and proteins associated with cancer, inflammation, and autoimmune diseases. The results were promising, with high binding success rates and best-in-class binding strengths observed across the board.

For instance, when targeting the viral protein BHRF1, 88% of AlphaProteo’s candidate molecules bound successfully in wet lab testing. On average, AlphaProteo binders exhibited 10 times stronger binding than the best existing design methods across the targets tested.

The system’s performance suggests it could significantly reduce the time required for initial experiments involving protein binders across a wide range of applications. However, the team acknowledges that AlphaProteo has limitations: it was unable to design successful binders against TNFα, a protein associated with autoimmune diseases like rheumatoid arthritis.

To ensure responsible development, Google DeepMind is collaborating with external experts to inform their phased approach to sharing this work and contributing to community efforts in developing best practices—including the NTI’s new AI Bio Forum.

As the technology evolves, the team plans to work with the scientific community to leverage AlphaProteo on impactful biology problems and understand its limitations. They are also exploring drug design applications at Isomorphic Labs.

While AlphaProteo represents a significant step forward in protein design, achieving strong binding is typically just the first step in designing proteins for practical applications. There remain many bioengineering challenges to overcome in the research and development process.

Nevertheless, Google DeepMind’s advancement holds tremendous potential for accelerating progress across a broad spectrum of research, including drug development, cell and tissue imaging, disease understanding and diagnosis, and even crop resistance to pests.

The full AlphaProteo whitepaper is available as a PDF.

See also: Paige and Microsoft unveil next-gen AI models for cancer diagnosis

AI tool finds cancer signs missed by doctors
Thu, 21 Mar 2024

An AI tool has proven capable of detecting signs of cancer that were overlooked by human radiologists.

The AI tool, called Mia, was piloted alongside NHS clinicians in the UK and analysed the mammograms of over 10,000 women. 

Most of the participants were cancer-free, but the AI successfully flagged all of those with symptoms of breast cancer—as well as an additional 11 cases that the doctors failed to identify. Of the 10,889 women who participated in the trial, only 81 chose not to have their scans reviewed by the AI system.

The AI tool was trained on a dataset of over 6,000 previous breast cancer cases to learn the subtle patterns and imaging biomarkers associated with malignant tumours. When evaluated on the new cases, it correctly predicted the presence of cancer with 81.6 percent accuracy and correctly ruled it out 72.9 percent of the time.
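The two figures quoted correspond to the standard screening metrics of sensitivity (cancers correctly flagged) and specificity (healthy scans correctly cleared). The counts below are illustrative, chosen only to reproduce the reported percentages; the trial's actual confusion matrix is not given in the article.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: share of true cancers flagged. Specificity: share of healthy scans cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only, scaled to match the reported rates.
sens, spec = sensitivity_specificity(tp=816, fn=184, tn=729, fp=271)
print(f"sensitivity={sens:.1%}  specificity={spec:.1%}")  # sensitivity=81.6%  specificity=72.9%
```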

Breast cancer is the most common cancer in women worldwide, with two million new cases diagnosed annually. While survival rates have improved with earlier detection and better treatments, many patients still experience severe side effects like lymphoedema after surgery and radiotherapy.

Researchers are now developing the AI system further to predict a patient’s risk of such side effects up to three years after treatment. This could allow doctors to personalise care with alternative treatments or additional supportive measures for high-risk patients.

The research team plans to enrol 780 breast cancer patients in a clinical trial called Pre-Act to prospectively validate the AI risk prediction model over a two-year follow-up period. The long-term goal is an AI system that can comprehensively evaluate a patient’s prognosis and treatment needs.

(Photo by Angiola Harry)

See also: NVIDIA unveils Blackwell architecture to power next GenAI wave 

BSI publishes guidance to boost trust in AI for healthcare
Wed, 02 Aug 2023

In a bid to foster greater digital trust in AI products used for medical diagnoses and treatment, the British Standards Institution (BSI) has released high-level guidance.

The guidance, titled ‘Validation framework for the use of AI within healthcare – Specification (BS 30440)’, aims to bolster confidence among clinicians, healthcare professionals, and providers regarding the safe, effective, and ethical development of AI tools.

As the global debate on the appropriate use of AI continues, this auditable standard targets products primarily designed for healthcare interventions, diagnoses, and health condition management.

Jeanne Greathouse, Global Healthcare Director at BSI, said:

“This standard is highly relevant to organisations in the healthcare sector and those interacting with it. As AI becomes the norm, it has the potential to be transformative for healthcare.

“With the onset of more innovative AI tools, and AI algorithms’ ability to digest and accurately analyse copious amounts of data, clinicians and health providers can efficiently make informed diagnostic decisions to intervene, prevent, and treat diseases, ultimately improving patients’ quality of life.”

According to forecasts, the global healthcare AI market is expected to surpass $187.95 billion by 2030. However, healthcare providers and clinicians may face challenges in assessing AI products due to time and budget constraints or a lack of in-house capabilities. 

The BS 30440 specification seeks to aid decision-making processes by providing criteria for evaluating healthcare AI products, including clinical benefit, performance standards, safe integration into clinical environments, ethical considerations, and equitable social outcomes.

The standard covers a wide range of healthcare AI products, including regulated medical devices like software used for medical purposes, imaging software, patient-facing products like AI-powered smartphone chatbots, and home monitoring devices. It applies to products and technologies utilising AI elements – including machine learning – and is relevant to both AI system suppliers and product auditors.

The development of this specification involved collaboration among a panel of experts, including clinicians, software engineers, AI specialists, ethicists, and healthcare leaders. The guidance draws from existing literature and best practices, translating complex functionality assessments into an auditable framework for AI system conformity.

Healthcare organisations will be able to mandate BS 30440 certification in their procurement processes to ensure adherence to these recognised standards.

Scott Steedman, Director General for Standards at BSI, commented:

“The new guidance can help build digital trust in cutting-edge tools that represent enormous potential benefit to patients, and the professionals diagnosing and treating them.

“AI has the potential to shape our future in a positive way and we all need confidence in the tools being developed, especially in healthcare.

“This specification, which is auditable, can help guide everyone from doctors to healthcare leaders and patients to choose AI products that are safe, effective, and ethically produced.”

The specification addresses the need for an agreed validation framework for AI development and clinical evaluation in healthcare. It builds on a framework initially piloted at Guy’s and St Thomas’ Cancer Centre and later revised through discussions with stakeholders involved in AI and machine learning.

With the publication of this guidance, BSI seeks to instil confidence in AI products used in healthcare and empower doctors, healthcare leaders, and patients to make informed and ethical choices for improved patient care and overall societal benefit.

As AI continues to shape the future of healthcare, adherence to recognised standards will play a vital role in ensuring the safe and effective integration of AI technologies in medical practice.

(Photo by Owen Beard on Unsplash)

See also: AI regulation: A pro-innovation approach – EU vs UK

LabGenius uses Graphcore’s IPUs to speed up drug discovery
Thu, 21 Apr 2022

AI-driven scientific research firm LabGenius is harnessing the power of Graphcore’s IPUs (Intelligence Processing Units) to speed up its drug discovery efforts.

LabGenius is currently focused on discovering new treatments for cancer and inflammatory diseases. The firm combines AI, lab automation, and synthetic biology for its potentially life-saving work.

Until now, the company has been using traditional GPUs for its workloads. LabGenius reports that switching to Graphcore’s IPUs in cloud instances from Cirrascale Cloud Services cut its model training time from one month to around two weeks.

“Previously we used GPUs and it took us about a month to have a functioning model of all the proteins that are out there,” said Dr Katya Putintseva, a Machine Learning Advisor to LabGenius.

“With Graphcore, we reduced the turnaround time to about two weeks, so we can experiment much more rapidly and we can see the results quicker.”

Specifically, LabGenius is using IPUs from Bristol, UK-based Graphcore to train a BERT Transformer model on a large data set of known proteins to predict masked amino acids. This, the company says, enables the model to effectively learn the basic biophysics of proteins.
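Masked-token pre-training of the kind described here hides a fraction of the residues in each sequence and scores the model on recovering them. LabGenius’s exact pipeline isn’t public, so as a rough sketch (the sequence, mask fraction, and mask token below are all illustrative), the data-corruption step of such an objective might look like:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
MASK = "?"  # stand-in for BERT's [MASK] token

def mask_sequence(seq, mask_frac=0.15, rng=None):
    """Hide a random subset of residues, BERT-style.

    Returns the corrupted sequence plus the (position, residue) pairs
    the model would be trained to recover.
    """
    rng = rng or random.Random(0)
    n_mask = max(1, int(len(seq) * mask_frac))
    positions = rng.sample(range(len(seq)), n_mask)
    corrupted, targets = list(seq), []
    for pos in positions:
        targets.append((pos, seq[pos]))
        corrupted[pos] = MASK
    return "".join(corrupted), targets

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example sequence
assert set(seq) <= set(AMINO_ACIDS)
corrupted, targets = mask_sequence(seq)
# The model sees `corrupted` and is scored on recovering `targets`.
```

Predicting the hidden residues forces the network to internalise which amino acids plausibly fit a given context, which is the sense in which the model “learns the basic biophysics of proteins”.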

“[The system] is looking across different features we could change about the molecule — from point mutations of simpler constructs to the overall composition and topology of multi-module proteins,” explained Tom Ashworth, Head of Technology at LabGenius.

“It’s making suggestions about what to design next… to learn about a change in the input and how that maps to a change in the output.”

One in two people now develop cancer in their lifetime. Current treatments often cause much suffering themselves and, while survival rates for most forms are increasing, only around 50 percent survive for ten years or more.

AI will help to find new cancer treatments that cause less suffering and greatly increase the odds of long-term survivability. However, while discovering new cancer treatments is the current focus of LabGenius, the company notes how the principles can be applied more widely to find new treatments for other horrible diseases that plague mankind.

“Graphcore has changed what we’re able to do, accelerating our model training time from weeks to days,” adds Ashworth.

“For our data scientists, that’s really transformative. They can move much more at the speed they think.”

(Photo by National Cancer Institute on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The NHS can now access ‘pioneering’ AI stroke diagnosis software https://www.artificialintelligence-news.com/news/nhs-access-pioneering-ai-stroke-diagnosis-software/ Mon, 07 Mar 2022 12:14:25 +0000

The post The NHS can now access ‘pioneering’ AI stroke diagnosis software appeared first on AI News.

NHS Shared Business Services (NHS SBS) has announced a procurement framework for “pioneering” AI software to diagnose strokes.

Breakthroughs in medical AIs are helping to reduce patient suffering, the likelihood and/or severity of long-term complications, and even save lives across a number of ailments.

Some of the benefits from medical AI breakthroughs are achieved through improved understanding leading to better treatment, while others are due to reducing the amount of time healthcare professionals have to spend on repetitive tasks.

Over 100,000 people in the UK suffer a stroke each year, resulting in more than 32,000 deaths. NHS SBS set out to establish how AI can help tackle one of the UK’s leading causes of death and disability.

Adam Nickerson, NHS SBS Senior Category Manager – Digital & IT, said:

“This use of AI is a prime example of how new technologies have the potential to transform NHS patient care, speeding up diagnosis and treatment times by ensuring that expert clinical resource is targeted where it has the greatest impact for the patient. 

By identifying areas in which technology can be used to help speed up patient pathways, clinicians have more time for providing personalised care and patient waiting lists – exacerbated by the pandemic – are reduced.

We have been pleased to work alongside some of the country’s leading tech minds, expert stroke clinicians, and policy leaders to develop this unique framework, which will go a long way to enabling more rapid uptake of Stroke AI software across the NHS.”

While AI can be a powerful tool in medicine, it can be difficult to ensure solutions are evidence-based and cost-effective. That’s where the new ‘Provision of AI Software in Neuroscience for Stroke Decision Making Support’ procurement framework comes in.

The framework was developed with contributions from across NHS England and NHS Improvement (NHSEI), clinical leads from the 20 Integrated Stroke Delivery Networks across England, the Academic Health Science Network, and with further input from NHSX and the Care Quality Commission.

Darrien Bold, National Digital and AI Lead for Stroke at NHSEI, commented:

“We are already seeing the impact AI decision-support software is having on stroke pathways across the country, and the introduction of this framework will drive forward further progress in delivering best-practice care where rapid assessment and treatment are of the essence.

Over the past 18 months, the health and care system has been compelled to look to new technologies to continue providing frontline care, and the stroke community has embraced new ways of working in times of unprecedented pressure.

This framework agreement will be of great benefit as we implement the NOSIP – driving better outcomes, better patient experience and better patient safety, using new technology quickly, safely and innovatively.”

Time is very much of the essence when it comes to strokes. The framework will enable the procurement of AI solutions that analyse images to detect ischaemic or haemorrhagic strokes and provide real-time interpretations to augment the review, diagnosis, and delivery of time-dependent treatments.

While manual interpretation of the imagery can take up to 30 minutes, AI is able to do so within seconds.

“Rapid brain imaging and its interpretation is arguably one of the most important steps in the care of patients with stroke-like symptoms,” commented Dr David Hargroves, Getting It Right First Time (GIRFT) Clinical Lead for Stroke and National Specialty Advisor for Stroke Medicine at NHSEI.

“Incorporating AI decision support software is likely to improve access to disability-saving interventions to thousands of patients. This framework agreement supplies a valuable platform to support providers of hyperacute stroke care in the purchase of AI software.”

As part of the NHS Long Term Plan, the health service aims to achieve a tenfold increase in the proportion of stroke victims who receive a thrombectomy by 2022—estimated to enable around 1,600 more patients per year to live independently.

AI will be key to achieving the NHS’ long-term goals across care for stroke patients and more. We look forward to seeing all the ways health services around the world put AI to good use over the coming years to improve patient outcomes.

(Photo by Ian Taylor on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Google’s latest AI could prevent deaths caused by incorrect prescriptions https://www.artificialintelligence-news.com/news/google-latest-ai-prevent-deaths-incorrect-prescriptions/ Fri, 03 Apr 2020 13:08:23 +0000

The post Google’s latest AI could prevent deaths caused by incorrect prescriptions appeared first on AI News.

A new AI system developed by researchers from Google and the University of California could prevent deaths caused by incorrect prescriptions.

While quite rare, prescriptions that are incorrect – or react badly to a patient’s existing medications – can result in hospitalisation or even death.

In a blog post today, Alvin Rajkomar MD, Research Scientist and Eyal Oren PhD, Product Manager, Google AI, set out their work on using AI for medical predictions.

The AI is able to predict which conditions a patient is being treated for based on certain parameters. “For example, if a doctor prescribed ceftriaxone and doxycycline for a patient with an elevated temperature, fever and cough, the model could identify these as signals that the patient was being treated for pneumonia,” the researchers wrote.

In the future, an AI could step in if a medication that’s being prescribed looks incorrect for a patient with a specific condition in their current situation.

“While no doctor, nurse, or pharmacist wants to make a mistake that harms a patient, research shows that 2% of hospitalized patients experience serious preventable medication-related incidents that can be life-threatening, cause permanent harm, or result in death,” the researchers wrote.

“However, determining which medications are appropriate for any given patient at any given time is complex — doctors and pharmacists train for years before acquiring the skill.”

The AI was trained on an anonymised data set featuring around three million records of medications issued from over 100,000 hospitalisations.

In their paper, the researchers wrote:

“Patient records vary significantly in length and density of data points (e.g., vital sign measurements in an intensive care unit vs outpatient clinic), so we formulated three deep learning neural network model architectures that take advantage of such data in different ways: one based on recurrent neural networks (long short-term memory (LSTM)), one on an attention-based TANN, and one on a neural network with boosted time-based decision stumps.

We trained each architecture (three different ones) on each task (four tasks) and multiple time points (e.g., before admission, at admission, 24 h after admission and at discharge), but the results of each architecture were combined using ensembling.”

The full paper is published in the science journal Nature.
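The quoted passage says the three architectures’ results “were combined using ensembling” but doesn’t spell out the scheme. A common minimal approach (purely an assumption here) is to average each model’s predicted probability; the three lambdas below stand in for the trained LSTM, attention, and boosted-stumps models, and their outputs are invented:

```python
def ensemble_predict(models, record):
    """Average the risk probabilities from several models.

    Simple unweighted averaging; the paper does not specify its
    exact combination scheme.
    """
    probs = [model(record) for model in models]
    return sum(probs) / len(probs)

# Stand-ins for the three trained architectures described in the paper.
lstm = lambda record: 0.80
attention = lambda record: 0.70
stumps = lambda record: 0.60

risk = ensemble_predict([lstm, attention, stumps], record={"temp_c": 39.1})
# risk is roughly 0.70: the ensemble smooths out disagreement between models
```

Averaging is attractive here because the three architectures consume the irregular patient timelines differently, so their errors tend not to coincide.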

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Babylon Health lashes out at doctor who raised AI chatbot safety concerns https://www.artificialintelligence-news.com/news/babylon-health-doctor-ai-chatbot-safety-concerns/ Wed, 26 Feb 2020 17:24:08 +0000

The post Babylon Health lashes out at doctor who raised AI chatbot safety concerns appeared first on AI News.

Controversial healthcare app maker Babylon Health has criticised the doctor who first raised concerns about the safety of their AI chatbot.

Babylon Health’s chatbot is available in the company’s GP at Hand app, a digital healthcare solution championed by health secretary Matt Hancock that was also integrated into Samsung Health since last year.

The chatbot aims to reduce the burden on GPs and A&E departments by automating the triage process to determine whether someone can treat themselves at home, should book an online or in-person GP appointment, or go straight to a hospital.

A Twitter user under the pseudonym of Dr Murphy first reached out to us back in 2018 alleging that Babylon Health’s chatbot was giving unsafe advice. Dr Murphy recently unveiled himself as Dr David Watkins and went public with his findings at The Royal Society of Medicine’s “Recent developments in AI and digital health 2020” event in addition to appearing on a BBC Newsnight report.

Over the past couple of years, Dr Watkins has provided many examples of the chatbot giving dangerous advice. In one example, an obese 48-year-old heavy smoker who presented with chest pains was advised to book a consultation “in the next few hours”. Anyone with any common sense would have told him to dial an emergency number straight away.

This particular issue has since been rectified but Dr Watkins has highlighted many further examples over the years which show, very clearly, there are serious safety issues.

In a press release (PDF) on Monday, Babylon Health calls Dr Watkins a “troll” who has “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.

According to the release, Dr Watkins has conducted 2,400 tests of the chatbot in a bid to discredit the service while raising “fewer than 100 test results which he considered concerning”.

Babylon Health claims that Dr Watkins found genuine errors in just 20 cases, while the others were “misrepresentations” or “mistakes,” according to Babylon’s own “panel of senior clinicians,” who remain unnamed.

Speaking to TechCrunch, Dr Watkins called Babylon’s claims “utterly nonsense” and questions where the startup got its figures from as “there are certainly not 2,400 completed triage assessments”.

Dr Watkins estimates he has conducted between 800 and 900 full triages, some of which were repeat tests to see whether Babylon Health had fixed the issues he previously highlighted.

The doctor acknowledges Babylon Health’s chatbot has improved, estimating that it now gives concerning advice in around one in three instances. In 2018, when Dr Watkins first reached out to us and other outlets, he says this rate was closer to “one in one”.

While it’s one account versus the other, the evidence shows that Babylon Health’s chatbot has issued dangerous advice on a number of occasions. Dr Watkins has dedicated many hours to highlighting these issues to Babylon Health in order to improve patient safety.

Rather than welcome his efforts and work with Dr Watkins to improve their service, it seems Babylon Health has decided to go on the offensive and “try and discredit someone raising patient safety concerns”.

In their press release, Babylon accuses Watkins of posting “over 6,000” misleading attacks but without giving details of where. Dr Watkins primarily uses Twitter to post his findings. His account, as of writing, has tweeted a total of 3,925 times and not just about Babylon’s service.

This isn’t the first time Babylon Health’s figures have come into question. Back in June 2018, Babylon Health held an event where it boasted its AI beat trainee GPs at the MRCGP exam used for testing their ability to diagnose medical problems. The average pass mark is 72 percent. “How did Babylon Health do?” said Dr Mobasher Butt at the event, a director at Babylon Health. “It got 82 percent.”

Given the number of dangerous suggestions to trivial ailments the chatbot has given, especially at the time, it’s hard to imagine the claim that it beats trainee GPs as being correct. Intriguingly, the video of the event has since been deleted from Babylon Health’s YouTube account and the company removed all links to coverage of it from the “Babylon in the news” part of its website.

When asked why it deleted the content, Babylon Health said in a statement: “As a fast-paced and dynamic health-tech company, Babylon is constantly refreshing the website with new information about our products and services. As such, older content is often removed to make way for the new.”

AI solutions like those offered by Babylon Health will help to reduce the demand on health services and ensure people have access to the right information and care whenever and wherever they need it. However, patient safety must come first.

Mistakes are less forgivable in healthcare due to the risk of potentially fatal or life-changing consequences. The usual “move fast and break things” ethos in tech can’t apply here.

There’s a general acceptance that rarely is a new technology going to be without its problems, but people want to see that best efforts are being made to limit and address those issues. Instead of welcoming those pointing out issues with their service before it leads to a serious incident, it seems Babylon Health would rather blame everyone else for its faults.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

MIT researchers use AI to discover a welcome new antibiotic https://www.artificialintelligence-news.com/news/mit-researchers-use-ai-to-discover-a-welcome-new-antibiotic/ Fri, 21 Feb 2020 15:49:32 +0000

The post MIT researchers use AI to discover a welcome new antibiotic appeared first on AI News.

A team of MIT researchers have used AI to discover a welcome new antibiotic to help in the fight against increasing resistance.

Using a machine learning algorithm, the MIT researchers were able to discover a new antibiotic compound which did not develop any resistance during a 30-day treatment period on mice.

The algorithm was trained using around 2,500 molecules – including about 1,700 FDA-approved drugs and a set of 800 natural products – to seek out chemical features that make molecules effective at killing bacteria. 

After the model was trained, the researchers tested it on a library of about 6,000 compounds known as the Broad Institute’s Drug Repurposing Hub.
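The train-then-screen loop described above (learn chemical features of known antibacterial molecules, then rank an unseen library) can be sketched as follows. The real MIT model is a deep neural network operating on molecular graphs; this toy version substitutes a nearest-centroid scorer over made-up four-bit “fingerprints”, and the compound names are hypothetical:

```python
def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def score(fingerprint, active_centroid, inactive_centroid):
    """Higher when the compound sits closer to known actives."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(fingerprint, c))
    return dist(inactive_centroid) - dist(active_centroid)

# Training set: (toy fingerprint, kills bacteria?) -- entirely invented
train = [
    ([1, 1, 0, 0], True), ([1, 0, 1, 0], True),
    ([0, 0, 1, 1], False), ([0, 1, 0, 1], False),
]
actives = centroid([f for f, active in train if active])
inactives = centroid([f for f, active in train if not active])

# Unseen screening library, analogous to the Drug Repurposing Hub
library = {"cmpd_A": [1, 1, 1, 0], "cmpd_B": [0, 0, 0, 1]}
ranked = sorted(library,
                key=lambda name: score(library[name], actives, inactives),
                reverse=True)
# Top-ranked compounds would go forward to lab validation
```

The point of the workflow is that scoring is cheap once training is done, which is what lets a trained model triage millions of compounds in days rather than years of wet-lab screening.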

“We wanted to develop a platform that would allow us to harness the power of artificial intelligence to usher in a new age of antibiotic drug discovery,” explains James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.

“Our approach revealed this amazing molecule which is arguably one of the more powerful antibiotics that has been discovered.”

Antibiotic resistance is terrifying. Researchers have already discovered bacteria that are immune to current antibiotics, and we’re very much in danger of illnesses that have become simple to treat becoming deadly once more.

Data from the Centers for Disease Control and Prevention (CDC) already indicates that antibiotic-resistant bacteria and antimicrobial-resistant fungi cause more than 2.8 million infections and 35,000 deaths a year in the United States alone.

“We’re facing a growing crisis around antibiotic resistance, and this situation is being generated by both an increasing number of pathogens becoming resistant to existing antibiotics, and an anaemic pipeline in the biotech and pharmaceutical industries for new antibiotics,” Collins says.

The recent coronavirus outbreak leaves many patients with pneumonia. With antibiotics, pneumonia is not often fatal nowadays unless a patient has a substantially weakened immune system. The current death toll from coronavirus would be much higher if antibiotic resistance were to set healthcare back to the 1930s.

MIT’s researchers claim their AI is able to check more than 100 million chemical compounds in a matter of days to pick out potential antibiotics that kill bacteria. This rapid checking reduces the time it takes to discover new lifesaving treatments and begins to swing the odds back in our favour.

The newly discovered molecule is called halicin – after HAL, the AI in the film 2001: A Space Odyssey – and has been found to be effective against E. coli. The team is now hoping to develop halicin for human use (a separate machine learning model has already indicated that it should have low toxicity to humans, so early signs are positive).

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Transhumanism: AI could figure out how to make humans live forever https://www.artificialintelligence-news.com/news/transhumanism-ai-how-humans-live-forever/ Thu, 28 Feb 2019 17:38:13 +0000

The post Transhumanism: AI could figure out how to make humans live forever appeared first on AI News.

During a panel discussion on transhumanism at this year’s MWC, one expert predicted AI could figure out how to make a human live forever.

‘If You’re Under 50, You’ll Live Forever: Hello Transhumanism’ was the name of the session and featured Alex Rodriguez Vitello of the World Economic Forum and Stephen Dunne of Telefonica-owned innovation facility Alpha.

Transhumanism is the idea that humans can evolve beyond their current physical and mental limitations using technological advancements. In some ways, this is already happening.

Medical advancements have extended our lifespans and AI is helping to make further breakthroughs in areas such as cancer treatment.

Vitello notes how Dr Aubrey de Grey from the SENS Research Foundation has been able to extend the lifespan of mice threefold. (Fun fact: de Grey was an AI researcher before switching fields to biology.)

“That’s about 300 years in human years. And these mice are super happy, they’re like having sex and everything is great,” jokes Vitello.

Prosthetics, meanwhile, are enabling people to overcome their disabilities. Today, you can even be turned into a human compass with an implant that vibrates every time you face north.

CRISPR gene editing will one day help to eliminate disorders prior to birth. “You can eliminate cancer, muscular dystrophy, multiple sclerosis… all these things,” comments Vitello.

Artificial limbs will go beyond matching the abilities of natural body parts and provide things such as enhanced vision or superhuman strength beyond what even Arnie achieved in his prime.

These are exciting possibilities, but some transhumanist concepts are many years from becoming available. Even when they are, most enhancements will remain unaffordable for quite some time.

Cryonics, the idea of being frozen to be revived years in the future, is one such example of something that’s possible today but unaffordable to most. One of the biggest companies in the field is Alcor, if you’re willing to part with $200,000.

Asked whether he agreed with the panel’s title, Dunne responded that a better question is whether the first person who will live forever is alive today. On that basis, he believes they might be.

“If you’re [Amazon CEO] Jeff Bezos, maybe,” commented Dunne. “If you put all your resources towards that.”

One concept is that we’ll be able to live forever virtually through storing a digital copy of our brains. American inventor and futurist Ray Kurzweil wants his brain to be downloaded and uploaded elsewhere when he dies.

“What’s more, he [Ray] has all these recordings of his father and he wants to take all of this information and put it on a computer brain to see if he can reproduce the essence of his father,” says Vitello.

This kind of thing requires the ability to emulate the brain. While huge strides in computing power are being made, we’re some way off from that level of processing power.

“I met Ray recently and he thinks of it as a computer scientist, that if we have enough computing power we can simulate the brain,” comments Dunne. “I think we’re so far off understanding how the brain works this is just wrong at the moment.”

Even what consciousness is still eludes researchers. Only last year was a whole new type of neuron discovered, which goes to show how little we know about the brain at this point.

“The company I used to work for [Neurolectrics] has a project on measuring consciousness, but just the level of it,” Dunne continues. “We just don’t know how this stuff works at a very fundamental level.”

When asked how far along ‘the loading bar’ we are towards brain emulation, Dunne said he’d put it at somewhere around one percent. However, things such as stimulating the brain to improve memory retention or boost certain abilities are, he believes, a lot closer.

That isn’t without its own challenges. Dunne explains how it’s almost impossible for a sighted person to learn braille, as not enough brain power is dedicated to the task.

“If you enhance one feature, you kind of have to take that processing power from somewhere else,” he says. “To learn braille you need to be blind as otherwise you’re using your visual cortex and there’s not enough computing power for the task.”

Dunne then goes on to note how AI could help to speed up breakthroughs that are difficult for us to comprehend today: “If we do invent artificial general intelligence, it might figure out all we need to know about the brain to do this within the next 30 years.”

AI is keeping the dream alive, but it seems unlikely that many – if any – under 50 will be living forever. At least we can look forward to some transhumanist enhancements in the coming years.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

DeepMind is using AI for protein folding breakthroughs https://www.artificialintelligence-news.com/news/deepmind-ai-protein-folding-breakthroughs/ Mon, 03 Dec 2018 14:01:26 +0000

The post DeepMind is using AI for protein folding breakthroughs appeared first on AI News.

Protein folding could help diagnose and treat some of the worst diseases, and DeepMind believes AI can speed up that process.

Conditions such as Alzheimer’s, Parkinson’s, Huntington’s, and cystic fibrosis are suspected to be caused by misfolded proteins. Being able to predict a protein’s shape enables a greater understanding of its role within the body.

Previous techniques used for determining the shapes of proteins – such as cryo-electron microscopy, nuclear magnetic resonance, and X-ray crystallography – take years and cost tens of thousands of dollars per structure.

AI, the researchers hope, will enable target shapes to be modelled from scratch without requiring previously solved proteins to be used as templates.

DeepMind calls their AI-powered folding efforts AlphaFold.

AlphaFold uses two different methods to construct predictions of protein structures:

1. The first method repeatedly replaces pieces of a protein structure with new protein fragments, building on a technique commonly used in structural biology. A neural network invents the new fragments.
2. The second method, ‘gradient descent’, is a mathematical technique applied to entire protein chains rather than pieces, making small, incremental improvements.
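The second method can be illustrated with a toy example: start from a full guess of the chain’s torsion angles and repeatedly make small downhill adjustments to all of them at once. The quadratic ‘energy’ below is invented purely for illustration (AlphaFold’s actual scoring functions are learned from data):

```python
import math

TARGET = math.radians(-60)  # toy preferred angle, chosen arbitrarily

def energy(angles):
    """Invented score: lower when every angle sits near TARGET."""
    return sum((a - TARGET) ** 2 for a in angles)

def descend(angles, lr=0.1, steps=200):
    """Gradient descent over the whole chain at once."""
    for _ in range(steps):
        # analytic gradient of the toy energy: d/da of (a - TARGET)^2
        angles = [a - lr * 2 * (a - TARGET) for a in angles]
    return angles

start = [0.0, 1.0, -2.0]  # initial guess for three torsion angles
final = descend(start)
# energy(final) ends up far below energy(start): the whole chain improves
# through small incremental steps, never piecewise fragment swaps
```

The contrast with the first method is that nothing is ever swapped out wholesale; every update is a tiny simultaneous adjustment to the entire chain.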

Image Credit: DeepMind

DeepMind says its work is a successful demonstration of how AI can reduce the complexity of tasks such as protein folding; speeding up the diagnosis and treatment of some of the world’s most debilitating conditions.

In a contest organised by the Protein Structure Prediction Centre, AlphaFold was judged the winner among a total of 98 algorithms by predicting the shapes of 25 out of 43 proteins. The runner-up, in comparison, could only predict three of the 43 proteins.

“For us, this is a really key moment,” said Demis Hassabis, co-founder and CEO of DeepMind. “This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem.”

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
