Google AMIE: AI doctor learns to ‘see’ medical images (2 May 2025)

Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer).

Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your ECG printout. That’s what Google is aiming for.

We already knew AMIE showed promise in text-based medical chats, thanks to earlier work published in Nature. But let’s face it, real medicine isn’t just about words.

Doctors rely heavily on what they can see – skin conditions, readings from machines, lab reports. As the Google team rightly points out, even simple instant messaging platforms “allow static multimodal information (e.g., images and documents) to enrich discussions.”

Text-only AI was missing a huge piece of the puzzle. The big question, as the researchers put it, was "whether LLMs can conduct diagnostic clinical conversations that incorporate this more complex type of information."

Google teaches AMIE to look and reason

Google’s engineers have beefed up AMIE using their Gemini 2.0 Flash model as the brains of the operation. They’ve combined this with what they call a “state-aware reasoning framework.” In plain English, this means the AI doesn’t just follow a script; it adapts its conversation based on what it’s learned so far and what it still needs to figure out.

It’s close to how a human clinician works: gathering clues, forming ideas about what might be wrong, and then asking for more specific information – including visual evidence – to narrow things down.

“This enables AMIE to request relevant multimodal artifacts when needed, interpret their findings accurately, integrate this information seamlessly into the ongoing dialogue, and use it to refine diagnoses,” Google explains.

Think of the conversation flowing through stages: first gathering the patient’s history, then moving towards diagnosis and management suggestions, and finally follow-up. The AI constantly assesses its own understanding, asking for that skin photo or lab result if it senses a gap in its knowledge.
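
The implementation details haven't been published here, but conceptually that kind of staged, state-aware loop can be sketched as below. This is a minimal illustration only: the phase names, the `llm_respond` and `patient_reply` helpers, and the gap-detection logic are assumptions made for the sketch, not AMIE's actual design.

```python
# Illustrative sketch of a state-aware diagnostic dialogue loop
# (assumed structure, not Google's implementation).

PHASES = ["history_taking", "diagnosis_and_management", "follow_up"]

def run_consultation(llm_respond, patient_reply, max_turns=20):
    """Drive a staged consultation, requesting artefacts whenever gaps remain.

    llm_respond(state, reply) -> dict with keys 'message', 'missing_evidence',
        'differential' (ranked list of possible conditions), 'advance_phase'
    patient_reply(message) -> the patient's answer (text, or text plus an image)
    """
    state = {"phase": PHASES[0], "evidence": [], "differential": []}
    message = "What brings you in today?"

    for _ in range(max_turns):
        reply = patient_reply(message)
        state["evidence"].append(reply)

        step = llm_respond(state, reply)
        state["differential"] = step["differential"]

        if step["missing_evidence"]:
            # e.g. ask for a photo of the rash or an ECG printout
            message = "Could you share: " + ", ".join(step["missing_evidence"]) + "?"
            continue

        if step["advance_phase"]:
            if state["phase"] == PHASES[-1]:
                return state  # follow-up complete: ranked differential is in state
            state["phase"] = PHASES[PHASES.index(state["phase"]) + 1]

        message = step["message"]

    return state
```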

To get this right without endless trial-and-error on real people, Google built a detailed simulation lab.

Google created lifelike patient cases, pulling realistic medical images and data from sources like the PTB-XL ECG database and the SCIN dermatology image set, adding plausible backstories using Gemini. Then, they let AMIE ‘chat’ with simulated patients within this setup and automatically check how well it performed on things like diagnostic accuracy and avoiding errors (or ‘hallucinations’).

The virtual OSCE: Google puts AMIE through its paces

The real test came in a setup designed to mirror how medical students are assessed: the Objective Structured Clinical Examination (OSCE).

Google ran a remote study involving 105 different medical scenarios. Real actors, trained to portray patients consistently, interacted either with the new multimodal AMIE or with actual human primary care physicians (PCPs). These chats happened through an interface where the ‘patient’ could upload images, just like you might in a modern messaging app.

Afterwards, specialist doctors (in dermatology, cardiology, and internal medicine) and the patient actors themselves reviewed the conversations.

The human doctors scored everything from how well history was taken, the accuracy of the diagnosis, the quality of the suggested management plan, right down to communication skills and empathy—and, of course, how well the AI interpreted the visual information.

Surprising results from the simulated clinic

Here’s where it gets really interesting. In this head-to-head comparison within the controlled study environment, Google found AMIE didn’t just hold its own—it often came out ahead.

The AI was rated as being better than the human PCPs at interpreting the multimodal data shared during the chats. It also scored higher on diagnostic accuracy, producing differential diagnosis lists (the ranked list of possible conditions) that specialists deemed more accurate and complete based on the case details.

Specialist doctors reviewing the transcripts tended to rate AMIE’s performance higher across most areas. They particularly noted “the quality of image interpretation and reasoning,” the thoroughness of its diagnostic workup, the soundness of its management plans, and its ability to flag when a situation needed urgent attention.

Perhaps one of the most surprising findings came from the patient actors: they often found the AI to be more empathetic and trustworthy than the human doctors in these text-based interactions.

And, on a critical safety note, the study found no statistically significant difference between how often AMIE made errors based on the images (hallucinated findings) compared to the human physicians.

Technology never stands still, so Google also ran some early tests swapping out the Gemini 2.0 Flash model for the newer Gemini 2.5 Flash.

Using their simulation framework, the results hinted at further gains, particularly in getting the diagnosis right (Top-3 Accuracy) and suggesting appropriate management plans.
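
For context, Top-3 accuracy simply asks whether the correct diagnosis appears among the model's three highest-ranked candidates. A generic illustration (not Google's evaluation code) looks like this:

```python
def top_k_accuracy(ranked_differentials, ground_truths, k=3):
    """Fraction of cases where the confirmed diagnosis appears in the top-k list."""
    hits = sum(
        truth in ranked[:k]
        for ranked, truth in zip(ranked_differentials, ground_truths)
    )
    return hits / len(ground_truths)

# Toy example: the true diagnosis is in the top 3 for two of the three cases
cases = [
    ["eczema", "psoriasis", "tinea corporis"],
    ["angina", "GERD", "costochondritis"],
    ["migraine", "tension headache", "sinusitis"],
]
truths = ["psoriasis", "myocardial infarction", "migraine"]
print(top_k_accuracy(cases, truths))  # 0.666...
```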

While promising, the team is quick to add a dose of realism: these are just automated results, and “rigorous assessment through expert physician review is essential to confirm these performance benefits.”

Important reality checks

Google is commendably upfront about the limitations here. “This study explores a research-only system in an OSCE-style evaluation using patient actors, which substantially under-represents the complexity… of real-world care,” they state clearly. 

Simulated scenarios, however well-designed, aren’t the same as dealing with the unique complexities of real patients in a busy clinic. They also stress that the chat interface doesn’t capture the richness of a real video or in-person consultation.

So, what’s the next step? Moving carefully towards the real world. Google is already partnering with Beth Israel Deaconess Medical Center for a research study to see how AMIE performs in actual clinical settings with patient consent.

The researchers also acknowledge the need to eventually move beyond text and static images towards handling real-time video and audio—the kind of interaction common in telehealth today.

Giving AI the ability to ‘see’ and interpret the kind of visual evidence doctors use every day offers a glimpse of how AI might one day assist clinicians and patients. However, the path from these promising findings to a safe and reliable tool for everyday healthcare is still a long one that requires careful navigation.

(Photo by Alexander Sinn)

See also: Are AI chatbots really changing the world of work?

Deepgram Nova-3 Medical: AI speech model cuts healthcare transcription errors (4 March 2025)

Deepgram has unveiled Nova-3 Medical, an AI speech-to-text (STT) model tailored for transcription in the demanding environment of healthcare.

Designed to integrate seamlessly with existing clinical workflows, Nova-3 Medical aims to address the growing need for accurate and efficient transcription in the UK’s public NHS and private healthcare landscape.

As electronic health records (EHRs), telemedicine, and digital health platforms become increasingly prevalent, the demand for reliable AI-powered transcription has never been higher. However, traditional speech-to-text models often struggle with the complex and specialised vocabulary used in clinical settings, leading to errors and “hallucinations” that can compromise patient care.

Deepgram’s Nova-3 Medical is engineered to overcome these challenges. The model leverages advanced machine learning and specialised medical vocabulary training to accurately capture medical terms, acronyms, and clinical jargon—even in challenging audio conditions. This is particularly crucial in environments where healthcare professionals may move away from recording devices.

“Nova‑3 Medical represents a significant leap forward in our commitment to transforming clinical documentation through AI,” said Scott Stephenson, CEO of Deepgram. “By addressing the nuances of clinical language and offering unprecedented customisation, we are empowering developers to build products that improve patient care and operational efficiency.”

One of the key features of the model is its ability to deliver structured transcriptions that integrate seamlessly with clinical workflows and EHR systems, ensuring vital patient data is accurately organised and readily accessible. The model also offers flexible, self-service customisation, including Keyterm Prompting for up to 100 key terms, allowing developers to tailor the solution to the unique needs of various medical specialties.

Versatile deployment options – including on-premises and Virtual Private Cloud (VPC) configurations – ensure enterprise-grade security and HIPAA compliance, and support the data protection requirements that apply to healthcare deployments in the UK.

“Speech-to-text for enterprise use cases is not trivial, and there is a fundamental difference between voice AI platforms designed for enterprise use cases vs entertainment use cases,” said Kevin Fredrick, Managing Partner at OneReach.ai. “Deepgram’s Nova-3 model and Nova-3-Medical model, are leading voice AI offerings, including TTS, in terms of the accuracy, latency, efficiency, and scalability required for enterprise use cases.”

Benchmarking Nova-3 Medical: Accuracy, speed, and efficiency

Deepgram has conducted benchmarking to demonstrate the performance of Nova-3 Medical. The company claims the model delivers industry-leading transcription accuracy, optimising both overall word recognition and critical medical term accuracy (a rough sketch of how these error rates are computed appears after the list below).

  • Word Error Rate (WER): With a median WER of 3.45%, Nova-3 Medical outperforms competitors, achieving a 63.6% reduction in errors compared to the next best competitor. This enhanced precision minimises manual corrections and streamlines workflows.
  • Keyword Error Rate (KER): Crucially, Nova-3 Medical achieves a KER of 6.79%, marking a 40.35% reduction in errors compared to the next best competitor. This ensures that critical medical terms – such as drug names and conditions – are accurately transcribed, reducing the risk of miscommunication and patient safety issues.
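
For context on those figures, word error rate is conventionally the word-level edit distance (substitutions, insertions, and deletions) between a reference transcript and the model's output, divided by the number of reference words; keyword error rate applies the same idea to a restricted list of critical terms. A minimal, generic sketch (not Deepgram's benchmarking code) is shown below.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance computed over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return d[-1][-1] / len(ref)

print(word_error_rate("start 5 mg of amlodipine daily",
                      "start 5 mg of amlodipine twice daily"))  # one insertion -> ~0.167
```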

In addition to accuracy, Nova-3 Medical excels in real-time applications. The model transcribes speech 5-40x faster than many alternative speech recognition vendors, making it ideal for telemedicine and digital health platforms. Its scalable architecture ensures high performance even as transcription volumes increase.

Furthermore, Nova-3 Medical is designed to be cost-effective. Starting at $0.0077 per minute of streaming audio – less than half the price of leading cloud providers, Deepgram claims – it allows healthcare tech companies to reinvest in innovation and accelerate product development.

Deepgram’s Nova-3 Medical aims to empower developers to build transformative medical transcription applications, driving exceptional outcomes across healthcare.

(Photo by Alexander Sinn)

See also: Autoscience Carl: The first AI scientist writing peer-reviewed papers

AI Action Summit: Leaders call for unity and equitable development (10 February 2025)

As the 2025 AI Action Summit kicks off in Paris, global leaders, industry experts, and academics are converging to address the challenges and opportunities presented by AI.

Against the backdrop of rapid technological advancements and growing societal concerns, the summit aims to build on the progress made since the 2024 Seoul Safety Summit and establish a cohesive global framework for AI governance.  

AI Action Summit is ‘a wake-up call’

French President Emmanuel Macron has described the summit as “a wake-up call for Europe,” emphasising the need for collective action in the face of AI’s transformative potential. This comes as the US has committed $500 billion to AI infrastructure.

The UK, meanwhile, has unveiled its Opportunities Action Plan ahead of the full implementation of the UK AI Act. Ahead of the AI Summit, UK tech minister Peter Kyle told The Guardian the AI race must be led by “western, liberal, democratic” countries.

These developments signal a renewed global dedication to harnessing AI’s capabilities while addressing its risks.  

Matt Cloke, CTO at Endava, highlighted the importance of bridging the gap between AI’s potential and its practical implementation.


“Much of the conversation is set to focus on understanding the risks involved with using AI while helping to guide decision-making in an ever-evolving landscape,” he said.  

Cloke also stressed the role of organisations in ensuring AI adoption goes beyond regulatory frameworks.

“Modernising core systems enables organisations to better harness AI while ensuring regulatory compliance,” he explained.

“With improved data management, automation, and integration capabilities, these systems make it easier for organisations to stay agile and quickly adapt to impending regulatory changes.”  

Governance and workforce among critical AI Action Summit topics

Kit Cox, CTO and Founder of Enate, outlined three critical areas for the summit’s agenda.


“First, AI governance needs urgent clarity,” he said. “We must establish global guidelines to ensure AI is safe, ethical, and aligned across nations. A disconnected approach won’t work; we need unity to build trust and drive long-term progress.”

Cox also emphasised the need for a future-ready workforce.

“Employers and governments must invest in upskilling the workforce for an AI-driven world,” he said. “This isn’t just about automation replacing jobs; it’s about creating opportunities through education and training that genuinely prepare people for the future of work.”  

Finally, Cox called for democratising AI’s benefits.

“AI must be fair and democratic both now and in the future,” he said. “The benefits can’t be limited to a select few. We must ensure that AI’s power reaches beyond Silicon Valley to all corners of the globe, creating opportunities for everyone to thrive.”  

Developing AI in the public interest

Professor Gina Neff, Professor of Responsible AI at Queen Mary University of London and Executive Director at Cambridge University’s Minderoo Centre for Technology & Democracy, stressed the importance of making AI relatable to everyday life.


“For us in civil society, it’s essential that we bring imaginaries about AI into the everyday,” she said. “From the barista who makes your morning latte to the mechanic fixing your car, they all have to understand how AI impacts them and, crucially, why AI is a human issue.”  

Neff also pushed back against big tech’s dominance in AI development.

“I’ll be taking this spirit of public interest into the Summit and pushing back against big tech’s push for hyperscaling. Thinking about AI as something we’re building together – like we do our cities and local communities – puts us all in a better place.”

Addressing bias and building equitable AI

Professor David Leslie, Professor of Ethics, Technology, and Society at Queen Mary University of London, highlighted the unresolved challenges of bias and diversity in AI systems.

“Over a year after the first AI Safety Summit at Bletchley Park, only incremental progress has been made to address the many problems of cultural bias and toxic and imbalanced training data that have characterised the development and use of Silicon Valley-led frontier AI systems,” he said.


Leslie called for a renewed focus on public interest AI.

“The French AI Action Summit promises to refocus the conversation on AI governance to tackle these and other areas of immediate risk and harm,” he explained. “A main focus will be to think about how to advance public interest AI for all through mission-driven and society-led funding.”  

He proposed the creation of a public interest AI foundation, supported by governments, companies, and philanthropic organisations.

“This type of initiative will have to address issues of algorithmic and data biases head on, at concrete and practice-based levels,” he said. “Only then can it stay true to the goal of making AI technologies – and the infrastructures upon which they depend – accessible global public goods.”  

Systematic evaluation  

Professor Maria Liakata, Professor of Natural Language Processing at Queen Mary University of London, emphasised the need for rigorous evaluation of AI systems.


“AI has the potential to make public service more efficient and accessible,” she said. “But at the moment, we are not evaluating AI systems properly. Regulators are currently on the back foot with evaluation, and developers have no systematic way of offering the evidence regulators need.”  

Liakata called for a flexible and systematic approach to AI evaluation.

“We must remain agile and listen to the voices of all stakeholders,” she said. “This would give us the evidence we need to develop AI regulation and help us get there faster. It would also help us get better at anticipating the risks posed by AI.”  

AI in healthcare: Balancing innovation and ethics

Dr Vivek Singh, Lecturer in Digital Pathology at Barts Cancer Institute, Queen Mary University of London, highlighted the ethical implications of AI in healthcare.


“The Paris AI Action Summit represents a critical opportunity for global collaboration on AI governance and innovation,” he said. “I hope to see actionable commitments that balance ethical considerations with the rapid advancement of AI technologies, ensuring they benefit society as a whole.”  

Singh called for clear frameworks for international cooperation.

“A key outcome would be the establishment of clear frameworks for international cooperation, fostering trust and accountability in AI development and deployment,” he said.  

AI Action Summit: A pivotal moment

The 2025 AI Action Summit in Paris represents a pivotal moment for global AI governance. With calls for unity, equity, and public interest at the forefront, the summit aims to address the challenges of bias, regulation, and workforce readiness while ensuring AI’s benefits are shared equitably.

As world leaders and industry experts converge, the hope is that actionable commitments will pave the way for a more inclusive and ethical AI future.

(Photo by Jorge Gascón)

See also: EU AI Act: What businesses need to know as regulations go live

MHRA pilots ‘AI Airlock’ to accelerate healthcare adoption (4 December 2024)

The Medicines and Healthcare products Regulatory Agency (MHRA) has announced the selection of five healthcare technologies for its ‘AI Airlock’ scheme.

AI Airlock aims to refine the process of regulating AI-driven medical devices and help fast-track their safe introduction to the UK’s National Health Service (NHS) and patients in need.

The technologies chosen for this scheme include solutions targeting cancer and chronic respiratory diseases, as well as advancements in radiology diagnostics. These AI systems promise to revolutionise the accuracy and efficiency of healthcare, potentially driving better diagnostic tools and patient care.

The AI Airlock, as described by the MHRA, is a “sandbox” environment—an experimental framework designed to help manufacturers determine how best to collect real-world evidence to support the regulatory approval of their devices.

Unlike traditional medical devices, AI models continue to evolve through learning, making the establishment of safety and efficacy evidence more complex. The Airlock enables this exploration within a monitored virtual setting, giving developers insight into the practical challenges of regulation while supporting the NHS’s broader adoption of transformative AI technologies.

Safely enabling AI healthcare innovation  

Laura Squire, the lead figure in MedTech regulatory reform and Chief Officer at the MHRA, said: “New AI medical devices have the potential to increase the accuracy of healthcare decisions, save time, and improve efficiency—leading to better outcomes for the NHS and patients across all healthcare settings. 

“But we need to be confident that AI-powered medical devices introduced into the NHS are safe, stay safe, and perform as intended through their lifetime of use.”

Squire emphasised that the AI Airlock pilot allows collaboration “in partnership with technology specialists, developers and the NHS,” facilitating the exploration of best practices and accelerating safe patient access to innovative solutions.

Government representatives have praised the initiative for its forward-thinking framework.

Karin Smyth, Minister of State for Health, commented: “As part of our 10-Year Health Plan, we’re shifting NHS care from analogue to digital, and this project will help bring the most promising technology to patients.

“AI has the power to revolutionise care by supporting doctors to diagnose diseases, automating time-consuming admin tasks, and reducing hospital admissions by predicting future ill health.”

Science Minister Lord Vallance lauded the AI Airlock pilot as “a great example of government working with businesses to enable them to turn ideas into products that improve lives.” He added, “This shows how good regulation can facilitate emerging technologies for the benefit of the UK and our economy.”

Selected technologies  

Deploying AI-powered medical devices requires meeting stringent criteria covering innovation, patient benefit, and readiness for regulatory challenges. The five technologies selected for this inaugural pilot offer vital insights into healthcare’s future: 

  1. Lenus Stratify

Patients with Chronic Obstructive Pulmonary Disease (COPD) are among those who stand to benefit significantly from AI innovation. Lenus Stratify, developed by Lenus Health, analyses patient data to predict severe lung disease outcomes, reducing unscheduled hospital admissions. The system empowers care providers to adopt earlier interventions, affording patients an improved quality of life while alleviating NHS resource strain.  

  2. Philips Radiology Reporting Enhancer

Philips has integrated AI into existing radiology workflows to enhance the efficiency and accuracy of critical radiology reports. This system uses AI to prepare the “Impression” section of reports, summarising essential diagnostic information for healthcare providers. By automating this process, Philips aims to minimise workload struggles, human errors, and miscommunication, creating a more seamless diagnostic experience.  

  3. Federated AI Monitoring Service (FAMOS)

One recurring AI challenge is the concept of “drift,” when changing real-world conditions impair system performance over time. Newton’s Tree has developed FAMOS to monitor AI models in real time, flagging degradation and enabling rapid corrections. Hospitals, regulators, and software developers can use this tool to ensure algorithms remain high-performing, adapting to evolving circumstances while prioritising patient safety.  

  4. OncoFlow Personalised Cancer Management

Targeting the pressing healthcare challenge of reducing waiting times for cancer treatment, OncoFlow speeds up clinical workflows through its intelligent care pathway platform. Initially applied to breast cancer protocols, the system later aims to expand across other oncology domains. With quicker access to tailored therapies, patients gain increased survival rates amidst mounting NHS pressures.  

  5. SmartGuideline

Developed to simplify complex clinical decision-making processes, SmartGuideline uses a large language model trained on official NICE medical guidelines. This technology allows clinicians to ask routine questions and receive verified, precise answers, eliminating the ambiguity associated with current AI language models. By integrating this tool, patients benefit from more accurate treatments grounded in up-to-date medical knowledge.

Broader implications  

The influence of the AI Airlock extends beyond its current applications. The MHRA expects pilot findings, due in 2025, to inform future medical device regulations and create a clearer path for manufacturers developing AI-enabled technologies. 

The evidence derived will contribute to shaping post-Brexit UKCA marking processes, helping manufacturers achieve compliance with higher levels of transparency. By improving regulatory frameworks, the UK could position itself as a global hub for med-tech innovation while ensuring faster access to life-saving tools.

The urgency of these developments was underscored earlier this year in Lord Darzi’s review of health and care. The report outlined the “critical state” of the NHS, offering AI interventions as a promising pathway to sustainability. The work on AI Airlock by the MHRA addresses one of the report’s major recommendations for enabling regulatory solutions and “unlocking the AI revolution” for healthcare advancements.

While being selected into the AI Airlock pilot does not indicate regulatory approval, the technologies chosen represent a potential leap forward in applying AI to some of healthcare’s most pressing challenges. The coming years will test the potential of these solutions under regulatory scrutiny.

If successful, the initiative from the MHRA could redefine how pioneering technologies like AI are adopted in healthcare, balancing the need for speed, safety, and efficiency. With the NHS under immense pressure from growing demand, AI’s ability to augment clinicians, predict illnesses, and streamline workflows may well be the game-changer the system urgently needs.

(Photo by National Cancer Institute)

See also: AI’s role in helping to prevent skin cancer through behaviour change

AI’s role in helping to prevent skin cancer through behaviour change (12 September 2024)

In the past year, we’ve seen remarkable achievements across AI-assisted cancer diagnosis as more and more clinicians test, use and integrate AI companions into daily practice.

Skin cancer is no exception, and we expect AI diagnostic tools to be widely implemented across this clinical arena in the future. What does AI assistance look like for skin cancer? A 2024 study led by researchers at Stanford Medicine compared the performance of clinicians diagnosing at least one skin cancer with and without deep learning-based AI assistance. In an experimental environment, clinicians without AI assistance achieved an average sensitivity of 74.8% while for AI-assisted clinicians, sensitivity was around 81.1%.

What’s intriguing is AI helped medical professionals at all levels, with the largest improvement seen among non-dermatologists.

AI for skin cancer can impact behaviour change

Cancer is on the rise among younger people. According to a study published in BMJ Oncology, the number of under-50s worldwide being diagnosed with cancer has risen by nearly 80% in three decades. And over the last decade, melanoma skin cancer incidence rates have increased by almost two-fifths (38%), with Spain seeing a steady incidence increase of 2.4% during this time.

If detected early enough, skin cancer is easily treated and the prognosis is very good. But busy lives and competing concerns mean fewer people are getting checked, resulting in delays to diagnosis and treatment that dramatically affect survival rates, and those who do seek help often wait to speak to a doctor. In fact, new research from Bupa, Attitudes Towards Digital Healthcare, indicates that only 9% of people would immediately have a mole they were concerned about examined by a professional.

However, the same research found that if people were able to have a mole assessed by an AI-powered phone app at a time of their choosing, that percentage increases more than threefold (to 33%). This suggests emerging technology can have a significant impact on positive behaviour change in healthcare and improve the clinical outcomes of a potentially severe disease.

Bupa now offer an at-home dermatology tool

At Bupa, we see lots of opportunities to use AI and are exploring its use to enhance patient care, improve operational efficiency, and help our customers to live longer, healthier and happier lives. We know that people want their healthcare partner to be by their side, not just when they are sick, but supporting them constantly to keep them well.

That’s why we launched Blua, our digital healthcare service that’s available in over 200 countries. Blua provides access to three life-changing healthcare innovations that drive convenience and accessibility: virtual consultations, so that a customer can connect to a health professional from wherever they choose; digital health programmes that allow customers to proactively manage their health; and remote healthcare services such as prescription delivery and at-home monitoring equipment.

For customers in Spain, we offer an at-home dermatology assessment service through Blua. How does this work? Customers who are worried about a skin lesion can take high-resolution photos of it using their smartphone. Once taken, the photos are uploaded to Blua and, using AI, compared with a database of millions of other images of skin lesions to check for signs of malignancy.

The tool’s algorithms are able to discern between 302 different skin pathologies. If the tool suspects there is cause for concern, it will prompt the customer to book a follow-up appointment with a doctor so that the lesion can be looked at further and preventative action can be taken if needed.
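
Bupa has not published how Blua's dermatology model is built, but the workflow described above (classify a lesion photo across many pathologies, then recommend a follow-up if the malignancy risk crosses a threshold) follows a common pattern. The sketch below is purely illustrative: the model file, class names, and referral threshold are placeholders, not details of Bupa's system.

```python
# Illustrative triage sketch only; the model, labels, and threshold are placeholders.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def assess_lesion(image_path, model, class_names, concerning, refer_threshold=0.10):
    """Classify a lesion photo and flag whether a follow-up appointment is advised."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    # Sum the probability mass assigned to classes considered concerning
    risk = sum(probs[i].item() for i, name in enumerate(class_names) if name in concerning)
    top = class_names[int(probs.argmax())]
    return {"top_prediction": top, "malignancy_risk": risk, "refer": risk >= refer_threshold}

# Hypothetical usage with a classifier trained over ~302 skin pathologies:
# model = torch.load("derm_classifier.pt")
# result = assess_lesion("mole.jpg", model, CLASS_NAMES, {"melanoma", "basal cell carcinoma"})
```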

The future of healthcare means early detection

Digital healthcare, together with AI, is going to play a crucial role in removing the barriers that stop people from getting health concerns like moles checked out in a timely manner, promoting positive behaviour change that can save lives. This is why Blua is especially useful in today’s fast-paced world, where convenience is paramount and virtual consultations and at-home tests will empower individuals to prioritise their health without needing to sacrifice their time.

(Photo by Nsey Benajah)

AlphaProteo: Google DeepMind unveils protein design system (6 September 2024)

Google DeepMind has unveiled an AI system called AlphaProteo that can design novel proteins that successfully bind to target molecules, potentially revolutionising drug design and disease research.

AlphaProteo can generate new protein binders for diverse target proteins, including VEGF-A, which is associated with cancer and diabetes complications. Notably, this is the first time an AI tool has successfully designed a protein binder for VEGF-A.

The system’s performance is particularly impressive, achieving higher experimental success rates and binding affinities that are up to 300 times better than existing methods across the seven target proteins tested.

(Chart demonstrating AlphaProteo’s success rate. Credit: Google DeepMind)

Trained on vast amounts of protein data from the Protein Data Bank and over 100 million predicted structures from AlphaFold, AlphaProteo has learned the intricacies of molecular binding. Given the structure of a target molecule and preferred binding locations, the system generates a candidate protein designed to bind at those specific sites.

To validate AlphaProteo’s capabilities, the team designed binders for a diverse range of target proteins, including viral proteins involved in infection and proteins associated with cancer, inflammation, and autoimmune diseases. The results were promising, with high binding success rates and best-in-class binding strengths observed across the board.

For instance, when targeting the viral protein BHRF1, 88% of AlphaProteo’s candidate molecules bound successfully in wet lab testing. On average, AlphaProteo binders exhibited 10 times stronger binding than the best existing design methods across the targets tested.

The system’s performance suggests it could significantly reduce the time required for initial experiments involving protein binders across a wide range of applications. However, the team acknowledges that AlphaProteo has limitations, as it was unable to design successful binders against TNFα (a protein associated with autoimmune diseases like rheumatoid arthritis).

To ensure responsible development, Google DeepMind is collaborating with external experts to inform their phased approach to sharing this work and contributing to community efforts in developing best practices—including the NTI’s new AI Bio Forum.

As the technology evolves, the team plans to work with the scientific community to leverage AlphaProteo on impactful biology problems and understand its limitations. They are also exploring drug design applications at Isomorphic Labs.

While AlphaProteo represents a significant step forward in protein design, achieving strong binding is typically just the first step in designing proteins for practical applications. There remain many bioengineering challenges to overcome in the research and development process.

Nevertheless, Google DeepMind’s advancement holds tremendous potential for accelerating progress across a broad spectrum of research, including drug development, cell and tissue imaging, disease understanding and diagnosis, and even crop resistance to pests.

You can find the full AlphaProteo whitepaper here (PDF)

See also: Paige and Microsoft unveil next-gen AI models for cancer diagnosis

Paige and Microsoft unveil next-gen AI models for cancer diagnosis (9 August 2024)

Paige and Microsoft have unveiled the next big breakthrough in clinical AI for cancer diagnosis and treatment: Virchow2 and Virchow2G, enhanced versions of its revolutionary AI models for cancer pathology.

The Virchow2 and Virchow2G models were trained on an enormous dataset that Paige has accumulated: more than three million pathology slides gathered from over 800 labs across 45 countries. The data was obtained from over 225,000 patients, all de-identified, creating a rich and representative dataset encompassing all genders, races, ethnic groups, and regions across the globe.

What makes these models truly remarkable is their scope. They cover over 40 different tissue types and various staining methods, making them applicable to a wide range of cancer diagnoses. Virchow2G, with its 1.8 billion parameters, stands as the largest pathology model ever created and sets new standards in AI training, scale, and performance.

As Dr. Thomas Fuchs, founder and chief scientist of Paige, comments: “We’re just beginning to tap into what these foundation models can achieve in revolutionising our understanding of cancer through computational pathology.” He believes these models will significantly improve the future for pathologists and sees the technology as an important step in the progression of diagnostics, targeted medications, and customised patient care.

Similarly, Razik Yousfi, Paige’s senior vice president of technology, states that these models are not only making precision medicine a reality but are also improving the accuracy and efficiency of cancer diagnosis, and pushing the boundaries of what’s possible in pathology and patient care.

So, how is this relevant to cancer diagnosis today? Paige has developed a clinical AI application that pathologists can use to recognise cancer in over 40 tissue types. The tool allows potentially hazardous areas to be identified more quickly and accurately, making the diagnostic process more efficient and less prone to errors, even for rare cancers.

Beyond diagnosis, Paige has created AI modules that can benefit life sciences and pharmaceutical companies. These tools can aid in therapeutic targeting, biomarker identification, and clinical trial design, potentially leading to more successful trials and faster development of new therapies.

The good news for researchers is that Virchow2 is available on Hugging Face for non-commercial research, while the entire suite of AI modules is now available for commercial use. This accessibility could accelerate advancements in cancer research and treatment across the scientific community.
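
For researchers who do want to experiment, Hugging Face-hosted pathology backbones of this kind are typically loaded through the timm library. The snippet below is a hedged sketch only: the repository id and any additional architecture arguments are assumptions that should be confirmed against the official model card, and access is gated behind accepting the model's non-commercial licence on the Hub.

```python
# Hedged sketch: confirm the exact repo id and any required architecture kwargs
# on the official model card before use; the repository is licence-gated.
import timm
import torch

model = timm.create_model(
    "hf-hub:paige-ai/Virchow2",  # assumed repository id
    pretrained=True,
)
model.eval()

# Extract a tile-level embedding from a 224x224 H&E patch (dummy tensor here)
patch = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = model.forward_features(patch)
print(features.shape)
```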

In summary, the recently introduced AI models represent a major advancement in the fight against cancer. Paige and Microsoft have chosen the right path by combining the power of data with state-of-the-art AI technologies. These companies have created new opportunities for more accurate cancer prediction, paving the way for tailored solutions and innovative research in oncology.

(Photo by National Cancer Institute)

See also: The hidden climate cost of AI: How tech giants are struggling to go green

UK hospitals begin live trial of prostate cancer-detecting AI (22 July 2024)

Three hospital systems across England have begun a live clinical trial of AI technology designed to detect and grade prostate cancer. The study – known as ARTICULATE PRO – is being led by the University of Oxford in collaboration with Paige, a pioneer in clinical AI applications for cancer diagnosis.

The participating hospitals – North Bristol Trust Southmead Hospital, University Hospitals Coventry and Warwickshire, and Oxford University NHS Foundation Trust – are now incorporating Paige’s AI technology into their standard of care. This multisite trial aims to evaluate the potential of AI to improve patient outcomes against a backdrop of rising prostate cancer cases.

Professor Clare Verrill, OUH Cellular Pathology Consultant, Associate Professor and Principal Investigator of ARTICULATE PRO, said: “The central focus of ARTICULATE PRO is patients. We are striving towards our goal to safely and effectively ensure they benefit the most from powerful AI technology.

“With the multisite live use of The Paige Prostate Suite, we can systematically study benefits to patients in clinical settings.”

The Paige Prostate Suite – the AI system being trialled – is designed to assist pathologists in detecting, grading, and measuring tumours in prostate biopsies and tissue samples. Pathologists at the three hospitals are assessing how this AI technology impacts their clinical decision-making, pathology service delivery, and resource utilisation in real-world settings.

Dr Jon Oxley, Uropathologist and Bristol lead of ARTICULATE PRO, commented: “I have studied the disease and progression of prostate cancer in clinical research for over 25 years, it is a significant advancement that Paige’s AI applications have achieved a level of validation and performance that allows safe and effective live clinical use.

“Using Paige Prostate Suite alongside our standard of care has the promise to increase efficiency and improve reproducibility of results for patients.”

The study is notable for its implementation across hospitals using different digital pathology scanners and information systems, serving distinct patient populations. This diversity allows for a comprehensive assessment of how Paige’s AI technology can best serve patients, histopathologists, and hospital systems in prostate cancer diagnosis.

Dr Bidisa Sinha, Uropathologist at University Hospitals Coventry and Warwickshire, added: “We believe AI can help to improve the accuracy and consistency of grading cancer and assist in detection of small areas of cancer which are easy to miss.

“This is world-leading research being carried out at UHCW. We are proud to be a global leader in the field of digital and computational pathology.”

The ARTICULATE PRO study is funded by the Accelerated Access Collaborative (AAC) Artificial Intelligence in Health and Care Award, overseen by the Department of Health and Social Care.

As prostate cancer rates continue to rise, the integration of AI in diagnosis could potentially lead to earlier detection, more accurate grading, and ultimately improved patient outcomes. The results of this trial could pave the way for wider adoption of AI in cancer diagnosis across the UK and beyond.

(Image Credit: Paige)

See also: AI could unleash £119 billion in UK productivity

AI tool finds cancer signs missed by doctors (21 March 2024)

An AI tool has proven capable of detecting signs of cancer that were overlooked by human radiologists.

The AI tool, called Mia, was piloted alongside NHS clinicians in the UK and analysed the mammograms of over 10,000 women. 

Most of the participants were cancer-free, but the AI successfully flagged all of those with symptoms of breast cancer—as well as an additional 11 cases that the doctors failed to identify. Of the 10,889 women who participated in the trial, only 81 chose not to have their scans reviewed by the AI system.

The AI tool was trained on a dataset of over 6,000 previous breast cancer cases to learn the subtle patterns and imaging biomarkers associated with malignant tumours. When evaluated on the new cases, it correctly predicted the presence of cancer with 81.6 percent accuracy and correctly ruled it out 72.9 percent of the time.
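
Figures of this kind are usually reported as sensitivity (the share of true cancers that are flagged) and specificity (the share of cancer-free scans that are correctly cleared). As a generic illustration, unrelated to Mia's actual evaluation pipeline, both fall out of a simple confusion-matrix count:

```python
def sensitivity_specificity(predictions, labels):
    """predictions and labels are iterables of booleans (True = cancer present)."""
    tp = sum(p and l for p, l in zip(predictions, labels))              # flagged, cancer
    tn = sum((not p) and (not l) for p, l in zip(predictions, labels))  # cleared, healthy
    fp = sum(p and (not l) for p, l in zip(predictions, labels))        # flagged, healthy
    fn = sum((not p) and l for p, l in zip(predictions, labels))        # missed cancer
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 8 of 10 cancers flagged, 7 of 10 healthy scans cleared
preds  = [True] * 8 + [False] * 2 + [False] * 7 + [True] * 3
labels = [True] * 10 + [False] * 10
print(sensitivity_specificity(preds, labels))  # (0.8, 0.7)
```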

Breast cancer is the most common cancer in women worldwide, with two million new cases diagnosed annually. While survival rates have improved with earlier detection and better treatments, many patients still experience severe side effects like lymphoedema after surgery and radiotherapy.

Researchers are now developing the AI system further to predict a patient’s risk of such side effects up to three years after treatment. This could allow doctors to personalise care with alternative treatments or additional supportive measures for high-risk patients.

The research team plans to enrol 780 breast cancer patients in a clinical trial called Pre-Act to prospectively validate the AI risk prediction model over a two-year follow-up period. The long-term goal is an AI system that can comprehensively evaluate a patient’s prognosis and treatment needs.

(Photo by Angiola Harry)

See also: NVIDIA unveils Blackwell architecture to power next GenAI wave 

AI & Big Data Expo: Unlocking the potential of AI on edge devices (15 December 2023)

In an interview at AI & Big Data Expo, Alessandro Grande, Head of Product at Edge Impulse, discussed issues around developing machine learning models for resource-constrained edge devices and how to overcome them.

During the discussion, Grande provided insightful perspectives on the current challenges, how Edge Impulse is helping address these struggles, and the tremendous promise of on-device AI.

Key hurdles with edge AI adoption

Grande highlighted three primary pain points companies face when attempting to productise edge machine learning models: difficulties determining optimal data collection strategies, scarce AI expertise, and cross-disciplinary communication barriers between hardware, firmware, and data science teams.

“A lot of the companies building edge devices are not very familiar with machine learning,” says Grande. “Bringing those two worlds together is the third challenge, really, around having teams communicate with each other and being able to share knowledge and work towards the same goals.”

Strategies for lean and efficient models

When asked how to optimise for edge environments, Grande emphasised first minimising required sensor data.

“We are seeing a lot of companies struggle with the dataset. What data is enough, what data should they collect, what data from which sensors should they collect the data from. And that’s a big struggle,” explains Grande.

Selecting efficient neural network architectures helps, as do compression techniques like quantisation, which reduce numerical precision without substantially impacting accuracy. Engineers must also balance sensor and hardware constraints against functionality, connectivity needs, and software requirements.
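As an illustration of the quantisation step mentioned above, the sketch below uses TensorFlow Lite's standard post-training optimisation. It is a generic example assuming an already-trained Keras model with a hypothetical filename, not a description of Edge Impulse's internal pipeline:

    # A minimal post-training quantisation sketch with TensorFlow Lite, one common
    # way to shrink a trained model for resource-constrained edge hardware.
    # Assumes a trained Keras model saved as "gesture_classifier.h5" (hypothetical).
    import tensorflow as tf

    model = tf.keras.models.load_model("gesture_classifier.h5")

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantisation
    tflite_model = converter.convert()

    # The resulting flatbuffer is typically several times smaller than the original model.
    with open("gesture_classifier_quantised.tflite", "wb") as f:
        f.write(tflite_model)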

Edge Impulse aims to enable engineers to validate and verify models themselves pre-deployment using common ML evaluation metrics, ensuring reliability while accelerating time-to-value. The end-to-end development platform seamlessly integrates with all major cloud and ML platforms.

Transformative potential of on-device intelligence

Grande highlighted innovative products already leveraging edge intelligence to provide personalised health insights without reliance on the cloud, such as sleep tracking with Oura Ring.

“It’s sold over a million pieces, and it’s something that everybody can experience and everybody can get a sense of really the power of edge AI,” explains Grande.

Other exciting opportunities exist around preventative industrial maintenance via anomaly detection on production lines.
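As a rough sketch of what such anomaly detection can look like at its simplest, the example below flags sensor readings that drift far from a learned baseline; the vibration data, threshold, and values are hypothetical and not drawn from the interview:

    # Hypothetical example: flag production-line vibration readings that deviate
    # strongly from a baseline learned while the machine was known to be healthy.
    import numpy as np

    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=0.5, scale=0.05, size=10_000)  # healthy-machine readings
    mean, std = baseline.mean(), baseline.std()

    def is_anomalous(reading: float, z_threshold: float = 4.0) -> bool:
        # A reading whose z-score exceeds the threshold is treated as anomalous.
        return abs(reading - mean) / std > z_threshold

    print(is_anomalous(0.52))  # False: within normal variation
    print(is_anomalous(0.95))  # True: far outside the learned baseline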

Ultimately, Grande sees massive potential for on-device AI to greatly enhance utility and usability in daily life. Rather than just raw data, edge devices can interpret sensor inputs to provide actionable suggestions and responsive experiences not previously possible—heralding more useful technology and improved quality of life.

Unlocking the potential of AI on edge devices hinges on overcoming current obstacles inhibiting adoption. Grande and other leading experts provided deep insights at this year’s AI & Big Data Expo on how to break down the barriers and unleash the full possibilities of edge AI.

“I’d love to see a world where the devices that we were dealing with were actually more useful to us,” concludes Grande.

Watch our full interview with Alessandro Grande below:

(Photo by Niranjan _ Photographs on Unsplash)

See also: AI & Big Data Expo: Demystifying AI and seeing past the hype

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI & Big Data Expo: Unlocking the potential of AI on edge devices appeared first on AI News.

Absci and AstraZeneca forge AI partnership to discover cancer treatments https://www.artificialintelligence-news.com/news/absci-astrazeneca-ai-partnership-discover-cancer-treatments/ Mon, 04 Dec 2023 17:00:56 +0000

Absci, a frontrunner in generative AI antibody discovery, has partnered with biopharmaceutical giant AstraZeneca to leverage AI in the quest for a novel cancer treatment.

This collaboration will capitalise on Absci’s Integrated Drug Creation platform—seamlessly integrating with AstraZeneca’s expertise in oncology, aiming to expedite the discovery of a potentially game-changing cancer therapy.

Under the agreement, Absci will deploy its pioneering generative AI technology to craft a therapeutic candidate antibody tailored for a specific oncology target. The collaboration encompasses an upfront commitment, substantial R&D funding, milestone payments, and royalties on future product sales.

Sean McClain, Founder & CEO of Absci, said: “AstraZeneca is a leader in developing novel treatments in oncology, and we are excited to collaborate with them to design a therapeutic candidate antibody with the potential to improve the lives of cancer patients.”

Absci’s Integrated Drug Creation platform combines generative AI and scalable wet-lab technologies, generating proprietary data by scrutinising millions of protein-protein interactions. This data fuels Absci’s proprietary AI models, facilitating the design of antibodies that are later validated through wet-lab experiments.

This accelerated approach, completing the entire cycle within approximately six weeks, enhances the probability of successful development outcomes for biologic drug candidates.

Puja Sapra, PhD, SVP of Biologics Engineering & Oncology Targeted Delivery at AstraZeneca, commented: “This collaboration is an exciting opportunity to utilise Absci’s de novo AI antibody creation platform to design a potential new antibody therapy in oncology.”

The announcement follows Absci’s recent publication on the design and validation of de novo antibodies using their state-of-the-art ‘zero-shot’ generative AI model.

The collaboration between Absci and AstraZeneca should further demonstrate how AI can be used to revolutionise drug discovery.

(Photo by National Cancer Institute on Unsplash)

See also: AI & Big Data Expo: AI’s impact on decision-making in marketing

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Absci and AstraZeneca forge AI partnership to discover cancer treatments appeared first on AI News.

BSI: Closing ‘AI confidence gap’ key to unlocking benefits https://www.artificialintelligence-news.com/news/bsi-closing-ai-confidence-gap-key-unlocking-benefits/ Tue, 17 Oct 2023 14:34:00 +0000

The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public.

According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent welcomed AI’s role in reducing food waste. However, there is a prevailing lack of trust.

This scepticism could hinder the integration of AI technologies in the NHS, which is currently grappling with challenges like the COVID-19 backlog and an ageing population. Almost half of Britons (49 percent) support the use of AI to alleviate pressure on the healthcare system and reduce waiting times. However, only 20 percent have more confidence in AI than humans in detecting food contamination issues.

The study also highlighted a pressing need for education, as 65 percent of respondents felt patients should be informed about the use of AI tools in diagnosis or treatment. Meanwhile, 37 percent of respondents expect to use AI regularly in medical settings by 2030.

Craig Civil, Director of Data Science and AI at BSI, said:

“The magnitude of ways AI can shape the UK’s future means we are seeing some degree of hesitation of the unknown. This can be addressed by developing greater understanding and recognition that human involvement will always be needed if we are to make the best use of this technology, and by ensuring we have frameworks that are in place to govern its use and build trust.

Now is the moment for the UK to collaborate to balance the great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.

Closing the confidence gap and building the appropriate checks and balances can enable us to make not just good but great use of AI in every area of life and society.”

Some 60 percent of respondents believed consumers needed protections regarding AI technologies. The study also revealed that 61 percent of Britons are calling for international guidelines to ensure the safe use of AI. This demand reflects a global sentiment, with 50 percent of respondents highlighting the need for ethical safeguards on patient data use.

Harold Pradal, Chief Commercial Officer at BSI, commented:

“AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. There is a clear opportunity to harness AI to drive societal impact, change lives, and accelerate progress towards a better future and a sustainable world.

Closing the AI confidence gap is the first necessary step, it has to be delivered through education to help realise AI’s benefits and shape Society 5.0 in a positive way.”

The study’s findings are a call to action for the UK, urging collaboration and the establishment of frameworks to govern AI’s use.

The UK Government, recognising the importance of safe AI implementation, is set to host a global AI Safety Summit at the historic Bletchley Park on 1-2 November 2023. BSI is an official partner for the much-anticipated event.

(Photo by Suad Kamardeen on Unsplash)

See also: UK reveals AI Safety Summit opening day agenda

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post BSI: Closing ‘AI confidence gap’ key to unlocking benefits appeared first on AI News.
