science Archives - AI News

LG EXAONE Deep is a maths, science, and coding buff (18 March 2025)

LG AI Research has unveiled EXAONE Deep, a reasoning model that excels in complex problem-solving across maths, science, and coding.

The company highlighted the global challenge in creating advanced reasoning models, noting that currently, only a handful of organisations with foundational models are actively pursuing this complex area. EXAONE Deep aims to compete directly with these leading models, showcasing a competitive level of reasoning ability.

LG AI Research has focused its efforts on dramatically improving EXAONE Deep’s reasoning capabilities in core domains. The model also demonstrates a strong ability to understand and apply knowledge across a broader range of subjects.

The performance benchmarks released by LG AI Research are impressive:

  • Maths: In a demanding mathematics benchmark, the EXAONE Deep 32B model outperformed a competing model despite being only 5% of that model’s size. Furthermore, the 7.8B and 2.4B versions achieved first place in all major mathematics benchmarks for their respective model sizes.
  • Science and coding: In these areas, the EXAONE Deep models (7.8B and 2.4B) have secured the top spot across all major benchmarks.
  • MMLU (Massive Multitask Language Understanding): The 32B model achieved a score of 83.0 on the MMLU benchmark, which LG AI Research claims is the best performance among domestic Korean models.

The capabilities of the EXAONE Deep 32B model have already garnered international recognition.

Shortly after its release, it was included in the ‘Notable AI Models’ list by US-based non-profit research organisation Epoch AI. This listing places EXAONE Deep alongside its predecessor, EXAONE 3.5, making LG the only Korean entity with models featured on this prestigious list in the past two years.

Maths prowess

EXAONE Deep has demonstrated exceptional mathematical reasoning skills across its various model sizes (32B, 7.8B, and 2.4B). In assessments based on the 2025 academic year’s mathematics curriculum, all three models outperformed global reasoning models of comparable size.

The 32B model achieved a score of 94.5 in a general mathematics competency test and 90.0 in the American Invitational Mathematics Examination (AIME) 2024, a qualifying exam for the US Mathematical Olympiad.

In the AIME 2025, the 32B model matched the performance of DeepSeek-R1—a significantly larger 671B model. This result showcases EXAONE Deep’s efficient learning and strong logical reasoning abilities, particularly when tackling challenging mathematical problems.

The smaller 7.8B and 2.4B models also achieved top rankings in major benchmarks for lightweight and on-device models, respectively. The 7.8B model scored 94.8 on the MATH-500 benchmark and 59.6 on AIME 2025, while the 2.4B model achieved scores of 92.3 and 47.9 in the same evaluations.

Science and coding excellence

EXAONE Deep has also showcased remarkable capabilities in professional science reasoning and software coding.

The 32B model scored 66.1 on the GPQA Diamond test, which assesses problem-solving skills in doctoral-level physics, chemistry, and biology. In the LiveCodeBench evaluation, which measures coding proficiency, the model achieved a score of 59.5, indicating its potential for high-level applications in these expert domains.

The 7.8B and 2.4B models continued this trend of strong performance, both securing first place in the GPQA Diamond and LiveCodeBench benchmarks within their respective size categories. This achievement builds upon the success of the EXAONE 3.5 2.4B model, which previously topped Hugging Face’s LLM Leaderboard in the edge division.

Enhanced general knowledge

Beyond its specialised reasoning capabilities, EXAONE Deep has also demonstrated improved performance in general knowledge understanding.

The 32B model achieved an impressive score of 83.0 on the MMLU benchmark, positioning it as the top-performing domestic model in this comprehensive evaluation. This indicates that EXAONE Deep’s reasoning enhancements extend beyond specific domains and contribute to a broader understanding of various subjects.

LG AI Research believes that EXAONE Deep’s reasoning advancements represent a leap towards a future where AI can tackle increasingly complex problems and contribute to enriching and simplifying human lives through continuous research and innovation.

See also: Baidu undercuts rival AI models with ERNIE 4.5 and ERNIE X1


Autoscience Carl: The first AI scientist writing peer-reviewed papers (3 March 2025)

The newly-formed Autoscience Institute has unveiled ‘Carl,’ the first AI system crafting academic research papers to pass a rigorous double-blind peer-review process.

Carl’s research papers were accepted in the Tiny Papers track at the International Conference on Learning Representations (ICLR). Critically, these submissions were generated with minimal human involvement, heralding a new era for AI-driven scientific discovery.

Meet Carl: The ‘automated research scientist’

Carl represents a leap forward in the role of AI as not just a tool, but an active participant in academic research. Described as “an automated research scientist,” Carl applies natural language models to ideate, hypothesise, and cite academic work accurately. 

Crucially, Carl can read and comprehend published papers in mere seconds. Unlike human researchers, it works continuously, thus accelerating research cycles and reducing experimental costs.

According to Autoscience, Carl successfully “ideated novel scientific hypotheses, designed and performed experiments, and wrote multiple academic papers that passed peer review at workshops.”

This underlines the potential of AI to not only complement human research but, in many ways, surpass it in speed and efficiency.

Carl is a meticulous worker, but human involvement is still vital

Carl’s ability to generate high-quality academic work is built on a three-step process:

  1. Ideation and hypothesis formation: Leveraging existing research, Carl identifies potential research directions and generates hypotheses. Its deep understanding of related literature allows it to formulate novel ideas in the field of AI.
  2. Experimentation: Carl writes code, tests hypotheses, and visualises the resulting data through detailed figures. Its tireless operation shortens iteration times and reduces redundant tasks.
  3. Presentation: Finally, Carl compiles its findings into polished academic papers—complete with data visualisations and clearly articulated conclusions.

Although Carl’s capabilities make it largely independent, there are points in its workflow where human involvement is still required to adhere to computational, formatting, and ethical standards:

  • Greenlighting research steps: To avoid wasting computational resources, human reviewers provide “continue” or “stop” signals during specific stages of Carl’s process. This guidance steers Carl through projects more efficiently but does not influence the specifics of the research itself.
  • Citations and formatting: The Autoscience team ensures all references are correctly cited and formatted to meet academic standards. This is currently a manual step but ensures the research aligns with the expectations of its publication venue. 
  • Assistance with pre-API models: Carl occasionally relies on newer OpenAI and Deep Research models that lack auto-accessible APIs. In such cases, manual interventions – such as copy-pasting outputs – bridge these gaps. Autoscience expects these tasks to be entirely automated in the future when APIs become available.
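To make the division of labour concrete, here is a deliberately simplified orchestration skeleton showing how ideation, experimentation, and write-up might be chained together with human "continue/stop" checkpoints. It is not Autoscience's code: every function below is a hypothetical stand-in, included only to illustrate the structure described above.

```python
# Hypothetical orchestration skeleton for an automated research loop with human checkpoints.
# Nothing here is Autoscience's actual system; the functions are trivial stand-ins that
# mirror the ideation -> experimentation -> presentation steps and the "continue/stop"
# greenlights described above.

def ideate(literature):
    return f"hypothesis derived from {len(literature)} papers"   # stand-in for model-driven ideation

def run_experiments(hypothesis):
    return {"hypothesis": hypothesis, "metric": 0.0}             # stand-in for code-writing and evaluation

def write_paper(hypothesis, results):
    return f"Draft paper: {hypothesis} -> metric {results['metric']}"  # stand-in for paper generation

def human_greenlight(stage):
    # A reviewer issues "continue" or "stop" before compute is spent on the next stage.
    return input(f"Proceed with {stage}? [continue/stop] ").strip().lower() == "continue"

def run_research_project(literature):
    hypothesis = ideate(literature)
    if not human_greenlight("experimentation"):
        return None
    results = run_experiments(hypothesis)
    if not human_greenlight("write-up"):
        return None
    return write_paper(hypothesis, results)
```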

For Carl’s debut paper, the human team also helped craft the “related works” section and refine the language. These tasks, however, were unnecessary following updates applied before subsequent submissions.

Stringent verification process for academic integrity

Before submitting any research, the Autoscience team undertook a rigorous verification process to ensure Carl’s work met the highest standards of academic integrity:

  • Reproducibility: Every line of Carl’s code was reviewed and experiments were rerun to confirm reproducibility. This ensured the findings were scientifically valid and not coincidental anomalies.
  • Originality checks: Autoscience conducted extensive novelty evaluations to ensure that Carl’s ideas were new contributions to the field and not rehashed versions of existing publications.
  • External validation: A hackathon involving researchers from prominent academic institutions – such as MIT, Stanford University, and U.C. Berkeley – independently verified Carl’s research. Further plagiarism and citation checks were performed to ensure compliance with academic norms.

Undeniable potential, but raises larger questions

Achieving acceptance at a workshop as respected as the ICLR is a significant milestone, but Autoscience recognises the greater conversation this milestone may spark. Carl’s success raises larger philosophical and logistical questions about the role of AI in academic settings.

“We believe that legitimate results should be added to the public knowledge base, regardless of where they originated,” explained Autoscience. “If research meets the scientific standards set by the academic community, then who – or what – created it should not lead to automatic disqualification.”

“We also believe, however, that proper attribution is necessary for transparent science, and work purely generated by AI systems should be discernible from that produced by humans.”

Given the novelty of autonomous AI researchers like Carl, conference organisers may need time to establish new guidelines that account for this emerging paradigm, especially to ensure fair evaluation and intellectual attribution standards. To prevent unnecessary controversy at present, Autoscience has withdrawn Carl’s papers from ICLR workshops while these frameworks are being devised.

Moving forward, Autoscience aims to contribute to shaping these evolving standards. The company intends to propose a dedicated workshop at NeurIPS 2025 to formally accommodate research submissions from autonomous research systems. 

As the narrative surrounding AI-generated research unfolds, it’s clear that systems like Carl are not merely tools but collaborators in the pursuit of knowledge. But as these systems transcend typical boundaries, the academic community must adapt to fully embrace this new paradigm while safeguarding integrity, transparency, and proper attribution.

(Photo by Rohit Tandon)

See also: You.com ARI: Professional-grade AI research agent for businesses


Microsoft advances materials discovery with MatterGen (17 January 2025)

The discovery of new materials is key to solving some of humanity’s biggest challenges. However, as highlighted by Microsoft, traditional methods of discovering new materials can feel like “finding a needle in a haystack.”

Historically, finding new materials relied on laborious and costly trial-and-error experiments. More recently, computational screening of vast materials databases helped to speed up the process, but it remained a time-intensive process.

Now, a powerful new generative AI tool from Microsoft could accelerate this process significantly. Dubbed MatterGen, the tool steps away from traditional screening methods and instead directly engineers novel materials based on design requirements, offering a potentially game-changing approach to materials discovery.

Published in a paper in Nature, Microsoft describes MatterGen as a diffusion model that operates within the 3D geometry of materials. Where an image diffusion model might generate images from text prompts by tweaking pixel colours, MatterGen generates material structures by altering elements, positions, and periodic lattices in randomised structures. This bespoke architecture is designed specifically to handle the unique demands of materials science, such as periodicity and 3D arrangements.  
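As a rough sketch of what that looks like in code, the loop below runs a reverse-diffusion pass over a toy crystal representation (element logits, fractional coordinates, and lattice vectors). It illustrates the general technique rather than Microsoft's published implementation; the `denoiser` interface, tensor shapes, and step count are all assumptions.

```python
# Illustrative reverse-diffusion loop over a crystal representation.
# Not MatterGen's actual code: the denoiser interface and shapes are assumptions.
import torch

def sample_structure(denoiser, num_atoms=8, num_species=100, steps=1000):
    """Start from noise and iteratively denoise element types, positions, and lattice."""
    species = torch.randn(num_atoms, num_species)  # logits over candidate element types
    coords = torch.rand(num_atoms, 3)              # fractional coordinates in [0, 1)
    lattice = torch.randn(3, 3)                    # periodic lattice vectors

    for t in reversed(range(steps)):
        # The denoiser proposes adjustments at noise level t; conditioning on a
        # property target (e.g. a bulk modulus value) would also be passed in here.
        d_species, d_coords, d_lattice = denoiser(species, coords, lattice, t)
        species = species + d_species
        coords = (coords + d_coords) % 1.0         # wrap positions to respect periodicity
        lattice = lattice + d_lattice

    return species.argmax(dim=-1), coords, lattice
```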

“MatterGen enables a new paradigm of generative AI-assisted materials design that allows for efficient exploration of materials, going beyond the limited set of known ones,” explains Microsoft.

A leap beyond screening

Traditional computational methods involve screening enormous databases of potential materials to identify candidates with desired properties. Yet, even these methods are limited in their ability to explore the universe of unknown materials and require researchers to sift through millions of options before finding promising candidates.  

In contrast, MatterGen starts from scratch—generating materials based on specific prompts about chemistry, mechanical attributes, electronic properties, magnetic behaviour, or combinations of these constraints. The model was trained using over 608,000 stable materials compiled from the Materials Project and Alexandria databases.

In the comparison below, MatterGen significantly outperformed traditional screening methods in generating novel materials with specific properties—specifically a bulk modulus greater than 400 GPa, meaning they are hard to compress.

(Image: comparison of MatterGen’s generative approach against traditional screening methods for materials discovery.)

While screening exhibited diminishing returns over time as its pool of known candidates became exhausted, MatterGen continued generating increasingly novel results.
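The contrast is easy to reproduce with a toy simulation: screening draws without replacement from a fixed database, so hits above a property threshold eventually run out, while a generative model can keep proposing candidates. The distributions and counts below are invented purely to illustrate that dynamic and have nothing to do with MatterGen's actual numbers.

```python
# Toy simulation (invented numbers) of why screening a fixed database plateaus
# while a generative model keeps producing hits above a property threshold.
import random

random.seed(0)
THRESHOLD = 400  # e.g. a target bulk modulus in GPa

# A finite "known materials" database with only a small high-property tail.
database = [random.gauss(200, 80) for _ in range(10_000)]

def screen(db, budget):
    """Screening: evaluate up to `budget` candidates, without replacement."""
    pool = random.sample(db, k=min(budget, len(db)))
    return sum(value > THRESHOLD for value in pool)

def generate(budget):
    """Idealised generator biased towards the target property."""
    return sum(random.gauss(350, 60) > THRESHOLD for _ in range(budget))

for budget in (1_000, 10_000, 100_000):
    print(f"budget={budget:>7}  screening hits={screen(database, budget):>4}  generated hits={generate(budget):>6}")
```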

One common challenge encountered during materials synthesis is compositional disorder—the phenomenon where atoms randomly swap positions within a crystal lattice. Traditional algorithms often fail to distinguish between similar structures when deciding what counts as a “truly novel” material.  

To address this, Microsoft devised a new structure-matching algorithm that incorporates compositional disorder into its evaluations. The tool identifies whether two structures are merely ordered approximations of the same underlying disordered structure, enabling more robust definitions of novelty.

Proving MatterGen works for materials discovery

To prove MatterGen’s potential, Microsoft collaborated with researchers at Shenzhen Institutes of Advanced Technology (SIAT) – part of the Chinese Academy of Sciences – to experimentally synthesise a novel material designed by the AI.

The material, TaCr₂O₆, was generated by MatterGen to meet a bulk modulus target of 200 GPa. While the experimental result fell slightly short of the target, measuring a modulus of 169 GPa, the relative error was under 20%—a small discrepancy from an experimental perspective.

Interestingly, the final material exhibited compositional disorder between Ta and Cr atoms, but its structure aligned closely with the model’s prediction. If this level of predictive accuracy can be translated to other domains, MatterGen could have a profound impact on material designs for batteries, fuel cells, magnets, and more.

Microsoft positions MatterGen as a complementary tool to its previous AI model, MatterSim, which accelerates simulations of material properties. Together, the tools could serve as a technological “flywheel”, enhancing both the exploration of new materials and the simulation of their properties in iterative loops.

This approach aligns with what Microsoft refers to as the “fifth paradigm of scientific discovery,” in which AI moves beyond pattern recognition to actively guide experiments and simulations.  

Microsoft has released MatterGen’s source code under the MIT licence. Alongside the code, the team has made the model’s training and fine-tuning datasets available to support further research and encourage broader adoption of this technology.

Reflecting on generative AI’s broader scientific potential, Microsoft draws parallels to drug discovery, where such tools have already started transforming how researchers design and develop medicines. Similarly, MatterGen could reshape the way we approach materials design, particularly for critical domains such as renewable energy, electronics, and aerospace engineering. 

(Image credit: Microsoft)

See also: L’Oréal: Making cosmetics sustainable with generative AI


DeepMind releases AlphaFold database of nearly all human protein structures (23 July 2021)

British artificial intelligence giant DeepMind has released a database of nearly all human protein structures that it amassed as part of its AlphaFold program.

Last year, the organisers of the biennial Critical Assessment of protein Structure Prediction (CASP) recognised AlphaFold as a solution to the grand challenge of figuring out what shapes proteins fold into.

Professor John Moult, Co-founder and Chair of CASP, University of Maryland, said:

“We have been stuck on this one problem – how do proteins fold up – for nearly 50 years.

To see DeepMind produce a solution for this, having worked personally on this problem for so long and after so many stops and starts, wondering if we’d ever get there, is a very special moment.”

AlphaFold is a major scientific advance that will play a crucial role in helping scientists to solve important problems, such as the protein misfolding associated with Alzheimer’s, Parkinson’s, cystic fibrosis, and Huntington’s disease.

Arthur D. Levinson, Founder and CEO of Calico, explained:

“AlphaFold is a once-in-a-generation advance, predicting protein structures with incredible speed and precision.

This leap forward demonstrates how computational methods are poised to transform research in biology and hold much promise for accelerating the drug discovery process.”

AlphaFold has successfully predicted the structure of nearly all 20,000 proteins expressed by humans. An independent benchmark showed the system was capable of predicting the shape of a protein to a decent standard around 95 percent of the time.

DeepMind is now releasing its database covering every protein in the human body, along with the proteins of 20 additional organisms that scientists rely on for their research, free of charge for any researcher to use for the betterment of humankind.

“This will be one of the most important datasets since the mapping of the Human Genome,” said Ewan Birney, Deputy Director-General of EMBL and Director of EMBL-EBI.

Thanks to the “astonishingly accurate” models produced by AlphaFold, Professor Andrei Lupas, Director of the Max Planck Institute for Developmental Biology, says his team was able to solve a protein structure it had been stuck on for close to a decade.

The full AlphaFold protein structure database can be accessed for free online here.
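For anyone who wants to pull predictions programmatically rather than through the web interface, the database also serves structure files over plain HTTPS. The URL pattern and model version suffix below reflect our understanding of the database layout and may change over time, so treat them as assumptions to verify against the site.

```python
# Download a predicted structure from the AlphaFold Protein Structure Database.
# The URL pattern and "v4" model version are assumptions; check the site if the request fails.
import requests

def fetch_alphafold_pdb(uniprot_id: str, version: str = "v4") -> str:
    url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_{version}.pdb"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.text

# Example: human haemoglobin subunit alpha (UniProt accession P69905).
pdb_text = fetch_alphafold_pdb("P69905")
print(pdb_text.splitlines()[0])
```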

(Photo by Carolina Garcia Tavizon on Unsplash)


AI uses data from Oura wearables to predict COVID-19 three days early (2 June 2020)

Researchers have successfully used AI to analyse data from Oura’s wearable rings and predict COVID-19 symptoms three days early.

The researchers, from WVU Medicine and the Rockefeller Neuroscience Institute, first announced the potentially groundbreaking project in April.

At the time, the researchers found they could predict COVID-19 symptoms – including fever, cough, and fatigue – up to 24 hours before their onset.

“The holistic and integrated neuroscience platform developed by the RNI continuously monitors the human operating system, which allows for the accurate prediction of the onset of viral infection symptoms associated with COVID-19,” said Ali Rezai, M.D., executive chair of the WVU Rockefeller Neuroscience Institute.

“We feel this platform will be integral to protecting our healthcare workers, first responders, and communities as we adjust to life in the COVID-19 era.”

Participants in the study were asked to log neurological symptoms like stress and anxiety in an app. The Oura ring, meanwhile, automatically tracks physiological data like body temperature, heart rate, and sleep patterns.

“We are hopeful that Oura’s technology will advance how people identify and understand our body’s most nuanced physiological signals and warning signs, as they relate to infectious diseases like COVID-19,” explained Harpreet Rai, CEO of Oura Health.

“Partnering with the Rockefeller Neuroscience Institute on this important study helps fulfil Oura’s vision of offering data for the public good and empowering individuals with the personal insights needed to lead healthier lives.”  

Using an AI prediction model, the researchers have extended the window for predicting COVID-19 symptoms from 24 hours before onset to three days.
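The study’s model has not been released, but the general shape of the approach is familiar: daily wearable features are labelled by whether symptoms began within the following days, and a classifier is trained on those examples. The sketch below uses synthetic data and invented feature names purely to illustrate that setup; it is not the RNI/Oura pipeline.

```python
# Illustrative sketch with synthetic data: classifying whether symptoms will begin
# within three days from daily wearable features. Not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Columns: skin-temperature deviation (C), resting heart rate (bpm), sleep duration (h), HRV (ms)
X = rng.normal(loc=[0.0, 60.0, 7.0, 45.0], scale=[0.3, 5.0, 1.0, 10.0], size=(2_000, 4))

# Synthetic label: 1 if symptoms start within three days, loosely tied to temperature and heart rate.
risk = 0.8 * X[:, 0] + 0.05 * (X[:, 1] - 60.0) - 0.02 * (X[:, 3] - 45.0)
y = (risk + rng.normal(0.0, 0.3, size=2_000) > 0.4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```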

The accuracy rate for the current system is 90 percent. While impressive, that still means 100 in every 1,000 patients could be misdiagnosed if such a system were widely rolled out.

This isn’t the only research into the use of wearables to help tackle the COVID-19 pandemic – Fitbit is also conducting a large study into whether its popular wearables can detect markers which may indicate that a user is infected with the novel coronavirus and should therefore quarantine and seek a professional test.

With the COVID-19 pandemic looking set to disrupt our lives for the foreseeable future, it seems AI and wearables provide some hope of diagnosing cases earlier, limiting reinfection, and helping people return to some degree of normality.


Theresa May: AI is a ‘new weapon’ against cancer (21 May 2018)

Prime Minister Theresa May will use a speech today in Cheshire to highlight the potential of AI to diagnose cancer earlier.

The earlier cancer is diagnosed, the higher the rate of successful treatment; the later the diagnosis, the greater the risk of death or long-term debilitating effects.

In her speech, Mrs May will say:

“Late diagnosis of otherwise treatable illnesses is one of the biggest causes of avoidable deaths.

The development of smart technologies to analyse great quantities of data quickly, and with a higher degree of accuracy than is possible by human beings, opens up a whole new field of medical research and gives us a new weapon in our armoury in the fight against disease.

Achieving this mission will not only save thousands of lives, it will incubate a whole new industry around AI-in-healthcare. It will create high-skilled science jobs across the country – drawing on existing centres of excellence in places like Edinburgh, Oxford, and Leeds – and help to grow new ones.”

At least 50,000 people a year suffering from lung, prostate, ovarian, or bowel cancer will be diagnosed earlier due to AI, May will claim.

To achieve this goal, researchers will require access to large amounts of medical records to cross-reference patients’ lifestyles, genetics, and prior conditions to highlight when individuals are most at risk.

The UK’s National Health Service (NHS) has vast amounts of data. Every time a patient visits a service anywhere in the country, a record is made.

A patient’s medical record can include:

    • treatments received or ongoing
    • information about allergies
    • current medication(s)
    • any reactions to medications in the past
    • any known long-term conditions, such as diabetes or asthma
    • medical test results such as blood tests, allergy tests, and other screenings
    • any clinically relevant lifestyle information, such as smoking, alcohol or weight
    • personal data, such as age, name, and address
    • consultation notes, which a doctor takes during an appointment
    • hospital admission records, including the reason
    • hospital discharge records, which will include the results of treatment and whether any follow-up appointments or care are required
    • X-rays
    • photographs and image slides, such as MRI scans or CT scans

How this data is shared and used to improve medical care remains a controversial topic. For example, the NHS’ sharing of data with Google-owned DeepMind has often come under scrutiny.

An independent panel last year found the deal between DeepMind and the Royal Free NHS Foundation Trust to develop an app for diagnosing kidney disease was ‘illegal’ and did not do enough to safeguard patient data.

Theresa May’s party, the Conservatives, have also faced widespread criticism over under-funding and privatisation of the NHS — leading to increased staff pressure and longer waiting times for patients.

Two-thirds of NHS trusts reported having at least one cancer patient waiting more than six months last year, while almost seven in 10 trusts (69%) said their longest wait was worse than in 2010. One cancer patient waited 541 days for treatment.

If employed correctly, the automation offered by AI has the potential to greatly reduce staff pressure and improve patient care.

“Earlier detection and diagnosis could fundamentally transform outcomes for people with cancer, as well as saving the NHS money,” comments Sir Harpal Kumar, CEO of Cancer Research UK. “Advances in detection technologies depend on the intelligent use of data and have the potential to save hundreds of thousands of lives every year.”

“We need to ensure we have the right infrastructure, embedded in our health system, to make this possible.”

What are your thoughts on the use of AI in healthcare? Let us know in the comments.


Virtually Brainy: AI wires itself to navigate like mammals (10 May 2018)

Researchers have built an AI with virtual brain cells that wires itself to navigate an environment much like mammals do in nature.

Fully understanding the ‘internal GPS’ used by humans and other mammals to navigate from point A to B has eluded neuroscientists for decades. By analysing a new AI, which developed ‘grid cells’ similar to our brains, researchers believe we could be closer than ever.

The new AI was designed by a team from Google DeepMind and University College London to navigate a virtual environment from one point to another in the most efficient way possible.

In findings published in the journal Nature, the AI developed grid cells similar to those found in mammals. Grid cells were first discovered in 2005 by Norwegian neuroscientists May-Britt and Edvard Moser, earning them a share of the 2014 Nobel Prize in Physiology or Medicine.

The neuroscientists made their discovery after observing rats navigating and finding grid cells in their brains firing at points which formed a hexagonal pattern.

(Animation provided by DeepMind)

Grid cells work in combination with other brain cells. This includes ‘place cells’ which activate when a mammal is in a specific location, and ‘head direction cells’ which fire when the head is pointed in a specific direction.

How all of these cells work together is less well known, but the researchers are hoping to find some answers by observing the AI. They expect this will be just the start in using AI to gain a greater understanding of biology.
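For readers who want a feel for what “wiring itself” means in practice, the paper’s broad recipe is a recurrent network trained to integrate simulated velocity signals into place-cell-like position estimates, with grid-like firing emerging in an intermediate layer. The sketch below captures only that training loop in outline; the layer sizes, inputs, and random targets are placeholders standing in for the authors’ simulated trajectories and exact configuration.

```python
# Outline of a path-integration training setup with placeholder sizes and synthetic data.
# An LSTM receives velocity inputs and learns to reproduce place-cell-like targets;
# the intermediate layer is where grid-like activity was reported to emerge.
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    def __init__(self, n_place_cells=256, hidden=128, bottleneck=64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.bottleneck = nn.Linear(hidden, bottleneck)  # candidate grid-like units
        self.readout = nn.Linear(bottleneck, n_place_cells)

    def forward(self, velocities):
        # velocities: (batch, time, 3) = speed, sin(heading), cos(heading)
        h, _ = self.rnn(velocities)
        g = self.bottleneck(h)
        return self.readout(g), g

model = PathIntegrator()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):  # synthetic stand-in for simulated rodent trajectories
    velocities = torch.randn(32, 50, 3)
    place_targets = torch.softmax(torch.randn(32, 50, 256), dim=-1)
    predictions, grid_units = model(velocities)
    loss = nn.functional.cross_entropy(
        predictions.reshape(-1, 256), place_targets.reshape(-1, 256)
    )
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# After training on realistic trajectories, the activations in `grid_units` are what
# researchers would inspect for the hexagonal firing patterns described above.
```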

You can find the formatted Nature paper here, but note that it’s behind a controversial paywall. Alternatively, the unformatted full paper is available free here (PDF).

What are your thoughts on the use of AI to gain a deeper understanding of biology? Let us know in the comments.

