Meta will train AI models using EU user data
Tue, 15 Apr 2025 16:32:02 +0000
https://www.artificialintelligence-news.com/news/meta-will-train-ai-models-using-eu-user-data/

The post Meta will train AI models using EU user data appeared first on AI News.

Meta has confirmed plans to utilise content shared by its adult users in the EU (European Union) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Best 7 news API data feeds
Tue, 11 Mar 2025 11:22:55 +0000
https://www.artificialintelligence-news.com/news/best-7-news-api-data-feeds/

The post Best 7 news API data feeds appeared first on AI News.

Access to real-time and historical news data is important in today’s digital landscape. Businesses, developers, and analysts rely on news API data feeds to gather structured insights from sources ranging from global news outlets and blogs to forums and social media. APIs help integrate content into applications and workflows, enabling decision-making and scalable solutions.

What are news API data feeds?

News API data feeds are platforms that aggregate, organise, and deliver structured news data from multiple sources, like websites, blogs, forums, and online publications. They simplify the process of gathering information from different outlets and formatting it into machine-readable formats like JSON or XML. These feeds eliminate the manual effort of collecting and curating data by presenting structured content ready to be processed.
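The parsing pattern these feeds enable is the same across providers: load the structured payload and pull out the fields you need. The JSON below is a made-up, simplified example; real providers use their own field names, so treat the keys here as illustrative only.

```python
import json

# A hypothetical (simplified) JSON payload in the shape many news APIs return:
# a list of articles, each with a title, source, and publication timestamp.
sample_response = """
{
  "articles": [
    {"title": "AI regulation advances in the EU", "source": "Example Wire", "published_at": "2025-03-11T09:00:00Z"},
    {"title": "Chip market rallies on AI demand", "source": "Example Daily", "published_at": "2025-03-11T08:30:00Z"}
  ]
}
"""

def extract_headlines(raw_json: str) -> list[str]:
    """Pull the headline strings out of a structured feed response."""
    payload = json.loads(raw_json)
    return [article["title"] for article in payload.get("articles", [])]

print(extract_headlines(sample_response))
```

Because the data arrives already structured, downstream steps (storage, filtering, analysis) reduce to ordinary list and dictionary handling rather than scraping and cleaning HTML.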

Top 7 news API data feeds

Let’s explore seven top news API data feeds leading the industry. These tools provide businesses with real-time access, historical coverage, and features tailored to various industries.

1. Webz.io

Webz.io is one of the most comprehensive news APIs, offering both real-time and archived coverage from the open and deep web, as well as the dark web. It provides highly customisable data feeds for industries like finance, risk intelligence, and cybersecurity.

Key features:

  • Access to open, deep, and dark web data.
  • Advanced filters for sentiment, topic, and geographic coverage.
  • Support for visualisation and actionable risk monitoring.

Use case: Media monitoring, sentiment analysis, and threat intelligence for corporate security teams and financial organisations.

Why Webz.io? Its expansive source list and deep customisation options make it ideal for specialised industries like cybersecurity and financial analytics.

2. GNews API

GNews API is a simple, lightweight platform that aggregates reliable news from around the globe. It is perfect for small-scale applications or developers looking for affordable yet efficient solutions.

Key features:

  • Real-time global coverage.
  • Filters for topics, languages, and countries.
  • Affordable pricing plans suitable for startups.

Use case: Localisation-focused news widgets or small aggregators serving specific regional or language-based audiences.

Why GNews? Its intuitive design and affordability make GNews a great entry point for developers and startups.

3. The Guardian API

The Guardian API provides direct access to high-quality journalism from the Guardian’s editorial content. It offers structured news, tags, and metadata from one of the world’s most respected news organisations.

Key features:

  • High-quality editorial content.
  • Filtering by topic or category.
  • Media-rich data integration, including multimedia embedding.

Use case: Apps or research projects requiring trusted editorial sources for accurate analysis or curated content.

Why The Guardian API? Focused on credible data, it works best for platforms and professionals prioritising journalistic integrity.
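A query against the Guardian’s content API can be sketched as below. This is a sketch based on the API’s publicly documented search endpoint; verify the endpoint path and parameter names against the current documentation, and note that `YOUR_KEY` is a placeholder for a real API key.

```python
from urllib.parse import urlencode

# The Guardian's content search endpoint (per its public documentation at the
# time of writing -- check the current docs before relying on this).
GUARDIAN_SEARCH = "https://content.guardianapis.com/search"

def build_search_url(query: str, api_key: str, page_size: int = 10) -> str:
    """Assemble a search request URL with the query and key as parameters."""
    params = {"q": query, "page-size": page_size, "api-key": api_key}
    return f"{GUARDIAN_SEARCH}?{urlencode(params)}"

url = build_search_url("artificial intelligence", api_key="YOUR_KEY")
print(url)
# Fetching this URL (e.g. with urllib.request.urlopen or the requests library)
# returns JSON whose "response" object contains a "results" list of articles.
```

Keeping URL construction in a small helper like this makes the request easy to test without touching the network, and the same pattern transfers to most of the other APIs in this list.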

4. Bloomberg API

Renowned for its financial insights, Bloomberg API delivers in-depth business coverage and real-time data for institutions and professional investors. It specialises in market data, financial news, and economic reports.

Key features:

  • Exclusive financial data and analysis.
  • Real-time market coverage.
  • Seamless integration with Bloomberg’s terminals.

Use case: Analysts and investment professionals monitoring market trends and making data-driven decisions.

Why Bloomberg? Its precise focus on finance makes it essential for institutions heavily reliant on actionable market news.

5. Financial Times API

The Financial Times API is a premium solution that supplies business and economic-focused news. It is built for professional teams that require deep insights into global markets and economic activity.

Key features:

  • Premium content on global finance and markets.
  • Access to detailed economic reports and analyses.
  • Subscription access for gated content.

Use case: Economists, researchers, or executives tracking global economic trends and industry reports.

Why Financial Times? Its premium-quality data and economic insights provide unmatched value for businesses targeting comprehensive market analysis.

6. Opoint

Opoint specialises in news monitoring and sentiment analysis, making it particularly useful for PR, marketing, and branding teams. It supports multiple languages and global sources with cutting-edge media monitoring capabilities.

Key features:

  • Real-time monitoring with sentiment tagging.
  • Multilingual and multi-source coverage.
  • Tailored brand monitoring and competitor tracking.

Use case: PR agencies and marketers monitoring sentiment shifts or competitive landscape changes like product launches.

Why Opoint? Its advanced monitoring features help organisations stay agile in rapidly shifting media environments.

7. Mediastack API

Mediastack combines accessibility with scalability, offering a mix of free plans for developers and paid tiers for advanced features. It aggregates news in real time from over 7,500 sources globally.

Key features:

  • Free and affordable paid plans.
  • Multilingual support and geo-targeted searches.
  • Scalable for both startups and growing enterprises.

Use case: Developers building applications that require versatile, budget-friendly news feeds with reliable real-time updates.

Why Mediastack? Its affordability and flexibility cater to businesses of all sizes, making it a versatile option for a wide range of users.

Use cases for news API data feeds

The applications of news API data feeds are as diverse as the industries relying on them:

Financial intelligence: Investment tools use APIs to analyse market-moving news in real time.

Media monitoring: PR agencies use media insights to track brand mentions and sentiment.

Risk assessment: Governments and corporations assess geopolitical risks or public sentiment.

Content platforms: Aggregators curate articles, summaries, and headlines for apps/websites.

AI & predictive analysis: APIs provide data for machine learning models that forecast trends.
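The media-monitoring use case above can be sketched with a toy pipeline: ingest headlines, filter for a brand, tag each mention. Real monitoring platforms use trained sentiment models; the keyword lists and headlines here are invented purely to show the shape of the pipeline.

```python
# Crude keyword-based sentiment tags. A production system would replace this
# with a trained classifier; the sets below are illustrative placeholders.
POSITIVE = {"soars", "wins", "growth", "record"}
NEGATIVE = {"slump", "breach", "lawsuit", "recall"}

def track_mentions(headlines: list[str], brand: str) -> list[tuple[str, str]]:
    """Return (headline, tone) pairs for headlines mentioning the brand."""
    results = []
    for headline in headlines:
        words = {w.strip(".,").lower() for w in headline.split()}
        if brand.lower() not in words:
            continue  # not a brand mention, skip
        if words & POSITIVE:
            tone = "positive"
        elif words & NEGATIVE:
            tone = "negative"
        else:
            tone = "neutral"
        results.append((headline, tone))
    return results

feed = [
    "Acme soars after earnings beat",
    "Regulators probe data breach at Acme",
    "Unrelated market news",
]
print(track_mentions(feed, "Acme"))
```

The same ingest-filter-score structure underpins the other use cases too, with the scoring step swapped for market-impact models, risk classifiers, or summarisers as needed.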

(Image source: Unsplash)

The ethics of AI and how they affect you
Mon, 10 Mar 2025 06:39:00 +0000
https://www.artificialintelligence-news.com/news/the-ethics-of-ai-and-how-they-affect-you/

The post The ethics of AI and how they affect you appeared first on AI News.

Having worked with AI since 2018, I’m watching its slow but steady pick-up alongside the unstructured bandwagon-jumping with considerable interest. Now that the initial fear has subsided somewhat about a robotic takeover, discussion about the ethics that will surround the integration of AI into everyday business structures has taken its place.  

A whole new range of roles will be required to handle ethics, governance and compliance, all of which are going to gain enormous value and importance to organisations.

Probably the most essential of these will be an AI Ethics Specialist, who will be required to ensure Agentic AI systems meet ethical standards like fairness and transparency. This role will involve using specialised tools and frameworks to address ethical concerns efficiently and avoid potential legal or reputational risks. Human oversight to ensure transparency and responsible ethics is essential to maintain the delicate balance between data-driven decisions, intelligence and intuition.

In addition, roles like Agentic AI Workflow Designer, AI Interaction and Integration Designer will ensure AI integrates seamlessly across ecosystems and prioritises transparency, ethical considerations, and adaptability. An AI Overseer will also be required, to monitor the entire Agentic stack of agents and arbiters, the decision-making elements of AI.   

For anyone embarking on the integration of AI into their organisation and wanting to ensure the technology is introduced and maintained responsibly, I can recommend consulting the United Nations’ principles. These 10 principles were created in 2022 in response to the ethical challenges raised by the increasing prevalence of AI.

So what are these ten principles, and how can we use them as a framework?

First, do no harm 

As befits technology with an autonomous element, the first principle focuses on the deployment of AI systems in ways that will avoid any negative impact on social, cultural, economic, natural or political environments. An AI lifecycle should be designed to respect and protect human rights and freedoms. Systems should be monitored to ensure that this remains the case and that no long-term damage is being done.

Avoid AI for AI’s sake

Ensure that the use of AI is justified, appropriate and not excessive. There is a distinct temptation to become over-zealous in the application of this exciting technology and it needs to be balanced against human needs and aims and should never be used at the expense of human dignity. 

Safety and security

Safety and security risks should be identified, addressed and mitigated throughout the life cycle of the AI system and on an ongoing basis. Exactly the same robust health and safety frameworks should be applied to AI as to any other area of the business.

Equality

Similarly, AI should be deployed with the aim of ensuring the equal and just distribution of the benefits, risks and cost, and to prevent bias, deception, discrimination and stigma of any kind.

Sustainability

AI should be aimed at promoting environmental, economic and social sustainability. Continual assessment should be made to address negative impacts, including any on the generations to come. 

Data privacy, data protection and data governance

Adequate data protection frameworks and data governance mechanisms should be established or enhanced to ensure that the privacy and rights of individuals are maintained in line with legal guidelines around data integrity and personal data protection. No AI system should impinge on the privacy of another human being.

Human oversight

Human oversight should be guaranteed to ensure that the outcomes of using AI are fair and just. Human-centric design practices should be employed, with the capacity for a human to step in at any stage to make a decision on how and when AI should be used, and to override any decision made by AI. Rather dramatically but entirely reasonably, the UN suggests any decision affecting life or death should not be left to AI.

Transparency and Explainability

This, to my mind, forms part of the guidelines around equality. Everyone using AI should fully understand the systems they are using, the decision-making processes used by the system and its ramifications. Individuals should be told when a decision regarding their rights, freedoms or benefits has been made by artificial intelligence, and most importantly, the explanation should be made in a way that makes it comprehensible. 

Responsibility and Accountability

This is the whistleblower principle, which covers audit and due diligence as well as protection for whistleblowers, to make sure that someone is responsible and accountable for the decisions made by, and use of, AI. Governance should be put in place around the ethical and legal responsibility of humans for any AI-based decisions. Any such decisions that cause harm should be investigated and action taken.

Inclusivity and participation

Just as in any other area of business, when designing, deploying and using artificial intelligence systems, an inclusive, interdisciplinary and participatory approach should be taken, which also includes gender equality. Stakeholders and any communities that are affected should be consulted and informed of any benefits and potential risks.

Building your AI integration around these central pillars should help you feel reassured that your entry into AI integration is built on an ethical and solid foundation. 

Photo by Immo Wegmann on Unsplash

DeepSeek ban? China data transfer boosts security concerns
Fri, 07 Feb 2025 17:44:01 +0000
https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices and a probe has been launched by Microsoft and OpenAI over a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a much starker and documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That begs the question of whether it’s truly “free” or if the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even potential economic manipulations.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

Microsoft and OpenAI probe alleged data theft by DeepSeek
Wed, 29 Jan 2025 15:28:41 +0000
https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/

The post Microsoft and OpenAI probe alleged data theft by DeepSeek appeared first on AI News.

Microsoft and OpenAI are investigating a potential breach of the AI firm’s system by a group allegedly linked to Chinese AI startup DeepSeek.

According to Bloomberg, the investigation stems from suspicious data extraction activity detected in late 2024 via OpenAI’s application programming interface (API), sparking broader concerns over international AI competition.

Microsoft, OpenAI’s largest financial backer, first identified the large-scale data extraction and informed the ChatGPT maker of the incident. Sources believe the activity may have violated OpenAI’s terms of service, or that the group may have exploited loopholes to bypass restrictions limiting how much data they could collect.

DeepSeek has quickly risen to prominence in the competitive AI landscape, particularly with the release of its latest model, R1, on 20 January.

Billed as a rival to OpenAI’s ChatGPT in performance but developed at a significantly lower cost, R1 has shaken up the tech industry. Its release triggered a sharp decline in tech and AI stocks that wiped billions from US markets in a single week.

David Sacks, the White House’s newly appointed “crypto and AI czar,” alleged that DeepSeek may have employed questionable methods to achieve its AI’s capabilities. In an interview with Fox News, Sacks noted evidence suggesting that DeepSeek had used “distillation” to train its AI models using outputs from OpenAI’s systems.

“There’s substantial evidence that what DeepSeek did here is they distilled knowledge out of OpenAI’s models, and I don’t think OpenAI is very happy about this,” Sacks told the network.  

Model distillation involves training one AI system using data generated by another, potentially allowing a competitor to develop similar functionality. This method, when applied without proper authorisation, has stirred ethical and intellectual property debates as the global race for AI supremacy heats up.  
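The distillation idea described above can be made concrete with a minimal numerical sketch: a student model is fitted to the teacher’s *soft* output distribution rather than to hard labels, typically by minimising the KL divergence between temperature-softened distributions. The logits and temperature below are invented for illustration; real distillation trains a neural network against very large volumes of teacher outputs.

```python
import math

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Convert logits to a probability distribution, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p: list[float], q: list[float]) -> float:
    """KL(p || q): how far the student's distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Made-up logits for one input, standing in for real model outputs.
teacher_logits = [4.0, 1.0, 0.2]
student_logits = [2.5, 1.5, 0.5]

# A higher temperature softens both distributions, exposing the teacher's
# relative preferences among wrong answers, which is the signal distillation exploits.
T = 2.0
teacher_soft = softmax(teacher_logits, T)
student_soft = softmax(student_logits, T)
loss = kl_divergence(teacher_soft, student_soft)
print(round(loss, 4))  # the training objective: drive this toward zero
```

When this loss is minimised across many examples, the student absorbs much of the teacher’s behaviour without ever seeing the teacher’s weights or training data, which is why unauthorised distillation via API outputs is so contentious.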

OpenAI declined to comment specifically on the accusations against DeepSeek but acknowledged the broader risk posed by model distillation, particularly by Chinese companies.  

“We know PRC-based companies — and others — are constantly trying to distill the models of leading US AI companies,” a spokesperson for OpenAI told Bloomberg.  

Geopolitical and security concerns  

Growing tensions around AI innovation now extend into national security. CNBC reported that the US Navy has banned its personnel from using DeepSeek’s products, citing fears that the Chinese government could exploit the platform to access sensitive information.

In an email dated 24 January, the Navy warned its staff against using DeepSeek AI “in any capacity” due to “potential security and ethical concerns associated with the model’s origin and usage.”

Critics have highlighted DeepSeek’s privacy policy, which permits the collection of data such as IP addresses, device information, and even keystroke patterns—a scope of data collection considered excessive by some experts.

Earlier this week, DeepSeek stated it was facing “large-scale malicious attacks” against its systems. A banner on its website informed users of a temporary sign-up restriction.

The growing competition between the US and China in particular in the AI sector has underscored wider concerns regarding technological ownership, ethical governance, and national security.  

Experts warn that as AI systems advance and become increasingly integral to global economic and strategic planning, disputes over data usage and intellectual property are only likely to intensify. Accusations such as those against DeepSeek amplify alarm over China’s rapid development in the field and its potential quest to bypass US-led safeguards through reverse engineering and other means.  

While OpenAI and Microsoft continue their investigation into the alleged misuse of OpenAI’s platform, businesses and governments alike are paying close attention. The case could set a precedent for how AI developers police model usage and enforce terms of service.

For now, the response from both US and Chinese stakeholders highlights how AI innovation has become not just a race for technological dominance, but a fraught geopolitical contest that is shaping 21st-century power dynamics.

(Image by Mohamed Hassan)

See also: Qwen 2.5-Max outperforms DeepSeek V3 in some benchmarks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Microsoft and OpenAI probe alleged data theft by DeepSeek appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/microsoft-and-openai-probe-alleged-data-theft-deepseek/feed/ 0
Rodolphe Malaguti, Conga: Poor data hinders AI in public services https://www.artificialintelligence-news.com/news/rodolphe-malaguti-conga-poor-data-ai-potential-in-public-services/ https://www.artificialintelligence-news.com/news/rodolphe-malaguti-conga-poor-data-ai-potential-in-public-services/#respond Tue, 21 Jan 2025 11:15:19 +0000 https://www.artificialintelligence-news.com/?p=16916 According to Rodolphe Malaguti, Product Strategy and Transformation at Conga, poor data structures and legacy systems are hindering the potential of AI in transforming public services. Taxpayer-funded services in the UK, from the NHS to local councils, are losing out on potential productivity savings of £45 billion per year due to an overwhelming reliance on […]

The post Rodolphe Malaguti, Conga: Poor data hinders AI in public services appeared first on AI News.

]]>
According to Rodolphe Malaguti, Product Strategy and Transformation at Conga, poor data structures and legacy systems are hindering the potential of AI in transforming public services.

Taxpayer-funded services in the UK, from the NHS to local councils, are losing out on potential productivity savings of £45 billion per year due to an overwhelming reliance on outdated technology—a figure equivalent to the total cost of running every primary school in the country for a year.   

A report published this week highlights how nearly half of public services are still not accessible online. This forces British citizens to engage in time-consuming and frustrating processes such as applying for support in person, enduring long wait times on hold, or travelling across towns to council offices. Public sector workers are similarly hindered by inefficiencies, such as sifting through mountains of physical letters, which slows down response times and leaves citizens to bear the brunt of government red tape.


“As this report has shown, there is clearly a gap between what the government and public bodies intend to achieve with their digital projects and what they actually deliver,” explained Malaguti. “The public sector still relies heavily upon legacy systems and has clearly struggled to tackle existing poor data structures and inefficiencies across key departments. No doubt this has had a clear impact on decision-making and hindered vital services for vulnerable citizens.”

The struggles persist even in deeply personal and critical scenarios. For example, the current process for registering a death still demands a physical presence, requiring grieving individuals to manage cumbersome bureaucracy while mourning the loss of a loved one. Other outdated processes unnecessarily burden small businesses—one striking example being the need to publish notices in local newspapers simply to purchase a lorry licence, creating further delays and hindering economic growth.

A lack of coordination between departments amplifies these challenges. In some cases, government bodies are using over 500 paper-based processes, leaving systems fragmented and inefficient. Vulnerable individuals suffer disproportionately under this disjointed framework. For instance, patients with long-term health conditions can be forced into interactions with up to 40 different services, repeating the same information as departments repeatedly fail to share data.

“The challenge is that government leaders have previously focused on technology and online interactions, adding layers to services whilst still relying on old data and legacy systems—this has ultimately led to inefficiencies across departments,” added Malaguti.

“Put simply, they have failed to address existing issues or streamline their day-to-day operations. It is critical that data is more readily available and easily shared between departments, particularly if leaders are hoping to employ new technology like AI to analyse this data and drive better outcomes or make strategic decisions for the public sector as a whole.”

Ageing Infrastructure: High costs and security risks

The report underscores that ageing infrastructure comes at a steep financial and operational cost. More than one in four digital systems used across the UK's central government are outdated, with this figure ballooning to 70 percent in some departments. Maintaining legacy systems also costs up to three to four times more than keeping technology up to date.

Furthermore, a growing number of these outdated systems are now classified as “red-rated” for reliability and cybersecurity risk. Alarmingly, NHS England experienced 123 critical service outages last year alone. These outages often meant missed appointments and forced healthcare workers to resort to paper-based systems, making it harder for patients to access care when they needed it most.

Malaguti stresses that addressing such challenges goes beyond merely upgrading technology.

“The focus should be on improving data structure, quality, and timeliness. All systems, data, and workflows must be properly structured and fully optimised prior to implementation for these technologies to be effective. Public sector leaders should look to establish clear measurable objectives, as they continue to improve service delivery and core mission impacts.”

Transforming public services

In response to these challenges, Technology Secretary Peter Kyle is announcing an ambitious overhaul of public sector technology to usher in a more modern, efficient, and accessible system. Emphasising the use of AI, digital tools, and “common sense,” the goal is to reform how public services are designed and delivered—streamlining operations across local government, the NHS, and other critical departments.

A package of tools known as ‘Humphrey’ – named after the fictional Whitehall official in popular BBC drama ‘Yes, Minister’ – is set to be made available to all civil servants soon, with some available today.

Humphrey includes:

  • Consult: Analyses the thousands of responses received during government consultations within hours, presenting policymakers and experts with interactive dashboards to directly explore public feedback.
  • Parlex: A tool that enables policymakers to search and analyse decades of parliamentary debate, helping them refine their thinking and manage bills more effectively through both the Commons and the Lords.
  • Minute: A secure AI transcription service that creates customisable meeting summaries in the formats needed by public servants. It is currently being used by multiple central departments in meetings with ministers and is undergoing trials with local councils.
  • Redbox: A generative AI tool tailored to assist civil servants with everyday tasks, such as summarising policies and preparing briefings.
  • Lex: A tool designed to support officials in researching the law by providing analysis and summaries of relevant legislation for specific, complex issues.

The new tools and changes will help to tackle the inefficiencies highlighted in the report while delivering long-term cost savings. By reducing the burden of administrative tasks, the reforms aim to enable public servants, such as doctors and nurses, to spend more time helping the people they serve. For businesses, this could mean faster approvals for essential licences and permits, boosting economic growth and innovation.

“The government’s upcoming reforms and policy updates, where it is expected to deliver on its ‘AI Opportunities Action Plan,’ [will no doubt aim] to speed up processes,” said Malaguti. “Public sector leaders need to be more strategic with their investments and approach these projects with a level head, rolling out a programme in a phased manner, considering each phase of their operations.”

This sweeping transformation will also benefit from an expanded role for the Government Digital Service (GDS). Planned measures include using the GDS to identify cybersecurity vulnerabilities in public sector systems that could be exploited by hackers, enabling services to be made more robust and secure. Such reforms are critical to protect citizens, particularly as the reliance on digital solutions increases.

The broader aim of these reforms is to modernise the UK’s public services to reflect the convenience and efficiencies demanded in a digital-first world. By using technologies like AI, the government hopes to make interactions with public services faster and more intuitive while saving billions for taxpayers in the long run.

As technology reshapes the future of how services are delivered, leaders must ensure they are comprehensively addressing the root causes of inefficiency—primarily old data infrastructure and fragmented workflows. Only then can technological solutions, whether AI or otherwise, achieve their full potential in helping services deliver for the public.

(Photo by Claudio Schwarz)

See also: Biden’s executive order targets energy needs for AI data centres

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Rodolphe Malaguti, Conga: Poor data hinders AI in public services appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/rodolphe-malaguti-conga-poor-data-ai-potential-in-public-services/feed/ 0
NJ cops demand protections against data brokers https://www.artificialintelligence-news.com/news/nj-cops-demand-protections-against-data-brokers/ https://www.artificialintelligence-news.com/news/nj-cops-demand-protections-against-data-brokers/#respond Mon, 16 Dec 2024 18:25:08 +0000 https://www.artificialintelligence-news.com/?p=16711 Privacy laws in the United States are a patchwork at best. More often than not, they miss the mark, leaving most people with little actual privacy. When such laws are enacted, they can seem tailored to protect those in positions of power. Even laws designed to protect crime victims might end up protecting the names […]

The post NJ cops demand protections against data brokers appeared first on AI News.

]]>
Privacy laws in the United States are a patchwork at best. More often than not, they miss the mark, leaving most people with little actual privacy. When such laws are enacted, they can seem tailored to protect those in positions of power.

Even laws designed to protect crime victims might end up protecting the names of abusive officers by labelling them as victims of crime in cases like resisting arrest or assaulting an officer. Such accusations are often used in cases of excessive force, keeping cops’ names out of the spotlight.

For example, a recent New Jersey law emerged from a tragic event in which a government employee faced violence, sparking a legislative response. Known as “Daniel’s Law,” it was created after the personal information of a federal judge’s family was used by a murderer to track them down. Instead of a broader privacy law that could protect all residents of New Jersey, it focused exclusively on safeguarding certain public employees.

Under the law, judges, prosecutors, and police officers can request that their personal information (addresses and phone numbers, for example) be scrubbed from public databases. Popular services that people use to look up information, such as Whitepages or Spokeo, must comply. While this sounds like a win for privacy, the protections stop there. The average citizen is still left exposed, with no legal recourse if their personal data is misused or sold.

At the centre of the debate is a lawyer who’s taken up the cause of protecting cops’ personal data. He’s suing numerous companies for making this type of information accessible. While noble at first glance, a deeper look raises questions.

It transpires that the lawyer’s company has previously collected and monetised personal data. And when a data service responded to his demands by freezing access to some of the firm’s databases, he and his clients cried foul — despite specifically requesting restrictions on how their information could be used.

It’s also worth noting how unevenly data protection measures are applied. Cops, for instance, frequently rely on the same tools and databases they’re now asking to have restricted. These services have long been used by law enforcement for investigations and background checks. Yet, when law enforcement data appears in such systems, special treatment is demanded.

A recent anecdote involved a police union leader who was shown a simple property record pulled from an online database. The record displayed basic details like his home address and his property’s square footage — information anyone could find with a few clicks. His reaction was one of shock and anger – an obvious disconnect.

For everyday citizens, this level of data exposure is a given. But for law enforcement, it requires a level of granular exclusion that’s not practical.

Perhaps everyone, including law enforcement personnel, deserves better safeguards against data harvesting and misuse. But what Daniel’s Law and the later events involving police officers point to is the need to improve how data is treated for everyone, not just one group of society.

Instead of expanding privacy rights to all New Jersey residents, the law carves out exceptions for the powerful — leaving the rest of the population as vulnerable as ever.

(Photo by Unsplash)

See also: EU AI legislation sparks controversy over data transparency

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NJ cops demand protections against data brokers appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/nj-cops-demand-protections-against-data-brokers/feed/ 0
A new decentralised AI ecosystem and its implications https://www.artificialintelligence-news.com/news/a-new-decentralised-ai-ecosystem-and-its-implications/ https://www.artificialintelligence-news.com/news/a-new-decentralised-ai-ecosystem-and-its-implications/#respond Mon, 16 Dec 2024 08:38:35 +0000 https://www.artificialintelligence-news.com/?p=16706 Artificial Intelligence and its associated innovations have revamped the global technological landscape, with recent data released by the US government predicting 13% growth in IT-related opportunities over the next six years – potentially adding 667,600 new jobs to the sector. Researchers have stated that by 2034, the AI sector’s cumulative valuation may reach $3.6 trillion […]

The post A new decentralised AI ecosystem and its implications appeared first on AI News.

]]>
Artificial Intelligence and its associated innovations have revamped the global technological landscape, with recent data released by the US government predicting 13% growth in IT-related opportunities over the next six years – potentially adding 667,600 new jobs to the sector.

Researchers have stated that by 2034, the AI sector’s cumulative valuation may reach $3.6 trillion across industries. The healthcare sector has already integrated AI-based diagnostic tools, with 38% of today’s major medical providers using the technology.

The financial sector is also expecting AI to contribute approximately $15.7 trillion to the global economy by 2030, and the retail industry anticipates anywhere between $400 billion and $660 billion through AI-driven customer experiences annually.

It is estimated that approximately 83% of companies now have AI exploration as an agenda item for continued technical growth, especially given its capacity to drive innovation, enhance efficiency, and create sustainable competitive advantage.

Decentralising AI’s foundations

While AI’s potential is seemingly limitless, its rapid growth has brought a challenge – the centralisation of AI development and data management.

As AI systems become more sophisticated, risks like dataset manipulation, biased training models, and opaque decision-making processes threaten to undermine their potential.

Different blockchain tech providers have taken steps to decentralise the sector, offering infrastructure frameworks that change how AI systems are developed, trained, and deployed.

Space and Time (SXT) has devised a verifiable database that aims to bridge the gap between disparate areas, providing users with transparent, secure development tools so that AI agents can execute transactions with greater data integrity.

The platform’s innovation lies in its ability to provide contextual data which AI agents can use for executing trades and purchases in ways that end-users can validate.

Another project of note is Chromia. It takes a similar approach, with a focus on creating a decentralised architecture to handle complex, data-intensive AI applications. Speaking about the platform’s capabilities, Yeou Jie Goh, Head of Business Development at Chromia, said:

“Our relational blockchain is specifically designed to support AI applications, performing hundreds of read-write operations per transaction and indexing data in real-time. We’re not just building a blockchain; we’re creating the infrastructure for the next generation of AI development.”

Chromia wants to lower the barriers to entry for data scientists and machine learning engineers.

By providing a SQL-based relational blockchain, the platform makes it easier for technical professionals to build and deploy AI applications on decentralised infrastructure. “Our mission is to position Chromia as the transparency layer of Web3, providing a robust backbone for data integrity across applications,” Goh said.

Chromia has already formed partnerships with Elfa AI, Chasm Network, and Stork.

Establishing a roadmap for technological sovereignty

The synergy between AI and blockchain is more than a fad; it is a reimagining of AI’s infrastructure. Space and Time, for instance, is working to expand its ecosystem across multiple domains, including AI, DeFi, gaming, and decentralised physical infrastructure networks (DePIN).

Its strategy focuses on onboarding developers and building a mainnet that delivers verifiable data to smart contracts and AI agents.

Chromia is similarly ambitious, having launched a $20 million Data and AI Ecosystem Fund earlier this year. The project’s ‘Asgard Mainnet Upgrade’ adds an ‘Extensions’ feature that makes applications more adaptable for users.

The implications of AI’s shift toward decentralisation are of significant interest to Nate Holiday, CEO of Space and Time. He predicts that blockchain-based transactions associated with AI agents could grow from the current 3% of the market to 30% in the near future. He said:

“Ushering in this inevitable, near-term future is going to require data infrastructure like SXT that provides AI agents with the context that they need to execute trades and purchases in a way that the end user can verify.”

Chromia’s Yeou Jie Goh sees the transition not just as a technological innovation but as a means of creating a more transparent, secure, and democratised technological ecosystem. By using blockchain’s inherent strengths – immutability, transparency, and decentralisation – the two companies are working to create intelligent systems that are powerful, accountable, ethical, and aligned with human values. 

The post A new decentralised AI ecosystem and its implications appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/a-new-decentralised-ai-ecosystem-and-its-implications/feed/ 0
AI governance gap: 95% of firms haven’t implemented frameworks https://www.artificialintelligence-news.com/news/ai-governance-gap-95-of-firms-havent-frameworks/ https://www.artificialintelligence-news.com/news/ai-governance-gap-95-of-firms-havent-frameworks/#respond Thu, 17 Oct 2024 16:38:58 +0000 https://www.artificialintelligence-news.com/?p=16318 Robust governance is essential to mitigate AI risks and maintain responsible systems, but the majority of firms are yet to implement a framework. Commissioned by Prove AI and conducted by Zogby Analytics, the report polled over 600 CEOs, CIOs, and CTOs from large companies across the US, UK, and Germany. The findings show that 96% […]

The post AI governance gap: 95% of firms haven’t implemented frameworks appeared first on AI News.

]]>
Robust governance is essential to mitigate AI risks and maintain responsible systems, but the majority of firms are yet to implement a framework.

Commissioned by Prove AI and conducted by Zogby Analytics, the report polled over 600 CEOs, CIOs, and CTOs from large companies across the US, UK, and Germany. The findings show that 96% of organisations are already utilising AI to support business operations, with the same percentage planning to increase their AI budgets in the coming year.

The primary motivations for AI investment include increasing productivity (82%), improving operational efficiency (73%), enhancing decision-making (65%), and achieving cost savings (60%). The most common AI use cases reported were customer service and support, predictive analytics, and marketing and ad optimisation.

Despite the surge in AI investments, business leaders are acutely aware of the additional risk exposure that AI brings to their organisations. Data integrity and security emerged as the biggest deterrents to implementing new AI solutions.

Executives also reported encountering various AI performance issues, including:

  • Data quality issues (e.g., inconsistencies or inaccuracies): 41%
  • Bias detection and mitigation challenges in AI algorithms, leading to unfair or discriminatory outcomes: 37%
  • Difficulty in quantifying and measuring the return on investment (ROI) of AI initiatives: 28%

While 95% of respondents expressed confidence in their organisation’s current AI risk management practices, the report revealed a significant gap in AI governance implementation.

Only 5% of executives reported that their organisation has implemented any AI governance framework. However, 82% stated that implementing AI governance solutions is a somewhat or extremely pressing priority, with 85% planning to implement such solutions by summer 2025.

The report also found that 82% of participants support an AI governance executive order to provide stronger oversight. Additionally, 65% expressed concern about IP infringement and data security.

Mrinal Manohar, CEO of Prove AI, commented: “Executives are making themselves clear: AI’s long-term efficacy, including providing a meaningful return on the massive investments organisations are currently making, is contingent on their ability to develop and refine comprehensive AI governance strategies.

“The wave of AI-focused legislation going into effect around the world is only increasing the urgency; for the current wave of innovation to continue responsibly, we need to implement clearer guardrails to manage and monitor the data informing AI systems.”

As global regulations like the EU AI Act loom on the horizon, the report underscores the importance of de-risking AI and the work that still needs to be done. Implementing and optimising dedicated AI governance strategies has emerged as a top priority for businesses looking to harness the power of AI while mitigating associated risks.

The findings of this report serve as a wake-up call for organisations to prioritise AI governance as they continue to invest in and deploy AI technologies. Responsible implementation and robust governance frameworks will be key to unlocking the full potential of AI while maintaining trust and compliance.

(Photo by Rob Thompson)

See also: Scoring AI models: Endor Labs unveils evaluation tool

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI governance gap: 95% of firms haven’t implemented frameworks appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/ai-governance-gap-95-of-firms-havent-frameworks/feed/ 0
UK secures £6.3B in data infrastructure investments https://www.artificialintelligence-news.com/news/uk-secures-6-3b-data-infrastructure-investments/ https://www.artificialintelligence-news.com/news/uk-secures-6-3b-data-infrastructure-investments/#respond Mon, 14 Oct 2024 12:08:42 +0000 https://www.artificialintelligence-news.com/?p=16286 Four major US firms have announced plans to invest a combined £6.3 billion in UK data infrastructure.  The announcement, made during the International Investment Summit, was welcomed by Technology Secretary Peter Kyle as a “vote of confidence” in Britain’s approach to partnering with businesses to drive growth. CyrusOne, ServiceNow, CloudHQ, and CoreWeave have all committed […]

The post UK secures £6.3B in data infrastructure investments appeared first on AI News.

]]>
Four major US firms have announced plans to invest a combined £6.3 billion in UK data infrastructure. 

The announcement, made during the International Investment Summit, was welcomed by Technology Secretary Peter Kyle as a “vote of confidence” in Britain’s approach to partnering with businesses to drive growth.

CyrusOne, ServiceNow, CloudHQ, and CoreWeave have all committed to substantial investments, bringing the total investment in UK data centres to over £25 billion since the current government took office. These new facilities will provide the UK with increased computing power and data storage capabilities, essential for training and deploying next-generation AI technologies.

“Tech leaders from all over the world are seeing Britain as the best place to invest with a thriving and stable market for data centres and AI development,” stated Kyle.

The largest single investment comes from Washington DC-based CloudHQ, which plans to develop a £1.9 billion data centre campus in Didcot, Oxfordshire. This hyper-scale facility is expected to create 1,500 jobs during construction and 100 permanent positions once operational.

ServiceNow has pledged £1.15 billion over the next five years to expand its UK operations. This investment will support AI development, expand data centres with Nvidia GPUs for local processing of LLM data, and grow the company’s UK workforce beyond its current 1,000 employees. ServiceNow also plans to offer new skills programmes to reach 240,000 UK learners.

ServiceNow’s AI platform is already utilised by 85% of Fortune 500 companies and more than half of the FTSE100. In the UK, the company works with organisations including BT Group, Aston Martin Aramco Formula One Team, and hundreds of public sector bodies such as the NHS and the Department for Work and Pensions.

Rachel Reeves, Chancellor of the Exchequer, commented: “This investment is a huge vote of confidence in the UK’s tech and AI sector, and is exactly the kind we want to see as we grow the economy. That’s what the International Investment Summit is all about too. Showing global investors and business that Britain is open for business.”

CyrusOne, a leading global data centre developer, announced plans to invest £2.5 billion in the UK over the coming years. Subject to planning permission, their projects are expected to be operational by Q4 2028 and create over 1,000 jobs.

AI hyperscaler CoreWeave confirmed an additional £750 million investment to support the next generation of AI cloud infrastructure, building on its £1 billion investment announced in May.

These investments follow recent commitments from other tech giants, including Blackstone’s £10 billion investment in the North East of England and Amazon Web Services’ plan to invest £8 billion in UK data centres over the next five years.

The UK government has been actively supporting the growth of data infrastructure and the broader tech sector. Last month, data centres were classified as ‘Critical National Infrastructure’ (CNI), providing the industry with greater government support. Additionally, the Tech Secretary appointed entrepreneur Matt Clifford to develop an AI Opportunities Action Plan, aimed at boosting AI adoption across the economy.

As part of the ongoing International Investment Summit, Prime Minister Keir Starmer is bringing together 300 industry leaders to catalyse investment in the UK. The summit will see discussions on how the UK can capitalise on emerging growth sectors including health tech, AI, clean energy, and creative industries.

Bill McDermott, Chairman and CEO of ServiceNow, said: “The UK is embracing technology transformation at scale. In this new age of AI, the country continues to be a global leader in driving innovation for the benefit of all its communities.

“Our investment accelerates the UK’s push to put AI to work, empowering people, enriching experiences, and strengthening societal bonds. Together, ServiceNow and our customers across the UK are delivering a future where technology benefits everyone.”

The series of investments and government initiatives bolstering UK data infrastructure aims to secure the country’s leadership in AI and technology innovation within Europe, and reinforces it as an attractive destination for international tech companies seeking to expand their operations.

(Photo by Freddie Collins)

See also: King’s Business School: How AI is transforming problem-solving

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK secures £6.3B in data infrastructure investments appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/uk-secures-6-3b-data-infrastructure-investments/feed/ 0
Han Heloir, MongoDB: The role of scalable databases in AI-powered apps https://www.artificialintelligence-news.com/news/han-heloir-mongodb-the-future-of-ai-powered-applications-and-scalable-databases/ https://www.artificialintelligence-news.com/news/han-heloir-mongodb-the-future-of-ai-powered-applications-and-scalable-databases/#respond Mon, 30 Sep 2024 00:22:58 +0000 https://www.artificialintelligence-news.com/?p=16108 As data management grows more complex and modern applications extend the capabilities of traditional approaches, AI is revolutionising application scaling. In addition to freeing operators from outdated, inefficient methods that require careful supervision and extra resources, AI enables real-time, adaptive optimisation of application scaling. Ultimately, these benefits combine to enhance efficiency and reduce costs for […]

The post Han Heloir, MongoDB: The role of scalable databases in AI-powered apps appeared first on AI News.

]]>
As data management grows more complex and modern applications push beyond the capabilities of traditional approaches, AI is revolutionising application scaling.

Han Heloir, EMEA gen AI senior solutions architect, MongoDB.

In addition to freeing operators from outdated, inefficient methods that require careful supervision and extra resources, AI enables real-time, adaptive optimisation of application scaling. Ultimately, these benefits combine to enhance efficiency and reduce costs for targeted applications.

With its predictive capabilities, AI ensures that applications scale efficiently, improving performance and resource allocation—marking a major advance over conventional methods.

Ahead of AI & Big Data Expo Europe, Han Heloir, EMEA gen AI senior solutions architect at MongoDB, discusses the future of AI-powered applications and the role of scalable databases in supporting generative AI and enhancing business processes.

AI News: As AI-powered applications continue to grow in complexity and scale, what do you see as the most significant trends shaping the future of database technology?

Heloir: While enterprises are keen to leverage the transformational power of generative AI technologies, the reality is that building a robust, scalable technology foundation involves more than just choosing the right technologies. It’s about creating systems that can grow and adapt to the evolving demands of generative AI, demands that are changing quickly, some of which traditional IT infrastructure may not be able to support. That is the uncomfortable truth about the current situation.

Today’s IT architectures are being overwhelmed by unprecedented data volumes generated from increasingly interconnected data sets. Traditional systems, designed for less intensive data exchanges, are currently unable to handle the massive, continuous data streams required for real-time AI responsiveness. They are also unprepared to manage the variety of data being generated.

The generative AI ecosystem often comprises a complex set of technologies. Each layer of technology—from data sourcing to model deployment—increases functional depth and operational costs. Simplifying these technology stacks isn’t just about improving operational efficiency; it’s also a financial necessity.

AI News: What are some key considerations for businesses when selecting a scalable database for AI-powered applications, especially those involving generative AI?

Heloir: Businesses should prioritise flexibility, performance and future scalability. Here are a few key reasons:

  • The variety and volume of data will continue to grow, requiring the database to handle diverse data types—structured, unstructured, and semi-structured—at scale. Selecting a database that can manage such variety without complex ETL processes is important.
  • AI models often need access to real-time data for training and inference, so the database must offer low latency to enable real-time decision-making and responsiveness.
  • As AI models grow and data volumes expand, databases must scale horizontally, to allow organisations to add capacity without significant downtime or performance degradation.
  • Seamless integration with data science and machine learning tools is crucial, and native support for AI workflows—such as managing model data, training sets and inference data—can enhance operational efficiency.

AI News: What are the common challenges organisations face when integrating AI into their operations, and how can scalable databases help address these issues?

Heloir: There are a variety of challenges that organisations can run into when adopting AI. These include the massive amounts of data from a wide variety of sources that are required to build AI applications. Scaling these initiatives can also put strain on existing IT infrastructure, and once the models are built, they require continuous iteration and improvement.

To make this easier, a database that scales can help simplify the management, storage and retrieval of diverse datasets. It offers elasticity, allowing businesses to handle fluctuating demands while sustaining performance and efficiency. It also accelerates time-to-market for AI-driven innovations by enabling rapid data ingestion and retrieval, facilitating faster experimentation.

AI News: Could you provide examples of how collaborations between database providers and AI-focused companies have driven innovation in AI solutions?

Heloir: Many businesses struggle to build generative AI applications because the technology evolves so quickly. Limited expertise and the increased complexity of integrating diverse components further complicate the process, slowing innovation and hindering the development of AI-driven solutions.

One way we address these challenges is through our MongoDB AI Applications Program (MAAP), which provides customers with resources to assist them in putting AI applications into production. This includes reference architectures and an end-to-end technology stack that integrates with leading technology providers, professional services and a unified support system.

MAAP categorises customers into four groups, ranging from those seeking advice and prototyping to those developing mission-critical AI applications and overcoming technical challenges. MongoDB’s MAAP enables faster, seamless development of generative AI applications, fostering creativity and reducing complexity.

AI News: How does MongoDB approach the challenges of supporting AI-powered applications, particularly in industries that are rapidly adopting AI?

Heloir: Ensuring you have the underlying infrastructure to build what you need is always one of the biggest challenges organisations face.

To build AI-powered applications, the underlying database must be capable of running queries against rich, flexible data structures. With AI, data structures can become very complex. This is one of the biggest challenges organisations face when building AI-powered applications, and it’s precisely what MongoDB is designed to handle. We unify source data, metadata, operational data, vector data and generated data—all in one platform.

AI News: What future developments in database technology do you anticipate, and how is MongoDB preparing to support the next generation of AI applications?

Heloir: Our key values are the same today as they were when MongoDB initially launched: we want to make developers’ lives easier and help them drive business ROI. This remains unchanged in the age of artificial intelligence. We will continue to listen to our customers, assist them in overcoming their biggest difficulties, and ensure that MongoDB has the features they require to develop the next [generation of] great applications.

(Photo by Caspar Camille Rubin)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Han Heloir, MongoDB: The role of scalable databases in AI-powered apps appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/han-heloir-mongodb-the-future-of-ai-powered-applications-and-scalable-databases/feed/ 0
SolarWinds: IT professionals want stronger AI regulation https://www.artificialintelligence-news.com/news/solarwinds-it-professionals-stronger-ai-regulation/ https://www.artificialintelligence-news.com/news/solarwinds-it-professionals-stronger-ai-regulation/#respond Tue, 17 Sep 2024 14:36:25 +0000 https://www.artificialintelligence-news.com/?p=16093 A new survey from SolarWinds has unveiled a resounding call for increased government oversight of AI, with 88% of IT professionals advocating for stronger regulation. The study, which polled nearly 700 IT experts, highlights security as the paramount concern. An overwhelming 72% of respondents emphasised the critical need for measures to secure infrastructure. Privacy follows […]

The post SolarWinds: IT professionals want stronger AI regulation appeared first on AI News.

]]>
A new survey from SolarWinds has unveiled a resounding call for increased government oversight of AI, with 88% of IT professionals advocating for stronger regulation.

The study, which polled nearly 700 IT experts, highlights security as the paramount concern. An overwhelming 72% of respondents emphasised the critical need for measures to secure infrastructure. Privacy follows closely behind, with 64% of IT professionals urging for more robust rules to protect sensitive information.

Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, commented: “It is understandable that IT leaders are approaching AI with caution. As technology rapidly evolves, it naturally presents challenges typical of any emerging innovation.

“Security and privacy remain at the forefront, with ongoing scrutiny by regulatory bodies. However, it is incumbent upon organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts. This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI.”

The survey’s findings come at a pivotal moment, coinciding with the implementation of the EU’s AI Act. In the UK, the new Labour government recently proposed its own AI legislation during the latest King’s speech, signalling a growing recognition of the need for regulatory frameworks. In the US, the California State Assembly passed a controversial AI safety bill last month.

Beyond security and privacy, the survey reveals a broader spectrum of concerns amongst IT professionals. A majority (55%) believe government intervention is crucial to stem the tide of AI-generated misinformation. Additionally, half of the respondents support regulations aimed at ensuring transparency and ethical practices in AI development.

Challenges extend beyond AI regulation

However, the challenges facing AI adoption extend beyond regulatory concerns. The survey uncovers a troubling lack of trust in data quality—a cornerstone of successful AI implementation.

Only 38% of respondents consider themselves ‘very trusting’ of the data quality and training used in AI systems. This scepticism is not unfounded, as 40% of IT leaders who have encountered issues with AI attribute these problems to algorithmic errors stemming from insufficient or biased data.

Consequently, data quality emerges as the second most significant barrier to AI adoption (16%), trailing only behind security and privacy risks. This finding underscores the critical importance of robust, unbiased datasets in driving AI success.

“High-quality data is the cornerstone of accurate and reliable AI models, which in turn drive better decision-making and outcomes,” adds Johnson. “Trustworthy data builds confidence in AI among IT professionals, accelerating the broader adoption and integration of AI technologies.”

The survey also sheds light on widespread concerns about database readiness. Less than half (43%) of IT professionals express confidence in their company’s ability to meet the increasing data demands of AI. This lack of preparedness is further exacerbated by the perception that organisations are not moving swiftly enough to implement AI, with 46% of respondents citing ongoing data quality challenges as a contributing factor.

As AI continues to reshape the technological landscape, the findings of this SolarWinds survey serve as a clarion call for both stronger regulation and improved data practices. The message from IT professionals is clear: while AI holds immense promise, its successful integration hinges on addressing critical concerns around security, privacy, and data quality.

(Photo by Kelly Sikkema)

See also: Whitepaper dispels fears of AI-induced job losses

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post SolarWinds: IT professionals want stronger AI regulation appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/news/solarwinds-it-professionals-stronger-ai-regulation/feed/ 0