privacy Archives - AI News

Meta will train AI models using EU user data (published 15 April 2025)

Meta has confirmed plans to utilise content shared by its adult users in the European Union (EU) to train its AI models.

The announcement follows the recent launch of Meta AI features in Europe and aims to enhance the capabilities and cultural relevance of its AI systems for the region’s diverse population.   

In a statement, Meta wrote: “Today, we’re announcing our plans to train AI at Meta using public content – like public posts and comments – shared by adults on our products in the EU.

“People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.”

Starting this week, users of Meta’s platforms (including Facebook, Instagram, WhatsApp, and Messenger) within the EU will receive notifications explaining the data usage. These notifications, delivered both in-app and via email, will detail the types of public data involved and link to an objection form.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta explained.

Meta explicitly clarified that certain data types remain off-limits for AI training purposes.

The company says it will not “use people’s private messages with friends and family” to train its generative AI models. Furthermore, public data associated with accounts belonging to users under the age of 18 in the EU will not be included in the training datasets.

Meta wants to build AI tools designed for EU users

Meta positions this initiative as a necessary step towards creating AI tools designed for EU users. Meta launched its AI chatbot functionality across its messaging apps in Europe last month, framing this data usage as the next phase in improving the service.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the company explained. 

“That means everything from dialects and colloquialisms, to hyper-local knowledge and the distinct ways different countries use humor and sarcasm on our products.”

This becomes increasingly pertinent as AI models evolve with multi-modal capabilities spanning text, voice, video, and imagery.   

Meta also situated its actions in the EU within the broader industry landscape, pointing out that training AI on user data is common practice.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the statement reads. 

“We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

Meta further claimed its approach surpasses others in openness, stating, “We’re proud that our approach is more transparent than many of our industry counterparts.”   

Regarding regulatory compliance, Meta referenced prior engagement with regulators, including a delay initiated last year while awaiting clarification on legal requirements. The company also cited a favourable opinion from the European Data Protection Board (EDPB) in December 2024.

“We welcome the opinion provided by the EDPB in December, which affirmed that our original approach met our legal obligations,” wrote Meta.

Broader concerns over AI training data

While Meta presents its approach in the EU as transparent and compliant, the practice of using vast swathes of public user data from social media platforms to train large language models (LLMs) and generative AI continues to raise significant concerns among privacy advocates.

Firstly, the definition of “public” data can be contentious. Content shared publicly on platforms like Facebook or Instagram may not have been posted with the expectation that it would become raw material for training commercial AI systems capable of generating entirely new content or insights. Users might share personal anecdotes, opinions, or creative works publicly within their perceived community, without envisaging its large-scale, automated analysis and repurposing by the platform owner.

Secondly, the effectiveness and fairness of an “opt-out” system versus an “opt-in” system remain debatable. Placing the onus on users to actively object, often after receiving notifications buried amongst countless others, raises questions about informed consent. Many users may not see, understand, or act upon the notification, potentially leading to their data being used by default rather than explicit permission.

Thirdly, the issue of inherent bias looms large. Social media platforms reflect and sometimes amplify societal biases, including racism, sexism, and misinformation. AI models trained on this data risk learning, replicating, and even scaling these biases. While companies employ filtering and fine-tuning techniques, eradicating bias absorbed from billions of data points is an immense challenge. An AI trained on European public data needs careful curation to avoid perpetuating stereotypes or harmful generalisations about the very cultures it aims to understand.   

Furthermore, questions surrounding copyright and intellectual property persist. Public posts often contain original text, images, and videos created by users. Using this content to train commercial AI models, which may then generate competing content or derive value from it, enters murky legal territory regarding ownership and fair compensation—issues currently being contested in courts worldwide involving various AI developers.

Finally, while Meta highlights its transparency relative to competitors, the actual mechanisms of data selection, filtering, and its specific impact on model behaviour often remain opaque. Truly meaningful transparency would involve deeper insights into how specific data influences AI outputs and the safeguards in place to prevent misuse or unintended consequences.

The approach taken by Meta in the EU underscores the immense value technology giants place on user-generated content as fuel for the burgeoning AI economy. As these practices become more widespread, the debate surrounding data privacy, informed consent, algorithmic bias, and the ethical responsibilities of AI developers will undoubtedly intensify across Europe and beyond.

(Photo by Julio Lopez)

See also: Apple AI stresses privacy with synthetic and anonymised data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The ethics of AI and how they affect you (published 10 March 2025)

Having worked with AI since 2018, I’m watching its slow but steady adoption, alongside the unstructured bandwagon-jumping, with considerable interest. Now that the initial fear of a robotic takeover has subsided somewhat, discussion about the ethics surrounding the integration of AI into everyday business structures has taken its place.

A whole new range of roles will be required to handle ethics, governance and compliance, all of which are set to gain enormous value and importance within organisations.

Probably the most essential of these will be an AI Ethics Specialist, who will be required to ensure Agentic AI systems meet ethical standards like fairness and transparency. This role will involve using specialised tools and frameworks to address ethical concerns efficiently and avoid potential legal or reputational risks. Human oversight to ensure transparency and responsible ethics is essential to maintain the delicate balance between data-driven decisions, intelligence, and intuition.

In addition, roles like Agentic AI Workflow Designer and AI Interaction and Integration Designer will ensure AI integrates seamlessly across ecosystems and prioritises transparency, ethical considerations, and adaptability. An AI Overseer will also be required, to monitor the entire Agentic stack of agents and arbiters, the decision-making elements of AI.

For anyone embarking on the integration of AI into their organisation and wanting to ensure the technology is introduced and maintained responsibly, I can recommend consulting the United Nations’ principles. The UN created these 10 principles in 2022 in response to the ethical challenges raised by the increasing prevalence of AI.

So what are these ten principles, and how can we use them as a framework?

First, do no harm 

As befits technology with an autonomous element, the first principle focuses on deploying AI systems in ways that avoid any negative impact on social, cultural, economic, natural or political environments. An AI lifecycle should be designed to respect and protect human rights and freedoms, and systems should be monitored to ensure this remains the case and that no long-term damage is being done.

Avoid AI for AI’s sake

Ensure that the use of AI is justified, appropriate and not excessive. There is a distinct temptation to become over-zealous in applying this exciting technology; its use needs to be balanced against human needs and aims, and should never come at the expense of human dignity.

Safety and security

Safety and security risks should be identified, addressed and mitigated throughout the life cycle of the AI system and on an ongoing basis. Exactly the same robust health and safety frameworks should be applied to AI as to any other area of the business.

Equality

Similarly, AI should be deployed with the aim of ensuring the equal and just distribution of benefits, risks and costs, and of preventing bias, deception, discrimination and stigma of any kind.

Sustainability

AI should be aimed at promoting environmental, economic and social sustainability. Continual assessment should be made to address negative impacts, including any on the generations to come. 

Data privacy, data protection and data governance

Adequate data protection frameworks and data governance mechanisms should be established or enhanced to ensure that the privacy and rights of individuals are maintained in line with legal guidelines around data integrity and personal data protection. No AI system should impinge on the privacy of another human being.

Human oversight

Human oversight should be guaranteed to ensure that the outcomes of using AI are fair and just. Human-centric design practices should be employed, with capacity for a human to step in at any stage to make a decision on how and when AI should be used, and to override any decision made by AI. Rather dramatically, but entirely reasonably, the UN suggests any decision affecting life or death should not be left to AI.

Transparency and explainability

This, to my mind, forms part of the guidelines around equality. Everyone using AI should fully understand the systems they are using, the decision-making processes used by the system and its ramifications. Individuals should be told when a decision regarding their rights, freedoms or benefits has been made by artificial intelligence, and most importantly, the explanation should be made in a way that makes it comprehensible. 

Responsibility and accountability

This is the whistleblower principle, which covers audit and due diligence, as well as protection for whistleblowers, to make sure that someone is responsible and accountable for the decisions made by, and use of, AI. Governance should be put in place around the ethical and legal responsibility of humans for any AI-based decisions. Any of these decisions that cause harm should be investigated and action taken.

Inclusivity and participation

Just as in any other area of business, an inclusive, interdisciplinary and participatory approach should be taken when designing, deploying and using artificial intelligence systems, one which also includes gender equality. Stakeholders and any affected communities should be consulted and informed of any benefits and potential risks.

Building your AI integration around these central pillars should help you feel reassured that your entry into AI integration is built on an ethical and solid foundation. 

Photo by Immo Wegmann on Unsplash

Opera introduces browser-integrated AI agent (published 3 March 2025)

Opera has introduced “Browser Operator,” a native AI agent designed to perform tasks for users directly within the browser.

Rather than acting as a separate tool, Browser Operator is an extension of the browser itself—designed to empower users by automating repetitive tasks like purchasing products, completing online forms, and gathering web content.

Unlike server-based AI integrations which require sensitive data to be sent to third-party servers, Browser Operator processes tasks locally within the Opera browser.

Opera’s demonstration video showcases how Browser Operator can streamline an everyday task like buying socks. Instead of manually scrolling through product pages or filling out payment forms, users could delegate the entire process to Browser Operator—allowing them to shift focus to activities that matter more to them, such as spending time with loved ones.

Harnessing natural language processing powered by Opera’s AI Composer Engine, Browser Operator interprets written instructions from users and executes corresponding tasks within the browser. All operations occur locally on a user’s device, leveraging the browser’s own infrastructure to safely and swiftly complete commands.  

If Browser Operator encounters a sensitive step in the process, such as entering payment details or approving an order, it pauses and requests the user’s input. You also have the freedom to intervene and take control of the process at any time.  

Every step Browser Operator takes is transparent and fully reviewable, providing users a clear understanding of how tasks are being executed. If mistakes occur – like placing an incorrect order – you can further instruct the AI agent to make amends, such as cancelling the order or adjusting a form.

The key differentiators: Privacy, performance, and precision  

What sets Browser Operator apart from other AI-integrated tools is its localised, privacy-first architecture. Unlike competitors that depend on screenshots or video recordings to understand webpage content, Opera’s approach uses the Document Object Model (DOM) Tree and browser layout data—a textual representation of the webpage.  

This difference offers several key advantages (illustrated in the sketch after this list):

  • Faster task completion: Browser Operator doesn’t need to “see” and interpret pixels on the screen or emulate mouse movements. Instead, it accesses web page elements directly, avoiding unnecessary overhead and allowing it to process pages holistically without scrolling.
  • Enhanced privacy: With all operations conducted on the browser itself, user data – including logins, cookies, and browsing history – remains secure on the local device. No screenshots, keystrokes, or personal information are sent to Opera’s servers.
  • Easier interaction with page elements: The AI can engage with elements hidden from the user’s view, such as behind cookie popups or verification dialogs, enabling seamless access to web page content.
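
Opera has not published Browser Operator’s internals, but the practical difference between pixel-based and DOM-based automation is easy to sketch. The following is a minimal illustration using Playwright as a stand-in driver; the URL and CSS selectors are invented for the example, and nothing here reflects Opera’s actual implementation.

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://shop.example.com/socks")  # hypothetical storefront

    # Address the cookie banner as a DOM node -- no screenshots, OCR,
    # or simulated mouse movement needed to find and dismiss it.
    consent = page.query_selector("#cookie-accept")
    if consent:
        consent.click()

    # Read every product card in one pass from the DOM tree; a
    # pixel-based agent would have to scroll and re-parse the viewport.
    names = page.locator(".product-card .name").all_inner_texts()
    page.locator(".product-card .add-to-cart").first.click()

    # Hand control back before anything sensitive, mirroring how
    # Browser Operator pauses at payment or order-approval steps.
    input(f"Found {len(names)} products; press Enter to proceed to checkout")
    page.locator("#checkout").click()
    browser.close()
```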

By enabling the browser to autonomously perform tasks, Opera is taking a significant step forward in making browsers “agentic”—not just tools for accessing the internet, but assistants that actively enhance productivity.  

See also: You.com ARI: Professional-grade AI research agent for businesses

DeepSeek to open-source AGI research amid privacy concerns (published 21 February 2025)

DeepSeek, a Chinese AI startup aiming for artificial general intelligence (AGI), announced plans to open-source five repositories starting next week as part of its commitment to transparency and community-driven innovation.

However, this development comes against the backdrop of mounting controversies that have drawn parallels to the TikTok saga.

Today, DeepSeek shared its intentions in a tweet that outlined its vision of open collaboration: “We’re a tiny team at DeepSeek exploring AGI. Starting next week, we’ll be open-sourcing five repos, sharing our small but sincere progress with full transparency.”

The repositories – which the company describes as “documented, deployed, and battle-tested in production” – include fundamental building blocks of DeepSeek’s online service.

By open-sourcing its tools, DeepSeek hopes to contribute to the broader AI research community.

“As part of the open-source community, we believe that every line shared becomes collective momentum that accelerates the journey. No ivory towers – just pure garage-energy and community-driven innovation,” the company said.

This philosophy has drawn praise for fostering collaboration in a field that often suffers from secrecy, but DeepSeek’s rapid rise has also raised eyebrows.

Despite being a small team with a mission rooted in transparency, the company has been under intense scrutiny amid allegations of data misuse and geopolitical entanglements.

Rising fast, under fire

Practically unknown until recently, DeepSeek burst onto the scene with a business model that stood in stark contrast to more established players like OpenAI and Google.

Offering its advanced AI capabilities for free, DeepSeek quickly gained global acclaim for its cutting-edge performance. However, its exponential rise has also sparked debates about the trade-offs between innovation and privacy.

US lawmakers are now pushing for a ban on DeepSeek after security researchers found the app transferring user data to a banned state-owned company.

A probe has also been launched by Microsoft and OpenAI over a breach of the latter’s systems by a group allegedly linked to DeepSeek.

Concerns about data collection and potential misuse have triggered comparisons to the controversies surrounding TikTok, another Chinese tech success story grappling with regulatory pushback in the West.

DeepSeek continues AGI innovation amid controversy

DeepSeek’s commitment to open-source its technology appears timed to deflect criticism and reassure sceptics about its intentions.

Open-sourcing has long been heralded as a way to democratise technology and increase transparency, and DeepSeek’s “daily unlocks,” which are set to begin soon, could offer the community reassuring insight into its operations.

Nevertheless, questions remain over how much of the technology will be open for scrutiny and whether the move is an attempt to shift the narrative amid growing political and regulatory pressure.

It’s unclear whether this balancing act will be enough to satisfy lawmakers or deter critics, but one thing is certain: DeepSeek’s open-source leap marks another turn in its dramatic rise.

While the company’s motto of “garage-energy and community-driven innovation” resonates with developers eager for open collaboration, its future may rest as much on its ability to address security concerns as on its technical prowess.

(Photo by Solen Feyissa)

See also: DeepSeek’s AI dominance expands from EVs to e-scooters in China

DeepSeek ban? China data transfer boosts security concerns (published 7 February 2025)

US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices and a probe has been launched by Microsoft and OpenAI over a breach of the latter’s system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices. 

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a starker, better-documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That raises the question of whether it’s truly “free” or whether the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even potential economic manipulations.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

AI governance: Analysing emerging global regulations (published 19 December 2024)

Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more.

AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation.

“The boom of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys.

“This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.”

Regions diverge in regulatory strategy

The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026.

Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.”

Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been implementing regulations specific to certain AI technologies in a phased manner. According to Šveistys, China began regulating AI models as early as 2021.

“In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said.

“Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.”

The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level.

“There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted.

This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason.

“There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys.

“It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.”

Balancing innovation and safety

Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions.

Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack.

“More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.

This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised.

AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators.

“Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys.

Impact on related industries

One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution.

“From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys. 

However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny.

“AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added.

“At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.”

Copyright battles and legal precedents

The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools.

High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission.

“These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys.

While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development. So, what can businesses do now as the regulatory and legal landscape continues to evolve?

“Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys.

“It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.”

Just this week, the UK Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out.

Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework.

The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms.

(Photo by Nathan Bingle)

See also: Anthropic urges AI regulation to avoid catastrophes

Machine unlearning: Researchers make AI models ‘forget’ data (published 10 December 2024)

Researchers from the Tokyo University of Science (TUS) have developed a method to enable large-scale AI models to selectively “forget” specific classes of data.

Progress in AI has provided tools capable of revolutionising various domains, from healthcare to autonomous driving. However, as technology advances, so do its complexities and ethical considerations. 

The paradigm of large-scale pre-trained AI systems, such as OpenAI’s ChatGPT and CLIP (Contrastive Language–Image Pre-training), has reshaped expectations for machines. These highly generalist models, capable of handling a vast array of tasks with consistent precision, have seen widespread adoption for both professional and personal use.  

However, such versatility comes at a hefty price. Training and running these models demands prodigious amounts of energy and time, raising sustainability concerns, as well as requiring cutting-edge hardware significantly more expensive than standard computers. Compounding these issues is that generalist tendencies may hinder the efficiency of AI models when applied to specific tasks.  

For instance, “in practical applications, the classification of all kinds of object classes is rarely required,” explains Associate Professor Go Irie, who led the research. “For example, in an autonomous driving system, it would be sufficient to recognise limited classes of objects such as cars, pedestrians, and traffic signs.

“We would not need to recognise food, furniture, or animal species. Retaining classes that do not need to be recognised may decrease overall classification accuracy, as well as cause operational disadvantages such as the waste of computational resources and the risk of information leakage.”  

A potential solution lies in training models to “forget” redundant or unnecessary information—streamlining their processes to focus solely on what is required. While some existing methods already cater to this need, they tend to assume a “white-box” approach where users have access to a model’s internal architecture and parameters. Oftentimes, however, users get no such visibility.  

“Black-box” AI systems, more common due to commercial and ethical restrictions, conceal their inner mechanisms, rendering traditional forgetting techniques impractical. To address this gap, the research team turned to derivative-free optimisation—an approach that sidesteps reliance on the inaccessible internal workings of a model.  

Advancing through forgetting

The study, set to be presented at the Neural Information Processing Systems (NeurIPS) conference in 2024, introduces a methodology dubbed “black-box forgetting.”

The process modifies the input prompts (text instructions fed to models) in iterative rounds to make the AI progressively “forget” certain classes. Associate Professor Irie collaborated on the work with co-authors Yusuke Kuwana and Yuta Goto (both from TUS), alongside Dr Takashi Shibata from NEC Corporation.  

For their experiments, the researchers targeted CLIP, a vision-language model with image classification abilities. The method they developed is built upon the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary algorithm designed to optimise solutions step-by-step. In this study, CMA-ES was harnessed to evaluate and hone prompts provided to CLIP, ultimately suppressing its ability to classify specific image categories.

As the project progressed, challenges arose. Existing optimisation techniques struggled to scale up for larger volumes of targeted categories, leading the team to devise a novel parametrisation strategy known as “latent context sharing.”  

This approach breaks latent context – a representation of information generated by prompts – into smaller, more manageable pieces. By allocating certain elements to a single token (word or character) while reusing others across multiple tokens, they dramatically reduced the problem’s complexity. Crucially, this made the process computationally tractable even for extensive forgetting applications.  
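
To make the mechanics concrete, here is a minimal sketch of derivative-free prompt optimisation with latent context sharing, built on the off-the-shelf cma package. The token and dimension sizes are illustrative assumptions rather than the paper’s settings, and query_model is a hypothetical stand-in for the black-box call to the served model; nothing below reproduces the authors’ actual code.

```python
import cma
import numpy as np

N_TOKENS, UNIQUE_DIM, SHARED_DIM = 8, 16, 64  # illustrative sizes only

def assemble_context(theta):
    """Latent context sharing: each token keeps a small private slice
    while one block of parameters is reused across all tokens, cutting
    the search space to SHARED_DIM + N_TOKENS * UNIQUE_DIM values."""
    shared = theta[:SHARED_DIM]
    unique = theta[SHARED_DIM:].reshape(N_TOKENS, UNIQUE_DIM)
    return np.hstack([unique, np.tile(shared, (N_TOKENS, 1))])

def query_model(context, class_set):
    """Toy stand-in for the black-box model call; a real run would
    score a validation batch through the served model's API and return
    its classification accuracy on the given set of classes."""
    return float(np.tanh(context.sum())) if class_set == "forget" else 0.5

def forgetting_loss(theta):
    # Push accuracy on the forget classes down while keeping the
    # retained classes' accuracy up.
    context = assemble_context(theta)
    return query_model(context, "forget") - query_model(context, "keep")

# CMA-ES needs no gradients: it samples candidate contexts, scores them
# through the black box, and adapts its sampling distribution.
es = cma.CMAEvolutionStrategy(np.zeros(SHARED_DIM + N_TOKENS * UNIQUE_DIM), 0.3)
for _ in range(50):  # a few derivative-free optimisation rounds
    candidates = es.ask()
    es.tell(candidates, [forgetting_loss(np.asarray(c)) for c in candidates])
```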

Through benchmark tests on multiple image classification datasets, the researchers validated the efficacy of black-box forgetting—achieving the goal of making CLIP “forget” approximately 40% of target classes without direct access to the AI model’s internal architecture.

This research marks the first successful attempt to induce selective forgetting in a black-box vision-language model, demonstrating promising results.  

Benefits of helping AI models forget data

Beyond its technical ingenuity, this innovation holds significant potential for real-world applications where task-specific precision is paramount.

Simplifying models for specialised tasks could make them faster, more resource-efficient, and capable of running on less powerful devices—hastening the adoption of AI in areas previously deemed unfeasible.  

Another key use lies in image generation, where forgetting entire categories of visual context could prevent models from inadvertently creating undesirable or harmful content, be it offensive material or misinformation.  

Perhaps most importantly, this method addresses one of AI’s greatest ethical quandaries: privacy.

AI models, particularly large-scale ones, are often trained on massive datasets that may inadvertently contain sensitive or outdated information. Requests to remove such data—especially in light of laws advocating for the “Right to be Forgotten”—pose significant challenges.

Retraining entire models to exclude problematic data is costly and time-intensive, yet the risks of leaving it unaddressed can have far-reaching consequences.

“Retraining a large-scale model consumes enormous amounts of energy,” notes Associate Professor Irie. “‘Selective forgetting,’ or so-called machine unlearning, may provide an efficient solution to this problem.”  

These privacy-focused applications are especially relevant in high-stakes industries like healthcare and finance, where sensitive data is central to operations.  

As the global race to advance AI accelerates, the Tokyo University of Science’s black-box forgetting approach charts an important path forward—not only by making the technology more adaptable and efficient but also by adding significant safeguards for users.  

While the potential for misuse remains, methods like selective forgetting demonstrate that researchers are proactively addressing both ethical and practical challenges.  

See also: Why QwQ-32B-Preview is the reasoning AI to watch

Industry experts call for tailored AI rules in post-election UK (published 2 July 2024)

As the UK gears up for its general election, industry leaders are weighing in on the potential impact on technology and AI regulation.

With economic challenges at the forefront of political debates, experts argue that the next government must prioritise technological innovation and efficiency to drive growth and maintain the UK’s competitive edge.

Rupal Karia, Country Leader UK&I at Celonis, emphasises the need for immediate action to address inefficiencies in both private and public sectors.

“The next government needs to channel a more immediate focus on removing inefficiencies within UK businesses, which both the private and public sector are being weighed down by,” Karia states.

Karia advocates for the use of process intelligence to provide “data-based methods of generating positive impact at the top, the bottom, and the green line.”

While political parties focus on long-term strategies such as infrastructure investments and industrial policies, Karia suggests that leveraging technology for efficiency gains could yield more immediate results. 

“Delivering fast growth is tough, but in the meantime businesses can become leaner and more agile, gaining maximum value within their current processes,” Karia explains.

James Hall, VP & Country Manager, UK&I at Snowflake, predicts a significant focus on AI investment and regulation in the next government. He anticipates the appointment of chief AI officers across government departments to ensure AI aligns with manifesto priorities.

Hall also emphasises the importance of a robust data strategy, stating, “A foundational data strategy with governance at its core will help meet AI goals.”

Hall proposes several initiatives to boost AI innovation and data utilisation:

  • An AI fund to promote public-private partnerships
  • Use of synthetic data to commercialise assets globally while maintaining privacy
  • Industry-specific AI regulations, particularly for sectors like healthcare and pharmaceuticals
  • Stronger agreements on medical data usage in the pharmaceutical industry
  • A dedicated office to oversee data and AI initiatives, ensuring diverse voices are heard in policymaking

On the topic of AI regulation, Hall suggests a nuanced approach: “It would be beneficial to establish industry-specific rules, with particular attention paid to sectors like healthcare and pharmaceuticals and their unique needs.”

Both experts agree that embracing AI and data-driven technologies is crucial for the UK’s future economic success.

“These steps will be crucial for a new government to support data-driven industries and ensure they can capitalise on AI, thus positioning the UK as a global innovation powerhouse whilst ensuring sustainable growth and protecting national interests,” Hall concludes.

As the election approaches, it remains to be seen how political parties will address these technological challenges and opportunities in their manifestos. The outcome could significantly shape the UK’s approach to AI regulation and its position in the global tech landscape.

(Photo by Chris Robert)

See also: EU probes Microsoft-OpenAI and Google-Samsung AI deals

Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans (published 12 June 2024)

Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process.

Musk had initially sued OpenAI in March 2024, alleging breach of contracts, unfair business practices, and failure in fiduciary duty. He claimed that his contributions to the company were made “in exchange for and in reliance on promises that those assets were irrevocably dedicated to building AI for public benefit, with only safety as a countervailing concern.”

The lawsuit sought remedies for “breach of contract, promissory estoppel, breach of fiduciary duty, unfair business practices, and accounting,” as well as specific performance, restitution, and damages.

However, Musk’s filings to withdraw the case provided no explanation for abandoning the lawsuit. OpenAI had previously called Musk’s claims “incoherent,” arguing that his inability to produce a contract made his breach claims difficult to prove, and stating that documents provided by Musk “contradict his allegations as to the alleged terms of the agreement.”

The withdrawal of the lawsuit comes at a time when Musk is strongly opposing Apple’s plans to integrate ChatGPT into its operating systems.

During Apple’s keynote event announcing Apple Intelligence for iOS 18, iPadOS 18, and macOS Sequoia, Musk threatened to ban Apple devices from his companies, calling the integration “an unacceptable security violation.”

Despite assurances from Apple and OpenAI that user data would only be shared with explicit consent and that interactions would be secure, Musk questioned Apple’s ability to ensure data security, stating, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”

Since bringing the lawsuit against OpenAI, Musk has also created his own AI company, xAI, and secured over $6 billion in funding for his plans to advance the Grok chatbot on his social network, X.

While Musk’s reasoning for dropping the OpenAI lawsuit remains unclear, his actions suggest a potential shift in focus towards advancing his own AI endeavours while continuing to vocalise his criticism of OpenAI through social media rather than the courts.

See also: DuckDuckGo releases portal giving private access to AI models

DuckDuckGo releases portal giving private access to AI models (published 7 June 2024)

DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected.

The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama-3 70B and Mistral AI’s Mixtral 8x7b.

What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users.
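
To make the mechanism concrete, here is a minimal sketch of what such an anonymising relay might look like. The endpoint URL, route name, credential, and payload shape are illustrative assumptions, not DuckDuckGo’s actual implementation:

# Minimal sketch of an anonymising chat relay (illustrative only; the upstream
# URL, credential, and payload shape are assumptions, not DuckDuckGo's code).
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)
UPSTREAM_URL = "https://api.model-provider.example/v1/chat"  # hypothetical endpoint

@app.post("/chat")
def proxy_chat():
    # Build the upstream request from scratch: none of the caller's headers
    # (IP-revealing X-Forwarded-For, cookies, user agent) are forwarded, so
    # the provider only ever sees the relay's own address and credential.
    body = request.get_json(silent=True) or {}
    upstream = requests.post(
        UPSTREAM_URL,
        json={"messages": body.get("messages", [])},
        headers={"Authorization": "Bearer RELAY_API_KEY"},  # relay's key, not the user's
        timeout=30,
    )
    return jsonify(upstream.json()), upstream.status_code

Because the relay originates the outbound connection itself, the model provider’s logs would record only the relay’s IP address, never the end user’s.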

The company has agreements in place with all model providers to ensure that any saved chats are completely deleted within 30 days, and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than changing the privacy settings for each service.
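
A 30-day retention window of that kind is straightforward to enforce mechanically. The sketch below shows one way a provider might purge expired chats; the SQLite schema and column names are invented for illustration:

# Illustrative 30-day retention purge (the chats table and its created_at
# column are invented for this example).
import sqlite3
import time

THIRTY_DAYS = 30 * 24 * 3600  # seconds

def purge_expired_chats(db_path: str = "chats.db") -> int:
    """Delete every saved chat older than 30 days; return the number removed."""
    conn = sqlite3.connect(db_path)
    cutoff = time.time() - THIRTY_DAYS
    cursor = conn.execute("DELETE FROM chats WHERE created_at < ?", (cutoff,))
    conn.commit()
    conn.close()
    return cursor.rowcount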

In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry.

DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience.

“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained.

“If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.”

To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access.

The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, following the recent launch of Venice AI by crypto entrepreneur Erik Voorhees. Venice AI features an uncensored AI chatbot and image generator that requires no account and retains no data.

As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots.

See also: AI pioneers turn whistleblowers and demand safeguards

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DuckDuckGo releases portal giving private access to AI models appeared first on AI News.

OpenAI faces complaint over fictional outputs https://www.artificialintelligence-news.com/news/openai-faces-complaint-over-fictional-outputs/ Mon, 29 Apr 2024 08:45:02 +0000

European data protection advocacy group noyb has filed a complaint against OpenAI over the company’s inability to correct inaccurate information generated by ChatGPT. The group alleges that OpenAI’s failure to ensure the accuracy of personal data processed by the service violates the General Data Protection Regulation (GDPR) in the European Union.

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences,” said Maartje de Graaf, Data Protection Lawyer at noyb. 

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The GDPR requires that personal data be accurate, and individuals have the right to rectification if data is inaccurate, as well as the right to access information about the data processed and its sources. However, OpenAI has openly admitted that it cannot correct incorrect information generated by ChatGPT or disclose the sources of the data used to train the model.

“Factual accuracy in large language models remains an area of active research,” OpenAI has argued.

The advocacy group highlights a New York Times report that found chatbots like ChatGPT “invent information at least 3 percent of the time – and as high as 27 percent.” In the complaint against OpenAI, noyb cites an example where ChatGPT repeatedly provided an incorrect date of birth for the complainant, a public figure, despite requests for rectification.
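
For readers wondering how a figure like “3 to 27 percent” is arrived at, the usual approach is to score model answers against questions with known ground truth. The toy harness below illustrates the idea only; the questions and sample answers are placeholders, not the methodology of the study the Times reported on:

# Toy fabrication-rate harness: score answers against known ground truth.
# The question set and the sample answers are placeholders for illustration.
GROUND_TRUTH = {
    "capital_of_france": "paris",
    "year_gdpr_adopted": "2016",
    "author_of_1984": "george orwell",
}

def fabrication_rate(model_answers: dict) -> float:
    """Fraction of questions whose answer contradicts the known truth."""
    wrong = sum(
        1 for question, truth in GROUND_TRUTH.items()
        if model_answers.get(question, "").strip().lower() != truth
    )
    return wrong / len(GROUND_TRUTH)

sample = {"capital_of_france": "paris", "year_gdpr_adopted": "2018", "author_of_1984": "george orwell"}
print(fabrication_rate(sample))  # 0.333... — one invented fact out of three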

“Despite the fact that the complainant’s date of birth provided by ChatGPT is incorrect, OpenAI refused his request to rectify or erase the data, arguing that it wasn’t possible to correct data,” noyb stated.

OpenAI claimed it could filter or block data on certain prompts, such as the complainant’s name, but only by blocking all of ChatGPT’s information about the individual rather than just the inaccurate data. The company also failed to adequately respond to the complainant’s access request, which the GDPR requires companies to fulfil.
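
The bluntness of that trade-off is easy to demonstrate. In the toy filter below, blocking on a name suppresses every query about the person, accurate or not; the name and the queries are hypothetical:

# Toy name-based prompt filter (not OpenAI's actual mechanism). Blocking on a
# name is all-or-nothing: it cannot distinguish the one inaccurate fact from
# every legitimate question about the same person.
BLOCKED_NAMES = {"jane example"}  # hypothetical complainant

def filter_prompt(prompt: str):
    """Return the prompt if allowed, or None if it must be blocked outright."""
    lowered = prompt.lower()
    if any(name in lowered for name in BLOCKED_NAMES):
        return None  # the model never sees the request at all
    return prompt

print(filter_prompt("When was Jane Example born?"))       # None — blocked
print(filter_prompt("What has Jane Example published?"))  # None — also blocked
print(filter_prompt("When was the GDPR adopted?"))        # passes through unchanged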

“The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information,” said de Graaf. “It seems that with each ‘innovation,’ another group of companies thinks that its products don’t have to comply with the law.”

European privacy watchdogs have already scrutinised ChatGPT’s inaccuracies, with the Italian Data Protection Authority imposing a temporary restriction on OpenAI’s data processing in March 2023 and the European Data Protection Board establishing a task force on ChatGPT.

In its complaint, noyb is asking the Austrian Data Protection Authority to investigate OpenAI’s data processing and measures to ensure the accuracy of personal data processed by its large language models. The advocacy group also requests that the authority order OpenAI to comply with the complainant’s access request, bring its processing in line with the GDPR, and impose a fine to ensure future compliance.

You can read the full complaint here (PDF).

(Photo by Eleonora Francesca Grotto)

See also: Igor Jablokov, Pryon: Building a responsible AI future

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI faces complaint over fictional outputs appeared first on AI News.

80% of AI decision makers are worried about data privacy and security https://www.artificialintelligence-news.com/news/80-of-ai-decision-makers-are-worried-about-data-privacy-and-security/ Wed, 17 Apr 2024 22:25:00 +0000

Organisations are enthusiastic about generative AI’s potential for increasing their business and people productivity, but lack of strategic planning and talent shortages are preventing them from realising its true value.

This is according to a study conducted in early 2024 by Coleman Parkes Research and sponsored by data analytics firm SAS, which surveyed 300 US GenAI strategy or data analytics decision makers to pulse-check major areas of investment and the hurdles organisations are facing.

Marinela Profi, strategic AI advisor at SAS, said: “Organisations are realising that large language models (LLMs) alone don’t solve business challenges. 

“GenAI should be treated as an ideal contributor to hyper automation and the acceleration of existing processes and systems rather than the new shiny toy that will help organisations realise all their business aspirations. Time spent developing a progressive strategy and investing in technology that offers integration, governance and explainability of LLMs are crucial steps all organisations should take before jumping in with both feet and getting ‘locked in.’”

Organisations are hitting stumbling blocks in four key areas of implementation:

• Increasing trust in data usage and achieving compliance. Only one in 10 organisations has a reliable system in place to measure bias and privacy risk in LLMs. Moreover, 93% of US businesses lack a comprehensive governance framework for GenAI, and the majority are at risk of noncompliance when it comes to regulation.

• Integrating GenAI into existing systems and processes. Organisations reveal they’re experiencing compatibility issues when trying to combine GenAI with their current systems.

• Talent and skills. In-house GenAI expertise is in short supply. As HR departments struggle to find suitable hires, organisational leaders worry they lack access to the skills needed to make the most of their GenAI investment.

• Predicting costs. Leaders cite prohibitive direct and indirect costs associated with using LLMs. Model creators publish per-token pricing, which organisations now realise is prohibitive at scale, while the costs of private knowledge preparation, training, and ModelOps management are harder to forecast because those processes are lengthy and complex (see the sketch after this list).
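
The direct token portion of the bill is at least simple to estimate, which is what makes the remaining costs stand out. Here is a back-of-envelope sketch using invented per-token rates, not any vendor’s real pricing:

# Back-of-envelope token spend estimate. The per-1,000-token prices below are
# invented for illustration; real rates vary by model and change often.
PRICE_PER_1K_INPUT = 0.01   # USD, hypothetical
PRICE_PER_1K_OUTPUT = 0.03  # USD, hypothetical

def monthly_token_cost(requests_per_day: int, avg_in_tokens: int, avg_out_tokens: int) -> float:
    """Estimate one workload's direct token spend over a 30-day month."""
    daily = requests_per_day * (
        avg_in_tokens / 1000 * PRICE_PER_1K_INPUT
        + avg_out_tokens / 1000 * PRICE_PER_1K_OUTPUT
    )
    return daily * 30

# 50,000 requests a day, ~500 tokens in and ~300 tokens out per request:
print(f"${monthly_token_cost(50_000, 500, 300):,.0f} per month")  # ≈ $21,000 on these rates

None of this captures the data preparation, fine-tuning, or ModelOps spend that the study flags as the genuinely hard part to forecast.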

Profi added: “It’s going to come down to identifying real-world use cases that deliver the highest value and solve human needs in a sustainable and scalable manner. 

“Through this study, we’re continuing our commitment to helping organisations stay relevant, invest their money wisely and remain resilient. In an era where AI technology evolves almost daily, competitive advantage is highly dependent on the ability to embrace the resiliency rules.”

Details of the study were unveiled today at SAS Innovate in Las Vegas, SAS Software’s AI and analytics conference for business leaders, technical users and SAS partners.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post 80% of AI decision makers are worried about data privacy and security appeared first on AI News.
