Voice Recognition | Voice & Speech Recognition AI News | AI News
https://www.artificialintelligence-news.com/categories/ai-applications/ai-voice-recognition/
Artificial Intelligence News, Fri, 25 Apr 2025 14:07:51 +0000

Deepgram Nova-3 Medical: AI speech model cuts healthcare transcription errors
https://www.artificialintelligence-news.com/news/deepgram-nova-3-medical-ai-speech-model-healthcare-transcription-errors/
Tue, 04 Mar 2025 13:25:55 +0000

The post Deepgram Nova-3 Medical: AI speech model cuts healthcare transcription errors appeared first on AI News.

Deepgram has unveiled Nova-3 Medical, an AI speech-to-text (STT) model tailored for transcription in the demanding environment of healthcare.

Designed to integrate seamlessly with existing clinical workflows, Nova-3 Medical aims to address the growing need for accurate and efficient transcription in the UK’s public NHS and private healthcare landscape.

As electronic health records (EHRs), telemedicine, and digital health platforms become increasingly prevalent, the demand for reliable AI-powered transcription has never been higher. However, traditional speech-to-text models often struggle with the complex and specialised vocabulary used in clinical settings, leading to errors and “hallucinations” that can compromise patient care.

Deepgram’s Nova-3 Medical is engineered to overcome these challenges. The model leverages advanced machine learning and specialised medical vocabulary training to accurately capture medical terms, acronyms, and clinical jargon—even in challenging audio conditions. This is particularly crucial in environments where healthcare professionals may move away from recording devices.

“Nova‑3 Medical represents a significant leap forward in our commitment to transforming clinical documentation through AI,” said Scott Stephenson, CEO of Deepgram. “By addressing the nuances of clinical language and offering unprecedented customisation, we are empowering developers to build products that improve patient care and operational efficiency.”

One of the key features of the model is its ability to deliver structured transcriptions that integrate seamlessly with clinical workflows and EHR systems, ensuring vital patient data is accurately organised and readily accessible. The model also offers flexible, self-service customisation, including Keyterm Prompting for up to 100 key terms, allowing developers to tailor the solution to the unique needs of various medical specialties.
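As a sketch of how such customisation might be wired up, the snippet below sends pre-recorded audio to a transcription endpoint with per-request key terms. The endpoint, parameter names, and response shape follow Deepgram's public API conventions but should be treated as assumptions here, and the clinical terms are purely illustrative:

```python
import json
import urllib.parse
import urllib.request

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"  # pre-recorded endpoint

def build_query(keyterms):
    """Query string: the model name plus one 'keyterm' entry per
    boosted term (Keyterm Prompting supports up to 100 terms)."""
    params = [("model", "nova-3-medical")]
    params += [("keyterm", term) for term in keyterms[:100]]
    return urllib.parse.urlencode(params)

def transcribe_medical(audio_bytes, api_key, keyterms):
    """Send raw audio for transcription, boosting clinical key terms."""
    req = urllib.request.Request(
        f"{DEEPGRAM_URL}?{build_query(keyterms)}",
        data=audio_bytes,
        headers={"Authorization": f"Token {api_key}",
                 "Content-Type": "audio/wav"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    # Response path assumed from Deepgram's documented JSON shape.
    return body["results"]["channels"][0]["alternatives"][0]["transcript"]
```

Passing drug names such as "metoprolol" as key terms is what would let the model favour them over acoustically similar everyday words.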

Versatile deployment options – including on-premises and Virtual Private Cloud (VPC) configurations – ensure enterprise-grade security and HIPAA compliance, which is crucial for meeting UK data protection regulations.

“Speech-to-text for enterprise use cases is not trivial, and there is a fundamental difference between voice AI platforms designed for enterprise use cases vs entertainment use cases,” said Kevin Fredrick, Managing Partner at OneReach.ai. “Deepgram’s Nova-3 model and Nova-3-Medical model are leading voice AI offerings, including TTS, in terms of the accuracy, latency, efficiency, and scalability required for enterprise use cases.”

Benchmarking Nova-3 Medical: Accuracy, speed, and efficiency

Deepgram has conducted benchmarking to demonstrate the performance of Nova-3 Medical. The company claims the model delivers industry-leading transcription accuracy, optimising both overall word recognition and critical medical term accuracy.

  • Word Error Rate (WER): With a median WER of 3.45%, Nova-3 Medical outperforms competitors, achieving a 63.6% reduction in errors compared to the next best competitor. This enhanced precision minimises manual corrections and streamlines workflows.
  • Keyword Error Rate (KER): Crucially, Nova-3 Medical achieves a KER of 6.79%, marking a 40.35% reduction in errors compared to the next best competitor. This ensures that critical medical terms – such as drug names and conditions – are accurately transcribed, reducing the risk of miscommunication and patient safety issues.
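For reference, WER is the word-level edit distance between a hypothesis transcript and a ground-truth reference, divided by the reference length; KER is the same computation restricted to a list of critical terms. A minimal, dependency-free sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

On the pair ("take two tablets daily", "take tablets daily") this yields 0.25: one deletion over four reference words.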

In addition to accuracy, Nova-3 Medical excels in real-time applications. The model transcribes speech 5-40x faster than many alternative speech recognition vendors, making it ideal for telemedicine and digital health platforms. Its scalable architecture ensures high performance even as transcription volumes increase.

Furthermore, Nova-3 Medical is designed to be cost-effective. Starting at $0.0077 per minute of streaming audio – which Deepgram claims is less than half the price of comparable offerings from leading cloud providers – it allows healthcare tech companies to reinvest in innovation and accelerate product development.
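Since streaming is billed per minute, spend scales linearly with audio volume; a back-of-the-envelope estimate (the daily volume below is invented):

```python
STREAMING_RATE_USD_PER_MIN = 0.0077  # Deepgram's quoted starting price

def monthly_streaming_cost(minutes_per_day: float, days: int = 30) -> float:
    """Estimated monthly spend for a given daily transcription volume."""
    return minutes_per_day * days * STREAMING_RATE_USD_PER_MIN
```

A telehealth platform transcribing 1,000 minutes a day would spend roughly $231 a month at this rate.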

Deepgram’s Nova-3 Medical aims to empower developers to build transformative medical transcription applications, driving exceptional outcomes across healthcare.

(Photo by Alexander Sinn)

See also: Autoscience Carl: The first AI scientist writing peer-reviewed papers

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Top seven Voice of Customer (VoC) tools for 2025
https://www.artificialintelligence-news.com/news/top-seven-voice-of-customer-tools-for-2025/
Mon, 03 Mar 2025 09:32:11 +0000

The post Top seven Voice of Customer (VoC) tools for 2025 appeared first on AI News.

One of the powerful methods for enhancing customer experiences and building lasting relationships is through Voice of Customer (VoC) tools. These tools allow businesses to gather insights directly from their customers, helping them to improve services, products, and overall customer satisfaction.

What are voice of customer (VoC) tools?

VoC tools are specialised software applications designed to collect, analyse, and interpret customer feedback. Feedback can come from various sources, including surveys, social media, direct customer interactions, and product reviews. The primary goal of the tools is to build a comprehensive understanding of customer sentiment, pain points, and preferences.

VoC tools let organisations gather qualitative and quantitative data, translating the voice of their customers into actionable insights. By implementing these tools, businesses can achieve a deeper understanding of their customers, leading to informed decision-making and ultimately, enhanced customer loyalty.
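As a toy illustration of turning mixed feedback into something actionable, the sketch below averages survey scores and tallies themes in open-ended comments via a hypothetical keyword-to-theme map; real VoC tools use NLP models rather than keyword lookup:

```python
from collections import Counter

THEMES = {  # hypothetical keyword-to-theme map
    "price": "pricing", "cost": "pricing",
    "slow": "performance", "crash": "reliability",
    "support": "service", "helpful": "service",
}

def summarise_feedback(responses):
    """responses: list of (score_1_to_5, free_text_comment) pairs.
    Returns the average score plus a count of themes found in comments."""
    scores = [score for score, _ in responses]
    themes = Counter()
    for _, text in responses:
        for word in text.lower().split():
            if word in THEMES:
                themes[THEMES[word]] += 1
    avg = sum(scores) / len(scores) if scores else 0.0
    return {"average_score": avg, "themes": dict(themes)}
```

The quantitative half (scores) and the qualitative half (themes) together give the "comprehensive understanding" the tools aim for.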

Top 7 Voice of Customer (VoC) tools for 2025

Here are the top seven VoC tools to consider in 2025, each offering unique features and functions to help you capture the voice of your customers effectively:

1. Revuze

Revuze is an AI-driven VoC tool that focuses on extracting actionable insights from customer feedback, reviews, and surveys.

Key features:

  • Natural language processing to analyse open-ended responses.
  • Comprehensive reporting dashboards that highlight key themes.
  • The ability to benchmark against competitors.

Benefits: Revuze empowers businesses to turn large amounts of feedback into strategic insights, enhancing decision-making and customer engagement.

2. Satisfactory

Satisfactory is a user-friendly VoC tool that emphasises customer feedback collection through satisfaction surveys and interactive forms.

Key features:

  • Simple survey creation with customisable templates.
  • Live feedback tracking and reporting.
  • Integration with popular CRM systems like Salesforce.

Benefits: Satisfactory helps businesses quickly gather customer feedback, allowing for immediate action to improve customer satisfaction and experience.

3. GetFeedback

GetFeedback offers a streamlined platform for creating surveys and collecting customer insights, designed for usability across various industries.

Key features:

  • Easy drag-and-drop survey builder.
  • Real-time feedback collection via multiple channels.
  • Integration capabilities with other tools like Salesforce and HubSpot.

Benefits: GetFeedback provides actionable insights while ensuring an engaging experience for customers participating in surveys.

4. Chattermill

Chattermill focuses on analysing customer feedback through sophisticated AI and machine learning algorithms, turning unstructured data into actionable insights.

Key features:

  • Customer sentiment analysis across multiple data sources.
  • Automated reporting tools and dashboards.
  • Customisable alerts for key metrics and issues.

Benefits: Chattermill enables businesses to react quickly to customer feedback, enhancing their responsiveness and improving overall service quality.

5. Skeepers

Skeepers is designed for brands looking to amplify the customer voice by combining feedback gathering and brand advocacy functions.

Key features:

  • Comprehensive review management system.
  • Real-time customer jury feedback for products.
  • Customer advocacy programme integration.

Benefits: Skeepers helps brands transform customer insights into powerful endorsements, boosting brand reputation and fostering trust.

6. Medallia

Medallia is an established leader in the VoC space, providing an extensive platform for capturing feedback from various touchpoints throughout the customer journey.

Key features:

  • Robust analytics capabilities and AI-driven insights.
  • Multi-channel feedback collection, including mobile, web, and in-store.
  • Integration with existing systems for data flow.

Benefits: Medallia’s comprehensive suite offers valuable tools for organisations aiming to transform customer feedback into strategic opportunities.

7. InMoment

InMoment combines customer feedback across all channels, providing organisations with insights to enhance customer experience consistently.

Key features:

  • AI-powered analytics for deep insights and trends.
  • Multi-channel capabilities for collecting feedback.
  • Advanced reporting and visualisation tools.

Benefits: With InMoment, businesses can create a holistic view of the customer experience, driving improvements across the organisation.

Benefits of using VoC tools

  • Enhanced customer understanding: By capturing and analysing customer feedback, businesses gain insights into what customers truly want, their pain points, and overall satisfaction levels.
  • Improvement of products and services: VoC tools help organisations identify specific areas where products or services can be improved based on customer feedback, leading to increased satisfaction and loyalty.
  • Informed decision making: With access to real-time customer insights, organisations can make data-driven decisions, ensuring that strategies align with customer preferences.
  • Increased customer loyalty: When customers feel heard and valued, they are more likely to remain loyal to a brand, leading to repeat business and long-term growth.
  • Competitive advantage: Organisations that effectively use customer feedback can stay ahead of competitors by quickly adapting to market demands and trends.
  • Proactive issue resolution: VoC tools enable businesses to identify customer complaints early, allowing them to address issues proactively and improve overall customer satisfaction.
  • Enhanced employee engagement: A deep understanding of customer needs can help employees deliver better service, enhancing their engagement and job satisfaction.

How to choose VoC tools

Choosing the right VoC tool involves several considerations:

  • Define your goals: Before researching tools, clearly define what you want to achieve with VoC. Whether it’s improving product features, enhancing customer service, or understanding market trends, outlining your goals will help narrow your choices.
  • Assess your budget: VoC tools come with various pricing models. Determine your budget and evaluate the tools that provide the best value for your investment.
  • Evaluate features: Based on your goals, assess the features of each tool. Prioritise the features that align with your needs, like sentiment analysis, real-time reporting, or integration capabilities.
  • Check integration options: Ensure that the chosen VoC tool can easily integrate with your existing systems. Integration can save time and enhance the overall efficiency of data utilisation.
  • Look for scalability: As your business grows, your VoC needs may change. Choose a tool that can scale with your business and adapt to evolving customer insight demands.
  • Request demos and trials: Take advantage of free trials or request demos to see how the tools function in real-time. The experience can provide valuable information about usability and effectiveness.
  • Read reviews and case studies: Researching customer reviews, testimonials, and case studies can give you insights into how well the tool performs and its impact on businesses similar to yours.

Western drivers remain sceptical of in-vehicle AI
https://www.artificialintelligence-news.com/news/western-drivers-remain-sceptical-in-vehicle-ai/
Tue, 05 Nov 2024 12:58:15 +0000

The post Western drivers remain sceptical of in-vehicle AI appeared first on AI News.

A global study has unveiled a stark contrast in attitudes towards embracing in-vehicle AI between Eastern and Western markets, with European drivers particularly reluctant.

The research – conducted by MHP – surveyed 4,700 car drivers across China, the US, Germany, the UK, Italy, Sweden, and Poland, revealing significant geographical disparities in AI acceptance and understanding.

According to the study, while AI is becoming integral to modern vehicles, European consumers remain hesitant about its implementation and value proposition.

Regional disparities

The study found that 48 percent of Chinese respondents view in-car AI predominantly as an opportunity, while merely 23 percent of European respondents share this optimistic outlook. In Europe, 39 percent believe AI’s opportunities and risks are broadly balanced, while 24 percent take a negative stance, suggesting the risks outweigh potential benefits.

Understanding of AI technology also varies significantly by region. While over 80 percent of Chinese respondents claim to understand AI’s use in cars, this figure drops to just 54 percent among European drivers, highlighting a notable knowledge gap.

Marcus Willand, Partner at MHP and one of the study’s authors, notes: “The figures show that the prospect of greater safety and comfort due to AI can motivate purchasing decisions. However, the European respondents in particular are often hesitant and price-sensitive.”

The willingness to pay for AI features shows an equally stark divide. Just 23 percent of European drivers expressed willingness to pay for AI functions, compared to 39 percent of Chinese drivers. The study suggests that most users now expect AI features to be standard rather than optional extras.

Graphs showing which features the public believe can be significantly improved by in-vehicle AI.

Dr Nils Schaupensteiner, Associated Partner at MHP and study co-author, said: “Automotive companies need to create innovations with clear added value and develop both direct and indirect monetisation of their AI offerings, for example through data-based business models and improved services.”

In-vehicle AI opportunities

Despite these challenges, traditional automotive manufacturers maintain a trust advantage over tech giants. The study reveals that 64 percent of customers trust established car manufacturers with AI implementation, compared to 50 percent for technology firms like Apple, Google, and Microsoft.

Graph highlighting the public trust in various stakeholders regarding in-vehicle AI.

The research identified several key areas where AI could provide significant value across the automotive industry’s value chain, including pattern recognition for quality management, enhanced data management capabilities, AI-driven decision-making systems, and improved customer service through AI-powered communication tools.

“It is worth OEMs and suppliers considering the opportunities offered by the new technology along their entire value chain,” explains Augustin Friedel, Senior Manager and study co-author. “However, the possible uses are diverse and implementation is quite complex.”

The study reveals that while up to 79 percent of respondents express interest in AI-powered features such as driver assistance systems, intelligent route planning, and predictive maintenance, manufacturers face significant challenges in monetising these capabilities, particularly in the European market.

Graph showing the public interest in various in-vehicle AI features.

See also: MIT breakthrough could transform robot training


How to use AI-driven speech analytics in contact centres
https://www.artificialintelligence-news.com/news/how-to-use-ai-driven-speech-analytics-contact-centres/
Thu, 01 Aug 2024 12:52:21 +0000

The post How to use AI-driven speech analytics in contact centres appeared first on AI News.

Speech analytics driven by AI is speech recognition software that works using natural language processing and machine learning technologies. With speech analytics in call centres, you can convert live speech into text. After that, the program evaluates this text to reveal details about the needs, preferences, and sentiment of the customer.

In contact centres, speech analytics tools help:

  • Analyse voice recordings.
  • Provide feedback for agents. 
  • Improve customer experience.
  • Increase sales.

How does speech analytics driven by AI differ from the traditional one? What benefits can contact centres and businesses receive from it? Find the answers in this article.

How does AI-driven speech analytics differ from traditional tools?

They differ in several key aspects, rooted in the AI technologies described below:

Key components of AI-driven speech analytics

Here is a list of common AI-driven technologies used to optimise and improve the performance of contact centres and the applications they run:

Artificial intelligence is a branch of computer science that develops programs to solve complex problems by simulating the behaviour of intelligent beings. AI is able to reason, learn, solve problems, and self-correct.

Machine learning is a subset of AI that teaches computers through experience rather than explicit programming. It is a method of data analysis that uses statistical algorithms to find patterns in data and forecast future events.

Natural language processing allows a computer to understand spoken or written language. It can analyse syntax and semantics, which helps in determining meaning and generating suitable responses.

For example, it processes verbal commands given to intelligent virtual operators, virtual assistants that staff work with, or voice menus. Sentiment analysis is another application of this technology. More advanced natural language processing can “learn” to take context into account and read sarcasm, humour, and a range of other human emotions.

Natural language understanding, a subfield of natural language processing, enables a computer to comprehend written or spoken language by examining a sentence’s grammatical structure, syntax, and semantics. This helps in deciphering meaning and creating suitable answers.

Predictive analytics uses machine learning, data mining, and statistical analysis techniques to identify relationships, patterns, and trends in data. Such data can be used to build a predictive model that forecasts the likelihood of an event, a customer’s propensity to act in a certain way, and the possible consequences.

How does speech analytics work in contact centres?

Software for speech analytics gathers and examines data from conversations with customers. Transcripts of phone conversations, dashboards, and reports can all be created using the gathered data.

Agent productivity, customer satisfaction, call volume, and other metrics are all shown in real time to contact centre management through dashboards. Call transcripts are text versions of conversations, used for training and service quality control.

Speech analysis is most often carried out in the following stages:

#1 Interaction recording

A recording of a conversation that needs to be analysed. 

#2 Separating the audio tracks of interlocutors

This makes it easier to pinpoint issues precisely. For example, if the tracks overlap in a conversation between a manager and a client, one party is interrupting the other.

#3 Converting speech to text 

This step helps to obtain a text version of the conversation that will be used for subsequent research.

#4 Text transcript

Different text processing techniques are applied to the resulting text, including tagging words and phrases, identifying topics and themes, and assessing its tone. Terms, dialogues, and discussions are also processed at this stage.

#5 Data classification

By terms, topic, emotional tone, or other parameters.

#6 Data visualisation

Using charts, graphs, heat maps, and other visuals, the program clearly presents the results achieved.

#7 Data analytics 

During this phase, data is interpreted: trends are identified, key findings are highlighted, and conclusions are drawn.

The system records calls and produces detailed, complete reports, helping you identify errors in your work and find additional points of growth. This information supports further development of the project and, with the right choice of promotion tools and budget savings, can increase average order value.
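The seven stages above can be sketched end-to-end. Everything here is illustrative: the transcriber is a pluggable callable standing in for stages 1-3, and the small negative-word set stands in for real tone analysis:

```python
def analyse_call(transcribe, audio, keywords):
    """Minimal pipeline: transcribe, tag keywords, score tone, classify.
    `transcribe` is any speech-to-text callable taking raw audio."""
    text = transcribe(audio)                       # stages 1-3: audio to text
    words = text.lower().split()
    hits = [w for w in words if w in keywords]     # stage 4: keyword tagging
    negative = {"refund", "cancel", "complaint"}   # toy tone lexicon
    tone = "negative" if negative & set(words) else "neutral"
    label = "escalate" if tone == "negative" else "routine"  # stage 5
    return {"transcript": text, "keywords": hits,  # stages 6-7 would chart
            "tone": tone, "category": label}       # and interpret this dict
```

A call transcribed as "I want a refund now" would be tagged negative and routed for escalation, matching the problem-identification use cases described later in the article.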

How can AI-driven speech analytics help businesses?

Depending on the company size, industry, size of the contact centre, and other factors, different benefits of speech analytics will come to the fore. The universal advantages are the following:

Increasing the number of verified calls

Quality control teams in call centres check an average of two to four operator calls per month. With speech analytics, businesses can quickly review up to 100% of calls.

KPI fulfilment tracking

Various interaction metrics can be analysed with the use of speech analytics:

  • Request escalation rates
  • Out-of-script behaviour
  • Customer satisfaction
  • Average call handling time, etc.

Speech analytics tools can pinpoint the areas in which agents’ quality scores are lagging, then provide useful data to boost productivity.
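A minimal sketch of computing such KPIs from per-call records (the field names are invented for illustration):

```python
from statistics import mean

def kpi_summary(calls):
    """calls: list of dicts with 'duration_s', 'escalated' (bool),
    'on_script' (bool), and 'csat' (1-5 rating or None if unrated)."""
    rated = [c["csat"] for c in calls if c["csat"] is not None]
    return {
        "avg_handle_time_s": mean(c["duration_s"] for c in calls),
        "escalation_rate": sum(c["escalated"] for c in calls) / len(calls),
        "out_of_script_rate": sum(not c["on_script"] for c in calls) / len(calls),
        "avg_csat": mean(rated) if rated else None,
    }
```

Run over 100% of calls rather than a monthly sample, these figures become stable enough to act on per agent.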

Instant feedback

With faster analysis and 100% call coverage, supervisors can provide agents with individualised feedback more quickly. Many contact centres have begun implementing AI assistants to give agents real-time suggestions.

Improved operational efficiency

Speech analytics reduces the time for verification processes. Contact centres can handle large call volumes and enhance operational efficiency with its help.

Speech-to-text and text-to-speech voice assistants provide large-scale customer self-service for common queries, freeing up agents to handle more complicated scenarios.

Personalised learning

Managers and workforce development teams can develop individualised agent training programmes. This becomes feasible because each agent’s call performance and attributes are assessed in detail.

Higher customer service quality 

Speech analytics offers thorough insight into customer requirements. Using sentiment analysis, teams can identify the elements of a satisfying customer experience – or the indicators of a negative one – and use them to shape the customer experience and lifecycle.

Problem identification and management

Words and phrases used in consumer interactions can be found via speech analytics. Problem-call information can be instantly sent to supervisors by email or instant messenger. Managers are able to address challenging issues in a timely manner because of notifications. After that, they use reports and dashboards to evaluate the effectiveness of their decisions.

Customer sentiment analysis

Speech analytics can determine a speaker’s emotions at a given moment by considering speech characteristics such as voice volume and pitch. Contact centres can use this information to determine a customer’s general opinion of the business.
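Two of the low-level signal features such analysis typically starts from, loudness and a crude pitch correlate, can be computed with nothing but the standard library; real systems use much richer prosodic features:

```python
import math

def loudness_db(samples):
    """Root-mean-square level of a mono frame, in dB relative to full
    scale. samples: floats in [-1, 1]. A crude proxy for voice volume."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs that change sign: a rough
    correlate of pitch/brightness used in simple emotion features."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return crossings / (len(samples) - 1)
```

Frame-by-frame trajectories of features like these, rather than single values, are what an emotion classifier would actually consume.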

What difficulties could you expect when using AI-based speech analytics? 

Data privacy and security

Contact centres handle a large amount of personal and financial information. There is a risk of data breaches, unauthorised access, and misuse of customer information, which can lead to regulatory penalties and a loss of customer trust.

How to address:

Contact centres need to put strong data security procedures in place, including:

  • Data encryption
  • Strict access controls
  • Regular security audits, etc. 

It helps identify and address vulnerabilities. Also, you can employ solutions with built-in security features.

Cost of implementation

Implementing AI-based voice analytics can require a large financial outlay. Such costs include the following:

  • Purchasing software
  • Integrating new systems with existing infrastructure
  • Training staff
  • Ongoing maintenance and support

How to address:

Contact centres should start with an ROI analysis, projecting potential cost reductions as well as increased income. A phased implementation can help distribute costs, lessening the short-term financial load. Cloud-based solutions can also lower up-front expenses, as these are usually pay-as-you-go.

Technological complexity

Deploying advanced AI technologies and their integration with existing systems can be technically demanding and require specialised knowledge. 

How to address:

Implementation complexity can be decreased by collaborating with seasoned suppliers that have a solid track record. These vendors can provide end-to-end services, including integration, training, and ongoing support. 

The bottom line

Statistics show that mundane duties take up almost half of a contact centre agent’s working hours. Introducing modern speech analytics services significantly optimises processes and provides analytical data. Based on this data, you can develop a strategy for the company’s further development and improve customer relationships, building loyalty.

SAS aims to make AI accessible regardless of skill set with packaged AI models
https://www.artificialintelligence-news.com/news/sas-aims-to-make-ai-accessible-regardless-of-skill-set-with-packaged-ai-models/
Wed, 17 Apr 2024 23:37:00 +0000

The post SAS aims to make AI accessible regardless of skill set with packaged AI models appeared first on AI News.

SAS, a specialist in data and AI solutions, has unveiled what it describes as a “game-changing approach” for organisations to tackle business challenges head-on.

Introducing lightweight, industry-specific AI models for individual licence, SAS hopes to equip organisations with readily deployable AI technology to productionise real-world use cases with unparalleled efficiency.

Chandana Gopal, research director, Future of Intelligence, IDC, said: “SAS is evolving its portfolio to meet wider user needs and capture market share with innovative new offerings.

“An area that is ripe for SAS is productising models built on SAS’ core assets, talent and IP from its wealth of experience working with customers to solve industry problems.”

In today’s market, the consumption of models is primarily focused on large language models (LLMs) for generative AI. In reality, LLMs are a very small part of the modelling needs of real-world production deployments of AI and decision-making for businesses. With the new offering, SAS is moving beyond LLMs and delivering industry-proven deterministic AI models for use cases spanning fraud detection, supply chain optimisation, entity management, document conversation, health care payment integrity, and more.

Unlike traditional AI implementations that can be cumbersome and time-consuming, SAS’ industry-specific models are engineered for quick integration, enabling organisations to operationalise trustworthy AI technology and accelerate the realisation of tangible benefits and trusted results.

Expanding market footprint

Organisations are facing pressure to compete effectively and are looking to AI to gain an edge. At the same time, staffing data science teams has never been more challenging due to AI skills shortages. Consequently, businesses are demanding agility in using AI to solve problems and require flexible AI solutions to quickly drive business outcomes. SAS’ easy-to-use, yet powerful models tuned for the enterprise enable organisations to benefit from a half-century of SAS’ leadership across industries.

Delivering industry models as packaged offerings is one outcome of SAS’ commitment of $1 billion to AI-powered industry solutions. As outlined in the May 2023 announcement, the investment in AI builds on SAS’ decades-long focus on providing packaged solutions to address industry challenges in banking, government, health care and more.

Udo Sglavo, VP for AI and Analytics at SAS, said: “Models are the perfect complement to our existing solutions and SAS Viya platform offerings and cater to diverse business needs across various audiences, ensuring that innovation reaches every corner of our ecosystem.”

“By tailoring our approach to understanding specific industry needs, our frameworks empower businesses to flourish in their distinctive environments.”

Bringing AI to the masses

SAS is democratising AI by offering out-of-the-box, lightweight AI models – making AI accessible regardless of skill set – starting with an AI assistant for warehouse space optimisation. Leveraging technology like large language models, these assistants cater to non-technical users, translating interactions into optimised workflows seamlessly and aiding in faster planning decisions.

Sglavo said: “SAS Models provide organisations with flexible, timely and accessible AI that aligns with industry challenges.

“Whether you’re embarking on your AI journey or seeking to accelerate the expansion of AI across your enterprise, SAS offers unparalleled depth and breadth in addressing your business’s unique needs.”

The first SAS Models are expected to be generally available later this year.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Meta’s open-source speech AI models support over 1,100 languages https://www.artificialintelligence-news.com/news/meta-open-source-speech-ai-models-support-over-1100-languages/ https://www.artificialintelligence-news.com/news/meta-open-source-speech-ai-models-support-over-1100-languages/#respond Tue, 23 May 2023 12:46:19 +0000 https://www.artificialintelligence-news.com/?p=13101 Advancements in machine learning and speech recognition technology have made information more accessible to people, particularly those who rely on voice to access information. However, the lack of labelled data for numerous languages poses a significant challenge in developing high-quality machine-learning models. In response to this problem, the Meta-led Massively Multilingual Speech (MMS) project has […]

The post Meta’s open-source speech AI models support over 1,100 languages appeared first on AI News.

Advancements in machine learning and speech recognition technology have made information more accessible to people, particularly those who rely on voice to access information. However, the lack of labelled data for numerous languages poses a significant challenge in developing high-quality machine-learning models.

In response to this problem, the Meta-led Massively Multilingual Speech (MMS) project has made remarkable strides in expanding language coverage and improving the performance of speech recognition and synthesis models.

By combining self-supervised learning techniques with a diverse dataset of religious readings, the MMS project has achieved impressive results in growing the ~100 languages supported by existing speech recognition models to over 1,100 languages.

Breaking down language barriers

To address the scarcity of labelled data for most languages, the MMS project utilised religious texts, such as the Bible, which have been translated into numerous languages.

These translations provided publicly available audio recordings of people reading the texts, enabling the creation of a dataset comprising readings of the New Testament in over 1,100 languages.

By including unlabelled recordings of other religious readings, the project expanded language coverage to recognise over 4,000 languages.

Despite the dataset’s specific domain and predominantly male speakers, the models performed equally well for male and female voices. Meta also says the religious content did not introduce bias into the models.

Overcoming challenges through self-supervised learning

Training conventional supervised speech recognition models with just 32 hours of data per language is inadequate.

To overcome this limitation, the MMS project leveraged the benefits of the wav2vec 2.0 self-supervised speech representation learning technique.

By training self-supervised models on approximately 500,000 hours of speech data across 1,400 languages, the project significantly reduced the reliance on labelled data.

The resulting models were then fine-tuned for specific speech tasks, such as multilingual speech recognition and language identification.

Impressive results

Evaluation of the models trained on the MMS data revealed impressive results. In a comparison with OpenAI’s Whisper, the MMS models exhibited half the word error rate while covering 11 times more languages.
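A claim like “half the word error rate” refers to the standard WER metric: the word-level edit distance (substitutions, insertions, and deletions) between a model’s transcript and a reference transcript, divided by the number of reference words. A minimal illustrative implementation (not Meta’s evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed as Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution

    return dp[-1][-1] / len(ref)

# One deleted word out of six reference words: WER = 1/6
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

Production evaluations typically normalise text (casing, punctuation, numerals) before scoring, which can change WER noticeably.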

Furthermore, the MMS project successfully built text-to-speech systems for over 1,100 languages. Despite the limitation of having relatively few different speakers for many languages, the speech generated by these systems exhibited high quality.

While the MMS models have shown promising results, it is essential to acknowledge their imperfections. Mistranscriptions or misinterpretations by the speech-to-text model could result in offensive or inaccurate language. The MMS project emphasises collaboration across the AI community to mitigate such risks.

You can read the MMS paper here or find the project on GitHub.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI21 Labs raises $64M to help it compete against OpenAI https://www.artificialintelligence-news.com/news/ai21-labs-raises-64m-to-help-it-compete-against-openai/ https://www.artificialintelligence-news.com/news/ai21-labs-raises-64m-to-help-it-compete-against-openai/#respond Wed, 13 Jul 2022 11:52:18 +0000 https://www.artificialintelligence-news.com/?p=12158 AI21 Labs has raised $64 million in a funding round to help it compete against OpenAI and other NLP leaders. Competition in NLP (Natural Language Processing) is heating up. OpenAI is currently seen as the industry leader with its GPT-3 model but rivals are gaining traction. Investors see AI21 Labs as one of the most […]

The post AI21 Labs raises $64M to help it compete against OpenAI appeared first on AI News.

AI21 Labs has raised $64 million in a funding round to help it compete against OpenAI and other NLP leaders.

Competition in NLP (Natural Language Processing) is heating up. OpenAI is currently seen as the industry leader with its GPT-3 model but rivals are gaining traction.

Investors see AI21 Labs as one of the most promising contenders.

“We completed this round during a period of market uncertainty, which highlights the confidence our investors have in AI21’s vision to change the way people consume and produce information,” said Ori Goshen, Co-Founder and Co-CEO of AI21 Labs.

“The funding will allow us to accelerate the company’s global growth while continuing to develop advanced technology in the field of natural language processing. We are looking forward to growing our team and our offerings.”

The latest funding round was led by Ahren and brings AI21 Labs’ valuation to $664 million.

“NLP has reached a critical inflection point and AI21 has developed unique infrastructure and products to successfully serve a large and rapidly growing market,” commented Alice Newcombe-Ellis, Founding and General Partner of Ahren.

“We consider this team to be of the highest calibre, both technically and commercially, leading a differentiated company in a transformative space.”

AI21 Labs’ Jurassic-1 Jumbo model is around the size of GPT-3. The company has been gradually building products around it, including its ‘AI-as-a-Service’ platform AI21 Studio.

One of the consumer-facing products launched by AI21 Labs is Wordtune, an AI writing tool with millions of active users that was chosen by Google as one of its favourite extensions for 2021.

Another product, Wordtune Read, is able to analyse and summarise documents in seconds—enabling users to read long and complex text quickly and efficiently.

A survey last year by John Snow Labs found that 60 percent of budgets for NLP technologies increased by at least 10 percent in 2020, while 33 percent reported a 30 percent increase and 15 percent said their budget more than doubled.

NLP specialists like AI21 Labs are set to benefit greatly from the clear appetite for such technologies over the coming years.

(Image Credit: AI21 Labs)

Related: Meta’s NLLB-200 AI model improves translation quality by 44%

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

IRS expands voice bot options for faster service https://www.artificialintelligence-news.com/news/irs-expands-voice-bot-options-for-faster-service/ https://www.artificialintelligence-news.com/news/irs-expands-voice-bot-options-for-faster-service/#respond Tue, 21 Jun 2022 13:51:14 +0000 https://www.artificialintelligence-news.com/?p=12096 The US Internal Revenue Service has unveiled expanded voice bot options to help eligible taxpayers easily verify their identity to set up or modify a payment plan while avoiding long wait times. “This is part of a wider effort at the IRS to help improve the experience of taxpayers,” said IRS commissioner Chuck Rettig. “We […]

The post IRS expands voice bot options for faster service appeared first on AI News.

The US Internal Revenue Service has unveiled expanded voice bot options to help eligible taxpayers easily verify their identity to set up or modify a payment plan while avoiding long wait times.

“This is part of a wider effort at the IRS to help improve the experience of taxpayers,” said IRS commissioner Chuck Rettig. “We continue to look for ways to better assist taxpayers, and that includes helping people avoid waiting on hold or having to make a second phone call to get what they need. The expanded voice bots are another example of how technology can help the IRS provide better service to taxpayers.”

Voice bots run on software powered by artificial intelligence, which enables a caller to navigate an interactive voice response (IVR) system. The IRS has been using voice bots on numerous toll-free lines since January, enabling taxpayers with simple payment or notice questions to get what they need quickly and avoid waiting on hold. Taxpayers can always speak with an English- or Spanish-speaking IRS telephone representative if needed.

Eligible taxpayers who call the Automated Collection System (ACS) and Accounts Management toll-free lines and want to discuss payment plan options can authenticate or verify their identities through a personal identification number (PIN) creation process. Setting up a PIN is easy: Taxpayers will need their most recent IRS bill and some basic personal information to complete the process.
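The flow described above — a bot that handles simple questions, enrols callers without a PIN, and routes authenticated callers to self-service — can be sketched as a small routing function. The state names, PIN check, and branching below are invented for illustration; the IRS has not published its implementation:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Caller:
    has_pin: bool
    pin: Optional[str] = None


def route_call(caller: Caller, entered_pin: Optional[str], wants_payment_plan: bool) -> str:
    """Toy IVR routing: decide where a caller lands based on intent and authentication."""
    if not wants_payment_plan:
        return "faq_bot"                     # simple payment/notice questions stay with the bot
    if not caller.has_pin:
        return "pin_enrollment"              # needs a recent bill plus basic personal info
    if entered_pin == caller.pin:
        return "payment_plan_self_service"   # authenticated, no hold queue
    return "human_representative"            # verification failed: hand off to an agent


print(route_call(Caller(has_pin=True, pin="1234"), "1234", True))
```

A production IVR would of course add retry limits, lockouts, and audit logging around the PIN check.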

“To date, the voice bots have answered over three million calls. As we add more functions for taxpayers to resolve their issues, I anticipate many more taxpayers getting the service they need quickly and easily,” said Darren Guillot, IRS deputy commissioner of Small Business/Self Employed Collection & Operations Support.

Additional voice bot service enhancements are planned in 2022 that will allow authenticated individuals (taxpayers with established or newly created PINs) to get:

  • Account and return transcripts.
  • Payment history.
  • Current balance owed.

In addition to the payment lines, voice bots help people who call the Economic Impact Payment (EIP) toll-free line with general procedural responses to frequently asked questions. The IRS also added voice bots for the Advance Child Tax Credit toll-free line in February to provide similar assistance to callers who need help reconciling the credits on their 2021 tax return.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Zoom receives backlash for emotion-detecting AI https://www.artificialintelligence-news.com/news/zoom-receives-backlash-for-emotion-detecting-ai/ https://www.artificialintelligence-news.com/news/zoom-receives-backlash-for-emotion-detecting-ai/#respond Thu, 19 May 2022 08:22:19 +0000 https://www.artificialintelligence-news.com/?p=11988 Zoom has caused a stir following reports that it’s developing an AI system for detecting emotions. The system, first reported by Protocol, claims to scan users’ faces and their speech to determine their emotions. Zoom detailed the system more in a blog post last month. The company says ‘Zoom IQ’ will be particularly useful for […]

The post Zoom receives backlash for emotion-detecting AI appeared first on AI News.

Zoom has caused a stir following reports that it’s developing an AI system for detecting emotions.

The system, first reported by Protocol, claims to scan users’ faces and their speech to determine their emotions.

Zoom detailed the system further in a blog post last month. The company says ‘Zoom IQ’ will be particularly useful for helping salespeople improve their pitches based on the emotions of call participants.

Naturally, the system is seen as rather dystopian and has received its fair share of criticism.

On Wednesday, over 25 rights groups sent a joint letter to Zoom CEO Eric Yuan. The letter urges Zoom to cease research on emotion-based AI.

The letter’s signatories include the American Civil Liberties Union (ACLU), Muslim Justice League, and Access Now.

One of the key concerns is that emotion-detecting AI could be used for consequential decisions such as hiring or whether to grant loans, which risks deepening existing inequalities.

“Results are not intended to be used for employment decisions or other comparable decisions. All recommended ranges for metrics are based on publicly available research,” Zoom explained.

Zoom IQ tracks metrics including:

  • Talk-listen ratio
  • Talking speed
  • Filler words
  • Longest spiel (monologue)
  • Patience
  • Engaging questions
  • Next steps set up
  • Sentiment/Engagement analysis
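Several of these metrics can be derived mechanically from a diarised transcript. The sketch below is illustrative only — Zoom has not published how Zoom IQ computes its metrics — and assumes an invented (speaker, duration, text) segment format:

```python
import re

FILLER_WORDS = {"um", "uh", "er", "like"}


def call_metrics(segments):
    """Derive simple call metrics from (speaker, duration_sec, text) segments.
    The segment format and formulas here are assumptions for illustration."""
    rep_talk = sum(dur for spk, dur, _ in segments if spk == "rep")
    other_talk = sum(dur for spk, dur, _ in segments if spk != "rep")
    rep_words = [w for spk, _, text in segments if spk == "rep"
                 for w in re.findall(r"[a-z']+", text.lower())]
    return {
        "talk_listen_ratio": rep_talk / other_talk if other_talk else float("inf"),
        "talking_speed_wpm": 60 * len(rep_words) / rep_talk if rep_talk else 0.0,
        "filler_words": sum(w in FILLER_WORDS for w in rep_words),
        "longest_spiel_sec": max((dur for spk, dur, _ in segments if spk == "rep"), default=0),
    }


call = [
    ("rep", 30, "So, um, our plan basically covers, like, everything you need."),
    ("customer", 20, "What does it cost per seat?"),
    ("rep", 45, "Great question. Uh, pricing starts at ten dollars per seat."),
]
print(call_metrics(call))
```

A real system would work from diarised ASR output with word-level timestamps rather than hand-labelled segments.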

Esha Bhandari, Deputy Director of the ACLU Speech, Privacy, and Technology Project, called emotion-detecting AI “creepy” and “a junk science”.

(Photo by iyus sugiharto on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

DeepMind co-founder Mustafa Suleyman launches new AI venture https://www.artificialintelligence-news.com/news/deepmind-co-founder-mustafa-suleyman-launches-new-ai-venture/ https://www.artificialintelligence-news.com/news/deepmind-co-founder-mustafa-suleyman-launches-new-ai-venture/#respond Wed, 09 Mar 2022 12:08:56 +0000 https://artificialintelligence-news.com/?p=11742 DeepMind co-founder Mustafa Suleyman has joined two other high-profile industry figures in launching a new venture called Inflection AI. LinkedIn co-founder Reid Hoffman is joining Suleyman on the venture. “Reid and I are excited to announce that we are co-founding a new company, Inflection AI,” wrote Suleyman in a statement. “Inflection will be an AI-first […]

The post DeepMind co-founder Mustafa Suleyman launches new AI venture appeared first on AI News.

DeepMind co-founder Mustafa Suleyman has joined two other high-profile industry figures in launching a new venture called Inflection AI.

LinkedIn co-founder Reid Hoffman is joining Suleyman on the venture.

“Reid and I are excited to announce that we are co-founding a new company, Inflection AI,” wrote Suleyman in a statement.

“Inflection will be an AI-first consumer products company, incubated at Greylock, with all the advantages and expertise that come from being part of one of the most storied venture capital firms in the world.”

Dr Karén Simonyan, another former DeepMind AI expert, will serve as Inflection AI’s chief scientist and its third co-founder.

“Karén is one of the most accomplished deep learning leaders of his generation. He completed his PhD at Oxford, where he designed VGGNet and then sold his first company to DeepMind,” continued Suleyman.

“He created and led the deep learning scaling team and played a key role in such breakthroughs as AlphaZero, AlphaFold, WaveNet, and BigGAN.”

Inflection AI will focus on machine learning and natural language processing.

“Recent advances in artificial intelligence promise to fundamentally redefine human-machine interaction,” explains Suleyman.

“We will soon have the ability to relay our thoughts and ideas to computers using the same natural, conversational language we use to communicate with people. Over time these new language capabilities will revolutionise what it means to have a digital experience.”

Interest in natural language processing is surging. This month, Microsoft completed its $19.7 billion acquisition of Siri voice recognition engine creator Nuance.

Suleyman departed Google in January 2022 following an eight-year stint at the company.

While at Google, Suleyman was placed on administrative leave following bullying allegations. During a podcast, he said that he “really screwed up” and was “very sorry about the impact that caused people and the hurt people felt.”

Suleyman joined venture capital firm Greylock after leaving Google.

“There are few people who are as visionary, knowledgeable and connected across the vast artificial intelligence landscape as Mustafa,” wrote Hoffman, a Greylock partner, in a post at the time.

“Mustafa has spent years thinking about how technological advances impact society, and he cares deeply about the ethics and governance supporting new AI systems.”

Inflection AI was incubated by Greylock. Suleyman and Hoffman will both remain venture partners at the company.

Suleyman promises that more details about Inflection AI’s product plans will be provided over the coming months.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Microsoft acquires Nuance to usher in ‘new era of outcomes-based AI’ https://www.artificialintelligence-news.com/news/microsoft-acquires-nuance-new-era-outcomes-based-ai/ https://www.artificialintelligence-news.com/news/microsoft-acquires-nuance-new-era-outcomes-based-ai/#respond Tue, 08 Mar 2022 15:46:00 +0000 https://artificialintelligence-news.com/?p=11738 Microsoft has completed its acquisition of Siri backend creator Nuance in a bumper deal that it says will usher in a “new era of outcomes-based AI”. “Completion of this significant and strategic acquisition brings together Nuance’s best-in-class conversational AI and ambient intelligence with Microsoft’s secure and trusted industry cloud offerings,” said Scott Guthrie, Executive Vice […]

The post Microsoft acquires Nuance to usher in ‘new era of outcomes-based AI’ appeared first on AI News.

Microsoft has completed its acquisition of Siri backend creator Nuance in a bumper deal that it says will usher in a “new era of outcomes-based AI”.

“Completion of this significant and strategic acquisition brings together Nuance’s best-in-class conversational AI and ambient intelligence with Microsoft’s secure and trusted industry cloud offerings,” said Scott Guthrie, Executive Vice President of the Cloud + AI Group at Microsoft. 

“This powerful combination will help providers offer more affordable, effective, and accessible healthcare, and help organisations in every industry create more personalised and meaningful customer experiences. I couldn’t be more pleased to welcome the Nuance team to our Microsoft family.”

Nuance became a household name (in techie households, anyway) for creating the speech recognition engine that powers Apple’s smart assistant, Siri. However, Nuance has been in the speech recognition business since 2001, when it was known as ScanSoft.

While it may not have made many big headlines in recent years, Nuance has continued to make some impressive advancements—which caught the attention of Microsoft.

Microsoft announced its intention to acquire Nuance for $19.7 billion last year, in the company’s second-largest deal, behind its $26.2 billion acquisition of LinkedIn (both deals would be blown out of the water by Microsoft’s proposed $70 billion purchase of Activision Blizzard).

The proposed acquisition of Nuance caught the attention of global regulators. It was cleared in the US relatively quickly, while the EU’s regulator got in the festive spirit and cleared the deal just prior to last Christmas. The UK’s Competition and Markets Authority finally gave it a thumbs-up last week.

Regulators examined whether there may be anti-competition concerns in some verticals where both companies are active, such as healthcare. However, after investigation, the regulators determined that competition shouldn’t be affected by the deal.

The EU, for example, determined that “competing transcription service providers in healthcare do not depend on Microsoft for cloud computing services” and that “transcription service providers in the healthcare sector are not particularly important users of cloud computing services”.

Furthermore, the EU’s regulator concluded:

  • Microsoft-Nuance will continue to face stiff competition from rivals in the future.
  • There’d be no ability/incentive to foreclose existing market solutions.
  • Nuance can only use the data it collects for its own services.
  • The data will not provide Microsoft with an advantage to shut out competing software providers.

The companies appear keen to ensure that people are aware the deal is about more than just healthcare.

“Combining the power of Nuance’s deep vertical expertise and proven business outcomes across healthcare, financial services, retail, telecommunications, and other industries with Microsoft’s global cloud ecosystems will enable us to accelerate our innovation and deploy our solutions more quickly, more seamlessly, and at greater scale to solve our customers’ most pressing challenges,” said Mark Benjamin, CEO of Nuance.

Benjamin will remain the CEO of Nuance and will report to Guthrie.

(Photo by Omid Armin on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The EU’s AI rules will likely take over a year to be agreed https://www.artificialintelligence-news.com/news/eu-ai-rules-likely-take-over-year-to-be-agreed/ https://www.artificialintelligence-news.com/news/eu-ai-rules-likely-take-over-year-to-be-agreed/#respond Thu, 17 Feb 2022 12:34:20 +0000 https://artificialintelligence-news.com/?p=11691 Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon. Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and […]

The post The EU’s AI rules will likely take over a year to be agreed appeared first on AI News.

Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon.

Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and EU members – believe they can lead in ethical standards.

In the draft of the EU regulations, companies that are found guilty of AI misuse face a fine of €30 million or six percent of their global turnover (whichever is greater). The risk of such fines has been criticised as driving investments away from Europe.

The EU’s draft AI regulation classifies systems into three risk categories:

  • Limited risk – includes systems like chatbots, inventory management, spam filters, and video games.
  • High risk – includes systems that make vital decisions like evaluating creditworthiness, recruitment, justice administration, and biometric identification in non-public spaces.
  • Unacceptable risk – includes systems that are manipulative or exploitative, create social scoring, or conduct real-time biometric authentication in public spaces for law enforcement.

Unacceptable risk systems will face a blanket ban from deployment in the EU while limited risk will require minimal oversight.

Organisations deploying high-risk AI systems would be required to have things like:

  • Human oversight.
  • A risk-management system.
  • Record keeping and logging.
  • Transparency to users.
  • Data governance and management.
  • Conformity assessment.
  • Government registration.
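The three-tier model above amounts to a lookup from system category to obligations. The sketch below is purely illustrative — the category assignments paraphrase this article’s summary, not the legal text, and it is not legal guidance:

```python
# Toy lookup of the draft Act's three risk tiers, as summarised above.
RISK_TIER = {
    "chatbot": "limited",
    "spam_filter": "limited",
    "credit_scoring": "high",
    "recruitment": "high",
    "social_scoring": "unacceptable",
    "public_realtime_biometric_id": "unacceptable",
}

OBLIGATIONS = {
    "limited": ["minimal oversight"],
    "high": ["human oversight", "risk-management system", "record keeping and logging",
             "transparency to users", "data governance and management",
             "conformity assessment", "government registration"],
    "unacceptable": ["blanket ban on deployment in the EU"],
}


def obligations_for(system: str):
    """Return the draft-regulation obligations for an illustrative system category."""
    tier = RISK_TIER.get(system)
    if tier is None:
        return ["classify the system first"]
    return OBLIGATIONS[tier]


print(obligations_for("recruitment"))
```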

However, the cumbersome nature of the EU – requiring agreement from all member states, each with their own priorities – means that new regulations are often subject to more debate and delay than national lawmaking.

Reuters reports that two key lawmakers said on Wednesday that the EU’s AI regulations will likely take more than a year longer to agree. The delay is primarily due to debates over whether facial recognition should be banned and who should enforce the rules.

“Facial recognition is going to be the biggest ideological discussion between the right and left,” said one lawmaker, Dragos Tudorache, in a Reuters interview.

“I don’t believe in an outright ban. For me, the solution is to put the right rules in place.”

With leading academic institutions and more than 1,300 AI companies employing over 30,000 people, the UK is the biggest destination for AI investment in Europe and the third-biggest in the world. Between January and June 2021, global investors poured £13.5 billion into more than 1,400 “deep tech” UK private technology firms – more than Germany, France, and Israel combined.

In September 2021, the UK published its 10-year National Artificial Intelligence Strategy in a bid to secure its European AI leadership. Governance plays a large role in the strategy.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” commented DCMS Minister Chris Philp.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

As part of its strategy, the UK is creating an ‘AI Standards Hub’ to coordinate the country’s engagement in establishing global rules and is working with The Alan Turing Institute to update guidance on AI ethics and safety.

“We are proud of creating a dynamic, collaborative community of diverse researchers and are growing world-leading capabilities in responsible, safe, ethical, and inclusive AI research and innovation,” said Professor Sir Adrian Smith, Chief Executive of The Alan Turing Institute.

Striking a balance between innovation-stifling overregulation and ethics-compromising underregulation is never a simple task. It will be interesting to observe how AI regulations in Europe will differ across the continent and beyond.

(Photo by Christian Lue on Unsplash)

Related: British intelligence agency GCHQ publishes ‘Ethics of AI’ report

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
