Joe Green, Author at AI News – https://www.artificialintelligence-news.com

Conversations with AI: Education
Thu, 01 May 2025 10:27:00 +0000
https://www.artificialintelligence-news.com/news/conversations-with-ai-education-implications-and-future/

How can AI be used in education? An ethical debate, with an AI

The classroom hasn’t changed much in over a century. A teacher at the front, rows of students listening, and a curriculum defined by what’s testable – not necessarily what’s meaningful.

But AI – arguably the most powerful tool humanity has created in recent years – is about to break that model open. Not with smarter software or faster grading, but by forcing us to ask: “What is the purpose of education in a world where machines could teach?”

At AI News, rather than speculate about distant futures or lean on product announcements and edtech deals, we started a conversation – with an AI. We asked it what it sees when it looks at the classroom, the teacher, and the learner.

What follows is a distilled version of that exchange, given here not as a technical analysis, but as a provocation.

The system cracks

Education is under pressure worldwide: Teachers are overworked, students are disengaged, and curricula feel outdated in a changing world. Into this comes AI – not as a patch or plug-in, but as a potential accelerant.

Our opening prompt: What roles might an AI play in education?

The answer was wide-ranging:

  • Personalised learning pathways
  • Intelligent tutoring systems
  • Administrative efficiency
  • Language translation and accessibility tools
  • Behavioural and emotional recognition
  • Scalable, always-available content delivery

These are the nuts and bolts of an education system. But what about meaning and ethics?

Flawed by design?

One concern kept resurfacing: bias.

We asked the AI: “If you’re trained on the internet – and the internet is the output of biased, flawed human thought – doesn’t that mean your responses are equally flawed?”

The AI acknowledged the logic. Bias is inherited. Inaccuracies, distortions, and blind spots all travel from teacher to pupil. What an AI learns, it learns from us, and it can reproduce our worst habits at vast scale.

But we weren’t interested in letting human teachers off the hook either. So we asked: “Isn’t bias true of human educators too?”

The AI agreed: human teachers are also shaped by the limitations of their training, culture, and experience. Both systems – AI and human – are imperfect. But only humans can reflect and care.

That led us to a deeper question: if both AI and humans can reproduce bias, why use AI at all?

Why use AI in education?

The AI outlined what it felt were its clear advantages – systemic rather than revolutionary. Personalised learning intrigued us most: after all, doing things fast and at scale is what software and computers are good at.

We asked: How much data is needed to personalise learning effectively?

The answer: it varies. But at scale, it could require gigabytes or even terabytes of student data – performance, preferences, feedback, and longitudinal tracking over years.

Which raises its own question: “What do we trade in terms of privacy for that precision?”

A personalised or fragmented future?

Putting aside the issue of whether we’re happy with student data being codified and ingested, if every student were to receive a tailored lesson plan, what happens to the shared experience of learning?

Education has always been more than information. It’s about dialogue, debate, discomfort, empathy, and encounters with other minds, not just mirrored algorithms. AI can tailor a curriculum, but it can’t recreate the unpredictable alchemy of a classroom.

We risk mistaking customisation for connection.

“I use ChatGPT to provide more context […] to plan, structure and compose my essays.” – James, 17, Ottawa, Canada.

The teacher reimagined

Where does this leave the teacher?

In the AI’s view: liberated. Freed from repetitive tasks and administrative overload, the teacher is able to spend more time guiding, mentoring, and cultivating critical thinking.

But this requires a shift in mindset – from delivering knowledge to curating wisdom. In broad terms, from part-time administrator, part-time teacher, to in-classroom collaborator.

AI won’t replace teachers, but it might reveal which parts of the teaching job were never the most important.

“The main way I use ChatGPT is to either help with ideas for when I am planning an essay, or to reinforce understanding when revising.” – Emily, 16, Eastbourne College, UK.

What we teach next

So, what do we want students to learn?

In an AI-rich world, critical thinking, ethical reasoning, and emotional intelligence rise in value. Ironically, the more intelligent our machines become, the more we’ll need to double down on what makes us human.

Perhaps the ultimate lesson isn’t in what AI can teach us – but in what it can’t, or what it shouldn’t even try.

Conclusion

The future of education won’t be built by AI alone. This is our opportunity to modernise classrooms – and to reimagine them. Not to fear the machine, but to ask the bigger question: “What is learning in a world where all knowledge is available?”

Whatever the answer is – that’s how we should be teaching next.

(Image source: “Large lecture college classes” by Kevin Dooley is licensed under CC BY 2.0)

See also: AI in education: Balancing promises and pitfalls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Red Hat on open, small language models for responsible, practical AI
Tue, 22 Apr 2025 07:49:15 +0000
https://www.artificialintelligence-news.com/news/red-hat-on-open-small-language-models-for-responsible-practical-ai/

As geopolitical events shape the world, it’s no surprise that they affect technology too – specifically, in the ways that the current AI market is changing, alongside its accepted methodology, how it’s developed, and the ways it’s put to use in the enterprise.

Expectations of what AI can deliver are at present being balanced against real-world realities, and a good deal of suspicion about the technology remains, weighed against those who are embracing it even at this nascent stage. The closed nature of the best-known LLMs is being challenged by more open alternatives like Llama, DeepSeek, and Baidu’s recently-released Ernie X1.

In contrast, open source development provides transparency and the ability to contribute back, which is more in tune with the desire for “responsible AI”: a phrase that encompasses the environmental impact of large models, how AIs are used, what comprises their learning corpora, and issues around data sovereignty, language, and politics. 

As the company that’s demonstrated the viability of an economically-sustainable open source development model for its business, Red Hat wants to extend its open, collaborative, and community-driven approach to AI. We spoke recently to Julio Guijarro, the CTO for EMEA at Red Hat, about the organisation’s efforts to unlock the undoubted power of generative AI models in ways that bring value to the enterprise, in a manner that’s responsible, sustainable, and as transparent as possible. 

Julio underlined how much education is still needed in order for us to more fully understand AI, stating, “Given the significant unknowns about AI’s inner workings, which are rooted in complex science and mathematics, it remains a ‘black box’ for many. This lack of transparency is compounded where it has been developed in largely inaccessible, closed environments.”

There are also issues with language (European and Middle-Eastern languages are very much under-served), data sovereignty, and fundamentally, trust. “Data is an organisation’s most valuable asset, and businesses need to make sure they are aware of the risks of exposing sensitive data to public platforms with varying privacy policies.” 

The Red Hat response 

Red Hat’s response to global demand for AI has been to pursue what it feels will bring most benefit to end-users, and remove many of the doubts and caveats that are quickly becoming apparent when the de facto AI services are deployed. 

One answer, Julio said, is small language models, running locally or in hybrid clouds, on non-specialist hardware, and accessing local business information. SLMs are compact, efficient alternatives to LLMs, designed to deliver strong performance for specific tasks while requiring significantly fewer computational resources. There are smaller cloud providers that can be utilised to offload some compute, but the key is having the flexibility and freedom to choose to keep business-critical information in-house, close to the model, if desired. That’s important, because information in an organisation changes rapidly. “One challenge with large language models is they can get obsolete quickly because the data generation is not happening in the big clouds. The data is happening next to you and your business processes,” he said. 

There’s also the cost. “Your customer service querying an LLM can present a significant hidden cost – before AI, you knew that when you made a data query, it had a limited and predictable scope. Therefore, you could calculate how much that transaction could cost you. In the case of LLMs, they work on an iterative model. So the more you use it, the better its answer can get, and the more you like it, the more questions you may ask. And every interaction is costing you money. So the same query that before was a single transaction can now become a hundred, depending on who and how is using the model. When you are running a model on-premise, you can have greater control, because the scope is limited by the cost of your own infrastructure, not by the cost of each query.”
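Julio’s point about open-ended LLM cost can be made concrete with a little arithmetic. The sketch below is purely illustrative – the prices, token counts, and hundred-turn session are hypothetical stand-ins, not Red Hat figures:

```python
# Hypothetical cost comparison: a traditional, fixed-scope data query
# versus an iterative LLM conversation. All prices are illustrative.

DB_QUERY_COST = 0.0001         # flat cost per database transaction ($)
LLM_COST_PER_1K_TOKENS = 0.01  # metered cost per 1,000 tokens ($)

def db_cost(queries: int) -> float:
    """A classic query has a fixed, predictable scope: cost scales linearly."""
    return queries * DB_QUERY_COST

def llm_cost(turns: int, avg_tokens_per_turn: int) -> float:
    """An LLM session is iterative: every follow-up turn is billed,
    so one 'question' can fan out into many metered calls."""
    return turns * (avg_tokens_per_turn / 1000) * LLM_COST_PER_1K_TOKENS

# One customer-service lookup...
single = db_cost(1)
# ...versus the same intent explored over 100 conversational turns.
conversation = llm_cost(turns=100, avg_tokens_per_turn=500)

print(f"fixed query:   ${single:.4f}")
print(f"LLM session:   ${conversation:.4f}")
print(f"cost multiple: {conversation / single:.0f}x")
```

The point isn’t the exact figures – it’s that the classic query has a known upper bound, while the conversational model’s cost depends on user behaviour, which is exactly the unpredictability Julio describes on-premise infrastructure capping.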

Organisations needn’t brace themselves for a procurement round that involves writing a huge cheque for GPUs, however. Part of Red Hat’s current work is optimising models (in the open, of course) to run on more standard hardware. It’s possible because the specialist models that many businesses will use don’t need the huge, general-purpose data corpus that has to be processed at high cost with every query. 

“A lot of the work that is happening right now is people looking into large models and removing everything that is not needed for a particular use case. If we want to make AI ubiquitous, it has to be through smaller language models. We are also focused on supporting and improving vLLM (the inference engine project) to make sure people can interact with all these models in an efficient and standardised way wherever they want: locally, at the edge or in the cloud,” Julio said. 

Keeping it small 

Using and referencing local data pertinent to the user means that the outcomes can be crafted according to need. Julio cited projects in the Arab- and Portuguese-speaking worlds that wouldn’t be viable using the English-centric household name LLMs. 

There are a couple of other issues, too, that early-adopter organisations have found in practical, day-to-day use of LLMs. The first is latency – which can be problematic in time-sensitive or customer-facing contexts. Having focused resources and relevantly-tailored results just a network hop or two away makes sense.

Secondly, there is the trust issue: an integral part of responsible AI. Red Hat advocates for open platforms, tools, and models so we can move towards greater transparency, understanding, and the ability for as many people as possible to contribute. “It is going to be critical for everybody,” Julio said. “We are building capabilities to democratise AI, and that’s not only publishing a model, it’s giving users the tools to be able to replicate them, tune them, and serve them.” 

Red Hat recently acquired Neural Magic to help enterprises more easily scale AI, to improve performance of inference, and to provide even greater choice and accessibility of how enterprises build and deploy AI workloads with the vLLM project for open model serving. Red Hat, together with IBM Research, also released InstructLab to open the door to would-be AI builders who aren’t data scientists but who have the right business knowledge. 

There’s a great deal of speculation around if, or when, the AI bubble might burst, but such conversations tend to gravitate to the economic reality that the big LLM providers will soon have to face. Red Hat believes that AI has a future in a use case-specific and inherently open source form, a technology that will make business sense and that will be available to all. To quote Julio’s boss, Matt Hicks (CEO of Red Hat), “The future of AI is open.” 

Supporting Assets: 

Tech Journey: Adopt and scale AI

Dream properties at the touch of a button: Quant and Huawei in Saudi Arabia
Mon, 17 Mar 2025 09:30:50 +0000
https://www.artificialintelligence-news.com/news/property-investment-market-changed-by-quant-and-huawei-in-saudi-arabia/

Despite being one of the richest countries in the world – and despite investors’ overwhelming preference for its property market over stocks – Saudi Arabia remained data-poor in the real-estate sector until relatively recently.

That was something Ahmed Bukhamseen wanted to change, and his company Quant set out to transform the property investment market and open up pricing information to buyers, speculators, and investors.

As you might expect, at the core of Quant is data – advertised and realised prices, mapping, details of proposed and ongoing construction, geo-spatial data, imagery, and much more. We spoke to Ahmed at the recent MWC (Mobile World Congress) in Barcelona about the company’s vision and its journey to date. But first we asked what his data platform of choice was – the basis on which the business runs.

“Our main challenge was to first comply with data protection. So, Saudi implemented new regulation laws about data protection, similar to GDPR, run by the Saudi Data and AI Authority. And for that we needed to have a local host for data protection. So that was our main driver for moving from Azure – Microsoft here in Europe – back to Saudi.”

Saudi Arabia, the Middle East, Africa, and APAC are target markets for Chinese tech giant Huawei, but it wasn’t an immediate given for Quant. “We discovered that Huawei offer cloud services late […] about this time last year. We started comparing Huawei with others.”

As Quant’s business model developed, so too did its requirements. The company started out just supplying information it drew from the Saudi municipal data registries, but has since enriched that core information with local data, and high-res satellite imagery.

“We were using a data centre [here] and the data had to go from Saudi to Europe and back. We were running AI on the edge, on images without storing them. But now we have different use cases, so we needed to store data before and after processing, after enrichment or maximisation. There was a huge cost, and we didn’t expect it at the beginning.”

Part of the issue was the imagery the company was pulling down from satellites to get up-to-date pictures of existing buildings on the ground, and those in the process of construction. It was grabbing detailed images once every two weeks, and is about to go live with a daily satellite photography cadence.

“We started testing Huawei in terms of latency, especially, because we streamed high volume data, and would like the individuals using our mobile application to get a seamless experience, and start navigating in-app.”

The app released by Quant opens the property market to anyone: individuals buying their first place, portfolio managers scoping new possibilities, investors of all sizes, landlords, and those – like Ahmed was once himself – hoping just to find a place to rent without the massive variance in prices that was the norm before Quant began its service.

Quant combines different data sources – property prices, maps, satellite imagery, and government-approved planning documents among them – to give its app users everything they need to find property or land to buy and sell.

The future will see Quant revising its data, further enriching it, and adding value for its clients. “We need to develop specific feature detection or object detection. For example, municipality data could state a company has zoning for a warehouse or extension, and we don’t know which one the owner decided on. But from space, we can build models and see the exact building plan.”

Although Quant likely has the technical ‘chops’ to build its own models, it’s turning to Huawei:

“With our new development, computing the model [ourselves] would take us six months to deploy. Now when we run it, it takes four hours. And last time I checked with the team, we run it every week,” Ahmed said.

Quant’s plans include a service for retailers that will advise on the best areas to site their stores, given the known demographics of a neighbourhood – all data it collects, collates, enriches, and presents. Plus, there is a project on the table that extends the data scope to take in the rental market.

Given the speed at which Saudi Arabia develops, Quant and delivery partner Huawei need to move at least as fast. “We want to see you in Saudi, and see the return on real estate! We don’t talk about 2% or 5% growth: We’re talking about multiplying investments similar to the Bitcoin market. Foreign investors will be able to buy property or land with just the touch of a button, and they need data for that. They need to know the growth areas, how much it costs in perspective, before making decisions.”

You can download the Quant app, read more about the company on its website, and check out Huawei’s service offerings for Saudi and beyond.

(Image source: European Space Agency, licensed under CC BY-SA 3.0 IGO.)

French initiative for responsible AI leaders
Tue, 04 Feb 2025 13:17:12 +0000
https://www.artificialintelligence-news.com/news/french-initiative-for-responsible-ai-leaders/

ESSEC Business School and Accenture have announced the launch of a new initiative, ‘AI for Responsible Leadership,’ which marks the 10th anniversary of the establishment of the role of Chair at ESSEC, titled the ESSEC Accenture Strategic Business Analytics Chair.

The initiative aims to encourage leaders to use artificial intelligence in ways that are responsible and ethical, and that lead to high levels of professional performance. It also aims to equip current and future leaders with the skills they will need to face the challenges ahead, whether economic, environmental, or social.

Several organisations support the initiative, including institutions, businesses, and specialised groups – among them the ESSEC Metalab for Data, Technology & Society, and Accenture Research.

Executive Director of the ESSEC Metalab, Abdelmounaim Derraz, spoke of the collaboration, saying, “Technical subjects are continuing to shake up business schools, and AI has opened up opportunities for collaboration between partner companies, researchers, and other members of the ecosystem (students, think tanks, associations, [and] public service).”

ESSEC and Accenture aim to integrate perspectives from multiple fields of expertise, an approach that is a result of experimentation in the decade the Chair has existed.

The elements of the initiative include workshops and talks designed to promote the exchange of knowledge and methods. It will also include a ‘barometer’ to help track AI’s implementation and overall impact on responsible leadership.

The initiative will engage with a network of institutions and academic publications, and an annual Grand Prix will recognise projects that focus on and explore the subject of AI and leadership.

Fabrice Marque, founder of the initiative and holder of the current ESSEC Accenture Strategic Business Analytics Chair, said, “For years, we have explored the potential of using data and artificial intelligence in organisations. The synergies we have developed with our partners (Accenture, Accor, Dataiku, Engie, Eurofins, MSD, Orange) allowed us to evaluate and test innovative solutions before deploying them.

“With this initiative, we’re taking a major step: bringing together an engaged ecosystem to sustainably transform how leaders think, decide, and act in the face of tomorrow’s challenges. Our ambition is clear: to make AI a lever for performance, innovation and responsibility for […] leaders.”

Managing Director at Accenture and sponsor of the ESSEC/Accenture Chair and initiative, Aurélien Bouriot, said, “The ecosystem will benefit from the resources that Accenture puts at its disposal, and will also benefit our employees who participate.”

Laetitia Cailleteau, Managing Director at Accenture and leader of Responsible AI & Generative AI for Europe, highlighted the importance of future leaders understanding all aspects of AI.

“AI is a pillar of the ongoing industrial transformation. Tomorrow’s leaders must understand the technical, ethical, and human aspects and risks – and know how to manage them. In this way, they will be able to maximise value creation and generate a positive impact for the organisation, its stakeholders and society as a whole.”

Image credit: Wikimedia Commons

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek


New AI training techniques aim to overcome current challenges
Thu, 28 Nov 2024 11:58:28 +0000
https://www.artificialintelligence-news.com/news/o1-model-llm-ai-openai-training-research-next-generation/

OpenAI and other leading AI companies are developing new training techniques to overcome the limitations of current methods. Addressing unexpected delays and complications in the development of larger, more powerful language models, these fresh techniques focus on human-like behaviour to teach algorithms to ‘think’.

Reportedly developed by a dozen AI researchers, scientists, and investors, the new training techniques – which underpin OpenAI’s recent ‘o1’ model (formerly Q* and Strawberry) – have the potential to transform the landscape of AI development. The reported advances may also influence the types and quantities of resources AI companies will need on an ongoing basis, including specialised hardware and the energy required to develop AI models.

The o1 model is designed to approach problems in a way that mimics human reasoning and thinking, breaking down numerous tasks into steps. The model also utilises specialised data and feedback provided by experts in the AI industry to enhance its performance.

Since ChatGPT was unveiled by OpenAI in 2022, there has been a surge in AI innovation, and many technology companies claim existing AI models require expansion, be it through greater quantities of data or improved computing resources. Only then can AI models consistently improve.

Now, AI experts have reported limitations in scaling up AI models. The 2010s were a revolutionary period for scaling, but Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, says that the training of AI models, particularly in understanding language structures and patterns, has levelled off.

“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Scaling the right thing matters more now,” he said.

In recent times, AI lab researchers have experienced delays and challenges in developing and releasing large language models (LLMs) more powerful than OpenAI’s GPT-4 model.

First, there is the cost of training large models, often running into tens of millions of dollars. And, due to complications that arise, like hardware failing due to system complexity, a final analysis of how these models run can take months.

In addition to these challenges, training runs require substantial amounts of energy, often resulting in power shortages that can disrupt processes and impact the wider electricity grid. Another issue is the colossal amount of data large language models use – so much so that AI models have reportedly used up all accessible data worldwide.

Researchers are exploring a technique known as ‘test-time compute’ to improve current AI models during training or inference. The method involves generating multiple candidate answers in real time and selecting the best among them, so the model can allocate greater processing resources to difficult tasks that require human-like decision-making and reasoning. The aim: to make the model more accurate and capable.
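As a toy illustration of the sample-then-rank flavour of test-time compute described above – using a stand-in generator and scorer rather than any real model, and certainly not OpenAI’s actual method – the control loop is just best-of-N selection:

```python
import random

def generate_candidate(prompt: str, rng: random.Random) -> str:
    """Stand-in for sampling one answer from a model."""
    return f"answer-{rng.randint(0, 9)} to {prompt!r}"

def score(candidate: str) -> float:
    """Stand-in for a verifier or reward model that rates an answer.
    Deterministic toy score derived from the candidate's bytes."""
    return (sum(candidate.encode()) % 100) / 100

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    """Best-of-N test-time compute: spend more inference (a larger n)
    on harder prompts, then keep the top-scored candidate."""
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

# A harder question simply gets a bigger sampling budget.
easy = best_of_n("2 + 2?", n=2)
hard = best_of_n("prove the lemma", n=32)
print(easy)
print(hard)
```

The knob that matters is `n`: raising it trades extra inference cost for a better chance that at least one candidate scores well, which is the same trade Brown describes when letting a poker bot “think” longer instead of training a vastly larger model.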

Noam Brown, a researcher at OpenAI who helped develop the o1 model, shared an example of how a new approach can achieve surprising results. At the TED AI conference in San Francisco last month, Brown explained that “having a bot think for just 20 seconds in a hand of poker got the same boosting performance as scaling up the model by 100,000x and training it for 100,000 times longer.”

Rather than simply increasing the model size and training time, this can change how AI models process information and lead to more powerful, efficient systems.

It is reported that other AI labs have been developing versions of the o1 technique; these include xAI, Google DeepMind, and Anthropic. Competition in the AI world is nothing new, but we could see a significant impact on the AI hardware market as a result of the new techniques. Companies like Nvidia, which currently dominates the supply of AI chips thanks to high demand for its products, may be particularly affected by updated AI training techniques.

Nvidia became the world’s most valuable company in October, and its rise in fortunes can be largely attributed to its chips’ use in AI arrays. New techniques may impact Nvidia’s market position, forcing the company to adapt its products to meet the evolving AI hardware demand. Potentially, this could open more avenues for new competitors in the inference market.

A new age of AI development may be on the horizon, driven by evolving hardware demands and more efficient training methods such as those deployed in the o1 model. The future of both AI models and the companies behind them could be reshaped, unlocking unprecedented possibilities and greater competition.

See also: Anthropic urges AI regulation to avoid catastrophes

