Artificial Intelligence was the hot topic of 2024, so how is it evolving? What are we seeing in AI today, and what do we expect to see in the next 12-18 months? We asked Andrew Brust, Chester Conforte, Chris Ray, Dana Hernandez, Howard Holton, Ivan McPhee, Seth Byrnes, Whit Walters, and William McKnight to join the discussion.
First, what else is hot? Where are AI use cases succeeding?
Chester: I see people using AI beyond experimentation. People have had a chance to experiment, and now we’re getting to the point where real vertical-specific use cases are being developed. I watch healthcare closely and see more fine-tuned models built for specific use cases, such as AI-powered listening and note-taking tools that help doctors be more present when talking to patients.
I believe that “small is the new big” – specialized models per discipline, such as hematology versus pathology versus pulmonology, are a key trend. Artificial intelligence in imaging technology is not new, but it is now coming to the fore, with new models being used to speed up cancer detection. It must be supported by a healthcare professional: AI cannot be the sole source of diagnoses. The radiologist must review, verify, and confirm the findings.
Dana: In my reports, I see artificial intelligence being used effectively from a specific industry perspective. For example, finance and insurance vendors use AI for tasks such as financial crime prevention and process automation, often with specialized smaller language models. These AI industry models are a significant trend that I believe will continue into the next year.
William: We are seeing reduced cycle times in areas such as pipeline development, and master data management is becoming more autonomous. One area that is gaining popularity is data observability – 2025 could be its year.
Andrew: Generative AI works well for code generation – generating SQL queries and creating a natural language interface for querying data. This has been effective, although it is now somewhat commoditized.
More interesting are the advances in the data layer and architecture. For example, Postgres has a vector database add-on that is useful for Retrieval Augmented Generation (RAG) queries. I see a shift from the “wow” factor of demos to practical use, using the right models and data to reduce hallucinations and make data accessible. Over the next two to three years, vendors will move from basic querying to building more sophisticated tools.
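To make the data-layer point concrete, here is a minimal Python sketch of the kind of similarity-search statement a Postgres-with-pgvector setup would use for RAG retrieval. The table name (`docs`), its columns, and the top-k value are hypothetical; the query string is only assembled here, not run against a database.

```python
# Sketch: assemble a pgvector similarity-search query for RAG retrieval.
# Table/column names are hypothetical; pgvector's <=> operator computes
# cosine distance, and the query embedding is bound as a driver parameter.

def build_rag_query(table: str, top_k: int = 5) -> str:
    """Return SQL ranking rows by cosine distance to a query vector."""
    return (
        f"SELECT id, content, embedding <=> %s AS distance "
        f"FROM {table} "
        f"ORDER BY embedding <=> %s "
        f"LIMIT {top_k};"
    )

sql = build_rag_query("docs", top_k=3)
print(sql)
```

In a real deployment, a database driver such as psycopg would bind the question’s embedding vector to the `%s` placeholders, and the retrieved rows would be passed to the model as grounding context.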
How are we likely to see large language models evolve?
Whit: Overall, we will see AI models shaped by cultural and political values. It’s less about technical development and more about what we want our AIs to do. Consider Elon Musk’s xAI, based on Twitter/X. It’s uncensored – quite different from Google Gemini, which tends to lecture you if you ask the wrong question.
Different providers, geographies, and governments will tend to either move towards freer expression or try to control AI outputs. The difference is noticeable. Next year we’ll see an increase in models with fewer guardrails that will provide more direct answers.
Ivan: A lot also depends on how prompts are structured. A slight change in phrasing, such as using “detailed” versus “comprehensive,” can yield very different answers. Users must learn how to use these tools effectively.
Whit: Prompt engineering is indeed essential. Depending on how a prompt is worded, you can get drastically different answers. If you ask the AI to explain what it wrote and why, you force it to reason more deeply. We will soon see tools for domain-trained prompting – agent models that can help optimize prompts for better results.
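A toy Python sketch of the phrasing point: the prompts below differ only in one adjective and an optional "explain your reasoning" suffix, yet a model would treat them as quite different instructions. The template and wording are illustrative, not any vendor’s API.

```python
# Sketch: small wording changes produce materially different prompts.
# The template is illustrative; no model is actually called here.

def build_prompt(task: str, style: str, explain: bool = False) -> str:
    prompt = f"Write a {style} summary of: {task}"
    if explain:
        # Asking the model to justify its answer tends to elicit
        # deeper, more deliberate responses.
        prompt += "\nThen explain what you wrote and why."
    return prompt

p1 = build_prompt("Q3 pipeline review", style="detailed")
p2 = build_prompt("Q3 pipeline review", style="comprehensive", explain=True)
print(p1)
print(p2)
```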
How does artificial intelligence build on data and develop its use through analytics and business intelligence (BI)?
Andrew: Data is the foundation of AI. We’ve seen how generative AI over large amounts of unstructured data can lead to hallucinations, and projects are being scrapped. We’re seeing a lot of disillusionment in the enterprise space, but progress is coming: we’re starting to see a marriage between AI and BI, beyond natural language querying.
Semantic models already exist in BI to make data more understandable, and that approach can be extended across structured data. When combined, we can use these models to create useful chatbot-like experiences that draw responses from structured and unstructured data sources. This approach produces commercially useful outputs while reducing hallucinations through added context. This is where AI will become more robust and the democratization of data will become more effective.
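A minimal Python sketch of that marriage of AI and BI: a hypothetical semantic model maps business terms to governed tables and units, and matching text snippets are retrieved alongside, so a chatbot prompt can be grounded in both structured definitions and unstructured context. All names and documents here are illustrative stand-ins.

```python
# Sketch: ground a chatbot prompt in a BI semantic model plus retrieved text.
# The semantic model and documents are illustrative stand-ins.

SEMANTIC_MODEL = {
    "revenue": {"table": "sales", "column": "net_revenue", "unit": "USD"},
    "churn": {"table": "customers", "column": "churn_rate", "unit": "%"},
}

def ground_question(question: str, documents: list[str]) -> dict:
    """Collect term definitions and matching snippets to put in a prompt."""
    q = question.lower()
    terms = {t: d for t, d in SEMANTIC_MODEL.items() if t in q}
    snippets = [d for d in documents if any(t in d.lower() for t in terms)]
    return {"definitions": terms, "context": snippets}

ctx = ground_question(
    "How did revenue trend last quarter?",
    ["Revenue grew 8% quarter over quarter.", "Support tickets fell in Q2."],
)
print(ctx)
```

The point of the design is that the model answers from governed definitions and retrieved evidence rather than from its own parametric guesswork, which is what reduces hallucination.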
Howard: Agreed. BI hasn’t functioned perfectly over the last decade. Those who produce BI often do not understand the business, and the business does not fully understand the data, which leads to friction. This cannot be solved by GenAI alone; it needs mutual understanding between both groups. Without that reinforcement of data-driven approaches, organizations won’t get very far.
What other challenges do you see that could hinder the progress of AI?
Andrew: AI euphoria has diverted mindshare and budgets away from data projects, which is unfortunate. Businesses must see them as one and the same.
Whit: There’s also the AI startup bubble – too many startups, too much funding, burning money without generating revenue. It feels like an unsustainable situation, and we’ll see it crack a bit next year. There is so much going on that keeping up has become ridiculous.
Chris: On a related note, I see vendors building solutions to “secure” GenAI and LLMs. Penetration testing as a service (PTaaS) vendors offer LLM-focused testing, and cloud-native application protection platform (CNAPP) vendors offer controls for LLMs deployed in customer cloud accounts. I don’t think buyers have even begun to understand how to use LLMs effectively in the enterprise, yet vendors are pushing new products and services to “secure” them. This space is ripe for a shake-out, although some LLM security products and services will stick.
Seth: On the supply chain security side, vendors are beginning to offer AI model analysis to identify the models used in an environment. It seems a little ahead of the curve to me, but it’s starting to happen.
William: Another looming factor for 2025 is the EU’s AI regulation, which will require AI systems to be able to be switched off at the push of a button. This could have a big impact on the continued development of AI.
The million dollar question: how close are we to artificial general intelligence (AGI)?
Whit: AGI remains a dream. We don’t understand consciousness well enough to recreate it, and just applying computing power to a problem won’t make something conscious – it’ll just be a simulation.
Andrew: We can move towards AGI, but we need to stop thinking that predicting the next word is intelligence. It’s just a statistical prediction – an impressive application, but not really intelligent.
Whit: Exactly. Although AI models “reason,” it is not true reasoning or creativity. They are just recombining what they were trained on. It’s about how far you can push combinatorics on a given dataset.
Thanks everyone!
The post The Evolving Revolution: AI in 2025 appeared first on Gigaom.