Are LLMs stealing the spotlight from classic Machine Learning?
Tomasz Jażdżewski
ML Engineer
Published: Apr 7, 2025 · 19 min read
Editor’s note: This article is based on Tomasz Jażdżewski’s talk from the 2025 edition of the AI Summit in Munich, Germany.
LLMs as we know them didn’t appear out of thin air. AI has roots reaching back many decades, from early concepts such as fuzzy logic to the mathematical models that contributed to the development of neural networks.
Today, we combine past learnings with modern advancements and reap the benefits of decades of research. Large Language Models (LLMs) are a prime example of such a benefit, transforming how we understand and interact with information. But will LLMs replace classic Machine Learning for good?
Classic ML models are sets of algorithms designed to solve specific problems. These include decision trees, basic neural networks, and other established techniques.
For most use cases, classic machine learning refers to algorithms designed to solve specific, well-defined problems, such as anomaly detection. Each model addresses exactly one type of issue: a model built to detect anomalies cannot perform image segmentation. Different problems require building different models.
This is one of the reasons why Large Language Models (LLMs) are so popular. By tokenizing inputs, they can handle vast amounts of unstructured data, such as images, text, music, and time series data. There is no need to build a new model for each problem. LLMs come pre-equipped with broad capabilities, which allow them to understand and interpret diverse data types without additional and extensive training.
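To make “one model, one problem” concrete, here is a minimal sketch of such a single-purpose model: an anomaly detector built with scikit-learn’s IsolationForest. The library choice and the data are illustrative, not taken from the talk.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative data: typical order values plus a few obvious outliers.
rng = np.random.default_rng(42)
normal_orders = rng.normal(loc=100, scale=10, size=(200, 1))
outliers = np.array([[400.0], [5.0], [350.0]])
X = np.vstack([normal_orders, outliers])

# A classic ML model trained for exactly one task: anomaly detection.
detector = IsolationForest(contamination=0.02, random_state=42)
detector.fit(X)

# predict() returns 1 for inliers and -1 for anomalies.
labels = detector.predict(X)
print("Flagged values:", X[labels == -1].ravel())
```

This detector does one thing well; asking it to handle images, text, or any other task would mean building a new model from scratch.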
Imagine your business workflow as a road. Depending on the type of road, you will have the best results with different types of cars. For example, a regular car would be enough to drive on a highway, but rough terrain would require something with off-road capabilities.
Similarly, different types of tasks will require different solutions. In this analogy, highways that you can drive on with a regular car represent common tasks that even a standard model can handle. In contrast, rough terrain that requires a vehicle with off-road capabilities represents complex or niche problems that call for custom-built machine learning models.
Is it easier to build your car from scratch or adapt the one you already have?
Deploying an AI model, whether it’s ML or LLM, involves a few steps: defining the problem, developing solutions, deploying those solutions, and ensuring they remain updated. The difference between these models lies in their use cases. Classic ML models often require specialized data and domain expertise, making them ideal for unique business problems. LLMs, however, are suitable for routine tasks involving unstructured data—for example, analyzing job resumes.
Choosing a model is similar to selecting a car customized to your specific needs. LLMs are like pre-built cars that require minimal adaptation, whereas classic ML models are like customized vehicles built from various individual parts.
In other words, classic machine learning models allow you to create specialized tools tailored to specific challenges, while LLMs offer adaptable, ready-made solutions.
Current LLMs have shown outstanding performance and have overtaken classic ML models, such as decision trees and basic neural networks, in terms of accessibility. However, each approach has its strong suits.
Classic machine learning consists of algorithms designed to address specific problems. For instance, anomaly detection requires a model trained explicitly for that purpose, meaning one model solves one unique problem. But that is also one of the reasons why it can solve that problem so well.
Conversely, large language models are attractive because they handle vast amounts of unstructured data, including text, images, music, and time series data. LLMs already understand various data types and require no additional training for specific tasks, functioning as ready-to-use tools. However, adapting them to one specific problem can be challenging.
How do you decide which approach is the right one for you?
When choosing the right solution for your organization, you need to look at specific differences between working with ML and LLM.
Defining a business problem in AI language: When choosing between models, consider the following: Do you have time to validate models? Do you possess the required expertise? Do you have sufficient data? Classic machine learning requires extensive validation and domain expertise. LLMs can offer quick results but often still require additional tuning and prompt engineering.
How to build the model: When building an AI solution, consider three main factors: validation time, available expertise, and data quality. Classic machine learning requires extensive validation, specialized expertise, and robust, clearly structured data. Large Language Models (LLMs) also have their own requirements that demand resources and expertise, such as data preparation and fine-tuning.
Deployment: Classic ML models allow better control over input data and edge cases, making them easier to manage. LLMs, however, depend on tokenization, which can lead to unpredictable results from slight input variations.
User interface complexity: Classic models may be challenging for non-experts, while LLMs are typically user-friendly.
Optimization and scalability: LLMs are pre-optimized and require advanced expertise for further tuning. Classic models, though potentially complex, can be more straightforward to optimize and scale, often deployed efficiently as microservices. However, managing numerous classic models (e.g., forecasting for many products) can be resource-intensive.
Retraining also varies: retraining classic models is easy to automate (see the sketch after this list), whereas improving LLMs involves complex adjustments that cost additional time and resources. If your environment changes frequently (e.g., sales forecasting), classic models might serve you better. Conversely, stable problems (e.g., word prediction or code generation) benefit more from LLMs.
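For instance, “easy to automate” retraining for a classic model can be as small as the sketch below. It assumes a hypothetical `load_latest_data()` helper and a scikit-learn regressor; the candidate model is only promoted if it beats the currently deployed one.

```python
import joblib
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def retrain_if_better(load_latest_data, model_path="forecast_model.joblib"):
    """Refit the forecasting model on fresh data and only promote it
    if it beats the currently deployed version on a held-out split."""
    X, y = load_latest_data()                       # hypothetical data loader
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

    candidate = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    candidate_mae = mean_absolute_error(y_val, candidate.predict(X_val))

    try:
        current = joblib.load(model_path)
        current_mae = mean_absolute_error(y_val, current.predict(X_val))
    except FileNotFoundError:
        current_mae = float("inf")                  # nothing deployed yet

    if candidate_mae < current_mae:
        joblib.dump(candidate, model_path)          # promote the new model
    return candidate_mae, current_mae
```

A job like this can be scheduled to run whenever new data arrives; achieving the same effect with an LLM typically means a fine-tuning or prompt-engineering cycle instead.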
You cannot always predict how your model will analyze a problem. For instance, in a well-known experiment, researchers tested whether a model could tell dogs from wolves.
The model scored well, but the wolf images in the dataset happened to have snow in the background. Further tests revealed that the algorithm was keying on the snow rather than on the animals themselves. That is why understanding how a model arrives at its predictions can matter more than a good score itself.
Image source: researchgate.net
To prevent such failures, it is important to understand how your model works. Other reasons include:
Trust and transparency
Accountability and compliance
Improved decision-making
Debugging and model improvement
These needs led to the creation of explainable AI (XAI): a set of methods and techniques that aim to make the decision-making process of models more transparent.
How to understand a model?
Analyzing classic ML models is much faster than analyzing LLMs because of the difference in the number of parameters:
Parameters:
Classic ML – 1 to 10⁷ parameters
LLMs – from 10⁹ parameters
To put those numbers into perspective, spending one second per parameter adds up to:
10⁷ seconds ➞ roughly 115 days
10⁹ seconds ➞ roughly 32 years
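The arithmetic behind those figures is a quick back-of-the-envelope check (spending one second per parameter is, of course, only a thought experiment):

```python
# One second of inspection per parameter, as a back-of-the-envelope comparison.
classic_ml_params = 10**7
llm_params = 10**9

seconds_per_day = 60 * 60 * 24          # 86,400 seconds
seconds_per_year = seconds_per_day * 365

print(f"10^7 seconds ≈ {classic_ml_params / seconds_per_day:.1f} days")    # ≈ 115 days
print(f"10^9 seconds ≈ {llm_params / seconds_per_year:.1f} years")         # ≈ 32 years
```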
As the number of parameters grows, we need more advanced methods, such as:
Generating a forecast from input data that represents the entire problem
Neuron activation explanation
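For a classic model, that kind of inspection can start with something as simple as feature importance. Below is a minimal sketch using scikit-learn’s permutation importance on a public dataset (the dataset and model are illustrative): shuffle one feature at a time and see how much the score suffers.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A small, fully inspectable classic ML model on an illustrative dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the score drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, importance in ranking[:5]:
    print(f"{name}: {importance:.3f}")
```

Doing the equivalent for a model with billions of parameters requires far heavier machinery, such as the neuron activation analysis mentioned above.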
How does understanding a model influence your choice between LLM and ML?
Building a classic machine learning model is closely connected with understanding the data itself. You need to involve data analysts, business analysts, and other specialists who understand the data collection and related business processes. With LLMs, businesses tend to focus too heavily on the solution and not enough on the source data.
Choosing between ML and LLM comes down to what you need to get out of it. Sometimes, LLMs provide quick wins, but in other cases, classic ML models are more suitable. You need to test and validate which works best for your specific use case. The main differences come down to:
Model Transparency: Classic ML models are easier to interpret. LLMs, with billions of parameters, pose significant challenges for detailed analysis.
Data Analysis Focus: Classic ML models often focus heavily on understanding and refining the dataset, ensuring better reliability in data-driven insights.
Summary:
If you have a large amount of unstructured data, you should use LLMs.
If you have structured data, consider using classic machine learning models, as they are typically easier to optimize and more scalable.
AI solutions should be chosen based on the specific problem, available data, and desired outcomes. Classic ML models remain highly relevant in complex and highly specific domains, while LLMs excel in broader, unstructured applications.
“Is classic ML dead? No, it is not, as there are plenty of use cases. In my experience at VirtusLab, sometimes the classic ML approach is more suitable and alive than we could imagine.”