11 Challenges in Machine Learning Projects: The Definition Stage

Every Machine Learning project is a long and involved process. It usually brings together people from different fields and has to account for many circumstances, so there are plenty of points where things can go wrong and progress can stall. Some of these points are unique to a given project, but most fall into categories of typical missteps that recur across projects.

This article reviews the 11 most significant problems that can occur at the initial stage of an ML project: the stage where the project is only proposed, outlined, and defined. Each of these problems can send a shockwave through the entire future project. Issues baked in at the very beginning are the toughest to correct, and some may even force you to discard earlier work and start all over again.

To help avoid these situations, we list the problems of the definition stage and the steps we can take to prevent them. According to an October 2019 survey by Algorithmia, 78% of all Machine Learning or AI projects with model training stall at some point during development, and 45% of those stall after the proof-of-concept stage.

Given how much ML projects depend on data and other external factors, these figures are not surprising. Still, it is more than reasonable to avoid as many potential stalling factors as we can.

Generally, we can divide the development of any ML project into six stages:

  • ML Problem Definition – This is the stage we review today. Here we set up the project with its resources and management and define all the needed parameters and aspects;
  • Dataset Selection – Since data is the key to a successful ML project, at this stage we focus on finding the best available dataset or gathering one. This stage can be one of the most resource-consuming parts of the project;
  • Data Preparation – Data does not normally come to us ready for use, so we should process and explore it to understand it in depth before proceeding;
  • ML Model Design – At this stage, we design the model that best suits the problem and the data, conduct feature selection, and choose the algorithm;
  • Model Training – This is the stage where we train, evaluate, and improve the model, trying to find the right parameters where it shows the best performance;
  • Operationalize in Production – Now we deploy the model into the working environment, then observe and tune it, measuring its success and estimating its influence on the situation.

Now, let’s turn to the 11 biggest problems at the ML project definition stage, their main causes, and how we can deal with them.

Unclear Metrics of the Project Success (Vague success metrics of the ML model)

Are you building a model to improve a workflow? Or to increase customer satisfaction? Such situations are relatively common, and in both of them, the measure of project success is defined vaguely. Often such goals get converted into concrete metrics or KPIs during development. Just as often, they do not, and development continues in the name of general improvement.

That is why it is beneficial to define up front how you will determine whether the project lives up to expectations. First, it sets a success baseline the team can orient to at every stage. Second, it lets you compare expectations with results without adjusting the goalposts, since your view of the project is bound to change along the way. And, most importantly, such metrics allow you to measure the project's impact on the business correctly.

Concepts such as success or satisfaction can be measured indirectly. For example, we can use parameters of customer behavior, or the resources the company spent on the specific problem before and after the project. These measurements also help developers understand which areas to focus on, which may reshape the project itself.

The Model is Developed. Now how do we make use of it? (Even if we had the perfect model — no clue of how it will be used within existing workflows)

The results of an ML project should always serve some purpose in the company's operations. When they are not integrated, it is usually because of inadequate planning before the start. With the notable exception of experimental research, we need to know where and how the project's expected results will be applied.

Let’s suppose we used historical purchase data to identify the customers who are more likely to buy sneakers than Oxford shoes. However, that information makes little sense until we know how to apply it in advertising or logistics to boost our sales.

This becomes even more common when promising ideas appear while exploring the data, but the possible findings are understood only vaguely. In such cases, a practical technique is to observe the existing workflows and identify where the results could help.

Maximizing the Accuracy Without Trade-off Context (Building a 100% accurate model — no clarity on the acceptable trade-offs such as precision vs. recall)

This issue, if overlooked, can make your model completely unusable and force you to start the implementation all over again. For many problems solved with ML, high accuracy alone does not mean the model performs well.

Let’s look at disease diagnosis. Suppose 1000 patients visit the doctor, and 30 of them are ill. We build an ML model to identify such patients, and it has an accuracy of 99%. But what does this mean? 10 patients were classified incorrectly, which could mean that we found every ill patient, or that a third of them (10 of 30) went home undiagnosed. Moreover, a model that simply predicts everyone to be healthy would be 97% accurate!

For cases like this, it is absolutely essential to decide what to measure: the indicator that most directly shows the model serves its purpose, and the trade-offs around it. For the case above, that indicator would likely be the percentage of ill patients correctly identified (a metric called recall). But to ensure the model works as it should, keep an eye on the balancing metrics as well, such as how many of the patients predicted to be ill were indeed ill (precision).
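The arithmetic above is easy to verify in a few lines of Python. This is a minimal sketch using the article's hypothetical 1000-patient scenario, scoring the degenerate "predict everyone healthy" model:

```python
def accuracy(tp, fp, fn, tn):
    """Share of all cases classified correctly."""
    return (tp + tn) / (tp + fp + fn + tn)

def precision(tp, fp):
    """Of the cases predicted ill, how many were actually ill."""
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    """Of the actually ill cases, how many were found."""
    return tp / (tp + fn) if tp + fn else 0.0

# "Predict everyone healthy" on 1000 patients, 30 of whom are ill:
# 0 true positives, 0 false positives, 30 false negatives, 970 true negatives.
tp, fp, fn, tn = 0, 0, 30, 970
print(accuracy(tp, fp, fn, tn))  # 0.97 — looks great on paper
print(recall(tp, fn))            # 0.0 — yet every ill patient is missed
```

The 97% accuracy figure survives even though the model is medically useless, which is exactly why recall has to be tracked alongside it here.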

By identifying your model’s trade-offs, you truly start to understand what the model is supposed to do and its limitations. Accuracy of almost 100% does not mean that the model is perfect or even good.

Not Looking for the Simpler, Effective Algorithms (Using a hammer to kill an ant — not checking the performance of simpler alternatives)

Machine Learning is a potent tool that can solve highly complex problems. On the other hand, not every problem is complex enough to warrant its use. If simple statistics or exploratory data analysis solves it perfectly, there is no need to turn to ML.

Machine Learning is the heavy artillery of problem-solving, and using it on a simple task will only cost you resources and effort. The fact that your solution works perfectly without Artificial Intelligence is a compliment to it, rather than a drawback.

How do you determine whether a problem is suited for ML? There are lengthier guides addressing this question, but in general, a good indicator is that the task requires predicting a result from relationships in the data, with defined success metrics. Also, the available data needs to be representative of the new data the model will evaluate.

The general rule for this issue is to choose the least costly solution that meets your required level of model success.
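One quick sanity check before reaching for ML is to score a trivial baseline against your success metric. Here is a minimal sketch in plain Python, using a majority-class baseline on made-up churn labels (the labels and numbers are illustrative, not from the article):

```python
from collections import Counter

def majority_baseline(labels):
    """Predict the single most common label for every case."""
    most_common, _ = Counter(labels).most_common(1)[0]
    return most_common

def baseline_accuracy(labels):
    """Accuracy of always predicting the majority class."""
    pred = majority_baseline(labels)
    return sum(1 for y in labels if y == pred) / len(labels)

# Hypothetical historical outcomes: 92% of customers did not churn.
history = ["no_churn"] * 92 + ["churn"] * 8
print(baseline_accuracy(history))  # 0.92
```

If a one-line rule like this already reaches the success level the project requires, an ML model may not be worth its cost; any model you do build should at least clearly beat this number.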

Not Checking if the ML Problem is Worth Solving at All (Not all ML problems are worth solving — the impact may not be worth the effort)

Building a production-ready and robust Machine Learning model requires a lot of resources and time. According to the Algorithmia survey from October 2019 mentioned earlier, only 42% of companies manage to fully deploy their model to market in an average time of less than 30 days. For 18% of them, the average time is over 90 days.

So the question we have to ask ourselves here is whether we need to concentrate our resources on this problem at all. The developers tasked with it could be working on other, more rewarding projects, and the problem may bring only tangential profit to the business. It may also no longer be relevant by the time the model deployment finishes.

To avoid this misstep, it is beneficial to evaluate how each new task contributes to the project's strategic goal and which pressing issues it resolves. Quite possibly, some problems that look the most promising to solve do not bring much value to the project or its customers.

Ignoring the Cost of Data, Among Other Costs (Underestimating project costs — ignoring data costs)

Data is the heart of any ML project; without it, the project simply would not exist. However, the cost of acquiring data is often underestimated and left out of the project's budget. This can lead to overspending or falling behind schedule, as the process can prove costly or demand a lot of time and resources.

Also, if you have acquired a great set of data, it is essential to treat this competitive advantage as a key asset. It is always safe to assume your competitors will do so. This also means that in some cases, buying the required data may be extremely expensive.

If you plan to gather the data yourself, keep in mind that collecting and labeling it will usually take time and may require dedicated people. Some data must be continuously updated and maintained, which also consumes resources. It is also important to plan whether your project will require new data as part of its operation, and how to integrate it.

Picture 1. Data is the most valuable resource for almost all ML projects // Image: Piqsels

Not Paying Enough Attention to the Ethics of AI (Treating AI Ethics as a nice-to-have)

Ethics in a Machine Learning project is often a subject that is not brought up until the advanced development stages. Sometimes it is even treated as a product-release concern. This approach is wrong from many points of view.

First, as AI and the data field develop, attention increasingly turns to the legal concerns surrounding them. We discussed data as an asset earlier, but data is also property, and often a deeply personal matter. Accordingly, many countries have introduced significant rules and regulations on data handling, many of them modeled after the EU's General Data Protection Regulation. Even if you believe your workflows will not violate any ethical norms, verifying that they comply with applicable regulations is an absolute must. Otherwise, massive legal and reputational trouble may only be a matter of time.

As a bonus benefit of going ethical, making the model as fair as possible can considerably improve project performance. Almost every ML model carries some bias, coming either from discrimination already present in the data or from the model's own properties. Such bias may produce results that fit your evaluation criteria yet fail to reflect the real world as it is, causing the project to handle some situations inadequately and to alienate some customers.

Additionally, the way you treat your project at every stage will likely reflect on how trustworthy it is perceived to be. Ethically evaluating each step will also help establish values within your team.

In any case, it is beneficial to understand the context behind your project. If you are working in the public sphere, being able to answer the "why" question may not only be a competitive advantage but sometimes a must.

Not Living Up to Stakeholder Expectations (Not managing stakeholder expectations)

As ordinary as it sounds, this issue can cause many misunderstandings and later disappointments in a project, and in the most severe cases, in the entire field of AI. With many mysteries and myths surrounding the area due to its rapid development, Machine Learning is sometimes treated as an almighty force able to solve most business problems at once.

People who do not work with ML directly, which is often the case at upper management levels, usually lack the expertise to know the field's boundaries. Sometimes this results in completely false expectations for the upcoming project, driven by business needs. In some cases, the project's goal is unreachable from the start.

Another category of projects likely to end up this way is ambitious innovation. Many such teams see their product as an undeniable breakthrough, a belief that has to be closely examined.

Before starting any project, it is crucial to make sure everyone is clear-headed about the possible outcomes and aware of what the task is. It is also beneficial to have at least one person on the team capable of bridging the gap between business expectations and technical possibilities, and of communicating it to everyone involved.

Not Breaking Down the Problem (Not decomposing the problem)

This issue can arise in entirely different situations and for teams with varying levels of experience. It feels natural to think about a given ML problem as a whole and try to solve everything at once. However, this approach overlooks the additional questions that may arise between the steps and that could improve the model as a whole.

By looking at the problem as a series of smaller issues, we often discover that answering each of them concisely reveals subtleties of the larger problem that we would otherwise miss. Additionally, these smaller questions may be best answered by different models, which is another reason not to force a single solution on all of them.

Even when splitting a bigger problem into smaller ones isn't an option, multiple points of view can still help. Techniques such as ensemble models can improve performance in crucial situations, and they illustrate why having several connected models helps a project: different models pick up different patterns in the data, making the combination more robust.
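As a small illustration of the ensemble idea, here is a minimal majority-vote combiner in plain Python. The three sub-model outputs are hypothetical, standing in for any classifiers that disagree on individual cases:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the per-model predictions for one case by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical binary predictions of three sub-models on four cases.
# Each model is wrong somewhere, but rarely all at once.
model_a = [1, 0, 1, 1]
model_b = [1, 1, 1, 0]
model_c = [0, 0, 1, 1]

ensemble = [majority_vote(votes) for votes in zip(model_a, model_b, model_c)]
print(ensemble)  # [1, 0, 1, 1]
```

No single model's mistakes survive the vote here; that only holds when the models' errors are not strongly correlated, which is the usual caveat with ensembles.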

Improving the Model Endlessly (Optimizing on the model perpetually)

Machine Learning is a field with a constant trade-off between what is essential and what would merely be beneficial. Here, the core of the problem is the urge to keep making the model better. In certain situations, such as the medical field, that can be one of the ultimate development goals. But in most cases, time spent further improving the model means resources spent. At some point, it becomes more profitable to release the present model to the market than to delay it for slight improvements.

At this point, it is best to ask the following questions: Is the current solution already good enough to put into practice? Can we use it as a first version while improving later versions further? And, most importantly, is further improvement even worth it? Evaluating progress at set points of the project can prove crucial to its success.

But are there defined rules for deciding whether your model is good enough to release? There are several widespread approaches. The first is to set a baseline beforehand: suppose you want your model to be at least 95% accurate, or to improve on the previous result by 5%. As soon as you reach that goal, you know the model has at least met your expectations.

You can also set exact requirements not on the model itself but on changes in the KPIs the company uses to measure success. Another way is to determine the performance level that customers regard as sufficient. You can also look at the standards of your competitors or of the field as a whole.
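Release criteria like these are easy to encode as an explicit check, so the "is it good enough?" question gets asked at every evaluation point instead of drifting. A sketch using the 95%-accuracy and 5%-relative-improvement thresholds from the example above; the function name and parameters are illustrative:

```python
def ready_to_release(score, absolute_target=None, previous=None, min_gain=None):
    """Return True if the model meets any agreed release criterion.

    score           -- the model's current value on the chosen metric
    absolute_target -- e.g. 0.95 for "at least 95% accurate"
    previous        -- the previous model's score, for relative criteria
    min_gain        -- e.g. 0.05 for "improve the previous result by 5%"
    """
    if absolute_target is not None and score >= absolute_target:
        return True
    if previous is not None and min_gain is not None:
        return score >= previous * (1 + min_gain)
    return False

print(ready_to_release(0.96, absolute_target=0.95))          # True
print(ready_to_release(0.93, previous=0.90, min_gain=0.05))  # False (needs >= 0.945)
```

The point is not the code itself but that the criterion is agreed on beforehand and checked mechanically, rather than renegotiated each time the model improves slightly.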

Picture 2. You may make your Machine Learning model perfect, but will it be worth the effort? // Image: PizzaBottle

A Lot of Data Means a Successful Project (Assuming Petabytes of data available == Successful ML project)

This one is the Machine Learning equivalent of "we have a lot of money, so our company will surely succeed." The outcome of any ML project does rely heavily on data. However, the required companion to data volume is quality, or at least the possibility of transforming the data into quality data.

There can be literally petabytes of data, but for Machine Learning to succeed, it needs to represent something meaningful. If the data paints an incomplete picture of the situation, whether through missing values or data-entry errors, building an excellent ML project will be challenging. Without clean data, even the most skillful ML developers will struggle to make use of it.

This issue is most likely to surface at the project's planning stage. It is therefore crucial not to overestimate the data and to review it carefully; only then is it time to draw firm conclusions about the project's potential outcome.
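A rough first pass at such a review can be as simple as profiling each field for missing or placeholder entries. A minimal sketch in plain Python; the field values and the set of "missing" markers are hypothetical and would differ per dataset:

```python
def profile_column(values):
    """Rough quality profile: share of missing entries in one field."""
    # Treat None and common placeholder strings as missing (an assumption;
    # real datasets need their own list of sentinel values).
    missing = sum(1 for v in values if v in (None, "", "N/A"))
    return {
        "rows": len(values),
        "missing": missing,
        "missing_share": missing / len(values),
    }

# Hypothetical raw "age" field with gaps and entry errors:
ages = [34, 41, None, "N/A", 29, "", 57, 44, None, 38]
print(profile_column(ages))
```

A high missing share at the planning stage is a warning sign that the dataset may paint an incomplete picture, however many petabytes of it there are.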


These are common types of problems that Machine Learning projects face in their initial phase. In most cases, they are preventable: sound decision-making and close observation of the project's progress will mitigate many of them.

Some of these issues also arise from misinterpreting the facts at hand, which further underscores how crucial it is to understand the project's goal and what achieving it requires. Good communication between the technical and business sides should allow the team to understand the situation completely.

It is essential to stay flexible and not follow general ML concepts blindly. The appropriate parameters and metrics depend greatly on the exact problem the team is trying to solve. Machine Learning is excellent in its variety: among thousands of approaches, we can find the best possible fit.