5 Things to Consider Before Building an AI Product

Sameera Weerakoon
6 min read · Mar 10, 2020

With the justifiable interest and the unjustifiable hype around artificial intelligence, businesses are trying hard to join the game and make early headway in the market with this new technology. With early adopters using in-house and outsourced teams to tailor artificial intelligence solutions, vendors have been looking for ways to generalise those solutions for a wider market.

Having delivered a number of intelligent products and features in this field, these are the five things I would want you, as a decision maker, to consider before building AI products.

Not Everyone Will Understand AI

Not every decision maker in the market will understand AI, and that is okay. One of the key mistakes vendors make is worrying too much that the decision makers who will procure their services do not understand AI.

AI is a tool, just like APIs, micro-services, and algorithms, and what matters is that you have good communicators on your team to clarify any complexities to prospective customers during onboarding. One important thing to communicate is that nobody can buy "an AI", and that there are meaningful differences between the models used to build AI-powered services.


Nobody can take an AI model built for shop-floor planning and use it to improve a chatbot experience. If your customer does not understand this, you have difficult work ahead of you.

Best Path to AI Productisation is Use Cases

When communicating with potential customers, the best approach is to explain how a potential use case would work. With businesses moving to models where delivering value throughout the consumer journey is vital, you can show how your product fits into a use case and how that use case enriches the consumer journey.

Traditionally, most sales teams try to sell features to customers. "We have these new features X, Y, Z coming in our product next quarter that you can upgrade to for an additional $" was how sales teams approached up-selling a product with a new feature set on the roadmap. However, as sales evolves toward more two-way communication with the customer, a more detailed discussion of the value the customer expects from a feature will improve buy-in.

This is especially useful for AI use cases, where it is quite difficult to sell a "feature" because the value gains are contextual to the use case. Let's take an example.

Traditional: We have a sentiment analysis feature in our bot building platform.

With a Use Case: With the new release of our bot building platform, you will be able to analyse customer queries and assign a priority when creating a support ticket if the customer seems impatient and irritated.
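To make the use-case framing concrete, here is a minimal sketch of how such a priority-assignment step could look. The keyword-based scorer is a stand-in for a real sentiment model, and all names and thresholds here are hypothetical, not taken from any actual bot platform.

```python
# Toy sketch: assign a support-ticket priority from customer sentiment.
# The keyword scorer below is a placeholder for a trained sentiment model.

NEGATIVE_WORDS = {"angry", "frustrated", "unacceptable", "terrible", "waiting"}

def sentiment_score(text: str) -> float:
    """Crude negativity score: fraction of words that signal irritation."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS)
    return hits / len(words)

def ticket_priority(query: str) -> str:
    """Map the sentiment score onto a support-ticket priority."""
    score = sentiment_score(query)
    if score >= 0.2:
        return "high"
    if score > 0.0:
        return "medium"
    return "normal"
```

The point of the sketch is the framing, not the model: the customer is sold "irritated customers get prioritised", not "we ship sentiment analysis".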

This is vital because ultimately, with any technology, AI or not, you are trying to solve a customer problem. If the person you are communicating with is not interested in a use case, that should be a red flag: they might not actually care about the business value your feature or product will deliver to their business unit.

Markets Do Not Like Black-Box Models

Have you assigned a task to an ML engineer, only to find them building a neural network within the hour? Not so fast.

One of the key areas of concern raised about AI is ethics, which has been entering public discourse more frequently with the adventures of Twitter, Facebook, Tesla and other tech leaders. With the collection of personal information, the lack of clarity about decision-making processes, and leaks of stored information, there have been controversies around companies that heavily utilise AI technologies. The particular concern you as a Product Manager have to address is the lack of clarity about how different models arrive at the results they predict.

There are also areas of concern regarding data collection, but I will not cover them here. In terms of AI models, the key things you have to keep in mind are:

  • The model has to be explainable
  • The results from the model should be interpretable

If you have built a multi-layered neural network that analyses a massive set of data points, you may have a hard time explaining the model. However, if you built a decision tree that looks at a few data points with identified weights, even though your model will not be as accurate as a neural network, you will find it easier to communicate the model to the customer.
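The decision-tree side of that contrast can be sketched in a few lines: with a small, transparent model, every branch doubles as the explanation. The domain, feature names, and thresholds below are illustrative assumptions, not a real product's model.

```python
# A two-level decision tree for a hypothetical loan-approval feature.
# Each branch returns its own reasoning, so the model explains itself.

def approve_loan(income, debt_ratio):
    """Return (decision, explanation) for a loan application."""
    if income < 30_000:
        return False, "income below 30k threshold"
    if debt_ratio > 0.4:
        return False, "debt ratio above 0.4 threshold"
    return True, "income and debt ratio within limits"
```

A deep neural network might beat this on accuracy, but no equivalent one-line explanation of its decision exists, which is exactly the trade-off described above.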

Further, imagine a situation where your model has predicted a particular result which is then used to make a decision. You should be able to interpret that result for the customer's decision maker so they understand how the model came up with that particular prediction. In a situation such as fraud detection, this kind of interpretability is necessary: an AI model identifying a genuine transaction as fraud and recommending a transaction block will be problematic. If the product has mistakenly recommended blocking a key client and the human in the loop has followed through on the recommendation, that will be disastrous.
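One practical way to support the human in the loop is to attach the triggering signals to every prediction, so a recommended block can be sanity-checked before anyone acts on it. The rules and thresholds here are hypothetical, standing in for whatever signals a real fraud model would use.

```python
# Sketch: interpretable fraud flagging. The prediction carries the reasons
# that produced it, so a reviewer can see *why* a block was recommended.

RULES = [
    ("amount exceeds 10x customer average",
     lambda t: t["amount"] > 10 * t["avg_amount"]),
    ("transaction from outside home country",
     lambda t: t["country"] != t["home_country"]),
]

def flag_transaction(txn):
    """Return a block recommendation plus the signals behind it."""
    reasons = [name for name, rule in RULES if rule(txn)]
    return {
        # Require two independent signals before recommending a block,
        # to reduce the false positives the article warns about.
        "block_recommended": len(reasons) >= 2,
        "reasons": reasons,
    }
```

With the reasons exposed, the reviewer can spot that, say, a large purchase while travelling is legitimate, instead of rubber-stamping an opaque score.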

Generalised AI Will Have a Reduced Accuracy

It is well understood that the more customer data is fed to an AI model during the training phase, the more accurate the model becomes. This is why massive data sets are necessary when training a more complex AI model.


However, as a Product Manager, you cannot train an AI model on a single customer's data if you want it to be productised. If you train an AI model, say an image recognition engine, with millions of images related to a particular customer, that engine would be quite useful to that customer, but not to anyone else. New areas in AI such as transfer learning will hopefully solve this problem, but we are not fully there yet, and we are stuck with this limitation for now.

It is therefore important to understand this fact and communicate it to your customers. Make it very clear that a customer gets the product at its price precisely because it is generalised. If they expect a model tailored to them, trained on more specific data sets, they should expect to pay at the level of a managed-service AI project.

There is an Accuracy-Performance Trade-Off

Resulting from theoretical and practical limits, this will be the hardest part to sell, as customers will demand a high-performance, fully accurate AI model that solves their business problems.

Making AI models more accurate involves more complexity and more features, which in turn demand more training data. Further, running these models requires more computational power and results in higher latency for predictions.

Some commercially available NLP models provided by Google are limited to 10 transactions per second, even though certain use cases in the industry require more than 100 times that performance. This is compounded by the technology stack used in the industry so far, with Python and R dominating the data science disciplines. Hopefully, with more people using high-performance computing languages such as Golang and Julia, more community-contributed AI libraries will solve this problem over time.
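Until vendor limits improve, products calling a rate-capped API typically need a client-side throttle so batch workloads queue politely instead of failing on rejected calls. A minimal sketch, assuming a 10-requests-per-second cap (the figure above; any actual quota depends on the vendor and plan):

```python
# Minimal client-side throttle for a rate-capped vendor API.
# Spaces out calls so they never exceed max_per_second.

import time

class Throttle:
    def __init__(self, max_per_second):
        self.min_interval = 1.0 / max_per_second
        self.last_call = 0.0

    def wait(self):
        """Sleep just long enough to respect the rate cap, then record the call."""
        now = time.monotonic()
        elapsed = now - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

# Usage: call throttle.wait() immediately before each API request.
# throttle = Throttle(10)
# for doc in documents:
#     throttle.wait()
#     analyse_sentiment(doc)   # hypothetical vendor call
```

This does not raise the ceiling, of course; it only makes the trade-off explicit in the product instead of surfacing it as intermittent errors.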

Until then, we are stuck with this trade-off.


Sameera Weerakoon

I’m interested in technology adoption in the modern world and its effects. I’m also fascinated by how consumers make often irrational decisions about products.