
The six-step framework: Iterate

Jochen Derwae

This is the last chapter in our series on the six-step framework, our vision of how to build AI-based solutions. Before explaining the iterative process for continuing to improve on the proof of concept, let's take a quick look back at the previous chapters:

  1. Get ready: here we discussed getting training for yourself and your employees and making sure that you and your organization are ready to make AI projects a success.
  2. Ideate: in this chapter we went deeper into gathering ideas for processes to improve, using workshops or a bottom-up approach.
  3. Assess: we explored how to assess the business value and feasibility of each idea.
  4. Identify: after assessing the solution ideas, we discussed identifying those with the greatest potential for success and monitoring the maturity of those ideas.
  5. Build a pilot: here we went over how to prepare requirements and assemble a technical team, and looked at the importance of setting up governance processes.

Now we’re ready to look at what needs to be done after a pilot is built:

  • Reassess business value and feasibility
  • Adjust requirements
  • Set KPI goals
  • Set ROI goals
  • Set up monitoring and metrics (of KPI’s and ROI)
  • Set a project end point

Lessons learned

Assessing business value and feasibility was discussed in greater detail in previous chapters. Here it suffices to say that implementing a pilot will probably have taught you some new things. After incorporating these learnings into the business value and feasibility scores, you'll be able to tell whether you're on the right track. If either score has dropped drastically, investigate where your previous assessments went wrong and shelve the project.

The lessons learned from the pilot also allow you to look at the requirements (see the previous chapter) with fresh eyes. Are they still relevant? Did we make the right choices? Revisiting the requirements now might save you from painful losses later on.

The reality of AI

It might feel as if this new wave of AI tools (like OpenAI’s ChatGPT) has put AI systems on par with real humans. This is by no means true. AI is in some sense deeply flawed. Where traditional software responds deterministically (it does exactly the same thing under exactly the same conditions), AI systems do so far less, and this is even more pronounced in ChatGPT-like systems.

This lack of predictability is both a great strength and a great weakness of AI systems. It is a strength because the system can respond in a sensible way to many conditions that were never listed explicitly when it was built, in contrast to non-AI systems, which simply stop working under those conditions. The great weakness is that in a number of circumstances the AI will come up with the wrong answer, even when the right answer is obvious to us humans. None of the current AI systems reach 100% reliability (and depending on the use case, even 80% or 90% can be hard to attain).

Putting extra effort into fine-tuning an AI model to reach those 80% or 90% is possible, but it is costly, both in data science effort and in AI governance and compliance measures. In the rest of this chapter I’ll discuss setting KPIs and setting up monitoring and metrics, so that you can balance the AI system’s performance against the costs you’re willing to accept and make sure your return on investment stays positive.

Setting KPIs

Setting the right goals for an AI / software product team is hard. You don’t want to give them high-level business goals, since there’s no way for them to determine the impact of their work on those goals. You also don’t want to set the KPIs at too low a level, since this prevents your team from coming up with creative solutions for the functionality you actually need. Furthermore, a good KPI comes with a measuring method that is objective and that the development team can agree with.

Here are some examples of good KPIs:

  • KPI: 50% of viewers choose one of the three suggested videos to watch next.
    Measuring method: Take all viewers over a one-week period who finished viewing their first movie of the day as the total number of viewers. Record for each of them the suggested next movies to watch. From these viewers, count those who watched more than 15 minutes of one of the suggested movies in the week following the suggestion.
  • KPI: 80% of job vacancies are filled within 6 weeks.
    Measuring method: For last month, take all vacancies that were filled plus all vacancies still open for more than 6 weeks as the total number of vacancies. From those, count the number that were filled within 6 weeks of publishing.
  • KPI: 75% of product searches return the correct product within the top 5 results.
    Measuring method: When supplying 100 images and the corresponding vendor, the system needs to return the correct product id within the top 5 search results for at least 75 of those images.
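Measuring methods like these are concrete enough to automate. As a minimal sketch, here is how the product-search KPI could be scored; the `search` function and the test cases are hypothetical stand-ins for your real system and labelled data:

```python
def top5_accuracy(test_cases, search):
    """Fraction of test images whose correct product id appears
    in the top 5 results returned by the search system."""
    hits = sum(
        1 for image, correct_id in test_cases
        if correct_id in search(image)[:5]
    )
    return hits / len(test_cases)

# Toy stand-in for the real search system: always returns the same ranking.
def fake_search(image):
    return ["p1", "p2", "p3", "p4", "p5"]

# (image, correct product id) pairs, purely illustrative
cases = [("img_a", "p3"), ("img_b", "p9"), ("img_c", "p1"), ("img_d", "p7")]
print(top5_accuracy(cases, fake_search))  # 0.5 here; the KPI target is 0.75
```

Because the measuring method is a small script, it can be run on every new model version, which is exactly what the monitoring set-up later in this chapter relies on.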

There are different ways in which your development / data science team can improve the KPIs of the pilot they’ve built, and each will improve performance to a different degree. Ask your team to come up with a list of strategies for improving each of the KPIs you've defined, along with an estimate of the effort to implement each one. Use this list to set priorities on which KPIs to improve first and to what extent.

If the development team is uncomfortable giving exact estimates, allow broad categories such as easy, moderate, hard and very hard. Making accurate estimates is difficult, and you don’t need exact numbers anyway; you need to be able to choose where to direct development effort first.

Next, put a scale on each KPI in, say, 10-percentage-point increments (e.g. 10% of viewers, 20% of viewers, …). For each increment, determine what it will do in terms of additional revenue, costs saved, image gained, or any of the other business values you explored during the business value analysis. Also try to put a monetary value on each of these improvements.

These scales will guide you in setting priorities (which KPI gain to work on next) and in deciding when to stop investing in a KPI. For each KPI there is a point of diminishing returns: an increase of 10 percentage points might cost more to develop than the business value it delivers.
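That break-even reasoning can be sketched in a few lines. All figures below (costs, business values, increment sizes) are made-up numbers for illustration only:

```python
# Hypothetical scale for one KPI: each 10-percentage-point increment
# carries an estimated development cost and business value (in EUR).
increments = [
    {"step": "50% -> 60%", "cost": 20_000, "value": 60_000},
    {"step": "60% -> 70%", "cost": 35_000, "value": 40_000},
    {"step": "70% -> 80%", "cost": 80_000, "value": 30_000},
]

for inc in increments:
    net = inc["value"] - inc["cost"]  # business value minus development cost
    verdict = "invest" if net > 0 else "stop here"
    print(f'{inc["step"]}: net {net:+,} EUR -> {verdict}')
```

In this made-up scale the first two increments pay for themselves, while the third costs more than it returns: that third step is the point of diminishing returns where development on this KPI should stop.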

And this is how the iterations work after delivery of the pilot: set KPIs, calculate the potential gain for each increment on those KPIs, and let your team iteratively improve them up until the moment the investment starts to outweigh the business value, keeping the ROI positive.

Don’t hesitate to contact us if you need any guidance on any of the steps or techniques discussed within this series.
