Discover 5 real-world lessons in AI and machine learning projects

Updated: Apr 3, 2021

First published 23 Nov 2019 in LinkedIn



Whilst every organisation is trying to get ahead and create THE data-driven organisation, it is worthwhile to learn from the journeys of those who have succeeded.

Just for kicks, I’ll frame the lessons from a negative perspective.


1. Trying to boil the ocean / solve everything using machine learning (ML)


The uninitiated are usually so hyped up about ML that they think it solves everything, e.g. trying to use ML to take over a personal assistant’s job. A personal assistant’s job is highly complex – such an undertaking is too vast in scope and doomed to fail.

In a similar vein, a start-up tried to use ML to securitise real estate, letting lower-income people own chunks of real estate across different geographic areas (spreading their investment risk). Whilst this is a good idea, the team didn’t know how to execute on it and make it work.


What was more interesting was that whilst they were trying to solve a complex problem, they discovered a simpler problem, pivoted to that and ended up with a $60 million exit.


They discovered from their research that in the real estate rentals space, managers were struggling to handle the myriad of calls, texts and emails from prospective renters – a problem ML can solve. The manual method was just too time-consuming and troublesome for humans. The team then focused their energies in this direction and demonstrated immense value by taking over customer handling via a semi-automated, ML-assisted solution.


The ability to tune the scope of an ML project to deliver the biggest possible bang for the buck, solving the right type of problem, is, I would think, very highly sought after.



Lesson learnt here: the good old scoping for success, in action.


2. Underestimating the amount of work required outside the “sexy” areas of building & testing ML models


There are many aspects to a successful AI project apart from feature engineering and training the ML model. These include:

  • Focusing on quality of existing data

  • Generating more data (data pipe-lining)

  • Building sustainable & efficient business processes to provide good data

  • Building processes around other components of the ML product

  • Bringing the different level stakeholders (unions included) along for the journey
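To make the first two bullets concrete, here is a minimal sketch (the record layout and field names are hypothetical, not from the original case study) of the kind of data-quality check a pipeline might run before any model sees the data:

```python
def data_quality_report(rows, required_fields):
    """Count missing or empty values per field across a batch of records.

    rows: list of dicts, e.g. parsed from a CSV export or an API response.
    required_fields: fields every record must carry a non-empty value for.
    Returns a dict mapping field name -> number of records failing the check.
    """
    issues = {field: 0 for field in required_fields}
    for row in rows:
        for field in required_fields:
            value = row.get(field)
            if value is None or (isinstance(value, str) and not value.strip()):
                issues[field] += 1
    return issues

# Hypothetical rental-enquiry records, echoing the case study above
rows = [
    {"name": "A. Tenant", "email": "a@example.com", "phone": "555-0100"},
    {"name": "", "email": "b@example.com", "phone": None},
]
print(data_quality_report(rows, ["name", "email", "phone"]))
# → {'name': 1, 'email': 0, 'phone': 1}
```

The report would feed the "sustainable & efficient business processes" bullet: fields that keep failing point at a broken upstream process, not at a modelling problem.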


3. Hiring the wrong people, causing difficult-to-fix skill gaps


Hiring a machine learning rock-star can sometimes backfire, because a successful project uses ML to solve a business problem, not to feed itself. Oftentimes, this means that a good ML model needs to be “productised”. Good is sometimes better than great here. This is because a good hire who is competent in machine learning, is a great team player and has the capability to improve can then fortify themselves on the run with the software engineering skills to design and productise the bigger solution.


Personalities who also see themselves as entrepreneurs or intrapreneurs, with the ability to climb the necessary learning curves, are often a better fit than rock-stars.


4. Delivering perfection instead of value for business


Just like in start-ups, delivering value quickly, instead of gold-plating towards perfection, is crucial. The spirit of helping the business rather than perfecting “my own product” is paramount!


Green-field projects frequently underestimate the importance of the process design and re-engineering required to underpin ML projects. They make the mistake of trying to optimise a small component of the big picture, instead of building up the product from a software engineering perspective (creating a product that works for the business) and taking the humans along for the journey.


An important best practice is to use simple, proven technologies, not shiny models hailing from academic research (just to eke out an additional 2-percentage-point improvement). Frequently, those cutting-edge models take time to mature, and that immaturity manifests as unnecessary cost.


5. Set and forget: not monitoring ML performance and getting caught out when the operating environment changes




ML delivery is not a set-and-forget exercise. Monitoring to ensure models are still performing, e.g. checking false positive and false negative rates (amongst others), is imperative, because when the operating environment changes, algorithm performance may decay.
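As a sketch of what such monitoring might look like (the thresholds, function names and data here are illustrative assumptions, not from the article), a batch job could recompute error rates on each period's labelled outcomes and flag the model when they drift past agreed limits:

```python
def confusion_rates(y_true, y_pred):
    """Compute false-positive and false-negative rates for a binary classifier.

    y_true, y_pred: equal-length sequences of 0/1 labels from a scored batch.
    Returns (fpr, fnr); rates default to 0.0 when a class is absent.
    """
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

def check_drift(fpr, fnr, fpr_limit=0.10, fnr_limit=0.10):
    """Return alert messages when either error rate exceeds its limit.

    The limits are placeholders; in practice they come from the business
    cost of each error type, agreed with stakeholders up front.
    """
    alerts = []
    if fpr > fpr_limit:
        alerts.append(f"FPR {fpr:.2f} exceeds limit {fpr_limit:.2f}")
    if fnr > fnr_limit:
        alerts.append(f"FNR {fnr:.2f} exceeds limit {fnr_limit:.2f}")
    return alerts

# A toy weekly batch of actual outcomes vs. model predictions
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
fpr, fnr = confusion_rates(y_true, y_pred)
for alert in check_drift(fpr, fnr):
    print(alert)
```

Wiring output like this into an existing alerting channel is usually enough to catch decay early, long before customers notice it.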


What is your experience in these areas? Share with us in the comments below.

