The stages and workflows involved in Machine Learning projects are evolving as the field and its technology develop. The emergence of GPU-enabled mobile devices, for example, has introduced a new stage into the traditional ML project workflow, and these new stages have in turn created new roles and job titles.

1. Problem Definition

Problem definition is the initial stage of a Computer Vision/ML project, and it focuses on gaining an understanding of the problem to be solved by applying ML. It usually involves a problem descriptor who records, in an agreed form, a scenario-based description of a first-hand encounter with the problem. This stage also captures what an ideal solution would look like from the problem descriptor’s perspective.

2. Research

This stage sets the foundation for later stages and for planning the implementation and development work carried out within them. It explores the form a solution could take, together with the structures, formats, and sources of the available data.
Combining an understanding of the problem with the proposed solutions and the available data enables a suitable ML model selection process aimed at the ideal result. At this stage, it also helps to research the hardware and software requirements of the candidate algorithms and models; this saves a lot of time in later stages.


3. Data Aggregation / Mining / Scraping

Data is the fuel for an ML/CV application, and data aggregation is a crucial step that largely determines the effectiveness and performance of the trained model.
The expected output of the agreed-upon solution defines what data is aggregated. Data understanding is paramount: any sourced data should be examined and analyzed using visualization tools or statistical methods. Such examination promotes data integrity and credibility by confirming that the sourced data is what is expected. The data gathered needs to be diverse, unbiased, and abundant. A first pass over the sourced data might look like the sketch below.
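
As an illustration only, here is a minimal examination of tabular label data; the file name `labels.csv` and the `label` column are hypothetical placeholders, not names from any particular project.

```python
import pandas as pd

# Hypothetical CSV of image paths and labels produced by the aggregation step
df = pd.read_csv("labels.csv")

# Basic integrity checks: shape, missing values, and summary statistics
print(df.shape)
print(df.isna().sum())
print(df.describe(include="all"))

# Class balance is a quick statistical proxy for bias in the gathered data
print(df["label"].value_counts(normalize=True))
```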

4. Data Preparation / Preprocessing / Augmentation

Preprocessing steps are based mainly on the model’s input requirements. Refer back to the research stage and recall the input parameters that the selected model/neural network architecture requires. The preprocessing step transforms the raw sourced data into a format that enables successful model training.

Data preprocessing could include the following: data reformatting, data cleaning, and data normalization. Data augmentation is carried out to improve the diversity of the sourced data. Augmentation of image data could take the following forms: rotating an image by an arbitrary number of degrees, scaling an image to create zoom-in/out effects, cropping, flipping, and mean subtraction. A pipeline covering several of these steps is sketched below.
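
As a minimal sketch, a torchvision pipeline could look like the following; the 224-pixel input size and the ImageNet normalization statistics are common defaults, not requirements of any particular model.

```python
import torchvision.transforms as T

train_transform = T.Compose([
    T.Resize(256),                     # reformatting: bring images to a common scale
    T.RandomRotation(degrees=15),      # rotation by an arbitrary (small) angle
    T.RandomResizedCrop(224),          # scaling (zoom in/out) combined with cropping
    T.RandomHorizontalFlip(),          # flipping
    T.ToTensor(),                      # convert to a float tensor in [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406],  # normalization / mean subtraction
                std=[0.229, 0.224, 0.225]),
])
```

Applying random transforms at load time means the model sees a slightly different variant of each image every epoch, which is what produces the diversification described above.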

5. Model Implementation

Typically, model implementation is simplified by leveraging existing models available from a variety of online sources. Most ML/DL frameworks, such as PyTorch or TensorFlow, ship pre-trained models that can be leveraged to speed up the model implementation stage.
These pre-trained models have been trained on robust datasets and reproduce the performance and structure of state-of-the-art neural network architectures.
You rarely have to implement a model from scratch. Instead, the work conducted during this stage might include, first, removing the last layers of a neural network to repurpose it for a specific task; for example, removing the final layer of a ResNet architecture exposes a feature descriptor that can be used within an encoder-decoder architecture. Secondly, fine-tuning pre-trained models. Both are sketched below.
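
A sketch of both ideas using torchvision’s pre-trained ResNet-50 follows; `num_classes` is a placeholder for the project’s own label count, and the `weights` argument assumes a recent torchvision version.

```python
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 pre-trained on a robust dataset (ImageNet)
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# 1) Remove the final classification layer so the network outputs a feature
#    descriptor, e.g. for use inside an encoder-decoder architecture.
encoder = nn.Sequential(*list(backbone.children())[:-1])

# 2) Fine-tune: replace the last layer with a task-specific head
num_classes = 10  # hypothetical label count for the task at hand
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
```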

6. Training

The training data delivered by the previous data stages is utilized within the training stage. Model training involves passing the refined, aggregated training data through the implemented model so that it learns to perform its dedicated task well. Concretely, this means iteratively passing mini-batches of the training data through the model for a specified number of epochs.

During the early stages of training, model performance and accuracy can be very unimpressive. Still, as the model makes predictions, its predicted values are compared to the desired/target values, and backpropagation takes place within the network, the model begins to improve at the task it is designed and implemented to do.
Before training can commence, we have to set the hyperparameters and network parameters that will steer the effectiveness of the training stage; a minimal loop reflecting all of this is sketched below.
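
In the sketch, `model` and `train_loader` are assumed to come from the implementation and data stages above, and the loss function, optimizer, and learning rate shown are illustrative choices rather than prescriptions.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=10, lr=1e-3):
    criterion = nn.CrossEntropyLoss()                       # compares predictions to targets
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # lr is a hyperparameter set up front

    model.train()
    for epoch in range(epochs):                  # a specified number of epochs
        for inputs, targets in train_loader:     # mini-batches of the training data
            optimizer.zero_grad()
            outputs = model(inputs)              # forward pass through the implemented model
            loss = criterion(outputs, targets)   # prediction vs. desired/target value
            loss.backward()                      # backpropagation
            optimizer.step()                     # update the network parameters
```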

7. Evaluation

At this stage, you should have a trained model and be ready to evaluate its performance.
For evaluation, we utilize a partition of the refined data usually referred to as the ‘test data’. The test data has not been seen by the model during training, and it is representative of the data the model is expected to encounter in practical scenarios.
Some examples of evaluation strategies that can be leveraged are the confusion matrix (error matrix) and precision-recall, as sketched below.
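
Both strategies are available off the shelf in scikit-learn; in this sketch, `y_true` and `y_pred` are small placeholder arrays standing in for the test-set targets and the trained model’s predictions.

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = [0, 1, 1, 0, 1, 0]  # hypothetical test-set targets
y_pred = [0, 1, 0, 0, 1, 1]  # hypothetical model predictions

print(confusion_matrix(y_true, y_pred))  # error matrix
print(precision_score(y_true, y_pred))   # precision
print(recall_score(y_true, y_pred))      # recall
```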