Written by Vectice

How to Use Status Reports and Project Reviews in ML

Management in AI · 3 min read

Status reports and project reviews are two management tools that drive progress during a project and accelerate team learning once it is completed. In this article, we'll share some best practices and apply these tools to machine learning projects. Let's dive right in!

First, let's define the purpose of each. Status reports are intermediate reviews that provide the current status of a project. They can be used to share breakthroughs, discuss bottlenecks, or track general progress. Project reviews happen when a project is completed, or when a significant milestone has been reached and a new phase starts. They help uncover what worked, what failed, and what can be done better next time.

Best Practices for Status Reports & Project Reviews

Status reports should happen frequently, ideally once per week. They can take place ad hoc or at specified times. Some process frameworks, such as Agile Scrum or CRISP-DM, define dedicated points for status updates, but your team can determine what cadence works best.

Project reviews should aim to create a body of knowledge that can be transferred across teams, projects, and time, and this knowledge should be widely available to all employees. This is especially valuable in data science and machine learning: they are young fields driven by exploration, and companies are still working out what works and what doesn't. Since ML projects are prone to failure, project reviews can uncover the root cause of a failure so it can be prevented next time.

Status reports should include all key contributors: data analysts, data scientists, business developers, and project managers. Project reviews should also include one or more levels up, usually stakeholders and even executives. They have an overview of the larger business context and should apply the learnings from reviews to subsequent projects.

One of the biggest mistakes with status reports and project reviews is presenting results that are hard to interpret. Despite the popularity of machine learning, many domain experts are not yet practiced at evaluating and presenting ML analyses. Some present overly granular results with too much technical detail, leaving stakeholders confused and frustrated. Engineers should learn to present a data analysis in a clear, concise manner with suggestions for next steps.

How to Plan Around Uncertainty in ML Projects

In machine learning, it can be difficult to predict when you will obtain results that can be applied in the real world. That's why it's critical to spend time upfront thinking about meaningful outcomes. Do this before writing a single line of code, because these outcomes define what success and failure mean! If you don't know what to benchmark, return to the business understanding phase. (We highly recommend using OKRs to set meaningful milestones.)

Checkpointing and Metric Alerts

Two useful techniques that help manage this uncertainty are checkpointing and metric alerts. Checkpointing lets you save the current state of a model during training, while metric alerts notify the team when a desirable outcome is reached. Both are available in most machine learning libraries and are particularly helpful for long training runs and larger models.

One common checkpointing strategy is to track the validation loss (alongside the training loss) during training and save the model's weights whenever the validation loss reaches a new minimum. Training can be interrupted by external factors like power outages or server failures; when that happens, training can resume from the last saved checkpoint instead of starting over. Proper documentation lets you communicate results across teams, and checkpoints provide concrete model milestones for status reports.
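For illustration, here is a minimal sketch of this strategy using Keras; the toy model, synthetic data, and file name are placeholders for the example, not anything prescribed here.

```python
# Minimal checkpointing sketch with Keras: save the weights only when the
# validation loss reaches a new minimum. Model, data, and file name are
# illustrative placeholders.
import numpy as np
import tensorflow as tf

# Toy data standing in for a real training/validation split.
x_train, y_train = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
x_val, y_val = np.random.rand(200, 20), np.random.randint(0, 2, 200)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# save_best_only=True keeps the best model so far on disk, so training can
# resume from it (or results can be reported) even if a run is interrupted.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="best_model.weights.h5",
    monitor="val_loss",
    save_best_only=True,
    save_weights_only=True,
)

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=20, callbacks=[checkpoint_cb])
```

Because the file on disk always holds the best weights observed so far, it doubles as a concrete artifact you can point to in a status report.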

Metric alerts monitor a model in the background and send a notification when a condition is met (e.g. an accuracy, performance, or stopping metric). Alerts can be used to prompt status reports, for example when a prediction model reaches 90% accuracy on a validation set. This reduces the need for meetings on a fixed schedule and instead prioritizes meetings for when a breakthrough has been achieved.
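As a sketch of what such an alert might look like in code, here is a custom Keras callback that fires when validation accuracy crosses 90%; the notify() helper is a hypothetical stand-in for whatever channel your team actually uses (Slack, email, a chat webhook).

```python
import tensorflow as tf

def notify(message):
    # Hypothetical notification channel; replace with Slack, email, etc.
    print(f"[ALERT] {message}")

class MetricAlert(tf.keras.callbacks.Callback):
    """Send a one-time notification when a monitored metric crosses a threshold."""

    def __init__(self, metric="val_accuracy", threshold=0.90):
        super().__init__()
        self.metric = metric
        self.threshold = threshold
        self.triggered = False

    def on_epoch_end(self, epoch, logs=None):
        value = (logs or {}).get(self.metric)
        # Fire the alert once, the first time the threshold is crossed.
        if value is not None and value >= self.threshold and not self.triggered:
            self.triggered = True
            notify(f"{self.metric} reached {value:.2%} at epoch {epoch + 1} "
                   "-- time for a status report.")

# Usage: model.fit(..., validation_data=..., callbacks=[MetricAlert()])
```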

How can Vectice help?

While analytics tools do a great job of automatically reporting on pre-calculated metrics, Vectice goes one step further and contextualizes status reports and milestone reviews. Metrics need to be interpreted, which requires both business and technical understanding. Our platform provides automated lineage tracking, which allows experts to move between metrics and milestones.