The Facebook Field Guide to Machine Learning is a six-part video series developed by the Facebook ads machine learning team. The series shares best practices and practical tips for applying machine learning to real-world problems.
If you’re interested in using machine learning to enhance your product in the real world, it’s important to understand how the entire development process works. It’s not only what happens during the training of your models, but everything that comes before and after, and how each step can either set you up for success or doom you to failure.
The Facebook ads machine learning team has developed a series of videos to help engineers and new researchers learn to apply their machine learning skills to real-world problems. The Facebook Field Guide to Machine Learning series breaks down the machine learning process into six steps:
1. Problem definition
2. Data
3. Evaluation
4. Features
5. Model
6. Experimentation
This video series covers each of these steps, explaining how the decisions you make along the way can help you successfully apply machine learning to your product or use case. Each lesson highlights examples and stories of non-obvious things that can be important in an applied setting.
If you’ve followed the four previous Facebook Field Guide to Machine Learning lessons carefully, this lesson on models should come fairly naturally. Your next job is to choose the right model for your data and find the algorithm to implement and train that model.
Lesson 5 offers tips on picking, tuning and comparing models, including:
How to pick a model
Ideally, your features and data should govern model choice. A good understanding of statistical learning and the fundamentals of the different algorithms can help make the choice follow naturally, but we also share a few ways to think about tradeoffs, including interpretability and ease of debugging, data volume, and training and prediction time.
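One of the tradeoffs above, interpretability, can be made concrete with a small sketch (a hypothetical illustration, not from the lesson): a linear model's learned weights can be read off and sanity-checked directly, a property that more complex models such as deep networks or large ensembles lack.

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data: y roughly doubles with x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
slope, intercept = fit_line(xs, ys)

# The fitted coefficients are directly inspectable: a slope near 2 confirms
# the model learned the expected relationship, something much harder to
# verify for a black-box model.
print(slope, intercept)
```

If debuggability matters more than the last bit of accuracy, that inspectability can tip the choice toward the simpler model.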
How to tune a model
Once the data and features are fixed and you’ve chosen a model, the task is to choose settings for the model. Machine learning models can have anywhere from a few settings to an incredible number of options. Key principles to keep in mind when tuning models are efficiency, transparency, reproducibility, and automation.
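The four tuning principles above can be sketched in a few lines of Python. This is a minimal illustration, assuming a hypothetical `train_and_eval` stand-in for a real training run: a fixed seed gives reproducibility, the log of every setting tried gives transparency, and the loop over the grid gives automation.

```python
import itertools
import random

def train_and_eval(learning_rate, depth, seed=0):
    """Hypothetical stand-in for training a model and returning a
    validation score; a fixed seed makes each run reproducible."""
    rng = random.Random(seed)
    noise = rng.uniform(-0.01, 0.01)
    # Toy score surface that peaks near learning_rate=0.1, depth=4.
    return 1.0 - abs(learning_rate - 0.1) - 0.05 * abs(depth - 4) + noise

# The settings to sweep, declared in one place.
grid = {"learning_rate": [0.01, 0.1, 1.0], "depth": [2, 4, 8]}

log = []  # transparent record of every setting tried and its score
for lr, d in itertools.product(grid["learning_rate"], grid["depth"]):
    score = train_and_eval(lr, d)
    log.append({"learning_rate": lr, "depth": d, "score": score})

best = max(log, key=lambda row: row["score"])
print(best)
```

Because the log records every configuration, a colleague can rerun the exact sweep and land on the same best setting, rather than trusting an undocumented manual search.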
How to compare models
When comparing models, it’s critical to compare apples with apples. Two models can be trained on different datasets, but comparing their evaluation metrics only makes sense on the same test dataset. And since model choice is often an iterative process – where you try a model, discover some of its shortcomings and then extend it – it is often useful to compare many different models. Keep track of the models you train and the data they were trained on.
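A minimal sketch of this apples-to-apples discipline, using two toy models and a hypothetical held-out split: both models see the same training data, both are scored on the same test set, and a small registry records each model alongside the data it was trained on.

```python
import random

# Hypothetical dataset: y is roughly 2x with uniform noise.
random.seed(7)
data = [(x, 2.0 * x + random.uniform(-0.5, 0.5)) for x in range(100)]
random.shuffle(data)
train, test = data[:80], data[80:]  # one fixed held-out test set

def mse(model, rows):
    """Mean squared error of a model on a set of (x, y) rows."""
    return sum((model(x) - y) ** 2 for x, y in rows) / len(rows)

# Model A: always predicts the training mean (a weak baseline).
mean_y = sum(y for _, y in train) / len(train)
def model_a(x):
    return mean_y

# Model B: least-squares line fit on the SAME training data.
mean_x = sum(x for x, _ in train) / len(train)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in train)
         / sum((x - mean_x) ** 2 for x, _ in train))
def model_b(x):
    return mean_y + slope * (x - mean_x)

# Registry: every model is logged with the data it saw, and every
# metric below comes from the same test split, so the comparison is fair.
registry = []
for name, model in [("mean_baseline", model_a), ("linear_fit", model_b)]:
    registry.append({"model": name,
                     "train_size": len(train),
                     "test_mse": mse(model, test)})

for row in registry:
    print(row["model"], row["test_mse"])
```

Extending the registry with each iteration makes it easy to look back over many candidate models and know exactly which data and test split produced each number.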
In summary, don’t worry too much about your choice of model at first – a little non-linearity often goes a long way – and be careful and systematic in how you iterate on models, so that you can compare different approaches fairly.