Weights and Biases has become one of the AI community’s favorite libraries. The team has done an excellent job building a platform where Machine Learning engineers can effortlessly track their experiments, visualize the training process, share results with their team, and improve the model’s performance. I started using it a couple of months ago, and it quickly became an integral part of all my projects. This article summarizes my experience with the library and aims to be a self-contained tutorial of its most useful features. To that end, we will examine how to integrate the wandb library into a new project.
Shall we begin?
Prerequisites
We will use a standard Deep Learning model that performs image recognition on the CIFAR10 dataset. The model doesn’t really affect our experiments, so I kept it as simple as possible. It will be trained on the dataset from scratch so we can explore how to utilize the wandb library.
Here is the PyTorch code for our model, along with the data processing:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# ... more code
# ... more code
# ... more code
print('Finished Training')
```
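For completeness, here is a minimal sketch of what the elided parts might look like, assuming the simple CNN from the official PyTorch CIFAR10 tutorial; your actual model, data loaders, and device setup may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A minimal CNN for 32x32 CIFAR10 images (10 classes).
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # 32x32 -> 14x14
        x = self.pool(F.relu(self.conv2(x)))   # 14x14 -> 5x5
        x = torch.flatten(x, 1)                # flatten all dims except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = Net().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```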
The first step is to install the library and create a new account.
Installation and initialization
If you haven’t already, you will need to create a new account in order to use Weights and Biases. The library is free for personal use but comes with a monthly price for teams. You can visit the website and sign up. Once you do, you should be able to install it using pip or conda. After installing, you will need to authenticate yourself with the wandb login command, which will prompt you to copy-paste an authorization key in order to continue.
```bash
$ conda install -c conda-forge wandb
$ wandb login
```
The library can be initialized in our code with the init method, which receives an optional project name and your username (the entity), among other things.
```python
import wandb
wandb.init(project='test', entity='serkar')
```
Now that we are all set up, let’s try and integrate the library to our training loop.
Experiment tracking
The main use of the wandb library is to track and visualize machine learning experiments: the training process, the hyperparameters, and the models themselves. Let’s see some examples.
Track metrics
The amazing thing about the Weights and Biases (W&B) library is how easy it is to use. In many cases, it is literally one line of code:
```python
wandb.log({'epoch': epoch, 'loss': running_loss})
```
The .log() command captures the passed arguments and sends them to the W&B instance, which lets us access and track them from the UI. You can find the dashboard on the W&B website under your project.
In our application, a sample training loop looks like this:
```python
for epoch in range(10):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data[0].to(device), data[1].to(device)
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if i % 2000 == 1999:
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            wandb.log({'epoch': epoch + 1, 'loss': running_loss / 2000})
            running_loss = 0.0

print('Finished Training')
```
Did you notice the wandb.log line? It lets us inspect the training process in real time.
The result is a live chart of the loss in the W&B dashboard, updating as the model trains.
Pretty awesome, right?
Another useful command is wandb.watch, which automatically collects the model’s gradients and the model’s topology.
```python
wandb.watch(net, criterion, log="all")
```
Conclusion
And that concludes our journey through the Weights and Biases library. W&B has become one of my personal favorites and has improved my workflow a lot. I highly recommend you try it out if you haven’t already. You can find more details in their documentation, which is very well written. Many examples are also provided in their GitHub repository.
Have fun playing around with it. Let us know if you have any questions or if you want us to cover W&B in more detail in the future. As always, please share this article if you find it useful. It really matters to us and helps us keep writing content.