Dashboards Overview

The Zetane API ships with premade dashboards. They are meant to give you quick access to important metrics while you interactively inspect model internals such as tensors or explainability algorithms. They are also designed to integrate easily into existing scripts, requiring as few as one Zetane API call to launch an entire dashboard.
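
For example, launching the premade Training Dashboard covered below comes down to a handful of calls. The following is only a preview of the full Keras MNIST example later on this page; it assumes a compiled Keras model and the classes, x_train and y_train objects from that example are already in scope:

from zetane import context as ztn
from zetane.dashboard import Dashboard

zcontext = ztn.Context()   # connect to the Zetane engine
zmodel = zcontext.model()  # visual representation of the model within this context
zdash = Dashboard(model=model, zcontext=zcontext, zmodel=zmodel)
zdash.keras_training(classes=classes, data=x_train, data_id=y_train, epochs=1, validation_split=0.1)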

Under the hood, these premade dashboards are composed of the same Zetane Python API calls available to you. With Zetane, you have a flexible environment to create custom ML dashboards that suit your specific needs and that don’t require refactoring of existing ML code.

On this page, we’ll explore the dashboards available today, along with examples of how to use them.

Training Dashboard

The training dashboard (for classification) displays the following metrics: training accuracy, validation accuracy, training loss, validation loss, a confusion matrix, and precision and sensitivity (computed from the TP, TN, FP and FN counts). The top left corner also shows the input and model output for the first element of the current batch.

Note that the dashboard presently only supports classification problems.

To the Hello Keras MNIST code we presented in the previous section, we added the following lines to create the dashboard and keep it updated.

# instantiate dashboard object
zdash = Dashboard(model=model, zcontext=zcontext, zmodel=zmodel)
# call training module on that object
zdash.keras_training(classes=classes, data=x_train, data_id=y_train, epochs=1, validation_split=0.1)
# call inference module on that object
zdash.keras_inference(test_data=x_test, test_data_id=y_test, image_table=image_table)

After running the following, you should see a training dashboard appear.

import os
import sys
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
# ZETANE API: import module
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')))
from zetane.dashboard import Dashboard
from zetane import context as ztn
zcontext = ztn.Context()
zcontext.clear_universe()
zmodel = zcontext.model()
# DATA PREPARATION
# Model / data parameters
num_classes = 10
input_shape = (28, 28, 1)
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")
# labels stay as integer class indices (sparse_categorical_crossentropy is used below)
# BUILD THE MODEL
model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
# TRAIN THE MODEL
model.summary()
batch_size = 128
epochs = 1
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

image_table = {'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4,
               'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9}
classes = image_table.keys()

# instantiate dashboard object
zdash = Dashboard(model=model, zcontext=zcontext, zmodel=zmodel)

# call training module on that object
zdash.keras_training(classes=classes, data=x_train, data_id=y_train, epochs=1, validation_split=0.1)
# call inference module on that object
zdash.keras_inference(test_data=x_test, test_data_id=y_test, image_table=image_table)

zcontext.disconnect()

Inference Dashboard

The Inference Dashboard shows the top five predictions for each sample of a given classification dataset (see the # DATA PREPARATION section in the example code below). In the top left corner, a Model IO panel shows the input data point and the top five predictions along with the ground truth (Target).

A sample script using the MNIST dataset running the inference dashboard is provided below.

In the final loop of this example (the # call inference module on that object section), we use the Zetane API’s debug() call to set a breakpoint, allowing us to step through 100 MNIST images interactively, one at a time.

When using this feature with your own models, you could extract outliers and iterate through them with a similar inference loop to gain deeper insight into your data and models. The debug() call is regular Python code with no special requirements on the IDE used, so it can be called conditionally or inside a callback, for example. This lets you automatically pause the dashboard when your specific criteria of interest are met, while letting it run freely otherwise, as in the sketch below.
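
For instance, a minimal sketch of a conditional breakpoint (reusing the zdash, zmodel, model, x_test, y_test and image_table objects from the full script below) could pause only on misclassified samples:

import numpy as np

for i in range(100):
    zdash.keras_inference(test_data=x_test, test_data_id=y_test, image_table=image_table, debug_data_index=i)
    pred = int(np.argmax(model.predict(x_test[i:i + 1])[0]))
    if pred != int(y_test[i]):
        zmodel.debug()  # pause the dashboard only when the model gets this sample wrong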

The debugging controls will appear at the bottom left of the dashboard. Click the “Continue Debugging” button to continue executing the Python code until the next ``.debug()`` call, effectively iterating through the input images one by one.

[Image: keras_mnist_inference_template.gif]
import os
import sys
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

# ZETANE API: import module
from zetane.inference_dashboard import Inference_Dashboard
from zetane import context as ztn
zcontext = ztn.Context()
zcontext.clear_universe()
zmodel = zcontext.model()

# DATA PREPARATION
# Model / data parameters
num_classes = 10
input_shape = (28, 28, 1)
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")
# labels stay as integer class indices (sparse_categorical_crossentropy is used below)
# BUILD THE MODEL
model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
# TRAIN THE MODEL
model.summary()
batch_size = 128
epochs = 1
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

image_table = {'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4,
               'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9}
classes = image_table.keys()

# instantiate dashboard object
zdash = Inference_Dashboard(model=model, zcontext=zcontext, zmodel=zmodel)

# call training module on that object
zdash.keras_training(classes=classes, data=x_train, data_id=y_train, epochs=1, validation_split=0.1)
# call inference module on that object
for i in range(100):
    zdash.keras_inference(test_data=x_test, test_data_id=y_test, image_table=image_table, debug_data_index=i)
    zmodel.debug()

zcontext.disconnect()

XAI Dashboard

The XAI Dashboard is the easiest way to access an array of machine learning explainability algorithms for your model and inputs. A panel on the left side of the screen shows the original input image, the label for the given input if available, the top 5 model predictions, the model, and the XAI algorithms. In addition, algorithms like Grad-CAM that produce visualizations for multiple layers (e.g. convolutional layers) have their outputs displayed under the respective layers of the model.

The XAI Dashboard object is initialized with a model object and a Zetane context, and is then called with one of the following methods depending on the framework:

# Instantiate the dashboard object. An XAIDashboard object can handle both frameworks,
# but it needs a compatible model; the model can be updated later with the `set_model()` method.
explain_template = XAIDashboard(model, zcontext)
# If PyTorch model
explain_template.explain_torch(img, out_class, labels.item(), class_dict, algorithms=None, mean=mean, std=std)
# If Keras model
explain_template.explain_keras(x, pred, label, class_dict, algorithms=None)

The XAI Dashboard currently supports the Keras and PyTorch frameworks. While the set of supported XAI algorithms varies somewhat between the two, both cover the most widely used explainability algorithms.
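
For example, a single dashboard object can be pointed at another compatible model via set_model(), and the algorithms argument can be narrowed to a subset instead of running every available algorithm. The sketch below reuses the model, zcontext and per-sample variables (x, pred, label, class_dict) from the Keras sample script that follows; other_model is a stand-in for any other compatible Keras model, and the algorithm identifiers are placeholders for illustration only (check the API reference for the exact names supported by your installation):

explain_template = XAIDashboard(model, zcontext)
# Run only a subset of algorithms ('saliency' and 'gradcam' are placeholder identifiers)
explain_template.explain_keras(x, pred, label, class_dict, algorithms=['saliency', 'gradcam'])

# Later, reuse the same dashboard with another compatible model
# (set_model() is mentioned above; its exact signature is assumed here)
explain_template.set_model(other_model)
explain_template.explain_keras(x, pred, label, class_dict, algorithms=None)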

Two sample scripts that use the CIFAR10 dataset in Keras and PyTorch are provided below, along with Zetane screen captures. Both scripts do the following:

  • Import the Zetane API

  • Build a neural network

  • Preprocess and train on data if needed

  • Run validation on the test set through the XAI Dashboard

PyTorch

[Image: torch_xai_cifar.png]
# IMPORT MODULES
import os
import sys
import time

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import models
import torchvision.transforms as transforms
from PIL import Image, ImageFilter
import numpy as np

# ZETANE API: import modules
import zetane.context as ztn
from zetane.explain.torch import preprocess_image, get_layers
# We now import the XAIDashboard, which is instantiated and called further below
from zetane.XAI_dashboard import XAIDashboard

# DATA PREPARATION
transform = transforms.Compose(
    [transforms.ToTensor()])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=1,
                                          shuffle=True)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=1,
                                         shuffle=False)

class_dict = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# BUILD MODEL
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
        self.relu1 = nn.ReLU()
        self.relu2 = nn.ReLU()
        self.relu3 = nn.ReLU()
        self.relu4 = nn.ReLU()

    def forward(self, x):
        x = self.pool1(self.relu1(self.conv1(x)))
        x = self.pool2(self.relu2(self.conv2(x)))
        x = x.contiguous().view(-1, 16 * 5 * 5)
        x = self.relu3(self.fc1(x))
        x = self.relu4(self.fc2(x))
        x = self.fc3(x)
        return x


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = Net().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

# TRAIN THE MODEL
train = True
if train:
    for epoch in range(1):  # loop over the dataset multiple times
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print('[%d, %5d] loss: %.3f' %
                      (epoch + 1, i + 1, running_loss / 2000))
                running_loss = 0.0

    print('Finished Training')

    PATH = './data/cifar_net.pth'
    torch.save(net.state_dict(), PATH)

net.load_state_dict(torch.load('./data/cifar_net.pth'))  # PRETRAINED

# ZETANE API: create context and dashboard
zcontext = ztn.Context()
zcontext.launch()
# Here we create the XAI Dashboard object, which needs a model object and a Zetane context
explain_template = XAIDashboard(net, zcontext)

# XAI PARAMETERS
cnn_layer = 'conv2'
filter_pos = 2
out_dir = None

# PREDICT USING THE TRAINED MODEL AND SEND TO THE ENGINE FOR XAI
for data in testloader:
    net = net.to(device)
    images, labels = data
    img = images[0].numpy() * 255

    img_org = img.transpose(1, 2, 0)
    img_org = img_org.astype(np.uint8)
    img_org = Image.fromarray(img_org)
    mean = [0.5, 0.5, 0.5]
    std = [0.5, 0.5, 0.5]
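    # NOTE: the same mean/std values are passed to explain_torch() further below so the
    # XAI visualizations can account for the normalization applied by preprocess_image().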
    prep_img = preprocess_image(img_org, mean, std, size=(32, 32), resize_im=False)
    prep_img = prep_img.to(device)

    outputs = net(prep_img)
    out_class = torch.argmax(outputs).item()

    # This is where we call explain_torch() with an input image to have it evaluated by our
    # model and produce the XAI visualizations. Note that the model was already added to the
    # dashboard during initialization.

    # To explain the parameters below:
    # img (str, ndarray or torch.Tensor): The input image in filepath or Numpy/torch array form
    # out_class (int): The output class for which the gradients will be calculated when generating the XAI images (default: None)
    # labels (int): If available, the ground truth class label (default: None)
    # class_dict (dict): The class dictionary for the class names
    # algorithms (list(str)): The list of XAI algorithms to be visualized. Defaults to all available algorithms if set to None.  (default: None)
    # mean (list(float)): The mean values for each channel if any in normalization is applied to the original image (default: None)
    # std (list(float)): The standard deviation values for each channel if any in normalization is applied to the original image (default: None)
    explain_template.explain_torch(img, out_class, labels.item(), class_dict, algorithms=None, mean=mean, std=std)
    time.sleep(5.0)

Keras

[Image: keras_xai_cifar.png]
import os
import sys
import time
os.environ['TF_KERAS'] = '1'

import numpy as np
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

import tensorflow.keras as keras
from tensorflow.keras.models import Sequential, model_from_json
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.datasets import cifar10
from tensorflow.keras import backend as K

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
import zetane.context as ztn
from zetane.XAI_dashboard import XAIDashboard

## BUILD MODEL: Here's our function to create a basic CNN: two convolutional layers, pooling,
# two fully connected layers and a softmax activation. Feel free to experiment with different
# architectures, but keep in mind that CIFAR-10 has only 10 output classes, so increasing
# model complexity will not necessarily yield better results.
def create_model():
  model = Sequential()
  model.add(Conv2D(128, kernel_size=(5, 5),
                  activation='relu',
                  input_shape=input_shape))
  model.add(Conv2D(64, (3, 3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2, 2)))
  model.add(Dropout(0.2))
  model.add(Flatten())
  model.add(Dense(32, activation='relu'))
  model.add(Dropout(0.2))
  model.add(Dense(num_classes, activation='softmax'))

  model.compile(loss=keras.losses.categorical_crossentropy,
                optimizer=keras.optimizers.Adam(),
                metrics=['accuracy'])

  # Serialize model to JSON
  model_json = model.to_json()
  with open(os.path.join(dir_path, "model.json"), "w") as json_file:
      json_file.write(model_json)
  return model

# TRAINING FUNCTION
def train(model, epochs=5, batch_size=64, filename="model_weights_trained.h5"):
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=epochs,
              validation_data=(x_test, y_test))
    model.save_weights(os.path.join(dir_path, filename))

# DEFINE GLOBAL VARIABLES: The `train_model` flag further below specifies whether to train a
# model from scratch or to load a previously saved model from disk. If you don't have a saved
# model ready, leave it set to True and the script will train one from scratch.
dir_path = os.path.dirname(os.path.realpath(__file__))
batch_size = 128
num_classes = 10
epochs = 25
class_dict = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# input image dimensions
img_rows, img_cols = 32, 32
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Note the if/else on ``K.image_data_format()``: this determines where the channels dimension
# goes after preprocessing so the data is compatible with the Keras backend. Afterwards, the
# data is converted to floats, normalized to lie between 0 and 1, and the labels are one-hot encoded.
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 3, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 3, img_rows, img_cols)
    input_shape = (3, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 3)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 3)
    input_shape = (img_rows, img_cols, 3)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# TRAIN THE MODEL
train_model = True
if train_model:
    model = create_model()
    train(model, epochs, filename="model_weights_trained.h5")
else:
    json_file = open(os.path.join(dir_path, 'model.json'), 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    model = model_from_json(loaded_model_json)
    # load weights into new model
    model.load_weights(os.path.join(dir_path, "model_weights_trained.h5"))
    print("Loaded model from disk")
    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adam(), metrics=['accuracy'])


# ZETANE API: We start by creating a ``ztn.Context()`` object. Any data sent to Zetane via Python
# goes through this ``zcontext``. Strings are sent via ``zcontext.text()``, for example; we can then
# modify the look of that text via calls like ``.position()``, ``.font()`` and ``.gradient()``.
zcontext = ztn.Context()
zcontext.clear_universe()

# Here we create the XAI Dashboard object, which needs a model object and a Zetane context
explain_template = XAIDashboard(model, zcontext)

for i in range(y_test.shape[0]):
    x, y = x_test[i], y_test[i]
    x_ext = np.expand_dims(x, 0)
    y_out = model.predict(x_ext, steps=1)

    label = np.argmax(y)
    pred = np.argmax(y_out[0])

    explain_template.explain_keras(x, pred, label, class_dict, algorithms=None)
    # sleep to see more
    time.sleep(5.0)

zcontext.disconnect()

# To explain the explain_keras() parameters used above:
# x (ndarray): The input image in filepath or Numpy/torch array form
# pred (int): The output class for which the gradients will be calculated when generating the XAI images (default: None)
# label (int): If available, the ground truth class label (default: None)
# class_dict (dict): The class dictionary for the class names
# algorithms (list(str)): The list of XAI algorithms to be visualized. Defaults to all available algorithms if set to None.  (default: None)

Note

Check out our sample gallery for more advanced scripts.