Requirements

Zetane is a 64-bit desktop application with the following requirements:

Component        | Minimum Requirements                                       | Recommended
-----------------|------------------------------------------------------------|------------------------------------------------------------
Operating System | Windows 7 (64-bit), Ubuntu 16.04 (64-bit), or macOS 10.15  | Windows 10 (64-bit), Ubuntu 18.04 (64-bit), or macOS 10.15
Python           | Python 3.6 (64-bit)                                        | Python 3.7-3.8 (64-bit)
Processor        | x86-64 dual-core                                           | x86-64 quad-core with SSE and AVX2
Graphics API     | OpenGL 3.3 (latest drivers)                                | OpenGL 3.3 (latest drivers)
Memory           | 2 GB RAM                                                   | 8 GB RAM
GPU              | Intel HD Graphics 4000 or better                           | NVIDIA GTX 1050 (4 GB), AMD Radeon RX 560 (4 GB), or Intel Iris 540 or better
Storage          | 2 GB                                                       | 2 GB
Resolution       | 1024x768                                                   | 1920x1080
Input            | Keyboard and trackpad                                      | Keyboard and 3-button mouse
Internet         | Broadband (see note below)                                 | Broadband (see note below)

Note: A broadband internet connection is required only for installation and signing in; normal operation does not require internet. Offline mode is available to paid users, but sign-in verification occurs every 2 weeks.

Zetane is designed to fit within an existing ML ecosystem (or workflow). A typical Zetane user has:

  • A basic understanding of Python programming.

  • Prior experience using ML frameworks (PyTorch, Keras, or TensorFlow).

  • Access to a machine that meets Minimum Requirements.

Dependencies

  • Python (3.6 or higher)

    • On Linux, if Python was not installed via pyenv or conda, the Python 3 development package (python3-dev on apt; python3-devel on yum) is required.

  • Git

  • Pip
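
As a quick sanity check before installing, the Python requirement above can be verified from the interpreter itself. This is a minimal sketch; the version threshold and the 64-bit requirement mirror the tables above:

```python
import struct
import sys

# Zetane requires a 64-bit build of Python 3.6 or newer.
meets_version = sys.version_info >= (3, 6)
pointer_bits = struct.calcsize("P") * 8  # 64 on a 64-bit interpreter

print(meets_version, pointer_bits)
```

If this prints anything other than `True 64`, install a supported 64-bit Python before proceeding.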

Installation

pip install zetane --upgrade -f https://download.zetane.com/zetane/index.html

Note

It is recommended to install zetane in a virtual environment created and managed by venv, conda, pipenv, or virtualenv. Outside a virtual environment, you may also have to add --user to install the library into your user-specific Python directory.
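
For example, a minimal setup using venv (the environment name zetane-env is arbitrary) might look like this:

```shell
# Create an isolated environment (assumes python3 is on PATH)
python3 -m venv zetane-env

# Activate it (on Windows use: zetane-env\Scripts\activate)
. zetane-env/bin/activate

# pip now resolves inside the environment
pip --version
```

After activation, run the pip install command shown above; inside a virtual environment the --user flag is not needed.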

Hello World

First, let’s get a “Hello World” example going. In your favorite Python environment, run the following code, which creates a text object and displays it in the Zetane Engine.

# Import the Zetane Python API module
import zetane

# Create the engine object and launch it
engine = zetane.Context()

# Create a text object in the engine
ztxt = engine.text("hello world")

# Change the font size to 1.0
ztxt.font_size(1.0)

# Send or update the text object to the engine
ztxt.update()

# Disconnect from the engine
engine.disconnect()

After running this code, the engine launches, the Python API connects to it, and you should see “Hello World” rendered in white.

[Image: Hello_World.PNG — “Hello World” displayed in the Zetane Engine]

Note: If the engine closes immediately after the script ends (this can happen in some IDEs and workflows, such as VSCode), consider launching the engine beforehand from the command prompt by typing zetane. Alternatively, keep the script alive by adding one of the following at the end:

input("Press Enter to exit.")

or

import time
time.sleep(10)

Basic Navigation

Zetane produces lively interactive environments. Here are the essential controls:

Action       | Mouse                     | Trackpad gestures
-------------|---------------------------|--------------------
Pan up/down  | Scroll                    | Scroll
Pan sideways | Shift+Scroll              | Scroll horizontally
Zoom         | Ctrl+Scroll or Alt+Scroll | Pinch-to-zoom
Rotate       | Middle drag               | Alt+Left drag
Select       | Left-click                | Left-click

The camera can be reset at any time using Ctrl+R. See Navigation for more available options.

Inspect Model Snapshots

The Zetane Engine interface provides snapshots of artificial neural networks for you to inspect directly.

After launching the engine as in the “Hello World” example above, click the Snapshots tab in the top toolbar. Then, click the following button to load the Fashion MNIST model example.

[Image: button_fashion_mnist.png — Fashion MNIST snapshot button]

You can now navigate this model snapshot, which was taken during the training of a Fashion MNIST model.

About the Zetane API

Zetane supports PyTorch and Keras/TensorFlow. The Zetane Engine is launched via an API call that starts the engine and sends it data that can then be explored.

import zetane

engine = zetane.Context()

The Zetane API is built around a context object, which wraps the state of the engine. The context also provides factories that build objects renderable in 3D/2D space, including models (Keras, PyTorch, ONNX), images, NumPy arrays, 2D graphs, 3D graphs, text, dials, height maps, point clouds, and meshes.

The context object is called to build other objects that can be created in the engine, like so:

model = engine.model()
text = engine.text()
image = engine.image()
graph = engine.chart()
metric1 = graph.metric()
metric2 = graph.metric()

All objects are chainable and sync with the engine via an .update() call. We send new data or update existing objects in the universe via object specific calls, for example:

model.torch(net, inputs).update()
text.text("Hello world!").update()
metric1.append_values(y=[0.10])
metric2.append_values(y=[0.85])
graph.update()
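
The chaining above works because each setter returns the object itself. The following sketch models that pattern with a stand-in class (illustrative only — Text here is not the real Zetane class, and a real object would send its state to the engine on update()):

```python
class Text:
    """A stand-in illustrating Zetane's chainable-setter pattern."""

    def __init__(self, content=""):
        self._content = content
        self._size = 1.0
        self.synced = False

    def text(self, content):
        self._content = content
        return self            # returning self is what enables chaining

    def font_size(self, size):
        self._size = size
        return self

    def update(self):
        self.synced = True     # a real object would sync with the engine here
        return self

ztxt = Text().text("Hello world!").font_size(2.0).update()
print(ztxt._content, ztxt._size, ztxt.synced)  # prints: Hello world! 2.0 True
```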

Other objects accept named-argument updates directly in the update() call:

image.update(data=image_data)
image.update(filepath='image.png')

To see how this works in practice, take a look at the two examples for working with Keras or PyTorch.

Dynamic Objects

In this example, we will create a dynamic scene by updating a Zetane object in a loop. While the example is simple, it demonstrates a useful pattern that can be applied to many Zetane objects, including Keras models, PyTorch models, ONNX models, images, meshes, graphs, and CSV tables.

import math
import time
import zetane

engine = zetane.Context()

ztxt = engine.text("Hello World").font_size(1.0).update()

for x in range(1000):
  ztxt.position(math.sin(x / 100), math.sin(x / 50), math.cos(x / 50))
  ztxt.color((math.sin(x / 1000), 0.3, 0.05)).update()
  time.sleep(0.015)

engine.disconnect()

Notice that multiple properties and commands can be chained, as in ztxt = engine.text("Hello World").font_size(1.0).update(). Inside the for loop, we repeatedly update the Zetane object, making the “Hello World” text buzz around in 3D space while gradually changing color. Before exiting the script, we disconnect from the engine, leaving the Zetane window alive and interactive.