Using CLI

Guide to start using Konduit-Serving with CLI

This guide demonstrates Konduit-Serving primarily through its CLI tools. Konduit-Serving lets you deploy ML/DL models to production with minimal effort. Let's look at how to build and install Konduit-Serving from source and how to deploy a model using a simple configuration.

Prerequisites

You will need the following prerequisites to follow along:

  • Maven 3.x

  • JDK 8

  • Git

Installation from Sources

The following two sections explain how to clone, build, and install Konduit-Serving from source.

To build from source, follow the guide below:

Building from source

To install the built binaries, see the section below:

Installing Binaries

After you've installed Konduit-Serving on your local machine, you can switch to a terminal and verify the installation by running:

konduit --version

You'll see output similar to the one below:

$ konduit --version
------------------------------------------------
Version: 0.1.0-SNAPSHOT
Commit hash: 3dd38832
Commit time: 01.03.2021 @ 03:37:08 MYT
Build time: 07.03.2021 @ 16:57:51 MYT

Deploying Models

Let's look at how to deploy a DL4J or Keras model using Konduit-Serving.

Cloning the Examples Repo

Let's clone the konduit-serving-examples repo and navigate to the quickstart folder.
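The two steps above can be sketched as follows (the repository URL is an assumption based on the repo name):

```shell
# Clone the examples repository and enter the quickstart folder
git clone https://github.com/KonduitAI/konduit-serving-examples.git
cd konduit-serving-examples/quickstart
```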

The examples we want to run are under the folders 3-keras-mnist and 5-dl4j-mnist. Let's follow a basic workflow for both models using the Konduit-Serving CLI.

Navigate to 3-keras-mnist

Here, you'll find the following files:

The keras.json file contains the configuration for serving a Keras model trained on the MNIST dataset. To serve the model, execute the following command:
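A minimal sketch of the serve command, assuming a --config flag and an -id flag for naming the server (the id keras-server matches the name used with konduit stop later in this guide):

```shell
# Start a server named "keras-server" from the Keras configuration
konduit serve --config keras.json -id keras-server
```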

You'll see output similar to the following:

The last line shows the URL at which the server is serving the model.

Press Ctrl + C, or execute konduit stop keras-server to kill the server.

To run the server in the background, you can run the same command with the --background or -b flag.
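For example, assuming the same serve flags as above:

```shell
# Start the same server detached from the terminal
konduit serve --config keras.json -id keras-server --background
```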

You'll see something similar to

To list the running servers, simply run:
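A minimal sketch, assuming a list subcommand:

```shell
# Show all running Konduit-Serving servers
konduit list
```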

You'll see the running servers as a list

To view the logs, you can run the following command
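For example, to show the last 100 lines of the keras-server logs (the logs subcommand and the server id are assumptions; the --lines flag is described below):

```shell
# Tail the last 100 log lines of the server named "keras-server"
konduit logs keras-server --lines 100
```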

The --lines or -l flag sets how many of the most recent log lines to show. After executing the above command, you'll see the following:

Finally, let's look at running predictions with Konduit-Serving by sending an image file to the server.
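A sketch of such a request, assuming a predict subcommand with a multipart input type and a sample image named test-image.jpg in the example folder (all of these are assumptions, not confirmed by this guide):

```shell
# Send an image to the running server for prediction
konduit predict keras-server --input-type multipart "image=@test-image.jpg"
```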

The server converts the image into an n-dimensional array and passes it as input to the Keras model. You'll see the following output:

Congratulations! You've learned the basic workflow for Konduit-Serving using the Command Line Interface.
