Using CLI
A guide to getting started with Konduit-Serving using the CLI
This document demonstrates how to use Konduit-Serving mainly through its CLI tools. Konduit-Serving lets you deploy ML/DL models to production with minimal effort. Let's look at how to build and install Konduit-Serving from source and how to deploy a model with a simple configuration.

Prerequisites

You will need the following prerequisites to follow along:
    Maven 3.x
    JDK 8
    Git
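You can quickly confirm that these tools are available from a terminal:
# print the installed Maven, JDK and Git versions
mvn -version
java -version
git --version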

Installation from Sources

The following two sections explain how to clone, build, and install Konduit-Serving from source.
To build from source, follow the guide below.
To install the built binaries, navigate to the section below.
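In outline, a from-source build is a standard Maven build. The repository URL and flags below are an illustrative sketch only, so refer to the build guide for the definitive steps:
# clone the main Konduit-Serving repository and build it with Maven,
# skipping tests to save time (sketch; exact modules/profiles may differ)
git clone https://github.com/KonduitAI/konduit-serving.git
cd konduit-serving
mvn clean package -DskipTests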
After you've installed Konduit-Serving on your local machine, you can switch to a terminal and verify the installation by running
konduit --version
You'll see an output similar to the one below
$ konduit --version
------------------------------------------------
Version: 0.1.0-SNAPSHOT
Commit hash: 3dd38832
Commit time: 01.03.2021 @ 03:37:08 MYT
Build time: 07.03.2021 @ 16:57:51 MYT

Deploying Models

Let's look at how to deploy a DL4J or Keras model using Konduit-Serving.

Cloning the Examples Repo

Let's clone the konduit-serving-examples repo
git clone https://github.com/KonduitAI/konduit-serving-examples.git
and navigate to the quickstart folder
cd konduit-serving-demo/quickstart
The examples we want to run are under the folders 3-keras-mnist and 5-dl4j-mnist. Let's follow a basic workflow for both models using the Konduit-Serving CLI.
Keras
DL4J
Navigate to 3-keras-mnist
cd 3-keras-mnist
Here, you'll find the following files:
.
├── keras-mnist.ipynb | A supplementary jupyter/beakerx notebook
├── keras.h5          | Model file we want to serve
├── keras.json        | Konduit-Serving configuration
├── test-image.jpg    | Test input for predictions
└── train.py          | Script for creating the 'keras.h5' model file
The keras.json file contains the configuration for serving a Keras model trained on the MNIST dataset. To serve the model, execute the following command
konduit serve --config keras.json -id keras-server
You'll see output similar to the following
.
.
.
15:00:08.575 [vert.x-worker-thread-0] INFO a.k.s.m.d.step.DL4JRunner -
15:00:08.576 [vert.x-worker-thread-0] INFO a.k.s.v.verticle.InferenceVerticle -
####################################################################
#                                                                  #
#                (Konduit-Serving ASCII art banner)                #
#                                                                  #
####################################################################
15:00:08.576 [vert.x-worker-thread-0] INFO a.k.s.v.verticle.InferenceVerticle - Pending server start, please wait...
.
.
.
.
15:00:08.752 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server is listening on host: 'localhost'
15:00:08.752 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server started on port 40987 with 4 pipeline steps
The last line shows the URL (host and port) at which the server is serving the model.
Press Ctrl + C, or execute konduit stop keras-server to kill the server.
To run the server in the background, you can run the same command with the --background or -b flag.
konduit serve --config keras.json -id keras-server --background
You'll see something similar to
Starting konduit server...
Expected classpath: /Users/konduit/Projects/Konduit/konduit-serving/konduit-serving-tar/target/konduit-serving-tar-0.1.0-SNAPSHOT-dist/bin/../konduit.jar
INFO: Running command /Users/konduit/opt/miniconda3/jre/bin/java -Dkonduit.logs.file.path=/Users/konduit/.konduit-serving/command_logs/keras-server.log -Dlogback.configurationFile=/Users/konduit/Projects/Konduit/konduit-serving/konduit-serving-tar/target/konduit-serving-tar-0.1.0-SNAPSHOT-dist/bin/../conf/logback-run_command.xml -cp /Users/konduit/Projects/Konduit/konduit-serving/konduit-serving-tar/target/konduit-serving-tar-0.1.0-SNAPSHOT-dist/bin/../konduit.jar ai.konduit.serving.cli.launcher.KonduitServingLauncher run --instances 1 -s inference -c keras.json -Dserving.id=keras-server
For server status, execute: 'konduit list'
For logs, execute: 'konduit logs keras-server'
To list the running servers, run
konduit list
You'll see the running servers as a list
Listing konduit servers...

 #  | ID           | TYPE      | URL            | PID  | STATUS
 1  | keras-server | inference | localhost:1000 | 1200 | Started
To view the logs, you can run the following command
konduit logs keras-server --lines 2
The --lines or -l flag shows the specified number of most recent log lines. Executing the above command shows the following
15:00:08.752 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server is listening on host: 'localhost'
15:00:08.752 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server started on port 1000 with 4 pipeline steps
Finally, let's look at running predictions with Konduit-Serving by sending an image file to the server.
konduit predict keras-server -it multipart 'image=@test-image.jpg'
The server converts the image into an n-dimensional array, feeds it to the Keras model, and you'll see the following output
{
  "output_layer" : [ [ 9.0376153E-7, 1.0595608E-8, 1.3115231E-5, 0.44657645, 6.748624E-12, 0.5524258, 1.848306E-7, 2.7652052E-9, 9.76023E-4, 7.5933513E-6 ] ],
  "prob" : 0.5524258017539978,
  "index" : 5,
  "label" : "5"
}
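Since the server also exposes a plain HTTP endpoint, you can send the same request without the CLI, for example with curl. The /predict route, the 'image' part name, and the port below are assumptions taken from the command and startup logs above, so adjust them to match your own server:
# POST the test image as multipart/form-data to the inference server
# (assumed /predict route; replace 40987 with the port from your logs)
curl -X POST -F 'image=@test-image.jpg' http://localhost:40987/predict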
Navigate to 5-dl4j-mnist
cd 5-dl4j-mnist
Here, you'll find the following files:
.
├── dl4j-mnist.ipynb | A supplementary jupyter/beakerx notebook
├── dl4j-mnist.zip   | Model file we want to serve
├── dl4j.json        | Konduit-Serving configuration
└── test-image.jpg   | Test input for predictions
The dl4j.json file contains the configuration for serving a DL4J model trained on the MNIST dataset. To serve the model, execute the following command
konduit serve --config dl4j.json -id dl4j-server
You'll see output similar to the following
.
.
.
15:00:08.575 [vert.x-worker-thread-0] INFO a.k.s.m.d.step.DL4JRunner -
15:00:08.576 [vert.x-worker-thread-0] INFO a.k.s.v.verticle.InferenceVerticle -
####################################################################
#                                                                  #
#                (Konduit-Serving ASCII art banner)                #
#                                                                  #
####################################################################
15:00:08.576 [vert.x-worker-thread-0] INFO a.k.s.v.verticle.InferenceVerticle - Pending server start, please wait...
.
.
.
.
15:00:08.752 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server is listening on host: 'localhost'
15:00:08.752 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server started on port 40987 with 4 pipeline steps
The last line shows the URL (host and port) at which the server is serving the model.
Press Ctrl + C, or execute konduit stop dl4j-server to kill the server.
To run the server in the background, you can run the same command with the --background or -b flag.
konduit serve --config dl4j.json -id dl4j-server --background
You'll see something similar to
Starting konduit server...
Expected classpath: /Users/konduit/Projects/Konduit/konduit-serving/konduit-serving-tar/target/konduit-serving-tar-0.1.0-SNAPSHOT-dist/bin/../konduit.jar
INFO: Running command /Users/konduit/opt/miniconda3/jre/bin/java -Dkonduit.logs.file.path=/Users/konduit/.konduit-serving/command_logs/dl4j-server.log -Dlogback.configurationFile=/Users/konduit/Projects/Konduit/konduit-serving/konduit-serving-tar/target/konduit-serving-tar-0.1.0-SNAPSHOT-dist/bin/../conf/logback-run_command.xml -cp /Users/konduit/Projects/Konduit/konduit-serving/konduit-serving-tar/target/konduit-serving-tar-0.1.0-SNAPSHOT-dist/bin/../konduit.jar ai.konduit.serving.cli.launcher.KonduitServingLauncher run --instances 1 -s inference -c dl4j.json -Dserving.id=dl4j-server
For server status, execute: 'konduit list'
For logs, execute: 'konduit logs dl4j-server'
To list the running servers, run
konduit list
You'll see the running servers as a list
Listing konduit servers...

 #  | ID          | TYPE      | URL            | PID  | STATUS
 1  | dl4j-server | inference | localhost:1000 | 1200 | Started
To view the logs, you can run the following command
konduit logs dl4j-server --lines 2
The --lines or -l flag shows the specified number of most recent log lines. Executing the above command shows the following
15:00:08.752 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server is listening on host: 'localhost'
15:00:08.752 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server started on port 1000 with 4 pipeline steps
Finally, let's look at running predictions with Konduit-Serving by sending an image file to the server.
konduit predict dl4j-server -it multipart 'image=@test-image.jpg'
The server converts the image into an n-dimensional array, feeds it to the DL4J model, and you'll see the following output
{
  "layer5" : [ [ 1.845163E-5, 1.8346094E-6, 0.31436875, 0.43937472, 2.6101702E-8, 0.24587035, 5.9430695E-6, 3.3270408E-4, 6.3698195E-8, 2.708706E-5 ] ],
  "prob" : 0.439374715089798,
  "index" : 3,
  "label" : "3"
}
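When you're done experimenting, stop both servers and confirm that nothing is left running:
# stop the background servers started above, then list any that remain
konduit stop keras-server
konduit stop dl4j-server
konduit list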
Congratulations! You've learned the basic workflow for Konduit-Serving using the Command Line Interface.