Using Docker
Guide to start using Konduit-Serving with Docker
In this section, we provide guidance on running Konduit-Serving with Docker. Konduit-Serving is a platform for serving ML/DL models directly, with as little as a single line of code. We'll start by building and running the Konduit-Serving Docker image, and then deploy a trained model on Konduit-Serving.
Prerequisites
You will need the following prerequisites to follow along: Git, Docker, and Docker Compose (a web browser is also needed to open JupyterLab).
To ensure Konduit-Serving works properly, install these prerequisites first. We'll be using Konduit-Serving to deploy machine/deep learning pipelines more easily, using a few lines of code. We've prepared a GitHub repository to simplify showcasing Konduit-Serving.
Introduction to Repository
The repository contains simple examples of deploying a pipeline using different model types. To clone the repository, run the following command:
```bash
git clone https://github.com/ShamsUlAzeem/konduit-serving-demo
```
Build the CPU version of the Docker image by running the following command in the root directory:
```bash
bash build.sh CPU
```
After a successful build, run the Docker image with docker-compose from the current working directory:
```bash
docker-compose up
```
Now, open JupyterLab in the browser.
Explore Konduit-Serving in Repository
Let's take a look inside the demos directory of konduit-serving-demo. Each folder inside the demos directory demonstrates serving a different kind of model, using a configuration file in either JSON or YAML, through the Konduit-Serving CLI.
The examples use different frameworks, including Keras, TensorFlow, PyTorch, and DL4J. These examples can be run as IPython Notebooks (.ipynb) with a Java-based kernel. Konduit-Serving lets users work with models through the konduit CLI, with commands such as serve, list, logs, predict and stop. You can also build your own model and serve it with Konduit-Serving.
Let's look at training and serving a model with Konduit-Serving.
Build your model
These are the steps to train a model from scratch, first in Keras and then in DL4J:
First, import the libraries that will be used:
Load the Iris data set:
Display the distribution of the data set:
Apply data pre-processing and split the data into training and testing sets:
Configure the model and print its summary:
Train the model on the training data for 800 epochs:
Test the model by predicting on the testing data:
Display the confusion matrix to compare the actual and predicted results in more detail (if satisfied, you can save the model):
You can also try predicting with your own values using the trained model:
Print the classification result:
Then, save the trained model in HDF5 (.h5) format, which will be used in Konduit-Serving later:
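Putting the steps above together, a minimal Keras sketch might look like the following. The 10-unit hidden layer, the 70/30 split, the Adam optimizer, and the file name iris_model.h5 are assumptions of this sketch; the Iris data, the 800 epochs, and the HDF5 output format come from the steps above:

```python
# Hedged sketch of the Keras training steps; layer sizes, split ratio,
# optimizer, and file name are assumptions, not the notebook's exact code.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

# Load the Iris data set (150 samples, 4 features, 3 classes)
X, y = load_iris(return_X_y=True)

# Pre-process: split into training/testing sets, then standardize features
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Configure a small feed-forward classifier and print its summary
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(10, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Train on the training data for 800 epochs, then evaluate on the test split
model.fit(X_train, y_train, epochs=800, verbose=0)
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"test accuracy: {acc:.2f}")

# Predict a single hand-picked sample (don't forget to scale it first)
sample = scaler.transform([[5.1, 3.5, 1.4, 0.2]])
print("predicted class:", int(np.argmax(model.predict(sample, verbose=0))))

# Save the trained model in HDF5 format for Konduit-Serving
model.save("iris_model.h5")
```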
The model is now ready to be deployed.
First, import the libraries that will be used (auto-generated if you are using IntelliJ; proceed to the next step):
Declare the variables that will be used in the training process (this code goes inside the main method):
Load the Iris data set from the resources file:
Get the data set using a record reader (to handle loading and parsing):
Create an iterator from the record reader:
Shuffle the data and split it into training and testing sets:
Apply data pre-processing by normalization:
Configure and initialize the model:
Display the training process in a UI, which can be viewed in the browser (optional):
Train the model using the training data:
Evaluate the model using the testing data (save the model if satisfied):
Then, save the model in ZIP format at the location and under the name of your choice. The model is then ready to be used (in this example, the model is saved in the current working directory):
The model is now ready to be deployed.
Deploy your model in Konduit-Serving
Now you are ready to deploy a model in Konduit-Serving using the konduit CLI. This step needs a saved model file (a .h5 or .zip file) and a JSON/YAML configuration file. Let's begin:
Create a new folder in the demos directory (for example, 10-iris-model):

Drag the model file into the new folder and create an IPython Notebook file with the Java kernel (for example, iris-model.ipynb).
In this notebook, we will use the konduit CLI.
Check the version of Konduit-Serving to confirm whether it is installed:
Create the JSON/YAML configuration file by using the `konduit config` command.
YAML configuration:
Or, you can try this command to get a JSON configuration file:
In the configuration file, you need to edit the pipeline section (for this example, we will use YAML with a DL4J model):
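As a rough illustration, a YAML configuration with a DL4J pipeline step can look something like the sketch below. This is not authoritative: the exact schema comes from your own `konduit config` output and may differ between Konduit-Serving versions, and the model path and the input/output names here are placeholders to be replaced with your own values:

```yaml
host: "localhost"
port: 0
pipeline:
  steps:
  - '@type': "DEEPLEARNING4J"
    modelUri: "iris-model.zip"
    inputNames:
    - "input"
    outputNames:
    - "output"
```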
To determine the input and output names, you can use Netron to inspect the ML/DL model. For example:
In the node properties, use the name value of the first weights as "inputNames".

In the node properties, use the name value of the last weights as "outputNames".

In the model properties, use the name value of the input for "inputNames" and of the output for "outputNames".

Start the server by using the `konduit serve` command and give it an id of your choice:
List the active servers in Konduit-Serving with `konduit list`:
Show the last 100 lines of the selected server id's logs with `konduit logs`:
Test the ML/DL model's predictions with `konduit predict` on the selected server id (in this example: dl4j-iris):
You can test again with another input value to get a different result:
The result will look like this:

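The `konduit predict` subcommand sends the input to the running server over HTTP, so you can also call the server's REST endpoint directly. The sketch below is an assumption-heavy illustration: the port (9009), the /predict endpoint path, and the input name "input" are placeholders — check `konduit list` and your YAML configuration for the actual values, and note that endpoint paths can vary between Konduit-Serving versions.

```python
# Hedged sketch: querying a running Konduit-Serving instance over HTTP
# instead of the `konduit predict` CLI. Port, endpoint path, and the
# input name "input" are assumptions, not documented values.
import json
import urllib.request

# One Iris sample: sepal length/width, petal length/width
payload = json.dumps({"input": [[5.1, 3.5, 1.4, 0.2]]}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:9009/predict",
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(request, timeout=5) as response:
        # The server answers with the pipeline's outputs as JSON
        print(json.loads(response.read()))
except OSError as error:
    # No server reachable -- start one with `konduit serve` first
    print("server not reachable:", error)
```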
For a more readable result, you can edit the pipeline section of the JSON/YAML file as below:
This way, you will get the classification result directly, along with the prediction's label:

Last but not least, use the `konduit stop` command to terminate the selected server id in Konduit-Serving:
Congratulations! You have deployed a model with Konduit-Serving on your own. What's next?