Using Docker

A guide to getting started with Konduit-Serving using Docker

In this section, we provide guidance on getting started with Konduit-Serving using Docker. Konduit-Serving is a platform for serving ML/DL models with just a few lines of code. We'll start by building the Konduit-Serving Docker image, then deploy a trained model with it.

Prerequisites

You will need the following prerequisites to follow along.

To ensure Konduit-Serving works properly, install these prerequisites first; at minimum you'll need Docker (with Docker Compose) and Git, since the steps below clone a repository and then build and run a Docker image. We'll be using Konduit-Serving to deploy machine/deep learning pipelines more easily, using only a few lines of code. We've prepared a GitHub repository to simplify showcasing Konduit-Serving.

Introduction to the Repository

The repository contains simple examples of deploying a pipeline using different model types. To clone the repository, run the following command:

git clone https://github.com/ShamsUlAzeem/konduit-serving-demo

Build the CPU version of the Docker image by running the following command in the root directory:

bash build.sh CPU

After a successful build, run the Docker image with docker-compose from the same working directory:

docker-compose up

Now, open JupyterLab in the browser (typically at http://localhost:8888; check the docker-compose output for the exact address).

Exploring Konduit-Serving in the Repository

Let's take a look inside the demos directory of konduit-serving-demo. Each folder inside demos demonstrates serving a different kind of model through the Konduit-Serving CLI, using a configuration file in either JSON or YAML format.

The examples use different frameworks, including Keras, TensorFlow, PyTorch, and DL4J, and run as Jupyter notebooks (.ipynb) with a Java-based kernel. Konduit-Serving's konduit CLI provides commands such as serve, list, logs, predict and stop. You can also build your own model and serve it with Konduit-Serving.

Let's look at training and serving a model with Konduit-Serving.

Build your model

These are the steps to train a model from scratch with Keras or DL4J:

  • First, import the libraries that will be used:

  • Load the Iris data set:

  • Display the distribution of the data set:

  • Apply data pre-processing and split the data into training and testing sets:

  • Configure the model and print its summary:

  • Train the model on the training data for 800 epochs:

  • Test the model by predicting on the testing data:

  • Display the confusion matrix for a more detailed comparison of actual and predicted results (if you're satisfied, save the model):

  • You can also try predicting with your own values using the trained model:

  • Print the classification result:

  • Finally, save the trained model in HDF5 (.h5) format, which will be used in Konduit-Serving later:
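Taken together, the steps above can be sketched in Python as follows. This is a sketch using Keras with scikit-learn's Iris loader, not the repository's exact notebook code; the file name iris-model.h5, the layer sizes, and the sample values are illustrative:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix
from tensorflow import keras

# Load the Iris data set (150 samples, 4 features, 3 classes)
X_raw, y = load_iris(return_X_y=True)

# Pre-process: standardize features, then split into training and testing sets
scaler = StandardScaler().fit(X_raw)
X = scaler.transform(X_raw)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Configure a small feed-forward classifier and print its summary
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Train on the training data for 800 epochs, as in the guide
model.fit(X_train, y_train, epochs=800, verbose=0)

# Test by predicting on the testing data, and display the confusion matrix
y_pred = np.argmax(model.predict(X_test, verbose=0), axis=1)
print(confusion_matrix(y_test, y_pred))

# Predict with your own (illustrative) values and print the result
sample = scaler.transform([[5.1, 3.5, 1.4, 0.2]])
print("predicted class:", np.argmax(model.predict(sample, verbose=0)))

# Save the trained model in HDF5 (.h5) format for Konduit-Serving
model.save("iris-model.h5")
```

The DL4J variant follows the same outline in Java, saving the trained network as a .zip instead of .h5.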

The model is now ready to be deployed.

Deploy your model in Konduit-Serving

Now you are ready to deploy a model in Konduit-Serving using the konduit CLI. This step needs a saved model file (an .h5 or .zip file) and a JSON/YAML configuration file. Let's begin:

  • Create a new folder in the demos directory (for example, 10-iris-model):

  • Drag the model file into the new folder and create a Jupyter notebook with the Java kernel (for example, iris-model.ipynb).

In this notebook, we will use the konduit CLI.

  • Check the Konduit-Serving version to verify that it is installed:

  • Create the JSON/YAML configuration file using the konduit config command:

YAML configuration:

Or, you can try this command to get a JSON configuration file:
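As a rough illustration, the config commands might look like the following. The flag names here are assumptions and may differ between Konduit-Serving versions, so verify them with `konduit config --help`:

```shell
# Generate a YAML configuration skeleton for a DL4J pipeline step
# (--pipeline, --yaml and --output are illustrative flag names)
konduit config --pipeline dl4j --yaml --output dl4j.yaml

# Or generate the same configuration as JSON
konduit config --pipeline dl4j --output dl4j.json
```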

  • In the configuration file, edit the pipeline section to point at your model (for this example, we will use YAML with DL4J):
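The edited pipeline section might look roughly like this. This is a sketch, not the repository's exact file: the step type name, model path, and input/output names are assumptions you should adapt to your own model and Konduit-Serving version:

```yaml
host: "localhost"
port: 0
pipeline:
  steps:
    - '@type': "DEEPLEARNING4J"
      modelUri: "dl4j-iris.zip"
      inputNames:
        - "input"
      outputNames:
        - "output"
```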

  • To determine the names of the model's inputs and outputs, you can open the ML/DL model in Netron, for example:

In the node properties, use the name value of the first weights as "inputNames".

In the node properties, use the name value of the last weights as "outputNames".

  • Start the server using the konduit serve command, giving it an id of your choice:

  • List the active servers in Konduit-Serving with konduit list:

  • Show the last 100 lines of the selected server's logs with konduit logs:

  • Test the ML/DL model's predictions with konduit predict against the selected server id (in this example: dl4j-iris):

  • You can test again with another input value to get a different result:

  • The result will look like this:

  • For a more readable result, you can edit the pipeline section of the JSON/YAML file as below:

  • This way, you get the classification result directly, together with the predicted label:

  • Last but not least, use konduit stop to terminate the server with the selected id:
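Put together, the serving session above might look like this. The id dl4j-iris follows the example, while the flags and the JSON input format are assumptions to check against `konduit --help` for your version:

```shell
# Start a server in the background with an id of your choice
# (-c/--config, -id and -b/--background are illustrative flag names)
konduit serve -c dl4j.yaml -id dl4j-iris -b

# List the currently active servers
konduit list

# Show the last 100 lines of this server's logs
konduit logs dl4j-iris --lines 100

# Send a prediction request; the input key and shape are assumptions
konduit predict dl4j-iris '{"input": [[5.1, 3.5, 1.4, 0.2]]}'

# Stop the server when you are done
konduit stop dl4j-iris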

Congratulations! You have deployed a model with Konduit-Serving on your own. What's next?
