In this section, we provide guidance on how to demonstrate Konduit-Serving with Docker. Konduit-Serving is a platform for serving ML/DL models directly with a single line of code. We'll start by building and installing Konduit-Serving from a Docker image, and then deploy a trained model in Konduit-Serving.
Prerequisites
You will need the following prerequisites to follow along.
To ensure Konduit-Serving works properly, install these prerequisites. We'll be using Konduit-Serving to deploy machine/deep learning pipelines more easily, using only a few lines of code. We've prepared a GitHub repository to make showcasing Konduit-Serving simpler.
Introduction to Repository
The repository contains simple examples of deploying a pipeline using different model types. To clone the repository, run the following command:
Let's take a look inside the demos directory from konduit-serving-demo. Each folder inside the demos folder demonstrates serving a different kind of model, using a configuration file in either JSON or YAML, through the Konduit-Serving CLI.
The examples use different frameworks, including Keras, TensorFlow, PyTorch, and DL4J. These examples can be run in IPython notebooks (.ipynb) with a Java-based kernel. Konduit-Serving lets users manage served models through konduit CLI commands such as serve, list, logs, predict, and stop. You can also build your own model and serve it with Konduit-Serving.
Let's look at training and serving a model with Konduit-Serving.
Build your model
These are the steps to train a Keras model and a DL4J model from scratch:
First, import the libraries that will be used:

```python
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import seaborn as sns
import pandas as pd
import numpy as np
```
Then, define and compile the model:

```python
model = Sequential()
model.add(Dense(4, input_shape=(4,), activation='relu', name="input"))
model.add(Dense(3, activation='softmax', name='output'))
model.compile(Adam(lr=0.01), 'categorical_crossentropy', metrics=['accuracy'])
model.summary()
```
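Before training this network, the scikit-learn helpers imported above are typically used to prepare the iris data: the three class labels are one-hot encoded to match the 3-unit softmax output, and the rows are split into training and test sets. A minimal sketch, assuming the standard iris data set (the variable names here are illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

# Load iris: 150 rows, 4 feature columns, 3 classes.
iris = load_iris()
X = iris.data.astype("float32")                    # shape (150, 4)
y_int = LabelEncoder().fit_transform(iris.target)  # integer classes 0..2
y = np.eye(3, dtype="float32")[y_int]              # one-hot labels, shape (150, 3)

# Hold out 20% of the rows for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1234)
```

After this, `model.fit(X_train, y_train, ...)` trains the network, and `model.save('keras_iris_model.h5')` writes the .h5 file that is used for serving later (the file name is an example).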
Declare the variables that will be used in the training process (this code goes inside the main method body):
```java
int numLinesToSkip = 0;
char delimiter = ',';
int batchSize = 150; // Iris data set: 150 examples total. We are loading all of them into one DataSet (not recommended for large data sets)
int labelIndex = 4;  // index of label/class column
int numClasses = 3;
int seed = 1234;
int epochs = 800;
double learningRate = 0.1;
int nHidden = 4;
```
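To make the role of these values concrete, here is a small Python sketch of parsing rows in the layout the DL4J CSV loader expects: `numLinesToSkip` header lines are dropped, fields are split on `delimiter`, and the class label sits at column `labelIndex`. The three sample rows below are illustrative, not the real data file:

```python
import csv
import io

# Hypothetical iris rows: 4 feature columns, then the class label at index 4.
raw = """5.1,3.5,1.4,0.2,0
4.9,3.0,1.4,0.2,0
6.3,3.3,6.0,2.5,2
"""
num_lines_to_skip = 0
delimiter = ","
label_index = 4

rows = list(csv.reader(io.StringIO(raw), delimiter=delimiter))[num_lines_to_skip:]
features = [[float(v) for v in r[:label_index]] for r in rows]
labels = [int(r[label_index]) for r in rows]
print(features[0], labels[0])  # -> [5.1, 3.5, 1.4, 0.2] 0
```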
Configure and initialize the model that will be used:
```java
MultiLayerConfiguration config = new NeuralNetConfiguration.Builder()
    .seed(seed)
    .weightInit(WeightInit.XAVIER)
    .activation(Activation.TANH)
    .updater(new Nesterovs(learningRate, Nesterovs.DEFAULT_NESTEROV_MOMENTUM))
    .l2(1e-4)
    .list()
    .layer(0, new DenseLayer.Builder().nIn(labelIndex).nOut(nHidden).build())
    .layer(1, new DenseLayer.Builder().nOut(nHidden).build())
    .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
        .activation(Activation.SOFTMAX)
        .nOut(numClasses)
        .name("output")
        .build())
    .backpropType(BackpropType.Standard)
    .build();
MultiLayerNetwork model = new MultiLayerNetwork(config);
model.init();
```
Optionally, attach a stats listener so the training process can be monitored in the browser-based DL4J UI:
```java
StatsStorage storage = new InMemoryStatsStorage();
UIServer server = UIServer.getInstance();
server.attach(storage);
model.setListeners(new StatsListener(storage, 10));
```
Train the model using the training data:
```java
for (int i = 0; i < epochs; i++) {
    model.fit(trainingData);
}
```
Evaluate the model using the test data (and save the model if you are satisfied with its performance):
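On the Keras side, the imports above include scikit-learn's confusion_matrix, which is the usual way to inspect per-class results on the test data. A sketch using a scikit-learn logistic regression as a stand-in for the trained network (any fitted classifier is evaluated the same way):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1234, stratify=y)

# Stand-in classifier; in the document, this would be the trained neural network.
clf = LogisticRegression(max_iter=500).fit(X_train, y_train)

# Rows are true classes, columns are predicted classes; the diagonal counts
# correct predictions for each of the three iris species.
cm = confusion_matrix(y_test, clf.predict(X_test))
print(cm)
```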
Then, save the model in ZIP format at the location where you want to keep it (in this example, the current working directory). The model is then ready to be used:
```java
File locationToSave = new File("./dl4j_iris_model.zip");
boolean saveUpdater = true;
ModelSerializer.writeModel(model, locationToSave, saveUpdater);
System.out.println("******PROGRAM IS FINISHED******");
```
The model is now ready to be deployed.
Deploy your model in Konduit-Serving
Now you are ready to deploy a model in Konduit-Serving using the konduit CLI. This step requires a saved model file (an .h5 or .zip file) and a JSON or YAML configuration file. Let's begin:
Create a new folder in the demos directory (for example, 10-iris-model):
Drag the model file into the new folder and create an IPython Notebook file with the Java kernel (for example, iris-model.ipynb).
In this notebook, we will use the konduit CLI. First, check whether Konduit-Serving is installed and which version you have:
```bash
%%bash
konduit -version
```
Create the JSON/YAML configuration file using the konduit config command:
YAML configuration:
```bash
%%bash
konduit config -p keras -o iris-keras.yaml -y
```
Or, you can try this command to get a JSON configuration file: