DL4J
Example of DL4J framework with CUSTOM endpoints

Including the package in the classpath

Before serving the model, let's add the main Konduit-Serving package to the classpath so that all the necessary libraries are loaded into the Jupyter Notebook kernel.
%classpath add jar ../../konduit.jar
The classpath is similar to site-packages in the Python ecosystem: it tells the JVM where to find each library that your code imports.
Let's start a server with an id of dl4j-mnist and use dl4j.json as the configuration file.
%%bash
konduit serve -id dl4j-mnist -c dl4j.json -rwm -b
You'll see the following message indicating that the server is starting.
Starting konduit server...
Using classpath: /root/konduit/bin/../konduit.jar
INFO: Running command /root/miniconda/jre/bin/java -Dkonduit.logs.file.path=/root/.konduit-serving/command_logs/dl4j-mnist.log -Dlogback.configurationFile=/tmp/logback-run_command_a6000ad26ed94583.xml -jar /root/konduit/bin/../konduit.jar run --instances 1 -s inference -c dl4j.json -Dserving.id=dl4j-mnist
For server status, execute: 'konduit list'
For logs, execute: 'konduit logs dl4j-mnist'
Use konduit logs to get the logs of the served model.
%%bash
konduit logs dl4j-mnist -l 100
The logging output looks similar to the following.
.
.
.
15:00:54.683 [vert.x-worker-thread-0] INFO a.k.s.v.verticle.InferenceVerticle -

####################################################################
# #
# | / _ \ \ | _ \ | | _ _| __ __| | / | / #
# . < ( | . | | | | | | | . < . < #
# _|\_\ \___/ _|\_| ___/ \__/ ___| _| _|\_\ _) _|\_\ _) #
# #
####################################################################

15:00:54.683 [vert.x-worker-thread-0] INFO a.k.s.v.verticle.InferenceVerticle - Pending server start, please wait...
15:00:54.703 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - MetricsProvider implementation detected, adding endpoint /metrics
15:00:54.718 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - No GPU binaries found. Selecting and scraping only CPU metrics.
15:00:54.861 [vert.x-eventloop-thread-0] INFO a.k.s.v.verticle.InferenceVerticle - Writing inspection data at '/root/.konduit-serving/servers/1517.data' with configuration:
.
.
.
15:00:54.862 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server is listening on host: 'localhost'
15:00:54.862 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server started on port 39487 with 4 pipeline steps
We can use the konduit list command to view all active servers.
%%bash
konduit list
This is an example of the output when the server above is still running alongside another active server.
Listing konduit servers...

 # | ID          | TYPE      | URL             | PID   | STATUS
 1 | keras-mnist | inference | localhost:33387 | 31757 | started
 2 | dl4j-mnist  | inference | localhost:35921 | 31893 | started
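If you want to consume this listing programmatically, the table can be parsed line by line. A minimal sketch, assuming the column layout shown in the sample above (it may differ between Konduit-Serving versions); in practice you would capture the text with subprocess rather than hard-coding it:

```python
# Parse the tabular output of `konduit list` into dictionaries.
# The listing text below is the sample from this page; in practice you
# would capture it, e.g. subprocess.run(["konduit", "list"], ...).
listing = """\
 # | ID          | TYPE      | URL             | PID   | STATUS
 1 | keras-mnist | inference | localhost:33387 | 31757 | started
 2 | dl4j-mnist  | inference | localhost:35921 | 31893 | started"""

rows = [[cell.strip() for cell in line.split("|")] for line in listing.splitlines()]
header, body = rows[0], rows[1:]
servers = [dict(zip(header, row)) for row in body]

for server in servers:
    print(server["ID"], server["URL"], server["STATUS"])
```

This gives you each server's URL, which is handy for sending HTTP requests directly instead of going through the CLI.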

Sending an input to the served model

We're going to display the test image first before feeding it into the model.
%%html
<img src="test-image.jpg"/>
The image above is the test input for this deployed model. Now, let's run a prediction on it.
%%bash
konduit predict dl4j-mnist -it multipart "image=@test-image.jpg"
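Under the hood, `konduit predict -it multipart` wraps the file in a multipart/form-data HTTP request to the server's URL. A minimal sketch of how such a body is assembled by hand; the form field name `image` and the placeholder payload are assumptions for illustration, not taken from the CLI's output:

```python
import uuid

def build_multipart(field: str, filename: str, payload: bytes):
    """Build a multipart/form-data body and its boundary by hand."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return boundary, head + payload + tail

# The request would then be POSTed to the server's URL with the header:
#   Content-Type: multipart/form-data; boundary=<boundary>
boundary, body = build_multipart("image", "test-image.jpg", b"<jpeg bytes>")
print(boundary in body.decode())  # → True
```

The CLI builds exactly this kind of request for you, so in practice you only need this when integrating a client in another language or without the konduit binary.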
You'll see the following output with the classification label.
{
  "layer5" : [ [ 1.845163E-5, 1.8346094E-6, 0.31436875, 0.43937472, 2.6101702E-8, 0.24587035, 5.9430695E-6, 3.3270408E-4, 6.3698195E-8, 2.708706E-5 ] ],
  "prob" : 0.439374715089798,
  "index" : 3,
  "label" : "3"
}
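The fields in this response appear to be related as follows: "layer5" holds the scores from the network's final layer, "index" is the position of the largest score, "prob" is that score's value, and "label" is the corresponding class name. A quick sanity check of the sample response above:

```python
import json

# Sample response from the `konduit predict` call above.
response = json.loads("""{
  "layer5" : [ [ 1.845163E-5, 1.8346094E-6, 0.31436875, 0.43937472, 2.6101702E-8,
                 0.24587035, 5.9430695E-6, 3.3270408E-4, 6.3698195E-8, 2.708706E-5 ] ],
  "prob" : 0.439374715089798,
  "index" : 3,
  "label" : "3"
}""")

scores = response["layer5"][0]
best = max(range(len(scores)), key=scores.__getitem__)  # argmax over the scores
print(best, response["label"])  # → 3 3
```

For MNIST the class index and the digit label coincide, which is why "index" and "label" match here.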

Stopping the server

We can stop the running server with the konduit stop command.
%%bash
konduit stop dl4j-mnist
You'll see this output once the server with the given id has been terminated.
Stopping konduit server 'dl4j-mnist'
Application 'dl4j-mnist' terminated with status 0