DL4J

An example of the DL4J framework with CUSTOM endpoints


Including package to the classpath

Before serving the model, let's add the main package to the classpath so that all the necessary Konduit-Serving libraries are loaded into the Jupyter Notebook kernel.

%classpath add jar ../../konduit.jar

The classpath is similar to site-packages in the Python ecosystem: it is where the libraries that your code imports are loaded from.
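
If you want to confirm that the jar was picked up, the BeakerX %classpath magic without arguments should print the current classpath (this is BeakerX behavior, not specific to Konduit-Serving):

%classpath

The konduit.jar entry should appear in the printed list.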

Starting a server

Let's start a server with the id dl4j-mnist, using dl4j.json as the configuration file.

%%bash
konduit serve -id dl4j-mnist -c dl4j.json -rwm -b

You'll see the following message indicating that the server is starting.

Starting konduit server...
Using classpath: /root/konduit/bin/../konduit.jar
INFO: Running command /root/miniconda/jre/bin/java -Dkonduit.logs.file.path=/root/.konduit-serving/command_logs/dl4j-mnist.log -Dlogback.configurationFile=/tmp/logback-run_command_a6000ad26ed94583.xml -jar /root/konduit/bin/../konduit.jar run --instances 1 -s inference -c dl4j.json -Dserving.id=dl4j-mnist
For server status, execute: 'konduit list'
For logs, execute: 'konduit logs dl4j-mnist'

The DL4J model is taken from the dl4j-examples repository: https://github.com/eclipse/deeplearning4j-examples/blob/master/mvn-project-template/src/main/java/org/deeplearning4j/examples/sample/LeNetMNIST.java

Use konduit logs to view the logs of the served model. Here, -l 100 limits the output to the last 100 lines.

%%bash
konduit logs dl4j-mnist -l 100

The log output will be similar to the following.

.
.
.
15:00:54.683 [vert.x-worker-thread-0] INFO  a.k.s.v.verticle.InferenceVerticle - 

####################################################################
#                                                                  #
#    |  /   _ \   \ |  _ \  |  | _ _| __ __|    |  /     |  /      #
#    . <   (   | .  |  |  | |  |   |     |      . <      . <       #
#   _|\_\ \___/ _|\_| ___/ \__/  ___|   _|     _|\_\ _) _|\_\ _)   #
#                                                                  #
####################################################################

15:00:54.683 [vert.x-worker-thread-0] INFO  a.k.s.v.verticle.InferenceVerticle - Pending server start, please wait...
15:00:54.703 [vert.x-eventloop-thread-0] INFO  a.k.s.v.p.h.v.InferenceVerticleHttp - MetricsProvider implementation detected, adding endpoint /metrics
15:00:54.718 [vert.x-eventloop-thread-0] INFO  a.k.s.v.p.h.v.InferenceVerticleHttp - No GPU binaries found. Selecting and scraping only CPU metrics.
15:00:54.861 [vert.x-eventloop-thread-0] INFO  a.k.s.v.verticle.InferenceVerticle - Writing inspection data at '/root/.konduit-serving/servers/1517.data' with configuration: 
.
.
.
15:00:54.862 [vert.x-eventloop-thread-0] INFO  a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server is listening on host: 'localhost'
15:00:54.862 [vert.x-eventloop-thread-0] INFO  a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server started on port 39487 with 4 pipeline steps

We can use the konduit list command to view all active servers.

%%bash
konduit list

Here's an example of the output when servers from previous examples are still running:

Listing konduit servers...

 #   | ID                             | TYPE       | URL                  | PID     | STATUS     
 1   | keras-mnist                    | inference  | localhost:33387      | 31757   | started    
 2   | dl4j-mnist                     | inference  | localhost:35921      | 31893   | started  
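
In a script, you can check whether a specific server is up by filtering this list. A minimal sketch, assuming standard Unix tools are available:

%%bash
# Hypothetical check: exits non-zero if the dl4j-mnist server is not listed
konduit list | grep dl4j-mnist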

Sending an input to the served model

Let's display the test image before feeding it into the model.

%%html
<img src="test-image.jpg"/>

The image above is used as the test input for this deployed model.

Now, let's run a prediction on the test image.

%%bash
konduit predict dl4j-mnist -it multipart "image=@test-image.jpg"

You'll see the following output with the classification label. Here, layer5 contains the model's output probabilities for the ten digit classes, while index and label identify the class with the highest probability.

{
  "layer5" : [ [ 1.845163E-5, 1.8346094E-6, 0.31436875, 0.43937472, 2.6101702E-8, 0.24587035, 5.9430695E-6, 3.3270408E-4, 6.3698195E-8, 2.708706E-5 ] ],
  "prob" : 0.439374715089798,
  "index" : 3,
  "label" : "3"
}
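
Since the server exposes a plain HTTP endpoint, the same prediction can also be requested without the CLI. A minimal sketch using curl, assuming the port reported in the server logs (39487 above; check konduit list for your own run) and the default /predict endpoint:

%%bash
# Hypothetical direct HTTP call; replace 39487 with the port your server reports
curl -F "image=@test-image.jpg" http://localhost:39487/predict

If jq is installed, piping the response through jq -r '.label' extracts just the predicted label.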

Stopping the server

We can stop the running server using the konduit stop command.

%%bash
konduit stop dl4j-mnist

You'll see this output once the server with the given id has terminated.

Stopping konduit server 'dl4j-mnist'
Application 'dl4j-mnist' terminated with status 0
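
To confirm that the server is gone, list the active servers again; the dl4j-mnist entry should no longer appear:

%%bash
konduit list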