Creating a Sequence Pipeline

To create boilerplate configurations

A Sequence Pipeline treats the data and the Machine Learning or Deep Learning model as a series of steps, from pre-processing through model serving to post-processing of the output. In this example, the konduit config CLI command is used to generate the configuration file for serving models on Konduit-Serving. You can follow this example in your local terminal from any directory.

If deploying the model needs neither pre- nor post-processing, only one step, the deep learning model itself, is needed. This configuration is defined using a single Step to serve a model, and the command for creating the configuration file is as follows.

$ konduit config --pipeline dl4j --output config_dl4j.yaml --yaml

The YAML configuration is as follows.

---
host: "localhost"
port: 0
use_ssl: false
protocol: "HTTP"
static_content_root: "static-content"
static_content_url: "/static-content"
static_content_index_page: "/index.html"
kafka_configuration:
  start_http_server_for_kafka: true
  http_kafka_host: "localhost"
  http_kafka_port: 0
  consumer_topic_name: "inference-in"
  consumer_key_deserializer_class: "io.vertx.kafka.client.serialization.JsonObjectDeserializer"
  consumer_value_deserializer_class: "io.vertx.kafka.client.serialization.JsonObjectDeserializer"
  consumer_group_id: "konduit-serving-consumer-group"
  consumer_auto_offset_reset: "earliest"
  consumer_auto_commit: "true"
  producer_topic_name: "inference-out"
  producer_key_serializer_class: "io.vertx.kafka.client.serialization.JsonObjectSerializer"
  producer_value_serializer_class: "io.vertx.kafka.client.serialization.JsonObjectSerializer"
  producer_acks: "1"
mqtt_configuration: {}
custom_endpoints: []
pipeline:
  steps:
  - '@type': "DEEPLEARNING4J"
    modelUri: "<path_to_model>"
    inputNames:
    - "1"
    - "2"
    outputNames:
    - "11"
    - "22"
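
Before serving, the placeholder values in the generated file need to be replaced. A minimal sketch of the pipeline section with the placeholders filled in, assuming a local DL4J model archive whose input and output layers happen to be named "input" and "output" (the path and layer names below are illustrative, not defaults; use the names your model actually defines):

pipeline:
  steps:
  - '@type': "DEEPLEARNING4J"
    modelUri: "file:///path/to/model.zip"  # hypothetical local model path
    inputNames:
    - "input"    # illustrative input layer name
    outputNames:
    - "output"   # illustrative output layer name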

The Steps included in --pipeline depend on the model's requirements and how the output should be represented. For example, if the model takes an image as input, the image_to_ndarray pre-processing step should be used to convert the image into an array. The table below shows all steps that can be used in a Sequence Pipeline.

Pre-processing Step

  • image_to_ndarray

Model/Python Step

  • dl4j

  • keras

  • tensorflow

  • nd4jtensorflow

  • onnx

  • samediff

  • python

Post-processing Step

  • crop_grid

  • crop_fixed_grid

  • draw_bounding_box

  • draw_fixed_grid

  • draw_segmentation

  • extract_bounding_box

  • camera_frame_capture

  • video_frame_capture

  • ssd_to_bounding_box

  • show_image

  • classifier_output

Logging

  • logging

Here is another example with a series of steps in a Sequence Pipeline (image_to_ndarray, then nd4jtensorflow, then classifier_output). The input image needs to be converted into an n-dimensional array before it is fed into the model, which then produces a classification output. The command looks like the following.

$ konduit config --pipeline image_to_ndarray,nd4jtensorflow,classifier_output --output config.json

Since --yaml is omitted, this command gives the configuration file in JSON with the complete Pipeline Steps.
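
The pipeline section of the generated JSON would look roughly like the following. This is a sketch based on the YAML example above; the exact '@type' values and the options of each step are assumptions here, so rely on the file the CLI actually generates:

{
  "pipeline": {
    "steps": [
      { "@type": "IMAGE_TO_NDARRAY" },
      {
        "@type": "ND4JTENSORFLOW",
        "modelUri": "<path_to_model>",
        "inputNames": [ "1" ],
        "outputNames": [ "11" ]
      },
      { "@type": "CLASSIFIER_OUTPUT" }
    ]
  }
}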

Here is an example that uses almost the same steps. You can find the JSON file at https://github.com/ShamsUlAzeem/konduit-serving-demo/blob/master/demos/4-tensorflow-mnist/tensorflow.json

Every Step in the Pipeline needs to be modified based on the input characteristics, the model's configuration, and how the output should look in the end. Using a configuration file like this allows you to serve the model with Konduit-Serving.
