Configurations
Konduit-Serving supports defining server configurations as JSON or YAML files.
A configuration is required to serve a machine learning or deep learning model in Konduit-Serving. The complete format contains the inference configuration and the pipeline steps that describe in detail how the server deploys and serves the model.
A Konduit-Serving configuration file has two top-level items:
Inference configuration
Pipeline
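In YAML form, both items live in a single file. A minimal skeleton (the values here are illustrative placeholders) might look like this:

```yaml
# Inference configuration: server-level settings
host: "localhost"
port: 9008
# Pipeline: the steps that process data and serve the model
pipeline:
  steps: []
```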
Inference Configuration
The sample below shows an inference configuration, which contains the settings the server runs with on Konduit-Serving. For explanation purposes, the YAML format is used.
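A minimal sketch of such an inference configuration follows; the keys match the list below, but the values shown are placeholders, and exact defaults may differ between Konduit-Serving versions:

```yaml
host: "localhost"                        # host the server binds to
port: 9008                               # port the server listens on
use_ssl: false                           # serve without SSL
protocol: "HTTP"                         # e.g. HTTP, GRPC, MQTT or KAFKA
static_content_root: "static-content"    # folder searched when serving static files
static_content_url: "/static-content"    # URL under which static files are served
static_content_index_page: "/index.html"
custom_endpoints: []                     # HttpEndpoints implementations, if any
```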
As shown above, the inference configuration uses a number of keys. It takes the following arguments:
host: the host of the Konduit-Serving server. Default is 'localhost'.
port: the port number the server listens on.
use_ssl: enables SSL for connection security and data protection. Default is 'false'.
protocol: the protocol used by the server. Default is 'HTTP'.
static_content_root: the root directory used to search for folders when serving static content.
static_content_url: the URL that links to the static content specified in the static_content_root key.
static_content_index_page: the index file name. Default is 'index.html'.
kafka_configuration: configuration for the Kafka message queue when the protocol key is 'KAFKA'.
mqtt_configuration: configuration used when 'MQTT' is the protocol.
custom_endpoints: custom endpoints created by implementing the ai.konduit.serving.endpoint.HttpEndpoints interface.
This configuration can be generated using the CLI, and a configuration file is generally required to deploy any pipeline steps on Konduit-Serving.
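For instance, assuming the CLI's `config` subcommand with `-p` for the pipeline steps and `-o` for the output file (option names may vary by version; `konduit config --help` lists the current ones), a skeleton for the example below could be generated with:

```bash
# Generate a config containing a dl4j step followed by a classifier_output step
konduit config -p dl4j,classifier_output -o config.json
```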
Pipeline
A pipeline consists of steps that define how the server should treat the data and the model. Many steps can be combined in a pipeline to serve a model, including input pre-processing and output post-processing steps. This simple approach makes Konduit-Serving convenient for users of all experience levels. Below is an example of a pipeline that only serves a model and post-processes its output by classifying the product of the model's last layer.
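A hedged sketch of that pipeline is shown here; the field names (`modelUri`, `inputNames`, `labels`, and so on) are illustrative rather than an exhaustive schema:

```yaml
pipeline:
  steps:
    - '@type': "dl4j"                # serve a Deeplearning4j model
      modelUri: "model.zip"          # placeholder path to the saved model
      inputNames: ["input"]
      outputNames: ["output"]
    - '@type': "classifier_output"   # classify the output of the model's last layer
      inputName: "output"
      labels: ["class_0", "class_1"] # placeholder class labels
```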
As many steps as needed can be added, based on the requirements of the endpoint (see the sketch after these lists). Among the steps that can be used in a pipeline are:
Sequence pipeline steps:
crop_grid
crop_fixed_grid
dl4j (used in the above example)
keras
draw_bounding_box
draw_fixed_grid
draw_segmentation
extract_bounding_box
camera_frame_capture
video_frame_capture
image_to_ndarray
logging
ssd_to_bounding_box
samediff
show_image
tensorflow
nd4jtensorflow
python
onnx
classifier_output (used in the above example)
Graph pipeline steps:
pipeline steps
merge step
switch step
any step
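As a sketch of chaining several sequence steps together (again with illustrative field names that may differ by version), an image-classification pipeline might pre-process an image into an NDArray, run a dl4j model, and log the results:

```yaml
pipeline:
  steps:
    - '@type': "image_to_ndarray"   # pre-process: decode the image into an NDArray
      config:
        height: 28
        width: 28
      keys: ["image"]               # input field holding the image
      outputNames: ["input"]
    - '@type': "dl4j"               # run the model
      modelUri: "mnist-model.zip"   # placeholder model path
      inputNames: ["input"]
      outputNames: ["output"]
    - '@type': "logging"            # post-process: log the resulting keys and values
      log: "KEYS_AND_VALUES"
```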
The complete inference configuration file can be written in either JSON or YAML format.