Serve Command

```
$ konduit serve [-ad <additional_dependencies>] [-b] [-cp <classpath>] -c
       <server-config> [-h <host>] [-i <instances>] [-jo <value>] [-p <port>]
       [-p <profile_name>] [-rwm] [-s <type>] [-id <value>]
```
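For example, a minimal launch looks like the following — `config.json` is a hypothetical file name standing in for any valid configuration file:

```shell
# Launch a Konduit server in the foreground
# (config.json is a hypothetical path to a valid JSON configuration)
konduit serve -c config.json
```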

OPTIONS

| Command Flags | Description |
| --- | --- |
| `-ad`, `--addDep` | Additional dependencies to include with the launch. |
| `-b`, `--background` | Runs the process in the background, if set. |
| `-cp`, `--classpath` | Provides an extra classpath to be used for the verticle deployment. |
| `-c`, `--config` | Specifies the configuration to be provided to the verticle. `<server-config>` should either reference a text file containing a valid JSON object representing the configuration, or be a JSON string itself. |
| `-h`, `--host` | Specifies the host name of the Konduit server when the configuration provided is just a pipeline configuration rather than a whole inference configuration. Default is: `localhost`. |
| `-i`, `--instances` | Specifies how many instances of the server will be deployed. Default is: `1`. |
| `-jo`, `--java-opts` | Java Virtual Machine options to pass to the spawned process, such as `"-Xmx1G -Xms256m -XX:MaxPermSize=256m"`. If not set, the `JAVA_OPTS` environment variable is used. |
| `-p`, `--port` | Specifies the port number of the Konduit server when the configuration provided is just a pipeline configuration rather than a whole inference configuration. Default is: `0`. |
| `-p`, `--profileName` | Name of the profile to be used with the server launch. |
| `-rwm`, `--runWithoutManifest` | Does not create the manifest jar file before launching the server, if set. |
| `-s`, `--service` | Service type that needs to be deployed. Default is: `inference`. |
| `-id`, `--serving-id` | Id of the serving process. This will be visible in the `list` command and can be used to call the `predict` and `stop` commands on the running servers. If not given, an 8-character UUID is created automatically. |
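The configuration passed with `-c` is documented in detail under the Configurations section. As a rough, non-authoritative sketch of the shape such a JSON file can take (the field names below, including `@type` and the logging-only pipeline, are assumptions for illustration, not the definitive schema):

```json
{
  "host": "localhost",
  "port": 1337,
  "pipeline": {
    "steps": [
      {
        "@type": "LOGGING",
        "logLevel": "INFO"
      }
    ]
  }
}
```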
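Putting several of the flags together — the values below are illustrative, and the follow-up invocations assume the `list`, `logs` and `stop` commands referenced elsewhere in this documentation:

```shell
# Launch in the background on port 1337 with an explicit serving id
# (config.json and my-server are hypothetical values)
konduit serve -c config.json -p 1337 -id my-server -b

# The serving id can then be used with the other CLI commands
konduit list             # running servers, including my-server
konduit logs my-server   # logs for this serving process
konduit stop my-server   # stop the server
```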
