Server

A simple example of deploying a server with Konduit-Serving

In this example, we'll deploy a server with Konduit-Serving.

  • First, let's create a configuration for the server. This one contains a single logging step that echoes incoming data:

// Create an inference configuration with a single logging pipeline step
InferenceConfiguration inferenceConfiguration = new InferenceConfiguration();
inferenceConfiguration.pipeline(
        SequencePipeline
                .builder()
                .add(new LoggingStep().log(LoggingStep.Log.KEYS_AND_VALUES)) // Log both keys and values of incoming data
                .build()
);
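For reference, a builder like the one above serializes to a JSON (or YAML) configuration that can also be passed to the CLI. The sketch below is an assumption about the serialized shape; the exact field names and `@type` values may differ between versions, so verify against a configuration dumped from your own build:

```json
{
  "pipeline": {
    "steps": [
      {
        "@type": "LOGGING",
        "log": "KEYS_AND_VALUES"
      }
    ]
  }
}
```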
  • Next, let's deploy the server with the configuration created above. A successful deployment reports the server's port number and deployment ID:

DeployKonduitServing.deploy(
        new VertxOptions(), // Default Vert.x options
        new DeploymentOptions(), // Default deployment options
        inferenceConfiguration, // Inference configuration with the logging step
        handler -> { // This block is called when the server finishes deployment
            if (handler.succeeded()) { // If the server is successfully running
                // Get the result of the deployment
                InferenceDeploymentResult inferenceDeploymentResult = handler.result();
                int runningPort = inferenceDeploymentResult.getActualPort();
                String deploymentId = inferenceDeploymentResult.getDeploymentId();

                System.out.format("The server is running on port %s with deployment id of %s%n",
                        runningPort, deploymentId);

                try {
                    // Send a test request to the running server
                    String result = Unirest.post(String.format("http://localhost:%s/predict", runningPort))
                            .header("Content-Type", "application/json")
                            .header("Accept", "application/json")
                            .body(new JSONObject().put("input_key", "input_value"))
                            .asString().getBody();

                    System.out.format("Result from server : %s%n", result);

                    System.exit(0);
                } catch (UnirestException e) {
                    e.printStackTrace();

                    System.exit(1);
                }
            } else { // If the server failed to start
                System.out.println(handler.cause().getMessage());
                System.exit(1);
            }
        });

Once the server is successfully deployed, you'll see output similar to this:

The server is running on port 37663 with deployment id of 59d5d475-be83-4348-8983-4d3e7328e71d
Result from server : {
  "input_key" : "input_value"
}

Process finished with exit code 0
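The example above uses Unirest for the HTTP request. If you prefer to avoid the extra dependency, the same `/predict` call can be made with the JDK's built-in `HttpClient` (Java 11+). This is a sketch: it only builds and prints the request, with the actual send left commented out so it can be tried without a live server (the hard-coded port is just the one from the sample output; use the port your deployment reports).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PredictRequest {
    public static void main(String[] args) throws Exception {
        int runningPort = 37663; // Replace with the port reported by your deployment

        // Build the same POST request the Unirest example sends
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + runningPort + "/predict"))
                .header("Content-Type", "application/json")
                .header("Accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"input_key\":\"input_value\"}"))
                .build();

        System.out.println(request.method() + " " + request.uri());

        // Against a running server, send the request like this:
        // HttpResponse<String> response = HttpClient.newHttpClient()
        //         .send(request, HttpResponse.BodyHandlers.ofString());
        // System.out.println(response.body());
    }
}
```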

Last updated 4 years ago