Open Neural Network Exchange (ONNX)
This page provides a Java example of performing inference on a model built in Python, using ONNX Runtime, a cross-platform, high-performance scoring engine for machine learning models.
```java
import ai.konduit.serving.InferenceConfiguration;
import ai.konduit.serving.config.ServingConfig;
import ai.konduit.serving.configprovider.KonduitServingMain;
import ai.konduit.serving.model.PythonConfig;
import ai.konduit.serving.pipeline.step.ImageLoadingStep;
import ai.konduit.serving.pipeline.step.PythonStep;
import com.mashape.unirest.http.Unirest;
import com.mashape.unirest.http.exceptions.UnirestException;
import org.apache.commons.io.FileUtils;
import org.datavec.python.PythonVariables;
import org.nd4j.linalg.io.ClassPathResource;
```

Python script with PyTorch and ONNX Runtime
Configure the step
Defining a PythonConfig
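A PythonConfig could be built along the following lines. This is a sketch, not the page's exact code: the script path (`scripts/onnx_inference.py`), the input/output variable names, and the builder method names (`pythonCodePath`, `pythonInput`, `pythonOutput`) are assumptions inferred from the imports above.

```java
import ai.konduit.serving.model.PythonConfig;
import org.datavec.python.PythonVariables;
import org.nd4j.linalg.io.ClassPathResource;

// Locate the Python script on the classpath.
// "scripts/onnx_inference.py" is a hypothetical path used for illustration.
String pythonCodePath = new ClassPathResource("scripts/onnx_inference.py")
        .getFile().getAbsolutePath();

// Assumed builder methods: the step runs the script, feeding an NDArray in
// ("image") and reading an NDArray out ("boxes").
PythonConfig pythonConfig = PythonConfig.builder()
        .pythonCodePath(pythonCodePath)
        .pythonInput("image", PythonVariables.Type.NDARRAY.name())
        .pythonOutput("boxes", PythonVariables.Type.NDARRAY.name())
        .build();
```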
Define a pipeline step with the PythonStep class
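The configuration is then wrapped as a pipeline step. A minimal sketch, assuming a PythonConfig instance named `pythonConfig` has already been built and that PythonStep exposes a single-argument `step(...)` method (an assumption):

```java
import ai.konduit.serving.pipeline.step.PythonStep;

// Wrap the PythonConfig as a pipeline step; the step(...) signature
// is an assumption based on the class's typical usage.
PythonStep onnxStep = new PythonStep().step(pythonConfig);
```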
Define a pipeline step with the ImageLoadingStep class
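An image-loading step converts the incoming image into the NDArray the Python step consumes. A hedged sketch: the builder method names, the layout strings, and the 240×320×3 dimensions are illustrative assumptions, not values confirmed by this page.

```java
import ai.konduit.serving.pipeline.step.ImageLoadingStep;

// Illustrative image-loading step: layouts and dimensions are assumptions.
ImageLoadingStep imageLoadingStep = ImageLoadingStep.builder()
        .imageProcessingInitialLayout("NCHW")
        .imageProcessingRequiredLayout("NHWC")
        .inputName("default")
        .dimensionsConfig("default", new Long[]{240L, 320L, 3L})
        .build();
```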
Configure the server
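The steps are combined into an InferenceConfiguration served over HTTP. This sketch assumes previously constructed steps named `imageLoadingStep` and `onnxStep`; the port, the `toJson()` serialization, and the `--configPath` flag are assumptions for illustration.

```java
import ai.konduit.serving.InferenceConfiguration;
import ai.konduit.serving.config.ServingConfig;
import ai.konduit.serving.configprovider.KonduitServingMain;
import org.apache.commons.io.FileUtils;
import java.io.File;
import java.nio.charset.Charset;

// Serve on an assumed port.
ServingConfig servingConfig = ServingConfig.builder()
        .httpPort(3000)
        .build();

// Chain the image-loading step and the Python (ONNX) step.
InferenceConfiguration inferenceConfiguration = InferenceConfiguration.builder()
        .servingConfig(servingConfig)
        .step(imageLoadingStep)
        .step(onnxStep)
        .build();

// One possible launch path: write the configuration to a JSON file and
// hand it to KonduitServingMain (flag name is an assumption).
File configFile = new File("config.json");
FileUtils.write(configFile, inferenceConfiguration.toJson(), Charset.defaultCharset());
KonduitServingMain.main(new String[]{"--configPath", configFile.getAbsolutePath()});
```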
Inference
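With the server running, a client can POST an image using Unirest (already imported above). The endpoint path `/raw/image`, the field name `default`, the port, and the image filename are assumptions used for illustration:

```java
import com.mashape.unirest.http.HttpResponse;
import com.mashape.unirest.http.Unirest;
import com.mashape.unirest.http.exceptions.UnirestException;
import java.io.File;

// POST an image file to the assumed endpoint of the local server.
// Unirest.post(...).field(...).asString() may throw UnirestException.
HttpResponse<String> response = Unirest.post("http://localhost:3000/raw/image")
        .field("default", new File("test-image.jpg"))
        .asString();
```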
Confirm the output
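To verify the round trip, inspect the response from the inference request above (assuming the `HttpResponse<String> response` from the previous sketch): a successful request should return HTTP status 200, and the body should contain the model's output.

```java
// Print status and body of the assumed response object for inspection.
System.out.println(response.getStatus());
System.out.println(response.getBody());
```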