BERT
This notebook illustrates a simple client-server interaction to perform inference on a TensorFlow model using the Python SDK for Konduit Serving.
```python
import numpy as np
import os

from konduit import ParallelInferenceConfig, ServingConfig, TensorFlowConfig, \
    ModelConfigType, TensorDataTypesConfig, ModelStep, InferenceConfiguration
from konduit.server import Server
from konduit.client import Client
from konduit.load import server_from_file, client_from_file
```

Overview
This notebook walks through the following steps, which are pulled together in the sketch after this list:

- Configure the step
- Configure the server
- Start the server
- Configure the client
- Inference
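As a minimal end-to-end sketch of these steps: only the class names below are confirmed by the imports above; the keyword arguments, model path (`bert_mrpc_frozen.pb`), tensor names, port, and `.npy` data files are illustrative assumptions and may differ across Konduit Serving SDK versions and BERT exports.

```python
import numpy as np
from konduit import (ParallelInferenceConfig, ServingConfig, TensorFlowConfig,
                     ModelConfigType, TensorDataTypesConfig, ModelStep,
                     InferenceConfiguration)
from konduit.server import Server
from konduit.client import Client

# Assumed input/output tensor names for a frozen BERT classifier graph.
input_names = ["IteratorGetNext:0", "IteratorGetNext:1", "IteratorGetNext:4"]
output_names = ["loss/Softmax"]

# Configure the step: model location, type, and input tensor dtypes.
tensorflow_config = TensorFlowConfig(
    model_config_type=ModelConfigType(
        model_type="TENSORFLOW",
        model_loading_path="bert_mrpc_frozen.pb",  # assumed model path
    ),
    tensor_data_types_config=TensorDataTypesConfig(
        input_data_types={name: "INT32" for name in input_names}
    ),
)
model_step = ModelStep(
    model_config=tensorflow_config,
    parallel_inference_config=ParallelInferenceConfig(workers=1),
    input_names=input_names,
    output_names=output_names,
)

# Configure the server: HTTP port and wire formats for requests and responses.
serving_config = ServingConfig(
    http_port=1337,
    input_data_format="NUMPY",
    output_data_format="NUMPY",
)
inference_config = InferenceConfiguration(
    serving_config=serving_config, steps=[model_step]
)

# Start the server.
server = Server(inference_config=inference_config)
server.start()

# Configure the client; port and formats must match the server.
client = Client(input_data_format="NUMPY", output_data_format="NUMPY", port=1337)

# Inference: one NumPy array per named input tensor (assumed .npy files).
data_input = {name: np.load(f"../data/bert/{name}.npy") for name in input_names}
prediction = client.predict(data_input)
print(prediction)

server.stop()
```

The imported `server_from_file` and `client_from_file` helpers offer an alternative route: they build equivalent `Server` and `Client` objects from a configuration file instead of inline Python, which keeps the serving configuration out of the notebook code.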