Basic
Simple example to demonstrate Konduit-Serving
In this first example, we'll deploy ordinary Python operations as a model on Konduit-Serving. You can send inputs to the server and receive outputs in return. Because the function produces direct, predictable results, it makes the deployment easy to understand.

Viewing directory structure

Let's run bash in a sub-process using the %%bash cell magic and view the files in the current directory that will be used in this demonstration.
%%bash
echo "Current directory $(pwd)" && tree
The following files are present in our simple Python script demo.
Current directory /root/konduit/demos/0-python-simple
.
├── init_script.py
├── python-simple.ipynb
├── python.yaml
└── run_script.py

0 directories, 4 files

Viewing Python script content

The scripts implement a simple add function: init_script.py defines the function, and run_script.py executes it on the incoming input.
%%bash
less init_script.py
You’ll be able to see the following.
def add_function(x, y):
    return x + y
Next, let's browse through the Python script that calls the function defined in init_script.py.
%%bash
less run_script.py
You'll notice the script has only one line of code.
c = add_function(a, b)
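Conceptually, the server loads init_script.py first and then executes run_script.py with the inputs injected as variables. A minimal local sketch of the combined behavior (the example values for a and b are our own, standing in for the inputs the server injects):

```python
# Sketch of what the server effectively runs for each request:
# init_script.py first, then run_script.py with a and b injected.
def add_function(x, y):  # from init_script.py
    return x + y

a, b = 1.0, 2.0          # example inputs; the server supplies these per request
c = add_function(a, b)   # run_script.py
print(c)                 # 3.0
```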

Viewing the main configuration file

The main configuration defines the inputs as a and b and the output as c, matching the variable names used in run_script.py.
%%bash
less python.yaml
The YAML script file is as follows.
---
host: "0.0.0.0"
pipeline:
  steps:
  - '@type': "PYTHON"
    python_config:
      append_type: "BEFORE"
      extra_inputs: {}
      import_code_path: "init_script.py"
      python_code_path: "run_script.py"
      io_inputs:
        a:
          python_type: "float"
          secondary_type: "NONE"
          type: "DOUBLE"
        b:
          python_type: "float"
          secondary_type: "NONE"
          type: "DOUBLE"
      io_outputs:
        c:
          python_type: "float"
          secondary_type: "NONE"
          type: "DOUBLE"
      job_suffix: "konduit_job"
      python_config_type: "CONDA"
      python_path: "1"
      environment_name: "base"
      python_path_resolution: "STATIC"
      python_inputs: {}
      python_outputs: {}
      return_all_inputs: false
      setup_and_run: false
port: 8082
protocol: "HTTP"
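The io_inputs section declares the names and types the server accepts. As an illustration only (the IO_INPUTS mapping and validate helper below are hypothetical, not part of Konduit-Serving), a client can mirror that declaration to catch malformed payloads before sending them:

```python
# Client-side sketch mirroring the io_inputs section of python.yaml:
# a and b are both declared as floating-point values.
IO_INPUTS = {"a": float, "b": float}

def validate(payload: dict) -> dict:
    """Coerce payload values to the declared types; reject unknown keys."""
    unknown = set(payload) - set(IO_INPUTS)
    if unknown:
        raise KeyError(f"unexpected inputs: {unknown}")
    return {name: IO_INPUTS[name](value) for name, value in payload.items()}

print(validate({"a": 1, "b": 2}))  # {'a': 1.0, 'b': 2.0}
```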

Using the configuration to start a server

Now we can use the konduit serve command to start the server in the background with the given files and configuration.
%%bash
konduit serve -rwm --config python.yaml -id server --background
You'll get a message like this.
Starting konduit server...
Expected classpath: /root/konduit/bin/../konduit.jar
INFO: Running command /root/miniconda/jre/bin/java -Dkonduit.logs.file.path=/root/.konduit-serving/command_logs/server.log -Dlogback.configurationFile=/tmp/logback-run_command_13ccd5e27dfe43b1.xml -cp /root/konduit/bin/../konduit.jar ai.konduit.serving.cli.launcher.KonduitServingLauncher run --instances 1 -s inference -c python.yaml -Dserving.id=server
For server status, execute: 'konduit list'
For logs, execute: 'konduit logs server'

Listing the servers

We can list the created servers with the konduit list command.
%%bash
konduit list
The server with ID server is listed below, along with its status.
Listing konduit servers...

 # |   ID   |   TYPE    |     URL      | PID | STATUS
 1 | server | inference | 0.0.0.0:8082 | 421 | started

Viewing logs

Logs for the server with an ID of server can be viewed by running the konduit logs server command.
%%bash
konduit logs server --lines 1000
Log output of the started server:
09:44:17.852 [main] INFO a.k.s.c.l.command.KonduitRunCommand - Processing configuration: /root/konduit/demos/0-python-simple/python.yaml
.
.
.
09:44:19.436 [vert.x-worker-thread-0] INFO a.k.s.v.verticle.InferenceVerticle -

[KONDUIT-SERVING ASCII art banner]

09:44:19.436 [vert.x-worker-thread-0] INFO a.k.s.v.verticle.InferenceVerticle - Pending server start, please wait...
.
.
.
09:44:19.589 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server is listening on host: '0.0.0.0'
09:44:19.589 [vert.x-eventloop-thread-0] INFO a.k.s.v.p.h.v.InferenceVerticleHttp - Inference HTTP server started on port 8082 with 1 pipeline steps

Sending inputs

Now we can send inputs to the server and infer the output.
%%bash
konduit predict server '{"a":1,"b":2}'
The output is the result of the function deployed on the server.
{
  "c" : 3.0
}
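Since the server replies with plain JSON, any client can consume the result. A minimal sketch using Python's standard library, with the response body copied from the output above:

```python
import json

# Parse the JSON body the server returned for the input {"a": 1, "b": 2}.
response_body = '{ "c" : 3.0 }'
result = json.loads(response_body)
print(result["c"])  # 3.0
```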

Stopping the server

Stop the server by giving the ID we want to terminate.
%%bash
konduit stop server
The status of the server will be printed out as below.
Stopping konduit server 'server'
Application 'server' terminated with status 0
As you can see, this example deploys only a simple function in Konduit-Serving. Next, we'll deploy an actual model in Konduit-Serving.