Jia 11480f18f3 Store operation into graph
Because the operation is a shared pointer, in an application the operation is
created as:
    auto op = graph->CreateOperation();

Users naturally assume that the operation has been registered with the graph
and therefore do not keep a local reference to it.

If the graph is run in a different function from the one that created the
operation, the operation will already have been destroyed.

So the operation should be stored in the graph.

Signed-off-by: Jia <juku.jia@verisilicon.com>
2021-04-15 11:21:56 +08:00
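
The lifetime issue described in this commit can be reproduced with a self-contained sketch. The Graph and Operation types below are toy stand-ins, not the real tim::vx classes; they model only the ownership pattern: if the graph does not keep its own shared_ptr to a created operation, the operation dies with the function that created it.

#include <iostream>
#include <memory>
#include <vector>

// Toy stand-ins for the tim::vx types, modeling only the ownership pattern.
struct Operation {
  void Run() { std::cout << "operation executed\n"; }
};

class Graph {
 public:
  std::shared_ptr<Operation> CreateOperation() {
    auto op = std::make_shared<Operation>();
    ops_.push_back(op);  // the fix: the graph stores its own reference
    return op;
  }
  void Run() {
    for (auto& op : ops_) op->Run();
  }

 private:
  std::vector<std::shared_ptr<Operation>> ops_;
};

void BuildGraph(Graph& graph) {
  auto op = graph.CreateOperation();
  // `op` goes out of scope here. Without the copy stored in ops_, the
  // Operation would be destroyed now and Graph::Run would use a dead object.
}

int main() {
  Graph graph;
  BuildGraph(graph);
  graph.Run();  // safe: the graph keeps the operation alive
  return 0;
}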

TIM-VX - Tensor Interface Module for OpenVX

TIM-VX is a software integration module provided by VeriSilicon to facilitate the deployment of neural networks on OpenVX-enabled ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, Tensorflow-Lite, MLIR, TVM and more.

Main Features

  • Over 130 internal operators with rich format support for both quantized and floating-point data
  • Simplified binding API calls to create Tensors and Operations
  • Dynamic graph construction with shape inference support
  • Built-in custom layer extensions
  • A set of utility functions for debugging

Framework Support

Tensorflow-Lite is currently supported through VeriSilicon's vx-delegate fork of TensorFlow; see the build instructions under Get started below.

Roadmap

The TIM-VX roadmap will be published here in the future.

Get started

Build and Run

TIM-VX uses the Bazel build system by default. Install Bazel first to get started.

TIM-VX must be compiled and linked against the VeriSilicon OpenVX SDK, which provides the required header files and pre-compiled libraries. A default linux-x86_64 SDK containing a PC simulation environment is provided; platform-specific SDKs can be obtained from the respective SoC vendors.

To build TIM-VX

bazel build libtim-vx.so
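
Once libtim-vx.so is built, a network is assembled and executed through the C++ binding API. The following minimal sketch assumes the v1.x headers under include/tim/vx and the Relu op from tim/vx/ops/activations.h; header paths and op signatures may vary between releases, so treat it as illustrative rather than canonical.

#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/ops/activations.h"
#include "tim/vx/tensor.h"

#include <vector>

int main() {
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // One 4-element float tensor passed through a Relu activation.
  tim::vx::ShapeType shape({4, 1});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto input = graph->CreateTensor(input_spec);
  auto output = graph->CreateTensor(output_spec);

  // The graph retains its own reference to the operation (see the
  // "Store operation into graph" commit above), so no local copy is needed
  // beyond this function.
  auto relu = graph->CreateOperation<tim::vx::ops::Relu>();
  (*relu).BindInput(input).BindOutput(output);

  if (!graph->Compile()) return -1;

  std::vector<float> in = {-1.0f, 0.0f, 0.5f, 2.0f};
  std::vector<float> out(4);
  input->CopyDataToTensor(in.data(), in.size() * sizeof(float));
  graph->Run();
  output->CopyDataFromTensor(out.data());
  return 0;
}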

To run sample LeNet

# set VIVANTE_SDK_DIR for runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux

bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc

To build and run Tensorflow-Lite delegate on A311D platform

# clone and cross build VeriSilicon tensorflow fork with TFlite delegate support
git clone --single-branch --branch dev/vx-delegate git@github.com:VeriSilicon/tensorflow.git vx-delegate; cd vx-delegate
bazel build --config A311D //tensorflow/lite/tools/benchmark:benchmark_model

# push benchmark_model onto device and run
./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --use_vxdelegate=true