Latest commit: `d4a13e18a9` — Minor refinement: use tensor pointer after check (xiang.zhang \<xiang.zhang@verisilicon.com\>, 2021-08-04 11:31:26 +08:00)

| Path | Last commit message | Date |
| --- | --- | --- |
| .github/workflows | Add CMake UnitTest in CI (#66) | 2021-05-25 |
| cmake | Added configuration for Yocto SDK build | 2021-06-16 |
| docs | [NNRT-1111] add memory layout for doc | 2021-06-01 |
| include/tim | Add align_corners support for SpatialTransformer | 2021-08-03 |
| prebuilt-sdk | Correct x86_64 SDK version number to 6.4.6 | 2021-05-22 |
| samples | Add multi thread test | 2021-07-13 |
| src/tim | Minor refinement: use tensor pointer after check | 2021-08-04 |
| toolchains | support build for tensorflow A311D | 2021-02-07 |
| .bazelrc | Add prebuild support for VIPLite | 2021-05-14 |
| .bazelversion | Support build for A311D | 2021-01-29 |
| .clang-format | Add .clang-format | 2021-01-19 |
| .gitignore | Update gitignore | 2021-05-11 |
| Android.mk | Minor cleanup | 2021-05-06 |
| BUILD | add uint8 quantized unit_test for conv2d | 2021-06-07 |
| CMakeLists.txt | Add multi thread test | 2021-07-13 |
| LICENSE | Initial Commit for VERSION 1.1.28 | 2021-01-11 |
| README.md | Update README | 2021-08-03 |
| VERSION | Update version 1.1.32 | 2021-07-13 |
| WORKSPACE | Add prebuild support for VIPLite | 2021-05-14 |


# TIM-VX - Tensor Interface Module for OpenVX

TIM-VX is a software integration module provided by VeriSilicon to facilitate the deployment of neural networks on OpenVX-enabled ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, TensorFlow Lite, MLIR, TVM, and more.

## Main Features

- Over 130 operators, with rich data-format support for both quantized and floating-point models
- A simplified C++ binding API for creating Tensors and Operations
- Dynamic graph construction, with support for shape inference and layout inference
- Built-in custom layer extensions
- A set of utility functions for debugging

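To give a feel for the binding API, here is a minimal sketch that builds and runs a one-operation graph. It is modeled on the general shape of the code under `samples/`; the exact header paths, class names, and signatures are assumptions and may differ between TIM-VX versions.

```cpp
// Hedged sketch of the TIM-VX C++ binding API; verify names against
// the headers under include/tim for your version.
#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/tensor.h"
#include "tim/vx/ops/activations.h"

int main() {
  // A Context owns device resources; a Graph is built from it.
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // Describe the input/output tensors: float32, shape {1, 4}.
  tim::vx::ShapeType shape({1, 4});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto input = graph->CreateTensor(input_spec);
  auto output = graph->CreateTensor(output_spec);

  // Create an operation and bind its tensors.
  auto relu = graph->CreateOperation<tim::vx::ops::Relu>();
  relu->BindInput(input);
  relu->BindOutput(output);

  // Compile once, then feed data and run.
  if (!graph->Compile()) return -1;
  float in[4] = {-1.0f, 0.0f, 1.0f, 2.0f};
  float out[4] = {};
  input->CopyDataToTensor(in, sizeof(in));
  graph->Run();
  output->CopyDataFromTensor(out);
  return 0;
}
```

Running this requires linking against the VeriSilicon OpenVX SDK described below; it will not build standalone.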
## Framework Support

Feel free to raise a GitHub issue if you would like TIM-VX to support other frameworks.

## Get Started

### Build and Run

TIM-VX supports both Bazel and CMake builds. Install Bazel to get started.

TIM-VX must be compiled and linked against the VeriSilicon OpenVX SDK, which provides the required header files and pre-compiled libraries. A default linux-x86_64 SDK containing a PC simulation environment ships with the repository under `prebuilt-sdk/`; platform-specific SDKs can be obtained from the respective SoC vendors.

To build TIM-VX:

```shell
bazel build libtim-vx.so
```
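Since CMake builds are also supported, a conventional out-of-source build should work as well. The exact configuration options are not covered here, so treat the following as a hedged sketch rather than the documented procedure:

```shell
# Sketch of a typical out-of-source CMake build for TIM-VX;
# consult CMakeLists.txt for the actual configurable options.
mkdir -p build && cd build
cmake ..
make -j"$(nproc)"
```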

To run sample LeNet

# set VIVANTE_SDK_DIR for runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux

bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc

To build and run TensorFlow Lite with TIM-VX, please see the corresponding README. To build and run TVM with TIM-VX, please see the TVM documentation.