TIM-VX - Tensor Interface Module


TIM-VX is a software integration module provided by VeriSilicon to facilitate the deployment of neural networks on VeriSilicon ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, TensorFlow Lite, MLIR, TVM, and more.

Main Features

  • Over 150 operators, with rich format support for both quantized and floating-point tensors
  • A simplified C++ binding API for creating Tensors and Operations
  • Dynamic graph construction with support for shape inference and layout inference
  • Built-in custom layer extensions
  • A set of utility functions for debugging
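As an illustration of the C++ binding API, the sketch below builds a minimal graph with a single element-wise Add operation. It follows the style of the TIM-VX samples, but treat it as a sketch rather than a complete program: it assumes the TIM-VX headers and a VeriSilicon OpenVX SDK are available at build time.

```cpp
#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/tensor.h"
#include "tim/vx/ops/elementwise.h"

int main() {
  // Create a context and a graph on it.
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // Two float input tensors and one output tensor, all of shape {2}.
  tim::vx::ShapeType shape({2});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto a = graph->CreateTensor(input_spec);
  auto b = graph->CreateTensor(input_spec);
  auto out = graph->CreateTensor(output_spec);

  // Element-wise Add: out = a + b.
  auto add = graph->CreateOperation<tim::vx::ops::Add>();
  (*add).BindInputs({a, b}).BindOutputs({out});

  // Compile the graph and run it on the device (or the x86 simulator).
  if (!graph->Compile() || !graph->Run()) return 1;
  return 0;
}
```

Input data would be copied in via the tensors' CopyDataToTensor/CopyDataFromTensor helpers before and after Run; see the samples directory for complete examples.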

Framework Support

Feel free to raise a GitHub issue if you wish to add TIM-VX support for other frameworks.

Architecture Overview

TIM-VX Architecture

Get started

Build and Run

TIM-VX supports both Bazel and CMake builds.

CMake

To build TIM-VX:

mkdir host_build
cd host_build
cmake ..
make -j8
make install

All installed files (headers and *.so libraries) are located in host_build/install.

CMake options:

CONFIG: Set the target platform, e.g. A311D, S905D3, vim3_android, YOCTO. Default is X86_64_linux.

TIM_VX_ENABLE_TEST: Build the unit tests. Default is OFF.

TIM_VX_USE_EXTERNAL_OVXLIB: Use an external OVXLIB. Default is OFF.

EXTERNAL_VIV_SDK: Use external VX driver libraries. Default is OFF.
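As an example, a configuration combining these options might look like the following command-line fragment (the chosen platform and option values are illustrative; substitute your own):

mkdir host_build
cd host_build
# Target an A311D board and also build the unit tests.
cmake -DCONFIG=A311D -DTIM_VX_ENABLE_TEST=ON ..
make -j8
make install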

To run the unit tests:

cd host_build/src/tim
export LD_LIBRARY_PATH=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib:$LD_LIBRARY_PATH
./unit_test

Bazel

Install Bazel to get started.

TIM-VX needs to be compiled and linked against a VeriSilicon OpenVX SDK, which provides the required header files and pre-compiled libraries. A default linux-x86_64 SDK, containing a simulation environment for PC, is provided. Platform-specific SDKs can be obtained from the respective SoC vendors.

To build TIM-VX:

bazel build libtim-vx.so

To run sample LeNet:

# set VIVANTE_SDK_DIR for runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux

bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc

Other

To build and run TensorFlow Lite with TIM-VX, please see its README

To build and run TVM with TIM-VX, please see TVM README

Reference board

Chip          Vendor   References
i.MX 8M Plus  NXP      download BSP

Support

Create an issue on GitHub or email ML_Support@verisilicon.com