
TIM-VX - Tensor Interface Module


TIM-VX is a software integration module provided by VeriSilicon to facilitate deployment of neural networks on VeriSilicon ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, TensorFlow Lite, MLIR, TVM, and more.

Main Features

  • Over 150 operators with rich format support for both quantized and floating-point tensors
  • Simplified C++ binding APIs for creating Tensors and Operations (a sketch follows this list)
  • Dynamic graph construction with support for shape inference and layout inference
  • Built-in custom layer extensions
  • A set of utility functions for debugging
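
A minimal sketch of the C++ binding in action is shown below: it builds a graph with a single element-wise Add and runs it. Header paths, class names, and exact signatures here are best-effort assumptions; check the headers under include/tim and the samples directory for the authoritative API.

// Minimal sketch (assumed API): create a context and graph, bind an Add op,
// then compile and run it on the NPU or the x86 simulator.
#include <vector>

#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/ops/elementwise.h"

int main() {
  auto context = tim::vx::Context::Create();   // driver/device context
  auto graph = context->CreateGraph();         // empty graph owned by the context

  tim::vx::ShapeType shape({2, 2});
  tim::vx::TensorSpec in_spec(tim::vx::DataType::FLOAT32, shape,
                              tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec out_spec(tim::vx::DataType::FLOAT32, shape,
                               tim::vx::TensorAttribute::OUTPUT);
  auto a = graph->CreateTensor(in_spec);
  auto b = graph->CreateTensor(in_spec);
  auto out = graph->CreateTensor(out_spec);

  auto add = graph->CreateOperation<tim::vx::ops::Add>();
  (*add).BindInputs({a, b}).BindOutputs({out});

  std::vector<float> a_data{1, 2, 3, 4}, b_data{5, 6, 7, 8}, result(4);
  graph->Compile();                                                   // compile for the target
  a->CopyDataToTensor(a_data.data(), a_data.size() * sizeof(float));
  b->CopyDataToTensor(b_data.data(), b_data.size() * sizeof(float));
  graph->Run();
  out->CopyDataFromTensor(result.data());                             // result: {6, 8, 10, 12}
  return 0;
}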

Framework Support

Feel free to raise a GitHub issue if you wish to add TIM-VX support for other frameworks.

Architecture Overview

[Figure: TIM-VX architecture diagram]

Get started

Build and Run

TIM-VX supports building with both Bazel and CMake.


CMake

To build TIM-VX for x86 with the bundled prebuilt SDK:

mkdir host_build
cd host_build
cmake ..
make -j8
make install

All installed files (both headers and *.so) are located in: host_build/install

CMake options:

Option name                | Summary                                                                                        | Default
---------------------------|------------------------------------------------------------------------------------------------|--------
TIM_VX_ENABLE_TEST         | Enable unit test cases for public APIs and ops                                                 | OFF
TIM_VX_ENABLE_LAYOUT_INFER | Build with tensor data layout inference support                                                | ON
TIM_VX_USE_EXTERNAL_OVXLIB | Replace the internal ovxlib with a prebuilt libovxlib library                                  | OFF
OVXLIB_LIB                 | Full path to libovxlib.so (including the .so name); required if TIM_VX_USE_EXTERNAL_OVXLIB=ON | Not set
OVXLIB_INC                 | ovxlib's include path; required if TIM_VX_USE_EXTERNAL_OVXLIB=ON                               | Not set
EXTERNAL_VIV_SDK           | Path to external Vivante OpenVX driver libraries                                               | Not set
TIM_VX_BUILD_EXAMPLES      | Build example applications                                                                     | OFF
TIM_VX_ENABLE_40BIT        | Enable large memory (over 4 GB) support in the NPU driver                                      | OFF
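
For example, a configuration that enables the unit tests and uses an external prebuilt ovxlib could look like the following; the paths are placeholders for your own setup:

cmake .. \
    -DTIM_VX_ENABLE_TEST=ON \
    -DTIM_VX_USE_EXTERNAL_OVXLIB=ON \
    -DOVXLIB_LIB=<path-to>/libovxlib.so \
    -DOVXLIB_INC=<path-to-ovxlib-include>
make -j8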

Run the unit tests:

cd host_build/src/tim

export LD_LIBRARY_PATH=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib:<path to libgtest_main.so>:$LD_LIBRARY_PATH
export VIVANTE_SDK_DIR=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib
export VSIMULATOR_CONFIG=<hardware name, provided by your chip vendor>
# if you want to debug with gdb, please set
export DISABLE_IDE_DEBUG=1
./unit_test

Build with a local googletest source tree

    cd <wksp_root>
    git clone --depth 1 -b release-1.10.0 git@github.com:google/googletest.git

    cd <root_tim_vx>/build/
    cmake ../ -DTIM_VX_ENABLE_TEST=ON -DFETCHCONTENT_SOURCE_DIR_GOOGLETEST=<wksp_root>/googletest <other cmake definitions here>

Build for your EVK board

  1. Prepare a toolchain file following the CMake convention.
  2. Cross-build the low-level driver with the same toolchain separately; the SDK produced by that build is required.
  3. Add -DEXTERNAL_VIV_SDK=<low-level-driver/out/sdk> to the cmake definitions, and remember -DCMAKE_TOOLCHAIN_FILE=<Your_Toolchain_Config>.
  4. Alternatively, for a buildroot toolchain with an external VIV SDK, add: -DCONFIG=BUILDROOT -DCMAKE_SYSROOT={CMAKE_SYSROOT} -DEXTERNAL_VIV_SDK={BUILDROOT_SYSROOT}
  5. Then run make (a combined example is sketched below).
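
Putting these steps together, a typical cross build could look like the following; the toolchain file and driver SDK path are placeholders for your own setup:

mkdir board_build
cd board_build
cmake .. \
    -DCMAKE_TOOLCHAIN_FILE=<Your_Toolchain_Config> \
    -DEXTERNAL_VIV_SDK=<low-level-driver/out/sdk>
make -j8
make install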

Important notice for integration

If you want to build TIM-VX as a static library and link it into your shared library or application, please be careful with the linker options: "-Wl,--whole-archive" is required.

@see samples/lenet/CMakeLists.txt for reference
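
A minimal CMake sketch of that linker usage follows; the target name your_app is a placeholder, and samples/lenet/CMakeLists.txt remains the authoritative reference:

# force the linker to keep every object from the static tim-vx archive
add_executable(your_app main.cc)
target_link_libraries(your_app
    -Wl,--whole-archive tim-vx -Wl,--no-whole-archive)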

Bazel

Install Bazel to get started.

TIM-VX needs to be compiled and linked against the VeriSilicon OpenVX SDK, which provides the related header files and pre-compiled libraries. A default linux-x86_64 SDK containing the PC simulation environment is provided. Platform-specific SDKs can be obtained from the respective SoC vendors.

To build TIM-VX:

bazel build libtim-vx.so

To run sample LeNet:

# set VIVANTE_SDK_DIR for runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux

bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc

Other

To build and run TensorFlow Lite with TIM-VX, please see the corresponding README

To build and run TVM with TIM-VX, please see the TVM README

Reference board

Chip         | Vendor         | References
-------------|----------------|----------------------
i.MX 8M Plus | NXP            | ML Guide, BSP
A311D        | Khadas - VIM3  | A311D datasheet, BSP
S905D3       | Khadas - VIM3L | S905D3, BSP

Support

Create an issue on GitHub or email ML_Support@verisilicon.com