TIM-VX - Tensor Interface Module

TIM-VX is a software integration module provided by VeriSilicon to facilitate the deployment of neural networks on VeriSilicon ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, Tensorflow-Lite, MLIR, TVM and more.

Main Features

  • Over 150 operators with rich format support for both quantized and floating-point data
  • Simplified C++ binding API calls to create tensors and operations
  • Dynamic graph construction with support for shape inference and layout inference
  • Built-in custom layer extensions
  • A set of utility functions for debugging
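
The C++ binding API mentioned above can be sketched as follows. This is an illustrative sketch only, modeled on the patterns used in the samples directory: the exact header paths, the Relu operation, and the availability of an NPU driver or the x86 simulator at runtime are assumptions, so treat it as a shape of the API rather than a drop-in program.

```cpp
// Sketch only: assumes the TIM-VX headers and a driver/simulator at runtime.
#include <array>
#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/ops/activations.h"

int main() {
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // A 2x2 float tensor as both graph input and output of a Relu op.
  tim::vx::ShapeType shape({2, 2});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto input = graph->CreateTensor(input_spec);
  auto output = graph->CreateTensor(output_spec);

  auto relu = graph->CreateOperation<tim::vx::ops::Relu>();
  (*relu).BindInput(input).BindOutput(output);

  if (!graph->Compile()) return -1;

  std::array<float, 4> in_data = {-1.0f, 0.0f, 2.0f, -3.0f};
  std::array<float, 4> out_data = {};
  input->CopyDataToTensor(in_data.data(), in_data.size() * sizeof(float));
  if (!graph->Run()) return -1;
  output->CopyDataFromTensor(out_data.data());
  return 0;
}
```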

Framework Support

Feel free to raise a GitHub issue if you wish to add TIM-VX support for other frameworks.

Architecture Overview

TIM-VX Architecture

Technical documents

Get started

Build and Run

TIM-VX supports both Bazel and CMake builds.


cmake

To build TIM-VX for x86 with the prebuilt SDK:

mkdir host_build
cd host_build
cmake ..
make -j8
make install

All installed files (both headers and *.so libraries) are located in host_build/install.

cmake options:

| Option name | Summary | Default |
|---|---|---|
| TIM_VX_ENABLE_TEST | Enable unit test cases for public APIs and ops | OFF |
| TIM_VX_ENABLE_LAYOUT_INFER | Build with tensor data layout inference support | ON |
| TIM_VX_USE_EXTERNAL_OVXLIB | Replace the internal ovxlib with a prebuilt libovxlib library | OFF |
| OVXLIB_LIB | Full path to libovxlib.so (including the .so name); required if TIM_VX_USE_EXTERNAL_OVXLIB=ON | Not set |
| OVXLIB_INC | ovxlib's include path; required if TIM_VX_USE_EXTERNAL_OVXLIB=ON | Not set |
| EXTERNAL_VIV_SDK | Path to external Vivante OpenVX driver libraries | Not set |
| TIM_VX_BUILD_EXAMPLES | Build example applications | OFF |
| TIM_VX_ENABLE_40BIT | Enable large-memory (over 4 GB) support in the NPU driver | OFF |
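
For example, a configure-and-build run that enables the unit tests and the example applications could look like the following (an illustrative invocation only; pick whichever options from the table apply to your setup):

```shell
cd host_build
cmake .. -DTIM_VX_ENABLE_TEST=ON -DTIM_VX_BUILD_EXAMPLES=ON
make -j8
make install
```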

Run unit test:

cd host_build/src/tim

export LD_LIBRARY_PATH=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib:<path to libgtest_main.so>:$LD_LIBRARY_PATH
export VIVANTE_SDK_DIR=`pwd`/../../../prebuilt-sdk/x86_64_linux/
export VSIMULATOR_CONFIG=<hardware name, obtained from your chip vendor>
# if you want to debug with gdb, please set
export DISABLE_IDE_DEBUG=1
./unit_test

Build with a local Google Test source

    cd <wksp_root>
    git clone --depth 1 -b release-1.10.0 git@github.com:google/googletest.git

    cd <root_tim_vx>/build/
    cmake ../ -DTIM_VX_ENABLE_TEST=ON -DFETCHCONTENT_SOURCE_DIR_GOOGLETEST=<wksp_root/googletest> <add other cmake define here>

Build for EVK boards

  1. Prepare a toolchain file following the CMake standard.
  2. Cross-build the low-level driver with the toolchain separately; the SDK produced by that build is required.
  3. Add -DEXTERNAL_VIV_SDK=<low-level-driver/out/sdk> to the cmake definitions, and remember -DCMAKE_TOOLCHAIN_FILE=<Toolchain_Config>.
  4. Or, for a buildroot toolchain with an external VIV SDK, add:
    -DCONFIG=BUILDROOT -DCMAKE_SYSROOT=${CMAKE_SYSROOT} -DEXTERNAL_VIV_SDK=${BUILDROOT_SYSROOT}
  5. Then run make.
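
The toolchain file in step 1 can be very small. The sketch below shows a typical shape for a 64-bit ARM board; the compiler triple and the find-root settings are placeholders that depend on your cross toolchain, not values prescribed by TIM-VX:

```cmake
# Minimal CMake toolchain file sketch; triple/paths are placeholders.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
# Search headers/libraries only in the target sysroot, never the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```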

Important notice for integration

If you want to build tim-vx as a static library and link it into your shared library or application, please be careful with the linker options: "-Wl,--whole-archive" is required.

@see samples/lenet/CMakeLists.txt for reference
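
In CMake terms, the whole-archive linkage described above can be sketched as follows (the target names here are hypothetical; samples/lenet/CMakeLists.txt shows the project's actual usage):

```cmake
# Hypothetical targets: a static tim-vx archive linked into your shared lib.
add_library(my_backend SHARED backend.cc)
target_link_libraries(my_backend PRIVATE
    -Wl,--whole-archive tim-vx -Wl,--no-whole-archive)
```

Without --whole-archive, the linker may drop archive members that nothing references directly, so wrapping the static tim-vx between the two flags keeps the full library in the final binary.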

Bazel

Install Bazel to get started.

TIM-VX needs to be compiled and linked against VeriSilicon OpenVX SDK which provides related header files and pre-compiled libraries. A default linux-x86_64 SDK is provided which contains the simulation environment on PC. Platform specific SDKs can be obtained from respective SoC vendors.

To build TIM-VX:

bazel build libtim-vx.so

To run sample LeNet:

# set VIVANTE_SDK_DIR for runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux

bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc

Other

To build and run Tensorflow-Lite with TIM-VX, please see README

To build and run TVM with TIM-VX, please see TVM README

Reference board

| Chip | Vendor | References | Success Stories |
|---|---|---|---|
| i.MX 8M Plus | NXP | ML Guide, BSP | SageMaker with 8MP |
| A311D | Khadas - VIM3 | A311D datasheet, BSP | Paddle-lite demo |
| S905D3 | Khadas - VIM3L | S905D3, BSP | |

Support

Create an issue on GitHub or email ML_Support at verisilicon dot com.