* Fixed the I/O tensor order difference between src_graph and infer_graph

  The graph input/output tensor sequence may change after a graph transformation (layout inference), which makes it difficult to recover the I/O mapping between the original graph and the final graph. Clients such as the Android Support Library create tensors using the original input/output order, which may not match the input order of src_graph, so the data cannot be set up correctly. Solution: decide the order of inputs/outputs when the tensor is created, not when it is bound to an operation, since the binding order can change with each transform.

  Type: Code improvement
  Signed-off-by: Chen Xin <jack.chen@verisilicon.com>

* Fixed maxpoolgrad and maxpoolwithargmax2 cases

  Some tensors were created with the wrong attributes.

  Type: Bug fix
  Signed-off-by: Chen Xin <jack.chen@verisilicon.com>

Co-authored-by: Chen Xin <jack.chen@verisilicon.com>
TIM-VX - Tensor Interface Module
TIM-VX is a software integration module provided by VeriSilicon to facilitate deployment of neural networks on VeriSilicon ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, Tensorflow-Lite, MLIR, TVM and more.
Main Features
- Over 150 operators with rich format support for both quantized and floating point
- Simplified C++ binding API calls to create tensors and operations (see the Guide)
- Dynamic graph construction with support for shape inference and layout inference
- Built-in custom layer extensions
- A set of utility functions for debugging
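As an illustration of the simplified C++ API, the sketch below builds a one-op graph and runs it. It follows the patterns used in the samples directory, but the exact headers, class names, and signatures here are assumptions that may differ between TIM-VX versions; treat it as a sketch, not a verified program.

```cpp
#include <array>

#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/ops/activations.h"

int main() {
  // Create a context and a graph bound to it.
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // Describe a 2x2 float tensor for the input and the output of a Relu op.
  tim::vx::ShapeType shape({2, 2});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto input = graph->CreateTensor(input_spec);
  auto output = graph->CreateTensor(output_spec);

  // Create the operation and bind its I/O tensors.
  auto relu = graph->CreateOperation<tim::vx::ops::Relu>();
  (*relu).BindInput(input).BindOutput(output);

  if (!graph->Compile()) return -1;

  std::array<float, 4> in_data = {-1.0f, 0.0f, 2.0f, -3.0f};
  std::array<float, 4> out_data = {};
  input->CopyDataToTensor(in_data.data(), in_data.size() * sizeof(float));
  if (!graph->Run()) return -1;
  output->CopyDataFromTensor(out_data.data());
  // out_data should now hold the element-wise Relu of in_data.
  return 0;
}
```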
Framework Support
- Tensorflow-Lite (External Delegate)
- Tengine (Official)
- TVM (Fork)
- Paddle-Lite (Official)
- OpenCV (Official)
- MLIR Dialect (In development)
Feel free to raise a GitHub issue if you wish to add TIM-VX support for other frameworks.
Architecture Overview
Technical documents
Get started
Build and Run
TIM-VX supports both Bazel and CMake builds.
cmake
To build TIM-VX for x86 with the prebuilt SDK:

```shell
mkdir host_build
cd host_build
cmake ..
make -j8
make install
```
All installed files (both headers and *.so) are located in host_build/install
cmake options:
| Option name | Summary | Default |
|---|---|---|
| `TIM_VX_ENABLE_TEST` | Enable unit test cases for public APIs and ops | OFF |
| `TIM_VX_ENABLE_LAYOUT_INFER` | Build with tensor data layout inference support | ON |
| `TIM_VX_USE_EXTERNAL_OVXLIB` | Replace the internal ovxlib with a prebuilt libovxlib library | OFF |
| `OVXLIB_LIB` | Full path to libovxlib.so (including the .so name); required if `TIM_VX_USE_EXTERNAL_OVXLIB=ON` | Not set |
| `OVXLIB_INC` | ovxlib's include path; required if `TIM_VX_USE_EXTERNAL_OVXLIB=ON` | Not set |
| `EXTERNAL_VIV_SDK` | Path to external Vivante OpenVX driver libraries | Not set |
| `TIM_VX_BUILD_EXAMPLES` | Build example applications | OFF |
| `TIM_VX_ENABLE_40BIT` | Enable large memory (over 4 GB) support in the NPU driver | OFF |
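As an example, a configuration that enables the unit tests and the example applications while pointing at a vendor SDK could be invoked as follows (the SDK path is a placeholder):

```shell
mkdir -p host_build && cd host_build
cmake .. \
  -DTIM_VX_ENABLE_TEST=ON \
  -DTIM_VX_BUILD_EXAMPLES=ON \
  -DEXTERNAL_VIV_SDK=/path/to/vivante_sdk
make -j8 && make install
```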
Run the unit tests:

```shell
cd host_build/src/tim
export LD_LIBRARY_PATH=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib:<path to libgtest_main.so>:$LD_LIBRARY_PATH
export VIVANTE_SDK_DIR=`pwd`/../../../prebuilt-sdk/x86_64_linux/
export VSIMULATOR_CONFIG=<hardware name, obtained from the chip vendor>
# if you want to debug with gdb, please set
export DISABLE_IDE_DEBUG=1
./unit_test
```
Build with a local Google Test source:

```shell
cd <wksp_root>
git clone --depth 1 -b release-1.10.0 git@github.com:google/googletest.git
cd <root_tim_vx>/build/
cmake ../ -DTIM_VX_ENABLE_TEST=ON -DFETCHCONTENT_SOURCE_DIR_GOOGLETEST=<wksp_root/googletest> <add other cmake definitions here>
```
Build for EVK boards
- Prepare a toolchain file following the CMake standard.
- Make sure to cross-build the low-level driver with the toolchain separately; the SDK from the low-level driver is required.
- Add `-DEXTERNAL_VIV_SDK=<low-level-driver/out/sdk>` to the cmake definitions, and also remember `-DCMAKE_TOOLCHAIN_FILE=<Toolchain_Config>`.
- Or, to use a buildroot toolchain with an external VIV SDK, add: `-DCONFIG=BUILDROOT -DCMAKE_SYSROOT=${CMAKE_SYSROOT} -DEXTERNAL_VIV_SDK=${BUILDROOT_SYSROOT}`
- Then run `make`.
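For reference, a minimal aarch64 toolchain file for the first step might look like the following (the compiler names are examples; use the triplet that matches your board's toolchain):

```cmake
# toolchain-aarch64.cmake -- example CMake toolchain file
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
# Search headers/libraries only in the target sysroot, not on the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

Pass it to cmake with `-DCMAKE_TOOLCHAIN_FILE=toolchain-aarch64.cmake`.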
Important notice for integration
If you want to build TIM-VX as a static library and link it into your shared library or application, please be careful with the linker: `-Wl,--whole-archive` is required.
See samples/lenet/CMakeLists.txt for reference.
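A minimal sketch of the required linker usage, assuming a static library target named `tim-vx` and a hypothetical shared library `my_backend`:

```cmake
# Force the linker to keep every object from the static tim-vx archive,
# so constructors/registrations in otherwise-unreferenced objects survive.
add_library(my_backend SHARED my_backend.cc)
target_link_libraries(my_backend PRIVATE
    -Wl,--whole-archive tim-vx -Wl,--no-whole-archive)
```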
Bazel
Install bazel to get started.
TIM-VX needs to be compiled and linked against VeriSilicon OpenVX SDK which provides related header files and pre-compiled libraries. A default linux-x86_64 SDK is provided which contains the simulation environment on PC. Platform specific SDKs can be obtained from respective SoC vendors.
To build TIM-VX:

```shell
bazel build libtim-vx.so
```
To run the sample LeNet:

```shell
# set VIVANTE_SDK_DIR for the runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux
bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc
```
Other
To build and run Tensorflow-Lite with TIM-VX, please see README
To build and run TVM with TIM-VX, please see TVM README
Reference board
| Chip | Vendor | References | Success Stories |
|---|---|---|---|
| i.MX 8M Plus | NXP | ML Guide, BSP | SageMaker with 8MP |
| A311D | Khadas - VIM3 | A311D datasheet, BSP | Paddle-lite demo |
| S905D3 | Khadas - VIM3L | S905D3 datasheet, BSP | |
Support
Create an issue on GitHub or email ML_Support at verisilicon dot com