# TIM-VX - Tensor Interface Module

![Bazel.VSim.X86.UnitTest](https://github.com/VeriSilicon/TIM-VX/actions/workflows/bazel_x86_vsim_unit_test.yml/badge.svg) ![CMake.VSim.X86.UnitTest](https://github.com/VeriSilicon/TIM-VX/actions/workflows/cmake_x86_vsim_unit_test.yml/badge.svg)

TIM-VX is a software integration module provided by VeriSilicon to facilitate the deployment of neural networks on VeriSilicon ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, Tensorflow-Lite, MLIR, TVM, and more.

## Main Features

- Over [150 operators](https://github.com/VeriSilicon/TIM-VX/blob/main/src/tim/vx/ops/README.md) with rich format support for both quantized and floating-point data
- Simplified C++ binding API calls to create tensors and operations ([Programming Guide](https://github.com/VeriSilicon/TIM-VX/blob/main/docs/Programming_Guide.md))
- Dynamic graph construction with support for shape inference and layout inference
- Built-in custom layer extensions
- A set of utility functions for debugging

## Framework Support

- [Tensorflow-Lite](https://github.com/VeriSilicon/tflite-vx-delegate) (External Delegate)
- [Tengine](https://github.com/OAID/Tengine) (Official)
- [TVM](https://github.com/VeriSilicon/tvm) (Fork)
- MLIR Dialect (In development)

Feel free to raise a GitHub issue if you wish to add TIM-VX support for other frameworks.

## Architecture Overview

![TIM-VX Architecture](docs/image/timvx_overview.svg)

# Get started

## Build and Run

TIM-VX supports both [Bazel](https://bazel.build) and CMake.

### CMake

To build TIM-VX for x86 with the bundled prebuilt SDK:

```shell
mkdir host_build
cd host_build
cmake ..
make -j8
make install
```

All installed files (headers and `*.so` libraries) are located in `host_build/install`.

CMake options:

- `CONFIG`: set the target platform, e.g. `A311D`, `S905D3`, `vim3_android`, `YOCTO`. Default is `X86_64_linux`.
- `TIM_VX_ENABLE_TEST`: build the unit tests. Default is `OFF`.
- `TIM_VX_USE_EXTERNAL_OVXLIB`: use an external OVXLIB. Default is `OFF`.
- `EXTERNAL_VIV_SDK`: use external OpenVX driver libraries. Default is `OFF`.

To run the unit tests:

```shell
cd host_build/src/tim
export LD_LIBRARY_PATH=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib:$LD_LIBRARY_PATH
export VIVANTE_SDK_DIR=`pwd`/../../../prebuilt-sdk/x86_64_linux/lib
export VSIMULATOR_CONFIG=
# if you want to debug with gdb, please set
export DISABLE_IDE_DEBUG=1
./unit_test
```

#### Build with a local googletest source

```shell
cd <tim-vx-root>
git clone --depth 1 -b release-1.10.0 git@github.com:google/googletest.git
cd <tim-vx-root>/build/
cmake ../ -DTIM_VX_ENABLE_TEST=ON -DFETCHCONTENT_SOURCE_DIR_GOOGLETEST=<path-to-googletest>
```

#### Build for your EVK board

1. Prepare a toolchain file following the CMake standard.
2. Cross-build the low-level driver with that toolchain separately; the SDK produced by the low-level driver build is required.
3. Add `-DEXTERNAL_VIV_SDK=<path-to-driver-sdk>` to the CMake definitions, and also pass `-DCMAKE_TOOLCHAIN_FILE=<path-to-toolchain-file>`.
4. Run `make` (a combined invocation is sketched below).
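As a rough illustration of the steps above, the following sketch combines them into one invocation. It is not a shipped recipe: the `CONFIG` value and all paths are placeholders that must be replaced with your own toolchain file and the SDK produced by your low-level driver build.

```shell
# Sketch only: CONFIG, the toolchain file, and the driver SDK path are examples.
mkdir board_build
cd board_build
cmake .. \
  -DCONFIG=A311D \
  -DCMAKE_TOOLCHAIN_FILE=/path/to/your-toolchain.cmake \
  -DEXTERNAL_VIV_SDK=/path/to/low-level-driver-sdk
make -j8
make install
```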
### Bazel

[Install Bazel](https://docs.bazel.build/versions/master/install.html) to get started.

TIM-VX needs to be compiled and linked against the VeriSilicon OpenVX SDK, which provides the related header files and pre-compiled libraries. A default linux-x86_64 SDK is provided, which contains the simulation environment for the PC. Platform-specific SDKs can be obtained from the respective SoC vendors.

To build TIM-VX:

```shell
bazel build libtim-vx.so
```

To run the LeNet sample:

```shell
# set VIVANTE_SDK_DIR for the runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux
bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc
```

## Other

To build and run Tensorflow-Lite with TIM-VX, please see the [tflite-vx-delegate README](https://github.com/VeriSilicon/tflite-vx-delegate#readme).

To build and run TVM with TIM-VX, please see the [TVM README](https://github.com/VeriSilicon/tvm/blob/vsi_npu/README.VSI.md).

# Reference board

Chip | Vendor | References
:------ | :------ | :------
i.MX 8M Plus | NXP | [ML Guide](https://www.nxp.com.cn/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf), [BSP](https://www.nxp.com/design/software/embedded-software/i-mx-software/embedded-linux-for-i-mx-applications-processors:IMXLINUX?tab=Design_Tools_Tab)

# Support

Create an issue on GitHub, or email ML_Support@verisilicon.com.
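# Appendix: C++ API sketch

The C++ binding API listed under Main Features builds and runs graphs roughly as follows. This is a minimal illustrative sketch, not code taken from this repository: the header paths, enum names, and the `Relu` operation used here are assumptions based on the [Programming Guide](https://github.com/VeriSilicon/TIM-VX/blob/main/docs/Programming_Guide.md); consult the guide and the LeNet sample above for authoritative usage.

```cpp
// Minimal sketch of the TIM-VX C++ binding; header and type names are assumed.
#include <vector>

#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/tensor.h"
#include "tim/vx/ops/activations.h"

int main() {
  // Create a context and a graph on it.
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // Describe a 1x2x2x1 float tensor used for both input and output.
  tim::vx::ShapeType shape({1, 2, 2, 1});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto input = graph->CreateTensor(input_spec);
  auto output = graph->CreateTensor(output_spec);

  // Create a Relu operation and bind the tensors to it.
  auto relu = graph->CreateOperation<tim::vx::ops::Relu>();
  (*relu).BindInputs({input}).BindOutputs({output});

  // Compile the graph, feed input data, run, and read the result back.
  if (!graph->Compile()) return -1;
  std::vector<float> in_data = {-1.0f, 0.0f, 1.0f, 2.0f};
  std::vector<float> out_data(in_data.size());
  input->CopyDataToTensor(in_data.data(), in_data.size() * sizeof(float));
  if (!graph->Run()) return -1;
  output->CopyDataFromTensor(out_data.data());
  return 0;
}
```

The same Context/Graph/Tensor/Operation flow underlies the LeNet sample built with Bazel above.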