* Add group parameter for deconv API
Limitation: only support depthwise deconvolution
Signed-off-by: xiang.zhang <xiang.zhang@verisilicon.com>
* Add single channel case and fix build warning
Signed-off-by: xiang.zhang <xiang.zhang@verisilicon.com>
These links are for reference only; the actual implementation
may vary in the dimensions and parameters supported.
Signed-off-by: Kainan Cha <kainan.zha@verisilicon.com>
* Add lenet sample with TIM-LITE
A LeNet sample executable built with TIM-LITE.
Signed-off-by: zhao.xia <zhao.xia@verisilicon.com>
* Update TIM-LITE API
Update handle usage.
Use Execution::Trigger instead of Execution::Exec
Signed-off-by: zhao.xia <zhao.xia@verisilicon.com>
* Update lenet lite case to use new api
Signed-off-by: zhao.xia <zhao.xia@verisilicon.com>
Add layout inference support for space2depth, depth2space, space2batch, batch2space, pad and
reduce.
Signed-off-by: yuenan.li <yuenan.li@verisilicon.com>
Co-authored-by: yuenan.li <yuenan.li@verisilicon.com>
* Properly support tensor handle for both input and output
* Fix UT to use size_in_bytes instead of size in elements
Signed-off-by: Kainan Cha <kainan.zha@verisilicon.com>
This change adds support for building TIM-VX in an
Android AOSP environment.
The instructions below are based on a Khadas VIMS system.
* Add TIM-VX git repository to Android AOSP
# cd vendor/amlogic/common/npu
# git clone git@github.com:VeriSilicon/TIM-VX.git tim-vx
* Include tim-vx/Android.mk to AOSP build
Edit vendor/amlogic/common/npu/Android.mk
+TMP_PATH := $(LOCAL_PATH)
+VIVANTE_SDK_DIR := $(LOCAL_PATH)/service/ovx_inc
+include $(LOCAL_PATH)/tim-vx/Android.mk
+LOCAL_PATH := $(TMP_PATH)
ifeq ($(BOARD_NPU_SERVICE_ENABLE), true)
Note: VIVANTE_SDK_DIR needs to point to the SDK header
include path.
Signed-off-by: Kainan Cha <kainan.zha@verisilicon.com>
RTNE (Round To Nearest Even) is a better rounding policy,
aligning with the implementation of TensorFlow Lite.
Signed-off-by: Kainan Cha <kainan.zha@verisilicon.com>
Because the operation is a shared pointer, in an application it is
created as:
auto op = graph->CreateOperation();
Users naively assume the operation has been registered with the graph
and do not keep a local reference to it.
If the graph is run in a different function from the one that created
the operation, the operation would already have been deleted.
So the graph should store the operation.
Signed-off-by: Jia <juku.jia@verisilicon.com>