TIM-VX/include/tim/vx
Chen Feiyue 33f3a4f176
Enable float16 bias convolution model runs on NN (#612)
Convert the float16 bias tensor to float32 to satisfy the driver's
requirements for NN convolution

Caution: requires Clang 15.0 or newer

Type: Code Improvement
Issue: bugzilla id:32785 | jira id VIVD-744

Signed-off-by: Feiyue Chen <Feiyue.Chen@verisilicon.com>
2023-06-30 09:41:28 +08:00
ops Enable float16 bias convolution model runs on NN (#612) 2023-06-30 09:41:28 +08:00
platform Support remote platform by gRPC (#561) 2023-03-28 09:51:23 +08:00
builtin_op.h update copyright information 2023-01-20 12:49:48 +08:00
compile_option.h update copyright information 2023-01-20 12:49:48 +08:00
context.h update copyright information 2023-01-20 12:49:48 +08:00
graph.h Enable float16 bias convolution model runs on NN (#612) 2023-06-30 09:41:28 +08:00
operation.h Enable float16 bias convolution model runs on NN (#612) 2023-06-30 09:41:28 +08:00
ops.h optimization for tiny_yolov4 (#591) 2023-05-23 14:28:47 +08:00
tensor.h Reload "==" operator for quantizations of two tensor (#583) 2023-05-10 17:58:30 +09:00
types.h update copyright information 2023-01-20 12:49:48 +08:00