Commit Graph

132 Commits

Author SHA1 Message Date
Doru Bercea a42fdd08f3 Fix Gemm translation to ONNX dialect. 2020-01-15 14:11:32 -05:00
Doru Bercea da0e9b01b1 Fix 1- and 2-dimensional cases. Add test for 1- and 2-dimensional combinations. 2020-01-14 10:47:24 -05:00
Doru Bercea 642f77abed Add additional dynamic dimension. 2020-01-14 10:47:24 -05:00
Doru Bercea 95ebf3e23a Add test for multiplying stacks of matrices. 2020-01-14 10:47:24 -05:00
Doru Bercea ae966cdee9 Add tests for combinations of matrices and stacks of matrices. 2020-01-14 10:47:24 -05:00
Doru Bercea a5f1d39c20 Add tests for combinations of matrices and stacks of matrices. 2020-01-14 10:47:24 -05:00
Doru Bercea 6478c88cdc Add test for the all-one-dimensional case. 2020-01-14 10:47:24 -05:00
Doru Bercea 151f4f8c44 Add the default shape inference for the transposition operation. 2020-01-09 13:50:38 -05:00
Tung D. Le 44ec333dfa Update test cases 2020-01-08 13:11:57 +09:00
Tung D. Le 3d4ad52011 Rewrite tanh using TanhOp, add log, cos 2020-01-08 12:11:21 +09:00
Tian Jin 38e7d2d068 Update LLVM_SRC, LLVM_BUILD env vars to LLVM_PROJ_SRC, LLVM_PROJ_BUILD since MLIR is now parallel to LLVM in the llvm-project repository. 2020-01-06 15:59:19 -05:00
Tian Jin 922a40962c FE -> ONNF 2019-12-22 23:53:14 -05:00
Tian Jin 685bf23b40 Enable ONNX Backend Test (#1)
* wip, commit before merging with upstream

* organize API, return wrapped output

* enable onnx backend test

* undo unintentional commit

* fix krnl ops tablegen

* format krnl ops

* reorder fillDynMemRefWithMemRef to be after fillPtrToMemRefWithDynMemRef, better comments

* more onnx backend tests

* ensure that test names refer to existing tests

* improve code readability by shortening type names

* nit

* restore unintentional changes

* more nits

* fix ; -> :

* split runtime implementation into header and body file, add support for data types

* comment on the onnx backend test

* make the comments read better

* do not dump when lowering
2019-12-22 23:14:57 -05:00
TUNG LEDUC 06a968d4a1 [MLIR] Add broadcasting support for element-wise operations (#398)
* Add broadcasting support for elementwise operations

* Remove MLIRDialect from MLIRWholeArchiveLibs

* Rewrite getLoopIVsForBroadcasting

* Compute dimensions for allocating result memory

* Compute dimensions for allocating result memory (revised)

* Use static dimension for element-wise operation testcases

* Add a test for addition with broadcasting

* Missed Traits.h when merging

* Revise

* Update SharedWork.md

* Broadcasting for variadic operations

* Edit comments

* Update SharedWork.md

* Reorganize the code

* Add CHECK-LABEL for test_add_with_broadcasting
2019-12-21 02:08:27 -05:00
Haruki Imai 7e3f96e642 [MLIR] Add support for Reciprocal (#397)
* Added support for Reciprocal

* Fixed format
2019-12-21 02:07:44 -05:00
Tian Jin 3e7b8465e9 clean up 2019-12-21 02:07:24 -05:00
GHEORGHE-TEOD BERCEA e81a7654f9 [MLIR] Add support for reshape (#390)
* Add reshape op handling.

* Lower reshape to KRNL dialect.

* Add comments.

* Propagate reshape to KRNL IR.

* Lower KRNL reshape to affine and standard ops level dialects.

* Add lowering of reshape operation to Krnl and LLVM Dialects.

* Add test for LLVM IR dialect output for reshape.

* Fix rebase.

* Fix test variable.

* Emit errors during reshape shape inference. Address other reviewer comments.
2019-12-21 02:06:14 -05:00
TUNG LEDUC 5ed79083d5 [MLIR] Add support for Max, Min, Sum, Elu, Selu, LeakyRelu, HardSigmoid (#395)
* Lower ONNXSumOp

* Add inferShapes() and test cases

* Load the first operand to the result

* Update SharingWork.md

* Update SharingWork.md

* Update SharingWork.md

* Add support for Max, Min

* Pass operation instead of location to mapToLowerScalarOp

* Add support for Elu, Selu, LeakyRelu, HardSigmoid

* Add test cases

* Update SharingWork.md

* Rewrite the lowering of variadic ops and reuse it for binary ops

* Use two different templates for Unary and Variadic Ops

* Revise the code
2019-12-21 02:02:09 -05:00
TUNG LEDUC 45608282e0 [MLIR] Add support for Relu (#392)
* Add support for Relu

* Add comments
2019-12-21 01:38:16 -05:00
TUNG LEDUC 1c3176bf9f [MLIR] Lower ONNX element-wise unary ops: Exp, Tanh, Sinh, Cosh, Sigmoid (#389)
* Lower ExpOp

* Lower ONNXTanhOp

* Lower Exp Tanh, Sinh, and Cosh

* Lower ONNX Sigmoid op

* Merge

* Specialize template lowerScalarOp

* Unify ONNXEWUnaryOpLowering and ONNXEWBinaryOpLowering

* Support multiple types

* Reformat the code

* Add test cases

* Reformat the code

* Change names

* Apply clang-format

* Update variable names
2019-12-21 01:37:29 -05:00
Tian Jin 0048f2fd86 clean up 2019-12-21 01:36:03 -05:00
TUNG LEDUC c3ef1d93ae [MLIR] Lower ONNX element-wise binary ops: Mul, Div, Sub, And, Or, Xor (#388)
* Lower ONNX element-wise binary ops: Mul, Div, Sub, And, Or, Xor

* Edit gen_doc.py to avoid changes to AnyTypeOf<[AnyMemRef, AnyTensor]>

* Miss a space

* Add tests

* Shorten ONNXElementWiseBinaryOpLowering into ONNXEWBinaryOpLowering

* Move lowering patterns into runOnModule()

* Redundant space
2019-12-21 01:35:31 -05:00
GHEORGHE-TEOD BERCEA 7fb2f80dce [MLIR] Add support for dealloc insertion (#386)
* Add support for dealloc op.

* Check dealloc for returned result not present.
2019-12-21 01:34:48 -05:00
Tian Jin b2a1103915 [MLIR] Refactor Krnl Dialect and Krnl Dialect Lowering (#375)
* Store bounds as affine map attributes & check in test cases with generic printer

* Upgrading MLIR

MLIR is outdated on Buildbot, rebuilding a newer version.

* work with new version of mlir

* check-in parser tests

* custom printer

* nit

* bug fix

* enable custom asm printer test

* enable custom asm printer test

* more consistent variable naming

* test max/min

* variable naming scheme change to MLIR style

* can lower krnl to llvm

* kernel -> llvm

* comments

* bug fix

* try fixing ci

* fix ci

* deactivate model test

* fix lit test

* nit

* fix z buildbot
2019-12-21 01:34:14 -05:00
GHEORGHE-TEOD BERCEA 652ce4b7d4 Add test for checking lowering of Add op to KRNL IR (#385)
* Add test for checking lowering of Add op to KRNL IR.

* Add test file.
2019-12-21 01:20:36 -05:00
GHEORGHE-TEOD BERCEA b02652dd76 [MLIR] Lowering of frontend dialect to KRNL dialect (#382)
* Partial support for lowering operations to KRNL dialect.

* Attempt to lower to KRNL IR.

* Update file.

* Add lowering.

* Address comments. Fix alloc dynamic dimensions. Correctly link StandardOps.

* Temporarily remove deallocation of locally allocated tensors.
2019-12-21 01:11:14 -05:00
TUNG LEDUC d61cf35471 [MLIR] Add one more test case for MatMul-Add fusion (#380)
* Add one more testcase for matmul-add fusion

* Code format for identity elimination testcase
2019-12-21 00:51:54 -05:00
Tian Jin 004762c13d [MLIR] Remove module from test (#379)
* Remove module from test

* Update onnx_canonicalization.mlir
2019-12-21 00:51:23 -05:00
TUNG LEDUC 53ab014a1d [MLIR] Canonicalization pattern for eliminating identity ops (#377)
* Canonicalization pattern for eliminating identity ops

* Add a test for the identity elimination rule

* Remove frontend from test

* Use CHECK-NEXT instead of CHECK
2019-12-21 00:47:22 -05:00
GHEORGHE-TEOD BERCEA bee32e2041 Fix rebase errors. (#378) 2019-12-21 00:46:29 -05:00
TONG CHEN 3f68c5420d [MLIR] generate op from onnx document (#366)
* generate op from onnx document

* Restore FullGemm

* update the op attribute for shape inference and canonicalizer

* Update onnx_canonicalization.mlir
2019-12-21 00:40:40 -05:00
Tian Jin d01ac7732f [MLIR] compartmentalize build script (#369)
* compartmentalize build script, temporarily remove dependency of onnf_opt on helper.cpp

* fix test includes

* fix op directory include

* compiler -> op

* compiler test depends on boost system

* fix function name

* specify libcompiler dependencies

* let cmake take care of transitive dependencies

* remove unnecessary includes

* use ONNF_SRC_ROOT and ONNF_BIN_ROOT

* allow whole-archive linked libraries to be appended

* [MLIR] Support filecheck (#371)

* support lit+FileCheck

* add lit into build script

* format MLIR.cmake

* format cmake

* [MLIR] Remove input/output ops (#372)

* remove input/output ops

* get output tensor type from symbol table
2019-12-21 00:34:51 -05:00