Commit Graph

81 Commits

Author SHA1 Message Date
Tian Jin 0231bb83a2 Properly link with ZLIB. (#40) 2020-01-21 11:08:16 -05:00
Tung D. Le e89e51699b Lowering softmax (#14)
* Rebase

* Use max normalization

* Handle axis

* Add tests

* Update SharingWork.md

* Remove redundant spaces

* Format code

* Rebase

* Change from the use of Value* to Value

* Add end-to-end tests

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-01-20 21:57:32 -05:00
Doru Bercea 6b55bb43c7 Fix operand type access. 2020-01-20 15:48:16 -05:00
Doru Bercea bd44d8402e Add verifier function for checking negative perms. 2020-01-20 14:54:40 -05:00
Doru Bercea 9d1078540d Transpose using perm attribute. 2020-01-20 14:54:40 -05:00
Tian Jin 8665ecd998 Enable e2e tests (#29)
* Sync with latest MLIR.

* Enable ONNX backend tests as a means to test ONNF lowering end-to-end.

* Install ONNX using quiet mode.

* Remove debug comments.

* Install ONNX from third_party/onnx.

* Check python version and fix pip command for installing ONNX.

* Using --user install option to prevent permission-denied errors.

* Remove unused imports.

* Try using stock ONNX pip package as there are more tests in it.

* Pip got stuck building wheels, try sudo.

* Use verbose install to debug.

* Invalidate cache to build LLVM tools.

* Fix mlir installation script location.

* Debug to locate ONNF.

* Sanity check.

* Check out ONNF code first.

* Use verbose LIT output.

* 1. Update documentation to always use verbose LIT.
2. Update krnl ops to reflect new affine map attribute syntax.

* See if conda exists

* Install ONNX by manually cloning the repo.

* Install cmake first.

* Using sudo privilege when installing.

* Limit build parallelism.

* Limit parallelism.

* Larger memory.

* Install onnx package with pip.

* Build MLIR tools.

* Invalidate cache.

* Compile model.so with -fPIC.

* Remove module dump to get concise debug output.

* Print command before executing.

* Use quiet install mode to reduce logging.

* Use -relocation-model=pic to generate position independent code.

* 1. Remove MAKEFLAGS because now buildbot has enough memory.
2. Run DocCheck as a last step.

* 1. Add verbose mode for backend test.

* When dumping to LLVM bitcode, do not dump module IR, but print a message indicating that bitcode has been written to disk.

* Do not pass MakeFlags to CMake.

* Add more explanation for possible reasons of failing to identify tests.
2020-01-20 12:30:08 -05:00
Gheorghe-Teodor Bercea d895670656 Merge branch 'master' into fix-conv 2020-01-15 17:56:57 -05:00
Gheorghe-Teodor Bercea 969459ddcb Merge branch 'master' into fix-conv 2020-01-15 17:50:36 -05:00
Gheorghe-Teodor Bercea 514cbcb1dc Merge branch 'master' into fix-gemm 2020-01-15 17:50:15 -05:00
Doru Bercea a1b44905e2 Add documentation for handling optional arguments. 2020-01-15 17:06:14 -05:00
Doru Bercea 3f6efdf4a4 Fix MaxPool translation to ONNX dialect. 2020-01-15 15:16:45 -05:00
Doru Bercea d2a90e2923 Remove references to FullGemm. 2020-01-15 14:27:21 -05:00
Doru Bercea a42fdd08f3 Fix Gemm translation to ONNX dialect. 2020-01-15 14:11:32 -05:00
Doru Bercea 67ec9e9009 Fix convolution translation to MLIR. 2020-01-15 13:26:50 -05:00
Doru Bercea fc352745e0 Make last argument of conv variadic. 2020-01-14 11:17:52 -05:00
Tian Jin 22a6bdc574 Sync with latest MLIR. (#26) 2020-01-13 12:21:29 -05:00
Doru Bercea 151f4f8c44 Add the default shape inference for the transposition operation. 2020-01-09 13:50:38 -05:00
Tung D. Le edcd506dde Merge branch 'master' into tanh_cos_log 2020-01-08 13:39:24 +09:00
Tung D. Le 3d4ad52011 Rewrite tanh using TanhOp, add log, cos 2020-01-08 12:11:21 +09:00
Tung D. Le becb2add4a Do not get float attributes with fixed precision 2020-01-07 17:39:34 +09:00
Tian Jin 0582846864 Transition to value-typed Value, rename Value* -> Value, in accordance with upstream MLIR style change. 2019-12-30 22:42:13 -05:00
Tian Jin eadf33d816 explicit ordering among operands 2019-12-24 03:36:33 -05:00
Tian Jin 4eb95b2373 fix onnf build 2019-12-24 03:00:54 -05:00
Tian Jin 58c2f6de00 fix link 2019-12-24 02:46:14 -05:00
Tian Jin c55020f6b6 fix build script 2019-12-24 02:29:28 -05:00
Tian Jin 1188b765c9 comment out test tanh 2019-12-24 02:19:46 -05:00
Tian Jin 95de5b7ac9 revert changes to lower-to-krnl 2019-12-24 02:07:21 -05:00
Tian Jin 8815f12ad0 final -> override 2019-12-24 01:09:54 -05:00
Tian Jin 50ea6bed03 fix build 2019-12-23 02:09:11 -05:00
Tian Jin 238c937f1b rewrite cli description 2019-12-23 01:14:35 -05:00
Tian Jin 0c41a204e4 fix include path 2019-12-23 00:22:11 -05:00
Tian Jin da4527c961 flatten src directory structure 2019-12-23 00:13:52 -05:00
Tian Jin 82d513096e a commandline interface for onnf 2019-12-22 23:52:49 -05:00
Tian Jin 911cc2ad92 Merge pull request #1 from doru1004/remove-boost

clean up, remove dependency for boost
2019-12-22 23:15:46 -05:00
Tian Jin 685bf23b40 Enable ONNX Backend Test (#1)
* wip, commit before merging with upstream

* organize API, return wrapped output

* enable onnx backend test

* undo unintentional commit

* fix krnl ops tablegen

* format krnl ops

* reorder fillDynMemRefWithMemRef to be after fillPtrToMemRefWithDynMemRef, better comments

* more onnx backend tests

* ensure that test names refer to existing tests

* improve code readability by shortening type names

* nit

* restore unintentional changes

* more nits

* fix ; -> :

* split runtime implementation into header and body file, add support for data types

* comment on the onnx backend test

* make the comments read better

* do not dump when lowering
2019-12-22 23:14:57 -05:00
Tian Jin 2cb054324d clean up, remove dependency for boost 2019-12-22 20:49:29 -05:00
Tian Jin 5573cb39fe clean up, remove dependency for boost 2019-12-22 20:33:33 -05:00
Tian Jin a6a40cf989 Format Key Files using LLVM Style (#403)
* format using llvm style

* merge and format
2019-12-21 02:11:49 -05:00
TUNG LEDUC 06a968d4a1 [MLIR] Add broadcasting support for element-wise operations (#398)
* Add broadcasting support for elementwise operations

* Remove MLIRDialect from MLIRWholeArchiveLibs

* Rewrite getLoopIVsForBroadcasting

* Compute dimensions for allocating result memory

* Compute dimensions for allocating result memory (revised)

* Use static dimension for element-wise operation testcases

* Add a test for addition with broadcasting

* Missed Traits.h when merging

* Revise

* Update SharedWork.md

* Broadcasting for variadic operations

* Edit comments

* Update SharedWork.md

* Reorganize the code

* Add CHECK-LABEL for test_add_with_broadcasting
2019-12-21 02:08:27 -05:00
GHEORGHE-TEOD BERCEA 0a8af69e94 Add inference for Identity operation. (#400) 2019-12-21 02:08:13 -05:00
Haruki Imai 7e3f96e642 [MLIR] Add support for Reciprocal (#397)
* Added support for Reciprocal

* Fixed format
2019-12-21 02:07:44 -05:00
Tian Jin 3e7b8465e9 clean up 2019-12-21 02:07:24 -05:00
GHEORGHE-TEOD BERCEA e81a7654f9 [MLIR] Add support for reshape (#390)
* Add reshape op handling.

* Lower reshape to KRNL dialect.

* Add comments.

* Propagate reshape to KRNL IR.

* Lower KRNL reshape to affine and standard ops level dialects.

* Add lowering of reshape operation to Krnl and LLVM Dialects.

* Add test for LLVM IR dialect output for reshape.

* Fix rebase.

* Fix test variable.

* Emit errors during reshape shape inference. Address other reviewer comments.
2019-12-21 02:06:14 -05:00
TUNG LEDUC 5ed79083d5 [MLIR] Add support for Max, Min, Sum, Elu, Selu, LeakyRelu, HardSigmoid (#395)
* Lower ONNXSumOp

* Add inferShapes() and test cases

* Load the first operand to the result

* Update SharingWork.md

* Update SharingWork.md

* Update SharingWork.md

* Add support for Max, Min

* Pass operation instead of location to mapToLowerScalarOp

* Add support for Elu, Selu, LeakyRelu, HardSigmoid

* Add test cases

* Update SharingWork.md

* Rewrite the part of lowering variadic ops and use it for binary ops

* Use two different templates for Unary and Variadic Ops

* Revise the code
2019-12-21 02:02:09 -05:00
TONG CHEN c8d591fb28 [MLIR] import attribute of onnx node (#383)
* add attributes as NamedAttribute

* support list value for attribute

* use std::tie to avoid c++17 feature
2019-12-21 02:00:58 -05:00
TUNG LEDUC 45608282e0 [MLIR] Add support for Relu (#392)
* Add support for Relu

* Add comments
2019-12-21 01:38:16 -05:00
Tian Jin 82f5bfec9f Update lower_frontend_to_krnl.cpp (#391) 2019-12-21 01:37:50 -05:00
TUNG LEDUC 1c3176bf9f [MLIR] Lower ONNX element-wise unary ops: Exp, Tanh, Sinh, Cosh, Sigmoid (#389)
* Lower ExpOp

* Lower ONNXTanhOp

* Lower Exp Tanh, Sinh, and Cosh

* Lower ONNX Sigmoid op

* Merge

* Specialize template lowerScalarOp

* Unify ONNXEWUnaryOpLowering and ONNXEWBinaryOpLowering

* Support multiple types

* Reformat the code

* Add test cases

* Reformat the code

* Change names

* Apply clang-format

* Update variable names
2019-12-21 01:37:29 -05:00
TUNG LEDUC c3ef1d93ae [MLIR] Lower ONNX element-wise binary ops: Mul, Div, Sub, And, Or, Xor (#388)
* Lower ONNX element-wise binary ops: Mul, Div, Sub, And, Or, Xor

* Edit gen_doc.py to avoid changes about AnyTypeOf<[AnyMemRef, AnyTensor]>

* Miss a space

* Add tests

* Shorten ONNXElementWiseBinaryOpLowering into ONNXEWBinaryOpLowering

* Move lowering patterns into runOnModule()

* Redundant space
2019-12-21 01:35:31 -05:00
TUNG LEDUC 05e16dafae Use template to support lowering all binary onnx ops to kernel ir (#387) 2019-12-21 01:35:17 -05:00