Commit Graph

84 Commits

Author SHA1 Message Date
Tian Jin ab10eb510a fix yaml file 2019-12-24 00:20:37 -05:00
Tian Jin 20fc4d963b fix yaml file 2019-12-24 00:17:45 -05:00
Tian Jin 40677f528d fix yaml file 2019-12-24 00:16:00 -05:00
Tian Jin 78dc3f6e93 cache MLIR build 2019-12-24 00:09:31 -05:00
Tian Jin 8dbca0cc7b build in release mode 2019-12-23 23:23:07 -05:00
Tian Jin e1f4fcd66e reduce hash table size 2019-12-23 23:18:38 -05:00
Tian Jin d4c59cc45f build all 2019-12-23 22:43:43 -05:00
Tian Jin 8e27d831e6 try tricks to reduce memory consumption of linkers 2019-12-23 22:37:23 -05:00
Tian Jin 71b27d555b use 4 threads, not much credit left 2019-12-23 22:01:16 -05:00
Tian Jin e909658279 use 2 threads 2019-12-23 21:59:26 -05:00
Tian Jin 01c88a3750 running out of memory, limit parallelism 2019-12-23 18:19:55 -05:00
Tian Jin 8f1af19e43 install ninja 2019-12-23 17:58:55 -05:00
Tian Jin 0a953629db use sudo 2019-12-23 17:53:31 -05:00
Tian Jin db4afef879 try without installing sudo 2019-12-23 17:51:52 -05:00
Tian Jin d601a34afa install gcc, cmake 2019-12-23 17:47:37 -05:00
Tian Jin 65240a1368 use python image 2019-12-23 17:03:22 -05:00
Tian Jin 52ca81f3ad find git 2019-12-23 17:00:55 -05:00
Tian Jin 13f9c83397 set up buildbot 2019-12-23 16:53:08 -05:00
Tian Jin 6c1d0d42c5 enable ci 2019-12-23 16:33:08 -05:00
Tian Jin e9e47edc9e Merge pull request #2 from doru1004/flatten-src-folder-structure
flatten src directory structure
2019-12-23 11:41:11 -05:00
Tian Jin 3b7a1c17b0 Merge pull request #3 from doru1004/fix-build
Fix build
2019-12-23 11:41:04 -05:00
Tian Jin 206fb5db67 fix whole-archive link 2019-12-23 11:40:15 -05:00
Tian Jin 50ea6bed03 fix build 2019-12-23 02:09:11 -05:00
Tian Jin 238c937f1b rewrite cli description 2019-12-23 01:14:35 -05:00
Tian Jin 0c41a204e4 fix include path 2019-12-23 00:22:11 -05:00
Tian Jin da4527c961 flatten src directory structure 2019-12-23 00:13:52 -05:00
Tian Jin 922a40962c FE -> ONNF 2019-12-22 23:53:14 -05:00
Tian Jin 82d513096e a commandline interface for onnf 2019-12-22 23:52:49 -05:00
Tian Jin 911cc2ad92 Merge pull request #1 from doru1004/remove-boost
clean up, remove dependency for boost
2019-12-22 23:15:46 -05:00
Tian Jin 685bf23b40 Enable ONNX Backend Test (#1)
* wip, commit before merging with upstream

* organize API, return wrapped output

* enable onnx backend test

* undo unintentional commit

* fix krnl ops tablegen

* format krnl ops

* reorder fillDynMemRefWithMemRef to be after fillPtrToMemRefWithDynMemRef, better comments

* more onnx backend tests

* ensure that test names refer to existing tests

* improve code readability by shortening type names

* nit

* restore unintentional changes

* more nits

* fix ; -> :

* split runtime implementation into header and body file, add support for data types

* comment on the onnx backend test

* make the comments read better

* do not dump when lowering
2019-12-22 23:14:57 -05:00
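For context on the runtime wrapping mentioned in the commit above (wrapped outputs, fillPtrToMemRefWithDynMemRef, support for data types), the sketch below shows what a dynamic memref-style descriptor could look like. It is a minimal illustration only; the DynTensor struct, DType enum, and field names are invented here and are not the repository's actual DynMemRef layout.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical element-type tag; real runtimes typically cover more types.
enum class DType { Float32, Float64, Int32, Int64 };

// Hypothetical descriptor for a tensor whose shape is only known at runtime.
struct DynTensor {
  void *data = nullptr;         // pointer to the element buffer
  DType dtype = DType::Float32; // element type tag
  std::vector<int64_t> sizes;   // extent of each dimension
  std::vector<int64_t> strides; // element stride per dimension

  int64_t rank() const { return static_cast<int64_t>(sizes.size()); }

  int64_t numElements() const {
    int64_t n = 1;
    for (int64_t s : sizes)
      n *= s;
    return n;
  }
};

int main() {
  DynTensor t;
  t.sizes = {2, 3};
  t.strides = {3, 1};
  std::cout << "rank=" << t.rank() << " elements=" << t.numElements() << "\n";
}
```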
Tian Jin 2cb054324d clean up, remove dependency for boost 2019-12-22 20:49:29 -05:00
Tian Jin 5573cb39fe clean up, remove dependency for boost 2019-12-22 20:33:33 -05:00
Tian Jin a6a40cf989 Format Key Files using LLVM Style (#403)
* format using llvm style

* merge and format
2019-12-21 02:11:49 -05:00
TUNG LEDUC 06a968d4a1 [MLIR] Add broadcasting support for element wise operations (#398)
* Add broadcasting support for elementwise operations

* Remove MLIRDialect from MLIRWholeArchiveLibs

* Rewrite getLoopIVsForBroadcasting

* Compute dimensions for allocating result memory

* Compute dimensions for allocating result memory (revised)

* Use static dimension for element-wise operation testcases

* Add a test for addition with broadcasting

* Missed Traits.h when merging

* Revise

* Update SharedWork.md

* Broadcasting for variadic operations

* Edit comments

* Update SharedWork.md

* Reorganize the code

* Add CHECK-LABEL for test_add_with_broadcasting
2019-12-21 02:08:27 -05:00
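For context on commit #398 above, the sketch below illustrates ONNX/NumPy-style multidirectional broadcast shape inference: shapes are aligned from the right, and a size-1 (or missing) dimension stretches to match the other operand. This is a standalone illustration under those standard rules; broadcastShape and kDynamic are names invented for the sketch and are not the repository's getLoopIVsForBroadcasting code.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <vector>

// -1 marks a dimension whose size is unknown until runtime.
constexpr int64_t kDynamic = -1;

std::vector<int64_t> broadcastShape(const std::vector<int64_t> &a,
                                    const std::vector<int64_t> &b) {
  size_t rank = std::max(a.size(), b.size());
  std::vector<int64_t> out(rank);
  for (size_t i = 0; i < rank; ++i) {
    // Read dimensions from the back; missing dimensions behave like 1.
    int64_t da = i < a.size() ? a[a.size() - 1 - i] : 1;
    int64_t db = i < b.size() ? b[b.size() - 1 - i] : 1;
    int64_t &dst = out[rank - 1 - i];
    if (da == db)
      dst = da;
    else if (da == 1)
      dst = db;
    else if (db == 1)
      dst = da;
    else if (da == kDynamic || db == kDynamic)
      dst = kDynamic; // cannot be decided statically; checked at runtime
    else
      throw std::invalid_argument("shapes are not broadcastable");
  }
  return out;
}

int main() {
  // Example: {3, 1, 5} broadcast with {4, 5} yields {3, 4, 5}.
  for (int64_t d : broadcastShape({3, 1, 5}, {4, 5}))
    std::cout << d << ' ';
  std::cout << '\n';
}
```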
GHEORGHE-TEOD BERCEA 0a8af69e94 Add inference for Identity operation. (#400) 2019-12-21 02:08:13 -05:00
Haruki Imai 7e3f96e642 [MLIR] Add support for Reciprocal (#397)
* Added support for Reciprocal

* Fixed format
2019-12-21 02:07:44 -05:00
Tian Jin 3e7b8465e9 clean up 2019-12-21 02:07:24 -05:00
GHEORGHE-TEOD BERCEA e81a7654f9 [MLIR] Add support for reshape (#390)
* Add reshape op handling.

* Lower reshape to KRNL dialect.

* Add comments.

* Propagate reshape to KRNL IR.

* Lower KRNL reshape to affine and standard ops level dialects.

* Add lowering of reshape operation to Krnl and LLVM Dialects.

* Add test for LLVM IR dialect output for reshape.

* Fix rebase.

* Fix test variable.

* Emit errors during reshape shape inference. Address other reviewer comments.
2019-12-21 02:06:14 -05:00
TUNG LEDUC 5ed79083d5 [MLIR] Add support for Max, Min, Sum, Elu, Selu, LeakyRelu, HardSigmoid (#395)
* Lower ONNXSumOp

* Add inferShapes() and test cases

* Load the first operand to the result

* Update SharingWork.md

* Update SharingWork.md

* Update SharingWork.md

* Add support for Max, Min

* Pass operation instead of location to mapToLowerScalarOp

* Add support for Elu, Selu, LeakyRelu, HardSigmoid

* Add test cases

* Update SharingWork.md

* Rewrite the part of lowering variadic ops and use it for binary ops

* Use two different templates for Unary and Variadic Ops

* Revise the code
2019-12-21 02:02:09 -05:00
Alexandre E Eichenberger fb1b43f842 Create SharingWork.md (#394)
Add tables to keep track of the work, initially with a section on ONNX to KRNL work. Add more sections as you see fit
2019-12-21 02:01:47 -05:00
TONG CHEN c8d591fb28 [MLIR] import attribute of onnx node (#383)
* add attributes as NamedAttribute

* support list value for attribute

* use std::tie to avoid c++17 feature
2019-12-21 02:00:58 -05:00
TUNG LEDUC 45608282e0 [MLIR] Add support for Relu (#392)
* Add support for Relu

* Add comments
2019-12-21 01:38:16 -05:00
Tian Jin 82f5bfec9f Update lower_frontend_to_krnl.cpp (#391) 2019-12-21 01:37:50 -05:00
TUNG LEDUC 1c3176bf9f [MLIR] Lower ONNX element-wise unary ops: Exp, Tanh, Sinh, Cosh, Sigmoid (#389)
* Lower ExpOp

* Lower ONNXTanhOp

* Lower Exp Tanh, Sinh, and Cosh

* Lower ONNX Sigmoid op

* Merge

* Specialize template lowerScalarOp

* Unify ONNXEWUnaryOpLowering and ONNXEWBinaryOpLowering

* Support multiple types

* Reformat the code

* Add test cases

* Reformat the code

* Change names

* Apply clang-format

* Update variable names
2019-12-21 01:37:29 -05:00
Tian Jin 0048f2fd86 clean up 2019-12-21 01:36:03 -05:00
TUNG LEDUC c3ef1d93ae [MLIR] Lower ONNX element-wise binary ops: Mul, Div, Sub, And, Or, Xor (#388)
* Lower ONNX element-wise binary ops: Mul, Div, Sub, And, Or, Xor

* Edit gen_doc.py to avoid changes about AnyTypeOf<[AnyMemRef, AnyTensor]>

* Miss a space

* Add tests

* Shorten ONNXElementWiseBinaryOpLowering into ONNXEWBinaryOpLowering

* Move lowering patterns into runOnModule()

* Redundant space
2019-12-21 01:35:31 -05:00
TUNG LEDUC 05e16dafae Use template to support lowering all binary onnx ops to kernel ir (#387) 2019-12-21 01:35:17 -05:00
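For context on commit #387 above (and the related #388/#389), the sketch below illustrates the general template technique of sharing one lowering routine across binary element-wise ops while specializing only the innermost scalar computation. AddOp, MulOp, ScalarOp, and lowerBinaryElementwise are names invented for this illustration and do not reflect the repository's actual MLIR pattern classes.

```cpp
#include <iostream>
#include <string>

// Placeholder op tags standing in for the ONNX binary ops being lowered.
struct AddOp {};
struct MulOp {};

// Primary template left undefined: every supported op must specialize it
// with its scalar computation.
template <typename Op> struct ScalarOp;

template <> struct ScalarOp<AddOp> {
  static std::string emit(const std::string &lhs, const std::string &rhs) {
    return lhs + " + " + rhs;
  }
};

template <> struct ScalarOp<MulOp> {
  static std::string emit(const std::string &lhs, const std::string &rhs) {
    return lhs + " * " + rhs;
  }
};

// One generic "lowering" shared by all binary element-wise ops: the loop
// structure is identical, only the scalar emitter differs per op.
template <typename Op>
void lowerBinaryElementwise(const std::string &lhs, const std::string &rhs) {
  std::cout << "for each element i:\n"
            << "  out[i] = " << ScalarOp<Op>::emit(lhs + "[i]", rhs + "[i]")
            << "\n";
}

int main() {
  lowerBinaryElementwise<AddOp>("a", "b");
  lowerBinaryElementwise<MulOp>("a", "b");
}
```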
GHEORGHE-TEOD BERCEA 7fb2f80dce [MLIR] Add support for dealloc insertion (#386)
* Add support for dealloc op.

* Check dealloc for returned result not present.
2019-12-21 01:34:48 -05:00
Tian Jin b2a1103915 [MLIR] Refactor Krnl Dialect and Krnl Dialect Lowering (#375)
* Store bounds as affine map attributes & check in test cases with generic printer

* Upgrading MLIR

MLIR is outdated on Buildbot, rebuilding a newer version.

* work with new version of mlir

* check-in parser tests

* custom printer

* nit

* bug fix

* enable custom asm printer test

* enable custom asm printer test

* more consistent variable naming

* test max/min

* variable naming scheme change to MLIR style

* can lower krnl to llvm

* kernel -> llvm

* comments

* bug fix

* try fixing ci

* fix ci

* deactivate model test

* fix lit test

* nit

* fix z buildbot
2019-12-21 01:34:14 -05:00
GHEORGHE-TEOD BERCEA 652ce4b7d4 Add test for checking lowering of Add op to KRNL IR (#385)
* Add test for checking lowering of Add op to KRNL IR.

* Add test file.
2019-12-21 01:20:36 -05:00