Tung D. Le
2814ea3898
Support dilations and enable the remaining e2e tests for MaxPoolSingleOut ( #31 )
* Support dilations and enable e2e tests
* Fix memory allocation for dynamic shapes
* Edit comments
* Do dilation by computing an offset from kernel index
* Correct dilation formula, add an example of out-of-bound, and add a test for dilation
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-18 09:55:50 -04:00
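A minimal sketch of the dilation arithmetic described in the commit above, assuming the standard ONNX pooling formula; the 1-D helper below is illustrative only and is not the actual onnx-mlir lowering:

```python
import math

def max_pool_1d(x, kernel, stride=1, pad=0, dilation=1):
    # Output size per the ONNX pooling formula (floor mode).
    out_size = (len(x) + 2 * pad - dilation * (kernel - 1) - 1) // stride + 1
    out = []
    for o in range(out_size):
        best = -math.inf
        for k in range(kernel):
            # Dilation enters as an offset computed from the kernel index.
            i = o * stride + k * dilation - pad
            if 0 <= i < len(x):  # out-of-bound positions (inside the pad) are skipped
                best = max(best, x[i])
        out.append(best)
    return out

# e.g. max_pool_1d([1, 2, 3, 4], kernel=2, dilation=2) == [3, 4]
```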
Tung D. Le
4763e8a8bc
Lower ONNXAbsOp to Krnl dialect and enable e2e tests for ONNXReduceL1 ( #18 )
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-17 11:12:45 -04:00
Gheorghe-Teodor Bercea
1622b9f161
[NFC] Change ONNF-based names to ONNX-MLIR ( #32 )
* Rename onnf to onnx-mlir.
* Change workspace name.
2020-03-17 09:16:33 -04:00
Tian Jin
c25831094e
Revert "Support attribute promotion."
This reverts commit 955968b750.
2020-03-17 17:41:59 +08:00
Tian Jin
955968b750
Support attribute promotion.
2020-03-17 17:39:34 +08:00
Tung D. Le
d86591d61a
Import all initialized tensors as dense constants ( #30 )
* Import initialized tensor as dense attribute
* Import all initialized tensors as dense constants
* Remove unintentional code
* Fix value attribute format in shape inference tests of reshape
* Re-add rank check for reshape's shape inference
* Remove a redundant variable
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-16 11:17:28 -04:00
Gheorghe-Teodor Bercea
c46880d5c6
Fix reshape output shape inference when a single dynamic shape is given ( #22 )
* Fix reshape when a dynamic shape is given.
* Fix default attributes for ConvNoBias.
* Fix comment.
* Resolve comment.
* Improve checks.
* Handle zero dim case.
* Add helper to fetch constants. Add test for dynamic reshape.
* Add test for zero.
* Use shortcut method for size.
2020-03-13 17:18:46 -04:00
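As a reference for the fix above, a small sketch of the ONNX Reshape rule it implements: a 0 in the target shape copies the input dimension at the same index, and a single -1 is inferred so that the element count is preserved (illustrative code, not the onnx-mlir implementation):

```python
def infer_reshape_output(input_shape, target_shape):
    # 0 means "copy the input dimension at the same index".
    shape = [input_shape[i] if d == 0 else d for i, d in enumerate(target_shape)]
    total = 1
    for d in input_shape:
        total *= d
    known = 1
    for d in shape:
        if d != -1:
            known *= d
    # The single -1 (dynamic) dimension absorbs the remaining elements.
    return [total // known if d == -1 else d for d in shape]

# e.g. infer_reshape_output([2, 3, 4], [0, -1]) == [2, 12]
```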
chentong319
6137fc7c17
Fix issues #15 and #16 ( #29 )
* fix issue #15 and #16
* fix format
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-13 10:19:27 -04:00
Tung D. Le
362491553c
Shape inference for ONNXAveragePool ( #21 )
* Shape inference for ONNXAveragePool
* Edit comments and puts helper function on top of the file
* Fix template
2020-03-13 09:59:16 -04:00
Tung D. Le
a65820940c
Lower ConstantOp ( #28 )
* Lower ConstantOp
* Refactor the code
* Edit error messages
* Check whether attribute is sparse or dense during shape inference
2020-03-12 10:58:42 -04:00
Tung D. Le
162ac1bc32
Pad value for MaxPool must be negative infinity instead of zero ( #20 )
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-03-12 09:30:02 -04:00
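A tiny illustration, with made-up values, of why the pad value above matters: when a pooling window overlaps the padded border and every in-bound value is negative, a zero pad would wrongly win the max:

```python
import math

window = [-3.0, -1.0]                      # in-bound values near a padded border
with_zero_pad = max(window + [0.0])        #  0.0 -- incorrect result
with_neg_inf = max(window + [-math.inf])   # -1.0 -- correct result
```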
Alexandre Eichenberger
811b63e031
Infer common pad ( #26 )
* common pad handling in shape inference for conv and maxpool
* common pads
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-11 18:36:02 -04:00
chentong319
391f565a66
Lower constant padding operation to KRNL dialect ( #27 )
2020-03-11 16:54:07 -04:00
Gheorghe-Teodor Bercea
e8a0b47e10
Fix case for upper and lower padding when strides are present. ( #11 )
* Fix case for upper and lower padding when strides are present.
* Address comments.
* Code clean-up.
* Fix tests.
2020-03-10 16:58:05 -04:00
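For context on the padding fix above, a sketch of how SAME_* auto-pad is typically split into lower and upper parts when strides are present (ONNX convention, ignoring dilations; the helper name is illustrative):

```python
import math

def same_pads(in_dim, kernel, stride):
    # Output size for SAME_* padding, then the total pad needed to reach it.
    out_dim = math.ceil(in_dim / stride)
    total = max((out_dim - 1) * stride + kernel - in_dim, 0)
    lower = total // 2        # SAME_UPPER: any odd leftover goes to the upper pad
    upper = total - lower
    return lower, upper

# e.g. same_pads(5, kernel=3, stride=2) == (1, 1)
```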
Gheorghe-Teodor Bercea
fe3279e721
Initialize operation arguments with ONNX model constants ( #8 )
* Save current state.
* Include constant arguments in source.
* Emit constants for Reshape second argument.
* Clean-up code.
* Add changes to gen_doc.py file.
* Propagate constant tensor to Reshape second arg to infer shape.
* Update documentation.
* Eliminate constant tensor operations when lowering to KRNL dialect.
* Replace ConstantTensorOp with ConstantOp.
* Add comment to remove temporary Constant lowering code.
* Remove unused shape inference for Constant.
* Remove comment.
* Remove explicit constant elimination.
* Refactor code.
2020-03-10 14:46:35 -04:00
Gheorghe-Teodor Bercea
ba02b90e0b
Enable inference for an arbitrary number of instructions ( #12 )
* Fix shape inference.
* Remove comment.
* Remove worklist since it is not needed.
2020-03-10 14:16:03 -04:00
Tung D. Le
1882059ac9
Support Pads for MaxPoolSingleOut ( #14 )
* Support Pads for MaxPoolSingleOut
* Regenerate onnx.md to include the new op
* Edit comments
* Undo redundant parts that were unintentionally changed
* Move declarative rewriting rules into canonicalize to avoid creating a new op
* Reformat the rewriting rule pattern of MaxPoolSingleOut
* Put ONNXPadConstantValuePadOp's build method into a .cpp file instead of a tablegen file
* Use the same helper function as the one in inferShape for the ONNXPadConstantValuePadOp's build method
* Change function names and fix padding for the spatial dimensions
* Call shape inference again after canonicalization to infer shapes for ops added during canonicalization
* Fix typos
2020-03-09 20:15:58 -04:00
Tian Jin
718ec85479
Change variant repo from git to https. ( #17 )
2020-03-10 00:16:43 +08:00
Gheorghe-Teodor Bercea
8a992b619f
Create some helper functions to emit constant op for a specific type ( #7 )
* emitConstantOp with a given type
* Helper functions to create infinity constants
* Use new constant helper functions for MaxPoolSingleOut
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-05 14:21:00 -05:00
Gheorghe-Teodor Bercea
8e1b30e133
Check channel dimension mismatch only for known dimensions ( #2 )
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-04 14:34:08 -05:00
Gheorghe-Teodor Bercea
e4c23da4fd
Lower MaxPoolSingleOutOp to Krnl dialect ( #1 )
* Lower MaxPoolSingleOutOp to Krnl dialect
* Edit comments
* Update changes according to the new folder structure
* Add MLIR tests
* Support ceil_mode
* Merge the first two krnl loops into one krnl loop; remove attribute checks
* Dynamically allocate memory for the result if the result has unknown dimensions
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-04 14:27:21 -05:00
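Since the lowering above supports ceil_mode, a sketch of the output-size formula involved (per the ONNX pooling spec, ignoring dilations; illustrative only):

```python
import math

def pool_out_dim(in_dim, kernel, stride, pad_begin, pad_end, ceil_mode=False):
    # Spatial output size: floor by default, ceil when ceil_mode is set.
    span = in_dim + pad_begin + pad_end - kernel
    rounded = math.ceil(span / stride) if ceil_mode else math.floor(span / stride)
    return rounded + 1

# e.g. pool_out_dim(7, kernel=2, stride=2, pad_begin=0, pad_end=0) == 3
#      pool_out_dim(7, kernel=2, stride=2, pad_begin=0, pad_end=0, ceil_mode=True) == 4
```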
Tung D. Le
e97df0b343
Add a pass to decompose ONNX operations ( #9 )
2020-03-04 10:53:59 -05:00
Gheorghe-Teodor Bercea
7c1dd0279b
Merge pull request #5 from tjingrant/update-buildbot
Use the llvm-project commit that we know works.
2020-03-02 11:59:54 -05:00
Tian Jin
47831749ce
Use the exact commit id specified in clang-ykt/llvm-project.
2020-03-03 00:18:59 +08:00
Tian Jin
04dd904ca7
Switch to new status badge.
2020-03-02 20:37:33 +08:00
Tian Jin
473fdd726d
Fix DocCheck error.
2020-03-02 17:06:40 +08:00
Tian Jin
2f5d65f2e4
Invalidate cache.
2020-03-02 16:24:15 +08:00
Tian Jin
ee96ffab73
Merge branch 'update-buildbot' of https://github.com/tjingrant/onnx-mlir into update-buildbot
2020-03-02 16:21:10 +08:00
Tian Jin
d8b5e195d2
Upgrade MLIR commit id.
2020-03-02 16:20:58 +08:00
Tian Jin
5e2a02ecdf
Trigger buildbot
2020-03-02 15:00:31 +08:00
Tian Jin
f856f84c55
Use the llvm-project commit that we know works.
2020-03-02 14:28:36 +08:00
Tung D. Le
5357fc1421
Use SqrtOp in Standard dialect ( #108 )
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-26 12:03:24 -05:00
Tung D. Le
0c4a010283
Remove rank constraints in gemm fusion ( #101 )
* Remove rank constraints in gemm fusion
* Add an MLIR test
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-26 11:40:52 -05:00
Tung D. Le
24d89625e3
Remove redundant lower_frontend_to_krnl since we reorganized it ( #99 )
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-26 16:32:06 +08:00
chentong319
04d9e2f341
Merge pull request #84 from chentong319/shapeinference-pad
Shape inference for pad with constant pads
2020-02-25 19:32:34 -05:00
chentong
4edc97f3de
Merge branch 'shapeinference-pad' of github.com:chentong319/ONNF into shapeinference-pad
2020-02-25 17:46:44 -05:00
chentong
3abbf1c0e9
put the common code into a helper function
2020-02-25 17:43:49 -05:00
chentong
4079ee1f26
Merge remote-tracking branch 'upstream/master' into shapeinference-pad
2020-02-25 15:54:18 -05:00
Alexandre Eichenberger
3a88361b17
Use input/output operation names; use helpers for attribute functions and int values ( #106 )
2020-02-25 15:46:11 -05:00
Alexandre Eichenberger
3b1c29c078
Using attribute setters for maxpool ( #105 )
* using attribute setters for maxpool
* fix typos, added handling of storage order, simplified code
2020-02-25 14:33:48 -05:00
Tian Jin
e02aa87748
Update gitignore file to ignore filesystem artifacts and Python-related temporary files. ( #103 )
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-25 11:18:37 -05:00
Gheorghe-Teodor Bercea
907104d7e8
Merge branch 'master' into shapeinference-pad
2020-02-25 11:14:28 -05:00
Gheorghe-Teodor Bercea
ee3e140ddb
[NFC] Change structure of conversion folder. ( #96 )
* Change structure of conversion folder.
* Fix comments.
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-25 10:38:08 -05:00
Gheorghe-Teodor Bercea
32f08bcf0c
Clean-up code. ( #98 )
2020-02-25 09:54:29 -05:00
Gheorghe-Teodor Bercea
0d307d1183
Set flag to true when definition is emitted. ( #97 )
2020-02-25 09:47:42 -05:00
Tung D. Le
a720f9a7b2
Remove special GemmNoBias since we can handle it using NoneType bias ( #100 )
* Remove special GemmNoBias since we can handle it using NoneType bias
* Remove GemmNoBias from onnx.md
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-25 13:20:43 +08:00
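The reasoning behind the change above, sketched: once the bias input may simply be absent ("none"), a single Gemm path suffices and no separate no-bias op is needed (illustrative NumPy code, not the actual lowering):

```python
import numpy as np

def gemm(a, b, c=None, alpha=1.0, beta=1.0):
    # ONNX Gemm: Y = alpha * (A @ B) + beta * C, where the bias C is optional.
    y = alpha * (a @ b)
    return y if c is None else y + beta * c
```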
Tian Jin
732317cd5a
Transition to ONNX-1.6.0. ( #95 )
* Transition to ONNX-1.6.0.
* Use the version of ONNX inside ONNF when running backend tests.
* Install quietly and with sudo privilege.
2020-02-25 13:04:15 +08:00
Gheorghe-Teodor Bercea
1ad7989fc5
Merge branch 'master' into shapeinference-pad
2020-02-24 17:22:00 -05:00
Alexandre Eichenberger
fcb5f35993
Introduce helper class to generate KRNL code and apply it to Convolution ( #93 )
* helper to gen krnl code, applied to conv
* suggested changes, name, removed set insertion point
* format
* suggested changes
* added comments and made a small name change
2020-02-24 17:20:15 -05:00
Gheorghe-Teodor Bercea
d4f8fef947
Merge branch 'master' into shapeinference-pad
2020-02-24 16:13:21 -05:00