Commit Graph

290 Commits

Author SHA1 Message Date
Alexandre Eichenberger 811b63e031
Infer common pad (#26)
* common pad handling in shape inference for conv and maxpool

* common pads

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-11 18:36:02 -04:00
chentong319 391f565a66
Lower constant padding operation to KRNL dialect (#27) 2020-03-11 16:54:07 -04:00
Gheorghe-Teodor Bercea e8a0b47e10
Fix case for upper and lower padding when strides are present. (#11)
* Fix case for upper and lower padding when strides are present.

* Address comments.

* Code clean-up.

* Fix tests.
2020-03-10 16:58:05 -04:00
Gheorghe-Teodor Bercea fe3279e721
Initialize operation arguments with ONNX model constants (#8)
* Save current state.

* Include constant arguments in source.

* Emit constants for Reshape second argument.

* Clean-up code.

* Add changes to gen_doc.py file.

* Propagate constant tensor to Reshape second arg to infer shape.

* Update documentation.

* Eliminate constant tensor operations when lowering to KRNL dialect.

* Replace ConstantTensorOp with ConstantOp.

* Add comment to remove temporary Constant lowering code.

* Remove unused shape inference for Constant.

* Remove comment.

* Remove explicit constant elimination.

* Refactor code.
2020-03-10 14:46:35 -04:00
Gheorghe-Teodor Bercea ba02b90e0b
Enable inference for arbitrary number of instructions (#12)
* Fix shape inference.

* Remove comment.

* Remove worklist since it is not needed.
2020-03-10 14:16:03 -04:00
Tung D. Le 1882059ac9
Support Pads for MaxPoolSingleOut (#14)
* Support Pads for MaxPoolSingleOut

* Regenerate onnx.md to include the new op

* Edit comments

* Undo redundant parts that were unintentionally changed

* Move declarative rewriting rules into canonicalize to avoid creating a new op

* Reformat the rewriting rule pattern of MaxPoolSingleOut

* Put ONNXPadConstantValuePadOp's build method into a .cpp file instead of a tablegen file

* Use the same helper function as the one in inferShape for the ONNXPadConstantValuePadOp's build method

* Change function names and fix padding for the spatial dimensions

* Call shape-inference again after canonicalization to infer shape for newly added ops during canonicalization.

* Fix typos
2020-03-09 20:15:58 -04:00
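The MaxPool pads support above ultimately has to agree with the ONNX output-shape rule for pooling with explicit padding. A minimal sketch of that rule (reference semantics only, not onnx-mlir code; dilations are omitted for brevity):

```python
import math

def pool_output_dim(in_dim, kernel, stride=1, pad_begin=0, pad_end=0, ceil_mode=False):
    """Spatial output size for MaxPool/AveragePool per the ONNX operator spec."""
    rounding = math.ceil if ceil_mode else math.floor
    return int(rounding((in_dim + pad_begin + pad_end - kernel) / stride)) + 1

# Example: 32-wide input, 3-wide kernel, stride 2, one element of padding per side.
assert pool_output_dim(32, 3, stride=2, pad_begin=1, pad_end=1) == 16
assert pool_output_dim(32, 3, stride=2, pad_begin=1, pad_end=1, ceil_mode=True) == 17
```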
Tian Jin 718ec85479
Change variant repo from git to https. (#17) 2020-03-10 00:16:43 +08:00
Gheorghe-Teodor Bercea 8a992b619f
Create some helper functions to emit constant op for a specific type (#7)
* emitConstantOp with a given type

* Helper functions to create infinity constants

* Use new constant helper functions for MaxPoolSingleOut

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-05 14:21:00 -05:00
Gheorghe-Teodor Bercea 8e1b30e133
Check channel dimension mismatch only for known dimensions (#2)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-04 14:34:08 -05:00
Gheorghe-Teodor Bercea e4c23da4fd
Lower MaxPoolSingleOutOp to Krnl dialect (#1)
* Lower MaxPoolSingleOutOp to Krnl dialect

* Edit comments

* Update changes according to the new folder structure

* Add MLIR tests

* Support ceil_mode

* Merge the first two krnl loops into one krnl loop; remove attribute checks

* Dynamically allocate memory for the result if the result has unknown dimensions

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-04 14:27:21 -05:00
Tung D. Le e97df0b343
Add a pass to decompose ONNX operations (#9) 2020-03-04 10:53:59 -05:00
Gheorghe-Teodor Bercea 7c1dd0279b
Merge pull request #5 from tjingrant/update-buildbot
Use the llvm-project we know works.
2020-03-02 11:59:54 -05:00
Tian Jin 47831749ce Use the exact commit id specified in clang-ykt/llvm-project. 2020-03-03 00:18:59 +08:00
Tian Jin 04dd904ca7 Switch to new status badge. 2020-03-02 20:37:33 +08:00
Tian Jin 473fdd726d Fix DocCheck error. 2020-03-02 17:06:40 +08:00
Tian Jin 2f5d65f2e4 Invalidate cache. 2020-03-02 16:24:15 +08:00
Tian Jin ee96ffab73 Merge branch 'update-buildbot' of https://github.com/tjingrant/onnx-mlir into update-buildbot 2020-03-02 16:21:10 +08:00
Tian Jin d8b5e195d2 Upgrade MLIR commit id. 2020-03-02 16:20:58 +08:00
Tian Jin 5e2a02ecdf
Trigger buildbot 2020-03-02 15:00:31 +08:00
Tian Jin f856f84c55 Use the llvm-project we know works. 2020-03-02 14:28:36 +08:00
Tung D. Le 5357fc1421
Use SqrtOp in Standard dialect (#108)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-26 12:03:24 -05:00
Tung D. Le 0c4a010283
Remove rank constraints in gemm fusion (#101)
* Remove rank constraints in gemm fusion

* Add an MLIR test

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-26 11:40:52 -05:00
Tung D. Le 24d89625e3
Remove redundant lower_frontend_to_krnl since we reorganized it (#99)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-26 16:32:06 +08:00
chentong319 04d9e2f341
Merge pull request #84 from chentong319/shapeinference-pad
Shape inference for pad with constant pads
2020-02-25 19:32:34 -05:00
chentong 4edc97f3de Merge branch 'shapeinference-pad' of github.com:chentong319/ONNF into shapeinference-pad 2020-02-25 17:46:44 -05:00
chentong 3abbf1c0e9 put the common code into a helper function 2020-02-25 17:43:49 -05:00
chentong 4079ee1f26 Merge remote-tracking branch 'upstream/master' into shapeinference-pad 2020-02-25 15:54:18 -05:00
Alexandre Eichenberger 3a88361b17
use input/output operation names, use helper for attribute function and int values (#106) 2020-02-25 15:46:11 -05:00
Alexandre Eichenberger 3b1c29c078
Using attribute setters for maxpool (#105)
* using attribute setters for maxpool

* fix typos, added handling of storage order, simplified code
2020-02-25 14:33:48 -05:00
Tian Jin e02aa87748
Update gitignore file to ignore Filesystem artifacts and python related temporary files. (#103)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-25 11:18:37 -05:00
Gheorghe-Teodor Bercea 907104d7e8
Merge branch 'master' into shapeinference-pad 2020-02-25 11:14:28 -05:00
Gheorghe-Teodor Bercea ee3e140ddb
[NFC] Change structure of conversion folder. (#96)
* Change structure of conversion folder.

* Fix comments.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-25 10:38:08 -05:00
Gheorghe-Teodor Bercea 32f08bcf0c
Clean-up code. (#98) 2020-02-25 09:54:29 -05:00
Gheorghe-Teodor Bercea 0d307d1183
Set flag to true when definition is emitted. (#97) 2020-02-25 09:47:42 -05:00
Tung D. Le a720f9a7b2
Remove special GemmNoBias since we can handle it using NoneType bias (#100)
* Remove special GemmNoBias since we can handle it using NoneType bias

* Remove GemmNoBias from onnx.md

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-25 13:20:43 +08:00
Tian Jin 732317cd5a
Transition to ONNX-1.6.0. (#95)
* Transition to ONNX-1.6.0.

* Use the version of ONNX inside ONNF when running backend tests.

* Install quietly and with sudo privilege.
2020-02-25 13:04:15 +08:00
Gheorghe-Teodor Bercea 1ad7989fc5
Merge branch 'master' into shapeinference-pad 2020-02-24 17:22:00 -05:00
Alexandre Eichenberger fcb5f35993
Introduce helper class to generate KRNL code and apply it to Convolution (#93)
* helper to gen krnl code, applied to conv

* suggested changes, name, removed set insertion point

* format

* suggested changes

* added comments and made a small name change
2020-02-24 17:20:15 -05:00
Gheorghe-Teodor Bercea d4f8fef947
Merge branch 'master' into shapeinference-pad 2020-02-24 16:13:21 -05:00
Tian Jin 9c398c0121
Support Optional Inputs (#94)
* 1. Combine variadicIn/Out with expectedNumOperands/Results to simplify import function arguments.
2. Generic improvements to code readability in gen_doc.py.

* Update ONNX Dialect doc.

* Remove redundant code in ImportNode.

* Prettify op_build_table.inc.

* 1. Remove irrelevant code in gen_doc.py

* Refactor code to be more readable.

* Further refactoring for readability improvements.

* Allow gemm to have an optional operand (bias term), and include an example of a declarative optimization pattern targeting gemm with the bias term omitted.

* Make shape inference/lowering of gemm op compatible with optional operand declaration.

* Apply canonicalization again after lowering from onnx -> std dialects.

* Make hasBias compatible with the situation of GemmNoBias op.

* Update doc.

* Add a canonicalization test.

* Remove special handler for importing Gemm op, as it's redundant now.
2020-02-24 23:46:48 +08:00
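The optional-bias handling above follows the ONNX Gemm definition, where the C operand may be absent. A minimal NumPy sketch of those reference semantics (an illustration of the operator, not the project's lowering code):

```python
import numpy as np

def gemm_reference(A, B, C=None, alpha=1.0, beta=1.0, transA=0, transB=0):
    """ONNX Gemm reference: Y = alpha * A' * B' (+ beta * C when the bias is given)."""
    A = A.T if transA else A
    B = B.T if transB else B
    Y = alpha * (A @ B)
    if C is not None:           # bias omitted -> plain scaled matrix product
        Y = Y + beta * C        # C is broadcastable to the shape of Y
    return Y

Y_no_bias = gemm_reference(np.ones((2, 3)), np.ones((3, 4)))
Y_bias = gemm_reference(np.ones((2, 3)), np.ones((3, 4)), C=np.zeros(4))
```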
chentong b3df3c64b5 Merge branch 'master' of github.com:clang-ykt/ONNF into shapeinference-pad 2020-02-24 09:26:45 -05:00
chentong319 5ab7fe37c4
Merge branch 'master' into shapeinference-pad 2020-02-21 09:36:41 -05:00
chentong 2281cc060f Merge branch 'master' of github.com:clang-ykt/ONNF into shapeinference-pad
Conflicts:
	src/pass/shape_inference_pass.cpp
2020-02-21 09:30:40 -05:00
Tung D. Le 479dd5e35a
Add result type inference to op definition (#87)
* Add result type inference to op definition

* Edit MLIR tests

* Fix result type for Mul

* Format comments

* Return UnrankedTensorType as result type

* Just for testing -split-input-file

* Undo: Just for testing -split-input-file

* Extract a function, get_operand_ins, that gets operand types; rewrite gen_attr_ins function

* Generate custom builders

* Call existing build methods

* Add comments

* Minor changes

* Generate build methods with attributes

* Add support of variadic type

* Do not generate custom build methods for ops having only attributes

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-21 22:28:24 +08:00
Tung D. Le aea6479ad3
Lower BatchNormalization (test mode) to Krnl dialect (#70)
* Add ONNXBatchNormalizationTestModeOp and its shape inference

* Lower batchnormalization test mode

* re-use scale, bias, mean, and variance

* Add MLIR tests

* Add e2e tests

* fix typos

* Fix a bug in MLIR tests

* Change type from int to int64_t for indices

* Uncomment e2e tests due to segmentation fault

* Revise the code

* [Tian] Fix segmentation fault in e2e tests

* Re-generate onnx.md to include BatchNormalizationTestModeOp

* Reverse an unintentional change

* Fix some typos in comments

* Use convertToMemRefType from the master branch

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-20 11:45:40 -05:00
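For reference, BatchNormalization in test mode reduces to a per-channel affine transform; a minimal NumPy sketch of that formula (illustrative only, assuming NCHW layout, not the Krnl lowering itself):

```python
import numpy as np

def batchnorm_test_mode(x, scale, bias, mean, var, epsilon=1e-5):
    """x is NCHW; scale, bias, mean, and var are per-channel vectors of length C."""
    shape = (1, -1) + (1,) * (x.ndim - 2)   # broadcast channel params over N and spatial dims
    return (scale.reshape(shape) * (x - mean.reshape(shape))
            / np.sqrt(var.reshape(shape) + epsilon) + bias.reshape(shape))

x = np.random.rand(2, 3, 4, 4).astype(np.float32)
y = batchnorm_test_mode(x, np.ones(3), np.zeros(3), np.zeros(3), np.ones(3))
```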
Tung D. Le f1d20e368f
Add support of GemmNoBias (#91)
* Add support of GemmNoBias

* Fix a wrong indentation
2020-02-20 10:55:24 -05:00
Tung D. Le a3f042220e
Get MemRefType for result types (#69)
* Get memreftype for result types

* Revise

* Replace convertToMemRefType

* Use convertToMemRefType in ONNXConvNoBiasOpLowering

* Merge with the master branch

* Reverse an unintentional change

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-20 21:44:01 +08:00
Gheorghe-Teodor Bercea b28c6906b4
Fix building ONNF with latest LLVM/MLIR (#89)
* Fix build and link errors.

* Fix end to end tests.

* Fix indentation.

* Fix type conversion.

* Use newest LLVM version.

* Use newest LLVM version.
2020-02-19 18:15:02 -05:00
Gheorghe-Teodor Bercea da037ffc7d
Merge branch 'master' into shapeinference-pad 2020-02-19 13:46:00 -05:00
Tung D. Le b9f2f25b56
[NFC] Categorize ONNX ops lowering (#80)
* Create two categories: elementwise and tensor

* typos

* Create directories for categories

* Edit comments

* Extract a function that creates a KrnlIterateOp

* Add comments

* Extract some common parts

* Revise softmax

* Add reduction.inc

* Move lower-frontend to lib/conversion

* Move  directory to  directory

* Change file/directory names

* Comment format

* Add matmul.inc
2020-02-19 15:17:48 +08:00