Commit Graph

69 Commits

Author SHA1 Message Date
Tung D. Le 6bd9471262
Lower ReduceMean op to Krnl dialect (#318)
* Improve support for krnl.dim (#317)

* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Make krnl dim more robust.

* Format.

* Update comments.

* Change pass name.

Signed-off-by: Tung D. Le <tung@jp.ibm.com>

* Initial Location info support (#302)

* NFC: Attribute cleanup (remove references of attributes) (#286)

* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* remove & (ref) for Attributes

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Syntax highlighting for mlir code in README (#276)

* Syntax highlighting for mlir code in README

* Restart Jenkins

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* use print not dump

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add semicolon

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* syntax

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add code to preserve locations

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Emit the dynamic memory pool (#290)

* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add support for bundling dynamic memory pools.

* Add dynamic bundling.

* Clean-up code.

* Clean-up file.

* Add test for bundling dynamic memory pool.

* Fixes. Simplify data structure. Add mixed test.

* Remove unused import.

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Fix wrong type for llvm::loadop (#293)

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Update llvm commit ID to 1d01fc1 (#292)

* Fix for LLVM revision D85495

* Fix for LLVM revision D86121

* Fix for LLVM revision D85622 (f9dc2b7)
TODO: Change preloadDialectsInContext to false

Memo for previous fixes: D86121 (250f43d), D85495 (575b22b)

* clang-format

* Update llvm commit ID of README and clone-mlir.sh

* Updated llvm commit ID of README.md

* Fix for passing backend tests

* Removed the commented code

* Empty commit for triggering rebuild

* Test multi-stage travis build

* Specify stage order.

* Empty commit for triggering rebuild

* Update prereq.s390x.Dockerfile

Make it possible to execute s390x prereq docker multiple times.

* Build prereq for each arch

* Fix multi-arch prereq build.

* timeout at 40m

* Update .travis.yml

* add ppc64le prereq builder

* Run ppc docker prereq build multiple times

* Do not test branch update unless it's master.

* Fix dockerfile.

* Fix typo in travis.yml.

* Fix ppc64 docker file

* Update .travis.yml

* turn off metacopy on ppc64le

* Update .travis.yml

* Turn off metacopy.

* Turn off metacopy inside Dockerfile in ppc64.

* No sudo in Docker.

* Remove metacopy config from Dockerfile.

* Change base image to be bionic.

* Using newer linux distro for ppc64.

* Turn off metacopy in before_install.

* Fix sudo permission issue.

* Run docker info.

* Allow amd64 docker file to be built multiple times

* Support building amd64 prereq.

* Fix amd64 docker file typo.

* fix ppc64le dockerfile typo.

* timeout from 40m -> 30m

* 40m->30m

* 40m->30m

* fix bug preventing incremental build.

* fix bug preventing incremental build.

* Bump CircleCI cache version.

* Push to production prereq container repository and condition prereq docker rebuild on commit message.

* Rebuild prereq docker.

* Move default script to top-level.

* Python not properly installed.

* amd64 -> x86

* Rebuild prereq docker.

* Rebuild prereq docker.

* Rebuild prereq docker.

* Restart all CI.

* Disallow cache on Jenkins docker build.

* Restart zJenkins.

* Restart zJenkins.

Co-authored-by: Haruki Imai <imaihal@jp.ibm.com>
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Using onnx-mlir through incremental stages (#257)

* Add lowering of Vector dialect for lower-all-llvm pass

* Fix generating CallOp instructions when return type is void

* Fix lowering of memref

* Reformat using clang-format

* Record more context.

* Reflow comments.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Dropout elimination & Conv Bugfix (#297)

* Dropout elimination.

* Test VGG19.

* Add shufflenet.

* Fix grouped convolution bug.

* Fix lit test failure.

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Rewrite shape and size OP (#285)

* add shape inference

* Revert "add shape inference"

This reverts commit f9d42f39e68e14b5648abccfc8617fff00244d16.

* add rewrite rules

* test cases

* format

* add constraint

* response to review

* response to review

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* initial code for handling custom ops (#288)

* initial code for handling custom ops

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* ShapeInference for SizeOp (#299)

* add shape inference

* Revert "add shape inference"

This reverts commit f9d42f39e68e14b5648abccfc8617fff00244d16.

* shape inference

* test case

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Gather ONNX to Kernel Lowering (#294)

* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* initial implementation of gather

* added tests

* format

* remove affine load for second load, as it uses an indirection

* changes suggested by reviewers

* remove backend tests until I can verify them locally

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* fix option spelling
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* braces in wrong place
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* remove duplicate code from lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Simplify lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* remove attributes from lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add onnx-mlir-opt to tool names
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add printIR to second RUN
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* redo adding printIR
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* fix bug

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* fix typo in test

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: Tung D. Le <tung@jp.ibm.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Haruki Imai <imaihal@jp.ibm.com>
Co-authored-by: Kevin Wu <6334443+kwu91@users.noreply.github.com>
Co-authored-by: chentong319 <chentong@us.ibm.com>
Signed-off-by: Tung D. Le <tung@jp.ibm.com>

* Support ReduceMean

Signed-off-by: Tung D. Le <tung@jp.ibm.com>

* Add lit tests

Signed-off-by: Tung D. Le <tung@jp.ibm.com>

* Fix unknown dimensions for type f32

Signed-off-by: Tung D. Le <tung@jp.ibm.com>

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Kevin O'Brien <caomhin@us.ibm.com>
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: Haruki Imai <imaihal@jp.ibm.com>
Co-authored-by: Kevin Wu <6334443+kwu91@users.noreply.github.com>
Co-authored-by: chentong319 <chentong@us.ibm.com>
2020-10-10 22:09:42 -04:00
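
For context on the ReduceMean lowering above: the op averages its input over a set of axes, which a lowering typically realizes as a sum loop nest followed by a division by the reduced element count. A minimal NumPy sketch of the semantics only (function name and defaults are illustrative, not the generated Krnl code):

    import numpy as np

    def reduce_mean(x, axes=None, keepdims=1):
        # ONNX ReduceMean: average over the given axes; reduce over all axes when none are given.
        axes = tuple(range(x.ndim)) if axes is None else tuple(axes)
        return np.mean(x, axis=axes, keepdims=bool(keepdims))

    x = np.arange(12, dtype=np.float32).reshape(3, 4)
    print(reduce_mean(x, axes=[1]))  # row means, shape (3, 1)
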
chentong319 931127c7e9
Lower tile to Krnl (#308)
* alloc for unknown shape

* determine affine

* format

* test for unknown input

* Update test.py

* fix the expression

Signed-off-by: chentong <chentong@us.ibm.com>

* fix test lit

Signed-off-by: chentong <chentong@us.ibm.com>

* remove affine load

Signed-off-by: chentong <chentong@us.ibm.com>

* format

Signed-off-by: chentong <chentong@us.ibm.com>

* fix test

Signed-off-by: chentong <chentong@us.ibm.com>

* fix Affineload

Signed-off-by: chentong <chentong@us.ibm.com>

* affine for alternative

Signed-off-by: chentong <chentong@us.ibm.com>

* use DimOp

Signed-off-by: chentong <chentong@us.ibm.com>

* change test case

Signed-off-by: chentong <chentong@us.ibm.com>

* fix test

Signed-off-by: chentong <chentong@us.ibm.com>

* use more auto type

Signed-off-by: chentong <chentong@us.ibm.com>

* fix affine load

Signed-off-by: chentong <chentong@us.ibm.com>

* small fix

Signed-off-by: chentong <chentong@us.ibm.com>

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-10-05 00:50:59 -04:00
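
The Tile lowering above must compute each output extent as input_dim * repeats[i], including when the input shape is only known at runtime. A NumPy sketch of the op's semantics (illustrative only):

    import numpy as np

    def tile(x, repeats):
        # ONNX Tile: output dimension i has extent x.shape[i] * repeats[i].
        assert len(repeats) == x.ndim
        return np.tile(x, repeats)

    x = np.array([[0, 1], [2, 3]])
    print(tile(x, [2, 3]).shape)  # (4, 6)
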
daquexian cb3d1e4f64
Import graph output type from protobuf (#333)
* import output type

Signed-off-by: daquexian <daquexian566@gmail.com>

* rename input->value_info, update doc

Signed-off-by: daquexian <daquexian566@gmail.com>

* infer shape on return op inputs

Signed-off-by: daquexian <daquexian566@gmail.com>

* import output type from protobuf only if it has shape

Signed-off-by: daquexian <daquexian566@gmail.com>

* fix wrong gather test

Signed-off-by: daquexian <daquexian566@gmail.com>

* add comments

Signed-off-by: daquexian <daquexian566@gmail.com>
2020-10-03 22:21:15 +07:00
Alexandre Eichenberger f0c5b99229
Gather: fix for negative indices (#313)
* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* changes to support negative indices

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>

* use krnl.dim now

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>

* move comment

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>

* updated test for krnl-dim pattern

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-09-24 14:02:49 -04:00
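
The negative-index support above amounts to remapping an index i to i + dim_size when i < 0, using the runtime dimension size (hence the switch to krnl.dim). A NumPy sketch under that reading (function name illustrative):

    import numpy as np

    def gather(data, indices, axis=0):
        # ONNX Gather allows negative indices: -1 addresses the last element along the axis.
        dim = data.shape[axis]
        idx = np.where(indices < 0, indices + dim, indices)
        return np.take(data, idx, axis=axis)

    data = np.array([10, 20, 30, 40])
    print(gather(data, np.array([-1, 0])))  # [40 10]
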
Kevin O'Brien 17383768df
Initial Location info support (#302)
* NFC: Attribute cleanup (remove references of attributes) (#286)

* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* remove & (ref) for Attributes

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Syntax highlighting for mlir code in README (#276)

* Syntax highlighting for mlir code in README

* Restart Jenkins

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* use print not dump

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add semicolon

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* syntax

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add code to preserve locations

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Emit the dynamic memory pool (#290)

* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add support for bundling dynamic memory pools.

* Add dynamic bundling.

* Clean-up code.

* Clean-up file.

* Add test for bundling dynamic memory pool.

* Fixes. Simplify data structure. Add mixed test.

* Remove unused import.

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Fix wrong type for llvm::loadop (#293)

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Update llvm commit ID to 1d01fc1 (#292)

* Fix for LLVM revision D85495

* Fix for LLVM revision D86121

* Fix for LLVM revision D85622 (f9dc2b7)
TODO: Change preloadDialectsInContext to false

Memo for previous fixes: D86121 (250f43d), D85495 (575b22b)

* clang-format

* Update llvm commit ID of README and clone-mlir.sh

* Updated llvm commit ID of README.md

* Fix for passing backend tests

* Removed the commented code

* Empty commit for triggering rebuild

* Test multi-stage travis build

* Specify stage order.

* Empty commit for triggering rebuild

* Update prereq.s390x.Dockerfile

Make it possible to execute s390x prereq docker multiple times.

* Build prereq for each arch

* Fix multi-arch prereq build.

* timeout at 40m

* Update .travis.yml

* add ppc64le prereq builder

* Run ppc docker prereq build multiple times

* Do not test branch update unless it's master.

* Fix dockerfile.

* Fix typo in travis.yml.

* Fix ppc64 docker file

* Update .travis.yml

* turn off metacopy on ppc64le

* Update .travis.yml

* Turn off metacopy.

* Turn off metacopy inside Dockerfile in ppc64.

* No sudo in Docker.

* Remove metacopy config from Dockerfile.

* Change base image to be bionic.

* Using newer linux distro for ppc64.

* Turn off metacopy in before_install.

* Fix sudo permission issue.

* Run docker info.

* Allow amd64 docker file to be built multiple times

* Support building amd64 prereq.

* Fix amd64 docker file typo.

* fix ppc64le dockerfile typo.

* timeout from 40m -> 30m

* 40m->30m

* 40m->30m

* fix bug preventing incremental build.

* fix bug preventing incremental build.

* Bump CircleCI cache version.

* Push to production prereq container repository and condition prereq docker rebuild on commit message.

* Rebuild prereq docker.

* Move default script to top-level.

* Python not properly installed.

* amd64 -> x86

* Rebuild prereq docker.

* Rebuild prereq docker.

* Rebuild prereq docker.

* Restart all CI.

* Disallow cache on Jenkins docker build.

* Restart zJenkins.

* Restart zJenkins.

Co-authored-by: Haruki Imai <imaihal@jp.ibm.com>
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Using onnx-mlir through incremental stages (#257)

* Add lowering of Vector dialect for lower-all-llvm pass

* Fix generating CallOp instructions when return type is void

* Fix lowering of memref

* Reformat using clang-format

* Record more context.

* Reflow comments.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Dropout elimination & Conv Bugfix (#297)

* Dropout elimination.

* Test VGG19.

* Add shufflenet.

* Fix grouped convolution bug.

* Fix lit test failure.

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Rewrite shape and size OP (#285)

* add shape inference

* Revert "add shape inference"

This reverts commit f9d42f39e68e14b5648abccfc8617fff00244d16.

* add rewrite rules

* test cases

* format

* add constraint

* response to review

* response to review

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* initial code for handling custom ops (#288)

* initial code for handling custom ops

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* ShapeInference for SizeOp (#299)

* add shape inference

* Revert "add shape inference"

This reverts commit f9d42f39e68e14b5648abccfc8617fff00244d16.

* shape inference

* test case

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Gather ONNX to Kernel Lowering (#294)

* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* initial implementation of gather

* added tests

* format

* remove affine load for second load, as it uses an indirection

* changes suggested by reviewers

* remove backend tests until I can verify them locally

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* fix option spelling
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* braces in wrong place
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* remove duplicate code from lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Simplify lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* remove attributes from lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add onnx-mlir-opt to tool names
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add printIR to second RUN
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* redo adding printIR
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* fix bug

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* fix typo in test

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: Tung D. Le <tung@jp.ibm.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Haruki Imai <imaihal@jp.ibm.com>
Co-authored-by: Kevin Wu <6334443+kwu91@users.noreply.github.com>
Co-authored-by: chentong319 <chentong@us.ibm.com>
2020-09-23 18:58:27 -04:00
NathanielMcVicar 3491b90b1e
Support LLVM as of 7dcd0042 (#309)
* Update to support LLVM as of 7dcd0042

Fixes for upstream changes to mlir.

- New pass registration method from https://reviews.llvm.org/D85622
- Integer attributes are now C types when possible https://reviews.llvm.org/D86739

Signed-off-by: Nathaniel McVicar <namcvica@microsoft.com>

* Fix for checkclang

* Windows incremental build fix from @max-ku

* Remove MLIRShapeToSCF lib

* Missed a getSExtValue call on an attribute that is now a C type

* Rebuild prereq docker.

* Bump CircleCI cache version.

* Update hash for Windows build

Signed-off-by: Nathaniel McVicar <namcvica@microsoft.com>

* Bump CircleCI cache version again.

* Rebuild prereq docker.

* Update README.md

* Update README.md

* Undo edits to ONNXOps.td.inc.

* Undo changes to ONNXOps.td.inc.

* Fix cast op TableGen.

* Tweak tablegen definition of Cast.

* Use explicitly signed integer as attributes.

* Move all signless attribute to explicitly signed attribute.

* Import ONNX int attribute as SI64 attribute.

* Make Conv.group attr use SI64 attr.

* Fix conv test.

* Fix DocCheck complaint.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-09-22 23:42:50 +07:00
Prashant Kumar 4cc16aceb7
[MLIR] Add SizeOp conversion from ONNX dialect to Krnl dialect (#295)
* [MLIR] Add SizeOp conversion from ONNX dialect to Krnl dialect

Added ONNXSizeOp conversion from ONNX dialect to Krnl dialect. This op is added as a part of --convert-onnx-to-krnl pass.

Signed-off-by: Prashant Kumar <pk5561@gmail.com>

* Add unit tests for Size op.

* Remove unit tests.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-09-21 16:55:21 +07:00
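
ONNX Size returns the total number of elements of its input as a scalar, i.e. the product of its dimensions. A one-line NumPy sketch of the semantics (not the Krnl lowering itself):

    import numpy as np

    def onnx_size(x):
        # Size = product of all dimensions, returned as a scalar int64.
        return np.int64(x.size)

    print(onnx_size(np.zeros((2, 3, 4))))  # 24
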
Tung D. Le 66074da3ac
Lower ONNXConstantOfShapeOp to Krnl dialect (#296)
* Lower ONNXConstantOfShapeOp to Krnl dialect

* Change a variable name

* Add comments to lit tests

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-09-19 12:47:39 -04:00
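
ConstantOfShape builds a tensor whose shape is given by a runtime 1-D int64 input and fills it with a single value (0 by default). A NumPy sketch of the semantics (illustrative; dtype handling is simplified):

    import numpy as np

    def constant_of_shape(shape, value=0.0, dtype=np.float32):
        # The shape arrives as a 1-D int64 tensor at runtime.
        return np.full(tuple(int(d) for d in shape), value, dtype=dtype)

    print(constant_of_shape(np.array([2, 3], dtype=np.int64), value=1.0))
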
Alexandre Eichenberger 3a5aa7ee31
Gather ONNX to Kernel Lowering (#294)
* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* initial implementation of gather

* added tests

* format

* remove affine load for second load, as it uses an indirection

* changes suggested by reviewers

* remove backend tests until I can verify them locally

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-09-11 15:36:23 -04:00
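
The "indirection" mentioned in the bullets above is the defining feature of Gather: the inner load address comes from a value that was itself loaded from the indices tensor, which is why that access cannot be expressed as an affine load. A scalar-loop NumPy sketch of Gather along axis 0 (illustrative only):

    import numpy as np

    def gather_axis0(data, indices):
        out = np.empty(indices.shape + data.shape[1:], dtype=data.dtype)
        for i in np.ndindex(*indices.shape):
            # The second "load" is indirect: the row index comes from the indices tensor.
            out[i] = data[indices[i]]
        return out

    data = np.arange(6).reshape(3, 2)
    print(gather_axis0(data, np.array([2, 0])))  # [[4 5] [0 1]]
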
Tian Jin 5e11429d77
Dropout elimination & Conv Bugfix (#297)
* Dropout elimination.

* Test VGG19.

* Add shufflenet.

* Fix grouped convolution bug.

* Fix lit test failure.
2020-09-10 14:47:30 +08:00
Aman LaChapelle 24d0a2ac71
Add shape inference and names (#266)
* Add shape inference and names

 - Add shape inference for PRelu
 - Fix shape inference for group conv for ConvTranspose
 - Add input and output names for graphs (functions)
 - Add support for (u)int8 tensor attributes

* Fix format issues

* Revert formatting for gen_onnx_mlir.py

* Pads can have ArrayAttr and DenseElementsAttr so support both

* NumInputs is the number of graph inputs that don't have initializers

* Add test for 2D batchnorm

* Fix typo in define_loops in new 2d BN test

* Change 'name' to 'onnx_node_name'

* Fix Batchnorm for 2D I/O and add lowering test

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-08-27 15:46:27 -04:00
Anh Leu 2ee725d939
Add CastOp lowering (#259)
* move scalerop to decompose

* change clang format

* change clang format

* add shape inference for scaler op

* fixing generated onnxop

* generate onnx.md

* add benefit for scaler decompose and simplify scaler shape inference

* cast rewrite only for float

* add cast op same type rewrite rule

* working on cast lowering

* cast lowering working

* correct onnx version

* update onnx md

* add test for tensor<10xf64>
2020-08-11 16:07:13 -04:00
Tian Jin 58ee62fb49
[NFC] Rename passes for stylistic consistency. (#232)
* lower-frontend -> convert-onnx-to-krnl

* lower-all-llvm -> convert-krnl-to-llvm

* lower-krnl -> convert-krnl-to-affine

* Name fix.
2020-07-31 21:37:35 +08:00
Gheorghe-Teodor Bercea a58594ec81
Revert "Emit allocs at the top of functions (#222)" (#226)
This reverts commit b27e57cc4f.
2020-07-21 18:30:39 -04:00
Gheorghe-Teodor Bercea b27e57cc4f
Emit allocs at the top of functions (#222)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add support for moving dynamic alloca instructions to top of functions.

* Fix memory pooling tests.

* Various fixes.

* Fix lit tests.

* More test fixes.

* Reformat.

* Reformat some more.

* Fix issue with TestConv and split-input-file.

* Use smart pointers.

* Remove redundant pointer.

* Reformat.

* Add initMap description.

* Clean up tests.
2020-07-20 19:24:17 -04:00
Tian Jin 01a4977c74
Remove optimize_loops/return_loops op. (#200)
* Remove optimize_loops/return_loops op in elementwise ops lowering and fix tests in onnx_lowering.mlir.

* Fix all tests.

* Remove all occurences of def_loops/return_loops.

* Fix test.

* Fix comments for defineLoops & emitKrnlLoopsAndIterationForOperand function.

* Remove emitOptimizedLoops.

* Allow not specifying optimizedLoops when creating KrnlIterateOperandPack.

* Fix style.

* Make BuildKernelLoop helper not emit optimize/return_loop operations & retire emitKrnlLoopsAndIterationForOperand by replacing it with BuildKernelLoop.

* DefineLoops -> DefineLoopsEx, remove redundant emitKrnlLoopsAndIterationForOperand function.

* BuildKrnlLoop API name update.

* Tweak comments.

* Remove unused withEmptyOptimization flag.

* Better comment for BuildKrnlLoop.

* Fully remove krnl.return_loops/optimize_loops op.

* Trigger Windows Build

* Bump windows ci python version.
2020-07-08 12:49:15 +08:00
Aaron Smith 8e6642b2bc
Update llvm version (#187)
* Update llvm version

* Update git hash for llvm-project

* Update option handling

* Update LLVM version

* Update tests

* Update git hash

* Update docs

* clang-format

* Fix operand adaptor

* Fix dim with constant

* Update LSTM.cpp

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-07 21:26:00 +08:00
Tung D. Le 7e05f371de
Replace std.load/std.store by affine.load/affine.store (#180)
* Move to more recent LLVM ID (May 15)

* clang-format

* Bump cache version up

* Update readme

* Fix doc check

* Move to a newer commit id

* Update LoopToStandard -> SCFToStandard

* Change MLIRSideEffects to MLIRSideEffectInterfaces

* Add AffineScope trait to KrnlIterateOp

* [ElementWise] Load/Store op to AffineLoad/AffineStore op

* [Gemm, MatMul, Reduction, Softmax] Load/Store op to AffineLoad/AffineStore op

* [Concat] Load/Store op to AffineLoad/AffineStore op

* [Pad, PadConstantValuePad, Reshape, Transpose] Load/Store op to AffineLoad/AffineStore op

* [LSTM] Load/Store op to AffineLoad/AffineStore op

* [Conv, Norm, Pooling] Load/Store op to AffineLoad/AffineStore op

* Add affine-loop-fusion pass

* Use Load/Store for scalar

* Use Load/Store for scalar

* Fix lit tests

* Unknown dimensions for broadcasting ops

* Affine Load/Store for scalar memref

* clang-format

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-05 16:20:21 +08:00
Tung D. Le 2c8f5701bd
Lower SqueezeOp to Krnl dialect (#164)
* Lower Squeeze op to Krnl dialect

* Emit tensor size as a single constant; add a lit test for unknown dimensions

* Code style

* Special case where the input is only used by this squeeze op

* Remove squeeze-in-place optimization

* Update ConvertONNXToKrnl.cpp

Tweak to re-run tests.

* Trigger buildbot re-run.

* Re-run CI

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-03 16:26:41 +08:00
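
Squeeze removes size-1 dimensions, either all of them or only the listed axes. A NumPy sketch of the semantics (illustrative):

    import numpy as np

    def squeeze(x, axes=None):
        # ONNX Squeeze: drop the listed size-1 axes, or every size-1 axis when none are listed.
        return np.squeeze(x, axis=None if axes is None else tuple(axes))

    print(squeeze(np.zeros((1, 3, 1, 4)), axes=[0, 2]).shape)  # (3, 4)
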
chentong319 2e08b2112c
String type (Ready for Review) (#182)
* string type from tensorflow

* simplify type

* parser and print

* gen StringType for tablegen

* onnx to onnx-mlir type

* add namespace

* allow all integer type

* dialect document

* add test case

* format

* more precise type for ONNXOp

* format

* enable the failed test

* update comment

* update onnx.md

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-06-25 16:34:37 -04:00
Tung D. Le 8c4d527eea
Lower SplitOp to Krnl dialect (#155)
* Fix importing variadic output

* Lower splitop

* Support unknown dimension and add lit tests

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-06-11 10:57:20 +08:00
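
Split slices one axis into consecutive pieces whose sizes come from the split attribute, producing one output per piece, which is why the variadic-output import fix above was needed. A NumPy sketch (the equal-split fallback here is illustrative, not the exact ONNX default):

    import numpy as np

    def split(x, axis=0, sizes=None):
        # ONNX Split: 'sizes' lists the extent of each output along 'axis'.
        if sizes is None:
            return np.split(x, 2, axis=axis)   # fallback: two equal pieces (illustrative)
        cuts = np.cumsum(sizes)[:-1]           # boundaries between consecutive pieces
        return np.split(x, cuts, axis=axis)

    parts = split(np.arange(10), sizes=[3, 7])
    print([p.shape for p in parts])  # [(3,), (7,)]
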
Tung D. Le bb17fa965f
Add AffineScope trait to KrnlIterateOp and enable affine-loop-fusion pass (#140)
* Make KrnlIterate's IVs valid to AffineLoad/AffineStore

* [Unary elementwise op] Load/Store -> AffineLoad/AffineStore

* [Conv] Load/Store -> AffineLoad/AffineStore

* Add affine-loop-fusion pass

* typos

* Mistake when merging branch master

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-06-08 15:36:27 +08:00
Alexandre Eichenberger 20dd6544aa
conv bug fix (#154) 2020-05-28 07:34:58 +08:00
chentong319 6099efd91b
Express some basic features of an Operation in TableGen file (#103)
* change operation definition

* change importer

* default type inference

* file format

* generate types for input/output

* generate the mapping for operation output type

* remove debug message for gen_doc.py

* update the dialect doc

* add support Complex

* format

* update document

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-21 22:03:16 -04:00
chentong319 23bea50404
Implement PadOp based on attribute promotion (#71)
* enable promote attr for pad

* use optional arguments for pad

* shape inference for pad

* Lowering Pad

* format file

* use DenseTensor for the attribute

* use Pad in ONNXRewrite

* fix the merge conflict

* fix the attr given to constantOp

* handle ONNXConstantOp in attribute promotion

* Fix bug when AttributePromotion is called more than once

* update ONNXOps.td.inc with correct version of onnx

* update onnx.md

* responses to review

* fix the build error

* change the implementation of Pad

* delete commented out code

* clang format

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-05-15 13:19:28 +08:00
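
ONNX Pad takes its pad amounts as one flat list: all per-axis "begin" amounts first, then all "end" amounts. A NumPy sketch of constant-mode padding under that layout (illustrative only):

    import numpy as np

    def onnx_pad(x, pads, value=0.0):
        # pads layout: [x1_begin, x2_begin, ..., x1_end, x2_end, ...]
        n = x.ndim
        width = [(pads[i], pads[n + i]) for i in range(n)]
        return np.pad(x, width, mode="constant", constant_values=value)

    print(onnx_pad(np.ones((2, 2)), pads=[0, 1, 0, 1]).shape)  # (2, 4)
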
Tung D. Le 4d8b855c17
Unify codes in shape inference and conversion (#98)
* Use AffineMap

* Shared AffineMap

* AffineMap for Conv/Pooling

* Create helper files

* Remove changes for Relu

* Remove redundant includes

* Use AffineMap for AveragePool's shape inference

* Add MLIR tests for unknown dimension case

* Extract a method AffineMapIntConstant

* Comment stylist and include path

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-14 17:31:33 +08:00
Tung D. Le d65a6e72dd
Specialize the op lowering logic for element-wise operations (#118)
* Specialize the op lowering logic for elementwise operations

* Fix clang-format error.

* Update tests for LSTM since LSTM uses element-wise ops

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-05-14 13:00:15 +08:00
Tung D. Le 24343177b8
Lower LSTMOp to Krnl dialect (#73)
* Support dilations and enable e2e tests

* Fix allocating memory for dynamic shape

* Edit comments

* Do dilation by computing an offset from kernel index

* Correct dilation formula, add an example of out-of-bound access, and add a test for dilation

* Import optional outputs as NoneType

* Shape inference for ONNXLSTM

* Edit ONNXLSTM::inferShape()

* Shape inference for ONNXLSTMOp

* Create a common function for inferring shape for RNN ops

* CheckInsertDeallocation for a specific result

* Allocate memory for LSTM

* First round of lowering

* Allocate memory for hidden and cell states

* Test with custom Tanh

* Fix an error in Ct's formula

* Add E2E tests

* Return outputs

* Refactor the code

* Enable E2E tests

* Support reverse and bidirectional directions

* Minor revision

* Return all intermediate hidden states

* Call existing activation functions

* Structs for activation functions

* Call existing activations in ONNX

* Minor revision

* Compare strings ignoring case

* Use memreftype of rank 0 for calling activation functions

* Fix getActivationPack()

* Revise the code

* Add one MLIR test

* Add MLIR tests for reverse and bidirectional modes

* Make the order of emiting instructions deterministic

* Use OperandAdaptor instead of directly use an operand index

* Use literal assignments

* Change some variable names

* Use literal assignments

* Use literal assignments

* Format the code

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-13 21:08:06 +08:00
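
The bullet above about fixing "an error in Ct's formula" refers to the cell-state update. For reference, one LSTM step with the default sigmoid/tanh activations and no peepholes can be sketched in NumPy as below; the ONNX gate order iofc is assumed, and the separate W/R biases are folded into a single bias vector for brevity:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h_prev, c_prev, W, R, b):
        # W: (4*hidden, input), R: (4*hidden, hidden), b: (4*hidden,), gates stacked as i, o, f, c.
        gates = x @ W.T + h_prev @ R.T + b
        i, o, f, c_hat = np.split(gates, 4, axis=-1)
        i, o, f, c_hat = sigmoid(i), sigmoid(o), sigmoid(f), np.tanh(c_hat)
        c = f * c_prev + i * c_hat       # Ct = ft (.) Ct-1 + it (.) ct~
        h = o * np.tanh(c)               # Ht = ot (.) tanh(Ct)
        return h, c

    h, c = lstm_step(np.zeros((1, 2)), np.zeros((1, 4)), np.zeros((1, 4)),
                     np.zeros((16, 2)), np.zeros((16, 4)), np.zeros(16))
    print(h.shape, c.shape)  # (1, 4) (1, 4)
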
Tung D. Le 64ed03295f
Fix converting type for functions with no argument (#96)
* Fix converting type for functions with no argument

* Add two tests

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-04-27 11:01:51 -04:00
Tung D. Le eac2297624
Lower MaxPooling and AveragePool to Krnl dialect using AffineMap (#38)
* Create a template for pooling and add support for AveragePool

* Edit MLIR tests for MaxPool according to the new lowering template for pooling

* Dealloc temporary variables

* Support count_include_pad for AveragePool

* Add MLIR tests for AveragePool lowering

* Make changes according to Tian's comments

* Push AffineMap as upper bound for KrnlIterateOp

* Test AffineMap to use in Pooling

* Replace the old implementation with a new one using AffineMap

* Fix the computation when dilations are non-unit

* Clean up the old code

* Remove AveragePool from Canonicalization pass

* Fix computing the end indices of a filter window

* Refactor the code for pooling

* Revise pushAffineMapBound

* Add MLIR tests

* Remove unused functions

* Fix check-onnx-backend build on x86 Linux. (#91)

* Add the split marker to test files (#90)

Co-authored-by: Tian Jin <tjingrant@gmail.com>

Co-authored-by: gongsu832 <gong_su@hotmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-04-19 21:39:34 +08:00
Tung D. Le e32f531546
Add the split marker to test files (#90)
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-04-16 15:17:27 +08:00
Alexandre Eichenberger fa8962753c
Concat lower (#82)
* implement shape inference for concat

* better checking of axis being concatenated: constant values only

* lowering of Concat with lit and backend tests

* fixes

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-04-13 11:40:39 -04:00
Tung D. Le f4fefcf713
Re-add tanh lowering (#75)
* Re-add tanh lowering

* Make the emission deterministic
2020-04-09 14:22:36 +08:00
Gheorghe-Teodor Bercea f16e79d744
Emit constant tensors as global constants (#66)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Enable unique constant variable names.

* Emit alloca for local array. Add tests.

* Comment clean-up.

* Simplify MemRef construction.

* Fix output type.
2020-04-01 13:51:06 -04:00
Alexandre Eichenberger 653fa69102
Unify Conv implementation (#54)
* fixed readme for new git repo

* conv with bias as an optional input
2020-03-26 11:03:19 -04:00
Tung D. Le 2814ea3898
Support dilations and enable the remaining e2e tests for MaxPoolSingleOut (#31)
* Support dilations and enable e2e tests

* Fix allocating memory for dynamic shape

* Edit comments

* Do dilation by computing an offset from kernel index

* Correct dilation formula, add an example of out-of-bound access, and add a test for dilation

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-18 09:55:50 -04:00
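
The "offset from kernel index" above is the standard mapping from an output position and a kernel tap to an input position; taps that land outside the input are the out-of-bound case the dilation test covers. A sketch under the usual conventions (names are illustrative):

    # Input index visited for output index o and kernel tap k:
    #   in_idx = o * stride + k * dilation - pad_begin
    # Taps that fall outside [0, input_size) are skipped.
    def visited_indices(out_idx, kernel_size, stride, dilation, pad_begin, input_size):
        idxs = [out_idx * stride + k * dilation - pad_begin for k in range(kernel_size)]
        return [i for i in idxs if 0 <= i < input_size]

    print(visited_indices(out_idx=1, kernel_size=3, stride=2, dilation=2,
                          pad_begin=1, input_size=8))  # [1, 3, 5]
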
Tung D. Le 4763e8a8bc
Lower ONNXAbsOp to Krnl dialect and enable e2e tests for ONNXReduceL1 (#18)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-17 11:12:45 -04:00
Gheorghe-Teodor Bercea 1622b9f161
[NFC] Change ONNF based names to ONNX-MLIR (#32)
* Rename onnf to onnx-mlir.

* Change workspace name.
2020-03-17 09:16:33 -04:00
Tung D. Le a65820940c
Lower ConstantOp (#28)
* Lower ConstantOp

* Refactor the code

* Edit error messages

* Check whether attribute is sparse or dense during shape inference
2020-03-12 10:58:42 -04:00
chentong319 391f565a66
Lower constant padding operation to KRNL dialect (#27) 2020-03-11 16:54:07 -04:00
Gheorghe-Teodor Bercea e4c23da4fd
Lower MaxPoolSingleOutOp to Krnl dialect (#1)
* Lower MaxPoolSingleOutOp to Krnl dialect

* Edit comments

* Update changes according to the new folder structure

* Add MLIR tests

* Support ceil_mode

* Merge the first two krnl loops into one krnl loop; remove attribute checks

* Dynamically allocate memory for the result if the result has unknown dimensions

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-04 14:27:21 -05:00
Tung D. Le 5357fc1421
Use SqrtOp in Standard dialect (#108)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-26 12:03:24 -05:00
Tung D. Le a720f9a7b2
Remove special GemmNoBias since we can handle it using NoneType bias (#100)
* Remove special GemmNoBias since we can handle it using NoneType bias

* Remove GemmNoBias from onnx.md

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-25 13:20:43 +08:00
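
Gemm computes alpha * A' * B' + beta * C with C optional, so a NoneType bias covers what GemmNoBias used to express. A NumPy sketch of those semantics (C's unidirectional broadcast is approximated by NumPy's rules):

    import numpy as np

    def gemm(A, B, C=None, alpha=1.0, beta=1.0, transA=0, transB=0):
        A = A.T if transA else A
        B = B.T if transB else B
        Y = alpha * (A @ B)
        return Y if C is None else Y + beta * C   # C is simply skipped when absent

    print(gemm(np.ones((2, 3)), np.ones((3, 4))).shape)  # (2, 4), no bias needed
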
Tung D. Le aea6479ad3
Lower BatchNormalization (test mode) to Krnl dialect (#70)
* Add ONNXBatchNormalizationTestModeOp and its shape inference

* Lower batchnormalization test mode

* re-use scale, bias, mean, and variance

* Add MLIR tests

* Add e2e tests

* fix typos

* Fix a bug in MLIR tests

* Change type from int to int64_t for indices

* Uncomment e2e tests due to segmentation fault

* Uncomment e2e tests due to segmentation fault

* Revise the code

* [Tian] Fix segmentation fault in e2e tests

* Re-generate onnx.md to include BatchNormalizationTestModeOp

* Reverse an unintentional change

* Fix some typos in comments

* Use convertToMemRefType from the master branch

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-20 11:45:40 -05:00
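
In test (inference) mode, BatchNormalization uses the stored running mean and variance, so each channel reduces to an affine transform of the input. A NumPy sketch for inputs laid out as (N, C, spatial...):

    import numpy as np

    def batchnorm_test_mode(x, scale, bias, mean, var, epsilon=1e-5):
        # Reshape the per-channel parameters so they broadcast over (N, C, spatial...).
        shape = (1, -1) + (1,) * (x.ndim - 2)
        norm = (x - mean.reshape(shape)) / np.sqrt(var.reshape(shape) + epsilon)
        return scale.reshape(shape) * norm + bias.reshape(shape)

    x = np.random.randn(2, 3, 4, 4).astype(np.float32)
    c = np.ones(3, dtype=np.float32)
    print(batchnorm_test_mode(x, c, 0 * c, 0 * c, c).shape)  # (2, 3, 4, 4)
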
Tung D. Le f1d20e368f
Add support of GemmNoBias (#91)
* Add support of GemmNoBias

* Fix a wrong indentation
2020-02-20 10:55:24 -05:00
Tung D. Le b521719587
Lower Matmul operation to Krnl dialect (#57)
* Allocate memory for matmul's result

* Group cases

* Add support of N-D x N-D, N>=2

* Revise createIterateOperandPack

* Add 1-D x 1-D

* Add 1-D x N-D

* Add MLIR tests

* Change variable names

* Change type from int to int64_t for indices

* Change variable names

* Change int64_t back to int

* Change int64_t back to int

* Change int64_t back to int

* Use decltype

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-14 10:43:17 -05:00
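
The case split above (1-D x 1-D, 1-D x N-D, N-D x N-D) mirrors NumPy's matmul rules, which ONNX MatMul follows. A quick sketch of the resulting shapes:

    import numpy as np

    print(np.matmul(np.ones(3), np.ones(3)).shape)                   # ()       : 1-D x 1-D -> scalar
    print(np.matmul(np.ones(3), np.ones((3, 5))).shape)              # (5,)     : 1-D x N-D
    print(np.matmul(np.ones((2, 4, 3)), np.ones((2, 3, 5))).shape)   # (2, 4, 5): batched N-D x N-D
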
Gheorghe-Teodor Bercea 094be4f37a
Add support for strides when emitting convolution loop nest. (#76)
* Add support for strides when emitting convolution loop nest.

* Only emit stride multiplication if strides is greater than one.

* Add test.
2020-02-11 11:53:13 -05:00
Tung D. Le adad9e24bd
Add support of negative dimensions (#66)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-11 10:37:47 -05:00
Tung D. Le 2c7046ff5f
Lowering ReductionMax, ReductionMin, ReductionProd and ReductionSum (#31)
* Shape inference for reduction

* Lower ReduceSum

* Support list-like attributes

* Add ReduceMax, ReduceMin, ReduceProd

* Add tests

* Emit errors for unsupported types

* Typos

* Add backend test

* Fix axis computation

* Update the use of attributes

* Use SmallVector

* Address stylistic comments

* Change type from int to int64_t for indices

* Change type from int to int64_t for indices
2020-02-10 21:38:19 +08:00
Gheorghe-Teodor Bercea 0272451521
Lower convolution to KRNL dialect. (#65)
* Ensure data shape is at least 4.

* First version of convolution.

* Simplify code for KRNL lowering.

* Add test without padding or strides.

* Refactor code for lowering frontend operations to KRNL dialect.

* Add test for conv with no bias and no padding.

* Add test with group greater than one.

* Address comment.
2020-02-07 16:51:32 -05:00