Commit Graph

189 Commits

Author SHA1 Message Date
daquexian cb3d1e4f64
Import graph output type from protobuf (#333)
* import output type

Signed-off-by: daquexian <daquexian566@gmail.com>

* rename input->value_info, update doc

Signed-off-by: daquexian <daquexian566@gmail.com>

* infer shape on return op inputs

Signed-off-by: daquexian <daquexian566@gmail.com>

* import output type from protobuf only if it has shape

Signed-off-by: daquexian <daquexian566@gmail.com>

* fix wrong gather test

Signed-off-by: daquexian <daquexian566@gmail.com>

* add comments

Signed-off-by: daquexian <daquexian566@gmail.com>
2020-10-03 22:21:15 +07:00
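
For context, the output type and shape that this importer change reads live on the graph's output `ValueInfoProto` in the ONNX protobuf. A minimal sketch with the `onnx` Python helpers (names and shapes are illustrative):

```python
# Minimal sketch (illustrative names/shapes) of where a graph output's element
# type and shape live in the ONNX protobuf that the importer reads.
from onnx import TensorProto, helper

node = helper.make_node("Relu", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node], "tiny_graph",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph)

out = model.graph.output[0]            # ValueInfoProto for "y"
tensor_type = out.type.tensor_type
print(tensor_type.elem_type)           # 1 == TensorProto.FLOAT
print(tensor_type.HasField("shape"),   # shape may be absent in the protobuf
      [d.dim_value for d in tensor_type.shape.dim])
```

This mirrors the bullets above: the output type is taken from the protobuf only when a shape is recorded there; otherwise it is left to shape inference on the return op's inputs.
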
Gheorghe-Teodor Bercea 0db735f48d
Support per-block bundling for static memory pools. (#325)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Support the bundling on a per-block basis.

* Format.

* Fix test.

* Fix indent.

* Improve data structure.

* Format.

* Simplify maps for static pools.

* Format.

* Clean-up.
2020-10-01 14:15:03 -04:00
chentong319 46306a8a26
Improve shape inference of flatten Op (#314)
* change code

Signed-off-by: chentong <chentong@us.ibm.com>

* add test

Signed-off-by: chentong <chentong@us.ibm.com>

* fix type error

Signed-off-by: chentong <chentong@us.ibm.com>

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-09-29 12:59:01 -04:00
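
For context, the ONNX Flatten shape rule that this inference implements collapses the input into a 2-D tensor split at `axis` (negative axes count from the back). A small illustrative sketch:

```python
# Illustrative sketch of the ONNX Flatten shape rule: the result is always 2-D,
# split at `axis`; a negative axis counts from the back.
from math import prod

def flatten_output_shape(input_shape, axis):
    rank = len(input_shape)
    if axis < 0:
        axis += rank
    assert 0 <= axis <= rank
    return [prod(input_shape[:axis]), prod(input_shape[axis:])]

print(flatten_output_shape([2, 3, 4, 5], axis=2))   # [6, 20]
print(flatten_output_shape([2, 3, 4, 5], axis=-1))  # [24, 5]
print(flatten_output_shape([2, 3, 4, 5], axis=0))   # [1, 120]
```
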
Haruki Imai 278f10c3a4
Normalize memrefs with affine_map in krnl.memcpy (#322)
* Normalize memrefs with affine_map in krnl.memcpy

Added the `MemRefsNormalizable` trait to `krnl.memcpy` so that
`krnl.memcpy` can be normalized.
Reference: https://reviews.llvm.org/D86236

Signed-off-by: Haruki Imai <imaihal@jp.ibm.com>

* Simplified test case about normalizing memrefs in krnl.memcpy

Signed-off-by: Haruki Imai <imaihal@jp.ibm.com>

* Remove other krnl ops from test case for simplification

Signed-off-by: Haruki Imai <imaihal@jp.ibm.com>

* Fixed test code

Signed-off-by: Haruki Imai <imaihal@jp.ibm.com>

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-09-25 14:36:15 -04:00
Alexandre Eichenberger f0c5b99229
Gather: fix for negative indices (#313)
* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* changes to support negative indices

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>

* use krnl.dim now

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>

* move comment

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>

* updated test for krnl-dim pattern

Signed-off-by: Alexandre Eichenberger <alexe@us.ibm.com>

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-09-24 14:02:49 -04:00
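
The fix concerns ONNX Gather's convention that a negative index selects from the end of the gathered axis (index + dim). A numpy sketch of the intended semantics (illustrative; numpy's own indexing follows the same convention):

```python
# Illustrative sketch of ONNX Gather with negative indices: an index i < 0
# along `axis` selects position i + data.shape[axis].
import numpy as np

data = np.arange(12).reshape(3, 4)
indices = np.array([0, -1])                 # -1 means the last row
axis = 0

reference = np.take(data, indices, axis=axis)

# Explicit normalization, the way a lowered kernel would compute the index:
normalized = np.where(indices < 0, indices + data.shape[axis], indices)
assert np.array_equal(np.take(data, normalized, axis=axis), reference)
print(reference)                            # rows 0 and 2
```
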
Tian Jin f16983a7c4
Update onnx_location.mlir (#319) 2020-09-24 10:17:29 -04:00
Kevin O'Brien 17383768df
Initial Location info support (#302)
* NFC: Attribute cleanup (remove references of attributes)  (#286)

* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* remove & (ref) for Attributes

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Syntax highlighting for mlir code in README (#276)

* Syntax highlighting for mlir code in README

* Restart Jenkins

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* use print not dump

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add semicolon

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* syntax

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add code to preserve locations

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Emit the dynamic memory pool (#290)

* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add support for bundling dynamic memory pools.

* Add dynamic bundling.

* Clean-up code.

* Clean-up file.

* Add test for bundling dynamic memory pool.

* Fixes. Simplify data structure. Add mixed test.

* Remove unused import.

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Fix wrong type for llvm::loadop (#293)

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Update llvm commit ID to 1d01fc1 (#292)

* Fix for LLVM revision D85495

* Fix for LLVM revision D86121

* Fix for LLVM revision D85622 (f9dc2b7)
TODO: Change preloadDialectsInContext to false

Memo for previous fixes: D86121 (250f43d), D85495 (575b22b)

* clang-format

* Update llvm commit ID of README and clone-mlir.sh

* Updated llvm commit ID of README.md

* Fix for passing backend tests

* Removed the commented code

* Empty commit for triggering rebuild

* Test multi-stage travis build

* Specify stage order.

* Empty commit for triggering rebuild

* Update prereq.s390x.Dockerfile

Make it possible to execute s390x prereq docker multiple times.

* Build prereq for each arch

* Fix multi-arch prereq build.

* timeout at 40m

* Update .travis.yml

* add ppc64le prereq builder

* Run ppc docker prereq build multiple times

* Do not test branch update unless it's master.

* Fix dockerfile.

* Fix typo in travis.yml.

* Fix ppc64 docker file

* Update .travis.yml

* turn off metacopy on ppc64le

* Update .travis.yml

* Turn off metacopy.

* Turn off metacopy inside Dockerfile in ppc64.

* No sudo in Docker.

* Remove metacopy config from Dockerfile.

* Change base image to be bionic.

* Using newer linux distro for ppc64.

* Turn off metacopy in before_install.

* Fix sudo permission issue.

* Run docker info.

* Allow amd64 docker file to be built multiple times

* Support building amd64 prereq.

* Fix amd64 docker file typo.

* fix ppc64le dockerfile typo.

* timeout from 40m -> 30m

* 40m->30m

* 40m->30m

* fix bug preventing incremental build.

* fix bug preventing incremental build.

* Bump CircleCI cache version.

* Push to production prereq container repository and condition prereq docker rebuild on commit message.

* Rebuild prereq docker.

* Move default script to top-level.

* Python not properly installed.

* amd64 -> x86

* Rebuild prereq docker.

* Rebuild prereq docker.

* Rebuild prereq docker.

* Restart all CI.

* Disallow cache on Jenkins docker build.

* Restart zJenkins.

* Restart zJenkins.

Co-authored-by: Haruki Imai <imaihal@jp.ibm.com>
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Using onnx-mlir through incremental stages (#257)

* Add lowering of Vector dialect for lower-all-llvm pass

* Fix generating CallOp instructions when return type is void

* Fix lowering of memref

* Reformat using clang-format

* Record more context.

* Reflow comments.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Dropout elimination & Conv Bugfix (#297)

* Dropout elimination.

* Test VGG19.

* Add shufflenet.

* Fix grouped convolution bug.

* Fix lit test failure.

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Rewrite shape and size OP  (#285)

* add shape inference

* Revert "add shape inference"

This reverts commit f9d42f39e68e14b5648abccfc8617fff00244d16.

* add rewrite rules

* test cases

* format

* add constraint

* response to review

* response to review

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* initial code for handling custom ops (#288)

* initial code for handling custom ops

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* ShapeInference for SizeOp (#299)

* add shape inference

* Revert "add shape inference"

This reverts commit f9d42f39e68e14b5648abccfc8617fff00244d16.

* shape inference

* test case

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Gather ONNX to Kernel Lowering (#294)

* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* initial implementation of gather

* added tests

* format

* remove affine load for second load, as it uses an indirection

* changes suggested by reviewers

* remove backend tests until I can verify them locally

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* fix option spelling
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* braces in wrong place
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* remove duplicate code from lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* Simplify lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* remove attributes from lit test
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add onnx-mlir-opt to tool names
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* add printIR to second RUN
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* redo adding printIR
Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* fix bug

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* format

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

* fix typo in test

Signed-off-by: Kevin O'Brien <caomhin@us.ibm.com>

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: Tung D. Le <tung@jp.ibm.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Haruki Imai <imaihal@jp.ibm.com>
Co-authored-by: Kevin Wu <6334443+kwu91@users.noreply.github.com>
Co-authored-by: chentong319 <chentong@us.ibm.com>
2020-09-23 18:58:27 -04:00
Gheorghe-Teodor Bercea 4bbe12ff50
Improve support for krnl.dim (#317)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Make krnl dim more robust.

* Format.

* Update comments.

* Change pass name.
2020-09-23 14:36:16 -04:00
NathanielMcVicar 3491b90b1e
Support LLVM as of 7dcd0042 (#309)
* Update to support LLVM as of 7dcd0042

Fixes for upstream changes to mlir.

- New pass registration method from https://reviews.llvm.org/D85622
- Integer attributes are now C types when possible https://reviews.llvm.org/D86739

Signed-off-by: Nathaniel McVicar <namcvica@microsoft.com>

* Fix for checkclang

* Windows incremental build fix from @max-ku

* Remove MLIRShapeToSCF lib

* Missed a getSExtValue call on a value that is now a C type

* Rebuild prereq docker.

* Bump CircleCI cache version.

* Update hash for Windows build

Signed-off-by: Nathaniel McVicar <namcvica@microsoft.com>

* Bump CircleCI cache version again.

* Rebuild prereq docker.

* Update README.md

* Update README.md

* Undo edits to ONNXOps.td.inc.

* Undo changes to ONNXOps.td.inc.

* Fix cast op TableGen.

* Tweak tablegen definition of Cast.

* Use explicitly signed integer as attributes.

* Move all signless attribute to explicitly signed attribute.

* Import ONNX int attribute as SI64 attribute.

* Make Conv.group attr use SI64 attr.

* Fix conv test.

* Fix DocCheck complaint.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-09-22 23:42:50 +07:00
Prashant Kumar 4cc16aceb7
[MLIR] Add SizeOp conversion from ONNX dialect to Krnl dialect (#295)
* [MLIR] Add SizeOp conversion from ONNX dialect to Krnl dialect

Added ONNXSizeOp conversion from the ONNX dialect to the Krnl dialect. This op is added as part of the --convert-onnx-to-krnl pass.

Signed-off-by: Prashant Kumar <pk5561@gmail.com>

* Add unit tests for Size op.

* Remove unit tests.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-09-21 16:55:21 +07:00
Tung D. Le 66074da3ac
Lower ONNXConstantOfShapeOp to Krnl dialect (#296)
* Lower ONNXConstantOfShapeOp to Krnl dialect

* Change a variable name

* Add comments to lit tests

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-09-19 12:47:39 -04:00
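
For context, ConstantOfShape takes a 1-D int64 shape tensor and produces a tensor of that shape filled with a single value (float32 zero when the `value` attribute is absent). An illustrative numpy sketch of the semantics being lowered:

```python
# Illustrative numpy sketch of ONNX ConstantOfShape: the input is a 1-D int64
# shape tensor; `value` mirrors the op's one-element attribute (float32 zero
# when the attribute is absent).
import numpy as np

def constant_of_shape(shape_tensor, value=np.float32(0)):
    return np.full(tuple(int(d) for d in shape_tensor), value, dtype=value.dtype)

out = constant_of_shape(np.array([2, 3], dtype=np.int64), np.float32(1.5))
print(out.shape, out.dtype, out[0, 0])      # (2, 3) float32 1.5
```
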
Alexandre Eichenberger 3a5aa7ee31
Gather ONNX to Kernel Lowering (#294)
* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* initial implementation of gather

* added tests

* format

* remove affine load for second load, as it uses an indirection

* changes suggested by reviewers

* remove backend tests until I can verify them locally

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-09-11 15:36:23 -04:00
chentong319 fa04c32a0c
ShapeInference for SizeOp (#299)
* add shape inference

* Revert "add shape inference"

This reverts commit f9d42f39e68e14b5648abccfc8617fff00244d16.

* shape inference

* test case

* format
2020-09-11 12:47:11 -05:00
chentong319 ac67900baf
Rewrite shape and size OP (#285)
* add shape inference

* Revert "add shape inference"

This reverts commit f9d42f39e68e14b5648abccfc8617fff00244d16.

* add rewrite rules

* test cases

* format

* add constraint

* response to review

* response to review
2020-09-10 14:46:00 -04:00
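
For context, onnx.Shape and onnx.Size simply materialize a tensor's dimension vector and element count, so both become compile-time constants whenever the input shape is fully static, which is what makes rewrite rules like these possible. An illustrative numpy sketch:

```python
# Illustrative sketch of what onnx.Shape and onnx.Size compute; with a fully
# static input shape both are compile-time constants, so they can be folded.
import numpy as np

x = np.zeros((2, 3, 5), dtype=np.float32)

shape_result = np.array(x.shape, dtype=np.int64)   # onnx.Shape -> [2, 3, 5]
size_result = np.int64(x.size)                     # onnx.Size  -> 30
print(shape_result, size_result)
```
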
Tian Jin 5e11429d77
Dropout elimination & Conv Bugfix (#297)
* Dropout elimination.

* Test VGG19.

* Add shufflenet.

* Fix grouped convolution bug.

* Fix lit test failure.
2020-09-10 14:47:30 +08:00
Kevin Wu 03dae57189
Using onnx-mlir through incremental stages (#257)
* Add lowering of Vector dialect for lower-all-llvm pass

* Fix generating CallOp instructions when return type is void

* Fix lowering of memref

* Reformat using clang-format

* Record more context.

* Reflow comments.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-09-10 10:29:55 +08:00
Tian Jin dbc41d2330
Update llvm commit ID to 1d01fc1 (#292)
* Fix for LLVM revision D85495

* Fix for LLVM revision D86121

* Fix for LLVM revision D85622 (f9dc2b7)
TODO: Change preloadDialectsInContext to false

Memo for previous fixes: D86121 (250f43d), D85495 (575b22b)

* clang-format

* Update llvm commit ID of README and clone-mlir.sh

* Updated llvm commit ID of README.md

* Fix for passing backend tests

* Removed the commented code

* Empty commit for triggering rebuild

* Test multi-stage travis build

* Specify stage order.

* Empty commit for triggering rebuild

* Update prereq.s390x.Dockerfile

Make it possible to execute s390x prereq docker multiple times.

* Build prereq for each arch

* Fix multi-arch prereq build.

* timeout at 40m

* Update .travis.yml

* add ppc64le prereq builder

* Run ppc docker prereq build multiple times

* Do not test branch update unless it's master.

* Fix dockerfile.

* Fix typo in travis.yml.

* Fix ppc64 docker file

* Update .travis.yml

* turn off metacopy on ppc64le

* Update .travis.yml

* Turn off metacopy.

* Turn off metacopy inside Dockerfile in ppc64.

* No sudo in Docker.

* Remove metacopy config from Dockerfile.

* Change base image to be bionic.

* Using newer linux distro for ppc64.

* Turn off metacopy in before_install.

* Fix sudo permission issue.

* Run docker info.

* Allow amd64 docker file to be built multiple times

* Support building amd64 prereq.

* Fix amd64 docker file typo.

* fix ppc64le dockerfile typo.

* timeout from 40m -> 30m

* 40m->30m

* 40m->30m

* fix bug preventing incremental build.

* fix bug preventing incremental build.

* Bump CircleCI cache version.

* Push to production prereq container repository and condition prereq docker rebuild on commit message.

* Rebuild prereq docker.

* Move default script to top-level.

* Python not properly installed.

* amd64 -> x86

* Rebuild prereq docker.

* Rebuild prereq docker.

* Rebuild prereq docker.

* Restart all CI.

* Disallow cache on Jenkins docker build.

* Restart zJenkins.

* Restart zJenkins.

Co-authored-by: Haruki Imai <imaihal@jp.ibm.com>
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-09-09 23:12:01 +08:00
Gheorghe-Teodor Bercea 9f69b2f317
Emit the dynamic memory pool (#290)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add support for bundling dynamic memory pools.

* Add dynamic bundling.

* Clean-up code.

* Clean-up file.

* Add test for bundling dynamic memory pool.

* Fixes. Simplify data structure. Add mixed test.

* Remove unused import.
2020-09-03 10:31:06 -04:00
Alexandre Eichenberger 8bfde7de4b
Transpose optimization (#280)
* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* transpose fusion and removal

* format

* fix comments

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-08-31 10:40:17 -04:00
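
The fusion composes the permutations of back-to-back transposes into one, and the result can be removed entirely when the composition is the identity. An illustrative numpy sketch of the composition rule:

```python
# Illustrative sketch: back-to-back transposes compose into one, and the pair
# disappears entirely when the composed permutation is the identity.
import numpy as np

def compose(perm_first, perm_second):
    # transpose(transpose(x, perm_first), perm_second) == transpose(x, result)
    return [perm_first[p] for p in perm_second]

x = np.random.rand(2, 3, 4, 5)
p1, p2 = [0, 2, 3, 1], [0, 3, 1, 2]

fused = compose(p1, p2)
assert np.array_equal(np.transpose(np.transpose(x, p1), p2), np.transpose(x, fused))
print(fused, fused == list(range(x.ndim)))   # [0, 1, 2, 3] True -> removable
```
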
Aman LaChapelle 24d0a2ac71
Add shape inference and names (#266)
* Add shape inference and names

 - Add shape inference for PRelu
 - Fix shape inference for group conv for ConvTranspose
 - Add input and output names for graphs (functions)
 - Add support for (u)int8 tensor attributes

* Fix format issues

* Revert formatting for gen_onnx_mlir.py

* Pads can have ArrayAttr and DenseElementsAttr so support both

* NumInputs is the number of graph inputs that don't have initializers

* Add test for 2D batchnorm

* Fix typo in define_loops in new 2d BN test

* Change 'name' to 'onnx_node_name'

* Fix Batchnorm for 2D I/O and add lowering test

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-08-27 15:46:27 -04:00
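
For the convolution part, shape inference follows the standard ONNX Conv rule, with the `group` attribute constraining the channel counts. A sketch of that rule (illustrative; explicit pads, NOTSET auto_pad, all names here are illustrative):

```python
# Illustrative sketch of the ONNX Conv shape-inference rule with the `group`
# constraint (explicit pads, NOTSET auto_pad).
def conv_output_shape(x_shape, w_shape, pads, strides, dilations, group=1):
    n, c = x_shape[0], x_shape[1]
    m, c_per_group = w_shape[0], w_shape[1]
    num_spatial = len(w_shape) - 2
    assert c == c_per_group * group, "input channels must equal kernel channels * group"
    spatial = []
    for i in range(num_spatial):
        effective_k = (w_shape[2 + i] - 1) * dilations[i] + 1
        extent = x_shape[2 + i] + pads[i] + pads[i + num_spatial] - effective_k
        spatial.append(extent // strides[i] + 1)
    return [n, m] + spatial

# Grouped 3x3 conv: X = (1, 8, 32, 32), W = (16, 4, 3, 3), group = 2
print(conv_output_shape([1, 8, 32, 32], [16, 4, 3, 3],
                        pads=[1, 1, 1, 1], strides=[1, 1], dilations=[1, 1], group=2))
# -> [1, 16, 32, 32]
```
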
Gheorghe-Teodor Bercea f278f08120
Introduce krnl.shape operation and its lowering (#267)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add krnl shape op and lowering pass.

* Add lowering function.

* Clean-up code.

* Remove duplicate entry.

* Add test.

* Update LowerKrnlShape.cpp

* Update KrnlToLLVM.cpp
2020-08-19 12:57:40 -04:00
Tung D. Le 7c1e67898d
Fuse convolution and batch normalization (#253)
* Rewriting rule

* Fix formulas

* Reuse op results

* Const propagation for Div and Sqrt

* Explicitly use ONNXConstantOp

* Minor revise

* Const propagation for unsqueeze

* Do const propagation once all tensors have inferred shapes

* LIT tests for fusion

* Add LIT tests for constant propagation on Div, Sqrt, and Unsqueeze

* Missing dash

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-08-18 16:41:40 +08:00
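
The rewrite folds the batch-normalization affine transform into the convolution's weights and bias, which is also where the Div/Sqrt constant propagation mentioned above pays off. The algebra, as an illustrative numpy sketch:

```python
# Illustrative sketch of folding BatchNormalization into a preceding Conv:
#   factor = gamma / sqrt(var + eps)
#   W' = W * factor   (per output channel)
#   b' = (b - mean) * factor + beta
import numpy as np

rng = np.random.default_rng(0)
M, C, K = 4, 3, 3                                  # out channels, in channels, kernel
W = rng.standard_normal((M, C, K, K))
b = rng.standard_normal(M)
gamma, beta = rng.standard_normal(M), rng.standard_normal(M)
mean, var = rng.standard_normal(M), rng.random(M) + 0.5
eps = 1e-5

factor = gamma / np.sqrt(var + eps)
W_fused = W * factor[:, None, None, None]
b_fused = (b - mean) * factor + beta

# Check the algebra on per-channel values: y_pre is the convolution output
# before bias; BN(y_pre + b) must equal y_pre * factor + b_fused.
y_pre = rng.standard_normal(M)
assert np.allclose(gamma * ((y_pre + b) - mean) / np.sqrt(var + eps) + beta,
                   y_pre * factor + b_fused)
```
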
Anh Leu 00299910f3
OneHotEncoder Shape Inference (#265)
* move scalerop to decompose

* change clang format

* change clang format

* add shape inference for scaler op

* fixing generated onnxop

* generate onnx.md

* redo get onnx.md and onnxop.td.inc using onnx 1.6

* Add shape inference for scaler op

* add benefit for scaler decompose and simplify scaler shape inference

* add scaler decompose benefit num and simplify shape inference

* add cast builder

* cast rewrite only for float

* add cast op same type rewrite rule

* working on cast lowering

* cast lowering working

* add cast lowering

* fix format

* Delete OpBuildTable.inc

* complete requested changes

Co-authored-by: chentong319 <chentong@us.ibm.com>
2020-08-14 16:13:31 -04:00
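
For context, ONNX-ML OneHotEncoder replaces each input element with a one-hot vector over the category list, so the inferred output shape is the input shape plus one trailing dimension whose size is the number of categories. An illustrative sketch:

```python
# Illustrative sketch of ONNX-ML OneHotEncoder: each element becomes a one-hot
# vector over the category list, so the inferred output shape is the input
# shape plus one trailing dimension of size len(categories).
import numpy as np

def one_hot_encode(x, categories):
    index = {c: i for i, c in enumerate(categories)}
    out = np.zeros(x.shape + (len(categories),), dtype=np.float32)
    for pos, v in np.ndenumerate(x):
        if v in index:                       # unknown categories stay all-zero here
            out[pos + (index[v],)] = 1.0
    return out

x = np.array([["a", "c"], ["b", "a"]])
print(one_hot_encode(x, ["a", "b", "c"]).shape)   # (2, 2, 3)
```
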
Gheorghe-Teodor Bercea d3dcee7366
Add krnl.dim operation and lowering pass (#261)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add krnl.dim op.

* Add test with alloc with map.

* Code clean-up.

* Code clean-up.

* Add comment for function.

* Update DisconnectKrnlDimFromAlloc.cpp
2020-08-14 12:59:33 -04:00
Tung D. Le 1b42d0b4eb
Update LLVM commit ID to the version that corresponds to MLIR News, 13th edition (8/7/2020) (#248)
* Update LLVM commit ID to include the new modeling of LLVM types in MLIR

* Fix commit id discrepancy

* Update README.md

* Update MLIR version

* Force rebuild prereq dockers and see what happens.

* Use LLVM commit ID that corresponds to MLIR News, 13th edition (8/7/2020)

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-08-14 12:52:48 +08:00
Gheorghe-Teodor Bercea 2f41c2bf5b
Assorted fixes for memory pool related passes (#244)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Make mempooling more robust.

* Fix.

* Update MainUtils.cpp

Additional canonicalization not required anymore.
2020-08-11 17:34:59 -04:00
Anh Leu 2ee725d939
Add CastOp lowering (#259)
* move scalerop to decompose

* change clang format

* change clang format

* add shape inference for scaler op

* fixing generated onnxop

* generate onnx.md

* add benefit for scaler decompose and simplify scaler shape inference

* cast rewrite only for float

* add cast op same type rewrite rule

* working on cast lowering

* cast lowering working

* correct onnx version

* update onnx md

* add test for tensor<10xf64>
2020-08-11 16:07:13 -04:00
albertomagni-ms e1386b0689
Add shape inference for Ops used by BERT (#249)
* Add shape inference for Ops used by BERT

* Erf
* Pow
* ReduceMean
* Dropout
* Expand
  https://github.com/onnx/onnx/blob/master/docs/Operators.md#expand
  Deduce the value of the shape operand by looking at the producer
  of the operand.
  Currently supported producers are: onnx.Constant and onnx.Shape.

* Add corresponding tests for each op.

* Sort the list of ops with shape inference in gen_onnx_mlir.py
in alphabetic order for clarity.

* Restart CI

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-08-07 13:08:00 +08:00
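
For Expand, the inference needs the value of the shape operand (hence the Constant/Shape producer lookup noted above); the result shape is then the broadcast of the input shape with that value. An illustrative numpy sketch:

```python
# Illustrative sketch of the ONNX Expand shape rule: the result shape is the
# broadcast of the input shape with the requested shape. Statically this is
# only computable when the shape operand's value is known, e.g. when it comes
# from an onnx.Constant or from onnx.Shape of a statically shaped tensor.
import numpy as np

input_shape = (3, 1)
requested = (2, 1, 6)            # value recovered from the shape operand's producer

out_shape = np.broadcast_shapes(input_shape, requested)
print(out_shape)                 # (2, 3, 6)

x = np.arange(3).reshape(input_shape)
assert np.broadcast_to(x, out_shape).shape == out_shape
```
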
Tian Jin 58ee62fb49
[NFC] Rename passes for stylistic consistency. (#232)
* lower-frontend -> convert-onnx-to-krnl

* lower-all-llvm -> convert-krnl-to-llvm

* lower-krnl -> convert-krnl-to-affine

* Name fix.
2020-07-31 21:37:35 +08:00
chentong319 b4228fd288
Seq type (#199)
* base implementation

* add example

* change table gen

* docs

* small change for review

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-31 20:05:59 +08:00
Gheorghe-Teodor Bercea 029fb5eb67
Add support for emitting individual memory pools with dynamic sizes (#211)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Emit memory pools with dynamic sizes.

* Reformat.
2020-07-30 12:24:07 -04:00
Anh Leu a611c145f4
Add Rewrite rule to eliminate CastOp when input element type is the same as expected output element type (#237)
* move scalerop to decompose

* change clang format

* change clang format

* add shape inference for scaler op

* fixing generated onnxop

* generate onnx.md

* add benefit for scaler decompose and simplify scaler shape inference

* cast rewrite only for float

* add cast op same type rewrite rule

* fix format

Co-authored-by: chentong319 <chentong@us.ibm.com>
2020-07-27 12:49:14 -04:00
Anh Leu e631283c71
ScalerOp support non-float input using CastOp (#231)
* move scalerop to decompose

* change clang format

* change clang format

* add shape inference for scaler op

* fixing generated onnxop

* generate onnx.md

* redo get onnx.md and onnxop.td.inc using onnx 1.6

* Add shape inference for scaler op

* add benefit for scaler decompose and simplify scaler shape inference

* add scaler decompose benefit num and simplify shape inference

* add cast builder

Co-authored-by: chentong319 <chentong@us.ibm.com>
2020-07-24 10:57:52 -04:00
Tian Jin 2e8f012195
Support krnl loop permutation (#215)
* Define krnl.permute op.

* Support krnl.permute operation.

* Properly remove loop references.

* Re-push, Github was down.

* Need to debug interpretOp error.

* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.

* Introduce permute, unroll operations.

* More debug.

* Remove std::set.

* krnl.terminate fails to be converted.

* Pass all tests, need to add legal ops as well as part of the conversion target.

* Change test format to new permute spec.

* Bug fix for nested iterate op lowering.

* Simplify error reporting.

* Fix compilation error.

* Increase comments coverage.

* Remove unnecessary imports.

* Re-trigger Jenkins

* Add permute/unroll tests.

* Retrigger Jenkins

* Using a non-trivial example.

* Add more complex example/test case.
2020-07-24 18:19:38 +08:00
Anh Leu c9e3ba2d64
Add shape inference for ScalerOp (#228)
* move scalerop to decompose

* change clang format

* change clang format

* add shape inference for scaler op

* fixing generated onnxop

* generate onnx.md

* Add shape inference for scaler op

* add benefit for scaler decompose and simplify scaler shape inference
2020-07-23 13:05:19 -04:00
Tung D. Le 034f98c00c
Shape inference for some ops in RNN models (#207)
* Shape inference for ShapeOp

* Shape inference for TileOp

* Fix check-doc

* Fix onnx-mlir-doc

* Shape inference for GatherOp

* Check validity of GatherOp's indices tensor

* Shape inference for Slice

* Tests for SliceOp

* Fix importing none inputs

* Type inference for constantofshape

* Empty tensor in case of ConstantOfShape

* Remove unrelated changes

Co-authored-by: chentong319 <chentong@us.ibm.com>
2020-07-22 10:15:56 -04:00
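
Two of the rules added here, Tile and Gather, are simple to state; an illustrative sketch:

```python
# Illustrative sketches of two of the shape rules added here.

def tile_output_shape(input_shape, repeats):
    # ONNX Tile: every dimension is multiplied by its repeat count.
    return [d * r for d, r in zip(input_shape, repeats)]

def gather_output_shape(data_shape, indices_shape, axis):
    # ONNX Gather: the indexed axis is replaced by the indices' shape.
    if axis < 0:
        axis += len(data_shape)
    return list(data_shape[:axis]) + list(indices_shape) + list(data_shape[axis + 1:])

print(tile_output_shape([2, 3, 4], [2, 1, 3]))         # [4, 3, 12]
print(gather_output_shape([5, 6, 7], [2, 2], axis=1))  # [5, 2, 2, 7]
```
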
Gheorghe-Teodor Bercea a58594ec81
Revert "Emit allocs at the top of functions (#222)" (#226)
This reverts commit b27e57cc4f.
2020-07-21 18:30:39 -04:00
Gheorghe-Teodor Bercea b27e57cc4f
Emit allocs at the top of functions (#222)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add support for moving dynamic alloca instructions to top of functions.

* Fix memory pooling tests.

* Various fixes.

* Fix lit tests.

* More test fixes.

* Reformat.

* Reformat some more.

* Fix issue with TestConv and split-input-file.

* Use smart pointers.

* Remove redundant pointer.

* Reformat.

* Add initMap description.

* Clean up tests.
2020-07-20 19:24:17 -04:00
Anh Leu 4b33c312d6
Add ONNXScalerOp pattern (#220)
* add ONNXScalerOp pattern

* move ScalerOp rewrite rule to Rewrite.cpp .td

* attempt to fix format issue

* fixing format issue

* fixing format issue2

* add ONNXScalerOp pattern

* move ScalerOp rewrite rule to Rewrite.cpp .td

* attempt to fix format issue

* fixing format issue

* fixing format issue2
2020-07-17 11:01:30 -04:00
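
For context, ONNX-ML Scaler computes (X - offset) * scale per feature, so the decompose pattern can rewrite it into a Sub followed by a Mul (a later change, #231, adds a Cast for non-float inputs). An illustrative numpy sketch of the equivalence:

```python
# Illustrative numpy sketch of why ONNX-ML Scaler can be decomposed:
# Scaler(X) == Mul(Sub(X, offset), scale), broadcast per feature column.
import numpy as np

x = np.array([[1.0, 10.0], [2.0, 20.0]], dtype=np.float32)
offset = np.array([1.0, 10.0], dtype=np.float32)
scale = np.array([0.5, 0.1], dtype=np.float32)

scaler_result = (x - offset) * scale                       # the op's definition
decomposed = np.multiply(np.subtract(x, offset), scale)    # Sub followed by Mul

assert np.array_equal(scaler_result, decomposed)
print(decomposed)
```
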
gongsu832 d235f248e4
Add emitjni target (#204)
* Detect llvm-project commit change in utils/clone-mlir.sh and rebuild llvm-project
for zLinux Jenkins build bot

* Add --EmitJNI target (tested working with mnist and resnet50)
- MainUtils
  * first shot at refactoring compileModuleToSharedLibrary
  * add setExecPath call to allow resolving runtime directory from onnx-mlir
    executable path when ONNX_MLIR_RUNTIME_DIR is not set. This allows
    tests to run without having to install onnx-mlir or to explicitly set
    ONNX_MLIR_RUNTIME_DIR
- RtMemRef
  * add getDataSize for C (equivalent of size() for C++).
  * fix setStrides bug (setting sizes, not strides)
- TestConv
  * _main_graph-*.so were filling up /tmp. Change to use fixed shared library
    in build directory

* Fix clang-format-lint complaints

* - getRuntimeDir checks lib64
- install targets for javaruntime and jniruntime
- remove ONNX_MLIR_LD_PRELOAD_onnx-mlir and ONNX_MLIR_LD_PRELOAD_onnx-mlir-opt

* See what happens when `kExecPath` decl is dropped.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-11 13:23:13 +08:00
Gheorghe-Teodor Bercea 100bfc81b4
Bundle individual memory pools into a single memory pool (#183)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add memory pooling for constant sized arrays.

* Clean code.

* Clean code.

* Clean code.

* Add simple bundling test.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-08 12:35:31 -04:00
Tian Jin 01a4977c74
Remove optimize_loops/return_loops op. (#200)
* Remove optimize_loops/return_loops op in elementwise ops lowering and fix tests in onnx_lowering.mlir.

* Fix all tests.

* Remove all occurences of def_loops/return_loops.

* Fix test.

* Fix comments for defineLoops & emitKrnlLoopsAndIterationForOperand function.

* Remove emitOptimizedLoops.

* Allow not specifying optimizedLoops when creating KrnlIterateOperandPack.

* Fix style.

* Make BuildKernelLoop helper not emit optimize/return_loop operations & retire emitKrnlLoopsAndIterationForOperand by replacing it with BuildKernelLoop.

* DefineLoops -> DefineLoopsEx, remove redundant emitKrnlLoopsAndIterationForOperand function.

* BuildKrnlLoop API name update.

* Tweak comments.

* Remove unused withEmptyOptimization flag.

* Better comment for BuildKrnlLoop.

* Fully remove krnl.return_loops/optimize_loops op.

* Trigger Windows Build

* Bump windows ci python version.
2020-07-08 12:49:15 +08:00
Aaron Smith 8e6642b2bc
Update llvm version (#187)
* Update llvm version

* Update git hash for llvm-project

* Update option handling

* Update LLVM version

* Update tests

* Update git hash

* Update docs

* clang-format

* Fix operand adaptor

* Fix dim with constant

* Update LSTM.cpp

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-07 21:26:00 +08:00
Tung D. Le 7e05f371de
Replace std.load/std.store by affine.load/affine.store (#180)
* Move to more recent LLVM ID (May 15)

* clang-format

* Bump cache version up

* Update readme

* Fix doc check

* Move to a newer commit id

* Update LoopToStandard -> SCFToStandard

* Change MLIRSideEffects to MLIRSideEffectInterfaces

* Add AffineScope trait to KrnlIterateOp

* [ElementWise] Load/Store op to AffineLoad/AffineStore op

* [Gemm, MatMul, Reduction, Softmax] Load/Store op to AffineLoad/AffineStore op

* [Concat] Load/Store op to AffineLoad/AffineStore op

* [Pad, PadConstantValuePad, Reshape, Transpose] Load/Store op to AffineLoad/AffineStore op

* [LSTM] Load/Store op to AffineLoad/AffineStore op

* [Conv, Norm, Pooling] Load/Store op to AffineLoad/AffineStore op

* Add affine-loop-fusion pass

* Use Load/Store for scalar

* Use Load/Store for scalar

* Fix lit tests

* Unknown dimensions for broadcasting ops

* Affine Load/Store for scalar memref

* clang-format

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-05 16:20:21 +08:00
Tung D. Le 2c8f5701bd
Lower SqueezeOp to Krnl dialect (#164)
* Lower Squeeze op to Krnl dialect

* Emit tensor size as a single constant; add a lit test for unknown dimensions

* Code style

* Special case where the input is only used by this squeeze op

* Remove squeeze-in-place optimization

* Update ConvertONNXToKrnl.cpp

Tweak to re-run tests.

* Trigger buildbot re-run.

* Re-run CI

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-03 16:26:41 +08:00
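
For context, Squeeze drops the size-1 dimensions listed in `axes`, leaving the element count unchanged; an illustrative sketch of the shape rule:

```python
# Illustrative sketch of the ONNX Squeeze shape rule: drop the listed size-1
# axes (negative axes count from the back); the element count is unchanged.
def squeeze_output_shape(input_shape, axes):
    rank = len(input_shape)
    axes = {a + rank if a < 0 else a for a in axes}
    assert all(input_shape[a] == 1 for a in axes), "only size-1 dims can be squeezed"
    return [d for i, d in enumerate(input_shape) if i not in axes]

print(squeeze_output_shape([1, 3, 1, 5], axes=[0, 2]))   # [3, 5]
print(squeeze_output_shape([1, 3, 1, 5], axes=[-2]))     # [1, 3, 5]
```
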
Tian Jin e902506ee5
Support encoding data type information as part of the DMR struct. (#178)
* Support encoding data type information as part of the DMR struct.

* Support full range of np types.

* Report error when encountering unsupported type.

* Add getRank method API.

* Add missing API declarations.

* DynMemRef -> RtMemRef

* Format code.

* Missed DynMemRef -> RtMemRef conversion.

* More comments for RMR, and rename variable names from dmr -> rmr.

* DynMemRef -> RtMemRef.

* Format code.
2020-06-30 10:58:21 +08:00
Tian Jin f9cb113a84
Krnl block (#170)
* Support krnl.block printing/parsing.

* Checkpoint, PoC working.

* Implement krnl.block operation.

* Make tuple -> make pair.

* Bug fix, white list krnl.iterate op while lowering.

* Add return loop op lowering.

* Bug fix.

* Allow using loop refs more than once if they are used by krnl.iterate op.

* More comments and include lit test.

* Make krnl.block definition more restrictive.

* Splitting tests creates modules, making affine_map matching more verbose; prefer not splitting since test cases are small.

* Use verbose mode for LIT test on Z.

* Use verbose build to diagnose.

* Missing libraries linking when building in shared mode.

* Fix whole-archive linkage.

* Try preloading affinetransforms.

* Try put AffineTransforms into LD_LIBRARY_PATH.

* Fix python syntax error.

* No need to link with whole-archive libs, as they are pre-loaded.

* Do not preload any library.

* Link with whole-archive libs.

* Explicitly shared linkage in CMake.

* Fix CMake syntax error.

* Restore test.py

* Update z13.sh

* Update z13.sh

* Provide krnl.block operation description.
2020-06-27 22:35:01 +08:00
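
krnl.block expresses classic loop blocking: one loop is split into an outer loop over blocks and an inner loop within each block. A plain-Python sketch of the iteration-space transformation (illustrative; the tile size is arbitrary):

```python
# Illustrative plain-Python sketch of the blocking that krnl.block expresses:
# a single loop over [0, n) becomes an outer loop over blocks plus an inner
# loop within each block, visiting exactly the same indices in the same order.
def blocked_indices(n, block_size):
    visited = []
    for outer in range(0, n, block_size):
        for inner in range(outer, min(outer + block_size, n)):
            visited.append(inner)
    return visited

n, block_size = 10, 4
assert blocked_indices(n, block_size) == list(range(n))
print(blocked_indices(n, block_size))
```
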
chentong319 2e08b2112c
String type (Ready for Review) (#182)
* string type from tensorflow

* simplify type

* parser and print

* gen StringType for tablegen

* onnx to onnx-mlir type

* add namespace

* allow all integer type

* dialect document

* add test case

* format

* more precise type for ONNXOp

* format

* enable the failed test

* update comment

* update onnx.md

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-06-25 16:34:37 -04:00
chentong319 cc68f77d8d
Merge onnx ml into onnx (#176)
* merge onnx-ml into onnx

* delete onnx ml
2020-06-22 20:01:56 -04:00
Tian Jin f81f44662b
Remove whole archive linkage (#173)
* Explicit pass registration.

* Remove whole-archive linking, replace with regular linking.

* Remove whole-archive linkage related scripts.

* No need to preload library, simply expose them through LD_LIBRARY_PATH.

* Use OMLibs to record all onnx-mlir libs.

* Add OMResultTypeInferenceOpInterface lib to OMLibs.

* nit.

* No need to expose libs through LD_LIBRARY_PATH.

* Fix missing onnx header file issue.

* Define OMLibs before Tool subdirectory is imported.

* Define OMLibs at parent scope.

* Specify dependency of MainUtils on OMLibs early.

* Set OMLibs both at current & parent scope.

* Add comment about what future pass implementation should do.
2020-06-19 00:21:27 +08:00