Commit Graph

304 Commits

Author SHA1 Message Date
Tian Jin 7dba324404
Set up Documentation Page using GitHub Pages (#76)
* Set up documentation server, move doc files from /doc to /docs as per GitHub Pages convention.

* Include deleted files in patch.

* /doc -> /docs

* /doc -> /docs

* Update documentation on importing ONNX spec into ONNX Dialect; provide documentation on how to add new documentation pages.
2020-04-09 23:37:04 +08:00
Tung D. Le 4e66488ad3
Change the name and signature of mapToLowerScalarOp (#67)
* Revise mapToLowerScalarOp()

* Update TanhOp

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-04-09 16:06:56 +08:00
Tung D. Le f4fefcf713
Re-add tanh lowering (#75)
* Re-add tanh lowering

* Make the emission deterministic
2020-04-09 14:22:36 +08:00
Tian Jin c9199c9061
Programmatically ensure ONNX Dialect related generated files are up-to-date. (#58)
* Generate ONNX Dialect TableGen Inc files & operation importing inc files when necessary.

* Ensure TableGen inc file is generated before TableGen is invoked.

* Nit: capitalize builder -> Builder.

* Use file-same-as-stdout directive to ensure generated files are always up-to-date in our codebase.

* Use more up-to-date version of ONNXOps.td.inc.

* Do not automatically invoke gen_doc.py.

* Support dry run in gen_doc.py.

* Fix case.

* Remove debug code.

* Add test for new doc_check primitive.

* Add documentation for file-same-as-stdout.

* Provide more comments.

* Add DocCheck to DocCheck README.

* Nit: format CMake script.

* Update comments.

Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-04-08 15:00:34 +08:00
Alexandre Eichenberger f5bed72e13
implement shape inference for concat (#74)
* implement shape inference for concat

* better checking of axis being concatenated: constant values only
2020-04-07 16:13:41 -04:00
Gheorghe-Teodor Bercea 8532a10614
Fix input argument indexing error (#69)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Fixes. Emit error before return in shape inference.

* Fix description.

* Fix emitted error message.

* Fix index name.
2020-04-06 11:35:17 -04:00
Tung D. Le 83eb15bfae
Fix src/Dialect/ONNX/ONNXOps.td.inc (#68) 2020-04-03 18:18:35 +08:00
Gheorghe-Teodor Bercea f16e79d744
Emit constant tensors as global constants (#66)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Enable unique constant variable names.

* Emit alloca for local array. Add tests.

* Comment clean-up.

* Simplify MemRef construction.

* Fix output type.
2020-04-01 13:51:06 -04:00
Byron Changuion b65e77305c
Move to more recent LLVM commit ID (#64)
* Move to more recent LLVM commit ID

* Update LLVM cache version from V9 to V10

* Update to latest LLVM commit id from master, roll back conditions in util scripts

* Update CircleCI LLVM cache tag to ensure CI updates builds with the latest LLVM commit id

* Update README.md to have matching LLVM commit id

* Update doc/Dialtects/onnx.md
2020-04-01 12:38:34 -04:00
Alexandre Eichenberger b422116f12
clean operands using names provided by operandAdaptors (#56)
* clean operands using names provided by operandAdaptors

* reverted changes that were erroneous, per Tung's comment

* clang format issues
2020-03-31 11:55:27 -04:00
Tung D. Le c8758545e7
Import optional outputs as NoneType (#57)
* Import optional outputs as NoneType

* Allow NoneType results after the shape inference

* Use empty() to check an empty string
2020-03-30 21:21:18 -04:00
chentong319 55cbe316fd
Handle errors in shape inference (#47)
* add return to inferShape

* ran clang-format

* minor changes according to review

* fix format
2020-03-30 11:22:55 -04:00
Tung D. Le 867406191f
Replace SplitConvOpPattern by a declarative rewriting rule (#46) 2020-03-30 14:23:14 +08:00
Alexandre Eichenberger 653fa69102
Unify Conv implementation (#54)
* fixed readme for new git repo

* conv with bias as an optional input
2020-03-26 11:03:19 -04:00
Gheorghe-Teodor Bercea 1777c07b1e
[NFC] Reorganize main function. (#44)
* Reorganize main function.

* Follow review comments.

* Use new file names.
2020-03-24 13:48:54 -04:00
Tung D. Le ddff0f1256
Fix a bug in createArrayAttribute (#43)
* Fix a bug in createArrayAttribute

* Use size_t

* Use const auto&
2020-03-24 14:04:23 +08:00
Gheorghe-Teodor Bercea b3719d486b
Make path relative to onnx-mlir project only (#40)
* Make path relative to onnx-mlir project.

* Use binary root folder path for onnx-mlir.
2020-03-20 12:04:22 -04:00
Tian Jin febee542ee
[NFC] Breakdown CMake Build Scripts to Smaller Parts (#39)
* 1. Break down CMake scripts to smaller libraries per-directory.
2. Move some transformations and interfaces to the right folder.
3. Fix minor merge failure of the patch renaming files to use LLVM convention.

* Link OMBuilder with OMONNXOps.

* 1. Update the src location of generated ONNX dialect definition.
2. Link OMONNXRewrite with OMONNXOps.

* Fix path to tablegen for add_onnx_mlir_dialect_doc.

* Update build script for onnx_mlir_transform.

* 1. Remove comment code.
2. onnx_mlir_attribute_promotion -> OMAttributePromotion.

* Name tablegen generated files with LLVM convention.

* Nit: reorder libraries to link against.

* Nit: Link against MLIR first.
2020-03-20 22:40:51 +08:00
Tian Jin 0aafb3e72f
[WIP][NFC] Rename files to llvm style (#35)
* Change naming style for builder directory.

* Change naming style for conversion folder.

* Fix case sensitivity issue.

* Fix missing onnx header onnx_pb.h issue.

* Rename files in Conversion to llvm style.

* Rename files in Dialect to llvm style.

* Path fix.

* Rename files in Pass to llvm style.

* Rename files in Runtime to llvm style.

* Rename files in Tool to llvm style.

* Rename files in Transform to llvm style.

* Change comments about filenames.

* Fix case.

* Rename interface directory to use llvm file naming convention.
2020-03-19 16:48:09 +08:00
Tian Jin 549af8f0b2
Support attribute promotion. (#34)
* Support attribute promotion.

* Simplify op interface name.

* 1. Add more comments to Attribute Promotion Pass.
2. Move Promotable Const Operand Interface to src/interface, and link against it.

* Complete NFC change onnx -> onnx-mlir.

* Move attribute_promotion pass to src/transform.

* Nit: reword comment.

* Support Attribute Promotion in gen_doc.py.

* Add test.

* Update ONNX doc.

* Add negative test.

* Rename onnxop.inc -> onnx_ops.td.inc.

* Include onnx_ops.td.inc.

* Nit: better comments.

* Prettify CMake.

* Remove original attribute_promotion code, improve comments.

* Append '_op_interface' to op interface decl/defs.

* Namespace cmake targets using onnx_mlir_ prefix.

* Use updated header name.

* Use new body file name.

* Fix dependency.

* Use new CMake target name.

* Make attribute promotion self-contained by removing redundant constant operations inside the pass execution.

* Remove canonicalization pass.

* Increase comments.

* Use stricter checks.

* Add one more test case.

* Remove %arg1 as it's never used.
2020-03-19 15:03:37 +08:00
Tung D. Le 2814ea3898
Support dilations and enable the remaining e2e tests for MaxPoolSingleOut (#31)
* Support dilations and enable e2e tests

* Fix allocating memory for dynamic shape

* Edit comments

* Do dilation by computing an offset from kernel index

* Correct dilation formula, add an out-of-bounds example, and add a test for dilation

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-18 09:55:50 -04:00
Tung D. Le 4763e8a8bc
Lower ONNXAbsOp to Krnl dialect and enable e2e tests for ONNXReduceL1 (#18)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-17 11:12:45 -04:00
Gheorghe-Teodor Bercea 1622b9f161
[NFC] Change ONNF based names to ONNX-MLIR (#32)
* Rename onnf to onnx-mlir.

* Change workspace name.
2020-03-17 09:16:33 -04:00
Tian Jin c25831094e Revert "Support attribute promotion."
This reverts commit 955968b750.
2020-03-17 17:41:59 +08:00
Tian Jin 955968b750 Support attribute promotion. 2020-03-17 17:39:34 +08:00
Tung D. Le d86591d61a
Import all initialized tensors as dense constants (#30)
* Import initialized tensor as dense attribute

* Import all initialized tensors as dense constants

* Remove unintentional code

* Fix value attribute format in shape inference tests of reshape

* Re-add rank check for reshape's shape inference

* Remove a redundant variable

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-16 11:17:28 -04:00
Gheorghe-Teodor Bercea c46880d5c6
Fix reshape output shape inference when a single dynamic shape is given (#22)
* Fix reshape when a dynamic shape is given.

* Fix default attributes for ConvNoBias.

* Fix comment.

* Resolve comment.

* Improve checks.

* Handle zero dim case.

* Add helper to fetch constants. Add test for dynamic reshape.

* Add test for zero.

* Use shortcut method for size.
2020-03-13 17:18:46 -04:00
chentong319 6137fc7c17
Fix issues #15 and #16 (#29)
* fix issues #15 and #16

* fix format

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-13 10:19:27 -04:00
Tung D. Le 362491553c
Shape inference for ONNXAveragePool (#21)
* Shape inference for ONNXAveragePool

* Edit comments and puts helper function on top of the file

* Fix template
2020-03-13 09:59:16 -04:00
Tung D. Le a65820940c
Lower ConstantOp (#28)
* Lower ConstantOp

* Refactor the code

* Edit error messages

* Check whether attribute is sparse or dense during shape inference
2020-03-12 10:58:42 -04:00
Tung D. Le 162ac1bc32
Pad value for MaxPool must be negative infinity instead of zero (#20)
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-03-12 09:30:02 -04:00
Alexandre Eichenberger 811b63e031
Inter common pad (#26)
* common pad handling in shape inference for conv and maxpool

* common pads

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-11 18:36:02 -04:00
chentong319 391f565a66
Lower constant padding operation to KRNL dialect (#27) 2020-03-11 16:54:07 -04:00
Gheorghe-Teodor Bercea e8a0b47e10
Fix case for upper and lower padding when strides are present. (#11)
* Fix case for upper and lower padding when strides are present.

* Address comments.

* Code clean-up.

* Fix tests.
2020-03-10 16:58:05 -04:00
Gheorghe-Teodor Bercea fe3279e721
Initialize operation arguments with ONNX model constants (#8)
* Save current state.

* Include constant arguments in source.

* Emit constants for Reshape second argument.

* Clean-up code.

* Add changes to gen_doc.py file.

* Propagate constant tensor to Reshape second arg to infer shape.

* Update documentation.

* Eliminate constant tensor operations when lowering to KRNL dialect.

* Replace ConstantTensorOp with ConstantOp.

* Add comment to remove temporary Constant lowering code.

* Remove unused shape inference for Constant.

* Remove comment.

* Remove explicit constant elimination.

* Refactor code.
2020-03-10 14:46:35 -04:00
Gheorghe-Teodor Bercea ba02b90e0b
Enable inference for arbitrary number of instructions (#12)
* Fix shape inference.

* Remove comment.

* Remove worklist since it is not needed.
2020-03-10 14:16:03 -04:00
Tung D. Le 1882059ac9
Support Pads for MaxPoolSingleOut (#14)
* Support Pads for MaxPoolSingleOut

* Regenerate onnx.md to include the new op

* Edit comments

* Undo redundant parts that were unintentionally changed

* Move declarative rewriting rules into canonicalize to avoid creating a new op

* Reformat the rewriting rule pattern of MaxPoolSingleOut

* Put ONNXPadConstantValuePadOp's build method into a .cpp file instead of a tablegen file

* Use the same helper function as the one in inferShape for the ONNXPadConstantValuePadOp's build method

* Change function names and fix padding for the spatial dimensions

* Call shape-inference again after canonicalization to infer shape for newly added ops during canonicalization.

* Fix typos
2020-03-09 20:15:58 -04:00
Gheorghe-Teodor Bercea 8a992b619f
Create some helper functions to emit constant op for a specific type (#7)
* emitConstantOp with a given type

* Helper functions to create infinity constants

* Use new constant helper functions for MaxPoolSingleOut

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-05 14:21:00 -05:00
Gheorghe-Teodor Bercea 8e1b30e133
Check channel dimension mismatch only for known dimensions (#2)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-04 14:34:08 -05:00
Gheorghe-Teodor Bercea e4c23da4fd
Lower MaxPoolSingleOutOp to Krnl dialect (#1)
* Lower MaxPoolSingleOutOp to Krnl dialect

* Edit comments

* Update changes according to the new folder structure

* Add MLIR tests

* Support ceil_mode

* Merge the first two krnl loops into one krnl loop; remove attribute checks

* Dynamically allocate memory for the result if the result has unknown dimensions

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-04 14:27:21 -05:00
Tung D. Le e97df0b343
Add a pass to decompose ONNX operations (#9) 2020-03-04 10:53:59 -05:00
Tung D. Le 5357fc1421
Use SqrtOp in Standard dialect (#108)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-26 12:03:24 -05:00
Tung D. Le 0c4a010283
Remove rank constraints in gemm fusion (#101)
* Remove rank constraints in gemm fusion

* Add an MLIR test

Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-26 11:40:52 -05:00
Tung D. Le 24d89625e3
Remove redundant lower_frontend_to_krnl since we reorganized it (#99)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-26 16:32:06 +08:00
chentong 3abbf1c0e9 put the common code into a helper function 2020-02-25 17:43:49 -05:00
chentong 4079ee1f26 Merge remote-tracking branch 'upstream/master' into shapeinference-pad 2020-02-25 15:54:18 -05:00
Alexandre Eichenberger 3a88361b17
use input/output operation names, use helper for attribute function and int values (#106) 2020-02-25 15:46:11 -05:00
Alexandre Eichenberger 3b1c29c078
Using attribute setters for maxpool (#105)
* using attribute setters for maxpool

* fix typos, added handling of storage order, simplified code
2020-02-25 14:33:48 -05:00
Gheorghe-Teodor Bercea ee3e140ddb
[NFC] Change structure of conversion folder. (#96)
* Change structure of conversion folder.

* Fix comments.

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-25 10:38:08 -05:00
Gheorghe-Teodor Bercea 32f08bcf0c
Clean-up code. (#98) 2020-02-25 09:54:29 -05:00
Gheorghe-Teodor Bercea 0d307d1183
Set flag to true when definition is emitted. (#97) 2020-02-25 09:47:42 -05:00
Tung D. Le a720f9a7b2
Remove special GemmNoBias since we can handle it using NoneType bias (#100)
* Remove special GemmNoBias since we can handle it using NoneType bias

* Remove GemmNoBias from onnx.md

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-25 13:20:43 +08:00
Tian Jin 732317cd5a
Transition to ONNX-1.6.0. (#95)
* Transition to ONNX-1.6.0.

* Use the version of ONNX inside ONNF when running backend tests.

* Install quietly and with sudo privilege.
2020-02-25 13:04:15 +08:00
Alexandre Eichenberger fcb5f35993
Introduce helper class to generate KRNL code and apply it to Convolution (#93)
* helper to gen krnl code, applied to conv

* suggested changes, name, removed set insertion point

* format

* suggested changes

* added comments and made a small name change
2020-02-24 17:20:15 -05:00
Tian Jin 9c398c0121
Support Optional Inputs (#94)
* 1. Combine variadicIn/Out with expectedNumOperands/Results to simplify import function arguments.
2. Generic improvements to code readability in gen_doc.py.

* Update ONNX Dialect doc.

* Remove redundant code in ImportNode.

* Prettify op_build_table.inc.

* 1. Remove irrelevant code in gen_doc.py

* Refactor code to be more readable.

* Further refactoring for readability improvements.

* Allow gemm to have an optional operand (bias term), and include an example of a declarative optimization pattern targeting gemm with the bias term omitted.

* Make shape inference/lowering of gemm op compatible with optional operand declaration.

* Apply canonicalization again after lowering from onnx -> std dialects.

* Make hasBias compatible with the GemmNoBias op case.

* Update doc.

* Add a canonicalization test.

* Remove special handler for importing Gemm op, as it's redundant now.
2020-02-24 23:46:48 +08:00
chentong b3df3c64b5 Merge branch 'master' of github.com:clang-ykt/ONNF into shapeinference-pad 2020-02-24 09:26:45 -05:00
chentong 2281cc060f Merge branch 'master' of github.com:clang-ykt/ONNF into shapeinference-pad
Conflicts:
	src/pass/shape_inference_pass.cpp
2020-02-21 09:30:40 -05:00
Tung D. Le 479dd5e35a
Add result type inference to op definition (#87)
* Add result type inference to op definition

* Edit MLIR tests

* Fix result type for Mul

* Format comments

* Return UnrankedTensorType as result type

* Just for testing -split-input-file

* Undo: Just for testing -split-input-file

* Extract a function, get_operand_ins, that gets operand types; rewrite gen_attr_ins function

* Generate custom builders

* Call existing build methods

* Add comments

* Minor changes

* Generate build methods with attributes

* Add support of variadic type

* Do not generate custom build methods for ops having only attributes

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-21 22:28:24 +08:00
Tung D. Le aea6479ad3
Lower BatchNormalization (test mode) to Krnl dialect (#70)
* Add ONNXBatchNormalizationTestModeOp and its shape inference

* Lower batchnormalization test mode

* re-use scale, bias, mean, and variance

* Add MLIR tests

* Add e2e tests

* fix typos

* Fix a bug in MLIR tests

* Change type from int to int64_t for indices

* Uncomment e2e tests due to segmentation fault

* Uncomment e2e tests due to segmentation fault

* Revise the code

* [Tian] Fix segmentation fault in e2e tests

* Re-generate onnx.md to include BatchNormalizationTestModeOp

* Revert an unintentional change

* Fix some typos in comments

* Use convertToMemRefType from the master branch

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-20 11:45:40 -05:00
Tung D. Le f1d20e368f
Add support of GemmNoBias (#91)
* Add support of GemmNoBias

* Fix a wrong indentation
2020-02-20 10:55:24 -05:00
Tung D. Le a3f042220e
Get MemRefType for result types (#69)
* Get memreftype for result types

* Revise

* Replace convertToMemRefType

* Use convertToMemRefType in ONNXConvNoBiasOpLowering

* Merge with the master branch

* Revert an unintentional change

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-20 21:44:01 +08:00
Gheorghe-Teodor Bercea b28c6906b4
Fix building ONNF with latest LLVM/MLIR (#89)
* Fix build and link errors.

* Fix end to end tests.

* Fix indentation.

* Fix type conversion.

* Use newest LLVM version.

* Use newest LLVM version.
2020-02-19 18:15:02 -05:00
Gheorghe-Teodor Bercea da037ffc7d
Merge branch 'master' into shapeinference-pad 2020-02-19 13:46:00 -05:00
Tung D. Le b9f2f25b56
[NFC] Categorize ONNX ops lowering (#80)
* Create two categories: elementwise and tensor

* typos

* Create directories for categories

* Edit comments

* Extract a function that creates a KrnlIterateOp

* Add comments

* Extract some common parts

* Revise softmax

* Add reduction.inc

* Move lower-frontend to lib/conversion

* Move  directory to  directory

* Change file/directory names

* Comment format

* Add matmul.inc
2020-02-19 15:17:48 +08:00
chentong ec43fadc3b Merge remote-tracking branch 'upstream/master' into shapeinference-pad 2020-02-17 08:27:43 -05:00
Gheorghe-Teodor Bercea 3c505ae31d
Split convolution into explicit padding and unpadded convolution. (#82)
* Split convolution into explicit padding and unpadded convolution.

* Refactor code. Add test.
2020-02-14 16:06:38 -05:00
chentong bbdf4e3b4d Merge remote-tracking branch 'upstream/master' into shapeinference-pad
Conflicts:
	test/mlir/onnx/onnx_shape_inference.mlir
2020-02-14 15:35:47 -05:00
Gheorghe-Teodor Bercea 17d84901b7
Allow 1-D convolutions. (#86)
* Fix check.

* Fix comment.
2020-02-14 10:54:08 -05:00
Tung D. Le b521719587
Lower Matmul operation to Krnl dialect (#57)
* Allocate memory for matmul's result

* Group cases

* Add support of N-D x N-D, N>=2

* Revise createIterateOperandPack

* Add 1-D x 1-D

* Add 1-D x N-D

* Add MLIR tests

* Change variable names

* Change type from int to int64_t for indices

* Change variable names

* Change int64_t back to int

* Change int64_t back to int

* Change int64_t back to int

* Use decltype

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-14 10:43:17 -05:00
chentong c3041bfb43 shape inference for pad with constant pads 2020-02-13 19:56:05 -05:00
Tian Jin 937bbec265
Fix scalar entry point parameter lowering issue. (#78)
* Fix scalar entry point parameter lowering issue.

* Enable scalar bias test.

* Nit. Improve comments and remove debug code.

* Make helper function static, move to upfront position.

* Move helper function to top of the file.

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-13 13:50:05 +08:00
chentong319 49dae74eab
Create constant pad (#75)
* handle pad op which does not have the optional third argument

* rewrite PadConstantValue with constant pad into PadConstantValuePad

* add test for PadConstantValuePad

* update onnx.md
2020-02-11 15:32:01 -05:00
Gheorghe-Teodor Bercea 094be4f37a
Add support for strides when emitting convolution loop nest. (#76)
* Add support for strides when emitting convolution loop nest.

* Only emit stride multiplication if the stride is greater than one.

* Add test.
2020-02-11 11:53:13 -05:00
Tung D. Le adad9e24bd
Add support of negative dimensions (#66)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-11 10:37:47 -05:00
Tian Jin 181803ebf4
Using Tablegen to Generate Op Documentation (#74)
* Add dialect documentation.

* Add a step in our CI to ensure documentation is up-to-date.

* Add dialect documentation.

* Fix config file mistake, using multi-line commands.

* Fix a bug in DocCheck.
2020-02-10 14:18:54 -05:00
Tung D. Le 2c7046ff5f
Lowering ReductionMax, ReductionMin, ReductionProd and ReductionSum (#31)
* Shape inference for reduction

* Lower ReduceSum

* Support list-like attributes

* Add ReduceMax, ReduceMin, ReduceProd

* Add tests

* Emit errors for unsupported types

* Typos

* Add backend test

* Fix axis computation

* Update the use of attributes

* Use SmallVector

* Address stylistic comments

* Change type from int to int64_t for indices

* Change type from int to int64_t for indices
2020-02-10 21:38:19 +08:00
Gheorghe-Teodor Bercea 0272451521
Lower convolution to KRNL dialect. (#65)
* Ensure data shape is at least 4.

* First version of convolution.

* Simplify code for KRNL lowering.

* Add test without padding or strides.

* Refactor code for lowering frontend operations to KRNL dialect.

* Add test for conv with no bias and no padding.

* Add test with group greater than one.

* Address comment.
2020-02-07 16:51:32 -05:00
Tung D. Le 0564c0eaef
Add constraints for matmul-add fusion (#67)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-07 13:51:44 -05:00
Tung D. Le 0bfb660d02
Import 2-argument Gemm as GemmNoBias (#68)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-07 13:45:37 -05:00
chentong319 60ac8f081f
Op def output (#73)
* fix name of operator output in onnxop.inc and Operator.md

* remove Operators.md
2020-02-08 00:10:35 +08:00
Gheorghe-Teodor Bercea ae297f14ee
Revert "fix name of operator output in onnxop.inc and Operator.md (#62)" (#72)
This reverts commit c45655413d.
2020-02-06 10:52:57 -05:00
chentong319 c45655413d
fix name of operator output in onnxop.inc and Operator.md (#62)
* fix name of operator output in onnxop.inc and Operator.md

* Update directive.py

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-06 20:54:03 +08:00
Haruki Imai 477227a0ec
Added lowering of SignOp (#21)
* Support lowering of SignOp

* Fixed test code for signop of integer input

* Inserted Sign and Reciprocal in SharingWork.md (Reciprocal is for past commit 7e3f96e)

* Added test for Sign Op

* Fixed minus_one -> minusOne

* Fixed test for signop
2020-02-04 22:27:17 +08:00
Gheorghe-Teodor Bercea 87aa72764f
Fix dependencies for onnf-opt (#51)
* Ensure onnf-opt is being rebuilt.

* Remove additional dependencies.
2020-01-31 10:51:43 -05:00
Tung D. Le 2b56c09454
Rewrite ReduceL1, ReduceL2, ReduceLogSum, ReduceLogSumExp, ReduceSumSquare in the ONNX dialect (#38)
* Rewrite ReduceSumSquare

* Edit gen_doc.py

* Revise the code

* Do shape inference after canonicalization so that there is no need to implement shape inference of rewritten ops

* Rewrite ReduceL2

* Add onnx_rewrite.cpp for all rewriting for ONNX ops

* Rewrite ReduceL1, ReduceLogSum, ReduceLogSumExp

* Edit comments

* Change the use of -> to .

* Checkout gen_doc.py from the master branch

* Use emplace_back instead of push_back

* Revise the code

* Edit comments

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-01-31 19:00:39 +08:00
Alexandre Eichenberger 0d77840969
Inference maxpool (#48)
* first steps for shape inference of maxpool

* steps forward

* ongoing

* working version

* first steps for shape inference of maxpool

* steps forward

* ongoing

* working version

* fix errors introduced by github merge

* changes suggested by Doru

* updates

* requested fixes

* requested changes

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-30 14:30:28 -05:00
Gheorghe-Teodor Bercea 9fb826ae7e
Lower transpose operation to KRNL dialect (#54)
* Lower transpose operation.

* Fix IdentityOp.

* Add tests.

* Add backend tests.

* Clean-up code.

* Move transpose code and improve comment.
2020-01-30 11:44:56 -05:00
chentong319 6959cf4586
clean up gen_doc.py (#59)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-29 13:54:46 -05:00
Tung D. Le 400676e371
Lowering Gemm (#19)
* Initial implementation

* Support transposing inputs

* Revise unidirectional broadcasting and unknown dimensions

* Revise gemm

* Add testcase

* Rename some variables

* Update SharingWork.md

* Change from the use of Value* to Value

* Insert deallocation

* Initialize the output matrix and fix wrong computation

* Add end-to-end testcases

* Edit lowering tests

* Change attribute names

* Use emplace_push for SmallVector

* Use the new way of getting attributes

* Revise the use of attributes

* Check the bias's shape

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-29 11:11:49 -05:00
Tung D. Le 9e82d388f0
Add support for Unsqueeze (#50)
* Infer shape for Unsqueeze

* Lower Unsqueeze

* Revise

* Turn off backend tests

* Compute tensorSize for static shape

* Compute tensorSize with unknown dims

* Edit tests

* Update the use of attributes

* Add e2e tests

* Use SmallVector

* Remove return

* Check whether the operand is ranked or not

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-29 10:46:02 -05:00
Tung D. Le 5b44169aaa
Support dimension zero in reshape (#55)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-29 10:41:09 -05:00
Tung D. Le f3047943a1
Handle 1-D MATMUL N-D (#56) 2020-01-29 10:35:05 -05:00
Tung D. Le 195bf9d15d Add KrnlSqrtOp (#22)
* Initial lowering of KrnlSqrtOp

* Fix errors and add a testcase

* typos

* Add the MLIR example

* Restore doc/doc_check/CMakeLists.txt

* Clean the code

* Edit comments

* Remove redundant parts

* Change the use of -> to .

* Add a test for f64

* Support ONNXSqrtOp

* Fix indentation

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-28 11:10:47 -05:00
Gheorghe-Teodor Bercea f00206cecf
Fix reshape op. (#53) 2020-01-28 10:21:08 -05:00
Tung D. Le 7c889548a7 Allow importing variadic inputs/outputs of onnx operators (#16)
* Allow importing variadic inputs/outputs of onnx operators

* Enable testcases for variadic ops

* Modify gen_doc.py
2020-01-28 21:48:11 +08:00
Doru Bercea b450a763d1 Change variable names to use rank. Add additional check for scalars. 2020-01-27 12:08:23 -05:00
Gheorghe-Teodor Bercea 3f5c543782
Merge branch 'master' into matmul-shape 2020-01-27 11:37:40 -05:00
Gheorghe-Teodor Bercea 95cf939c5c
Fix end-to-end tests. (#52)
* Fix end-to-end tests.

* Use dyn_cast.
2020-01-27 11:35:45 -05:00
chentong319 c74f814f64 Add attributes as operation parameters (#45)
* add attributes of Op into parameters

* fix rewrite rule for GemmOp with attributes

* use I64Attr instead of I32Attr and modify test cases for the changes in attributes

* add output name (prefixed with o_) to Op definition

* update shape inference for the new attributes
2020-01-27 10:09:14 -05:00
Gheorghe-Teodor Bercea 696da50d2a
Merge branch 'master' into matmul-shape 2020-01-24 15:53:02 -05:00
Yasushi Negishi 383a5c31ac Support Softplus and Softsign operations (#17)
* Support Softplus and Softsign operations

* Add the default shape inference for the transposition operation.

* Fix conflict with master

* Fix conflict with master branch

* Add test for softplus and softsign in test/backend/test.py

* Re-enable Reciprocal tests.

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-01-23 21:18:38 -07:00
Doru Bercea 07d28769d3 Merge remote-tracking branch 'origin/master' into matmul-shape 2020-01-23 11:53:53 -05:00
Doru Bercea 68efd21064 Fix dilation formula in the code. 2020-01-22 16:34:59 -05:00
Doru Bercea 1784ec2314 Fix reference error. 2020-01-22 16:09:19 -05:00
Doru Bercea 0bc07ef661 Merge remote-tracking branch 'origin/master' into matmul-shape 2020-01-22 15:29:09 -05:00
Doru Bercea 94391a3cde Add comment. 2020-01-22 15:05:56 -05:00
Doru Bercea ea45cbcca9 Add support for dilations attribute and add tests. 2020-01-22 14:40:10 -05:00
Doru Bercea de77758faf Fix kernel dimensions. 2020-01-22 10:11:36 -05:00
Doru Bercea 169236a8fc Handle SAME_LOWER and SAME_UPPER. 2020-01-22 10:11:36 -05:00
Doru Bercea ec9e023f04 Add shape inference method. 2020-01-22 10:11:36 -05:00
Doru Bercea 3fe0f2e735 Fix operand type access. 2020-01-22 10:11:36 -05:00
Doru Bercea ab8e2f9a1b Add verifier to check for required attributes. 2020-01-22 10:11:34 -05:00
Tian Jin 51b0f4c9dd
Chentong319 attribute with variant (#25)
* change the read-in of attribute, using variant

* Use backported variant.

* Reduce code duplication.

* 1. Make array attribute parsing more clear.
2. int -> int64_t.

* 1. Fix how array attributes are imported.

* 1. Fix clang-tidy warnings.

* 1. Nit: fix clang-tidy warnings.

* Fix MaxPool node construction.

* Fix call to MaxPool.

* Comment out backend tests that fail.

* Add path to variant submodule to enable include file detection.

* Allow unused argument to avoid special casing generator.

* Address attribute-related e2e test failures for HardSigmoid, Elu, LeakyRelu, Selu, Softmax

Co-authored-by: chentong319 <chentong@us.ibm.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-21 19:36:21 -07:00
Tian Jin 0231bb83a2
Properly link with ZLIB. (#40) 2020-01-21 11:08:16 -05:00
Tung D. Le e89e51699b Lowering softmax (#14)
* Rebase

* Use max normalization

* Handle axis

* Add tests

* Update SharingWork.md

* Remove redundant spaces

* Format code

* Rebase

* Change from the use of Value* to Value

* Add end-to-end tests

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-01-20 21:57:32 -05:00
Doru Bercea 6b55bb43c7 Fix operand type access. 2020-01-20 15:48:16 -05:00
Doru Bercea bd44d8402e Add verifier function for checking negative perms. 2020-01-20 14:54:40 -05:00
Doru Bercea 9d1078540d Transpose using perm attribute. 2020-01-20 14:54:40 -05:00
Tian Jin 8665ecd998
Enable e2e tests (#29)
* Sync with latest MLIR.

* Enable ONNX backend tests as a means to test ONNF lowering end-to-end.

* Install ONNX using quiet mode.

* Remove debug comments.

* Install ONNX from third_party/onnx.

* Check python version and fix pip command for installing ONNX.

* Using --user install option to prevent permission-denied errors.

* Remove unused imports.

* Try using stock ONNX pip package as there are more tests in them.

* Pip got stuck building wheels, try sudo.

* Use verbose install to debug.

* Invalidate cache to build LLVM tools.

* Fix mlir installation script location.

* Debug to locate ONNF.

* Sanity check.

* Check out ONNF code first.

* Use verbose LIT output.

* 1. Update documentation to always use verbose LIT.
2. Update krnl ops to reflect new affine map attribute syntax.

* See if conda exists

* Install ONNX by manually cloning the repo.

* Install cmake first.

* Using sudo privilege when installing.

* Limit build parallelism.

* Limit parallelism.

* Larger memory.

* Install onnx package with pip.

* Build MLIR tools.

* Invalidate cache.

* Compile model.so with -fPIC.

* Remove module dump to get concise debug output.

* Print command before executing.

* Use quiet install mode to reduce logging.

* Use -relocation-model=pic to generate position independent code.

* 1. Remove MAKEFLAGS because now buildbot has enough memory.
2. Run DocCheck as a last step.

* 1. Add verbose mode for backend test.

* When dumping to LLVM bitcode, do not dump module IR, but print a message indicating that bitcode has been written to disk.

* Do not pass MakeFlags to CMake.

* Add more explanation for possible reasons of failing to identify tests.
2020-01-20 12:30:08 -05:00
Gheorghe-Teodor Bercea a87f01747a
Merge branch 'master' into matmul-shape 2020-01-15 18:03:03 -05:00
Gheorghe-Teodor Bercea d895670656
Merge branch 'master' into fix-conv 2020-01-15 17:56:57 -05:00
Gheorghe-Teodor Bercea deb7a7c4bb
Merge branch 'master' into matmul-shape 2020-01-15 17:51:13 -05:00
Gheorghe-Teodor Bercea 969459ddcb
Merge branch 'master' into fix-conv 2020-01-15 17:50:36 -05:00
Gheorghe-Teodor Bercea 514cbcb1dc
Merge branch 'master' into fix-gemm 2020-01-15 17:50:15 -05:00
Doru Bercea a1b44905e2 Add documentation for handling optional arguments. 2020-01-15 17:06:14 -05:00
Doru Bercea 3f6efdf4a4 Fix MaxPool translation to ONNX dialect. 2020-01-15 15:16:45 -05:00
Doru Bercea d2a90e2923 Remove references to FullGemm. 2020-01-15 14:27:21 -05:00
Doru Bercea a42fdd08f3 Fix Gemm translation to ONNX dialect. 2020-01-15 14:11:32 -05:00
Doru Bercea 67ec9e9009 Fix convolution translation to MLIR. 2020-01-15 13:26:50 -05:00
Doru Bercea fc352745e0 Make last argument of conv variadic. 2020-01-14 11:17:52 -05:00
Doru Bercea 36475ac509 Code clean-up. 2020-01-14 10:47:24 -05:00
Doru Bercea e091825896 Add check for matrix size match for 1 and 2 dimenisional cases. 2020-01-14 10:47:24 -05:00
Doru Bercea da0e9b01b1 Fix 1 and 2 dimensional cases. Add test for 1 and 2 dimensional combinations. 2020-01-14 10:47:24 -05:00
Doru Bercea 96551ef71e Fix conditions. 2020-01-14 10:47:24 -05:00
Doru Bercea a3995b61e7 Add support for shape broadcast. 2020-01-14 10:47:24 -05:00
Doru Bercea 38bffee619 Add support for broadcasting left matrix. 2020-01-14 10:47:24 -05:00
Doru Bercea d176b84506 Add support for broadcasting right matrix. 2020-01-14 10:47:24 -05:00
Doru Bercea 170296b7c6 Add special case for 1-D matrix multiplication. 2020-01-14 10:47:22 -05:00
Tian Jin 22a6bdc574
Sync with latest MLIR. (#26) 2020-01-13 12:21:29 -05:00
Doru Bercea 151f4f8c44 Add the default shape inference for the transposition operation. 2020-01-09 13:50:38 -05:00
Tung D. Le edcd506dde
Merge branch 'master' into tanh_cos_log 2020-01-08 13:39:24 +09:00
Tung D. Le 3d4ad52011 Rewrite tanh using TanhOp, add log, cos 2020-01-08 12:11:21 +09:00
Tung D. Le becb2add4a Do not get float attributes with fixed precision 2020-01-07 17:39:34 +09:00
Tian Jin 0582846864 Transition to value-typed Value, rename Value* -> Value, in accordance with upstream MLIR style change. 2019-12-30 22:42:13 -05:00
Tian Jin eadf33d816 explicit ordering among operands 2019-12-24 03:36:33 -05:00
Tian Jin 4eb95b2373 fix onnf build 2019-12-24 03:00:54 -05:00
Tian Jin 58c2f6de00 fix link 2019-12-24 02:46:14 -05:00
Tian Jin c55020f6b6 fix build script 2019-12-24 02:29:28 -05:00
Tian Jin 1188b765c9 comment out test tanh 2019-12-24 02:19:46 -05:00
Tian Jin 95de5b7ac9 revert changes to lower-to-krnl 2019-12-24 02:07:21 -05:00