* 1. Break down CMake scripts into smaller per-directory libraries.
2. Move some transformations and interfaces to the right folder.
3. Fix minor merge failure of the patch renaming files to use LLVM convention.
* Link OMBuilder with OMONNXOps.
* 1. Update the src location of generated ONNX dialect definition.
2. Link OMONNXRewrite with OMONNXOps.
* Fix path to tablegen for add_onnx_mlir_dialect_doc.
* Update build script for onnx_mlir_transform.
* 1. Remove comment code.
2. onnx_mlir_attribute_promotion -> OMAttributePromotion.
* Name tablegen generated files with LLVM convention.
* Nit: reorder libraries to link against.
* Nit: Link against MLIR first.
* Support attribute promotion.
* Simplify op interface name.
* 1. Add more comments to Attribute Promotion Pass.
2. Move Promotable Const Operand Interface to src/interface, and link against it.
* Complete NFC change onnx -> onnx-mlir.
* Move attribute_promotion pass to src/transform.
* Nit: reword comment.
* Support Attribute Promotion in gen_doc.py.
* Add test.
* Update ONNX doc.
* Add negative test.
* Rename onnxop.inc -> onnx_ops.td.inc.
* Include onnx_ops.td.inc.
* Nit: better comments.
* Prettify CMake.
* Remove original attribute_promotion code, improve comments.
* Append '_op_interface' to op interface decl/defs.
* Namespace cmake targets using onnx_mlir_ prefix.
* Use updated header name.
* Use new body file name.
* Fix dependency.
* Use new CMake target name.
* Make attribute promotion self-contained by removing redundant constant operations inside the pass execution.
* Remove canonicalization pass.
* Increase comments.
* Use stricter checks.
* Add one more test case.
* Remove %arg1 as it's never used.
* Support dilations and enable e2e tests
* Fix allocating memory for dynamic shape
* Edit comments
* Do dilation by computing an offset from kernel index
* Correct dilation formula, add an example of out-of-bound, and add a test for dilation
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
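A note on the dilation commits above: in one spatial dimension, a dilated convolution reads input position out * stride + k * dilation - pad for output position out and kernel tap k, and positions that land outside the data are the out-of-bound cases mentioned. The snippet below is a minimal illustration of that index arithmetic, not the repository's lowering code; all names and values are made up for the example.

```cpp
#include <cstdio>

// Illustrative only: for one spatial dimension, the input position read for
// output position `out` and kernel tap `k` under dilation is
//   in = out * stride + k * dilation - pad
// Positions outside [0, inputSize) fall in the padding region (out of bounds
// of the actual data) and contribute the pad value instead of a data element.
int main() {
  const int inputSize = 5, kernelSize = 3;
  const int stride = 1, dilation = 2, pad = 1;
  // outputSize = (inputSize + 2*pad - dilation*(kernelSize-1) - 1)/stride + 1
  const int outputSize = 3;
  for (int out = 0; out < outputSize; ++out)
    for (int k = 0; k < kernelSize; ++k) {
      int in = out * stride + k * dilation - pad;
      bool inBounds = (in >= 0 && in < inputSize);
      std::printf("out=%d k=%d -> in=%d%s\n", out, k, in,
                  inBounds ? "" : " (out of bounds)");
    }
  return 0;
}
```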
* Fix reshape when a dynamic shape is given.
* Fix default attributes for ConvNoBias.
* Fix comment.
* Resolve comment.
* Improve checks.
* Handle zero dim case.
* Add helper to fetch constants. Add test for dynamic reshape.
* Add test for zero.
* Use shortcut method for size.
* Support Pads for MaxPoolSingleOut
* Regenerate onnx.md to include the new op
* Edit comments
* Undo redundant parts that were unintentionally changed
* Move declarative rewriting rules into canonicalize to avoid creating a new op
* Reformat the rewriting rule pattern of MaxPoolSingleOut
* Put ONNXPadConstantValuePadOp's build method into a .cpp file instead of a tablegen file
* Use the same helper function as the one in inferShape for the ONNXPadConstantValuePadOp's build method
* Change function names and fix padding for the spatial dimensions
* Call shape-inference again after canonicalization to infer shape for newly added ops during canonicalization.
* Fix typos
* emitConstantOp with a given type
* Helper functions to create infinity constants
* Use new constant helper functions for MaxPoolSingleOut
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Lower MaxPoolSingleOutOp to Krnl dialect
* Edit comments
* Update changes according to the new folder structure
* Add MLIR tests
* Support ceil_mode
* Merge the first two krnl loops into one krnl loop; remove attribute checks
* Dynamically allocate memory for the result if the result has unknown dimensions
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
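For context on ceil_mode above: the ONNX MaxPool spec derives each spatial output dimension from the input size, pads, kernel size, dilation, and stride, using floor division normally and ceiling division when ceil_mode is set. A small standalone sketch of that arithmetic, with illustrative names rather than the project's helpers:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Output spatial size for pooling per the ONNX MaxPool specification:
//   (in + padBegin + padEnd - dilation*(kernel-1) - 1)/stride + 1,
// rounded down normally, rounded up when ceil_mode is set.
int64_t poolOutputDim(int64_t in, int64_t kernel, int64_t stride,
                      int64_t padBegin, int64_t padEnd, int64_t dilation,
                      bool ceilMode) {
  double numerator = in + padBegin + padEnd - dilation * (kernel - 1) - 1;
  double v = numerator / stride + 1;
  return static_cast<int64_t>(ceilMode ? std::ceil(v) : std::floor(v));
}

int main() {
  // 32-wide input, kernel 3, stride 2, no padding, no dilation.
  std::printf("floor: %lld\n", (long long)poolOutputDim(32, 3, 2, 0, 0, 1, false)); // 15
  std::printf("ceil:  %lld\n", (long long)poolOutputDim(32, 3, 2, 0, 0, 1, true));  // 16
  return 0;
}
```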
* Remove rank constraints in gemm fusion
* Add an MLIR test
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* helper to gen krnl code, applied to conv
* suggested changes, name, removed set insertion point
* format
* suggested changes
* added comments and made a small name change
* 1. Combine variadicIn/Out with expectedNumOperands/Results to simplify import function arguments.
2. Generic improvements to code readability in gen_doc.py.
* Update ONNX Dialect doc.
* Remove redundant code in ImportNode.
* Prettify op_build_table.inc.
* Remove irrelevant code in gen_doc.py.
* Refactor code to be more readable.
* Further refactoring for readability improvements.
* Allow gemm to have an optional operand (bias term), and include an example of declarative optimization pattern targeting gemm with bias term omitted.
* Make shape inference/lowering of gemm op compatible with optional operand declaration.
* Apply canonicalization again after lowering from onnx -> std dialects.
* Make hasBias compatible with the situation of GemmNoBias op.
* Update doc.
* Add a canonicalization test.
* Remove special handler for importing Gemm op, as it's redundant now.
* Add result type inference to op definition
* Edit MLIR tests
* Fix result type for Mul
* Format comments
* Return UnrankedTensorType as result type
* Just for testing -split-input-file
* Undo: Just for testing -split-input-file
* Extract a function, get_operand_ins, that gets operand types; rewrite gen_attr_ins function
* Generate custom builders
* Call existing build methods
* Add comments
* Minor changes
* Generate build methods with attributes
* Add support of variadic type
* Do not generate custom build methods for ops having only attributes
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Add ONNXBatchNormalizationTestModeOp and its shape inference
* Lower batchnormalization test mode
* re-use scale, bias, mean, and variance
* Add MLIR tests
* Add e2e tests
* fix typos
* Fix a bug in MLIR tests
* Change type from int to int64_t for indices
* Comment out e2e tests due to segmentation fault
* Revise the code
* [Tian] Fix segmentation fault in e2e tests
* Re-generate onnx.md to include BatchNormalizationTestModeOp
* Reverse an unintentional change
* Fix some typos in comments
* Use convertToMemRefType from the master branch
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
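As background for the test-mode lowering above: in inference (test) mode, ONNX BatchNormalization is a purely element-wise computation using the saved per-channel statistics, y = scale * (x - mean) / sqrt(var + epsilon) + bias. A scalar sketch of that formula, not the actual Krnl lowering:

```cpp
#include <cmath>
#include <cstdio>

// ONNX BatchNormalization in test mode, for one element of channel c:
//   y = scale[c] * (x - mean[c]) / sqrt(var[c] + epsilon) + bias[c]
float batchNormTestMode(float x, float scale, float bias, float mean,
                        float var, float epsilon = 1e-5f) {
  return scale * (x - mean) / std::sqrt(var + epsilon) + bias;
}

int main() {
  std::printf("%f\n", batchNormTestMode(2.0f, 1.0f, 0.0f, 1.0f, 4.0f));
  return 0;
}
```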
* Get memreftype for result types
* Revise
* Replace convertToMemRefType
* Use convertToMemRefType in ONNXConvNoBiasOpLowering
* Merge with the master branch
* Reverse an unintentional change
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Create two categories: elementwise and tensor
* typos
* Create directories for categories
* Edit comments
* Extract a function that creates a KrnlIterateOp
* Add comments
* Extract some common parts
* Revise softmax
* Add reduction.inc
* Move lower-frontend to lib/conversion
* Move directory to directory
* Change file/directory names
* Comment format
* Add matmul.inc
* Allocate memory for matmul's result
* Group cases
* Add support of N-D x N-D, N>=2
* Revise createIterateOperandPack
* Add 1-D x 1-D
* Add 1-D x N-D
* Add MLIR tests
* Change variable names
* Change type from int to int64_t for indices
* Change variable names
* Change int64_t back to int
* Change int64_t back to int
* Change int64_t back to int
* Use decltype
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
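To make the MatMul case grouping above concrete: ONNX MatMul follows numpy.matmul semantics, so a 1-D left operand is treated as a row vector whose prepended 1 is dropped from the result, and for N-D x N-D the leading dimensions act as batch dimensions. The shape-only sketch below covers just the three cases named in the commits and assumes equal batch dimensions for the N-D x N-D case; the function name is illustrative, not the project's.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Shape-only sketch of the MatMul cases named in the commits above
// (numpy.matmul-style semantics; batch dims assumed equal for N-D x N-D).
std::vector<int64_t> matMulShape(const std::vector<int64_t> &a,
                                 const std::vector<int64_t> &b) {
  // 1-D x 1-D: dot product, scalar result (empty shape).
  if (a.size() == 1 && b.size() == 1) {
    assert(a[0] == b[0]);
    return {};
  }
  // 1-D x N-D: treat a as (1, K); the prepended 1 is dropped from the result.
  if (a.size() == 1) {
    assert(b.size() >= 2 && a[0] == b[b.size() - 2]);
    std::vector<int64_t> result(b.begin(), b.end() - 2);
    result.push_back(b.back());
    return result;
  }
  // N-D x N-D, N >= 2: shared batch dims followed by (M, N) from (M, K) x (K, N).
  assert(a.size() == b.size() && a.back() == b[b.size() - 2]);
  std::vector<int64_t> result(a.begin(), a.end() - 1);
  result.push_back(b.back());
  return result;
}

int main() {
  auto s = matMulShape({2, 3, 4}, {2, 4, 5});
  assert((s == std::vector<int64_t>{2, 3, 5}));
  return 0;
}
```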
* Fix scalar entry point parameter lowering issue.
* Enable scalar bias test.
* Nit. Improve comments and remove debug code.
* Make helper function static, move to upfront position.
* Move helper function to top of the file.
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* handle pad op which does not have the optional third argument
* rewrite PadConstantValue with constant pad into PadConstantValuePad
* add test for PadConstantValuePad
* update onnx.md
* Add dialect documentation.
* Add a step in our CI to ensure documentation is up-to-date.
* Add dialect documentation.
* Fix config file mistake, using multi-line commands.
* Fix a bug in DocCheck.
* Shape inference for reduction
* Lower ReduceSum
* Support list-like attributes
* Add ReduceMax, ReduceMin, ReduceProd
* Add tests
* Emit errors for unsupported types
* Typos
* Add backend test
* Fix axis computation
* Update the use of attributes
* Use SmallVector
* Address stylistic comments
* Change type from int to int64_t for indices
* Change type from int to int64_t for indices
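Regarding the axis computation fix above: ONNX reduction ops accept negative axes that count from the last dimension, so axes are normalized into [0, rank) before use. A tiny illustrative helper (the name is hypothetical, not the project's):

```cpp
#include <cassert>
#include <cstdint>

// Normalize an ONNX reduction axis into [0, rank): negative values count
// from the last dimension, e.g. axis -1 on a rank-3 tensor is axis 2.
int64_t normalizeAxis(int64_t axis, int64_t rank) {
  if (axis < 0)
    axis += rank;
  assert(axis >= 0 && axis < rank);
  return axis;
}

int main() {
  assert(normalizeAxis(-1, 3) == 2);
  assert(normalizeAxis(1, 3) == 1);
  return 0;
}
```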
* Ensure data has a rank of at least 4.
* First version of convolution.
* Simplify code for KRNL lowering.
* Add test without padding or strides.
* Refactor code for lowering frontend operations to KRNL dialect.
* Add test for conv with no bias and no padding.
* Add test with group greater than one.
* Address comment.
* fix name of operator output in onnxop.inc and Operator.md
* Update directive.py
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Support lowering of SignOp
* Fixed test code for signop of integer input
* Inserted Sign and Reciprocal in SharingWork.md (Reciprocal is for past commit 7e3f96e)
* Added test for Sign Op
* Fixed minus_one -> minusOne
* Fixed test for signop
* Rewrite ReduceSumSquare
* Edit gen_doc.py
* Revise the code
* Do shape inference after canonicalization so that there is no need to implement shape inference of rewritten ops
* Rewrite ReduceL2
* Add onnx_rewrite.cpp for all rewriting for ONNX ops
* Rewrite ReduceL1, ReduceLogSum, ReduceLogSumExp
* Edit comments
* Change the use of -> to .
* Checkout gen_doc.py from the master branch
* Use emplace_back instead of push_back
* Revise the code
* Edit comments
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* first steps for shape inference of maxpool
* steps forward
* ongoing
* working version
* fix errors introduced by github merge
* changes suggested by Doru
* updates
* requested fixes
* requested changes
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Initial implementation
* Support transposing inputs
* Revise unidirectional broadcasting and unknown dimensions
* Revise gemm
* Add testcase
* Rename some variables
* Update SharingWork.md
* Change from the use of Value* to Value
* Insert deallocation
* Initialize the output matrix and fix wrong computation
* Add end-to-end testcases
* Edit lowering tests
* Change attribute names
* Use emplace_back for SmallVector
* Use the new way of getting attributes
* Revise the use of attributes
* Check the bias's shape
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
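For reference on the Gemm work above: ONNX Gemm computes Y = alpha * A' * B' + beta * C, where A' and B' are A and B optionally transposed via transA/transB and C is unidirectionally broadcast to the M x N result. The sketch below is a plain scalar-loop version of the 2-D case with the bias already at full size; transpose handling and broadcasting are omitted, and it is not the Krnl lowering itself.

```cpp
#include <cstdio>
#include <vector>

// Y[i][j] = alpha * sum_k A[i][k] * B[k][j] + beta * C[i][j]
// (transA/transB and unidirectional broadcast of C omitted for brevity).
std::vector<std::vector<float>> gemm(const std::vector<std::vector<float>> &A,
                                     const std::vector<std::vector<float>> &B,
                                     const std::vector<std::vector<float>> &C,
                                     float alpha, float beta) {
  size_t M = A.size(), K = A[0].size(), N = B[0].size();
  std::vector<std::vector<float>> Y(M, std::vector<float>(N, 0.0f));
  for (size_t i = 0; i < M; ++i)
    for (size_t j = 0; j < N; ++j) {
      float acc = 0.0f;
      for (size_t k = 0; k < K; ++k)
        acc += A[i][k] * B[k][j];
      Y[i][j] = alpha * acc + beta * C[i][j];
    }
  return Y;
}

int main() {
  auto Y = gemm({{1, 2}}, {{3}, {4}}, {{1}}, 1.0f, 1.0f);
  std::printf("%f\n", Y[0][0]); // 1*3 + 2*4 + 1 = 12
  return 0;
}
```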
* Infer shape for Unsqueeze
* Lower Unsqueeze
* Revise
* Turn off backend tests
* Compute tensorSize for static shape
* Compute tensorSize with unknown dims
* Edit tests
* Update the use of attributes
* Add e2e tests
* Use SmallVector
* Remove return
* Check whether the operand is ranked or not
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Initial lowering of KrnlSqrtOp
* Fix errors and add a testcase
* typos
* Add the MLIR example
* Restore doc/doc_check/CMakeLists.txt
* Clean the code
* Edit comments
* Remove redundant parts
* Change the use of -> to .
* Add a test for f64
* Support ONNXSqrtOp
* Fix indentation
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* add attributes of Op into parameters
* fix rewrite rule for GemmOp with attributes
* use I64Attr instead of I32Attr and modify test cases for the changes in attributes
* add output name (prefixed with o_) to Op definition
* update shape inference for the new attributes
* Support Softplus and Softsign operations
* Add the default shape inference for the transposition operation.
* Fix conflict with master
* Fix conflict with master branch
* Add test for softplus and softsign in test/backend/test.py
* Re-enable Reciprocal tests.
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
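For the Softplus and Softsign support above, the ONNX definitions are simple element-wise formulas, Softplus(x) = ln(1 + exp(x)) and Softsign(x) = x / (1 + |x|). A minimal scalar sketch:

```cpp
#include <cmath>
#include <cstdio>

// ONNX element-wise definitions:
//   Softplus(x) = ln(1 + exp(x))
//   Softsign(x) = x / (1 + |x|)
float softplus(float x) { return std::log(1.0f + std::exp(x)); }
float softsign(float x) { return x / (1.0f + std::fabs(x)); }

int main() {
  std::printf("softplus(0) = %f\n", softplus(0.0f)); // ln(2) ~= 0.693
  std::printf("softsign(3) = %f\n", softsign(3.0f)); // 0.75
  return 0;
}
```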
* change how attributes are read in, using variant
* Use backported variant.
* Reduce code duplication.
* 1. Make array attribute parsing more clear.
2. int -> int64_t.
* Fix how array attributes are imported.
* Fix clang-tidy warnings.
* Nit: fix clang-tidy warnings.
* Fix MaxPool node construction.
* Fix call to MaxPool.
* Comment out backend tests that fail.
* Add path to variant submodule to enable include file detection.
* Allow unused argument to avoid special casing generator.
* Address attribute-related e2e test failures for HardSigmoid, Elu, LeakyRelu, Selu, and Softmax
Co-authored-by: chentong319 <chentong@us.ibm.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Rebase
* Use max normalization
* Handle axis
* Add tests
* Update SharingWork.md
* Remove redundant spaces
* Format code
* Rebase
* Change from the use of Value* to Value
* Add end-to-end tests
Co-authored-by: Tian Jin <tjingrant@gmail.com>
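On the max normalization mentioned in the softmax commits above: subtracting the maximum along the softmax axis before exponentiating avoids overflow in exp without changing the result, since softmax(x) = exp(x - max(x)) / sum(exp(x - max(x))). A standalone 1-D sketch; the actual lowering works along a configurable axis:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// softmax(x)_i = exp(x_i - max(x)) / sum_j exp(x_j - max(x))
std::vector<float> softmax1D(const std::vector<float> &x) {
  float maxVal = *std::max_element(x.begin(), x.end());
  std::vector<float> y(x.size());
  float sum = 0.0f;
  for (size_t i = 0; i < x.size(); ++i) {
    y[i] = std::exp(x[i] - maxVal); // shifted exponent stays finite
    sum += y[i];
  }
  for (float &v : y)
    v /= sum;
  return y;
}

int main() {
  for (float v : softmax1D({1000.0f, 1001.0f, 1002.0f}))
    std::printf("%f ", v); // finite values, no overflow
  std::printf("\n");
  return 0;
}
```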
* Sync with latest MLIR.
* Enable ONNX backend tests as a means to test ONNF lowering end-to-end.
* Install ONNX using quiet mode.
* Remove debug comments.
* Install ONNX from third_party/onnx.
* Check python version and fix pip command for installing ONNX.
* Using --user install option to prevent permission denied.
* Remove unused imports.
* Try using stock ONNX pip package as there are more tests in them.
* Pip got stuck building wheels, try sudo.
* Use verbose install to debug.
* Invalidate cache to build LLVM tools.
* Fix mlir installation script location.
* Debug to locate ONNF.
* Sanity check.
* Check out ONNF code first.
* Use verbose LIT output.
* 1. Update documentation to always use verbose LIT.
2. Update krnl ops to reflect new affine map attribute syntax.
* See if conda exists
* Install ONNX by manually cloning the repo.
* Install cmake first.
* Using sudo privilege when installing.
* Limit build parallelism.
* Limit parallelism.
* Larger memory.
* Install onnx package with pip.
* Build MLIR tools.
* Invalidate cache.
* Compile model.so with -fPIC.
* Remove module dump to get concise debug output.
* Print command before executing.
* Use quiet install mode to reduce logging.
* Use -relocation-model=pic to generate position independent code.
* 1. Remove MAKEFLAGS because now buildbot has enough memory.
2. Run DocCheck as a last step.
* Add verbose mode for backend test.
* When dumping to LLVM bitcode, do not dump module IR, but print a message indicating that bitcode has been written to disk.
* Do not pass MakeFlags to CMake.
* Add more explanation for possible reasons of failing to identify tests.
* wip, commit before merging with upstream
* organize API, return wrapped output
* enable onnx backend test
* undo unintentional commit
* fix krnl ops tablegen
* format krnl ops
* reorder fillDynMemRefWithMemRef to be after fillPtrToMemRefWithDynMemRef, better comments
* more onnx backend tests
* ensure that test names refer to existing tests
* improve code readability by shortening type names
* nit
* restore unintentional changes
* more nits
* fix ; -> :
* split runtime implementation into header and body file, add support for data types
* comment on the onnx backend test
* make the comments read better
* do not dump when lowering