* Add shape inference and names
- Add shape inference for PRelu
- Fix shape inference for group conv for ConvTranspose
- Add input and output names for graphs (functions)
- Add support for (u)int8 tensor attributes
* Fix format issues
* Revert formatting for gen_onnx_mlir.py
* Pads can have ArrayAttr and DenseElementsAttr so support both
* NumInputs is the number of graph inputs that don't have initializers (see the counting sketch below)
* Add test for 2D batchnorm
* Fix typo in define_loops in new 2d BN test
* Change 'name' to 'onnx_node_name'
* Fix Batchnorm for 2D I/O and add lowering test
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
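The NumInputs note above refers to ONNX graphs in which some declared inputs are backed by initializers (weights) rather than supplied by the caller. Below is a minimal standalone C++ sketch of that counting rule; the function and variable names are illustrative, not the importer's actual code.

```cpp
#include <algorithm>
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Count the graph inputs that must be provided at run time, i.e. the declared
// inputs that are not shadowed by an initializer of the same name.
static size_t countRuntimeInputs(const std::vector<std::string> &graphInputs,
                                 const std::vector<std::string> &initializers) {
  std::set<std::string> initNames(initializers.begin(), initializers.end());
  return std::count_if(graphInputs.begin(), graphInputs.end(),
      [&](const std::string &name) { return initNames.count(name) == 0; });
}

int main() {
  // "data" is a real input; "weight" and "bias" are baked-in initializers.
  std::vector<std::string> inputs = {"data", "weight", "bias"};
  std::vector<std::string> inits = {"weight", "bias"};
  std::cout << countRuntimeInputs(inputs, inits) << "\n"; // prints 1
}
```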
* Rewriting rule
* Fix formulas
* Reuse op results
* Const propagation for Div and Sqrt (see the folding sketch below)
* Explicitly use ONNXConstantOp
* Minor revise
* Const propagation for unsqueeze
* Do const propagation once all tensors have inferred shapes
* LIT tests for fusion
* Add LIT tests for constant propagation on Div, Sqrt, and Unsqueeze
* Missing dash
Co-authored-by: Tian Jin <tjingrant@gmail.com>
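As a rough illustration of the Div/Sqrt constant propagation above, the sketch below folds the two ops element-wise over flat float buffers once both operands are known constants. It is plain C++, not the ONNXConstantOp-based rewrite itself, and it deliberately ignores broadcasting and attribute handling.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Element-wise folding of Sqrt and Div over flat float buffers, standing in
// for folding two constant tensors of identical shape.
static std::vector<float> foldSqrt(const std::vector<float> &x) {
  std::vector<float> out(x.size());
  for (size_t i = 0; i < x.size(); ++i)
    out[i] = std::sqrt(x[i]);
  return out;
}

static std::vector<float> foldDiv(const std::vector<float> &a,
                                  const std::vector<float> &b) {
  std::vector<float> out(a.size());
  for (size_t i = 0; i < a.size(); ++i)
    out[i] = a[i] / b[i];
  return out;
}

int main() {
  std::vector<float> x = {1.0f, 4.0f, 9.0f};
  for (float v : foldDiv(x, foldSqrt(x)))
    std::cout << v << " "; // prints: 1 2 3
  std::cout << "\n";
}
```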
* move scalerop to decompose
* change clang format
* change clang format
* add shape inference for scaler op
* fixing generated onnxop
* generate onnx.md
* regenerate onnx.md and onnxop.td.inc using onnx 1.6
* Add shape inference for scaler op
* add benefit for scaler decompose and simplify scaler shape inference
* add scaler decompose benefit num and simplify shape inference
* add cast builder
* cast rewrite only for float
* add cast op same type rewrite rule
* working on cast lowering
* cast lowering working
* add cast lowering
* fix format
* Delete OpBuildTable.inc
* complete requested changes
Co-authored-by: chentong319 <chentong@us.ibm.com>
* Update LLVM commit ID to include the new modeling of LLVM type in MLIR
* Fix commit id discrepancy
* Update README.md
* Update MLIR version
* Force rebuild prereq dockers and see what happens.
* Use LLVM commit ID that corresponds to MLIR News, 13th edition (8/7/2020)
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Make mempooling more robust.
* Fix.
* Update MainUtils.cpp
Additional canonicalization is not required anymore.
* move scalerop to decompose
* change clang format
* change clang format
* add shape inference for scaler op
* fixing generated onnxop
* generate onnx.md
* add benefit for scaler decompose and simplify scaler shape inference
* cast rewrite only for float
* add cast op same type rewrite rule
* working on cast lowering
* cast lowering working
* correct onnx version
* update onnx md
* add test for tensor<10xf64>
* Add shape inference for Ops used by BERT
* Erf
* Pow
* ReduceMean
* Dropout
* Expand
https://github.com/onnx/onnx/blob/master/docs/Operators.md#expand
Deduce the value of the shape operand by looking at the producer of the operand.
Currently supported producers are: onnx.Constant and onnx.Shape (see the broadcasting sketch below).
* Add corresponding tests for each op.
* Sort the list of ops with shape inference in gen_onnx_mlir.py in alphabetic order for clarity.
* Restart CI
Co-authored-by: Tian Jin <tjingrant@gmail.com>
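For the Expand case above, once the value of the shape operand has been deduced, the output shape follows the multidirectional (NumPy-style) broadcasting rule from the linked ONNX spec. The C++ sketch below shows that shape computation only; it is not the shape-inference code itself.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <utility>
#include <vector>

// Multidirectional broadcasting of two shapes, right-aligned: matching dims
// must be equal or one of them must be 1, and the result takes the larger
// extent. Expand's output shape follows this rule once the value of its
// shape operand is known.
static std::vector<int64_t> broadcastShape(std::vector<int64_t> a,
                                           std::vector<int64_t> b) {
  if (a.size() < b.size())
    std::swap(a, b); // make `a` the longer shape
  std::vector<int64_t> out(a);
  size_t offset = a.size() - b.size();
  for (size_t i = 0; i < b.size(); ++i) {
    int64_t x = a[offset + i], y = b[i];
    if (x != y && x != 1 && y != 1)
      throw std::runtime_error("shapes are not broadcastable");
    out[offset + i] = std::max(x, y);
  }
  return out;
}

int main() {
  // Input of shape 3x1 expanded with shape operand [2, 1, 6] -> 2x3x6.
  for (int64_t d : broadcastShape({3, 1}, {2, 1, 6}))
    std::cout << d << " "; // prints: 2 3 6
  std::cout << "\n";
}
```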
* base implementation
* add example
* change table gen
* docs
* small change for review
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Detect llvm-project commit change in utils/clone-mlir.sh and rebuild llvm-project for zLinux Jenkins build bot
* Fix MainUtils.cpp compilation on Windows
* Fix clang-format
* Update comments on Windows not to delete the constant pack object file
* move scalerop to decompose
* change clang format
* change clang format
* add shape inference for scaler op
* fixing generated onnxop
* generate onnx.md
* add benefit for scaler decompose and simplify scaler shape inference
* cast rewrite only for float
* add cast op same type rewrite rule
* fix format
Co-authored-by: chentong319 <chentong@us.ibm.com>
* Define krnl.permute op (see the loop-interchange sketch below).
* Support krnl.permute operation.
* Properly remove loop references.
* Re-push, Github was down.
* Need to debug interpretOp error.
* Fix lowering bug by erasing ops after full krnl IR interpretation is done, and clean up & comment code.
* Introduce permute, unroll operations.
* More debug.
* Remove std::set.
* krnl.terminate fails to be converted.
* Pass all tests; still need to add legal ops as part of the conversion target.
* Change test format to new permute spec.
* Bug fix for nested iterate op lowering.
* Simplify error reporting.
* Fix compilation error.
* Increase comments coverage.
* Remove unnecessary imports.
* Re-trigger Jenkins
* Add permute/unroll tests.
* Retrigger Jenkins
* Using a non-trivial example.
* Add more complex example/test case.
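For readers unfamiliar with the new op, krnl.permute expresses a loop interchange over krnl loop references. The sketch below spells out the same transformation on plain C++ loops; it is purely illustrative and does not use the krnl dialect.

```cpp
#include <cstdio>

// Interchanging the i and j loops changes the traversal order (row-major to
// column-major here) without changing which iterations run.
constexpr int N = 3, M = 4;

static void original(int a[N][M]) {
  for (int i = 0; i < N; ++i)   // outer loop over rows
    for (int j = 0; j < M; ++j) // inner loop over columns
      a[i][j] += 1;
}

static void permuted(int a[N][M]) {
  for (int j = 0; j < M; ++j)   // loops interchanged: columns outermost
    for (int i = 0; i < N; ++i)
      a[i][j] += 1;
}

int main() {
  int a[N][M] = {}, b[N][M] = {};
  original(a);
  permuted(b);
  // Both variants perform the same updates, only in a different order.
  std::printf("%d %d\n", a[2][3], b[2][3]); // prints: 1 1
}
```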
* move scalerop to decompose
* change clang format
* change clang format
* add shape inference for scaler op
* fixing generated onnxop
* generate onnx.md
* Add shape inference for scaler op
* add benefit for scaler decompose and simplify scaler shape inference
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Add support for moving dynamic alloca instructions to top of functions.
* Fix memory pooling tests (see the pooling sketch below).
* Various fixes.
* Fix lit tests.
* More test fixes.
* Reformat.
* Reformat some more.
* Fix issue with TestConv and split-input-file.
* Use smart pointers.
* Remove redundant pointer.
* Reformat.
* Add initMap description.
* Clean up tests.
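The memory-pooling commits above replace many per-tensor allocations with offsets into one backing buffer. Below is a minimal bump-pointer sketch of that idea in plain C++; the class and method names are hypothetical and do not correspond to the Krnl memory-pool ops.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// A single backing buffer from which intermediate tensors receive offsets,
// instead of one allocation per tensor. Alignment is fixed at 16 bytes for
// the sake of the example.
class MemoryPool {
public:
  // Reserve `bytes` and return the offset of the reservation in the pool.
  size_t reserve(size_t bytes) {
    size_t offset = size_;
    size_ += (bytes + 15) & ~size_t(15); // round up to 16-byte alignment
    return offset;
  }
  // Allocate the pool once, after all reservations are known.
  void finalize() { storage_.resize(size_); }
  uint8_t *at(size_t offset) { return storage_.data() + offset; }
  size_t size() const { return size_; }

private:
  size_t size_ = 0;
  std::vector<uint8_t> storage_;
};

int main() {
  MemoryPool pool;
  size_t a = pool.reserve(100); // first intermediate tensor
  size_t b = pool.reserve(64);  // second intermediate tensor
  pool.finalize();
  std::cout << a << " " << b << " " << pool.size() << "\n"; // prints: 0 112 176
}
```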
* Detect llvm-project commit change in utils/clone-mlir.sh and rebuild llvm-project for zLinux Jenkins build bot
* Add --EmitJNI target (tested working with mnist and resnet50)
- MainUtils
* first shot at refactoring compileModuleToSharedLibrary
* add setExecPath call to allow resolving the runtime directory from the onnx-mlir executable path when ONNX_MLIR_RUNTIME_DIR is not set. This allows tests to run without having to install onnx-mlir or explicitly set ONNX_MLIR_RUNTIME_DIR (see the sketch below)
- RtMemRef
* add getDataSize for C (equivalent of size() for C++).
* fix setStrides bug (setting sizes, not strides)
- TestConv
* _main_graph-*.so files were filling up /tmp. Change to use a fixed shared library in the build directory
* Fix clang-format-lint complaints
* - getRuntimeDir checks lib64
- install targets for javaruntime and jniruntime
- remove ONNX_MLIR_LD_PRELOAD_onnx-mlir and ONNX_MLIR_LD_PRELOAD_onnx-mlir-opt
* See what happens when `kExecPath` decl is dropped.
Co-authored-by: Tian Jin <tjingrant@gmail.com>
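The setExecPath/getRuntimeDir commits above describe a fallback: use ONNX_MLIR_RUNTIME_DIR when it is set, otherwise derive the runtime directory from the executable's location. The C++ sketch below shows that lookup; resolveRuntimeDir and the "../lib" layout are illustrative assumptions, not the project's actual helper.

```cpp
#include <cstdlib>
#include <iostream>
#include <string>

// Prefer the ONNX_MLIR_RUNTIME_DIR environment variable, otherwise derive a
// runtime directory from the executable path handed in by the caller.
static std::string resolveRuntimeDir(const std::string &execPath) {
  if (const char *env = std::getenv("ONNX_MLIR_RUNTIME_DIR"))
    return env;
  // Strip the executable name, then point at a sibling lib directory.
  std::string::size_type slash = execPath.find_last_of('/');
  std::string execDir =
      (slash == std::string::npos) ? std::string(".") : execPath.substr(0, slash);
  return execDir + "/../lib";
}

int main(int argc, char **argv) {
  (void)argc;
  std::cout << resolveRuntimeDir(argv[0]) << "\n";
}
```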
* changes for mypipeline.onnx
* format
* rm MLOpBuildTable.inc
* copy string without free
* fix the memory issue
* restore change for STRING
* format
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Remove optimize_loops/return_loops op in elementwise ops lowering and fix tests in onnx_lowering.mlir.
* Fix all tests.
* Remove all occurrences of def_loops/return_loops.
* Fix test.
* Fix comments for defineLoops & emitKrnlLoopsAndIterationForOperand function.
* Remove emitOptimizedLoops.
* Allow not specifying optimizedLoops when creating KrnlIterateOperandPack.
* Fix style.
* Make BuildKernelLoop helper not emit optimize/return_loop operations & retire emitKrnlLoopsAndIterationForOperand by replacing it with BuildKernelLoop.
* DefineLoops -> DefineLoopsEx, remove redundant emitKrnlLoopsAndIterationForOperand function.
* BuildKrnlLoop API name update.
* Tweak comments.
* Remove unused withEmptyOptimization flag.
* Better comment for BuildKrnlLoop.
* Fully remove krnl.return_loops/optimize_loops op.
* Trigger Windows Build
* Bump windows ci python version.
* Move to more recent LLVM ID (May 15)
* clang-format
* Bump cache version up
* Update readme
* Fix doc check
* Move to a newer commit id
* Update LoopToStandard -> SCFToStandard
* Change MLIRSideEffects to MLIRSideEffectInterfaces
* Add AffineScope trait to KrnlIterateOp
* [ElementWise] Load/Store op to AffineLoad/AffineStore op
* [Gemm, MatMul, Reduction, Softmax] Load/Store op to AffineLoad/AffineStore op
* [Concat] Load/Store op to AffineLoad/AffineStore op
* [Pad, PadConstantValuePad, Reshape, Transpose] Load/Store op to AffineLoad/AffineStore op
* [LSTM] Load/Store op to AffineLoad/AffineStore op
* [Conv, Norm, Pooling] Load/Store op to AffineLoad/AffineStore op
* Add affine-loop-fusion pass
* Use Load/Store for scalar
* Use Load/Store for scalar
* Fix lit tests
* Unknown dimensions for broadcasting ops
* Affine Load/Store for scalar memref
* clang-format
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Lower Squeeze op to Krnl dialect
* Emit tensor size as a single constant; add a lit test for unknown dimensions
* Code style
* Special case where the input is only used by this squeeze op
* Remove squeeze-in-place optimization
* Update ConvertONNXToKrnl.cpp
Tweak to re-run tests.
* Trigger buildbot re-run.
* Re-run CI
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Detect llvm-project commit change in utils/clone-mlir.sh and rebuild llvm-project for zLinux Jenkins build bot
* Compile libcruntime.a object with -fPIC to avoid segfault when embedded into model.so
* Enable unit tests on zLinux
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Support encoding data type information as part of the DMR struct (see the descriptor sketch below).
* Support full range of np types.
* Report error when encountering unsupported type.
* Add getRank API method.
* Add missing API declarations.
* DynMemRef -> RtMemRef
* Format code.
* Missed DynMemRef -> RtMemRef conversion.
* More comments for RMR, and rename variable names from dmr -> rmr.
* DynMemRef -> RtMemRef.
* Format code.
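The RtMemRef commits above add a data-type tag, getRank, and getDataSize to the runtime tensor descriptor. The sketch below is a toy descriptor showing how those queries relate to the stored sizes; it is not the project's RtMemRef layout or API.

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

// A toy runtime tensor descriptor: the element type is carried as an enum
// tag, and the total element count is derived from the per-dimension sizes.
enum class DType { F32, F64, I8, U8, I32, I64 };

struct TensorDescriptor {
  void *data = nullptr;
  DType dtype = DType::F32;
  std::vector<int64_t> sizes;
  std::vector<int64_t> strides;

  int64_t getRank() const { return static_cast<int64_t>(sizes.size()); }
  // Number of elements, i.e. the product of all dimension sizes.
  int64_t getDataSize() const {
    return std::accumulate(sizes.begin(), sizes.end(), int64_t{1},
                           std::multiplies<int64_t>());
  }
};

int main() {
  TensorDescriptor t;
  t.sizes = {2, 3, 4};
  t.strides = {12, 4, 1}; // row-major strides, set explicitly (not from sizes)
  std::cout << t.getRank() << " " << t.getDataSize() << "\n"; // prints: 3 24
}
```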
* Support krnl.block printing/parsing.
* Checkpoint, PoC working.
* Implement krnl.block operation.
* Make tuple -> make pair.
* Bug fix: whitelist krnl.iterate op while lowering.
* Add return loop op lowering.
* Bug fix.
* Allow using loop refs more than once if they are used by krnl.iterate op.
* More comments and include lit test.
* Make krnl.block definition more restrictive.
* Splitting tests creates separate modules and makes affine_map matching more verbose; prefer not splitting since the test cases are small.
* Use verbose mode for LIT test on Z.
* Use verbose build to diagnose.
* Missing libraries linking when building in shared mode.
* Fix whole-archive linkage.
* Try preloading affinetransforms.
* Try putting AffineTransforms into LD_LIBRARY_PATH.
* Fix python syntax error.
* No need to link with whole-archive libs, as they are pre-loaded.
* Do not preload any library.
* Link with whole-archive libs.
* Explicitly shared linkage in CMake.
* Fix CMake syntax error.
* Restore test.py
* Update z13.sh
* Update z13.sh
* Provide krnl.block operation description (a loop-tiling sketch follows).
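krnl.block splits one loop into a tile (block) loop and an intra-tile loop. The sketch below shows the equivalent splitting on a plain C++ loop, with a bound check for sizes that are not a multiple of the tile size; it is illustrative only and does not use the krnl dialect.

```cpp
#include <cstdio>

// One loop over i becomes an outer loop over tiles of size B and an inner
// loop over the elements of each tile.
constexpr int N = 10, B = 4;

int main() {
  int visited = 0;
  for (int ib = 0; ib < N; ib += B)            // block (tile) loop
    for (int i = ib; i < ib + B && i < N; ++i) // intra-block loop with tail check
      ++visited;
  std::printf("%d\n", visited); // prints 10: every iteration still runs exactly once
}
```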
* string type from tensorflow
* simplify type
* parser and printer
* gen StringType for tablegen
* onnx to onnx-mlir type
* add namespace
* allow all integer type
* dialect document
* add test case
* format
* more precise type for ONNXOp
* format
* enable the failed test
* update comment
* update onnx.md
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Add more passes to onnx-mlir-opt.
* Clean-up.
* Explicit pass registration.
* Remove whole-archive linking, replace with regular linking.
* Remove whole-archive linkage related scripts.
* No need to preload library, simply expose them through LD_LIBRARY_PATH.
* Use OMLibs to record all onnx-mlir libs.
* Add OMResultTypeInferenceOpInterface lib to OMLibs.
* nit.
* No need to expose libs through LD_LIBRARY_PATH.
* Fix missing onnx header file issue.
* Define OMLibs before Tool subdirectory is imported.
* Define OMLibs at parent scope.
* Specify dependency of MainUtils on OMLibs early.
* Set OMLibs both at current & parent scope.
* Add comment about what future pass implementation should do.