* PoC works.
* MNIST works.
* Clean up.
* Fix test.
* Make Linux work.
* Use consistent symbol name.
* Fix variable name.
* Fix array addr access.
* Bug fix.
* Bug fix.
* install before running e2e tests.
* Fix build config.
* Use sudo when installing.
* Make embeddedDataLoader position independent.
* Enable ResNet50.
* Format code.
* Format MainUtil.
* Try not using sudo to install.
* Supply runtime dir via environment variable.
* Dump problematic operation.
* Dump entire function.
* Debug.
* Dump input.
* Dump constant op.
* Debug.
* Debug.
* Debug.
* Print to stderr.
* take care of endianness.
* Use endianness-aware execution session.
* Fix ZLinux error.
* Include warning when desired output endianness can't be deduced.
* Remove debug code.
* Remove debug code in shape inference.
* Support binary-decoder for testing constants packing.
* Support filename, move-to-file, elision-threshold configurations in constant packing pass for easy testing.
* Add lit test, fix lit test type mismatch.
* Add more consts packing tests.
* Ensure intermediate files are properly cleaned up.
* No need for constant elimination.
* Link with threading libraries.
* Remove debug code.
* Format code.
* More tests.
* test nit.
* Remove debug code.
* Reduce hard-coded constants.
* Use temporary and unique working directory for hosting model parameters.
* Test if it works.
* Try to find objcopy.
* Rename symbols using objcopy.
* Move sanitized name to linux section.
* Use verbose mode for debugging.
* Disambiguate pass constructor.
* Fix symbol name.
* Use Command API to build and execute commands.
* Move linux to use Command API.
* Fix reset args.
* Execute redefine sym.
* Format code.
* Do not use verbose mode for CircleCI.
* Remove debug code.
* Prettify code, add comments.
* getSegmentData -> getEmbeddedConstPool
* vector -> std::vector.
* Make sure we properly clean up intermediate files.
* Fix test cases.
* Add runtime directory.
* Trigger rebuild.
* [Merge with master] fix debug script.
* Disable affine fusion pass for now.
* Support generic fallback const packing mechanism.
* Remove debug code.
* Handle the case where objcopy is not available.
* Fix Windows missing types.
* Support int64.
* Copy packed constant to a local directory for non-Linux/Mac platforms.
* Nit: remove debug code, refactor const pack preprocessing out as a separate function.
* Cannot make preprocessConstPack a standalone function because the file removers are stack-allocated and are destroyed prematurely when the function's stack frame is popped, deleting intermediate files too early.
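A minimal sketch of the lifetime issue described above, assuming the cleanup objects are RAII helpers along the lines of llvm::FileRemover (which deletes its file in the destructor); names and the filename are illustrative:

```cpp
#include "llvm/Support/FileUtilities.h"

void preprocessConstPack(/*...*/) {
  // Stack-allocated remover guarding the packed-constants file.
  llvm::FileRemover packedRemover("params.bin");
  // ... generate and register params.bin for later compilation stages ...
}   // Destructor runs here: params.bin is deleted as soon as this helper
    // returns, even though later stages (llc, ld) still need the file.
```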
* Don't require executable filename.
* Import ONNX data types directly.
* Fix LIT test.
* Bug fix, use moved string value.
* Remove redundant filenames.
* Fix CMake script.
* Embed endianness information as a symbol, and check during runtime.
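A minimal, self-contained sketch of the runtime check described above, assuming the packed endianness is exposed to the loader as a boolean; the actual embedded symbol name and layout in onnx-mlir may differ:

```cpp
#include <cstdint>

static bool hostIsLittleEndian() {
  const uint16_t probe = 1;
  return *reinterpret_cast<const uint8_t *>(&probe) == 1;
}

// 'packedIsLittleEndian' stands in for the value recorded by the embedded
// endianness symbol at constant-packing time.
bool packedConstantsAreUsable(bool packedIsLittleEndian) {
  return packedIsLittleEndian == hostIsLittleEndian();
}
```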
* More comments, update lit tests.
* Fix lit test on BE machine.
* Copyright notices.
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Replace internal malloc with memory pool and getref instruction.
* Lower krnl.getref to LLVM.
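Conceptually, the memory pool plus krnl.getref replaces one allocation per intermediate tensor with a single buffer and per-tensor offsets. A hedged C++ analogy of that idea (not onnx-mlir's actual API; sizes and offsets are made up):

```cpp
#include <cstdlib>

int main() {
  const std::size_t poolSize = 1024;  // assumed total size of all intermediates
  char *pool = static_cast<char *>(std::malloc(poolSize));

  // Each "getref" becomes base + byte offset, viewed at the element type.
  float *tensorA = reinterpret_cast<float *>(pool + 0);
  float *tensorB = reinterpret_cast<float *>(pool + 256);

  tensorA[0] = 1.0f;
  tensorB[0] = tensorA[0] + 1.0f;

  std::free(pool);  // one deallocation for the whole pool
  return 0;
}
```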
* Fix formatting issues.
* Add tests.
* Add missing dependency.
* Improve LLVM lowering.
* Add test to show getref is generic.
* 1. Add shape inference for the following ops:
- Atan
- Tan
- Sin
- Cast
- ConvTranspose
- Flatten
- DynamicQuantizeLinear
- QuantizeLinear
- DequantizeLinear
- ConvInteger
2. Import attributes for generic nodes
3. Fixes for cases where .cast<> should be .isa<> (ONNXConcat::inferShapes)
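For item 3, a short sketch of the distinction (using the pre-LLVM-16 member casting style onnx-mlir used at the time): cast<> asserts on mismatch, isa<> only tests.

```cpp
#include "mlir/IR/BuiltinTypes.h"  // RankedTensorType (StandardTypes.h on older MLIR)
#include "mlir/IR/Value.h"

bool hasRankedTensorType(mlir::Value operand) {
  mlir::Type ty = operand.getType();
  if (!ty.isa<mlir::RankedTensorType>())              // safe query
    return false;
  auto tensorTy = ty.cast<mlir::RankedTensorType>();  // cast is now guaranteed
  return tensorTy.getRank() >= 0;
  // Calling ty.cast<mlir::RankedTensorType>() directly on a non-tensor type
  // would assert, which is what the ONNXConcat::inferShapes fix avoids.
}
```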
* Fix formatting issues
* Address comments:
- SmallVector<> * -> SmallVectorImpl<> & (see the sketch after this list)
- switch-case -> helper function
- Inside helper function, preserve signedness
- add TODOs
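The SmallVector change above follows the standard LLVM idiom sketched here: a helper that takes SmallVectorImpl<T>& accepts a SmallVector of any inline size, and a reference (unlike a pointer) cannot be null.

```cpp
#include "llvm/ADT/SmallVector.h"
#include <cstdint>

static void collectDims(llvm::SmallVectorImpl<int64_t> &dims) {
  dims.push_back(1);
  dims.push_back(224);
}

void caller() {
  llvm::SmallVector<int64_t, 4> dims;  // any inline size N works
  collectDims(dims);
}
```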
* Can't use signed integers yet in convertONNXTypeToMLIRType, add TODO
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Detect llvm-project commit change in utils/clone-mlir.sh and rebuild llvm-project
for zLinux Jenkins build bot
* Since many headers are generated and included indirectly through
other headers, there are often missing dependencies that break
parallel build. So we add header targets for KrnlOps, ONNXOps, and
MLONNXOps, and add explicit dependencies for all the relevant headers.
* fix copy-and-paste bug
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* initial const prop attempt
* added support for broadcast ops
* added all binary broadcast ops into custom builders with precise types
* added test example
* working
* format
* fixed per suggestion by Tung, started working on unary ops
* added subtraction and neg the right way, and added elementwise mul too
* formatting changes
* format
* format
* added instructions to add new optimizations
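As a rough illustration of what the constant-propagation patterns above do, here is a hedged sketch, not the actual onnx-mlir pass: the ONNX op and attribute accessors are assumed, ONNX dialect headers are omitted, and only the same-shape float case is shown, whereas the real patterns also handle broadcasting.

```cpp
#include "mlir/IR/PatternMatch.h"
#include "llvm/ADT/STLExtras.h"
// ONNXAddOp / ONNXConstantOp come from the onnx-mlir ONNX dialect (omitted).

struct ConstPropAddPattern : public mlir::OpRewritePattern<ONNXAddOp> {
  using OpRewritePattern::OpRewritePattern;

  mlir::LogicalResult matchAndRewrite(
      ONNXAddOp addOp, mlir::PatternRewriter &rewriter) const override {
    // Both inputs must come from dense constants of identical type.
    auto lhs = addOp.A().getDefiningOp<ONNXConstantOp>();   // assumed accessor
    auto rhs = addOp.B().getDefiningOp<ONNXConstantOp>();   // assumed accessor
    if (!lhs || !rhs)
      return mlir::failure();
    auto lhsAttr = lhs.valueAttr().dyn_cast_or_null<mlir::DenseElementsAttr>();
    auto rhsAttr = rhs.valueAttr().dyn_cast_or_null<mlir::DenseElementsAttr>();
    if (!lhsAttr || !rhsAttr || lhsAttr.getType() != rhsAttr.getType())
      return mlir::failure();

    // Fold element by element and materialize the result as a new constant.
    llvm::SmallVector<llvm::APFloat, 8> folded;
    for (auto pair : llvm::zip(lhsAttr.getValues<llvm::APFloat>(),
                               rhsAttr.getValues<llvm::APFloat>()))
      folded.push_back(std::get<0>(pair) + std::get<1>(pair));
    auto resultAttr = mlir::DenseElementsAttr::get(lhsAttr.getType(), folded);
    rewriter.replaceOpWithNewOp<ONNXConstantOp>(   // assumed builder signature
        addOp, addOp.getResult().getType(),
        /*sparse_value=*/mlir::Attribute(), resultAttr);
    return mlir::success();
  }
};
```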
* Call llc, ld from within onnx-mlir.
* Rename EmitLLVMBC -> EmitLib, reorder header files
* Edit comment.
* Checkpoint, debug.py works.
* Automatically generate inputs in debug.py.
* Use float.
* initial support for rapidcheck tests.
* Convolution test case works.
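A minimal RapidCheck sketch of the kind of property test introduced here; the real driver generates convolution parameters, compiles a model, and compares outputs, while this standalone example only shows the rc::check plumbing:

```cpp
#include <rapidcheck.h>
#include <algorithm>
#include <vector>

int main() {
  rc::check("reversing a vector twice restores the original",
            [](std::vector<int> data) {
              std::vector<int> copy = data;
              std::reverse(copy.begin(), copy.end());
              std::reverse(copy.begin(), copy.end());
              RC_ASSERT(copy == data);
            });
  return 0;
}
```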
* Format code.
* Link library with MainUtils.
* Fix CMake script error.
* Fast implementation of array assertion, more detailed error analysis.
* More utility for DynMemRef.
* Fix linking issue.
* Uncomment unit test.
* Refactor to separate C++/Python ExecutionSession, enable unit test.
* format code.
* Verbose build.
* Enable PIC option for ExecutionSession.
* Fix cmake error.
* Build all targets.
* Fix doc to build all targets.
* Clean up.
* Clean up, debug.
* Use type alias consistently.
* Move definitions to DynMemRef.cpp.
* include algorithm.
* pyruntime -> PyRuntime
* Format code.
* Free memory.
* Add comments.
* Copyright notice.
* Improve stylistic consistency.
* Add comment.
* Revert irrelevant changes.
* Disambiguate.
* Refactor the test case generator out of the test case implementation; implement an example exhaustive test driver.
* Add documentation for testing.
* Add type inference for CastOp
* Share type translation between op builder and onnx importer
* clang-format
* Format emitted code
* Remove unnecessary dependencies
* removed warnings: missing return, dangling else
* fixed errors, made sure to return false in all shape inference failures
* shape inference uses LogicalResult as return value
* format fixed
* format error
* additional error correction
* handle errors properly for all former emitError sites, using either emitError, assert, or llvm_unreachable
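A minimal sketch of the convention adopted above: shape inference returns LogicalResult and reports failures through emitError() rather than asserting; the op shape and wording are illustrative, not onnx-mlir's exact code.

```cpp
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Operation.h"
#include "mlir/Support/LogicalResult.h"

// Illustrative only; 'op' stands in for any single-operand, single-result op.
mlir::LogicalResult inferShapesFor(mlir::Operation *op) {
  auto inTy = op->getOperand(0).getType().dyn_cast<mlir::RankedTensorType>();
  if (!inTy)
    return op->emitError("expected a ranked tensor type for the input");
  op->getResult(0).setType(inTy);  // result takes the input's type/shape
  return mlir::success();
}
```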
* help added
* fixes
* edit of doc
* doc edit
* removed warnings: missing return, dangling else
* fixed errors, made sure to return false in all shape inference failures
* shape inference uses LogicalResult as return value
* format fixed
* format error
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* fix type inference for ConstantOp and MaxPoolSingleOut
* modify interface
* use OpInterface
* change name to OpInterface
* Builder dependence
* Update CMakeLists.txt
* Update CMakeLists.txt
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Call llc, ld from within onnx-mlir.
* Rename EmitLLVMBC -> EmitLib, reorder header files
* Checkpoint, debug.py works.
* Automatically generate inputs in debug.py.
* Use float.
* Fix merge conflict, remove RapidCheck from this patch.
* Remove submodule rapidcheck properly.
* Reformat code.
* More comments.
* Add documentation.
* Add documentation to navigation.
* Account for the fact that some initializers may also appear as input.
* Move to more recent LLVM ID (May 15)
* clang-format
* Bump cache version up
* Update readme
* Fix doc check
* Move to a newer commit id
* Update LoopToStandard -> SCFToStandard
* Change MLIRSideEffects to MLIRSideEffectInterfaces
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Change extensions of intermediate file.
* enable promote attr for pad
* use optional arguments for pad
* shape inference for Pad
* Lowering Pad
* format file
* use DenseTensor for the attribute
* use Pad in ONNXRewrite
* fix the merge conflict
* fix the attr given to constantOp
* handle ONNXConstantOp in attribute promotion
* Fix bug when AttributePromotion is called more than once
* update ONNXOps.td.inc with correct version of onnx
* update onnx.md
* responses to review
* fix the build error
* change the implementation of Pad
* delete commented out code
* clang format
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Use AffineMap
* Shared AffineMap
* AffineMap for Conv/Pooling
* Create helper files
* Remove changes for Relu
* Remove redundant includes
* Use AffineMap for AveragePool's shape inference (see the sketch after this list)
* Add MLIR tests for unknown dimension case
* Extract a method AffineMapIntConstant
* Comment style and include path
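The AffineMap used for Conv/Pooling shape inference above encodes the usual output-size formula, output = floor((input + pads - (kernel - 1) * dilation - 1) / stride) + 1. A hedged sketch of how such a map can be built; the helper name and symbol ordering are illustrative, not onnx-mlir's exact code:

```cpp
#include "mlir/IR/AffineExpr.h"
#include "mlir/IR/AffineMap.h"
#include "mlir/IR/Builders.h"

mlir::AffineMap buildPoolOutputDimMap(mlir::OpBuilder &builder) {
  // d0 = input spatial size; s0..s3 = kernel, total padding, dilation, stride.
  mlir::AffineExpr in  = builder.getAffineDimExpr(0);
  mlir::AffineExpr k   = builder.getAffineSymbolExpr(0);
  mlir::AffineExpr pad = builder.getAffineSymbolExpr(1);
  mlir::AffineExpr dil = builder.getAffineSymbolExpr(2);
  mlir::AffineExpr str = builder.getAffineSymbolExpr(3);
  mlir::AffineExpr out = (in + pad - dil * (k - 1) - 1).floorDiv(str) + 1;
  return mlir::AffineMap::get(/*dimCount=*/1, /*symbolCount=*/4, {out},
                              builder.getContext());
}
```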
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Specialize the op lowering logic for elementwise operations
* Fix clang-format error.
* Update tests for LSTM since LSTM uses element-wise ops
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Support importing tensor proto.
* Use signed types, use template.
* Restore using signless integer types because Reshape faults with signed integer types; use traits to declare/define attribute types.
* Simplify attribute importing logic.
* Use existing code to import TensorProto.
* nit.
* Specify in linking stage, where runtime shared library is located.
* Cite & make comment a full sentence.
* Fix error communicating runtime dir to ld.
* Run clang-format on all source code.
* Add Clang-Format Github Action.
* Apply patch produced by Clang-Format Bot.
* nit.
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Support dilations and enable e2e tests
* Fix allocating memory for dynamic shape
* Edit comments
* Do dilation by computing an offset from kernel index
* Correct dilation formula, add an example of out-of-bound, and add a test for dilation
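A worked 1-D sketch of the dilation handling described above, in plain C++ independent of the generated Krnl code: each kernel tap k reads input position out*stride + k*dilation - pad, and taps that land outside the input are the out-of-bound case and are skipped.

```cpp
#include <vector>

std::vector<float> conv1d(const std::vector<float> &input,
                          const std::vector<float> &weight,
                          int stride, int dilation, int padBegin, int outSize) {
  std::vector<float> output(outSize, 0.0f);
  for (int out = 0; out < outSize; ++out) {
    for (int k = 0; k < static_cast<int>(weight.size()); ++k) {
      // Offset into the input computed from the kernel index.
      int in = out * stride + k * dilation - padBegin;
      if (in < 0 || in >= static_cast<int>(input.size()))
        continue;  // out-of-bound tap contributed by padding: skip it
      output[out] += input[in] * weight[k];
    }
  }
  return output;
}
```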
* Import optional outputs as NoneType
* Shape inference for ONNXLSTM
* Edit ONNXLSTM::inferShape()
* Shape inference for ONNXLSTMOp
* Create a common function for inferring shape for RNN ops
* CheckInsertDeallocation for a specific result
* Allocate memory for LSTM
* First round of lowering
* Allocate memory for hidden and cell states
* Test with custom Tanh
* Fix an error in Ct's formula
* Add E2E tests
* Return outputs
* Refactor the code
* Enable E2E tests
* Support reverse and bidirectional directions
* Minor revision
* Return all intermediate hidden states
* Call existing activation functions
* Structs for activation functions
* Call existing activations in ONNX
* Minor revision
* Compare strings ignoring case
* Use memreftype of rank 0 for calling activation functions
* Fix getActivationPack()
* Revise the code
* Add one MLIR test
* Add MLIR tests for reverse and bidirectional modes
* Make the order of emitting instructions deterministic
* Use OperandAdaptor instead of directly using an operand index
* Use literal assignments
* Change some variable names
* Use literal assignments
* Use literal assignments
* Format the code
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Implement shape inference for SplitOp
* Change spitOpt to SplitAttribute and check the axis range before updating the axis attribute
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* target_link_libraries(OMElideKrnlGlobalConstants ...) adds duplicate
../lib/libOMKrnlOps.a ../lib/libOMONNXOps.a at the end of the link line for
onnx-mlir and breaks the shared library build
* Fix .buildbot/z13.sh to prepare for zLinux Jenkins build bot
Co-authored-by: Gong Su <gong_su@hotmail.com>
* Make onnx-mlir work with latest mlir.
* Bump CircleCI cache version.
* Fix missing passes in onnx-mlir-opt.
* Fix backend test failure.
* Fix doc.
* Fix doc and exclude the generated _site directory from DocCheck.
* Remove debug code.
* Do not hard code target name, on Mac shared lib can end with .dylib.
* FunctionPass -> PassWrapper.
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Output of non-value constants. Write full source to file.
* Fix e2e tests.
* Output constant free and full code in separate files.
* Emit separate files.
* Move file output management to utils.
* Elide the values of global krnl constants.
* Add dual file output for Basic flag.
* Add tests.
* Add passes to cmake file.
* Move to more recent LLVM commit ID
* Update LLVM cache version from V9 to V10
* Update to latest LLVM commit id from master, roll back conditions in util scripts
* Update CircleCI LLVM cache tag to ensure CI rebuilds with the latest LLVM commit id
* Update README.md to have matching LLVM commit id
* Update doc/Dialects/onnx.md
* Enable onnx-mlir for VS builds on Windows
* Update README to include lit
* Update build command for Windows to include config
* Update build instructions, add cmd files for windows, enable single source of truth for MLIR commit-id (clone-mlir.sh)
* Add Visual Studio workload info
* Update ONNX op definitions
* Revert onnx submodule back to previous commit, disable warnings in CMakeLists to work around build issues with MSVC
* Update environment for path to PDcurses on Windows
* Fix directory strings to be compatible with Windows or Linux style slashes
* Fix install-mlir.sh so it works when sourced
* Ensure README and cmd files match and have correct paths
* Properly quote ONNX_MLIR_SRC_DIR
* Address PR feedback: Use llvm_unreachable to indicate failure to convert attribute proto to name/value pair
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Create a template for pooling and add support for AveragePool
* Edit MLIR tests for MaxPool according to the new lowering template for pooling
* Dealloc temporary variables
* Support count_include_pad for AveragePool
* Add MLIR tests for AveragePool lowering
* Make changes according to Tian's comments
* Push AffineMap as upper bound for KrnlIterateOp
* Test AffineMap to use in Pooling
* Replace the old implementation with a new one using AffineMap
* Fix the computation when dilations are non-unit
* Clean up the old code
* Remove AveragePool from Canonicalization pass
* Fix computing the end indices of a filter window
* Refactor the code for pooling
* Revise pushAffineMapBound
* Add MLIR tests
* Remove unused functions
* Fix check-onnx-backend build on x86 Linux. (#91)
* Add the split marker to test files (#90)
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: gongsu832 <gong_su@hotmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Specify each lib only once; allow llvm build in shared libs mode.
* Remove debug code.
* For library targets, retain dependency information using add_dependencies, but do not link using target_link_libraries.
* Do not set LD_PRELOAD by default.
Co-authored-by: Gong Su <gongsu@us.ibm.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* implement shape inference for concat
* better checking of axis being concatenated: constant values only
* lowering of Concat with lit and backend tests
* fixes
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Bug fix, ensure krnl.iterate can lower in the degenerate case.
* Fix parser issue with degenerate iterate op.
* Add a test case.
* Remove dead code.
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Setup documentation server, move doc files from /doc to /docs as per Github Pages convention.
* Include deleted files in patch.
* /doc -> /docs
* /doc -> /docs
* Update documentation on importing ONNX spec into ONNX Dialect; provide documentation on how to add new documentation pages.
* Generate ONNX Dialect TableGen Inc files & operation importing inc files when necessary.
* Ensure TableGen inc file is generated before TableGen is invoked.
* Nit: capitalize builder -> Builder.
* Use file-same-as-stdout directive to ensure generated files are always up-to-date in our codebase.
* Use more up-to-date version of ONNXOps.td.inc.
* Do not automatically invoke gen_doc.py.
* Support dry run in gen_doc.py.
* Fix case.
* Remove debug code.
* Add test for new doc_check primitive.
* Add documentation for file-same-as-stdout.
* Provide more comments.
* Add DocCheck to DocCheck README.
* Nit: format CMake script.
* Update comments.
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
* Move to more recent LLVM commit ID
* Update LLVM cache version from V9 to V10
* Update to latest LLVM commit id from master, roll back conditions in util scripts
* Update CircleCI LLVM cache tag to ensure CI rebuilds with the latest LLVM commit id
* Update README.md to have matching LLVM commit id
* Update doc/Dialects/onnx.md
* 1. Break down CMake scripts into smaller per-directory libraries.
2. Move some transformations and interfaces to the right folder.
3. Fix minor merge failure of the patch renaming files to use LLVM convention.
* Link OMBuilder with OMONNXOps.
* 1. Update the src location of generated ONNX dialect definition.
2. Link OMONNXRewrite with OMONNXOps.
* Fix path to tablegen for add_onnx_mlir_dialect_doc.
* Update build script for onnx_mlir_transform.
* 1. Remove commented-out code.
2. onnx_mlir_attribute_promotion -> OMAttributePromotion.
* Name tablegen generated files with LLVM convention.
* Nit: reorder libraries to link against.
* Nit: Link against MLIR first.
* Support attribute promotion.
* Simplify op interface name.
* 1. Add more comments to Attribute Promotion Pass.
2. Move Promotable Const Operand Interface to src/interface, and link against it.
* Complete NFC change onnx -> onnx-mlir.
* Move attribute_promotion pass to src/transform.
* Nit: reword comment.
* Support Attribute Promotion in gen_doc.py.
* Add test.
* Update ONNX doc.
* Add negative test.
* Rename onnxop.inc -> onnx_ops.td.inc.
* Include onnx_ops.td.inc.
* Nit: better comments.
* Prettify CMake.
* Remove original attribute_promotion code, improve comments.
* Append '_op_interface' to op interface decl/defs.
* Namespace cmake targets using onnx_mlir_ prefix.
* Use updated header name.
* Use new body file name.
* Fix dependency.
* Use new CMake target name.
* Make attribute promotion self-contained by removing redundant constant operations inside the pass execution.
* Remove canonicalization pass.
* Increase comments.
* Use stricter checks.
* Add one more test case.
* Remove %arg1 as it's never used.
* Support dilations and enable e2e tests
* Fix allocating memory for dynamic shape
* Edit comments
* Do dilation by computing an offset from kernel index
* Correct dilation formula, add an example of out-of-bound, and add a test for dilation
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Fix reshape when a dynamic shape is given.
* Fix default attributes for ConvNoBias.
* Fix comment.
* Resolve comment.
* Improve checks.
* Handle zero dim case.
* Add helper to fetch constants. Add test for dynamic reshape.
* Add test for zero.
* Use shortcut method for size.
* Support Pads for MaxPoolSingleOut
* Regenerate onnx.md to include the new op
* Edit comments
* Undo redundant parts that were unintentionally changed
* Move declarative rewriting rules into canonicalize to avoid creating a new op
* Reformat the rewriting rule pattern of MaxPoolSingleOut
* Put ONNXPadConstantValuePadOp's build method into a .cpp file instead of a tablegen file
* Use the same helper function as the one in inferShape for the ONNXPadConstantValuePadOp's build method
* Change function names and fix padding for the spatial dimensions
* Call shape-inference again after canonicalization to infer shape for newly added ops during canonicalization.
* Fix typos
* emitConstantOp with a given type
* Helper functions to create infinity constants
* Use new constant helper functions for MaxPoolSingleOut
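An assumed sketch of the infinity-constant helpers mentioned above: MaxPool seeds its running maximum with -inf of the element type, built from APFloat::getInf; the helper name is illustrative.

```cpp
#include "llvm/ADT/APFloat.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"

// Returns a FloatAttr holding -infinity for the given float element type.
mlir::FloatAttr getNegativeInfinityAttr(mlir::OpBuilder &builder,
                                        mlir::FloatType elemType) {
  llvm::APFloat negInf =
      llvm::APFloat::getInf(elemType.getFloatSemantics(), /*Negative=*/true);
  return builder.getFloatAttr(elemType, negInf);
}
```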
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Lower MaxPoolSingleOutOp to Krnl dialect
* Edit comments
* Update changes according to the new folder structure
* Add MLIR tests
* Support ceil_mode
* Merge the first two krnl loops into one krnl loop; remove attribute checks
* Dynamically allocate memory for the result if the result has unknown dimensions
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Remove rank constraints in gemm fusion
* Add an MLIR test
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>