* Update LLVM commit ID to include the new modeling of LLVM types in MLIR
* Fix commit id discrepancy
* Update README.md
* Update MLIR version
* Force rebuild of prereq Docker images and see what happens.
* Use LLVM commit ID that corresponds to MLIR News, 13th edition (8/7/2020)
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Make memory pooling more robust.
* Fix.
* Update MainUtils.cpp
Additional canonicalization is not required anymore.
* move scalerop to decompose
* change clang format
* change clang format
* add shape inference for scaler op
* fixing generated onnxop
* generate onnx.md
* add benefit for scaler decompose and simplify scaler shape inference
* cast rewrite only for float
* add cast op same type rewrite rule
* working on cast lowering
* cast lowering working
* correct onnx version
* update onnx md
* add test for tensor<10xf64>
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Add support for moving dynamic alloca instructions to top of functions.
* Fix memory pooling tests.
* Various fixes.
* Fix lit tests.
* More test fixes.
* Reformat.
* Reformat some more.
* Fix issue with TestConv and split-input-file.
* Use smart pointers.
* Remove redundant pointer.
* Reformat.
* Add initMap description.
* Clean up tests.
* Remove optimize_loops/return_loops op in elementwise ops lowering and fix tests in onnx_lowering.mlir.
* Fix all tests.
* Remove all occurrences of def_loops/return_loops.
* Fix test.
* Fix comments for defineLoops & emitKrnlLoopsAndIterationForOperand function.
* Remove emitOptimizedLoops.
* Allow not specifying optimizedLoops when creating KrnlIterateOperandPack.
* Fix style.
* Make BuildKernelLoop helper not emit optimize/return_loop operations & retire emitKrnlLoopsAndIterationForOperand by replacing it with BuildKernelLoop.
* DefineLoops -> DefineLoopsEx, remove redundant emitKrnlLoopsAndIterationForOperand function.
* BuildKrnlLoop API name update.
* Tweak comments.
* Remove unused withEmptyOptimization flag.
* Better comment for BuildKrnlLoop.
* Fully remove krnl.return_loops/optimize_loops op.
* Trigger Windows Build
* Bump windows ci python version.
* Move to more recent LLVM ID (May 15)
* clang-format
* Bump cache version up
* Update readme
* Fix doc check
* Move to a newer commit id
* Update LoopToStandard -> SCFToStandard
* Change MLIRSideEffects to MLIRSideEffectInterfaces
* Add AffineScope trait to KrnlIterateOp
* [ElementWise] Load/Store op to AffineLoad/AffineStore op
* [Gemm, MatMul, Reduction, Softmax] Load/Store op to AffineLoad/AffineStore op
* [Concat] Load/Store op to AffineLoad/AffineStore op
* [Pad, PadConstantValuePad, Reshape, Transpose] Load/Store op to AffineLoad/AffineStore op
* [LSTM] Load/Store op to AffineLoad/AffineStore op
* [Conv, Norm, Pooling] Load/Store op to AffineLoad/AffineStore op
* Add affine-loop-fusion pass
* Use Load/Store for scalar
* Use Load/Store for scalar
* Fix lit tests
* Unknown dimensions for broadcasting ops
* Affine Load/Store for scalar memref
* clang-format
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Lower Squeeze op to Krnl dialect
* Emit tensor size as a single constant; add a lit test for unknown dimensions
* Code style
* Special case where the input is only used by this squeeze op
* Remove squeeze-in-place optimization
* Update ConvertONNXToKrnl.cpp
Tweak to re-run tests.
* Trigger buildbot re-run.
* Re-run CI
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Explicit pass registration.
* Remove whole-archive linking, replace with regular linking.
* Remove whole-archive linkage related scripts.
* No need to preload libraries; simply expose them through LD_LIBRARY_PATH.
* Use OMLibs to record all onnx-mlir libs.
* Add OMResultTypeInferenceOpInterface lib to OMLibs.
* nit.
* No need to expose libs through LD_LIBRARY_PATH.
* Fix missing onnx header file issue.
* Define OMLibs before Tool subdirectory is imported.
* Define OMLibs at parent scope.
* Specify dependency of MainUtils on OMLibs early.
* Set OMLibs both at current & parent scope.
* Add comment about what future pass implementation should do.
* PoC works.
* MNIST works.
* Clean up.
* Fix test.
* Make Linux work.
* Use consistent symbol name.
* Fix variable name.
* Fix array addr access.
* Bug fix.
* Bug fix.
* install before running e2e tests.
* Fix build config.
* Use sudo when installing.
* Make embeddedDataLoader position independent.
* Enable ResNet50.
* Format code.
* Format MainUtil.
* Try not using sudo to install.
* Supply runtime dir via environment variable.
* Dump problematic operation.
* Dump entire function.
* Debug.
* Dump input.
* Dump constant op.
* Debug.
* Debug.
* Debug.
* Print to stderr.
* take care of endianness.
* Use endianness-aware execution session.
* Fix ZLinux error.
* Include warning when desired output endianness can't be deduced.
* Remove debug code.
* Remove debug code in shape inference.
* Support binary-decoder for testing constants packing.
* Support filename, move-to-file, elision-threshold configurations in constant packing pass for easy testing.
* Add lit test, fix lit test type mismatch.
* Add more consts packing tests.
* Ensure intermediate files are properly cleaned up.
* No need for constant elimination.
* Link with threading libraries.
* Remove debug code.
* Format code.
* More tests.
* test nit.
* Remove debug code.
* Reduce hard-coded constants.
* Use temporary and unique working directory for hosting model parameters.
* Test if it works.
* Try to find objcopy.
* Rename symbols using objcopy.
* Move sanitized name to linux section.
* Use verbose mode for debugging.
* Disambiguate pass constructor.
* Fix symbol name.
* Use Command API to build and execute commands.
* Move linux to use Command API.
* Fix reset args.
* Execute redefine sym.
* Format code.
* Do not use verbose mode for CircleCI.
* Remove debug code.
* Prettify code, add comments.
* getSegmentData -> getEmbeddedConstPool
* vector -> std::vector.
* Make sure we properly clean up intermediate files.
* Fix test cases.
* Add runtime directory.
* Trigger rebuild.
* [Merge with master] fix debug script.
* Disable affine fusion pass for now.
* Support generic fallback const packing mechanism.
* Remove debug code.
* Handle the case where objcopy is not available.
* Fix Windows missing types.
* Support int64.
* Copy packed constant to a local directory for non-Linux/Mac platforms.
* Nit: remove debug code, refactor const pack preprocessing out as a separate function.
* Cannot make preprocessConstPack a standalone function because file removers are stack-allocated and get deallocated prematurely when the function's stack frame is popped, deleting intermediate files too early.
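For context on the lifetime issue described above: llvm::FileRemover deletes its file in its destructor, so a remover declared inside a helper dies with that helper's stack frame. A minimal sketch of the pitfall; the helper and filename below are purely illustrative, not the actual onnx-mlir code:

```cpp
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/FileUtilities.h"

// Illustrative helper: the remover is destroyed when this function returns,
// so the packed-constant file is deleted before the caller can use it.
void preprocessConstPackSketch(llvm::StringRef packedFile) {
  llvm::FileRemover remover(packedFile); // RAII: deletes the file in its dtor
  // ... transform the packed-constant file ...
} // <-- file removed here, too early

// Keeping the remover in the caller's scope ties the file's lifetime to the
// whole compilation step rather than to a single helper call.
void compileDriverSketch() {
  llvm::FileRemover remover("params.bin"); // hypothetical filename
  // ... call helpers that read/write params.bin ...
} // file removed only once everything is done
```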
* Don't require executable filename.
* Import ONNX data types directly.
* Fix LIT test.
* Bug fix, use moved string value.
* Remove redundant filenames.
* Fix CMake script.
* Embed endianness information as a symbol, and check during runtime.
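One plausible reading of the commit above, sketched here: the compiler records the byte order used when packing constants, and the runtime compares it against the host's byte order before using the data. The symbol name and encoding below are hypothetical, not the actual onnx-mlir ones:

```cpp
#include <cstdint>
#include <cstring>
#include <stdexcept>

// Hypothetical symbol emitted next to the packed constants at compile time
// (1 = little endian, 0 = big endian); the real symbol in onnx-mlir may use
// a different name and encoding.
extern "C" const int8_t omConstPackIsLittleEndian;

// Detect the byte order of the machine running the model.
static bool hostIsLittleEndian() {
  const uint16_t probe = 1;
  uint8_t firstByte = 0;
  std::memcpy(&firstByte, &probe, 1);
  return firstByte == 1;
}

// Reject (or byte-swap) packed constants produced with the other byte order.
void checkConstPackEndianness() {
  if ((omConstPackIsLittleEndian != 0) != hostIsLittleEndian())
    throw std::runtime_error("packed constants have mismatched endianness");
}
```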
* More comments, update lit tests.
* Fix lit test on BE machine.
* Copyright notices.
* Detect llvm-project commit change in utils/clone-mlir.sh and rebuild llvm-project
for zLinux Jenkins build bot
* Since many headers are generated and included indirectly through
other headers, there are often missing dependencies that break
parallel build. So we add header targets for KrnlOps, ONNXOps, and
MLONNXOps, and add explicit dependencies for all the relevant headers.
* fix copy-and-paste bug
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Add type inference for CastOp
* Share type translation between op builder and onnx importer
* clang-format
* Format emitted code
* Remove unnecessary dependencies
* removed warnings for missing return and dangling else
* fixed errors, made sure to return false in all shape inference failures
* shape inference uses LogicalResult as return value
* format fixed
* format error
* additional error correction
* handle errors properly at all former emitError sites, using either emitError, assert, or llvm_unreachable
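As a rough illustration of the convention these commits adopt (shape inference returns mlir::LogicalResult and reports problems through emitError rather than asserting), here is a hedged sketch with a made-up helper, written against the MLIR headers of that period; actual onnx-mlir signatures may differ:

```cpp
#include "mlir/IR/Operation.h"
#include "mlir/IR/StandardTypes.h"
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

// Made-up unary shape inference: failures are reported via LogicalResult
// instead of returning false or calling assert.
LogicalResult inferUnaryShapeSketch(Operation *op) {
  auto inputType = op->getOperand(0).getType().dyn_cast<RankedTensorType>();
  if (!inputType)
    return op->emitError("shape inference requires a ranked input tensor");

  // Result keeps the input's shape and element type.
  op->getResult(0).setType(
      RankedTensorType::get(inputType.getShape(), inputType.getElementType()));
  return success();
}
```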
* help added
* fixes
* edit of doc
* doc edit
* removed warnings for missing return and dangling else
* fixed errors, made sure to return false in all shape inference failures
* shape inference uses LogicalResult as return value
* format fixed
* format error
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* enable promote attr for pad
* use optional arguments for pad
* shape inference for pad
* Lowering Pad
* format file
* use DenseTensor for the attribute
* use Pad in ONNXRewrite
* fix the merge conflict
* fix the attr given to constantOp
* handle ONNXConstantOp in attribute promotion
* Fix bug when AttributePromotion is called more than once
* update ONNXOps.td.inc with correct version of onnx
* update onnx.md
* responses to review
* fix the build error
* change the implementation of Pad
* delete commented out code
* clang format
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Use AffineMap
* Shared AffineMap
* AffineMap for Conv/Pooling
* Create helper files
* Remove changes for Relu
* Remove redundant includes
* Use AffineMap for AveragePool's shape inference
* Add MLIR tests for unknown dimension case
* Extract a method AffineMapIntConstant
* Comment style and include path
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Specialize the op lowering logic for elementwise operations
* Fix clang-format error.
* Update tests for LSTM since LSTM uses element-wise ops
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Run clang-format on all source code.
* Add Clang-Format Github Action.
* Apply patch produced by Clang-Format Bot.
* nit.
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Support dilations and enable e2e tests
* Fix allocating memory for dynamic shape
* Edit comments
* Do dilation by computing an offset from kernel index
* Correct the dilation formula, add an out-of-bounds example, and add a test for dilation
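The dilation handling mentioned above comes down to the standard ONNX convolution index arithmetic: for output index o, kernel index k, stride s, dilation d, and begin padding p, the input position read is o*s + k*d - p, which can land outside the input when padding is non-zero. A small sketch of that arithmetic (not the actual lowering code):

```cpp
#include <cstdint>

// Map an (output, kernel) index pair to the input index it reads, following
// the usual ONNX convolution/pooling convention:
//   in = out * stride + kernel * dilation - padBegin
int64_t dilatedInputIndex(int64_t out, int64_t kernel, int64_t stride,
                          int64_t dilation, int64_t padBegin) {
  return out * stride + kernel * dilation - padBegin;
}

// Out-of-bounds example: inputSize = 4, stride = 1, dilation = 2,
// padBegin = 1, out = 3, kernel = 2 gives 3*1 + 2*2 - 1 = 6 >= 4, so the
// lowering must guard the access (treating it as the padding value).
bool inBounds(int64_t idx, int64_t inputSize) {
  return idx >= 0 && idx < inputSize;
}
```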
* Import optional outputs as NoneType
* Shape inference for ONNXLSTM
* Edit ONNXLSTM::inferShape()
* Shape inference for ONNXLSTMOp
* Create a common function for inferring shape for RNN ops
* CheckInsertDeallocation for a specific result
* Allocate memory for LSTM
* First round of lowering
* Allocate memory for hidden and cell states
* Test with custom Tanh
* Fix an error in Ct's formula
* Add E2E tests
* Return outputs
* Refactor the code
* Enable E2E tests
* Support reverse and bidirectional directions
* Minor revision
* Return all intermediate hidden states
* Call existing activation functions
* Structs for activation functions
* Call existing activations in ONNX
* Minor revision
* Compare strings ignoring case
* Use memreftype of rank 0 for calling activation functions
* Fix getActivationPack()
* Revise the code
* Add one MLIR test
* Add MLIR tests for reverse and bidirectional modes
* Make the order of emitting instructions deterministic
* Use OperandAdaptor instead of directly using an operand index
* Use literal assignments
* Change some variable names
* Use literal assignments
* Use literal assignments
* Format the code
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Make onnx-mlir work with latest mlir.
* Bump CircleCI cache version.
* Fix missing passes in onnx-mlir-opt.
* Fix backend test failure.
* Fix doc.
* Fix doc and exclude the generated _site directory from DocCheck.
* Remove debug code.
* Do not hard code target name, on Mac shared lib can end with .dylib.
* FunctionPass -> PassWrapper.
* Reorganize main function.
* Follow review comments.
* Emit constants as globals in Krnl and LLVM dialects.
* Output of non-value constants. Write full source to file.
* Fix e2e tests.
* Output constant free and full code in separate files.
* Emit separate files.
* Move file output management to utils.
* Elide the values of global krnl constants.
* Add dual file output for Basic flag.
* Add tests.
* Add passes to cmake file.
* Move to more recent LLVM commit ID
* Update LLVM cache version from V9 to V10
* Update to latest LLVM commit id from master, roll back conditions in util scripts
* Update CircleCI LLVM cache tag to ensure CI rebuilds with the latest LLVM commit id
* Update README.md to have matching LLVM commit id
* Update doc/Dialects/onnx.md
* Enable onnx-mlir for VS builds on Windows
* Update README to include lit
* Update build command for Windows to include config
* Update build instructions, add cmd files for windows, enable single source of truth for MLIR commit-id (clone-mlir.sh)
* Add Visual Studio workload info
* Update ONNX op definitions
* Revert onnx submodule back to previous commit, disable warnings in CMakeLists to work around build issues with MSVC
* Update environment for path to PDcurses on Windows
* Fix directory strings to be compatible with Windows or Linux style slashes
* Fix install-mlir.sh so it works when sourced
* Ensure README and cmd files match and have correct paths
* Properly quote ONNX_MLIR_SRC_DIR
* Address PR feedback: Use llvm_unreachable to indicate failure to convert attribute proto to name/value pair
Co-authored-by: Tian Jin <tjingrant@gmail.com>
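For readers unfamiliar with the convention referenced in the last bullet above: llvm_unreachable marks a branch that should be impossible and aborts with the given message when it is hit (in builds with assertions). A hedged sketch with made-up attribute kinds, not the actual importer code:

```cpp
#include "llvm/Support/ErrorHandling.h"

#include <string>
#include <utility>

// Made-up attribute kinds; the real importer switches over ONNX
// AttributeProto types.
enum class AttrKind { Int, Float, String };

std::pair<std::string, std::string> toNameValueSketch(AttrKind kind) {
  switch (kind) {
  case AttrKind::Int:
    return {"kind", "int"};
  case AttrKind::Float:
    return {"kind", "float"};
  case AttrKind::String:
    return {"kind", "string"};
  }
  // Only reachable if AttrKind gains a case this switch does not handle.
  llvm_unreachable("unsupported attribute kind in name/value conversion");
}
```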
* Create a template for pooling and add support for AveragePool
* Edit MLIR tests for MaxPool according to the new lowering template for pooling
* Dealloc temporary variables
* Support count_include_pad for AveragePool
* Add MLIR tests for AveragePool lowering
* Make changes according to Tian's comments
* Push AffineMap as upper bound for KrnlIterateOp
* Test AffineMap to use in Pooling
* Replace the old implementation with a new one using AffineMap
* Fix the computation when dilations are non-unit
* Clean up the old code
* Remove AveragePool from Canonicalization pass
* Fix computing the end indices of a filter window
* Refactor the code for pooling
* Revise pushAffineMapBound
* Add MLIR tests
* Remove unused functions
* Fix check-onnx-backend build on x86 Linux. (#91)
* Add the split marker to test files (#90)
Co-authored-by: Tian Jin <tjingrant@gmail.com>
Co-authored-by: gongsu832 <gong_su@hotmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
* Specify each lib only once; allow llvm build in shared libs mode.
* Remove debug code.
* For library targets, retain dependency information using add_dependencies, but do not link using target_link_libraries.
* Do not set LD_PRELOAD by default.
Co-authored-by: Gong Su <gongsu@us.ibm.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* implement shape inference for concat
* better checking of axis being concatenated: constant values only
* lowering of Concat with lit and backend tests
* fixes
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
* Move to more recent LLVM commit ID
* Update LLVM cache version from V9 to V10
* Update to latest LLVM commit id from master, roll back conditions in util scripts
* Update CircleCI LLVM cache tag to ensure CI rebuilds with the latest LLVM commit id
* Update README.md to have matching LLVM commit id
* Update doc/Dialects/onnx.md
* 1. Break down CMake scripts into smaller per-directory libraries.
2. Move some transformations and interfaces to the right folder.
3. Fix minor merge failure of the patch renaming files to use LLVM convention.
* Link OMBuilder with OMONNXOps.
* 1. Update the src location of generated ONNX dialect definition.
2. Link OMONNXRewrite with OMONNXOps.
* Fix path to tablegen for add_onnx_mlir_dialect_doc.
* Update build script for onnx_mlir_transform.
* 1. Remove comment code.
2. onnx_mlir_attribute_promotion -> OMAttributePromotion.
* Name tablegen generated files with LLVM convention.
* Nit: reorder libraries to link against.
* Nit: Link against MLIR first.