Commit Graph

142 Commits

Author SHA1 Message Date
chentong319 2e08b2112c
String type (Ready for Review) (#182)
* string type from tensorflow

* simplify type

* parser and printer

* gen StringType for tablegen

* onnx to onnx-mlir type

* add namespace

* allow all integer types

* dialect document

* add test case

* format

* more precise type for ONNXOp

* format

* enable the failed test

* update comment

* update onnx.md

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-06-25 16:34:37 -04:00
chentong319 cc68f77d8d
Merge onnx ml into onnx (#176)
* merge onnx-ml into onnx

* delete onnx ml
2020-06-22 20:01:56 -04:00
Tian Jin f81f44662b
Remove whole archive linkage (#173)
* Explicit pass registration.

* Remove whole-archive linking, replace with regular linking.

* Remove whole-archive linkage related scripts.

* No need to preload library, simply expose them through LD_LIBRARY_PATH.

* Use OMLibs to record all onnx-mlir libs.

* Add OMResultTypeInferenceOpInterface lib to OMLibs.

* nit.

* No need to expose libs through LD_LIBRARY_PATH.

* Fix missing onnx header file issue.

* Define OMLibs before Tool subdirectory is imported.

* Define OMLibs at parent scope.

* Specify dependency of MainUtils on OMLibs early.

* Set OMLibs both at current & parent scope.

* Add comment about what future pass implementation should do.
2020-06-19 00:21:27 +08:00
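The switch above from whole-archive linking to explicit pass registration follows the usual MLIR pattern: each pass is registered under a flag name so tools can construct it without relying on static initializers being dragged in by the linker. A minimal sketch, assuming the mid-2020 MLIR API (PassWrapper/FunctionPass) and an invented pass name; this is not code from the PR:

```cpp
#include "mlir/Pass/Pass.h"

namespace {
// Hypothetical pass used only to illustrate explicit registration.
struct MyCleanupPass
    : public mlir::PassWrapper<MyCleanupPass, mlir::FunctionPass> {
  void runOnFunction() override {
    // Rewrite logic would go here.
  }
};
} // namespace

// Registering by flag name makes the pass visible to pass managers and
// command-line tools without any whole-archive linker tricks.
static mlir::PassRegistration<MyCleanupPass>
    registerMyCleanup("my-cleanup", "Illustrative cleanup pass");
```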
chentong319 1fc43fa181
support map and seq in tablegen (#159)
* support map and seq in tablegen

* register MLONNX for testing

* format

* remove the unwanted test

* add a test

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-06-18 21:49:40 +08:00
Alexandre Eichenberger 82d2caa542
constant folding for transpose of constant tensors (#171)
* added constant folding for transpose of constant tensors

* format

* responding to reviews
2020-06-17 10:42:06 -04:00
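The folding above replaces a transpose whose input is a constant tensor with a new constant computed at compile time. A minimal standalone sketch of the 2-D case, illustrative only and not the onnx-mlir rewrite itself:

```cpp
#include <cstddef>
#include <vector>

// Materialize the transpose of a constant row-major 2-D tensor at compile
// time; this is what the rewrite's replacement constant would contain.
std::vector<float> foldTranspose2D(const std::vector<float> &data,
                                   std::size_t rows, std::size_t cols) {
  std::vector<float> out(data.size());
  for (std::size_t i = 0; i < rows; ++i)
    for (std::size_t j = 0; j < cols; ++j)
      out[j * rows + i] = data[i * cols + j];
  return out;
}
```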
Alexandre Eichenberger 742e817722
Constprop2 (#167)
* initial const prop attempt

* added support for broadcast ops

* added all binary broadcast ops into custom builders with precise types

* added test example

* working

* format

* fixed suggestion by Tung, started working on unary

* added subtraction and neg the right way, and added elementwise mul too

* formatting changes

* format

* format

* added instructions to add new optimizations

* added propagation rules that always migrate constants toward the root of the expression, using assoc and commutativity

* format comment
2020-06-15 14:56:15 -04:00
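The propagation rule mentioned above rewrites, for example, (x + c1) + c2 into x + (c1 + c2) so that the two constants meet and can be folded. A tiny sketch of the idea on an invented expression type, not onnx-mlir's pattern code:

```cpp
#include <memory>

// Minimal expression node: either a constant or an addition.
struct Expr {
  bool isConst = false;
  double value = 0.0;             // valid when isConst
  std::shared_ptr<Expr> lhs, rhs; // valid for '+' nodes
};

std::shared_ptr<Expr> cst(double v) {
  auto e = std::make_shared<Expr>();
  e->isConst = true;
  e->value = v;
  return e;
}

std::shared_ptr<Expr> add(std::shared_ptr<Expr> l, std::shared_ptr<Expr> r) {
  auto e = std::make_shared<Expr>();
  e->lhs = std::move(l);
  e->rhs = std::move(r);
  return e;
}

// Rewrite (x + c1) + c2 into x + (c1 + c2), folding the constants together.
std::shared_ptr<Expr> pullConstantsTowardRoot(std::shared_ptr<Expr> e) {
  if (e->isConst || !e->lhs)
    return e;
  const bool lhsIsAdd = static_cast<bool>(e->lhs->rhs);
  if (e->rhs->isConst && lhsIsAdd && e->lhs->rhs->isConst)
    return add(e->lhs->lhs, cst(e->lhs->rhs->value + e->rhs->value));
  return e;
}
```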
Tian Jin e0ae583da0
Compiling Models with Large Constant Arrays (#146)
* PoC works.

* MNist works.

* Clean up.

* Fix test.

* Make Linux work.

* Use consistent symbol name.

* Fix variable name.

* Fix array addr access.

* Bug fix.

* Bug fix.

* install before running e2e tests.

* Fix build config.

* Use sudo when installing.

* Make embeddedDataLoader position independent.

* Enable ResNet50.

* Format code.

* Format MainUtil.

* Try not using sudo to install.

* Supply runtime dir via environment variable.

* Dump problematic operation.

* Dump entire function.

* Debug.

* Dump input.

* Dump constant op.

* Debug.

* Debug.

* Debug.

* Print to stderr.

* take care of endianness.

* Use endianness-aware execution session.

* Fix ZLinux error.

* Include warning when desired output endianness can't be deduced.

* Remove debug code.

* Remove debug code in shape inference.

* Support binary-decoder for testing constants packing.

* Support filename, move-to-file, elision-threshold configurations in constant packing pass for easy testing.

* Add lit test, fix lit test type mismatch.

* Add more consts packing tests.

* Ensure intermediate files are properly cleaned up.

* No need for constant elimination.

* Link with threading libraries.

* Remove debug code.

* Format code.

* More tests.

* test nit.

* Remove debug code.

* Reduce hard-coded constants.

* Use temporary and unique working directory for hosting model parameters.

* Test if it works.

* Try to find objcopy.

* Rename symbols using objcopy.

* Move sanitized name to linux section.

* Use verbose mode for debugging.

* Disambiguate pass constructor.

* Fix symbol name.

* Use Command API to build and execute commands.

* Move linux to use Command API.

* Fix reset args.

* Execute redefine sym.

* Format code.

* Do not use verbose mode for CircleCI.

* Remove debug code.

* Prettify code, add comments.

* getSegmentData -> getEmbeddedConstPool

* vector -> std::vector.

* Make sure we properly clean up intermediate files.

* Fix test cases.

* Add runtime directory.

* Trigger rebuild.

* [Merge with master] fix debug script.

* Disable affine fusion pass for now.

* Support generic fallback const packing mechanism.

* Remove debug code.

* Handle the case where objcopy is not available.

* Fix Windows missing types.

* Support int64.

* Copy packed constant to a local directory for non-Linux/Mac platforms.

* Nit: remove debug code, refactor const pack preprocessing out as a separate function.

* Cannot make preprocessConstPack a standalone function because file removers are stack-allocated; they are deallocated prematurely when the function stack is popped, deleting intermediate files too early.

* Don't require executable filename.

* Import ONNX data types directly.

* Fix LIT test.

* Bug fix, use moved string value.

* Remove redundant filenames.

* Fix CMake script.

* Embed endianness information as a symbol, and check during runtime.

* More comments, update lit tests.

* Fix lit test on BE machine.

* Copyright notices.
2020-06-12 10:27:05 +08:00
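Among the changes above, the compiled artifact records the endianness it was packed with and the runtime checks it against the host. A minimal host-side byte-order check, assuming nothing about the actual onnx-mlir symbol or interface:

```cpp
#include <cstdint>
#include <cstdio>

// Detect the host byte order at runtime by inspecting the low byte of a
// known 16-bit value.
bool isLittleEndian() {
  const std::uint16_t probe = 1;
  return *reinterpret_cast<const std::uint8_t *>(&probe) == 1;
}

int main() {
  std::printf("host is %s-endian\n", isLittleEndian() ? "little" : "big");
  return 0;
}
```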
Tung D. Le 8c4d527eea
Lower SplitOp to Krnl dialect (#155)
* Fix importing variadic output

* Lower splitop

* Support unknown dimension and add lit tests

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-06-11 10:57:20 +08:00
Gheorghe-Teodor Bercea 4ab96fbc6c
Add basic support for memory pool (#161)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Replace internal malloc with memory pool and getref instruction.

* Lower krnl.getref to LLVM.

* Fix formatting issues.

* Add tests.

* Add missing dependency.

* Improve LLVM lowering.

* Add test to show getref is generic.
2020-06-09 16:48:33 -04:00
Aman LaChapelle ca185002f2
Add shape inference for several ops (#163)
* 1. Add shape inference for the following ops:
 - Atan
 - Tan
 - Sin
 - Cast
 - ConvTranspose
 - Flatten
 - DynamicQuantizeLinear
 - QuantizeLinear
 - DequantizeLinear
 - ConvInteger

2. Import attributes for generic nodes
3. Fixes for cases where .cast<> should be .isa<> (ONNXConcat::inferShapes)

* Fix formatting issues

* Address comments:
 - SmallVector<> * -> SmallVectorImpl<> &
 - switch-case -> helper function
   - Inside helper function, preserve signed-ness
 - add TODOs

* Can't use signed integers yet in convertONNXTypeToMLIRType, add TODO

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-06-09 14:55:49 +08:00
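The .cast<> vs .isa<> fix noted above is the standard MLIR pattern: test (or dyn_cast) before committing to a cast, since cast<> on the wrong type asserts. A hedged sketch using the member-function API and header path as they existed in mid-2020; this is not code from the PR:

```cpp
#include "mlir/IR/StandardTypes.h" // RankedTensorType lived here in mid-2020

// Returns true only when the type is a ranked tensor with a fully static
// shape. Using dyn_cast (or isa followed by cast) avoids the assertion a
// bare cast<> would trigger on, e.g., an unranked tensor.
bool hasStaticTensorShape(mlir::Type ty) {
  if (auto ranked = ty.dyn_cast<mlir::RankedTensorType>())
    return ranked.hasStaticShape();
  return false;
}
```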
Alexandre Eichenberger e2af505746
Constprop (#162)
* initial const prop attempt

* added support for broadcast ops

* added all binary broadcast ops into custom builders with precise types

* added test example

* working

* format

* fixed suggestion by Tung, started working on unary

* added subtraction and neg the right way, and added elementwise mul too

* formatting changes

* format

* format

* added instructions to add new optimizations
2020-06-08 15:45:32 -04:00
Tung D. Le bb17fa965f
Add AffineScope trait to KrnlIterateOp and enable affine-loop-fusion pass (#140)
* Make KrnlIterate's IVs valid to AffineLoad/AffineStore

* [Unary elementwise op] Load/Store -> AffineLoad/AffineStore

* [Conv] Load/Store -> AffineLoad/AffineStore

* Add affine-loop-fusion pass

* typos

* Mistake when merging branch master

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-06-08 15:36:27 +08:00
Tian Jin cde1157d62
Rapid check test (#141)
* Call llc, ld from within onnx-mlir.

* Rename EmitLLVMBC -> EmitLib, reorder header files

* Edit comment.

* Checkpoint, debug.py works.

* Automatically generate inputs in debug.py.

* Use float.

* initial support for rapidcheck tests.

* Convolution test case works.

* Format code.

* Link library with MainUtils.

* Fix CMake script error.

* Fast implementation of array assertion, more detailed error analysis.

* More utility for DynMemRef.

* Fix linking issue.

* Uncomment unit test.

* Refactor to separate C++/Python ExecutionSession, enable unit test.

* format code.

* Verbose build.

* Enable PIC option for ExecutionSession.

* Fix cmake error.

* Build all targets.

* Fix doc to build all targets.

* Clean up.

* Clean up, debug.

* Use type alias consistently.

* Move definitions to DynMemRef.cpp.

* include algorithm.

* pyruntime -> PyRuntime

* Format code.

* Free memory.

* Add comments.

* Copyright notice.

* Improve stylistic consistency.

* Add comment.

* Revert irrelevant changes.

* Disambiguate.

* Refactor test case generator out of the test case implementation; implement an example exhaustive test driver.

* Add documentation for testing.
2020-06-08 10:18:55 +08:00
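The rapidcheck-based tests introduced above are property-based: rapidcheck generates random inputs and checks an assertion for each. A self-contained example using rapidcheck's public API, unrelated to the convolution driver in the PR:

```cpp
#include <rapidcheck.h>

#include <algorithm>
#include <vector>

int main() {
  // Property: reversing a vector twice yields the original sequence.
  rc::check("double reversal is the identity",
            [](const std::vector<int> &xs) {
              std::vector<int> ys = xs;
              std::reverse(ys.begin(), ys.end());
              std::reverse(ys.begin(), ys.end());
              RC_ASSERT(ys == xs);
            });
  return 0;
}
```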
Alexandre Eichenberger 20dd6544aa
conv bug fix (#154) 2020-05-28 07:34:58 +08:00
chentong319 6099efd91b
Express some basic features of an Operation in TableGen file (#103)
* change operation definition

* change importer

* default type inference

* file format

* generate types for input/output

* generate the mapping for operation output type

* remove debug message for gen_doc.py

* update the dialect doc

* add support Complex

* format

* update document

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-21 22:03:16 -04:00
Tian Jin 4cdc0873ca
Call llc, ld from within onnx-mlir. (#127)
* Call llc, ld from within onnx-mlir.

* Rename EmitLLVMBC -> EmitLib, reorder header files

* Edit comment.

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-19 10:15:48 +08:00
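Driving llc and the linker from inside onnx-mlir replaces the manual post-processing step: emit LLVM bitcode, compile it to an object file, then link a shared library. A rough sketch of the two external invocations, with tool names and flags that are illustrative rather than the exact commands the driver builds:

```cpp
#include <cstdlib>
#include <string>

// Compile emitted LLVM bitcode into a shared library by shelling out to
// llc and the system C compiler/linker (illustrative; the real driver
// locates and invokes the tools itself rather than via std::system).
int compileBitcodeToSharedLib(const std::string &bitcodeFile,
                              const std::string &outputLib) {
  const std::string llcCmd =
      "llc -filetype=obj -relocation-model=pic " + bitcodeFile + " -o model.o";
  const std::string linkCmd = "cc -shared model.o -o " + outputLib;
  if (int rc = std::system(llcCmd.c_str()))
    return rc;
  return std::system(linkCmd.c_str());
}
```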
chentong319 23bea50404
Implement PadOp based on attribute promotion (#71)
* enable promote attr for pad

* use optional arguments for pad

* shape inference for pad

* Lowering Pad

* format file

* use DenseTensor for the attribute

* use Pad in ONNXRewrite

* fix the merge conflict

* fix the attr given to constantOp

* handle ONNXConstantOp in attribute promotion

* Fix bug when AttributePromotion is called more than once

* update ONNXOps.td.inc with correct version of onnx

* update onnx.md

* responses to review

* fix the build error

* change the implementation of Pad

* delete commented out code

* clang format

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-05-15 13:19:28 +08:00
Tung D. Le 4d8b855c17
Unify codes in shape inference and conversion (#98)
* Use AffineMap

* Shared AffineMap

* AffineMap for Conv/Pooling

* Create helper files

* Remove changes for Relu

* Remove redundant includes

* Use AffineMap for AveragePool's shape inference

* Add MLIR tests for unknown dimension case

* Extract a method AffineMapIntConstant

* Comment stylist and include path

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-14 17:31:33 +08:00
Tung D. Le d65a6e72dd
Specialize the op lowering logic for element-wise operations (#118)
* Specialize the op lowering logic for elementwise operations

* Fix clang-format error.

* Update tests for LSTM since LSTM uses element-wise ops

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-05-14 13:00:15 +08:00
Tian Jin adc08fb93e
Specify in linking stage, where runtime shared library is located. (#120)
* Specify in linking stage, where runtime shared library is located.

* Cite & make comment a full sentence.

* Fix error communicating runtime dir to ld.
2020-05-14 09:04:16 +08:00
Tung D. Le 24343177b8
Lower LSTMOp to Krnl dialect (#73)
* Support dilations and enable e2e tests

* Fix allocating memory for dynamic shape

* Edit comments

* Do dilation by computing an offset from kernel index

* Correct dilation formula, add an out-of-bounds example, and add a test for dilation

* Import optional outputs as NoneType

* Shape inference for ONNXLSTM

* Edit ONNXLSTM::inferShape()

* Shape inference for ONNXLSTMOp

* Create a common function for inferring shape for RNN ops

* CheckInsertDeallocation for a specific result

* Allocate memory for LSTM

* First round of lowering

* Allocate memory for hidden and cell states

* Test with custom Tanh

* Fix an error in Ct's formula

* Add E2E tests

* Return outputs

* Refactor the code

* Enable E2E tests

* Support reverse and bidirectional directions

* Minor revision

* Return all intermediate hidden states

* Call existing activation functions

* Structs for activation functions

* Call existing activations in ONNX

* Minor revision

* Compare strings ignoring case

* Use memreftype of rank 0 for calling activation functions

* Fix getActivationPack()

* Revise the code

* Add one MLIR test

* Add MLIR tests for reverse and bidirectional modes

* Make the order of emitting instructions deterministic

* Use OperandAdaptor instead of directly use an operand index

* Use literal assignments

* Change some variable names

* Use literal assignments

* Use literal assignments

* Format the code

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-13 21:08:06 +08:00
Tung D. Le 9a874007ce
Implement shape inference for SplitOp (#95)
* Implement shape inference for SplitOp

* Change spitOpt to SplitAttribute and check the axis range before updating the axis attribute

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-13 18:07:27 +08:00
Gheorghe-Teodor Bercea f5f336db08
Fix running backend tests triggered by preloading cruntime dynamic library (#104)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Fix preloading of runtime shared library for backend tests.

* Update library name.

* Only add libstdc++ library if it exists.
2020-05-04 08:37:58 -04:00
Tung D. Le 64ed03295f
Fix converting type for functions with no argument (#96)
* Fix converting type for functions with no argument

* Add two tests

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-04-27 11:01:51 -04:00
Tian Jin fad2ad7d03
Make onnx-mlir work with latest mlir. (#93)
* Make onnx-mlir work with latest mlir.

* Bump CircleCI cache version.

* Fix missing passes in onnx-mlir-opt.

* Fix backend test failure.

* Fix doc.

* Fix doc and exclude the generated _site directory from DocCheck.

* Remove debug code.

* Do not hard code target name, on Mac shared lib can end with .dylib.

* FunctionPass -> PassWrapper.
2020-04-27 17:03:56 +08:00
Gheorghe-Teodor Bercea 137ce767e6
Rework output to improve readability of intermediate MLIR code. (#87)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Output of non-value constants. Write full source to file.

* Fix e2e tests.

* Output constant free and full code in separate files.

* Emit separate files.

* Move file output management to utils.

* Elide the values of global krnl constants.

* Add dual file output for Basic flag.

* Add tests.

* Add passes to cmake file.
2020-04-24 16:15:36 -04:00
Byron Changuion c567781fa3
Add support for Windows using Visual Studio Compiler (#86)
* Move to more recent LLVM commit ID

* Update LLVM cache version from V9 to V10

* Update to latest LLVM commit id from master, roll back conditions in util scripts

* Update circlci LLVM cache tag to ensure ci updates builds with latest LLVM commit id

* Update README.md to have matching LLVM commit id

* Update doc/Dialects/onnx.md

* Enable onnx-mlir for VS builds on Windows

* Update README to include lit

* Update build command for Windows to include config

* Update build instructions, add cmd files for windows, enable single source of truth for MLIR commit-id (clone-mlir.sh)

* Add Visual Studio workload info

* Update ONNX op definitions

* Revert onnx submodule back to previous commit, disable warnings in CMakeLists to work around build issues with MSVC

* Update environment for path to PDcurses on Windows

* Fix directory strings to be compatible with Windows or Linux style slashes

* Fix install-mlir.sh so it works when sourced

* Ensure README and cmd files match and have correct paths

* Properly quote ONNX_MLIR_SRC_DIR

* Address PR feedback: Use llvm_unreachable to indicate failure to convert attribute proto to name/value pair

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-04-19 22:11:24 +08:00
Tung D. Le eac2297624
Lower MaxPooling and AveragePool to Krnl dialect using AffineMap (#38)
* Create a template for pooling and add support for AveragePool

* Edit MLIR tests for MaxPool according to the new lowering template for pooling

* Dealloc temporary variables

* Support count_include_pad for AveragePool

* Add MLIR tests for AveragePool lowering

* Make changes according to Tian's comments

* Push AffineMap as upper bound for KrnlIterateOp

* Test AffineMap to use in Pooling

* Replace the old implementation with a new one using AffineMap

* Fix the computation when dilations are non-unit

* Clean up the old code

* Remove AveragePool from Canonicalization pass

* Fix computing the end indices of a filter window

* Refactor the code for pooling

* Revise pushAffineMapBound

* Add MLIR tests

* Remove unused functions

* Fix check-onnx-backend build on x86 Linux. (#91)

* Add the split marker to test files (#90)

Co-authored-by: Tian Jin <tjingrant@gmail.com>

Co-authored-by: gongsu832 <gong_su@hotmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-04-19 21:39:34 +08:00
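The affine maps pushed as loop bounds above encode the usual pooling output-size arithmetic. The scalar form of that formula, in floor mode with total (begin plus end) padding, looks like this; an illustrative helper, not onnx-mlir code:

```cpp
#include <cstdint>

// Output extent of one pooled dimension:
//   floor((in + padTotal - ((kernel - 1) * dilation + 1)) / stride) + 1.
// Integer division matches floor here because the numerator is non-negative
// for valid configurations.
std::int64_t pooledDimSize(std::int64_t in, std::int64_t kernel,
                           std::int64_t padTotal, std::int64_t stride,
                           std::int64_t dilation) {
  const std::int64_t effectiveKernel = (kernel - 1) * dilation + 1;
  return (in + padTotal - effectiveKernel) / stride + 1;
}
```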
Tung D. Le e32f531546
Add the split marker to test files (#90)
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-04-16 15:17:27 +08:00
gongsu832 72de6eb004
Fix check-onnx-backend build on x86 Linux. (#91) 2020-04-16 14:38:52 +08:00
Tian Jin d06dbfefdd
Specify each lib only once; allow llvm build in shared libs mode. (#77)
* Specify each lib only once; allow llvm build in shared libs mode.

* Remove debug code.

* For library targets, retain dependency information using add_dependencies, but do not link using target_link_libraries.

* Do not set LD_PRELOAD by default.

Co-authored-by: Gong Su <gongsu@us.ibm.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-04-14 17:40:05 +08:00
Alexandre Eichenberger fa8962753c
Concat lower (#82)
* implement shape inference for concat

* better checking of axis being concatenated: constant values only

* lowering of Concat with lit and backend tests

* fixes

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-04-13 11:40:39 -04:00
Tian Jin caeaa390e2
Bug fix, ensure krnl.iterate can lower in the degenerate case. (#78)
* Bug fix, ensure krnl.iterate can lower in the degenerate case.

* Fix parser issue with degenerate iterate op.

* Add a test case.

* Remove dead code.

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-04-10 23:27:00 +08:00
Tung D. Le f4fefcf713
Re-add tanh lowering (#75)
* Re-add tanh lowering

* Make the emission deterministic
2020-04-09 14:22:36 +08:00
Alexandre Eichenberger f5bed72e13
implement shape inference for concat (#74)
* implement shape inference for concat

* better checking of axis being concatenated: constant values only
2020-04-07 16:13:41 -04:00
Gheorghe-Teodor Bercea f16e79d744
Emit constant tensors as global constants (#66)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Enable unique constant variable names.

* Emit alloca for local array. Add tests.

* Comment clean-up.

* Simplify MemRef construction.

* Fix output type.
2020-04-01 13:51:06 -04:00
Alexandre Eichenberger 844dcd8b1f
Name change for tests, to be check-onnx-(lit | backend) (#62) 2020-03-31 10:06:14 -04:00
Alexandre Eichenberger 653fa69102
Unify Conv implementation (#54)
* fixed readme for new git repo

* conv with bias as an optional input
2020-03-26 11:03:19 -04:00
Tian Jin 549af8f0b2
Support attribute promotion. (#34)
* Support attribute promotion.

* Simplify op interface name.

* 1. Add more comments to Attribute Promotion Pass.
2. Move Promotable Const Operand Interface to src/interface, and link against it.

* Complete NFC change onnx -> onnx-mlir.

* Move attribute_promotion pass to src/transform.

* Nit: reword comment.

* Support Attribute Promotion in gen_doc.py.

* Add test.

* Update ONNX doc.

* Add negative test.

* Rename onnxop.inc -> onnx_ops.td.inc.

* Include onnx_ops.td.inc.

* Nit: better comments.

* Prettify CMake.

* Remove original attribute_promotion code, improve comments.

* Append '_op_interface' to op interface decl/defs.

* Namespace cmake targets using onnx_mlir_ prefix.

* Use updated header name.

* Use new body file name.

* Fix dependency.

* Use new CMake target name.

* Make attribute promotion self-contained by removing redundant constant operations inside the pass execution.

* Remove canonicalization pass.

* Increase comments.

* Use stricter checks.

* Add one more test case.

* Remove %arg1 as it's never used.
2020-03-19 15:03:37 +08:00
Tung D. Le 2814ea3898
Support dilations and enable the remaining e2e tests for MaxPoolSingleOut (#31)
* Support dilations and enable e2e tests

* Fix allocating memory for dynamic shape

* Edit comments

* Do dilation by computing an offset from kernel index

* Correct dilation formula, add an out-of-bounds example, and add a test for dilation

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-18 09:55:50 -04:00
Tung D. Le 4763e8a8bc
Lower ONNXAbsOp to Krnl dialect and enable e2e tests for ONNXReduceL1 (#18)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-17 11:12:45 -04:00
Gheorghe-Teodor Bercea 1622b9f161
[NFC] Change ONNF based names to ONNX-MLIR (#32)
* Rename onnf to onnx-mlir.

* Change workspace name.
2020-03-17 09:16:33 -04:00
Tung D. Le d86591d61a
Import all initialized tensors as dense constants (#30)
* Import initialized tensor as dense attribute

* Import all initialized tensors as dense constants

* Remove unintentional code

* Fix value attribute format in shape inference tests of reshape

* Re-add rank check for reshape's shape inference

* Remove a redundant variable

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-16 11:17:28 -04:00
Gheorghe-Teodor Bercea c46880d5c6
Fix reshape output shape inference when a single dynamic shape is given (#22)
* Fix reshape when a dynamic shape is given.

* Fix default attributes for ConvNoBias.

* Fix comment.

* Resolve comment.

* Improve checks.

* Handle zero dim case.

* Add helper to fetch constants. Add test for dynamic reshape.

* Add test for zero.

* Use shortcut method for size.
2020-03-13 17:18:46 -04:00
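The reshape fix above infers the one dynamic (-1) entry of the target shape from the input's total element count. A small illustrative helper capturing that arithmetic (the ONNX zero-means-copy case handled in the PR is omitted here):

```cpp
#include <cstdint>
#include <vector>

// Replace the single -1 entry of a reshape target with the value implied by
// the input's total element count. Assumes at most one -1 and a clean divide.
std::vector<std::int64_t>
inferReshapeTarget(std::int64_t numElements, std::vector<std::int64_t> target) {
  std::int64_t known = 1;
  std::int64_t dynIndex = -1;
  for (std::size_t i = 0; i < target.size(); ++i) {
    if (target[i] == -1)
      dynIndex = static_cast<std::int64_t>(i);
    else
      known *= target[i];
  }
  if (dynIndex >= 0)
    target[dynIndex] = numElements / known;
  return target;
}
```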
chentong319 6137fc7c17
Fix issues #15 and #16 (#29)
* fix issues #15 and #16

* fix format

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-13 10:19:27 -04:00
Tung D. Le 362491553c
Shape inference for ONNXAveragePool (#21)
* Shape inference for ONNXAveragePool

* Edit comments and put the helper function at the top of the file

* Fix template
2020-03-13 09:59:16 -04:00
Tung D. Le a65820940c
Lower ConstantOp (#28)
* Lower ConstantOp

* Refactor the code

* Edit error messages

* Check whether attribute is sparse or dense during shape inference
2020-03-12 10:58:42 -04:00
Tung D. Le 162ac1bc32
Pad value for MaxPool must be negative infinity instead of zero (#20)
Co-authored-by: Alexandre Eichenberger <alexe@us.ibm.com>
2020-03-12 09:30:02 -04:00
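The one-line fix above matters because a padded position must never win the max. Padding with zero does exactly that whenever every real value in the window is negative, as this small example shows:

```cpp
#include <algorithm>
#include <cstdio>
#include <limits>

int main() {
  // A pooling window whose real inputs are all negative, plus one padded slot.
  float window[3] = {-2.0f, -5.0f, -std::numeric_limits<float>::infinity()};
  float withInfPad = *std::max_element(window, window + 3);  // -2, correct
  window[2] = 0.0f;  // pad with zero instead
  float withZeroPad = *std::max_element(window, window + 3); // 0, wrong
  std::printf("pad=-inf -> %g, pad=0 -> %g\n", withInfPad, withZeroPad);
  return 0;
}
```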
Alexandre Eichenberger 811b63e031
Inter common pad (#26)
* common pad handling in shape inference for conv and maxpool

* common pads

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-11 18:36:02 -04:00
chentong319 391f565a66
Lower constant padding operation to KRNL dialect (#27) 2020-03-11 16:54:07 -04:00