Commit Graph

59 Commits

Author SHA1 Message Date
Aman LaChapelle 24d0a2ac71
Add shape inference and names (#266)
* Add shape inference and names

 - Add shape inference for PRelu
 - Fix shape inference for group conv for ConvTranspose
 - Add input and output names for graphs (functions)
 - Add support for (u)int8 tensor attributes

* Fix format issues

* Revert formatting for gen_onnx_mlir.py

* Pads can have ArrayAttr and DenseElementsAttr, so support both

* NumInputs is the number of graph inputs that don't have initializers

* Add test for 2D batchnorm

* Fix typo in define_loops in new 2d BN test

* Change 'name' to 'onnx_node_name'

* Fix Batchnorm for 2D I/O and add lowering test

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-08-27 15:46:27 -04:00
Anh Leu 2ee725d939
Add CastOp lowering (#259)
* move scalerop to decompose

* change clang format

* change clang format

* add shape inference for scaler op

* fixing generated onnxop

* generate onnx.md

* add benefit for scaler decompose and simplify scaler shape inference

* cast rewrite only for float

* add cast op same type rewrite rule

* working on cast lowering

* cast lowering working

* correct onnx version

* update onnx md

* add test for tensor<10xf64>
2020-08-11 16:07:13 -04:00
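A minimal sketch of what the CastOp lowering above might produce for the tensor<10xf64> test case in the last bullet; the function name is made up, and the loop and cast forms are approximate for the 2020-era Krnl and Standard dialects:

```mlir
// Hypothetical result of lowering onnx.Cast (f64 -> f32) to Krnl:
func @cast_f64_to_f32(%arg0: memref<10xf64>, %res: memref<10xf32>) {
  %l = krnl.define_loops 1
  krnl.iterate(%l) with (%l -> %i = 0 to 10) {
    %v = affine.load %arg0[%i] : memref<10xf64>
    // The rewrite handles float types only, per the "cast rewrite only
    // for float" bullet; f64 -> f32 narrows, hence fptrunc.
    %c = fptrunc %v : f64 to f32
    affine.store %c, %res[%i] : memref<10xf32>
  }
  return
}
```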
Tian Jin 58ee62fb49
[NFC] Rename passes for stylistic consistency. (#232)
* lower-frontend -> convert-onnx-to-krnl

* lower-all-llvm -> convert-krnl-to-llvm

* lower-krnl -> convert-krnl-to-affine

* Name fix.
2020-07-31 21:37:35 +08:00
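After the rename, lit tests drive the pipeline with the new flags; a sketch of typical RUN lines (test content omitted), assuming the repository's onnx-mlir-opt driver:

```mlir
// RUN: onnx-mlir-opt --convert-onnx-to-krnl %s | FileCheck %s
// RUN: onnx-mlir-opt --convert-krnl-to-affine %s | FileCheck %s
// RUN: onnx-mlir-opt --convert-krnl-to-llvm %s | FileCheck %s
```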
Gheorghe-Teodor Bercea a58594ec81
Revert "Emit allocs at the top of functions (#222)" (#226)
This reverts commit b27e57cc4f.
2020-07-21 18:30:39 -04:00
Gheorghe-Teodor Bercea b27e57cc4f
Emit allocs at the top of functions (#222)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Add support for moving dynamic alloca instructions to top of functions.

* Fix memory pooling tests.

* Various fixes.

* Fix lit tests.

* More test fixes.

* Reformat.

* Reformat some more.

* Fix issue with TestConv and split-input-file.

* Use smart pointers.

* Remove redundant pointer.

* Reformat.

* Add initMap description.

* Clean up tests.
2020-07-20 19:24:17 -04:00
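A rough sketch of the hoisting this (subsequently reverted) patch performed: allocations, including those with dynamic extents, move to the top of the function instead of sitting next to their first use. Names are illustrative and the syntax approximates mid-2020 MLIR:

```mlir
func @f(%arg0: memref<?x10xf32>) -> memref<?x10xf32> {
  // The alloc is emitted in the entry block; its dynamic extent is
  // derived from the input before any computation begins.
  %c0 = constant 0 : index
  %d = dim %arg0, %c0 : memref<?x10xf32>
  %out = alloc(%d) : memref<?x10xf32>
  // ... loops that fill %out ...
  return %out : memref<?x10xf32>
}
```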
Tian Jin 01a4977c74
Remove optimize_loops/return_loops op. (#200)
* Remove optimize_loops/return_loops op in elementwise ops lowering and fix tests in onnx_lowering.mlir.

* Fix all tests.

* Remove all occurrences of def_loops/return_loops.

* Fix test.

* Fix comments for defineLoops & emitKrnlLoopsAndIterationForOperand function.

* Remove emitOptimizedLoops.

* Allow not specifying optimizedLoops when creating KrnlIterateOperandPack.

* Fix style.

* Make BuildKernelLoop helper not emit optimize/return_loop operations & retire emitKrnlLoopsAndIterationForOperand by replacing it with BuildKernelLoop.

* DefineLoops -> DefineLoopsEx, remove redundant emitKrnlLoopsAndIterationForOperand function.

* BuildKrnlLoop API name update.

* Tweak comments.

* Remove unused withEmptyOptimization flag.

* Better comment for BuildKrnlLoop.

* Fully remove krnl.return_loops/optimize_loops op.

* Trigger Windows Build

* Bump windows ci python version.
2020-07-08 12:49:15 +08:00
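With the wrapper ops gone, lowered kernels go straight from krnl.define_loops to krnl.iterate. A minimal copy-loop sketch (shapes made up):

```mlir
func @copy(%src: memref<4x8xf32>, %dst: memref<4x8xf32>) {
  %l:2 = krnl.define_loops 2
  krnl.iterate(%l#0, %l#1) with (%l#0 -> %i = 0 to 4, %l#1 -> %j = 0 to 8) {
    %v = affine.load %src[%i, %j] : memref<4x8xf32>
    affine.store %v, %dst[%i, %j] : memref<4x8xf32>
  }
  return
}
```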
Aaron Smith 8e6642b2bc
Update llvm version (#187)
* Update llvm version

* Update git hash for llvm-project

* Update option handling

* Update LLVM version

* Update tests

* Update git hash

* Update docs

* clang-format

* Fix operand adaptor

* Fix dim with constant

* Update LSTM.cpp

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-07 21:26:00 +08:00
Tung D. Le 7e05f371de
Replace std.load/std.store by affine.load/affine.store (#180)
* Move to more recent LLVM ID (May 15)

* clang-format

* Bump cache version up

* Update readme

* Fix doc check

* Move to a newer commit id

* Update LoopToStandard -> SCFToStandard

* Change MLIRSideEffects to MLIRSideEffectInterfaces

* Add AffineScope trait to KrnlIterateOp

* [ElementWise] Load/Store op to AffineLoad/AffineStore op

* [Gemm, MatMul, Reduction, Softmax] Load/Store op to AffineLoad/AffineStore op

* [Concat] Load/Store op to AffineLoad/AffineStore op

* [Pad, PadConstantValuePad, Reshape, Transpose] Load/Store op to AffineLoad/AffineStore op

* [LSTM] Load/Store op to AffineLoad/AffineStore op

* [Conv, Norm, Pooling] Load/Store op to AffineLoad/AffineStore op

* Add affine-loop-fusion pass

* Use Load/Store for scalar

* Use Load/Store for scalar

* Fix lit tests

* Unknown dimensions for broadcasting ops

* Affine Load/Store for scalar memref

* clang-format

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-05 16:20:21 +08:00
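The mechanical change in most kernels is a one-for-one swap of standard-dialect memory ops for their affine equivalents, which passes such as affine-loop-fusion can analyze. A sketch of a loop-body line before and after:

```mlir
// Before #180 (std dialect):
%v = load %arg0[%i, %j] : memref<10x10xf32>
store %v, %res[%i, %j] : memref<10x10xf32>

// After (affine dialect; valid because the AffineScope trait makes
// krnl.iterate induction variables legal affine indices):
%w = affine.load %arg0[%i, %j] : memref<10x10xf32>
affine.store %w, %res[%i, %j] : memref<10x10xf32>
```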
Tung D. Le 2c8f5701bd
Lower SqueezeOp to Krnl dialect (#164)
* Lower Squeeze op to Krnl dialect

* Emit tensor size as a single constant; add a lit test for unknown dimensions

* Code style

* Special case where the input is only used by this squeeze op

* Remove squeeze-in-place optimization

* Update ConvertONNXToKrnl.cpp

Tweak to re-run tests.

* Trigger buildbot re-run.

* Re-run CI

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-07-03 16:26:41 +08:00
chentong319 2e08b2112c
String type (Ready for Review) (#182)
* string type from tensorflow

* simplify type

* parser and printer

* gen StringType for tablegen

* onnx to onnx-mlir type

* add namespace

* allow all integer type

* dialect document

* add test case

* format

* more precise type for ONNXOp

* format

* enable the failed test

* update comment

* update onnx.md

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-06-25 16:34:37 -04:00
Tung D. Le 8c4d527eea
Lower SplitOp to Krnl dialect (#155)
* Fix importing variadic output

* Lower splitop

* Support unknown dimension and add lit tests

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-06-11 10:57:20 +08:00
Tung D. Le bb17fa965f
Add AffineScope trait to KrnlIterateOp and enable affine-loop-fusion pass (#140)
* Make KrnlIterate's IVs valid to AffineLoad/AffineStore

* [Unary elementwise op] Load/Store -> AffineLoad/AffineStore

* [Conv] Load/Store -> AffineLoad/AffineStore

* Add affine-loop-fusion pass

* typos

* Fix a mistake when merging the master branch

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-06-08 15:36:27 +08:00
Alexandre Eichenberger 20dd6544aa
conv bug fix (#154) 2020-05-28 07:34:58 +08:00
chentong319 6099efd91b
Express some basic features of an Operation in TableGen file (#103)
* change operation definition

* change importer

* default type inference

* file format

* generate types for input/output

* generate the mapping for operation output type

* remove debug message for gen_doc.py

* update the dialect doc

* add support Complex

* format

* update document

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-21 22:03:16 -04:00
chentong319 23bea50404
Implement PadOp based on attribute promotion (#71)
* enable promote attr for pad

* use optional arguments for pad

* shape inference for pad

* Lowering Pad

* format file

* use DenseTensor for the attribute

* use Pad in ONNXRewrite

* fix the merge conflict

* fix the attr given to constantOp

* handle ONNXConstantOp in attribute promotion

* Fix bug when AttributePromotion is called more than once

* update ONNXOps.td.inc with correct version of onnx

* update onnx.md

* responses to review

* fix the build error

* change the implementation of Pad

* delete commented out code

* clang format

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-05-15 13:19:28 +08:00
Tung D. Le 4d8b855c17
Unify codes in shape inference and conversion (#98)
* Use AffineMap

* Shared AffineMap

* AffineMap for Conv/Pooling

* Create helper files

* Remove changes for Relu

* Remove redundant includes

* Use AffineMap for AveragePool's shape inference

* Add MLIR tests for unknown dimension case

* Extract a method AffineMapIntConstant

* Comment style and include path

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-14 17:31:33 +08:00
Tung D. Le d65a6e72dd
Specialize the op lowering logic for element-wise operations (#118)
* Specialize the op lowering logic for elementwise operations

* Fix clang-format error.

* Update tests for LSTM since LSTM uses element-wise ops

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-05-14 13:00:15 +08:00
Tung D. Le 24343177b8
Lower LSTMOp to Krnl dialect (#73)
* Support dilations and enable e2e tests

* Fix allocating memory for dynamic shape

* Edit comments

* Do dilation by computing an offset from kernel index

* Correct dilation formula, add an example of out-of-bound, and add a test for dilation

* Import optional outputs as NoneType

* Shape inference for ONNXLSTM

* Edit ONNXLSTM::inferShape()

* Shape inference for ONNXLSTMOp

* Create a common function for inferring shape for RNN ops

* CheckInsertDeallocation for a specific result

* Allocate memory for LSTM

* First round of lowering

* Allocate memory for hidden and cell states

* Test with custom Tanh

* Fix an error in Ct's formula

* Add E2E tests

* Return outputs

* Refactor the code

* Enable E2E tests

* Support reverse and bidirectional directions

* Minor revision

* Return all intermediate hidden states

* Call existing activation functions

* Structs for activation functions

* Call existing activations in ONNX

* Minor revision

* Compare strings ignoring case

* Use memreftype of rank 0 for calling activation functions

* Fix getActivationPack()

* Revise the code

* Add one MLIR test

* Add MLIR tests for reverse and bidirectional modes

* Make the order of emitting instructions deterministic

* Use OperandAdaptor instead of directly using an operand index

* Use literal assignments

* Change some variable names

* Use literal assignments

* Use literal assignments

* Format the code

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-05-13 21:08:06 +08:00
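The recurrences implemented here (including the Ct formula one bullet fixes) follow the ONNX LSTM definition; with the default sigmoid and tanh activations and ignoring peepholes:

```latex
\begin{aligned}
i_t &= \sigma(X_t W_i^T + H_{t-1} R_i^T + b_i) \\
f_t &= \sigma(X_t W_f^T + H_{t-1} R_f^T + b_f) \\
o_t &= \sigma(X_t W_o^T + H_{t-1} R_o^T + b_o) \\
\tilde{c}_t &= \tanh(X_t W_c^T + H_{t-1} R_c^T + b_c) \\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{c}_t \\
H_t &= o_t \odot \tanh(C_t)
\end{aligned}
```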
Tung D. Le 64ed03295f
Fix converting type for functions with no argument (#96)
* Fix converting type for functions with no argument

* Add two tests

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-04-27 11:01:51 -04:00
Tung D. Le eac2297624
Lower MaxPooling and AveragePool to Krnl dialect using AffineMap (#38)
* Create a template for pooling and add support for AveragePool

* Edit MLIR tests for MaxPool according to the new lowering template for pooling

* Dealloc temporary variables

* Support count_include_pad for AveragePool

* Add MLIR tests for AveragePool lowering

* Make changes according to Tian's comments

* Push AffineMap as upper bound for KrnlIterateOp

* Test AffineMap to use in Pooling

* Replace the old implementation with a new one using AffineMap

* Fix the computation when dilations are non-unit

* Clean up the old code

* Remove AveragePool from Canonicalization pass

* Fix computing the end indices of a filter window

* Refactor the code for pooling

* Revise pushAffineMapBound

* Add MLIR tests

* Remove unused functions

* Fix check-onnx-backend build on x86 Linux. (#91)

* Add the split marker to test files (#90)

Co-authored-by: Tian Jin <tjingrant@gmail.com>

Co-authored-by: gongsu832 <gong_su@hotmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-04-19 21:39:34 +08:00
Tung D. Le e32f531546
Add the split marker to test files (#90)
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-04-16 15:17:27 +08:00
Alexandre Eichenberger fa8962753c
Concat lower (#82)
* implement shape inference for concat

* better checking of axis being concatenated: constant values only

* lowering of Concat with lit and backend tests

* fixes

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-04-13 11:40:39 -04:00
Tung D. Le f4fefcf713
Re-add tanh lowering (#75)
* Re-add tanh lowering

* Make the emission deterministic
2020-04-09 14:22:36 +08:00
Gheorghe-Teodor Bercea f16e79d744
Emit constant tensors as global constants (#66)
* Reorganize main function.

* Follow review comments.

* Emit constants as globals in Krnl and LLVM dialects.

* Enable unique constant variable names.

* Emit alloca for local array. Add tests.

* Comment clean-up.

* Simplify MemRef construction.

* Fix output type.
2020-04-01 13:51:06 -04:00
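A sketch of the Krnl-level form this produces: each constant tensor becomes a uniquely named global yielding a memref, rather than a per-use local buffer. The attribute layout is approximate:

```mlir
%0 = "krnl.global"() {name = "constant_0", shape = [2, 2],
    value = dense<[[1.0, 2.0], [3.0, 4.0]]> : tensor<2x2xf32>}
    : () -> memref<2x2xf32>
```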
Alexandre Eichenberger 653fa69102
Unify Conv implementation (#54)
* fixed readme for new git repo

* conv with bias as an optional input
2020-03-26 11:03:19 -04:00
Tung D. Le 2814ea3898
Support dilations and enable the remaining e2e tests for MaxPoolSingleOut (#31)
* Support dilations and enable e2e tests

* Fix allocating memory for dynamic shape

* Edit comments

* Do dilation by computing an offset from kernel index

* Correct dilation formula, add an example of out-of-bound, and add a test for dilation

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-18 09:55:50 -04:00
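The dilation handling reduces to the standard ONNX output-size formula per spatial dimension, plus an offset-based read index; for input size H_in, kernel size k, stride s, dilation d, and pads p_0, p_1:

```latex
H_{out} = \left\lfloor \frac{H_{in} + p_0 + p_1 - d\,(k - 1) - 1}{s} \right\rfloor + 1,
\qquad
h_{in} = h_{out} \cdot s + k_{idx} \cdot d - p_0
```

The second expression is the input index read for kernel tap k_idx; with padding it can fall outside [0, H_in), which is the out-of-bound case one of the bullets documents.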
Tung D. Le 4763e8a8bc
Lower ONNXAbsOp to Krnl dialect and enable e2e tests for ONNXReduceL1 (#18)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-17 11:12:45 -04:00
Gheorghe-Teodor Bercea 1622b9f161
[NFC] Change ONNF based names to ONNX-MLIR (#32)
* Rename onnf to onnx-mlir.

* Change workspace name.
2020-03-17 09:16:33 -04:00
Tung D. Le a65820940c
Lower ConstantOp (#28)
* Lower ConstantOp

* Refactor the code

* Edit error messages

* Check whether attribute is sparse or dense during shape inference
2020-03-12 10:58:42 -04:00
chentong319 391f565a66
Lower constant padding operation to KRNL dialect (#27) 2020-03-11 16:54:07 -04:00
Gheorghe-Teodor Bercea e4c23da4fd
Lower MaxPoolSingleOutOp to Krnl dialect (#1)
* Lower MaxPoolSingleOutOp to Krnl dialect

* Edit comments

* Update changes according to the new folder structure

* Add MLIR tests

* Support ceil_mode

* Merge the first two krnl loops into one krnl loop; remove attribute checks

* Dynamically allocate memory for the result if the result has unknown dimensions

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-03-04 14:27:21 -05:00
Tung D. Le 5357fc1421
Use SqrtOp in Standard dialect (#108)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-26 12:03:24 -05:00
Tung D. Le a720f9a7b2
Remove special GemmNoBias since we can handle it using NoneType bias (#100)
* Remove special GemmNoBias since we can handle it using NoneType bias

* Remove GemmNoBias from onnx.md

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-25 13:20:43 +08:00
Tung D. Le aea6479ad3
Lower BatchNormalization (test mode) to Krnl dialect (#70)
* Add ONNXBatchNormalizationTestModeOp and its shape inference

* Lower batchnormalization test mode

* re-use scale, bias, mean, and variance

* Add MLIR tests

* Add e2e tests

* fix typos

* Fix a bug in MLIR tests

* Change type from int to int64_t for indices

* Uncomment e2e tests due to segmentation fault

* Uncomment e2e tests due to segmentation fault

* Revise the code

* [Tian] Fix segmentation fault in e2e tests

* Re-generate onnx.md to include BatchNormalizationTestModeOp

* Reverse an unintentional change

* Fix some typos in comments

* Use convertToMemRefType from the master branch

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-20 11:45:40 -05:00
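In test (inference) mode, batch normalization uses the stored running statistics instead of batch statistics, so the lowering is purely element-wise per channel:

```latex
y = \mathit{scale} \cdot \frac{x - \mathit{mean}}{\sqrt{\mathit{var} + \epsilon}} + \mathit{bias}
```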
Tung D. Le f1d20e368f
Add support of GemmNoBias (#91)
* Add support of GemmNoBias

* Fix a wrong indentation
2020-02-20 10:55:24 -05:00
Tung D. Le b521719587
Lower Matmul operation to Krnl dialect (#57)
* Allocate memory for matmul's result

* Group cases

* Add support of N-D x N-D, N>=2

* Revise createIterateOperandPack

* Add 1-D x 1-D

* Add 1-D x N-D

* Add MLIR tests

* Change variable names

* Change type from int to int64_t for indices

* Change variable names

* Change int64_t back to int

* Change int64_t back to int

* Change int64_t back to int

* Use decltype

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-02-14 10:43:17 -05:00
Gheorghe-Teodor Bercea 094be4f37a
Add support for strides when emitting convolution loop nest. (#76)
* Add support for strides when emitting convolution loop nest.

* Only emit stride multiplication if strides is greater than one.

* Add test.
2020-02-11 11:53:13 -05:00
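A sketch of the guarded index arithmetic from the second bullet, as it might appear inside the convolution loop nest (names and shapes illustrative; ops from the 2020 std dialect):

```mlir
// stride > 1: input spatial index = %i * stride + %k
%stride = constant 2 : index
%base = muli %i, %stride : index
%idx = addi %base, %k : index
// stride == 1: the muli is skipped and %idx = addi %i, %k
%v = load %input[%n, %c, %idx] : memref<1x2x32xf32>
```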
Tung D. Le adad9e24bd
Add support of negative dimensions (#66)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-02-11 10:37:47 -05:00
Tung D. Le 2c7046ff5f
Lowering ReductionMax, ReductionMin, ReductionProd and ReductionSum (#31)
* Shape inference for reduction

* Lower ReduceSum

* Support list-like attributes

* Add ReduceMax, ReduceMin, ReduceProd

* Add tests

* Emit errors for unsupported types

* Typos

* Add backend test

* Fix axis computation

* Update the use of attributes

* Use SmallVector

* Address stylistic comments

* Change type from int to int64_t for indices

* Change type from int to int64_t for indices
2020-02-10 21:38:19 +08:00
Gheorghe-Teodor Bercea 0272451521
Lower convolution to KRNL dialect. (#65)
* Ensure data shape is at least 4.

* First version of convolution.

* Simplify code for KRNL lowering.

* Add test without padding or strides.

* Refactor code for lowering frontend operations to KRNL dialect.

* Add test for conv with no bias and no padding.

* Add test with group greater than one.

* Address comment.
2020-02-07 16:51:32 -05:00
Haruki Imai 477227a0ec
Added lowering of SignOp (#21)
* Support lowering of SignOp

* Fixed test code for signop of integer input

* Inserted Sign and Reciprocal in SharingWork.md (Reciprocal is for past commit 7e3f96e)

* Added test for Sign Op

* Fixed minus_one -> minusOne

* Fixed test for signop
2020-02-04 22:27:17 +08:00
Gheorghe-Teodor Bercea 9fb826ae7e
Lower transpose operation to KRNL dialect (#54)
* Lower transpose operation.

* Fix IdentityOp.

* Add tests.

* Add backend tests.

* Clean-up code.

* Move transpose code and improve comment.
2020-01-30 11:44:56 -05:00
Tung D. Le 400676e371
Lowering Gemm (#19)
* Initial implementation

* Support transposing inputs

* Revise unidirectional broadcasting and unknown dimensions

* Revise gemm

* Add testcase

* Rename some variables

* Update SharingWork.md

* Change from the use of Value* to Value

* Insert deallocation

* Initialize the output matrix and fix a wrong computation

* Add end-to-end testcases

* Edit lowering tests

* Change attribute names

* Use emplace_back for SmallVector

* Use the new way of getting attributes

* Revise the use of attributes

* Check the bias's shape

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-29 11:11:49 -05:00
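The operation being lowered is ONNX Gemm, with optional transposition of either input (the "Support transposing inputs" bullet) and unidirectional broadcasting of the bias C:

```latex
Y = \alpha \, A' B' + \beta \, C, \qquad
A' = \begin{cases} A^T & \text{if } transA \neq 0 \\ A & \text{otherwise} \end{cases}
\quad (\text{and likewise } B' \text{ via } transB)
```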
Tung D. Le 9e82d388f0
Add support for Unsqueeze (#50)
* Infer shape for Unsqueeze

* Lower Unsqueeze

* Revise

* Turn off backend tests

* Compute tensorSize for static shape

* Compute tensorSize with unknown dims

* Edit tests

* Update the use of attributes

* Add e2e tests

* Use SmallVector

* Remove return

* Check whether the operand is ranked or not

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-29 10:46:02 -05:00
Tung D. Le 5b44169aaa
Support dimension zero in reshape (#55)
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-29 10:41:09 -05:00
Tung D. Le 195bf9d15d
Add KrnlSqrtOp (#22)
* Initial lowering of KrnlSqrtOp

* Fix errors and add a testcase

* typos

* Add the MLIR example

* Restore doc/doc_check/CMakeLists.txt

* Clean the code

* Edit comments

* Remove redundant parts

* Change the use of -> to .

* Add a test for f64

* Support ONNXSqrtOp

* Fix indentation

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-28 11:10:47 -05:00
chentong319 c74f814f64
Add attributes as operation parameters (#45)
* add attributes of Op into parameters

* fix rewrite rule for GemmOp with attributes

* use I64Attr instead of I32Attr and modify test cases for the changes in attributes

* add output name (prefixed with o_) to Op definition

* update shape inference for the new attributes
2020-01-27 10:09:14 -05:00
Yasushi Negishi 383a5c31ac
Support Softplus and Softsign operations (#17)
* Support Softplus and Softsign operations

* Add the default shape inference for the transposition operation.

* Fix conflict with master

* Fix conflict with master branch

* Add test for softplus and softsign in test/backend/test.py

* Re-enable Reciprocal tests.

Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-01-23 21:18:38 -07:00
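For reference, the two element-wise operations added here are defined as:

```latex
\mathrm{Softplus}(x) = \ln\!\left(1 + e^{x}\right), \qquad
\mathrm{Softsign}(x) = \frac{x}{1 + |x|}
```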
Tian Jin 51b0f4c9dd
Chentong319 attribute with variant (#25)
* change the read-in of attribute, using variant

* Use backported variant.

* Reduce code duplication.

* 1. Make array attribute parsing clearer.
2. int -> int64_t.

* 1. Fix how array attributes are imported.

* 1. Fix clang-tidy warnings.

* 1. Nit: fix clang-tidy warnings.

* Fix MaxPool node construction.

* Fix call to MaxPool.

* Comment out backend tests that fail.

* Add path to variant submodule to enable include file detection.

* Allow unused argument to avoid special casing generator.

* Address attribute-related e2e test failures for HardSigmoid, Elu, LeakyRelu, Selu, Softmax

Co-authored-by: chentong319 <chentong@us.ibm.com>
Co-authored-by: Gheorghe-Teodor Bercea <gt.bercea@gmail.com>
2020-01-21 19:36:21 -07:00
Tung D. Le e89e51699b
Lowering softmax (#14)
* Rebase

* Use max normalization

* Handle axis

* Add tests

* Update SharingWork.md

* Remove redundant spaces

* Format code

* Rebase

* Change from the use of Value* to Value

* Add end-to-end tests

Co-authored-by: Tian Jin <tjingrant@gmail.com>
2020-01-20 21:57:32 -05:00
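The "Use max normalization" bullet refers to the numerically stable softmax, computed along the chosen axis:

```latex
\mathrm{softmax}(x)_i = \frac{e^{\,x_i - \max_j x_j}}{\sum_k e^{\,x_k - \max_j x_j}}
```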