Commit Graph

259 Commits

Author SHA1 Message Date
A. Unique TensorFlower 75a1c450ea [MLIR][KernelGen] Fix Windows build failure
Fix usage of default constructor. Instead, always use the parameterized
constructor and make the maximum supported rank explicit.

PiperOrigin-RevId: 377037155
2021-06-02 05:34:44 -07:00
wyzhao 968d4b8709 PR #49598: [MLIR][DISC] legalize tensor_load inserted during hlo-to-lhlo conversion
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49598

This PR implements logic for lowering memref.tensor_load ops that are
inserted during `mhlo-legalize-to-lmhlo`.
Copybara import of the project:

--
80eb377af4e02182e1aecc943a41ca5d7d1c2100 by Wenyi Zhao <reyizero@gmail.com>:

[MLIR][DISC] legalize tensor_load inserted during hlo-to-lhlo conversion

This PR implements logic for lowering memref.tensor_load ops that are
inserted during `mhlo-legalize-to-lmhlo`.

--
ac452fe3dcd591211cd5c59be9189fe2f7153b41 by Wenyi Zhao <reyizero@gmail.com>:

minor fix

--
6b36017f8632a06adbc3e05a62975fa641d0260f by Wenyi Zhao <reyizero@gmail.com>:

minor refine

--
846005cc76d0033112e47825c2e9a97790b6925f by Wenyi Zhao <reyizero@gmail.com>:

minor fix

--
f6a4becaa287d5ca323b2d152a4d0ae053730fd9 by Wenyi Zhao <reyizero@gmail.com>:

fix

--
5555749f60f7fce8f57962860ef65efccf0362ba by Wenyi Zhao <reyizero@gmail.com>:

fix

--
8873b9b6d9315c1199ca9f7c133ecf377ecd2fa6 by Wenyi Zhao <reyizero@gmail.com>:

fix

PiperOrigin-RevId: 376942547
2021-06-01 16:27:56 -07:00
A. Unique TensorFlower d1828625ab [MLIR][KernelGen] Make maximum supported rank in rank specialization configurable
The maximum supported target rank of 5 is sufficient for all operations but
`select`. Make the maximum target rank configurable in the rank specialization.
This reduces the number of generated kernels for operations that don't require
it.

PiperOrigin-RevId: 376822496
2021-06-01 06:54:31 -07:00
A. Unique TensorFlower b32f885ad7 [MLIR][KernelGen] Enable cluster rank specialization
Replace the previously used `TransformUnrankedHloPass` which rank-specializes
only one operation at a time. The new generalized rank specialization clusters
compatible operations and rank-specializes them collectively.

PiperOrigin-RevId: 376127752
2021-05-27 02:44:31 -07:00
Mehdi Amini af01c08ce6 Allow tuple as results of mhlo.custom_call op (NFC)
PiperOrigin-RevId: 376009518
2021-05-26 12:56:10 -07:00
Adrian Kuegel a847109ac7 Support complex types when converting HLO multiply op.
We can lower it to the MulOp in the complex dialect.

PiperOrigin-RevId: 375675079
2021-05-25 04:35:34 -07:00
Adrian Kuegel 5816920258 Support complex types when converting HLO divide op.
We can lower it to the DivOp in the complex dialect.
Also add tests to hlo-legalize-to-linalg.mlir for CompareOp lowering of complex
types. These were forgotten in a previous commit.

PiperOrigin-RevId: 375669125
2021-05-25 03:43:46 -07:00
Adrian Kuegel 758ae7da6b Support complex types when converting HLO compare op (EQ/NE).
We can lower it to the EqualOp / NotEqualOp in the complex dialect.

PiperOrigin-RevId: 375655092
2021-05-25 01:54:27 -07:00
wyzhao b93e54d8a4 PR #49454: [MLIR][DISC] Upgrade to use the new `reifyReturnTypeShapes` interface.
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49454

The new interface is safer to use during dialect conversion
(e.g. when converting from the tensor world to the buffer world).
Copybara import of the project:

--
a6968072d59bec3c3bbaef0121d297e807c37c91 by Wenyi Zhao <reyizero@gmail.com>:

[MLIR][DISC] Upgrade to use the new `reifyReturnTypeShapes` interface.

The new interface is safer to use during dialect conversion
(e.g. when converting from the tensor world to the buffer world).

--
55e7c6b7f2f99b99e226645a57e2433fae3e90ed by Wenyi Zhao <reyizero@gmail.com>:

minor fix

PiperOrigin-RevId: 375500273
2021-05-24 10:11:55 -07:00
Feiwen a7884196f5 PR #49228: [MLIR][DISC] porting dynamic shape related OPs to mhlo and lmhlo dialect
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49228

We are porting our MLIR-based dynamic shape compiler to the TensorFlow community (from op definitions and patterns to optimization passes, etc.).
This is the first PR, which includes some dynamic shape op definitions in the mhlo and lmhlo dialects.
For mhlo dialect, we add:
- HLO_RealDynamicSliceOp
- HLO_DynamicPadOp
- HLO_DynamicGatherOp
- HLO_DynamicConvOp

For lmhlo dialect, we add:
- LHLO_RealDynamicSliceOp
- LHLO_DynamicBroadcastInDimOp
- LHLO_DynamicGatherOp
- LHLO_DynamicPadOp
- LHLO_DynamicBitcastOp
- LHLO_DynamicConvOp
- LHLO_DynamicIotaOp
- LHLO_DynamicReshapeOp
- LHLO_DotGeneralOp
- LHLO_BitcastOp

Remaining ops to add:
* We will send a separate PR containing LHLO_DynamicWhileOp and LHLO_DynamicCaseOp for control flow.
* We will add a separate dedicated dialect like mhlo_ral, which will include D2HOp/H2DOp/DebugPrintOp/TopKOp, etc.

Previous discussions: [RFC](https://groups.google.com/a/tensorflow.org/g/mlir/c/_X48poNcbDI/m/jCC8BWIICQAJ), [discussion_1](https://llvm.discourse.group/t/updates-on-mlir-based-dynamic-shape-compiler/2384), [Recording of meeting](https://drive.google.com/file/d/1_uEISlV5MUWdG9faKAdKlCWnPtGjRC-D/view?usp=sharing).
Copybara import of the project:

--
e22d9e61106e00a1a1c6f368cc4a03e3bd1f414c by azazhu <azazhu@gmail.com>:

[DISC]fea: porting mhlo and lmhlo OPs

--
9ec3e76290da07cbd53d7da5fa86ff67179441a1 by azazhu <azazhu@gmail.com>:

[DISC][MLIR] 1. add summary and description for dynamic ops in mhlo and lmhlo; 2. remove InferOutputTypes; 3. add verifiers for RealDynamicSliceOp and DynamicPadOp

--
0d68cd135555fd935991c12456b21329e628f23f by azazhu <azazhu@gmail.com>:

[DISC][MLIR] 1. remove D2H, H2D and DebugPrint ops from the mhlo/lmhlo dialects; 2. add type constraints to DynamicPadOp and RealDynamicSliceOp; 3. refine lmhlo type constraints; 4. rename RealDynamicSliceOp due to a name conflict.

--
698762a77d60f6a844cb1ab3f32740d4ef3c5843 by azazhu <azazhu@gmail.com>:

[DISC][MLIR] 1. replace dyn_cast with cast; 2. refine code

PiperOrigin-RevId: 375022260
2021-05-20 23:16:47 -07:00
Rahul Joshi 41f663ce47 [HLO] Adopt custom syntax for convolution dimensions and window attributes (HLO)
PiperOrigin-RevId: 374923250
2021-05-20 12:13:50 -07:00
Rahul Joshi fc88cf1ff4 [HLO] Adopt custom syntax for convolution dims and window attributes for LMHLO_GPU
PiperOrigin-RevId: 374889917
2021-05-20 09:41:48 -07:00
A. Unique TensorFlower 57aeb5ab16 Integrate LLVM at llvm/llvm-project@0316f3e649
Updates LLVM usage to match
[0316f3e64972](https://github.com/llvm/llvm-project/commit/0316f3e64972)

PiperOrigin-RevId: 374855085
2021-05-20 06:09:40 -07:00
Stella Laurenzo 0fe07e3814 Separate CHLO transforms for expanding compositions and lowering broadcasts.
* The former is typically invariant regardless of backend.
* The latter may need to be done differently depending on capabilities of the lowering target.

PiperOrigin-RevId: 374492924
2021-05-18 13:33:59 -07:00
A. Unique TensorFlower ccd70d5717 [MLIR][HLO] Add `rank-specialization-to-scf` pass
Currently the lowering is only implemented for the unary case. The n-ary case
will follow.

PiperOrigin-RevId: 374162772
2021-05-17 03:56:23 -07:00
Rahul Joshi a361253e4f [HLO] Add custom print/parse for window attributes of convolutions (in LMHLO)
PiperOrigin-RevId: 373807616
2021-05-14 09:47:25 -07:00
A. Unique TensorFlower d2cc74317c Implement constant folding for mhlo.Sign.
PiperOrigin-RevId: 373550014
2021-05-13 03:54:04 -07:00
A. Unique TensorFlower 420c42a0a1 [MLIR][HLO] Support CHLO unary operations in rank specialization clustering
PiperOrigin-RevId: 373397321
2021-05-12 10:20:43 -07:00
A. Unique TensorFlower 596918a6f1 [MLIR][HLO] Allow rank specialization clustering with `chlo.broadcast_select` op
PiperOrigin-RevId: 373379990
2021-05-12 08:56:49 -07:00
Rahul Joshi e260aa771c [HLO] Add custom print/parse for convolution dimension numbers (in LMHLO)
PiperOrigin-RevId: 373379227
2021-05-12 08:52:46 -07:00
A. Unique TensorFlower 313d24bc8f [MLIR][HLO] Add `rank-specialization-cluster` pass
Add a pass to cluster unranked C/HLO operations in one
`chlo.rank_specialization_cluster` op. The C/HLO operations are moved to the
body of the operation. Later passes can use this to rank-specialize all these
operations together.

PiperOrigin-RevId: 373336725
2021-05-12 03:46:01 -07:00
Jacques Pienaar 2ea9470515 Remove BASE_HLO_ConvOp to remove coupling between MHLO and LMHLO conv ops
PiperOrigin-RevId: 373201247
2021-05-11 11:54:44 -07:00
Itai Zukerman a4db6c57aa Removed all (most) BASE_HLO_* ops.
Moved the corresponding `summary` and `description` fields into the subclasses.
Kept BASE_HLO_ConvOp for `hasWindowReversal()`.

PiperOrigin-RevId: 373173025
2021-05-11 09:48:31 -07:00
A. Unique TensorFlower 7f7a86ad0d [MLIR][HLO] Implement `RegionBranchOpInterface` for rank specialization cluster
PiperOrigin-RevId: 373163196
2021-05-11 09:03:05 -07:00
A. Unique TensorFlower 96a47345cc [MLIR][HLO] Add `rank_specialization_cluster` op to CHLO
The operation will be used to cluster compatible operations that can be rank-
specialized collectively.

PiperOrigin-RevId: 373128557
2021-05-11 05:17:42 -07:00
A. Unique TensorFlower 6191b3e528 [MLIR][HLO] Use `#` operator for list concatenation in C/HLO definitions
PiperOrigin-RevId: 373110780
2021-05-11 02:53:34 -07:00
Rahul Joshi 8c854886cb [XLA:GPU] Allow all-gather operands to have different element types.
- XLA's all-gather combiner can create such all-gathers, so relax the same-element-type
  trait for all-gathers.

PiperOrigin-RevId: 372380446
2021-05-06 11:04:13 -07:00
dfki-jugr 6bc854f5d9 PR #48667: [mlir-hlo] Added RegionBranchOpInterfaces to lmhlo operations.
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/48667

Added RegionBranchOpInterfaces to lmhlo operations that use regions.
This is needed, since the bufferization features in MLIR have to reason about the control flow within these operations.
Copybara import of the project:

--
572fd7d850a46630b812da84e9094280f89f259e by Julian Gross <julian.gross@dfki.de>:

Added RegionBranchOpInterfaces to lmhlo operations.

PiperOrigin-RevId: 372070825
2021-05-05 00:27:56 -07:00
Benjamin Kramer f4414fcd66 [MHLO:Linalg] Add support for lowering unsigned ops
This strips away the signedness with a type converter, using unrealized
conversion casts. The rest is mostly mechanically pushing the original op down
the pipeline so lowerings can see the original types.

Signed types stay signless for now. This can be changed in the HLO bridge later.

I did a pass over all ops and added unsigned lowerings where they were missing.
There may be more.

Currently the lowering will die at a later stage because it doesn't understand
the unrealized casts.

PiperOrigin-RevId: 371077494
2021-04-29 02:27:35 -07:00
A. Unique TensorFlower e500ab37a1 Introduce constant folds for ReduceOp with single LogicalAnd or LogicalOr op.
PiperOrigin-RevId: 370551483
2021-04-26 15:11:27 -07:00
Adrian Kuegel 0e2b255f01 Lower LHLO::AbsOp to complex dialect.
Also fix the traits for LHLO::AbsOp to allow different types and add a
verifier.

PiperOrigin-RevId: 370438790
2021-04-26 05:44:03 -07:00
A. Unique TensorFlower 8db96f54d3 [mhlo] Add a folder for mhlo.map which does nothing but return one of the arguments.
Add a folder for maps whose body returns only one of the arguments. When this arises, the fold replaces the map output with one of the operand tensors.

PiperOrigin-RevId: 369304322
2021-04-19 14:36:08 -07:00
Rahul Joshi c75cbf4ac7 [MLIR][NFC] Rename ReduceOp operands() => inputs().
- Rename to avoid confusion, as `operands` generally includes all operands of an operation.

PiperOrigin-RevId: 368479524
2021-04-14 12:08:23 -07:00
Jacques Pienaar fdd75daed6 Add shape function for MHLO RngNormal and RngUniform
PiperOrigin-RevId: 368276963
2021-04-13 12:59:42 -07:00
Hanhan Wang 768234b077 [NFC] Fix a typo in ScalarLimit comments.
PiperOrigin-RevId: 367593932
2021-04-09 01:57:50 -07:00
Rahul Joshi 0800423d27 [LMHLO] Simplify FusionOp::getInputBuffers() and friends.
- No need to walk the entire region; instead, just iterate over the top-level operations in
  the region attached to the fusion op.

PiperOrigin-RevId: 366528833
2021-04-02 15:55:49 -07:00
Geoffrey Martin-Noble 763ff55970 Restore SingleBlockImplicitTerminator verification to mhlo.while
The internal users have been cleaned up, so we can roll this forward again.

PiperOrigin-RevId: 366313960
2021-04-01 13:04:26 -07:00
Rahul Joshi ff2cbfa2ec [MLIR] Add support for representing variadic reduce-window in HLO/LMHLO dialect.
-  Fixed a subset of transformations to handle variadic reduce-window.

PiperOrigin-RevId: 366278650
2021-04-01 10:24:50 -07:00
A. Unique TensorFlower af3bc47a8b Integrate LLVM at llvm/llvm-project@8396aeb07c
Updates LLVM usage to match
[8396aeb07cdd](https://github.com/llvm/llvm-project/commit/8396aeb07cdd)

PiperOrigin-RevId: 366034463
2021-03-31 08:01:34 -07:00
Geoffrey Martin-Noble 5ec66775d4 Temporarily relax restriction on mhlo.while terminator
Some internal tests are failing, so relaxing this restriction
temporarily while we investigate.

PiperOrigin-RevId: 365949611
2021-03-30 19:44:07 -07:00
Geoffrey Martin-Noble 5d65758e8c Canonicalize MHLO Case and If Ops with constant conditions
ReplaceOpWithRegion was taken directly from ScfOps. We should maybe put that somewhere common in core.

PiperOrigin-RevId: 365936724
2021-03-30 17:58:01 -07:00
Geoffrey Martin-Noble 7a9394dca5 Restrict MHLO control flow ops to single-block regions
PiperOrigin-RevId: 365935824
2021-03-30 17:51:03 -07:00
Adrian Kuegel c1a6ae8994 Generalize the HloBinaryElementwiseAdaptor
We can also use it for ternary ops like Select if we change the signature
so that a ValueRange is passed in.
Also remove special casing for HloComplexAdaptor. It can be handled with the
generic adaptor as well.

PiperOrigin-RevId: 365777493
2021-03-30 03:53:53 -07:00
A. Unique TensorFlower 9ebadc4c4d Integrate LLVM at llvm/llvm-project@482283042f
Updates LLVM usage to match
[482283042f79](https://github.com/llvm/llvm-project/commit/482283042f79)

PiperOrigin-RevId: 365710568
2021-03-29 18:29:48 -07:00
Geoffrey Martin-Noble a2b6060c0c Add folder for HLO NotOp
PiperOrigin-RevId: 364989658
2021-03-25 02:08:38 -07:00
Stella Laurenzo 7f2bf48b8b Integrate LLVM at llvm/llvm-project@b24436ac96
Updates LLVM usage to match
[b24436ac96bd](https://github.com/llvm/llvm-project/commit/b24436ac96bd)

PiperOrigin-RevId: 364615807
2021-03-23 12:20:17 -07:00
A. Unique TensorFlower 8987dfd1d6 [MLIR][HLO] Move broadcasts over n-ary shape-preserving ops
This will open up more fusion opportunities.

PiperOrigin-RevId: 364577231
2021-03-23 09:38:39 -07:00
A. Unique TensorFlower 0c4a89e52c [MLIR][MHLO] Implement shape reification for `dynamic_broadcast_in_dim`
PiperOrigin-RevId: 363622714
2021-03-18 03:39:15 -07:00
A. Unique TensorFlower f1408e791e [MLIR][HLO] Add `Elementwise` trait to unary element-wise ops
PiperOrigin-RevId: 363428909
2021-03-17 08:51:17 -07:00
A. Unique TensorFlower c54527fe88 Integrate LLVM at llvm/llvm-project@678241795c
Updates LLVM usage to match
[678241795c95](https://github.com/llvm/llvm-project/commit/678241795c95)

PiperOrigin-RevId: 363257913
2021-03-16 13:33:00 -07:00