Commit Graph

357 Commits

Author SHA1 Message Date
A. Unique TensorFlower 7f86dd9f7e Constant fold compare EQ if one of the operands is true and compare NE if one of the operands is false.
PiperOrigin-RevId: 373058030
2021-05-10 18:53:49 -07:00
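As an illustrative sketch of this fold (the MLIR below is an assumption based on 2021-era MHLO syntax, not the exact pattern from the commit): comparing a predicate for equality against a constant `true` reduces to the predicate itself, and likewise `NE` against `false`.

```mlir
// Hypothetical sketch: compare EQ against a constant `true`
%true = mhlo.constant dense<true> : tensor<i1>
%0 = "mhlo.compare"(%arg0, %true) {comparison_direction = "EQ"}
    : (tensor<i1>, tensor<i1>) -> tensor<i1>
// The fold replaces all uses of %0 with %arg0 directly;
// compare NE against a constant `false` folds the same way.
```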
A. Unique TensorFlower d8c40b691c [MLIR][HLO] Add `shape.broadcast` canonicalization to unblock broadcast moving
PiperOrigin-RevId: 372120309
2021-05-05 07:16:49 -07:00
dfki-jugr 6bc854f5d9 PR #48667: [mlir-hlo] Added RegionBranchOpInterfaces to lmhlo operations.
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/48667

Added RegionBranchOpInterfaces to lmhlo operations that use regions.
This is needed, since the bufferization features in MLIR have to reason about the control flow within these operations.
Copybara import of the project:

--
572fd7d850a46630b812da84e9094280f89f259e by Julian Gross <julian.gross@dfki.de>:

Added RegionBranchOpInterfaces to lmhlo operations.

PiperOrigin-RevId: 372070825
2021-05-05 00:27:56 -07:00
Geoffrey Martin-Noble ac68145565 [MHLO:Linalg] Add support for lowering concat of unsigned tensors
Nothing about concat here really. Just need to plumb through the type
conversion.

PiperOrigin-RevId: 372012957
2021-05-04 15:57:54 -07:00
Geoffrey Martin-Noble 5a60793b31 [MHLO:Linalg] Add support for lowering dynamic-slice on unsigned ints
PiperOrigin-RevId: 371979004
2021-05-04 13:08:36 -07:00
Adrian Kuegel bae9936b98 Replace !any_of with none_of
No functional change.

PiperOrigin-RevId: 371868221
2021-05-04 01:55:36 -07:00
Adrian Kuegel 384b87fad0 Lower ReluGrad via chlo::BroadcastSelect.
This allows us to remove the constraint that the input needs to have a static shape.

PiperOrigin-RevId: 371862452
2021-05-04 01:03:02 -07:00
Benjamin Kramer f4414fcd66 [MHLO:Linalg] Add support for lowering unsigned ops
This strips away the signedness with a type converter, using unrealized
conversion casts. The rest is mostly mechanically pushing the original op down
the pipeline so lowerings can see the original types.

Signed types stay signless for now. This can be changed in the HLO bridge later.

I did a pass over all ops and added unsigned lowerings where they were missing.
There may be more.

Currently the lowering will die at a later stage because it doesn't understand
the unrealized casts.

PiperOrigin-RevId: 371077494
2021-04-29 02:27:35 -07:00
A. Unique TensorFlower e500ab37a1 Introduce constant folds for ReduceOp with single LogicalAnd or LogicalOr op.
PiperOrigin-RevId: 370551483
2021-04-26 15:11:27 -07:00
Adrian Kuegel 0e2b255f01 Lower LHLO::AbsOp to complex dialect.
Also fix the traits for LHLO::AbsOp to allow different types and add a
verifier.

PiperOrigin-RevId: 370438790
2021-04-26 05:44:03 -07:00
A. Unique TensorFlower 0569b7f7a4 [MLIR][MHLO] Generalize extent tensor cast elimination in bcast moving
PiperOrigin-RevId: 370112887
2021-04-23 10:52:50 -07:00
A. Unique TensorFlower 21e9365718 [MLIR][MHLO] Generalize extent tensor cast elimination in bcast moving
PiperOrigin-RevId: 370085141
2021-04-23 08:31:11 -07:00
A. Unique TensorFlower da5d252143 [MLIR] Merge extent tensor casts into `shape_of` ops in broadcast moving
PiperOrigin-RevId: 370058002
2021-04-23 04:44:01 -07:00
A. Unique TensorFlower 890a79641e Integrate LLVM at llvm/llvm-project@37e1458128
Updates LLVM usage to match
[37e145812855](https://github.com/llvm/llvm-project/commit/37e145812855)

PiperOrigin-RevId: 370020161
2021-04-22 22:57:08 -07:00
Hanhan Wang 49df46893c Add support for lowering variadic mhlo.reduce op.
Also add more lowerings for body ops. Some MinOp and MaxOp instances can be legalized to
SelectOp + CompareOp.

PiperOrigin-RevId: 369891551
2021-04-22 09:50:49 -07:00
Benjamin Kramer 4d435a817e [mhlo:linalg] Add support for lowering mhlo.concatenate to Linalg ops.
This uses an indexed linalg.generic, which is rather awkward standalone but
allows fusing into the output of the concatenate and avoids ever materializing
it in memory. I think this is the only way to get that with the current linalg
stack; fusion across a concatenate would require more infrastructure.

PiperOrigin-RevId: 369677652
2021-04-21 10:01:08 -07:00
A. Unique TensorFlower 8db96f54d3 [mhlo] Add a folder for mhlo.map which does nothing but return one of the arguments.
Add a folder for maps whose body returns only one of the arguments. When this arises, the fold replaces the map output with the corresponding operand tensor.

PiperOrigin-RevId: 369304322
2021-04-19 14:36:08 -07:00
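A sketch of the kind of map this folder targets (illustrative MLIR in 2021-era syntax, assumed rather than taken from the commit): the body ignores the second argument and returns the first, so the whole op reduces to its first operand.

```mlir
// Hypothetical sketch: a map whose body just forwards its first argument
%0 = "mhlo.map"(%arg0, %arg1) ({
^bb0(%a: tensor<f32>, %b: tensor<f32>):
  "mhlo.return"(%a) : (tensor<f32>) -> ()
}) {dimensions = dense<0> : tensor<1xi64>}
    : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
// The fold replaces all uses of %0 with %arg0.
```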
A. Unique TensorFlower 9374a1c0c5 [MLIR] Fix merge of assuming ops
Assuming ops can only be merged if their witnesses will dominate the merged
assuming op. This is not the case if the second op's witness is a result of the
first.

PiperOrigin-RevId: 369192868
2021-04-19 04:21:08 -07:00
Adrian Kuegel db9f298505 Generate Equal and NotEqual kernels for complex types.
PiperOrigin-RevId: 368586877
2021-04-15 00:35:52 -07:00
Rahul Joshi c75cbf4ac7 [MLIR][NFC] Rename ReduceOp operands() => inputs().
- Rename to avoid confusion, as `operands` generally includes all operands of an operation.

PiperOrigin-RevId: 368479524
2021-04-14 12:08:23 -07:00
Prashant Kumar 236e7db5c0 PR #47315: [MLIR] Add concatenateOp lowering from lmhlo to Affine.
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/47315

Lowering of `concatenateOp` is added from lmhlo to Affine. The lowering
has been added as a part of `lhlo-legalize-to-affine` pass.

Signed-off-by: Prashant Kumar <prashantk@polymagelabs.com>
Copybara import of the project:

--
15314e4579f7a6901cf3475eff25962a34772eaf by Prashant Kumar <prashantk@polymagelabs.com>:

[MLIR] Add concatenateOp lowering from lmhlo to Affine.

Lowering of `concatenateOp` is added from lmhlo to Affine. The lowering
has been added as a part of `lhlo-legalize-to-affine` pass.

Signed-off-by: Prashant Kumar <prashantk@polymagelabs.com>
PiperOrigin-RevId: 368465992
2021-04-14 11:06:38 -07:00
Jacques Pienaar fdd75daed6 Add shape function for MHLO RngNormal and RngUniform
PiperOrigin-RevId: 368276963
2021-04-13 12:59:42 -07:00
Hanhan Wang a3fc99efe0 Add support for lowering mhlo.dynamic_slice to Linalg ops.
PiperOrigin-RevId: 368033540
2021-04-12 10:34:55 -07:00
A. Unique TensorFlower 0ec0a23e61 [MLIR][HLO] Generalize merged witnesses in `move-up-dynamic-broadcasts-for-fusion`
PiperOrigin-RevId: 368012460
2021-04-12 08:55:29 -07:00
A. Unique TensorFlower 6d2209e301 [MLIR][HLO] Canonicalize chained broadcasts
Compose two subsequent `dynamic_broadcast_in_dim` ops into one.

PiperOrigin-RevId: 367630360
2021-04-09 07:35:34 -07:00
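A before/after sketch of this canonicalization (illustrative MLIR; the shapes and broadcast dimensions here are made-up examples, not from the commit): the outer broadcast's dimension mapping is composed with the inner one, so the intermediate value disappears.

```mlir
// Before: two chained dynamic broadcasts (dim 0 -> dim 0 -> dim 1)
%0 = "mhlo.dynamic_broadcast_in_dim"(%arg, %shape0)
    {broadcast_dimensions = dense<[0]> : tensor<1xi64>}
    : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
%1 = "mhlo.dynamic_broadcast_in_dim"(%0, %shape1)
    {broadcast_dimensions = dense<[1, 2]> : tensor<2xi64>}
    : (tensor<?x?xf32>, tensor<3xindex>) -> tensor<?x?x?xf32>
// After: one broadcast with the composed dimension mapping (0 -> 1)
%1 = "mhlo.dynamic_broadcast_in_dim"(%arg, %shape1)
    {broadcast_dimensions = dense<[1]> : tensor<1xi64>}
    : (tensor<?xf32>, tensor<3xindex>) -> tensor<?x?x?xf32>
```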
Hanhan Wang fdb653788c Add support for lowering and/or within mhlo.reduce op body.
PiperOrigin-RevId: 367627034
2021-04-09 07:09:13 -07:00
Adrian Kuegel cc607bc72d Support up to rank 8 in rank specialization for SelectOp.
PiperOrigin-RevId: 367406557
2021-04-08 04:55:41 -07:00
Hanhan Wang c466f08993 Add checks in ReduceWindowOpOnTensorsConversion.
The pattern does not support ops with non-zero padding config. Add a check to
prevent unexpected lowering.

It is not easy to add tests because other patterns will convert body ops, and
that causes issues like invalid IR.

PiperOrigin-RevId: 367202450
2021-04-07 05:46:01 -07:00
A. Unique TensorFlower a62382cf1f Integrate LLVM at llvm/llvm-project@0e92cbd6a6
Updates LLVM usage to match
[0e92cbd6a652](https://github.com/llvm/llvm-project/commit/0e92cbd6a652)

PiperOrigin-RevId: 367062980
2021-04-06 12:40:51 -07:00
Rahul Joshi ff2cbfa2ec [MLIR] Add support for representing variadic reduce-window in HLO/LMHLO dialect.
- Fixed a subset of transformations to handle variadic reduce-window.

PiperOrigin-RevId: 366278650
2021-04-01 10:24:50 -07:00
A. Unique TensorFlower af3bc47a8b Integrate LLVM at llvm/llvm-project@8396aeb07c
Updates LLVM usage to match
[8396aeb07cdd](https://github.com/llvm/llvm-project/commit/8396aeb07cdd)

PiperOrigin-RevId: 366034463
2021-03-31 08:01:34 -07:00
A. Unique TensorFlower bbe0aa204c [MLIR][MHLO] Merge assuming ops with compatible witnesses
PiperOrigin-RevId: 366018349
2021-03-31 06:11:38 -07:00
Adrian Kuegel c8157ba4df Move code from helper struct to the only user.
We don't need the separate helper struct anymore, because it is now only used
in one place.

PiperOrigin-RevId: 366012639
2021-03-31 05:21:34 -07:00
Adrian Kuegel 4033a56750 Add special cases for SelectOp rank specialization.
We now use the same special cases for all ops with arity >= 2.
For binary ops, we now have only one special case if at least one of the
operands has exactly one element. In that case, we reshape both operands to
rank 1. Before, we had separate special cases depending on whether the left-hand
side or the right-hand side has a scalar shape.

PiperOrigin-RevId: 366005835
2021-03-31 04:28:51 -07:00
A. Unique TensorFlower 9206805c58 [MLIR][MHLO] Do not yield results of ops that were moved out of assuming regions
When an op is moved out of an assuming region, we already know statically that it
is independent of the assuming region. Hence, there is no need to yield its
results.

PiperOrigin-RevId: 366001405
2021-03-31 03:50:27 -07:00
A. Unique TensorFlower 8ade5d78c8 [MLIR][MHLO] Move `cstr_broadcastable` and `shape_of` out of `assuming` regions
Add a pattern to move operations out of the assuming op. This is only valid for
constraint-independent ops, like `cstr_broadcastable` and `shape_of`. It will
eventually allow making assuming regions' constraints independent of each
other so that they can be merged.

PiperOrigin-RevId: 365993145
2021-03-31 02:39:07 -07:00
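A minimal sketch of the hoisting (illustrative MLIR using the `shape` dialect; the exact witness and types are assumptions, not taken from the commit): a `shape_of` inside an `assuming` region does not depend on the witness, so it can move above the region.

```mlir
// Before: a constraint-independent op trapped inside the assuming region
%res = shape.assuming %w -> (tensor<1xindex>) {
  %s = shape.shape_of %arg : tensor<?xf32> -> tensor<1xindex>
  shape.assuming_yield %s : tensor<1xindex>
}
// After: shape_of is hoisted; it never depended on the witness %w
%s = shape.shape_of %arg : tensor<?xf32> -> tensor<1xindex>
```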
A. Unique TensorFlower eade942635 [MLIR][MHLO] Add pattern to move ops into the assuming region
This will eventually allow making assuming regions' constraints independent
of each other.

PiperOrigin-RevId: 365985081
2021-03-31 01:23:31 -07:00
Geoffrey Martin-Noble 5d65758e8c Canonicalize MHLO Case and If Ops with constant conditions
ReplaceOpWithRegion was taken directly from ScfOps. We should maybe put that somewhere common in core.

PiperOrigin-RevId: 365936724
2021-03-30 17:58:01 -07:00
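A sketch of the `mhlo.if` case (illustrative MLIR; the operand layout and region arguments are assumptions based on the 2021-era op definition): when the predicate is a constant, the taken region's body can be inlined in place of the op.

```mlir
// Hypothetical sketch: an mhlo.if whose predicate is statically known
%pred = mhlo.constant dense<true> : tensor<i1>
%0 = "mhlo.if"(%pred, %a, %b) ({
^bb0(%t: tensor<f32>):
  %r = "mhlo.exponential"(%t) : (tensor<f32>) -> tensor<f32>
  "mhlo.return"(%r) : (tensor<f32>) -> ()
}, {
^bb0(%f: tensor<f32>):
  "mhlo.return"(%f) : (tensor<f32>) -> ()
}) : (tensor<i1>, tensor<f32>, tensor<f32>) -> tensor<f32>
// Canonicalization inlines the true region's body and removes the op.
```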
Geoffrey Martin-Noble 2fb2a92c6e Verify mhlo.if region return types match op
This matches the behavior of mhlo.case. Additionally, fix the verification of CaseOp in the case of nested ops with mhlo.return-containing regions.

PiperOrigin-RevId: 365936672
2021-03-30 17:57:20 -07:00
Geoffrey Martin-Noble 7a9394dca5 Restrict MHLO control flow ops to single-block regions
PiperOrigin-RevId: 365935824
2021-03-30 17:51:03 -07:00
Adrian Kuegel c1a6ae8994 Generalize the HloBinaryElementwiseAdaptor
We can also use it for ternary ops like Select if we change the signature so
that a ValueRange is passed in.
Also remove special casing for HloComplexAdaptor. It can be handled with the
generic adaptor as well.

PiperOrigin-RevId: 365777493
2021-03-30 03:53:53 -07:00
Adrian Kuegel 6388e8d9ee mlir-hlo-opt: set preloadDialectsInContext to false.
This requires specifying dependent dialects in several passes.

PiperOrigin-RevId: 365758084
2021-03-30 01:07:14 -07:00
A. Unique TensorFlower 85a306d356 [MLIR][MHLO] Add pattern to inline broadcasted shapes
Simplify reasoning about `cstr_broadcastable` ops in the
`mhlo-move-up-dynamic-broadcasts-for-fusion` pass.

PiperOrigin-RevId: 365560893
2021-03-29 06:32:32 -07:00
A. Unique TensorFlower fb819c1de8 [MLIR][MHLO] Apply patterns in MoveUpDynamicBroadcastsForFusionPass greedily
PiperOrigin-RevId: 365556488
2021-03-29 06:02:06 -07:00
Geoffrey Martin-Noble a2b6060c0c Add folder for HLO NotOp
PiperOrigin-RevId: 364989658
2021-03-25 02:08:38 -07:00
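A small sketch of what such a folder handles (illustrative MLIR; the constant values are made-up): `mhlo.not` of a constant folds to the complemented constant.

```mlir
// Hypothetical sketch: not-of-constant folds at compile time
%c = mhlo.constant dense<[true, false]> : tensor<2xi1>
%0 = "mhlo.not"(%c) : (tensor<2xi1>) -> tensor<2xi1>
// After folding, %0 becomes: mhlo.constant dense<[false, true]> : tensor<2xi1>
```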
Adrian Kuegel a34aa699f8 Fix tanh lowering for NaN input.
If the input is NaN, the result should be NaN, too.

PiperOrigin-RevId: 364788902
2021-03-24 06:34:36 -07:00
Stella Laurenzo 7f2bf48b8b Integrate LLVM at llvm/llvm-project@b24436ac96
Updates LLVM usage to match
[b24436ac96bd](https://github.com/llvm/llvm-project/commit/b24436ac96bd)

PiperOrigin-RevId: 364615807
2021-03-23 12:20:17 -07:00
A. Unique TensorFlower 8987dfd1d6 [MLIR][HLO] Move broadcasts over n-ary shape-preserving ops
This will open up more fusion opportunities.

PiperOrigin-RevId: 364577231
2021-03-23 09:38:39 -07:00
A. Unique TensorFlower 54f37abc28 [MHLO] Move broadcasts over elementwise ops
Move up dynamic broadcasts and shape computations to allow for more fusion
opportunities.

PiperOrigin-RevId: 364514158
2021-03-23 02:34:41 -07:00
Benjamin Kramer 59fa7c0ef7 [MHLO:linalg] Lower all dynamic broadcasts of static shapes to linalg.generic
We only need the memref_reinterpret_cast if we don't know whether a dimension
gets expanded or not. With static shapes we know that a dimension can only be
expanded if it's a static 1, so we lower it in the same way we lower fully
static broadcasts.

PiperOrigin-RevId: 363859181
2021-03-19 03:52:02 -07:00