Commit Graph

377 Commits

Author SHA1 Message Date
Hanhan Wang 402b74ed7f Fix type bug in mhlo.dynamic-update-slice lowering.
The operand element type can be f32, so we should not use the operand type for
the clamp operations on the start indices.

PiperOrigin-RevId: 376286524
2021-05-27 17:53:49 -07:00
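A minimal sketch of the corrected behavior, with hypothetical value names (not taken from the commit): each start index is clamped as an index-typed value into `[0, operand_dim - update_dim]`, independent of the operand's f32 element type.

```mlir
// Hypothetical lowered clamp for one start index (std-dialect ops circa 2021).
%c0 = constant 0 : index
%start = tensor.extract %start_index_0[] : tensor<i32>
%start_idx = index_cast %start : i32 to index
%max_start = subi %operand_dim_0, %update_dim_0 : index
// clamp %start_idx into [0, %max_start] on index values, not f32 values
%ge = cmpi sgt, %start_idx, %c0 : index
%lb = select %ge, %start_idx, %c0 : index
%le = cmpi slt, %lb, %max_start : index
%clamped = select %le, %lb, %max_start : index
```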
Hanhan Wang 28c411606f Add support for lowering mhlo.dynamic-update-slice ops to Linalg and std ops.
PiperOrigin-RevId: 376042810
2021-05-26 15:31:05 -07:00
Robert Suderman 26a0053d7d Remove linalg.indexed_generic from mhlo lowerings to linalg
IndexedGeneric is going away. Transition to using `linalg.index` instead.

PiperOrigin-RevId: 376002501
2021-05-26 12:24:23 -07:00
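For illustration, a sketch of the replacement pattern (shapes and names assumed): a plain `linalg.generic` recovers iteration indices via `linalg.index` instead of receiving them as the leading block arguments of `linalg.indexed_generic`.

```mlir
#map = affine_map<(d0) -> (d0)>
// Fills a tensor with its own indices; the body queries the index of
// dimension 0 with linalg.index rather than taking it as a block argument.
%result = linalg.generic
    {indexing_maps = [#map], iterator_types = ["parallel"]}
    outs(%init : tensor<8xindex>) {
  ^bb0(%out: index):
    %i = linalg.index 0 : index
    linalg.yield %i : index
} -> tensor<8xindex>
```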
A. Unique TensorFlower 4ebcebf31c [MLIR][HLO] Exploit scalar properties in rank specialization lowering
Take advantage of the fact that scalars are already ranked and that they are
neutral elements to broadcasting. Do not reshape scalars, do not consider them
for broadcasting, and materialize ranked operations on scalars accordingly.

PiperOrigin-RevId: 375968371
2021-05-26 09:59:13 -07:00
Benjamin Kramer edf5ec8084 Integrate LLVM at llvm/llvm-project@cb65419b1a
Updates LLVM usage to match
[cb65419b1ac0](https://github.com/llvm/llvm-project/commit/cb65419b1ac0)

PiperOrigin-RevId: 375915516
2021-05-26 04:47:24 -07:00
A. Unique TensorFlower cb46298a07 [MLIR][HLO] Support all smaller ranks in rank specialization cases
Rank specialization cases can be applied to all argument tensors of smaller
ranks than the expected maximum rank. This is crucial if all operands are
effectively scalars and the maximum reduced rank is 0.

PiperOrigin-RevId: 375712020
2021-05-25 08:38:53 -07:00
Adrian Kuegel a847109ac7 Support complex types when converting HLO multiply op.
We can lower it to the MulOp in the complex dialect.

PiperOrigin-RevId: 375675079
2021-05-25 04:35:34 -07:00
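A sketch under assumed names: after this change, the scalar body of the lowered `linalg.generic` for a complex multiply can use the complex dialect directly.

```mlir
%0 = mhlo.multiply %lhs, %rhs : tensor<4xcomplex<f32>>
// scalar body of the resulting linalg.generic:
%p = complex.mul %l, %r : complex<f32>
```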
Adrian Kuegel 5816920258 Support complex types when converting HLO divide op.
We can lower it to the DivOp in the complex dialect.
Also add tests to hlo-legalize-to-linalg.mlir for CompareOp lowering of complex
types. These were forgotten in a previous commit.

PiperOrigin-RevId: 375669125
2021-05-25 03:43:46 -07:00
Adrian Kuegel 758ae7da6b Support complex types when converting HLO compare op (EQ/NE).
We can lower it to the EqualOp / NotEqualOp in the complex dialect.

PiperOrigin-RevId: 375655092
2021-05-25 01:54:27 -07:00
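Similarly for comparisons, a sketch with assumed names; unlike `complex.mul` and `complex.div`, the results here are `i1`:

```mlir
%eq = complex.eq  %l, %r : complex<f32>   // i1 result
%ne = complex.neq %l, %r : complex<f32>   // i1 result
```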
wyzhao b93e54d8a4 PR #49454: [MLIR][DISC] Upgrade to use the new `reifyReturnTypeShapes` interface.
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49454

The new interface is safer to use during dialect conversion
(e.g. converting from the tensor world to the buffer world).
Copybara import of the project:

--
a6968072d59bec3c3bbaef0121d297e807c37c91 by Wenyi Zhao <reyizero@gmail.com>:

[MLIR][DISC] Upgrade to use the new `reifyReturnTypeShapes` interface.

The new interface is safer to use during dialect conversion
(e.g. converting from the tensor world to the buffer world).

--
55e7c6b7f2f99b99e226645a57e2433fae3e90ed by Wenyi Zhao <reyizero@gmail.com>:

minor fix

PiperOrigin-RevId: 375500273
2021-05-24 10:11:55 -07:00
Hanhan Wang 1ba4c714c9 Add support for lowering mhlo.scatter ops to Linalg.
This only works for updating tensors, not for add/min/max computations. It
requires the index depth to be 1 because of a limitation in Linalg: we cannot
compare multiple indices without packing them.

PiperOrigin-RevId: 375137721
2021-05-21 12:17:14 -07:00
A. Unique TensorFlower 97e6103933 [MLIR][HLO] Reshape to scalars in rank specialization
Scalars were incorrectly cast to scalar tensors when they had to be reshaped.

PiperOrigin-RevId: 375049088
2021-05-21 03:12:16 -07:00
A. Unique TensorFlower 3daf65578a [MLIR][HLO] Add scalar cases for binary rank specialization
For rank specialization clusters that have only two operands, we can materialize
two extra cases in which either of them is a scalar. This avoids redundant index
computations in these cases.

PiperOrigin-RevId: 375037390
2021-05-21 01:35:44 -07:00
Hanhan Wang cd8f585cf7 [MHLO:Linalg] Add support for lowering torch_index_select of unsigned tensors
Also fixes typos in tests.

PiperOrigin-RevId: 374979460
2021-05-20 17:03:05 -07:00
Rahul Joshi 41f663ce47 [HLO] Adopt custom syntax for convolution dimensions and window attributes (HLO)
PiperOrigin-RevId: 374923250
2021-05-20 12:13:50 -07:00
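As an illustration of the custom syntax, a hypothetical 2-D NHWC convolution (shapes assumed):

```mlir
%0 = mhlo.convolution(%input, %filter)
       dim_numbers = [b, 0, 1, f]x[0, 1, i, o]->[b, 0, 1, f],
       window = {stride = [1, 1], pad = [[0, 0], [0, 0]], rhs_dilate = [1, 1]}
       {batch_group_count = 1 : i64, feature_group_count = 1 : i64}
     : (tensor<1x8x8x4xf32>, tensor<3x3x4x16xf32>) -> tensor<1x6x6x16xf32>
```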
Rahul Joshi fc88cf1ff4 [HLO] Adopt custom syntax for convolution dims and window attributes for LMHLO_GPU
PiperOrigin-RevId: 374889917
2021-05-20 09:41:48 -07:00
A. Unique TensorFlower c62fd89663 [MLIR][HLO] Add equal shapes case to rank specialization
Also restructure lowering implementation to facilitate the addition or removal
of special cases.

PiperOrigin-RevId: 374626365
2021-05-19 05:38:42 -07:00
Stella Laurenzo 71394fb301 Properly handle if DynamicBroadcastInDimOp shape is not of index type.
* The op defines this to be index, any integer, or pred (i1).
* Many TensorFlow legalizations produce integers for the shape.

PiperOrigin-RevId: 374566113
2021-05-18 21:12:11 -07:00
Stella Laurenzo 0fe07e3814 Separate CHLO transforms for expanding compositions and lowering broadcasts.
* The former is typically invariant regardless of backend.
* The latter may need to be done differently depending on capabilities of the lowering target.

PiperOrigin-RevId: 374492924
2021-05-18 13:33:59 -07:00
A. Unique TensorFlower 6af3d2df91 [MLIR][HLO] Add rank specialization with multiple non-scalar operands
Add lowering pattern for rank specialization clusters with more than one
non-scalar operand. The lowering resembles that of the `TransformUnrankedHlo`
pass and switches over the maximal rank, with cases for ranks 1 through 8.

PiperOrigin-RevId: 374377002
2021-05-18 03:02:45 -07:00
A. Unique TensorFlower 474e419729 [MLIR][HLO] Generalize rank specialization with single operand
The pattern can be generalized to also rank specialize operations with a single
non-scalar operand. Also extract helper functions that can be reused in
following specializations.

PiperOrigin-RevId: 374198381
2021-05-17 08:12:55 -07:00
A. Unique TensorFlower c514c73390 [MLIR][HLO] Extend rank specialization clustering pass
Also cluster operations that operate on same-shape operands. These implicitly
satisfy the broadcasting semantics requirement. Also, add test cases for some
cases that appear in the current MLIR-generated kernels.

PiperOrigin-RevId: 374191950
2021-05-17 07:31:36 -07:00
A. Unique TensorFlower ccd70d5717 [MLIR][HLO] Add `rank-specialization-to-scf` pass
Currently the lowering is only implemented for the unary case. The n-ary case
will follow.

PiperOrigin-RevId: 374162772
2021-05-17 03:56:23 -07:00
Rahul Joshi a361253e4f [HLO] Add custom print/parse for window attributes of convolutions (in LMHLO)
PiperOrigin-RevId: 373807616
2021-05-14 09:47:25 -07:00
A. Unique TensorFlower 76341f3720 [MLIR][HLO] Add mixed test for `rank-specialization-cluster` pass
PiperOrigin-RevId: 373762814
2021-05-14 04:40:40 -07:00
A. Unique TensorFlower d2cc74317c Implement constant folding for mhlo.Sign.
PiperOrigin-RevId: 373550014
2021-05-13 03:54:04 -07:00
Hanhan Wang d764806c1e [MHLO:Linalg] Add support for lowering reshape of unsigned tensors
PiperOrigin-RevId: 373461627
2021-05-12 15:14:29 -07:00
A. Unique TensorFlower 420c42a0a1 [MLIR][HLO] Support CHLO unary operations in rank specialization clustering
PiperOrigin-RevId: 373397321
2021-05-12 10:20:43 -07:00
A. Unique TensorFlower 596918a6f1 [MLIR][HLO] Allow rank specialization clustering with `chlo.broadcast_select` op
PiperOrigin-RevId: 373379990
2021-05-12 08:56:49 -07:00
Rahul Joshi e260aa771c [HLO] Add custom print/parse for convolution dimension numbers (in LMHLO)
PiperOrigin-RevId: 373379227
2021-05-12 08:52:46 -07:00
A. Unique TensorFlower 875803e5e1 [MLIR][HLO] Add more tests for `rank-specialization-cluster` pass
PiperOrigin-RevId: 373343750
2021-05-12 04:46:30 -07:00
A. Unique TensorFlower 313d24bc8f [MLIR][HLO] Add `rank-specialization-cluster` pass
Add a pass to cluster unranked C/HLO operations into one
`chlo.rank_specialization_cluster` op. The C/HLO operations are moved to the
body of the operation. Later passes can use this to rank-specialize all these
operations together.

PiperOrigin-RevId: 373336725
2021-05-12 03:46:01 -07:00
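A sketch of such a cluster (operand names and shapes assumed): compatible ops are moved into the body, and later passes can rank-specialize the whole region at once.

```mlir
%0 = "chlo.rank_specialization_cluster"(%arg0, %arg1) ({
^bb0(%a: tensor<*xf32>, %b: tensor<*xf32>):
  %1 = chlo.broadcast_add %a, %b
      : (tensor<*xf32>, tensor<*xf32>) -> tensor<*xf32>
  %2 = "mhlo.tanh"(%1) : (tensor<*xf32>) -> tensor<*xf32>
  "chlo.rank_specialization_cluster_yield"(%2) : (tensor<*xf32>) -> ()
}) : (tensor<*xf32>, tensor<*xf32>) -> tensor<*xf32>
```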
A. Unique TensorFlower 7f7a86ad0d [MLIR][HLO] Implement `RegionBranchOpInterface` for rank specialization cluster
PiperOrigin-RevId: 373163196
2021-05-11 09:03:05 -07:00
A. Unique TensorFlower 96a47345cc [MLIR][HLO] Add `rank_specialization_cluster` op to CHLO
The operation will be used to cluster compatible operations that can be
rank-specialized collectively.

PiperOrigin-RevId: 373128557
2021-05-11 05:17:42 -07:00
Benjamin Kramer 86b7eb434c [MHLO] Don't crash trying to constant fold mhlo.convert on complex
MLIR still doesn't have a complex attribute, so this can't be implemented;
just bail out instead of trying to fold.

PiperOrigin-RevId: 373128307
2021-05-11 05:15:57 -07:00
A. Unique TensorFlower 7f86dd9f7e Constant fold compare EQ if one of the operands is true and compare NE if one of the operands is false.
PiperOrigin-RevId: 373058030
2021-05-10 18:53:49 -07:00
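A sketch of the fold on `i1` operands (names assumed, attribute spelling as of this era):

```mlir
%true = mhlo.constant dense<true> : tensor<4xi1>
%0 = "mhlo.compare"(%arg0, %true) {comparison_direction = "EQ"}
    : (tensor<4xi1>, tensor<4xi1>) -> tensor<4xi1>
// (x == true) folds to x, so %0 is replaced by %arg0; analogously,
// (x != false) folds to x.
```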
A. Unique TensorFlower 2af1796194 Integrate LLVM at llvm/llvm-project@5c7b43aa82
Updates LLVM usage to match
[5c7b43aa8298](https://github.com/llvm/llvm-project/commit/5c7b43aa8298)

PiperOrigin-RevId: 373028739
2021-05-10 15:46:34 -07:00
Rahul Joshi ce4c76314e [NFC] Remove all_gather_dimension from all-to-all in the unit test
PiperOrigin-RevId: 372463706
2021-05-06 18:14:52 -07:00
Rahul Joshi 8c854886cb [XLA:GPU] Allow all-gather operands to have different element types.
- XLA's all-gather combiner can create such all-gathers, so relax the same element type
  trait for all-gathers.

PiperOrigin-RevId: 372380446
2021-05-06 11:04:13 -07:00
Adrian Kuegel b2bc17c8b0 Integrate LLVM at llvm/llvm-project@632ebc4ab4
Updates LLVM usage to match
[632ebc4ab437](https://github.com/llvm/llvm-project/commit/632ebc4ab437)

PiperOrigin-RevId: 372330771
2021-05-06 06:37:39 -07:00
A. Unique TensorFlower d8c40b691c [MLIR][HLO] Add `shape.broadcast` canonicalization to unblock broadcast moving
PiperOrigin-RevId: 372120309
2021-05-05 07:16:49 -07:00
Geoffrey Martin-Noble ac68145565 [MHLO:Linalg] Add support for lowering concat of unsigned tensors
Nothing concat-specific here, really; we just need to plumb through the type
conversion.

PiperOrigin-RevId: 372012957
2021-05-04 15:57:54 -07:00
Geoffrey Martin-Noble 5a60793b31 [MHLO:Linalg] Add support for lowering dynamic-slice on unsigned ints
PiperOrigin-RevId: 371979004
2021-05-04 13:08:36 -07:00
Adrian Kuegel 384b87fad0 Lower ReluGrad via chlo::BroadcastSelect.
This allows us to get rid of the constraint that the operand needs to have a static shape.

PiperOrigin-RevId: 371862452
2021-05-04 01:03:02 -07:00
Benjamin Kramer f4414fcd66 [MHLO:Linalg] Add support for lowering unsigned ops
This strips away the signedness with a type converter, using unrealized
conversion casts. The rest is mostly mechanically pushing the original op down
the pipeline so lowerings can see the original types.

Signed types stay signless for now. This can be changed in the HLO bridge later.

I did a pass over all ops and added unsigned lowerings where they were missing.
There may be more.

Currently the lowering will die at a later stage because it doesn't understand
the unrealized casts.

PiperOrigin-RevId: 371077494
2021-04-29 02:27:35 -07:00
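A sketch of the mechanism (names assumed): the type converter rewrites `ui32` to signless `i32`, and unrealized conversion casts bridge the two worlds until a later stage resolves them.

```mlir
%s = builtin.unrealized_conversion_cast %arg0 : tensor<4xui32> to tensor<4xi32>
// ... the linalg lowering operates on the signless i32 values ...
%r = builtin.unrealized_conversion_cast %lowered : tensor<4xi32> to tensor<4xui32>
```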
A. Unique TensorFlower 4d41b11f3b Integrate LLVM at llvm/llvm-project@671f0e2e18
Updates LLVM usage to match
[671f0e2e189c](https://github.com/llvm/llvm-project/commit/671f0e2e189c)

PiperOrigin-RevId: 371011125
2021-04-28 16:37:53 -07:00
Benjamin Kramer b2a23bf269 Integrate LLVM at llvm/llvm-project@4b13b7581d
Updates LLVM usage to match
[4b13b7581db5](https://github.com/llvm/llvm-project/commit/4b13b7581db5)

PiperOrigin-RevId: 370736351
2021-04-27 12:19:05 -07:00
A. Unique TensorFlower e500ab37a1 Introduce constant folds for ReduceOp with single LogicalAnd or LogicalOr op.
PiperOrigin-RevId: 370551483
2021-04-26 15:11:27 -07:00
Adrian Kuegel 0e2b255f01 Lower LHLO::AbsOp to complex dialect.
Also fix the traits for LHLO::AbsOp to allow different types and add a
verifier.

PiperOrigin-RevId: 370438790
2021-04-26 05:44:03 -07:00
A. Unique TensorFlower 0569b7f7a4 [MLIR][MHLO] Generalize extent tensor cast elimination in bcast moving
PiperOrigin-RevId: 370112887
2021-04-23 10:52:50 -07:00
A. Unique TensorFlower 21e9365718 [MLIR][MHLO] Generalize extent tensor cast elimination in bcast moving
PiperOrigin-RevId: 370085141
2021-04-23 08:31:11 -07:00
A. Unique TensorFlower da5d252143 [MLIR] Merge extent tensor casts into `shape_of` ops in broadcast moving
PiperOrigin-RevId: 370058002
2021-04-23 04:44:01 -07:00
A. Unique TensorFlower 890a79641e Integrate LLVM at llvm/llvm-project@37e1458128
Updates LLVM usage to match
[37e145812855](https://github.com/llvm/llvm-project/commit/37e145812855)

PiperOrigin-RevId: 370020161
2021-04-22 22:57:08 -07:00
Hanhan Wang 49df46893c Add support for lowering variadic mhlo.reduce op.
Also add more lowerings for body ops. Some MinOp and MaxOp can be legalized to
SelectOp + CompareOp.

PiperOrigin-RevId: 369891551
2021-04-22 09:50:49 -07:00
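For illustration (assumed names), a maximum in the reduce body can be rewritten as a compare followed by a select:

```mlir
%cmp = "mhlo.compare"(%lhs, %rhs) {comparison_direction = "GT"}
    : (tensor<f32>, tensor<f32>) -> tensor<i1>
%max = "mhlo.select"(%cmp, %lhs, %rhs)
    : (tensor<i1>, tensor<f32>, tensor<f32>) -> tensor<f32>
```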
Benjamin Kramer 4d435a817e [mhlo:linalg] Add support for lowering mhlo.concatenate to Linalg ops.
This uses an indexed linalg.generic, which is rather awkward standalone but
allows fusing into the output of the concatenate and avoids ever materializing
it in memory. I think this is the only way to get that with the current linalg
stack; fusion across a concatenate would require more infrastructure.

PiperOrigin-RevId: 369677652
2021-04-21 10:01:08 -07:00
A. Unique TensorFlower 8db96f54d3 [mhlo] Add a folder for mhlo.map which does nothing but return one of the arguments.
Add a folder for maps whose body returns only one of the arguments. When this arises, the fold replaces the map's result with the corresponding operand tensor.

PiperOrigin-RevId: 369304322
2021-04-19 14:36:08 -07:00
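A sketch of a foldable map (shapes assumed): the body returns its first argument unchanged, so the op folds to `%arg0`.

```mlir
%0 = "mhlo.map"(%arg0, %arg1) ({
^bb0(%a: tensor<f32>, %b: tensor<f32>):
  "mhlo.return"(%a) : (tensor<f32>) -> ()
}) {dimensions = dense<0> : tensor<1xi64>}
    : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
```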
A. Unique TensorFlower 9374a1c0c5 [MLIR] Fix merge of assuming ops
Assuming ops can only be merged if their witnesses will dominate the merged
assuming op. This is not the case if the second op's witness is a result of the
first.

PiperOrigin-RevId: 369192868
2021-04-19 04:21:08 -07:00
Adrian Kuegel db9f298505 Generate Equal and NotEqual kernels for complex types.
PiperOrigin-RevId: 368586877
2021-04-15 00:35:52 -07:00
Prashant Kumar 236e7db5c0 PR #47315: [MLIR] Add concatenateOp lowering from lmhlo to Affine.
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/47315

Lowering of `concatenateOp` is added from lmhlo to Affine. The lowering
has been added as part of the `lhlo-legalize-to-affine` pass.

Signed-off-by: Prashant Kumar <prashantk@polymagelabs.com>
Copybara import of the project:

--
15314e4579f7a6901cf3475eff25962a34772eaf by Prashant Kumar <prashantk@polymagelabs.com>:

[MLIR] Add concatenateOp lowering from lmhlo to Affine.

Lowering of `concatenateOp` is added from lmhlo to Affine. The lowering
has been added as part of the `lhlo-legalize-to-affine` pass.

Signed-off-by: Prashant Kumar <prashantk@polymagelabs.com>
PiperOrigin-RevId: 368465992
2021-04-14 11:06:38 -07:00
Jacques Pienaar fdd75daed6 Add shape function for MHLO RngNormal and RngUniform
PiperOrigin-RevId: 368276963
2021-04-13 12:59:42 -07:00
Hanhan Wang a3fc99efe0 Add support for lowering mhlo.dynamic_slice to Linalg ops.
PiperOrigin-RevId: 368033540
2021-04-12 10:34:55 -07:00
A. Unique TensorFlower 0ec0a23e61 [MLIR][HLO] Generalize merged witnesses in `move-up-dynamic-broadcasts-for-fusion`
PiperOrigin-RevId: 368012460
2021-04-12 08:55:29 -07:00
Alexander Belyaev 8a9bf05d78 Integrate LLVM at llvm/llvm-project@6ce76ff7eb
Updates LLVM usage to match
[6ce76ff7eb76](https://github.com/llvm/llvm-project/commit/6ce76ff7eb76)

PiperOrigin-RevId: 367678843
2021-04-09 12:11:56 -07:00
A. Unique TensorFlower 6d2209e301 [MLIR][HLO] Canonicalize chained broadcasts
Compose two subsequent `dynamic_broadcast_in_dim` ops into one.

PiperOrigin-RevId: 367630360
2021-04-09 07:35:34 -07:00
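A sketch of the composition (shapes assumed): the dimension mappings compose, here 0 -> 1 followed by 1 -> 2 gives 0 -> 2.

```mlir
%0 = "mhlo.dynamic_broadcast_in_dim"(%arg, %shape0)
    {broadcast_dimensions = dense<1> : tensor<1xi64>}
    : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
%1 = "mhlo.dynamic_broadcast_in_dim"(%0, %shape1)
    {broadcast_dimensions = dense<[1, 2]> : tensor<2xi64>}
    : (tensor<?x?xf32>, tensor<3xindex>) -> tensor<?x?x?xf32>
// canonicalizes to a single broadcast:
%2 = "mhlo.dynamic_broadcast_in_dim"(%arg, %shape1)
    {broadcast_dimensions = dense<2> : tensor<1xi64>}
    : (tensor<?xf32>, tensor<3xindex>) -> tensor<?x?x?xf32>
```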
Hanhan Wang fdb653788c Add support for lowering and/or within mhlo.reduce op body.
PiperOrigin-RevId: 367627034
2021-04-09 07:09:13 -07:00
Adrian Kuegel cc607bc72d Support up to rank 8 in rank specialization for SelectOp.
PiperOrigin-RevId: 367406557
2021-04-08 04:55:41 -07:00
Rahul Joshi ff2cbfa2ec [MLIR] Add support for representing variadic reduce-window in HLO/LMHLO dialect.
-  Fixed a subset of transformations to handle variadic reduce-window.

PiperOrigin-RevId: 366278650
2021-04-01 10:24:50 -07:00
A. Unique TensorFlower c23be1841c [MLIR] Add example test case for `move-up-dynamic-broadcasts-for-fusion` pass
Add an example test case as it appears in the lowering of two subsequent `tf.Sub`
ops.

PiperOrigin-RevId: 366219139
2021-04-01 03:24:43 -07:00
A. Unique TensorFlower af3bc47a8b Integrate LLVM at llvm/llvm-project@8396aeb07c
Updates LLVM usage to match
[8396aeb07cdd](https://github.com/llvm/llvm-project/commit/8396aeb07cdd)

PiperOrigin-RevId: 366034463
2021-03-31 08:01:34 -07:00
A. Unique TensorFlower bbe0aa204c [MLIR][MHLO] Merge assuming ops with compatible witnesses
PiperOrigin-RevId: 366018349
2021-03-31 06:11:38 -07:00
Adrian Kuegel 4033a56750 Add special cases for SelectOp rank specialization.
We now use the same special cases for all ops with arity >= 2.
For binary ops, we now have only one special case if at least one of the
operands has exactly one element. In that case, we reshape both operands to
rank 1. Before, we had separate special cases depending on whether the
left-hand side or the right-hand side has a scalar shape.

PiperOrigin-RevId: 366005835
2021-03-31 04:28:51 -07:00
A. Unique TensorFlower 9206805c58 [MLIR][MHLO] Do not yield results of ops that were moved out of assuming regions
When an op is moved out of an assuming region we already know statically that it
is independent of the assuming region. Hence, there is no need to yield its
results.

PiperOrigin-RevId: 366001405
2021-03-31 03:50:27 -07:00
A. Unique TensorFlower 8ade5d78c8 [MLIR][MHLO] Move `cstr_broadcastable` and `shape_of` out of `assuming` regions
Add a pattern to move operations out of the assuming op. This is only valid for
constraint-independent ops, like `cstr_broadcastable` and `shape_of`. It will
eventually allow making assuming regions' constraints independent of each other
so that they can be merged.

PiperOrigin-RevId: 365993145
2021-03-31 02:39:07 -07:00
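A sketch of the hoist (names assumed): `shape.shape_of` does not depend on the witness, so it can move out of the `shape.assuming` region.

```mlir
%w = shape.cstr_broadcastable %s0, %s1 : tensor<?xindex>, tensor<?xindex>
%r = shape.assuming %w -> (tensor<?xindex>) {
  %s = shape.shape_of %arg : tensor<*xf32> -> tensor<?xindex>
  shape.assuming_yield %s : tensor<?xindex>
}
// after the pattern: the constraint-independent op sits above the region
%s = shape.shape_of %arg : tensor<*xf32> -> tensor<?xindex>
```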
A. Unique TensorFlower eade942635 [MLIR][MHLO] Add pattern to move ops into the assuming region
This will eventually allow making assuming regions' constraints independent
of each other.

PiperOrigin-RevId: 365985081
2021-03-31 01:23:31 -07:00
Geoffrey Martin-Noble 5d65758e8c Canonicalize MHLO Case and If Ops with constant conditions
ReplaceOpWithRegion was taken directly from ScfOps. We should maybe put that somewhere common in core.

PiperOrigin-RevId: 365936724
2021-03-30 17:58:01 -07:00
Geoffrey Martin-Noble 2fb2a92c6e Verify mhlo.if region return types match op
This matches the behavior of mhlo.case. Additionally, fix the verification of CaseOp in the case of nested ops with mhlo.return-containing regions.

PiperOrigin-RevId: 365936672
2021-03-30 17:57:20 -07:00
Geoffrey Martin-Noble 7a9394dca5 Restrict MHLO control flow ops to single-block regions
PiperOrigin-RevId: 365935824
2021-03-30 17:51:03 -07:00
A. Unique TensorFlower 9ebadc4c4d Integrate LLVM at llvm/llvm-project@482283042f
Updates LLVM usage to match
[482283042f79](https://github.com/llvm/llvm-project/commit/482283042f79)

PiperOrigin-RevId: 365710568
2021-03-29 18:29:48 -07:00
A. Unique TensorFlower 85a306d356 [MLIR][MHLO] Add pattern to inline broadcasted shapes
Simplify reasoning about `cstr_broadcastable` ops in the
`mhlo-move-up-dynamic-broadcasts-for-fusion` pass.

PiperOrigin-RevId: 365560893
2021-03-29 06:32:32 -07:00
A. Unique TensorFlower fb819c1de8 [MLIR][MHLO] Apply patterns in MoveUpDynamicBroadcastsForFusionPass greedily
PiperOrigin-RevId: 365556488
2021-03-29 06:02:06 -07:00
Geoffrey Martin-Noble a2b6060c0c Add folder for HLO NotOp
PiperOrigin-RevId: 364989658
2021-03-25 02:08:38 -07:00
Adrian Kuegel a34aa699f8 Fix tanh lowering for NaN input.
If the input is NaN, the result should be NaN, too.

PiperOrigin-RevId: 364788902
2021-03-24 06:34:36 -07:00
Stella Laurenzo 7f2bf48b8b Integrate LLVM at llvm/llvm-project@b24436ac96
Updates LLVM usage to match
[b24436ac96bd](https://github.com/llvm/llvm-project/commit/b24436ac96bd)

PiperOrigin-RevId: 364615807
2021-03-23 12:20:17 -07:00
A. Unique TensorFlower 8987dfd1d6 [MLIR][HLO] Move broadcasts over n-ary shape-preserving ops
This will open up more fusion opportunities.

PiperOrigin-RevId: 364577231
2021-03-23 09:38:39 -07:00
A. Unique TensorFlower 618223778d Integrate LLVM at llvm/llvm-project@5657f93e78
Updates LLVM usage to match
[5657f93e788f](https://github.com/llvm/llvm-project/commit/5657f93e788f)

PiperOrigin-RevId: 364541987
2021-03-23 06:15:46 -07:00
A. Unique TensorFlower 54f37abc28 [MHLO] Move broadcasts over elementwise ops
Move up dynamic broadcasts and shape computations to allow for more fusion
opportunities.

PiperOrigin-RevId: 364514158
2021-03-23 02:34:41 -07:00
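A sketch of the movement (shapes assumed): the broadcast migrates above a shape-preserving elementwise op, exposing the op to fusion with its new neighbors.

```mlir
// before:
%y = "mhlo.sqrt"(%x) : (tensor<?xf32>) -> tensor<?xf32>
%b = "mhlo.dynamic_broadcast_in_dim"(%y, %shape)
    {broadcast_dimensions = dense<1> : tensor<1xi64>}
    : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
// after:
%b2 = "mhlo.dynamic_broadcast_in_dim"(%x, %shape)
    {broadcast_dimensions = dense<1> : tensor<1xi64>}
    : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
%y2 = "mhlo.sqrt"(%b2) : (tensor<?x?xf32>) -> tensor<?x?xf32>
```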
Benjamin Kramer 59fa7c0ef7 [MHLO:linalg] Lower all dynamic broadcasts of static shapes to linalg.generic
We only need the memref_reinterpret_cast if we don't know whether a dimension
gets expanded or not. With static shapes we know that a dimension can only be
expanded if it's a static 1, so lower it in the same way we lower fully
static broadcasts.

PiperOrigin-RevId: 363859181
2021-03-19 03:52:02 -07:00
Hanhan Wang 2e0ee7759b Add support for lowering mhlo.torch_index_select to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 363406294
2021-03-17 06:33:41 -07:00
A. Unique TensorFlower c54527fe88 Integrate LLVM at llvm/llvm-project@678241795c
Updates LLVM usage to match
[678241795c95](https://github.com/llvm/llvm-project/commit/678241795c95)

PiperOrigin-RevId: 363257913
2021-03-16 13:33:00 -07:00
A. Unique TensorFlower 2be112a603 [MLIR][MHLO] Approximate `tf.Tanh` as constant +/-1 for small/large values
Fix issue raised in https://github.com/tensorflow/tensorflow/issues/47724

PiperOrigin-RevId: 363210296
2021-03-16 10:14:30 -07:00
Jacques Pienaar 3de2024a9b Avoid creating tuple type only for verification
Make the error message a bit more verbose; it is also cheaper to verify the elements than to create a (potentially) new type.

PiperOrigin-RevId: 363073909
2021-03-15 17:58:19 -07:00
Tim Shen d16860d26d [MLIR] Change LMHLO Conditional and While to capture needed buffers, instead of passing them by operands.
This is consistent with the design of LMHLO FusionOp, and it simplifies the
usage. Before the change, those redundant operands ended up unused, as all
sub-regions can already capture the needed buffers.

PiperOrigin-RevId: 362381155
2021-03-11 14:42:41 -08:00
Hanhan Wang 4f5e1c51dd Add support for lowering NHWC pooling mhlo.reduce_window to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362312573
2021-03-11 09:41:34 -08:00
Hanhan Wang 630cabefb0 Add support for lowering 2D depthwise mhlo.conv to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362300550
2021-03-11 08:41:38 -08:00
Benjamin Kramer 94f9740c67 [MLIR][HLO:Linalg] Lower mhlo.dynamic_iota to indexed_generic
This is the same as iota, but instead of taking the dimensions from the result
tensor we use the supplied shape extents tensor.

PiperOrigin-RevId: 362298548
2021-03-11 08:31:29 -08:00
Benjamin Kramer 09f8046816 [MLIR:HLO:LINALG] Fix codegen for mhlo.reshape when one side is rank 0
This is an annoying edge case because the collapse->expand lowering expects at
least R1 or it will produce invalid linalg reshapes. Using the direct lowering
works fine.

PiperOrigin-RevId: 362269199
2021-03-11 05:29:56 -08:00
Benjamin Kramer d77b556822 [MLIR][MHLO] Allow recursion in the shape_of mover
This allows it to push shape_of over a chain of ops all the way to the top.

PiperOrigin-RevId: 362249009
2021-03-11 02:52:21 -08:00
Benjamin Kramer 67a770e4e0 [HLO:MLIR] Make binary op type reification emit shape_of instead of tensor ops
This gives cleaner code and allows shape optimizations to happen on the result.

PiperOrigin-RevId: 362242975
2021-03-11 02:01:35 -08:00
Rahul Joshi 9902e6ee32 [HLO] Add LMHLO CollectivePermute verification.
- Extract verification of source-target pairs attached to collective permute into a common
  helper function and use that to verify both MHLO and LMHLO variants.
- Change MlirGpuTestBase::ParseMlirModule to allow returning back a failure, and use
  that to update the mlir_gpu_compile_test to check the new behavior.

PiperOrigin-RevId: 362156962
2021-03-10 15:37:12 -08:00
A. Unique TensorFlower c217a6ef61 [MHLO] Add pass to move up dynamic broadcasts for fusion
For now, the pass only reifies the required shape computations. Moving
broadcasts will follow to allow for fusion across them.

PiperOrigin-RevId: 362033715
2021-03-10 06:21:57 -08:00