Adrian Kuegel
376da8592f
Add MLIR generated SignOp GPU kernel for complex types.
...
PiperOrigin-RevId: 379924456
2021-06-17 03:56:58 -07:00
Adrian Kuegel
73ed8cbf82
Add MLIR generated NegOp GPU kernel for complex types.
...
PiperOrigin-RevId: 379905236
2021-06-17 01:30:51 -07:00
Hanhan Wang
b44ab8ad49
Add support for lowering DataMovementOp ops to Linalg on unsigned types.
...
PiperOrigin-RevId: 379527360
2021-06-15 10:58:22 -07:00
Benjamin Kramer
d1c60df2fe
[MHLO:linalg] Be more aggressive about turning mhlo.const into std.constant
...
On tensors the only difference between these ops is that mhlo.const supports unsigned types.
PiperOrigin-RevId: 377970948
2021-06-07 11:58:23 -07:00
Hanhan Wang
25b93c8d66
Add support for lowering mhlo.iota/dynamic_iota to Linalg on unsigned types.
...
PiperOrigin-RevId: 377956338
2021-06-07 10:59:33 -07:00
A. Unique TensorFlower
db05388a3c
Integrate LLVM at llvm/llvm-project@da3ed58b97
...
Updates LLVM usage to match
[da3ed58b97c1](https://github.com/llvm/llvm-project/commit/da3ed58b97c1)
PiperOrigin-RevId: 377432380
2021-06-03 20:45:18 -07:00
A. Unique TensorFlower
4620410f18
Integrate LLVM at llvm/llvm-project@b25546a4b4
...
Updates LLVM usage to match
[b25546a4b406](https://github.com/llvm/llvm-project/commit/b25546a4b406)
PiperOrigin-RevId: 377077163
2021-06-02 09:32:59 -07:00
A. Unique TensorFlower
cc1b22e618
[HLO][Linalg] Support scalar broadcasts in point-wise converter
...
This is needed for operations that support this limited form of broadcasting,
namely `mhlo.select`.
PiperOrigin-RevId: 376655844
2021-05-31 03:50:23 -07:00
Hanhan Wang
402b74ed7f
Fix type bug in mhlo.dynamic-update-slice lowering.
...
The operand type can be f32, so we should not use the operand type for the clamp
operations (which act on indices).
PiperOrigin-RevId: 376286524
2021-05-27 17:53:49 -07:00
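The fix above concerns the index clamp in the dynamic-update-slice lowering. A minimal Python sketch of the clamp semantics (the function name is illustrative, not from the codebase): each start index is clamped so the update window stays inside the operand, and this is integer index arithmetic, independent of the operand's element type.

```python
def clamp_start_indices(start, operand_shape, update_shape):
    # dynamic-update-slice semantics: clamp each start index so the update
    # window fits inside the operand. This is index arithmetic; it must not
    # be performed in the operand's element type (which may be f32).
    return [min(max(s, 0), od - ud)
            for s, od, ud in zip(start, operand_shape, update_shape)]
```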
Hanhan Wang
28c411606f
Add support for lowering mhlo.dynamic-update-slice ops to Linalg and std ops.
...
PiperOrigin-RevId: 376042810
2021-05-26 15:31:05 -07:00
Robert Suderman
26a0053d7d
Remove linalg.indexed_generic from mhlo lowerings to linalg
...
IndexedGeneric is going away. Transition to using linalg.index instead.
PiperOrigin-RevId: 376002501
2021-05-26 12:24:23 -07:00
Adrian Kuegel
a847109ac7
Support complex types when converting HLO multiply op.
...
We can lower it to the MulOp in the complex dialect.
PiperOrigin-RevId: 375675079
2021-05-25 04:35:34 -07:00
Adrian Kuegel
5816920258
Support complex types when converting HLO divide op.
...
We can lower it to the DivOp in the complex dialect.
Also add tests to hlo-legalize-to-linalg.mlir for CompareOp lowering of complex
types. These were forgotten in a previous commit.
PiperOrigin-RevId: 375669125
2021-05-25 03:43:46 -07:00
Hanhan Wang
1ba4c714c9
Add support for lowering mhlo.scatter ops to Linalg.
...
This only works for updating tensors, not for add/min/max computations. It
requires the index depth to be 1 because of a limitation in Linalg: we cannot
compare multiple indices without packing them.
PiperOrigin-RevId: 375137721
2021-05-21 12:17:14 -07:00
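A Python sketch of the restricted scatter form the commit above supports: pure updates (no add/min/max combiner) with index depth 1, i.e. one scalar index per update row. The function name is illustrative; this models the semantics, not the generated Linalg IR.

```python
import numpy as np

def scatter_update_1d_indices(operand, indices, updates):
    # Pure-update scatter with index depth 1: each row of `updates`
    # overwrites the slice of `operand` selected by one scalar index.
    # No combiner computation; last write wins.
    result = operand.copy()
    for idx, upd in zip(indices, updates):
        result[idx] = upd
    return result

out = scatter_update_1d_indices(np.zeros((3, 2), dtype=int),
                                [2, 0],
                                np.array([[1, 1], [2, 2]]))
```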
Hanhan Wang
cd8f585cf7
[MHLO:Linalg] Add support for lowering torch_index_select of unsigned tensors
...
Also fixes typos in tests.
PiperOrigin-RevId: 374979460
2021-05-20 17:03:05 -07:00
Stella Laurenzo
71394fb301
Properly handle if DynamicBroadcastInDimOp shape is not of index type.
...
* The op defines this to be index, any integer, or pred (i1).
* Many TensorFlow legalizations produce integers for the shape.
PiperOrigin-RevId: 374566113
2021-05-18 21:12:11 -07:00
Hanhan Wang
d764806c1e
[MHLO:Linalg] Add support for lowering reshape of unsigned tensors
...
PiperOrigin-RevId: 373461627
2021-05-12 15:14:29 -07:00
Adrian Kuegel
b2bc17c8b0
Integrate LLVM at llvm/llvm-project@632ebc4ab4
...
Updates LLVM usage to match
[632ebc4ab437](https://github.com/llvm/llvm-project/commit/632ebc4ab437)
PiperOrigin-RevId: 372330771
2021-05-06 06:37:39 -07:00
Geoffrey Martin-Noble
ac68145565
[MHLO:Linalg] Add support for lowering concat of unsigned tensors
...
Nothing concat-specific here, really; we just need to plumb through the type
conversion.
PiperOrigin-RevId: 372012957
2021-05-04 15:57:54 -07:00
Geoffrey Martin-Noble
5a60793b31
[MHLO:Linalg] Add support for lowering dynamic-slice on unsigned ints
...
PiperOrigin-RevId: 371979004
2021-05-04 13:08:36 -07:00
Benjamin Kramer
f4414fcd66
[MHLO:Linalg] Add support for lowering unsigned ops
...
This strips away the signedness with a type converter, using unrealized
conversion casts. The rest is mostly mechanically pushing the original op down
the pipeline so lowerings can see the original types.
Signed types stay signless for now. This can be changed in the HLO bridge later.
I did a pass over all ops and added unsigned lowerings where they were missing.
There may be more.
Currently the lowering will die at a later stage because it doesn't understand
the unrealized casts.
PiperOrigin-RevId: 371077494
2021-04-29 02:27:35 -07:00
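The signedness-stripping described above can be sketched in Python (illustrative names, not the actual type converter): on two's-complement bit patterns, add/mul are sign-agnostic, while compares need the original signedness, which is why the lowerings must still see the original unsigned types.

```python
def as_signless(x, bits=8):
    # Keep only the two's-complement bit pattern, as when ui8 is mapped
    # to signless i8 by the type converter.
    return x & ((1 << bits) - 1)

def signed_view(x, bits=8):
    # Reinterpret a signless bit pattern as two's-complement signed.
    x = as_signless(x, bits)
    return x - (1 << bits) if x >= (1 << (bits - 1)) else x

a, b = 200, 100                       # ui8 values
wrap_add = as_signless(a + b)         # sign-agnostic: 300 mod 256 = 44
signed_gt = signed_view(a) > signed_view(b)  # 200 reads as -56 if signed
unsigned_gt = a > b                   # compares must know the signedness
```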
Hanhan Wang
49df46893c
Add support for lowering variadic mhlo.reduce op.
...
Also add more lowerings for body ops. Some MinOp and MaxOp can be legalized to
SelectOp + CompareOp.
PiperOrigin-RevId: 369891551
2021-04-22 09:50:49 -07:00
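The MinOp/MaxOp legalization mentioned above is just a compare feeding a select. A minimal Python sketch of that shape (names illustrative):

```python
def select(pred, on_true, on_false):
    # Models SelectOp: pick one of two values based on a predicate.
    return on_true if pred else on_false

def max_via_select(a, b):
    # MaxOp legalized as CompareOp (GT) + SelectOp.
    return select(a > b, a, b)
```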
Benjamin Kramer
4d435a817e
[mhlo:linalg] Add support for lowering mhlo.concatenate to Linalg ops.
...
This uses an indexed linalg.generic, which is rather awkward standalone but
allows fusing into the output of the concatenate and avoids ever materializing
it in memory. I think this is the only way to get that with the current linalg
stack; fusion across a concatenate would require more infrastructure.
PiperOrigin-RevId: 369677652
2021-04-21 10:01:08 -07:00
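The index arithmetic the indexed generic body performs can be sketched in Python: read one element of a (1-D) concatenation by walking the per-input extents, without ever materializing the concatenated tensor. This is a sketch of the idea, not the generated IR.

```python
def concat_element(tensors, dim_sizes, i):
    # Read element i of the concatenation of `tensors` along the single
    # dimension, by subtracting each input's extent until i falls inside
    # one input. The concatenated result is never materialized.
    for t, size in zip(tensors, dim_sizes):
        if i < size:
            return t[i]
        i -= size
    raise IndexError("index out of range of the concatenation")
```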
Hanhan Wang
a3fc99efe0
Add support for lowering mhlo.dynamic_slice to Linalg ops.
...
PiperOrigin-RevId: 368033540
2021-04-12 10:34:55 -07:00
Hanhan Wang
fdb653788c
Add support for lowering and/or within mhlo.reduce op body.
...
PiperOrigin-RevId: 367627034
2021-04-09 07:09:13 -07:00
Rahul Joshi
ff2cbfa2ec
[MLIR] Add support for representing variadic reduce-window in HLO/LMHLO dialect.
...
- Fixed a subset of transformations to handle variadic reduce-window.
PiperOrigin-RevId: 366278650
2021-04-01 10:24:50 -07:00
Benjamin Kramer
59fa7c0ef7
[MHLO:linalg] Lower all dynamic broadcasts of static shapes to linalg.generic
...
We only need the memref_reinterpret_cast if we don't know whether a dimension
gets expanded or not. With static shapes we know that a dimension can only be
expanded if it's a static 1, so lower it in the same way we lower fully
static broadcasts.
PiperOrigin-RevId: 363859181
2021-03-19 03:52:02 -07:00
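The static-shape observation above can be sketched in Python (illustrative function name): with static shapes, only a static-1 dimension can expand, so its source index is fixed to 0 and every other dimension maps through unchanged, just as in the fully static broadcast lowering.

```python
def broadcast_source_index(operand_shape, result_index):
    # A static-1 dimension is the only one that can be expanded by the
    # broadcast, so it always reads source index 0; other dimensions map
    # through unchanged. No memref_reinterpret_cast needed.
    return [0 if d == 1 else i for d, i in zip(operand_shape, result_index)]
```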
Hanhan Wang
2e0ee7759b
Add support for lowering mhlo.torch_index_select to Linalg on tensors.
...
The change upstreams the pattern from IREE repo to MHLO repo.
PiperOrigin-RevId: 363406294
2021-03-17 06:33:41 -07:00
A. Unique TensorFlower
c54527fe88
Integrate LLVM at llvm/llvm-project@678241795c
...
Updates LLVM usage to match
[678241795c95](https://github.com/llvm/llvm-project/commit/678241795c95)
PiperOrigin-RevId: 363257913
2021-03-16 13:33:00 -07:00
Hanhan Wang
4f5e1c51dd
Add support for lowering NHWC pooling mhlo.reduce_window to Linalg on tensors.
...
The change upstreams the pattern from IREE repo to MHLO repo.
PiperOrigin-RevId: 362312573
2021-03-11 09:41:34 -08:00
Hanhan Wang
630cabefb0
Add support for lowering 2D depthwise mhlo.conv to Linalg on tensors.
...
The change upstreams the pattern from IREE repo to MHLO repo.
PiperOrigin-RevId: 362300550
2021-03-11 08:41:38 -08:00
Benjamin Kramer
94f9740c67
[MLIR][HLO:Linalg] Lower mhlo.dynamic_iota to indexed_generic
...
This is the same as iota, but instead of taking the dimensions from the result
tensor we use the supplied shape extents tensor.
PiperOrigin-RevId: 362298548
2021-03-11 08:31:29 -08:00
Benjamin Kramer
09f8046816
[MLIR:HLO:LINALG] Fix codegen for mhlo.reshape when one side is rank 0
...
This is an annoying edge case because the collapse->expand lowering expects at
least R1 or it will produce invalid linalg reshapes. Using the direct lowering
works fine.
PiperOrigin-RevId: 362269199
2021-03-11 05:29:56 -08:00
Geoffrey Martin-Noble
8687f3e4cf
Lower MHLO Dot to type-polymorphic linalg named ops
...
The linalg named ops are now type polymorphic, so the type-monomorphic
varieties are redundant (and will be deleted soon).
PiperOrigin-RevId: 360509010
2021-03-02 14:00:58 -08:00
Benjamin Kramer
e19ccf975e
Filter static dimensions from dynamic_broadcast_in_dim's init_tensor
...
Otherwise we'd generate invalid IR for those cases.
PiperOrigin-RevId: 360144122
2021-03-01 03:03:54 -08:00
Hanhan Wang
a8f99ee0f5
Fix the shape of linalg.init_tensor in conv op lowering.
...
The output spatial dims are not the same as the input spatial dims. Only static
output spatial dims are supported for now.
PiperOrigin-RevId: 359775479
2021-02-26 09:34:11 -08:00
Hanhan Wang
90f0d7f935
Add support for lowering mhlo.conv to Linalg on tensors.
...
This pattern only works for normal convolutions. It does not work for depthwise
convolutions. The Linalg conv ops are defined with static rank, so it only
supports 1d/2d/3d cases, which are the most typical cases.
This also refactors out the same check in lmhlo.conv lowering.
PiperOrigin-RevId: 359503527
2021-02-25 05:59:08 -08:00
Hanhan Wang
45a1249fe2
Add support for lowering mhlo.pad to linalg.pad_tensor
...
The change upstreams the pattern from IREE repo to MHLO repo.
PiperOrigin-RevId: 359481543
2021-02-25 03:00:39 -08:00
Geoffrey Martin-Noble
89f7f2bd65
Lower integer matmuls to linalg
...
PiperOrigin-RevId: 359306495
2021-02-24 09:45:07 -08:00
Hanhan Wang
475b4a06a5
Add support for lowering mhlo.slice to subtensor.
...
PiperOrigin-RevId: 359297978
2021-02-24 09:06:09 -08:00
Adrian Kuegel
37e31f8b26
Lower Expm1 kernel to math.ExpM1.
...
PiperOrigin-RevId: 358152908
2021-02-18 04:54:23 -08:00
Adrian Kuegel
b594254c79
[mhlo] Lower int->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357566262
2021-02-15 06:38:09 -08:00
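The int->bool conversion above is a comparison with zero, not a truncation to the low bit; a sketch in Python showing why that matters:

```python
def int_to_bool(x):
    # int -> i1 as a comparison with zero (matching TF/C++ semantics).
    # Truncating to the low bit would instead map 2 -> 0 (false).
    return x != 0
```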
Benjamin Kramer
240a44de82
[mhlo] Lower int->int cast to sign extension instead of zero extension
...
Signless does not mean unsigned here. Currently mhlo only has signed types.
PiperOrigin-RevId: 357561712
2021-02-15 05:58:47 -08:00
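The cast fix above replicates the sign bit when widening, since signless values model signed semantics here. A Python sketch of sign extension on bit patterns (illustrative name):

```python
def sext(x, from_bits=8, to_bits=32):
    # Sign-extend a `from_bits`-wide two's-complement value to `to_bits`:
    # interpret the top bit as the sign, then re-encode at the wider width.
    x &= (1 << from_bits) - 1
    if x >= 1 << (from_bits - 1):
        x -= 1 << from_bits
    return x & ((1 << to_bits) - 1)
```

For example, `0xFF` (i.e. -1 as i8) sign-extends to `0xFFFFFFFF`, whereas zero extension would produce `0x000000FF`.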
Adrian Kuegel
8672735e9a
[mhlo] Lower float->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357553098
2021-02-15 04:36:36 -08:00
A. Unique TensorFlower
89d81adf6d
[mhlo] Lower float->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357541594
2021-02-15 03:11:56 -08:00
Benjamin Kramer
3e80d91e73
[mhlo] Lower float->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357534118
2021-02-15 02:17:19 -08:00
A. Unique TensorFlower
4060a86fe2
Integrate LLVM at llvm/llvm-project@2bfe27da17
...
Updates LLVM usage to match
[2bfe27da171e](https://github.com/llvm/llvm-project/commit/2bfe27da171e)
PiperOrigin-RevId: 357196336
2021-02-12 08:32:03 -08:00
Mahesh Ravishankar
44d0464d16
Use linalg.fill on tensors instead of tensor.generate in MHLO -> Linalg conversion.
...
linalg.fill on tensors is a structured op that allows using tile + fuse
to reduce the fill overhead.
PiperOrigin-RevId: 355490400
2021-02-03 15:03:49 -08:00
Tres Popp
ae722a883f
Improve performance of lowered chlo.pow with integers
...
The new lowering always takes 6 loop iterations rather than iterating the exponent's number of times.
PiperOrigin-RevId: 355131133
2021-02-02 03:28:38 -08:00
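The fixed trip count above is square-and-multiply: six iterations cover exponents up to 2**6 - 1, beyond which integer powers overflow anyway for any base other than 0 and ±1. A Python sketch of the idea (the real lowering also has to handle negative exponents and such special bases):

```python
def ipow(base, exp, steps=6):
    # Exponentiation by squaring with a fixed trip count: each step
    # consumes one bit of the exponent, so `steps` iterations handle
    # exponents up to 2**steps - 1, independent of the exponent's value.
    result = 1
    for _ in range(steps):
        if exp & 1:
            result *= base
        base *= base
        exp >>= 1
    return result
```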
Hanhan Wang
30ce82790d
Upstream mhlo.reduce lowering to Linalg to MHLO repo.
...
In IREE, we use an indexed generic op to handle the initial value. However, here
we lower it to a generic op that carries an init_tensor, and leave the handling
of initialization to later passes.
PiperOrigin-RevId: 354294807
2021-01-28 05:46:09 -08:00