Commit Graph

57 Commits

Author SHA1 Message Date
Hanhan Wang fdb653788c Add support for lowering and/or within mhlo.reduce op body.
PiperOrigin-RevId: 367627034
2021-04-09 07:09:13 -07:00
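A minimal sketch of a reduction whose body is a logical AND, in the
generic mhlo syntax of this period (shapes illustrative):

```mlir
// Reduce a 1-D i1 tensor with AND; the body operates on rank-0 tensors.
%all = "mhlo.reduce"(%arg, %true) ({
  ^bb0(%lhs: tensor<i1>, %rhs: tensor<i1>):
    %and = "mhlo.and"(%lhs, %rhs) : (tensor<i1>, tensor<i1>) -> tensor<i1>
    "mhlo.return"(%and) : (tensor<i1>) -> ()
}) {dimensions = dense<0> : tensor<1xi64>}
    : (tensor<8xi1>, tensor<i1>) -> tensor<i1>
```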
Rahul Joshi ff2cbfa2ec [MLIR] Add support for representing variadic reduce-window in HLO/LMHLO dialect.
- Fixed a subset of transformations to handle variadic reduce-window.

PiperOrigin-RevId: 366278650
2021-04-01 10:24:50 -07:00
Benjamin Kramer 59fa7c0ef7 [MHLO:linalg] Lower all dynamic broadcasts of static shapes to linalg.generic
We only need the memref_reinterpret_cast if we don't know whether a
dimension gets expanded or not. With static shapes we know that a
dimension can only be expanded if it's a static 1, so we lower it the
same way we lower fully static broadcasts.

PiperOrigin-RevId: 363859181
2021-03-19 03:52:02 -07:00
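A minimal sketch, assuming a static 1x4 operand broadcast into a dynamic
?x4 result: the leading dimension is a static 1, so it can only be
expanded, and a plain linalg.generic that always reads row 0 suffices:

```mlir
#map_in  = affine_map<(d0, d1) -> (0, d1)>
#map_out = affine_map<(d0, d1) -> (d0, d1)>
%bcast = linalg.generic
    {indexing_maps = [#map_in, #map_out],
     iterator_types = ["parallel", "parallel"]}
    ins(%operand : tensor<1x4xf32>) outs(%init : tensor<?x4xf32>) {
  ^bb0(%in: f32, %out: f32):
    linalg.yield %in : f32
} -> tensor<?x4xf32>
```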
Hanhan Wang 2e0ee7759b Add support for lowering mhlo.torch_index_select to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 363406294
2021-03-17 06:33:41 -07:00
A. Unique TensorFlower c54527fe88 Integrate LLVM at llvm/llvm-project@678241795c
Updates LLVM usage to match
[678241795c95](https://github.com/llvm/llvm-project/commit/678241795c95)

PiperOrigin-RevId: 363257913
2021-03-16 13:33:00 -07:00
Hanhan Wang 4f5e1c51dd Add support for lowering NHWC pooling mhlo.reduce_window to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362312573
2021-03-11 09:41:34 -08:00
Hanhan Wang 630cabefb0 Add support for lowering 2D depthwise mhlo.conv to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362300550
2021-03-11 08:41:38 -08:00
Benjamin Kramer 94f9740c67 [MLIR][HLO:Linalg] Lower mhlo.dynamic_iota to indexed_generic
This is the same as iota, but instead of taking the dimensions from the
result tensor, we use the supplied shape extents tensor.

PiperOrigin-RevId: 362298548
2021-03-11 08:31:29 -08:00
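A minimal sketch, assuming a 1-D i64 iota whose extent comes from the
shape extents tensor (syntax approximates the linalg of this period):

```mlir
// Take the extent from the supplied shape tensor, not the result type.
%e0   = tensor.extract %shape[%c0] : tensor<1xi64>
%d0   = index_cast %e0 : i64 to index
%init = linalg.init_tensor [%d0] : tensor<?xi64>
%iota = linalg.indexed_generic
    {indexing_maps = [affine_map<(d0) -> (d0)>],
     iterator_types = ["parallel"]}
    outs(%init : tensor<?xi64>) {
  ^bb0(%i: index, %out: i64):  // indexed_generic passes the loop index
    %v = index_cast %i : index to i64
    linalg.yield %v : i64
} -> tensor<?xi64>
```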
Benjamin Kramer 09f8046816 [MLIR:HLO:LINALG] Fix codegen for mhlo.reshape when one side is rank 0
This is an annoying edge case because the collapse->expand lowering expects at
least R1 or it will produce invalid linalg reshapes. Using the direct lowering
works fine.

PiperOrigin-RevId: 362269199
2021-03-11 05:29:56 -08:00
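A minimal sketch of the edge case: when one side is rank 0, a single
direct reshape with an empty reassociation is emitted instead of the
collapse->expand pair, which would be invalid below rank 1:

```mlir
%scalar = linalg.tensor_reshape %arg [] : tensor<1x1xf32> into tensor<f32>
```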
Geoffrey Martin-Noble 8687f3e4cf Lower MHLO Dot to type-polymorphic linalg named ops
The linalg named ops are now type polymorphic, so the type-monomorphic
varieties are redundant (and will be deleted soon).

PiperOrigin-RevId: 360509010
2021-03-02 14:00:58 -08:00
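A minimal sketch of what type polymorphism buys: a single linalg.matmul
covers mixed element types that previously needed a monomorphic variant
(e.g. an i8/i8/i32-specific matmul):

```mlir
%0 = linalg.matmul ins(%lhs, %rhs : tensor<4x8xi8>, tensor<8x16xi8>)
                   outs(%acc : tensor<4x16xi32>) -> tensor<4x16xi32>
```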
Benjamin Kramer e19ccf975e Filter static dimensions from dynamic_broadcast_in_dim's init_tensor
Otherwise we'd generate invalid IR for those cases.

PiperOrigin-RevId: 360144122
2021-03-01 03:03:54 -08:00
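A minimal sketch: linalg.init_tensor takes SSA operands only for the
dynamic extents, so static dimensions must be filtered out of the
operand list:

```mlir
// Only the dynamic extent %d0 is an operand; the 4 stays in the type.
%init = linalg.init_tensor [%d0, 4] : tensor<?x4xf32>
```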
Hanhan Wang a8f99ee0f5 Fix the shape of linalg.init_tensor in conv op lowering.
The output spatial dims are not the same as the input spatial dims.
Only static output spatial dims are supported for now.

PiperOrigin-RevId: 359775479
2021-02-26 09:34:11 -08:00
Hanhan Wang 90f0d7f935 Add support for lowering mhlo.conv to Linalg on tensors.
This pattern only works for normal convolutions; it does not handle
depthwise convolutions. The Linalg conv ops are defined with static
rank, so only the 1d/2d/3d cases, which are the most typical, are
supported.

This also refactors out the same check in lmhlo.conv lowering.

PiperOrigin-RevId: 359503527
2021-02-25 05:59:08 -08:00
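A minimal sketch of the tensors-world pattern for a normal 2d NHWC
convolution (op name and linalg.fill argument order approximate for the
linalg of this period):

```mlir
%zero = constant 0.0 : f32
%init = linalg.init_tensor [1, 14, 14, 8] : tensor<1x14x14x8xf32>
%fill = linalg.fill(%init, %zero)
    : tensor<1x14x14x8xf32>, f32 -> tensor<1x14x14x8xf32>
// 16x16 input, 3x3 filter, stride 1 -> 14x14 output spatial dims.
%conv = linalg.conv_2d_input_nhwc_filter_hwcf
    {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
    ins(%input, %filter : tensor<1x16x16x4xf32>, tensor<3x3x4x8xf32>)
    outs(%fill : tensor<1x14x14x8xf32>) -> tensor<1x14x14x8xf32>
```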
Hanhan Wang 45a1249fe2 Add support for lowering mhlo.pad to linalg.pad_tensor
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 359481543
2021-02-25 03:00:39 -08:00
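A minimal sketch of the target op (syntax approximate); the region
yields the padding value:

```mlir
// Pad a 2x2 tensor by one element on every side with %cst.
%pad = linalg.pad_tensor %arg low[1, 1] high[1, 1] {
  ^bb0(%i: index, %j: index):
    linalg.yield %cst : f32
} : tensor<2x2xf32> to tensor<4x4xf32>
```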
Geoffrey Martin-Noble 89f7f2bd65 Lower integer matmuls to linalg
PiperOrigin-RevId: 359306495
2021-02-24 09:45:07 -08:00
Hanhan Wang 475b4a06a5 Add support for lowering mhlo.slice to subtensor.
PiperOrigin-RevId: 359297978
2021-02-24 09:06:09 -08:00
Adrian Kuegel 37e31f8b26 Lower Expm1 kernel to math.ExpM1.
PiperOrigin-RevId: 358152908
2021-02-18 04:54:23 -08:00
Adrian Kuegel b594254c79 [mhlo] Lower int->bool to a comparison with zero
This matches what TF (and C++) do in this case.

PiperOrigin-RevId: 357566262
2021-02-15 06:38:09 -08:00
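A minimal sketch of the scalar body generated for an i32 -> i1 convert
(std-dialect syntax of this period):

```mlir
// Nonzero inputs become true, matching C++ bool conversion.
%c0 = constant 0 : i32
%0  = cmpi ne, %in, %c0 : i32
```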
Benjamin Kramer 240a44de82 [mhlo] Lower int->int cast to sign extension instead of zero extension
Signless does not mean unsigned here. Currently mhlo only has signed types.

PiperOrigin-RevId: 357561712
2021-02-15 05:58:47 -08:00
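A minimal sketch for a widening i8 -> i32 convert:

```mlir
// Sign-extend: mhlo integers are signed even though the type is signless.
%0 = sexti %in : i8 to i32
```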
Adrian Kuegel 8672735e9a [mhlo] Lower float->bool to a comparison with zero
This matches what TF (and C++) do in this case.

PiperOrigin-RevId: 357553098
2021-02-15 04:36:36 -08:00
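A minimal sketch of the scalar body for an f32 -> i1 convert, assuming
an unordered predicate so that NaN also maps to true, as in C++:

```mlir
%c0 = constant 0.0 : f32
%0  = cmpf une, %in, %c0 : f32  // true iff %in is nonzero or NaN
```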
A. Unique TensorFlower 89d81adf6d [mhlo] Lower float->bool to a comparison with zero
This matches what TF (and C++) do in this case.

PiperOrigin-RevId: 357541594
2021-02-15 03:11:56 -08:00
Benjamin Kramer 3e80d91e73 [mhlo] Lower float->bool to a comparison with zero
This matches what TF (and C++) do in this case.

PiperOrigin-RevId: 357534118
2021-02-15 02:17:19 -08:00
A. Unique TensorFlower 4060a86fe2 Integrate LLVM at llvm/llvm-project@2bfe27da17
Updates LLVM usage to match
[2bfe27da171e](https://github.com/llvm/llvm-project/commit/2bfe27da171e)

PiperOrigin-RevId: 357196336
2021-02-12 08:32:03 -08:00
Mahesh Ravishankar 44d0464d16 Use linalg.fill on tensors instead of tensor.generate in MHLO -> Linalg conversion.
linalg.fill on tensors is a structured op, which allows using tile +
fuse to reduce the fill overhead.

PiperOrigin-RevId: 355490400
2021-02-03 15:03:49 -08:00
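A minimal sketch of the replacement (linalg.fill argument order
approximate for this period):

```mlir
// A structured fill that tile + fuse can reason about, instead of a
// tensor.generate region yielding %cst element by element.
%init = linalg.init_tensor [%d0, %d1] : tensor<?x?xf32>
%fill = linalg.fill(%init, %cst) : tensor<?x?xf32>, f32 -> tensor<?x?xf32>
```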
Tres Popp ae722a883f Improve performance of lowered chlo.pow with integers
The new lowering always takes 6 loop iterations, rather than iterating
the exponent's number of times.

PiperOrigin-RevId: 355131133
2021-02-02 03:28:38 -08:00
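A minimal sketch of the idea, square-and-multiply with a fixed trip
count (std/scf op names approximate): six iterations cover exponents up
to 63, and any |base| >= 2 overflows for larger exponents anyway:

```mlir
// acc *= base whenever the low exponent bit is set; square base each step.
%r:3 = scf.for %i = %c0 to %c6 step %c1
    iter_args(%acc = %c1_i32, %b = %base, %e = %exp) -> (i32, i32, i32) {
  %bit  = and %e, %c1_i32 : i32
  %set  = cmpi ne, %bit, %c0_i32 : i32
  %mul  = muli %acc, %b : i32
  %acc2 = select %set, %mul, %acc : i32
  %b2   = muli %b, %b : i32
  %e2   = shift_right_unsigned %e, %c1_i32 : i32
  scf.yield %acc2, %b2, %e2 : i32, i32, i32
}
// %r#0 holds the result.
```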
Hanhan Wang 30ce82790d Upstream mhlo.reduce lowering to Linalg to MHLO repo.
In IREE, we use an indexed generic op to handle the initial value.
Here, however, we lower it to a generic op that carries an init_tensor,
and leave the handling of initialization to later passes.

PiperOrigin-RevId: 354294807
2021-01-28 05:46:09 -08:00
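A minimal sketch, assuming a 1-D f32 sum reduction; the init_tensor is
carried through uninitialized, and later passes handle initialization:

```mlir
%init = linalg.init_tensor [] : tensor<f32>
%sum = linalg.generic
    {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> ()>],
     iterator_types = ["reduction"]}
    ins(%arg : tensor<4xf32>) outs(%init : tensor<f32>) {
  ^bb0(%in: f32, %acc: f32):
    %add = addf %in, %acc : f32
    linalg.yield %add : f32
} -> tensor<f32>
```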
Lei Zhang 39589add22 Use the correct shape when converting mhlo.reshape
If mhlo.reshape is not purely collapsing some consecutive operand
dimensions into result dimensions, we will generate two linalg reshape
ops for it: the first one collapses all operand dimensions into one
dimension, and the second one expands it to all result dimensions. For
this case, the number of collapsed/expanded dimensions should come
strictly from the operand/result. This differs from the case where we
can generate a single linalg reshape, where the reassociation map
should have rank equal to the largest among the operand/result shapes.

PiperOrigin-RevId: 354293826
2021-01-28 05:37:54 -08:00
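A minimal sketch of the two-reshape case, e.g. tensor<2x3x4xf32> ->
tensor<4x6xf32>: collapse fully to rank 1, then expand, with each
reassociation drawn strictly from one side:

```mlir
// Collapse: the single map covers all three operand dimensions.
%flat = linalg.tensor_reshape %arg [affine_map<(d0, d1, d2) -> (d0, d1, d2)>]
    : tensor<2x3x4xf32> into tensor<24xf32>
// Expand: the single map covers both result dimensions.
%out = linalg.tensor_reshape %flat [affine_map<(d0, d1) -> (d0, d1)>]
    : tensor<24xf32> into tensor<4x6xf32>
```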
Benjamin Kramer f6b24a6d54 [mlir][hlo] Make min/max always propagate NaNs
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.

PiperOrigin-RevId: 353657779
2021-01-25 09:04:16 -08:00
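A minimal sketch of a NaN-propagating scalar body for maximum
(std-dialect syntax of this period; the exact NaN value produced is an
assumption):

```mlir
%gt    = cmpf ogt, %lhs, %rhs : f32
%max   = select %gt, %lhs, %rhs : f32
%unord = cmpf uno, %lhs, %rhs : f32  // true if either side is NaN
%nan   = constant 0x7FC00000 : f32   // quiet NaN
%res   = select %unord, %nan, %max : f32
```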
A. Unique TensorFlower b1438eebcb [mlir][hlo] Make min/max always propagate NaNs
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.

PiperOrigin-RevId: 353628258
2021-01-25 05:43:15 -08:00
Benjamin Kramer 6af4bccfde [mlir][hlo] Make min/max always propagate NaNs
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.

PiperOrigin-RevId: 353624935
2021-01-25 05:15:24 -08:00
A. Unique TensorFlower ef8ccdaebc [MLIR] Add mhlo.logistic lowering to linalg
PiperOrigin-RevId: 353205440
2021-01-22 03:03:16 -08:00
A. Unique TensorFlower 56758a9562 [MLIR][KernelGen] Lower mhlo.log_plus_one to std.log1p
PiperOrigin-RevId: 353200069
2021-01-22 02:18:32 -08:00
Alexander Belyaev 7aa64ee0b7 [MLIR] Migrate TF from STD complex ops to ComplexDialect.
PiperOrigin-RevId: 352966408
2021-01-21 01:22:25 -08:00
Hanhan Wang 46112c95c6 Use `uitofp` when converting a boolean to floating-point.
It was previously lowered to `sitofp`, which resulted in `-1.0`.

PiperOrigin-RevId: 352958489
2021-01-21 00:15:30 -08:00
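A minimal sketch of the fix for an i1 -> f32 convert:

```mlir
// Zero-extend: true -> 1.0. With sitofp, i1 true is -1, hence -1.0.
%0 = uitofp %in : i1 to f32
```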
Tres Popp ba0346b071 Integrate LLVM at llvm/llvm-project@96ef4f307d
Updates LLVM usage to match
[96ef4f307df2](https://github.com/llvm/llvm-project/commit/96ef4f307df2)

PiperOrigin-RevId: 352786460
2021-01-20 07:09:47 -08:00
Tres Popp ba2ee556f1 Handle negative exponents for lowering of hlo.pow
PiperOrigin-RevId: 352382812
2021-01-18 03:47:28 -08:00
A. Unique TensorFlower bcdb3c3548 [MLIR] Lower mhlo.clamp to linalg
PiperOrigin-RevId: 351998800
2021-01-15 06:45:38 -08:00
A. Unique TensorFlower 9a1abaa212 Integrate LLVM at llvm/llvm-project@e2d7d3cb0e
Updates LLVM usage to match
[e2d7d3cb0ead](https://github.com/llvm/llvm-project/commit/e2d7d3cb0ead)

PiperOrigin-RevId: 351915841
2021-01-14 18:05:33 -08:00
Hanhan Wang 300a7c11ce Upstream mhlo.dot_general lowering to Linalg to MHLO repo
PiperOrigin-RevId: 351514250
2021-01-12 22:08:46 -08:00
Hanhan Wang 8f58f844e5 Upstream mhlo.dot lowering to Linalg to MHLO repo.
We prototyped the lowering from mhlo.dot to linalg.matmul in IREE. Since
Linalg now supports matmul on tensors, we can move the lowering logic to
the tensors world and upstream it to legalize_to_linalg.cc. The patch
lowers mhlo.dot to linalg.matmul/matvec/dot on tensors.

PiperOrigin-RevId: 351184911
2021-01-11 10:35:24 -08:00
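A minimal sketch of the rank-dispatched target, here the matrix-matrix
case; the matvec and dot cases differ only in operand ranks. The
zero-filled accumulator %fill is elided:

```mlir
%mm = linalg.matmul
    ins(%lhs, %rhs : tensor<4x8xf32>, tensor<8x16xf32>)
    outs(%fill : tensor<4x16xf32>) -> tensor<4x16xf32>
```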
Alexander Belyaev 180f917446 [KERNEL_GEN] Add a pattern for hlo.dyn_broadcast->linalg to enable is_inf kernel.
PiperOrigin-RevId: 351179620
2021-01-11 10:13:31 -08:00
A. Unique TensorFlower b0bf2ef45b Integrate LLVM at llvm/llvm-project@c3acda0798
Updates LLVM usage to match
[c3acda0798f9](https://github.com/llvm/llvm-project/commit/c3acda0798f9)

PiperOrigin-RevId: 348896724
2020-12-23 23:53:54 -08:00
Tres Popp a42213b870 Define lowering of [l]mhlo.pow.
For floating point operations, this uses std.pow.
For integer operations, this lowers to a loop.
This adds a dependency on scf.

PiperOrigin-RevId: 348537232
2020-12-21 15:27:40 -08:00
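A minimal sketch of the integer path, repeated multiplication in an
scf.for loop, assuming a non-negative exponent (negative exponents and
the fixed-trip-count improvement came in later commits, above):

```mlir
// pow(%base, %exp) for i32 by iterating %exp times.
%e   = index_cast %exp : i32 to index
%res = scf.for %i = %c0 to %e step %c1 iter_args(%acc = %c1_i32) -> (i32) {
  %next = muli %acc, %base : i32
  scf.yield %next : i32
}
```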
Alexander Belyaev 65222893ae [KERNEL_GEN] Convert LHLO AddOp, SubOp (ComplexType) to complex ops.
PiperOrigin-RevId: 347805898
2020-12-16 05:45:06 -08:00
Benjamin Kramer 9930c20c31 [mlir][hlo] Fix lowering of NE comparison. It should return true if either side is NaN
PiperOrigin-RevId: 346988987
2020-12-11 06:46:14 -08:00
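A minimal sketch of the fix: NE must use an unordered predicate:

```mlir
// Unordered not-equal: true if either operand is NaN.
%ne = cmpf une, %lhs, %rhs : f32
```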
Alexander Belyaev c36afd275e [HLO] Add a pattern for HLO ConstOp to HLO -> Linalg conversion.
PiperOrigin-RevId: 346718273
2020-12-09 23:24:57 -08:00
Stephan Herhut c3790af758 Add plumbing for or and xor to hlo to lhlo and linalg lowerings.
PiperOrigin-RevId: 346311314
2020-12-08 06:39:02 -08:00
Benjamin Kramer 5235eceea0 Lower mhlo shifts to linalg
PiperOrigin-RevId: 346161253
2020-12-07 13:02:32 -08:00
A. Unique TensorFlower e866aac3ac Integrate LLVM at llvm/llvm-project@72ddd559b8
Updates LLVM usage to match
[72ddd559b8aa](https://github.com/llvm/llvm-project/commit/72ddd559b8aa)

PiperOrigin-RevId: 340292055
2020-11-02 12:29:52 -08:00
Ahmed S. Taei 39389587d2 {mhlo.is_finite, lmhlo.is_finite} -> linalg.generic conversion
PiperOrigin-RevId: 334414295
2020-09-29 10:50:42 -07:00