Commit Graph

329 Commits

Author SHA1 Message Date
Rahul Joshi 9902e6ee32 [HLO] Add LMHLO CollectivePermute verification.
- Extract verification of source target pairs attached to collective permute into a common
  helper function and use that to verify both MHLO and LMHLO variants.
- Change MlirGpuTestBase::ParseMlirModule to allow returning a failure, and use
  that to update the mlir_gpu_compile_test to check the new behavior.

PiperOrigin-RevId: 362156962
2021-03-10 15:37:12 -08:00
A. Unique TensorFlower c217a6ef61 [MHLO] Add pass to move up dynamic broadcasts for fusion
For now, the pass only reifies the required shape computations. Moving
broadcasts will follow to allow for fusion across them.

PiperOrigin-RevId: 362033715
2021-03-10 06:21:57 -08:00
Stephan Herhut cabd4d9a06 Canonicalize dynamic_broadcast_in_dim to a tensor's own shape, where the shape only narrows the rank, into a corresponding tensor.cast.
PiperOrigin-RevId: 362028291
2021-03-10 05:43:54 -08:00
A. Unique TensorFlower 218476128e [MLIR][KernelGen] Fix zeta lowering at poles
Return NaN at the zeta poles, or inf where the limit is defined. Also test the kernel
based on the series representation of zeta.

PiperOrigin-RevId: 361993482
2021-03-10 01:09:10 -08:00
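For reference, chlo.zeta implements the Hurwitz zeta function, which the series-based tests exercise; this is the standard identity, not text from the commit:

$$\zeta(x, q) = \sum_{k=0}^{\infty} \frac{1}{(q + k)^{x}},$$

a series that converges for x > 1 and has its pole at x = 1 for every q.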
A. Unique TensorFlower 55eda81407 [MLIR][HLO] Reify shape extents as `index` values
PiperOrigin-RevId: 361519167
2021-03-08 02:42:47 -08:00
A. Unique TensorFlower 39650a5d5a Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.

PiperOrigin-RevId: 360891955
2021-03-04 05:24:53 -08:00
Adrian Kuegel 62b357b601 Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.

PiperOrigin-RevId: 360881387
2021-03-04 04:04:11 -08:00
Geoffrey Martin-Noble 8687f3e4cf Lower MHLO Dot to type-polymorphic linalg named ops
The linalg named ops are now type polymorphic, so the type-monomorphic
varieties are redundant (and will be deleted soon).

PiperOrigin-RevId: 360509010
2021-03-02 14:00:58 -08:00
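A minimal sketch of what type polymorphism buys here, with hypothetical shapes and types (mixed i8 inputs accumulating into i32); the exact syntax of the era may differ slightly:

```mlir
// One polymorphic linalg.matmul covers what previously needed a
// monomorphic variant per type combination.
%0 = linalg.matmul ins(%a, %b : tensor<4x3xi8>, tensor<3x5xi8>)
                   outs(%acc : tensor<4x5xi32>) -> tensor<4x5xi32>
```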
Adrian Kuegel 0683db3b24 Legalize MinimumBroadcastShapes op.
Use it in TransformUnrankedHloPass, which allows reducing the maximum
rank for rank-specialized broadcasts from 6 to 5.

PiperOrigin-RevId: 360415743
2021-03-02 06:39:01 -08:00
Jacques Pienaar 329b1fd071 Verify compatible shapes in unpack verification rather than exact
Previously this was too strict and would fail if dynamic and static dims were
compared. Without further info, dynamic/unknown dims are treated as "maybe equal" to a static value, so at this layer we don't flag them as invalid unless they truly are.

PiperOrigin-RevId: 360189086
2021-03-01 08:00:16 -08:00
Benjamin Kramer e19ccf975e Filter static dimensions from dynamic_broadcast_in_dim's init_tensor
Otherwise we'd generate invalid IR for those cases.

PiperOrigin-RevId: 360144122
2021-03-01 03:03:54 -08:00
Adrian Kuegel e6a1f5f0f9 Add MinimumBroadcastShapesOp to chlo dialect.
This op is useful for rank specialization of broadcasts. Kernel Generator
needs to generate one kernel for each rank, so if we can minimize the rank
of the broadcast shape, we can support more cases with the same number of
special-cased kernels.

PiperOrigin-RevId: 360137827
2021-03-01 02:23:52 -08:00
Hanhan Wang a8f99ee0f5 Fix the shape of linalg.init_tensor in conv op lowering.
The output spatial dims are not the same as the input spatial dims. Only
static output spatial dims are supported for now.

PiperOrigin-RevId: 359775479
2021-02-26 09:34:11 -08:00
Hanhan Wang 90f0d7f935 Add support for lowering mhlo.conv to Linalg on tensors.
This pattern only works for normal convolutions. It does not work for depthwise
convolutions. The Linalg conv ops are defined with static rank, so it only
supports 1d/2d/3d cases, which are the most typical cases.

This also refactors out the same check in lmhlo.conv lowering.

PiperOrigin-RevId: 359503527
2021-02-25 05:59:08 -08:00
Hanhan Wang 45a1249fe2 Add support for lowering mhlo.pad to linalg.pad_tensor
The change upstreams the pattern from IREE repo to MHLO repo.

PiperOrigin-RevId: 359481543
2021-02-25 03:00:39 -08:00
Geoffrey Martin-Noble 89f7f2bd65 Lower integer matmuls to linalg
PiperOrigin-RevId: 359306495
2021-02-24 09:45:07 -08:00
Hanhan Wang 475b4a06a5 Add support for lowering mhlo.slice to subtensor.
PiperOrigin-RevId: 359297978
2021-02-24 09:06:09 -08:00
A. Unique TensorFlower ac0552f127 [MLIR][HLO] Remove duplicate `PopulateTransformUnrankedHloPatterns`
PiperOrigin-RevId: 359046173
2021-02-23 07:50:47 -08:00
Rahul Joshi 5adb7c6e12 [MLIR:LHLO] Add optional call target arg mapping to LMHLO CustomCall operations.
- XLA:HLO -> LMHLO conversion drops all token arguments and return values; however,
  custom calls that users write still expect to get buffer pointers for these token types.
- To be able to support this, add an optional call target argument mapping attribute to
  LMHLO custom calls. When this attribute is present, it indicates the number of
  arguments and returns that the custom call expects and also indicates which LMHLO
  arg() or output() maps to which arg or result number of the custom call.

PiperOrigin-RevId: 358826664
2021-02-22 08:43:00 -08:00
Benjamin Kramer a9cc1dcfa0 [mlir][hlo] Add basic rank-specialization for select
This just blows up everything to ranked (up to 6) and is probably quite slow.
This is sufficient to make kernelgen compile SelectV2.

PiperOrigin-RevId: 358777728
2021-02-22 02:41:12 -08:00
Benjamin Kramer b42def4612 [mlir][hlo] Refactor rank specialization to allow an arbitrary number of inputs
This actually simplifies the code a bit.

PiperOrigin-RevId: 358201038
2021-02-18 09:53:03 -08:00
Benjamin Kramer ca4034b56e [mlir][hlo] Make select ready for dynamic shapes (ranked only for now)
Move tf.SelectV2 broadcast lowering to a chlo.broadcast_select op, and lower it
to broadcasts on mhlo from there.

PiperOrigin-RevId: 358179975
2021-02-18 08:08:40 -08:00
Adrian Kuegel 37e31f8b26 Lower Expm1 kernel to math.ExpM1.
PiperOrigin-RevId: 358152908
2021-02-18 04:54:23 -08:00
Richard Uhler b579bd5d9e Support dynamic-shaped operand in verification of BroadcastInDim.
Verification of HLO_BroadcastInDimOp was previously failing or crashing if the
operand had a dynamic shape or was unranked. Update the verification code to
allow the operand to be unranked or have dynamic shape.

PiperOrigin-RevId: 358056793
2021-02-17 16:18:09 -08:00
A. Unique TensorFlower 220deb3709 [MLIR][CHLO] Add legalization for `chlo.polygamma` to MHLO
PiperOrigin-RevId: 357954624
2021-02-17 08:33:01 -08:00
A. Unique TensorFlower 81abaf364d [MLIR][MHLO] Add polygamma op to the CHLO dialect
PiperOrigin-RevId: 357724465
2021-02-16 08:32:33 -08:00
Adrian Kuegel b594254c79 [mhlo] Lower int->bool to a comparison with zero
This matches what TF (and C++) do in this case.

PiperOrigin-RevId: 357566262
2021-02-15 06:38:09 -08:00
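A minimal sketch of the described lowering, assuming the std-dialect ops of that era and hypothetical value names:

```mlir
// i32 -> i1: any nonzero value becomes true, matching TF and C++.
%zero = constant 0 : i32
%bool = cmpi "ne", %val, %zero : i32
```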
Benjamin Kramer 240a44de82 [mhlo] Lower int->int cast to sign extension instead of zero extension
Signless does not mean unsigned here. Currently mhlo only has signed types.

PiperOrigin-RevId: 357561712
2021-02-15 05:58:47 -08:00
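A one-line sketch of the corrected cast, with hypothetical types and names:

```mlir
// i32 -> i64: sign-extend, since mhlo's signless integers are interpreted
// as signed; zero extension would corrupt negative values.
%wide = sexti %narrow : i32 to i64
```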
Adrian Kuegel 8672735e9a [mhlo] Lower float->bool to a comparison with zero
This matches what TF (and C++) do in this case.

PiperOrigin-RevId: 357553098
2021-02-15 04:36:36 -08:00
A. Unique TensorFlower 89d81adf6d [mhlo] Lower float->bool to a comparison with zero
This matches what TF (and C++) do in this case.

PiperOrigin-RevId: 357541594
2021-02-15 03:11:56 -08:00
Benjamin Kramer 3e80d91e73 [mhlo] Lower float->bool to a comparison with zero
This matches what TF (and C++) do in this case.

PiperOrigin-RevId: 357534118
2021-02-15 02:17:19 -08:00
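A minimal sketch of the lowering described in the three identical commits above (hypothetical names; std-dialect ops of that era); the unordered predicate also maps NaN to true, as C++ does:

```mlir
// f32 -> i1: true unless the value compares equal to zero.
%zero = constant 0.0 : f32
%bool = cmpf "une", %val, %zero : f32
```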
Adrian Kuegel 824bc9c425 Improve broadcast transformation to treat dynamic shapes with 1 element as scalar.
A shape that contains exactly one element is effectively a scalar. This leads
to a speedup in cases where we have a binary op with one operand that is
effectively a scalar, because we can use the fast path.

PiperOrigin-RevId: 357515552
2021-02-14 23:25:41 -08:00
A. Unique TensorFlower 4060a86fe2 Integrate LLVM at llvm/llvm-project@2bfe27da17
Updates LLVM usage to match
[2bfe27da171e](https://github.com/llvm/llvm-project/commit/2bfe27da171e)

PiperOrigin-RevId: 357196336
2021-02-12 08:32:03 -08:00
Tim Shen 6fa6974e8d [XLA/GPU] Plumb through Bitcast op for LMHLO.
Also remove BitcastOp. XLA bitcast requires the input buffer to alias the output buffer, which makes bitcast always a no-op.

PiperOrigin-RevId: 356884383
2021-02-10 19:45:40 -08:00
Alexander Belyaev 36e04d92c0 [KERNEL_GEN] Add a pattern to bufferize `mhlo.reshape(<unranked_tensor>)`.
PiperOrigin-RevId: 356720899
2021-02-10 06:32:21 -08:00
A. Unique TensorFlower 4a29ca3b1d Add layout to mhlo::InfeedOp td.
PiperOrigin-RevId: 356286875
2021-02-08 09:48:14 -08:00
Tres Popp d086b8a0ec Correct HLO atan2 lowering in cases of -inf and -0 inputs.
This is done by simply removing the approximation and lowering to atan2 lib calls later, making the implementation the same as XLA's. Note that if the approximation is brought back later, it can be fixed by changing the IR check `less-than(X, 0)` to `less-than(copysign(X, 1), 0)`.

PiperOrigin-RevId: 356253941
2021-02-08 06:58:04 -08:00
A. Unique TensorFlower 2aa8a90c69 Integrate LLVM at llvm/llvm-project@a1a1d338e9
Updates LLVM usage to match
[a1a1d338e99d](https://github.com/llvm/llvm-project/commit/a1a1d338e99d)

PiperOrigin-RevId: 355927079
2021-02-05 14:20:29 -08:00
Rahul Joshi b251712b1d [XLA:GPU] Add conversion from HLO -> MLIR LMHLO for TriangularSolve
- Also add layout attributes for inputs and output for error checking.

PiperOrigin-RevId: 355863625
2021-02-05 09:18:02 -08:00
A. Unique TensorFlower 99bc05f2e4 Integrate LLVM at llvm/llvm-project@91e7a17133
Updates LLVM usage to match
[91e7a1713332](https://github.com/llvm/llvm-project/commit/91e7a1713332)

PiperOrigin-RevId: 355702100
2021-02-04 13:42:31 -08:00
Mahesh Ravishankar 44d0464d16 Use linalg.fill on tensors instead of tensor.generate in MHLO -> Linalg conversion.
linalg.fill on tensors is a structured op, which allows using tile + fuse
to reduce the fill overhead.

PiperOrigin-RevId: 355490400
2021-02-03 15:03:49 -08:00
Stephan Herhut 6cd1875ee4 Implement lowering of chlo::zeta to mhlo dialect.
PiperOrigin-RevId: 355395581
2021-02-03 07:50:05 -08:00
A. Unique TensorFlower 04110a4b1c Integrate LLVM at llvm/llvm-project@67dfe9c8d7
Updates LLVM usage to match
[67dfe9c8d70c](https://github.com/llvm/llvm-project/commit/67dfe9c8d70c)

PiperOrigin-RevId: 355235205
2021-02-02 13:09:20 -08:00
Tres Popp ae722a883f Improve performance of lowered chlo.pow with integers
The new lowering always takes 6 loop iterations rather than iterating as many times as the exponent's value.

PiperOrigin-RevId: 355131133
2021-02-02 03:28:38 -08:00
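The fixed trip count comes from exponentiation by squaring: six iterations consume six exponent bits, covering exponents up to 63, and for any base other than -1, 0, and 1 a larger exponent overflows i64 anyway. A hypothetical sketch of such a loop, not the actual pattern (%base and %exp are assumed i64 values):

```mlir
%c0 = constant 0 : index
%c1 = constant 1 : index
%c6 = constant 6 : index
%one = constant 1 : i64
%res:3 = scf.for %i = %c0 to %c6 step %c1
    iter_args(%acc = %one, %b = %base, %e = %exp) -> (i64, i64, i64) {
  %bit = and %e, %one : i64                 // low bit of the exponent
  %odd = cmpi "eq", %bit, %one : i64
  %mul = muli %acc, %b : i64
  %acc2 = select %odd, %mul, %acc : i64     // multiply only when bit is set
  %b2 = muli %b, %b : i64                   // square the base
  %e2 = shift_right_signed %e, %one : i64   // advance to the next bit
  scf.yield %acc2, %b2, %e2 : i64, i64, i64
}
```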
A. Unique TensorFlower f40ccc5b4b [MLIR][CHLO] Add `chlo.digamma` and lowering to MHLO
PiperOrigin-RevId: 355122765
2021-02-02 02:10:17 -08:00
Adrian Kuegel c2115f56c7 Integrate LLVM at llvm/llvm-project@8f7f2c4211
Updates LLVM usage to match
[8f7f2c4211ca](https://github.com/llvm/llvm-project/commit/8f7f2c4211ca)

PiperOrigin-RevId: 355120697
2021-02-02 01:54:32 -08:00
Adrian Kuegel 96f8771ed7 Add MLIR generated kernel for Angle kernel.
This also requires a canonicalization pattern to remove a redundant dynamic
reshape from rank 1 to rank 1.

PiperOrigin-RevId: 355113135
2021-02-02 00:47:20 -08:00
Rahul Joshi 8e3890e8e8 [MLIR:HLO] Add AllGather and AllToAll operations to LMHLO dialect.
- Use a common base class for AllReduce, AllGather, and AllToAll in the ODS spec.
- Add basic verification for replica groups attribute.

PiperOrigin-RevId: 354969654
2021-02-01 10:23:46 -08:00
Stephan Herhut e61ef86fdb Add zeta and broadcasting_zeta to chlo dialect.
PiperOrigin-RevId: 354500879
2021-01-29 03:22:52 -08:00
Hanhan Wang 30ce82790d Upstream mhlo.reduce lowering to Linalg to MHLO repo.
In IREE, we use an indexed generic op to handle the initial value. Here, however, we
lower it to a generic op that carries an init_tensor, and leave the handling of the
initialization problem to later passes.

PiperOrigin-RevId: 354294807
2021-01-28 05:46:09 -08:00
Lei Zhang 39589add22 Use the correct shape when converting mhlo.reshape
If mhlo.reshape is not purely collapsing some consecutive operand
dimensions into result dimensions, we generate two linalg
reshape ops for it: the first collapses all operand dimensions
into one dimension, and the second expands it to all result
dimensions. For this case, the number of collapsed/expanded dimensions
comes strictly from the operand/result. This differs
from the case where we can generate one linalg reshape, where
the reassociation map should have rank equal to the larger of the
operand and result ranks.

PiperOrigin-RevId: 354293826
2021-01-28 05:37:54 -08:00
A. Unique TensorFlower e0a7be7fb1 [MLIR][CHLO] Add `chlo.lgamma` and lowering to `hlo`
PiperOrigin-RevId: 354287316
2021-01-28 04:35:03 -08:00
A. Unique TensorFlower d77c9ad6fa [MLIR][CHLO] Add `is_inf`, `is_pos_inf`, and `is_neg_inf` to CHLO dialect
Also add the respective lowerings to MHLO.

PiperOrigin-RevId: 354101955
2021-01-27 09:00:56 -08:00
Rahul Joshi 44deae2aa1 [MLIR:HLO] Extend AllReduce to support multiple inputs and results (to model tuples).
- Instead of SameTypeOperands, add custom verification to check if operands and
  results pairwise have the same type.

PiperOrigin-RevId: 353986341
2021-01-26 17:25:22 -08:00
Benjamin Kramer f6b24a6d54 [mlir][hlo] Make min/max always propagate NaNs
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.

PiperOrigin-RevId: 353657779
2021-01-25 09:04:16 -08:00
A. Unique TensorFlower b1438eebcb [mlir][hlo] Make min/max always propagate NaNs
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.

PiperOrigin-RevId: 353628258
2021-01-25 05:43:15 -08:00
Benjamin Kramer 6af4bccfde [mlir][hlo] Make min/max always propagate NaNs
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.

PiperOrigin-RevId: 353624935
2021-01-25 05:15:24 -08:00
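One way to get the NaN-propagating semantics described in the three identical commits above, as a hypothetical sketch (the pass's actual pattern may differ): an ordered compare is false whenever either side is NaN, so a plain select would drop a NaN on the left.

```mlir
%gt = cmpf "ogt", %lhs, %rhs : f32
%lhs_nan = cmpf "uno", %lhs, %lhs : f32    // true iff %lhs is NaN
%pick_lhs = or %gt, %lhs_nan : i1
%max = select %pick_lhs, %lhs, %rhs : f32  // NaN wins from either side
```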
A. Unique TensorFlower ae2d46414d [MLIR][KernelGen] Add erfc kernel for f16
PiperOrigin-RevId: 353209468
2021-01-22 03:38:30 -08:00
A. Unique TensorFlower ef8ccdaebc [MLIR] Add mhlo.logistic lowering to linalg
PiperOrigin-RevId: 353205440
2021-01-22 03:03:16 -08:00
A. Unique TensorFlower c846f925d4 [MLIR][KernelGen] Add chlo.erfc lowering for f32
PiperOrigin-RevId: 353201886
2021-01-22 02:33:21 -08:00
A. Unique TensorFlower 56758a9562 [MLIR][KernelGen] Lower mhlo.log_plus_one to std.log1p
PiperOrigin-RevId: 353200069
2021-01-22 02:18:32 -08:00
A. Unique TensorFlower 1a37078132 [MLIR][KernelGen] Add chlo.erfc lowerings for f64
PiperOrigin-RevId: 352993223
2021-01-21 04:42:56 -08:00
A. Unique TensorFlower bec2e625a2 [MLIR][KernelGen] Add approximation lowering for mhlo.erf operation on f64
PiperOrigin-RevId: 352977456
2021-01-21 02:48:43 -08:00
Alexander Belyaev 7aa64ee0b7 [MLIR] Migrate TF from STD complex ops to ComplexDialect.
PiperOrigin-RevId: 352966408
2021-01-21 01:22:25 -08:00
Hanhan Wang 46112c95c6 Use `uitofp` when converting a boolean to floating-point.
It was previously lowered to `sitofp`, which resulted in `-1.0` for `true`.

PiperOrigin-RevId: 352958489
2021-01-21 00:15:30 -08:00
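A one-line sketch of the fix, with hypothetical names:

```mlir
// i1 -> f32: unsigned conversion maps true to 1.0; sitofp would read the
// single bit as a sign and produce -1.0.
%f = uitofp %b : i1 to f32
```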
Stephan Herhut 70a351f301 Add chlo.acosh operation and associated lowerings.
PiperOrigin-RevId: 352839289
2021-01-20 11:43:44 -08:00
Tres Popp ba0346b071 Integrate LLVM at llvm/llvm-project@96ef4f307d
Updates LLVM usage to match
[96ef4f307df2](https://github.com/llvm/llvm-project/commit/96ef4f307df2)

PiperOrigin-RevId: 352786460
2021-01-20 07:09:47 -08:00
A. Unique TensorFlower ec5f5667e1 [MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
PiperOrigin-RevId: 352773540
2021-01-20 05:31:15 -08:00
A. Unique TensorFlower 96fb617413 [MLIR][KernelGen] Add erf kernel and missing lowering for f16 type
PiperOrigin-RevId: 352416184
2021-01-18 08:21:15 -08:00
Tres Popp ba2ee556f1 Handle negative exponents for lowering of hlo.pow
PiperOrigin-RevId: 352382812
2021-01-18 03:47:28 -08:00
A. Unique TensorFlower 3763740910 [MLIR][KernelGen] Add erf kernel for f32 arguments and missing lowerings
PiperOrigin-RevId: 352381016
2021-01-18 03:35:13 -08:00
A. Unique TensorFlower bcdb3c3548 [MLIR] Lower mhlo.clamp to linalg
PiperOrigin-RevId: 351998800
2021-01-15 06:45:38 -08:00
A. Unique TensorFlower 9a1abaa212 Integrate LLVM at llvm/llvm-project@e2d7d3cb0e
Updates LLVM usage to match
[e2d7d3cb0ead](https://github.com/llvm/llvm-project/commit/e2d7d3cb0ead)

PiperOrigin-RevId: 351915841
2021-01-14 18:05:33 -08:00
Hanhan Wang 300a7c11ce Upstream mhlo.dot_general lowering to Linalg to MHLO repo
PiperOrigin-RevId: 351514250
2021-01-12 22:08:46 -08:00
Hanhan Wang 8f58f844e5 Upstream mhlo.dot lowering to Linalg to MHLO repo.
We prototyped the lowering from mhlo.dot to linalg.matmul in IREE. Since Linalg
now supports matmul in the tensors world, we can move the lowering logic to the
tensors world and upstream it to legalize_to_linalg.cc. The patch lowers mhlo.dot to
linalg.matmul/matvec/dot in the tensors world.

PiperOrigin-RevId: 351184911
2021-01-11 10:35:24 -08:00
Alexander Belyaev 180f917446 [KERNEL_GEN] Add a pattern for hlo.dyn_broadcast->linalg to enable is_inf kernel.
PiperOrigin-RevId: 351179620
2021-01-11 10:13:31 -08:00
Alexander Belyaev ecf1bf5132 [KERNEL_GEN] Add a canonicalization pattern to drop a redundant dynamic reshape.
PiperOrigin-RevId: 351141868
2021-01-11 06:38:03 -08:00
Alexander Belyaev 6c42f54298 [KERNEL_GEN] Restrict broadcast -> reshape canonicalization to identity dims.
This is needed to avoid the case when the broadcast_in_dims also performs a permutation.

PiperOrigin-RevId: 350650342
2021-01-07 15:30:28 -08:00
Alexander Belyaev 095dc28e5c [KERNEL_GEN] Add canonicalization pattern to drop a redundant broadcast op.
PiperOrigin-RevId: 350105790
2021-01-05 03:01:00 -08:00
A. Unique TensorFlower b0bf2ef45b Integrate LLVM at llvm/llvm-project@c3acda0798
Updates LLVM usage to match
[c3acda0798f9](https://github.com/llvm/llvm-project/commit/c3acda0798f9)

PiperOrigin-RevId: 348896724
2020-12-23 23:53:54 -08:00
Stephan Herhut ccdd07f8e4 Prepare to remove tensor_load and tensor_store special handling from hlo to lhlo legalization.
This updates the tests to no longer rely on tensor_store. Once all users of this behavior have migrated, the tensor_store support will be removed.

PiperOrigin-RevId: 348624899
2020-12-22 06:29:12 -08:00
Tres Popp a42213b870 Define lowering of [l]mhlo.pow.
For floating point operations, this uses std.pow.
For integer operations, this lowers to a loop.
This adds a dependency on scf.

PiperOrigin-RevId: 348537232
2020-12-21 15:27:40 -08:00
Smit Hinsu 737d15ded5 Handle operands with zero elements in HLO PadOp folder
PiperOrigin-RevId: 348034821
2020-12-17 09:27:36 -08:00
Rahul Joshi 8134bff98d [XLA:GPU] Add layout attributes to LHLO_GPU Convolution operations.
- MLIR MemRefs do not preserve layout information correctly when unit dimensions
  are involved. Operations like convolution that use cuDNN however need the correct
  layout to be preserved so that we do not end up creating an incompatible combination
  of input/filter/output layout that is not supported by cuDNN.
- Add these layouts to convolution attributes in the form of I32ArrayAttr for representing
  the layout in "minor_to_major" form similar to XLA.

PiperOrigin-RevId: 348034757
2020-12-17 09:26:28 -08:00
Adrian Kuegel 1f244c3e2c Fix SignOp lowering for floating point values.
It didn't return 0 for 0.0 and -0.0.
Currently we emit -0.0 for -0.0, which is correct according to the HLO dialect.
For the TF_SignOp we should emit 0.0 in that case; we will leave that as a TODO.
Enable the tests which work now, and add another one for Int64.
Also improve the registration code: we should not register the Int32 kernel.

PiperOrigin-RevId: 347981124
2020-12-17 01:45:54 -08:00
A. Unique TensorFlower 5da9190dd9 Integrate LLVM at llvm/llvm-project@0cf7e4b252
Updates LLVM usage to match
[0cf7e4b252fe](https://github.com/llvm/llvm-project/commit/0cf7e4b252fe)

PiperOrigin-RevId: 347948887
2020-12-16 20:30:17 -08:00
Alexander Belyaev 65222893ae [KERNEL_GEN] Convert LHLO AddOp, SubOp (ComplexType) to complex ops.
PiperOrigin-RevId: 347805898
2020-12-16 05:45:06 -08:00
Alexander Belyaev e6e8920921 [KERNEL_GEN] Switch the pipeline to Linalg-on-Tensors.
PiperOrigin-RevId: 347781190
2020-12-16 01:51:15 -08:00
A. Unique TensorFlower f0c2695d31 Fix SignOp lowering for floating point values.
It didn't return 0 for 0.0 and -0.0.
Currently we emit -0.0 for -0.0, which is correct according to the HLO dialect.
For the TF_SignOp we should emit 0.0 in that case; we will leave that as a TODO.
Enable the tests which work now, and add another one for Int64.
Also improve the registration code: we should not register the Int32 kernel.

PiperOrigin-RevId: 347602378
2020-12-15 06:49:48 -08:00
Alexander Belyaev ddda2699fb [KERNEL_GEN] Switch the pipeline to Linalg-on-Tensors.
PiperOrigin-RevId: 347600145
2020-12-15 06:32:25 -08:00
Adrian Kuegel 79fa36bcbc Fix SignOp lowering for floating point values.
It didn't return 0 for 0.0 and -0.0.
Currently we emit -0.0 for -0.0, which is correct according to the HLO dialect.
For the TF_SignOp we should emit 0.0 in that case; we will leave that as a TODO.
Enable the tests which work now, and add another one for Int64.
Also improve the registration code: we should not register the Int32 kernel.

PiperOrigin-RevId: 347590340
2020-12-15 05:12:48 -08:00
Alexander Belyaev 8b35a75d4a [KERNEL_GEN] Switch the pipeline to Linalg-on-Tensors.
PiperOrigin-RevId: 347368063
2020-12-14 05:46:47 -08:00
Benjamin Kramer 9930c20c31 [mlir][hlo] Fix lowering of NE comparison. It should return true if either side is NaN
PiperOrigin-RevId: 346988987
2020-12-11 06:46:14 -08:00
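A minimal sketch with hypothetical names: the unordered predicate encodes IEEE `!=`, which is true when either side is NaN.

```mlir
%ne = cmpf "une", %lhs, %rhs : f32
```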
Smit Hinsu ab6ee11813 Fix folding of HLO SliceOp with zero elements
This was causing division by zero in this case.

PiperOrigin-RevId: 346920942
2020-12-10 20:22:48 -08:00
Rahul Joshi f232da1f9d [MLIR:HLO] Add window_reversal attribute to convolution attributes.
- Add this attribute to match the corresponding XLA HLO attribute on convolution
  operations.
- A true value indicates a reversal of the corresponding kernel spatial dimension.
- Since the XLA builder does not support this attribute, use a custom HLO converter to map
  from mlir::mhlo::ConvOp to XLA.

PiperOrigin-RevId: 346891737
2020-12-10 16:39:19 -08:00
Alexander Belyaev c36afd275e [HLO] Add a pattern for HLO ConstOp to HLO -> Linalg conversion.
PiperOrigin-RevId: 346718273
2020-12-09 23:24:57 -08:00
Stephan Herhut c3790af758 Add plumbing for or and xor to hlo to lhlo and linalg lowerings.
PiperOrigin-RevId: 346311314
2020-12-08 06:39:02 -08:00
Benjamin Kramer 5235eceea0 Lower mhlo shifts to linalg
PiperOrigin-RevId: 346161253
2020-12-07 13:02:32 -08:00
Tres Popp d327fc5737 [kernel_gen] Lower max rank specialization from 6 to 5
We don't care much about rank 6 broadcasting operations and this lowers compile times significantly.

PiperOrigin-RevId: 346046601
2020-12-07 02:18:38 -08:00
Phoenix Meadowlark c33bdcbd03 Remove fold of `mhlo.compare(%arg0, %arg0)` for floating types.
Two tensors having the same SSA-value isn't sufficient for equality for floating types, as `NaN != NaN`. As written this causes `tf.IsNan` to [miscompile](https://github.com/google/iree/issues/4061).

PiperOrigin-RevId: 345730640
2020-12-04 12:15:02 -08:00
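A sketch of why the fold is unsound (hypothetical example): if %x holds NaN, an EQ self-compare is false, so folding it to a constant true miscompiles NaN checks.

```mlir
%eq = "mhlo.compare"(%x, %x) {comparison_direction = "EQ"}
    : (tensor<f32>, tensor<f32>) -> tensor<i1>  // false for a NaN input
```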
Rahul Joshi e48881af81 [MLIR:LHLO_GPU] Add fused convolution operation without any side inputs.
- Add a variant of the fused convolution that does not need a side input and side input scale.
- Rename the existing one to `ConvForwardFusedSideInputOp`.
- Update tests to exercise all variants of the convolution ops in the GPU dialect.
- Eliminate unused `LHLO_ExtentBuffer` and change LHLO_Buffer to allow any integer element
  type, to match what XLA can sometimes generate for scratch buffers.

PiperOrigin-RevId: 345701569
2020-12-04 10:09:27 -08:00
Tres Popp 7c3f049c8e [kernel_gen] Lower max rank specialization from 6 to 5
We don't care much about rank 6 broadcasting operations and this lowers compile times significantly.

PiperOrigin-RevId: 345466476
2020-12-03 09:19:25 -08:00
Rahul Joshi d7bd5233ab [XLA:GPU] Migrate GEMM Thunk emission to MLIR.
- Map Custom call for GEMM in XLA HLO to Gemm/Gemm bias operations in LHLO GPU
  dialect.
- Make 'algorithm' an optional attribute to better match with XLA HLO backend config.
- Replace 'alpha' with 'alpha_real' and 'alpha_complex' to support complex GEMM correctly.
- Generate GemmThunk off of LHLO GPU Gemm operations.

PiperOrigin-RevId: 345250840
2020-12-02 09:43:12 -08:00
Adrian Kuegel d14c63da54 Add a canonicalization pattern to remove redundant dynamic_reshapes.
PiperOrigin-RevId: 344517381
2020-11-27 04:46:50 -08:00
Adrian Kuegel 6a71a84302 Support different input/output type for TransformUnrankedHlo.
Also generate the tf.Equal kernel, now that it works.

PiperOrigin-RevId: 344402014
2020-11-26 04:20:34 -08:00
Alexander Belyaev 5583c63cab [KERNEL_GEN] Add unranked Conj kernel.
PiperOrigin-RevId: 344243271
2020-11-25 06:37:26 -08:00
Smit Hinsu b016b5a219 Fix constant folding of mhlo.convert op with i1 element types
Boolean element values should be fetched as an unsigned integer and not as a signed integer, which would return -1 for true.

Also added a TODO to handle unsigned types correctly, as we don't seem to be using unsigned types.

PiperOrigin-RevId: 343927564
2020-11-23 14:18:28 -08:00
Rahul Joshi ac54c5ccfa [XLA:GPU] Convert Cholesky custom call in XLA HLO to LHLO GPU Dialect.
- Restructured LHLO GPU Cholesky to better match XLA HLO by eliminating the
  untyped buffer and changing is_upper attribute to is_lower.
- Change LhloDialectEmitter to emit LHLO GPU Cholesky operation.

PiperOrigin-RevId: 343873516
2020-11-23 10:06:21 -08:00
A. Unique TensorFlower 7f239c7ba2 Add canonicalizer for Reshape(Broadcast(X)) pattern when it is an identity sequence
PiperOrigin-RevId: 343251257
2020-11-19 02:32:45 -08:00
Stephan Herhut 0c7152e65c Extend fusion root heuristic to also work in partially bufferized programs.
We now follow data flow through tensor_cast, tensor_load and tensor_to_memref
operations.

PiperOrigin-RevId: 342851104
2020-11-17 06:34:48 -08:00
Tres Popp be9ae88eaa Consider aliases through control flow for lhlo-fuse-linalg.
This should handle scf.if and shape.assuming regions,
which we care about in kernel_gen.

PiperOrigin-RevId: 342584762
2020-11-16 01:12:45 -08:00
Stephan Herhut c344695bca Properly configure patterns in hlo to lhlo conversion with typeconverter.
Otherwise, materializations for partial conversions do not work.

PiperOrigin-RevId: 342215962
2020-11-13 00:58:51 -08:00
Rahul Joshi bbaad416a1 [MLIR] Update tests to eliminate public function declarations.
- Also fixed kernel_gen embed_memref_prints to mark the inserted declarations private.
- This is in prep for a proposed MLIR change to disallow public declarations.

PiperOrigin-RevId: 342081252
2020-11-12 10:42:59 -08:00
Rahul Joshi 1958f228ec [MLIR:HLO] Extend CustomCall to support multiple outputs.
- Extend MHLO CustomCall to have multiple tensors as results.
- Extend LHLO CustomCall to have multiple memrefs for output operands.
- Fix HLO->LHLO and XLA HLO->LHLO mapping for CustomCall to setup the
  operand_segment_sizes attribute correctly.

PiperOrigin-RevId: 342067762
2020-11-12 09:46:25 -08:00
Tres Popp 1dffa62fe9 Fold away shape.shape_of(mhlo.dynamic_reshape(inp, shape))
This specific pattern can be replaced with the shape
passed to dynamic_reshape. It is implemented as a
canonicalization on mhlo.dynamic_reshape to fit into
the existing canonicalization infrastructure.

PiperOrigin-RevId: 342009365
2020-11-12 02:48:26 -08:00
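A sketch of the folded pattern with hypothetical types: the shape of the reshape's result is, by construction, the %shape operand itself.

```mlir
%r = "mhlo.dynamic_reshape"(%inp, %shape)
    : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
%s = shape.shape_of %r : tensor<?x?xf32> -> tensor<2xindex>
// Canonicalization replaces uses of %s with %shape.
```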
Alexander Belyaev d4f2c767d3 [HLO] Fix HLO DynamicBroadcastInDimOp -> LHLO lowering.
The conversion had a bug in computation of strides and sizes args for std.memref_reinterpret_cast. The previous version also relied on linalg::ReshapeOp to do broadcasting when the rank of the output was higher than the rank of the input. Now the broadcasting is entirely done via descriptor modification and linalg::ReshapeOp was replaced with CopyOp.

PiperOrigin-RevId: 341379871
2020-11-09 04:24:40 -08:00
Smit Hinsu 4ef12aa000 Update GetDimensionSize and SetDimensionSize ops to use I64 attribute for dimension
This is to match with HLO semantics and general dimension semantics in MLIR.

Also,

* Define minimal verifier for these ops.
* Add folder for SetDimensionSize op on static shaped dimension.
* Fix assumption of ranked shape in GetDimensionSize op.

PiperOrigin-RevId: 341150923
2020-11-06 18:03:04 -08:00
Robert Suderman a926e0f040 Removed Op(Complex, Real) lowering to address complex type inference issue
Lowerings that depended on operations between real and complex types may
not infer the correct intermediate type. These operations were removed, as
they are not technically legally generated operations. Updated tests
to validate this.

PiperOrigin-RevId: 341128903
2020-11-06 15:24:23 -08:00
Tres Popp af4c9774dc Handle rank 1 broadcasts in unranked kernel lowering.
Previously this started at rank 2 after checking for scalars and equal shapes. This resulted in cases such as <1xf32> + <2xf32> being treated as impossible.

PiperOrigin-RevId: 341043965
2020-11-06 07:22:43 -08:00
Dmitry Volodin 1821c69910 PR #44405: Fix typos in compiler directory
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44405

Splitting #43857 by top-level directories.
Copybara import of the project:

--
fa5da7d5478649d11321dcac9f867b0a57e4798a by Dmitry Volodin <mr.molkree@gmail.com>:

fix typos in compiler dir

--
4d3c9f047f7ecb8ab299f1bf28a86fd39096eee7 by Dmitry Volodin <mr.molkree@gmail.com>:

fix one test as "atleast" in it comes from Bazel

--
9440ebaaa9fc4a735f7f72f0c8f0de4ec58afbd6 by Dmitry Volodin <mr.molkree@gmail.com>:

a bit more

PiperOrigin-RevId: 340819994
2020-11-05 03:31:54 -08:00
Alexander Belyaev 3d930d08c2 [HLO] Delete LHLO memref cast ops and migrate to STD ones.
PiperOrigin-RevId: 340663578
2020-11-04 09:26:34 -08:00
Richard Uhler 82031b356c Improve error message for improperly shaped slice indices.
The slice indices must be rank-1 and have the same number of elements as the
rank of the operand. Give reasonable error messages for violations of these
requirements instead of a misleading error message that the types of the
indices don't all match.

PiperOrigin-RevId: 340660822
2020-11-04 09:10:51 -08:00
Sean Silva d3ea3abdec Remove `results_escape_functions` from HloLegalizeToLhlo
PiperOrigin-RevId: 340464958
2020-11-03 09:49:56 -08:00
A. Unique TensorFlower e866aac3ac Integrate LLVM at llvm/llvm-project@72ddd559b8
Updates LLVM usage to match
[72ddd559b8aa](https://github.com/llvm/llvm-project/commit/72ddd559b8aa)

PiperOrigin-RevId: 340292055
2020-11-02 12:29:52 -08:00
Tres Popp 81e8d778c4 Fix bug using std.rank instead of shape.rank
PiperOrigin-RevId: 339890070
2020-10-30 09:59:24 -07:00
Tres Popp 76b30fd426 Move unranked chlo lowering to transform_unranked_hlo.
Additionally:
- Forward listeners through new if/else op builders.
This corrects an error that led to incomplete legalization of broadcasted op
lowering.
- Use OpConversionPattern to ensure up-to-date operand values are used.
PiperOrigin-RevId: 339838833
2020-10-30 02:56:44 -07:00
Benjamin Kramer 3bf4277ea4 [MLIR] Add a lmhlo.reduce -> linalg.generic converter
Doesn't support tensors right now, as it's somewhat hairy to support both at
the same time. Since we use a generic lowering, the result is messy
and needs a mem2reg pass to eliminate extra load/store/allocas.

PiperOrigin-RevId: 339562971
2020-10-28 16:38:19 -07:00
Smit Hinsu 6eda9ed273 Add compare_type optional attribute to CompareOp in HLO dialects
If unspecified, `compare_type` is FLOAT for float element types, SIGNED for signed element types and UNSIGNED for unsigned element types. compare_type can be TOTALORDER for float element types.

- Added import and export support for the attribute.
- Restricted legalization from HLO to TF to the default compare types.
- Updated existing usage of the CompareOp

PiperOrigin-RevId: 339099219
2020-10-26 12:58:29 -07:00
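A hypothetical example of the attribute in use, assuming the string-typed comparison attributes of that era:

```mlir
%c = "mhlo.compare"(%a, %b)
    {comparison_direction = "LT", compare_type = "TOTALORDER"}
    : (tensor<f32>, tensor<f32>) -> tensor<i1>
```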
Richard Uhler f9843fabe1 Use InferTypeOpInterface for HLO_SliceOp.
Instead of having a custom builder to construct a slice op without an explicit
return type.

PiperOrigin-RevId: 339058864
2020-10-26 09:54:13 -07:00
Benjamin Kramer 31c1c3aa1f Integrate LLVM at llvm/llvm-project@c89447b659
Updates LLVM usage to match
[c89447b65984](https://github.com/llvm/llvm-project/commit/c89447b65984)

PiperOrigin-RevId: 338560059
2020-10-22 15:23:24 -07:00
A. Unique TensorFlower 33c450e4cb Fix the MHLO to LMHLO lowering of 'gather'
The lowering assumes that the 'gather' op attributes are identical in both MHLO and LMHLO. But that's not true; some time ago the MHLO version was changed to pack 4 of its attributes into a struct. By doing the same for the LMHLO version we both fix the lowering for this op and resolve a longstanding TODO.

PiperOrigin-RevId: 337943946
2020-10-19 15:14:05 -07:00
A. Unique TensorFlower 7a983ea389 Add folder for mhlo::pad
PiperOrigin-RevId: 337827560
2020-10-19 04:21:44 -07:00
A. Unique TensorFlower 4a18aa41ee Add folder to mhlo::round_nearest_afz
PiperOrigin-RevId: 337823786
2020-10-19 03:45:15 -07:00
Jacques Pienaar 27968619b7 Verify non-scalar inputs for HLO concat
XLA HLO concat does not accept scalars, so fail verification if this occurs. Avoids segfault when accessing an empty output shape.

PiperOrigin-RevId: 337618167
2020-10-16 19:39:31 -07:00
A. Unique TensorFlower 706718b4fb Permit vector types in lmhlo to std lowering.
PiperOrigin-RevId: 337523303
2020-10-16 09:47:02 -07:00
Stephan Herhut 2e30b59ddc Extend hlo-fuse-linalg slightly to support aliased returns.
The fusion heuristic identifies the root of a fusion by checking whether an
output of a linalg operation is a function result. It did not consider outputs
flowing through aliasing operations (like casts).

PiperOrigin-RevId: 337479910
2020-10-16 04:04:23 -07:00
A. Unique TensorFlower 51cd4200b6 Make LMHLO's Dot have the same power as MHLO's DotGeneral.
PiperOrigin-RevId: 337391565
2020-10-15 15:09:06 -07:00
A. Unique TensorFlower 05ee41baf8 Add folder for mhlo::scatter
PiperOrigin-RevId: 337274351
2020-10-15 03:26:05 -07:00
Rahul Joshi f6b4e6758a Add GPU specific LMHLO level ops
- Introduce operations in a new lmhlo_gpu dialect that map to GPU library function calls
  in the XLA:GPU backend.
- Add basic unit tests as well.

PiperOrigin-RevId: 337132166
2020-10-14 11:23:55 -07:00
Tres Popp f6af1fc134 Support hlo to lhlo buffer placement through shape.assuming ops.
PiperOrigin-RevId: 336287728
2020-10-09 07:14:12 -07:00
Stephan Herhut d986bd7ad7 Use tensor_cast instead of mhlo::reshape in the lowering of unranked binary operations.
We know that the value already is a scalar and we just want to update the type, so no need to reshape anything.

PiperOrigin-RevId: 336252315
2020-10-09 01:46:48 -07:00
A. Unique TensorFlower 3736c5542f [MLIR][KernelGen] Fix unranked codegeneration in kernel generator
PiperOrigin-RevId: 335847086
2020-10-07 05:39:55 -07:00
A. Unique TensorFlower bae0815ef0 [MLIR][KernelGen] Legalize `atan` to approximation
PiperOrigin-RevId: 335417836
2020-10-05 08:05:52 -07:00
A. Unique TensorFlower 7367eac074 Add folder for mhlo::remainder
PiperOrigin-RevId: 335372628
2020-10-05 02:20:01 -07:00
Tim Shen c708bfd6d0 [MLIR] Add cbrt, reduce-precision, and bitcast ops to MHLO.
PiperOrigin-RevId: 335109804
2020-10-02 15:13:18 -07:00
Adrian Kuegel 3eb767b43d Add missing lowering step for IsFiniteOp.
Also add a BUILD target for generating the GPU kernel.

PiperOrigin-RevId: 334993362
2020-10-02 03:09:00 -07:00
A. Unique TensorFlower 049ca060a1 [MLIR][KernelGen] Legalize `atan2` to approximation
Legalize `atan2` analogously to XLA.  `atan2` is first reduced to `atan` on the
interval [-1, 1] and subsequently approximated.  This CL also adds e2e tests for
trigonometric approximations.

PiperOrigin-RevId: 334794336
2020-10-01 05:34:48 -07:00
A. Unique TensorFlower 4b1809784a Support collapse_slice_dims in the mhlo.gather->mhlo.slice canonicalizer
PiperOrigin-RevId: 334774763
2020-10-01 02:46:49 -07:00
Rahul Joshi bce128b070 Introduce CustomCall operation in LHLO Dialect
- And add conversion from MHLO CustomCall to LHLO CustomCall
- According to the XLA documentation, the called function should not be side-effecting,
  so the argument MemRefs are marked as MemRead.

PiperOrigin-RevId: 334737196
2020-09-30 20:56:18 -07:00
Benjamin Kramer dfe64d3958 Implement InferShapedTypeOpInterface for mhlo.complex
Binary companion for 8bcd33e4b7

PiperOrigin-RevId: 334651523
2020-09-30 12:14:15 -07:00