Adrian Kuegel
37e31f8b26
Lower Expm1 kernel to math.ExpM1.
...
PiperOrigin-RevId: 358152908
2021-02-18 04:54:23 -08:00
Richard Uhler
b579bd5d9e
Support dynamic-shaped operand in verification of BroadcastInDim.
...
Verification of HLO_BroadcastInDimOp was previously failing or crashing if the
operand had a dynamic shape or was unranked. Update the verification code to
allow the operand to be unranked or have dynamic shape.
PiperOrigin-RevId: 358056793
2021-02-17 16:18:09 -08:00
A. Unique TensorFlower
220deb3709
[MLIR][CHLO] Add legalization for `chlo.polygamma` to MHLO
...
PiperOrigin-RevId: 357954624
2021-02-17 08:33:01 -08:00
A. Unique TensorFlower
81abaf364d
[MLIR][MHLO] Add polygamma op to the CHLO dialect
...
PiperOrigin-RevId: 357724465
2021-02-16 08:32:33 -08:00
Adrian Kuegel
b594254c79
[mhlo] Lower int->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357566262
2021-02-15 06:38:09 -08:00
Benjamin Kramer
240a44de82
[mhlo] Lower int->int cast to sign extension instead of zero extension
...
Signless does not mean unsigned here. Currently mhlo only has signed types.
PiperOrigin-RevId: 357561712
2021-02-15 05:58:47 -08:00
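The distinction above matters for negative values: under sign extension the high bit is replicated, under zero extension it is not. A minimal Python sketch of the two interpretations (illustrative only, not the lowering code itself):

```python
def sign_extend(value: int, from_bits: int) -> int:
    # Interpret the low `from_bits` bits of `value` as a signed integer.
    mask = (1 << from_bits) - 1
    value &= mask
    sign_bit = 1 << (from_bits - 1)
    return (value ^ sign_bit) - sign_bit

def zero_extend(value: int, from_bits: int) -> int:
    # Interpret the low `from_bits` bits as unsigned.
    return value & ((1 << from_bits) - 1)

# The bit pattern 0xFF as an i8: signed interpretation is -1,
# unsigned is 255. Signless mhlo values follow the signed reading.
print(sign_extend(0xFF, 8))  # -1
print(zero_extend(0xFF, 8))  # 255
```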
Adrian Kuegel
8672735e9a
[mhlo] Lower float->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357553098
2021-02-15 04:36:36 -08:00
A. Unique TensorFlower
89d81adf6d
[mhlo] Lower float->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357541594
2021-02-15 03:11:56 -08:00
Benjamin Kramer
3e80d91e73
[mhlo] Lower float->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357534118
2021-02-15 02:17:19 -08:00
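The int->bool and float->bool lowerings above both reduce to a compare-with-zero, matching C++ conversion semantics. A minimal Python sketch of that behavior (an illustration, not the actual lowering):

```python
def to_bool(x) -> bool:
    # Numeric-to-bool as a comparison with zero, as in C++:
    # any nonzero value converts to true, including negatives
    # and NaN (NaN != 0 is true under IEEE 754).
    return x != 0

print(to_bool(-3))            # True
print(to_bool(0.0))           # False
print(to_bool(float("nan")))  # True
```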
Adrian Kuegel
824bc9c425
Improve broadcast transformation to treat dynamic shapes with 1 element as scalar.
...
A shape that contains exactly one element is effectively a scalar. This leads
to a speedup in cases where we have a binary op with one operand that is
effectively a scalar, because we can use the fast path.
PiperOrigin-RevId: 357515552
2021-02-14 23:25:41 -08:00
A. Unique TensorFlower
4060a86fe2
Integrate LLVM at llvm/llvm-project@2bfe27da17
...
Updates LLVM usage to match
[2bfe27da171e](https://github.com/llvm/llvm-project/commit/2bfe27da171e)
PiperOrigin-RevId: 357196336
2021-02-12 08:32:03 -08:00
Tim Shen
6fa6974e8d
[XLA/GPU] Plumb through Bitcast op for LMHLO.
...
Also remove BitcastOp. XLA bitcast requires the input buffer to alias the output buffer, which makes bitcast always a no-op.
PiperOrigin-RevId: 356884383
2021-02-10 19:45:40 -08:00
Alexander Belyaev
36e04d92c0
[KERNEL_GEN] Add a pattern to bufferize `mhlo.reshape(<unranked_tensor>)`.
...
PiperOrigin-RevId: 356720899
2021-02-10 06:32:21 -08:00
A. Unique TensorFlower
4a29ca3b1d
Add layout to mhlo::InfeedOp td.
...
PiperOrigin-RevId: 356286875
2021-02-08 09:48:14 -08:00
Tres Popp
d086b8a0ec
Correct HLO atan2 lowering in cases of -inf and -0 inputs.
...
This is done by removing the approximation and lowering to atan2 lib calls later, which makes the implementation the same as XLA's. Note that if the approximation is brought back later, it can be fixed by changing the IR check `less-than(X, 0)` to `less-than(copysign(X, 1), 0)`.
PiperOrigin-RevId: 356253941
2021-02-08 06:58:04 -08:00
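The `-0.0` corner case above can be seen directly: an ordinary less-than test cannot observe the sign of negative zero, while `copysign` can, and atan2's branch cut depends on that sign. A quick Python illustration:

```python
import math

x = -0.0
# A plain comparison misses the sign of negative zero:
print(x < 0)                    # False
# copysign transfers the sign bit onto a nonzero magnitude,
# making it observable via an ordinary comparison:
print(math.copysign(1, x) < 0)  # True
# atan2 needs the sign: atan2(-0.0, -1.0) is -pi, not +pi.
print(math.atan2(-0.0, -1.0))   # -3.141592653589793
print(math.atan2(0.0, -1.0))    # 3.141592653589793
```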
A. Unique TensorFlower
2aa8a90c69
Integrate LLVM at llvm/llvm-project@a1a1d338e9
...
Updates LLVM usage to match
[a1a1d338e99d](https://github.com/llvm/llvm-project/commit/a1a1d338e99d)
PiperOrigin-RevId: 355927079
2021-02-05 14:20:29 -08:00
Rahul Joshi
b251712b1d
[XLA:GPU] Add conversion from HLO -> MLIR LMHLO for TriangularSolve
...
- Also add layout attributes for inputs and output for error checking.
PiperOrigin-RevId: 355863625
2021-02-05 09:18:02 -08:00
A. Unique TensorFlower
99bc05f2e4
Integrate LLVM at llvm/llvm-project@91e7a17133
...
Updates LLVM usage to match
[91e7a1713332](https://github.com/llvm/llvm-project/commit/91e7a1713332)
PiperOrigin-RevId: 355702100
2021-02-04 13:42:31 -08:00
Mahesh Ravishankar
44d0464d16
Use linalg.fill on tensors instead of tensor.generate in MHLO -> Linalg conversion.
...
linalg.fill on tensors is a structured op that allows the use of tile + fuse
to reduce the fill overhead.
PiperOrigin-RevId: 355490400
2021-02-03 15:03:49 -08:00
Stephan Herhut
6cd1875ee4
Implement lowering of chlo::zeta to mhlo dialect.
...
PiperOrigin-RevId: 355395581
2021-02-03 07:50:05 -08:00
A. Unique TensorFlower
04110a4b1c
Integrate LLVM at llvm/llvm-project@67dfe9c8d7
...
Updates LLVM usage to match
[67dfe9c8d70c](https://github.com/llvm/llvm-project/commit/67dfe9c8d70c)
PiperOrigin-RevId: 355235205
2021-02-02 13:09:20 -08:00
Tres Popp
ae722a883f
Improve performance of lowered chlo.pow with integers
...
The new lowering always takes 6 loop iterations, rather than iterating as many times as the exponent's value.
PiperOrigin-RevId: 355131133
2021-02-02 03:28:38 -08:00
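The fixed trip count comes from exponentiation by squaring: each iteration consumes one bit of the exponent. A hedged Python sketch of the idea (the actual lowering is MLIR; the rationale that 6 bits suffice because larger exponents overflow 64-bit integers for any |base| >= 2, with bases like -1, 0, 1 special-cased, is an assumption here):

```python
def pow_by_squaring(base: int, exp: int, num_bits: int = 6) -> int:
    # Fixed trip count: each of the `num_bits` iterations consumes
    # one bit of the exponent, so 6 bits cover exponents up to 63.
    result = 1
    for _ in range(num_bits):
        if exp & 1:
            result *= base
        base *= base
        exp >>= 1
    return result

print(pow_by_squaring(3, 5))   # 243
print(pow_by_squaring(2, 10))  # 1024
```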
A. Unique TensorFlower
f40ccc5b4b
[MLIR][CHLO] Add `chlo.digamma` and lowering to MHLO
...
PiperOrigin-RevId: 355122765
2021-02-02 02:10:17 -08:00
Adrian Kuegel
c2115f56c7
Integrate LLVM at llvm/llvm-project@8f7f2c4211
...
Updates LLVM usage to match
[8f7f2c4211ca](https://github.com/llvm/llvm-project/commit/8f7f2c4211ca)
PiperOrigin-RevId: 355120697
2021-02-02 01:54:32 -08:00
Adrian Kuegel
96f8771ed7
Add MLIR generated kernel for Angle kernel.
...
This also requires a canonicalization pattern to remove a redundant dynamic
reshape from rank 1 to rank 1.
PiperOrigin-RevId: 355113135
2021-02-02 00:47:20 -08:00
Rahul Joshi
8e3890e8e8
[MLIR:HLO] Add AllGather and AllToAll operations to LMHLO dialect.
...
- Use a common base class for AllReduce, AllGather, and AllToAll in the ODS spec.
- Add basic verification for replica groups attribute.
PiperOrigin-RevId: 354969654
2021-02-01 10:23:46 -08:00
Stephan Herhut
e61ef86fdb
Add zeta and broadcasting_zeta to chlo dialect.
...
PiperOrigin-RevId: 354500879
2021-01-29 03:22:52 -08:00
Hanhan Wang
30ce82790d
Upstream mhlo.reduce lowering to Linalg to MHLO repo.
...
In IREE, we use an indexed generic op to handle the initial value. Here, however,
we lower it to a generic op that carries an init_tensor, and leave the handling
of the initialization problem to later passes.
PiperOrigin-RevId: 354294807
2021-01-28 05:46:09 -08:00
Lei Zhang
39589add22
Use the correct shape when converting mhlo.reshape
...
If mhlo.reshape is not purely collapsing some consecutive operand
dimensions into result dimensions, we will generate two linalg
reshape ops for it: the first one collapses all operand dimensions
into one dimension, and the second one expands it to all result
dimensions. For this case, the number of collapsed/expanded dimensions
should be coming strictly from the operand/result. It is different
from the case where we can generate one linalg reshape. For that case,
the reassociation map should have rank equal to the largest among
operand/result shape.
PiperOrigin-RevId: 354293826
2021-01-28 05:37:54 -08:00
A. Unique TensorFlower
e0a7be7fb1
[MLIR][CHLO] Add `chlo.lgamma` and lowering to `hlo`
...
PiperOrigin-RevId: 354287316
2021-01-28 04:35:03 -08:00
A. Unique TensorFlower
d77c9ad6fa
[MLIR][CHLO] Add `is_inf`, `is_pos_inf`, and `is_neg_inf` to CHLO dialect
...
Also add the respective lowerings to MHLO.
PiperOrigin-RevId: 354101955
2021-01-27 09:00:56 -08:00
Rahul Joshi
44deae2aa1
[MLIR:HLO] Extend AllReduce to support multiple inputs and results (to model tuples).
...
- Instead of SameTypeOperands, add custom verification to check if operands and
results pairwise have the same type.
PiperOrigin-RevId: 353986341
2021-01-26 17:25:22 -08:00
Benjamin Kramer
f6b24a6d54
[mlir][hlo] Make min/max always propagate NaNs
...
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.
PiperOrigin-RevId: 353657779
2021-01-25 09:04:16 -08:00
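NaN-propagating min/max differs from C's `fmin`/`fmax`, which return the non-NaN operand when exactly one input is NaN. A minimal Python sketch of the propagating behavior (illustrative, not the lowering itself):

```python
import math

def nan_propagating_min(a: float, b: float) -> float:
    # Propagate NaN from either operand, unlike C's fmin, which
    # returns the non-NaN argument when only one input is NaN.
    if math.isnan(a) or math.isnan(b):
        return float("nan")
    return min(a, b)

print(nan_propagating_min(1.0, 2.0))           # 1.0
print(nan_propagating_min(1.0, float("nan")))  # nan
```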
A. Unique TensorFlower
b1438eebcb
[mlir][hlo] Make min/max always propagate NaNs
...
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.
PiperOrigin-RevId: 353628258
2021-01-25 05:43:15 -08:00
Benjamin Kramer
6af4bccfde
[mlir][hlo] Make min/max always propagate NaNs
...
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.
PiperOrigin-RevId: 353624935
2021-01-25 05:15:24 -08:00
A. Unique TensorFlower
ae2d46414d
[MLIR][KernelGen] Add erfc kernel for f16
...
PiperOrigin-RevId: 353209468
2021-01-22 03:38:30 -08:00
A. Unique TensorFlower
ef8ccdaebc
[MLIR] Add mhlo.logistic lowering to linalg
...
PiperOrigin-RevId: 353205440
2021-01-22 03:03:16 -08:00
A. Unique TensorFlower
c846f925d4
[MLIR][KernelGen] Add chlo.erfc lowering for f32
...
PiperOrigin-RevId: 353201886
2021-01-22 02:33:21 -08:00
A. Unique TensorFlower
56758a9562
[MLIR][KernelGen] Lower mhlo.log_plus_one to std.log1p
...
PiperOrigin-RevId: 353200069
2021-01-22 02:18:32 -08:00
A. Unique TensorFlower
1a37078132
[MLIR][KernelGen] Add chlo.erfc lowerings for f64
...
PiperOrigin-RevId: 352993223
2021-01-21 04:42:56 -08:00
A. Unique TensorFlower
bec2e625a2
[MLIR][KernelGen] Add approximation lowering for mhlo.erf operation on f64
...
PiperOrigin-RevId: 352977456
2021-01-21 02:48:43 -08:00
Alexander Belyaev
7aa64ee0b7
[MLIR] Migrate TF from STD complex ops to ComplexDialect.
...
PiperOrigin-RevId: 352966408
2021-01-21 01:22:25 -08:00
Hanhan Wang
46112c95c6
Use `uitofp` when converting a boolean to floating-point.
...
It was previously lowered to `sitofp`, which converted `true` to `-1.0`.
PiperOrigin-RevId: 352958489
2021-01-21 00:15:30 -08:00
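The bug above follows from how a 1-bit integer is read: its single bit is also its sign bit, so a signed conversion of the bit pattern 1 yields -1. A minimal Python sketch of the two readings (illustrative, not the lowering code):

```python
def sitofp_i1(bit: int) -> float:
    # Signed view of an i1: the only bit is the sign bit, so the
    # bit pattern 1 reads as -1 (the bug described above).
    return -1.0 if bit & 1 else 0.0

def uitofp_i1(bit: int) -> float:
    # Unsigned view of an i1: bit pattern 1 reads as 1 (the fix).
    return 1.0 if bit & 1 else 0.0

print(sitofp_i1(1))  # -1.0
print(uitofp_i1(1))  # 1.0
```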
Stephan Herhut
70a351f301
Add chlo.acosh operation and associated lowerings.
...
PiperOrigin-RevId: 352839289
2021-01-20 11:43:44 -08:00
Tres Popp
ba0346b071
Integrate LLVM at llvm/llvm-project@96ef4f307d
...
Updates LLVM usage to match
[96ef4f307df2](https://github.com/llvm/llvm-project/commit/96ef4f307df2)
PiperOrigin-RevId: 352786460
2021-01-20 07:09:47 -08:00
A. Unique TensorFlower
ec5f5667e1
[MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
...
PiperOrigin-RevId: 352773540
2021-01-20 05:31:15 -08:00
A. Unique TensorFlower
96fb617413
[MLIR][KernelGen] Add erf kernel and missing lowering for f16 type
...
PiperOrigin-RevId: 352416184
2021-01-18 08:21:15 -08:00
Tres Popp
ba2ee556f1
Handle negative exponents for lowering of hlo.pow
...
PiperOrigin-RevId: 352382812
2021-01-18 03:47:28 -08:00
A. Unique TensorFlower
3763740910
[MLIR][KernelGen] Add erf kernel for f32 arguments and missing lowerings
...
PiperOrigin-RevId: 352381016
2021-01-18 03:35:13 -08:00
A. Unique TensorFlower
bcdb3c3548
[MLIR] Lower mhlo.clamp to linalg
...
PiperOrigin-RevId: 351998800
2021-01-15 06:45:38 -08:00