A. Unique TensorFlower
89d81adf6d
[mhlo] Lower float->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357541594
2021-02-15 03:11:56 -08:00
Benjamin Kramer
3e80d91e73
[mhlo] Lower float->bool to a comparison with zero
...
This matches what TF (and C++) do in this case.
PiperOrigin-RevId: 357534118
2021-02-15 02:17:19 -08:00
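The two commits above describe the semantics, not the code; as a rough sketch, converting a float to a boolean by comparing against zero (the C++ behavior the commits reference) looks like this, where `float_to_bool` is a hypothetical name for illustration:

```python
def float_to_bool(x: float) -> bool:
    # Lowered as a comparison with zero (x != 0.0), matching C++ semantics.
    # Note: NaN != 0.0 is true, so NaN converts to true; -0.0 == 0.0, so
    # negative zero converts to false.
    return x != 0.0

print(float_to_bool(0.0))           # False
print(float_to_bool(-0.0))          # False
print(float_to_bool(2.5))           # True
print(float_to_bool(float("nan")))  # True
```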
A. Unique TensorFlower
4060a86fe2
Integrate LLVM at llvm/llvm-project@2bfe27da17
...
Updates LLVM usage to match
[2bfe27da171e](https://github.com/llvm/llvm-project/commit/2bfe27da171e)
PiperOrigin-RevId: 357196336
2021-02-12 08:32:03 -08:00
Tim Shen
6fa6974e8d
[XLA/GPU] Plumb through Bitcast op for LMHLO.
...
Also remove BitcastOp. XLA bitcast requires the input buffer to alias the output buffer, which makes bitcast always a no-op.
PiperOrigin-RevId: 356884383
2021-02-10 19:45:40 -08:00
A. Unique TensorFlower
4a29ca3b1d
Add layout to mhlo::InfeedOp td.
...
PiperOrigin-RevId: 356286875
2021-02-08 09:48:14 -08:00
Rahul Joshi
b251712b1d
[XLA:GPU] Add conversion from HLO -> MLIR LMHLO for TriangularSolve
...
- Also add layout attributes for inputs and output for error checking.
PiperOrigin-RevId: 355863625
2021-02-05 09:18:02 -08:00
Stephan Herhut
6cd1875ee4
Implement lowering of chlo::zeta to mhlo dialect.
...
PiperOrigin-RevId: 355395581
2021-02-03 07:50:05 -08:00
Tres Popp
ae722a883f
Improve performance of lowered chlo.pow with integers
...
The new lowering always takes 6 loop iterations rather than iterating once per unit of the exponent.
PiperOrigin-RevId: 355131133
2021-02-02 03:28:38 -08:00
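The fixed iteration count above is consistent with square-and-multiply over the low 6 bits of the exponent: for any base of magnitude >= 2, exponents above 63 already overflow a 64-bit integer, so 6 bits suffice (bases -1, 0, and 1 would need special casing). A sketch under that assumption, with the hypothetical name `ipow`:

```python
def ipow(base: int, exp: int) -> int:
    # Square-and-multiply: 6 iterations cover exponents up to 2**6 - 1 = 63.
    result = 1
    for _ in range(6):
        if exp & 1:          # multiply in the current bit's power of the base
            result *= base
        exp >>= 1
        base *= base         # square the base for the next bit
    return result

print(ipow(3, 5))   # 243
print(ipow(2, 10))  # 1024
```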
A. Unique TensorFlower
f40ccc5b4b
[MLIR][CHLO] Add `chlo.digamma` and lowering to MHLO
...
PiperOrigin-RevId: 355122765
2021-02-02 02:10:17 -08:00
Rahul Joshi
8e3890e8e8
[MLIR:HLO] Add AllGather and AllToAll operations to LMHLO dialect.
...
- Use a common base class for AllReduce, AllGather, and AllToAll in the ODS spec.
- Add basic verification for replica groups attribute.
PiperOrigin-RevId: 354969654
2021-02-01 10:23:46 -08:00
Rahul Joshi
1be1123c70
[XLA:GPU] Add support for PartitionId
...
PiperOrigin-RevId: 354599221
2021-01-29 13:31:54 -08:00
Stephan Herhut
e61ef86fdb
Add zeta and broadcasting_zeta to chlo dialect.
...
PiperOrigin-RevId: 354500879
2021-01-29 03:22:52 -08:00
Rahul Joshi
c97cff9a7b
[XLA:GPU] Migrate AllReduce thunk emission to MLIR.
...
PiperOrigin-RevId: 354335704
2021-01-28 10:06:06 -08:00
A. Unique TensorFlower
e0a7be7fb1
[MLIR][CHLO] Add `chlo.lgamma` and lowering to `hlo`
...
PiperOrigin-RevId: 354287316
2021-01-28 04:35:03 -08:00
A. Unique TensorFlower
c3ddcd6c7f
[MLIR][CHLO] Implement type inference for `is_inf`-like operations in CHLO
...
PiperOrigin-RevId: 354265834
2021-01-28 01:37:04 -08:00
A. Unique TensorFlower
fe2e5a175f
[MLIR][HLO] Implement type inference for `is_finite` op
...
PiperOrigin-RevId: 354261420
2021-01-28 00:56:12 -08:00
A. Unique TensorFlower
d77c9ad6fa
[MLIR][CHLO] Add `is_inf`, `is_pos_inf`, and `is_neg_inf` to CHLO dialect
...
Also add the respective lowerings to MHLO.
PiperOrigin-RevId: 354101955
2021-01-27 09:00:56 -08:00
Adrian Kuegel
f4f728f18e
Add SameOperandsAndResultShape trait to ConvertOp and IsFiniteOp.
...
PiperOrigin-RevId: 354070005
2021-01-27 05:10:01 -08:00
Rahul Joshi
44deae2aa1
[MLIR:HLO] Extend AllReduce to support multiple inputs and results (to model tuples).
...
- Instead of SameTypeOperands, add custom verification to check that operands and
results pairwise have the same type.
PiperOrigin-RevId: 353986341
2021-01-26 17:25:22 -08:00
Benjamin Kramer
f6b24a6d54
[mlir][hlo] Make min/max always propagate NaNs
...
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.
PiperOrigin-RevId: 353657779
2021-01-25 09:04:16 -08:00
A. Unique TensorFlower
b1438eebcb
[mlir][hlo] Make min/max always propagate NaNs
...
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.
PiperOrigin-RevId: 353628258
2021-01-25 05:43:15 -08:00
Benjamin Kramer
6af4bccfde
[mlir][hlo] Make min/max always propagate NaNs
...
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.
PiperOrigin-RevId: 353624935
2021-01-25 05:15:24 -08:00
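The min/max NaN semantics the three entries above describe can be sketched as follows; a plain "x > y ? x : y" select silently drops a NaN in the second operand, whereas the propagating variant (here a hypothetical helper) returns NaN whenever either operand is NaN:

```python
import math

def nan_propagating_max(x: float, y: float) -> float:
    # NaN in either operand wins, unlike a bare compare-and-select.
    if math.isnan(x) or math.isnan(y):
        return float("nan")
    return x if x > y else y

print(nan_propagating_max(1.0, 2.0))           # 2.0
print(nan_propagating_max(float("nan"), 2.0))  # nan
```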
A. Unique TensorFlower
ef8ccdaebc
[MLIR] Add mhlo.logistic lowering to linalg
...
PiperOrigin-RevId: 353205440
2021-01-22 03:03:16 -08:00
A. Unique TensorFlower
56758a9562
[MLIR][KernelGen] Lower mhlo.log_plus_one to std.log1p
...
PiperOrigin-RevId: 353200069
2021-01-22 02:18:32 -08:00
Tim Shen
d1c785381d
[XLA/GPU] Add XLA HLO -> LMHLO conversion to several ops, and implement them in XLA/GPU.
...
PiperOrigin-RevId: 353158172
2021-01-21 19:57:54 -08:00
Alexander Belyaev
7aa64ee0b7
[MLIR] Migrate TF from STD complex ops to ComplexDialect.
...
PiperOrigin-RevId: 352966408
2021-01-21 01:22:25 -08:00
Hanhan Wang
46112c95c6
Use `uitofp` when converting a boolean to floating-point.
...
It was previously lowered to `sitofp`, which produced `-1.0` for a true input.
PiperOrigin-RevId: 352958489
2021-01-21 00:15:30 -08:00
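To illustrate the fix above: a boolean is a 1-bit integer, and in two's complement a 1-bit value with its only bit set represents -1, so signed conversion (`sitofp`) of true yields -1.0 while unsigned conversion (`uitofp`) yields 1.0. The helper names below are hypothetical, modeling the i1 semantics:

```python
def sitofp_i1(bit: int) -> float:
    # Signed read of a 1-bit integer: the set bit is the sign bit, value -1.
    return -1.0 if bit & 1 else 0.0

def uitofp_i1(bit: int) -> float:
    # Unsigned read: the bit is simply 0 or 1.
    return 1.0 if bit & 1 else 0.0

print(sitofp_i1(1))  # -1.0
print(uitofp_i1(1))  # 1.0
```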
Stephan Herhut
70a351f301
Add chlo.acosh operation and associated lowerings.
...
PiperOrigin-RevId: 352839289
2021-01-20 11:43:44 -08:00
A. Unique TensorFlower
ec5f5667e1
[MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
...
PiperOrigin-RevId: 352773540
2021-01-20 05:31:15 -08:00
A. Unique TensorFlower
0e85b4d511
[MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
...
PiperOrigin-RevId: 352604725
2021-01-19 10:51:41 -08:00
Alexander Belyaev
9e07bdf4ea
[HLO] Move `SameOperandsAndResultShape` trait to Unary/BinaryElementwiseOp classes.
...
PiperOrigin-RevId: 352404756
2021-01-18 06:47:36 -08:00
A. Unique TensorFlower
c11ea4ef5a
[MLIR][KernelGen] Add `tf.Atanh` kernels
...
PiperOrigin-RevId: 352393602
2021-01-18 05:14:09 -08:00
Tres Popp
ba2ee556f1
Handle negative exponents for lowering of hlo.pow
...
PiperOrigin-RevId: 352382812
2021-01-18 03:47:28 -08:00
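For integer pow, a negative exponent means the exact result has magnitude below 1, which truncates to 0 unless the base is 1 or -1. The commit above does not show its exact rule, so this is a sketch of the plausible semantics with a hypothetical helper (base 0 with a negative exponent is left undefined here):

```python
def ipow_neg(base: int, exp: int) -> int:
    if exp >= 0:
        return base ** exp
    if base == 1:
        return 1
    if base == -1:
        return 1 if exp % 2 == 0 else -1
    return 0  # |base| >= 2: 1 / base**(-exp) has magnitude < 1, truncates to 0

print(ipow_neg(2, -3))   # 0
print(ipow_neg(-1, -3))  # -1
```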
A. Unique TensorFlower
b5dc600860
[MLIR] Fix line wraps in hlo_ops.td
...
PiperOrigin-RevId: 352364076
2021-01-18 01:41:41 -08:00
A. Unique TensorFlower
bcdb3c3548
[MLIR] Lower mhlo.clamp to linalg
...
PiperOrigin-RevId: 351998800
2021-01-15 06:45:38 -08:00
A. Unique TensorFlower
791d5afd28
[MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
...
PiperOrigin-RevId: 351989552
2021-01-15 05:26:57 -08:00
A. Unique TensorFlower
316f630728
[MLIR][KernelGen] Add cosh kernels and tests
...
Allow a relative tolerance in unary kernel tests. For the cosh kernels, this
accepts an observed difference of 5.6e-8 between the kernel and the `std::cosh`
reference (32829984.568665262 vs. 32829984.568665318) in one of the test cases.
PiperOrigin-RevId: 351983698
2021-01-15 04:31:30 -08:00
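The quoted difference corresponds to a relative error of roughly 1.7e-15, i.e. within a few ulps of a double near 3.3e7. A relative-tolerance check of the kind the entry above describes can be sketched as (hypothetical helper name):

```python
def close_rel(a: float, b: float, rel_tol: float) -> bool:
    # Relative comparison: |a - b| <= rel_tol * max(|a|, |b|).
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))

print(close_rel(32829984.568665262, 32829984.568665318, 1e-14))  # True
```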
A. Unique TensorFlower
181d2cad31
[MLIR][KernelGen] Add `tf.Log1p` kernel and tests
...
PiperOrigin-RevId: 351566460
2021-01-13 05:37:25 -08:00
Rahul Joshi
0651d58edf
[MLIR:HLO] Change LHLO Outfeed to support multiple inputs.
...
PiperOrigin-RevId: 351447034
2021-01-12 14:13:47 -08:00
A. Unique TensorFlower
0b85d5c510
[MLIR][KernelGen] Add asin kernels and tests
...
PiperOrigin-RevId: 351381423
2021-01-12 09:02:46 -08:00
Alexander Belyaev
ac38478350
Integrate LLVM at llvm/llvm-project@bcbdeafa9c
...
Updates LLVM usage to match
[bcbdeafa9cb3](https://github.com/llvm/llvm-project/commit/bcbdeafa9cb3)
PiperOrigin-RevId: 350763293
2021-01-08 07:57:20 -08:00
Alexander Belyaev
6c42f54298
[KERNEL_GEN] Restrict broadcast -> reshape canonicalization to identity dims.
...
This is needed to avoid the case where broadcast_in_dims also performs a permutation.
PiperOrigin-RevId: 350650342
2021-01-07 15:30:28 -08:00
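Assuming "identity dims" means the broadcast_dimensions mapping places operand dimensions into the result in their original order, the distinction the entry above draws can be sketched with a hypothetical predicate: only a strictly increasing mapping is a pure expansion (safe to fold into a reshape); anything else also transposes.

```python
def is_identity_broadcast(broadcast_dimensions):
    # broadcast_in_dim maps operand dim i to result dim broadcast_dimensions[i].
    # A strictly increasing mapping only inserts/expands dims; a non-increasing
    # one additionally permutes them and must not be folded to a reshape.
    return all(a < b for a, b in zip(broadcast_dimensions,
                                     broadcast_dimensions[1:]))

print(is_identity_broadcast([0, 1, 2]))  # True  -> foldable to a reshape
print(is_identity_broadcast([1, 0]))     # False -> performs a transpose
```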
Rahul Joshi
e3754d7b5c
[MLIR:HLO] Change LHLO Infeed to support multiple outputs.
...
PiperOrigin-RevId: 348851612
2020-12-23 14:57:08 -08:00
Rahul Joshi
bd8e768e23
[MLIR:HLO] Rename LMHLO Infeed and Outfeed to InfeedOp and OutfeedOp.
...
- To be consistent with other operation names.
PiperOrigin-RevId: 348824379
2020-12-23 11:38:05 -08:00
Rahul Joshi
8252eafa99
[NFC] Factor out repeated code out of InferFusibilityOpInterface.
...
PiperOrigin-RevId: 348671671
2020-12-22 12:04:29 -08:00
Rahul Joshi
bc367971ec
[MLIR:LHLO_GPU] Add additional constraints for batchnorm
...
- Constrain batchnorm inputs and outputs to be fp memrefs.
PiperOrigin-RevId: 348665747
2020-12-22 11:30:40 -08:00
Tres Popp
a42213b870
Define lowering of [l]mhlo.pow.
...
For floating point operations, this uses std.pow.
For integer operations, this lowers to a loop.
This adds a dependency on scf.
PiperOrigin-RevId: 348537232
2020-12-21 15:27:40 -08:00
Smit Hinsu
8d051723c0
Use InferTypeOpInterface for HLO AbsOp and fix result shape inference
...
Shape inference for ops with complex element types needs to use the complex type's element type as the result element type, not the full operand type.
Before:
"mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xtensor<4xcomplex<f32>>>
After:
"mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xf32>
PiperOrigin-RevId: 348123967
2020-12-17 17:37:07 -08:00
Rahul Joshi
8134bff98d
[XLA:GPU] Add layout attributes to LHLO_GPU Convolution operations.
...
- MLIR MemRefs do not preserve layout information correctly when unit dimensions
are involved. Operations like convolution that use cuDNN, however, need the correct
layout preserved so that we do not end up with a combination of input/filter/output
layouts that cuDNN does not support.
- Add these layouts to the convolution attributes as an I32ArrayAttr representing
the layout in "minor_to_major" form, similar to XLA.
PiperOrigin-RevId: 348034757
2020-12-17 09:26:28 -08:00
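In XLA's convention, minor_to_major lists dimensions from fastest- to slowest-varying; e.g. [3, 2, 1, 0] is row-major for a rank-4 shape. A sketch (hypothetical helper) of how such a layout determines element strides:

```python
def strides_from_minor_to_major(shape, minor_to_major):
    # Walk dims from fastest- to slowest-varying, accumulating the stride.
    strides = [0] * len(shape)
    stride = 1
    for dim in minor_to_major:
        strides[dim] = stride
        stride *= shape[dim]
    return strides

# A shape [2, 3, 4, 5] with row-major layout [3, 2, 1, 0]:
print(strides_from_minor_to_major([2, 3, 4, 5], [3, 2, 1, 0]))  # [60, 20, 5, 1]
```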
Adrian Kuegel
1f244c3e2c
Fix SignOp lowering for floating point values.
...
It didn't return 0 for 0.0 and -0.0.
Currently we emit -0.0 for -0.0, which is correct according to the HLO dialect.
For TF_SignOp we should emit 0.0 in that case; we leave that as a TODO.
Enable the tests that now work, and add another one for Int64.
Also improve the registration code; we should not register the Int32 kernel.
PiperOrigin-RevId: 347981124
2020-12-17 01:45:54 -08:00
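The floating-point sign semantics the entry above describes (zero of either sign maps to itself, nonzero values to +/-1.0) can be sketched as follows; `fsign` is a hypothetical name and NaN handling is omitted:

```python
import math

def fsign(x: float) -> float:
    if x == 0.0:
        return x  # preserves -0.0, matching the HLO dialect semantics
    return math.copysign(1.0, x)

print(fsign(-3.5))  # -1.0
print(fsign(-0.0))  # -0.0
print(fsign(2.0))   # 1.0
```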