The Sign kernel did not return 0 for 0.0 and -0.0.
Currently we emit -0.0 for -0.0, which is correct according to the HLO dialect.
For TF_SignOp we should emit 0.0 in that case; we leave that as a TODO.
Enable the tests that now work, and add another one for Int64.
Also improve the registration code: we should not register the Int32 kernel.
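For illustration, a minimal MLIR sketch of the behavior described above; the function name and shape are invented and this is not the generated kernel itself:

```mlir
func @sign(%arg0: tensor<f32>) -> tensor<f32> {
  // mhlo.sign follows HLO semantics and preserves the sign of zero, so a
  // -0.0 input yields -0.0 here, while TF_SignOp should yield 0.0
  // (the TODO above).
  %0 = "mhlo.sign"(%arg0) : (tensor<f32>) -> tensor<f32>
  return %0 : tensor<f32>
}
```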
PiperOrigin-RevId: 347602378
The Sign kernel did not return 0 for 0.0 and -0.0.
Currently we emit -0.0 for -0.0, which is correct according to the HLO dialect.
For TF_SignOp we should emit 0.0 in that case; we leave that as a TODO.
Enable the tests that now work, and add another one for Int64.
Also improve the registration code: we should not register the Int32 kernel.
PiperOrigin-RevId: 347590340
- Add this attribute to match the corresponding XLA HLO attribute on convolution
operations.
- A true value indicates a reversal of the corresponding kernel spatial dimension (see the sketch below).
- Since the XLA builder does not support this attribute, use a custom HLO converter to map from mlir::mhlo::ConvOp to XLA.
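A rough sketch of what this could look like on an mhlo convolution, assuming the attribute is the per-spatial-dimension boolean `window_reversal`; names, shapes, and exact syntax here are illustrative rather than taken from this change:

```mlir
// One boolean per kernel spatial dimension; `true` reverses that dimension.
%0 = "mhlo.convolution"(%input, %kernel) {
  batch_group_count = 1 : i64,
  feature_group_count = 1 : i64,
  dimension_numbers = {
    input_batch_dimension = 0 : i64,
    input_feature_dimension = 3 : i64,
    input_spatial_dimensions = dense<[1, 2]> : tensor<2xi64>,
    kernel_input_feature_dimension = 2 : i64,
    kernel_output_feature_dimension = 3 : i64,
    kernel_spatial_dimensions = dense<[0, 1]> : tensor<2xi64>,
    output_batch_dimension = 0 : i64,
    output_feature_dimension = 3 : i64,
    output_spatial_dimensions = dense<[1, 2]> : tensor<2xi64>
  },
  window_reversal = dense<[false, true]> : tensor<2xi1>
} : (tensor<1x8x8x3xf32>, tensor<3x3x3x16xf32>) -> tensor<1x6x6x16xf32>
```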
PiperOrigin-RevId: 346891737
- Add a variant of the fused convolution that does not need a side input and side input scale.
- Rename the existing one to `ConvForwardFusedSideInputOp`.
- Update tests to exercise all variants of the convolution ops in the GPU dialect.
- Eliminate the unused `LHLO_ExtentBuffer` and change `LHLO_Buffer` to allow any integer element
  type, to match what XLA can sometimes generate for scratch buffers.
PiperOrigin-RevId: 345701569
- Split out enum definitions in the HLO dialect into a separate .td file (similar to structs)
  and generate enum decl/defs for these enums.
- Also split out the LHLO GPU enums into a separate .td file and generate enum
decl/defs for these enums as well.
- Remove unused dialect from ConvolutionAttributes and generate lhlo_gpu enums.
- Add appropriate namespace for all the enums.
PiperOrigin-RevId: 345277240
- Map the custom call for GEMM in XLA HLO to Gemm/Gemm bias operations in the LHLO GPU dialect.
- Make 'algorithm' an optional attribute to better match the XLA HLO backend config.
- Replace 'alpha' with 'alpha_real' and 'alpha_complex' to support complex GEMM correctly (see the sketch below).
- Generate GemmThunk from LHLO GPU Gemm operations.
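For orientation, a hedged sketch of a plain Gemm op in the LHLO GPU dialect using the attributes named above; the operand types and the exact attribute set are approximations based on these bullets, not the authoritative op definition:

```mlir
// Computes out = alpha * (lhs x rhs); complex alpha is carried as two scalars.
"lmhlo_gpu.gemm"(%lhs, %rhs, %out) {
  dot_dimension_numbers = {
    lhs_batching_dimensions = dense<[]> : tensor<0xi64>,
    rhs_batching_dimensions = dense<[]> : tensor<0xi64>,
    lhs_contracting_dimensions = dense<1> : tensor<1xi64>,
    rhs_contracting_dimensions = dense<0> : tensor<1xi64>
  },
  alpha_real = 1.0 : f64,
  alpha_complex = 0.0 : f64
  // `algorithm` is optional and omitted here; the bias variant additionally
  // takes a bias buffer and a beta scalar.
} : (memref<16x32xf32>, memref<32x8xf32>, memref<16x8xf32>) -> ()
```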
PiperOrigin-RevId: 345250840
- Restructure LHLO GPU Cholesky to better match XLA HLO by eliminating the
  untyped buffer and changing the is_upper attribute to is_lower.
- Change LhloDialectEmitter to emit the LHLO GPU Cholesky operation.
PiperOrigin-RevId: 343873516
- Extend MHLO CustomCall to have multiple tensors as results.
- Extend LHLO CustomCall to have multiple memrefs for output operands.
- Fix HLO->LHLO and XLA HLO->LHLO mapping for CustomCall to set up the
  operand_segment_sizes attribute correctly (see the example below).
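A hedged example of what the LHLO form can look like with separate argument and output memrefs; `some_target` and the buffer types are made up, and `operand_segment_sizes` is the attribute this change is about:

```mlir
// Two argument memrefs followed by two output memrefs; the attribute records
// the size of each variadic operand group.
"lmhlo.custom_call"(%arg0, %arg1, %out0, %out1) {
  call_target_name = "some_target",
  operand_segment_sizes = dense<[2, 2]> : vector<2xi32>
} : (memref<8xf32>, memref<8xf32>, memref<8xf32>, memref<8xi32>) -> ()
```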
PiperOrigin-RevId: 342067762
This is to match HLO semantics and general dimension semantics in MLIR.
Also,
* Define minimal verifier for these ops.
* Add folder for SetDimensionSize op on static shaped dimension.
* Fix assumption of ranked shape in GetDimensionSize op.
PiperOrigin-RevId: 341150923
Additionally:
- Forward listeners through new if/else op builders.
This corrects an error that led to incomplete legalization of broadcasted op
lowering.
- Use OpConversionPattern to ensure up-to-date operand values are used.
PiperOrigin-RevId: 339838833
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44277
This PR addresses minor spelling tweaks of md/td files under the compiler directory.
Copybara import of the project:
--
4fedebde8f7d48ce2917642ebaab966c9ce49f3e by Kazuaki Ishizaki <ishizaki@jp.ibm.com>:
minor spelling tweaks
PiperOrigin-RevId: 339260830
If unspecified, `compare_type` is FLOAT for float element types, SIGNED for signed element types, and UNSIGNED for unsigned element types. `compare_type` can also be TOTALORDER for float element types (see the example below).
- Added import and export support for the attribute.
- Restricted legalization from HLO to TF to the default compare types.
- Updated existing usages of CompareOp.
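A small example of the attribute on CompareOp, written against the MHLO syntax of that time; shapes and values are illustrative:

```mlir
// Explicit TOTALORDER comparison on floats; omitting compare_type here would
// default it to FLOAT for the f32 element type.
%0 = "mhlo.compare"(%lhs, %rhs) {
  comparison_direction = "LT",
  compare_type = "TOTALORDER"
} : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xi1>
```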
PiperOrigin-RevId: 339099219
The lowering assumes that the 'gather' op attributes are identical in both MHLO and LMHLO. But that's not true; some time ago the MHLO version was changed to pack four of its attributes into a struct. By doing the same for the LMHLO version, we both fix the lowering for this op and resolve a longstanding TODO.
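For reference, a sketch of the packed form on the MHLO gather, which the LMHLO op now mirrors; the values are illustrative and the field names are assumed from the MHLO definition:

```mlir
// The four packed fields live under dimension_numbers; slice_sizes and
// indices_are_sorted stay as separate attributes.
%0 = "mhlo.gather"(%operand, %indices) {
  dimension_numbers = {
    collapsed_slice_dims = dense<0> : tensor<1xi64>,
    index_vector_dim = 1 : i64,
    offset_dims = dense<1> : tensor<1xi64>,
    start_index_map = dense<0> : tensor<1xi64>
  },
  indices_are_sorted = false,
  slice_sizes = dense<[1, 4]> : tensor<2xi64>
} : (tensor<3x4xf32>, tensor<2x1xi32>) -> tensor<2x4xf32>
```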
PiperOrigin-RevId: 337943946