- Add this attribute to match the corresponding XLA HLO attribute on convolution
operations.
- A true value indicates a reversal of the corresponding kernel spatial dimension.
- Since the XLA builder does not support this attribute, use a custom HLO converter to map
from mlir::mhlo::ConvOp to XLA (see the sketch below).
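For illustration, a rough sketch of how the attribute might appear on an MHLO convolution; the attribute names and the exact set of required attributes are assumptions and may differ from the ODS definition. Here the second kernel spatial dimension is reversed:

```mlir
func @conv_with_reversal(%lhs: tensor<1x8x8x3xf32>, %rhs: tensor<3x3x3x16xf32>)
    -> tensor<1x8x8x16xf32> {
  %0 = "mhlo.convolution"(%lhs, %rhs) {
    batch_group_count = 1 : i64,
    feature_group_count = 1 : i64,
    dimension_numbers = {
      input_batch_dimension = 0 : i64,
      input_feature_dimension = 3 : i64,
      input_spatial_dimensions = dense<[1, 2]> : tensor<2xi64>,
      kernel_input_feature_dimension = 2 : i64,
      kernel_output_feature_dimension = 3 : i64,
      kernel_spatial_dimensions = dense<[0, 1]> : tensor<2xi64>,
      output_batch_dimension = 0 : i64,
      output_feature_dimension = 3 : i64,
      output_spatial_dimensions = dense<[1, 2]> : tensor<2xi64>
    },
    window_strides = dense<1> : tensor<2xi64>,
    padding = dense<1> : tensor<2x2xi64>,
    lhs_dilation = dense<1> : tensor<2xi64>,
    rhs_dilation = dense<1> : tensor<2xi64>,
    // A true value reverses the corresponding kernel spatial dimension.
    window_reversal = dense<[false, true]> : tensor<2xi1>
  } : (tensor<1x8x8x3xf32>, tensor<3x3x3x16xf32>) -> tensor<1x8x8x16xf32>
  return %0 : tensor<1x8x8x16xf32>
}
```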
PiperOrigin-RevId: 346891737
- Add a variant of the fused convolution that does not need a side input and side input scale.
- Rename the existing one to `ConvForwardFusedSideInputOp`.
- Update tests to exercise all variants of the convolution ops in the GPU dialect.
- Eliminate the unused `LHLO_ExtentBuffer` and change `LHLO_Buffer` to allow any integer
element type, to match what XLA can sometimes generate for scratch buffers.
PiperOrigin-RevId: 345701569
- Split out enum definitions in the HLO dialect into a separate .td file (similar to structs)
and generate enum decls/defs for these enums.
- Also split out the LHLO GPU enums into a separate .td file and generate enum
decls/defs for them.
- Remove unused dialect from ConvolutionAttributes and generate lhlo_gpu enums.
- Add appropriate namespace for all the enums.
PiperOrigin-RevId: 345277240
- Map Custom call for GEMM in XLA HLO to Gemm/Gemm bias operations in LHLO GPU
dialect.
- Make 'algorithm' an optional attribute to better match the XLA HLO backend config.
- Replace 'alpha' with 'alpha_real' and 'alpha_complex' to support complex GEMM correctly.
- Generate GemmThunk from LHLO GPU Gemm operations.
PiperOrigin-RevId: 345250840
- Restructure LHLO GPU Cholesky to better match XLA HLO by eliminating the
untyped buffer and changing the is_upper attribute to is_lower.
- Change LhloDialectEmitter to emit the LHLO GPU Cholesky operation.
PiperOrigin-RevId: 343873516
- Extend MHLO CustomCall to have multiple tensors as results.
- Extend LHLO CustomCall to have multiple memrefs for output operands.
- Fix the HLO->LHLO and XLA HLO->LHLO mappings for CustomCall to set up the
`operand_segment_sizes` attribute correctly, as in the sketch below.
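As a hedged sketch (the function and call-target names are made up, and exact attribute spellings may have shifted since), the same custom call at the MHLO and LMHLO levels might look like:

```mlir
// MHLO level: the custom call now produces two tensor results.
func @mhlo_custom_call(%arg0: tensor<2x3xf32>, %arg1: tensor<5xi32>)
    -> (tensor<2x3xf32>, tensor<5xi32>) {
  %0:2 = "mhlo.custom_call"(%arg0, %arg1) {
    call_target_name = "my_custom_fn",
    has_side_effect = false,
    backend_config = ""
  } : (tensor<2x3xf32>, tensor<5xi32>) -> (tensor<2x3xf32>, tensor<5xi32>)
  return %0#0, %0#1 : tensor<2x3xf32>, tensor<5xi32>
}

// LMHLO level: results become trailing output memrefs, and
// operand_segment_sizes records the split between inputs and outputs.
func @lmhlo_custom_call(%in0: memref<2x3xf32>, %in1: memref<5xi32>,
                        %out0: memref<2x3xf32>, %out1: memref<5xi32>) {
  "lmhlo.custom_call"(%in0, %in1, %out0, %out1) {
    call_target_name = "my_custom_fn",
    backend_config = "",
    operand_segment_sizes = dense<[2, 2]> : vector<2xi32>
  } : (memref<2x3xf32>, memref<5xi32>, memref<2x3xf32>, memref<5xi32>) -> ()
  return
}
```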
PiperOrigin-RevId: 342067762
This matches HLO semantics and the general dimension semantics in MLIR.
Also,
* Define minimal verifier for these ops.
* Add a folder for the SetDimensionSize op on statically shaped dimensions (see the sketch below).
* Fix the assumption of a ranked shape in the GetDimensionSize op.
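A minimal sketch of the kind of IR the new folder can simplify, assuming it folds a set_dimension_size whose target dimension is already static and whose size operand is a matching constant (attribute spellings are illustrative):

```mlir
func @fold_set_dimension_size(%arg0: tensor<4x2xf32>) -> tensor<4x2xf32> {
  %size = "mhlo.constant"() {value = dense<4> : tensor<i32>} : () -> tensor<i32>
  // Dimension 0 is statically 4, so this op should fold away to %arg0.
  %0 = "mhlo.set_dimension_size"(%arg0, %size) {dimension = 0 : i64}
      : (tensor<4x2xf32>, tensor<i32>) -> tensor<4x2xf32>
  return %0 : tensor<4x2xf32>
}
```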
PiperOrigin-RevId: 341150923
Additionally:
- Forward listeners through new if/else op builders.
This corrects an error that led to incomplete legalization of broadcasted op
lowering.
- Use OpConversionPattern to ensure up-to-date operand values are used.
PiperOrigin-RevId: 339838833
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44277
This PR addresses minor spelling tweaks of md/td files under the compiler directory.
Copybara import of the project:
--
4fedebde8f7d48ce2917642ebaab966c9ce49f3e by Kazuaki Ishizaki <ishizaki@jp.ibm.com>:
minor spelling tweaks
PiperOrigin-RevId: 339260830
If unspecified, `compare_type` is FLOAT for float element types, SIGNED for signed element types, and UNSIGNED for unsigned element types. `compare_type` can also be TOTALORDER for float element types (see the example below).
- Added import and export support for the attribute.
- Restricted legalization from HLO to TF to the default compare types.
- Updated existing usages of the CompareOp.
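For example, a float comparison explicitly requesting total-order semantics (a sketch; exact attribute syntax may differ across versions):

```mlir
func @compare_total_order(%arg0: tensor<4xf32>, %arg1: tensor<4xf32>) -> tensor<4xi1> {
  // Without compare_type, a FLOAT comparison would be assumed for f32 operands.
  %0 = "mhlo.compare"(%arg0, %arg1) {
    comparison_direction = "LT",
    compare_type = "TOTALORDER"
  } : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xi1>
  return %0 : tensor<4xi1>
}
```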
PiperOrigin-RevId: 339099219
The lowering assumes that the 'gather' op attributes are identical in both MHLO and LMHLO. But that's not true; some time ago the MHLO version was changed to pack 4 of its attributes into a struct. By doing the same for the LMHLO version we both fix the lowering for this op and resolve a longstanding TODO.
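A rough sketch of the resulting LMHLO form, with the four attributes packed into a single dimension_numbers struct mirroring MHLO; the operand order and attribute names here are assumptions based on the description:

```mlir
func @gather(%operand: memref<3x4xf32>, %indices: memref<2xi64>,
             %out: memref<2x4xf32>) {
  "lmhlo.gather"(%operand, %indices, %out) {
    dimension_numbers = {
      collapsed_slice_dims = dense<0> : tensor<1xi64>,
      index_vector_dim = 1 : i64,
      offset_dims = dense<1> : tensor<1xi64>,
      start_index_map = dense<0> : tensor<1xi64>
    },
    slice_sizes = dense<[1, 4]> : tensor<2xi64>
  } : (memref<3x4xf32>, memref<2xi64>, memref<2x4xf32>) -> ()
  return
}
```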
PiperOrigin-RevId: 337943946
- Introduce operations in a new lmhlo_gpu dialect that map to GPU library function calls
in the XLA:GPU backend.
- Add basic unit tests as well.
PiperOrigin-RevId: 337132166
- Create a common class that holds the ConvOp attributes common to the MHLO and
LHLO dialects, and use TableGen DAG concatenation to append these common attributes
to the dialect-specific arguments in ODS for ConvOp.
PiperOrigin-RevId: 336114003
- And add a conversion from MHLO CustomCall to LHLO CustomCall.
- According to the XLA documentation, the called function should not have side effects,
so the argument MemRefs are marked as MemRead.
PiperOrigin-RevId: 334737196
- This causes the declaration of `reifyReturnTypeShapes` to be inserted in the TableGen-generated
code without explicitly adding it in `extraClassDeclaration`.
PiperOrigin-RevId: 333315601
- Use MLIR-provided constraints for HLO_ScalarIntTensor and HLO_DimensionTensor.
- Update unit tests to expect new error messages.
PiperOrigin-RevId: 333313131
A pass that is not globally registered should define `getName()` in order to generate correct crash reproducers.
This is something we get "for free" when using the TableGen-generated base class.
We should also migrate the other passes to the same mechanism and remove the static
global registration.
PiperOrigin-RevId: 332976907
Add the `tan` op and lowering to the CHLO dialect, move CHLO lowerings to
`chlo_legalize_to_hlo_patterns`, and extend missing patterns.
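A sketch of what the lowering produces, assuming the CHLO pattern decomposes tan(x) into sin(x)/cos(x) on MHLO ops:

```mlir
// Input: a chlo.tan written by a frontend.
func @tan(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "chlo.tan"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}

// After chlo-legalize-to-hlo (assumed decomposition):
func @tan_expanded(%arg0: tensor<4xf32>) -> tensor<4xf32> {
  %sin = "mhlo.sine"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  %cos = "mhlo.cosine"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
  %tan = "mhlo.divide"(%sin, %cos) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %tan : tensor<4xf32>
}
```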
PiperOrigin-RevId: 331506094
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/43137
This PR removes the lhlo-copy-removal pass entirely and replaces its usages with `mlir::createCopyRemovalPass()`.
--
7ce1a06f507c8db46c6d7b43c7870cf56002e18e by Ehsan Toosi <ehsan.nadjaran_toosi@dfki.de>:
[mlir][lhlo] Replace lhlo-copy-removal pass with mlir-copy-removal pass
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/43137 from dfki-ehna:using_mlir_copy_removal 7ce1a06f507c8db46c6d7b43c7870cf56002e18e
PiperOrigin-RevId: 331498501
Add the `tan` op and lowering to the CHLO dialect, move CHLO lowerings to
`chlo_legalize_to_hlo_patterns`, and extend missing patterns.
PiperOrigin-RevId: 331128170
Add the `tan` op and lowering to the CHLO dialect, move CHLO lowerings to
`chlo_legalize_to_hlo_patterns`, and extend missing patterns.
PiperOrigin-RevId: 331125286
* Unified TF->Cubin and TF->Kernel_with_host side lowering in `kernel_creator.h|cc`
* Added a pass that attaches a GPU binary blob to the GPUModuleOp
* Refactored most of the code.
* Added a tf_to_kernel binary that emits an object file
PiperOrigin-RevId: 330494488
Start of a pass to legalize MHLO control flow to SCF for further optimization in a common form. The current version just matches a very simple instance (which also happens to occur a few times). It exposes some further canonicalization opportunities that aren't yet addressed.
PiperOrigin-RevId: 329017723
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/42509
Add a folder for the mhlo GetDimensionSizeOp (`mhlo.get_dimension_size`).
`get_dimension_size` folds to a constant when the corresponding tensor
dimension size is statically known / constant (see the example below).
Copybara import of the project:
--
5994915525ec2e932125aa1f133ce2260ba100af by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Add folder for mhlo get_dimension_size
Add folder for mhlo GetDimensionSizeOp. get_dimension_size folds to a
constant when the corresponding tensor dimension size is statically
known / constant.
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/42509 from polymage-labs:get_dimension_size_fold 5994915525ec2e932125aa1f133ce2260ba100af
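A minimal before/after sketch of the fold (attribute spelling is illustrative):

```mlir
func @fold_get_dimension_size(%arg0: tensor<4x8xf32>) -> tensor<i32> {
  // Dimension 1 is statically 8, so this op folds to a constant:
  //   %0 = "mhlo.constant"() {value = dense<8> : tensor<i32>} : () -> tensor<i32>
  %0 = "mhlo.get_dimension_size"(%arg0) {dimension = 1 : i64}
      : (tensor<4x8xf32>) -> tensor<i32>
  return %0 : tensor<i32>
}
```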
PiperOrigin-RevId: 328222517
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/42508
An lmhlo.constant op on a memref that is locally allocated and has
no users other than deallocs can be deleted. Add a canonicalization
pattern for this (see the sketch below).
Copybara import of the project:
--
8758c409a15f567e7cb8e1077faa020f5705c85a by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Erase dead lmhlo.constant ops
An lmhlo.constant op on a memref that is locally allocated and has
no other users (other than deallocs) can be deleted. Add a
canonicalization pattern for this.
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/42508 from polymage-labs:lhlo_constant_erase 8758c409a15f567e7cb8e1077faa020f5705c85a
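A minimal sketch of the dead pattern the canonicalization targets (using the standard-dialect alloc/dealloc spelling of the time):

```mlir
func @dead_constant() {
  %buf = alloc() : memref<4xf32>
  // %buf is locally allocated and only deallocated afterwards, so this
  // lmhlo.constant can be erased by the new canonicalization pattern.
  "lmhlo.constant"(%buf) {value = dense<0.000000e+00> : tensor<4xf32>}
      : (memref<4xf32>) -> ()
  dealloc %buf : memref<4xf32>
  return
}
```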
PiperOrigin-RevId: 328042416
By adding support for complex types to GetScalarOfType and using an appropriate
choice of limits for the initial values in the unsorted segment reduction ops.
PiperOrigin-RevId: 327061577