Legalize `atan2` analogously to XLA. `atan2` is first reduced to `atan` on the
interval [-1, 1] and subsequently approximated. This CL also adds e2e tests for
trigonometric approximations.
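For reference, the reduction follows the standard identities (a sketch; edge cases such as x = 0 or y = 0 are handled separately):

```
atan2(y, x) = atan(y / x)                  for x > 0
atan2(y, x) = atan(y / x) + sign(y) * pi   for x < 0, y != 0
atan(t)     = sign(t) * pi/2 - atan(1/t)   for |t| > 1
```

The last identity is why the polynomial approximation only needs to cover [-1, 1].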
PiperOrigin-RevId: 334794336
- Also add a conversion from MHLO CustomCall to LHLO CustomCall.
- According to the XLA documentation, the called function must not be side-effecting, so the argument MemRefs are marked as MemRead.
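An illustrative sketch of the conversion (operand names and the `my_kernel` target are hypothetical, not taken from the patterns):

```mlir
// MHLO form, on tensors:
%res = "mhlo.custom_call"(%arg) {call_target_name = "my_kernel", has_side_effect = false}
    : (tensor<2xf32>) -> tensor<2xf32>
// LHLO form after buffer assignment: the result becomes an output buffer,
// and the input memrefs are only read (MemRead).
"lmhlo.custom_call"(%arg_buf, %res_buf) {call_target_name = "my_kernel"}
    : (memref<2xf32>, memref<2xf32>) -> ()
```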
PiperOrigin-RevId: 334737196
A pass that is not globally registered should define `getName()` in order to generate correct crash reproducers.
This is something we get "for free" when using the TableGen generated base class.
We should also migrate the other passes to the same mechanism and remove the static
global registration.
PiperOrigin-RevId: 332976907
When transforming unranked binary operations from CHLO to HLO, we insert `shape.broadcast` operations. From the context, we know that the result of the `shape.broadcast` operation has a static shape. Instead of modelling this in the type of the broadcast operation itself, which would be illegal, we now use an explicit cast.
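A minimal sketch of the idea (op spellings follow current upstream MLIR and may differ at this revision):

```mlir
// shape.broadcast yields a dynamically shaped extent tensor ...
%bcast = shape.broadcast %lhs_shape, %rhs_shape
    : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>
// ... so we cast to the statically known shape instead of changing
// the result type of the broadcast op itself.
%fixed = tensor.cast %bcast : tensor<?xindex> to tensor<2xindex>
```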
PiperOrigin-RevId: 331989879
Add `tan` op and lowering to CHLO dialect, move CHLO lowerings to
`chlo_legalize_to_hlo_patterns`, and fill in missing patterns.
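For example, one way to expand it (a sketch; the actual pattern and op spellings may differ):

```mlir
// tan(x) = sin(x) / cos(x)
%sin = "mhlo.sine"(%x) : (tensor<4xf32>) -> tensor<4xf32>
%cos = "mhlo.cosine"(%x) : (tensor<4xf32>) -> tensor<4xf32>
%tan = "mhlo.divide"(%sin, %cos) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
```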
PiperOrigin-RevId: 331506094
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/43137
This PR removes the lhlo-copy-removal pass entirely and replaces its usages with `mlir::createCopyRemovalPass()`.
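The generic pass removes the same kind of redundant copies, e.g. (illustrative; alloc/dealloc spellings follow current upstream MLIR):

```mlir
// %tmp only exists to be copied into %out; copy removal forwards it.
%tmp = memref.alloc() : memref<4xf32>
"lmhlo.add"(%a, %b, %tmp) : (memref<4xf32>, memref<4xf32>, memref<4xf32>) -> ()
"lmhlo.copy"(%tmp, %out) : (memref<4xf32>, memref<4xf32>) -> ()
memref.dealloc %tmp : memref<4xf32>
// ==> after copy removal: "lmhlo.add"(%a, %b, %out)
```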
--
7ce1a06f507c8db46c6d7b43c7870cf56002e18e by Ehsan Toosi <ehsan.nadjaran_toosi@dfki.de>:
[mlir][lhlo] Replace lhlo-copy-removal pass with mlir-copy-removal pass
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/43137 from dfki-ehna:using_mlir_copy_removal 7ce1a06f507c8db46c6d7b43c7870cf56002e18e
PiperOrigin-RevId: 331498501
Add `tan` op and lowering to CHLO dialect, move CHLO lowerings to
`chlo_legalize_to_hlo_patterns`, and fill in missing patterns.
PiperOrigin-RevId: 331128170
Add `tan` op and lowering to CHLO dialect, move CHLO lowerings to
`chlo_legalize_to_hlo_patterns`, and fill in missing patterns.
PiperOrigin-RevId: 331125286
MHLO concatenate should support dynamic inputs. It's possible that the output
shape can be inferred from a dimension that is dynamic in one input but static
in another.
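For example (illustrative):

```mlir
// The non-concatenated dimension is dynamic in %a but static in %b,
// so the result type can use the static size 4.
%c = "mhlo.concatenate"(%a, %b) {dimension = 0 : i64}
    : (tensor<?x?xf32>, tensor<?x4xf32>) -> tensor<?x4xf32>
```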
PiperOrigin-RevId: 331054181
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/43069
The legalization of mhlo.ReturnOp to lhlo.TerminatorOp via BufferAssignmentReturnOpConverter fails: the MemRef-typed results (i.e. the MemRef-typed operands of the return operation) are set to remain results after legalization, but lhlo.TerminatorOp does not accept any operands. Therefore, BufferAssignmentReturnOpConverter must be replaced with a manual conversion that removes all operands of mhlo.ReturnOp and inserts copy operations in their place.
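Sketch of the resulting IR (illustrative):

```mlir
// Before: the region yields a value.
//   "mhlo.return"(%result) : (tensor<4xf32>) -> ()
// After: the value is copied into the corresponding output buffer,
// and the terminator takes no operands.
"lmhlo.copy"(%result_buf, %out_buf) : (memref<4xf32>, memref<4xf32>) -> ()
"lmhlo.terminator"() : () -> ()
```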
Copybara import of the project:
--
8be0435b0147263c3872bedec58fd215f784b450 by Ehsan Toosi <ehsan.nadjaran_toosi@dfki.de>:
[hlo] Unbreak hlo-legalize-to-lhlo test
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/43069 from dfki-ehna:fix_hlo_legalize_to_lhlo_test 8be0435b0147263c3872bedec58fd215f784b450
PiperOrigin-RevId: 330907602
* Unified TF->Cubin and TF->Kernel_with_host side lowering in `kernel_creator.h|cc`
* Added a pass that attaches GPU binary blob to GPUModuleOp
* Refactored most of the code.
* Added tf_to_kernel binary that emits obj file
PiperOrigin-RevId: 330494488
Start of a pass to legalize MHLO control flow to SCF for further optimization in a common form. The current version matches only a very simple instance (which also happens to occur a few times in practice). It exposes some further canonicalization opportunities that are not yet addressed.
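As an illustrative sketch of the target form (op spellings follow current upstream MLIR; `%init` is an assumed loop-carried value), a counted mhlo.while can become an scf.for:

```mlir
%lb = arith.constant 0 : index
%ub = arith.constant 8 : index
%c1 = arith.constant 1 : index
%res = scf.for %i = %lb to %ub step %c1 iter_args(%acc = %init) -> (tensor<f32>) {
  %next = "mhlo.add"(%acc, %acc) : (tensor<f32>, tensor<f32>) -> tensor<f32>
  scf.yield %next : tensor<f32>
}
```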
PiperOrigin-RevId: 329017723
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/42509
Add folder for mhlo GetDimensionSizeOp (`mhlo.get_dimension_size`).
`get_dimension_size` folds to a constant when the corresponding tensor
dimension size is statically known / constant.
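For example (illustrative):

```mlir
// Dimension 1 is statically 4, so this folds:
%s = "mhlo.get_dimension_size"(%arg) {dimension = 1 : i64}
    : (tensor<?x4xf32>) -> tensor<i32>
// ==> %s = mhlo.constant dense<4> : tensor<i32>
```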
Copybara import of the project:
--
5994915525ec2e932125aa1f133ce2260ba100af by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Add folder for mhlo get_dimension_size
Add folder for mhlo GetDimensionSizeOp. get_dimension_size folds to a
constant when the corresponding tensor dimension size is statically
known / constant.
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/42509 from polymage-labs:get_dimension_size_fold 5994915525ec2e932125aa1f133ce2260ba100af
PiperOrigin-RevId: 328222517
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/42508
An lmhlo.constant op on a memref that is locally allocated and has no
users other than deallocs can be deleted. Add a canonicalization
pattern for this.
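Illustrative (alloc/dealloc spellings follow current upstream MLIR):

```mlir
// %buf is written once and never read; all three ops can be erased.
%buf = memref.alloc() : memref<4xf32>
"lmhlo.constant"(%buf) {value = dense<0.0> : tensor<4xf32>} : (memref<4xf32>) -> ()
memref.dealloc %buf : memref<4xf32>
```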
Copybara import of the project:
--
8758c409a15f567e7cb8e1077faa020f5705c85a by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Erase dead lmhlo.constant ops
An lmhlo.constant op on a memref that is locally allocated and has no
users other than deallocs can be deleted. Add a
canonicalization pattern for this.
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/42508 from polymage-labs:lhlo_constant_erase 8758c409a15f567e7cb8e1077faa020f5705c85a
PiperOrigin-RevId: 328042416
This allows specifying a constant whose shape is only known when operand shape is. Also use it to update tf.Acos legalization.
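Presumably via an op along these lines (a sketch, assuming the op in question is chlo.constant_like):

```mlir
// A splat of 1.0 whose shape follows %arg, even when %arg's shape is dynamic:
%ones = "chlo.constant_like"(%arg) {value = 1.0 : f32}
    : (tensor<?xf32>) -> tensor<?xf32>
```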
PiperOrigin-RevId: 325860604
HLO requires that the element types match for all start index parameters. Right now we don't catch this invalid case until export, so add a check in the verifier to catch it sooner.
This also requires a small tweak to the TF InplaceUpdate op lowering.
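For example, IR like the following is now rejected at verification time rather than at export (a sketch; the op is illustrative):

```mlir
// Invalid: start indices mix i32 and i64 element types.
%r = "mhlo.dynamic-slice"(%operand, %idx0, %idx1) {slice_sizes = dense<[1, 4]> : tensor<2xi64>}
    : (tensor<4x4xf32>, tensor<i32>, tensor<i64>) -> tensor<1x4xf32>
```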
PiperOrigin-RevId: 325463796
- Use FuncOp::getArguments() and Region::getArguments() and friends where possible
instead of going through the front() block.
PiperOrigin-RevId: 325352975
For input and output dimensions that must match, we shouldn't fail when one dimension is dynamic and the other is static; that is insufficient information to conclude a dimension mismatch.
PiperOrigin-RevId: 325344738
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.
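Sketch of the two forms (current upstream syntax):

```mlir
// Unsafe: extent tensors, only legal when broadcastability is guaranteed.
%e = shape.broadcast %a, %b : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>
// Safe: !shape.shape values, which can also represent the error case.
%s = shape.broadcast %sa, %sb : !shape.shape, !shape.shape -> !shape.shape
```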
PiperOrigin-RevId: 325228073
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.
PiperOrigin-RevId: 325079842
The transformation of unranked to ranked operations no longer generates cast
operations for shapes and sizes. Instead, we use the newly introduced support
for extent tensor and index types directly.
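E.g., shapes are now computed directly as extent tensors and sizes as index values (illustrative):

```mlir
%shape = shape.shape_of %arg : tensor<*xf32> -> tensor<?xindex>
%size  = shape.num_elements %shape : tensor<?xindex> -> index
```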
PiperOrigin-RevId: 325057440
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.
PiperOrigin-RevId: 325056915
Instead, end-to-end tests now invoke multiple test tools in a row. For HLO dialects and passes, we use mlir-hlo-opt explicitly.
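E.g., a test can pipe the tools explicitly (a sketch; the pass pipeline shown is illustrative):

```mlir
// RUN: mlir-hlo-opt %s -chlo-legalize-to-hlo | mlir-opt -convert-std-to-llvm | mlir-cpu-runner -e main | FileCheck %s
```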
PiperOrigin-RevId: 324989884
Added support for HLO ops bitcast-convert, sort and while in MlirHloBuilder and enabled tests for NonMaxSuppressionV4 and SelfAdjointEigV2Op using these ops.
PiperOrigin-RevId: 324360651
This is required before exporting HLO dialect ops with standard dialect constant to XLA.
Also sink constants for the sort op. Added a TODO to generalize this pass to handle more ops and non-const values defined outside.
PiperOrigin-RevId: 324301911
Shuffle files around, use TableGen to register passes, and introduce
a `mlir-hlo-opt.cpp` file to hold the main entry point of the -opt tool
and stop relying on static registration for dialect/passes.
PiperOrigin-RevId: 323674455
This is done by reshaping the unranked tensor into a 1D ranked tensor, which results in safe broadcast/indexing logic when the other operand is a scalar.
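Sketch of the flattening (op spellings follow current upstream MLIR and may differ at this revision):

```mlir
%shape = shape.shape_of %arg : tensor<*xf32> -> tensor<?xindex>
%n = shape.num_elements %shape : tensor<?xindex> -> index
%flat_shape = tensor.from_elements %n : tensor<1xindex>
%flat = "mhlo.dynamic_reshape"(%arg, %flat_shape)
    : (tensor<*xf32>, tensor<1xindex>) -> tensor<?xf32>
```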
PiperOrigin-RevId: 322553661
Some gathers can be interpreted as torch index selects. Transforming these
cases allows torch_index_select lowerings to be used for such gathers.
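For example (illustrative):

```mlir
// A gather that selects whole rows by index is equivalent to:
%r = "mhlo.torch_index_select"(%input, %indices) {dim = 0 : i64, batch_dims = 0 : i64}
    : (tensor<5x4xf32>, tensor<2xi32>) -> tensor<2x4xf32>
```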
PiperOrigin-RevId: 322255835
The existing conversion no longer worked and was not safe to undo. Furthermore, the pattern for mhlo.return had been removed.
Also adds some tests to ensure this does not degrade again.
PiperOrigin-RevId: 321542071
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40925
The xla_lhlo.const lowering uses std.store to store a constant to
0-d memrefs. Update it to affine.store since such an access is trivially
affine (no indices). An affine.store can always be lowered to std.store.
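Sketch (current op spellings):

```mlir
// A 0-d memref access has no subscripts, hence it is trivially affine:
%c = arith.constant 4.2 : f32
affine.store %c, %buf[] : memref<f32>
```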
Copybara import of the project:
--
9e18ede72fbbca107177bd742921e4cbf77adc82 by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Update lhlo.const to linalg lowering to use affine.store instead of std.store
The xla_lhlo.const lowering uses std.store to store a constant to
0-d memrefs. Update it to affine.store since such an access is trivially
affine (no indices). An affine.store can always be lowered to std.store.
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40925 from polymage-labs:lhlo_to_linalg_affine_store 9e18ede72fbbca107177bd742921e4cbf77adc82
PiperOrigin-RevId: 320623152