Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/42508
An lmhlo.constant op on a memref that is locally allocated and has
no users other than deallocs can be deleted. Add a canonicalization
pattern for this.
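A minimal sketch of the pattern's effect, using the std-dialect alloc/dealloc spellings of this era (illustrative, not the exact test case from the change):

    // The buffer is locally allocated and only written by the constant,
    // so the write is dead and the lmhlo.constant op can be erased.
    %buf = alloc() : memref<4xf32>
    "lmhlo.constant"(%buf) {value = dense<1.0> : tensor<4xf32>} : (memref<4xf32>) -> ()
    dealloc %buf : memref<4xf32>
    // Once the constant is erased, the now-unused alloc/dealloc pair can be
    // cleaned up by existing dead-allocation canonicalizations.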
Copybara import of the project:
--
8758c409a15f567e7cb8e1077faa020f5705c85a by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Erase dead lmhlo.constant ops
An lmhlo.constant op on a memref that is locally allocated and has
no users other than deallocs can be deleted. Add a
canonicalization pattern for this.
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/42508 from polymage-labs:lhlo_constant_erase 8758c409a15f567e7cb8e1077faa020f5705c85a
PiperOrigin-RevId: 328042416
Add support for complex types to GetScalarOfType and use an appropriate
choice of limits for the initial values in the unsorted segment reduction ops.
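As an illustrative sketch (op spellings assumed, not taken from the change itself), a complex-typed segment sum needs a complex zero as its initial value, which GetScalarOfType can now materialize:

    // Hypothetical init value for a complex tf.UnsortedSegmentSum lowering.
    %init = mhlo.constant dense<(0.000000e+00,0.000000e+00)> : tensor<complex<f32>>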
PiperOrigin-RevId: 327061577
This allows specifying a constant whose shape is only known when the operand's shape is. Also use it to update the tf.Acos legalization.
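For illustration, a shape-polymorphic constant of this kind can be written with chlo.constant_like, which takes its shape from the operand (a sketch; attribute spelling assumed):

    // A constant with value 1.0 whose shape matches %arg0, even when that
    // shape is only known at runtime.
    %one = "chlo.constant_like"(%arg0) {value = 1.0 : f32} : (tensor<?xf32>) -> tensor<?xf32>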
PiperOrigin-RevId: 325860604
HLO requires that the element types match for all start index parameters. Right now we don't catch this invalid case until export, so add a check in the verifier to catch it sooner.
This also requires a small tweak to the TF InplaceUpdate op lowering.
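A sketch of the kind of IR the verifier now rejects (op spelling assumed from the mhlo dialect of this era):

    // Invalid: the start index parameters mix i32 and i64 element types.
    %0 = "mhlo.dynamic-slice"(%arg, %start0, %start1)
        {slice_sizes = dense<[1, 4]> : tensor<2xi64>}
        : (tensor<3x4xf32>, tensor<i32>, tensor<i64>) -> tensor<1x4xf32>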
PiperOrigin-RevId: 325463796
- Use FuncOp::getArguments() and Region::getArguments() and friends where possible
instead of going through the front() block.
PiperOrigin-RevId: 325352975
For input and output dimensions that must match, we shouldn't fail when one dim is dynamic and the other is static, since that is insufficient information to conclude a dimension mismatch.
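For example (a sketch; the exact ops covered are in the change itself), a same-shape op relating tensor<?xf32> to tensor<4xf32> should verify, since the dynamic dimension may well be 4 at runtime:

    // Not a provable mismatch: ? vs. 4 may agree at runtime.
    %0 = "mhlo.tanh"(%arg) : (tensor<?xf32>) -> tensor<4xf32>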
PiperOrigin-RevId: 325344738
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.
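A sketch of the two forms (shape dialect spellings assumed):

    // Unsafe variant: operands are known broadcastable, so plain extent
    // tensors are a legal representation of the result.
    %0 = shape.broadcast %a, %b : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>
    // Safe variant: !shape.shape can also represent the error value that
    // arises when the operands turn out not to be broadcastable.
    %1 = shape.broadcast %c, %d : !shape.shape, !shape.shape -> !shape.shape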
PiperOrigin-RevId: 325228073
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.
PiperOrigin-RevId: 325079842
The transformation of unranked to ranked operations no longer generates cast
operations for shapes and sizes. Instead, we use the newly introduced support
for extent tensor and index types directly.
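An illustrative sketch of the direct style (shape dialect spellings assumed): shape.shape_of can yield an extent tensor and shape.rank an index, with no intermediate casts:

    %shape = shape.shape_of %arg : tensor<*xf32> -> tensor<?xindex>
    %rank = shape.rank %shape : tensor<?xindex> -> index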
PiperOrigin-RevId: 325057440
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.
PiperOrigin-RevId: 325056915
Instead, we now invoke multiple test tools in a row in end-to-end tests. For HLO dialects and passes, we use mlir-hlo-opt explicitly.
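For example, a test file might chain the tools in its RUN line (the pass names here are illustrative, not the exact ones used):

    // RUN: mlir-hlo-opt %s --chlo-legalize-to-hlo | mlir-hlo-opt --hlo-legalize-to-lhlo | FileCheck %s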
PiperOrigin-RevId: 324989884
Added support for HLO ops bitcast-convert, sort and while in MlirHloBuilder and enabled tests for NonMaxSuppressionV4 and SelfAdjointEigV2Op using these ops.
PiperOrigin-RevId: 324360651
This is required before exporting HLO dialect ops with standard dialect constants to XLA.
Also sink constants for the sort op. Added a TODO to generalize this pass to handle more ops and non-const values defined outside.
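A sketch of the sinking for sort (mhlo.sort region syntax of this era, simplified):

    // Before: %inc is defined above and captured by the comparator region.
    %inc = constant dense<1.0> : tensor<f32>
    %0 = "mhlo.sort"(%arg0) ({
    ^bb0(%lhs: tensor<f32>, %rhs: tensor<f32>):
      %1 = mhlo.add %lhs, %inc : tensor<f32>
      %2 = "mhlo.compare"(%1, %rhs) {comparison_direction = "LT"}
          : (tensor<f32>, tensor<f32>) -> tensor<i1>
      "mhlo.return"(%2) : (tensor<i1>) -> ()
    }) {dimension = 0 : i64} : (tensor<4xf32>) -> tensor<4xf32>
    // After the pass: the constant is cloned inside the region, so the
    // region no longer refers to values defined outside it.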
PiperOrigin-RevId: 324301911
The computation of a broadcasted shape forced the use of the shape type unnecessarily, which blocked further canonicalizations.
PiperOrigin-RevId: 323783998
Shuffle files around, use TableGen to register passes, and introduce
a `mlir-hlo-opt.cpp` file to hold the main entry point of the -opt tool
and stop relying on static registration for dialects and passes.
PiperOrigin-RevId: 323674455
This is done by reshaping the unranked tensor into a 1D ranked tensor, which results in safe broadcast/indexing logic when the other operand is a scalar.
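A sketch of the flattening (current op spellings assumed; the original change may have used older ones):

    // Flatten the unranked operand to a 1D tensor of its total size.
    %shape = shape.shape_of %arg : tensor<*xf32> -> tensor<?xindex>
    %n = shape.num_elements %shape : tensor<?xindex> -> index
    %flat_shape = tensor.from_elements %n : tensor<1xindex>
    %flat = "mhlo.dynamic_reshape"(%arg, %flat_shape)
        : (tensor<*xf32>, tensor<1xindex>) -> tensor<?xf32>
    // The scalar-vs-1D op is now safe; the result is reshaped back afterwards.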
PiperOrigin-RevId: 322553661
Some gathers can be interpreted as torch index selects. Transforming these
cases allows torch_index_select lowerings to be used for certain gathers.
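For illustration, the target form after the transformation (a sketch using mhlo.torch_index_select; shapes and attribute values assumed):

    // A gather that selects whole rows of %input by %indices becomes:
    %0 = "mhlo.torch_index_select"(%input, %indices)
        {dim = 0 : i64, batch_dims = 0 : i64}
        : (tensor<5x4xf32>, tensor<2xi32>) -> tensor<2x4xf32>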
PiperOrigin-RevId: 322255835
The existing conversion no longer worked and was not safe to undo. Furthermore, the pattern for mhlo.return had been removed.
Also adds some tests to ensure this does not regress again.
PiperOrigin-RevId: 321542071
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40925
[MLIR] Update lhlo.const to linalg lowering to use affine.store instead of std.store
The xla_lhlo.const lowering uses std.store to store a constant to
0-d memrefs. Update it to affine.store since such an access is trivially
affine (no indices). An affine.store can always be lowered to std.store.
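A minimal sketch of the new lowering output (std constant spelling of this era):

    // A 0-d memref access takes no indices, so it is trivially affine.
    %cst = constant 1.0 : f32
    affine.store %cst, %buf[] : memref<f32>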
Copybara import of the project:
--
9e18ede72fbbca107177bd742921e4cbf77adc82 by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Update lhlo.const to linalg lowering to use affine.store instead of std.store
The xla_lhlo.const lowering uses std.store to store a constant to
0-d memrefs. Update it to affine.store since such an access is trivially
affine (no indices). An affine.store can always be lowered to std.store.
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40925 from polymage-labs:lhlo_to_linalg_affine_store 9e18ede72fbbca107177bd742921e4cbf77adc82
PiperOrigin-RevId: 320623152
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40745
Fold broadcast_in_dim op if the operand is the result of a tensor splat.
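A sketch of the fold (xla_hlo spellings as of this change; attribute values illustrative):

    %cst = constant dense<5.0> : tensor<4xf32>
    %0 = "xla_hlo.broadcast_in_dim"(%cst)
        {broadcast_dimensions = dense<1> : tensor<1xi64>}
        : (tensor<4xf32>) -> tensor<3x4xf32>
    // Since the operand is a splat, this folds to a splat constant of the
    // result type: dense<5.0> : tensor<3x4xf32>.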
Copybara import of the project:
--
26c9f631448b8d6ffd20ece39ea8d4132b5550c7 by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Add constant folder for xla_hlo.broadcast_in_dim op
Fold broadcast_in_dim op if the operand is the result of a tensor
splat.
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40745 from polymage-labs:broadcast_in_dim_fold 26c9f631448b8d6ffd20ece39ea8d4132b5550c7
PiperOrigin-RevId: 320365164
Following on the plan of isolating the compiler/mlir/hlo directory.
Another xla_lhlo dialect will be created under compiler/mlir/xla/ later.
PiperOrigin-RevId: 320210326
There is no reason to have a multidimensional iota for codegen. It should
be canonicalized to a single-dimensional iota followed by a broadcast;
restricting iota to a single dimension plus a broadcast substantially
simplifies the implementation of iota operations.
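A sketch of the canonical form (mhlo spellings assumed):

    // Before: a 2D iota filling along dimension 1.
    %0 = "mhlo.iota"() {iota_dimension = 1 : i64} : () -> tensor<3x4xf32>
    // After: a 1D iota broadcast into place.
    %1 = "mhlo.iota"() {iota_dimension = 0 : i64} : () -> tensor<4xf32>
    %2 = "mhlo.broadcast_in_dim"(%1)
        {broadcast_dimensions = dense<1> : tensor<1xi64>}
        : (tensor<4xf32>) -> tensor<3x4xf32>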
PiperOrigin-RevId: 320095470
We're preparing to restructure the MLIR HLO ecosystem with 5 dialects:
- chlo: client dialect with explicit broadcast and multiple composite operations
- mhlo: hlo with dynamic shape, decoupled from XLA for evolution purposes
- lmhlo: same as above, but after buffer assignment.
- xla_hlo: mapping 1:1 to the XLA HloInstruction class.
- xla_lhlo: same as above, but after buffer assignment.
The first three dialects are intended to live in the new tensorflow/compiler/mlir/hlo
path; the latter two will be created in tensorflow/compiler/mlir/xla.
This patch only moves the directory; followups will bring other transformations and tests.
The structure of the new directory follows: https://llvm.discourse.group/t/rfc-canonical-file-paths-to-dialects/621 as we intend to make it a standalone buildable component (see also https://github.com/google/mlir-npcomp as another example).
PiperOrigin-RevId: 319273229