The transformation of unranked to ranked operations no longer generates cast
operations for shapes and sizes. Instead, we use the newly introduced support
for extent tensor and index types directly.
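For illustration, a minimal sketch of the new form, using today's shape dialect spellings (the syntax at the time of this change may have differed; %arg is a placeholder):

    // Shapes are computed as extent tensors and sizes as plain index
    // values, with no intermediate cast operations.
    %shape = shape.shape_of %arg : tensor<*xf32> -> tensor<?xindex>
    %size = shape.num_elements %shape : tensor<?xindex> -> index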
PiperOrigin-RevId: 325057440
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use of the safe variant
in the codebase, but it will eventually be used for shape inference.
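A minimal sketch of the two variants, using today's shape dialect spellings (assumed, not taken from this change; the SSA names are placeholders):

    // Unsafe variant: an extent tensor result is only legal when the
    // operands are known to be broadcastable.
    %0 = shape.broadcast %lhs, %rhs : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>
    // Safe variant: a !shape.shape result can also represent a
    // broadcast error.
    %1 = shape.broadcast %a, %b : !shape.shape, !shape.shape -> !shape.shape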
PiperOrigin-RevId: 325056915
mhlo.get_tuple_element supports extracting an mhlo.token type from a tuple. This updates the creation of tuples to allow for mhlo.token-typed operands.
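A minimal sketch of what this enables (illustrative IR, not taken from the change itself):

    // A token may now be a tuple operand and be extracted again.
    %tuple = "mhlo.tuple"(%token, %value) : (!mhlo.token, tensor<f32>) -> tuple<!mhlo.token, tensor<f32>>
    %t = "mhlo.get_tuple_element"(%tuple) {index = 0 : i32} : (tuple<!mhlo.token, tensor<f32>>) -> !mhlo.token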
PiperOrigin-RevId: 324628663
This is required before exporting HLO dialect ops with standard dialect constants to XLA.
Also sink constants for the sort op. Added a TODO to generalize this pass to handle more ops and non-constant values defined outside.
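A rough before/after sketch (op and attribute spellings approximate for the dialect at the time; the SSA names are placeholders):

    // Before: %c is captured implicitly by the sort comparator region.
    %c = "mhlo.constant"() {value = dense<0.000000e+00> : tensor<f32>} : () -> tensor<f32>
    %0 = "mhlo.sort"(%input) ({
    ^bb0(%lhs: tensor<f32>, %rhs: tensor<f32>):
      %cmp = "mhlo.compare"(%lhs, %c) {comparison_direction = "LT"} : (tensor<f32>, tensor<f32>) -> tensor<i1>
      "mhlo.return"(%cmp) : (tensor<i1>) -> ()
    }) {dimension = 0 : i64, is_stable = false} : (tensor<4xf32>) -> tensor<4xf32>
    // After sinking, the constant is cloned inside ^bb0, so the region
    // no longer references values defined outside of it.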
PiperOrigin-RevId: 324301911
Constants of unknown shape cannot be materialized. In most cases, one likely wants to use a scalar constant and rely on broadcasting instead.
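For illustration, a sketch of the scalar-plus-broadcast pattern (illustrative op spellings and shapes; %arg is a placeholder):

    // Materialize a scalar constant, then broadcast it to the dynamic
    // shape of the operand instead of creating a constant of unknown shape.
    %scalar = "mhlo.constant"() {value = dense<1.000000e+00> : tensor<f32>} : () -> tensor<f32>
    %shape = shape.shape_of %arg : tensor<?x?xf32> -> tensor<2xindex>
    %bcast = "mhlo.dynamic_broadcast_in_dim"(%scalar, %shape) {broadcast_dimensions = dense<> : tensor<0xi64>} : (tensor<f32>, tensor<2xindex>) -> tensor<?x?xf32>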
PiperOrigin-RevId: 324252475
The computation of a broadcasted shape forced the use of the shape type unnecessarily, which blocked further canonicalizations.
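A before/after sketch of the blocked pattern (today's shape dialect spellings, assumed):

    // Before: the result was forced into !shape.shape and had to be
    // cast back to an extent tensor.
    %s = shape.broadcast %a, %b : tensor<?xindex>, tensor<?xindex> -> !shape.shape
    %e = shape.to_extent_tensor %s : !shape.shape -> tensor<?xindex>
    // After: the extent tensor is produced directly, so no cast blocks
    // further canonicalization.
    %e2 = shape.broadcast %a, %b : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>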
PiperOrigin-RevId: 323783998
This is done by reshaping the unranked tensor into a 1-D ranked tensor, which results in safe broadcast/indexing logic when the other operand is a scalar.
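A sketch of the flattening, using current op spellings (which postdate this commit; the SSA names are placeholders):

    // Flatten the unranked operand to a 1-D tensor of its element count;
    // broadcasting/indexing against a scalar is then rank-safe.
    %shape = shape.shape_of %arg : tensor<*xf32> -> tensor<?xindex>
    %n = shape.num_elements %shape : tensor<?xindex> -> index
    %flat_shape = tensor.from_elements %n : tensor<1xindex>
    %flat = "mhlo.dynamic_reshape"(%arg, %flat_shape) : (tensor<*xf32>, tensor<1xindex>) -> tensor<?xf32>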
PiperOrigin-RevId: 322553661
Some gathers can be interpreted as torch index selects. Transforming these
cases allows torch_index_select lowerings to be used for certain gathers.
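For example, a gather that selects whole rows by index can be expressed as (illustrative shapes and attributes):

    %0 = "mhlo.torch_index_select"(%input, %indices) {dim = 0 : i64, batch_dims = 0 : i64} : (tensor<5x4xf32>, tensor<2xi32>) -> tensor<2x4xf32>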
PiperOrigin-RevId: 322255835
The existing conversion no longer worked and was not safe to undo. Furthermore, the pattern for mhlo.return had been removed.
Also adds some tests to ensure this does not regress again.
PiperOrigin-RevId: 321542071
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40925
Update lhlo.const to linalg lowering to use affine.store instead of std.store
The xla_lhlo.const lowering uses std.store to store a constant to
0-d memrefs. Update it to affine.store since such an access is trivially
affine (no indices). An affine.store can always be lowered to std.store.
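A minimal sketch of the resulting IR (%buf is a placeholder 0-d memref):

    // A 0-d access takes no indices, so it is trivially affine.
    %cst = constant 1.000000e+00 : f32
    affine.store %cst, %buf[] : memref<f32>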
Copybara import of the project:
--
9e18ede72fbbca107177bd742921e4cbf77adc82 by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Update lhlo.const to linalg lowering to use affine.store instead of std.store
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40925 from polymage-labs:lhlo_to_linalg_affine_store 9e18ede72fbbca107177bd742921e4cbf77adc82
PiperOrigin-RevId: 320623152
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40745
Fold broadcast_in_dim op if the operand is the result of a tensor splat.
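A before/after sketch of the fold (illustrative splat value and shapes):

    // Before: broadcast of a splat constant.
    %cst = constant dense<5.000000e+00> : tensor<4xf32>
    %0 = "xla_hlo.broadcast_in_dim"(%cst) {broadcast_dimensions = dense<1> : tensor<1xi64>} : (tensor<4xf32>) -> tensor<3x4xf32>
    // After: folded to a splat of the result type.
    %folded = constant dense<5.000000e+00> : tensor<3x4xf32>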
Copybara import of the project:
--
26c9f631448b8d6ffd20ece39ea8d4132b5550c7 by Uday Bondhugula <uday@polymagelabs.com>:
[MLIR] Add constant folder for xla_hlo.broadcast_in_dim op
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40745 from polymage-labs:broadcast_in_dim_fold 26c9f631448b8d6ffd20ece39ea8d4132b5550c7
PiperOrigin-RevId: 320365164
This follows the plan of isolating the compiler/mlir/hlo directory.
Another xla_lhlo dialect will be created under compiler/mlir/xla/ later.
PiperOrigin-RevId: 320210326
There is no reason to have a multidimensional iota for codegen. It should be
canonicalized to a single-dimensional iota followed by a broadcast. Restricting
iota to a single dimension plus a broadcast substantially simplifies
implementing iota operations.
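A sketch of the canonical form (mhlo spelling assumed; shapes illustrative):

    // Multidimensional iota ...
    %0 = "mhlo.iota"() {iota_dimension = 1 : i64} : () -> tensor<3x4xi32>
    // ... becomes a 1-D iota followed by a broadcast.
    %1 = "mhlo.iota"() {iota_dimension = 0 : i64} : () -> tensor<4xi32>
    %2 = "mhlo.broadcast_in_dim"(%1) {broadcast_dimensions = dense<1> : tensor<1xi64>} : (tensor<4xi32>) -> tensor<3x4xi32>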
PiperOrigin-RevId: 320095470
Also add a localized `mlir-hlo-opt` binary for testing
tensorflow/compiler/mlir/hlo/...; this directory is intended to be self-contained
and depend only on MLIR.
PiperOrigin-RevId: 319878984