Commit Graph

350 Commits

Author SHA1 Message Date
Rahul Joshi a6978cf4ab [MLIR][NFC] Adopt FuncOp/Region argument APIs.
- Use FuncOp::getArguments() and Region::getArguments() and friends where possible
  instead of going through the front() block.

PiperOrigin-RevId: 325352975
2020-08-07 22:19:21 -07:00
Lucy Fox cd22ecd136 Relax DynamicBroadcastInDim verifier when dimensions are dynamic.
For input and output dimensions that must match, we shouldn't fail when one dim is dynamic and the other is static: there is insufficient information to conclude a dimension mismatch.

PiperOrigin-RevId: 325344738
2020-08-07 22:18:38 -07:00
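
To make the reasoning above concrete: a mismatch is only provable when both extents are static and differ; a dynamic extent on either side is inconclusive. A minimal C++ sketch of that check (hypothetical helper name, not the actual verifier code; assumes MLIR's ShapedType convention of kDynamicSize for dynamic extents):

```cpp
#include "mlir/IR/StandardTypes.h"  // ShapedType (mlir/IR/BuiltinTypes.h in newer MLIR)

using namespace mlir;

// Hypothetical helper: two extents are a provable mismatch only when both
// are static and different; a dynamic extent on either side is inconclusive.
static bool isProvableDimMismatch(int64_t inputDim, int64_t outputDim) {
  if (inputDim == ShapedType::kDynamicSize ||
      outputDim == ShapedType::kDynamicSize)
    return false;
  return inputDim != outputDim;
}
```
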
A. Unique TensorFlower a68a16cdc7 [MLIR][XLA] Allow for choice of safe/unsafe variant in broadcast utils
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase, but it will be used for shape inference eventually.

PiperOrigin-RevId: 325228073
2020-08-07 22:16:11 -07:00
Alexander Belyaev bc3293a05f [MLIR] Remove Affine->STD and SCF->STD patterns from lhlo->llvm pass.
PiperOrigin-RevId: 325219360
2020-08-07 22:15:24 -07:00
Mehdi Amini 701312720c Add CMake files and lit configurations, enough for `ninja check-mlir-hlo` to pass on all the tests
PiperOrigin-RevId: 325172984
2020-08-07 22:14:34 -07:00
Andy Ly c340367702 Add canonicalization for unpacking and repacking the same tuple (e.g. tuple -> get_tuple_element -> tuple).
These unpacking and repacking of tuples may be generated when modifying tuple arguments or results.

PiperOrigin-RevId: 325162694
2020-08-05 21:38:02 -07:00
A. Unique TensorFlower 4372124362 [MLIR][XLA] Allow for choice of safe/unsafe variant in broadcast utils
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase, but it will be used for shape inference eventually.

PiperOrigin-RevId: 325079842
2020-08-05 12:43:29 -07:00
A. Unique TensorFlower 5d3cc2105e [MLIR][HLO] Remove redundant casts from unranked to ranked transformation
The transformation of unranked to ranked operations no longer generates cast
operations for shapes and sizes. Instead, we use the newly introduced support
for extent tensor and index types directly.

PiperOrigin-RevId: 325057440
2020-08-05 11:11:43 -07:00
A. Unique TensorFlower 37c36a4389 [MLIR][XLA] Allow for choice of safe/unsafe variant in broadcast utils
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase, but it will be used for shape inference eventually.

PiperOrigin-RevId: 325056915
2020-08-05 11:09:23 -07:00
Stephan Herhut 6584c2ab1f Remove optional static registration for hlo dialects again.
Instead, we invoke multiple test tools in a row in end-to-end tests now. For hlo dialects and passes, we use mlir-hlo-opt explicitly.

PiperOrigin-RevId: 324989884
2020-08-05 03:53:17 -07:00
Mehdi Amini 6c7d1a7c7e Internal change
PiperOrigin-RevId: 324753700
2020-08-03 22:59:44 -07:00
Smit Hinsu 3fe9a7d2db Legalize TensorFlow NonMaxSuppressionV4 and SelfAdjointEigV2Op ops to HLO
Added support for HLO ops bitcast-convert, sort, and while in MlirHloBuilder and enabled tests for NonMaxSuppressionV4 and SelfAdjointEigV2Op using these ops.

PiperOrigin-RevId: 324360651
2020-08-03 19:30:54 -07:00
A. Unique TensorFlower 7809320a5e Integrate LLVM at llvm/llvm-project@b7cfa6ca92
Updates LLVM usage to match
[b7cfa6ca9283](https://github.com/llvm/llvm-project/commit/b7cfa6ca9283)

PiperOrigin-RevId: 324331764
2020-08-03 19:30:43 -07:00
Smit Hinsu 577a81a66d Sink standard dialect constants in sink_constants_to_control_flow pass
This is required before exporting HLO dialect ops with standard dialect constants to XLA.

Also sink constants for the sort op. Added a TODO to generalize this pass to handle more ops and non-const values defined outside.

PiperOrigin-RevId: 324301911
2020-08-03 19:30:29 -07:00
Stephan Herhut 734b9b25fd Add optional static registration for mhlo/lmhlo passes.
PiperOrigin-RevId: 324190465
2020-08-03 19:29:40 -07:00
Thomas Joerg 735ae2838c Integrate LLVM at llvm/llvm-project@cd4e8d7f6f
Updates LLVM usage to match
[cd4e8d7f6f5e](https://github.com/llvm/llvm-project/commit/cd4e8d7f6f5e)

PiperOrigin-RevId: 324173542
2020-08-03 19:29:29 -07:00
Tres Popp cce4bddf4b Remove unnecessary conversions between Shape and ExtentTensor.
PiperOrigin-RevId: 323981215
2020-08-03 19:29:04 -07:00
Tres Popp ffef8d6593 Support CHLO->LHLO lowering for broadcasting operations with both inputs unranked.
PiperOrigin-RevId: 323960733
2020-08-03 19:28:52 -07:00
Stephan Herhut 1b0eb4baa7 Do not mandate the result type of shape computations but have it be inferred from context.
The computation of a broadcasted shape forced the use of the shape type unnecessarily, which blocked further canonicalizations.

PiperOrigin-RevId: 323783998
2020-08-03 19:28:12 -07:00
Mehdi Amini cd01bb4c4e More cleanup in mlir-hlo to prepare for the standalone build
Shuffle files around, use TableGen to register passes, and introduce
a `mlir-hlo-opt.cpp` file to hold the main entry point of the -opt tool
and stop relying on static registration for dialects/passes.

PiperOrigin-RevId: 323674455
2020-08-03 19:28:00 -07:00
Stephan Herhut effd3fb4f9 Extend unranked to ranked pattern for hlo operations to all unary and binary ops.
As this is essentially always the same pattern, only one operation is tested.

PiperOrigin-RevId: 323525418
2020-08-03 19:27:49 -07:00
Thomas Joerg 739758f9cc Integrate LLVM at llvm/llvm-project@eed333149d
Updates LLVM usage to match
[eed333149d17](https://github.com/llvm/llvm-project/commit/eed333149d17)

PiperOrigin-RevId: 323354988
2020-08-03 19:27:25 -07:00
Robert Suderman 8023baa959 Modified HLOAbsOp lowering for differing types.
PiperOrigin-RevId: 323082107
2020-08-03 19:27:12 -07:00
Hanhan Wang 8f262ae8f5 Add support for lowering mhlo.iota to Linalg.
PiperOrigin-RevId: 322799853
2020-07-30 22:34:45 +00:00
Tres Popp 4251630426 Support CHLO broadcasting operations between scalar and unranked tensors.
This is done by reshaping the unranked tensor into a 1D ranked tensor, which results in safe broadcast/indexing logic when the other operand is a scalar.

PiperOrigin-RevId: 322553661
2020-07-30 22:34:40 +00:00
Tres Popp 63d62b7952 Change cast to dyn_cast in hlo::ReshapeOp's verification.
With cast, a failing verification results in an assertion error rather than returning a failing status.

PiperOrigin-RevId: 322317937
2020-07-30 22:34:36 +00:00
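
For context on the `cast` vs `dyn_cast` distinction described above: `cast<>` asserts when the cast fails, while `dyn_cast<>` returns a null value that a verifier can turn into a normal diagnostic. A minimal illustrative sketch (hypothetical helper; not the actual hlo::ReshapeOp verifier):

```cpp
#include "mlir/IR/Operation.h"
#include "mlir/IR/StandardTypes.h"  // RankedTensorType (mlir/IR/BuiltinTypes.h in newer MLIR)
#include "mlir/Support/LogicalResult.h"

using namespace mlir;

// Hypothetical verifier helper: require the first operand to be a ranked
// tensor. With dyn_cast<>, a mismatch becomes a reported failure; using
// cast<> here would instead trip an assertion and abort the process.
static LogicalResult verifyFirstOperandIsRankedTensor(Operation *op) {
  auto rankedTy = op->getOperand(0).getType().dyn_cast<RankedTensorType>();
  if (!rankedTy)
    return op->emitOpError("expected first operand to be a ranked tensor");
  return success();
}
```
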
Robert Suderman c23ad602c8 Add a transform for Gathers to torch_index_select.
Some gathers can be interpreted as torch index selects. Transforming these
cases allows torch_index_select lowerings to be used for certain gathers.

PiperOrigin-RevId: 322255835
2020-07-30 22:34:32 +00:00
Robert Suderman cc776071fe Fix namespace for complex lowerings
PiperOrigin-RevId: 322180317
2020-07-30 22:34:27 +00:00
A. Unique TensorFlower 30fa3db949 Update comments to reflect the new names of `GenericOp` constructor parameters
PiperOrigin-RevId: 321795872
2020-07-30 22:34:23 +00:00
Stephan Herhut c44e08351d Fix mhlo to lmhlo conversion for ReduceOp.
The existing conversion no longer worked and was not safe to undo. Furthermore, the pattern for mhlo.return had been removed.

Also adds some tests to ensure this does not degrade again.

PiperOrigin-RevId: 321542071
2020-07-30 22:34:19 +00:00
Robert Suderman 98a1e3b108 Add an optimization that converts some Gathers to Slices.
Some Gathers can be represented as slices. This lowering transforms
these gathers into slices.

PiperOrigin-RevId: 321394868
2020-07-30 22:34:10 +00:00
Stephan Herhut 7a6adc6a84 Add canonicalization patterns for dynamic_broadcast_in_dim where the target shape is the shape of the operand.
PiperOrigin-RevId: 321312182
2020-07-30 22:34:06 +00:00
Stephan Herhut 86f290896d Implement lowering of lmhlo.reshape_memref_cast to LLVM for a shape operand of unknown length.
PiperOrigin-RevId: 320959625
2020-07-30 22:34:02 +00:00
Uday Bondhugula d166b66cba PR #40925: [MLIR] Update lhlo.const to linalg lowering to use affine.store instead of std.store
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40925

The xla_lhlo.const lowering uses std.store to store a constant to
0-d memrefs. Update it to affine.store since such an access is trivially
affine (no indices). An affine.store can always be lowered to std.store.
Copybara import of the project:

--
9e18ede72fbbca107177bd742921e4cbf77adc82 by Uday Bondhugula <uday@polymagelabs.com>:

[MLIR] Update lhlo.const to linalg lowering to use affine.store instead of std.store

The xla_lhlo.const lowering uses std.store to store a constant to
0-d memrefs. Update it to affine.store since such an access is trivially
affine (no indices). An affine.store can always be lowered to std.store.

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40925 from polymage-labs:lhlo_to_linalg_affine_store 9e18ede72fbbca107177bd742921e4cbf77adc82
PiperOrigin-RevId: 320623152
2020-07-30 22:33:51 +00:00
Alexander Belyaev 6eaccefdab [MLIR][LHLO] Lower ReshapeMemRefCastOp to LLVM.
PiperOrigin-RevId: 320572751
2020-07-30 22:33:47 +00:00
Robert Suderman 06ae59074f Fold xla iota across a 1-length dimension into a zero value
Iota across length-1 is just a constant. Fold into it.

PiperOrigin-RevId: 320443468
2020-07-30 22:33:43 +00:00
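
The observation behind the fold above: along a dimension of extent 1, iota can only produce the value 0, so the whole result is a zero constant. A minimal sketch of that idea (hypothetical helper name, not the actual mhlo fold hook):

```cpp
#include "mlir/IR/Attributes.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/StandardTypes.h"  // ShapedType (mlir/IR/BuiltinTypes.h in newer MLIR)

using namespace mlir;

// Hypothetical helper: if the iota dimension has size 1, every element is 0,
// so the op folds to a zero splat of the result type. Returns a null
// attribute when the fold does not apply.
static Attribute foldUnitDimIota(ShapedType resultType, int64_t iotaDimension) {
  if (!resultType.hasStaticShape() || resultType.getDimSize(iotaDimension) != 1)
    return {};
  Builder builder(resultType.getContext());
  Attribute zero = builder.getZeroAttr(resultType.getElementType());
  // A single attribute value makes the DenseElementsAttr a splat.
  return DenseElementsAttr::get(resultType, zero);
}
```
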
Uday Bondhugula de0578b4f9 PR #40745: [MLIR] Add constant folder for xla_hlo.broadcast_in_dim op
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40745

Fold broadcast_in_dim op if the operand is the result of a tensor splat.
Copybara import of the project:

--
26c9f631448b8d6ffd20ece39ea8d4132b5550c7 by Uday Bondhugula <uday@polymagelabs.com>:

[MLIR] Add constant folder for xla_hlo.broadcast_in_dim op

Fold broadcast_in_dim op if the operand is the result of a tensor
splat.

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40745 from polymage-labs:broadcast_in_dim_fold 26c9f631448b8d6ffd20ece39ea8d4132b5550c7
PiperOrigin-RevId: 320365164
2020-07-30 22:33:34 +00:00
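
The fold above rests on a simple fact: broadcasting a splat constant yields the same splat value at the result shape, independent of the chosen broadcast dimensions. A minimal sketch of that idea (hypothetical helper, not the actual folder registered on the op):

```cpp
#include "mlir/IR/Attributes.h"
#include "mlir/IR/StandardTypes.h"  // ShapedType (mlir/IR/BuiltinTypes.h in newer MLIR)

using namespace mlir;

// Hypothetical helper: if the broadcast operand is a splat constant, the
// result is the same splat materialized at the broadcast result type.
// Returns a null attribute when the fold does not apply.
static Attribute foldBroadcastInDimOfSplat(Attribute operandValue,
                                           ShapedType resultType) {
  auto splat = operandValue.dyn_cast_or_null<SplatElementsAttr>();
  if (!splat || !resultType.hasStaticShape())
    return {};
  return DenseElementsAttr::get(resultType, splat.getSplatValue<Attribute>());
}
```
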
Mehdi Amini 506ddd9c4a Cleanup build rule names in compiler/mlir/hlo to remove the redundant/obsolete xla_ prefix
PiperOrigin-RevId: 320320140
2020-07-30 22:33:29 +00:00
Alexander Belyaev f4303855c4 Add mlir_cpu_runner tests infra for CHLO->LHLO->LLVM lowering.
PiperOrigin-RevId: 320218897
2020-07-30 22:33:25 +00:00
Mehdi Amini a575636862 Rename XlaHloDialect class into MhloDialect following the recent dialect namespace renaming
PiperOrigin-RevId: 320213526
2020-07-30 22:33:20 +00:00
Mehdi Amini 94dcb90d38 Rename xla_chlo dialect into chlo
Following on the plan of isolating the compiler/mlir/hlo directory.

PiperOrigin-RevId: 320212018
2020-07-30 22:33:16 +00:00
Mehdi Amini 7c4a5d62b5 Rename xla_lhlo dialect into lmhlo
Following on the plan of isolating the compiler/mlir/hlo directory.
Another xla_lhlo dialect will be created under compiler/mlir/xla/ later.

PiperOrigin-RevId: 320210326
2020-07-30 22:33:11 +00:00
Alexander Belyaev b076e018a8 [MLIR][LHLO] Legalize CallOp that call funcs with tensor args/results.
PiperOrigin-RevId: 320172723
2020-07-30 22:33:07 +00:00
Alexander Belyaev e8cfdee592 [MLIR][LHLO] Convert mhlo.dynamic_reshape -> lhlo.reshape_memref_cast.
PiperOrigin-RevId: 320149593
2020-07-30 22:33:02 +00:00
Alexander Belyaev 8692fde3f9 [MLIR] Convert FuncOp signature with unranked types in HLO->LHLO conversion.
PiperOrigin-RevId: 320146856
2020-07-30 22:32:58 +00:00
Robert Suderman e1651b6090 Canonicalize multidimensional iota to use broadcast
There is no reason to have a multidimensional iota for codegen.
This should be canonicalized to a single-dimensional iota followed by a
broadcast. Restricting iota to a single dimension plus a broadcast
substantially simplifies implementing iota operations.

PiperOrigin-RevId: 320095470
2020-07-30 22:32:54 +00:00
Mehdi Amini 8900222fed Rename `xla_hlo` dialect to `mhlo`
This is part of the ongoing refactoring of the HLO-related dialects.
`xla_hlo` will be reintroduced in a new form later.

PiperOrigin-RevId: 319916753
2020-07-30 22:32:50 +00:00
Mehdi Amini 31dc1b21eb Move XLA-independent transforms to the new MLIR-HLO directory
This is as straightforward as possible; more cleanup/rewrites to come.

PiperOrigin-RevId: 319849713
2020-07-30 22:32:40 +00:00
Alexander Belyaev 72010faaa7 [MLIR][LHLO] Add ReshapeMemrefCastOp to LHLO.
PiperOrigin-RevId: 319799171
2020-07-30 22:32:36 +00:00
Mehdi Amini fcf3df1541 Move the HLO/LHLO dialects to a new directory: tensorflow/compiler/mlir/hlo
We're preparing to restructure the MLIR HLO ecosystem with 5 dialects:

- chlo: client dialect with explicit broadcast and multiple composite operations.
- mhlo: hlo with dynamic shapes, decoupled from XLA so it can evolve independently.
- lmhlo: same as above, but after buffer assignment.
- xla_hlo: mapping 1:1 to the XLA HloInstruction class.
- xla_lhlo: same as above, but after buffer assignment.

The first three dialects are intended to live in the new tensorflow/compiler/mlir/hlo
path; the latter two will be created in tensorflow/compiler/mlir/xla.

This patch only moves the directory; followups will bring the remaining transformations and tests.

The structure of the new directory follows: https://llvm.discourse.group/t/rfc-canonical-file-paths-to-dialects/621 as we intend to make it a standalone buildable component (see also https://github.com/google/mlir-npcomp as another example).

PiperOrigin-RevId: 319273229
2020-07-30 22:32:32 +00:00