Commit Graph

368 Commits

Author SHA1 Message Date
A. Unique TensorFlower a68a16cdc7 [MLIR][XLA] Allow for choice of safe/unsafe variant in broadcast utils
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.

PiperOrigin-RevId: 325228073
2020-08-07 22:16:11 -07:00
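As a rough illustration of the two variants described above (the op and types come from the upstream shape dialect, but the exact assembly syntax at this revision is an assumption and the values are hypothetical):

```mlir
// Unsafe variant: the operands are assumed broadcastable, so the result can be
// represented directly as an extent tensor.
%u = shape.broadcast %lhs, %rhs : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>

// Safe variant: the result is a !shape.shape value, which can also encode an
// error when the operand shapes turn out not to be broadcastable.
%s = shape.broadcast %a, %b : !shape.shape, !shape.shape -> !shape.shape
```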
Alexander Belyaev bc3293a05f [MLIR] Remove Affine->STD and SCF->STD patterns from lhlo->llvm pass.
PiperOrigin-RevId: 325219360
2020-08-07 22:15:24 -07:00
Mehdi Amini 701312720c Add CMake files and lit configurations, enough for `ninja check-mlir-hlo` to pass on all the tests
PiperOrigin-RevId: 325172984
2020-08-07 22:14:34 -07:00
Andy Ly c340367702 Add canonicalization for unpacking and repacking the same tuple (e.g. tuple -> get_tuple_element -> tuple).
Such unpacking and repacking of tuples may be generated when modifying tuple arguments or results.

PiperOrigin-RevId: 325162694
2020-08-05 21:38:02 -07:00
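A minimal sketch of the pattern this canonicalization folds away (hypothetical values and element types, written in generic op form; not taken from the change itself):

```mlir
// %repacked unpacks %t and immediately repacks the same elements...
%e0 = "mhlo.get_tuple_element"(%t) {index = 0 : i32} : (tuple<tensor<f32>, tensor<i32>>) -> tensor<f32>
%e1 = "mhlo.get_tuple_element"(%t) {index = 1 : i32} : (tuple<tensor<f32>, tensor<i32>>) -> tensor<i32>
%repacked = "mhlo.tuple"(%e0, %e1) : (tensor<f32>, tensor<i32>) -> tuple<tensor<f32>, tensor<i32>>
// ...so after canonicalization, uses of %repacked can simply be replaced with %t.
```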
A. Unique TensorFlower ad12e06ceb Integrate LLVM at llvm/llvm-project@acb66b9111
Updates LLVM usage to match
[acb66b9111ba](https://github.com/llvm/llvm-project/commit/acb66b9111ba)

PiperOrigin-RevId: 325125249
2020-08-05 21:37:34 -07:00
A. Unique TensorFlower 4372124362 [MLIR][XLA] Allow for choice of safe/unsafe variant in broadcast utils
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.

PiperOrigin-RevId: 325079842
2020-08-05 12:43:29 -07:00
A. Unique TensorFlower 5d3cc2105e [MLIR][HLO] Remove redundant casts from unranked to ranked transformation
The transformation of unranked to ranked operations no longer generates cast
operations for shapes and sizes. Instead, we use the newly introduced support
for extent tensor and index types directly.

PiperOrigin-RevId: 325057440
2020-08-05 11:11:43 -07:00
A. Unique TensorFlower 37c36a4389 [MLIR][XLA] Allow for choice of safe/unsafe variant in broadcast utils
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.

PiperOrigin-RevId: 325056915
2020-08-05 11:09:23 -07:00
Benjamin Kramer d26e863865 Integrate LLVM at llvm/llvm-project@c558c22cab
Updates LLVM usage to match
[c558c22cab9a](https://github.com/llvm/llvm-project/commit/c558c22cab9a)

PiperOrigin-RevId: 324993578
2020-08-05 05:17:23 -07:00
Stephan Herhut 6584c2ab1f Remove optional static registration for hlo dialects again.
Instead, we now invoke multiple test tools in a row in end-to-end tests. For hlo dialects and passes, we use mlir-hlo-opt explicitly.

PiperOrigin-RevId: 324989884
2020-08-05 03:53:17 -07:00
A. Unique TensorFlower 9a97cffbb5 Integrate LLVM at llvm/llvm-project@28e322ea93
Updates LLVM usage to match
[28e322ea9393](https://github.com/llvm/llvm-project/commit/28e322ea9393)

PiperOrigin-RevId: 324957510
2020-08-04 22:31:45 -07:00
Stephan Herhut 7bff244b8b Allow index typed memrefs in reshape_memref_cast.
With the recent change to allow memref of index in MLIR core, we should also
allow this in the mhlo dialect.

PiperOrigin-RevId: 324879354
2020-08-04 13:34:11 -07:00
Mehdi Amini 66f023a057 Add build helper to mlir-hlo
PiperOrigin-RevId: 324843181
2020-08-04 10:46:48 -07:00
A. Unique TensorFlower bf8836effb Integrate LLVM at llvm/llvm-project@b5059b7140
Updates LLVM usage to match
[b5059b714023](https://github.com/llvm/llvm-project/commit/b5059b714023)

PiperOrigin-RevId: 324754243
2020-08-03 23:05:03 -07:00
Mehdi Amini 6c7d1a7c7e Internal change
PiperOrigin-RevId: 324753700
2020-08-03 22:59:44 -07:00
Mehdi Amini f044cf780c Introduce early documentation on mlir-hlo
PiperOrigin-RevId: 324732753
2020-08-03 19:31:38 -07:00
Andy Ly 4c8fead3e0 Add support for token operands to mhlo.tuple.
mhlo.get_tuple_element supports extracting a mhlo.token type from a tuple. This updates the creation of tuples to allow for mhlo.token typed operands.

PiperOrigin-RevId: 324628663
2020-08-03 19:31:15 -07:00
Smit Hinsu 3fe9a7d2db Legalize TensorFlow NonMaxSuppressionV4 and SelfAdjointEigV2Op ops to HLO
Added support for HLO ops bitcast-convert, sort and while in MlirHloBuilder and enabled tests for NonMaxSuppressionV4 and SelfAdjointEigV2Op using these ops.

PiperOrigin-RevId: 324360651
2020-08-03 19:30:54 -07:00
A. Unique TensorFlower 7809320a5e Integrate LLVM at llvm/llvm-project@b7cfa6ca92
Updates LLVM usage to match
[b7cfa6ca9283](https://github.com/llvm/llvm-project/commit/b7cfa6ca9283)

PiperOrigin-RevId: 324331764
2020-08-03 19:30:43 -07:00
Smit Hinsu 577a81a66d Sink standard dialect constants in sink_constants_to_control_flow pass
This is required before exporting HLO dialect ops with standard dialect constants to XLA.

Also, sink constants for sort op as well. Added a TODO to generalize this pass to handle more ops and non-const values defined outside.

PiperOrigin-RevId: 324301911
2020-08-03 19:30:29 -07:00
Smit Hinsu 1c535f1718 Restrict GetDimensionSize HLO op result type to 32 bit integer
The XLA implementation has this limitation and always uses a 32-bit result for this instruction. This will cause a mismatch between the result type in MLIR and XLA at the time of export.

This should be resolved once we have a special dialect mapping directly to HLOInstructionProto. Another option until then could be to introduce a pass to legalize mhlo itself to match XLA semantics.

PiperOrigin-RevId: 324286936
2020-08-03 19:30:15 -07:00
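A hypothetical example of the constrained form (the dimension attribute type is an assumption and may differ at this revision):

```mlir
// The result element type is restricted to i32 so that it matches what XLA
// produces for this instruction at export time.
%dim = "mhlo.get_dimension_size"(%arg) {dimension = 1 : i64} : (tensor<4x?xf32>) -> tensor<i32>
```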
Tim Shen fcb91fb0b9 [MLIR] Add conversion between XLA Fusion and MHLO FusionOp.
PiperOrigin-RevId: 324279065
2020-08-03 19:30:03 -07:00
Stephan Herhut 9cbe5f2285 Constrain mhlo.const to static shaped tensors.
Constants of unknown shape cannot be materialized. In most cases, one likely wants to use a scalar constant and rely on broadcasting instead.

PiperOrigin-RevId: 324252475
2020-08-03 19:29:51 -07:00
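To illustrate the constraint (a sketch with hypothetical values, using generic op syntax):

```mlir
// Allowed: a constant whose result type is statically shaped.
%ok = "mhlo.constant"() {value = dense<[1.0, 2.0]> : tensor<2xf32>} : () -> tensor<2xf32>

// Rejected after this change: a constant with a dynamically shaped result such
// as tensor<?xf32>; prefer a scalar constant plus an explicit broadcast instead.
```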
Stephan Herhut 734b9b25fd Add optional static registration for mhlo/lmhlo passes.
PiperOrigin-RevId: 324190465
2020-08-03 19:29:40 -07:00
Thomas Joerg 735ae2838c Integrate LLVM at llvm/llvm-project@cd4e8d7f6f
Updates LLVM usage to match
[cd4e8d7f6f5e](https://github.com/llvm/llvm-project/commit/cd4e8d7f6f5e)

PiperOrigin-RevId: 324173542
2020-08-03 19:29:29 -07:00
Jacques Pienaar d83d437588 Add chlo.acos and legalization
Add a client HLO op for arc cosine, legalize the TF op to it, and add a legalization from it to HLO.

PiperOrigin-RevId: 324053167
2020-08-03 19:29:17 -07:00
Tres Popp cce4bddf4b Remove unnecessary conversions between Shape and ExtentTensor.
PiperOrigin-RevId: 323981215
2020-08-03 19:29:04 -07:00
Tres Popp ffef8d6593 Support CHLO->LHLO lowering for broadcasting operations with both inputs unranked.
PiperOrigin-RevId: 323960733
2020-08-03 19:28:52 -07:00
Stephan Herhut b09bf2a4dc Move hlo end-to-end tests to the hlo directory tree.
PiperOrigin-RevId: 323955773
2020-08-03 19:28:37 -07:00
Robert Suderman de5ddaf7c9 HLO Random operations should match shape constraints.
PiperOrigin-RevId: 323844002
2020-08-03 19:28:23 -07:00
Stephan Herhut 1b0eb4baa7 Do not mandate the result type of shape computations but have it be inferred from context.
The computation of a broadcasted shape forced the use of the shape type unnecessarily, which blocked further canonicalizations.

PiperOrigin-RevId: 323783998
2020-08-03 19:28:12 -07:00
Mehdi Amini cd01bb4c4e More cleanup in mlir-hlo to prepare for the standalone build
Shuffle files around, use TableGen to register passes, and introduce
a `mlir-hlo-opt.cpp` file to hold the main entry point of the -opt tool
and stop relying on static registration for dialect/passes.

PiperOrigin-RevId: 323674455
2020-08-03 19:28:00 -07:00
Stephan Herhut effd3fb4f9 Extend unranked to ranked pattern for hlo operations to all unary and binary ops.
As this is essentially always the same pattern, only one operation is tested.

PiperOrigin-RevId: 323525418
2020-08-03 19:27:49 -07:00
Benjamin Kramer b7c4314e7f Allow running tests with FILECHECK_OPTS=-enable-var-scope
PiperOrigin-RevId: 323420636
2020-08-03 19:27:38 -07:00
Thomas Joerg 739758f9cc Integrate LLVM at llvm/llvm-project@eed333149d
Updates LLVM usage to match
[eed333149d17](https://github.com/llvm/llvm-project/commit/eed333149d17)

PiperOrigin-RevId: 323354988
2020-08-03 19:27:25 -07:00
Robert Suderman 8023baa959 Modified HLOAbsOp lowering for differing types.
PiperOrigin-RevId: 323082107
2020-08-03 19:27:12 -07:00
Kazuaki Ishizaki 882468da13 PR #41662: NFC - minor spelling tweaks
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/41662

This PR addresses minor spelling tweaks in documents
Copybara import of the project:

--
b806191a117990a479944b40ec7a4b79843287a2 by Kazuaki Ishizaki <ishizaki@jp.ibm.com>:

fix trivial typo

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/41662 from kiszk:spelling_tweaks_docs b806191a117990a479944b40ec7a4b79843287a2
PiperOrigin-RevId: 322955351
2020-08-03 19:23:45 -07:00
Hanhan Wang 8f262ae8f5 Add support for lowering mhlo.iota to Linalg.
PiperOrigin-RevId: 322799853
2020-07-30 22:34:45 +00:00
Tres Popp 4251630426 Support CHLO broadcasting operations between scalar and unranked tensors.
This is done by reshaping the unranked tensor into a 1D ranked tensor, which results in safe broadcast/indexing logic when the other operand is a scalar.

PiperOrigin-RevId: 322553661
2020-07-30 22:34:40 +00:00
Tres Popp 63d62b7952 Change cast to dyn_cast in hlo::ReshapeOp's verification.
With cast, a failing verification results in an assertion error rather than returning a failing status.

PiperOrigin-RevId: 322317937
2020-07-30 22:34:36 +00:00
Robert Suderman c23ad602c8 Add a transform for Gathers to torch_index_select.
Some gathers can be interpreted as torch index selects. Transforming these
cases allows torch_index_select lowerings to be used for certain gathers.

PiperOrigin-RevId: 322255835
2020-07-30 22:34:32 +00:00
Robert Suderman cc776071fe Fix namespace for complex lowerings
PiperOrigin-RevId: 322180317
2020-07-30 22:34:27 +00:00
A. Unique TensorFlower 30fa3db949 Update comments to reflect the new names of `GenericOp` constructor parameters
PiperOrigin-RevId: 321795872
2020-07-30 22:34:23 +00:00
Stephan Herhut c44e08351d Fix mhlo to lmhlo conversion for ReduceOp.
The existing conversion no longer worked and was not safe to undo. Furthermore, the pattern for mhlo.return had been removed.

Also adds some tests to ensure this does not degrade again.

PiperOrigin-RevId: 321542071
2020-07-30 22:34:19 +00:00
A. Unique TensorFlower c8bb0ff54d Integrate LLVM at https://github.com/llvm/llvm-project/commit/f233b92f92a6
PiperOrigin-RevId: 321454533
2020-07-30 22:34:14 +00:00
Robert Suderman 98a1e3b108 Add an optimization that converts some Gathers to Slices.
Some Gathers can be represented as slices. This lowering transforms
these gathers into slices.

PiperOrigin-RevId: 321394868
2020-07-30 22:34:10 +00:00
Stephan Herhut 7a6adc6a84 Add canonicalization patterns for dynamic_broadcast_in_dim where the target shape is the shape of the operand.
PiperOrigin-RevId: 321312182
2020-07-30 22:34:06 +00:00
Stephan Herhut 86f290896d Implement lowering of lmhlo.reshape_memref_cast to LLVM for unknown length shape operand.
PiperOrigin-RevId: 320959625
2020-07-30 22:34:02 +00:00
Uday Bondhugula d166b66cba PR #40925: [MLIR] Update lhlo.const to linalg lowering to use affine.store instead of std.store
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40925

The xla_lhlo.const lowering uses std.store to store a constant to
0-d memrefs. Update it to affine.store since such an access is trivially
affine (no indices). An affine.store can always be lowered to std.store.
Copybara import of the project:

--
9e18ede72fbbca107177bd742921e4cbf77adc82 by Uday Bondhugula <uday@polymagelabs.com>:

[MLIR] Update lhlo.const to linalg lowering to use affine.store instead of std.store

The xla_lhlo.const lowering uses std.store to store a constant to
0-d memrefs. Update it to affine.store since such an access is trivially
affine (no indices). An affine.store can always be lowered to std.store.

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40925 from polymage-labs:lhlo_to_linalg_affine_store 9e18ede72fbbca107177bd742921e4cbf77adc82
PiperOrigin-RevId: 320623152
2020-07-30 22:33:51 +00:00
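A minimal sketch of the change described in this commit, storing a constant into a 0-d memref (hypothetical value and buffer names):

```mlir
%cst = constant 1.0 : f32

// Before: the lowering emitted a standard-dialect store.
store %cst, %buf[] : memref<f32>

// After: the access is trivially affine (no indices), so affine.store is used;
// an affine.store can always be lowered to std.store later.
affine.store %cst, %buf[] : memref<f32>
```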
Alexander Belyaev 6eaccefdab [MLIR][LHLO] Lower ReshapeMemRefCastOp to LLVM.
PiperOrigin-RevId: 320572751
2020-07-30 22:33:47 +00:00