Commit Graph

55 Commits

Author SHA1 Message Date
Smit Hinsu bc7b6374c8 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first casting to the unsigned type of the same bit-width.

PiperOrigin-RevId: 345902167
2020-12-05 18:55:41 -08:00
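To make the fix above concrete, here is a standalone C++ sketch (not the actual XLA kernel code; the helper names are invented for illustration) contrasting the sign-extending cast with the two-step cast through the unsigned type of the same width:

```cpp
#include <cstdint>
#include <cstdio>

// Packs two s32 seeds into one u64 by casting straight to u64: a negative
// low seed sign-extends and its leading 1s clobber the high seed's slot.
uint64_t CombineSeedsWrong(int32_t hi, int32_t lo) {
  return (static_cast<uint64_t>(hi) << 32) | static_cast<uint64_t>(lo);
}

// Casting through uint32_t first keeps each seed confined to 32 bits.
uint64_t CombineSeedsFixed(int32_t hi, int32_t lo) {
  return (static_cast<uint64_t>(static_cast<uint32_t>(hi)) << 32) |
         static_cast<uint64_t>(static_cast<uint32_t>(lo));
}

int main() {
  std::printf("%016llx\n",
              static_cast<unsigned long long>(CombineSeedsWrong(1, -2)));
  // Prints fffffffffffffffe: the high seed (1) is lost.
  std::printf("%016llx\n",
              static_cast<unsigned long long>(CombineSeedsFixed(1, -2)));
  // Prints 00000001fffffffe: both seeds are preserved.
  return 0;
}
```
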
Phoenix Meadowlark c33bdcbd03 Remove fold of `mhlo.compare(%arg0, %arg0)` for floating types.
Two tensors having the same SSA value isn't sufficient to conclude equality for floating-point types, since `NaN != NaN`. As written, this fold causes `tf.IsNan` to [miscompile](https://github.com/google/iree/issues/4061).

PiperOrigin-RevId: 345730640
2020-12-04 12:15:02 -08:00
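A minimal C++ illustration of why that fold is unsound for floating-point values (plain standard C++, independent of the MLIR folder itself):

```cpp
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
  float x = std::numeric_limits<float>::quiet_NaN();
  std::printf("x == x -> %d\n", x == x);  // 0: folding compare(x, x) to true is wrong
  std::printf("x != x -> %d\n", x != x);  // 1: exactly the behavior tf.IsNan relies on
  std::printf("isnan(x) -> %d\n", static_cast<int>(std::isnan(x)));
  return 0;
}
```
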
Smit Hinsu 9bd1995f90 Legalize XlaReplicaId to HLO replica-id op
Also, define a shape inference function for the HLO replica-id op.

PiperOrigin-RevId: 345714342
2020-12-04 11:04:40 -08:00
A. Unique TensorFlower e87d53742b Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first casting to the unsigned type of the same bit-width.

PiperOrigin-RevId: 345618958
2020-12-04 00:04:10 -08:00
Smit Hinsu 9456af5880 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first casting to the unsigned type of the same bit-width.

PiperOrigin-RevId: 345605910
2020-12-03 22:09:56 -08:00
A. Unique TensorFlower 1b711670bc Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first casting to the unsigned type of the same bit-width.

PiperOrigin-RevId: 345239817
2020-12-02 08:42:07 -08:00
Smit Hinsu 733fc6d032 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first casting to the unsigned type of the same bit-width.

PiperOrigin-RevId: 345227848
2020-12-02 07:24:10 -08:00
Adrian Kuegel d14c63da54 Add a canonicalization pattern to remove redundant dynamic_reshapes.
PiperOrigin-RevId: 344517381
2020-11-27 04:46:50 -08:00
A. Unique TensorFlower 7f239c7ba2 Add canonicalizer for Reshape(Broadcast(X)) pattern when it is an identity sequence
PiperOrigin-RevId: 343251257
2020-11-19 02:32:45 -08:00
Tres Popp 1dffa62fe9 Fold away shape.shape_of(mhlo.dynamic_reshape(inp, shape))
This specific pattern can be replaced with the shape passed to dynamic_reshape. It is implemented as a canonicalization on mhlo.dynamic_reshape to fit into the existing canonicalization infrastructure.

PiperOrigin-RevId: 342009365
2020-11-12 02:48:26 -08:00
Smit Hinsu 4ef12aa000 Update GetDimensionSize and SetDimensionSize ops to use I64 attribute for dimension
This matches HLO semantics and the general dimension semantics in MLIR.

Also,

* Define minimal verifier for these ops.
* Add folder for SetDimensionSize op on static shaped dimension.
* Fix assumption of ranked shape in GetDimensionSize op.

PiperOrigin-RevId: 341150923
2020-11-06 18:03:04 -08:00
Dmitry Volodin 1821c69910 PR #44405: Fix typos in compiler directory
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44405

Splitting #43857 by top-level directories.
Copybara import of the project:

--
fa5da7d5478649d11321dcac9f867b0a57e4798a by Dmitry Volodin <mr.molkree@gmail.com>:

fix typos in compiler dir

--
4d3c9f047f7ecb8ab299f1bf28a86fd39096eee7 by Dmitry Volodin <mr.molkree@gmail.com>:

fix one test as "atleast" in it comes from Bazel

--
9440ebaaa9fc4a735f7f72f0c8f0de4ec58afbd6 by Dmitry Volodin <mr.molkree@gmail.com>:

a bit more

PiperOrigin-RevId: 340819994
2020-11-05 03:31:54 -08:00
Richard Uhler 82031b356c Improve error message for improperly shaped slice indices.
The slice indices must be rank-1 and have the same number of elements as the rank of the operand. Give reasonable error messages for violations of these requirements instead of a misleading error message saying that the types of the indices don't all match.

PiperOrigin-RevId: 340660822
2020-11-04 09:10:51 -08:00
Thomas Joerg 3a6580bf75 Integrate LLVM at llvm/llvm-project@4d11daa659
Updates LLVM usage to match
[4d11daa659a1](https://github.com/llvm/llvm-project/commit/4d11daa659a1)

PiperOrigin-RevId: 339631220
2020-10-29 03:02:38 -07:00
Smit Hinsu 6eda9ed273 Add compare_type optional attribute to CompareOp in HLO dialects
If unspecified, `compare_type` is FLOAT for float element types, SIGNED for signed element types, and UNSIGNED for unsigned element types. `compare_type` can also be TOTALORDER for float element types.

- Added import and export support for the attribute.
- Restricted legalization from HLO to TF to the default compare types.
- Updated existing usage of the CompareOp

PiperOrigin-RevId: 339099219
2020-10-26 12:58:29 -07:00
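A hypothetical sketch of the default rule described in that commit, with enum and function names invented for illustration; it only encodes the mapping stated above, not the real dialect definitions:

```cpp
#include <cstdio>

enum class ElementKind { kFloat, kSignedInt, kUnsignedInt };
enum class ComparisonType { kFloat, kSigned, kUnsigned, kTotalOrder };

// Default compare_type derived from the element type; TOTALORDER is only a
// valid explicit choice for floating-point element types.
ComparisonType DefaultCompareType(ElementKind kind) {
  switch (kind) {
    case ElementKind::kFloat:       return ComparisonType::kFloat;
    case ElementKind::kSignedInt:   return ComparisonType::kSigned;
    case ElementKind::kUnsignedInt: return ComparisonType::kUnsigned;
  }
  return ComparisonType::kSigned;  // unreachable
}

int main() {
  std::printf("%d\n", static_cast<int>(DefaultCompareType(ElementKind::kFloat)));
  return 0;
}
```
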
Richard Uhler f9843fabe1 Use InferTypeOpInterface for HLO_SliceOp.
This replaces the custom builder that constructed a slice op without an explicit return type.

PiperOrigin-RevId: 339058864
2020-10-26 09:54:13 -07:00
A. Unique TensorFlower 7a983ea389 Add folder for mhlo::pad
PiperOrigin-RevId: 337827560
2020-10-19 04:21:44 -07:00
A. Unique TensorFlower 4a18aa41ee Add folder to mhlo::round_nearest_afz
PiperOrigin-RevId: 337823786
2020-10-19 03:45:15 -07:00
Jacques Pienaar 27968619b7 Verify non-scalar inputs for HLO concat
XLA HLO concat does not accept scalars, so fail verification if this occurs. This avoids a segfault when accessing an empty output shape.

PiperOrigin-RevId: 337618167
2020-10-16 19:39:31 -07:00
A. Unique TensorFlower 51cd4200b6 Make LMHLO's Dot have the same power as MHLO's DotGeneral.
PiperOrigin-RevId: 337391565
2020-10-15 15:09:06 -07:00
A. Unique TensorFlower 05ee41baf8 Add folder for mhlo::scatter
PiperOrigin-RevId: 337274351
2020-10-15 03:26:05 -07:00
A. Unique TensorFlower 7367eac074 Add folder for mhlo::remainder
PiperOrigin-RevId: 335372628
2020-10-05 02:20:01 -07:00
A. Unique TensorFlower 4b1809784a Support collapse_slice_dims in the mhlo.gather->mhlo.slice canonicalizer
PiperOrigin-RevId: 334774763
2020-10-01 02:46:49 -07:00
Benjamin Kramer dfe64d3958 Implement InferShapedTypeOpInterface for mhlo.complex
Binary companion for 8bcd33e4b7

PiperOrigin-RevId: 334651523
2020-09-30 12:14:15 -07:00
Benjamin Kramer c8919f8419 Implement InferShapedTypeOpInterface and use inferReturnTypes for mhlo.imag and mhlo.real
This makes the lhlo lowering work with dynamic shapes.

PiperOrigin-RevId: 334553472
2020-09-30 02:02:26 -07:00
Robert Suderman 26ac5baae4 Make mhlo.sort return variadic results instead of a tuple
A tuple is only used by XLA's sort to return multiple outputs. MLIR supports multiple results directly, so switch from a tuple return to variadic results.

PiperOrigin-RevId: 334226937
2020-09-28 13:32:23 -07:00
Robert Suderman 233f1a8a1a Folders for mhlo.compare
Constant evaluation of compare for the case where inputs are either the same
variable or the values are constant.

PiperOrigin-RevId: 333342328
2020-09-23 12:03:48 -07:00
A. Unique TensorFlower b1fd4d27cf [MLIR][KernelGen] Implement InferShapedTypeOpInterface for `mhlo.compare/select`
PiperOrigin-RevId: 332227340
2020-09-17 07:10:10 -07:00
A. Unique TensorFlower 1880f87737 Integrate LLVM at llvm/llvm-project@e1669843f2
Updates LLVM usage to match
[e1669843f2aa](https://github.com/llvm/llvm-project/commit/e1669843f2aa)

PiperOrigin-RevId: 331987679
2020-09-16 05:58:08 -07:00
Hanhan Wang 1800f44a29 Add sqrt folder.
PiperOrigin-RevId: 331974344
2020-09-16 04:04:09 -07:00
Robert Suderman dff98f8cd5 Folder for mhlo.negate
PiperOrigin-RevId: 331592342
2020-09-14 11:38:44 -07:00
Robert Suderman d0c8d17373 Update mhlo.concatenate inferred return type for dynamics
MHLO concatenate should support dynamic inputs. It's possible that the output shape can be inferred from a dimension that is static in one input even though it is dynamic in another.

PiperOrigin-RevId: 331054181
2020-09-10 17:46:11 -07:00
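The inference rule can be sketched as follows, assuming a -1 sentinel for a dynamic size (the real MLIR dynamic-size marker differs); the helper name is invented for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

constexpr int64_t kDynamic = -1;  // assumption: stand-in for MLIR's dynamic-size marker

// For a dimension other than the concatenation dimension, any input that has
// a static size pins the result size; the result stays dynamic only if the
// dimension is dynamic in every input.
int64_t InferNonConcatDim(const std::vector<int64_t>& sizes_across_inputs) {
  for (int64_t size : sizes_across_inputs) {
    if (size != kDynamic) return size;
  }
  return kDynamic;
}

int main() {
  assert(InferNonConcatDim({kDynamic, 8, kDynamic}) == 8);
  assert(InferNonConcatDim({kDynamic, kDynamic}) == kDynamic);
  return 0;
}
```
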
Robert Suderman 6eefb07767 Fix zero attr mistake
PiperOrigin-RevId: 331050555
2020-09-10 17:21:39 -07:00
Robert Suderman 81d51d810b Bitwise and/or/xor folders
Includes folds for and/or/xor with identical inputs, single-argument folds when one operand is a constant of all ones or all zeros, and folds when both inputs are constant.

PiperOrigin-RevId: 330610858
2020-09-08 16:27:13 -07:00
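The underlying bitwise identities can be checked with plain C++; this is only a worked example of the folds listed above, not the MLIR folder code:

```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint8_t x = 0b10110100;
  assert((x & x) == x);        // and(x, x) -> x
  assert((x | x) == x);        // or(x, x)  -> x
  assert((x ^ x) == 0);        // xor(x, x) -> 0
  assert((x & 0) == 0);        // and with all-zeros -> all-zeros
  assert((x | 0xFF) == 0xFF);  // or with all-ones   -> all-ones
  // Fully constant operands fold directly to a constant result.
  assert((0b1100 & 0b1010) == 0b1000);
  return 0;
}
```
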
A. Unique TensorFlower 73b4861f2c Canonicalize mhlo.gather to mhlo.slice if it has a single set of constant indices
PiperOrigin-RevId: 330380755
2020-09-07 07:28:46 -07:00
A. Unique TensorFlower 65b0613491 Integrate LLVM at llvm/llvm-project@202766947e
Updates LLVM usage to match
[202766947edb](https://github.com/llvm/llvm-project/commit/202766947edb)

PiperOrigin-RevId: 329673065
2020-09-02 02:27:52 -07:00
Robert Suderman 7c93352a40 Scalar / Trivial folding for mhlo.select
This covers the case where the predicate is a splat or the on_true/on_false
values are the same.

PiperOrigin-RevId: 329622785
2020-09-01 18:34:12 -07:00
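A scalar C++ illustration of the two trivial cases mentioned above (illustrative only; the real op works on tensors with splat predicates):

```cpp
#include <cassert>

// Scalar stand-in for mhlo.select semantics.
int Select(bool pred, int on_true, int on_false) {
  return pred ? on_true : on_false;
}

int main() {
  // Splat predicate: the result is simply on_true (or on_false) wholesale.
  assert(Select(true, 7, 9) == 7);
  assert(Select(false, 7, 9) == 9);
  // on_true == on_false: the predicate is irrelevant and the op folds away.
  assert(Select(true, 5, 5) == Select(false, 5, 5));
  return 0;
}
```
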
A. Unique TensorFlower a622bf479b Fix mhlo::SliceOp::fold to not crash on unknown shapes
PiperOrigin-RevId: 329504383
2020-09-01 07:37:36 -07:00
Uday Bondhugula 94296bb7ec PR #42509: [MLIR] Add folder for mhlo get_dimension_size
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/42509

Add folder for mhlo GetDimensionSizeOp (`mhlo.get_dimension_size`).
`get_dimension_size` folds to a constant when the corresponding tensor
dimension size is statically known / constant.
Copybara import of the project:

--
5994915525ec2e932125aa1f133ce2260ba100af by Uday Bondhugula <uday@polymagelabs.com>:

[MLIR] Add folder for mhlo get_dimension_size

Add folder for mhlo GetDimensionSizeOp. get_dimension_size folds to a
constant when the corresponding tensor dimension size is statically
known / constant.

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/42509 from polymage-labs:get_dimension_size_fold 5994915525ec2e932125aa1f133ce2260ba100af
PiperOrigin-RevId: 328222517
2020-08-24 15:36:30 -07:00
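A hypothetical sketch of the fold, assuming a -1 sentinel for a dynamic dimension size; the function name and shape representation are invented for illustration:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

constexpr int64_t kDynamic = -1;  // assumption: stand-in for the dynamic-size marker

// get_dimension_size can be replaced by a constant only when the queried
// dimension is statically known; otherwise there is nothing to fold.
std::optional<int64_t> FoldGetDimensionSize(const std::vector<int64_t>& shape,
                                            int64_t dim) {
  if (dim < 0 || dim >= static_cast<int64_t>(shape.size())) return std::nullopt;
  if (shape[dim] == kDynamic) return std::nullopt;  // dynamic: no fold
  return shape[dim];                                // static: fold to a constant
}
```
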
A. Unique TensorFlower e6fa003bf2 Integrate LLVM at llvm/llvm-project@b6d9add71b
Updates LLVM usage to match
[b6d9add71b1a](https://github.com/llvm/llvm-project/commit/b6d9add71b1a)

PiperOrigin-RevId: 325589103
2020-08-08 04:35:25 -07:00
Andy Ly 53fdda7f3e Update mhlo.constant to use a custom assembly format instead of a custom printer and parser (NFC).
PiperOrigin-RevId: 325560779
2020-08-07 22:23:06 -07:00
Lucy Fox d742477c02 Verify that MHLO DynamicUpdateSlice start indices have matching element types.
HLO requires that the element types match for all start index parameters. Right now we don't catch this invalid case until export, so add a check in the verifier to catch it sooner.

This also requires a small tweak to the TF InplaceUpdate op lowering.

PiperOrigin-RevId: 325463796
2020-08-07 22:21:40 -07:00
Lucy Fox cd22ecd136 Relax DynamicBroadcastInDim verifier when dimensions are dynamic.
For input and output dimensions that must match, we shouldn't fail when one dim is dynamic and the other is static; that alone is insufficient information to conclude a dimension mismatch.

PiperOrigin-RevId: 325344738
2020-08-07 22:18:38 -07:00
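The relaxed check can be sketched roughly as follows (illustrative C++, not the actual verifier; -1 stands in for the dynamic-size marker):

```cpp
#include <cassert>
#include <cstdint>

constexpr int64_t kDynamic = -1;  // assumption: stand-in for the dynamic-size marker

// Two corresponding dimensions are a provable mismatch only when both are
// static and differ; if either side is dynamic the verifier must not reject.
bool DimsDefinitelyMismatch(int64_t operand_dim, int64_t result_dim) {
  return operand_dim != kDynamic && result_dim != kDynamic &&
         operand_dim != result_dim;
}

int main() {
  assert(DimsDefinitelyMismatch(2, 3));          // static vs static, unequal
  assert(!DimsDefinitelyMismatch(2, 2));         // static vs static, equal
  assert(!DimsDefinitelyMismatch(kDynamic, 3));  // cannot conclude a mismatch
  return 0;
}
```
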
Andy Ly c340367702 Add canonicalization for unpacking and repacking the same tuple (e.g. tuple -> get_tuple_element -> tuple).
Such unpacking and repacking of tuples may be generated when modifying tuple arguments or results.

PiperOrigin-RevId: 325162694
2020-08-05 21:38:02 -07:00
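The identity behind this canonicalization, shown with std::tuple purely as an analogy for the MHLO tuple ops:

```cpp
#include <cassert>
#include <tuple>

int main() {
  std::tuple<int, float> t{42, 1.5f};
  // Analogue of: tuple(get_tuple_element(t, 0), get_tuple_element(t, 1)).
  auto repacked = std::make_tuple(std::get<0>(t), std::get<1>(t));
  // Unpacking every element and repacking yields a value identical to the
  // original, so the rebuilt tuple can be replaced by t itself.
  assert(repacked == t);
  (void)repacked;
  return 0;
}
```
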
Mehdi Amini cd01bb4c4e More cleanup in mlir-hlo to prepare for the standalone build
Shuffle files around, use TableGen to register passes, and introduce
a `mlir-hlo-opt.cpp` file to hold the main entry point of the -opt tool
and stop relying on static registration for dialect/passes.

PiperOrigin-RevId: 323674455
2020-08-03 19:28:00 -07:00
Thomas Joerg 739758f9cc Integrate LLVM at llvm/llvm-project@eed333149d
Updates LLVM usage to match
[eed333149d17](https://github.com/llvm/llvm-project/commit/eed333149d17)

PiperOrigin-RevId: 323354988
2020-08-03 19:27:25 -07:00
Tres Popp 63d62b7952 Change cast to dyn_cast in hlo::ReshapeOp's verification.
With cast, a failing verification results in an assertion error rather than returning a failing status.

PiperOrigin-RevId: 322317937
2020-07-30 22:34:36 +00:00
Stephan Herhut 7a6adc6a84 Add canonicalization patterns for dynamic_broadcast_in_dim where the target shape is the shape of the operand.
PiperOrigin-RevId: 321312182
2020-07-30 22:34:06 +00:00
Robert Suderman 06ae59074f Fold xla iota across a 1-length dimension into a zero value
Iota across a length-1 dimension is just a zero constant; fold it accordingly.

PiperOrigin-RevId: 320443468
2020-07-30 22:33:43 +00:00
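A worked example of why that fold is valid (plain C++ computing iota values by hand, not the MLIR folder):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Iota over a d0 x d1 tensor fills each element with its index along iota_dim.
std::vector<int64_t> Iota2D(int64_t d0, int64_t d1, int iota_dim) {
  std::vector<int64_t> result(d0 * d1);
  for (int64_t i = 0; i < d0; ++i)
    for (int64_t j = 0; j < d1; ++j)
      result[i * d1 + j] = (iota_dim == 0) ? i : j;
  return result;
}

int main() {
  // Iota along dimension 1 of a 4x1 tensor: the only index along that
  // dimension is 0, so every element is 0 and the op folds to a zero constant.
  for (int64_t v : Iota2D(4, 1, /*iota_dim=*/1)) assert(v == 0);
  return 0;
}
```
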
Uday Bondhugula de0578b4f9 PR #40745: [MLIR] Add constant folder for xla_hlo.broadcast_in_dim op
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40745

Fold broadcast_in_dim op if the operand is the result of a tensor splat.
Copybara import of the project:

--
26c9f631448b8d6ffd20ece39ea8d4132b5550c7 by Uday Bondhugula <uday@polymagelabs.com>:

[MLIR] Add constant folder for xla_hlo.broadcast_in_dim op

Fold broadcast_in_dim op if the operand is the result of a tensor
splat.

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40745 from polymage-labs:broadcast_in_dim_fold 26c9f631448b8d6ffd20ece39ea8d4132b5550c7
PiperOrigin-RevId: 320365164
2020-07-30 22:33:34 +00:00
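A worked sketch of the splat fold described in this PR (illustrative C++; the helper is invented and ignores the broadcast_dimensions bookkeeping, which does not affect a splat):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Broadcasting a splat (a tensor whose elements are all the same constant)
// only changes the shape, so the result is again a splat of that constant.
std::vector<float> BroadcastSplat(float splat_value, int64_t num_result_elements) {
  return std::vector<float>(num_result_elements, splat_value);
}

int main() {
  auto result = BroadcastSplat(3.5f, /*num_result_elements=*/2 * 3);
  for (float v : result) assert(v == 3.5f);
  return 0;
}
```
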