Commit Graph

135 Commits

Author SHA1 Message Date
A. Unique TensorFlower d77c9ad6fa [MLIR][CHLO] Add `is_inf`, `is_pos_inf`, and `is_neg_inf` to CHLO dialect
Also add the respective lowerings to MHLO.

PiperOrigin-RevId: 354101955
2021-01-27 09:00:56 -08:00
Rahul Joshi 44deae2aa1 [MLIR:HLO] Extend AllReduce to support multiple inputs and results (to model tuples).
- Instead of SameTypeOperands, add custom verification to check if operands and
  results pairwise have the same type.

PiperOrigin-RevId: 353986341
2021-01-26 17:25:22 -08:00
Jacques Pienaar a7e645f37e Fix incorrect include
PiperOrigin-RevId: 352820426
2021-01-20 10:24:41 -08:00
Tres Popp ba0346b071 Integrate LLVM at llvm/llvm-project@96ef4f307d
Updates LLVM usage to match
[96ef4f307df2](https://github.com/llvm/llvm-project/commit/96ef4f307df2)

PiperOrigin-RevId: 352786460
2021-01-20 07:09:47 -08:00
A. Unique TensorFlower ec5f5667e1 [MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
PiperOrigin-RevId: 352773540
2021-01-20 05:31:15 -08:00
A. Unique TensorFlower 0e85b4d511 [MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
PiperOrigin-RevId: 352604725
2021-01-19 10:51:41 -08:00
A. Unique TensorFlower 791d5afd28 [MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
PiperOrigin-RevId: 351989552
2021-01-15 05:26:57 -08:00
Alexander Belyaev ecf1bf5132 [KERNEL_GEN] Add a canonicalization pattern to drop a redundant dynamic reshape.
PiperOrigin-RevId: 351141868
2021-01-11 06:38:03 -08:00
Alexander Belyaev 6c42f54298 [KERNEL_GEN] Restrict broadcast -> reshape canonicalization to identity dims.
This is needed to avoid the case when the broadcast_in_dim also performs a permutation.

PiperOrigin-RevId: 350650342
2021-01-07 15:30:28 -08:00
Alexander Belyaev 095dc28e5c [KERNEL_GEN] Add canonicalization pattern to drop a redundant broadcast op.
PiperOrigin-RevId: 350105790
2021-01-05 03:01:00 -08:00
A. Unique TensorFlower c4accdcc41 Integrate LLVM at llvm/llvm-project@1b97cdf885
Updates LLVM usage to match
[1b97cdf885d6](https://github.com/llvm/llvm-project/commit/1b97cdf885d6)

PiperOrigin-RevId: 348587513
2020-12-21 23:49:18 -08:00
Smit Hinsu 8d051723c0 Use InferTypeOpInterface for HLO AbsOp and fix result shape inference
Shape inference for ops with complex element types needs to use the element type of the complex type as the result element type, not the full operand type.

Before:
"mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xtensor<4xcomplex<f32>>>
After:
"mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xf32>
PiperOrigin-RevId: 348123967
2020-12-17 17:37:07 -08:00
Smit Hinsu 737d15ded5 Handle operands with zero elements in HLO PadOp folder
PiperOrigin-RevId: 348034821
2020-12-17 09:27:36 -08:00
River Riddle 6b439f7eee [mlir][NFC] Replace usages of mlir/IR/StandardTypes.h with mlir/IR/BuiltinTypes.h
StandardTypes.h was moved to BuiltinTypes.h and is being removed.

PiperOrigin-RevId: 347115952
2020-12-11 19:01:25 -08:00
Smit Hinsu ab6ee11813 Fix folding of HLO SliceOp with zero elements
This was causing division by zero in this case.

PiperOrigin-RevId: 346920942
2020-12-10 20:22:48 -08:00
Smit Hinsu bc7b6374c8 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit-width.

PiperOrigin-RevId: 345902167
2020-12-05 18:55:41 -08:00
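
A minimal standalone sketch of the pitfall and the fix described in the commit above (hypothetical helper names, not the actual XLA kernel code):

#include <cassert>
#include <cstdint>

// Illustration only: pack two s32 seeds into one u64.
uint64_t PackSeedsDirect(int32_t lo, int32_t hi) {
  // Casting s32 -> u64 directly sign-extends: a negative seed fills the
  // upper 32 bits with 1s and clobbers the other seed's half.
  return static_cast<uint64_t>(lo) | (static_cast<uint64_t>(hi) << 32);
}

uint64_t PackSeedsFixed(int32_t lo, int32_t hi) {
  // Cast to the unsigned type of the same bit-width first, then widen.
  uint64_t ulo = static_cast<uint32_t>(lo);
  uint64_t uhi = static_cast<uint32_t>(hi);
  return ulo | (uhi << 32);
}

int main() {
  assert(PackSeedsFixed(-1, 0) == 0x00000000FFFFFFFFull);
  assert(PackSeedsDirect(-1, 0) == 0xFFFFFFFFFFFFFFFFull);  // sign extension leaked into the high half
  return 0;
}
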
Phoenix Meadowlark c33bdcbd03 Remove fold of `mhlo.compare(%arg0, %arg0)` for floating types.
Two tensors having the same SSA-value isn't sufficient for equality for floating types, as `NaN != NaN`. As written this causes `tf.IsNan` to [miscompile](https://github.com/google/iree/issues/4061).

PiperOrigin-RevId: 345730640
2020-12-04 12:15:02 -08:00
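
A one-file illustration of why that fold is unsound for floating-point element types (standard IEEE-754 behavior, not code from the repository):

#include <cassert>
#include <cmath>

int main() {
  float x = std::nanf("");
  // The same SSA value compared with itself is not necessarily equal:
  // NaN compares unequal to everything, including itself.
  assert(!(x == x));
  assert(x != x);
  // Hence compare(%arg0, %arg0) cannot be folded to a constant for
  // floating-point types; the fold remains valid for integers.
  return 0;
}
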
Smit Hinsu 9bd1995f90 Legalize XlaReplicaId to HLO replica-id op
Also, define shape inference function for HLO replica-id op.

PiperOrigin-RevId: 345714342
2020-12-04 11:04:40 -08:00
A. Unique TensorFlower e87d53742b Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit-width.

PiperOrigin-RevId: 345618958
2020-12-04 00:04:10 -08:00
Smit Hinsu 9456af5880 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit-width.

PiperOrigin-RevId: 345605910
2020-12-03 22:09:56 -08:00
Rahul Joshi dbbdfea95b [MLIR:HLO] Generate enum decls for HLO and LHLO GPU dialects.
- Split out enum definitions in hlo dialect into a separate .td file (similar to structs)
  and generate enum decl/defs for these enums.
- Also split out the LHLO GPU enums into a separate .td file and generate enum
  decl/defs for these enums as well.
- Remove unused dialect from ConvolutionAttributes and generate lhlo_gpu enums.
- Add appropriate namespace for all the enums.

PiperOrigin-RevId: 345277240
2020-12-02 11:39:23 -08:00
A. Unique TensorFlower 1b711670bc Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit-width.

PiperOrigin-RevId: 345239817
2020-12-02 08:42:07 -08:00
Smit Hinsu 733fc6d032 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, producing leading 1s in the representation, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit-width.

PiperOrigin-RevId: 345227848
2020-12-02 07:24:10 -08:00
Adrian Kuegel d14c63da54 Add a canonicalization pattern to remove redundant dynamic_reshapes.
PiperOrigin-RevId: 344517381
2020-11-27 04:46:50 -08:00
A. Unique TensorFlower 7f239c7ba2 Add canonicalizer for Reshape(Broadcast(X)) pattern when it is an identity sequence
PiperOrigin-RevId: 343251257
2020-11-19 02:32:45 -08:00
Tres Popp 1dffa62fe9 Fold away shape.shape_of(mhlo.dynamic_reshape(inp, shape))
This specific pattern can be replaced with the shape
passed to dynamic_reshape. It is implemented as a
canonicalization on mhlo.dynamic_reshape to fit into
the existing canonicalization infrastructure.

PiperOrigin-RevId: 342009365
2020-11-12 02:48:26 -08:00
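
A rough sketch of what such a canonicalization pattern can look like (include paths and accessor spellings are assumptions for illustration, not the actual implementation):

#include "mlir-hlo/Dialect/mhlo/IR/hlo_ops.h"
#include "mlir/Dialect/Shape/IR/Shape.h"
#include "mlir/IR/PatternMatch.h"

namespace {
// Rewrites shape.shape_of(mhlo.dynamic_reshape(%inp, %shape)) to %shape:
// by construction, the reshape's result shape is its shape operand.
struct ShapeOfDynamicReshape
    : public mlir::OpRewritePattern<mlir::shape::ShapeOfOp> {
  using OpRewritePattern::OpRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(mlir::shape::ShapeOfOp op,
                  mlir::PatternRewriter &rewriter) const override {
    auto reshape = op.getOperation()->getOperand(0)
                       .getDefiningOp<mlir::mhlo::DynamicReshapeOp>();
    if (!reshape) return mlir::failure();
    mlir::Value shape = reshape.getOperation()->getOperand(1);
    // Only rewrite when the extent-tensor types line up exactly.
    if (shape.getType() != op.getResult().getType()) return mlir::failure();
    rewriter.replaceOp(op, shape);
    return mlir::success();
  }
};
}  // namespace
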
Smit Hinsu 4ef12aa000 Update GetDimensionSize and SetDimensionSize ops to use I64 attribute for dimension
This is to match with HLO semantics and general dimension semantics in MLIR.

Also,

* Define minimal verifier for these ops.
* Add folder for SetDimensionSize op on static shaped dimension.
* Fix assumption of ranked shape in GetDimensionSize op.

PiperOrigin-RevId: 341150923
2020-11-06 18:03:04 -08:00
Dmitry Volodin 1821c69910 PR #44405: Fix typos in compiler directory
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44405

Splitting #43857 by top-level directories.
Copybara import of the project:

--
fa5da7d5478649d11321dcac9f867b0a57e4798a by Dmitry Volodin <mr.molkree@gmail.com>:

fix typos in compiler dir

--
4d3c9f047f7ecb8ab299f1bf28a86fd39096eee7 by Dmitry Volodin <mr.molkree@gmail.com>:

fix one test as "atleast" in it comes from Bazel

--
9440ebaaa9fc4a735f7f72f0c8f0de4ec58afbd6 by Dmitry Volodin <mr.molkree@gmail.com>:

a bit more

PiperOrigin-RevId: 340819994
2020-11-05 03:31:54 -08:00
Alexander Belyaev 3d930d08c2 [HLO] Delete LHLO memref cast ops and migrate to STD ones.
PiperOrigin-RevId: 340663578
2020-11-04 09:26:34 -08:00
Richard Uhler 82031b356c Improve error message for improperly shaped slice indices.
The slice indices must be rank-1 and have the same number of elements as the
rank of the operand. Give reasonable error messages for violations of these
requirements instead of a misleading error message saying that the types of the
indices don't all match.

PiperOrigin-RevId: 340660822
2020-11-04 09:10:51 -08:00
Thomas Joerg 3a6580bf75 Integrate LLVM at llvm/llvm-project@4d11daa659
Updates LLVM usage to match
[4d11daa659a1](https://github.com/llvm/llvm-project/commit/4d11daa659a1)

PiperOrigin-RevId: 339631220
2020-10-29 03:02:38 -07:00
Smit Hinsu 6eda9ed273 Add compare_type optional attribute to CompareOp in HLO dialects
If unspecified, `compare_type` is FLOAT for float element types, SIGNED for signed element types and UNSIGNED for unsigned element types. compare_type can be TOTALORDER for float element types.

- Added import and export support for the attribute.
- Restricted legalization from HLO to TF to the default compare types.
- Updated existing usage of the CompareOp

PiperOrigin-RevId: 339099219
2020-10-26 12:58:29 -07:00
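
A tiny sketch of the default-selection rule spelled out above (hypothetical enum and helper, not the dialect's API):

// Illustration only; mirrors the default rule described in the commit message.
enum class ComparisonType { FLOAT, SIGNED, UNSIGNED, TOTALORDER };

ComparisonType DefaultComparisonType(bool is_float, bool is_signed_int) {
  if (is_float) return ComparisonType::FLOAT;  // TOTALORDER must be requested explicitly
  return is_signed_int ? ComparisonType::SIGNED : ComparisonType::UNSIGNED;
}
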
Richard Uhler f9843fabe1 Use InferTypeOpInterface for HLO_SliceOp.
Instead of having a custom builder to construct a slice op without an explicit
return type.

PiperOrigin-RevId: 339058864
2020-10-26 09:54:13 -07:00
Roman Dzhabarov ae00ae487c [mlir] Simplify DRR matching patterns with equal operands for operators.
The https://reviews.llvm.org/D89254 diff introduced implicit matching between same-name arguments. Modify usages accordingly.

PiperOrigin-RevId: 338090110
2020-10-20 10:47:09 -07:00
A. Unique TensorFlower 7a983ea389 Add folder for mhlo::pad
PiperOrigin-RevId: 337827560
2020-10-19 04:21:44 -07:00
A. Unique TensorFlower 4a18aa41ee Add folder to mhlo::round_nearest_afz
PiperOrigin-RevId: 337823786
2020-10-19 03:45:15 -07:00
Jacques Pienaar 27968619b7 Verify non-scalar inputs for HLO concat
XLA HLO concat does not accept scalars, so fail verification if this occurs. Avoids segfault when accessing an empty output shape.

PiperOrigin-RevId: 337618167
2020-10-16 19:39:31 -07:00
A. Unique TensorFlower 51cd4200b6 Make LMHLO's Dot have the same power as MHLO's DotGeneral.
PiperOrigin-RevId: 337391565
2020-10-15 15:09:06 -07:00
A. Unique TensorFlower 05ee41baf8 Add folder for mhlo::scatter
PiperOrigin-RevId: 337274351
2020-10-15 03:26:05 -07:00
Rahul Joshi f6b4e6758a Add GPU specific LMHLO level ops
- Introduce operations in a new lmhlo_gpu dialect that map to GPU library function calls
  in the XLA:GPU backend.
- Add basic unit tests as well.

PiperOrigin-RevId: 337132166
2020-10-14 11:23:55 -07:00
A. Unique TensorFlower 7367eac074 Add folder for mhlo::remainder
PiperOrigin-RevId: 335372628
2020-10-05 02:20:01 -07:00
A. Unique TensorFlower 4b1809784a Support collapse_slice_dims in the mhlo.gather->mhlo.slice canonicalizer
PiperOrigin-RevId: 334774763
2020-10-01 02:46:49 -07:00
Benjamin Kramer dfe64d3958 Implement InferShapedTypeOpInterface for mhlo.complex
Binary companion for 8bcd33e4b7

PiperOrigin-RevId: 334651523
2020-09-30 12:14:15 -07:00
Benjamin Kramer c8919f8419 Implement InferShapedTypeOpInterface and use inferReturnTypes for mhlo.imag and mhlo.real
This makes the lhlo lowering work with dynamic shapes.

PiperOrigin-RevId: 334553472
2020-09-30 02:02:26 -07:00
Robert Suderman 26ac5baae4 Make mhlo.sort return variadic results instead of a tuple
Tuples are only used in XLA's sort to return multiple results. MLIR supports
multiple results natively, so switch to a variadic return.

PiperOrigin-RevId: 334226937
2020-09-28 13:32:23 -07:00
Robert Suderman 233f1a8a1a Folders for mhlo.compare
Constant evaluation of compare for the cases where both inputs are the same
value or both inputs are constants.

PiperOrigin-RevId: 333342328
2020-09-23 12:03:48 -07:00
A. Unique TensorFlower b1fd4d27cf [MLIR][KernelGen] Implement InferShapedTypeOpInterface for `mhlo.compare/select`
PiperOrigin-RevId: 332227340
2020-09-17 07:10:10 -07:00
A. Unique TensorFlower 1880f87737 Integrate LLVM at llvm/llvm-project@e1669843f2
Updates LLVM usage to match
[e1669843f2aa](https://github.com/llvm/llvm-project/commit/e1669843f2aa)

PiperOrigin-RevId: 331987679
2020-09-16 05:58:08 -07:00
Hanhan Wang 1800f44a29 Add sqrt folder.
PiperOrigin-RevId: 331974344
2020-09-16 04:04:09 -07:00
Robert Suderman dff98f8cd5 Folder for mhlo.negate
PiperOrigin-RevId: 331592342
2020-09-14 11:38:44 -07:00
Robert Suderman d0c8d17373 Update mhlo.concatenate inferred return type for dynamics
MHLO concatenate should support dynamic inputs. It's possible that the output
shape can be inferred from a dimension that is dynamic in one input but
static in another.

PiperOrigin-RevId: 331054181
2020-09-10 17:46:11 -07:00
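
An illustrative version of that inference rule on plain shape vectors (kDynamic stands in for a dynamic dimension; this is not the actual InferTypeOpInterface code):

#include <cstdint>
#include <vector>

constexpr int64_t kDynamic = -1;  // stand-in for a dynamic dimension size

// Along the concat axis, sizes add up (dynamic if any input is dynamic there);
// along every other axis, a static size from any input refines a dynamic one.
std::vector<int64_t> InferConcatShape(
    const std::vector<std::vector<int64_t>>& input_shapes, int64_t axis) {
  std::vector<int64_t> result = input_shapes.front();
  for (size_t i = 1; i < input_shapes.size(); ++i) {
    const std::vector<int64_t>& shape = input_shapes[i];
    for (int64_t d = 0; d < static_cast<int64_t>(shape.size()); ++d) {
      if (d == axis) {
        result[d] = (result[d] == kDynamic || shape[d] == kDynamic)
                        ? kDynamic
                        : result[d] + shape[d];
      } else if (result[d] == kDynamic) {
        result[d] = shape[d];
      }
    }
  }
  return result;
}
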
Robert Suderman 6eefb07767 Fix zero attr mistake
PiperOrigin-RevId: 331050555
2020-09-10 17:21:39 -07:00
Robert Suderman 81d51d810b Bitwise and/or/xor folders
Includes folds for and/or/xor when both inputs are the same value, single-argument
folds for constant all-ones/all-zeros operands, and folds when both inputs are constant.

PiperOrigin-RevId: 330610858
2020-09-08 16:27:13 -07:00
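
The identities those folders rely on, checked on plain unsigned integers (illustration only):

#include <cassert>
#include <cstdint>

int main() {
  const uint32_t x = 0xDEADBEEF, zeros = 0u, ones = ~0u;
  // Same-input folds.
  assert((x & x) == x);
  assert((x | x) == x);
  assert((x ^ x) == zeros);
  // Constant all-zeros / all-ones single-argument folds.
  assert((x & zeros) == zeros);
  assert((x | zeros) == x);
  assert((x & ones) == x);
  assert((x | ones) == ones);
  return 0;
}
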
Mehdi Amini 9d4273b5a7 Delete mhlo static dialect registration
This is obsolete now.

PiperOrigin-RevId: 330605210
2020-09-08 15:57:36 -07:00
A. Unique TensorFlower 73b4861f2c Canonicalize mhlo.gather to mhlo.slice if it has a single set of constant indices
PiperOrigin-RevId: 330380755
2020-09-07 07:28:46 -07:00
A. Unique TensorFlower 65b0613491 Integrate LLVM at llvm/llvm-project@202766947e
Updates LLVM usage to match
[202766947edb](https://github.com/llvm/llvm-project/commit/202766947edb)

PiperOrigin-RevId: 329673065
2020-09-02 02:27:52 -07:00
Robert Suderman 7c93352a40 Scalar / Trivial folding for mhlo.select
This covers the case where the predicate is a splat or the on_true/on_false
values are the same.

PiperOrigin-RevId: 329622785
2020-09-01 18:34:12 -07:00
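
The two trivial cases, sketched on plain values rather than MLIR attributes (hypothetical helper for illustration):

#include <optional>
#include <vector>

// Returns the folded value if a trivial case applies, otherwise nullopt.
std::optional<std::vector<float>> FoldSelect(
    std::optional<bool> splat_predicate,      // set when the predicate is a splat
    const std::vector<float>& on_true,
    const std::vector<float>& on_false) {
  if (on_true == on_false) return on_true;    // both branches identical
  if (splat_predicate.has_value())            // splat predicate picks one branch
    return *splat_predicate ? on_true : on_false;
  return std::nullopt;                        // nothing to fold
}
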
A. Unique TensorFlower a622bf479b Fix mhlo::SliceOp::fold to not crash on unknown shapes
PiperOrigin-RevId: 329504383
2020-09-01 07:37:36 -07:00
Uday Bondhugula 94296bb7ec PR #42509: [MLIR] Add folder for mhlo get_dimension_size
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/42509

Add folder for mhlo GetDimensionSizeOp (`mhlo.get_dimension_size`).
`get_dimension_size` folds to a constant when the corresponding tensor
dimension size is statically known / constant.
Copybara import of the project:

--
5994915525ec2e932125aa1f133ce2260ba100af by Uday Bondhugula <uday@polymagelabs.com>:

[MLIR] Add folder for mhlo get_dimension_size

Add folder for mhlo GetDimensionSizeOp. get_dimension_size folds to a
constant when the corresponding tensor dimension size is statically
known / constant.

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/42509 from polymage-labs:get_dimension_size_fold 5994915525ec2e932125aa1f133ce2260ba100af
PiperOrigin-RevId: 328222517
2020-08-24 15:36:30 -07:00
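
A sketch of the fold condition on a plain shape array (kDynamic models a dynamic dimension; not the actual folder implementation):

#include <cstdint>
#include <optional>

constexpr int64_t kDynamic = -1;  // stand-in for a dynamic dimension size

// get_dimension_size folds to a constant only when the queried dimension of a
// ranked operand shape is statically known.
std::optional<int32_t> FoldGetDimensionSize(const int64_t* shape, int64_t rank,
                                            int64_t dim) {
  if (shape == nullptr || dim < 0 || dim >= rank) return std::nullopt;
  if (shape[dim] == kDynamic) return std::nullopt;
  return static_cast<int32_t>(shape[dim]);
}
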
Mehdi Amini 73b5a44f33 Update third_party/tensorflow/compiler/mlir/tensorflow/utils/... to not depend on the global Dialect Registry (NFC)
PiperOrigin-RevId: 328171679
2020-08-24 10:59:22 -07:00
Uday Bondhugula 282dba6d38 PR #42508: [MLIR] Erase dead lmhlo.constant ops
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/42508

An lmhlo.constant op on a memref that is locally allocated and has
no users other than deallocs can be deleted. Add a canonicalization
pattern for this.
Copybara import of the project:

--
8758c409a15f567e7cb8e1077faa020f5705c85a by Uday Bondhugula <uday@polymagelabs.com>:

[MLIR] Erase dead lmhlo.constant ops

An lmhlo.constant op on a memref that is locally allocated and has
no users other than deallocs can be deleted. Add a
canonicalization pattern for this.

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/42508 from polymage-labs:lhlo_constant_erase 8758c409a15f567e7cb8e1077faa020f5705c85a
PiperOrigin-RevId: 328042416
2020-08-23 12:28:54 -07:00
Jacques Pienaar 5dac76f4af Add chlo.constant_like op which splats a constant to shape of operand
This allows specifying a constant whose shape is only known when the operand's shape is. Also use it to update the tf.Acos legalization.

PiperOrigin-RevId: 325860604
2020-08-11 14:54:48 -07:00
A. Unique TensorFlower e6fa003bf2 Integrate LLVM at llvm/llvm-project@b6d9add71b
Updates LLVM usage to match
[b6d9add71b1a](https://github.com/llvm/llvm-project/commit/b6d9add71b1a)

PiperOrigin-RevId: 325589103
2020-08-08 04:35:25 -07:00
Andy Ly 53fdda7f3e Update mhlo.constant to use a custom assembly format instead of a custom printer and parser (NFC).
PiperOrigin-RevId: 325560779
2020-08-07 22:23:06 -07:00
Lucy Fox d742477c02 Verify that MHLO DynamicUpdateSlice start indices have matching element types.
HLO requires that the element types match for all start index parameters. Right now we don't catch this invalid case until export, so add a check in the verifier to catch it sooner.

This also requires a small tweak to the TF InplaceUpdate op lowering.

PiperOrigin-RevId: 325463796
2020-08-07 22:21:40 -07:00
Lucy Fox cd22ecd136 Relax DynamicBroadcastInDim verifier when dimensions are dynamic.
For input and output dimensions which must match, we shouldn't fail in the case where one dim is dynamic and the other is static. This is insufficient information to conclude a dimension mismatch.

PiperOrigin-RevId: 325344738
2020-08-07 22:18:38 -07:00
A. Unique TensorFlower a68a16cdc7 [MLIR][XLA] Allow for choice of safe/unsafe variant in broadcast utils
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.

PiperOrigin-RevId: 325228073
2020-08-07 22:16:11 -07:00
Mehdi Amini 701312720c Add CMake files and lit configurations, enough for `ninja check-mlir-hlo` to pass on all the tests
PiperOrigin-RevId: 325172984
2020-08-07 22:14:34 -07:00
Andy Ly c340367702 Add canonicalization for unpacking and repacking the same tuple (e.g. tuple -> get_tuple_element -> tuple).
Such unpacking and repacking of tuples may be generated when modifying tuple arguments or results.

PiperOrigin-RevId: 325162694
2020-08-05 21:38:02 -07:00
A. Unique TensorFlower 4372124362 [MLIR][XLA] Allow for choice of safe/unsafe variant in broadcast utils
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.

PiperOrigin-RevId: 325079842
2020-08-05 12:43:29 -07:00
A. Unique TensorFlower 37c36a4389 [MLIR][XLA] Allow for choice of safe/unsafe variant in broadcast utils
Create safe or unsafe variants of `shape.broadcast` depending on the context.
The representation by means of an extent tensor is only legal if the operands
are known to be broadcastable. Currently, there is no use in a safe context in
the codebase but it will be used for shape inference eventually.

PiperOrigin-RevId: 325056915
2020-08-05 11:09:23 -07:00
Mehdi Amini cd01bb4c4e More cleanup in mlir-hlo to prepare for the standalone build
Shuffle files around, use TableGen to register passes, and introduce
a `mlir-hlo-opt.cpp` file to hold the main entry point of the -opt tool
and stop relying on static registration for dialect/passes.

PiperOrigin-RevId: 323674455
2020-08-03 19:28:00 -07:00
Thomas Joerg 739758f9cc Integrate LLVM at llvm/llvm-project@eed333149d
Updates LLVM usage to match
[eed333149d17](https://github.com/llvm/llvm-project/commit/eed333149d17)

PiperOrigin-RevId: 323354988
2020-08-03 19:27:25 -07:00
Tres Popp 63d62b7952 Change cast to dyn_cast in hlo::ReshapeOp's verification.
With cast, a failing verification results in an assertion error rather than returning a failing status.

PiperOrigin-RevId: 322317937
2020-07-30 22:34:36 +00:00
Stephan Herhut 7a6adc6a84 Add canonicalization patterns for dynamic_broadcast_in_dim where the target shape is the shape of the operand.
PiperOrigin-RevId: 321312182
2020-07-30 22:34:06 +00:00
Robert Suderman 06ae59074f Fold xla iota across a 1-length dimension into a zero value
An iota across a length-1 dimension is just a zero constant, so fold it into one.

PiperOrigin-RevId: 320443468
2020-07-30 22:33:43 +00:00
Uday Bondhugula de0578b4f9 PR #40745: [MLIR] Add constant folder for xla_hlo.broadcast_in_dim op
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/40745

Fold broadcast_in_dim op if the operand is the result of a tensor splat.
Copybara import of the project:

--
26c9f631448b8d6ffd20ece39ea8d4132b5550c7 by Uday Bondhugula <uday@polymagelabs.com>:

[MLIR] Add constant folder for xla_hlo.broadcast_in_dim op

Fold broadcast_in_dim op if the operand is the result of a tensor
splat.

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/tensorflow/pull/40745 from polymage-labs:broadcast_in_dim_fold 26c9f631448b8d6ffd20ece39ea8d4132b5550c7
PiperOrigin-RevId: 320365164
2020-07-30 22:33:34 +00:00
Mehdi Amini 506ddd9c4a Cleanup build rule names in compiler/mlir/hlo to remove the redundant/obsolete xla_ prefix
PiperOrigin-RevId: 320320140
2020-07-30 22:33:29 +00:00
Mehdi Amini a575636862 Rename XlaHloDialect class into MhloDialect following the recent dialect namespace renaming
PiperOrigin-RevId: 320213526
2020-07-30 22:33:20 +00:00
Mehdi Amini 94dcb90d38 Rename xla_chlo dialect into chlo
Following on the plan of isolating the compiler/mlir/hlo directory.

PiperOrigin-RevId: 320212018
2020-07-30 22:33:16 +00:00
Mehdi Amini 7c4a5d62b5 Rename xla_lhlo dialect into lmhlo
Following on the plan of isolating the compiler/mlir/hlo directory.
Another xla_lhlo dialect will be created under compiler/mlir/xla/ later.

PiperOrigin-RevId: 320210326
2020-07-30 22:33:11 +00:00
Robert Suderman e1651b6090 Canonicalize multidimensional iota to use broadcast
There is no reason to have a multidimensional iota for codegen.
This should be canonicalized to a single-dimensional iota followed
by a broadcast. Changing iota to operate on a single dimension
followed by a broadcast substantially simplifies implementing iota operations.

PiperOrigin-RevId: 320095470
2020-07-30 22:32:54 +00:00
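
A small numeric check of the equivalence the canonicalization relies on (plain arrays, not MLIR):

#include <cassert>

int main() {
  const int rows = 2, cols = 3;
  // Multidimensional iota with iota_dimension = 1: each element equals its
  // column index.
  int multi[rows][cols];
  for (int i = 0; i < rows; ++i)
    for (int j = 0; j < cols; ++j) multi[i][j] = j;

  // Equivalent form: a 1-D iota of length `cols`, broadcast across the rows.
  int one_d[cols];
  for (int j = 0; j < cols; ++j) one_d[j] = j;
  for (int i = 0; i < rows; ++i)
    for (int j = 0; j < cols; ++j) assert(multi[i][j] == one_d[j]);
  return 0;
}
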
Mehdi Amini 8900222fed Rename `xla_hlo` dialect to `mhlo`
This is part of the current refactoring of the HLO related dialect.
`xla_hlo` will be reintroduced in a new form later.

PiperOrigin-RevId: 319916753
2020-07-30 22:32:50 +00:00
Alexander Belyaev 72010faaa7 [MLIR][LHLO] Add ReshapeMemrefCastOp to LHLO.
PiperOrigin-RevId: 319799171
2020-07-30 22:32:36 +00:00
Mehdi Amini fcf3df1541 Move the HLO/LHLO dialects to a new directory: tensorflow/compiler/mlir/hlo
We're preparing to restructure the MLIR HLO ecosystem with 5 dialects:

- chlo: client dialect with explicit broadcast and multiple composite operations
- mhlo: HLO with dynamic shapes, decoupled from XLA so it can evolve independently
- lmhlo: same as above, but after buffer assignment.
- xla_hlo: mapping 1:1 to the XLA HloInstruction class.
- xla_lhlo: same as above, but after buffer assignment.

The first three dialects are intended to live in the new tensorflow/compiler/mlir/hlo
path, the latter two will be created in tensorflow/compiler/mlir/xla.

This patch only moves the directory; followups will add the other transformations and tests.

The structure of the new directory follows: https://llvm.discourse.group/t/rfc-canonical-file-paths-to-dialects/621 as we intend to make it a standalone buildable component (see also https://github.com/google/mlir-npcomp as another example).

PiperOrigin-RevId: 319273229
2020-07-30 22:32:32 +00:00