Commit Graph

95 Commits

Wenyi Zhao 6660234d80 PR #50100: [MLIR][DISC] Bufferize DynamicIotaOp and DynamicPadOp
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50100

Support hlo-to-lhlo conversion for DynamicIotaOp and DynamicPadOp.
Copybara import of the project:

--
c3aae94954e35d3f8ad265f619ef9765665a5115 by Wenyi Zhao <reyizero@gmail.com>:

[MLIR][DISC] Bufferize DynamicIotaOp and DynamicPadOp

--
adc6996d70b804d61310d56a33fac975d70c8636 by Wenyi Zhao <reyizero@gmail.com>:

minor

PiperOrigin-RevId: 378733284
2021-06-10 14:20:45 -07:00
A. Unique TensorFlower d828b457b3 Handle empty tensors in SimplifyConcatSlice.
If the result of the slice is an empty tensor, do nothing.
This fixes a crash: we can't create a `concat` with an
empty operand range.
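
A hedged sketch of the shape that used to crash (illustrative types, not taken from the patch):

    %c = "mhlo.concatenate"(%a, %b) {dimension = 0 : i64}
        : (tensor<2xf32>, tensor<2xf32>) -> tensor<4xf32>
    %s = "mhlo.slice"(%c) {start_indices = dense<1> : tensor<1xi64>,
                           limit_indices = dense<1> : tensor<1xi64>,
                           strides = dense<1> : tensor<1xi64>}
        : (tensor<4xf32>) -> tensor<0xf32>
    // The slice selects no elements from any concat operand, so rewriting
    // it into a narrower concat would require a concat with an empty
    // operand range; the pattern now bails out instead.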

PiperOrigin-RevId: 378354956
2021-06-09 02:15:47 -07:00
Wenyi Zhao ade873a5e0 PR #49970: [MLIR][DISC] bufferize DynamicReshape and DynamicBroadcastInDim
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49970

1. Add hlo-to-lhlo support for DynamicReshape and DynamicBroadcastInDim.

2. Add a flag `convert-to-lmhlo-only` to separate the following two cases (a sketch follows this list):
   - hlo-to-lhlo only. Simply lowers all mhlo ops to their lmhlo
     counterparts and does not apply any optimization (e.g. eliding
     buffer copies). Buffer optimization is not easy in the dynamic
     shape world, especially when control flow is involved, so we
     leave it to another dedicated pass.

   - hlo-to-lhlo-or-memref-directly. Lowers some metadata-only mhlo
     ops (e.g. reshape) to the memref dialect directly and lowers the
     others to their lmhlo counterparts.
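
A hedged sketch of the two modes on a metadata-only op (illustrative IR; the op and type choices are assumptions, not from the patch):

    %0 = "mhlo.dynamic_reshape"(%arg, %shape)
        : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
    // convert-to-lmhlo-only: %0 becomes an lmhlo.dynamic_reshape that
    // writes into a freshly allocated buffer.
    // hlo-to-lhlo-or-memref-directly: %0 becomes a memref-level reshape
    // of the same underlying buffer, with no copy.
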
Copybara import of the project:

--
562bd65a368f6194405c4ae6900e3b4388a5ec03 by Wenyi Zhao <reyizero@gmail.com>:

[MLIR][DISC] bufferize DynamicReshape and DynamicBroadcastInDim

1. Add hlo-to-lhlo support for DynamicReshape and DynamicBroadcastInDim.

2. Add a flag `convert-to-lmhlo-only` to separate the following two cases:
   - hlo-to-lhlo only. Simply lowers all mhlo ops to their lmhlo
     counterparts and does not apply any optimization (e.g. eliding
     buffer copies). Buffer optimization is not easy in the dynamic
     shape world, especially when control flow is involved, so we
     leave it to another dedicated pass.

   - hlo-to-lhlo-or-memref-directly. Lowers some metadata-only mhlo
     ops (e.g. reshape) to the memref dialect directly and lowers the
     others to their lmhlo counterparts.

PiperOrigin-RevId: 377603395
2021-06-04 15:36:03 -07:00
A. Unique TensorFlower aba16adfa5 Add `mhlo.all_gather` op to MHLO dialect.
Adds import/export/verifier support as well.
Also makes `channel_handle` uniform across `mhlo.all_reduce` and `mhlo.all_gather`.
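
A hedged sketch of the new op (attribute values and spellings are illustrative assumptions based on XLA's AllGather semantics, not taken from the commit):

    %out = "mhlo.all_gather"(%operand) {
        all_gather_dim = 1 : i64,
        replica_groups = dense<[[0, 1]]> : tensor<1x2xi64>,
        channel_handle = {handle = 1 : i64, type = 0 : i64}
      } : (tensor<128x32xf32>) -> tensor<128x64xf32>
    // Gathers along dimension 1 across the 2 replicas in the group: 32 -> 64.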

PiperOrigin-RevId: 377323468
2021-06-03 10:45:29 -07:00
Adrian Kuegel a4fa6afa07 [mlir][hlo] Avoid dyn_cast_or_null when called with getDefiningOp result (NFC)
PiperOrigin-RevId: 376110457
2021-05-27 00:20:42 -07:00
wyzhao b93e54d8a4 PR #49454: [MLIR][DISC] Upgrade to use the new `reifyReturnTypeShapes` interface.
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49454

The new interface is safer to use during dialect conversion
(e.g. when converting from the tensor world to the buffer world).
Copybara import of the project:

--
a6968072d59bec3c3bbaef0121d297e807c37c91 by Wenyi Zhao <reyizero@gmail.com>:

[MLIR][DISC] Upgrade to use the new `reifyReturnTypeShapes` interface.

The new interface is safer to use during dialect conversion
(e.g. when converting from the tensor world to the buffer world).

--
55e7c6b7f2f99b99e226645a57e2433fae3e90ed by Wenyi Zhao <reyizero@gmail.com>:

minor fix

PiperOrigin-RevId: 375500273
2021-05-24 10:11:55 -07:00
Feiwen a7884196f5 PR #49228: [MLIR][DISC] porting dynamic shape related OPs to mhlo and lmhlo dialect
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49228

We are porting our MLIR-based dynamic shape compiler to the TF community (from op definitions and patterns to optimization passes, etc.).
This is the first PR, which includes some dynamic shape op definitions for the mhlo and lmhlo dialects; an illustrative use of one of the new ops follows the lists below.
For mhlo dialect, we add:
- HLO_RealDynamicSliceOp
- HLO_DynamicPadOp
- HLO_DynamicGatherOp
- HLO_DynamicConvOp

For lmhlo dialect, we add:
- LHLO_RealDynamicSliceOp
- LHLO_DynamicBroadcastInDimOp
- LHLO_DynamicGatherOp
- LHLO_DynamicPadOp
- LHLO_DynamicBitcastOp
- LHLO_DynamicConvOp
- LHLO_DynamicIotaOp
- LHLO_DynamicReshapeOp
- LHLO_DotGeneralOp
- LHLO_BitcastOp

Remaining ops to add:
* We will send a separate PR containing LHLO_DynamicWhileOp and LHLO_DynamicCaseOp for control flow.
* We will add a separate dedicated dialect, such as mhlo_ral, which includes D2HOp/H2DOp/DebugPrintOp/TopKOp, etc.

Previous discussions: [RFC](https://groups.google.com/a/tensorflow.org/g/mlir/c/_X48poNcbDI/m/jCC8BWIICQAJ), [discussion_1](https://llvm.discourse.group/t/updates-on-mlir-based-dynamic-shape-compiler/2384), [recording of meeting](https://drive.google.com/file/d/1_uEISlV5MUWdG9faKAdKlCWnPtGjRC-D/view?usp=sharing).
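
An illustrative use of one of the new ops, HLO_RealDynamicSliceOp (a hedged sketch; the types are assumptions, not from the PR):

    %result = "mhlo.real_dynamic_slice"(%operand, %start, %limit, %strides)
        : (tensor<?x?xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>)
        -> tensor<?x?xf32>
    // Unlike mhlo.slice, the start/limit/stride values are runtime operands
    // rather than constant attributes, so fully dynamic slicing is possible.
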
Copybara import of the project:

--
e22d9e61106e00a1a1c6f368cc4a03e3bd1f414c by azazhu <azazhu@gmail.com>:

[DISC] feat: porting mhlo and lmhlo ops

--
9ec3e76290da07cbd53d7da5fa86ff67179441a1 by azazhu <azazhu@gmail.com>:

[DISC][MLIR] 1. add summary and description for dynamic OPs in mhlo and lmhlo; 2. rm InferOutputTypes; 3. add verify for RealDynamicSliceOp and DynamicPadOp

--
0d68cd135555fd935991c12456b21329e628f23f by azazhu <azazhu@gmail.com>:

[DISC][MLIR] 1. remove D2H, H2D and DebugPrint ops from the mhlo/lmhlo dialects; 2. add type constraints to DynamicPadOp and RealDynamicSliceOp; 3. refine lmhlo type constraints; 4. rename RealDynamicSliceOp due to a name conflict.

--
698762a77d60f6a844cb1ab3f32740d4ef3c5843 by azazhu <azazhu@gmail.com>:

[DISC][MLIR] 1. replace dyn_cast with cast; 2. refine code

PiperOrigin-RevId: 375022260
2021-05-20 23:16:47 -07:00
Rahul Joshi 41f663ce47 [HLO] Adopt custom syntax for convolution dimensions and window attributes (HLO)
PiperOrigin-RevId: 374923250
2021-05-20 12:13:50 -07:00
A. Unique TensorFlower 57aeb5ab16 Integrate LLVM at llvm/llvm-project@0316f3e649
Updates LLVM usage to match
[0316f3e64972](https://github.com/llvm/llvm-project/commit/0316f3e64972)

PiperOrigin-RevId: 374855085
2021-05-20 06:09:40 -07:00
A. Unique TensorFlower d2cc74317c Implement constant folding for mhlo.Sign.
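A hedged sketch of the fold (illustrative values):

    %c = mhlo.constant dense<[-2.0, 0.0, 3.0]> : tensor<3xf32>
    %s = "mhlo.sign"(%c) : (tensor<3xf32>) -> tensor<3xf32>
    // folds to:
    %f = mhlo.constant dense<[-1.0, 0.0, 1.0]> : tensor<3xf32>
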
PiperOrigin-RevId: 373550014
2021-05-13 03:54:04 -07:00
A. Unique TensorFlower 7f86dd9f7e Constant fold compare EQ if one of the operands is true and compare NE if one of the operands is false.
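A hedged sketch (illustrative IR): for i1 operands, x == true is x itself, and x != false is likewise x.

    %true = mhlo.constant dense<true> : tensor<i1>
    %r = "mhlo.compare"(%x, %true) {comparison_direction = "EQ"}
        : (tensor<i1>, tensor<i1>) -> tensor<i1>
    // folds to %x
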
PiperOrigin-RevId: 373058030
2021-05-10 18:53:49 -07:00
A. Unique TensorFlower e500ab37a1 Introduce constant folds for ReduceOp with single LogicalAnd or LogicalOr op.
PiperOrigin-RevId: 370551483
2021-04-26 15:11:27 -07:00
A. Unique TensorFlower 8db96f54d3 [mhlo] Add a folder for mhlo.map which does nothing but return one of the arguments.
Add a folder for maps whose body returns only one of the arguments. When this arises, the fold replaces the map output with the corresponding operand tensor.
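
A hedged sketch of a foldable map (illustrative IR):

    %r = "mhlo.map"(%a, %b) ({
    ^bb0(%x: tensor<f32>, %y: tensor<f32>):
      "mhlo.return"(%x) : (tensor<f32>) -> ()
    }) {dimensions = dense<0> : tensor<1xi64>}
        : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
    // The body just forwards its first argument, so the map folds to %a.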

PiperOrigin-RevId: 369304322
2021-04-19 14:36:08 -07:00
Rahul Joshi c75cbf4ac7 [MLIR][NFC] Rename ReduceOp operands() => inputs().
- Rename to avoid confusion, as `operands` generally refers to all operands of an operation.

PiperOrigin-RevId: 368479524
2021-04-14 12:08:23 -07:00
Jacques Pienaar fdd75daed6 Add shape function for MHLO RngNormal and RngUniform
PiperOrigin-RevId: 368276963
2021-04-13 12:59:42 -07:00
A. Unique TensorFlower 6d2209e301 [MLIR][HLO] Canonicalize chained broadcasts
Compose two subsequent `dynamic_broadcast_in_dim` ops into one.
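
A hedged sketch of the composition (illustrative shapes and dimensions):

    %b0 = "mhlo.dynamic_broadcast_in_dim"(%arg, %s0)
        {broadcast_dimensions = dense<[1]> : tensor<1xi64>}
        : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
    %b1 = "mhlo.dynamic_broadcast_in_dim"(%b0, %s1)
        {broadcast_dimensions = dense<[0, 2]> : tensor<2xi64>}
        : (tensor<?x?xf32>, tensor<3xindex>) -> tensor<?x?x?xf32>
    // Composes into a single broadcast of %arg to %s1 with
    // broadcast_dimensions = [2] (dim 0 -> dim 1 -> dim 2).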

PiperOrigin-RevId: 367630360
2021-04-09 07:35:34 -07:00
Rahul Joshi ff2cbfa2ec [MLIR] Add support for representing variadic reduce-window in HLO/LMHLO dialect.
- Fixed a subset of transformations to handle variadic reduce-window.

PiperOrigin-RevId: 366278650
2021-04-01 10:24:50 -07:00
Geoffrey Martin-Noble 5d65758e8c Canonicalize MHLO Case and If Ops with constant conditions
ReplaceOpWithRegion was taken directly from ScfOps. We should maybe put that somewhere common in core.

PiperOrigin-RevId: 365936724
2021-03-30 17:58:01 -07:00
Geoffrey Martin-Noble 2fb2a92c6e Verify mhlo.if region return types match op
This matches the behavior of mhlo.case. Additionally, fix the verification of CaseOp in the case of nested ops with mhlo.return-containing regions.

PiperOrigin-RevId: 365936672
2021-03-30 17:57:20 -07:00
Geoffrey Martin-Noble 7a9394dca5 Restrict MHLO control flow ops to single-block regions
PiperOrigin-RevId: 365935824
2021-03-30 17:51:03 -07:00
Geoffrey Martin-Noble a2b6060c0c Add folder for HLO NotOp
PiperOrigin-RevId: 364989658
2021-03-25 02:08:38 -07:00
A. Unique TensorFlower 0c4a89e52c [MLIR][MHLO] Implement shape reification for `dynamic_broadcast_in_dim`
PiperOrigin-RevId: 363622714
2021-03-18 03:39:15 -07:00
Jacques Pienaar 3de2024a9b Avoid creating tuple type only for verification
Make the error message a bit more verbose; it is also cheaper to verify the elements than to create a (potentially) new type.

PiperOrigin-RevId: 363073909
2021-03-15 17:58:19 -07:00
Benjamin Kramer 67a770e4e0 [HLO:MLIR] Make binary op type reification emit shape_of instead of tensor ops
This gives cleaner code and allows shape optimizations to happen on the result.

PiperOrigin-RevId: 362242975
2021-03-11 02:01:35 -08:00
Rahul Joshi 9902e6ee32 [HLO] Add LMHLO CollectivePermute verification.
- Extract verification of the source/target pairs attached to collective permute into a common
  helper function and use it to verify both the MHLO and LMHLO variants.
- Change MlirGpuTestBase::ParseMlirModule to allow returning a failure, and use
  that to update mlir_gpu_compile_test to check the new behavior.

PiperOrigin-RevId: 362156962
2021-03-10 15:37:12 -08:00
Stephan Herhut cabd4d9a06 Canonicalize dynamic_broadcast_in_dim to own shape with rank narrowing on the shape to a corresponding tensor.cast.
PiperOrigin-RevId: 362028291
2021-03-10 05:43:54 -08:00
A. Unique TensorFlower 55eda81407 [MLIR][HLO] Reify shape extents as `index` values
PiperOrigin-RevId: 361519167
2021-03-08 02:42:47 -08:00
Marius Brehler 29f70cb892 PR #46723: Adjust types of loop counters
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/46723

Reduces some warnings about comparison of integers of different signs.
Copybara import of the project:

--
311f436f77b334f5462127d8cf179cce067969ca by Marius Brehler <marius.brehler@iml.fraunhofer.de>:

Adjust types of loop counters

Reduces some warnings about comparison of integers of different signs.

PiperOrigin-RevId: 360912203
2021-03-04 07:36:12 -08:00
Richard Uhler b579bd5d9e Support dynamic-shaped operand in verification of BroadcastInDim.
Verification of HLO_BroadcastInDimOp previously failed or crashed if the
operand had a dynamic shape or was unranked. Update the verification code to
allow the operand to be unranked or to have a dynamic shape.

PiperOrigin-RevId: 358056793
2021-02-17 16:18:09 -08:00
Adrian Kuegel 96f8771ed7 Add MLIR generated kernel for Angle kernel.
This also requires a canonicalization pattern to remove a redundant dynamic
reshape from rank 1 to rank 1.

PiperOrigin-RevId: 355113135
2021-02-02 00:47:20 -08:00
A. Unique TensorFlower fe2e5a175f [MLIR][HLO] Implement type inference for `is_finite` op
PiperOrigin-RevId: 354261420
2021-01-28 00:56:12 -08:00
Jacques Pienaar a7e645f37e Fix incorrect include
PiperOrigin-RevId: 352820426
2021-01-20 10:24:41 -08:00
Tres Popp ba0346b071 Integrate LLVM at llvm/llvm-project@96ef4f307d
Updates LLVM usage to match
[96ef4f307df2](https://github.com/llvm/llvm-project/commit/96ef4f307df2)

PiperOrigin-RevId: 352786460
2021-01-20 07:09:47 -08:00
Alexander Belyaev ecf1bf5132 [KERNEL_GEN] Add a canonicalization pattern to drop a redundant dynamic reshape.
PiperOrigin-RevId: 351141868
2021-01-11 06:38:03 -08:00
Alexander Belyaev 095dc28e5c [KERNEL_GEN] Add canonicalizaton pattern to drop a redundant broadcast op.
PiperOrigin-RevId: 350105790
2021-01-05 03:01:00 -08:00
A. Unique TensorFlower c4accdcc41 Integrate LLVM at llvm/llvm-project@1b97cdf885
Updates LLVM usage to match
[1b97cdf885d6](https://github.com/llvm/llvm-project/commit/1b97cdf885d6)

PiperOrigin-RevId: 348587513
2020-12-21 23:49:18 -08:00
Smit Hinsu 8d051723c0 Use InferTypeOpInterface for HLO AbsOp and fix result shape inference
Shape inference for ops with complex element types needs to use the element type of the complex as the result element type, not the full operand type.

Before:
"mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xtensor<4xcomplex<f32>>>
After:
"mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xf32>
PiperOrigin-RevId: 348123967
2020-12-17 17:37:07 -08:00
Smit Hinsu 737d15ded5 Handle operands with zero elements in HLO PadOp folder
PiperOrigin-RevId: 348034821
2020-12-17 09:27:36 -08:00
River Riddle 6b439f7eee [mlir][NFC] Replace usages or mlir/IR/StandardTypes.h with mlir/IR/BuiltinTypes.h
StandardTypes.h was moved to BuiltinTypes.h and is being removed.

PiperOrigin-RevId: 347115952
2020-12-11 19:01:25 -08:00
Smit Hinsu ab6ee11813 Fix folding of HLO SliceOp with zero elements
Folding a slice with zero elements was causing a division by zero.

PiperOrigin-RevId: 346920942
2020-12-10 20:22:48 -08:00
Smit Hinsu bc7b6374c8 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first converting to an unsigned number of the same bit-width.
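
The bit-width point, sketched with arith ops (an illustration under assumed op choices, not the actual XLA kernel code):

    // Wrong: sign-extending a negative seed fills the high 32 bits with 1s.
    %bad = arith.extsi %seed : i32 to i64
    // Intended: reinterpret the seed as unsigned of the same width, i.e.
    // zero-extend, so each s32 seed occupies its own half of the u64.
    %good = arith.extui %seed : i32 to i64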

PiperOrigin-RevId: 345902167
2020-12-05 18:55:41 -08:00
Phoenix Meadowlark c33bdcbd03 Remove fold of `mhlo.compare(%arg0, %arg0)` for floating types.
Two tensors having the same SSA value isn't sufficient for equality on floating-point types, as `NaN != NaN`. As written, this fold causes `tf.IsNan` to [miscompile](https://github.com/google/iree/issues/4061).
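
A hedged sketch of why the fold is unsound for floats (illustrative IR):

    // With %arg0 holding NaN, NE compares true, so folding
    // compare(%arg0, %arg0, NE) to a constant false would be wrong.
    %r = "mhlo.compare"(%arg0, %arg0) {comparison_direction = "NE"}
        : (tensor<f32>, tensor<f32>) -> tensor<i1>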

PiperOrigin-RevId: 345730640
2020-12-04 12:15:02 -08:00
Smit Hinsu 9bd1995f90 Legalize XlaReplicaId to HLO replica-id op
Also, define a shape inference function for the HLO replica-id op.

PiperOrigin-RevId: 345714342
2020-12-04 11:04:40 -08:00
A. Unique TensorFlower e87d53742b Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first converting to an unsigned number of the same bit-width.

PiperOrigin-RevId: 345618958
2020-12-04 00:04:10 -08:00
Smit Hinsu 9456af5880 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first converting to an unsigned number of the same bit-width.

PiperOrigin-RevId: 345605910
2020-12-03 22:09:56 -08:00
A. Unique TensorFlower 1b711670bc Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first converting to an unsigned number of the same bit-width.

PiperOrigin-RevId: 345239817
2020-12-02 08:42:07 -08:00
Smit Hinsu 733fc6d032 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when packing two s32 seeds into a single u64. Fixed this by first converting to an unsigned number of the same bit-width.

PiperOrigin-RevId: 345227848
2020-12-02 07:24:10 -08:00
Adrian Kuegel d14c63da54 Add a canonicalization pattern to remove redundant dynamic_reshapes.
PiperOrigin-RevId: 344517381
2020-11-27 04:46:50 -08:00
A. Unique TensorFlower 7f239c7ba2 Add canonicalizer for Reshape(Broadcast(X)) pattern when it is an identity sequence
PiperOrigin-RevId: 343251257
2020-11-19 02:32:45 -08:00
Tres Popp 1dffa62fe9 Fold away shape.shape_of(mhlo.dynamic_reshape(inp, shape))
This specific pattern can be replaced with the shape
passed to dynamic_reshape. It is implemented as a
canonicalization on mhlo.dynamic_reshape to fit into
the existing canonicalization infrastructure.
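
A hedged sketch of the fold (illustrative IR):

    %r = "mhlo.dynamic_reshape"(%inp, %shape)
        : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
    %s = shape.shape_of %r : tensor<?x?xf32> -> tensor<2xindex>
    // %s is exactly %shape, so the shape_of folds away.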

PiperOrigin-RevId: 342009365
2020-11-12 02:48:26 -08:00