The pass was replaced by the new generalized rank specialization and the two
passes `mhlo-rank-specialization-cluster` and `mhlo-rank-specialization-to-scf`.
PiperOrigin-RevId: 379935562
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50271
support hlo-to-lhlo conversion for GatherOp and DynamicGatherOp
Copybara import of the project:
--
117a1b1bcaac7ecc5224b02863eede5c1b9618fe by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Bufferize GatherOp and DynamicGatherOp
PiperOrigin-RevId: 379801972
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50020
This pass implements the logic to group kLoop/kInput fusion patterns at the
buffer level. The reason for this is that we can avoid much of the headache
of handling `shape-only` consumers (e.g. memref.dim, shape.shapeOf)
specially, since shapes are already resolved in the buffer world. It may be
better to move this pass to the tensor level once more shape
inference/constraint infrastructure is ready at the mhlo level.
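As an illustration of why `shape-only` consumers are easy to handle at the buffer level, consider the minimal sketch below (the function name and types are made up for illustration): a memref.dim on the fused result just reads buffer metadata.

```mlir
// Illustrative only: a `shape-only` consumer of a fused result in buffer
// form. memref.dim reads the dimension directly from the buffer metadata,
// so the fusion pass does not have to treat such consumers specially.
func @shape_only_consumer(%fused_result: memref<?x?xf32>) -> index {
  %c0 = constant 0 : index
  %d0 = memref.dim %fused_result, %c0 : memref<?x?xf32>
  return %d0 : index
}
```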
Copybara import of the project:
--
e31f8344b59aa9860097197585215ea1689b8ff4 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] support fusion on buffer
This pass implements the logic to group kLoop/kInput fusion patterns at the
buffer level. The reason for this is that we can avoid much of the headache
of handling `shape-only` consumers (e.g. memref.dim, shape.shapeOf)
specially, since shapes are already resolved in the buffer world. It may be
better to move this pass to the tensor level once more shape
inference/constraint infrastructure is ready at the mhlo level.
--
35f2eb2791241b0ab5db1ddcaf1b4006278ddccf by Wenyi Zhao <reyizero@gmail.com>:
fix
--
923c8d61f7fe00a2a0df22d5be396508f0667964 by Wenyi Zhao <reyizero@gmail.com>:
fix sanity check failure
PiperOrigin-RevId: 379743424
The operations allow for a limited form of broadcasting that allows some
operands to be scalars. As such, they are neither strictly `Elementwise` nor
`Broadcasting`. They do, however, fulfill the requirements for
`BroadcastingElementwise`.
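A minimal sketch of what this limited broadcasting looks like, assuming `mhlo.clamp` is among the affected operations (the exact op set comes from this change and is not restated here): the scalar bounds broadcast against the ranked operand.

```mlir
// Sketch: scalar min/max operands broadcast against the ranked operand,
// so the op is neither strictly element-wise nor a general broadcast.
func @clamp_with_scalar_bounds(%min: tensor<f32>, %x: tensor<4xf32>,
                               %max: tensor<f32>) -> tensor<4xf32> {
  %0 = "mhlo.clamp"(%min, %x, %max)
      : (tensor<f32>, tensor<4xf32>, tensor<f32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}
```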
PiperOrigin-RevId: 379719961
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49919
We are porting our MLIR-based dynamic shape compiler to the TF community (from op definitions and patterns to optimization passes, etc.).
This is the 5th PR about tf2mhlo pattern conversion, which includes ConvertUnpackOpDynamic, ConvertSignOpDynamic, and ConvertSigmoidGradOpDynamic.
The remaining pattern conversions we will add are:
- ConvertSqueezeOpxxx
- ConvertStridedSliceOpxxx
- ConvertPrintOp
Copybara import of the project:
--
21b3c3eb05b12956bcdb8b98cc54d9371dbf034d by azazhu <azazhu@gmail.com>:
[MLIR][DISC] pattern conversion from tf2mhlo: ConvertUnpackOpDynamic, ConvertSignOpDynamic, ConvertSigmoidGradOpDynamic
--
634630a4e2e426357290650bd579b35efecab5b3 by azazhu <azazhu@gmail.com>:
[MLIR][DISC] refine ConvertUnpackOpDynamic, ConvertSignOpDynamic, ConvertSigmoidGradOpDynamic
--
39a2bedd6dafb369ae960c5197b7a352bfdfbc80 by azazhu <azazhu@gmail.com>:
add RealDynamicSliceOp's canonicalize and fix CI
--
a1c38dd0963d602ed4812da0d77a096a95920ddb by azazhu <azazhu@gmail.com>:
fix CI for ConvertUnpackOpDynamic
--
5a8b4eb389ed6dc554104356c37f2f1550802b8c by azazhu <azazhu@gmail.com>:
fix typo in ConvertSigmoidGradOpDynamic
PiperOrigin-RevId: 379521079
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50236
support hlo-to-lhlo conversion for TransposeOp and ConcatenateOp
Copybara import of the project:
--
62860e717f2a14fbd3ddfb634aa6ff132d245a72 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Bufferize TransposeOp and ConcatenateOp
--
ce2ff57c1edee1172cd2f36346cc0b34ec1c7467 by Wenyi Zhao <reyizero@gmail.com>:
fix
PiperOrigin-RevId: 379330954
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50191
DISC is an end-to-end flow that includes both a compiler side and a runtime
side. On the runtime side, we have different target environments (e.g.
TensorFlow, PyTorch, or sometimes even a standalone binary). In order to
simplify the design of the compiler side, we design a Runtime Abstraction
Layer (RAL) to separate the compiler side from the runtime side. Thus the
compiler side only needs to target RAL itself, and it is RAL's
responsibility to handle the differences between target environments.
One of the most important functions of RAL is to manage stateful resources.
To this end, it provides a context object and hides all stateful operations
behind this context, so the compiler side itself does not need to care about
resource initialization. For example, a kernel must be loaded before it can
be launched on a GPU, but for best performance the load should happen only
once during the whole lifetime of the context. Based on the
initialization-free interfaces provided by RAL, the compiler side can focus
on its core optimization logic and let RAL manage resource state.
The context mentioned above is passed as a parameter to the entry function,
and all RAL APIs always take the context as their first argument. This CR
also provides a pass to help ensure this property. The pass rewrites the
entry function so that its first argument is the context. It also rewrites
the entry function's inputs and outputs: all of the original inputs and
outputs are received from and sent to RAL through a sequence of RAL API
calls. The motivation is to hide the implementation details of I/O. This
design may also enable partial execution of the compiled module when only
some of the inputs are ready.
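A rough before/after sketch of the entry-function rewrite described above. The `!disc_ral.context` type and the `disc_ral.recv_input`/`disc_ral.send_output` op names below are illustrative placeholders, not necessarily the exact names introduced by this PR.

```mlir
// Before: inputs/outputs are plain function arguments/results.
func @main(%arg0: memref<?xf32>) -> memref<?xf32> {
  return %arg0 : memref<?xf32>
}

// After (sketch): the RAL context is the only argument; I/O goes through
// RAL API calls, hiding the implementation details of inputs and outputs.
// Op and type names here are hypothetical placeholders.
func @main_ral(%ctx: !disc_ral.context) {
  %c0 = constant 0 : index
  // Receive the 0-th input from the RAL context.
  %input = "disc_ral.recv_input"(%ctx, %c0)
      : (!disc_ral.context, index) -> memref<?xf32>
  // ... the compiled computation would go here; we forward the input as-is ...
  // Send the 0-th output back through the RAL context.
  "disc_ral.send_output"(%ctx, %c0, %input)
      : (!disc_ral.context, index, memref<?xf32>) -> ()
  return
}
```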
Copybara import of the project:
--
c4f20a89aed71181e75bcc5265723b88bde23240 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Add RAL (Runtime abstraction layer) Dialect
DISC is an end-to-end flow that includes both a compiler side and a runtime
side. On the runtime side, we have different target environments (e.g.
TensorFlow, PyTorch, or sometimes even a standalone binary). In order to
simplify the design of the compiler side, we design a Runtime Abstraction
Layer (RAL) to separate the compiler side from the runtime side. Thus the
compiler side only needs to target RAL itself, and it is RAL's
responsibility to handle the differences between target environments.
One of the most important functions of RAL is to manage stateful resources.
To this end, it provides a context object and hides all stateful operations
behind this context, so the compiler side itself does not need to care about
resource initialization. For example, a kernel must be loaded before it can
be launched on a GPU, but for best performance the load should happen only
once during the whole lifetime of the context. Based on the
initialization-free interfaces provided by RAL, the compiler side can focus
on its core optimization logic and let RAL manage resource state.
The context mentioned above is passed as a parameter to the entry function,
and all RAL APIs always take the context as their first argument. This CR
also provides a pass to help ensure this property. The pass rewrites the
entry function so that its first argument is the context. It also rewrites
the entry function's inputs and outputs: all of the original inputs and
outputs are received from and sent to RAL through a sequence of RAL API
calls. The motivation is to hide the implementation details of I/O. This
design may also enable partial execution of the compiled module when only
some of the inputs are ready.
--
1991d4f80ab6087943956e1c0fec4940a22ab08d by Wenyi Zhao <reyizero@gmail.com>:
fix
PiperOrigin-RevId: 379317586
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50211
support hlo-to-lhlo conversion for RealDynamicSliceOp and ReduceOp
Copybara import of the project:
--
c417b336670a1fc256f7026dfe8080e46d13d79a by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Bufferize RealDynamicSliceOp and ReduceOp
PiperOrigin-RevId: 378972113
This just adds support for it in the op but keeps the production/uses as is (e.g., single tensor or tuple), matching what XLA export requires. A follow-up here would be to add a pass for export that re-tuples, after which the canonical form could be changed. Given control flow via regions and multi-result operations, tupling does not add representational power, and all the get_tuple_element ops obscure the computation.
The old form allowed a single tensor or tuple. The new form allows a variadic number of tensors or tuples; since tuples may be nested, the input could be (Tensor<..>, Tuple<Tensor<...>, Tuple<...>, ...>, Tensor<...>), and HLO_Tensor doesn't allow tuples.
PiperOrigin-RevId: 378934388
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50100
support hlo-to-lhlo conversion for DynamicIotaOp and DynamicPadOp
Copybara import of the project:
--
c3aae94954e35d3f8ad265f619ef9765665a5115 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Bufferize DynamicIotaOp and DynamicPadOp
--
adc6996d70b804d61310d56a33fac975d70c8636 by Wenyi Zhao <reyizero@gmail.com>:
minor
PiperOrigin-RevId: 378733284
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49970
1. Add hlo-to-lhlo support for DynamicReshape and DynamicBroadcastInDim.
2. Add a flag `convert-to-lmhlo-only` to separate the following two cases:
  - hlo-to-lhlo only. Simply lowers all mhlo ops to their lmhlo
    counterparts and does not apply any optimization (e.g. does not elide
    any buffer copy). Buffer optimization is not easy in the dynamic shape
    world, especially when control flow is involved, so we leave it to
    another dedicated pass.
  - hlo-to-lhlo-or-memref-directly. Lowers some metadata-only mhlo ops
    (e.g. reshape) directly to the memref dialect and lowers the others to
    their lmhlo counterparts (see the sketch below).
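A minimal sketch of the second case, with made-up types: a metadata-only `mhlo.dynamic_reshape` can be lowered directly to `memref.reshape` on the existing buffer instead of an lmhlo op that writes into a freshly allocated output buffer.

```mlir
// Sketch: instead of an lmhlo op plus a new output buffer, the
// metadata-only reshape becomes a memref.reshape on the existing buffer.
func @reshape_directly_to_memref(%buf: memref<?xf32>,
                                 %shape: memref<2xindex>) -> memref<?x?xf32> {
  %0 = memref.reshape %buf(%shape)
      : (memref<?xf32>, memref<2xindex>) -> memref<?x?xf32>
  return %0 : memref<?x?xf32>
}
```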
Copybara import of the project:
--
562bd65a368f6194405c4ae6900e3b4388a5ec03 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] bufferize DynamicReshape and DynamicBroadcastInDim
1. Add hlo-to-lhlo support for DynamicReshape and DynamicBroadcastInDim.
2. Add a flag `convert-to-lmhlo-only` to separate the following two cases:
  - hlo-to-lhlo only. Simply lowers all mhlo ops to their lmhlo
    counterparts and does not apply any optimization (e.g. does not elide
    any buffer copy). Buffer optimization is not easy in the dynamic shape
    world, especially when control flow is involved, so we leave it to
    another dedicated pass.
  - hlo-to-lhlo-or-memref-directly. Lowers some metadata-only mhlo ops
    (e.g. reshape) directly to the memref dialect and lowers the others to
    their lmhlo counterparts.
PiperOrigin-RevId: 377603395
Adds import/export/verifier support as well.
Also makes `channel_handle` uniform across mhlo.all_reduce and mhlo.all_gather.
PiperOrigin-RevId: 377323468
Fix usage of the default constructor. Instead, always use the parameterized
constructor and make the maximum supported rank explicit.
PiperOrigin-RevId: 377037155
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49598
This PR implements logic for lowering memref.tensor_load ops that are
inserted during `mhlo-legalize-to-lmhlo`.
Copybara import of the project:
--
80eb377af4e02182e1aecc943a41ca5d7d1c2100 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] legalize tensor_load inserted during hlo-to-lhlo conversion
This PR implements logic for lowering memref.tensor_load ops that are
inserted during `mhlo-legalize-to-lmhlo`.
--
ac452fe3dcd591211cd5c59be9189fe2f7153b41 by Wenyi Zhao <reyizero@gmail.com>:
minor fix
--
6b36017f8632a06adbc3e05a62975fa641d0260f by Wenyi Zhao <reyizero@gmail.com>:
minor refine
--
846005cc76d0033112e47825c2e9a97790b6925f by Wenyi Zhao <reyizero@gmail.com>:
minor fix
--
f6a4becaa287d5ca323b2d152a4d0ae053730fd9 by Wenyi Zhao <reyizero@gmail.com>:
fix
--
5555749f60f7fce8f57962860ef65efccf0362ba by Wenyi Zhao <reyizero@gmail.com>:
fix
--
8873b9b6d9315c1199ca9f7c133ecf377ecd2fa6 by Wenyi Zhao <reyizero@gmail.com>:
fix
PiperOrigin-RevId: 376942547
The maximum supported target rank of 5 is sufficient for all operations but
`select`. Make the maximum target rank configurable in the rank specialization.
This reduces the number of generated kernels for operations that don't require
it.
PiperOrigin-RevId: 376822496
Replace the previously used `TransformUnrankedHloPass`, which rank-specializes
only one operation at a time. The new generalized rank specialization clusters
compatible operations and rank-specializes them collectively.
PiperOrigin-RevId: 376127752
We can lower it to the DivOp in the complex dialect.
Also add tests to hlo-legalize-to-linalg.mlir for CompareOp lowering of complex
types. These were forgotten in a previous commit.
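For concreteness, a minimal standalone sketch of the target op (not the full linalg.generic produced by the lowering): the element-wise division of two complex values maps onto `complex.div`.

```mlir
// Sketch: the scalar body of the lowered element-wise division uses the
// complex dialect's DivOp.
func @complex_scalar_div(%lhs: complex<f32>, %rhs: complex<f32>) -> complex<f32> {
  %0 = complex.div %lhs, %rhs : complex<f32>
  return %0 : complex<f32>
}
```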
PiperOrigin-RevId: 375669125
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49454
The new interface is safer to use during dialect conversion
(e.g. when converting from the tensor world to the buffer world).
Copybara import of the project:
--
a6968072d59bec3c3bbaef0121d297e807c37c91 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Upgrade to use the new `reifyReturnTypeShapes` interface.
The new interface is safer to use during dialect conversion
(e.g. when converting from the tensor world to the buffer world).
--
55e7c6b7f2f99b99e226645a57e2433fae3e90ed by Wenyi Zhao <reyizero@gmail.com>:
minor fix
PiperOrigin-RevId: 375500273
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49228
We are porting our MLIR-based dynamic shape compiler to the TF community (from op definitions and patterns to optimization passes, etc.).
This is the first PR, which includes some dynamic-shape op definitions in the mhlo and lmhlo dialects.
For the mhlo dialect, we add (a usage sketch follows the lists below):
- HLO_RealDynamicSliceOp
- HLO_DynamicPadOp
- HLO_DynamicGatherOp
- HLO_DynamicConvOp
For the lmhlo dialect, we add:
- LHLO_RealDynamicSliceOp
- LHLO_DynamicBroadcastInDimOp
- LHLO_DynamicGatherOp
- LHLO_DynamicPadOp
- LHLO_DynamicBitcastOp
- LHLO_DynamicConvOp
- LHLO_DynamicIotaOp
- LHLO_DynamicReshapeOp
- LHLO_DotGeneralOp
- LHLO_BitcastOp
Remaining ops to add:
* We will send a separate PR containing LHLO_DynamicWhileOp and LHLO_DynamicCaseOp for control flow.
* We will add a separate dedicated dialect like mhlo_ral, which includes D2HOp/H2DOp/DebugPrintOp/TopKOp, etc.
Previous discussions: [RFC](https://groups.google.com/a/tensorflow.org/g/mlir/c/_X48poNcbDI/m/jCC8BWIICQAJ), [discussion_1](https://llvm.discourse.group/t/updates-on-mlir-based-dynamic-shape-compiler/2384), [Recording of meeting](https://drive.google.com/file/d/1_uEISlV5MUWdG9faKAdKlCWnPtGjRC-D/view?usp=sharing).
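As a usage sketch for one of the new ops, here is roughly what HLO_RealDynamicSliceOp looks like in IR; the operand types and shapes below are illustrative assumptions, with start/limit/strides as 1-D index tensors computed at runtime.

```mlir
// Sketch: a dynamic-shape-aware slice whose start/limit/strides are
// runtime values rather than attributes (types are illustrative).
func @real_dynamic_slice(%operand: tensor<?xf32>, %start: tensor<1xindex>,
                         %limit: tensor<1xindex>, %strides: tensor<1xindex>)
    -> tensor<?xf32> {
  %0 = "mhlo.real_dynamic_slice"(%operand, %start, %limit, %strides)
      : (tensor<?xf32>, tensor<1xindex>, tensor<1xindex>, tensor<1xindex>)
      -> tensor<?xf32>
  return %0 : tensor<?xf32>
}
```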
Copybara import of the project:
--
e22d9e61106e00a1a1c6f368cc4a03e3bd1f414c by azazhu <azazhu@gmail.com>:
[DISC]fea: porting mhlo and lmhlo OPs
--
9ec3e76290da07cbd53d7da5fa86ff67179441a1 by azazhu <azazhu@gmail.com>:
[DISC][MLIR] 1. add summary and description for dynamic OPs in mhlo and lmhlo; 2. rm InferOutputTypes; 3. add verify for RealDynamicSliceOp and DynamicPadOp
--
0d68cd135555fd935991c12456b21329e628f23f by azazhu <azazhu@gmail.com>:
[DISC][MLIR] 1.remove D2H,H2D and DebugPrint Ops from mhlo/lmhlo dialect; 2. add type constraint to DynamicPadOp and RealDynamicSliceOp; 3.refine lmhlo type constraint; 4.rename RealDynamicSliceOp as name conflict.
--
698762a77d60f6a844cb1ab3f32740d4ef3c5843 by azazhu <azazhu@gmail.com>:
[DISC][MLIR] 1. replace dyn_cast with cast 2. refine code
PiperOrigin-RevId: 375022260
* The former is typically invariant regardless of backend.
* The latter may need to be done differently depending on the capabilities of
PiperOrigin-RevId: 374492924
Add a pass to cluster unranked C/HLO operations into one
`chlo.rank_specialization_cluster` op. The C/HLO operations are moved to the
body of the operation. Later passes can use this to rank-specialize all these
operations together.
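The clusters produced by this pass look roughly like the following sketch (illustrative, not taken verbatim from the tests; the yield terminator name may differ in detail):

```mlir
// Sketch: two compatible unranked C/HLO ops grouped into one cluster so
// that later passes can rank-specialize them together.
func @cluster_example(%arg0: tensor<*xf32>, %arg1: tensor<*xf32>,
                      %arg2: tensor<*xf32>) -> tensor<*xf32> {
  %0 = "chlo.rank_specialization_cluster"(%arg0, %arg1, %arg2) ({
  ^bb0(%a: tensor<*xf32>, %b: tensor<*xf32>, %c: tensor<*xf32>):
    %1 = chlo.broadcast_multiply %a, %b
        : (tensor<*xf32>, tensor<*xf32>) -> tensor<*xf32>
    %2 = chlo.broadcast_add %1, %c
        : (tensor<*xf32>, tensor<*xf32>) -> tensor<*xf32>
    "chlo.rank_specialization_cluster_yield"(%2) : (tensor<*xf32>) -> ()
  }) : (tensor<*xf32>, tensor<*xf32>, tensor<*xf32>) -> tensor<*xf32>
  return %0 : tensor<*xf32>
}
```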
PiperOrigin-RevId: 373336725
Moved the corresponding `summary` and `description` fields into the subclasses.
Kept BASE_HLO_ConvOp for `hasWindowReversal()`.
PiperOrigin-RevId: 373173025