Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50236
Support hlo-to-lhlo conversion for TransposeOp and ConcatenateOp.
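A minimal before/after sketch of what this conversion looks like (illustrative shapes and SSA names; allocation of the output buffer is elided):

```mlir
// mhlo: tensor-based, produces its result as an SSA value.
%0 = "mhlo.transpose"(%arg0) {permutation = dense<[1, 0]> : tensor<2xi64>}
    : (tensor<2x3xf32>) -> tensor<3x2xf32>

// lmhlo: buffer-based, writes into a preallocated output memref, no results.
"lmhlo.transpose"(%arg0_buf, %out_buf) {permutation = dense<[1, 0]> : tensor<2xi64>}
    : (memref<2x3xf32>, memref<3x2xf32>) -> ()
```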
Copybara import of the project:
--
62860e717f2a14fbd3ddfb634aa6ff132d245a72 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Bufferize TransposeOp and ConcatenateOp
--
ce2ff57c1edee1172cd2f36346cc0b34ec1c7467 by Wenyi Zhao <reyizero@gmail.com>:
fix
PiperOrigin-RevId: 379330954
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50191
DISC is an e2e flow, including both a compiler side and a runtime side. On the
runtime side, we have different targeting environments (e.g. tensorflow,
pytorch, or sometimes even a standalone binary). In order to simplify
the design of the compiler side, we introduce a Runtime Abstraction Layer
(RAL) to separate the compiler side from the runtime side. The compiler
side then only needs to target RAL itself, and it is the responsibility of
RAL to handle the differences between targeting environments.
One of the most important functions of RAL is to manage stateful
resources. To this end, it provides a context object and hides all
stateful operations behind this context, so the compiler side itself
doesn't need to care about resource initialization. For example, a
kernel must be loaded before it can be launched on GPU, but the
loading should happen only once during the whole lifetime of the
context in order to achieve the best performance. Based on the
initialization-free interfaces provided by RAL, the compiler side can
focus on its core optimization logic and let RAL manage the resource
state.
The context mentioned above is passed as a parameter to the entry
function, and all RAL APIs take the context as their first
argument. This CR also provides a pass to help ensure this property.
The pass rewrites the entry function so that its first argument
is the context. The pass also rewrites the entry function's inputs
and outputs: all of the original inputs and outputs are received from
and sent to RAL through a sequence of corresponding RAL API calls. The
motivation behind this is to hide the implementation details of I/O.
This design may also enable partial execution of the compiled module
when only some of the inputs are ready.
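A rough sketch of the rewritten entry function, assuming a recv/send style of RAL API; the names below (`!ral.context`, `ral.recv_input`, `ral.send_output`) are illustrative placeholders, not necessarily the actual dialect spelling:

```mlir
// After the pass (sketch): the RAL context is the only explicit argument.
func @main(%ctx: !ral.context) {
  // Receive the original input from RAL instead of taking it as an argument.
  %in = "ral.recv_input"(%ctx) {index = 0 : i64} : (!ral.context) -> memref<?xf32>
  // ... original computation on %in, producing %out ...
  // Send the original result back to RAL instead of returning it.
  "ral.send_output"(%ctx, %out) {index = 0 : i64} : (!ral.context, memref<?xf32>) -> ()
  return
}
```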
Copybara import of the project:
--
c4f20a89aed71181e75bcc5265723b88bde23240 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Add RAL (Runtime abstraction layer) Dialect
--
1991d4f80ab6087943956e1c0fec4940a22ab08d by Wenyi Zhao <reyizero@gmail.com>:
fix
PiperOrigin-RevId: 379317586
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50211
Support hlo-to-lhlo conversion for RealDynamicSliceOp and ReduceOp.
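For RealDynamicSliceOp the conversion looks roughly like this (illustrative shapes and names; output-buffer allocation elided):

```mlir
// mhlo form: start/limit/strides are runtime tensor operands.
%0 = "mhlo.real_dynamic_slice"(%arg0, %start, %limit, %strides)
    : (tensor<?xf32>, tensor<1xindex>, tensor<1xindex>, tensor<1xindex>) -> tensor<?xf32>

// lmhlo form (sketch): the same operands as memrefs, plus a preallocated
// output buffer as the trailing operand.
"lmhlo.real_dynamic_slice"(%arg0_buf, %start_buf, %limit_buf, %strides_buf, %out_buf)
    : (memref<?xf32>, memref<1xindex>, memref<1xindex>, memref<1xindex>, memref<?xf32>) -> ()
```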
Copybara import of the project:
--
c417b336670a1fc256f7026dfe8080e46d13d79a by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Bufferize RealDynamicSliceOp and ReduceOp
PiperOrigin-RevId: 378972113
This just adds support for it in the op, but keeps the production/uses as-is (e.g. single tensor or tuple), matching what XLA export requires. A follow-up would be to add a pass that re-tuples for export; then the canonical form could be changed. Tuple'ing, given control flow via regions and multi-result operations, does not add representational power, and all the get_tuple_element ops obscure the computation.
The old form allowed a single tensor or tuple. The new form allows a variadic number of tensors or tuples. Since tuples may be nested, the input could be (Tensor<..>, Tuple<Tensor<...>, Tuple<...>, ...>, Tensor<...>), and HLO_Tensor doesn't allow tuples.
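For context, this is the kind of tuple plumbing the paragraph refers to (illustrative types):

```mlir
// Every use of a tuple'd result has to go through mhlo.get_tuple_element,
// which obscures the underlying computation.
%t = "mhlo.tuple"(%a, %b) : (tensor<f32>, tensor<4xf32>) -> tuple<tensor<f32>, tensor<4xf32>>
%e0 = "mhlo.get_tuple_element"(%t) {index = 0 : i32} : (tuple<tensor<f32>, tensor<4xf32>>) -> tensor<f32>
%e1 = "mhlo.get_tuple_element"(%t) {index = 1 : i32} : (tuple<tensor<f32>, tensor<4xf32>>) -> tensor<4xf32>
```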
PiperOrigin-RevId: 378934388
Find shape equivalence classes among the operands and use them for better rank
specialization. If all operands are known to be of the same shape, we can
flatten them to rank one. If there are two shape equivalence classes, we can
generalize the scalar rank specialization cases.
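A sketch of the rank-one flattening for a single equivalence class, assuming unranked operands %a and %b known to share a shape (the ops shown illustrate the pattern, not the pass's exact output):

```mlir
// Compute the common shape once and flatten both operands to rank 1.
%shape = shape.shape_of %a : tensor<*xf32> -> tensor<?xindex>
%n = shape.num_elements %shape : tensor<?xindex> -> index
%flat_shape = tensor.from_elements %n : tensor<1xindex>
%a_flat = "mhlo.dynamic_reshape"(%a, %flat_shape) : (tensor<*xf32>, tensor<1xindex>) -> tensor<?xf32>
%b_flat = "mhlo.dynamic_reshape"(%b, %flat_shape) : (tensor<*xf32>, tensor<1xindex>) -> tensor<?xf32>
```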
PiperOrigin-RevId: 378844575
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50100
Support hlo-to-lhlo conversion for DynamicIotaOp and DynamicPadOp.
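For DynamicIotaOp the conversion looks roughly like this (illustrative; output-buffer allocation elided):

```mlir
// mhlo.dynamic_iota takes its output shape as an operand.
%0 = "mhlo.dynamic_iota"(%shape) {iota_dimension = 0 : i64}
    : (tensor<1xindex>) -> tensor<?xf32>

// lmhlo counterpart (sketch): shape operand plus a preallocated output buffer.
"lmhlo.dynamic_iota"(%shape_buf, %out_buf) {iota_dimension = 0 : i64}
    : (memref<1xindex>, memref<?xf32>) -> ()
```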
Copybara import of the project:
--
c3aae94954e35d3f8ad265f619ef9765665a5115 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] Bufferize DynamicIotaOp and DynamicPadOp
--
adc6996d70b804d61310d56a33fac975d70c8636 by Wenyi Zhao <reyizero@gmail.com>:
minor
PiperOrigin-RevId: 378733284
We should upcast F16 to F32 to prevent precision loss.
E.g. cosh(-9) would previously evaluate to 4042 instead of 4052.
This allows enabling the MLIR-generated kernel for the F16 type.
Also move the template instantiation for Sinh inside the #ifdef block;
this was missed in a previous commit.
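A minimal sketch of the upcast pattern in IR terms (illustrative; the actual rewrite lives in the kernel generator):

```mlir
// Compute cosh in f32 and truncate the result back to f16.
%x32 = "mhlo.convert"(%x) : (tensor<?xf16>) -> tensor<?xf32>
%y32 = "chlo.cosh"(%x32) : (tensor<?xf32>) -> tensor<?xf32>
%y = "mhlo.convert"(%y32) : (tensor<?xf32>) -> tensor<?xf16>
```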
PiperOrigin-RevId: 378635042
When merging rank specialization clusters, avoid duplicating operands. Fewer
operands usually allow better rank specialization.
PiperOrigin-RevId: 378445946
If the result of the slice is an empty tensor, do nothing.
This fixes a crash: we can't create a `concat` with an
empty operand range.
PiperOrigin-RevId: 378354956
Because mhlo::ConstantLike doesn't support complex types, we need to use
GetScalarOfType and broadcast it to the needed shape.
Disable the tf2xla fallback, now that MLIR fully supports Sinh.
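The emitted IR presumably looks something like this (an illustrative sketch, not the exact lowering):

```mlir
// Materialize a complex scalar constant and broadcast it to the operand's
// dynamic shape, since mhlo::ConstantLike can't produce complex constants.
%half = mhlo.constant dense<(5.000000e-01,0.000000e+00)> : tensor<complex<f32>>
%shape = shape.shape_of %x : tensor<?xcomplex<f32>> -> tensor<1xindex>
%b = "mhlo.dynamic_broadcast_in_dim"(%half, %shape)
    {broadcast_dimensions = dense<> : tensor<0xi64>}
    : (tensor<complex<f32>>, tensor<1xindex>) -> tensor<?xcomplex<f32>>
```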
PiperOrigin-RevId: 378123151
We should upcast F16 to F32 to prevent precision loss.
E.g. sinh(-9) would previously evaluate to -4042 instead of -4052.
This allows enabling the MLIR-generated kernel for the F16 type.
PiperOrigin-RevId: 377901896
Replace deprecated methods in lhlo_fuse_linalg.cc. The new structured op interface has been introduced in https://reviews.llvm.org/D103394.
PiperOrigin-RevId: 377875452
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49970
1. Add hlo-to-lhlo support for DynamicReshape and DynamicBroadcastInDim.
2. Add a flag `convert-to-lmhlo-only` to separate the following two cases:
   - hlo-to-lhlo only: simply lowers all mhlo ops to their lmhlo
     counterparts and does not apply any optimization (e.g. eliding
     buffer copies). Buffer optimization is not easy in the dynamic
     shape world, especially when control flow is involved, so we
     leave it to another dedicated pass.
   - hlo-to-lhlo-or-memref-directly: lowers some metadata-only mhlo
     ops (e.g. reshape) directly to the memref dialect and lowers the
     others to their lmhlo counterparts (see the sketch below).
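A sketch of the "memref-directly" path for a metadata-only op, assuming the reshape's target shape is already available as a shape buffer (illustrative types):

```mlir
// mhlo.dynamic_reshape only changes metadata, so it can become a
// memref.reshape on the underlying buffer instead of an lmhlo op.
%dst = memref.reshape %src(%new_shape)
    : (memref<*xf32>, memref<1xindex>) -> memref<?xf32>
```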
Copybara import of the project:
--
562bd65a368f6194405c4ae6900e3b4388a5ec03 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] bufferize DynamicReshape and DynamicBroadcastInDim
PiperOrigin-RevId: 377603395
Adds import/export/verifier support as well.
Also makes `channel_handle` uniform across mhlo.all_reduce and mhlo.all_gather.
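For illustration, a channel handle on mhlo.all_gather might look like this (the attribute spelling here is a sketch and may not match the dialect exactly):

```mlir
%0 = "mhlo.all_gather"(%arg0) {
  all_gather_dim = 1 : i64,
  // channel_handle now uses the same form as on mhlo.all_reduce.
  channel_handle = {handle = 1 : i64, type = 0 : i64},
  replica_groups = dense<[[0, 1]]> : tensor<1x2xi64>
} : (tensor<128x32xf32>) -> tensor<128x64xf32>
```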
PiperOrigin-RevId: 377323468
Fix usage of default constructor. Instead, always use the parameterized
constructor and make the maximum supported rank explicit.
PiperOrigin-RevId: 377037155
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/49598
This PR implements logic for lowering memref.tensor_load ops that are
inserted during `mhlo-legalize-to-lmhlo`.
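For reference, the op being lowered bridges from a buffer back to a tensor (a minimal example with an assumed static type):

```mlir
// memref.tensor_load materializes a tensor value from a buffer; instances
// inserted by mhlo-legalize-to-lmhlo must be lowered away afterwards.
%t = memref.tensor_load %buf : memref<4xf32>
```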
Copybara import of the project:
--
80eb377af4e02182e1aecc943a41ca5d7d1c2100 by Wenyi Zhao <reyizero@gmail.com>:
[MLIR][DISC] legalize tensor_load inserted during hlo-to-lhlo conversion
--
ac452fe3dcd591211cd5c59be9189fe2f7153b41 by Wenyi Zhao <reyizero@gmail.com>:
minor fix
--
6b36017f8632a06adbc3e05a62975fa641d0260f by Wenyi Zhao <reyizero@gmail.com>:
minor refine
--
846005cc76d0033112e47825c2e9a97790b6925f by Wenyi Zhao <reyizero@gmail.com>:
minor fix
--
f6a4becaa287d5ca323b2d152a4d0ae053730fd9 by Wenyi Zhao <reyizero@gmail.com>:
fix
--
5555749f60f7fce8f57962860ef65efccf0362ba by Wenyi Zhao <reyizero@gmail.com>:
fix
--
8873b9b6d9315c1199ca9f7c133ecf377ecd2fa6 by Wenyi Zhao <reyizero@gmail.com>:
fix
PiperOrigin-RevId: 376942547