mlir-hlo/lib/Dialect/mhlo/IR
Wenyi Zhao 23ebbb28d1 PR #50191: [MLIR][DISC] Add RAL (Runtime abstraction layer) Dialect
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/50191

DISC is an e2e flow, including both the compiler side and the runtime
side. On the runtime side, we have different targeting environments
(e.g. tensorflow, pytorch, or sometimes even a standalone binary). In
order to simplify the design of the compiler side, we design a Runtime
Abstraction Layer (RAL) to separate the compiler side from the runtime
side. Thus the compiler side only needs to target RAL itself, and it is
the responsibility of RAL to handle the differences between the
targeting environments.

One of the most important functions of RAL is to manage stateful
resources. To this end, it provides a context object and hides all
stateful operations behind this context, so the compiler side itself
doesn't need to care about resource initialization. For example, a
kernel must be loaded before it can be launched on GPU, but the load
should only be done once during the whole lifetime of the context in
order to achieve the best performance. Based on the
initialization-free interfaces provided by RAL, the compiler side can
focus on its core optimization logic and let RAL manage resource
state.
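
As a minimal sketch of this initialization-free idea (all names here,
e.g. RalContext, GetOrLoadModule, and ral_launch, are hypothetical
stand-ins rather than the actual RAL API), a context can lazily load
and cache a GPU module so the compiled code only ever calls a launch
wrapper:

  #include <iostream>
  #include <map>
  #include <mutex>
  #include <string>

  // Stand-in for a driver-level module handle (e.g. a loaded CUDA module).
  struct ModuleHandle { std::string blob; };

  class RalContext {
   public:
    // Returns the loaded module, loading it at most once per context lifetime.
    ModuleHandle* GetOrLoadModule(const std::string& key,
                                  const std::string& blob) {
      std::lock_guard<std::mutex> lock(mu_);
      auto it = modules_.find(key);
      if (it == modules_.end()) {
        // The expensive load happens only on first use.
        it = modules_.emplace(key, ModuleHandle{blob}).first;
        std::cout << "loaded module " << key << "\n";
      }
      return &it->second;
    }

   private:
    std::mutex mu_;
    std::map<std::string, ModuleHandle> modules_;
  };

  // The compiled code only sees an initialization-free launch call; the
  // context decides whether a load is needed first.
  void ral_launch(RalContext* ctx, const std::string& key,
                  const std::string& blob, const std::string& kernel) {
    ModuleHandle* m = ctx->GetOrLoadModule(key, blob);
    std::cout << "launch " << kernel << " from module of size "
              << m->blob.size() << "\n";
  }

  int main() {
    RalContext ctx;
    ral_launch(&ctx, "fusion_0", "<gpu binary>", "kernel_a");
    ral_launch(&ctx, "fusion_0", "<gpu binary>", "kernel_b");  // no reload
  }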

The context mentioned above is passed as a parameter to the entry
function, and all RAL APIs should always take the context as their
first argument. This CR also provides a pass to help ensure this
property. The pass rewrites the entry function to make sure its first
argument is the context. The pass also rewrites the entry function's
inputs and outputs. To be concrete, all of the original inputs and
outputs of the entry function are received from and sent to RAL
through a sequence of corresponding RAL API calls (see the sketch
below). The motivation behind this is to hide the implementation
details of I/O. This design may also enable partial execution of the
compiled module when some of the inputs are ready.
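
A hedged sketch of the resulting calling convention follows (again
with hypothetical names; ral_recv_input and ral_send_output are
illustrative stand-ins, not the real RAL API): the rewritten entry
function takes only the context, and all tensor I/O goes through
indexed RAL calls whose first argument is the context.

  #include <cstddef>
  #include <iostream>
  #include <map>
  #include <utility>
  #include <vector>

  using Buffer = std::vector<float>;

  struct RalContext {
    std::map<int, Buffer> inputs;   // filled in by the hosting environment
    std::map<int, Buffer> outputs;  // read back by the hosting environment
  };

  // Hypothetical RAL I/O calls; the context is always the first argument.
  Buffer ral_recv_input(RalContext* ctx, int index) {
    return ctx->inputs.at(index);
  }
  void ral_send_output(RalContext* ctx, int index, Buffer value) {
    ctx->outputs[index] = std::move(value);
  }

  // Shape of the rewritten entry function: no explicit tensor arguments
  // or results, only the context.
  void compiled_entry(RalContext* ctx) {
    Buffer a = ral_recv_input(ctx, 0);
    Buffer b = ral_recv_input(ctx, 1);
    Buffer sum(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
      sum[i] = a[i] + b[i];  // stand-in for the compiled body
    ral_send_output(ctx, 0, std::move(sum));
  }

  int main() {
    RalContext ctx;
    ctx.inputs[0] = {1, 2, 3};
    ctx.inputs[1] = {4, 5, 6};
    compiled_entry(&ctx);
    for (float v : ctx.outputs[0]) std::cout << v << " ";  // prints: 5 7 9
  }
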
Copybara import of the project:

--
c4f20a89aed71181e75bcc5265723b88bde23240 by Wenyi Zhao <reyizero@gmail.com>:

[MLIR][DISC] Add RAL (Runtime abstraction layer) Dialect

--
1991d4f80ab6087943956e1c0fec4940a22ab08d by Wenyi Zhao <reyizero@gmail.com>:

fix

PiperOrigin-RevId: 379317586
2021-06-14 11:27:43 -07:00
..
CMakeLists.txt PR #50191: [MLIR][DISC] Add RAL (Runtime abstraction layer) Dialect 2021-06-14 11:27:43 -07:00
chlo_canonicalize.td Add chlo.constant_like op which splats a constant to shape of operand 2020-08-11 14:54:48 -07:00
chlo_ops.cc PR #49454: [MLIR][DISC] Upgrade to use the new `reifyReturnTypeShapes` interface. 2021-05-24 10:11:55 -07:00
disc_ral_ops.cc PR #50191: [MLIR][DISC] Add RAL (Runtime abstraction layer) Dialect 2021-06-14 11:27:43 -07:00
hlo_ops.cc [HLO] Add AllReduceScatter to MHLO and LMHLO dialects. 2021-06-14 09:37:07 -07:00
hlo_ops_base_enums.cc [MLIR:HLO] Generate enum decls for HLO and LHLO GPU dialects. 2020-12-02 11:39:23 -08:00
hlo_ops_base_structs.cc [HLO] Add custom print/parse for convolution dimension numbers (in LMHLO) 2021-05-12 08:52:46 -07:00
hlo_ops_common.cc [HLO] Add AllReduceScatter to MHLO and LMHLO dialects. 2021-06-14 09:37:07 -07:00
hlo_patterns.td Restrict canonicalization to avoid changing type 2021-03-16 16:54:05 -07:00
infer_fusibility_op_interface.cc More cleanup in mlir-hlo to prepare for the standalone build 2020-08-03 19:28:00 -07:00
init.cc Add GPU specific LMHLO level ops 2020-10-14 11:23:55 -07:00
lhlo_gpu_ops.cc [XLA:GPU] Add AllReduce{Start,Done} to MLIR LHLO dialect. 2021-06-10 10:27:22 -07:00
lhlo_gpu_ops_enums.cc [MLIR:HLO] Generate enum decls for HLO and LHLO GPU dialects. 2020-12-02 11:39:23 -08:00
lhlo_gpu_ops_structs.cc Make LMHLO's Dot have the same power as MHLO's DotGeneral. 2020-10-15 15:09:06 -07:00
lhlo_ops.cc [HLO] Add AllReduceScatter to MHLO and LMHLO dialects. 2021-06-14 09:37:07 -07:00
lhlo_ops_structs.cc [MLIR:LHLO] Add optional call target arg mapping to LMHLO CustomCall operations. 2021-02-22 08:43:00 -08:00
mhlo_canonicalize.td PR #49228: [MLIR][DISC] porting dynamic shape related OPs to mhlo and lmhlo dialect 2021-05-20 23:16:47 -07:00