// mlir-hlo/lib/Dialect/mhlo/IR/mhlo_canonicalize.td

/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
// This is the canonicalize pattern definition file.
include "mlir/IR/OpBase.td"
include "mlir-hlo/Dialect/mhlo/IR/hlo_ops.td"
include "mlir-hlo/Dialect/mhlo/IR/hlo_utils.td"

def UnaryToBinaryEinsumEq : NativeCodeCall<
  "$_builder.getStringAttr(\",\" + $0.getValue().str())">;

// Convert a UnaryEinsumOp into a two-operand EinsumOp by prepending a
// redundant scalar constant as the first operand.
def UnaryEinsumToEinsum : Pat<
  (HLO_UnaryEinsumOp $operand, $equation),
  (HLO_EinsumOp (HLO_ConstOp (GetScalarOfType<1> $operand)),
                $operand, (UnaryToBinaryEinsumEq $equation))>;
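// For example (an illustrative IR sketch, not taken from a test file; names,
// types, and shapes are hypothetical):
//   %0 = "mhlo.unary_einsum"(%arg0) {einsum_config = "ab->a"}
// becomes
//   %cst = mhlo.constant dense<1.000000e+00> : tensor<f32>
//   %0 = "mhlo.einsum"(%cst, %arg0) {einsum_config = ",ab->a"}
// i.e. the equation gains a leading "," for the dummy scalar operand.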

// A dynamic reshape of a dynamic reshape is a dynamic reshape.
def RemoveRedundantDynamicReshape : Pat<
  (HLO_DynamicReshapeOp (HLO_DynamicReshapeOp $operand, $shape1), $shape2),
  (HLO_DynamicReshapeOp $operand, $shape2)>;
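// For example (an illustrative IR sketch with hypothetical names):
//   %0 = "mhlo.dynamic_reshape"(%arg0, %shape1)
//   %1 = "mhlo.dynamic_reshape"(%0, %shape2)
// folds to
//   %1 = "mhlo.dynamic_reshape"(%arg0, %shape2)
// since only the outermost target shape is observable in the result.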

// A dynamic broadcast of a dynamic reshape with the same shape operand
// is a dynamic reshape.
def RemoveRedundantDynamicBroadcast : Pat<
  (HLO_DynamicBroadcastInDimOp
      (HLO_DynamicReshapeOp $operand, $shape),
      $shape, IdentityBroadcastDims:$dims),
  (HLO_DynamicReshapeOp $operand, $shape)>;
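// For example (an illustrative IR sketch; the broadcast must use identity
// broadcast_dimensions and the same %shape operand as the reshape):
//   %0 = "mhlo.dynamic_reshape"(%arg0, %shape)
//   %1 = "mhlo.dynamic_broadcast_in_dim"(%0, %shape)
// folds to
//   %1 = "mhlo.dynamic_reshape"(%arg0, %shape)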

// Convert DynamicPadOp to PadOp if edge_padding_low, edge_padding_high, and
// interior_padding are all constants (HLO_ConstOp).
def DPadToPad : Pat<
  (HLO_DynamicPadOp HLO_Tensor:$input,
      HLO_Tensor:$padding_value,
      (HLO_ConstOp I64ElementsAttr:$edge_padding_low),
      (HLO_ConstOp I64ElementsAttr:$edge_padding_high),
      (HLO_ConstOp I64ElementsAttr:$interior_padding)),
  (HLO_PadOp $input, $padding_value,
      (CastIntElementsAttr $edge_padding_low),
      (CastIntElementsAttr $edge_padding_high),
      (CastIntElementsAttr $interior_padding))>;
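// For example (an illustrative IR sketch with hypothetical names), a
// dynamic_pad whose three padding operands are all constants:
//   %low = mhlo.constant dense<[1]> : tensor<1xi64>
//   %high = mhlo.constant dense<[2]> : tensor<1xi64>
//   %int = mhlo.constant dense<[0]> : tensor<1xi64>
//   %0 = "mhlo.dynamic_pad"(%arg0, %pad_val, %low, %high, %int)
// becomes a static pad whose attributes are taken from those constants:
//   %0 = "mhlo.pad"(%arg0, %pad_val)
//        {edge_padding_low = dense<[1]>, edge_padding_high = dense<[2]>,
//         interior_padding = dense<[0]>}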