[NFC] Fix typos in comments.

PiperOrigin-RevId: 333311070
Author: Rahul Joshi, 2020-09-23 09:45:22 -07:00; committed by TensorFlow MLIR Team
parent 08514eaa5f
commit 7d01a60de8
2 changed files with 11 additions and 11 deletions


@@ -106,7 +106,7 @@ pipeline using MLIR:
 *   `mhlo`: "meta"-HLO dialect ; similar to `xla_hlo`, but with extensions for
     dynamic shape support.
 *   `lmhlo`: "late"-"meta"-HLO, it is the IR after buffer allocation is
-    performed. In XLA the buffer allocation is a side-datastructure which keeps
+    performed. In XLA the buffer allocation is a side-data structure which keeps
     track of these informations, while this separate dialect materializes it in
     the IR.
@@ -114,7 +114,7 @@ We describe these in more details below.
 ### HLO Client Dialect: `chlo`.
-*   It was originaly designed to map the
+*   It was originally designed to map the
     [XLA client APIs](https://www.tensorflow.org/xla/operation_semantics) (e.g.,
     ops supports implicit broadcast and roughly modeled on XlaBuilder API)
     modulo support for dynamic shapes and additional ops required to support


@@ -16,19 +16,19 @@ limitations under the License.
 // This is the operation definition file for LMHLO, the "late" MHLO variant of
 // the dialect, which operates on buffers instead of tensors.
 //
-// This file largely overlaps with mhlo_ops.td at a logic level. It's tempting to
-// merge these two files together, but we need to consider the following
+// This file largely overlaps with hlo_ops.td at a logical level. It's tempting
+// to merge these two files together, but we need to consider the following
 // obstacles:
 // * We need to have a common representation for arguments. That is to say,
 //   HLO_Array<X> translates to HLO_Tensor<X> in HLO dialect, and
-//   Arg<LHLO_Buffer<X>, "", [Mem(Read|Write)]> in LHLO. Array types within tuples
-//   also need to be transformed.
+//   Arg<LHLO_Buffer<X>, "", [Mem(Read|Write)]> in LHLO. Array types within
+//   tuples also need to be transformed.
 // * As of now, TableGen's dag functions are not sufficient to accomplish the
 //   one above.
-// * Traits aren't identical, but need to be coped. For example,
+// * Traits aren't identical, but need to be copied. For example,
 //   SameOperandAndResultType in HLO corresponds to SameTypeOperands in LHLO.
 // * Also, currently HLO describes the API in XLA's client side, not service
 //   side. LHLO aims for the service side.
 #ifndef LHLO_OPS
 #define LHLO_OPS
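The trait mismatch that the comment above describes can be sketched in TableGen ODS. This is a hypothetical illustration only: the op name `example`, the base classes, and the exact argument lists are assumptions for the sake of the example, not definitions copied from hlo_ops.td or lhlo_ops.td.

```tablegen
// Hypothetical sketch: the same logical op declared in both dialects.
// HLO side: value-based. Tensor operand, tensor result, with a trait
// tying the result type to the operand type.
def HLO_ExampleOp : HLO_Op<"example", [SameOperandAndResultType]> {
  let arguments = (ins HLO_Tensor:$input);
  let results = (outs HLO_Tensor:$output);
}

// LHLO side: buffer-based. The "result" becomes an explicit output
// buffer argument with a write effect, so the corresponding trait is
// SameTypeOperands rather than SameOperandAndResultType.
def LHLO_ExampleOp : LHLO_Op<"example", [SameTypeOperands]> {
  let arguments = (ins
    Arg<LHLO_Buffer, "", [MemRead]>:$input,
    Arg<LHLO_Buffer, "", [MemWrite]>:$output
  );
}
```

This is why a mechanical merge of the two files is hard: both the argument/result shape and the trait list have to be rewritten per dialect, not just shared.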