<!-- Autogenerated by mlir-tblgen; don't manually edit -->

# Dialect 'onnx' definition

[TOC]

## Operation definition

### onnx.Abs (ONNXAbsOp)
ONNX Abs operation

#### Description:

"Absolute takes one input data (Tensor<T>) and produces one output data"
"(Tensor<T>) where the absolute is, y = abs(x), is applied to"
"the tensor elementwise."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `Y`: memref of any type values or tensor of any type values

### onnx.Acos (ONNXAcosOp)
ONNX Acos operation

#### Description:

"Calculates the arccosine (inverse of cosine) of the given input tensor, element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Acosh (ONNXAcoshOp)
ONNX Acosh operation

#### Description:

"Calculates the hyperbolic arccosine of the given input tensor element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Add (ONNXAddOp)
ONNX Add operation

#### Description:

"Performs element-wise binary addition (with Numpy-style broadcasting support)."
""
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."

#### Operands:

1. `A`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `C`: memref of any type values or tensor of any type values

### onnx.And (ONNXAndOp)
ONNX And operation

#### Description:

"Returns the tensor resulted from performing the `and` logical operation"
"elementwise on the input tensors `A` and `B` (with Numpy-style broadcasting support)."
""
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."

#### Operands:

1. `A`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `C`: memref of any type values or tensor of any type values

### onnx.ArgMax (ONNXArgMaxOp)
ONNX ArgMax operation

#### Description:

"Computes the indices of the max elements of the input tensor's element along the "
"provided axis. The resulted tensor has the same rank as the input if keepdims equal 1."
"If keepdims equal 0, then the resulted tensor has the reduced dimension pruned. "
"The type of the output tensor is integer."

#### Operands:

1. `data`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute |
| `keepdims` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `reduced`: memref of any type values or tensor of any type values

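The reduction described above maps naturally onto NumPy. The sketch below is only an illustration of the documented semantics, not the onnx-mlir lowering; the helper name `argmax_reference` and the NumPy formulation are assumptions made for the example.

```python
import numpy as np

def argmax_reference(data, axis=0, keepdims=1):
    # Indices of the max elements along `axis`; the output type is integer.
    reduced = np.argmax(data, axis=axis)
    if keepdims:
        # Keep the reduced axis with size 1 so the output rank matches the input.
        reduced = np.expand_dims(reduced, axis=axis)
    return reduced.astype(np.int64)

x = np.array([[2.0, 1.0], [3.0, 10.0]])
print(argmax_reference(x, axis=1, keepdims=1))  # [[0], [1]]
```
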
### onnx.ArgMin (ONNXArgMinOp)
ONNX ArgMin operation

#### Description:

"Computes the indices of the min elements of the input tensor's element along the "
"provided axis. The resulted tensor has the same rank as the input if keepdims equal 1."
"If keepdims equal 0, then the resulted tensor has the reduced dimension pruned. "
"The type of the output tensor is integer."

#### Operands:

1. `data`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute |
| `keepdims` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `reduced`: memref of any type values or tensor of any type values

### onnx.Asin (ONNXAsinOp)
ONNX Asin operation

#### Description:

"Calculates the arcsine (inverse of sine) of the given input tensor, element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Asinh (ONNXAsinhOp)
ONNX Asinh operation

#### Description:

"Calculates the hyperbolic arcsine of the given input tensor element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Atan (ONNXAtanOp)
ONNX Atan operation

#### Description:

"Calculates the arctangent (inverse of tangent) of the given input tensor, element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Atanh (ONNXAtanhOp)
ONNX Atanh operation

#### Description:

"Calculates the hyperbolic arctangent of the given input tensor element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.AveragePool (ONNXAveragePoolOp)
ONNX AveragePool operation

#### Description:

"AveragePool consumes an input tensor X and applies average pooling across"
" the tensor according to kernel sizes, stride sizes, and pad lengths."
" average pooling consisting of computing the average on all values of a"
" subset of the input tensor according to the kernel size and downsampling the"
" data into the output tensor Y for further processing. The output spatial shape will be following:"
" ```"
" output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)"
" ```"
" or"
" ```"
" output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - kernel_spatial_shape[i]) / strides_spatial_shape[i] + 1)"
" ```"
" if ceil_mode is enabled"
""
" ```"
" * pad_shape[i] is sum of pads along axis i"
" ```"
""
" `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following:"
" ```"
" VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - kernel_spatial_shape[i] + 1) / strides_spatial_shape[i])"
" SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])"
" ```"
" And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:"
" ```"
" pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + kernel_spatial_shape[i] - input_spatial_shape[i]"
" ```"
" The output of each pooling window is divided by the number of elements (exclude pad when attribute count_include_pad is zero)."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `auto_pad` | `StringAttr` | string attribute |
| `ceil_mode` | `IntegerAttr` | 64-bit integer attribute |
| `count_include_pad` | `IntegerAttr` | 64-bit integer attribute |
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute |
| `pads` | `ArrayAttr` | 64-bit integer array attribute |
| `strides` | `ArrayAttr` | 64-bit integer array attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values

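The output-spatial-shape formula above can be checked with a few lines of Python. This is a minimal sketch of the non-`auto_pad` case only; the helper name `avgpool_output_dim` is an assumption for the example and does not come from onnx-mlir.

```python
import math

def avgpool_output_dim(input_dim, pad_total, kernel, stride, ceil_mode=0):
    # output_spatial_shape[i] from the description; `pad_total` is the
    # sum of the pads along this axis (pad_shape[i]).
    value = (input_dim + pad_total - kernel) / stride + 1
    return math.ceil(value) if ceil_mode else math.floor(value)

# 32-wide input, kernel 3, stride 2, no padding:
print(avgpool_output_dim(32, 0, 3, 2))               # 15
print(avgpool_output_dim(32, 0, 3, 2, ceil_mode=1))  # 16
```
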
### onnx.BatchNormalization (ONNXBatchNormalizationOp)
ONNX BatchNormalization operation

#### Description:

"Carries out batch normalization as described in the paper"
"https://arxiv.org/abs/1502.03167. Depending on the mode it is being run,"
"there are multiple cases for the number of outputs, which we list below:"
""
"Output case #1: Y, mean, var, saved_mean, saved_var (training mode)"
"Output case #2: Y (test mode)"
""
"For previous (deprecated) non-spatial cases, implementors are suggested"
"to flatten the input shape to (N x C*D1*D2 ..*Dn) before a BatchNormalization Op."
"This operator has **optional** inputs/outputs. See [the doc](IR.md) for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted."

#### Operands:

1. `X`: memref of any type values or tensor of any type values
1. `scale`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values
1. `mean`: memref of any type values or tensor of any type values
1. `var`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `epsilon` | `FloatAttr` | 32-bit float attribute |
| `momentum` | `FloatAttr` | 32-bit float attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values
1. `out_mean`: memref of any type values or tensor of any type values
1. `out_var`: memref of any type values or tensor of any type values
1. `saved_mean`: memref of any type values or tensor of any type values
1. `saved_var`: memref of any type values or tensor of any type values

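The description lists the outputs but not the normalization formula itself. The sketch below uses the standard per-channel formula from the referenced paper, y = scale * (x - mean) / sqrt(var + epsilon) + B, for the single-output test-mode case; the helper name and the NumPy formulation are illustrative assumptions, not the onnx-mlir implementation.

```python
import numpy as np

def batchnorm_test_mode(x, scale, b, mean, var, epsilon=1e-5):
    # Test-mode (single-output) case for an N x C x D1 x ... x Dn input:
    # normalize each channel C with the stored running statistics.
    shape = (1, -1) + (1,) * (x.ndim - 2)   # broadcast over N and spatial dims
    return (scale.reshape(shape) * (x - mean.reshape(shape))
            / np.sqrt(var.reshape(shape) + epsilon) + b.reshape(shape))
```
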
### onnx.BitShift (ONNXBitShiftOp)
ONNX BitShift operation

#### Description:

"Bitwise shift operator performs element-wise operation. For each input element, if the"
" attribute "direction" is "RIGHT", this operator moves its binary representation toward"
" the right side so that the input value is effectively decreased. If the attribute "direction""
" is "LEFT", bits of binary representation moves toward the left side, which results in the"
" increase of its actual value. The input X is the tensor to be shifted and another input"
" Y specifies the amounts of shifting. For example, if "direction" is "Right", X is [1, 4],"
" and S is [1, 1], the corresponding output Z would be [0, 2]. If "direction" is "LEFT" with"
" X=[1, 2] and S=[1, 2], the corresponding output Y would be [2, 8]."
" "
" Because this operator supports Numpy-style broadcasting, X's and Y's shapes are"
" not necessarily identical."
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."

#### Operands:

1. `X`: memref of any type values or tensor of any type values
1. `Y`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `direction` | `StringAttr` | string attribute |

#### Results:

1. `Z`: memref of any type values or tensor of any type values

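The two worked examples in the description can be reproduced with NumPy's shift operators. This is only a sketch of the documented semantics; the helper name `bitshift_reference` is an assumption for the example.

```python
import numpy as np

def bitshift_reference(x, amounts, direction="RIGHT"):
    # Element-wise shift of X by the amounts in Y; inputs broadcast as needed.
    return x >> amounts if direction.upper() == "RIGHT" else x << amounts

print(bitshift_reference(np.array([1, 4]), np.array([1, 1])))          # [0 2]
print(bitshift_reference(np.array([1, 2]), np.array([1, 2]), "LEFT"))  # [2 8]
```
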
### onnx.Cast (ONNXCastOp)
ONNX Cast operation

#### Description:

"The operator casts the elements of a given input tensor to a data type"
"specified by the 'to' argument and returns an output tensor of the same size in"
"the converted type. The 'to' argument must be one of the data types specified"
"in the 'DataType' enum field in the TensorProto message."
""
"Casting from string tensor in plain (e.g., "3.14" and "1000") and scientific numeric representations"
"(e.g., "1e-5" and "1E8") to float types is supported. For example, converting string "100.5" to an integer may"
"result in 100. There are some string literals reserved for special floating-point values;"
""+INF" (and "INF"), "-INF", and "NaN" are positive infinity, negative infinity, and not-a-number, respectively."
"Any string which can exactly match "+INF" in a case-insensitive way would be mapped to positive infinity. Similarly,"
"this case-insensitive rule is applied to "INF" and "NaN". When casting from numeric tensors"
"to string tensors, plain floating-point representation (such as "314.15926") would be used. "
"Converting non-numerical-literal string such as "Hello World!" is an undefined behavior. Cases "
"of converting string representing floating-point arithmetic value, such as "2.718", to INT is an undefined behavior."
""
"Conversion from a numerical type to any numerical type is always allowed."
"User must be aware of precision loss and value change caused by range difference between two types."
"For example, a 64-bit float 3.1415926459 may be rounded to a 32-bit float 3.141592. Similarly, converting"
"an integer 36 to Boolean may produce 1 because we truncate bits which can't be stored in the targeted type."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `to` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Ceil (ONNXCeilOp)
ONNX Ceil operation

#### Description:

"Ceil takes one input data (Tensor<T>) and produces one output data"
"(Tensor<T>) where the ceil is, y = ceil(x), is applied to"
"the tensor elementwise."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `Y`: memref of any type values or tensor of any type values

### onnx.Clip (ONNXClipOp)
ONNX Clip operation

#### Description:

"Clip operator limits the given input within an interval. The interval is"
"specified by the inputs 'min' and 'max'. They default to"
"numeric_limits::lowest() and numeric_limits::max(), respectively."

#### Operands:

1. `input`: memref of any type values or tensor of any type values
1. `min`: memref of any type values or tensor of any type values
1. `max`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

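A minimal NumPy sketch of the interval-limiting behaviour described above, with the documented defaults approximated by infinities; the helper name `clip_reference` is an assumption for the example.

```python
import numpy as np

def clip_reference(x, min_val=None, max_val=None):
    lo = -np.inf if min_val is None else min_val   # stands in for numeric_limits::lowest()
    hi = np.inf if max_val is None else max_val    # stands in for numeric_limits::max()
    return np.clip(x, lo, hi)

print(clip_reference(np.array([-2.0, 0.5, 3.0]), min_val=-1.0, max_val=1.0))  # [-1.   0.5  1. ]
```
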
### onnx.Compress (ONNXCompressOp)
ONNX Compress operation

#### Description:

"Selects slices from an input tensor along a given axis where condition evaluates to True for each axis index."
" In case axis is not provided, input is flattened before elements are selected."
" Compress behaves like numpy.compress: https://docs.scipy.org/doc/numpy/reference/generated/numpy.compress.html"

#### Operands:

1. `input`: memref of any type values or tensor of any type values
1. `condition`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.ConcatFromSequence (ONNXConcatFromSequenceOp)
ONNX ConcatFromSequence operation

#### Description:

"Concatenate a sequence of tensors into a single tensor."
"All input tensors must have the same shape, except for the dimension size of the axis to concatenate on."
"By default 'new_axis' is 0, the behavior is similar to numpy.concatenate."
"When 'new_axis' is 1, the behavior is similar to numpy.stack."

#### Operands:

1. `input_sequence`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute |
| `new_axis` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `concat_result`: memref of any type values or tensor of any type values

### onnx.Concat (ONNXConcatOp)
ONNX Concat operation

#### Description:

"Concatenate a list of tensors into a single tensor. All input tensors must have the same shape, except for the dimension size of the axis to concatenate on."

#### Operands:

1. `inputs`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `concat_result`: memref of any type values or tensor of any type values

### onnx.ConstantOfShape (ONNXConstantOfShapeOp)
ONNX ConstantOfShape operation

#### Description:

"Generate a tensor with given value and shape."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `value` | `Attribute` | any attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Constant (ONNXConstantOp)
ONNX Constant operation

#### Description:

"A constant tensor. Exactly one of the two attributes, either value or sparse_value,"
"must be specified."

#### Operands:

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `sparse_value` | `Attribute` | any attribute |
| `value` | `Attribute` | any attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.ConvInteger (ONNXConvIntegerOp)
ONNX ConvInteger operation

#### Description:

"The integer convolution operator consumes an input tensor, its zero-point, a filter, and its zero-point,"
"and computes the output. The production MUST never overflow. The accumulation may overflow if and only if in 32 bits."

#### Operands:

1. `x`: memref of any type values or tensor of any type values
1. `w`: memref of any type values or tensor of any type values
1. `x_zero_point`: memref of any type values or tensor of any type values
1. `w_zero_point`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `auto_pad` | `StringAttr` | string attribute |
| `dilations` | `ArrayAttr` | 64-bit integer array attribute |
| `group` | `IntegerAttr` | 64-bit integer attribute |
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute |
| `pads` | `ArrayAttr` | 64-bit integer array attribute |
| `strides` | `ArrayAttr` | 64-bit integer array attribute |

#### Results:

1. `y`: memref of any type values or tensor of any type values

### onnx.ConvNoBias (ONNXConvNoBiasOp)
ONNX Conv operation with no Bias operand.

#### Description:

"The convolution operator consumes an input tensor and a filter, and"
"computes the output."

#### Operands:

1. `X`: memref of any type values or tensor of any type values
1. `W`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `auto_pad` | `StringAttr` | string attribute |
| `dilations` | `ArrayAttr` | 64-bit integer array attribute |
| `group` | `IntegerAttr` | 64-bit integer attribute |
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute |
| `pads` | `ArrayAttr` | 64-bit integer array attribute |
| `strides` | `ArrayAttr` | 64-bit integer array attribute |

#### Results:

1. `o_Y`: memref of any type values or tensor of any type values

### onnx.Conv (ONNXConvOp)
ONNX Conv operation

#### Description:

"The convolution operator consumes an input tensor and a filter, and"
"computes the output."

#### Operands:

1. `X`: memref of any type values or tensor of any type values
1. `W`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `auto_pad` | `StringAttr` | string attribute |
| `dilations` | `ArrayAttr` | 64-bit integer array attribute |
| `group` | `IntegerAttr` | 64-bit integer attribute |
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute |
| `pads` | `ArrayAttr` | 64-bit integer array attribute |
| `strides` | `ArrayAttr` | 64-bit integer array attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values

### onnx.ConvTranspose (ONNXConvTransposeOp)
ONNX ConvTranspose operation

#### Description:

"The convolution transpose operator consumes an input tensor and a filter,"
"and computes the output."
""
"If the pads parameter is provided the shape of the output is calculated via the following equation:"
""
" output_shape[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - pads[start_i] - pads[end_i]"
""
"output_shape can also be explicitly specified in which case pads values are auto generated using these equations:"
""
" total_padding[i] = stride[i] * (input_size[i] - 1) + output_padding[i] + ((kernel_shape[i] - 1) * dilations[i] + 1) - output_shape[i]"
" If (auto_pads != SAME_UPPER): pads[start_i] = total_padding[i]/2; pads[end_i] = total_padding[i] - (total_padding[i]/2)"
" Else: pads[start_i] = total_padding[i] - (total_padding[i]/2); pads[end_i] = (total_padding[i]/2)."

#### Operands:

1. `X`: memref of any type values or tensor of any type values
1. `W`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `auto_pad` | `StringAttr` | string attribute |
| `dilations` | `ArrayAttr` | 64-bit integer array attribute |
| `group` | `IntegerAttr` | 64-bit integer attribute |
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute |
| `output_padding` | `ArrayAttr` | 64-bit integer array attribute |
| `output_shape` | `ArrayAttr` | 64-bit integer array attribute |
| `pads` | `ArrayAttr` | 64-bit integer array attribute |
| `strides` | `ArrayAttr` | 64-bit integer array attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values

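The two shape equations in the description (output shape from pads, and auto-generated pads from an explicit output shape) translate directly into a short Python sketch. The helper names below are assumptions for the example, not part of onnx-mlir.

```python
def convtranspose_output_dim(input_size, stride, kernel, dilation=1,
                             output_padding=0, pad_begin=0, pad_end=0):
    # output_shape[i] when the pads attribute is provided.
    return (stride * (input_size - 1) + output_padding
            + ((kernel - 1) * dilation + 1) - pad_begin - pad_end)

def convtranspose_auto_pads(input_size, stride, kernel, output_size,
                            dilation=1, output_padding=0, same_upper=False):
    # total_padding[i] and its begin/end split when output_shape is explicit.
    total = (stride * (input_size - 1) + output_padding
             + ((kernel - 1) * dilation + 1) - output_size)
    if not same_upper:
        begin = total // 2
        end = total - begin
    else:
        end = total // 2
        begin = total - end
    return begin, end

print(convtranspose_output_dim(3, stride=2, kernel=3))      # 7
print(convtranspose_auto_pads(3, 2, 3, output_size=6))      # (0, 1)
```
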
### onnx.Cos (ONNXCosOp)
ONNX Cos operation

#### Description:

"Calculates the cosine of the given input tensor, element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Cosh (ONNXCoshOp)
ONNX Cosh operation

#### Description:

"Calculates the hyperbolic cosine of the given input tensor element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.CumSum (ONNXCumSumOp)
ONNX CumSum operation

#### Description:

"Performs cumulative sum of the input elements along the given axis."
"By default, it will do the sum inclusively meaning the first element is copied as is."
"Through an `exclusive` attribute, this behavior can change to exclude the first element."
"It can also perform summation in the opposite direction of the axis. For that, set `reverse` attribute to 1."
""
"Example:"
"```"
"input_x = [1, 2, 3]"
"axis=0"
"output = [1, 3, 6]"
"exclusive=1"
"output = [0, 1, 3]"
"exclusive=0"
"reverse=1"
"output = [6, 5, 3]"
"exclusive=1"
"reverse=1"
"output = [5, 3, 0]"
"```"

#### Operands:

1. `x`: memref of any type values or tensor of any type values
1. `axis`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `exclusive` | `IntegerAttr` | 64-bit integer attribute |
| `reverse` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `y`: memref of any type values or tensor of any type values

### onnx.DepthToSpace (ONNXDepthToSpaceOp)
ONNX DepthToSpace operation

#### Description:

"DepthToSpace rearranges (permutes) data from depth into blocks of spatial data."
"This is the reverse transformation of SpaceToDepth. More specifically, this op outputs a copy of"
"the input tensor where values from the depth dimension are moved in spatial blocks to the height"
"and width dimensions. By default, `mode` = `DCR`."
"In the DCR mode, elements along the depth dimension from the input tensor are rearranged in the"
"following order: depth, column, and then row. The output y is computed from the input x as below:"
""
"b, c, h, w = x.shape"
""
"tmp = np.reshape(x, [b, blocksize, blocksize, c // (blocksize**2), h, w])"
""
"tmp = np.transpose(tmp, [0, 3, 4, 1, 5, 2])"
""
"y = np.reshape(tmp, [b, c // (blocksize**2), h * blocksize, w * blocksize])"
""
"In the CRD mode, elements along the depth dimension from the input tensor are rearranged in the"
"following order: column, row, and the depth. The output y is computed from the input x as below:"
""
"b, c, h, w = x.shape"
""
"tmp = np.reshape(x, [b, c // (blocksize ** 2), blocksize, blocksize, h, w])"
""
"tmp = np.transpose(tmp, [0, 1, 4, 2, 5, 3])"
""
"y = np.reshape(tmp, [b, c // (blocksize ** 2), h * blocksize, w * blocksize])"

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `blocksize` | `IntegerAttr` | 64-bit integer attribute |
| `mode` | `StringAttr` | string attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.DequantizeLinear (ONNXDequantizeLinearOp)
ONNX DequantizeLinear operation

#### Description:

"The linear dequantization operator. It consumes a quantized tensor, a scale, a zero point to compute the full precision tensor."
"The dequantization formula is y = (x - x_zero_point) * x_scale. 'x_scale' and 'x_zero_point' must have same shape."
"'x_zero_point' and 'x' must have same type. 'x' and 'y' must have same shape. In the case of dequantizing int32,"
"there's no zero point (zero point is supposed to be 0)."

#### Operands:

1. `x`: memref of any type values or tensor of any type values
1. `x_scale`: memref of any type values or tensor of any type values
1. `x_zero_point`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `y`: memref of any type values or tensor of any type values

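A minimal NumPy sketch of the dequantization formula y = (x - x_zero_point) * x_scale stated above; the helper name `dequantize_linear` and the uint8 example values are assumptions for the illustration.

```python
import numpy as np

def dequantize_linear(x, x_scale, x_zero_point=None):
    # y = (x - x_zero_point) * x_scale; int32 inputs carry no zero point.
    zp = 0 if x_zero_point is None else x_zero_point.astype(np.int32)
    return (x.astype(np.int32) - zp).astype(np.float32) * x_scale

x = np.array([0, 3, 128, 255], dtype=np.uint8)
print(dequantize_linear(x, np.float32(2.0), np.uint8(128)))  # [-256. -250.    0.  254.]
```
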
### onnx.Det (ONNXDetOp)
ONNX Det operation

#### Description:

"Det calculates determinant of a square matrix or batches of square matrices."
"Det takes one input tensor of shape `[*, M, M]`, where `*` is zero or more batch dimensions,"
"and the inner-most 2 dimensions form square matrices."
"The output is a tensor of shape `[*]`, containing the determinants of all input submatrices."
"e.g., When the input is 2-D, the output is a scalar(shape is empty: `[]`)."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `Y`: memref of any type values or tensor of any type values

### onnx.Div (ONNXDivOp)
ONNX Div operation

#### Description:

"Performs element-wise binary division (with Numpy-style broadcasting support)."
""
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."

#### Operands:

1. `A`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `C`: memref of any type values or tensor of any type values

### onnx.Dropout (ONNXDropoutOp)
ONNX Dropout operation

#### Description:

"Dropout takes one input floating tensor and produces two tensor outputs,"
"output (floating tensor) and mask (`Tensor<bool>`). Depending on whether it is"
"in test mode or not, the output Y will either be a random dropout, or a simple"
"copy of the input. Note that our implementation of Dropout does scaling in"
"the training phase, so during testing nothing needs to be done."
"This operator has **optional** inputs/outputs. See [the doc](IR.md) for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted."

#### Operands:

1. `data`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `ratio` | `FloatAttr` | 32-bit float attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values
1. `mask`: memref of any type values or tensor of any type values

### onnx.DynamicQuantizeLinear (ONNXDynamicQuantizeLinearOp)
ONNX DynamicQuantizeLinear operation

#### Description:

"A Function to fuse calculation for Scale, Zero Point and FP32->8Bit conversion of FP32 Input data."
"Outputs Scale, ZeroPoint and Quantized Input for a given FP32 Input."
"Scale is calculated as:"
"```"
" y_scale = (max(x) - min(x))/(qmax - qmin)"
" * where qmax and qmin are max and min values for quantization range .i.e [0, 255] in case of uint8"
" * data range is adjusted to include 0."
"```"
"Zero point is calculated as:"
"```"
"intermediate_zero_point = (qmin - min(x))/(qmax - qmin)"
"y_zero_point = cast(round(saturate(intermediate_zero_point)))"
"* where qmax and qmin are max and min values for quantization range .i.e [0, 255] in case of uint8"
"* for saturation, it saturates to [0, 255] if it's uint8, or [-127, 127] if it's int8. Right now only uint8 is supported."
"* rounding to nearest ties to even."
"```"
"Data quantization formula is:"
"```"
"y = saturate (round (x / y_scale) + y_zero_point)"
"* for saturation, it saturates to [0, 255] if it's uint8, or [-127, 127] if it's int8. Right now only uint8 is supported."
"* rounding to nearest ties to even."
"```"

#### Operands:

1. `x`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `y`: memref of any type values or tensor of any type values
1. `y_scale`: memref of any type values or tensor of any type values
1. `y_zero_point`: memref of any type values or tensor of any type values

### onnx.Elu (ONNXEluOp)
ONNX Elu operation

#### Description:

"Elu takes one input data (Tensor<T>) and produces one output data"
"(Tensor<T>) where the function `f(x) = alpha * (exp(x) - 1.) for x <"
"0`, `f(x) = x for x >= 0`., is applied to the tensor elementwise."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `alpha` | `FloatAttr` | 32-bit float attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values

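The piecewise function in the description is easy to express directly in NumPy. This is an illustrative sketch only; the helper name `elu_reference` is an assumption for the example.

```python
import numpy as np

def elu_reference(x, alpha=1.0):
    # f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)

print(elu_reference(np.array([-1.0, 0.0, 2.0])))  # [-0.63212056  0.          2.        ]
```
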
### onnx.EntryPoint (ONNXEntryPointOp)
Indicate ONNX entry point

#### Description:

The "onnx.EntryPoint" function indicates the main entry point of an ONNX model.

#### Operands:

#### Attributes:

#### Results:

### onnx.Equal (ONNXEqualOp)
ONNX Equal operation

#### Description:

"Returns the tensor resulted from performing the `equal` logical operation"
"elementwise on the input tensors `A` and `B` (with Numpy-style broadcasting support)."
""
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."

#### Operands:

1. `A`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `C`: memref of any type values or tensor of any type values

### onnx.Erf (ONNXErfOp)
ONNX Erf operation

#### Description:

"Computes the error function of the given input tensor element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Exp (ONNXExpOp)
ONNX Exp operation

#### Description:

"Calculates the exponential of the given input tensor, element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Expand (ONNXExpandOp)
ONNX Expand operation

#### Description:

"Broadcast the input tensor following the given shape and the broadcast rule."
"The broadcast rule is similar to numpy.array(input) * numpy.ones(shape):"
"Dimensions are right-aligned;"
"Two corresponding dimensions must have the same value, or one of them is equal to 1."
"Also, this operator is similar to numpy.broadcast_to(input, shape),"
"but the major difference is numpy.broadcast_to() does not allow shape to be smaller than input.size()."
"It is possible that the output.shape is not equal to shape, when some dimensions in shape is equal to 1,"
"or the shape.ndim < input.shape.ndim."

#### Operands:

1. `input`: memref of any type values or tensor of any type values
1. `shape`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.EyeLike (ONNXEyeLikeOp)
ONNX EyeLike operation

#### Description:

"Generate a 2D tensor (matrix) with ones on the diagonal and zeros everywhere else. Only 2D"
"tensors are supported, i.e. input T1 must be of rank 2. The shape of the output tensor is the"
"same as the input tensor. The data type can be specified by the 'dtype' argument. If"
"'dtype' is not specified, then the type of input tensor is used. By default, the main diagonal"
"is populated with ones, but attribute 'k' can be used to populate upper or lower diagonals."
"The 'dtype' argument must be one of the data types specified in the 'DataType' enum field in the"
"TensorProto message and be valid as an output type."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `dtype` | `IntegerAttr` | 64-bit integer attribute |
| `k` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Flatten (ONNXFlattenOp)
ONNX Flatten operation

#### Description:

"Flattens the input tensor into a 2D matrix. If input tensor has shape"
"(d_0, d_1, ... d_n) then the output will have shape"
"(d_0 X d_1 ... d_(axis-1), d_axis X d_(axis+1) ... X dn)."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

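The 2D output shape described above can be computed with a one-line reshape. This is a sketch of the documented shape rule only; the helper name `flatten_reference` is an assumption for the example.

```python
import numpy as np

def flatten_reference(x, axis=1):
    # Collapse dims before `axis` into the first output dim and the rest
    # into the second: (d_0 * ... * d_(axis-1), d_axis * ... * d_n).
    outer = int(np.prod(x.shape[:axis]))
    return x.reshape(outer, -1)

x = np.zeros((2, 3, 4, 5))
print(flatten_reference(x, axis=2).shape)  # (6, 20)
```
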
### onnx.Floor (ONNXFloorOp)
ONNX Floor operation

#### Description:

"Floor takes one input data (Tensor<T>) and produces one output data"
"(Tensor<T>) where the floor is, y = floor(x), is applied to"
"the tensor elementwise."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `Y`: memref of any type values or tensor of any type values

### onnx.GRU (ONNXGRUOp)
ONNX GRU operation

#### Description:

"Computes a one-layer GRU. This operator is usually supported via some custom"
"implementation such as CuDNN."
""
"Notations:"
""
"`X` - input tensor"
""
"`z` - update gate"
""
"`r` - reset gate"
""
"`h` - hidden gate"
""
"`t` - time step (t-1 means previous time step)"
""
"`W[zrh]` - W parameter weight matrix for update, reset, and hidden gates"
""
"`R[zrh]` - R recurrence weight matrix for update, reset, and hidden gates"
""
"`Wb[zrh]` - W bias vectors for update, reset, and hidden gates"
""
"`Rb[zrh]` - R bias vectors for update, reset, and hidden gates"
""
"`WB[zrh]` - W parameter weight matrix for backward update, reset, and hidden gates"
""
"`RB[zrh]` - R recurrence weight matrix for backward update, reset, and hidden gates"
""
"`WBb[zrh]` - W bias vectors for backward update, reset, and hidden gates"
""
"`RBb[zrh]` - R bias vectors for backward update, reset, and hidden gates"
""
"`H` - Hidden state"
""
"`num_directions` - 2 if direction == bidirectional else 1"
""
"Activation functions:"
""
" Relu(x) - max(0, x)"
""
" Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})"
""
" Sigmoid(x) - 1/(1 + e^{-x})"
""
" (NOTE: Below are optional)"
""
" Affine(x) - alpha*x + beta"
""
" LeakyRelu(x) - x if x >= 0 else alpha * x"
""
" ThresholdedRelu(x) - x if x >= alpha else 0"
""
" ScaledTanh(x) - alpha*Tanh(beta*x)"
""
" HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)"
""
" Elu(x) - x if x >= 0 else alpha*(e^x - 1)"
""
" Softsign(x) - x/(1 + |x|)"
""
" Softplus(x) - log(1 + e^x)"
""
"Equations (Default: f=Sigmoid, g=Tanh):"
""
" - zt = f(Xt*(Wz^T) + Ht-1*(Rz^T) + Wbz + Rbz)"
""
" - rt = f(Xt*(Wr^T) + Ht-1*(Rr^T) + Wbr + Rbr)"
""
" - ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*(Rh^T) + Rbh + Wbh) # default, when linear_before_reset = 0"
""
" - ht = g(Xt*(Wh^T) + (rt (.) (Ht-1*(Rh^T) + Rbh)) + Wbh) # when linear_before_reset != 0"
""
" - Ht = (1 - zt) (.) ht + zt (.) Ht-1"
"This operator has **optional** inputs/outputs. See [the doc](IR.md) for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted."

#### Operands:

1. `X`: memref of any type values or tensor of any type values
1. `W`: memref of any type values or tensor of any type values
1. `R`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values
1. `sequence_lens`: memref of any type values or tensor of any type values
1. `initial_h`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `activation_alpha` | `ArrayAttr` | 32-bit float array attribute |
| `activation_beta` | `ArrayAttr` | 32-bit float array attribute |
| `activations` | `ArrayAttr` | string array attribute |
| `clip` | `FloatAttr` | 32-bit float attribute |
| `direction` | `StringAttr` | string attribute |
| `hidden_size` | `IntegerAttr` | 64-bit integer attribute |
| `linear_before_reset` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values
1. `Y_h`: memref of any type values or tensor of any type values

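The equations above can be read as a single-direction, single-time-step update. The sketch below transcribes the default equations (f = Sigmoid, g = Tanh) into NumPy for one step; it assumes weights and both bias vectors are given, stacked in the `[z, r, h]` order of the `W[zrh]`/`R[zrh]` notation, and the function names are illustrative only.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_cell(x_t, h_prev, W, R, Wb, Rb, linear_before_reset=0):
    # One time step. W: [3*hidden, input], R: [3*hidden, hidden],
    # Wb/Rb: [3*hidden], all stacked in [z, r, h] order.
    Wz, Wr, Wh = np.split(W, 3)
    Rz, Rr, Rh = np.split(R, 3)
    Wbz, Wbr, Wbh = np.split(Wb, 3)
    Rbz, Rbr, Rbh = np.split(Rb, 3)

    z = sigmoid(x_t @ Wz.T + h_prev @ Rz.T + Wbz + Rbz)
    r = sigmoid(x_t @ Wr.T + h_prev @ Rr.T + Wbr + Rbr)
    if linear_before_reset:
        h_tilde = np.tanh(x_t @ Wh.T + r * (h_prev @ Rh.T + Rbh) + Wbh)
    else:
        h_tilde = np.tanh(x_t @ Wh.T + (r * h_prev) @ Rh.T + Rbh + Wbh)
    return (1.0 - z) * h_tilde + z * h_prev
```
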
### onnx.GatherElements (ONNXGatherElementsOp)
ONNX GatherElements operation

#### Description:

"GatherElements takes two inputs `data` and `indices` of the same rank r >= 1"
"and an optional attribute `axis` that identifies an axis of `data`"
"(by default, the outer-most axis, that is axis 0). It is an indexing operation"
"that produces its output by indexing into the input data tensor at index"
"positions determined by elements of the `indices` tensor."
"Its output shape is the same as the shape of `indices` and consists of one value"
"(gathered from the `data`) for each element in `indices`."
""
"For instance, in the 3-D case (r = 3), the output produced is determined"
"by the following equations: "
"```"
" out[i][j][k] = input[index[i][j][k]][j][k] if axis = 0,"
" out[i][j][k] = input[i][index[i][j][k]][k] if axis = 1,"
" out[i][j][k] = input[i][j][index[i][j][k]] if axis = 2,"
"```"
""
"This operator is also the inverse of ScatterElements. It is similar to Torch's gather operation."
""
"Example 1:"
"```"
" data = ["
" [1, 2],"
" [3, 4],"
" ]"
" indices = ["
" [0, 0],"
" [1, 0],"
" ]"
" axis = 1"
" output = ["
" ["
" [1, 1],"
" [4, 3],"
" ],"
" ]"
"```"
"Example 2:"
"```"
" data = ["
" [1, 2, 3],"
" [4, 5, 6],"
" [7, 8, 9],"
" ]"
" indices = ["
" [1, 2, 0],"
" [2, 0, 0],"
" ]"
" axis = 0"
" output = ["
" ["
" [4, 8, 3],"
" [7, 2, 3],"
" ],"
" ]"
"```"

#### Operands:

1. `data`: memref of any type values or tensor of any type values
1. `indices`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

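Example 1 from the description can be reproduced with NumPy's `take_along_axis`, which follows the same per-element indexing rule; this is an illustrative cross-check, not the onnx-mlir implementation.

```python
import numpy as np

data = np.array([[1, 2], [3, 4]])
indices = np.array([[0, 0], [1, 0]])

# Example 1 from the description: gather along axis 1.
print(np.take_along_axis(data, indices, axis=1))
# [[1 1]
#  [4 3]]
```
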
### onnx.GatherND (ONNXGatherNDOp)
ONNX GatherND operation

#### Description:

"Given `data` tensor of rank `r` >= 1, and `indices` tensor of rank `q` >= 1, this operator gathers "
"slices of `data` into an output tensor of rank `q + r - indices_shape[-1] - 1`."
""
"`indices` is a q-dimensional integer tensor, best thought of as a `(q-1)`-dimensional tensor of index-tuples into `data`, "
"where each element defines a slice of `data`"
""
"Some salient points about the inputs' rank and shape:"
""
"1) r >= 1 and q >= 1 are to be honored. There is no dependency condition to be met between ranks `r` and `q`"
""
"2) The `indices_shape[-1]` should have a value between 1 (inclusive) and rank `r` (inclusive) "
""
"3) All values in `indices` are expected to be within bounds [-s, s-1] along axis of size `s` (i.e.) `-data_shape[i] <= indices[...,i] <= data_shape[i] - 1`."
" It is an error if any of the index values are out of bounds."
""
"The output is computed as follows:"
""
"The output tensor is obtained by mapping each index-tuple in the `indices` tensor to the corresponding slice of the input `data`."
""
"1) If `indices_shape[-1] > r` => error condition"
""
"2) If `indices_shape[-1] == r`, since the rank of `indices` is `q`, `indices` can be thought of as a `(q-1)`-dimensional tensor"
" containing 1-D tensors of dimension `r`. Let us think of each such `r` ranked tensor as `indices_slice`. "
" Each *scalar value* corresponding to `data[indices_slice]` is filled into the corresponding location of the `(q-1)`-dimensional tensor "
" to form the `output` tensor (Example 1 below)"
""
"3) If `indices_shape[-1] < r`, since the rank of `indices` is `q`, `indices` can be thought of as a `(q-1)`-dimensional tensor"
" containing 1-D tensors of dimension `< r`. Let us think of each such tensors as `indices_slice`. "
" Each *tensor slice* corresponding to `data[indices_slice , :]` is filled into the corresponding location of the `(q-1)`-dimensional tensor "
" to form the `output` tensor (Examples 2, 3, and 4 below)"
""
"This operator is the inverse of `ScatterND`."
""
"`Example 1`"
""
" data = [[0,1],[2,3]] # data_shape = [2, 2]"
""
" indices = [[0,0],[1,1]] # indices_shape = [2, 2]"
""
" output = [0,3] # output_shape = [2]"
""
"`Example 2`"
""
" data = [[0,1],[2,3]] # data_shape = [2, 2]"
""
" indices = [[1],[0]] # indices_shape = [2, 1]"
""
" output = [[2,3],[0,1]] # output_shape = [2, 2]"
""
"`Example 3`"
""
" data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]"
""
" indices = [[0,1],[1,0]] # indices_shape = [2, 2]"
""
" output = [[2,3],[4,5]] # output_shape = [2, 2] "
""
"`Example 4`"
""
" data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]"
""
" indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]"
""
" output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2] "

#### Operands:

1. `data`: memref of any type values or tensor of any type values
1. `indices`: memref of any type values or tensor of any type values

#### Attributes:

#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Gather (ONNXGatherOp)
ONNX Gather operation

#### Description:

"Given `data` tensor of rank r >= 1, and `indices` tensor of rank q, gather"
"entries of the axis dimension of `data` (by default outer-most one as axis=0) indexed by `indices`, and concatenates"
"them in an output tensor of rank q + (r - 1)."
""
"axis = 0 :"
""
"Let"
"k = indices[i_{0}, ..., i_{q-1\}\]"
"Then"
"output[i_{0}, ..., i_{q-1}, j_{0}, ..., j_{r-2\}\] = input[k , j_{0}, ..., j_{r-2\}\]"
""
"```"
" data = ["
" [1.0, 1.2],"
" [2.3, 3.4],"
" [4.5, 5.7],"
" ]"
" indices = ["
" [0, 1],"
" [1, 2],"
" ]"
" output = ["
" ["
" [1.0, 1.2],"
" [2.3, 3.4],"
" ],"
" ["
" [2.3, 3.4],"
" [4.5, 5.7],"
" ],"
" ]"
"```"
"axis = 1 :"
""
"Let"
"k = indices[i_{0}, ..., i_{q-1\}\]"
"Then"
"output[i_{0}, ..., i_{q-1}, j_{0}, ..., j_{r-2\}\] = input[j_{0}, k, j_{1}, ..., j_{r-2\}\]"
""
"```"
" data = ["
" [1.0, 1.2, 1.9],"
" [2.3, 3.4, 3.9],"
" [4.5, 5.7, 5.9],"
" ]"
" indices = ["
" [0, 2],"
" ]"
" axis = 1,"
" output = ["
" ["
" [1.0, 1.9],"
" [2.3, 3.9],"
" [4.5, 5.9],"
" ],"
" ]"
"```"

#### Operands:

1. `data`: memref of any type values or tensor of any type values
1. `indices`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

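The axis = 0 example in the description matches NumPy's `take`, which concatenates the indexed entries into a tensor of rank q + (r - 1); the snippet below is only an illustrative cross-check of that example.

```python
import numpy as np

data = np.array([[1.0, 1.2], [2.3, 3.4], [4.5, 5.7]])
indices = np.array([[0, 1], [1, 2]])

# Gather along axis 0: output rank is q + (r - 1) = 2 + 1 = 3.
output = np.take(data, indices, axis=0)
print(output.shape)  # (2, 2, 2)
print(output[1])     # [[2.3 3.4]
                     #  [4.5 5.7]]
```
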
### onnx.GemmNoBias (ONNXGemmNoBiasOp)
ONNX general matrix multiply operation without bias.

#### Description:

The "onnx.GemmNoBias" operation performs generic matrix multiplication without bias.

#### Operands:

1. `A`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `alpha` | `FloatAttr` | 32-bit float attribute |
| `beta` | `FloatAttr` | 32-bit float attribute |
| `transA` | `IntegerAttr` | 64-bit integer attribute |
| `transB` | `IntegerAttr` | 64-bit integer attribute |

#### Results:

1. `o_Y`: memref of any type values or tensor of any type values

### onnx.Gemm (ONNXGemmOp)
|
||
|
ONNX Gemm operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"General Matrix multiplication:"
|
||
|
"https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3"
|
||
|
""
|
||
|
"A' = transpose(A) if transA else A"
|
||
|
""
|
||
|
"B' = transpose(B) if transB else B"
|
||
|
""
|
||
|
"Compute Y = alpha * A' * B' + beta * C, where input tensor A has shape (M, K) or (K, M),"
|
||
|
"input tensor B has shape (K, N) or (N, K), input tensor C is broadcastable to shape (M, N),"
|
||
|
"and output tensor Y has shape (M, N). A will be transposed before doing the"
|
||
|
"computation if attribute transA is non-zero, same for B and transB."
|
||
|
"This operator supports **unidirectional broadcasting** (tensor C should be unidirectional broadcastable to tensor A * B); for more details please check [the doc](Broadcasting.md)."
|
||
|
"This operator has **optional** inputs/outputs. See [the doc](IR.md) for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted."
|
||
|
|
||
|
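As a sanity check of the formula, here is a minimal numpy sketch (illustrative, not the dialect's lowering), assuming the usual defaults alpha = 1.0, beta = 1.0, transA = transB = 0.

```python
import numpy as np

def gemm(A, B, C, alpha=1.0, beta=1.0, transA=0, transB=0):
    # Y = alpha * A' * B' + beta * C, with optional transposition of A and B.
    A_prime = A.T if transA else A
    B_prime = B.T if transB else B
    return alpha * (A_prime @ B_prime) + beta * C

A = np.random.rand(4, 3)   # (M, K)
B = np.random.rand(3, 2)   # (K, N)
C = np.zeros((4, 2))       # broadcastable to (M, N)
Y = gemm(A, B, C)
print(Y.shape)  # (4, 2)
```
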
#### Operands:
|
||
|
|
||
|
1. `A`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
1. `C`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `alpha` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `beta` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `transA` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `transB` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.GlobalAveragePool (ONNXGlobalAveragePoolOp)
|
||
|
ONNX GlobalAveragePool operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"GlobalAveragePool consumes an input tensor X and applies average pooling across"
|
||
|
" the values in the same channel. This is equivalent to AveragePool with kernel size"
|
||
|
" equal to the spatial dimension of input tensor."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.GlobalLpPool (ONNXGlobalLpPoolOp)
|
||
|
ONNX GlobalLpPool operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"GlobalLpPool consumes an input tensor X and applies lp pool pooling across"
|
||
|
" the values in the same channel. This is equivalent to LpPool with kernel size"
|
||
|
" equal to the spatial dimension of input tensor."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `p` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.GlobalMaxPool (ONNXGlobalMaxPoolOp)
|
||
|
ONNX GlobalMaxPool operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"GlobalMaxPool consumes an input tensor X and applies max pooling across"
|
||
|
" the values in the same channel. This is equivalent to MaxPool with kernel size"
|
||
|
" equal to the spatial dimension of input tensor."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Greater (ONNXGreaterOp)
|
||
|
ONNX Greater operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Returns the tensor resulted from performing the `greater` logical operation"
|
||
|
"elementwise on the input tensors `A` and `B` (with Numpy-style broadcasting support)."
|
||
|
""
|
||
|
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `A`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `C`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.HardSigmoid (ONNXHardSigmoidOp)
|
||
|
ONNX HardSigmoid operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"HardSigmoid takes one input data (Tensor<T>) and produces one output data"
|
||
|
"(Tensor<T>) where the HardSigmoid function, y = max(0, min(1, alpha * x + beta)),"
|
||
|
"is applied to the tensor elementwise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `alpha` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `beta` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Hardmax (ONNXHardmaxOp)
|
||
|
ONNX Hardmax operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"The operator computes the hardmax (1 for the first maximum value, and 0 for all others) values for each layer in the batch"
|
||
|
" of the given input."
|
||
|
""
|
||
|
"The input does not need to explicitly be a 2D vector; rather, it will be"
|
||
|
"coerced into one. For an arbitrary n-dimensional tensor"
|
||
|
"input \in [a_0, a_1, ..., a_{k-1}, a_k, ..., a_{n-1\}\] and k is"
|
||
|
"the axis provided, then input will be coerced into a 2-dimensional tensor with"
|
||
|
"dimensions [a_0 * ... * a_{k-1}, a_k * ... * a_{n-1\}\]. For the default"
|
||
|
"case where axis=1, this means the input tensor will be coerced into a 2D tensor"
|
||
|
"of dimensions [a_0, a_1 * ... * a_{n-1\}\], where a_0 is often the batch size."
|
||
|
"In this situation, we must have a_0 = N and a_1 * ... * a_{n-1} = D."
|
||
|
"Each of these dimensions must be matched correctly, or else the operator"
|
||
|
"will throw errors. The output tensor has the same shape"
|
||
|
"and contains the hardmax values of the corresponding input."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Identity (ONNXIdentityOp)
|
||
|
ONNX Identity operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Identity operator"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.If (ONNXIfOp)
|
||
|
ONNX If operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"If conditional"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `cond`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `else_branch` | `Attribute` | any attribute attribute |
|
||
|
| `then_branch` | `Attribute` | any attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `outputs`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.InstanceNormalization (ONNXInstanceNormalizationOp)
|
||
|
ONNX InstanceNormalization operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Carries out instance normalization as described in the paper"
|
||
|
"https://arxiv.org/abs/1607.08022."
|
||
|
""
|
||
|
"y = scale * (x - mean) / sqrt(variance + epsilon) + B,"
|
||
|
"where mean and variance are computed per instance per channel."
|
||
|
""
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
1. `scale`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `epsilon` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.IsInf (ONNXIsInfOp)
|
||
|
ONNX IsInf operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Map infinity to true and other values to false."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `detect_negative` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `detect_positive` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.IsNaN (ONNXIsNaNOp)
|
||
|
ONNX IsNaN operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Returns which elements of the input are NaN."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.LRN (ONNXLRNOp)
|
||
|
ONNX LRN operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Local Response Normalization proposed in the [AlexNet paper](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)."
|
||
|
"It normalizes over local input regions."
|
||
|
"The local region is defined across the channels. For an element X[n, c, d1, ..., dk] in a tensor"
|
||
|
"of shape (N x C x D1 x D2, ..., Dk), its region is"
|
||
|
"{X[n, i, d1, ..., dk] | max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))}."
|
||
|
""
|
||
|
"square_sum[n, c, d1, ..., dk] = sum(X[n, i, d1, ..., dk] ^ 2),"
|
||
|
"where max(0, c - floor((size - 1) / 2)) <= i <= min(C - 1, c + ceil((size - 1) / 2))."
|
||
|
""
|
||
|
"Y[n, c, d1, ..., dk] = X[n, c, d1, ..., dk] / (bias + alpha / size * square_sum[n, c, d1, ..., dk] ) ^ beta"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `alpha` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `beta` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `bias` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `size` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.LSTM (ONNXLSTMOp)
|
||
|
ONNX LSTM operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes an one-layer LSTM. This operator is usually supported via some"
|
||
|
"custom implementation such as CuDNN."
|
||
|
""
|
||
|
"Notations:"
|
||
|
""
|
||
|
"`X` - input tensor"
|
||
|
""
|
||
|
"`i` - input gate"
|
||
|
""
|
||
|
"`o` - output gate"
|
||
|
""
|
||
|
"`f` - forget gate"
|
||
|
""
|
||
|
"`c` - cell gate"
|
||
|
""
|
||
|
"`t` - time step (t-1 means previous time step)"
|
||
|
""
|
||
|
"`W[iofc]` - W parameter weight matrix for input, output, forget, and cell gates"
|
||
|
""
|
||
|
"`R[iofc]` - R recurrence weight matrix for input, output, forget, and cell gates"
|
||
|
""
|
||
|
"`Wb[iofc]` - W bias vectors for input, output, forget, and cell gates"
|
||
|
""
|
||
|
"`Rb[iofc]` - R bias vectors for input, output, forget, and cell gates"
|
||
|
""
|
||
|
"`P[iof]` - P peephole weight vector for input, output, and forget gates"
|
||
|
""
|
||
|
"`WB[iofc]` - W parameter weight matrix for backward input, output, forget, and cell gates"
|
||
|
""
|
||
|
"`RB[iofc]` - R recurrence weight matrix for backward input, output, forget, and cell gates"
|
||
|
""
|
||
|
"`WBb[iofc]` - W bias vectors for backward input, output, forget, and cell gates"
|
||
|
""
|
||
|
"`RBb[iofc]` - R bias vectors for backward input, output, forget, and cell gates"
|
||
|
""
|
||
|
"`PB[iof]` - P peephole weight vector for backward input, output, and forget gates"
|
||
|
""
|
||
|
"`H` - Hidden state"
|
||
|
""
|
||
|
"`num_directions` - 2 if direction == bidirectional else 1"
|
||
|
""
|
||
|
"Activation functions:"
|
||
|
""
|
||
|
" Relu(x) - max(0, x)"
|
||
|
""
|
||
|
" Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})"
|
||
|
""
|
||
|
" Sigmoid(x) - 1/(1 + e^{-x})"
|
||
|
""
|
||
|
" (NOTE: Below are optional)"
|
||
|
""
|
||
|
" Affine(x) - alpha*x + beta"
|
||
|
""
|
||
|
" LeakyRelu(x) - x if x >= 0 else alpha * x"
|
||
|
""
|
||
|
" ThresholdedRelu(x) - x if x >= alpha else 0"
|
||
|
""
|
||
|
" ScaledTanh(x) - alpha*Tanh(beta*x)"
|
||
|
""
|
||
|
" HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)"
|
||
|
""
|
||
|
" Elu(x) - x if x >= 0 else alpha*(e^x - 1)"
|
||
|
""
|
||
|
" Softsign(x) - x/(1 + |x|)"
|
||
|
""
|
||
|
" Softplus(x) - log(1 + e^x)"
|
||
|
""
|
||
|
"Equations (Default: f=Sigmoid, g=Tanh, h=Tanh):"
|
||
|
""
|
||
|
" - it = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Pi (.) Ct-1 + Wbi + Rbi)"
|
||
|
""
|
||
|
" - ft = f(Xt*(Wf^T) + Ht-1*(Rf^T) + Pf (.) Ct-1 + Wbf + Rbf)"
|
||
|
""
|
||
|
" - ct = g(Xt*(Wc^T) + Ht-1*(Rc^T) + Wbc + Rbc)"
|
||
|
""
|
||
|
" - Ct = ft (.) Ct-1 + it (.) ct"
|
||
|
""
|
||
|
" - ot = f(Xt*(Wo^T) + Ht-1*(Ro^T) + Po (.) Ct + Wbo + Rbo)"
|
||
|
""
|
||
|
" - Ht = ot (.) h(Ct)"
|
||
|
"This operator has **optional** inputs/outputs. See [the doc](IR.md) for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
1. `W`: memref of any type values or tensor of any type values
|
||
|
1. `R`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
1. `sequence_lens`: memref of any type values or tensor of any type values
|
||
|
1. `initial_h`: memref of any type values or tensor of any type values
|
||
|
1. `initial_c`: memref of any type values or tensor of any type values
|
||
|
1. `P`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `activation_alpha` | `ArrayAttr` | 32-bit float array attribute attribute |
|
||
|
| `activation_beta` | `ArrayAttr` | 32-bit float array attribute attribute |
|
||
|
| `activations` | `ArrayAttr` | string array attribute attribute |
|
||
|
| `clip` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `direction` | `StringAttr` | string attribute attribute |
|
||
|
| `hidden_size` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `input_forget` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
1. `Y_h`: memref of any type values or tensor of any type values
|
||
|
1. `Y_c`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.LeakyRelu (ONNXLeakyReluOp)
|
||
|
ONNX LeakyRelu operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"LeakyRelu takes input data (Tensor<T>) and an argument alpha, and produces one"
|
||
|
"output data (Tensor<T>) where the function `f(x) = alpha * x for x < 0`,"
|
||
|
"`f(x) = x for x >= 0`, is applied to the data tensor elementwise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `alpha` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Less (ONNXLessOp)
|
||
|
ONNX Less operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Returns the tensor resulted from performing the `less` logical operation"
|
||
|
"elementwise on the input tensors `A` and `B` (with Numpy-style broadcasting support)."
|
||
|
""
|
||
|
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `A`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `C`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Log (ONNXLogOp)
|
||
|
ONNX Log operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Calculates the natural log of the given input tensor, element-wise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.LogSoftmax (ONNXLogSoftmaxOp)
|
||
|
ONNX LogSoftmax operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"The operator computes the logsoftmax (log of softmax) values for each layer in the batch"
|
||
|
" of the given input."
|
||
|
""
|
||
|
"The input does not need to explicitly be a 2D vector; rather, it will be"
|
||
|
"coerced into one. For an arbitrary n-dimensional tensor"
|
||
|
"input \in [a_0, a_1, ..., a_{k-1}, a_k, ..., a_{n-1\}\] and k is"
|
||
|
"the axis provided, then input will be coerced into a 2-dimensional tensor with"
|
||
|
"dimensions [a_0 * ... * a_{k-1}, a_k * ... * a_{n-1\}\]. For the default"
|
||
|
"case where axis=1, this means the input tensor will be coerced into a 2D tensor"
|
||
|
"of dimensions [a_0, a_1 * ... * a_{n-1\}\], where a_0 is often the batch size."
|
||
|
"In this situation, we must have a_0 = N and a_1 * ... * a_{n-1} = D."
|
||
|
"Each of these dimensions must be matched correctly, or else the operator"
|
||
|
"will throw errors. The output tensor has the same shape"
|
||
|
"and contains the logsoftmax values of the corresponding input."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Loop (ONNXLoopOp)
|
||
|
ONNX Loop operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Generic Looping construct. This loop has multiple termination conditions:"
|
||
|
""
|
||
|
"1) Trip count. Iteration count specified at runtime. Set by"
|
||
|
" specifying the input M. Optional. Set to empty string to omit."
|
||
|
" Note that a static trip count (specified at graph construction time) can be"
|
||
|
" specified by passing in a constant node for input M."
|
||
|
"2) Loop termination condition. This is an input to the op that determines"
|
||
|
" whether to run the first iteration and also a loop-carried dependency for"
|
||
|
" the body graph. The body graph must yield a value for the condition variable,"
|
||
|
" whether this input is provided or not."
|
||
|
""
|
||
|
"This table summarizes the operating modes of this operator with equivalent"
|
||
|
"C-style code:"
|
||
|
""
|
||
|
" Operator inputs defined as (max_trip_count, condition_var)."
|
||
|
""
|
||
|
" input ("", ""):"
|
||
|
" for (int i=0; ; ++i) {"
|
||
|
" cond = ... // Note this value is ignored, but is required in the body"
|
||
|
" }"
|
||
|
""
|
||
|
" input ("", cond) // Note this is analogous to a while loop"
|
||
|
" bool cond = ...;"
|
||
|
" for (int i=0; cond; ++i) {"
|
||
|
" cond = ...;"
|
||
|
" }"
|
||
|
""
|
||
|
" input ("", 1) // Note this is analogous to a do-while loop"
|
||
|
" bool cond = true"
|
||
|
" for (int i=0; cond; ++i) {"
|
||
|
" cond = ...;"
|
||
|
" }"
|
||
|
""
|
||
|
" input (trip_count, "") // Note this is analogous to a for loop"
|
||
|
" int trip_count = ..."
|
||
|
" for (int i=0; i < trip_count; ++i) {"
|
||
|
" cond = ...; // ignored"
|
||
|
" }"
|
||
|
""
|
||
|
" input (trip_count, cond)"
|
||
|
" int trip_count = ...;"
|
||
|
" bool cond = ...;"
|
||
|
" for (int i=0; i < trip_count && cond; ++i) {"
|
||
|
" cond = ...;"
|
||
|
" }"
|
||
|
""
|
||
|
""
|
||
|
"*Sample usage - cond as well as trip count*"
|
||
|
""
|
||
|
" graph predict-net {"
|
||
|
" %a = Constant[value = <Scalar Tensor [3]>]()"
|
||
|
" %b = Constant[value = <Scalar Tensor [6]>]()"
|
||
|
" %keepgoing = Constant[value = <Scalar Tensor [1]>]()"
|
||
|
" %max_trip_count = Constant[value = <Scalar Tensor [10]>]()"
|
||
|
" %keepgoing_out, %b_out, %user_defined_vals = Loop[body = <graph body-net>](%max_trip_count, %keepgoing, %b)"
|
||
|
" return"
|
||
|
" }"
|
||
|
""
|
||
|
" graph body-net ("
|
||
|
" %i[INT32, scalar] // iteration number"
|
||
|
" %keepgoing_in[BOOL, scalar] // incoming loop-termination-condition; not used"
|
||
|
" %b_in[INT32, scalar] // incoming value of loop-carried-dependency b"
|
||
|
" ) {"
|
||
|
" %my_local = Add(%a, %b_in)"
|
||
|
" %b_out = Sub(%a, %b_in) // outgoing value of loop-carried-dependency b"
|
||
|
" %keepgoing_out = Greater(%my_local, %b_out) // outgoing loop-termination-condition"
|
||
|
" %user_defined_val = Add(%b_in, %b_in) // scan-output value to be accumulated"
|
||
|
" return %keepgoing_out, %b_out, %user_defined_val"
|
||
|
" }"
|
||
|
""
|
||
|
"*Sample equivalent C code*"
|
||
|
""
|
||
|
" {"
|
||
|
" /* User-defined code (enclosing scope) */"
|
||
|
" int a = 3, b = 6;"
|
||
|
" bool keepgoing = true; // Analogous to input cond"
|
||
|
" /* End user-defined code */"
|
||
|
""
|
||
|
" /* Implicitly-defined code */"
|
||
|
" const int max_trip_count = 10; // Analogous to input M"
|
||
|
" int user_defined_vals[]; // Imagine this is resizable"
|
||
|
" /* End implicitly-defined code */"
|
||
|
" /* initialize loop-carried variables and scan-output variables */"
|
||
|
" bool keepgoing_out = keepgoing"
|
||
|
" int b_out = b"
|
||
|
""
|
||
|
" for (int i=0; i < max_trip_count && keepgoing_out; ++i) {"
|
||
|
" /* Implicitly-defined code: bind actual parameter values"
|
||
|
" to formal parameter variables of loop-body */"
|
||
|
" bool keepgoing_in = keepgoing_out; "
|
||
|
" bool b_in = b_out;"
|
||
|
""
|
||
|
" /* User-defined code (loop body) */"
|
||
|
" int my_local = a + b_in; // Reading value "a" from the enclosing scope is fine"
|
||
|
" b_out = a - b_in;"
|
||
|
" keepgoing_out = my_local > b_out; "
|
||
|
" user_defined_val = b_in + b_in; // b_in and b_out are different variables"
|
||
|
" /* End user-defined code */"
|
||
|
""
|
||
|
" /* Implicitly defined-code */"
|
||
|
" user_defined_vals[i] = user_defined_val // accumulate scan-output values"
|
||
|
" }"
|
||
|
" // int t = my_local; // Can't do this. my_local is not accessible here."
|
||
|
""
|
||
|
" // The values below are bound to the output variables of the loop and therefore accessible"
|
||
|
" // b_out; user_defined_vals; keepgoing_out;"
|
||
|
" }"
|
||
|
""
|
||
|
"There are several things of note in this code snippet:"
|
||
|
""
|
||
|
"1) Values from the enclosing scope (i.e. variable "a" here) are in scope and can"
|
||
|
" be referenced in the inputs of the loop."
|
||
|
"2) Any values computed in the loop body that needs to be used in a subsequent"
|
||
|
" iteration or after the loop are modelled using a pair of variables in the loop-body,"
|
||
|
" consisting of an input variable (eg., b_in) and an output variable (eg., b_out)."
|
||
|
" These are referred to as loop-carried dependences. The loop operation node"
|
||
|
" supplies the input value of the input variable for the first iteration, and"
|
||
|
" returns the output value of the output variable produced by the final"
|
||
|
" iteration."
|
||
|
"3) Scan_output variables are used to implicitly concatenate values computed across"
|
||
|
" all the iterations. In the above example, the value of user_defined_val computed"
|
||
|
" over all iterations are concatenated and returned as the value of user_defined_vals"
|
||
|
" after the loop."
|
||
|
"4) Values created in the body cannot be accessed in the enclosing scope,"
|
||
|
" except using the mechanism described above."
|
||
|
""
|
||
|
"Note that the semantics of this op support "diagonal" or "wavefront" execution."
|
||
|
"(See Step 3 here for an example:"
|
||
|
"https://devblogs.nvidia.com/optimizing-recurrent-neural-networks-cudnn-5/)."
|
||
|
"Frontends should emit multi-layer RNNs as a series of While operators (with"
|
||
|
"time being the inner looping dimension), with each successive layer consuming"
|
||
|
"the scan_outputs from the previous layer, possibly going through several"
|
||
|
"point-wise operators (e.g. dropout, residual connections, linear layer)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `M`: memref of any type values or tensor of any type values
|
||
|
1. `cond`: memref of any type values or tensor of any type values
|
||
|
1. `v_initial`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `body` | `Attribute` | any attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `v_final_and_scan_outputs`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.LpNormalization (ONNXLpNormalizationOp)
|
||
|
ONNX LpNormalization operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Given a matrix, apply Lp-normalization along the provided axis."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `p` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.LpPool (ONNXLpPoolOp)
|
||
|
ONNX LpPool operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"LpPool consumes an input tensor X and applies Lp pooling across"
|
||
|
" the tensor according to kernel sizes, stride sizes, and pad lengths."
|
||
|
" Lp pooling consisting of computing the Lp norm on all values of a subset"
|
||
|
" of the input tensor according to the kernel size and downsampling the"
|
||
|
" data into the output tensor Y for further processing."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `auto_pad` | `StringAttr` | string attribute attribute |
|
||
|
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `p` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `pads` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `strides` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.MatMulInteger (ONNXMatMulIntegerOp)
|
||
|
ONNX MatMulInteger operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html."
|
||
|
"The production MUST never overflow. The accumulation may overflow if and only if in 32 bits."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `A`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
1. `a_zero_point`: memref of any type values or tensor of any type values
|
||
|
1. `b_zero_point`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.MatMul (ONNXMatMulOp)
|
||
|
ONNX MatMul operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `A`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Max (ONNXMaxOp)
|
||
|
ONNX Max operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Element-wise max of each of the input tensors (with Numpy-style broadcasting support)."
|
||
|
"All inputs and outputs must have the same data type."
|
||
|
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data_0`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `max`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.MaxPool (ONNXMaxPoolOp)
|
||
|
ONNX MaxPool operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"MaxPool consumes an input tensor X and applies max pooling across"
|
||
|
" the tensor according to kernel sizes, stride sizes, and pad lengths."
|
||
|
" max pooling consisting of computing the max on all values of a"
|
||
|
" subset of the input tensor according to the kernel size and downsampling the"
|
||
|
" data into the output tensor Y for further processing. The output spatial shape will be following:"
|
||
|
" ```"
|
||
|
" output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)"
|
||
|
" ```"
|
||
|
" or"
|
||
|
" ```"
|
||
|
" output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)"
|
||
|
" ```"
|
||
|
" if ceil_mode is enabled"
|
||
|
""
|
||
|
" ```"
|
||
|
" * pad_shape[i] is sum of pads along axis i"
|
||
|
" ```"
|
||
|
""
|
||
|
" `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following:"
|
||
|
" ```"
|
||
|
" VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])"
|
||
|
" SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])"
|
||
|
" ```"
|
||
|
" And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:"
|
||
|
" ```"
|
||
|
" pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]"
|
||
|
" ```"
|
||
|
" The output of each pooling window is maximum number of elements exclude pad. "
|
||
|
" "
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `auto_pad` | `StringAttr` | string attribute attribute |
|
||
|
| `ceil_mode` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `dilations` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `pads` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `storage_order` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `strides` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
1. `Indices`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.MaxPoolSingleOut (ONNXMaxPoolSingleOutOp)
|
||
|
ONNX MaxPool operation with a single output.
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"ONNX MaxPool operation with a single output."
|
||
|
"See ONNXMaxPoolOp for a full description of the MaxPool semantics."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `auto_pad` | `StringAttr` | string attribute attribute |
|
||
|
| `ceil_mode` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `dilations` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `pads` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `storage_order` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `strides` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `o_Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.MaxRoiPool (ONNXMaxRoiPoolOp)
|
||
|
ONNX MaxRoiPool operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"ROI max pool consumes an input tensor X and region of interests (RoIs) to"
|
||
|
" apply max pooling across each RoI, to produce output 4-D tensor of shape"
|
||
|
" (num_rois, channels, pooled_shape[0], pooled_shape[1])."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
1. `rois`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `pooled_shape` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `spatial_scale` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.MaxUnpool (ONNXMaxUnpoolOp)
|
||
|
ONNX MaxUnpool operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"MaxUnpool essentially computes the partial inverse of the MaxPool op."
|
||
|
" The input information to this op is typically the the output information from a MaxPool op. The first"
|
||
|
" input tensor X is the tensor that needs to be unpooled, which is typically the pooled tensor (first output)"
|
||
|
" from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corrsponding"
|
||
|
" to the elements in the first input tensor X. Input tensor I is typically the second output of the MaxPool op."
|
||
|
" The third (optional) input is a tensor that specifies the output size of the unpooling operation."
|
||
|
""
|
||
|
"MaxUnpool is intended to do 'partial' inverse of the MaxPool op. 'Partial' because all the non-maximal"
|
||
|
" values from the original input to MaxPool are set to zero in the output of the MaxUnpool op. Pooling"
|
||
|
" the result of an unpooling operation should give back the original input to the unpooling op."
|
||
|
""
|
||
|
"MaxUnpool can produce the same output size for several input sizes, which makes unpooling op ambiguous."
|
||
|
" The third input argument, output_size, is meant to disambiguate the op and produce output tensor of"
|
||
|
" known/predictable size."
|
||
|
""
|
||
|
"In addition to the inputs, MaxUnpool takes three attributes, namely kernel_shape, strides, and pads,"
|
||
|
" which define the exact unpooling op. The attributes typically have the same values as the corrsponding"
|
||
|
" pooling op that the unpooling op is trying to invert."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
1. `I`: memref of any type values or tensor of any type values
|
||
|
1. `output_shape`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `pads` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `strides` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Mean (ONNXMeanOp)
|
||
|
ONNX Mean operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Element-wise mean of each of the input tensors (with Numpy-style broadcasting support)."
|
||
|
"All inputs and outputs must have the same data type."
|
||
|
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data_0`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `mean`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.MeanVarianceNormalization (ONNXMeanVarianceNormalizationOp)
|
||
|
ONNX MeanVarianceNormalization operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"A MeanVarianceNormalization Function: Perform mean variance normalization"
|
||
|
" on the input tensor X using formula: <br/> ``` (X-EX)/sqrt(E(X-EX)^2) ```"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Min (ONNXMinOp)
|
||
|
ONNX Min operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Element-wise min of each of the input tensors (with Numpy-style broadcasting support)."
|
||
|
"All inputs and outputs must have the same data type."
|
||
|
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data_0`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `min`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Mod (ONNXModOp)
|
||
|
ONNX Mod operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Performs element-wise binary modulus (with Numpy-style broadcasting support). "
|
||
|
" The sign of the remainder is the same as that of the Divisor."
|
||
|
" "
|
||
|
" Mod operator can also behave like C fmod() or numpy.fmod. In this case, the sign of the remainder however, will be the same as the Dividend "
|
||
|
" (in contrast to integer mod). To force a behavior like numpy.fmod() an 'fmod' Attribute is provided."
|
||
|
" This attribute is set to 0 by default causing the behavior to be like integer mod. "
|
||
|
" Setting this attribute to 1 causes the remainder to be calculated similar to that of numpy.fmod()."
|
||
|
""
|
||
|
" If the input type is floating point, then `fmod` attribute must be set to 1."
|
||
|
" "
|
||
|
" In case of dividend being zero, the results will be platform dependent."
|
||
|
""
|
||
|
" This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `A`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `fmod` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `C`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Mul (ONNXMulOp)
|
||
|
ONNX Mul operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Performs element-wise binary multiplication (with Numpy-style broadcasting support)."
|
||
|
""
|
||
|
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `A`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `C`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Multinomial (ONNXMultinomialOp)
|
||
|
ONNX Multinomial operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Generate a tensor of samples from a multinomial distribution according to the probabilities"
|
||
|
"of each of the possible outcomes."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `dtype` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `sample_size` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `seed` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Neg (ONNXNegOp)
|
||
|
ONNX Neg operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Neg takes one input data (Tensor<T>) and produces one output data"
|
||
|
"(Tensor<T>) where each element flipped sign, y = -x, is applied to"
|
||
|
"the tensor elementwise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.NonMaxSuppression (ONNXNonMaxSuppressionOp)
|
||
|
ONNX NonMaxSuppression operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Filter out boxes that have high intersection-over-union (IOU) overlap with previously selected boxes."
|
||
|
"Bounding boxes with score less than score_threshold are removed. Bounding box format is indicated by attribute center_point_box."
|
||
|
"Note that this algorithm is agnostic to where the origin is in the coordinate system and more generally is invariant to"
|
||
|
"orthogonal transformations and translations of the coordinate system; thus translating or reflections of the coordinate system"
|
||
|
"result in the same boxes being selected by the algorithm."
|
||
|
"The selected_indices output is a set of integers indexing into the input collection of bounding boxes representing the selected boxes."
|
||
|
"The bounding box coordinates corresponding to the selected indices can then be obtained using the Gather or GatherND operation."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `boxes`: memref of any type values or tensor of any type values
|
||
|
1. `scores`: memref of any type values or tensor of any type values
|
||
|
1. `max_output_boxes_per_class`: memref of any type values or tensor of any type values
|
||
|
1. `iou_threshold`: memref of any type values or tensor of any type values
|
||
|
1. `score_threshold`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `center_point_box` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `selected_indices`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.NonZero (ONNXNonZeroOp)
|
||
|
ONNX NonZero operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Returns the indices of the elements that are non-zero"
|
||
|
" (in row-major order - by dimension)."
|
||
|
" NonZero behaves similar to numpy.nonzero:"
|
||
|
" https://docs.scipy.org/doc/numpy/reference/generated/numpy.nonzero.html"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Not (ONNXNotOp)
|
||
|
ONNX Not operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Returns the negation of the input tensor element-wise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.OneHot (ONNXOneHotOp)
|
||
|
ONNX OneHot operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Produces a one-hot tensor based on inputs."
|
||
|
" The locations represented by the index values in the 'indices' input tensor will have 'on_value'"
|
||
|
" and the other locations will have 'off_value' in the output tensor, where 'on_value' and 'off_value'"
|
||
|
" are specified as part of required input argument 'values', which is a two-element tensor of format"
|
||
|
" [off_value, on_value]. The rank of the output tensor will be one greater than the rank of the"
|
||
|
" input tensor. The additional dimension is for one-hot representation. The additional dimension will"
|
||
|
" be inserted at the position specified by 'axis'. If 'axis' is not specified then then additional"
|
||
|
" dimension will be inserted as the innermost dimension, i.e. axis=-1. The size of the additional"
|
||
|
" dimension is specified by required scalar input 'depth'. The type of the output tensor is the same"
|
||
|
" as the type of the 'values' input. Any entries in the 'indices' input tensor with values outside"
|
||
|
" the range [-depth, depth-1] will result in one-hot representation with all 'off_value' values in the"
|
||
|
" output tensor."
|
||
|
""
|
||
|
" when axis = 0:"
|
||
|
" output[input[i, j, k], i, j, k] = 1 for all i, j, k and 0 otherwise."
|
||
|
""
|
||
|
" when axis = -1:"
|
||
|
" output[i, j, k, input[i, j, k]] = 1 for all i, j, k and 0 otherwise."
|
||
|
""
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `indices`: memref of any type values or tensor of any type values
|
||
|
1. `depth`: memref of any type values or tensor of any type values
|
||
|
1. `values`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Or (ONNXOrOp)
|
||
|
ONNX Or operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Returns the tensor resulted from performing the `or` logical operation"
|
||
|
"elementwise on the input tensors `A` and `B` (with Numpy-style broadcasting support)."
|
||
|
""
|
||
|
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `A`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `C`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.PRelu (ONNXPReluOp)
|
||
|
ONNX PRelu operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"PRelu takes input data (Tensor<T>) and slope tensor as input, and produces one"
|
||
|
"output data (Tensor<T>) where the function `f(x) = slope * x for x < 0`,"
|
||
|
"`f(x) = x for x >= 0`., is applied to the data tensor elementwise."
|
||
|
"This operator supports **unidirectional broadcasting** (tensor slope should be unidirectional broadcastable to input tensor X); for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
1. `slope`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Pad (ONNXPadOp)
|
||
|
ONNX Pad operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Given a tensor containing the data to be padded (`data`), a tensor containing the number of start and end pad values for axis (`pads`), (optionally) a `mode`, and (optionally) `constant_value`, "
|
||
|
"a padded tensor (`output`) is generated."
|
||
|
""
|
||
|
"The three supported `modes` are (similar to corresponding modes supported by `numpy.pad`):"
|
||
|
""
|
||
|
"1) `constant`(default) - pads with a given constant value as specified by `constant_value` (which defaults to 0)"
|
||
|
""
|
||
|
"2) `reflect` - pads with the reflection of the vector mirrored on the first and last values of the vector along each axis"
|
||
|
""
|
||
|
"3) `edge` - pads with the edge values of array"
|
||
|
""
|
||
|
""
|
||
|
"Example 1 (`constant` mode):"
|
||
|
" Insert 0 pads to the beginning of the second dimension."
|
||
|
""
|
||
|
" data = "
|
||
|
" ["
|
||
|
" [1.0, 1.2],"
|
||
|
" [2.3, 3.4],"
|
||
|
" [4.5, 5.7],"
|
||
|
" ] "
|
||
|
""
|
||
|
" pads = [0, 2, 0, 0]"
|
||
|
""
|
||
|
" mode = 'constant'"
|
||
|
""
|
||
|
" constant_value = 0.0"
|
||
|
""
|
||
|
" output = "
|
||
|
" ["
|
||
|
" ["
|
||
|
" [0.0, 0.0, 1.0, 1.2],"
|
||
|
" [0.0, 0.0, 2.3, 3.4],"
|
||
|
" [0.0, 0.0, 4.5, 5.7],"
|
||
|
" ],"
|
||
|
" ]"
|
||
|
""
|
||
|
""
|
||
|
"Example 2 (`reflect` mode):"
|
||
|
" data = "
|
||
|
" ["
|
||
|
" [1.0, 1.2],"
|
||
|
" [2.3, 3.4],"
|
||
|
" [4.5, 5.7],"
|
||
|
" ] "
|
||
|
""
|
||
|
" pads = [0, 2, 0, 0]"
|
||
|
""
|
||
|
" mode = 'reflect'"
|
||
|
""
|
||
|
" output = "
|
||
|
" ["
|
||
|
" ["
|
||
|
" [1.0, 1.2, 1.0, 1.2],"
|
||
|
" [2.3, 3.4, 2.3, 3.4],"
|
||
|
" [4.5, 5.7, 4.5, 5.7],"
|
||
|
" ],"
|
||
|
" ]"
|
||
|
""
|
||
|
""
|
||
|
"Example 3 (`edge` mode):"
|
||
|
" data = "
|
||
|
" ["
|
||
|
" [1.0, 1.2],"
|
||
|
" [2.3, 3.4],"
|
||
|
" [4.5, 5.7],"
|
||
|
" ] "
|
||
|
""
|
||
|
" pads = [0, 2, 0, 0]"
|
||
|
""
|
||
|
" mode = 'edge'"
|
||
|
""
|
||
|
" output = "
|
||
|
" ["
|
||
|
" ["
|
||
|
" [1.0, 1.0, 1.0, 1.2],"
|
||
|
" [2.3, 2.3, 2.3, 3.4],"
|
||
|
" [4.5, 4.5, 4.5, 5.7],"
|
||
|
" ],"
|
||
|
" ]"
|
||
|
""
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
1. `pads`: memref of any type values or tensor of any type values
|
||
|
1. `constant_value`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `mode` | `StringAttr` | string attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Pow (ONNXPowOp)
|
||
|
ONNX Pow operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Pow takes input data (Tensor<T>) and exponent Tensor, and"
|
||
|
"produces one output data (Tensor<T>) where the function `f(x) = x^exponent`,"
|
||
|
"is applied to the data tensor elementwise."
|
||
|
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Z`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.QLinearConv (ONNXQLinearConvOp)
|
||
|
ONNX QLinearConv operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"The convolution operator consumes a quantized input tensor, its scale and zero point,"
|
||
|
"a quantized filter, its scale and zero point, and output's scale and zero point,"
|
||
|
"and computes the quantized output. Each scale and zero-point pair must have same shape."
|
||
|
"It means they must be either scalars (per tensor) or 1-D tensors (per output channel)."
|
||
|
"Each input or output and its related zero point must have same type."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `x`: memref of any type values or tensor of any type values
|
||
|
1. `x_scale`: memref of any type values or tensor of any type values
|
||
|
1. `x_zero_point`: memref of any type values or tensor of any type values
|
||
|
1. `w`: memref of any type values or tensor of any type values
|
||
|
1. `w_scale`: memref of any type values or tensor of any type values
|
||
|
1. `w_zero_point`: memref of any type values or tensor of any type values
|
||
|
1. `y_scale`: memref of any type values or tensor of any type values
|
||
|
1. `y_zero_point`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `auto_pad` | `StringAttr` | string attribute attribute |
|
||
|
| `dilations` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `group` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `kernel_shape` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `pads` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `strides` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.QLinearMatMul (ONNXQLinearMatMulOp)
|
||
|
ONNX QLinearMatMul operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Matrix product that behaves like numpy.matmul: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html."
|
||
|
"It consumes two quantized input tensors, their scales and zero points, scale and zero point of output, and computes the quantized output."
|
||
|
"The quantization formula is y = saturate((x / y_scale) + y_zero_point). For (x / y_scale), it is rounding to nearest ties to even."
|
||
|
"Refer to https://en.wikipedia.org/wiki/Rounding for details. Scale and zero point must have same shape."
|
||
|
"They must be either scalar (per tensor) or 1-D tensor (per row for 'a' and per column for 'b'). If scale and zero point are 1-D tensor,"
|
||
|
"the number of elements of scale and zero point tensor of input 'a' and output 'y' should be equal to the number of rows of input 'a',"
|
||
|
"and the number of elements of scale and zero point tensor of input 'b' should be equal to the number of columns of input 'b'."
|
||
|
"Production must never overflow, and accumulation may overflow if and only if in 32 bits."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `a`: memref of any type values or tensor of any type values
|
||
|
1. `a_scale`: memref of any type values or tensor of any type values
|
||
|
1. `a_zero_point`: memref of any type values or tensor of any type values
|
||
|
1. `b`: memref of any type values or tensor of any type values
|
||
|
1. `b_scale`: memref of any type values or tensor of any type values
|
||
|
1. `b_zero_point`: memref of any type values or tensor of any type values
|
||
|
1. `y_scale`: memref of any type values or tensor of any type values
|
||
|
1. `y_zero_point`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.QuantizeLinear (ONNXQuantizeLinearOp)
ONNX QuantizeLinear operation

#### Description:


"The linear per-tensor/layer quantization operator. It consumes a high precision tensor, a scale, a zero point to compute the low precision / quantized tensor."
"The quantization formula is y = saturate ((x / y_scale) + y_zero_point). For saturation, it saturates to [0, 255] if it's uint8, or [-128, 127] if it's int8."
"For (x / y_scale), it's rounding to nearest ties to even. Refer to https://en.wikipedia.org/wiki/Rounding for details. 'y_zero_point' and 'y' must have same type."

#### Operands:

1. `x`: memref of any type values or tensor of any type values
1. `y_scale`: memref of any type values or tensor of any type values
1. `y_zero_point`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `y`: memref of any type values or tensor of any type values

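As an informal illustration of the quantization formula quoted above (not part of the generated specification), a minimal NumPy sketch for the uint8 case might look like this; the helper name `quantize_linear` and the example values are assumptions:

```python
import numpy as np

def quantize_linear(x, y_scale, y_zero_point):
    """Sketch of y = saturate(round(x / y_scale) + y_zero_point) for a uint8 output.

    np.rint rounds halves to even, matching the "rounding to nearest ties to even"
    rule above; saturation clips to the uint8 range [0, 255].
    """
    y = np.rint(x / y_scale) + y_zero_point
    return np.clip(y, 0, 255).astype(np.uint8)

# Example: scale 2.0, zero point 128
print(quantize_linear(np.array([-3.0, 0.0, 5.0]), 2.0, 128))  # -> [126 128 130]
```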
### onnx.RNN (ONNXRNNOp)
|
||
|
ONNX RNN operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes an one-layer simple RNN. This operator is usually supported"
|
||
|
"via some custom implementation such as CuDNN."
|
||
|
""
|
||
|
"Notations:"
|
||
|
""
|
||
|
"`X` - input tensor"
|
||
|
""
|
||
|
"`i` - input gate"
|
||
|
""
|
||
|
"`t` - time step (t-1 means previous time step)"
|
||
|
""
|
||
|
"`Wi` - W parameter weight matrix for input gate"
|
||
|
""
|
||
|
"`Ri` - R recurrence weight matrix for input gate"
|
||
|
""
|
||
|
"`Wbi` - W parameter bias vector for input gate"
|
||
|
""
|
||
|
"`Rbi` - R parameter bias vector for input gate"
|
||
|
""
|
||
|
"`WBi` - W parameter weight matrix for backward input gate"
|
||
|
""
|
||
|
"`RBi` - R recurrence weight matrix for backward input gate"
|
||
|
""
|
||
|
"`WBbi` - WR bias vectors for backward input gate"
|
||
|
""
|
||
|
"`RBbi` - RR bias vectors for backward input gate"
|
||
|
""
|
||
|
"`H` - Hidden state"
|
||
|
""
|
||
|
"`num_directions` - 2 if direction == bidirectional else 1"
|
||
|
""
|
||
|
"Activation functions:"
|
||
|
""
|
||
|
" Relu(x) - max(0, x)"
|
||
|
""
|
||
|
" Tanh(x) - (1 - e^{-2x})/(1 + e^{-2x})"
|
||
|
""
|
||
|
" Sigmoid(x) - 1/(1 + e^{-x})"
|
||
|
""
|
||
|
" (NOTE: Below are optional)"
|
||
|
""
|
||
|
" Affine(x) - alpha*x + beta"
|
||
|
""
|
||
|
" LeakyRelu(x) - x if x >= 0 else alpha * x"
|
||
|
""
|
||
|
" ThresholdedRelu(x) - x if x >= alpha else 0"
|
||
|
""
|
||
|
" ScaledTanh(x) - alpha*Tanh(beta*x)"
|
||
|
""
|
||
|
" HardSigmoid(x) - min(max(alpha*x + beta, 0), 1)"
|
||
|
""
|
||
|
" Elu(x) - x if x >= 0 else alpha*(e^x - 1)"
|
||
|
""
|
||
|
" Softsign(x) - x/(1 + |x|)"
|
||
|
""
|
||
|
" Softplus(x) - log(1 + e^x)"
|
||
|
""
|
||
|
"Equations (Default: f=Tanh):"
|
||
|
""
|
||
|
" - Ht = f(Xt*(Wi^T) + Ht-1*(Ri^T) + Wbi + Rbi)"
|
||
|
"This operator has **optional** inputs/outputs. See [the doc](IR.md) for more details about the representation of optional arguments. An empty string may be used in the place of an actual argument's name to indicate a missing argument. Trailing optional arguments (those not followed by an argument that is present) may also be simply omitted."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
1. `W`: memref of any type values or tensor of any type values
|
||
|
1. `R`: memref of any type values or tensor of any type values
|
||
|
1. `B`: memref of any type values or tensor of any type values
|
||
|
1. `sequence_lens`: memref of any type values or tensor of any type values
|
||
|
1. `initial_h`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `activation_alpha` | `ArrayAttr` | 32-bit float array attribute attribute |
|
||
|
| `activation_beta` | `ArrayAttr` | 32-bit float array attribute attribute |
|
||
|
| `activations` | `ArrayAttr` | string array attribute attribute |
|
||
|
| `clip` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `direction` | `StringAttr` | string attribute attribute |
|
||
|
| `hidden_size` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
1. `Y_h`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.RandomNormalLike (ONNXRandomNormalLikeOp)
|
||
|
ONNX RandomNormalLike operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Generate a tensor with random values drawn from a normal distribution."
|
||
|
"The shape of the output tensor is copied from the shape of the input tensor,"
|
||
|
"and the parameters of the normal distribution are specified by `mean` and `scale`."
|
||
|
""
|
||
|
"The data type is specified by the 'dtype' argument, or copied from the input tensor if not provided."
|
||
|
"The 'dtype' argument must be one of the data types specified in the 'DataType' enum field in the"
|
||
|
"TensorProto message, and be valid as an output type."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `dtype` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `mean` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `scale` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `seed` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.RandomNormal (ONNXRandomNormalOp)
|
||
|
ONNX RandomNormal operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Generate a tensor with random values drawn from a normal distribution. The shape"
|
||
|
"of the tensor is specified by the `shape` argument and the parameter of the normal distribution"
|
||
|
"specified by `mean` and `scale`."
|
||
|
""
|
||
|
"The data type is specified by the 'dtype' argument. The 'dtype' argument must"
|
||
|
"be one of the data types specified in the 'DataType' enum field in the"
|
||
|
"TensorProto message."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `dtype` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `mean` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `scale` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `seed` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `shape` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.RandomUniformLike (ONNXRandomUniformLikeOp)
|
||
|
ONNX RandomUniformLike operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Generate a tensor with random values drawn from a uniform distribution."
|
||
|
"The shape of the output tensor is copied from the shape of the input tensor,"
|
||
|
"and the parameters of the uniform distribution are specified by `low` and `high`."
|
||
|
""
|
||
|
"The data type is specified by the 'dtype' argument, or copied from the input tensor if not provided."
|
||
|
"The 'dtype' argument must be one of the data types specified in the 'DataType' enum field in the"
|
||
|
"TensorProto message and be valid as an output type."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `dtype` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `high` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `low` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `seed` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.RandomUniform (ONNXRandomUniformOp)
|
||
|
ONNX RandomUniform operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Generate a tensor with random values drawn from a uniform distribution. The shape"
|
||
|
"of the tensor is specified by the `shape` argument and the range by `low` and `high`."
|
||
|
""
|
||
|
"The data type is specified by the 'dtype' argument. The 'dtype' argument must"
|
||
|
"be one of the data types specified in the 'DataType' enum field in the"
|
||
|
"TensorProto message."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `dtype` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `high` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `low` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `seed` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `shape` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Range (ONNXRangeOp)
|
||
|
ONNX Range operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Generate a tensor containing a sequence of numbers that begin at `start` and extends by increments of `delta` "
|
||
|
"up to `limit` (exclusive)."
|
||
|
""
|
||
|
"The number of elements in the output of range is computed as below-"
|
||
|
""
|
||
|
"`number_of_elements = max( ceil( (limit - start) / delta ) , 0 )`"
|
||
|
""
|
||
|
"The pseudocode determining the contents of the output is shown below-"
|
||
|
""
|
||
|
"`for(int i=0; i<number_of_elements; ++i)`"
|
||
|
""
|
||
|
"`{`"
|
||
|
" "
|
||
|
"` output[i] = start + (i * delta); ` "
|
||
|
""
|
||
|
"`}` "
|
||
|
""
|
||
|
"`Example 1`"
|
||
|
"Inputs: start = 3, limit = 9, delta = 3"
|
||
|
"Output: [3, 6]"
|
||
|
""
|
||
|
"`Example 2`"
|
||
|
"Inputs: start = 10, limit = 4, delta = -2"
|
||
|
"Output: [10, 8, 6]"
|
||
|
""
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `start`: memref of any type values or tensor of any type values
|
||
|
1. `limit`: memref of any type values or tensor of any type values
|
||
|
1. `delta`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Reciprocal (ONNXReciprocalOp)
|
||
|
ONNX Reciprocal operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Reciprocal takes one input data (Tensor<T>) and produces one output data"
|
||
|
"(Tensor<T>) where the reciprocal is, y = 1/x, is applied to"
|
||
|
"the tensor elementwise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceL1 (ONNXReduceL1Op)
|
||
|
ONNX ReduceL1 operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the L1 norm of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceL2 (ONNXReduceL2Op)
|
||
|
ONNX ReduceL2 operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the L2 norm of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceLogSumExp (ONNXReduceLogSumExpOp)
|
||
|
ONNX ReduceLogSumExp operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the log sum exponent of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceLogSum (ONNXReduceLogSumOp)
|
||
|
ONNX ReduceLogSum operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the log sum of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceMax (ONNXReduceMaxOp)
|
||
|
ONNX ReduceMax operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the max of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceMean (ONNXReduceMeanOp)
|
||
|
ONNX ReduceMean operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the mean of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceMin (ONNXReduceMinOp)
|
||
|
ONNX ReduceMin operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the min of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceProd (ONNXReduceProdOp)
|
||
|
ONNX ReduceProd operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the product of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceSum (ONNXReduceSumOp)
|
||
|
ONNX ReduceSum operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the sum of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReduceSumSquare (ONNXReduceSumSquareOp)
|
||
|
ONNX ReduceSumSquare operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Computes the sum square of the input tensor's element along the provided axes. The resulted"
|
||
|
"tensor has the same rank as the input if keepdims equal 1. If keepdims equal 0, then"
|
||
|
"the resulted tensor have the reduced dimension pruned."
|
||
|
""
|
||
|
"The above behavior is similar to numpy, with the exception that numpy default keepdims to"
|
||
|
"False instead of True."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `reduced`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Relu (ONNXReluOp)
|
||
|
ONNX Relu operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Relu takes one input data (Tensor<T>) and produces one output data"
|
||
|
"(Tensor<T>) where the rectified linear function, y = max(0, x), is applied to"
|
||
|
"the tensor elementwise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Reshape (ONNXReshapeOp)
ONNX Reshape operation

#### Description:


"Reshape the input tensor similar to numpy.reshape."
"First input is the data tensor, second input is a shape tensor which specifies the output shape. It outputs the reshaped tensor."
"At most one dimension of the new shape can be -1. In this case, the value is"
"inferred from the size of the tensor and the remaining dimensions. A dimension"
"could also be 0, in which case the actual dimension value is unchanged (i.e. taken"
"from the input tensor)."

#### Operands:

1. `data`: memref of any type values or tensor of any type values
1. `shape`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `reshaped`: memref of any type values or tensor of any type values

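A minimal sketch of how the `0` and `-1` entries of the shape input are resolved, under the rules quoted above; the helper name `onnx_reshape` is an assumption:

```python
import numpy as np

def onnx_reshape(data, shape):
    """Sketch of Reshape's shape handling: 0 copies the input dimension, -1 is inferred."""
    out = [data.shape[i] if d == 0 else d for i, d in enumerate(shape)]
    if -1 in out:
        known = np.prod([d for d in out if d != -1])
        out[out.index(-1)] = data.size // int(known)
    return data.reshape(out)

x = np.zeros((2, 3, 4))
print(onnx_reshape(x, [0, -1]).shape)  # (2, 12)
```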
### onnx.Resize (ONNXResizeOp)
|
||
|
ONNX Resize operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Resize the input tensor. In general, it calculates every value in the output tensor as a weighted average of neighborhood (a.k.a. sampling locations) in the input tensor."
|
||
|
"Each dimension value of the output tensor is:"
|
||
|
" output_dimension = floor(input_dimension * (roi_end - roi_start) * scale) if input \"sizes\" is not specified."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
1. `roi`: memref of any type values or tensor of any type values
|
||
|
1. `scales`: memref of any type values or tensor of any type values
|
||
|
1. `sizes`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `coordinate_transformation_mode` | `StringAttr` | string attribute attribute |
|
||
|
| `cubic_coeff_a` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `exclude_outside` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `extrapolation_value` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `mode` | `StringAttr` | string attribute attribute |
|
||
|
| `nearest_mode` | `StringAttr` | string attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ReverseSequence (ONNXReverseSequenceOp)
|
||
|
ONNX ReverseSequence operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Reverse batch of sequences having different lengths specified by `sequence_lens`."
|
||
|
""
|
||
|
"For each slice i iterating on batch axis, the operator reverses the first sequence_lens[i] elements on time axis,"
|
||
|
"and copies elements whose index's beyond sequence_lens[i] to the output. So the output slice i contains reversed"
|
||
|
"sequences on the first sequence_lens[i] elements, then have original values copied for the other elements."
|
||
|
""
|
||
|
"Example 1:"
|
||
|
" input = [[0.0, 4.0, 8.0, 12.0],"
|
||
|
" [1.0, 5.0, 9.0, 13.0],"
|
||
|
" [2.0, 6.0, 10.0, 14.0],"
|
||
|
" [3.0, 7.0, 11.0, 15.0]]"
|
||
|
" sequence_lens = [4, 3, 2, 1]"
|
||
|
" time_axis = 0"
|
||
|
" batch_axis = 1"
|
||
|
""
|
||
|
" output = [[3.0, 6.0, 9.0, 12.0],"
|
||
|
" [2.0, 5.0, 8.0, 13.0],"
|
||
|
" [1.0, 4.0, 10.0, 14.0],"
|
||
|
" [0.0, 7.0, 11.0, 15.0]]"
|
||
|
""
|
||
|
"Example 2:"
|
||
|
" input = [[0.0, 1.0, 2.0, 3.0 ],"
|
||
|
" [4.0, 5.0, 6.0, 7.0 ],"
|
||
|
" [8.0, 9.0, 10.0, 11.0],"
|
||
|
" [12.0, 13.0, 14.0, 15.0]]"
|
||
|
" sequence_lens = [1, 2, 3, 4]"
|
||
|
" time_axis = 1"
|
||
|
" batch_axis = 0"
|
||
|
""
|
||
|
" output = [[0.0, 1.0, 2.0, 3.0 ],"
|
||
|
" [5.0, 4.0, 6.0, 7.0 ],"
|
||
|
" [10.0, 9.0, 8.0, 11.0],"
|
||
|
" [15.0, 14.0, 13.0, 12.0]]"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
1. `sequence_lens`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `batch_axis` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `time_axis` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.RoiAlign (ONNXRoiAlignOp)
|
||
|
ONNX RoiAlign operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Region of Interest (RoI) align operation described in the"
|
||
|
"[Mask R-CNN paper](https://arxiv.org/abs/1703.06870)."
|
||
|
"RoiAlign consumes an input tensor X and region of interests (rois)"
|
||
|
"to apply pooling across each RoI; it produces a 4-D tensor of shape"
|
||
|
"(num_rois, C, output_height, output_width)."
|
||
|
""
|
||
|
"RoiAlign is proposed to avoid the misalignment by removing"
|
||
|
"quantizations while converting from original image into feature"
|
||
|
"map and from feature map into RoI feature; in each ROI bin,"
|
||
|
"the value of the sampled locations are computed directly"
|
||
|
"through bilinear interpolation."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
1. `rois`: memref of any type values or tensor of any type values
|
||
|
1. `batch_indices`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `mode` | `StringAttr` | string attribute attribute |
|
||
|
| `output_height` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `output_width` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `sampling_ratio` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `spatial_scale` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Round (ONNXRoundOp)
|
||
|
ONNX Round operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Round takes one input Tensor and rounds the values, element-wise, meaning"
|
||
|
"it finds the nearest integer for each value."
|
||
|
"In case of halfs, the rule is to round them to the nearest even integer."
|
||
|
"The output tensor has the same shape and type as the input."
|
||
|
""
|
||
|
"Examples:"
|
||
|
"```"
|
||
|
"round([0.9]) = [1.0]"
|
||
|
"round([2.5]) = [2.0]"
|
||
|
"round([2.3]) = [2.0]"
|
||
|
"round([1.5]) = [2.0]"
|
||
|
"round([-4.5]) = [-4.0]"
|
||
|
"```"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Scan (ONNXScanOp)
|
||
|
ONNX Scan operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Scan can be used to iterate over one or more scan_input tensors,"
|
||
|
"constructing zero or more scan_output tensors. It combines ideas from general recurrences,"
|
||
|
"functional programming constructs such as scan, fold, map, and zip and is intended to enable"
|
||
|
"generalizations of RNN-like constructs for sequence-to-sequence processing."
|
||
|
"Other tensors (referred to as state_variables here) can be used to carry a state"
|
||
|
"when iterating from one element to another (similar to hidden-state in RNNs, also referred"
|
||
|
"to as loop-carried dependences in the context of loops)."
|
||
|
"Many common usages involve a single scan_input tensor (where functionality"
|
||
|
"similar to scan, fold and map can be obtained). When more than one scan_input is used,"
|
||
|
"a behavior similar to zip is obtained."
|
||
|
""
|
||
|
"The attribute body must be a graph, specifying the computation to be performed in"
|
||
|
"every iteration. It takes as input the current values of the state_variables and"
|
||
|
"the current iterated element of the scan_inputs. It must return the (updated) values"
|
||
|
"of the state_variables and zero or more scan_output_element tensors. The values of the"
|
||
|
"scan_output_element tensors are concatenated over all the iterations to produce the"
|
||
|
"scan_output values of the scan construct (similar to the concatenated intermediate"
|
||
|
"hidden-state values of RNN-like constructs). All the output tensors (state_variables as"
|
||
|
"well as scan_output_element tensors) are required to have the same shape in each iteration"
|
||
|
"of the loop (a restriction imposed to enable efficient memory allocation)."
|
||
|
""
|
||
|
"Note that the iterated element passed to the body subgraph does not have a sequence"
|
||
|
"axis. It will have a rank one less than the rank of the corresponding scan_input."
|
||
|
""
|
||
|
"The scan operation returns the final values of the state_variables as well as the"
|
||
|
"scan_outputs."
|
||
|
""
|
||
|
"The optional attribute scan_input_directions specifies the direction (forward or backward)"
|
||
|
"for each scan input. If this attribute is omitted, all sequences are scanned in the forward"
|
||
|
"direction. A bidirectional scan may be performed by specifying the same tensor input twice"
|
||
|
"in the scan_inputs, once with a forward direction, and once with a backward direction."
|
||
|
""
|
||
|
"The scan_output of the operation is produced by concatenating the scan_output_element"
|
||
|
"values produced by the body in each iteration. The optional attribute scan_output_directions"
|
||
|
"specifies the direction in which scan_output is constructed (by appending or prepending the"
|
||
|
"scan_output_element to scan_output in each iteration) for each scan_output. If this attribute"
|
||
|
"is omitted, the scan_output_element is appended to the scan_output in each iteration."
|
||
|
""
|
||
|
"The optional attribute scan_input_axes specifies the axis to be scanned for each scan_input."
|
||
|
"If omitted, every scan_input will be scanned in axis 0. For example, if axis 0 is the"
|
||
|
"batch axis and axis 1 is the time axis (to be scanned), specify an axis value of 1."
|
||
|
"Note that scanning a non-zero axis may be less efficient than scanning axis zero."
|
||
|
""
|
||
|
"The optional attribute scan_output_axes specifies the axis along which the scan_outputs"
|
||
|
"are accumulated for each scan_output. For example, if axis 1 is the time axis (to be"
|
||
|
"scanned) for both inputs and outputs, specify a scan_input axis and scan_output axis"
|
||
|
"value of 1."
|
||
|
""
|
||
|
"Note that because of the ONNX restriction that only the last parameter of an operator can"
|
||
|
"be variadic, the initial-states and scan-inputs are listed together as one input parameter."
|
||
|
"Similarly, the final-states and scan-outputs are listed together as one output parameter."
|
||
|
"The attribute num_scan_inputs indicates the number M of scan-inputs."
|
||
|
""
|
||
|
"The behavior of"
|
||
|
""
|
||
|
" Scan <"
|
||
|
" num_scan_inputs = m,"
|
||
|
" body = loop-body,"
|
||
|
" scan_input_axes = [axis_1, ..., axis_m]"
|
||
|
" > (init_1, ..., init_n, scan_1, ..., scan_m)"
|
||
|
""
|
||
|
"is equivalent to the following pseudo-code:"
|
||
|
""
|
||
|
" // scan_i.shape[axis_i] denotes the (max) sequence-length of scan_i"
|
||
|
" // scan_i.shape[axis_i] is required to be equal to scan_j.shape[axis_j] for all i,j."
|
||
|
" sequence_length = scan_1.shape[axis_1];"
|
||
|
""
|
||
|
" // initialize state-variables"
|
||
|
" st_1 = init_1; ... st_n = init_n;"
|
||
|
" // initialize scan-output variables: [] denotes an empty tensor"
|
||
|
" scan_out_1 = []; ...; scan_out_k = [];"
|
||
|
" // identify number of iterations:"
|
||
|
""
|
||
|
" // execute loop"
|
||
|
" for (int t = 0; t < sequence_length; ++t) {"
|
||
|
" // generate the scan-input elements: the notation T<axis=k>[t] indicates the sub-tensor"
|
||
|
" // of rank one less than T obtained by indexing T at position t along axis k."
|
||
|
" si_1 = scan_1<axis=axis_1>[t];"
|
||
|
" ... ;"
|
||
|
" si_m = scan_m<axis=axis_m>[t];"
|
||
|
" // execute loop-body"
|
||
|
" st_1, ..., st_n, so_1, ..., so_k = loop-body(st_1, ..., st_n, si_1, ..., si_m)"
|
||
|
" // accumulate the scan-output elements"
|
||
|
" scan_out_1 = Concat<axis=0>(scan_out_1, so_1); ... ; scan_out_k = Concat<axis=0>(scan_out_k, so_k);"
|
||
|
" }"
|
||
|
""
|
||
|
" return st_1, ..., st_n, scan_out_1, ..., scan_out_k;"
|
||
|
""
|
||
|
"*Sample usage: Encoding RNN using a Scan*"
|
||
|
""
|
||
|
"The following example shows how a simple RNN over an input tensor %X, with weight tensor %Wi,"
|
||
|
"recurrence weight tensor %Ri, bias tensors %Wbi and %Rbi, and initial hidden-state %H_0 can"
|
||
|
"be encoded as a ScanLoop. Note that the loop-body is a nested graph, and it directly computes"
|
||
|
"%Wi, %Ri, %Wbi, and %Rbi (typically constants or initializers in the body graph). If these"
|
||
|
"values are computed in the outer graph, they need to be passed in as extra state_variables."
|
||
|
""
|
||
|
" graph rnn-encoding {"
|
||
|
" %H_0 = ... "
|
||
|
" %X = ..."
|
||
|
" %Y_h, %Y = Scan[body = <graph rnn-cell-1>, num_scan_inputs=1](%H_0, %X)"
|
||
|
" return %Y, %Y_h"
|
||
|
" }"
|
||
|
""
|
||
|
" graph rnn-cell-1 ("
|
||
|
" %H_tminus1[FLOAT, tensor]"
|
||
|
" %X_t[FLOAT, tensor]"
|
||
|
" ) {"
|
||
|
" %Wi = ..."
|
||
|
" %Ri = ..."
|
||
|
" %Wbi = ..."
|
||
|
" %Rbi = ..."
|
||
|
" %t1 = X_t * (Wi^T)"
|
||
|
" %t2 = H_tminus1*(Ri^T)"
|
||
|
" %t3 = Add(%t1, %t2)"
|
||
|
" %t4 = Add(%t3, %Wbi)"
|
||
|
" %t5 = Add(%t4, %Rbi)"
|
||
|
" %Ht = Tanh(%t5)"
|
||
|
" %Accumulate = Identity(%Ht)"
|
||
|
" return %Ht, %Accumulate"
|
||
|
" }"
|
||
|
""
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `initial_state_and_scan_inputs`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `body` | `Attribute` | any attribute attribute |
|
||
|
| `num_scan_inputs` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
| `scan_input_axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `scan_input_directions` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `scan_output_axes` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
| `scan_output_directions` | `ArrayAttr` | 64-bit integer array attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `final_state_and_scan_outputs`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ScatterElements (ONNXScatterElementsOp)
|
||
|
ONNX ScatterElements operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"ScatterElements takes three inputs `data`, `updates`, and `indices` of the same"
|
||
|
"rank r >= 1 and an optional attribute axis that identifies an axis of `data`"
|
||
|
"(by default, the outer-most axis, that is axis 0). The output of the operation"
|
||
|
"is produced by creating a copy of the input `data`, and then updating its value"
|
||
|
"to values specified by `updates` at specific index positions specified by"
|
||
|
"`indices`. Its output shape is the same as the shape of `data`."
|
||
|
""
|
||
|
"For each entry in `updates`, the target index in `data` is obtained by combining"
|
||
|
"the corresponding entry in `indices` with the index of the entry itself: the"
|
||
|
"index-value for dimension = axis is obtained from the value of the corresponding"
|
||
|
"entry in `indices` and the index-value for dimension != axis is obtained from the"
|
||
|
"index of the entry itself."
|
||
|
""
|
||
|
"For instance, in a 2-D tensor case, the update corresponding to the [i][j] entry"
|
||
|
"is performed as below:"
|
||
|
"```"
|
||
|
" output[indices[i][j]][j] = updates[i][j] if axis = 0, "
|
||
|
" output[i][indices[i][j]] = updates[i][j] if axis = 1,"
|
||
|
"```"
|
||
|
""
|
||
|
"This operator is the inverse of GatherElements. It is similar to Torch's Scatter operation."
|
||
|
""
|
||
|
"Example 1:"
|
||
|
"```"
|
||
|
" data = ["
|
||
|
" [0.0, 0.0, 0.0],"
|
||
|
" [0.0, 0.0, 0.0],"
|
||
|
" [0.0, 0.0, 0.0],"
|
||
|
" ]"
|
||
|
" indices = ["
|
||
|
" [1, 0, 2],"
|
||
|
" [0, 2, 1],"
|
||
|
" ]"
|
||
|
" updates = ["
|
||
|
" [1.0, 1.1, 1.2],"
|
||
|
" [2.0, 2.1, 2.2],"
|
||
|
" ]"
|
||
|
" output = ["
|
||
|
" [2.0, 1.1, 0.0]"
|
||
|
" [1.0, 0.0, 2.2]"
|
||
|
" [0.0, 2.1, 1.2]"
|
||
|
" ]"
|
||
|
"```"
|
||
|
"Example 2:"
|
||
|
"```"
|
||
|
" data = [[1.0, 2.0, 3.0, 4.0, 5.0]]"
|
||
|
" indices = [[1, 3]]"
|
||
|
" updates = [[1.1, 2.1]]"
|
||
|
" axis = 1"
|
||
|
" output = [[1.0, 1.1, 3.0, 2.1, 5.0]]"
|
||
|
"```"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
1. `indices`: memref of any type values or tensor of any type values
|
||
|
1. `updates`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.ScatterND (ONNXScatterNDOp)
|
||
|
ONNX ScatterND operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"ScatterND takes three inputs `data` tensor of rank r >= 1, `indices` tensor of rank q >= 1,"
|
||
|
"and `updates` tensor of rank q + r - indices.shape[-1] - 1. The output of the operation"
|
||
|
"is produced by creating a copy of the input `data`, and then updating its value to values"
|
||
|
"specified by `updates` at specific index positions specified by `indices`. Its output shape"
|
||
|
"is the same as the shape of `data`. Note that `indices` should not have duplicate entries."
|
||
|
"That is, two or more `updates` for the same index-location is not supported."
|
||
|
""
|
||
|
"`indices` is an integer tensor. Let k denote indices.shape[-1], the last dimension in the shape of `indices`."
|
||
|
" `indices` is treated as a (q-1)-dimensional tensor of k-tuples, where each k-tuple is a partial-index into `data`."
|
||
|
"Hence, k can be a value at most the rank of `data`. When k equals rank(data), each update entry specifies an"
|
||
|
"update to a single element of the tensor. When k is less than rank(data) each update entry specifies an"
|
||
|
"update to a slice of the tensor."
|
||
|
""
|
||
|
"`updates` is treated as a (q-1)-dimensional tensor of replacement-slice-values. Thus, the"
|
||
|
"first (q-1) dimensions of updates.shape must match the first (q-1) dimensions of indices.shape."
|
||
|
"The remaining dimensions of `updates` correspond to the dimensions of the"
|
||
|
"replacement-slice-values. Each replacement-slice-value is a (r-k) dimensional tensor,"
|
||
|
"corresponding to the trailing (r-k) dimensions of `data`. Thus, the shape of `updates`"
|
||
|
"must equal indices.shape[0:q-1] ++ data.shape[k:r-1], where ++ denotes the concatenation"
|
||
|
"of shapes."
|
||
|
""
|
||
|
"The `output` is calculated via the following equation:"
|
||
|
""
|
||
|
" output = np.copy(data)"
|
||
|
" update_indices = indices.shape[:-1]"
|
||
|
" for idx in np.ndindex(update_indices):"
|
||
|
" output[indices[idx]] = updates[idx]"
|
||
|
""
|
||
|
"The order of iteration in the above loop is not specified."
|
||
|
"In particular, indices should not have duplicate entries: that is, if idx1 != idx2, then indices[idx1] != indices[idx2]."
|
||
|
"This ensures that the output value does not depend on the iteration order."
|
||
|
""
|
||
|
"This operator is the inverse of GatherND."
|
||
|
""
|
||
|
"Example 1:"
|
||
|
"```"
|
||
|
" data = [1, 2, 3, 4, 5, 6, 7, 8]"
|
||
|
" indices = [[4], [3], [1], [7]]"
|
||
|
" updates = [9, 10, 11, 12]"
|
||
|
" output = [1, 11, 3, 10, 9, 6, 7, 12]"
|
||
|
"```"
|
||
|
""
|
||
|
"Example 2:"
|
||
|
"```"
|
||
|
" data = [[[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],"
|
||
|
" [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],"
|
||
|
" [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]],"
|
||
|
" [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]"
|
||
|
" indices = [[0], [2]]"
|
||
|
" updates = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],"
|
||
|
" [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]]]"
|
||
|
" output = [[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],"
|
||
|
" [[1, 2, 3, 4], [5, 6, 7, 8], [8, 7, 6, 5], [4, 3, 2, 1]],"
|
||
|
" [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],"
|
||
|
" [[8, 7, 6, 5], [4, 3, 2, 1], [1, 2, 3, 4], [5, 6, 7, 8]]]"
|
||
|
"```"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
1. `indices`: memref of any type values or tensor of any type values
|
||
|
1. `updates`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Scatter (ONNXScatterOp)
|
||
|
ONNX Scatter operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"This operator is deprecated. Please use ScatterElements, which provides the same functionality."
|
||
|
""
|
||
|
"Scatter takes three inputs `data`, `updates`, and `indices` of the same"
|
||
|
"rank r >= 1 and an optional attribute axis that identifies an axis of `data`"
|
||
|
"(by default, the outer-most axis, that is axis 0). The output of the operation"
|
||
|
"is produced by creating a copy of the input `data`, and then updating its value"
|
||
|
"to values specified by `updates` at specific index positions specified by"
|
||
|
"`indices`. Its output shape is the same as the shape of `data`."
|
||
|
""
|
||
|
"For each entry in `updates`, the target index in `data` is obtained by combining"
|
||
|
"the corresponding entry in `indices` with the index of the entry itself: the"
|
||
|
"index-value for dimension = axis is obtained from the value of the corresponding"
|
||
|
"entry in `indices` and the index-value for dimension != axis is obtained from the"
|
||
|
"index of the entry itself."
|
||
|
""
|
||
|
"For instance, in a 2-D tensor case, the update corresponding to the [i][j] entry"
|
||
|
"is performed as below:"
|
||
|
"```"
|
||
|
" output[indices[i][j]][j] = updates[i][j] if axis = 0, "
|
||
|
" output[i][indices[i][j]] = updates[i][j] if axis = 1,"
|
||
|
"```"
|
||
|
""
|
||
|
"This operator is the inverse of GatherElements. It is similar to Torch's Scatter operation."
|
||
|
""
|
||
|
"Example 1:"
|
||
|
"```"
|
||
|
" data = ["
|
||
|
" [0.0, 0.0, 0.0],"
|
||
|
" [0.0, 0.0, 0.0],"
|
||
|
" [0.0, 0.0, 0.0],"
|
||
|
" ]"
|
||
|
" indices = ["
|
||
|
" [1, 0, 2],"
|
||
|
" [0, 2, 1],"
|
||
|
" ]"
|
||
|
" updates = ["
|
||
|
" [1.0, 1.1, 1.2],"
|
||
|
" [2.0, 2.1, 2.2],"
|
||
|
" ]"
|
||
|
" output = ["
|
||
|
" [2.0, 1.1, 0.0]"
|
||
|
" [1.0, 0.0, 2.2]"
|
||
|
" [0.0, 2.1, 1.2]"
|
||
|
" ]"
|
||
|
"```"
|
||
|
"Example 2:"
|
||
|
"```"
|
||
|
" data = [[1.0, 2.0, 3.0, 4.0, 5.0]]"
|
||
|
" indices = [[1, 3]]"
|
||
|
" updates = [[1.1, 2.1]]"
|
||
|
" axis = 1"
|
||
|
" output = [[1.0, 1.1, 3.0, 2.1, 5.0]]"
|
||
|
"```"
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `data`: memref of any type values or tensor of any type values
|
||
|
1. `indices`: memref of any type values or tensor of any type values
|
||
|
1. `updates`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Selu (ONNXSeluOp)
|
||
|
ONNX Selu operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Selu takes one input data (Tensor<T>) and produces one output data"
|
||
|
"(Tensor<T>) where the scaled exponential linear unit function,"
|
||
|
"`y = gamma * (alpha * e^x - alpha) for x <= 0`, `y = gamma * x for x > 0`,"
|
||
|
"is applied to the tensor elementwise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `alpha` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `gamma` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.SequenceAt (ONNXSequenceAtOp)
|
||
|
ONNX SequenceAt operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Outputs a tensor copy from the tensor at 'position' in 'input_sequence'."
|
||
|
"Accepted range for 'position' is in `[-n, n - 1]`, where `n` is the number of tensors in 'input_sequence'."
|
||
|
"Negative value means counting positions from the back."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input_sequence`: memref of any type values or tensor of any type values
|
||
|
1. `position`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `tensor`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.SequenceConstruct (ONNXSequenceConstructOp)
|
||
|
ONNX SequenceConstruct operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Construct a tensor sequence containing 'inputs' tensors."
|
||
|
"All tensors in 'inputs' must have the same data type."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `inputs`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output_sequence`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.SequenceEmpty (ONNXSequenceEmptyOp)
|
||
|
ONNX SequenceEmpty operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Construct an empty tensor sequence, with given data type."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `dtype` | `IntegerAttr` | 64-bit integer attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.SequenceErase (ONNXSequenceEraseOp)
|
||
|
ONNX SequenceErase operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Outputs a tensor sequence that removes the tensor at 'position' from 'input_sequence'."
|
||
|
"Accepted range for 'position' is in `[-n, n - 1]`, where `n` is the number of tensors in 'input_sequence'."
|
||
|
"Negative value means counting positions from the back."
|
||
|
"'position' is optional, by default it erases the last tensor from 'input_sequence'."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input_sequence`: memref of any type values or tensor of any type values
|
||
|
1. `position`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output_sequence`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.SequenceInsert (ONNXSequenceInsertOp)
|
||
|
ONNX SequenceInsert operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Outputs a tensor sequence that inserts 'tensor' into 'input_sequence' at 'position'."
|
||
|
"'tensor' must have the same data type as 'input_sequence'."
|
||
|
"Accepted range for 'position' is in `[-n, n]`, where `n` is the number of tensors in 'input_sequence'."
|
||
|
"Negative value means counting positions from the back."
|
||
|
"'position' is optional, by default it inserts 'tensor' to the back of 'input_sequence'."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input_sequence`: memref of any type values or tensor of any type values
|
||
|
1. `tensor`: memref of any type values or tensor of any type values
|
||
|
1. `position`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output_sequence`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.SequenceLength (ONNXSequenceLengthOp)
|
||
|
ONNX SequenceLength operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Produces a scalar(tensor of empty shape) containing the number of tensors in 'input_sequence'."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input_sequence`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `length`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Shape (ONNXShapeOp)
ONNX Shape operation

#### Description:


"Takes a tensor as input and outputs an 1D int64 tensor containing the shape of the input tensor."

#### Operands:

1. `data`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `shape`: memref of any type values or tensor of any type values

### onnx.Shrink (ONNXShrinkOp)
|
||
|
ONNX Shrink operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Shrink takes one input data (Tensor<numeric>) and produces one Tensor output,"
|
||
|
"having same datatype and shape with input. It has two attributes, lambd and"
|
||
|
"bias. The formula of this operator is: If x < -lambd, y = x + bias;"
|
||
|
"If x > lambd, y = x - bias; Otherwise, y = 0."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
| Attribute | MLIR Type | Description |
|
||
|
| :-------: | :-------: | ----------- |
|
||
|
| `bias` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
| `lambd` | `FloatAttr` | 32-bit float attribute attribute |
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Sigmoid (ONNXSigmoidOp)
|
||
|
ONNX Sigmoid operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Sigmoid takes one input data (Tensor<T>) and produces one output data"
|
||
|
"(Tensor<T>) where the sigmoid function, y = 1 / (1 + exp(-x)), is applied to the"
|
||
|
"tensor elementwise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `X`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `Y`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Sign (ONNXSignOp)
|
||
|
ONNX Sign operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Calculate the sign of the given input tensor element-wise."
|
||
|
"If input > 0, output 1. if input < 0, output -1. if input == 0, output 0."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Sin (ONNXSinOp)
|
||
|
ONNX Sin operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Calculates the sine of the given input tensor, element-wise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Sinh (ONNXSinhOp)
|
||
|
ONNX Sinh operation
|
||
|
|
||
|
#### Description:
|
||
|
|
||
|
|
||
|
"Calculates the hyperbolic sine of the given input tensor element-wise."
|
||
|
|
||
|
#### Operands:
|
||
|
|
||
|
1. `input`: memref of any type values or tensor of any type values
|
||
|
|
||
|
#### Attributes:
|
||
|
|
||
|
|
||
|
#### Results:
|
||
|
|
||
|
1. `output`: memref of any type values or tensor of any type values
|
||
|
|
||
|
### onnx.Size (ONNXSizeOp)
ONNX Size operation

#### Description:


"Takes a tensor as input and outputs a int64 scalar that equals to the total number of elements of the input tensor."

#### Operands:

1. `data`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `size`: memref of any type values or tensor of any type values

### onnx.Slice (ONNXSliceOp)
ONNX Slice operation

#### Description:


"Produces a slice of the input tensor along multiple axes. Similar to numpy:"
"https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html"
"Slice uses the `starts`, `ends`, `axes` and `steps` inputs to specify the start index, end"
"index and step for each axis in the list of axes; it uses this information to"
"slice the input `data` tensor. If a negative value is passed for any of the"
"start or end indices, it represents the number of elements before the end of that"
"dimension. If the value passed to start or end is larger than `n` (the"
"number of elements in this dimension), it represents `n`. For slicing to the"
"end of a dimension with unknown size, it is recommended to pass in `INT_MAX`."
"If a negative value is passed for step, it represents slicing backward."
"If `axes` are omitted, they are set to `[0, ..., ndim-1]`."
"If `steps` are omitted, they are set to `[1, ..., 1]` of length `len(starts)`."
"Example 1:"
"  data = ["
"      [1, 2, 3, 4],"
"      [5, 6, 7, 8],"
"  ]"
"  axes = [0, 1]"
"  starts = [1, 0]"
"  ends = [2, 3]"
"  steps = [1, 2]"
"  result = ["
"      [5, 7],"
"  ]"
"Example 2:"
"  data = ["
"      [1, 2, 3, 4],"
"      [5, 6, 7, 8],"
"  ]"
"  starts = [0, 1]"
"  ends = [-1, 1000]"
"  result = ["
"      [2, 3, 4],"
"  ]"

#### Operands:

1. `data`: memref of any type values or tensor of any type values
1. `starts`: memref of any type values or tensor of any type values
1. `ends`: memref of any type values or tensor of any type values
1. `axes`: memref of any type values or tensor of any type values
1. `steps`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `output`: memref of any type values or tensor of any type values

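Example 1 above can be reproduced with plain numpy indexing; this is an illustrative sketch of the semantics, not the dialect's lowering:

```python
import numpy as np

data = np.array([[1, 2, 3, 4],
                 [5, 6, 7, 8]])
# Example 1 from the description: axes=[0, 1], starts=[1, 0], ends=[2, 3], steps=[1, 2]
axes, starts, ends, steps = [0, 1], [1, 0], [2, 3], [1, 2]

slices = [slice(None)] * data.ndim
for ax, s, e, st in zip(axes, starts, ends, steps):
    slices[ax] = slice(s, e, st)
print(data[tuple(slices)])  # [[5 7]]
```
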
### onnx.Softmax (ONNXSoftmaxOp)
ONNX Softmax operation

#### Description:


"The operator computes the softmax (normalized exponential) values for each layer in the batch"
" of the given input."
""
"The input does not need to explicitly be a 2D vector; rather, it will be"
"coerced into one. For an arbitrary n-dimensional tensor"
"input \in [a_0, a_1, ..., a_{k-1}, a_k, ..., a_{n-1}] and k is"
"the axis provided, then input will be coerced into a 2-dimensional tensor with"
"dimensions [a_0 * ... * a_{k-1}, a_k * ... * a_{n-1}]. For the default"
"case where axis=1, this means the input tensor will be coerced into a 2D tensor"
"of dimensions [a_0, a_1 * ... * a_{n-1}], where a_0 is often the batch size."
"In this situation, we must have a_0 = N and a_1 * ... * a_{n-1} = D."
"Each of these dimensions must be matched correctly, or else the operator"
"will throw errors. The output tensor has the same shape"
"and contains the softmax values of the corresponding input."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

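A minimal numpy sketch of the 2-D coercion described above (the function name and the max-subtraction for numerical stability are illustrative choices, not dictated by the op definition):

```python
import numpy as np

def softmax_coerced_2d(x, axis=1):
    # Coerce the input into shape [a_0 * ... * a_{axis-1}, a_axis * ... * a_{n-1}],
    # apply softmax per row, then restore the original shape.
    shape = x.shape
    x2d = x.reshape(int(np.prod(shape[:axis])), -1)
    e = np.exp(x2d - x2d.max(axis=1, keepdims=True))  # stabilize before exponentiating
    return (e / e.sum(axis=1, keepdims=True)).reshape(shape)

print(softmax_coerced_2d(np.arange(6, dtype=np.float64).reshape(2, 3)))
```
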
### onnx.Softplus (ONNXSoftplusOp)
ONNX Softplus operation

#### Description:


"Softplus takes one input data (Tensor<T>) and produces one output data"
"(Tensor<T>) where the softplus function, y = ln(exp(x) + 1), is applied to"
"the tensor elementwise."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `Y`: memref of any type values or tensor of any type values

### onnx.Softsign (ONNXSoftsignOp)
ONNX Softsign operation

#### Description:


"Calculates the softsign (x/(1+|x|)) of the given input tensor element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.SpaceToDepth (ONNXSpaceToDepthOp)
ONNX SpaceToDepth operation

#### Description:


"SpaceToDepth rearranges blocks of spatial data into depth. More specifically,"
"this op outputs a copy of the input tensor where values from the height and width dimensions"
"are moved to the depth dimension."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `blocksize` | `IntegerAttr` | 64-bit integer attribute attribute |

#### Results:

1. `output`: memref of any type values or tensor of any type values

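A rough numpy sketch of the rearrangement, assuming an NCHW-laid-out input whose spatial dimensions are divisible by `blocksize` (the layout and the helper name are assumptions for illustration only):

```python
import numpy as np

def space_to_depth(x, blocksize):
    # Move blocksize x blocksize spatial blocks into the channel dimension.
    n, c, h, w = x.shape
    x = x.reshape(n, c, h // blocksize, blocksize, w // blocksize, blocksize)
    x = x.transpose(0, 3, 5, 1, 2, 4)
    return x.reshape(n, c * blocksize * blocksize, h // blocksize, w // blocksize)

x = np.arange(16).reshape(1, 1, 4, 4)
print(space_to_depth(x, 2).shape)  # (1, 4, 2, 2)
```
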
### onnx.Split (ONNXSplitOp)
ONNX Split operation

#### Description:


"Split a tensor into a list of tensors, along the specified"
"'axis'. Lengths of the parts can be specified using argument 'split'."
"Otherwise, the tensor is split into equal sized parts."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
| `split` | `ArrayAttr` | 64-bit integer array attribute attribute |

#### Results:

1. `outputs`: memref of any type values or tensor of any type values

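A short numpy sketch of both modes (equal-sized parts, and explicit lengths analogous to the 'split' attribute); the example shapes are arbitrary:

```python
import numpy as np

x = np.arange(12).reshape(2, 6)
# Equal-sized parts when no lengths are given: three chunks of width 2 along axis 1.
equal = np.split(x, 3, axis=1)
# Explicit lengths, analogous to split=[1, 2, 3] along axis 1.
uneven = np.split(x, np.cumsum([1, 2, 3])[:-1], axis=1)
print([p.shape for p in equal])   # [(2, 2), (2, 2), (2, 2)]
print([p.shape for p in uneven])  # [(2, 1), (2, 2), (2, 3)]
```
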
### onnx.SplitToSequence (ONNXSplitToSequenceOp)
ONNX SplitToSequence operation

#### Description:


"Split a tensor into a sequence of tensors, along the specified"
"'axis'. Lengths of the parts can be specified using argument 'split'."
"'split' must contain only positive numbers."
"'split' is either a scalar (tensor of empty shape), or a 1-D tensor."
"If 'split' is a scalar, then 'input' will be split into equally sized chunks (if possible)."
"The last chunk will be smaller if the 'input' size along the given axis 'axis' is not divisible"
"by 'split'."
"Otherwise, the tensor is split into 'size(split)' chunks, with lengths of the parts on 'axis'"
"specified in 'split'. In this scenario, the sum of entries in 'split' must be equal to the"
"dimension size of the input tensor on 'axis'."

#### Operands:

1. `input`: memref of any type values or tensor of any type values
1. `split`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
| `keepdims` | `IntegerAttr` | 64-bit integer attribute attribute |

#### Results:

1. `output_sequence`: memref of any type values or tensor of any type values

### onnx.Sqrt (ONNXSqrtOp)
ONNX Sqrt operation

#### Description:


"Square root takes one input data (Tensor<T>) and produces one output data"
"(Tensor<T>) where the square root is, y = x^0.5, is applied to"
"the tensor elementwise. If x is negative, then it will return NaN."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `Y`: memref of any type values or tensor of any type values

### onnx.Squeeze (ONNXSqueezeOp)
ONNX Squeeze operation

#### Description:


"Remove single-dimensional entries from the shape of a tensor."
"Takes a parameter `axes` with a list of axes to squeeze."
"If `axes` is not provided, all the single dimensions will be removed from"
"the shape. If an axis is selected with shape entry not equal to one, an error is raised."

#### Operands:

1. `data`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |

#### Results:

1. `squeezed`: memref of any type values or tensor of any type values

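A minimal numpy sketch of the semantics above; the helper name and the handling of negative axes are illustrative assumptions:

```python
import numpy as np

def squeeze(data, axes=None):
    # Remove size-1 dimensions; if 'axes' is given, only those axes are removed.
    if axes is None:
        return np.squeeze(data)
    axes = [a % data.ndim for a in axes]  # allow negative axes (assumed behaviour)
    for a in axes:
        assert data.shape[a] == 1, "selected axis must have size 1"
    return np.squeeze(data, axis=tuple(axes))

x = np.zeros((1, 3, 1, 5))
print(squeeze(x, axes=[0, 2]).shape)  # (3, 5)
```
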
### onnx.StringNormalizer (ONNXStringNormalizerOp)
ONNX StringNormalizer operation

#### Description:


"StringNormalization performs string operations for basic cleaning."
"This operator has only one input (denoted by X) and only one output"
"(denoted by Y). This operator first examines the elements in X"
"and removes elements specified in the "stopwords" attribute."
"After removing stop words, the intermediate result can be further lowercased,"
"uppercased, or just returned depending on the "case_change_action" attribute."
"This operator only accepts [C]- and [1, C]-tensor."
"If all elements in X are dropped, the output will be the empty value of string tensor with shape [1]"
"if input shape is [C] and shape [1, 1] if input shape is [1, C]."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `case_change_action` | `StringAttr` | string attribute attribute |
| `is_case_sensitive` | `IntegerAttr` | 64-bit integer attribute attribute |
| `locale` | `StringAttr` | string attribute attribute |
| `stopwords` | `ArrayAttr` | string array attribute attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values

### onnx.Sub (ONNXSubOp)
ONNX Sub operation

#### Description:


"Performs element-wise binary subtraction (with Numpy-style broadcasting support)."
""
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."

#### Operands:

1. `A`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `C`: memref of any type values or tensor of any type values

### onnx.Sum (ONNXSumOp)
ONNX Sum operation

#### Description:


"Element-wise sum of each of the input tensors (with Numpy-style broadcasting support)."
"All inputs and outputs must have the same data type."
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."

#### Operands:

1. `data_0`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `sum`: memref of any type values or tensor of any type values

### onnx.Tan (ONNXTanOp)
ONNX Tan operation

#### Description:


"Calculates the tangent of the given input tensor, element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.Tanh (ONNXTanhOp)
ONNX Tanh operation

#### Description:


"Calculates the hyperbolic tangent of the given input tensor element-wise."

#### Operands:

1. `input`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `output`: memref of any type values or tensor of any type values

### onnx.TfIdfVectorizer (ONNXTfIdfVectorizerOp)
ONNX TfIdfVectorizer operation

#### Description:


"This transform extracts n-grams from the input sequence and saves them as a vector. Input can"
"be either a 1-D or 2-D tensor. For 1-D input, output is the n-gram representation of that input."
"For 2-D input, the output is also a 2-D tensor whose i-th row is the n-gram representation of the i-th input row."
"More specifically, if input shape is [C], the corresponding output shape would be [max(ngram_indexes) + 1]."
"If input shape is [N, C], this operator produces a [N, max(ngram_indexes) + 1]-tensor."
""
"In contrast to standard n-gram extraction, here, the indexes of extracting an n-gram from the original"
"sequence are not necessarily consecutive numbers. The discontinuity between indexes is controlled by the number of skips."
"If the number of skips is 2, we should skip two tokens when scanning through the original sequence."
"Let's consider an example. Assume that the input sequence is [94, 17, 36, 12, 28] and the number of skips is 2."
"The associated 2-grams are [94, 12] and [17, 28] respectively indexed by [0, 3] and [1, 4]."
"If the number of skips becomes 0, the 2-grams generated are [94, 17], [17, 36], [36, 12], [12, 28]"
"indexed by [0, 1], [1, 2], [2, 3], [3, 4], respectively."
""
"The output vector (denoted by Y) stores the count of each n-gram;"
"Y[ngram_indexes[i]] indicates the times that the i-th n-gram is found. The attribute ngram_indexes is used to determine the mapping"
"between index i and the corresponding n-gram's output coordinate. If pool_int64s is [94, 17, 17, 36], ngram_indexes is [1, 0],"
"ngram_counts=[0, 0], then the Y[0] (first element in Y) and Y[1] (second element in Y) are the counts of [17, 36] and [94, 17],"
"respectively. An n-gram which cannot be found in pool_strings/pool_int64s should be ignored and has no effect on the output."
"Note that we may consider all skips up to S when generating the n-grams."
""
"The examples used above are true if mode is "TF". If mode is "IDF", all the counts larger than 1 would be truncated to 1 and"
"the i-th element in weights would be used to scale (by multiplication) the count of the i-th n-gram in pool. If mode is "TFIDF","
"this operator first computes the counts of all n-grams and then scales them by the associated values in the weights attribute."
""
"Only one of pool_strings and pool_int64s can be set. If pool_int64s is set, the input should be an integer tensor."
"If pool_strings is set, the input must be a string tensor."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `max_gram_length` | `IntegerAttr` | 64-bit integer attribute attribute |
| `max_skip_count` | `IntegerAttr` | 64-bit integer attribute attribute |
| `min_gram_length` | `IntegerAttr` | 64-bit integer attribute attribute |
| `mode` | `StringAttr` | string attribute attribute |
| `ngram_counts` | `ArrayAttr` | 64-bit integer array attribute attribute |
| `ngram_indexes` | `ArrayAttr` | 64-bit integer array attribute attribute |
| `pool_int64s` | `ArrayAttr` | 64-bit integer array attribute attribute |
| `pool_strings` | `ArrayAttr` | string array attribute attribute |
| `weights` | `ArrayAttr` | 32-bit float array attribute attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values

### onnx.ThresholdedRelu (ONNXThresholdedReluOp)
ONNX ThresholdedRelu operation

#### Description:


"ThresholdedRelu takes one input data (Tensor<T>) and produces one output data"
"(Tensor<T>) where the rectified linear function, y = x for x > alpha, y = 0 otherwise,"
"is applied to the tensor elementwise."

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `alpha` | `FloatAttr` | 32-bit float attribute attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values

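A one-line numpy sketch of the thresholded rectifier above (the default `alpha=1.0` here is an assumption for illustration):

```python
import numpy as np

def thresholded_relu(x, alpha=1.0):
    # y = x for x > alpha, y = 0 otherwise, elementwise.
    return np.where(x > alpha, x, 0.0)

print(thresholded_relu(np.array([-1.5, 0.5, 1.0, 2.5])))  # [0.  0.  0.  2.5]
```
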
### onnx.Tile (ONNXTileOp)
ONNX Tile operation

#### Description:


"Constructs a tensor by tiling a given tensor."
"This is the same as function `tile` in Numpy, but no broadcast."
"For example A = [[1, 2], [3, 4]], B = [1, 2], tile(A, B) = [[1, 2, 1, 2], [3, 4, 3, 4]]"

#### Operands:

1. `input`: memref of any type values or tensor of any type values
1. `repeats`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `output`: memref of any type values or tensor of any type values

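The example from the description, reproduced with numpy's `tile` for illustration:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = [1, 2]  # repeats per axis
print(np.tile(A, B))
# [[1 2 1 2]
#  [3 4 3 4]]
```
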
### onnx.TopK (ONNXTopKOp)
ONNX TopK operation

#### Description:


"Retrieve the top-K largest or smallest elements along a specified axis. Given an input tensor of"
"shape [a_1, a_2, ..., a_n, r] and integer argument k, return two outputs:"
"  -Value tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n]"
"   which contains the values of the top k elements along the specified axis"
"  -Index tensor of shape [a_1, a_2, ..., a_{axis-1}, k, a_{axis+1}, ... a_n] which"
"   contains the indices of the top k elements (original indices from the input"
"   tensor)."
""
"If "largest" is 1 (the default value) then the k largest elements are returned."
"If "sorted" is 1 (the default value) then the resulting k elements will be sorted."
"If "sorted" is 0, the order of the returned 'Values' and 'Indices' is undefined."
""
"Given two equivalent values, this operator uses the indices along the axis as"
" a tiebreaker. That is, the element with the lower index will appear first."

#### Operands:

1. `X`: memref of any type values or tensor of any type values
1. `K`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
| `largest` | `IntegerAttr` | 64-bit integer attribute attribute |
| `sorted` | `IntegerAttr` | 64-bit integer attribute attribute |

#### Results:

1. `Values`: memref of any type values or tensor of any type values
1. `Indices`: memref of any type values or tensor of any type values

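A small numpy sketch of the sorted case, using a stable sort so that ties resolve toward the lower index as described above (the helper name is ad hoc):

```python
import numpy as np

def topk(x, k, axis=-1, largest=True):
    # Values and original indices of the top-k elements along 'axis'.
    order = np.argsort(-x if largest else x, axis=axis, kind="stable")
    idx = np.take(order, np.arange(k), axis=axis)
    return np.take_along_axis(x, idx, axis=axis), idx

values, indices = topk(np.array([[3, 1, 2, 4]]), k=2, axis=1)
print(values)   # [[4 3]]
print(indices)  # [[3 0]]
```
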
### onnx.Transpose (ONNXTransposeOp)
ONNX Transpose operation

#### Description:


"Transpose the input tensor similar to numpy.transpose. For example, when"
"perm=(1, 0, 2), given an input tensor of shape (1, 2, 3), the output shape"
"will be (2, 1, 3)."

#### Operands:

1. `data`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `perm` | `ArrayAttr` | 64-bit integer array attribute attribute |

#### Results:

1. `transposed`: memref of any type values or tensor of any type values

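The shape example from the description, checked with numpy for illustration:

```python
import numpy as np

data = np.zeros((1, 2, 3))
perm = (1, 0, 2)
print(np.transpose(data, perm).shape)  # (2, 1, 3)
```
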
### onnx.Unique (ONNXUniqueOp)
ONNX Unique operation

#### Description:


"Find the unique elements of a tensor. When an optional attribute 'axis' is provided, unique subtensors sliced along the 'axis' are returned. "
"Otherwise the input tensor is flattened and unique values of the flattened tensor are returned. "
""
"This operator returns the unique values or sliced unique subtensors of the input tensor and three optional outputs. "
"The first output tensor 'Y' contains all unique values or subtensors of the input. "
"The second optional output tensor 'indices' contains indices of 'Y' elements' first occurrence in 'X'. "
"The third optional output tensor 'inverse_indices' contains, for elements of 'X', their corresponding indices in 'Y'. "
"The fourth optional output tensor 'counts' contains the count of each element of 'Y' in the input. "
""
"Outputs are either sorted in ascending order or optionally in the order of the first occurrence of the values in the input. "
""
"https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html"
""
"Example 1:"
"  input_X = [2, 1, 1, 3, 4, 3]"
"  attribute_sorted = 0"
"  attribute_axis = None"
"  output_Y = [2, 1, 3, 4]"
"  output_indices = [0, 1, 3, 4]"
"  output_inverse_indices = [0, 1, 1, 2, 3, 2]"
"  output_counts = [1, 2, 2, 1]"
""
"Example 2:"
"  input_X = [[1, 3], [2, 3]]"
"  attribute_sorted = 1"
"  attribute_axis = None"
"  output_Y = [1, 2, 3]"
"  output_indices = [0, 2, 1]"
"  output_inverse_indices = [0, 2, 1, 2]"
"  output_counts = [1, 1, 2]"
""
"Example 3:"
"  input_X = [[1, 0, 0], [1, 0, 0], [2, 3, 4]]"
"  attribute_sorted = 1"
"  attribute_axis = 0"
"  output_Y = [[1, 0, 0], [2, 3, 4]]"
"  output_indices = [0, 2]"
"  output_inverse_indices = [0, 0, 1]"
"  output_counts = [2, 1]"
""
"Example 4:"
"  input_x = [[[1., 1.], [0., 1.], [2., 1.], [0., 1.]], "
"             [[1., 1.], [0., 1.], [2., 1.], [0., 1.]]]"
"  attribute_sorted = 1"
"  attribute_axis = 1"
""
"  intermediate data are presented below for better understanding: "
"  "
"  there are 4 subtensors sliced along axis 1 of input_x (shape = (2, 4, 2)):"
"  A: [[1, 1], [1, 1]], "
"     [[0, 1], [0, 1]], "
"     [[2, 1], [2, 1]], "
"     [[0, 1], [0, 1]]."
"  "
"  there are 3 unique subtensors: "
"  [[1, 1], [1, 1]], "
"  [[0, 1], [0, 1]], "
"  [[2, 1], [2, 1]]."
"  "
"  sorted unique subtensors:"
"  B: [[0, 1], [0, 1]], "
"     [[1, 1], [1, 1]], "
"     [[2, 1], [2, 1]]."
"  "
"  output_Y is constructed from B:"
"  [[[0. 1.], [1. 1.], [2. 1.]], "
"   [[0. 1.], [1. 1.], [2. 1.]]]"
""
"  output_indices is to map from B to A:"
"  [1, 0, 2]"
"  "
"  output_inverse_indices is to map from A to B:"
"  [1, 0, 2, 0]"
""
"  output_counts = [2 1 1]"

#### Operands:

1. `X`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axis` | `IntegerAttr` | 64-bit integer attribute attribute |
| `sorted` | `IntegerAttr` | 64-bit integer attribute attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values
1. `indices`: memref of any type values or tensor of any type values
1. `inverse_indices`: memref of any type values or tensor of any type values
1. `counts`: memref of any type values or tensor of any type values

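Example 1 above can be run through numpy's `unique` (linked in the description); note that numpy always returns the sorted order, so the result corresponds to sorted=1 rather than the sorted=0 values shown in Example 1:

```python
import numpy as np

X = np.array([2, 1, 1, 3, 4, 3])
Y, indices, inverse_indices, counts = np.unique(
    X, return_index=True, return_inverse=True, return_counts=True)
print(Y)                # [1 2 3 4]
print(indices)          # [1 0 3 4]
print(inverse_indices)  # [1 0 0 2 3 2]
print(counts)           # [2 1 2 1]
```
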
### onnx.Unsqueeze (ONNXUnsqueezeOp)
ONNX Unsqueeze operation

#### Description:


"Insert single-dimensional entries to the shape of an input tensor (`data`)."
"Takes one required argument `axes` - which contains a list of dimension indices and this operator will insert a dimension of value `1` into the corresponding index of the output tensor (`expanded`)."
""
"For example:"
"  Given an input tensor (`data`) of shape [3, 4, 5], then"
"  Unsqueeze(data, axes=[0, 4]) outputs a tensor (`expanded`) containing same data as `data` but with shape [1, 3, 4, 5, 1]."
""
"The attribute `axes` should not contain any duplicate entries. It is an error if it contains duplicates."
"The rank of the output tensor (`output_rank`) is the rank of the input tensor (`data`) plus the number of values in `axes`."
"Each value in `axes` should be within the (inclusive) range [-output_rank , output_rank - 1]. "
"The order of values in `axes` does not matter and can come in any order. "
""

#### Operands:

1. `data`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `axes` | `ArrayAttr` | 64-bit integer array attribute attribute |

#### Results:

1. `expanded`: memref of any type values or tensor of any type values

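A minimal numpy sketch reproducing the [3, 4, 5] → [1, 3, 4, 5, 1] example above (the helper name and the sort-then-insert strategy are illustrative choices):

```python
import numpy as np

def unsqueeze(data, axes):
    # Insert a size-1 dimension at each index in 'axes' (indices refer to the output rank).
    out_rank = data.ndim + len(axes)
    out = data
    for a in sorted(a % out_rank for a in axes):
        out = np.expand_dims(out, axis=a)
    return out

x = np.zeros((3, 4, 5))
print(unsqueeze(x, axes=[0, 4]).shape)  # (1, 3, 4, 5, 1)
```
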
### onnx.Upsample (ONNXUpsampleOp)
ONNX Upsample operation

#### Description:


"Upsample the input tensor."
"Each dimension value of the output tensor is:"
"  output_dimension = floor(input_dimension * scale)."

#### Operands:

1. `X`: memref of any type values or tensor of any type values
1. `scales`: memref of any type values or tensor of any type values

#### Attributes:

| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
| `mode` | `StringAttr` | string attribute attribute |

#### Results:

1. `Y`: memref of any type values or tensor of any type values

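A rough numpy sketch of one nearest-neighbour variant; the output sizes follow the floor(input_dimension * scale) rule above, but the exact source-coordinate mapping here is an assumption for illustration, not the op's normative definition:

```python
import numpy as np

def upsample_nearest(x, scales):
    # output_dimension = floor(input_dimension * scale) per axis.
    out_shape = [int(np.floor(d * s)) for d, s in zip(x.shape, scales)]
    out = x
    for axis, (new, old) in enumerate(zip(out_shape, x.shape)):
        idx = np.arange(new) * old // new  # one possible nearest-neighbour mapping
        out = np.take(out, idx, axis=axis)
    return out

x = np.array([[1, 2], [3, 4]])
print(upsample_nearest(x, scales=[1.0, 2.0]))
# [[1 1 2 2]
#  [3 3 4 4]]
```
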
### onnx.Where (ONNXWhereOp)
ONNX Where operation

#### Description:


"Return elements, either from X or Y, depending on condition"
" (with Numpy-style broadcasting support)."
" Where behaves like numpy.where with three parameters:"
" https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html"

#### Operands:

1. `condition`: memref of any type values or tensor of any type values
1. `X`: memref of any type values or tensor of any type values
1. `Y`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `output`: memref of any type values or tensor of any type values

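For illustration, the three-parameter numpy.where the description points to:

```python
import numpy as np

condition = np.array([[True, False], [True, True]])
X = np.array([[1, 2], [3, 4]])
Y = np.array([[9, 8], [7, 6]])
print(np.where(condition, X, Y))
# [[1 8]
#  [3 4]]
```
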
### onnx.Xor (ONNXXorOp)
ONNX Xor operation

#### Description:


"Returns the tensor resulted from performing the `xor` logical operation"
"elementwise on the input tensors `A` and `B` (with Numpy-style broadcasting support)."
""
"This operator supports **multidirectional (i.e., Numpy-style) broadcasting**; for more details please check [the doc](Broadcasting.md)."

#### Operands:

1. `A`: memref of any type values or tensor of any type values
1. `B`: memref of any type values or tensor of any type values

#### Attributes:


#### Results:

1. `C`: memref of any type values or tensor of any type values

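A short numpy sketch of the elementwise xor with multidirectional broadcasting (the shapes chosen here are arbitrary):

```python
import numpy as np

A = np.array([[True, False, True]])   # shape (1, 3)
B = np.array([[True], [False]])       # shape (2, 1), broadcast against A
print(np.logical_xor(A, B))
# [[False  True False]
#  [ True False  True]]
```
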