Merge onnx ml into onnx (#176)

* merge onnx-ml into onnx

* delete onnx ml
chentong319 2020-06-22 20:01:56 -04:00 committed by GitHub
parent f81f44662b
commit cc68f77d8d
21 changed files with 1207 additions and 1463 deletions


@@ -34,14 +34,6 @@ add_subdirectory(third_party/rapidcheck)
set(CMAKE_CXX_STANDARD 14)
if ($ENV{EXCLUDE_ONNX_ML})
set(INCLUDE_ONNX_ML FALSE)
else()
set(INCLUDE_ONNX_ML TRUE)
endif()
message(STATUS "INCLUDE_ONNX_ML Dialect " ${INCLUDE_ONNX_ML})
add_subdirectory(utils)
add_subdirectory(src)
add_subdirectory(docs)


@@ -1,597 +0,0 @@
<!-- Autogenerated by mlir-tblgen; don't manually edit -->
### `mlonnx.ArrayFeatureExtractor` (MLONNXArrayFeatureExtractorOp)
ONNX ArrayFeatureExtractor operation
"Select elements of the input tensor based on the indices passed.<br>"
" The indices are applied to the last axes of the tensor."
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
`Y` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Z` | memref of any type values or tensor of any type values
### `mlonnx.Binarizer` (MLONNXBinarizerOp)
ONNX Binarizer operation
"Maps the values of the input tensor to either 0 or 1, element-wise, based on the outcome of a comparison against a threshold value."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`threshold` | FloatAttr | 32-bit float attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
### `mlonnx.CastMap` (MLONNXCastMapOp)
ONNX CastMap operation
"Converts a map to a tensor.<br>The map key must be an int64 and the values will be ordered"
" in ascending order based on this key.<br>The operator supports dense packing or sparse packing."
" If using sparse packing, the key cannot exceed the max_map-1 value."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`cast_to` | StringAttr | string attribute
`map_form` | StringAttr | string attribute
`max_map` | IntegerAttr | 64-bit signless integer attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tuple with any combination of tensor of 64-bit signless integer values values or memref of 64-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.CategoryMapper` (MLONNXCategoryMapperOp)
ONNX CategoryMapper operation
"Converts strings to integers and vice versa.<br>"
" Two sequences of equal length are used to map between integers and strings,"
" with strings and integers at the same index detailing the mapping.<br>"
" Each operator converts either integers to strings or strings to integers, depending "
" on which default value attribute is provided. Only one default value attribute"
" should be defined.<br>"
" If the string default value is set, it will convert integers to strings."
" If the int default value is set, it will convert strings to integers."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`cats_int64s` | ArrayAttr | 64-bit integer array attribute
`cats_strings` | ArrayAttr | string array attribute
`default_int64` | IntegerAttr | 64-bit signless integer attribute
`default_string` | StringAttr | string attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.DictVectorizer` (MLONNXDictVectorizerOp)
ONNX DictVectorizer operation
"Uses an index mapping to convert a dictionary to an array.<br>"
" Given a dictionary, each key is looked up in the vocabulary attribute corresponding to"
" the key type. The index into the vocabulary array at which the key is found is then"
" used to index the output 1-D tensor 'Y' and insert into it the value found in the dictionary 'X'.<br>"
" The key type of the input map must correspond to the element type of the defined vocabulary attribute."
" Therefore, the output array will be equal in length to the index mapping vector parameter."
" All keys in the input dictionary must be present in the index mapping vector."
" For each item in the input dictionary, insert its value in the output array."
" Any keys not present in the input dictionary, will be zero in the output array.<br>"
" For example: if the ``string_vocabulary`` parameter is set to ``[\"a\", \"c\", \"b\", \"z\"]``,"
" then an input of ``{\"a\": 4, \"c\": 8}`` will produce an output of ``[4, 8, 0, 0]``."
" "
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`int64_vocabulary` | ArrayAttr | 64-bit integer array attribute
`string_vocabulary` | ArrayAttr | string array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tuple with any combination of tensor of 64-bit signless integer or 32-bit float or 64-bit float values values or memref of 64-bit signless integer or 32-bit float or 64-bit float values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.FeatureVectorizer` (MLONNXFeatureVectorizerOp)
ONNX FeatureVectorizer operation
"Concatenates input tensors into one continuous output.<br>"
" All input shapes are 2-D and are concatenated along the second dimention. 1-D tensors are treated as [1,C]."
" Inputs are copied to the output maintaining the order of the input arguments.<br>"
" All inputs must be integers or floats, while the output will be all floating point values."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`inputdimensions` | ArrayAttr | 64-bit integer array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit signless integer or 64-bit signless integer or 32-bit float or 64-bit float values or memref of 32-bit signless integer or 64-bit signless integer or 32-bit float or 64-bit float values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.Imputer` (MLONNXImputerOp)
ONNX Imputer operation
"Replaces inputs that equal one value with another, leaving all other elements alone.<br>"
" This operator is typically used to replace missing values in situations where they have a canonical"
" representation, such as -1, 0, NaN, or some extreme value.<br>"
" One and only one of imputed_value_floats or imputed_value_int64s should be defined -- floats if the input tensor"
" holds floats, integers if the input tensor holds integers. The imputed values must all fit within the"
" width of the tensor element type. One and only one of the replaced_value_float or replaced_value_int64 should be defined,"
" which one depends on whether floats or integers are being processed.<br>"
" The imputed_value attribute length can be 1 element, or it can have one element per input feature.<br>In other words, if the input tensor has the shape [*,F], then the length of the attribute array may be 1 or F. If it is 1, then it is broadcast along the last dimension and applied to each feature."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`imputed_value_floats` | ArrayAttr | 32-bit float array attribute
`imputed_value_int64s` | ArrayAttr | 64-bit integer array attribute
`replaced_value_float` | FloatAttr | 32-bit float attribute
`replaced_value_int64` | IntegerAttr | 64-bit signless integer attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
### `mlonnx.LabelEncoder` (MLONNXLabelEncoderOp)
ONNX LabelEncoder operation
"Maps each element in the input tensor to another value.<br>"
" The mapping is determined by the two parallel attributes, 'keys_*' and"
" 'values_*' attribute. The i-th value in the specified 'keys_*' attribute"
" would be mapped to the i-th value in the specified 'values_*' attribute. It"
" implies that input's element type and the element type of the specified"
" 'keys_*' should be identical while the output type is identical to the"
" specified 'values_*' attribute. If an input element can not be found in the"
" specified 'keys_*' attribute, the 'default_*' that matches the specified"
" 'values_*' attribute may be used as its output value.<br>"
" Let's consider an example which maps a string tensor to an integer tensor."
" Assume and 'keys_strings' is [\"Amy\", \"Sally\"], 'values_int64s' is [5, 6],"
" and 'default_int64' is '-1'. The input [\"Dori\", \"Amy\", \"Amy\", \"Sally\","
" \"Sally\"] would be mapped to [-1, 5, 5, 6, 6].<br>"
" Since this operator is an one-to-one mapping, its input and output shapes"
" are the same. Notice that only one of 'keys_*'/'values_*' can be set.<br>"
" For key look-up, bit-wise comparison is used so even a float NaN can be"
" mapped to a value in 'values_*' attribute.<br>"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`default_float` | FloatAttr | 32-bit float attribute
`default_int64` | IntegerAttr | 64-bit signless integer attribute
`default_string` | StringAttr | string attribute
`keys_floats` | ArrayAttr | 32-bit float array attribute
`keys_int64s` | ArrayAttr | 64-bit integer array attribute
`keys_strings` | ArrayAttr | string array attribute
`values_floats` | ArrayAttr | 32-bit float array attribute
`values_int64s` | ArrayAttr | 64-bit integer array attribute
`values_strings` | ArrayAttr | string array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.LinearClassifier` (MLONNXLinearClassifierOp)
ONNX LinearClassifier operation
"Linear classifier"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`classlabels_ints` | ArrayAttr | 64-bit integer array attribute
`classlabels_strings` | ArrayAttr | string array attribute
`coefficients` | ArrayAttr | 32-bit float array attribute
`intercepts` | ArrayAttr | 32-bit float array attribute
`multi_class` | IntegerAttr | 64-bit signless integer attribute
`post_transform` | StringAttr | string attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
`Z` | memref of any type values or tensor of any type values
### `mlonnx.LinearRegressor` (MLONNXLinearRegressorOp)
ONNX LinearRegressor operation
"Generalized linear regression evaluation.<br>"
" If targets is set to 1 (default) then univariate regression is performed.<br>"
" If targets is set to M then M sets of coefficients must be passed in as a sequence"
" and M results will be output for each input n in N.<br>"
" The coefficients array is of length n, and the coefficients for each target are contiguous."
" Intercepts are optional but if provided must match the number of targets."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`coefficients` | ArrayAttr | 32-bit float array attribute
`intercepts` | ArrayAttr | 32-bit float array attribute
`post_transform` | StringAttr | string attribute
`targets` | IntegerAttr | 64-bit signless integer attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.Normalizer` (MLONNXNormalizerOp)
ONNX Normalizer operation
"Normalize the input. There are three normalization modes, which have the corresponding formulas,"
" defined using element-wise infix operators '/' and '^' and tensor-wide functions 'max' and 'sum':<br>"
"<br>"
" Max: Y = X / max(X)<br>"
" L1: Y = X / sum(X)<br>"
" L2: Y = sqrt(X^2 / sum(X^2)}<br>"
" In all modes, if the divisor is zero, Y == X."
"<br>"
" For batches, that is, [N,C] tensors, normalization is done along the C axis. In other words, each row"
" of the batch is normalized independently."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`norm` | StringAttr | string attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.OneHotEncoder` (MLONNXOneHotEncoderOp)
ONNX OneHotEncoder operation
"Replace each input element with an array of ones and zeros, where a single"
" one is placed at the index of the category that was passed in. The total category count "
" will determine the size of the extra dimension of the output array Y.<br>"
" For example, if we pass a tensor with a single value of 4, and a category count of 8, "
" the output will be a tensor with ``[0,0,0,0,1,0,0,0]``.<br>"
" This operator assumes every input feature is from the same set of categories.<br>"
" If the input is a tensor of float, int32, or double, the data will be cast"
" to integers and the cats_int64s category list will be used for the lookups."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`cats_int64s` | ArrayAttr | 64-bit integer array attribute
`cats_strings` | ArrayAttr | string array attribute
`zeros` | IntegerAttr | 64-bit signless integer attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.SVMClassifier` (MLONNXSVMClassifierOp)
ONNX SVMClassifier operation
"Support Vector Machine classifier"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`classlabels_ints` | ArrayAttr | 64-bit integer array attribute
`classlabels_strings` | ArrayAttr | string array attribute
`coefficients` | ArrayAttr | 32-bit float array attribute
`kernel_params` | ArrayAttr | 32-bit float array attribute
`kernel_type` | StringAttr | string attribute
`post_transform` | StringAttr | string attribute
`prob_a` | ArrayAttr | 32-bit float array attribute
`prob_b` | ArrayAttr | 32-bit float array attribute
`rho` | ArrayAttr | 32-bit float array attribute
`support_vectors` | ArrayAttr | 32-bit float array attribute
`vectors_per_class` | ArrayAttr | 64-bit integer array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
`Z` | memref of any type values or tensor of any type values
### `mlonnx.SVMRegressor` (MLONNXSVMRegressorOp)
ONNX SVMRegressor operation
"Support Vector Machine regression prediction and one-class SVM anomaly detection."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`coefficients` | ArrayAttr | 32-bit float array attribute
`kernel_params` | ArrayAttr | 32-bit float array attribute
`kernel_type` | StringAttr | string attribute
`n_supports` | IntegerAttr | 64-bit signless integer attribute
`one_class` | IntegerAttr | 64-bit signless integer attribute
`post_transform` | StringAttr | string attribute
`rho` | ArrayAttr | 32-bit float array attribute
`support_vectors` | ArrayAttr | 32-bit float array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.Scaler` (MLONNXScalerOp)
ONNX Scaler operation
"Rescale input data, for example to standardize features by removing the mean and scaling to unit variance."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`offset` | ArrayAttr | 32-bit float array attribute
`scale` | ArrayAttr | 32-bit float array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.TreeEnsembleClassifier` (MLONNXTreeEnsembleClassifierOp)
ONNX TreeEnsembleClassifier operation
"Tree Ensemble classifier. Returns the top class for each of N inputs.<br>"
" The attributes named 'nodes_X' form a sequence of tuples, associated by "
" index into the sequences, which must all be of equal length. These tuples"
" define the nodes.<br>"
" Similarly, all fields prefixed with 'class_' are tuples of votes at the leaves."
" A leaf may have multiple votes, where each vote is weighted by"
" the associated class_weights index.<br>"
" One and only one of classlabels_strings or classlabels_int64s"
" will be defined. The class_ids are indices into this list."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`base_values` | ArrayAttr | 32-bit float array attribute
`class_ids` | ArrayAttr | 64-bit integer array attribute
`class_nodeids` | ArrayAttr | 64-bit integer array attribute
`class_treeids` | ArrayAttr | 64-bit integer array attribute
`class_weights` | ArrayAttr | 32-bit float array attribute
`classlabels_int64s` | ArrayAttr | 64-bit integer array attribute
`classlabels_strings` | ArrayAttr | string array attribute
`nodes_falsenodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_featureids` | ArrayAttr | 64-bit integer array attribute
`nodes_hitrates` | ArrayAttr | 32-bit float array attribute
`nodes_missing_value_tracks_true` | ArrayAttr | 64-bit integer array attribute
`nodes_modes` | ArrayAttr | string array attribute
`nodes_nodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_treeids` | ArrayAttr | 64-bit integer array attribute
`nodes_truenodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_values` | ArrayAttr | 32-bit float array attribute
`post_transform` | StringAttr | string attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
`Z` | memref of any type values or tensor of any type values
### `mlonnx.TreeEnsembleRegressor` (MLONNXTreeEnsembleRegressorOp)
ONNX TreeEnsembleRegressor operation
"Tree Ensemble regressor. Returns the regressed values for each input in N.<br>"
" All args with nodes_ are fields of a tuple of tree nodes, and"
" it is assumed they are the same length, and an index i will decode the"
" tuple across these inputs. Each node id can appear only once"
" for each tree id.<br>"
" All fields prefixed with target_ are tuples of votes at the leaves.<br>"
" A leaf may have multiple votes, where each vote is weighted by"
" the associated target_weights index.<br>"
" All trees must have their node ids start at 0 and increment by 1.<br>"
" Mode enum is BRANCH_LEQ, BRANCH_LT, BRANCH_GTE, BRANCH_GT, BRANCH_EQ, BRANCH_NEQ, LEAF"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`aggregate_function` | StringAttr | string attribute
`base_values` | ArrayAttr | 32-bit float array attribute
`n_targets` | IntegerAttr | 64-bit signless integer attribute
`nodes_falsenodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_featureids` | ArrayAttr | 64-bit integer array attribute
`nodes_hitrates` | ArrayAttr | 32-bit float array attribute
`nodes_missing_value_tracks_true` | ArrayAttr | 64-bit integer array attribute
`nodes_modes` | ArrayAttr | string array attribute
`nodes_nodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_treeids` | ArrayAttr | 64-bit integer array attribute
`nodes_truenodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_values` | ArrayAttr | 32-bit float array attribute
`post_transform` | StringAttr | string attribute
`target_ids` | ArrayAttr | 64-bit integer array attribute
`target_nodeids` | ArrayAttr | 64-bit integer array attribute
`target_treeids` | ArrayAttr | 64-bit integer array attribute
`target_weights` | ArrayAttr | 32-bit float array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `mlonnx.ZipMap` (MLONNXZipMapOp)
ONNX ZipMap operation
"Creates a map from the input and the attributes.<br>"
" The values are provided by the input tensor, while the keys are specified by the attributes."
" Must provide keys in either classlabels_strings or classlabels_int64s (but not both).<br>"
" The columns of the tensor correspond one-by-one to the keys specified by the attributes. There must be as many columns as keys.<br>"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`classlabels_int64s` | ArrayAttr | 64-bit integer array attribute
`classlabels_strings` | ArrayAttr | string array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Z` | tensor of tensor of 32-bit float or 64-bit signless integer values values or memref of 32-bit float or 64-bit signless integer values


@@ -154,6 +154,26 @@ ONNX ArgMin operation
| :----: | ----------- |
`reduced` | memref of any type values or tensor of any type values
### `onnx.ArrayFeatureExtractor` (ONNXArrayFeatureExtractorOp)
ONNX ArrayFeatureExtractor operation
"Select elements of the input tensor based on the indices passed.<br>"
" The indices are applied to the last axes of the tensor."
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
`Y` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Z` | memref of any type values or tensor of any type values
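The gather-on-last-axis behaviour described above can be sketched in a few lines of NumPy; this is illustrative only (the helper name is made up) and is not the onnx-mlir lowering of the op.

```python
import numpy as np

def array_feature_extractor(x, y):
    # Gather the entries listed in the index tensor y along the last axis of x.
    return np.take(x, np.ravel(y), axis=-1)

x = np.arange(12).reshape(3, 4)
print(array_feature_extractor(x, np.array([0, 2])))  # columns 0 and 2 of every row
```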
### `onnx.Asin` (ONNXAsinOp)
ONNX Asin operation
@@ -363,6 +383,30 @@ ONNX BatchNormalization operation in test mode
| :----: | ----------- |
`o_Y` | memref of any type values or tensor of any type values
### `onnx.Binarizer` (ONNXBinarizerOp)
ONNX Binarizer operation
"Maps the values of the input tensor to either 0 or 1, element-wise, based on the outcome of a comparison against a threshold value."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`threshold` | FloatAttr | 32-bit float attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
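A minimal NumPy sketch of the thresholding described above (illustrative only, not the compiler lowering):

```python
import numpy as np

def binarizer(x, threshold=0.0):
    # 1 where x is above the threshold, 0 elsewhere, keeping the input dtype.
    return (x > threshold).astype(x.dtype)

print(binarizer(np.array([-1.5, 0.0, 3.2]), threshold=0.0))  # [0. 0. 1.]
```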
### `onnx.BitShift` (ONNXBitShiftOp)
ONNX BitShift operation
@@ -399,6 +443,34 @@ ONNX BitShift operation
| :----: | ----------- |
`Z` | tensor of 8-bit signless integer or 16-bit signless integer or 32-bit signless integer or 64-bit signless integer values or memref of 8-bit signless integer or 16-bit signless integer or 32-bit signless integer or 64-bit signless integer values
### `onnx.CastMap` (ONNXCastMapOp)
ONNX CastMap operation
"Converts a map to a tensor.<br>The map key must be an int64 and the values will be ordered"
" in ascending order based on this key.<br>The operator supports dense packing or sparse packing."
" If using sparse packing, the key cannot exceed the max_map-1 value."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`cast_to` | StringAttr | string attribute
`map_form` | StringAttr | string attribute
`max_map` | IntegerAttr | 64-bit signless integer attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tuple with any combination of tensor of 64-bit signless integer values values or memref of 64-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
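A rough Python sketch of the dense and sparse packing described above, assuming the conventional 'DENSE'/'SPARSE' attribute values and a float output; it is not the onnx-mlir implementation.

```python
import numpy as np

def cast_map(x, map_form="DENSE", max_map=1):
    # x is a dict with int64 keys; values are emitted in ascending key order.
    if map_form == "DENSE":
        return np.array([x[k] for k in sorted(x)], dtype=np.float32)
    # Sparse packing: keys index a zero-filled vector of length max_map,
    # so every key must be at most max_map - 1.
    out = np.zeros(max_map, dtype=np.float32)
    for k, v in x.items():
        out[k] = v
    return out

print(cast_map({0: 2.5, 3: 7.0}, map_form="SPARSE", max_map=5))  # [2.5 0. 0. 7. 0.]
```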
### `onnx.Cast` (ONNXCastOp)
ONNX Cast operation
@@ -441,6 +513,40 @@ ONNX Cast operation
| :----: | ----------- |
`output` | memref of any type values or tensor of any type values
### `onnx.CategoryMapper` (ONNXCategoryMapperOp)
ONNX CategoryMapper operation
"Converts strings to integers and vice versa.<br>"
" Two sequences of equal length are used to map between integers and strings,"
" with strings and integers at the same index detailing the mapping.<br>"
" Each operator converts either integers to strings or strings to integers, depending "
" on which default value attribute is provided. Only one default value attribute"
" should be defined.<br>"
" If the string default value is set, it will convert integers to strings."
" If the int default value is set, it will convert strings to integers."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`cats_int64s` | ArrayAttr | 64-bit integer array attribute
`cats_strings` | ArrayAttr | string array attribute
`default_int64` | IntegerAttr | 64-bit signless integer attribute
`default_string` | StringAttr | string attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
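The int/string mapping rule above, sketched in plain Python (names are illustrative, not the actual lowering): whichever default attribute is supplied picks the direction of the conversion.

```python
def category_mapper(x, cats_int64s, cats_strings,
                    default_int64=None, default_string=None):
    # A string default means int64 -> string; an int64 default means string -> int64.
    if default_string is not None:
        table = dict(zip(cats_int64s, cats_strings))
        return [table.get(v, default_string) for v in x]
    table = dict(zip(cats_strings, cats_int64s))
    return [table.get(v, default_int64) for v in x]

print(category_mapper(["cat", "dog", "fish"],
                      cats_int64s=[0, 1], cats_strings=["cat", "dog"],
                      default_int64=-1))  # [0, 1, -1]
```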
### `onnx.Ceil` (ONNXCeilOp)
ONNX Ceil operation
@@ -895,6 +1001,42 @@ ONNX Det operation
| :----: | ----------- |
`Y` | tensor of 16-bit float or 32-bit float or 64-bit float values or memref of 16-bit float or 32-bit float or 64-bit float values
### `onnx.DictVectorizer` (ONNXDictVectorizerOp)
ONNX DictVectorizer operation
"Uses an index mapping to convert a dictionary to an array.<br>"
" Given a dictionary, each key is looked up in the vocabulary attribute corresponding to"
" the key type. The index into the vocabulary array at which the key is found is then"
" used to index the output 1-D tensor 'Y' and insert into it the value found in the dictionary 'X'.<br>"
" The key type of the input map must correspond to the element type of the defined vocabulary attribute."
" Therefore, the output array will be equal in length to the index mapping vector parameter."
" All keys in the input dictionary must be present in the index mapping vector."
" For each item in the input dictionary, insert its value in the output array."
" Any keys not present in the input dictionary, will be zero in the output array.<br>"
" For example: if the ``string_vocabulary`` parameter is set to ``[\"a\", \"c\", \"b\", \"z\"]``,"
" then an input of ``{\"a\": 4, \"c\": 8}`` will produce an output of ``[4, 8, 0, 0]``."
" "
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`int64_vocabulary` | ArrayAttr | 64-bit integer array attribute
`string_vocabulary` | ArrayAttr | string array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tuple with any combination of tensor of 64-bit signless integer or 32-bit float or 64-bit float values values or memref of 64-bit signless integer or 32-bit float or 64-bit float values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
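The worked example in the description maps directly to a few lines of NumPy; this sketch assumes a string vocabulary and is not the onnx-mlir implementation.

```python
import numpy as np

def dict_vectorizer(x, string_vocabulary):
    # One output slot per vocabulary entry; keys missing from x stay zero.
    y = np.zeros(len(string_vocabulary), dtype=np.int64)
    for i, key in enumerate(string_vocabulary):
        if key in x:
            y[i] = x[key]
    return y

print(dict_vectorizer({"a": 4, "c": 8}, ["a", "c", "b", "z"]))  # [4 8 0 0]
```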
### `onnx.Div` (ONNXDivOp)
ONNX Div operation
@@ -1135,6 +1277,33 @@ ONNX EyeLike operation
| :----: | ----------- |
`output` | tensor of 16-bit float or 32-bit float or 64-bit float or 8-bit signless integer or 16-bit signless integer or 32-bit signless integer or 64-bit signless integer or 1-bit signless integer values or memref of 16-bit float or 32-bit float or 64-bit float or 8-bit signless integer or 16-bit signless integer or 32-bit signless integer or 64-bit signless integer or 1-bit signless integer values
### `onnx.FeatureVectorizer` (ONNXFeatureVectorizerOp)
ONNX FeatureVectorizer operation
"Concatenates input tensors into one continuous output.<br>"
" All input shapes are 2-D and are concatenated along the second dimention. 1-D tensors are treated as [1,C]."
" Inputs are copied to the output maintaining the order of the input arguments.<br>"
" All inputs must be integers or floats, while the output will be all floating point values."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`inputdimensions` | ArrayAttr | 64-bit integer array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit signless integer or 64-bit signless integer or 32-bit float or 64-bit float values or memref of 32-bit signless integer or 64-bit signless integer or 32-bit float or 64-bit float values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
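A simplified NumPy sketch of the concatenation described above; it treats 1-D inputs as [1, C], keeps each input's declared width, and casts to float. Illustrative only.

```python
import numpy as np

def feature_vectorizer(inputs, inputdimensions):
    cols = []
    for x, width in zip(inputs, inputdimensions):
        x = np.atleast_2d(np.asarray(x, dtype=np.float32))
        cols.append(x[:, :width])          # keep the declared number of columns
    return np.concatenate(cols, axis=1)    # concatenate along the second dimension

print(feature_vectorizer([np.array([1, 2]), np.array([[3.0, 4.0]])], [2, 2]))
# [[1. 2. 3. 4.]]
```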
### `onnx.Flatten` (ONNXFlattenOp)
ONNX Flatten operation
@@ -1768,6 +1937,40 @@ ONNX If operation
| :----: | ----------- |
`outputs` | memref of any type values or tensor of any type values
### `onnx.Imputer` (ONNXImputerOp)
ONNX Imputer operation
"Replaces inputs that equal one value with another, leaving all other elements alone.<br>"
" This operator is typically used to replace missing values in situations where they have a canonical"
" representation, such as -1, 0, NaN, or some extreme value.<br>"
" One and only one of imputed_value_floats or imputed_value_int64s should be defined -- floats if the input tensor"
" holds floats, integers if the input tensor holds integers. The imputed values must all fit within the"
" width of the tensor element type. One and only one of the replaced_value_float or replaced_value_int64 should be defined,"
" which one depends on whether floats or integers are being processed.<br>"
" The imputed_value attribute length can be 1 element, or it can have one element per input feature.<br>In other words, if the input tensor has the shape [*,F], then the length of the attribute array may be 1 or F. If it is 1, then it is broadcast along the last dimension and applied to each feature."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`imputed_value_floats` | ArrayAttr | 32-bit float array attribute
`imputed_value_int64s` | ArrayAttr | 64-bit integer array attribute
`replaced_value_float` | FloatAttr | 32-bit float attribute
`replaced_value_int64` | IntegerAttr | 64-bit signless integer attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
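A minimal NumPy sketch of the float path described above; the int64 path is analogous. The helper name and the exact broadcasting are illustrative, not the compiler's lowering.

```python
import numpy as np

def imputer(x, imputed_value_floats, replaced_value_float):
    # Broadcast the imputed values over the feature (last) axis and swap them in
    # wherever x matches the replaced value; NaN needs an isnan test.
    imputed = np.broadcast_to(
        np.asarray(imputed_value_floats, dtype=x.dtype), x.shape)
    mask = np.isnan(x) if np.isnan(replaced_value_float) else x == replaced_value_float
    return np.where(mask, imputed, x)

x = np.array([[1.0, -1.0], [-1.0, 4.0]])
print(imputer(x, [0.5, 9.0], -1.0))  # [[1.  9. ] [0.5 4. ]]
```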
### `onnx.InstanceNormalization` (ONNXInstanceNormalizationOp)
ONNX InstanceNormalization operation
@@ -1997,6 +2200,54 @@ ONNX LSTM operation
`Y_h` | tensor of 16-bit float or 32-bit float or 64-bit float values or memref of 16-bit float or 32-bit float or 64-bit float values or none type
`Y_c` | tensor of 16-bit float or 32-bit float or 64-bit float values or memref of 16-bit float or 32-bit float or 64-bit float values or none type
### `onnx.LabelEncoder` (ONNXLabelEncoderOp)
ONNX LabelEncoder operation
"Maps each element in the input tensor to another value.<br>"
" The mapping is determined by the two parallel attributes, 'keys_*' and"
" 'values_*' attribute. The i-th value in the specified 'keys_*' attribute"
" would be mapped to the i-th value in the specified 'values_*' attribute. It"
" implies that input's element type and the element type of the specified"
" 'keys_*' should be identical while the output type is identical to the"
" specified 'values_*' attribute. If an input element can not be found in the"
" specified 'keys_*' attribute, the 'default_*' that matches the specified"
" 'values_*' attribute may be used as its output value.<br>"
" Let's consider an example which maps a string tensor to an integer tensor."
" Assume and 'keys_strings' is [\"Amy\", \"Sally\"], 'values_int64s' is [5, 6],"
" and 'default_int64' is '-1'. The input [\"Dori\", \"Amy\", \"Amy\", \"Sally\","
" \"Sally\"] would be mapped to [-1, 5, 5, 6, 6].<br>"
" Since this operator is an one-to-one mapping, its input and output shapes"
" are the same. Notice that only one of 'keys_*'/'values_*' can be set.<br>"
" For key look-up, bit-wise comparison is used so even a float NaN can be"
" mapped to a value in 'values_*' attribute.<br>"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`default_float` | FloatAttr | 32-bit float attribute
`default_int64` | IntegerAttr | 64-bit signless integer attribute
`default_string` | StringAttr | string attribute
`keys_floats` | ArrayAttr | 32-bit float array attribute
`keys_int64s` | ArrayAttr | 64-bit integer array attribute
`keys_strings` | ArrayAttr | string array attribute
`values_floats` | ArrayAttr | 32-bit float array attribute
`values_int64s` | ArrayAttr | 64-bit integer array attribute
`values_strings` | ArrayAttr | string array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
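The mapping rule and the Amy/Sally example above can be reproduced with a small dictionary lookup; this sketch covers the string-to-int64 case only and is not the onnx-mlir implementation.

```python
def label_encoder(x, keys_strings, values_int64s, default_int64=-1):
    # Parallel keys_*/values_* arrays define the map; misses fall back to the default.
    table = dict(zip(keys_strings, values_int64s))
    return [table.get(v, default_int64) for v in x]

print(label_encoder(["Dori", "Amy", "Amy", "Sally", "Sally"],
                    keys_strings=["Amy", "Sally"], values_int64s=[5, 6]))
# [-1, 5, 5, 6, 6]
```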
### `onnx.LeakyRelu` (ONNXLeakyReluOp)
ONNX LeakyRelu operation
@@ -2045,6 +2296,68 @@ ONNX Less operation
| :----: | ----------- |
`C` | tensor of 1-bit signless integer values or memref of 1-bit signless integer values
### `onnx.LinearClassifier` (ONNXLinearClassifierOp)
ONNX LinearClassifier operation
"Linear classifier"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`classlabels_ints` | ArrayAttr | 64-bit integer array attribute
`classlabels_strings` | ArrayAttr | string array attribute
`coefficients` | ArrayAttr | 32-bit float array attribute
`intercepts` | ArrayAttr | 32-bit float array attribute
`multi_class` | IntegerAttr | 64-bit signless integer attribute
`post_transform` | StringAttr | string attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
`Z` | memref of any type values or tensor of any type values
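A simplified multi-class sketch in NumPy: Z holds the raw scores X @ W^T + intercepts and Y the winning label per row. The binary special case and post_transform handling are omitted, and the names are illustrative.

```python
import numpy as np

def linear_classifier(x, coefficients, intercepts, classlabels):
    # coefficients is a flat [n_classes * n_features] array, one row per class.
    w = np.asarray(coefficients, dtype=np.float32).reshape(len(classlabels), -1)
    z = x @ w.T + np.asarray(intercepts, dtype=np.float32)   # scores
    y = np.asarray(classlabels)[np.argmax(z, axis=1)]        # top class per row
    return y, z

x = np.array([[1.0, 2.0]], dtype=np.float32)
print(linear_classifier(x, [0.5, -0.5, -1.0, 1.0], [0.0, 0.0], [0, 1]))
# label [1], scores [[-0.5  1. ]]
```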
### `onnx.LinearRegressor` (ONNXLinearRegressorOp)
ONNX LinearRegressor operation
"Generalized linear regression evaluation.<br>"
" If targets is set to 1 (default) then univariate regression is performed.<br>"
" If targets is set to M then M sets of coefficients must be passed in as a sequence"
" and M results will be output for each input n in N.<br>"
" The coefficients array is of length n, and the coefficients for each target are contiguous."
" Intercepts are optional but if provided must match the number of targets."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`coefficients` | ArrayAttr | 32-bit float array attribute
`intercepts` | ArrayAttr | 32-bit float array attribute
`post_transform` | StringAttr | string attribute
`targets` | IntegerAttr | 64-bit signless integer attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
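The evaluation described above is a single matrix product; a minimal NumPy sketch (illustrative, post_transform omitted):

```python
import numpy as np

def linear_regressor(x, coefficients, intercepts=None, targets=1):
    # coefficients is a flat array of targets * n_features values; the
    # coefficients for each target are contiguous.
    w = np.asarray(coefficients, dtype=np.float32).reshape(targets, -1)
    y = x @ w.T
    return y if intercepts is None else y + np.asarray(intercepts, dtype=np.float32)

x = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
print(linear_regressor(x, [0.1, 0.2, 0.3], intercepts=[1.0]))  # approx. [[2.4]]
```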
### `onnx.Log` (ONNXLogOp)
ONNX Log operation
@@ -2744,6 +3057,39 @@ ONNX NonZero operation
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `onnx.Normalizer` (ONNXNormalizerOp)
ONNX Normalizer operation
"Normalize the input. There are three normalization modes, which have the corresponding formulas,"
" defined using element-wise infix operators '/' and '^' and tensor-wide functions 'max' and 'sum':<br>"
"<br>"
" Max: Y = X / max(X)<br>"
" L1: Y = X / sum(X)<br>"
" L2: Y = sqrt(X^2 / sum(X^2)}<br>"
" In all modes, if the divisor is zero, Y == X."
"<br>"
" For batches, that is, [N,C] tensors, normalization is done along the C axis. In other words, each row"
" of the batch is normalized independently."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`norm` | StringAttr | string attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
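A direct NumPy transcription of the three formulas above, normalizing each row of a [N, C] input independently (illustrative only):

```python
import numpy as np

def normalizer(x, norm="L2"):
    x = np.atleast_2d(np.asarray(x, dtype=np.float32))
    if norm == "MAX":
        d = x.max(axis=1, keepdims=True)
    elif norm == "L1":
        d = x.sum(axis=1, keepdims=True)
    else:  # "L2"
        d = np.sqrt((x * x).sum(axis=1, keepdims=True))
    # If the divisor is zero the row passes through unchanged.
    return np.where(d == 0, x, x / np.where(d == 0, 1, d))

print(normalizer(np.array([[3.0, 4.0]]), norm="L2"))  # [[0.6 0.8]]
```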
### `onnx.Not` (ONNXNotOp)
ONNX Not operation
@@ -2762,6 +3108,39 @@ ONNX Not operation
| :----: | ----------- |
`Y` | tensor of 1-bit signless integer values or memref of 1-bit signless integer values
### `onnx.OneHotEncoder` (ONNXOneHotEncoderOp)
ONNX OneHotEncoder operation
"Replace each input element with an array of ones and zeros, where a single"
" one is placed at the index of the category that was passed in. The total category count "
" will determine the size of the extra dimension of the output array Y.<br>"
" For example, if we pass a tensor with a single value of 4, and a category count of 8, "
" the output will be a tensor with ``[0,0,0,0,1,0,0,0]``.<br>"
" This operator assumes every input feature is from the same set of categories.<br>"
" If the input is a tensor of float, int32, or double, the data will be cast"
" to integers and the cats_int64s category list will be used for the lookups."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`cats_int64s` | ArrayAttr | 64-bit integer array attribute
`cats_strings` | ArrayAttr | string array attribute
`zeros` | IntegerAttr | 64-bit signless integer attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
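The example in the description (value 4, category count 8) can be reproduced with a short NumPy sketch; string categories work the same way via cats_strings. Illustrative only.

```python
import numpy as np

def one_hot_encoder(x, cats_int64s, zeros=1):
    # Append one dimension of size len(cats_int64s); unknown categories become
    # an all-zero row when zeros=1, otherwise they are an error.
    cats = list(cats_int64s)
    out = np.zeros(x.shape + (len(cats),), dtype=np.float32)
    for idx, v in np.ndenumerate(x):
        if v in cats:
            out[idx + (cats.index(v),)] = 1.0
        elif not zeros:
            raise ValueError(f"unknown category {v}")
    return out

print(one_hot_encoder(np.array([4]), cats_int64s=list(range(8))))
# [[0. 0. 0. 0. 1. 0. 0. 0.]]
```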
### `onnx.OneHot` (ONNXOneHotOp)
ONNX OneHot operation
@@ -3945,6 +4324,97 @@ ONNX Round operation
| :----: | ----------- |
`Y` | tensor of 16-bit float or 32-bit float or 64-bit float values or memref of 16-bit float or 32-bit float or 64-bit float values
### `onnx.SVMClassifier` (ONNXSVMClassifierOp)
ONNX SVMClassifier operation
"Support Vector Machine classifier"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`classlabels_ints` | ArrayAttr | 64-bit integer array attribute
`classlabels_strings` | ArrayAttr | string array attribute
`coefficients` | ArrayAttr | 32-bit float array attribute
`kernel_params` | ArrayAttr | 32-bit float array attribute
`kernel_type` | StringAttr | string attribute
`post_transform` | StringAttr | string attribute
`prob_a` | ArrayAttr | 32-bit float array attribute
`prob_b` | ArrayAttr | 32-bit float array attribute
`rho` | ArrayAttr | 32-bit float array attribute
`support_vectors` | ArrayAttr | 32-bit float array attribute
`vectors_per_class` | ArrayAttr | 64-bit integer array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
`Z` | memref of any type values or tensor of any type values
### `onnx.SVMRegressor` (ONNXSVMRegressorOp)
ONNX SVMRegressor operation
"Support Vector Machine regression prediction and one-class SVM anomaly detection."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`coefficients` | ArrayAttr | 32-bit float array attribute
`kernel_params` | ArrayAttr | 32-bit float array attribute
`kernel_type` | StringAttr | string attribute
`n_supports` | IntegerAttr | 64-bit signless integer attribute
`one_class` | IntegerAttr | 64-bit signless integer attribute
`post_transform` | StringAttr | string attribute
`rho` | ArrayAttr | 32-bit float array attribute
`support_vectors` | ArrayAttr | 32-bit float array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `onnx.Scaler` (ONNXScalerOp)
ONNX Scaler operation
"Rescale input data, for example to standardize features by removing the mean and scaling to unit variance."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`offset` | ArrayAttr | 32-bit float array attribute
`scale` | ArrayAttr | 32-bit float array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
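The rescaling above is simply (X - offset) * scale with both attributes broadcast over the feature axis; a minimal sketch:

```python
import numpy as np

def scaler(x, offset, scale):
    x = np.asarray(x, dtype=np.float32)
    return (x - np.asarray(offset, dtype=np.float32)) * np.asarray(scale, dtype=np.float32)

x = np.array([[1.0, 10.0], [3.0, 30.0]])
print(scaler(x, offset=[2.0, 20.0], scale=[0.5, 0.1]))
# [[-0.5 -1. ]
#  [ 0.5  1. ]]
```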
### `onnx.Scan` (ONNXScanOp)
ONNX Scan operation
@@ -5161,6 +5631,104 @@ ONNX Transpose operation
| :----: | ----------- |
`transposed` | memref of any type values or tensor of any type values
### `onnx.TreeEnsembleClassifier` (ONNXTreeEnsembleClassifierOp)
ONNX TreeEnsembleClassifier operation
"Tree Ensemble classifier. Returns the top class for each of N inputs.<br>"
" The attributes named 'nodes_X' form a sequence of tuples, associated by "
" index into the sequences, which must all be of equal length. These tuples"
" define the nodes.<br>"
" Similarly, all fields prefixed with 'class_' are tuples of votes at the leaves."
" A leaf may have multiple votes, where each vote is weighted by"
" the associated class_weights index.<br>"
" One and only one of classlabels_strings or classlabels_int64s"
" will be defined. The class_ids are indices into this list."
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`base_values` | ArrayAttr | 32-bit float array attribute
`class_ids` | ArrayAttr | 64-bit integer array attribute
`class_nodeids` | ArrayAttr | 64-bit integer array attribute
`class_treeids` | ArrayAttr | 64-bit integer array attribute
`class_weights` | ArrayAttr | 32-bit float array attribute
`classlabels_int64s` | ArrayAttr | 64-bit integer array attribute
`classlabels_strings` | ArrayAttr | string array attribute
`nodes_falsenodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_featureids` | ArrayAttr | 64-bit integer array attribute
`nodes_hitrates` | ArrayAttr | 32-bit float array attribute
`nodes_missing_value_tracks_true` | ArrayAttr | 64-bit integer array attribute
`nodes_modes` | ArrayAttr | string array attribute
`nodes_nodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_treeids` | ArrayAttr | 64-bit integer array attribute
`nodes_truenodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_values` | ArrayAttr | 32-bit float array attribute
`post_transform` | StringAttr | string attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
`Z` | memref of any type values or tensor of any type values
### `onnx.TreeEnsembleRegressor` (ONNXTreeEnsembleRegressorOp)
ONNX TreeEnsembleRegressor operation
"Tree Ensemble regressor. Returns the regressed values for each input in N.<br>"
" All args with nodes_ are fields of a tuple of tree nodes, and"
" it is assumed they are the same length, and an index i will decode the"
" tuple across these inputs. Each node id can appear only once"
" for each tree id.<br>"
" All fields prefixed with target_ are tuples of votes at the leaves.<br>"
" A leaf may have multiple votes, where each vote is weighted by"
" the associated target_weights index.<br>"
" All trees must have their node ids start at 0 and increment by 1.<br>"
" Mode enum is BRANCH_LEQ, BRANCH_LT, BRANCH_GTE, BRANCH_GT, BRANCH_EQ, BRANCH_NEQ, LEAF"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`aggregate_function` | StringAttr | string attribute
`base_values` | ArrayAttr | 32-bit float array attribute
`n_targets` | IntegerAttr | 64-bit signless integer attribute
`nodes_falsenodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_featureids` | ArrayAttr | 64-bit integer array attribute
`nodes_hitrates` | ArrayAttr | 32-bit float array attribute
`nodes_missing_value_tracks_true` | ArrayAttr | 64-bit integer array attribute
`nodes_modes` | ArrayAttr | string array attribute
`nodes_nodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_treeids` | ArrayAttr | 64-bit integer array attribute
`nodes_truenodeids` | ArrayAttr | 64-bit integer array attribute
`nodes_values` | ArrayAttr | 32-bit float array attribute
`post_transform` | StringAttr | string attribute
`target_ids` | ArrayAttr | 64-bit integer array attribute
`target_nodeids` | ArrayAttr | 64-bit integer array attribute
`target_treeids` | ArrayAttr | 64-bit integer array attribute
`target_weights` | ArrayAttr | 32-bit float array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | tensor of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values or memref of 32-bit float or 64-bit float or 64-bit signless integer or 32-bit signless integer values
#### Results:
| Result | Description |
| :----: | ----------- |
`Y` | memref of any type values or tensor of any type values
### `onnx.Unique` (ONNXUniqueOp)
ONNX Unique operation
@@ -5370,3 +5938,31 @@ ONNX Xor operation
| :----: | ----------- |
`C` | tensor of 1-bit signless integer values or memref of 1-bit signless integer values
### `onnx.ZipMap` (ONNXZipMapOp)
ONNX ZipMap operation
"Creates a map from the input and the attributes.<br>"
" The values are provided by the input tensor, while the keys are specified by the attributes."
" Must provide keys in either classlabels_strings or classlabels_int64s (but not both).<br>"
" The columns of the tensor correspond one-by-one to the keys specified by the attributes. There must be as many columns as keys.<br>"
#### Attributes:
| Attribute | MLIR Type | Description |
| :-------: | :-------: | ----------- |
`classlabels_int64s` | ArrayAttr | 64-bit integer array attribute
`classlabels_strings` | ArrayAttr | string array attribute
#### Operands:
| Operand | Description |
| :-----: | ----------- |
`X` | memref of any type values or tensor of any type values
#### Results:
| Result | Description |
| :----: | ----------- |
`Z` | tensor of tensor of 32-bit float or 64-bit signless integer values values or memref of 32-bit float or 64-bit signless integer values
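A minimal Python sketch of the row-to-map conversion described above, assuming string class labels (not the onnx-mlir implementation):

```python
def zip_map(x, classlabels_strings):
    # One dict per input row, pairing each class label with that row's value.
    return [dict(zip(classlabels_strings, row)) for row in x]

print(zip_map([[0.1, 0.9]], ["cat", "dog"]))
# [{'cat': 0.1, 'dog': 0.9}]
```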


@@ -30,8 +30,3 @@ target_include_directories(OMBuilder
# the compilation will fail.
add_dependencies(OMBuilder OMONNXOpsInc)
add_dependencies(OMBuilder OMONNXOps OMResultTypeInferenceOpInterface)
if (INCLUDE_ONNX_ML)
add_dependencies(OMBuilder OMMLONNXOpsInc)
add_dependencies(OMBuilder OMMLONNXOps)
endif()


@@ -404,9 +404,6 @@ private:
// one known reason is the optional input
#include "src/Builder/OpBuildTable.inc"
#if INCLUDE_ONNX_ML == 1
#include "src/Builder/MLOpBuildTable.inc"
#endif
}
/*!


@@ -316,3 +316,39 @@ if (opName == "Where")
buildOperation<mlir::ONNXWhereOp>(node);
if (opName == "Xor")
buildOperation<mlir::ONNXXorOp>(node);
if (opName == "ArrayFeatureExtractor")
buildOperation<mlir::ONNXArrayFeatureExtractorOp>(node);
if (opName == "Binarizer")
buildOperation<mlir::ONNXBinarizerOp>(node);
if (opName == "CastMap")
buildOperation<mlir::ONNXCastMapOp>(node);
if (opName == "CategoryMapper")
buildOperation<mlir::ONNXCategoryMapperOp>(node);
if (opName == "DictVectorizer")
buildOperation<mlir::ONNXDictVectorizerOp>(node);
if (opName == "FeatureVectorizer")
buildOperation<mlir::ONNXFeatureVectorizerOp>(node);
if (opName == "Imputer")
buildOperation<mlir::ONNXImputerOp>(node);
if (opName == "LabelEncoder")
buildOperation<mlir::ONNXLabelEncoderOp>(node);
if (opName == "LinearClassifier")
buildOperation<mlir::ONNXLinearClassifierOp>(node);
if (opName == "LinearRegressor")
buildOperation<mlir::ONNXLinearRegressorOp>(node);
if (opName == "Normalizer")
buildOperation<mlir::ONNXNormalizerOp>(node);
if (opName == "OneHotEncoder")
buildOperation<mlir::ONNXOneHotEncoderOp>(node);
if (opName == "SVMClassifier")
buildOperation<mlir::ONNXSVMClassifierOp>(node);
if (opName == "SVMRegressor")
buildOperation<mlir::ONNXSVMRegressorOp>(node);
if (opName == "Scaler")
buildOperation<mlir::ONNXScalerOp>(node);
if (opName == "TreeEnsembleClassifier")
buildOperation<mlir::ONNXTreeEnsembleClassifierOp>(node);
if (opName == "TreeEnsembleRegressor")
buildOperation<mlir::ONNXTreeEnsembleRegressorOp>(node);
if (opName == "ZipMap")
buildOperation<mlir::ONNXZipMapOp>(node);

View File

@ -70,11 +70,6 @@ endif()
# (except system libraries such as libc).
add_dependencies(onnx-mlir OMKrnlOpsInc OMONNXOpsInc)
if (INCLUDE_ONNX_ML)
target_link_libraries(MainUtils OMMLONNXOps)
add_dependencies(MainUtils OMMLONNXOpsInc)
endif()
add_dependencies(onnx-mlir cruntime)
add_dependencies(onnx-mlir EmbeddedDataLoader)

View File

@ -1,5 +1,2 @@
add_subdirectory(Krnl)
add_subdirectory(ONNX)
if (INCLUDE_ONNX_ML)
add_subdirectory(MLONNX)
endif()

View File

@ -1,31 +0,0 @@
set(LLVM_TARGET_DEFINITIONS MLONNXOps.td)
onnx_mlir_tablegen(MLONNXOps.hpp.inc -gen-op-decls "-I${ONNX_MLIR_SRC_ROOT}/compiler/pass")
onnx_mlir_tablegen(MLONNXOps.cpp.inc -gen-op-defs "-I${ONNX_MLIR_SRC_ROOT}/compiler/pass")
set(GEN_DOC_FILE ${CMAKE_BINARY_DIR}/docs/Dialects/mlonnx.md)
add_public_tablegen_target(OMMLONNXOpsIncGen)
# Header dependencies target for MLONNXOps.hpp
add_custom_target(OMMLONNXOpsInc
DEPENDS OMMLONNXOpsIncGen
OMPromotableConstOperandsOpInterfaceIncGen
OMResultTypeInferenceOpInterfaceIncGen
ShapeInferenceOpInterfaceIncGen)
add_library(OMMLONNXOps
MLONNXOps.cpp
MLONNXOps.hpp)
target_include_directories(OMMLONNXOps
PRIVATE
${ONNX_MLIR_SRC_ROOT}
${ONNX_MLIR_BIN_ROOT}
${ONNX_MLIR_SRC_ROOT})
# Header dependencies
add_dependencies(OMMLONNXOps OMMLONNXOpsInc)
# Linking dependencies
add_dependencies(OMMLONNXOps
OMPromotableConstOperandsOpInterface
OMResultTypeInferenceOpInterface
OMShapeInferenceOpInterface)
add_onnx_mlir_dialect_doc(mlonnx MLONNXOps.td)

View File

@ -1,47 +0,0 @@
//===------------------ MLONNXOps.cpp - ONNX ML Operations ----------------===//
//
// Copyright 2019-2020 The IBM Research Authors.
//
// =============================================================================
//
// This file provides definition of ONNX ML dialect operations.
//
//===----------------------------------------------------------------------===//
#include "mlir/Dialect/Traits.h"
#include "mlir/IR/Block.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/Function.h"
#include "mlir/IR/IntegerSet.h"
#include "mlir/IR/Matchers.h"
#include "mlir/IR/Module.h"
#include "mlir/IR/OpImplementation.h"
#include "mlir/IR/PatternMatch.h"
#include "llvm/ADT/SetVector.h"
#include "llvm/ADT/SmallBitVector.h"
#include "MLONNXOps.hpp"
using namespace mlir;
using namespace mlir::OpTrait::util;
//===----------------------------------------------------------------------===//
// MLONNXOpsDialect
//===----------------------------------------------------------------------===//
/// Dialect creation, the instance will be owned by the context. This is the
/// point of registration of custom types and operations for the dialect.
MLONNXOpsDialect::MLONNXOpsDialect(mlir::MLIRContext *ctx)
: mlir::Dialect(getDialectNamespace(), ctx) {
addOperations<
#define GET_OP_LIST
#include "src/Dialect/MLONNX/MLONNXOps.cpp.inc"
>();
}
//===----------------------------------------------------------------------===//
// TableGen'd op method definitions
//===----------------------------------------------------------------------===//
#define GET_OP_CLASSES
#include "src/Dialect/MLONNX/MLONNXOps.cpp.inc"

View File

@ -1,44 +0,0 @@
//===----------------- MLONNXOps.hpp - ONNX ML Operations ----_------------===//
//
// Copyright 2019 The IBM Research Authors.
//
// =============================================================================
//
// This file defines ONNX ML operations in the MLIR operation set.
//
//===----------------------------------------------------------------------===//
#pragma once
#include <map>
#include <string>
#include "mlir/Dialect/StandardOps/IR/Ops.h"
#include "mlir/IR/Builders.h"
#include "mlir/IR/Dialect.h"
#include "mlir/IR/OpDefinition.h"
#include "mlir/IR/StandardTypes.h"
#include "src/Interface/PromotableConstOperandsOpInterface.hpp"
#include "src/Interface/ResultTypeInferenceOpInterface.hpp"
#include "src/Interface/ShapeInferenceInterface.hpp"
namespace mlir {
class MLONNXOpsDialect : public Dialect {
public:
MLONNXOpsDialect(MLIRContext *context);
/// Provide a utility accessor to the dialect namespace. This is used by
/// several utilities for casting between dialects.
static StringRef getDialectNamespace() { return "mlonnx"; }
};
/// Include the auto-generated header file containing the declarations of the
/// ONNX operations.
#define GET_OP_CLASSES
#include "src/Dialect/MLONNX/MLONNXOps.hpp.inc"
} // end namespace mlir
namespace onnx_mlir {}

View File

@ -1,72 +0,0 @@
//===-- MLONNXOps.td -- ONNX ML Dialect Operation Definitions -*- tablegen -==//
//
// Copyright 2019-2020 The IBM Research Authors
//
// =============================================================================
//
// Defines ONNX ML Dialect operations.
//
//===----------------------------------------------------------------------===//
#ifdef MLONNX_OPS
#else
#define MLONNX_OPS
#ifdef OP_BASE
#else
include "mlir/IR/OpBase.td"
#endif // OP_BASE
#ifdef SHAPE_INFERENCE_INTERFACE
#else
include "src/Interface/ShapeInferenceInterface.td"
#endif // SHAPE_INFERENCE_INTERFACE
#ifdef PROMOTABLE_CONST_OPERANDS_OP_INTERFACE
#else
include "src/Interface/PromotableConstOperandsOpInterface.td"
#endif // PROMOTABLE_CONST_OPERANDS_OP_INTERFACE
#ifdef RESULT_TYPE_INFERENCE_OP_INTERFACE
#else
include "src/Interface/ResultTypeInferenceOpInterface.td"
#endif // RESULT_TYPE_INFERENCE_OP_INTERFACE
def MLONNX_Dialect : Dialect {
let name = "mlonnx";
let cppNamespace = "";
}
// Base class for ONNX dialect operations. This operation inherits from the base
// `Op` class in OpBase.td, and provides:
// * The parent dialect of the operation.
// * The mnemonic for the operation, or the name without the dialect prefix.
// * A list of traits for the operation.
class MLONNX_Op<string mnemonic, list<OpTrait> traits = []> :
Op<MLONNX_Dialect, mnemonic, traits>;
//===----------------------------------------------------------------------===//
// MLONNX Operations
//===----------------------------------------------------------------------===//
//the tablegen code onnxop.in is generated with gen_doc.py
//clone and install onnx
// git clone --recursive https://github.com/onnx/onnx.git
// set up env for anaconda3 and for ONNX MLIR (BOOSTROOT, cmake, gcc ...)
// cd onnx
//install onnx
// CC=gcc CXX=g++ pip install -e .
//run the script
// python onnx/defs/gen_doc.py
//result is in docs/onnx_ops.td.inc
//current limitations:
// 1. Attributes are not processed
// 2. output type inference not implemented except Add
// 3. Type Attribute: 'optional' and 'Variadic hetergeneous' are ignored
// 4. type of string, complex64 and complex128 for input/output are ignored
// 5. unsigned int are treated as signed one
include "mlir/Interfaces/SideEffectInterfaces.td"
include "src/Dialect/MLONNX/MLONNXOps.td.inc"
#endif // MLONNX_OPS

View File

@ -1,571 +0,0 @@
//********************************************************
// Do not modify this file directly.
// This file is automatically generated via script.
// Details can be found in docs/readonnxdefs.md .
//********************************************************
def MLONNXArrayFeatureExtractorOp:MLONNX_Op<"ArrayFeatureExtractor",
[NoSideEffect]> {
let summary = "ONNX ArrayFeatureExtractor operation";
let description = [{
"Select elements of the input tensor based on the indices passed.<br>"
" The indices are applied to the last axes of the tensor."
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 2;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {20};
}
}];
}
def MLONNXBinarizerOp:MLONNX_Op<"Binarizer",
[NoSideEffect]> {
let summary = "ONNX Binarizer operation";
let description = [{
"Maps the values of the input tensor to either 0 or 1, element-wise, based on the outcome of a comparison against a threshold value."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
DefaultValuedAttr<F32Attr, "0.0">:$threshold);
let results = (outs AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {20};
}
}];
}
def MLONNXCastMapOp:MLONNX_Op<"CastMap",
[NoSideEffect]> {
let summary = "ONNX CastMap operation";
let description = [{
"Converts a map to a tensor.<br>The map key must be an int64 and the values will be ordered"
" in ascending order based on this key.<br>The operator supports dense packing or sparse packing."
" If using sparse packing, the key cannot exceed the max_map-1 value."
}];
let arguments = (ins AnyTypeOf<[TupleOf<[TensorOf<[I64]>]>, MemRefOf<[I64]>]>:$X,
DefaultValuedAttr<StrAttr, "TO_FLOAT">:$cast_to,
DefaultValuedAttr<StrAttr, "DENSE">:$map_form,
DefaultValuedAttr<I64Attr, "1">:$max_map);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXCategoryMapperOp:MLONNX_Op<"CategoryMapper",
[NoSideEffect]> {
let summary = "ONNX CategoryMapper operation";
let description = [{
"Converts strings to integers and vice versa.<br>"
" Two sequences of equal length are used to map between integers and strings,"
" with strings and integers at the same index detailing the mapping.<br>"
" Each operator converts either integers to strings or strings to integers, depending "
" on which default value attribute is provided. Only one default value attribute"
" should be defined.<br>"
" If the string default value is set, it will convert integers to strings."
" If the int default value is set, it will convert strings to integers."
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
OptionalAttr<I64ArrayAttr>:$cats_int64s,
OptionalAttr<StrArrayAttr>:$cats_strings,
DefaultValuedAttr<I64Attr, "-1">:$default_int64,
DefaultValuedAttr<StrAttr, "_Unused">:$default_string);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXDictVectorizerOp:MLONNX_Op<"DictVectorizer",
[NoSideEffect]> {
let summary = "ONNX DictVectorizer operation";
let description = [{
"Uses an index mapping to convert a dictionary to an array.<br>"
" Given a dictionary, each key is looked up in the vocabulary attribute corresponding to"
" the key type. The index into the vocabulary array at which the key is found is then"
" used to index the output 1-D tensor 'Y' and insert into it the value found in the dictionary 'X'.<br>"
" The key type of the input map must correspond to the element type of the defined vocabulary attribute."
" Therefore, the output array will be equal in length to the index mapping vector parameter."
" All keys in the input dictionary must be present in the index mapping vector."
" For each item in the input dictionary, insert its value in the output array."
" Any keys not present in the input dictionary, will be zero in the output array.<br>"
" For example: if the ``string_vocabulary`` parameter is set to ``[\"a\", \"c\", \"b\", \"z\"]``,"
" then an input of ``{\"a\": 4, \"c\": 8}`` will produce an output of ``[4, 8, 0, 0]``."
" "
}];
let arguments = (ins AnyTypeOf<[TupleOf<[TensorOf<[I64,F32,F64]>]>, MemRefOf<[I64,F32,F64]>]>:$X,
OptionalAttr<I64ArrayAttr>:$int64_vocabulary,
OptionalAttr<StrArrayAttr>:$string_vocabulary);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXFeatureVectorizerOp:MLONNX_Op<"FeatureVectorizer",
[NoSideEffect]> {
let summary = "ONNX FeatureVectorizer operation";
let description = [{
"Concatenates input tensors into one continuous output.<br>"
" All input shapes are 2-D and are concatenated along the second dimention. 1-D tensors are treated as [1,C]."
" Inputs are copied to the output maintaining the order of the input arguments.<br>"
" All inputs must be integers or floats, while the output will be all floating point values."
}];
let arguments = (ins Variadic<AnyTypeOf<[TensorOf<[I32,I64,F32,F64]>, MemRefOf<[I32,I64,F32,F64]>]>>:$X,
OptionalAttr<I64ArrayAttr>:$inputdimensions);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return -1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXImputerOp:MLONNX_Op<"Imputer",
[NoSideEffect]> {
let summary = "ONNX Imputer operation";
let description = [{
"Replaces inputs that equal one value with another, leaving all other elements alone.<br>"
" This operator is typically used to replace missing values in situations where they have a canonical"
" representation, such as -1, 0, NaN, or some extreme value.<br>"
" One and only one of imputed_value_floats or imputed_value_int64s should be defined -- floats if the input tensor"
" holds floats, integers if the input tensor holds integers. The imputed values must all fit within the"
" width of the tensor element type. One and only one of the replaced_value_float or replaced_value_int64 should be defined,"
" which one depends on whether floats or integers are being processed.<br>"
" The imputed_value attribute length can be 1 element, or it can have one element per input feature.<br>In other words, if the input tensor has the shape [*,F], then the length of the attribute array may be 1 or F. If it is 1, then it is broadcast along the last dimension and applied to each feature."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$imputed_value_floats,
OptionalAttr<I64ArrayAttr>:$imputed_value_int64s,
DefaultValuedAttr<F32Attr, "0.0">:$replaced_value_float,
DefaultValuedAttr<I64Attr, "0">:$replaced_value_int64);
let results = (outs AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {20};
}
}];
}
def MLONNXLabelEncoderOp:MLONNX_Op<"LabelEncoder",
[NoSideEffect]> {
let summary = "ONNX LabelEncoder operation";
let description = [{
"Maps each element in the input tensor to another value.<br>"
" The mapping is determined by the two parallel attributes, 'keys_*' and"
" 'values_*' attribute. The i-th value in the specified 'keys_*' attribute"
" would be mapped to the i-th value in the specified 'values_*' attribute. It"
" implies that input's element type and the element type of the specified"
" 'keys_*' should be identical while the output type is identical to the"
" specified 'values_*' attribute. If an input element can not be found in the"
" specified 'keys_*' attribute, the 'default_*' that matches the specified"
" 'values_*' attribute may be used as its output value.<br>"
" Let's consider an example which maps a string tensor to an integer tensor."
" Assume and 'keys_strings' is [\"Amy\", \"Sally\"], 'values_int64s' is [5, 6],"
" and 'default_int64' is '-1'. The input [\"Dori\", \"Amy\", \"Amy\", \"Sally\","
" \"Sally\"] would be mapped to [-1, 5, 5, 6, 6].<br>"
" Since this operator is an one-to-one mapping, its input and output shapes"
" are the same. Notice that only one of 'keys_*'/'values_*' can be set.<br>"
" For key look-up, bit-wise comparison is used so even a float NaN can be"
" mapped to a value in 'values_*' attribute.<br>"
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
DefaultValuedAttr<F32Attr, "-0.0">:$default_float,
DefaultValuedAttr<I64Attr, "-1">:$default_int64,
DefaultValuedAttr<StrAttr, "_Unused">:$default_string,
OptionalAttr<F32ArrayAttr>:$keys_floats,
OptionalAttr<I64ArrayAttr>:$keys_int64s,
OptionalAttr<StrArrayAttr>:$keys_strings,
OptionalAttr<F32ArrayAttr>:$values_floats,
OptionalAttr<I64ArrayAttr>:$values_int64s,
OptionalAttr<StrArrayAttr>:$values_strings);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXLinearClassifierOp:MLONNX_Op<"LinearClassifier",
[NoSideEffect]> {
let summary = "ONNX LinearClassifier operation";
let description = [{
"Linear classifier"
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<I64ArrayAttr>:$classlabels_ints,
OptionalAttr<StrArrayAttr>:$classlabels_strings,
F32ArrayAttr:$coefficients,
OptionalAttr<F32ArrayAttr>:$intercepts,
DefaultValuedAttr<I64Attr, "0">:$multi_class,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y,
AnyTypeOf<[AnyMemRef, AnyTensor]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 2;
}
static std::vector<int> getTypeMap() {
return {-1,-1};
}
}];
}
def MLONNXLinearRegressorOp:MLONNX_Op<"LinearRegressor",
[NoSideEffect]> {
let summary = "ONNX LinearRegressor operation";
let description = [{
"Generalized linear regression evaluation.<br>"
" If targets is set to 1 (default) then univariate regression is performed.<br>"
" If targets is set to M then M sets of coefficients must be passed in as a sequence"
" and M results will be output for each input n in N.<br>"
" The coefficients array is of length n, and the coefficients for each target are contiguous."
" Intercepts are optional but if provided must match the number of targets."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$coefficients,
OptionalAttr<F32ArrayAttr>:$intercepts,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform,
DefaultValuedAttr<I64Attr, "1">:$targets);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXNormalizerOp:MLONNX_Op<"Normalizer",
[NoSideEffect]> {
let summary = "ONNX Normalizer operation";
let description = [{
"Normalize the input. There are three normalization modes, which have the corresponding formulas,"
" defined using element-wise infix operators '/' and '^' and tensor-wide functions 'max' and 'sum':<br>"
"<br>"
" Max: Y = X / max(X)<br>"
" L1: Y = X / sum(X)<br>"
" L2: Y = sqrt(X^2 / sum(X^2)}<br>"
" In all modes, if the divisor is zero, Y == X."
"<br>"
" For batches, that is, [N,C] tensors, normalization is done along the C axis. In other words, each row"
" of the batch is normalized independently."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
DefaultValuedAttr<StrAttr, "MAX">:$norm);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXOneHotEncoderOp:MLONNX_Op<"OneHotEncoder",
[NoSideEffect]> {
let summary = "ONNX OneHotEncoder operation";
let description = [{
"Replace each input element with an array of ones and zeros, where a single"
" one is placed at the index of the category that was passed in. The total category count "
" will determine the size of the extra dimension of the output array Y.<br>"
" For example, if we pass a tensor with a single value of 4, and a category count of 8, "
" the output will be a tensor with ``[0,0,0,0,1,0,0,0]``.<br>"
" This operator assumes every input feature is from the same set of categories.<br>"
" If the input is a tensor of float, int32, or double, the data will be cast"
" to integers and the cats_int64s category list will be used for the lookups."
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
OptionalAttr<I64ArrayAttr>:$cats_int64s,
OptionalAttr<StrArrayAttr>:$cats_strings,
DefaultValuedAttr<I64Attr, "1">:$zeros);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXSVMClassifierOp:MLONNX_Op<"SVMClassifier",
[NoSideEffect]> {
let summary = "ONNX SVMClassifier operation";
let description = [{
"Support Vector Machine classifier"
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<I64ArrayAttr>:$classlabels_ints,
OptionalAttr<StrArrayAttr>:$classlabels_strings,
OptionalAttr<F32ArrayAttr>:$coefficients,
OptionalAttr<F32ArrayAttr>:$kernel_params,
DefaultValuedAttr<StrAttr, "LINEAR">:$kernel_type,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform,
OptionalAttr<F32ArrayAttr>:$prob_a,
OptionalAttr<F32ArrayAttr>:$prob_b,
OptionalAttr<F32ArrayAttr>:$rho,
OptionalAttr<F32ArrayAttr>:$support_vectors,
OptionalAttr<I64ArrayAttr>:$vectors_per_class);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y,
AnyTypeOf<[AnyMemRef, AnyTensor]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 2;
}
static std::vector<int> getTypeMap() {
return {-1,-1};
}
}];
}
def MLONNXSVMRegressorOp:MLONNX_Op<"SVMRegressor",
[NoSideEffect]> {
let summary = "ONNX SVMRegressor operation";
let description = [{
"Support Vector Machine regression prediction and one-class SVM anomaly detection."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$coefficients,
OptionalAttr<F32ArrayAttr>:$kernel_params,
DefaultValuedAttr<StrAttr, "LINEAR">:$kernel_type,
DefaultValuedAttr<I64Attr, "0">:$n_supports,
DefaultValuedAttr<I64Attr, "0">:$one_class,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform,
OptionalAttr<F32ArrayAttr>:$rho,
OptionalAttr<F32ArrayAttr>:$support_vectors);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXScalerOp:MLONNX_Op<"Scaler",
[NoSideEffect]> {
let summary = "ONNX Scaler operation";
let description = [{
"Rescale input data, for example to standardize features by removing the mean and scaling to unit variance."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$offset,
OptionalAttr<F32ArrayAttr>:$scale);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXTreeEnsembleClassifierOp:MLONNX_Op<"TreeEnsembleClassifier",
[NoSideEffect]> {
let summary = "ONNX TreeEnsembleClassifier operation";
let description = [{
"Tree Ensemble classifier. Returns the top class for each of N inputs.<br>"
" The attributes named 'nodes_X' form a sequence of tuples, associated by "
" index into the sequences, which must all be of equal length. These tuples"
" define the nodes.<br>"
" Similarly, all fields prefixed with 'class_' are tuples of votes at the leaves."
" A leaf may have multiple votes, where each vote is weighted by"
" the associated class_weights index.<br>"
" One and only one of classlabels_strings or classlabels_int64s"
" will be defined. The class_ids are indices into this list."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$base_values,
OptionalAttr<I64ArrayAttr>:$class_ids,
OptionalAttr<I64ArrayAttr>:$class_nodeids,
OptionalAttr<I64ArrayAttr>:$class_treeids,
OptionalAttr<F32ArrayAttr>:$class_weights,
OptionalAttr<I64ArrayAttr>:$classlabels_int64s,
OptionalAttr<StrArrayAttr>:$classlabels_strings,
OptionalAttr<I64ArrayAttr>:$nodes_falsenodeids,
OptionalAttr<I64ArrayAttr>:$nodes_featureids,
OptionalAttr<F32ArrayAttr>:$nodes_hitrates,
OptionalAttr<I64ArrayAttr>:$nodes_missing_value_tracks_true,
OptionalAttr<StrArrayAttr>:$nodes_modes,
OptionalAttr<I64ArrayAttr>:$nodes_nodeids,
OptionalAttr<I64ArrayAttr>:$nodes_treeids,
OptionalAttr<I64ArrayAttr>:$nodes_truenodeids,
OptionalAttr<F32ArrayAttr>:$nodes_values,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y,
AnyTypeOf<[AnyMemRef, AnyTensor]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 2;
}
static std::vector<int> getTypeMap() {
return {-1,-1};
}
}];
}
def MLONNXTreeEnsembleRegressorOp:MLONNX_Op<"TreeEnsembleRegressor",
[NoSideEffect]> {
let summary = "ONNX TreeEnsembleRegressor operation";
let description = [{
"Tree Ensemble regressor. Returns the regressed values for each input in N.<br>"
" All args with nodes_ are fields of a tuple of tree nodes, and"
" it is assumed they are the same length, and an index i will decode the"
" tuple across these inputs. Each node id can appear only once"
" for each tree id.<br>"
" All fields prefixed with target_ are tuples of votes at the leaves.<br>"
" A leaf may have multiple votes, where each vote is weighted by"
" the associated target_weights index.<br>"
" All trees must have their node ids start at 0 and increment by 1.<br>"
" Mode enum is BRANCH_LEQ, BRANCH_LT, BRANCH_GTE, BRANCH_GT, BRANCH_EQ, BRANCH_NEQ, LEAF"
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
DefaultValuedAttr<StrAttr, "SUM">:$aggregate_function,
OptionalAttr<F32ArrayAttr>:$base_values,
OptionalAttr<I64Attr>:$n_targets,
OptionalAttr<I64ArrayAttr>:$nodes_falsenodeids,
OptionalAttr<I64ArrayAttr>:$nodes_featureids,
OptionalAttr<F32ArrayAttr>:$nodes_hitrates,
OptionalAttr<I64ArrayAttr>:$nodes_missing_value_tracks_true,
OptionalAttr<StrArrayAttr>:$nodes_modes,
OptionalAttr<I64ArrayAttr>:$nodes_nodeids,
OptionalAttr<I64ArrayAttr>:$nodes_treeids,
OptionalAttr<I64ArrayAttr>:$nodes_truenodeids,
OptionalAttr<F32ArrayAttr>:$nodes_values,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform,
OptionalAttr<I64ArrayAttr>:$target_ids,
OptionalAttr<I64ArrayAttr>:$target_nodeids,
OptionalAttr<I64ArrayAttr>:$target_treeids,
OptionalAttr<F32ArrayAttr>:$target_weights);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def MLONNXZipMapOp:MLONNX_Op<"ZipMap",
[NoSideEffect]> {
let summary = "ONNX ZipMap operation";
let description = [{
"Creates a map from the input and the attributes.<br>"
" The values are provided by the input tensor, while the keys are specified by the attributes."
" Must provide keys in either classlabels_strings or classlabels_int64s (but not both).<br>"
" The columns of the tensor correspond one-by-one to the keys specified by the attributes. There must be as many columns as keys.<br>"
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
OptionalAttr<I64ArrayAttr>:$classlabels_int64s,
OptionalAttr<StrArrayAttr>:$classlabels_strings);
let results = (outs AnyTypeOf<[TensorOf<[TensorOf<[F32,I64]>]>, MemRefOf<[F32,I64]>]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}

View File

@ -5666,3 +5666,568 @@ def ONNXXorOp:ONNX_Op<"Xor",
}];
}
def ONNXArrayFeatureExtractorOp:ONNX_Op<"ArrayFeatureExtractor",
[NoSideEffect]> {
let summary = "ONNX ArrayFeatureExtractor operation";
let description = [{
"Select elements of the input tensor based on the indices passed.<br>"
" The indices are applied to the last axes of the tensor."
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 2;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {20};
}
}];
}
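As a rough illustration of the gather semantics described above, a NumPy sketch under the assumption that the indices select positions along the last axis (not code from this repository):
import numpy as np
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [5.0, 6.0, 7.0, 8.0]])
Y = np.array([0, 3])   # indices into the last axis
Z = X[..., Y]          # -> [[1., 4.], [5., 8.]]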
def ONNXBinarizerOp:ONNX_Op<"Binarizer",
[NoSideEffect]> {
let summary = "ONNX Binarizer operation";
let description = [{
"Maps the values of the input tensor to either 0 or 1, element-wise, based on the outcome of a comparison against a threshold value."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
DefaultValuedAttr<F32Attr, "0.0">:$threshold);
let results = (outs AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {20};
}
}];
}
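A minimal NumPy sketch of the thresholding described above, assuming values strictly greater than the threshold map to 1 and everything else to 0:
import numpy as np
X = np.array([[0.2, 1.5], [3.0, -1.0]], dtype=np.float32)
threshold = 1.0
Y = (X > threshold).astype(X.dtype)   # -> [[0., 1.], [1., 0.]]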
def ONNXCastMapOp:ONNX_Op<"CastMap",
[NoSideEffect]> {
let summary = "ONNX CastMap operation";
let description = [{
"Converts a map to a tensor.<br>The map key must be an int64 and the values will be ordered"
" in ascending order based on this key.<br>The operator supports dense packing or sparse packing."
" If using sparse packing, the key cannot exceed the max_map-1 value."
}];
let arguments = (ins AnyTypeOf<[TupleOf<[TensorOf<[I64]>]>, MemRefOf<[I64]>]>:$X,
DefaultValuedAttr<StrAttr, "TO_FLOAT">:$cast_to,
DefaultValuedAttr<StrAttr, "DENSE">:$map_form,
DefaultValuedAttr<I64Attr, "1">:$max_map);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def ONNXCategoryMapperOp:ONNX_Op<"CategoryMapper",
[NoSideEffect]> {
let summary = "ONNX CategoryMapper operation";
let description = [{
"Converts strings to integers and vice versa.<br>"
" Two sequences of equal length are used to map between integers and strings,"
" with strings and integers at the same index detailing the mapping.<br>"
" Each operator converts either integers to strings or strings to integers, depending "
" on which default value attribute is provided. Only one default value attribute"
" should be defined.<br>"
" If the string default value is set, it will convert integers to strings."
" If the int default value is set, it will convert strings to integers."
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
OptionalAttr<I64ArrayAttr>:$cats_int64s,
OptionalAttr<StrArrayAttr>:$cats_strings,
DefaultValuedAttr<I64Attr, "-1">:$default_int64,
DefaultValuedAttr<StrAttr, "_Unused">:$default_string);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def ONNXDictVectorizerOp:ONNX_Op<"DictVectorizer",
[NoSideEffect]> {
let summary = "ONNX DictVectorizer operation";
let description = [{
"Uses an index mapping to convert a dictionary to an array.<br>"
" Given a dictionary, each key is looked up in the vocabulary attribute corresponding to"
" the key type. The index into the vocabulary array at which the key is found is then"
" used to index the output 1-D tensor 'Y' and insert into it the value found in the dictionary 'X'.<br>"
" The key type of the input map must correspond to the element type of the defined vocabulary attribute."
" Therefore, the output array will be equal in length to the index mapping vector parameter."
" All keys in the input dictionary must be present in the index mapping vector."
" For each item in the input dictionary, insert its value in the output array."
" Any keys not present in the input dictionary, will be zero in the output array.<br>"
" For example: if the ``string_vocabulary`` parameter is set to ``[\"a\", \"c\", \"b\", \"z\"]``,"
" then an input of ``{\"a\": 4, \"c\": 8}`` will produce an output of ``[4, 8, 0, 0]``."
" "
}];
let arguments = (ins AnyTypeOf<[TupleOf<[TensorOf<[I64,F32,F64]>]>, MemRefOf<[I64,F32,F64]>]>:$X,
OptionalAttr<I64ArrayAttr>:$int64_vocabulary,
OptionalAttr<StrArrayAttr>:$string_vocabulary);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def ONNXFeatureVectorizerOp:ONNX_Op<"FeatureVectorizer",
[NoSideEffect]> {
let summary = "ONNX FeatureVectorizer operation";
let description = [{
"Concatenates input tensors into one continuous output.<br>"
" All input shapes are 2-D and are concatenated along the second dimention. 1-D tensors are treated as [1,C]."
" Inputs are copied to the output maintaining the order of the input arguments.<br>"
" All inputs must be integers or floats, while the output will be all floating point values."
}];
let arguments = (ins Variadic<AnyTypeOf<[TensorOf<[I32,I64,F32,F64]>, MemRefOf<[I32,I64,F32,F64]>]>>:$X,
OptionalAttr<I64ArrayAttr>:$inputdimensions);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return -1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
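A small NumPy sketch of the concatenation described above (illustrative shapes only; a 1-D input would first be viewed as [1, C], and all inputs are cast to float):
import numpy as np
x1 = np.array([[1, 2]], dtype=np.int64)   # shape [1, 2]
x2 = np.array([[0.5, 0.25, 0.125]])       # shape [1, 3]
Y = np.concatenate([x1.astype(np.float32),
                    x2.astype(np.float32)], axis=1)   # shape [1, 5], all floats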
def ONNXImputerOp:ONNX_Op<"Imputer",
[NoSideEffect]> {
let summary = "ONNX Imputer operation";
let description = [{
"Replaces inputs that equal one value with another, leaving all other elements alone.<br>"
" This operator is typically used to replace missing values in situations where they have a canonical"
" representation, such as -1, 0, NaN, or some extreme value.<br>"
" One and only one of imputed_value_floats or imputed_value_int64s should be defined -- floats if the input tensor"
" holds floats, integers if the input tensor holds integers. The imputed values must all fit within the"
" width of the tensor element type. One and only one of the replaced_value_float or replaced_value_int64 should be defined,"
" which one depends on whether floats or integers are being processed.<br>"
" The imputed_value attribute length can be 1 element, or it can have one element per input feature.<br>In other words, if the input tensor has the shape [*,F], then the length of the attribute array may be 1 or F. If it is 1, then it is broadcast along the last dimension and applied to each feature."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$imputed_value_floats,
OptionalAttr<I64ArrayAttr>:$imputed_value_int64s,
DefaultValuedAttr<F32Attr, "0.0">:$replaced_value_float,
DefaultValuedAttr<I64Attr, "0">:$replaced_value_int64);
let results = (outs AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {20};
}
}];
}
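A minimal NumPy sketch of the replacement described above for the float case with a single imputed value broadcast to every feature (made-up numbers, not repository code):
import numpy as np
X = np.array([[1.0, -1.0, 3.0]], dtype=np.float32)
replaced_value_float = -1.0        # sentinel marking a missing entry
imputed_value_floats = [0.5]       # length 1, so it applies to every feature
Y = np.where(X == replaced_value_float, imputed_value_floats[0], X)
# -> [[1. , 0.5, 3. ]]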
def ONNXLabelEncoderOp:ONNX_Op<"LabelEncoder",
[NoSideEffect]> {
let summary = "ONNX LabelEncoder operation";
let description = [{
"Maps each element in the input tensor to another value.<br>"
" The mapping is determined by the two parallel attributes, 'keys_*' and"
" 'values_*' attribute. The i-th value in the specified 'keys_*' attribute"
" would be mapped to the i-th value in the specified 'values_*' attribute. It"
" implies that input's element type and the element type of the specified"
" 'keys_*' should be identical while the output type is identical to the"
" specified 'values_*' attribute. If an input element can not be found in the"
" specified 'keys_*' attribute, the 'default_*' that matches the specified"
" 'values_*' attribute may be used as its output value.<br>"
" Let's consider an example which maps a string tensor to an integer tensor."
" Assume and 'keys_strings' is [\"Amy\", \"Sally\"], 'values_int64s' is [5, 6],"
" and 'default_int64' is '-1'. The input [\"Dori\", \"Amy\", \"Amy\", \"Sally\","
" \"Sally\"] would be mapped to [-1, 5, 5, 6, 6].<br>"
" Since this operator is an one-to-one mapping, its input and output shapes"
" are the same. Notice that only one of 'keys_*'/'values_*' can be set.<br>"
" For key look-up, bit-wise comparison is used so even a float NaN can be"
" mapped to a value in 'values_*' attribute.<br>"
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
DefaultValuedAttr<F32Attr, "-0.0">:$default_float,
DefaultValuedAttr<I64Attr, "-1">:$default_int64,
DefaultValuedAttr<StrAttr, "_Unused">:$default_string,
OptionalAttr<F32ArrayAttr>:$keys_floats,
OptionalAttr<I64ArrayAttr>:$keys_int64s,
OptionalAttr<StrArrayAttr>:$keys_strings,
OptionalAttr<F32ArrayAttr>:$values_floats,
OptionalAttr<I64ArrayAttr>:$values_int64s,
OptionalAttr<StrArrayAttr>:$values_strings);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def ONNXLinearClassifierOp:ONNX_Op<"LinearClassifier",
[NoSideEffect]> {
let summary = "ONNX LinearClassifier operation";
let description = [{
"Linear classifier"
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<I64ArrayAttr>:$classlabels_ints,
OptionalAttr<StrArrayAttr>:$classlabels_strings,
F32ArrayAttr:$coefficients,
OptionalAttr<F32ArrayAttr>:$intercepts,
DefaultValuedAttr<I64Attr, "0">:$multi_class,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y,
AnyTypeOf<[AnyMemRef, AnyTensor]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 2;
}
static std::vector<int> getTypeMap() {
return {-1,-1};
}
}];
}
def ONNXLinearRegressorOp:ONNX_Op<"LinearRegressor",
[NoSideEffect]> {
let summary = "ONNX LinearRegressor operation";
let description = [{
"Generalized linear regression evaluation.<br>"
" If targets is set to 1 (default) then univariate regression is performed.<br>"
" If targets is set to M then M sets of coefficients must be passed in as a sequence"
" and M results will be output for each input n in N.<br>"
" The coefficients array is of length n, and the coefficients for each target are contiguous."
" Intercepts are optional but if provided must match the number of targets."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$coefficients,
OptionalAttr<F32ArrayAttr>:$intercepts,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform,
DefaultValuedAttr<I64Attr, "1">:$targets);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
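A NumPy sketch of the evaluation described above for the default single-target case with post_transform NONE (made-up coefficients and intercept):
import numpy as np
X = np.array([[1.0, 2.0, 3.0]])            # [N, C]
coefficients = np.array([0.5, -1.0, 2.0])  # length C for one target
intercepts = np.array([0.25])
targets = 1
W = coefficients.reshape(targets, -1)      # [targets, C]
Y = X @ W.T + intercepts                   # -> [[4.75]]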
def ONNXNormalizerOp:ONNX_Op<"Normalizer",
[NoSideEffect]> {
let summary = "ONNX Normalizer operation";
let description = [{
"Normalize the input. There are three normalization modes, which have the corresponding formulas,"
" defined using element-wise infix operators '/' and '^' and tensor-wide functions 'max' and 'sum':<br>"
"<br>"
" Max: Y = X / max(X)<br>"
" L1: Y = X / sum(X)<br>"
" L2: Y = sqrt(X^2 / sum(X^2)}<br>"
" In all modes, if the divisor is zero, Y == X."
"<br>"
" For batches, that is, [N,C] tensors, normalization is done along the C axis. In other words, each row"
" of the batch is normalized independently."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
DefaultValuedAttr<StrAttr, "MAX">:$norm);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
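A NumPy sketch of the three row-wise modes listed above for a [N, C] input (nonnegative sample values; the zero-divisor special case is not handled here):
import numpy as np
X = np.array([[3.0, 4.0], [1.0, 1.0]])
norm = "L2"
if norm == "MAX":
    Y = X / np.max(X, axis=1, keepdims=True)
elif norm == "L1":
    Y = X / np.sum(X, axis=1, keepdims=True)
else:  # "L2"
    Y = X / np.sqrt(np.sum(X * X, axis=1, keepdims=True))
# L2 result -> [[0.6, 0.8], [0.7071..., 0.7071...]]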
def ONNXOneHotEncoderOp:ONNX_Op<"OneHotEncoder",
[NoSideEffect]> {
let summary = "ONNX OneHotEncoder operation";
let description = [{
"Replace each input element with an array of ones and zeros, where a single"
" one is placed at the index of the category that was passed in. The total category count "
" will determine the size of the extra dimension of the output array Y.<br>"
" For example, if we pass a tensor with a single value of 4, and a category count of 8, "
" the output will be a tensor with ``[0,0,0,0,1,0,0,0]``.<br>"
" This operator assumes every input feature is from the same set of categories.<br>"
" If the input is a tensor of float, int32, or double, the data will be cast"
" to integers and the cats_int64s category list will be used for the lookups."
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
OptionalAttr<I64ArrayAttr>:$cats_int64s,
OptionalAttr<StrArrayAttr>:$cats_strings,
DefaultValuedAttr<I64Attr, "1">:$zeros);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def ONNXSVMClassifierOp:ONNX_Op<"SVMClassifier",
[NoSideEffect]> {
let summary = "ONNX SVMClassifier operation";
let description = [{
"Support Vector Machine classifier"
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<I64ArrayAttr>:$classlabels_ints,
OptionalAttr<StrArrayAttr>:$classlabels_strings,
OptionalAttr<F32ArrayAttr>:$coefficients,
OptionalAttr<F32ArrayAttr>:$kernel_params,
DefaultValuedAttr<StrAttr, "LINEAR">:$kernel_type,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform,
OptionalAttr<F32ArrayAttr>:$prob_a,
OptionalAttr<F32ArrayAttr>:$prob_b,
OptionalAttr<F32ArrayAttr>:$rho,
OptionalAttr<F32ArrayAttr>:$support_vectors,
OptionalAttr<I64ArrayAttr>:$vectors_per_class);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y,
AnyTypeOf<[AnyMemRef, AnyTensor]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 2;
}
static std::vector<int> getTypeMap() {
return {-1,-1};
}
}];
}
def ONNXSVMRegressorOp:ONNX_Op<"SVMRegressor",
[NoSideEffect]> {
let summary = "ONNX SVMRegressor operation";
let description = [{
"Support Vector Machine regression prediction and one-class SVM anomaly detection."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$coefficients,
OptionalAttr<F32ArrayAttr>:$kernel_params,
DefaultValuedAttr<StrAttr, "LINEAR">:$kernel_type,
DefaultValuedAttr<I64Attr, "0">:$n_supports,
DefaultValuedAttr<I64Attr, "0">:$one_class,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform,
OptionalAttr<F32ArrayAttr>:$rho,
OptionalAttr<F32ArrayAttr>:$support_vectors);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def ONNXScalerOp:ONNX_Op<"Scaler",
[NoSideEffect]> {
let summary = "ONNX Scaler operation";
let description = [{
"Rescale input data, for example to standardize features by removing the mean and scaling to unit variance."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$offset,
OptionalAttr<F32ArrayAttr>:$scale);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
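A one-line NumPy sketch, assuming the conventional (X - offset) * scale form of the rescaling described above (made-up per-feature values):
import numpy as np
X = np.array([[10.0, 20.0]], dtype=np.float32)
offset = np.array([10.0, 15.0])   # e.g. per-feature means
scale = np.array([0.5, 0.2])      # e.g. reciprocal standard deviations
Y = (X - offset) * scale          # -> [[0., 1.]]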
def ONNXTreeEnsembleClassifierOp:ONNX_Op<"TreeEnsembleClassifier",
[NoSideEffect]> {
let summary = "ONNX TreeEnsembleClassifier operation";
let description = [{
"Tree Ensemble classifier. Returns the top class for each of N inputs.<br>"
" The attributes named 'nodes_X' form a sequence of tuples, associated by "
" index into the sequences, which must all be of equal length. These tuples"
" define the nodes.<br>"
" Similarly, all fields prefixed with 'class_' are tuples of votes at the leaves."
" A leaf may have multiple votes, where each vote is weighted by"
" the associated class_weights index.<br>"
" One and only one of classlabels_strings or classlabels_int64s"
" will be defined. The class_ids are indices into this list."
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
OptionalAttr<F32ArrayAttr>:$base_values,
OptionalAttr<I64ArrayAttr>:$class_ids,
OptionalAttr<I64ArrayAttr>:$class_nodeids,
OptionalAttr<I64ArrayAttr>:$class_treeids,
OptionalAttr<F32ArrayAttr>:$class_weights,
OptionalAttr<I64ArrayAttr>:$classlabels_int64s,
OptionalAttr<StrArrayAttr>:$classlabels_strings,
OptionalAttr<I64ArrayAttr>:$nodes_falsenodeids,
OptionalAttr<I64ArrayAttr>:$nodes_featureids,
OptionalAttr<F32ArrayAttr>:$nodes_hitrates,
OptionalAttr<I64ArrayAttr>:$nodes_missing_value_tracks_true,
OptionalAttr<StrArrayAttr>:$nodes_modes,
OptionalAttr<I64ArrayAttr>:$nodes_nodeids,
OptionalAttr<I64ArrayAttr>:$nodes_treeids,
OptionalAttr<I64ArrayAttr>:$nodes_truenodeids,
OptionalAttr<F32ArrayAttr>:$nodes_values,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y,
AnyTypeOf<[AnyMemRef, AnyTensor]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 2;
}
static std::vector<int> getTypeMap() {
return {-1,-1};
}
}];
}
def ONNXTreeEnsembleRegressorOp:ONNX_Op<"TreeEnsembleRegressor",
[NoSideEffect]> {
let summary = "ONNX TreeEnsembleRegressor operation";
let description = [{
"Tree Ensemble regressor. Returns the regressed values for each input in N.<br>"
" All args with nodes_ are fields of a tuple of tree nodes, and"
" it is assumed they are the same length, and an index i will decode the"
" tuple across these inputs. Each node id can appear only once"
" for each tree id.<br>"
" All fields prefixed with target_ are tuples of votes at the leaves.<br>"
" A leaf may have multiple votes, where each vote is weighted by"
" the associated target_weights index.<br>"
" All trees must have their node ids start at 0 and increment by 1.<br>"
" Mode enum is BRANCH_LEQ, BRANCH_LT, BRANCH_GTE, BRANCH_GT, BRANCH_EQ, BRANCH_NEQ, LEAF"
}];
let arguments = (ins AnyTypeOf<[TensorOf<[F32,F64,I64,I32]>, MemRefOf<[F32,F64,I64,I32]>]>:$X,
DefaultValuedAttr<StrAttr, "SUM">:$aggregate_function,
OptionalAttr<F32ArrayAttr>:$base_values,
OptionalAttr<I64Attr>:$n_targets,
OptionalAttr<I64ArrayAttr>:$nodes_falsenodeids,
OptionalAttr<I64ArrayAttr>:$nodes_featureids,
OptionalAttr<F32ArrayAttr>:$nodes_hitrates,
OptionalAttr<I64ArrayAttr>:$nodes_missing_value_tracks_true,
OptionalAttr<StrArrayAttr>:$nodes_modes,
OptionalAttr<I64ArrayAttr>:$nodes_nodeids,
OptionalAttr<I64ArrayAttr>:$nodes_treeids,
OptionalAttr<I64ArrayAttr>:$nodes_truenodeids,
OptionalAttr<F32ArrayAttr>:$nodes_values,
DefaultValuedAttr<StrAttr, "NONE">:$post_transform,
OptionalAttr<I64ArrayAttr>:$target_ids,
OptionalAttr<I64ArrayAttr>:$target_nodeids,
OptionalAttr<I64ArrayAttr>:$target_treeids,
OptionalAttr<F32ArrayAttr>:$target_weights);
let results = (outs AnyTypeOf<[AnyMemRef, AnyTensor]>:$Y);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}
def ONNXZipMapOp:ONNX_Op<"ZipMap",
[NoSideEffect]> {
let summary = "ONNX ZipMap operation";
let description = [{
"Creates a map from the input and the attributes.<br>"
" The values are provided by the input tensor, while the keys are specified by the attributes."
" Must provide keys in either classlabels_strings or classlabels_int64s (but not both).<br>"
" The columns of the tensor correspond one-by-one to the keys specified by the attributes. There must be as many columns as keys.<br>"
}];
let arguments = (ins AnyTypeOf<[AnyMemRef, AnyTensor]>:$X,
OptionalAttr<I64ArrayAttr>:$classlabels_int64s,
OptionalAttr<StrArrayAttr>:$classlabels_strings);
let results = (outs AnyTypeOf<[TensorOf<[TensorOf<[F32,I64]>]>, MemRefOf<[F32,I64]>]>:$Z);
let extraClassDeclaration = [{
static int getNumberOfOperands() {
return 1;
}
static int getNumberOfResults() {
return 1;
}
static std::vector<int> getTypeMap() {
return {-1};
}
}];
}

View File

@ -246,7 +246,6 @@ void registerDialects() {
mlir::registerDialect<mlir::scf::SCFDialect>();
mlir::registerDialect<mlir::StandardOpsDialect>();
mlir::registerDialect<mlir::ONNXOpsDialect>();
mlir::registerDialect<mlir::MLONNXOpsDialect>();
mlir::registerDialect<mlir::KrnlOpsDialect>();
}

View File

@ -22,7 +22,6 @@
#include "src/Builder/FrontendDialectTransformer.hpp" #include "src/Builder/FrontendDialectTransformer.hpp"
#include "src/Dialect/Krnl/KrnlOps.hpp" #include "src/Dialect/Krnl/KrnlOps.hpp"
#include "src/Dialect/MLONNX/MLONNXOps.hpp"
#include "src/Dialect/ONNX/ONNXOps.hpp" #include "src/Dialect/ONNX/ONNXOps.hpp"
#include "src/Pass/Passes.hpp" #include "src/Pass/Passes.hpp"

View File

@ -14,8 +14,3 @@ target_link_libraries(onnx-mlir-opt
${OMLibs}
${MLIRLibs}
onnx)
if (INCLUDE_ONNX_ML)
target_link_libraries(onnx-mlir-opt OMMLONNXOps)
add_dependencies(onnx-mlir-opt OMMLONNXOpsInc)
endif()

View File

@ -20,7 +20,6 @@
#include <mlir/Support/MlirOptMain.h>
#include "src/Dialect/Krnl/KrnlOps.hpp"
#include "src/Dialect/MLONNX/MLONNXOps.hpp"
#include "src/Dialect/ONNX/ONNXOps.hpp" #include "src/Dialect/ONNX/ONNXOps.hpp"
#include "src/InitOMPasses.hpp" #include "src/InitOMPasses.hpp"
#include "src/Pass/Passes.hpp" #include "src/Pass/Passes.hpp"
@ -69,7 +68,6 @@ int main(int argc, char **argv) {
llvm::InitLLVM y(argc, argv);
mlir::registerDialect<mlir::ONNXOpsDialect>();
mlir::registerDialect<mlir::MLONNXOpsDialect>();
mlir::registerDialect<mlir::KrnlOpsDialect>();
initOMPasses();

View File

@ -3,7 +3,7 @@
//===----------------------------------------------------------------------===//
// CHECK-LABEL: @check_map1(%arg0: tuple<tensor<10xi64>, tensor<10xi64>>) -> tensor<*xi64> {
func @check_map1(%arg0: tuple<tensor<10xi64>, tensor<10xi64>>) -> tensor<*xi64> {
%0 = "mlonnx.CastMap"(%arg0) {cast_to = "TO_FLOAT", map_form = "DENSE", max_map = 1 : i64} : (tuple<tensor<10xi64>, tensor<10xi64>>) -> tensor<*xi64> %0 = "onnx.CastMap"(%arg0) {cast_to = "TO_FLOAT", map_form = "DENSE", max_map = 1 : i64} : (tuple<tensor<10xi64>, tensor<10xi64>>) -> tensor<*xi64>
return %0 : tensor<*xi64>
// CHECK-NEXT: %0 = "mlonnx.CastMap"(%arg0) {cast_to = "TO_FLOAT", map_form = "DENSE", max_map = 1 : i64} : (tuple<tensor<10xi64>, tensor<10xi64>>) -> tensor<*xi64> // CHECK-NEXT: %0 = "onnx.CastMap"(%arg0) {cast_to = "TO_FLOAT", map_form = "DENSE", max_map = 1 : i64} : (tuple<tensor<10xi64>, tensor<10xi64>>) -> tensor<*xi64>
}

View File

@ -23,35 +23,5 @@ add_custom_target(OMONNXOpsIncTranslation
DEPENDS OMONNXOpsTableGenIncGen
OMONNXOpsBuildTableIncGen)
# Invoke gen_onnx_mlir.py to obtain MLONNXOps.td.inc, MLOpBuildTable.inc.
add_custom_command(OUTPUT ${CMAKE_CURRENT_SOURCE_DIR}/MLONNXOps.td.inc
${CMAKE_CURRENT_SOURCE_DIR}/MLOpBuildTable.inc
COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/gen_onnx_mlir.py --domain="ONNX_ML"
DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/gen_onnx_mlir.py)
# Move the generated files to respective destinations:
# MLONNXOps.td.inc -> src/Dialect/MLONNX/MLONNXOps.td.inc
add_custom_target(OMMLONNXOpsTableGenIncGen
COMMAND ${CMAKE_COMMAND} -E rename
${CMAKE_CURRENT_SOURCE_DIR}/MLONNXOps.td.inc
${ONNX_MLIR_SRC_ROOT}/src/Dialect/MLONNX/MLONNXOps.td.inc
DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/MLONNXOps.td.inc)
# MLOpBuildTable.inc -> src/Builder/MLOpBuildTable.inc
add_custom_target(OMMLONNXOpsBuildTableIncGen
COMMAND ${CMAKE_COMMAND} -E rename
${CMAKE_CURRENT_SOURCE_DIR}/MLOpBuildTable.inc
${ONNX_MLIR_SRC_ROOT}/src/Builder/MLOpBuildTable.inc
DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/MLOpBuildTable.inc)
add_custom_target(OMMLONNXOpsIncTranslation
DEPENDS OMMLONNXOpsTableGenIncGen
OMMLONNXOpsBuildTableIncGen)
add_custom_target(OMONNXCheckVersion
COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/gen_onnx_mlir.py --check-operation-version)
add_custom_target(OMMLONNXCheckVersion
COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/gen_onnx_mlir.py
--check-operation-version --domain="ONNX_ML")

View File

@ -36,9 +36,6 @@ parser.add_argument("--check-operation-version",
" newer version of operation compared with version stored in version_dicts", " newer version of operation compared with version stored in version_dicts",
action="store_true", action="store_true",
default=False) default=False)
parser.add_argument("--domain",
help="specify domain, ONNX or ONNX_ML",
default = "ONNX")
args = parser.parse_args()
@ -50,7 +47,7 @@ check_operation_version = args.check_operation_version
# run this script with --check-operation-version flag.
# Update this dictionary when a newer version is implemented
# TODO: how to keep the old version
onnx_version_dict = {'Abs': 6, version_dict = {'Abs': 6,
'Acos': 7,
'Acosh': 9,
'Add': 7,
@ -205,9 +202,8 @@ onnx_version_dict = {'Abs': 6,
'Unsqueeze': 11,
'Upsample': 10,
'Where': 9,
'Xor': 7} 'Xor': 7,
'ArrayFeatureExtractor': 1,
onnx_ml_version_dict = {'ArrayFeatureExtractor': 1,
'Binarizer': 1,
'CastMap': 1,
'CategoryMapper': 1,
@ -334,15 +330,8 @@ MAX_NUM_TYPES=20
SNIPPETS = collect_snippets()
SAMPLE_IMPLEMENTATIONS = collect_sample_implementations()
ONNX_ML = bool(args.domain == "ONNX_ML")
sys.stderr.write("ONNX_ML {}\n".format(ONNX_ML))
def should_render_domain(domain): # type: (Text) -> bool
if domain == ONNX_ML_DOMAIN and not ONNX_ML:
return False
elif ONNX_ML and domain != ONNX_ML_DOMAIN:
return False
return True
@ -708,9 +697,6 @@ def get_type_inference_func(s, indent, type_inference_code):
def gen_op_def(schema):
indent = inc_indent()
if (ONNX_ML) :
s = 'def MLONNX{0}Op:MLONNX_Op<"{0}",\n'.format(schema.name)
else :
s = 'def ONNX{0}Op:ONNX_Op<"{0}",\n'.format(schema.name)
# Generate decl for op traits.
@ -881,10 +867,6 @@ def gen_op_importer(schema, file):
if OpSchema.FormalParameterOption.Variadic == output.option:
expected_num_results = -1
if ONNX_ML:
handler_func = special_op_handler.get(
schema.name, "buildOperation<mlir::MLONNX{}Op>".format(schema.name))
else:
handler_func = special_op_handler.get(
schema.name, "buildOperation<mlir::ONNX{}Op>".format(schema.name))
@ -920,10 +902,6 @@ def build_operator_schemas():
for domain, _supportmap in sorted(index.items()):
if not should_render_domain(domain):
continue
if domain == ONNX_ML_DOMAIN:
version_dict = onnx_ml_version_dict
else:
version_dict = onnx_version_dict
processed_supportmap = list()
for _support, _namemap in sorted(_supportmap.items()):
processed_namemap = list()
@ -1005,9 +983,6 @@ if __name__ == '__main__':
class Args(object):
if args.dry_run_onnx_ops:
op_def = StringIO()
else:
if args.domain == 'ONNX_ML':
op_def_file_path = os.path.join(curr_dir, 'MLONNXOps.td.inc')
else:
op_def_file_path = os.path.join(curr_dir, 'ONNXOps.td.inc')
op_def = io.open(op_def_file_path, 'w', newline='')
@ -1015,9 +990,6 @@ if __name__ == '__main__':
if args.dry_run_op_build_table:
op_importer = StringIO()
else:
if args.domain == 'ONNX_ML':
op_importer_file_path = os.path.join(curr_dir, 'MLOpBuildTable.inc')
else :
op_importer_file_path = os.path.join(curr_dir, 'OpBuildTable.inc')
op_importer = io.open(op_importer_file_path, 'w', newline='')
main(Args)