This updates the tests to no longer rely on tensor_store. Once all users have migrated off this behavior, the tensor_store support will be removed.
PiperOrigin-RevId: 348624899
For floating-point operations, this uses std.pow.
For integer operations, this lowers to a loop (sketched below), which adds a dependency on the SCF dialect.
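A minimal sketch of the loop semantics the integer lowering materializes, written in plain C++ rather than as the actual MLIR rewrite (negative exponents are left out of this sketch):

```c++
#include <cstdint>

// Repeated multiplication, following the convention that x^0 == 1.
// This only illustrates the loop shape; the lowering itself presumably builds
// the equivalent SCF loop over the tensor element type.
int64_t IntegerPow(int64_t base, int64_t exponent) {
  int64_t result = 1;
  for (int64_t i = 0; i < exponent; ++i) result *= base;
  return result;
}
```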
PiperOrigin-RevId: 348537232
- MLIR MemRefs do not preserve layout information correctly when unit dimensions
are involved. Operations like convolution that use cuDNN, however, need the correct
layout to be preserved so that we do not end up creating an incompatible combination
of input/filter/output layouts that cuDNN does not support.
- Add these layouts to the convolution attributes as I32ArrayAttr, representing the
layout in "minor_to_major" form similar to XLA (illustrated below).
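As an illustration of the minor_to_major convention (this sketch is mine, not code from the change): the layout lists logical dimension numbers from fastest-varying to slowest-varying, so a logical NCHW tensor stored physically as NHWC has minor_to_major = [1, 3, 2, 0].

```c++
#include <cstdint>
#include <vector>

// Compute per-dimension strides (in elements) from a minor_to_major layout.
// For dims = {N, C, H, W} and minor_to_major = {1, 3, 2, 0}, this yields the
// NHWC strides {H*W*C, 1, W*C, C}.
std::vector<int64_t> StridesFromMinorToMajor(
    const std::vector<int64_t>& dims,
    const std::vector<int64_t>& minor_to_major) {
  std::vector<int64_t> strides(dims.size());
  int64_t stride = 1;
  for (int64_t dim : minor_to_major) {  // walk from minor-most to major-most
    strides[dim] = stride;
    stride *= dims[dim];
  }
  return strides;
}
```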
PiperOrigin-RevId: 348034757
It didn't return 0 for 0.0 and -0.0.
Currently we emit -0.0 for -0.0, which is correct according to the HLO dialect.
For TF_SignOp we should emit 0.0 in that case; we will leave that as a TODO.
Enable the tests that work now, and add another one for Int64.
Also improve the registration code: we should not register the Int32 kernel.
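For reference, a minimal sketch of the intended floating-point behavior described above (plain C++, not the actual kernel; NaN handling is deliberately not addressed here):

```c++
// Sign semantics targeted for the TF kernel: 0.0 for both +0.0 and -0.0,
// otherwise +1.0 / -1.0. (HLO sign, by contrast, returns -0.0 for -0.0.)
float SignTf(float x) {
  if (x > 0.0f) return 1.0f;
  if (x < 0.0f) return -1.0f;
  return 0.0f;  // covers +0.0 and -0.0
}
```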
PiperOrigin-RevId: 347981124
It didn't return 0 for 0.0 and -0.0.
Currently we emit -0.0 for -0.0, which is correct according to the HLO dialect.
For TF_SignOp we should emit 0.0 in that case; we will leave that as a TODO.
Enable the tests that work now, and add another one for Int64.
Also improve the registration code: we should not register the Int32 kernel.
PiperOrigin-RevId: 347602378
It didn't return 0 for 0.0 and -0.0.
Currently we emit -0.0 for -0.0, which is correct according to the HLO dialect.
For TF_SignOp we should emit 0.0 in that case; we will leave that as a TODO.
Enable the tests that work now, and add another one for Int64.
Also improve the registration code: we should not register the Int32 kernel.
PiperOrigin-RevId: 347590340
- Add this attribute to match the corresponding XLA HLO attribute on convolution
operations.
- A true value indicates a reversal of the corresponding kernel spatial dimension.
- Since the XLA builder does not support this attribute, use a custom HLO converter to map
from mlir::mhlo::ConvOp to XLA.
PiperOrigin-RevId: 346891737
Two tensors having the same SSA value isn't sufficient to conclude equality for floating-point types, as `NaN != NaN`. As written, this causes `tf.IsNan` to [miscompile](https://github.com/google/iree/issues/4061).
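A minimal standalone demonstration of the underlying floating-point fact (my illustration, not code from the change):

```c++
#include <cassert>
#include <cmath>

// A NaN value compares unequal to itself, so concluding "equal" just because
// both comparison operands are the same SSA value is unsound. A lowering of
// tf.IsNan as (x != x) depends on exactly this behavior.
int main() {
  double x = std::nan("");
  assert(x != x);
  assert(!(x == x));
  return 0;
}
```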
PiperOrigin-RevId: 345730640
- Add a variant of the fused convolution that does not need a side input and side input scale.
- Rename the existing one to `ConvForwardFusedSideInputOp`.
- Update tests to exercise all variants of the convolution ops in the GPU dialect.
- Eliminate the unused `LHLO_ExtentBuffer` and change `LHLO_Buffer` to allow any integer element
type, to match what XLA can sometimes generate for scratch buffers.
PiperOrigin-RevId: 345701569
- Map the custom call for GEMM in XLA HLO to the Gemm/Gemm bias operations in the LHLO GPU
dialect.
- Make 'algorithm' an optional attribute to better match the XLA HLO backend config.
- Replace 'alpha' with 'alpha_real' and 'alpha_complex' to support complex GEMM correctly
(see the sketch below).
- Generate GemmThunk from LHLO GPU Gemm operations.
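A hedged sketch of why a single real-valued alpha is not enough, assuming the two attributes carry the real and imaginary parts of alpha (the helper below is illustrative, not part of the change): a GEMM computes output = alpha * lhs * rhs + beta * output, and for a complex-valued GEMM alpha itself is complex, so it has to be carried as two real-valued attributes and reassembled when the thunk is built.

```c++
#include <complex>

// Reassemble the complex alpha from its two real-valued attribute components.
std::complex<double> AssembleAlpha(double alpha_real, double alpha_imaginary) {
  return {alpha_real, alpha_imaginary};
}
```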
PiperOrigin-RevId: 345250840
Boolean element values should be fetched as unsigned integers, not signed integers, which would return -1 for true.
Added a TODO to handle unsigned types correctly as well, as we don't seem to be using unsigned types.
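A small illustration of the -1 behavior, assuming the element values are backed by llvm::APInt (as MLIR dense element attributes are); this is my sketch, not the patched code:

```c++
#include "llvm/ADT/APInt.h"

// The single set bit of an i1 'true' sign-extends to -1 but zero-extends to 1,
// which is why the boolean read has to go through the unsigned accessor.
void BoolReadDemo() {
  llvm::APInt true_bit(/*numBits=*/1, /*value=*/1);
  (void)true_bit.getSExtValue();  // yields -1
  (void)true_bit.getZExtValue();  // yields 1
}
```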
PiperOrigin-RevId: 343927564
- Restructure the LHLO GPU Cholesky op to better match XLA HLO by eliminating the
untyped buffer and changing the is_upper attribute to is_lower.
- Change LhloDialectEmitter to emit the LHLO GPU Cholesky operation.
PiperOrigin-RevId: 343873516
- Also fixed kernel_gen embed_memref_prints to mark the inserted declarations as private.
- This is in preparation for a proposed MLIR change to disallow public declarations.
PiperOrigin-RevId: 342081252
- Extend MHLO CustomCall to have multiple tensors as results.
- Extend LHLO CustomCall to have multiple memrefs for output operands.
- Fix the HLO->LHLO and XLA HLO->LHLO mappings for CustomCall to set up the
operand_segment_sizes attribute correctly.
PiperOrigin-RevId: 342067762
This specific pattern can be replaced with the shape
passed to dynamic_reshape. This is implemented as a
canonicalization on mhlo.dynamic_reshape to fit into
the existing canonicalization infrastructure.
PiperOrigin-RevId: 342009365
The conversion had a bug in the computation of the stride and size arguments for std.memref_reinterpret_cast. The previous version also relied on linalg::ReshapeOp to do broadcasting when the rank of the output was higher than the rank of the input. Now the broadcasting is done entirely via descriptor modification, and linalg::ReshapeOp has been replaced with CopyOp.
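A hedged sketch of the descriptor trick (my illustration of the general technique, not the pass itself): a broadcast dimension gets the target size with stride 0, so every index in that dimension maps back to the same source element.

```c++
#include <cstdint>
#include <vector>

// Compute result strides for broadcasting a source memref into dst_sizes.
// Missing leading dimensions and size-1 dimensions that are expanded get
// stride 0; all other dimensions keep their original stride.
std::vector<int64_t> BroadcastStrides(const std::vector<int64_t>& src_sizes,
                                      const std::vector<int64_t>& src_strides,
                                      const std::vector<int64_t>& dst_sizes) {
  std::vector<int64_t> dst_strides(dst_sizes.size(), 0);
  const size_t rank_diff = dst_sizes.size() - src_sizes.size();
  for (size_t i = 0; i < src_sizes.size(); ++i) {
    if (src_sizes[i] != 1 || dst_sizes[i + rank_diff] == 1) {
      dst_strides[i + rank_diff] = src_strides[i];  // not expanded: keep stride
    }
  }
  return dst_strides;
}
```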
PiperOrigin-RevId: 341379871
This is to match HLO semantics and the general dimension semantics in MLIR.
Also,
* Define a minimal verifier for these ops.
* Add a folder for the SetDimensionSize op on statically shaped dimensions.
* Fix the assumption of a ranked shape in the GetDimensionSize op.
PiperOrigin-RevId: 341150923
Lowerings that depended on operations mixing real and complex types may
not infer the correct intermediate type. Remove these operations, as
they are not technically legal to generate. Update tests
to validate this.
PiperOrigin-RevId: 341128903
Previously the check started at rank 2, after handling scalars and equal shapes. This resulted in cases such as <1xf32> + <2xf32> being treated as impossible.
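For context, a minimal sketch of the per-dimension compatibility rule that has to apply from rank 1 upward (my illustration, not the pass code):

```c++
#include <cstdint>

// Two dimension sizes can be broadcast together if they are equal or if either
// is 1, so <1xf32> + <2xf32> is a legitimate (rank-1) broadcast case.
bool DimsBroadcastCompatible(int64_t a, int64_t b) {
  return a == b || a == 1 || b == 1;
}
```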
PiperOrigin-RevId: 341043965
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44405
Splitting #43857 by top-level directories.
Copybara import of the project:
--
fa5da7d5478649d11321dcac9f867b0a57e4798a by Dmitry Volodin <mr.molkree@gmail.com>:
fix typos in compiler dir
--
4d3c9f047f7ecb8ab299f1bf28a86fd39096eee7 by Dmitry Volodin <mr.molkree@gmail.com>:
fix one test, as the "atleast" in it comes from Bazel
--
9440ebaaa9fc4a735f7f72f0c8f0de4ec58afbd6 by Dmitry Volodin <mr.molkree@gmail.com>:
a bit more
PiperOrigin-RevId: 340819994
The slice indices must be rank-1 and have the same number of elements as the
rank of the operand. Give reasonable error messages for violations of these
requirements instead of a misleading error message saying that the types of the
indices don't all match.
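A hedged sketch of the two checks in plain C++ (names and return convention are illustrative, not the actual MLIR verifier):

```c++
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Returns an error message if an index operand violates the requirements,
// or std::nullopt if it is acceptable.
std::optional<std::string> CheckSliceIndexOperand(
    const std::vector<int64_t>& index_shape, int64_t operand_rank) {
  if (index_shape.size() != 1) {
    return std::string("slice index operand must be rank-1");
  }
  if (index_shape[0] != operand_rank) {
    return std::string("slice index operand must have one element per "
                       "dimension of the sliced operand");
  }
  return std::nullopt;
}
```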
PiperOrigin-RevId: 340660822