When an op is moved out of an assuming region, we already know statically that it
is independent of the assuming region. Hence, there is no need to yield its
results.
PiperOrigin-RevId: 366001405
Add pattern to move operations out of assuming op. This is only valid for
constraint-independent ops, like `cstr_broadcastable` and `shape_of`. It will
eventually allow making assuming regions' constraints independent from each
other so that they can be merged.
PiperOrigin-RevId: 365993145
We can also use it for ternary ops like Select if we change the signature so
that a ValueRange is passed in.
Also remove the special casing for HloComplexAdaptor; it can be handled with the
generic adaptor as well.
PiperOrigin-RevId: 365777493
We only need the memref_reinterpret_cast if we don't know whether a dimension
gets expanded or not. With static shapes, we know that a dimension can only be
expanded if it is a static 1, so we lower it in the same way we lower fully
static broadcasts.
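As a rough NumPy analogy (not the actual lowering): under broadcasting, only a
dimension with static size 1 can be expanded; any other static extent has to
match the target shape.

    import numpy as np

    a = np.zeros((4, 1, 3))
    b = np.broadcast_to(a, (4, 5, 3))  # the static-1 middle dimension expands to 5
    # np.broadcast_to(np.zeros((4, 2, 3)), (4, 5, 3)) would fail: 2 is neither 5 nor 1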
PiperOrigin-RevId: 363859181
This is the same as iota, but instead of taking the dimensions from the result
tensor, we use the supplied shape extents tensor.
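A hedged NumPy sketch of the intended semantics (the function name and the
`iota_dim` parameter are illustrative, not the op's actual interface): each
element equals its index along the iota dimension, and the output shape comes
from the extents tensor rather than the result type.

    import numpy as np

    def dynamic_iota(shape_extents, iota_dim, dtype=np.int32):
        shape = tuple(int(e) for e in shape_extents)
        idx = np.arange(shape[iota_dim], dtype=dtype)
        view = [1] * len(shape)
        view[iota_dim] = shape[iota_dim]
        return np.broadcast_to(idx.reshape(view), shape)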
PiperOrigin-RevId: 362298548
This is an annoying edge case because the collapse->expand lowering expects at
least R1 or it will produce invalid linalg reshapes. Using the direct lowering
works fine.
PiperOrigin-RevId: 362269199
The conversion from dot_general to dot fails when trying to retrieve
and use the precision config, since precision_config is optional.
PiperOrigin-RevId: 362095296
For now, the pass only reifies the required shape computations. Moving
broadcasts will follow to allow for fusion across them.
PiperOrigin-RevId: 362033715
Return NaN at zeta poles, or Inf where the limit is defined. Also test the kernel
against the series representation of zeta.
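A hedged sketch of a series-based reference for such a test (the Hurwitz zeta
series converges for s > 1 and q > 0; the function name and fixed term count are
illustrative, not the kernel implementation):

    def zeta_series_reference(s, q, terms=100000):
        # Direct summation of the defining series: sum over n >= 0 of (n + q)**(-s).
        return sum((n + q) ** -s for n in range(terms))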
PiperOrigin-RevId: 361993482
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/46723
Reduces some warnings about comparison of integers of different signs.
Copybara import of the project:
--
311f436f77b334f5462127d8cf179cce067969ca by Marius Brehler <marius.brehler@iml.fraunhofer.de>:
Adjust types of loop counters
Reduces some warnings about comparison of integers of different signs.
PiperOrigin-RevId: 360912203
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.
PiperOrigin-RevId: 360891955
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.
PiperOrigin-RevId: 360881387
The linalg named ops are now type polymorphic, so the type-monomorphic
varieties are redundant (and will be deleted soon).
PiperOrigin-RevId: 360509010
This pattern only works for normal convolutions; it does not work for depthwise
convolutions. The Linalg conv ops are defined with static rank, so the pattern only
supports the 1d/2d/3d cases, which are the most typical ones.
This also refactors out the same check in the lmhlo.conv lowering.
PiperOrigin-RevId: 359503527
This just blows up everything to the ranked case (up to rank 6) and is probably
quite slow. This is sufficient to make kernelgen compile SelectV2.
PiperOrigin-RevId: 358777728
A shape that contains exactly one element is effectively a scalar. This leads
to a speedup in cases where we have a binary op with one operand that is
effectively a scalar, because we can use the fast path.
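A hedged sketch of the check implied above (illustrative only): a shape is
effectively a scalar when its element count, i.e. the product of its extents, is 1.

    import math

    def is_effective_scalar(shape):
        return math.prod(shape) == 1  # e.g. (), (1,), (1, 1, 1)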
PiperOrigin-RevId: 357515552
This is done by removing the approximation and lowering to atan2 library calls later, which makes the implementation the same as XLA's. Note that if the approximation is brought back later, it can be fixed by changing the IR check `less-than(X, 0)` to `less-than(copysign(X, 1), 0)`.
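A small illustration of why the copysign form matters (signed zero): `-0.0 < 0`
is false, but copying its sign onto 1 makes the negative-half-plane test come out
right.

    import math

    x = -0.0
    print(x < 0)                      # False: -0.0 compares equal to 0.0
    print(math.copysign(1.0, x) < 0)  # True: copysign preserves the sign of -0.0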
PiperOrigin-RevId: 356253941
In IREE, we use an indexed generic op to handle the initial value. Here, however,
we lower it to a generic op that carries an init_tensor, and leave the handling of
the initialization to later passes.
PiperOrigin-RevId: 354294807
If mhlo.reshape is not purely collapsing some consecutive operand
dimensions into result dimensions, we will generate two linalg
reshape ops for it: the first one collapses all operand dimensions
into one dimension, and the second one expands it into all result
dimensions. In this case, the number of collapsed/expanded dimensions
should come strictly from the operand/result. This is different
from the case where we can generate a single linalg reshape; there,
the reassociation map should have rank equal to the largest of the
operand and result shapes.
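A hedged NumPy analogy of the two-step decomposition (the shapes are made up for
illustration): reshaping (2, 3, 4) to (4, 6) is not a pure collapse of consecutive
operand dimensions, so it is modeled as a full collapse to one dimension followed
by an expansion into all result dimensions.

    import numpy as np

    a = np.arange(24).reshape(2, 3, 4)
    collapsed = a.reshape(24)         # step 1: collapse all operand dims into one
    result = collapsed.reshape(4, 6)  # step 2: expand into all result dims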
PiperOrigin-RevId: 354293826
Also generate the kernels for all types of casts between signed int and float types.
This requires some adaptations to our build macros so that we can also specify the
output type of a kernel.
PiperOrigin-RevId: 354067727
Allow for relative tolerance in unary kernel tests. In the case of the cosh kernels,
this allows us to accept an observed difference of 5.6e-8 between the kernel and
the `std::cosh` reference (32829984.568665262 vs. 32829984.568665318) in one of
the test cases.
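A hedged sketch of such a relative-tolerance comparison (the helper name and
tolerance value are illustrative); for the values quoted above the relative error
is about 1.7e-15, well within float64 rounding noise.

    def almost_equal(actual, expected, rtol=1e-12):
        return abs(actual - expected) <= rtol * abs(expected)

    print(almost_equal(32829984.568665262, 32829984.568665318))  # True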
PiperOrigin-RevId: 351983698
We prototyped the lowering from mhlo.dot to linalg.matmul in IREE. Since Linalg
now supports matmul in the tensors world, we can move the lowering logic to the
tensors world and upstream it to legalize_to_linalg.cc. The patch lowers mhlo.dot
to linalg.matmul/matvec/dot in the tensors world.
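A hedged sketch of the rank-based dispatch implied by the patch (an illustrative
mapping, not the actual pattern code):

    # (lhs rank, rhs rank) of mhlo.dot -> Linalg named op on tensors
    DOT_LOWERING = {
        (2, 2): "linalg.matmul",
        (2, 1): "linalg.matvec",
        (1, 1): "linalg.dot",
    }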
PiperOrigin-RevId: 351184911
This updates the tests to no longer rely on tensor_store. Once all users of this behavior have migrated, the tensor_store support will be removed.
PiperOrigin-RevId: 348624899
For floating point operations, this uses std.pow.
For integer operations, this lowers to a loop.
This adds a dependency on scf.
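A hedged sketch of the integer case as a simple loop (one multiply per iteration;
negative exponents and overflow are ignored here, and the actual scf-based lowering
may differ in detail):

    def integer_pow(base, exponent):
        result = 1
        for _ in range(exponent):
            result *= base
        return result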
PiperOrigin-RevId: 348537232
These are failing for complex types. Complex types require special handling. We have a fallback lowering for these ops so we can disable complex element types for now.
PiperOrigin-RevId: 348205002