Commit Graph

398 Commits

Author SHA1 Message Date
Adrian Kuegel 0e2b255f01 Lower LHLO::AbsOp to complex dialect.
Also fix the traits for LHLO::AbsOp to allow different operand and result
types, and add a verifier.

PiperOrigin-RevId: 370438790
2021-04-26 05:44:03 -07:00
A. Unique TensorFlower 0569b7f7a4 [MLIR][MHLO] Generalize extent tensor cast elimination in bcast moving
PiperOrigin-RevId: 370112887
2021-04-23 10:52:50 -07:00
A. Unique TensorFlower 21e9365718 [MLIR][MHLO] Generalize extent tensor cast elimination in bcast moving
PiperOrigin-RevId: 370085141
2021-04-23 08:31:11 -07:00
A. Unique TensorFlower da5d252143 [MLIR] Merge extent tensor casts into `shape_of` ops in broadcast moving
PiperOrigin-RevId: 370058002
2021-04-23 04:44:01 -07:00
A. Unique TensorFlower 890a79641e Integrate LLVM at llvm/llvm-project@37e1458128
Updates LLVM usage to match
[37e145812855](https://github.com/llvm/llvm-project/commit/37e145812855)

PiperOrigin-RevId: 370020161
2021-04-22 22:57:08 -07:00
Hanhan Wang 49df46893c Add support for lowering variadic mhlo.reduce op.
Also add more lowerings for body ops; some MinOp and MaxOp instances can be
legalized to SelectOp + CompareOp, as sketched below.

PiperOrigin-RevId: 369891551
2021-04-22 09:50:49 -07:00
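A minimal sketch of that legalization, assuming 2021-era MHLO syntax with a string `comparison_direction` attribute; a maximum in a reduce body becomes a compare followed by a select:

```mlir
// maximum(%a, %b) rewritten as compare + select on scalar tensors:
%cmp = "mhlo.compare"(%a, %b) {comparison_direction = "GT"}
    : (tensor<f32>, tensor<f32>) -> tensor<i1>
%max = "mhlo.select"(%cmp, %a, %b)
    : (tensor<i1>, tensor<f32>, tensor<f32>) -> tensor<f32>
```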
Benjamin Kramer 4d435a817e [mhlo:linalg] Add support for lowering mhlo.concatenate to Linalg ops.
This uses an indexed linalg.generic, which is rather awkward standalone but
allows fusing into the output of the concatenate and avoids ever materializing
it in memory. I think this is the only way to get that with the current linalg
stack; fusion across a concatenate would require more infrastructure.

PiperOrigin-RevId: 369677652
2021-04-21 10:01:08 -07:00
A. Unique TensorFlower 8db96f54d3 [mhlo] Add a folder for mhlo.map which does nothing but return one of the arguments.
Add a folder for maps whose body returns only one of the block arguments. When this arises, the fold replaces the map's result with the corresponding operand tensor (sketched below).

PiperOrigin-RevId: 369304322
2021-04-19 14:36:08 -07:00
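A minimal sketch of a foldable map, assuming 2021-era MHLO syntax; the body ignores %b and returns %a, so every use of %0 can be replaced with %arg0:

```mlir
%0 = "mhlo.map"(%arg0, %arg1) ({
^bb0(%a: tensor<f32>, %b: tensor<f32>):
  // The body returns its first block argument unchanged, so the map is a no-op.
  "mhlo.return"(%a) : (tensor<f32>) -> ()
}) {dimensions = dense<[0]> : tensor<1xi64>}
    : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
// After folding, all uses of %0 refer to %arg0 directly.
```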
A. Unique TensorFlower 9374a1c0c5 [MLIR] Fix merge of assuming ops
Assuming ops can only be merged if their witnesses will dominate the merged
assuming op. This is not the case if the second op's witness is a result of the
first.

PiperOrigin-RevId: 369192868
2021-04-19 04:21:08 -07:00
Adrian Kuegel db9f298505 Generate Equal and NotEqual kernels for complex types.
PiperOrigin-RevId: 368586877
2021-04-15 00:35:52 -07:00
Rahul Joshi c75cbf4ac7 [MLIR][NFC] Rename ReduceOp operands() => inputs().
- Rename to avoid confusion, as `operands` generally includes all operands of an operation.

PiperOrigin-RevId: 368479524
2021-04-14 12:08:23 -07:00
Prashant Kumar 236e7db5c0 PR #47315: [MLIR] Add concatenateOp lowering from lmhlo to Affine.
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/47315

Lowering of `concatenateOp` from lmhlo to Affine is added as part of the
`lhlo-legalize-to-affine` pass.

Signed-off-by: Prashant Kumar <prashantk@polymagelabs.com>
Copybara import of the project:

--
15314e4579f7a6901cf3475eff25962a34772eaf by Prashant Kumar <prashantk@polymagelabs.com>:

[MLIR] Add concatenateOp lowering from lmhlo to Affine.

Lowering of `concatenateOp` from lmhlo to Affine is added as part of the
`lhlo-legalize-to-affine` pass.

Signed-off-by: Prashant Kumar <prashantk@polymagelabs.com>
PiperOrigin-RevId: 368465992
2021-04-14 11:06:38 -07:00
Jacques Pienaar fdd75daed6 Add shape function for MHLO RngNormal and RngUniform
PiperOrigin-RevId: 368276963
2021-04-13 12:59:42 -07:00
Hanhan Wang a3fc99efe0 Add support for lowering mhlo.dynamic_slice to Linalg ops.
PiperOrigin-RevId: 368033540
2021-04-12 10:34:55 -07:00
A. Unique TensorFlower 0ec0a23e61 [MLIR][HLO] Generalize merged witnesses in `move-up-dynamic-broadcasts-for-fusion`
PiperOrigin-RevId: 368012460
2021-04-12 08:55:29 -07:00
A. Unique TensorFlower 6d2209e301 [MLIR][HLO] Canonicalize chained broadcasts
Compose two subsequent `dynamic_broadcast_in_dim` ops into one.

PiperOrigin-RevId: 367630360
2021-04-09 07:35:34 -07:00
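A hedged sketch of the composition described above (shapes and attributes are illustrative): the first broadcast maps the operand's dimension 0 to result dimension 1, the second maps that to dimension 2, so the single composed broadcast uses the composed dimension mapping:

```mlir
// Two chained broadcasts:
%0 = "mhlo.dynamic_broadcast_in_dim"(%arg, %shape0)
    {broadcast_dimensions = dense<[1]> : tensor<1xi64>}
    : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
%1 = "mhlo.dynamic_broadcast_in_dim"(%0, %shape1)
    {broadcast_dimensions = dense<[0, 2]> : tensor<2xi64>}
    : (tensor<?x?xf32>, tensor<3xindex>) -> tensor<?x?x?xf32>

// Composed into a single broadcast to the final shape:
%1 = "mhlo.dynamic_broadcast_in_dim"(%arg, %shape1)
    {broadcast_dimensions = dense<[2]> : tensor<1xi64>}
    : (tensor<?xf32>, tensor<3xindex>) -> tensor<?x?x?xf32>
```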
Hanhan Wang fdb653788c Add support for lowering and/or within mhlo.reduce op body.
PiperOrigin-RevId: 367627034
2021-04-09 07:09:13 -07:00
Adrian Kuegel cc607bc72d Support up to rank 8 in rank specialization for SelectOp.
PiperOrigin-RevId: 367406557
2021-04-08 04:55:41 -07:00
Hanhan Wang c466f08993 Add checks in ReduceWindowOpOnTensorsConversion.
The pattern does not support ops with non-zero padding config. Add a check to
prevent unexpected lowering.

It is not easy to add tests because other patterns convert the body ops,
which causes issues like invalid IR.

PiperOrigin-RevId: 367202450
2021-04-07 05:46:01 -07:00
A. Unique TensorFlower a62382cf1f Integrate LLVM at llvm/llvm-project@0e92cbd6a6
Updates LLVM usage to match
[0e92cbd6a652](https://github.com/llvm/llvm-project/commit/0e92cbd6a652)

PiperOrigin-RevId: 367062980
2021-04-06 12:40:51 -07:00
Rahul Joshi ff2cbfa2ec [MLIR] Add support for representing variadic reduce-window in HLO/LMHLO dialect.
-  Fixed a subset of transformations to handle variadic reduce-window.

PiperOrigin-RevId: 366278650
2021-04-01 10:24:50 -07:00
A. Unique TensorFlower af3bc47a8b Integrate LLVM at llvm/llvm-project@8396aeb07c
Updates LLVM usage to match
[8396aeb07cdd](https://github.com/llvm/llvm-project/commit/8396aeb07cdd)

PiperOrigin-RevId: 366034463
2021-03-31 08:01:34 -07:00
A. Unique TensorFlower bbe0aa204c [MLIR][MHLO] Merge assuming ops with compatible witnesses
PiperOrigin-RevId: 366018349
2021-03-31 06:11:38 -07:00
Adrian Kuegel c8157ba4df Move code from helper struct to the only user.
We don't need the separate helper struct anymore, because it is now only used
in one place.

PiperOrigin-RevId: 366012639
2021-03-31 05:21:34 -07:00
Adrian Kuegel 4033a56750 Add special cases for SelectOp rank specialization.
We now use the same special cases for all ops with arity >= 2.
For binary ops, we now have only one special case if at least one of the
operands has exactly one element. In that case, we reshape both operands to
rank 1. Before, we had separate special cases depending on whether the
left-hand side or the right-hand side has a scalar shape.

PiperOrigin-RevId: 366005835
2021-03-31 04:28:51 -07:00
A. Unique TensorFlower 9206805c58 [MLIR][MHLO] Do not yield results of ops that were moved out of assuming regions
When an op is moved out of an assuming region, we already know statically that
it is independent of the assuming region. Hence, there is no need to yield its
results.

PiperOrigin-RevId: 366001405
2021-03-31 03:50:27 -07:00
A. Unique TensorFlower 8ade5d78c8 [MLIR][MHLO] Move `cstr_broadcastable` and `shape_of` out of `assuming` regions
Add a pattern to move operations out of the assuming op. This is only valid for
constraint-independent ops, like `cstr_broadcastable` and `shape_of` (see the
sketch below). It will eventually allow making assuming regions' constraints
independent of each other so that they can be merged.

PiperOrigin-RevId: 365993145
2021-03-31 02:39:07 -07:00
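A rough before/after sketch, assuming a `shape.assuming` region guarded by a witness %w; since `shape_of` does not depend on the witness, it can be hoisted out of the region:

```mlir
// Before: shape_of is trapped inside the assuming region.
%res = shape.assuming %w -> (tensor<?xindex>) {
  %s = shape.shape_of %arg : tensor<?xf32> -> tensor<?xindex>
  shape.assuming_yield %s : tensor<?xindex>
}

// After: shape_of is hoisted above the region; once nothing
// constraint-dependent remains, the assuming op can be cleaned up.
%s = shape.shape_of %arg : tensor<?xf32> -> tensor<?xindex>
```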
A. Unique TensorFlower eade942635 [MLIR][MHLO] Add pattern to move ops into the assuming region
This will eventually allow making assuming regions' constraints independent
of each other.

PiperOrigin-RevId: 365985081
2021-03-31 01:23:31 -07:00
Geoffrey Martin-Noble 5d65758e8c Canonicalize MHLO Case and If Ops with constant conditions
ReplaceOpWithRegion was taken directly from ScfOps. We should maybe put that somewhere common in core.

PiperOrigin-RevId: 365936724
2021-03-30 17:58:01 -07:00
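A hedged sketch of the If canonicalization above, assuming the 2021-era form in which `mhlo.if` carries explicit branch operands and single-block regions: with a constant predicate, the op reduces to its taken branch with the branch operand substituted for the block argument:

```mlir
%pred = mhlo.constant dense<true> : tensor<i1>
%0 = "mhlo.if"(%pred, %arg0, %arg1) ({
^bb0(%t: tensor<f32>):
  %neg = "mhlo.negate"(%t) : (tensor<f32>) -> tensor<f32>
  "mhlo.return"(%neg) : (tensor<f32>) -> ()
}, {
^bb0(%f: tensor<f32>):
  "mhlo.return"(%f) : (tensor<f32>) -> ()
}) : (tensor<i1>, tensor<f32>, tensor<f32>) -> tensor<f32>

// Canonicalizes to the then-branch body with %arg0 inlined:
%0 = "mhlo.negate"(%arg0) : (tensor<f32>) -> tensor<f32>
```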
Geoffrey Martin-Noble 2fb2a92c6e Verify mhlo.if region return types match op
This matches the behavior of mhlo.case. Additionally, fix the verification of CaseOp in the case of nested ops with mhlo.return-containing regions.

PiperOrigin-RevId: 365936672
2021-03-30 17:57:20 -07:00
Geoffrey Martin-Noble 7a9394dca5 Restrict MHLO control flow ops to single-block regions
PiperOrigin-RevId: 365935824
2021-03-30 17:51:03 -07:00
Adrian Kuegel c1a6ae8994 Generalize the HloBinaryElementwiseAdaptor
We can also use it for ternary ops like Select if we change the signature so
that a ValueRange is passed in.
Also remove special casing for HloComplexAdaptor. It can be handled with the
generic adaptor as well.

PiperOrigin-RevId: 365777493
2021-03-30 03:53:53 -07:00
Adrian Kuegel 6388e8d9ee mlir-hlo-opt: set preloadDialectsInContext to false.
This requires specifying dependent dialects in several passes.

PiperOrigin-RevId: 365758084
2021-03-30 01:07:14 -07:00
A. Unique TensorFlower 85a306d356 [MLIR][MHLO] Add pattern to inline broadcasted shapes
Simplify reasoning about `cstr_broadcastable` ops in the
`mhlo-move-up-dynamic-broadcasts-for-fusion` pass.

PiperOrigin-RevId: 365560893
2021-03-29 06:32:32 -07:00
A. Unique TensorFlower fb819c1de8 [MLIR][MHLO] Apply patterns in MoveUpDynamicBroadcastsForFusionPass greedily
PiperOrigin-RevId: 365556488
2021-03-29 06:02:06 -07:00
Geoffrey Martin-Noble a2b6060c0c Add folder for HLO NotOp
PiperOrigin-RevId: 364989658
2021-03-25 02:08:38 -07:00
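Folders of this kind typically evaluate the op when its operand is a constant; a hypothetical sketch of what the NotOp fold presumably produces:

```mlir
// Not of a constant...
%c = mhlo.constant dense<[true, false]> : tensor<2xi1>
%0 = "mhlo.not"(%c) : (tensor<2xi1>) -> tensor<2xi1>

// ...folds to the negated constant:
%0 = mhlo.constant dense<[false, true]> : tensor<2xi1>
```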
Adrian Kuegel a34aa699f8 Fix tanh lowering for NaN input.
If the input is NaN, the result should be NaN, too.

PiperOrigin-RevId: 364788902
2021-03-24 06:34:36 -07:00
Stella Laurenzo 7f2bf48b8b Integrate LLVM at llvm/llvm-project@b24436ac96
Updates LLVM usage to match
[b24436ac96bd](https://github.com/llvm/llvm-project/commit/b24436ac96bd)

PiperOrigin-RevId: 364615807
2021-03-23 12:20:17 -07:00
A. Unique TensorFlower 8987dfd1d6 [MLIR][HLO] Move broadcasts over n-ary shape-preserving ops
This will open up more fusion opportunities.

PiperOrigin-RevId: 364577231
2021-03-23 09:38:39 -07:00
A. Unique TensorFlower 54f37abc28 [MHLO] Move broadcasts over elementwise ops
Move up dynamic broadcasts and shape computations to allow for more fusion
opportunities.

PiperOrigin-RevId: 364514158
2021-03-23 02:34:41 -07:00
Benjamin Kramer 59fa7c0ef7 [MHLO:linalg] Lower all dynamic broadcasts of static shapes to linalg.generic
We only need the memref_reinterpret_cast if we don't know whether a dimension
gets expanded or not. With static shapes we know that a dimension can only be
expanded if it's a static 1, so lower it in the same way we lower fully
static broadcasts.

PiperOrigin-RevId: 363859181
2021-03-19 03:52:02 -07:00
A. Unique TensorFlower 0c4a89e52c [MLIR][MHLO] Implement shape reification for `dynamic_broadcast_in_dim`
PiperOrigin-RevId: 363622714
2021-03-18 03:39:15 -07:00
Hanhan Wang 2e0ee7759b Add support for lowering mhlo.torch_index_select to Linalg on tensors.
The change upstreams the pattern from IREE repo to MHLO repo.

PiperOrigin-RevId: 363406294
2021-03-17 06:33:41 -07:00
Jacques Pienaar a58e62590e Restrict canonicalization to avoid changing type
Issue #47516

PiperOrigin-RevId: 363300979
2021-03-16 16:54:05 -07:00
A. Unique TensorFlower c54527fe88 Integrate LLVM at llvm/llvm-project@678241795c
Updates LLVM usage to match
[678241795c95](https://github.com/llvm/llvm-project/commit/678241795c95)

PiperOrigin-RevId: 363257913
2021-03-16 13:33:00 -07:00
A. Unique TensorFlower 2be112a603 [MLIR][MHLO] Approximate `tf.Tanh` as constant +/-1 for small/large values
Fix issue raised in https://github.com/tensorflow/tensorflow/issues/47724

PiperOrigin-RevId: 363210296
2021-03-16 10:14:30 -07:00
Jacques Pienaar 3de2024a9b Avoid creating tuple type only for verification
Make the error message a bit more verbose; it is cheaper to verify the elements than to create a (potentially) new type.

PiperOrigin-RevId: 363073909
2021-03-15 17:58:19 -07:00
Hanhan Wang 4f5e1c51dd Add support for lowering NHWC pooling mhlo.reduce_window to Linalg on tensors.
The change upstreams the pattern from IREE repo to MHLO repo.

PiperOrigin-RevId: 362312573
2021-03-11 09:41:34 -08:00
Hanhan Wang 630cabefb0 Add support for lowering 2D depthwise mhlo.conv to Linalg on tensors.
The change upstreams the pattern from IREE repo to MHLO repo.

PiperOrigin-RevId: 362300550
2021-03-11 08:41:38 -08:00
Benjamin Kramer 94f9740c67 [MLIR][HLO:Linalg] Lower mhlo.dynamic_iota to indexed_generic
This is the same as iota, but instead of taking the dimensions from the result
tensor we use the supplied shape extents tensor.

PiperOrigin-RevId: 362298548
2021-03-11 08:31:29 -08:00
Benjamin Kramer 09f8046816 [MLIR:HLO:LINALG] Fix codegen for mhlo.reshape when one side is rank 0
This is an annoying edge case because the collapse->expand lowering expects at
least R1 or it will produce invalid linalg reshapes. Using the direct lowering
works fine.

PiperOrigin-RevId: 362269199
2021-03-11 05:29:56 -08:00
Benjamin Kramer d77b556822 [MLIR][MHLO] Allow recursion in the shape_of mover
This allows it to push shape_of over a chain of ops all the way to the top.

PiperOrigin-RevId: 362249009
2021-03-11 02:52:21 -08:00
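A sketch of the rewrite above, under the assumption that the ops in the chain are shape-preserving elementwise ops, so `shape_of` of the result equals `shape_of` of an operand and can be pushed up repeatedly:

```mlir
// shape_of over a chain of shape-preserving ops...
%0 = mhlo.add %a, %b : tensor<?xf32>
%1 = mhlo.multiply %0, %c : tensor<?xf32>
%s = shape.shape_of %1 : tensor<?xf32> -> tensor<?xindex>

// ...can be rewritten to query the operand at the top of the chain:
%s = shape.shape_of %a : tensor<?xf32> -> tensor<?xindex>
```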
Benjamin Kramer 67a770e4e0 [HLO:MLIR] Make binary op type reification emit shape_of instead of tensor ops
This gives cleaner code and allows shape optimizations to happen on the result.

PiperOrigin-RevId: 362242975
2021-03-11 02:01:35 -08:00
Rahul Joshi 9902e6ee32 [HLO] Add LMHLO CollectivePermute verification.
- Extract verification of the source-target pairs attached to collective permute into a common
  helper function, and use it to verify both MHLO and LMHLO variants.
- Change MlirGpuTestBase::ParseMlirModule to allow returning a failure, and use
  that to update the mlir_gpu_compile_test to check the new behavior.

PiperOrigin-RevId: 362156962
2021-03-10 15:37:12 -08:00
Mahesh Ravishankar b212bd66ae Build fix for missing precision_config.
The conversion from dot_general to dot fails when trying to retrieve
and use the precision config, since precision_config is optional.

PiperOrigin-RevId: 362095296
2021-03-10 11:10:51 -08:00
A. Unique TensorFlower e199df1dbf [MLIR][MHLO] Declare `shape_of` dynamically legal in move-up-dynamic-broadcasts
This allows shape reification to produce `shape_of` ops while they can still be
moved up.

PiperOrigin-RevId: 362075609
2021-03-10 09:59:17 -08:00
A. Unique TensorFlower c217a6ef61 [MHLO] Add pass to move up dynamic broadcasts for fusion
For now, the pass only reifies the required shape computations. Moving
broadcasts will follow to allow for fusion across them.

PiperOrigin-RevId: 362033715
2021-03-10 06:21:57 -08:00
Stephan Herhut cabd4d9a06 Canonicalize a `dynamic_broadcast_in_dim` that broadcasts a tensor to its own shape (with rank narrowing on the shape) into a corresponding `tensor.cast` (sketched below).
PiperOrigin-RevId: 362028291
2021-03-10 05:43:54 -08:00
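A hedged sketch of the pattern with an illustrative type: broadcasting a tensor to its own reified shape changes nothing but the static type information, so it reduces to a cast:

```mlir
// Broadcasting %arg to its own shape...
%s = shape.shape_of %arg : tensor<?x10xf32> -> tensor<2xindex>
%0 = "mhlo.dynamic_broadcast_in_dim"(%arg, %s)
    {broadcast_dimensions = dense<[0, 1]> : tensor<2xi64>}
    : (tensor<?x10xf32>, tensor<2xindex>) -> tensor<?x?xf32>

// ...canonicalizes to a cast that only relaxes the static type:
%0 = tensor.cast %arg : tensor<?x10xf32> to tensor<?x?xf32>
```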
A. Unique TensorFlower 507d9fb61d [MLIR][KernelGen] Add `tf.Polygamma` kernel
PiperOrigin-RevId: 362002943
2021-03-10 02:22:01 -08:00
A. Unique TensorFlower 218476128e [MLIR][KernelGen] Fix zeta lowering at poles
Return NaN at zeta poles, or inf where the limit is defined. Also test the kernel
based on the series representation of zeta.

PiperOrigin-RevId: 361993482
2021-03-10 01:09:10 -08:00
Benjamin Kramer 5be8be31b5 Integrate LLVM at llvm/llvm-project@3f3f88fb95
Updates LLVM usage to match
[3f3f88fb9503](https://github.com/llvm/llvm-project/commit/3f3f88fb9503)

PiperOrigin-RevId: 361762801
2021-03-09 02:19:24 -08:00
A. Unique TensorFlower 55eda81407 [MLIR][HLO] Reify shape extents as `index` values
PiperOrigin-RevId: 361519167
2021-03-08 02:42:47 -08:00
Marius Brehler 29f70cb892 PR #46723: Adjust types of loop counters
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/46723

Reduces some warnings about comparison of integers of different signs.
Copybara import of the project:

--
311f436f77b334f5462127d8cf179cce067969ca by Marius Brehler <marius.brehler@iml.fraunhofer.de>:

Adjust types of loop counters

Reduces some warnings about comparison of integers of different signs.

PiperOrigin-RevId: 360912203
2021-03-04 07:36:12 -08:00
A. Unique TensorFlower 39650a5d5a Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.

PiperOrigin-RevId: 360891955
2021-03-04 05:24:53 -08:00
Adrian Kuegel 62b357b601 Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.

PiperOrigin-RevId: 360881387
2021-03-04 04:04:11 -08:00
Geoffrey Martin-Noble 8687f3e4cf Lower MHLO Dot to type-polymorphic linalg named ops
The linalg named ops are now type polymorphic, so the type-monomorphic
varieties are redundant (and will be deleted soon).

PiperOrigin-RevId: 360509010
2021-03-02 14:00:58 -08:00
Benjamin Kramer 1facbe9eb5 Integrate LLVM at llvm/llvm-project@7f086d74c3
Updates LLVM usage to match
[7f086d74c347](https://github.com/llvm/llvm-project/commit/7f086d74c347)

PiperOrigin-RevId: 360434104
2021-03-02 08:33:21 -08:00
Adrian Kuegel 0683db3b24 Legalize MinimumBroadcastShapes op.
Use it in TransformUnrankedHloPass, which allows reducing the maximum
rank for rank-specialized broadcasts from 6 to 5.

PiperOrigin-RevId: 360415743
2021-03-02 06:39:01 -08:00
Christian Sigg 70ee9369d5 Use mlir::OpState::operator->() to get to Operation::getAttrs().
This is a preparation step to remove getAttrs() from OpState.

PiperOrigin-RevId: 360159716
2021-03-01 04:53:00 -08:00
Benjamin Kramer e19ccf975e Filter static dimensions from dynamic_broadcast_in_dim's init_tensor
Otherwise we'd generate invalid IR for those cases.

PiperOrigin-RevId: 360144122
2021-03-01 03:03:54 -08:00
Adrian Kuegel e6a1f5f0f9 Add MinimumBroadcastShapesOp to chlo dialect.
This op is useful for rank specialization of broadcasts. Kernel Generator
needs to generate one kernel for each rank, so if we can minimize the rank
of the broadcast shape, we can support more cases with the same number of
special-cased kernels.

PiperOrigin-RevId: 360137827
2021-03-01 02:23:52 -08:00
Christian Sigg 2d818c4fd9 Use mlir::OpState::operator->() to get to methods of mlir::Operation.
This is a preparation step to remove those methods from OpState.

PiperOrigin-RevId: 360043992
2021-02-28 09:02:33 -08:00
Hanhan Wang a8f99ee0f5 Fix the shape of linalg.init_tensor in conv op lowering.
The output spatial dims are not the same as the input spatial dims. Only
static output spatial dims are supported for now.

PiperOrigin-RevId: 359775479
2021-02-26 09:34:11 -08:00
Hanhan Wang 90f0d7f935 Add support for lowering mhlo.conv to Linalg on tensors.
This pattern only works for normal convolutions. It does not work for depthwise
convolutions. The Linalg conv ops are defined with static rank, so it only
supports 1d/2d/3d cases, which are the most typical cases.

This also refactors out the same check in lmhlo.conv lowering.

PiperOrigin-RevId: 359503527
2021-02-25 05:59:08 -08:00
Hanhan Wang 45a1249fe2 Add support for lowering mhlo.pad to linalg.pad_tensor
The change upstreams the pattern from IREE repo to MHLO repo.

PiperOrigin-RevId: 359481543
2021-02-25 03:00:39 -08:00
Geoffrey Martin-Noble 89f7f2bd65 Lower integer matmuls to linalg
PiperOrigin-RevId: 359306495
2021-02-24 09:45:07 -08:00
Hanhan Wang 475b4a06a5 Add support for lowering mhlo.slice to subtensor.
PiperOrigin-RevId: 359297978
2021-02-24 09:06:09 -08:00
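A minimal sketch of this lowering, assuming the 2021-era `subtensor` op from the standard dialect (later renamed `tensor.extract_slice`):

```mlir
// A static mhlo.slice...
%0 = "mhlo.slice"(%arg0)
    {start_indices = dense<[0, 1]> : tensor<2xi64>,
     limit_indices = dense<[2, 3]> : tensor<2xi64>,
     strides = dense<[1, 1]> : tensor<2xi64>}
    : (tensor<4x4xf32>) -> tensor<2x2xf32>

// ...lowers to a subtensor with static offsets, sizes, and strides:
%0 = subtensor %arg0[0, 1] [2, 2] [1, 1] : tensor<4x4xf32> to tensor<2x2xf32>
```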
A. Unique TensorFlower ac0552f127 [MLIR][HLO] Remove duplicate `PopulateTransformUnrankedHloPatterns`
PiperOrigin-RevId: 359046173
2021-02-23 07:50:47 -08:00
Rahul Joshi 5adb7c6e12 [MLIR:LHLO] Add optional call target arg mapping to LMHLO CustomCall operations.
- XLA:HLO -> LMHLO conversion drops all token arguments and return values; however,
  custom calls that users write still expect to get buffer pointers for these token types.
- To be able to support this, add an optional call target argument mapping attribute to
  LMHLO custom calls. When this attribute is present, it indicates the number of
  arguments and returns that the custom call expects and also indicates which LMHLO
  arg() or output() maps to which arg or result number of the custom call.

PiperOrigin-RevId: 358826664
2021-02-22 08:43:00 -08:00
Benjamin Kramer a9cc1dcfa0 [mlir][hlo] Add basic rank-specialization for select
This just expands everything to ranked tensors (up to rank 6) and is probably quite slow.
This is sufficient to make kernelgen compile SelectV2.

PiperOrigin-RevId: 358777728
2021-02-22 02:41:12 -08:00
Benjamin Kramer b42def4612 [mlir][hlo] Refactor rank specialization to allow an arbitrary number of inputs
This actually simplifies the code a bit.

PiperOrigin-RevId: 358201038
2021-02-18 09:53:03 -08:00
Benjamin Kramer ca4034b56e [mlir][hlo] Make select ready for dynamic shapes (ranked only for now)
Move tf.SelectV2 broadcast lowering to a chlo.broadcast_select op, and lower it
to broadcasts on mhlo from there.

PiperOrigin-RevId: 358179975
2021-02-18 08:08:40 -08:00
Adrian Kuegel 37e31f8b26 Lower Expm1 kernel to math.ExpM1.
PiperOrigin-RevId: 358152908
2021-02-18 04:54:23 -08:00
Richard Uhler b579bd5d9e Support dynamic-shaped operand in verification of BroadcastInDim.
Verification of HLO_BroadcastInDimOp was previously failing or crashing if the
operand had a dynamic shape or was unranked. Update the verification code to
allow the operand to be unranked or to have a dynamic shape.

PiperOrigin-RevId: 358056793
2021-02-17 16:18:09 -08:00
A. Unique TensorFlower 220deb3709 [MLIR][CHLO] Add legalization for `chlo.polygamma` to MHLO
PiperOrigin-RevId: 357954624
2021-02-17 08:33:01 -08:00
A. Unique TensorFlower c06de24f6c [MLIR][CHLO] Generalize lowering with upcast to n-ary operation
Allows reuse for zeta lowering now and for the polygamma lowering soon.

PiperOrigin-RevId: 357739910
2021-02-16 09:47:24 -08:00
A. Unique TensorFlower 81abaf364d [MLIR][MHLO] Add polygamma op to the CHLO dialect
PiperOrigin-RevId: 357724465
2021-02-16 08:32:33 -08:00
A. Unique TensorFlower 2fe0c33083 Integrate LLVM at llvm/llvm-project@16428a8d91
Updates LLVM usage to match
[16428a8d91a9](https://github.com/llvm/llvm-project/commit/16428a8d91a9)

PiperOrigin-RevId: 357550807
2021-02-15 04:17:58 -08:00
Adrian Kuegel 824bc9c425 Improve broadcast transformation to treat dynamic shapes with 1 element as scalar.
A shape that contains exactly one element is effectively a scalar. This leads
to a speedup in cases where we have a binary op with one operand that is
effectively a scalar, because we can use the fast path.

PiperOrigin-RevId: 357515552
2021-02-14 23:25:41 -08:00
A. Unique TensorFlower 4060a86fe2 Integrate LLVM at llvm/llvm-project@2bfe27da17
Updates LLVM usage to match
[2bfe27da171e](https://github.com/llvm/llvm-project/commit/2bfe27da171e)

PiperOrigin-RevId: 357196336
2021-02-12 08:32:03 -08:00
Alexander Belyaev 36e04d92c0 [KERNEL_GEN] Add a pattern to bufferize `mhlo.reshape(<unranked_tensor>)`.
PiperOrigin-RevId: 356720899
2021-02-10 06:32:21 -08:00
A. Unique TensorFlower 80d753c1fe Integrate LLVM at llvm/llvm-project@f89f6d1e5d
Updates LLVM usage to match
[f89f6d1e5d7d](https://github.com/llvm/llvm-project/commit/f89f6d1e5d7d)

PiperOrigin-RevId: 356265374
2021-02-08 09:47:00 -08:00
Tres Popp d086b8a0ec Correct HLO atan2 lowering in cases of -inf and -0 inputs.
This is done by removing the approximation and lowering to atan2 library calls later, to make the implementation the same as XLA's. Note that if the approximation is brought back later, it can be fixed by changing the IR check `less-than(X, 0)` to `less-than(copysign(X, 1), 0)`.

PiperOrigin-RevId: 356253941
2021-02-08 06:58:04 -08:00
Adrian Kuegel 1c4521cc42 Integrate LLVM at llvm/llvm-project@d1978fa4bf
Updates LLVM usage to match
[d1978fa4bf0d](https://github.com/llvm/llvm-project/commit/d1978fa4bf0d)

PiperOrigin-RevId: 355848094
2021-02-05 07:42:06 -08:00
A. Unique TensorFlower 99bc05f2e4 Integrate LLVM at llvm/llvm-project@91e7a17133
Updates LLVM usage to match
[91e7a1713332](https://github.com/llvm/llvm-project/commit/91e7a1713332)

PiperOrigin-RevId: 355702100
2021-02-04 13:42:31 -08:00
Stephan Herhut 60e1b6882c Add kernel definition for zeta operation.
PiperOrigin-RevId: 355575619
2021-02-04 01:27:43 -08:00
Mahesh Ravishankar 44d0464d16 Use linalg.fill on tensors instead of tensor.generate in MHLO -> Linalg conversion.
linalg.fill on tensors is a structured op that allows using tile + fuse
to reduce the fill overhead.

PiperOrigin-RevId: 355490400
2021-02-03 15:03:49 -08:00
Stephan Herhut 6cd1875ee4 Implement lowering of chlo::zeta to mhlo dialect.
PiperOrigin-RevId: 355395581
2021-02-03 07:50:05 -08:00
A. Unique TensorFlower 3b67b207c4 [MLIR][CHLO] Use CHLO lowering for `is_inf` op
PiperOrigin-RevId: 355189054
2021-02-02 09:53:13 -08:00
A. Unique TensorFlower 0458ae9a22 [MLIR][KernelGen] Add `tf.Digamma` kernels
PiperOrigin-RevId: 355129028
2021-02-02 03:07:39 -08:00