Adrian Kuegel
c1a6ae8994
Generalize the HloBinaryElementwiseAdaptor
...
We can also use it for ternary ops like Select if we change the signature so
that a ValueRange is passed in.
Also remove the special-casing for HloComplexAdaptor; it can be handled with the
generic adaptor as well.
PiperOrigin-RevId: 365777493
2021-03-30 03:53:53 -07:00
Adrian Kuegel
6388e8d9ee
mlir-hlo-opt: set preloadDialectsInContext to false.
...
This requires specifying dependent dialects in several passes.
PiperOrigin-RevId: 365758084
2021-03-30 01:07:14 -07:00
A. Unique TensorFlower
85a306d356
[MLIR][MHLO] Add pattern to inline broadcasted shapes
...
Simplify reasoning about `cstr_broadcastable` ops in the
`mhlo-move-up-dynamic-broadcasts-for-fusion` pass.
PiperOrigin-RevId: 365560893
2021-03-29 06:32:32 -07:00
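A minimal before/after sketch of the kind of simplification this enables (the exact pattern may differ; standard shape-dialect syntax assumed):

  // Before: the constraint reasons about an intermediate broadcasted shape.
  %bc = shape.broadcast %s0, %s1 : tensor<?xindex>, tensor<?xindex> -> tensor<?xindex>
  %w  = shape.cstr_broadcastable %bc, %s2 : tensor<?xindex>, tensor<?xindex>
  // After: the broadcast's operands are inlined into one variadic constraint.
  %w2 = shape.cstr_broadcastable %s0, %s1, %s2 : tensor<?xindex>, tensor<?xindex>, tensor<?xindex>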
A. Unique TensorFlower
fb819c1de8
[MLIR][MHLO] Apply patterns in MoveUpDynamicBroadcastsForFusionPass greedily
...
PiperOrigin-RevId: 365556488
2021-03-29 06:02:06 -07:00
Adrian Kuegel
a34aa699f8
Fix tanh lowering for NaN input.
...
If the input is NaN, the result should be NaN, too.
PiperOrigin-RevId: 364788902
2021-03-24 06:34:36 -07:00
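A hypothetical sketch of the guard such a fix needs, written in modern arith/math syntax rather than the exact lowering of the time; math.tanh stands in for the actual polynomial approximation:

  func.func @tanh_nan_guard(%x: f32) -> f32 {
    %approx = math.tanh %x : f32                  // stand-in for the approximation
    %is_nan = arith.cmpf uno, %x, %x : f32        // NaN is the only value unordered with itself
    %r = arith.select %is_nan, %x, %approx : f32  // propagate NaN instead of a clamped value
    return %r : f32
  }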
Stella Laurenzo
7f2bf48b8b
Integrate LLVM at llvm/llvm-project@b24436ac96
...
Updates LLVM usage to match
[b24436ac96bd](https://github.com/llvm/llvm-project/commit/b24436ac96bd)
PiperOrigin-RevId: 364615807
2021-03-23 12:20:17 -07:00
A. Unique TensorFlower
8987dfd1d6
[MLIR][HLO] Move broadcasts over n-ary shape-preserving ops
...
This will open up more fusion opportunities.
PiperOrigin-RevId: 364577231
2021-03-23 09:38:39 -07:00
A. Unique TensorFlower
54f37abc28
[MHLO] Move broadcasts over elementwise ops
...
Move up dynamic broadcasts and shape computations to allow for more fusion
opportunities.
PiperOrigin-RevId: 364514158
2021-03-23 02:34:41 -07:00
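Schematically (generic mhlo syntax, not the exact test case), the rewrite turns broadcast-of-compute into compute-of-broadcasts so neighboring element-wise ops end up adjacent and fusible:

  // Before: the broadcast sits between the two element-wise ops.
  %add = "mhlo.add"(%a, %b) : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>
  %bc  = "mhlo.dynamic_broadcast_in_dim"(%add, %shape) {broadcast_dimensions = dense<1> : tensor<1xi64>}
         : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
  %abs = "mhlo.abs"(%bc) : (tensor<?x?xf32>) -> tensor<?x?xf32>
  // After: operands are broadcast first; add and abs can now fuse.
  %ba   = "mhlo.dynamic_broadcast_in_dim"(%a, %shape) {broadcast_dimensions = dense<1> : tensor<1xi64>}
          : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
  %bb   = "mhlo.dynamic_broadcast_in_dim"(%b, %shape) {broadcast_dimensions = dense<1> : tensor<1xi64>}
          : (tensor<?xf32>, tensor<2xindex>) -> tensor<?x?xf32>
  %add2 = "mhlo.add"(%ba, %bb) : (tensor<?x?xf32>, tensor<?x?xf32>) -> tensor<?x?xf32>
  %abs2 = "mhlo.abs"(%add2) : (tensor<?x?xf32>) -> tensor<?x?xf32>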
Benjamin Kramer
59fa7c0ef7
[MHLO:linalg] Lower all dynamic broadcasts of static shapes to linalg.generic
...
We only need the memref_reinterpret_cast if we don't know whether a dimension
gets expanded or not. With static shapes we know that a dimension can only be
expanded if it's a static 1, so lower it in the same way we lower fully
static broadcasts.
PiperOrigin-RevId: 363859181
2021-03-19 03:52:02 -07:00
Hanhan Wang
2e0ee7759b
Add support for lowering mhlo.torch_index_select to Linalg on tensors.
...
The change upstreams the pattern from IREE repo to MHLO repo.
PiperOrigin-RevId: 363406294
2021-03-17 06:33:41 -07:00
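For reference, a small example of the op being lowered (attribute names per the mhlo op definition):

  // Select rows 2 and 0 of a 5x4 table along dim 0.
  %r = "mhlo.torch_index_select"(%input, %indices) {dim = 0 : i64, batch_dims = 0 : i64}
       : (tensor<5x4xf32>, tensor<2xi64>) -> tensor<2x4xf32>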
A. Unique TensorFlower
c54527fe88
Integrate LLVM at llvm/llvm-project@678241795c
...
Updates LLVM usage to match
[678241795c95](https://github.com/llvm/llvm-project/commit/678241795c95)
PiperOrigin-RevId: 363257913
2021-03-16 13:33:00 -07:00
A. Unique TensorFlower
2be112a603
[MLIR][MHLO] Approximate `tf.Tanh` as constant +/-1 for small/large values
...
Fix issue raised in https://github.com/tensorflow/tensorflow/issues/47724
PiperOrigin-RevId: 363210296
2021-03-16 10:14:30 -07:00
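A minimal sketch of the saturation idea in modern arith/math syntax; the cutoff constant is illustrative, and math.tanh again stands in for the kernel's approximation:

  func.func @tanh_saturated(%x: f32) -> f32 {
    %t = math.tanh %x : f32                     // stand-in for the polynomial approximation
    %one = arith.constant 1.0 : f32
    %cutoff = arith.constant 7.90531 : f32      // illustrative; the kernel picks the bound to match its approximation's valid range
    %absx = math.absf %x : f32
    %is_large = arith.cmpf ogt, %absx, %cutoff : f32
    %signed_one = math.copysign %one, %x : f32  // +/-1, carrying the input's sign
    %r = arith.select %is_large, %signed_one, %t : f32
    return %r : f32
  }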
Hanhan Wang
4f5e1c51dd
Add support for lowering NHWC pooling mhlo.reduce_window to Linalg on tensors.
...
The change upstreams the pattern from IREE repo to MHLO repo.
PiperOrigin-RevId: 362312573
2021-03-11 09:41:34 -08:00
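The shape of op this pattern targets, as a hedged sketch (window attributes assumed for a 2x2, stride-2 max pool):

  %r = "mhlo.reduce_window"(%input, %init) ({
    ^bb0(%a: tensor<f32>, %b: tensor<f32>):
      %m = "mhlo.maximum"(%a, %b) : (tensor<f32>, tensor<f32>) -> tensor<f32>
      "mhlo.return"(%m) : (tensor<f32>) -> ()
  }) {window_dimensions = dense<[1, 2, 2, 1]> : tensor<4xi64>,
      window_strides = dense<[1, 2, 2, 1]> : tensor<4xi64>}
     : (tensor<1x8x8x3xf32>, tensor<f32>) -> tensor<1x4x4x3xf32>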
Hanhan Wang
630cabefb0
Add support for lowering 2D depthwise mhlo.conv to Linalg on tensors.
...
The change upstreams the pattern from IREE repo to MHLO repo.
PiperOrigin-RevId: 362300550
2021-03-11 08:41:38 -08:00
Benjamin Kramer
94f9740c67
[MLIR][HLO:Linalg] Lower mhlo.dynamic_iota to indexed_generic
...
This is the same as iota, but instead of taking the dimensions from the result
tensor, we use the supplied shape extents tensor.
PiperOrigin-RevId: 362298548
2021-03-11 08:31:29 -08:00
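A small sketch of the op (generic mhlo syntax assumed):

  // Like mhlo.iota, but the result extents come from the shape operand.
  %shape = tensor.from_elements %n : tensor<1xindex>
  %iota  = "mhlo.dynamic_iota"(%shape) {iota_dimension = 0 : i64}
           : (tensor<1xindex>) -> tensor<?xf32>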
Benjamin Kramer
09f8046816
[MLIR:HLO:LINALG] Fix codegen for mhlo.reshape when one side is rank 0
...
This is an annoying edge case because the collapse->expand lowering expects at
least R1 or it will produce invalid linalg reshapes. Using the direct lowering
works fine.
PiperOrigin-RevId: 362269199
2021-03-11 05:29:56 -08:00
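The edge case in question, schematically:

  // Collapsing to and expanding from rank 0 takes the direct lowering.
  %s = "mhlo.reshape"(%t) : (tensor<1x1xf32>) -> tensor<f32>
  %u = "mhlo.reshape"(%s) : (tensor<f32>) -> tensor<1xf32>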
Benjamin Kramer
d77b556822
[MLIR][MHLO] Allow recursion in the shape_of mover
...
This allows it to push shape_of over a chain of ops all the way to the top.
PiperOrigin-RevId: 362249009
2021-03-11 02:52:21 -08:00
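Sketch of the recursion on a shape-preserving chain (op choice illustrative):

  %0 = "mhlo.abs"(%arg) : (tensor<?xf32>) -> tensor<?xf32>
  %1 = "mhlo.ceil"(%0) : (tensor<?xf32>) -> tensor<?xf32>
  %s = shape.shape_of %1 : tensor<?xf32> -> tensor<?xindex>
  // ...recursively becomes...
  %s2 = shape.shape_of %arg : tensor<?xf32> -> tensor<?xindex>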
Benjamin Kramer
67a770e4e0
[HLO:MLIR] Make binary op type reification emit shape_of instead of tensor ops
...
This gives cleaner code and allows shape optimizations to happen on the result.
PiperOrigin-RevId: 362242975
2021-03-11 02:01:35 -08:00
Mahesh Ravishankar
b212bd66ae
Build fix for missing precision_config.
...
The conversion from dot_general to dot fails when trying to retrieve
and use the precision config, since precision_config is optional.
PiperOrigin-RevId: 362095296
2021-03-10 11:10:51 -08:00
A. Unique TensorFlower
e199df1dbf
[MLIR][MHLO] Declare `shape_of` dynamically legal in move-up-dynamic-broadcasts
...
This allows shape reification to produce `shape_of` ops that can still be
moved up.
PiperOrigin-RevId: 362075609
2021-03-10 09:59:17 -08:00
A. Unique TensorFlower
c217a6ef61
[MHLO] Add pass to move up dynamic broadcasts for fusion
...
For now, the pass only reifies the required shape computations. Moving
broadcasts will follow to allow for fusion across them.
PiperOrigin-RevId: 362033715
2021-03-10 06:21:57 -08:00
A. Unique TensorFlower
507d9fb61d
[MLIR][KernelGen] Add `tf.Polygamma` kernel
...
PiperOrigin-RevId: 362002943
2021-03-10 02:22:01 -08:00
A. Unique TensorFlower
218476128e
[MLIR][KernelGen] Fix zeta lowering at poles
...
Return NaN at zeta poles, or inf where the limit is defined. Also test the kernel
against the series representation of zeta.
PiperOrigin-RevId: 361993482
2021-03-10 01:09:10 -08:00
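For reference, the Hurwitz zeta series in question; it converges for x > 1 and has its pole at x = 1:

  \zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}, \qquad x > 1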
Benjamin Kramer
5be8be31b5
Integrate LLVM at llvm/llvm-project@3f3f88fb95
...
Updates LLVM usage to match
[3f3f88fb9503](https://github.com/llvm/llvm-project/commit/3f3f88fb9503)
PiperOrigin-RevId: 361762801
2021-03-09 02:19:24 -08:00
Marius Brehler
29f70cb892
PR #46723 : Adjust types of loop counters
...
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/46723
Reduces some warnings about comparison of integers of different signs.
Copybara import of the project:
--
311f436f77b334f5462127d8cf179cce067969ca by Marius Brehler <marius.brehler@iml.fraunhofer.de>:
Adjust types of loop counters
Reduces some warnings about comparison of integers of different signs.
PiperOrigin-RevId: 360912203
2021-03-04 07:36:12 -08:00
A. Unique TensorFlower
39650a5d5a
Remove rank 1 specialization from TransformUnrankedHloPass.
...
For binary ops, we already special-case rank 0 vs. rank 1 and equal shapes, so
we don't need an additional special case for a maximum rank of 1.
PiperOrigin-RevId: 360891955
2021-03-04 05:24:53 -08:00
Adrian Kuegel
62b357b601
Remove rank 1 specialization from TransformUnrankedHloPass.
...
For binary ops, we already special-case rank 0 vs. rank 1 and equal shapes, so
we don't need an additional special case for a maximum rank of 1.
PiperOrigin-RevId: 360881387
2021-03-04 04:04:11 -08:00
Geoffrey Martin-Noble
8687f3e4cf
Lower MHLO Dot to type-polymorphic linalg named ops
...
The linalg named ops are now type polymorphic, so the type-monomorphic
varieties are redundant (and will be deleted soon).
PiperOrigin-RevId: 360509010
2021-03-02 14:00:58 -08:00
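A sketch of what type polymorphism buys (shapes illustrative):

  // One named op now covers mixed-precision cases such as i8 x i8 -> i32.
  %r = linalg.matmul ins(%a, %b : tensor<4x3xi8>, tensor<3x5xi8>)
                     outs(%acc : tensor<4x5xi32>) -> tensor<4x5xi32>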
Benjamin Kramer
1facbe9eb5
Integrate LLVM at llvm/llvm-project@7f086d74c3
...
Updates LLVM usage to match
[7f086d74c347](https://github.com/llvm/llvm-project/commit/7f086d74c347)
PiperOrigin-RevId: 360434104
2021-03-02 08:33:21 -08:00
Adrian Kuegel
0683db3b24
Legalize MinimumBroadcastShapes op.
...
Use it in TransformUnrankedHloPass, which allows reducing the maximum
rank for rank-specialized broadcasts from 6 to 5.
PiperOrigin-RevId: 360415743
2021-03-02 06:39:01 -08:00
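Schematically (generic op syntax assumed), the op takes N shapes and yields N reduced shapes with the same broadcast behavior, which is what lets the maximum specialized rank drop:

  %r:2 = "chlo.minimum_broadcast_shapes"(%s0, %s1)
         : (tensor<?xindex>, tensor<?xindex>) -> (tensor<?xindex>, tensor<?xindex>)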
Christian Sigg
70ee9369d5
Use mlir::OpState::operator->() to get to Operation::getAttrs().
...
This is a preparation step to remove getAttrs() from OpState.
PiperOrigin-RevId: 360159716
2021-03-01 04:53:00 -08:00
Benjamin Kramer
e19ccf975e
Filter static dimensions from dynamic_broadcast_in_dim's init_tensor
...
Otherwise we'd generate invalid IR for those cases.
PiperOrigin-RevId: 360144122
2021-03-01 03:03:54 -08:00
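The constraint being respected, in the init_tensor syntax of the time (%d0 is a given index value):

  // Operands supply only the dynamic extents; static ones live in the type.
  %init = linalg.init_tensor [%d0, 4] : tensor<?x4xf32>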
Christian Sigg
2d818c4fd9
Use mlir::OpState::operator->() to get to methods of mlir::Operation.
...
This is a preparation step to remove those methods from OpState.
PiperOrigin-RevId: 360043992
2021-02-28 09:02:33 -08:00
Hanhan Wang
a8f99ee0f5
Fix the shape of linalg.init_tensor in conv op lowering.
...
The output spatial dims are not the same as the input spatial dims. Only static
output spatial dims are supported for now.
PiperOrigin-RevId: 359775479
2021-02-26 09:34:11 -08:00
Hanhan Wang
90f0d7f935
Add support for lowering mhlo.conv to Linalg on tensors.
...
This pattern only works for normal convolutions; it does not work for depthwise
convolutions. The Linalg conv ops are defined with static rank, so only the
1D/2D/3D cases, which are the most typical, are supported.
This also refactors out the same check in lmhlo.conv lowering.
PiperOrigin-RevId: 359503527
2021-02-25 05:59:08 -08:00
Hanhan Wang
45a1249fe2
Add support for lowering mhlo.pad to linalg.pad_tensor
...
The change upstreams the pattern from IREE repo to MHLO repo.
PiperOrigin-RevId: 359481543
2021-02-25 03:00:39 -08:00
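A small example of the source op (attribute names per the mhlo op definition):

  // Pad a 2x3 tensor by one element on each side, no interior padding: result is 4x5.
  %p = "mhlo.pad"(%t, %pad_val) {
         edge_padding_low = dense<[1, 1]> : tensor<2xi64>,
         edge_padding_high = dense<[1, 1]> : tensor<2xi64>,
         interior_padding = dense<[0, 0]> : tensor<2xi64>}
       : (tensor<2x3xf32>, tensor<f32>) -> tensor<4x5xf32>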
Geoffrey Martin-Noble
89f7f2bd65
Lower integer matmuls to linalg
...
PiperOrigin-RevId: 359306495
2021-02-24 09:45:07 -08:00
Hanhan Wang
475b4a06a5
Add support for lowering mhlo.slice to subtensor.
...
PiperOrigin-RevId: 359297978
2021-02-24 09:06:09 -08:00
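Sketch of the lowering (subtensor was the op of that era, later renamed tensor.extract_slice):

  // Before:
  %s = "mhlo.slice"(%t) {start_indices = dense<[1, 0]> : tensor<2xi64>,
                         limit_indices = dense<[3, 4]> : tensor<2xi64>,
                         strides = dense<[1, 1]> : tensor<2xi64>}
       : (tensor<4x4xf32>) -> tensor<2x4xf32>
  // After: offsets [1, 0], sizes [2, 4], strides [1, 1].
  %s2 = subtensor %t[1, 0] [2, 4] [1, 1] : tensor<4x4xf32> to tensor<2x4xf32>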
A. Unique TensorFlower
ac0552f127
[MLIR][HLO] Remove duplicate `PopulateTransformUnrankedHloPatterns`
...
PiperOrigin-RevId: 359046173
2021-02-23 07:50:47 -08:00
Benjamin Kramer
a9cc1dcfa0
[mlir][hlo] Add basic rank-specialization for select
...
This just blows up everything to ranked (up to 6) and is probably quite slow.
This is sufficient to make kernelgen compile SelectV2.
PiperOrigin-RevId: 358777728
2021-02-22 02:41:12 -08:00
Benjamin Kramer
b42def4612
[mlir][hlo] Refactor rank specialization to allow an arbitrary number of inputs
...
This actually simplifies the code a bit.
PiperOrigin-RevId: 358201038
2021-02-18 09:53:03 -08:00
Benjamin Kramer
ca4034b56e
[mlir][hlo] Make select ready for dynamic shapes (ranked only for now)
...
Move tf.SelectV2 broadcast lowering to a chlo.broadcast_select op, and lower it
to broadcasts on mhlo from there.
PiperOrigin-RevId: 358179975
2021-02-18 08:08:40 -08:00
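Sketch of the new op in generic syntax (types illustrative):

  // The scalar predicate broadcasts against both branches before the select.
  %r = "chlo.broadcast_select"(%pred, %on_true, %on_false)
       : (tensor<i1>, tensor<?x?xf32>, tensor<?x?xf32>) -> tensor<?x?xf32>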
Adrian Kuegel
37e31f8b26
Lower Expm1 kernel to math.ExpM1.
...
PiperOrigin-RevId: 358152908
2021-02-18 04:54:23 -08:00
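The target op, for reference:

  %y = math.expm1 %x : f32  // e^x - 1 without the cancellation exp(x) - 1 suffers near 0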
A. Unique TensorFlower
220deb3709
[MLIR][CHLO] Add legalization for `chlo.polygamma` to MHLO
...
PiperOrigin-RevId: 357954624
2021-02-17 08:33:01 -08:00
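The standard identity such a legalization can lean on relates polygamma to the Hurwitz zeta already available in CHLO (digamma, the n = 0 case, is handled separately):

  \psi^{(n)}(x) = (-1)^{n+1}\, n!\, \zeta(n+1, x), \qquad n \ge 1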
A. Unique TensorFlower
c06de24f6c
[MLIR][CHLO] Generalize lowering with upcast to n-ary operation
...
Allows reuse for zeta lowering now and for the polygamma lowering soon.
PiperOrigin-RevId: 357739910
2021-02-16 09:47:24 -08:00
A. Unique TensorFlower
2fe0c33083
Integrate LLVM at llvm/llvm-project@16428a8d91
...
Updates LLVM usage to match
[16428a8d91a9](https://github.com/llvm/llvm-project/commit/16428a8d91a9)
PiperOrigin-RevId: 357550807
2021-02-15 04:17:58 -08:00
Adrian Kuegel
824bc9c425
Improve broadcast transformation to treat dynamic shapes with 1 element as scalar.
...
A shape that contains exactly one element is effectively a scalar. This leads
to a speedup in cases where we have a binary op with one operand that is
effectively a scalar, because we can use the fast path.
PiperOrigin-RevId: 357515552
2021-02-14 23:25:41 -08:00
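A sketch, in modern shape/arith syntax, of the check such a transformation can use:

  // One total element means the operand broadcasts exactly like a scalar.
  %n = shape.num_elements %s : tensor<?xindex> -> index
  %c1 = arith.constant 1 : index
  %is_scalar = arith.cmpi eq, %n, %c1 : index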
A. Unique TensorFlower
4060a86fe2
Integrate LLVM at llvm/llvm-project@2bfe27da17
...
Updates LLVM usage to match
[2bfe27da171e](https://github.com/llvm/llvm-project/commit/2bfe27da171e)
PiperOrigin-RevId: 357196336
2021-02-12 08:32:03 -08:00
Alexander Belyaev
36e04d92c0
[KERNEL_GEN] Add a pattern to bufferize `mhlo.reshape(<unranked_tensor>)`.
...
PiperOrigin-RevId: 356720899
2021-02-10 06:32:21 -08:00
A. Unique TensorFlower
80d753c1fe
Integrate LLVM at llvm/llvm-project@f89f6d1e5d
...
Updates LLVM usage to match
[f89f6d1e5d7d](https://github.com/llvm/llvm-project/commit/f89f6d1e5d7d)
PiperOrigin-RevId: 356265374
2021-02-08 09:47:00 -08:00