Commit Graph

968 Commits

Author SHA1 Message Date
A. Unique TensorFlower 8066794eea Integrate LLVM at llvm/llvm-project@720a828045
Updates LLVM usage to match
[720a828045e1](https://github.com/llvm/llvm-project/commit/720a828045e1)

PiperOrigin-RevId: 362375825
2021-03-11 14:18:36 -08:00
Hanhan Wang 4f5e1c51dd Add support for lowering NHWC pooling mhlo.reduce_window to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362312573
2021-03-11 09:41:34 -08:00
Hanhan Wang 630cabefb0 Add support for lowering 2D depthwise mhlo.conv to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362300550
2021-03-11 08:41:38 -08:00
Benjamin Kramer 94f9740c67 [MLIR][HLO:Linalg] Lower mhlo.dynamic_iota to indexed_generic
This is the same as iota, but instead of taking the dimensions from the result
tensor we use the supplied shape extents tensor.

PiperOrigin-RevId: 362298548
2021-03-11 08:31:29 -08:00
Benjamin Kramer 09f8046816 [MLIR:HLO:LINALG] Fix codegen for mhlo.reshape when one side is rank 0
This is an annoying edge case because the collapse->expand lowering expects at
least R1 or it will produce invalid linalg reshapes. Using the direct lowering
works fine.

PiperOrigin-RevId: 362269199
2021-03-11 05:29:56 -08:00
Benjamin Kramer d77b556822 [MLIR][MHLO] Allow recursion in the shape_of mover
This allows it to push shape_of over a chain of ops all the way to the top.

PiperOrigin-RevId: 362249009
2021-03-11 02:52:21 -08:00
Benjamin Kramer 67a770e4e0 [HLO:MLIR] Make binary op type reification emit shape_of instead of tensor ops
This gives cleaner code and allows shape optimizations to happen on the result.

PiperOrigin-RevId: 362242975
2021-03-11 02:01:35 -08:00
Rahul Joshi 9902e6ee32 [HLO] Add LMHLO CollectivePermute verification.
- Extract verification of the source-target pairs attached to collective permute into a common
  helper function and use it to verify both the MHLO and LMHLO variants.
- Change MlirGpuTestBase::ParseMlirModule to allow returning a failure, and use
  that to update mlir_gpu_compile_test to check the new behavior.

PiperOrigin-RevId: 362156962
2021-03-10 15:37:12 -08:00
A. Unique TensorFlower 4f16b10ce2 Integrate LLVM at llvm/llvm-project@4c973ae51b
Updates LLVM usage to match
[4c973ae51b85](https://github.com/llvm/llvm-project/commit/4c973ae51b85)

PiperOrigin-RevId: 362116801
2021-03-10 12:36:24 -08:00
Mahesh Ravishankar b212bd66ae Build fix for missing precision_config.
The conversion from dot_general to dot fails when trying to retrieve
and use the precision config, since precision_config is optional.

PiperOrigin-RevId: 362095296
2021-03-10 11:10:51 -08:00
A. Unique TensorFlower e199df1dbf [MLIR][MHLO] Declare `shape_of` dynamically legal in move-up-dynamic-broadcasts
This allows shape reification to produce `shape_of` ops that can still be
moved up.

PiperOrigin-RevId: 362075609
2021-03-10 09:59:17 -08:00
A. Unique TensorFlower c217a6ef61 [MHLO] Add pass to move up dynamic broadcasts for fusion
For now, the pass only reifies the required shape computations. Moving
broadcasts will follow to allow for fusion across them.

PiperOrigin-RevId: 362033715
2021-03-10 06:21:57 -08:00
Stephan Herhut cabd4d9a06 Canonicalize dynamic_broadcast_in_dim to own shape with rank narrowing on the shape to a corresponding tensor.cast.
PiperOrigin-RevId: 362028291
2021-03-10 05:43:54 -08:00
A. Unique TensorFlower 507d9fb61d [MLIR][KernelGen] Add `tf.Polygamma` kernel
PiperOrigin-RevId: 362002943
2021-03-10 02:22:01 -08:00
A. Unique TensorFlower 218476128e [MLIR][KernelGen] Fix zeta lowering at poles
Return nan at zeta poles or inf where the limit is defined. Also test the kernel
based on the series representation of zeta.

PiperOrigin-RevId: 361993482
2021-03-10 01:09:10 -08:00
A. Unique TensorFlower 7629dfdd81 Integrate LLVM at llvm/llvm-project@df6d0579e1
Updates LLVM usage to match
[df6d0579e18e](https://github.com/llvm/llvm-project/commit/df6d0579e18e)

PiperOrigin-RevId: 361855926
2021-03-09 11:27:27 -08:00
Benjamin Kramer 5be8be31b5 Integrate LLVM at llvm/llvm-project@3f3f88fb95
Updates LLVM usage to match
[3f3f88fb9503](https://github.com/llvm/llvm-project/commit/3f3f88fb9503)

PiperOrigin-RevId: 361762801
2021-03-09 02:19:24 -08:00
A. Unique TensorFlower daf6bde6f5 Integrate LLVM at llvm/llvm-project@c9ff39a3f9
Updates LLVM usage to match
[c9ff39a3f984](https://github.com/llvm/llvm-project/commit/c9ff39a3f984)

PiperOrigin-RevId: 361655071
2021-03-08 14:18:24 -08:00
A. Unique TensorFlower 55eda81407 [MLIR][HLO] Reify shape extents as `index` values
PiperOrigin-RevId: 361519167
2021-03-08 02:42:47 -08:00
Benjamin Kramer 5a415de33b Integrate LLVM at llvm/llvm-project@9b302513f6
Updates LLVM usage to match
[9b302513f6d8](https://github.com/llvm/llvm-project/commit/9b302513f6d8)

PiperOrigin-RevId: 361120223
2021-03-05 04:50:04 -08:00
A. Unique TensorFlower d5f80f0469 Integrate LLVM at llvm/llvm-project@cedc53254a
Updates LLVM usage to match
[cedc53254a5d](https://github.com/llvm/llvm-project/commit/cedc53254a5d)

PiperOrigin-RevId: 361090577
2021-03-05 00:37:48 -08:00
Marius Brehler 29f70cb892 PR #46723: Adjust types of loop counters
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/46723

Reduces some warnings about comparison of integers of different signs.
Copybara import of the project:

--
311f436f77b334f5462127d8cf179cce067969ca by Marius Brehler <marius.brehler@iml.fraunhofer.de>:

Adjust types of loop counters

Reduces some warnings about comparison of integers of different signs.

PiperOrigin-RevId: 360912203
2021-03-04 07:36:12 -08:00
Benjamin Kramer 57e9941d5d Integrate LLVM at llvm/llvm-project@b3a33553ae
Updates LLVM usage to match
[b3a33553aec7](https://github.com/llvm/llvm-project/commit/b3a33553aec7)

PiperOrigin-RevId: 360910047
2021-03-04 07:24:01 -08:00
A. Unique TensorFlower 39650a5d5a Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.

PiperOrigin-RevId: 360891955
2021-03-04 05:24:53 -08:00
Benjamin Kramer e5a6706260 Integrate LLVM at llvm/llvm-project@c907681b07
Updates LLVM usage to match
[c907681b077c](https://github.com/llvm/llvm-project/commit/c907681b077c)

PiperOrigin-RevId: 360891677
2021-03-04 05:22:16 -08:00
Adrian Kuegel 62b357b601 Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.

PiperOrigin-RevId: 360881387
2021-03-04 04:04:11 -08:00
Geoffrey Martin-Noble 50a516fb9c Adopt td_library
This avoids needing to list all transitive include dependencies and tracks include directories.

PiperOrigin-RevId: 360779798
2021-03-03 16:11:21 -08:00
Benjamin Kramer 5eac983723 Integrate LLVM at llvm/llvm-project@5d7e0a23c6
Updates LLVM usage to match
[5d7e0a23c6f2](https://github.com/llvm/llvm-project/commit/5d7e0a23c6f2)

PiperOrigin-RevId: 360712976
2021-03-03 11:09:13 -08:00
Benjamin Kramer ab8bc35efd Integrate LLVM at llvm/llvm-project@8da090381d
Updates LLVM usage to match
[8da090381d56](https://github.com/llvm/llvm-project/commit/8da090381d56)

PiperOrigin-RevId: 360684382
2021-03-03 09:10:41 -08:00
Benjamin Kramer bf14340316 Integrate LLVM at llvm/llvm-project@1a4990a4f7
Updates LLVM usage to match
[1a4990a4f71a](https://github.com/llvm/llvm-project/commit/1a4990a4f71a)

PiperOrigin-RevId: 360642978
2021-03-03 04:57:35 -08:00
A. Unique TensorFlower 24c98d5211 Integrate LLVM at llvm/llvm-project@99a6d003ed
Updates LLVM usage to match
[99a6d003edbe](https://github.com/llvm/llvm-project/commit/99a6d003edbe)

PiperOrigin-RevId: 360588460
2021-03-02 21:49:11 -08:00
Geoffrey Martin-Noble 8687f3e4cf Lower MHLO Dot to type-polymorphic linalg named ops
The linalg named ops are now type polymorphic, so the type-monomorphic
varieties are redundant (and will be deleted soon).

PiperOrigin-RevId: 360509010
2021-03-02 14:00:58 -08:00
Benjamin Kramer 1facbe9eb5 Integrate LLVM at llvm/llvm-project@7f086d74c3
Updates LLVM usage to match
[7f086d74c347](https://github.com/llvm/llvm-project/commit/7f086d74c347)

PiperOrigin-RevId: 360434104
2021-03-02 08:33:21 -08:00
Adrian Kuegel 0683db3b24 Legalize MinimumBroadcastShapes op.
Use it in TransformUnrankedHloPass, which allows reducing the maximum
rank for rank-specialized broadcasts from 6 to 5.

PiperOrigin-RevId: 360415743
2021-03-02 06:39:01 -08:00
Jacques Pienaar 329b1fd071 Verify compatible shapes in unpack verification rather than exact
Previously the check was too strict and failed when dynamic and static dims were
compared. Without further information, a dynamic/unknown dim is treated as "maybe equal" to a static value, so at this layer shapes are only flagged as invalid when they truly are incompatible.

PiperOrigin-RevId: 360189086
2021-03-01 08:00:16 -08:00
Christian Sigg 70ee9369d5 Use mlir::OpState::operator->() to get to Operation::getAttrs().
This is a preparation step to remove getAttrs() from OpState.

PiperOrigin-RevId: 360159716
2021-03-01 04:53:00 -08:00
Benjamin Kramer 7c071e8ee6 Integrate LLVM at llvm/llvm-project@99c24f7aa8
Updates LLVM usage to match
[99c24f7aa8cc](https://github.com/llvm/llvm-project/commit/99c24f7aa8cc)

PiperOrigin-RevId: 360150476
2021-03-01 03:44:30 -08:00
Benjamin Kramer e19ccf975e Filter static dimensions from dynamic_broadcast_in_dim's init_tensor
Otherwise we'd generate invalid IR for those cases.

PiperOrigin-RevId: 360144122
2021-03-01 03:03:54 -08:00
Adrian Kuegel e6a1f5f0f9 Add MinimumBroadcastShapesOp to chlo dialect.
This op is useful for rank specialization of broadcasts. Kernel Generator
needs to generate one kernel for each rank, so if we can minimize the rank
of the broadcast shape, we can support more cases with the same number of
special-cased kernels.

PiperOrigin-RevId: 360137827
2021-03-01 02:23:52 -08:00
Christian Sigg 2d818c4fd9 Use mlir::OpState::operator->() to get to methods of mlir::Operation.
This is a preparation step to remove those methods from OpState.

PiperOrigin-RevId: 360043992
2021-02-28 09:02:33 -08:00
A. Unique TensorFlower 006b58c476 Integrate LLVM at llvm/llvm-project@5077d42cfa
Updates LLVM usage to match
[5077d42cfa42](https://github.com/llvm/llvm-project/commit/5077d42cfa42)

PiperOrigin-RevId: 359899172
2021-02-26 22:19:53 -08:00
Hanhan Wang a8f99ee0f5 Fix the shape of linalg.init_tensor in conv op lowering.
The output spatial dims are not the same as the input spatial dims. Only
static output spatial dims are supported for now.

PiperOrigin-RevId: 359775479
2021-02-26 09:34:11 -08:00
A. Unique TensorFlower c616963501 Integrate LLVM at llvm/llvm-project@72b18a86e1
Updates LLVM usage to match
[72b18a86e11e](https://github.com/llvm/llvm-project/commit/72b18a86e11e)

PiperOrigin-RevId: 359762921
2021-02-26 08:22:16 -08:00
A. Unique TensorFlower c68b71b5dc Integrate LLVM at llvm/llvm-project@cb81135f94
Updates LLVM usage to match
[cb81135f94e5](https://github.com/llvm/llvm-project/commit/cb81135f94e5)

PiperOrigin-RevId: 359746866
2021-02-26 06:47:30 -08:00
A. Unique TensorFlower c63ac91206 Integrate LLVM at llvm/llvm-project@bf6380c096
Updates LLVM usage to match
[bf6380c0966b](https://github.com/llvm/llvm-project/commit/bf6380c0966b)

PiperOrigin-RevId: 359728311
2021-02-26 04:05:03 -08:00
A. Unique TensorFlower fe4234bcff Integrate LLVM at llvm/llvm-project@5d64dd8e3c
Updates LLVM usage to match
[5d64dd8e3c22](https://github.com/llvm/llvm-project/commit/5d64dd8e3c22)

PiperOrigin-RevId: 359653225
2021-02-25 17:37:37 -08:00
A. Unique TensorFlower e7a3ec18ad Integrate LLVM at llvm/llvm-project@7f6e331645
Updates LLVM usage to match
[7f6e3316456f](https://github.com/llvm/llvm-project/commit/7f6e3316456f)

PiperOrigin-RevId: 359595260
2021-02-25 13:12:35 -08:00
A. Unique TensorFlower bb22553219 Integrate LLVM at llvm/llvm-project@f4d78a5e3a
Updates LLVM usage to match
[f4d78a5e3aee](https://github.com/llvm/llvm-project/commit/f4d78a5e3aee)

PiperOrigin-RevId: 359543192
2021-02-25 09:40:18 -08:00
Hanhan Wang 90f0d7f935 Add support for lowering mhlo.conv to Linalg on tensors.
This pattern only works for normal convolutions. It does not work for depthwise
convolutions. The Linalg conv ops are defined with static rank, so it only
supports 1d/2d/3d cases, which are the most typical cases.

This also refactors out the same check in lmhlo.conv lowering.

PiperOrigin-RevId: 359503527
2021-02-25 05:59:08 -08:00
Rahul Joshi c5f5d13930 [MLIR] Add dependency from "lhlo_ops_structs_inc_gen" to "lhlo" target.
PiperOrigin-RevId: 359488020
2021-02-25 03:51:36 -08:00