Commit Graph

1079 Commits

Author SHA1 Message Date
A. Unique TensorFlower cd52adb20e Integrate LLVM at llvm/llvm-project@967b64beb4
Updates LLVM usage to match
[967b64beb4bf](https://github.com/llvm/llvm-project/commit/967b64beb4bf)

PiperOrigin-RevId: 363410962
2021-03-17 07:05:09 -07:00
Hanhan Wang 2e0ee7759b Add support for lowering mhlo.torch_index_select to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 363406294
2021-03-17 06:33:41 -07:00
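As a reference for the op being lowered, a minimal numpy sketch of `mhlo.torch_index_select` semantics (assuming `batch_dims = 0`; the shapes and values are made up, and this is not the Linalg pattern itself):

```python
import numpy as np

# torch_index_select gathers slices of `operand` along `dim` using `index`.
operand = np.arange(12.0).reshape(3, 4)   # shape (3, 4)
index = np.array([2, 0])                  # pick rows 2 and 0

# With batch_dims = 0 this is equivalent to np.take along the selected dim.
result = np.take(operand, index, axis=0)  # shape (2, 4)
print(result)
# [[ 8.  9. 10. 11.]
#  [ 0.  1.  2.  3.]]
```
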
A. Unique TensorFlower 1336c95920 Integrate LLVM at llvm/llvm-project@506df1bbfd
Updates LLVM usage to match
[506df1bbfd16](https://github.com/llvm/llvm-project/commit/506df1bbfd16)

PiperOrigin-RevId: 363377317
2021-03-17 02:51:58 -07:00
Jacques Pienaar a58e62590e Restrict canonicalization to avoid changing type
Issue #47516

PiperOrigin-RevId: 363300979
2021-03-16 16:54:05 -07:00
A. Unique TensorFlower caae2525ef Integrate LLVM at llvm/llvm-project@85ab413b53
Updates LLVM usage to match
[85ab413b53ae](https://github.com/llvm/llvm-project/commit/85ab413b53ae)

PiperOrigin-RevId: 363298500
2021-03-16 16:42:27 -07:00
A. Unique TensorFlower c54527fe88 Integrate LLVM at llvm/llvm-project@678241795c
Updates LLVM usage to match
[678241795c95](https://github.com/llvm/llvm-project/commit/678241795c95)

PiperOrigin-RevId: 363257913
2021-03-16 13:33:00 -07:00
A. Unique TensorFlower 2be112a603 [MLIR][MHLO] Approximate `tf.Tanh` as constant +/-1 for small/large values
Fix issue raised in https://github.com/tensorflow/tensorflow/issues/47724

PiperOrigin-RevId: 363210296
2021-03-16 10:14:30 -07:00
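A rough numpy sketch of the piecewise idea, with illustrative thresholds only; the actual lowering uses its own rational approximation and cut-off points:

```python
import numpy as np

def approx_tanh(x, small=1e-4, large=20.0):
    """Piecewise tanh approximation (thresholds are illustrative only).

    * |x| < small:  tanh(x) ~ x            (avoids precision loss near 0)
    * |x| > large:  tanh(x) ~ sign(x) * 1  (the constant +/-1 branch)
    * otherwise:    fall back to an accurate formula.
    """
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < small, x,
                    np.where(np.abs(x) > large, np.sign(x), np.tanh(x)))

print(approx_tanh([1e-6, 0.5, 100.0, -100.0]))  # ~[1e-06, 0.462, 1.0, -1.0]
```
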
Jacques Pienaar 3de2024a9b Avoid creating tuple type only for verification
Make the error message a bit more verbose; it is also cheaper to verify the elements than to create a (potentially) new type.

PiperOrigin-RevId: 363073909
2021-03-15 17:58:19 -07:00
A. Unique TensorFlower 01d729d35d Integrate LLVM at llvm/llvm-project@6878be5dc3
Updates LLVM usage to match
[6878be5dc3ec](https://github.com/llvm/llvm-project/commit/6878be5dc3ec)

PiperOrigin-RevId: 362984365
2021-03-15 11:14:24 -07:00
A. Unique TensorFlower 570d29d643 Integrate LLVM at llvm/llvm-project@e9e788d145
Updates LLVM usage to match
[e9e788d145f5](https://github.com/llvm/llvm-project/commit/e9e788d145f5)

PiperOrigin-RevId: 362834717
2021-03-14 16:16:45 -07:00
Tim Shen d16860d26d [MLIR] Change LMHLO Conditional and While to capture needed buffers, instead of passing them by operands.
This is consistent with the design of LMHLO FusionOp and simplifies usage. Before the change, those redundant operands ended up unused, since all sub-regions can already capture the buffers they need.

PiperOrigin-RevId: 362381155
2021-03-11 14:42:41 -08:00
A. Unique TensorFlower 8066794eea Integrate LLVM at llvm/llvm-project@720a828045
Updates LLVM usage to match
[720a828045e1](https://github.com/llvm/llvm-project/commit/720a828045e1)

PiperOrigin-RevId: 362375825
2021-03-11 14:18:36 -08:00
Hanhan Wang 4f5e1c51dd Add support for lowering NHWC pooling mhlo.reduce_window to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362312573
2021-03-11 09:41:34 -08:00
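For context, a small numpy sketch of the NHWC max-pooling flavour of `mhlo.reduce_window` (window over H and W, stride 1, no padding; all of these are assumptions for the example, not the extent of what the pattern supports):

```python
import numpy as np

def max_pool_nhwc(x, window_h=2, window_w=2):
    """reduce_window with a `max` body over the spatial dims of an NHWC tensor."""
    n, h, w, c = x.shape
    out = np.full((n, h - window_h + 1, w - window_w + 1, c), -np.inf)
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            # Reduce each (window_h, window_w) patch with the `max` body.
            out[:, i, j, :] = x[:, i:i + window_h, j:j + window_w, :].max(axis=(1, 2))
    return out

x = np.random.rand(1, 4, 4, 3)
print(max_pool_nhwc(x).shape)  # (1, 3, 3, 3)
```
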
Hanhan Wang 630cabefb0 Add support for lowering 2D depthwise mhlo.conv to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362300550
2021-03-11 08:41:38 -08:00
Benjamin Kramer 94f9740c67 [MLIR][HLO:Linalg] Lower mhlo.dynamic_iota to indexed_generic
This is the same as iota, but instead of taking the dimensions from the result
tensor we use the supplied shape extents tensor.

PiperOrigin-RevId: 362298548
2021-03-11 08:31:29 -08:00
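A short numpy sketch of the iota/dynamic_iota semantics referred to above, where the `shape` argument stands in for the dynamic variant's shape-extents operand:

```python
import numpy as np

def dynamic_iota(shape, iota_dimension):
    """Tensor whose entries equal their index along `iota_dimension`.

    `shape` plays the role of the runtime shape-extents operand; the static
    `iota` op would take the same dimensions from its result type instead.
    """
    out = np.arange(shape[iota_dimension])
    # Broadcast the 1-D index vector across all other dimensions.
    view = [1] * len(shape)
    view[iota_dimension] = shape[iota_dimension]
    return np.broadcast_to(out.reshape(view), shape)

print(dynamic_iota((2, 3), iota_dimension=1))
# [[0 1 2]
#  [0 1 2]]
```
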
Benjamin Kramer 09f8046816 [MLIR:HLO:LINALG] Fix codegen for mhlo.reshape when one side is rank 0
This is an annoying edge case because the collapse->expand lowering expects at
least R1 or it will produce invalid linalg reshapes. Using the direct lowering
works fine.

PiperOrigin-RevId: 362269199
2021-03-11 05:29:56 -08:00
Benjamin Kramer d77b556822 [MLIR][MHLO] Allow recursion in the shape_of mover
This allows it to push shape_of over a chain of ops all the way to the top.

PiperOrigin-RevId: 362249009
2021-03-11 02:52:21 -08:00
Benjamin Kramer 67a770e4e0 [HLO:MLIR] Make binary op type reification emit shape_of instead of tensor ops
This gives cleaner code and allows shape optimizations to happen on the result.

PiperOrigin-RevId: 362242975
2021-03-11 02:01:35 -08:00
Rahul Joshi 9902e6ee32 [HLO] Add LMHLO CollectivePermute verification.
- Extract verification of source target pairs attached to collective permute into a common
  helper function and use that to verify both MHLO and LMHLO variants.
- Change MlirGpuTestBase::ParseMlirModule to allow returning a failure, and use
  that to update the mlir_gpu_compile_test to check the new behavior.

PiperOrigin-RevId: 362156962
2021-03-10 15:37:12 -08:00
A. Unique TensorFlower 4f16b10ce2 Integrate LLVM at llvm/llvm-project@4c973ae51b
Updates LLVM usage to match
[4c973ae51b85](https://github.com/llvm/llvm-project/commit/4c973ae51b85)

PiperOrigin-RevId: 362116801
2021-03-10 12:36:24 -08:00
Mahesh Ravishankar b212bd66ae Build fix for missing precision_config.
The conversion from dot_general to dot fails when trying to retrieve
and use the precision config, since precision_config is optional.

PiperOrigin-RevId: 362095296
2021-03-10 11:10:51 -08:00
A. Unique TensorFlower e199df1dbf [MLIR][MHLO] Declare `shape_of` dynamically legal in move-up-dynamic-broadcasts
This allows shape reification to produce `shape_of` ops while they can still be
moved up.

PiperOrigin-RevId: 362075609
2021-03-10 09:59:17 -08:00
A. Unique TensorFlower c217a6ef61 [MHLO] Add pass to move up dynamic broadcasts for fusion
For now, the pass only reifies the required shape computations. Moving
broadcasts will follow to allow for fusion across them.

PiperOrigin-RevId: 362033715
2021-03-10 06:21:57 -08:00
Stephan Herhut cabd4d9a06 Canonicalize dynamic_broadcast_in_dim to its own shape (with rank narrowing on the shape) into a corresponding tensor.cast.
PiperOrigin-RevId: 362028291
2021-03-10 05:43:54 -08:00
A. Unique TensorFlower 507d9fb61d [MLIR][KernelGen] Add `tf.Polygamma` kernel
PiperOrigin-RevId: 362002943
2021-03-10 02:22:01 -08:00
A. Unique TensorFlower 218476128e [MLIR][KernelGen] Fix zeta lowering at poles
Return nan at zeta poles or inf where the limit is defined. Also test the kernel
based on the series representation of zeta.

PiperOrigin-RevId: 361993482
2021-03-10 01:09:10 -08:00
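For reference, the series representation mentioned above, as a naive numpy sketch of the Hurwitz zeta ζ(x, q) = Σₙ (q + n)⁻ˣ; the truncation length is arbitrary, and the pole handling at x = 1 is deliberately omitted:

```python
import numpy as np

def hurwitz_zeta_series(x, q, terms=100000):
    """Naive truncated series for the Hurwitz zeta function: sum of (q + n)^(-x).

    Only a slow reference for comparison; it does not implement the pole
    handling (x == 1) or the accelerated evaluation a real kernel would use.
    """
    n = np.arange(terms, dtype=np.float64)
    return np.sum((q + n) ** (-x))

# zeta(2, 1) is the Riemann zeta at 2, i.e. pi^2 / 6 ~= 1.6449.
print(hurwitz_zeta_series(2.0, 1.0))
```
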
A. Unique TensorFlower 7629dfdd81 Integrate LLVM at llvm/llvm-project@df6d0579e1
Updates LLVM usage to match
[df6d0579e18e](https://github.com/llvm/llvm-project/commit/df6d0579e18e)

PiperOrigin-RevId: 361855926
2021-03-09 11:27:27 -08:00
Benjamin Kramer 5be8be31b5 Integrate LLVM at llvm/llvm-project@3f3f88fb95
Updates LLVM usage to match
[3f3f88fb9503](https://github.com/llvm/llvm-project/commit/3f3f88fb9503)

PiperOrigin-RevId: 361762801
2021-03-09 02:19:24 -08:00
A. Unique TensorFlower daf6bde6f5 Integrate LLVM at llvm/llvm-project@c9ff39a3f9
Updates LLVM usage to match
[c9ff39a3f984](https://github.com/llvm/llvm-project/commit/c9ff39a3f984)

PiperOrigin-RevId: 361655071
2021-03-08 14:18:24 -08:00
A. Unique TensorFlower 55eda81407 [MLIR][HLO] Reify shape extents as `index` values
PiperOrigin-RevId: 361519167
2021-03-08 02:42:47 -08:00
Benjamin Kramer 5a415de33b Integrate LLVM at llvm/llvm-project@9b302513f6
Updates LLVM usage to match
[9b302513f6d8](https://github.com/llvm/llvm-project/commit/9b302513f6d8)

PiperOrigin-RevId: 361120223
2021-03-05 04:50:04 -08:00
A. Unique TensorFlower d5f80f0469 Integrate LLVM at llvm/llvm-project@cedc53254a
Updates LLVM usage to match
[cedc53254a5d](https://github.com/llvm/llvm-project/commit/cedc53254a5d)

PiperOrigin-RevId: 361090577
2021-03-05 00:37:48 -08:00
Marius Brehler 29f70cb892 PR #46723: Adjust types of loop counters
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/46723

Reduces some warnings about comparison of integers of different signs.
Copybara import of the project:

--
311f436f77b334f5462127d8cf179cce067969ca by Marius Brehler <marius.brehler@iml.fraunhofer.de>:

Adjust types of loop counters

Reduces some warnings about comparison of integers of different signs.

PiperOrigin-RevId: 360912203
2021-03-04 07:36:12 -08:00
Benjamin Kramer 57e9941d5d Integrate LLVM at llvm/llvm-project@b3a33553ae
Updates LLVM usage to match
[b3a33553aec7](https://github.com/llvm/llvm-project/commit/b3a33553aec7)

PiperOrigin-RevId: 360910047
2021-03-04 07:24:01 -08:00
A. Unique TensorFlower 39650a5d5a Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.

PiperOrigin-RevId: 360891955
2021-03-04 05:24:53 -08:00
Benjamin Kramer e5a6706260 Integrate LLVM at llvm/llvm-project@c907681b07
Updates LLVM usage to match
[c907681b077c](https://github.com/llvm/llvm-project/commit/c907681b077c)

PiperOrigin-RevId: 360891677
2021-03-04 05:22:16 -08:00
Adrian Kuegel 62b357b601 Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs rank 1, and same shape. So we
don't need to special-case a maximum rank of 1.

PiperOrigin-RevId: 360881387
2021-03-04 04:04:11 -08:00
Geoffrey Martin-Noble 50a516fb9c Adopt td_library
This avoids needing to list all transitive include dependencies and tracks include directories.

PiperOrigin-RevId: 360779798
2021-03-03 16:11:21 -08:00
Benjamin Kramer 5eac983723 Integrate LLVM at llvm/llvm-project@5d7e0a23c6
Updates LLVM usage to match
[5d7e0a23c6f2](https://github.com/llvm/llvm-project/commit/5d7e0a23c6f2)

PiperOrigin-RevId: 360712976
2021-03-03 11:09:13 -08:00
Benjamin Kramer ab8bc35efd Integrate LLVM at llvm/llvm-project@8da090381d
Updates LLVM usage to match
[8da090381d56](https://github.com/llvm/llvm-project/commit/8da090381d56)

PiperOrigin-RevId: 360684382
2021-03-03 09:10:41 -08:00
Benjamin Kramer bf14340316 Integrate LLVM at llvm/llvm-project@1a4990a4f7
Updates LLVM usage to match
[1a4990a4f71a](https://github.com/llvm/llvm-project/commit/1a4990a4f71a)

PiperOrigin-RevId: 360642978
2021-03-03 04:57:35 -08:00
A. Unique TensorFlower 24c98d5211 Integrate LLVM at llvm/llvm-project@99a6d003ed
Updates LLVM usage to match
[99a6d003edbe](https://github.com/llvm/llvm-project/commit/99a6d003edbe)

PiperOrigin-RevId: 360588460
2021-03-02 21:49:11 -08:00
Geoffrey Martin-Noble 8687f3e4cf Lower MHLO Dot to type-polymorphic linalg named ops
The linalg named ops are now type polymorphic, so the type-monomorphic
varieties are redundant (and will be deleted soon).

PiperOrigin-RevId: 360509010
2021-03-02 14:00:58 -08:00
Benjamin Kramer 1facbe9eb5 Integrate LLVM at llvm/llvm-project@7f086d74c3
Updates LLVM usage to match
[7f086d74c347](https://github.com/llvm/llvm-project/commit/7f086d74c347)

PiperOrigin-RevId: 360434104
2021-03-02 08:33:21 -08:00
Adrian Kuegel 0683db3b24 Legalize MinimumBroadcastShapes op.
Use it in TransformUnrankedHloPass, which allows reducing the maximum
rank for rank-specialized broadcasts from 6 to 5.

PiperOrigin-RevId: 360415743
2021-03-02 06:39:01 -08:00
Jacques Pienaar 329b1fd071 Verify compatible shapes in unpack verification rather than exact
Previously this would be too strict and fail if dynamic and static dims were
compared. Dynamic/unknown dims are treated as "maybe equal" to a static value without further info, so at this layer don't flag them as invalid unless they truly are.

PiperOrigin-RevId: 360189086
2021-03-01 08:00:16 -08:00
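A small sketch of the compatibility rule being described, treating a dynamic dimension (written as `None` here) as possibly equal to any static size; this mirrors the general MLIR notion of compatible shapes rather than the verifier code itself:

```python
from typing import Optional, Sequence

def dims_compatible(a: Optional[int], b: Optional[int]) -> bool:
    """Two dims are compatible unless both are static and differ.

    None stands in for a dynamic/unknown extent ('?' in MLIR types).
    """
    return a is None or b is None or a == b

def shapes_compatible(lhs: Sequence[Optional[int]],
                      rhs: Sequence[Optional[int]]) -> bool:
    return len(lhs) == len(rhs) and all(
        dims_compatible(a, b) for a, b in zip(lhs, rhs))

print(shapes_compatible([None, 4], [3, 4]))  # True: '?' may equal 3
print(shapes_compatible([2, 4], [3, 4]))     # False: 2 != 3
```
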
Christian Sigg 70ee9369d5 Use mlir::OpState::operator->() to get to Operation::getAttrs().
This is a preparation step to remove getAttrs() from OpState.

PiperOrigin-RevId: 360159716
2021-03-01 04:53:00 -08:00
Benjamin Kramer 7c071e8ee6 Integrate LLVM at llvm/llvm-project@99c24f7aa8
Updates LLVM usage to match
[99c24f7aa8cc](https://github.com/llvm/llvm-project/commit/99c24f7aa8cc)

PiperOrigin-RevId: 360150476
2021-03-01 03:44:30 -08:00
Benjamin Kramer e19ccf975e Filter static dimensions from dynamic_broadcast_in_dim's init_tensor
Otherwise we'd generate invalid IR for those cases.

PiperOrigin-RevId: 360144122
2021-03-01 03:03:54 -08:00
Adrian Kuegel e6a1f5f0f9 Add MinimumBroadcastShapesOp to chlo dialect.
This op is useful for rank specialization of broadcasts. Kernel Generator
needs to generate one kernel for each rank, so if we can minimize the rank
of the broadcast shape, we can support more cases with the same number of
special-cased kernels.

PiperOrigin-RevId: 360137827
2021-03-01 02:23:52 -08:00
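A much-simplified sketch of the rank-minimization idea: adjacent dimensions on which both operands agree broadcast as one unit and can be merged, lowering the rank a specialized kernel must handle. The real op also handles size-1 broadcasting and any number of operands; the helper below is only an illustration:

```python
def collapse_equal_adjacent_dims(lhs, rhs):
    """Greatly simplified version of the idea behind MinimumBroadcastShapesOp.

    Adjacent dimensions in which both shapes agree broadcast as a unit, so
    they can be merged into one larger dimension. The broadcast result is the
    same up to a reshape, but a rank-specialized kernel sees a lower rank.
    """
    new_lhs, new_rhs = [], []
    for l, r in zip(lhs, rhs):
        if new_lhs and l == r and new_lhs[-1] == new_rhs[-1]:
            # Fold this dimension into the previous non-broadcasting block.
            new_lhs[-1] *= l
            new_rhs[-1] *= r
        else:
            new_lhs.append(l)
            new_rhs.append(r)
    return new_lhs, new_rhs

# [2, 3, 4] vs [1, 3, 4]: the trailing (3, 4) block is common, so rank 3 -> rank 2.
print(collapse_equal_adjacent_dims([2, 3, 4], [1, 3, 4]))  # ([2, 12], [1, 12])
```
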