Commit Graph

649 Commits

Author SHA1 Message Date
A. Unique TensorFlower 7cbf8d8878 Integrate LLVM at llvm/llvm-project@83d705adb2
Updates LLVM usage to match
[83d705adb2e0](https://github.com/llvm/llvm-project/commit/83d705adb2e0)

PiperOrigin-RevId: 355258427
2021-02-02 14:53:50 -08:00
A. Unique TensorFlower 04110a4b1c Integrate LLVM at llvm/llvm-project@67dfe9c8d7
Updates LLVM usage to match
[67dfe9c8d70c](https://github.com/llvm/llvm-project/commit/67dfe9c8d70c)

PiperOrigin-RevId: 355235205
2021-02-02 13:09:20 -08:00
A. Unique TensorFlower 3b67b207c4 [MLIR][CHLO] Use CHLO lowering for `is_inf` op
PiperOrigin-RevId: 355189054
2021-02-02 09:53:13 -08:00
Adrian Kuegel 0726ab20fa Integrate LLVM at llvm/llvm-project@4d904776a7
Updates LLVM usage to match
[4d904776a77a](https://github.com/llvm/llvm-project/commit/4d904776a77a)

PiperOrigin-RevId: 355142758
2021-02-02 05:11:40 -08:00
Tres Popp ae722a883f Improve performance of lowered chlo.pow with integers
The new lowering always takes 6 loop iterations, rather than iterating as many times as the exponent's value.

PiperOrigin-RevId: 355131133
2021-02-02 03:28:38 -08:00
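The fixed-trip-count lowering above is exponentiation by squaring: six iterations cover every exponent below 2**6 = 64, which suffices for integer bases because any base with magnitude >= 2 overflows a 32-bit result well before exponent 64 (bases 0 and +/-1 need separate handling, which this sketch ignores). A minimal Python sketch of the idea, with hypothetical names:

```python
def ipow(base: int, exp: int) -> int:
    """Exponentiation by squaring with a fixed trip count.

    The loop runs exactly 6 times regardless of `exp`, mirroring the
    exponent-independent lowering described in the commit message.
    """
    result = 1
    for _ in range(6):      # fixed trip count, independent of the exponent
        if exp & 1:
            result *= base  # fold in the current bit of the exponent
        base *= base        # square the base for the next bit
        exp >>= 1
    return result
```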
A. Unique TensorFlower 0458ae9a22 [MLIR][KernelGen] Add `tf.Digamma` kernels
PiperOrigin-RevId: 355129028
2021-02-02 03:07:39 -08:00
A. Unique TensorFlower f40ccc5b4b [MLIR][CHLO] Add `chlo.digamma` and lowering to MHLO
PiperOrigin-RevId: 355122765
2021-02-02 02:10:17 -08:00
Adrian Kuegel c2115f56c7 Integrate LLVM at llvm/llvm-project@8f7f2c4211
Updates LLVM usage to match
[8f7f2c4211ca](https://github.com/llvm/llvm-project/commit/8f7f2c4211ca)

PiperOrigin-RevId: 355120697
2021-02-02 01:54:32 -08:00
Adrian Kuegel 96f8771ed7 Add MLIR generated kernel for the Angle op.
This also requires a canonicalization pattern to remove a redundant dynamic
reshape from rank 1 to rank 1.

PiperOrigin-RevId: 355113135
2021-02-02 00:47:20 -08:00
A. Unique TensorFlower 9d682343a9 Integrate LLVM at llvm/llvm-project@68f66f37d7
Updates LLVM usage to match
[68f66f37d7d7](https://github.com/llvm/llvm-project/commit/68f66f37d7d7)

PiperOrigin-RevId: 355069626
2021-02-01 18:17:47 -08:00
A. Unique TensorFlower 6c8ad0895b Integrate LLVM at llvm/llvm-project@75347ba1fa
Updates LLVM usage to match
[75347ba1fae2](https://github.com/llvm/llvm-project/commit/75347ba1fae2)

PiperOrigin-RevId: 355031910
2021-02-01 14:51:46 -08:00
A. Unique TensorFlower 16bb35009b Integrate LLVM at llvm/llvm-project@78c22fbce9
Updates LLVM usage to match
[78c22fbce991](https://github.com/llvm/llvm-project/commit/78c22fbce991)

PiperOrigin-RevId: 354999980
2021-02-01 12:28:24 -08:00
A. Unique TensorFlower 816d279be3 [MLIR][CHLO] Simplify conversions with upcast
PiperOrigin-RevId: 354975366
2021-02-01 10:48:10 -08:00
Rahul Joshi 8e3890e8e8 [MLIR:HLO] Add AllGather and AllToAll operations to LMHLO dialect.
- Use a common base class for AllReduce, AllGather, and AllToAll in the ODS spec.
- Add basic verification for replica groups attribute.

PiperOrigin-RevId: 354969654
2021-02-01 10:23:46 -08:00
Adrian Kuegel 7a251aaed4 Integrate LLVM at llvm/llvm-project@49a6502cd5
Updates LLVM usage to match
[49a6502cd5c2](https://github.com/llvm/llvm-project/commit/49a6502cd5c2)

PiperOrigin-RevId: 354936788
2021-02-01 07:43:37 -08:00
Adrian Kuegel 2da9d6947d Integrate LLVM at llvm/llvm-project@e90e455d2a
Updates LLVM usage to match
[e90e455d2a0c](https://github.com/llvm/llvm-project/commit/e90e455d2a0c)

PiperOrigin-RevId: 354922438
2021-02-01 05:57:58 -08:00
Rahul Joshi 1be1123c70 [XLA:GPU] Add support for PartitionId
PiperOrigin-RevId: 354599221
2021-01-29 13:31:54 -08:00
A. Unique TensorFlower b1ce05cfc9 Integrate LLVM at llvm/llvm-project@1c762a81d2
Updates LLVM usage to match
[1c762a81d20f](https://github.com/llvm/llvm-project/commit/1c762a81d20f)

PiperOrigin-RevId: 354567452
2021-01-29 11:04:26 -08:00
A. Unique TensorFlower 2b72ddc6b2 [MLIR][KernelGen] Add `lgamma` kernels
PiperOrigin-RevId: 354519407
2021-01-29 06:14:17 -08:00
Stephan Herhut e61ef86fdb Add zeta and broadcasting_zeta to chlo dialect.
PiperOrigin-RevId: 354500879
2021-01-29 03:22:52 -08:00
A. Unique TensorFlower eb8d5a5e39 Integrate LLVM at llvm/llvm-project@b92a39ac13
Updates LLVM usage to match
[b92a39ac1319](https://github.com/llvm/llvm-project/commit/b92a39ac1319)

PiperOrigin-RevId: 354482705
2021-01-29 00:32:48 -08:00
A. Unique TensorFlower 74b103fedf Integrate LLVM at llvm/llvm-project@081c1db02d
Updates LLVM usage to match
[081c1db02dd2](https://github.com/llvm/llvm-project/commit/081c1db02dd2)

PiperOrigin-RevId: 354470978
2021-01-28 22:34:40 -08:00
A. Unique TensorFlower c915b698b9 Integrate LLVM at llvm/llvm-project@e29552c5af
Updates LLVM usage to match
[e29552c5aff6](https://github.com/llvm/llvm-project/commit/e29552c5aff6)

PiperOrigin-RevId: 354433542
2021-01-28 17:37:31 -08:00
Rahul Joshi c97cff9a7b [XLA:GPU] Migrate AllReduce thunk emission to MLIR.
PiperOrigin-RevId: 354335704
2021-01-28 10:06:06 -08:00
Hanhan Wang 30ce82790d Upstream mhlo.reduce lowering to Linalg to the MHLO repo.
In IREE, we use an indexed generic op to handle the initial value. Here, however,
we lower it to a generic op that carries an init_tensor, and leave the handling
of initialization to later passes.

PiperOrigin-RevId: 354294807
2021-01-28 05:46:09 -08:00
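The structure of that lowering can be sketched outside MLIR: the output buffer starts out filled with the initial value (the init_tensor), and the reduction combiner folds every input element into it. A hedged Python/NumPy illustration with hypothetical names, reducing over the last dimension:

```python
import numpy as np

def reduce_last_dim(operand, init, combiner):
    """Reduction over the trailing dimension, structured like a generic
    op over an init tensor: the output is pre-filled with `init`, and
    the combiner folds each input element into the matching output slot."""
    out = np.full(operand.shape[:-1], init, dtype=operand.dtype)  # the "init_tensor"
    for idx in np.ndindex(operand.shape):
        out[idx[:-1]] = combiner(out[idx[:-1]], operand[idx])
    return out
```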
Lei Zhang 39589add22 Use the correct shape when converting mhlo.reshape
If mhlo.reshape is not purely collapsing some consecutive operand
dimensions into result dimensions, we generate two linalg
reshape ops for it: the first collapses all operand dimensions
into one dimension, and the second expands it to all result
dimensions. In this case, the number of collapsed/expanded dimensions
comes strictly from the operand/result. This differs from the case
where a single linalg reshape suffices; there, the reassociation map
has rank equal to the larger of the operand and result ranks.

PiperOrigin-RevId: 354293826
2021-01-28 05:37:54 -08:00
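The collapse-then-expand pattern is easy to see with NumPy: a shape like (3, 4) -> (2, 6) cannot be formed by merging consecutive source dimensions, so the conversion goes through a fully collapsed 1-D intermediate. A small sketch (hypothetical helper name):

```python
import numpy as np

def two_step_reshape(x, result_shape):
    """Reshape via the two-reshape pattern: first collapse every
    operand dimension into a single dimension, then expand that one
    dimension into every result dimension."""
    collapsed = x.reshape(-1)               # collapse: rank(x) dims -> 1 dim
    return collapsed.reshape(result_shape)  # expand: 1 dim -> rank(result) dims
```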
A. Unique TensorFlower e0a7be7fb1 [MLIR][CHLO] Add `chlo.lgamma` and lowering to `hlo`
PiperOrigin-RevId: 354287316
2021-01-28 04:35:03 -08:00
A. Unique TensorFlower c3ddcd6c7f [MLIR][CHLO] Implement type inference for `is_inf`-like operations in CHLO
PiperOrigin-RevId: 354265834
2021-01-28 01:37:04 -08:00
A. Unique TensorFlower fe2e5a175f [MLIR][HLO] Implement type inference for `is_finite` op
PiperOrigin-RevId: 354261420
2021-01-28 00:56:12 -08:00
A. Unique TensorFlower c653db73c5 Integrate LLVM at llvm/llvm-project@c85b6bf33c
Updates LLVM usage to match
[c85b6bf33c47](https://github.com/llvm/llvm-project/commit/c85b6bf33c47)

PiperOrigin-RevId: 354136678
2021-01-27 11:46:07 -08:00
A. Unique TensorFlower d77c9ad6fa [MLIR][CHLO] Add `is_inf`, `is_pos_inf`, and `is_neg_inf` to CHLO dialect
Also add the respective lowerings to MHLO.

PiperOrigin-RevId: 354101955
2021-01-27 09:00:56 -08:00
Adrian Kuegel f4f728f18e Add SameOperandsAndResultShape trait to ConvertOp and IsFiniteOp.
PiperOrigin-RevId: 354070005
2021-01-27 05:10:01 -08:00
Adrian Kuegel fa059259bc Add template for tf.Cast
Also generate the kernels for all casts between signed integer and float types.
This requires some adaptations to our build macros so that we can also specify the
output type of a kernel.

PiperOrigin-RevId: 354067727
2021-01-27 04:49:55 -08:00
Rahul Joshi 44deae2aa1 [MLIR:HLO] Extend AllReduce to support multiple inputs and results (to model tuples).
- Instead of SameTypeOperands, add custom verification to check that operands and
  results pairwise have the same type.

PiperOrigin-RevId: 353986341
2021-01-26 17:25:22 -08:00
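The pairwise-type check described above replaces a single all-operands-equal constraint with a per-position comparison between operands and results. A Python sketch of that verifier logic, with types represented as plain strings and a hypothetical function name:

```python
def verify_pairwise_types(operand_types, result_types):
    """Return True when the i-th operand and i-th result have the same
    type for every i, as the custom verifier requires. Note that
    operands need not all share one type with each other."""
    if len(operand_types) != len(result_types):
        return False  # tuples must have matching arity
    return all(o == r for o, r in zip(operand_types, result_types))
```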
A. Unique TensorFlower 471fc63c11 Integrate LLVM at llvm/llvm-project@f3f3c9c254
Updates LLVM usage to match
[f3f3c9c2549a](https://github.com/llvm/llvm-project/commit/f3f3c9c2549a)

PiperOrigin-RevId: 353667003
2021-01-25 09:49:38 -08:00
Benjamin Kramer f6b24a6d54 [mlir][hlo] Make min/max always propagate NaNs
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.

PiperOrigin-RevId: 353657779
2021-01-25 09:04:16 -08:00
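The NaN-propagating semantics can be sketched concretely: a plain `x if x >= y else y` returns `y` whenever `x` is NaN, because every comparison against NaN is false, silently dropping the NaN. The propagating variant checks for NaN first. A minimal Python illustration (hypothetical function name):

```python
import math

def max_nan_propagating(x: float, y: float) -> float:
    """Maximum that returns NaN if either input is NaN, matching the
    TF/JAX semantics described in the commit message."""
    if math.isnan(x) or math.isnan(y):
        return math.nan     # propagate NaN instead of picking an operand
    return x if x >= y else y
```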
A. Unique TensorFlower b1438eebcb [mlir][hlo] Make min/max always propagate NaNs
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.

PiperOrigin-RevId: 353628258
2021-01-25 05:43:15 -08:00
Benjamin Kramer 6af4bccfde [mlir][hlo] Make min/max always propagate NaNs
This is the right behavior for TF and JAX and matches what TF does on GPU. It
doesn't match TF on CPU, but that's really a TF bug.

PiperOrigin-RevId: 353624935
2021-01-25 05:15:24 -08:00
A. Unique TensorFlower 0860e8de9b Integrate LLVM at llvm/llvm-project@0057cc5a21
Updates LLVM usage to match
[0057cc5a215e](https://github.com/llvm/llvm-project/commit/0057cc5a215e)

PiperOrigin-RevId: 353618541
2021-01-25 04:16:02 -08:00
A. Unique TensorFlower e192d2a764 Integrate LLVM at llvm/llvm-project@e27197f360
Updates LLVM usage to match
[e27197f36054](https://github.com/llvm/llvm-project/commit/e27197f36054)

PiperOrigin-RevId: 353311722
2021-01-22 13:50:07 -08:00
A. Unique TensorFlower 41c468ca52 Integrate LLVM at llvm/llvm-project@06f8a49693
Updates LLVM usage to match
[06f8a4969395](https://github.com/llvm/llvm-project/commit/06f8a4969395)

PiperOrigin-RevId: 353266312
2021-01-22 10:16:19 -08:00
Tres Popp dfafaac954 Integrate LLVM at llvm/llvm-project@85e7578c6d
Updates LLVM usage to match
[85e7578c6db8](https://github.com/llvm/llvm-project/commit/85e7578c6db8)

PiperOrigin-RevId: 353245217
2021-01-22 08:21:37 -08:00
Tres Popp b865148492 Integrate LLVM at llvm/llvm-project@def99ad68b
Updates LLVM usage to match
[def99ad68bce](https://github.com/llvm/llvm-project/commit/def99ad68bce)

PiperOrigin-RevId: 353222879
2021-01-22 05:30:48 -08:00
A. Unique TensorFlower ae2d46414d [MLIR][KernelGen] Add erfc kernel for f16
PiperOrigin-RevId: 353209468
2021-01-22 03:38:30 -08:00
A. Unique TensorFlower ef8ccdaebc [MLIR] Add mhlo.logistic lowering to linalg
PiperOrigin-RevId: 353205440
2021-01-22 03:03:16 -08:00
A. Unique TensorFlower c846f925d4 [MLIR][KernelGen] Add chlo.erfc lowering for f32
PiperOrigin-RevId: 353201886
2021-01-22 02:33:21 -08:00
A. Unique TensorFlower 56758a9562 [MLIR][KernelGen] Lower mhlo.log_plus_one to std.log1p
PiperOrigin-RevId: 353200069
2021-01-22 02:18:32 -08:00
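This lowering matters numerically: for very small x, computing `1.0 + x` rounds x away entirely in double precision, so the naive `log(1 + x)` loses all significance, while `log1p` computes the same quantity without ever forming `1 + x`. A quick Python demonstration, using `math.log1p` as a stand-in for `std.log1p`:

```python
import math

x = 1e-20
naive = math.log(1.0 + x)   # 1.0 + 1e-20 rounds to exactly 1.0, so this is 0.0
accurate = math.log1p(x)    # ~1e-20: the small argument is preserved
```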
Tres Popp f6bf9d5780 Integrate LLVM at llvm/llvm-project@2cb130f766
Updates LLVM usage to match
[2cb130f76611](https://github.com/llvm/llvm-project/commit/2cb130f76611)

PiperOrigin-RevId: 353194242
2021-01-22 01:32:52 -08:00
Tres Popp 7644a47a69 Integrate LLVM at llvm/llvm-project@6699029b67
Updates LLVM usage to match
[6699029b67bf](https://github.com/llvm/llvm-project/commit/6699029b67bf)

PiperOrigin-RevId: 353177509
2021-01-21 23:00:31 -08:00
Tim Shen d1c785381d [XLA/GPU] Add XLA HLO -> LMHLO conversion to several ops, and implement them in XLA/GPU.
PiperOrigin-RevId: 353158172
2021-01-21 19:57:54 -08:00