Commit Graph

284 Commits

Author SHA1 Message Date
Hanhan Wang 475b4a06a5 Add support for lowering mhlo.slice to subtensor.
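A minimal sketch of the rewrite this enables, assuming made-up shapes and the std `subtensor` op of the time as the lowering target:

```mlir
// A static mhlo.slice with explicit start/limit/stride indices ...
%0 = "mhlo.slice"(%arg0) {
  start_indices = dense<[1, 0]> : tensor<2xi64>,
  limit_indices = dense<[3, 4]> : tensor<2xi64>,
  strides = dense<[1, 1]> : tensor<2xi64>
} : (tensor<4x4xf32>) -> tensor<2x4xf32>

// ... maps to a subtensor with the same offsets, sizes, and strides.
%1 = subtensor %arg0[1, 0] [2, 4] [1, 1] : tensor<4x4xf32> to tensor<2x4xf32>
```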
PiperOrigin-RevId: 359297978
2021-02-24 09:06:09 -08:00
A. Unique TensorFlower ac0552f127 [MLIR][HLO] Remove duplicate `PopulateTransformUnrankedHloPatterns`
PiperOrigin-RevId: 359046173
2021-02-23 07:50:47 -08:00
Rahul Joshi 5adb7c6e12 [MLIR:LHLO] Add optional call target arg mapping to LMHLO CustomCall operations.
- XLA:HLO -> LMHLO conversion drops all token arguments and return values; however,
  custom calls that users write still expect to get buffer pointers for these token types.
- To be able to support this, add an optional call target argument mapping attribute to
  LMHLO custom calls. When this attribute is present, it indicates the number of
  arguments and results that the custom call target expects, and which LMHLO
  arg() or output() maps to which arg or result number of the custom call (see the
  sketch below).
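A hypothetical IR sketch of the idea (attribute names, fields, and syntax here are illustrative, not the exact ODS definition): the target expects two arguments (a buffer and a token) and one result, but the token was dropped during HLO -> LMHLO conversion, so only the buffers appear on the LMHLO op:

```mlir
"lmhlo.custom_call"(%arg_buf, %out_buf) {
  call_target_name = "my_target",        // hypothetical target name
  target_arg_mapping = {
    num_args = 2 : i64,                  // args the target expects, including the dropped token
    num_results = 1 : i64,
    args_to_target_args = [0],           // lmhlo arg 0 -> target arg 0; target arg 1 (token) unmapped
    results_to_target_results = [0]      // lmhlo output 0 -> target result 0
  }
} : (memref<8xf32>, memref<8xf32>) -> ()
```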

PiperOrigin-RevId: 358826664
2021-02-22 08:43:00 -08:00
Benjamin Kramer a9cc1dcfa0 [mlir][hlo] Add basic rank-specialization for select
This just blows everything up to ranked tensors (up to rank 6) and is probably quite slow.
This is sufficient to make kernelgen compile SelectV2.

PiperOrigin-RevId: 358777728
2021-02-22 02:41:12 -08:00
Benjamin Kramer b42def4612 [mlir][hlo] Refactor rank specialization to allow an arbitrary number of inputs
This actually simplifies the code a bit.

PiperOrigin-RevId: 358201038
2021-02-18 09:53:03 -08:00
Benjamin Kramer ca4034b56e [mlir][hlo] Make select ready for dynamic shapes (ranked only for now)
Move tf.SelectV2 broadcast lowering to a chlo.broadcast_select op, and lower it
to broadcasts on mhlo from there.
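A rough sketch of the intent, assuming a scalar predicate and made-up dynamic shapes:

```mlir
// Implicitly broadcasting select.
%0 = "chlo.broadcast_select"(%pred, %on_true, %on_false)
    : (tensor<i1>, tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>

// Conceptual mhlo-level expansion: make the broadcast explicit, then select.
%shape = shape.shape_of %on_true : tensor<?xf32> -> tensor<1xindex>
%pred_b = "mhlo.dynamic_broadcast_in_dim"(%pred, %shape)
    {broadcast_dimensions = dense<> : tensor<0xi64>}
    : (tensor<i1>, tensor<1xindex>) -> tensor<?xi1>
%1 = "mhlo.select"(%pred_b, %on_true, %on_false)
    : (tensor<?xi1>, tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>
```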

PiperOrigin-RevId: 358179975
2021-02-18 08:08:40 -08:00
Adrian Kuegel 37e31f8b26 Lower Expm1 kernel to math.ExpM1.
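(In assembly form this is the `math.expm1` op, e.g. `%0 = math.expm1 %arg0 : f32`.)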
PiperOrigin-RevId: 358152908
2021-02-18 04:54:23 -08:00
Richard Uhler b579bd5d9e Support dynamic-shaped operand in verification of BroadcastInDim.
Verification of HLO_BroadcastInDimOp was previously failing or crashing if the
operand had a dynamic shape or was unranked. Update the verification code to
allow the operand to be unranked or to have a dynamic shape.
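For example, IR along these lines (made-up shapes) is now accepted by the verifier:

```mlir
// Dynamic operand dimension: its size cannot be checked statically, so it is allowed.
%0 = "mhlo.broadcast_in_dim"(%arg0)
    {broadcast_dimensions = dense<[0, 1]> : tensor<2xi64>}
    : (tensor<?x1xf32>) -> tensor<4x3xf32>

// Unranked operand: rank-dependent checks are skipped.
%1 = "mhlo.broadcast_in_dim"(%arg1)
    {broadcast_dimensions = dense<0> : tensor<1xi64>}
    : (tensor<*xf32>) -> tensor<4xf32>
```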

PiperOrigin-RevId: 358056793
2021-02-17 16:18:09 -08:00
A. Unique TensorFlower 220deb3709 [MLIR][CHLO] Add legalization for `chlo.polygamma` to MHLO
PiperOrigin-RevId: 357954624
2021-02-17 08:33:01 -08:00
A. Unique TensorFlower c06de24f6c [MLIR][CHLO] Generalize lowering with upcast to n-ary operation
Allows reuse for zeta lowering now and for the polygamma lowering soon.

PiperOrigin-RevId: 357739910
2021-02-16 09:47:24 -08:00
A. Unique TensorFlower 81abaf364d [MLIR][MHLO] Add polygamma op to the CHLO dialect
PiperOrigin-RevId: 357724465
2021-02-16 08:32:33 -08:00
A. Unique TensorFlower 2fe0c33083 Integrate LLVM at llvm/llvm-project@16428a8d91
Updates LLVM usage to match
[16428a8d91a9](https://github.com/llvm/llvm-project/commit/16428a8d91a9)

PiperOrigin-RevId: 357550807
2021-02-15 04:17:58 -08:00
Adrian Kuegel 824bc9c425 Improve broadcast transformation to treat dynamic shapes with 1 element as scalar.
A shape that contains exactly one element is effectively a scalar. This leads
to a speedup in cases where we have a binary op with one operand that is
effectively a scalar, because we can use the fast path.

PiperOrigin-RevId: 357515552
2021-02-14 23:25:41 -08:00
A. Unique TensorFlower 4060a86fe2 Integrate LLVM at llvm/llvm-project@2bfe27da17
Updates LLVM usage to match
[2bfe27da171e](https://github.com/llvm/llvm-project/commit/2bfe27da171e)

PiperOrigin-RevId: 357196336
2021-02-12 08:32:03 -08:00
Alexander Belyaev 36e04d92c0 [KERNEL_GEN] Add a pattern to bufferize `mhlo.reshape(<unranked_tensor>)`.
PiperOrigin-RevId: 356720899
2021-02-10 06:32:21 -08:00
A. Unique TensorFlower 80d753c1fe Integrate LLVM at llvm/llvm-project@f89f6d1e5d
Updates LLVM usage to match
[f89f6d1e5d7d](https://github.com/llvm/llvm-project/commit/f89f6d1e5d7d)

PiperOrigin-RevId: 356265374
2021-02-08 09:47:00 -08:00
Tres Popp d086b8a0ec Correct HLO atan2 lowering in cases of -inf and -0 inputs.
This is done by removing the approximation and lowering to atan2 library calls later, which makes the implementation the same as XLA's. Note that if the approximation is brought back later, it can be fixed by changing the IR check `less-than(X, 0)` to `less-than(copysign(X, 1), 0)`.

PiperOrigin-RevId: 356253941
2021-02-08 06:58:04 -08:00
Adrian Kuegel 1c4521cc42 Integrate LLVM at llvm/llvm-project@d1978fa4bf
Updates LLVM usage to match
[d1978fa4bf0d](https://github.com/llvm/llvm-project/commit/d1978fa4bf0d)

PiperOrigin-RevId: 355848094
2021-02-05 07:42:06 -08:00
A. Unique TensorFlower 99bc05f2e4 Integrate LLVM at llvm/llvm-project@91e7a17133
Updates LLVM usage to match
[91e7a1713332](https://github.com/llvm/llvm-project/commit/91e7a1713332)

PiperOrigin-RevId: 355702100
2021-02-04 13:42:31 -08:00
Stephan Herhut 60e1b6882c Add kernel definition for zeta operation.
PiperOrigin-RevId: 355575619
2021-02-04 01:27:43 -08:00
Mahesh Ravishankar 44d0464d16 Use linalg.fill on tensors instead of tensor.generate in MHLO -> Linalg conversion.
linalg.fill on tensors is a structured op that allows the use of tile + fuse
to reduce the fill overhead.
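Roughly, and using present-day upstream spellings (`tensor.empty`, the ins/outs form of `linalg.fill`) rather than the exact ops of this change, the init-value materialization goes from an elementwise generate to a structured fill:

```mlir
// Before: elementwise tensor.generate, opaque to tiling and fusion.
%gen = tensor.generate %d0 {
^bb0(%i: index):
  tensor.yield %cst : f32
} : tensor<?xf32>

// After: a structured linalg.fill on tensors, which tile + fuse understands.
%empty = tensor.empty(%d0) : tensor<?xf32>
%fill = linalg.fill ins(%cst : f32) outs(%empty : tensor<?xf32>) -> tensor<?xf32>
```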

PiperOrigin-RevId: 355490400
2021-02-03 15:03:49 -08:00
Stephan Herhut 6cd1875ee4 Implement lowering of chlo::zeta to mhlo dialect.
PiperOrigin-RevId: 355395581
2021-02-03 07:50:05 -08:00
A. Unique TensorFlower 3b67b207c4 [MLIR][CHLO] Use CHLO lowering for `is_inf` op
PiperOrigin-RevId: 355189054
2021-02-02 09:53:13 -08:00
A. Unique TensorFlower 0458ae9a22 [MLIR][KernelGen] Add `tf.Digamma` kernels
PiperOrigin-RevId: 355129028
2021-02-02 03:07:39 -08:00
A. Unique TensorFlower f40ccc5b4b [MLIR][CHLO] Add `chlo.digamma` and lowering to MHLO
PiperOrigin-RevId: 355122765
2021-02-02 02:10:17 -08:00
Adrian Kuegel 96f8771ed7 Add MLIR generated kernel for Angle kernel.
This also requires a canonicalization pattern to remove a redundant dynamic
reshape from rank 1 to rank 1.
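Illustratively, the redundant pattern looks like this: a 1-D to 1-D dynamic reshape cannot change its operand (the single dimension is fully determined by the element count), so it can fold away:

```mlir
%0 = "mhlo.dynamic_reshape"(%arg0, %shape)
    : (tensor<?xcomplex<f32>>, tensor<1xindex>) -> tensor<?xcomplex<f32>>
// canonicalizes to plain %arg0
```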

PiperOrigin-RevId: 355113135
2021-02-02 00:47:20 -08:00
A. Unique TensorFlower 816d279be3 [MLIR][CHLO] Simplify conversions with upcast
PiperOrigin-RevId: 354975366
2021-02-01 10:48:10 -08:00
Rahul Joshi 8e3890e8e8 [MLIR:HLO] Add AllGather and AllToAll operations to LMHLO dialect.
- Use a common base class for AllReduce, AllGather, and AllToAll in the ODS spec.
- Add basic verification for replica groups attribute.

PiperOrigin-RevId: 354969654
2021-02-01 10:23:46 -08:00
A. Unique TensorFlower 2b72ddc6b2 [MLIR][KernelGen] Add `lgamma` kernels
PiperOrigin-RevId: 354519407
2021-01-29 06:14:17 -08:00
Stephan Herhut e61ef86fdb Add zeta and broadcasting_zeta to chlo dialect.
PiperOrigin-RevId: 354500879
2021-01-29 03:22:52 -08:00
Hanhan Wang 30ce82790d Upstream mhlo.reduce lowering to Linalg to MHLO repo.
In IREE, we use an indexed generic op to handle the initial value. Here, however, we
lower it to a generic op that carries an init_tensor, and leave the handling of the
initialization problem to later passes.

PiperOrigin-RevId: 354294807
2021-01-28 05:46:09 -08:00
Lei Zhang 39589add22 Use the correct shape when converting mhlo.reshape
If mhlo.reshape is not purely collapsing some consecutive operand
dimensions into result dimensions, we will generate two linalg
reshape ops for it: the first one collapses all operand dimensions
into one dimension, and the second one expands it to all result
dimensions. For this case, the number of collapsed/expanded dimensions
should be coming strictly from the operand/result. It is different
from the case where we can generate one linalg reshape. For that case,
the reassociation map should have rank equal to the larger of the
operand and result ranks.
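For instance (made-up shapes; the reshape spellings below are the later tensor-dialect forms, not the linalg reshape ops used in this change), a 2x6 -> 3x4 reshape is neither a pure collapse nor a pure expansion, so it goes through a fully collapsed intermediate:

```mlir
%0 = "mhlo.reshape"(%arg0) : (tensor<2x6xf32>) -> tensor<3x4xf32>

// Collapse all operand dimensions into one (reassociation driven by the operand rank) ...
%c = tensor.collapse_shape %arg0 [[0, 1]] : tensor<2x6xf32> into tensor<12xf32>
// ... then expand into all result dimensions (reassociation driven by the result rank).
%e = tensor.expand_shape %c [[0, 1]] : tensor<12xf32> into tensor<3x4xf32>
```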

PiperOrigin-RevId: 354293826
2021-01-28 05:37:54 -08:00
A. Unique TensorFlower e0a7be7fb1 [MLIR][CHLO] Add `chlo.lgamma` and lowering to `hlo`
PiperOrigin-RevId: 354287316
2021-01-28 04:35:03 -08:00
A. Unique TensorFlower c3ddcd6c7f [MLIR][CHLO] Implement type inference for `is_inf`-like operations in CHLO
PiperOrigin-RevId: 354265834
2021-01-28 01:37:04 -08:00
A. Unique TensorFlower fe2e5a175f [MLIR][HLO] Implement type inference for `is_finite` op
PiperOrigin-RevId: 354261420
2021-01-28 00:56:12 -08:00
A. Unique TensorFlower c653db73c5 Integrate LLVM at llvm/llvm-project@c85b6bf33c
Updates LLVM usage to match
[c85b6bf33c47](https://github.com/llvm/llvm-project/commit/c85b6bf33c47)

PiperOrigin-RevId: 354136678
2021-01-27 11:46:07 -08:00
A. Unique TensorFlower d77c9ad6fa [MLIR][CHLO] Add `is_inf`, `is_pos_inf`, and `is_neg_inf` to CHLO dialect
Also add the respective lowerings to MHLO.

PiperOrigin-RevId: 354101955
2021-01-27 09:00:56 -08:00
Adrian Kuegel fa059259bc Add template for tf.Cast
Also generate the kernels for all types of casts between signed int and float types.
This requires some adaptations to our build macros so that we can also specify the
output type of a kernel.

PiperOrigin-RevId: 354067727
2021-01-27 04:49:55 -08:00
Rahul Joshi 44deae2aa1 [MLIR:HLO] Extend AllReduce to support multiple inputs and results (to model tuples).
- Instead of SameTypeOperands, add custom verification to check if operands and
  results pairwise have the same type.

PiperOrigin-RevId: 353986341
2021-01-26 17:25:22 -08:00
A. Unique TensorFlower ae2d46414d [MLIR][KernelGen] Add erfc kernel for f16
PiperOrigin-RevId: 353209468
2021-01-22 03:38:30 -08:00
A. Unique TensorFlower ef8ccdaebc [MLIR] Add mhlo.logistic lowering to linalg
PiperOrigin-RevId: 353205440
2021-01-22 03:03:16 -08:00
A. Unique TensorFlower c846f925d4 [MLIR][KernelGen] Add chlo.erfc lowering for f32
PiperOrigin-RevId: 353201886
2021-01-22 02:33:21 -08:00
Tim Shen d1c785381d [XLA/GPU] Add XLA HLO -> LMHLO conversion to several ops, and implement them in XLA/GPU.
PiperOrigin-RevId: 353158172
2021-01-21 19:57:54 -08:00
Hanhan Wang e2d60e01ba Fix CMakeLists.txt
This is a follow-up to `7aa64ee0b791`. The dep was added in BUILD, but not in CMakeLists.txt.

PiperOrigin-RevId: 353078811
2021-01-21 12:42:35 -08:00
A. Unique TensorFlower 1a37078132 [MLIR][KernelGen] Add chlo.erfc lowerings for f64
PiperOrigin-RevId: 352993223
2021-01-21 04:42:56 -08:00
A. Unique TensorFlower bec2e625a2 [MLIR][KernelGen] Add approximation lowering for mhlo.erf operation on f64
PiperOrigin-RevId: 352977456
2021-01-21 02:48:43 -08:00
Alexander Belyaev 7aa64ee0b7 [MLIR] Migrate TF from STD complex ops to ComplexDialect.
PiperOrigin-RevId: 352966408
2021-01-21 01:22:25 -08:00
Stephan Herhut 70a351f301 Add chlo.acosh operation and associated lowerings.
PiperOrigin-RevId: 352839289
2021-01-20 11:43:44 -08:00
Jacques Pienaar a7e645f37e Fix incorrect include
PiperOrigin-RevId: 352820426
2021-01-20 10:24:41 -08:00
Tres Popp ba0346b071 Integrate LLVM at llvm/llvm-project@96ef4f307d
Updates LLVM usage to match
[96ef4f307df2](https://github.com/llvm/llvm-project/commit/96ef4f307df2)

PiperOrigin-RevId: 352786460
2021-01-20 07:09:47 -08:00