Commit Graph

414 Commits

Jacques Pienaar 3de2024a9b Avoid creating tuple type only for verification
Make the error message a bit more verbose; it is also cheaper to verify the elements than to create a (potentially) new type.

PiperOrigin-RevId: 363073909
2021-03-15 17:58:19 -07:00
Hanhan Wang 4f5e1c51dd Add support for lowering NHWC pooling mhlo.reduce_window to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362312573
2021-03-11 09:41:34 -08:00
Hanhan Wang 630cabefb0 Add support for lowering 2D depthwise mhlo.conv to Linalg on tensors.
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 362300550
2021-03-11 08:41:38 -08:00
Benjamin Kramer 94f9740c67 [MLIR][HLO:Linalg] Lower mhlo.dynamic_iota to indexed_generic
This is the same as iota, but instead of taking the dimensions from the result
tensor we use the supplied shape extents tensor.

PiperOrigin-RevId: 362298548
2021-03-11 08:31:29 -08:00
Benjamin Kramer 09f8046816 [MLIR:HLO:LINALG] Fix codegen for mhlo.reshape when one side is rank 0
This is an annoying edge case because the collapse->expand lowering expects at
least R1 or it will produce invalid linalg reshapes. Using the direct lowering
works fine.

PiperOrigin-RevId: 362269199
2021-03-11 05:29:56 -08:00
Benjamin Kramer d77b556822 [MLIR][MHLO] Allow recursion in the shape_of mover
This allows it to push shape_of over a chain of ops all the way to the top.

PiperOrigin-RevId: 362249009
2021-03-11 02:52:21 -08:00
Benjamin Kramer 67a770e4e0 [HLO:MLIR] Make binary op type reification emit shape_of instead of tensor ops
This gives cleaner code and allows shape optimizations to happen on the result.

PiperOrigin-RevId: 362242975
2021-03-11 02:01:35 -08:00
Rahul Joshi 9902e6ee32 [HLO] Add LMHLO CollectivePermute verification.
- Extract verification of source-target pairs attached to collective permute into a common
  helper function and use that to verify both MHLO and LMHLO variants.
- Change MlirGpuTestBase::ParseMlirModule to allow returning a failure, and use
  that to update the mlir_gpu_compile_test to check the new behavior.

PiperOrigin-RevId: 362156962
2021-03-10 15:37:12 -08:00
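
A hedged sketch of the shared check, assuming the XLA rule that no replica may appear twice as a source or twice as a target in the pairs list (names and types are illustrative, not the actual helper):

    #include <cstdint>
    #include <set>
    #include <utility>
    #include <vector>

    // Returns true if every replica id appears at most once as a source and
    // at most once as a target.
    bool VerifySourceTargetPairs(
        const std::vector<std::pair<int64_t, int64_t>>& pairs) {
      std::set<int64_t> sources, targets;
      for (const auto& p : pairs) {
        if (!sources.insert(p.first).second) return false;   // duplicate source
        if (!targets.insert(p.second).second) return false;  // duplicate target
      }
      return true;
    }
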
Mahesh Ravishankar b212bd66ae Build fix for missing precision_config.
The conversion from dot_general to dot fails when trying to retrieve
and use the precision config, since precision_config is optional.

PiperOrigin-RevId: 362095296
2021-03-10 11:10:51 -08:00
A. Unique TensorFlower e199df1dbf [MLIR][MHLO] Declare `shape_of` dynamically legal in move-up-dynamic-broadcasts
This allows shape reification to produce `shape_of` ops while they can still be
moved up.

PiperOrigin-RevId: 362075609
2021-03-10 09:59:17 -08:00
A. Unique TensorFlower c217a6ef61 [MHLO] Add pass to move up dynamic broadcasts for fusion
For now, the pass only reifies the required shape computations. Moving
broadcasts will follow to allow for fusion across them.

PiperOrigin-RevId: 362033715
2021-03-10 06:21:57 -08:00
Stephan Herhut cabd4d9a06 Canonicalize dynamic_broadcast_in_dim to its own shape with rank narrowing on the shape into a corresponding tensor.cast.
PiperOrigin-RevId: 362028291
2021-03-10 05:43:54 -08:00
A. Unique TensorFlower 507d9fb61d [MLIR][KernelGen] Add `tf.Polygamma` kernel
PiperOrigin-RevId: 362002943
2021-03-10 02:22:01 -08:00
A. Unique TensorFlower 218476128e [MLIR][KernelGen] Fix zeta lowering at poles
Return nan at zeta poles or inf where the limit is defined. Also test the kernel
based on the series representation of zeta.

PiperOrigin-RevId: 361993482
2021-03-10 01:09:10 -08:00
Benjamin Kramer 5be8be31b5 Integrate LLVM at llvm/llvm-project@3f3f88fb95
Updates LLVM usage to match
[3f3f88fb9503](https://github.com/llvm/llvm-project/commit/3f3f88fb9503)

PiperOrigin-RevId: 361762801
2021-03-09 02:19:24 -08:00
A. Unique TensorFlower 55eda81407 [MLIR][HLO] Reify shape extents as `index` values
PiperOrigin-RevId: 361519167
2021-03-08 02:42:47 -08:00
Marius Brehler 29f70cb892 PR #46723: Adjust types of loop counters
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/46723

Reduces some warnings about comparison of integers of different signs.
Copybara import of the project:

--
311f436f77b334f5462127d8cf179cce067969ca by Marius Brehler <marius.brehler@iml.fraunhofer.de>:

Adjust types of loop counters

Reduces some warnings about comparison of integers of different signs.

PiperOrigin-RevId: 360912203
2021-03-04 07:36:12 -08:00
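
The kind of change involved, as a minimal standalone sketch (illustrative, not the actual diff):

    #include <vector>

    void Example(const std::vector<int>& v) {
      // Before: for (int i = 0; i < v.size(); ++i) {...}
      // compares a signed counter against an unsigned size() and triggers
      // -Wsign-compare. Using an unsigned counter silences the warning:
      for (size_t i = 0; i < v.size(); ++i) {
        (void)v[i];
      }
    }
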
A. Unique TensorFlower 39650a5d5a Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs. rank 1, as well as equal
shapes, so we don't need an additional special case for a maximum rank of 1.

PiperOrigin-RevId: 360891955
2021-03-04 05:24:53 -08:00
Adrian Kuegel 62b357b601 Remove rank 1 specialization from TransformUnrankedHloPass.
For binary ops, we already special-case rank 0 vs. rank 1, as well as equal
shapes, so we don't need an additional special case for a maximum rank of 1.

PiperOrigin-RevId: 360881387
2021-03-04 04:04:11 -08:00
Geoffrey Martin-Noble 8687f3e4cf Lower MHLO Dot to type-polymorphic linalg named ops
The linalg named ops are now type polymorphic, so the type-monomorphic
varieties are redundant (and will be deleted soon).

PiperOrigin-RevId: 360509010
2021-03-02 14:00:58 -08:00
Benjamin Kramer 1facbe9eb5 Integrate LLVM at llvm/llvm-project@7f086d74c3
Updates LLVM usage to match
[7f086d74c347](https://github.com/llvm/llvm-project/commit/7f086d74c347)

PiperOrigin-RevId: 360434104
2021-03-02 08:33:21 -08:00
Adrian Kuegel 0683db3b24 Legalize MinimumBroadcastShapes op.
Use it in TransformUnrankedHloPass, which makes it possible to reduce the
maximum rank for rank-specialized broadcasts from 6 to 5.

PiperOrigin-RevId: 360415743
2021-03-02 06:39:01 -08:00
Christian Sigg 70ee9369d5 Use mlir::OpState::operator->() to get to Operation::getAttrs().
This is a preparation step to remove getAttrs() from OpState.

PiperOrigin-RevId: 360159716
2021-03-01 04:53:00 -08:00
Benjamin Kramer e19ccf975e Filter static dimensions from dynamic_broadcast_in_dim's init_tensor
Otherwise we'd generate invalid IR for those cases.

PiperOrigin-RevId: 360144122
2021-03-01 03:03:54 -08:00
Adrian Kuegel e6a1f5f0f9 Add MinimumBroadcastShapesOp to chlo dialect.
This op is useful for rank specialization of broadcasts. Kernel Generator
needs to generate one kernel for each rank, so if we can minimize the rank
of the broadcast shape, we can support more cases with the same number of
special-cased kernels.

PiperOrigin-RevId: 360137827
2021-03-01 02:23:52 -08:00
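
A minimal sketch of the shape-minimization idea, assuming the op collapses consecutive dimensions that broadcast the same way (this illustrates the concept only, not the op's actual algorithm or its handling of dynamic dimensions):

    #include <cstdint>
    #include <vector>

    // Collapse consecutive dimension pairs with the same broadcast behavior:
    // both sides equal, lhs == 1 throughout, or rhs == 1 throughout. E.g.
    // shapes [2,3,4] and [2,3,1] minimize to [6,4] and [6,1] (rank 3 -> 2),
    // so a rank-2 specialized kernel suffices.
    void MinimizeBroadcastShapes(std::vector<int64_t>& lhs,
                                 std::vector<int64_t>& rhs) {
      auto pattern = [](int64_t l, int64_t r) {
        return l == r ? 0 : (l == 1 ? 1 : 2);
      };
      std::vector<int64_t> new_lhs, new_rhs;
      for (size_t i = 0; i < lhs.size(); ++i) {  // assumes equal ranks
        if (!new_lhs.empty() &&
            pattern(lhs[i], rhs[i]) == pattern(new_lhs.back(), new_rhs.back())) {
          new_lhs.back() *= lhs[i];  // merge into the previous run
          new_rhs.back() *= rhs[i];
        } else {
          new_lhs.push_back(lhs[i]);
          new_rhs.push_back(rhs[i]);
        }
      }
      lhs = new_lhs;
      rhs = new_rhs;
    }
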
Christian Sigg 2d818c4fd9 Use mlir::OpState::operator->() to get to methods of mlir::Operation.
This is a preparation step to remove those methods from OpState.

PiperOrigin-RevId: 360043992
2021-02-28 09:02:33 -08:00
Hanhan Wang a8f99ee0f5 Fix the shape of linalg.init_tensor in conv op lowering.
The output spatial dims are not the same as the input spatial dims. Only
static output spatial dims are supported for now.

PiperOrigin-RevId: 359775479
2021-02-26 09:34:11 -08:00
Hanhan Wang 90f0d7f935 Add support for lowering mhlo.conv to Linalg on tensors.
This pattern only works for normal convolutions. It does not work for depthwise
convolutions. The Linalg conv ops are defined with static rank, so it only
supports 1d/2d/3d cases, which are the most typical cases.

This also refactors out the same check in lmhlo.conv lowering.

PiperOrigin-RevId: 359503527
2021-02-25 05:59:08 -08:00
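
A hedged illustration of the "normal convolution" restriction, assuming the usual MHLO convention that depthwise convolutions carry feature_group_count > 1 (the guard name is made up for the example):

    #include <cstdint>

    // The tensor lowering matches only normal convolutions; depthwise
    // convolutions are expressed in MHLO via feature_group_count > 1 and
    // are rejected here.
    bool IsNormalConv(int64_t feature_group_count) {
      return feature_group_count == 1;
    }
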
Hanhan Wang 45a1249fe2 Add support for lowering mhlo.pad to linalg.pad_tensor
The change upstreams the pattern from the IREE repo to the MHLO repo.

PiperOrigin-RevId: 359481543
2021-02-25 03:00:39 -08:00
Geoffrey Martin-Noble 89f7f2bd65 Lower integer matmuls to linalg
PiperOrigin-RevId: 359306495
2021-02-24 09:45:07 -08:00
Hanhan Wang 475b4a06a5 Add support for lowering mhlo.slice to subtensor.
PiperOrigin-RevId: 359297978
2021-02-24 09:06:09 -08:00
A. Unique TensorFlower ac0552f127 [MLIR][HLO] Remove duplicate `PopulateTransformUnrankedHloPatterns`
PiperOrigin-RevId: 359046173
2021-02-23 07:50:47 -08:00
Rahul Joshi 5adb7c6e12 [MLIR:LHLO] Add optional call target arg mapping to LMHLO CustomCall operations.
- XLA:HLO -> LMHLO conversion drops all token arguments and return values; however,
  custom calls that users write still expect to get buffer pointers for these token types.
- To be able to support this, add an optional call target argument mapping attribute to
  LMHLO custom calls. When this attribute is present, it indicates the number of
  arguments and returns that the custom call expects and also indicates which LMHLO
  arg() or output() maps to which arg or result number of the custom call.

PiperOrigin-RevId: 358826664
2021-02-22 08:43:00 -08:00
Benjamin Kramer a9cc1dcfa0 [mlir][hlo] Add basic rank-specialization for select
This just blows up everything to ranked (up to 6) and is probably quite slow.
This is sufficient to make kernelgen compile SelectV2.

PiperOrigin-RevId: 358777728
2021-02-22 02:41:12 -08:00
Benjamin Kramer b42def4612 [mlir][hlo] Refactor rank specialization to allow an arbitrary number of inputs
This actually simplifies the code a bit.

PiperOrigin-RevId: 358201038
2021-02-18 09:53:03 -08:00
Benjamin Kramer ca4034b56e [mlir][hlo] Make select ready for dynamic shapes (ranked only for now)
Move tf.SelectV2 broadcast lowering to a chlo.broadcast_select op, and lower it
to broadcasts on mhlo from there.

PiperOrigin-RevId: 358179975
2021-02-18 08:08:40 -08:00
Adrian Kuegel 37e31f8b26 Lower Expm1 kernel to math.ExpM1.
PiperOrigin-RevId: 358152908
2021-02-18 04:54:23 -08:00
Richard Uhler b579bd5d9e Support dynamic-shaped operand in verification of BroadcastInDim.
Verification of HLO_BroadcastInDimOp was previously failing or crashing if the
operand had a dynamic shape or was unranked. Update the verification code to
allow the operand to be unranked or to have a dynamic shape.

PiperOrigin-RevId: 358056793
2021-02-17 16:18:09 -08:00
A. Unique TensorFlower 220deb3709 [MLIR][CHLO] Add legalization for `chlo.polygamma` to MHLO
PiperOrigin-RevId: 357954624
2021-02-17 08:33:01 -08:00
A. Unique TensorFlower c06de24f6c [MLIR][CHLO] Generalize lowering with upcast to n-ary operation
Allows reuse for zeta lowering now and for the polygamma lowering soon.

PiperOrigin-RevId: 357739910
2021-02-16 09:47:24 -08:00
A. Unique TensorFlower 81abaf364d [MLIR][MHLO] Add polygamma op to the CHLO dialect
PiperOrigin-RevId: 357724465
2021-02-16 08:32:33 -08:00
A. Unique TensorFlower 2fe0c33083 Integrate LLVM at llvm/llvm-project@16428a8d91
Updates LLVM usage to match
[16428a8d91a9](https://github.com/llvm/llvm-project/commit/16428a8d91a9)

PiperOrigin-RevId: 357550807
2021-02-15 04:17:58 -08:00
Adrian Kuegel 824bc9c425 Improve broadcast transformation to treat dynamic shapes with 1 element as scalar.
A shape that contains exactly one element is effectively a scalar. This leads
to a speedup in cases where we have a binary op with one operand that is
effectively a scalar, because we can use the fast path.

PiperOrigin-RevId: 357515552
2021-02-14 23:25:41 -08:00
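
A minimal sketch of the "effectively a scalar" test (for dynamic shapes the equivalent check happens at runtime on the computed extents; this helper is illustrative, not the pass code):

    #include <cstdint>
    #include <vector>

    // A shape whose extents multiply to 1 describes a single-element tensor,
    // which can take the scalar fast path even if its rank is > 0 (e.g. [1,1,1]).
    bool IsEffectivelyScalar(const std::vector<int64_t>& shape) {
      int64_t elements = 1;
      for (int64_t dim : shape) elements *= dim;
      return elements == 1;
    }
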
A. Unique TensorFlower 4060a86fe2 Integrate LLVM at llvm/llvm-project@2bfe27da17
Updates LLVM usage to match
[2bfe27da171e](https://github.com/llvm/llvm-project/commit/2bfe27da171e)

PiperOrigin-RevId: 357196336
2021-02-12 08:32:03 -08:00
Alexander Belyaev 36e04d92c0 [KERNEL_GEN] Add a pattern to bufferize `mhlo.reshape(<unranked_tensor>)`.
PiperOrigin-RevId: 356720899
2021-02-10 06:32:21 -08:00
A. Unique TensorFlower 80d753c1fe Integrate LLVM at llvm/llvm-project@f89f6d1e5d
Updates LLVM usage to match
[f89f6d1e5d7d](https://github.com/llvm/llvm-project/commit/f89f6d1e5d7d)

PiperOrigin-RevId: 356265374
2021-02-08 09:47:00 -08:00
Tres Popp d086b8a0ec Correct HLO atan2 lowering in cases of -inf and -0 inputs.
This is done by removing the approximation and lowering to atan2 library calls later, making the implementation the same as XLA's. Note that if the approximation is brought back later, it can be fixed by changing the IR check `less-than(X, 0)` to `less-than(copysign(X, 1), 0)`.

PiperOrigin-RevId: 356253941
2021-02-08 06:58:04 -08:00
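
A small standalone C++ illustration of why the copysign trick matters for -0 inputs (this is an example of the arithmetic, not the lowering itself):

    #include <cmath>
    #include <cstdio>

    int main() {
      double x = -0.0;
      // A plain less-than comparison cannot see the sign of -0.0,
      // because IEEE defines -0.0 == 0.0:
      std::printf("%d\n", x < 0.0 ? 1 : 0);                      // prints 0
      // copysign(1, x) recovers the sign bit, so -0.0 is treated as negative:
      std::printf("%d\n", std::copysign(1.0, x) < 0.0 ? 1 : 0);  // prints 1
      return 0;
    }
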
Adrian Kuegel 1c4521cc42 Integrate LLVM at llvm/llvm-project@d1978fa4bf
Updates LLVM usage to match
[d1978fa4bf0d](https://github.com/llvm/llvm-project/commit/d1978fa4bf0d)

PiperOrigin-RevId: 355848094
2021-02-05 07:42:06 -08:00
A. Unique TensorFlower 99bc05f2e4 Integrate LLVM at llvm/llvm-project@91e7a17133
Updates LLVM usage to match
[91e7a1713332](https://github.com/llvm/llvm-project/commit/91e7a1713332)

PiperOrigin-RevId: 355702100
2021-02-04 13:42:31 -08:00
Stephan Herhut 60e1b6882c Add kernel definition for zeta operation.
PiperOrigin-RevId: 355575619
2021-02-04 01:27:43 -08:00
Mahesh Ravishankar 44d0464d16 Use linalg.fill on tensors instead of tensor.generate in MHLO -> Linalg conversion.
linalg.fill on tensors is a structured op, which allows the use of tile-and-fuse
to reduce the fill overhead.

PiperOrigin-RevId: 355490400
2021-02-03 15:03:49 -08:00
Stephan Herhut 6cd1875ee4 Implement lowering of chlo::zeta to mhlo dialect.
PiperOrigin-RevId: 355395581
2021-02-03 07:50:05 -08:00
A. Unique TensorFlower 3b67b207c4 [MLIR][CHLO] Use CHLO lowering for `is_inf` op
PiperOrigin-RevId: 355189054
2021-02-02 09:53:13 -08:00
A. Unique TensorFlower 0458ae9a22 [MLIR][KernelGen] Add `tf.Digamma` kernels
PiperOrigin-RevId: 355129028
2021-02-02 03:07:39 -08:00
A. Unique TensorFlower f40ccc5b4b [MLIR][CHLO] Add `chlo.digamma` and lowering to MHLO
PiperOrigin-RevId: 355122765
2021-02-02 02:10:17 -08:00
Adrian Kuegel 96f8771ed7 Add MLIR generated kernel for Angle kernel.
This also requires a canonicalization pattern to remove a redundant dynamic
reshape from rank 1 to rank 1.

PiperOrigin-RevId: 355113135
2021-02-02 00:47:20 -08:00
A. Unique TensorFlower 816d279be3 [MLIR][CHLO] Simplify conversions with upcast
PiperOrigin-RevId: 354975366
2021-02-01 10:48:10 -08:00
Rahul Joshi 8e3890e8e8 [MLIR:HLO] Add AllGather and AllToAll operations to LMHLO dialect.
- Use a common base class for AllReduce, AllGather, and AllToAll in the ODS spec.
- Add basic verification for replica groups attribute.

PiperOrigin-RevId: 354969654
2021-02-01 10:23:46 -08:00
A. Unique TensorFlower 2b72ddc6b2 [MLIR][KernelGen] Add `lgamma` kernels
PiperOrigin-RevId: 354519407
2021-01-29 06:14:17 -08:00
Stephan Herhut e61ef86fdb Add zeta and broadcasting_zeta to chlo dialect.
PiperOrigin-RevId: 354500879
2021-01-29 03:22:52 -08:00
Hanhan Wang 30ce82790d Upstream mhlo.reduce lowering to Linalg to MHLO repo.
In IREE, we use an indexed generic op to handle the initial value. Here,
however, we lower it to a generic op that carries an init_tensor, and leave
the handling of initialization to later passes.

PiperOrigin-RevId: 354294807
2021-01-28 05:46:09 -08:00
Lei Zhang 39589add22 Use the correct shape when converting mhlo.reshape
If mhlo.reshape is not purely collapsing some consecutive operand
dimensions into result dimensions, we generate two linalg reshape
ops for it: the first collapses all operand dimensions into one
dimension, and the second expands it to all result dimensions. For
example, reshaping tensor<2x15> into tensor<6x5> first collapses to
tensor<30> and then expands. In this case, the number of
collapsed/expanded dimensions should come strictly from the
operand/result. This differs from the case where we can generate a
single linalg reshape; there, the reassociation map should have rank
equal to the larger of the operand/result ranks.

PiperOrigin-RevId: 354293826
2021-01-28 05:37:54 -08:00
A. Unique TensorFlower e0a7be7fb1 [MLIR][CHLO] Add `chlo.lgamma` and lowering to `hlo`
PiperOrigin-RevId: 354287316
2021-01-28 04:35:03 -08:00
A. Unique TensorFlower c3ddcd6c7f [MLIR][CHLO] Implement type inference for `is_inf`-like operations in CHLO
PiperOrigin-RevId: 354265834
2021-01-28 01:37:04 -08:00
A. Unique TensorFlower fe2e5a175f [MLIR][HLO] Implement type inference for `is_finite` op
PiperOrigin-RevId: 354261420
2021-01-28 00:56:12 -08:00
A. Unique TensorFlower c653db73c5 Integrate LLVM at llvm/llvm-project@c85b6bf33c
Updates LLVM usage to match
[c85b6bf33c47](https://github.com/llvm/llvm-project/commit/c85b6bf33c47)

PiperOrigin-RevId: 354136678
2021-01-27 11:46:07 -08:00
A. Unique TensorFlower d77c9ad6fa [MLIR][CHLO] Add `is_inf`, `is_pos_inf`, and `is_neg_inf` to CHLO dialect
Also add the respective lowerings to MHLO.

PiperOrigin-RevId: 354101955
2021-01-27 09:00:56 -08:00
Adrian Kuegel fa059259bc Add template for tf.Cast
Also generate the kernels for all types of casts between signed int and float types.
This requires some adaptations to our build macros so that we can also specify the
output type of a kernel.

PiperOrigin-RevId: 354067727
2021-01-27 04:49:55 -08:00
Rahul Joshi 44deae2aa1 [MLIR:HLO] Extend AllReduce to support multiple inputs and results (to model tuples).
- Instead of SameTypeOperands, add custom verification to check that operands and
  results pairwise have the same type.

PiperOrigin-RevId: 353986341
2021-01-26 17:25:22 -08:00
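
A simplified sketch of the pairwise check such a verifier performs (the real code compares mlir::Type values and emits op errors; strings stand in for types here):

    #include <string>
    #include <vector>

    // Pairwise type check: operand i must have the same type as result i.
    bool OperandsMatchResults(const std::vector<std::string>& operand_types,
                              const std::vector<std::string>& result_types) {
      if (operand_types.size() != result_types.size()) return false;
      for (size_t i = 0; i < operand_types.size(); ++i)
        if (operand_types[i] != result_types[i]) return false;
      return true;
    }
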
A. Unique TensorFlower ae2d46414d [MLIR][KernelGen] Add erfc kernel for f16
PiperOrigin-RevId: 353209468
2021-01-22 03:38:30 -08:00
A. Unique TensorFlower ef8ccdaebc [MLIR] Add mhlo.logistic lowering to linalg
PiperOrigin-RevId: 353205440
2021-01-22 03:03:16 -08:00
A. Unique TensorFlower c846f925d4 [MLIR][KernelGen] Add chlo.erfc lowering for f32
PiperOrigin-RevId: 353201886
2021-01-22 02:33:21 -08:00
Tim Shen d1c785381d [XLA/GPU] Add XLA HLO -> LMHLO conversion to several ops, and implement them in XLA/GPU.
PiperOrigin-RevId: 353158172
2021-01-21 19:57:54 -08:00
Hanhan Wang e2d60e01ba Fix CMakeLists.txt
This is a follow-up to `7aa64ee0b791`: the dep was added in BUILD, but not in CMakeLists.txt.

PiperOrigin-RevId: 353078811
2021-01-21 12:42:35 -08:00
A. Unique TensorFlower 1a37078132 [MLIR][KernelGen] Add chlo.erfc lowerings for f64
PiperOrigin-RevId: 352993223
2021-01-21 04:42:56 -08:00
A. Unique TensorFlower bec2e625a2 [MLIR][KernelGen] Add approximation lowering for mhlo.erf operation on f64
PiperOrigin-RevId: 352977456
2021-01-21 02:48:43 -08:00
Alexander Belyaev 7aa64ee0b7 [MLIR] Migrate TF from STD complex ops to ComplexDialect.
PiperOrigin-RevId: 352966408
2021-01-21 01:22:25 -08:00
Stephan Herhut 70a351f301 Add chlo.acosh operation and associated lowerings.
PiperOrigin-RevId: 352839289
2021-01-20 11:43:44 -08:00
Jacques Pienaar a7e645f37e Fix incorrect include
PiperOrigin-RevId: 352820426
2021-01-20 10:24:41 -08:00
Tres Popp ba0346b071 Integrate LLVM at llvm/llvm-project@96ef4f307d
Updates LLVM usage to match
[96ef4f307df2](https://github.com/llvm/llvm-project/commit/96ef4f307df2)

PiperOrigin-RevId: 352786460
2021-01-20 07:09:47 -08:00
A. Unique TensorFlower ec5f5667e1 [MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
PiperOrigin-RevId: 352773540
2021-01-20 05:31:15 -08:00
A. Unique TensorFlower 0e85b4d511 [MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
PiperOrigin-RevId: 352604725
2021-01-19 10:51:41 -08:00
A. Unique TensorFlower 96fb617413 [MLIR][KernelGen] Add erf kernel and missing lowering for f16 type
PiperOrigin-RevId: 352416184
2021-01-18 08:21:15 -08:00
A. Unique TensorFlower c11ea4ef5a [MLIR][KernelGen] Add `tf.Atanh` kernels
PiperOrigin-RevId: 352393602
2021-01-18 05:14:09 -08:00
A. Unique TensorFlower 3763740910 [MLIR][KernelGen] Add erf kernel for f32 arguments and missing lowerings
PiperOrigin-RevId: 352381016
2021-01-18 03:35:13 -08:00
A. Unique TensorFlower bcdb3c3548 [MLIR] Lower mhlo.clamp to linalg
PiperOrigin-RevId: 351998800
2021-01-15 06:45:38 -08:00
A. Unique TensorFlower 791d5afd28 [MLIR][KernelGen] Add `tf.Asinh` kernels and complete their lowerings
PiperOrigin-RevId: 351989552
2021-01-15 05:26:57 -08:00
A. Unique TensorFlower 316f630728 [MLIR][KernelGen] Add cosh kernels and tests
Allow for relative tolerance in unary kernel tests. In the case of the cosh
kernels, this makes it possible to accept an observed difference of 5.6e-8
between the kernel and the `std::cosh` reference (32829984.568665262 vs.
32829984.568665318) in one of the test cases.

PiperOrigin-RevId: 351983698
2021-01-15 04:31:30 -08:00
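
A minimal sketch of a relative-tolerance comparison of this kind (the helper name and rtol value are illustrative, not the actual test infrastructure):

    #include <cmath>
    #include <cstdio>

    // Relative-error acceptance: |actual - expected| <= rtol * |expected|.
    bool AlmostEqual(double actual, double expected, double rtol) {
      return std::fabs(actual - expected) <= rtol * std::fabs(expected);
    }

    int main() {
      // The cosh case from this commit: a difference of ~5.6e-8 on values of
      // magnitude ~3.3e7 is a relative error of ~1.7e-15, well within bounds.
      std::printf("%d\n",
                  AlmostEqual(32829984.568665262, 32829984.568665318, 1e-12));
      return 0;
    }
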
A. Unique TensorFlower 181d2cad31 [MLIR][KernelGen] Add `tf.Log1p` kernel and tests
PiperOrigin-RevId: 351566460
2021-01-13 05:37:25 -08:00
Hanhan Wang 300a7c11ce Upstream mhlo.dot_general lowering to Linalg to MHLO repo
PiperOrigin-RevId: 351514250
2021-01-12 22:08:46 -08:00
A. Unique TensorFlower 97a618f91a Integrate LLVM at llvm/llvm-project@6f4d460762
Updates LLVM usage to match
[6f4d46076200](https://github.com/llvm/llvm-project/commit/6f4d46076200)

PiperOrigin-RevId: 351474229
2021-01-12 16:38:25 -08:00
A. Unique TensorFlower 0b85d5c510 [MLIR][KernelGen] Add asin kernels and tests
PiperOrigin-RevId: 351381423
2021-01-12 09:02:46 -08:00
Hanhan Wang 8f58f844e5 Upstream mhlo.dot lowering to Linalg to MHLO repo.
We prototyped the lowering from mhlo.dot to linalg.matmul in IREE. Since Linalg
now supports matmul in the tensors world, we can move the lowering logic there
and upstream it to legalize_to_linalg.cc. The patch lowers mhlo.dot to
linalg.matmul/matvec/dot in the tensors world.

PiperOrigin-RevId: 351184911
2021-01-11 10:35:24 -08:00
Alexander Belyaev 180f917446 [KERNEL_GEN] Add a pattern for hlo.dyn_broadcast->linalg to enable is_inf kernel.
PiperOrigin-RevId: 351179620
2021-01-11 10:13:31 -08:00
Alexander Belyaev ecf1bf5132 [KERNEL_GEN] Add a canonicalization pattern to drop a redundant dynamic reshape.
PiperOrigin-RevId: 351141868
2021-01-11 06:38:03 -08:00
Alexander Belyaev 6c42f54298 [KERNEL_GEN] Restrict broadcast -> reshape canonicalization to identity dims.
This is needed to avoid the case when the broadcast_in_dim also performs a permutation.

PiperOrigin-RevId: 350650342
2021-01-07 15:30:28 -08:00
Alexander Belyaev 095dc28e5c [KERNEL_GEN] Add canonicalization pattern to drop a redundant broadcast op.
PiperOrigin-RevId: 350105790
2021-01-05 03:01:00 -08:00
Adrian Kuegel 50fc56a208 Fix header include.
It should not have the third_party prefix; this doesn't work in open source.

PiperOrigin-RevId: 348905548
2020-12-24 01:40:31 -08:00
A. Unique TensorFlower b0bf2ef45b Integrate LLVM at llvm/llvm-project@c3acda0798
Updates LLVM usage to match
[c3acda0798f9](https://github.com/llvm/llvm-project/commit/c3acda0798f9)

PiperOrigin-RevId: 348896724
2020-12-23 23:53:54 -08:00
Stephan Herhut ccdd07f8e4 Prepare to remove tensor_load and tensor_store special handling from hlo to lhlo legalization.
This updates the tests to no longer rely on tensor_store. Once all users have migrated off this behavior, the tensor_store support will be removed.

PiperOrigin-RevId: 348624899
2020-12-22 06:29:12 -08:00
A. Unique TensorFlower c4accdcc41 Integrate LLVM at llvm/llvm-project@1b97cdf885
Updates LLVM usage to match
[1b97cdf885d6](https://github.com/llvm/llvm-project/commit/1b97cdf885d6)

PiperOrigin-RevId: 348587513
2020-12-21 23:49:18 -08:00
Tres Popp a42213b870 Define lowering of [l]mhlo.pow.
For floating point operations, this uses std.pow.
For integer operations, this lowers to a loop.
This adds a dependency on scf.

PiperOrigin-RevId: 348537232
2020-12-21 15:27:40 -08:00
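
What the integer lowering computes, written as straight-line C++ for illustration (the actual lowering emits an scf loop in IR; a non-negative exponent is assumed here):

    #include <cstdint>

    // Integer power by repeated multiplication, the semantics the scf loop
    // implements. Floating-point pow goes to std.pow instead.
    int64_t IPow(int64_t base, int64_t exp) {
      int64_t result = 1;
      for (int64_t i = 0; i < exp; ++i) result *= base;
      return result;
    }
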
Smit Hinsu 9466cffaf3 Restrict CHLO Acos and Sinh op lowering to non complex types
These are failing for complex types, which require special handling. We have a fallback lowering for these ops, so we can disable complex element types for now.

PiperOrigin-RevId: 348205002
2020-12-18 11:32:10 -08:00
Smit Hinsu 8d051723c0 Use InferTypeOpInterface for HLO AbsOp and fix result shape inference
Shape inference for ops with complex element types needs to use the element type of the complex type as the result element type, not the full operand type.

Before:
  "mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xtensor<4xcomplex<f32>>>
After:
  "mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xf32>

PiperOrigin-RevId: 348123967
2020-12-17 17:37:07 -08:00
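
A hedged sketch of the inference rule, assuming the standard MLIR ComplexType API (simplified; this is not the actual InferTypeOpInterface implementation):

    #include "mlir/IR/BuiltinTypes.h"

    // For tensor<4xcomplex<f32>>, strip the complex wrapper to get f32 as the
    // result element type; the shape is kept, yielding tensor<4xf32>.
    mlir::Type InferAbsResultElementType(mlir::Type operand_element_type) {
      if (auto complex_type = operand_element_type.dyn_cast<mlir::ComplexType>())
        return complex_type.getElementType();
      return operand_element_type;
    }
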
Smit Hinsu 737d15ded5 Handle operands with zero elements in HLO PadOp folder
PiperOrigin-RevId: 348034821
2020-12-17 09:27:36 -08:00
Christian Sigg dc7e63f74c Use mlir::OpState::operator->() to get to methods of mlir::Operation.
This is a preparation step to remove those methods from OpState.

PiperOrigin-RevId: 348010582
2020-12-17 06:29:33 -08:00
Christian Sigg 099c130daf Fix MLIR include paths.
PiperOrigin-RevId: 347976151
2020-12-17 00:56:04 -08:00
A. Unique TensorFlower 5da9190dd9 Integrate LLVM at llvm/llvm-project@0cf7e4b252
Updates LLVM usage to match
[0cf7e4b252fe](https://github.com/llvm/llvm-project/commit/0cf7e4b252fe)

PiperOrigin-RevId: 347948887
2020-12-16 20:30:17 -08:00
Adrian Kuegel 61244b136c Try to avoid a segfault if we don't support a lowering.
It can happen that a lowering for a certain type is not implemented yet.
We should not segfault in such a case, but instead return a failure().

PiperOrigin-RevId: 347801106
2020-12-16 04:58:17 -08:00
Alexander Belyaev e6e8920921 [KERNEL_GEN] Switch the pipeline to Linalg-on-Tensors.
PiperOrigin-RevId: 347781190
2020-12-16 01:51:15 -08:00
Tres Popp 6d7812bbc5 Correct non-ConversionRewriter transformation in LegalizeToLinalg
PiperOrigin-RevId: 347622657
2020-12-15 08:56:58 -08:00
Alexander Belyaev ddda2699fb [KERNEL_GEN] Switch the pipeline to Linalg-on-Tensors.
PiperOrigin-RevId: 347600145
2020-12-15 06:32:25 -08:00
River Riddle 9540e51617 [mlir][NFC] Replace usages of mlir/IR/StandardTypes.h with mlir/IR/BuiltinTypes.h
StandardTypes.h was moved to BuiltinTypes.h and is being removed.

PiperOrigin-RevId: 347559927
2020-12-15 00:59:29 -08:00
Tres Popp 9df327d88f Forward listeners in LhloLegalizeToParallelLoops builders
PiperOrigin-RevId: 347554379
2020-12-15 00:13:15 -08:00
Hanhan Wang 1a58f19664 [NFC] Make function names follow style guide.
Functions should start with a capital letter and have a capital letter for each
new word. See https://google.github.io/styleguide/cppguide.html#Function_Names

PiperOrigin-RevId: 347420402
2020-12-14 10:46:55 -08:00
Alexander Belyaev 8b35a75d4a [KERNEL_GEN] Switch the pipeline to Linalg-on-Tensors.
PiperOrigin-RevId: 347368063
2020-12-14 05:46:47 -08:00
River Riddle 6b439f7eee [mlir][NFC] Replace usages of mlir/IR/StandardTypes.h with mlir/IR/BuiltinTypes.h
StandardTypes.h was moved to BuiltinTypes.h and is being removed.

PiperOrigin-RevId: 347115952
2020-12-11 19:01:25 -08:00
River Riddle 3abdd556de [mlir][NFC] Replace usages of mlir/IR/StandardTypes.h with mlir/IR/BuiltinTypes.h
StandardTypes.h was moved to BuiltinTypes.h and is being removed.

PiperOrigin-RevId: 347111863
2020-12-11 18:12:59 -08:00
Tim Shen 22b91d62ff [XLA/GPU] Migrate all unnested elementwise emitters.
Also fix GetHloOutputs/GetHloOperands to treat aliased operands correctly.

PiperOrigin-RevId: 347055290
2020-12-11 12:45:21 -08:00
Smit Hinsu ab6ee11813 Fix folding of HLO SliceOp with zero elements
This was causing a division by zero in that case.

PiperOrigin-RevId: 346920942
2020-12-10 20:22:48 -08:00
Rahul Joshi f232da1f9d [MLIR:HLO] Add window_reversal attribute to convolution attributes.
- Add this attribute to match the corresponding XLA HLO attribute on convolution
  operations.
- A true value indicates a reversal of the corresponding kernel spatial dimension.
- Since the XLA builder does not support this attribute, use a custom HLO converter to map
  from mlir::mhlo::ConvOp to XLA.

PiperOrigin-RevId: 346891737
2020-12-10 16:39:19 -08:00
A. Unique TensorFlower 6a05893169 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346725498
2020-12-10 00:21:36 -08:00
Christian Sigg 51f535454d Use OpState::operator->() to get to member functions in Operation so we can remove the corresponding methods from OpState.
PiperOrigin-RevId: 346721668
2020-12-09 23:54:07 -08:00
Alexander Belyaev c36afd275e [HLO] Add a pattern for HLO ConstOp to HLO -> Linalg conversion.
PiperOrigin-RevId: 346718273
2020-12-09 23:24:57 -08:00
Tim Shen cfcf741932 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346716519
2020-12-09 23:08:13 -08:00
A. Unique TensorFlower 65ebd85563 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346658288
2020-12-09 16:05:43 -08:00
Tim Shen 1c10e1fec6 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346624905
2020-12-09 13:31:50 -08:00
A. Unique TensorFlower 1a5a1b5f41 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346574093
2020-12-09 09:54:16 -08:00
Tim Shen 3c33fe4b9e [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346559170
2020-12-09 08:42:51 -08:00
Alexander Belyaev e9057cbd93 [mlir] Enable conversion of HLO SignOp to Linalg.
PiperOrigin-RevId: 346380514
2020-12-08 12:08:26 -08:00
Stephan Herhut c3790af758 Add plumbing for or and xor to hlo to lhlo and linalg lowerings.
PiperOrigin-RevId: 346311314
2020-12-08 06:39:02 -08:00
Stephan Herhut dd5895d083 Extend unranked hlo transformations to also support and, or and xor.
PiperOrigin-RevId: 346270393
2020-12-08 01:00:26 -08:00
Benjamin Kramer 5235eceea0 Lower mhlo shifts to linalg
PiperOrigin-RevId: 346161253
2020-12-07 13:02:32 -08:00
Tres Popp d327fc5737 [kernel_gen] Lower max rank specialization from 6 to 5
We don't care much about rank 6 broadcasting operations, and this lowers compile times significantly.

PiperOrigin-RevId: 346046601
2020-12-07 02:18:38 -08:00
Smit Hinsu bc7b6374c8 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when building a single u64 out of two s32 seeds. Fixed this by first casting to the unsigned type of the same bit width.

PiperOrigin-RevId: 345902167
2020-12-05 18:55:41 -08:00
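
A standalone C++ illustration of the sign-extension pitfall and the fix (the function names are made up for the example):

    #include <cstdint>
    #include <cstdio>

    // Buggy variant: a negative `lo` sign-extends to 64 bits, flooding the
    // high word with 1s and clobbering the bits contributed by `hi`.
    uint64_t PackSeedsWrong(int32_t hi, int32_t lo) {
      return (static_cast<uint64_t>(hi) << 32) | static_cast<uint64_t>(lo);
    }

    // Fixed variant: cast to the unsigned type of the same bit width first,
    // so the widening zero-extends instead of sign-extending.
    uint64_t PackSeedsFixed(int32_t hi, int32_t lo) {
      return (static_cast<uint64_t>(static_cast<uint32_t>(hi)) << 32) |
             static_cast<uint32_t>(lo);
    }

    int main() {
      // prints ffffffffffffffff: hi's bits are lost.
      std::printf("%016llx\n", (unsigned long long)PackSeedsWrong(1, -1));
      // prints 00000001ffffffff: both seeds preserved.
      std::printf("%016llx\n", (unsigned long long)PackSeedsFixed(1, -1));
      return 0;
    }
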
Phoenix Meadowlark c33bdcbd03 Remove fold of `mhlo.compare(%arg0, %arg0)` for floating types.
Two tensors having the same SSA value isn't sufficient for equality of floating-point types, as `NaN != NaN`. As written, this causes `tf.IsNan` to [miscompile](https://github.com/google/iree/issues/4061).

PiperOrigin-RevId: 345730640
2020-12-04 12:15:02 -08:00
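
A small standalone C++ illustration of why the fold is unsound for floats:

    #include <cmath>
    #include <cstdio>
    #include <limits>

    int main() {
      float nan = std::numeric_limits<float>::quiet_NaN();
      // Folding compare(%x, %x) to a constant "true" is wrong for floats:
      std::printf("%d\n", nan == nan ? 1 : 0);       // prints 0: NaN != NaN
      std::printf("%d\n", std::isnan(nan) ? 1 : 0);  // prints 1: what tf.IsNan
                                                     // relies on
      return 0;
    }
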
Smit Hinsu 9bd1995f90 Legalize XlaReplicaId to HLO replica-id op
Also, define a shape inference function for the HLO replica-id op.

PiperOrigin-RevId: 345714342
2020-12-04 11:04:40 -08:00
A. Unique TensorFlower e87d53742b Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when building a single u64 out of two s32 seeds. Fixed this by first casting to the unsigned type of the same bit width.

PiperOrigin-RevId: 345618958
2020-12-04 00:04:10 -08:00
Smit Hinsu 9456af5880 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when building a single u64 out of two s32 seeds. Fixed this by first casting to the unsigned type of the same bit width.

PiperOrigin-RevId: 345605910
2020-12-03 22:09:56 -08:00
Tres Popp 7c3f049c8e [kernel_gen] Lower max rank specialization from 6 to 5
We don't care much about rank 6 broadcasting operations, and this lowers compile times significantly.

PiperOrigin-RevId: 345466476
2020-12-03 09:19:25 -08:00
Rahul Joshi dbbdfea95b [MLIR:HLO] Generate enum decls for HLO and LHLO GPU dialects.
- Split out enum definitions in hlo dialect into a separate .td file (similar to structs)
  and generate enum decl/defs for these enums.
- Also split out the LHLO GPU enums into a separate .td file and generate enum
  decl/defs for these enums as well.
- Remove unused dialect from ConvolutionAttributes and generate lhlo_gpu enums.
- Add appropriate namespace for all the enums.

PiperOrigin-RevId: 345277240
2020-12-02 11:39:23 -08:00
A. Unique TensorFlower 1b711670bc Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when building a single u64 out of two s32 seeds. Fixed this by first casting to the unsigned type of the same bit width.

PiperOrigin-RevId: 345239817
2020-12-02 08:42:07 -08:00
Smit Hinsu 733fc6d032 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends it, producing leading 1s in the representation, which is not what we want when building a single u64 out of two s32 seeds. Fixed this by first casting to the unsigned type of the same bit width.

PiperOrigin-RevId: 345227848
2020-12-02 07:24:10 -08:00
River Riddle f89244381d [mlir][NFC] Replace usages of Function.h and Module.h with BuiltinOps.h
This is part of a larger refactoring cleaning up the BuiltinDialect of MLIR.

PiperOrigin-RevId: 345085278
2020-12-01 13:18:06 -08:00
A. Unique TensorFlower dd15c6cd84 [MLIR][KernelGen] Generate assertion message in `transform_unranked_hlo` pass
Use a constant to generate the correct assertion message. This avoids
confusion when lowering the max-rank specialization for debugging.

PiperOrigin-RevId: 344769021
2020-11-30 01:41:09 -08:00
Adrian Kuegel d14c63da54 Add a canonicalization pattern to remove redundant dynamic_reshapes.
PiperOrigin-RevId: 344517381
2020-11-27 04:46:50 -08:00
Stephan Herhut f183e95e69 Register the new branch op rewrite patterns in hlo to lhlo conversion
for good measure.

PiperOrigin-RevId: 344512196
2020-11-27 03:44:28 -08:00
Adrian Kuegel 6a71a84302 Support different input/output type for TransformUnrankedHlo.
Also generate the tf.Equal kernel, now that it works.

PiperOrigin-RevId: 344402014
2020-11-26 04:20:34 -08:00
Alexander Belyaev 5583c63cab [KERNEL_GEN] Add unranked Conj kernel.
PiperOrigin-RevId: 344243271
2020-11-25 06:37:26 -08:00
Lucy Fox 85f92a1651 [KernelGen] Lower tf.Erf and tf.Erfc ops to CHLO.
This does not include the lowerings from CHLO to LMHLO.

PiperOrigin-RevId: 344091604
2020-11-24 10:55:43 -08:00