Commit Graph

215 Commits

Author SHA1 Message Date
Stephan Herhut ccdd07f8e4 Prepare to remove tensor_load and tensor_store special handling from hlo to lhlo legalization.
This updates the tests to no longer rely on tensor_store. Once all users have adopted the new behavior, the tensor_store support will be removed.

PiperOrigin-RevId: 348624899
2020-12-22 06:29:12 -08:00
A. Unique TensorFlower c4accdcc41 Integrate LLVM at llvm/llvm-project@1b97cdf885
Updates LLVM usage to match
[1b97cdf885d6](https://github.com/llvm/llvm-project/commit/1b97cdf885d6)

PiperOrigin-RevId: 348587513
2020-12-21 23:49:18 -08:00
Tres Popp a42213b870 Define lowering of [l]mhlo.pow.
For floating point operations, this uses std.pow.
For integer operations, this lowers to a loop.
This adds a dependency on scf.
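
As a rough illustration (not the actual pattern code), the scalar logic that a loop-based integer pow lowering computes looks like:

```cpp
#include <cstdint>

// Sketch of the scalar logic behind a loop-based integer pow lowering
// (illustrative only; assumes a non-negative exponent).
int64_t IntegerPow(int64_t base, int64_t exponent) {
  int64_t result = 1;
  for (int64_t i = 0; i < exponent; ++i) result *= base;
  return result;
}
```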

PiperOrigin-RevId: 348537232
2020-12-21 15:27:40 -08:00
Smit Hinsu 9466cffaf3 Restrict CHLO Acos and Sinh op lowering to non-complex types
These are failing for complex types, which require special handling. Since we have a fallback lowering for these ops, we can disable complex element types for now.

PiperOrigin-RevId: 348205002
2020-12-18 11:32:10 -08:00
Smit Hinsu 8d051723c0 Use InferTypeOpInterface for HLO AbsOp and fix result shape inference
Shape inference for ops with complex element types needs to use the element type of the complex type as the result element type, not the full operand type.

Before:
"mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xtensor<4xcomplex<f32>>>
After:
"mhlo.abs"(%arg0) : (tensor<4xcomplex<f32>>) -> tensor<4xf32>
PiperOrigin-RevId: 348123967
2020-12-17 17:37:07 -08:00
Smit Hinsu 737d15ded5 Handle operands with zero elements in HLO PadOp folder
PiperOrigin-RevId: 348034821
2020-12-17 09:27:36 -08:00
Christian Sigg dc7e63f74c Use mlir::OpState::operator->() to get to methods of mlir::Operation.
This is a preparation step to remove those methods from OpState.
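
As a small illustration (not code from this change), the arrow operator lets an op handle reach Operation methods directly:

```cpp
#include "mlir/IR/Operation.h"

// Illustrative only: OpState::operator->() forwards to the underlying
// mlir::Operation, so getOperation() no longer needs to be spelled out.
template <typename OpTy>
mlir::Location GetLocation(OpTy op) {
  // Before: return op.getOperation()->getLoc();
  return op->getLoc();
}
```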

PiperOrigin-RevId: 348010582
2020-12-17 06:29:33 -08:00
Christian Sigg 099c130daf Fix MLIR include paths.
PiperOrigin-RevId: 347976151
2020-12-17 00:56:04 -08:00
A. Unique TensorFlower 5da9190dd9 Integrate LLVM at llvm/llvm-project@0cf7e4b252
Updates LLVM usage to match
[0cf7e4b252fe](https://github.com/llvm/llvm-project/commit/0cf7e4b252fe)

PiperOrigin-RevId: 347948887
2020-12-16 20:30:17 -08:00
Adrian Kuegel 61244b136c Try to avoid a segfault if we don't support a lowering.
It can happen that a lowering for a certain type is not implemented yet.
We should not segfault in such a case, but instead return a failure().
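
A hypothetical plain-C++ sketch of the guard (not the actual pattern code): look up the lowering for the element type and report failure instead of calling through a missing entry.

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical registry of per-element-type lowerings. When no lowering is
// registered, return false (the analogue of returning failure()) instead of
// invoking a missing entry and crashing.
bool LowerOp(const std::map<std::string, std::function<void()>>& lowerings,
             const std::string& element_type) {
  auto it = lowerings.find(element_type);
  if (it == lowerings.end()) return false;  // unsupported type: fail gracefully
  it->second();                             // run the registered lowering
  return true;
}
```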

PiperOrigin-RevId: 347801106
2020-12-16 04:58:17 -08:00
Alexander Belyaev e6e8920921 [KERNEL_GEN] Switch the pipeline to Linalg-on-Tensors.
PiperOrigin-RevId: 347781190
2020-12-16 01:51:15 -08:00
Tres Popp 6d7812bbc5 Correct non-ConversionRewriter transformation in LegalizeToLinalg
PiperOrigin-RevId: 347622657
2020-12-15 08:56:58 -08:00
Alexander Belyaev ddda2699fb [KERNEL_GEN] Switch the pipeline to Linalg-on-Tensors.
PiperOrigin-RevId: 347600145
2020-12-15 06:32:25 -08:00
River Riddle 9540e51617 [mlir][NFC] Replace usages of mlir/IR/StandardTypes.h with mlir/IR/BuiltinTypes.h
StandardTypes.h was moved to BuiltinTypes.h and is being removed.

PiperOrigin-RevId: 347559927
2020-12-15 00:59:29 -08:00
Tres Popp 9df327d88f Forward listeners in LhloLegalizeToParallelLoops builders
PiperOrigin-RevId: 347554379
2020-12-15 00:13:15 -08:00
Hanhan Wang 1a58f19664 [NFC] Make function names follow style guide.
Functions should start with a capital letter and have a capital letter for each
new word. See https://google.github.io/styleguide/cppguide.html#Function_Names
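
For instance (hypothetical names, for illustration only):

```cpp
// Hypothetical declarations illustrating the naming convention.
void mapLhloToScalarOp();  // before: lowerCamelCase
void MapLhloToScalarOp();  // after: each word capitalized, per the style guide
```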

PiperOrigin-RevId: 347420402
2020-12-14 10:46:55 -08:00
Alexander Belyaev 8b35a75d4a [KERNEL_GEN] Switch the pipeline to Linalg-on-Tensors.
PiperOrigin-RevId: 347368063
2020-12-14 05:46:47 -08:00
River Riddle 6b439f7eee [mlir][NFC] Replace usages of mlir/IR/StandardTypes.h with mlir/IR/BuiltinTypes.h
StandardTypes.h was moved to BuiltinTypes.h and is being removed.

PiperOrigin-RevId: 347115952
2020-12-11 19:01:25 -08:00
River Riddle 3abdd556de [mlir][NFC] Replace usages of mlir/IR/StandardTypes.h with mlir/IR/BuiltinTypes.h
StandardTypes.h was moved to BuiltinTypes.h and is being removed.

PiperOrigin-RevId: 347111863
2020-12-11 18:12:59 -08:00
Tim Shen 22b91d62ff [XLA/GPU] Migrate all unnested elementwise emitters.
Also fix GetHloOutputs/GetHloOperands to treat aliased operands correctly.

PiperOrigin-RevId: 347055290
2020-12-11 12:45:21 -08:00
Smit Hinsu ab6ee11813 Fix folding of HLO SliceOp with zero elements
This was causing division by zero in this case.
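
A hypothetical sketch of such a guard (not the actual folder code): detect the zero-element case up front and skip folding.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: a shape with any zero dimension has zero elements,
// so the folder should bail out early instead of dividing by that size.
bool HasZeroElements(const std::vector<int64_t>& shape) {
  for (int64_t dim : shape)
    if (dim == 0) return true;
  return false;
}
```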

PiperOrigin-RevId: 346920942
2020-12-10 20:22:48 -08:00
Rahul Joshi f232da1f9d [MLIR:HLO] Add window_reversal attribute to convolution attributes.
- Add this attribute to match the corresponding XLA HLO attribute on convolution
  operations.
- A true value indicates a reversal of the corresponding kernel spatial dimension.
- Since the XLA builder does not support this attribute, use a custom HLO converter to map
  from mlir::mhlo::ConvOp to XLA.

PiperOrigin-RevId: 346891737
2020-12-10 16:39:19 -08:00
A. Unique TensorFlower 6a05893169 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346725498
2020-12-10 00:21:36 -08:00
Christian Sigg 51f535454d Use OpState::operator->() to get to member functions in Operation so we can remove the corresponding methods from OpState.
PiperOrigin-RevId: 346721668
2020-12-09 23:54:07 -08:00
Alexander Belyaev c36afd275e [HLO] Add a pattern for HLO ConstOp to HLO -> Linalg conversion.
PiperOrigin-RevId: 346718273
2020-12-09 23:24:57 -08:00
Tim Shen cfcf741932 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346716519
2020-12-09 23:08:13 -08:00
A. Unique TensorFlower 65ebd85563 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346658288
2020-12-09 16:05:43 -08:00
Tim Shen 1c10e1fec6 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346624905
2020-12-09 13:31:50 -08:00
A. Unique TensorFlower 1a5a1b5f41 [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346574093
2020-12-09 09:54:16 -08:00
Tim Shen 3c33fe4b9e [XLA/GPU] Migrate all unnested elementwise emitters.
PiperOrigin-RevId: 346559170
2020-12-09 08:42:51 -08:00
Alexander Belyaev e9057cbd93 [mlir] Enable conversion of HLO SignOp to Linalg.
PiperOrigin-RevId: 346380514
2020-12-08 12:08:26 -08:00
Stephan Herhut c3790af758 Add plumbing for or and xor to hlo to lhlo and linalg lowerings.
PiperOrigin-RevId: 346311314
2020-12-08 06:39:02 -08:00
Stephan Herhut dd5895d083 Extend unranked hlo transformations to also support and, or and xor.
PiperOrigin-RevId: 346270393
2020-12-08 01:00:26 -08:00
Benjamin Kramer 5235eceea0 Lower mhlo shifts to linalg
PiperOrigin-RevId: 346161253
2020-12-07 13:02:32 -08:00
Tres Popp d327fc5737 [kernel_gen] Lower max rank specialization from 6 to 5
We don't care much about rank 6 broadcasting operations and this lowers compile times significantly.

PiperOrigin-RevId: 346046601
2020-12-07 02:18:38 -08:00
Smit Hinsu bc7b6374c8 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number to u64 directly sign-extends it with leading 1s, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit width.
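
In C++ terms the issue is sign extension during widening; a small self-contained illustration (the hi/lo packing order below is an assumption, not taken from the change):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  int32_t seed_lo = -1;  // a negative s32 seed
  int32_t seed_hi = 7;

  // Widening a negative s32 straight to u64 sign-extends: all 64 bits are 1s.
  uint64_t bad = static_cast<uint64_t>(seed_lo);  // 0xffffffffffffffff

  // Cast to an unsigned type of the same bit width first, then combine.
  uint64_t lo = static_cast<uint32_t>(seed_lo);   // 0x00000000ffffffff
  uint64_t hi = static_cast<uint32_t>(seed_hi);
  uint64_t combined = (hi << 32) | lo;            // 0x00000007ffffffff

  std::printf("%016llx %016llx\n", static_cast<unsigned long long>(bad),
              static_cast<unsigned long long>(combined));
  return 0;
}
```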

PiperOrigin-RevId: 345902167
2020-12-05 18:55:41 -08:00
Phoenix Meadowlark c33bdcbd03 Remove fold of `mhlo.compare(%arg0, %arg0)` for floating types.
Two operands being the same SSA value isn't sufficient for equality with floating-point types, as `NaN != NaN`. As written, this fold causes `tf.IsNan` to [miscompile](https://github.com/google/iree/issues/4061).
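
The underlying IEEE-754 behavior is easy to confirm (illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <limits>

int main() {
  float nan = std::numeric_limits<float>::quiet_NaN();
  // NaN compares unequal to everything, including itself, so comparing an
  // SSA value with itself cannot be folded to a constant for float types.
  assert(nan != nan);
  assert(!(nan == nan));
  assert(std::isnan(nan));
  return 0;
}
```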

PiperOrigin-RevId: 345730640
2020-12-04 12:15:02 -08:00
Smit Hinsu 9bd1995f90 Legalize XlaReplicaId to HLO replica-id op
Also, define shape inference function for HLO replica-id op.

PiperOrigin-RevId: 345714342
2020-12-04 11:04:40 -08:00
A. Unique TensorFlower e87d53742b Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number to u64 directly sign-extends it with leading 1s, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit width.

PiperOrigin-RevId: 345618958
2020-12-04 00:04:10 -08:00
Smit Hinsu 9456af5880 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number to u64 directly sign-extends it with leading 1s, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit width.

PiperOrigin-RevId: 345605910
2020-12-03 22:09:56 -08:00
Tres Popp 7c3f049c8e [kernel_gen] Lower max rank specialization from 6 to 5
We don't care much about rank 6 broadcasting operations and this lowers compile times significantly.

PiperOrigin-RevId: 345466476
2020-12-03 09:19:25 -08:00
Rahul Joshi dbbdfea95b [MLIR:HLO] Generate enum decls for HLO and LHLO GPU dialects.
- Split out enum definitions in hlo dialect into a separate .td file (similar to structs)
  and generate enum decl/defs for these enums.
- Also split out the LHLO GPU enums into a separate .td file and generate enum
  decl/defs for these enums as well.
- Remove unused dialect from ConvolutionAttributes and generate lhlo_gpu enums.
- Add appropriate namespace for all the enums.

PiperOrigin-RevId: 345277240
2020-12-02 11:39:23 -08:00
A. Unique TensorFlower 1b711670bc Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number to u64 directly sign-extends it with leading 1s, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit width.

PiperOrigin-RevId: 345239817
2020-12-02 08:42:07 -08:00
Smit Hinsu 733fc6d032 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number to u64 directly sign-extends it with leading 1s, which is not what we want when combining two s32 seeds into a single u64. Fixed this by first casting to an unsigned number of the same bit width.

PiperOrigin-RevId: 345227848
2020-12-02 07:24:10 -08:00
River Riddle f89244381d [mlir][NFC] Replace usages of Function.h and Module.h with BuiltinOps.h
This is part of a larger refactoring cleaning up the BuiltinDialect of MLIR.

PiperOrigin-RevId: 345085278
2020-12-01 13:18:06 -08:00
A. Unique TensorFlower dd15c6cd84 [MLIR][KernelGen] Generate assertion message in `transform_unranked_hlo` pass
Use a constant to generate the correct assertion message. This avoids
confusion when debugging the lowering of the max rank specialization.

PiperOrigin-RevId: 344769021
2020-11-30 01:41:09 -08:00
Adrian Kuegel d14c63da54 Add a canonicalization pattern to remove redundant dynamic_reshapes.
PiperOrigin-RevId: 344517381
2020-11-27 04:46:50 -08:00
Stephan Herhut f183e95e69 Register the new branch op rewrite patterns in hlo to lhlo conversion
for good measure.

PiperOrigin-RevId: 344512196
2020-11-27 03:44:28 -08:00
Adrian Kuegel 6a71a84302 Support different input/output types for TransformUnrankedHlo.
Also generate the tf.Equal kernel, now that it works.

PiperOrigin-RevId: 344402014
2020-11-26 04:20:34 -08:00
Alexander Belyaev 5583c63cab [KERNEL_GEN] Add unranked Conj kernel.
PiperOrigin-RevId: 344243271
2020-11-25 06:37:26 -08:00