Commit Graph

169 Commits

Smit Hinsu 733fc6d032 Fix handling of negative seeds in random number generator op kernels for XLA
Casting a negative s32 number directly to u64 sign-extends, leaving leading 1s in the representation, which is not what we want when building a single u64 out of two s32 seeds. Fixed this by first casting to an unsigned number of the same bit width.
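A minimal C++ sketch of the issue and the fix (hypothetical helper names, not the actual XLA kernel code):

```cpp
#include <cstdint>

// Buggy: casting a negative int32_t straight to uint64_t sign-extends,
// so the high 32 bits become all 1s and clobber the other seed's bits.
uint64_t CombineSeedsBuggy(int32_t hi, int32_t lo) {
  return (static_cast<uint64_t>(hi) << 32) | static_cast<uint64_t>(lo);
}

// Fixed: first cast each seed to an unsigned integer of the same bit
// width, so the widening to 64 bits is a zero-extension.
uint64_t CombineSeedsFixed(int32_t hi, int32_t lo) {
  return (static_cast<uint64_t>(static_cast<uint32_t>(hi)) << 32) |
         static_cast<uint64_t>(static_cast<uint32_t>(lo));
}
```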

PiperOrigin-RevId: 345227848
2020-12-02 07:24:10 -08:00
River Riddle f89244381d [mlir][NFC] Replace usages of Function.h and Module.h with BuiltinOps.h
This is part of a larger refactoring cleaning up the BuiltinDialect of MLIR.
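A sketch of what the change looks like at use sites, assuming the standard mlir/IR/ include prefix:

```cpp
// Before: FuncOp and ModuleOp were declared in dedicated headers.
// #include "mlir/IR/Function.h"
// #include "mlir/IR/Module.h"

// After: they are provided together with the other builtin ops.
#include "mlir/IR/BuiltinOps.h"
```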

PiperOrigin-RevId: 345085278
2020-12-01 13:18:06 -08:00
A. Unique TensorFlower dd15c6cd84 [MLIR][KernelGen] Generate assertion message in `transform_unranked_hlo` pass
Use a constant to generate the correct assertion message. This avoids
confusion when debugging the lowering of the max rank specialization.

PiperOrigin-RevId: 344769021
2020-11-30 01:41:09 -08:00
Adrian Kuegel d14c63da54 Add a canonicalization pattern to remove redundant dynamic_reshapes.
PiperOrigin-RevId: 344517381
2020-11-27 04:46:50 -08:00
Stephan Herhut f183e95e69 Register the new branch op rewrite patterns in hlo to lhlo conversion
for good measure.

PiperOrigin-RevId: 344512196
2020-11-27 03:44:28 -08:00
Adrian Kuegel 6a71a84302 Support different input/output type for TransformUnrankedHlo.
Also generate the tf.Equal kernel, now that it works.

PiperOrigin-RevId: 344402014
2020-11-26 04:20:34 -08:00
Alexander Belyaev 5583c63cab [KERNEL_GEN] Add unranked Conj kernel.
PiperOrigin-RevId: 344243271
2020-11-25 06:37:26 -08:00
Lucy Fox 85f92a1651 [KernelGen] Lower tf.Erf and tf.Erfc ops to CHLO.
This does not include the lowerings from CHLO to LMHLO.

PiperOrigin-RevId: 344091604
2020-11-24 10:55:43 -08:00
A. Unique TensorFlower 7f239c7ba2 Add canonicalizer for Reshape(Broadcast(X)) pattern when it is an identity sequence
PiperOrigin-RevId: 343251257
2020-11-19 02:32:45 -08:00
A. Unique TensorFlower bba9968ff0 Integrate LLVM at llvm/llvm-project@499bce3aba
Updates LLVM usage to match
[499bce3abab8](https://github.com/llvm/llvm-project/commit/499bce3abab8)

PiperOrigin-RevId: 342921326
2020-11-17 12:30:18 -08:00
A. Unique TensorFlower 7a6a6ffa40 Integrate LLVM at llvm/llvm-project@65f3e121fe
Updates LLVM usage to match
[65f3e121fe4f](https://github.com/llvm/llvm-project/commit/65f3e121fe4f)

PiperOrigin-RevId: 342859791
2020-11-17 07:38:03 -08:00
Stephan Herhut 0c7152e65c Extend fusion root heuristic to also work in partially bufferized programs.
We now follow data flow through tensor_cast, tensor_load and tensor_to_memref
operations.

PiperOrigin-RevId: 342851104
2020-11-17 06:34:48 -08:00
Tres Popp be9ae88eaa Consider aliases through control flow for lhlo-fuse-linalg.
This should handle scf.if and shape.assuming regions,
which we care about in kernel_gen.

PiperOrigin-RevId: 342584762
2020-11-16 01:12:45 -08:00
Ben Vanik 88a5bde51f Fixing M_PI compile error on MSVC.
M_PI and other math constants (used in chlo_legalize_hlo_patterns.td)
are not part of the C++ standard and must be enabled on MSVC
(similar to _GNU_SOURCE adding glibc symbols to POSIX headers).
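A minimal sketch of the usual MSVC workaround (how the fix is actually wired into the build is not shown in this log):

```cpp
// _USE_MATH_DEFINES must be defined before the math headers are
// included (or passed as a compiler flag) for MSVC to expose M_PI.
#define _USE_MATH_DEFINES
#include <cmath>

double TwoPi() { return 2.0 * M_PI; }
```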

PiperOrigin-RevId: 342432987
2020-11-14 10:02:04 -08:00
Smit Hinsu d0901a83e5 Enable fallback legalization for MaxPoolGradGrad and MaxPool3DGradGrad ops
Requires:
* Override for ReducePrecision in HloMlirBuilder
* Sinking of constants for ReduceWindow op

PiperOrigin-RevId: 342330848
2020-11-13 13:49:05 -08:00
Stephan Herhut c344695bca Properly configure patterns in hlo to lhlo conversion with typeconverter.
Otherwise, materializations for partial conversions do not work.

PiperOrigin-RevId: 342215962
2020-11-13 00:58:51 -08:00
Rahul Joshi 1958f228ec [MLIR:HLO] Extend CustomCall to support multiple outputs.
- Extend MHLO CustomCall to have multiple tensors as results.
- Extend LHLO CustomCall to have multiple memrefs for output operands.
- Fix HLO->LHLO and XLA HLO->LHLO mapping for CustomCall to set up the
  operand_segment_sizes attribute correctly.

PiperOrigin-RevId: 342067762
2020-11-12 09:46:25 -08:00
Tres Popp 1dffa62fe9 Fold away shape.shape_of(mhlo.dynamic_reshape(inp, shape))
This specific pattern can be replaced with the shape
that was passed to dynamic_reshape. It is implemented as a
canonicalization pattern on mhlo.dynamic_reshape so that it
fits into the existing canonicalization infrastructure.
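A hypothetical sketch of such a canonicalization pattern (class and accessor names are assumptions, not the actual mlir-hlo code; the real pattern also has to check that the shape types are compatible):

```cpp
// Illustrative includes; actual header paths differ by repo layout.
#include "mlir/Dialect/Shape/IR/Shape.h"
#include "mlir/IR/PatternMatch.h"
#include "mhlo/IR/hlo_ops.h"  // assumed location of mhlo::DynamicReshapeOp

// shape.shape_of(mhlo.dynamic_reshape(%inp, %shape)) --> %shape
struct FoldShapeOfDynamicReshape
    : public mlir::OpRewritePattern<mlir::shape::ShapeOfOp> {
  using OpRewritePattern::OpRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(mlir::shape::ShapeOfOp op,
                  mlir::PatternRewriter &rewriter) const override {
    auto reshape =
        op->getOperand(0).getDefiningOp<mlir::mhlo::DynamicReshapeOp>();
    if (!reshape) return mlir::failure();
    // The shape of the reshape result is, by construction, the shape
    // operand that was passed to dynamic_reshape.
    rewriter.replaceOp(op, reshape.getOutputShape());
    return mlir::success();
  }
};
```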

PiperOrigin-RevId: 342009365
2020-11-12 02:48:26 -08:00
Rahul Joshi 745c8aa0b1 Extract some duplicated code into a helper function.
- Extract the code that creates result memrefs into a ConvertResults function.
- Also fix a bug when using reifyReturnTypes: use the correct index for result_shape
  instead of always using the first element.

PiperOrigin-RevId: 341852227
2020-11-11 10:01:00 -08:00
Alexander Belyaev d4f2c767d3 [HLO] Fix HLO DynamicBroadcastInDimOp -> LHLO lowering.
The conversion had a bug in the computation of the strides and sizes args for std.memref_reinterpret_cast. The previous version also relied on linalg::ReshapeOp to do broadcasting when the rank of the output was higher than the rank of the input. Now the broadcasting is done entirely via descriptor modification, and linalg::ReshapeOp was replaced with CopyOp.

PiperOrigin-RevId: 341379871
2020-11-09 04:24:40 -08:00
Smit Hinsu 4ef12aa000 Update GetDimensionSize and SetDimensionSize ops to use I64 attribute for dimension
This is to match HLO semantics and general dimension semantics in MLIR.

Also,

* Define minimal verifier for these ops.
* Add folder for SetDimensionSize op on static shaped dimension.
* Fix assumption of ranked shape in GetDimensionSize op.

PiperOrigin-RevId: 341150923
2020-11-06 18:03:04 -08:00
Robert Suderman a926e0f040 Removed Op(Complex, Real) lowering to address complex type inference issue
Lowerings that depended on operations between real and complex types may
not infer the correct intermediate type. These lowerings were removed, as
the operations they generate are not technically legal. Updated tests
to validate this.

PiperOrigin-RevId: 341128903
2020-11-06 15:24:23 -08:00
Tres Popp af4c9774dc Handle rank 1 broadcasts in unranked kernel lowering.
Previously this started at rank 2 after checking for scalars and equal shapes. This resulted in cases such as <1xf32> + <2xf32> being treated as impossible.

PiperOrigin-RevId: 341043965
2020-11-06 07:22:43 -08:00
Dmitry Volodin 1821c69910 PR #44405: Fix typos in compiler directory
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44405

Splitting #43857 by top-level directories.
Copybara import of the project:

--
fa5da7d5478649d11321dcac9f867b0a57e4798a by Dmitry Volodin <mr.molkree@gmail.com>:

fix typos in compiler dir

--
4d3c9f047f7ecb8ab299f1bf28a86fd39096eee7 by Dmitry Volodin <mr.molkree@gmail.com>:

fix one test as "atleast" in it comes from Bazel

--
9440ebaaa9fc4a735f7f72f0c8f0de4ec58afbd6 by Dmitry Volodin <mr.molkree@gmail.com>:

a bit more

PiperOrigin-RevId: 340819994
2020-11-05 03:31:54 -08:00
Alexander Belyaev 3d930d08c2 [HLO] Delete LHLO memref cast ops and migrate to STD ones.
PiperOrigin-RevId: 340663578
2020-11-04 09:26:34 -08:00
Richard Uhler 82031b356c Improve error message for improperly shaped slice indices.
The slice indices must be rank-1 and have the same number of elements as the
rank of the operand. Give reasonable error messages for violations of these
requirements instead of a misleading error message that the types of the
indices don't all match.

PiperOrigin-RevId: 340660822
2020-11-04 09:10:51 -08:00
Marius Brehler ff9b8c6f65 PR #44499: Add missing dep on MLIRMhloPassIncGen target
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/44499

The file `sink_constants_to_control_flow.cc` includes the header
`PassDetail.h`, which itself includes `mhlo_passes.h.inc`. The latter is
not guaranteed to already be generated, since no dependency on
MLIRMhloPassIncGen was declared.
Copybara import of the project:

--
0ff51ccc88c1ba049eb2e9555afb54079bea39c9 by Marius Brehler <marius.brehler@iml.fraunhofer.de>:

Add missing dep on MLIRMhloPassIncGen target

The file `sink_constants_to_control_flow.cc` includes the header
`PassDetail.h`, which itself includes `mhlo_passes.h.inc`. The latter is
not guaranteed to already be generated, since no dependency on
MLIRMhloPassIncGen was declared.

PiperOrigin-RevId: 340485068
2020-11-03 11:18:48 -08:00
Sean Silva d3ea3abdec Remove `results_escape_functions` from HloLegalizeToLhlo
PiperOrigin-RevId: 340464958
2020-11-03 09:49:56 -08:00
Tres Popp 81e8d778c4 Fix bug using std.rank instead of shape.rank
PiperOrigin-RevId: 339890070
2020-10-30 09:59:24 -07:00
Tres Popp 76b30fd426 Move unranked chlo lowering to transform_unranked_hlo.
Additionally:
- Forward listeners through new if/else op builders.
This corrects an error that led to incomplete legalization of broadcasted op
lowering.
- Use OpConversionPattern to ensure up-to-date operand values are used.
PiperOrigin-RevId: 339838833
2020-10-30 02:56:44 -07:00
Nicolas Vasilache 880f603239 Drop OperationFolder usage with Linalg fusion.
PiperOrigin-RevId: 339653466
2020-10-29 06:10:14 -07:00
Thomas Joerg 3a6580bf75 Integrate LLVM at llvm/llvm-project@4d11daa659
Updates LLVM usage to match
[4d11daa659a1](https://github.com/llvm/llvm-project/commit/4d11daa659a1)

PiperOrigin-RevId: 339631220
2020-10-29 03:02:38 -07:00
Benjamin Kramer 3bf4277ea4 [MLIR] Add a lmhlo.reduce -> linalg.generic converter
Doesn't support tensors right now, as it's somewhat hairy to support both at
the same time. Since we use a generic lowering, the result is messy
and needs a mem2reg pass to eliminate extra load/store/allocas.

PiperOrigin-RevId: 339562971
2020-10-28 16:38:19 -07:00
Thomas Joerg 7363748bae Integrate LLVM at llvm/llvm-project@0fc1aa22ee
Updates LLVM usage to match
[0fc1aa22ee6a](https://github.com/llvm/llvm-project/commit/0fc1aa22ee6a)

PiperOrigin-RevId: 339239851
2020-10-27 06:56:16 -07:00
Benjamin Kramer b7aa01dbe0 Integrate LLVM at llvm/llvm-project@26750a1264
Updates LLVM usage to match
[26750a1264b3](https://github.com/llvm/llvm-project/commit/26750a1264b3)

PiperOrigin-RevId: 339127601
2020-10-26 15:11:52 -07:00
Smit Hinsu 6eda9ed273 Add compare_type optional attribute to CompareOp in HLO dialects
If unspecified, `compare_type` is FLOAT for float element types, SIGNED for signed element types, and UNSIGNED for unsigned element types. For float element types, `compare_type` can also be TOTALORDER.

- Added import and export support for the attribute.
- Restricted legalization from HLO to TF to the default compare types.
- Updated existing usage of the CompareOp.

PiperOrigin-RevId: 339099219
2020-10-26 12:58:29 -07:00
Richard Uhler f9843fabe1 Use InferTypeOpInterface for HLO_SliceOp.
Instead of having a custom builder to construct a slice op without an explicit
return type.

PiperOrigin-RevId: 339058864
2020-10-26 09:54:13 -07:00
Hanhan Wang 444fae9bac [NFC] Make naming style consistent.
Use lowercase with underscores between words instead of camelCase.

PiperOrigin-RevId: 338722328
2020-10-23 12:23:25 -07:00
Sean Silva c3f4ab06d3 Use OpConversionPattern instead of BufferizeOpConversionPattern
As described in mlir/Transforms/Bufferize.h, patterns that don't need the special methods on a BufferizeTypeConverter should use a regular OpConversionPattern.

PiperOrigin-RevId: 338424819
2020-10-22 01:20:43 -07:00
Roman Dzhabarov ae00ae487c [mlir] Simplify DRR matching patterns with equal operands for operators.
The diff https://reviews.llvm.org/D89254 introduced implicit matching between same-named arguments. Modify usages accordingly.

PiperOrigin-RevId: 338090110
2020-10-20 10:47:09 -07:00
A. Unique TensorFlower 7a983ea389 Add folder for mhlo::pad
PiperOrigin-RevId: 337827560
2020-10-19 04:21:44 -07:00
A. Unique TensorFlower 4a18aa41ee Add folder to mhlo::round_nearest_afz
PiperOrigin-RevId: 337823786
2020-10-19 03:45:15 -07:00
Jacques Pienaar 27968619b7 Verify non-scalar inputs for HLO concat
XLA HLO concat does not accept scalars, so fail verification if this occurs. This avoids a segfault when accessing an empty output shape.

PiperOrigin-RevId: 337618167
2020-10-16 19:39:31 -07:00
Stephan Herhut 2e30b59ddc Extend hlo-fuse-linalg slightly to support aliased returns.
The fusion heuristic identifies the root of a fusion by checking whether an
output of a linalg operation is a function result. It did not consider outputs
flowing through aliasing operations (like casts).

PiperOrigin-RevId: 337479910
2020-10-16 04:04:23 -07:00
A. Unique TensorFlower 51cd4200b6 Make LMHLO's Dot have the same power as MHLO's DotGeneral.
PiperOrigin-RevId: 337391565
2020-10-15 15:09:06 -07:00
A. Unique TensorFlower 05ee41baf8 Add folder for mhlo::scatter
PiperOrigin-RevId: 337274351
2020-10-15 03:26:05 -07:00
Adrian Kuegel 0f36979e2c Integrate LLVM at llvm/llvm-project@220de1f32a
Updates LLVM usage to match
[220de1f32add](https://github.com/llvm/llvm-project/commit/220de1f32add)

PiperOrigin-RevId: 337270545
2020-10-15 02:54:24 -07:00
Rahul Joshi f6b4e6758a Add GPU specific LMHLO level ops
- Introduce operations in a new lmhlo_gpu dialect that map to GPU library function calls
  in the XLA:GPU backend.
- Add basic unit tests as well.

PiperOrigin-RevId: 337132166
2020-10-14 11:23:55 -07:00
A. Unique TensorFlower 15fe5ca737 Integrate LLVM at llvm/llvm-project@6713332fdd
Updates LLVM usage to match
[6713332fddb7](https://github.com/llvm/llvm-project/commit/6713332fddb7)

PiperOrigin-RevId: 336988895
2020-10-13 17:21:14 -07:00
Adrian Kuegel 53d16d9f8e Integrate LLVM at llvm/llvm-project@93377888ae
Updates LLVM usage to match
[93377888ae89](https://github.com/llvm/llvm-project/commit/93377888ae89)

PiperOrigin-RevId: 336669842
2020-10-12 08:42:31 -07:00