The memory setup changed significantly with the 1.10 release for TaskManagers and with the 1.11 release for Masters. Many configuration options were removed or their semantics changed. This guide will help you migrate the TaskManager memory configuration from Flink <= 1.9 to >= 1.10 and the Master memory configuration from Flink <= 1.10 to >= 1.11.
Note Before version 1.10 for TaskManagers and before 1.11 for Masters, Flink did not require any memory-related options to be set, as they all had default values. The new memory configuration requires that at least one subset of the following options is configured explicitly; otherwise the configuration will fail:
| for TaskManager: | for Master: |
| --- | --- |
| taskmanager.memory.flink.size | jobmanager.memory.flink.size |
| taskmanager.memory.process.size | jobmanager.memory.process.size |
| taskmanager.memory.task.heap.size and taskmanager.memory.managed.size | jobmanager.memory.heap.size |
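For example, a minimal configuration satisfying the new requirement could set only the total process memory for each process (the values below are illustrative, not recommendations):

```yaml
# flink-conf.yaml — one valid minimal subset of the required options
taskmanager.memory.process.size: 1728m
jobmanager.memory.process.size: 1600m
```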
This spreadsheet can also help to evaluate and compare the results of the legacy and new memory computations.
This chapter shortly lists all changes to Flink’s memory configuration options introduced with the 1.10 release. It also references other chapters for more details about migrating to the new configuration options.
The following options are completely removed. If they are still used, they will be ignored.

| Removed option | Note |
| --- | --- |
| taskmanager.memory.fraction | Check the description of the new option taskmanager.memory.managed.fraction. The new option has different semantics and the value of the deprecated option usually has to be adjusted. See also how to migrate managed memory. |
| taskmanager.memory.off-heap | On-heap managed memory is no longer supported. See also how to migrate managed memory. |
| taskmanager.memory.preallocate | Pre-allocation is no longer supported and managed memory is always allocated lazily. See also how to migrate managed memory. |
The following options are deprecated, but if they are still used they will be interpreted as new options for backwards compatibility:

| Deprecated option | Interpreted as |
| --- | --- |
| taskmanager.memory.size | taskmanager.memory.managed.size, see also how to migrate managed memory. |
Although the network memory configuration has not changed much, it is recommended to verify it. The derived network memory can change if other memory components have new sizes, e.g. the total memory of which the network memory can be a fraction. See also the new detailed memory model.
The container cut-off configuration options, containerized.heap-cutoff-ratio and containerized.heap-cutoff-min, have no effect anymore. See also how to migrate container cut-off.
The previous options which were responsible for the total memory used by Flink are taskmanager.heap.size or taskmanager.heap.mb. Despite their naming, they included not only JVM Heap but also other off-heap memory components. The options have been deprecated.
The Mesos integration also had a separate option with the same semantics, mesos.resourcemanager.tasks.mem, which has also been removed.
If you use the mentioned legacy options without specifying the corresponding new options, they will be directly translated into the following new options:
- total Flink memory (taskmanager.memory.flink.size) for standalone deployments
- total process memory (taskmanager.memory.process.size) for containerized deployments (Yarn or Mesos)
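As a sketch, a legacy standalone configuration could be migrated like this (the value is illustrative):

```yaml
# before (Flink <= 1.9):
# taskmanager.heap.size: 1024m

# after (Flink >= 1.10), standalone deployment:
taskmanager.memory.flink.size: 1024m
# or, for Yarn/Mesos containers, size the whole process instead:
# taskmanager.memory.process.size: 1024m
```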
It is also recommended to use these new options instead of the legacy ones, as the legacy options may be completely removed in future releases.
See also how to configure total memory now.
JVM Heap memory previously consisted of the managed memory (if configured to be on-heap) and the rest which included any other usages of heap memory. This rest was the remaining part of the total memory, see also how to migrate managed memory.
Now, if only the total Flink memory or total process memory is configured, the JVM Heap is what is left after subtracting all other components from the total memory, see also how to configure total memory.
Additionally, you can now have more direct control over the JVM Heap assigned to the operator tasks (taskmanager.memory.task.heap.size), see also Task (Operator) Heap Memory.
The JVM Heap memory is also used by the heap state backends (MemoryStateBackend or FsStateBackend) if one of them is chosen for streaming jobs.
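For example, to reserve heap explicitly for operator code, a sketch could look like this (the values are illustrative and must be combined with one of the required sizing options):

```yaml
# explicitly size the heap available to operator tasks,
# the rest of the total Flink memory is distributed to the other components
taskmanager.memory.flink.size: 1280m
taskmanager.memory.task.heap.size: 512m
```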
See also how to configure managed memory now.
The previous option to configure managed memory size (taskmanager.memory.size) was renamed to taskmanager.memory.managed.size and deprecated. It is recommended to use the new option, because the legacy one can be removed in future releases.
If not set explicitly, the managed memory was previously derived as a fraction (taskmanager.memory.fraction) of the total memory minus network memory and container cut-off (the cut-off applied only to Yarn and Mesos deployments). This option has been completely removed and will have no effect if still used. Please use the new option taskmanager.memory.managed.fraction instead. It sets the managed memory to the specified fraction of the total Flink memory if the managed memory size is not set explicitly by taskmanager.memory.managed.size.
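As an illustration (the values are hypothetical), the fraction is applied to the total Flink memory:

```yaml
taskmanager.memory.flink.size: 2048m
# if taskmanager.memory.managed.size is not set, the managed memory
# is derived as 0.4 * 2048m ≈ 819m
taskmanager.memory.managed.fraction: 0.4
```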
If the RocksDBStateBackend is chosen for a streaming job,
its native memory consumption should now be accounted for in managed memory.
The RocksDB memory allocation is limited by the managed memory size.
This should prevent the killing of containers on Yarn and Mesos.
You can disable the RocksDB memory control by setting state.backend.rocksdb.memory.managed to false. See also how to migrate container cut-off.
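A sketch of disabling the RocksDB memory control (note that RocksDB will then allocate native memory outside the managed memory budget, which can again risk container kills):

```yaml
# let RocksDB manage its own native memory, unbounded by managed memory
state.backend.rocksdb.memory.managed: false
```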
Additionally, the following changes have been made:
taskmanager.memory.off-heap is removed and will have no effect anymore.
taskmanager.memory.preallocate is removed and will have no effect anymore.
Previously, there were options responsible for setting the JVM Heap size of the Flink Master: jobmanager.heap.size or jobmanager.heap.mb.
Despite their naming, they represented the JVM Heap only for standalone deployments. For containerized deployments (Kubernetes and Yarn), they also included other off-heap memory consumption. The size of the JVM Heap was additionally reduced by the container cut-off, which has been completely removed as of 1.11.
The Mesos integration did not take the mentioned legacy memory options into account. The scripts provided in Flink to start the Mesos Master process did not set any JVM memory arguments. Since the 1.11 release, they are set the same way as by the standalone deployment scripts.
The mentioned legacy options have been deprecated. If they are used without specifying the corresponding new options, they will be directly translated into the following new options:
- JVM Heap (jobmanager.memory.heap.size) for standalone and Mesos deployments
- total process memory (jobmanager.memory.process.size) for containerized deployments (Kubernetes and Yarn)

It is also recommended to use these new options instead of the legacy ones, as the legacy options may be completely removed in future releases.
Now, if only the total Flink memory or total process memory is configured, the JVM Heap is likewise derived as what is left after subtracting all other components from the total memory, see also how to configure total memory. Additionally, you can now have more direct control over the JVM Heap by adjusting the jobmanager.memory.heap.size option.
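For example, to pin the Master JVM Heap explicitly (the value is illustrative):

```yaml
# size the Master JVM Heap directly instead of deriving it from a total
jobmanager.memory.heap.size: 1024m
```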
For containerized deployments, you could previously specify a cut-off memory. This memory could accommodate unaccounted memory allocations.
Dependencies which were not directly controlled by Flink were the main source of those allocations, e.g. RocksDB, JVM internals, etc.
This is no longer available, and the related configuration options (containerized.heap-cutoff-ratio and containerized.heap-cutoff-min) have no effect anymore. The new memory model introduced more specific memory components to address these concerns.
In streaming jobs which use RocksDBStateBackend, the RocksDB native memory consumption should be accounted for as a part of the managed memory now. The RocksDB memory allocation is also limited by the configured size of the managed memory. See also migrating managed memory and how to configure managed memory now.
The other direct or native off-heap memory consumers can now be addressed by the following new configuration options:
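For illustration (the values are hypothetical), such consumers can now be budgeted explicitly instead of relying on the cut-off:

```yaml
# reserve off-heap memory for user code running in tasks
taskmanager.memory.task.off-heap.size: 128m
# JVM metaspace now has its own dedicated memory component
taskmanager.memory.jvm-metaspace.size: 256m
```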
This section describes the changes to the default flink-conf.yaml shipped with Flink.
The total memory for TaskManagers (taskmanager.heap.size) is replaced by taskmanager.memory.process.size in the default flink-conf.yaml. The value increased from 1024 MB to 1728 MB.
The total memory for Masters (jobmanager.heap.size) is replaced by jobmanager.memory.process.size in the default flink-conf.yaml. The value increased from 1024 MB to 1600 MB.
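Summarizing the default-value changes described above as a before/after sketch:

```yaml
# old default flink-conf.yaml (TaskManagers <= 1.9, Masters <= 1.10):
# taskmanager.heap.size: 1024m
# jobmanager.heap.size: 1024m

# new default flink-conf.yaml (TaskManagers >= 1.10, Masters >= 1.11):
taskmanager.memory.process.size: 1728m
jobmanager.memory.process.size: 1600m
```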
See also how to configure total memory now.