Introduction
OutOfMemoryException (seen in Java as java.lang.OutOfMemoryError) is one of the most frustrating problems Java developers face in production. When it happens inside a Hazelcast cluster, the impact is even more serious: nodes crash, data partitions get redistributed, performance drops, and entire applications may become unstable.
Hazelcast, like any in-memory data grid, is fast and efficient, but it requires careful memory planning. If your cluster stores millions of entries, performs frequent queries, or uses features like Near Cache or backups, improper memory settings can lead to out of memory issues.
In this guide we will explore why OutOfMemoryExceptions happen in Hazelcast, how to avoid them, and how to configure your maps, caches, and memory settings correctly. This is a complete, practical, real-world guide for Java developers working with Hazelcast in production.

1. Why OutOfMemoryExceptions Happen in Hazelcast
Hazelcast stores data in memory, and if that memory is not controlled properly, it quickly becomes full. OutOfMemoryExceptions happen when the JVM heap or off-heap memory reaches its limit. This usually occurs because Hazelcast maps grow without boundaries, objects are too large, or the system stores more backups and cached data than expected.
Common OOM Scenarios
- Maps with no eviction policies: Hazelcast will keep adding entries to the map forever. Over time, the map grows so large that it fills up all available memory, causing an OutOfMemoryException.
- Huge objects stored in IMaps: Storing heavy objects, such as values containing large lists, images, long strings, or deeply nested structures, consumes a lot of memory.
- Too many backups (1 backup means 2× memory usage): Every backup makes a full copy of the map’s data.
1 backup = 2 copies
2 backups = 3 copies
This multiplies memory usage and can easily exhaust heap or off-heap memory.
- Near Cache storing unbounded entries: If Near Cache is enabled without a size limit or eviction policy, each client keeps a local copy of the map that grows unchecked.
- Heavy queries without indexes: Heavy SQL or predicate queries executed without indexes can also cause OutOfMemory errors, because every entry must be scanned and deserialized.
- Serialization of large objects: Large objects require more space when they are serialized. Hazelcast stores both key and value in serialized form, so if your objects are big, the serialized binary form plus metadata becomes even bigger, leading to faster memory consumption.
- Insufficient JVM heap: If the configured heap size is too small for the amount of data stored, Hazelcast will hit an out-of-memory error even before reaching its expected capacity.
- Memory pressure during partition migrations: When a node leaves the cluster or a new node joins, Hazelcast redistributes partitions. During this time, temporary copies of the data are created. If the cluster is already near its memory limits, migrations can trigger an OutOfMemoryException.
Hazelcast will store everything you put into it unless you configure limits. This is why unbounded maps are one of the main reasons clusters run out of memory.
2. Understanding Hazelcast Memory Architecture
Before learning to avoid OutOfMemoryExceptions in Hazelcast, you must understand how Hazelcast uses memory internally. There are two ways to store data in Hazelcast:
1. On-heap Memory
This is the standard JVM heap, configured with -Xms and -Xmx. By default, all user data and map metadata are stored here.
Pros:
- Easy to configure
- Works in all editions of Hazelcast
Cons:
- Subject to Garbage Collection pauses
- Limited by JVM heap size
- Easier to run into OutOfMemoryError
2. Off-heap Memory (Native Memory)
The native memory feature, available in Hazelcast Enterprise, allows storing data outside the JVM heap. This is called the High-Density Memory Store.
Pros:
- No GC pressure
- More efficient storage
- Can allocate large memory regions
Cons:
- Requires Enterprise Edition
3. How Hazelcast Stores Data Internally
Before learning how to avoid OutOfMemoryExceptions in Hazelcast, let's look at how Hazelcast stores data. Hazelcast does not store your Java objects exactly as they are. Instead, it serializes them into a binary format and stores that binary form in memory. This makes distributed communication faster and avoids sending full Java objects across the network.
However, this internal storage format adds overhead. Each entry in an IMap contains not only the key and value, but also metadata, indexes, and optional backups. Knowing these internals helps developers estimate memory consumption more accurately.
1. Binary vs In-Memory Object Storage
Hazelcast stores data objects in two forms:
Binary Storage
- This is the default and recommended method.
- Objects are stored in serialized binary form.
- Faster for network transfer.
- Lower memory usage compared to storing full objects.
In-Memory Object Format
- Used when in-memory-format="OBJECT" is enabled.
- Hazelcast keeps actual Java objects on the heap.
- Faster read operations but:
- Higher memory consumption
- More GC pressure
- Higher OOM risk
Recommendation: Use in-memory-format="BINARY" unless you have a strong need for Java object references.
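For reference, the in-memory format can be set per map in the XML configuration; a minimal sketch (the map name is illustrative):

```xml
<map name="users">
    <in-memory-format>BINARY</in-memory-format>
</map>
```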
2. Cost of Serialization
Hazelcast's serialization process affects memory usage in three ways:
- Raw object size: Your Java object’s fields and nested structures.
- Serialized size: The binary version may be smaller or larger depending on:
- Type of serializer used
- Field count
- String length
- Collections
- Hazelcast metadata overhead: Hazelcast adds:
- timestamps
- version IDs
- expiration metadata
- partition information
That means storing a “small” object can actually cost 2 – 3× its size in the JVM heap.
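Hazelcast uses its own binary serialization formats rather than plain java.io serialization, but default JDK serialization is enough to make the overhead tangible: even a tiny object serializes to several times the size of its raw field data. A minimal sketch (the User class and its fields are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

public class SerializedSizeDemo {
    // Illustrative value object, similar to what might be stored in an IMap.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String username;
        final String email;
        User(String username, String email) {
            this.username = username;
            this.email = email;
        }
    }

    // Size in bytes of the default JDK-serialized form of an object.
    static int serializedSize(Object obj) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray().length;
    }

    public static void main(String[] args) {
        User user = new User("alice", "alice@example.com");
        int payloadChars = user.username.length() + user.email.length(); // 22 characters
        System.out.println("payload: " + payloadChars + " chars, serialized: "
                + serializedSize(user) + " bytes");
    }
}
```

The serialized form is several times larger than the 22 characters of payload, because the stream carries the class descriptor and field metadata, the same kind of per-entry overhead Hazelcast's own metadata adds on top of your data.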
3. Memory Cost of Map Entries
Every entry stored in a Hazelcast map includes:
- Serialized key
- Serialized value
- Map metadata
- Partition-table references
- Backup copies (if configured)
Memory Consumption Example
If a single entry consumes 500 bytes after serialization and metadata:
- With 1 backup → 1000 bytes total
- With 2 backups → 1500 bytes total
If you store 1 million such entries:
| Backup Count | Total Memory Required |
|---|---|
| 0 backups | 500 MB |
| 1 backup | 1 GB |
| 2 backups | 1.5 GB |
This is why many developers unexpectedly hit OutOfMemory errors: they forget that Hazelcast maintains multiple copies of their data.
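The table above is simple arithmetic: total memory ≈ entries × bytes per entry × (backups + 1). A quick sketch of that estimate (the 500-byte entry size is the assumption from the example):

```java
public class MemoryEstimate {
    // Rough cluster-wide footprint: entries × bytes per entry × (primary + backups).
    static long totalBytes(long entryCount, long bytesPerEntry, int backupCount) {
        return entryCount * bytesPerEntry * (1 + backupCount);
    }

    public static void main(String[] args) {
        long entries = 1_000_000L;
        long perEntry = 500L; // assumed serialized key + value + metadata
        for (int backups = 0; backups <= 2; backups++) {
            System.out.println(backups + " backup(s) -> "
                    + totalBytes(entries, perEntry, backups) / 1_000_000 + " MB");
        }
    }
}
```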
4. The Biggest Root Causes of OOM in Hazelcast
Below are the most common issues and how to solve them.
1. Storing Too Many Entries (No Eviction)
If your map has no eviction policy, Hazelcast will store data forever. This is the leading cause of OOM.
Wrong (No Eviction)
<map name="users">
<backup-count>1</backup-count>
</map>
Correct (With Eviction)
<map name="users">
<backup-count>1</backup-count>
<eviction eviction-policy="LRU" max-size-policy="PER_NODE" size="500000"/>
</map>
LRU (Least Recently Used) is an eviction policy used in Hazelcast to remove old or inactive entries from a map when it reaches its maximum size (e.g., 500,000 entries).
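For intuition about what LRU eviction does, the same semantics can be sketched in plain Java with an access-ordered LinkedHashMap. This is only an illustration of the policy, not how Hazelcast implements it internally:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // A capacity-bounded LRU map: an access-order LinkedHashMap that evicts the
    // least recently used entry once the size limit is exceeded.
    static <K, V> Map<K, V> boundedLru(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = boundedLru(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // "a" is now the most recently used
        cache.put("c", "3"); // capacity exceeded: "b" (least recently used) is evicted
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```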
2. Missing TTL (Time-To-Live)
Maps frequently store cached data. If your data gets stale, it makes no sense to keep it forever.
Add TTL
You can declare a TTL (Time-To-Live) in the map configuration:
<time-to-live-seconds>600</time-to-live-seconds>
This evicts entries after 600 seconds (10 minutes).
3. Large Objects or Huge Serialized Data
In Hazelcast, value objects that contain large datasets, lists, big arrays, or heavy nested structures may take megabytes each, and such entries can cause OutOfMemory errors.
You can fix this in several ways:
- Reduce object size
- Store lightweight DTOs
- Compress large fields manually
- Use Hazelcast compression options if available
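For the "compress large fields manually" option, here is a plain-JDK sketch using GZIP: compress the heavy field before putting the object into the map, and decompress after reading it back (class and method names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class FieldCompression {
    // Compress a large text field before storing it in the map value.
    static byte[] compress(String text) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(bytes)) {
            gzip.write(text.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }

    // Decompress the field after reading the value back from the map.
    static String decompress(byte[] compressed) {
        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return new String(gzip.readAllBytes(), StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        String largeField = "lorem ipsum ".repeat(10_000); // ~120 KB of repetitive text
        byte[] packed = compress(largeField);
        System.out.println(largeField.length() + " chars -> " + packed.length + " bytes");
        System.out.println(decompress(packed).equals(largeField)); // prints true
    }
}
```

Compression trades CPU for memory, so it pays off mainly for large, compressible fields that are read infrequently.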
4. Too Many Backups
Too many map backups in a cluster can also cause OOM, because backups multiply your memory usage.
- backup-count="1" → 2 copies of data
- backup-count="2" → 3 copies of data
The recommended setting for <backup-count>:
<backup-count>1</backup-count>
<async-backup-count>1</async-backup-count>
Configuring asynchronous backups reduces latency and memory pressure during write operations.
5. Heavy SQL/Predicate Queries Without Indexes
SQL or Predicate queries on huge maps without indexes force Hazelcast to scan every entry, loading them into memory, and that can cause OOM.
Add an index to avoid OutOfMemoryExceptions in Hazelcast:
<indexes>
<index type="HASH">
<attribute>username</attribute>
</index>
</indexes>
Adding indexes to the map configuration reduces memory overhead during scanning and improves query performance dramatically, which helps avoid OutOfMemoryExceptions in Hazelcast.
To learn more about indexes, refer to the dedicated article on indexing.
6. Unbounded Near Cache on Client Side
Near Cache is a local in-memory cache that stores frequently accessed data on the client side (or sometimes on the member node itself). Instead of fetching an entry from the remote Hazelcast cluster every time, the client reads it from its own local memory. It can also cause OOM if it is left unbounded.
You can configure as below to avoid OutOfMemoryExceptions in Hazelcast:
<near-cache name="javatecharc">
<invalidate-on-change>true</invalidate-on-change>
<max-size>20000</max-size>
<eviction eviction-policy="LRU"/>
<time-to-live-seconds>3600</time-to-live-seconds>
</near-cache>
- invalidate-on-change: clears stale entries when server data changes
- max-size: limits memory consumption
- eviction-policy: removes the least recently used entries
- TTL: automatically expires old entries
7. Insufficient JVM Memory
If the configured JVM heap is too small, the node will crash under load.
Recommended Heap Based on Use Case:
- Small maps: 4GB heap
- Medium maps (10M entries): 8–16GB
- Large clusters: Use off-heap native memory
5. Best Practices to Avoid OutOfMemoryExceptions in Hazelcast
In this section we explore the safety settings Java developers should always apply to avoid OutOfMemoryExceptions in Hazelcast.
1. Always Configure Eviction
Configuring eviction policy is your first line of defense to avoid OutOfMemoryExceptions in Hazelcast.
A sample XML eviction configuration:
<map name="products">
<eviction eviction-policy="LFU" max-size-policy="USED_HEAP_PERCENTAGE" size="70"/>
</map>
The setting size="70" with max-size-policy="USED_HEAP_PERCENTAGE" triggers eviction once the map uses 70% of the available heap.
2. Use TTL and Max Idle to Avoid OutOfMemoryExceptions in Hazelcast
What are TTL and Max Idle?
- TTL (Time-To-Live): the maximum lifetime of an entry after it is created or updated. Once the TTL expires, the entry is removed from the map regardless of access.
- Max Idle: the maximum time an entry can remain unused (no read or write). If an entry is not accessed within this configured time, it expires and is removed.
XML Example:
<time-to-live-seconds>3600</time-to-live-seconds>
<max-idle-seconds>600</max-idle-seconds>
3. Use Off-Heap Memory for Large Data (Enterprise Feature)
Off-heap, or native, memory lets Hazelcast store data outside of the configured JVM heap. Instead of putting serialized entries on the Java heap (which increases GC pressure), Hazelcast places them in a native memory region. This reduces garbage collection pauses and lets you handle much larger datasets per node.
To avoid OutOfMemoryExceptions in Hazelcast, configure the native memory:
<native-memory enabled="true">
<size unit="GIGABYTES">32</size>
<allocator-type>POOLED</allocator-type>
</native-memory>
4. Use MapStore or Persistence to Reduce Memory Load
Instead of storing everything in Hazelcast's memory, it is better to offload older data to a database.
A sample Java MapStore, extending Hazelcast's MapStoreAdapter so that only the callbacks you need must be overridden (implementing the MapStore interface directly would require all of its methods; fetchFromDb is a placeholder for your data-access call):
public class UserMapStore extends MapStoreAdapter<String, User> {
    @Override
    public void store(String key, User value) {
        // Save to DB
    }

    @Override
    public User load(String key) {
        // Read from DB
        return fetchFromDb(key);
    }
}
Register the UserMapStore in the Hazelcast XML configuration:
<map-store enabled="true">
<class-name>com.javatecharc.UserMapStore</class-name>
<write-delay-seconds>5</write-delay-seconds>
</map-store>
5. Use Batching for Bulk Inserts
In Hazelcast, doing millions of individual map.put() operations causes memory spikes. To avoid OutOfMemoryExceptions in Hazelcast, use batched bulk inserts:
Map<String, User> batch = new HashMap<>();
for (int i = 0; i < 1000; i++) {
    batch.put("u" + i, createUser(i)); // createUser(): your own factory for the value object
}
map.putAll(batch); // one bulk operation instead of 1,000 individual puts
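For very large loads, even the local staging map can spike memory if you build one giant batch, so it helps to flush in fixed-size chunks. A minimal sketch (the helper name and chunk size are illustrative; since IMap implements java.util.Map, the same code works against a Hazelcast map):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntFunction;

public class ChunkedLoader {
    // Loads `total` generated entries into `target`, never holding more than
    // `chunkSize` entries in the local staging map at once.
    static void loadInChunks(Map<String, String> target, int total, int chunkSize,
                             IntFunction<String> valueFor) {
        Map<String, String> chunk = new HashMap<>();
        for (int i = 0; i < total; i++) {
            chunk.put("u" + i, valueFor.apply(i));
            if (chunk.size() == chunkSize) {
                target.putAll(chunk); // one bulk call per chunk
                chunk.clear();
            }
        }
        if (!chunk.isEmpty()) {
            target.putAll(chunk); // flush the remainder
        }
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>(); // stand-in for hazelcast.getMap("users")
        loadInChunks(map, 10_500, 1_000, i -> "user-" + i);
        System.out.println(map.size()); // prints 10500
    }
}
```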
6. JVM Tuning to avoid OutOfMemoryExceptions in Hazelcast
Hazelcast works best with G1GC. You can also configure the heap size as part of JVM tuning to avoid OutOfMemoryExceptions in Hazelcast.
The recommended JVM flags are:
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-XX:+ParallelRefProcEnabled
-XX:+UseStringDeduplication
-XX:+UnlockExperimentalVMOptions
Hazelcast Heap Size Configuration:
-Xms8g
-Xmx8g
It is recommended to always set -Xms and -Xmx to the same value to avoid heap resizing.
7. Diagnostic Tools to Detect OOM Issues
Hazelcast provides built-in tools to help developers understand live memory usage.
1. Hazelcast Management Center
Hazelcast Management Center provides graphical views of the following memory-related metrics:
- heap usage
- event queues
- slow operations
- migrations
- hot partitions
- map sizes
This is the easiest way to identify problematic maps that grow without limits.
2. Hazelcast Diagnostic Logs
You can enable Hazelcast diagnostic logs to monitor memory usage and other aspects:
<properties>
<property name="hazelcast.diagnostics.enabled">true</property>
<property name="hazelcast.diagnostics.max.rolled.file.size.mb">50</property>
</properties>
3. Java Tools
You can use standard Java tools to investigate memory leaks and thread issues:
- VisualVM
- JFR (Java Flight Recorder)
- jcmd
- heap dumps (jmap -dump)
6. Real-World Scenarios and How to Fix Them
Hazelcast users and developers often face the common situations below; here is how to resolve each of them quickly.
Scenario 1: Map Growing Without Limit (Most Common)
Problem:
The map keeps growing until the heap is full. This is a very common issue Hazelcast developers face.
Cause:
No eviction policy is configured.
Fix:
You can configure eviction and TTL:
<eviction eviction-policy="LRU" size="200000" max-size-policy="PER_NODE"/>
<time-to-live-seconds>1800</time-to-live-seconds>
Scenario 2: Client JVM OutOfMemory Due to Near Cache
Fix:
To avoid OutOfMemoryExceptions in Hazelcast, set a maximum Near Cache size and an eviction policy:
<near-cache name="users">
<max-size>5000</max-size>
<eviction eviction-policy="LFU"/>
</near-cache>
Scenario 3: Heavy Queries Causing Memory Spikes
Fix:
To avoid OutOfMemoryExceptions caused by heavy queries, add indexes:
<index type="HASH">
<attribute>email</attribute>
</index>
Scenario 4: Large Partition Migrations
When a node crashes or leaves the cluster unexpectedly, its data migrates to the remaining nodes, temporarily consuming a large amount of memory.
Fixes:
To fix this and avoid OutOfMemoryExceptions in Hazelcast:
- Increase cluster size
- Limit partition size
- Use async backups
Scenario 5: Oversized Objects Stored in IMaps
It is always recommended to avoid storing oversized objects in Hazelcast; they can cause OOM.
Fix:
- Store lightweight DTOs
- Remove unused fields
- Compress strings/lists where needed
7. Checklists for Production
This production checklist helps ensure your Hazelcast cluster never runs out of memory.
Pre-Deployment Memory Checklist
- Eviction policy configured
- TTL and Max Idle set
- Backups set to appropriate count
- Indexes created for all query fields
- JVM heap size tuned
- Off-heap memory enabled if needed
- MapStore configured where applicable
Runtime Monitoring Checklist
- Watch heap usage in Management Center
- Monitor partition migrations
- Watch Near Cache size
- Track slow tasks and long GC pauses
- Check diagnostics logs weekly
Disaster Recovery Checklist
- At least 2 members per cluster
- Async backups enabled
- WAN Replication configured (if needed)
- Snapshot or persistence enabled
Conclusion
Avoiding OutOfMemoryExceptions in Hazelcast is all about preventing unbounded memory growth and monitoring the cluster. With proper eviction, TTL, indexing, memory tuning, and diagnostics, you can build a highly scalable Hazelcast cluster that runs reliably in production.
This guide covered all the essential configurations and real-world best practices every Java developer should know before deploying Hazelcast at scale.
