
Batchable instance is too big
Hi All,
I've recently started having issues with production code: an overnight batch began failing with "Batchable instance is too big". I suspected the heap size might at some point have exceeded its limit, since the batch uses a stateful shared map that accumulates data across execution blocks. On inspection, though, the system was throwing the error when the heap size had reached only 42% (approximately 5.4 million bytes) of its capacity. Confused, I called all of the Limits class methods at the end of the execute block to see whether anything had exceeded its limit, yet the only thing even remotely alarming was the heap size, which, again, was not over the limit.
To replicate the issue, I put together a simple example. The class is shown below:
global class TestBatch implements Database.Batchable<sObject>, Database.Stateful
{
    global Map<Integer, Integer> globalMap = new Map<Integer, Integer>();

    global Database.QueryLocator start(Database.BatchableContext bc)
    {
        return Database.getQueryLocator('SELECT Id FROM Account');
    }

    global void execute(Database.BatchableContext bc, List<sObject> objects)
    {
        Integer i = globalMap.size();
        final Integer interval = 1000;
        for (Integer x = 0; x < 100000; ++x)
        {
            globalMap.put(i, i);
            if (Math.mod(x, interval) == 0)
            {
                System.debug(LoggingLevel.Error, 'Limits.getHeapSize(): ' + Limits.getHeapSize());
                System.debug(LoggingLevel.Error, 'Limits.getLimitHeapSize(): ' + Limits.getLimitHeapSize());
            }
            ++i;
        }
    }

    global void finish(Database.BatchableContext bc)
    {
    }
}
This code should compile in any sandbox. If you run the following anonymous Apex:
Database.executeBatch(new TestBatch(), 1);
it should iterate through a few execution blocks, assuming there are at least a few accounts in the org. After two successful execution blocks, the system started raising the same error, even though the heap size had only reached approximately 2.7 million bytes.
Does anyone have any thoughts as to why Salesforce complains about a large instance when it hasn't even reached halfway through the enforced limits?
Hello Guys,
It may be because of the latest Salesforce release. See the known issue below:
https://success.salesforce.com/issues_view?id=a1p300000008Y2RAAU
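Regardless of the known issue, one way to see how close a Database.Stateful member is getting to the serialization ceiling is to log its serialized length at the end of each execute block. In Apex, JSON.serialize(globalMap).length() would give that number directly. The sketch below approximates the idea in Java (standing in for Apex, which can't run outside the platform), with a hand-rolled encoder since Java's standard library has no JSON support; all class and method names here are illustrative, not part of any API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StatefulSizeMonitor {
    // Hand-rolled JSON rendering of an Integer -> Integer map, approximating
    // what JSON.serialize(globalMap) would return in Apex.
    public static String toJson(Map<Integer, Integer> m) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<Integer, Integer> e : m.entrySet()) {
            if (!first) sb.append(',');
            sb.append('"').append(e.getKey()).append("\":").append(e.getValue());
            first = false;
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        Map<Integer, Integer> globalMap = new LinkedHashMap<>();
        for (int chunk = 0; chunk < 3; chunk++) {
            // Each simulated execute() adds 100000 entries, like TestBatch does.
            int i = globalMap.size();
            for (int x = 0; x < 100000; x++) {
                globalMap.put(i, i);
                i++;
            }
            // In Apex this would be JSON.serialize(globalMap).length(), logged
            // at the end of each execute() to watch the serialized state grow.
            System.out.println("after chunk " + chunk + ": "
                    + toJson(globalMap).length() + " serialized chars");
        }
    }
}
```

Logging this alongside Limits.getHeapSize() would show whether the serialized form of the state grows faster than the in-memory heap numbers suggest.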
globalMap.size(): 199001
Limits.getHeapSize(): 4777647
Limits.getLimitHeapSize(): 12000000
Percent Memory Usage: 39.81%
This result was logged right before the system complained that the batchable instance is too big. Notice that the heap size is significantly lower than the 12 MB limit, yet the error still occurs. That also glosses over the fact that if the batch state were serialized naively, it would theoretically need only 1,592,008 bytes to store: 199,001 entries × (4 bytes for the Integer map key + 4 bytes for the paired Integer value).
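A quick arithmetic check on those numbers (in Java, as a sketch; the assumption that the stateful map serializes to JSON-like text is mine, since the platform does not document how it persists batch state): the naive 8-bytes-per-entry figure comes out as stated above, but a text serialization of the same map is nearly twice as large, which may be part of why the serialized instance outgrows what the in-memory heap figures suggest.

```java
public class StateSizeCheck {
    // Length of a JSON rendering {"0":0,"1":1,...} of the first n map entries.
    public static long jsonLength(int n) {
        long chars = 2; // the surrounding braces
        for (int i = 0; i < n; i++) {
            int digits = String.valueOf(i).length();
            // 2 quotes + key digits + 1 colon + value digits, plus a comma
            // before every entry except the first
            chars += 3 + 2L * digits + (i > 0 ? 1 : 0);
        }
        return chars;
    }

    public static void main(String[] args) {
        int entries = 199001;                       // globalMap.size() at failure
        long naiveBytes = (long) entries * 8;       // the 8-bytes-per-entry estimate
        double pctUsed = 100.0 * 4_777_647 / 12_000_000;
        System.out.println("naive bytes: " + naiveBytes);    // 1592008
        System.out.printf("heap used:   %.2f%%%n", pctUsed); // prints 39.81%
        System.out.println("JSON chars:  " + jsonLength(entries));
    }
}
```

The text form comes to roughly 2.96 million characters for 199,001 entries, so any text-based persistence of the state would sit well above the naive byte estimate, even before per-character encoding overhead.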