Vivek S 42

Apex CPU time exceeded in bulk data load jobs

In our Salesforce environment we use Informatica to insert or update records in large volumes, up to 150,000 records, in parallel mode (we cannot switch to serial mode). These operations fire triggers, which call each other and some other triggers to perform further insert/update operations. There are also workflows that fire when their conditions are met.

From the transaction logs I can see the insert/update job starting with a chunk of 200 records and completing the full cycle for those 200 records: firing the triggers and inserting/updating the other objects. Once the first cycle completes, I would expect the transaction to end there, with a new chunk of records starting under a fresh set of governor limits. That is what I assumed, at least. But after the first cycle completes, a new set of records starts inserting/updating within the same single transaction.

Since each new set gets chained into the same transaction, the job fails after 3 or 4 chunks with an "Apex CPU time limit exceeded" error. That failure is no surprise, since all the chunks are being processed in a single transaction and the governor limits are not reset for each chunk.
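The behaviour I am describing can be illustrated with a minimal trigger sketch (the object, trigger, and class names here are hypothetical, just to show the pattern). Static variables in Apex live for the duration of one transaction, so if a counter in a static field keeps growing across chunks, those chunks are all part of the same transaction:

```apex
// Hypothetical helper: static state is scoped to the transaction,
// not to the individual 200-record chunk.
public class ChunkCounter {
    public static Integer invocations = 0;
}
```

```apex
// Hypothetical trigger that logs how many times it has been invoked
// in the current transaction and how much CPU time is already used.
trigger AccountChunkLogger on Account (before insert, before update) {
    ChunkCounter.invocations++;
    System.debug('Trigger invocation #' + ChunkCounter.invocations
        + ' for ' + Trigger.new.size() + ' records; '
        + 'CPU time used so far: ' + Limits.getCpuTime() + ' ms');
}
```

If each chunk had fresh limits, `Limits.getCpuTime()` would reset between invocations; in my logs it keeps climbing across chunks until the limit is hit.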

Can anyone tell me how a bulk load from Informatica behaves with Salesforce? For example, if I insert 20,000 records and Salesforce processes them in chunks of 200, then 100 chunks are needed to complete the load. Do all 100 chunks fall under a single transaction?

Any inputs on this would be appreciated.

Note: my code contains complex and critical logic, and code optimization has already been done. I cannot move everything to future calls; wherever future calls were possible I have already made the necessary optimizations. I also cannot switch to serial mode.
Zuinglio Lopes Ribeiro Júnior
Hello Vivek,

One thing is certain: each chunk is considered a separate transaction, but your triggers may be causing a cascade effect in the volume of records to process, which makes the job fail after a few chunks. Keep in mind that when a trigger itself performs DML on more than 200 records, the resulting trigger invocations are chunked into batches of 200 within the same transaction, so governor limits are shared across all of them. If these triggers came from Informatica, I suggest contacting their team to analyze the possibilities.
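Since your triggers call each other, one common mitigation is a static re-entrancy guard, so each handler's cascading logic runs only once per transaction. This is only a sketch; the class, method, and trigger names are hypothetical:

```apex
public class TriggerGuard {
    // Static flags live for a single transaction, so the guard
    // resets automatically when the next transaction starts.
    private static Set<String> ranHandlers = new Set<String>();

    // Returns true the first time a given handler asks to run in
    // this transaction, false on every re-entrant call after that.
    public static Boolean firstRun(String handlerName) {
        if (ranHandlers.contains(handlerName)) {
            return false;
        }
        ranHandlers.add(handlerName);
        return true;
    }
}
```

```apex
trigger ContactTrigger on Contact (after update) {
    if (TriggerGuard.firstRun('ContactTrigger.afterUpdate')) {
        // cascading updates that would otherwise re-fire triggers
    }
}
```

Be careful with this pattern when a single DML statement is chunked into multiple 200-record trigger invocations: a transaction-wide guard like this would skip the later chunks, so apply it only to logic that genuinely must run once per transaction.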

In addition, if the batch is using Database.Stateful, make sure it is really needed, because it may be keeping unnecessary data between transactions, which, depending on your code design, can mean more data to process.
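For comparison, a stateless Batch Apex job looks like the sketch below (all names are hypothetical). Without implementing Database.Stateful, instance fields are not preserved between execute() calls, and each scope runs in its own transaction with fresh governor limits:

```apex
// Hypothetical stateless batch: each execute() call gets a clean
// instance state and its own set of governor limits.
public class AccountCleanupBatch implements Database.Batchable<SObject> {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Name FROM Account');
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        // Keep per-chunk work here; nothing carries over to the
        // next scope unless Database.Stateful is implemented.
        for (Account a : scope) {
            a.Name = a.Name.trim();
        }
        update scope;
    }

    public void finish(Database.BatchableContext bc) {
        System.debug('Batch finished.');
    }
}
```

You would start it with an explicit scope size, for example `Database.executeBatch(new AccountCleanupBatch(), 200);`.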

General Guidelines for Data Loads

Using Batch Apex

Hope this helps!


Don't forget to mark your thread as 'SOLVED' with the answer that best helps you.