Steve Berley [Left Propeller]

Locking rows at the last minute

I have a utility routine (pasted below) that I use for all writes to the database which lets me catch errors.  This works perfectly in all but one of my orgs.  In that org, I generate a ton of UNABLE_TO_LOCK_ROW errors.  This is probably because it's a very complex org with lots of processes, many external, updating data.

The method receives a list of SObjects which it acts upon. 

How would I go about re-querying the objects in the received list from this method using FOR UPDATE so I can quickly commit the changes while minimizing the lock time?
public static void saveList(string command, list<sobject> objList, string caller){
		if ( objList == null || objList.isEmpty() ) return;
		try {
			if (command == 'insert') insert objList;
			else if (command == 'update') update objList;
			else if (command == 'upsert') upsert objList;
			else if (command == 'delete') delete objList;
		} catch (Exception e){
			logError(caller, e);
		}
}


Alain Cabon
Hi Steve,

You are looking for samples of "retry logic for failures", but the best solution is to ... avoid the errors as far as possible (multiple threads are hard to control when flows, Lightning processes, and external accesses are all updating data).

Record-Level Locking: Designing Record Access for Enterprise Scale
"Because Salesforce holds these locks very briefly, customers who are experiencing a small number of locking errors might be able to handle the problem by adding retry logic to their integration code. Customers who experience frequent locking from integrations and mass updates should sequence batches so that the same records are not updated in multiple threads simultaneously."
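A minimal sketch of what such retry logic in integration code could look like. Everything here (the method name, `MAX_ATTEMPTS`, matching on the error message) is an assumption for illustration, not a Salesforce API:

```
// Hypothetical sketch: retry a DML call a few times when the row is locked.
// Tune MAX_ATTEMPTS to your org's contention profile.
public static void updateWithRetry(List<SObject> records) {
	final Integer MAX_ATTEMPTS = 3;
	for (Integer attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
		try {
			update records;
			return; // success
		} catch (DmlException e) {
			if (!e.getMessage().contains('UNABLE_TO_LOCK_ROW') || attempt == MAX_ATTEMPTS) {
				throw e; // not a lock error, or out of retries
			}
			// Otherwise fall through and try again. Synchronous Apex has no
			// sleep, so this only helps with very short-lived locks.
		}
	}
}
```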

There is also the very new BatchApexErrorEvent (beta) in Winter '19, presented by Andrew Fawcett (January 8, 2019).

Building a Batch Retry Framework With BatchApexErrorEvent:

Steve, your question is one of the most difficult problems; the same question was asked here: https://salesforce.stackexchange.com/questions/42139/can-a-dml-retry-solve-unable-to-lock-row-race-conditions

Robert Sösemann and sfdcfox are not beginners, and the alternatives they offer are "very few": essentially none beyond FOR UPDATE.

The newer architecture patterns use a Unit of Work, but that only gives you a rollback when a problem occurs.

Unable to lock row - Record currently unavailable errors

Cumulus uses a DmlWrapper, but I don't see anything about retry logic for failures there.

This problem is very interesting, and it would be valuable if people shared their own retry logic for failures in this thread.
Steve Berley [Left Propeller]
Thanks @Alain!
Had a brainstorm overnight – 
What if I get all of the object IDs from objList, query that same set of records using FOR UPDATE, and then loop through each record in objList...
set<id> objIDs = new set<id>();
string objName;
for (sobject obj : objList) {
	if (obj.id != null){
		if (objName == null) objName = getSObjectName(obj.id);
		objIDs.add(obj.id);	// collect the ids to re-query under lock
	}
}

if (objName != null) {
	string q = 'select id from ' + objName + ' where id in :objIDs FOR UPDATE ';
	map <id, sobject> lockedObjs = new Map<id, sobject>((List<sobject>)Database.query(q));

	for (sobject obj : objList){
		if ( lockedObjs.containsKey(obj.id)){
			// replay the pending changes onto the freshly locked record
			sobject locked = lockedObjs.get(obj.id);
			map<string,object> fields = obj.getPopulatedFieldsAsMap();
			for (string key : fields.keySet()){
				// getPopulatedFieldsAsMap returns API names, so match 'Id' case-insensitively
				if (!key.equalsIgnoreCase('id')) locked.put(key, fields.get(key));
			}
			lockedObjs.put(obj.id, locked);
		} else lockedObjs.put(obj.id, obj);	// record not found or has no id yet
	}
	update lockedObjs.values();
}

public static String getSObjectName(id i){
	return i.getSObjectType().getDescribe().getName().toLowerCase();
}


So I'm effectively reproducing the mods I've received onto the newly queried & locked records, adding any records that are new, and then quickly saving.
Alain Cabon
Hi Steve,

So you prevent the problem as far as possible with FOR UPDATE (everywhere), and that is the best solution.

"Retry logic for failures" is interesting for precise cases (business logic) but not for technical problems like UNABLE_TO_LOCK_ROW errors (by then it is too late).
Steve Berley [Left Propeller]
Yeah - I know but I'm trying to avoid using it everywhere because I have a lot going on.  My goal/hope is to leave all existing code as is and just take the lock when I'm about to write; hence the new idea...
Alain Cabon
Ok, we all have this problem of UNABLE_TO_LOCK_ROW errors, and we don't use "SELECT ... FOR UPDATE" enough (it is not heavily promoted in the Apex books either).

Your generic way is smart, even if you would sometimes like to catch and retry. I didn't find samples of those retries, and how would you retry inside a process or a flow? A loop is dangerous/useless (it times out quickly), so a real solution could need asynchronous calls to a "rescue" program: quite theoretical, very complicated, and not a real-world answer on a short timeline (and budget). In France, we call that a "gas plant" (usine à gaz).
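For what it's worth, one hedged sketch of the asynchronous "rescue" idea: a Queueable that re-attempts the DML a limited number of times, each attempt in a fresh transaction so the contended lock has time to clear. The class name, `RETRY_LIMIT`, and error-message matching are all assumptions for illustration, not an established framework:

```
// Hypothetical sketch of an async retry job for lock failures.
public class DmlRetryJob implements Queueable {
	private List<SObject> records;
	private Integer attempt;
	private static final Integer RETRY_LIMIT = 3;

	public DmlRetryJob(List<SObject> records, Integer attempt) {
		this.records = records;
		this.attempt = attempt;
	}

	public void execute(QueueableContext ctx) {
		try {
			update records;
		} catch (DmlException e) {
			Boolean isLockError = e.getMessage().contains('UNABLE_TO_LOCK_ROW');
			if (isLockError && attempt < RETRY_LIMIT) {
				// Chain another attempt in a new transaction.
				System.enqueueJob(new DmlRetryJob(records, attempt + 1));
			} else {
				throw e; // give up: log it or surface it to the user
			}
		}
	}
}
```

Note that chained Queueables have their own limits (one chained job per transaction), so this pattern only suits low-volume rescue paths.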
Alain Cabon
Hi Steve,

Click-to-Cloud (Australia) ( https://www.ctcproperty.com/About.html ; the boss probably has the distinguished red hat) has developed a sophisticated, complete alternative solution for Apex (more "functional").

Atom.apex: "Atom.apex is a library that tries to save you from the doom of Salesforce governor limits."
What we have dreamed is that if the governor limits are reached, Salesforce does not throw exceptions directly, and instead it will schedule our remaining logic to the next available time spot to continue.
Sadly this has always been a dream, and Salesforce aborts your execution brutally in this case (rolling back your transaction as well, which is a good thing).
This is how we got our inspiration. We want to split our business logic into steps, so that each step can be executed sequentially and if resources are running out, we have a chance to save our progress and then continue the remaining steps.
We call the whole process an Atom, which means that your business logic is guaranteed to be executed even if some governor limits are reached, so that it will not be broken.

Big picture: Here we split our business logic into CustomBeforeCompute, CustomCompute and CustomAfterCompute. The Atom instance is created with the initial data, then it will execute the CustomBeforeCompute. After that, it will do a 'for-each' loop of CustomCompute over the data indexed by 'items', and the looping item will be saved under the key of 'item'. Finally it will do the CustomAfterCompute.
Nothing is set off until the fork is invoked. During any step, if governor limits are being approached, a new queueable job will be created to continue running the remaining logic.


But you need to install their solution from the beginning of a new project.
Alain Cabon
High Reliability DML and Concurrency Design Patterns for Apex


User-added image

Yes, we have all seen this error, so what is the solution?
User-added image

YOU CANNOT ABSOLUTELY PREVENT THESE ERRORS ( ... I had this bad feeling, but now it is certain)

User-added image

... "Let the user deal with them": another great solution (very useful, I have used it very often, and I can already imagine the big smile on their faces)!
Steve Berley [Left Propeller]
Exactly!  I'm with you, and it doesn't take much to imagine how well "Let the user deal with them" flies with clients.

Of course, the presentation doesn't expand on the "recovery" approach of handling them in async code.  Now, if only I could find my copy of the book they mention.

Steve Berley [Left Propeller]
p.s.  I'm thinking a multi-pronged approach to addressing this...

1. Convincing the client that we should review the code leveraging the Salesforce API to make sure it's bulkified.
2. I've implemented the code above that leverages getPopulatedFieldsAsMap() in a sandbox, and timing tests show a negligible difference in performance.  I'm debating just putting it into prod and watching for errors, with an "undo" changeset at the ready to deploy.
3. I need to find that book ;-)

Thanks, as always, for the help and commiseration.

Alain Cabon
Hi Steve,

1) The processes (the simplified flows, Lightning Process Builder) are also dangerous, despite the claim that they are now bulkified "by default".
Salesforce promotes "zero code" (a great idea), but the processes are also a source of problems, and we cannot change anything in the underlying programs.

2) Click-to-Cloud develops many interesting ideas, like Sweet Apex: "the next generation of Apex development, transpiling Sweet Apex (a feature-rich Apex-like code) to Apex classes. Before we reveal the secret of Sweet.apex, we would like to introduce one important Salesforce technique, SFDX. This is a developer-friendly command line tool that helps us do lots of things and creates many possibilities. Without DX, it would be nearly impossible to deploy our Apex code directly from locally to our Orgs. Yes, SFDX paves the way for building a brand-new development process, and let's take a closer look at this."  https://github.com/Click-to-Cloud/Sweet.apex

At first glance it is like TypeScript with JavaScript (mimicking the missing features by transpiling). There is nothing specific for concurrent access (even with transpiling, that is impossible).

Their Query class doesn't address the problem of "concurrent access and locking" either. Apex is not multi-threaded and gives no access to the underlying threads. We cannot create a thread in Apex as in Java, but we can launch asynchronous jobs while praying that they will not touch the same data. The use of "FOR UPDATE" is really rare in the Salesforce documentation (quoted once, for the definition of "FOR UPDATE" itself).
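For reference, the FOR UPDATE syntax itself is tiny; the returned rows stay locked until the transaction commits or rolls back (Account is just a placeholder object here):

```
public static List<Account> lockAccounts(Set<Id> accountIds) {
	// FOR UPDATE locks the returned rows for the rest of this transaction;
	// concurrent transactions touching them will block and may time out
	// with UNABLE_TO_LOCK_ROW.
	return [SELECT Id, Name FROM Account WHERE Id IN :accountIds FOR UPDATE];
}
```

Note that ORDER BY is not allowed in a FOR UPDATE query, and the lock is only held for the remainder of the current transaction.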

https://github.com/Click-to-Cloud/Query.apex

3) The most interesting code (zip file) is here: Advanced Apex Programming by Dan Appleman, http://advancedapex.com/samplecode/
You can download the source code of the Third Edition 1.1, with great Apex sample code for managing concurrency, and I see "for update" used clearly in his code.