Starting November 20, the site will be set to read-only. On December 4, 2023,
forum discussions will move to the Trailblazer Community.

Competing upserts causing "DUPLICATE_VALUE" errors - should this even be possible?

I've got an object defined with an "external ID" field (in this case, email address).

We've got code that takes real-world values and upserts them into this object.  We don't bother querying for matches first - we just upsert the given values, knowing that they will either insert (if new) or update (if existing, thus resolving to the correct ID).  Either way my SObjects are guaranteed to have a valid ID post-upsert.
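A minimal sketch of that pattern (Addr__c and Email__c are hypothetical stand-ins for my actual object and external ID field):

```apex
// Upsert by external ID: no pre-query for matches needed.
List<Addr__c> addrs = new List<Addr__c>();
for (String email : incomingEmails) {   // incomingEmails: the real-world values
    addrs.add(new Addr__c(Email__c = email));
}
// Matches on Email__c: inserts rows that don't exist, updates rows that do.
upsert addrs Email__c;
// Either way, every record in addrs now carries a valid Id.
```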

However, if I run enough processes in parallel, I'll get an upsert error:

EXCEPTION: Upsert failed.  First exception on row 0; first error:
DUPLICATE_VALUE, duplicate value found: i__FullAddr__c duplicates value on record with id: a0T700000003bX4

I'll investigate using the Database.upsert() method to retry the upsert on failure; hopefully that will fix the problem.

Am I smoking crack to think that upsert should be an atomic operation?  This is empirically not the case - we've clearly got interleaved upserts - but is this a bug or a "feature"?
OK, a couple of discoveries.

(1) Database.upsert() doesn't help, because the exception is still thrown (rather than "isSuccess()" simply returning false).

(2) However, because the DML exception can be caught, this code works around the issue:

try {
    upsert addr FullAddr__c;
} catch (DmlException e) {
    System.debug('retry!');
    upsert addr FullAddr__c;
}

Using try/catch might actually be beneficial here, because the overhead of the exception mechanism increases the length of time between the failure and the retry attempt (thus letting the retry succeed).

Still, this is a bit of a hack and doesn't provide any guarantees that the 2nd upsert will succeed (although I can't think of any way that it wouldn't...).

Regardless, I'm curious to hear whether upsert is supposed to be atomic or not...
Note - playing with Database.upsert() shows something interesting.

If you specify "allOrNone" = true, then the upsert method will throw the first DML exception it hits.

If you specify "allOrNone" = false, then the upsert method will not throw DML exceptions but will instead record them in the returned UpsertResult objects.

That seems a bit off to me, as the usage pattern changes depending on the allOrNone parameter (if you specify "allOrNone" = true, you might still want to get the UpsertResult back rather than having to catch the exception).

From another perspective, though, it makes sense: if "allOrNone" is true, then all of the UpsertResults after the first exception are undefined.

It would be helpful if the docs provided some more clarity on this nuance.
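To illustrate the "allOrNone" = false pattern described above (Addr__c and Email__c are hypothetical stand-ins for the actual object and external ID field):

```apex
// With allOrNone = false, failures are recorded in the results
// rather than thrown as a DmlException.
Database.UpsertResult[] results =
    Database.upsert(addrs, Addr__c.Email__c, false);

for (Integer i = 0; i < results.size(); i++) {
    if (!results[i].isSuccess()) {
        for (Database.Error err : results[i].getErrors()) {
            if (err.getStatusCode() == StatusCode.DUPLICATE_VALUE) {
                // This row lost the insert race to a competing upsert.
                System.debug('Duplicate on row ' + i + ': ' + err.getMessage());
            }
        }
    }
}
```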



Has anyone else seen this issue and found an answer? I have two processes (I believe processes, not threads of the same session) that generate this error when upserting the same record within a very short timeframe. Is the only solution to catch and try again, which is not foolproof? Note that my external ID is set to be unique.

SFDC error java.lang.Exception: Insert failed. First exception on row 0; first error: DUPLICATE_VALUE, duplicate value found: ENV_USI_Number__c duplicates value on record with id: a0A40000008lt3L

Thanks, any help is much appreciated.

In our experience, catch & try again is actually foolproof.

Of your two interleaved upserts, one succeeds & the other fails (because both try to run as an "insert").

When you retry the failed one, it will succeed because the upsert runs as an update.
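A sketch of that retry using Database.upsert with "allOrNone" = false, so only the losing rows are retried (object and field names are hypothetical):

```apex
Database.UpsertResult[] results =
    Database.upsert(addrs, Addr__c.Email__c, false);

List<Addr__c> retries = new List<Addr__c>();
for (Integer i = 0; i < results.size(); i++) {
    if (!results[i].isSuccess()) {
        retries.add(addrs[i]);   // this row lost the insert race
    }
}
if (!retries.isEmpty()) {
    // The competing transaction has committed by now, so these rows
    // exist and the retry resolves to an update.
    upsert retries Email__c;
}
```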


But how would we catch this if the upsert is coming from an external process? What I mean is, if some kind of middleware is doing the upserting. I'd love to be able to have some kind of trigger that reads something like:


if (Trigger.isAfter && (Trigger.isInsert || Trigger.isUpdate)) {
    // check for "DUPLICATE_VALUE" errors and skip only the records that caused them
    // process all others
}



Would this be possible?


I think your middleware needs to be aware of the issue & compensate for it.


"after" triggers don't see the failed rows.


"before" triggers might see the rows that will eventually fail, but they are limited in what they can do.

James Boutcher 8
Problem still exists today. Opened a case on this. The try/catch/retry is a hack.
Vidya D
Hi, did you get an answer on this?
I am also interested in catching errors that fail on insert/update due to a duplicate value. The field is marked as unique, and due to a sharing rule, the before-insert validation doesn't pull the record that has the duplicate value.

Is there any way to catch the error after insert, something like you have mentioned above?

if (Trigger.isAfter && (Trigger.isInsert || Trigger.isUpdate)) {
    // check for "DUPLICATE_VALUE" errors and skip only the records that caused them
    // process all others
}