gov limit - spacing records from trigger
I have a trigger that converts one record into a BUNCH of other records depending on the length of time between two date fields. If the dates are far apart (say, 20 months) and 150 such records come in at once, I hit three governor limits: script statements, DML statements, and SOQL queries.
Several experts have looked over the code with me and the margin for further optimization is low, so I'm considering alternatives (please feel free to comment on how much value you see in these paths, or whether I'm leaving out any good ones):
1) Have the DB guy load the records with long date ranges only 20 or 30 at a time. (Q: How long does he need to wait between loads before SF treats them as separate trigger invocations?)
2) Get rid of the trigger entirely and use a workflow rule that checks a flag on each incoming record to see whether it has been processed yet, handling records until the governor limits get close. (Q: How nasty a hack is this in SF land?)
3) Drop SF for Heroku, EC2, Azure, etc.
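For context, the expansion the trigger performs might look roughly like this. This is a minimal bulkified sketch, not the actual code; the object and field names (`Source__c`, `Child__c`, `Start_Date__c`, `End_Date__c`, `Period_Start__c`) are hypothetical:

```apex
// Hypothetical bulkified expansion: one child record per month between two dates.
trigger ExpandRecords on Source__c (after insert) {
    List<Child__c> children = new List<Child__c>();
    for (Source__c src : Trigger.new) {
        Integer months = src.Start_Date__c.monthsBetween(src.End_Date__c);
        for (Integer i = 0; i < months; i++) {
            children.add(new Child__c(
                Parent__c = src.Id,
                Period_Start__c = src.Start_Date__c.addMonths(i)
            ));
        }
    }
    // A single insert keeps the DML-statement count at 1, but the
    // script-statement and DML-row limits still scale with months x records.
    insert children;
}
```

Even fully bulkified like this, 150 records times 20 months is 3,000 child rows in one transaction, which is where the limits start to bite.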
All Answers
Thanks for helping understand the platform a little better, Noam.
It looks like there is a way to periodically check how close a block of code is getting to the limits. The downside is that there is a lot to check, and the checks themselves count against the limits (aside from what a nasty hack that approach would be). See here:
http://www.bulkified.com/Monitoring+Salesforce.com+Governor+Limits
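The same idea can be expressed with the built-in `Limits` class, which exposes current and maximum counts for each governor limit. A minimal sketch of a headroom check (what to do with the deferred records is left open):

```apex
// Stop early if the next unit of work would push past a governor limit.
if (Limits.getDmlStatements() + 1 >= Limits.getLimitDmlStatements()
        || Limits.getQueries() + 1 >= Limits.getLimitQueries()) {
    // Near a limit: defer the remaining records, e.g. flag them
    // for an asynchronous job to pick up later.
    return;
}
```

Note that calls to `Limits` methods do still execute script statements, so sprinkling them everywhere has its own (small) cost.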
I am disinclined to post the source code on the internet because that would be a victory for the FSF and potentially for my company's competitors.
I misunderstood the difference between bulkifying Apex and batch Apex, which I am now reading up on. Unfortunately, the docs carry some warnings about invoking batch Apex from triggers. *Sigh*.
http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_batch.htm
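For reference, the pattern from that doc is a class implementing `Database.Batchable<SObject>`; each `execute()` call runs in its own transaction with a fresh set of governor limits. This is an illustrative skeleton only, with a hypothetical `Processed__c` flag field:

```apex
global class ExpandRecordsBatch implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Hypothetical flag marking records not yet expanded.
        return Database.getQueryLocator(
            'SELECT Id, Start_Date__c, End_Date__c ' +
            'FROM Source__c WHERE Processed__c = false');
    }
    global void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Expansion logic goes here; each invocation handles one chunk
        // of records under its own governor limits.
    }
    global void finish(Database.BatchableContext bc) {}
}
```

A trigger can hand work off with `Database.executeBatch(new ExpandRecordsBatch(), 50)` (the second argument is the chunk size), but this is exactly the spot the doc's warnings apply to: batch jobs are asynchronous and there is a cap on how many can be started per transaction.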
Thanks, Noam!