
I just ran into two limits of the Salesforce.com / Database.com platform that were new to me, at least, and that give me a shiver. I executed a SOQL query like:

SELECT field_x1, .., field_xN, SUM(field_y1),..., SUM(field_yM) FROM...
GROUP BY field_x1, .., field_xN

With N > 40 and M > 50 I ran into:

MALFORMED_QUERY: maximum number of aliased fields exceeded: 100

and

MALFORMED_QUERY: Group By must contain 32 fields or less

Ok, ok! You probably do not often have a case that needs a SOQL statement with more than 68 aggregated and 32 grouped fields. But one time is enough to break a software feature ;-)

Does anyone have experience with how to work around this? Submit a case, chunk the query, ...?!
I haven't found a single word about this on Google.
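One workaround sketch, in case it helps: if only the aggregate list overflows (the GROUP BY fields themselves fit under the 32-field limit), the SUM fields can be split across several queries that share the same GROUP BY, and the partial results merged by their group key. A rough Apex sketch; the chunk size, method name, and field handling are my own assumptions, not a tested solution:

```apex
// Sketch: run the aggregation in chunks of SUM fields and merge by group key.
public static Map<String, Map<String, Object>> runChunked(
        List<String> groupFields, List<String> sumFields, String objectName) {
    Map<String, Map<String, Object>> merged = new Map<String, Map<String, Object>>();
    Integer chunkSize = 60; // keep grouped + aliased fields under the 100 limit
    for (Integer i = 0; i < sumFields.size(); i += chunkSize) {
        List<String> aggregates = new List<String>();
        for (Integer j = i; j < Math.min(i + chunkSize, sumFields.size()); j++) {
            aggregates.add('SUM(' + sumFields[j] + ') s' + j);
        }
        String soql = 'SELECT ' + String.join(groupFields, ',') + ','
            + String.join(aggregates, ',')
            + ' FROM ' + objectName
            + ' GROUP BY ' + String.join(groupFields, ',');
        for (AggregateResult ar : Database.query(soql)) {
            // Build a key from the grouped values so the chunks line up.
            String key = '';
            for (String g : groupFields) {
                key += String.valueOf(ar.get(g)) + '|';
            }
            if (!merged.containsKey(key)) {
                merged.put(key, new Map<String, Object>());
            }
            for (Integer j = i; j < Math.min(i + chunkSize, sumFields.size()); j++) {
                merged.get(key).put(sumFields[j], ar.get('s' + j));
            }
        }
    }
    return merged;
}
```

Each extra chunk costs one more SOQL query against the governor limits, so this only scales to a handful of chunks.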

I am very confused by running into a "Too many query locator rows: 10001" error when calling these lines in Apex:

 

@HttpGet
global static List<DataRow__c> doGet() {
    RestRequest req = RestContext.request;
    Integer pageNumber = Integer.valueOf(req.params.get('pn'));
    Integer pageSize = Integer.valueOf(req.params.get('ps'));

    Database.QueryLocator queryLocator = Database.getQueryLocator(QUERY);
    ApexPages.StandardSetController ssc = new ApexPages.StandardSetController(queryLocator);
    ssc.setPageNumber(pageNumber);
    ssc.setPageSize(pageSize);
    return ssc.getRecords();
}

What I am trying to do is select records from a dataset larger than 30,000 records. I do this in chunks of a few hundred records. To define a chunk I planned on using the upcoming new SOQL OFFSET feature.

 

Then I read that OFFSET cannot be used for my purpose, as OFFSET cannot have a value > 2000. (Why the heck is that?)

 

So I refactored my code to use StandardSetController and chunking via pageSize and pageNumber.

 

But now I am running into this "Too many query locator rows: 10001" error, although I am doing no insert, update, or anything.

I am JUST creating this Query Locator.

 

What am I doing wrong?
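For context, a query locator created outside of Batch Apex is capped at 10,000 rows, so the StandardSetController fails as soon as the underlying query matches more than that. One sketch of an alternative that avoids both OFFSET and the locator cap: keyset pagination, where the client passes the last Id it has seen instead of a page number. The parameter name 'after' and the queried fields are made up for illustration:

```apex
// Sketch: keyset ("cursor") pagination over an arbitrarily large table.
// The client sends ?ps=<pageSize>&after=<last Id of the previous page>.
@HttpGet
global static List<DataRow__c> doGet() {
    RestRequest req = RestContext.request;
    Integer pageSize = Integer.valueOf(req.params.get('ps'));
    String lastId = req.params.get('after'); // hypothetical parameter name
    if (String.isBlank(lastId)) {
        return [SELECT Id, Name FROM DataRow__c ORDER BY Id LIMIT :pageSize];
    }
    return [SELECT Id, Name FROM DataRow__c
            WHERE Id > :lastId
            ORDER BY Id
            LIMIT :pageSize];
}
```

Because each request only ever touches pageSize rows, no query locator is involved and the 10,000-row cap never applies; the trade-off is that clients cannot jump to an arbitrary page number.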

I am currently thinking of ways to connect to arbitrary Force.com orgs and let them share and receive data bigger than 10 MB.

I read that the SOAP web services have many restrictions regarding the size of the transmitted data, among other limits.

I have used the Bulk API from Java to upload big amounts of data and would like to do the same directly from within the platform.

 

Has anyone done this already and can share experiences?

Is this possible at all?

Or is this a bad idea and better ways to do this exist?
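One observation that may feed the discussion: the Bulk API is plain REST over HTTP, so in principle an Apex callout can drive it (create a job, add CSV batches, close the job), subject to callout payload-size and timeout limits. A rough sketch of the first step; the API version in the endpoint and the session handling are my assumptions and would need adjusting per org:

```apex
// Sketch: open a Bulk API insert job from within Apex via an HTTP callout.
HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm()
    + '/services/async/23.0/job'); // version number is an assumption
req.setMethod('POST');
req.setHeader('X-SFDC-Session', UserInfo.getSessionId());
req.setHeader('Content-Type', 'application/xml; charset=UTF-8');
req.setBody('<?xml version="1.0" encoding="UTF-8"?>'
    + '<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">'
    + '<operation>insert</operation>'
    + '<object>Account</object>'
    + '<contentType>CSV</contentType>'
    + '</jobInfo>');
HttpResponse res = new Http().send(req);
// The response body contains the job id needed for the subsequent batch requests.
System.debug(res.getBody());
```

The remote org would need the endpoint registered as a remote site, and the per-callout body limit still applies, so very large transfers would have to be split into many batch requests.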

 

I would love to hear from you.

 

Best regards,

 

Robert

Imagine on your org you have a managed package APP with the object APP__PARENT__c.

The customer creates custom objects PARENT__c and CHILD__c and creates master-detail relationships pointing from CHILD__c to PARENT__c and (!) to APP__PARENT__c.

 

There is no chance to uninstall APP without being forced to delete all my CHILD__c records.

How would one solve this? Especially if you're not the customer but the provider of the APP?

 

Any hints from other ISVs are welcome.

 

When using the infamous Data Loader one gets bombarded with dozens of error CSVs containing somewhat cryptic error messages. Is there any resource out there where I can look up what each message means and how to cope with it?

 

Any ideas are very welcome.

 

Robert

I have a custom component which combines two dropdown lists, and I bound them to custom object fields. There is no issue getting the values from the object, but I cannot save a change. Need help.

 

Custom component class

public with sharing class cComponetCombinedList {
  public String basecolorvalue {get;set;}
  public String stripecolorvalue {get;set;}
  public List<SelectOption> getItems(){
    List<SelectOption> items = new List<SelectOption>();
    items.add(new SelectOption('','--None--'));
    items.add(new SelectOption('BG','BG'));
    items.add(new SelectOption('BK','BK'));
    items.add(new SelectOption('BU','BU'));
    items.add(new SelectOption('BU (LT BL)','BU (LT BL)'));
    items.add(new SelectOption('BN','BN'));
    items.add(new SelectOption('DB','DB'));
    items.add(new SelectOption('DG','DG'));
    items.add(new SelectOption('GN','GN'));
    items.add(new SelectOption('GN (LT GN)','GN (LT GN)'));
    items.add(new SelectOption('GY','GY'));
    items.add(new SelectOption('OG','OG'));
    items.add(new SelectOption('PK','PK'));
    items.add(new SelectOption('RD','RD'));
    items.add(new SelectOption('VT','VT'));
    items.add(new SelectOption('WH','WH'));
    items.add(new SelectOption('YE','YE'));
    return items;
  }
  

}

 

Custom Component

 

<apex:component controller="cComponetCombinedList">
	<apex:attribute name="basecolor" description="the base color of the cable" 
			type="String" required="true" assignTo="{!basecolorvalue}">
	</apex:attribute>
	<apex:attribute name="stripecolor" description="the stripe color of the cable" 
			type="String" required="true" assignTo="{!stripecolorvalue}">
	</apex:attribute>
	<apex:selectList value="{!basecolorvalue}" size="1">
		<apex:selectOptions value="{!items}">
		</apex:selectOptions>
	</apex:selectList>
	<apex:selectList value="{!stripecolorvalue}" size="1">
		<apex:selectOptions value="{!items}">
		</apex:selectOptions>
	</apex:selectList>
</apex:component>
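A note on the likely cause, for anyone investigating: assignTo only copies the attribute value into the component controller when the page loads; changes the user makes to basecolorvalue are never pushed back to the fields on SRS_Product_Configuration__c, so the save writes the old values. A sketch of a possible fix, binding the selectLists directly to the attributes so the expression writes through to the underlying fields (assignTo dropped; untested):

```
<apex:component controller="cComponetCombinedList">
    <apex:attribute name="basecolor" description="the base color of the cable"
            type="String" required="true"/>
    <apex:attribute name="stripecolor" description="the stripe color of the cable"
            type="String" required="true"/>
    <apex:selectList value="{!basecolor}" size="1">
        <apex:selectOptions value="{!items}"/>
    </apex:selectList>
    <apex:selectList value="{!stripecolor}" size="1">
        <apex:selectOptions value="{!items}"/>
    </apex:selectList>
</apex:component>
```

With pass-through binding like this, the basecolorvalue and stripecolorvalue properties on the controller are no longer needed for the dropdowns.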

 

below is the component referenced in VF:

<apex:pageBlockSectionItem>
    <apex:outputLabel value="Pin 1 Base / Stripe Color:"/>
    <c:ComponentCombinedList basecolor="{!SRS_Product_Configuration__c.Pin_1_Base_Color__c}"
            stripecolor="{!SRS_Product_Configuration__c.Pin_1_Stripe_Color__c}">
    </c:ComponentCombinedList>
</apex:pageBlockSectionItem>

 

rendered in the page correctly:

[screenshot 1]

when I change the values and save:

[screenshot 2]

the value is still the old one:

 

Any ideas or comments are appreciated. Thanks

 

Qingsong

 

 


Hi,

 

I am trying to insert record feeds with the Apex Data Loader (through the command line) and I am not able to, because the mapping for the feed items fails with this message:

 

ERROR com.salesforce.dataloader.action.progress.NihilistProgressAdapter  - Field mapping is invalid: MY_OBJECT__FEED.PARENTID=ParentId\:ExternalId__c

 

The line that fails is:

 

MY_OBJECT__FEED.PARENTID=ParentId\:ExternalId__c

 

ExternalId__c exists on MY_OBJECT__c and is an external ID, so this is not the reason for the error. But strangely, when I check the relationship name by doing a getDescribe().getRelationshipName() on MY_OBJECT__FEED.PARENTID, it returns NULL.

 

???

 

Can anyone help with that?
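In case it is useful context for an answer: ParentId on a feed object is polymorphic (it can point at many object types), which may be why the describe returns no relationship name and why the external-ID mapping is rejected. A sketch of a possible workaround, resolving the external ID to a concrete record Id in Apex and inserting the feed post directly (the external ID value is made up; object and field names are from the post):

```apex
// Sketch: bypass the Data Loader relationship mapping by resolving the
// external id ourselves and inserting the feed post with a concrete ParentId.
MY_OBJECT__c parent = [SELECT Id FROM MY_OBJECT__c
                       WHERE ExternalId__c = 'EXT-001' LIMIT 1]; // value is hypothetical
FeedItem post = new FeedItem(
    ParentId = parent.Id,
    Body     = 'Imported feed entry'
);
insert post;
```

For a bulk load, the same idea works with one query mapping all external IDs to record Ids, then a single insert of the FeedItem list.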

 

Regards Robert

 

 


I’m the developer of a package that has a heavy dependence on a scheduled Batch Apex job. The package currently runs in a dozen or so orgs, some of which have fairly large amounts of data. One org in particular has over 3 million records that are processed by the Batch Apex job.

 

Over the past 3 months, we've been encountering a lot of stability problems with Batch Apex. We've opened cases for several of these issues, and they've been escalated to Tier 3 Support, but it consistently takes 2 weeks or more to get a case escalated, and then it can take several more weeks to get a meaningful reply from Tier 3.

 

We really need to talk with the Product Manager responsible for Batch Apex. We asked Tier 3 to make that introduction, but they said they couldn’t. We’re trying to work with Sales to set up a discussion with a Product Manager, but so far, we haven’t had any luck there either. We’re hoping that a Product Manager might see this post and get in touch with us. We need to find out whether Batch Apex is a reliable-enough platform for our application.

 

Here are a few examples of the problems we’ve been having:

 

  • The batch job aborts in the start() method. Tier 3 Support told us that the batch job was occasionally timing out because its initial query was too complex. We simplified the query (at this point, there are no WHERE or ORDER BY clauses), but we still occasionally see timeouts or near-timeouts. However, from what we can observe in the Debug Logs, actually executing the query (creating the QueryLocator) takes only a few seconds, but then it can take many minutes for the rest of the start() method to complete. This seems inconsistent with the "query is too complex" timeout scenario that Tier 3 Support described. (Case 04274732.)
  • We get the “Unable to write to ACS Stores” problem. We first saw this error last Fall, and once it was eventually fixed, Support assured us that the situation would be monitored so it couldn’t happen again. Then we saw it happen in January, and once it was eventually fixed, Support assured us (again) that the situation would be monitored so it couldn’t happen again. However, having seen this problem twice, we have no confidence that it won’t arise again. (Case 04788905.)
  • In one run of our job, we got errors that seemed to imply that the execute() method was being called multiple times concurrently. Is that possible? If so, (a) the documentation should say so, and (b) it seems odd that after over 6 months of running this batch job in a dozen different orgs, it suddenly became a problem.

 

  • We just got an error saying, “First error: SQLException [java.sql.SQLException: ORA-00028: your session has been killed. SQLException while executing plsql statement: {?=call cApiCursor.mark_used_auto(?)}(01g3000000HZSMW)] thrown but connection was canceled.” We aborted the job and ran it again, and the error didn’t happen again.
  • We recently got an error saying, “Unable to access query cursor data; too many cursors are in use.” We got the error at a time when the only process running on behalf of that user was the Batch Apex process itself. (Perhaps this is symptomatic of the “concurrent execution” issue, but if the platform is calling our execute() method multiple times at once, shouldn’t it manage cursor usage better?)
  • We have a second Batch Apex job that uses an Iterable rather than a QueryLocator. When Spring 11 was released, that Batch Apex job suddenly began to run without calling the execute() method even once. Apparently, some support for the way we were creating the Iterable changed, and even though we didn’t change the API version of our Apex class, that change caused our Batch Apex job to stop working. (Case 04788905.)
  • We just got a new error, "All attempts to execute message failed, message was put on dead message queue."

 

We really need to talk with a Product Manager responsible for Batch Apex. We need to determine whether Batch Apex is sufficiently stable and reliable for our needs. If not, we’ll have to find a more reliable platform, re-implement our package, and move our dozen or more customers off of Salesforce altogether.

 

If you’re responsible for Batch Apex or you know who is, please send me a private message so we can make contact. Thank you!

 

April 04, 2011