MSCRM RetrieveMultiple plugin limits other RetrieveMultiple queries

In my scenario, there is a RetrieveMultiple plugin on the annotation entity. This plugin is simply part of the Azure Blob Storage solution (used by the Attachment Management solution provided by Microsoft), so to be clear: our CRM uses MicrosoftLabsAzureBlobstorage.
Now I am executing a console app which retrieves annotations through a QueryExpression. When it tries to fetch around 500 or 600 records, it throws the error below.
{The plug-in execution failed because no Sandbox Hosts are currently
available. Please check that you have a Sandbox server configured and
that it is running.\r\nSystem.ServiceModel.CommunicationException:
Microsoft Dynamics CRM has experienced an error. Reference number for
administrators or support: #AFF51A0F"}
When I fetch specific records, or only a few records, it executes fine.
So my question is: is there a limit on the number of records a RetrieveMultiple query can return when a RetrieveMultiple plugin exists?
Or is there some other clue that I am not able to find?

To work around this conflict, in your console application code you may want to try retrieving smaller pages of annotations, say 50 at a time, and looping through the pages to process them all.
This article provides sample code for paging a QueryExpression.
Here's the abridged version of that sample:
// The number of records per page to retrieve.
int queryCount = 3;

// Initialize the page number.
int pageNumber = 1;

// Initialize the number of records.
int recordCount = 0;

// Create the query expression.
QueryExpression pagequery = new QueryExpression();
pagequery.EntityName = "account";
pagequery.ColumnSet.AddColumns("name", "emailaddress1");

// Assign the pageinfo properties to the query expression.
pagequery.PageInfo = new PagingInfo();
pagequery.PageInfo.Count = queryCount;
pagequery.PageInfo.PageNumber = pageNumber;

// The current paging cookie. When retrieving the first page,
// pagingCookie should be null.
pagequery.PageInfo.PagingCookie = null;

while (true)
{
    // Retrieve the page.
    EntityCollection results = _serviceProxy.RetrieveMultiple(pagequery);
    if (results.Entities != null)
    {
        // Print all records in the current page.
        foreach (Account acct in results.Entities)
        {
            Console.WriteLine("{0}.\t{1}\t{2}", ++recordCount, acct.Name,
                acct.EMailAddress1);
        }
    }

    // If more records are available, fetch the next page.
    if (results.MoreRecords)
    {
        // Increment the page number to retrieve the next page.
        pagequery.PageInfo.PageNumber++;

        // Set the paging cookie to the paging cookie returned from the current results.
        pagequery.PageInfo.PagingCookie = results.PagingCookie;
    }
    else
    {
        // If no more records are in the result set, exit the loop.
        break;
    }
}
This page has more info and another sample.
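Applied to the annotation scenario from the question, the same pattern might look like the sketch below. The page size of 50 and the columns retrieved are assumptions, not something mandated by the Blob Storage solution.

QueryExpression noteQuery = new QueryExpression("annotation");
// Assumed columns -- pick whichever attributes your app actually needs.
noteQuery.ColumnSet = new ColumnSet("subject", "filename");
noteQuery.PageInfo = new PagingInfo
{
    Count = 50,        // small pages keep the sandboxed plugin workload low
    PageNumber = 1,
    PagingCookie = null
};

while (true)
{
    EntityCollection notes = _serviceProxy.RetrieveMultiple(noteQuery);
    foreach (Entity note in notes.Entities)
    {
        // Process each annotation here.
    }

    if (!notes.MoreRecords)
        break;

    noteQuery.PageInfo.PageNumber++;
    noteQuery.PageInfo.PagingCookie = notes.PagingCookie;
}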

Related

Facebook Graph API Ad Report Run - Message: Unsupported get request

I'm making an async batch request with 50 report POST requests in it.
The first batch request returns the report IDs.
Step 1:
dynamic report_ids = await fb.PostTaskAsync(new
{
    batch = batch,
    access_token = token
});
Next, I get the report info to check the async status and see if the reports are ready to be downloaded.
Step 2:
var tListBatchInfo = new List<DataTypes.Request.Batch>();
foreach (var report in report_ids)
{
    if (report != null)
        tListBatchInfo.Add(new DataTypes.Request.Batch
        {
            name = !ReferenceEquals(report.report_run_id, null) ? report.report_run_id.ToString() : report.id,
            method = "GET",
            relative_url = !ReferenceEquals(report.report_run_id, null) ? report.report_run_id.ToString() : report.id,
        });
}

dynamic reports_info = await fb.PostTaskAsync(new
//dynamic results = fb.Post(new
{
    batch = JsonConvert.SerializeObject(tListBatchInfo),
    access_token = token
});
Some of the IDs generated in the first step return this error when I call them in the second step:
Message: Unsupported get request. Object with ID '6057XXXXXX'
does not exist, cannot be loaded due to missing permissions, or does
not support this operation. Please read the Graph API documentation at
https://developers.facebook.com/docs/graph-api
I know the ID is correct because I can see it using the Facebook API Explorer. What am I doing wrong?
This may be caused by Facebook's replication lag. That typically happens when your POST request is routed to server A, which returns the report ID, but the query for that ID gets routed to server B, which doesn't know about the report's existence yet.
If you try to query the ID later and it works, then it's the lag. Official FB advice for this is to simply wait a bit longer before querying the report.
https://developers.facebook.com/bugs/250454108686614/
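Until the lag resolves, one workaround is to retry the second-step request a few times with a delay between attempts. Below is a minimal sketch; the RetryAsync helper, the attempt count, and the delay are assumptions layered on top of the question's code, not part of the Facebook SDK.

// Hypothetical helper -- retries a Graph API call to ride out replication lag.
private static async Task<dynamic> RetryAsync(
    Func<Task<dynamic>> call, int maxAttempts = 5, int delayMs = 2000)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await call();
        }
        catch (FacebookApiException) when (attempt < maxAttempts)
        {
            // The report may not have replicated yet; wait and try again.
            await Task.Delay(delayMs);
        }
    }
}

// Usage with the second-step request from the question:
dynamic reports_info = await RetryAsync(() => fb.PostTaskAsync(new
{
    batch = JsonConvert.SerializeObject(tListBatchInfo),
    access_token = token
}));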

Mark an order as "Full Payment" on Sage 200

I am inserting orders into Sage 200 from an application using the client-side C# APIs.
I would like to check the "Full payment" checkbox on the "Payment with order" tab.
Currently, I am setting the PaymentType property, which is not working.
order.PaymentType = Sage.Accounting.SOP.SOPOrderPaymentTypeEnum.EnumSOPOrderPaymentTypeFull;
order is an instance of Sage.Accounting.SOP.SOPOrder.
Do you know how I can set that checkbox?
The following method should supply the required results.
private static void SetPaymentWithOrder(Sage.Accounting.SOP.SOPOrder sopOrder)
{
    // Indicate that the order has a payment.
    sopOrder.PaymentWithOrder = true;

    // This is a full payment order.
    sopOrder.PaymentType = Sage.Accounting.SOP.SOPOrderPaymentTypeEnum.EnumSOPOrderPaymentTypeFull;

    // Fetch the payment methods. The SOPPaymentMethods constructor accepts a
    // boolean flag indicating whether to include card processing methods.
    Sage.Accounting.SOP.SOPPaymentMethods paymentMethodsCollection = new Sage.Accounting.SOP.SOPPaymentMethods(false);

    // Set the first payment method of the collection on the order.
    sopOrder.PaymentMethod = paymentMethodsCollection.First;
}
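Wiring this into the insert code from the question might look like the sketch below. GetOrderBeingInserted is a hypothetical placeholder for however your application builds the order, and persistence is assumed to be handled by your existing save logic.

// 'order' is the Sage.Accounting.SOP.SOPOrder instance from the question.
Sage.Accounting.SOP.SOPOrder order = GetOrderBeingInserted(); // hypothetical helper
SetPaymentWithOrder(order);
// ... continue with your existing save/commit logic ...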
I don't know if you ever managed to figure this one out.
Not sure if you knew this, but you cannot modify the sales order on the view form, or at least shouldn't be trying to do so.
Using either of the Enter/Amend Sales Order forms will allow you to.
What is potentially happening is that the properties the controls are bound to are not updating the UI after your code has run.
You can force this to happen as follows.
First, fetch the underlying bound object:
public Sage.Accounting.SOP.SOPOrderReturn SOPOrderReturn
{
    get
    {
        // Loop over the bound objects collection.
        // Check if the bound object is of the type we want - e.g. SOPOrderReturn.
        // If it is the correct type, return this object.
        Sage.Common.Collections.BoundObjectCollection boundObjects = this.form.BoundObjects;
        if (boundObjects != null)
        {
            foreach (object boundObject in boundObjects)
            {
                if (boundObject is Sage.Accounting.SOP.SOPOrderReturn)
                {
                    this._sopOrderReturn = boundObject as Sage.Accounting.SOP.SOPOrderReturn;
                    break;
                }
            }
        }

        return this._sopOrderReturn;
    }
}
Then fetch the underlying form type of the amendable form, suspend the data binding, perform your changes, and resume the data binding:
Sage.MMS.SOP.MaintainOrderForm maintainOrderForm = this.form.UnderlyingControl as Sage.MMS.SOP.MaintainOrderForm;

maintainOrderForm.BindingContext[this.SOPOrderReturn].SuspendBinding();
this.SOPOrderReturn.PaymentWithOrder = true;
this.SOPOrderReturn.PaymentType = Sage.Accounting.SOP.SOPOrderPaymentTypeEnum.EnumSOPOrderPaymentTypeFull;
maintainOrderForm.BindingContext[this.SOPOrderReturn].ResumeBinding();
That should do the trick.

Azure Mobile Services Node.js update column field count during read query

I would like to update a column in a specific row in Azure Mobile Services using server-side code (Node.js).
The idea is that column A (which stores a number) will increase its count by 1 every time a user runs a read query from my mobile apps.
How can I accomplish that from the read script in Azure Mobile Services?
Thanks in advance.
Check out the examples in the online reference. In the read script for the table you're tracking, you will need to do something like this. It's not clear whether you're tracking reads in the same table the user is reading from, or in a separate counts table, but the flow is the same.
Note that if you really want to track this, you should log read requests to another table and tally them after the fact, or use an external analytics system (Google Analytics, Flurry, MixPanel, Azure Mobile Engagement, etc.). Updating a single count field in a record this way will not be accurate if multiple phones read from the table at the same time -- they will both read the same value x from the tracking table, increment it, and update the record with the same value x+1.
function read(query, user, request) {
    var myTable = tables.getTable('counting');
    myTable.where({
        tableName: 'verses'
    }).read({
        success: updateCount
    });

    function updateCount(results) {
        if (results.length > 0) {
            // The tracking record was found. Update it and continue normal execution.
            var trackingRecord = results[0];
            trackingRecord.count = trackingRecord.count + 1;
            myTable.update(trackingRecord, { success: function () {
                request.execute();
            }});
        } else {
            console.log('error updating count');
            request.respond(500, 'unable to update read count');
        }
    }
};
Hope this helps.
Edit: fixed function signature and table names above, adding another example below
If you want to track which verses were read (if your app can request one at a time), you need to do the "counting" request and update after the "verses" request, because the script doesn't tell you up front which verse records the user requested.
function read(query, user, request) {
    request.execute({ success: function (verseResults) {
        request.respond();
        if (verseResults.length === 1) {
            var countTable = tables.getTable('counting');
            countTable.where({
                verseId: verseResults[0].id
            }).read({
                success: updateCount
            });

            function updateCount(results) {
                if (results.length > 0) {
                    // The tracking record was found. Update it.
                    var trackingRecord = results[0];
                    trackingRecord.count = trackingRecord.count + 1;
                    countTable.update(trackingRecord);
                } else {
                    console.log('error updating count');
                }
            }
        }
    }});
};
Another note: make sure your counting table has an index on the column you're selecting by (tableName in the first example, verseId in the second).

IPP .Net V2 CustomerQuery.ExecuteQuery won't return more than 500 items

I'm using version 2.1.12.0 of the IPP .NET Dev Kit and having a problem: when I use ExecuteQuery to return a list of all of the customers for a QBD instance, it only returns the first 500.
The IPP documentation talks about using ChunkSize and StartPage, but the .NET library only allows you to specify the ChunkSize.
Is there a way to make ExecuteQuery return more than 500 records when using this version of the .NET library?
var cq = new CustomerQuery() { ActiveOnly = true };
var results = cq.ExecuteQuery<Ipp.Customer>(context);
// results will never contain more than 500.
I found a solution to the problem: the IPP .NET SDK does let you specify the IteratorId. It turns out the Item property on CustomerQuery/QueryBase represents the IteratorId XML field. If you don't specify Item/IteratorId, calling ExecuteQuery will always return the first 500 results.
Working code sample below:
var cq = new CustomerQuery() { ActiveOnly = true };

// This fills in the IteratorId that is documented on the IPP website.
// If you leave this out, the loop below will run infinitely when there
// are >= 500 records returned.
cq.Item = Guid.NewGuid().ToString("N");

ReadOnlyCollection<Ipp.Customer> cqr = null;
do
{
    cqr = cq.ExecuteQuery<Ipp.Customer>(context);
    // Do something with the results returned here.
}
while (cqr.Count == 500);
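If the goal is simply a complete customer list, the loop can accumulate each page as it goes. This is a sketch assuming the same context and Ipp alias used above.

var allCustomers = new List<Ipp.Customer>();
var cq = new CustomerQuery() { ActiveOnly = true };
cq.Item = Guid.NewGuid().ToString("N"); // IteratorId, as explained above

ReadOnlyCollection<Ipp.Customer> page;
do
{
    page = cq.ExecuteQuery<Ipp.Customer>(context);
    allCustomers.AddRange(page); // collect this page before fetching the next
}
while (page.Count == 500);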

Updating MongoDB in Meteor Router Filter Methods

I am currently trying to log user page views in a Meteor app by storing the userId, Meteor.Router.page(), and a timestamp when a user clicks through to other pages.
//userlog.js
Meteor.methods({
    createLog: function (page) {
        var timeStamp = Meteor.user().lastActionTimestamp;

        // Tracks whether this counts as a fresh login.
        var hasLoggedIn = false;

        // Check if the user's lastActionTimestamp is more than an hour ago.
        if (moment(new Date().getTime()).diff(moment(timeStamp), 'hours') >= 1) {
            hasLoggedIn = true;
        }

        console.log("this ran");

        var log = {
            submitted: new Date().getTime(),
            userId: Meteor.userId(),
            page: page,
            login: hasLoggedIn
        };

        var logId = Userlogs.insert(log);
        Meteor.users.update(Meteor.userId(), { $set: { lastActionTimestamp: log.submitted } });
        return logId;
    }
});
//router.js - this method runs in a filter on every page
'checkLoginStatus': function (page) {
    if (Meteor.userId()) {
        // Log the page that the user has switched to.
        Meteor.call('createLog', page);
        return page;
    } else if (Meteor.loggingIn()) {
        return 'loading';
    } else {
        return 'loginPage';
    }
}
However, this does not work; it ends up recursively creating user logs. I believe this is because I did a Collection.find in a router filter method. Does anyone have a workaround for this issue?
When you update Meteor.users and set lastActionTimestamp, Meteor.user will be updated and will send an invalidation signal to all reactive contexts that depend on it. If Meteor.user is used in a filter, then that filter and all consecutive ones, including checkLoginStatus, will rerun, causing a loop.
Best practices that I've found:
Avoid using reactive data sources as much as possible within filters.
Use Meteor.userId() where possible instead of Meteor.user()._id because the former will not trigger an invalidation when an attribute of the user object changes.
Order your filters so that they run with the most frequently updated reactive data source first. For example, if you have a trackPage filter that requires a user, let it run after another filter called requireUser, so that you are certain you have a user before you track. Otherwise, if you tracked first and checked the user second, then when Meteor.loggingIn changes from false to true, you'd track the page again.
This is the main reason we switched to meteor-mini-pages instead of Meteor-Router: it handles reactive data sources much more easily. A filter can redirect, it can stop() the router from running, etc.
Lastly, cmather and others are working on a new router which is a merger of mini-pages and Meteor.Router. It will be called Iron Router and I recommend using it once it's out!