CosmosDB Paging Return Value - REST

I am trying to return paged results from a CosmosDB request. I saw this example (below) but I am not sure what to do with the response variable:
// Fetch query results 10 at a time.
var queryable = client.CreateDocumentQuery<Book>(collectionLink, new FeedOptions { MaxItemCount = 10 }).AsDocumentQuery();
while (queryable.HasMoreResults)
{
    FeedResponse<Book> response = await queryable.ExecuteNextAsync<Book>();
}
Am I supposed to return it directly, or do I have to do something further with the response variable? I tried to return the response variable directly and it's not working. Here's my code:
public async Task<IEnumerable<T>> RunQueryAsync(string queryString)
{
    var feedOptions = new FeedOptions { MaxItemCount = 3 };
    IQueryable<T> filter = _client.CreateDocumentQuery<T>(_collectionUri, queryString, feedOptions);
    IDocumentQuery<T> query = filter.AsDocumentQuery();
    var response = new FeedResponse<T>();
    while (query.HasMoreResults)
    {
        response = await query.ExecuteNextAsync<T>();
    }
    return response;
}
Update:
After reading Evandro Paula's answer, I followed the URL and changed my implementation to the code below, but it is still giving me a 500 status code:
public async Task<IEnumerable<T>> RunQueryAsync(string queryString)
{
    var feedOptions = new FeedOptions { MaxItemCount = 1 };
    IQueryable<T> filter = _client.CreateDocumentQuery<T>(_collectionUri, queryString, feedOptions);
    IDocumentQuery<T> query = filter.AsDocumentQuery();
    List<T> results = new List<T>();
    while (query.HasMoreResults)
    {
        foreach (T t in await query.ExecuteNextAsync())
        {
            results.Add(t);
        }
    }
    return results;
}
And here's the exception message:
Cross partition query is required but disabled. Please set
x-ms-documentdb-query-enablecrosspartition to true, specify
x-ms-documentdb-partitionkey, or revise your query to avoid this
exception., Windows/10.0.17134 documentdb-netcore-sdk/1.9.1
Update 2:
I set EnableCrossPartitionQuery to true and I am now able to get a response from CosmosDB. But I am not getting the 1 item that I asked for; instead, I got 11 items.

Find below a simple example of how to run a paged CosmosDB/SQL query:
private static async Task Query()
{
    Uri uri = new Uri("https://{CosmosDB/SQL Account Name}.documents.azure.com:443/");
    DocumentClient documentClient = new DocumentClient(uri, "{CosmosDB/SQL Account Key}");
    int currentPageNumber = 1;
    int documentNumber = 1;
    IDocumentQuery<Book> query = documentClient.CreateDocumentQuery<Book>(
        "dbs/{CosmosDB/SQL Database Name}/colls/{CosmosDB/SQL Collection Name}",
        new FeedOptions { MaxItemCount = 10 }).AsDocumentQuery();
    while (query.HasMoreResults)
    {
        Console.WriteLine($"----- PAGE {currentPageNumber} -----");
        foreach (Book book in await query.ExecuteNextAsync<Book>())
        {
            Console.WriteLine($"[{documentNumber}] {book.Id}");
            documentNumber++;
        }
        currentPageNumber++;
    }
}
Per the exception described in your question (Cross partition query is required but disabled), update the feed options as follows:
var feedOptions = new FeedOptions { MaxItemCount = 1, EnableCrossPartitionQuery = true};
Find a more comprehensive example at https://github.com/Azure/azure-documentdb-dotnet/blob/d17c0ca5be739a359d105cf4112443f65ca2cb72/samples/code-samples/Queries/Program.cs#L554-L576.
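If the goal is to return a single page per call (for example, from a REST endpoint) instead of draining every page, the continuation token can be handed back to the caller. Here is a minimal sketch, assuming the same _client and _collectionUri fields from your code; GetPageAsync is a hypothetical helper name:
public async Task<(IEnumerable<T> Items, string Continuation)> GetPageAsync(string queryString, string continuation = null)
{
    var feedOptions = new FeedOptions
    {
        MaxItemCount = 10,                  // page size
        EnableCrossPartitionQuery = true,   // needed for cross-partition queries
        RequestContinuation = continuation  // null on the first call
    };
    IDocumentQuery<T> query = _client.CreateDocumentQuery<T>(_collectionUri, queryString, feedOptions).AsDocumentQuery();
    FeedResponse<T> page = await query.ExecuteNextAsync<T>(); // fetch exactly one page
    return (page.ToList(), page.ResponseContinuation);        // token is null when no pages remain
}
The caller passes the returned token back in on the next request to resume where the previous page ended.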

You are not specifying any WHERE criteria for your specific item, so you are getting all results. Try specifying criteria (id, name, etc.) for the item you are looking for. Also keep in mind that cross-partition queries consume considerably more RUs and time, so you may want to revisit the architecture of your data model; ideally, always query within a single partition.
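Note also that MaxItemCount only caps how many documents come back per page; the while loop still drains every page, which is why 11 items come back despite MaxItemCount = 1. To limit the total result set, constrain the query itself. A minimal sketch, using a hypothetical document id:
var feedOptions = new FeedOptions { MaxItemCount = 1, EnableCrossPartitionQuery = true };
// TOP limits the total number of results; MaxItemCount only limits the page size.
IDocumentQuery<T> query = _client.CreateDocumentQuery<T>(_collectionUri,
    "SELECT TOP 1 * FROM c WHERE c.id = 'some-id'", feedOptions).AsDocumentQuery();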

Related

Error VS403357 when batch creating many work items in Azure DevOps using .NET API

I'm trying to use the Azure DevOps .NET API to batch-create work items in an Azure DevOps repository, but when I submit the batch request, I get back an error message: "VS403357: Work items in the batch are expected to be unique, but found work item with ID -1 in more than one request."
Here's my code:
public void ExecuteWorkItemMigration(int[] workItemIds, IProgress<ProgressResult> progress = null)
{
    var wiql = "SELECT * FROM WorkItems";
    var query = new Query(_workItemStore, wiql, workItemIds);
    var workItemCollection = query.RunQuery();
    string projectName = MainSettings.AzureDevOpsSettings.ProjectName;
    List<WitBatchRequest> batchRequests = new List<WitBatchRequest>();
    foreach (WorkItemTfs tfsWorkItem in workItemCollection)
    {
        JsonPatchDocument document = CreateJsonPatchDocument(tfsWorkItem);
        string workItemType = GetWorkItemType(tfsWorkItem);
        WitBatchRequest wibr = _azureDevopsWorkItemTrackingClient.CreateWorkItemBatchRequest(projectName, workItemType, document, true, true);
        batchRequests.Add(wibr);
    }
    List<WitBatchResponse> results = _azureDevopsWorkItemTrackingClient.ExecuteBatchRequest(batchRequests).Result;
}
private static JsonPatchDocument CreateJsonPatchDocument(WorkItemTfs tfsWorkItem, int id = -1)
{
    var document = new JsonPatchDocument();
    document.Add(
        new JsonPatchOperation
        {
            Path = "/id",
            Operation = Operation.Add,
            Value = id
        });
    document.Add(
        new JsonPatchOperation
        {
            Path = "/fields/System.Title",
            Operation = Operation.Add,
            Value = tfsWorkItem.Title
        });
    if (tfsWorkItem.Fields.Contains("ReproSteps"))
        document.Add(
            new JsonPatchOperation
            {
                Path = "/fields/Microsoft.VSTS.TCM.ReproSteps",
                Operation = Operation.Add,
                Value = tfsWorkItem.Fields["ReproSteps"].Value
            });
    return document;
}
Any suggestions about what I need to do to get this working properly?
I have tried submitting different unique IDs, but it doesn't seem to prevent the error from happening.
You need to use a unique negative ID for each work item you create in the batch.
Something like this:
public void ExecuteWorkItemMigration(int[] workItemIds, IProgress<ProgressResult> progress = null)
{
    var wiql = "SELECT * FROM WorkItems";
    var query = new Query(_workItemStore, wiql, workItemIds);
    var workItemCollection = query.RunQuery();
    string projectName = MainSettings.AzureDevOpsSettings.ProjectName;
    List<WitBatchRequest> batchRequests = new List<WitBatchRequest>();
    int id = -1;
    foreach (WorkItemTfs tfsWorkItem in workItemCollection)
    {
        // Pass a different negative ID for every request in the batch.
        JsonPatchDocument document = CreateJsonPatchDocument(tfsWorkItem, id--);
        string workItemType = GetWorkItemType(tfsWorkItem);
        WitBatchRequest wibr = _azureDevopsWorkItemTrackingClient.CreateWorkItemBatchRequest(projectName, workItemType, document, true, true);
        batchRequests.Add(wibr);
    }
    List<WitBatchResponse> results = _azureDevopsWorkItemTrackingClient.ExecuteBatchRequest(batchRequests).Result;
}
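The negative IDs act as temporary placeholders: they must be unique within the batch, and they are replaced with real IDs when the batch is committed. As a follow-up, it is worth checking each entry of the batch response for per-request failures; here is a minimal sketch, assuming the results list from above and the Code/Body properties exposed by WitBatchResponse:
foreach (WitBatchResponse response in results)
{
    // Each entry in a batch response carries its own HTTP status code.
    if (response.Code < 200 || response.Code >= 300)
    {
        Console.WriteLine($"Batch entry failed with status {response.Code}: {response.Body}");
    }
}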

Weird issue while processing events coming from Kinesis

I set up Amazon Connect on AWS, and if I make a test call it puts that call in an AWS Kinesis stream. I am trying to write a Lambda that processes these records and saves them to a database.
If I make a simple call (call the number - answer - hang up), it works just fine. However, if I make a multipart call (call a number - answer - transfer to another number - hang up), it arrives in Kinesis as two separate records (CTRs).
My Lambda processes the CTRs (Contact Trace Records) one by one. First it saves the CTR to a table called call_segments, and then it queries this table to see if the other part of the call is already there. If it is, it merges the data and saves it to a table called completed_calls; otherwise it skips it.
So if a call has more than one segment (it was transferred to another number), it reaches the Lambda as two events.
My problem is that even though I am processing the events one after the other, when the second event is processed (by which time the call segment from the first event should already be in the database), it cannot see the first segment of the call.
Here is my code:
const callRecordService = require("./call-records-service");

exports.handler = async (event) => {
    await Promise.all(
        event.Records.map(async (record) => {
            return processRecord(record);
        })
    );
};

const processRecord = async function (record) {
    try {
        const payloadStr = Buffer.from(record.kinesis.data, "base64").toString("ascii");
        let payload = JSON.parse(payloadStr);
        await callRecordService.processCTR(payload);
    } catch (err) {
        // console.error(err);
    }
};
And here is the service file:
async function processCTR(ctr) {
    let userId = "12";
    let result = await saveCtrToContactDetails(ctr, userId);
    let paramsForCallSegments = [ctr.InstanceARN.split("instance/").pop(), ctr.ContactId];
    let currentCallSegements = await dbHelper.getAll(dbQueries.getAllCallSegmentsQuery, paramsForCallSegments);
    let completedCall = checkIfCallIsComplete(currentCallSegements);
    if (completedCall) {
        console.log('call is complete');
        let results = await saveCallToCompletedCalls(completedCall);
    }
}

//------------- Private functions --------------------
const saveCtrToContactDetails = async (ctr, userId) => {
    let params = [ctr.ContactId, userId, ctr.callDuration];
    let results = await dbHelper.executeQuery(dbQueries.getInsertCallDetailsRecordsQuery, params);
    return results;
};

const checkIfCallIsComplete = (currentCallSegements) => {
    // This function checks whether all call segments are already in the call_segments table.
};

const saveCallToCompletedCalls = async (completedCall) => {
    let contact_id = completedCall[0].contact_id;
    let user_id = completedCall[0].user_id;
    let call_duration = 0;
    completedCall.forEach(callSegment => {
        call_duration += callSegment.call_duration;
    });
    let params = [contact_id, user_id, call_duration];
    let results = await dbHelper.executeQuery(dbQueries.getInsertToCompletedCallQuery, params);
};
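One thing worth noting about the handler above: Promise.all combined with map starts the processing of every record at once, so the records are handled concurrently rather than one after the other, and two segments of the same call can race each other into the database. A minimal sketch of strictly sequential processing, reusing the processRecord function from the question:
exports.handler = async (event) => {
    // Await each record in turn so segment 1 is committed before segment 2 is processed.
    for (const record of event.Records) {
        await processRecord(record);
    }
};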

Adding entities using AddRange method does not refresh entity ID

I am using EF Core 1.1.1. I have noticed that when I add an IEnumerable<Entity> using the AddRange method and then call SaveChanges(), the entities get saved in the database, but their IDs do not get refreshed.
The code below does not refresh the IDs after SaveChanges(). Note that I am passing requests as an IEnumerable:
public async Task Post([FromBody]IEnumerable<string> values)
{
    var requests = values.Select(x => new Test()
    {
        Name = x,
        Status = "Init"
    });
    await _dbContext.Tests.AddRangeAsync(requests).ConfigureAwait(false);
    await _dbContext.SaveChangesAsync().ConfigureAwait(false);
    foreach (var r in requests)
    {
        var id = r.ID;
    }
}
The code below also does not refresh the IDs after SaveChanges(). Note that I am passing requests.ToList() as the parameter to AddRange:
public async Task Post([FromBody]IEnumerable<string> values)
{
    var requests = values.Select(x => new Test()
    {
        Name = x,
        Status = "Init"
    });
    await _dbContext.Tests.AddRangeAsync(requests.ToList()).ConfigureAwait(false);
    await _dbContext.SaveChangesAsync().ConfigureAwait(false);
    foreach (var r in requests)
    {
        var id = r.ID;
    }
}
The code below does refresh the IDs after SaveChanges(). Note that I am calling ToList() after selecting the values:
public async Task Post([FromBody]IEnumerable<string> values)
{
    var requests = values.Select(x => new Test()
    {
        Name = x,
        Status = "Init"
    }).ToList(); //<------ ToList() or ToArray() would work
    await _dbContext.Tests.AddRangeAsync(requests).ConfigureAwait(false);
    await _dbContext.SaveChangesAsync().ConfigureAwait(false);
    foreach (var r in requests)
    {
        var id = r.ID;
    }
}
I am not sure if this is a bug in EF or if this is how it is supposed to work. I understand that IEnumerable is lazy while List and Array are eager, but if the AddRange method takes an IEnumerable as a parameter, it should work regardless.
As Ivan states, the reason you are not seeing the IDs is that in the non-working cases you are enumerating new Test objects.
If you place a breakpoint inside the projection, you will see that the foreach creates new Test objects at that moment. These are NOT the same objects that were placed in the database.
You are actually enumerating the IEnumerable twice:
public async Task Post([FromBody]IEnumerable<string> values)
{
    var requests = values.Select(x =>
    {
        // Place a breakpoint here: it is hit again when the foreach below enumerates requests a second time.
        return new Test()
        {
            Name = x,
            Status = "Init"
        };
    });
    await _dbContext.Tests.AddRangeAsync(requests.ToList()).ConfigureAwait(false);
    await _dbContext.SaveChangesAsync().ConfigureAwait(false);
    foreach (var r in requests)
    {
        var id = r.ID;
    }
}
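The underlying behavior is LINQ's deferred execution: every enumeration of the Select re-runs the projection and yields fresh objects. A minimal standalone sketch illustrating this, independent of EF:
using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        var source = new[] { "a", "b" };
        // The projection below runs again on every enumeration of seq.
        var seq = source.Select(x => new object());
        // Prints False: the two enumerations produced different instances.
        Console.WriteLine(ReferenceEquals(seq.First(), seq.First()));
    }
}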

Get Office 365 mail in oldest-first order using OutlookServicesClient

ResponseModel responseModel = new ResponseModel();
var contacts = new List<Contact>();
OutlookServicesClient client = new OutlookServicesClient(new Uri("https://outlook.office.com/api/v2.0/"),
    async () =>
    {
        return oValidationResponse.access_token;
    });
try
{
    var userDetail = await client.Me.Contacts.ExecuteAsync();
How do I use it with OrderBy on CreatedDateTime, i.e.
var userDetail = await client.Me.Contacts.OrderBy(x=>x.CreatedDateTime).ExecuteAsync();
This syntax gives the error that IContact does not contain CreatedDateTime, so now I have no other way to do this.
Based on your code, you were retrieving the contacts. Here is a sample that retrieves messages and orders them by ReceivedDateTime:
OutlookServicesClient client = new OutlookServicesClient(new Uri("https://outlook.office.com/api/v2.0/"), () =>
{
    return Task.Delay(10).ContinueWith(t => accessToken);
});
var Messages = client.Me.Messages.OrderBy(msg => msg.ReceivedDateTime).Take(20).ExecuteAsync().Result;
int i = 0;
foreach (var msg in Messages.CurrentPage)
{
    Console.WriteLine($"({++i,-3}:){msg.Subject,-50}:\t{msg.ReceivedDateTime,-30}");
}
More detail about the Mail REST API can be found here.
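In an async method, the same query can also be awaited rather than blocked on with .Result, which avoids potential deadlocks; a minimal sketch under that assumption:
var messages = await client.Me.Messages
    .OrderBy(msg => msg.ReceivedDateTime) // ascending: oldest first
    .Take(20)
    .ExecuteAsync();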

Querying OCB from JavaScript (WireCloud)

I'm trying to get the type field of each attribute of my entities. Querying Orion and getting the entities is not the problem (I do this through the NGSI Source widget); the problem is how to get these parameters.
From NGSI Source (the usual subscription to an Orion instance):
var doInitialSubscription = function doInitialSubscription() {
    this.subscriptionId = null;
    this.ngsi_server = MashupPlatform.prefs.get('ngsi_server');
    this.ngsi_proxy = MashupPlatform.prefs.get('ngsi_proxy');
    this.connection = new NGSI.Connection(this.ngsi_server, {
        ngsi_proxy_url: this.ngsi_proxy
    });
    var types = MashupPlatform.prefs.get('ngsi_entities').split(new RegExp(',\\s*'));
    var entityIdList = [];
    var entityId;
    for (var i = 0; i < types.length; i++) {
        entityId = {
            id: '.*',
            type: types[i],
            isPattern: true
        };
        entityIdList.push(entityId);
    }
    var attributeList = null;
    var duration = 'PT3H';
    var throttling = null;
    var notifyConditions = [{
        'type': 'ONCHANGE',
        'condValues': MashupPlatform.prefs.get('ngsi_update_attributes').split(new RegExp(',\\s*'))
    }];
    var options = {
        flat: true,
        onNotify: handlerReceiveEntity.bind(this),
        onSuccess: function (data) {
            this.subscriptionId = data.subscriptionId;
            this.refresh_interval = setInterval(refreshNGSISubscription.bind(this), 1000 * 60 * 60 * 2); // each 2 hours
            window.addEventListener("beforeunload", function () {
                this.connection.cancelSubscription(this.subscriptionId);
            }.bind(this));
        }.bind(this)
    };
    this.connection.createSubscription(entityIdList, attributeList, duration, throttling, notifyConditions, options);
};

var handlerReceiveEntity = function handlerReceiveEntity(data) {
    for (var entityId in data.elements) {
        MashupPlatform.wiring.pushEvent("entityOutput", JSON.stringify(data.elements[entityId]));
    }
};
To MyWidget:
MashupPlatform.wiring.registerCallback("entityInput", function (entityString) {
    var entity;
    entity = JSON.parse(entityString);
    id = entity.id;
    type = entity.type;
    for (var attr in entity) {
        attribute = entity[attr];
    }
});
I'm trying to code something similar to obtain the value of type fields. How can I do that? (I'm sure it's quite easy...)
You cannot make use of the current NGSI source operator implementation (at least v3.0.2) if you want to get the type metadata of the attributes, as the NGSI source makes use of the flat option (which discards that info).
We are studying how to update this operator to allow creating subscriptions without using the flat option. The main problem here is that other components expect the data provided by this operator in the format returned when using the flat option. I will update this answer after analysing the issue more deeply.