Entity Framework Group Sum handling empty values - entity-framework

I'm trying to return a monthly sum from the database table CustomerComplaintExpenseRows. Since I need a value for every month even when it's 0, I have to join a range of months against the grouped sum query. In var q, cc can be null for some months, and the decimal data type won't accept the "?? 0" operation. How can I get my code to return 0 when the value is missing?
var firstDayOfYear = new DateTime(DateTime.Now.Year, 1, 1);
var lastDayOfYear = new DateTime(DateTime.Now.Year, 12, 31);
var months = Enumerable.Range(1, 12).ToList();
var complaints = _context.CustomerComplaintExpenseRows
    .Where(a => a.CustomerComplaint.ComplaintDate > firstDayOfYear
        && a.CustomerComplaint.ComplaintDate < lastDayOfYear)
    .GroupBy(o => o.CustomerComplaint.ComplaintDate.Month)
    .Select(a => new
    {
        a.Key,
        sum = a.Sum(i => i.RowValue)
    })
    .ToList();
var q = from m in months
        join cc in complaints on m equals cc.Key into joined
        from cc in joined.DefaultIfEmpty()
        select new
        {
            name = CultureInfo.CreateSpecificCulture("fi-FI")
                .DateTimeFormat
                .GetAbbreviatedMonthName(m),
            value = cc.sum
        };
Example results I'm looking for:
[
    { "name": "Jan", "value": 300.00 },
    { "name": "Feb", "value": 0.00 },
    { "name": "Mar", "value": 5.30 }
]

Just for completeness: for months with no rows, cc itself is null (from DefaultIfEmpty), so set value with the null-conditional operator:
value = cc?.sum ?? 0
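The fill-every-month idea is independent of LINQ; here is a minimal Python sketch of it (the month sums are made-up sample data, and calendar.month_abbr stands in for the fi-FI abbreviated month names):

```python
# Every month 1..12 gets an entry; months with no grouped sum fall back to 0.
from calendar import month_abbr

# month -> summed RowValue (sample data, only some months present)
complaints = {1: 300.00, 3: 5.30}

result = [
    {"name": month_abbr[m], "value": complaints.get(m, 0)}
    for m in range(1, 13)
]
```

dict.get with a default plays the same role as DefaultIfEmpty plus the null-coalescing fallback.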

Related

How to search a list of Object by another list of items in dart

How do I search a list of class objects where one of the properties matches any value in another list of strings? I am able to filter based on a single string, but not on a list of strings.
final List<shop_cart.ShoppingCart> cartprd = snapshot.documents
    .map((f) => shop_cart.ShoppingCart.fromMap(f.data))
    .toList();
List<SomeClass> list = ...; // the list to search
List<String> matchingList = ...; // the strings you want to match against
final matches = list.where((item) => matchingList.contains(item.relevantProperty));
If the number of items in list is large, you might want to do:
List<SomeClass> list = ...; // the list to search
List<String> matchingList = ...; // the strings you want to match against
final matchingSet = HashSet.from(matchingList);
final matches = list.where((item) => matchingSet.contains(item.relevantProperty));
Or else just always store the matching values as a hashset.
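To see why the HashSet helps, here is the same lookup pattern sketched in Python (sample data; Python's built-in set plays the role of Dart's HashSet): membership tests on a list are O(n) per item, on a set roughly O(1).

```python
# Filter items whose property appears in a list of wanted values.
matching_list = ["a", "b", "c"]  # values to match against (sample data)
items = [{"prop": "a"}, {"prop": "x"}, {"prop": "c"}]

# Build the set once, then each membership test is a hash lookup.
matching_set = set(matching_list)
filtered = [item for item in items if item["prop"] in matching_set]
```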
If you want to check for a value in a list of objects, you can do this:
List rows = [
  {"ags": "01224", "name": "Test-1"},
  {"ags": "01224", "name": "Test-1"},
  {"ags": "22222", "name": "Test-2"},
];

bool isDataExist(String value) {
  var data = rows.where((row) => row["name"].contains(value));
  return data.isNotEmpty;
}
You can substitute your own array of objects for rows and replace "name" with your key, then act on the true or false returned from isDataExist.
As of today, you can't.
(A side note: you can use .where, .singleWhere, or .firstWhere; the Dart Iterable documentation describes the various list methods.)
You can simply use .where() to filter a list:
final cartprd = snapshot.documents
    .where((f) => shop_cart.ShoppingCart.contains(f.data))
    .toList();
var one = [
  {'id': 1, 'name': 'jay'},
  {'id': 2, 'name': 'jay11'},
  {'id': 13, 'name': 'jay222'}
];
int newValue = 13;
print(one
    .where((oldValue) => newValue.toString() == oldValue['id'].toString()));
OUTPUT : ({id: 13, name: jay222})
Store the output in a variable and check isEmpty: if it is empty, the new value is unique; otherwise it is not:
var checkValue = one
    .where((oldValue) => newValue.toString() == oldValue['id'].toString())
    .isEmpty;
if (checkValue) {
  print('Unique');
} else {
  print('Not Unique');
}
OUTPUT : Not Unique

Ag-grid: Count the number of rows for each filter choice

In my ag-grid I want to display counts of rows next to each filter choice in a set filter, and maybe sort choices by that count (descending).
This is what it looks like by default:
I want the choices to be displayed as
Select All (88)
Katie Taylor (2)
Darren Sutherland (1)
John Joe Nevin (1)
Barack Obama (0)
...
What is the most efficient way to get those counts (and maybe sort the choices accordingly) from the row data, taking into account filters already set in the other fields (if any)?
Assuming your column's field is "name", you could try building up a map of counts and referring to it in the filter's cellRenderer:
var nameValueToCount = {};

function updateNameValueCounts() {
    nameValueToCount = {};
    gridOptions.api.forEachNodeAfterFilter((node) => {
        if (!nameValueToCount.hasOwnProperty(node.data.name)) {
            nameValueToCount[node.data.name] = 1;
        } else {
            nameValueToCount[node.data.name] += 1;
        }
    });
}
And your column def would look like this:
{
    headerName: "Name",
    field: "name",
    width: 120,
    filter: 'set',
    filterParams: {
        cellRenderer: NameFilterCellRenderer
    }
},
And finally, the NameFilterCellRenderer would look like this:
function NameFilterCellRenderer() {
}

NameFilterCellRenderer.prototype.init = function (params) {
    this.value = params.value;
    this.eGui = document.createElement('span');
    this.eGui.appendChild(document.createTextNode(
        this.value + " (" + nameValueToCount[params.value] + ")"));
};

NameFilterCellRenderer.prototype.getGui = function () {
    return this.eGui;
};
You would need to ensure that you call updateNameValueCounts whenever the data changes (new or updated data, a filter update, etc.), but this should work for your use case, I think.
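The per-value count map the renderer reads can be built in a single pass over the visible rows; a minimal Python sketch of the same idea, using made-up row data:

```python
# Count occurrences of each "name" among the rows that pass the other filters.
from collections import Counter

rows_after_filter = [  # sample data
    {"name": "Katie Taylor"},
    {"name": "Katie Taylor"},
    {"name": "Darren Sutherland"},
]
name_value_to_count = Counter(row["name"] for row in rows_after_filter)
```

Counter returns 0 for values never seen, which matches the "Barack Obama (0)" entry in the desired output.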

MongoDB String attribute to Date

I have a collection with around 500,000 entries in MongoDB.
Each entry has an attribute Date in the following format:
"Date" : "21/01/2005"
I'd like to know how I can convert it to the Date format, so I can then index it (oldest to newest) and query for entries by year.
I have tried:
db.collection.find().forEach(function(element) {
    element.OrderDate = ISODate(element.OrderDate);
    db.collection.save(element);
})
But this just seems to change the Date attribute to today's date, along with the time, in the following format:
"Date" : ISODate("2016-02-11T11:41:45.680Z")
Thank you in advance.
Convert the field to a proper date object by splitting the string on the "/" delimiter. Use parseInt() to convert the strings into numbers, and let the new Date() constructor build a Date from those parts: the third part is the year, the second the month, and the first the day. Since Date uses zero-based month numbers, you have to subtract one from the month number.
The following demonstrates this approach:
var cursor = db.collection.find({"OrderDate": {"$exists": true, "$type": 2 }});
while (cursor.hasNext()) {
    var doc = cursor.next();
    var parts = doc.OrderDate.split("/");
    var dt = new Date(
        parseInt(parts[2], 10),      // year
        parseInt(parts[1], 10) - 1,  // month (zero-based)
        parseInt(parts[0], 10)       // day
    );
    db.collection.update(
        {"_id": doc._id},
        {"$set": {"OrderDate": dt}}
    );
}
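The day/month/year split can be sanity-checked in isolation; here is a small Python sketch of the same parsing logic (strptime's %d/%m/%Y format does the splitting and month bookkeeping for you):

```python
# Parse a "day/month/year" string such as "21/01/2005" into a datetime.
from datetime import datetime

def parse_order_date(s):
    # %d = day, %m = month, %Y = four-digit year
    return datetime.strptime(s, "%d/%m/%Y")
```

Getting the day/month order wrong is the classic failure mode here: "21/01/2005" parsed as month-first would raise, but "03/01/2005" would silently produce March 1st.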
For improved performance, especially when dealing with large collections, take advantage of the Bulk API for bulk updates: you send the operations to the server in batches (say, 500), which performs better because you make one round trip per 500 operations instead of one per document.
The following demonstrates this approach. The first example uses the Bulk API available in MongoDB versions >= 2.6 and < 3.2. It updates all the documents in the collection by changing the OrderDate fields to date fields:
var bulk = db.collection.initializeUnorderedBulkOp(),
    counter = 0;

db.collection.find({"OrderDate": {"$exists": true, "$type": 2 }}).forEach(function (doc) {
    var parts = doc.OrderDate.split("/");
    var dt = new Date(
        parseInt(parts[2], 10),      // year
        parseInt(parts[1], 10) - 1,  // month (zero-based)
        parseInt(parts[0], 10)       // day
    );
    bulk.find({ "_id": doc._id }).updateOne({
        "$set": { "OrderDate": dt }
    });
    counter++;
    if (counter % 500 == 0) {
        // Execute per 500 operations and re-initialize
        bulk.execute();
        bulk = db.collection.initializeUnorderedBulkOp();
    }
});

// Clean up remaining operations in the queue
if (counter % 500 != 0) { bulk.execute(); }
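The flush-every-500 pattern above is independent of MongoDB; here is a minimal Python sketch of it (execute() is a hypothetical stand-in for bulk.execute(), and the 1203 pending operations are sample data):

```python
# Queue operations, flush in batches of 500, then flush the remainder.
executed_batches = []

def execute(batch):
    # Stand-in for bulk.execute(): records what would be sent to the server.
    executed_batches.append(list(batch))

operations = list(range(1203))  # 1203 pending updates (sample data)
batch = []
for op in operations:
    batch.append(op)
    if len(batch) == 500:
        execute(batch)
        batch = []
if batch:  # flush whatever is left over
    execute(batch)
```

Forgetting the final flush is the common bug in this pattern: the last 203 operations would silently never run.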
The next example applies to MongoDB 3.2, which deprecated the Bulk API and provided a newer set of APIs using bulkWrite():
var bulkOps = db.collection.find({"OrderDate": {"$exists": true, "$type": 2 }}).map(function (doc) {
    var parts = doc.OrderDate.split("/");
    var dt = new Date(
        parseInt(parts[2], 10),      // year
        parseInt(parts[1], 10) - 1,  // month (zero-based)
        parseInt(parts[0], 10)       // day
    );
    return {
        "updateOne": {
            "filter": { "_id": doc._id },
            "update": { "$set": { "OrderDate": dt } }
        }
    };
});

db.collection.bulkWrite(bulkOps);

MongoDB tweet hashtags coincidence count

I have some tweets downloaded to my mongodb.
The tweet document looks something like this:
{
"_id" : NumberLong("542499449474273280"),
"retweeted" : false,
"in_reply_to_status_id_str" : null,
"created_at" : ISODate("2014-12-10T02:02:02Z"),
"hashtags" : [
"Canucks",
"allhabs",
"GoHabsGo"
]
...
}
I want to construct a query/aggregation/map-reduce that gives me the count of tweets that share the same two hashtags. For every pair of distinct hashtags it should give me the count of tweets, e.g.:
{'count': 12, 'pair': ['malaria', 'Ebola']}
{'count': 1, 'pair': ['Nintendo', '8bit']}
{'count': 1, 'pair': ['guinea', 'Ebola']}
{'count': 1, 'pair': ['fitness', 'HungerGames']}
...
I've made a python script to do this:
hashtags = set()
tweets = db.tweets.find({}, {'hashtags': 1})

# gather all hashtags from every tweet
for t in tweets:
    hashtags.update(t['hashtags'])

hashtags = list(hashtags)
hashtag_count = []
for i, h1 in enumerate(hashtags):
    for j, h2 in enumerate(hashtags):
        if i > j:
            count = db.tweets.find({'hashtags': {'$all': [h1, h2]}}).count()
            if count > 0:
                pair = {'pair': [h1, h2], 'count': count}
                print(pair)
                db.hashtags_pairs.insert(pair)
But I want to make it just with a query or JS functions to use the map-reduce.
Any ideas?
There's no aggregation pipeline or query that can compute this from your given document structure, so you'll have to use map/reduce unless you want to drastically change the collection structure or construct a secondary collection. The map/reduce, however, is straightforward: in the map phase, emit (sorted hashtag pair, 1) for each pair of hashtags in the document, then sum the values for each key in the reduce phase.
var map = function() {
    var tags = this.hashtags;
    var k = tags.length;
    for (var i = 0; i < k; i++) {
        for (var j = 0; j < i; j++) {
            if (tags[i] != tags[j]) {
                var ts = [tags[i], tags[j]].sort();
                emit({ "t0": ts[0], "t1": ts[1] }, 1);
            }
        }
    }
};
var reduce = function(key, values) { return Array.sum(values); };
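The emit/sum pipeline above is equivalent to counting sorted hashtag pairs; a minimal Python sketch of the same computation using itertools.combinations and collections.Counter (the tweets are sample data):

```python
# Count, for every unordered pair of hashtags, how many tweets contain both.
from collections import Counter
from itertools import combinations

tweets = [  # sample data
    {"hashtags": ["Canucks", "allhabs", "GoHabsGo"]},
    {"hashtags": ["allhabs", "Canucks"]},
]

pair_counts = Counter()
for tweet in tweets:
    # Sorting makes the pair key order-independent, like the .sort() in map().
    unique_tags = sorted(set(tweet["hashtags"]))
    for pair in combinations(unique_tags, 2):
        pair_counts[pair] += 1
```

Deduplicating per tweet and sorting the pair key are the two details that keep ("a", "b") and ("b", "a") from being counted separately.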

mongodb query with group()?

This is my collection structure:
coll {
    id: ...,
    fieldA: {
        fieldA1: [
            { ... }
        ],
        fieldA2: [
            { text: "ciao" },
            { text: "hello" }
        ]
    }
}
I want to extract all fieldA2 text values in my collection, but if a value appears two or more times I want to show it only once.
I tried
db.runCommand({distinct: 'coll', key: 'fieldA.fieldA2.text'})
but no luck; this returned all fieldA1 values in the collection. So I tried
db.coll.group({
    key: { 'fieldA.fieldA2.text': 1 },
    cond: { },
    reduce: function (curr, result) { },
    initial: { }
})
but this returns an empty array...
How can I do this, and how can I see the execution time? Thank you very much.
Since you are running 2.0.4 (I recommend upgrading), you must run this through map/reduce (I think; maybe there is a better way). Something like:
map = function() {
    for (i in this.fieldA.fieldA2) {
        // emit per text value so that this will group unique text values
        emit(this.fieldA.fieldA2[i].text, 1);
    }
}

reduce = function(key, values) {
    // Now let's just do a simple count of how many times that text value was seen
    var count = 0;
    for (index in values) {
        count += values[index];
    }
    return count;
}
This will give you a collection of documents where _id is the unique text value from fieldA2 and the value field is the number of times it appeared in the collection.
Again, this is a draft and is not tested.
I think the answer is simpler than a map/reduce: if you just want distinct values plus execution time, the following should work:
var startTime = new Date()
var values = db.coll.distinct('fieldA.fieldA2.text');
var endTime = new Date();
print("Took " + (endTime - startTime) + " ms");
That would result in a values array with a list of distinct fieldA.fieldA2.text values:
[ "ciao", "hello", "yo", "sayonara" ]
And a reported execution time:
Took 2 ms
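The same measure-around-a-distinct pattern can be sketched in Python for illustration (the text values are sample data; a set does the deduplication that distinct() does server-side):

```python
# Time a distinct-values computation over a plain list of text values.
import time

texts = ["ciao", "hello", "ciao", "yo", "sayonara", "hello"]  # sample data

start = time.perf_counter()
values = sorted(set(texts))  # deduplicate, sorted for a stable result
elapsed_ms = (time.perf_counter() - start) * 1000
```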