How to add quantities with duplicate keys - numbers

Data :
(Boss, Count_DR)
(B1,30),
(B2,20),
(B1,40),
(B3,50)
How can I store this in Java collections, or print it, in the following format?
Duplicate keys have to be added together, so B1 becomes 30+40:
(B1,70),
(B2,20),
(B3,50)
In SQL I can simply do it as:
SELECT Boss, SUM(Count_DR) FROM table
GROUP BY Boss;
I am just looking for the simplest way of doing it in Java. Can anyone help me, please?
Thanks
Rahul

I don't do Java, but:
Create a hash map to hold the key-value pairs.
Loop through the input pairs:
if the key is present in the map, add the input value to the current value;
if the key isn't present, add the key and its value to the map.
A sketch of this in Java is below.
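A minimal sketch of that approach in Java (the class name and the sample arrays are placeholders, not from the question); Map.merge does the "add if present, insert if absent" step in one call:

import java.util.LinkedHashMap;
import java.util.Map;

public class SumByBoss {
    public static void main(String[] args) {
        // Input pairs in the order given in the question
        String[] bosses = {"B1", "B2", "B1", "B3"};
        int[] counts = {30, 20, 40, 50};

        // LinkedHashMap keeps insertion order; a plain HashMap works if order doesn't matter
        Map<String, Integer> totals = new LinkedHashMap<>();
        for (int i = 0; i < bosses.length; i++) {
            // Adds the new count to the existing total, or inserts it if the key is absent
            totals.merge(bosses[i], counts[i], Integer::sum);
        }

        // Prints (B1,70), (B2,20), (B3,50)
        totals.forEach((boss, total) -> System.out.println("(" + boss + "," + total + ")"));
    }
}

If the output should be ordered by key rather than by insertion order, swap the LinkedHashMap for a TreeMap.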

How to extract values with TSQL json_value( ) on string {"key":"key1","value":"value1"}

Thank you in advance for answering my questions.
I need to extract values using TSQL json_value() from
col1 = [{"key":"key1","value":"value1"}, {"key":"key2","value":"value2"}, ...]
I can extract a value like json_value(col1, '$[0].value'), json_value(col1, '$[1].value'), ...
However, there is no guarantee that the key-value pair order will always be the same.
So I would need to extract value using keys rather than indexes.
Do I need to manipulate the string to {"key1":"value1", "key2":"value2"} then apply json_value(col, '$.key1') to get the value? If so, how do I preprocess the string?
Thanks!
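One way to read the values by key without preprocessing the string (not from this thread; assumes SQL Server 2016 or later, and the table/column names are placeholders) is OPENJSON with an explicit schema, filtering on the extracted key:

SELECT j.[value]
FROM MyTable AS t
CROSS APPLY OPENJSON(t.col1)
     WITH ([key]   nvarchar(100)  '$."key"',
           [value] nvarchar(4000) '$."value"') AS j
WHERE j.[key] = 'key1';

This returns value1 no matter where the {"key":"key1", ...} object appears in the array, so the order of the pairs does not matter.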

Is there any way for Access 2016 to sort numbers stored in a "text" data type field as though they are numeric values?

I am working on a database that (hopefully) will end up using a primary key with both numbers and letters in its values to track lots of agricultural product. Because the weighing of product takes place at more than one facility, I have no option but to keep the same base number and append letters to it to denote split portions of each lot. The problem is that the field is text, so after I create record number 99, record 100 suddenly sorts directly underneath 10 instead of after 99. This makes it difficult to maintain consistency and forces me to add a strictly numeric value (using AutoNumber as the data type) just to keep the records sorted. Either way, I still need the alphanumeric lot ID, and having two IDs for the same lot can be confusing for anyone entering values into the form. Is there a way around this that I am just not seeing?
If you're using a query as the data source, you can try sorting by the string converted to a number, something like:
SELECT id, field1, field2, ..
FROM YourTable
ORDER BY CLng(YourAlphaNumericField)
Edit: you may also try the Val function instead of CLng; it does not fail on non-numeric input (Val("100A") simply returns 100, whereas CLng would raise a type mismatch error).
Why not format your key properly before saving, e.g. "0000099"? You will avoid a costly conversion later.
Alternatively, you could use 2 fields as the composite PK. One with the Number (as Long) and one with the Location (as String).

How to find an arbitrary key within a postgres jsonb object?

There are several operators in postgres for getting elements at a certain path in jsonb.
But how could I retrieve all the values that have a key of 'foo', if I don't know where in the whole object structure they will appear?
I saw there is a regex-matching function that would return matches, but the object keyed by 'foo' could be arbitrarily complex, so it seems tough to come up with a regex that would pull the whole object out neatly.
Thanks for your help
SELECT jsonb_column->'foo'
FROM table
[WHERE jsonb_column ? 'foo'] -- ignore values without key "foo"
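The query above only finds "foo" at the top level of each value. For keys that can appear at arbitrary depth, one option (not mentioned in this thread; assumes PostgreSQL 12+ and its jsonpath support) is the recursive wildcard .** :

SELECT jsonb_path_query(jsonb_column, '$.**.foo')
FROM table;

jsonb_path_query returns one row per match, however deeply the "foo" key is nested, with each match returned as a complete jsonb value.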

Generate random composite key

I have an array of strings, and an array of dates.
I need to use these as seeds to insert a composite key into a sqlite table.
So far I am doing this:
For dates (contains dates from now to past, user selects number of days)
For name (a unique random subset of a master array)
Insert
Is there a better way of doing this? (There always seems to be in Perl.)
use "rand($#array_name)" function, to get random index of the array you have, then just use that value, which will be different at each time.

Get duplicate records in a large file using MapReduce

I have a large file containing more than 10 million lines. I want to find the duplicate lines using MapReduce.
How can I solve this problem?
Thanks for the help
You need to make use of the fact that the default behaviour of MapReduce is to group values by a common key.
So the basic steps required are:
Read each line of your file into your mapper, probably using something like the TextInputFormat.
Set the output key (a Text object) to the content of each line. The output value doesn't really matter; you can just use a NullWritable if you want.
In the reducer, check the number of values grouped for each key. If there is more than one value, you know the line is a duplicate.
If you just want the duplicate lines, write out only the keys that have multiple values (see the sketch below).
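A minimal Hadoop sketch of those steps (class names are placeholders; assumes the org.apache.hadoop.mapreduce API, with the default TextInputFormat supplying one line per map call):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DuplicateLines {

    // Emits each line as the key; the value carries no information
    public static class LineMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            context.write(line, NullWritable.get());
        }
    }

    // Identical lines arrive grouped under one key; more than one value means a duplicate
    public static class DuplicateReducer extends Reducer<Text, NullWritable, Text, NullWritable> {
        @Override
        protected void reduce(Text line, Iterable<NullWritable> values, Context context)
                throws IOException, InterruptedException {
            int count = 0;
            for (NullWritable ignored : values) {
                count++;
            }
            if (count > 1) {
                context.write(line, NullWritable.get()); // write the duplicated line once
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "duplicate lines");
        job.setJarByClass(DuplicateLines.class);
        job.setMapperClass(LineMapper.class);
        job.setReducerClass(DuplicateReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input path
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output path, must not exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}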