db.fs.files.distinct( "metadata.user" )
[
"5027",
"6048",
"6049",
]
The X below marks where I would like the numbers from the above query to appear.
db.fs.files.find({ 'metadata.user' : X, 'metadata.folder' : 'inbox' }).count()
I'm trying to find a way to iterate through each of the users in the first query and count the total number of results in the second query. Is there an easy way to craft this query in the MongoDB Shell?
The output I'm looking for would be (just plain numbers):
User Inbox
5027 9872
6048 12
6049 125
Update:
I was able to accomplish something pretty close to what I was looking for:
# for x in $(psql -d ****** postgres -c "select user_name from users where user_name ~ '^[0-9]+$' order by user_name;" | tail -n +3 | head -n -2); do mongo vmdb --quiet --eval "db.fs.files.find({ 'metadata.user' : '$x'}).count()"; done| sort -nr
1381
1073
982
However, I'm missing out on the username part. The point is to generate a list of users with the number of messages in their mailboxes.
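For example, echoing the user in front of each count pairs the two up; this is a sketch reusing the same (redacted) connection details as the one-liner above:

for x in $(psql -d ****** postgres -c "select user_name from users where user_name ~ '^[0-9]+$' order by user_name;" | tail -n +3 | head -n -2); do
    # print the username, then let mongo print the count on the same line
    echo -n "$x "
    mongo vmdb --quiet --eval "db.fs.files.find({ 'metadata.user' : '$x'}).count()"
done | sort -k2,2nr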
Please try this:
var users = db.fs.files.distinct( "metadata.user" );
users.forEach(function(user) {
    // count this user's inbox messages and print both values
    var count = db.fs.files.find({ 'metadata.user' : user, 'metadata.folder' : 'inbox' }).count();
    print(user + "\t" + count);
});
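Alternatively, a single aggregation can produce all the per-user counts in one pass; a sketch, where _id carries the user:

db.fs.files.aggregate([
    { $match : { 'metadata.folder' : 'inbox' } },
    { $group : { _id : '$metadata.user', inbox : { $sum : 1 } } },
    { $sort : { inbox : -1 } }
])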
Related
How to read table data row by row from a Postgres DB using a shell script?
I tried out this:
psql -d db_name -t -c "SELECT * FROM table_name" | while read -a Record ; do
    echo "${Record[0]}"
    echo "${Record[1]}"
done
but this approach is giving me data like:
Apple
|
Why is this | appearing when I'm fetching row data only?
Actually, I want to create JSON objects out of the table data, in the format:
column-name : value,
column-name : value
..... ;
Something like that
Table name -> student
Fields :
id : string
name : string
age : int
inSchool : boolean
Table data :
ID Name Age inSchool
1 Amit 18 Yes
2 Sunil 21 No
3 Anil 17 Yes
The output I want:
[
  {
    id : 1,
    name : "Amit",
    age : 18,
    inSchool : 1
  },
  {
    id : 2,
    name : "Sunil",
    age : 21,
    inSchool : 0
  },
  {
    id : 3,
    name : "Anil",
    age : 17,
    inSchool : 1
  }
]
If there is any good way, please help me.
Let Postgres do the aggregation and spool the output into a file after turning off header formatting:
postgres=> \t
Tuples only is on.
postgres=> \o data.json
postgres=> select jsonb_agg(to_jsonb(s)) from student s;
Or in a single line:
psql -d db_name -t -c "select jsonb_agg(to_jsonb(s)) from student s" -o data.json
After that, the file data.json will contain the entire table as a huge JSON array.
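Applied to the student table above, data.json would then hold something like the following single line (a sketch: unquoted identifiers are folded to lower case in Postgres, and jsonb may reorder keys):

[{"id": "1", "name": "Amit", "age": 18, "inschool": true},
 {"id": "2", "name": "Sunil", "age": 21, "inschool": false},
 {"id": "3", "name": "Anil", "age": 17, "inschool": true}]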
I am running a Hive query and redirecting its output to a file:
$ hive -e "select id, age from employee" > /tmp/1
$ cat /tmp/1
1 44
2 32
3 NULL
I want the NULL to be printed as \N so that I can load the file into MySQL.
This is a sample query, but in reality I have more than 20 columns and any column can hold a NULL value, so writing an if() or case() around each select column won't be practical. I want a generic solution.
A simple case statement should do.
$ hive -e "select id
, case when age is null then '\\\N' else age end as age from employee" > /tmp/1
$ less /tmp/1
id age
1 44
2 32
3 \N
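If spelling out a CASE for 20+ columns is impractical, a generic (if blunt) alternative is to post-process the tab-delimited output instead; a sketch assuming GNU sed and that no legitimate field value is the literal string NULL:

# the middle substitution runs twice because a single global pass
# skips overlapping matches on adjacent NULL columns
hive -e "select id, age from employee" \
    | sed -e 's/^NULL\t/\\N\t/' -e 's/\tNULL$/\t\\N/' \
          -e 's/\tNULL\t/\t\\N\t/g' -e 's/\tNULL\t/\t\\N\t/g' > /tmp/1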
The following command correctly imports the data from the csv file, but the problem is that there are 2 entries for the same number.
I need both entries for 417176718 in the same document (so no $set). How do I keep both values when using mongoimport?
cat final.txt
number, date, duration, type, destination
417176718 , 2013-01-23 20:09:00 , 1 , NORMAL_CLEARING , 61998487
409334392 , 2013-01-24 11:25:18 , 40 , NO_ANSWER , 09821973636
919480909 , 2013-01-25 20:58:00 , 40 , NORMAL_CLEARING , 09919480909
417176718 , 2013-01-24 20:09:00 , 1 , FAILED , 61998487
mongoimport -d mydb -c vcalls --type csv --file final.txt --headerline
This is exactly what a map-reduce is for.
Once you've got the data in the db, run a map-reduce like this:
var mapper = function() {
    emit(this.number, { 'data' : [ { 'date' : this.date, 'duration' : this.duration,
                                     'type' : this.type, 'destination' : this.destination } ] });
};
var reducer = function(k, v) {
    // flatten the emitted per-call arrays into one list per number
    var data = [];
    for (var i = 0; i < v.length; i++) {
        for (var j = 0; j < v[i].data.length; j++) {
            data.push(v[i].data[j]);
        }
    }
    return { 'data' : data };
};
db.vcalls.mapReduce(mapper, reducer, { out : 'reducedcalls' })
This should give you a single record per number, with a list containing the calls.
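On MongoDB 2.6 or later, a hedged alternative is a plain aggregation with $push, which avoids map-reduce entirely; $out writes the result to a new collection:

db.vcalls.aggregate([
    { $group : { _id : '$number',
                 data : { $push : { date : '$date', duration : '$duration',
                                    type : '$type', destination : '$destination' } } } },
    { $out : 'reducedcalls' }
])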
How can I find pattern relationships using REST Cypher?
My query running in the terminal:
MATCH (n)<-[:DEPENDS_ON*]-(dependent) RETURN n.host as Host,
count(DISTINCT dependent) AS Dependents ORDER BY Dependents
DESC LIMIT 1
The output is:
+--------------------+
| Host | Dependents |
+--------------------+
| "SAN" | 20 |
+--------------------+
whereas the equivalent query with REST is:
String query = "{\"query\" : \"MATCH (website)<-[rel]-(dependent) " +
"WHERE TYPE(rel) = {rtype} RETURN website.host as Host," +
"count(DISTINCT dependent) AS Dependents ORDER BY Dependents DESC LIMIT 1" +
" \", \"params\" : {\"rtype\" : \"DEPENDS_ON*\"}}";
and the output is empty (no records)!
Any help appreciated.
P.S. When we don't use "*" in the query, everything is fine, i.e. both queries give the same result.
In the second query you are passing the relationship type as "DEPENDS_ON*", which is incorrect because the asterisk gets included in the type name.
The asterisk allows arbitrary-length paths for the specified relationship, but it is not part of the type.
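A sketch of a corrected payload: the variable-length marker belongs in the pattern, so one option is to inline :DEPENDS_ON* in the MATCH clause and drop the parameter (same endpoint and escaping as the original snippet assumed):

String query = "{\"query\" : \"MATCH (website)<-[:DEPENDS_ON*]-(dependent) " +
        "RETURN website.host AS Host, count(DISTINCT dependent) AS Dependents " +
        "ORDER BY Dependents DESC LIMIT 1\", \"params\" : {}}";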
I have some data (separated by semicolons), close to 240 rows, in a text file temp1.
temp2.txt stores 204 rows of data (also separated by semicolons).
I want to:
Sort the data in both files by field1, i.e. the first data field in every row.
Compare the data in both files and redirect the rows that do not match into separate files.
Sample data:
temp1.txt
1000xyz400100xyzA00680xyz0;19722.83;19565.7;157.13;11;2.74;11.00
1000xyz400100xyzA00682xyz0;7210.68;4111.53;3099.15;216.95;1.21;216.94
1000xyz430200xyzA00651xyz0;146.70;0.00;0.00;0.00;0.00;0.00
temp2.txt
1000xyz400100xyzA00680xyz0;19722.83;19565.7;157.13;11;2.74;11.00
1000xyz400100xyzA00682xyz0;7210.68;4111.53;3099.15;216.95;1.21;216.94
The sort commands I'm using:
sort -k1,1 temp1 -o temp1.tmp
sort -k1,1 temp2 -o temp2.tmp
I'd appreciate it if someone could show me how to redirect only the missing/mismatching rows into two separate files for analysis.
Try
cat temp1 temp2 | sort -k1,1 -o tmp
# mis-matching/missing rows:
uniq -u tmp
# matching rows:
uniq -d tmp
You want the difference as described at http://www.pixelbeat.org/cmdline.html#sets
sort -t';' -k1,1 temp1 temp1 temp2 | uniq -u > only_in_temp2
sort -t';' -k1,1 temp1 temp2 temp2 | uniq -u > only_in_temp1
Notes:
Use join rather than uniq, as shown at the link above, if you want to compare only particular fields (see the sketch after these notes)
If the first field is fixed-width, you don't need the -t';' -k1,1 params above
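For instance, a hedged join sketch keyed on the first field only, assuming both .tmp files are sorted on that field:

# -v1 prints lines of temp1.tmp whose first field has no match in temp2.tmp
join -t';' -j1 -v1 temp1.tmp temp2.tmp > only_in_temp1
join -t';' -j1 -v2 temp1.tmp temp2.tmp > only_in_temp2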
Look at the comm command.
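For example, a sketch assuming plain lexicographic sorting, which comm requires:

sort temp1.txt > temp1.srt
sort temp2.txt > temp2.srt
comm -23 temp1.srt temp2.srt > only_in_temp1   # lines only in temp1
comm -13 temp1.srt temp2.srt > only_in_temp2   # lines only in temp2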
Using gawk, outputting the lines in file1 that are not in file2:
awk -F";" 'FNR==NR { a[$1]=$0; next }
           !($1 in a) { print $0 > "afile.txt" }' file2 file1
Interchange the order of file2 and file1 to output the lines in file2 that are not in file1.