Using Python to print MongoDB query results in CSV format

{u'Test1': u'Result1', u'_id': ResultId('987600234565ade'), u'bugseverity': u'major'}
{u'Test2': u'Result2', u'_id': ResultId('987600234465ade'), u'bugseverity': u'minor'}
{u'Test3': u'Result3', u'_id': ResultId('9876002399999de'), u'bugseverity': u'minor'}
The output shown above was received after running a query on MongoDB. Using this output, I need to print the values in CSV format using Python.

I slightly changed the input data into a form that makes more sense technically.
In addition, I removed the ResultId() wrapper, since it appears to be a special datatype that has to be converted to a string separately before doing any further data handling after receiving the responses from the database.
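A minimal sketch of that conversion step, assuming the documents arrive from a pymongo cursor (the cursor variable is hypothetical):

# stringify the special _id value up front; all other values are kept as-is
data = [{key: (str(value) if key == '_id' else value) for key, value in doc.items()}
        for doc in cursor]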
With that done, I would suggest something like this using csv.DictWriter():
import csv

# changed sample data key `Test` in order to have this key equal in all
# responses, which makes more sense technically
data = [{u'Test': u'Result1', u'_id': '987600234565ade', u'bugseverity': u'major'},
        {u'Test': u'Result2', u'_id': '987600234465ade', u'bugseverity': u'minor'},
        {u'Test': u'Result3', u'_id': '9876002399999de', u'bugseverity': u'minor'}]

# define the column names
fieldnames = ['Test', '_id', 'bugseverity']

with open('dict.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    for d in data:
        # one row per document; iterating over d.items() here would write
        # each row once per key
        writer.writerow(d)
Giving dict.csv as output:
Test,_id,bugseverity
Result1,987600234565ade,major
Result2,987600234465ade,minor
Result3,9876002399999de,minor

Related

Convert CSV file to Map

I have a CSV file containing a list of abbreviations and their full values, such that the file looks like the below:
original,mappedValue
bbc,britishBroadcastingCorporation
ch4,channel4
I want to convert this CSV file into a Map, such that it is of the form:
val x:Map[String,String] = Map("bbc"->"britishBroadcastingCorporation", "ch4"->"channel4")
I have tried using the below:
Source.fromFile("pathToFile.csv").getLines().drop(1).map(_.split(","))
but this leaves me with an Iterator[Array[String]]
You are close: split provides an array. You have to convert it into a tuple and then into a map:
Source.fromFile("/home/agr/file.csv").getLines().drop(1).map(csv=> (csv.split(",")(0),csv.split(",")(1))).toMap
res4: scala.collection.immutable.Map[String,String] = Map(bbc -> britishBroadcastingCorporation, ch4 -> channel4)
In real life, you will want to check for the existence of bad rows, filtering out the array splits whose length is less than 2, or perhaps putting those into another bin as bad data, as sketched below.
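A rough sketch of that bad-row handling (the file path is hypothetical):

import scala.io.Source

val rows = Source.fromFile("pathToFile.csv").getLines().drop(1).map(_.split(",")).toList
val (good, bad) = rows.partition(_.length >= 2) // bin malformed lines separately
val mapping: Map[String, String] = good.map(a => a(0) -> a(1)).toMap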

OCTAVE data import from PCE-VDL data logger device and conversion of decimal comma to decimal point

I have a measurement device, PCE-VDL, which gives me measurements in the CSV format shown below, which I need to import into OCTAVE for further investigation.
In particular, I need to import the last 3 columns with the XYZ acceleration data.
The file is in CSV format with a semicolon (";") delimiter.
I have tried:
A_1 = importdata ("file.csv", ";", 3);
but have received
error: missing_idx(10): out of bound 9
The CSV file looks like this:
#PCE-VDL X - TableView series
#2020.16.11
#Date;Time;Duration [s];t [°C];RH [%];p [mbar];aX [g];aY [g];aZ [g];
2020.28.10;16:16:32:0000;00:000;;;;0,0195;-0,0547;1,0039;
2020.28.10;16:16:32:0052;00:005;;;;0,0898;-0,0273;0,8789;
2020.28.10;16:16:32:0104;00:010;;;;0,0977;-0,0313;0,9336;
2020.28.10;16:16:32:0157;00:015;;;;0,1016;-0,0273;0,9297;
The numbers in the last 3 columns also use a decimal comma rather than a decimal point, so some conversion probably needs to be done as well.
Thank you very much for any help.
Regards
EDIT: 18.11.2020
Thanks for the help. I have now tried the following:
A_1_str = fileread ("file.csv");
A_1_str_m = strrep (A_1_str, ".", "-");
A_1_str_m = strrep (A_1_str_m, ",", ".");
save "A_1_str_m.csv" A_1_str_m;
A_1 = importdata ("A_1_str_m.csv", ";", 8);
and still receive: error: file_content(140): out of bound 139
There is probably some problem with the time format in the first columns, which I do not want to read anyway; I just need the last three columns.
After my conversion, the file looks like this:
# Created by Octave 5.1.0, Wed Nov 18 21:40:52 2020 CET <zdenek@ASUS-F5V>
# name: A_1_str_m
# type: sq_string
# elements: 1
# length: 7849
#PCE-VDL X - TableView series
#2020-16-11
#Date;Time;Duration [s];t [°C];RH [%];p [mbar];aX [g];aY [g];aZ [g];
2020-28-10;16:16:32:0000;00:000;;;;0.0195;-0.0547;1.0039;
2020-28-10;16:16:32:0052;00:005;;;;0.0898;-0.0273;0.8789;
2020-28-10;16:16:32:0104;00:010;;;;0.0977;-0.0313;0.9336;
Thanks for support!
You can first read the data with fileread, which stores the data as a string. Then you can manipulate the string like this:
new_string = strrep(string, ",", ".");
strrep replaces all occurrences of a pattern within a string. Afterwards, you save this data as a separate file, or you overwrite the existing file with the manipulated data. When this is done, you proceed as you tried before.
EDIT: 19.11.2020
To avoid the additional heading lines in the new file, you can save it like this:
fid = fopen("A_1_str_m.csv", "w");
fputs(fid, A_1_str_m);
fclose(fid);
fputs will just write the string to the file.
Then you can read the new file with dlmread:
A1_buf = dlmread("A_1_str_m.csv", ";");
A1_buf = real(A1_buf); # get the real value of the complex number
A1_buf(1:3, :) = []; # remove the header lines
A1 = A1_buf(:, end-3:end-1); # get only the 3 columns you're looking for
This will give you the three columns you're looking for, though the date and time data will be ignored.
EDIT 20.11.2020
Replaced abs with real, so the sign of the value will be kept.
Use csv2cell from the io package.
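A minimal sketch of that route, assuming the io package is installed and that the comma-decimal fields come back as strings:

pkg load io
raw = csv2cell ("file.csv", ";"); % one cell per field, header lines included
acc = raw(4:end, 7:9);            % drop the 3 header lines, keep aX/aY/aZ
A1 = cellfun (@(s) str2double (strrep (s, ",", ".")), acc);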

Convert module-qualified OID to ObjectIdentity

How do you programmatically convert module-qualified OIDs to ObjectIdentity? I want to convert something like "IP-MIB::ipAdEntAddr.127.0.0.1.123" to an ObjectIdentity. Splitting it into ObjectIdentity('IP-MIB', 'ipAdEntAddr', '127.0.0.1.123') or into ObjectIdentity('IP-MIB', 'ipAdEntAddr', 127, 0, 0, 1, 123) doesn't work, as resolveWithMib fails with "Bad IP address syntax".
Considering ipAddrTable is indexed by just one column (ipAdEntAddr), I think this should resolve just fine:
ObjectIdentity('IP-MIB', 'ipAdEntAddr', '127.0.0.1')
API-wise, ObjectIdentity takes the MIB and object names as its first two parameters; the rest should be individual sub-indices. So if a 123 index component made sense, it would go separately:
ObjectIdentity('IP-MIB', 'ipAdEntAddr', '127.0.0.1', 123)
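For completeness, a sketch of resolving such an ObjectIdentity against a MIB view; this follows pysnmp's usual setup and assumes IP-MIB is available to the MIB builder:

from pysnmp.smi import builder, view
from pysnmp.smi.rfc1902 import ObjectIdentity

mib_view = view.MibViewController(builder.MibBuilder())
oid = ObjectIdentity('IP-MIB', 'ipAdEntAddr', '127.0.0.1')
oid.resolveWithMib(mib_view)  # raises an error on a malformed index
print(oid.prettyPrint())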

Writing columns of data into a text file

I'm using the following code to write the vectors sortedthresh_strain and probofdetectionanddelamprop1 into a text file. However, the text file output is as follows:
0.0030672 1.6592e-080.0033489 5.1721e-080.0034143
where 0.0033489 5.1721e-08 should be on the next line of the text file, i.e. it should be:
0.0030672 1.6592e-08
0.0033489 5.1721e-08
I am unsure of how to do this.
Edit: Using the proposed answer:
0.0049331 0.0049685 0.0049894 0.0050094 0.005156 0.0051741 0.0052139 0.0053399 0.0054486 0.0056022 7.0711e-21 3.0123e-19
The 2nd column is required to contain:
7.0711e-21
3.0123e-19
And this is the call I am using:
dlmwrite('THRESHUNCERTAINTYFINALPLOTLSIGMA5.dat', [sortedthresh_strain, probofdetectionanddelamprop1], 'delimiter', '\t');
If you have R2013b or later, see this answer. If you have an earlier version but have the Statistics Toolbox, you can use the dataset object to do this very easily, just like tables in R2013b. Using dataset:
data1 = {'a','b','c'}'
data2 = [1, 2, 3]'
ds = dataset(data1, data2)
export(ds, 'file', 'data.txt')
If you don't want the variable names in the resulting text file, you can pass 'WriteVarNames', false in your call to export.
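For example, following the same call as above (a sketch):
export(ds, 'file', 'data.txt', 'WriteVarNames', false)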
Good luck!
I think your data is in row vectors, but it should be in column vectors for this to work the way you want. Just add a transpose with ':
dlmwrite('THRESHUNCERTAINTYFINALPLOTLSIGMA5.dat',[sortedthresh_strain',probofdetectionanddelamprop1'],'delimiter', '\t');
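To illustrate with the two sample pairs quoted in the question (a sketch; the output file name is arbitrary):

sortedthresh_strain = [0.0030672, 0.0033489];            % row vectors, as suspected
probofdetectionanddelamprop1 = [1.6592e-08, 5.1721e-08];
% transposing turns them into columns, so dlmwrite emits one pair per line
dlmwrite('out.dat', [sortedthresh_strain', probofdetectionanddelamprop1'], 'delimiter', '\t');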

BSON Decoder exception

Since Mongo uses BSON, I am using the BSONDecoder from the Java API to get the BSON document from the Mongo query and print the string output. In the following, a byte[] array stores the bytes of the MongoDB document (when I print the hex values, they are the same as in Wireshark):
byte[] array = byteBuffer.array();
BasicBSONDecoder decoder = new BasicBSONDecoder();
BSONObject bsonObject = decoder.readObject(array);
System.out.println(bsonObject.toString());
I get the following error:
org.bson.BSONException: should be impossible
Caused by: java.io.IOException: unexpected EOF
at org.bson.BasicBSONDecoder$BSONInput._need(BasicBSONDecoder.java:327)
at org.bson.BasicBSONDecoder$BSONInput.read(BasicBSONDecoder.java:364)
at org.bson.BasicBSONDecoder.decodeElement(BasicBSONDecoder.java:118)
at org.bson.BasicBSONDecoder._decode(BasicBSONDecoder.java:79)
at org.bson.BasicBSONDecoder.decode(BasicBSONDecoder.java:57)
at org.bson.BasicBSONDecoder.readObject(BasicBSONDecoder.java:42)
at org.bson.BasicBSONDecoder.readObject(BasicBSONDecoder.java:32)
... 4 more
Looking at the implementation (https://github.com/mongodb/mongo-java-driver/blob/master/src/main/org/bson/LazyBSONDecoder.java), it looks like the IOException is caught and rethrown in:
throw new BSONException( "should be impossible" , ioe );
The above takes place in a query to the database (by query I mean that the byte[] array contains all the bytes after the document length). The query itself contains the string "ismaster", or in hex "x10 ismaster x00 x01 x00 x00 x00 x00". I suspect this is the BSON form of {isMaster: 1}, but I still do not understand why it fails.
You say:
byte[] array contains all the bytes after the document length
If you are stripping off the first part of the BSON that's returned, you are not passing a valid BSON document to the parser/decoder.
See the BSON spec for details, but in a nutshell, the first four bytes are the total size of the binary document in little-endian format.
You are getting an exception in code that is basically trying to read an expected number of bytes: it read the first int32 as the length and then tried to parse the rest as BSON elements (and failed when it didn't find a valid type in the next byte). Pass it everything you get back from the query, including the document size, and it will work correctly.
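As a quick sanity check before decoding, you can compare the declared size against what you actually have (a sketch, assuming array holds the candidate document bytes):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// the first int32 of a BSON document is its total size in little-endian order,
// so it must match the number of bytes handed to the decoder
ByteBuffer buf = ByteBuffer.wrap(array).order(ByteOrder.LITTLE_ENDIAN);
int declaredSize = buf.getInt(0);
if (declaredSize != array.length) {
    throw new IllegalArgumentException("incomplete BSON document: declared "
            + declaredSize + " bytes, got " + array.length);
}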
This works just fine:
byte[] array = new BigInteger("130000001069734d6173746572000100000000", 16).toByteArray();
BasicBSONDecoder decoder = new BasicBSONDecoder();
BSONObject bsonObject = decoder.readObject(array);
System.out.println(bsonObject.toString());
And produces this output:
{ "isMaster" : 1}
There is something wrong with the bytes in your byteBuffer. Note that you must include the whole document (including the first 4 bytes, which are the size).