Extracting a Coverity .csv file from the Coverity server

How can I extract a .csv file from the Coverity server on the command line?
Please explain the cov-manage-im command with an example.
Is there any need to install Coverity on Windows, given that I have access to the website and can manually download the .csv file?

For those who came here looking to export CSV using the web interface to the Coverity server: if you open the menu sidebar, each subsection under the "Issues by ..." and "File" sections has a drop-down option "Export CSV", which does the job!

This is clearly explained in the documentation.
To show all defects in the 'proj' project, you basically need --mode defects to query defects and --show to output in CSV format:
cov-manage-im --host <host> --port <port> --user id --password pwd --mode defects --show --project proj
You can add fields with --fields:
cov-manage-im --host <host> --port <port> --user id --password pwd --mode defects --show --project proj --fields cid,severity,classification ...
And add filters:
cov-manage-im --host <host> --port <port> --user id --password pwd --mode defects --show --project proj --fields cid,severity,classification ... --status Fixed --class Bug --severity Major ...
and so on and so forth. Just look at the detailed documentation on your Coverity Connect instance.
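The flags above compose easily in a script. Below is a sketch (host, port, credentials, and project name are all placeholders) that exports one CSV per severity level; COV defaults to echo as a dry run, so nothing is contacted until you substitute the real binary:

```shell
# Dry-run sketch: replace echo with cov-manage-im for a real export.
COV="${COV:-echo}"

for sev in Major Moderate Minor; do
  # One CSV per severity level; all server details below are placeholders.
  "$COV" --host cov.example.com --port 8080 --user id --password pwd \
    --mode defects --show --project proj \
    --fields cid,severity,classification \
    --severity "$sev" > "defects-$sev.csv"
done
```

With echo in place you can inspect the exact command lines that would run before pointing the script at a live Coverity Connect instance.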

As I also needed to download the Coverity report as CSV using the web UI, here is how it is done.
In the view panel, select the view you want to export (here it is High Impact Outstanding).
Now click on the down-arrow and select 'Export CSV'.

DC: This is not for the command line; it can help when you want to download from the server.
The basic idea is that you first save a copy of all the issues without any filter (if you want all of them) and then download it.
Solution:
For any unsaved view (in this case, the view opened by double-clicking one specific snapshot), if we want to export the result, we need to save the view under a valid name.
Click on the gear icon and check "Save as a Copy".
Provide the name you want, and save the view with the OK button.
After that, this view can be accessed via the view list and can be exported via "Export to CSV" like other views.

Related

How to mongoimport from any location in cmd

So right now I'm using Mongoose to work with MongoDB (learning). I tried to import a JSON array using
mongoimport --db mongo-testing --collection test --drop --file data.json --jsonArray
but it didn't work, giving an error that mongoimport was not found. After looking around on Stack Overflow, I found that you need to run the command in the \bin\ folder that your Mongo PATH variable points to. So I did just that:
mongoimport --db mongo-testing --collection test --drop --file /d/Node/mongo-test/data.json --jsonArray
I ran the command above in the \bin\ folder, pointed at the file via an absolute path, and it worked! However, I don't think that's the right way to do it.
Question: Is there a way I could run the command in /d/Node/mongo-test/, where my Node.js project is, or do I need to go to the bin folder each time?
UPDATE: So I tried downloading the zip file instead of the MSI. After downloading it, copy it into the MongoDB path in Program Files (I put it beside Server):
C:\Program Files\MongoDB\mongodb-database-tools-windows-x86_64-100.2.0\bin
Add the above line to your PATH variable (if you put it in the same spot).
Close all terminals; if you are using the VS Code terminal, close the program and restart it.
This worked for me! Hope it helps someone too.
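In shell terms, the PATH change amounts to the sketch below (the tools directory is the assumed location from above, written in msys/Git Bash style; adjust it to your install):

```shell
# Assumed install location of the MongoDB database tools; adjust to yours.
TOOLS_DIR="/c/Program Files/MongoDB/mongodb-database-tools-windows-x86_64-100.2.0/bin"

# Append the tools directory to PATH for this shell session.
export PATH="$PATH:$TOOLS_DIR"

# mongoimport is now resolvable from any directory, so from the project root:
#   mongoimport --db mongo-testing --collection test --drop --file data.json --jsonArray

# Show the last PATH entry to confirm the change took effect.
echo "$PATH" | tr ':' '\n' | tail -n 1
```

An export only lasts for the current session; putting the directory into the system PATH (as the answer above describes) makes it permanent.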

Export report through command with filter options or more fields

I'm using Coverity 8.1. I run the following command to export the Coverity report as a CSV file:
/opt/cov-analysis-linux64-8.1.0/bin/cov-manage-im --mode defects --show --stream $cov_stream_cpp --stream $cov_stream_java --host $cov_server --port $cov_port --auth-key-file $cov_auth_file --fields cid,file > ~/Coverity.csv
Is there any way I can filter the report by Impact? I want to export by High/Medium/Low.
I can't find such an option. I also tried to get all fields, but it supports only action, checker, cid, classification, component, ext-ref, file, function, legacy, owner, severity, status, and stream-name, and nothing else. I wanted to get the Impact as well.
How can I get it?
It looks like the command:
$ cov-manage-im --mode defects --show --auth-key-file $auth --host $host --port $port --output fields
action
cid
checker
classification
component
ext-ref
file
function
owner
severity
status
legacy
stream-name (stream scope only)
doesn't list the field you want; as far as I can tell, you are out of luck.
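Until Impact becomes exportable, one workaround is post-filtering the CSV on a field cov-manage-im does emit, such as severity. This sketch uses a hard-coded sample file in place of a real export and keeps only Major-severity rows (column 2 in this assumed layout):

```shell
# Stand-in for a real cov-manage-im CSV export (cid,severity layout assumed).
printf 'cid,severity\n101,Major\n102,Minor\n103,Major\n' > sample.csv

# Keep the header (NR==1) plus rows whose second field is "Major".
awk -F, 'NR==1 || $2=="Major"' sample.csv > major.csv

cat major.csv
```

The same pattern works for any of the supported fields (classification, status, owner, and so on); just adjust the column index to match your --fields order.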

Cannot import example dataset (the system cannot find the specified file)

I am following the example given on MongoDB's website here, but I am running into trouble when trying to import sample data.
When running the command
mongoimport --db test --collection restaurants --drop --file primer-dataset.json
I get the error:
Failed: open primer-dataset.json: The system cannot find the file specified
The problem is, I am not sure which directory MongoDB expects this file to be in. I tried placing it in data/db, but that did not work. Note that I am only using default settings.
I know this is a somewhat trivial question and I feel silly for asking it, but I cannot find documentation on this anywhere. Where does MongoDB expect import files to be?
MongoDB expects the file to be in the directory from which you are running the mongoimport command.
If you place your file under data/db, then set the MongoDB bin path as a global environment variable and execute the command from the data/db directory.
Additionally, if you have security enabled for your MongoDB, you need to execute the command as below:
mongoimport --username admin --password password --db test --collection restaurants --drop --file primer-dataset.json
Here admin is the user authorized to perform DB operations on the test database, and restaurants is the collection name.
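Since mongoimport resolves a relative --file against the current working directory, a quick sanity check before running the import can rule out path problems. The filename here is the one from the question, and touch merely stands in for the dataset already being saved there:

```shell
# Hypothetical check: confirm the dataset really sits in the current directory.
FILE=primer-dataset.json
touch "$FILE"   # stand-in; in practice the downloaded dataset is already here

# If this prints nothing, mongoimport would fail with "cannot find the file".
[ -f "$FILE" ] && echo "found $FILE in $(pwd)"
```

If the check fails, either cd into the directory holding the file or pass an absolute path to --file instead.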
For Windows!
Save the file using Notepad++ in .json format in MongoDB/bin, where the mongoimport command is present.
Plain Notepad has trouble doing this.
It happened to me as well. The issue was that though the file was visible as restaurants.json, it was actually restaurants.json.json (since it was saved in JSON format). The issue was resolved after renaming it properly.
I had the same trouble as you. Check your path to the file; mongoimport.exe and your file may be in different folders.
Use mongoimport -d test1 -c restaraunts companies.json to import into MongoDB.
Check the filename extension and make sure it's a ".json" file.
After this, I successfully ran the
mongoimport --db test --collection restaurants --drop --file [path\to\Json file]
command.
In my case, I removed the --drop parameter and it worked perfectly. I guess it was throwing this error:
Failed: open path/file-name.json: The system cannot find the file specified.
because the collection it wants to drop was not available, since I had not created any before.
You can copy your JSON file into C:\Windows\System32 and run this command in cmd:
mongoimport --db test --collection mongotest --type json --file yournamefile.json

mongoexport CSV without header fields

I have the following in a shell script to export certain fields from a Mongo collection to a CSV file:
mongoexport --host localhost --db mydb --collection ratings --csv --fields userId,filmId,score > data.csv
My problem is that the generated result comes with the header values, e.g.:
userId,filmId,score
517,533,5
518,534,5
Is there a way that I can generate the CSV file without the header fields?
The mongoexport utility is very spartan and does not support a load of features. Instead, the intention is that you augment it with other available OS commands or, if you really must, create your own code for explicit needs.
But this sample using tail makes it quite simple to skip the first emitted header line, considering that all output goes to STDOUT by default anyway:
mongoexport --host localhost --db mydb --collection ratings \
--fields userId,filmId,score \
| tail -n+2 > data.csv
So it is just "piping through" (|) to the tail command with the -n +2 option, which basically says "skip the first line", and then you just redirect (>) the output to the file you want.
Just like most command line utilities, there is no need to build in options that can be performed with other common utilities in such a chained pattern as above. That is why there is no such option built in.
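The pipeline can be checked locally by substituting printf for mongoexport; the sample rows are the ones from the question:

```shell
# Simulate mongoexport's CSV output and strip the header with tail.
printf 'userId,filmId,score\n517,533,5\n518,534,5\n' | tail -n +2
# prints:
# 517,533,5
# 518,534,5
```

The same tail stage slots into the real command unchanged, since mongoexport writes the header as its first line of STDOUT.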
Since version 3.4 you can add --noHeaderLine as an option to the command.

Import a database file.json into Robo 3T (Robomongo)

I have a file named services.json containing a database that I exported from MongoDB on Windows, and I want to import that file into Robomongo (connected to the MongoDB installed by npm) on Ubuntu.
I'm a beginner and I don't know how to proceed, or which terminal to use (Robomongo or Ubuntu).
To import data for a collection in Robomongo:
Right-click on the collection.
Select 'Insert Document'.
Paste your JSON data.
Click Validate.
Click Save.
OK, I found the answer. In a macOS or Unix shell, type:
$ mongoimport -d <your-database-name> -c <your-collection-name> --file /path/to/my/fileThatIwantToImport.json
For anyone wishing to use mongoimport with a remote DB (@andi-giga), here's what I did to make it work:
mongoimport -h xxx.mlab.com --port 2700 -d db_name -c collection_name -u user_name -p password --type json --file /Path/to/file.json
The arguments should be self-explanatory:
-h hostname
More information at this link.
I don't have enough points to comment on Varun's answer, but if you export using jsonArray and then import using Robo 3T (Robomongo), make sure to remove the commas between the objects, as well as the square brackets.
It's not really a JSON format that Robo 3T accepts, but rather a bunch of JSON objects separated by newlines.
(If you export using Standard, then it's already formatted for document insert.)
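That cleanup can be automated; this sketch (the file names are made up) uses a Python one-liner to turn a --jsonArray export into one object per line:

```shell
# Stand-in for a real mongoexport --jsonArray file.
printf '[{"a": 1}, {"a": 2}]' > array.json

# Re-emit each array element as its own line of JSON (brackets and
# inter-object commas disappear in the process).
python3 -c 'import json
for obj in json.load(open("array.json")):
    print(json.dumps(obj))' > objects.json

cat objects.json
```

The resulting objects.json is in the newline-delimited form that Robo 3T's Insert Document (and mongoimport without --jsonArray) accepts.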
If this is not BSON but only JSON, you can use mongoimport --jsonArray. Reference: Insert json file into mongodb.
RoboMongo is just the UI for your mongod, which is the primary daemon process for the MongoDB system.
The only option to import from RoboMongo is:
Right-click on Collection -> Insert Document
Apart from this, you can import using the mongoimport command from the terminal.
Note that mongoimport is a standalone tool run directly from the OS terminal, not from inside the mongo interactive shell.
Use the following command to import the JSON file as a collection:
mongoimport -d database_name -c collection_name --file /path/to/the/json/file
Tested:
mongoimport --jsonArray -d <database-name> -c <collection-name> --file /path/to/my/fileThatIwantToImport.json
It works very well!
Insert Document will insert all of the JSON file's data under a single document.
Apparently the tool does not support JSON array import.
There are two ways to import a database into MongoDB: one with Robomongo/Robo 3T and one with a shell command. I always choose the second method because it has fewer and easier steps.
SHELL METHOD
Install MongoDB on your machine, and check that it was installed properly by running the mongod command in your terminal. Then, to restore a database dump into your MongoDB, run:
mongorestore --host <HostIp | 127.0.0.1> --port <mongoPort | 27017> -d <DBname> <directory-path>
So, for example, if you're running MongoDB on the local machine with the default port (27017) and your DB dump files are stored at /usr/library/userDatabase, then write this command and check that the DB is imported into your MongoDB:
mongorestore --host 127.0.0.1 --port 27017 -d userDatabase /usr/library/userDatabase
For more details, check this article: Import MongoDB using shell and Robomongo/Robo 3T.