I'm brand new to Grafana. Can I load a JSON document into Grafana and display it as a table (and if so, how)? Or is it only for time series data?
I'm loading Grafana with:
docker run -d \
-p 3000:3000 \
--name=grafana \
-e "GF_INSTALL_PLUGINS=grafana-simple-json-datasource" \
grafana/grafana
For example:
[{
"hostname": "1.2.3.4"
}, {
"hostname": "2.3.4.5"
}, {
"hostname": "3.4.5.6"
}]
Display that as:
hostname
1.2.3.4
2.3.4.5
3.4.5.6
If I can achieve that (which is the scope of this post), ultimately I want to load two tables in and diff them to show a third (calculated) table which includes the items in table 1 but NOT in table 2.
For example, if table 2 is:
hostname
2.3.4.5
3.4.5.6
Then table 3 would be:
hostname
1.2.3.4
Grafana is just a visualisation tool. It needs a data source to query and display data. It is optimised for time series data, but static data can also be displayed easily.
Use the Simple JSON data source plugin (grafana-simple-json-datasource, which your docker run command already installs).
You can also use the TestData DB data source, which ships with Grafana, to test scenarios (it does not use JSON, though).
Once a data source is configured, you can use the table panel to display data based on queries.
Each dashboard can have multiple panels, so tables can be shown side by side.
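With the Simple JSON data source, your backend's /query endpoint can return table-shaped data that the table panel renders directly. A minimal sketch of such a response for the hostname example (the backend serving it is up to you; only the shape matters here):
[
  {
    "type": "table",
    "columns": [{ "text": "hostname", "type": "string" }],
    "rows": [["1.2.3.4"], ["2.3.4.5"], ["3.4.5.6"]]
  }
]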
I want to migrate a collection from one Mongo Atlas Cluster to another. How do I go about doing that?
There are 2 possible approaches here:
Migration with downtime: stop the service, export the data from the collection to some third location, import the data into the new collection on the new cluster, and then resume the service.
But there's a better way: the MongoMirror utility. With it, you can sync collections across clusters without any downtime. The utility first syncs the db (or selected collections from it) and then ensures subsequent writes to the source are synced to the destination.
Following is the syntax I used to get it to run:
./mongomirror --host atlas-something-shard-0/prod-mysourcedb--shard-00-02-pri.abcd.gcp.mongodb.net:27017 \
--username myUserName \
--password PASSWORD \
--authenticationDatabase admin \
--destination prod-somethingelse-shard-0/prod-mydestdb-shard-00-02-pri.abcd.gcp.mongodb.net:27017 \
--destinationUsername myUserName \
--destinationPassword PASSWORD \
--includeNamespace dbname.collection1 \
--includeNamespace dbname.collection2 \
--ssl \
--forceDump
Unfortunately, there are MANY pitfalls here:
Ensure your user has the correct role. This is actually covered in the docs, so read the relevant section closely.
To correctly specify the host and destination fields, you'll need to obtain both the replica set name and the primary instance's hostname. One way to get these is to use the mongosh tool and run rs.conf() on both the source and destination clusters. The replica set name is the "_id" in the command's output, and the instances are listed under "members"; you'll want the primary instance's "host" field. The end result should look like RS-name/primary-instance-host:port (see the sketch after this list).
If you specify a replica set, you MUST specify the PRIMARY instance. Failing to do so results in an obscure error (something about EOF).
I recommend adding the forceDump flag (at least until you manage to get it to run for the first time).
If you specify non-existent collections, the utility will only give one indication that they don't exist and will then go on to "sync" them, rather than failing.
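For reference, a minimal sketch of pulling the replica set name and member states with mongosh (the connection string and credentials are placeholders; rs.status() is one way to see which member is currently PRIMARY):
mongosh "mongodb+srv://your-cluster.abcd.mongodb.net" -u myUserName -p PASSWORD --eval '
  const conf = rs.conf();                     // replica set configuration
  print("RS name: " + conf._id);              // the part before the slash
  rs.status().members.forEach(function (m) {  // list members and their state
    print(m.name + " -> " + m.stateStr);      // pick the host:port of the PRIMARY
  });
'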
I successfully installed MongoDB Charts and was also able to create a dashboard.
Now I want to save/export this dashboard to JSON (or any other format). Is there a feature to save/export and load/import MongoDB Charts dashboards? This would be useful if I want the same dashboard on some other server.
Also, there was no tag for mongodb-charts, so anyone with tag creation privileges, please create the tag.
MongoDB Charts is in beta release only.
It is designed for MongoDB Atlas, and according to the official page "Share dashboards and collaborate", you can share dashboards and charts by adding new users and giving them permissions via the dashboard's "Access" button.
In the Access view you can make your dashboard fully public by selecting the "Everyone" option and choosing permission rights, or just share it with specific users.
As a hack, if you want to convert your dashboard into JSON format and transfer it from one MongoDB Charts instance to another, you can try running mongodump against the "metadata" database in the MongoDB instance connected to MongoDB Charts.
It has 4 collections:
dashboards
datasources
items
users
But all relationships are made through GUID ids, so without manual editing you can easily corrupt data during mongorestore.
UPDATE:
The following bash script shows how to export a dashboard and chart for migrating data to a different MongoDB Charts instance:
# Your Dashboard and Chart names you want to export
dashboard_name="My Dashboard"
chart_name="My Chart"
# Exporting to `tmp` folder data from dashboard collection
mongodump --db metadata --collection dashboards --query "{'title': '$dashboard_name'}" --out "/tmp/"
dashboard_id=$(mongo --quiet --eval "db.getSiblingDB('metadata').dashboards.findOne({'title': '$dashboard_name'}, {'_id': 1})['_id']")
# Exporting to `tmp` folder data from items collection
mongodump --db metadata --collection items --query "{\$and: [{'title': '$chart_name'}, {'dashboardId': '$dashboard_id'}]}" --out "/tmp/"
# After the following data is restored to different MongoDB Charts instance
# you need to make sure to modify the following records in imported data
# according to your datasource in new MongoDB Charts instance.
# for dashboards collection modify GUID for the following fields according to new datasource:
mongo --quiet --eval "db.getSiblingDB('metadata').dashboards.findOne({'title': '$dashboard_name'}, {'owners': 1, 'tenantId': 1, 'itemSummary.dataSourceId': 1})"
# for items collection modify GUID for the following fields according to new datasource:
mongo --quiet --eval "db.getSiblingDB('metadata').items.findOne({\$and: [{'title': '$chart_name'}, {'dashboardId': '$dashboard_id'}]}, {'dataSourceId': 1, 'tenantId': 1})"
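On the target instance, the restore side could look something like this (a hedged sketch: mongodump with --out /tmp/ writes BSON files under /tmp/metadata/, the host is a placeholder, and the GUID fields listed above still need to be edited afterwards):
# restore the exported collections into the target metadata database
mongorestore --host <target-metadata-host> --db metadata --collection dashboards /tmp/metadata/dashboards.bson
mongorestore --host <target-metadata-host> --db metadata --collection items /tmp/metadata/items.bson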
Remember, this approach is not official and it is possible to corrupt your data.
You could use Charts for Trello, which works the same way as MongoDB Charts. It allows you to connect to your MongoDB database or to any other system, build your charts and export them as JSON, CSV...
I want to store results from Coverity® in InfluxDB, and I was wondering: does Coverity have a REST API?
If you're only trying to dump data into InfluxDB, you can curl data from the REST API and insert the resulting JSON into the database. I do something similar, but in CSV format.
Create a view in Coverity under 'Issues: By Snapshot' that contains all your defects.
Curl data from the Coverity view:
JSON format:
curl --user <userid>:<password> \
"http://<coverity_url>/api/viewContents/issues/v1/<View Name>?projectId=<project ID>&rowCount=-1"
CSV format:
curl --header "Accept: text/csv" --user <userid>:<password> \
"http://<coverity_url>/api/viewContents/issues/v1/<View Name>?projectId=<project ID>&rowCount=-1"
Example:
If you created a view 'My Defects' in project 'My Project', the command would be:
curl --user <userid>:<password> "http://<coverity_url>/api/viewContents/issues/v1/My%20Defects?projectId=My%20Project&rowCount=-1"
In the above URL:
%20 -- URL-encoded space
rowCount=-1 -- download all rows in the view. You can set it to a desired limit.
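Once the data is out of Coverity, writing a point into InfluxDB is a single HTTP call. A minimal sketch, assuming an InfluxDB 1.x instance on localhost and a database named coverity; the measurement, tag and field names are placeholders you'd derive from the view's rows:
# create the target database once (InfluxDB 1.x)
curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE DATABASE coverity"
# write one data point in line protocol: measurement,tag field
curl -XPOST "http://localhost:8086/write?db=coverity" \
  --data-binary "defects,project=My_Project outstanding=42i"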
Not really, no.
There is a very limited REST API, but it only covers a few very specific things. I'd recommend you use cov-manage-im where you can, and only use the SOAP API if you need something more.
cov-manage-im can help; it can be used to retrieve defects for specific projects and streams. cov-manage-im --help will give you more info.
cov-manage-im --host youcovhostname --user yourusername --password yourpassword --mode defects --show --project yourprojectname
I am currently trying to use a dashDB database with the db2cli utility and ODBC (values are from Connect/Connection Information on the dashDB web console). At the moment I can run SELECT or INSERT statements and fetch data from custom tables I have created perfectly well, thanks to the command:
db2cli execsql -connstring "DRIVER={IBM DB2 ODBC DRIVER - IBMDBCL1}; DATABASE=BLUDB; HOSTNAME=yp-dashdb-small-01-lon02.services.eu-gb.bluemix.net; PORT=50000; PROTOCOL=TCPIP; UID=xxxxxx; PWD=xxxxxx" -inputsql /tmp/input.sql
Now I am trying to do a DB2 LOAD operation through the db2cli utility, but I don't know how to proceed or even if it is possible to do so.
The aim is to import data from a file without cataloging the DB2 dashDB database on my side, but only through ODBC. Does someone know if this kind of operation is possible (with db2cli or another utility)?
The latest API version referenced from the DB2 on Cloud (ex dashDB) dashboard is available here. It requires first calling the /auth/tokens endpoint to generate an auth token based on your Bluemix credentials, which is then used to authorize the API calls.
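As a rough sketch of what that first call looks like (the hostname, user id and password are placeholders; check the exact endpoints against the API reference for your instance):
# request an access token with your Bluemix/Db2 credentials
curl -s -X POST "https://<HOSTNAME>/dbapi/v3/auth/tokens" \
  -H "Content-Type: application/json" \
  -d '{"userid": "<USERID>", "password": "<PASSWORD>"}'
# the JSON response contains a token, sent on subsequent calls as:
#   -H "Authorization: Bearer <TOKEN>"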
I've recently published an npm module - db2-rest-client - to simplify the usage of these operations. For example, to load data from a .csv file you can use the following commands:
# install the module globally
npm i db2-rest-client -g
# call the load job
export DB_USERID='<USERID>'
export DB_PASSWORD='<PASSWORD>'
export DB_URI='https://<HOSTNAME>/dbapi/v3'
export DEBUG=db2-rest-client:cli
db2-rest-client load --file=mydata.csv --table='MY_TABLE' --schema='MY_SCHEMA'
For the load job, a test on Bluemix dedicated with a 70 MB source file and about 4 million rows took about 4 minutes to load. There are also other CLI options, such as executing export statements, comma-separated statements, and uploading files.
This is not possible. LOAD is not an SQL statement, therefore it cannot be executed via an SQL interface such as ODBC, only using the DB2 CLP, which in turn requires a cataloged database.
ADMIN_CMD() can be invoked via an SQL interface; however, it requires that the input file be on the server -- it won't work with a file stored on your workstation.
If JDBC is an option, you could use the CLPPlus IMPORT command.
You can try loading data using the REST API.
Example:
curl --user dashXXX:XXXXXX -H "Content-Type: multipart/form-data" -X POST -F loadFile1=@"/home/yogesh/Downloads/datasets/order_details_0.csv" "https://yp-dashdb-small-01-lon02.services.eu-gb.bluemix.net:8443/dashdb-api/load/local/del/dashXXX.ORDER_DETAILS?hasHeaderRow=true&timestampFormat=YYYY-MM-DD%20HH:MM:SS.U"
I have used the REST API and have not seen any size limitations. In version 1.11 of dashDB Local (the warehouse db), external tables have been included. As long as the file is on the container, it can be loaded. Also, the DB2 LOAD locks the table until the load is finished, whereas an external table load won't.
There are a number of ways to get data into Db2 Warehouse on Cloud. From a command line you can use the Lift CLI https://lift.ng.bluemix.net/ which provides the best performance for large data sets.
You can also use EXTERNAL TABLEs https://www.ibm.com/support/knowledgecenter/ean/SS6NHC/com.ibm.swg.im.dashdb.sql.ref.doc/doc/r_create_ext_table.html which are also high performance and have lots of options.
This is a quick example using a local file (not on the server), hence the REMOTESOURCE YES option:
db2 "create table foo(i int)"
echo "1" > /tmp/foo.csv
db2 "insert into foo select * from external '/tmp/foo.csv' using (REMOTESOURCE YES)"
db2 "select * from foo"
I
-----------
1
1 record(s) selected.
For large files, you can use gzip, either on the fly:
db2 "insert into foo select * from external '/tmp/foo.csv' using (REMOTESOURCE GZIP)"
or from gzipped files:
gzip /tmp/foo.csv
db2 "insert into foo select * from external '/tmp/foo.csv.gz' using (REMOTESOURCE YES)"
I'm using AWS data pipeline service to pipe data from a RDS MySql database to s3 and then on to Redshift, which works nicely.
However, I also have data living in an RDS Postgres instance which I would like to pipe the same way, but I'm having a hard time setting up the JDBC connection. If this is unsupported, is there a work-around?
"connectionString": "jdbc:postgresql://THE_RDS_INSTANCE:5432/THE_DB"
Nowadays you can define a copy activity to extract data from a Postgres RDS instance into S3. In the Data Pipeline interface:
Create a data node of the type SqlDataNode. Specify the table name and a select query.
Set up the database connection by specifying the RDS instance ID (the instance ID is in your URL, e.g. your-instance-id.xxxxx.eu-west-1.rds.amazonaws.com) along with the username, password and database name.
Create a data node of the type S3DataNode.
Create a Copy activity and set the SqlDataNode as input and the S3DataNode as output.
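Put together, the pipeline definition could look roughly like the sketch below (names, IDs and the password handling are placeholders, and the field names should be checked against the current Data Pipeline object reference):
{
  "objects": [
    { "id": "rds_pg", "type": "RdsDatabase", "rdsInstanceId": "your-instance-id",
      "databaseName": "THE_DB", "username": "USER", "*password": "PASSWORD" },
    { "id": "source", "type": "SqlDataNode", "database": { "ref": "rds_pg" },
      "table": "blahs", "selectQuery": "select * from #{table}" },
    { "id": "dest", "type": "S3DataNode", "directoryPath": "s3://your-bucket/export/" },
    { "id": "copy", "type": "CopyActivity", "runsOn": { "ref": "MyEC2Resource" },
      "input": { "ref": "source" }, "output": { "ref": "dest" } }
  ]
}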
This doesn't work yet. AWS hasn't built / released the functionality to connect nicely to Postgres. You can do it in a ShellCommandActivity, though. You can write a little Ruby or Python code to do it and drop that in a script on S3 using scriptUri. You could also just write a psql command to dump the table to a CSV and then pipe that to OUTPUT1_STAGING_DIR with "staging: true" in that activity node.
Something like this:
{
"id": "DumpCommand",
"type": "ShellCommandActivity",
"runsOn": { "ref": "MyEC2Resource" },
"stage": "true",
"output": { "ref": "S3ForRedshiftDataNode" },
"command": "PGPASSWORD=password psql -h HOST -U USER -d DATABASE -p 5432 -t -A -F\",\" -c \"select blah_id from blahs\" > ${OUTPUT1_STAGING_DIR}/my_data.csv"
}
I didn't run this to verify, because it's a pain to spin up a pipeline :( so double-check the escaping in the command.
Pros: super straightforward, and requires no additional script files to upload to S3.
Cons: not exactly secure. Your db password will be transmitted over the wire without encryption.
Look into the new stuff AWS just launched on parameterized templating data pipelines: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-custom-templates.html. It looks like it will allow encryption of arbitrary parameters.
AWS now allows partners to do near-real-time RDS -> Redshift inserts.
https://aws.amazon.com/blogs/aws/fast-easy-free-sync-rds-to-redshift/