In my project we use InfluxDB and Grafana for our log and other analysis, running on an Ubuntu machine. Recently, due to a migration, the ports 3000 (Grafana) and 8086 (InfluxDB) were blocked, and they will remain blocked for security reasons. So I am unable to connect to them through the browser or Postman.
As a workaround we are planning to move these (at least the dashboards) to a local setup. I checked that the processes are up and running, but I am unable to locate the physical location of the dashboard files.
I have the default settings and no separate database configuration for Grafana:
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as on string using the url properties.
# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =
# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
;url =
# For "postgres" only, either "disable", "require" or "verify-full"
;ssl_mode = disable
;ca_cert_path =
;client_key_path =
;client_cert_path =
;server_cert_name =
Is there any location where I can find these JSON files?
I figured it out with some research; posting it here in case someone else is searching for the same answer.
The default data folder for Grafana is /var/lib/grafana. If you navigate to that folder, you will find a file named grafana.db.
Copy this file to your local machine or to whichever machine you want.
Download sqlitebrowser (DB Browser for SQLite).
In sqlitebrowser, click Open Database and select the grafana.db file. Then right-click the dashboard table, select Browse Table, and look at the data column: that is where the dashboard JSON is stored.
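If the blocked ports mean you can only reach the box over SSH, a command-line sketch of the same extraction is below. It assumes the sqlite3 CLI is available and that the dashboard table has title and data columns (data holding the JSON); the host name and dashboard title are placeholders.
# Pull the database off the Ubuntu machine, then dump a dashboard's JSON locally
scp ubuntu-host:/var/lib/grafana/grafana.db .
sqlite3 grafana.db "SELECT title FROM dashboard;"
sqlite3 grafana.db "SELECT data FROM dashboard WHERE title = 'My Dashboard';" > my-dashboard.json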
Look at whatever is starting Grafana on your machine. It could be set up as a service, run from a script (potentially even .bashrc), or run from Docker. It does look like /var/lib/grafana is the default place.
In my case, on a RPi running InfluxDB, Grafana, etc. via IOTstack, it is started from Docker with docker-compose, and the docker-compose.yml file defines the paths. /var/lib/grafana is the default place inside the container, but this can be remapped somewhere else, and it is likely to be mapped somewhere else so it can be backed up: ~/IOTstack/volumes/grafana/data in my case.
grafana:
  container_name: grafana
  image: grafana/grafana
  restart: unless-stopped
  user: "0"
  ports:
    - "3000:3000"
  environment:
    - GF_PATHS_DATA=/var/lib/grafana
    - GF_PATHS_LOGS=/var/log/grafana
  volumes:
    - ./volumes/grafana/data:/var/lib/grafana
    - ./volumes/grafana/log:/var/log/grafana
  networks:
    - iotstack_nw
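If you are not sure where a running container keeps its data on the host, one way to check (a sketch assuming the container is named grafana) is to inspect its mounts:
# Show the host paths bound to the container's volumes
docker inspect grafana --format '{{ json .Mounts }}'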
Why do the contents of my EFS-backed volume vary depending on the container reading it?
I'm seeing divergent, EFS-related behavior depending on whether I run two processes in a single container or each in their own containers.
I'm using the Docker Compose ECS integration to launch the containers on Fargate.
The two processes are Database and Verifier.
Verifier directly inspects the on-disk storage of Database.
For this reason they share a volume, and the natural docker-compose.yml looks like this (simplifying):
services:
  database:
    image: database
    volumes:
      - database-state:/database-state
  verifier:
    image: verifier
    volumes:
      - database-state:/database-state
    depends_on:
      - database
volumes:
  database-state: {}
However, if I launch in this configuration the volume database-state is often in an inconsistent state when read by Verifier, causing it to error.
OTOH, if I combine the services so both Database and Verifier run in the same container there are no consistency issues:
services:
  database-and-verifier:
    image: database-and-verifier
    volumes:
      - database-state:/database-state
volumes:
  database-state: {}
Note that in both cases the database state is stored in database-state. This issue doesn't appear if I run locally, so it is specific to Fargate / EFS.
Any ideas what's going on and how to fix it?
This feels to me like a write-caching issue, but I doubt EFS would have such a basic problem.
It also feels like it could be a permissions issue, where key files are somehow hidden from Verifier.
Thanks!
Is there a way to migrate from a docker-compose configuration using all anonymous volumes to one using named volumes without needing manual intervention to maintain data (e.g. manually copying folders)? This could entail having users run a script on the host machine but there would need to be some safeguard against a subsequent docker-compose up succeeding if the script hadn't been run.
I contribute to an open source server application that users install on a range of infrastructure. Our users are typically not very technical and are resource-constrained. We have provided a simple docker-compose-based setup. Persistent data is in a containerized postgres database which stores its data on an anonymous volume. All of our administration instructions involve stopping running containers but not bringing them down.
This works well for most users but some users have ended up doing docker-compose down either because they have a bit of Docker experience or by simple analogy to up. When they bring their server back up, they get new anonymous volumes and it looks like they have lost their data. We have provided instructions for recovering from this state but it's happening often enough that we're reconsidering our configuration and exploring transitioning to named volumes.
We have many users happily using anonymous volumes and following our administrative instructions exactly. These are our least technical users and we want to make sure that they are not negatively affected by any change we make to the docker-compose configuration. For that reason, we can't "just" change the docker-compose configuration to use named volumes and provide a script to migrate data. There's too high of a risk that users would forget/fail to run the script and end up thinking they had lost all their data. This kind of approach would be fine if we could somehow ensure that bringing the service back up with the new configuration only succeeds if the data migration has been completed.
Side note for those wondering about our choice to use a containerized database: we also have a path for users to specify an external db server (e.g. RDS) but this is only accessible to our most resourced users.
Edit: Here is a similar ServerFault question.
Given that you're using an official PostgreSQL image, you can exploit their database initialization system:
If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in that directory to do further initialization before starting the service.
with a change of PGDATA
This optional variable can be used to define another location - like a subdirectory - for the database files. The default is /var/lib/postgresql/data. If the data volume you're using is a filesystem mountpoint (like with GCE persistent disks) or remote folder that cannot be chowned to the postgres user (like some NFS mounts), Postgres initdb recommends a subdirectory be created to contain the data.
to solve the problem. The idea is that you define a different location for the Postgres files and mount a named volume there. The new location will be empty initially, and that will trigger the database initialization scripts. You can use this to move the data off the anonymous volume, and it will happen exactly once.
I've prepared an example for you to test this out. First, create a database on an anonymous volume with some sample data in it:
docker-compose.yml:
version: "3.7"
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: test
    volumes:
      - ./test.sh:/docker-entrypoint-initdb.d/test.sh
test.sh:
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "postgres" --dbname "postgres" <<-EOSQL
CREATE TABLE public.test_table (test_column integer NOT NULL);
INSERT INTO public.test_table VALUES (1);
INSERT INTO public.test_table VALUES (2);
INSERT INTO public.test_table VALUES (3);
INSERT INTO public.test_table VALUES (4);
INSERT INTO public.test_table VALUES (5);
EOSQL
Note how test.sh is mounted: it must be in the /docker-entrypoint-initdb.d/ directory in order to be executed at the initialization stage. Bring the stack up and down to initialize the database with this sample data, as sketched below.
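A minimal sketch of that initialize-and-stop cycle, assuming the files above sit in the current directory:
# First start runs initdb plus the scripts in /docker-entrypoint-initdb.d
docker-compose up -d
# Stop the stack; the anonymous volume keeps the sample data
docker-compose down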
Now create a script to move the data:
move.sh:
#!/bin/bash
set -e
# Replace the freshly initialized (empty) database on the named volume
# with the existing data from the old anonymous volume.
rm -rf "${PGDATA:?}"/*
mv /var/lib/postgresql/data/* "$PGDATA/"
and update the docker-compose.yml with a named volume and a custom location for data:
docker-compose.yml:
version: "3.7"
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: test
      # set a different location for data
      PGDATA: /pgdata
    volumes:
      # mount the named volume
      - pgdata:/pgdata
      - ./move.sh:/docker-entrypoint-initdb.d/move.sh
volumes:
  # define a named volume
  pgdata: {}
When you bring this stack up it won't find a database (the named volume is initially empty), so Postgres will run its initialization scripts: first its own script to create an empty database, then the custom scripts from the /docker-entrypoint-initdb.d directory. In this example I mounted move.sh into that directory; it erases the temporary database and moves the old database to the new location.
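To confirm the migration worked, a rough check (assuming the sample table created by test.sh above) could be:
docker-compose up -d
# The five sample rows should now be served from the named volume
docker-compose exec postgres psql -U postgres -d postgres -c "SELECT * FROM test_table;"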
I have a Dockerfile for the frontend, one for the backend, and one for the database.
In the backend portion of the project, I have a Dockerfile and a docker-compose.yml file.
The Dockerfile is great for the backend because it configures the backend, copies and sets up everything it needs, etc. I like it a lot.
The issue I have run into is that I can easily create a Dockerfile for the DBMS, but it requires me to put it in a different directory. I was hoping to just define it in the same directory as the backend, and because the backend and the DBMS are so tightly coupled, I figured this is where docker-compose would come in.
The problem is that in a compose file I can't do a COPY into the DBMS container; I would have to create another Dockerfile to set that up. I was thinking that would work.
Looking on GitHub, there was a big enhancement thread about it, but the closest people would get is just creating a volume relationship, which fails to do what I want.
Ideally, all I want is to be able to stand up a Postgres DBMS in such a fashion that I could do load balancing on it later down the line (1 write, 5 read, or something), and have its initial DB defined in my one SQL file.
Am I missing something? I thought I was going about it correctly, but maybe I need to create a whole new directory with a Dockerfile for the DBMS.
Thoughts on how I should accomplish this?
Right now I am doing something like:
version: '2.0'
services:
  backend:
    build: .
    ports:
      - "8080:8080"
  database:
    image: "postgres:10"
    environment:
      POSTGRES_USER: "test"
      POSTGRES_PASSWORD: "password"
      POSTGRES_DB: "foo"
    # I shouldn't have this volume, as it would copy the entire folder and its contents to the db.
    volumes:
      - ./:/var/lib/postgresql/data
With Docker there is an almost infinite set of possibilities for copying things.
At image build time:
use the COPY or ADD instructions
use shell commands, including cp, ssh, wget and many others.
From the docker command line (see the sketch after this list):
use docker cp to copy from/to hosts and containers
use docker exec to run arbitrary shell commands, including cp, ssh and many others...
In docker-compose / Kubernetes (or through the command line):
use a volume to share data between containers
a volume can be a local or a remote file system (a network disk, for example)
potentially combine that with shell commands, for example to perform backups
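For instance, a rough sketch of the command-line route, reusing the user/database from the compose file above (the container name pg-db and the file init.sql are placeholders):
# Copy a file from the host into a running container
docker cp ./init.sql pg-db:/tmp/init.sql
# Run an arbitrary command inside the container, e.g. load that file
docker exec -i pg-db psql -U test -d foo -f /tmp/init.sql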
Still, how you should do it depends heavily on the use case.
If the data you copy is linked to the code and versioned (in the git repo...), then treat it as if it were code and build the image with it through the Dockerfile. This is, for me, a best practice.
If the data is configuration dependent on the environment (like test vs prod, farm 1 vs farm 2), then go for docker config/secret + ENV variables.
If the data is dynamic and generated at production time (like a DB that is filled with user data as the app is used), use persistent volumes and be sure you understand the impact of a container failure on your data.
For a database in a test system it can make sense to relaunch the DB from a backup dump, a read-only persistent volume, or, much simpler, to back up the whole container at a known state (with docker commit).
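As an illustration of the backup-dump option, a sketch using the same placeholder names as above (pg_dump --clean emits DROP statements so the reload resets the tables):
# Capture the known-good state once...
docker exec pg-db pg_dump --clean -U test -d foo > known-state.sql
# ...and reload it whenever the test system needs a fresh start
docker exec -i pg-db psql -U test -d foo < known-state.sql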
Well.
I have a docker-compose.yaml with a Postgres image (it is a simple sample).
And I have a NodeJS script with a raw SQL query to Postgres:
COPY (SELECT * FROM mytable) TO '/var/lib/postgresql/data/mytable.csv'
What is happening?
mytable.csv is saved inside the Postgres container.
What do I need?
To save mytable.csv to the HOST MACHINE (or to another container from the docker-compose stack).
Anyway, the context: I have big tables (1m+ rows) and it is necessary that the files are generated and saved by the Postgres server, but this saving process will be started via a NodeJS script with a COPY query from another container / the host machine.
Do you know how to do this?
my docker-compose.yml:
version: "3.6"
services:
  postgres:
    image: postgres:10.4
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=1234
    volumes:
      - postgres-storage:/var/lib/postgresql/data
    ports:
      - "5432:5432"
UPDATE:
I made a diagram in Miro for my process. The main problem is in the THIRD step: I can't return the .csv file to NodeJS or save it into the NodeJS container. I can do 2 things:
Return the rows and build the file in NodeJS (but the NodeJS server will do that slowly)
Save the .CSV file in the Postgres container. But I need the .CSV file in the NodeJS container.
Schema with two containers that I need
Well, thanks to the person who linked this question to the one about COPY TO STDOUT (sorry, I don't remember the question ID).
So, the problem is solved by using COPY TO STDOUT and the small npm module pg-copy-streams.
The code:
const fs = require('fs');
const { Client } = require('pg');
const copyTo = require('pg-copy-streams').to;

const client = new Client(config);
await client.connect();
const output = fs.createWriteStream('./output.csv');
// Stream the CSV straight from the server into a local file
const result = client.query(copyTo('COPY (select * from my_table) TO STDOUT WITH (FORMAT CSV, HEADER)'));
result.pipe(output);
So Postgres sends the CSV as a stream to the NodeJS script on the host, and on the NodeJS side we only need to write this stream to a file, without converting anything ourselves.
Thanks!
I have a specific situation where I need to connect Kamailio to a PostgreSQL DB rather than MySQL. Can someone please provide the steps for that? I have tried multiple suggestions from the forum but they failed.
Problem faced: whenever Kamailio creates the database in PostgreSQL, it keeps asking for the password and ultimately it fails.
Ubuntu version: 16.04 LTS
Kamailio: 5.0
I have done the following things so far:
1. Included the Postgres modules
2. Modified kamailio.cfg and added the following lines:
#!ifdef WITH_PGSQL
# - database URL - used to connect to database server by modules such
# as: auth_db, acc, usrloc, a.s.o.
#!ifndef DBURL
#!define DBURL "postgres://kamailio:password@localhost/kamailio"
#!endif
#!endif
This is my kamctlrc file:
# The Kamailio configuration file for the control tools.
#
# Here you can set variables used in the kamctl and kamdbctl setup
# scripts. Per default all variables here are commented out, the control tools
# will use their internal default values.
## your SIP domain
SIP_DOMAIN=sip.<DOMAIN>.net
## chrooted directory
# $CHROOT_DIR="/path/to/chrooted/directory"
## database type: MYSQL, PGSQL, ORACLE, DB_BERKELEY, DBTEXT, or SQLITE
# by default none is loaded
#
# If you want to setup a database with kamdbctl, you must at least specify
# this parameter.
DBENGINE=PGSQL
## database host
DBHOST=localhost
## database port
# DBPORT=3306
## database name (for ORACLE this is TNS name)
DBNAME=kamailio
# database path used by dbtext, db_berkeley or sqlite
# DB_PATH="/usr/local/etc/kamailio/dbtext"
## database read/write user
DBRWUSER="kamailio"
## password for database read/write user
DBRWPW="password"
## database read only user
DBROUSER="kamailioro"
Thanks in advance !!
Finally, we figured out the issue. It was a small mistake in the .pgpass file, which was causing the authentication problem.
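For anyone hitting the same password prompts: a .pgpass line has the form hostname:port:database:username:password and the file must be readable only by its owner. A minimal sketch matching the kamctlrc above (the password is a placeholder, and the file belongs in the home directory of the user running kamdbctl):
# Add a credentials line for the kamailio database user and lock down permissions
echo "localhost:5432:kamailio:kamailio:password" >> ~/.pgpass
chmod 600 ~/.pgpass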