Connecting to a database in the OrientDB console

I'm trying to connect to the GratefulDeadConcerts database without any success.
I'm using Docker.
orientdb> connect remote:localhost root root
Disconnecting from the database [null]...OK
Connecting to remote Server instance [remote:localhost] with user 'root'...OK
orientdb {server=remote:localhost/}> list databases
Found 5 databases:
* VehicleHistoryGraph (plocal)
* GratefulDeadConcerts (plocal)
* OpenBeer (plocal)
* BetterDemo (plocal)
* Tolkien-Arda (plocal)
orientdb {server=remote:localhost/}> connect plocal:../databases/GratefulDeadConcerts root root
Disconnecting from remote server [remote:localhost/]...
OK
Connecting to database [plocal:../databases/GratefulDeadConcerts] with user 'root'...
Error: com.orientechnologies.orient.core.exception.OStorageException: Cannot open local storage '../databases/GratefulDeadConcerts' with mode=rw
DB name="GratefulDeadConcerts"
Error: com.orientechnologies.orient.core.exception.OStorageException: Cannot open the storage 'GratefulDeadConcerts' because it does not exist in path: ../databases/GratefulDeadConcerts
DB name="GratefulDeadConcerts"
What am I doing wrong?

Can you try this?
connect remote:localhost/GratefulDeadConcerts root root

The official Docker image ships without any databases inside: GratefulDeadConcerts isn't included. This is because a volume to store databases in is usually provided, and a bundled demo database would increase the size of the image.
You don't show the command you are using to launch the container, but I suppose you have read the doc (https://hub.docker.com/_/orientdb/) and you're using something like this:
docker run -d --name orientdb -p 2424:2424 -p 2480:2480 \
-v <config_path>:/orientdb/config \
-v <databases_path>:/orientdb/databases \
-v <backup_path>:/orientdb/backup \
-e ORIENTDB_ROOT_PASSWORD=rootpwd \
orientdb
Point your browser to localhost:2480 and download a db from our site: http://orientdb.com/docs/last/Studio-Home-page.html
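If you prefer the console to Studio, here is a rough sketch of loading a downloaded export file (the export path and the rootpwd password are assumptions; adjust them to your setup):
orientdb> CREATE DATABASE remote:localhost/GratefulDeadConcerts root rootpwd plocal
orientdb> CONNECT remote:localhost/GratefulDeadConcerts root rootpwd
orientdb> IMPORT DATABASE /tmp/GratefulDeadConcerts.export.json.gz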
Wrapping up:
- read the doc about the image on Docker Hub
- read the doc about Studio and how to import a public database
Regards

Maybe you could try this?
connect remote:localhost/databases/GratefulDeadConcerts root root

Decide whether you want to connect to a remote server or open the database locally:
1. If it is local, use plocal:
orientdb> CONNECT plocal:../databases/databasename username password
2. If it is a remote server:
orientdb> CONNECT remote:192.168.1.1/databasename username password
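Since the question runs OrientDB in Docker, a minimal sketch of the remote case against the official container (assumptions: the container is named orientdb as in the docker run example above, and the console lives at /orientdb/bin/console.sh inside the image):
docker exec -it orientdb /orientdb/bin/console.sh
orientdb> CONNECT remote:localhost/GratefulDeadConcerts root rootpwd
Note that plocal opens the storage files directly, so it can fail with "Cannot open local storage ... with mode=rw" if the server is running and already holds a lock on that directory; against a running server, prefer the remote form.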

Related

Creating a PostgreSQL database back end for a new Label Studio project

I am creating a local Label Studio server to host images to annotate in our office. I would like the database back end to be PostgreSQL, not SQLite, and located in a particular directory (not the default, and not the same as the 'data-dir'). I have a test server working across the network with various machines annotating images on the server, but the back end was SQLite for this test.
Everything I've tried to get a PostgreSQL back-end db has failed for various reasons. Some commands result in a SQLite db (occasionally with the name 'postgresql') located in my required directory; others produce postgres/psycopg2 errors, but I think those are leading me up a garden path.
The host machine is running Ubuntu 20.04 LTS, and it serves another PostgreSQL db over the network using other APIs. The PostgreSQL version running is 12.9.
I have created a conda environment and pip installed Label Studio as the documentation suggested.
Here's what I've tried:
Start the conda environment. Follow the instructions to assign environment variables from https://labelstud.io/guide/storedata.html#PostgreSQL-database, which at the time of writing are:
DJANGO_DB=default
POSTGRE_NAME=postgres
POSTGRE_USER=postgres
POSTGRE_PASSWORD=
POSTGRE_PORT=5432
POSTGRE_HOST=db
Then I tried a few variations on the start command (the backslashes aren't part of the commands I actually ran; they're only included here for readability/comparability):
label-studio start --init \
-db postgresql \
--database /path/to/label-studio/databases/newdb \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: db is where expected, but:
file newdb
gives "newdb: SQLite 3.x database, last written using SQLite version 3038002"
label-studio start --init \
--database /path/to/label-studio/databases/newdb \
-db postgresql \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: a db at the specified path named 'postgresql', and still a SQLite db. This seems to mirror the mistake mentioned at: https://github.com/heartexlabs/label-studio/issues/1660
I have also tried the above two commands with the '--init' argument omitted, with the same results.
Then I tried adding something on the front of the command suggested at the same link above:
DJANGO_DB=default label-studio start \
--database /path/to/label-studio/databases/newdb \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: psycopg2.OperationalError: FATAL: password authentication failed for user "postgres"
FATAL: password authentication failed for user "postgres"
DJANGO_DB=default POSTGRE_PASSWORD= label-studio start \
--database /path/to/label-studio/databases/newdb \
--data-dir /path/to/label-studio/media_dirs/test_proj
result: psycopg2.OperationalError: fe_sendauth: no password supplied
Any help and resolution would be highly appreciated.
Also, I can't tag this with 'label-studio' because I'm not quite at the required reputation to create a new tag, so if anyone who can feels like doing so, please and thank you!
Your last option was closer than all the others. Have you tried running LS like this:
DJANGO_DB=default POSTGRE_NAME=<postgres_name> POSTGRE_USER=<postgres_user> POSTGRE_PASSWORD=<password> POSTGRE_PORT=<db_port> POSTGRE_HOST=<db_host> label-studio
Of course, you have to run the postgres service yourself, configure it properly, create the DB <postgres_name> and the user <postgres_user>, set the password <password>, and grant access rights to this user. Also don't forget to specify <db_host> (localhost?) and <db_port> (5432?).
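Pulling that together into a concrete sketch, with labelstudio and secret as assumed placeholder names and the data dir taken from the question:
sudo -u postgres psql -c "CREATE USER labelstudio WITH PASSWORD 'secret';"
sudo -u postgres psql -c "CREATE DATABASE labelstudio OWNER labelstudio;"
DJANGO_DB=default POSTGRE_NAME=labelstudio POSTGRE_USER=labelstudio \
POSTGRE_PASSWORD=secret POSTGRE_PORT=5432 POSTGRE_HOST=localhost \
label-studio start --data-dir /path/to/label-studio/media_dirs/test_proj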

Running a Chainlink Node - Remote DATABASE_URL Config PostgreSQL problem

I have been trying since yesterday to connect to a Chainlink node and I have not been able to.
I followed the steps at this website
I am having a problem with "Set the Remote DATABASE_URL Config" (I think this is my only error, given the [ERROR] listed below; I don't know if I'm doing something else wrong, since every command executed without error).
I am using the Docker option listed here to create the database.
I am always having this error:
"[ERROR] unable to lock ORM: failed to connect to host=localhost user=some-postgres database=postgres: dial error (dial tcp [::1]:5432: connect: cannot assign requested address) logger/default.go:155 stacktrace=github.com/smartcontractkit/chainlink/core/logger.Errorf
/chainlink/core/logger/default.go:155"
After writing in my Ubuntu Terminal (ON WINDOWS 10):
"cd ~/.chainlink-kovan && docker run -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink:0.10.1 local n"
I do not know how to connect to the database or what to write as attributes. I have completed all of the other steps and installs successfully.
I just want to know how to create a database on PostgreSQL, connect it to Docker as explained on the Chainlink website, and write the appropriate command in the Ubuntu terminal (for the "Remote DATABASE_URL Config PostgreSQL" step) so that I can run my node.
Thanks! (PS: I am a beginner and your help is much appreciated; if I forgot to mention any important information, please let me know so that I can add it.)
A comprehensive 101 for docker-postgres can be found here: https://hackernoon.com/dont-install-postgres-docker-pull-postgres-bee20e200198
Basically, you need to deploy a postgres db with docker
Pre-Reqs:
Create a dir for your docker/postgres:
mkdir -p $HOME/docker/volumes/postgres
Example:
docker run --rm --name pg-docker -e POSTGRES_USER=<any_desired_name> -e POSTGRES_PASSWORD=docker -e POSTGRES_DB=<any_db_name> -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres
For the postgres username, it can be anything, like "super_chain", etc.
For the postgres db, it can be "chainlink".
After that, docker is up and running. Just follow the docs tutorial, where you need to write the DB URL to the .env file.
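Using the names from the docker run example above, the DATABASE_URL line in the .env would look something like this (a sketch: note that localhost inside the Chainlink container refers to the container itself, which is exactly what produces the dial tcp [::1]:5432 error in the question; on Docker Desktop for Windows, host.docker.internal is one way to reach the host):
DATABASE_URL=postgresql://super_chain:docker@host.docker.internal:5432/chainlink?sslmode=disable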
Cheers

Setup a PostgreSQL connection to an already existing project in Docker

I had never used PostgreSQL or Docker before. I set up an already-developed project that uses these two technologies in order to modify it.
To get the project running on my Linux (Pop!_OS 20.04) machine, I was given these instructions (sorry if some of this is irrelevant, but I don't know what is and isn't important for stating my problem):
Installed Docker CE and Docker Compose.
Cloned the project with git and ran the commands git submodule init and git submodule update.
Initialized the container with: docker-compose up -d
Generated the application configuration file: ./init.sh
After all of that, the app was available at http://localhost:8080/app/, and I got the following subdirectories inside the project's directory:
And inside dbdata:
Now I need to modify the DB, and that's where the difficulty arose, since I don't know how to set up the connection with PostgreSQL inside Docker.
In a project without Docker which uses MySQL I would
Create the local project's database "dbname".
Import the project's DB: mysql -u username -ppassword dbname < /path/to/dbdata.sql
Connect a DB client (DBeaver in my case) to the local DB and perform the necessary modifications.
In an endeavour to do something like that with PostgreSQL, I have read that I need to:
Install and configure an Ubuntu 20.04 server.
Install PostgreSQL.
Configure Postgres “roles” to handle authentication and authorization.
Create a new database.
And then what?
How can I set up the connection in order to be able to modify the DB from DBeaver and see the changes reflected on http://localhost:8080/app/ when Docker is involved?
Do I really need an Ubuntu server?
Do I need other program than psql to connect to Postgres from the command line?
I have found many articles on the local setup of PostgreSQL with Docker, but all of them address the topic from scratch; none of them talk about how to connect to the DB of an "old" project inside Docker. I hope someone here can give a newbie directions on what to do, or recommend an article explaining from scratch how to configure PostgreSQL and then connect to a DB in Docker. Thanks in advance.
Edit:
Here's the output of docker ps
You have 2 options to get into known waters pretty fast:
Publish the postgres port on the Docker host machine, install any postgres client you like on the host, and connect to the database hosted in the container as you would have done traditionally. You will use localhost:5433 to reach the DB. << Update: 5433 is the port where the postgres container is published on your host, according to the screenshot.
Another option is to add another service in your docker-compose file to host the client itself in a container.
Here's a minimal example in which I am launching two containers: a postgres and an adminer, the latter exposed on the host machine on port 9999.
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 9999:8080
then I can access the adminer at localhost:9999 (password is example):
Once I'm connected to my postgres through adminer, I can import and execute any SQL query I need:
A kind piece of advice is to do some thorough reading on how data is persisted in a Docker context. Performance and security are also topics that you, as a novice in the field, will want to get under your belt sooner rather than later.
If you're running your PostgreSQL container on your own machine, you don't need anything else to connect using a database client. That's because all the containers are accessible from the host machine on their own subnet.
That means that if you do this:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 341164c5050f
it will output a list of IPs that you can configure in your DBeaver to access the container instance directly.
If you're not fond of doing that (or you prefer the cli), you can always use the psql bundled in the PostgreSQL container to achieve something like what you described in MySQL step 2:
docker exec -i 341164c5050f bash -c 'psql -U $POSTGRES_USER' < /path/to/your/schema.sql
It's important to pass -i, otherwise it will not read the schema from stdin. If you're looking for psql in interactive mode, use -it instead.
Last but not least, you can always edit the docker-compose.yml file to expose the port and connect to the instance using the public IP/loopback device.
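For that last option, the change is just a ports entry on the postgres service in docker-compose.yml; a sketch, assuming the service is named db as in the earlier answer:
db:
  image: postgres
  ports:
    - 5433:5432  # publish container port 5432 on host port 5433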

How can I restore MongoDB with MupX, since MongoDB is inside a Docker container?

I used mongodump to back up my database, since I want to move it from being hosted at compose.io to being hosted locally on the server itself using mupx.
Once I set up the app and have it running, how can I restore the mongodump? I am using mupx, and when I ssh into the server I see that MongoDB is inside a Docker container.
What are the steps needed to use mongorestore, given that I can copy the mongodump files from my local pc to the server?
1) Use scp command to copy the mongodump folder from my local pc to the server
2) SSH into the server
At this point I am logged into the server and am in the same directory as the dump folder. MongoDB is running inside Docker. How can I use mongorestore to restore MongoDB from the data in the dump folder?
I figured out how to do it. Here are step-by-step instructions.
1) Copy dump folder to server
scp -r /local_path/to/dump_folder root@111.222.33.4:/remote/path
2) SSH into server
ssh root@111.222.33.4
3) Copy from root of server to inside docker container
docker cp dump_folder mongodb:/dump_folder
4) Go into mongodb docker container
docker exec -it mongodb bash
5) check if copied folder exists
ls (you should see dump_folder, if you named it the same folder as in this example)
6) use mongorestore
mongorestore --drop -d AppName dump_folder
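If you would rather skip the copy steps, an alternative sketch is to stream an archive dump straight into the container over stdin (assumptions: MongoDB tools 3.2+ for --archive, and the container is named mongodb as above):
mongodump -d AppName --archive=app.archive (run on your local pc)
scp app.archive root@111.222.33.4:/root/
docker exec -i mongodb mongorestore --drop --archive < /root/app.archive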
For example, suppose you copied the mongo dump to the /data folder of the server.
When you run the docker container, you can mount the /data folder into the container:
docker run -v /data:/var/lib/mongodb -p 27017:27017 ....
After that, go inside the docker container and cd to /var/lib/mongodb. You can see the mongo dump there by using the ls command.
From there, you can use mongorestore to restore the mongo data.
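Putting those steps together, a minimal sketch (the mongo image name, the container name, and AppName are assumptions):
docker run -d --name mongodb -v /data:/var/lib/mongodb -p 27017:27017 mongo
docker exec -it mongodb bash
cd /var/lib/mongodb && ls (the dump folder copied to /data shows up here)
mongorestore --drop -d AppName dump_folder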

Postgres Docker Image: Failed to map database to host

I'm using the stock official Postgres image from Docker Hub (docker pull postgres). I wanted to map the data directory in the Postgres container to my OS X host, so I tried this:
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=mypass -v `pwd`/data:/var/lib/postgresql/data postgres
This resulted in the Postgres container failing to launch correctly.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... initdb: could not create directory "/var/lib/postgresql/data/global": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
The goal I'm trying to achieve is to have my database data stored on the host machine, so that I can start a Postgres container and have it read (or load) the database from a previous instance. Am I on the right track, or is this a stupid way to achieve database persistence?
According to the official documentation, you should use boot2docker to resolve the issue; without it, you won't be able to mount the container's data directory on OS X.
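If the goal is persistence rather than having the raw files visible in an OS X directory, one commonly used alternative is a Docker named volume, which sidesteps the permission problem; a sketch:
docker volume create pgdata
docker run --rm -p 5432:5432 -e POSTGRES_PASSWORD=mypass -v pgdata:/var/lib/postgresql/data postgres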