I am trying to find where the .dbshell file is stored, but I can't manage to find it.
May I know where it is normally stored?
Environment:
Docker mongo
The server is installed on GCP; I am using a Debian server and manage it over SSH.
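For instance, one way to look for it is to search inside the container, if the shell history is kept there at all (the container name mongo is a placeholder for yours):
docker exec mongo find / -name .dbshell 2>/dev/null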
I'm trying to get RabbitMQ to monitor a postgresql database to create a message queue when database rows are updated. The eventual plan is to feed this message queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many approaches to this, but they are still confusing to a newcomer to RabbitMQ, and many seem to have been written more than 5 years ago, so I'm not sure they'll still work with current versions of Postgres and RabbitMQ.
I've followed this guide about installing the area51/notify-rabbit Docker container, which can connect the two via a Node app, but when I ran the container it immediately stopped and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a Docker container.
Additionally, there is this method of installing the pg_amqp extension, from a repository which hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed it and attempted to install pg_amqp on my Postgres database (PostgreSQL 12), I was unable to connect to the database using psql, getting the classic error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current set-up is a RabbitMQ server installed in a Docker container on an AWS EC2 instance, which I can access via the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
The postgresql database is running on a separate EC2 instance and both instances have the required ports open for accessing data from each server.
I have also looked into using Amazon SQS for this, but it didn't seem to have any info on linking PostgreSQL up to it. I haven't really seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering if this is still the best way to create a message broker for a Kubernetes system. Any help/pointers on this are much appreciated.
In the end, I decided the best thing to do was to create some simple Python scripts to handle the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I set the scripts up inside Docker containers and run them in my Kubernetes cluster, so they benefit from automatic restarts if they fail.
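For reference, a minimal sketch of such a relay, assuming psycopg2 and pika; the hosts, channel, and queue names are placeholders, and this is not the exact gist code:

import select
import psycopg2
import psycopg2.extensions
import pika

# Connect to PostgreSQL and subscribe to a notification channel.
pg = psycopg2.connect("dbname=mydb user=myuser host=postgres-host")
pg.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = pg.cursor()
cur.execute("LISTEN row_updates;")

# Connect to RabbitMQ and declare the target queue.
mq = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq-host"))
channel = mq.channel()
channel.queue_declare(queue="row_updates", durable=True)

# Wait for notifications and forward each payload to RabbitMQ.
while True:
    if select.select([pg], [], [], 60) != ([], [], []):
        pg.poll()
        while pg.notifies:
            notify = pg.notifies.pop(0)
            channel.basic_publish(exchange="",
                                  routing_key="row_updates",
                                  body=notify.payload)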
I'm trying to implement a small API in Docker, and I need that API to write to a database hosted on the same server but running on Windows Server 2006.
I can't change the OS on the server because that server also works as a gateway for Power BI.
Should I mount the volume (I'm guessing C:/mongodb/data), or should I make the insert over localhost?
These are my limitations:
host: running Windows Server 2006 (can't change this)
app: a container running in Windows Subsystem for Linux (it has to run on Linux because I need async functions and I only have knowledge of Python/Node.js), but it has to persist the data in the Mongo database running on the host
mongo database: it has to run on Windows Server because a Power BI Gateway is consuming its data
Keeping with diagrams, maybe this will help explain it better:
As far as I understand, your system is as in the picture. You want to write data to MongoDB. There should be a network bridge connecting the host and the Linux environment, so you can access MongoDB via the bridge IP. If instead you run another MongoDB in the container and mount the host's data directory into it, that is not reliable, because the two instances may conflict over the data files.
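For example, writing from the Linux side to the host's MongoDB through the bridge might look like this (a sketch assuming pymongo; the bridge IP and the database/collection names are placeholders):

from pymongo import MongoClient

host_bridge_ip = "172.17.0.1"  # placeholder: replace with your actual bridge IP

# Connect to the MongoDB running on the Windows host via the bridge IP.
client = MongoClient(f"mongodb://{host_bridge_ip}:27017/")
db = client["mydatabase"]
db["readings"].insert_one({"value": 42})  # sample write from the API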
I had never used PostgreSQL or Docker before. I set up an already developed project that uses these two technologies in order to modify it.
To get the project running on my Linux (Pop!_OS 20.04) machine I was given these instructions (sorry if some of this is irrelevant, but I don't know what is and isn't important for stating my problem):
Installed Docker CE and Docker Compose.
Cloned the project with git and ran the commands git submodule init and git submodule update.
Initialized the container with: docker-compose up -d
Generated the application configuration file: ./init.sh
After all of that, the app was available at http://localhost:8080/app/, and inside the project's directory I got the following subdirectories:
And inside dbdata:
Now I need to modify the DB, and that's where the difficulty arose, since I don't know how to set up the connection with PostgreSQL inside Docker.
In a project without Docker which uses MySQL I would
Create the local project's database "dbname".
Import the project's DB: mysql -u username -ppassword dbname < /path/to/dbdata.sql
Connect a DB client (DBeaver in my case) to the local DB and perform the necessary modifications.
In an endeavour to do something like that with PostgreSQL, I have read that I need to
Install and configure an Ubuntu 20.04 server.
Install PostgreSQL.
Configure Postgres “roles” to handle authentication and authorization.
Create a new Database.
And then what?
How can I set up the connection in order to be able to modify the DB from DBeaver and see the changes reflected on http://localhost:8080/app/ when Docker is involved?
Do I really need an Ubuntu server?
Do I need other program than psql to connect to Postgres from the command line?
I have found many articles related to the local setup of PostgreSQL with Docker but all of them address the topic from scratch, none of them talk about how to connect to the DB of an "old" project inside Docker. I hope someone here can give directions for a newbie on what to do or recommend an article explaining from scratch how to configure PostgreSQL and then connecting to a DB in Docker. Thanks in advance.
Edit:
Here's the output of docker ps
You have 2 options to get into known waters pretty fast:
Publish the Postgres port on the Docker host machine, install any Postgres client you like on the host, and connect to the database hosted in the container as you would traditionally. You will use localhost:5433 to reach the DB (see the psql example right after this list). << Update: 5433 is the port where the postgres container is published on your host, according to the screenshot.
Another option is to add another service to your docker-compose file to host the client itself in a container.
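For the first option, connecting from the host could look like this (assuming the default postgres superuser; adjust the user and database names to your project):
psql -h localhost -p 5433 -U postgres -d postgres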
For the second option, here's a minimal example in which I launch two containers: the Postgres one and an adminer that is exposed on the host machine on port 9999.
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 9999:8080
Then I can access adminer at localhost:9999 (the password is example):
Once I'm connected to my postgres through adminer, I can import and execute any SQL query I need:
One piece of friendly advice: read thoroughly about how data is persisted in a Docker context. Performance and security are also topics that you, as a novice in the field, will want under your belt sooner rather than later.
If you're running your PostgreSQL container on your own machine, you don't need anything else to connect using a database client. That's because, from the host machine, all the containers are reachable on their own subnet.
That means that if you do this:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 341164c5050f
it will output a list of IPs that you can configure in your DBeaver to access the container instance directly.
If you're not fond of doing that (or you prefer to use the CLI), you can always use the psql inside the PostgreSQL container to achieve something like what you described in MySQL step 2:
docker exec -i 341164c5050f bash -c 'psql -U $POSTGRES_USER' < /path/to/your/schema.sql
It's important to pass -i; otherwise it won't read the schema from stdin. If you want psql in interactive mode, use -it instead.
Last but not least, you can always edit the docker-compose.yml file to publish the port and connect to the instance using the public IP/loopback device.
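For example, the relevant fragment of the docker-compose.yml might look like this (mapping host port 5432 is an assumption; pick any free port):
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
Then DBeaver can connect to localhost:5432 as usual.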
How can I run my MongoDB service on Windows 10?
I have downloaded MongoDB and tried copying the data files into the required directories. Whenever I try to start the MongoDB service from Task Manager -> Services, it shows that Windows could not start the MongoDB service.
Here's the official documentation about how to install and run MongoDB on Windows: Link
Secondly, this question has been asked before; you can get help from there too.
If you have already installed MongoDB on your Windows OS, make sure you have set the environment variable (PATH) for MongoDB.
To start the MongoDB server, type mongod in a command prompt.
To start the client, type mongo in a command prompt.
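If the service won't start, it is also worth checking that the data directory exists and trying a manual start (the path below is the Windows default, an assumption about your setup):
mongod --dbpath "C:\data\db"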
I am working on a Golang project. Recently I read about Docker and tried to use it with my app. I am using MongoDB for the database.
Now the problem: I created a Dockerfile to install all packages and to compile and run the Go project.
I am running MongoDB locally. If I run the Go program without Docker it gives me output, but if I use Docker for the same project (just installing dependencies and running the project), it compiles successfully but gives no output, failing with the error:
CreateSession: no reachable servers
My Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
WORKDIR $GOPATH/src/myapp
# Copy the local package files to the container's workspace.
ADD . /go/src/myapp
#Install dependencies
RUN go get ./...
# Build and install the app inside the container.
RUN go install myapp
# Run the myapp binary by default when the container starts.
ENTRYPOINT /go/bin/myapp
# Document that the service listens on port 8080.
EXPOSE 8080
EXPOSE 27017
When you run your application inside Docker, it runs in a virtualized environment; it's just like another computer, but everything is virtual, including the network.
To let your container reach the host, Docker assigns the host a special IP address and exposes it under the DNS name host.docker.internal.
So, assuming that Mongo is running on the host machine and bound to every interface, it can be reached from the container with the connection string:
mongodb://host.docker.internal:27017/database
In short, just use host.docker.internal as your MongoDB hostname.
In your Golang project, how do you specify the connection to MongoDB? localhost:27017?
If you are using localhost in your code, the Docker container itself is localhost, and since MongoDB isn't in the same container, you'll get that error.
If you are starting your container from the command line with docker run ..., add --network="host". If you are using docker-compose, add network_mode: "host", as sketched below.
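For example (the image name myapp is a placeholder):
docker run --network="host" myapp
or, in docker-compose.yml:
services:
  myapp:
    build: .
    network_mode: "host"
With host networking, localhost:27017 inside the container reaches the MongoDB running on the host.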
Ideally you would set up MongoDB in its own container and connect the two from your docker-compose.yml -- but that's not what you are asking for, so I won't go into that.
In future questions, please include the relevant Dockerfile and docker-compose.yml to the extent possible. It will help us give a more specific answer.
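For completeness, a minimal sketch of that two-service docker-compose.yml approach (the service and image names are assumptions):
version: '3'
services:
  mongo:
    image: mongo
  myapp:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - mongo
With this, the Go code would connect to mongodb://mongo:27017 instead of localhost, since compose services reach each other by service name.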