How do I run JUnit against a remote PostgreSQL Docker container? - postgresql

I'm trying to run JUnit tests against a PostgreSQL container on a remote Docker host.
I tried to use testcontainers.org, but I can't get the configuration to work and I keep getting a timeout exception.
Does Testcontainers support such a setup? If so, is there an example of it?
Thanks

As long as your Docker client is configured correctly, Testcontainers should work out of the box without any additional configuration.
Just make sure that your environment variables point to the remote Docker daemon.
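For example, if the remote daemon is exposed over TCP, exporting something like this before running the tests is usually enough (host and port here are assumptions; substitute your own, and add DOCKER_TLS_VERIFY/DOCKER_CERT_PATH if the daemon uses TLS):

export DOCKER_HOST=tcp://remote-docker-host:2375
# If the mapped container ports are only reachable on the remote machine,
# tell Testcontainers to address that host instead of localhost:
export TESTCONTAINERS_HOST_OVERRIDE=remote-docker-host

If the timeout persists, check that the daemon's port and the containers' mapped ports are actually reachable from the machine running the tests.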

Related

Connect PostgreSQL to RabbitMQ

I'm trying to get RabbitMQ to monitor a PostgreSQL database so that a message is queued whenever database rows are updated. The eventual plan is to feed this message queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many approaches to this, but they are still confusing to a newcomer to RabbitMQ, and many seem to have been written more than five years ago, so I'm not sure whether they'll still work with current versions of Postgres and RabbitMQ.
I followed this guide about installing the area51/notify-rabbit Docker container, which can connect the two via a Node app, but when I ran the container it immediately stopped and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a Docker container.
Additionally, there is this method of installing the pg_amqp extension, from a repository that hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed it and attempted to install pg_amqp on my Postgres DB (PostgreSQL 12), I was unable to connect to the database using psql, getting the classic error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current set-up: I have a RabbitMQ server installed in a Docker container on an AWS EC2 instance, which I can access via the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
The PostgreSQL database is running on a separate EC2 instance, and both instances have the required ports open so each can reach the other.
I have also looked into using Amazon SQS for this, but there didn't seem to be any info on linking PostgreSQL up to it. I haven't really seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering whether this is still the best way to create a message broker for a Kubernetes system. Any help/pointers on this much appreciated.
In the end, I decided the best thing to do was to write some simple Python scripts to do the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I set the scripts up inside Docker containers running in my Kubernetes cluster, so they are automatically restarted if they fail.
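For anyone wanting the shape of it, here is a minimal sketch of that bridge, assuming psycopg2 and pika, placeholder hostnames and queue/channel names, and a Postgres trigger that calls pg_notify('row_updated', ...) when rows change:

import select
import pika
import psycopg2
import psycopg2.extensions

# Placeholder connection details; adjust for your setup.
pg = psycopg2.connect("dbname=mydb user=myuser host=pg-host")
pg.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = pg.cursor()
cur.execute("LISTEN row_updated;")  # channel fired by the trigger's pg_notify

mq = pika.BlockingConnection(pika.ConnectionParameters(host="rabbit-host"))
channel = mq.channel()
channel.queue_declare(queue="row_updates", durable=True)

while True:
    # Block until the Postgres socket has data (i.e. a notification arrived).
    if select.select([pg], [], [], 60) == ([], [], []):
        continue  # timed out; loop and wait again
    pg.poll()
    while pg.notifies:
        note = pg.notifies.pop(0)
        channel.basic_publish(exchange="",
                              routing_key="row_updates",
                              body=note.payload)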

Concourse CI: Quickstart + localhost

I've just started 'kicking the tires' on Concourse-CI, using the quickstart tutorial as my starting point. That much works fine.
I've created a super basic pipeline with a single task, just like the quickstart tutorial. But instead of pulling the busybox image and executing the echo command, I'm pulling another image and running a command that tries to update a local Postgres DB.
When I run the pipeline, my task (a Docker image writing to the local Postgres DB) fails because a connection can't be made to the local DB. I've searched far and wide and can't figure out how to do this. In the docker-compose file from the quickstart tutorial, I've tried adding CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS: "true" (see below), to no avail.
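For reference, this is roughly where I added it (the service name follows the quickstart's docker-compose.yml; treat it as an assumption):

# docker-compose.yml (quickstart)
services:
  concourse:
    ...
    environment:
      CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS: "true"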
Any suggestions on how I may be able to achieve this?
Turns out my issue had nothing to do with Concourse.
The local Postgres instance I was attempting to write to was only accepting connections from localhost, which won't allow connections from Docker containers. I updated the Postgres settings to allow remote connections, and all is well.
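For anyone hitting the same wall, the usual fix involves two settings (the subnet below assumes Docker's default bridge network; adjust to yours):

# postgresql.conf: listen on all interfaces, not just localhost
listen_addresses = '*'

# pg_hba.conf: allow password auth from the Docker bridge subnet
host    all    all    172.17.0.0/16    md5

Restart Postgres after changing these for them to take effect.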

Docker containers cannot bind to network

I usually use the default network for Docker containers, and I had a Mongo database running in one just fine, with its port exposed to the network successfully. Then I tried to attach a new Python container to that container using the --link option (yes, I now realize that it is deprecated). An error was thrown, and in my hubris I didn't capture it; I just went on. Now, when I try to start my Mongo database, it fails, saying that it can't bind to the network: "Failed to set up listener: SocketException: Permission denied"
I removed the container and tried to re-create it, but no luck. I've put this into a permanently bad state. Any suggestions on how to fix this so I can get my database back?
Thanks.
Edit: I should have mentioned, this is on Ubuntu 20.04 with Docker 19.03.11.
Also, this only seems to be a problem with new Mongo containers. I can start Postgres, web servers, etc. without issues.
Turns out, whatever that error was when I tried to use --link, it had corrupted the Mongo image on my machine, so all new instances of that image failed to connect to the network. That's why removing the container and recreating it didn't fix the problem. I needed to delete the local Mongo image and re-pull it from Docker Hub.
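Concretely, the recovery looks something like this (the container name is a placeholder, and this assumes the official mongo image):

docker rm -f my-mongo        # remove any containers still using the image
docker rmi mongo             # delete the corrupted local image
docker pull mongo            # re-pull a fresh copy from Docker Hub
docker run -d -p 27017:27017 --name my-mongo mongo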

How to connect a local Mongo database to Docker

I am working on a Golang project. Recently I read about Docker and am trying to use it with my app. I am using MongoDB for the database.
The problem: I created a Dockerfile to install all packages, then compile and run the Go project.
Mongo is running locally. If I run the Go program without Docker, it gives me output, but if I use Docker for the same project (just installing dependencies and running the project), it compiles successfully but gives no output, failing with this error:
CreateSession: no reachable servers
My Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
WORKDIR $GOPATH/src/myapp
# Copy the local package files to the container's workspace.
ADD . /go/src/myapp
#Install dependencies
RUN go get ./...
# Build the myapp command inside the container.
RUN go install myapp
# Run myapp by default when the container starts.
ENTRYPOINT /go/bin/myapp
# Document that the service listens on port 8080.
EXPOSE 8080
EXPOSE 27017
When you run your application inside Docker, it runs in an isolated environment; it's just like another computer, except everything is virtual, including the network.
To let containers reach the host, Docker assigns the host a special IP address and makes it resolvable from containers under the name host.docker.internal.
So, assuming Mongo is running on the host machine and bound to every interface, it can be reached from the container with the connection string:
mongodb://host.docker.internal:27017/database
Put simply, just use host.docker.internal as your MongoDB hostname.
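One caveat worth flagging: host.docker.internal works out of the box on Docker Desktop (Mac/Windows), but on Linux you typically have to add the mapping yourself, which Docker Engine 20.10+ supports (myapp is a placeholder image name):

docker run --add-host=host.docker.internal:host-gateway myapp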
In your Golang project, how do you specify the connection to MongoDB? localhost:27017?
If you are using localhost in your code, the Docker container itself will be localhost, and since you don't have MongoDB in the same container, you'll get the error.
If you are starting your container with docker run ..., add --network="host". If you are using docker-compose, add network_mode: "host". See the sketch below.
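For example (image/service names are placeholders, and note that host networking like this only works on Linux):

docker run --network="host" myapp

# docker-compose.yml
services:
  myapp:
    build: .
    network_mode: "host"

With host networking the container shares the host's network stack, so localhost:27017 inside the container reaches the MongoDB instance running on the host.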
Ideally you would set up MongoDB in its own container and connect the two from your docker-compose.yml, but that's not what you are asking about, so I won't go into it.
In future questions, please include the relevant Dockerfile and docker-compose.yml to the extent possible. It will help us give a more specific answer.

How to start up a Kubernetes cluster using Rocket?

I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo.
When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message:
Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly.
According to the docs, k8s supports Rocket. Can someone please guide me about how to start a local cluster without a working Docker installation?
Thanks in advance.
You need to set three environment variables before running ./hack/local-up-cluster.sh:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
This is described in the docs for getting started with a local rkt cluster.
Try running export CONTAINER_RUNTIME="rocket" and then re-running the script.