Docker Compose Port Mapping in GitHub Actions

I'm trying to set up CI testing with GitHub Actions and Docker Compose. You can see the repository here.
I have a frontend on port 3000 that communicates with a backend on port 4000. I am using a testing library (Cypress) that runs predetermined tasks in an emulated browser against the frontend.
My Docker setup works locally, but I can't get the networking / port mapping working correctly in the GitHub runner. The frontend service can't be reached at http://localhost:3000.
NOTE: I am using network_mode: host to simplify the environment.
How can I configure the GitHub workflow to successfully connect to the frontend application on the host network on port 3000?
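For what it's worth, here is a sketch of a workflow that handles this kind of setup (the job name, the `docker compose` invocation, and the Cypress command are assumptions, since the repository isn't shown here). The usual missing piece is a wait step: the compose services start asynchronously, so the workflow has to poll localhost:3000 before launching Cypress:

```yaml
# .github/workflows/ci.yml (fragment -- names and commands are assumed)
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start the stack
        run: docker compose up -d --build
      - name: Wait for the frontend on the host network
        run: timeout 60 bash -c 'until curl -sf http://localhost:3000 > /dev/null; do sleep 2; done'
      - name: Run Cypress
        run: npx cypress run
```

Note that with network_mode: host, any ports: mappings in the compose file are ignored; the services bind directly to the runner's interfaces, so localhost:3000 only resolves once the frontend process is actually listening.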

Related

Internal Server Error when trying to connect to postgres container on Google Cloud Platform using Pgadmin

I have deployed a postgres docker image using a cloudbuild.yaml file. My service is deployed on Cloud Run, but I am not able to connect to the database using the URL provided by Cloud Run. I have included my cloudbuild.yaml below:
```yaml
steps:
  - name: "gcr.io/cloud-builders/docker"
    args: ['pull', 'postgres']
  - name: "gcr.io/cloud-builders/docker"
    args: ['tag', 'postgres', 'gcr.io/abi/postgres']
  - name: "gcr.io/cloud-builders/docker"
    args: ['push', 'gcr.io/abi/postgres']
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      [
        'run', 'deploy', 'postgres',
        '--image', 'gcr.io/abi/postgres',
        '--region', 'us-central1',
        '--platform', 'managed',
        '--port', '5435',
        '--allow-unauthenticated',
        '--set-env-vars', 'POSTGRES_USER=postgres',
        '--set-env-vars', 'POSTGRES_PASSWORD=root',
        '--set-env-vars', 'POSTGRES_DB=abi',
        '--set-env-vars', 'PGDATA=/var/lib/postgresql/data',
        '--set-env-vars', 'PGPORT=5435',
      ]
```
I also have the pgadmin image deployed and running on Cloud Run. I get an internal server error when I try to add the postgres database server to pgadmin using the URL provided by Cloud Run. Both the pgadmin and postgres images are deployed in the same project on GCP.
The reason your container is crashing is that Google Cloud Run requires containers to respond to HTTP requests.
Your container must listen on the configured TCP port; the default is 8080. The scheme is HTTP. Clients connect to the Google Cloud frontend using either HTTP (port 80) or HTTPS (port 443), and HTTP requests are redirected to HTTPS. In essence, your container must provide a web server. No other client protocols are supported, such as FTP, SMTP, PostgreSQL, etc.
An additional problem for you is persistent data storage. How will you initialize the database files? Cloud Run can create and destroy a container at any time. Data is not persisted between container starts.
Container runtime contract
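To make the contract concrete, here is a minimal sketch (not from the original answer) of a container entrypoint that satisfies it: a Python process serving HTTP on whatever port Cloud Run passes in via the PORT environment variable.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Responds 200 OK to any GET -- the bare minimum Cloud Run expects."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, format, *args):
        pass  # silence per-request logging

def make_server():
    # Cloud Run tells the container which port to bind via $PORT (default 8080)
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("0.0.0.0", port), HealthHandler)

# The container's entrypoint would call:
#   make_server().serve_forever()
```

A plain PostgreSQL container can never satisfy this, because it speaks the PostgreSQL wire protocol rather than HTTP; for a managed database on GCP, Cloud SQL is the intended service.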

Running an ASP.NET Core web app in GitHub Codespaces gives 502 Bad Gateway

I'm trying to get GitHub Codespaces working with a simple ASP.NET Core web app.
I've created a new ASP.NET Core web app (dotnet new webapp), confirmed it all works locally, and then added it to an organization GitHub repo.
From here, I open the repo in Codespaces, in the browser.
I've configured the container for C# and SQL; my Docker files are unchanged from the ones that were generated.
I can build fine, then I dotnet run from the Terminal and I can see the app starts up fine:
```
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
```
I also get a message about port forwarding, and I can see that port forward set up:
However, when I try that URL, I get an NGINX 502 Bad Gateway error from the 5001 port. From the 5000 port URL it just tries to redirect me to https://localhost:5001.
Am I missing some part of setup?
Things I've tried:
- Adding forwardedPorts to the devcontainer.json file
- Toggling the port forward to public / private
- Changing the port protocol to https
- Logging in and out of GitHub, and rebuilding the devcontainer many times
Cross-posting https://github.com/github/feedback/discussions/7116, where this seems to be discussed more heavily. We can update this when we've got a firm answer.
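For reference, the forwardedPorts attempt above would look something like this (a sketch; forwardedPorts and portsAttributes are from the dev container spec, and 5000/5001 are the Kestrel defaults shown in the log output):

```jsonc
// .devcontainer/devcontainer.json (fragment)
{
  "forwardedPorts": [5000, 5001],
  "portsAttributes": {
    "5001": { "protocol": "https" }
  }
}
```

One commonly suggested workaround, since the Codespaces proxy terminates TLS itself, is to serve plain HTTP instead (for example, dotnet run --urls http://localhost:5000 with the HTTPS redirect disabled), so the proxy isn't forwarding to an HTTPS-only endpoint.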

GitLab CI service connection refused

I have a docker image listed in my GitLab CI services list. When I make an HTTP request to my docker service's URL using curl, everything works fine. But when I run my tests, which make an HTTP request to the service image's URL using axios, the connection is refused. Here is the exact message: connect EINVAL 0.0.31.129:80 - Local (0.0.0.0:0)
The thing is that my service from the registry was listening on a specific port, but the runner was trying to connect to it on port 80. Because the URL did not include a protocol, the runner could not tell which one to use and fell back to HTTP's default port, 80. The fix is to add the http protocol (with the correct port) to the registry URL.
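A sketch of what that fix looks like in .gitlab-ci.yml (the image name, alias, and port are placeholders, not taken from the original question):

```yaml
# .gitlab-ci.yml (fragment -- image, alias, and port are placeholders)
test:
  services:
    - name: registry.example.com/my-service:latest
      alias: my-service
  variables:
    # include the protocol AND the port; without them, HTTP clients
    # default to port 80 and the connection is refused
    SERVICE_URL: "http://my-service:8080"
  script:
    - npm test
```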

How to expose services between containers with docker-compose

On CircleCI, when I declare multiple Docker images for a job:

```yaml
dockers:
  app: company/image
  selenium: selenium/image
```
app will expose port 4000 and selenium will expose port 4444.
Then from the app container I can access the selenium service via localhost:4444, and from the selenium container I can access the app webserver via localhost:4000.
docker-compose, however, behaves differently: it only allows me to access selenium:4444 from app, and app:4000 from selenium.
I want docker-compose to behave like CircleCI and let me use localhost:port to reach other services. How can I do that?
The way to achieve this is via network_mode:
I need to tell docker-compose to run selenium with network_mode: "service:app", so that every port selenium listens on is accessible from app using just localhost:PORT (and vice versa).
This is explained here: Can docker-compose share an ip between services with discrete ports?
Also the reason for it to work is explained in the docker networking model here: https://codability.in/docker-networking-explained/
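A minimal compose sketch of that setup (image names assumed to match the CircleCI example above; selenium/standalone-chrome is a stand-in):

```yaml
services:
  app:
    image: company/image
    ports:
      - "4000:4000"   # published ports must be declared on the "owner" service
  selenium:
    image: selenium/standalone-chrome
    # share app's network namespace: selenium's 4444 becomes localhost:4444
    # from app, and app's 4000 becomes localhost:4000 from selenium
    network_mode: "service:app"
```

Note that a service using network_mode: "service:app" cannot declare its own ports: mappings; anything that must be published to the host has to go on app.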

Port issue running an OpenLDAP container on Bluemix

I am trying to run an OpenLDAP container on Bluemix using IBM Containers. I am using cloudesire/openldap and successfully ran the container on my local Linux machine. I then tried to run it in my Bluemix account using IBM Containers.
I am unable to test it using ldapsearch or telnet on port 389. I managed to run some other containers and telnet to them successfully, but not the OpenLDAP container.
Is port 389 blocked by the Bluemix router? How can I proxy the port?
Port 389 is currently blocked for IBM Containers running on Bluemix.
Please open a support ticket with IBM Bluemix Support and request this port to be exposed:
https://developer.ibm.com/bluemix/support/#support