We have GitLab hosted on a private dedicated server that can only be accessed through an L2TP VPN tunnel. The domain's DNS points to the host's IP on the internal network, so it is obviously not accessible without the VPN. The problem is that when a GitLab CI/CD pipeline is triggered, the job fails with the following log:
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/web/XXX/.git/
fatal: unable to access 'https://<our domain>/web/XXX.git/': Could not resolve host: <our domain>
Here's the .gitlab-ci.yml:
image: docker/compose:latest
services:
  - docker:dind
before_script:
  - docker version
  - docker-compose --version
  - 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
build:
  stage: build
  script:
    - docker-compose --context remote up -d --build
It seems like it can't resolve the host's IP.
I don't even know where to start diagnosing the problem. On the hosting server I pinged the hostname and it resolved to the correct IP address. How can I fix this?
This error occurs because the host is not known to Docker.
The same issue happened to me as well. I added the domain name and its IP to the extra_hosts parameter under the Docker executor configuration in config.toml.
Get the IP address of the host you are trying to add, as you need both the hostname and the host IP in the extra_hosts parameter.
In your config.toml, change it like this:
[[runners]]
  name = "maven-docker"
  url = "https://<your-domain-name>/gitlab/"
  token = "MXvXVma55_Kw2o"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "maven:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    extra_hosts = ["<your-domain-name>:<your-domain-ip>"]
    pull_policy = "never"
    shm_size = 0
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.custom]
    run_exec = ""
Then restart the GitLab runner with sudo gitlab-runner restart (or gitlab-runner restart).
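To sanity-check the mapping before relying on the runner, you can resolve the name from a machine that is on the VPN and then confirm a container can reach it once the extra host is injected. A minimal sketch (the hostname and IP below are placeholders for your own domain and internal address):
# Resolve the internal IP from a machine connected to the VPN
getent hosts gitlab.example.internal
# --add-host is the docker run equivalent of the runner's extra_hosts setting
docker run --rm --add-host gitlab.example.internal:10.0.0.5 alpine ping -c 1 gitlab.example.internal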
An overview of the environment:
MongoDB cluster on Atlas that is peered with the VPC
EC2 instance running in the VPC
Docker Swarm inside the EC2 instance
What I am experiencing:
I am able to connect to Mongo using the mongo CLI from the EC2 instance.
None of my containers can connect to MongoDB, even though they are running on that EC2 instance.
As soon as I whitelist the public IP of the EC2 instance they are able to connect. But this is weird: I want them to connect anyway, because the instance they run on can already connect without any special whitelisting.
The swarm initialisation command I used:
docker swarm init --advertise-addr <private IP of the EC2>
It didn't work when I tried with the public IP, and it also doesn't work when I don't pass --advertise-addr to swarm init.
Additional useful information:
Dockerfile:
FROM node:12-alpine as builder
ENV TZ=Europe/London
RUN npm i npm@latest -g
RUN mkdir /app && chown node:node /app
WORKDIR /app
RUN apk add --no-cache python3 make g++ tini \
&& apk add --update tzdata
USER node
COPY package*.json ./
RUN npm install --no-optional && npm cache clean --force
ENV PATH /app/node_modules/.bin:$PATH
COPY . .
EXPOSE 8080
FROM builder as dev
USER node
CMD ["nodemon", "src/services/server/server.js"]
FROM builder as prod
USER node
HEALTHCHECK --interval=30s CMD node healthcheck.js
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "--max-old-space-size=2048" ,"src/services/server/server.js"]
I have no clue why it behaves like this. How can I fix it?
After a meeting with a senior DevOps engineer, we finally found the problem.
It turns out the CIDR block of the network the containers were running in overlapped with the CIDR of the VPC. What we did was add the following to the docker-compose file:
networks:
  wm-net:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16 # this CIDR doesn't overlap with the VPC
Your swarm containers are running on a separate network on your EC2 instance.
As explained here, a special 'ingress' network is created by Docker when initializing a swarm.
In order to allow them to connect, you may need to reconfigure the default settings set up by Docker, or whitelist the specific network interface that is used by Docker's ingress network.
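For reference, one way to see which subnets Docker has actually assigned (so you can spot an overlap with the VPC CIDR) is to inspect the networks on the EC2 instance; the network name below is just the default one:
# List networks created by Docker / the swarm
docker network ls
# Show the subnet(s) of the ingress network (same idea for your compose network)
docker network inspect --format '{{json .IPAM.Config}}' ingress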
We have a web app using a PostgreSQL DB, deployed to Tomcat in a CentOS 7 environment. We are using Docker (and docker-compose), running on an Azure virtual machine.
We cannot pre-set the admin user 'postgres' password (e.g. mysecret) during the docker/docker-compose build process.
We have tried the environment: setting in the docker-compose.yml file, and also ENV in the ./postgres Dockerfile. Neither works.
I had to manually use 'docker exec -it <container> /bin/bash' and run a psql command to set the password. We would like to avoid that manual step.
$ cat docker-compose.yml
version: '0.2'
services:
  app-web:
    build: ./tomcat
    ports:
      - "80:80"
    links:
      - app-db
  app-db:
    build: ./postgres
    environment:
      - "POSTGRES_PASSWORD=password"
      - "PGPASSWORD=password"
    expose:
      - "5432"
$ cat postgres/Dockerfile
FROM centos:7
RUN yum -y install https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-centos96-9.6-3.noarch.rpm
RUN yum -y install postgresql96 postgresql96-server postgresql96-libs postgresql96-contrib postgresql96-devel
RUN yum -y install initscripts
USER postgres
ENV POSTGRES_PASSWORD=mysecret
ENV PGPASSWORD=mysecret
RUN /usr/pgsql-9.6/bin/initdb /var/lib/pgsql/9.6/data
RUN echo "listen_addresses = '*'" >> /var/lib/pgsql/9.6/data/postgresql.conf
RUN echo "PORT = 5432" >> /var/lib/pgsql/9.6/data/postgresql.conf
RUN echo "local all all trust" > /var/lib/pgsql/9.6/data/pg_hba.conf
RUN echo "host all all 127.0.0.1/32 trust" >> /var/lib/pgsql/9.6/data/pg_hba.conf
RUN echo "host all all ::1/128 ident" >> /var/lib/pgsql/9.6/data/pg_hba.conf
RUN echo "host all all 0.0.0.0/0 md5" >> /var/lib/pgsql/9.6/data/pg_hba.conf
EXPOSE 5432
ENTRYPOINT ["/usr/pgsql-9.6/bin/postgres","-D","/var/lib/pgsql/9.6/data","-p","5432"]
The web app deployment fails with a DB authentication error due to a wrong password (the password 'mysecret' is defined in the web app's JPA persistence.xml). I assume the password was never properly set (a default initdb does not set one).
After manually changing the password with the above-mentioned docker exec command, everything works.
We would like to set the password at docker build time. Based on the Postgres/Docker documentation and some threads, either the environment: setting in docker-compose or ENV in the Dockerfile should work. Neither works for us.
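For what it's worth, ENV POSTGRES_PASSWORD is only acted on by the entrypoint script of the official postgres image; nothing in a hand-built CentOS image reads it, so the superuser ends up with no password. A minimal sketch of setting it at initdb time instead, run as the postgres user in a RUN step of the Dockerfile above (the password and paths are illustrative):
# Write the password to a temp file, hand it to initdb, then remove it
echo "mysecret" > /tmp/pgpass
/usr/pgsql-9.6/bin/initdb --pwfile=/tmp/pgpass -A md5 /var/lib/pgsql/9.6/data
rm /tmp/pgpass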
I have followed the docker-compose Concourse installation setup.
Everything is up and running, but I can't figure out what to use as the --tsa-host value in the command that connects a worker to the TSA host.
It is worth mentioning that the dockerized Concourse web and db are running on the same machine that I hope to use as a bare-metal worker.
I have tried (1) the IP address of the Concourse web container, but no joy. I cannot even ping the Docker container IP from the host.
1.
sudo ./concourse worker --work-dir ./worker --tsa-host IP_OF_DOCKER_CONTAINER \
  --tsa-public-key host_key.pub --tsa-worker-private-key worker_key
I have also tried (2) the CONCOURSE_EXTERNAL_URL and (3) the IP address of the host, but no luck either.
2.
sudo ./concourse worker --work-dir ./worker --tsa-host http://10.XXX.XXX.XX:8080 \
  --tsa-public-key host_key.pub --tsa-worker-private-key worker_key
3.
sudo ./concourse worker --work-dir ./worker --tsa-host 10.XXX.XXX.XX:8080 \
  --tsa-public-key host_key.pub --tsa-worker-private-key worker_key
Other details of setup:
Mac OSX Sierra
Docker For Mac
Please confirm you use the internal IP of the host, not the public IP and not the container IP.
--tsa-host <INTERNAL_IP_OF_HOST>
If you use the docker-compose.yml as in its setup document, you don't need to care about the TSA host at all; the environment variable has already been defined:
CONCOURSE_TSA_HOST: concourse-web
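If you do want a bare-metal worker talking to the dockerized web node, a hedged sketch of the idea (the interface name and IP are placeholders, and it assumes the web service also publishes the TSA port, 2222 by default, e.g. "2222:2222" in the compose file):
# Find the host's internal (LAN) IP
ipconfig getifaddr en0   # macOS; on Linux: hostname -I
# Point the worker at that IP and the published TSA port
sudo ./concourse worker --work-dir ./worker --tsa-host 192.168.1.20:2222 \
  --tsa-public-key host_key.pub --tsa-worker-private-key worker_key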
I used the docker-compose.yml recently with the steps described here: https://concourse-ci.org/docker-repository.html
Please confirm that there is a keys directory next to the docker-compose.yml after you executed the steps.
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
export CONCOURSE_EXTERNAL_URL=http://192.168.99.100:8080
docker-compose up
I am currently working on automated testing of my playbooks with GitLab CI. Ubuntu works very well and I get no issues.
The problem I have is with CentOS and systemd. First of all, the playbook (installing Postgres 9.5 inside CentOS 7):
- name: Ensure PostgreSQL is running
  service:
    name: postgresql-9.5
    state: restarted
  ignore_errors: true
  when:
    - ansible_os_family == 'RedHat'
And this is what I get when I try to start Postgres inside the container:
Failed to get D-Bus connection: Operation not permitted
Failed to get D-Bus connection: Operation not permitted
Failed to get D-Bus connection: Operation not permitted
Failed to get D-Bus connection: Operation not permitted
Failed to get D-Bus connection: Operation not permitted
I already run the container in privileged mode, with cgroups and everything else. I have also tried different Docker containers, but nothing works.
When using Docker, I think it would be better to just use postgres to start the server, with a command like:
postgres -D /opt/postgresql/data/ > /var/log/postgresql/pg_server.log 2>&1 &
When you use Docker, you don't have a fully functional systemd.
You can use the solution suggested by @KJocker to make a functional PostgreSQL container. Or instead you can configure systemd to work inside the container; here is a document to check.
I had the same thing when using Ansible against a Docker container, and I have written a docker-systemctl-replacement for that. It works for PostgreSQL: there is no need to change the Ansible script, so it can stay as it is for a deployment on a real machine.
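A minimal sketch of how such a replacement is typically wired into an image (the script path follows the project's layout and may differ between versions, so treat it as an assumption):
# Inside the image build: shadow systemctl with the replacement script
cp docker-systemctl-replacement/files/docker/systemctl.py /usr/bin/systemctl
chmod +x /usr/bin/systemctl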
Edit the configuration of your GitLab runner instance, /etc/gitlab-runner/config.toml.
From:
[runners.docker]
  privileged = false
  volumes = ["/cache"]
To:
[runners.docker]
  privileged = true
  volumes = ["/sys/fs/cgroup:/sys/fs/cgroup:ro", "/cache"]
Add:
[runners.docker.tmpfs]
  "/run" = "rw"
  "/tmp" = "rw"
[runners.docker.services_tmpfs]
  "/run" = "rw"
  "/tmp" = "rw"
Restart gitlab-runner.
In your Docker image, edit the getty tty1 service to permit autologin of the root user after systemd boots up:
sed -e 's|/sbin/agetty |/sbin/agetty -a root |g' -i /etc/systemd/system/getty.target.wants/getty@tty1.service
Use that Docker image in the image section of .gitlab-ci.yml and add the following to start systemd. Do not edit the entrypoint:
script:
  - /lib/systemd/systemd --system --log-target=kmsg &
  - sleep 5
  - systemctl start postgresql-9.5
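Putting it together, a hedged sketch of what the resulting job could look like (the image name, inventory and playbook names are placeholders; the point is that the playbook's service task only works once systemd is running):
test-centos7:
  image: registry.example.com/centos7-systemd:latest
  script:
    - /lib/systemd/systemd --system --log-target=kmsg &
    - sleep 5
    - ansible-playbook -i inventory site.yml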
When building my Dockerfile I need to grab dependencies. This is done using go get ./...
However, when I run docker build -t test ., it fails at the go get command.
Here is the error message:
exec go get -v -d
github.com/gorilla/mux (download)
cd .; git clone https://github.com/gorilla/mux /go/src/github.com/gorilla/mux
Cloning into '/go/src/github.com/gorilla/mux'...
fatal: unable to access 'https://github.com/gorilla/mux/': Could not resolve host: github.com
package github.com/gorilla/mux: exit status 128
Here is the Dockerfile:
FROM golang
# Create a directory inside the container to store all our application and then make it the working directory.
RUN mkdir -p /go/src/example-app
WORKDIR /go/src/example-app
# Copy the example-app directory (where the Dockerfile lives) into the container.
COPY . /go/src/example-app
# Download and install any required third party dependencies into the container.
RUN go-wrapper download
RUN go-wrapper install
RUN go get ./...
# Set the PORT environment variable inside the container
ENV PORT 8080
# Expose port 8080 to the host so we can access our application
EXPOSE 8080
# Now tell Docker what command to run when the container starts
CMD ["go-wrapper", "run"]
I assume you're doing that via SSH on another machine. Check whether a DNS server is configured in your /etc/network/interfaces. It should look something like this:
iface eth0 inet static
    address 192.168.2.9
    gateway 192.168.2.1
    netmask 255.255.255.0
    broadcast 192.168.2.255
    dns-nameservers 192.168.2.1 8.8.4.4
DNS servers that "always" work are 8.8.8.8 and 8.8.4.4, both provided by Google. If that doesn't resolve your problem, check your internet connection for other misconfigurations, but try this first.
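If the host itself resolves names fine but build containers do not, it is also worth checking the resolver the Docker daemon hands to containers. A minimal sketch (the resolver addresses are just the public ones mentioned above):
# Does DNS work inside a throwaway container?
docker run --rm busybox nslookup github.com
# Try forcing a resolver for a single run
docker run --rm --dns 8.8.8.8 busybox nslookup github.com
# Or set it daemon-wide in /etc/docker/daemon.json and restart Docker:
# { "dns": ["8.8.8.8", "8.8.4.4"] }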