Containerized webservices with nginx-proxy / acme-companion on a single VPS - docker-compose

What I have: a VPS with its IPv4 address and a valid domain name bound to it via an A record in my provider's DNS control panel.
Let's call my domain name mydomain.com and denote my IPv4 address as IPADRESS for debugging purposes.
What I want: a Nextcloud instance and a Django-based blog running in parallel on my VPS, reachable over HTTPS at cloud.mydomain.com for the Nextcloud instance and blog.mydomain.com for the Django-based blog respectively.
What I've done:
I've tried to use nginx-proxy + its letsencrypt companion with Docker.
First of all, my working directory is /home/ubuntu/.
Here is the tree /home/ubuntu/ -L 2 output:
.
├── mywebsite-django
│   └── mysite
│       ├── Dockerfile
│       ├── blog
│       ├── config
│       ├── db.sqlite3
│       ├── docker-compose.yml
│       ├── manage.py
│       ├── mywebsite
│       ├── nginx
│       ├── requirements.txt
│       └── staticfiles
├── nextcloud_setup
│   ├── app
│   │   ├── config
│   │   ├── custom_apps
│   │   ├── data
│   │   └── themes
│   ├── docker-compose.yml
│   └── proxy
│       ├── certs
│       ├── conf.d
│       ├── html
│       └── vhost.d
└── nginx_setup
    ├── certs
    │   ├── mydomain.com
    │   ├── blog.mydomain.com
    │   ├── default.crt
    │   ├── default.key
    │   └── dhparam.pem
    ├── conf.d
    │   └── default.conf
    ├── docker-compose.yml
    ├── html
    ├── nginx.tmpl
    ├── templates
    │   └── nginx.tmpl
    └── vhost.d
        └── default

26 directories, 14 files
Then I create a Docker network:
sudo docker network create nginx-proxy
Then I start my nginx-proxy + letsencrypt stack:
cd nginx_setup && sudo docker-compose up -d
where nginx_setup/docker-compose.yml is:
version: '3'

services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: unless-stopped
    logging:
      options:
        max-size: "10m"
        max-file: "3"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/ubuntu/nginx_setup/conf.d:/etc/nginx/conf.d
      - /home/ubuntu/nginx_setup/vhost.d:/etc/nginx/vhost.d
      - /home/ubuntu/nginx_setup/html:/usr/share/nginx/html
      - /home/ubuntu/nginx_setup/certs:/etc/nginx/certs:ro
    environment:
      DEFAULT_HOST: "mydomain.com"

  nginx-gen:
    image: jwilder/docker-gen
    container_name: nginx-gen
    restart: unless-stopped
    volumes:
      - /home/ubuntu/nginx_setup/conf.d:/etc/nginx/conf.d
      - /home/ubuntu/nginx_setup/vhost.d:/etc/nginx/vhost.d
      - /home/ubuntu/nginx_setup/html:/usr/share/nginx/html
      - /home/ubuntu/nginx_setup/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:rw
      - /home/ubuntu/nginx_setup/templates/:/etc/docker-gen/templates:ro
    command: -notify-sighup nginx -watch -only-exposed /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: unless-stopped
    volumes:
      - /home/ubuntu/nginx_setup/conf.d:/etc/nginx/conf.d
      - /home/ubuntu/nginx_setup/vhost.d:/etc/nginx/vhost.d
      - /home/ubuntu/nginx_setup/html:/usr/share/nginx/html
      - /home/ubuntu/nginx_setup/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"

networks:
  default:
    external:
      name: nginx-proxy
The nginx.tmpl is defined as follows:
server {
    listen 80 default_server;
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    error_log /proc/self/fd/2;
    access_log /proc/self/fd/1;
    return 503;
}

{{ range $host, $containers := groupByMulti $ "Env.VIRTUAL_HOST" "," }}
upstream {{ $host }} {
{{ range $index, $value := $containers }}
    {{ $addrLen := len $value.Addresses }}
    {{ $network := index $value.Networks 0 }}
    {{/* If only 1 port exposed, use that */}}
    {{ if eq $addrLen 1 }}
        {{ with $address := index $value.Addresses 0 }}
        # {{ $value.Name }}
        server {{ $network.IP }}:{{ $address.Port }};
        {{ end }}
    {{/* If more than one port exposed, use the one matching VIRTUAL_PORT env var */}}
    {{ else if $value.Env.VIRTUAL_PORT }}
        {{ range $i, $address := $value.Addresses }}
        {{ if eq $address.Port $value.Env.VIRTUAL_PORT }}
        # {{ $value.Name }}
        server {{ $network.IP }}:{{ $address.Port }};
        {{ end }}
        {{ end }}
    {{/* Else default to standard web port 80 */}}
    {{ else }}
        {{ range $i, $address := $value.Addresses }}
        {{ if eq $address.Port "80" }}
        # {{ $value.Name }}
        server {{ $network.IP }}:{{ $address.Port }};
        {{ end }}
        {{ end }}
    {{ end }}
{{ end }}
}

server {
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    server_name {{ $host }};
    proxy_buffering off;
    error_log /proc/self/fd/2;
    access_log /proc/self/fd/1;

    location / {
        proxy_pass http://{{ trim $host }};
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # HTTP 1.1 support
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
{{ end }}
I got this template from here.
Note: once the stack is up, sudo docker-compose logs run from /home/ubuntu/nginx_setup/ shows nothing wrong.
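A clean log does not by itself prove that docker-gen produced a vhost configuration. Since conf.d is bind-mounted on the host, the generated file can be inspected directly (container names as above; this is just a generic check, not a fix):
# show the config docker-gen wrote on the host
cat /home/ubuntu/nginx_setup/conf.d/default.conf
# or dump the full config nginx is actually serving inside the container
sudo docker exec nginx nginx -T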
Then I start my Django container:
cd /home/ubuntu/mywebsite-django/mysite/ && sudo docker-compose up -d
My file /home/ubuntu/mywebsite-django/mysite/docker-compose.yml is defined as:
version: '3'

services:
  gunicorn:
    container_name: myblog
    build: .
    command: sh -c "python manage.py makemigrations &&
                    python manage.py migrate &&
                    python manage.py collectstatic --noinput &&
                    gunicorn --bind 0.0.0.0:8000 --workers 2 mywebsite.wsgi:application"
    volumes:
      - ./staticfiles:/static
    environment:
      VIRTUAL_HOST: blog.mydomain.com
      VIRTUAL_PORT: 8000
      LETSENCRYPT_HOST: mydomain.com
      LETSENCRYPT_EMAIL: mymail@forletsecrypt.com
    ports:
      - "8000:8000"

networks:
  default:
    external:
      name: nginx-proxy
Note: once the container is up, sudo docker-compose logs run from /home/ubuntu/mywebsite-django/mysite/ shows nothing wrong.
What I get:
curl blog.mydomain.com output:
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx/1.23.2</center>
</body>
</html>
Note: I did not try to launch my Nextcloud instance, since even my Django app does not work.
What's wrong here?
Here are some details on my machine.
sudo docker network ls output:
NETWORK ID     NAME          DRIVER    SCOPE
ce90ed81eade   bridge        bridge    local
c6325fd6c267   host          host      local
834d9a715380   nginx-proxy   bridge    local
78c28ce57f15   none          null      local
and
sudo ufw status verbose output
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To                             Action      From
--                             ------      ----
80,443/tcp (Nginx Full)        ALLOW IN    Anywhere
22/tcp                         ALLOW IN    Anywhere
80,443/tcp (Nginx Full (v6))   ALLOW IN    Anywhere (v6)
22/tcp (v6)                    ALLOW IN    Anywhere (v6)
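For what it's worth, the 503 page usually comes from the catch-all default_server in the template above, which would mean nginx never matched an upstream for blog.mydomain.com. Two generic checks (using the container and network names defined earlier; this is a debugging sketch, not a confirmed fix):
# is the myblog container actually attached to the nginx-proxy network?
sudo docker network inspect nginx-proxy --format '{{range .Containers}}{{.Name}} {{end}}'
# did docker-gen emit an upstream/server block for the blog vhost?
grep -B2 -A8 'blog.mydomain.com' /home/ubuntu/nginx_setup/conf.d/default.conf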

Related

Helm dependencies - Reference to values inside the helm dependency (dependency outputs)

I'll give the context first then the problem: I am interested in centralising a set of values that would be in my values.yaml.
Initial plan was to create a config map with the centralised values that I could load using the lookup helm function. Sadly for me the CD tool I use (ArgoCD) doesn't support lookup.
My current chain of thought is to create a dummy helm chart that would contain the centralised values, and set this helm chart as a dependency. Can I get some outputs out of this dependency that can be used elsewhere? If yes, how do I refer to them in the values.yaml?
One approach could be like this:
Create a folder structure such as this:
yourappfolder
  - Chart.yaml
  - values.yaml       # the main values file
  - values.dev.yaml   # env-specific values that override values.yaml
and publish a completely new helm chart, e.g. my-generic-chart, to a registry with its default values already in place, then add it to Chart.yaml as a dependency:
# Chart.yaml
apiVersion: v2
name: myapplication
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: my-generic-chart
    version: 5.6.0
    repository: "URL to your my-default-chart"
and don't forget to put all the values under the my-generic-chart key (it has to match the dependency name):
# values.yaml
my-generic-chart:
  image: nginx
  imageTag: 1
  envs:
    log-format: "json"
    log-level: "none"
  ..
  ..
# values.dev.yaml
my-generic-chart:
  imageTag: 1.1
  envs:
    log-level: "debug"
  ..
  ..
The values.dev.yaml file will override the values in values.yaml, and both of them together will override the default values.yaml shipped inside the generic chart.
Now you have to create one generic chart that fits all of your applications, or a generic chart for each type of application.
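As a rough illustration of that layering (release and folder names here are just placeholders), the chart can be rendered locally with the dev overrides on top:
# pull the my-generic-chart dependency into charts/
helm dependency update yourappfolder
# values.yaml is picked up automatically; values.dev.yaml is layered on top of it
helm template my-release yourappfolder -f yourappfolder/values.dev.yaml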
I figured out it is possible in two ways:
Exporting/importing child values: https://helm.sh/docs/topics/charts/#importing-child-values-via-dependencies and I found a great related answer: How to use parameters of one subchart in another subchart in Helm
Using "templates": https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#sharing-templates-with-subcharts "Parent charts and subcharts can share templates. Any defined block in any chart is available to other charts."
I create one chart, "data", that doesn't create any resources but defines a template function (for example "data.project"); then I use it from another chart:
├── Chart.lock
├── Chart.yaml
├── charts
│   ├── data
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   └── _helpers.tpl
│   │   └── values.yaml
│   ├── sub0
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   └── configmap.yaml
│   │   └── values.yaml
└── values.yaml
charts/data/templates/_helpers.tpl contains:
{{- define "data.project" -}}
project_name
{{- end }}
The top Chart.yaml contains:
---
apiVersion: v2
name: multi
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
- name: data
version: 0.1.0
repository: "file://charts/data" # This is while testing locally, once shared in a central repository this will achieve the centralised data information
import-values:
- data
- name: sub0
version: 0.1.0
repository: "file://charts/sub0"
Then from anywhere I can do:
"{{ include \"data.project\" . }}"
And obtain the value
Of course the "data" chart will need to live in a separate repository

Docker stack isn't updating the folder structure with new image

Recently I changed my Dockerfile to use a cleaner folder structure, but the change isn't being picked up by the stack deploy.
My folder structure:
├── Dockerfile.dev
├── Dockerfile.prod
├── env/
├── requirements.txt
├── src
│   ├── app
│   │   ├── __init__.py
│   │   ├── modules
│   │   ├── __pycache__
│   │   ├── services
│   │   └── util
│   ├── __init__.py
│   └── main.py
└── version.conf
Docker compose file (recorder api part):
api-recorder:
  image: img-api-recorder:latest
  build:
    context: ../api-recorder-python/
    dockerfile: Dockerfile.${DOCKER_ENV}
  deploy:
    mode: replicated
    replicas: 1
    restart_policy:
      condition: on-failure
    placement:
      constraints: [node.role == manager]
  volumes:
    - ${BASE_DIR}api-recorder-python:${WORKDIR}
  depends_on:
    - zookeeper
  environment:
    PYTHON_ENV: ${DOCKER_ENV}
Old Dockerfile:
FROM python:3
WORKDIR /usr/src/app
RUN apt-get update
RUN pip install --upgrade pip
RUN pip install kafka-python
RUN pip install python-dotenv
RUN pip install pymongo pymongo[srv]
RUN pip install psycopg2
RUN ln -snf /usr/share/zoneinfo/America/Sao_Paulo \
/etc/localtime && \
echo "America/Sao_Paulo" > /etc/timezone
COPY . .
CMD ["python3", "-u", "src/main.py"]
So what I did was create the requirements.txt file and change the COPY command.
New Dockerfile:
FROM python:3
WORKDIR /usr/src
COPY ./requirements.txt ./
RUN apt-get update
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN ln -snf /usr/share/zoneinfo/America/Sao_Paulo \
/etc/localtime && \
echo "America/Sao_Paulo" > /etc/timezone
COPY ./src ./
# CMD ["python3", "-u", "main.py"]
CMD ["python3", "-m", "http.server"]
The weird thing is that the new Dockerfile is being built correctly into a new image: if I run docker run -it [image_name]:latest bash and list the directories, I see the new structure produced by the new Dockerfile. On the other hand, if I run the stack deploy and enter the container, I land in /usr/src and it has the wrong structure:
the content inside the app folder is wrong; it should contain the program code.
How can I clean this up? I have already tried deleting all the volumes, images and containers, and I even reinstalled Docker; I don't know what else to do.
Your docker compose has both an image and a build section. Precedence is given to the image, so that is what is being used, not the build.
You probably want:
api-recorder:
  build:
    context: ../api-recorder-python/
    dockerfile: Dockerfile.${DOCKER_ENV}
  deploy:
    mode: replicated
    replicas: 1
    restart_policy:
      condition: on-failure
    placement:
      constraints: [node.role == manager]
  volumes:
    - ${BASE_DIR}api-recorder-python:${WORKDIR}
  depends_on:
    - zookeeper
  environment:
    PYTHON_ENV: ${DOCKER_ENV}
I discovered what is happening.
In my docker-compose file I was setting a volume that was overwriting my path. This happened because my WORKDIR environment variable is set to /usr/src/app, the same path I set in my Dockerfile, but through the docker-compose volume that path was being mirrored onto my api-recorder-python folder structure.
So the only thing I did was change the volume to ${BASE_DIR}api-recorder-python/src:${WORKDIR}, change my WORKDIR to /usr/src, and everything worked fine.
Here:
api-recorder:
  image: img-api-recorder:latest
  build:
    context: ../api-recorder-python/
    dockerfile: Dockerfile.${DOCKER_ENV}
  deploy:
    mode: replicated
    replicas: 1
    restart_policy:
      condition: on-failure
    placement:
      constraints: [node.role == manager]
  volumes:
    - ${BASE_DIR}api-recorder-python/src:${WORKDIR}
  depends_on:
    - zookeeper
  environment:
    PYTHON_ENV: ${DOCKER_ENV}
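One thing worth keeping in mind with this pattern: docker stack deploy never builds images, so with both image: and build: present the image has to be built first (and pushed to a registry on a multi-node swarm). A rough sketch, with a hypothetical stack name:
# build img-api-recorder:latest from Dockerfile.${DOCKER_ENV}
docker-compose build api-recorder
# then (re)deploy the stack; "mystack" is a placeholder
docker stack deploy -c docker-compose.yml mystack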

Flux V2 not pushing new image version to git repo

I've upgraded from Flux V1 to V2. It all went fairly smoothly, but I can't seem to get the ImageUpdateAutomation to work. Flux knows I have images to update, but it doesn't change the container image in the deployment.yaml manifest and commit the changes to GitHub. I have no errors in my logs, so I'm at a bit of a loss as to what to do next.
I have a file structure that looks something like this:
├── README.md
├── staging
│   ├── api
│   │   ├── deployment.yaml
│   │   ├── automation.yaml
│   │   └── service.yaml
│   ├── app
│   │   ├── deployment.yaml
│   │   ├── automation.yaml
│   │   └── service.yaml
│   ├── flux-system
│   │   ├── gotk-components.yaml
│   │   ├── gotk-sync.yaml
│   │   └── kustomization.yaml
│   ├── image_update_automation.yaml
My staging/api/automation.yaml is pretty straightforward:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: api
  namespace: flux-system
spec:
  image: xxx/api
  interval: 1m0s
  secretRef:
    name: dockerhub
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: api
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: api
  policy:
    semver:
      range: ">=1.0.0"
My staging/image_update_automation.yaml looks something like this:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  git:
    checkout:
      ref:
        branch: master
    commit:
      author:
        email: fluxcdbot@users.noreply.github.com
        name: fluxcdbot
      messageTemplate: '{{range .Updated.Images}}{{println .}}{{end}}'
    push:
      branch: master
  interval: 1m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  update:
    path: ./staging
    strategy: Setters
Everything seems to be ok here:
❯ flux get image repository
NAME   READY   MESSAGE                          LAST SCAN                   SUSPENDED
api    True    successful scan, found 23 tags   2021-07-28T17:11:02-06:00   False
app    True    successful scan, found 18 tags   2021-07-28T17:11:02-06:00   False

❯ flux get image policy
NAME   READY   MESSAGE                                             LATEST IMAGE
api    True    Latest image tag for 'xxx/api' resolved to: 1.0.1   xxx/api:1.0.1
app    True    Latest image tag for 'xxx/app' resolved to: 3.2.1   xxx/app:3.2.1
As you can see from the policy output, the LATEST IMAGE for api is 1.0.1; however, when I view the currently deployed versions of my app and api, they have not been updated.
kubectl get deployment api -n xxx -o json | jq '.spec.template.spec.containers[0].image'
"xxx/api:0.1.5"
Any advice on this would be much appreciated.
My issue was that I didn't add the marker comment after the image declaration in my deployment YAML. More details. Honestly, I'm surprised this is not an annotation instead of a comment.
spec:
  containers:
    - image: docker.io/xxx/api:0.1.5 # {"$imagepolicy": "flux-system:api"}
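With the marker in place, the automation can be nudged and inspected with standard flux CLI commands (nothing repo-specific here):
# trigger the ImageUpdateAutomation immediately instead of waiting for the interval
flux reconcile image update flux-system
# check when the automation last ran and whether it is Ready
flux get image update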

Multiple dockerised PHP-FPM and Nginx applications

My goal is to get multiple PHP services running. So that I can reuse the same framework, I copy the code from Framework into each service (1 & 2).
tree
.
├── Framework
│   └── frw.class.php
├── CodeService1
│   └── index.php (requires frw.class.php)
├── CodeService2
│   └── index.php (requires frw.class.php)
├── docker-compose.yml
└── nginx
    └── conf
        └── myapp.conf
version: '2'

services:
  phpfpm:
    image: 'bitnami/php-fpm:8.0.2'
    container_name: project1
    networks:
      - app-tier
    volumes:
      - ./Framework:/app
      - ./CodeService1:/app/service1
  service2:
    image: 'bitnami/php-fpm:8.0.2'
    container_name: Service1
    networks:
      - app-tier
    volumes:
      - ./Framework:/app
      - ./CodeService2:/app/service2
  nginx:
    image: 'bitnami/nginx:latest'
    depends_on:
      - phpfpm
      - service2
    networks:
      - app-tier
    ports:
      - '80:8080'
      - '443:8443'
    volumes:
      - ./nginx/conf/myapp.conf:/opt/bitnami/nginx/conf/server_blocks/myapp.conf

networks:
  app-tier:
    driver: bridge
Currently the index files look like this:
CodeService1\index.php
<?php declare (strict_types = 1);
echo ("Service1</br>");
CodeService2\index.php
<?php declare (strict_types = 1);
echo ("Service2</br>");
But this doesn't work. I also tried to move the service creation (image and file copying) into separate Dockerfiles, but that doesn't run either.
I call localhost/service1 or localhost/service2.
Thanks a lot.
Most probably, in your nginx vhost you set the upstream to phpfpm:
set $upstream phpfpm;
and that's why only CodeService1 is resolved.
You can set the upstream conditionally, e.g.:
# default to codeservice1
set $upstream phpfpm:9000;
# if the request is for service2, resolve via service2
if ($request_uri ~ "(^/service2)") {
    set $upstream service2:9000;
}
fastcgi_pass $upstream;
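To put that in context, here is a rough sketch of what nginx/conf/myapp.conf as a whole could look like; the listen port matches the bitnami port mapping in the compose file above, while the root path and fastcgi parameters are assumptions to adapt:
server {
    listen 8080;
    server_name localhost;

    # assumption: matches the /app volume mounts in the php-fpm containers
    root /app;
    index index.php;

    # default to CodeService1's php-fpm
    set $upstream phpfpm:9000;
    # route /service2 requests to the second php-fpm container
    if ($request_uri ~ "(^/service2)") {
        set $upstream service2:9000;
    }

    location ~ \.php$ {
        fastcgi_pass $upstream;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}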

Unable to read files using common library

I have an issue when trying to use a file from a library chart. Helm fails when I try to access a file from the library.
I have followed the example from the library_charts documentation.
Everything is the same as the documentation except two parts:
I have added the file mylibchart/files/foo.conf and this file is referenced in mylibchart/templates/_configmap.yaml's data key (in the documentation, data is an empty object):
├── mychart
│   ├── Chart.yaml
│   └── templates
│       └── configmap.yaml
└── mylibchart
    ├── Chart.yaml
    ├── files
    │   └── foo.conf
    └── templates
        ├── _configmap.yaml
        └── _util.yaml
cat mylibchart/templates/_configmap.yaml
{{- define "mylibchart.configmap.tpl" -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name | printf "%s-%s" .Chart.Name }}
data:
  fromlib: yes
  {{ (.Files.Glob "files/foo.conf").AsConfig | nindent 2 }}
{{- end -}}

{{- define "mylibchart.configmap" -}}
{{- include "mylibchart.util.merge" (append . "mylibchart.configmap.tpl") -}}
{{- end -}}
This error is caused by the fact that mychart/files/foo.conf does not exist. If I create it, Helm no longer crashes, but the ConfigMap contains mychart/files/foo.conf's content instead of mylibchart/files/foo.conf's content.
The file foo.conf does exist inside the archive generated by "helm dependency update" (mychart/charts/mylibchart-0.1.0.tgz).
How can I use that file from the .tgz file?
You can easy reproduce the issue by cloning the project: https://github.com/florentvaldelievre/helm-issue
Helm version:
version.BuildInfo{Version:"v3.2.3", GitCommit:"8f832046e258e2cb800894579b1b3b50c2d83492", GitTreeState:"clean", GoVersion:"go1.13.12"}