How to share files between containers in the same Kubernetes pod

I have two containers, one running php-fpm and the other running nginx. I'd like the container running nginx to access the webroot files, which contain the CSS and JavaScript files. Right now nginx successfully passes PHP requests off to php-fpm, but no styles show up when the webpage is rendered, for instance.
This is running on minikube on a Linux system.
kind: ConfigMap
apiVersion: v1
metadata:
name: php-ini
namespace: mixerapi-docker
data:
php.ini: |
extension=intl.so
extension=pdo_mysql.so
extension=sodium
extension=zip.so
zend_extension=opcache.so
[php]
session.auto_start = Off
short_open_tag = Off
opcache.interned_strings_buffer = 16
opcache.max_accelerated_files = 20000
opcache.memory_consumption = 256
realpath_cache_size = 4096K
realpath_cache_ttl = 600
expose_php = off
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-conf
namespace: mixerapi-docker
data:
default.conf: |-
server {
listen 80;
root /srv/app/webroot/;
index index.php;
location / {
try_files $uri /index.php?$args;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
include fastcgi_params;
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: php-nginx-deployment
namespace: mixerapi-docker
labels:
app: php-nginx
spec:
replicas: 1
selector:
matchLabels:
app: php-nginx
template:
metadata:
labels:
app: php-nginx
spec:
containers:
- name: php
image: mixerapidev/demo:latest
imagePullPolicy: Always
ports:
- containerPort: 9000
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: php-secret
key: database-url
- name: SECURITY_SALT
valueFrom:
secretKeyRef:
name: php-secret
key: cakephp-salt
volumeMounts:
- name: php-ini
mountPath: /usr/local/etc/php/conf.d
- name: php-application
mountPath: /application
- name: nginx
image: nginx:1.19-alpine
ports:
- containerPort: 80
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/conf.d
- name: php-application
mountPath: /application
volumes:
- name: php-ini
configMap:
name: php-ini
- name: php-application
persistentVolumeClaim:
claimName: php-application-pv-claim
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: default.conf
path: default.conf
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: php-application-pv-claim
namespace: mixerapi-docker
labels:
app: php-nginx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: mixerapi-docker
spec:
selector:
app: php-nginx
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30002
---
apiVersion: v1
kind: Service
metadata:
name: php
namespace: mixerapi-docker
spec:
selector:
app: php-nginx
ports:
- protocol: TCP
port: 9000
targetPort: 9000

I'm not sure if this is the best approach, but I based it on this:
How to share files between containers in the same kubernetes pod
kind: ConfigMap
apiVersion: v1
metadata:
name: php-ini
namespace: mixerapi-docker
data:
php.ini: |
extension=intl.so
extension=pdo_mysql.so
extension=sodium
extension=zip.so
zend_extension=opcache.so
[php]
session.auto_start = Off
short_open_tag = Off
opcache.interned_strings_buffer = 16
opcache.max_accelerated_files = 20000
opcache.memory_consumption = 256
realpath_cache_size = 4096K
realpath_cache_ttl = 600
expose_php = off
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-conf
namespace: mixerapi-docker
data:
default.conf: |-
server {
listen 80;
root /application/webroot/;
index index.php;
location / {
try_files $uri /index.php?$args;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
include fastcgi_params;
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: php-nginx-deployment
namespace: mixerapi-docker
labels:
app: php-nginx
spec:
replicas: 1
selector:
matchLabels:
app: php-nginx
template:
metadata:
labels:
app: php-nginx
spec:
containers:
- name: php
image: mixerapidev/demo:latest
imagePullPolicy: Always
ports:
- containerPort: 9000
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: php-secret
key: database-url
- name: SECURITY_SALT
valueFrom:
secretKeyRef:
name: php-secret
key: cakephp-salt
volumeMounts:
- name: php-ini
mountPath: /usr/local/etc/php/conf.d
- name: application
mountPath: /application
lifecycle:
postStart:
exec:
command:
- "/bin/sh"
- "-c"
- >
cp -r /srv/app/. /application/.
- name: nginx
image: nginx:1.19-alpine
ports:
- containerPort: 80
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/conf.d
- name: application
mountPath: /application
volumes:
- name: php-ini
configMap:
name: php-ini
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: default.conf
path: default.conf
- name: application
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: mixerapi-docker
spec:
selector:
app: php-nginx
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30002
---
apiVersion: v1
kind: Service
metadata:
name: php
namespace: mixerapi-docker
spec:
selector:
app: php-nginx
ports:
- protocol: TCP
port: 9000
targetPort: 9000
I had composer install running in a Docker entrypoint, so I needed to move it into the Dockerfile for this to work. I updated my Dockerfile with the following and removed the install from my entrypoint when the ENV is prod:
RUN if [[ "$ENV" = "prod" ]]; then \
composer install --prefer-dist --no-interaction --no-dev; \
fi
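To sanity-check that the shared emptyDir actually ends up with the application files in the nginx container, something like this should work once the pod is running (the namespace, deployment, and container names are the ones from the manifest above):
kubectl -n mixerapi-docker exec deploy/php-nginx-deployment -c nginx -- ls /application/webroot
If the CSS and JavaScript files show up there, the postStart copy worked and nginx can serve them as static files.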

Related

Kubernetes init container hanging (Init container is running but not ready)

I am facing a weird issue in a Kubernetes YAML file with initContainers. It shows that my initContainer is running successfully, but it stays in a not-ready state and remains that way forever. There are no errors in the initContainer logs, and the logs show a successful result. Am I missing anything?
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: graphql-engine
name: graphql-engine
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: graphql-engine
strategy: {}
template:
metadata:
labels:
io.kompose.service: graphql-engine
spec:
initContainers:
# GraphQl
- env:
- name: HASURA_GRAPHQL_ADMIN_SECRET
value: devsecret
- name: HASURA_GRAPHQL_DATABASE_URL
value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
- name: HASURA_GRAPHQL_ENABLE_CONSOLE
value: "true"
- name: HASURA_GRAPHQL_JWT_SECRET
value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
- name: HASURA_GRAPHQL_LOG_LEVEL
value: debug
- name: HASURA_GRAPHQL_UNAUTHORIZED_ROLE
value: public
- name: PVX_MLCRAFT_ACTIONS_URL
value: http://pvx-mlcraft-actions:3010
image: hasura/graphql-engine:v2.10.1
name: graphql-engine
ports:
- containerPort: 8080
resources: {}
restartPolicy: Always
containers:
- env:
- name: AUTH_CLIENT_URL
value: http://localhost:3000
- name: AUTH_EMAIL_PASSWORDLESS_ENABLED
value: "true"
- name: AUTH_HOST
value: 0.0.0.0
- name: AUTH_LOG_LEVEL
value: debug
- name: AUTH_PORT
value: "4000"
- name: AUTH_SMTP_HOST
value: smtp.gmail.com
- name: AUTH_SMTP_PASS
value: fahkbhcedmwolqzp
- name: AUTH_SMTP_PORT
value: "587"
- name: AUTH_SMTP_SENDER
value: noreplaypivoxnotifications@gmail.com
- name: AUTH_SMTP_USER
value: noreplaypivoxnotifications@gmail.com
- name: AUTH_WEBAUTHN_RP_NAME
value: Nhost App
- name: HASURA_GRAPHQL_ADMIN_SECRET
value: devsecret
- name: HASURA_GRAPHQL_DATABASE_URL
value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
- name: HASURA_GRAPHQL_GRAPHQL_URL
value: http://graphql-engine:8080/v1/graphql
- name: HASURA_GRAPHQL_JWT_SECRET
value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
- name: POSTGRES_PASSWORD
value: postgres
image: nhost/hasura-auth:latest
name: auth
ports:
- containerPort: 4000
resources: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: graphql-engine
name: graphql-engine
spec:
type: LoadBalancer
ports:
- name: "8080"
port: 8080
targetPort: 8080
selector:
io.kompose.service: graphql-engine
status:
loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: graphql-engine
name: auth
spec:
ports:
- name: "4000"
port: 4000
targetPort: 4000
selector:
io.kompose.service: graphql-engine
status:
loadBalancer: {}
I expected the init container to reach the ready state.
The Status field of the initContainer is not relevant here. What you need is for your initContainer to be deterministic. Currently your initContainer keeps running, because the image it uses is built to run indefinitely.
Init containers need to be built so that they run their process and then exit with exit code 0. graphql-engine, on the other hand, is a container that runs indefinitely and provides an API.
What are you trying to accomplish with this graphql-engine pod?
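For illustration, a minimal sketch of an init container that does terminate: it waits for a dependency to answer on a TCP port and then exits with code 0 so the main containers can start (the busybox image and the Postgres host/port are assumptions taken from the database URL above, not a recommendation):
initContainers:
- name: wait-for-postgres
  image: busybox:1.36
  command:
  - sh
  - -c
  # keep probing until the port answers, then exit 0
  - until nc -z 10.192.250.55 5432; do echo waiting for postgres; sleep 2; done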

Failed to connect mongo-express to mongoDb in k8s

I configured MongoDB with a username and password, and deployed MongoDB and mongo-express.
The problem is that I'm getting the following error in the mongo-express logs:
Could not connect to database using connectionString: mongodb://username:password@mongodb://lc-mongodb-service:27017:27017/"
I can see that the connection string contains the 27017 port twice, and also "mongodb://" in the middle, which should not be there.
This is my mongo-express deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: lc-mongo-express
labels:
app: lc-mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: lc-mongo-express
template:
metadata:
labels:
app: lc-mongo-express
spec:
containers:
- name: lc-mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: lc-configmap
key: DATABASE_URL
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_USERNAME
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_PASSWORD
---
apiVersion: v1
kind: Service
metadata:
name: lc-mongo-express-service
spec:
selector:
app: lc-mongo-express
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
And my MongoDB deployment:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lc-mongodb-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeMode: Filesystem
storageClassName: gp2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: lc-mongodb
labels:
app: lc-mongodb
spec:
replicas: 1
serviceName: lc-mongodb-service
selector:
matchLabels:
app: lc-mongodb
template:
metadata:
labels:
app: lc-mongodb
spec:
volumes:
- name: lc-mongodb-storage
persistentVolumeClaim:
claimName: lc-mongodb-pvc
containers:
- name: lc-mongodb
image: "mongo"
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_USERNAME
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_PASSWORD
command:
- mongod
- --auth
volumeMounts:
- mountPath: '/data/db'
name: lc-mongodb-storage
---
apiVersion: v1
kind: Service
metadata:
name: lc-mongodb-service
labels:
name: lc-mongodb
spec:
selector:
app: lc-mongodb
ports:
- protocol: TCP
port: 27017
targetPort: 27017
What am I doing wrong?
Your connection string format is wrong.
You should be trying something like:
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
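For the Service and credentials defined in the question, that would look roughly like this (authSource=admin assumes the root user was created in the admin database, which is what MONGO_INITDB_ROOT_USERNAME does by default):
mongodb://<username>:<password>@lc-mongodb-service:27017/<database>?authSource=admin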
Now suppose you are using Node.js:
const MongoClient = require('mongodb').MongoClient;
const uri = "mongodb+srv://<username>:<password>@<Mongo service Name>/<Database name>?retryWrites=true&w=majority";
const client = new MongoClient(uri, { useNewUrlParser: true });
client.connect(err => {
// creating collection
const collection = client.db("test").collection("devices");
// perform actions on the collection object
client.close();
});
Also, you are missing the DB path args: ["--dbpath","/data/db"] in the command when using the PVC and configuring the path:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mongo
name: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mongo
spec:
containers:
- image: mongo
name: mongo
args: ["--dbpath","/data/db"]
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-creds
key: username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-creds
key: password
volumeMounts:
- name: "mongo-data-dir"
mountPath: "/data/db"
volumes:
- name: "mongo-data-dir"
persistentVolumeClaim:
claimName: "pvc"

not load secret in k8s

I am learning to use k8s and I have a problem. I have been able to perform several deployments with the same YAML without problems. My problem is that when I mount the secret volume, it loads the directory with the variables, but they are not detected as environment variables.
My Secret:
apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
- name: secret-volumen
mountPath: /etc/secret/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
- name: secret-volumen
secret:
secretName: authentications-sercret
> microservice@1.0.0 start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: undefined, <-- not loaded
PASSWORD: undefined, <-- not loaded
HOST: 'db-service',
PORT: '5432'
}
If I add them manually, it recognizes them:
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_PASSWORD
> microservice@1.0.0 start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: 'insertmendoza', <-- work
PASSWORD: 'jKNP9ZdtELMm6K5', <-- work
HOST: 'db-service',
PORT: '5432'
}
listening queue
listening on *:8000
The files do exist in the directory where I mounted the secrets:
/etc/secret # ls
DB_PASSWORD DB_USERNAME SECRET_KEY TOKEN_EXPIRES_IN
/etc/secret # cat DB_PASSWORD
jKNP9ZdtELMm6K5/etc/secret #
EDIT
My quick solution is:
envFrom:
- configMapRef:
name: authentications-config
- secretRef: <<--
name: authentications-sercret <<--
I hope this helps you. Greetings from Argentina, Insert Mendoza.
If I understand the problem correctly, you aren't getting the secrets loaded into the environment. It looks like you're loading them incorrectly; use the envFrom form as documented here.
Using your example, it would be:
apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
- secretRef:
name: authentications-sercret
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
Note that the secret volume and its mount were removed; you just add the secretRef section. Those keys should now be exported as environment variables in your pod.
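A quick way to confirm the variables actually show up inside the container (assuming the image ships a shell; the namespace and deployment names are the ones from the manifest above):
kubectl -n insertmendoza exec deploy/sarys-authentications -- sh -c 'env | grep DB_'
You should see DB_USERNAME and DB_PASSWORD in the output.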

Trying to convert from AWS classic load balancer to application load balancer in Amazon EKS

I have everything working using a classic load balancer. I would now like to update my Kubernetes environment to use an application load balancer instead of the classic load balancer. I have tried a few tutorials but no luck so far. I keep getting 503 errors after I deploy.
I brought my cluster up with eksctl then installed and ran the sample application in this tutorial https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html. I did get an ALB up and all worked as it should with the sample application outlined in the tutorial. I tried modifying the YAML for my environment to use an ALB and keep getting 503 errors. I am not sure what to try next.
I suspect my issue might be that I have Nginx and my application in the same pod (which I would like to keep if possible).
Here is the YAML for my application that I updated to try to get the ALB working:
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-config
data:
nginx.conf: |
events {
}
http {
include /etc/nginx/mime.types;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name test-ggg.com www.test-ggg.com;
if ($http_x_forwarded_proto = "http") {
return 301 https://$server_name$request_uri;
}
root /var/www/html;
index index.php index.html;
location static {
alias /var/www/html;
}
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location / {
try_files $uri $uri/ /index.php?$query_string;
gzip_static on;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
labels:
name: deployment
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 2
selector:
matchLabels:
name: templated-pod
template:
metadata:
name: deployment-template
labels:
name: templated-pod
spec:
volumes:
- name: app-files
emptyDir: {}
- name: nginx-config-volume
configMap:
name: nginx-config
containers:
- image: xxxxxxx.dkr.ecr.us-east-2.amazonaws.com/test:4713
name: app
volumeMounts:
- name: app-files
mountPath: /var/www/html
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "cp -r /var/www/public/. /var/www/html"]
resources:
limits:
cpu: 100m
requests:
cpu: 50m
- image: nginx:alpine
name: nginx
volumeMounts:
- name: app-files
mountPath: /var/www/html
- name: nginx-config-volume
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
resources:
limits:
cpu: 100m
requests:
cpu: 50m
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: "service-alb"
namespace: default
annotations:
alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=45
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '5'
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '2'
alb.ingress.kubernetes.io/healthy-threshold-count: '2'
alb.ingress.kubernetes.io/unhealthy-threshold-count: '3'
spec:
ports:
- port: 80
targetPort: 80
name: http
type: NodePort
selector:
app: templated-pod
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:dddddddd:certificate/f61c2837-484c-ddddddddd-bab7c4d4452c
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
labels:
app: app-ingress
spec:
rules:
- host: test-ggg.com
http:
paths:
- backend:
serviceName: "service-alb"
servicePort: 80
path: /*
Here is the YAML with the classic load balancer. Everything works when I use this:
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-config
data:
nginx.conf: |
events {
}
http {
include /etc/nginx/mime.types;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name test-ggg.com www.test-ggg.com;
if ($http_x_forwarded_proto = "http") {
return 301 https://$server_name$request_uri;
}
root /var/www/html;
index index.php index.html;
location static {
alias /var/www/html;
}
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location / {
try_files $uri $uri/ /index.php?$query_string;
gzip_static on;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
labels:
name: deployment
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 2
selector:
matchLabels:
name: templated-pod
template:
metadata:
name: deployment-template
labels:
name: templated-pod
spec:
volumes:
- name: app-files
emptyDir: {}
- name: nginx-config-volume
configMap:
name: nginx-config
containers:
- image: 99ddddddddd.dkr.ecr.us-east-2.amazonaws.com/test:4713
name: app
volumeMounts:
- name: app-files
mountPath: /var/www/html
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "cp -r /var/www/public/. /var/www/html"]
resources:
limits:
cpu: 100m
requests:
cpu: 50m
- image: nginx:alpine
name: nginx
volumeMounts:
- name: app-files
mountPath: /var/www/html
- name: nginx-config-volume
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
resources:
limits:
cpu: 100m
requests:
cpu: 50m
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: service-loadbalancer
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:dddddddd:certificate/f61c2837-484c-4fac-a26c-dddddddd4452c
spec:
selector:
name: templated-pod
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 80
type: LoadBalancer
After some tutorial help I learned more about services, selectors and pod naming!
(Great tutorial - https://www.youtube.com/watch?v=sGZx3OjMPQI)
I had the pod labeled with "name: templated-pod",
but the selector in the Service was looking for:
selector:
app: templated-pod
It could not make the connection.
I changed the selector to the following, and it worked:
selector:
name: templated-pod
Hope this helps others!!
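A quick way to catch this kind of label/selector mismatch is to check whether the Service has any endpoints at all (service and ingress names are the ones from the manifests above):
kubectl get endpoints service-alb
kubectl describe ingress app-ingress
If the ENDPOINTS column is empty, the Service's selector doesn't match any pod labels, the ALB target group stays empty, and requests come back as 503s.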

consul StatefulSet failing

I am trying to deploy Consul using a Kubernetes StatefulSet with the following manifest:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: consul
labels:
app: consul
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: consul
subjects:
- kind: ServiceAccount
name: consul
namespace: dev-ethernet
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul
namespace: dev-ethernet
labels:
app: consul
---
apiVersion: v1
kind: Secret
metadata:
name: consul-secret
namespace: dev-ethernet
data:
consul-gossip-encryption-key: "aIRpNkHT/8Tkvf757sj2m5AcRlorWNgzcLI4yLEMx7M="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: consul-config
namespace: dev-ethernet
data:
server.json: |
{
"bind_addr": "0.0.0.0",
"client_addr": "0.0.0.0",
"disable_host_node_id": true,
"data_dir": "/consul/data",
"log_level": "INFO",
"datacenter": "us-west-2",
"domain": "cluster.local",
"ports": {
"http": 8500
},
"retry_join": [
"provider=k8s label_selector=\"app=consul,component=server\""
],
"server": true,
"telemetry": {
"prometheus_retention_time": "5m"
},
"ui": true
}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
namespace: dev-ethernet
spec:
selector:
matchLabels:
app: consul
component: server
serviceName: consul
podManagementPolicy: Parallel
replicas: 3
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
template:
metadata:
labels:
app: consul
component: server
annotations:
consul.hashicorp.com/connect-inject: "false"
spec:
serviceAccountName: consul
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
securityContext:
fsGroup: 1000
containers:
- name: consul
image: "consul:1.8"
args:
- "agent"
- "-advertise=$(POD_IP)"
- "-bootstrap-expect=3"
- "-config-file=/etc/consul/config/server.json"
- "-encrypt=$(GOSSIP_ENCRYPTION_KEY)"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: GOSSIP_ENCRYPTION_KEY
valueFrom:
secretKeyRef:
name: consul-secret
key: consul-gossip-encryption-key
volumeMounts:
- name: data
mountPath: /consul/data
- name: config
mountPath: /etc/consul/config
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave
ports:
- containerPort: 8500
name: ui-port
- containerPort: 8400
name: alt-port
- containerPort: 53
name: udp-port
- containerPort: 8080
name: http-port
- containerPort: 8301
name: serflan
- containerPort: 8302
name: serfwan
- containerPort: 8600
name: consuldns
- containerPort: 8300
name: server
volumes:
- name: config
configMap:
name: consul-config
volumeClaimTemplates:
- metadata:
name: data
labels:
app: consul
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: aws-gp2
resources:
requests:
storage: 3Gi
But I get encrypt has invalid key: illegal base64 data at input byte 1 when the container starts.
I generated consul-gossip-encryption-key locally using docker run -i -t consul keygen.
Does anyone know what's wrong here?
secret.data must be a base64-encoded string.
Try:
kubectl create secret generic consul-gossip-encryption-key --from-literal=key="$(docker run -i -t consul keygen)" --dry-run -o=yaml
and replace
apiVersion: v1
kind: Secret
metadata:
name: consul-secret
namespace: dev-ethernet
data:
consul-gossip-encryption-key: "aIRpNkHT/8Tkvf757sj2m5AcRlorWNgzcLI4yLEMx7M="
ref: https://www.consul.io/docs/k8s/helm#v-global-gossipencryption
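Equivalently, a sketch of the Secret using stringData, which lets Kubernetes do the base64 encoding for you, so the raw consul keygen output can be pasted in directly (the value below is a placeholder, not a real key):
apiVersion: v1
kind: Secret
metadata:
  name: consul-secret
  namespace: dev-ethernet
type: Opaque
stringData:
  # raw `consul keygen` output; Kubernetes base64-encodes it into .data when the Secret is created
  consul-gossip-encryption-key: "<output of docker run -i -t consul keygen>"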