I configured MongoDB with a username and password, and deployed MongoDB and mongo-express.
The problem is that I'm getting the following error in the mongo-express logs:
Could not connect to database using connectionString: mongodb://username:password@mongodb://lc-mongodb-service:27017:27017/"
I can see that the connection string contains the 27017 port twice, and also a "mongodb://" in the middle that should not be there.
This is my mongo-express deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: lc-mongo-express
labels:
app: lc-mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: lc-mongo-express
template:
metadata:
labels:
app: lc-mongo-express
spec:
containers:
- name: lc-mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: lc-configmap
key: DATABASE_URL
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_USERNAME
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_PASSWORD
---
apiVersion: v1
kind: Service
metadata:
name: lc-mongo-express-service
spec:
selector:
app: lc-mongo-express
type: LoadBalancer
ports:
- protocol: TCP
port: 8081
targetPort: 8081
And my MongoDB deployment:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lc-mongodb-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeMode: Filesystem
storageClassName: gp2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: lc-mongodb
labels:
app: lc-mongodb
spec:
replicas: 1
serviceName: lc-mongodb-service
selector:
matchLabels:
app: lc-mongodb
template:
metadata:
labels:
app: lc-mongodb
spec:
volumes:
- name: lc-mongodb-storage
persistentVolumeClaim:
claimName: lc-mongodb-pvc
containers:
- name: lc-mongodb
image: "mongo"
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_USERNAME
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: lc-secret
key: MONGO_ROOT_PASSWORD
command:
- mongod
- --auth
volumeMounts:
- mountPath: '/data/db'
name: lc-mongodb-storage
---
apiVersion: v1
kind: Service
metadata:
name: lc-mongodb-service
labels:
name: lc-mongodb
spec:
selector:
app: lc-mongodb
ports:
- protocol: TCP
port: 27017
targetPort: 27017
What am I doing wrong?
Your connection string format is wrong.
You should be using something like:
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
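Applied to your setup, the assembled string should look something like this (assuming the default admin authentication database; adjust to whatever database you actually use):
mongodb://username:password@lc-mongodb-service:27017/admin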
For example, if you are using Node.js:
const MongoClient = require('mongodb').MongoClient;
const uri = "mongodb+srv://<username>:<password>@<Mongo service Name>/<Database name>?retryWrites=true&w=majority";
const client = new MongoClient(uri, { useNewUrlParser: true });
client.connect(err => {
// creating collection
const collection = client.db("test").collection("devices");
// perform actions on the collection object
client.close();
});
You are also missing the db path (args: ["--dbpath","/data/db"]) in the command when using a PVC and configuring the data path:
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mongo
name: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mongo
spec:
containers:
- image: mongo
name: mongo
args: ["--dbpath","/data/db"]
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-creds
key: username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-creds
key: password
volumeMounts:
- name: "mongo-data-dir"
mountPath: "/data/db"
volumes:
- name: "mongo-data-dir"
persistentVolumeClaim:
claimName: "pvc"
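As a side note on where the malformed string in your log likely comes from: mongo-express builds its connection string from ME_CONFIG_MONGODB_SERVER, the port, and the admin credentials, so if the configmap key you feed into ME_CONFIG_MONGODB_SERVER already contains a full mongodb:// URL, you end up with the doubled scheme and port you are seeing. A minimal sketch of the env section, assuming a configmap key that holds only the hostname (the DATABASE_HOST key below is hypothetical):
env:
  - name: ME_CONFIG_MONGODB_SERVER        # hostname only, e.g. lc-mongodb-service
    valueFrom:
      configMapKeyRef:
        name: lc-configmap
        key: DATABASE_HOST                # hypothetical key containing just the service name
  - name: ME_CONFIG_MONGODB_PORT
    value: "27017"
  - name: ME_CONFIG_MONGODB_ADMINUSERNAME
    valueFrom:
      secretKeyRef:
        name: lc-secret
        key: MONGO_ROOT_USERNAME
  - name: ME_CONFIG_MONGODB_ADMINPASSWORD
    valueFrom:
      secretKeyRef:
        name: lc-secret
        key: MONGO_ROOT_PASSWORD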
I am facing a weird issue in a Kubernetes YAML file with initContainers. It shows that my initContainer is running successfully, but it is not in a ready state and stays that way forever. There are no errors in the initContainer logs, and the logs show a successful result. Am I missing anything?
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
io.kompose.service: graphql-engine
name: graphql-engine
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: graphql-engine
strategy: {}
template:
metadata:
labels:
io.kompose.service: graphql-engine
spec:
initContainers:
# GraphQl
- env:
- name: HASURA_GRAPHQL_ADMIN_SECRET
value: devsecret
- name: HASURA_GRAPHQL_DATABASE_URL
value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
- name: HASURA_GRAPHQL_ENABLE_CONSOLE
value: "true"
- name: HASURA_GRAPHQL_JWT_SECRET
value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
- name: HASURA_GRAPHQL_LOG_LEVEL
value: debug
- name: HASURA_GRAPHQL_UNAUTHORIZED_ROLE
value: public
- name: PVX_MLCRAFT_ACTIONS_URL
value: http://pvx-mlcraft-actions:3010
image: hasura/graphql-engine:v2.10.1
name: graphql-engine
ports:
- containerPort: 8080
resources: {}
restartPolicy: Always
containers:
- env:
- name: AUTH_CLIENT_URL
value: http://localhost:3000
- name: AUTH_EMAIL_PASSWORDLESS_ENABLED
value: "true"
- name: AUTH_HOST
value: 0.0.0.0
- name: AUTH_LOG_LEVEL
value: debug
- name: AUTH_PORT
value: "4000"
- name: AUTH_SMTP_HOST
value: smtp.gmail.com
- name: AUTH_SMTP_PASS
value: fahkbhcedmwolqzp
- name: AUTH_SMTP_PORT
value: "587"
- name: AUTH_SMTP_SENDER
value: noreplaypivoxnotifications@gmail.com
- name: AUTH_SMTP_USER
value: noreplaypivoxnotifications@gmail.com
- name: AUTH_WEBAUTHN_RP_NAME
value: Nhost App
- name: HASURA_GRAPHQL_ADMIN_SECRET
value: devsecret
- name: HASURA_GRAPHQL_DATABASE_URL
value: postgres://postgres:postgres@10.192.250.55:5432/zbt_mlcraft
- name: HASURA_GRAPHQL_GRAPHQL_URL
value: http://graphql-engine:8080/v1/graphql
- name: HASURA_GRAPHQL_JWT_SECRET
value: '{"type": "HS256", "key": "LGB6j3RkoVuOuqKzjgnCeq7vwfqBYJDw", "claims_namespace": "hasura"}'
- name: POSTGRES_PASSWORD
value: postgres
image: nhost/hasura-auth:latest
name: auth
ports:
- containerPort: 4000
resources: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: graphql-engine
name: graphql-engine
spec:
type: LoadBalancer
ports:
- name: "8080"
port: 8080
targetPort: 8080
selector:
io.kompose.service: graphql-engine
status:
loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: graphql-engine
name: auth
spec:
ports:
- name: "4000"
port: 4000
targetPort: 4000
selector:
io.kompose.service: graphql-engine
status:
loadBalancer: {}
I expect the init container to reach the ready state.
The Status field of the initContainer is not relevant here. What you need is for your initContainer to be deterministic. Currently your initContainer keeps running because the image it uses is built to run indefinitely.
Init containers need to be built so that they run their process and then exit with exit code 0. graphql-engine, on the other hand, is a container that runs indefinitely and provides an API.
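For illustration, an init container that runs to completion might look like this hypothetical busybox-based sketch, which waits for the Postgres address from your HASURA_GRAPHQL_DATABASE_URL to become reachable and then exits with code 0:
initContainers:
  - name: wait-for-postgres
    image: busybox:1.36
    # Poll the Postgres port; the loop ends (and the container exits 0) once it is reachable
    command:
      - sh
      - -c
      - |
        until nc -z -w 2 10.192.250.55 5432; do
          echo "waiting for postgres..."
          sleep 2
        done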
What are you trying to accomplish with this graphql-engine pod?
How do we configure Keycloak to use an external Postgres database (AWS RDS)?
We deployed it in Kubernetes using the Quarkus distribution and updated the DB env variables in our deployment.yaml; however, it is still using the local H2 database and not Postgres.
For better understanding, here is the deployment.yaml file we are using:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "5"
kubectl.kubernetes.io/last-applied-configuration: |
creationTimestamp: "2022-06-21T16:47:29Z"
generation: 5
labels:
app: keycloak
name: keycloak
namespace: kc***
resourceVersion: "29233550"
uid: 3634683e-657c-4278-9002-82a3ce64b968
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: keycloak
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: keycloak
spec:
containers:
- args:
- start
- --hostname=kc-test.k8.com
- --https-certificate-file=/opt/pem/cert-pem/cert.pem
- --https-certificate-key-file=/opt/pem/key-pem/key.pem
- --log-level=DEBUG
env:
- name: KEYCLOAK_ADMIN
value: ****
- name: KEYCLOAK_ADMIN_PASSWORD
value: *****
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: DB_ADDR
value: jdbc:postgresql://database.c**7irl*****.us-east-1.rds.amazonaws.com/database
- name: DB_DATABASE
value: ****
- name: DB_USER
value: postgres
- name: DB_SCHEMA
value: public
- name: DB_VENDOR
value: POSTGRES
- name: JGROUPS_DISCOVERY_PROTOCOL
value: dns.DNS_PING
- name: JGROUPS_DISCOVERY_PROPERTIES
value: dns_query=keycloak
- name: CACHE_OWNERS_COUNT
value: "2"
- name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
value: "2"
image: quay.io/keycloak/keycloak:17.0.0
imagePullPolicy: IfNotPresent
name: keycloak
ports:
- containerPort: 7600
name: jgroups
protocol: TCP
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 8443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /realms/master
port: 8443
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
resources: {}
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /opt/pem/key-pem
name: key-pem
- mountPath: /opt/pem/cert-pem
name: cert-pem
- mountPath: /opt/keycloak/data
name: keydata
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: key-pem
name: key-pem
- configMap:
defaultMode: 420
name: cert-pem
name: cert-pem
- emptyDir: {}
name: keydata
status:
availableReplicas: 3
conditions:
- lastTransitionTime: "2022-06-21T18:02:32Z"
lastUpdateTime: "2022-06-21T18:02:32Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-06-21T18:01:53Z"
lastUpdateTime: "2022-06-21T18:16:41Z"
message: ReplicaSet "keycloak-5c84476694" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 5
readyReplicas: 3
replicas: 3
updatedReplicas: 3
Is your external DB also in the same namespace?
If yes, you can use the approach below.
The Kubernetes secret for your external Postgres (AWS RDS), called database-secret-name here, contains all of the keys referenced below.
With this method the values are fetched dynamically from the secret.
env:
- name: DB_DATABASE
valueFrom:
secretKeyRef:
name: database-secret-name
key: dbname
- name: DB_ADDR
valueFrom:
secretKeyRef:
name: database-secret-name
key: host
- name: DB_PORT
valueFrom:
secretKeyRef:
name: database-secret-name
key: port
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: database-secret-name
key: password
- name: DB_USER
valueFrom:
secretKeyRef:
name: database-secret-name
key: user
If your external Postgres is in a different namespace, copy your database secret into the Keycloak namespace and give it a try.
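For reference, the database-secret-name secret used above would need to contain those keys; a hypothetical sketch (stringData used for readability, values are placeholders to fill in with your RDS details):
apiVersion: v1
kind: Secret
metadata:
  name: database-secret-name
type: Opaque
stringData:
  dbname: <database-name>
  host: <your-rds-endpoint>.us-east-1.rds.amazonaws.com
  port: "5432"
  user: postgres
  password: <db-password>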
DB_ADDR is an env variable for Keycloak versions 16 and below. Use the documentation for your Keycloak version: https://www.keycloak.org/server/all-config
Keycloak 17+ has KC_DB_URL:
db-url
The full database JDBC URL.
If not provided, a default URL is set based on the selected database vendor. For instance, if using 'postgres', the default JDBC URL would be 'jdbc:postgresql://localhost/keycloak'.
CLI: --db-url
Env: KC_DB_URL
Of course, also configure the other env variables properly for your Keycloak version.
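For example, with the Quarkus distribution the database settings could be wired up roughly like this (a sketch only; KC_DB, KC_DB_URL, KC_DB_USERNAME and KC_DB_PASSWORD are the documented options, while the secret name and placeholder values are assumptions):
env:
  - name: KC_DB
    value: postgres
  - name: KC_DB_URL
    value: jdbc:postgresql://<your-rds-endpoint>:5432/<database-name>
  - name: KC_DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: database-secret-name   # assumed secret holding the RDS credentials
        key: user
  - name: KC_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: database-secret-name
        key: password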
Issue
Hello, I want to set up Nextcloud with Postgres as the database on my Kubernetes cluster. However, when I define the env variables for Postgres in the Nextcloud deployment file, the Nextcloud pod can't connect to the Postgres pod through the service name. If anyone can give me suggestions on how to solve this, please do. Thank you in advance.
Error
Here is the result of the kubectl logs <pod-name> command for the Nextcloud pod:
Previous: Doctrine\DBAL\Exception: Failed to connect to the database: An exception occurred in the driver: SQLSTATE[08006] [7] could not translate host name "home-drive-db" to address: Temporary failure in name resolution
Trace: #0 /var/www/html/lib/private/Setup/PostgreSQL.php(98): OC\DB\Connection->connect()
#1 /var/www/html/lib/private/Setup.php(351): OC\Setup\PostgreSQL->setupDatabase('rafidini')
#2 /var/www/html/core/Command/Maintenance/Install.php(108): OC\Setup->install(Array)
#3 /var/www/html/3rdparty/symfony/console/Command/Command.php(255): OC\Core\Command\Maintenance\Install->execute(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#4 /var/www/html/3rdparty/symfony/console/Application.php(1009): Symfony\Component\Console\Command\Command->run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#5 /var/www/html/3rdparty/symfony/console/Application.php(273): Symfony\Component\Console\Application->doRunCommand(Object(OC\Core\Command\Maintenance\Install), Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#6 /var/www/html/3rdparty/symfony/console/Application.php(149): Symfony\Component\Console\Application->doRun(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#7 /var/www/html/lib/private/Console/Application.php(209): Symfony\Component\Console\Application->run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#8 /var/www/html/console.php(99): OC\Console\Application->run()
#9 /var/www/html/occ(11): require_once('/var/www/html/c...')
#10 {main}
retrying install...
Files
home-drive-db.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: home-drive-app
name: home-drive-db
spec:
replicas: 1
selector:
matchLabels:
pod-label: home-drive-db-pod
app: home-drive-app
template:
metadata:
labels:
pod-label: home-drive-db-pod
app: home-drive-app
spec:
containers:
- name: home-drive-db
image: postgres:alpine
resources:
limits:
memory: 512Mi
cpu: "1"
requests:
memory: 256Mi
cpu: "0.2"
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: nextcloud
- name: POSTGRES_USER
value: rafidini
- name: POSTGRES_PASSWORD
value: password1
volumeMounts:
- name: db-storage
mountPath: /var/lib/postgresl/data
subPath: postgres
volumes:
- name: db-storage
persistentVolumeClaim:
claimName: home-drive-shared-storage
---
apiVersion: v1
kind: Service
metadata:
labels:
app: home-drive-app
name: home-drive-db
spec:
type: ClusterIP
selector:
pod-label: home-drive-db-pod
app: home-drive-app
ports:
- port: 5432
protocol: TCP
targetPort: 5432
home-drive-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
appe: home-drive-app
name: home-drive-server
spec:
replicas: 1
selector:
matchLabels:
pod-label: home-drive-server-pod
app: home-drive-app
template:
metadata:
labels:
pod-label: home-drive-server-pod
app: home-drive-app
spec:
containers:
- image: nextcloud:production
ports:
- containerPort: 5432
env:
- name: POSTGRES_HOST
value: home-drive-db:5432
- name: POSTGRES_DB
value: nextcloud
- name: POSTGRES_USER
value: rafidini
- name: POSTGRES_PASSWORD
value: password1
- name: NEXTCLOUD_ADMIN_USER
value: rafidini
- name: NEXTCLOUD_ADMIN_PASSWORD
value: password1
resources:
limits:
memory: 1Gi
cpu: "1"
requests:
memory: 256Mi
cpu: "1"
name: home-drive-app
volumeMounts:
- name: server-storage
mountPath: /var/www/html
subPath: server-data
volumes:
- name: server-storage
persistentVolumeClaim:
claimName: home-drive-shared-storage
---
apiVersion: v1
kind: Service
metadata:
labels:
app: home-drive-app
name: home-drive-server
spec:
selector:
app: home-drive-app
pod-label: home-drive-server-pod
ports:
- protocol: TCP
port: 80
I am learning to use k8s and I have a problem. I have been able to perform several deployments with the same YAML without problems. My problem is that when I mount the secret volume, it loads the directory with the variables, but they are not detected as environment variables.
My secret:
apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
- name: secret-volumen
mountPath: /etc/secret/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
- name: secret-volumen
secret:
secretName: authentications-sercret
> microservice@1.0.0 start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: undefined, <-- not loaded
PASSWORD: undefined, <-- not loaded
HOST: 'db-service',
PORT: '5432'
}
If I add them manually, it does recognize them:
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_USERNAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: authentications-sercret
key: DB_PASSWORD
> microservice@1.0.0 start
> node dist/index.js
{
ENGINE: 'postgres',
NAME: 'insertmendoza',
USER: 'insertmendoza', <-- loaded
PASSWORD: 'jKNP9ZdtELMm6K5', <-- loaded
HOST: 'db-service',
PORT: '5432'
}
listening queue
listening on *:8000
The secrets do exist in the directory where I mounted them:
/etc/secret # ls
DB_PASSWORD DB_USERNAME SECRET_KEY TOKEN_EXPIRES_IN
/etc/secret # cat DB_PASSWORD
jKNP9ZdtELMm6K5/etc/secret #
EDIT
My quick fix was:
envFrom:
- configMapRef:
name: authentications-config
- secretRef: <<--
name: authentications-sercret <<--
I hope this helps you; greetings from Argentina. Insert Mendoza
If I understand the problem correctly, you aren't getting the secrets loaded into the environment. It looks like you're loading them incorrectly; use the envFrom form as documented here.
Using your example, it would be:
apiVersion: v1
kind: Secret
metadata:
namespace: insertmendoza
name: authentications-sercret
type: Opaque
data:
DB_USERNAME: aW5zZXJ0bWVuZG96YQ==
DB_PASSWORD: aktOUDlaZHRFTE1tNks1
TOKEN_EXPIRES_IN: ODQ2MDA=
SECRET_KEY: aXRzaXNzZWd1cmU=
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: insertmendoza
name: sarys-authentications
spec:
replicas: 1
selector:
matchLabels:
app: sarys-authentications
template:
metadata:
labels:
app: sarys-authentications
spec:
containers:
- name: sarys-authentications
image: 192.168.88.246:32000/custom:image
imagePullPolicy: Always
resources:
limits:
memory: "500Mi"
cpu: "50m"
ports:
- containerPort: 8000
envFrom:
- configMapRef:
name: authentications-config
- secretRef:
name: authentications-sercret
volumeMounts:
- name: config-volumen
mountPath: /etc/config/
readOnly: true
volumes:
- name: config-volumen
configMap:
name: authentications-config
Note that the volume and mount were removed and the secretRef section was added. Those keys should now be exported as environment variables in your pod.
I have two containers, one running php-fpm and the other running nginx. I'd like the container running nginx to access the webroot files, which include the CSS and JavaScript files. Right now, nginx successfully passes PHP requests off to php-fpm, but no styles show up when the webpage is rendered.
This is running off of minikube on a Linux system.
kind: ConfigMap
apiVersion: v1
metadata:
name: php-ini
namespace: mixerapi-docker
data:
php.ini: |
extension=intl.so
extension=pdo_mysql.so
extension=sodium
extension=zip.so
zend_extension=opcache.so
[php]
session.auto_start = Off
short_open_tag = Off
opcache.interned_strings_buffer = 16
opcache.max_accelerated_files = 20000
opcache.memory_consumption = 256
realpath_cache_size = 4096K
realpath_cache_ttl = 600
expose_php = off
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-conf
namespace: mixerapi-docker
data:
default.conf: |-
server {
listen 80;
root /srv/app/webroot/;
index index.php;
location / {
try_files $uri /index.php?$args;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
include fastcgi_params;
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: php-nginx-deployment
namespace: mixerapi-docker
labels:
app: php-nginx
spec:
replicas: 1
selector:
matchLabels:
app: php-nginx
template:
metadata:
labels:
app: php-nginx
spec:
containers:
- name: php
image: mixerapidev/demo:latest
imagePullPolicy: Always
ports:
- containerPort: 9000
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: php-secret
key: database-url
- name: SECURITY_SALT
valueFrom:
secretKeyRef:
name: php-secret
key: cakephp-salt
volumeMounts:
- name: php-ini
mountPath: /usr/local/etc/php/conf.d
- name: php-application
mountPath: /application
- name: nginx
image: nginx:1.19-alpine
ports:
- containerPort: 80
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/conf.d
- name: php-application
mountPath: /application
volumes:
- name: php-ini
configMap:
name: php-ini
- name: php-application
persistentVolumeClaim:
claimName: php-application-pv-claim
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: default.conf
path: default.conf
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: php-application-pv-claim
namespace: mixerapi-docker
labels:
app: php-nginx
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: mixerapi-docker
spec:
selector:
app: php-nginx
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30002
---
apiVersion: v1
kind: Service
metadata:
name: php
namespace: mixerapi-docker
spec:
selector:
app: php-nginx
ports:
- protocol: TCP
port: 9000
targetPort: 9000
I'm not sure if this is the best approach, but I based it on this:
How to share files between containers in the same kubernetes pod
kind: ConfigMap
apiVersion: v1
metadata:
name: php-ini
namespace: mixerapi-docker
data:
php.ini: |
extension=intl.so
extension=pdo_mysql.so
extension=sodium
extension=zip.so
zend_extension=opcache.so
[php]
session.auto_start = Off
short_open_tag = Off
opcache.interned_strings_buffer = 16
opcache.max_accelerated_files = 20000
opcache.memory_consumption = 256
realpath_cache_size = 4096K
realpath_cache_ttl = 600
expose_php = off
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-conf
namespace: mixerapi-docker
data:
default.conf: |-
server {
listen 80;
root /application/webroot/;
index index.php;
location / {
try_files $uri /index.php?$args;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
include fastcgi_params;
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: php-nginx-deployment
namespace: mixerapi-docker
labels:
app: php-nginx
spec:
replicas: 1
selector:
matchLabels:
app: php-nginx
template:
metadata:
labels:
app: php-nginx
spec:
containers:
- name: php
image: mixerapidev/demo:latest
imagePullPolicy: Always
ports:
- containerPort: 9000
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: php-secret
key: database-url
- name: SECURITY_SALT
valueFrom:
secretKeyRef:
name: php-secret
key: cakephp-salt
volumeMounts:
- name: php-ini
mountPath: /usr/local/etc/php/conf.d
- name: application
mountPath: /application
lifecycle:
postStart:
exec:
command:
- "/bin/sh"
- "-c"
- >
cp -r /srv/app/. /application/.
- name: nginx
image: nginx:1.19-alpine
ports:
- containerPort: 80
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/conf.d
- name: application
mountPath: /application
volumes:
- name: php-ini
configMap:
name: php-ini
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: default.conf
path: default.conf
- name: application
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: mixerapi-docker
spec:
selector:
app: php-nginx
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30002
---
apiVersion: v1
kind: Service
metadata:
name: php
namespace: mixerapi-docker
spec:
selector:
app: php-nginx
ports:
- protocol: TCP
port: 9000
targetPort: 9000
I had composer install running in a Docker entrypoint, so I needed to move it into the Dockerfile for this to work. I updated my Dockerfile with this and removed the install from my entrypoint when ENV is prod:
RUN if [[ "$ENV" = "prod" ]]; then \
composer install --prefer-dist --no-interaction --no-dev; \
fi