Kubernetes Executor with Proxy Settings

I am using the GitLab Runner Helm chart in a Kubernetes cluster and need to pass environment variables to my Kubernetes runner so that it can, for example, download content from the S3 cache. Unfortunately it does not work. Does anyone have a solution for me?
My values.yaml:
gitlabUrl: https://example.com
image: default-docker/gitlab-runner:alpine-v14.0.1
runnerRegistrationToken: XXXXXXXXXXXXX
imagePullPolicy: IfNotPresent
imagePullSecrets:
  - name: "k8runner-secret"
rbac:
  create: true
runners:
  config: |
    [[runners]]
      environment = ["http_proxy: http://webproxy.comp.db.de:8080", "https_proxy: http://webproxy.comp.db.de:8080", "no_proxy: \"localhost\""]
      [runners.kubernetes]
        image = "default-docker/ubuntu:16.04"
        cpu_request = "500m"
        memory_request = "1Gi"
        namespace = "gitlab"
      [runners.cache]
        Type = "s3"
        Path = "cachepath"
        Shared = true
        [runners.cache.s3]
          ServerAddress = "s3.amazonaws.com"
          BucketName = "exampleBucket"
          BucketLocation = "eu-west-1"
          Insecure = false
  tags: "test"
  locked: true
  name: "k8s-runner"
resources:
  limits:
    memory: 1Gi
    cpu: 500m
  requests:
    memory: 250m
    cpu: 50m
ENVIRONMENT:
  http_proxy: http://webproxy.comp.db.de:8080
  https_proxy: http://webproxy.comp.db.de:8080
  no_proxy: "localhost"
config.template.toml located on the pod:
[[runners]]
  [runners.kubernetes]
    image = "default-docker/ubuntu:16.04"
    cpu_request = "500m"
    memory_request = "1Gi"
    namespace = "gitlab"
  [runners.cache]
    Type = "s3"
    Path = "cachepath"
    Shared = true
    [runners.cache.s3]
      ServerAddress = "s3.amazonaws.com"
      BucketName = "exampleBucket"
      BucketLocation = "eu-west-1"
      Insecure = false
config.toml located on the pod:
concurrent = 10
check_interval = 30
log_level = "info"
listen_address = ':9252'
It looks to me as if the environment variables are never added. If I run the env command inside the pod, I also can't find the environment variables.
I am thankful for every helping hand.
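For what it's worth, entries in the runner's environment array in config.toml take the form "KEY=value", not "key: value"; a sketch of the corrected section (the proxy host is copied from the question, so treat it as an assumption):

```toml
[[runners]]
  # each entry is a single "KEY=value" string, not a "key: value" pair
  environment = ["http_proxy=http://webproxy.comp.db.de:8080", "https_proxy=http://webproxy.comp.db.de:8080", "no_proxy=localhost"]
```

With the colon form shown in the question, the runner would not export usable proxy variables into the build environment.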

Related

Universal splunk forwarder as sidecar not showing internal splunk logs

I have implemented a sidecar container to forward my main application's logs to Splunk, using the universalforwarder image.
After deployment, both my main application and the forwarder seem to be up and running, but no logs are received in the Splunk index I specified.
To troubleshoot, I looked for the splunkd log (or any Splunk internal logs), but they are not found under the /var/log path.
Can someone please help me enable these Splunk internal logs?
A piece of my deployment.yaml:
- name: universalforwarder
  image: <docker-registry>/splunk/universalforwarder:latest
  imagePullPolicy: Always
  env:
    - name: SPLUNK_START_ARGS
      value: "--accept-license --answer-yes"
    - name: SPLUNK_USER
      value: splunk
    - name: SPLUNK_PASSWORD
      value: ****
    - name: SPLUNK_CMD
      value: add monitor /var/log
  resources:
    limits:
      memory: "312Mi"
      cpu: "300m"
    requests:
      memory: "80Mi"
      cpu: "80m"
  volumeMounts:
    - name: shared-logs
      mountPath: /var/log
A piece of my configmap.yaml:
outputs.conf: |-
  [tcpout]
  defaultGroup = idxm4d-bigdata
  [tcpout:idxm4d-bigdata]
  server = <servers>
  clientCert = /opt/splunkforwarder/etc/auth/ca.pem
  sslPassword = password
  sslVerifyServerCert = false
inputs.conf: |-
  [monitor:/bin/streaming/adapters/logs/output.log]
  [default]
  host = localhost
  index = krp_idx
  [monitor:/bin/streaming/adapters/logs/output.log]
  disabled = false
  sourcetype = log4j
  recursive = True
deploymentclients.conf: |-
  targetUri = <target-uri>
props.conf: |-
  [default]
  TRANSFORMS-routing=duplicate_data
  [telegraf]
  category = Metrics
  description = Telegraf Metrics
  pulldown_type = 1
  DATETIME_CONFIG =
  NO_BINARY_CHECK = true
  SHOULD_LINEMERGE = true
  disabled = false
  INDEXED_EXTRACTIONS = json
  KV_MODE = none
  TIMESTAMP_FIELDS = time
  TRANSFORMS-routing=duplicate_data
kind: ConfigMap
I am not able to view the splunkd logs to troubleshoot whether Splunk is receiving the logs, or what the issue might be.
Thanks
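One thing worth checking (a sketch; the pod name is a placeholder, the container name is taken from the deployment above): the universal forwarder writes its internal logs under its own install directory, not under /var/log, which in this pod is the shared volume mount.

```shell
# splunkd's internal logs live in the forwarder's install tree, not /var/log
kubectl exec <pod-name> -c universalforwarder -- \
  tail -n 50 /opt/splunkforwarder/var/log/splunk/splunkd.log
```

If that file shows connection errors to the indexers, the forwarder side is at least running and the problem is in outputs.conf or networking.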

Hashicorp vault pods with pending status

I deployed HashiCorp Vault with 3 replicas. Pod vault-0 is running, but the other two pods are in Pending status.
This is my override yaml:
# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: true

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "0.9.0"
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
  affinity: ""

server:
  auditStorage:
    enabled: true
  standalone:
    enabled: false
  image:
    repository: "hashicorp/vault"
    tag: "1.6.3"
  resources:
    requests:
      memory: 4Gi
      cpu: 1000m
    limits:
      memory: 8Gi
      cpu: 1000m
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = true
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        storage "raft" {
          path = "/vault/data"
        }
        service_registration "kubernetes" {}
    config: |
      ui = true
      listener "tcp" {
        tls_disable = true
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  externalPort: 8200
I did a kubectl describe on the pending pods and can see the following status message. I am not sure whether I am adding the correct affinity settings in the override file; I am not sure what I am doing wrong. I am using the Vault Helm chart to deploy to a local Docker Desktop cluster. I appreciate any help.
There are a few problems in your values.yaml file.
1. You set
server:
  auditStorage:
    enabled: true
but you didn't specify how the PVC should be created or what the storage class is. The chart expects you to do that if you enable the storage. Look at: https://github.com/hashicorp/vault-helm/blob/v0.9.0/values.yaml#L443
Set it to false if you are just testing on your local machine, or specify the storage config.
2. You set an empty affinity variable for the injector but not for the server. Set
affinity: ""
for the server too. Look at: https://github.com/hashicorp/vault-helm/blob/v0.9.0/values.yaml#L338
3. An uninitialised and sealed Vault cluster is not really usable. You need to initialise and unseal Vault before it becomes ready, which means setting up a readinessProbe. Something like this:
server:
  readinessProbe:
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
4. The last one, but this is kind of optional. These memory requests:
resources:
  requests:
    memory: 4Gi
    cpu: 1000m
  limits:
    memory: 8Gi
    cpu: 1000m
are a bit on the high side. Setting up an HA cluster of 3 replicas, each requesting 4Gi of memory, might result in Insufficient memory errors - most likely to happen when deploying on a local cluster.
But then again, your local machine might have 32 gigs of memory - I wouldn't know ;) If it doesn't, trim those down to fit on your machine.
So the following values work for me:
# Vault Helm Chart Value Overrides
global:
  enabled: true
  tlsDisable: true

injector:
  enabled: true
  # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/
  image:
    repository: "hashicorp/vault-k8s"
    tag: "0.9.0"
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
  affinity: ""

server:
  auditStorage:
    enabled: false
  standalone:
    enabled: false
  image:
    repository: "hashicorp/vault"
    tag: "1.6.3"
  resources:
    requests:
      memory: 256Mi
      cpu: 200m
    limits:
      memory: 512Mi
      cpu: 400m
  affinity: ""
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = true
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        storage "raft" {
          path = "/vault/data"
        }
        service_registration "kubernetes" {}
    config: |
      ui = true
      listener "tcp" {
        tls_disable = true
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  externalPort: 8200
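With the readiness probe in place the pods should schedule and become Ready, but as noted above, Vault itself still has to be initialised and unsealed once. Roughly (pod and service names assume the chart defaults):

```shell
# initialise on the first pod - prints the unseal keys and the initial root token
kubectl exec -ti vault-0 -- vault operator init

# unseal vault-0 (repeat with 3 different unseal keys by default)
kubectl exec -ti vault-0 -- vault operator unseal <unseal-key>

# join the remaining raft nodes to the cluster, then unseal them the same way
kubectl exec -ti vault-1 -- vault operator raft join http://vault-0.vault-internal:8200
kubectl exec -ti vault-1 -- vault operator unseal <unseal-key>
```

Store the unseal keys and root token somewhere safe; they are only printed once.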

Hashicorp vault - Client sent an HTTP request to an HTTPS server - Readiness Probes

I currently have a problem where the readiness probe fails when deploying the Vault Helm chart. Vault is working, but whenever I describe the pods I get this error. How do I get the probe to use HTTPS instead of HTTP? If anyone knows how to solve this, that would be great, as I am slowly losing my mind.
Kubectl Describe pod
Name: vault-0
Namespace: default
Priority: 0
Node: ip-192-168-221-250.eu-west-2.compute.internal/192.168.221.250
Start Time: Mon, 24 Aug 2020 16:41:59 +0100
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-768cd675b9
helm.sh/chart=vault-0.6.0
statefulset.kubernetes.io/pod-name=vault-0
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.221.251
IPs:
IP: 192.168.221.251
Controlled By: StatefulSet/vault
Containers:
vault:
Container ID: docker://445d7cdc34cd01ef1d3a46f2d235cb20a94e48279db3fcdd84014d607af2fe1c
Image: vault:1.4.2
Image ID: docker-pullable://vault@sha256:12587718b79dc5aff542c410d0bcb97e7fa08a6b4a8d142c74464a9df0c76d4f
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
sed -E "s/HOST_IP/${HOST_IP?}/g" /vault/config/extraconfig-from-values.hcl > /tmp/storageconfig.hcl;
sed -Ei "s/POD_IP/${POD_IP?}/g" /tmp/storageconfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
State: Running
Started: Mon, 24 Aug 2020 16:42:00 +0100
Ready: False
Restart Count: 0
Readiness: exec [/bin/sh -ec vault status -tls-skip-verify] delay=5s timeout=5s period=3s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-0 (v1:metadata.name)
VAULT_K8S_NAMESPACE: default (v1:metadata.namespace)
VAULT_ADDR: http://127.0.0.1:8200
VAULT_API_ADDR: http://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-0 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-internal:8201
HOME: /home/vault
VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
Mounts:
/home/vault from home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from vault-token-cv9vx (ro)
/vault/config from config (rw)
/vault/userconfig/vault-server-tls from userconfig-vault-server-tls (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
userconfig-vault-server-tls:
Type: Secret (a volume populated by a Secret)
SecretName: vault-server-tls
Optional: false
home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vault-token-cv9vx:
Type: Secret (a volume populated by a Secret)
SecretName: vault-token-cv9vx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7s default-scheduler Successfully assigned default/vault-0 to ip-192-168-221-250.eu-west-2.compute.internal
Normal Pulled 6s kubelet, ip-192-168-221-250.eu-west-2.compute.internal Container image "vault:1.4.2" already present on machine
Normal Created 6s kubelet, ip-192-168-221-250.eu-west-2.compute.internal Created container vault
Normal Started 6s kubelet, ip-192-168-221-250.eu-west-2.compute.internal Started container vault
Warning Unhealthy 0s kubelet, ip-192-168-221-250.eu-west-2.compute.internal Readiness probe failed: Error checking seal status: Error making API request.
URL: GET http://127.0.0.1:8200/v1/sys/seal-status
Code: 400. Raw Message:
Client sent an HTTP request to an HTTPS server.
Vault Config File
# global:
#   tlsDisable: false
injector:
  enabled: false
server:
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
  extraVolumes:
    - type: secret
      name: vault-server-tls # Matches the ${SECRET_NAME} from above
  affinity: ""
  readinessProbe:
    enabled: true
    path: /v1/sys/health
  # livelinessProbe:
  #   enabled: true
  #   path: /v1/sys/health?standbyok=true
  #   initialDelaySeconds: 60
  ha:
    enabled: true
    config: |
      ui = true
      api_addr = "https://127.0.0.1:8200" # Unsure if this is correct
      storage "dynamodb" {
        ha_enabled = "true"
        region = "eu-west-2"
        table = "global-vault-data"
        access_key = "KEY"
        secret_key = "SECRET"
      }
      # listener "tcp" {
      #   address = "0.0.0.0:8200"
      #   tls_disable = "true"
      # }
      listener "tcp" {
        address = "0.0.0.0:8200"
        cluster_address = "0.0.0.0:8201"
        tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
        tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
        tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
      }
      seal "awskms" {
        region = "eu-west-2"
        access_key = "KEY"
        secret_key = "SECRET"
        kms_key_id = "ID"
      }
ui:
  enabled: true
  serviceType: LoadBalancer
In your environment variable definitions you have:
VAULT_ADDR: http://127.0.0.1:8200
And TLS is not disabled in your Vault config (i.e. TLS is enabled):
listener "tcp" {
  address = "0.0.0.0:8200"
  cluster_address = "0.0.0.0:8201"
  tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
  tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
  tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
}
And your readiness probe executes this in the pod:
vault status -tls-skip-verify
That is trying to connect to http://127.0.0.1:8200, so you can try changing the environment variable to use HTTPS: VAULT_ADDR=https://127.0.0.1:8200
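With the Helm chart, one place to sketch such an override is server.extraEnvironmentVars. Note this is an assumption about precedence: the chart derives its own VAULT_ADDR from the global tlsDisable value, which is the supported knob for switching the scheme.

```yaml
server:
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
    # assumed override; the chart normally sets VAULT_ADDR itself based on tlsDisable
    VAULT_ADDR: https://127.0.0.1:8200
```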
You may have another (different) issue: your config and env variable do not match.
K8s manifest:
VAULT_API_ADDR: http://$(POD_IP):8200
Vault configs:
api_addr = "https://127.0.0.1:8200"
✌️
If you are on a Mac, add the Vault URL to your .zshrc or .bash_profile file.
In the terminal, open either .zshrc or .bash_profile:
$ open .zshrc
Copy and paste this into it: export VAULT_ADDR='http://127.0.0.1:8200'
Then apply the change by running:
$ source .zshrc
You can also set the tlsDisable to false in the global settings like this:
global:
  tlsDisable: false
As the documentation for the helm chart says here:
The http/https scheme is controlled by the tlsDisable value.

airflow kubernetes not reading pod_template_file

I am running Airflow with the k8s executor.
I have everything set up under the [kubernetes] section and things are working fine. However, I would prefer to use a pod file for the worker, so I generated a pod.yaml from one of the worker containers that spins up.
I placed this file in a location accessible to the scheduler pod, something like
/opt/airflow/yamls/workerpod.yaml
But when I try to specify this file in the pod_template_file parameter, it gives me these errors:
[2020-03-02 22:12:24,115] {pod_launcher.py:84} ERROR - Exception when attempting to create Namespaced Pod.
Traceback (most recent call last):
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/airflow/contrib/kubernetes/pod_launcher.py", line 81, in run_pod_async
resp = self._client.create_namespaced_pod(body=req, namespace=pod.namespace, **kwargs)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 6115, in create_namespaced_pod
(data) = self.create_namespaced_pod_with_http_info(namespace, body, **kwargs)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 6206, in create_namespaced_pod_with_http_info
collection_formats=collection_formats)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 334, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 168, in __call_api
_request_timeout=_request_timeout)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 377, in request
body=body)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/rest.py", line 266, in POST
body=body)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'ab2bc6dc-96f9-4014-8a08-7dae6e008aad', 'Cache-Control': 'no-store', 'Content-Type': 'application/json', 'Date': 'Mon, 02 Mar 2020 22:12:24 GMT', 'Content-Length': '660'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"examplebashoperatorrunme0-c9ca5d619bc54bf2a456e133ad79dd00\" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{0}: 0 is not an allowed group spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000040000, 1000049999] spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden]","reason":"Forbidden","details":{"name":"examplebashoperatorrunme0-c9ca5d619bc54bf2a456e133ad79dd00","kind":"pods"},"code":403}
[2020-03-02 22:12:24,141] {kubernetes_executor.py:863} WARNING - ApiException when attempting to run task, re-queueing. Message: pods "examplebashoperatorrunme0-c9ca5d619bc54bf2a456e133ad79dd00" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{0}: 0 is not an allowed group spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000040000, 1000049999] spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden]
Just to clarify, the pod.yaml file was generated from the same running container that comes from the configs in the [kubernetes] section of airflow.cfg, which work just fine. The run_as_user is correct and the service account is correct, but I am still getting this error.
I am unsure whether I should place this file in relation to where I kick off my kubectl apply.
Since it is referenced in airflow.cfg, I didn't think that would be the case; rather, it should be accessible from within the scheduler container.
One strange thing I noticed: even though I have specified, and seem to be using, KubernetesExecutor, when the individual worker pods come up they say LocalExecutor. That is something I had changed to KubernetesExecutor in the workerpod.yaml file.
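To rule out the path question, it is easy to confirm the template is readable from inside the scheduler pod (the pod name is a placeholder):

```shell
# pod_template_file is resolved by the scheduler, so the file must exist in its container
kubectl exec <scheduler-pod> -- ls -l /opt/airflow/yamls/workerpod.yaml
```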
Here is the pod yaml file:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: nonroot
  labels:
    app: airflow-worker
    kubernetes_executor: "True"
  name: airflow-worker
  # namespace: airflow
spec:
  affinity: {}
  containers:
    - env:
        - name: AIRFLOW_HOME
          value: /opt/airflow
        - name: AIRFLOW__CORE__EXECUTOR
          value: KubernetesExecutor
          # value: LocalExecutor
        - name: AIRFLOW__CORE__DAGS_FOLDER
          value: /opt/airflow/dags
        - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
          valueFrom:
            secretKeyRef:
              key: MYSQL_CONN_STRING
              name: db-secret
      image: ourrepo.example.com/airflow-lab:latest
      imagePullPolicy: IfNotPresent
      name: base
      # resources:
      #   limits:
      #     cpu: "1"
      #     memory: 1Gi
      #   requests:
      #     cpu: 400m
      #     memory: 1Gi
      securityContext:
        capabilities:
          drop:
            - KILL
            - MKNOD
            - SETGID
            - SETUID
      volumeMounts:
        - mountPath: /opt/airflow/dags
          name: airflow-dags
          readOnly: true
          subPath: airflow/dags
        - mountPath: /opt/airflow/logs
          name: airflow-logs
        - mountPath: /opt/airflow/airflow.cfg
          name: airflow-config
          readOnly: true
          subPath: airflow.cfg
        # - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        #   name: airflow-cluster-access-token-5228g
        #   readOnly: true
  dnsPolicy: ClusterFirst
  # imagePullSecrets:
  #   - name: airflow-cluster-access-dockercfg-85twh
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    # fsGroup: 0
    runAsUser: 1001
    seLinuxOptions:
      level: s0:c38,c12
  serviceAccount: airflow-cluster-access
  serviceAccountName: airflow-cluster-access
  # tolerations:
  #   - effect: NoSchedule
  #     key: node.kubernetes.io/memory-pressure
  #     operator: Exists
  volumes:
    - name: airflow-dags
      persistentVolumeClaim:
        claimName: ucdagent
    - emptyDir: {}
      name: airflow-logs
    - configMap:
        defaultMode: 420
        name: airflow-config
      name: airflow-config
    # - name: airflow-cluster-access-token-5228g
    #   secret:
    #     defaultMode: 420
    #     secretName: airflow-cluster-access-token-5228g
Here is the working kubernetes config from airflow.cfg
[kubernetes]
#pod_template_file = /opt/airflow/yamls/workerpod.yaml
dags_in_image = False
worker_container_repository = ${AIRFLOW_IMAGE_NAME}
worker_container_tag = ${AIRFLOW_IMAGE_TAG}
worker_container_image_pull_policy = IfNotPresent
delete_worker_pods = False
in_cluster = true
namespace = ${AIRFLOW_NAMESPACE}
airflow_configmap = airflow-config
run_as_user = 1001
dags_volume_subpath = airflow/dags
dags_volume_claim = ucdagent
worker_service_account_name = airflow-cluster-access
[kubernetes_secrets]
AIRFLOW__CORE__SQL_ALCHEMY_CONN = db-secret=MYSQL_CONN_STRING
UPDATE: My Airflow version is 1.10.7. I am guessing this is a newer parameter; I am trying to find out whether it is currently an empty config reference or whether it has been implemented in the latest release, which is right now 1.10.9.
UPDATE: This parameter has not been implemented as of 1.10.9.

Container Optimized OS performance

After upgrading my cluster nodes' image from CONTAINER_VM to CONTAINER_OPTIMIZED_OS, I ran into performance degradation of the PHP application - up to 10 times slower.
Did I miss something in my configuration, or is this a common issue?
I tried machines with more CPU and memory, but it affected the performance only slightly.
Terraform configuration:
resource "google_compute_address" "dev-cluster-address" {
  name   = "dev-cluster-address"
  region = "europe-west1"
}

resource "google_container_cluster" "dev-cluster" {
  name               = "dev-cluster"
  zone               = "europe-west1-d"
  initial_node_count = 2
  node_version       = "1.7.5"

  master_auth {
    username = "*********-dev"
    password = "*********"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/devstorage.full_control",
      "https://www.googleapis.com/auth/sqlservice.admin",
    ]

    machine_type = "n1-standard-1"
    disk_size_gb = 20
    image_type   = "COS"
  }
}
Kubernetes deployment for the Symfony application:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: deployment-dev
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: dev
    spec:
      containers:
        - name: nginx
          image: nginx:1.13.5-alpine
          volumeMounts:
            - name: application
              mountPath: /var/www/web
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: "20m"
              memory: "64M"
            requests:
              cpu: "5m"
              memory: "16M"
        - name: php
          image: ********
          lifecycle:
            postStart:
              exec:
                command:
                  - "bash"
                  - "/var/www/provision/files/init_php.sh"
          envFrom:
            - configMapRef:
                name: symfony-config-dev
          volumeMounts:
            - name: application
              mountPath: /application
            - name: logs
              mountPath: /var/www/var/logs
            - name: lexik-jwt-keys
              mountPath: /var/www/var/jwt
          ports:
            - containerPort: 9000
          resources:
            limits:
              cpu: "400m"
              memory: "1536M"
            requests:
              cpu: "300m"
              memory: "1024M"
        - name: cloudsql-proxy-mysql
          image: gcr.io/cloudsql-docker/gce-proxy:1.09
          resources:
            limits:
              cpu: "10m"
              memory: "64M"
            requests:
              cpu: "5m"
              memory: "16M"
          command:
            - "/cloud_sql_proxy"
            - "-instances=***:europe-west1:dev1=tcp:0.0.0.0:3306"
        - name: cloudsql-proxy-analytics
          image: gcr.io/cloudsql-docker/gce-proxy:1.09
          resources:
            limits:
              cpu: "20m"
              memory: "64M"
            requests:
              cpu: "10m"
              memory: "16M"
          command:
            - "/cloud_sql_proxy"
            - "-instances=***:europe-west1:analytics-dev1=tcp:0.0.0.0:3307"
        - name: sidecar-logging
          image: alpine:3.6
          args: [/bin/sh, -c, 'tail -n+1 -f /var/www/var/logs/prod.log']
          volumeMounts:
            - name: logs
              mountPath: /var/www/var/logs
          resources:
            limits:
              cpu: "5m"
              memory: "20M"
            requests:
              cpu: "5m"
              memory: "20M"
      volumes:
        - name: application
          emptyDir: {}
        - name: logs
          emptyDir: {}
        - name: nginx-config
          configMap:
            name: config-dev
            items:
              - key: nginx
                path: default.conf
        - name: lexik-jwt-keys
          configMap:
            name: config-dev
            items:
              - key: lexik_jwt_private_key
                path: private.pem
              - key: lexik_jwt_public_key
                path: public.pem
One of the reasons could be the fact that Kubernetes actually started enforcing the CPU limits with Container-Optimized OS.
resources:
  limits:
    cpu: "20m"
These were not enforced on the older ContainerVM images.
Could you please try removing or relaxing the cpu limits in your pod spec and see if it helps?
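As a sketch of that suggestion (values are illustrative, based on the nginx container above): drop the cpu limit while keeping the request, so the scheduler still has a sizing signal but the container is no longer throttled.

```yaml
resources:
  requests:
    cpu: "5m"
    memory: "16M"
  limits:
    memory: "64M"   # keep the memory limit; only the cpu limit is removed
```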