How to propagate Kubernetes events from a GKE cluster to Google Cloud Logging - kubernetes

Is there any way to propagate all Kubernetes events to Google Cloud Logging? For instance, when a pod is created/deleted or a liveness probe fails. I know I can use kubectl get events in a console. However, I would like to preserve those events in a log file in the cloud log alongside other pod-level logs. It is quite helpful information.

It seems that OP found the logs, but I wasn't able to on GKE (1.4.7) with Stackdriver. It was a little tricky to figure out, so I thought I'd share for others. I was able to get them by creating an eventer deployment with the gcl sink.
For example:
deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: eventer
  name: eventer
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: eventer
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: eventer
    spec:
      containers:
      - name: eventer
        command:
        - /eventer
        - --source=kubernetes:''
        - --sink=gcl
        image: gcr.io/google_containers/heapster:v1.2.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
Then, search for logs with an advanced filter (substitute your GCE project name):
resource.type="global"
logName="projects/project-name/logs/kubernetes.io%2Fevents"

Related

kubernetes k8 unable to pull latest image

Hi, I am working with Kubernetes. Below is my K8s deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: webapp
spec: # Dictionary
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # maxUnavailable defines how many pods can be unavailable during the rolling update
      maxUnavailable: 50%
      # maxSurge defines how many pods we can add at a time
      maxSurge: 1
  selector:
    matchLabels:
      app: webapp
      instance: app
  template:
    metadata: # Dictionary
      name: webapplication
      labels: # Dictionary
        app: webapp # Key-value pairs
        instance: app
      annotations:
        vault.security.banzaicloud.io/vault-role: al-dev
    spec:
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      containers: # List
      - name: al-webapp-container
        image: ghcr.io/my-org/al.web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "1Gi"
            cpu: "900m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
      imagePullSecrets:
      - name: githubpackagesecret
Whenever I deploy this to Kubernetes, it's not picking up the latest image from GitHub Packages. What should I do in order to pull the latest image and update the currently running pod with it? Can someone help me fix this issue? Any help would be appreciated. Thank you.
There is a chance that, because you are deploying with the same latest tag each time, the deployment is not seen as changed, since the image tag is identical.
A pod restart is required so the image is downloaded again; if it is still the same after that, there may be an issue with a cached image from the build.
What you can do for now is try:
kubectl rollout restart deploy <deployment-name> -n <namespace>
This will restart the pods, which will fetch the image again for all pods, and you can then check whether the latest code is running.
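If you would rather not rely on restarts at all, one common alternative (a sketch; the tag below is hypothetical) is to give every build a unique tag, for example the commit SHA, and point the deployment at it so each apply is an actual change:

kubectl set image deployment/webapp al-webapp-container=ghcr.io/my-org/al.web:sha-1a2b3c4 -n <namespace>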
Since you have imagePullPolicy: Always set, it should always pull the image. Can you run kubectl describe on the pod while it's starting so we can see the logs?

Kubernetes share storage between replicas

We have Kubernetes running on our own servers. For persistent storage we have an NFS server. This works great.
Now we want to deploy an application with multiple replicas that should have shared storage between them, but the storage should not be persistent. When the pods are deleted, the data should be gone as well.
I was hoping I could achieve it with the following
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - image: nginx:latest
        imagePullPolicy: IfNotPresent
        name: nginx
        volumeMounts:
        - name: shared-data
          mountPath: /shared-data
        resources:
          limits:
            memory: 500Mi
All replicas have /shared-data, but when one replica stores data in that folder, the other replicas cannot see the file, so it is not shared.
What are my options?
You can use a PVC to share data between the pods. Then, you can set up a preStop lifecycle hook
for the pods to clean up the data when the pod gets deleted.
Here is an example of adding a preStop hook on a pod: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
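For illustration, a minimal sketch of what such a hook could look like on one of the replicas (the mount path matches the question; the pod name, PVC name, and busybox image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod              # hypothetical name
spec:
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: shared-data-claim   # hypothetical PVC backed by the NFS server
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared-data
    lifecycle:
      preStop:
        exec:
          # remove the shared files before this container is stopped
          command: ["sh", "-c", "rm -rf /shared-data/*"]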

Kubernetes reattach to same persistent volume after delete

I have an app where two pods need access to the same volume. I want to be able to delete the cluster and then, after applying again, be able to access the data that is on the volume.
So for example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retaining
provisioner: csi.hetzner.cloud
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: media
spec:
  #storageClassName: retaining
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-php
  labels:
    app: myapp-php
    k8s-app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-php
  template:
    metadata:
      labels:
        app: myapp-php
        k8s-app: myapp
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: myapp-php
        ports:
        - containerPort: 9000
          protocol: TCP
        resources:
          limits:
            cpu: 750m
            memory: 3Gi
          requests:
            cpu: 750m
            memory: 3Gi
        volumeMounts:
        - name: media
          mountPath: /var/www/html/media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      nodeSelector:
        mytype: main
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-web
  labels:
    app: myapp-web
    k8s-app: myapp
spec:
  selector:
    matchLabels:
      app: myapp-web
  template:
    metadata:
      labels:
        app: myapp-web
        k8s-app: myapp
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: myapp-web
        ports:
        - containerPort: 9000
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 128Mi
        volumeMounts:
        - name: media
          mountPath: /var/www/html/media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      nodeSelector:
        mytype: main
If I do the following:
k apply -f pv-issue.yaml
k delete -f pv-issue.yaml
k apply -f pv-issue.yaml
I want to reconnect to the same volume.
What I have tried:
If I keep the file as is, the volume will be deleted, so the data will be lost.
I can remove the PVC declaration from the file; then it works. My issue is that on the real app I am using kustomize, and I don't see a way to exclude resources when doing kustomize build app | kubectl delete -f -
I tried using Retain in the PVC. It retains the volume on delete, but on the next apply a new volume is created.
StatefulSet: however, I don't see a way that two different StatefulSets can share the same volume.
Is there a way to achieve this?
Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
Is there a way to achieve this? Or should I just do regular backups, and restore the volume data from backup when recreating the cluster?
Cluster deletion will cause all your local volumes to be deleted. You can achieve what you want by storing the data outside the cluster; Kubernetes has a wide variety of storage providers to help you keep data on a variety of storage types.
You may also consider keeping the data locally on nodes with hostPath, but that is not a good solution either, since it requires pinning the pod to a specific node to avoid data loss, and if you delete your cluster in a way that removes all of your VMs, that data is gone as well.
Having some network-attached storage would be the right way to go here. A very good example is persistent disks, which are durable network storage devices that your instances can access. They are located independently from your virtual machines and are not deleted when you delete the cluster.
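As a rough sketch of how re-attaching could look if you keep the underlying network disk: create the PV statically, pointing at the existing CSI volume, and pre-bind it to the media claim. The volumeHandle value and exact CSI fields below are assumptions; check the csi.hetzner.cloud driver's documentation for the correct values:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv                  # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: retaining
  csi:
    driver: csi.hetzner.cloud
    volumeHandle: "12345678"      # ID of the existing volume (assumed; look it up with your provider)
  claimRef:                       # pre-bind so the re-applied PVC attaches to this PV
    namespace: default
    name: media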

Kubernetes, minikube and vpa: vpa doesn't scale up to target

Before I start: I'm running Kubernetes on a Mac.
minikube: 1.17.0
metrics-server: 1.8+
vpa: vpa-release-0.8
My issue is that VPA doesn't scale up my pod; it just keeps recreating pods. I followed the GKE VPA example. I set the deployment's resource requests to cpu: 100m, memory: 50Mi and deployed the VPA. It gave me a recommendation, and updateMode is Auto as well, but it keeps recreating the pod and doesn't change the resource requests when I check the recreated pod with kubectl describe pod podname.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app-deployment
  updatePolicy:
    updateMode: "Auto"

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app-deployment
  template:
    metadata:
      labels:
        app: my-app-deployment
    spec:
      containers:
      - name: my-container
        image: k8s.gcr.io/ubuntu-slim:0.1
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        command: ["/bin/sh"]
        args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"]
Status:
  Conditions:
    Last Transition Time:  2021-02-03T03:13:38Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  my-container
      Lower Bound:
        Cpu:     25m
        Memory:  262144k
      Target:
        Cpu:     548m
        Memory:  262144k
      Uncapped Target:
        Cpu:     548m
        Memory:  262144k
      Upper Bound:
        Cpu:     100G
        Memory:  100T
Events:  <none>
I tried with kind as well. There it recreates the pods with the new resource requests, but they never run and stay Pending because the node's resources are not enough. I think the reason VPA doesn't work properly is that minikube (or I) didn't create multiple nodes. Do you think that is related?

Mounting kubernetes volume in multiple containers within a pod with gitlab

I am setting up a CI/CD environment for the first time, consisting of a single-node Kubernetes cluster (minikube).
On this node I created a PV
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                   STORAGECLASS   REASON   AGE
data-volume   1Gi        RWO            Retain           Bound    gitlab-managed-apps/data-volume-claim   manual                  20m
and PVC
NAME                STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-volume-claim   Bound    data-volume   1Gi        RWO            manual         19m
Now I would like to create a pod with multiple containers accessing this volume.
Where and how do you advise setting this up using GitLab pipelines (gitlab-ci etc.)? Multiple repos may be the best fit for the project.
Here is a fully working example of a deployment manifest file. The Pod spec defines two containers (based on different nginx Docker images) that use the same PV, from which they serve custom static HTML content on ports 80 and 81 respectively:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: nginx
  name: nginx
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      volumes:
      - name: my-pv-storage
        persistentVolumeClaim:
          claimName: my-pv-claim-nginx
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html_custom
      - image: custom-nginx
        imagePullPolicy: IfNotPresent
        name: custom-nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
Yes, you can probably do that: run multiple containers in one pod sharing the same PVC.
In CI/CD, if you have multiple repos, a commit in one repo will build a new Docker image, push it to the registry, and deploy it to the K8s cluster.
If you plan to use the latest tag for your images, then a multi-container pod works fine; the deployment stays easy to manage when a commit lands in only one repository.
If you plan to tag images with a SHA hash in CI/CD, then consider how you will manage a deployment file that contains the configuration for two containers.
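For that last case, a sketch of one option: kustomize can override each container's image tag independently, so each repo's pipeline only edits its own entry before building the manifests (the image names and tags below are hypothetical):

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: registry.example.com/frontend    # first container's image
    newTag: sha-1a2b3c4                     # set by the frontend repo's pipeline
  - name: registry.example.com/backend     # second container's image
    newTag: sha-9f8e7d6                     # set by the backend repo's pipeline

A pipeline job can update just its own entry with kustomize edit set image registry.example.com/frontend=registry.example.com/frontend:sha-<commit> and then run kustomize build . | kubectl apply -f -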