Kubernetes | Rolling Update on Replica Set

I'm trying to perform a rolling update of the container image that my Federated Replica Set is using, but I'm getting an error.
When I run: kubectl rolling-update mywebapp -f mywebapp-v2.yaml
I get the error message: the server could not find the requested resource
This is a brand new and clean install on Google Container Engine (GKE), so besides creating the Federated Cluster and deploying my first service, nothing else has been done. I'm following the instructions from the Kubernetes docs, but no luck.
I've checked to make sure that I'm in the correct context, and I've also created a new YAML file pointing to the new image and updated the metadata name. Am I missing something? The easy way for me to do this would be to delete the replica set and then redeploy, but then I'd be cheating myself :). Any pointers would be appreciated.
mywebapp-v2.yaml - the new YAML file for the rolling update
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: mywebapp-v2
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
My original mywebapp.yaml file:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: mywebapp
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp

Try kind: Deployment.
Most kubectl commands that support Replication Controllers also support ReplicaSets. One exception is the rolling-update command. If you want the rolling update functionality please consider using Deployments instead. Also, the rolling-update command is imperative whereas Deployments are declarative, so we recommend using Deployments through the rollout command.
-- Replica Sets | Kubernetes
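For illustration, here is a minimal sketch of the same workload written as a Deployment (assuming apps/v1 is available on your cluster; older clusters may need extensions/v1beta1, and a federated setup may involve extra steps on top of this):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: mywebapp
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: gcr.io/xxxxxx/static-js:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: mywebapp
With a Deployment in place, a rolling update is just a change to the pod template, for example:
kubectl set image deployment/mywebapp mywebapp=gcr.io/xxxxxx/static-js:v2
kubectl rollout status deployment/mywebapp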

Related

kubernetes k8 unable to pull latest image

Hi, I am working in Kubernetes. Below is my deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: webapp
spec: # Dictionary
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # maxUnavailable defines how many pods can be unavailable during the rolling update
      maxUnavailable: 50%
      # maxSurge defines how many extra pods we can add at a time
      maxSurge: 1
  selector:
    matchLabels:
      app: webapp
      instance: app
  template:
    metadata: # Dictionary
      name: webapplication
      labels: # Dictionary
        app: webapp # Key-value pairs
        instance: app
      annotations:
        vault.security.banzaicloud.io/vault-role: al-dev
    spec:
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      containers: # List
      - name: al-webapp-container
        image: ghcr.io/my-org/al.web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "1Gi"
            cpu: "900m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
      imagePullSecrets:
      - name: githubpackagesecret
Whenever I deploy this into Kubernetes, it's not picking up the latest image from GitHub Packages. What should I do in order to pull the latest image and update the currently running pods with it? Can someone help me fix this issue? Any help would be appreciated. Thank you.
If you are deploying with the same latest tag, the Deployment spec doesn't change, so the Pods may not get updated even though the image behind the tag has.
A Pod restart is required so the image gets pulled again; if the running code is still the old version after that, there is likely an issue with how the image was built or cached.
What you can try for now is:
kubectl rollout restart deploy <deployment-name> -n <namespace>
This will restart the Pods, each of which will pull the image again; then check whether the latest code is running.
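A more robust pattern (a sketch, assuming you can tag each build uniquely, e.g. with the commit SHA) is to push a unique tag per build and point the Deployment at it, so every deploy changes the pod template and triggers a rollout on its own:
kubectl set image deployment/webapp al-webapp-container=ghcr.io/my-org/al.web:<git-sha> -n <namespace>
kubectl rollout status deployment/webapp -n <namespace>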
Since you have imagePullPolicy: Always set, it should always pull the image. Can you run kubectl describe on the pod while it's starting so we can see the logs?
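To verify which image actually got pulled (a generic check, not specific to GitHub Packages), you can also inspect the image ID recorded in the pod status, which includes the resolved digest:
kubectl get pods -n <namespace> -l app=webapp -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].imageID}{"\n"}{end}'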

Add ExternalSecret to Yaml file deploying to K8s

I'm trying to deploy a Kubernetes processor to a cluster on GCP GKE but the pod fails with the following error:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
This is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dbt-core-processor
  namespace: prod
  labels:
    app: dbt-core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dbt-core
  template:
    metadata:
      labels:
        app: dbt-core
    spec:
      containers:
      - name: dbt-core-processor
        image: IMAGE
        resources:
          requests:
            cpu: 50m
            memory: 1Gi
          limits:
            cpu: 1
            memory: 2Gi
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          valueFrom:
            secretKeyRef:
              name: service-account-credentials-dbt-test
              key: service-account-credentials-dbt-test
---
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: service-account-credentials-dbt-test
  namespace: prod
spec:
  backendType: gcpSecretsManager
  data:
  - key: service-account-credentials-dbt-test
    name: service-account-credentials-dbt-test
    version: latest
When I run kubectl apply -f deployment.yml I get the following error:
deployment.apps/dbt-core-processor created
error: unable to recognize "deployment.yml": no matches for kind "ExternalSecret" in version "kubernetes-client.io/v1"
This creates my processor, but the pod fails because it can't find the secret:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
How do I add the secrets from my secrets manager in GCP to this deployment?
ExternalSecret is a custom resource definition (CRD) and it looks like it is not installed on your cluster.
I googled kubernetes-client.io/v1 and it looks like you may be following instructions from the old, archived project that first provided this CRD? The GitHub repo pointed me to a maintained project that has replaced it.
The good news is that the current project has what looks like comprehensive documentation, including a guide on how to install the CRDs on your cluster and the proper configuration for the ExternalSecret.
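As a rough sketch only (the exact chart name, API version, and provider/auth fields should be taken from the current project's documentation; the store name and project ID below are placeholders), installing the operator and rewriting the ExternalSecret might look something like this:
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace --set installCRDs=true

apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: gcp-secret-store          # placeholder name
  namespace: prod
spec:
  provider:
    gcpsm:
      projectID: my-gcp-project   # placeholder: your GCP project ID
      # auth (e.g. Workload Identity) omitted here; see the project's GCP Secret Manager guide
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: service-account-credentials-dbt-test
  namespace: prod
spec:
  secretStoreRef:
    name: gcp-secret-store
    kind: SecretStore
  target:
    name: service-account-credentials-dbt-test
  data:
  - secretKey: service-account-credentials-dbt-test
    remoteRef:
      key: service-account-credentials-dbt-test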

Kubernetes HPA not working: unable to get metrics

My pod scaler fails to deploy, and keeps giving an error of FailedGetResourceMetric:
Warning FailedComputeMetricsReplicas 6s horizontal-pod-autoscaler failed to compute desired number of replicas based on listed metrics for Deployment/default/bot-deployment: invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
I have made sure metrics-server is installed, as you can see when I run the following command to show the metrics-server deployment on the cluster:
kubectl get deployment metrics-server -n kube-system
It shows this:
metrics-server
I also set the --kubelet-insecure-tls and --kubelet-preferred-address-types=InternalIP options in the args section of the metrics-server manifest file.
This is what my deployment manifest looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bot-deployment
  labels:
    app: bot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bot
  template:
    metadata:
      labels:
        app: bot
    spec:
      containers:
      - name: bot-api
        image: gcr.io/<repo>
        ports:
        - containerPort: 5600
        volumeMounts:
        - name: bot-volume
          mountPath: /core
      - name: wallet
        image: gcr.io/<repo>
        ports:
        - containerPort: 5000
        resources:
          requests:
            cpu: 800m
          limits:
            cpu: 1500m
        volumeMounts:
        - name: bot-volume
          mountPath: /wallet_
      volumes:
      - name: bot-volume
        emptyDir: {}
The specifications for my pod scaler is shown below too:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: bot-scaler
spec:
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 85
        type: Utilization
    type: Resource
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: bot-deployment
  minReplicas: 1
  maxReplicas: 10
Because of this, the TARGET column always remains at /80%. Upon inspection, the HPA makes that same complaint over and over again. I have tried all the options that I have seen on some other questions, but none of them seem to work. I have also tried uninstalling and reinstalling metrics-server many times, but it doesn't help.
One thing I notice, though, is that metrics-server seems to shut down after I deploy the HPA manifest, and it fails to start again. When I check the state of metrics-server, the READY column shows 0/1 even though it was initially 1/1. What could be wrong?
I will gladly provide as much info as needed. Thank you!
Looks like your bot-api container is missing its resource request and limit; your wallet container has them, though. The HPA uses the resource requests of all containers in the pod to calculate utilization.
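If so, a sketch of the fix (the CPU values below are placeholders to tune for your workload) is simply to give bot-api a request and limit as well, so the HPA can compute utilization across every container in the pod:
      containers:
      - name: bot-api
        image: gcr.io/<repo>
        ports:
        - containerPort: 5600
        resources:
          requests:
            cpu: 200m        # placeholder; the HPA needs a CPU request on every container
          limits:
            cpu: 500m        # placeholder
        volumeMounts:
        - name: bot-volume
          mountPath: /core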

assign memory resources to a running pod?

I would like to know how I can assign memory resources to a running pod.
I tried kubectl get po foo-7d7dbb4fcd-82xfr -o yaml > pod.yaml
but when I run the command kubectl apply -f pod.yaml I get:
The Pod "foo-7d7dbb4fcd-82xfr" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
Thanks in advance for your help.
A Pod is the minimal Kubernetes resource, and it does not support the kind of edit you want to make.
I suggest you use a Deployment to run your pod, since it is a "pod manager" that gives you a lot of additional features, like pod self-healing, liveness/readiness probes, etc.
You can define the resources in your Deployment file like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        resources:
          limits:
            cpu: 15m
            memory: 100Mi
          requests:
            cpu: 15m
            memory: 100Mi
        ports:
        - name: http
          containerPort: 80
As @KoopaKiller mentioned, you can't update the spec.containers[*].resources field; this is mentioned in the Container object spec:
Compute Resources required by this container. Cannot be updated.
Instead, you can deploy your Pods using a Deployment object. In that case, if you change the resources config for your Pods, the Deployment controller will roll out updated versions of your Pods.
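If the Pod is already managed by a Deployment, you don't even need to edit the YAML by hand; a sketch, assuming a Deployment and container both named foo:
kubectl set resources deployment foo -c=foo --requests=memory=200Mi --limits=memory=400Mi
This patches the pod template, and the Deployment controller rolls out new Pods with the updated resources.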

Openshift: Is it possible to make different pods of the same deployment to use different resources?

In OpenShift, say there are two pods of the same deployment in the test env. Is it possible to make one pod use/connect to database1 and the other pod use/connect to database2, via a label or configuration?
I have created two different pods from the same code base, i.e. an image containing the same compiled code. Using Spring profiles, I passed two different arguments for the connection to the Oracle database, for example.
How about trying a StatefulSet to deploy the pods? A StatefulSet gives each pod its own PersistentVolume, so if you place a configuration file with different database connection data on each PersistentVolume, each pod can use a different database, because each pod refers to a different config file.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: "app"
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: example.com/app:1.0
        ports:
        - containerPort: 8080
          name: web
        volumeMounts:
        - name: databaseconfig
          mountPath: /usr/local/databaseconfig
  volumeClaimTemplates:
  - metadata:
      name: databaseconfig
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Mi
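For the pods to actually pick up their per-volume configuration, the application still has to read it from the mounted path. Assuming a Spring Boot app (an assumption about your stack), one option is to point Spring at that directory from the container spec:
        env:
        - name: SPRING_CONFIG_ADDITIONAL_LOCATION   # assumption: Spring Boot; maps to spring.config.additional-location
          value: /usr/local/databaseconfig/
Each replica then loads whichever properties file you placed on its own PersistentVolume.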