I am using GKE and JFrog Artifactory. I am building an image with a tag like
cicd-docker-local.jfrog.io/stage_proj:50d3afd0
If I look in Artifactory I can see the image at https://cicd.jfrog.io/cicd/webapp/, which is right. But GKE is not able to recognise the image and throws an error like
couldn't parse image reference "'cicd-docker-local.jfrog.io/stage_proj:50d3afd0'": invalid reference format: InvalidImageName
But my image exists. Is there any problem with my image or its name?
Deployment file portion:
containers:
  -
    image: "<IMAGE_NAME>"
In the yaml file:
- sed -i "s%<IMAGE_NAME>%'${STAGE_CONTAINER_IMAGE}'%g" deployment.yaml
STAGE_CONTAINER_IMAGE = cicd-docker-local.jfrog.io/stage_proj:50d3afd0
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: go
  name: hello-world-go
spec:
  progressDeadlineSeconds: 60
  replicas: 3
  selector:
    matchLabels:
      app: go
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 33%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: go
    spec:
      containers:
        -
          image: "<IMAGE_NAME>"
          # image: cicd-docker-local.jfrog.io/stage_proj: 50d3afd0
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
          name: go
          ports:
            -
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 2
            periodSeconds: 2
If I use the sed command, I get the error in Kubernetes. But if I put cicd-docker-local.jfrog.io/stage_proj: 50d3afd0 into the yaml directly, there is no error. Am I doing the sed command wrongly?
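Looking at the error text, the reference Kubernetes tried to parse is wrapped in literal single quotes ('cicd-docker-local.jfrog.io/stage_proj:50d3afd0'), which points at the quotes added around ${STAGE_CONTAINER_IMAGE} in the sed replacement rather than at the image itself. A minimal sketch of the substitution without those quotes, using the same placeholder and file as above (an illustration only, not a confirmed fix):

sed -i "s%<IMAGE_NAME>%${STAGE_CONTAINER_IMAGE}%g" deployment.yaml
grep "image:" deployment.yaml   # verify the rendered reference has no stray quotes

Since deployment.yaml already quotes the placeholder ("<IMAGE_NAME>"), no extra quoting should be needed in the replacement text.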
Related
So I'm working on my first Helm deployment. I'm working on deploying polr, a URL shortener.
I'm having issues with my first deployment. Absolutely nothing starts up and I'm puzzled about where to go from here.
I'm using commands like:
kubectl describe deployment/polr
helm lint pre-polr
helm install polr pre-polr --dry-run --debug
However, they don't give me any good details, and since there are no pods spinning up, I feel like I'm missing some commands that might help. Could anyone suggest any?
Here are my manifests:
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: polr
  labels:
    app: polr-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: polr-app
  template:
    metadata:
      labels:
        app: polr-app
    spec:
      containers:
        - name: polr
          image: matthewspah/polr
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
        - name: polr-db
          image: bitnami/mysql
          ports:
            - name: mysql
              containerPort: 3306
              protocol: TCP
          readinessProbe:
            tcpSocket:
              port: 3306
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 3306
            initialDelaySeconds: 15
            periodSeconds: 20
Service
apiVersion: v1
kind: Service
metadata:
  name: polr
  labels:
    name: polr-app
spec:
  selector:
    app: polr-app
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: http
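Not part of the original question, but when a Deployment produces no pods at all, a few generic commands usually surface the reason; a sketch, assuming the chart was installed into the default namespace with the labels shown above (<pod-name> is a placeholder):

helm list                                        # confirm the release actually installed
kubectl get deployment,replicaset,pods -l app=polr-app
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe pod <pod-name>                  # scheduling and image-pull problems
kubectl logs <pod-name> -c polr                  # container-level crashes

In particular, kubectl get events often explains why pods were never created, for example a quota or admission failure reported against the ReplicaSet.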
I have a k8s Service which maps to a pod Deployment with 2 replicas and is exposed as a ClusterIP service. I am seeing an issue where, when the 2nd pod gets scheduled to the same node, the readiness probe (an HTTP call to an API on the container port) fails with an "unable to connect" error. Is this due to some port conflict?
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  namespace: demo
  labels:
    app: demo
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/app-configmap.yaml") . | sha256sum }}
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: demo-app-image:1.0.1
          ports:
            - containerPort: 8081
          livenessProbe:
            httpGet:
              path: /healthcheck
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 60
            failureThreshold: 3
            successThreshold: 1
            timeoutSeconds: 15
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3
            successThreshold: 1
            timeoutSeconds: 15
          volumeMounts:
            - name: config-volume
              mountPath: /config/app
      volumes:
        - name: config-volume
          configMap:
            name: demo-configmap
            items:
              - key: config
                path: config.json
      nodeSelector:
        usage: demo-server
Service
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: demo
  labels:
    app: demo-service
spec:
  selector:
    app: demo
  ports:
    - name: admin-port
      protocol: TCP
      port: 26001
      targetPort: 8081
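A quick way to check the port-conflict theory: containerPort is scoped to each pod's own network namespace, so two replicas on one node do not normally clash unless hostPort or hostNetwork is involved, and neither appears in the spec above. A sketch of probing each pod directly, assuming the labels and port from the manifests and an image that ships wget (<pod-name> is a placeholder):

kubectl -n demo get pods -l app=demo -o wide              # see which node each replica landed on
kubectl -n demo exec <pod-name> -- wget -qO- http://localhost:8081/healthcheck
kubectl -n demo describe pod <pod-name>                   # probe failure events with the exact error

If the in-pod request also fails, the problem is the application inside that pod rather than anything node- or port-related.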
I am trying to implement rolling updates of deployments in Kubernetes. I have followed a lot of articles that say there should be zero downtime, but when I run curl continuously, a couple of my requests fail before getting a response back. Below is the deployment file.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: my-image
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
The next thing I did was add
MinReadySeconds: 120
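For reference, a minimal sketch of where that setting sits in the manifest (the actual field name is lowercase, minReadySeconds, directly under the Deployment spec; the value 120 is the one from the question):

spec:
  minReadySeconds: 120
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1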
This takes care of the issue, but it is not an optimal solution, since we want to switch to the next pod as soon as it starts servicing requests and kill the old pod. I have two questions:
Can there be a condition when both pods, the new one and the old one, are running and both start servicing traffic? That would not be ideal either, since we want only one pod to service requests at a time.
Is there any other out-of-the-box solution that Kubernetes provides to do a rolling deployment?
Try this; it should work for you. Try doing an update of your image.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-container
          image: nginx
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
For a better understanding, check this link.
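The manifest above is essentially the question's own spec with nginx as the image, so dropped requests can still occur if the old pod is terminated while it holds in-flight connections. A common, generic mitigation (not part of the answer above, and assuming the image ships a sleep binary) is a short preStop delay so the pod is removed from the Service endpoints before it receives SIGTERM; it would sit at the container level, alongside readinessProbe:

          lifecycle:
            preStop:
              exec:
                command: ["sleep", "10"]   # let endpoint/kube-proxy updates drop this pod first

The pod-level terminationGracePeriodSeconds (default 30) must stay longer than the sleep for this to help.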
I am configuring a StatefulSet where I want the number of replicas (spec.replicas as shown below) to somehow be passed as a parameter into the application instance. My application needs spec.replicas to determine the number of replicas, so it knows which rows to load from a MySQL table. I don't want to hard-code the number of replicas in both spec.replicas and the application parameter, as that will not work when scaling the number of replicas up or down, since the application parameter needs to adjust whenever I scale.
Here is my StatefulSet config:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    run: my-app
  name: my-app
  namespace: my-ns
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-app
  serviceName: my-app
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          command:
            - /bin/sh
            - /bin/start.sh
            - dev
            - 2000m
            - "0"
            - "3"            # <-- needs to be replaced with the number of replicas
            - 127.0.0.1
            - "32990"
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              path: /health
              port: 8081
            initialDelaySeconds: 180
            periodSeconds: 10
            timeoutSeconds: 3
          readinessProbe:
            failureThreshold: 10
            httpGet:
              path: /ready
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 3
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            limits:
              memory: 2500Mi
      imagePullSecrets:
        - name: snapshot-pull
      restartPolicy: Always
I have read the Kubernetes docs and the spec.replicas field is scoped at the pod or container level, never the StatefulSet, at least as far as I have seen.
Thanks in advance.
You could use a yaml anchor to do this:
Check out:
https://helm.sh/docs/chart_template_guide/yaml_techniques/#yaml-anchors
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    run: my-app
  name: my-app
  namespace: my-ns
spec:
  replicas: &numReplicas 3
  selector:
    matchLabels:
      run: my-app
  serviceName: my-app
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          command:
            - /bin/sh
            - /bin/start.sh
            - dev
            - 2000m
            - "0"
            - *numReplicas
            - 127.0.0.1
            - "32990"
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              path: /health
              port: 8081
            initialDelaySeconds: 180
            periodSeconds: 10
            timeoutSeconds: 3
          readinessProbe:
            failureThreshold: 10
            httpGet:
              path: /ready
              port: 8081
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 3
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            limits:
              memory: 2500Mi
      imagePullSecrets:
        - name: snapshot-pull
      restartPolicy: Always
Normally you would use the Downward API for this kind of thing: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
However, it is currently not possible for Kubernetes to propagate Deployment/StatefulSet spec data into the pod spec with the Downward API, nor should it be. If you are responsible for this software, I'd set up some internal functionality so that it can find its peers and determine their count periodically.
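As a rough illustration of that last suggestion (not from the answer, and assuming the governing Service my-app in namespace my-ns from the question is headless and the container image is glibc-based), each pod could count its peers by resolving the Service's DNS records on a schedule:

# run inside a pod of the StatefulSet: each unique A record of the headless Service is one ready peer
getent ahostsv4 my-app.my-ns.svc.cluster.local | awk '{ print $1 }' | sort -u | wc -l

Only Ready pods appear in those records, so the count lags during startup; setting publishNotReadyAddresses: true on the Service includes not-yet-ready peers as well.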