yq replace value in manifest yaml

I have a k8s manifest file for a load balancer (below) and cannot for the life of me get the $ipaddress placeholder replaced with a value; I have only managed to overwrite the whole file, or part of it, or leave the field blank. How can I replace only the $ipaddress, as in the manifest below?
Two examples of what I tried:
yq e '.spec|=select(.loadBalancerIP) .ports.port = "172.16.87.98"' manifest.yaml
yq e -i '(.spec|=select(.loadBalancerIP.$ipaddress) = "172.16.87.98"' manifest.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-cluster
spec:
  loadBalancerIP: $ipaddress
  ports:
  - name: ssl
    port: 8080
  selector:
    role: webserver
  sessionAffinity: None
  type: LoadBalancer

If the YAML is as simple as in your question, you can use:
yq e -i '.spec.loadBalancerIP = "172.16.87.98"' manifest.yaml
...to update manifest.yaml and set .loadBalancerIP inside .spec to "172.16.87.98".
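If you want to double-check the change, reading the value back is a quick sanity test; a minimal sketch using the same yq v4 syntax as above:
yq e '.spec.loadBalancerIP' manifest.yaml
# prints: 172.16.87.98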

I know it's late, but this can help if you want to pass the value from an environment variable:
export LB_IP=1.1.1.1
yq e -i '.spec.loadBalancerIP = env(LB_IP)' manifest.yaml

Related

How to change a pod name

I'm very new to k8s and the related stuff, so this may be a stupid question: how do I change the pod name?
I'm aware the pod name seems to be set in the Helm chart; in my values.yaml I have this:
...
hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application
        svcPort: 80
        path: /*
...
Since the application is running in both the prod and staging environments, and the pod name is just something like application-695496ec7d-94ct9, I can't tell which pod is for prod or staging, and I can't tell whether a request came from prod or not. So I changed it to:
hosts:
  - host: staging.application.com
    paths:
      ...
      - fullName: application-staging
        svcPort: 80
        path: /*
I deployed it to staging, and the pod was updated/recreated automatically, but the pod name still remains the same. I was confused by that and don't know what is missing. I'm not sure whether it is related to fullnameOverride, but it's empty, so it should be fine.
...the pod name still remains the same
The code snippet in your question is most likely the Helm values for an Ingress; in that case it is not related to the Deployment or the Pod.
Look into the Helm template that defines the Deployment spec for the pod, search for the name, and see which Helm value is assigned to it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox  # <-- change this and you will see the pod name change along with it; the Helm syntax surrounding this field tells you how the name is constructed/assigned
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["ash","-c","sleep 3600"]
Save the spec and apply it, then check with kubectl get pods --selector app=busybox. You should see one pod with a busybox name prefix. Now if you open the file, change the name to custom, re-apply, and get again, you will see two pods with different name prefixes. Clean up with kubectl delete deployment busybox custom.
This example shows how the name of the Deployment is used for the pod(s) underneath it. You can paste the Helm template surrounding the name field into your question for further examination if you like.
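As a quick way to inspect this without deploying, you can render the chart locally and look at the generated Deployment name; a sketch, where the chart path and template file name are assumptions about your repo layout:
# Render only the Deployment template and show how metadata.name is built.
helm template application ./chart -s templates/deployment.yaml | grep -A2 'kind: Deployment'
# If the chart uses the common fullname helper, fullnameOverride changes that name:
helm template application ./chart -s templates/deployment.yaml --set fullnameOverride=application-staging | grep -A2 'kind: Deployment'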

How can I use LoadBalancer in BareMetal -KinD?

My question is quite simple and is about the awesome Kubernetes tool called KinD. I am using KinD for my Flask application:
import os
import requests
from flask import Flask
from jaeger_client import Config
from flask_opentracing import FlaskTracing

app = Flask(__name__)
config = Config(
    config={
        'sampler': {'type': 'const', 'param': 1},
        'logging': True,
        'reporter_batch_size': 1,
    },
    service_name="service")
jaeger_tracer = config.initialize_tracer()
tracing = FlaskTracing(jaeger_tracer, True, app)

def get_counter(counter_endpoint):
    counter_response = requests.get(counter_endpoint)
    return counter_response.text

def increase_counter(counter_endpoint):
    counter_response = requests.post(counter_endpoint)
    return counter_response.text

@app.route('/')
def hello_world():
    counter_service = os.environ.get('COUNTER_ENDPOINT', default="https://localhost:5000")
    counter_endpoint = f'{counter_service}/api/counter'
    counter = get_counter(counter_endpoint)
    increase_counter(counter_endpoint)
    return f"""Hello, World!
You're visitor number {counter} in here!\n\n"""
FROM python:3.7-alpine
RUN mkdir /app
RUN apk add --no-cache py3-pip python3 && \
pip3 install flask Flask-Opentracing jaeger-client
WORKDIR /app
ADD ./app /app/
ADD ./requirements.txt /app/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/app/main.py"]
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask-deployment
spec:
  selector:
    matchLabels:
      app: my-flask-pod
  replicas: 2
  template:
    metadata:
      labels:
        app: my-flask-pod
    spec:
      containers:
      - name: my-flask-container
        image: yusufkaratoprak/awsflaskeks:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 5000
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-flask-service
spec:
  selector:
    app: my-flask-pod
  ports:
  - port: 6000
    targetPort: 5000
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10
Configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.42.42.100-172.42.42.105   # Update this with your nodes' IP range
Results:
Everything looks good, but when I open http://172.42.42.101:6000/ in my browser the page does not load (screenshot omitted).
I also checked the Service events, and everything looks good there as well (output omitted).
Also, following @Kaan Mersin's advice below, I added 80 as the default port.
Actually, for a bare-metal cluster you should use something other than a LoadBalancer to expose your application, such as NodePort, Ingress, and so on, because the LoadBalancer service type is only provisioned automatically by cloud providers like Google, Azure, AWS, etc.
Check the official documentation for LoadBalancer.
The following assumes you use NodePort instead of LoadBalancer; the NodePort service type also load-balances traffic between pods, if the balancing feature is all you are looking for.
You need to use http://172.42.42.101:30160 from outside of the host that is running the containers. Port 6000 is only reachable inside the cluster via the internal IP (10.96.244.204 in your case). Whenever you expose your Deployment, one of the NodePorts (by default between 30000 and 32767) is automatically assigned to the Service (you can also define it manually) for external requests.
For the Service details, run the command below; the output will show the NodePort and other details.
kubectl describe services my-service
Please check the related Kubernetes documentation.
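If you only want the assigned NodePort rather than the full describe output, a small sketch using the service name from your service.yaml:
# Prints the NodePort Kubernetes assigned to the first port entry, e.g. 30160
kubectl get svc my-flask-service -o jsonpath='{.spec.ports[0].nodePort}'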
The error Chrome is throwing is ERR_UNSAFE_PORT. Change your Service port to 80 and then hit http://172.42.42.101/. Alternatively you can choose any other port you like so long as Chrome doesn't consider it an unsafe port. See this answer on SuperUser for a list of unsafe ports.
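A sketch of that change, reusing yq from the first question to edit the service.yaml shown above and re-apply it (editing the file by hand works just as well):
# Move the Service's external port from 6000 (an unsafe port for Chrome) to 80.
yq e -i '.spec.ports[0].port = 80' service.yaml
kubectl apply -f service.yaml
After re-applying, http://172.42.42.101/ should no longer trigger the unsafe-port check, and targetPort 5000 stays the same.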

Set values in a knative service.yaml file using environment variables

Is there a way to set the values of some keys in a Knative service.yaml file using environment variables?
More detail
I am trying to deploy a Knative service to a Kubernetes cluster using GitLab CI. Some of the variables in my service.yaml file depend on the project and environment of the GitLab CI pipeline. Is there a way I can seamlessly plug those values into my service.yaml file without resorting to hacks like sed -i ...?
For example, given the following manifest, I want the $(KUBE_NAMESPACE), $(CI_ENVIRONMENT_SLUG), and $(CI_PROJECT_PATH_SLUG) values to be replaced by the correspondingly named environment variables.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: design
  namespace: "$(KUBE_NAMESPACE)"
spec:
  template:
    metadata:
      name: design-v1
      annotations:
        app.gitlab.com/env: "$(CI_ENVIRONMENT_SLUG)"
        app.gitlab.com/app: "$(CI_PROJECT_PATH_SLUG)"
    spec:
      containers:
        - name: user-container
          image: ...
      timeoutSeconds: 600
      containerConcurrency: 8
I don't think there is a great way to expand environment variables inside of an existing yaml, but if you don't want to use sed, you might be able to use envsubst:
envsubst < original.yaml > modified.yaml
You would just run this command before you use the yaml to expand the environment variables contained within it.
Also I think you'll need your variables to use curly braces, instead of parentheses, like this: ${KUBE_NAMESPACE}.
EDIT: You might also be able to use this inline like this: kubectl apply -f <(envsubst < service.yaml)
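One caveat: by default envsubst expands every $VARIABLE it finds in the file. GNU envsubst accepts an optional list of variables, so only your intended placeholders get replaced; a sketch:
# Only these three variables are substituted; any other $... in the file is left alone.
envsubst '${KUBE_NAMESPACE} ${CI_ENVIRONMENT_SLUG} ${CI_PROJECT_PATH_SLUG}' \
  < service.yaml | kubectl apply -f -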
More than a Knative issue, this is a Kubernetes limitation: Kubernetes allows some expansion, but not in annotations or namespace definitions. For example, you can do it in container env definitions:
containers:
- env:
  - name: PODID
    valueFrom: ...
  - name: LOG_PATH
    value: /var/log/$(PODID)
If this is a CI/CD system like GitLab, the environment variables should already be in the shell environment, so a simple shell expansion will do. For example:
#!/bin/bash
echo -e "
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: design
  namespace: \"${KUBE_NAMESPACE}\"
spec:
  template:
    metadata:
      name: design-v1
      annotations:
        app.gitlab.com/env: \"${CI_ENVIRONMENT_SLUG}\"
        app.gitlab.com/app: \"${CI_PROJECT_PATH_SLUG}\"
    spec:
      containers:
        - name: user-container
          image: ...
      timeoutSeconds: 600
      containerConcurrency: 8
" | kubectl apply -f -
You can also use envsubst as a helper like mentioned in the other answer.

Error: UPGRADE FAILED: failed to replace object: Service "api" is invalid: spec.clusterIP: Invalid value: "": field is immutable

When doing helm upgrade ... --force I'm getting the error below:
Error: UPGRADE FAILED: failed to replace object: Service "api" is invalid: spec.clusterIP: Invalid value: "": field is immutable
And this is what my service file looks like (I'm not passing clusterIP anywhere):
apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}
  namespace: {{ .Release.Namespace }}
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  labels:
    app: {{ .Chart.Name }}-service
    kubernetes.io/name: {{ .Chart.Name | quote }}
    dns: route53
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
spec:
  selector:
    app: {{ .Chart.Name }}
  type: LoadBalancer
  ports:
  - port: 443
    name: https
    targetPort: http-port
    protocol: TCP
Helm Version: 3.0.1
Kubectl Version: 1.13.1 [Tried with the 1.17.1 as well]
Server: 1.14
Note: Previously I was using older versions (of the server, kubectl, and Helm), and at that time I did not face this kind of issue.
I can see lots of similar issues on GitHub regarding this, but I have been unable to find a working solution.
A few of the similar issues:
https://github.com/kubernetes/kubernetes/issues/25241
https://github.com/helm/charts/pull/13646 [For Nginx chart]
I've made some tests with Helm and got the same issue when trying to change the Service type from NodePort/ClusterIP to LoadBalancer.
This is how I've reproduced your issue:
Kubernetes 1.15.3 (GKE)
Helm 3.1.1
Helm chart used for test: stable/nginx-ingress
How I reproduced:
Get and decompress the file:
helm fetch stable/nginx-ingress
tar xzvf nginx-ingress-1.33.0.tgz
Modify service type from type: LoadBalancer to type: NodePort in the values.yaml file (line 271):
sed -i '271s/LoadBalancer/NodePort/' values.yaml
Install the chart:
helm install nginx-ingress ./
Check service type, must be NodePort:
kubectl get svc -l app=nginx-ingress,component=controller
NAME                       TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress-controller   NodePort   10.0.3.137   <none>        80:30117/TCP,443:30003/TCP   1m
Now modify the Service type again to LoadBalancer in the values.yaml:
sed -i '271s/NodePort/LoadBalancer/' values.yaml
Finally, try to upgrade the chart using --force flag:
helm upgrade nginx-ingress ./ --force
And then:
Error: UPGRADE FAILED: failed to replace object: Service "nginx-ingress-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Explanation
Digging around, I found this in the Helm source code:
// if --force is applied, attempt to replace the existing resource with the new object.
if force {
    obj, err = helper.Replace(target.Namespace, target.Name, true, target.Object)
    if err != nil {
        return errors.Wrap(err, "failed to replace object")
    }
    c.Log("Replaced %q with kind %s for kind %s\n", target.Name, currentObj.GetObjectKind().GroupVersionKind().Kind, kind)
} else {
    // send patch to server
    obj, err = helper.Patch(target.Namespace, target.Name, patchType, patch, nil)
    if err != nil {
        return errors.Wrapf(err, "cannot patch %q with kind %s", target.Name, kind)
    }
}
Analyzing the code above: when the --force flag is set, Helm uses an API request similar to kubectl replace (instead of kubectl replace --force, as we might expect).
If not, Helm uses a patch API request to make the upgrade.
Let's check whether that makes sense:
PoC using kubectl
Create a simple service as NodePort:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-svc
  name: test-svc
spec:
  selector:
    app: test-app
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
EOF
Make sure the service was created:
kubectl get svc -l app=test-svc
NAME       TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
test-svc   NodePort   10.0.7.37    <none>        80:31523/TCP   25
Now let's try to use kubectl replace to upgrade the service to LoadBalancer, like helm upgrade --force does:
kubectl replace -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-svc
  name: test-svc
spec:
  selector:
    app: test-app
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: LoadBalancer
EOF
This shows the error:
The Service "test-svc" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Now, let's use kubectl patch to change NodePort to LoadBalancer, simulating the helm upgrade command without the --force flag.
Here is the kubectl patch documentation, if you want to see how to use it.
kubectl patch svc test-svc -p '{"spec":{"type":"LoadBalancer"}}'
Then you see:
service/test-svc patched
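To confirm the patch took effect, checking the Service again should show the new type (a quick sketch; EXTERNAL-IP may stay pending until a load balancer is provisioned):
kubectl get svc test-svc
# TYPE should now read LoadBalancer instead of NodePort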
Workaround
You should use helm upgrade without --force; it will work.
If you really need --force to recreate some resources (for example, pods, so they pick up the latest ConfigMap update), then I suggest you first manually change the service spec before the Helm upgrade.
If you are trying to change the service type, you can do it by exporting the service YAML, changing the type, and applying it again (I experienced this behavior only when I tried to apply the same template as the first time):
kubectl get svc test-svc -o yaml | sed 's/NodePort/LoadBalancer/g' | kubectl replace --force -f -
The Output:
service "test-svc" deleted
service/test-svc replaced
Now, if you use helm upgrade --force and there is no change to make to the service, it will work and will recreate your pods and other resources.
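Putting the two steps together, a sketch of the sequence described above, reusing the kubectl patch call from the PoC (the service and release names match the nginx-ingress example):
# 1. Change the Service type on the live object first...
kubectl patch svc nginx-ingress-controller -p '{"spec":{"type":"LoadBalancer"}}'
# 2. ...then the forced upgrade no longer needs to change the Service and succeeds.
helm upgrade nginx-ingress ./ --force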
I hope that helps you!

Kubectl apply command for updating existing service resource

Currently I'm using Kubernetes version 1.11+. Previously I was always using the following step in my Cloud Build scripts:
- name: 'gcr.io/cloud-builders/kubectl'
  id: 'deploy'
  args:
  - 'apply'
  - '-f'
  - 'k8s'
  - '--recursive'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_REGION}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
And the commands were working as expected; at that time I was using k8s version 1.10+. However, I recently started getting the following error:
spec.clusterIP: Invalid value: "": field is immutable
metadata.resourceVersion: Invalid value: "": must be specified for an update
So I'm wondering if this is an expected behavior for Service resources?
Here's my YAML config for my service:
apiVersion: v1
kind: Service
metadata:
  name: {name}
  namespace: {namespace}
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "{backend-config-name}"}'
spec:
  ports:
  - port: {port-num}
    targetPort: {port-num}
  selector:
    app: {label}
    environment: {env}
  type: NodePort
This is due to https://github.com/kubernetes/kubernetes/issues/71042
https://github.com/kubernetes/kubernetes/pull/66602 should be cherry-picked into 1.11.
I sometimes get this error when manually running kubectl apply -f somefile.yaml.
I think it happens when someone has changed the specification through the Kubernetes Dashboard instead of applying new changes through kubectl apply.
To fix it, I run kubectl edit services/servicename, which opens the YAML specification in my default editor. Then I remove the fields metadata.resourceVersion and spec.clusterIP, save, and run kubectl apply -f somefile.yaml again.
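If you want to see those two fields before editing, a small sketch (the service name is a placeholder):
# Show the live resourceVersion and clusterIP that the kubectl edit step removes.
kubectl get services/servicename -o jsonpath='{.metadata.resourceVersion} {.spec.clusterIP}{"\n"}'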
You need to set spec.clusterIP in your service YAML file, with the value replaced by the clusterIP address of the existing service, as shown below:
spec:
  clusterIP:
Your issue is discussed in the following GitHub issue, where there is also a workaround to help you bypass it.
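A sketch of that suggestion, fetching the clusterIP from the live Service and writing it into the manifest before applying; the service name and file path are placeholders, and yq v4 is assumed:
CLUSTER_IP=$(kubectl get svc my-service -o jsonpath='{.spec.clusterIP}')
# Pin the immutable field to its current value so kubectl apply no longer complains.
yq e -i ".spec.clusterIP = \"${CLUSTER_IP}\"" k8s/service.yaml
kubectl apply -f k8s/service.yaml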