yq - How to keep only certain keys in a (nested) object?

I have a bunch of Kubernetes resources (i.e. a lot of YAML files), and I would like to get a result containing only certain paths.
My current brute-force approach looks like this:
cat my-list-of-deployments | yq eval '
  select(.kind == "Deployment")
  | del(.metadata.labels, .spec.replicas, .spec.selector, .spec.strategy, .spec.template.metadata)
  | del(.spec.template.spec.containers.[0].env, .spec.template.spec.containers.[0].image)
' -
Of course this is super inefficient.
Under the path .spec.template.spec.containers.[0] I would ideally like to delete everything except .spec.template.spec.containers.[*].image and .spec.template.spec.containers.[*].resources (where "*" means: keep all array elements).
I tried something like
del(.spec.template.spec.containers.[0] | select(. != "name"))
But this did not work for me. How can I make this better?
Example input:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  template:
    spec:
      containers:
        - image: app-one:0.2.0
          name: app-one
          ports:
            - containerPort: 80
              name: http
          resources:
            limits:
              cpu: 50m
              memory: 512Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
spec:
  template:
    spec:
      containers:
        - image: redis:3.2-alpine
          livenessProbe:
            exec:
              command:
                - redis-cli
                - info
                - server
            periodSeconds: 20
          name: app-two
          readinessProbe:
            exec:
              command:
                - redis-cli
                - ping
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
          startupProbe:
            periodSeconds: 2
            tcpSocket:
              port: 6379
Desired output:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  template:
    spec:
      containers:
        - name: app-one
          resources:
            limits:
              cpu: 50m
              memory: 512Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
spec:
  template:
    spec:
      containers:
        - name: app-two
          resources:
            limits:
              cpu: 100m
              memory: 128Mi

The key is to use the with_entries function on each element of the .containers array to keep only the required fields (name, resources), and the |= update operator to write the modified result back:
yq eval '
  select(.kind == "Deployment").spec.template.spec.containers[] |=
    with_entries( select(.key == "name" or .key == "resources") )
' yaml
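The same pattern extends to other maps if you also want to trim them. As a sketch (assuming yq v4, reading the file from the question), this additionally reduces .metadata to just its name, matching the desired output above:

yq eval '
  select(.kind == "Deployment")
  | .metadata |= with_entries(select(.key == "name"))
  | .spec.template.spec.containers[] |= with_entries(select(.key == "name" or .key == "resources"))
' my-list-of-deployments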

Related

Nexus on k3s on restart does not persist Users and data

I have installed Nexus on a K3s Raspberry Pi cluster with the following setup, for Kubernetes learning purposes. First I created a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nexus
  namespace: dev-ops
spec:
  serviceName: "nexus"
  replicas: 1
  selector:
    matchLabels:
      app: nexus-server
  template:
    metadata:
      labels:
        app: nexus-server
    spec:
      containers:
        - name: nexus
          image: klo2k/nexus3:latest
          env:
            - name: MAX_HEAP
              value: "800m"
            - name: MIN_HEAP
              value: "300m"
          resources:
            limits:
              memory: "4Gi"
              cpu: "1000m"
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: nexusstorage
              mountPath: /sonatype-work
      volumes:
        - name: nexusstorage
          persistentVolumeClaim:
            claimName: nexusstorage
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexusstorage
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fsType: "ext4"
  diskSelector: "ssd"
  nodeSelector: "ssd"
pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexusstorage
  namespace: dev-ops
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nexusstorage
  resources:
    requests:
      storage: 50Gi
Service
apiVersion: v1
kind: Service
metadata:
  name: nexus-server
  namespace: dev-ops
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8081'
spec:
  selector:
    app: nexus-server
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      nodePort: 32000
This setup will spin up Nexus, but if I restart the pod, the data does not persist and I have to recreate all the settings and users from scratch.
What am I missing in this case?
UPDATE
I got it working: Nexus needs the right permissions on the mounted directory. The working StatefulSet looks as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nexus
  namespace: dev-ops
spec:
  serviceName: "nexus"
  replicas: 1
  selector:
    matchLabels:
      app: nexus-server
  template:
    metadata:
      labels:
        app: nexus-server
    spec:
      securityContext:
        runAsUser: 200
        runAsGroup: 200
        fsGroup: 200
      containers:
        - name: nexus
          image: klo2k/nexus3:latest
          env:
            - name: MAX_HEAP
              value: "800m"
            - name: MIN_HEAP
              value: "300m"
          resources:
            limits:
              memory: "4Gi"
              cpu: "1000m"
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: nexus-storage
              mountPath: /nexus-data
      volumes:
        - name: nexus-storage
          persistentVolumeClaim:
            claimName: nexus-storage
The important snippet to get it working:
securityContext:
  runAsUser: 200
  runAsGroup: 200
  fsGroup: 200
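A quick way to verify the permissions took effect is to check the ownership of the data directory inside the pod (the pod name nexus-0 is an assumption, based on the single-replica StatefulSet above):

# should show uid/gid 200:200 on the mounted data directory
kubectl exec -it nexus-0 -n dev-ops -- ls -ldn /nexus-data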
I'm not familiar with that image, but checking Docker Hub, they mention using a Dockerfile similar to Sonatype's. So I would change the mount point for your volume to /nexus-data.
That is the default path for storing data (they set this env var, then declare a VOLUME), which we can confirm by looking at the repository that most likely produced your ARM-capable image.
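If you have Docker available locally, one way to double-check where the image expects its data (a sketch, assuming the image declares the same VOLUME and env var as the upstream Sonatype Dockerfile):

# list declared volumes and environment variables of the image
docker image inspect klo2k/nexus3:latest --format '{{json .Config.Volumes}}'
docker image inspect klo2k/nexus3:latest --format '{{json .Config.Env}}'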
And following up on your last comment, let's try to also mount it in /opt/sonatype/sonatype-work/nexus3...
In your StatefulSet, change volumeMounts to this:
volumeMounts:
  - name: nexusstorage
    mountPath: /nexus-data
  - name: nexusstorage
    mountPath: /opt/sonatype/sonatype-work/nexus3
volumes:
  - name: nexusstorage
    persistentVolumeClaim:
      claimName: nexusstorage
Although the second volumeMount entry should not be necessary, as far as I understand. Maybe something's wrong with your storage provider?
Are you sure your PVC is writable? Reverting back to your initial configuration, enter your pod (kubectl exec -it) and try to write a file at the root of your PVC.
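For example, something along these lines should fail if the volume is not writable (pod, namespace and mount path are assumptions taken from the manifests above):

# try to create a file at the root of the mounted PVC
kubectl exec -it nexus-0 -n dev-ops -- touch /sonatype-work/write-test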

Configuring yaml file

I'm learning k8s and found an example in the MS docs. The problem I'm having is that I want to switch which GitHub repo is being used, but I haven't been able to figure out the path within this YAML example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: azure-vote-back
          image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
          env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 6379
              name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
    - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: azure-vote-front
          image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
          env:
            - name: REDIS
              value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: azure-vote-front
This YAML example doesn't have a GitHub repo field at all. That's why you can't find a path.
If you're trying to change the container image source, it has to come from a container registry (or your own filesystem), which is specified at
containers:
  - image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
where mcr.microsoft.com is the container registry.
You won't be able to connect this directly to a GitHub repository, but any container registry will work, and GitHub has one at https://ghcr.io (that link itself will redirect you back to GitHub).
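For example, if you pushed your own build of the front-end image to GitHub's registry, the image reference would change to something like this (the owner and tag are placeholders, not values from the example):

containers:
  - name: azure-vote-front
    # pull from GitHub Container Registry instead of mcr.microsoft.com
    image: ghcr.io/<your-github-user>/azure-vote-front:v1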

Kubernetes: Error converting YAML to JSON: yaml: line 12: did not find expected key

I'm trying to add ciao to my single-node Kubernetes cluster, and every time I run the kubectl apply -f command, I keep running into the error "error converting YAML to JSON: yaml: line 12: did not find expected key". I looked at other solutions but they were no help. Any help will be appreciated.
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: Secret
metadata:
  name: ciao
  namespace: monitoring
data:
  BASIC_AUTH_USERNAME: YWRtaW4=
  BASIC_AUTH_PASSWORD: cGFzc3dvcmQ=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ciao
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      selector:
        labels:
          app: ciao
    spec:
      containers:
        - image: brotandgames/ciao:latest
          imagePullPolicy: IfNotPresent
          name: ciao
          volumeMounts: # Emit if you do not have persistent volumes
            - mountPath: /app/db/sqlite/
              name: persistent-volume
              subPath: ciao
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: 256Mi
              cpu: 200m
            limits:
              memory: 512Mi
              cpu: 400m
          envFrom:
            - secretRef:
                name: ciao
---
apiVersion: v1
kind: Service
metadata:
  name: ciao
  namespace: monitoring
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  type: NodePort
  selector:
    app: ciao
Looks like there's an indentation problem in your Deployment definition. This should work:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ciao
  namespace: monitoring
  labels:
    app: ciao
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ciao
  template:
    metadata:
      labels:
        app: ciao
    spec:
      containers:
        - image: brotandgames/ciao:latest
          imagePullPolicy: IfNotPresent
          name: ciao
          volumeMounts: # Emit if you do not have persistent volumes
            - mountPath: /app/db/sqlite/
              name: persistent-volume
              subPath: ciao
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: 256Mi
              cpu: 200m
            limits:
              memory: 512Mi
              cpu: 400m
          envFrom:
            - secretRef:
                name: ciao
Keep in mind that in this definition the PV persistent-volume needs to exist in your cluster/namespace.
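For completeness, a rough sketch of what the missing pieces could look like; the volumes block, claim name and size below are assumptions, not part of the original manifest, so adjust them to your storage setup:

      # add alongside the containers block in the pod spec
      volumes:
        - name: persistent-volume
          persistentVolumeClaim:
            claimName: ciao-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ciao-data
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi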

Kubernetes Error With Deployment YAML File

I have the following file using which I'm setting up Prometheus on my Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: plant-simulator-monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      name: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
When I apply this to my Kubernetes cluster, I see the following error:
ts=2020-03-16T21:40:33.123641578Z caller=sync.go:165 component=daemon err="plant-simulator-monitoring:deployment/prometheus-deployment: running kubectl: The Deployment \"prometheus-deployment\" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{\"app\":\"prometheus-server\"}: `selector` does not match template `labels`"
I could not see anything wrong with my yaml file. Is there something that I'm missing?
As I mentioned in the comments, you have an issue with matching labels.
In spec.selector.matchLabels you have name: prometheus-server and in spec.template.metadata.labels you have app: prometheus-server. The values there need to be the same. Below is what I get when I used your YAML:
$ kubectl apply -f deploymentoriginal.yaml
The Deployment "prometheus-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"prometheus-server"}: `selector` does not match template `labels`
And the output when I used the below YAML, with matching labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: plant-simulator-monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      name: prometheus-server
  template:
    metadata:
      labels:
        name: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
$ kubectl apply -f deploymentselectors.yaml
deployment.apps/prometheus-deployment created
More detailed info about selectors/labels can be found in the official Kubernetes docs.
There is a mismatch between the label in the selector (name: prometheus-server) and in the template metadata (app: prometheus-server). The below should work:
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
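If you want to confirm the fix before touching the cluster, a server-side dry run (available on reasonably recent kubectl versions) should reproduce the same validation error while the labels still mismatch, and succeed once they agree:

# validate against the API server without creating anything
kubectl apply -f deployment.yaml --dry-run=server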

How to read & modify Kube Manifest values with yq?

I have a Kube manifest that needs to be applied to a couple of Kubernetes clusters with different resource settings. For that I need to change the resources section of this file on the fly. Here are its contents:
apiVersion: v1
kind: Service
metadata:
  name: abc-api
  labels:
    app: abc-api
spec:
  ports:
    - name: http
      port: 80
      targetPort: 3000
    - name: https
      port: 3000
      targetPort: 3000
  selector:
    app: abc-api
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc-api
  labels:
    app: abc-api
spec:
  selector:
    matchLabels:
      app: abc-api
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: abc-api
        tier: frontend
    spec:
      containers:
        - image: ABC_IMAGE
          resources:
            requests:
              memory: "128Mi"
              cpu: .30
            limits:
              memory: "512Mi"
              cpu: .99
I searched and found that yq is a good tool for this. However, when I read values from this file, it only shows them up to the line with the three dashes: no values past that.
# yq r worker/deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: hometales-api
  labels:
    app: hometales-api
spec:
  ports:
    - name: http
      port: 80
      targetPort: 3000
    - name: https
      port: 3000
      targetPort: 3000
  selector:
    app: hometales-api
    tier: frontend
I want to read the Deployment section, as well as edit the resource values.
Section to read:
---
apiVersion: apps/v1
kind: Deployment
metadata:
....
Section to edit:
resources:
  requests:
    memory: "128Mi"
    cpu: .20
  limits:
    memory: "512Mi"
    cpu: .99
So the first part of the question: how do I read past the second instance of the three dashes?
Second part: how do I edit the resource values on the fly?
I'm able to run this command and read this section, but I haven't gotten further than reading the memory and cpu values:
# yq r -d1 deployment.yaml "spec.template.spec.containers[0].resources.requests"
memory: "128Mi"
cpu: .20
Use the -d CLI option. See https://mikefarah.gitbook.io/yq/commands/write-update#multiple-documents for more details.
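For writing, yq v3 (the yq r / yq w syntax used above) accepts the same -d flag, so an in-place edit of the second document could look roughly like this (the path and value are only examples, not from the question):

# update the memory request in document index 1 (the Deployment), editing the file in place
yq w -i -d1 deployment.yaml 'spec.template.spec.containers[0].resources.requests.memory' 256Mi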
Also, Kubernetes has its own mechanism for this in kubectl patch.
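For instance, a JSON patch against the live Deployment could look like this (the Deployment name and the value are assumptions based on the manifest above):

# change the cpu/memory request of the first container on a deployed object
kubectl patch deployment abc-api --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/memory", "value": "256Mi"}]'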