consider the following kubernetes manifest (mymanifest.yml) :
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - image: nginx
    name: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
spec:
  containers:
  - image: nginx
    name: nginx
If I run kubectl apply -f mymanifest.yml, both pods are deployed.
I remember someone told me that it is possible to deploy only one of the pods, with something like:
kubectl apply -f mymanifest.yml secondpod
But it doesn't work.
Is there a way to do it?
Thx in advance
You can add labels to the pods
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
  labels:
    app: firstpod
spec:
  containers:
  - image: nginx
    name: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
  labels:
    app: secondpod
spec:
  containers:
  - image: nginx
    name: nginx
Then use a specific label to filter while applying the YAML. The label selector supports =, ==, and !=:
kubectl apply -f mymanifest.yml -l app=secondpod
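For example, with the labels above you could also deploy everything except firstpod using a negated selector (a quick sketch; the quotes keep the shell from interpreting the !):
kubectl apply -f mymanifest.yml -l 'app!=firstpod'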
You can also use --prune, which is an alpha feature:
# Apply the configuration in mymanifest.yml that matches label app=secondpod and delete
# all the other resources that are not in the file and match label app=secondpod.
kubectl apply --prune -f mymanifest.yml -l app=secondpod
I would like to set the name field in a Namespace resource and also replace the namespace field in a Deployment resource with the same value, for example my-namespace.
Here is kustomization.yaml:
namespace: <NAMESPACE>
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /metadata/name
      value: <NAMESPACE>
  target:
    kind: Namespace
    name: system
    version: v1
resources:
- manager.yaml
and manager.yaml:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
I tried using kustomize edit set namespace my-namespace && kustomize build, but it only changes the namespace field in the Deployment object.
Is there a way to change both fields without using sed, in 'pure' kustomize, and without having to manually change the value in kustomization.yaml?
Is there a way to change both fields without using sed, in 'pure' kustomize, and without having to manually change the value in kustomization.yaml?
I managed to achieve something similar with the following configuration:
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deployment.yaml
deployment.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
And here is the output of the command that you used:
➜ kustomize kustomize edit set namespace my-namespace7 && kustomize build .
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace7
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: my-namespace7
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
What is happening here is that once you set the namespace globally in kustomization.yaml, kustomize applies it to all of your targets, which looks to me like an easier way to achieve what you want.
I cannot test your config without the manager_patch.yaml content. If you wish to go further with your approach, you will have to update the question with the file content.
I'm new to Kubernetes, and I'm trying to create a deployment with a ConfigMap file. I have the following:
app-mydeploy.yaml
--------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-mydeploy
  labels:
    app: app-mydeploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
      - name: mydeploy-1
        image: mydeploy:tag-latest
        envFrom:
        - configMapRef:
            name: map-mydeploy
map-mydeploy
-----
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
  namespace: default
data:
  my_var: 10.240.12.1
I created the config and the deploy with the following commands:
kubectl create -f app-mydeploy.yaml
kubectl create configmap map-mydeploy --from-file=map-mydeploy
When I do kubectl describe deployments, I get, among the rest:
Environment Variables from:
map-mydeploy ConfigMap Optional: false
Also, kubectl describe configmaps map-mydeploy gives me the right results.
The issue is that my container is in CrashLoopBackOff; when I look at the logs, it says: time="2019-02-05T14:47:53Z" level=fatal msg="Required environment variable my_var is not set.
This log is from my container, which says that my_var is not defined in the env vars.
What am I doing wrong?
I think you are missing the key in the command:
kubectl create configmap map-mydeploy --from-file=map-mydeploy
Try this instead: kubectl create configmap map-mydeploy --from-file=my_var=map-mydeploy
Also, I highly recommend that if you are just using one value, you create your ConfigMap from a literal, kubectl create configmap my-config --from-literal=my_var=10.240.12.1, and then reference the ConfigMap in your deployment as you are currently doing.
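To see why the original command fails: --from-file without an explicit key uses the file name as the key, so the resulting object looks roughly like this (a sketch, assuming the map-mydeploy file contains the manifest shown above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
data:
  map-mydeploy: |
    apiVersion: v1
    kind: ConfigMap
    ...
Since the only key is map-mydeploy rather than my_var, envFrom never sets my_var and the container exits.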
The command:
kubectl apply view-last-applied -f object.yml
displays the latest applied configuration file of an object.
Does a command exist that gives the entire 'applied' history of a given object?
For example, given the created configuration (using kubectl create -f pod.spec --save-config):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
and the applied configurations (using kubectl apply -f pod.spec):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9
revision 2:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9.1
the command should give:
$ kubectl apply log -f pod.spec
applied <later date>:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9.1
applied <earlier date>:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.9
No, only the latest applied configuration is persisted in the object.
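It lives in the kubectl.kubernetes.io/last-applied-configuration annotation on the object, so the most you can read back is that single revision, e.g.:
kubectl get pod nginx-pod -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'
which is roughly what kubectl apply view-last-applied prints. For a full history you would have to keep it yourself (version control, audit logs, or, for Deployments, kubectl rollout history).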
In Kubernetes 1.8, when I create a deployment, for example:
apiVersion: apps/v1beta2 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then when I do a
kubectl get deploy nginx-deployment -o yaml
I get:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-01-24T01:01:01Z
  ....
Why is the apiVersion extensions/v1beta1 instead of apps/v1beta2?
When you create a deployment, the apiserver persists it and is capable of converting the persisted deployment into any supported version.
kubectl get deployments actually requests the extensions/v1beta1 version (you can see this by adding --v=6)
To get apps/v1beta2 deployments, do kubectl get deployments.v1beta2.apps
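For example (a sketch, against a 1.8 cluster that serves apps/v1beta2):
kubectl get deployments.v1beta2.apps nginx-deployment -o yaml | head -n 2
# apiVersion: apps/v1beta2
# kind: Deployment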
You might be using an old version of kubectl.
If so, please upgrade your kubectl to 1.8, then create the deployment again.
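You can check the client and server versions with:
kubectl version
As a rule of thumb, kubectl is supported within one minor version (older or newer) of the API server.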
My understanding of this doc page is that I can configure service accounts for Pods, and hopefully also Deployments, so that I can access the k8s API in Kubernetes 1.6+. In order not to alter or use the default one, I want to create a service account and mount a certificate into the pods of a deployment.
How do I achieve something similar to this example, but for a deployment?
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
As you will need to specify 'podSpec' in Deployment as well, you should be able to configure the service account in the same way. Something like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    # Below is the podSpec.
    metadata:
      name: ...
    spec:
      serviceAccountName: build-robot
      automountServiceAccountToken: false
      ...
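Note that the build-robot ServiceAccount has to exist in the same namespace before the pods can use it; if it doesn't yet, a minimal sketch:
kubectl create serviceaccount build-robot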
Kubernetes nginx-deployment.yaml where serviceAccountName: test-sa is used as a non-default service account.
Link: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
test-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test-ns
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: test-ns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: test-sa
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
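To try it out, something like the following should work (the file names match the ones above; test-ns is created first since both objects live in it):
kubectl create namespace test-ns
kubectl apply -f test-sa.yaml -f nginx-deployment.yaml
# verify the pod runs under the non-default service account
kubectl -n test-ns get pods -o jsonpath='{.items[0].spec.serviceAccountName}'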