ConfigMaps - Angular - Kubernetes

So I have a ConfigMap called 'continuousdeployment' containing a config.json:
{
  "apiUrl": "http://application.cloudapp.net/",
  "test": "1232"
}
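(For reference, a ConfigMap like this would typically be created straight from the file with something along these lines; the local file name is an assumption:)
kubectl create configmap continuousdeployment --from-file=config.json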
The pod YAML:
apiVersion: v1
kind: Pod
metadata:
  name: continuousdeployment-ui
spec:
  containers:
  - name: continuousdeployment-ui
    image: release.azurecr.io/release.ccf.ui:2758
    volumeMounts:
    - name: config-volume
      mountPath: app/wwwroot/assets/config
  volumes:
  - name: config-volume
    configMap:
      name: continuousdeployment
Without the ConfigMap set, I can see the config file when going to the IP address at ./assets/config/config.json, and I can view it all fine. When I apply the ConfigMap, the only part of the JSON I can then see is:
{
"apiUrl":
So any ideas?
Update - I just amended the config.json to the following:
{
  "test": "1232",
  "apiUrl": "http://application.cloudapp.net/"
}
and in the browser I can see:
{
"test":"1232",
The ConfigMap:
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "continuousdeployment-ccf-ui-config",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/configmaps/continuousdeployment-ccf-ui-config",
    "uid": "94ee863c-42b3-11e9-8872-36a65af8d9e4",
    "resourceVersion": "235988",
    "creationTimestamp": "2019-03-09T21:37:47Z"
  },
  "data": {
    "config.json": "{\n \"apiUrl\": \"http://application.cloudapp.net/\"\n}"
  }
}
I even tried this:
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "continuousdeployment-ccf-ui-config1",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/configmaps/continuousdeployment-ccf-ui-config1",
    "uid": "ee50ad25-42c0-11e9-8872-36a65af8d9e4",
    "resourceVersion": "243585",
    "creationTimestamp": "2019-03-09T23:13:21Z"
  },
  "data": {
    "config.json": "{\r\n \"apiUrl\": \"http://application.cloudapp.net/\"\r\n}"
  }
}
So if I remove all special characters and just use the alphabet, when I view the config.json it only gets as far as 'q':
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "continuousdeployment-ccf-ui-config1",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/configmaps/continuousdeployment-ccf-ui-config1",
    "uid": "ee50ad25-42c0-11e9-8872-36a65af8d9e4",
    "resourceVersion": "244644",
    "creationTimestamp": "2019-03-09T23:13:21Z"
  },
  "data": {
    "config.json": "{abcdefghijklmnopqrstuvwxyz}"
  }
}

Sorted it! I had to put the following in the pod spec, and now the JSON is visible and works:
volumeMounts:
- mountPath: /app/wwwroot/assets/config/config.json
  name: config-volume
  subPath: config.json
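For completeness, here is a minimal sketch of the full Pod spec with the subPath mount (names and paths are the ones from the question; only the volumeMounts section changed). Mounting with subPath projects just the single config.json key as a file at that path, instead of replacing the whole assets/config directory with the ConfigMap volume:
apiVersion: v1
kind: Pod
metadata:
  name: continuousdeployment-ui
spec:
  containers:
  - name: continuousdeployment-ui
    image: release.azurecr.io/release.ccf.ui:2758
    volumeMounts:
    - mountPath: /app/wwwroot/assets/config/config.json
      name: config-volume
      subPath: config.json
  volumes:
  - name: config-volume
    configMap:
      name: continuousdeployment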

Related

kubectl get AzureAssignedIdentities -A -o yaml is empty

I am trying to deploy an API version with the following template:
"apiVersion": "apiextensions.k8s.io/v1",
"kind": "CustomResourceDefinition",
"metadata": {
"name": "azureassignedidentities.aadpodidentity.k8s.io"
},
"spec":{
"conversion": {
"strategy": None
},
"group": "aadpodidentity.k8s.io",
"names": {
"kind": "AzureAssignedIdentity",
"listKind": "AzureAssignedIdentityList",
"plural": "azureassignedidentities",
"singular": "azureassignedidentity"
},
"preserveUnknownFields": true,
"scope": "Namespaced",
"versions":[
"name": "v1",
"served": true,
"storage": true,
]
},
"status": {
"acceptedNames":{
"kind": ""
"listKind": ""
"plural": ""
"singular": ""
},
"conditions": [],
"storedVersions": []
}
When I ran
kubectl get AzureAssignedIdentities -A -o yaml
I got an empty response, as below:
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Can anyone please tell me what's wrong here?
Thanks in advance!
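A couple of standard checks can help narrow this down. An empty List only means that no AzureAssignedIdentity objects exist yet, so it is worth confirming that the CRD itself registered (resource and group names below are taken from the manifest above):
kubectl get crd azureassignedidentities.aadpodidentity.k8s.io
kubectl api-resources --api-group=aadpodidentity.k8s.io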

How does --save-config work with resource definition?

With the below YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: my-nginx
    image: nginx:alpine
On running kubectl create -f nginx.pod.yml --save-config, the documentation says: "If true, the configuration of current object will be saved in its annotation."
Where exactly is this annotation saved? How to view this annotation?
The below command will print all the annotations present in the pod my-nginx:
kubectl get pod my-nginx -o jsonpath='{.metadata.annotations}'
In the above output, the configuration you used is stored under the kubectl.kubernetes.io/last-applied-configuration key.
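For the pod from the question, the same annotation can be pulled out on its own and pretty-printed with jq (assuming jq is installed):
kubectl get pod my-nginx -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' | jq .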
Here is an example showing the usage:
Original manifest for my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-deploy
  name: my-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-deploy
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Created the deployment as follows:
k create -f x.yml --save-config
deployment.apps/my-deploy created
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "nginx",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
nginx
Now suppose some user changes the image from nginx to httpd using an imperative command:
k set image deployment/my-deploy nginx=httpd --record
deployment.apps/my-deploy image updated
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
httpd
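As a side note, because --record was used, the imperative change is captured in the rollout history via the change-cause annotation, even though the last-applied annotation is not touched:
kubectl rollout history deployment/my-deploy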
However, we can check that the last-applied declarative configuration is not updated:
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "nginx",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
Now, change the image name in the original manifest file from nginx to flask, then do kubectl apply (a declarative command):
kubectl apply -f orig.yml
deployment.apps/my-deploy configured
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
flask
Now check the last-applied-configuration annotation; it will have flask in it. Remember, the change was not reflected there when the kubectl set image command was used.
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "flask",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
Where is the "last-applied" annotation saved:
Just like everything else, it's saved in etcd. I created the pod using the manifest provided in the question and ran a raw etcdctl command to print the content (in this dev environment, etcd was not encrypted):
ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/pods/default/my-nginx
/registry/pods/default/my-nginx
k8s
v1Pod⚌
⚌
my-nginxdefault"*$a3s4b729-c96a-40f7-8de9-5d5f4ag21gfa2⚌⚌⚌b⚌
0kubectl.kubernetes.io/last-applied-configuration⚌{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx:alpine","name":"my-nginx"}]}}
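A less invasive way to read the same data is kubectl apply view-last-applied, which prints the annotation without going anywhere near etcd:
kubectl apply view-last-applied pod/my-nginx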

Template PersistentVolumeSelector labels in StatefulSet's volumeClaimTemplate

So there is:
the StatefulSet to control several replicas of a Pod in an ordered manner.
the PersistentVolumeClaim to provide volume to a Pod.
the statefulset.spec.volumeClaimTemplate[] to bind the previous two together.
the PersistentVolumeSelector to control which PersistentVolume fulfills which PersistentVolumeClaim.
Suppose I have persistent volumes named pv0 and pv1, and a statefulset with 2 replicas called couchdb. Concretely, the statefulset is:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: couchdb
spec:
  ...
  replicas: 2
  template:
    ...
    spec:
      containers:
      - name: couchdb
        image: klaemo/couchdb:1.6
        volumeMounts:
        - name: db
          mountPath: /usr/local/var/lib/couchdb
      volumes:
      - name: db
        persistentVolumeClaim:
          claimName: db
  volumeClaimTemplates:
  - metadata:
      name: db
    spec:
      ...
This StatefulSet generates two PersistentVolumeClaims named db-couchdb-0 and db-couchdb-1. The problem is that it is not guaranteed that PVC db-couchdb-0 will always be bound to pv0.
The question is: how do you ensure controlled binds for PersistentVolumeClaims managed by a StatefulSet controller?
I tried adding a volume selector like this:
selector:
  matchLabels:
    name: couchdb
to the statefulset.spec.volumeClaimTemplate[0].spec but the value of name doesn't get templated. Both claims will end up looking for a PersistentVolume labeled name=couchdb.
What you're looking for is a claimRef inside the PersistentVolume, which holds the name and namespace of the PVC to which you want to bind your PV. Please have a look at the following JSONs:
Pv-0.json
{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "pv-data-vol-0",
"labels": {
"type": "local"
}
},
"spec": {
"capacity": {
"storage": "10Gi"
},
"accessModes": [
"ReadWriteOnce"
],
"storageClassName": "local-storage",
"local": {
"path": "/prafull/data/pv-0"
},
"claimRef": {
"namespace": "default",
"name": "data-test-sf-0"
},
"nodeAffinity": {
"required": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/hostname",
"operator": "In",
"values": [
"ip-10-0-1-46.ec2.internal"
]
}
]
}
]
}
}
}
}
Pv-1.json
{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "pv-data-vol-1",
"labels": {
"type": "local"
}
},
"spec": {
"capacity": {
"storage": "10Gi"
},
"accessModes": [
"ReadWriteOnce"
],
"storageClassName": "local-storage",
"local": {
"path": "/prafull/data/pv-1"
},
"claimRef": {
"namespace": "default",
"name": "data-test-sf-1"
},
"nodeAffinity": {
"required": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/hostname",
"operator": "In",
"values": [
"ip-10-0-1-46.ec2.internal"
]
}
]
}
]
}
}
}
}
Statefulset.json
{
"kind": "StatefulSet",
"apiVersion": "apps/v1beta1",
"metadata": {
"name": "test-sf",
"labels": {
"state": "test-sf"
}
},
"spec": {
"replicas": 2,
"template": {
"metadata": {
"labels": {
"app": "test-sf"
},
"annotations": {
"pod.alpha.kubernetes.io/initialized": "true"
}
}
...
...
},
"volumeClaimTemplates": [
{
"metadata": {
"name": "data"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"storageClassName": "local-storage",
"resources": {
"requests": {
"storage": "10Gi"
}
}
}
}
]
}
}
The volumeClaimTemplate will create two PVCs, data-test-sf-0 and data-test-sf-1. The two PV definitions contain the claimRef section, which has the namespace and name of the PVC each PV should bind to. Note that the namespace is mandatory: PVs are not namespaced, and there could be two PVCs with the same name in two different namespaces, so without the namespace the Kubernetes controller manager would not know which PVC a PV should bind to.
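A quick way to sanity-check the resulting bindings once the PVs and the StatefulSet above are created (names taken from the JSON above):
kubectl get pv pv-data-vol-0 -o jsonpath='{.spec.claimRef.name}'   # expect data-test-sf-0
kubectl get pvc data-test-sf-0 -o jsonpath='{.spec.volumeName}'    # expect pv-data-vol-0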
Hope this answers your question.
If you are using dynamic provisioning, the answer is no, you cannot: a dynamically provisioned volume is deleted after release (with the default Delete reclaim policy).
If you are not using dynamic provisioning, you need to reclaim the PV manually.
Check the reclaiming section of the Kubernetes docs.
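For the manual reclaim case, one common approach (assuming the data on the volume can be reused) is to clear the claimRef on a Released PV so it becomes Available again:
kubectl patch pv pv-data-vol-0 -p '{"spec":{"claimRef":null}}'
kubectl get pv pv-data-vol-0 -o jsonpath='{.status.phase}'   # should now report Available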

Getting IP into env variable via DNS in kubernetes

I'm trying to get my head around K8s, coming from Docker Compose. I would like to set up my first pod with two containers, which I pushed to a registry. My question:
How do I get the IP via DNS into an environment variable, so that registrator can connect to consul? See the registrator container's args (consul://consul:8500); the consul host needs to be replaced with the value from the environment.
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "service-discovery",
"labels": {
"name": "service-discovery"
}
},
"spec": {
"containers": [
{
"name": "consul",
"image": "eu.gcr.io/{myproject}/consul",
"args": [
"-server",
"-bootstrap",
"-advertise=$(MY_POD_IP)"
],
"env": [{
"name": "MY_POD_IP",
"valueFrom": {
"fieldRef": {
"fieldPath": "status.podIP"
}
}
}],
"imagePullPolicy": "IfNotPresent",
"ports": [
{
"containerPort": 8300,
"name": "server"
},
{
"containerPort": 8400,
"name": "alt-port"
},
{
"containerPort": 8500,
"name": "ui-port"
},
{
"containerPort": 53,
"name": "udp-port"
},
{
"containerPort": 8443,
"name": "https-port"
}
]
},
{
"name": "registrator",
"image": "eu.gcr.io/{myproject}/registrator",
"args": [
"-internal",
"-ip=$(MY_POD_IP)",
"consul://consul:8500"
],
"env": [{
"name": "MY_POD_IP",
"valueFrom": {
"fieldRef": {
"fieldPath": "status.podIP"
}
}
}],
"imagePullPolicy": "Always"
}
]
}
}
Exposing pods to other applications is done with a Service in Kubernetes. Once you've defined a Service, you can use environment variables related to that Service within your pods. Exposing the Pod directly is not a good idea, as Pods might get rescheduled.
When using, e.g., a Service like this:
apiVersion: v1
kind: Service
metadata:
  name: consul
  namespace: kube-system
  labels:
    name: consul
spec:
  ports:
  - name: http
    port: 8500
  - name: rpc
    port: 8400
  - name: serflan
    port: 8301
  - name: serfwan
    port: 8302
  - name: server
    port: 8300
  - name: consuldns
    port: 8600
  selector:
    app: consul
The related environment variable will be CONSUL_SERVICE_IP.
Anyway, it seems others have actually stopped using those environment variables, for the reasons described here.
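Two alternatives to the service environment variables: since both containers run in the same pod they share a network namespace, so registrator could reach consul on 127.0.0.1:8500 directly; and once the Service above exists, cluster DNS resolves it by name. A sketch of the registrator args under the DNS assumption (note the Service lives in kube-system, so the namespace is part of the name):
"args": [
  "-internal",
  "-ip=$(MY_POD_IP)",
  "consul://consul.kube-system.svc.cluster.local:8500"
]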

Failed to negotiate an API version when using Kubernetes RBAC authorizer plugin

I followed the instructions in the reference document. I created a ClusterRole called 'admins-role' granting admin privileges, and bound the role to user 'tester'.
In k8s master:
# curl localhost:8080/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles
{
"kind": "ClusterRoleList",
"apiVersion": "rbac.authorization.k8s.io/v1alpha1",
"metadata": {
"selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles",
"resourceVersion": "480750"
},
"items": [
{
"metadata": {
"name": "admins-role",
"selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles/admins-role",
"uid": "88a58ac6-471a-11e6-9ad4-52545f942a3b",
"resourceVersion": "479484",
"creationTimestamp": "2016-07-11T03:49:56Z"
},
"rules": [
{
"verbs": [
"*"
],
"attributeRestrictions": null,
"apiGroups": [
"*"
],
"resources": [
"*"
]
}
]
}
]
}
# curl localhost:8080/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings
{
"kind": "ClusterRoleBindingList",
"apiVersion": "rbac.authorization.k8s.io/v1alpha1",
"metadata": {
"selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings",
"resourceVersion": "480952"
},
"items": [
{
"metadata": {
"name": "bind-admin",
"selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings/bind-admin",
"uid": "c53bbc34-471a-11e6-9ad4-52545f942a3b",
"resourceVersion": "479632",
"creationTimestamp": "2016-07-11T03:51:38Z"
},
"subjects": [
{
"kind": "User",
"name": "tester"
}
],
"roleRef": {
"kind": "ClusterRole",
"name": "admins-role",
"apiVersion": "rbac.authorization.k8s.io/v1alpha1"
}
}
]
}
But when running kubectl get pods as user 'tester':
error: failed to negotiate an api version; server supports: map[], client supports: map[extensions/v1beta1:{} authentication.k8s.io/v1beta1:{} autoscaling/v1:{} batch/v1:{} federation/v1alpha1:{} v1:{} apps/v1alpha1:{} componentconfig/v1alpha1:{} policy/v1alpha1:{} rbac.authorization.k8s.io/v1alpha1:{} authorization.k8s.io/v1beta1:{} batch/v2alpha1:{}]
You can't hit the discovery API. Update your ClusterRole to include nonResourceURLs: ["*"].
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admins-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
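To verify the fix, later kubectl versions can check both resource and non-resource access for the user (treat this as a sketch, since kubectl auth can-i postdates the v1alpha1-era cluster in the question):
kubectl auth can-i list pods --as tester
kubectl auth can-i get /api --as tester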