kubectl get AzureAssignedIdentities -A -o yaml is empty - kubernetes

I am trying to deploy an API version with the following template:
{
  "apiVersion": "apiextensions.k8s.io/v1",
  "kind": "CustomResourceDefinition",
  "metadata": {
    "name": "azureassignedidentities.aadpodidentity.k8s.io"
  },
  "spec": {
    "conversion": {
      "strategy": "None"
    },
    "group": "aadpodidentity.k8s.io",
    "names": {
      "kind": "AzureAssignedIdentity",
      "listKind": "AzureAssignedIdentityList",
      "plural": "azureassignedidentities",
      "singular": "azureassignedidentity"
    },
    "preserveUnknownFields": true,
    "scope": "Namespaced",
    "versions": [
      {
        "name": "v1",
        "served": true,
        "storage": true
      }
    ]
  },
  "status": {
    "acceptedNames": {
      "kind": "",
      "listKind": "",
      "plural": "",
      "singular": ""
    },
    "conditions": [],
    "storedVersions": []
  }
}
When I run
kubectl get AzureAssignedIdentities -A -o yaml
I get an empty response, as shown below:
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Can anyone please tell me what's wrong here?
Thanks in advance!
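In case it helps to narrow this down, here is a sketch of the checks I would run (assuming the CRD above applied without errors and aad-pod-identity is installed). An empty list usually just means no AzureAssignedIdentity objects have been created yet, since the controller only creates them for pods that match an identity binding.
# Confirm the CRD is registered with the API server
kubectl get crd azureassignedidentities.aadpodidentity.k8s.io
# Confirm the aad-pod-identity controller (MIC) and NMI pods are running
kubectl get pods -A | grep -iE 'mic|nmi'
# Check whether the identities and bindings the controller matches against exist
kubectl get azureidentities,azureidentitybindings -A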

Related

How does --save-config work with a resource definition?

With the below yml file:
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: my-nginx
    image: nginx:alpine
On running kubectl create -f nginx.pod.yml --save-config, the documentation says: "If true, the configuration of current object will be saved in its annotation."
Where exactly is this annotation saved? How can I view it?
The command below prints all the annotations present on the pod my-nginx:
kubectl get pod my-nginx -o jsonpath='{.metadata.annotations}'
Your applied configuration is stored under the kubectl.kubernetes.io/last-applied-configuration key of that output.
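As an aside, if you only want that one annotation nicely formatted, kubectl has a dedicated subcommand for it; a quick sketch using the pod from the question:
# Pretty-print the configuration stored in the last-applied-configuration annotation
kubectl apply view-last-applied pod/my-nginx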
Here is an example showing the usage:
Original manifest for my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-deploy
  name: my-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-deploy
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Created the deployment as follows:
k create -f x.yml --save-config
deployment.apps/my-deploy created
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "annotations": {},
    "creationTimestamp": null,
    "labels": {
      "app": "my-deploy"
    },
    "name": "my-deploy",
    "namespace": "default"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "app": "my-deploy"
      }
    },
    "strategy": {},
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "app": "my-deploy"
        }
      },
      "spec": {
        "containers": [
          {
            "image": "nginx",
            "name": "nginx",
            "resources": {}
          }
        ]
      }
    }
  },
  "status": {}
}
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
nginx
Now suppose a user changes the image from nginx to httpd using an imperative command:
k set image deployment/my-deploy nginx=httpd --record
deployment.apps/my-deploy image updated
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
httpd
However, if we check, the last-applied configuration annotation has not been updated:
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "annotations": {},
    "creationTimestamp": null,
    "labels": {
      "app": "my-deploy"
    },
    "name": "my-deploy",
    "namespace": "default"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "app": "my-deploy"
      }
    },
    "strategy": {},
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "app": "my-deploy"
        }
      },
      "spec": {
        "containers": [
          {
            "image": "nginx",
            "name": "nginx",
            "resources": {}
          }
        ]
      }
    }
  },
  "status": {}
}
Now, change the image name in the original manifest file from nginx to flask, then run kubectl apply (a declarative command):
kubectl apply -f orig.yml
deployment.apps/my-deploy configured
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
flask
Now check the last-applied-configuration annotation; it has flask in it. Remember, the httpd change made with kubectl set image was never recorded there.
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "annotations": {},
    "creationTimestamp": null,
    "labels": {
      "app": "my-deploy"
    },
    "name": "my-deploy",
    "namespace": "default"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "app": "my-deploy"
      }
    },
    "strategy": {},
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "app": "my-deploy"
        }
      },
      "spec": {
        "containers": [
          {
            "image": "flask",
            "name": "nginx",
            "resources": {}
          }
        ]
      }
    }
  },
  "status": {}
}
Where is the "last-applied" annotation saved?
Just like everything else, it's saved in etcd. I created the pod using the manifest provided in the question and ran a raw etcdctl command to print the content (in this dev environment, etcd was not encrypted).
ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/pods/default/my-nginx
/registry/pods/default/my-nginx
k8s
v1Pod⚌
⚌
my-nginxdefault"*$a3s4b729-c96a-40f7-8de9-5d5f4ag21gfa2⚌⚌⚌b⚌
0kubectl.kubernetes.io/last-applied-configuration⚌{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx:alpine","name":"my-nginx"}]}}

Can't add second interface to the pod with multus - minikube

I am trying to deploy a pod with a second interface using multus-cni. However, when I deploy my pod I only see one interface, the main one. The secondary interface is not created.
I followed the steps in the quick start guide to install multus.
Environment:
minikube v1.12.1 on Microsoft Windows 10 Enterprise
Kubernetes v1.18.3 on Docker 19.03.12
Multus version
--cni-version=0.3.1
$00-multus.conf
{ "cniVersion": "0.3.1", "name": "multus-cni-network", "type": "multus", "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig", "delegates": [ { "cniVersion": "0.3.1", "name":
"bridge", "type": "bridge", "bridge": "bridge", "addIf": "true", "isDefaultGateway": true, "forceAddress": false, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local
", "subnet": "10.244.0.0/16" } } ] }
$1-k8s.conf
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "type": "bridge",
  "bridge": "bridge",
  "addIf": "true",
  "isDefaultGateway": true,
  "forceAddress": false,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}
$87-podman-bridge.conflist
{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "routes": [{ "dst": "0.0.0.0/0" }],
        "ranges": [
          [
            {
              "subnet": "10.88.0.0/16",
              "gateway": "10.88.0.1"
            }
          ]
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    },
    {
      "type": "firewall"
    },
    {
      "type": "tuning"
    }
  ]
}
$multus.kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://[10.96.0.1]:443
    certificate-authority-data: .....
users:
- name: multus
  user:
    token: .....
contexts:
- name: multus-context
  context:
    cluster: local
    user: multus
current-context: multus-context
File of '/etc/cni/multus/net.d'
NetworkAttachment info:
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "192.168.1.1"
      }
    }'
EOF
Pod yaml info:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
I installed a newer minikube version, and now the secondary interface is added correctly.
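For anyone verifying a similar setup, here is a sketch of the checks (assuming the pod is named samplepod as above): the macvlan attachment should appear as an extra interface inside the pod, and multus records the attachments in a pod annotation.
# The secondary macvlan interface typically shows up as net1
kubectl exec -it samplepod -- ip addr
# Multus writes the attached networks into a k8s.v1.cni.cncf.io annotation
kubectl describe pod samplepod | grep -A 5 'k8s.v1.cni.cncf.io'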

Configmaps - Angular

So I have a ConfigMap called 'continuousdeployment' containing config.json:
{
  "apiUrl": "http://application.cloudapp.net/",
  "test": "1232"
}
The pod yaml
apiVersion: v1
kind: Pod
metadata:
  name: continuousdeployment-ui
spec:
  containers:
  - name: continuousdeployment-ui
    image: release.azurecr.io/release.ccf.ui:2758
    volumeMounts:
    - name: config-volume
      mountPath: app/wwwroot/assets/config
  volumes:
  - name: config-volume
    configMap:
      name: continuousdeployment
Without the ConfigMap set, I can see the config file when going to the IP address ./assets/config/config.json and can view it all fine. When I apply the ConfigMap, the only part of the JSON I can then see is:
{
"apiUrl":
So any ideas?
Update - I just amended the config.json to the following:
{
  "test": "1232",
  "apiUrl": "http://applictaion.cloudapp.net/"
}
and in the browser I can see:
{
"test":"1232",
The ConfigMap:
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "continuousdeployment-ccf-ui-config",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/configmaps/continuousdeployment-ccf-ui-config",
    "uid": "94ee863c-42b3-11e9-8872-36a65af8d9e4",
    "resourceVersion": "235988",
    "creationTimestamp": "2019-03-09T21:37:47Z"
  },
  "data": {
    "config.json": "{\n \"apiUrl\": \"http://application.cloudapp.net/\"\n}"
  }
}
I even tried this:
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "continuousdeployment-ccf-ui-config1",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/configmaps/continuousdeployment-ccf-ui-config1",
    "uid": "ee50ad25-42c0-11e9-8872-36a65af8d9e4",
    "resourceVersion": "243585",
    "creationTimestamp": "2019-03-09T23:13:21Z"
  },
  "data": {
    "config.json": "{\r\n \"apiUrl\": \"http://application.cloudapp.net/\"\r\n}"
  }
}
So if I remove all odd characters and just use the alphabet, when I view the config.json it only gets as far as 'q':
{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "continuousdeployment-ccf-ui-config1",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/configmaps/continuousdeployment-ccf-ui-config1",
    "uid": "ee50ad25-42c0-11e9-8872-36a65af8d9e4",
    "resourceVersion": "244644",
    "creationTimestamp": "2019-03-09T23:13:21Z"
  },
  "data": {
    "config.json": "{abcdefghijklmnopqrstuvwxyz}"
  }
}
Sorted it! I had to put the following in the pod spec, and now the JSON is visible and works:
volumeMounts:
- mountPath: /app/wwwroot/assets/config/config.json
  name: config-volume
  subPath: config.json
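For reference, a sketch of what the full pod spec looks like with that subPath mount (names and image taken from the question; only the volume mount changed):
apiVersion: v1
kind: Pod
metadata:
  name: continuousdeployment-ui
spec:
  containers:
  - name: continuousdeployment-ui
    image: release.azurecr.io/release.ccf.ui:2758
    volumeMounts:
    # Mount only the single file from the ConfigMap instead of replacing the whole directory
    - mountPath: /app/wwwroot/assets/config/config.json
      name: config-volume
      subPath: config.json
  volumes:
  - name: config-volume
    configMap:
      name: continuousdeployment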

Extract LoadBalancer name from kubectl output with go-template

I'm trying to write a Go template that extracts the value of the load balancer. Using --go-template={{status.loadBalancer.ingress}} returns [map[hostname:GUID.us-west-2.elb.amazonaws.com]]. When I add .hostname to the template, I get an error saying "can't evaluate field hostname in type interface {}". I've tried using the range keyword, but I can't seem to get the syntax right.
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "creationTimestamp": "2018-07-30T17:22:12Z",
    "labels": {
      "run": "nginx"
    },
    "name": "nginx-http",
    "namespace": "jx",
    "resourceVersion": "495789",
    "selfLink": "/api/v1/namespaces/jx/services/nginx-http",
    "uid": "18aea6e2-941d-11e8-9c8a-0aae2cf24842"
  },
  "spec": {
    "clusterIP": "10.100.92.49",
    "externalTrafficPolicy": "Cluster",
    "ports": [
      {
        "nodePort": 31032,
        "port": 80,
        "protocol": "TCP",
        "targetPort": 8080
      }
    ],
    "selector": {
      "run": "nginx"
    },
    "sessionAffinity": "None",
    "type": "LoadBalancer"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "hostname": "GUID.us-west-2.elb.amazonaws.com"
        }
      ]
    }
  }
}
As you can see from the JSON, the ingress element is an array. You can use the template function index to grab this array element.
Try:
kubectl get svc <name> -o=go-template --template='{{(index .status.loadBalancer.ingress 0 ).hostname}}'
This assumes, of course, that you're only provisioning a single load balancer; if you have multiple, you'll have to use range.
Try this:
kubectl get svc <name> -o go-template='{{range .items}}{{range .status.loadBalancer.ingress}}{{.hostname}}{{printf "\n"}}{{end}}{{end}}'
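If you aren't tied to go-template, a jsonpath expression gives the same value and is arguably easier to read; a sketch assuming a single ingress entry:
# Index directly into the first (and only) ingress entry
kubectl get svc <name> -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'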

Failed to negotiate an API version when using Kubernetes RBAC authorizer plugin

I followed the instructions in the reference document. I created a ClusterRole called 'admins-role' granting admin privileges, and bound the role to the user 'tester'.
In k8s master:
# curl localhost:8080/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles
{
  "kind": "ClusterRoleList",
  "apiVersion": "rbac.authorization.k8s.io/v1alpha1",
  "metadata": {
    "selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles",
    "resourceVersion": "480750"
  },
  "items": [
    {
      "metadata": {
        "name": "admins-role",
        "selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles/admins-role",
        "uid": "88a58ac6-471a-11e6-9ad4-52545f942a3b",
        "resourceVersion": "479484",
        "creationTimestamp": "2016-07-11T03:49:56Z"
      },
      "rules": [
        {
          "verbs": [
            "*"
          ],
          "attributeRestrictions": null,
          "apiGroups": [
            "*"
          ],
          "resources": [
            "*"
          ]
        }
      ]
    }
  ]
}
# curl localhost:8080/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings
{
  "kind": "ClusterRoleBindingList",
  "apiVersion": "rbac.authorization.k8s.io/v1alpha1",
  "metadata": {
    "selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings",
    "resourceVersion": "480952"
  },
  "items": [
    {
      "metadata": {
        "name": "bind-admin",
        "selfLink": "/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings/bind-admin",
        "uid": "c53bbc34-471a-11e6-9ad4-52545f942a3b",
        "resourceVersion": "479632",
        "creationTimestamp": "2016-07-11T03:51:38Z"
      },
      "subjects": [
        {
          "kind": "User",
          "name": "tester"
        }
      ],
      "roleRef": {
        "kind": "ClusterRole",
        "name": "admins-role",
        "apiVersion": "rbac.authorization.k8s.io/v1alpha1"
      }
    }
  ]
}
But when running kubectl get pods as the user 'tester':
error: failed to negotiate an api version; server supports: map[], client supports: map[extensions/v1beta1:{} authentication.k8s.io/v1beta1:{} autoscaling/v1:{} batch/v1:{} federation/v1alpha1:{} v1:{} apps/v1alpha1:{} componentconfig/v1alpha1:{} policy/v1alpha1:{} rbac.authorization.k8s.io/v1alpha1:{} authorization.k8s.io/v1beta1:{} batch/v2alpha1:{}]
You can't hit the discovery API. Update your ClusterRole to include "nonResourceURLs": ["*"]:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admins-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
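To roll that out and sanity-check it, something along these lines should work, assuming the role above is saved as admins-role.yaml (a hypothetical file name) and your cluster version supports kubectl auth can-i with impersonation:
# Update the existing ClusterRole with the added nonResourceURLs rule
kubectl apply -f admins-role.yaml
# Verify that 'tester' is now allowed to list pods
kubectl auth can-i get pods --as tester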