I have deployed a Pod using the Kubernetes REST API POST /api/v1/namespaces/{namespace}/pods
The request body has the PodSpec with volumes, something like below:
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "test",
    "namespace": "default"
  },
  "spec": {
    "volumes": [
      {
        "name": "test-secrets",
        "secret": {
          "secretName": "test-secret-one"
        }
      }
    ],
    "containers": [
      <<container json>>.........
    ]
  }
}
Now I want to change the secret name from test-secret-one to test-secret-two for this Pod.
How can I achieve this, and which REST API do I need to use?
With the PATCH REST API I can change the container image, but it can't be used for volumes. If it can, could you give me an example or a reference?
Is there any Kubernetes REST API to restart the Pod? Note that we are not using a Deployment object model; it is deployed directly as a Pod, not via a Deployment.
Can anyone help here?
I'm posting the answer as Community Wiki as the solution came from @Matt in the comments.
Volumes aren't updatable fields, you will need to recreate the pod
with the new spec.
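Since the volume field can't be patched on a live Pod, the recreate flow amounts to building the updated body and replaying it against the REST API. A minimal sketch (containers elided as in the question):

```python
import copy
import json

# Original pod body from the question (container json elided as in the question).
pod = {
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {"name": "test", "namespace": "default"},
    "spec": {
        "volumes": [
            {"name": "test-secrets", "secret": {"secretName": "test-secret-one"}}
        ],
        "containers": [],
    },
}

# Build the replacement body with the new secret name.
new_pod = copy.deepcopy(pod)
new_pod["spec"]["volumes"][0]["secret"]["secretName"] = "test-secret-two"

# The recreate sequence against the REST API would then be:
#   DELETE /api/v1/namespaces/default/pods/test
#   POST   /api/v1/namespaces/default/pods      (body: new_pod)
print(json.dumps(new_pod["spec"]["volumes"], indent=2))
```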
The answer to most of your questions is use a deployment and patch it.
The deployment will manage the updates and restarts for you.
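For example, if the Pod were managed by a Deployment named test (an assumption here), the volume's secret could be swapped with a strategic merge patch, since volumes are merged by their name key:

```python
import json

# Strategic-merge-patch body: volumes are merged by "name", so only the
# changed volume needs to be listed. The Deployment name "test" is assumed.
patch = {
    "spec": {
        "template": {
            "spec": {
                "volumes": [
                    {"name": "test-secrets",
                     "secret": {"secretName": "test-secret-two"}}
                ]
            }
        }
    }
}

# Sent as:
#   PATCH /apis/apps/v1/namespaces/default/deployments/test
#   Content-Type: application/strategic-merge-patch+json
body = json.dumps(patch)
print(body)
```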
A different approach is also possible and was suggested by @Kitt:
If you only update the content of Secrets and ConfigMaps instead of
renaming them, the mounted volumes will be refreshed by the kubelet
within the --sync-frequency duration (1m by default).
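A sketch of that approach, updating the Secret's content in place through the REST API; the key "password" and its value are hypothetical:

```python
import base64
import json

# Secret values must be base64-encoded; the key "password" is hypothetical.
new_value = base64.b64encode(b"s3cr3t").decode()
patch = {"data": {"password": new_value}}

# Sent as:
#   PATCH /api/v1/namespaces/default/secrets/test-secret-one
#   Content-Type: application/merge-patch+json
print(json.dumps(patch))
```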
Related
I'm trying to set up KubeEdge with cloudcore in EKS (k8s version 1.21.12) and edgecore on an external server. As part of the KubeEdge setup, I had to create a Node object manually on the cloud side, which will be labelled as the edge node.
But when I do the kubectl apply -f node.json, I'm getting the following response:
C:\Users\akhi1\manifest>kubectl apply -f node.json
node/edge-node-01 created
C:\Users\akhi1\manifest>kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-xx-xx-xxx-213.ap-southeast-1.compute.internal Ready <none> 3h48m v1.21.12-eks-xxxx << this node was already in my eks cluster
As you can see, I'm not able to see the newly created node 'edge-node-01' in the list.
On checking the kube events, I got the following:
C:\Users\akhi1\manifests>kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
13m Normal DeletingNode node/edge-node-01 Deleting node edge-node-01 because it does not exist in the cloud provider
For manual node registration, I followed this doc:
https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration
My node.json would look like this:
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "edge-node-01",
    "labels": {
      "name": "my-first-k8s-edge-node"
    }
  }
}
I have also checked the NodeRestriction admission controller but couldn't find anything related to it.
Please let me know why EKS is blocking me from creating a Node object that doesn't have an underlying EC2 instance attached.
Thanks in advance,
Akhil
I have a multi-stack CDK setup: the core stack contains the VPC and EKS, and the "Tenant" stack deploys some S3 buckets, k8s namespaces, and some other tenant-related resources.
cdk ls is displaying all the existing stacks as expected.
- eks-stack
- tenant-a
- tenant-b
If I want to deploy only a single tenant stack, I run cdk deploy tenant-a. To my surprise, I see that in my k8s cluster the manifests of both tenant-a and tenant-b were deployed, not just tenant-a as I expected.
The CDK output on the CLI correctly reports that tenant-a was deployed and doesn't mention tenant-b. I also see that most of the changes happened inside the eks stack rather than in the tenant stack, as I am using references.
# app.py
# ...
# EKS
eks_stack = EksStack(
    app,
    "eks-stack",
    stack_log_level="INFO",
)
# Tenant Specific stacks
tenants = ['tenant-a', 'tenant-b']
for tenant in tenants:
    tenant_stack = TenantStack(
        app,
        f"tenant-stack-{tenant}",
        stack_log_level="INFO",
        cluster=eks_stack.eks_cluster,
        tenant=tenant,
    )
--
#
# Inside TenantStack.py a manifest is applied to k8s
self.cluster.add_manifest(f'db-job-{self.tenant}', {
    "apiVersion": 'v1',
    "kind": 'Pod',
    "metadata": {"name": 'mypod'},
    "spec": {
        "serviceAccountName": "bootstrap-db-job-access-ssm",
        "containers": [
            {
                "name": 'hello',
                "image": 'amazon/aws-cli',
                # "command" must be a list of strings
                "command": ['magic stuff ....']
            }
        ]
    }
})
I found out that when I import the cluster by its attributes instead of by reference,
e.g.
self.cluster = Cluster.from_cluster_attributes(
    self, 'cluster', cluster_name=cluster,
    open_id_connect_provider=eks_open_id_connect_provider,
    kubectl_role_arn=kubectl_role,
)
I can deploy tenant stacks a and b separately and my core eks stack stays untouched. However, I have read that it's recommended to use references, as CDK can then automatically create dependencies and detect circular dependencies.
There is an option to exclude dependencies: use cdk deploy tenant-a --exclusively to avoid deploying the dependencies.
I have installed minikube in a VM and I have a service account token with all the privileges. Is there any Kubernetes API to fetch the overall resource usage?
To get CPU and Memory usage you can use (depending on the object you like to see) the following:
kubectl top pods
or
kubectl top nodes
which will show you
$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
nginx-1-5d4f8f66d9-xmhnh 0m 1Mi
The corresponding API request might look like the following:
$ curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods
...
{
  "metadata": {
    "name": "nginx-1-5d4f8f66d9-xmhnh",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-1-5d4f8f66d9-xmhnh",
    "creationTimestamp": "2019-07-29T11:48:13Z"
  },
  "timestamp": "2019-07-29T11:48:11Z",
  "window": "30s",
  "containers": [
    {
      "name": "nginx",
      "usage": {
        "cpu": "0",
        "memory": "1952Ki"
      }
    }
  ]
}
...
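The PodMetrics JSON above can also be consumed programmatically; a minimal sketch parsing that example item:

```python
import json

# The PodMetrics item shown above, as returned by the metrics.k8s.io API.
item = json.loads("""
{
  "metadata": {"name": "nginx-1-5d4f8f66d9-xmhnh", "namespace": "default"},
  "timestamp": "2019-07-29T11:48:11Z",
  "window": "30s",
  "containers": [
    {"name": "nginx", "usage": {"cpu": "0", "memory": "1952Ki"}}
  ]
}
""")

# Print per-container CPU and memory usage.
for c in item["containers"]:
    print(c["name"], c["usage"]["cpu"], c["usage"]["memory"])
```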
As for the API, there are a few ways of accessing it.
You can use a proxy by running kubectl proxy --port=8080 &
This runs kubectl in a mode where it acts as a reverse proxy; it handles locating the API server and authenticating.
See kubectl proxy for more details.
Then you can explore the API with curl, wget, or a browser, like so:
curl http://localhost:8080/api/
You can also access the API without the proxy by passing an authentication token directly to the API server, like this:
Using the jsonpath approach:
# Check all possible clusters, as your kubeconfig may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select the name of the cluster you want to interact with from the above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server referring to the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Get the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}" | base64 -d)
# Explore the API with TOKEN
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
And you can also access the API using one of the several official client libraries, for example Go or Python; other libraries are available as well.
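Without pulling in the official client library, the same bearer-token call can be assembled with the Python standard library; the server address and token below are placeholders:

```python
import urllib.request

# Placeholders; in practice these come from the kubeconfig and the
# service-account secret, as in the shell commands above.
apiserver = "https://127.0.0.1:6443"
token = "REPLACE_WITH_TOKEN"

req = urllib.request.Request(
    f"{apiserver}/api",
    headers={"Authorization": f"Bearer {token}"},
)
# urllib.request.urlopen(req, context=...) would perform the call;
# a custom SSL context is needed unless the cluster CA is trusted.
print(req.get_header("Authorization"))
```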
If you install the Kubernetes Metrics Server, it will expose those metrics as an API: https://github.com/kubernetes-incubator/metrics-server
I am getting the following error while accessing the app deployed on Azure Kubernetes Service:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
I have followed all steps as given here https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-app
I know that this is something to do with authentication and RBAC, but I don't know what exactly is wrong and where I should make changes.
Just follow the steps in the link you posted and you will finish successfully. The purpose of each step below:
Create the image and make sure it works without any error.
Create an Azure Container Registry and push the image into the registry.
Create a Service Principal for AKS so that it can pull the image from the registry.
Change the yaml file to make it pull the image from the Azure Registry, then create pods in the AKS nodes.
You just need these four steps to run the application on AKS. Then get the IP address through the command kubectl get service azure-vote-front --watch as in step 4. If you cannot access the application, check your steps carefully again.
Also, you can check the status of all the pods through the command kubectl describe pods, or one pod with kubectl describe pod podName.
Update
I tested with the image you provided; the result is shown here:
You can get the service information and see which port you should use to browse.
I just set up a Kubernetes cluster based on this link https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#multi-platform
I checked with kubectl get nodes and the master node is Ready, but when I access the link https://k8s-master-ip:6443/
it shows the error: User "system:anonymous" cannot get path "/".
What is the trick I am missing?
Hopefully you see something like this:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
This is good, as not everyone should be able to access the cluster. If you want to see the services, run "kubectl proxy"; this will enable access to them from outside the cluster.
C:\dev1> kubectl proxy
Starting to serve on 127.0.0.1:8001
And when you hit 127.0.0.1:8001 you should see the list of services.
The latest Kubernetes deployment tools enable RBAC on the cluster. Jenkins is relegated to the catch-all user system:anonymous when it accesses https://192.168.70.94:6443/api/v1/..., and this user has almost no privileges on kube-apiserver.
The bottom-line is, Jenkins needs to authenticate with kube-apiserver - either with a bearer token or a client cert that's signed by the k8s cluster's CA key.
Method 1. This is preferred if Jenkins is hosted in the k8s cluster:
Create a ServiceAccount in k8s for the plugin
Create an RBAC profile (i.e. Role/RoleBinding or ClusterRole/ClusterRoleBinding) that's tied to the ServiceAccount
Configure the plugin to use the ServiceAccount's token when accessing the URL https://192.168.70.94:6443/api/v1/...
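The ServiceAccount and RBAC objects from the steps above might look like this; the name "jenkins" and the built-in "view" ClusterRole are assumptions for illustration:

```json
{
  "apiVersion": "v1",
  "kind": "List",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "ServiceAccount",
      "metadata": {"name": "jenkins", "namespace": "default"}
    },
    {
      "apiVersion": "rbac.authorization.k8s.io/v1",
      "kind": "ClusterRoleBinding",
      "metadata": {"name": "jenkins-read"},
      "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "view"
      },
      "subjects": [
        {"kind": "ServiceAccount", "name": "jenkins", "namespace": "default"}
      ]
    }
  ]
}
```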
Method 2. If Jenkins is hosted outside the k8s cluster, the steps above can still be used. The alternative is to:
Create a client cert that's tied to the k8s cluster's CA. You have to find where the CA key is kept and use it to generate a client cert.
Create an RBAC profile (i.e. Role/RoleBinding or ClusterRole/ClusterRoleBinding) that's tied to the client cert
Configure the plugin to use the client cert when accessing the URL https://192.168.70.94:6443/api/v1/...
Both methods work in any situation. I believe Method 1 will be simpler for you because you don't have to mess around with the CA key.
By default, there is no ClusterRoleBinding for the user system:anonymous, which blocks cluster access.
Execute the following command; it will bind the cluster-admin ClusterRole to system:anonymous, which will give you the required access.
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous