I need to update all the pods (rolling update) with env variable changes.
I am not using kubectl, but the REST API.
Right now I am deleting the service and pods, and then recreating both. It usually takes a few minutes and there is downtime. I want the equivalent of a rolling update, without downtime.
If you want to restart all pods attached to a deployment, you can do that by running
$ curl -k --data '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'"$(date +%Y-%m-%dT%T%z)"'"}}}}}' -XPATCH -H "Accept: application/json, */*" -H "Content-Type: application/strategic-merge-patch+json" localhost:8001/apis/apps/v1/namespaces/default/deployments/mydeployment
Use a Deployment instead of bare pods.
A Deployment has a DeploymentStrategy with maxUnavailable and maxSurge, which you can use to achieve a zero-downtime upgrade.
To change an env variable, just change it in the deployment YAML and apply it to the cluster. It will roll out the deployment without any downtime.
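If you are driving this through the REST API instead of YAML, the same env change can be sent as a strategic merge patch against the Deployment's pod template, which triggers a rolling update according to the deployment strategy. A minimal sketch, assuming kubectl proxy on localhost:8001 and a deployment mydeployment with a container named mycontainer (both names and MY_ENV are placeholders):
# Patch one env variable on one container; the container "name" field is the merge key,
# so other containers and env vars are left untouched.
curl -k -XPATCH \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -H "Accept: application/json" \
  --data '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"MY_ENV","value":"new-value"}]}]}}}}' \
  localhost:8001/apis/apps/v1/namespaces/default/deployments/mydeployment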
kubectl internally calls the REST API exposed by the Kubernetes API server. You can check which REST calls kubectl sends by increasing the verbosity. Once you know which API is being called, you can call it directly.
kubectl rollout restart deployment/frontend -v=10
[ Note to reader: This is a long question, please bear with me ]
I've been messing with Cloud Run for Anthos as a side project, and have got a service up and running on http (not https) -- it performs as expected, scaling up and scaling down as traffic requires.
I now want to add SSL so that traffic comes over SSL only, and I've been banging my head against a wall trying to get it to work, mainly following this link.
I have setup the cluster default domain (it shows up on the Cloud Run for Anthos page) and my service is accessible via http://myservice.mynamespace.mydomain.com (not my real domain, obviously)
I've patched the knative configs, enabling autoTLS and setting mydomain as needed:
kubectl patch configmap config-domain --namespace knative-serving --patch '{"data": {"mydomain.com": ""}}'
kubectl patch configmap config-domain --namespace knative-serving --patch '{"data": {"nip.io": null}}'
kubectl patch configmap config-domainmapping --namespace knative-serving --patch '{"data": {"autoTLS": "Enabled"}}'
kubectl patch configmap config-network --namespace knative-serving --patch '{"data": {"autoTLS": "Enabled"}}'
kubectl patch configmap config-network --namespace knative-serving --patch '{"data": {"httpProtocol": "Enabled"}}'
The documentation talks about using gcloud domain mapping:
gcloud run domain-mappings describe --domain DOMAIN
Yet, this only is available on beta
ERROR: (gcloud.run.domain-mappings.describe) This command group is in beta for fully managed Cloud Run; use `gcloud beta run domain-mappings`.
Using beta, however, also fails
ERROR: (gcloud.beta.run.domain-mappings.describe) NOT_FOUND: Resource 'mydomain.com' of kind 'DOMAIN_MAPPING' in region 'europe-west2' in project 'my-project' does not exist
Making my cluster zonal or regional makes no difference.
The documentation also mentions using kcert
kubectl get kcert
Yet this does not show anything until after I have deployed my service.
$ kubectl get kcert --all-namespaces
NAMESPACE NAME READY REASON
default route-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
And this NEVER shows ready. Trying to hit http://myservice.mynamespace.mydomain.com remains fine, but hitting https://myservice.mynamespace.mydomain.com returns:
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to myservice.mynamespace.mydomain.com:443
Can anyone guide me into why I can't get SSL working on this?
Related question -- is it possible to use the already-configured cert-manager cluster issuer instead? Such as described here?
I have a requirement to stop a deployment by label name and start it again, via the API.
I also need to do that for a group of deployments, so I added a label to each of them.
So I know how to filter the deployments by the desired label, but I found that if I want to stop a deployment from running, I need to scale it down and change the replica count to 0.
Is there any other option to do that via the API? Because now I have to keep the replica count around for the restart (scaling up again), and that is a parameter that is not easy to keep track of over the lifecycle of a service.
So for now the best option I have found is something like:
PAYLOAD='[{"op":"replace","path":"/spec/replicas","value":3}]'
curl -X PATCH -d "$PAYLOAD" -H 'Content-Type: application/json-patch+json' $API_URL
But I am asking whether there is something else, e.g. a group "stop/start" like in Docker Swarm, where you can just run docker stack rm, for example.
If you would like to run kubectl scale deployments mydeployment --replicas=0 via an API call, you can run the command below:
$ curl -k \
  -X PUT \
  -d @- \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  https://$ENDPOINT/apis/apps/v1/namespaces/$NAMESPACE/deployments/$NAME/scale <<'EOF'
{
  "kind": "Scale",
  "apiVersion": "autoscaling/v1",
  ...
}
EOF
More examples can be found in the OpenShift REST API documentation.
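For completeness, a full request body for scaling to zero might look like the following sketch. It assumes a deployment called mydeployment in the default namespace; the /scale subresource under apps/v1 accepts an autoscaling/v1 Scale object:
# Replace the Scale subresource, setting the desired replica count to 0.
curl -k -X PUT \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"mydeployment","namespace":"default"},"spec":{"replicas":0}}' \
  https://$ENDPOINT/apis/apps/v1/namespaces/default/deployments/mydeployment/scale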
How about a solution where you store the number of replicas in an annotation:
export DEPLOYMENT_NAME=xxx
kubectl annotate deployments $DEPLOYMENT_NAME replicas-before=$(kubectl get deployments.apps $DEPLOYMENT_NAME -ojsonpath="{.spec.replicas}")
kubectl scale deployment --replicas 0 $DEPLOYMENT_NAME
kubectl scale deployment --replicas $(kubectl get deployments.apps $DEPLOYMENT_NAME -ojsonpath="{.metadata.annotations.replicas-before}") $DEPLOYMENT_NAME
This does not require saving the state externally. You save the current state in an annotation (in this example called replicas-before) and then scale the deployment down to 0. If you want to restore the number of replicas, just read it from the annotation and scale the deployment back up to that value.
I know you asked for a solution using the k8s API. Just run the kubectl commands with -v=10 and see what API requests are being sent.
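If you want to skip the verbose-log step, roughly the same annotate-then-scale flow over the REST API could look like this sketch. It assumes kubectl proxy on localhost:8001, the default namespace, and that jq is available; the replicas-before annotation name follows the example above:
# Read the current replica count and store it in an annotation, then scale to 0.
REPLICAS=$(curl -s localhost:8001/apis/apps/v1/namespaces/default/deployments/$DEPLOYMENT_NAME | jq .spec.replicas)
curl -s -XPATCH -H 'Content-Type: application/merge-patch+json' \
  -d '{"metadata":{"annotations":{"replicas-before":"'"$REPLICAS"'"}},"spec":{"replicas":0}}' \
  localhost:8001/apis/apps/v1/namespaces/default/deployments/$DEPLOYMENT_NAME

# Later, restore the saved replica count from the annotation.
SAVED=$(curl -s localhost:8001/apis/apps/v1/namespaces/default/deployments/$DEPLOYMENT_NAME | jq -r '.metadata.annotations["replicas-before"]')
curl -s -XPATCH -H 'Content-Type: application/merge-patch+json' \
  -d '{"spec":{"replicas":'"$SAVED"'}}' \
  localhost:8001/apis/apps/v1/namespaces/default/deployments/$DEPLOYMENT_NAME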
I have started kubectl proxy from within my pods and am able to access kubernetes APIs. I have a need to restart my statefulset.
Using kubectl, I would have done this:
kubectl rollout restart statefulset my-statefulset
However, I would like to do this using the REST APIs. For instance, I can delete my pods, using this:
curl -XDELETE localhost:8080/api/v1/namespaces/default/pods
Is there any equivalent REST endpoint that I can use to rollout restart a statefulset?
I ran your command kubectl rollout restart statefulset my-statefulset --v 10 and looked at the output logs.
I figured out that kubectl makes a patch request when I run the above command, and I am able to make the same patch request using curl as follows:
curl -k --data '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'"$(date +%Y-%m-%dT%T%z)"'"}}}}}' \
  -XPATCH -H "Accept: application/json, */*" -H "Content-Type: application/strategic-merge-patch+json" \
  localhost:8080/apis/apps/v1/namespaces/default/statefulsets/my-statefulset
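To confirm that the restart actually progressed (roughly what kubectl rollout status reports), you can poll the StatefulSet over the same proxy and compare its status fields; a sketch, assuming jq is available:
# The restart is done when updateRevision has been rolled out to all replicas
# and readyReplicas matches replicas.
curl -s localhost:8080/apis/apps/v1/namespaces/default/statefulsets/my-statefulset \
  | jq '{"replicas": .status.replicas, "ready": .status.readyReplicas, "updated": .status.updatedReplicas, "currentRevision": .status.currentRevision, "updateRevision": .status.updateRevision}'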
Is it possible to execute kubectl commands as curl calls, by simply hitting the GKE kube master API for some resources and getting JSON back?
Kubernetes is an entirely API-based system. To interact with the Kubernetes API you need a ServiceAccount (with permissions granted through a ClusterRole and a RoleBinding).
Here you can find the documentation for Google Kubernetes Engine API: https://cloud.google.com/kubernetes-engine/docs/reference/rest
Also, as a side note, this might be useful:
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-kubernetes-api
Kubernetes is REST API based and can be called via curl.
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-kubernetes-api
kubectl internally makes the equivalent of curl calls to the Kubernetes API. You can verify this by running the command below and searching the output for curl, and you can then execute the same curl command yourself. In the example below, kubectl is using a client certificate for authentication and calling the Kubernetes API.
kubectl get nodes --v=10
curl -k -v -XGET -H "Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json" -H "User-Agent: kubectl/v1.17.0 (darwin/amd64) kubernetes/70132b0" 'https://127.0.0.1:32768/api/v1/nodes?limit=500'
But to call the Kubernetes REST API yourself, you can use either a client certificate or a JWT bearer token. A service account, which has a bearer token, is the recommended way to communicate with the Kubernetes API from a pod.
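From inside a pod, the service account token and the cluster CA are mounted automatically, so a minimal call can look like this sketch (listing pods in the default namespace assumes the pod's service account has permission to do so):
# Use the mounted service account token and CA to call the API server from within a pod.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods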
I have a service called my-service with an endpoint called refreshCache. my-service is hosted on multiple servers, and occasionally I want an event in my-service on one of the servers to trigger refreshCache on my-service on all servers. To do this I manually maintain a list of all the servers that host my-service, pull that list, and send a REST request to <server>/.../refreshCache for each server.
I'm now migrating my service to k8s. Similarly to before, where I was running refreshCache on all servers that hosted my-service, I now want to be able to run refreshCache on all the pods that host my-service. Unfortunately I cannot manually maintain a list of pod IPs, as my understanding is that IPs are ephemeral in k8s, so I need to be able to dynamically get the IPs of all pods in a node, from within a container in one of those pods. Is this possible?
Note: I'm aware this information is available with kubectl get endpoints ..., however kubectl will not be available within my container.
The best way to achieve this is to use the in-cluster Kubernetes config from inside the pod.
The Kubernetes client library can help here. Here is an example Python script that can be run inside a pod to get pods and their metadata.
from kubernetes import client, config

def trigger_refresh_cache():
    # This works only if the script is run by Kubernetes as a pod
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(label_selector='app=my-service')
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
    # Rest of the logic goes here to trigger the refreshCache endpoint
Here the method load_incluster_config() is used, which loads the in-cluster configuration via the service account attached to that pod.
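Note that list_pod_for_all_namespaces needs cluster-wide read access on pods, so the pod's service account must be granted that. A minimal RBAC setup could look like this sketch (the account name pod-lister is an assumption; the pod would then run with serviceAccountName: pod-lister):
# Create a service account with read access to pods cluster-wide.
kubectl create serviceaccount pod-lister
kubectl create clusterrole pod-lister --verb=get,list --resource=pods
kubectl create clusterrolebinding pod-lister --serviceaccount=default:pod-lister --clusterrole=pod-lister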
You don't need kubectl to access the Kubernetes API. You can do it with any tool that can make HTTP requests.
The Kubernetes API is a simple HTTP REST API, and all the authentication information that you need is present in the container if it runs as a Pod in the cluster.
To get the Endpoints object named my-service from within a container in the cluster, you can do:
curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc:443/api/v1/namespaces/{namespace}/endpoints/my-service
Note: replace {namespace} with the namespace of the my-service Endpoints resource.
And to extract the IP addresses of the returned JSON, you could pipe the output to a tool like jq:
... | jq -r '.subsets[].addresses[].ip'
Note that the Pod from which you are executing this needs read permissions for the Endpoints resource, otherwise the API request is denied.
You can do this with a ClusterRole, ClusterRoleBinding, and Service Account (you need to set this up only once):
kubectl create sa endpoint-reader
kubectl create clusterrole endpoint-reader --verb=get,list --resource=endpoints
kubectl create clusterrolebinding endpoint-reader --serviceaccount=default:endpoint-reader --clusterrole=endpoint-reader
Then, use the endpoint-reader ServiceAccount for the Pod from which you want to execute the above curl command by specifying it in the pod.spec.serviceAccountName field.
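For example, on an existing deployment you could set it with a patch; a sketch, assuming the workload is a deployment named my-service in the default namespace:
# Point the pod template at the endpoint-reader service account (triggers a rollout).
kubectl patch deployment my-service \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"endpoint-reader"}}}}'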
Granting permissions for any other API operations (i.e. combinations of verbs and resources) works in the same way.