I have created a minikube cluster. I need to run my automation script for test cases with pytest inside minikube, and I have to pass a service account. How can I get it? Can anyone please help?
When starting minikube, add extra flags:
minikube start \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \
--extra-config=apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-api-audiences=api,spire-server,nats \
--extra-config=apiserver.authorization-mode=Node,RBAC \
--extra-config=kubelet.authentication-token-webhook=true
Take a look: minikube-sa, kubernetes-psat.
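If what you actually need is a service account token to pass to your pytest run, here is a minimal sketch; the account name test-sa, the default namespace, and the cluster-admin binding are placeholders, so adjust them to what your tests require:
# create a service account and (for demo purposes only) give it broad permissions
kubectl create serviceaccount test-sa
kubectl create clusterrolebinding test-sa-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:test-sa
# Kubernetes v1.24+: request a short-lived token directly
TOKEN=$(kubectl create token test-sa)
# older clusters auto-create a token secret instead:
# TOKEN=$(kubectl get secret $(kubectl get sa test-sa -o jsonpath='{.secrets[0].name}') \
#   -o jsonpath='{.data.token}' | base64 --decode)
echo "$TOKEN"   # pass this token to your pytest configuration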
How do I create an AKS cluster in C#? I have not found a way to do so using the C# K8s SDK (https://github.com/kubernetes-client/csharp). Basically, I want the equivalent of the following command:
az aks create -g $RESOURCE_GROUP -n $AKS_CLUSTER \
--enable-addons azure-keyvault-secrets-provider \
--enable-managed-identity \
--node-count $AKS_NODE_COUNT \
--generate-ssh-keys \
--enable-pod-identity \
--network-plugin azure
Send a PUT request with a payload (JSON body) to ARM.
See this: https://learn.microsoft.com/en-us/rest/api/aks/managed-clusters/create-or-update?tabs=HTTP
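For illustration, the raw ARM call would look roughly like this from the command line; the subscription, resource group, cluster name, api-version, and cluster.json body are placeholders, and the linked doc describes the full payload schema:
curl -X PUT \
  -H "Authorization: Bearer $ARM_TOKEN" \
  -H "Content-Type: application/json" \
  -d @cluster.json \
  "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER?api-version=2023-05-01"
In C#, the same request can be issued with HttpClient after acquiring an ARM token, or via Azure's management SDK for .NET.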
On my Ubuntu 18.04 AWS server, I am trying to create a cluster via kops.
kops create cluster \
--name=asdf.com \
--state=s3://asdf \
--zones=eu-west-1a \
--node-count=1 \
--node-size=t2.micro \
--master-size=t2.micro \
--master-count=1 \
--dns-zone=asdf.com \
--ssh-public-key=~/.ssh/id_rsa.pub
kops update cluster --name asdf.com
It successfully updated my cluster.
But when I try to validate the cluster and run
kubectl get nodes
I got the error: server gave HTTP response to HTTPS client
kops validate cluster --name asdf.com
Validation failed: unexpected error during validation: error listing nodes: Get https://api.asdf.com/api/v1/nodes: http: server gave HTTP response to HTTPS client
I couldn't solve this.
I tried
kubectl config set-cluster asdf.com --insecure-skip-tls-verify=true
but it didn't work.
Please can you help?
t2.micro instances may be too small for control plane nodes. They will certainly be very slow to boot properly. You can try omitting that flag (i.e. use the default size) and see if it boots up properly.
Tip: use kops validate cluster --wait=30m as it may provide more clues to what is wrong.
Except for the instance size, the command above looks good. But if you want to dig deeper, you can have a look at https://kops.sigs.k8s.io/operations/troubleshoot/
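If you want to resize the existing cluster instead of recreating it, a rough sketch (the instance group name master-eu-west-1a is illustrative; list yours with kops get ig):
kops get ig --name=asdf.com --state=s3://asdf
kops edit ig master-eu-west-1a --name=asdf.com --state=s3://asdf
# change spec.machineType (e.g. to t3.medium), then apply and roll the nodes:
kops update cluster --name=asdf.com --state=s3://asdf --yes
kops rolling-update cluster --name=asdf.com --state=s3://asdf --yes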
I configured GitLab Runner (version 11.4.2) to use the Kubernetes executor.
Here is my non-interactive register command:
gitlab-runner register \
--non-interactive \
--registration-token **** \
--url https://mygitlab.net/ \
--tls-ca-file /etc/gitlab-runner/certs/ca.crt \
--executor "kubernetes" \
--kubernetes-image-pull-secrets pull-internal \
--kubernetes-image-pull-secrets pull-external \
--name "kube-docker-runner" \
--tag-list "docker" \
--config "/etc/gitlab-runner/config.toml" \
--kubernetes-image "docker:latest" \
--kubernetes-helper-image "gitlab/gitlab-runner-helper:x86_64-latest" \
--output-limit 32768
It works fine and I can see the execution log in the GitLab UI.
In Kubernetes, I see that the runner pod is composed of 2 containers: helper and build. I expected to see the job execution log by watching the build container's logs, but that's not the case. I would like to centralize these job execution logs with a tool like Fluent Bit by reading the container's stdout.
If I start docker:latest alone (without a runner execution) in a pod deployed in the same Kubernetes cluster, I can see the logs on stdout. Any idea how to configure the stdout of the build container properly?
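For reference, this is how I am checking the build container's stdout (the pod name is from my environment and will differ):
kubectl logs -f <runner-pod-name> -c build    # does not show the job execution log
kubectl logs -f <runner-pod-name> -c helper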
Being a newbie, I was trying to install Calico with minikube.
I downloaded it from https://github.com/kubernetes/minikube/releases/tag/v0.20.0 onto my Ubuntu OS.
I tried the following commands to install it:
minikube start --network-plugin=cni
Then I downloaded https://github.com/projectcalico/calico/blob/master/v2.6/getting-started/kubernetes/installation/hosted/calicoctl.yaml into /usr/local/bin/ on my Ubuntu machine.
Finally, I tried to install it with
kubectl apply -f calico.yaml
But after that command, the terminal hung for a long time without any response.
I tried a couple of times, but the result was the same.
Please help; I am not able to install it.
First of all, I suggest getting the latest minikube version from here.
Once you are done getting your latest minikube, there are 2 ways to install and run Calico with minikube:
policy-only mode
networking (includes policy) mode
With policy-only mode (without networking):
minikube start --network-plugin=cni --host-only-cidr 172.17.17.1/24 \
--extra-config=kubelet.PodCIDR=192.168.0.0/16 \
--extra-config=proxy.ClusterCIDR=192.168.0.0/16 \
--extra-config=controller-manager.ClusterCIDR=192.168.0.0/16 \
--extra-config=controller-manager.CIDRAllocatorType=RangeAllocator \
--extra-config=controller-manager.AllocateNodeCIDRs=true
Then use kubectl apply -f https://github.com/projectcalico/calico/blob/master/v2.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/policy-only/1.7/calico.yaml
Or with networking mode (which also includes policy), which configures the networking and provides policy:
minikube start --network-plugin=cni --host-only-cidr 172.17.17.1/24 \
--extra-config=kubelet.PodCIDR=192.168.0.0/16 \
--extra-config=proxy.ClusterCIDR=192.168.0.0/16 \
--extra-config=controller-manager.ClusterCIDR=192.168.0.0/16
Then use kubectl apply -f https://github.com/projectcalico/calico/blob/master/v2.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
For more reference: https://github.com/projectcalico/calico/issues/1013#issuecomment-325689943
Hope that helps you get started.
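Once the manifest is applied, you can check that Calico came up with something like the following; the namespace and label are the usual ones for these manifests but may differ between Calico versions:
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get nodes   # nodes should become Ready once the CNI is up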
I have created a Google Dataproc cluster, but I now have a requirement to install Presto. Presto is provided as an initialization action on Dataproc here; how can I run this initialization action after the cluster has been created?
Most init actions would probably run even after the cluster is created (though I haven't tried the Presto init action).
I like to run clusters describe to get the instance names, then run something like gcloud compute ssh <NODE> -- -T sudo bash -s < presto.sh for each node. Reference: How to use SSH to run a shell script on a remote machine?.
Notes:
Everything after the -- is passed as arguments to the normal ssh command.
The -T means don't try to create an interactive session (otherwise you'll get a warning like "Pseudo-terminal will not be allocated because stdin is not a terminal.")
I use "sudo bash" because init actions scripts assume they're being run as root.
presto.sh must be a copy of the script on your local machine. You could alternatively ssh and gsutil cp gs://dataproc-initialization-actions/presto/presto.sh . && sudo bash presto.sh.
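As a rough sketch of that per-node loop (Dataproc instance names typically follow the <cluster>-m / <cluster>-w-N pattern, but verify them with gcloud dataproc clusters describe):
# fetch the init action locally once
gsutil cp gs://dataproc-initialization-actions/presto/presto.sh .
# then run it on every node of the cluster
for NODE in mycluster-m mycluster-w-0 mycluster-w-1; do
  gcloud compute ssh "$NODE" -- -T sudo bash -s < presto.sh
done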
But @Kanji Hara is correct in general. Spinning up a new cluster is pretty fast/painless, so we advocate using initialization actions when creating a cluster.
You could use the initialization-actions parameter.
Example:
gcloud dataproc clusters create $CLUSTERNAME \
--project $PROJECT \
--num-workers $WORKERS \
--bucket $BUCKET \
--master-machine-type $VMMASTER \
--worker-machine-type $VMWORKER \
--initialization-actions \
gs://dataproc-initialization-actions/presto/presto.sh \
--scopes cloud-platform
Maybe this script can help you: https://github.com/kanjih-ciandt/script-dataproc-datalab