We create the cluster using the following command
kops create cluster --node-count=3 --node-size=c5.2xlarge --master-count=3 --master-size=c5.xlarge --zones=eu-west-1a --name=${KOPS_CLUSTER_NAME} --yes
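It can take several minutes for the masters and nodes to come up after this command returns. A quick way to wait for and confirm a healthy cluster (a sketch, assuming KOPS_CLUSTER_NAME and KOPS_STATE_STORE are exported and your kops release supports --wait):
$ kops validate cluster --name=${KOPS_CLUSTER_NAME} --wait 10m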
We are using a kops-managed cluster. We export the kubeconfig using this command:
$ kops export kubecfg --admin --kubeconfig ~/workspace/kubeconfig --state=s3://YOUR-S3-BUCKET-NAME
It works fine for some time, but once the TTL on the kubeconfig expires we start getting the same error again:
error: You must be logged in to the server (Unauthorized)
Is there any way we can get rid of this annoying TTL?
After going through the docs, we found that we can pass the validity period of the kubeconfig credential as an argument:
$ kops export kubecfg --admin=87600h0m0s --kubeconfig ~/workspace/kubeconfig --state=s3://<bucket-name> --name=<cluster-name>
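To double-check that the exported credential really carries the longer validity, you can decode the embedded client certificate and read its expiry, then try a request (a sketch assuming the admin user is the first entry in that kubeconfig; use base64 --decode where -d is not accepted):
$ kubectl config view --kubeconfig ~/workspace/kubeconfig --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -noout -enddate
$ kubectl --kubeconfig ~/workspace/kubeconfig get nodes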
I'm trying to connect to a cluster and I'm getting the following error:
gcloud container clusters get-credentials cluster1 --region europe-west2 --project my-project
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable.
Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
kubeconfig entry generated for dbcell-cluster.
I have installed Google Cloud SDK 400, kubectl 1.22.12, gke-gcloud-auth-plugin 0.3.0, and also set up ~/.bashrc with
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gke-gcloud-auth-plugin --version
Kubernetes v1.24.0-alpha+f42d1572e39979f6f7de03bd163f8ec04bc7950d
but when I try to connect to the cluster I always get the same error. Any idea?
Thanks
The cluster exists in that region, and I also verified the env variable
with
echo $USE_GKE_GCLOUD_AUTH_PLUGIN
True
I installed the gke-gcloud-auth-plugin using gcloud components install... I do not know what more I can check.
gcloud components list
I solved the same problem by removing my current kubeconfig context for GCP.
Get your context name by running:
kubectl config get-contexts
Delete the context:
kubectl config delete-context CONTEXT_NAME
Reconfigure the credentials:
gcloud container clusters get-credentials CLUSTER_NAME --region REGION --project PROJECT
The warning message should be gone by now.
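To confirm that the regenerated entry is actually wired to the plugin, you can inspect the current context and try a request (a quick check; with the plugin in use, the user section of the kubeconfig should contain an exec block that calls gke-gcloud-auth-plugin):
kubectl config view --minify
kubectl get nodes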
I'm trying to validate a Kubernetes cluster using the kops validate command, but it throws this error:
Error: Validation failed: cannot load kubecfg settings for "demo.k8s.naveen.tk": context "demo.k8s.naveen.tk" does not exist
Did you run kops update cluster --name demo.k8s.naveen.tk --yes yet? That command should automatically create the context.
You can also run kops export kubecfg --name demo.k8s.naveen.tk --admin, which will create a temporary admin certificate.
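Once the context exists, validation should go through (same cluster name as above):
kops validate cluster --name demo.k8s.naveen.tk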
I'm trying to follow this step-by-step guide to deploy Airflow on Kubernetes (https://github.com/EamonKeane/airflow-GKE-k8sExecutor-helm), but at this part of the execution I run into problems, as follows:
Researching the topic, I have not found anything that solves my problem so far. Does anyone have any suggestions on what to do?
SQL_ALCHEMY_CONN=postgresql+psycopg2://$AIRFLOW_DB_USER:$AIRFLOW_DB_USER_PASSWORD@$KUBERNETES_POSTGRES_CLOUDSQLPROXY_SERVICE:$KUBERNETES_POSTGRES_CLOUDSQLPROXY_PORT/$AIRFLOW_DB_NAME
echo $SQL_ALCHEMY_CONN > /secrets/airflow/sql_alchemy_conn
# Create the fernet key, which is needed to encrypt/decrypt values stored in the database
FERNET_KEY=$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | openssl base64)
echo $FERNET_KEY > /secrets/airflow/fernet-key
kubectl create secret generic airflow \
--from-file=fernet-key=/secrets/airflow/fernet-key \
--from-file=sql_alchemy_conn=/secrets/airflow/sql_alchemy_conn
Unable to connect to the server: error executing access token command
"/google/google-cloud-sdk/bin/gcloud config config-helper --format=json":
err=exit status 1 output= stderr=ERROR: gcloud crashed (BadStatusLine): ''
If you would like to report this issue, please run the following command: gcloud feedback
To check gcloud for common problems, please run the following command: gcloud info --run-diagnostics
I solved this by opening a new Cloud Shell tab and connecting to the cluster:
gcloud container clusters get-credentials testcluster1 --zone=your_zone
Example:
Get the name and location of your cluster:
gcloud container clusters list
then
gcloud container clusters get-credentials demo --zone=us-west1-a
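You can then confirm the new context is active and reachable (a quick sanity check):
kubectl config current-context
kubectl get nodes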
If I run jx create cluster aws, it creates the cluster on AWS without any issues, but if I want to specify some options like this:
jx create cluster aws --zones us-east-2b --nodes=2 --node-size=t2.micro --master-size=t2.micro
Then it fails constantly, whatever I try to change, giving these kinds of errors for almost all options:
Error: unknown flag: --node-size, and the same for other options. The options were taken from here: https://jenkins-x.io/commands/jx_create_cluster_aws/
Setting up the cluster with kops with the same options works without any issues.
I asked about this in a comment, but the actual answer appears to be that you are on a version of jx that doesn't match the documentation. Because this is my experience with a freshly downloaded binary:
$ ./jx create cluster aws --verbose=true --zones=us-west-2a,us-west-2b,us-west-2c --cluster-name=sample --node-size=5 --master-size=m5.large
kops not found
kubectl not found
helm not found
? Missing required dependencies, deselect to avoid auto installing: [Use arrows to move, type to filter]
❯ ◉ kops
◉ kubectl
◉ helm
? nodes [? for help] (3)
^C
$ ./jx --version
1.3.90
you can see what version of jx you are using via:
jx version
you can check the options of a command via jx help create cluster aws or by browsing the online CLI reference for the command: jx create cluster aws
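If the installed binary turns out to be older than the documentation, upgrading it may expose the missing flags (a suggestion, assuming the jx upgrade cli subcommand is available in your jx version):
jx upgrade cli
jx create cluster aws --help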
I have created a Kubernetes cluster on Alibaba Cloud and would like to control it from a local client without SSH: run kubectl against the master/nodes, use kubernetes-dashboard, deploy manifests from local to the cloud, etc.
I know that we can use a kubeconfig, but I have no idea how to set it up; please help, thanks.
If you created a cluster using kubeadm for example, you will need to enter the instance through SSH and download the kube-apiserver client certificates and CA from /etc/kubernetes/pki.
Once you have them, you can add the configuration to kubeconfig using these commands (based on https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/10-configuring-kubectl.md). Make sure you replace the CA_CERTIFICATE, IP_ADDRESS_OF_YOUR_CLUSTER, CLIENT_CERTIFICATE and CLIENT_KEY placeholders (instead of admin you can choose another name for the credentials):
kubectl config set-cluster your-cluster \
--certificate-authority=CA_CERTIFICATE \
--embed-certs=true \
--server=https://IP_ADDRESS_OF_YOUR_CLUSTER:6443
kubectl config set-credentials admin \
--client-certificate=CLIENT_CERTIFICATE \
--client-key=CLIENT_KEY
kubectl config set-context your-cluster-context \
--cluster=your-cluster \
--user=admin
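Finally, switch to the new context and test access (a small addition in the spirit of the linked guide; the names match the placeholders above):
kubectl config use-context your-cluster-context
kubectl get nodes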
If you get authentication errors, then you used the incorrect certificates.
In addition, make sure that you open port 6443 in your cloud firewall, otherwise you will not be able to reach the API server.
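A rough way to check reachability of the API server port from your client before digging into certificates (an unauthenticated request should at least get a TLS/HTTP response instead of a timeout):
curl -k https://IP_ADDRESS_OF_YOUR_CLUSTER:6443/version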