For the last two hours, I have been unable to delete a cluster with kops even though I have deleted the only EC2 instance I had as well as my S3 bucket.
When I type:
kubectl config get-contexts
I get:
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubecourse.k8s.local kubecourse.k8s.local kubecourse.k8s.local
Next I type:
kops delete cluster --yes
But get:
Error: --name is required
Usage:
kops delete cluster [CLUSTER] [flags]
Then I type:
kops delete cluster --name=kubecourse.k8s.local --yes
But get:
kops delete cluster --name=kubecourse.k8s.local
Error: State Store: Required value: Please set the --state flag or export KOPS_STATE_STORE.
For example, a valid value follows the format s3://<bucket>.
So I type:
kops delete cluster --state=s3://k8-course-london
But this time get:
Error: --name is required
Usage:
kops delete cluster [CLUSTER] [flags]
And I'm stuck in a cycle. Your help would be most appreciated.
It looks like the syntax used is wrong.
The right syntax is:
kops delete cluster --name=k8s.cluster.site --yes
https://kops.sigs.k8s.io/cli/kops_delete_cluster/
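Note that --name and --state (or the KOPS_STATE_STORE environment variable) have to be supplied together; passing only one of them produces the other error in turn, which is the cycle you are seeing. A sketch using the bucket from your question as the state store:
kops delete cluster --name=kubecourse.k8s.local --state=s3://k8-course-london --yes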
In CI, with the gcp auth plugin, I was using gcloud auth activate-service-account ***@developer.gserviceaccount.com --key-file ***.json before executing kubectl commands.
Now, with gke-gcloud-auth-plugin, I can't find an equivalent way to use a GCP service account key file.
I've installed gke-gcloud-auth-plugin, and gke-gcloud-auth-plugin --version gives me Kubernetes v1.25.2-alpha+ae91c1fc0c443c464a4c878ffa2a4544483c6d1f.
Would you know if there’s a way?
I tried to add this command:
kubectl config set-credentials my-user --auth-provider=gcp
But I still get:
error: The gcp auth plugin has been removed. Please use the "gke-gcloud-auth-plugin" kubectl/client-go credential plugin instead.
You will need to set the env variable to use the new plugin before doing the get-credentials:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $CLUSTER \
--region $REGION \
--project $PROJECT \
--internal-ip
I would not have expected the env variable to still be required (now that the gcp auth plugin is completely deprecated) - but it seems it still is.
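For a CI job that authenticates with a service account key, the whole sequence might look like this (a sketch based on the commands above; $SA_EMAIL and $KEY_FILE are placeholders for your service account and key file):
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
# activate the service account exactly as before; the plugin reuses gcloud's active credentials
gcloud auth activate-service-account "$SA_EMAIL" --key-file "$KEY_FILE"
# generate the kubeconfig entry using the new exec plugin
gcloud container clusters get-credentials "$CLUSTER" --region "$REGION" --project "$PROJECT"
# subsequent kubectl commands authenticate through gke-gcloud-auth-plugin
kubectl get nodes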
Your kubeconfig will end up looking like this if the new auth provider is in use.
...
- name: $NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      provideClusterInfo: true
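One quick way to confirm that the exec plugin is the one in use for the current context (a sketch, not required):
kubectl config view --minify -o jsonpath='{.users[0].user.exec.command}'
# should print: gke-gcloud-auth-plugin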
I've been using
helm install spinnaker stable/spinnaker -f spinnaker-config.yaml --timeout 1200s --version 2.0.0-rc9
which is the latest Helm chart for Spinnaker.
I'm using this on a freshly created Kubernetes cluster on GKE.
I just installed Helm, so I have the latest version.
The result is that it creates a Job named spinnaker-install-using-hal, and the pod for this job keeps restarting.
Container logs show:
/opt/halyard/scripts/config.sh: line 10: syntax error near unexpected token `newline'
I've found that this file is mounted from a ConfigMap named *-spinnaker-halyard-config.
The ConfigMap value for config.sh is set to:
# Spinnaker version
$HAL_COMMAND config version edit --version 1.19.4
# Storage
$HAL_COMMAND config storage gcs edit --project XXXXXXXXXX --json-path /opt/gcs/key.json --bucket <GCS-BUCKET-NAME>
$HAL_COMMAND config storage edit --type gcs
# Docker Registry
$HAL_COMMAND config provider docker-registry enable
if $HAL_COMMAND config provider docker-registry account get dockerhub; then
PROVIDER_COMMAND='edit'
else
PROVIDER_COMMAND='add'
fi
$HAL_COMMAND config provider docker-registry account $PROVIDER_COMMAND dockerhub --address index.docker.io \
\
--repositories library/alpine,library/ubuntu,library/centos,library/nginx
$HAL_COMMAND config provider kubernetes enable
if $HAL_COMMAND config provider kubernetes account get default; then
PROVIDER_COMMAND='edit'
else
PROVIDER_COMMAND='add'
fi
$HAL_COMMAND config provider kubernetes account $PROVIDER_COMMAND default --docker-registries dockerhub \
--context default --service-account true \
\
\
\
\
--omit-namespaces=kube-system,kube-public \
\
\
--provider-version v2
$HAL_COMMAND config deploy edit --account-name default --type distributed \
--location default
# Use Deck to route to Gate
$HAL_COMMAND config security api edit --no-validate --override-base-url /gate
$HAL_COMMAND config features edit --artifacts true
Line 9 of the script has the value <GCS-BUCKET-NAME> instead of the real bucket name, which probably caused the script to fail.
I'm still not sure what causes that value to not be populated.
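For anyone debugging the same thing, the rendered script can be dumped straight from the ConfigMap, e.g. (the <release-name> prefix is a placeholder for whatever your release is called):
kubectl get configmap <release-name>-spinnaker-halyard-config -o yaml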
Found the problem; maybe someone will find this helpful.
I was using the following guide: https://medium.com/velotio-perspectives/know-everything-about-spinnaker-how-to-deploy-using-kubernetes-engine-57090881c78f
It's a great guide, but I guess it's not quite up to date.
Anyway, it says you should configure:
storageBucket: $BUCKET
gcs:
  enabled: true
  project: $PROJECT
  jsonKey: '$SA_JSON'
This is incorrect; it should be as follows:
gcs:
  enabled: true
  bucket: $BUCKET
  project: $PROJECT
  jsonKey: '$SA_JSON'
This solved it.
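For completeness, after fixing the gcs block in spinnaker-config.yaml, the release can be re-applied with the same chart and flags as above (a sketch; helm upgrade --install re-renders the chart templates with the corrected values):
helm upgrade --install spinnaker stable/spinnaker -f spinnaker-config.yaml --timeout 1200s --version 2.0.0-rc9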
For my e2e tests I'm spinning up a separate cluster into which I'd like to import my production TLS certificate. I'm having trouble switching the context between the two clusters (export/get from one and import/apply into the other) because the cluster doesn't seem to be visible.
I extracted an MVCE using GitLab CI and the following .gitlab-ci.yml, where I create a secret for demonstration purposes:
stages:
  - main
  - tear-down
main:
  image: google/cloud-sdk
  stage: main
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json --project secret-transfer
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - kubectl create secret generic secret-1 --from-literal=key=value
    - gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - gcloud config set container/use_client_certificate True
    - gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
    - gcloud config set container/cluster secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl apply --cluster=secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -f secret-1.yml
tear-down:
  image: google/cloud-sdk
  stage: tear-down
  when: always
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters delete --quiet secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - gcloud container clusters delete --quiet secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
I added secret-transfer-[1/2]-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID before the kubectl statements in order to avoid the error no server found for cluster "secret-transfer-1-...-...", but it doesn't change the outcome.
I created a project secret-transfer, activated the Kubernetes API, and got a JSON key for the Compute Engine service account, which I'm providing in the environment variable GOOGLE_KEY. The output after checkout is:
$ echo "$GOOGLE_KEY" > key.json
$ gcloud config set project secret-transfer
Updated property [core/project].
$ gcloud auth activate-service-account --key-file key.json --project secret-transfer
Activated service account credentials for: [131478687181-compute@developer.gserviceaccount.com]
$ gcloud config set compute/zone us-central1-a
Updated property [compute/zone].
$ gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-1-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-1-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-1-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-1-9b219ea8-9.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
secret-transfer-1-9b219ea8-9 us-central1-a 1.12.8-gke.10 34.68.118.165 f1-micro 1.12.8-gke.10 3 RUNNING
$ kubectl create secret generic secret-1 --from-literal=key=value
secret/secret-1 created
$ gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-2-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-2-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-2-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-2-9b219ea8-9.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
secret-transfer-2-9b219ea8-9 us-central1-a 1.12.8-gke.10 104.198.37.21 f1-micro 1.12.8-gke.10 3 RUNNING
$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].
$ gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].
$ kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
error: no server found for cluster "secret-transfer-1-9b219ea8-9"
I'm expecting kubectl get secret to work because both clusters exist and the --cluster argument points to the right cluster.
Generally gcloud commands are used to manage gcloud resources and handle how you authenticate with gcloud, whereas kubectl commands affect how you interact with Kubernetes clusters, whether or not they happen to be running on GCP and/or created in GKE. As such, I would avoid doing:
$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].
$ gcloud config set container/cluster \
secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].
It's not doing what you probably think it's doing (namely, changing anything about how kubectl targets clusters), and might mess with how future gcloud commands work.
Another consequence of gcloud and kubectl being separate, and in particular of kubectl not knowing intimately about your gcloud settings, is that the cluster name from the gcloud perspective is not the same as from the kubectl perspective. When you do things like gcloud config set compute/zone, kubectl doesn't know anything about that, so it has to be able to uniquely identify clusters that may have the same name but live in different projects and zones, and may not even be in GKE (such as minikube or clusters on another cloud provider). That's why kubectl --cluster=<gke-cluster-name> <some_command> is not going to work, and why you're seeing the error message:
error: no server found for cluster "secret-transfer-1-9b219ea8-9"
As @coderanger pointed out, the cluster name that gets generated in your ~/.kube/config file after doing gcloud container clusters create ... has a more complex name, which currently has a pattern something like gke_[project]_[region]_[name].
So you could run commands with kubectl --cluster gke_[project]_[region]_[name] ... (or kubectl --context gke_[project]_[region]_[name] ..., which would be more idiomatic, although both happen to work in this case since you're using the same service account for both clusters); however, that requires knowing how gcloud generates these strings for context and cluster names.
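For example (a sketch; the context name below is inferred from the project, zone, and cluster names in your pipeline, so check it with kubectl config get-contexts first):
kubectl config get-contexts
kubectl --context gke_secret-transfer_us-central1-a_secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID get secret secret-1 -o yaml > secret-1.yml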
An alternative would be to do something like:
$ KUBECONFIG=~/.kube/config1 gcloud container clusters create \
secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
--project secret-transfer --machine-type=f1-micro
$ KUBECONFIG=~/.kube/config1 kubectl create secret generic secret-1 --from-literal=key=value
$ KUBECONFIG=~/.kube/config2 gcloud container clusters create \
secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
--project secret-transfer --machine-type=f1-micro
$ KUBECONFIG=~/.kube/config1 kubectl get secret secret-1 -o yaml > secret-1.yml
$ KUBECONFIG=~/.kube/config2 kubectl apply -f secret-1.yml
By having separate KUBECONFIG files that you control, you don't have to guess any strings. Setting the KUBECONFIG variable when creating a cluster will create that file and have gcloud put the credentials kubectl needs to access that cluster into it. Setting the KUBECONFIG environment variable when running a kubectl command ensures kubectl uses the context set in that particular file.
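As a side note, kubectl merges multiple kubeconfig files when KUBECONFIG is a colon-separated list, which can be handy for inspecting both clusters at once:
KUBECONFIG=~/.kube/config1:~/.kube/config2 kubectl config get-contexts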
You probably mean to be using --context rather than --cluster. The context sets both the cluster and the user in use. Additionally, the context and cluster (and user) names created by GKE are not just the cluster identifier; they follow the pattern gke_[project]_[region]_[name].
According to the documentation found here, I followed these steps:
Created and published the 3 Docker images by building them and pushing them to my Docker registry on Google Container Registry
Created a Kubernetes cluster with 3 nodes on Google Cloud Platform
Created a SQL instance on Google Cloud Platform using PostgreSQL version 9.6
Installed kubectl locally with gcloud components install kubectl
Ran helm .api/helm/api update
At the end, I run this command locally and get the following error:
helm install --name api ./api/helm/api \
--set php.repository=eu.gcr.io/my_projet_id/php \
--set nginx.repository=eu.gcr.io/my_project_id/nginx \
--set secret=mySecret \
--set postgresql.postgresPassword=myPostgresPassword \
--set postgresql.persistence.enabled=true \
--set corsAllowUrl='^https?://[a-z\]*\.my-domain.io$' \
--set postgresql.enabled=false \
--set postgresql.url=pgsql://my_db_user:my_db_user_password@ip_sql_instance/my_db_name?serverVersion=9.6
Error: release api failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default": Unknown user "system:serviceaccount:kube-system:default"
When I run kubectl get nodes, I get a list of the 3 default nodes created by Google Cloud Platform when I created the cluster.
Am I executing the right helm install command?
Which namespace should I use?