I have an Azure Kubernetes cluster with Velero installed. A Service Principal was created for Velero, per option 1 of the instructions.
Velero was working fine until the credentials for the Service Principal were reset. Now the scheduled backups are failing.
NAME                                  STATUS   ERRORS   WARNINGS   CREATED                     EXPIRES   STORAGE LOCATION   SELECTOR
daily-entire-cluster-20210727030055   Failed   0        0          2021-07-26 23:00:55 -0000   13d       default            <none>
How can I update the secret for Velero?
1. Update credentials file
First, update your credentials file (for most providers, this is credentials-velero and the contents are described in the plugin installation instructions: AWS, Azure, GCP)
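Since the question is about Azure with a Service Principal, the file should look roughly like this after the credential reset (variable names as described in the Azure plugin docs; the values are placeholders, and AZURE_CLIENT_SECRET is the new Service Principal password):
AZURE_SUBSCRIPTION_ID=<subscription-id>
AZURE_TENANT_ID=<tenant-id>
AZURE_CLIENT_ID=<service-principal-app-id>
AZURE_CLIENT_SECRET=<new-service-principal-password>
AZURE_RESOURCE_GROUP=<resource-group-of-the-cluster-node-vms>
AZURE_CLOUD_NAME=AzurePublicCloud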
2. Update secret
Now update the Velero secret. On Linux:
kubectl patch -n velero secret cloud-credentials -p '{"data": {"cloud": "'$(base64 -w 0 credentials-velero)'"}}'
patch tells kubectl to update a resource by merging the provided data
-n velero tells kubectl to use the velero namespace
secret is the resource type
cloud-credentials is the name of the secret used by Velero to store credentials
-p specifies that the next argument is the patch data. It's more common to patch using JSON rather than YAML
'{"data": {"cloud": "<your-base64-encoded-secret-will-go-here>"}}' is the JSON patch data, matching the existing structure of the Velero secret in Kubernetes. <your-base64-encoded-secret-will-go-here> is a placeholder for the value produced by the command substitution explained next.
$(base64 -w 0 credentials-velero) base64-encodes the contents of the file credentials-velero in the current directory, with line wrapping disabled (-w 0), and inserts the result into the patch data.
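Afterwards you can confirm the secret was updated and, if needed, restart the Velero deployment so the pod picks up the new credentials. A sketch, assuming the default velero deployment name and namespace from the install:
kubectl get secret cloud-credentials -n velero -o jsonpath='{.data.cloud}' | base64 --decode
kubectl rollout restart deployment velero -n velero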
[ Note to reader: This is a long question, please bear with me ]
I've been messing with Cloud Run for Anthos as a side project, and have got a service up and running on http (not https) -- it performs as expected, scaling up and scaling down as traffic requires.
I now want to add SSL to it, so traffic comes in over SSL only, and I've been banging my head against a wall trying to get this to work, mainly centering on this link.
I have setup the cluster default domain (it shows up on the Cloud Run for Anthos page) and my service is accessible via http://myservice.mynamespace.mydomain.com (not my real domain, obviously)
I've patched the knative configs, enabling autoTLS and setting mydomain as needed:
kubectl patch configmap config-domain --namespace knative-serving --patch '{"data": {"mydomain.com": ""}}'
kubectl patch configmap config-domain --namespace knative-serving --patch '{"data": {"nip.io": null}}'
kubectl patch configmap config-domainmapping --namespace knative-serving --patch '{"data": {"autoTLS": "Enabled"}}'
kubectl patch configmap config-network --namespace knative-serving --patch '{"data": {"autoTLS": "Enabled"}}'
kubectl patch configmap config-network --namespace knative-serving --patch '{"data": {"httpProtocol": "Enabled"}}'
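For reference, the patched values can be double-checked with something like:
kubectl get configmap config-domain --namespace knative-serving -o yaml
kubectl get configmap config-network --namespace knative-serving -o yaml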
The documentation talks about using gcloud domain mapping:
gcloud run domain-mappings describe --domain DOMAIN
Yet this is only available in beta:
ERROR: (gcloud.run.domain-mappings.describe) This command group is in beta for fully managed Cloud Run; use `gcloud beta run domain-mappings`.
Using beta, however, also fails
ERROR: (gcloud.beta.run.domain-mappings.describe) NOT_FOUND: Resource 'mydomain.com' of kind 'DOMAIN_MAPPING' in region 'europe-west2' in project 'my-project' does not exist
Making my cluster zonal or regional makes no difference.
The documentation also mentions using kcert
kubectl get kcert
Yet this does not show anything until after I have deployed my service.
$ kubectl get kcert --all-namespaces
NAMESPACE   NAME                                          READY   REASON
default     route-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
And this NEVER shows ready. Trying to hit http://myservice.mynamespace.mydomain.com remains fine, but hitting https://myservice.mynamespace.mydomain.com returns:
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to myservice.mynamespace.mydomain.com:443
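In case it helps with diagnosis, the certificate status and the reason it isn't ready can be inspected with something like this (resource name taken from the kcert listing above):
kubectl describe kcert route-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --namespace default
kubectl get events --namespace default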
Can anyone guide me on why I can't get SSL working on this?
Related question -- is it possible to use the already-configured cert-manager cluster issuer instead? Such as described here?
I want to rotate my Kubernetes SSH access key using the commands from this page:
https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access
namely these:
kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --yes
When I run the last command, kops update cluster --yes, I get this error:
completed cluster failed validation: spec.spec.kubeProxy.enabled: Forbidden: kube-router requires kubeProxy to be disabled
Does anybody have any idea how I can change this secret key without disabling kubeProxy?
This problem comes from having set
spec:
  networking:
    kuberouter: {}
but not
spec:
  kubeProxy:
    enabled: false
in the cluster spec.
Export the config using kops get -o yaml > myspec.yaml and edit it according to the error above. Then you can apply the spec using kops replace -f myspec.yaml, as sketched below.
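Put together, the whole sequence looks roughly like this (myspec.yaml is just an example filename):
kops get -o yaml > myspec.yaml
# edit myspec.yaml: under spec, add
#   kubeProxy:
#     enabled: false
kops replace -f myspec.yaml
kops update cluster --yes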
It is considered a best practice to check the above yaml into version control to track any changes done to the cluster configuration.
Once the cluster spec has been amended, the new ssh key should work as well.
What version of Kubernetes are you running? If you are running the latest one, 1.18.x, the user is not admin but ubuntu.
One other thing you could do is to first edit the cluster and fix the kubeProxy spec as the error requires, run kops update cluster and a rolling update, and then delete and recreate the secret.
Being absolutely new to OpenShift, I'm curious how I can check whether any of the running containers are running in "privileged" mode (OpenShift v4.6). Digging through the documentation and searching the net, I could only find information regarding SCCs, which is great and all, but I didn't find anything about this (apart from an older version of OpenShift, where oc get pods, or a similar command, used to show whether a pod was running with such privileges).
By default pods use the Restricted SCC. The pod's SCC is determined by the User/ServiceAccount and/or Group. Then you also have to consider that an SA may or may not be bound to a Role, which can set a list of available SCCs.
To find out what SCC a pod runs under:
oc get pod $POD_NAME -o yaml | grep openshift.io/scc
The following commands can also be useful:
# get pod's SA name
oc get pod $POD_NAME -o yaml | grep serviceAccount:
# list service accounts that can use a particular SCC
oc adm policy who-can use scc privileged
# list users added by the oc adm policy command
oc get scc privileged -o yaml
# check roles and role bindings of your SA
# you need to look at rules.apiGroups: security.openshift.io
oc get rolebindings -o wide
oc get role $ROLE_NAME -o yaml
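If you would rather inspect the container spec directly instead of the SCC annotation, something like this should also work (standard JSONPath; prints namespace, pod, and each container's privileged flag, blank if unset):
oc get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].securityContext.privileged}{"\n"}{end}'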
An OpenShift project comes with 3 service accounts by default: builder, default, and deployer.
The containers you deploy to that namespace are assigned the default service account, which runs under the restricted SCC.
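You can list them with, for example (project name is a placeholder):
oc get serviceaccounts -n <your-project>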
You can find more here: https://www.openshift.com/blog/managing-sccs-in-openshift
I am trying to invoke kubectl from within my CI system. I wish to use a google cloud service account for authentication. I have a secret management system in place that injects secrets into my CI system.
However, my CI system does not have gcloud installed, and I do not wish to install that. It only contains kubectl. Is there any way that I can use a credentials.json file containing a gcloud service account (not a kubernetes service account) directly with kubectl?
The easiest way to skip the gcloud CLI is probably to use the --token option. You can get a token with RBAC by creating a service account and tying it to a ClusterRole or Role with either a ClusterRoleBinding or RoleBinding.
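For example, a rough sketch of creating such a service account, binding it, and pulling out its token (the name ci-deployer and the edit ClusterRole are placeholders; the last command assumes Kubernetes 1.24+):
kubectl create serviceaccount ci-deployer
kubectl create clusterrolebinding ci-deployer --clusterrole=edit --serviceaccount=default:ci-deployer
# older clusters: read the token from the auto-created secret
kubectl get secret $(kubectl get serviceaccount ci-deployer -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode
# Kubernetes 1.24+: request a token directly
kubectl create token ci-deployer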
Then from the command line:
$ kubectl --token <token-from-your-service-account> get pods
You will still need a context in your ~/.kube/config:
- context:
    cluster: kubernetes
  name: kubernetes-token
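For reference, that entry can be created with something like this (server address and CA path are placeholders):
kubectl config set-cluster kubernetes --server=https://<address-of-your-kube-api-server>:6443 --certificate-authority=/path/to/ca.crt
kubectl config set-context kubernetes-token --cluster=kubernetes
kubectl config use-context kubernetes-token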
Otherwise, you will have to use:
$ kubectl --insecure-skip-tls-verify --token <token-from-your-service-account> -s https://<address-of-your-kube-api-server>:6443 get pods
Note that if you don't want the token to show up on the logs you can do something like this:
$ kubectl --token $(cat /path/to/a/file/where/the/token/is/stored) get pods
Also, note that this doesn't prevent someone from running ps -Af on your box and grabbing the token from there for the lifetime of the kubectl process (it's a good idea to rotate the tokens).
Edit:
You can also avoid passing the token through $(cat /path/to/a/file/where/the/token/is/stored) by storing it in your kubeconfig with kubectl config set-credentials <user> --token=<token> and referencing that user from your context.
I'm trying to deploy an application in a GKE 1.6.2 cluster running ContainerOS but the instructions on the website / k8s are not accurate anymore.
The error that I'm getting is:
Error from server (Forbidden): User "circleci@gophers-slack-bot.iam.gserviceaccount.com"
cannot get deployments.extensions in the namespace "gopher-slack-bot".:
"No policy matched.\nRequired \"container.deployments.get\" permission."
(get deployments.extensions gopher-slack-bot)
The repository for the application is available here.
Thank you.
I had a few breaking changes in the past with using the gcloud tool to authenticate kubectl to a cluster, so I ended up figuring out how to auth kubectl to a specific namespace independent of GKE. Here's what works for me:
On CircleCI:
setup_kubectl() {
    echo "$KUBE_CA_PEM" | base64 --decode > kube_ca.pem
    kubectl config set-cluster default-cluster --server=$KUBE_URL --certificate-authority="$(pwd)/kube_ca.pem"
    kubectl config set-credentials default-admin --token=$KUBE_TOKEN
    kubectl config set-context default-system --cluster=default-cluster --user=default-admin --namespace default
    kubectl config use-context default-system
}
And here's how I get each of those env vars from kubectl.
kubectl get serviceaccounts --namespace $namespace -o json
The service account will contain the name of its secret. In my case, with the default namespace, it's
"secrets": [
{
"name": "default-token-655ls"
}
]
Using the name, I get the contents of the secret
kubectl get secrets $secret_name -o json
The secret will contain ca.crt and token fields, which match the $KUBE_CA_PEM and $KUBE_TOKEN in the shell script above.
Finally, use kubectl cluster-info to get the $KUBE_URL value.
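Put together, extracting the three values might look something like this (the default service account and namespace are assumptions; the token is base64-decoded, the CA is left encoded because the script above decodes it, and the server URL is read from kubectl config view rather than parsed out of cluster-info):
secret_name=$(kubectl get serviceaccount default --namespace default -o jsonpath='{.secrets[0].name}')
KUBE_CA_PEM=$(kubectl get secret $secret_name --namespace default -o jsonpath='{.data.ca\.crt}')
KUBE_TOKEN=$(kubectl get secret $secret_name --namespace default -o jsonpath='{.data.token}' | base64 --decode)
KUBE_URL=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')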
Once you run setup_kubectl on CI, your kubectl utility will be authenticated to the namespace you're deploying to.
In Kubernetes 1.6 and GKE, we introduced role-based access control (RBAC). The authors of your tool need to give the service account the ability to get deployments (along with probably quite a few other permissions) as part of its account creation.
https://kubernetes.io/docs/admin/authorization/rbac/
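As an illustration only (the role name is made up, the namespace and user come from the error above, and the exact verbs and resources your tool needs may differ), the missing permission could be granted with something like:
kubectl create role deployment-reader --namespace gopher-slack-bot --verb=get,list,watch --resource=deployments
kubectl create rolebinding circleci-deployment-reader --namespace gopher-slack-bot --role=deployment-reader --user=circleci@gophers-slack-bot.iam.gserviceaccount.com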