GKE: Service account for Config Connector lacks permissions - kubernetes

I'm attempting to get Config Connector up and running on my GKE project and am following this getting started guide.
So far I have enabled the appropriate APIs:
> gcloud services enable cloudresourcemanager.googleapis.com
Created my service account and added policy binding:
> gcloud iam service-accounts create cnrm-system
> gcloud iam service-accounts add-iam-policy-binding cnrm-system@test-connector.iam.gserviceaccount.com --member="serviceAccount:test-connector.svc.id.goog[cnrm-system/cnrm-controller-manager]" --role="roles/iam.workloadIdentityUser"
> kubectl wait -n cnrm-system --for=condition=Ready pod --all
Annotated my namespace:
> kubectl annotate namespace default cnrm.cloud.google.com/project-id=test-connector
And then ran through trying to apply the Spanner YAML from the example:
~ >>> kubectl describe spannerinstance spannerinstance-sample
Name:         spannerinstance-sample
Namespace:    default
Labels:       label-one=value-one
Annotations:  cnrm.cloud.google.com/management-conflict-prevention-policy: resource
              cnrm.cloud.google.com/project-id: test-connector
API Version:  spanner.cnrm.cloud.google.com/v1beta1
Kind:         SpannerInstance
Metadata:
  Creation Timestamp:  2020-09-18T18:44:41Z
  Generation:          2
  Resource Version:    5805305
  Self Link:           /apis/spanner.cnrm.cloud.google.com/v1beta1/namespaces/default/spannerinstances/spannerinstance-sample
  UID:
Spec:
  Config:        northamerica-northeast1-a
  Display Name:  Spanner Instance Sample
  Num Nodes:     1
Status:
  Conditions:
    Last Transition Time:  2020-09-18T18:44:41Z
    Message:               Update call failed: error fetching live state: error reading underlying resource: Error when reading or editing SpannerInstance "test-connector/spannerinstance-sample": googleapi: Error 403: Request had insufficient authentication scopes.
    Reason:                UpdateFailed
    Status:                False
    Type:                  Ready
Events:
  Type     Reason        Age    From                        Message
  ----     ------        ----   ----                        -------
  Warning  UpdateFailed  6m41s  spannerinstance-controller  Update call failed: error fetching live state: error reading underlying resource: Error when reading or editing SpannerInstance "test-connector/spannerinstance-sample": googleapi: Error 403: Request had insufficient authentication scopes.
I'm not really sure what's going on here, because my cnrm service account has ownership of the project my cluster is in, and I have the APIs listed in the guide enabled.
The CC pods themselves appear to be healthy:
~ >>> kubectl wait -n cnrm-system --for=condition=Ready pod --all
pod/cnrm-controller-manager-0 condition met
pod/cnrm-deletiondefender-0 condition met
pod/cnrm-resource-stats-recorder-58cb6c9fc-lf9nt condition met
pod/cnrm-webhook-manager-7658bbb9-kxp4g condition met
Any insight into this would be greatly appreciated!

By the error message you have posted, I suppose that it might be an error in your GKE scopes.
For GKE to access other GCP APIs, you must allow this access when creating the cluster. You can check the enabled scopes with the command:
gcloud container clusters describe <cluster-name> and look in the result for oauthScopes.
Here you can see the scope name for Cloud Spanner; you must enable the scope https://www.googleapis.com/auth/cloud-platform as the minimum permission.
To verify in the GUI, you can see the permission in: Kubernetes Engine > <Cluster-name> > expand the Permissions section and look for Cloud Platform.

Related

Kubernetes Dashboard unknown field "seccompProfile" and error 503

I am a beginner with Kubernetes. I have enabled it from Docker Desktop and now I want to install Kubernetes Dashboard.
I followed this link:
https://github.com/kubernetes/dashboard#getting-started
And I executed my first command in PowerShell as an administrator:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
I get the following error:
error: error validating
"https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml":
error validating data:
ValidationError(Deployment.spec.template.spec.securityContext):
unknown field "seccompProfile" in
io.k8s.api.core.v1.PodSecurityContext; if you choose to ignore these
errors, turn validation off with --validate=false
In which case I tried to use the same command with --validate=false.
Then it went and gave no errors, and when I execute:
kubectl proxy
I got an access token using:
kubectl describe secret -n kube-system
and I try to access the link as provided in the guide:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
I get the following swagger response:
The error indicates that your cluster version is not compatible with seccompProfile.type: RuntimeDefault. In this case, don't apply the dashboard spec (https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml) right away; download it and comment out the following lines in the spec:
...
    spec:
      # securityContext:
      #   seccompProfile:
      #     type: RuntimeDefault
...
Then apply the updated spec with kubectl apply -f recommended.yaml.

Kubernetes: run container as a root

I do understand the drawbacks of doing this; however, I have an image that will work only with the root user running the cmd within it.
Server kubernetes version is: v1.19.14.
Inside my deployment.yaml I have:
spec:
  containers:
  - name: myapp
    securityContext:
      allowPrivilegeEscalation: false
      runAsUser: 0
    command: ...
    image: ...
But when I describe the rs I see the following:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 0s (x13 over 21s) replicaset-controller Error creating: pods "myapp-7cdd994c56-" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden]
What do I do wrong?
The error message says:
PodSecurityPolicy: unable to admit pod: [spec.containers[0].securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden]
Pod Security Policy is defined in the documentation as:
[...] a cluster-level resource that controls security sensitive aspects of the pod specification. The PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system [...]
You are using a cluster for which the Pod Security Policy forbids the use of root containers (see Pod Security Policy - Users and Groups).
You have to change the Pod Security Policy yourself or ask your cluster administrator to do so.
Note that:
PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25.
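An added example (not from the question or the answer above) showing the PodSecurityPolicy rule that produces this rejection, and the least-privilege way an administrator can permit UID 0 for this one workload: a second policy whose "use" permission is bound only to the deployment's service account, so every other pod stays under the restrictive policy. The policy names, the namespace myapp-ns and the service account myapp-sa are assumptions; the pod template must set serviceAccountName: myapp-sa, since it is the pod's service account (not the user who ran kubectl) that the admission controller checks when the replicaset-controller creates the pod.

```yaml
# Added example, not from the question or answer. The rule that
# produces the error above, and a second policy that allows UID 0
# only for one service account. All names here are assumed.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser: {rule: MustRunAsNonRoot}   # rejects runAsUser: 0
  seLinux: {rule: RunAsAny}
  supplementalGroups: {rule: RunAsAny}
  fsGroup: {rule: RunAsAny}
  volumes: [configMap, emptyDir, secret, projected, downwardAPI]
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: myapp-root
spec:
  privileged: false
  allowPrivilegeEscalation: false       # keep this blocked, as in the deployment
  runAsUser: {rule: RunAsAny}           # permits runAsUser: 0
  seLinux: {rule: RunAsAny}
  supplementalGroups: {rule: RunAsAny}
  fsGroup: {rule: RunAsAny}
  volumes: [configMap, emptyDir, secret, projected, downwardAPI]
---
# "use" on myapp-root is granted only to the workload's service account,
# so every other pod in the cluster is still held to "restricted".
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-myapp-root
rules:
- apiGroups: [policy]
  resources: [podsecuritypolicies]
  resourceNames: [myapp-root]
  verbs: [use]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-myapp-root
  namespace: myapp-ns
roleRef: {apiGroup: rbac.authorization.k8s.io, kind: ClusterRole, name: psp-myapp-root}
subjects:
- {kind: ServiceAccount, name: myapp-sa, namespace: myapp-ns}
```

On a cluster at this version you can confirm the binding took effect with kubectl auth can-i use podsecuritypolicy/myapp-root --as=system:serviceaccount:myapp-ns:myapp-sa before redeploying.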

GKE Config Connector StorageBucket resource times out on kubectl apply

I'm trying to apply the following StorageBucket resource from Google's sample manifest:
apiVersion: storage.cnrm.cloud.google.com/v1alpha2
kind: StorageBucket
metadata:
  labels:
    label-one: "value-one"
  name: dmacthedestroyer-hdjkwhekhjewkjeh-storagebucket-sample
spec:
  lifecycleRule:
    - action:
        type: Delete
      condition:
        age: 7
  versioning:
    enabled: true
  cors:
    - origin: ["http://example.appspot.com"]
      responseHeader: ["Content-Type"]
      method: ["GET", "HEAD", "DELETE"]
      maxAgeSeconds: 3600
The response times out with the following errors:
$ kubectl apply -f sample.yaml
Error from server (Timeout): error when creating "sample.yaml": Timeout: request did not complete within requested timeout 30s
UPDATE:
For some unknown reason, the error message has changed to this:
Error from server (InternalError): error when creating "sample.yaml": Internal error occurred: failed calling webhook "cnrm-deny-unknown-fields-webhook.google.com": Post https://cnrm-validating-webhook-service.cnrm-system.svc:443/deny-unknown-fields?timeout=30s: net/http: TLS handshake timeout
I've tested this on two different networks, with the same error result.
I installed the Config Connector components as described in their documentation, using a dedicated service account with the roles/owner permissions, exactly as stated in the above instructions.
I have successfully deployed IAMServiceAccount and IAMServiceAccountKey resources with this setup.
How should I proceed to troubleshoot this?
My issue was due to an incorrect service account configuration.
In particular, I was assigning the owner role to a different project.
After properly configuring my service account, the timeout errors are resolved.

openshift Crash Loop Back Off error with turbine-server

Hi, I created a project in OpenShift and attempted to add a turbine-server image to it. A Pod was added but I keep receiving the following error in the logs. I am very new to OpenShift and I would appreciate any advice or suggestions as to how to resolve this error. I can supply any further information that is required.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/booking/pods/turbine-server-2-q7v8l . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked..
How to diagnose
Make sure you have configured a service account, role, and role binding to the account. Make sure the service account is set to the pod spec.
spec:
  serviceAccountName: your-service-account
Start monitoring the atomic-openshift-node service on the node where the pod is deployed, and the API server.
$ journalctl -b -f -u atomic-openshift-node
Run the pod and monitor the journald output. You will see "Forbidden".
Jan 28 18:27:38 <hostname> atomic-openshift-node[64298]:
logging error output: "Forbidden (user=system:serviceaccount:logging:appuser, verb=get, resource=nodes, subresource=proxy)"
This means the service account appuser does not have the authorisation to do get on the nodes/proxy resource. Then update the role to allow the verb "get" on the resource.
- apiGroups: [""]
  resources:
  - "nodes"
  - "nodes/status"
  - "nodes/log"
  - "nodes/metrics"
  - "nodes/proxy"   # <----
  - "nodes/spec"
  - "nodes/stats"
  - "namespaces"
  - "events"
  - "services"
  - "pods"
  - "pods/status"
  verbs: ["get", "list", "watch"]   # "watch": "view" is a ClusterRole name, not an RBAC verb
Note that some resources are not in the default legacy "" group, as in Unable to list deployments resources using RBAC.
How to verify the authorisations
To verify who can execute a verb against a resource, for example the patch verb against pods:
$ oadm policy who-can patch pod
Namespace: default
Verb:      patch
Resource:  pods
Users:     auser
           system:admin
           system:serviceaccount:cicd:jenkins
Groups:    system:cluster-admins
           system:masters
OpenShift vs K8S
OpenShift has command oc policy or oadm policy:
oc policy add-role-to-user <role> <user-name>
oadm policy add-cluster-role-to-user <role> <user-name>
This is the same as a K8S role binding. You can use K8S RBAC, but the API version in OpenShift needs to be v1 instead of rbac.authorization.k8s.io/v1 as in K8s.
References
Managing Authorization Policies
Using RBAC Authorization
User and Role Management
Hi, thank you for the replies. I was able to resolve the issue by executing the following commands using the oc command line utility:
oc policy add-role-to-group view system:serviceaccounts -n <project>
oc policy add-role-to-group edit system:serviceaccounts -n <project>
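An added example, not part of the answer or the resolution above: the narrowest RBAC fix for the logged "Forbidden" GET on pods in namespace booking. It grants read access to pods in that one namespace to one service account, instead of the broad node-level role in the answer or an edit grant to every service account in the project. The service account name turbine-server, the image placeholder and the use of the upstream rbac.authorization.k8s.io/v1 API (which current OpenShift accepts) are assumptions.

```yaml
# Added example, not from the thread above. Least-privilege fix for
# the "Forbidden" GET on pods: the Role covers only pods in the one
# namespace and is bound to the one service account. Names are assumed;
# the namespace "booking" comes from the error message on the page.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: turbine-server
  namespace: booking
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: booking
rules:
- apiGroups: [""]                    # core API group: pods live here
  resources: ["pods"]
  verbs: ["get", "list", "watch"]    # read-only; no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: turbine-server-pod-reader
  namespace: booking
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: turbine-server
  namespace: booking
---
# The pod must run as that account, as the answer notes.
apiVersion: v1
kind: Pod
metadata:
  name: turbine-server
  namespace: booking
spec:
  serviceAccountName: turbine-server
  containers:
  - name: turbine-server
    image: TURBINE_SERVER_IMAGE      # placeholder: the image is not named on the page
```

Check the grant before deploying with kubectl auth can-i get pods -n booking --as=system:serviceaccount:booking:turbine-server (or oc policy who-can get pods -n booking); the same command with a verb such as delete should still answer no.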

spinnaker /halyard : Unable to communicate with the Kubernetes cluster

I am trying to deploy Spinnaker on multiple nodes. I have 2 VMs: the first with Halyard and kubectl, the second containing the Kubernetes master API.
My kubectl is well configured and able to communicate with the remote Kubernetes API;
"kubectl get namespaces" works:
kubectl get namespaces
NAME          STATUS   AGE
default       Active   16d
kube-public   Active   16d
kube-system   Active   16d
But when I run this cmd:
hal config provider -d kubernetes account add spin-kubernetes --docker-registries myregistry
I get this error:
Add the spin-kubernetes account
  Failure
Problems in default.provider.kubernetes.spin-kubernetes:
- WARNING You have not specified a Kubernetes context in your
  halconfig, Spinnaker will use "default-system" instead.
? We recommend explicitly setting a context in your halconfig, to
  ensure changes to your kubeconfig won't break your deployment.
? Options include:
  - default-system
! ERROR Unable to communicate with your Kubernetes cluster:
  Operation: [list] for kind: [Namespace] with name: [null] in namespace:
  [null] failed..
? Unable to authenticate with your Kubernetes cluster. Try using
  kubectl to verify your credentials.
- Failed to add account spin-kubernetes for provider
  kubernetes.
From the error message there seem to be two approaches to this: set your halconfig to talk to the default-system context, so it can communicate with your cluster, or the other way around, that is, configure your context.
Try this:
kubectl config view
I suppose you'll see the context and current context over there to be default-system, try changing those.
For more help do
kubectl config --help
I guess you're looking for the set-context option.
Hope that helps.
You can set this in your halconfig as mentioned by @Naim Salameh.
Another way is to try setting your K8S cluster info in your default Kubernetes config ~/.kube/config.
Not certain this will work since you are running halyard and kubectl on different VMs.
# ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    server: http://my-kubernetes-url
  name: my-k8s-cluster
contexts:
- context:
    cluster: my-k8s-cluster
    namespace: default
  name: my-context
current-context: my-context
kind: Config
preferences: {}
users: []