Error creating secrets when deploying beta from GitLab - Kubernetes

Before deploying, I created the namespace 'site-beta' in Kubernetes.
When deploying from GitLab, I receive this error:
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=secrets", GroupVersionKind: "/v1, Kind=Secret"
Name: "gitlab-registry-site-beta", Namespace: "site-beta"
from server for: "STDIN": secrets "gitlab-registry-site-beta" is forbidden: User "system:serviceaccount:site-prod:default" cannot get resource "secrets" in API group "" in the namespace "site-beta"
I don't understand why it uses system:serviceaccount:site-prod:default instead of system:serviceaccount:site-beta:default.
I would be grateful for any thoughts on this.
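For context, cross-namespace access like the one in the error message requires an explicit RBAC grant in the target namespace. A minimal sketch of a Role and RoleBinding (the resource names here are assumptions, not taken from the question) that would let the site-prod default ServiceAccount read secrets in site-beta:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader            # hypothetical name
  namespace: site-beta
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding    # hypothetical name
  namespace: site-beta
subjects:
- kind: ServiceAccount
  name: default
  namespace: site-prod           # the account named in the error
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

That said, since the surprise here is that the site-prod account is used at all, it is usually worth first checking which kubeconfig context or GitLab cluster integration the deploy job runs under, rather than widening permissions.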

Related

Kubernetes: Receiving Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443 while deploying Ignite

While deploying Apache Ignite on Kubernetes, I am getting the following error in the logs of the Apache Ignite pods:
Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/ignite-stack/endpoints/ignite-service
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263)
at org.apache.ignite.internal.kubernetes.connection.KubernetesServiceAddressResolver.getServiceAddresses(KubernetesServiceAddressResolver.java:109)
... 21 more
This kind of error:
Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443
usually happens when the ServiceAccount, the secret token for the ServiceAccount, and the ClusterRoleBinding are not in sync (there is some discrepancy, such as the token not belonging to the ServiceAccount, or the objects living in different namespaces). The API server then cannot validate the request as authentic and returns a 403 error.
In my case, the service account had the wrong namespace. My service account details show the namespace it was configured with:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-09-12T14:35:38Z"
  name: ignite
  namespace: ignite-stack
  resourceVersion: "854530"
  uid: b6177528-b05c-4d10-9446-09f54164ca39
secrets:
- name: ignite-secret
So make sure to check for these mistakes first.
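As a concrete illustration of keeping these objects in sync, here is a minimal sketch (names are assumptions) of a ClusterRole and ClusterRoleBinding that would allow the ignite ServiceAccount above to read the endpoints it queries:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ignite-endpoint-reader       # hypothetical name
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ignite-endpoint-reader       # hypothetical name
subjects:
- kind: ServiceAccount
  name: ignite
  namespace: ignite-stack            # must match the ServiceAccount's namespace
roleRef:
  kind: ClusterRole
  name: ignite-endpoint-reader
  apiGroup: rbac.authorization.k8s.io
```

The key detail is the namespace in the subjects entry: if it does not match the namespace the ServiceAccount actually lives in, the binding silently grants nothing and the 403 persists.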

Tekton dashboard deployment error - Error from server (Forbidden): error when creating "config/service.yaml"

I am trying to practice the steps locally using minikube. From the tekton-dashboard in my local setup, I am able to import the resource using a git URL. I followed all the steps, but when creating pipeline runs I get the error Error from server (Forbidden): error when creating "config/service.yaml". I tried using the default namespace as well as the tekton-pipelines namespace. I used tekton-dashboard as the serviceAccount.
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "myapp", Namespace: "tekton-pipelines"
from server for: "config/deployment.yaml": deployments.apps "myapp" is forbidden: User "system:serviceaccount:tekton-pipelines:tekton-dashboard" cannot get resource "deployments" in API group "apps" in the namespace "tekton-pipelines"
Error from server (Forbidden): error when creating "config/service.yaml": services is forbidden: User "system:serviceaccount:tekton-pipelines:tekton-dashboard" cannot create resource "services" in API group "" in the namespace "tekton-pipelines"
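The two messages say the tekton-dashboard ServiceAccount can neither get deployments nor create services in tekton-pipelines. A minimal sketch of a Role and RoleBinding (names are assumptions) that would grant exactly those permissions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pipeline-deployer            # hypothetical name
  namespace: tekton-pipelines
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "create", "patch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-deployer-binding    # hypothetical name
  namespace: tekton-pipelines
subjects:
- kind: ServiceAccount
  name: tekton-dashboard
  namespace: tekton-pipelines
roleRef:
  kind: Role
  name: pipeline-deployer
  apiGroup: rbac.authorization.k8s.io
```

Note that the dashboard's own ServiceAccount is not normally meant to deploy workloads; binding a dedicated ServiceAccount to the PipelineRun may be the cleaner route.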

GKE: Service account for Config Connector lacks permissions

I'm attempting to get Config Connector up and running on my GKE project and am following this getting started guide.
So far I have enabled the appropriate APIs:
> gcloud services enable cloudresourcemanager.googleapis.com
Created my service account and added policy binding:
> gcloud iam service-accounts create cnrm-system
> gcloud iam service-accounts add-iam-policy-binding cnrm-system@test-connector.iam.gserviceaccount.com --member="serviceAccount:test-connector.svc.id.goog[cnrm-system/cnrm-controller-manager]" --role="roles/iam.workloadIdentityUser"
> kubectl wait -n cnrm-system --for=condition=Ready pod --all
Annotated my namespace:
> kubectl annotate namespace default cnrm.cloud.google.com/project-id=test-connector
And then run through trying to apply the Spanner yaml in the example:
~ >>> kubectl describe spannerinstance spannerinstance-sample
Name:         spannerinstance-sample
Namespace:    default
Labels:       label-one=value-one
Annotations:  cnrm.cloud.google.com/management-conflict-prevention-policy: resource
              cnrm.cloud.google.com/project-id: test-connector
API Version:  spanner.cnrm.cloud.google.com/v1beta1
Kind:         SpannerInstance
Metadata:
  Creation Timestamp:  2020-09-18T18:44:41Z
  Generation:          2
  Resource Version:    5805305
  Self Link:           /apis/spanner.cnrm.cloud.google.com/v1beta1/namespaces/default/spannerinstances/spannerinstance-sample
  UID:
Spec:
  Config:        northamerica-northeast1-a
  Display Name:  Spanner Instance Sample
  Num Nodes:     1
Status:
  Conditions:
    Last Transition Time:  2020-09-18T18:44:41Z
    Message:               Update call failed: error fetching live state: error reading underlying resource: Error when reading or editing SpannerInstance "test-connector/spannerinstance-sample": googleapi: Error 403: Request had insufficient authentication scopes.
    Reason:                UpdateFailed
    Status:                False
    Type:                  Ready
Events:
  Type     Reason        Age    From                        Message
  ----     ------        ----   ----                        -------
  Warning  UpdateFailed  6m41s  spannerinstance-controller  Update call failed: error fetching live state: error reading underlying resource: Error when reading or editing SpannerInstance "test-connector/spannerinstance-sample": googleapi: Error 403: Request had insufficient authentication scopes.
I'm not really sure what's going on here, because my cnrm service account has ownership of the project my cluster is in, and I have the APIs listed in the guide enabled.
The CC pods themselves appear to be healthy:
~ >>> kubectl wait -n cnrm-system --for=condition=Ready pod --all
pod/cnrm-controller-manager-0 condition met
pod/cnrm-deletiondefender-0 condition met
pod/cnrm-resource-stats-recorder-58cb6c9fc-lf9nt condition met
pod/cnrm-webhook-manager-7658bbb9-kxp4g condition met
Any insight in to this would be greatly appreciated!
From the error message you have posted, I would suppose that it is an issue with your GKE scopes.
For GKE to access other GCP APIs, you must allow this access when creating the cluster. You can check the enabled scopes with the command:
gcloud container clusters describe <cluster-name>
and look for oauthScopes in the result.
Here you can see the scope names for Cloud Spanner; you must enable the scope https://www.googleapis.com/auth/cloud-platform as the minimum permission.
To verify in the GUI: Kubernetes Engine > <Cluster-name> > expand the Permissions section and look for Cloud Platform.
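A quick way to check from the command line is to look for a cloud-platform entry under oauthScopes in the describe output. A minimal sketch (the sample output below is a hypothetical stand-in; in practice, pipe the real `gcloud container clusters describe` output into the grep instead):

```shell
# Hypothetical sample of the relevant part of `gcloud container clusters
# describe` output (assumption for illustration; use the real output).
cat <<'EOF' > /tmp/cluster-describe.txt
nodeConfig:
  oauthScopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
EOF

# Report whether the broad cloud-platform scope is among the enabled scopes.
if grep -q 'auth/cloud-platform' /tmp/cluster-describe.txt; then
  echo "cloud-platform scope enabled"
else
  echo "cloud-platform scope missing"   # this sample prints this line
fi
```

Note that node OAuth scopes are fixed when a node pool is created, so if the scope is missing you generally need to create a new node pool (or cluster) with the scope enabled rather than editing it in place.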

GKE Config Connector StorageBucket resource times out on kubectl apply

I'm trying to apply the following StorageBucket resource from Google's sample manifest:
apiVersion: storage.cnrm.cloud.google.com/v1alpha2
kind: StorageBucket
metadata:
  labels:
    label-one: "value-one"
  name: dmacthedestroyer-hdjkwhekhjewkjeh-storagebucket-sample
spec:
  lifecycleRule:
  - action:
      type: Delete
    condition:
      age: 7
  versioning:
    enabled: true
  cors:
  - origin: ["http://example.appspot.com"]
    responseHeader: ["Content-Type"]
    method: ["GET", "HEAD", "DELETE"]
    maxAgeSeconds: 3600
The response times out with the following errors:
$ kubectl apply -f sample.yaml
Error from server (Timeout): error when creating "sample.yaml": Timeout: request did not complete within requested timeout 30s
UPDATE:
For some unknown reason, the error message has changed to this:
Error from server (InternalError): error when creating "sample.yaml": Internal error occurred: failed calling webhook "cnrm-deny-unknown-fields-webhook.google.com": Post https://cnrm-validating-webhook-service.cnrm-system.svc:443/deny-unknown-fields?timeout=30s: net/http: TLS handshake timeout
I've tested this on two different networks, with the same error result.
I installed the Config Connector components as described in their documentation, using a dedicated service account with the roles/owner permissions, exactly as stated in the above instructions.
I have successfully deployed IAMServiceAccount and IAMServiceAccountKey resources with this setup.
How should I proceed to troubleshoot this?
My issue was due to an incorrect service account configuration.
In particular, I was assigning the owner role to a different project.
After properly configuring my service account, the timeout errors are resolved.

I have problems with Kubernetes Cert-manager

I'm trying to deploy a website with Kubernetes. While deploying I get this error:
[exec kubectl]: Error from server (Forbidden): error when retrieving current configuration of:
Resource: "certmanager.k8s.io/v1alpha1, Resource=certificates", GroupVersionKind: "certmanager.k8s.io/v1alpha1, Kind=Certificate"
Name: "mysite", Namespace: "mysite-com-9777808"
Object: &{map["apiVersion":"certmanager.k8s.io/v1alpha1" "kind":"Certificate" "metadata":map["name":"mysite" "namespace":"mysite-com-9777808" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "spec":map["secretName":"mysite-cert" "acme":map["config":[map["domains":["mysite.com" "www.mysite.com"] "http01":map["ingressClass":"nginx"]]]] "dnsNames":["mysite.com" "www.mysite.com"] "issuerRef":map["kind":"ClusterIssuer" "name":"letsencrypt-production"]]]}
from server for: "/mase.ai/mysite.com/deployment/webapp.yml": certificates.certmanager.k8s.io "mysite" is forbidden: User "system:serviceaccount:mysite-com-9777808:mysite-com-9777808-service-account" cannot get certificates.certmanager.k8s.io in the namespace "mysite-com-9777808"
[deployment]: An error happened during the kubectl command.
Does anybody have an idea what the problem could be?
Any help is greatly appreciated!
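The error again points at missing RBAC: the namespace's ServiceAccount cannot get certificates.certmanager.k8s.io objects. A minimal sketch of a Role and RoleBinding (names are assumptions) that would grant that access to the ServiceAccount named in the error:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: certificate-manager            # hypothetical name
  namespace: mysite-com-9777808
rules:
- apiGroups: ["certmanager.k8s.io"]    # the CRD's API group from the error
  resources: ["certificates"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: certificate-manager-binding    # hypothetical name
  namespace: mysite-com-9777808
subjects:
- kind: ServiceAccount
  name: mysite-com-9777808-service-account
  namespace: mysite-com-9777808
roleRef:
  kind: Role
  name: certificate-manager
  apiGroup: rbac.authorization.k8s.io
```

For custom resources like Certificate, the apiGroups entry must name the CRD's group (here certmanager.k8s.io), not the empty core group.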