How to remove Custom Resources from Kubernetes?

I have configured Crossplane on my machine, created a cluster with a bunch of other resources, and now I am trying to clean everything up.
For instance, I am trying to delete a Cluster with kubectl delete clusters <cluster-name>, but it simply does not get removed. In the minikube dashboard I see the following condition on that cluster:
Type: Terminating
Reason: InstanceDeletionCheck
Message: could not confirm zero CustomResources remaining: timed out waiting for the condition
My goal is to clean up the managed resources created while playing with this GitHub repo: https://github.com/upbound/platform-ref-multi-k8s, and I would really appreciate any help.
This is the output of the kubectl describe cluster multik8s-cluster-aws-wd95c-qlbrt command:
Name: multik8s-cluster-aws-wd95c-qlbrt
Namespace:
Labels: crossplane.io/claim-name=multik8s-cluster-aws
crossplane.io/claim-namespace=default
crossplane.io/composite=multik8s-cluster-aws-wd95c
Annotations: crossplane.io/external-create-pending: 2022-05-03T08:36:12Z
crossplane.io/external-create-succeeded: 2022-05-03T08:36:14Z
crossplane.io/external-name: multik8s-cluster-aws
API Version: eks.aws.crossplane.io/v1beta1
Kind: Cluster
Metadata:
Creation Timestamp: 2022-05-03T08:24:13Z
Deletion Grace Period Seconds: 0
Deletion Timestamp: 2022-05-03T10:15:06Z
Finalizers:
finalizer.managedresource.crossplane.io
Generate Name: multik8s-cluster-aws-wd95c-
Generation: 6
Managed Fields:
...
Owner References:
API Version: multik8s.platformref.crossplane.io/v1alpha1
Controller: true
Kind: EKS
Name: multik8s-cluster-aws-wd95c-h2nbj
UID: 76852ac3-58a9-42ec-8307-c02e490e8f32
Resource Version: 507248
UID: f02fa30d-9878-4be9-bebc-838d7e58d565
Spec:
Deletion Policy: Delete
For Provider:
...
Status:
At Provider:
Arn: arn:aws:eks:us-west-2:305615705119:cluster/multik8s-cluster-aws
Created At: 2022-05-03T08:36:14Z
Endpoint: https://519EADEC62BE27B27903C30E01A8E22D.gr7.us-west-2.eks.amazonaws.com
Identity:
Oidc:
Issuer: https://oidc.eks.us-west-2.amazonaws.com/id/519EADEC62BE27B27903C30E01A8E22D
Platform Version: eks.6
Resources Vpc Config:
Cluster Security Group Id: sg-0b9baf2fff4385125
Vpc Id: vpc-0fca5959a43bbdf71
Status: ACTIVE
Conditions:
Last Transition Time: 2022-05-03T08:48:26Z
Message: update failed: cannot update EKS cluster version: InvalidParameterException: Unsupported Kubernetes minor version update from 1.21 to 1.16
Reason: ReconcileError
Status: False
Type: Synced
Last Transition Time: 2022-05-03T08:48:25Z
Reason: Available
Status: True
Type: Ready
Events: <none>
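The Synced: False condition above shows the provider is failing to reconcile the resource (the unsupported 1.21 to 1.16 version change), which can keep the deletion from ever completing. If the external EKS cluster is already gone, or you accept cleaning it up manually in AWS, a last-resort workaround is to remove the Crossplane finalizer so the object can be garbage-collected. A sketch (the resource name is taken from the describe output above; note that after removing the finalizer Crossplane will no longer delete the external resource for you):
# Inspect the finalizers blocking deletion
kubectl get cluster.eks.aws.crossplane.io multik8s-cluster-aws-wd95c-qlbrt -o jsonpath='{.metadata.finalizers}'
# Last resort: clear the finalizer so Kubernetes can remove the object
kubectl patch cluster.eks.aws.crossplane.io multik8s-cluster-aws-wd95c-qlbrt \
  --type merge -p '{"metadata":{"finalizers":[]}}'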

Related

Configuring RBAC for kubernetes

I used the following guide to set up my chaostoolkit cluster: https://chaostoolkit.org/deployment/k8s/operator/
I am attempting to kill a pod using Kubernetes; however, I get the following error:
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb\" cannot list resource \"pods\" in API group \"\" in the namespace \"task-dispatcher\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
I set my serviceAccountName to a service account with an RBAC role that I created, but for some reason Kubernetes defaults to "system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb".
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-run
data:
  experiment.yaml: |
    ---
    version: 1.0.0
    title: Terminate Pod Experiment
    description: If a pod gets terminated, a new one should be created in its place in a reasonable amount of time.
    tags: ["kubernetes"]
    secrets:
      k8s:
        KUBERNETES_CONTEXT: "docker-desktop"
    method:
    - type: action
      name: terminate-k8s-pod
      provider:
        type: python
        module: chaosk8s.pod.actions
        func: terminate_pods
        arguments:
          label_selector: ''
          name_pattern: my-release-rabbitmq-[0-9]$
          rand: true
          ns: default
---
apiVersion: chaostoolkit.org/v1
kind: ChaosToolkitExperiment
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-crd
spec:
  serviceAccountName: test-user
  automountServiceAccountToken: false
  pod:
    image: chaostoolkit/chaostoolkit:full
    imagePullPolicy: IfNotPresent
    experiment:
      configMapName: my-chaos-exp
      configMapExperimentFileName: experiment.yaml
    restartPolicy: Never
The error shown is for the default service account ("chaostoolkit-b3af262edb"), so it looks like the role associated with it does not have the proper permissions.
The service account "test-user" that is used in the ChaosToolkitExperiment definition should have a role granting it access to list and delete pods.
Please specify a service account that has the proper role access.
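For reference, a minimal Role and RoleBinding sketch granting the pod permissions the error message complains about. The task-dispatcher namespace comes from the error above; the object names are hypothetical, and the assumption that test-user lives in chaostoolkit-run is based on the service account named in the error, so adjust to your setup:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-chaos            # hypothetical name
  namespace: task-dispatcher
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-chaos-binding    # hypothetical name
  namespace: task-dispatcher
subjects:
- kind: ServiceAccount
  name: test-user
  namespace: chaostoolkit-run   # assumption: where test-user was created
roleRef:
  kind: Role
  name: pod-chaos
  apiGroup: rbac.authorization.k8s.io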

cert-manager challenge stuck in waiting `Waiting for http-01 challenge propagation: failed to perform self check GET request`

I have a challenge that is failing with:
Waiting for http-01 challenge propagation: failed to perform self check GET request
How can I resolve this, or at least diagnose it further?
Deleting the challenge results in a new challenge being created with the same error. Surprisingly, the URL responds with a correct HTTP 200 and token (http://testabcxyz.ddns.net/.well-known/acme-challenge/8_F7kwZBcjgXPV2pq8GlxHrIcO_WJoNBtyf1hEr4lhk).
What is responsible for initiating the self check?
kubectl describe challenges --all-namespaces
Name: testabcxyzingress-cert-1968456099-91847910-2604628612
Namespace: local-testing
Labels: <none>
Annotations: <none>
API Version: acme.cert-manager.io/v1alpha3
Kind: Challenge
Metadata:
Creation Timestamp: 2020-04-30T15:13:37Z
Finalizers:
finalizer.acme.cert-manager.io
Generation: 1
Owner References:
API Version: acme.cert-manager.io/v1alpha2
Block Owner Deletion: true
Controller: true
Kind: Order
Name: testabcxyzingress-cert-1968456099-91847910
UID: 93838384-6f45-42d9-a32f-3b051fad55c4
Resource Version: 1089800
Self Link: /apis/acme.cert-manager.io/v1alpha3/namespaces/local-testing/challenges/testabcxyzingress-cert-1968456099-91847910-2604628612
UID: ac318c10-85ce-4a20-b178-a307fd20a039
Spec:
Authz URL: https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/52738879
Dns Name: testabcxyz.ddns.net
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt-staging
Key: zzzzzzzzzzzzzzzzzzzzzzzzzzzzz
Solver:
http01:
Ingress:
Class: nginx
Token: zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
Type: http-01
URL: https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/52738879/kysudg
Wildcard: false
Status:
Presented: true
Processing: true
Reason: Waiting for http-01 challenge propagation: failed to perform self check GET request 'http://testabcxyz.ddns.net/.well-known/acme-challenge/8_F7kwZBcjgXPV2pq8GlxHrIcO_WJoNBtyf1hEr4lhk': Get "http://testabcxyz.ddns.net/.well-known/acme-challenge/8_F7kwZBcjgXPV2pq8GlxHrIcO_WJoNBtyf1hEr4lhk": dial tcp 174.138.100.234:80: connect: connection timed out
State: pending
Events: <none>
This eventually resolved itself; I can't remember what I did, though. I think there was an additional error message when doing a kubectl describe orders.
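To answer the "what is responsible" question: the self check is performed by the cert-manager controller pod, from inside the cluster. The connection timed out on port 80 in the Reason above often means the cluster cannot reach its own external load-balancer IP from inside (a hairpin-NAT limitation on some providers). One way to confirm is to issue the same GET from a throwaway pod in the cluster; a sketch, using the challenge URL from the describe output:
kubectl run curl-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://testabcxyz.ddns.net/.well-known/acme-challenge/8_F7kwZBcjgXPV2pq8GlxHrIcO_WJoNBtyf1hEr4lhk
If this times out while the same curl from outside succeeds, the problem is in-cluster routing to the external IP rather than cert-manager itself.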

Cannot use BackendConfig on GKE

I have an application that I deploy to Kubernetes (Google Kubernetes Engine), to which I'm trying to add Google's CDN. For this I'm adding a BackendConfig, but when my GitLab pipeline tries to apply it, the following error is returned.
$ kubectl apply -f backend-config.yaml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "cloud.google.com/v1beta1, Resource=backendconfigs", GroupVersionKind: "cloud.google.com/v1beta1, Kind=BackendConfig"
I strongly suspect that the account the pipeline is running under does not have enough privileges to access backend configs. Being new to k8s and GKE, I'm not sure how to fix this, especially as I cannot find which permission is needed for this.
Edit
I added a kubectl get backendconfigs step to my pipeline and that fails with the same error. Running the same command from my gcloud SDK environment works.
Note the cluster is managed by GitLab and uses RBAC. My understanding is that GitLab creates per-namespace service accounts in k8s with the edit role.
Edit 2
Added ClusterRole and ClusterRoleBinding based on Arghya's answer.
Output of $ kubectl get crd
NAME CREATED AT
backendconfigs.cloud.google.com 2020-01-09T15:37:27Z
capacityrequests.internal.autoscaling.k8s.io 2020-04-28T11:15:26Z
certificaterequests.cert-manager.io 2020-01-15T06:53:47Z
certificates.cert-manager.io 2020-01-15T06:53:48Z
challenges.acme.cert-manager.io 2020-01-15T06:53:48Z
challenges.certmanager.k8s.io 2020-01-09T15:47:01Z
clusterissuers.cert-manager.io 2020-01-15T06:53:48Z
clusterissuers.certmanager.k8s.io 2020-01-09T15:47:01Z
issuers.cert-manager.io 2020-01-15T06:53:48Z
issuers.certmanager.k8s.io 2020-01-09T15:47:01Z
managedcertificates.networking.gke.io 2020-01-09T15:37:53Z
orders.acme.cert-manager.io 2020-01-15T06:53:48Z
orders.certmanager.k8s.io 2020-01-09T15:47:01Z
scalingpolicies.scalingpolicy.kope.io 2020-01-09T15:37:53Z
updateinfos.nodemanagement.gke.io 2020-01-09T15:37:53Z
Output of kubectl describe crd backendconfigs.cloud.google.com
Name: backendconfigs.cloud.google.com
Namespace:
Labels: <none>
Annotations: <none>
API Version: apiextensions.k8s.io/v1beta1
Kind: CustomResourceDefinition
Metadata:
Creation Timestamp: 2020-01-09T15:37:27Z
Generation: 1
Resource Version: 198
Self Link: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/backendconfigs.cloud.google.com
UID: f0bc780a-32f5-11ea-b7bd-42010aa40111
Spec:
Conversion:
Strategy: None
Group: cloud.google.com
Names:
Kind: BackendConfig
List Kind: BackendConfigList
Plural: backendconfigs
Singular: backendconfig
Scope: Namespaced
Validation:
Open APIV 3 Schema:
Properties:
API Version:
Description: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
Type: string
Kind:
Description: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
Type: string
Metadata:
Type: object
Spec:
Description: BackendConfigSpec is the spec for a BackendConfig resource
Properties:
Cdn:
Description: CDNConfig contains configuration for CDN-enabled backends.
Properties:
Cache Policy:
Description: CacheKeyPolicy contains configuration for how requests to a CDN-enabled backend are cached.
Properties:
Include Host:
Description: If true, requests to different hosts will be cached separately.
Type: boolean
Include Protocol:
Description: If true, http and https requests will be cached separately.
Type: boolean
Include Query String:
Description: If true, query string parameters are included in the cache key according to QueryStringBlacklist and QueryStringWhitelist. If neither is set, the entire query string is included and if false the entire query string is excluded.
Type: boolean
Query String Blacklist:
Description: Names of query strint parameters to exclude from cache keys. All other parameters are included. Either specify QueryStringBlacklist or QueryStringWhitelist, but not both.
Items:
Type: string
Type: array
Query String Whitelist:
Description: Names of query string parameters to include in cache keys. All other parameters are excluded. Either specify QueryStringBlacklist or QueryStringWhitelist, but not both.
Items:
Type: string
Type: array
Type: object
Enabled:
Type: boolean
Required:
enabled
Type: object
Connection Draining:
Description: ConnectionDrainingConfig contains configuration for connection draining. For now the draining timeout. May manage more settings in the future.
Properties:
Draining Timeout Sec:
Description: Draining timeout in seconds.
Format: int64
Type: integer
Type: object
Iap:
Description: IAPConfig contains configuration for IAP-enabled backends.
Properties:
Enabled:
Type: boolean
Oauthclient Credentials:
Description: OAuthClientCredentials contains credentials for a single IAP-enabled backend.
Properties:
Client ID:
Description: Direct reference to OAuth client id.
Type: string
Client Secret:
Description: Direct reference to OAuth client secret.
Type: string
Secret Name:
Description: The name of a k8s secret which stores the OAuth client id & secret.
Type: string
Required:
secretName
Type: object
Required:
enabled
oauthclientCredentials
Type: object
Security Policy:
Type: object
Session Affinity:
Description: SessionAffinityConfig contains configuration for stickyness parameters.
Properties:
Affinity Cookie Ttl Sec:
Format: int64
Type: integer
Affinity Type:
Type: string
Type: object
Timeout Sec:
Format: int64
Type: integer
Type: object
Status:
Type: object
Version: v1beta1
Versions:
Name: v1beta1
Served: true
Storage: true
Status:
Accepted Names:
Kind: BackendConfig
List Kind: BackendConfigList
Plural: backendconfigs
Singular: backendconfig
Conditions:
Last Transition Time: 2020-01-09T15:37:27Z
Message: no conflicts found
Reason: NoConflicts
Status: True
Type: NamesAccepted
Last Transition Time: <nil>
Message: the initial names have been accepted
Reason: InitialNamesAccepted
Status: True
Type: Established
Stored Versions:
v1beta1
Events: <none>
Create a ClusterRole and ClusterRoleBinding for service account example-sa in namespace example-namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backendconfig-role
rules:
- apiGroups: ["cloud.google.com"]
  resources: ["backendconfigs"]
  verbs: ["get", "watch", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backendconfig-rolebinding
subjects:
- kind: ServiceAccount
  name: example-sa
  namespace: example-namespace
roleRef:
  kind: ClusterRole
  name: backendconfig-role
  apiGroup: rbac.authorization.k8s.io
To check that the permission has been applied:
kubectl auth can-i get backendconfigs --as=system:serviceaccount:example-namespace:example-sa -n example-namespace
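Note that kubectl apply needs more than get: it first retrieves the current object (which is where the Forbidden error above comes from) and then creates or patches it. If the pipeline still fails once get is granted, a likely next step is to widen the verbs in the rule; a sketch of the amended rule:
rules:
- apiGroups: ["cloud.google.com"]
  resources: ["backendconfigs"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]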

Kubernetes VPA failed to fetch list of containers. Reason: context deadline exceeded. Last server error: <nil>

Hi, I am trying to deploy a VPA object for one of my deployments, but when I describe it, the conditions show the following error:
Status:
Conditions:
Last Transition Time: 2020-01-08T13:03:55Z
Message: Fetching history failed: Failed to fetch list of containers. Reason: context deadline exceeded. Last server error: <nil>
Reason: 2020-01-08T13:03:55Z
Status: False
Type: FetchingHistory
Last Transition Time: 2020-01-08T13:03:55Z
Status: False
Type: LowConfidence
Last Transition Time: 2020-01-08T13:03:55Z
Message: No pods match this VPA object
Reason: NoPodsMatched
Status: True
Type: NoPodsMatched
Last Transition Time: 2020-01-08T13:03:55Z
Status: True
Type: RecommendationProvided
Here is my VPA file:
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: xtenter-vpa
  namespace: mynamespace
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: mydeploy
  updatePolicy:
    updateMode: "Auto"
One more scenario I found: when I run the VPA with updatePolicy mode set to "Off", I get a different error in the VPA status.
Conditions:
Last Transition Time: 2020-01-10T06:53:55Z
Message: Fetching history failed: Failed to fetch list of containers. Reason: Failed to listTimeSerises with retries. Last server error: rpc error: code = InvalidArgument desc = Name must begin with '{resource_container_type}/{resource_container_id}', got: projects/
Reason: 2020-01-10T06:53:55Z
Status: False
Type: FetchingHistory
Last Transition Time: 2020-01-10T06:44:55Z
Status: False
Type: LowConfidence
Last Transition Time: 2020-01-10T06:44:55Z
Message: No pods match this VPA object
Reason: NoPodsMatched
Status: True
Type: NoPodsMatched
Last Transition Time: 2020-01-10T06:44:55Z
Status: True
Type: RecommendationProvided
Cluster details :
version: 1.13.11-gke.14
Stackdriver Kubernetes Engine Monitoring: Disabled
Legacy Stackdriver Logging: Enabled
Could you please help me understand what is the root cause here?
I was able to deploy VPA without any error in a GKE cluster version 1.14.7-gke.25 by following this documentation. I suggest you try the same; this will probably help you identify any issues with your current configuration.
Thanks Imtiaz, I will try that, but there is an issue in the Kubernetes autoscaler on 1.13.11-gke.14 that is due to be fixed. Please check the link below:
https://github.com/kubernetes/autoscaler/issues/2725#issuecomment-575134855
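Independent of the history-fetch error, the NoPodsMatched condition in both status outputs is worth ruling out first: the VPA only produces useful recommendations if the targetRef resolves to running pods. A quick sketch to verify that (deployment and namespace names taken from the manifest above):
# Does the Deployment exist under the referenced API group/version?
kubectl -n mynamespace get deployment mydeploy
# Inspect the Deployment's pod selector, then confirm pods with those labels exist
kubectl -n mynamespace get deployment mydeploy -o jsonpath='{.spec.selector.matchLabels}'
kubectl -n mynamespace get pods --show-labels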

Deploying MongoDB in kubernetes does not create pods/services

I'm following this
https://github.com/mongodb/mongodb-enterprise-kubernetes
and
https://docs.opsmanager.mongodb.com/current/tutorial/install-k8s-operator/
to deploy mongodb inside a Kubernetes cluster on DigitalOcean.
So far everything has worked except the last step: deploying MongoDB. I'm trying to do it as suggested in the documentation:
---
apiVersion: mongodb.com/v1
kind: MongoDbReplicaSet
metadata:
  name: mongodb-rs
  namespace: mongodb
spec:
  members: 3
  version: 4.0.4
  persistent: true
  project: project-0
  credentials: mongodb-do-ops
It doesn't work. The resource of type MongoDbReplicaSet is created, but none of the pods and services described in the docs are deployed.
kubectl --kubeconfig="iniside-k8s-test-kubeconfig.yaml" describe MongoDbReplicaSet mongodb-rs -n mongodb
Name: mongodb-rs
Namespace: mongodb
Labels: <none>
Annotations: <none>
API Version: mongodb.com/v1
Kind: MongoDbReplicaSet
Metadata:
Creation Timestamp: 2018-11-21T21:35:30Z
Generation: 1
Resource Version: 2948350
Self Link: /apis/mongodb.com/v1/namespaces/mongodb/mongodbreplicasets/mongodb-rs
UID: 5e83c7b0-edd5-11e8-88f5-be6ffc4e4dde
Spec:
Credentials: mongodb-do-ops
Members: 3
Persistent: true
Project: project-0
Version: 4.0.4
Events: <none>
I got it working.
As stated in the documentation here:
https://docs.opsmanager.mongodb.com/current/tutorial/install-k8s-operator/
data.projectName
is not optional. After looking at the operator logs, the operator couldn't create the replica set deployment because projectName was missing from the ConfigMap.
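For reference, a sketch of the project ConfigMap with projectName set, following the field names in the Ops Manager operator documentation linked above; the baseUrl and orgId values here are placeholders for your own Ops Manager deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: project-0            # referenced by spec.project in the MongoDbReplicaSet
  namespace: mongodb
data:
  projectName: project-0                          # required: the Ops Manager project name
  baseUrl: https://ops-manager.example.com:8080   # placeholder: your Ops Manager URL
  orgId: ""                                       # optional: Ops Manager organization id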