petset on GKE: could not find the requested resource - kubernetes

I want to experiment with PetSet on GKE.
I have a 1.3.5 Kubernetes cluster on GKE, but PetSet does not seem to be activated.
> kubectl get petset
Unable to list "petsets": the server could not find the requested resource
Do I need to activate the v1alpha1 features on GKE?

I'm using PetSet in zone europe-west1-d but got the error you're seeing when I tried in zone europe-west1-c.
Update:
Today, September 1, I got an email from Google Cloud Platform announcing that PetSet was "accidentally enabled" and will be disabled on September 30.
Dear Google Container Engine customer,
Google Container Engine clusters running Kubernetes 1.3.x versions accidentally enabled Kubernetes alpha features (e.g. PetSet), which are not production ready. Access to alpha features has already been disabled for clusters not using them, but cannot be safely disabled in clusters that are currently using alpha resources. The following clusters in projects owned by you have been identified as running alpha resources:
Please delete the alpha resources from your cluster. Continued usage of these features after September 30th may result in an unstable or broken cluster, as access to alpha features will be disabled.
The full list of unsupported alpha resources that are currently enabled (and will be disabled) is below:
Resource                 API Group
petset                   apps/v1alpha1
clusterrolebindings      rbac.authorization.k8s.io/v1alpha1
clusterroles             rbac.authorization.k8s.io/v1alpha1
rolebindings             rbac.authorization.k8s.io/v1alpha1
roles                    rbac.authorization.k8s.io/v1alpha1
poddisruptionbudgets     policy/v1alpha1
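If your cluster is on that list, cleanup amounts to listing and then deleting the alpha resources; a minimal sketch (the PetSet name and namespace here are hypothetical):
kubectl get petsets --all-namespaces
kubectl delete petset web --namespace=default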

Related

Istio on GKE in Autopilot mode

Hi there, I was reviewing GKE Autopilot mode and noticed that in the cluster configuration Istio is disabled and I'm not able to change it. Also, installation via istioctl install fails with the following error:
error installer failed to update resource with server-side apply for obj MutatingWebhookConfiguration//istio-sidecar-injector: mutatingwebhookconfigurations.admissionregistration.k8s.io "istio-sidecar-injector" is forbidden: User "something#example" cannot patch resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope: GKEAutopilot authz: cluster scoped resource "mutatingwebhookconfigurations/" is managed and access is denied
Am I correct that it's not possible to run Istio in GKE Autopilot mode?
TL;DR
It is not possible at the moment to run Istio in GKE Autopilot mode.
Conclusion
If you are using Autopilot, you don't need to manage your nodes. You don't have to worry about operations such as updating, scaling, or changing the operating system. However, Autopilot has a number of limitations.
Even if you try to install Istio with the command istioctl install, it will not be installed. You will see the following message:
This will install the Istio profile into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-ingressgateway
Pruning removed resources 2021-05-07T08:24:40.974253Z warn installer retrieving resources to prune type admissionregistration.k8s.io/v1beta1, Kind=MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "something#example" cannot list resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope: GKEAutopilot authz: cluster scoped resource "mutatingwebhookconfigurations/" is managed and access is denied not found
Error: failed to install manifests: errors occurred during operation
This command fails because, for sidecar injection, the installer tries to create a MutatingWebhookConfiguration called istio-sidecar-injector. This limitation is mentioned here.
For more information you can also read this page.
According to the documentation, it is not possible to create mutating admission webhooks:
You cannot create custom mutating admission webhooks for Autopilot clusters
Since Istio uses mutating webhooks to inject its sidecars, it will probably not work, which is consistent with the error you get.
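You can verify that the injection webhook was never created; a quick check:
kubectl get mutatingwebhookconfigurations
On an Autopilot cluster where the installation failed, istio-sidecar-injector will be absent from the output.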
According to the documentation this should be possible with GKE 1.21:
In GKE version 1.21.3-gke.900 and later, you can create validating and mutating dynamic admission webhooks. However, Autopilot modifies the admission webhooks objects to add a namespace selector which excludes the resources in managed namespaces (currently, kube-system) from being intercepted. Additionally, webhooks which specify one or more of the following resources (and any of their sub-resources) in the rules will be rejected:
group: ""                      resource: nodes
group: certificates.k8s.io     resource: certificatesigningrequests
group: authentication.k8s.io   resource: tokenreviews
https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#webhooks_limitations
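As a concrete illustration of that rule, here is a minimal sketch of a webhook manifest that Autopilot would reject because its rules target nodes (all names and the backing service are hypothetical):
kubectl apply -f - <<EOF
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-node-webhook          # hypothetical name
webhooks:
  - name: nodes.example.com           # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: example-svc             # hypothetical backing service
        namespace: default
        path: /mutate
    rules:
      - apiGroups: [""]               # core group
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["nodes"]          # targeting nodes triggers the rejection
EOF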

coordination.k8s.io api in GKE

I'm trying the leader-election code example provided with the go client (here) in a GKE cluster v1.13.7.
That requires a resource of type Lease with groupVersion coordination.k8s.io/v1, but there isn't one. I know that Lease was promoted to v1 in Kubernetes 1.14 (not yet available on GKE), but I expected to find the v1beta1 version.
I tried with:
kubectl proxy
curl -X GET localhost:8001/apis/coordination.k8s.io
and I got:
404 page not found
Although the resource is v1 in Kubernetes 1.14, GKE has not incorporated it yet.
Since GKE is a fully managed product, the engineering team decides which features to incorporate into the GKE offering.
I recommend opening a feature request through the Google Public Issue Tracker and providing your use case, so the feature can be integrated in future releases.
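In the meantime, you can list which API groups and versions the cluster actually serves; a quick check (assuming kubectl proxy is still running on port 8001):
kubectl api-versions | grep coordination.k8s.io
curl -s localhost:8001/apis/coordination.k8s.io/v1beta1
Both come back empty (or 404) until GKE ships the API group.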

How to install Kubernetes v1.10.11 on a GCP cluster?

There was recently a Kubernetes security hole that was patched in v1.10.11 (among other versions), so I would like to upgrade to that version. I am currently on v1.10.9. However, when running the command gcloud container get-server-config to get the list of valid node versions, v1.10.11 doesn't show up. Instead, it jumps straight from v1.10.9 to v1.11.2.
Does anyone have any idea why I cannot seem to use the usual gcloud container clusters upgrade [CLUSTER_NAME] --cluster-version [CLUSTER_VERSION] to upgrade to this version?
Thanks in advance!
Based on:
https://cloud.google.com/kubernetes-engine/docs/security-bulletins#december-3-2018
If you have Kubernetes v1.10.9, you should (to patch this security hole) upgrade your GKE cluster to 1.10.9-gke.5.
The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:
1.9.7-gke.11,
1.10.6-gke.11,
1.10.7-gke.11,
1.10.9-gke.5,
1.11.2-gke.18
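Once the patched version shows up in the server config, the upgrade itself is the usual pair of commands; a sketch (the cluster name and zone are hypothetical):
gcloud container get-server-config --zone europe-west1-b
gcloud container clusters upgrade my-cluster --master --cluster-version 1.10.9-gke.5 --zone europe-west1-b
gcloud container clusters upgrade my-cluster --cluster-version 1.10.9-gke.5 --zone europe-west1-b
The first upgrade targets the master; the second, without --master, upgrades the nodes.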
Also, please check the Scheduled master auto-upgrades option in GKE.
If it's enabled, your cluster masters were auto-upgraded by Google, and the next version available for upgrade is a later one, v1.11.2, which is what GKE is showing you.

How to change fluentd config for GKE-managed logging agent?

I have a container cluster in Google Container Engine with Stackdriver logging agent enabled. It is correctly pulling stdout logs from my containers. Now I would like to change the fluentd config to specify a log parser so that the logs shown in the GCP Logging view will have the correct severity and component.
Following this Stackdriver logging guide from kubernetes.io, I have attempted to:
Get the fluentd ConfigMap as a yml file
Add a new <filter> according to my log4js log format
Create a new ConfigMap named fluentd-cm-2 in the kube-system namespace
Edit the DaemonSet for fluentd and set its ConfigMap to fluentd-cm-2. I did this using kubectl edit ds instead of kubectl replace -f because the latter failed with the error message "the object has been modified", even after getting a fresh copy of the DaemonSet yaml.
Unexpected result: The DaemonSet is restarted, but its configuration is reverted back to the original ConfigMap, so my changes did not take effect.
I have also tried editing the ConfigMap directly (kubectl edit cm fluentd-gcp-config-v1.1 --namespace kube-system) and saved it, but it was also reverted.
I noticed that the DaemonSet and ConfigMap for fluentd are tagged with addonmanager.kubernetes.io/mode: Reconcile. I would conclude that GKE has overwritten my settings because of this "reconcile" mode.
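For reference, the label can be confirmed without editing anything:
kubectl get cm fluentd-gcp-config-v1.1 --namespace kube-system -o yaml | grep addonmanager
This prints addonmanager.kubernetes.io/mode: Reconcile; the same label appears on the fluentd DaemonSet.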
So, my question is: how can I change the fluentd configuration in a Google Container Engine cluster, when the logging agent was installed by GKE on cluster provisioning?
Please take a look at the Prerequisites section on the documentation page you mentioned. It's mentioned there that on GKE you cannot change the default Stackdriver Logging integration. The reason is that GKE maintains this configuration: it updates the agent, watches its health, and so on. It's not possible to provide the same level of support for all possible configurations.
However, you can always disable the default integration and deploy your own patched version of the DaemonSet. You can find out how to disable the default integration in the GKE documentation:
gcloud beta container clusters update [CLUSTER-NAME] \
--logging-service=none
Note that after you disable the default integration, you have to maintain the new deployment yourself: update the agent, set the resources, and watch its health.
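If you later want to hand logging back to GKE, the same update command should restore the managed agent (a sketch; verify the exact service value for your GKE version):
gcloud beta container clusters update [CLUSTER-NAME] \
--logging-service=logging.googleapis.com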
Here is a solution for using your own fluentd daemonset that is very much like the one included with GKE.
https://cloud.google.com/solutions/customizing-stackdriver-logs-fluentd

kubernetes petset on google cloud

I am running a Kubernetes cluster on Google Cloud (version 1.3.5).
I found a redis.yaml
that uses PetSet to create a Redis cluster, but when I run kubectl create -f redis.yaml I get the following error:
error validating "redis.yaml": error validating data: the server could not find the requested resource (get .apps); if you choose to ignore these errors, turn validation off with --validate=false
I can't figure out why I get this error or how to solve it.
PetSet is currently an alpha feature (which you can tell because the apiVersion in the linked yaml file is apps/v1alpha1). It may not be obvious, but alpha features are not supported in Google Container Engine.
As described in api_changes.md, alpha level API objects are disabled by default, have no guarantees that they will exist in future versions, can break compatibility with older versions at any time, and may destabilize the cluster.
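You can check whether an alpha API group is served before relying on it; a quick check:
kubectl api-versions | grep apps
apps/v1alpha1 will be missing from the output on clusters where alpha features are disabled.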
I'm using PetSet with some success, for example https://github.com/Yolean/kubernetes-mysql-cluster, in zone europe-west1-d but when I tried europe-west1-c I got the aforementioned error.
Google just enabled Alpha Clusters for GKE as announced here: https://cloud.google.com/container-engine/docs/alpha-clusters
Now you are able (though not covered by an SLA) to use all alpha features within an alpha cluster, which were disabled previously.
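Creating such a cluster is a single flag at creation time; a minimal sketch (the cluster name is hypothetical):
gcloud container clusters create my-alpha-cluster --enable-kubernetes-alpha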