K8S API cloud.google.com not available in GKE v1.16.13-gke.401

I am trying to create a BackendConfig resource on a GKE cluster v1.16.13-gke.401 but it gives me the following error:
unable to recognize "backendconfig.yaml": no matches for kind "BackendConfig" in version "cloud.google.com/v1"
I have checked the available apis with the kubectl api-versions command and cloud.google.com is not available. How can I enable it?
I want to create a BackendConfig with a custom health check like this:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 8
    timeoutSec: 1
    healthyThreshold: 1
    unhealthyThreshold: 3
    type: HTTP
    requestPath: /health
    port: 10257
And attach this BackendConfig to a Service like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
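For reference, a fuller Service manifest with that annotation could look like the sketch below; the Service name, selector and ports are placeholders I added, not values from the original question:
apiVersion: v1
kind: Service
metadata:
  name: my-service                  # placeholder name
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: NodePort                    # NodePort (or a NEG-annotated ClusterIP) is typically used with GKE Ingress
  selector:
    app: my-app                     # placeholder selector
  ports:
  - port: 80
    targetPort: 8080                # placeholder ports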

As mentioned in the comments, the issue was caused by the lack of the HTTP Load Balancing add-on in your cluster.
When you create a GKE cluster with all default settings, features like HTTP Load Balancing are enabled.
The HTTP Load Balancing add-on is required to use the Google Cloud Load Balancer with Kubernetes Ingress. If enabled, a controller will be installed to coordinate applying load balancing configuration changes to your GCP project.
More details can be found in GKE documentation.
As a test, I created Cluster-1 without the HTTP Load Balancing add-on. There was no BackendConfig CRD (Custom Resource Definition).
The CustomResourceDefinition API resource allows you to define custom resources. Defining a CRD object creates a new custom resource with a name and schema that you specify. The Kubernetes API serves and handles the storage of your custom resource. The name of a CRD object must be a valid DNS subdomain name.
Without the BackendConfig CRD and without the cloud.google.com apiVersion, as shown below,
user@cloudshell:~ (k8s-tests-XXX)$ kubectl get crd | grep backend
user@cloudshell:~ (k8s-tests-XXX)$ kubectl api-versions | grep cloud
I was not able to create any BackendConfig:
user@cloudshell:~ (k8s-tests-XXX)$ kubectl apply -f bck.yaml
error: unable to recognize "bck.yaml": no matches for kind "BackendConfig" in version "cloud.google.com/v1"
To make it work, you have to enable HTTP Load Balancing. You can do it via the UI or the command line.
Using UI:
Navigation Menu > Clusters > [Cluster-Name] > Details > Click Edit > Scroll down to Add-ons and expand > Find HTTP load balancing and change it from Disabled to Enabled.
or command:
gcloud beta container clusters update <clustername> --update-addons=HttpLoadBalancing=ENABLED --zone=<your-zone>
$ gcloud beta container clusters update cluster-1 --update-addons=HttpLoadBalancing=ENABLED --zone=us-central1-c
WARNING: Warning: basic authentication is deprecated, and will be removed in GKE control plane versions 1.19 and newer. For a list of recommended authentication methods, see: https://cloud.google.com/kubernetes-engine/docs/how-to/api-server-authentication
After a while, once the add-on was enabled:
$ kubectl get crd | grep backend
backendconfigs.cloud.google.com 2020-10-23T13:09:29Z
$ kubectl api-versions | grep cloud
cloud.google.com/v1
cloud.google.com/v1beta1
$ kubectl apply -f bck.yaml
backendconfig.cloud.google.com/my-backendconfig created
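As a quick sanity check (a sketch, assuming the standard addonsConfig field names in the GKE API), you can also ask gcloud whether the add-on is enabled on the cluster:
gcloud container clusters describe cluster-1 --zone=us-central1-c \
  --format="yaml(addonsConfig.httpLoadBalancing)"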

Related

Install calico GlobalNetworkPolicy via helm chart

I am trying to install a Calico GlobalNetworkPolicy that will be applicable to all the pods in the cluster regardless of namespace, and to apply a GlobalNetworkPolicy as per the docs here -
Calico network policies and Calico global network policies are applied
using calicoctl
i.e. with the calicoctl command (assuming the calicoctl binary is installed on the host) ->
calicoctl apply -f global-policy.yaml
OR if we have a calicoctl pod running ->
kubectl exec -ti -n kube-system calicoctl -- /calicoctl apply -f global-deny.yaml -o wide
global-policy.yaml ->
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: projectcalico.org/namespace == "kube-system"
  types:
  - Ingress
  - Egress
Question -> How do I install such a policy via a Helm chart? Helm implicitly applies the manifest via kubectl/the Kubernetes API, and that causes an error on install.
Error using kubectl or helm =>
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "default-deny" namespace: "" from "": no matches for kind "GlobalNetworkPolicy" in version "projectcalico.org/v3"
As per the doc you linked, a Calico global network policy is a non-namespaced resource and can be applied to any kind of endpoint (pods, VMs, host interfaces) independent of namespace.
But you are using a namespace in the YAML; that might be the reason for the error. Kindly remove the namespace and try again.
Because global network policies use kind: GlobalNetworkPolicy, they are grouped separately from kind: NetworkPolicy. For example, global network policies will not be returned from calicoctl get networkpolicy, and are rather returned from calicoctl get globalnetworkpolicy.
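To illustrate the point above, a short sketch of the commands involved; the CRD name in the last line is an assumption based on a manifest-based Calico install:
# Global network policies are listed with the global command, not the namespaced one
calicoctl get globalnetworkpolicy -o wide

# Check whether the cluster's API server itself serves the kind at all,
# which is what a plain kubectl/Helm apply needs
kubectl api-resources | grep -i globalnetworkpolicy
kubectl get crd globalnetworkpolicies.crd.projectcalico.org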
Below is the reference YAML from the doc:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-tcp-port-6379
Refer to the docs for more information on Global Network Policy, Calico Install Via Helm, and the Calico command line tools.

Prometheus returns error context deadline exceeded

I deployed Prometheus with a Helm chart from Rancher. Targets such as Alertmanager, Prometheus, Grafana, Node-exporter, Kubelet etc. are configured automatically. The endpoint for Alertmanager refers to the IP address of the specific pod, for example. I also configured multiple targets successfully, like Jira and Confluence.
Since the service external-dns is running in the namespace kube-system, it's also configured automatically. But only this service is getting the error Context deadline exceeded.
I checked from a random pod whether those metrics are accessible by running the command curl -s http://<IP-ADDRESS-POD>:7979/metrics. I also did this with the service IP address (kubectl get service external-dns and curl -s http://<IP-ADDRESS-SVC>:7979/metrics).
Both of these curl commands returned the metrics within a second. So increasing the scrape timeout won't help.
But when I exec into the Prometheus container and use the promtool debug metrics command, it shows the same behaviour as in my browser. The external-dns target returns a timeout with both of the IP addresses, whereas if I try this with another target it just returns the metrics.
I also don't think it's an SSL issue, because I already injected the correct CA bundle for the targets Jira and Confluence.
So, does anybody have an idea? :)
I had to edit the NetworkPolicy in the kube-system namespace. The containers from the cattle-monitoring-system namespace are now allowed to access the containers in the kube-system namespace. You can upload your NetworkPolicies here and it visualizes which resources have access and which do not. The NetworkPolicy looks like this now:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-policy
  namespace: kube-system
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: cattle-monitoring-system
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
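A short sketch of how to verify the setup; note that the namespaceSelector above matches on a name label, which I am assuming the cattle-monitoring-system namespace actually carries:
# Check that the namespace carries the label the namespaceSelector matches on
kubectl get namespace cattle-monitoring-system --show-labels

# Add it if it is missing (assumption: it may not be set by default)
kubectl label namespace cattle-monitoring-system name=cattle-monitoring-system

# Confirm the policy is in place
kubectl describe networkpolicy default-network-policy -n kube-system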

Cannot deploy virtual-server on Minikube

I am just exploring and want to deploy my k8dash with Helm, but I got a weird error, even though I have been able to deploy it on AWS EKS.
I am running them on my Minikube V1.23.2
My helm version is v3.6.2
Kubernetes kubectl version is v1.22.3
Basically if I do helm template, the VirtualServer would be like this:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8dash
  namespace: k8dash
spec:
  host: namahost.com
  routes:
  - action:
      pass: RELEASE-NAME
    path: /
  upstreams:
  - name: RELEASE-NAME
    port: 80
    service: RELEASE-NAME
and I got this error:
Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "VirtualServer" in version "k8s.nginx.org/v1"
It's weird: deploying this one on AWS EKS works just fine, but locally I get this error and I could not find any clue while Googling. Does it have something to do with my tool versions?
You have to install additional CRDs, as both VirtualServer and VirtualServerRoute are not available out of the box; they are NGINX Ingress Controller resources.
CustomResourceDefinitions:
The CustomResourceDefinition API resource allows you to define custom
resources. Defining a CRD object creates a new custom resource with a
name and schema that you specify. The Kubernetes API serves and
handles the storage of your custom resource. The name of a CRD object
must be a valid DNS subdomain name.
This frees you from writing your own API server to handle the custom
resource, but the generic nature of the implementation means you have
less flexibility than with API server aggregation.
Nginx Create Custom Resources
Note: By default, it is required to create custom resource definitions
for VirtualServer, VirtualServerRoute, TransportServer and Policy.
Otherwise, the Ingress Controller pods will not become Ready. If you’d
like to disable that requirement, configure -enable-custom-resources
command-line argument to false and skip this section.
Create custom resource definitions for VirtualServer and VirtualServerRoute, TransportServer and Policy resources.
You can find the CRDs under https://github.com/nginxinc/kubernetes-ingress/tree/master/deployments/common/crds:
$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v2.0.3   # or latest, as you wish
$ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
After applying them successfully, you will be able to create both VirtualServer and VirtualServerRoute resources.
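A quick sketch to confirm the CRDs were registered (the group name is taken from the resources above):
kubectl get crd | grep k8s.nginx.org
kubectl api-versions | grep k8s.nginx.org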

Kubernetes ingress for dynamic URL

I am developing an application that allows users to play around in their own sandboxes with a finite life time. Conceptually, it can be thought as if users were playing games of Pong. Users can interact with a web interface hosted at main/ to start a game of Pong. Each game of Pong will exist in its own pod. Since each game has a finite lifetime, the pods are dynamically created on-demand (through the Kubernetes API) as Kubernetes jobs with a single pod. There is therefore a one-to-one relationship between games of Pong and pods. Up to this point I have it all figured out.
My problem is, how do I set up an ingress to map dynamically created URLs, for example main/game1, to the corresponding pods? That is, if a user starts a game through the main interface, I would like him to be redirected to the URL of the corresponding pod where his game is hosted.
I could pre-allocate a set of URLs, check if they have active jobs, and redirect if they do not, but that does not scale well. I am thinking dynamically assigning URLs is a common pattern in Kubernetes, so there must be a standard way to do this. I have looked at using nginx-ingress, but that is not a requirement.
Further to the comment, I created a little demo for you on minikube with a working Ingress controller (enabled via minikube addons enable ingress).
Replicating the multiple Deployments that simulate the games:
kubectl create deployment deployment-1 --image=nginxdemos/hello
kubectl create deployment deployment-2 --image=nginxdemos/hello
kubectl create deployment deployment-3 --image=nginxdemos/hello
kubectl create deployment deployment-4 --image=nginxdemos/hello
kubectl create deployment deployment-5 --image=nginxdemos/hello
Same for the Service resources:
kubectl create service clusterip deployment-1 --tcp=80:80
kubectl create service clusterip deployment-2 --tcp=80:80
kubectl create service clusterip deployment-3 --tcp=80:80
kubectl create service clusterip deployment-4 --tcp=80:80
kubectl create service clusterip deployment-5 --tcp=80:80
Finally, it's time for the Ingress resources, but we have to be a bit hacky since there is no create subcommand available for them.
for number in `seq 5`; do echo "
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: deployment-$number
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /game$number
        backend:
          serviceName: deployment-$number
          servicePort: 80
" | kubectl create -f -; done
Now you have Pods, Services and Ingresses: obviously, you have to replicate the same result using the Kubernetes API but, as I suggested in the comment, you should create a single Ingress resource and update its path entries dynamically (see the sketch after the demo output below).
However, if you try to simulate the cURL call faking the Host header, you can see the working result:
# curl `minikube ip`/game2 -sH 'Host: hello-world.info'|grep -i server
<p><span>Server address:</span> <span>172.17.0.5:80</span></p>
<p><span>Server name:</span> <span>deployment-2-5b98b954f6-8g5fl</span></p>
# curl `minikube ip`/game4 -sH 'Host: hello-world.info'|grep -i server
<p><span>Server address:</span> <span>172.17.0.7:80</span></p>
<p><span>Server name:</span> <span>deployment-4-767ff76774-d2fgj</span></p>
You can see the Pod IP and name as well.
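For the single-Ingress approach suggested above, a minimal sketch of what the resource could look like, reusing the deployment-N Services from the demo; the rewrite annotation and the exact paths are assumptions:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: games
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /game1
        backend:
          serviceName: deployment-1
          servicePort: 80
      - path: /game2
        backend:
          serviceName: deployment-2
          servicePort: 80
      # ...one path entry per active game, added/removed via the Kubernetes API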
I agree with Efrat Levitan. It's not a task for the Ingress/Kubernetes itself.
You need another application (a different layer of abstraction) to decide where the traffic should be routed, for example Istio and its routing rules for HTTP traffic based on cookies.
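As an illustration only, a minimal sketch of an Istio VirtualService that routes based on a cookie; the hosts, cookie value and destination Services are placeholders, not part of the original answer:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: pong-games
spec:
  hosts:
  - main.example.com                             # placeholder host
  http:
  - match:
    - headers:
        cookie:
          regex: "^(.*?;)?(game=game1)(;.*)?$"   # placeholder cookie value
    route:
    - destination:
        host: game1-service                      # placeholder Service for that game
  - route:
    - destination:
        host: main-service                       # placeholder default backend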

spinnaker/halyard: Unable to communicate with the Kubernetes cluster

I am trying to deploy Spinnaker on multiple nodes. I have 2 VMs: the first with Halyard and kubectl, the second containing the Kubernetes master API.
My kubectl is well configured and able to communicate with the remote Kubernetes API;
kubectl get namespaces works:
kubectl get namespaces
NAME          STATUS   AGE
default       Active   16d
kube-public   Active   16d
kube-system   Active   16d
but when I run this command:
hal config provider -d kubernetes account add spin-kubernetes --docker-registries myregistry
I get this error
Add the spin-kubernetes account
Failure
Problems in default.provider.kubernetes.spin-kubernetes:
- WARNING You have not specified a Kubernetes context in your
halconfig, Spinnaker will use "default-system" instead.
? We recommend explicitly setting a context in your halconfig, to
ensure changes to your kubeconfig won't break your deployment.
? Options include:
- default-system
! ERROR Unable to communicate with your Kubernetes cluster:
Operation: [list] for kind: [Namespace] with name: [null] in namespace:
[null] failed..
? Unable to authenticate with your Kubernetes cluster. Try using
kubectl to verify your credentials.
- Failed to add account spin-kubernetes for provider
kubernetes.
From the error message, there seem to be two approaches to this: set your halconfig to talk to the default-system context so it can communicate with your cluster, or the other way around, that is, configure your context.
Try this:
kubectl config view
I suppose you'll see the context and current-context there set to default-system; try changing those.
For more help do
kubectl config --help
I guess you're looking for the set-context option.
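For example, to inspect and switch contexts (my-context is a placeholder name):
# List the available contexts and see which one is current
kubectl config get-contexts

# Switch the current context to the one pointing at your remote cluster
kubectl config use-context my-context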
Hope that helps.
You can set this in your halconfig as mentioned by @Naim Salameh.
Another way is to try setting your K8S cluster info in your default Kubernetes config ~/.kube/config.
Not certain this will work since you are running Halyard and kubectl on different VMs.
# ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    server: http://my-kubernetes-url
  name: my-k8s-cluster
contexts:
- context:
    cluster: my-k8s-cluster
    namespace: default
  name: my-context
current-context: my-context
kind: Config
preferences: {}
users: []
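With that config in place, you could also point Halyard at the context explicitly when adding the account; a sketch, assuming the --context flag of hal config provider kubernetes account add and the my-context name from the config above:
hal config provider kubernetes account add spin-kubernetes \
  --context my-context \
  --docker-registries myregistry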