Why won't kubernetes v1.5 recognize service.spec.loadBalancerIp? - kubernetes

I'm running kubernetes v1.5 (api reference here). The field service.spec.loadBalancerIp should exist but I keep getting the following error when I attempt to set it.
error: error validating ".../service.yaml": error validating data: found invalid field loadBalancerIp for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false
service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: some-service
spec:
  type: LoadBalancer
  loadBalancerIp: xx.xx.xx.xx
  selector:
    deployment: some-deployment
  ports:
  - protocol: TCP
    port: 80
kubectl version output:
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:53:09Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4", GitCommit:"7243c69eb523aa4377bce883e7c0dd76b84709a1", GitTreeState:"clean", BuildDate:"2017-03-07T23:34:32Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
I'm running my cluster on gke.
Any thoughts?

You have a typo in your spec: it should be loadBalancerIP, not loadBalancerIp. Note the uppercase P.
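For illustration, here is the same spec with the corrected field name (the address is still a placeholder):
kind: Service
apiVersion: v1
metadata:
  name: some-service
spec:
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xx   # capital "IP"
  selector:
    deployment: some-deployment
  ports:
  - protocol: TCP
    port: 80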

Related

Cannot apply prometheusrule caused by a gateway timeout

I'm trying to modify some prometheusrule in my cluster but I'm encountering a timeout error that I don't understand. Here is a sample of the rule I'm trying to modify.
# modif.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  annotations:
    meta.helm.sh/release-name: monitoring-platform
    meta.helm.sh/release-namespace: monitoring
  creationTimestamp: "2022-01-04T09:20:58Z"
  generation: 1
  labels:
    app: prometheus-operator
    app.kubernetes.io/managed-by: Helm
    prometheus: kube-op
    release: monitoring-platform
  name: kube-op-apps-rules
  namespace: monitoring
  resourceVersion: "948572193"
  uid: a461d478-9e61-4004-a129-9ed3f5efe8b0
spec:
  groups:
  - name: kubernetes-apps
    rules:
    - alert: KubePodCrashLooping
      annotations:
        message: Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is restarting {{ printf "%.2f" $value }} times / 20 minutes.
        runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodcrashlooping
      expr: rate(kube_pod_container_status_restarts_total{job="kube-state-metrics", pod!~"social-reco-prod.*"}[15m]) * 60 * 20 > 0
      for: 1h
      labels:
        severity: critical
If I try to apply this file, I get the following error:
09:14:23 bastien#work:/work/$ kubectl -v 6 apply -f modif.yaml
I0103 09:14:25.389081 1969468 loader.go:379] Config loaded from file: /home/bastien/.kube/config
I0103 09:14:25.436584 1969468 round_trippers.go:445] GET https://1.2.3.4/openapi/v2?timeout=32s 200 OK in 46 milliseconds
I0103 09:14:25.655706 1969468 round_trippers.go:445] GET https://1.2.3.4/apis/external.metrics.k8s.io/v1beta1?timeout=32s 200 OK in 14 milliseconds
I0103 09:14:25.664871 1969468 cached_discovery.go:82] skipped caching discovery info, no resources found
I0103 09:14:25.696947 1969468 round_trippers.go:445] GET https://1.2.3.4/apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheusrules/kube-op-apps-rules 404 Not Found in 30 milliseconds
I0103 09:14:25.728817 1969468 round_trippers.go:445] GET https://1.2.3.4/api/v1/namespaces/monitoring 200 OK in 31 milliseconds
I0103 09:14:59.759927 1969468 round_trippers.go:445] POST https://1.2.3.4/apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheusrules?fieldManager=kubectl-client-side-apply 504 Gateway Timeout in 34030 milliseconds
The rule seems to be good since I have a secondary cluster on which the apply command works fine. Both clusters have the same version:
# The malfunctioning one
12:37:34 bastien#work:/work/$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14-gke.4300", GitCommit:"348bdc1040d273677ca07c0862de867332eeb3a1", GitTreeState:"clean", BuildDate:"2022-08-17T09:22:54Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}
# The working one
13:25:07 bastien#work:/work/$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.14-gke.4300", GitCommit:"348bdc1040d273677ca07c0862de867332eeb3a1", GitTreeState:"clean", BuildDate:"2022-08-17T09:22:54Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}
Do you have any clue what's wrong? Or at least where I could find some logs or info on what's going on?
I finally stumbled upon someone who had an issue which looked like mine.
https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/8303
In this thread the OP ran several tests until someone pointed out a potential solution, which consists of allowing the GKE master to communicate with the kubelets.
https://github.com/prometheus-operator/prometheus-operator/issues/2711#issuecomment-521103022
The proposed Terraform is the following:
resource "google_compute_firewall" "gke-master-to-kubelet" {
name = "k8s-master-to-kubelets"
network = "XXXXX"
project = "XXXXX"
description = "GKE master to kubelets"
source_ranges = ["${data.terraform_remote_state.network.master_ipv4_cidr_block}"]
allow {
protocol = "tcp"
ports = ["8443"]
}
target_tags = ["gke-main"]
}
Once I added this firewall rule on my side, it completely fixed my issue. I still don't know why it suddenly stopped working.
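For reference, roughly the same rule can be created with the gcloud CLI; the project, network, tag, and master CIDR below are placeholders to substitute with your own values:
gcloud compute firewall-rules create k8s-master-to-kubelets \
  --project=MY_PROJECT \
  --network=MY_NETWORK \
  --description="GKE master to kubelets" \
  --source-ranges=MASTER_IPV4_CIDR \
  --allow=tcp:8443 \
  --target-tags=gke-main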

Unable to deploy emissary-ingress in local kubernetes cluster. Fails with `error validating data: ValidationError(CustomResourceDefinition.spec)`

I'm trying to install emissary-ingress using the instructions here.
It started failing with the error no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta". I searched and found an answer on Stack Overflow which said to update apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1, which I did.
It also asked to use admissionregistration.k8s.io/v1, which my kubectl already uses.
When I ran the kubectl apply -f filename.yml command, the above error was gone and a new error appeared: error: error validating data: ValidationError(CustomResourceDefinition.spec): unknown field "validation" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec;
What should I do next?
My kubectl version - Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:32:41Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
minikube version - minikube version: v1.23.2
commit: 0a0ad764652082477c00d51d2475284b5d39ceed
EDIT:
The custom resource definition yml file: here
The rbac yml file: here
The validation field was removed in apiextensions.k8s.io/v1.
According to the official Kubernetes documentation, you should use schema as a replacement for validation.
Here is a sample using schema instead of validation:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:                     # <--- schema instead of validation
      # openAPIV3Schema is the schema for validating custom objects.
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
                pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
              image:
                type: string
              replicas:
                type: integer
                minimum: 1
                maximum: 10

Can't change apiVersion of a deployed Ingress from extensions/v1beta1 to networking.k8s.io/v1

Current version:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.9", GitCommit:"9dd794e454ac32d97cde41ae10be801ae98f75df", GitTreeState:"clean", BuildDate:"2021-03-18T01:00:06Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
kubectl version --short:
Client Version: v1.20.0
Server Version: v1.19.9
I have tried to change the apiVersion through the kubectl edit command; after saving the file, the following warning and error message were added to the YAML file:
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Error Message:
ingresses.extensions "current-ingress" was not valid: <nil>: Invalid value: "The edited file failed validation": [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]
spec:
  rules:
  - host: // hostname
    http:
      paths:
      - backend:
          serviceName: location
          servicePort: 80
        path: /
        pathType: ImplementationSpecific
snippet here
Above is the format the YAML file has for serviceName and servicePort; I think that's the right way to write the serviceName and servicePort declaration. If not, guide me here.
Could anyone help me with how to change the apiVersion of a deployed Ingress?
Thanks in Advance!
Don't worry about the warning; it's just there to inform you about the available API versions, as per this issue.
For the error, it seems like you're using an outdated format, which changed with networking.k8s.io/v1 as per this PR, so it should be service.name and service.port.number.
Please refer to the official Kubernetes documentation for more information.
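Applied to the spec in the question, the networking.k8s.io/v1 backend would look roughly like this (the service name and port are taken from the question, so adjust as needed):
spec:
  rules:
  - host: // hostname
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: location
            port:
              number: 80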

kube-apiserver not coming up after adding --admission-control-config-file flag

root#ubuntu151:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.10", GitCommit:"62876fc6d93e891aa7fbe19771e6a6c03773b0f7", GitTreeState:"clean", BuildDate:"2020-10-15T01:43:56Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
My admission webhooks require authentication, so I'm restarting the apiserver with the location of the admission control configuration file specified via the --admission-control-config-file flag. I'm doing this as follows:
root#ubuntu151:~# vi /etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - --admission-control-config-file=/var/lib/kubernetes/kube-AdmissionConfiguration.yaml
...
root#ubuntu151:~# vi /var/lib/kubernetes/kube-AdmissionConfiguration.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: /var/lib/kubernetes/kube-config.yaml
- name: MutatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: /var/lib/kubernetes/kube-config.yaml
kubeConfigFile: /var/lib/kubernetes/kube-config.yaml is the file I copied from ~/.kube/config.
Now my kube-apiserver is not coming up.
Please help!
Thanks in advance!

{ambassador ingress} Not able to use canary and add_request_headers in the same Mapping

I want to pass a few custom headers to a canary service. When I add both mappings to the template, it disregards the weight, adds the header to 100% of the traffic, and routes it all to the canary service.
Below is my ambassador service config
getambassador.io/config: |
  ---
  apiVersion: ambassador/v1
  kind: Mapping
  name: flag_off_mapping
  prefix: /web-app/
  service: web-service-flag
  weight: 99
  ---
  apiVersion: ambassador/v1
  kind: Mapping
  name: flag_on_mapping
  prefix: /web-app/
  add_request_headers:
    x-halfbakedfeature: enabled
  service: web-service-flag
  weight: 1
I expect 99% of the traffic to hit the service without any additional headers and 1% of the traffic to hit the service with x-halfbakedfeature: enabled header added to the request object.
Ambassador: 0.50.3
Kubernetes environment [AWS L7 ELB]
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
$
Apologies for X-posting on GitHub and SO.
Please take a look here:
As a workaround, could you consider:
"Make another service pointing to the same canary instances with an Ambassador annotation containing the same prefix and the required headers."
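A rough sketch of that workaround, assuming a hypothetical second Service named web-service-canary in front of the same canary pods (the names and selector below are illustrative, not from the original thread):
apiVersion: v1
kind: Service
metadata:
  name: web-service-canary          # hypothetical Service for the canary pods
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: flag_on_mapping
      prefix: /web-app/
      add_request_headers:
        x-halfbakedfeature: enabled
      service: web-service-canary
      weight: 1
spec:
  selector:
    app: web-app-canary             # illustrative selector for the canary deployment
  ports:
  - protocol: TCP
    port: 80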