configmap is not working for service and loadBalancerIP - kubernetes

I'm using the following Kubernetes manifest. The ConfigMap reference is not working with the Service and the loadBalancerIP field.
Here is the code -
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: Resource_group
    valueFrom:
      configMapKeyRef:
        name: app-configmap
        key: Resource_group
  name: appliance-ui
spec:
  loadBalancerIP: Static_public_ip
  valueFrom:
    configMapKeyRef:
      name: app-configmap
      key: Static_public_ip
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: appliance-ui
Here is the error -
error: error validating "ab.yml": error validating data: [ValidationError(Service.metadata.annotations.valueFrom): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string", ValidationError(Service.spec): unknown field "valueFrom" in io.k8s.api.core.v1.ServiceSpec]; if you choose to ignore these errors, turn validation off with --validate=false
I have tried with --validate=false, but it didn't work. Please let me know whether a ConfigMap can be used for the Service annotations and the loadBalancerIP field or not.

Unfortunately, you cannot configure values in a Service manifest through configMapKeyRef. A ConfigMap is consumed by a Pod (container), either as environment variables or as a mounted volume, so it cannot be referenced from other resource types such as a Service. Refer to Configure a Pod to Use a ConfigMap or ConfigMap and Pods for more details.
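If you want to avoid hard-coding these values, one workaround is to substitute them at deploy time instead of at the API level. The sketch below assumes a template file named service.tmpl.yaml and uses envsubst; the file name and the rendered values are placeholders, not values from the question.

# service.tmpl.yaml - the fields are literal strings in the final manifest,
# filled in by envsubst before kubectl ever sees them.
apiVersion: v1
kind: Service
metadata:
  name: appliance-ui
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: ${RESOURCE_GROUP}
spec:
  type: LoadBalancer
  loadBalancerIP: ${STATIC_PUBLIC_IP}
  ports:
  - port: 80
  selector:
    app: appliance-ui

# Render and apply (placeholder values shown):
# RESOURCE_GROUP=my-rg STATIC_PUBLIC_IP=20.0.0.10 envsubst < service.tmpl.yaml | kubectl apply -f -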

Configuring RBAC for kubernetes

I used the following guide to set up my chaostoolkit cluster: https://chaostoolkit.org/deployment/k8s/operator/
I am attempting to kill a pod using Kubernetes; however, I get the following error:
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb\" cannot list resource \"pods\" in API group \"\" in the namespace \"task-dispatcher\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
I set my serviceAccountName to a service account that I created with RBAC, but for some reason Kubernetes defaults to "system:serviceaccount:chaostoolkit-run:chaostoolkit-b3af262edb".
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-run
data:
  experiment.yaml: |
    ---
    version: 1.0.0
    title: Terminate Pod Experiment
    description: If a pod gets terminated, a new one should be created in its place in a reasonable amount of time.
    tags: ["kubernetes"]
    secrets:
      k8s:
        KUBERNETES_CONTEXT: "docker-desktop"
    method:
    - type: action
      name: terminate-k8s-pod
      provider:
        type: python
        module: chaosk8s.pod.actions
        func: terminate_pods
        arguments:
          label_selector: ''
          name_pattern: my-release-rabbitmq-[0-9]$
          rand: true
          ns: default
---
apiVersion: chaostoolkit.org/v1
kind: ChaosToolkitExperiment
metadata:
  name: my-chaos-exp
  namespace: chaostoolkit-crd
spec:
  serviceAccountName: test-user
  automountServiceAccountToken: false
  pod:
    image: chaostoolkit/chaostoolkit:full
    imagePullPolicy: IfNotPresent
    experiment:
      configMapName: my-chaos-exp
      configMapExperimentFileName: experiment.yaml
    restartPolicy: Never
The error shared shows that the default service account "chaostoolkit" is being used. It looks like the role associated with it does not have the proper permissions.
The service account "test-user", which is used in the ChaosToolkitExperiment definition, should have a role granting access to list and delete pods.
Please specify a service account with the proper role access.
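For illustration, a Role and RoleBinding along these lines would grant the pod permissions the error complains about. This is a sketch: the role name is made up, and the namespaces are taken from the question and the error message, so adjust them to your setup.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: chaostoolkit-pod-manager   # hypothetical name
  namespace: task-dispatcher       # namespace from the Forbidden error
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chaostoolkit-pod-manager   # hypothetical name
  namespace: task-dispatcher
subjects:
- kind: ServiceAccount
  name: test-user                  # service account from the question
  namespace: chaostoolkit-run
roleRef:
  kind: Role
  name: chaostoolkit-pod-manager
  apiGroup: rbac.authorization.k8s.io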

Kong's flaky rate limiting behavior

I have deployed some APIs in Azure Kubernetes Service and I have been experimenting with Kong to be able to use some of its features, such as rate limiting and IP restriction, but it doesn't always work as expected. Here are the plugin objects I use:
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: kong-rate-limiting-plugin
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: 'true'
config:
  minute: 10
  policy: local
  limit_by: ip
  hide_client_headers: true
plugin: rate-limiting
---
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: kong-ip-restriction-plugin
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: 'true'
config:
  deny:
  - {some IP}
plugin: ip-restriction
The first problem is that when I tried to apply these plugins across the cluster by setting the global label to "true" as described here, I got this error when applying it with kubectl:
metadata.labels: Invalid value: "\"true\"": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')
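For reference, my reading of that validation error (not something stated in the question): the value that reached the API server contained literal quote characters. A plain string label value satisfies the regex:

metadata:
  labels:
    global: "true"   # the stored value is true, with no embedded quote characters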
The second problem is even though I used KongClusterPlugin and set global to 'true', I still had to add the plugins explicitly to the ingress object for them to work. Here is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing
  annotations:
    konghq.com/plugins: kong-rate-limiting-plugin,kong-ip-restriction-plugin
    konghq.com/protocols: https
    konghq.com/https-redirect-status-code: "301"
  namespace: default
spec:
  ingressClassName: kong
  ...
And here is my service:
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: default
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ...
The third problem is that by setting limit_by to ip, I expected it to rate-limit per IP, but I noticed it would block all clients once the threshold was hit collectively by them. I tried to mitigate that by preserving the client IP and setting externalTrafficPolicy to Local in the Service object, as I thought maybe the Kubernetes objects weren't receiving the actual client IP. Now the rate limiting behavior seems more reasonable; however, sometimes it's as if it's back in its old state and returns HTTP 429 randomly. The other issue I see here is that I can set externalTrafficPolicy to Local only when the service type is LoadBalancer or NodePort. I set my service to be of type LoadBalancer, which exposes it publicly and seems to be a problem. It would be ironic if using an ingress controller that's supposed to shield the service instead exposed it. Am I missing something here, or does this make no sense?
The fourth problem is the IP restriction plugin doesn't seem to be working. I was able to successfully call the APIs from a machine with the IP I put in 'config.deny'.
The fifth problem is that the number of times per minute I have to hit the APIs to get an HTTP 429 doesn't match the value I placed in 'config.minute'.

k8s ExternalName endpoint not found - but working

I deployed a simple test ingress and an externalName service using kustomize.
The deployment works and I get the expected results, but when describing the test-ingress it shows the error: <error: endpoints "test-external-service" not found>.
It seems like a k8s bug. It shows this error, but everything is working fine.
Here is my deployment:
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform
resources:
- test-ingress.yaml
- test-service.yaml
generatorOptions:
  disableNameSuffixHash: true
test-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: test-external-service
  namespace: platform
spec:
  type: ExternalName
  externalName: "some-working-external-elasticsearch-service"
test-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-external
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache_bypass $http_upgrade;
spec:
  rules:
  - host: testapi.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-external-service
          servicePort: 9200
Here, I connected the external service to a working elasticsearch server. When browsing to testapi.mydomain.com ("mydomain" was replaced with our real domain of course), I'm getting the well known expected elasticsearch results:
{
  "name" : "73b40a031651",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "Xck-u_EFQ0uDHJ1MAho4mQ",
  "version" : {
    "number" : "7.10.1",
    "build_flavor" : "oss",
    "build_type" : "docker",
    "build_hash" : "1c34507e66d7db1211f66f3513706fdf548736aa",
    "build_date" : "2020-12-05T01:00:33.671820Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
So everything is working. But when describing the test-ingress, there is the following error:
test-external-service:9200 (<error: endpoints "test-external-service" not found>)
What is this error? Why am I getting it even though everything is working properly? What am I missing here?
This is how the kubectl describe ingress command works.
The kubectl describe ingress command calls the describeIngressV1beta1 function, which calls the describeBackendV1beta1 function to describe the backend.
As can be found in the source code, the describeBackendV1beta1 function looks up the endpoints associated with the backend service; if it doesn't find appropriate endpoints, it generates an error message (as in your example):
func (i *IngressDescriber) describeBackendV1beta1(ns string, backend *networkingv1beta1.IngressBackend) string {
    endpoints, err := i.client.CoreV1().Endpoints(ns).Get(context.TODO(), backend.ServiceName, metav1.GetOptions{})
    if err != nil {
        return fmt.Sprintf("<error: %v>", err)
    }
    ...
In the Integrating External Services documentation, you can find that ExternalName services do not have any defined endpoints:
ExternalName services do not have selectors, or any defined ports or endpoints, therefore, you can use an ExternalName service to direct traffic to an external service.
A Service is a Kubernetes abstraction that uses labels to choose the pods to route traffic to.
Endpoints track the IP addresses of the objects the service sends traffic to; they are populated when a service selector matches a pod label.
This is the case with Kubernetes services of type ClusterIP, NodePort or LoadBalancer.
In your case, you use a Kubernetes service of type ExternalName, where the endpoint is a server outside of your cluster or in a different namespace, so Kubernetes displays that error message when you try to describe the ingress.
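You can see this directly: there is simply no Endpoints object for the ExternalName service to look up. A quick check, using the names from the question (the exact wording of the output may vary by version):

kubectl get endpoints test-external-service -n platform
# Error from server (NotFound): endpoints "test-external-service" not found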
Usually we do not create an Ingress that points to a Service of type ExternalName, because we are not supposed to externally expose a service that is already exposed. The Kubernetes Ingress expects a Service of type ClusterIP, NodePort or LoadBalancer, which is why you get that unexpected error when you describe the ingress.
If you are accessing that ExternalName from within the cluster, it would be better to avoid the Ingress and use the service URI instead (test-external-service.<namespace>.svc.cluster.local:9200).
Anyway, if you insist on using the Ingress, you can create a headless Service without a selector and then manually create an Endpoints object with the same name as the Service. Follow the example here.
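A sketch of that approach, keeping the names and namespace from the question; the IP address is a placeholder for the external Elasticsearch server:

apiVersion: v1
kind: Service
metadata:
  name: test-external-service
  namespace: platform
spec:
  clusterIP: None          # headless, no selector
  ports:
  - port: 9200
    targetPort: 9200
---
apiVersion: v1
kind: Endpoints
metadata:
  name: test-external-service   # must match the Service name
  namespace: platform
subsets:
- addresses:
  - ip: 10.0.0.50               # placeholder: IP of the external Elasticsearch
  ports:
  - port: 9200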

Error: validation failed: unable to recognize "": no matches for kind "FrontendConfig" in version "networking.k8s.io/v1beta1"

I am using a frontendconfig.yaml file to enable HTTP-to-HTTPS redirection, but it is giving me a chart validation failed error. I am listing the content of my yaml file below. I am facing this issue with GKE Ingress. My GKE master version is "1.17.14-gke.1600".
apiVersion: networking.k8s.io/v1beta1
kind: FrontendConfig
metadata:
  name: "abcd"
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: "301"
I am using annotations in the values.yaml file like this:
ingress:
  enabled: true
  annotations:
    networking.k8s.io/v1beta1.FrontendConfig: "abcd"
As of now, HTTP-to-HTTPS redirect is in beta and only available for GKE 1.18.10-gke.600 or greater, as per the documentation.
Since you stated that you are using GKE 1.17.14-gke.1600, this won't be available for your cluster.
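For reference, once you are on a supported GKE version, the GKE documentation serves FrontendConfig from the networking.gke.io API group and attaches it to the Ingress via an annotation. This is a sketch based on that documentation, keeping the "abcd" name from the question:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: "abcd"
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT   # 301; GKE expects an enum name rather than "301"

# values.yaml: reference the FrontendConfig from the Ingress annotations
ingress:
  enabled: true
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "abcd"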

error validating data: [ValidationError(Pod.spec)

I am a beginner and have just started learning Kubernetes.
I'm trying to create a Pod from a file named myfirstpodwithlabels.yaml with the following specification, but when I try to create the Pod I get this error.
error: error validating "myfirstpodwithlabels.yaml": error validating data: [ValidationError(Pod.spec): unknown field "contianers" in io.k8s.api.core.v1.PodSpec, ValidationError(Pod.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec]; if you choose to ignore these errors, turn validation off with --validate=false
My YAML file specification
kind: Pod
apiVersion: v1
metadata:
  name: myfirstpodwithlabels
  labels:
    type: backend
    env: production
spec:
  contianers:
  - image: aamirpinger/helloworld:latest
    name: container1
    ports:
    - containerPort: 80
There is a typo in the .spec section of your YAML.
You have written "contianers", as seen in the error message, when it really should be "containers".
Also, for future reference: if there is an issue in your resource definition YAML, it helps if you actually post the YAML on Stack Overflow; otherwise helping is not an easy task.