Set Kong Service timeouts via Helm charts

I am trying to set the timeouts for my Kong Service, which is deployed with Helm charts.
In my Service.yaml file I added these annotations, as referenced in the Kong docs:
annotations:
  konghq.com/connect-timeout: 120000
  konghq.com/write-timeout: 120000
  konghq.com/read-timeout: 120000
However, during the deployment process I get the following error:
> Error: unable to build kubernetes objects from release manifest: unable to decode "": json: cannot unmarshal number into Go struct field ObjectMeta.metadata.annotations of type string

The answer is simply to pass the values as strings:
annotations:
  konghq.com/connect-timeout: "120000"
  konghq.com/write-timeout: "120000"
  konghq.com/read-timeout: "120000"
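If the timeouts come from values.yaml, a minimal sketch (assuming a timeouts block you would define yourself) is to let Helm's quote function do the stringification:
annotations:
  # hypothetical values; define timeouts.connect etc. in your values.yaml
  konghq.com/connect-timeout: {{ .Values.timeouts.connect | quote }}
  konghq.com/write-timeout: {{ .Values.timeouts.write | quote }}
  konghq.com/read-timeout: {{ .Values.timeouts.read | quote }}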


Cannot deploy virtual-server on Minikube

I am just exploring and want to deploy my k8dash with Helm, but I got this weird error, even though I have been able to deploy it on AWS EKS.
I am running on Minikube v1.23.2.
My Helm version is v3.6.2.
My kubectl version is v1.22.3.
Basically, if I run helm template, the VirtualServer comes out like this:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8dash
  namespace: k8dash
spec:
  host: namahost.com
  routes:
    - action:
        pass: RELEASE-NAME
      path: /
  upstreams:
    - name: RELEASE-NAME
      port: 80
      service: RELEASE-NAME
and I got this error:
Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "VirtualServer" in version "k8s.nginx.org/v1"
It's weird: deploying this on AWS EKS works just fine, but locally I get this error, and I could not find any clue while Googling. Does it have something to do with my tool versions?
You have to install additional CRDs, as both VirtualServer and VirtualServerRoute are not available out of the box; they are NGINX custom resources.
CustomResourceDefinitions:
The CustomResourceDefinition API resource allows you to define custom
resources. Defining a CRD object creates a new custom resource with a
name and schema that you specify. The Kubernetes API serves and
handles the storage of your custom resource. The name of a CRD object
must be a valid DNS subdomain name.
This frees you from writing your own API server to handle the custom
resource, but the generic nature of the implementation means you have
less flexibility than with API server aggregation.
Nginx Create Custom Resources
Note: By default, it is required to create custom resource definitions
for VirtualServer, VirtualServerRoute, TransportServer and Policy.
Otherwise, the Ingress Controller pods will not become Ready. If you’d
like to disable that requirement, configure -enable-custom-resources
command-line argument to false and skip this section.
Create custom resource definitions for VirtualServer and VirtualServerRoute, TransportServer and Policy resources.
You can find the CRDs under https://github.com/nginxinc/kubernetes-ingress/tree/master/deployments/common/crds:
$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v2.0.3   # or latest, as you wish
$ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
After applying them successfully, you will be able to create both VirtualServer and VirtualServerRoute resources.
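As a quick sanity check (not part of the original answer), you can confirm the CRDs were registered before retrying the install:
$ kubectl get crd | grep k8s.nginx.org
$ kubectl api-resources --api-group=k8s.nginx.org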

Controller pattern in k8s operators?

"Operators make use of the controller pattern"
What is the controller pattern in k8s operators?
The controller pattern has been summarized in these 3 sentences:
A controller tracks at least one Kubernetes resource type. These objects have a spec field that represents the desired state. The controller(s) for that resource are responsible for making the current state come closer to that desired state.
Ref: https://kubernetes.io/docs/concepts/architecture/controller/#controller-pattern
Basically, a Kubernetes controller watches its respective resource in a control loop. Once it finds a resource, it reads the desired state from the spec and does some work to make the cluster state match that desired state.
For example, say you have created a Deployment whose spec says you want 1 pod running your application. The Deployment controller sees this and creates 1 pod in the cluster to match your desired state. If you then update the Deployment spec to ask for 2 pods, the controller will notice the change, since it is always watching Deployments, and create another pod in the cluster to match the desired state.
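To make that concrete, here is a minimal Deployment sketch (the name and image are illustrative, not from the original answer); spec.replicas is the desired state the controller keeps reconciling against:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 2             # desired state: keep 2 pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.21   # any image; nginx is used only for illustration
Changing replicas (for example with kubectl scale deployment my-app --replicas=3) is all it takes; the controller notices the new desired state and creates the missing pod.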
You can find more details about these in the following resources:
Inside of Kubernetes Controller
A deep dive into Kubernetes controllers
The Job controller is an example of a Kubernetes built-in controller. Built-in controllers manage state by interacting with the cluster API server.
Ref: https://kubernetes.io/docs/concepts/architecture/controller/#control-via-api-server
In my understanding, when you create an xxx-operator, you add a Kind to your Kubernetes cluster, and kubectl explain <kind-name> can then show the documentation for that Kind.
So, using the xxx-operator, you can configure your app with a simpler YAML. You can also use tools like Kustomize or Helm on top of this.
e.g.:
Run this first:
kubectl explain ZookeeperCluster
You will get an error.
Now install the helm chart pravega/zookeeper-operator:
helm install --create-namespace -n op-pravega-zk -- zookeeper-operator pravega/zookeeper-operator
You will get this message after installing, but it is not very helpful:
NAME: zookeeper-operator
LAST DEPLOYED: Thu Nov 25 11:20:31 2021
NAMESPACE: op-pravega-zk
STATUS: deployed
REVISION: 1
TEST SUITE: None
But now, if you run this again:
kubectl explain ZookeeperCluster
you will get this:
KIND:     ZookeeperCluster
VERSION:  zookeeper.pravega.io/v1beta1

DESCRIPTION:
     ZookeeperCluster is the Schema for the zookeeperclusters API

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     ZookeeperClusterSpec defines the desired state of ZookeeperCluster

   status       <Object>
     ZookeeperClusterStatus defines the observed state of ZookeeperCluster
Now you can use it in a YAML like this (... means elided content):
apiVersion: "zookeeper.pravega.io/v1beta1"
kind: "ZookeeperCluster"
metadata:
name: zookeeper
namespace: pravega-zk
...
spec:
replicas: 3
image:
repository: pravega/zookeeper
...
pod:
serviceAccountName: zookeeper
storageType: persistence
persistence:
reclaimPolicy: Delete
spec:
storageClassName: local-hostpath
...
or like this (a ZookeeperCluster Kind is created by this helm chart):
helm install --create-namespace -n pravega-zk --set persistence.storageClassName=local-hostpath -- zookeeper pravega/zookeeper
Another example: create a Zookeeper cluster.
kubectl create --namespace zookeeper -f - <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
  namespace: zookeeper
spec:
  replicas: 1
EOF
Ref: https://banzaicloud.com/docs/supertubes/kafka-operator/install-kafka-operator/
By the way, after you install an operator, you can use kubectl describe clusterrole zookeeper-operator to see what it is allowed to manage, then run kubectl api-resources to find the resource and the name of its Kind.
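For example, with the zookeeper-operator release installed above (output will vary by version):
kubectl describe clusterrole zookeeper-operator
kubectl api-resources | grep -i zookeeper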

How to create crunchydata postgres cluster using CRD on Terraform?

I created a K8s cluster using Terraform, and I also created the CRD for the Crunchy Data Postgres Operator.
I obtained the CRD for Postgres cluster creation from this link.
The Terraform script looks like this (tailored output):
resource "kubectl_manifest" "pgocluster" {
yaml_body = <<YAML
apiVersion: crunchydata.com/v1
kind: Pgcluster
metadata:
annotations:
current-primary: ${var.pgo_cluster_name}
labels:
crunchy-pgha-scope: ${var.pgo_cluster_name}
deployment-name: ${var.pgo_cluster_name}
name: ${var.pgo_cluster_name}
pg-cluster: ${var.pgo_cluster_name}
pgo-version: 4.6.2
pgouser: admin
name: ${var.pgo_cluster_name}
namespace: ${var.cluster_namespace}
YAML
}
But when I execute terraform apply, it errors with:
Error: pgo/UserGrp failed to create kubernetes rest client for update of resource: resource [crunchydata.com/v1/Pgcluster] isn't valid for cluster, check the APIVersion and Kind fields are valid
However, according to the official link mentioned above, the following should work:
apiVersion: crunchydata.com/v1
kind: Pgcluster
I am not sure whether it's an issue with Terraform or the link was not updated correctly.
Kindly let me know what should be changed or done to fix this issue, as I am stuck on it.
Finally, I figured out the issue: pgo_cluster_name was not given in lowercase.
I was able to see the following error only when I executed the target individually, i.e. terraform apply --target=<target_name>:
Error: pgo/UserGrp failed to run apply: error when creating "/tmp/773985147kubectl_manifest.yaml": Pgcluster.crunchydata.com "UserGrp" is invalid: metadata.name: Invalid value: "UserGrp": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
I had set pgo_cluster_name=UsrGrp instead of pgo_cluster_name=usrgrp.
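One way to guard against this in Terraform itself (a sketch, not part of the original fix) is to validate or normalize the variable, so terraform plan fails early instead of the apply:
variable "pgo_cluster_name" {
  type = string

  # Fail fast if the name is not a valid lowercase RFC 1123 name.
  validation {
    condition     = can(regex("^[a-z0-9]([-a-z0-9]*[a-z0-9])?$", var.pgo_cluster_name))
    error_message = "pgo_cluster_name must be lowercase alphanumeric, optionally with '-'."
  }
}
Alternatively, normalize at the point of use with ${lower(var.pgo_cluster_name)}.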

helm not creating the resources

I have tried to run Helm for the first time. I have deployment.yaml, service.yaml and ingress.yaml files, along with values.yaml and Chart.yaml.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
  namespace: xyz
  labels:
    app: abc
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: abc
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: abc
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  namespace: xyz
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.service.sslCert }}
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8080
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
  selector:
    app: abc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "haproxy-ingress"
  namespace: xyz
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: alb
From what I can see, I do not think I have missed putting app.kubernetes.io/managed-by anywhere, but I still keep getting an error:
rendered manifests contain a resource that already exists. Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
It renders the file locally correctly.
helm list --all --all-namespaces returns nothing.
Please help.
You already have some resources (e.g. the Service abc in the given namespace xyz) that you're trying to install via a Helm chart.
Delete those and install them via helm install:
$ kubectl delete service -n <namespace> <service-name>
$ kubectl delete deployment -n <namespace> <deployment-name>
$ kubectl delete ingress -n <namespace> <ingress-name>
Once you have these resources deployed via Helm, you will be able to run helm upgrade to change properties.
Remove the app.kubernetes.io/managed-by label from your YAMLs; this will be added by Helm.
The error below is quite common:
label validation error: missing key "app.kubernetes.io/managed-by":
must be set to "Helm"; annotation validation error: missing key
"meta.helm.sh/release-name": must be set to ..
So I'll provide a somewhat longer explanation and also some context on the topic.
What happened?
It seems that you tried to create resources that already existed and were created outside of Helm (probably with kubectl).
Why does Helm throw the error?
Helm doesn't allow a resource to be owned by more than one
deployment.
It is the responsibility of the chart creator to ensure that the chart
produce unique resources only.
How can you solve this?
Option 1 - Follow the error message and add the meta.helm.sh annotations:
As described in this PR: Adopt resources into release with correct instance and managed-by labels
Helm will no longer error when attempting to create a resource that
already exists in the target cluster if the existing resource has the
correct meta.helm.sh/release-name and
meta.helm.sh/release-namespace annotations, and matches the label
selector app.kubernetes.io/managed-by=Helm. This facilitates
zero-downtime migrations to Helm 3 for managing existing deployments,
and allows Helm to "adopt" existing resources that it previously
created.
(*) I think that the meta.helm.sh scope is a less common approach today.
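In practice (with Helm 3.2+, per the PR above), adoption means adding the expected label and annotations to the existing resource. A sketch using the names from the error above; substitute your own release name and namespace:
kubectl -n xyz label service abc app.kubernetes.io/managed-by=Helm
kubectl -n xyz annotate service abc meta.helm.sh/release-name=abc
kubectl -n xyz annotate service abc meta.helm.sh/release-namespace=default
After that, helm install should adopt the Service instead of refusing to overwrite it.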
Option 2 - Add the app.kubernetes.io/instance label:
As can be seen in different Helm chart providers (Bitnami, the NGINX ingress controller, and External-DNS, for example), charts commonly set the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
(*) Notice: there are some CD tools, like ArgoCD, that automatically set the app.kubernetes.io/instance label and use it to determine which resources form the app.
Option 3 - Delete old resources.
It might be relevant in your specific case where the old resources might not be relevant anymore.
For those who need some context
What are those labels?
Shared labels and annotations share a common prefix: app.kubernetes.io. Labels without a prefix are private to users. The shared prefix ensures that shared labels do not interfere with custom user labels.
In order to take full advantage of using these labels, they should be applied on every resource object.
The app.kubernetes.io/managed-by label is used to describe the tool being used to manage the operation of an application - for example: helm.
Read more on the Recommended Labels section.
Are they added by helm?
No.
First of all, as mentioned before, those labels are not specific to Helm and Helm itself never requires that a particular label be present.
On the other hand, the Helm docs recommend using the following Standard Labels. app.kubernetes.io/managed-by is one of them and should be set to {{ .Release.Service }} in order to find all resources managed by Helm.
So it is the role of the chart maintainer to add those labels.
What is the best way to add them?
Many Helm chart providers add them to the _helpers.tpl file and have all resources include it:
labels: {{ include "my-chart.labels" . | nindent 4 }}
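A minimal sketch of such a named template (the chart name is illustrative, not from the original answer):
{{/* _helpers.tpl */}}
{{- define "my-chart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}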
The trick here is to chase the error message.
For example, in the case below the error message points at something wrong with the Service abc in namespace xyz:
Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
Simply delete the same service from the mentioned namespace with below:
kubectl -n xyz delete svc abc
And then try the installation/deployment again. It might happen that a similar issue appears, but for a different resource, as shown in the example below:
Release "nok-sec-sip-tls-crd" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Role "nok-sec-sip-tls-crd-role" in namespace "debu" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nok-sec-sip-tls-crd": current value is "nok-sec-sip"
Again, use kubectl to delete the resource mentioned in the error message. For example, in the above case the offending resource should be deleted with:
kubectl delete role nok-sec-sip-tls-crd-role -n debu
I was getting this error because I was trying to upgrade the helm chart with the wrong release name, so it conflicted with the existing resources in the same namespace.
I was running this command with the wrong release name:
helm upgrade --install --namespace <namespace> wrong-releasename <chart-folder>
and got the similar errors
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap \"cmname\" in namespace \"namespace\" exists and cannot be imported into the current release
invalid ownership metadata; label validation error: missing key \"app.kubernetes.io/managed-by\": must be set to \"Helm\"; annotation validation error: missing key \"meta.helm.sh/release-name\": must be set to \"wrong-releasename\"; annotation validation error: missing key \"meta.helm.sh/release-namespace\": must be set to \"namespace\"
I checked the existing helm releases in the same namespace and used the listed release name to upgrade my helm chart:
helm ls -n <namespace>
helm upgrade --install --namespace <namespace> releasename <chart-folder>
Here's a faster and more thorough way to get rid of Argo so it can be reinstalled:
helm list -A # see argocd in namespace argocd
helm uninstall argocd -n argocd
kubectl delete namespace argocd
The last line gets rid of all secrets and other resources not cleaned up by uninstalling the helm chart. It was needed in my environment; otherwise, I got the same sorts of errors about duplicate resources you were seeing.
We use GitOps via Flux, and I was getting the same rendered manifests contain a resource that already exists error. For me the problem was that I had accidentally defined a resource with the same name in two different files, so it was being created twice. I removed the duplicate resource definition from one of the files to fix it.

Error Prometheus endpoint for checking AlertManager

I installed Prometheus (following this link: https://devopscube.com/setup-prometheus-monitoring-on-kubernetes/).
But when checking the status of the Targets, it shows "Down" for the AlertManager service; every other endpoint is up (please see the attached file).
Then, I check Service Discovery; the discovered labels show:
__address__="192.168.180.254:9093"
__meta_kubernetes_endpoint_address_target_kind="Pod"
__meta_kubernetes_endpoint_address_target_name="alertmanager-6c666985cc-54rjm"
__meta_kubernetes_endpoint_node_name="worker-node1"
__meta_kubernetes_endpoint_port_protocol="TCP"
__meta_kubernetes_endpoint_ready="true"
__meta_kubernetes_endpoints_name="alertmanager"
__meta_kubernetes_namespace="monitoring"
__meta_kubernetes_pod_annotation_cni_projectcalico_org_podIP="192.168.180.254/32"
__meta_kubernetes_pod_annotationpresent_cni_projectcalico_org_podIP="true"
__meta_kubernetes_pod_container_name="alertmanager"
__meta_kubernetes_pod_container_port_name="alertmanager"
__meta_kubernetes_pod_container_port_number="9093"
But the Target labels show another port (8080), and I don't know why:
instance="192.168.180.254:8080"
job="kubernetes-service-endpoints"
kubernetes_name="alertmanager"
kubernetes_namespace="monitoring"
First, if you want to install Prometheus and Grafana without pain, you should do it through Helm.
First install Helm, and then:
helm install installationWhatEverName stable/prometheus-operator
I've reproduced your issue on GCE.
If you are using version 1.16+, you have probably kept the apiVersion from the tutorial, where the Deployment is in extensions/v1beta1. Since K8s 1.16 you need to change it to apiVersion: apps/v1. Otherwise you will get an error like:
error: unable to recognize "STDIN": no matches for kind "Deployment" in version "extensions/v1beta1"
Second, in 1.16+ you need to specify a selector. If you do not, you will receive another error:
error: error validating "STDIN": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false
It would look like:
...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      ...
Regarding port 8080, please check this article with an example.
Port: Port is the port number which makes a service visible to
other services running within the same K8s cluster. In other words,
in case a service wants to invoke another service running within the
same Kubernetes cluster, it will be able to do so using port specified
against “port” in the service spec file.
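In other words, a mismatch like the one above usually means the Service's port and targetPort don't line up with the container port. A minimal sketch (names assumed, not taken from the tutorial) of what the alertmanager Service should express:
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  selector:
    app: alertmanager
  ports:
    - port: 8080        # the port other services in the cluster connect to
      targetPort: 9093  # the port the alertmanager container listens on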
It worked in my environment on GCE. Did you configure the firewall for your endpoints?
In addition, in Helm 3 some hooks were deprecated. You can find this information here.
If you still have issues, please provide your YAMLs with the changes applied for version 1.16+.