how to move prometheus adapter to another namespace? - kubernetes

For now I have prometheus and prometheus adapter in different namespaces:
I tried to reconfigure the adapter's YAML, but I was not successful:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2020-01-30T08:49:05Z"
  generation: 2
  labels:
    app: prometheus-adapter
    chart: prometheus-adapter-2.0.1
    heritage: Tiller
    release: prometheus-adapter
  name: prometheus-adapter
  namespace: my-custom-namespace
  resourceVersion: "18513075"
  selfLink: /apis/apps/v1/namespaces/my-custom-namespace/deployments/prometheus-adapter
...
But I see an error:
the namespace of the object (my-custom-namespace) does not match the namespace on the request (default)
How do I fix it?

You cannot edit an existing resource to change its namespace. You need to delete the existing deployment first and then recreate it in the other namespace.
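For a resource managed directly with kubectl (rather than Helm), a minimal sketch of that delete-and-recreate looks like the following; the manifest file name here is only an assumption:
# Remove the deployment from its current namespace
kubectl delete deployment prometheus-adapter -n default
# Set metadata.namespace to my-custom-namespace in the manifest, then recreate it there
kubectl apply -f prometheus-adapter-deployment.yaml -n my-custom-namespace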
Edit:
With Helm 2 you need to delete the release first with helm delete --purge release-name and then deploy it to the other namespace with helm install stable/prometheus-adapter --namespace namespace-name
With Helm 3 you likewise need to delete the existing deployment and then redeploy it to a different namespace. In the example below (deploying metrics-server), passing --namespace alone was not enough; the kubectl context had to be switched to the target namespace as well.
$ helm install metricserver stable/metrics-server
Error: the namespace from the provided object "kube-system" does not match the namespace "default". You must pass '--namespace=kube-system' to perform this operation.
$ helm install metricserver stable/metrics-server --namespace=kube-system
Error: the namespace from the provided object "kube-system" does not match the namespace "default". You must pass '--namespace=kube-system' to perform this operation.
$ kubectl config set-context kube-system --cluster=kubernetes --user=kubernetes-admin --namespace=kube-system
Context "kube-system" created.
$ kubectl config use-context kube-system
Switched to context "kube-system".
$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kube-system                   kubernetes   kubernetes-admin   kube-system
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
          metallb                       kubernetes   kubernetes-admin   metallb
          nfstorage                     kubernetes   kubernetes-admin   nfstorage
$ helm install metricserver stable/metrics-server
NAME: metricserver
LAST DEPLOYED: 2019-05-26 14:37:45.582245559 -0700 PDT m=+2.942929639
NAMESPACE: kube-system
STATUS: deployed

For Helm 2 you can install the chart in any namespace you want by using:
helm install stable/prometheus-adapter --name my-release --namespace foo
Keep in mind that you need to remove the previous release first.
This can be done using helm delete --purge my-release
There is also a really nice article regarding the changes in Helm 3: Breaking Changes in Helm 3 (and How to Fix Them).
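For completeness, a minimal Helm 3 sketch of the same move (release and namespace names are placeholders; --create-namespace requires Helm 3.2+):
# Remove the release from its current namespace
helm uninstall my-release
# Reinstall the chart into the target namespace
helm install my-release stable/prometheus-adapter --namespace foo --create-namespace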

Related

Install calico GlobalNetworkPolicy via helm chart

I am trying to install a Calico GlobalNetworkPolicy that will be applicable to all the pods in the cluster regardless of namespace. To apply a GlobalNetworkPolicy as per the docs here:
Calico network policies and Calico global network policies are applied
using calicoctl
i.e. with the calicoctl command (assuming the calicoctl binary is installed on the host):
calicoctl apply -f global-policy.yaml
or, if we have a calicoctl pod running:
kubectl exec -ti -n kube-system calicoctl -- /calicoctl apply -f global-deny.yaml -o wide
global-policy.yaml:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: projectcalico.org/namespace == "kube-system"
  types:
  - Ingress
  - Egress
Question: How do I install such a policy via a Helm chart? Helm applies it the same way kubectl does, and that causes an error on install.
Error using kubectl or helm:
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "default-deny" namespace: "" from "": no matches for kind "GlobalNetworkPolicy" in version "projectcalico.org/v3"
As per the doc you linked, a Calico global network policy is a non-namespaced resource and can be applied to any kind of endpoint (pods, VMs, host interfaces) independent of namespace.
But you are using a namespace in the YAML; that might be the reason for the error. Kindly remove the namespace and try again.
Because global network policies use kind: GlobalNetworkPolicy, they are grouped separately from kind: NetworkPolicy. For example, global network policies will not be returned from calicoctl get networkpolicy, and are rather returned from calicoctl get globalnetworkpolicy.
Below is the reference YAML from the doc:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-tcp-port-6379
Refer to the docs for more information on Global Network Policy, Calico Install Via Helm, and Calico command line tools.
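For reference, a fuller sketch of such a policy (the selectors, rules, and port below are illustrative, modelled on the allow-tcp-port-6379 example in the Calico docs):
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-tcp-port-6379
spec:
  # non-namespaced: applies to every endpoint matching this selector
  selector: role == 'database'
  types:
  - Ingress
  - Egress
  ingress:
  # allow TCP traffic to port 6379 from frontend endpoints
  - action: Allow
    protocol: TCP
    source:
      selector: role == 'frontend'
    destination:
      ports:
      - 6379
  egress:
  # allow all outbound traffic from the selected endpoints
  - action: Allow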

Where does Helm store installation state?

When you run a helm install command, Helm outputs information like the revision of this installation.
Where does Helm store this information? (I assume it's in the cluster somewhere.)
Depends on configuration
I found the answer in the docs.
Helm 3 changed the default release information storage to Secrets in the namespace of the release.
https://helm.sh/docs/topics/advanced/#storage-backends
It goes on to say that you can configure it to instead store that state in a ConfigMap or in a PostgreSQL database.
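The backend is selected through the HELM_DRIVER environment variable (secret, configmap, memory, or sql), with a separate connection string for the SQL backend. A sketch, with placeholder release, chart, and connection details:
# Store release state in ConfigMaps instead of Secrets
export HELM_DRIVER=configmap
helm install my-release ./my-chart
# Or store it in PostgreSQL
export HELM_DRIVER=sql
export HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://helm:changeme@localhost:5432/helm?sslmode=disable"
helm install my-release ./my-chart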
So by default, kubectl get secret --namespace my-namespace will include an entry like
sh.helm.release.v1.st.v1 helm.sh/release.v1 1 13m
And kubectl describe secret sh.helm.release.v1.st.v1 will output something like
Name:         sh.helm.release.v1.st.v1
Namespace:    my-namespace
Labels:       modifiedAt=1613580504
              name=st
              owner=helm
              status=deployed
              version=1
Annotations:  <none>
Type:         helm.sh/release.v1
The storage is changed in Helm 3 as follows:
Releases are stored as Secrets by default (it could use PostgreSQL).
Storage is in the namespace of the release.
Naming is changed to sh.helm.release.v1.<release_name>.v<revision_version>.
The Secret type is set as helm.sh/release.v1.
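To inspect the stored release record itself, the release field of that Secret can be decoded; it is base64-encoded twice and gzip-compressed, and the sketch below reuses the sh.helm.release.v1.st.v1 name from the example above:
kubectl get secret sh.helm.release.v1.st.v1 --namespace my-namespace -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -d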
List installed helm Charts:
$ helm ls --all-namespaces
NAME          NAMESPACE   REVISION   UPDATED                                   STATUS     CHART               APP VERSION
chrt-foobar   default     2          2019-10-14 15:18:31.529138228 +0100 IST   deployed   chrt-foobar-0.1.0   1.16.0
chrt-test     test        1          2019-10-14 15:20:28.196338611 +0100 IST   deployed   chrt-test-0.1.0     1.16.0
List the helm release history:
$ kubectl get secret -l "owner=helm" --all-namespaces
NAMESPACE   NAME                                TYPE                 DATA   AGE
default     sh.helm.release.v1.chrt-foobar.v1   helm.sh/release.v1   1      3m2s
default     sh.helm.release.v1.chrt-foobar.v2   helm.sh/release.v1   1      2m40s
test        sh.helm.release.v1.chrt-test.v1     helm.sh/release.v1   1      43s
There are two parts to Helm in Helm 2: the Helm client (helm) and the Helm server (Tiller), which was removed in Helm 3.
When we run helm init it installs the Tiller part on the Kubernetes cluster. You can confirm the installation with:
kubectl get pods --namespace kube-system
# you should see Tiller running
As for "Where does Helm store this information? (I assume it's in the cluster somewhere.)":
By default, Tiller stores release information in ConfigMaps in the namespace where it is running; the new version also supports an SQL storage backend for release information.
storage-backends
To get the release information:
kubectl get configmap -n kube-system -l "OWNER=TILLER"
then check the release info from the ConfigMap:
kubectl get configmap myapp.v2 -n kube-system -o yaml
how-helm-uses-configmaps-to-store-data
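To narrow that down to a single release, the Tiller-managed ConfigMaps can also be filtered by the labels Tiller typically sets (NAME, STATUS, VERSION); myapp below is just the release name from the example above:
# All stored revisions of one release
kubectl get configmap -n kube-system -l "OWNER=TILLER,NAME=myapp"
# Only the revision currently deployed
kubectl get configmap -n kube-system -l "OWNER=TILLER,NAME=myapp,STATUS=DEPLOYED"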

Cert-manager order is in invalid state

I’m migrating from a GitLab-managed Kubernetes cluster to a self-managed cluster. In this self-managed cluster I need to install nginx-ingress and cert-manager. I have already managed to do the same for a cluster used for review environments. I use the latest Helm 3 RC to manage this, so I won’t need Tiller.
So far, I ran these commands:
# Add Helm repos locally
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add jetstack https://charts.jetstack.io
# Create namespaces
kubectl create namespace managed
kubectl create namespace production
# Create cert-manager crds
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
# Install Ingress
helm install ingress stable/nginx-ingress --namespace managed --version 0.26.1
# Install cert-manager with a cluster issuer
kubectl apply -f config/production/cluster-issuer.yaml
helm install cert-manager jetstack/cert-manager --namespace managed --version v0.11.0
This is my cluster-issuer.yaml:
# Based on https://docs.cert-manager.io/en/latest/reference/issuers.html#issuers
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: XXX # This is an actual email address in the real resource
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector: {}
      http01:
        ingress:
          class: nginx
I installed my own Helm chart named docs. All resources from the Helm chart are installed as expected. Using cURL, I can fetch the page over HTTP. Google Chrome redirects me to an HTTPS page with an invalid certificate though.
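For context, the certificate is requested by the chart's Ingress, which references the ClusterIssuer above via an annotation; a sketch of what such an Ingress can look like in the cert-manager 0.11 era, with an illustrative hostname (docs.example.com) and backend service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: docs
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - docs.example.com
    secretName: docs-tls   # cert-manager stores the issued certificate here
  rules:
  - host: docs.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: docs
          servicePort: 80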
The additional following resources have been created:
$ kubectl get secrets
NAME       TYPE                DATA   AGE
docs-tls   kubernetes.io/tls   3      18m
$ kubectl get certificaterequests.cert-manager.io
NAME                 READY   AGE
docs-tls-867256354   False   17m
$ kubectl get certificates.cert-manager.io
NAME       READY   SECRET     AGE
docs-tls   False   docs-tls   18m
$ kubectl get orders.acme.cert-manager.io
NAME                            STATE     AGE
docs-tls-867256354-3424941167   invalid   18m
It appears everything is blocked by the cert-manager order in an invalid state. Why could it be invalid? And how do I fix this?
It turns out that in addition to a correct DNS A record for the apex domain (@), there were some AAAA records that pointed to an IPv6 address I don’t know. Removing those records and redeploying resolved the issue for me.
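A quick way to spot such stray records is to query both record types for the hostname directly (the domain below is illustrative):
dig +short A docs.example.com
dig +short AAAA docs.example.com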

Namespace deployment issue in Kubernetes Helm Chart

I am now testing deployment into a different namespace in Kubernetes, using a Helm chart. In my chart I have deployment.yaml and service.yaml.
When I define the "namespace" parameter with the helm upgrade --install command, it is not working. When I read about this I found the statement that in Helm 2 the namespace in the templates is not overwritten by the --namespace parameter.
I tried the following command:
helm upgrade --install kubedeploy --namespace=test pipeline/spacestudychart
NB: Here my service is deployed to the default namespace.
Screenshot of describe pod:
Here my "helm version" command output is like follows:
docker#mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
For this reason, I tried to add the namespace in deployment.yaml, under metadata.namespace, like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: test
I created two namespaces, test and prod. But this is not working either. When I add the namespace like this, my service does not come up and I cannot access it, yet there is no error in the Jenkins console. When I defined the namespace only in the helm upgrade --install command it at least deployed, albeit to the default namespace; here it does not deploy at all.
After this, I removed the namespace from deployment.yaml and added it back under metadata.namespace in the same way. There too I am not able to access the deployed service, but the Jenkins console output still shows success.
Why is the namespace not working with my Helm deployment? What changes do I need to make to deploy to test/prod instead of the default namespace?
Remove namespace: test from all of your chart files and helm install --namespace=namespace2 ... should work.
On Helm 3.2+, I would suggest (based on this thread) to move the namespace creation to the CLI:
1 ) Add the --create-namespace after the -n flag:
helm upgrade --install <name> <repo> -n <namespace> --create-namespace
2 ) Inside the different resources - pass the Release namespace:
namespace: {{ .Release.Namespace }}
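A minimal sketch of how that looks in the question's deployment.yaml (the chart name is the one from the question; the rest of the template stays as generated):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: {{ .Release.Namespace }}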

kubernetes rolling update using helm

I am new to Helm. I have installed Minikube and Helm on my Windows system. I am able to create pods using Helm and see the deployment, pods, and replica set in the dashboard.
I want to do a rolling update using Helm. Guide me on how to do a rolling update in K8s using Helm.
Creating a Tomcat pod using Helm:
helm create hello-world
Changed the image name and deployment name in deployment.yaml:
kind: Deployment
metadata:
  name: mytomcat
spec:
  containers:
  - name: {{ .Chart.Name }}
    image: tomcat
Install
helm install hello-world
NAME: whopping-dolphin
LAST DEPLOYED: Wed Aug 30 21:38:42 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME                           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
whopping-dolphin-hello-world   10.0.0.178   <none>        80/TCP    0s
==> v1beta1/Deployment
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mytomcat   1         1         1            0           0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app=hello-world,release=whopping-dolphin" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
I see the mytomcat deployment and the pod mytomcat-2768693561-hd2hd in the dashboard.
Now I would like to run a command that deletes my current deployment and pod in K8s and creates a new deployment and pod.
It would be helpful to get sample commands and YAML.
The command below works fine for a rolling update.
The first time it installs the release; the next time it upgrades it:
helm upgrade --install tom-release --set appName=mytomcatcon hello-world
tom-release is my release name, and runtime values are passed to the Helm chart using the --set option.
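For illustration, re-running the same upgrade --install with a changed value (or an updated chart) makes Helm submit a new revision and Kubernetes perform the rolling update; its progress can then be watched. A sketch, assuming the Deployment keeps the name mytomcat from the question:
helm upgrade --install tom-release --set appName=mytomcatcon hello-world
kubectl rollout status deployment/mytomcat
kubectl rollout history deployment/mytomcat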