OpenWhisk cannot be installed in Microk8s due to error: serviceaccount "default" not found - kubernetes

I have a single-node Kubernetes cluster, with MicroK8s (1.26/stable) installed via snap, on Linux Mint (an Ubuntu derivative).
I have also installed Helm, more precisely helm 3.7/stable, likewise with snap.
Using Helm, I have tried to install OpenWhisk on this single node with the following command:
helm install owdev openwhisk/openwhisk -n openwhisk --create-namespace -f whisk.yaml
The contents of whisk.yaml are:
whisk:
  ingress:
    type: NodePort
    apiHostName: localhost
    apiHostPort: 31001
    useInternally: false
nginx:
  httpsNodePort: 31001
# disable affinity
affinity:
  enabled: false
toleration:
  enabled: false
invoker:
  options: "-Dwhisk.kubernetes.user-pod-node-affinity.enabled=false"
  # must use KCF as kind uses containerd as its container runtime
  containerFactory:
    impl: "kubernetes"
Once the command is executed, I get the following error:
Error: INSTALLATION FAILED: pods "owdev-wskadmin" is forbidden: error looking up service account openwhisk/default: serviceaccount "default" not found
Any idea what I did wrong?
The following command is executed to get the secrets:
microk8s kubectl get secrets --all-namespaces
NAMESPACE     NAME                              TYPE                                  DATA   AGE
kube-system   kubernetes-dashboard-certs        Opaque                                0      24h
kube-system   microk8s-dashboard-token          kubernetes.io/service-account-token   3      24h
kube-system   kubernetes-dashboard-csrf         Opaque                                1      24h
kube-system   kubernetes-dashboard-key-holder   Opaque                                2      24h
default       lithops-regcred                   kubernetes.io/dockerconfigjson        1      18h
openwhisk     owdev-whisk.auth                  Opaque                                2      17m
openwhisk     owdev-db.auth                     Opaque                                2      17m
openwhisk     sh.helm.release.v1.owdev.v1       helm.sh/release.v1                    1      17m

The ingress addon was missing on MicroK8s. After removing OpenWhisk with Helm, enabling the ingress addon on MicroK8s, and re-executing the install command:
helm install owdev openwhisk/openwhisk -n openwhisk --create-namespace -f whisk.yaml
OpenWhisk gets installed.
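A minimal command sequence for that fix might look like this (a sketch, assuming the release name and namespace from the question):

# remove the failed release, enable the MicroK8s ingress addon, then retry
helm uninstall owdev -n openwhisk
microk8s enable ingress
helm install owdev openwhisk/openwhisk -n openwhisk --create-namespace -f whisk.yaml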

Related

Where does Helm store installation state?

When you run a helm install command, Helm outputs information like the revision of this installation.
Where does Helm store this information? (I assume it's in the cluster somewhere.)
Depends on configuration
I found the answer in the docs.
Helm 3 changed the default release information storage to Secrets in the namespace of the release.
https://helm.sh/docs/topics/advanced/#storage-backends
It goes on to say that you can configure it to instead store that state in a ConfigMap or in a PostgreSQL database.
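For example, switching the backend is done via the HELM_DRIVER environment variable described on that page (the SQL connection string below is a placeholder):

# store release state in ConfigMaps instead of Secrets
export HELM_DRIVER=configmap

# or use a PostgreSQL backend (placeholder connection string)
export HELM_DRIVER=sql
export HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://helm:changeme@localhost:5432/helm?sslmode=disable"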
So by default, kubectl get secret --namespace my-namespace will include an entry like
sh.helm.release.v1.st.v1   helm.sh/release.v1   1   13m
And kubectl describe secret sh.helm.release.v1.st.v1 will output something like
Name:         sh.helm.release.v1.st.v1
Namespace:    my-namespace
Labels:       modifiedAt=1613580504
              name=st
              owner=helm
              status=deployed
              version=1
Annotations:  <none>
Type:         helm.sh/release.v1
The storage is changed in Helm 3 as follows:
Releases are stored as Secrets by default (it could use PostgreSQL).
Storage is in the namespace of the release.
Naming is changed to sh.helm.release.v1.<release_name>.v<revision_version>.
The Secret type is set as helm.sh/release.v1.
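If you want to peek inside, the release payload in that Secret is gzip-compressed and ends up base64-encoded twice (once by Helm, once by the Secret API); a hedged one-liner to dump it, assuming the secret and namespace above:

kubectl get secret sh.helm.release.v1.st.v1 -n my-namespace \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip -c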
List installed Helm charts:
$ helm ls --all-namespaces
NAME          NAMESPACE   REVISION   UPDATED                                    STATUS     CHART               APP VERSION
chrt-foobar   default     2          2019-10-14 15:18:31.529138228 +0100 IST   deployed   chrt-foobar-0.1.0   1.16.0
chrt-test     test        1          2019-10-14 15:20:28.196338611 +0100 IST   deployed   chrt-test-0.1.0     1.16.0
List Helm release history:
$ kubectl get secret -l "owner=helm" --all-namespaces
NAMESPACE   NAME                                TYPE                 DATA   AGE
default     sh.helm.release.v1.chrt-foobar.v1   helm.sh/release.v1   1      3m2s
default     sh.helm.release.v1.chrt-foobar.v2   helm.sh/release.v1   1      2m40s
test        sh.helm.release.v1.chrt-test.v1     helm.sh/release.v1   1      43s
In Helm 2 there are two parts: the Helm client (helm) and the Helm server (Tiller), which was removed in Helm 3.
When we run helm init, it installs the Tiller component on the Kubernetes cluster. You can confirm the installation with:
kubectl get pods --namespace kube-system
# you should see Tiller running
As for "Where does Helm store this information? (I assume it's in the cluster somewhere.)":
By default, Tiller stores release information in ConfigMaps in the namespace where it is running; the new version also supports a SQL storage backend for release information.
storage-backends
To get release information:
kubectl get configmap -n kube-system -l "OWNER=TILLER"
Then check the release info from the ConfigMap:
kubectl get configmap myapp.v2 -n kube-system -o yaml
how-helm-uses-configmaps-to-store-data

Snapshot of Hostpath volume in kubernetes example clarification

I have a K8s cluster inside Azure VMs, running Ubuntu 18.
The cluster was provisioned using conjure-up.
I am trying to test the kubernetes snapshot feature. Trying to follow the steps here:
https://github.com/kubernetes-incubator/external-storage/blob/master/snapshot/doc/examples/hostpath/README.md
While I can follow most of the instructions on the page, I am not sure what this specific command does:
"_output/bin/snapshot-controller -kubeconfig=${HOME}/.kube/config"
Directly executing this instruction doesn't work as such.
Can anyone explain what this does and how to run this part successfully?
Or better yet point to a complete walk-through if it exists.
Update
Tried out steps from
https://github.com/kubernetes-incubator/external-storage/tree/master/snapshot/deploy/kubernetes/hostpath
Commented out the line below since I am not using RBAC:
# serviceAccountName: snapshot-controller-runner
Then deployed using
kubectl create -f deployment.yaml
kubectl create -f pv.yaml
kubectl create -f pvc.yaml
kubectl create -f snapshot.yaml
These YAML files are from the examples 'as is':
github.com/kubernetes-incubator/external-storage/blob/master/snapshot/doc/examples/hostpath/
kubectl describe volumesnapshot snapshot-demo
Name:         snapshot-demo
Namespace:    default
Labels:       SnapshotMetadata-PVName=hostpath-pv
              SnapshotMetadata-Timestamp=1555999582450832931
Annotations:  <none>
API Version:  volumesnapshot.external-storage.k8s.io/v1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2019-04-23T05:56:05Z
  Generation:          2
  Resource Version:    261433
  Self Link:           /apis/volumesnapshot.external-storage.k8s.io/v1/namespaces/default/volumesnapshots/snapshot-demo
  UID:                 7b89194a-658c-11e9-86b2-000d3a07ff79
Spec:
  Persistent Volume Claim Name:  hostpath-pvc
  Snapshot Data Name:
Status:
  Conditions:          <nil>
  Creation Timestamp:  <nil>
Events:                <none>
The snapshot resource is created; however, the VolumeSnapshotData is NOT created.
kubectl get volumesnapshotdata
No resources found.
kubectl get crd
NAME                                                         CREATED AT
volumesnapshotdatas.volumesnapshot.external-storage.k8s.io   2019-04-21T04:18:54Z
volumesnapshots.volumesnapshot.external-storage.k8s.io       2019-04-21T04:18:54Z
kubectl get pod
NAME                                   READY   STATUS    RESTARTS   AGE
azure                                  1/1     Running   2          2d21h
azure-2                                1/1     Running   2          2d20h
snapshot-controller-5d798696ff-qsh6m   2/2     Running   2          14h
ls /tmp/test/
data
Enabled the feature gate for volume snapshots:
cat /var/snap/kube-apiserver/924/args
--advertise-address="192.168.0.4"
--min-request-timeout="300"
--etcd-cafile="/root/cdk/etcd/client-ca.pem"
--etcd-certfile="/root/cdk/etcd/client-cert.pem"
--etcd-keyfile="/root/cdk/etcd/client-key.pem"
--etcd-servers="https://192.168.0.4:2379"
--storage-backend="etcd3"
--tls-cert-file="/root/cdk/server.crt"
--tls-private-key-file="/root/cdk/server.key"
--insecure-bind-address="127.0.0.1"
--insecure-port="8080"
--audit-log-maxbackup="9"
--audit-log-maxsize="100"
--audit-log-path="/root/cdk/audit/audit.log"
--audit-policy-file="/root/cdk/audit/audit-policy.yaml"
--basic-auth-file="/root/cdk/basic_auth.csv"
--client-ca-file="/root/cdk/ca.crt"
--requestheader-allowed-names="system:kube-apiserver"
--requestheader-client-ca-file="/root/cdk/ca.crt"
--requestheader-extra-headers-prefix="X-Remote-Extra-"
--requestheader-group-headers="X-Remote-Group"
--requestheader-username-headers="X-Remote-User"
--service-account-key-file="/root/cdk/serviceaccount.key"
--token-auth-file="/root/cdk/known_tokens.csv"
--authorization-mode="AlwaysAllow"
--admission-control="NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
--allow-privileged=true
--enable-aggregator-routing
--kubelet-certificate-authority="/root/cdk/ca.crt"
--kubelet-client-certificate="/root/cdk/client.crt"
--kubelet-client-key="/root/cdk/client.key"
--kubelet-preferred-address-types="[InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP]"
--proxy-client-cert-file="/root/cdk/client.crt"
--proxy-client-key-file="/root/cdk/client.key"
--service-cluster-ip-range="10.152.183.0/24"
--logtostderr
--v="4"
--feature-gates="VolumeSnapshotDataSource=true"
What am I missing here?
I think everything you need is already present here: https://github.com/kubernetes-incubator/external-storage/tree/master/snapshot/deploy/kubernetes/hostpath
There is one YAML for the deployment of the snapshot controller and one YAML for the snapshotter RBAC rules.
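A hedged sketch of applying both (the file names here are illustrative; use the actual names from that directory), after un-commenting the serviceAccountName: snapshot-controller-runner line from the question so the controller runs under the RBAC-bound service account:

# apply the RBAC rules first, then the controller deployment (illustrative file names)
kubectl create -f snapshot-rbac.yaml
kubectl create -f deployment.yaml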

Traefik dashboard/web UI 404 when installed via helm on Digitalocean single node cluster

I am trying to set up Traefik as my ingress controller and load balancer on a single-node cluster (DigitalOcean). Following the official Traefik setup guide, I installed Traefik using Helm:
helm install --values values.yaml stable/traefik
# values.yaml
dashboard:
  enabled: true
  domain: traefik-ui.minikube
kubernetes:
  namespaces:
    - default
    - kube-system
# output
RESOURCES:
==> v1/Pod(related)
NAME                                   READY   STATUS              RESTARTS   AGE
operatic-emu-traefik-f5dbf4b8f-z9bzp   0/1     ContainerCreating   0          1s

==> v1/ConfigMap
NAME                   AGE
operatic-emu-traefik   1s

==> v1/Service
operatic-emu-traefik-dashboard   1s
operatic-emu-traefik             1s

==> v1/Deployment
operatic-emu-traefik   1s

==> v1beta1/Ingress
operatic-emu-traefik-dashboard   1s
Then I created the service exposing the web UI:
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
Then I can clearly see my traefik pod running and an external-ip being assigned:
NAME                                     TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/dashboard                        ClusterIP      10.245.156.214   <none>          443/TCP                      11d
service/kubernetes                       ClusterIP      10.245.0.1       <none>          443/TCP                      14d
service/operatic-emu-traefik             LoadBalancer   10.245.137.41    <external-ip>   80:31190/TCP,443:30207/TCP   5m7s
service/operatic-emu-traefik-dashboard   ClusterIP      10.245.8.156     <none>          80/TCP                       5m7s
Then opening http://external-ip/dashboard/ leads to a 404 "page not found" error.
I have read a ton of answers and tutorials but keep missing something. Any help is highly appreciated.
I am writing this post as the information is a bit much to fit in a comment. After spending enough time on understanding how k8s and helm charts work, this is how I solved it:
Firstly, I missed the RBAC part: I did not create a ClusterRole and ClusterRoleBinding to authorise Traefik to use the K8s API (I am on version 1.12). Hence, I should either have deployed the ClusterRole and ClusterRoleBinding manually or added the following to my values.yaml:
rbac:
  enabled: true
Secondly, I tried to access the dashboard UI from the IP directly, without realising that Traefik uses the hostname to route to its dashboard, as @Rico mentioned above (I am voting you up as you did provide helpful info, but I did not manage to connect all the pieces of the puzzle at the time). So either edit your /etc/hosts file to link your hostname to the external IP and then access the dashboard via the browser, or test that it is working with curl:
curl http://external-ip/dashboard/ -H 'Host: traefik-ui.minikube'
To sum up, you should be able to install Traefik and access its dashboard UI by installing:
helm install --values values.yaml stable/traefik
# values.yaml
dashboard:
  enabled: true
  domain: traefik-ui.minikube
rbac:
  enabled: true
kubernetes:
  namespaces:
    - default
    - kube-system
and then editing your hosts file and opening the hostname you chose.
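For example, the hosts entry might look like this (the IP below is a placeholder; use your LoadBalancer's external IP):

# /etc/hosts
203.0.113.10   traefik-ui.minikube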
Now, the confusing part of the official Traefik setup guide is the section named "Submitting an Ingress to the Cluster", just below "Deploy Traefik using Helm Chart", which instructs you to install a service and an ingress object in order to access the dashboard. This is unneeded, as the official stable/traefik Helm chart provides both of them. You would only need that if you wanted to install Traefik by deploying all the required objects manually. However, to a person just starting out with k8s and Helm, it looks like that section must be completed after installing via the official stable/traefik chart.
I believe this is the same issue as this.
You either have to connect with the traefik-ui.minikube hostname or add a host entry on your Ingress definition like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: kube-system
  name: traefik-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: yourown.hostname.com
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: traefik-web-ui
          servicePort: web
You can check with:
$ kubectl -n kube-system get ingress

Taint a node in kubernetes live cluster

How can I achieve the same as this command with a YAML file, such that I can do a kubectl apply -f? The command below works and it taints the node, but I can't figure out how to do the same via a manifest file.
$ kubectl taint nodes \
    172.4.5.2-3a1d4eeb \
    kops.k8s.io/instancegroup=loadbalancer:NoSchedule
Use the -o yaml option and save the resulting YAML file, making sure to remove the status and some extra fields. This will apply the taint, but it also provides you the YAML that you can later use with kubectl apply -f and save to version control. (Even if you create the resource from the command line and later get the YAML and apply it, it will not re-create the resource, so this is perfectly fine.)
Note: most commands support --dry-run, which just generates the YAML without creating the resource, but in this case I could not make it work with --dry-run; maybe this command does not support that flag.
C02W84XMHTD5:~ iahmad$ kubectl taint node minikube dedicated=foo:PreferNoSchedule -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2018-10-16T21:44:03Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: minikube
    node-role.kubernetes.io/master: ""
  name: minikube
  resourceVersion: "291136"
  selfLink: /api/v1/nodes/minikube
  uid: 99a1a304-d18c-11e8-9334-f2cf3c1f0864
spec:
  externalID: minikube
  taints:
  - effect: PreferNoSchedule
    key: dedicated
    value: foo
Then use the YAML with kubectl apply:
apiVersion: v1
kind: Node
metadata:
  name: minikube
spec:
  taints:
  - effect: PreferNoSchedule
    key: dedicated
    value: foo
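For instance (the file name is illustrative):

# save the manifest above as minikube-taint.yaml, then:
kubectl apply -f minikube-taint.yaml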
I have two nodes in my cluster; please look at the labels:
kubectl get nodes --show-labels
NAME          STATUS   ROLES   AGE    VERSION   LABELS
172.16.2.53   Ready    node    7d4h   v1.19.7   type=primary
172.16.2.89   Ready    node    33m    v1.19.7   type=secondary
Let's say I want to taint the node named 172.16.2.89:
kubectl taint node 172.16.2.89 type=secondary:NoSchedule
node/172.16.2.89 tainted
Example:
kubectl taint node <node-name> <label-key>=<value>:NoSchedule
NoExecute means the pod will be evicted from the node.
NoSchedule means the scheduler will not place the pod onto the node.
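For completeness, a pod is only scheduled onto a tainted node if it tolerates the taint; a sketch of the matching toleration for the taint above (a pod spec fragment):

# pod spec fragment tolerating the type=secondary:NoSchedule taint
tolerations:
- key: "type"
  operator: "Equal"
  value: "secondary"
  effect: "NoSchedule"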

kubernetes rolling update using helm

I am new to Helm. I have installed Minikube & Helm on my Windows system. I am able to create pods using Helm and see the deployment, pods & replica set in the dashboard.
I want to do a rolling update using Helm. Please guide me on how to do a rolling update in K8s using Helm.
Creating a Tomcat pod using Helm:
helm create hello-world
Changed the image name and deployment name in deployment.yaml:
kind: Deployment
metadata:
  name: mytomcat
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: tomcat
Install:
helm install hello-world

NAME:   whopping-dolphin
LAST DEPLOYED: Wed Aug 30 21:38:42 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                           CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
whopping-dolphin-hello-world   10.0.0.178   <none>        80/TCP    0s

==> v1beta1/Deployment
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mytomcat   1         1         1            0           0s

NOTES:
1. Get the application URL by running these commands:
   export POD_NAME=$(kubectl get pods --namespace default -l "app=hello-world,release=whopping-dolphin" -o jsonpath="{.items[0].metadata.name}")
   echo "Visit http://127.0.0.1:8080 to use your application"
   kubectl port-forward $POD_NAME 8080:80
I see mytomcat deployment and pod mytomcat-2768693561-hd2hd in dashboard.
Now I would like to run a command that will delete my current deployment & pod in k8s and create a new deployment and pod.
It would be helpful to get sample commands and YAML.
The command below works fine for a rolling update.
The first time it will install; the next time it will upgrade:
helm upgrade --install tom-release --set appName=mytomcatcon hello-world
tom-release is my release name, and runtime values are passed to the Helm chart using the --set option.
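If you also want the upgrade to replace pods gradually, a sketch of a RollingUpdate strategy block you could add to the chart's deployment.yaml (the numbers are illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep existing pods running until replacements are ready
      maxSurge: 1         # create at most one extra pod during the rollout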