How to deploy Keycloak as a pod in the Kubernetes dashboard, which is set up in AWS EC2? - kubernetes

Added it from a form using the quay.io/keycloak/keycloak image.
Changed the Service from LoadBalancer to NodePort.
Can visit it via ip:port, but it shows an error saying to add a user from localhost:8080 or use the add-user-keycloak script.

Please follow the docs Keycloak on Kubernetes.
You can find instructions there on how to deploy Keycloak inside minikube.
However, you can download the same deployment files and modify the settings according to your needs.
For example, you can change the Service type from LoadBalancer to NodePort.
In addition, please consider changing other settings such as KEYCLOAK_USER and KEYCLOAK_PASSWORD:
wget -q -O - https://raw.githubusercontent.com/keycloak/keycloak-quickstarts/latest/kubernetes-examples/keycloak.yaml | \
sed "s/LoadBalancer/NodePort/" | \
kubectl create -f -
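If you applied the manifest without editing those variables, a minimal sketch of setting them afterwards (it assumes the Deployment is named keycloak, as in the quickstart manifest; pick your own credentials):
# Set the initial admin credentials on the running Deployment (this triggers a rolling restart)
kubectl set env deployment/keycloak KEYCLOAK_USER=admin KEYCLOAK_PASSWORD=change-me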
In order to access your Keycloak instance you should change:
minikube ip (to the external IP address associated with your VM, or use the node IP from inside the VM)
and verify your Service's NodePort by running:
kubectl get services/keycloak -o go-template='{{(index .spec.ports 0).nodePort}}'
kubectl get svc -o wide
By default, the NodePort is allocated from the range 30000-32767.
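Putting the node address and the NodePort together, a sketch of building the URL (the <EC2_PUBLIC_IP> placeholder is an assumption; replace it with your instance's public IP and make sure the port is open in the EC2 security group):
NODE_PORT=$(kubectl get services/keycloak -o go-template='{{(index .spec.ports 0).nodePort}}')
echo "http://<EC2_PUBLIC_IP>:${NODE_PORT}"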

Related

Kong Gateway using Kubernetes

Trying to deploy kong gateway via Kubernetes:
Created a namespace: kong-helm
Applied YAML files (using kubectl in the kong-helm namespace), which include: configmap.yaml, service.yaml, secret.yaml, ingress.yaml.
Upon applying dbless.yaml (https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-dbless.yaml), the ingress dbless pod is running.
kubectl get svc --all-namespaces - able to see that the service (kong-test-poc) is created.
But when port forward is given: kubectl port-forward service/kong-test-poc 80:8080
Getting the following error: Error from server (NotFound): services "kong-test-poc" not found
Can you please tell how to rectify this error?
I believe you are missing the specific namespace where the service is running, so the command is going to your default namespace.
kubectl -n kong-helm port-forward service/kong-test-poc 8080:8080
I also recommend using a different local port than 80, as ports below 1024 are privileged on Unix. Also make sure that kong-test-poc is configured to listen on 8080 (you didn't post the definition).
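As a quick sanity check before port-forwarding, a sketch (assuming the Service really is named kong-test-poc) to confirm it exists in kong-helm and see which ports it exposes:
# List the Service in the kong-helm namespace and show its port mapping
kubectl -n kong-helm get svc kong-test-poc -o wide
kubectl -n kong-helm get svc kong-test-poc -o jsonpath='{.spec.ports[*].port}'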

openshift add service account to deployed app

I'm trying to add a service account to a deployed application, but so far I keep getting the "application not available" message. I did the following:
Created a service account:
oc create sa name-sa
oc adm policy add-scc-to-user anyuid -z name-sa -n book
Added the service account to the deployed app:
oc set serviceaccount deploymentconfig wordapp name-sa
I checked the pods and the application is running, but I'm still not able to see any output from the route, and the oc describe pod command doesn't give any errors.
I'm not sure that the ServiceAccount's permissions are the cause of this issue.
I think you should first check the access flow through Route -> Service -> Pod, and verify that your application works using the curl command.
I show the troubleshooting steps below.
Check which Service is bound to your Route.
In this example, the docker-registry Service is associated with the Route.
$ oc describe route <your routename>
:
Service: docker-registry
Weight: 100 (100%)
Endpoints: 10.128.1.94:5000 <--- Check whether this IP matches your application pod's IP.
Then check whether the Service detects its endpoint pods correctly.
$ oc describe svc docker-registry
:
Port: 5000-tcp 5000/TCP
TargetPort: 5000/TCP
Endpoints: 10.128.1.94:5000 <--- Check whether this IP matches your application pod's IP.
Verify the accessibility of the application on the pod using curl:
$ oc rsh <your pod name> curl -vs http://localhost:5000/
:
< HTTP/1.1 200 OK <--- Check whether you get the expected response from your application on the pod.
Additionally, you can also check that your pod is running with the expected SCC and ServiceAccount.
$ oc get pod <your podname> -o yaml | grep -E 'scc|serviceAccountName'
openshift.io/scc: anyuid
serviceAccountName: name-sa
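Once those checks pass, a sketch of testing the Route from outside the cluster (the jsonpath expression just extracts the Route's hostname; adjust the scheme if your Route terminates TLS):
ROUTE_HOST=$(oc get route <your routename> -o jsonpath='{.spec.host}')
curl -vs "http://${ROUTE_HOST}/"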

"unable to retrieve the complete list of server APIs: tap.linkerd.io/v1alpha1" error using Linkerd on private cluster in GKE

Why does the following error occur when I install Linkerd 2.x on a private cluster in GKE?
Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: tap.linkerd.io/v1alpha1: the server is currently unable to handle the request
The default firewall rules of a private cluster on GKE only permit traffic on ports 443 and 10250. This allows communication to the kube-apiserver and kubelet, respectively.
Linkerd uses ports 8443 and 8089 for communication between the control plane and the proxies deployed to the data plane.
The tap component uses port 8089 to handle requests to its apiserver.
The proxy injector and service profile validator components, both of which are types of admission controllers, use port 8443 to handle requests.
The Linkerd 2 docs include instructions for configuring your firewall on a GKE private cluster: https://linkerd.io/2/reference/cluster-configuration/
They are included below:
Get the cluster name:
CLUSTER_NAME=your-cluster-name
gcloud config set compute/zone your-zone-or-region
Get the cluster MASTER_IPV4_CIDR:
MASTER_IPV4_CIDR=$(gcloud container clusters describe $CLUSTER_NAME \
| grep "masterIpv4CidrBlock: " \
| awk '{print $2}')
Get the cluster NETWORK:
NETWORK=$(gcloud container clusters describe $CLUSTER_NAME \
| grep "^network: " \
| awk '{print $2}')
Get the cluster auto-generated NETWORK_TARGET_TAG:
NETWORK_TARGET_TAG=$(gcloud compute firewall-rules list \
--filter network=$NETWORK --format json \
| jq ".[] | select(.name | contains(\"$CLUSTER_NAME\"))" \
| jq -r '.targetTags[0]' | head -1)
Verify the values:
echo $MASTER_IPV4_CIDR $NETWORK $NETWORK_TARGET_TAG
# example output
10.0.0.0/28 foo-network gke-foo-cluster-c1ecba83-node
Create the firewall rules for proxy-injector and tap:
gcloud compute firewall-rules create gke-to-linkerd-control-plane \
--network "$NETWORK" \
--allow "tcp:8443,tcp:8089" \
--source-ranges "$MASTER_IPV4_CIDR" \
--target-tags "$NETWORK_TARGET_TAG" \
--priority 1000 \
--description "Allow traffic on ports 8843, 8089 for linkerd control-plane components"
Finally, verify that the firewall is created:
gcloud compute firewall-rules describe gke-to-linkerd-control-plane
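Once the rule is in place, a quick sketch of confirming that the tap aggregated API is reachable again (the object name v1alpha1.tap.linkerd.io corresponds to the tap.linkerd.io/v1alpha1 group/version from the error message):
kubectl get apiservice v1alpha1.tap.linkerd.io
# The AVAILABLE column should now report True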
In my case, it was related to linkerd/linkerd2#3497, when the Linkerd service had some internal problems and couldn't respond back to the API service requests. Fixed by restarting its pods.
Solution:
The steps I followed are:
kubectl get apiservices: if the Linkerd apiservice is down with the error CrashLoopBackOff, go to step 2; otherwise just try to restart the Linkerd apiservice using kubectl delete apiservice/"service_name". For me it was v1alpha1.tap.linkerd.io.
kubectl get pods -n kube-system showed that pods like metrics-server, linkerd, and kubernetes-dashboard were down because the main CoreDNS pod was down.
For me it was:
NAME READY STATUS RESTARTS AGE
pod/coredns-85577b65b-zj2x2 0/1 CrashLoopBackOff 7 13m
Use kubectl describe pod/"pod_name" to check the error in the CoreDNS pod. If it is down because of /etc/coredns/Corefile:10 - Error during parsing: Unknown directive proxy, then we need to use forward instead of proxy in the YAML file holding the CoreDNS config, because the CoreDNS 1.5.x version used by the image no longer supports the proxy keyword.
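A sketch of that fix (it assumes the stock CoreDNS ConfigMap name in kube-system and a kubeadm-style pod label):
# Open the CoreDNS config and change the line using the old plugin, e.g.
#   proxy . /etc/resolv.conf   ->   forward . /etc/resolv.conf
kubectl -n kube-system edit configmap coredns
# Recreate the CoreDNS pods so they pick up the new Corefile
kubectl -n kube-system delete pod -l k8s-app=kube-dns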
This was a linkerd issue for me. To diagnose any linkerd-related issues, you can use the linkerd CLI and run linkerd check; this will show you whether there is an issue with linkerd, with links to instructions for fixing it.
For me, the issue was that the linkerd root certs had expired. In my case, linkerd was experimental in a dev cluster, so I removed it. However, if you need to update your certificates you can follow the instructions at the following link.
https://linkerd.io/2.11/tasks/replacing_expired_certificates/
Thanks to https://stackoverflow.com/a/59644120/1212371 I was put on the right path.

Kubernetes dashboard

I have been able to successfully set up Kubernetes on my CentOS 7 server.
On trying to get the dashboard working after following the documentation, running 'kubectl proxy'
attempts to serve on 127.0.0.1:9001 and not my server IP. Does this mean I cannot access the Kubernetes dashboard from outside the server?
I need help getting the dashboard running using my public IP.
You can specify on which address you want to run kubectl proxy, i.e.
kubectl proxy --address <EXTERNAL-IP> -p 9001
Starting to serve on 100.105.***.***:9001
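Note that when kubectl proxy serves on a non-local address it still filters requests by host, so you may also need to relax --accept-hosts; a sketch (this removes host-based protection, so use it with care):
kubectl proxy --address <EXTERNAL-IP> --accept-hosts '.*' -p 9001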
You can also use port forwarding to access the dashboard.
kubectl port-forward --address 0.0.0.0 pod/dashboard 8888:80
This will listen port 8888 on all addresses and route traffic directly to your pod.
For instance:
rsha:~$ kubectl port-forward --address 0.0.0.0 deploy/webserver 8888:80
Forwarding from 0.0.0.0:8888 -> 80
In another terminal running
rsha:~$ curl 100.105.***.***:8888
<html><body><h1>It works!</h1></body></html>
As I understand, you would like to access the dashboard from your laptop. What you should do is create an admin account called k8s-admin:
$ kubectl --namespace kube-system create serviceaccount k8s-admin
$ kubectl create clusterrolebinding k8s-admin --serviceaccount=kube-system:k8s-admin --clusterrole=cluster-admin
Then setup kubectl on your laptop, e.g. for macOS it looks like this (see documentation):
$ brew install kubernetes-cli
Copy the cluster credentials: create a ~/.kube directory on your laptop and then scp the ~/.kube/config file from the k8s (Kubernetes) master to your ~/.kube directory.
Then get the authentication token you need to connect to the dashboard:
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep k8s-admin | awk '{print $1}')
Now start the proxy:
$ kubectl proxy
Now open the dashboard by going to:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
You should see the Token option; copy and paste the token from the previous step and sign in.
You can follow this tutorial.

Unable to expose kubernetes dashboard to access it from outside

I also tried changing the Kubernetes Service YAML to NodePort and then exposing the dashboard on the new port, but I am getting the error "connection is not private". So how can I access the dashboard and make the connection private?
Can you try the steps below?
Undeploy the existing dashboard.
Deploy the dashboard using the command below:
kubectl create -f https://raw.githubusercontent.com/epasham/docker-repo/master/k8s/dashboard/kubernetes-dashboard.yml
master $ kubectl get svc -n kube-system | grep dashboard
kubernetes-dashboard NodePort 10.110.127.61 <none> 9090:30000/TCP 2m
master $ curl 10.110.127.61:9090 ( within the cluster )
Access the dashboard from a browser using HOSTNAME:30000.
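If you prefer to keep the official HTTPS dashboard instead, a sketch of switching its Service to NodePort (the namespace and Service name depend on how the official dashboard was installed: kube-system in older releases, kubernetes-dashboard in newer ones; the browser warning remains until you install a certificate it trusts):
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard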