Set up UI dashboard on single node Kubernetes cluster set up with kubeadm - kubernetes

I set up Kubernetes on an Ubuntu 16.04 vServer following this tutorial: https://kubernetes.io/docs/getting-started-guides/kubeadm/
On this node I want to make the Kubernetes Dashboard available, but after creating it via kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml I have no clue how to proceed.
The UI is not accessible via https://{master-ip}/ui.
How can I make the UI publicly accessible?

The easiest approach is to run kubectl proxy on the client machine where you want to use the dashboard, and then access the dashboard at http://127.0.0.1:8001 in a browser on that same machine.
If you want to connect via the master node IP as described in your answer, you need to set up authentication first.
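For example, a minimal sketch of that flow, assuming kubectl on the client machine is already configured with credentials for the cluster (the /ui shortcut applies to older dashboard/apiserver versions; newer releases use the full service proxy path):
kubectl proxy --port=8001
# then open http://127.0.0.1:8001/ui in a browser on the same machine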

Related

How to install and connect to Control-M Agent in a Kubernetes cluster?

I am very new to using Control-M. I need to install Agents inside a Kubernetes cluster. Could you please tell me which steps I need to follow, or point me in the direction of the relevant documentation? Once the agent is installed (which I don't know how to do), how can I "connect" my Control-M server to it?
Thanks very much for any help/guidance you can provide.
BMC has an FAQ for this; note that the Agent settings will need tweaking (see answer 1). Support for this is better in v9.0.19 and v9.0.20. Also check out the GitHub link below.
1. If we provision the Agent as a container in a Kubernetes pod, what should the Agent host name be? By default it takes the Kubernetes pod name as the host name, which is not pingable from outside.
You can use a StatefulSet so that the pod name is stable (a minimal manifest sketch is shown after this list).
If you want the Control-M/Server (outside Kubernetes) to connect to a Control-M/Agent inside Kubernetes, you need to change the connection type to a persistent connection (see the utilities: ctmagcfg on the agent, ctm_menu on the server) that is initiated from the Control-M/Agent side.
Additional Information: Best Practices for using Control-M to run a pod to completion in a Kubernetes-based cluster
https://github.com/controlm/automation-api-community-solutions/tree/master/3-infrastructure-as-code-examples/kubernetes-statefulset-agent-using-pvc
2. Can we connect to an Agent provisioned in Kubernetes via a load balancer?
Yes. A LoadBalancer will expose a static name/IP and allow the Control-M/Server to connect to the Control-M/Agent, but it is not needed (see the persistent connection above) and it costs money in most clouds (in AWS, for example, it allocates an Elastic IP that you pay for).
3. Since we see a couple of documents from the BMC communities for installing the Agent on Kubernetes via a Docker image, there should be a way to discover it from the on-prem Control-M/Server.
The Control-M/Agent discovery is done from the Control-M/Agent side using the CLI (or a REST call) "ctm provision setup" once the pod (container) starts.
This API configures the Control-M/Agent (for example, to use the persistent connection mentioned above) and defines/registers it in the Control-M/Server.
4. When setting up agents in a Kubernetes environment, does an agent need to be installed on each node in a cluster?
The Control-M/Agent only needs to be installed once. It does not have to be installed on every node.
5. Can the agent be installed on the cluster through a DaemonSet and shared by all containers?
The agent can be installed through a DaemonSet, but this will install an agent on each node in the cluster. Each agent will be considered a separate installation, and each agent would individually be added in the CCM. Alternatively, an agent can be installed in a StatefulSet, where only one agent is installed but it has access to the whole Kubernetes cluster.
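To illustrate the StatefulSet option, here is a minimal, hypothetical manifest sketch; the image name and labels are placeholders, and the BMC repository linked in answer 1 contains the full, supported example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: controlm-agent
spec:
  serviceName: controlm-agent              # headless Service giving the pod a stable network identity
  replicas: 1
  selector:
    matchLabels:
      app: controlm-agent
  template:
    metadata:
      labels:
        app: controlm-agent
    spec:
      containers:
      - name: agent
        image: my-registry/controlm-agent:9.0.20   # placeholder image; build it per the BMC docs
        # the container entrypoint would run "ctm provision setup" and configure the
        # persistent connection (ctmagcfg) described in answer 1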

Running kiam server securely

Can anyone explain an example of using kiam on Kubernetes to manage service-level access control to AWS resources?
According to the docs:
The server is the only process that needs to call sts:AssumeRole and
can be placed on an isolated set of EC2 instances that don't run other
user workloads.
I would like to know how to run the server part of it away from the nodes that host your services.
Answer: KIAM architecture is well explained here:
https://www.bluematador.com/blog/iam-access-in-kubernetes-kube2iam-vs-kiam
Basically you want to run the server portion of kiam on master nodes that have the IAM sts:AssumeRole permission, and then let the kiam agents on your worker nodes connect to those master nodes to retrieve credentials.
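As a rough, untested sketch of that placement (the master label and taint names are assumptions that depend on how your cluster was provisioned, and the image tag and server flags should be taken from the kiam documentation):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kiam-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: kiam-server
  template:
    metadata:
      labels:
        app: kiam-server
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""   # schedule the server only on master nodes
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule                   # masters are usually tainted, so tolerate it
      containers:
      - name: kiam-server
        image: quay.io/uswitch/kiam:v3.5     # placeholder tag
        command: ["/kiam", "server"]         # TLS/cert flags omitted; take them from the kiam manifests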
DISCLAIMER: I did some digging on kube2iam and kiam without going all the way to a test bench, and wasn't happy with what I found. It turns out we don't need them anymore starting with Kubernetes 1.13 on EKS, that is as of September 4th, since AWS has added native support (IAM roles for service accounts) for pods to access IAM/STS.
https://docs.aws.amazon.com/en_pv/eks/latest/userguide/iam-roles-for-service-accounts.html

Azure Container Service with Kubernetes - Containers not able to reach Internet

I created an ACS (Azure Container Service) cluster using Kubernetes by following this link: https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-windows-walkthrough and I deployed my .NET 4.5 app by following this link: https://learn.microsoft.com/en-us/azure/container-service/container-service-kubernetes-ui . My app needs to access Azure SQL and other resources that are part of some other resource groups in my account, but my container is not able to make any outbound network calls, either inside Azure or to the internet. I opened some ports to allow outbound connections, but that is not helping either.
When I create an ACS cluster, does it come with a gateway or should I create one? How can I configure ACS so that it allows outbound network calls?
Thanks,
Ashok.
Outbound internet access works from an Azure Container Service (ACS) Kubernetes Windows cluster if you are connecting to IP addresses outside the range 10.0.0.0/16 (that is, you are not connecting to another service on your VNet).
Before Feb 22, 2017 there was a bug where internet access was not available.
Please try the latest deployment from acs-engine: https://github.com/Azure/acs-engine/blob/master/docs/kubernetes.windows.md, and open an issue there if you still see this, and we (Azure Container Service) can help you debug.
For communication with services running inside the cluster, you can use kube-dns, which allows you to access a service by its name. You can find more details at https://kubernetes.io/docs/admin/dns/
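As a hypothetical illustration, a Service named my-api in the default namespace is reachable from any pod at my-api.default.svc.cluster.local (assuming the default cluster.local domain); you can check the resolution from one of your Windows containers with:
kubectl exec -it <pod_name> -- nslookup my-api.default.svc.cluster.local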
For external communication (internet), there is no need to create any gateway etc. By default, the containers inside a pod can make outbound connections. To verify this, you can run PowerShell in one of your containers and try to run:
wget http://www.google.com -OutFile testping.txt
Get-Content testping.txt
and see if it works.
To run PowerShell in the container, SSH to your master node and run:
kubectl exec -it <pod_name> -- powershell

kubeadm init on CentOS 7 using AWS as cloud provider enters a deadlock state

I am trying to install Kubernetes 1.4 on a CentOS 7 cluster on AWS (the same happens with Ubuntu 16.04, though) using the new kubeadm tool.
Here's the output of the command kubeadm init --cloud-provider aws on the master node:
# kubeadm init --cloud-provider aws
<cmd/init> cloud provider "aws" initialized for the control plane. Remember to set the same cloud provider flag on the kubelet.
<master/tokens> generated token: "980532.888de26b1ef9caa3"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
The issue is that the control plane does not become ready and the command seems to enter a deadlock state. I also noticed that if the --cloud-provider flag is not provided, pulling images from Amazon EC2 Container Registry does not work, and when creating a service with type LoadBalancer an Elastic Load Balancer is not created.
Has anyone run kubeadm using aws as cloud provider?
Let me know if any further information is needed.
Thanks!
I launched a cluster with kubeadm on AWS recently (Kubernetes 1.5.1), and it got stuck on the same step as yours. To solve it I had to add "--api-advertise-addresses=LOCAL-EC2-IP"; it didn't work with the external IP (which kubeadm probably fetches itself when no other IP is specified). So it's either a network connectivity issue (try a temporary 0.0.0.0/0 security group rule on that master instance), or something else... In my case it was a network issue: the node wasn't able to connect to itself using its own external IP :)
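For reference, a sketch of what that looks like on the master node; the metadata lookup assumes an EC2 instance, and --api-advertise-addresses is the flag name used by kubeadm of that era (it was later renamed):
# fetch the instance's private IP from the EC2 instance metadata service
LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
kubeadm init --cloud-provider aws --api-advertise-addresses ${LOCAL_IP}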
Regarding PV and ELB integration, I actually did create a PersistentVolumeClaim for my MongoDB cluster and it works (it created the volume and attached it to one of the slave nodes).
For example, the screenshot showed the PV created and attached to a slave node.
So the latest kubeadm, which ships with Kubernetes 1.5.1, should work for you too!
One thing to note: you must have the proper IAM role permissions to create resources (assign your master node an IAM role with something like "EC2 full access" during testing; you can tune it later to allow only the few actions that are actually needed).
Hope it helps.
The documentation (as of now) clearly states the following in the limitations:
The cluster created here doesn’t have cloud-provider integrations, so for example won’t work with (for example) Load Balancers (LBs) or Persistent Volumes (PVs). To easily obtain a cluster which works with LBs and PVs Kubernetes, try the “hello world” GKE tutorial or one of the other cloud-specific installation tutorials.
http://kubernetes.io/docs/getting-started-guides/kubeadm/
There are a couple of possibilities I am aware of here:
1) In older kubeadm versions, SELinux blocks access at this point.
2) If you are behind a proxy, you will need to add the usual proxy variables to the kubeadm environment:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Plus these, which I have not seen documented anywhere (a sketch of setting them follows this list):
KUBERNETES_HTTP_PROXY
KUBERNETES_HTTPS_PROXY
KUBERNETES_NO_PROXY
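A rough sketch of what that can look like when running kubeadm behind a proxy; the proxy address and NO_PROXY list below are placeholders you would replace with your own values:
export HTTP_PROXY=http://proxy.example.com:3128                    # placeholder proxy address
export HTTPS_PROXY=$HTTP_PROXY
export NO_PROXY=127.0.0.1,localhost,169.254.169.254,10.96.0.0/12   # hosts/ranges that must bypass the proxy
export KUBERNETES_HTTP_PROXY=$HTTP_PROXY
export KUBERNETES_HTTPS_PROXY=$HTTPS_PROXY
export KUBERNETES_NO_PROXY=$NO_PROXY
kubeadm init --cloud-provider aws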

How to set KUBE_ENABLE_INSECURE_REGISTRY=true on a running Kubernetes cluster?

I forgot to set export KUBE_ENABLE_INSECURE_REGISTRY=true when running kube-up.sh (AWS provider). I was wondering if there was any way to retroactively apply that change to a running cluster. It is only a 3-node cluster, so doing it manually is an option. Or is the only way to tear down the cluster and start from scratch?
I haven't tested it, but in theory you just need to add --insecure-registry 10.0.0.0/8 (if you are running your insecure registry in the kube network 10.0.0.0/8) to the Docker daemon options (DOCKER_OPTS) on each node.
You can also specify the registry URL instead of the network range.
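Untested, but roughly what that change could look like on each of the three nodes, assuming a Debian/Ubuntu image where the Docker daemon reads DOCKER_OPTS from /etc/default/docker (the file location differs on other distributions):
# append the flag and restart Docker
echo 'DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 10.0.0.0/8"' | sudo tee -a /etc/default/docker
sudo service docker restart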