Running the kiam server securely - Kubernetes

Can anyone give an example of using kiam on Kubernetes to manage service-level access control to AWS resources?
According to the docs:
The server is the only process that needs to call sts:AssumeRole and
can be placed on an isolated set of EC2 instances that don't run other
user workloads.
I would like to know how to run the server part of it away from the nodes that host your services.

Answer: The kiam architecture is well explained here:
https://www.bluematador.com/blog/iam-access-in-kubernetes-kube2iam-vs-kiam
Basically, you want to use master nodes in your cluster with IAM sts:AssumeRole permissions on them to install the server portion of kiam, and then let your worker nodes connect to the master nodes to retrieve credentials.
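A minimal sketch of that scheduling constraint, assuming a DaemonSet for the server (labels, image tag, and flags are illustrative; the upstream kiam manifests are more complete, including the TLS setup between agent and server):

```yaml
# Sketch: run kiam-server only on master nodes, which hold the
# sts:AssumeRole permissions. Labels and image tag are illustrative.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kiam-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: kiam-server
  template:
    metadata:
      labels:
        app: kiam-server
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""   # schedule only on masters
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule                  # masters are usually tainted
      containers:
        - name: kiam-server
          image: quay.io/uswitch/kiam:v3.6    # assumed tag; pin your own
          args: ["server"]
```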
DISCLAIMER: I did some digging on kube2iam and kiam without going all the way to a test bench, and I wasn't happy with what I found. It turns out we don't need them anymore starting with K8s 1.13 in EKS: as of September 4th, AWS has added native support for pods to assume IAM roles via STS.
https://docs.aws.amazon.com/en_pv/eks/latest/userguide/iam-roles-for-service-accounts.html
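For reference, that IRSA approach boils down to annotating a ServiceAccount with a role ARN; pods that use that ServiceAccount receive credentials for the role via an injected web identity token, with no kiam/kube2iam agent in the path. A minimal sketch (the account ID and role name are placeholders, and the cluster needs an OIDC provider configured):

```yaml
# Sketch: IAM Roles for Service Accounts (IRSA) on EKS.
# The role ARN below is a placeholder.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-role
```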

Related

UDP/TCP Broadcast in Managed Kubernetes Services (specifically AWS-EKS)

We have an app that uses UDP broadcast messages to form a "cluster" of all instances running in the same subnet.
We can successfully run this app in our (pretty std) local K8s installation by using hostNetwork:true for pods. This works because all K8s nodes are in the same subnet and broadcasting is possible. (A minor note: the K8s setup uses the flannel networking plugin.)
Now we want to move this app to the managed K8s service on AWS (EKS). But our initial attempts have failed. The 2 daemons running in 2 different pods didn't see each other. We thought that was most likely due to the auto-generated EC2 worker node instances for the AWS K8s service residing on different subnets. Then we created 2 completely new EC2 instances in the same subnet (and the same availability zone) and tried running the app directly on them (not as part of K8s), but that also failed. They could not communicate via broadcast messages even though the 2 EC2 instances were on the same subnet/availability zone.
Hence, the following questions:
Our preliminary search shows that AWS EC2 probably does not support broadcasting/multicasting, but we still wanted to ask: is there a way to enable it (on AWS or another cloud provider)?
We had used hostNetwork:true because we thought it would be much harder, if not impossible, to get broadcasting working with K8s pod networking. But it seems some companies offer K8s network plugins that support this. Does anybody have experience with (or a recommendation for) any of them? Would they work on AWS, for example, considering that AWS doesn't support it at the EC2 level?
Would much appreciate any pointers as to how to approach this and whether we have any options at all.
Thanks
Conceptually, you need to create an overlay network on top of the native VPC network, like this. There is a CNI that supports multicast, and here is the AWS blog post about it.
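If you move to a multicast-capable CNI (Weave Net is one that supports multicast), a quick way to verify connectivity is a pair of throwaway pods with a network-tools image; everything below is illustrative:

```yaml
# Sketch: a test pod for checking multicast between pods on a
# multicast-capable CNI. The image is an assumed utility image with iperf.
apiVersion: v1
kind: Pod
metadata:
  name: mcast-test-1
spec:
  containers:
    - name: tools
      image: nicolaka/netshoot
      command: ["sleep", "infinity"]
```

Run two such pods, start a UDP listener bound to a multicast group (e.g. 239.0.0.1) in one, and send to that group from the other; if the CNI carries multicast, the packets arrive.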

Good solutions to automate infrastructure deployment locally?

I have recently been reading more about infrastructure as a service (IaaS) and platform as a service (PaaS) and had some questions. I see that when we opt for a PaaS solution, it is generally very easy to create the infrastructure, as the cloud providers handle that for us, and we can even automate the deployment using an infrastructure-as-code solution like Terraform.
But if we use an IaaS solution, or even a local on-premises cluster, it seems we lose a lot of the automation that PaaS allows. So I was curious: are there any good tools out there for automating infrastructure deployment on a local cluster that is not in the cloud?
The best thing I could think of was to run a local Kubernetes cluster and then Dockerize each of the infrastructure components, but this seems difficult as each node in the cluster will need its own specific configuration files.
From my basic Googling, it seems like there is not a good solution to this.
Edit:
I was not clear enough with my original intentions. I have two problems I am trying to solve.
How do I automate infrastructure deployment locally? For example, suppose I wanted to create a Hadoop HDFS cluster. I would need to configure one node to be the namenode with an accessible IP, and the other nodes to be datanodes that are aware of the namenode's IP. At the moment, I have to do this manually by logging into each node, checking its IP, and then configuring each one. How would I automate this? If I were to use a Kubernetes approach, how do I specify that one of the running pods needs to be the namenode and the others are datanodes? How do I find the pods' IPs and have them be aware of the namenode's IP?
The next problem I have is very similar to the first, but with a slight modification. How would I deploy specific configuration files to each node? For instance, in Kafka the configuration file for one node requires the IPs of the Zookeeper nodes, as well as the IP it should listen on. This may be different for every node in the cluster. Is there a good way to make these config files pod-specific, so that I do not have to do bash text processing to insert the correct contents into each pod's config files?
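(For reference before the answer: the standard Kubernetes mechanism for both problems is a headless Service plus a StatefulSet. Each pod then gets a stable, predictable DNS name, so config files can reference names instead of discovered IPs. A minimal sketch, with illustrative names and image:)

```yaml
# Sketch: stable per-pod identities for an HDFS-like cluster.
apiVersion: v1
kind: Service
metadata:
  name: hdfs
spec:
  clusterIP: None        # headless: gives each pod its own DNS entry
  selector:
    app: hdfs
  ports:
    - port: 8020
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hdfs
spec:
  serviceName: hdfs
  replicas: 3
  selector:
    matchLabels:
      app: hdfs
  template:
    metadata:
      labels:
        app: hdfs
    spec:
      containers:
        - name: node
          image: my-hdfs-image:latest   # placeholder image
          env:
            # Pods are named hdfs-0, hdfs-1, ... and resolvable as
            # hdfs-0.hdfs, hdfs-1.hdfs, etc.; treat hdfs-0 as the namenode.
            - name: NAMENODE_ADDRESS
              value: hdfs-0.hdfs:8020
```

An initContainer or startup script can inspect the pod's own hostname to decide whether it is hdfs-0 (the namenode) or a datanode, and per-pod config values can be templated from that hostname, which avoids the bash text processing mentioned in the second problem.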
You can use Terraform for all of your on-premises infrastructure automation, and Ansible for configuration management.
Let's say you have three HPE servers. Install K8s or VMware on them using Ansible; then you can treat them as three availability zones in one region, same as AWS. From there you can start deploying Dockerized apps or Helm charts using Terraform.
Summary:
Ansible for installing and configuring K8s.
Terraform for provisioning K8s.
Helm for installing apps on K8s.
After this you will have a basic automated on-premises infrastructure; a sketch of the Ansible piece follows.
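A minimal sketch of that Ansible piece, assuming a kubeadm-based install and that the Kubernetes apt repository is already configured on the hosts (group names are illustrative; kubespray is a more complete, ready-made option):

```yaml
# Sketch: install the runtime and kubeadm everywhere, then initialize
# the first master. Group names and the pod CIDR are illustrative.
- hosts: k8s_nodes
  become: true
  tasks:
    - name: Install prerequisites (assumes the k8s apt repo is configured)
      apt:
        name: [docker.io, kubelet, kubeadm, kubectl]
        state: present
        update_cache: true

- hosts: k8s_masters[0]
  become: true
  tasks:
    - name: Initialize the control plane
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf   # makes the task idempotent
```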

OpenShift: MongoDB and mongo-express

I have some team members who don't have permission to pull Docker images locally on their machines and run local instances of MongoDB and mongo-express.
So I am planning to deploy MongoDB and mongo-express as pods in OpenShift for them to access. Can anyone provide the steps to do that in OpenShift? Or else a proper resource where I can find information/steps.
I am new to OpenShift.
Can anyone provide the steps to do that in OpenShift?
This tutorial explains how apps (containers) are deployed from images in OpenShift.
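In plain Kubernetes/OpenShift terms, the deployment comes down to a Deployment and a Service per component; here is a minimal sketch with no auth or persistence (names and tags are illustrative):

```yaml
# Sketch: MongoDB plus mongo-express for local team access (demo only:
# no authentication, no persistent storage).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels: {app: mongodb}
  template:
    metadata:
      labels: {app: mongodb}
    spec:
      containers:
        - name: mongodb
          image: mongo:4.4
          ports: [{containerPort: 27017}]
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  selector: {app: mongodb}
  ports: [{port: 27017}]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels: {app: mongo-express}
  template:
    metadata:
      labels: {app: mongo-express}
    spec:
      containers:
        - name: mongo-express
          image: mongo-express:latest
          env:
            - name: ME_CONFIG_MONGODB_SERVER   # points the UI at the service above
              value: mongodb
          ports: [{containerPort: 8081}]
```

On OpenShift you would additionally expose mongo-express with a Route, and note that stock images sometimes need permission tweaks because OpenShift runs containers with a random UID by default.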
Or else a proper resource where I can find information/steps
It depends on your needs; the main question is whether you really need container orchestration tools or not. If you need them, then you can consider installing them locally:
Docker for Windows
Minikube
or in the cloud:
Google Kubernetes Engine aka GKE (it allows you to create a basic Kubernetes cluster in a few clicks)
OpenShift (I haven't dealt with it yet)
From what I've already seen, Kubernetes provides a lot of documentation (with examples, etc.) on the topic.
Last but not least, there is a really nice step-by-step guide on how to create a Kubernetes cluster from scratch "the hard way", if you need the cluster to be fully managed by you.

Deploy Kubernetes on a self-hosted production environment

I am trying to install Kubernetes on a self-hosted production environment running Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use kubespray to self-host a production environment (a sample inventory sketch follows the link).
https://github.com/kubernetes-incubator/kubespray
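The gist is an Ansible inventory describing your machines, after which you run kubespray's cluster.yml playbook against it. A minimal sketch (hostnames and IPs are placeholders; group names vary between kubespray versions, e.g. older releases use kube-master instead of kube_control_plane):

```yaml
# Sketch: kubespray-style hosts.yaml inventory. All values are placeholders.
all:
  hosts:
    node1: {ansible_host: 10.0.0.1}
    node2: {ansible_host: 10.0.0.2}
    node3: {ansible_host: 10.0.0.3}
  children:
    kube_control_plane:
      hosts: {node1: {}}
    etcd:
      hosts: {node1: {}}
    kube_node:
      hosts: {node2: {}, node3: {}}
    k8s_cluster:
      children: {kube_control_plane: {}, kube_node: {}}
```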
It depends on what you mean by "self-host". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploy k8s in a custom environment, refer to this article which covers a bunch of options suitable for that.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
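For the kubeadm HA route, the key idea is a shared controlPlaneEndpoint (typically a load balancer in front of all the API servers). A sketch of the config file accepted by newer kubeadm versions (the endpoint, version, and CIDR are placeholders):

```yaml
# Sketch: kubeadm ClusterConfiguration for an HA control plane.
# controlPlaneEndpoint should be a load balancer in front of all masters.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
controlPlaneEndpoint: "lb.example.com:6443"
networking:
  podSubnet: 10.244.0.0/16
```

You then run kubeadm init --config with this file (plus --upload-certs) on the first master, and kubeadm join --control-plane on the others.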
However, in Kubernetes there is a different definition of "self-hosted": running Kubernetes itself as a workload in Kubernetes. If you are interested in a real self-hosted approach (in a custom environment), refer to this article.
Hope this helps
You can use Typhoon to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose your cloud provider for provisioning the infrastructure (which is done using Terraform), and the fact that it gives you upstream k8s is a big plus too.
Internally, it uses bootkube to bring up the temporary control plane, which would consist of
api-server
controller-manager
scheduler
and then, when we have the temporary control plane up, we inject the objects into the API server to get our k8s cluster.
Have a look at this KubeCon talk given by CoreOS, which explains how this works.

kubeadm init on CentOS 7 using AWS as cloud provider enters a deadlock state

I am trying to install Kubernetes 1.4 on a CentOS 7 cluster on AWS (the same happens with Ubuntu 16.04, though) using the new kubeadm tool.
Here's the output of the command kubeadm init --cloud-provider aws on the master node:
# kubeadm init --cloud-provider aws
<cmd/init> cloud provider "aws" initialized for the control plane. Remember to set the same cloud provider flag on the kubelet.
<master/tokens> generated token: "980532.888de26b1ef9caa3"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
The issue is that the control plane does not become ready and the command seems to enter a deadlock state. I also noticed that if the --cloud-provider flag is not provided, pulling images from Amazon EC2 Container Registry does not work, and when creating a service with type LoadBalancer an Elastic Load Balancer is not created.
Has anyone run kubeadm using AWS as the cloud provider?
Let me know if any further information is needed.
Thanks!
I launched a cluster with kubeadm on AWS recently (Kubernetes 1.5.1), and it was stuck on the same step as yours. To solve it I had to add "--api-advertise-addresses=LOCAL-EC2-IP"; it didn't work with the external IP (which kubeadm probably fetches itself when no other IP is specified). So it's either a network connectivity issue (temporarily try a 0.0.0.0/0 security group rule on that master instance), or something else. In my case it was a network issue: the node wasn't able to connect to itself using its own external IP :)
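(Side note for readers on newer releases: --api-advertise-addresses belongs to the 1.4/1.5-era kubeadm; current versions use --apiserver-advertise-address, or equivalently a config file along these lines, where the IP is a placeholder for the node's private EC2 address:)

```yaml
# Sketch: modern kubeadm equivalent of pinning the advertised API address.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.10   # the node's private IP
  bindPort: 6443
```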
Regarding PV and ELB integrations, I actually did launch a PersistentVolumeClaim with my MongoDB cluster and it works (it created the volume and attached it to one of the slave nodes).
Here it is, for example:
PV created and attached to slave node
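A claim along these lines is enough to get a dynamically provisioned EBS volume, assuming an EBS-backed storage class exists (gp2 is the usual default on AWS; treat the name as a placeholder):

```yaml
# Sketch: PVC that dynamically provisions an AWS EBS volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: gp2         # environment-specific
  resources:
    requests:
      storage: 10Gi
```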
So the latest version of kubeadm, which ships with Kubernetes 1.5.1, should work for you too!
One thing to note: you must have the proper IAM role permissions to create resources (assign your master node an IAM role with something like "EC2 full access" during testing; you can tune it later to allow only the few needed actions).
Hope it helps.
The documentation (as of now) clearly states the following in the limitations:
The cluster created here doesn’t have cloud-provider integrations, so for example won’t work with (for example) Load Balancers (LBs) or Persistent Volumes (PVs). To easily obtain a cluster which works with LBs and PVs Kubernetes, try the “hello world” GKE tutorial or one of the other cloud-specific installation tutorials.
http://kubernetes.io/docs/getting-started-guides/kubeadm/
There are a couple of possibilities I am aware of here:
1) In older kubeadm versions, SELinux blocks access at this point.
2) If you are behind a proxy, you will need to add the usual variables to the kubeadm environment:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Plus the following, which I have not seen documented anywhere:
KUBERNETES_HTTP_PROXY
KUBERNETES_HTTPS_PROXY
KUBERNETES_NO_PROXY