Kubernetes: accessing resources across namespaces

I have multiple teams, and each team has a bunch of applications. Each application has different environments (DEV, STAGE, PROD). I am looking for a way to limit access using namespaces.
Say each team has its own namespace.
I don't want an application deployed in namespace A to access ConfigMaps from namespace B. But I do want applications deployed in namespace A to access REST applications deployed in namespace B (either through Ingress or Services).
Also, DEV applications should not have visibility into STAGE applications.
However, there are a few applications that serve requests for both DEV and STAGE based on a tenantId in the request header.
What is the recommended way to structure namespaces here?
Thanks

Namespaces separate resources in the cluster by default. If you create namespaces A and B, a ConfigMap created in namespace A is automatically unavailable to Pods in namespace B. Services, by contrast, stay reachable across namespaces through cluster DNS (for example, my-service.b.svc.cluster.local), which covers your requirement that applications in namespace A can call REST applications in namespace B.
If you want to restrict what users can do in your cluster, for example letting developers create resources in development but only view things in staging or production, I would take a look at RBAC.
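For example, a minimal Role/RoleBinding pair that gives a developers group read-only access in a staging namespace could look like this (the group name, namespace, and resource lists are assumptions for illustration):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: read-only
      namespace: staging      # hypothetical namespace
    rules:
      - apiGroups: ["", "apps"]
        resources: ["pods", "services", "configmaps", "deployments"]
        verbs: ["get", "list", "watch"]   # view-only verbs
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: developers-read-only
      namespace: staging
    subjects:
      - kind: Group
        name: developers      # hypothetical group from your auth provider
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: read-only
      apiGroup: rbac.authorization.k8s.io

In the development namespace you would bind the same group to a more permissive Role (or the built-in edit ClusterRole) instead.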
If you want to separate traffic to and from applications at the network layer, I would suggest taking a look at NetworkPolicies. For those to take effect you need a network plugin that enforces them, for example Project Calico.
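As a sketch, a NetworkPolicy in namespace B that blocks ingress from everywhere except namespace A could look like this; it assumes the namespaces carry the kubernetes.io/metadata.name label, which recent Kubernetes versions set automatically:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-namespace-a
      namespace: b
    spec:
      podSelector: {}          # applies to every pod in namespace b
      policyTypes:
        - Ingress              # once selected, all other ingress is denied
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: a

The same pattern can keep DEV namespaces from reaching STAGE namespaces, while the shared tenantId-routing applications get policies that admit traffic from both.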

Related

Application Load Balancers in an EKS cluster

I'm trying to figure out ways to automate k8s deployments in an EKS cluster. I'm trying to set up namespaces for each specific environment: one for dev, one for staging, and one for production. My production namespace is in a separate region and also in a separate cluster (dev & staging are in one cluster). I'm a little new to this concept, but does it make sense to have each application load balancer in its respective namespace? Is that common or best practice? Any ideas on automating deployments would be appreciated.
Hi Dave Michaels,
I assume there are two questions in your post above:
If we use a dedicated namespace per environment in the same cluster (dev & staging setup), can we use a dedicated load balancer for each of these namespaces? Is this good practice?
Answer: Yes. As you are using the namespace concept for each environment in the same cluster, it is OK to create a dedicated load balancer (promise me you will use Ingress :)) in each of these namespaces, as you need an easy way to access those environments. To be frank, I am not a fan of using namespaces for environments, because as your cluster grows and lots of microservices get added to it, you might want to use namespaces for another reason, e.g. a namespace per team or domain to get granular access rights. But I have seen teams use them for different environments successfully as well.
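As an illustration, one Ingress per environment namespace, each provisioning its own ALB through the AWS Load Balancer Controller, could look roughly like this (the hostname, service name, and controller setup are assumptions about your environment):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      namespace: dev                      # repeat per environment namespace
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
    spec:
      ingressClassName: alb               # handled by the AWS Load Balancer Controller
      rules:
        - host: dev.myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80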
Can you suggest possibilities for automated Kubernetes deployments?
This is a large topic by itself.
As your microservices grow, you will have multiple Kubernetes manifests to handle. The first thing I suggest is to use either a configuration manager like Kustomize or a package manager like Helm to separate variables from the actual manifests; this makes it easy to automate deployments across environments (same cluster or different clusters). As for the actual deployment automation, if there is no existing CD in place, I would suggest exploring tools that natively support Kubernetes and GitOps, like FluxCD or ArgoCD.
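For example, with Helm the per-environment differences live in small values files while the chart itself stays identical (file names and keys below are illustrative):

    # values-staging.yaml
    replicaCount: 2
    image:
      tag: "1.4.2"

    # values-production.yaml
    replicaCount: 6
    image:
      tag: "1.4.0"

Deploying then becomes one command per environment, e.g. helm upgrade --install myapp ./myapp-chart -f values-staging.yaml --namespace staging, which a CD pipeline or GitOps tool can apply for you from the repository.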

Using a Google service account keyfile in a Kubernetes serviceaccount as a testing environment replacement for GKE workload identity

I have a GKE app that uses Kubernetes ServiceAccounts linked to Google service accounts for API authorization in-app.
Up until now, to test these locally, I had two versions of my images: one with and one without a test-keyfile.json copied into them for authorization. (The production images used the ServiceAccount for authorization; the test environment would ignore the ServiceAccounts and instead look for a keyfile which gets copied in during the image build.)
I was wondering if there was a way to merge the images into one and have both prod and test use the Kubernetes ServiceAccount for authorization: on production, use GKE's Workload Identity, and in testing, use a keyfile linked with or injected into a Kubernetes ServiceAccount.
Is such a thing possible? Is there a better method for emulating GKE workload identity on a local test environment?
I do not know a way of emulating Workload Identity on a non-Google Kubernetes cluster, but you could change your app to read the auth credentials from a volume/file or from the metadata server, depending on an environment setting. See this article (and particularly the code linked there) for an example of how to authenticate using local credentials or a Google SA depending on environment variables. The article also shows how to use Pod overlays to keep the prod vs. dev changes separate from the bulk of the configuration.
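One common pattern for the test side is to mount the keyfile from a Secret and point GOOGLE_APPLICATION_CREDENTIALS at it; the Google client libraries read that variable, and in production you simply omit it so Workload Identity takes over via the metadata server. A minimal sketch, with hypothetical names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-test
    spec:
      serviceAccountName: myapp           # same KSA the prod pods use
      containers:
        - name: app
          image: myapp:latest             # single image for prod and test
          env:
            # Only set in the test overlay; omitted in production.
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
          volumeMounts:
            - name: google-key
              mountPath: /var/secrets/google
              readOnly: true
      volumes:
        - name: google-key
          secret:
            secretName: test-keyfile      # created from test-keyfile.json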

Kubernetes: How to manage multiple separate deployments of the same app

We're migrating our app's deployments from VMs to Kubernetes, and as my knowledge of Kubernetes is very limited, I'm at a loss for how to set up deployments for multiple clients.
Right now we have a separate VM for each client, but how do we separate the clients in Kubernetes in a way that is cost- and resource-efficient and easy to manage?
I managed to create dev and staging environments using namespaces and this is working great.
To update dev and staging deployment I just use kubectl apply -f <file> --namespace staging.
Now I need to deploy the app to production for several clients (50+). They should be completely separate from each other (using separate environment variables and secrets) while the code should be the same. And I don't know the best way to achieve that.
Could you please give me a hint about the right way to do that in Kubernetes?
You can use Kustomize. It provides a purely declarative approach to configuration customization and can manage an arbitrary number of distinctly customized Kubernetes configurations:
https://github.com/kubernetes-sigs/kustomize/tree/master/examples
one namespace (or a set of namespaces) per customer
Kustomize has a very good pattern system to handle the generic configuration plus several adaptations per client (see the sketch after this list)
use NetworkPolicies to isolate the network between clients
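A minimal sketch of that layout, with hypothetical names, keeps the shared manifests in a base and one small overlay per client:

    # directory layout
    base/
      deployment.yaml
      service.yaml
      kustomization.yaml
    overlays/
      client-a/
        kustomization.yaml

    # overlays/client-a/kustomization.yaml
    namespace: client-a            # one namespace per customer
    resources:
      - ../../base
    configMapGenerator:
      - name: app-config
        literals:
          - TENANT_ID=client-a     # per-client variables live only here

Each client is then deployed with kubectl apply -k overlays/client-a, so adding client number 51 is one new overlay directory rather than a full copy of the manifests.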

Kubernetes, Automatic Service fallback to another namespace

I have multiple environments represented by multiple namespaces in my Kubernetes cluster.
Every application has its Service endpoints defined in each namespace.
We have three environments: dev, alpha, and beta (equivalent to dev, test, and stage). These environments are permanent, which means all the applications are running there.
Now in my team there are a few parallel development streams happening, for which we are planning to create multiple environments per release, each containing only the few applications that are part of that release.
Let's take this example: I am building feature1, which impacts app1 and app2.
There are 10 other apps that are not impacted.
So for my development and parallel testing, the majority of services have to point to the existing alpha or beta environment, and only app1 and app2 live in the new namespace.
I could achieve this by creating an ExternalName mapping for all the other services.
But with more than 100 services, managing those external endpoints in YAML feels very difficult.
Is there any way to route all traffic to another namespace if no Service with that name exists in the current one?
Is there a way to set a global ExternalName for a whole namespace?
As far as I know, it is not possible to redirect traffic to a different namespace based on which Pods or Services exist in the current namespace. You can select a destination for a Service only by changing its YAML configuration, and its selector can only match Pods in the same namespace.
You can simplify the deployment procedure by using Helm charts. Helm allows you to put variables into the YAML configuration of Deployments, Services, etc., and use a separate values file to substitute them during installation to the cluster. Here is a link to a blog post on Using Helm to deploy to Kubernetes.
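For reference, the per-service ExternalName mapping from the question looks like this; with Helm you could template one of these for every service that is not part of the release, instead of maintaining 100+ by hand (names are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: app3                   # same name the callers already use
      namespace: feature1          # the release namespace
    spec:
      type: ExternalName
      # resolves to the permanent environment's service
      externalName: app3.alpha.svc.cluster.local

A values file listing the passthrough services is then the only thing that changes per release.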

Configurations for a shared kubernetes cluster for multiple projects

I'm looking for a way of managing multiple projects with a common tech stack: nginx, php-fpm, mysql.
It will be a managed service provided by my company. This means customers won't deal with the cluster internals. Customers choose a plan so they can have more or less resources reserved. Think of it like a service like wordpress.com or ghost.io.
When a customer comes, we reserve a set of nodes for them. The goal is that any customer can use unused resources from other customers.
First attempt: namespaces per customer
namespace customer1:
    nginx deploy and service
    php-fpm deploy and service
    mysql deploy and service
namespace customer2:
    nginx deploy and service
    php-fpm deploy and service
    mysql deploy and service
But I think this division is too rigid to share unused resources.
Second attempt: shared namespace, custom names per resource
namespace hive:
    customer1.nginx deploy and service
    customer1.php-fpm deploy and service
    customer1.mysql deploy and service
    customer2.nginx deploy and service
    customer2.php-fpm deploy and service
    customer2.mysql deploy and service
It looks better to me, but I think resources are still too tightly coupled to the customer.
The only things that define a project are a domain, a source code directory, and a database (I'll deal with logs and other stuff later).
Are there any other approaches to think of the cluster as a kind of "compute fog"?
I think namespaces are the right tool for what you want. All namespaces can share the same physical resources; unless you reserve nodes for each namespace, there is no rigid resource division between namespaces by default.
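If you want per-plan caps without reserving nodes, a common approach is a ResourceQuota per customer namespace; a minimal sketch, with made-up plan sizes:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: plan-small
      namespace: customer1
    spec:
      hard:
        requests.cpu: "2"        # cap on total CPU requested in this namespace
        requests.memory: 4Gi
        limits.cpu: "4"          # cap on total CPU limits (burst ceiling)
        limits.memory: 8Gi

Because Pods can set limits above their requests, one customer can still burst into capacity another customer is not currently using, which matches your goal of sharing unused resources.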