What would happen if I created a svc with an identical name? - kubernetes

I have an svc running, e.g. my-svc-1,
and I run a deployment that creates an svc of the same name, my-svc-1. What would happen?

If a deployment (or anything else) tries to create a second Service with the same name in the same namespace, the API server rejects it with an AlreadyExists error (whereas kubectl apply would update the existing Service instead of creating a new one). Service names must be unique within a namespace but not across namespaces. The same uniqueness rule applies to all namespace-scoped objects, for example Deployments, Services, Secrets, etc.
Namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces.
If you really need the same Service name for a different software version, you can create another namespace, named B for example, and then create a Service named my-svc-1 there.
working-with-objects-namespaces
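A quick sketch of this behaviour (the --tcp ports below are just illustrative):
kubectl create namespace b
kubectl create service clusterip my-svc-1 --tcp=80:8080        # first one in the current namespace: created
kubectl create service clusterip my-svc-1 --tcp=80:8080        # same namespace again: rejected (AlreadyExists)
kubectl create service clusterip my-svc-1 --tcp=80:8080 -n b   # same name in namespace b: created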

Related

Getting my k8s microservice application's non-namespaced dependencies

If I have a microservice app within a namespace, I can easily get all of my namespaced resources within that namespace using the k8s API. I cannot, however, view which non-namespaced resources are being used by the microservice app. If I want to see my non-namespaced resources, I can only see them all at once, with no indication of which ones are dependencies of the microservice app.
How can I find the dependencies related to my application? I'd like to be able to get references to things like PersistentVolumes, StorageClasses, ClusterRoles, etc. that are being used by the app's namespaced resources.
Your code, running in a pod container inside a namespace, runs using a serviceaccount set via pod.spec.serviceAccountName.
If not set, it runs using the default serviceaccount.
You need to create a ClusterRole granting specific verbs on the cluster-wide resources, then assign this ClusterRole to the serviceaccount. Note that for cluster-scoped resources such as PersistentVolumes this has to be done with a ClusterRoleBinding; a RoleBinding targeting the ClusterRole would only grant the permissions inside one namespace.
Then your pod, using a Kubernetes client with the "in-cluster config" auth method, will be able to query the apiserver to get/list/watch/delete/patch... the said cluster-wide resources.
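A minimal sketch of those RBAC objects; all names here are illustrative, not taken from the question:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-resource-reader
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-resource-reader-binding
subjects:
  - kind: ServiceAccount
    name: my-app            # assumed serviceaccount name
    namespace: my-namespace # assumed pod namespace
roleRef:
  kind: ClusterRole
  name: cluster-resource-reader
  apiGroup: rbac.authorization.k8s.io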
This is definitely a non-trivial task because of the many ways such a dependency can come into play: whenever an object "uses" another one, we could identify a dependency. The issue is that this "use" relation can take many forms: e.g., a Pod can reference a Volume in its definition (which would be a sort of direct dependency), but it can also use a PersistentVolumeClaim, which would then instantiate a PV through use of a StorageClass -- and these relations are only known to Kubernetes at run time, when the YAML definitions are applied.
In other words:
To chase dependencies, you would have to inspect the YAML description of resources in use, knowing the semantics of each: there's no single depends: value in each; instead one would need to follow, e.g., the spec.storageClassName of a PVC, the spec.volumes of a Pod, etc.
In some cases even this would not be enough: e.g., to match Services and Pods one would also have to match ports on each side.
All of this would need to be done by extracting YAML from a running K8s cluster, since some relations between resources would not be known until they are instantiated.
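For example, one such chain can be followed by hand with kubectl; the pod, namespace and claim names below are hypothetical:
# Pod -> PVC (spec.volumes) -> StorageClass / PV (spec.storageClassName, spec.volumeName)
kubectl get pod my-pod -n my-ns -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}'
kubectl get pvc my-claim -n my-ns -o jsonpath='{.spec.storageClassName} {.spec.volumeName}'
kubectl get pv <the-volume-name-printed-above> -o yaml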
You could check the How do you visualise dependencies in your Kubernetes YAML files? article by Daniele Polencic, which shows a few tools that can be used to visualize dependencies:
There isn't any static tool that analyses YAML files. But you can visualise your dependencies in the cluster with Weave Scope, KubeView or tracing the traffic with Istio.

requests through the Kubernetes cluster by OPA

I want to use OPA (Open Policy Agent) in Kubernetes but have some questions that are still not clear to me:
Let’s take a look at a specific case together: for instance, there is a pod creation in a namespace, and we can know the namespace from the pod object in OPA. But can we get the namespace object separately, to learn the authority this namespace belongs to?
More explicitly, I mean can we do requests through the Kubernetes cluster by OPA?
For instance, suppose there is a pod creation with the name Test. I want to allow this creation only for an authority called TestAuthority. When the pod is created, we know the namespace data but not the authority. To figure out the authority this pod belongs to, I need to get the namespace object and look at its labels. Can we do so with OPA?
Additionally, can we say allow pod creation with the names of Test1, Test2, and Test3? So, any pod creation with the name of Test4 should be denied.
Thank you in advance for your help
(1) Yes, see https://github.com/open-policy-agent/kube-mgmt#caching or https://github.com/open-policy-agent/gatekeeper#replicating-data depending on which integration you want to use. Both allow replicating objects from Kubernetes into OPA to reference in the policies.
(2) Yes, you can write policies like that (see the sketch after this answer).
Credit: Patrick East, answered on the OPA Slack
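As an illustration of (2), and of how (1) plugs in, a Gatekeeper ConstraintTemplate along these lines could express the allowed pod names. This is a hedged sketch under assumed names and schema, not a tested policy:
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sallowedpodnames
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedPodNames
      validation:
        openAPIV3Schema:
          type: object
          properties:
            names:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedpodnames

        violation[{"msg": msg}] {
          input.review.kind.kind == "Pod"
          name := input.review.object.metadata.name
          not allowed(name)
          msg := sprintf("pod name %v is not in the allow-list", [name])
        }

        allowed(name) { name == input.parameters.names[_] }

        # For (1): with replication enabled (see the gatekeeper link above),
        # the namespace object is available at
        # data.inventory.cluster["v1"]["Namespace"][<name>], so a policy can
        # also inspect its labels (e.g. an assumed "authority" label).
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedPodNames
metadata:
  name: only-test-pods
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    names: ["Test1", "Test2", "Test3"]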

What is the difference between namespaces and contexts in Kubernetes?

I found contexts being specified, like kubectl --context dev --namespace default {other commands}, in many examples. Can I get a clear explanation of the difference between them in a k8s environment?
The Kubernetes concept (and term) context only applies on the client side, i.e. the place where you run the kubectl command, e.g. your command prompt.
The Kubernetes server side doesn't recognise the term 'context'.
As an example, in the command prompt, i.e. as the client:
when calling kubectl get pods -n dev, you're retrieving the list of pods located under the namespace 'dev'.
when calling kubectl get deployments -n dev, you're retrieving the list of deployments located under the namespace 'dev'.
If you know that you're basically targeting only the 'dev' namespace at the moment, then instead of adding "-n dev" all the time to each of your kubectl commands, you can just:
Create a context named 'context-dev'.
Specify the namespace='dev' for this context.
Set the current-context='context-dev'.
This way, your commands above are simplified to the following:
kubectl get pods
kubectl get deployments
You can set different contexts, such as 'context-dev', 'context-staging', etc., whereby each of them targets a different namespace. BTW it's not obligatory to prefix the name with 'context-'. You can just name them 'dev', 'staging', etc.
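Concretely, the steps above map to commands like the following (the cluster and user names are placeholders):
kubectl config set-context context-dev --cluster=my-cluster --user=my-user --namespace=dev
kubectl config use-context context-dev
kubectl config current-context   # -> context-dev
kubectl get pods                 # now implicitly targets the 'dev' namespace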
Just as an analogy where a group of people are talking about films. So somewhere within the conversation the word 'Rocky' was used. Since they're talking about films, it's clear and there's no ambiguity that 'Rocky' here refers to the boxing film 'Rocky' and not about the "bumpy, stony" terrains. It's redundant and unnecessary to mention 'the movie Rocky' each time. Just one word, 'Rocky', is enough. The context is obviously about film.
The same thing with Kubernetes and with the example above. If the context is already set to a certain cluster and namespace, it's redundant and unnecessary to set and / or mention these parameters in each of your commands.
My explanation here is just revolving around namespace, but this is just an example. Other than specifying the namespace, within the context you will actually also specify which cluster you're targeting and the user info used to access the cluster. You can have a look inside the ~/.kube/config file to see what information other than the namespace is associated to each context.
In the sample command in your question above, both the namespace and the context are specified. In this case, kubectl will use whatever configuration values are set within the 'dev' context, but the namespace value specified within this context (if it exists) will be ignored, as it is overridden by the value explicitly set in the command, i.e. 'default'.
Meanwhile, the namespace concept is used on both sides: server and client. It's a logical grouping of Kubernetes objects, just like how we group files inside different folders in operating systems.
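A trimmed, illustrative ~/.kube/config showing how a context ties the cluster, user and namespace together (all values are placeholders):
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://my-cluster.example.com:6443
contexts:
  - name: context-dev
    context:
      cluster: my-cluster
      user: my-user
      namespace: dev
current-context: context-dev
users:
  - name: my-user
    user:
      token: <redacted>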
You use multiple contexts to target multiple different Kubernetes clusters. You can quickly switch between clusters by using the kubectl config use-context command.
Namespaces are a way to divide cluster resources between multiple users (via resource quota). Namespaces are intended for use in environments with many users spread across multiple teams or projects.
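For example, a per-namespace ResourceQuota could look like this (a sketch; the names and limits are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi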
A context in Kubernetes is a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl: all kubectl commands run against that cluster. Each of the contexts that has been used will be available in your kubeconfig file.
Meanwhile, a namespace is a way to support multiple virtual clusters within the same physical cluster. This is usually related to resource quotas as well as RBAC management.
A context is the connection to a specific cluster (username/apiserver host) used by kubectl. You can manage multiple clusters that way.
Namespace is a logical partition inside a specific cluster to manage resources and constraints.

Should I use kubernetes default namespace?

We are going to provide customers a function by deploying and running a container in the customer's Kubernetes environment. After the job is done, we will clean up the container. Currently, the plan is to use the k8s default namespace, but I'm not sure whether that would be a concern for customers. I don't have much experience in the k8s field. Should we give customers an option to specify a namespace to run the container in, or just use the default namespace? I appreciate your suggestions!
I would recommend you not use (!?) the default namespace for anything ever.
The following is more visceral than objective but it's drawn from many years' experience of Kubernetes. In 2016, a now former colleague and I blogged about the use of namespaces:
https://kubernetes.io/blog/2016/08/kubernetes-namespaces-use-cases-insights/
NB since then, RBAC was added and it permits enforcing separation, securely.
Although it exists as a named (default) namespace, it behaves as if there is (the cluster has) no namespace. It may be (!?) that it was retcon'd into Kubernetes after namespaces were added.
Unless your context is defined to be a specific other namespace, kubectl ... behaves as kubectl ... --namespace=default. So, by accident it's easy to pollute and be impacted by pollution in this namespace. I'm sure your team will use code for your infrastructure but mistakes happen and "I forgot to specify the namespace" is easily done (and rarely wanted).
Using non-default namespaces becomes very intentional, explicit and, I think, precise. You must, for example (per david-maze's answer), be more intentional about RBAC for the namespace's resources.
Using namespaces is a mechanism that promotes multi-tenancy which is desired for separation of customers (business units, versions etc.)
You can't delete the default namespace, but you can delete (and as a consequence delete all the resources contained in) any non-default namespace.
I'll think of more, I'm sure!
Update
Corollary: generally don't pin resources to a namespace in their specs; instead use e.g. kubectl apply --filename=x.yaml --namespace=${NAMESPACE}
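For instance (a sketch; the namespace is a placeholder, and x.yaml is assumed to omit metadata.namespace so the same file can be applied to any namespace chosen at deploy time):
NAMESPACE=customer-a
kubectl create namespace "${NAMESPACE}"
kubectl apply --filename=x.yaml --namespace="${NAMESPACE}"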
I'd consider the namespace name pretty much a required option. I would default to the namespace name specified in the .kube/config file, if that's at all a choice for you. (That may not be default.)
RBAC rules or organizational policies also might mean the default namespace can't or shouldn't be used. One of the clusters I work with is a shared cluster where each user has their own namespace, enforced by RBAC policies; except for cluster admins, nobody gets to use default, and everybody needs to be able to configure the namespace to run in their own.

Kubernetes RBAC: How to allow exec only to a specific Pod created by Deployment

I have an application namespace with 30 services. Most are stateless Deployments, mixed with some StatefulSets etc. Fairly standard stuff, that is.
I need to grant a special user a Role that can only exec into a certain Pod. Currently RBAC grants the exec right to all pods in the namespace, but I need to tighten it down.
The problem is that the Pod(s) are created by a Deployment configurator, and the Pod name(s) are thus "generated": configurator-xxxxx-yyyyyy. You cannot use a glob (i.e. configurator-*), and a Role cannot grant exec for Deployments directly.
So far I've thought about:
Converting the Deployment into a StatefulSet or a plain Pod, so the Pod would have a known, non-generated name and a glob wouldn't be needed
Moving the Deployment into a separate namespace, so the namespace-wide exec right is not a problem
Both of these work, but neither is optimal. Is there a way to write a proper Role for this?
RBAC, as it stands now, doesn't allow filtering resources by attributes other than namespace and resource name. The discussion is open here.
Thus, namespaces are the smallest unit for authorizing access to pods with generated names. Services should be separated into namespaces based on which users need access to them.
The optimal solution right now is to move this Deployment to another namespace, since it needs different access rules than the other deployments in the original namespace.
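That said, if you go with the first idea from the question (a StatefulSet gives stable pod names such as configurator-0), a Role pinned with resourceNames becomes possible. A hedged sketch, with the namespace and names being illustrative:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: exec-configurator-only
  namespace: my-app
rules:
  # kubectl exec needs both "get" on the pod and "create" on pods/exec;
  # resourceNames works here because the pod name is known and stable.
  - apiGroups: [""]
    resources: ["pods"]
    resourceNames: ["configurator-0"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    resourceNames: ["configurator-0"]
    verbs: ["create"]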