Context
I am trying to use OpenCensus and Linkerd.
Although Linkerd has an option to automatically provision OpenCensus and Jaeger in its own namespace, I don't want to use that. Instead, I deployed them independently myself, in a namespace named 'ops'.
Questions
Whether the OpenCensus collector should be injected by Linkerd.
Near the end (exactly the 4th line from the last) of the official docs, it says:
Ensure the OpenCensus collector is injected with the Linkerd proxy.
What does this mean?
Should I inject linkerd sidecar into OpenCensus collector pod?
If so, why?
Should I suffix the serviceaccount name with the namespace?
For example, let's say I've configured the default namespace like this.
apiVersion: v1
kind: Namespace
metadata:
  name: default
  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/trace-collector: my-opencensus-collector.ops:12345
    config.alpha.linkerd.io/trace-collector-service-account: my-opencensus-collector-service-account
my-opencensus-collector is in the ops namespace, so I put .ops at the end of its service name, resulting in my-opencensus-collector.ops:12345.
The dedicated service account for the OpenCensus collector also exists in the ops namespace. In this case, should I put the namespace name at the end of the service account name as well?
Which one would be right?
config.alpha.linkerd.io/trace-collector-service-account: my-opencensus-collector-service-account
or
config.alpha.linkerd.io/trace-collector-service-account: my-opencensus-collector-service-account.ops
Thanks!
Whether the OpenCensus collector should be injected by Linkerd.
Yes, the OpenCensus collector should be injected with the Linkerd proxy, because the proxies themselves send the span info using mTLS. With mTLS, the sending (client) and receiving (server) sides of a request present certificates to each other to verify their identities, in a way that validates that each identity was issued by the same trusted source.
The Linkerd service mesh is made up of the control plane and the data plane. The control plane is a set of services that run within the cluster to implement the features of the service mesh. Mutual TLS (mTLS) is one of those features and is implemented by the linkerd-identity component of the control plane.
The data plane comprises any number of Linkerd proxies, which are injected into the application's services, like the OpenCensus collector. Whenever a proxy starts within a pod, it sends a certificate signing request to the linkerd-identity component and receives a certificate in return.
So, when the Linkerd proxies in the data plane send spans to the collector, they authenticate themselves with those certificates, which must be verified by the proxy injected into the OpenCensus collector Pod. This ensures that all traffic, even distributed traces, is sent securely within the cluster.
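For example, here is a minimal sketch (using the hypothetical names and port from the question; the collector image is a placeholder) of the collector Deployment with the injection annotation on its pod template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-opencensus-collector            # hypothetical name from the question
  namespace: ops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-opencensus-collector
  template:
    metadata:
      labels:
        app: my-opencensus-collector
      annotations:
        linkerd.io/inject: enabled          # injects the Linkerd proxy sidecar
    spec:
      serviceAccountName: my-opencensus-collector-service-account
      containers:
      - name: collector
        image: <your OpenCensus collector image>
        ports:
        - containerPort: 12345              # the port used in the trace-collector annotation

With the proxy injected, the collector side can terminate the mTLS connections over which the other proxies export their spans.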
Should I suffix the serviceaccount name with the namespace?
In your case, you should suffix the service account with the namespace. By default, Linkerd will use the Pod's namespace, so if the service account doesn't exist in the Pod's namespace, then the configuration will be invalid. The injection logic has a function that checks for a namespace in the collector address and appends it, if present:
// ammendSvcAccount derives the collector's namespace from the
// trace-collector address (e.g. "my-opencensus-collector.ops:12345"),
// falling back to the Pod namespace, and appends it to the
// collector service account name.
func ammendSvcAccount(ns string, params *Params) {
    hostAndPort := strings.Split(params.CollectorSvcAddr, ":")
    hostname := strings.Split(hostAndPort[0], ".")
    if len(hostname) > 1 {
        ns = hostname[1] // namespace taken from the collector address
    }
    params.CollectorSvcAccount = fmt.Sprintf("%s.%s", params.CollectorSvcAccount, ns)
}
So, this one is correct:
config.alpha.linkerd.io/trace-collector-service-account: my-opencensus-collector-service-account.ops
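Putting this together, a sketch of the namespace annotations with that value (using the hypothetical names from the question):

apiVersion: v1
kind: Namespace
metadata:
  name: default
  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/trace-collector: my-opencensus-collector.ops:12345
    config.alpha.linkerd.io/trace-collector-service-account: my-opencensus-collector-service-account.ops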
Related
I have set these environment variables inside my pod named main_pod.
$ env
HTTP_PROXY=http://myproxy.com
http_proxy=http://myproxy.com
I also have dynamic pods named in the pattern sub_pod-{number}, each with a Service attached to it that is also called sub_pod-{number}.
So, if I add the NO_PROXY=sub_pod-1 environment variable to main_pod, a request to http://sub_pod-1:5000/health_check succeeds because it is not directed through the proxy, which is fine.
But I want this to be dynamic: sub_pod-45 might spawn at runtime and sub_pod-1 might get destroyed. Is there any better way to handle this than updating NO_PROXY on every pod creation / destruction?
Is there any resource / network policy / egress rule with which I can tell the pod: if the domain name belongs to a Kubernetes service, do not route it through the proxy server?
Or can I simply use regex or glob patterns in the NO_PROXY env variable, like NO_PROXY=sub_pod-*?
Edited
Result of nslookup
root@tmp-shell:/# nslookup sub_pod-1
Server: 10.43.0.10
Address: 10.43.0.10#53
Name: sub_pod-1.default.svc.cluster.local
Address: 10.43.22.139
When no_proxy=cluster.local,
Proxy bypassed when requested with FQDN
res = requests.get('http://sub_pod-1.default.svc.cluster.local:5000')
Proxy was not bypassed when requested with the service name only
res = requests.get('http://sub_pod-1:5000') # I want this to work
I would rather not ask my developers to change the application to use the FQDN.
Is there any way the cluster can identify that a URL resolves to a Service within the cluster and, if so, not route the request through the proxy?
Libraries that support the http_proxy environment variable generally also support a matching no_proxy that names things that shouldn't be proxied. The exact syntax seems to vary across languages and libraries but it does seem to be universal that setting no_proxy=example.com causes anything.example.com to not be proxied either.
This is relevant because the Kubernetes DNS system creates its names in a domain based on the cluster name, by default cluster.local. The canonical form of a Service DNS name, for example, is service-name.namespace-name.svc.cluster.local., where service-name and namespace-name are the names of the corresponding Kubernetes objects.
I suspect this means it would work to do two things:
Set an environment variable no_proxy=cluster.local; and
Make sure to use the FQDN form when calling other services, service.namespace.svc.cluster.local.
Pods have similar naming, but are in a pod.cluster.local subdomain. The cluster.local value is configurable at a cluster level and it may be different in your environment.
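As a minimal sketch (pod, image, and proxy values are the hypothetical ones from the question, with main_pod renamed to a valid DNS label), the environment could be set like this, with calls then using the FQDN form such as http://sub_pod-1.default.svc.cluster.local:5000:

apiVersion: v1
kind: Pod
metadata:
  name: main-pod
spec:
  containers:
  - name: app
    image: <your application image>
    env:
    - name: HTTP_PROXY
      value: http://myproxy.com
    - name: http_proxy
      value: http://myproxy.com
    - name: NO_PROXY
      value: cluster.local            # any *.cluster.local name bypasses the proxy
    - name: no_proxy
      value: cluster.local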
If I have a microservice app within a namespace, I can easily get all of my namespaced resources within that namespace using the k8s API. I cannot, however, view which non-namespaced resources are being used by the microservice app. If I want to see my non-namespaced resources, I can only see them all at once, with no indication of which ones are dependencies of the microservice app.
How can I find the dependencies related to my application? I'd like to be able to get references to things like PersistentVolumes, StorageClasses, ClusterRoles, etc. that are being used by the app's namespaced resources.
Your code, running in a pod container inside a namespace, runs using a serviceaccount set using pod.spec.serviceAccountName.
If not set, it'll run using the default serviceaccount.
You need to create a ClusterRole granting the desired verbs on the cluster-wide resources, then bind that ClusterRole to the serviceaccount. Note that a RoleBinding only grants access within its own namespace, so for cluster-scoped resources such as PersistentVolumes you need a ClusterRoleBinding that references the ClusterRole and the serviceaccount in the pod's namespace.
Then your pod, using a Kubernetes client with the "in-cluster config" auth method, will be able to query the apiserver to get/list/watch/delete/patch... those cluster-wide resources.
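A minimal sketch (all names are hypothetical) of the RBAC objects for read-only access to a few cluster-wide resource types:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-resource-reader
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-app-cluster-resource-reader
subjects:
- kind: ServiceAccount
  name: my-app-sa                     # the pod.spec.serviceAccountName
  namespace: my-app-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-resource-reader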
This is definitely a non-trivial task, because of the many ways such a dependency can come into play: whenever an object "uses" another one, we could identify a dependency. The issue is that this "use" relation can take many forms: e.g., a Pod can reference a Volume in its definition (which would be a sort of direct dependency), but can also use a PersistentVolumeClaim which would then instantiate a PV through use of a StorageClass, and these relations are only known to Kubernetes at run time, when the YAML definitions are applied.
In other words:
To chase dependencies, you would have to inspect the YAML description of the resources in use, knowing the semantics of each: there's no single depends: value in each, so one would need to follow e.g. the spec.storageClassName of a PVC, the spec.volumes: of a Pod, etc.
In some cases even that would not be enough: e.g., to match Services and Pods one would also have to match the ports on each side.
All of this would need to be done by extracting YAML from a running K8s cluster, since some relations between resources would not be known until they are instantiated.
You could check the How do you visualise dependencies in your Kubernetes YAML files? article by Daniele Polencic, which shows a few tools that can be used to visualize dependencies:
There isn't any static tool that analyses YAML files. But you can visualise your dependencies in the cluster with Weave Scope, KubeView or tracing the traffic with Istio.
How do I prevent a user from spawning pods in a namespace using serviceaccounts that have high privileges, while still allowing them to create namespaces?
For example, I have a cluster with Velero in a velero namespace. I want to prevent the user from creating pods with the velero serviceaccount, so that they cannot create privileged pods. But I want the user to be able to create namespaces and use serviceaccounts with a restricted PSP.
In my opinion the idiomatic way of enforcing this in Kubernetes is by creating a dynamic validating admission controller.
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/ https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook
I know it could sound a bit complex, but trust me, it's really simple. Eventually, an admission controller is simply a webhook endpoint (a piece of code) which can change and/or enforce a certain state on created objects.
So in your case: create a dynamic validating webhook and simply disallow the creation of pods that do not match your restrictions, with a corresponding relevant error message.
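A minimal sketch (the names, namespace, and path are hypothetical) of such a webhook registration, which sends every pod CREATE to your webhook service for validation:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-serviceaccount-policy
webhooks:
- name: pod-serviceaccount-policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: policy               # namespace where your webhook code runs
      name: pod-policy-webhook        # the Service in front of your webhook code
      path: /validate
    caBundle: <base64-encoded CA certificate>

The webhook code itself would then return a denial for any pod whose serviceAccountName is one of the privileged accounts, such as Velero's.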
First of all, the service account used by Velero is in the velero namespace. So if the user doesn't have RBAC permissions to do anything in the velero namespace, they will not be able to use the service account used by Velero. You should define RBAC for users in such a way that they can only CRUD resources in the intended namespaces and cannot CRUD resources in other namespaces. When I say resources, that also includes service accounts.
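A minimal sketch (names are hypothetical) of such a namespaced grant, which gives a user broad rights only inside their own namespace and therefore no access to the serviceaccount in the velero namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: namespace-admin
  namespace: team-a                   # the namespace the user is allowed to work in
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-namespace-admin
  namespace: team-a
subjects:
- kind: User
  name: alice                         # the restricted user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: namespace-admin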
I am new to Kubernetes. I was going through some tutorials related to Kubernetes deployments, and I am seeing two different commands which look like they do similar things.
The command below is from a Google codelab (URL: https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..index#7 )
$ kubectl create service loadbalancer hello-java --tcp=8080:8080
Another command appears in a different place, on the Kubernetes site (https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/)
$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
As per my understanding, both commands create services from deployments with a LoadBalancer and expose them to the outside world.
I don't think there would be two separate commands for the same task; there should be some difference that I am not able to understand.
Would anyone please clarify this to me?
There are cases where the expose command is not sufficient and your only practical option is to use create service.
Overall there are 4 different types of Kubernetes services; for some it really doesn't matter whether you use expose or create, while for others it matters very much.
The types of Kubernetes services are:
ClusterIP
NodePort
LoadBalancer
ExternalName
So, for example, in the case of the NodePort type, let's say we wanted to set a node port with the value 31888:
Example 1:
In the following command there is no argument for the node port value; the expose command assigns it automatically:
kubectl expose deployment demo --name=demo --type=NodePort --port=8080 --target-port=80
The only way to set the node port value is after creation, by using the edit command to update it: kubectl edit service demo
Example 2:
In this example the create service nodeport command is dedicated to creating a NodePort type service and has arguments that let us control the node port value:
kubectl create service nodeport demo --tcp=8080:80 --node-port=31888
In Example 2 the node port value is set on the command line, so there is no need to manually edit the value as in Example 1.
Important:
The create service [service-name] command does not have an option to set the service's selector, so the service won't automatically connect to existing pods.
To set the selector labels to target specific pods, you will need to follow up the create service [service-name] command with the set selector command:
kubectl set selector service [NAME] [key1]=[value1]
So for Example 2 above, if you want the service to work with a deployment whose pods are labeled myapp: hello, this is the follow-up command needed:
kubectl set selector service demo myapp=hello
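For reference, a sketch of the Service that should result after those two commands (values follow Example 2):

apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    myapp: hello                      # added by kubectl set selector
  ports:
  - port: 8080                        # service port from --tcp=8080:80
    targetPort: 80                    # container port from --tcp=8080:80
    nodePort: 31888                   # fixed with --node-port=31888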
The main differences can be seen from the docs.
1.- kubectl create command
Create a resource from a file or from stdin.
JSON and YAML formats are accepted.
2.- kubectl expose command
Expose a resource as a new Kubernetes service.
Looks up a deployment, service, replica set, replication controller or
pod by name and uses the selector for that resource as the selector
for a new service on the specified port. [...]
Even though both achieve the same thing in the examples you provided, the create command is the more general one: with it you can create any resource, either from the command line or from a YAML/JSON file. The expose command, however, will only create a Service resource, and it's mainly used to expose other, already existing resources.
Source: K8s Docs
I hope this helps a little. The key here is to understand the difference between Services and Deployments. As per link [1], a Deployment deals with the mortality of Pods automatically. However, if a Pod is terminated and another is spun up, how do the Pods continue to communicate when their IPs change? They use Services: "a Service is an abstraction which defines a logical set of Pods and a policy by which to access them". Additionally, it may be of interest to view link [2], as it describes how the kubectl expose command creates a Service, which in turn creates an external IP and a load balancer. As a beginner, it may help to review the command language used with Kubernetes: link [3] describes (as mentioned in another answer) that the kubectl create command is used to be more specific about the objects it creates, and with the create command you can create a larger variety of objects.
[1]: Service: https://kubernetes.io/docs/concepts/services-networking/service/
[2]: Deploying a containerized web application: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#step_6_expose_your_application_to_the_internet
[3]: How to create objects: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/#how-to-create-objects
From my understanding, approach 1 (using create service) just creates the Service object, and since no label selector is specified it does not have any underlying target pods. In approach 2 (using expose deployment), the Service load balances across all the pods created by the deployment, because the Service is automatically given the required label selector.
I am spinning up a kubernetes job as a helm pre-install hook on GKE.
The job uses the google/cloud-sdk image and I want it to create a Compute Engine persistent disk.
Here is its spec:
spec:
  restartPolicy: OnFailure
  containers:
  - name: create-db-hook-container
    image: google/cloud-sdk:latest
    command: ["gcloud"]
    args: ["compute", "disks", "create", "--size={{ .Values.volumeMounts.gceDiskSize }}", "--zone={{ .Values.volumeMounts.gceDiskZone }}", "{{ .Values.volumeMounts.gceDiskName }}"]
However this fails with the following error:
brazen-lobster-create-pd-hook-nc2v9 create-db-hook-container ERROR:
(gcloud.compute.disks.create) Could not fetch resource: brazen-lobster-create-pd-hook-nc2v9
create-db-hook-container
- Insufficient Permission: Request had insufficient authentication scopes.
brazen-lobster-create-pd-hook-nc2v9 create-db-hook-container
Apparently I have to grant the gcloud.compute.disks.create permission.
My question is: to whom do I have to grant this permission?
This is a GCP IAM permission, therefore I assume it cannot be granted on a specific k8s resource (?) and so it cannot be dealt with in the context of k8s RBAC, right?
Edit: I have created a ComputeDiskCreate custom role that encompasses two permissions:
gcloud.compute.disks.create
gcloud.compute.disks.list
I have attached it to the service account service-2340842080428@container-engine-robot.iam.gserviceaccount.com, which the IAM page of my Google Cloud console labels "Kubernetes Engine Service Agent", but the outcome is still the same.
In GKE, all nodes in a cluster are actually Compute Engine VM instances. They're assigned a service account at creation time to authenticate them to other services. You can check the service account assigned to nodes by checking the corresponding node pool.
By default, GKE nodes are assigned the Compute Engine default service account, which looks like PROJECT_NUMBER-compute@developer.gserviceaccount.com, unless you set a different one at cluster/node pool creation time.
Calls to other Google services (like the compute.disks.create endpoint in this case) will come from the node and be authenticated with the corresponding service account credentials.
You should therefore grant the compute.disks.create permission to your nodes' service account (likely PROJECT_NUMBER-compute@developer.gserviceaccount.com), for example via a role that includes it, on your project's IAM page.
EDIT: Prior to any authentication, the mere ability of a node to reach a given Google service is defined by its access scopes. These are defined at node pool creation time and can't be edited afterwards. You'll need to create a new node pool and ensure you grant it the https://www.googleapis.com/auth/compute access scope so it can call Compute Engine methods. You can then instruct your particular pod to run on those specific nodes.
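Once such a node pool exists, here is a sketch of how the hook's pod spec could be pinned to it (the pool name is hypothetical; GKE labels every node with cloud.google.com/gke-nodepool):

spec:
  restartPolicy: OnFailure
  nodeSelector:
    cloud.google.com/gke-nodepool: compute-scope-pool   # the pool created with the compute scope
  containers:
  - name: create-db-hook-container
    image: google/cloud-sdk:latest
    command: ["gcloud"]
    args: ["compute", "disks", "create", "--size={{ .Values.volumeMounts.gceDiskSize }}", "--zone={{ .Values.volumeMounts.gceDiskZone }}", "{{ .Values.volumeMounts.gceDiskName }}"]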