How to expose cluster+project values to container in GKE (or current-context in k8s) - kubernetes

My container code needs to know in which environment it is running on GKE, more specifically which cluster and project. In standard kubernetes this could be retrieved from the current-context value (gke_<project>_<cluster>).
Kubernetes has a downward API that can push pod info to containers - see https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/ - but unfortunately nothing from "higher" entities.
Any thoughts on how this can be achieved?
Obviously I do not want to explicitly push any info at deployment time (e.g. as an env entry in the ConfigMap). I would rather deploy using a generic/common YAML and have the code retrieve the info at runtime from an env variable or file and branch accordingly.

You can query the GKE metadata server from within your code. In your case, you'd want to query the /computeMetadata/v1/instance/attributes/cluster-name and /computeMetadata/v1/project/project-id endpoints to get the cluster and project. The client libraries for each supported language all have simple wrappers for accessing the metadata API as well.
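For example, a minimal sketch in Go using plain HTTP (no client library), assuming the code runs on GKE where metadata.google.internal resolves to the metadata server:

package main

import (
    "fmt"
    "io"
    "net/http"
)

// metadataValue fetches a single value from the GCE/GKE metadata server.
// The Metadata-Flavor header is mandatory; the server rejects requests without it.
func metadataValue(path string) (string, error) {
    req, err := http.NewRequest("GET", "http://metadata.google.internal/computeMetadata/v1/"+path, nil)
    if err != nil {
        return "", err
    }
    req.Header.Set("Metadata-Flavor", "Google")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    return string(body), err
}

func main() {
    cluster, _ := metadataValue("instance/attributes/cluster-name")
    project, _ := metadataValue("project/project-id")
    fmt.Printf("cluster=%s project=%s\n", cluster, project)
}

The cloud.google.com/go/compute/metadata package wraps these same endpoints if you'd rather not issue the HTTP calls yourself.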

Related

Skaffold dev stream logs of pods created by helm hooks

I would like to see the output from my pre-install/post-install helm hooks when using skaffold dev, but this does not seem to work.
Which filters does skaffold use to get all the pods for log tailing? Is there a way to force skaffold to pick up the hooks by applying some labels (e.g. skaffold.dev/run-id: static)?
Context
When doing dev with local Docker, image building is pretty fast, so for some use cases there is no need for file sync or special dev-mode container images with file watching inside.
There is this feature request: https://github.com/GoogleContainerTools/skaffold/issues/1441, but this is for adding hooks to skaffold itself.
The pods created by helm hooks are not removed (https://github.com/GoogleContainerTools/skaffold/issues/2876), but this is expected behavior for helm delete.
Thanks #acristu for the question. Skaffold dev here.
Currently, skaffold is unaware of pods deployed in the pre and post helm hooks.
The reason: we don't parse the manifests in these hooks and hence can't transform them to add the required skaffold.dev/run-id label.
Currently there is no way to force skaffold to pick up the logs from these pods/containers.
That said, there is a pending feature request to extend the current log configuration to include resourceType or resourceName, like the portForward section:
portForward: # describes user defined resources to port-forward.
- resourceType: # Kubernetes type that should be port forwarded.
  resourceName:
Supporting this in skaffold would be a great idea.

Deleting kubernetes yaml: how to prevent old objects from floating around?

I'm working on a continuous deployment routine for a kubernetes application: every time I push a git tag, a GitHub action is activated which calls kubectl apply -f kubernetes to apply a bunch of YAML kubernetes definitions.
Let's say I add YAML for a new service, and deploy it -- kubectl will add it.
But then later on, I simply delete the YAML for that service, and redeploy -- kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing, and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve this. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates, called values. Values are injected into templates, thus allowing a separation of configuration, and defines where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster; including dependencies the application requires. (Source)
Before you start, I recommend these articles explaining its quirks and features.
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
There's no such built-in way. You can deploy resources from a YAML file from anywhere as long as you can reach the cluster and have a kubeconfig, so Kubernetes will not know how to respond to a file deletion. If you still want to do this, you can write a program (e.g. in Go) which checks the availability of the files in one place and deletes the corresponding resources whenever a file gets deleted.
Another way, within Kubernetes itself, is to use an operator: whenever there is any change in your files, you update the CRD used to deploy resources via the operator.
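For the first idea, a very rough sketch (illustrative only; it assumes the manifests live in a local kubernetes/ directory, that kubectl is already configured for the target cluster, and the polling interval is arbitrary). It caches each manifest's contents and runs kubectl delete against the cached copy once the file disappears:

package main

import (
    "bytes"
    "log"
    "os"
    "os/exec"
    "path/filepath"
    "time"
)

// readManifests returns the current manifest files and their contents.
func readManifests(dir string) map[string][]byte {
    manifests := map[string][]byte{}
    paths, _ := filepath.Glob(filepath.Join(dir, "*.yaml"))
    for _, path := range paths {
        if data, err := os.ReadFile(path); err == nil {
            manifests[path] = data
        }
    }
    return manifests
}

func main() {
    dir := "kubernetes" // directory holding the applied manifests (assumption)
    previous := readManifests(dir)

    for {
        time.Sleep(30 * time.Second)
        current := readManifests(dir)

        // A manifest that existed before but is gone now: delete the
        // resources it described, using the cached copy of its contents.
        for path, data := range previous {
            if _, stillThere := current[path]; !stillThere {
                log.Printf("manifest %s removed, deleting its resources", path)
                cmd := exec.Command("kubectl", "delete", "-f", "-")
                cmd.Stdin = bytes.NewReader(data)
                cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
                if err := cmd.Run(); err != nil {
                    log.Printf("kubectl delete failed: %v", err)
                }
            }
        }
        previous = current
    }
}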
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way all the resources created by this file will be deleted.
However, what you are looking for is achieving the desired state using k8s. You can do this by using tools like Helmfile.
Helmfile allows you to specify all the resources you want in one file, and it will achieve the desired state every time you run helmfile apply.

Spring Cloud Data Flow + Kubernetes, asking for the task pod to be deployed on non-default namespaces

I have a setup with scdf-server on Kubernetes working fine; it deploys each task in an on-demand pod in the very same default namespace, the one that hosts the scdf-server pod.
Now I need to deploy a pod in another namespace, and I can't find the argument/property to use in the scdf server dashboard for the pod to be created in the given namespace. Does anybody know how to find that? I tried spring.cloud.deployer.kubernetes.namespace, deployer.kubernetes.namespace, spring.cloud.deployer.kubernetes.environmentVariables, deployer.<app>.kubernetes.namespace, spring.cloud.dataflow.task.platform.kubernetes.namespace, scheduler.kubernetes.environmentVariables SPRING_CLOUD_SCHEDULER_KUBERNETES_NAMESPACE... in both the 'properties' and 'arguments' text boxes...
This seems to be a duplicate of a thread that was posted in the SCDF Gitter channel. The properties were described and pointed out in the commentary there - more details here.

Can you define Kubernetes Services / Pods using YAML in Terraform?

I am using the Kubernetes Provider to describe services/pods in Terraform.
It can get confusing to use the HashiCorp Configuration Language to define kubernetes_pod or kubernetes_service resources, because the Kubernetes documentation describes everything in YAML, which means you need to translate it into HCL.
Is it possible to define pods as YAML and use them with kubernetes_pod and kubernetes_service resources as templates?
While Terraform normally uses HCL, HCL is a superset of JSON (much like YAML itself), so Terraform can also read JSON.
One possible option would be to take the YAML examples you already have and convert them into JSON and then use Terraform on those.
Unfortunately, that's unlikely to work directly, because the keywords are likely to differ from what Terraform expects, so you'd need to write something to do some basic translation of the input YAML into Terraform resource JSON. At that point, it'd probably be worth adding HCL output to the conversion so the resulting Terraform config is more readable, if you ever intend to keep the Terraform config around instead of converting and applying it in one shot.
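For the raw YAML-to-JSON step (not the key mapping onto the Terraform resource schema, which you would still need to write), a minimal sketch in Go using the sigs.k8s.io/yaml package; the manifest path is a placeholder:

package main

import (
    "fmt"
    "log"
    "os"

    "sigs.k8s.io/yaml"
)

func main() {
    // Placeholder path; point it at one of your existing manifests.
    raw, err := os.ReadFile("manifest.yaml")
    if err != nil {
        log.Fatal(err)
    }

    // YAMLToJSON converts the Kubernetes YAML document to JSON as-is;
    // translating the keys into what the Terraform provider expects is
    // a separate step.
    jsonBytes, err := yaml.YAMLToJSON(raw)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(jsonBytes))
}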
The benefit of doing things this way would be that you have a reusable Kubernetes config that can be run using kubectl or other tools, while still giving you the power of Terraform's lifecycle management: being able to plan changes and integrate with non-Kubernetes parts of your infrastructure (such as setting up the instances the Kubernetes cluster runs on).
I've not used it much, but I believe Kops will allow you to keep pod/service config in typical Kubernetes YAML files while using Terraform to manage the configuration, and it even allows you to output the Terraform configuration so you can run it outside of Kops itself.
The hashicorp/kubernetes provider does not support raw YAML/JSON, and they have no intention of implementing it.
The possible solutions are:
K2tf, a tool for converting Kubernetes RAW YAML manifests into Terraform HCL for the Kubernetes provider.
Use an alternative community Kubernetes provider, such as gavinbunny/kubectl, which does support raw YAML and can track each resource and the attributes in Terraform state, unlike the kubernetes-alpha provider.
Another solution is to use the hashicorp/kubernetes-alpha provider: you can pass in either a Terraform object or a raw YAML manifest converted into a TF object for use in the provider resource. The downside is that the attributes are not tracked as individual objects, and thus a change will cause the entire resource to be tainted.
Using the kubectl provider.
The core of this provider is the kubectl_manifest resource, which allows free-form YAML to be processed and applied against Kubernetes. This YAML object is then tracked and handles creation, updates and deletion seamlessly - including drift detection. This provider is ideal if you want to track the manifest in Terraform:
resource "kubectl_manifest" "test" {
yaml_body = file("path/to/manifest.yaml")
}
Using the kubernetes-alpha provider
The kubernetes_manifest resource represents one Kubernetes resource as described in the manifest attribute. The manifest value is the HCL transcription of a regular Kubernetes YAML manifest. To transcribe an existing manifest from YAML to HCL, use the Terraform built-in function yamldecode(), or use the tfk8s tool to convert YAML into manifest attributes for the kubernetes-alpha provider's manifest resource.
Example using yamldecode:
resource "kubernetes_manifest" "service" {
provider = kubernetes-alpha
manifest = yamldecode(file("path/to/manifest.yaml"))
}
Why doesn't the kubernetes provider support RAW YAML?
Supporting YAML/JSON in hashicorp/kubernetes was considered before (the very first proposal of the K8S provider was exactly that) and again during the initial implementation of this provider, and we decided not to do it.
The reason is that you can't accurately track resources created from RAW YAML as Terraform objects.
From Terraform's developer perspective it is very tricky to get around the way the K8S API works, where you send an array [a, b, c] to the Create API and then you Get back [a, b, c, d]. This happens for example with pods that get some secret volumes attached automatically, but it happens with most other resources I had the chance to play with. The whitelisting/blacklisting is the tricky part.
You may also be interested in the following project, which allows you to convert YAML files to Terraform's HCL.
https://github.com/sl1pm4t/k2tf
Description:
A tool for converting Kubernetes API Objects (in YAML format) into HashiCorp's Terraform configuration language.
The converted .tf files are suitable for use with the Terraform Kubernetes Provider.

Is it possible to disable service discovery with environment variables in kubernetes?

As we know, Kubernetes supports two primary modes of finding a Service - environment variables and DNS. Can we disable the first way and use only DNS?
As shown in this PR, this feature will land with Kubernetes v1.13. From the PR (as docs are not available yet) I expect it to be the field enableServiceLinks in the pod spec, with true as the default.
Edit: It has been a while and the PR has finally landed: enableServiceLinks was added as an optional boolean field on the Kubernetes PodSpec.
For the record: using DNS to discover service endpoints is the recommended approach. The Docker link behavior, from which the environment variables originate, has long been deprecated.
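If you build pod specs programmatically, here is a minimal sketch (assuming Kubernetes >= 1.13 API types; the pod name and image are placeholders) of opting out of the service link variables:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    disable := false

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "no-service-links"}, // placeholder name
        Spec: corev1.PodSpec{
            // Opt out of docker-link-style service environment variables;
            // DNS-based discovery keeps working as usual.
            EnableServiceLinks: &disable,
            Containers: []corev1.Container{
                {Name: "app", Image: "nginx"}, // placeholder container
            },
        },
    }

    fmt.Println(pod.Name, *pod.Spec.EnableServiceLinks)
}

In plain YAML this is simply enableServiceLinks: false under the pod spec.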
Per the Kubernetes v1.8 source, it's impossible to disable service discovery via environment variables.
Only services meeting either of these conditions are exposed through env vars:
a service in the same namespace as the pod;
the kubernetes service in the default namespace.
Even so, these environment variables can be overwritten by env and envFrom defined in the pod template.
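For reference, this is roughly what the injected variables look like from inside a container; a minimal sketch, where my-service is a hypothetical Service in the pod's namespace:

package main

import (
    "fmt"
    "os"
)

func main() {
    // Always present: the kubernetes service from the default namespace.
    fmt.Println("api server:", os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT"))

    // Present only if a Service named "my-service" existed in the pod's
    // namespace when the pod was created (hypothetical example name).
    fmt.Println("my-service:", os.Getenv("MY_SERVICE_SERVICE_HOST"), os.Getenv("MY_SERVICE_SERVICE_PORT"))
}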
I'm wondering what your scenario is; maybe we can figure out some workaround.