What should be the name pattern for a k8s Service?

I was wondering what the best naming pattern is for a Service object in a k8s environment.
Should it be %service-name%-service or just %service-name%?
workflow-service or just workflow?
What are the arguments for both sides?

In Kubernetes, Service DNS names follow this pattern:
<service-name>.<namespace-name>.svc.cluster.local
I have seen people append svc or service to the service name with '-' as the delimiter. For example, for redis:
redis-service
redis-svc
redis
All three are perfectly fine, but the first one makes more sense in terms of readability and is a standard way of representing the Service object.

In fact, when creating a Service there is no need to append "-service" to the name. The general convention is to name the Service the same as the Pods it points to. Hope this helps.
Thank you!
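To illustrate that convention, here is a minimal sketch of a Service named after the pods it selects (the label and port here are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: redis                 # same name as the pods it points to, no "-service" suffix
spec:
  selector:
    app: redis                # hypothetical label on the backing pods
  ports:
    - port: 6379
      targetPort: 6379

Its in-cluster DNS name is then simply redis.<namespace-name>.svc.cluster.local.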

This is simply a matter of taste. If you want verbosity, add -service. But since the resource kind already distinguishes a Service from other objects, why be verbose?


How to use Kustomize and create an env like: "http://${namePrefix}service-a/some-path" or "jdbc:db2://${namePrefix}service-b:${dbPort}/${dbName}"

Let's say I need to create environment variables or ConfigMap entries like this:
- name: JDBC_URL
  value: "jdbc:db2://alice-service-a:50000/db1"
- name: KEYCLOAK_BASE_URL
  value: "http://alice-keycloak:8080/auth"
Where alice- is the namePrefix. How do I do this using Kustomize?
The containers I use actually do need references to other containers that are string concatenations of "variables" like above.
It doesn't look like Kustomize's vars can do this. The documentation entry Unstructured Edits seems to describe this and is under a heading called "Eschewed Features", so I guess that isn't going to happen. A similar feature request, #775 Support envsubst style variable expansion was closed.
Coming from Helm, that was easy.
What are my options if I want to move from Helm to Kustomize, but need to create an env or ConfigMap entry like e.g. jdbc:db2://${namePrefix}-service-b:${dbPort}/${dbName} (admittedly a contrived example)?
I'm guessing I'll have to resort to functionality external to Kustomize, like envsubst. Are there any best practices for cobbling this together, or am I writing my own custom-deploy-script.sh?
I'm afraid I've come up against one of the limitations of Kustomize.
The article The State of Kubernetes Configuration Management: An Unsolved Problem by Jesse Suen (Argo Project) has this to say under "Kustomize: The Bad":
No parameters & templates. The same property that makes kustomize applications so readable, can also make it very limiting. For example, I was recently trying to get the kustomize CLI to set an image tag for a custom resource instead of a Deployment, but was unable to. Kustomize does have a concept of “vars,” which look a lot like parameters, but somehow aren’t, and can only be used in Kustomize’s sanctioned whitelist of field paths. I feel like this is one of those times when the solution, despite making the hard things easy, ends up making the easy things hard.
Instead, I've started using gomplate (a flexible command-line tool for template rendering) in addition to Kustomize to solve the challenge above, but having to use two tools that weren't designed to work together is not ideal.
EDIT: We ended up using ytt for this instead of gomplate.
I can highly recommend the article The State of Kubernetes Configuration Management: An Unsolved Problem. Nice to know I'm not the only one hitting this road block.
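As a rough sketch of that external-templating approach, the dynamic entries can live in a template that is rendered before applying. The file name, variable names, and values below are hypothetical:

# fragment of deployment-template.yaml (hypothetical); the whole file is
# rendered with something like:
#   export namePrefix=alice- dbPort=50000 dbName=db1
#   envsubst < deployment-template.yaml | kubectl apply -f -
env:
  - name: JDBC_URL
    value: "jdbc:db2://${namePrefix}service-b:${dbPort}/${dbName}"
  - name: KEYCLOAK_BASE_URL
    value: "http://${namePrefix}keycloak:8080/auth"

gomplate and ytt fill the same role with richer template languages.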

Validate Kubernetes Object Creation

I would like to implement functionality (or, even better, reuse existing libraries/APIs!) that would intercept a kubectl command to create an object and perform some pre-creation validation tasks on it before allowing the kubectl command to proceed.
e.g.
check various values in the YAML against an external DB, for example
check that a label conforms to the internal naming convention
and so on...
Is there an accepted pattern or existing tools etc?
Any guidance appreciated
The way to do this is by creating a ValidatingAdmissionWebhook. It's not for the faint of heart, and even a brief example would be overkill as an SO answer. A few pointers to start:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook
https://banzaicloud.com/blog/k8s-admission-webhooks/
https://container-solutions.com/a-gentle-intro-to-validation-admission-webhooks-in-kubernetes/
I hope this helps :-)
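For orientation, here is a minimal sketch of the registration object such a webhook needs; the names, service, and path are hypothetical, and you still have to run the HTTPS service that answers AdmissionReview requests:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: naming-convention-check          # hypothetical
webhooks:
  - name: naming.example.com             # hypothetical
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: validator                  # hypothetical in-cluster validation service
        namespace: default
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                  # reject the object if the webhook is unreachable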
I usually append --dry-run to the kubectl command to check and validate the YAML config.
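(In newer kubectl versions the flag takes a value: --dry-run=client validates locally, while --dry-run=server submits the request so server-side validation, including admission webhooks, runs without persisting the object.)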

How can I apply pod selector and namespace selector, both, in the same ingress rule?

The Kubernetes documentation example here shows how a network policy can be applied to a source specified by either a pod selector OR a namespace selector. Can I specify a source that fulfills both constraints at the same time?
e.g. can a source be a pod with the label "tier=web" which is deployed in the namespace "ingress"?
P.S. For now, I have it working by adding the namespace name as a pod label.
Yes, this is possible, but not immediately intuitive. If you look at the section below the chunk you linked, it gives a pretty good explanation (this appears to have been added after you asked your question). The NetworkPolicy API documentation here is generally helpful as well.
Basically, if you put the two selectors as separate items in the list, as the example does, they are combined with a logical OR. If you put them in the same array element (no dash in front of the second selector), as in the example below, the podSelector and namespaceSelector are ANDed, and it will work. It may help to see these in a YAML-to-JSON converter.
Here's an ingress chunk from their policy, modified to AND the conditions
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        project: myproject
    podSelector:
      matchLabels:
        role: frontend
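For contrast, putting each selector in its own list item (note the extra dash) ORs them instead:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        project: myproject
  - podSelector:
      matchLabels:
        role: frontend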
The same logic applies to the ports rule if you use it alongside the to or from statements. You'll notice in the example that there is no dash in front of ports under the ingress rule. If there were a dash, the ingress and ports conditions would be ORed.
Here are some GitHub links from when they were discussing how to implement combining selectors:
This comment may give a little more background. The API already supported the OR, so doing it otherwise would've broken some functionality for people with that implemented: https://github.com/kubernetes/kubernetes/issues/50451#issuecomment-336305625
https://github.com/kubernetes/kubernetes/pull/60452

Understanding the Logic of Kubernetes API framework in Go

I'm currently trying to wrap my head around learning Go, some details of the Kubernetes API I haven't used before, and the Kubernetes API framework for Go, all at the same time, and would appreciate your help in understanding the grammar of that framework and why people use it anyway.
Honestly, I'm not sure why to use a framework in the first place if it contains the same information as the REST endpoint. Wouldn't it make more sense to just call the API directly via an HTTP library?
And here's one example (taken from some real code):
pod, err := kubecli.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
What I find bothersome is that I have to look everything up in the API docs and then additionally figure out that /v1/ translates to CoreV1(). And I'm not even sure where I could look that up. Also, the whole block metav1.GetOptions{} seems completely unnecessary; which part of an HTTP request does it represent?
I hope I could make clear what the confusion is and hope for your help in clearing it up.
Edit:
Here's another example, generated by the new operator-framework, which sadly doesn't make it much better:
return &v1.Pod{
	TypeMeta: metav1.TypeMeta{
		Kind:       "Pod",
		APIVersion: "v1",
	},
	ObjectMeta: metav1.ObjectMeta{
		Name:      "busy-box",
		Namespace: cr.Namespace,
		OwnerReferences: []metav1.OwnerReference{
			*metav1.NewControllerRef(cr, schema.GroupVersionKind{
				Group:   v1alpha1.SchemeGroupVersion.Group,
				Version: v1alpha1.SchemeGroupVersion.Version,
				Kind:    "Memcached",
			}),
		},
		Labels: labels,
	},
	Spec: v1.PodSpec{
		Containers: []v1.Container{
			{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			},
		},
	},
}
The API docs don't know anything about this TypeMeta object. And the second element is called ObjectMeta instead of metadata. I mean, I'm not a magician. How should I know this?
I'm a bit late, but here is my 2 cents.
Why use client-go instead of an HTTP library
There are several pros to client-go.
Kubernetes resources are defined as strongly-typed structs, which means fewer misspelling bugs and easier refactoring.
When we manipulate resources, it authenticates with the cluster automatically (doc); all it needs is a valid config. We don't need to know exactly how the authentication is done.
It has multiple versions compatible with different Kubernetes versions. That makes aligning our code with a specific Kubernetes version much easier, without knowing every detail of the API changes.
How do I know which class and method should be called
In API Reference, each resource has the latest Group and Version tag.
For example, Pod is group core, version v1, kind Pod in v1.10.
GoDoc listed all properties and links to detail explanation for every class like Pod.
So the pod list can be found by calling CoreV1(), then Pods(namespace string), then List(opts meta_v1.ListOptions).
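Putting that chain together, here is a minimal sketch of listing pods, assuming a kubeconfig in the default location and using the pre-1.18 method signatures shown in the question (newer client-go releases also take a context.Context as the first argument):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the local kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// The typed clientset: CoreV1() maps to the /api/v1 endpoint group.
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/namespaces/default/pods; ListOptions carries query
	// parameters such as labelSelector and fieldSelector.
	pods, err := clientset.CoreV1().Pods("default").List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name)
	}
}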
A colleague suggested that there are in fact autogenerated docs, called godoc. While it doesn't answer all my questions, it's already improving my ability to use the API libraries.
Adding a couple of things to what's been mentioned so far:
You could indeed simply make HTTP calls against the apiserver, but client-go has already done all the hard work for you! Take for example this 'watch' endpoint:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#watch-202
You could code up the functionality yourself, or make use of e.g. the SharedInformer interface:
https://github.com/kubernetes/client-go/blob/master/tools/cache/shared_informer.go#L34-L41
The code in client-go has been tested and should be relatively bug-free.
If you set up your editor correctly for Go, it will give you type hints and available function calls as you start typing API calls.
I'd first learn Go and then try to grok client-go.

k8s/gke/gcr - scope an image to a namespace

I have a GKE cluster with a number of different namespaces. I would like to be able, in effect, to namespace my images the same way my other resources are namespaced. That is, I would like pods in different namespaces to be able to reference an image by the same name but get different images depending on which namespace they are in. One way to achieve this (if it were supported) might be to substitute the name of the namespace into the image name in the YAML, e.g.:
containers:
- image: eu.gcr.io/myproject/$(NAMESPACE)-myimage
  name: myimage
Then I could push eu.gcr.io/myproject/mynamespace-myimage to make my image available to namespace mynamespace.
Is there any tidy way to achieve this kind of thing? If not, and since I've been unable to find anybody else asking similar questions, is there some way in which this is a bad thing to want to do?
I don't think this is possible. Kubernetes supports expansion on fields like command and args. This lets you use substitutions with a variable set in the container's env field, which can come from a ConfigMap/Secret. Example: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments
However I don't think variable expansion works for the image field. :(
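For reference, here is the kind of expansion that does work, sketched with the downward API exposing the namespace; the image line, by contrast, is taken literally:

containers:
- name: myimage
  image: eu.gcr.io/myproject/myimage     # no $(VAR) expansion happens here
  env:
    - name: NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
  args: ["--namespace=$(NAMESPACE)"]     # $(VAR) expansion works in command/args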
What you're trying to do does not seem like a great idea: having different images across environments defeats the purpose of having test/staging environments.
Instead, you should probably use the same image on all platforms and vary behavior through env vars, ConfigMaps, etc.