service-docker-compose.yml contains definitions for ServiceA, ServiceB
web-docker-compose.yml contains definitions for ServiceC but needs to reuse definitions from service-docker-compose.yml. I would prefer not to copy over the definitions and the environment variables; how should I manage this?
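One way to avoid the duplication (a sketch; whether it applies depends on your Compose version, and the service names are taken from the question) is the extends keyword, which pulls a service definition, including its environment, from another file:
web-docker-compose.yml:
services:
  ServiceC:
    image: my-web-image        # illustrative image
  ServiceA:
    extends:
      file: service-docker-compose.yml
      service: ServiceA
Alternatively, you can pass both files on the command line and let Compose merge them, e.g. docker compose -f service-docker-compose.yml -f web-docker-compose.yml up.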
Related
I have a Service running, e.g. my-svc-1,
and I run a deployment that creates a Service with the same name, my-svc-1. What would happen?
Service names must be unique within a namespace, but not across namespaces. The same uniqueness constraint applies to all namespace-scoped objects, for example Deployments, Services, Secrets, etc.
Namespaces provides a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces.
If you really need the same service with a different software version, you can create another namespace named B and then create a service named my-svc-1 there.
working-with-objects-namespaces
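As a concrete sketch (the lowercase namespace names and the labels are illustrative), the same Service name can exist in two namespaces side by side:
apiVersion: v1
kind: Service
metadata:
  name: my-svc-1
  namespace: a              # first namespace
spec:
  selector:
    app: my-app-v1          # illustrative label
  ports:
    - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-svc-1
  namespace: b              # a second namespace, as described above
spec:
  selector:
    app: my-app-v2          # illustrative label
  ports:
    - port: 80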
I've encountered a rather rookie issue with k8s. I'm fairly new to k8s, and I set up staging and production services / deployments for a Django - celery - redis application within a cluster. However, in my excitement that I actually managed to get something working, I didn't stop to check whether it was 100% correct.
Essentially, I've noticed that the pre-production Django application doesn't care which celery deployment it references when dispatching a periodic task. It might go to staging, it might try the pre-production deployment. THIS IS BAD.
So, I've been looking at labels and selectors, as well as namespaces.
However, I should probably slow down. My first question: how would I use something native to k8s to run different environments of deployments such that they are all isolated from each other, so the pre-production Django application can only talk to the pre-production celery-worker or pre-production celery-beat deployments?
My feeling is that the answer is labels and selectors? But... is it better to use namespaces?
Any pro-guidance around this subject would be amazing.
You could go with a namespace per environment and then make sure that when calling another service you always use the DNS shorthand of a single service name.
This will, however, not prevent a specific service from calling another one in a different namespace/environment. If you are the sole developer, or trust all the others, this might be fine.
Another alternative is to run different clusters but run them identically. When your number of deployments grows you'll likely end up with this anyway.
This way you can have one namespace per team, or per domain, and it will look identical in all environments.
Aside from creating a new cluster per environment, you can separate deployments by namespace, or just run different stacks in one namespace. The last case (the one you use now) is the easiest to shoot yourself in the foot with, since you have to change a lot of things to make it isolated. At the very least you need a different set of resource names (both in manifests and configuration) and labels to match.
Out of the three methods I think namespace separation is the easiest; it works on DNS-based service discovery. Suppose you have a copy of redis and your application in two different namespaces (dev and prod, for example). Both instances of the app are configured to use redis at redis:6379. When they call DNS to resolve the hostname, CoreDNS (the internal DNS service) responds with different answers depending on which namespace the request came from. So your app in the dev namespace will get the IP address of redis in the dev namespace, and the app in the prod namespace will contact redis in the prod namespace. This method does not impose any restriction; if you wish, you can specifically make it so that both apps use the same copy of redis. For that, instead of redis:6379 you have to use a full service DNS name, like this:
redis.<namespace>.svc.cluster.local:6379
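As an illustration (the names and image below are a sketch, not from the answer above), the same Deployment manifest can be applied to both namespaces, and each copy will reach the redis Service in its own namespace:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest              # illustrative image
          env:
            - name: REDIS_URL
              # short name: resolved to the redis Service in this Pod's namespace
              value: redis://redis:6379
              # to pin a namespace explicitly, use the full form instead:
              # value: redis://redis.prod.svc.cluster.local:6379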
Regardless of which method you choose to go with, I strongly recommend that you get familiar with Kustomize, Helm, or both. These tools help you avoid duplicating resource manifests and thus spend less time spawning and managing instances. I will give you a minimal example for Kustomize because it is built into kubectl. Consider the following directory structure:
.
├── bases
│   └── my-app
│       ├── deployment.yml       # your normal deployment manifest
│       └── kustomization.yml
└── instances
    ├── prod
    │   ├── kustomization.yml
    │   └── namespace.yml        # a manifest that creates 'prod' namespace
    └── test
        ├── kustomization.yml
        └── namespace.yml        # a manifest that creates 'test' namespace
bases is where you keep a non-specific skeleton of your application. It isn't meant to be deployed directly; like a class, it has to be instantiated. instances is where you describe the various instances of your application. Instances are meant to be deployed.
bases/my-app/kustomization.yml:
# which manifests to pick up
resources:
- deployment.yml
instances/prod/kustomization.yml:
# reference what we deploy
bases:
- ../../bases/my-app
resources:
- namespace.yml
# and this overrides namespace attribute for all manifests referenced above
namespace: prod
instances/test/kustomization.yml:
# the same as above, only the namespace is different
bases:
- ../../bases/my-app
resources:
- namespace.yml
namespace: test
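The namespace.yml files referenced above can be minimal Namespace manifests. For the prod instance it could look like this (the test instance would use name: test):
apiVersion: v1
kind: Namespace
metadata:
  name: prod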
Now if you go into the instances directory and use kubectl apply -k prod, you will deploy deployment.yml to the prod namespace. Similarly, kubectl apply -k test will deploy it to the test namespace.
And this is how you can create several identical copies of your application stack in different namespaces. It should be fairly isolated unless some shared resources from other namespaces are involved. In other words, if you deploy each component (such as the database) per namespace and those components are not configured to access components from other namespaces, it will work as expected.
I encourage you to read more on Kustomize and Helm, since namespace overriding is just a basic thing these can do. You can manage labels, configuration, names, stack components and more.
I have a container type that I will need to intermittently add and delete from my cluster, but each different container instance will need a unique configuration in the form of environment variables.
What is the best way to structure this with Kubernetes? Should I have a separate workload for each container? Should I have one common workload and update the pod with new containers as needed?
The containers are isolated applications that don't have anything to do with their siblings.
If there is no need for a PVC or PV, you can use Jobs in Kubernetes for this.
For each different instance, create a new Job with its own set of environment variables, as in the sketch below.
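A sketch of one such Job (the name, image, and variables are illustrative):
apiVersion: batch/v1
kind: Job
metadata:
  name: worker-instance-1
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: my-worker:latest        # illustrative image
          env:
            # the unique per-instance configuration
            - name: TARGET_HOST
              value: "10.0.0.1"
            - name: MODE
              value: "batch"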
I have a container type that I will need to intermittently add and delete from my cluster, but each different container instance will need a unique configuration in the form of environment variables.
Configure multiple Jobs or Deployments, each with its own set of environment variables.
I do it by:
Having a common deployment YAML that mounts ConfigMaps as environment variables
Packaging it with Helm
Deploying it with a different values.yaml for each instance.
As Harsh stated, you can also use the Jobs pattern, but I prefer having a single file and many values files instead of different job definitions, because if a key changes you will need to update all job definitions.
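As a sketch of that approach (the chart layout, values keys, and image are illustrative, not taken from the answer above), a single Deployment template reads its per-instance settings from values:
templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-worker
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-worker
    spec:
      containers:
        - name: worker
          image: {{ .Values.image }}
          env:
            # per-instance configuration comes from the values file
            - name: TARGET_HOST
              value: {{ .Values.targetHost | quote }}
values-instance1.yaml:
image: my-worker:latest
targetHost: "10.0.0.1"
Each instance is then installed with its own values file, e.g. helm install instance1 ./my-chart -f values-instance1.yaml.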
Declarative definitions for resources in a Kubernetes cluster, such as Deployments, Pods, Services, etc. What are they referred to as in the Kubernetes ecosystem?
Possibilities I can think of:
Specifications (specs)
Objects
Object configurations
Templates
Is there a consensus standard?
Background
I'm writing a small CI tool that deploys single or multiple k8s YAML files. I can't think of what to name these in the docs and actual code.
The YAML form is generally called a manifest. In a Helm chart they are templates for manifests (or more often just "templates"). When you send them to the API, you parse the manifest and it becomes an API object. Most types/kinds (you can use either term) have a sub-struct called spec (e.g. DeploymentSpec) that contains the declarative specification of whatever that type is for. However, that is not required, and some core types (ConfigMap, Secret) do not follow that pattern.
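As a sketch to tie those terms together (the names are illustrative): the YAML document below is a manifest; applying it creates a Deployment object in the API; its spec field holds the declarative specification (the DeploymentSpec).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:                      # this sub-struct is the DeploymentSpec
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:                  # and this one is the PodSpec of the template
      containers:
        - name: example
          image: nginx:1.25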
I am writing a YAML file for Kubernetes and I am wondering how to pass variables to the args field.
I need to do something like this :
args: ['--arg1=http://12.12.12.12:8080','--arg2=11.11.11.11']
But I don't want to hard-code those values for --arg1 and --arg2; instead it should be something like:
args: ['--arg1='$HOST1,'--arg2='$HOST2]
How should I do this?
You have two options that are quite different and really depend on your use-case, but both are worth knowing:
1) Helm would allow you to create templates of Kubernetes definitions, that can use variables.
Variables are supplied when you install the Helm chart, and before the resulting manifests are deployed to Kubernetes.
You can change the variables later on, but what that does is regenerate the YAML and re-deploy a "static" version of the result (template + variables = YAML that is sent to Kubernetes).
2) ConfigMaps allow you to separate a configuration from the pod manifest, and share this configuration across several pods/deployments.
You can later reference the ConfigMap from your pod/deployment manifests.
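As a sketch that combines the ConfigMap option with the args question above (the ConfigMap name, keys, and image are illustrative): Kubernetes expands $(VAR_NAME) references in args using the container's environment variables, and those variables can be sourced from a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: hosts-config
data:
  host1: "http://12.12.12.12:8080"
  host2: "11.11.11.11"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest            # illustrative image
      env:
        - name: HOST1
          valueFrom:
            configMapKeyRef:
              name: hosts-config
              key: host1
        - name: HOST2
          valueFrom:
            configMapKeyRef:
              name: hosts-config
              key: host2
      # $(HOST1) and $(HOST2) are expanded by Kubernetes before the container starts
      args: ["--arg1=$(HOST1)", "--arg2=$(HOST2)"]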
Hope this helps!