Can `oc create` behave in a "transactional"/"atomic" manner when asked to create _multiple_ objects on the cluster?

I have written a number of related OKD object definitions, each in its own YAML file. These together essentially make up an application deployment. I am doing something like the following to install my application on an OKD cluster, which works to my satisfaction when none of the objects already exist [on the cluster]:
oc create -f deploymentconfig.yaml,service.yaml,route.yaml,configmap.yaml,secret.yaml
However, if some of the objects oc create is asked to create already exist on the cluster, then oc create (naturally) refuses to re-create them, but it will still have created all the others that did not exist yet.
This isn't ideal when the objects being created were designed to work in tandem and are parts of an application that depend on one another -- the configuration map, for instance, is pretty much a hard requirement, as without it the container will fail to start properly (it expects its configuration data through a mounted volume).
So I'd like to know: can oc create be made to behave so that either all of the objects specified on the command line are installed, or none of them are if some already exist or if there were errors?
I am aware OKD has template facilities and other features that may greatly help with application deployment, so if I am putting too much (misplaced) faith in oc create here, I'll take an alternative solution if oc create by design does not do "transactions". This is just me trying what seems simple from where I currently stand -- I'm not much of an OKD expert.

Unfortunately, there is no such thing.
In Kubernetes (and so in OpenShift), manifests are declarative, but they are declarative per resource.
You can use oc apply or oc replace to create or modify a single resource atomically, but the same cannot be done for a group of resources, because Kubernetes does not see them as a unit.
Even if you use a Template or a List, some resources may fail and you will end up with only part of the whole.
For this kind of thing Helm is much more versatile and does what you want with its --atomic flag.
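For illustration, a minimal sketch (the release name "myapp" and the chart path are placeholders, assuming the DeploymentConfig, Service, Route, ConfigMap and Secret have been packaged into a chart): with --atomic, a failed install or upgrade is rolled back, so you never end up with only part of the set on the cluster.
helm install myapp ./myapp-chart --atomic --timeout 5m   # on failure, everything created by this install is removed
helm upgrade myapp ./myapp-chart --atomic                # on failure, the release is rolled back to the previous revision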

Related

How to distribute n different configs to exactly n pods

I have a containerized daemon that I need to run one instance of for every thing. Each thing has a unique set of configs associated with it, but the container image is the same. The configs can be set simply as environment variables. I have a list of the configs, and I need to define the desired state as having exactly 1 pod running for each thing. What is the appropriate way to construct this in Kubernetes with or without Helm?
My understanding is that ReplicaSets and Deployments work on identical containers; in other words, they would all be spun up with the same environment variables? I understand that a StatefulSet may be able to represent this, but the daemons do not really need to hold state: they do not need persistent storage and they can be killed at will, so long as another with the same configs comes up soon afterwards.
One clue I was given by somebody was to use Helmfile or Helm partials. That is the extent of what they told me. I have not yet investigated whether those are appropriate or not.
You are correct in saying that Deployments and ReplicaSets run identical containers, so the way I see it you have two options:
Option 1: Deploy multiple Deployments with different configs defined in the values file.
You can see an example here, where multiple configs are set in the values file and {{ range }} is used to iterate over them and create multiple Deployments.
Option 2: Iterate over your configuration names/files using a scripting language of your choice and create a separate release for each configuration via the command line, for example with --set configName=
Personally, I would go with the second option, since with multiple Helm releases you can harness the Helm CLI to better understand what is running and its state. Also, any CRUD action you would like to perform would be less dangerous, since the deployments are decoupled.
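A rough sketch of both options (the chart layout, value names and the "thing-a"/"thing-b" config names are assumptions, not taken from the answer above):
Option 1 - one chart, one release, a Deployment rendered per entry in values.yaml:
# values.yaml
daemons:
  - name: thing-a
    config: "foo=1"
  - name: thing-b
    config: "foo=2"
# templates/deployment.yaml
{{- range .Values.daemons }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: daemon-{{ .name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: daemon-{{ .name }}
  template:
    metadata:
      labels:
        app: daemon-{{ .name }}
    spec:
      containers:
        - name: daemon
          image: mycompany/daemon:latest
          env:
            - name: DAEMON_CONFIG
              value: {{ .config | quote }}
{{- end }}
Option 2 - one release per config, driven from the shell:
for cfg in thing-a thing-b; do
  helm install "daemon-$cfg" ./daemon-chart --set configName="$cfg"
done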

Kubernetes - What exactly is imperative Vs Declarative

I am seeing multiple, differing explanations of imperative vs. declarative for Kubernetes - something like: imperative means using yaml files to create the resources and describe the state, and declarative is the other way around.
What is the real, clear difference between these two? I would really appreciate it if you could group the commands under each approach - for example, create under the imperative way, etc.
"Imperative" is a command - like "create 42 widgets".
"Declarative" is a statement of the desired end result - like "I want 42 widgets to exist".
Typically, your yaml file will be declarative in nature: it will say that you want 42 widgets to exist. You'll give that to Kubernetes, and it will execute the steps necessary to end up with 42 widgets.
"Create" is itself an imperative command, but what you're creating are objects in a Kubernetes cluster. What the cluster should look like is determined by the declarations in the yaml file.
Imperative
Official docs on Managing Kubernetes Objects Using Imperative Commands.
Kubernetes objects can quickly be created, updated, and deleted directly using imperative commands built into the kubectl command-line tool.
kubectl run nginx --image=nginx
kubectl create service nodeport <myservicename>
kubectl delete pod <podname>
Declarative
Kubernetes objects can be created, updated, and deleted by storing multiple object configuration files in a directory and using kubectl apply to recursively create and update those objects as needed. This method retains writes made to live objects without merging the changes back into the object configuration files. kubectl diff also gives you a preview of what changes apply will make.
Official docs on Declarative Management of Kubernetes Objects Using Configuration Files.
Official docs on Declarative Management of Kubernetes Objects Using Kustomize.
Define what you want in a yaml file and use kubectl apply:
kubectl apply -f app.yaml
kubectl apply -f <directory>/
kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml
Imperative commands: we do not write any yaml file; we change resources (pods, services, networking, anything) directly from the command line. That is an imperative command.
Imperative object configuration: we describe the resource we need in a yaml file ourselves, removing the default values we don't need and keeping only the required fields, and we then tell Kubernetes exactly what to do with that file. That is imperative object configuration, and that is the create command.
Declarative object configuration: we don't care about the individual steps, we just want the final result - in simple words, we take a yaml file (even one copied from the internet) whose only purpose is that the pod/resources exist, and let Kubernetes figure out whether to create or update them. In that case we use the apply command.
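To summarize the three styles with a hedged example (the image and file name are placeholders):
kubectl run nginx --image=nginx      # imperative command: no yaml, act on the cluster directly
kubectl create -f nginx-pod.yaml     # imperative object configuration: a yaml file you wrote, with an explicit create operation
kubectl apply -f nginx-pod.yaml      # declarative object configuration: only the desired end state; kubectl decides whether to create or update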

What are the benefits of helm?

Okay, you can easily install an application, but where is the benefit compared to normal .yaml files in Kubernetes?
Can someone give me an example where it is useful to use Helm and why plain Kubernetes is not sufficient?
Also, a comparison of Helm and plain Kubernetes would be nice.
With Helm, a set of resources (read: Kubernetes manifests) logically defines a release - and you need to treat this group of resources as a single unit.
A simple example of why this is necessary: imagine an application bundle that has, let's say, 10 Kubernetes objects in total. On the next release, due to changes in the app, 1 of the resources is no longer needed - there are now 9 objects in total. How would I roll out this new release? If I simply do kubectl apply -f new_release/, that wouldn't take care of deleting the 1 resource that is no longer needed. This means I cannot roll out upgrades that don't need manual intervention. Helm takes care of this.
Helm also keeps a history of releases with their exact set of resources, so you can rollback to a previous release with a single command, in case things go wrong.
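For example (the release and chart names are placeholders): upgrading to a revision that renders only 9 of the original 10 objects removes the leftover one, and the recorded history makes rollback a one-liner.
helm upgrade myapp ./myapp-chart   # objects dropped from the chart are deleted from the cluster
helm history myapp                 # list the recorded revisions of this release
helm rollback myapp 1              # go back to revision 1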
Also, one of the things you often need is templating of your resources - imagine you want to deploy multiple instances of the exact same application. What would you do?
Kubernetes doesn't offer many options to tackle this problem - one solution is to use different namespaces: don't specify a namespace in the manifests, but give it on the command line, such as kubectl apply -n my_namespace -f resources/. But what if you want to deploy two of these instances in the same namespace? Then you need some kind of name/label/selector templating, and Helm takes care of that.
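A minimal sketch of that kind of templating (the chart contents and names here are assumed): prefixing object names with {{ .Release.Name }} lets two instances of the same chart coexist in one namespace.
# templates/deployment.yaml (excerpt)
metadata:
  name: {{ .Release.Name }}-myapp
# two independent instances of the same chart, side by side in the same namespace
helm install instance-a ./myapp-chart
helm install instance-b ./myapp-chart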
These are some examples of the use cases that Helm addresses.

kubernetes - kubectl run vs create and apply

I'm just getting started with kubernetes and setting up a cluster on AWS using kops. In many of the examples I read (and try), there will be commands like:
kubectl run my-app --image=mycompany/myapp:latest --replicas=1 --port=8080
kubectl expose deployment my-app --port=80 --type=LoadBalancer
This seems to do several things behind the scenes, and I can view the manifest files that were created using kubectl edit deployment, and so forth. However, I see many examples where people are creating the manifest files by hand and using commands like kubectl create -f or kubectl apply -f.
Am I correct in assuming that both approaches accomplish the same goals, but that by creating the manifest files yourself, you have a finer grain of control?
Would I then have to be creating Service, ReplicationController, and Pod specs myself?
Lastly, if you create the manifest files yourself, how do people generally structure their projects as far as storing these files? Are they simply in a directory alongside the project they are deploying?
The fundamental question is how to get all of your k8s objects into the k8s cluster. There are several ways to do this:
Using Generators (Run, Expose)
Using Imperative way (Create)
Using Declarative way (Apply)
All of the above ways have different purposes and levels of simplicity. For instance, if you want to quickly check whether a container is working as you intended, you might use generators.
If you want to version-control your k8s objects, it's better to use the declarative way, which helps us determine the accuracy of the data in the k8s objects.
Deployment, ReplicaSet and Pods are different layers which solve different problems. All of these concepts provide flexibility to k8s.
Pods: It makes sure that related containers are kept together, which provides efficiency.
ReplicaSet: It makes sure that the k8s cluster has the desired number of replicas of the pods.
Deployment: It makes sure that you can have different versions of the Pods and provides the capability to roll back to a previous version.
Lastly, it depends on your use case how you want to use these concepts and methodologies. It's not about which is good or which is bad.
There is a little more nuance to the difference between apply and create than what is already mentioned here. Kubectl create can be used imperatively on the command line or declaratively against a manifest file.
Kubectl apply is used declaratively against a manifest file. You can't use kubectl apply imperatively.
One key difference shows up when you already have an object and you want to update something. Even if you created it from a manifest file with kubectl create, you will get an error when you use kubectl create again to update the same resource. But if you use kubectl apply, you will not get an error; it will update the resource without any issues.
So the convention is: kubectl apply is used to create AND update resources, kubectl create is used only to create resources, and kubectl run is used to create a pod with a specific image, namespace, etc., typically for experimentation and testing together with the --dry-run=client option.
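A hedged illustration of that difference (the file and pod names are placeholders):
kubectl create -f deployment.yaml    # first run: resource is created
kubectl create -f deployment.yaml    # second run: Error from server (AlreadyExists)
kubectl apply -f deployment.yaml     # works either way: creates the resource or updates it in place
kubectl run test-pod --image=nginx --dry-run=client -o yaml   # experimentation: print the generated manifest without creating anything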

Is there a way to make kubectl apply restart deployments whose image tag has not changed?

I've got a local deployment system that is mirroring our production system. Both are deployed by calling kubectl apply -f deployments-and-services.yaml
I'm tagging all builds with the current git hash, which means that for clean deploys to GKE all the services have a new docker image tag, so apply will restart them; but locally on minikube the tag often does not change, which means that new code is not run. I used to work around this by calling kubectl delete and then kubectl create when deploying to minikube, but as the number of services I'm deploying has increased, that is starting to stretch the dev cycle too far.
Ideally, I'd like a better way to tell kubectl apply to restart a deployment, rather than just depending on the tag changing.
I'm curious how people have been approaching this problem.
Additionally, I'm building everything with bazel, which means that I have to be pretty explicit about setting up my build commands. I'm thinking maybe I should switch to just deleting/creating the one service I'm working on and leaving the others running.
But in that case, maybe I should just look at telepresence and run the service I'm developing on outside of minikube altogether? What are best practices here?
I'm not entirely sure I understood your question but that may very well be my reading comprehension :)
In any case here's a few thoughts that popped up while reading this (again not sure what you're trying to accomplish)
Option 1: maybe what you're looking for is to scale down and back up, i.e. scale your deployment to, say, 0 and then back up. Given that you're using a configmap and maybe you only want to update that, the command would be kubectl scale --replicas=0 -f foo.yaml and then back to whatever count you had before.
Option 2: if, for example, you want to apply the deployment without killing any pods, you would use --cascade=false (google it).
Option 3: look up the rollout option for managing deployments (not sure if it works on services, though); see the sketch after this answer.
Finally, and this is just me talking: share some more details, like which version of k8s you are using, and maybe provide an actual use-case example to better describe the issue.
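As a hedged example of the rollout option mentioned above (the deployment name is a placeholder; this works on Deployments, DaemonSets and StatefulSets rather than on Services):
kubectl rollout restart deployment/my-app   # force a fresh rollout even though the manifest is unchanged
kubectl rollout status deployment/my-app    # wait until the new pods are ready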
Kubernetes only triggers a deployment when something has changed. If your image pull policy is Always, you can delete your pods to get the new image; if you want Kubernetes to handle the rollout, you can update the yaml file so the pod template contains a constantly changing metadata field (I use seconds since epoch), which will trigger a change. Ideally, you should be tagging your images with unique tags from your CI/CD pipeline, using the commit reference they were built from; this gets around the issue and allows you to take full advantage of the Kubernetes rollback feature.
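A hedged sketch of that constantly-changing-field trick (the deployment name and annotation key are made up): bumping an annotation on the pod template changes the template, so Kubernetes rolls the pods even when the image tag is unchanged.
kubectl patch deployment my-app -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-at\":\"$(date +%s)\"}}}}}"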