Okteto: ignore a certain YAML file - GitHub

In my GitHub repo I have 2 yaml files:
k8s/deploy-all-secrets.yaml
k8s/deploy-edge.yaml
I use cloud.okteto.com to deploy these files, but I don't want Okteto to deploy the deploy-all-secrets.yaml file. Is there any way I can exclude this file from Okteto?
I tried using a .stignore file, but it had no effect.

Another option you have is to create an okteto-pipeline.yaml file at the root of your repo. This allows you to control how Okteto deploys your pipeline. For the scenario you describe, it would look like this:
deploy:
- kubectl apply -f k8s/deploy-edge.yaml
This deploys only deploy-edge.yaml, so deploy-all-secrets.yaml is never applied. More information on how to customize your pipeline is available here.
Note: The .stignore file is only used by the okteto up command, during the file synchronization phase. More information on that is available here.

According to the documentation at https://okteto.com/docs/cloud/okteto-pipeline/, you can place any k8s manifest file that needs to be executed with kubectl apply in this folder. So by simply removing deploy-all-secrets.yaml from the k8s folder, it won't be executed by Okteto.
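For example, a minimal sketch of that approach, assuming you move the secrets manifest into a hypothetical k8s-manual/ folder that Okteto does not deploy from:
mkdir -p k8s-manual
git mv k8s/deploy-all-secrets.yaml k8s-manual/deploy-all-secrets.yaml
# Apply it yourself when needed, outside of the Okteto pipeline
kubectl apply -f k8s-manual/deploy-all-secrets.yaml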

Related

Safest and best way to retrieve the current configuration file in yaml for a single or bunch of resources in a kubernetes cluster

I applied a file xyz.yml some time ago in EKS (Amazon Elastic Kubernetes Service) to deploy a StatefulSet pod from my local machine. This file is versioned in GitHub. However, a few manual applies were made with kubectl for this file after that, so it looks like the source file I have right now in GitHub might be out of sync with the cluster.
Is there a safe and easy way to retrieve this file in YAML directly from the cluster using kubectl, so that I can use it from now on in my GitHub source code? I do not want to make changes in my GitHub source code and then apply them to the cluster, as the file might be out of sync.
If I could somehow retrieve the file in YAML directly from the Kubernetes cluster, that would really help solve the problem. I tried --dry-run and kubectl diff, but they don't seem to help.
I am new to Kubernetes, so I do not want to experiment with commands directly on the cluster.
Any help here would be greatly appreciated.
Cheers,
Ashley
You can try with edit:
kubectl -n <namespace name> edit [deployment, pod, svc] <name>
You can get the current YAML of individual resources with:
kubectl get <resource> -o yaml
But you can't get all the resources that you created with this file at once because Kubernetes doesn't keep track of the manifest files in which the resource definitions were supplied.
So you would need to check which resources were created by your file and get them individually as above. Or, if all the resources in this file share common labels, you could fetch them more easily by those labels.
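For example, if everything created by xyz.yml shares a common label (the label app=xyz and the resource types below are assumptions), you could dump it all in one go:
kubectl get statefulset,service,configmap -n <namespace> -l app=xyz -o yaml > xyz-from-cluster.yaml
Note that the output will include cluster-managed fields (status, resourceVersion, uid, and so on) that you would normally strip before committing the file back to GitHub.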

No YAML Files in K8s Deployment

TLDR: My understanding from learning all about K8s is that you need lots and lots of yaml files, however, I just deployed an app to a K8s clusters with 0 yaml files and it succeeded. Why is that? Does google cloud or K8s have defaults it uses when the app does not have any yaml file settings?
Longer:
I have a dockerized spring app that I deployed to a google cloud cluster I created via the UI.
It had 0 YAML files in there, so my expectation was that the kubectl deploy would fail; however, it succeeded and my stateless app is up there chugging away.
How does that work?
Well, GCP created it for you in the background. I assume you pushed your Docker image (or your CI did) to the cluster and from there you just did a few clicks, right? You can do the same on an OpenShift environment, but in the background a YAML file gets generated. If you edit the pod in the UI you will see that YAML file.
As @Volodymyr Bilyachat said above, you can create a deployment the imperative way or the declarative way (YAML). I would suggest always using the declarative way.
You can see the deployment YAML file you created from the UI by running:
kubectl get deployment <deployment_name> -o yaml
kubectl get deployment <deployment_name> -o yaml > name.yaml # Writes the YAML to the name.yaml file
You can run your containers/pods using plain commands.
kubectl run podname --image=name
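If you later want to turn those imperative commands into a checked-in manifest, one hedged option is to let kubectl generate the YAML via a client-side dry run (the deployment name and image below are placeholders):
kubectl create deployment myapp --image=myname/myapp:latest --dry-run=client -o yaml > deployment.yaml
kubectl apply -f deployment.yaml
On kubectl versions older than 1.18 the flag is spelled --dry-run instead of --dry-run=client.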
As you said, 0 yaml files. But the main idea of those files is that you push them to source control and test them across different environments using CI/CD.
Another benefit of YAML files is that you can share configuration, so someone else can create the infrastructure without having to write anything. Here is an example of how you can run Elasticsearch with one command:
kubectl apply -f https://download.elastic.co/downloads/eck/1.2.0/all-in-one.yaml

Using kubectl roll outs to update my images, but need to also keep my deployment object in version control

In my CI/CD, I am:
generating a new image with a unique tag, foo:dev-1339, and pushing it to my image repo (ECR).
Then I am using a rolling update to update my deployment.
kubectl rolling-update frontend --image=foo:dev-1339
But I have a conflict here.
What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say harden a health check or add a parameter?
Then when I re-apply my deployment object as a whole, it will not be in sync with the current replica set: the tag will get reverted and I will lose the image update that exists in the cluster.
How do I avoid this race condition?
A typical solution here is to use a templating layer like Helm or Kustomize.
In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like
image: myname/myapp:{{ .Values.tag | default "latest" }}
and then deploy the chart with
helm install myapp --name myapp --set tag=20191211.01
Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
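For example, a minimal sketch of that file-based variant, assuming a hypothetical settings.yaml checked into the repo and the same Helm 2-style flags used above:
# settings.yaml
tag: "20191211.01"
Then deploy with:
helm install myapp --name myapp -f settings.yaml
With Helm 3 the equivalent would be helm install myapp ./myapp -f settings.yaml.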
In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set
images:
- name: myname/myapp
newTag: 20191211.01
If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
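A minimal sketch of what that kustomization.yaml could look like, assuming the base Deployment manifest sits next to it as deployment.yaml:
# kustomization.yaml
resources:
- deployment.yaml
images:
- name: myname/myapp
  newTag: "20191211.01"
You would apply it with kubectl apply -k . (or render it first with kustomize build . to inspect the output).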
Imperative vs Declarative workflow
There are two fundamental ways of using kubectl to apply changes to your cluster. The imperative way, where you run commands directly, is good for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
For a production environment, it is recommended to use a declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CI/CD job when you commit or merge. kubectl apply -f <file> or, more interestingly, kubectl apply -k <directory> are examples of this workflow (a sketch of both follows below). See Declarative Management using Config Files or, more interestingly, Declarative Management using Kustomize.
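As a hedged illustration using the names from the question (the container name frontend and the overlays/production directory are assumptions), the same image bump looks like this in each workflow:
# Imperative: mutate the live Deployment directly
kubectl set image deployment/frontend frontend=foo:dev-1339
# Declarative: edit the checked-in manifests, then apply them
kubectl apply -f deployment.yaml
kubectl apply -k overlays/production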
CI/CD for building the image and deploying
Building an artifact from source code, including a container image, may be done in a CI/CD pipeline. Managing application config and applying it to the Kubernetes cluster may also be done in a CI/CD pipeline. You may want to automate it all, e.g. for doing Continuous Deployment, and combine both pipelines into a single long pipeline. This is a more complicated setup and there is no single answer on how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository to trigger the configuration pipeline.
Unfortunately there is no built-in solution, either from the command line or through the YAML files.
As per the doc here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger Deployment's rollout.
An approach could be to update the Deployment configuration YAML (or JSON) under version control in the source repo and apply the changed Deployment configuration from version control to the cluster.
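Concretely, a hedged sketch of that flow, assuming the checked-in deployment.yaml contains an image: foo:<tag> line that CI rewrites on every build:
# Update the tag in the version-controlled manifest, commit it, then apply it
sed -i 's|image: foo:.*|image: foo:dev-1339|' deployment.yaml
git commit -am "Deploy foo:dev-1339"
kubectl apply -f deployment.yaml
Because the tag change now lives in the same file as any other Deployment edits (health checks, parameters), re-applying the file can no longer revert the image.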

How to set a persistent Kubernetes environment variable

I want to keep the version of all pods (apps) in environment variables inside the namespace, so I can use them in the YAML files to create deployments, or even in CI/CD, which makes DevOps easier.
Right now the developer must set the version in the YAML file.
If you want to use environment variables in a manifest (YAML) file, you can simply use Kubernetes Secrets and ConfigMaps, where you can store the values and use them during the deployment.
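A minimal sketch of that idea, assuming a hypothetical ConfigMap named app-versions that holds the version and a container that reads it as an environment variable:
kubectl create configmap app-versions --from-literal=APP_VERSION=1.4.2 -n <namespace>
# Then, in the Deployment's pod spec:
env:
  - name: APP_VERSION
    valueFrom:
      configMapKeyRef:
        name: app-versions
        key: APP_VERSION
Note that this only exposes the version inside the running container; Kubernetes will not substitute it into the image: field of the manifest, so for that you still need templating or a CI-side replacement as described in the other answer.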
That's about the design principle, and that's the ideal approach to apply to your pipeline.
You don't have to store the exact version of all your Pods inside the manifest file; just use the latest or an environment-like tag (e.g. staging or production).
And in your pipeline, you could patch the deployment with the corresponding tag based on your build.
One example of this approach:
kubectl patch deployment $YOUR_DEPLOYMENT_NAME -p "{\"metadata\":{\"labels\":{\"image\":\"$YOUR_BUILD_STAGE-$PIPELINE_ID\"}},\"spec\":{\"revisionHistoryLimit\":2,\"template\":{\"spec\":{\"containers\":[{\"name\":\"$YOUR_CONTAINER_NAME\",\"image\":\"$DOCKER_IMAGE_NAME:$YOUR_BUILD_STAGE-$PIPELINE_ID\"}]}}}}"
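If all you need to change is the image (not the label or revisionHistoryLimit in the patch above), a shorter alternative using the same placeholder variables would be:
kubectl set image deployment/$YOUR_DEPLOYMENT_NAME $YOUR_CONTAINER_NAME=$DOCKER_IMAGE_NAME:$YOUR_BUILD_STAGE-$PIPELINE_ID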

Using kubectl to tear down from yaml

I created a .yaml file following this tutorial. You deploy the web service with kubectl apply -f shopfront-service.yaml. So far so good. The author says nothing though about how to tear everything down.
With Terraform or CloudFormation you use the same .yaml file to remove all resources. I would think that K8s would also support cleaning up using the same .yaml file, but I can't find any way to do this.
Is there a way to delete resources with the same .yaml file used to create the deployment?
kubectl delete -f shopfront-service.yaml
see kubectl delete docs