We are deploying our application to OpenShift, and in the deployment YAML we specify the image pull secret and the image location.
The image is pulled from an Artifactory instance located in a different geography, and we sometimes observe connection issues during deployment because that region is far away.
Is there a way in an OpenShift deployment to detect that the image cannot be pulled from one Artifactory and fall back to an Artifactory in a different region?
I know that we can update the variable in the OpenShift YAML, but I am looking for a way to do it conditionally.
What is the best way to achieve this?
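For context, a minimal sketch of the kind of spec in question (the app name, secret name, and registry host below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: artifactory-pull-secret   # pull secret for the remote registry
      containers:
        - name: my-app
          # single, hard-coded registry host; the question is how to fall back to another one
          image: artifactory.eu.example.com/docker-local/my-app:1.2.3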
I am currently using a PVC to manage data in GitLab. I am trying to migrate the data from GitLab to Gitea while installing Gitea. What should I do? If Gitea points at the same PVC, will user information and projects be preserved?
I would also like to know the internal data layout of Gitea. Is there a reference diagram for it, like the one GitLab provides?
I am trying to create a CodePipeline in AWS. I am able to build my code and push the image to ECR. I then want to update the image tag in the deployment file in my Git repo, so that a new deployment is created with the new image version.
Also, if I keep the same image, how do I apply the deployment file to Kubernetes in the deploy stage of AWS CodePipeline?
If you are trying to deploy different versions of the app in different environments, you can use Kustomize. You can use CodePipeline variables to update the values, as sketched below.
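For illustration, a minimal Kustomize setup of this kind (file, image, and registry names are placeholders):

# kustomization.yaml
resources:
  - deployment.yaml
images:
  - name: my-app        # image name as it appears in deployment.yaml
    newName: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app
    newTag: "1.0.0"

In the deploy stage, the pipeline can rewrite the tag and apply the overlay:

$ kustomize edit set image my-app=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:$IMAGE_TAG
$ kubectl apply -k .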
I'm currently developing my CI/CD pipeline via GitHub Actions.
My Kubernetes deployments are managed by Helm and run on GKE, and my images are stored in GCP.
I've successfully managed to build and deploy a new image via GitHub Actions, and now I would
like one of the pods to fetch the latest version after the image has been pushed to GCP.
As I understand it, the usual flow is to bump the Helm chart version after creating the new image and run helm upgrade against the cluster (am I right?), but for now I would like to skip the Helm
versioning part and just force the pod to get the new image.
Until now, to make this work, after creating the new image I simply deleted the pod; because the Deployment still exists, the pod was recreated. But my question is:
Should I do the same from my CI pipeline (delete the pod), or is there another way to do this?
Use kubectl rollout
If you are using the latest tag for the image and imagePullPolicy is set to Always, you can use kubectl rollout restart to fetch the latest built image.
However, the latest image tag is not recommended for production deployments, because you lose control over exactly which version is running.
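A minimal sketch of that approach (the deployment name is a placeholder):

$ kubectl rollout restart deployment/my-app   # recreate pods; with imagePullPolicy: Always they re-pull :latest
$ kubectl rollout status deployment/my-app    # wait for the rollout to finish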
Update image tag in values.yaml file
If you have a specific reason to avoid a chart version bump, you can update only the values.yaml file and run helm upgrade with the new values.yaml that carries the new image tag. In this case, you have to use specific image tags, not latest.
If you must always deploy the newest build, you can use the image's sha256 digest as the reference in the values.yaml file instead of a mutable tag.
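A sketch of that flow from CI (release and chart names are placeholders, and image.tag is assumed to be the values key your chart uses):

$ helm upgrade my-release ./my-chart --set image.tag="$GITHUB_SHA"

Since only the values change, the chart version itself stays untouched.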
Our current CI deployment phase works like this:
Build the containers.
Tag the images as "latest" and <commit hash>.
Push images to repository.
Invoke rolling update on appropriate RC(s).
This has been working great for RC-based deployments, but now that the Deployment object is becoming more stable, we want to take advantage of this abstraction over our current deployment schemes and development phases.
What I'm having trouble with is finding a sane way to automate the update of a Deployment in the CI workflow. What I've been experimenting with is splitting up the git repos and doing something like:
[App Build] Build the containers.
[App Build] Tag the images as "latest" and <commit hash>.
[App Build] Push images to repository.
[App Build] Invoke build of the app's Deployment repo, passing through the current commit hash.
[Deployment Build] Interpolate manifest file tokens (currently just the passed commit hash, e.g. image: app-%%COMMIT_HASH%%).
[Deployment Build] Apply the updated manifest to the appropriate Deployment resource(s); a sketch of these two steps follows the list.
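A sketch of those two steps (file names are placeholders):

$ sed "s/%%COMMIT_HASH%%/${COMMIT_HASH}/g" deployment.template.yaml \
    | kubectl apply -f -   # interpolate the token, then apply the manifest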
Surely though there's a better way to handle this. It would be great if the Deployment monitored for hash changes of the image's "latest" tag...maybe it already does? I haven't had success with this. Any thoughts or insights on how to better handle the deployment of Deployment would be appreciated :)
The Deployment only monitors for pod template (.spec.template) changes. If the image name didn't change, the Deployment won't do the update. You can trigger the rolling update (with Deployments) by changing the pod template, for example by labeling it with the commit hash. Also, you'll need to set .spec.template.spec.containers[].imagePullPolicy to Always (it's set to Always by default if the :latest tag is specified, and cannot be updated afterwards); otherwise the cached image will be reused.
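Two minimal sketches of triggering the rollout that way (deployment, container, and registry names are placeholders):

# Option A: point the container at the new immutable tag; the pod template
# change triggers a rolling update.
$ kubectl set image deployment/my-app app=registry.example.com/my-app:${COMMIT_HASH}

# Option B: keep :latest but stamp the pod template with the commit hash,
# so the template differs and a rollout is triggered.
$ kubectl patch deployment my-app -p \
    '{"spec":{"template":{"metadata":{"labels":{"commit":"abc1234"}}}}}'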
We've been practising what we call GitOps for a while now.
What we have is a reconciliation operator, which connects a cluster to a configuration repository and makes sure that whatever Kubernetes resources (including CRDs) it finds in that repository are applied to the cluster. It allows for ad-hoc deployments, but any ad-hoc change to something that is defined in git will get undone in the next reconciliation cycle.
The operator is also able to watch any image registry for new tags, and update the image attributes of Deployment, DaemonSet and StatefulSet objects. It makes the change in git first, then applies it to the cluster.
So what you need to do in CI is this:
Build the containers.
Tag the images as <commit_hash>.
Push images to repository.
The agent will take care of the rest for you, as long as you've connected it to the right config repo where the app's Deployment object can be found.
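A sketch of those three CI steps (registry and app names are placeholders):

$ COMMIT_HASH=$(git rev-parse --short HEAD)   # tag images with the commit hash
$ docker build -t registry.example.com/my-app:${COMMIT_HASH} .
$ docker push registry.example.com/my-app:${COMMIT_HASH}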
For a high-level overview, see:
Google Cloud Platform and Kubernetes
Deploy Applications & Manage Releases
Disclaimer: I am a Kubernetes contributor and Weaveworks employee. We build open-source and commercial tools that help people to get to production with Kubernetes sooner.
We'd like to have separate test and prod projects on the Google Cloud Platform, but we want to reuse the same Docker images in both environments. Is it possible for the Kubernetes cluster running in the test project to use images pushed to the prod project? If so, how?
Looking at your question, I believe by account you mean project.
The command for pulling an image from the registry is:
$ gcloud docker pull gcr.io/your-project-id/example-image
This means that as long as your account is a member of the project the image belongs to, you can pull the image from that project into any other project your account is a member of.
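If the puller is a cluster rather than a user, one common approach is to grant the cluster's service account read access to the registry's underlying storage bucket (the project IDs and service-account email below are placeholders):

$ gsutil iam ch \
    serviceAccount:test-cluster@test-project.iam.gserviceaccount.com:objectViewer \
    gs://artifacts.prod-project-id.appspot.com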
Yes, it's possible, since container images are stored on a per-project basis; any cluster that has been granted read access to the owning project's registry can pull them.