How to perform a file watch in Kubernetes

We need to perform a file watch on a container and trigger a script once the file is available.
Has anyone come across this kind of requirement? Please assist.
Thank you

We use the https://hub.docker.com/r/weaveworks/watch/ container as a sidecar to watch for changes in a file and then trigger a reload of a process when a config file changes.
The README says it "Watches for changes in a directory tree...", so it should also raise an event when a new file is created, and you should be able to use it for your purposes.
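For illustration, here is a minimal Pod sketch of that sidecar pattern, using a plain busybox polling loop instead of the weaveworks/watch image so the trigger logic is visible. The image names, the /data/input.csv path and the "run your script" step are all placeholders to adapt.

apiVersion: v1
kind: Pod
metadata:
  name: filewatch-demo
spec:
  volumes:
    - name: shared-data          # shared between the app and the watcher
      emptyDir: {}
  containers:
    - name: app
      image: my-app:latest       # placeholder: the container that produces /data/input.csv
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: watcher
      image: busybox:1.36
      command: ["sh", "-c"]
      args:
        - |
          # Poll until the expected file appears, then run the trigger action once.
          until [ -f /data/input.csv ]; do
            echo "waiting for /data/input.csv"
            sleep 5
          done
          echo "file detected, triggering script"
          # Replace this echo with your actual script (e.g. mounted from a ConfigMap).
          # Keep the sidecar alive afterwards so the Pod does not keep restarting it.
          tail -f /dev/null
      volumeMounts:
        - name: shared-data
          mountPath: /data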

Related

How do you undo an operation in Argocd?

Let's say you use Argo CD to deploy Helm charts to Kubernetes. Things work great, but you have a Kubernetes resource finalizer on a resource. Somebody deletes the resource and now Argo just waits in a 'Progressing' or 'Deleting' state; it can't actually perform the delete because of the finalizer. This is a good protection mechanism for very important resources, such as AWS IAM resources.
But I am hoping somebody can help me figure this out: is there any way to stop the operation given to Argo and instead just let it sync again as normal? Maybe somebody made a mistake and the finalizer worked as intended. Instead of clearing the finalizer and dealing with the consequences, can the consequences be prevented by undoing Argo CD's operation?
Thank you
Either you need to delete the corresponding Argo CD application or you need to roll back the deployment. If you delete the application, it will remove all the resources created by the application and stop the operation. If you roll back to a previous version, it will undo the changes made in the current deployment and bring all your resources back to their previous versions.
You can use the Argo CD CLI command argocd app rollback <APPNAME> <ID> to roll back to the particular version you want (the ID comes from argocd app history).
You can also roll back from the Argo CD UI. If your finalizer is still present, you need to remove it manually and then re-apply the resource definitions.
Please check this document.
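For reference, a minimal CLI sketch of that rollback flow; the application name, history ID, and resource kind/name below are placeholders to replace with your own:

# 1. List the application's deployment history to find the ID you want to return to.
argocd app history my-app

# 2. Roll the application back to that history ID.
argocd app rollback my-app 7

# 3. If a stuck finalizer is what blocked the delete, remove it manually, for example:
kubectl patch <resource-kind> <resource-name> --type=merge -p '{"metadata":{"finalizers":null}}'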

GitHub Actions: auto-PR on some files update?

I'm very new to GitHub Actions/CI/CD, and I want to know whether it is possible to automate the following scenario:
I have a local script that uses some APIs to download files onto my local machine. My current situation is that I have to run the script every day to check whether the content of these files has been updated. If some of those files have been updated, I need to add those changes to a new branch and push it to a repository as a PR.
My attempt so far: my idea is that it's possible to compare the hashes of the downloaded files to know whether any of them have been updated. The next step would be to turn this into an event that triggers some action?
If it's possible, could you share some resources/tutorials about how to do it?
I tested something similar on GitHub to understand how CI/CD with GitHub Actions works.
The script is based on an SQLite database which is updated automatically each time (automatic git push), and it uses GitHub Secrets to store encrypted tokens/passwords.
You can find my scheduled workflow at the following link: https://github.com/noweh/project-marvel-memories/blob/master/.github/workflows/run-schedule.yml.
You can find more information directly in the GitHub documentation.
Here for GitHub Actions events that trigger workflows: https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows.
And here for GitHub encrypted secrets: https://docs.github.com/en/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-an-environment
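To make the scenario concrete, here is a minimal workflow sketch (not the author's exact setup): it runs on a daily schedule, executes the download script, and opens a PR only if the files changed. The script name, branch name, secret name, and the use of the peter-evans/create-pull-request action are assumptions to adapt to your repository.

name: refresh-downloaded-files

on:
  schedule:
    - cron: '0 6 * * *'      # every day at 06:00 UTC
  workflow_dispatch: {}      # also allow manual runs

jobs:
  refresh:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4

      - name: Download the latest files
        run: python download_files.py    # placeholder for your local script
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}    # API credentials stored as an encrypted secret

      - name: Open a PR if anything changed
        uses: peter-evans/create-pull-request@v6
        with:
          commit-message: 'chore: refresh downloaded files'
          branch: auto/refresh-files
          title: 'Refresh downloaded files'
          body: 'Automated update from the scheduled workflow.'

The create-pull-request step is a no-op when the working tree is clean, so a separate hash comparison isn't strictly needed: a PR is only opened when the downloaded files actually differ.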

Trigger Jenkins job on change of directory content

Below is the problem:
There is a directory, let's call it user/mydir.
Is it possible to trigger a Jenkins job on any change to this directory (e.g. a file is copied into it, deleted from it, or created in it)?
Any feedback will be really helpful.
You can use the FSTrigger Plugin to do that; its folder-content trigger polls the directory and starts the job when files inside it are added, modified, or deleted.

Capistrano & syncing static files to remote location

As part of a Capistrano deployment I want to sync my CSS, JS and images to a remote location (Amazon S3). However, I'm worried that if the Capistrano deployment fails, this will leave me with updated CSS, JS and images while the main application code is still on the previous release.
I'm wondering if there is a way with Capistrano to only trigger a task once the deployment is marked as complete and the current folder is pointing at the newly deployed release folder? This would at least allow me to only update the static files once I know the main source has been updated.
Of course, I would still have the problem of what happens if the sync with S3 fails. But I think this approach has the better balance in terms of points of failure.
Capistrano has a bunch of hooks you can use during the deploy.
http://capistranorb.com/documentation/getting-started/flow/
You probably want to use the 'deploy:finished' hook for your use case, for example:
after 'deploy:finished', 'deploy:sync_assets_to_S3'
Then create a cap task to perform your file uploads.
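A minimal sketch of such a task, assuming Capistrano 3, that the aws CLI is installed on the server, and placeholder role, asset path and bucket names:

# lib/capistrano/tasks/sync_assets.rake (location is just a convention)
namespace :deploy do
  desc 'Upload static assets to S3 once the new release is live'
  task :sync_assets_to_S3 do
    on roles(:web) do
      within current_path do
        # Bucket and asset paths are placeholders; --delete removes stale remote files.
        execute :aws, 's3', 'sync', 'public/assets', 's3://my-bucket/assets', '--delete'
      end
    end
  end
end

# Hook it up as shown above, typically in config/deploy.rb:
after 'deploy:finished', 'deploy:sync_assets_to_S3'

Because the task runs on 'deploy:finished', it only fires after the current symlink has been switched to the new release, which addresses the ordering concern in the question.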

TeamCity, how to get names of the files edited

I am using TeamCity and I am new to it. I have added a build configuration to TeamCity and created one VCS root attached to it.
However, my project has a special requirement: detect a particular file that was changed in the VCS root location and use that file in a build step. I am sure this can be done in TeamCity, but I am not able to figure out how.
Any help? Thanks.
To get the names of the files changed, this is what I did (thanks to Sam Jones):
I used the system.teamcity.build.changedFiles.file variable as follows.
Add a command line build step.
Select Run as Custom Script.
Add the command copy "%system.teamcity.build.changedFiles.file%" changelog.txt in the script box.
You will get the changes in the changelog.txt file, in the format specified on this link.
NOTE: teamcity.build.changedFiles.file does not work. You need to use system.teamcity.build.changedFiles.file.
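If you only need to react to one particular file, the same changed-files list can be checked in a follow-up script step. A small sketch, assuming a Windows agent; the dir/myfile.xml path and process_myfile.bat are placeholders:

findstr /C:"dir/myfile.xml" "%system.teamcity.build.changedFiles.file%" && (
  echo dir/myfile.xml was changed in this build
  call process_myfile.bat
)

findstr exits with a non-zero code when the path is not in the list, so the block after && only runs when that file was part of the change.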
It sounds like you want a VCS Trigger that specifies VCS Trigger Rules, so that a build configuration will run when someone makes a change to a particular file. The documentation has some nice examples of how to do this. If you're trying to trigger a build on one particular file, try this:
+:foo/bar.txt
With an include rule like this, all other files are excluded from the trigger and only bar.txt in the foo directory will cause a build. Paths are relative to the root of the repository (do not include a leading slash). If someone modifies foo/bar.txt, the build configuration will be triggered to run.
VCS Trigger Rules also support pattern matching and all sorts of other options. Check out the documentation.