ansible / boto / any other suggestions - updating tags on an ASG

Would anyone have suggestions, pointers, or examples for updating an existing ASG's tags?
I need to update tags on ASGs rather frequently. So far, playing with ec2_asg, I've been unable to figure out whether it's possible. My next best approach so far has been having Ansible run the AWS CLI commands directly (along the lines of the sketch below). Before I start down the boto path and write my own script, I thought I'd ask the community.
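For reference, the CLI route mentioned above would be something like this; the ASG name and tag are illustrative:

aws autoscaling create-or-update-tags --tags \
  ResourceId=my-asg,ResourceType=auto-scaling-group,Key=Environment,Value=staging,PropagateAtLaunch=true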
Any suggestion, pointer or example is always appreciated. Thank you.

You can use the ec2_tag module; just apply the tags you want using the ASG's ARN. From the module's documentation:
Creates, removes and lists tags from any EC2 resource. The resource is referenced by its resource id (e.g. an instance being i-XXXXXXX). It is designed to be used with complex args (tags), see the examples. This module has a dependency on python-boto.
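A hedged sketch of that suggestion; the region, tags, and the asg_arn variable are placeholders (note that the documentation above speaks of EC2 resource ids, so whether an ARN is accepted may depend on your module/boto versions):

- name: Update tags on the Auto Scaling group
  ec2_tag:
    region: us-east-1
    resource: "{{ asg_arn }}"   # placeholder for the ASG's ARN, per the suggestion above
    state: present
    tags:
      Environment: staging
      Owner: team-infra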

K8s: Editing vs Patching vs Updating

In the kubectl Cheat Sheet (https://kubernetes.io/docs/reference/kubectl/cheatsheet/), there are three ways to modify resources: you can update, patch, or edit.
What are the actual differences between them, and when should I use each?
I would like to add a few things to night-gold's answer. I would say there are no better or worse ways of modifying your resources; everything depends on the particular situation and your needs.
It's worth emphasizing the main difference between editing and patching: the first is an interactive method, while the second is a batch method that, unlike the first, can easily be used in scripts. Imagine you need to make a change in dozens or even a few hundred different Kubernetes resources/objects; it is much easier to write a simple script that patches all of them in an automated way than to open each one for editing. A short example:
kubectl patch resource-type resource-name --type json -p '[{"op": "remove", "path": "/spec/someSection/someKey"}]'
Although at first it may look unnecessarily complicated and less convenient than interactively editing and manually removing a specific line from a specific section, it is in fact a very quick and effective method that is easily implemented in scripts and can save you a lot of work and time when you deal with many objects.
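For instance, a small loop along these lines (the JSON path is illustrative) patches every deployment in the current namespace in one pass:

for d in $(kubectl get deployments -o name); do
  kubectl patch "$d" --type json -p '[{"op": "remove", "path": "/spec/someSection/someKey"}]'
done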
As to the apply command, you can read in the documentation:
apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. This is the recommended way of managing Kubernetes applications on production.
It also gives you the possibility of modifying your running configuration by re-applying it from an updated yaml manifest, e.g. one pulled from a git repository.
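For example (the repo layout is illustrative):

git pull
kubectl apply -f manifests/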
If by update you mean rollout (formerly known as rolling-update), then as you can see in the documentation it has quite a different function. It is mostly used for updating deployments; you don't use it for making changes to arbitrary types of resources.
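A few typical rollout-style commands, with illustrative deployment/container/image names:

kubectl set image deployment/my-app my-container=my-image:2.0
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app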
I don't think I have the answer to this but I hope this will help.
All three methods do the same thing; they modify a resource's configuration, but the command and the way of doing it are not the same.
As described in the documentation:
Editing is when you open the yaml configuration file that is in the Kubernetes cluster and edit it (with vim or another editor) to modify your cluster directly. I would not recommend this outside of testing purposes; re-applying the configuration from the original yaml file will discard those modifications.
Patching seems much the same to me, but without opening the file, and targeting specific parts of the resource.
Updating, in the documentation, seems to cover all the other methods of updating a resource without using patch or edit. Some of these can be used for debugging/testing, for example forcing a resource replacement or updating an image version; others are used to update resources with new configurations. A couple of commands from that family are sketched below.
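Two illustrative examples from that "update" family (names are placeholders):

kubectl replace --force -f ./my-app.yaml      # force a resource replacement from a manifest
kubectl label pod my-pod new-label=my-value   # update a resource's labels in place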
From experience, I have only used editing and some of the update commands for testing; most of the time I re-apply the configurations.

Validate Kubernetes Object Creation

I would like to implement functionality (or, even better, reuse existing libraries/APIs!) that would intercept a kubectl command to create an object and perform some pre-creation validation tasks on it before allowing the kubectl command to proceed.
e.g.
check various values in the yaml against an external DB, for example
check that a label conforms to the internal naming convention
and so on...
Is there an accepted pattern or existing tools etc?
Any guidance is appreciated.
The way to do this is by creating a ValidatingAdmissionWebhook. It's not for the faint of heart, and even a brief example would be overkill as an SO answer. A few pointers to start:
https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook
https://banzaicloud.com/blog/k8s-admission-webhooks/
https://container-solutions.com/a-gentle-intro-to-validation-admission-webhooks-in-kubernetes/
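To give a feel for what's involved, a minimal sketch of the registration object; the webhook name, service, and rules are hypothetical, and the webhook server itself, its TLS setup, and the caBundle are left out:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: naming-convention-check          # hypothetical
webhooks:
  - name: naming.example.com             # hypothetical
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: validation            # hypothetical service running your checks
        name: naming-webhook
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None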
I hope this helps :-)
I usually append --dry-run to the kubectl command to check and validate the YAML config.
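For example (in newer kubectl versions the server-side variant is --dry-run=server):

kubectl apply -f my-object.yaml --dry-run=server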

Puppet Class: define a variable which lists all files in a directory

I'm defining my own Puppet class, and I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory. I was hoping for a syntax similar to the one below, but haven't found a way to make it work.
$dirs = Dir.entries('C:\\Program Files\\Java\\')
Does anyone know how to do this in a Puppet file?
Thanks!
I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory.
Information about the current state of the machine to be configured is conveyed to the catalog compiler via facts. These are available to your classes as top-scope variables, and Puppet (or Facter, actually) provides ways to define your own custom facts (see the Facter 3 manual; similar mechanisms apply to earlier versions). Do not overlook the rest of the Facter documentation, which has more relevant information on this topic.
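For instance, a custom fact along these lines would expose the directory listing from the question (the fact name and file placement are illustrative; custom facts are Ruby files shipped in a module's lib/facter directory):

# modules/java/lib/facter/java_dirs.rb
Facter.add(:java_dirs) do
  confine kernel: 'windows'
  setcode do
    dir = 'C:\Program Files\Java'
    # return the entries minus the . and .. pseudo-entries, or [] if absent
    File.directory?(dir) ? Dir.entries(dir) - ['.', '..'] : []
  end
end

The class can then read it as $facts['java_dirs'].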
On the other hand, information about the machine providing catalog-building services (the master, in a master/agent setup) can be obtained by writing and calling a custom function. This is rarely what you actually want, but it's worth mentioning because you might one day want a custom function for some other purpose.
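A minimal sketch of such a master-side function using the modern (Puppet 4.x) function API; the module and function names are illustrative:

# modules/mymodule/lib/puppet/functions/mymodule/dir_entries.rb
Puppet::Functions.create_function(:'mymodule::dir_entries') do
  dispatch :dir_entries do
    param 'String', :path
    return_type 'Array[String]'
  end

  # runs on the master during catalog compilation, not on the agent
  def dir_entries(path)
    Dir.entries(path) - ['.', '..']
  end
end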

How to replace a shared file when deploying code with Capistrano?

Update: TL;DR there seems to be no built-in way to achieve this, so a custom task is an easy solution.
Capistrano provides facilities to share files and directories across all releases. This is convenient and even provides some safety for files that should not be changed casually (or must remain the same across releases), e.g. a database configuration file.
But when it comes to replacing or just updating one of these shared files, I end up doing it manually, directly on the target machine. I would like to improve on that, for instance by asking Capistrano to overwrite some or all shared files when deploying. A kind of --force flag with some granularity.
I am not aware of any such facility, and my search has so far come up empty. Any pointers?
Thinking about it
One of the reasons why this facility does not exist (assuming I simply didn't find it!) is that it may be harder than it looks. For example, let's assume we have a shared database configuration file, and we exclude it from version control for security reasons (a common practice). The current release relies on version 1 of the DB configuration. The next release requires version 2 of the DB configuration. If the deployment goes well, everything's good. It gets harder when rolling back after some error with the new release (e.g. a regression), as version 1 must then be available again.
Such automation would be cool and convenient, but dangerous as well. Yet I have practical use cases at hand.
I created a template method to do this. For example, I could have a task like this:
task :create_database_yml do
  on roles(:app, :db) do
    within(shared_path) do
      template "local/path/to/database.yml.erb",
               "config/database.yml",
               :mode => "600"
    end
  end
end
And then I have a database.yml.erb template that uses things like fetch(:database_password) to fill in appropriate values. You can use the ask method in Capistrano to prompt for these values so they are never committed.
The implementation of template can be very simple: you just need to read the file, pass it through ERB, and then use Capistrano's upload! to place the results on the server.
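In case it's useful, a rough sketch of such a helper, assuming it is called from within an on block so that upload! and execute are available (the helper name and chmod step mirror the task above):

require "erb"
require "stringio"

def template(from, to, mode: "600")
  # render the local ERB template with the current binding,
  # so fetch(:database_password) etc. are visible to the template
  rendered = ERB.new(File.read(from)).result(binding)
  # place the rendered result on the server and restrict its permissions
  upload! StringIO.new(rendered), to
  execute :chmod, mode, to
end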
My version is a little more complicated than yours probably needs to be, but in case you are curious:
https://github.com/mattbrictson/capistrano-mb/blob/7600440ecd3331945d03e059368b75849857f1fb/lib/capistrano/mb/dsl.rb#L104
One approach is to use a system configuration tool like Chef or Puppet to deploy the configuration files distinctly from Capistrano.
Another approach is to create a custom task to do this: https://coderwall.com/p/wgs6gw/copy-local-files-to-remote-server-using-capistrano-3
I personally don't change on-server configs often enough or on enough servers yet to have tried to automate it. Crafting an scp command which copies the desired config file to all of the required servers has sufficed in the past.

How to avoid naming collisions in resource names?

I just had an error in my recipe because I had a resource with the same name in another recipe: an execute resource named 'download-package' in both recipes...
How can I avoid naming collisions in Chef recipes?
As far as I know there's no magical way to do this, and a report handler won't be able to report resource duplication (but I may be wrong here; anyone with better knowledge is welcome to confirm or refute this statement).
The best you can do is run tests with isolated Vagrant boxes and fix the warnings when necessary...
I think you may already be able to catch this with chefspec/berkshelf, as the converge will raise this kind of warning; it involves mocking the run list with chefspec (with a role or something like that), as sketched below.
Here is a great blog about how to test cookbooks https://micgo.net/
Chefspec doc is here: https://github.com/sethvargo/chefspec
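For illustration, a minimal ChefSpec sketch along those lines; the cookbook and recipe names are hypothetical, and duplicated resource names such as 'download-package' would surface as warnings during the converge:

require 'chefspec'

describe 'mycookbook::default' do
  let(:chef_run) do
    # converge both recipes so duplicated resource names show up together
    ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '20.04')
                        .converge('mycookbook::first', 'mycookbook::second')
  end

  it 'converges successfully' do
    expect { chef_run }.to_not raise_error
  end
end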
Hope it helps.