I am trying to tag an AWS AMI that was given to me by another team. The AMI shows up under "Private Images". I can't seem to tag it with Terraform, even though the whole environment is built on Terraform. Have you encountered an issue like this? Any tool will help; I was also looking into Packer, however Packer does not seem to tag images that it did not create.
I tried a Python script and a bash script, but they become difficult to manage when you have 6 tags.
For example, in Python I have to write each tag as
Key = "environment"
Value = "dev"
So this becomes difficult. Any suggestion would be appreciated.
With Terraform you can only tag a resource during its creation or modification, so for an existing AMI you did not create with Terraform, you can write Python code to do this.
I can help you if needed.
Please share the requirements in detail, with a screenshot.
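For instance, here is a minimal boto3 sketch (the AMI ID and the tag values below are placeholders) that keeps all six tags in one dictionary, so adding or changing a tag is a one-line edit:

import boto3

# Placeholder AMI ID and tag values; adjust for your account and region.
ami_id = "ami-0123456789abcdef0"
tags = {
    "environment": "dev",
    "team": "platform",
    "owner": "someone@example.com",
    "project": "data-pipeline",
    "cost-center": "1234",
    "managed-by": "tagging-script",
}

ec2 = boto3.client("ec2")
# Apply all tags to the AMI in a single call.
ec2.create_tags(
    Resources=[ami_id],
    Tags=[{"Key": k, "Value": v} for k, v in tags.items()],
)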
Context
I will be relying on a component shipped with the gcloud SDK CLI.
For migration purposes and other reasons, I want to know which version of the cloud-sdk started shipping this component, rather than just installing the latest version.
My issue
In the official changelog (https://cloud.google.com/sdk/docs/release-notes), there is no reference to the component I am looking for.
What I tried
I naively tried running this command:
for gcloud_version in 390.0.0-alpine 391.0.0-alpine 392.0.0-alpine 393.0.0-alpine; do
echo "---> ${gcloud_version}"
docker run --rm -ti google/cloud-sdk:${gcloud_version} gcloud components list
done
Unfortunately, none of the lists show the component I am looking for (not even the latest version of gcloud, 393 at the time of writing).
Discovery
However, when I run gcloud components install MY_UNLISTED_COMPONENT it works ...
Not a very reliable way to find out which version has the component I want.
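The only stopgap I can think of is to point the same loop at the install step (MY_UNLISTED_COMPONENT is a placeholder for the actual component name) and see on which image tag the install starts to succeed:

for gcloud_version in 390.0.0-alpine 391.0.0-alpine 392.0.0-alpine 393.0.0-alpine; do
  echo "---> ${gcloud_version}"
  # --quiet skips the interactive confirmation; the exit status tells us whether the install worked
  docker run --rm google/cloud-sdk:${gcloud_version} \
    gcloud components install MY_UNLISTED_COMPONENT --quiet \
    && echo "available in ${gcloud_version}" \
    || echo "NOT available in ${gcloud_version}"
done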
Do you know if:
this is an issue?
I can report this somewhere?
it is worth doing?
Thanks for your help!
From @DazWilkin
It would be helpful if you included the name of the public, albeit unlisted, component in your question. The Release Notes page includes a "Send Feedback" option, and you may want to provide this feedback there.
It's reasonable to expect it to be documented. I encourage you to send feedback via the release-notes page and to consider filing an issue on Google's public Issue Tracker.
On Azure, for CI/CD scripts in a repository, which is the best practice to use: JSON, PowerShell, or the CLI?
Which of these is the best and most professional way to go?
Thank you.
As for best practices for CI/CD scripts, each of these options has its own strengths and can be used for it.
For example, when you want to write a script that builds VMs, you could pick JSON, because a JSON template shows all of the VM's information more directly than the other kinds of scripts. You can see an example at https://learn.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-build-deploy-azure-pipelines?tabs=java-script .
As for building AKS, you could use PowerShell to create the script, as it needs fewer properties to be defined compared with the other kinds of scripts. You can see some examples in the following document: https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-powershell
To conclude, each way of writing the script works for CI/CD, and you can choose the one you are most used to, or pick according to the advantages of each.
I just started working with Kubeflow and I ran into a problem. I need my pipeline to be able to automatically get the name of the experiment it belongs to. I tried to use the kfp package but it seems to me that there is no way to get the experiment name of the current run. Do you have any suggestions? Thank you very much!
A run is tied to an experiment, not the other way around. When you run a pipeline, you specify the experiment as an argument to kfp.Client.run_pipeline. When you do not specify an experiment, the run is automatically tied to the default experiment on AI Platform.
So you shouldn't need to look up the experiment name, since you specify the experiment yourself when running a pipeline.
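As a rough sketch (assuming the v1 kfp.Client API; the host, experiment and pipeline names below are placeholders), the experiment is chosen by your own code, so its name is already in hand when the run starts:

import kfp

# Placeholder endpoint; point this at your Kubeflow Pipelines host.
client = kfp.Client(host="http://localhost:8080")

experiment_name = "my-experiment"
# Creates the experiment (use client.get_experiment(experiment_name=...) to look up an existing one).
experiment = client.create_experiment(name=experiment_name)

run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name="my-run",
    pipeline_package_path="pipeline.yaml",  # a compiled pipeline; placeholder path
)

# The experiment name never has to be recovered from inside the run, because it was chosen here.
print(f"Started run {run.id} in experiment {experiment_name}")

If a component inside the pipeline really does need the name, the simplest approach is to pass it in explicitly as a pipeline parameter when starting the run.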
I would like to validate my CloudFormation templates before running them. I know about the aws cloudformation validate-template ... CLI command, but it ignores incorrect property names. I don't know what the point of that CLI command is if it won't catch this kind of mistake.
I want something that will catch that kind of mistake before running the templates. An IDE or external service that does this would be fine.
We had quite a similar issue with erroneous CloudFormation templates and created a command-line tool (I'm a co-author) that validates them. Besides the standard AWS validation, it also has many custom checks that were essential for us:
https://github.com/Appliscale/perun
I believe it doesn't support property-name validation yet, but any feature requests (or, even better, pull requests) are welcome. We will do our best to address them as soon as we can.
After installing Perun, you can validate a template with the validate command:
~ $ perun validate <PATH TO THE TEMPLATE>
Moreover, it also allows managing CloudFormation stacks (creation, updates, etc.) and monitoring their status updates.
The cfn-lint tool was built for this exact purpose. It is actively maintained by the AWS team and it has a couple of IDE integrations.
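It can be installed with pip, for example, and then run directly against a template (the path below is a placeholder):
~ $ pip install cfn-lint
~ $ cfn-lint <PATH TO THE TEMPLATE>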
I had the same issue. There is no way to validate the property names, but you can reduce mistakes by using the Atom IDE with the cloudformation plugin; it helps me create resource properties, so I make fewer typos.
Is it possible to have the collaboration and workspace sharing features in a self-hosted environment built from https://github.com/ajaxorg/cloud9?
It is possible with the newer version from https://github.com/c9/core; just pass the --collab flag to the server.js script.
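For example (the port, listen address, credentials, and workspace path below are just placeholders for the usual c9/core startup options):
node server.js -p 8080 -l 0.0.0.0 -a username:password -w /path/to/workspace --collab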
The --collab flag activated the "Share" and "Collaborate" buttons for me, but the actual feature doesn't work. When adding a user, I get
Error adding workspace member: Cannot POST /api/collab/0/members/add?silent=false&access_token=token
Any solution to this? I feel like passing the --collab flag is more of a hack than anything, and not intended for self-hosted installations, only for Cloud9's own servers.