Terraform with Azure - Resource action symbols don't display correctly in the logs - azure-devops

When running a Terraform task in Azure DevOps, the resulting logs don't display the resource action symbols correctly.
2020-03-16T22:19:21.4807821Z An execution plan has been generated and is shown below.
2020-03-16T22:19:21.4808173Z Resource actions are indicated with the following symbols:
2020-03-16T22:19:21.4808433Z [32m+[0m create
2020-03-16T22:19:21.4808648Z [33m~[0m update in-place
2020-03-16T22:19:21.4808823Z [0m
2020-03-16T22:19:21.4809040Z Terraform will perform the following actions:
I'm wondering if it's possible to get the correct symbols instead of the [32m+, etc.

To fix those weird characters, just use the -no-color option on your plan and apply steps!
For reference, those weird values are ANSI color codes for the terminal - see: https://github.com/shiena/ansicolor/blob/master/README.md
No-Color option: https://www.terraform.io/docs/commands/plan.html#no-color
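For example, with the CLI directly (the same flag goes in the Terraform task's command arguments; the plan file name is illustrative):
terraform plan -no-color -out=tfplan
terraform apply -no-color tfplan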

You're seeing ANSI escape codes; setting your Git Bash terminal to support xterm-256color is one way to render these (usually invisible) codes.
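For illustration, those bracketed values are ANSI escape sequences; in Bash, for example:
printf '\033[32m+\033[0m create\n'
prints a green + followed by plain text - the raw form of the [32m+[0m create line in the log above.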

Related

Access agent hostname for a build variable

I've got release pipelines defined that have worked. I've got a config transform that writes an API URL to a config file (currently with a hardcoded API URL).
What I'd like to do is have the config be rewritten based on the agent it's being deployed on.
E.g. if the machine being deployed to is TEST-1, I'd like to write https://TEST-1.somedomain.com/api into the config using that transform step.
The .somedomain.com/api can be static.
I've tried modifying the pipeline variable's value to be https://${{Environment.Name}}.somedomain.com/api, but it just replaces the API_URL in the config with that literal string (does not populate machine name in that variable).
Being that variables are the source of value that is being written to configs during the transform, I'm struggling to see another way to do this.
Some gotchas:
I'm using non-YAML (classic) pipeline definitions (I know I've seen people put logic in variable definitions within YAML pipelines).
I can't just use localhost, as the configuration is read by a JavaScript-rich app, which would have the JS trying to connect to localhost instead of the server.
I'm interested in any ways I could solve this problem
${{Environment.Name}} is not valid syntax for either YAML or classic pipelines.
In classic pipelines it would be $(Environment.Name).
In YAML, $(Environment.Name) or ${{ variables['Environment.Name'] }} would work.
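For example, a minimal sketch in a YAML pipeline (this assumes Environment.Name resolves in your deployment job; the variable name API_URL is taken from the question):
variables:
  API_URL: https://$(Environment.Name).somedomain.com/api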

Is there a way in Terraform Enterprise to read the payload from VCS?

I have configured a webhook between GitHub and Terraform Enterprise correctly, so each time I push a commit, the Terraform module gets executed. What I want to achieve is to use part of the branch name where the push was made and pass it as a variable to the Terraform module.
I have read that the value of a variable can be HCL code, but I am unable to find the correct object to access the payload (or at least the branch name), so at this moment I think it is not possible to get that value directly from the workspace configuration.
If you find a workaround for this, it may also work for me.
At this point the only idea I have is to call the Terraform webhook using an API call.
Thanks in advance
OK, after some trial and error I found out that it is not possible to get any of this information in the Terraform module if you are using VCS mode. So, in order to get the branch, I have these options:
Use several workspaces
You can configure a workspace for each branch, and create a variable that selects that branch in each workspace. The problem is that you will be repeating yourself with this option.
Use Terraform CLI and a GitHub action
I used this fine tutorial from HashiCorp for creating a GitHub Action that uses Terraform Cloud. It gets 99% of the job done. For passing a variable, be aware that there are two methods: using a file or using an environment variable (check the information on the HashiCorp site here). So using:
terraform apply -var="branch=value"
won't work. In my case I used the tfvars approach, so in my GitHub Action I put this snippet:
- name: Setup Terraform variables
  id: vars
  run: |-
    cat > terraform.auto.tfvars <<EOF
    branch = "${GITHUB_REF#refs/*/}"
    EOF
Having defined a variable called branch within Terraform, I was able to read and work with this value.
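For completeness, a minimal sketch of the matching declaration on the Terraform side (the type is an assumption):
variable "branch" {
  type = string
}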

Error in Google Cloud Shell Commands while working on the lab (Securing Google Cloud with CFT Scorecard)

I am working on a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First, I have to run the following two commands to set environment variables:
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command given above, I don't know what to replace with my own credentials; maybe that is the reason I am getting an error.
Now I have to enable the "cloudasset.googleapis.com" gcloud service. For this, they gave the following command:
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is given in the screenshot attached herewith:
Error in the service enabling command
The next step is to clone the policy library. The given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command, I received this:
Error on running the directory command
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where to put my own credentials; for example, in place of the project ID I wrote my own project ID.
Also, I don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches on how to solve it.
Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
Manually set the project ID into that variable. This is far from ideal because Qwiklabs creates a new temporary project on every lab, so this will only work while you are still on that project. The project ID can be seen in both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
Avoid using the --project argument. According to the documentation, this argument is optional, and if it is omitted the command will use the default project from the configuration settings. You can get the current project by using this:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue the following command:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly mentioned using --project.
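Putting the pieces together, a minimal sketch (assuming Cloud Shell is already pointed at the lab project):
export GOOGLE_PROJECT="$(gcloud config get-value project)"
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
gcloud services enable cloudasset.googleapis.com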
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml doesn't seem to be in the policy-library/samples directory. There seems to be a trace that it was once there, but it isn't anymore; they should probably update the lab.
About your credentials confusion: you don't have to use your own project ID, just the one given in your lab. If I recall properly, all the needed data should be on the left side of the lab. Still, you shouldn't need to authenticate in a normal situation, as you are already logged in to your temporary project if you are accessing it from the Cloud Shell, which is where you should be doing all this.
Adding this for later versions:
In Cloud Shell you can set a temporary variable for the current project ID with:
PROJECT_ID="$(gcloud config get-value project)"
and then use it like:
--project ${PROJECT_ID}
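For example (a sketch, reusing the lab's bucket-creation step from above):
PROJECT_ID="$(gcloud config get-value project)"
gsutil mb -l us-central1 -p ${PROJECT_ID} gs://cai-${PROJECT_ID}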

Creating Kubernetes Endpoint in VSTS generates error

When setting up a new Kubernetes endpoint and clicking "Verify Connection", the error message "The Kubconfig does not contain user field. Please check the kubeconfig." is always displayed.
I've tried multiple ways of outputting the config file, to no avail. I've also copied and pasted many sample config files from the web, and all end up with the same issue. Has anyone been successful in creating a new endpoint?
This is followed up in TsuyoshiUshio/KubernetesTask issue 35.
I tried to reproduce this; however, I can't.
I'm not sure, but I'd guess it might be a mismatch between the cluster version and the kubectl/kubeconfig version you get via the download task.
A workaround might be like this:
Run kubectl version on your local machine and check the current server/client versions.
Specify the same version as the server in the download task (by default it is 1.5.2).
Look at the log of your failing release pipeline to see which kubectl command was executed, then do the same thing on your local machine, adapted to your local PC's environment.
The point is: before going to VSTS, download kubectl yourself.
Then put the kubeconfig in the default location (~/.kube/config) or set the KUBECONFIG environment variable to its path.
Then execute kubectl get nodes and make sure it works.
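A minimal local verification sketch (the config path shown is the default one):
export KUBECONFIG=$HOME/.kube/config
kubectl version
kubectl get nodes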
My kubeconfig is in a different format from yours. If you use AKS, use the az aks install-cli command and the az aks get-credentials command.
Please refer https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough .
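A sketch of those AKS commands (the resource group and cluster names are placeholders):
az aks install-cli
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes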
If it works locally, the config file should also work in the VSTS task environment (otherwise this task or VSTS has a bug).
I had the same problem on VSTS.
Here is my workaround to get a Service Connection working (in my case to GCloud):
Switched Authentication to "Service Account"
Ran the two commands described by the info icon next to the Token and Certificate fields: "Token to authenticate against Kubernetes. Use the 'kubectl get serviceaccounts -o yaml' and 'kubectl get secret -o yaml' commands to get the token."
kubectl get secret -o yaml > kubectl-secret.yaml
Searched inside the file kubectl-secret.yaml for the values ca.crt and token
Entered those values into the required fields in VSTS
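For reference, a sketch of pulling those values out (the secret name is a placeholder, and note that values inside the secret YAML are base64-encoded):
kubectl get serviceaccounts -o yaml
kubectl get secret my-sa-token-secret -o yaml > kubectl-secret.yaml
kubectl get secret my-sa-token-secret -o jsonpath='{.data.token}' | base64 -d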
The generated config I was using had a duplicate line; removing it corrected the issue for me:
users:
- name: cluster_stuff_here
- name: cluster_stuff_here

What does the "One of the deployment parameters has an empty key" mean in VSTS ARM release

My ARM template resource group deployment fails in VSTS.
I get an error without any specific reference to the parameter that has an issue: "One of the deployment parameters has an empty key. Please see https://aka.ms/arm-deploy/#parameter-file for details."
The referenced URL contains general information, with one comment asking the same question, but no answer to it.
The person asking alluded that it may have something to do with the version of the deployment step (2.*) no longer using PowerShell. I went through the template back and forth comparing parameters in Beyond Compare, and nothing sticks out...
Does anyone know what does this error mean?
I had the same issue and found out that some parameters had spaces in their values.
So you should quote them, e.g. -adminUsername "$(vmuser)".
This works for me:
Check that your parameter keys and values do not have spaces in between.
If your value requires a space, wrap it in quotes ("").
Check this link.
Examples:
direct value: -param1 "Value with Space"
value from pipeline variables: -param1 "$(valueFromVariables)"
It means you've got a parameter key in your deployment template without a name, for example "-" instead of "-parametername", or "- parametername" (notice the space).
It can also happen if you manage to paste an 'em-dash' (e.g. from a web browser) instead of a standard dash.
We had the same as matendie; a space between the dash and the parameter name:
- pricingTier "standard"
^ note the space
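With the space removed, the override parses correctly:
-pricingTier "standard"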
So, I'm not sure what the issue was; I gave up on trying to identify the problem and deleted the release definition. Recreating it from scratch using the same template works fine now...
Maybe the definition somehow got corrupted.
Not sure, but the new one is not having this issue.
Thanks
In my case the problem was with the template parameters override. I needed to put the parameter value in quotes - "DEV" in the screenshot below.
Ran into this the other day. The release pipeline used to be working, and it suddenly started failing continuously with this error:
Error text:
##[error] One of the deployment parameters has an empty key. Please see https://aka.ms/resource-manager-parameter-files for details.
##[warning] Validation errors were found in the Azure Resource Manager template. This can potentially cause template deployment to fail. Task failed while creating or updating the template deployment.. Please follow https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-syntax
Starting Deployment.
Deployment name is TemplateDeployment-20220504-******-****
There were errors in your deployment. Error code: InvalidDeploymentParameterKey.
##[error] One of the deployment parameters has an empty key. Please see https://aka.ms/resource-manager-parameter-files for details.
##[error] Check out the troubleshooting guide to see if your issue is addressed: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops#troubleshooting
##[error] Task failed while creating or updating the template deployment.
Change that caused the error
I had changed the build pipeline so that build numbers could now contain spaces: it changed from my-build-number to my build number. I was still using the template parameter overrides this way: -buildNumber $(Build.BuildNumber).
This would expand to -buildNumber my build number, which breaks the command-line processing of the ARM template deployment release task.
Solution
I used quotes for my build number variable: -buildNumber "$(Build.BuildNumber)".
Now this expands to -buildNumber "my build number", and the Azure Resource Manager (ARM) template deployment release task is happy:
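As a sketch, the same fix in a YAML pipeline lands in the task's overrideParameters input (the connection, subscription, resource group, and template names are placeholders):
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    azureResourceManagerConnection: 'my-arm-connection'  # placeholder
    subscriptionId: '$(subscriptionId)'                  # placeholder
    resourceGroupName: 'my-rg'                           # placeholder
    location: 'West Europe'
    csmFile: 'azuredeploy.json'                          # placeholder
    overrideParameters: '-buildNumber "$(Build.BuildNumber)"'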