Is there any way to load Terraform modules from a private git repo? I've been planning to implement it with an Azure DevOps pipeline, so I think using an SSH key is not an option.
Any ideas/suggestions on how I could achieve this goal?
Thanks in advance
You can load Terraform modules from any location that is reachable from where you execute the Terraform command. I see you use Azure Repos and want to execute Terraform in the pipeline, so you can use relative paths to load the modules. For example, say your folder structure looks like this:
terraform/
  main.tf
  modules/
    vm/
    network/
You create the vm module in the vm folder and the network module in the network folder. To load the modules from the main.tf file in the terraform folder, add code like this:
module "vm" {
  source = "./modules/vm"
  ...
}

module "network" {
  source = "./modules/network"
  ...
}
Terraform will load the modules from the path you set as the source (note that local paths must begin with ./ or ../ so Terraform does not mistake them for registry addresses). If you have any more questions, please let me know. I'm glad to help.
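As for the original question about a private git repo: Terraform can also pull a module straight from a git URL over HTTPS, using `//` to select a subdirectory and `ref` to pin a tag or branch. A hedged sketch with placeholder names (in a pipeline you would typically satisfy the HTTPS auth with an access token rather than an SSH key):

```hcl
# Sketch only: the organization, project, repo, and tag below are placeholders.
module "vm" {
  source = "git::https://dev.azure.com/my-org/my-project/_git/terraform-modules//vm?ref=v1.0.0"
}
```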
I've been using Terraform for some time, but I'm new to Terraform Cloud. I have a piece of code that, when run locally, creates a .tf file under a folder that I specify, but when I run it with the Terraform CLI on Terraform Cloud this doesn't happen. I'll show it to you so it's clearer for everyone.
resource "genesyscloud_tf_export" "export" {
  directory             = "../Folder/"
  resource_types        = []
  include_state_file    = false
  export_as_hcl         = true
  log_permission_errors = true
}
So basically, when I launch this code with terraform apply locally, it creates a .tf file with everything I need. Where? It goes up one folder and stores the file under the folder "Folder".
But when I execute the same code on Terraform Cloud, obviously this doesn't happen. Does any of you have a workaround for this kind of trouble? How can I manage to store this file, for example, in a GitHub repo when executing GitHub Actions? Thanks beforehand
The Terraform Cloud remote execution environment has an ephemeral filesystem that is discarded after a run is complete. Any files you instruct Terraform to create there during the run will therefore be lost after the run is complete.
If you want to make use of this information after the run is complete then you will need to arrange to either store it somewhere else (using additional resources that will write the data to somewhere like Amazon S3) or export the relevant information as root module output values so you can access it via Terraform Cloud's API or UI.
I'm not familiar with genesyscloud_tf_export, but from its documentation it sounds like it will create either one or two files in the given directory:
genesyscloud.tf or genesyscloud.tf.json, depending on whether you set export_as_hcl. (You did, so I assume it'll generate genesyscloud.tf.)
terraform.tfstate if you set include_state_file. (You didn't, so I assume that file isn't important in your case.)
Based on that, I think you could use the hashicorp/local provider's local_file data source to read the generated file into memory once the MyPureCloud/genesyscloud provider has created it, like this:
resource "genesyscloud_tf_export" "export" {
  directory             = "../Folder"
  resource_types        = []
  include_state_file    = false
  export_as_hcl         = true
  log_permission_errors = true
}

data "local_file" "export_config" {
  filename = "${genesyscloud_tf_export.export.directory}/genesyscloud.tf"
}
You can then refer to data.local_file.export_config.content to obtain the content of the file elsewhere in your module and declare that it should be written into some other location that will persist after your run is complete.
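The simplest persistent location is a root module output value, which Terraform Cloud retains and exposes through its UI and API after the run. A minimal sketch:

```hcl
# Expose the exported file's content so it survives the ephemeral run
# and can be retrieved via the Terraform Cloud UI/API.
output "export_config" {
  value = data.local_file.export_config.content
}
```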
This genesyscloud_tf_export resource type seems unusual in that it modifies data on local disk and so its result presumably can't survive from one run to the next in Terraform Cloud. There might therefore be some problems on the next run if Terraform thinks that genesyscloud_tf_export.export.directory still exists but the files on disk don't, but hopefully the developers of this provider have accounted for that somehow in the provider logic.
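Alternatively, a sketch of the "store it somewhere else" option using the hashicorp/aws provider's aws_s3_object resource (the bucket name is a placeholder, and the bucket itself is assumed to already exist):

```hcl
# Copy the exported config into an S3 object that outlives the ephemeral run.
resource "aws_s3_object" "export_copy" {
  bucket  = "my-export-bucket"   # placeholder; assumed to exist already
  key     = "genesyscloud/genesyscloud.tf"
  content = data.local_file.export_config.content
}
```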
I'm performing an API call in my Jenkinsfile that requires specifying a path to file 'A'. Assuming file A is located in the same repo, I am not sure how to refer to file A when running the Jenkinsfile.
I feel like this has been done before, but I can't find any resource. Any help is appreciated.
You don't say whether you are using a scripted or declarative Jenkinsfile, as the details differ; the principle, however, is the same. Basically, to do anything with a file you need to be within a node clause: essentially, the controller opens a session on one of the agents and performs actions there. You need to check out your repo on that node:
The scripted Jenkinsfile would look something like (assuming you are not bothered about which node you are running on):
node {
    checkout scm // "scm" equates to the configuration that the job was run with
    // the whole repo will now be available
}
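Once the checkout has run, repo files are addressable relative to the workspace root, so you can pass their content to your API call with the readFile step. A sketch (the path "path/to/A" is a placeholder for wherever the file lives in your repo):

```groovy
node {
    checkout scm
    // Paths are resolved relative to the node's workspace after checkout.
    def fileA = readFile "path/to/A"   // placeholder path to your file
    echo "Read ${fileA.length()} characters from file A"
}
```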
I have configured a webhook between GitHub and Terraform Enterprise correctly, so each time I push a commit, the Terraform module gets executed. What I want to achieve is to take part of the name of the branch the push was made to and pass it as a variable to the Terraform module.
I have read that the value of a variable can be HCL code, but I am unable to find the correct object to access the payload (or at least the branch name), so at this moment I think it is not possible to get that value directly from the workspace configuration.
If you find a workaround for this, it may also work for me.
At this point the only idea I have is to call the Terraform webhook using an API call.
Thanks in advance
OK, after some trial and error I found out that it is not possible to get any of this information into the Terraform module if you are using the VCS mode. So, in order to get the branch, I have these options:
Use several workspaces
You can configure a workspace for each branch and create a variable that selects the branch in each workspace. The problem is that you will be repeating yourself with this option.
Use Terraform CLI and a GitHub Action
I used this fine tutorial from HashiCorp for creating a GitHub Action that uses Terraform Cloud. It gets 99% of the job done. For passing a variable, be aware that there are two methods: using a file or using an environment variable (check that information on the HashiCorp site here). So using:
terraform apply -var="branch=value"
won't work. In my case I used the tfvars approach, so in my GitHub Action I put this snippet:
- name: Setup Terraform variables
  id: vars
  run: |-
    cat > terraform.auto.tfvars <<EOF
    branch = "${GITHUB_REF#refs/*/}"
    EOF
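The `${GITHUB_REF#refs/*/}` expression is POSIX parameter expansion: it removes the shortest leading match of the pattern `refs/*/`, leaving just the branch or tag name. A quick illustration with a hypothetical ref value (in a real run, GitHub Actions sets GITHUB_REF for you):

```shell
# Remove the shortest prefix matching "refs/*/" from the ref.
GITHUB_REF="refs/heads/feature/login"   # example value for illustration
echo "${GITHUB_REF#refs/*/}"            # prints: feature/login
```

Note that because the shortest match is removed, branch names containing slashes (like feature/login) survive intact.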
Having defined a variable called branch within Terraform, I was able to get and work with this value.
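The matching declaration on the Terraform side is just a plain input variable, which the generated terraform.auto.tfvars file populates automatically. A minimal sketch:

```hcl
# Populated by terraform.auto.tfvars, written by the GitHub Action step above.
variable "branch" {
  type = string
}
```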
I'm using pulumi, but I have a problem.
For example, if I were using Terraform, I would do this:
cd terraform/component/${componentName}
terraform workspace new dev
terraform workspace select dev
terraform init -input=true -reconfigure -backend-config "bucket=${bucket_name}" -backend-config "profile=${profile_name}"
terraform apply -var-file="dev.tfvars"
In that case, how can I specify which script file to run in Pulumi?
Even if I run pulumi up, index.ts will be invoked.
I want to specify the path of the script file to run.
The folder structure is like this:
src/
  components/
    lambda/
      main.ts
    ec2/
      main.ts
In this case, I want to run something like this:
pulumi up src/components/ec2/main.ts
pulumi up src/components/lambda/main.ts
I don't think you can do something like this with Pulumi; it looks for the project's entry point (main.ts) in the local folder. What you can do is create a config parameter in your code and use it to decide which code path Pulumi will take (I'm using Python, but the idea is the same):
import pulumi

config = pulumi.Config()

if config.get("parameter_name") == "path_one":
    call_function_from_file_1()
else:
    call_function_from_file_2()
How do I tell DSC to get a resource/module from our internal code repository (not from a private Gallery feed) first?
Do I just use a basic Script resource and bring the files down (somehow) into $PSModulePath, then import them?
Update
There's a cmdlet called Get-DscResource that lists the resources available on a system (i.e., those that reside in the correct path(s)) and provides information that can be used with Import-DscResource, a 'dynamic keyword' placed within a Configuration block in a DSC script to declare its dependencies.
As for getting the resources/modules down to the target system, I'm not sure yet.
If you are using a DSC pull server, then you just need to make sure that your custom module(s) are on that server. I usually put them in Program Files\WindowsPowerShell\Modules.
In the configuration you can then specify that you want to import your custom module and proceed to use the custom DSC resource:
Configuration myconfig {
    Import-DscResource -ModuleName customModule

    Node somenode {
        customresource somename {
        }
    }
}
If you don't have a pull server and you want to push configurations, then you have to make sure that your custom modules are on all target systems. You can use the DSC File resource to copy the modules, or just use a PowerShell script or any other means to copy them, and then use DSC for your custom configurations.
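A hedged sketch of that push-mode option, using the built-in File resource to copy a module from a file share onto the target before anything uses it (the share path and module name are placeholders):

```powershell
Configuration DeployCustomModule {
    Node somenode {
        # Copy the custom module into the PowerShell module path on the target.
        File CustomModules {
            Ensure          = "Present"
            Type            = "Directory"
            Recurse         = $true
            SourcePath      = "\\fileshare\DscModules\customModule"   # placeholder share
            DestinationPath = "C:\Program Files\WindowsPowerShell\Modules\customModule"
        }
    }
}
```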