My main.tf file looks like below:
module "sql_vms" {
source = "git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git//compute"
rg_name = var.resource_group_name
location = module.resource_group.external_rg_location
vnet_name = var.virtual_network_name
subnet_name = var.sql_subnet_name
app_nsg = var.application_nsg
vm_count = var.count_vm
base_hostname = var.sql_host_basename
sto_acc_suffix = var.storage_account_suffix
vm_size = var.virtual_machine_size
vm_publisher = var.virtual_machine_image_publisher
vm_offer = var.virtual_machine_image_offer
vm_sku = var.virtual_machine_image_sku
vm_img_version = var.virtual_machine_image_version
username = var.username
password = var.password
}
The modules live in the same repo, which is technically not ideal, but for now I want to use the Azure repo that holds a Terraform module and creates multiple VMs from TF modules.
I get an error like the one below:
2020-08-23T02:27:38.1439274Z [command]/usr/local/bin/terraform init -backend-config=storage_account_name=stoaccautomationnonprod -backend-config=container_name=stoacccon01nonprod -backend-config=key=nonprod.tfstate -backend-config=resource_group_name=automation -backend-config=arm_subscription_id=cc800481-b728-4d8f-81be-e80b955d346e -backend-config=arm_tenant_id=*** -backend-config=arm_client_id=*** -backend-config=arm_client_secret=***
2020-08-23T02:27:38.1441494Z Initializing modules...
2020-08-23T02:27:38.1442513Z Downloading git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git for sql_vms...
2020-08-23T02:27:38.1444113Z Error: Failed to download module
2020-08-23T02:27:38.1445408Z Could not download module "sql_vms" (main.tf:1) source code from
2020-08-23T02:27:38.1446189Z "git::https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git":
2020-08-23T02:27:38.1446845Z error downloading
2020-08-23T02:27:38.1447746Z 'https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com/sampleuser/my_code/_git/terraform_modules.git':
2020-08-23T02:27:38.1448669Z /usr/bin/git exited with 128: Cloning into '.terraform/modules/sql_vms'...
2020-08-23T02:27:38.1449408Z fatal: could not read Password for
2020-08-23T02:27:38.1450157Z 'https://iuclk3yjmv7qgglu3igkgxffacc2pzsv7nyhs44wmsjnrvccctaq@dev.azure.com':
2020-08-23T02:27:38.1450684Z terminal prompts disabled
(the same "Failed to download module" error is printed a second time)
2020-08-23T02:27:38.1496541Z ##[error]Terraform command 'init' failed with exit code '1'.: Failed to download module | Failed to download module
2020-08-23T02:27:38.1786437Z ##[section]Finishing: terraform init
I was thinking of using SSH instead of HTTPS with a PAT token, but unfortunately I couldn't figure out how to add the public key on a Microsoft-hosted agent.
Please assist
When using an SSH key to pull the Terraform modules, you need to generate the SSH key yourself and then register the public key as an SSH key in Azure DevOps (User Settings > SSH public keys).
Then upload the private key to the pipeline variable group as a secure file and add a step that installs the SSH key on your agent, such as the "Install SSH key" task in an agent job; a sketch follows below.
Get more details about using SSH to pull the remote Terraform module.
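A minimal sketch of that setup, assuming an RSA key and Azure DevOps's SSH clone scheme (the file paths, key comment, and exact source syntax are illustrative assumptions, not the asker's values):

# Generate a key pair locally (path and empty passphrase are illustrative choices)
ssh-keygen -t rsa -b 4096 -C "pipeline@example.com" -f ~/.ssh/azure_devops_rsa

# Upload ~/.ssh/azure_devops_rsa.pub under Azure DevOps: User Settings > SSH public keys.
# Store ~/.ssh/azure_devops_rsa as a secure file and install it on the agent
# (e.g. with the "Install SSH key" task) before terraform init runs.

# The module source then drops the PAT-in-URL form in favor of SSH, e.g.:
#   source = "git::ssh://git@ssh.dev.azure.com/v3/sampleuser/my_code/terraform_modules//compute"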
Related
We are trying to configure an Azure VM using an Azure DevOps pipeline. We first create the machine using Terraform and then we need to configure it. Right now the pipeline is functional when we use a customized Ubuntu Azure DevOps agent (a VM we set up ourselves in Azure).
We would prefer to use a Microsoft-hosted Ubuntu agent. When we try to run our pipeline using the Microsoft-hosted Ubuntu agent, it fails with the message "winrm or requests is not installed".
We have done a lot of research and made many attempts to install the needed components, but none have been fruitful.
None of the examples and documentation we can find on the internet cover our specific use case: Ansible configuration of Windows VMs in Azure from a Microsoft-hosted Ubuntu agent. Isn't it possible for some reason?
If it is, any pointers in the right direction will be much appreciated!
The error we see in the Azure DevOps pipeline is this:
ansible-playbook -vvvv -i inventory/hosts.cfg main.yml --extra-vars '{"customer_name": "<REMOVED>" }'
ansible-playbook [core 2.12.5]
config file = None
configured module search path = ['/home/vsts/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vsts/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/vsts/.ansible/collections:/usr/share/ansible/collections
executable location = /home/vsts/.local/bin/ansible-playbook
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
No config file found; using defaults
setting up inventory plugins
host_list declined parsing /home/vsts/work/1/s/ansible/inventory/hosts.cfg as it did not pass its verify_file() method
auto declined parsing /home/vsts/work/1/s/ansible/inventory/hosts.cfg as it did not pass its verify_file() method
yaml declined parsing /home/vsts/work/1/s/ansible/inventory/hosts.cfg as it did not pass its verify_file() method
Parsed /home/vsts/work/1/s/ansible/inventory/hosts.cfg inventory source with ini plugin
Loading collection ansible.windows from /home/vsts/.local/lib/python3.8/site-packages/ansible_collections/ansible/windows
Loading collection community.windows from /home/vsts/.local/lib/python3.8/site-packages/ansible_collections/community/windows
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
redirecting (type: modules) ansible.builtin.win_service to ansible.windows.win_service
Loading callback plugin default of type stdout, v2.0 from /home/vsts/.local/lib/python3.8/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: main.yml *************************************************************
Positional arguments: main.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/vsts/work/1/s/ansible/inventory/hosts.cfg',)
extra_vars: ('{"customer_name": "<REMOVED>"}',)
forks: 5
1 plays in main.yml
PLAY [windows:pro] *********************************************************
TASK [Gathering Facts] *********************************************************
task path: /home/vsts/work/1/s/ansible/main.yml:1
redirecting (type: modules) ansible.builtin.setup to ansible.windows.setup
Using module file /home/vsts/.local/lib/python3.8/site-packages/ansible_collections/ansible/windows/plugins/modules/setup.ps1
Pipelining is enabled.
fatal: [51.144.125.149]: FAILED! => {
    "msg": "winrm or requests is not installed: No module named 'winrm'"
}
PLAY RECAP *********************************************************************
51.144.125.149 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
We tried to fix the problem by installing various potentially relevant components in the pipeline just before running the ansible-playbook command, for instance this one:
pip3 install pywinrm
Later, based on input on this SO question we tried this in the pipeline:
python3 -m pip install --ignore-installed pywinrm
find / -name winrm.py
ansible-playbook -vvv -i inventory/hosts.cfg main.yml
The find command finds winrm.py here:
/opt/pipx/venvs/ansible-core/lib/python3.8/site-packages/ansible/plugins/connection/winrm.py
The ansible-playbook configuration we are using is:
ansible-playbook [core 2.12.5]
  config file = None
  configured module search path = ['/home/vsts/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /opt/pipx/venvs/ansible-core/lib/python3.8/site-packages/ansible
  ansible collection location = /home/vsts/.ansible/collections:/usr/share/ansible/collections
  executable location = /opt/pipx_bin/ansible-playbook
  python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
  jinja version = 3.1.2
  libyaml = True
No config file found; using defaults
The error we get is:
task path: /home/vsts/work/1/s/ansible/main.yml:1
redirecting (type: modules) ansible.builtin.setup to ansible.windows.setup
Using module file /opt/pipx/venvs/ansible-core/lib/python3.8/site-packages/ansible_collections/ansible/windows/plugins/modules/setup.ps1
Pipelining is enabled.
fatal: [13.73.148.141]: FAILED! => {
    "msg": "winrm or requests is not installed: No module named 'winrm'"
}
You can try the solution in the Red Hat knowledge base: https://access.redhat.com/solutions/3356681
The last comment there suggests the fix (replace the yum commands with their apt equivalents):
I was getting this error even though python2-winrm version 0.3.0 was already installed via yum:

yum list installed | grep winrm
python2-winrm.noarch    0.3.0-1.el7    @epel

pip install "pywinrm>=0.2.2" only resulted in "Requirement already satisfied".
I ran this to resolve the error:

1) yum autoremove python2-winrm.noarch
2) pip install "pywinrm>=0.2.2"
Then ping: pong worked just fine over https, port=5986:

ram@thinkred1cartoon$ ansible all -i hosts.txt -m win_ping
172.16.96.135 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
Conversely, if you don't want to run command 1, then command 2 won't work for you. In that case, run command 3 instead:

3) pip install --ignore-installed "pywinrm>=0.2.2"
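On an Ubuntu hosted agent, the same sequence would look roughly like this (a sketch per the "replace yum with apt" suggestion; the python3-winrm package name is an assumption and may simply not be installed):

# Remove any distro-packaged copy first, if one exists
sudo apt-get remove -y python3-winrm || true
# Reinstall into the Python environment Ansible actually uses
python3 -m pip install --ignore-installed "pywinrm>=0.2.2"
# Quick sanity check that the module is importable
python3 -c "import winrm; print(winrm.__file__)"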
So the background is this: I have an Xcode project that depends on a Swift package that's in a private repository on GitHub. Of course, this requires a key to access. So far, I've managed to configure CI such that I can ssh into the instance and git clone the required repository for the Swift package. Unfortunately, when running it with xcodebuild as CI does, it doesn't work and I get this message:
static:ios distiller$ xcodebuild -showBuildSettings -workspace ./Project.xcworkspace \
-scheme App\ Prod
Resolve Package Graph
Fetching git@github.com:company-uk/ProjectDependency.git
xcodebuild: error: Could not resolve package dependencies:
Authentication failed because the credentials were rejected
In contrast, git clone will happily fetch this repo as seen here:
static:ios distiller$ git clone git@github.com:company-uk/ProjectDependency.git
Cloning into 'ProjectDependency'...
Warning: Permanently added the RSA host key for IP address '11.22.33.44' to the list of known hosts.
remote: Enumerating objects: 263, done.
remote: Counting objects: 100% (263/263), done.
remote: Compressing objects: 100% (171/171), done.
remote: Total 1335 (delta 165), reused 174 (delta 86), pack-reused 1072
Receiving objects: 100% (1335/1335), 1.11 MiB | 5.67 MiB/s, done.
Resolving deltas: 100% (681/681), done.
For a bit more context, this is running on CircleCI, set up with a Deploy key on GitHub, which has been added to the Job on CI.
Any suggestions about what might be different between the way Xcode tries to fetch dependencies and the way vanilla git does it would be great. Thanks.
For CI pipelines where you cannot sign into GitHub or other repository hosts this is the solution I found that bypasses the restrictions/bugs of Xcode around private Swift packages.
Use HTTPS URLs for the private dependencies, because the SSH config is currently ignored by xcodebuild even though the documentation says otherwise.
Once you can build locally with HTTPS, go to your repository host and create a personal access token (PAT). For GitHub, instructions are found here.
With your CI system add this PAT as a secret environment variable. In the script below it is referred to as GITHUB_PAT.
Then in your CI pipeline before you run xcodebuild make sure you run an appropriately modified version of this bash script:
# Find every file referencing the org's HTTPS URLs and inject the PAT into each
for FILE in $(grep -Ril "https://github.com/[org_name]" .); do
    sed -i '' "s/https:\/\/github.com\/[org_name]/https:\/\/${GITHUB_PAT}@github.com\/[org_name]/g" ${FILE}
done
This script finds all HTTPS references and injects the PAT into them so they can be used without a password.
Don't forget:
Replace [org_name] with your organization name.
Replace ${GITHUB_PAT} with the name of your CI Secret if you named it differently.
Configure the grep command to ignore any paths you don't want modified by the script.
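For illustration, with a hypothetical org and repo, the substitution turns:

# before: https://github.com/acme/PrivatePackage.git
# after:  https://${GITHUB_PAT}@github.com/acme/PrivatePackage.git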
This seems to be a bug in Xcode 11 with SSH. Switching to HTTPS for resolving Swift Packages fixes the issue:
So from this:
E29801192303068A00018344 /* XCRemoteSwiftPackageReference "ProjectDependency" */ = {
isa = XCRemoteSwiftPackageReference;
repositoryURL = "git@github.com:company-uk/ProjectDependency.git";
requirement = {
branch = "debug";
kind = branch;
};
};
to:
E29801192303068A00018344 /* XCRemoteSwiftPackageReference "ProjectDependency" */ = {
isa = XCRemoteSwiftPackageReference;
repositoryURL = "https://github.com/company-uk/ProjectDependency.git";
requirement = {
branch = "debug";
kind = branch;
};
};
Also, now that Xcode 12 is out, you can simply use that, since the bug is fixed there.
In order to get private Swift packages working with GitHub Actions I had to add the following:
I had to add an SSH key to my secrets.
On the xcodebuild step, I had to add the flag -usePackageSupportBuiltinSCM (see the sketch after the snippet below).
Right before executing the xcodebuild step, I had to add the following run script:
- name: Add CI SSH Key
  run: ssh-add - <<< "${{ secrets.YOUR_SECRET_SSH_KEY }}"
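A hypothetical invocation showing where that flag goes (the workspace and scheme names are placeholders):

xcodebuild -resolvePackageDependencies \
  -workspace App.xcworkspace \
  -scheme App \
  -usePackageSupportBuiltinSCM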
You can resolve this issue in a CI environment with Xcode 12 by adding your GitHub account to Accounts within Xcode.
Sign in with your GitHub account name and a personal access token you created on GitHub.
We are using Jenkins with Fastlane tools and when xcodebuild is invoked, it will use the access token to authenticate into the repos using HTTPS.
I had the same issue. The root cause for me: the default GitHub SSH key type is ed25519:

ssh-keygen -t ed25519 -C "your_email@example.com"

But Xcode doesn't support ed25519. Switching to an RSA key works:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
You can see the error under Xcode Preferences > Accounts > GitHub.
I was able to build with a package from a private repo in GitHub Actions with HTTPS URLs by creating a .netrc file using the extractions/netrc@v1 action.
Build:
  runs-on: macos-12
  steps:
    - uses: actions/checkout@v3
    - uses: extractions/netrc@v1
      with:
        machine: github.com
        username: user
        password: ${{ secrets.SWIFT_PACKAGE_MANAGER_PAT }}
    - uses: extractions/netrc@v1
      with:
        machine: api.github.com
        username: user
        password: ${{ secrets.SWIFT_PACKAGE_MANAGER_PAT }}
After this, xcodebuild will use the PAT when accessing the private repo.
I tried to use GITHUB_TOKEN, but it seems that it is restricted to the current repo only. So I created a PAT for my GitHub account and added that to the repo secrets.
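For reference, those two steps effectively write a ~/.netrc like the manual setup below (a sketch, assuming the PAT is exported to the job as an environment variable of the same name):

# Write credentials for both the main and API hosts, then lock down permissions
cat > ~/.netrc <<EOF
machine github.com
login user
password ${SWIFT_PACKAGE_MANAGER_PAT}
machine api.github.com
login user
password ${SWIFT_PACKAGE_MANAGER_PAT}
EOF
chmod 600 ~/.netrc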
I failed to deploy my site, which was developed with blogdown (dev, R 3.6.1) and Hugo (0.57.2), to the Netlify platform.
I have tried to update the URL of my config.toml file from \ to my target web name https*.com\ .
Also, I created a netlify.toml at the root directory.
Neither of them made any difference.
Local development is fine, but the site could not be deployed to Netlify.
failed during stage 'building site': Build script returned non-zero exit code: 255
Related code:

# R: create the site
blogdown::new_site(theme = "gcushen/hugo-academic")

# netlify.toml
[build]
publish = "public"
command = "hugo"

[context.production.environment]
HUGO_VERSION = "0.57.2"
HUGO_ENV = "production"
HUGO_ENABLEGITINFO = "true"

[context.branch-deploy.environment]
HUGO_VERSION = "0.57.2"

# R: check the Hugo version (returns 0.57.2)
blogdown::hugo_version()
This answer is probably too late, but I just had the same issue, and it was solved by the Netlify team.
Deploying through GitHub provides a pre-filled build command, "hugo". This caused the error message.
Go to your Netlify page and the build settings of the failed deploy, remove "hugo" from the build command, and retry the deploy.
This is the first time I've written a bb file, so please give me some help.
I can fetch an HTTP tarball from the external network; after I put it into the local source mirror directory, disable the external network, and run the bb file, it works well. But when I tried to fetch a git source tarball and did everything as before, the bb file failed to fetch the git source tarball from the source mirror once I disabled the external network.
ERROR: Task 587 (/$PATH/******.bb, do_fetch) failed with exit code '1'
NOTE: Tasks Summary: Attempted 402 tasks of which 382 didn't need to be rerun and 1 failed.
The following is my bb file:
SRCBRANCH = "********"
SRCREV = "AUTOINC"
SRC_URI = "git://***************.git;branch=${SRCBRANCH};protocol=https"
LIC_FILES_CHKSUM = "file://LICENSE;beginline=4;endline=16;md5=**********"
SRC_URI[md5sum] = "***************"
SRC_URI[sha256sum] = "***************"
S = "${WORKDIR}/git"
I can guess that, since you use AUTOINC, the cause of your error may be a checksum mismatch, but as you haven't provided the error message from your do_fetch log, I cannot say for sure. You can find the log at a path like:

build/tmp/work/one_of_directories/name_of_your_recipe/version/temp/log.do_fetch
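A quick way to locate it, assuming the default build directory layout (the recipe name below is your placeholder):

find build/tmp/work -name log.do_fetch -path "*name_of_your_recipe*"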
I am using Chef, invoked by Capistrano.
There is a directive to clone a repository using git.
git node['rails']['rails_root'] do
  repository "git@myrepo.com:/myproj.git"
  reference "master"
  action :sync
  user node['rails']['rails_user']
  group node['rails']['rails_group']
end
When it gets to this point, I get:
** [out :: 10.1.1.1] STDERR: Host key verification failed.
So, I need to add a "known_hosts" entry. No problem. But to which user? The core of my problem is that I have no idea which user is executing what commands, and if they are invoking sudo, etc.
I've run keyscan to populate the known_hosts of root, and the user I ssh in as, to no avail.
Note, this git repo is read-protected, and requires ssh key access.
Another way to solve it is the ssh_known_hosts cookbook: https://github.com/opscode-cookbooks/ssh_known_hosts
This worked for me.
You can use an ssh wrapper approach. Look here for details.
Briefly, do the following steps.
First, create a file named wrap-ssh4git.sh in the cookbooks/COOKBOOK_NAME/files/default directory, containing the following:
#!/usr/bin/env bash
# Skip host-key verification so the first clone never hits an interactive prompt
/usr/bin/env ssh -o "StrictHostKeyChecking=no" $1 $2
Then, use the following block for your deployment:
directory "/tmp/private_code/.ssh" do
  owner "ubuntu"
  recursive true
end

cookbook_file "/tmp/private_code/wrap-ssh4git.sh" do
  source "wrap-ssh4git.sh"
  owner "ubuntu"
  mode 00700
end

deploy "private_repo" do
  repo "git@github.com:acctname/private-repo.git"
  user "ubuntu"
  deploy_to "/tmp/private_code"
  action :deploy
  ssh_wrapper "/tmp/private_code/wrap-ssh4git.sh"
end
The git repository will be cloned as user node['rails']['rails_user'] (via https://docs.chef.io/resource_git.html), so I assume that user's known_hosts file is the one you have to modify.
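A minimal sketch of pre-populating it, assuming the rails user is literally named "rails" and using the host from the question (adjust both to your setup):

# Append the repo host's key to that user's known_hosts
sudo -u rails -H sh -c 'ssh-keyscan myrepo.com >> ~/.ssh/known_hosts'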
I resolved this issue as below:

# Look up the Jenkins user's home directory from ohai's /etc/passwd data
_home_dir = nil
node['etc']['passwd'].each do |user, data|
  if user.eql? node['jenkins']['username']
    _home_dir = data['dir']
  end
end

# Write an ssh config for that user that disables strict host-key checking
key_config = "Host *\n\tStrictHostKeyChecking no\n"
file "#{_home_dir}/.ssh/config" do
  owner node['jenkins']['username']
  group node['jenkins']['username']
  mode "0600"
  content key_config
end