In an Ansible role I generate the user's SSH key. After that I want to print it to the screen and pause so the user can copy and paste it somewhere else. So far I have something like this:
- name: Generate SSH keys for vagrant user
  user: name=vagrant generate_ssh_key=yes ssh_key_bits=2048

- name: Show SSH public key
  command: /bin/cat $home_directory/.ssh/id_rsa.pub

- name: Wait for user to copy SSH public key
  pause: prompt="Please add the SSH public key above to your GitHub account"
The 'Show SSH public key' task completes but doesn't show the output.
TASK: [Show SSH public key] ***************************************************
changed: [default]
There may be a better way of going about this. I don't really like the fact that it will always show a 'changed' status. I did find this pull request for Ansible - https://github.com/ansible/ansible/pull/2673 - but I'm not sure whether I can use it without writing my own module.
I'm not sure about the syntax of your specific commands (e.g., vagrant, etc), but in general...
Just register Ansible's (not-normally-shown) JSON output to a variable, then display each variable's stdout_lines attribute:
- name: Generate SSH keys for vagrant user
  user: name=vagrant generate_ssh_key=yes ssh_key_bits=2048
  register: vagrant

- debug: var=vagrant.stdout_lines

- name: Show SSH public key
  command: /bin/cat $home_directory/.ssh/id_rsa.pub
  register: cat

- debug: var=cat.stdout_lines

- name: Wait for user to copy SSH public key
  pause: prompt="Please add the SSH public key above to your GitHub account"
  register: pause

- debug: var=pause.stdout_lines
If you pass the -v flag to the ansible-playbook command, then ansible will show the output on your terminal.
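For example (the playbook name here is just an assumption):

ansible-playbook -v playbook.yml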
For your use case, you may want to try using the fetch module to copy the public key from the server to your local machine. That way, it will only show a "changed" status when the file changes.
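A minimal sketch of that approach, assuming the key lives under /home/vagrant and that a local keys/ directory is an acceptable destination:

- name: Fetch the vagrant user's public key to the control machine
  fetch:
    src: /home/vagrant/.ssh/id_rsa.pub
    dest: keys/          # hypothetical local directory
    flat: yes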
Print the public key and avoid the 'changed' status by adding changed_when: False to the cat task:
- name: Generate SSH keys for vagrant user
  user: name=vagrant generate_ssh_key=yes ssh_key_bits=2048

- name: Check SSH public key
  command: /bin/cat $home_directory/.ssh/id_rsa.pub
  register: cat
  changed_when: False

- name: Print SSH public key
  debug: var=cat.stdout

- name: Wait for user to copy SSH public key
  pause: prompt="Please add the SSH public key above to your GitHub account"
Related
I have a secret that is used as an env var inside another env var, as follows:
- name: "PWD"
valueFrom:
secretKeyRef:
name: "credentials"
key: "password"
- name: HOST
value: "xyz.mongodb.net"
- name: MONGODB_URI
value: "mongodb+srv://user:$(PWD)#$(HOST)/db_name?"
When I exec into the container and run the env command to see the values of the env vars, I see:
mongodb+srv://user:password123
@xyz.mongodb.net/db_name?
The container logs show an authentication failure error.
Is this expected to work in Kubernetes? The docs talk about dependent env vars but don't give an example that uses secrets. I did not find a clear explanation after extensive searching; I only found one article doing something similar.
Some points to note:
The secret is a sealed secret.
This is the final manifest's contents, but all of this is templated using Helm.
The value is used inside a Spring Boot application.
Is the newline after 123 expected?
If this kind of evaluation of an env var from a secret inside another env var is possible, then what am I doing wrong here?
The issue was with the command used to encode the secret: echo "password" | base64. echo adds a newline character at the end of the string. Using echo -n "password" | base64 fixed the secret.
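A quick way to see the difference (the value is just a placeholder):

$ echo "password123" | base64
cGFzc3dvcmQxMjMK
$ echo -n "password123" | base64
cGFzc3dvcmQxMjM=

The first command encodes a trailing newline as well, which is what later ends up in the middle of MONGODB_URI.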
Closing the issue.
I'm trying to make a Home Assistant button/switch that changes the state of my lamp through a REST call.
I have set up a server with a command that switches the lamp at 192.168.43.21/lampSwitch and returns the JSON {"state": "ON"} or "OFF", based on the state after switching.
My problem is with scripting the entities and showing the current/returned state in Hassio: acquiring the state and changing it via a home-screen switch.
My configuration:
# Loads default set of integrations. Do not remove.
default_config:

# Text to speech
tts:
  - platform: google_translate

automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml

rest:
  - scan_interval: 5
    resource: http://192.168.43.21/
    sensor:
      - name: "Temperatura"
        # unique_id: "sensor.temperature_sensor"
        json_attributes_path: "$.response.system"
        value_template: "{{value_json['temperature']}}"
        json_attributes:
          - "temperature"
      - name: "Wilgotność powietrza"
        # unique_id: "sensor.humidity_sensor"
        json_attributes_path: "$.response.system"
        value_template: "{{value_json['humidity']}}"
        json_attributes:
          - "humidity"
      - name: "Poziom wody"
        # unique_id: "sensor.water_sensor"
        json_attributes_path: "$.response.system"
        value_template: "{{value_json['water']}}"
        json_attributes:
          - "water"

lamp_switch:
  - command: "Lamp switch"
    trigger:
      platform:
    action:
      url: http://192.168.43.21/lampSwitch/
I saw a solution which uses cURL and the command line, but I couldn't find a suitable example.
Note that the rest sensors work just fine (they show up on the home screen).
Thank you in advance
I use this to launch applications on my Samsung TV. I define my_command in configuration.yaml, then use:
/bin/bash -c "/usr/bin/curl -X POST 'http://<<My_IP>>:8001/api/v2/applications/'{{ variables.my_command }}"
I call this from a button and pass a variable that defines the application that I want.
I hope this helps.
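For the lamp in the original question, a hedged sketch using the rest_command integration plus a template switch could look like the following; sensor.lamp_state is a hypothetical sensor holding the lamp state, and the HTTP method is an assumption, since polling /lampSwitch directly would toggle the lamp:

rest_command:
  lamp_toggle:
    url: "http://192.168.43.21/lampSwitch"
    method: post   # assumption; use get if that is what the server expects

switch:
  - platform: template
    switches:
      lamp:
        # sensor.lamp_state is hypothetical; it should report "ON"/"OFF"
        value_template: "{{ is_state('sensor.lamp_state', 'ON') }}"
        turn_on:
          service: rest_command.lamp_toggle
        turn_off:
          service: rest_command.lamp_toggle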
I need to use the "plugin/azure_rm.yml" version of azure_rm instead of the older/deprecated "script/azure_rm.ini" to gather dynamic inventory in Azure, in particular because we need VMs from scalesets to be included in inventory. How can I do this?
You can use an inventory script which, in turn, calls ansible-inventory, and have it create the plugin config file on the fly via a heredoc:
#!/usr/bin/env bash

cat > azure_rm.yml <<HEREDOC
---
plugin: azure_rm
include_vmss_resource_groups:
  - '*'
hostvar_expressions:
  ansible_host: private_ipv4_addresses | first
plain_host_names: true
keyed_groups:
  # places each host in a group named 'tag_(tag name)_(tag value)' for each tag on a VM.
  - prefix: tag
    key: tags
  # places each host in a group named 'azure_loc_(location name)', depending on the VM's location
  - prefix: azure_loc
    key: location
  # group by platform (to copy prefix from ec2.py), eg: platform_windows
  - prefix: platform
    key: os_disk.operating_system_type
HEREDOC

ansible-inventory -i azure_rm.yml --list
rm azure_rm.yml
So it's literally having ansible-inventory call ansible-inventory, but with a different argument. Note that in order to get the inventory for the correct subscription, you have to create a copy of the credential used, with the desired subscription id; it doesn't appear that you can override AZURE_SUBSCRIPTION_ID via the yml environment param.
I'm working on an ansible-playbook which should help to generate build agents for a continuous delivery pipeline. Among other things, I'll need to install an Oracle client on such an agent. I want to do something like this:
- name: "Provide response file"
copy: src=/custom.rsp dest=/opt/oracle
Within the custom.rsp file I've got some variables to be substituted. Normally, one could do it with a separate shell command like this:
- name: "Substitute Vars"
shell: "sed 's|<PARAMETER>|<VALUE>|g' -i /opt/oracle/custom.rsp"
I don't like it, though. There should be a more convenient way to do this. Can anybody give me a hint?
You want to be using a template rather than copying a static file.
Also, when using the copy or template modules, the dest parameter is a full path AND filename, not just a path. So if you want to end up with a copy of custom.rsp in the directory /opt/oracle then you need to do this:
- name: "Provide response file"
template: src=/custom.rsp dest=/opt/oracle/custom.rsp
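For illustration, the template source could then contain Jinja2 placeholders that Ansible fills in from your variables (ORACLE_HOME/ORACLE_BASE and the corresponding variable names below are made up, not the real response file keys):

# custom.rsp -- placeholders filled in by the template module
ORACLE_HOME={{ oracle_home }}
ORACLE_BASE={{ oracle_base }}

with oracle_home and oracle_base defined in the play's vars, the role's defaults, or the inventory.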
I'm going to extend Bruce's answer with an example:
This is part of my inventory.yaml:
kafka_stage:
  children:
    kafka_with_zookeeper_stage:
    kafka_only_stage:
  vars:
    zookeeper_hosts: "kafka-stage01:2181,kafka-stage02:2181,kafka-stage03:2181"

kafka_with_zookeeper_stage:
  hosts:
    kafka-stage01:
      broker_id: 0
    kafka-stage02:
      broker_id: 1
  vars:
    services:
      kafka:
      zookeeper:
This is part of a configuration file:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id={{ broker_id }}
# {{ zookeeper_hosts }}
advertised.listeners=PLAINTEXT://{{ ansible_host }}:9092
# {{ services }}
This task in a playbook:
- name: Copy to Host
  ansible.builtin.template:
    src: my_configfile.properties
    dest: /tmp/hejsan.properties
Gave me this on the remote host kafka-stage02:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
# kafka-stage01:2181,kafka-stage02:2181,kafka-stage03:2181
advertised.listeners=PLAINTEXT://kafka-stage02:9092
# {'kafka': None, 'zookeeper': None}
I am using Chef, invoked by Capistrano.
There is a resource to clone a repository using git.
git node['rails']['rails_root'] do
  repository "git@myrepo.com:/myproj.git"
  reference "master"
  action :sync
  user node['rails']['rails_user']
  group node['rails']['rails_group']
end
When it gets to this point, I get:
** [out :: 10.1.1.1] STDERR: Host key verification failed.
So, I need to add a "known_hosts" entry. No problem. But to which user? The core of my problem is that I have no idea which user is executing what commands, and if they are invoking sudo, etc.
I've run ssh-keyscan to populate the known_hosts of root, and of the user I ssh in as, to no avail.
Note, this git repo is read-protected, and requires ssh key access.
Another way to solve this is the ssh_known_hosts cookbook: https://github.com/opscode-cookbooks/ssh_known_hosts
This worked for me.
You can use an ssh wrapper approach. Look here for details.
Briefly, do the following steps:
First, create a file in the cookbooks/COOKBOOK_NAME/files/default directory that is named wrap-ssh4git.sh and which contains the following:
#!/usr/bin/env bash
/usr/bin/env ssh -o "StrictHostKeyChecking=no" $1 $2
Then, use the following block for your deployment:
directory "/tmp/private_code/.ssh" do
owner "ubuntu"
recursive true
end
cookbook_file "/tmp/private_code/wrap-ssh4git.sh" do
source "wrap-ssh4git.sh"
owner "ubuntu"
mode 00700
end
deploy "private_repo" do
repo "git#github.com:acctname/private-repo.git"
user "ubuntu"
deploy_to "/tmp/private_code"
action :deploy
ssh_wrapper "/tmp/private_code/wrap-ssh4git.sh"
end
The git repository will be cloned as user node['rails']['rails_user'] (via https://docs.chef.io/resource_git.html) - I assume that user's known_hosts file is the one you have to modify.
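A minimal sketch of one way to populate it before the git resource runs; the home directory layout and the host name myrepo.com are assumptions:

# Sketch: add the repo host's key to the rails user's known_hosts
execute "add myrepo.com to known_hosts" do
  user node['rails']['rails_user']
  command "ssh-keyscan myrepo.com >> /home/#{node['rails']['rails_user']}/.ssh/known_hosts"
  not_if "grep -q myrepo.com /home/#{node['rails']['rails_user']}/.ssh/known_hosts"
end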
I resolved this issue as below:
_home_dir = nil
node['etc']['passwd'].each do |user, data|
  if user.eql? node['jenkins']['username']
    _home_dir = data['dir']
  end
end

key_config = "Host *\n\tStrictHostKeyChecking no\n"

file "#{_home_dir}/.ssh/config" do
  owner node['jenkins']['username']
  group node['jenkins']['username']
  mode "0600"
  content key_config
end