Creating credential using Ansible Tower REST API - rest

In my Ansible Tower, I have a custom credential named Token in which we store a token, so that we do not have to log in and can reuse this credential in various jobs.
Below are the required fields:
Name:
Credential Type: (where we select this custom credential type)
API Token Value: (where the token is entered, also denoted as an extra variable my_token)
Below is the YAML file I am using:
---
# Required info
tasks:
  - name: Create credential
    uri:
      url: "https://ans........../api/v1/credentials/"
      method: "POST"
      kind: SecureCloud
      name: Token
      body:
        extra_vars:
          my_token: "{ key }"
      body_format: json
I am confused about how to enter the field values Name and Credential Type in the above playbook. Do I also require any other field(s) while doing so? Also, is the url in the uri module correct?

There are two ways of creating a custom credential (I prefer the second one):
First Option: Your Approach - URI Module
- name: Create Custom Credential
  uri:
    url: "https://endpoint/api/v2/credentials/"
    method: POST
    user: admin
    password: password
    headers:
      Content-Type: "application/json"
    body: '{"name":"myfirsttoken","description":"","organization":34,"credential_type":34,"inputs":{"token":"MyToken"}}'
    force_basic_auth: true
    validate_certs: false
    status_code: 200, 201
  no_log: false
But be careful: this is not idempotent. You should first list the credentials with method: GET, register the result, and look for your credential in the registered json.results variable.
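For example, a minimal sketch of that check, reusing the same endpoint and credential name as above (the existing_credentials register name and the selectattr filter are illustrative, not part of the original answer):

- name: Get existing credentials
  uri:
    url: "https://endpoint/api/v2/credentials/"
    method: GET
    user: admin
    password: password
    force_basic_auth: true
    validate_certs: false
    status_code: 200
  register: existing_credentials
- name: Create Custom Credential
  uri:
    url: "https://endpoint/api/v2/credentials/"
    method: POST
    user: admin
    password: password
    headers:
      Content-Type: "application/json"
    body: '{"name":"myfirsttoken","description":"","organization":34,"credential_type":34,"inputs":{"token":"MyToken"}}'
    force_basic_auth: true
    validate_certs: false
    status_code: 200, 201
  # Only POST when no credential with that name came back from the GET
  when: existing_credentials.json.results | selectattr('name', 'equalto', 'myfirsttoken') | list | length == 0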
Second Option: My Preferred Approach - tower-cli
You can do exactly the same thing, more easily and idempotently, with:
- name: Add Custom Credential
  command: tower-cli credential create --name="{{ item }}" --credential-type "{{ credential_type }}" --inputs "{'token':'123456'}" -h endpoint -u admin -p password --organization Default
  no_log: true
  with_items:
    - MyCustomToken
You will get something like:
== ============= ===============
id name credential_type
== ============= ===============
46 MyCustomToken 34
== ============= ===============
The cool part is that you can fully automate your tokens and even auto-generate them with:
token: "{{ lookup('password', '/dev/null length=20 chars=ascii_letters,digits') }}"
And then:
---
- name: Create Custom Credential Token
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    token: "{{ lookup('password', '/dev/null length=20 chars=ascii_letters,digits') }}"
    credential_type: MyCustom
  tasks:
    - name: Create Credential Type
      tower_credential_type:
        name: "{{ credential_type }}"
        description: Custom Credentials type
        kind: cloud
        inputs: {"fields":[{"secret":true,"type":"string","id":"token","label":"token"}],"required":["token"]}
        state: present
        tower_verify_ssl: false
        tower_host: endpoint
        tower_username: admin
        tower_password: password
    - name: Add Custom Credential
      command: tower-cli credential create --name="{{ item }}" --credential-type "{{ credential_type }}" --inputs "{'token':'{{ token }}'}" -h endpoint -u admin -p password --organization Default
      no_log: true
      with_items:
        - MyCustomToken

Related

Ansible Playbook that runs 2nd play as different user

The goal is to have only one playbook that can be executed with the initial password setup when the OS is built.
This playbook will add a service account and then execute the remaining plays as that service account.
The issue I'm having is that the subsequent plays are not using the service account correctly.
Does anyone have any advice on how to get this method to work?
I can see that it's using the new account, but it's not passing the password for that new account.
My playbook is below.
---
#name: Playbook to run through roles to provision new server
- hosts: all
  gather_facts: false
  become: true
  #become_user: '{{ root_user }}' #this is commented out to show what acct is being used
  tasks:
    #Use root account to add new service account, so root account can also be managed.
    - name: Add Service Accounts
      include_tasks: ../steps/ServiceAccount_add.yml
    - name: Pause for 30 seconds
      ansible.builtin.pause:
        seconds: 30
#2nd play to be run as service account so root is not used.
- hosts: all
  gather_facts: false
  become: true
  remote_user: '{{ Service_Account }}'
  become_user: '{{ Service_Account }}'
  vars:
    ansible_become_password: '{{ Service_AccountPW }}'
    remote_user_password: '{{ Service_AccountPW }}'
  tasks:
    - name: Run Baseline
      include_tasks: ../steps/Yum_baseline.yml
    - name: Run Update
      include_tasks: ../steps/Yum_Update.yml
Everything executes up to this part:
<IPADDRESS> SSH: EXEC sshpass -d8 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="service_account"' -o ConnectTimeout=10 -o ControlPath=/tmp/bwrap_656_m3k5zy9e/awx_656_4iz9_26u/cp/a07c97f8e1 IPADDRESS '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<IPADDRESS> (5, '', 'Permission denied, please try again.\r\nPermission denied, please try again.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n')
fatal: [IPADDRESS]: UNREACHABLE! => {
"changed": false,
"msg": "Invalid/incorrect password: Permission denied, please try again.\r\nPermission denied, please try again.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).",
"unreachable": true
}
Thanks to @Zeitounator for his comment!
Here is the final code that works
---
#name: Playbook to run through roles to provision new server
- hosts: all
  gather_facts: false
  become: true
  #become_user: '{{ root_user }}'
  tasks:
    #Use root account to add new service account, so root account can also be managed.
    - name: Add Service Accounts
      include_tasks: ../steps/Ansible_accountadd.yml
    #Jeff Geerling suggested looking into this
    - name: Reset ssh connection to allow user changes to affect 'current login user'
      ansible.builtin.meta: reset_connection
#2nd play to be run as service account so root is not used.
- hosts: all
  gather_facts: false
  remote_user: '{{ Service_Account }}' # this is used to change the ssh user
  vars:
    ansible_ssh_pass: '{{ Service_AccountPW }}' #set the ssh user pw
  tasks:
    - name: Run Baseline
      include_tasks: ../steps/Yum_baseline.yml
    - name: Run Update
      include_tasks: ../steps/Yum_Update.yml
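For context, the included task file is not shown in the question; a purely hypothetical sketch of what ../steps/Ansible_accountadd.yml could contain (only Service_Account and Service_AccountPW come from the playbook above, everything else is assumed):

---
# Hypothetical sketch: create the service account with a password so the
# second play can authenticate over SSH with it.
- name: Create the service account
  ansible.builtin.user:
    name: "{{ Service_Account }}"
    # the password argument expects a crypt hash, not the plain-text password
    password: "{{ Service_AccountPW | password_hash('sha512') }}"
    groups: wheel
    append: true
    state: present
- name: Allow the service account to use sudo
  ansible.builtin.copy:
    dest: "/etc/sudoers.d/{{ Service_Account }}"
    content: "{{ Service_Account }} ALL=(ALL) ALL\n"
    mode: "0440"
    validate: "visudo -cf %s"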

Ansible create kubernetes secret from file

Is it possible to create a k8s secret from a file in Ansible?
Currently, I am doing it like this, but it only works on the first run; if I run the playbook again, it says the secret already exists.
- name: generate keypair
  openssh_keypair:
    path: /srv/{{item.namespace}}/id_{{item.name}}_rsa
  when: item.additional_keys == true
  loop: "{{ containers_release }}"
- name: create private key secret for auth api
  shell: kubectl -n {{ item.namespace }} create secret generic id-{{ item.name }}-rsa-priv --from-file=/srv/{{ item.namespace }}/id_authapi_rsa
  when: item.additional_keys == true
  loop: "{{ containers_release }}"
- name: create public key secret for {{ item.name }}
  shell: kubectl -n {{ item.namespace }} create secret generic id-{{ item.name }}-rsa-pub --from-file=/srv/{{ item.namespace }}/id_{{ item.name }}_rsa.pub
  when: item.additional_keys == true
  loop: "{{ containers_release }}"
As I have mentioned in the comment section, Ansible is idempotent: if the configuration is already in place, Ansible makes no change after redeploying. That is why, when you run the playbook again, you get the message that the secret already exists.
Take a look: create-secret-with-ansible.
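For illustration, a minimal sketch with the k8s module (assuming the kubernetes.core/community.kubernetes collection and its Python client are available on the controller); because it applies a declarative definition, re-running it does not fail when the secret already exists:

- name: create public key secret for {{ item.name }}
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: id-{{ item.name }}-rsa-pub
        namespace: "{{ item.namespace }}"
      type: Opaque
      data:
        # Secret data must be base64 encoded; the public key is read from the controller
        id_rsa.pub: "{{ lookup('file', '/srv/' + item.namespace + '/id_' + item.name + '_rsa.pub') | b64encode }}"
  when: item.additional_keys == true
  loop: "{{ containers_release }}"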
You can try to use SecretHub.
See: ansible-playbook-secret.

How can I import users and teams to grafana?

I am provisioning grafana and running it without a database. I am using Terraform and Helm to do this. I already know that I can store my dashboard files, put them in the values.yaml file for the grafana helm chart, and provision them that way.
It's good that the dashboards persist between releases, but users and teams do not. I cannot find where I can upload or store some sort of JSON file containing this information.
For more information, I am using Google Oauth.
How can I provision users and teams' information? This does not have to be helm specific. If it's some sort of volume-mount thing, that would work too.
We just use the Grafana API via Ansible (using the uri module); maybe it helps you or points you in the right direction.
- name: create users
  uri:
    url: "https://{{ grafana_url }}/api/admin/users"
    user: admin
    password: "{{ admin_password }}"
    force_basic_auth: yes
    method: POST
    headers:
      Accept: application/json
      Content-Type: application/json
    body:
      name: "{{ item.name }}"
      email: "{{ item.email }}"
      login: "{{ item.email }}"
      password: "{{ pass }}"
    body_format: json
  with_items: "{{ admin_list }}"
Then the list is simple YAML.
admin_list:
  - name: "Mrs. X"
    email: "x@gmail.com"
  - name: "Ms. Y"
    email: "y@gmail.com"
And on a second note, you can define users in Terraform (never used it myself).
resource "grafana_organization" "org" {
name = "Grafana Organization"
admin_user = "admin"
create_users = true
admins = [
"admin#example.com"
]
editors = [
"editor-01#example.com",
"editor-02#example.com"
]
viewers = [
"viewer-01#example.com",
"viewer-02#example.com"
]
}

Ansible: Obtain api_token from gce_container_cluster

I launch the GCP cluster with no problem, but I do not know how to get the k8s Ansible module to work. I would prefer to get the api_key to authenticate with the k8s module.
My playbook is the following.
- name: Hello k8s
  hosts: all
  tasks:
    - name: Create a cluster
      register: cluster
      gcp_container_cluster:
        name: thecluster
        initial_node_count: 1
        master_auth:
          username: admin
          password: TheRandomPassword
        node_config:
          machine_type: g1-small
          disk_size_gb: 10
          oauth_scopes:
            - "https://www.googleapis.com/auth/compute"
            - "https://www.googleapis.com/auth/devstorage.read_only"
            - "https://www.googleapis.com/auth/logging.write"
            - "https://www.googleapis.com/auth/monitoring"
        zone: europe-west3-c
        project: second-network-255214
        auth_kind: serviceaccount
        service_account_file: "{{ lookup('env', 'GOOGLE_CREDENTIALS') }}"
        state: present
    - name: Show results
      debug: var=cluster
    - name: Create temporary file for CA
      tempfile:
        state: file
        suffix: build
      register: ca_crt
    - name: Save content to file
      copy:
        content: "{{ cluster.masterAuth.clusterCaCertificate | b64decode }}"
        dest: "{{ ca_crt.path }}"
    - name: Create a k8s namespace
      k8s:
        host: "https://{{ cluster.endpoint }}"
        ca_cert: "{{ ca_crt.path }}"
        api_key: "{{ cluster.HOW_I_GET_THE_API_KEY }}" # <<<-- Here is what I want!!!
        name: testing
        api_version: v1
        kind: Namespace
        state: present
Any idea?
I found a workaround, which is to call gcloud directly:
- name: Get JWT
  command: gcloud auth application-default print-access-token
  register: api_key
Obviously, I needed to:
Install gcloud.
Point the GOOGLE_APPLICATION_CREDENTIALS environment variable at the auth.json file.
The task calls gcloud directly to obtain the token, so there is no need to generate the token separately. I will try to add this feature as a module to Ansible for better interoperability with Kubernetes.
Once obtained, it is possible to call the k8s module like this:
- name: Create ClusterRoleBinding
  k8s:
    state: present
    host: "https://{{ cluster.endpoint }}"
    ca_cert: "{{ ca_crt.path }}"
    api_version: rbac.authorization.k8s.io/v1
    api_key: "{{ api_key.stdout }}"
    definition:
      kind: ClusterRoleBinding
      metadata:
        name: kube-system_default_cluster-admin
      subjects:
        - kind: ServiceAccount
          name: default # Name is case sensitive
          namespace: kube-system
      roleRef:
        kind: ClusterRole
        name: cluster-admin
        apiGroup: rbac.authorization.k8s.io
According to the fine manual, masterAuth contains two other fields, clientCertificate and clientKey that correspond to the client_cert: and client_key: parameters, respectively. From that point, you can authenticate to your cluster's endpoint as cluster-admin using the very, very strong credentials of the private key, and from that point use the same k8s: task to provision yourself a cluster-admin ServiceAccount token if you wish to do that.
You can also apparently use masterAuth.username and masterAuth.password in the username: and password: parameters of k8s:, too, which should be just as safe since the credentials travel over HTTPS, but you seemed like you were more interested in a higher entropy authentication solution.
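A minimal sketch of that certificate-based approach, reusing the cluster and ca_crt variables from the playbook above (writing the decoded cert and key next to the CA file is just an illustrative choice; client_cert: and client_key: expect file paths):

- name: Save client certificate to file
  copy:
    content: "{{ cluster.masterAuth.clientCertificate | b64decode }}"
    dest: "{{ ca_crt.path }}.client.crt"
    mode: "0600"
- name: Save client key to file
  copy:
    content: "{{ cluster.masterAuth.clientKey | b64decode }}"
    dest: "{{ ca_crt.path }}.client.key"
    mode: "0600"
- name: Create a k8s namespace authenticating with the client certificate
  k8s:
    host: "https://{{ cluster.endpoint }}"
    ca_cert: "{{ ca_crt.path }}"
    client_cert: "{{ ca_crt.path }}.client.crt"
    client_key: "{{ ca_crt.path }}.client.key"
    name: testing
    api_version: v1
    kind: Namespace
    state: present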

How to identify the hosts in my playbook from a variable file?

In my hosts file, I have about 10 different groups, each with devices in it. Each customer deployment should go to a specific region, and I want to specify that in a customer config file.
In my playbook, I tried to use a variable for hosts, and my plan was to specify the hosts group in the config file.
master_playbook.yml
hosts: "{{ target_region }}"
vars:
custom_config_file: "./app_deployment/customer_config_files/xx_app_prod.yml"
xx_app_prod.yml
customer: test1
env: prod
app_port: 25073
target_region: dev
Error message I get:
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'target_region' is undefined
To determine which groups a host (that is not the running host) is in, you have to use a little helper:
Create a script:
#!/usr/bin/env ansible-playbook
# call like: ./showgroups -i develop -l jessie.fritz.box
- hosts: all
  gather_facts: no
  tasks:
    - name: show the groups the host(s) are in
      debug:
        msg: "{{ group_names }}"
After that you can run a playbook like:
- name: "get group memberships of host"
shell: "{{ role_path }}/files/scripts/show_groups -i {{ fullinventorypath }} -l {{ hostname }}"
register: groups
- name: "create empty list of group memberships"
set_fact:
memberships: []
- name: "fill list"
set_fact:
memberships: "{{ memberships + item }}"
with_items: groups.stdout.lines