Couldn't find api version for Microsoft.DataFactory/factories?api-version=2018-06-01 - azure-data-factory

I am querying the data factories within a resource group but I can't make it work. I can query them through the REST API, but not through the azure_rm_resource_info Ansible module:
- name: Check if Data Factory ({{ az_datafactory_name }}) already exists
  azure.azcollection.azure_rm_resource_info:
    auth_source: auto
    client_id: "{{ az_client_id }}"
    secret: "{{ az_secret }}"
    tenant: "{{ az_tenant_id }}"
    subscription_id: "{{ az_subscription_id }}"
    url: "https://management.azure.com/subscriptions/{{ az_subscription_id }}/resourceGroups/{{ az_resource_group_name }}/providers/Microsoft.DataFactory/factories?api-version=2018-06-01"
  register: az_datafactory
Output:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Couldn't find api version for Microsoft.DataFactory/factories?api-version=2018-06-01"}

Thanks to the tip from #MrVitaminP. Dropping the api-version query string from the URL solved my problem:
url: "https://management.azure.com/subscriptions/{{ az_subscription_id }}/resourceGroups/{{ az_resource_group_name }}/providers/Microsoft.DataFactory/factories"

Related

Ansible loop based on fact to Restart Kafka Connector Failed Tasks

I want to restart Kafka Connect tasks that are in a failed state using an Ansible playbook. I have fetched the connector task states using set_fact, and I now want to loop over the collected facts to restart the Kafka Connector tasks using the connector name and task ID.
tasks:
  - name: Gathering Connector Names
    uri:
      url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors"
      user: "{{ username }}"
      password: "{{ password }}"
      method: GET
      force_basic_auth: yes
      status_code: 200
    register: conn_stat

  - name: Checking for Connector status
    uri:
      url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ abc_conn_name }}/status"
      user: "{{ username }}"
      password: "{{ password }}"
      method: GET
      force_basic_auth: yes
    loop: "{{ conn_name }}"
    loop_control:
      loop_var: abc_conn_name
    vars:
      conn_name: "{{ conn_stat.json }}"
    register: conn_stat_1

  - name: Gathering Failed task id
    set_fact:
      failed_connector_name_task_id: "{{ conn_stat_1 | json_query('results[].json[].{name: name ,id: [tasks[?state == `RUNNING`].id [] | [0] ]}') }}"

  - name: Failed connector name with tasks id
    ansible.builtin.debug:
      var: failed_connector_name_task_id
The fact gives me the values below, which I need to feed into a loop:
"failed_connector_name_task_id": [
{
"id": [
0
1
],
"name": "test-connector-sample"
},
{
"id": [
0
1
],
"name": "confluent-connect"
},
{
"id": [
0
1
2
],
"name": "confluent-test-1"
}
]
},
"changed": false
}
These values need to be posted:
- name: Restart Connector Failed tasks
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/**{{name of connector}}**/tasks/**{{task ID}}**/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  register: conn_stat
I want to use the connector name and task ID in the loop, so I need to set up a loop over the tasks. As we can see above, connector 'confluent-test-1' has three tasks in a failed state, so it needs to be iterated three times for that connector.
This is a typical case where you want to use subelements, either through the aforementioned filter or the lookup. Here is an example using the filter:
- name: Restart Connector Failed tasks
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ item.0.name }}/tasks/{{ item.1 }}/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  loop: "{{ failed_connector_name_task_id | subelements('id', skip_missing=True) }}"
References worth reading:
ansible loops
subelements filter
You can actually remove the last set_fact task, which becomes unnecessary, by using e.g. the following construct:
- name: Restart Connector Failed tasks
  vars:
    failed_connector_name_task_id: "{{ conn_stat_1 | json_query('results[].json[].{name: name ,id: [tasks[?state == `RUNNING`].id [] | [0] ]}') }}"
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ item.0.name }}/tasks/{{ item.1 }}/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  loop: "{{ failed_connector_name_task_id | subelements('id', skip_missing=True) }}"

Error: "datetime' is not json serializable" in Ansible playbook

I am getting the error below while passing the following date format to the Ansible Tower REST API in extra_vars, using the body section of the Ansible uri module. The variable is:
date_slot: '2022-04-04T13:42:00'
- name: "add extra vars "
uri:
url: "{{ tower_gui_url }}/api/v2/workflow_job_templates/{{ workflow_id }}/"
method: PATCH
url_username: "{{ tower_gui_username }}"
url_password: "{{ tower_gui_password }}"
body_format: json
body: { "extra_vars": "---\ndate_slot: {{ date_slot }}" }
validate_certs: false
force_basic_auth: true
return_content: yes
status_code: 200
I get the error below:
datetime' is not json serializable
Any idea what causes this error and how to resolve it?
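There is no accepted fix in this thread, but a common workaround for this class of error (an assumption, not confirmed by the question) is to quote the templated value inside the extra_vars string, so that Tower's YAML parser keeps it a plain string instead of turning it into a datetime object. A hedged sketch:
- name: "add extra vars"
  uri:
    url: "{{ tower_gui_url }}/api/v2/workflow_job_templates/{{ workflow_id }}/"
    method: PATCH
    url_username: "{{ tower_gui_username }}"
    url_password: "{{ tower_gui_password }}"
    body_format: json
    # assumption: the inner single quotes keep date_slot from being parsed as a datetime
    body: { "extra_vars": "---\ndate_slot: '{{ date_slot }}'" }
    validate_certs: false
    force_basic_auth: true
    return_content: yes
    status_code: 200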

Ansible conditional won't recognise debug output

I am trying to use the debug output in a conditional on a Kubernetes object, but it doesn't seem to be recognised properly:
- name: get some service status log
  kubernetes.core.k8s_log:
    namespace: "{{ product.namespace }}"
    label_selectors:
      - app.kubernetes.io/name=check-service-existence
  register: service_existence

- name: some service existence check log
  debug:
    msg: "{{ service_existence.log_lines | first }}"

- name: create service for "{{ product.namespace }}"
  kubernetes.core.k8s:
    state: present
    template: create-service.j2
    wait: yes
    wait_timeout: 300
    wait_condition:
      type: "Complete"
      status: "True"
  when: service_existence == "service_does_not_exist"
This is what I get when I run it:
TASK [playbook : some service existence check log] ***
ok: [127.0.0.1] =>
msg: service_does_not_exist
TASK [playbook : create service for "namespace"] ***
skipping: [127.0.0.1]
I suspect that it treats msg: as part of the string. How can I deal with this properly?
Since your debug message shows the value of service_existence.log_lines | first, your conditional should compare against that same expression:
when: service_existence.log_lines | first == "service_does_not_exist"
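Applied to the task from the question, that gives (a sketch, assuming the log line is exactly service_does_not_exist):
- name: create service for "{{ product.namespace }}"
  kubernetes.core.k8s:
    state: present
    template: create-service.j2
    wait: yes
    wait_timeout: 300
    wait_condition:
      type: "Complete"
      status: "True"
  # compare the same expression that the debug task printed
  when: service_existence.log_lines | first == "service_does_not_exist"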

How to get postgresql_query results from Ansible

I'm trying to print the output of a PostgreSQL query that is run by Ansible. Unfortunately I'm not sure how to get hold of the return value.
- name: Get specific tables
  postgresql_query:
    db: "{{ database_name }}"
    login_host: "{{ my_host }}"
    login_user: "{{ my_user }}"
    login_password: "{{ my_password }}"
    query: SELECT * FROM pg_tables t WHERE t.tableowner = current_user
Googling just says to use register:, but the PostgreSQL Ansible module does not accept a register parameter:
fatal: [xx.xxx.xx.xx]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (postgresql_query) module: register Supported parameters include: ca_cert, db, login_host, login_password, login_unix_socket, login_user, named_args, path_to_script, port, positional_args, query, session_role, ssl_mode"}
The Ansible docs list return values for this module but there are no examples on how to use them, and everything I search for leads right back to register:.
Sounds like you are very close, but have register at the wrong indentation. It's a parameter of the task itself, not the postgresql module.
Try:
- name: Get specific tables
  postgresql_query:
    db: "{{ database_name }}"
    login_host: "{{ my_host }}"
    login_user: "{{ my_user }}"
    login_password: "{{ my_password }}"
    query: SELECT * FROM pg_tables t WHERE t.tableowner = current_user
  register: result

- debug:
    var: result
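If you only want specific fields rather than the whole structure, the rows returned by the query are exposed under the module's query_result return key, so something like this sketch would list just the table names from pg_tables:
- name: Show table names owned by the current user
  debug:
    msg: "{{ result.query_result | map(attribute='tablename') | list }}"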

Helm lint error but everything looks ok to me

I'm getting this error when linting my Helm project:
$ helm lint --debug
==> Linting .
[INFO] Chart.yaml: icon is recommended
[ERROR] templates/: render error in "myProject/templates/configmap.yaml": template: myProject/templates/configmap.yaml:26:27: executing "myProject/templates/configmap.yaml" at <.Values.fileServiceH...>: can't evaluate field fileHost in type interface {}
Error: 1 chart(s) linted, 1 chart(s) failed
This is my configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myProject-configmap
data:
  tkn.yaml: |
    iss: "{{ .Values.iss }}"
    aud: "{{ .Values.aud }}"
  db.yaml: |
    database: "{{ .Values.database }}"
    user: "{{ .Values.user }}"
    host: "{{ .Values.host }}"
    dialect: "{{ .Values.dialect }}"
    pool:
      min: "{{ .Values.pool.min }}"
      max: "{{ .Values.pool.max }}"
      acquire: "{{ .Values.pool.acquire }}"
      idle: "{{ .Values.pool.idle }}"
  fileservice.yaml: |
    fileServiceHost:
      fileHost: "{{ .Values.fileServiceHost.fileHost }}"
  notificationservice.yaml: |
    notificationServiceHost:
      notificationHost: "{{ .Values.notificationservice.notificationHost }}"
  organizationservice.yaml: |
    organizationServiceHost:
      organizationHost: "{{ .Values.organizationservice.organizationHost }}"
  organizations.yaml: |
    organizations: {{ .Values.organizations | toJson | indent 4 }}
  epic.yaml: |
    redirectUri: "{{ .Values.redirectUri }}"
This is my /vars/dev/fileservice.yaml file
fileServiceHost:
  fileHost: 'https://example.com'
What is wrong that I'm getting this lint error?
You want to either use .Files.Get to load the yaml files, or take the yaml content you have in those files and capture it in values.yaml so that you can insert it directly into your configmap with toYaml.
If the values are just static and you don't need the user to override them, then .Files.Get is better for you. If you want to be able to override the content of the yaml files easily at install time, then represent them in the values.yaml file instead.
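A minimal sketch of the .Files.Get variant, assuming fileservice.yaml sits under vars/dev/ inside the chart directory (adjust the path if it lives elsewhere):
  # in templates/configmap.yaml, under data:
  fileservice.yaml: |
{{ .Files.Get "vars/dev/fileservice.yaml" | indent 4 }}
For the values.yaml variant, moving the fileServiceHost block into values.yaml (or passing it with -f vars/dev/fileservice.yaml at install time) lets the existing {{ .Values.fileServiceHost.fileHost }} line resolve as written, since the lint error simply means that key is not defined in the values the chart is rendered with.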