How can I reference a Tower inventory group from the playbook? My playbook calls specific roles, and each role will target a different inventory group from Tower.
So far what I have tried is:
host: "{{ inventory_hostname in groups['ios'] }}"
or
host: "ios"
or
host: ios
Is it just my syntax there on the templating?
I can't find any reference to this when using a tower inventory group.
I get some type of error stating the group could not be found, or Ansible actually attempting to connect to a host named "ios":
"failed to connect to ios:22"
You should use hosts: instead of host: if you want to reference a group.
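For example, a minimal play targeting the group (the role name is a placeholder):

- hosts: ios
  gather_facts: false
  roles:
    - ios_config   # hypothetical role applied to every host in the 'ios' group

hosts: accepts a group name, a host pattern, or a comma-separated list of either; host: is not a valid play keyword.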
From Ansible Tower support:
Each job template can see only one top-level inventory. It is possible, though, to create potentially overlapping groups and sub-groups within a single inventory. For most applications, the single inventory can be organized to provide necessary specificity. In cases where a particular inventory grouping must be referenced in multiple job templates, it is necessary to either use the same top-level inventory in both cases, or to duplicate the inventory group to both inventories.
So you can't reference groups from another inventory in hosts:; each job template sees only the inventory it was given. But you can either use a Workflow to switch inventories or create overlapping groups.
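For example, overlapping groups inside a single inventory might look like this (hostnames are illustrative):

[ios]
switch1.example.com
switch2.example.com

[nxos]
switch9.example.com

[network:children]
ios
nxos

A job template pointed at this one inventory can then target ios, nxos, or network in hosts: (or via the template's Limit field).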
Try starting with:
hosts: "{{ ansible_play_batch }}"
This "magic variable" gives you the list of [active, reachable] Hosts that were passed in from Tower.
I came to this because hosts: is a required field in the playbook, and I didn't want to hard-code over the top of the Tower Inventory selection.
I want to check whether the database/server is online before I kick off a pipeline. If the database is down, I want to cancel the pipeline processing. I would also like to log the results in a table.
Format (columns): DBName, Status, Date
If a DB/server is down, I want to send an email to the concerned team with a formatted table showing which DBs/servers are down.
Approach:
Run a query on each of the servers. If there is a result, format the output as shown above. I am using an ADF pipeline to achieve this. My issue is how to combine the various outputs from the different servers.
For example:
Server1:
DBName: A Status: ONLINE runDate:xx/xx/xxxx
Server2:
DBName: B Status: ONLINE runDate:xx/xx/xxxx
I would like to combine them as follows:
Server  DBName  Status  runDate
1       A       ONLINE  xx/xx/xxxx
2       B       ONLINE  xx/xx/xxxx
I would use this to update the logging table, as well as in the email if I were to send one out.
Is this possible using the Pipeline activities or do I have to use mapping dataflows?
I did similar work a few weeks ago. We made an API that holds all the server-related settings and URL endpoints we need to ping.
You don't need to store the SQL Server username and password at all. When you ping the SQL Server, the connection will time out if it isn't online; if it is online, you will get a password-related error instead. This way you can easily figure out whether it's up and running.
AFAIK, if you are using Azure DevOps you can use your service account to log into the SQL Server. If you have set up AD to log into DevOps, this can be done in the build script.
Either way, you will be able to tell whether the SQL Server is up and running.
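A rough sketch of that probe (the server name, timeout, and error matching are assumptions, not a vetted check):

$conn = New-Object System.Data.SqlClient.SqlConnection(
    "Server=myserver.example.com;User Id=probe;Password=deliberately-wrong;Connect Timeout=5")
try {
    $conn.Open()
    Write-Output "ONLINE"        # unlikely with a wrong password, but it connected
    $conn.Close()
} catch {
    if ($_.Exception.Message -match "Login failed") {
        Write-Output "ONLINE"    # the server answered, so it is up
    } else {
        Write-Output "OFFLINE"   # timeout or network error, so it is down
    }
}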
You can have all the actions as tasks in a YAML pipeline.
You need something like the sketch below (the script bodies, file names, and send-email.sh are placeholders, not working implementations):

steps:
- script: |
    # Query the server here and publish the result as an output variable.
    echo "##vso[task.setvariable variable=dbStatus;isOutput=true]ONLINE"
  name: checkDb
  displayName: Check database status
- script: echo "DBName=A Status=$(checkDb.dbStatus)" >> results.txt
  displayName: Append the result to a file
- script: ./send-email.sh results.txt   # hypothetical notification script
  displayName: Send e-mail
  condition: ne(variables['checkDb.dbStatus'], 'ONLINE')

There are several built-in tasks to achieve what you need; you just have to find the right ones. You can control the flow by publishing output variables from one step and putting a condition: on later steps.
I'm a bit stumped on an Ansible issue. I've gotten a portion of my setup script working for my database servers, and I would like Ansible to be able to manage each server's postgresql.conf file. I currently have it pushing out an up-to-date copy of the config file, but this has presented a problem.
Our security certificates are unique to each server, and the postgresql.conf has parameters for setting these up for each server. I've currently got Ansible calculating the proper initial values for things like shared_buffers and effective_cache_size, but do not know how to get it to push a unique certificate out to each remote server, or to uniquely set the name in the config file to match the certificate name.
Are these even possible with Ansible?
I had a similar requirement recently to deploy a specific file (Java keystore) per host, along with the relevant keystore password to decrypt and use it.
For the per-host keystore password, I set host variables inside the group inventory file inventory/demoGroupName/hosts:
hostname1.com keystore_password="{{ vault_keystore_password_hostname1 }}"
hostname2.com keystore_password="{{ vault_keystore_password_hostname2 }}"
and then in inventory/demoGroupName/vault:
vault_keystore_password_hostname1: superSecurePassword
vault_keystore_password_hostname2: nonRepeatedPassword
(Note: the use of vault_ as a prefix is recommended in Ansible's best practices, but feel free to modify this to suit your scenario)
and then in the job, simply place {{ keystore_password }} in the relevant spot, as an example:
- name: arbitrary tasks
  command: configure-keystore --password="{{ keystore_password }}"
Now in my scenario, I placed the individual keystores into the roles/role_name/files directory and copied them across by substituting {{ inventory_hostname }} into the filename, but this answer gives a better solution in my opinion. Either will work for your situation, but the latter is probably better long-term. If the certificate name can be made the same for all hosts, that will also simplify your situation somewhat.
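For illustration, a minimal sketch of the per-host copy (the file layout and destination path are assumptions):

- name: Copy this host's certificate to the server
  copy:
    src: "{{ inventory_hostname }}.crt"     # resolved against roles/role_name/files/
    dest: /etc/postgresql/ssl/server.crt    # hypothetical destination
    owner: postgres
    mode: "0600"

Because src is relative, Ansible looks it up in the role's files/ directory, so each host receives its own certificate.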
I'm tasked with automating the creation of Azure VMs, and naturally I go through a number of more or less broken iterations when trying to deploy a VM image. As part of this, I automatically allocate serial hostnames, but for some strange reason it's not working:
The code in the link above works very well, but the contents of my ResourceGroup is not as expected. Every time I deploy (successfully or not), a new entry is created in whatever list is returned by Get-AzureRmResourceGroupDeployment; however, in the Azure web interface I can only see a few of these entries. If, for instance, I omit a parameter for the JSON file, Azure cannot even begin to deploy something -- but the hostname is somehow reserved anyway.
Where is this list? How can I clean up after broken deployments?
Currently, Get-AzureRmResourceGroupDeployment returns:
azure-w10-tfs13
azure-w10-tfs12
azure-w10-tfs11
azure-w10-tfs10
azure-w10-tfs09
azure-w10-tfs08
azure-w10-tfs07
azure-w10-tfs06
azure-w10-tfs05
azure-w10-tfs02
azure-w7-tfs01
azure-w10-tfs19
azure-w10-tfs1
although the web interface only lists:
azure-w10-tfs12
azure-w10-tfs13
azure-w10-tfs09
azure-w10-tfs05
azure-w10-tfs02
Solved using the code $siblings = (Get-AzureRmResource).Name | Where-Object{$_ -match "^$hostname\d+$"}
(PS. If you have tips for better tags, please feel free to edit this question!)
If you create a VM in Azure Resource Manager mode, it will have a deployment attached to it. In fact, if you create any resource at all, it will have a resource deployment attached.
If you delete the resource you will still have the deployment record there, because you still deployed it at some stage. Consider deployments as part of the audit trail of what has happened within the account.
You can delete deployment records with Remove-AzureRmResourceGroupDeployment, but there is very little point, since deployments have no bearing on the operation of Azure. There is no cost associated with them; they are just historical records.
Querying deployments with Get-AzureRmResourceGroupDeployment will yield the following fields:
DeploymentName
Mode
Outputs
OutputsString
Parameters
ParametersString
ProvisioningState
ResourceGroupName
TemplateLink
TemplateLinkString
Timestamp
So you can tell whether the deployment was successful via ProvisioningState, know which templates you used via TemplateLink and TemplateLinkString, check the outputs of the deployment, and so on. This can be useful for figuring out which template worked and which didn't.
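To answer the cleanup question directly, a minimal sketch that removes the records of failed deployments (the resource group name is a placeholder):

Get-AzureRmResourceGroupDeployment -ResourceGroupName "myResourceGroup" |
    Where-Object { $_.ProvisioningState -eq "Failed" } |
    ForEach-Object {
        # Removes only the deployment record, not any resources it created.
        Remove-AzureRmResourceGroupDeployment -ResourceGroupName "myResourceGroup" -Name $_.DeploymentName
    }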
If you want to see actual resources, the ones you are potentially being charged for, you can use Get-AzureRmResource.
If you just want to retrieve a list of the names of VMs that exist within an Azure subscription, you can use
(Get-AzureRmVM).Name
Launching
cap rubber:create_staging
starts by checking the account's existing EC2 security groups. The first check is on the default group, which cannot be deleted from the AWS web console, so the response to the following prompt is naturally 'N':
* Security Group already in cloud, syncing rules: default
Rule '{"protocol"=>"tcp", "from_port"=>"1", "to_port"=>"65535", "source_group_name"=>"", "source_group_account"=>"460491791257"}' exists in cloud, but not locally, remove from cloud? [y/N]: N
Yet, four checks later,
* Missing rule, creating: {"source_group_name"=>"default", "source_group_account"=>"460491791257", "protocol"=>"tcp", "from_port"=>"1", "to_port"=>"65535"}
/Users/you/.rvm/gems/ruby-1.9.3-p551/gems/excon-0.45.4/lib/excon/middlewares/expects.rb:10:in `response_call': Duplicate => the specified rule \"peer: sg-0910926c, TCP, from port: 1, to port: 65535, ALLOW\" already exists (Fog::Compute::AWS::Error)
Clearly there is an attempt to create an identical rule. The only difference is that the rule picked up by the check has an empty string for source_group_name, while the rubber routine tries to create the same rule with the source_group_name filled in.
Creating a tag in the EC2 web console with 'source_group_name' and the default value does not change the behaviour. Does this require a fix via EC2, or in rubber?
Edit: while the following does effectively work, the source of the problem was the rubber version; the latest was not being used, which was probably the origin of the problem. The list of versions is here.
This can be overcome by creating a new security group in the EC2 web console and editing the config file config/rubber/rubber.yml to reference the same group created in the console (line 206 or thereabouts):
security_groups:
default:
description: The default security group
rules:
- source_group_name: rubber_default
Then, in config/rubber/instance-<env>.yml, the security_groups block needs amending (line 52 or thereabouts):
security_groups:
- rubber-default
To resolve the problem mentioned in the subject, I wrote the following code:
String link = externalizer.publishLink(resolverFactory.getAdministrativeResourceResolver(null),"");
I cannot test it because I only have an author machine, but this code will execute only on publishers.
In production we have several publishers. I want to get a different result for every publisher.
Will my code work on publishers?
Have you defined a sling:osgiConfig for the PID com.day.cq.commons.impl.ExternalizerImpl?
You could also configure this directly in the OSGi console [1].
In the configuration, you supply the domain name, like 'publish http://www.example.com'.
In the case of multiple domain names for multiple publish instances, define sling:osgiConfig nodes for this service and attach them to the 'run modes' of those publish instances. This should work.
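As a rough sketch, such a node could live at /apps/myapp/config.publish1/com.day.cq.commons.impl.ExternalizerImpl/.content.xml (the app name and the 'publish1' run mode are hypothetical), with the standard externalizer.domains property:

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
    xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="sling:OsgiConfig"
    externalizer.domains="[local http://localhost:4502,publish http://publisher1.example.com]"/>

A second node under a config.publish2 folder with publisher2's domain would then give each publisher its own result.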
On a side note, the Externalizer service is generally used for non-HTML content like email, etc. In HTML you can use relative URLs.
[1] http://localhost:4502/system/console/configMgr