In Chef, how is it possible to generate a variable in one recipe and use that variable in another recipe? - chef-recipe

I want a recipe that will run on a client and create a variable storing the client's FQDN, and another recipe that will run on a different server and use that variable. How is that possible in Chef?

It looks like you are looking for service discovery, and Chef might not be the best tool for that job. However, if your client is running Chef, its FQDN is already stored on the Chef server. You can pull it in various ways, for example:
client_nodes = search(:node, 'recipes:client_cookbook\:\:client_recipe')
search returns an array of matching nodes, so you can then read the client's FQDN from the first result's node mash, e.g. client_nodes.first["fqdn"]. (The colons in the recipe name are escaped for the search query.)
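Alternatively, from a workstation you can pull the same attribute with knife, using the recipe name from the question as the search term:
knife search node 'recipes:client_cookbook\:\:client_recipe' -a fqdn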

Related

How to connect build agent to PostgreSQL database

My integration tests for my asp.net core application require a connection to a PostgreSQL database. In my deployment pipeline I only want to deploy if my integration tests pass.
How do I supply a working connection string inside the Microsoft build agent?
I looked under service connections and couldn't see anything related to a database.
If you are using a Microsoft-hosted agent, then your database needs to be accessible from the internet.
Otherwise, you need to run the build on a self-hosted agent that can access your database.
I assume the default connection string is in appsettings.json. You could store the actual database connection string in a secret variable, then update the appsettings.json file with that variable's value through some task (e.g. Set Json Property) or programmatically (e.g. a PowerShell script) before running the web app and starting the tests during the build.
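A minimal sketch of that PowerShell approach, assuming the connection string lives under ConnectionStrings:DefaultConnection, the file sits at src/MyApp/appsettings.json, and the secret variable is mapped to the environment variable PG_CONNECTION (all three names are assumptions):
# Hypothetical inline PowerShell step; adjust the path and property names to your project
$settingsPath = "src/MyApp/appsettings.json"
$settings = Get-Content $settingsPath -Raw | ConvertFrom-Json
$settings.ConnectionStrings.DefaultConnection = $env:PG_CONNECTION   # secret pipeline variable mapped to an env var in the task
$settings | ConvertTo-Json -Depth 10 | Set-Content $settingsPath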
If any PostgreSQL database will do, you can use a service container with a Docker image that provides one (e.g. postgres).
For a classic pipeline, you could call the docker command to run the image.
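A rough sketch of such a step (container name, password and image tag are illustrative):
docker run -d --name ci-postgres -e POSTGRES_PASSWORD=test -p 5432:5432 postgres:15
The tests could then use a connection string along the lines of Host=localhost;Port=5432;Username=postgres;Password=test.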
I would recommend using runsettings, which you can override in the task; that way you keep your connection string out of source control. In terms of service connections, you don't need one; all you need is a proper connection string.
Since I don't know the details of how you connect to your DB, I can't give you more info. If you provide an example of how you already connect to the database, I can try to provide a better answer.

Managing Multiple servers in an environment with Powershell DSC

I want to manage the servers in our staging pipeline with PowerShell DSC (push model). The servers map to the environments as follows:
Development: 1 server
Test: 2 servers
UAT: 2 servers
Production: 2 servers
The servers within one environment have the same configuration, but the configuration differs between environments. I wanted to go with the push model because I do not have to set up a pull server.
PowerShell DSC offers the option to manage the configuration via configuration data in a separate file. But this comes with the caveat that you need to specify a node name that matches the respective server name. That means I need to copy the configuration data for each server in one environment, and when changing the configuration I need to remember that there is a second place where I need to update the value.
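For illustration, the duplication described above looks something like this in the configuration data (node and setting names are made up):
# Hypothetical ConfigurationData: every server in the environment repeats the same settings
@{
    AllNodes = @(
        @{ NodeName = 'UATWEB01'; WebsiteName = 'MyApp'; AppPoolName = 'MyAppPool' }
        @{ NodeName = 'UATWEB02'; WebsiteName = 'MyApp'; AppPoolName = 'MyAppPool' }   # identical, only the name differs
    )
}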
Additionally, I do not really care about the server names. If the servers are exchanged tomorrow for new ones, the configuration relevant to the environment should simply be applied.
What is the best practice approach to manage multiple servers within one environment with the same configuration?
Check the links below; I think they cover the scenario:
Using A Single DSC Configuration for Multiple Servers
DSC ConfigurationNames with multiple nodes
The mof file that gets produced does not contain the nodename inside it. So as long as you build a generic configuration, you can rename it after the fact at deploy time.
You can create one config for each environment with some generic name. Then enumerate the list of servers and make a copy of the config for each one with that server's name.
You can take it a step further. Have a share where you create a folder for each server that matches the server's name. Then copy the mof for that server into that folder with a name of localhost.mof. You can then run Start-DSCConfiguration -Path \\server\share\$env:computername from that machine as part of your deployment script.
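A minimal sketch of that publish-and-apply flow, assuming the environment's config was compiled against a node named localhost and the share is \\dscshare\configs (both assumptions):
# Hypothetical publish step: copy one generic environment mof into a folder per server
$servers = 'UATWEB01', 'UATWEB02'                                     # illustrative server names
foreach ($server in $servers) {
    New-Item -ItemType Directory -Path "\\dscshare\configs\$server" -Force | Out-Null
    Copy-Item '.\UAT\localhost.mof' "\\dscshare\configs\$server\localhost.mof"
}
# Then, on each target machine as part of its deployment script:
Start-DscConfiguration -Path "\\dscshare\configs\$env:COMPUTERNAME" -Wait -Verbose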

Get the machine name of an Azure worker or web role using PowerShell?

Is there any way using the PowerShell Azure cmdlets to get the machine name on which an Azure worker or web role is running? Specifically, I'm looking for the name that starts with "RD". I'm not 100% sure if I'm searching for this using the right terminology, because my results are clouded with information about Azure Virtual Machines. I've also been exploring the objects returned from such calls as Get-AzureDeployment and Get-AzureVM, but haven't found the "RD" name anyplace yet.
I've also found the discussion here, but wondering if it's out of date: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/73eb430a-abc7-4c15-98e7-a65308d15ed9/how-to-get-the-computer-name-of-a-webworker-role-instance?forum=windowsazuremanagement
Motivation: My New Relic monitoring often complains "server not reporting" for instances that have been decommissioned. New Relic's server monitoring knows only the "RD..." names, and I'm looking for a quick way to get a list of these from Azure so that I can compare and see if New Relic is only complaining about old instances or if there's a real problem with one of the current instances.
You can actually get more significant host names than RD... by setting the vmName key in the cloud service's ServiceConfiguration file.
Then, your host names will be of the form vmnameXX, where XX is the instance number of the role (e.g. "MyApp01", "MyApp02", ...).
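For reference, the relevant fragment of the ServiceConfiguration.cscfg looks roughly like this (the role and vmName values are illustrative):
<Role name="WebRole1" vmName="MyApp">
  <Instances count="2" />
</Role>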
For details on this, see the links below:
https://azure.microsoft.com/documentation/articles/virtual-networks-viewing-and-modifying-hostnames/
http://blogs.msdn.com/b/cie/archive/2014/03/30/custom-hostname-for-windows-azure-paas-virtual-machines.aspx

AWS deployment without using SSH

I've read some articles recently on setting up AWS infrastructure w/o enabling SSH on EC2 instances. My web app requires a binary to run. So how can I deploy my application to an EC2 instance w/o using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, like the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model. For example, at boot-up pull your code from a Git/Mercurial repository and then execute scripts to set up your instance. The scripts will set up all the monitoring required to determine whether your server and application are up and running appropriately. You would still need an SSH client for this if you want to pull your code using SSH (although you could also do it over HTTPS).
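As an illustration, on a Windows instance the pull step could be a user-data script along these lines (the repo URL and paths are made up, and it assumes Git is baked into the AMI):
<powershell>
# Hypothetical pull-model bootstrap; clones over HTTPS so no SSH key is needed on the instance
git clone https://github.com/example/myapp.git C:\deploy\myapp
& C:\deploy\myapp\scripts\setup.ps1   # installs the binary, wires up monitoring, starts the service
</powershell>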
You can also use configuration management tools that don't use ssh at all like Puppet or Chef. Essentially your node/server will pull all your application and server configuration from the Puppet master or the Chef server. The Puppet agent or Chef client would then perform all the configuration/deployment/monitoring changes for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times if there's something wrong with one of your servers, and in the event something goes wrong, discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.

Sensible deployment using EC2

We're currently using RightScale, and every time we deploy, we execute a script on the server or server array that we want to update. It pulls the code from a GitHub repository, creates a new folder in /var/www/releases/TIMESTAMP, and symlinks the document root, /var/www/current, to that directory.
We're looking to get a better deployment strategy, such as something where we SSH into one of the servers on the private network, and run a command-line script to deploy what we want to deploy.
However, this means that this one server has to have its public key in the authorized_keys of all of the servers we want to deploy to. Is this safe? Wouldn't this be a single server that would allow all the other servers to be accessed?
What's the best way to approach this?
Thanks!
We use a similar strategy to deploy, though we're not with Rightscale anymore.
I think that approach is generally fine, and I'd be interested to learn what you think is not sensible about it.
If you want to do your SSH thing, then I'd go about it the following way:
Lock down SSH using security groups, e.g. open SSH only to a specific IP or to servers carrying a deploy security group, or similar; a sketch follows after this list. The disadvantage here is that you might lock yourself out when the other servers are down, etc.
I'd put public keys on each instance to allow password-less login. If you're security conscious, rotate those keys on a monthly basis or, for example, when employees leave.
Use Fabric or Capistrano to log into your servers (from the deploy master) using SSH and do your deployment.
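As a sketch of the first point, restricting SSH ingress to a single deploy host could look like this with the AWS CLI (the group ID and CIDR are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32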
Again, I think RightScale's approach is not unique to them; a lot of services do it like that. The reason is that, e.g., when you symlink and keep the previous version around, it's easier to roll back and so on.