How to make a self-updating pipeline in Concourse

I would like to make a pipeline that, as its first step, checks its own configuration and updates itself if needed.
What tool / API should I use for this? Is there a Docker image that has this installed for the correct Concourse version? And what is the advised way to authenticate against Concourse from such a task?

Regarding the previous answer suggesting the fly binary, see the Fly resource.
However, having a similar need, I am going to try the Pipeline resource instead. It seems more specific, and it solves var injection directly through parameters.
I still have to try it out, but it seems to me that it would be more efficient to have a single pipeline which updates all pipelines, rather than inserting this job into every one of your pipelines.
Also, a specific pipeline should not be concerned with itself, just the source code it builds (or whatever it does). If you want to start a pipeline when its config file has changed, this could be done by modifying a triggering resource, e.g. pushing an empty "pipeline changed" commit.

Naively, it'd be a task which gets the repo the pipeline is committed to and does a fly set-pipeline to update the configuration. However, there are a few gotchas here (a sketch pulling them together follows the list):
fly binary. You'll want the fly executable to be available in the container that runs this task, and it should be the same version of fly as the Concourse being targeted. That probably means you should download it directly from the host via curl.
Authenticating with the Concourse server. You'll need to provide credentials for fly to use -- probably via parameters.
Parameter updates. If new parameters become needed, you'll need some kind of single source for all the parameters that need to be set, and use --load-vars-from rather than just --var. My group uses LastPass notes with a bunch of variables saved in them, downloaded via the lpass tool, but that gets hard if you use 2FA or similar.
Moving the server. You will also need the external address of the Concourse to be injected as a parameter, if you want to be resilient to it changing.
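
Putting those gotchas together, here is a minimal sketch of what such a self-update step could run inside its task container, written as a Python script purely for illustration. The environment variable names, target alias, team, pipeline name, and file paths are assumptions to adapt, not an official recipe.

import os
import stat
import subprocess

# All of these would be injected as pipeline parameters (see the gotchas above).
concourse_url = os.environ["CONCOURSE_EXTERNAL_URL"]
team = os.environ.get("CONCOURSE_TEAM", "main")
username = os.environ["CONCOURSE_USERNAME"]
password = os.environ["CONCOURSE_PASSWORD"]

# Download fly from the targeted Concourse itself, so the CLI version always matches the server.
subprocess.run(
    ["curl", "-fsSL", "-o", "fly",
     f"{concourse_url}/api/v1/cli?arch=amd64&platform=linux"],
    check=True,
)
os.chmod("fly", os.stat("fly").st_mode | stat.S_IXUSR)

# Authenticate, then update the pipeline from the repo pulled in as a task input.
subprocess.run(
    ["./fly", "-t", "self", "login", "-c", concourse_url, "-n", team,
     "-u", username, "-p", password],
    check=True,
)
subprocess.run(
    ["./fly", "-t", "self", "set-pipeline", "-n",
     "-p", "my-pipeline",                                 # hypothetical pipeline name
     "-c", "pipeline-repo/ci/pipeline.yml",               # hypothetical path in the input repo
     "--load-vars-from", "pipeline-repo/ci/params.yml"],  # single source for all parameters
    check=True,
)

Passing -n keeps set-pipeline non-interactive, and loading all vars from a single file (or a note fetched from LastPass or similar) is what keeps newly added parameters from silently going unset.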

Related

Azure DevOps - Manage, Run and Track one-time SQL Scripts

We have a database project that uses a dacpac to deploy schema changes and also allows a pre-deployment and post-deployment script.
However, we frequently have to run one-off scripts, and security would prefer that developers not have write access in prod (we do not have a DBA role at this time). I'm trying to find a solution that would work with Azure DevOps to store one-time-run scripts in git, run a script if it has not been run before, and not run it the next time the pipeline runs. We'd like this done through DevOps so the service principal has access to run the queries rather than the dev, anything flowing through the pipe has been through our peer review process, and we have a record of what was executed.
I'm looking for suggestions from anyone who has done this or is aware of any product which can do it.
Use Liquibase. Though I would have it as part of my code base, you can also use it from the CLI and run your scripts with that tool.
Liquibase keeps track of which SQL files you have published across deployments, so you can have multiple stages, say DIT, UAT, STAGING, and PROD, and it can apply the remaining one-off SQL changes over time.
Generally, unless you really need support, I doubt you'd need the commercial version. The open-source version is more than sufficient for my needs, and I already have a relatively complex system.
The main reason I like Liquibase over other technologies is that it allows for SQL-based changesets, so the learning curve is a lot lower.
Two tips:
Don't rely on the automatic computation of the logicalFilePath; set it explicitly, even if it means repeating yourself. This lets you refactor your scripts later, so instead of lumping everything into a single folder you can group them.
Name your scripts with the date first. That way you can leverage the natural sort order.
I've faced a similar problem in the past:
Option 1
If you can afford an additional table in your database to keep track of what was executed or not, your problem can be easily solved; there is a tool which helps with this: https://github.com/DbUp/DbUp
Then you would have a new repository, let's call it OneOffSqlScriptsRepository, and your pipeline would consume this repository:
resources:
  repositories:
    - repository: OneOffSqlScriptsRepository
      endpoint: OneOffSqlScriptsEndpoint
      type: git
Thus you'd create a pipeline to run this DbUp application, consuming the scripts from the OneOffSqlScripts repository; DbUp keeps a journal table in the database so each script is executed only once (this is configurable).
The username/password for the database can be stored safely in a variable library combined with Azure Key Vault, so only people with the right access rights can read them (apart from the pipeline).
Option 2
This option assumes that you want to do everything using only the native resources Azure Pipelines provides.
Create a OneOffSqlScripts repository as in option 1.
Create a ScriptsRunner repository.
In the ScriptsRunner repository, you'd create a folder containing a .json file with the names of the scripts and the number of times you've run them (or a boolean such as hasRun), e.g.:
[{
  "id": 1,
  "scriptName": "myscript1.sql",
  "runs": 0
}]
Then write a Python script that reads and writes this json file, updating the number of runs; you'd then need to update your repository after each pipeline run. This means your pipeline will perform a git commit / push operation after each run in which there were new scripts to run.
The algorithm is along these lines (a sketch follows); the implementation can be tuned.
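
As a rough illustration of that algorithm, here is a minimal sketch of such a Python script. The file and folder names, the JSON shape from the example above, and the run_sql stub are assumptions; the actual SQL execution and the git commit/push would still happen in your pipeline.

import json
from pathlib import Path

TRACKING_FILE = Path("ScriptsRunner/scripts.json")   # hypothetical path to the .json above
SCRIPTS_DIR = Path("OneOffSqlScripts")               # checkout of the scripts repository

def run_sql(script_path: Path) -> None:
    # Placeholder: execute the script against the target database, e.g. via
    # sqlcmd in a separate pipeline step or a DB driver here.
    print(f"running {script_path}")

def main() -> None:
    entries = json.loads(TRACKING_FILE.read_text())
    for entry in entries:
        if entry["runs"] == 0:                       # or: not entry["hasRun"]
            run_sql(SCRIPTS_DIR / entry["scriptName"])
            entry["runs"] += 1                       # record that it has been executed
    # Write the updated counts back so the pipeline can commit/push the change.
    TRACKING_FILE.write_text(json.dumps(entries, indent=2) + "\n")

if __name__ == "__main__":
    main()

Committing the updated scripts.json back after each run is what prevents the same script from being picked up again next time.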

Is there an easy way to run Azure DevOps PowerShell scripts on my local machine?

I tried to find anything on this, but I didn't succeed. Maybe I am using the wrong words for the search.
What I am trying to achieve is to have a script that can run in an Azure DevOps environment as well as on my local machine for debugging purposes. As far as I can see, to execute it locally I would need some kind of wrapper for the script that behaves like the Azure DevOps task does. Does anything like that exist out there?
If you want to have more control over building your code and be able to see intermediate results, you need to install a self-hosted agent on your machine. Here you can find more info about this.
Most of the tasks are simply wrappers around console tools which add a sort of authorization or make them visually accessible. It may also be useful for you to enable the System.Debug flag on the Microsoft-hosted agent to see more details of what a particular task does, and thus better understand what is happening behind the scenes.
For instance, if you use variables in your script like $(someVariable), with System.Debug set you will see your final script in the log with the values substituted.
Be aware also that secret variables are masked, so you may find *** in the logs instead of the real value.
However, there is no easy way to just extract and wrap what a task does so you can repeat it on your machine without involving the Azure DevOps agent.

Can I trap the Informatica error when the Amazon S3 bucket name doesn't match standards?

In Informatica we have mapping source qualifiers connecting to Amazon Web Services (AWS).
We often, and erratically, get a failure saying that our S3 bucket names do not comply with naming standards. When we restart the workflows, they continue on successfully every time.
Is there a way to trap for this error specifically and then maybe call a command object to restart the workflow via pmcmd?
How are you starting the workflows in regular runs?
If you are using a shell script, you can add logic to restart when you see a particular error. I created a script a while ago to restart workflows for one particular error.
In a nutshell it works like this (a sketch follows the steps):
start workflow (with pmcmd)
#in case of an error
check repository db and get the error
if the error is specific to s3 bucket name
restart the workflow
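
As a rough sketch of that logic in Python (the original used a shell script), something like the following could work. The pmcmd connection details, workflow name, and the exact error text to match are assumptions, and checking the captured pmcmd output stands in for querying the repository database.

import subprocess

# Hypothetical connection details and workflow name.
PMCMD = ["pmcmd", "startworkflow", "-sv", "IS_NAME", "-d", "DOMAIN_NAME",
         "-u", "USER", "-p", "PASSWORD", "-f", "FOLDER", "-wait", "wf_my_s3_load"]
S3_ERROR_MARKER = "bucket name"   # replace with the exact naming-standards error text you see

def error_is_s3_naming(output: str) -> bool:
    # The original approach checked the repository DB for the session error;
    # matching on the captured output is a simpler stand-in.
    return S3_ERROR_MARKER.lower() in output.lower()

def run_with_retry(max_retries: int = 2) -> int:
    result = None
    for _ in range(max_retries + 1):
        result = subprocess.run(PMCMD, capture_output=True, text=True)
        if result.returncode == 0:
            return 0                                  # workflow succeeded
        if not error_is_s3_naming(result.stdout + result.stderr):
            break                                     # a different failure: don't blindly restart
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_with_retry())

Restarting only on that specific error keeps genuine failures from being masked by blind retries.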
Well... It's possible, for example, to have one workflow (W1):
your_session --> cmd_touch_file_if_session_failed
and another workflow (W2), running continuously:
event_wait_for_W1_file --> pmcmd_restart_W1 --> delete_watch_file
Although it would be a lot better to nail down the cause of your failures and get that resolved.

How to run a remote PowerShell script in a VSTS release only if the script exists?

In VSTS release management there is a nice remote PowerShell task with which we can run a script on the target machine. However, I'd need a way to tell release management to run this file only if it exists, and otherwise silently ignore it.
I know I can configure a task not to block the process in case of error, but in that case there will still be an exclamation mark in the log and the deployment will get the partially succeeded status. I'd like to avoid this and show success even if the file doesn't exist.
I need this to support a kind of optional setup script for several deployed products.
There isn't a setting or feature in VSTS to check whether the script file exists.
A simple workaround is to create another script that calls the target script:
Create another script (e.g. wrapperScript.ps1) that calls the target script (it can use a parameter to accept the target script path) and add it to source control
Add a Windows Machine File Copy task to copy wrapperScript.ps1 to the target machine
Add a Remote PowerShell task to run wrapperScript.ps1
If you make your script more robust with a guard clause, it can be called regardless of any given environmental condition, which keeps your pipeline less complicated. You can take action on the "file exists" leg and do a no-op on the other, and signal to the release process either way with log entries.

Using Partial Configuration without hardcoding the configuration name in the LCM properties

I would like to combine a few small DSC configurations into one MOF file. I know there is something like Partial Configuration in PowerShell v5; however, to use this feature I have to reconfigure the LCM on the target node every time the number of configurations changes (which is impossible, because I want to configure the LCM manually only once, at the first DSC configuration).
Unfortunately, DSC does not allow reconfiguring the LCM via a DSC resource, which means I cannot change this setting in "pull mode" on the local machine.
I'm still wondering why the LCM does not support "*" inside the PartialConfigurations property, when it could be very useful, especially since every configuration uses a GUID anyway (*.GUID.MOF).
Have you ever found any solution to work around this problem?
Thanks in advance
DSC doesn't require all partial configuration fragments to be available at the time the configuration is applied, so you can still register many partial configurations in the LCM ahead of time; they may become available at some later point. This gives you some flexibility to avoid modifying the LCM settings every time you need to add another partial configuration. I would also suggest opening a UserVoice request at https://windowsserver.uservoice.com/forums/301869-powershell/category/148047-desired-state-configuration-dsc for:
Allowing '*' in partial configurations.
Allowing the meta-configuration to be updated from the pull server.