Can I pass VCAP_SERVICES to the test stage in the IBM Cloud Continuous Delivery pipeline? - ibm-cloud

In the (unit) test stage I'm running the following commands:
echo "Installing Node Modules"
npm install
echo "Run Unit Tests"
npm run test-mocha
My problem is that I cannot access the VCAP_SERVICES in the test stage (job is set to unit test).
Is there a way to access / pass them?

The only way I see is using the cf CLI over the shell provided in that stage. But that would require authentication, and you certainly do not want to store your user data there.
So one way would be to store the data in the stage's Environment Properties tab. You then have to update that data whenever something changes, because it is not populated from VCAP_SERVICES, but that seems to be how it is for the test stage, at least.

As already mentioned, the best way to use VCAP_SERVICES in the test stage is to set it yourself in the stage's Environment Properties configuration.
The pipeline is the build environment. It needs to be able to run even if the app is not yet deployed or has crashed. We sometimes copy in values from the runtime environment, but the build environment should minimize its dependencies on the runtime environment wherever possible.
There's also the question of the pipeline workers being able to access the runtime services specified in VCAP_SERVICES. For the services I've used in my pipelines it has always worked, but it's not a guaranteed thing.
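For example (a sketch; the service entry below is invented), you could add an Environment Property named VCAP_SERVICES to the test stage and paste in the JSON copied from the app's runtime environment:
VCAP_SERVICES = {"cloudantNoSQLDB":[{"name":"my-db","credentials":{"url":"https://user:pass@host"}}]}
Stage properties are exposed to the job as environment variables, so the existing commands keep working and the tests can read process.env.VCAP_SERVICES just as they would at runtime:
echo "$VCAP_SERVICES" | head -c 80   # sanity check that the property is visible
npm install
npm run test-mocha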

Related

Azure Devops - Manage, Run and Track one-time Sql Scripts

We have a database project that uses a dacpac to deploy schema changes and also allows a pre-deployment and post-deployment script.
However, we frequently have to run one-off scripts, and security would prefer that developers not have write access in prod (we do not have a DBA role at this time). I'm trying to find a solution that works with Azure DevOps to store one-time scripts in git, run each script if it has not been run before, and skip it on subsequent pipeline runs. We'd like this done through DevOps so that the service principal, rather than the developer, has access to run the queries; anything flowing through the pipeline has been through our peer review process, and we have a record of what was executed.
I'm looking for suggestions from anyone who has done this or is aware of any product which can do this.
Use Liquibase. Though I would have it as part of my code base, you can also use it from the CLI and run your scripts with that tool.
Liquibase keeps track of which SQL files you have published across deployments, so you can have multiple stages, say DIT, UAT, STAGING and PROD, and it will apply the remaining one-off SQL changes over time.
Generally, unless you really need support, I doubt you'd need the commercial version. The open-source version is more than sufficient for my needs, and I already have a relatively complex system.
The main reason I like Liquibase over other technologies is that it allows for SQL-based changesets, so the learning curve is a lot lower.
Two tips:
Don't rely on the automatic computation of the logicalFilePath; set it explicitly even if it feels repetitive. This lets you refactor your scripts later, for example grouping them into folders instead of lumping everything into one.
Name your scripts with the date first, so you can leverage the natural sort order. Both tips are illustrated in the sketch below.
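For illustration, a date-first, SQL-formatted changelog file putting both tips into practice might look like this (author, file name and index are invented):
--liquibase formatted sql
--changeset jane.doe:2020-06-01-add-orders-index logicalFilePath:changelogs/2020-06-01-add-orders-index.sql
CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);
--rollback DROP INDEX IX_Orders_CustomerId ON dbo.Orders;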
I've faced a similar problem in the past:
Option 1
If you can afford an additional table in your database to keep track of what has been executed, your problem can be solved easily; there is a tool that helps with this: https://github.com/DbUp/DbUp
Then you would have a new repository, let's call it OneOffSqlScriptsRepository, and your pipeline would consume it:
resources:
  repositories:
    - repository: OneOffSqlScriptsRepository
      endpoint: OneOffSqlScriptsEndpoint
      type: git
You'd then create a pipeline that runs this DbUp application against the scripts from the OneOffSqlScripts repository; DbUp records what it has executed in a journal table in the database, so each script runs only once (this behaviour is configurable).
The username/password for the database can be stored safely in the Library, combined with Azure Key Vault, so only people with the right access rights can read them (apart from the pipeline).
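A sketch of what the consuming pipeline might look like (the variable group name, secret name and DbUp runner project are assumptions for illustration):
variables:
  - group: one-off-sql-secrets   # variable group linked to an Azure Key Vault

steps:
  - checkout: self
  - checkout: OneOffSqlScriptsRepository
  - script: >
      dotnet run --project tools/DbUpRunner --
      --connectionString "$(SqlConnectionString)"
      --scriptsPath "$(Build.SourcesDirectory)/OneOffSqlScriptsRepository/scripts"
    displayName: Run pending one-off scripts with DbUp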
Option 2
This option assumes that you want to do everything using only the native resources Azure Pipelines provides.
Create a OneOffSqlScripts repository as in option 1.
Create a ScriptsRunner repository
In the ScriptsRunner repository, you'd create a folder containing a .json file with the names of the scripts and the number of times (or a boolean indicating whether) they have been run,
e.g.:
[{
  "id": 1,
  "scriptName": "myscript1.sql",
  "runs": 0
}]
(use "hasRun": false instead of a run count if you prefer a boolean)
Then write a Python script that reads and updates the JSON file, incrementing the number of runs; this means you need to update the repository after each pipeline run, i.e. the pipeline performs a git commit / push whenever there were new scripts to run.
The algorithm is roughly as follows; the implementation can be tuned.
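A rough sketch of that script (file names, paths and the sqlcmd invocation are assumptions, not part of the original answer):
import json
import pathlib
import subprocess

registry_path = pathlib.Path("scripts/registry.json")   # the .json file described above
scripts_dir = pathlib.Path("../OneOffSqlScripts")        # checkout of the scripts repository

entries = json.loads(registry_path.read_text())
for entry in entries:
    if entry["runs"] == 0:                                # or: if not entry["hasRun"]
        # run the script with your SQL client of choice; sqlcmd is only an example
        subprocess.run(["sqlcmd", "-i", str(scripts_dir / entry["scriptName"])], check=True)
        entry["runs"] += 1

registry_path.write_text(json.dumps(entries, indent=2))
# the pipeline then commits and pushes registry.json so the next run skips these scripts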

Is there an easy way to run Azure DevOps PowerShell scripts on my local machine?

I tried to find anything on this but I didn't succeed. Maybe I am using the wrong words for the search.
What I am trying to achieve is that I have a script that can run in an Azure DevOps environment as well as on my local machine for debugging purposes. As far as I can see, to execute it locally I would need some kind of wrapper for the script that behaves like the Azure DevOps task does. Does anything like that exist out there?
If you want more control over building your code and want to see intermediate results, you need to install a self-hosted agent on your machine. You can find more info about this here.
Most of the tasks are simply wrappers around console tools that add some sort of authorization or make them visually accessible. It may be useful for you to enable the System.Debug flag on the Microsoft-hosted agent to see more details about what a particular task does; you will then be better able to understand what is happening behind the scenes.
For instance, if you use variables in your script like $(someVariable), with System.Debug set you will see the final script in the log with the values substituted.
Be aware also that secret variables are masked, so you may find *** in the logs instead of the real value.
However, there is no easy way to extract and wrap what a task does so you can repeat it on your machine without involving an Azure DevOps agent.
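That said, a minimal local harness (a sketch; the variable names and script path are placeholders) can get you most of the way for debugging:
# Pipeline variables arrive in the script as environment variables, so set them yourself:
$env:SYSTEM_DEBUG = "true"
$env:BUILD_SOURCESDIRECTORY = "C:\src\myrepo"
$env:MY_VARIABLE = "some-value"

# The $(someVariable) macro syntax is only expanded by the agent, so substitute it
# before running the script locally:
(Get-Content .\build.ps1) -replace '\$\(someVariable\)', $env:MY_VARIABLE |
    Set-Content .\build.local.ps1

.\build.local.ps1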

How to make self updating pipeline in concourse

I would like to make a pipeline whose first step checks its own configuration and updates itself if needed.
What tool / API should I use for this? Is there a Docker image that has this installed for the correct Concourse version? What is the advised way to authenticate with Concourse from such a task?
Regarding the previous answer suggesting the Fly binary, see the Fly resource.
However, having a similar requirement, I am going to try the Pipeline resource. It seems more specific and solves variable injection directly through parameters.
I still have to try it out, but it seems to me that it would be more efficient to have a single pipeline that updates all pipelines, rather than inserting this job into every one of your pipelines.
Also, a specific pipeline should not be concerned with itself, just with the source code it builds (or whatever it does). If you want to start a pipeline when its config file has changed, this could be done by modifying a triggering resource, e.g. by pushing an empty "pipeline changed" commit.
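For example, the trigger could be nothing more than the following, assuming the pipeline already has a git resource watching that repository:
git commit --allow-empty -m "pipeline changed"
git push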
Naively, it'd be a task which gets the repo the pipeline is committed to and does a fly set-pipeline to update the configuration (a sketch of such a task's run script follows the gotchas below). However, there are a few gotchas here:
The fly binary. You'll want the fly executable to be available to the container that runs this task, and it should be the same version of fly as the Concourse being targeted. That probably means you should download it directly from the host via curl.
Authenticating with the Concourse server. You'll need to provide credentials for fly to use -- probably via parameters.
Parameter updates. If new parameters become needed, you'll need some kind of single source for all the parameters that need to be set, and use --load-vars-from rather than just --var. My group uses LastPass notes with a bunch of variables saved in them, downloaded via the lpass tool, but that gets hard if you use 2FA or similar.
Moving the server. You will need the external address of the Concourse to be injected as a parameter as well, if you want to be resilient to it changing.
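Putting those gotchas together, the task's run script could look roughly like this (the target name, repo paths and parameter names are assumptions):
# download fly from the Concourse host itself so the versions always match
curl -sSL "${CONCOURSE_URL}/api/v1/cli?arch=amd64&platform=linux" -o /usr/local/bin/fly
chmod +x /usr/local/bin/fly

fly -t ci login -c "$CONCOURSE_URL" -n main -u "$CONCOURSE_USER" -p "$CONCOURSE_PASSWORD"
fly -t ci set-pipeline -n \
  -p my-pipeline \
  -c pipeline-repo/ci/pipeline.yml \
  --load-vars-from params/credentials.yml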

How to trigger a build within a build chain after x days?

I am currently using Teamcity to deploy a web application to Azure Cloud Services. We typically deploy using powershell scripts to the Staging Slot and thereafter do a manual swap (Staging to Production) on the Azure Portal.
After the swap, we typically leave the Staging slot active with the old production deployment for a few days (in the event we need to revert/backout of the deployment) and thereafter delete it - this is a manual process.
I am looking to automate this process using TeamCity. My intended solution is to have a TeamCity build kick off x days after the deployment build has succeeded (the details of the build steps are irrelevant since I'd probably use PowerShell again to delete the staging slot).
This plan has pointed me to look into Teamcity build chains, snapshot dependencies etc.
What I have done so far is
correctly created the build chain by creating a snapshot dependency on the deployment build configuration and
created a Finish Build Trigger
At the moment, the current approach kicks off the dependent build 'Delete Azure Staging Web' (B) immediately after the deployment build has succeeded. However, I would like this to be a delayed build, x days later.
Looking at the above build chain, I would like build B to run on 13-Aug-2016 at 7.31am (if x=3).
I have looked into the Schedule Trigger option as well, but am slightly lost as to how to use it to achieve this. As far as I understand, a cron expression will result in the build running repeatedly, which is not what I want; I would like build B to execute only once.
Yes, this can be done by making use of the REST API.
I've made a small sample which should convey the fundamental steps. It is a PowerShell script that clears the triggers on another build configuration (determined by a parameter value in the script) and adds a scheduled trigger with a start time X days on from the current time (also determined by a parameter value in the script).
1) Add a PowerShell step at the end of the main build and run add-scheduled-trigger as inline source code
2) Update the parameter values in the script
$BuildTypeId - This is the id of the configuration you want to add the trigger to
$NumberOfDays - This is the number of days ahead that you want to schedule the trigger for
The script has admin / admin embedded in it as the username / password for REST API authentication; replace these with your own credentials.
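A minimal sketch of such a script (the app/rest/buildTypes triggers endpoint is standard, but the schedulingTrigger property names are from memory and should be checked against your TeamCity version):
$TeamCityUrl  = "http://teamcity.example.com"   # placeholder
$BuildTypeId  = "DeleteAzureStagingWeb"          # id of the configuration to add the trigger to
$NumberOfDays = 3
$securePwd = ConvertTo-SecureString "admin" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("admin", $securePwd)

$base  = "$TeamCityUrl/httpAuth/app/rest/buildTypes/id:$BuildTypeId/triggers"
$runAt = (Get-Date).AddDays($NumberOfDays)

# remove any existing triggers on the target configuration
$existing = Invoke-RestMethod -Uri $base -Credential $cred
foreach ($t in @($existing.triggers.trigger)) {
    if ($t) { Invoke-RestMethod -Uri "$base/$($t.id)" -Method Delete -Credential $cred }
}

# add a scheduled trigger for the computed day/time (property names may vary by version)
$body = @"
<trigger type="schedulingTrigger">
  <properties>
    <property name="schedulingPolicy" value="daily"/>
    <property name="hour" value="$($runAt.Hour)"/>
    <property name="minute" value="$($runAt.Minute)"/>
  </properties>
</trigger>
"@
Invoke-RestMethod -Uri $base -Method Post -Body $body -ContentType "application/xml" -Credential $cred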
Once this is done you should see a scheduled trigger created / updated each time you build the first configuration.
Hope this helps

How do I deploy service fabric application from VSTS release pipeline?

I have configured a CI build for a Service Fabric application, in Visual Studio Team Services, according to this documentation: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration
But instead of having my CI build do the publishing, I only perform the Build and Package tasks, and include all Service Fabric related output, such as pkg folder, scripts, publish profiles and application parameters, in the drop. This way I can pass it along to the new Release pipeline (agent-based releases) to do the actual deployment of my service fabric application.
In my release definition I have a single Azure Powershell task, that uses an ARM endpoint (with proper service principals configured).
When I deploy my app to an existing service fabric cluster, I use the default Deploy-FabricApplication cmdlet passing along the pkg folder and a publish profile that is configured with a connection to the existing cluster.
The release fails with the error message "Cluster connection instance is null", and I cannot understand why.
Doing some debugging I have found that:
The Deploy-FabricApplication cmdlet executes the Connect-ServiceFabricCluster cmdlet just fine, but as soon as the Publish-NewServiceFabricApplication cmdlet takes over execution, then the cluster connection is lost.
I would expect that this scenario is possible using the Service Fabric cmdlets, but I cannot figure out how to keep the cluster connection open during deployment.
UPDATE: The link to the documentation no longer refers to the Service Fabric PowerShell scripts, so the precondition for this question is no longer documented. The article now refers to the VSTS build and release tasks, which may be preferred over the PowerShell cmdlets I tried to use.
When the Connect-ServiceFabricCluster function is called (from Deploy-FabricApplication.ps1) a local $clusterConnection variable is set after the call to Connect-ServiceFabricCluster. You can see that using Get-Variable.
Unfortunately there is logic in some of the SDK scripts that expect that variable to be set but because they run in a different scope, that local variable isn't available.
It works in Visual Studio because the Deploy-FabricApplication.ps1 script is called using dot source notation, which puts the $clusterConnection variable in the current scope.
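The scoping behaviour is plain PowerShell; a tiny demo (script name invented) shows the difference:
# child.ps1 contains a single line:  $clusterConnection = 'connected'

.\child.ps1            # normal invocation: the variable lives and dies in the child's scope
$clusterConnection     # -> $null

. .\child.ps1          # dot-sourced: child.ps1 runs in the caller's scope
$clusterConnection     # -> 'connected'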
I'm not sure if there is a way to use dot sourcing when running a script through the release pipeline, but as a workaround you could make the $clusterConnection variable global right after it has been set by the Connect-ServiceFabricCluster call. Edit your Deploy-FabricApplication.ps1 script and add the following line after the connection logic (~line 169):
$global:clusterConnection = $clusterConnection
By the way, you might want to consider setting up custom build/release tasks that deploy a Service Fabric application, rather than using the various Deploy-FabricApplication.ps1 scripts.
There is now a built-in VSTS task for deploying a Service Fabric app, so you no longer need to bother with executing the PowerShell script on your own. The task documentation page is at https://www.visualstudio.com/docs/build/steps/deploy/service-fabric-deploy. The original CI article has also been updated and provides details on how to set everything up: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration/.
Try to use "PowerShell" task instead of "Azure PowerShell" task.
I hit the same bug today and opened a GitHub issue here
On a side note, the VS-generated script Deploy-FabricApplication.ps1 uses the module
"$((Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Service Fabric SDK" -Name "FabricSDKPSModulePath").FabricSDKPSModulePath)\ServiceFabricSDK.psm1"
That's where Publish-NewServiceFabricApplication comes from. You can check the deployment logic and rewrite it in a saner way using lower-level Service Fabric SDK cmdlets (potentially getting the connection via Get-ServiceFabricClusterConnection instead of making it global).
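A rough sketch of that lower-level flow (the endpoint, paths, names and version are placeholders; check the parameter names against your SDK):
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.westeurope.cloudapp.azure.com:19000'

# copy the package to the image store, register the application type, then create the app
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\drop\pkg' `
    -ImageStoreConnectionString 'fabric:ImageStore' `
    -ApplicationPackagePathInImageStore 'MyApp'
Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyApp'
New-ServiceFabricApplication -ApplicationName 'fabric:/MyApp' `
    -ApplicationTypeName 'MyAppType' -ApplicationTypeVersion '1.0.0'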