How to deploy an OpsWorks application with CloudFormation?

In a CloudFormation template, I create an OpsWorks stack, a layer, an instance, and an application. The template sets up and configures the instance with a Chef cookbook of recipes and scripts. How can I deploy the application automatically from the template, without manually clicking Deploy inside the stack? After the deployment, the Deploy recipes defined in the cookbook are executed. The layer looks like this:
"MyLayer": {
"Type": "AWS::OpsWorks::Layer",
"DependsOn" : "OpsWorksServiceRole",
"Properties": {
"AutoAssignElasticIps" : false,
"AutoAssignPublicIps" : true,
"CustomRecipes" : {
"Setup" : ["cassandra::setup","awscli::setup","settings::setup"],
"Deploy": ["imports::deploy"]
},
"CustomSecurityGroupIds" : { "Ref" : "SecurityGroupIds" },
"EnableAutoHealing" : true,
"InstallUpdatesOnBoot": false,
"LifecycleEventConfiguration": {
"ShutdownEventConfiguration": {
"DelayUntilElbConnectionsDrained": false,
"ExecutionTimeout": 120 }
},
"Name": "script-node",
"Shortname" : "node",
"StackId": { "Ref": "MyStack" },
"Type": "custom",
"UseEbsOptimizedInstances": true,
"VolumeConfigurations": [ {
"Iops": 10000,
"MountPoint": "/dev/sda1",
"NumberOfDisks": 1,
"Size": 20,
"VolumeType": "gp2"
}]
}
}
An application looks like this:
Any idea? Thank you.

The CreateDeployment API call generates a one-off event that executes the Deploy actions within your OpsWorks stack. I don't think any official CloudFormation resource maps to this directly, but here are some ideas on how to call it within the context of a CloudFormation template:
Write a Custom Resource that calls CreateDeployment (e.g., via the AWS SDK for Node.js) when created; see the sketch after these options.
Add an AWS::CodePipeline::Pipeline resource to your template that's configured to deploy your OpsWorks app as part of a Deploy Stage. See Using AWS CodePipeline with AWS OpsWorks Stacks for documentation on this integration. (Though it's an extra service + layer of complexity, I think CodePipeline is a better layer of abstraction for modeling deployment actions in your application stack anyway.)
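For the Custom Resource option, here is a minimal sketch of the backing Lambda handler, written with Python/boto3 rather than the Node.js SDK mentioned above. The stack and app IDs arrive as resource properties; all names are illustrative, not a drop-in implementation:

# Hypothetical Lambda handler backing a CloudFormation Custom Resource that
# triggers an OpsWorks deployment when the resource is created or updated.
import boto3
import cfnresponse  # helper module AWS provides for inline Lambda-backed custom resources

opsworks = boto3.client("opsworks")

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            props = event["ResourceProperties"]  # e.g. {"StackId": "...", "AppId": "..."}
            opsworks.create_deployment(
                StackId=props["StackId"],
                AppId=props["AppId"],
                Command={"Name": "deploy"},
            )
        # Nothing to clean up on Delete; report success either way.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})

The template would then declare a Custom::OpsWorksDeployment resource whose ServiceToken points at this function, passing StackId and AppId as properties, with a DependsOn on the app and instance resources so the deployment runs last.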

I believe this can be done within the recipes: in your recipe you'd have a function that validates the app name and, if the app exists, proceeds with the deployment.
For example, your deploy recipe would look something like this:
if validator(node[:app][:name]) == true
  # do whatever the deployment requires here
end
and the validator function can live in your Chef library:
def validator(app_name)
  # aws_opsworks_app is the search index OpsWorks populates with app data
  app = search("aws_opsworks_app", "name:#{app_name}").first
  if app && app[:deploy]
    Chef::Log.warn("PROCEEDING: Deploy initiated for #{app[:name]}")
    return true
  end
  false
end

Dynamically create Step Function state machines locally from CFN template

Goal
I am trying to dynamically create state machines locally from generated CloudFormation (CFN) templates. I need to be able to do so without deploying to an AWS account or creating the definition strings manually.
Question
How do I "build" a CFN template into a definition string that can be used locally?
Is it possible to achieve my original goal? If not, how are others successfully testing SFN locally?
Setup
I am using Cloud Development Kit (CDK) to write my state machine definitions and generating CFN json templates using cdk synth. I have followed the instructions from AWS here to create a local Docker container to host Step Functions (SFN). I am able to use the AWS CLI to create, run, etc. state machines successfully on my local SFN Docker instance. I am also hosting a DynamoDB Docker instance and using sam local start-lambda to host my lambdas. This all works as expected.
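For reference, creating and running a machine against the local endpoint looks roughly like this; this is a sketch with boto3 rather than the CLI, and the port, dummy credentials, role ARN, and trivial definition are placeholder assumptions:

# Sketch: talk to the Step Functions Local container instead of the AWS service.
import json
import boto3

sfn = boto3.client(
    "stepfunctions",
    endpoint_url="http://localhost:8083",    # Step Functions Local default port
    region_name="us-east-1",
    aws_access_key_id="dummy",               # the local container does not validate credentials
    aws_secret_access_key="dummy",
)

definition = {"StartAt": "Step1", "States": {"Step1": {"Type": "Pass", "End": True}}}

machine = sfn.create_state_machine(
    name="local-test",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/DummyRole",  # not checked locally
)
sfn.start_execution(stateMachineArn=machine["stateMachineArn"], input="{}")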
To make local testing easier, I have written a series of bash scripts that dynamically parse the CFN templates and create JSON input files by calling the AWS CLI. This works successfully for simple state machines with no references (no lambdas, resources from other stacks, etc.). The issue arises when I want to create and test a more complicated state machine. A state machine DefinitionString in my generated CFN templates looks something like:
{'Fn::Join': ['', ['{
  "StartAt": "Step1",
  "States": {
    "Step1": {
      "Next": "Step2",
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException"
          ],
          "IntervalSeconds": 2,
          "MaxAttempts": 6,
          "BackoffRate": 2
        }
      ],
      "Type": "Task",
      "Resource": "arn:', {'Ref': 'AWS::Partition'}, ':states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "', {'Fn::ImportValue': 'OtherStackE9E150CFArn77689D69'}, '",
        "Payload.$": "$"
      }
    },
    "Step2": {
      "Next": "Step3",
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException"
          ],
          "IntervalSeconds": 2,
          "MaxAttempts": 6,
          "BackoffRate": 2
        }
      ],
      "Type": "Task",
      "Resource": "arn:', {'Ref': 'AWS::Partition'}, ':states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "', {'Fn::ImportValue': 'OtherStackE9E150CFArn77689D69'}, '",
        "Payload.$": "$"
      }
    }
  },
  "TimeoutSeconds": 10800
}']]}
Problem
The AWS CLI does not accept this: the definition must be a plain string, CFN intrinsic functions like 'Fn::Join' are not supported, and references such as {'Ref': 'AWS::Partition'} are not allowed in the definition string.
There is not going to be any magic here to get this done. The CDK renders CloudFormation, and that CloudFormation is not truly ASL, as it contains references to other resources, as you pointed out.
One direction you could go would be to deploy the SFN to a sandbox stack, let CFN dereference all the values and produce the SFN ASL in the service, then re-extract that ASL for local testing.
It's hacky, but I don't know any other way to do it, unless you want to start writing parsers that turn all those JSON intrinsics (like Fn::Join) into static strings.
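If you go the sandbox-stack route, re-extracting the resolved ASL is a single API call once the stack is deployed. A rough sketch with boto3, where the state machine ARN and output file name are placeholders:

# Sketch: pull the fully dereferenced ASL from a deployed (sandbox) state machine
# so it can be fed to Step Functions Local for testing.
import boto3

sfn = boto3.client("stepfunctions")
resp = sfn.describe_state_machine(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:my-sandbox-machine"
)

with open("machine.asl.json", "w") as fh:
    fh.write(resp["definition"])   # plain ASL; Fn::Join and Ref have been resolved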

Need to report on data from Retrospectives - Azure DevOps

We need an automated way (either a REST API or some SDK) to access the data contained within the Retrospectives Azure DevOps extension. Currently there is an option to export a CSV, but the process is manual and limited to each retrospective. Any ideas/thoughts?
You can try the following steps:
Run the API to get information about the project teams in the project.
Request URL
POST https://dev.azure.com/{organization_Name}/_apis/Contribution/HierarchyQuery?api-version=5.0-preview.1
Request Body
{
  "contributionIds": ["ms.vss-admin-web.org-admin-groups-data-provider"],
  "dataProviderContext": {
    "properties": {
      "teamsFlag": true,
      "sourcePage": {
        "url": "https://dev.azure.com/{organization_Name}/{project_Name}/_settings/teams",
        "routeId": "ms.vss-admin-web.project-admin-hub-route",
        "routeValues": {
          "project": "{project_Name}",
          "adminPivot": "teams",
          "controller": "ContributedPage",
          "action": "Execute",
          "serviceHost": "{organization_Id} ({organization_Name})"
        }
      }
    }
  }
}
Run the API to list the retrospectives for a specified project team in the project.
GET https://extmgmt.dev.azure.com/{organization_Name}/_apis/ExtensionManagement/InstalledExtensions/ms-devlabs/team-retrospectives/Data/Scopes/Default/Current/Collections/{projectTeam_identityId}/Documents?api-version=3.1-preview.1
Run the API to get more details about a specified retrospective.
GET https://extmgmt.dev.azure.com/{organization_Name}/_apis/ExtensionManagement/InstalledExtensions/ms-devlabs/team-retrospectives/Data/Scopes/Default/Current/Collections/{retrospective_Id}?api-version=3.1-preview.1
However, there is no available interface (API or CLI) to export the CSV content.
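For what it's worth, here is a rough sketch of calling the retrospectives listing endpoint from Python with the requests library and a personal access token; the organization name, team identity id, and PAT are placeholders, and the endpoint shape is taken as-is from the calls above:

# Sketch: list retrospective documents for a project team via the extension data API.
import requests

ORG = "my-org"                                    # assumption: your organization name
TEAM_ID = "00000000-0000-0000-0000-000000000000"  # assumption: projectTeam_identityId
PAT = "..."                                       # personal access token

base = (
    f"https://extmgmt.dev.azure.com/{ORG}/_apis/ExtensionManagement"
    "/InstalledExtensions/ms-devlabs/team-retrospectives"
    "/Data/Scopes/Default/Current"
)

resp = requests.get(
    f"{base}/Collections/{TEAM_ID}/Documents",
    params={"api-version": "3.1-preview.1"},
    auth=("", PAT),   # Azure DevOps accepts an empty username with a PAT as basic auth
)
resp.raise_for_status()
for doc in resp.json().get("value", []):
    print(doc)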

Deployed Keycloak Script Mapper does not show up in the GUI

I'm using the docker image of Keycloak 10.0.2. I want Keycloak to supply access_tokens that can be used by Hasura. Hasura requires custom claims like this:
{
"sub": "1234567890",
"name": "John Doe",
"admin": true,
"iat": 1516239022,
"https://hasura.io/jwt/claims": {
"x-hasura-allowed-roles": ["editor","user", "mod"],
"x-hasura-default-role": "user",
"x-hasura-user-id": "1234567890",
"x-hasura-org-id": "123",
"x-hasura-custom": "custom-value"
}
}
Following the documentation and using a script I found online (see this gist), I created a Script Mapper JAR with this script (copied verbatim from the gist) in hasura-mapper.js:
var roles = [];
for each (var role in user.getRoleMappings()) roles.push(role.getName());
token.setOtherClaims("https://hasura.io/jwt/claims", {
"x-hasura-user-id": user.getId(),
"x-hasura-allowed-roles": Java.to(roles, "java.lang.String[]"),
"x-hasura-default-role": "user",
});
and the following keycloak-scripts.json in META-INF/:
{
  "mappers": [
    {
      "name": "Hasura",
      "fileName": "hasura-mapper.js",
      "description": "Create Hasura Namespaces and roles"
    }
  ]
}
The Keycloak debug log indicates it found the JAR and deployed it successfully.
But what's the next step? I can't find the deployed mapper anywhere in the GUI, so how do I activate it? I tried creating a protocol Mapper, but the option 'Script Mapper' is not available. And Scopes -> Evaluate generates a standard access token.
How do I activate my deployed protocol mapper?
Of course after you put up a question on SO you still keep searching, and I finally found the answer in this JIRA issue. The scripts feature has been a preview feature since (I think) version 8.
So when starting Keycloak you need to provide:
-Dkeycloak.profile.feature.scripts=enabled
and after that your Script Mapper will show up in the Mapper Type dropdown on the Create Mapper screen, and everything works.

How to configure Mattermost Plugins

I have deployed Mattermost Team Edition from the Helm chart onto my k8s cluster and it's working great.
The issue is that the config.json file is mounted as a secret, so configuration can't be done from the UI; it has to be done in the config.json that is part of values.yaml in the Helm chart.
How does one configure plugins? For starters, I would like to enable the Zoom plugin:
configJSON: {
  "PluginSettings": {
    "Enable": true,
    "EnableUploads": true,
    "Directory": "./plugins",
    "ClientDirectory": "./client/plugins",
    "Plugins": {},
    "PluginStates": {
      "zoom": {
        "Enable": true
      },
      "com.mattermost.nps": {
        "Enable": false
      },
      "mattermost-webrtc-video": {
        "Enable": true
      },
      "github": {
        "Enable": true
      },
      "jira": {
        "Enable": true
      }
    }
  }
}
Is this the right way of enabling the plugins?
How do I configure the plugins, especially the Zoom one, which needs API credentials?
I see two options:
The safe way
Run another Mattermost server instance locally (for example using the Mattermost preview Docker image, which is very easy to set up), configure your plugins there, and copy its configuration file section over to your cluster instances.
The manual, error-prone way
Edit the config.json yourself as you've started. For each plugin, there are two sections to edit, Plugins and PluginStates:
"PluginSettings": {
// [...]
"Plugins": {
"your.plugin.id": {
"pluginProperty1": "...",
"pluginProperty2": "...",
"pluginProperty3": "...",
// [...]
},
},
"PluginStates": {
// [...]
"your.plugin.id": {
"Enable": true
},
}
}
As you can see, this requires knowing what properties are defined for each plugin, and the only way to find out is to consult the plugin's documentation, or even its code (look for a file called plugin.json at the root of the plugin's GitHub repo, in the settings section).
I would recommend the first method if there's really no way for you to use the GUI to install & configure plugins.
For other readers' information, in most Mattermost setups, you should be able to use the UI for this, even in High Availability Mode if your version is recent enough.
Add the following to your values.yaml:
config:
  MM_PLUGINSETTINGS_CLIENTDIRECTORY: "./client/plugins"
  MM_PLUGINSETTINGS_ENABLEUPLOADS: "true"

Automatically schedule future deployment in Octopus

Update: I found that executing a script on the Octopus Server is now available in version 3.3. I haven't updated my Octopus yet, but I'll assume that works as designed. I'm still wondering if there is a better way to do this without octo.exe.
The task I'm trying to accomplish is: after each successful production deployment, automatically schedule a DR deployment to happen in the next 24 hours.
My desired approach is to have Octopus do it.
I added a new Octopus step at the end of the deployment that only runs upon success of the previous step. In that step I attempted to use octo deploy-release --deployAt (documented here).
My challenge is that a script step requires me to pick a target role, which means it will be executed on a Tentacle, and the presence of Octo.exe is required there.
I tried to create my own Octopus step template, but a deployment target role is still required in my customized step.
{
  "Id": "ActionTemplates-2",
  "Name": "Octopus - Schedule Deployment",
  "Description": "Schedule a future octopus deployment",
  "ActionType": "Octopus.Script",
  "Version": 3,
  "Properties": {
    "Octopus.Action.Script.Syntax": "PowerShell",
    "Octopus.Action.Script.ScriptBody": "--hide--"
  },
  "SensitiveProperties": {},
  "Parameters": [
    {
      "Name": "OctoPath",
      "Label": "Path for Octo.exe",
      "HelpText": "Location for octo.exe",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "projName",
      "Label": "Project Name",
      "HelpText": "The name of the project that should be deployed",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "days",
      "Label": "Days",
      "HelpText": "The number of days in the future this deployment should happen",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "hours",
      "Label": "Hours",
      "HelpText": "The number of hours in the future this deployment should happen",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "env",
      "Label": "Environment to deploy",
      "HelpText": "The environment where the next deployment should happen",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    }
  ],
  "$Meta": {
    "ExportedAt": "2016-04-20T13:58:54.263Z",
    "OctopusVersion": "3.2.0",
    "Type": "ActionTemplate"
  }
}
Is there a way to alter the template to get rid of the role selection and have the Octopus Server execute it directly, as it does for an Azure script step?
Is there any other way to have the Octopus Server automatically schedule the deployment without external help? I guess this goes back to the first problem: I may still need Octopus to run something on the server side.
Note: We kick off production deployments manually, so I don't have another tool waiting for the response of the deployment. I think it is possible to have a process regularly poll the last deployment, do some analysis, and then schedule a new deployment accordingly, but this is not as clean as having Octopus do it directly. Injecting octo.exe onto a random production machine is not desirable at all.
You could create a new WebAPI project in C#, pull in the Octopus.Deploy NuGet package,
write code that accepts HTTP requests, and deal with the scheduling logic there.
Host that project on the same server as the Octopus Server itself. It should be a 20-30 minute job to set the website up in IIS.
In your deployment process, add a step that makes the HTTP request, and you're done. You could go one step further and have the site/service listen for every successful deployment and make decisions based on that, so that other projects don't have to add extra steps to their Octopus deployment process.
As you said, polling is also a viable option.
Alternatively, if you're on Octopus Deploy 3.0, it already exposes a REST API. I am not sure whether it's powerful enough to let you create a scheduled deployment, but you could explore it: https://github.com/OctopusDeploy/OctopusDeploy-Api/wiki/Releases
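If the REST API route pans out: deployments in the API have a QueueTime field, which, as far as I know, is what octo.exe --deployAt sets under the hood. A rough sketch with Python's requests, where the server URL, API key, and release/environment IDs are placeholders (verify the field names against the API docs for your version):

# Sketch: queue a DR deployment 24 hours from now via the Octopus REST API.
from datetime import datetime, timedelta, timezone
import requests

OCTOPUS_URL = "https://octopus.example.com"   # assumption: your Octopus Server URL
API_KEY = "API-XXXXXXXXXXXXXXXX"              # assumption: API key of a service account

deployment = {
    "ReleaseId": "Releases-123",              # placeholder: release that just went to production
    "EnvironmentId": "Environments-42",       # placeholder: the DR environment
    "QueueTime": (datetime.now(timezone.utc) + timedelta(hours=24)).isoformat(),
}

resp = requests.post(
    f"{OCTOPUS_URL}/api/deployments",
    json=deployment,
    headers={"X-Octopus-ApiKey": API_KEY},
)
resp.raise_for_status()
print("Scheduled deployment:", resp.json()["Id"])

A call like this could run from the server-side script step mentioned in the question's update, or from a small external service, without putting octo.exe on any Tentacle.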
I agree that floating octo.exe around on production servers is a bad idea. It might get out of sync, and your production servers shouldn't have to deal with this.