Can I update Windows ClientIdentities after cluster creation?

I currently have something like this in my clusterConfig.json file.
"ClientIdentities": [
{
"Identity": "{My Domain}\\{My Security Group}",
"IsAdmin": true
}
]
My questions are:
My cluster is stood up and running. Can I add a second security group to this cluster while it is running? I've searched through the PowerShell commands and didn't see one that matched this, but I may have missed it.
If I can't do this while the cluster is running, do I need to delete the cluster and recreate it? If I do need to recreate, I'm zeroing in on the word ClientIdentities. I'm assuming I can have multiple identities, and my config should look something like:
"ClientIdentities": [
  {
    "Identity": "{My Domain}\\{My Security Group}",
    "IsAdmin": true
  },
  {
    "Identity": "{My Domain}\\{My Second Security Group}",
    "IsAdmin": false
  }
]
Thanks,
Greg

Yes, it is possible to update ClientIdentities once the cluster is up using a configuration upgrade.
Create a new JSON file with the added client identities.
Modify the clusterConfigurationVersion in the JSON config.
Run Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "Path to new JSON"
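A sketch of the full flow (the cmdlets are from the Service Fabric PowerShell module; the endpoint and file paths are illustrative):

# Connect to the cluster (parameters depend on your security setup)
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000" -WindowsCredential

# Start the upgrade with the JSON that contains the new ClientIdentities
# (remember to bump clusterConfigurationVersion in that file first)
Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "C:\clusterConfig.updated.json"

# Monitor progress until the upgrade completes
Get-ServiceFabricClusterConfigurationUpgradeStatus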

Dynamically create Step Function state machines locally from CFN template

Goal
I am trying to dynamically create state machines locally from generated CloudFormation (CFN) templates. I need to be able to do so without deploying to an AWS account or creating the definition strings manually.
Question
How do I "build" a CFN template into a definition string that can be used locally?
Is it possible to achieve my original goal? If not, how are others successfully testing SFN locally?
Setup
I am using the Cloud Development Kit (CDK) to write my state machine definitions and generating CFN JSON templates using cdk synth. I have followed the instructions from AWS here to create a local Docker container to host Step Functions (SFN). I am able to use the AWS CLI to create, run, etc. state machines successfully on my local SFN Docker instance. I am also hosting a DynamoDB Docker instance and using sam local start-lambda to host my lambdas. This all works as expected.
To make local testing easier, I have written a series of bash scripts to dynamically parse the CFN templates and create JSON input files by calling the AWS CLI. This works successfully for simple state machines with no references (no lambdas, resources from other stacks, etc.). The issue arises when I want to create and test a more complicated state machine. A state machine DefinitionString in my generated CFN templates looks something like:
{'Fn::Join': ['', ['{
  "StartAt": "Step1",
  "States": {
    "Step1": {
      "Next": "Step2",
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException"
          ],
          "IntervalSeconds": 2,
          "MaxAttempts": 6,
          "BackoffRate": 2
        }
      ],
      "Type": "Task",
      "Resource": "arn:', {'Ref': 'AWS::Partition'}, ':states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "', {'Fn::ImportValue': 'OtherStackE9E150CFArn77689D69'}, '",
        "Payload.$": "$"
      }
    },
    "Step2": {
      "Next": "Step3",
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException"
          ],
          "IntervalSeconds": 2,
          "MaxAttempts": 6,
          "BackoffRate": 2
        }
      ],
      "Type": "Task",
      "Resource": "arn:', {'Ref': 'AWS::Partition'}, ':states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "', {'Fn::ImportValue': 'OtherStackE9E150CFArn77689D69'}, '",
        "Payload.$": "$"
      }
    }
  },
  "TimeoutSeconds": 10800
}']]}
Problem
The AWS CLI will not accept this as a definition: it is a JSON object rather than a plain string, CFN intrinsic functions like 'Fn::Join' are not supported, and references such as {'Ref': 'AWS::Partition'} are not allowed in the definition string.
There is not going to be any magic here to get this done. The CDK renders CloudFormation, and that CloudFormation is not truly ASL, as it contains references to other resources, as you pointed out.
One direction you could go would be to deploy the SFN to a sandbox stack, let CFN dereference all the values and produce the SFN ASL in the service, and then re-extract that ASL for local testing.
It's hacky, but I don't know any other way to do it, unless you want to start writing parsers that turn all those JSON intrinsics (like Fn::Join) into static strings.
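A sketch of that re-extraction with the AWS CLI (the ARNs are placeholders, and 8083 is the documented default port of the amazon/aws-stepfunctions-local container):

# Pull the fully-resolved ASL out of the sandbox deployment
aws stepfunctions describe-state-machine \
  --state-machine-arn "arn:aws:states:us-east-1:123456789012:stateMachine:MyStateMachine" \
  --query definition --output text > definition.asl.json

# Re-create the state machine against the local Step Functions container
aws stepfunctions create-state-machine \
  --endpoint-url http://localhost:8083 \
  --name MyStateMachine \
  --role-arn "arn:aws:iam::123456789012:role/DummyRole" \
  --definition file://definition.asl.json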

Deployed Keycloak Script Mapper does not show up in the GUI

I'm using the docker image of Keycloak 10.0.2. I want Keycloak to supply access_tokens that can be used by Hasura. Hasura requires custom claims like this:
{
  "sub": "1234567890",
  "name": "John Doe",
  "admin": true,
  "iat": 1516239022,
  "https://hasura.io/jwt/claims": {
    "x-hasura-allowed-roles": ["editor", "user", "mod"],
    "x-hasura-default-role": "user",
    "x-hasura-user-id": "1234567890",
    "x-hasura-org-id": "123",
    "x-hasura-custom": "custom-value"
  }
}
Following the documentation and using a script I found online (see this gist), I created a Script Mapper jar containing this script (copied verbatim from the gist) in hasura-mapper.js:
var roles = [];
for each (var role in user.getRoleMappings()) roles.push(role.getName());
token.setOtherClaims("https://hasura.io/jwt/claims", {
  "x-hasura-user-id": user.getId(),
  "x-hasura-allowed-roles": Java.to(roles, "java.lang.String[]"),
  "x-hasura-default-role": "user",
});
and the following keycloak-scripts.json in META-INF/:
{
  "mappers": [
    {
      "name": "Hasura",
      "fileName": "hasura-mapper.js",
      "description": "Create Hasura Namespaces and roles"
    }
  ]
}
The Keycloak debug log indicates it found the jar and successfully deployed it.
But what's the next step? I can't find the deployed mapper anywhere in the GUI, so how do I activate it? I tried creating a Protocol Mapper, but the option 'Script Mapper' is not available. And Scopes -> Evaluate generates a standard access token.
How do I activate my deployed protocol mapper?
Of course, after you put up a question on SO, you keep searching, and I finally found the answer in this JIRA issue. The scripts feature has been a preview feature since (I think) version 8.
So when starting Keycloak you need to provide:
-Dkeycloak.profile.feature.scripts=enabled
and after that your Script Mapper will show up in the Mapper Type dropdown on the Create Mapper screen, and everything works.
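For reference, with the jboss/keycloak Docker image, arguments after the image name are passed through to standalone.sh, so the flag can be supplied roughly like this (a sketch; ports and admin credentials are illustrative):

docker run -p 8080:8080 \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak:10.0.2 \
  -Dkeycloak.profile.feature.scripts=enabled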

How to configure Mattermost Plugins

I have deployed Mattermost Team Edition from the Helm chart onto my k8s cluster and it's working great.
The issue is that the config.json file is mounted as a secret, so configuration can't be done from the UI; it has to be done through the config.json section that is part of values.yaml in the Helm chart.
How does one configure plugins? For starters, I would like to enable the zoom plugin:
configJSON: {
  "PluginSettings": {
    "Enable": true,
    "EnableUploads": true,
    "Directory": "./plugins",
    "ClientDirectory": "./client/plugins",
    "Plugins": {},
    "PluginStates": {
      "zoom": {
        "Enable": true
      },
      "com.mattermost.nps": {
        "Enable": false
      },
      "mattermost-webrtc-video": {
        "Enable": true
      },
      "github": {
        "Enable": true
      },
      "jira": {
        "Enable": true
      }
    }
  }
}
Is this the right way of enabling plugins? And how do I configure them?
The zoom plugin in particular needs API credentials.
I see two options:
The safe way
Run another Mattermost server instance locally (for example using the Mattermost preview Docker image, which is very easy to set up), configure your plugins there through the UI, and then copy the relevant section of its configuration file into your cluster instances' configuration.
The manual, error-prone way
Edit the config.json yourself as you've started. For each plugin, there are two sections to edit, Plugins and PluginStates:
"PluginSettings": {
// [...]
"Plugins": {
"your.plugin.id": {
"pluginProperty1": "...",
"pluginProperty2": "...",
"pluginProperty3": "...",
// [...]
},
},
"PluginStates": {
// [...]
"your.plugin.id": {
"Enable": true
},
}
}
As you can see, this requires knowing what properties are defined for each plugin, and the only way to find out is to consult the plugin's documentation, or even its code (look for a file called plugin.json at the root of the plugin's GitHub repo, in the settings section).
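For example, a filled-in zoom entry might look something like the following; the property names here are purely illustrative, not the plugin's actual schema, so check the settings section of the zoom plugin's plugin.json for the real keys:

"Plugins": {
  "zoom": {
    // illustrative property names only, not the plugin's actual schema
    "zoomurl": "https://yourzoom.example.com",
    "apikey": "<your-zoom-api-key>",
    "apisecret": "<your-zoom-api-secret>"
  }
},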
I would recommend the first method if there's really no way for you to use the GUI to install and configure plugins.
For other readers' information, in most Mattermost setups, you should be able to use the UI for this, even in High Availability Mode if your version is recent enough.
Add the following to your values.yaml:
config:
  MM_PLUGINSETTINGS_CLIENTDIRECTORY: "./client/plugins"
  MM_PLUGINSETTINGS_ENABLEUPLOADS: "true"
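Mattermost derives these environment variable names from config.json using the MM_<SECTION>_<SETTING> convention, so the two variables above override PluginSettings.ClientDirectory and PluginSettings.EnableUploads respectively.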

A CNAME record pointing from mytmp.trafficmanager.net to mywebapp.azurewebsites.net was not found

I am trying to create all my Azure resources from a PowerShell script. All resources are getting created, but it is also throwing this exception.
A CNAME record pointing from mytmp.trafficmanager.net to mywebapp.azurewebsites.net was not found
But I can see that a Traffic Manager endpoint has been configured properly. What am I missing here? Any ideas?
ARM template snippet:
{
  "comments": "Generalized from resource: '/subscriptions/<subid>/resourceGroups/<rgid>/providers/Microsoft.Web/sites/<web_app_name>/hostNameBindings/<traffic_manager_dns>'.",
  "type": "Microsoft.Web/sites/hostNameBindings",
  "name": "[concat(parameters('<web_app_name>'), '/', parameters('hostNameBindings_<traffic_manager_dns>_name'))]",
  "apiVersion": "2016-08-01",
  "location": "South Central US",
  "scale": null,
  "properties": {
    "siteName": "<web_app_name>",
    "domainId": null,
    "hostNameType": "Verified"
  },
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('sites_<web_app_name>_name'))]"
  ]
}
The above code block is what throws that exception. When I commented it out, everything was fine, but I want to understand the reason for the error.
A CNAME record pointing from mytmp.trafficmanager.net to mywebapp.azurewebsites.net was not found
It indicates the DNS record did not exist when the template was deployed. You need to prove that you are the owner of the hostname; you could also test this from the Azure portal manually.
Before deploying the template, you need to create a CNAME record with your DNS provider. For more information, refer to Map a CNAME record.
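If your zone happens to be hosted in Azure DNS, a sketch of creating such a record with the Az PowerShell module (the zone, record set, and resource group names are illustrative) could look like:

# Create a CNAME pointing at the Traffic Manager profile; with an external
# DNS provider, create the equivalent record in their management console.
New-AzDnsRecordSet -ResourceGroupName "MyResourceGroup" `
  -ZoneName "example.com" -Name "www" -RecordType CNAME -Ttl 3600 `
  -DnsRecords (New-AzDnsRecordConfig -Cname "mytmp.trafficmanager.net")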

Logstash-Forwarder 3.1 state file .logstash-forwarder not updating

I am having an issue with Logstash-forwarder 3.1.1 on CentOS 6.5 where the state file /.logstash-forwarder is not updating as information is sent to Logstash.
I have found that as activity is logged by logstash-forwarder, the corresponding offset is not recorded in the /.logstash-forwarder state file. The /.logstash-forwarder file is being recreated each time 100 events are recorded, but it is not updated with new offsets. I know the file is being recreated because I changed its permissions to test, and the permissions are reset each time.
Below are my configurations (with some actual data scrubbed):
Logstash-forwarder 3.1.1
CentOS 6.5
/etc/logstash-forwarder
Note that the "paths" key does contain wildcards
{
  "network": {
    "servers": [ "*server*:*port*" ],
    "timeout": 15,
    "ssl ca": "/*path*/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/a/b/tomcat-*-*/logs/catalina.out"
      ],
      "fields": { "type": "apache", "time_zone": "EST" }
    }
  ]
}
Per the Logstash instructions for CentOS 6.5, I have configured the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Below is the resting state of the /.logstash-forwarder state file:
{"/a/b/tomcat-set-1/logs/catalina.out":{"source":"/a/b/tomcat-set-1/logs/catalina.out","offset":433564,"inode":*number1*,"device":*number2*},"/a/b/tomcat-set-2/logs/catalina.out":{"source":"/a/b/tomcat-set-2/logs/catalina.out","offset":18782151,"inode":*number3*,"device":*number4*}}
There are two sets of logs being captured here. The offset has stayed the same for 20 minutes while activity has occurred and been sent over to Logstash.
Can anyone give me any advice on how to fix this problem whether it be a configuration setting I missed or a bug?
Thank you!
After more research I found it was announced that Filebeat is now the preferred forwarder. I even found a post by the owner of Logstash-Forwarder saying the program is full of bugs and is no longer fully supported.
I have instead moved to CentOS 7 with the latest version of the ELK stack, using Filebeat as the forwarder. Things are going much smoother now!
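For anyone making the same move, the logstash-forwarder config above maps roughly onto a filebeat.yml like this (a sketch for a recent Filebeat; the hosts, paths, and CA file are the scrubbed placeholders from above):

filebeat.inputs:
  - type: log
    paths:
      - /a/b/tomcat-*-*/logs/catalina.out
    # Custom fields, matching the "fields" block from the old config
    fields:
      type: apache
      time_zone: EST

output.logstash:
  hosts: ["*server*:*port*"]
  ssl.certificate_authorities: ["/*path*/logstash-forwarder.crt"]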