How to configure Mattermost plugins - Kubernetes

I have deployed Mattermost Team Edition from the Helm chart
onto my k8s cluster and it's working great.
The issue is that the config.json file is mounted as a secret,
so configuration can't be done from the UI; it has to be done in the config.json that is part of values.yaml in the Helm chart.
How does one configure plugins? For starters, I would like to enable the Zoom plugin:
configJSON: {
  "PluginSettings": {
    "Enable": true,
    "EnableUploads": true,
    "Directory": "./plugins",
    "ClientDirectory": "./client/plugins",
    "Plugins": {},
    "PluginStates": {
      "zoom": {
        "Enable": true
      },
      "com.mattermost.nps": {
        "Enable": false
      },
      "mattermost-webrtc-video": {
        "Enable": true
      },
      "github": {
        "Enable": true
      },
      "jira": {
        "Enable": true
      }
    }
  }
}
Is this the right way of enabling the plugins?
And how do I configure the plugins,
especially the Zoom one, which needs API credentials?

I see two options:
The safe way
Run another Mattermost server instance locally (for example using the Mattermost preview Docker image, very easy to set up), configure your plugins there, and use its configuration file section for your cluster instances.
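For example, the preview image can be brought up with a single command and then configured through its System Console (the port mapping below is the image's documented default):

docker run --name mattermost-preview -d --publish 8065:8065 mattermost/mattermost-preview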
The manual, error-prone way
Edit the config.json yourself as you've started. For each plugin, there are two sections to edit, Plugins and PluginStates:
"PluginSettings": {
// [...]
"Plugins": {
"your.plugin.id": {
"pluginProperty1": "...",
"pluginProperty2": "...",
"pluginProperty3": "...",
// [...]
},
},
"PluginStates": {
// [...]
"your.plugin.id": {
"Enable": true
},
}
}
As you can see, this requires knowing what properties are defined for each plugin, and the only way to find out is to consult the plugin's documentation, or even its code (look for a file called plugin.json at the root of the plugin's GitHub repo, in the settings section).
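For instance, a Zoom entry in the Plugins section might look something like this (the property names below are hypothetical placeholders; the real keys are whatever the Zoom plugin's plugin.json declares):

"Plugins": {
  "zoom": {
    // Hypothetical keys; check the Zoom plugin's plugin.json "settings" section
    "apikey": "YOUR_ZOOM_API_KEY",
    "apisecret": "YOUR_ZOOM_API_SECRET"
  }
}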
I would recommend the first method if there's really no way for you to use the GUI to install and configure plugins.
For other readers' information, in most Mattermost setups, you should be able to use the UI for this, even in High Availability Mode if your version is recent enough.

Add the following to your values.yaml:
config:
  MM_PLUGINSETTINGS_CLIENTDIRECTORY: "./client/plugins"
  MM_PLUGINSETTINGS_ENABLEUPLOADS: "true"
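If your Mattermost version also supports JSON-encoded environment overrides for map-valued settings, an additional entry might enable the Zoom plugin as well (this override is an assumption; verify it against your server version):

config:
  # Assumption: map-valued settings accept JSON-encoded overrides
  MM_PLUGINSETTINGS_PLUGINSTATES: '{"zoom": {"Enable": true}}'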

Related

Next.js Multi Zone - Fast Refresh in Development Environment

I am currently using Next.js and multi zones for our web apps and I'm hitting an issue where webpackDevMiddleware only sees the current app I am on for changes. I use Docker to create my network. I'm hoping to change the scope to watch all apps in my workspace and refresh when any of them change.
My main issue is that when I'm accessing app2 (routed through app1) and make changes to app2, app1 doesn't see that changes have been made, so the screen doesn't refresh to show the updated view from app2.
I did verify that going directly to app2 and making a change does refresh the page, but I'd like developers to access app1 and route to app2 from there. This saves them from needing to know which port (localhost:3000, localhost:3001, localhost:3002, etc.) to use for each app.
Here is my next.config.js:
const { APP2_URL } = process.env;

module.exports = {
  webpackDevMiddleware: (config) => {
    config.watchOptions = {
      poll: 1000,
      aggregateTimeout: 300,
    };
    return config;
  },
  output: "standalone",
  async rewrites() {
    return [
      {
        source: "/app1",
        destination: "/",
      },
      {
        source: "/app2",
        destination: `${APP2_URL}/app2`,
      },
      {
        source: "/app2/:path*",
        destination: `${APP2_URL}/app2/:path*`,
      },
    ];
  },
};
Each webapp is in its own Docker container, so I'm guessing I need to add additional settings to watch the remote container for app2. Any guidance to get me started would be appreciated.
This ended up being easier than anticipated; Next.js handled it themselves. Confirmed it works as expected with Next.js version 12.2.5.

Deployed Keycloak Script Mapper does not show up in the GUI

I'm using the docker image of Keycloak 10.0.2. I want Keycloak to supply access_tokens that can be used by Hasura. Hasura requires custom claims like this:
{
  "sub": "1234567890",
  "name": "John Doe",
  "admin": true,
  "iat": 1516239022,
  "https://hasura.io/jwt/claims": {
    "x-hasura-allowed-roles": ["editor", "user", "mod"],
    "x-hasura-default-role": "user",
    "x-hasura-user-id": "1234567890",
    "x-hasura-org-id": "123",
    "x-hasura-custom": "custom-value"
  }
}
Following the documentation, and using a script I found online (see this gist), I created a Script Mapper jar with this script (copied verbatim from the gist) in hasura-mapper.js:
var roles = [];
for each (var role in user.getRoleMappings()) roles.push(role.getName());
token.setOtherClaims("https://hasura.io/jwt/claims", {
"x-hasura-user-id": user.getId(),
"x-hasura-allowed-roles": Java.to(roles, "java.lang.String[]"),
"x-hasura-default-role": "user",
});
and the following keycloak-scripts.json in META-INF/:
{
  "mappers": [
    {
      "name": "Hasura",
      "fileName": "hasura-mapper.js",
      "description": "Create Hasura Namespaces and roles"
    }
  ]
}
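For reference, a jar with that layout can be built and dropped into Keycloak's hot-deploy directory roughly like this (the container name and path are assumptions based on the standard jboss/keycloak image layout):

# Layout: hasura-mapper.js at the jar root, descriptor under META-INF/
jar cf hasura-mapper.jar hasura-mapper.js META-INF/keycloak-scripts.json

# Hot-deploy into the container (path assumes the jboss/keycloak image)
docker cp hasura-mapper.jar keycloak:/opt/jboss/keycloak/standalone/deployments/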
Keycloak debug log indicates it found the jar, and successfully deployed it.
But what's the next step? I can't find the deployed mapper anywhere in the GUI, so how do I activate it? I tried creating a protocol Mapper, but the option 'Script Mapper' is not available. And Scopes -> Evaluate generates a standard access token.
How do I activate my deployed protocol mapper?
Of course after you put up a question on SO you still keep searching, and I finally found the answer in this JIRA issue. The scripts feature has been a preview feature since (I think) version 8.
So when starting Keycloak you need to provide:
-Dkeycloak.profile.feature.scripts=enabled
and after that your Script Mapper will show up in the Mapper Type dropdown on the Create Mapper screen, and everything works.
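With the official jboss/keycloak Docker image, arguments after the image name are appended to the server start command, so the flag can be passed like this (container name, port, and admin credentials are placeholders):

docker run -d --name keycloak -p 8080:8080 \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak:10.0.2 -Dkeycloak.profile.feature.scripts=enabled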

Can I update Windows ClientIdentities after cluster creation?

I currently have something like this in my clusterConfig.json file.
"ClientIdentities": [
{
"Identity": "{My Domain}\\{My Security Group}",
"IsAdmin": true
}
]
My questions are:
My cluster is stood up and running. Can I add a second security group to this cluster while it is running? I've searched through the PowerShell commands and didn't see one that matched this, but I may have missed it.
If I can't do this while the cluster is running, do I need to delete the cluster and recreate it? If I do need to recreate it, I'm zeroing in on the word ClientIdentities: I'm assuming I can have multiple identities and my config should look something like this:
ClientIdentities": [{
"Identity": "{My Domain}\\{My Security Group}",
"IsAdmin": true
},
{
"Identity": "{My Domain}\\{My Second Security Group}",
"IsAdmin": false
}
]
Thanks,
Greg
Yes, it is possible to update ClientIdentities once the cluster is up, using a configuration upgrade:
1. Create a new JSON file with the added client identities.
2. Modify the clusterConfigurationVersion in the JSON config.
3. Run Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "Path to new JSON"
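A minimal sketch of that flow, assuming you are already an admin on the cluster (the endpoint and file path are placeholders):

# Connect to the cluster endpoint (placeholder address)
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.contoso.com:19000" -WindowsCredential

# Kick off the config-only upgrade with the updated ClientIdentities
Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "C:\configs\clusterConfig.v2.json"

# Monitor progress
Get-ServiceFabricClusterConfigurationUpgradeStatus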

How to deploy an OpsWorks application with CloudFormation?

In a CloudFormation template, I create an OpsWorks stack, a layer, an instance, and an application. This template sets up and configures the instance with a Chef cookbook of recipes and scripts. How can I deploy the application automatically from the template without manually clicking Deploy inside the stack? After the deployment, the Deploy recipes defined in the cookbook are executed:
"MyLayer": {
"Type": "AWS::OpsWorks::Layer",
"DependsOn" : "OpsWorksServiceRole",
"Properties": {
"AutoAssignElasticIps" : false,
"AutoAssignPublicIps" : true,
"CustomRecipes" : {
"Setup" : ["cassandra::setup","awscli::setup","settings::setup"],
"Deploy": ["imports::deploy"]
},
"CustomSecurityGroupIds" : { "Ref" : "SecurityGroupIds" },
"EnableAutoHealing" : true,
"InstallUpdatesOnBoot": false,
"LifecycleEventConfiguration": {
"ShutdownEventConfiguration": {
"DelayUntilElbConnectionsDrained": false,
"ExecutionTimeout": 120 }
},
"Name": "script-node",
"Shortname" : "node",
"StackId": { "Ref": "MyStack" },
"Type": "custom",
"UseEbsOptimizedInstances": true,
"VolumeConfigurations": [ {
"Iops": 10000,
"MountPoint": "/dev/sda1",
"NumberOfDisks": 1,
"Size": 20,
"VolumeType": "gp2"
}]
}
}
Any ideas?
Thank you.
The CreateDeployment API call generates a one-off event that executes the Deploy actions within your OpsWorks stack. I don't think any official CloudFormation resource maps to this directly, but here are some ideas on how to call it within the context of a CloudFormation template:
Write a Custom Resource that calls CreateDeployment (e.g., via the AWS SDK for Node.js) when created; a sketch follows after these options.
Add an AWS::CodePipeline::Pipeline resource to your template that's configured to deploy your OpsWorks app as part of a Deploy Stage. See Using AWS CodePipeline with AWS OpsWorks Stacks for documentation on this integration. (Though it's an extra service + layer of complexity, I think CodePipeline is a better layer of abstraction for modeling deployment actions in your application stack anyway.)
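A minimal sketch of the first option, as a Lambda-backed custom resource (the StackId/AppId property names and the region are assumptions, and error handling is reduced to the essentials):

// Lambda handler backing a CloudFormation custom resource that triggers
// an OpsWorks deployment on stack creation (sketch, AWS SDK for Node.js v2).
const AWS = require("aws-sdk");
const response = require("cfn-response"); // available to inline Lambda code

// Classic OpsWorks Stacks' API lives in us-east-1 (assumption for this sketch).
const opsworks = new AWS.OpsWorks({ region: "us-east-1" });

exports.handler = (event, context) => {
  // Only deploy on resource creation; acknowledge updates/deletes as-is.
  if (event.RequestType !== "Create") {
    return response.send(event, context, response.SUCCESS, {});
  }

  // StackId and AppId arrive as custom resource properties (assumed names).
  const { StackId, AppId } = event.ResourceProperties;

  opsworks.createDeployment(
    { StackId, AppId, Command: { Name: "deploy" } },
    (err, data) => {
      if (err) {
        return response.send(event, context, response.FAILED, { Error: err.message });
      }
      response.send(event, context, response.SUCCESS, { DeploymentId: data.DeploymentId });
    }
  );
};

In the template you would then declare a custom resource (e.g., Custom::OpsWorksDeployment) whose ServiceToken points at this function and which passes the stack and app IDs as properties.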
I believe this can be done within the recipes. So in your recipes you'd have a function that validates the app name, and if it exists, you proceed with the deployment.
For example your deploy recipe would look something like this:
if validator(node[:app][:name]) == true
  # do whatever the deployment needs
end
and this validator function can reside in your chef library:
def validator(app_name)
  # Look up the app in the OpsWorks search index and report whether it is flagged for deploy
  app = search("aws_opsworks_app", "name:#{app_name}").first
  if app && app[:deploy] == true
    Chef::Log.warn("PROCEEDING: Deploy initiated for #{app[:name]}")
    return true
  end
  false
end

Can I host a Satis repo simply as a GitHub repo?

I've set up a Satis repository on GitHub to share some company-internal packages across projects.
To "depend" on the new repository, I tried this:
"repositories": [ {
"type": "composer",
"url": "https://raw.githubusercontent.com/[organisation]/satis/master/web/packages.json?token=[token-copied-from-url]"
} ]
and it works well enough that Composer finds packages.json; however, it then fails with:
[Composer\Downloader\TransportException]
The "https://raw.githubusercontent.com/[organization]/satis/master/web/packages.json?token=[token-copied-from-url]/include/all$[some-json-file].json" file could not be downloaded (HTTP/1.1 404 Not Found)
which isn't surprising, as the ?token part appears to generate an invalid URL.
I can work around this by manually moving the contents of the included file into packages.json directly, but this is less than ideal, especially if satis decides to generate multiple files.
Another problem I assume this will cause is that I don't know much about the validity of the token. Perhaps it doesn't have a long lifetime and then satis will need to be regenerated regularly.
Is there a way I can get away with hosting my satis repo as "just" a github repo?
You can store your static Satis repository in a private GitHub repo, and then use GitHub's raw.githubusercontent.com domain to serve it over HTTPS. The slightly hacky part is ensuring that composer properly authenticates against the GitHub repo.
Push Satis repository to GitHub
Generate your Satis repository and push it to your private GitHub repo, let's say https://github.com/your-org/your-satis-repo, inside the output/ directory.
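For example, the static repository files can be generated with Satis' build command before pushing (satis.json and output/ match the names used here):

php bin/satis build satis.json output/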
Prepare your projects' composer.json files
In your projects' composer.json files, add your Satis repo to the "repositories" section:
{
  "type": "composer",
  "url": "https://raw.githubusercontent.com/your-org/your-satis-repo/master/output"
}
Set up HTTP basic authentication
Finally, to make Composer authenticate via HTTP basic auth against raw.githubusercontent.com, you will need to add a new entry to the "http-basic" section in your local Composer auth.json:
{
  "http-basic": {
    "raw.githubusercontent.com": {
      "username": "GITHUB_USERNAME",
      "password": "GITHUB_TOKEN"
    }
  }
}
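Equivalently, Composer can write that entry for you from the command line (GITHUB_USERNAME and GITHUB_TOKEN are placeholders for your own credentials):

composer config --global http-basic.raw.githubusercontent.com GITHUB_USERNAME GITHUB_TOKEN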
Caveats
We've found that raw.githubusercontent.com is cached, so it can take a few minutes before changes to your Satis repository are visible.
Initial testing suggests it can be done.
I think you need to remove packages.json from your repository URL, and I suspect the ?token parameter as well. In theory you can pass the token via a header:
https://developer.github.com/v3/#authentication
I have not tested this however.
You can see a working test without authentication here:
https://github.com/markchalloner/satishostcomposer
https://github.com/markchalloner/satishost
To try it out:
git clone git@github.com:markchalloner/satishostcomposer.git
cd satishostcomposer
composer install
This should install vendor/markchalloner/satishostdemo.
Example satis.json:
{
  "name": "Satis Host",
  "homepage": "https://raw.githubusercontent.com/markchalloner/satishost/master/web/",
  "archive": {
    "directory": "dist",
    "format": "tar",
    "prefix-url": "https://raw.githubusercontent.com/markchalloner/satishost/master/web/",
    "skip-dev": true
  },
  "repositories": [
    {
      "_comment": "Demo packages",
      "type": "package",
      "package": {
        "name": "markchalloner/satishostdemo",
        "version": "0.1.0",
        "dist": {
          "type": "zip",
          "url": "dist/demo.zip"
        }
      }
    }
  ],
  "require-all": true
}
Example composer.json (in your project):
{
  "repositories": [
    {
      "type": "composer",
      "url": "https://raw.githubusercontent.com/markchalloner/satishost/master/web",
      "options": {
        "http": {
          "header": [
            "API-TOKEN: YOUR-API-TOKEN"
          ]
        }
      }
    }
  ],
  "require": {
    "markchalloner/satishostdemo": "0.1.0"
  },
  "minimum-stability": "dev"
}
Thanks