ActiveMQ Artemis address security roles application order - activemq-artemis

I have some unclear points about security for addresses; specifically, the order in which security roles are applied is not clear to me.
Let's imagine we add security settings for test_user (via addSecuritySettings) with {send, consume, browse, ...} on the match ADR.TEST.#. According to the wildcard docs these settings will apply to ADR.TEST.IN, and indeed they do when I check via getRolesAsJson() in Hawtio.
Then I add the same security settings, with the same actions, for another_user on ADR.TEST.IN. As a result I have two users (test_user, another_user) with the same permissions on ADR.TEST.IN.
If I then repeat the step for a third user, last_user, on ADR.TEST.#, last_user ends up with no permissions on ADR.TEST.IN, even though ADR.TEST.IN matches ADR.TEST.#.
Is this a bug or a feature?
UPD: Code example:
ActiveMQServerControl activeMQServerControl;
...
activeMQServerControl.addSecuritySettings("ADR.TEST.#", "test_user", "test_user", "test_user", "test_user", "test_user", "test_user", "test_user");
activeMQServerControl.addSecuritySettings("ADR.TEST.IN", "another_user,test_user", "another_user,test_user", "another_user,test_user", "another_user,test_user", "another_user,test_user", "another_user,test_user", "another_user,test_user");
activeMQServerControl.addSecuritySettings("ADR.TEST.#", "test_user,last_user", "test_user,last_user", "test_user,last_user", "test_user,last_user", "test_user,last_user", "test_user,last_user", "test_user,last_user");
This is the output of activeMQServerControl.getRolesAsJSON("ADR.TEST.IN") after the first call:
[{"name":"test_user","send":true,"consume":true,"createDurableQueue":true,"deleteDurableQueue":true,"createNonDurableQueue":true,"deleteNonDurableQueue":true,"manage":true,"browse":false,"createAddress":false,"deleteAddress":false}]
After the second:
[{"name":"test_user","send":true,"consume":true,"createDurableQueue":true,"deleteDurableQueue":true,"createNonDurableQueue":true,"deleteNonDurableQueue":true,"manage":true,"browse":false,"createAddress":false,"deleteAddress":false},{"name":"another_user","send":true,"consume":true,"createDurableQueue":true,"deleteDurableQueue":true,"createNonDurableQueue":true,"deleteNonDurableQueue":true,"manage":true,"browse":false,"createAddress":false,"deleteAddress":false}]
And the same output after the third:
[{"name":"test_user","send":true,"consume":true,"createDurableQueue":true,"deleteDurableQueue":true,"createNonDurableQueue":true,"deleteNonDurableQueue":true,"manage":true,"browse":false,"createAddress":false,"deleteAddress":false},{"name":"another_user","send":true,"consume":true,"createDurableQueue":true,"deleteDurableQueue":true,"createNonDurableQueue":true,"deleteNonDurableQueue":true,"manage":true,"browse":false,"createAddress":false,"deleteAddress":false}]
So my question is about the last operation: I granted permissions to last_user for ADR.TEST.#, but last_user has no permissions at all on ADR.TEST.IN.

ActiveMQ Artemis selects only the security settings with the most specific match when resolving the roles for an address.
In your case, the security settings with the most specific match are:
activeMQServerControl.addSecuritySettings("ADR.TEST.IN", "another_user,test_user", "another_user,test_user", "another_user,test_user", "another_user,test_user", "another_user,test_user", "another_user,test_user", "another_user,test_user");
To add the last_user role you should update the security settings with the match ADR.TEST.IN, i.e.:
activeMQServerControl.addSecuritySettings("ADR.TEST.IN", "another_user,test_user,last_user", "another_user,test_user,last_user", "another_user,test_user,last_user", "another_user,test_user,last_user", "another_user,test_user,last_user", "another_user,test_user,last_user", "another_user,test_user,last_user");
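As a quick check (a minimal sketch reusing the activeMQServerControl handle from your snippet; ADR.TEST.OUT is just a hypothetical address with no specific match of its own):
String rolesForIn = activeMQServerControl.getRolesAsJSON("ADR.TEST.IN");
System.out.println(rolesForIn); // should now list last_user alongside test_user and another_user

// An address that only matches the wildcard entry still falls back to ADR.TEST.#,
// so it gets the roles defined there (test_user and last_user after your third call):
System.out.println(activeMQServerControl.getRolesAsJSON("ADR.TEST.OUT"));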

Related

NATS - How to set the subscribe and publish permissions when using request-reply in Python?

I want to set auth permissions, but things seem to work differently when using request-reply mode.
Here is my setting:
values.yaml
users:
  - user: test
    password: testtest
    permissions:
      subcribe: ["test"]
      pulbish: ["test"]
Python code:
nc = await nats.connect("nats://test:testtest@jetstream-nats:4222")
js = nc.jetstream()
await js.add_stream(name="test", subjects=["test"])
Error message:
nats.errors.Error: nats: permissions violation for subscription to "_inbox.xxxxxxxxxxxx.*"
nats.errors.Error: nats: permissions violation for publish to "$js.api.stream.create.test"
If I change values.yaml to this, no error is shown, but I still can't publish to the stream "test".
users:
  - user: test
    password: testtest
    permissions:
      subcribe: ["_INBOX.>"]
      pulbish: ["$JS.API.STREAM.CREATE.>"]
But if I change values.yaml to this, the same error messages occur:
users:
  - user: test
    password: testtest
    permissions:
      subcribe: ["_INBOX.>"]
      pulbish: ["$JS.API.STREAM.CREATE.test.>"]
========================================================================================
nats.errors.Error: nats: permissions violation for subscription to "_inbox.xxxxxxxxxxxx.*"
nats.errors.Error: nats: permissions violation for publish to "$js.api.stream.create.test"
My question is: how do I set the subscribe and publish permissions when using request-reply?
If I want the user "testuser" to only be able to publish to the stream "test" and subscribe to "test", how should I set up my YAML file?
A publish to a stream only requires permission on the actual subject of the message, in this case test. What appears to be happening is that you are also trying to create the stream with that user, which requires different permissions (the ones you added in the second snippet). In both snippets you also have typos in your YAML: pulbish instead of publish and subcribe instead of subscribe.
If you want the same user to be able to create the stream and publish to it, try this:
users:
  - user: test
    password: testtest
    permissions:
      subscribe: ["_INBOX.>"]
      publish: ["$JS.API.STREAM.CREATE.test", "test"]

Deploying custom Keycloak theme with Operator (v15.1.1 & v16.0.0)

I have a theme with a size >1MB (which precludes the ConfigMap solution provided as an answer to this question).
This theme has been packaged according to the Server Development Guide - its folder structure is:
META-INF/keycloak-themes.json
themes/[themeName]/login/login.ftl
themes/[themeName]/login/login-reset-password.ftl
themes/[themeName]/login/template.ftl
themes/[themeName]/login/template.html
themes/[themeName]/login/theme.properties
themes/[themeName]/login/messages/messages_de.properties
themes/[themeName]/login/messages/messages_en.properties
themes/[themeName]/login/resources/[...]
The contents of keycloak-themes.json are
{
"themes": [{
"name" : "[themeName]",
"types": [ "login" ]
}]
}
where [themeName] is my theme name.
Keycloak is running with 3 instances; its resource spec includes:
extensions:
- [URL-to-jar]
Deployment was successful according to the logs of each pod - each log contains a message like:
Deployed "[jar-name].jar" (runtime-name : "[jar-name].jar")
However, in the admin console I cannot select the theme from the extension as the login theme. Creating a new realm via CRD with a preconfigured login theme via the spec entry
loginTheme: [themeName]
also does not work: in the admin console, the selected entry for the login theme is empty.
I may be missing something basic, but it seems like this ought to work according to this answer, if I am not mistaken.
As is so often the case, an uncaught typo was the source of the error.
The directory structure must not be
META-INF/keycloak-themes.json
themes/[theme-name]/[theme-role]/theme.properties
[...]
But instead
META-INF/keycloak-themes.json
theme/[theme-name]/[theme-role]/theme.properties
[...]
Given the correct structure, the keycloak-operator can successfully deploy and load custom themes as JAR extensions.
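As a quick sanity check before deploying (a rough sketch; mytheme.jar is a placeholder name), you can list the archive entries and confirm everything sits under theme/ rather than themes/:
import java.util.jar.JarFile;

public class CheckThemeJar {
    public static void main(String[] args) throws Exception {
        try (JarFile jar = new JarFile("mytheme.jar")) {
            jar.stream().forEach(entry -> {
                String name = entry.getName();
                // theme content must use the singular theme/ prefix
                // (plus META-INF/keycloak-themes.json at the root)
                if (name.startsWith("themes/")) {
                    System.out.println("wrong prefix: " + name);
                } else {
                    System.out.println(name);
                }
            });
        }
    }
}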

Deploy Graylog on GKE

I'm having a hard time deploying Graylog on Google Kubernetes Engine. I'm using this configuration https://github.com/aliasmee/kubernetes-graylog-cluster with some minor modifications. My Graylog server is up, but it shows this error in the interface:
Error message
Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
Original Request
GET http://ES_IP:12900/system/sessions
Status code
undefined
Full error message
Error: Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
Graylog logs show nothing in particular other than this:
org.graylog.plugins.threatintel.tools.AdapterDisabledException: Spamhaus service is disabled, not starting (E)DROP adapter. To enable it please go to System / Configurations.
at org.graylog.plugins.threatintel.adapters.spamhaus.SpamhausEDROPDataAdapter.doStart(SpamhausEDROPDataAdapter.java:68) ~[?:?]
at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) [graylog.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
but at the end:
2019-01-16 13:35:00,255 INFO : org.graylog2.bootstrap.ServerBootstrap - Graylog server up and running.
The Elasticsearch health check is green, and there are no issues in the ES or Mongo logs.
I suspect a problem with the connection to Elasticsearch, though.
curl http://ip_address:9200/_cluster/health\?pretty
{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 4,
"active_shards" : 4,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
After reading the tutorial you shared, I was able to identify that the kubelet needs to run with the --allow-privileged argument.
"Elasticsearch pods need for an init-container to run in privileged mode, so it can set some VM options. For that to happen, the kubelet should be running with args --allow-privileged, otherwise the init-container will fail to run."
On GKE it's not possible to customize or modify the kubelet parameters/arguments; there is a feature request for this here: https://issuetracker.google.com/118428580, so it may be implemented in the future.
Also, if you modify the kubelet directly on the node(s), the master may reset the configuration, and it isn't guaranteed that the changes will persist.

LaunchConfiguration is just stalling (IAM policy has Admin privilege) with 2 configSets in CloudFormation

Let me know how I can delete the stack after the TTL. It seems the 2 configSets are not working:
"All" : [ "ConfigureSampleApp", "ConfigureTTL" ]

Can I update Windows ClientIdentities after cluster creation?

I currently have something like this in my clusterConfig.json file:
"ClientIdentities": [
{
"Identity": "{My Domain}\\{My Security Group}",
"IsAdmin": true
}
]
My questions are:
My cluster is stood up and running. Can I add a second security group to the cluster while it is running? I've searched through the PowerShell commands and didn't see one that matched, but I may have missed it.
If I can't do this while the cluster is running, do I need to delete the cluster and recreate it? If I do need to recreate it, I'm zeroing in on the word ClientIdentities: I'm assuming I can have multiple identities and that my config should look something like this:
"ClientIdentities": [
  {
    "Identity": "{My Domain}\\{My Security Group}",
    "IsAdmin": true
  },
  {
    "Identity": "{My Domain}\\{My Second Security Group}",
    "IsAdmin": false
  }
]
Thanks,
Greg
Yes, it is possible to update ClientIdentities once the cluster is up by using a configuration upgrade:
1. Create a new JSON file with the added client identities (the two-entry ClientIdentities array in your question is the right shape).
2. Update the clusterConfigurationVersion in the new JSON config (it must differ from the currently deployed version).
3. Run Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "Path to new JSON"
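For illustration, the end-to-end flow might look like this (a rough sketch; the endpoint and file path are placeholders, and the new JSON is simply your existing clusterConfig.json with the extra identity added and the version updated):
# Connect to the running cluster; -WindowsCredential matches a cluster secured with Windows identities
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000" -WindowsCredential

# Start the configuration-only upgrade with the updated config file
Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "C:\SF\ClusterConfig.Updated.json"

# Monitor progress until the upgrade completes
Get-ServiceFabricClusterConfigurationUpgradeStatus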