NATS: how to set the subscribe and publish permissions when using request-reply in Python?

I want to set auth permissions, but things seem to work differently in request-reply mode.
Here is my setting:
values.yaml
users:
- user: test
  password: testtest
  permissions:
    subcribe: ["test"]
    pulbish: ["test"]
python code:
nc = await nats.connect("nats://test:testtest@jetstream-nats:4222")
js = nc.jetstream()
await js.add_stream(name="test", subjects=["test"])
Error message:
nats.errors.Error: nats: permissions violation for subscription to "_inbox.xxxxxxxxxxxx.*"
nats.errors.Error: nats: permissions violation for publish to "$js.api.stream.create.test"
If I change values.yaml to this, no error appears, but I still can't publish to stream "test".
users:
- user: test
  password: testtest
  permissions:
    subcribe: ["_INBOX.>"]
    pulbish: ["$JS.API.STREAM.CREATE.>"]
But if I change values.yaml to this, the same error messages appear:
users:
- user: test
  password: testtest
  permissions:
    subcribe: ["_INBOX.>"]
    pulbish: ["$JS.API.STREAM.CREATE.test.>"]
Error message:
nats.errors.Error: nats: permissions violation for subscription to "_inbox.xxxxxxxxxxxx.*"
nats.errors.Error: nats: permissions violation for publish to "$js.api.stream.create.test"
My question is: HOW do I set the subscribe and publish permissions when using request-reply?
If I want user "testuser" to be able only to publish to stream "test" and subscribe to "test", how should I set my YAML file?

A publish to a stream only requires permission on the actual subject of the message, in this case test. What appears to be happening is that you are also trying to create the stream with that user, which requires different permissions (the ones you added in the second snippet). In both snippets you have typos in your YAML: pulbish instead of publish and subcribe instead of subscribe.
If you want the same user to be able to create the stream and publish to it, try this:
users:
- user: test
  password: testtest
  permissions:
    subscribe: ["_INBOX.>"]
    publish: ["$JS.API.STREAM.CREATE.test", "test"]
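For request-reply and JetStream more generally, the client must be able to subscribe to its reply inbox and publish to whichever $JS.API subjects it actually calls. A broader sketch (not from the original answer; tighten $JS.API.> to specific verbs once you know which API calls the client makes):

```yaml
users:
- user: test
  password: testtest
  permissions:
    # request-reply: the client listens on its own inbox for replies
    subscribe: ["_INBOX.>", "test"]
    # JetStream API calls plus the stream's own subject
    publish: ["$JS.API.>", "test"]
```

With this in place, both the reply-inbox and the $JS.API permission violations from the question should disappear; restricting publish to only $JS.API.STREAM.CREATE.test and test reproduces the tighter setup from the answer.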

Related

Azure devops Variables and Terraform

I am trying to create an Azure Key Vault with the help of Terraform, where I want to save my DB password; the password itself lives in my Azure DevOps pipeline because obviously I cannot hardcode it in my tfvars file.
As you can see, I am creating an empty job and saving my password variable, with its value, in the pipeline, but I cannot understand why my terraform plan waits in the console as if it is asking the user to enter the password.
Below is a snapshot of the log. Can you please help me figure out what I am missing here?
Also, when I pass my password on the command line, I get the error below:
2022-05-13T05:11:00.5948619Z Error: building account: getting authenticated object ID: Error listing Service Principals: autorest.DetailedError{Original:adal.tokenRefreshError{message:"adal: Refresh request failed. Status Code = '401'. Response body: {"error":"invalid_client","error_description":"AADSTS7000215: Invalid client secret provided. Ensure the secret being sent in the request is the client secret value, not the client secret ID, for a secret added to app 'a527faff-6956-4b8a-93ad-d9a14ab41610'.\r\nTrace ID: 81c1b1e8-1b0c-4f21-ad90-baf277d43801\r\nCorrelation ID: c77d437b-a6e8-4a74-8342-1508de00fa3a\r\nTimestamp: 2022-05-13 05:11:00Z","error_codes":[7000215],"timestamp":"2022-05-13 05:11:00Z","trace_id":"81c1b1e8-1b0c-4f21-ad90-baf277d43801","correlation_id":"c77d437b-a6e8-4a74-8342-1508de00fa3a","error_uri":"https://login.microsoftonline.com/error?code=7000215"} Endpoint https://login.microsoftonline.com/*/oauth2/token?api-version=1.0", resp:(http.Response)(0xc00143c000)}, PackageType:"azure.BearerAuthorizer", Method:"WithAuthorization", StatusCode:401, Message:"Failed to refresh the Token for request to https://graph.windows.net//servicePrincipals?%24filter=appId+eq+%27a527faff-6956-4b8a-93ad-d9a14ab41610%27&api-version=1.6", ServiceError:[]uint8(nil), Response:(*http.Response)(0xc00143c000)}
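For what it's worth, terraform plan only prompts interactively when a variable has no value from any source. One common pattern (sketched here with made-up variable and step names, since the original pipeline isn't shown) is to map the secret pipeline variable to a TF_VAR_ environment variable, because secret variables are not exposed to scripts automatically:

```yaml
steps:
- script: terraform plan -input=false
  displayName: terraform plan
  env:
    # secret pipeline variables must be mapped explicitly;
    # TF_VAR_db_password populates the Terraform variable "db_password"
    TF_VAR_db_password: $(dbPassword)
```

The -input=false flag makes Terraform fail fast instead of hanging on a prompt. The AADSTS7000215 error above is a separate issue: as the message itself says, the azurerm provider must be given the client secret value, not the secret's ID.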

How do I modify existing jobs to switch owner?

I installed Rundeck v3.3.5 (on CentOS 7 via RPM) to replace an old Rundeck instance that was decommissioned. I did the export/import of projects (which worked brilliantly) while connected to the new server as the default admin user. The imported jobs run properly on the correct schedule. I subsequently configured the new server to use LDAP authentication and configured ACLs for users/roles. That also works properly.
However, I see an error like this in the service.log:
ERROR services.NotificationService - Error sending notification email to foo@bar.com for Execution 9358 Error executing tag <g:render>: could not initialize proxy [rundeck.Workflow#9468] - no Session
My thought is to switch job owners from admin to a user that exists in LDAP. I mean, I would like to switch job owners regardless, but I'm also hoping it addresses the error.
Is there a way in the web interface or using rd that I can bulk-modify jobs to switch the owner?
It turns out that the error in the log was caused by notification settings in an included job. I didn't realize that notifications were configured on the parameterized shared job definition, but there were; removing the notification settings caused the error to stop being added to /var/log/rundeck/service.log.
To illustrate the problem, here are chunks of YAML I've edited to show just the important parts. Here's the common job:
- description: Do the actual work with arguments passed
  group: jobs/common
  id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
  name: do_the_work
  notification:
    onstart:
      email:
        attachType: file
        recipients: ops@company.com
        subject: Actual work being started
  notifyAvgDurationThreshold: null
  options:
  - enforced: true
    name: do_the_job
    required: true
    values:
    - yes
    - no
    valuesListDelimiter: ','
  - enforced: true
    name: fail_a_lot
    required: true
    values:
    - yes
    - no
    valuesListDelimiter: ','
  scheduleEnabled: false
  sequence:
    commands:
    - description: The actual work
      script: |-
        #!/bin/bash
        echo ${RD_OPTION_DO_THE_JOB} ${RD_OPTION_FAIL_A_LOT}
    keepgoing: false
    strategy: node-first
  timeout: '60'
  uuid: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
And here's the job that calls it (the one that is scheduled and causes an error to show up in the log when it runs):
- description: Do the job
  group: jobs/individual
  name: do_the_job
  ...
  notification:
    onfailure:
      email:
        recipients: ops@company.com
        subject: '[Rundeck] Failure of ${job.name}'
  notifyAvgDurationThreshold: null
  ...
  sequence:
    commands:
    - description: Call the job that does the work
      jobref:
        args: -do_the_job yes -fail_a_lot no
        group: jobs/common
        name: do_the_work
If I remove the notification settings from the common job, the error in the log goes away. I'm not sure whether sending notifications from an included job is supported. It would be useful to me if it were, so I could keep notification settings in a single location. However, I can understand why it presents a problem for the scheduler/executor.
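Given that, one workaround is to keep all notification settings on the scheduled caller and leave the referenced job notification-free. A sketch (names reused from the snippets above; the exact merge of settings is up to you):

```yaml
# the scheduled caller carries every notification; the shared job has none
- description: Do the job
  group: jobs/individual
  name: do_the_job
  notification:
    onstart:
      email:
        recipients: ops@company.com
        subject: Actual work being started
    onfailure:
      email:
        recipients: ops@company.com
        subject: '[Rundeck] Failure of ${job.name}'
  sequence:
    commands:
    - description: Call the job that does the work
      jobref:
        group: jobs/common
        name: do_the_work
```

The cost is that every caller now declares its own notifications, but it avoids triggering the no-Session proxy error from notifications inside the shared definition.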

how to create a rundeck user manually with read-only access to one, many, or all rundeck projects

In /etc/rundeck/realm.properties, the inline documentation is obfuscating, to put it mildly.
The default for admin is:
admin:,user,admin,architect,deploy,build
and now I want to create all users with no write or create capabilities at all, except for me and one other person, and for ALL projects.
What are appropriate fields for "John Doe"?
jdoe:,........ fill in ........
Thanks much - if there is a document which points that out clearly, that would be good too.
Cheers.
Add your user in this way:
username:password,group1,group2,group3,groupn
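For example, a hypothetical read-only user entry (the username, password, and group name here are all made up for illustration) would look like:

```
jdoe:jdoespassword,user,readonly
```

The first field after the colon is the password (it can also be an obfuscated or hashed form); everything after the first comma is a role name that your ACLs can match on.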
Now, you need to add an ACL (Access Control List) that manages the new user (or new group defined with the user). Go to Gear Icon -> Access Control -> + Create New ACL button.
For example, this ACL is focused on "group1" (execute jobs only):
# Project scope
description: project level ACL.
context:
  project: 'MyProject'
for:
  resource:
  - equals:
      kind: event
    allow: [read] # allow read of all activity (jobs run by all users)
  job:
  - allow: [run, read] # allow read and run of all jobs
  adhoc:
  - deny: run # don't allow adhoc execution
  node:
  - allow: [read, run] # allow read and run on nodes
by:
  group: group1

---

# Application scope
description: application level ACL.
context:
  application: 'rundeck'
for:
  project:
  - match:
      name: 'MyProject'
    allow: [read]
by:
  group: group1
Keep in mind that you can use LDAP/AD to get users and groups, or PAM.
Also, you have a good ACL example here.
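Since the question asks for read-only users, a variant of the project-scope ACL above with all execution rights removed might look like this (the group name readonly is an example, matching whatever role you gave the user in realm.properties):

```yaml
description: read-only project ACL.
context:
  project: 'MyProject'
for:
  resource:
  - equals:
      kind: event
    allow: [read]
  job:
  - allow: [read]  # view job definitions and execution history only
  adhoc:
  - deny: run      # no ad-hoc commands
  node:
  - allow: [read]  # see nodes but never run on them
by:
  group: readonly
```

Pair it with the same application-scope block shown above (allow: [read] on the project) so members can open the project at all.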

Failing to put release

I'm getting the following error when trying to put a release in a private repo.
creating release ReleaseName error running command: POST https://api.github.com/repos/my-org/my-repo/releases: 404 Not Found []
Prior to getting this error, I was getting:
error running command: GET https://api.github.com/repos/my-org/my-repo.git/releases: 404 Not Found []
so I know the GET is now working and it's something specific to the POST.
My resource config is as follows (admittedly doctored to protect the innocent):
- name: gh-release
  type: github-release
  source:
    owner: my-org
    repository: my-repo
    access_token: {{access-token}}
and the put looks like this (again doctored):
- put: gh-release
  params:
    name: package/name
    tag: version/version
    tag_prefix: package/tag-prefix
    commitish: package/commitish
    globs:
    - package/*.tar.gz
I know the access token works for the GET, and I've used it with curl successfully.
Any ideas what I might be doing wrong?
UPDATE: fixed indentation.
It turns out that it was a permissions problem. The user the token belonged to only had read access to the repository. Using a different user's token, or updating that user's repo access to write, fixed the issue.
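One way to check a token's release permissions outside of Concourse is to hit the releases endpoint directly. Here's a minimal standard-library sketch; the owner, repo, and token are placeholders, and the request is only built, not sent, so you can inspect it first:

```python
import json
import urllib.request

def build_release_request(owner: str, repo: str, token: str, tag_name: str):
    """Build (but do not send) a GitHub 'create release' POST request."""
    url = f"https://api.github.com/repos/{owner}/{repo}/releases"
    body = json.dumps({"tag_name": tag_name}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )

req = build_release_request("my-org", "my-repo", "MY_TOKEN", "v1.0.0")
print(req.get_method(), req.full_url)
# sending this with urllib.request.urlopen(req) and getting 404 on a
# private repo usually means the token lacks access, not a wrong URL
```

GitHub deliberately answers 404 rather than 403 for private repositories the token cannot write to, which is exactly the symptom described above.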

How to deploy symfony2 - my dev env works but not prod

I have read the cookbook on deploying my Symfony2 app to a production environment. It works great in dev mode, but prod mode at first wouldn't allow signing in (it said "Bad credentials" even though I signed in with those very credentials in dev mode), and later, after an extra run of clearing and warming up the prod cache, I just get HTTP 500 from my prod route.
I had a look in the config files and wonder if this has anything to do with it:
config_dev.yml:
imports:
- { resource: config.yml }

framework:
  router: { resource: "%kernel.root_dir%/config/routing_dev.yml" }
  profiler: { only_exceptions: false }

web_profiler:
  toolbar: true
  intercept_redirects: false

monolog:
  handlers:
    main:
      type: stream
      path: %kernel.logs_dir%/%kernel.environment%.log
      level: debug
    firephp:
      type: firephp
      level: info

assetic:
  use_controller: true
config_prod.yml:
imports:
- { resource: config.yml }

#doctrine:
#  orm:
#    metadata_cache_driver: apc
#    result_cache_driver: apc
#    query_cache_driver: apc

monolog:
  handlers:
    main:
      type: fingers_crossed
      action_level: error
      handler: nested
    nested:
      type: stream
      path: %kernel.logs_dir%/%kernel.environment%.log
      level: debug
I also noticed that there is a routing_dev.yml but no routing_prod. The prod environment works great on my localhost, however, so...?
In your production environment, when you run the app/console cache:warmup command, make sure you run it like this: app/console cache:warmup --env=prod --no-debug. Also, remember that the command will warm up the cache as the current user, so all files will be owned by the current user and not the web server user (e.g. www-data). That is probably why you get a 500 server error. After you warm up the cache, run this: chown -R www-data.www-data app/cache/prod (be sure to replace www-data with your web server user).
Make sure your parameters.ini file has all the proper configs in place, since it's common for this file not to be checked in to whatever code repository you might be using. Or (and I've even done this) it's possible to simply forget to copy parameters from dev into the prod parameters.ini file.
You'll also need to look in your app/logs/prod.log to see what happens when you attempt to log in.