How to prevent creating too many secrets in HashiCorp Vault

I am looking for a better solution for secrets rotation and found that Vault's dynamic secrets are a good one. By enabling a secrets engine, say the database secrets engine, applications / services can lease dynamic secrets.
I noticed that every time an application leases a database secret, Vault creates a new user / account in the DB. I understand that each application / service needs to be a good citizen and use the secret within its lease time. However, in a microservices environment, an implementation bug may cause services to request too many dynamic secrets, which in turn creates too many accounts in the DB.
Is there any way to prevent creating too many accounts? I am just worried that too many accounts may cause problems in the DB.

You could go down the static roles route, which creates one role with a fixed username; Vault then just rotates that user's password whenever it needs to be rotated, instead of creating a new account per lease.
Here are some docs to get you started:
https://www.vaultproject.io/api/secret/databases#create-static-role
https://www.vaultproject.io/docs/secrets/databases#static-roles
Also, a warning from the website:
Not all database types support static roles at this time. Please consult the specific database documentation on the left navigation or the table below under Database Capabilities to see if a given database backend supports static roles.
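For illustration, here is a minimal sketch using the Vault Go client (github.com/hashicorp/vault/api). The mount path database/, the connection name my-postgres, the role name app-role and the pre-created DB account app-user are all assumptions for the example:

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig reads VAULT_ADDR and VAULT_TOKEN from the environment.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Create a static role bound to one fixed, pre-existing DB account.
	// Vault rotates that account's password every rotation_period rather
	// than creating a new account for every credential request.
	_, err = client.Logical().Write("database/static-roles/app-role", map[string]interface{}{
		"db_name":         "my-postgres", // name of the configured connection
		"username":        "app-user",    // existing DB account Vault will manage
		"rotation_period": "24h",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Applications read the current credentials; no new DB account is created.
	secret, err := client.Logical().Read("database/static-creds/app-role")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(secret.Data["username"], secret.Data["password"])
}
```

Every service that reads database/static-creds/app-role gets the same single account back, so even a buggy service that requests credentials far too often cannot grow the number of DB users.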

Related

Statelessness of JWT and the definition of state

I have read quite a bit about JWT. I understand that with JWT you do not need a central database to check tokens against, which is great. However, don't we still need to securely store the JWT secret key and keep it in sync across the different services? Why is this not considered "state" (albeit a much smaller one than a token database)?
Because the secret key is static; it doesn't change regularly. The main problem with stateful applications is that you have to sync the state between your app server instances (for example through a database, as you said), which has the potential to create a big bottleneck. With the secret, you can just keep it in a text file that exists on every server instance and not worry about keeping it synchronized between servers.
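To make that concrete, here is a minimal sketch in Go (using github.com/golang-jwt/jwt/v5; the secret value is a placeholder): any instance holding the same static secret can verify a token issued by any other instance, with no shared database involved.

```go
package main

import (
	"fmt"

	"github.com/golang-jwt/jwt/v5"
)

// Shared static secret; every service instance holds the same copy.
var secret = []byte("replace-with-a-long-random-secret")

// verify checks a token without any database lookup: the HMAC signature
// alone proves it was issued by a holder of the secret.
func verify(tokenString string) (jwt.MapClaims, error) {
	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		// Reject unexpected algorithms before trusting the signature.
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
		}
		return secret, nil
	})
	if err != nil {
		return nil, err
	}
	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok {
		return nil, fmt.Errorf("unexpected claims type")
	}
	return claims, nil
}

func main() {
	// Issue a token on one "instance"...
	t := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{"sub": "alice"})
	s, _ := t.SignedString(secret)
	// ...and verify it on another, with no state shared beyond the secret.
	claims, err := verify(s)
	fmt.Println(claims, err)
}
```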

Handling database changes with Azure deployment slot

I'm currently configuring an environment for a web site. As I'm using Azure, I'd like to use deployment slots in order to ensure that users won't experience any downtime. While I understand the goal of deployment slots, I have difficulty understanding how they could be usable in my case.
Basically, the site uses a database whose schema will evolve as time goes on. In other words, most of my releases will alter the schema of the database, and I can't guarantee that every change will be backward compatible (indeed, I might delete columns or something).
Therefore, I see two options: either both deployment slots share the same database, or each uses its own.
However, if they share the same database, then after the deployment runs in the staging slot, the production slot may start failing (because it references a deleted column, for example), as illustrated below. So I can't use deployment slots that way.
Using two separate databases seems to be a viable option... if they are synchronized. Indeed, if the staging database is only there to validate a deployment, I can't swap the site to that slot while the production slot is updated, as the data in that database won't be up to date. Therefore I need to ensure that the data are in sync, BUT that when staging gets updated, this sync is somehow paused, as the schemas won't match anymore...
All the articles I've read talk about deployment slots without really mentioning the database issue, so I don't really understand how this is supposed to work. Can someone enlighten me a bit, please?
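To illustrate the shared-database case concretely, here is a minimal sketch of the failure I'm worried about (hypothetical table and column names; Go with the lib/pq driver chosen purely for illustration): staging's migration drops a column, and the code still live in the production slot immediately starts failing.

```go
package main

// Suppose release N+1, deployed to the staging slot, ran this migration
// against the shared database:
//   ALTER TABLE users DROP COLUMN nickname;
// Release N is still serving traffic in the production slot and does this:

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // hypothetical driver choice
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/mysite?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var nickname string
	// Fails after staging's migration: column "nickname" does not exist.
	err = db.QueryRow(`SELECT nickname FROM users WHERE id = $1`, 42).Scan(&nickname)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(nickname)
}
```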

Multiple pods and nodes management in Kubernetes

I've been digging through the Kubernetes documentation to try to figure out the recommended approach for this case.
I have a private movie API with the following microservices (pods).
- summary
- reviews
- popularity
Also I have accounts that can access these services.
How do I restrict access to services per account, e.g. account A can access all the services but account B can only access summary?
Account A could be doing 100x more requests than account B. Is it possible to scale services for specific accounts?
Should I set up the accounts as nodes?
I feel like I'm missing something basic here.
Any thoughts or animated gifs are very welcome.
It sounds like this level of control should be implemented at the application level.
Access to particular parts of your application, in this case the services, should probably be controlled via user permissions. A similar line of thought applies to scaling out the services: allow everything to scale, but rate limit up front, e.g., account B can make 10 requests per second and account A can do 100x that. Designating accounts to nodes might also be possible, but should be avoided. You don't want to end up micromanaging the orchestration layer :)
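As a minimal sketch of that front-door idea in Go (using golang.org/x/time/rate; the account names, limits, service paths and the X-Account-ID header are all assumptions for the example), one middleware can enforce both the per-account permissions and the rate limits before requests ever reach the pods:

```go
package main

import (
	"net/http"
	"strings"

	"golang.org/x/time/rate"
)

// Illustrative per-account policy: which services an account may call,
// and how fast. Names and numbers are made up for the example.
type policy struct {
	allowed map[string]bool
	limiter *rate.Limiter
}

var policies = map[string]*policy{
	"account-a": { // full access, 100x account-b's rate
		allowed: map[string]bool{"summary": true, "reviews": true, "popularity": true},
		limiter: rate.NewLimiter(rate.Limit(1000), 1000),
	},
	"account-b": { // summary only, 10 req/s
		allowed: map[string]bool{"summary": true},
		limiter: rate.NewLimiter(rate.Limit(10), 10),
	},
}

// gate enforces permissions and rate limits up front; the pods behind it
// can then scale on aggregate load as usual.
func gate(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		p, ok := policies[r.Header.Get("X-Account-ID")] // however callers are identified
		if !ok {
			http.Error(w, "unknown account", http.StatusForbidden)
			return
		}
		service := strings.TrimPrefix(r.URL.Path, "/") // e.g. "summary"
		if !p.allowed[service] {
			http.Error(w, "service not allowed for this account", http.StatusForbidden)
			return
		}
		if !p.limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/summary", func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("summary")) })
	http.ListenAndServe(":8080", gate(mux))
}
```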

Deploying IdentityServer3 on Load Balancer

We are moving right along with building out our custom IdentityServer solution based on IdentityServer3. We will be deploying in a load balanced environment.
According to https://identityserver.github.io/Documentation/docsv2/configuration/serviceFactory.html there are a number of services and stores that need to be implemented.
I have implemented the mandatory user service, client and scope stores.
The document says there are other mandatory items to implement but that there are default InMemory versions.
We were planning on using the default in-memory versions for the rest, but I am concerned that not all of them will work in a load-balanced scenario.
What are the other mandatory services and stores we must implement for things to work properly when load balanced?
With multiple Identity Server installations serving the same requests (e.g. load balanced) you won't be able to use the various in-memory token stores, otherwise authorization codes, refresh tokens and reference tokens issued by one server won't be recognized by the other, nor will user consent be persisted. If you are using IIS, machine key synchronization is also necessary to have tokens work across all instances.
There's an Entity Framework package available for the token stores; you'll need its operational data part.
There's also a very useful guide to going live here.

OpenStack Swift - deny deleting and modifying objects

Is there a way to configure a container so that, for a certain user, it allows creation of new objects but denies deletion and modification of existing ones?
My case is that I provide a web service which receives and serves files using remote OpenStack Swift storage, and I want to ensure that, in case of a credential compromise at the web-service level, whoever gains access to those credentials is not able to alter existing files.
To the best of my knowledge, it is not possible to deny a user the ability to delete or update existing objects in a container once that user can upload objects with their credentials: Swift's container write ACL grants PUT, POST and DELETE together.
But you can write a small API (in Java, say) and expose that to the user for uploading files; internally it uploads the file using the real set of credentials. Do not expose the operations the user is not supposed to perform (delete/update etc.). Keep all your credentials on the server side (better if encrypted). This way you may achieve what you want, but it is a workaround.
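The sketch below shows that wrapper idea in Go rather than Java, purely for illustration; the env var names, container name and /upload/ route are assumptions. Only an upload endpoint exists, overwrites of existing objects are refused, and the Swift token never leaves the server:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"strings"
)

// Assumed configuration: a Swift storage URL and auth token obtained out of
// band (e.g. from Keystone). These never leave the server.
var (
	storageURL = os.Getenv("SWIFT_STORAGE_URL") // e.g. https://swift.example/v1/AUTH_acct
	authToken  = os.Getenv("SWIFT_AUTH_TOKEN")
	container  = "uploads" // illustrative container name
)

// uploadOnly forwards PUTs to Swift. No delete/update operation is exposed,
// and a PUT that would overwrite an existing object is rejected, so existing
// files cannot be altered through this service.
func uploadOnly(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPut {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	object := strings.TrimPrefix(r.URL.Path, "/upload/")
	target := fmt.Sprintf("%s/%s/%s", storageURL, container, object)

	// Refuse overwrites: HEAD the object first; 404 means it doesn't exist.
	// (Not race-free; Swift's If-None-Match: * on PUT is the robust variant.)
	head, _ := http.NewRequest(http.MethodHead, target, nil)
	head.Header.Set("X-Auth-Token", authToken)
	resp, err := http.DefaultClient.Do(head)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotFound {
		http.Error(w, "object already exists", http.StatusConflict)
		return
	}

	// Forward the upload using the server-side credentials.
	put, _ := http.NewRequest(http.MethodPut, target, r.Body)
	put.Header.Set("X-Auth-Token", authToken)
	resp, err = http.DefaultClient.Do(put)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	resp.Body.Close()
	w.WriteHeader(resp.StatusCode)
}

func main() {
	http.HandleFunc("/upload/", uploadOnly)
	http.ListenAndServe(":8080", nil)
}
```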