Reacting to Operations on Secret Manager - gcloud

We want to use Secret Manager as a primary datastore for secrets.
When certain secrets change in the primary datastore, we want to react to that and update a secondary datastore with the latest values.
Is there a way to react when a Secret is created/changed/deleted, without explicit polling?

As of 2020-07-21 this is not possible, but notifications are on the roadmap.
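One workaround pattern is to route Cloud Audit Logs for Secret Manager into a Pub/Sub topic via a log sink and react to those messages. A minimal Go sketch, assuming such a sink already exists; the project ID and the subscription name are illustrative:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()

	// Assumes a log sink already routes Secret Manager audit log
	// entries to a topic with a subscription named "secret-events".
	client, err := pubsub.NewClient(ctx, "my-project") // project ID is illustrative
	if err != nil {
		log.Fatalf("pubsub.NewClient: %v", err)
	}
	defer client.Close()

	sub := client.Subscription("secret-events")
	err = sub.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
		// Each message carries an audit LogEntry as JSON; parse it and
		// update the secondary datastore with the latest value here.
		log.Printf("audit log entry: %s", msg.Data)
		msg.Ack()
	})
	if err != nil {
		log.Fatalf("Receive: %v", err)
	}
}
```

Each message carries the audit LogEntry as JSON, so the handler can inspect the method name (e.g. CreateSecret, AddSecretVersion, DeleteSecret) before updating the secondary datastore.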

Related

How to prevent creating too many secrets in HashiCorp Vault

I am looking for a better solution for secrets rotation and found that Vault's dynamic secrets are a good one. By enabling a secrets engine, say the database engine, applications/services can lease dynamic secrets.
I noticed that every time an application leases a database secret, Vault creates a new user/account in the DB. I understand that each application/service needs to be a good citizen and use the secret for the duration of the lease. However, in a microservices environment, an implementation bug may cause services to request too many dynamic secrets, creating too many accounts in the DB.
Is there any way to prevent creating too many accounts? I am worried that too many accounts may cause problems in the DB.
You could go down the static roles route, which creates one role with a fixed username; Vault then just rotates that user's password whenever it needs to be rotated.
Here are some docs to get you started:
https://www.vaultproject.io/api/secret/databases#create-static-role
https://www.vaultproject.io/docs/secrets/databases#static-roles
Also, a warning from the website:
Not all database types support static roles at this time. Please consult the specific database documentation on the left navigation or the table below under Database Capabilities to see if a given database backend supports static roles.
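For illustration, a minimal sketch of creating such a static role with Vault's official Go client (github.com/hashicorp/vault/api); the mount path, role name, and field values below are assumptions, and the exact fields supported vary by database plugin:

```go
package main

import (
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// Assumes VAULT_ADDR and VAULT_TOKEN are set in the environment.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatalf("vault client: %v", err)
	}

	// Create a static role bound to one fixed database user; Vault
	// rotates that user's password instead of creating new accounts.
	// The mount path, role name, and values here are illustrative.
	_, err = client.Logical().Write("database/static-roles/app-role", map[string]interface{}{
		"db_name":         "my-postgres",
		"username":        "app_user",
		"rotation_period": "24h",
	})
	if err != nil {
		log.Fatalf("create static role: %v", err)
	}
	log.Println("static role created")
}
```

With this in place, every service reads credentials for the same fixed user, so the number of DB accounts stays constant regardless of how many leases are requested.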

Statelessness of JWT and the definition of state

I have read quite a bit about JWT. I understand that with JWT you do not need a central database to check tokens against, which is great. However, don't we still need to securely store the JWT secret key on each service and keep it in sync? Why is this not considered "state" (albeit a much smaller one than a token database)?
Because the secret key is static, it doesn't change regularly. The main problem of stateful applications is that you have to sync the state between your app server instances (for example through a database, as you said), which has the potential to create a big bottleneck. With the secret, you can just keep it in a file that exists on every server instance and not worry about synchronizing it between servers.
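To make that concrete, here is a minimal sketch of stateless verification against a shared static key, using the golang-jwt library; the key literal is illustrative and would in practice be read from a file or environment variable present on each instance:

```go
package main

import (
	"fmt"
	"log"

	"github.com/golang-jwt/jwt/v5"
)

// In practice this would be loaded from a file or environment variable
// that is simply present on every server instance.
var secretKey = []byte("illustrative-shared-secret")

func verify(tokenString string) (jwt.MapClaims, error) {
	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		// Reject tokens signed with an unexpected algorithm.
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
		}
		return secretKey, nil
	})
	if err != nil {
		return nil, err
	}
	return token.Claims.(jwt.MapClaims), nil
}

func main() {
	// Issue a token and verify it with nothing but the shared key;
	// no database lookup is involved.
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{"sub": "user-42"})
	signed, _ := token.SignedString(secretKey)

	claims, err := verify(signed)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("verified claims:", claims)
}
```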

Kubernetes dynamic pod provisioning

I have an app I'm building on Kubernetes which needs to dynamically add and remove worker pods (which can't be known at initial deployment time). These pods are not interchangeable (so increasing the replica count wouldn't make sense). My question is: what is the right way to do this?
One possible solution would be to call the Kubernetes API to dynamically start and stop these worker pods as needed. However, I've heard that this might be a bad way to go: if those dynamically created pods are not part of a ReplicaSet or Deployment, nothing will restart them when they die (I have not yet verified whether this is true).
Alternatively, I could use the Kubernetes API to dynamically spin up a higher-level abstraction (like a replica set or deployment). Is this a better solution? Or is there some other more preferable alternative?
If I understand you correctly, you need ConfigMaps.
From the official documentation:
The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed in pods or provide the configurations for system components such as controllers. ConfigMap is similar to Secrets, but provides a means of working with strings that don’t contain sensitive information. Users and system components alike can store configuration data in ConfigMap.
Here you can find some examples of how to set it up.
Please try it and let me know if that helped.
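As for the second approach mentioned in the question (dynamically spinning up a higher-level abstraction through the API), here is a minimal client-go sketch of creating a Deployment at runtime; the names, namespace, and image are illustrative:

```go
package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Assumes this runs inside the cluster; use clientcmd for out-of-cluster config.
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// One Deployment per distinct worker keeps the pods non-interchangeable.
	labels := map[string]string{"app": "worker-1"}

	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "worker-1"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "worker", Image: "example.com/worker:latest"}, // image is illustrative
					},
				},
			},
		},
	}

	// The Deployment controller will restart the pod if it dies.
	_, err = clientset.AppsV1().Deployments("default").Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("worker deployment created")
}
```

Because each worker gets its own Deployment, the pods stay non-interchangeable while still being restarted automatically when they die.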

How to watch a Pod api endpoint in a Kubernetes Operator using the SDK

Description
I have a CR associated with a Pod whose container exposes an API, e.g.:
/available
returning for example
{"available":"true"}
Is there a way to create a controller watch on that API endpoint so that whenever the response changes, the reconcile function is triggered?
I believe it could be possible using a channel with the controller watcher, but I don't see any similar examples out there.
Using
Kubernetes operator-sdk version v0.5.0+git
I'm afraid it's not as easy as you hope. Kubernetes controller objects react to add/edit/delete operations on some kind of resource in the cluster. A value exposed by an API of something running inside the Pod is not visible in the resource database, and there is no event going off to notify about the change either.
I see two things you can consider doing:
Create a regular controller that has its reconcile function triggered fairly often, and check the availability value inside that reconcile function. If the value changed, it would do something; if it didn't, it would do nothing.
Just create a separate task, outside the controller domain, that would monitor this API value and do something.
Controllers work using notifications: they don't actively watch for changes, they are notified about them. That's different from what you want to do, which is to periodically check the API response.
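A minimal sketch of the first option, written against a recent controller-runtime API rather than the older operator-sdk v0.5.0 wrappers; the pod URL, interval, and status handling are illustrative:

```go
package controllers

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type AvailabilityReconciler struct {
	client.Client
}

func (r *AvailabilityReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the CR here with r.Get(...) and resolve the pod's address
	// from its status; the URL below is illustrative.
	resp, err := http.Get("http://worker-pod:8080/available")
	if err != nil {
		// Pod not reachable yet; try again on the next tick.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}
	defer resp.Body.Close()

	var body struct {
		Available string `json:"available"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return ctrl.Result{}, fmt.Errorf("decoding response: %w", err)
	}

	// Compare with the last value stored in the CR's status and act
	// only when it changed; otherwise do nothing.

	// Requeue so the value is re-checked periodically rather than
	// waiting for a watch event that will never come.
	return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
}
```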

Google Cloud SQL Database Delete Protection

I would like the ability to protect against the deletion of a Cloud SQL instance. This seems like a good step to take to avoid actions from an angry employee or a regretful click.
Google added a deletion protection flag for Cloud SQL in August 2022.
https://cloud.google.com/sql/docs/mysql/deletion-protection
I couldn't find anything that literally protects the instance against deletion, but you could use the predefined roles in your project to try to protect your instances from, as you said, angry employees.
For example:
Keeping the role owner to yourself (assuming you are, indeed, the owner of this project).
Depending on the needs of the employees, you can probably assign them the role cloudsql.editor or similar. If this is too much, you can create your own custom roles to narrow down what you need.
As for a regretful click, there is not much you can do. You could regularly create an export and save it in one of your buckets, just in case you need to recreate your instance after a 'regretful' click.
Well, Terraform certainly seems to have added some kind of deletion protection on the GCP SQL instance. When I try to "terraform destroy", I get this error:
Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion
Perhaps this functionality was added after the OP reported the issue, which is quite possible given how old this thread is.
A related issue which talks about this.