I would like the ability to protect against the deletion of a Cloud SQL instance. This seems like a good step to take to guard against actions by an angry employee or a regretful click.
Google added a deletion protection flag for Cloud SQL in August 2022.
https://cloud.google.com/sql/docs/mysql/deletion-protection
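To turn it on programmatically, something like this should work. This is only a sketch using the googleapis Node.js client; the project and instance names are placeholders, and the settings field name is my reading of the Admin API, so double-check it before relying on it:

```javascript
// Sketch: enable API-level deletion protection on a Cloud SQL instance.
const { google } = require("googleapis");

async function enableDeletionProtection() {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/sqlservice.admin"],
  });
  const sqladmin = google.sqladmin({ version: "v1", auth });

  await sqladmin.instances.patch({
    project: "my-project",   // placeholder
    instance: "my-instance", // placeholder
    requestBody: {
      // Assumed field name for the deletion-protection setting.
      settings: { deletionProtectionEnabled: true },
    },
  });
}

enableDeletionProtection().catch(console.error);
```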
I couldn't find anything that literally protects the instance against deletion, but you could use the predefined IAM roles in your project to protect your instances from, as you said, angry employees.
For example:
Keep the owner role to yourself (assuming you are, indeed, the owner of this project).
Depending on the needs of the employees, you can probably assign them the role roles/cloudsql.editor or similar. If that grants too much, you can create your own custom roles to narrow permissions down to exactly what you need.
As for a regretful click, there is not much you can do. You could regularly create an export and save it in one of your Cloud Storage buckets, in case you need to recreate your instance after a 'regretful' click; a sketch of such an export follows.
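A sketch of what a periodic export call could look like with the googleapis Node.js client; the bucket, project, instance, and database names are all placeholders:

```javascript
// Sketch: trigger a Cloud SQL export to a GCS bucket via the Admin API.
// Run it from cron / Cloud Scheduler to get regular restore points.
const { google } = require("googleapis");

async function exportToBucket() {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/sqlservice.admin"],
  });
  const sqladmin = google.sqladmin({ version: "v1", auth });

  await sqladmin.instances.export({
    project: "my-project",   // placeholder
    instance: "my-instance", // placeholder
    requestBody: {
      exportContext: {
        fileType: "SQL",
        uri: "gs://my-backup-bucket/export-" + Date.now() + ".sql",
        databases: ["my-database"], // placeholder
      },
    },
  });
}

exportToBucket().catch(console.error);
```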
Well, Terraform certainly seems to have added some kind of deletion protection for GCP SQL instances. When I try to "terraform destroy", I get this error:
Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion
Perhaps this functionality was added after the OP reported the issue, which is quite possible given how old this thread is.
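For reference, the guard lives on the resource itself. A minimal sketch of the relevant Terraform configuration; the instance details are placeholders, and in recent versions of the Google provider deletion_protection defaults to true:

```hcl
resource "google_sql_database_instance" "main" {
  name             = "my-instance" # placeholder
  database_version = "MYSQL_8_0"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }

  # Terraform-level guard: terraform destroy fails while this is true.
  # Set it to false (and apply) before a deliberate destroy.
  deletion_protection = true
}
```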
A related issue which talks about this.
I have seen that there is an extension called "Delete User Data" which simplifies the data deletion process, performing hard deletions in order to comply with GDPR policies.
This extension is really cool, and lets us configure it by specifying the full path to docs or collections, including Storage files. But... what if I need to run a query because the user ID is not the identifier of the document?
Is it possible to configure the extension to "perform queries"? Or is it perfectly normal to run another auth-triggered Cloud Function to delete query-related docs/fields?
Yes, you can have as many Cloud Functions triggered off the same auth event as you like.
It sounds like, based on the information you've provided, that writing a new Cloud Function triggered on user deletion (onDelete) would be the best approach; a sketch follows.
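For instance, something along these lines. This is a sketch only; the posts collection and its authorId field are made-up stand-ins for wherever your documents reference the user's UID:

```javascript
// Sketch: hard-delete query-matched docs when a user account is deleted.
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.cleanupUserDocs = functions.auth.user().onDelete(async (user) => {
  const db = admin.firestore();

  // Find every doc whose id is NOT the uid but which references it.
  const snapshot = await db
    .collection("posts")               // placeholder collection
    .where("authorId", "==", user.uid) // placeholder field
    .get();

  // Hard-delete them in one batch (batches cap at 500 writes, so
  // chunk the docs for larger result sets).
  const batch = db.batch();
  snapshot.docs.forEach((doc) => batch.delete(doc.ref));
  await batch.commit();
});
```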
Is it possible to configure the extension to "perform queries"?
No, it is not possible; see this answer.
Is it possible to prevent project deletion in OpenShift?
I have some projects that must not be deleted. Sure, I can recreate any projects that were accidentally deleted, but there would still be an outage.
I've read through a lot of docs but haven't come across anything yet. Haven't found anything on preventing namespace deletion in Kubernetes either. I'm hoping I missed something.
You can prevent deletion by certain users, but not deletion in general. A good starting point for authorization in OpenShift might be:
https://docs.okd.io/3.9/architecture/additional_concepts/authorization.html
This means you should have a user or group of users whom you trust not to delete a project by accident, and ordinary users who can create and destroy objects inside the project but cannot delete the project itself. In practice that means reserving the project-scoped admin role (which can delete the project) for the trusted group, and giving everyone else edit, e.g. oc adm policy add-role-to-user edit alice -n myproject.
Hope this helps.
I'm using MongoDB as my database, and as a first-time back-end developer the ease with which I can delete an entire database/collection really bothers me.
Simply typing db.collection.remove() removes all records from that collection!
I know that an effective backup strategy should render this a non-issue, but I occasionally do run .remove() on some collections, and I'd hate to type in the wrong collection name by accident and (a) have to go through a backup restore, and (b) lose whatever data I had gathered between the backup and the restore, especially as my app gathers a lot of user data.
Is there any 'safeguard' I can set up my database to use, even if it's just a warning/confirmation that says
"Yo, are you sure you want to remove everything from <collectionname>? Choose: Yes/No"
User roles won't fix your problem. If your account has permissions to delete one user, you could accidentally delete them all. If your account has permissions to update an attribute for one user, you could accidentally update all of your users.
There's a simple fix for this however.
Step 0: Backup your database. And test your backups regularly. And make sure you get alerted if the backup did not run, or errored. Replica sets are not backups. I know this is obvious, but evidently it's not obvious to everybody.
Step 1: Write a web admin GUI for your database. This will only take a day or two, and it should be simple enough that a secretary or intern could use it without fear for your data. (If you think this will take a long time, find a framework with more bells and whistles. Your admin console doesn't even need to be written in the same language as your app.)
Step 2: Data migrations (maintenance transformations of your database) should always be run from scripts checked into source control and tested on non-prod beforehand; a sketch of such a script follows after these steps. The script could be as simple as mongo --eval "foo.update(blah)", but you should run it as a script to avoid copy-and-paste errors. Ideally, you would even have a checklist for all migrations. (Check that you have a recent backup. Check the database log and system load beforehand. Write a before and after query that will tell you if the migration was successful...)
Step 3: You now no longer need to use the production Mongo console. So don't. It's a useful tool for development, but that's only needed on local development databases.
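To make step 2 concrete, here is the kind of checked-in migration script meant above; the collection, field, and values are hypothetical:

```javascript
// migrations/0001-backfill-status.js (hypothetical)
// Run with: mongo mydb migrations/0001-backfill-status.js

// Before-query: how many documents should this touch?
var before = db.users.count({ status: { $exists: false } });
print("documents to migrate: " + before);

// The migration itself. Always scoped by an explicit query, never {}.
db.users.update(
  { status: { $exists: false } },
  { $set: { status: "active" } },
  { multi: true }
);

// After-query: did it succeed?
var after = db.users.count({ status: { $exists: false } });
print("still unmigrated: " + after); // expect 0
```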
The above-mentioned roles might be useful for read-only queries. But you can already do that against a non-master replica set member.
tl;dr: You can go pretty far using cowboy admin techniques, but eventually you're going to figure out that it's better (and not much more work) to automate everything.
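And if a shell does stick around despite step 3, its helpers can be overridden locally as a last line of defense. A hypothetical ~/.mongorc.js sketch; this is client-side only, nothing is enforced by the server:

```javascript
// ~/.mongorc.js -- loaded by every interactive mongo shell you start.

// Refuse remove() calls with an empty/missing query.
var realRemove = DBCollection.prototype.remove;
DBCollection.prototype.remove = function (query) {
  if (!query || Object.keys(query).length === 0) {
    throw new Error("refusing remove() with an empty query on " + this.getName());
  }
  return realRemove.apply(this, arguments);
};

// Disable drop helpers outright.
var noDrop = function () {
  throw new Error("drop disabled in ~/.mongorc.js; comment it out to override");
};
DBCollection.prototype.drop = noDrop;
DB.prototype.dropDatabase = noDrop;
```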
There is nothing you can do in the current version to provide this functionality.
In a future version, when user-defined roles are available, you could define a role which allows insert() and update() but not remove() or drop(), and thereby force yourself to log in as a different, higher-privileged user for destructive work; but that's not available in the current (2.4) version.
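For reference, once user-defined roles did land (MongoDB 2.6), the role described above could be built roughly like this; the database and role names are placeholders:

```javascript
// Run as a user admin: create a role that can read and write
// documents but has no "remove" or collection-drop actions.
var mydb = db.getSiblingDB("mydb"); // placeholder database
mydb.createRole({
  role: "insertUpdateOnly", // placeholder name
  privileges: [
    {
      resource: { db: "mydb", collection: "" }, // every collection in mydb
      actions: ["find", "insert", "update"],    // deliberately no "remove"
    },
  ],
  roles: [],
});
```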
This is related to "Is there a ReadOnly REST API key to a MongoLab database, or is it always ReadWrite" and "How does the MongoLab REST API authenticate".
I want to make it possible for unauthenticated users of my web app to create resources and share them. The created resource is an array of links ['link1', 'link2', 'link3'].
I'm looking at using MongoLab directly from the client for this, which is possible through their REST API.
The problem, though, is that as far as I can see, if I do that, it would be impossible to prevent vandals from rather easily clearing out the entire collection.
Is this correct, and if so, is there a simple solution (without running a custom backend) to do something like this?
First off, you could create a "history", so that if something goes wrong you can call on an easy command to restore records.
Secondly, you might screen connected clients for abusive behavior, e.g. measure the number of delete or update commands in a certain time window. If this gets triggered, you can call on your restoration process; a sketch follows below.
Note: I have no experience with MongoLab whatsoever, but this, to me, would be a suitable safeguard when creating a public API.
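To sketch that screening idea (note it implies some layer, e.g. a thin proxy, that can observe each client's requests, which the question was hoping to avoid; all names here are made up):

```javascript
// Sketch: per-client sliding-window counter for destructive commands.
var WINDOW_MS = 60 * 1000; // look at the last minute
var MAX_DESTRUCTIVE = 20;  // arbitrary threshold
var recentOps = {};        // clientId -> timestamps of recent deletes/updates

function allowDestructiveOp(clientId) {
  var now = Date.now();
  var times = (recentOps[clientId] || []).filter(function (t) {
    return now - t < WINDOW_MS;
  });
  times.push(now);
  recentOps[clientId] = times;

  // Over the threshold: block the client and kick off restoration.
  return times.length <= MAX_DESTRUCTIVE;
}
```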
In most command interfaces I've seen, there is typically an "Execute" method which takes a command input and either returns void or some generic structure indicating whether the command executed successfully (we are using the latter). Now, I've never thought of this before, but we suddenly need to know more details about the result of the command than what we can expose generically.
Consider the following example:
You have a team, and you are creating a screen where you can add members to it. The members of the team are shown in a grid below the "add new member" controls. Now, when you press "add new member" you want to run some jQuery/roundhouse/whatever and add the new member to the list of team members. No problems so far, but: you also want to include some identification data in a hidden field for each member, and this ID data comes from the server.
So the problem is: how can I get that ID data from the server? The "AddNewTeamMember" command which I am pushing through the "ExecuteCommand" method does not give me anything useful back, and if I add a new query method to the service, say "GetLastAddedTeamMember", then I might just get the last entry added by someone else (at least if this is data which is added very aggressively by different users). In some situations you have a natural unique identifier generated on the client side which we can use, but for team members we do not.
Given that you have no choice but to update an on-page widget when another command completes, I see two choices for you:
Shoot off the command, display something locally that indicates it is submitted, and then wait until you get a notification from the server that the team member list has changed. Update the widget to reflect that.
Add a correlation ID to your command when you submit it, and add the team member provisionally to the local list. When you get a confirmation from the server that a team member update happened because of your correlation ID, update your local data (sketched after this answer).
I would suggest the first approach, where the "provisional indicator" could be a visually marked version of the normal entry put in place immediately; then, when you finally get an update, you should have the data you need.
Given you went with CQRS to solve this problem, I assume you have frequent updates to the content of those widgets happening in the background already, and so have presumably solved the "background update" problem.
If not, I suggest you either ditch CQRS as a bad, over-complicated solution in your problem space, or solve the background update problem first.
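A rough client-side sketch of the second option; the helper functions, message shapes, and the WebSocket channel are all assumptions for illustration:

```javascript
// Sketch: submit a command with a correlation id, render a provisional
// row, and reconcile once the server echoes the id back.
function addMemberWithCorrelation(name, socket) {
  var correlationId = crypto.randomUUID();

  renderProvisionalRow(correlationId, name); // hypothetical UI helper

  socket.send(JSON.stringify({
    type: "AddNewTeamMember",
    correlationId: correlationId,
    payload: { name: name },
  }));

  socket.addEventListener("message", function (event) {
    var msg = JSON.parse(event.data);
    if (msg.type === "TeamMemberAdded" && msg.correlationId === correlationId) {
      // Server-assigned id arrives here; fill the hidden field with it.
      confirmRow(correlationId, msg.memberId); // hypothetical UI helper
    }
  });
}
```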
If you want to add an existing team member, you should query the read side of your application for this data. If you need to add a new team member, you have to consider whether it's necessary to show the user in the grid below at once. Can you wait until the team member is in place on the read side?

You can also query a service on the server side to get a unique ID (it can return a GUID). Then you add the team member to the grid and, of course, send the command to the server. But, if it's possible, try to design the application in a way that you don't have to show the team member at once. It's also possible to give the user a message saying something like: "Team member added, waiting for response from server." Then use AJAX to query the read side for new team members, and when the member appears on the read side, show it in the grid.

You might have to deal with team members added by other users, but does it matter? CQRS gives you a great way to collaborate with other users, so maybe you should take advantage of that. As I see it, CQRS forces you to think differently, and that may not be a bad thing.
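A sketch of that flow, polling the read side after the command goes off; the endpoints and UI helpers are hypothetical:

```javascript
// Sketch: send the command, show a "waiting" row, then poll the read
// side until the new team member materializes there.
async function addTeamMember(name) {
  // Client-generated id (or fetch a GUID from a server-side service).
  var memberId = crypto.randomUUID();

  await fetch("/commands/add-team-member", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ memberId: memberId, name: name }),
  });

  // "Team member added, waiting for response from server."
  showWaitingRow(memberId, name); // hypothetical UI helper

  var timer = setInterval(async function () {
    var res = await fetch("/queries/team-members/" + memberId);
    if (res.ok) {
      clearInterval(timer);
      confirmRow(memberId, await res.json()); // replace the waiting row
    }
  }, 1000);
}
```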