Firestore - Delete all user data - google-cloud-firestore

I have seen that there is an extension called "Delete User Data" which simplifies the data deletion process by performing hard deletions in order to comply with GDPR.
This extension is really cool, and it lets us configure the full paths to docs or collections, including Storage files. But... what if I need to run a query because the user ID is not the identifier of the document?
Is it possible to configure the extension to "perform queries"? Is it perfectly normal to run another auth-triggered Cloud Function for deleting query-related docs/fields?

Yes, you can have as many cloud-triggered events as you like.
Based on the information you've provided, it sounds like writing a new Cloud Function triggered onDelete would be the best approach.
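For illustration, here is a minimal sketch of such a function, assuming (hypothetically) that the documents reference the user through an ownerId field in a posts collection; both names are examples, not part of the extension:

    // Sketch only: collection and field names are assumptions.
    import * as functions from "firebase-functions";
    import * as admin from "firebase-admin";

    admin.initializeApp();

    export const purgeUserData = functions.auth.user().onDelete(async (user) => {
      const db = admin.firestore();
      // Query for docs that reference the deleted user's uid in a field,
      // since the uid is not the document ID here.
      const snapshot = await db
        .collection("posts")
        .where("ownerId", "==", user.uid)
        .get();
      // Batched writes are capped at 500 operations; chunk for larger result sets.
      const batch = db.batch();
      snapshot.docs.forEach((doc) => batch.delete(doc.ref));
      await batch.commit();
    });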

Is it possible to configure the extension to "perform queries"?
No, it is not possible; see this answer.

SAPUI5 multiple users working on one table entry

I'm currently developing an application in the SAP BTP for multiple users. In the application you have one table where all responsibilities of a specific task are written down. These responsibilities may overlap between the users, which means that for one responsibility multiple users are mentioned.
In the application the users should click either accept or reject if they are still responsible for this task. After they have given their feedback, they can click a save button to write everything via a batch submit to the HANA DB. If they are no longer responsible, their name should be removed from the task and they should not see it anymore.
The problem I am facing is that currently everything is stored in one database table, and if one user gives feedback on some entries while another user works on the same entries, the user who saves his entries last will overwrite the first one.
I have tried searching for a delta insert into the database, a live update after each user input, and locking the data while another user is working on it. But none of these seem to work well, because users would still be able to overwrite each other's entries, or they might lock some entries forever.
My question therefore is: what is the usual approach to managing multiple user inputs on a single table, or is using a single table bad practice in the first place?
My second question would be whether SAPUI5 supports this approach, or whether I can handle it in another way.
You need to do server-side validation before the save action.
UI5 does not support this directly; you have to handle it yourself.
Because UI5 is stateless with respect to the data, you could use the draft concept:
https://experience.sap.com/fiori-design-web/draft-handling/
Or, as already said, backend logic with checks before the save; a sketch of such a check follows below.
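To illustrate the backend check, here is a minimal sketch of optimistic locking; the type, table, and function names are made up for the example (in an OData/HANA setup this role is typically played by ETags or a version column):

    // Every save must carry the version the row was read at; a stale version
    // is rejected instead of silently overwriting the other user's changes.
    interface Task {
      id: string;
      version: number;
      responsibles: string[];
    }

    const table = new Map<string, Task>(); // stand-in for the DB table

    function saveTask(incoming: Task): { ok: boolean; current?: Task } {
      const stored = table.get(incoming.id);
      if (!stored) {
        return { ok: false };
      }
      if (stored.version !== incoming.version) {
        // Someone saved in the meantime: hand back the current row so the
        // client can re-read, merge, and retry instead of overwriting it.
        return { ok: false, current: stored };
      }
      table.set(incoming.id, { ...incoming, version: incoming.version + 1 });
      return { ok: true };
    }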

Extending Azure multistage yaml pipelines logs

I'm trying to log the completion of each stage of a multi-stage YAML pipeline with some custom details.
How can I add custom details to the https://dev.azure.com//_settings/audit logs?
Is there a way to persist this information in SQL DB or any other persistent storage option?
How can I subscribe to these log events?
How can I add custom details to the
https://dev.azure.com//_settings/audit logs?
I'm afraid this is not available for you to achieve.
The sentence format of the details is defined and fixed by our backend class. Once the corresponding action occurs, besides the action class, the event method is also called to generate and record the log on the audit page. This is all done by the backend, and we have not exposed this operation to users so far.
That said, speaking personally, this is a good idea that we may consider expanding on, because customized details can make the logs more readable for a company. You can raise your idea here, then vote and comment on it. Our corresponding Product Group reviews these suggestions regularly and considers taking them into the development roadmap depending on their priority (votes).
How can I subscribe to these log events?
One important thing that I need to let you know: the audit log is only kept for 90 days. It is cleared after 90 days, including from our backend database. In a nutshell, if you want audit logs older than 90 days, there is no way to restore them.
So I suggest you configure a scheduled pipeline with a PowerShell task.
In this PowerShell task, call this API to get the logs and then store them in any file format you want, e.g. .csv, .json, etc.
For the schedule, you can set any time period you like, as long as it is less than 90 days, so that you do not lose any audit event logs.
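If a standalone script is easier to maintain than inline PowerShell, the same call can be made from a small script. A hedged sketch follows; the auditservice endpoint is the documented Azure DevOps audit log REST API, but treat the exact api-version and response shape as assumptions to verify:

    // Requires Node 18+ (global fetch). AZDO_ORG / AZDO_PAT are placeholders;
    // the PAT needs the audit log read scope.
    import { writeFileSync } from "fs";

    const org = process.env.AZDO_ORG!;
    const pat = process.env.AZDO_PAT!;

    async function exportAuditLog(): Promise<void> {
      // Audit log query endpoint per the Azure DevOps REST docs; verify the
      // api-version available for your organization.
      const url = `https://auditservice.dev.azure.com/${org}/_apis/audit/auditlog?api-version=7.1-preview.1`;
      const auth = Buffer.from(":" + pat).toString("base64");
      const res = await fetch(url, { headers: { Authorization: `Basic ${auth}` } });
      const body = await res.json();
      // Persist before the 90-day retention window expires.
      const file = `audit-${new Date().toISOString().slice(0, 10)}.json`;
      writeFileSync(file, JSON.stringify(body, null, 2));
    }

    exportAuditLog().catch(console.error);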
Is there a way to persist this information in SQL DB or any other
persistent storage option?
If you can use a different database, I'd suggest you consider using a document storage solution such as CouchDB, DynamoDB, or MongoDB.
Depending on your actual choice, you can make use of a Command Line task on a self-hosted agent to execute the corresponding storage commands.
For example, I used MongoDB, and I can run the command below to store the JSON file that the API generated previously:
C:\>mongodb\bin\mongoimport --jsonArray -d mer -c docs --file audit20191231.json

Google Cloud SQL Database Delete Protection

I would like the ability to protect against the deletion of a Cloud SQL instance. This seems like a good step to take to guard against actions by an angry employee or a regretful click.
Google added a deletion protection flag for Cloud SQL in August 2022.
https://cloud.google.com/sql/docs/mysql/deletion-protection
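For example, it can be enabled on an existing instance from the command line (flag as documented at the link above; verify it against your gcloud version):

    gcloud sql instances patch INSTANCE_NAME --deletion-protection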
I couldn't find anything that literally protects the instance against deletion, but you could use the predefined roles in your project to try to protect your instances from, as you said, angry employees.
For example:
Keeping the owner role to yourself (assuming you are, indeed, the owner of this project).
Depending on the needs of the employees, you can probably assign them the cloudsql.editor role or similar. If this is too much, you can create your own custom roles to narrow down what they need.
As for a regretful click, there is not much you can do. You could regularly create an export and save it to one of your buckets, just in case you need to recreate your instance after a 'regretful' click.
Well, Terraform certainly seems to have added some kind of deletion protection on the GCP SQL instance. When I try to run "terraform destroy", I get this error:
Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion
Perhaps this functionality was added after the OP reported the issue, which is quite possible given how old this thread is.
A related issue which talks about this.

How to invoke an operation with the data we get after processing

Hello guys, I'm looking to create a workflow where it will do some processing and fetch some persons. I need to enforce each person by invoking an extension which takes a person entity as input.
Any suggestions or pointers will be deeply appreciated.
I'm going to assume you want to use an ITIM operation to do this, and that you are not going to use an operation tied to an activity (e.g. person/account modify, restore, suspend, add, delete, etc.).
So for example, if you want to create an operation that fetches all users where their department=sales and you want to make a change to those persons (e.g. change all their phone numbers to be 800-555-1212) then you could use a lifecycle rule coupled with an operation.
First, create an operation. I would suggest creating the operation at the entity type level of Person unless you specifically need this operation to work on only one person type. Once you get into the operation diagram Java applet, click the Properties button and change the operation type to Non Static.
Now, do your coding and flow in the operation. Your person will come in as the relevant data object Entity. So make your changes in a script node, then send it through a modify person extension, and finally a policy enforcement extension. I recommend the policy enforcement extension because, if you are using provisioning policies, you want any accounts to be updated when you update the person.
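As a rough illustration of the script node body (the attribute name and value are examples only; verify the Entity relevant-data accessors against the InfoCenter):

    // ITIM workflow script node (JavaScript); illustrative sketch only.
    var person = Entity.get();                        // the Person from relevant data
    person.setProperty("telephonenumber", ["800-555-1212"]);
    Entity.set(person);                               // write the change back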
Save your operation, and create a lifecycle rule at the same level (either on the global Person entity type or on a specific Person type). Set the operation for the lifecycle rule to the operation you just created. On the event page, set your filter to whatever you want the operation to apply to (e.g. (department=sales)). You can set a schedule on the lifecycle rule to automate it, or you can simply save it and run it manually by selecting it and clicking the Run button.
Check IBM's InfoCenter pages for more information on all the workflow extensions that are available out of the box, as well as all of the JavaScript functions that are available.

Is it possible to create a safe API for public editable data with MongoLabs?

This is related to "Is there ReadOnly REST API key to a MongoLab database, or is it always ReadWrite" and "How does Mongolab REST API authenticate".
I want to make it possible for unauthenticated users of my web app to create resources and share them. The created resource is an array of links ['link1', 'link2', 'link3'].
I'm looking at using MongoLabs directly from the client for this, which is possible through their REST api.
The problem, though, is that as far as I can see, if I do that, it would be impossible to prevent vandals from rather easily clearing out the entire collection.
Is this correct, and if so, is there a simple solution (without running a custom backend) to do something like this?
First off, you could keep a "history", so if something goes wrong you can call on an easy command to restore records.
Secondly, you might screen connected clients for abusive behavior; e.g. measure the number of delete or update commands in a certain time window. If this gets triggered, you can call on your restoration process; a rough sketch of this idea follows below.
Note: I have no experience with MongoLabs whatsoever, but this, to me, would be a suitable safeguard when creating a public API.
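A toy sketch of that screening idea (thresholds and names are invented, and MongoLab itself offers no such hook, so this would have to live in whatever thin layer fronts the API):

    const WINDOW_MS = 60_000;        // sliding window length
    const MAX_DESTRUCTIVE = 20;      // allowed deletes/updates per window

    const recent = new Map<string, number[]>(); // clientId -> call timestamps

    // Returns true once a client crosses the threshold, i.e. when you
    // should block it and kick off the restoration process.
    function recordDestructiveOp(clientId: string, now = Date.now()): boolean {
      const hits = (recent.get(clientId) ?? []).filter((t) => now - t < WINDOW_MS);
      hits.push(now);
      recent.set(clientId, hits);
      return hits.length > MAX_DESTRUCTIVE;
    }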