We are using Facebook login in our application and need to know about Facebook's data deletion policy.

We need to know more about the data deletion policy, and in which cases we need to provide a data deletion option to the user.

Related

Keycloak. Storage SPI with external database

We already have a DB with users.
Do we have to migrate all records to the Keycloak DB, or can we just implement the Storage SPI?
We don't want to migrate the records, because we also have to support the old DB; migrating would force us to keep two databases in sync.
Can you please describe what the problems with this approach could be, and advise how to resolve them?
USER DATA SOURCES
Moving to a system such as Keycloak will require an architectural design on how to manage user fields. Some user fields will need migrating to an identity database managed by Keycloak. Applications can then receive updates to these fields within tokens.
KEYCLOAK DATA
Keycloak will expect to have its own user account storage, and this is where each user's subject claim will originate from. If a new user signs up, the user will be created here before being created in your business data.
Keycloak user data will include fields such as name and email, since these are used in workflows such as forgot password. You can keep most other user fields in your business data if you prefer.
So to summarize, a migration will be needed, but you don't have to migrate all user fields.
BUSINESS DATA
This may include other user fields that you want to keep where they are, but also include in access tokens and use for authorization in your APIs. Examples are values like roles, permissions, tenant ID, partner ID, and subscription level.
DESIGN STEPS
My recent blog post walks through some examples and suggests a way to think through your end-to-end flows. There are a couple of different user data scenarios mentioned there.
It is worth spending a day or two sketching out how you want your system to work: in particular, how your APIs will authorize requests, and how you will manage both existing and new users. This avoids discovering expensive problems later.
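As a concrete illustration of splitting fields this way, here is a minimal Python sketch (all names and values are hypothetical): Keycloak owns the identity fields and delivers them in the token, while the API looks up business fields by the token's subject claim and merges both into one authorization context.

```python
# Identity fields owned by Keycloak, delivered to the API inside the token.
identity_claims = {"sub": "3f2a-111", "email": "alice@example.com", "name": "Alice"}

# Business fields kept in your own database, keyed by the Keycloak subject.
business_db = {
    "3f2a-111": {"roles": ["admin"], "tenant_id": "t-42", "subscription_level": "gold"},
}

def build_auth_context(claims, db):
    """Merge token claims with business attributes for API authorization."""
    record = db.get(claims["sub"], {})
    return {**claims, **record}

ctx = build_auth_context(identity_claims, business_db)
assert ctx["tenant_id"] == "t-42" and ctx["email"] == "alice@example.com"
```

The point of the sketch is only the split: nothing business-specific needs migrating into Keycloak for the API to authorize requests, as long as the subject claim links the two stores.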

Managing Firebase Cloud Messaging Tokens with Multiple Users

Looking at the Firebase docs, it suggests that an FCM token is generated for each client instance, which must then be stored manually. If I'm linking each token to a user document in a Firestore database, will I need to manually remove the device-specific token when the user logs out?
For example, user A launches the app and their FCM token (e.g. "ABC") is stored to their user document. Then, user A logs out and B logs in. The FCM token would not refresh**, and therefore I'd need to remove that token from A's user document and move it to B's. Otherwise, any notifications destined for A would be sent to B (all on the same device).
Is this thinking correct? It seems like a tricky way to manage the tokens but as far as I can tell is necessary?
** As per Firebase docs, the token is only refreshed when:
The app is restored on a new device
The user uninstalls/reinstalls the app
The user clears app data.
Reading more of the docs, would it be a better solution to process the notification locally first, decide whether it was destined for the logged-in account, and only then present it? I.e. not add any sensitive data (e.g. a chat message) to the notification itself, and simply send a 'notification to fetch a new message'?
Yes, that is correct. An FCM token identifies an installation of a specific app on a specific device, nothing more and nothing less. It has no inherent relation to a user, so if you need such a relation, you will have to link them together yourself.
Keep in mind that just like multiple users can use a single device, a single user can also use multiple devices. In my experience that is in fact the more common scenario of the two.
Locally checking the target user of the notification against the actual current user is an interesting concept that could definitely help prevent showing the data to the wrong user.
In general though, you can also clear the token when the user signs out of your app (or a new user signs in). This is (as far as I can tell) the most common way of dealing with this scenario (see 1, 2, and more from this).
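The token lifecycle described above can be sketched as a simple token-to-user mapping (a hypothetical stand-in for a Firestore collection; in a real app, signing out would also invalidate the token on the device via the Firebase Messaging SDK's token-deletion API):

```python
# Hypothetical store mapping each device's FCM token to its current user.
token_to_user = {}  # in practice, e.g. a Firestore collection

def on_sign_in(token, user_id):
    # The device's current token now routes notifications to the new user.
    token_to_user[token] = user_id

def on_sign_out(token):
    # Unlink the token so the previous user no longer receives
    # notifications on this device.
    token_to_user.pop(token, None)

on_sign_in("ABC", "userA")
on_sign_out("ABC")          # user A signs out: token unlinked
on_sign_in("ABC", "userB")  # user B signs in on the same device
assert token_to_user["ABC"] == "userB"
```

Because a user can also have several devices, a production schema would typically allow multiple tokens per user (e.g. a list or subcollection), but the sign-out cleanup step is the same.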

How to Avoid Facebook Graph API Limit with million of users

I have a WordPress webpage with posts retrieved from a public Facebook page. The FB page is not mine. The problem is that I have millions of visitors on my web page, and every time a user visits, it makes an API call to the FB page. Since Facebook allows only a limited number of API calls in a time frame, my limit is reached almost instantly with such a huge number of visitors. Is there any solution to this problem? One idea I have is:
1. Retrieve posts from Facebook, store them locally, and display the stored copy every time a user visits. Is that possible? If yes, where do I start?
Or can we get a higher API limit by paying Facebook, or something like that? I am willing to pay as long as my API calls can be made sufficient for my needs.
I am open to any solution and would be very thankful for any help resolving the problem.
There are several possible solutions to this problem.
Storing responses in database
You can add an intermediary for your requests to the Facebook API in your application. This would mean having a database table which stores Facebook-related information, possibly along with a lifecycle (expiry) time, like:
facebook_data(user_id, lifecycle_time, ...)
Whenever you would theoretically need to send a request to Facebook API, you can check the database table to see whether the user already has a record in that table and whether it is still valid. If so, give this data to the user. If not, send an API request to Facebook and store the response in this table.
Storing responses in localStorage/memory/file
You can also store Facebook-related data in the localStorage of the web browser, in the memory of an app, in a file, or even in a local database specific to each user. This would avoid a lot of the communication and load your app places on your server(s).
Queueing user requests to be sent
If the Facebook-related data is not very urgent for your users, you can queue requests and send a single combined request instead of one request per visit. You can do this via a cron job.
Periodically sending requests to Facebook
You can group your users into batches and periodically refresh their values from Facebook, storing the results in your database.
Combination
Naturally, you can combine these approaches. For instance, you can store values both locally (in memory, a file, or localStorage) and in the database at the same time. The locally stored information is searched first, needing no request at all if it exists and is still valid. If not, the database record is checked and used if it exists and is still valid. Only if the data is found in neither the local resources nor your database do you send an API request.
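The combined lookup order can be sketched as a tiered cache (all stores are hypothetical dicts here, and validity/TTL checks are omitted for brevity):

```python
local_store = {}   # per-client cache (memory, file, localStorage, ...)
shared_db = {}     # shared server-side database table
api_calls = []     # records each real API call, for illustration

def call_facebook_api(key):
    """Placeholder for the real Graph API request."""
    api_calls.append(key)
    return {"data": key}

def get(key):
    if key in local_store:              # 1. cheapest: local cache
        return local_store[key]
    if key in shared_db:                # 2. shared database
        local_store[key] = shared_db[key]
        return shared_db[key]
    value = call_facebook_api(key)      # 3. last resort: the API
    shared_db[key] = local_store[key] = value  # write back to both tiers
    return value

get("page-posts")
get("page-posts")
assert api_calls == ["page-posts"]  # the second lookup hit a cache
```

Each tier absorbs traffic before it reaches the next, so the API sees only the misses that fall through both caches.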

Allowing a user to update their own profile using the REST API

I have been experimenting with the REST API using my logged in user account's token to then make PUT requests on my user record to update some custom attributes.
In order to get this to work I had to grant my user account the manage-users role in Keycloak; prior to this I was getting forbidden responses back.
I can now make the PUT request successfully, and after logging out and logging back in I can see the updated attributes I set in my PUT request.
But I have now allowed my user to manage all users in my realm, which I don't want to allow.
Instead I only want to be able to update my own account details.
I know the user can view their own profile and make changes on the Keycloak-provided screens. But for certain custom attributes I want to be able to do this from the client-side application they are logged in to, i.e. using the REST API, but without granting them a role that would allow them to update other users' details.
Is this possible?
According to the User section of Keycloak's Admin REST API documentation, this is not possible.
One solution would be for your client app to send the update request to a backend. The backend would verify that the update request is legitimate (i.e. the JWT is verified and the update applies only to the user requesting the change) and then perform the update on the user's behalf.
Another solution would be to theme the User Account Service's screens to add input fields for your custom attributes, as the documentation says that:
This screen can be extended to allow the user to manage additional attributes. See the Server Developer Guide for more details.
The second option seems the more secure. I hope that helps.
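For the backend-proxy solution, the core check is small. A minimal Python sketch (names are illustrative; assume the JWT has already been cryptographically verified and decoded by a JWT library upstream):

```python
# Hypothetical whitelist of custom attributes a user may set on themselves.
ALLOWED_ATTRIBUTES = {"displayColor", "timezone"}

def can_update(decoded_token, target_user_id):
    """Allow an update only when the caller is editing their own record."""
    return decoded_token.get("sub") == target_user_id

def sanitize(update):
    """Drop any attribute the user is not allowed to set."""
    return {k: v for k, v in update.items() if k in ALLOWED_ATTRIBUTES}

assert can_update({"sub": "user-123"}, "user-123") is True
assert can_update({"sub": "user-123"}, "user-456") is False
assert sanitize({"displayColor": "red", "roles": ["admin"]}) == {"displayColor": "red"}
```

Only the backend holds the manage-users credentials; the end user's token never gains admin rights, and even a legitimate self-update cannot smuggle in privileged fields like roles.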
This seems to be possible with the Account Management API.
Unfortunately, I didn't find any official documentation about that. However, there's an example in Keycloak that demonstrates how to do it.

Unattended AggCat API access

I'd like to confirm my reading of the documentation for AggCat:
Real-time API access with categorization requires a user token which expires in one hour
If we want to refresh a customer's data behind the scenes, as an unattended process, we can use the Batch Data APIs?
Mike,
You can refresh the access tokens anytime you want, but they are always only an hour long.
Batch will update the account transactions on a nightly basis and will generate a file that you can consume to capture transactions. To download those files you will make use of the Batch API calls, which also utilize the user token. You can at any time perform a real-time refresh on an account to capture up-to-date transaction information. If you wish to have categorization, that information will be available in both the batch file and the getTransactions API call.
regards,
Ben
You can refresh the access tokens anytime you want, but they are always only an hour long.
Batch is only if you want to get categorization. Nothing to do with unattended access specifically.