My organisation recently started using folders on their Facebook page. Conversations can now be categorized as Inbox, Unread, Follow Up, Done and Spam.
I regularly download the messages via the Graph API, and everything worked fine while all the conversations stayed in the Inbox. Recently, however, they categorized the conversations, so the Inbox currently has only 7 elements, Unread and Follow Up have 0, and the Done and Spam folders hold countless elements.
I used the following query before:
me/conversations?fields=updated_time,messages.limit(100){message,from,created_time}&limit=100
Now it only returns the elements from the inbox.
The Graph API reference vaguely describes parameters like folder and tags. I tried to use the folder parameter like:
me/conversations?folder=done
me/conversations?folder=unread
me/conversations?folder=randomstring
Each time it returned the same 7 elements from the inbox.
However, if I query me/conversations?folder=spam it returns 10 different elements, which do not overlap with the actual "Spam" folder and include elements marked as "Done". (They are quite fishy conversations, so they might have been flagged as spam at some point; perhaps "marked as spam" and "being in the Spam folder" are two different things, I don't know.)
The API reference does not specify how to actually use the folder and tags parameters, and says nothing about how to query messages in the other folders.
Any idea how to access the conversations in the other folders? Querying the folders one by one or querying all folders at once would both work for me.
The documentation might need some updating. But for now, I can query messages in "DONE" via:
/{page-id}/conversations?tags=action:archived
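If it helps, here is a minimal sketch of calling that endpoint with Python's requests library; the Graph API version, the field list and the placeholder IDs/tokens are mine, not from the answer above:
import requests

PAGE_ID = "YOUR_PAGE_ID"        # placeholder
PAGE_TOKEN = "YOUR_PAGE_TOKEN"  # placeholder, a Page Access Token

resp = requests.get(
    f"https://graph.facebook.com/v2.8/{PAGE_ID}/conversations",
    params={
        "tags": "action:archived",  # conversations moved to "Done"
        "fields": "updated_time,messages.limit(100){message,from,created_time}",
        "access_token": PAGE_TOKEN,
    },
)
resp.raise_for_status()
for convo in resp.json().get("data", []):
    print(convo["id"], convo.get("updated_time"))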
After investigating this endpoint, I think ?folder is an old parameter from before Facebook reworked messaging from messages-in-folders to conversations on Pages. If I query my Page for the spam folder, I never get any messages listed, even though I flagged some as spam. And while they are flagged as spam, I also don't get them in the plain conversations query (without folder=spam).
I believe Facebook changed the system to tags on conversations. You can get those like so: [page_id]/conversations?fields=participants,messages{tags,message}&folder=sent.
Note: the "no sent tag" marking is not related to the folder applied (the folder parameters don't work anyway). Maybe the tagging on conversations is the reason you get the same result from querying threads at [page_id]?fields=threads{participants,messages{tags,message}}
So for now I think one has to iterate over everything there to read the inbox. I am still a bit sad about not getting flagged spam messages from the Graph API; I will investigate this a bit further later^^
Call folder=page_done with a Page Access Token to retrieve all threads/messages in the Done folder of the Page inbox:
{page-id}/conversations?folder=page_done
To get a Page Access Token:
{page-id}?fields=access_token
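A minimal sketch of those two calls, again using Python's requests; the placeholders, permissions and API version are illustrative, not confirmed:
import requests

GRAPH = "https://graph.facebook.com/v2.8"
PAGE_ID = "YOUR_PAGE_ID"        # placeholder
USER_TOKEN = "YOUR_USER_TOKEN"  # placeholder, a user token with page permissions

# 1) Fetch the Page Access Token via {page-id}?fields=access_token
token_resp = requests.get(
    f"{GRAPH}/{PAGE_ID}",
    params={"fields": "access_token", "access_token": USER_TOKEN},
)
page_token = token_resp.json()["access_token"]

# 2) Query the Done folder with that token
done_resp = requests.get(
    f"{GRAPH}/{PAGE_ID}/conversations",
    params={"folder": "page_done", "access_token": page_token},
)
for convo in done_resp.json().get("data", []):
    print(convo["id"])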
I have read a few questions on here about e-mail clients prefetching URLs in e-mails. An answer to this seems to be to add a new confirmation page, where the user has to click a button to confirm the desired action.
But, this answer states the following:
As of Feb 2017 Outlook (https://outlook.live.com/) scans emails arriving in your inbox and it sends all found URLs to Bing, to be indexed by Bing crawler.
This effectively makes all one-time use links like login/pass-reset/etc useless.
(Users of my service were complaining that one-time login links don't work for some of them and it appeared that BingPreview/1.0b is hitting the URL before the user even opens the inbox)
Drupal seems to be experiencing the same problem:
https://www.drupal.org/node/2828034
My major concern is with this statement:
As of Feb 2017 Outlook (https://outlook.live.com/) scans emails arriving in your inbox and it sends all found URLs to Bing, to be indexed by Bing crawler.
If this is the case, any URL in an e-mail meant to confirm an action, e.g. confirming a login, subscription, or unsubscription, can end up searchable in a search engine, if that's what's meant by "indexed" in the quote above. In this case, it's Bing. Not even a dedicated confirmation page where the user confirms the desired action truly mitigates this.
Scenario #1
If I email the user a login link with a one-time token in the URL, that URL will end up in Bing. The token will have a short lifetime, let's say 5 minutes, so I doubt anyone will manage to find the URL on Bing before the user clicks it or it expires.
Scenario #2
The user gets an e-mail with a link to confirm a subscription. This link is perhaps valid for 24 hours. This might(?) be long enough for someone else to stumble over the link on a search engine and accidentally (or on purpose) confirm the subscription on behalf of the user.
Scenario #2 is not uncommon; as far as I am aware, double opt-in is even considered best practice.
Scenario #3
Unsubscribe URLs at the bottom of newsletters, maybe valid forever? You don't want these publicly searchable in a search engine.
Assume all the one-time confirmation links land on a confirmation page where the user confirms the desired action.
Is it truly the issue that URLs in e-mails are indexed by search engines, at least Bing? And will they actually end up publicly searchable? If not, what is meant by indexed in the quote above?
I'll add, for the sake of completeness, that I don't think I've had much of a problem with this in my own use of the web, so my gut feeling is that this is unlikely to be the case.
Is it truly the issue that URLs in e-mails are indexed by search engines, at least Bing?
I can't say definitively whether they are being indexed or not, only Bing could answer that, but they are certainly being visited, at least with a simple GET request. I just tested this by sending myself a link to a page on my website that logs the requests made against it, and indeed I'm seeing a GET coming from 207.46.13.181 (reverse DNS says msnbot-207-46-13-181.search.msn.com), which suggests that an automated program from search.msn.com is crawling the link. This leads me to believe that yes, they are trying to index the link's content somehow, but that's only my opinion really.
And will they actually end up publicly searchable? If not, what is meant by "indexed" in the quote above?
Well, again, impossible to say unless you work for Bing. In any case, "indexing" means exactly what you think it does: parsing the content of a page to potentially include it in search results.
The real question here is: does this somehow represent a security problem or will it compromise my website's functionality?
It surely has the potential to: if your confirmation/reset/subscription/whatever process relies on a single GET request with the appropriate GET parameter, then you should definitely revisit the strategy, as it obviously allows anyone to perform the action (even maliciously, for example by enumerating possible IDs for your GET parameters).
If the link you are sending contains sensitive information or can be used to alter important data for a user of your website, then you should at least put it behind a login page that only gives access to the interested user. This way, anyone who wants to access it (including search engines) will be redirected to a login page if not already logged in.
If the link you are sending is just some kind of harmless confirmation link (e.g. subscribe/unsubscribe from a newsletter), then at least use a form inside the web page to do the actual confirmation through a POST request (possibly also with a CSRF token), otherwise you will inevitably end up with false positives.
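As a rough illustration of that last point, here is a minimal sketch (assuming Flask, which is my choice, not part of the answer) where the emailed link only renders a form on GET and the actual state change happens on POST with a CSRF token; confirm_subscription is a hypothetical placeholder:
import secrets
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)

def confirm_subscription(token):
    # Placeholder for the real state change (marking the subscription as confirmed).
    pass

@app.route("/confirm/<token>", methods=["GET"])
def show_confirmation(token):
    # Prefetchers and crawlers hitting the emailed URL only ever see this form.
    session["csrf"] = secrets.token_urlsafe(32)
    return (
        f'<form method="POST" action="/confirm/{token}">'
        f'<input type="hidden" name="csrf" value="{session["csrf"]}">'
        "<button>Confirm subscription</button></form>"
    )

@app.route("/confirm/<token>", methods=["POST"])
def do_confirmation(token):
    # Only a deliberate form submission with a matching CSRF token changes state.
    if request.form.get("csrf") != session.get("csrf"):
        abort(403)
    confirm_subscription(token)
    return "Subscription confirmed."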
I'd like to get all messages to and from a facebook page, including those in the "Done" folder.
Using
graph.facebook.com/{page-id}/conversations?fields=id,messages{message,to,from,created_time}
I'm able to retrieve all messages in the inbox, but threads I've marked as "Done" in the web UI are not listed. I can retrieve their messages via
graph.facebook.com/{conversation-id}?fields=id,messages{message,to,from,created_time}
but that requires knowing the conversation-id.
I also know about the conversations webhook. While that's great for realtime and will work for all future messages, it doesn't help with retrieving historical messages, which I also want.
Is there a way to also get the conversation IDs for messages in the "Done" folder of a page's inbox?
Yes.
You can use {page-id}/conversations?folder=page_done with a Page Access Token. It retrieves all threads in the Done folder of the Page inbox.
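For completeness, a minimal sketch of paging through that folder to collect all historical conversation IDs, assuming Python's requests and the standard Graph API paging.next cursor; the placeholders and API version are illustrative:
import requests

PAGE_ID = "YOUR_PAGE_ID"               # placeholder
PAGE_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder

url = f"https://graph.facebook.com/v2.8/{PAGE_ID}/conversations"
params = {"folder": "page_done", "fields": "id,updated_time", "access_token": PAGE_TOKEN}

conversation_ids = []
while url:
    payload = requests.get(url, params=params).json()
    conversation_ids += [c["id"] for c in payload.get("data", [])]
    # paging.next, when present, is a full URL that already carries the query params
    url = payload.get("paging", {}).get("next")
    params = None

print(len(conversation_ids), "conversations in the Done folder")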
I am connecting to O365 Outlook Mail Get Messages REST API, e.g.
GET https://outlook.office365.com/api/v1.0/me/messages?$top=50&$select=Id
and I am trying to retrieve just IDs so I can determine if messages have been deleted from my inbox (e.g. by diffing against a previous ID list). I'm checking @odata.nextLink to perform a synchronous series of REST calls until complete.
I'm finding that this call has roughly the same performance as downloading the full messages (i.e. without the $select clause), around 50 IDs per second. I'd like to know if there is a more efficient/quicker way of retrieving just a list of the IDs of all messages in the Inbox. A call to retrieve a list of deleted/moved IDs from a point in time (e.g. tombstones) would also work, something like:
GET https://outlook.office365.com/api/v1.0/me/messages?$top=50&$select=Id&$filter=DateTimeTombstone gt 2014-09-01T00:00:00Z
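For reference, here is a minimal sketch of the ID-diffing approach described above, using Python's requests; the bearer token is a placeholder, the previous-snapshot handling is stubbed out, and the DateTimeTombstone filter above is only hypothetical:
import requests

HEADERS = {"Authorization": "Bearer YOUR_OAUTH_TOKEN"}  # placeholder token
url = "https://outlook.office365.com/api/v1.0/me/messages?$top=50&$select=Id"

current_ids = set()
while url:
    payload = requests.get(url, headers=HEADERS).json()
    current_ids.update(m["Id"] for m in payload.get("value", []))
    url = payload.get("@odata.nextLink")  # absent on the last page

previous_ids = set()  # in real use, load the snapshot saved by the previous run
deleted_or_moved = previous_ids - current_ids
print(len(deleted_or_moved), "messages deleted or moved since the last run")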
Thanks!
No, currently there isn't. Sync is on our radar to add though, which sounds like it might help your scenario.
I don't know about the REST API, but EWS lets you sync any Exchange folder; this way you will know which items were created/modified/deleted without loading all items in the folder - see https://msdn.microsoft.com/en-us/library/office/Ee693003(v=EXCHG.80).aspx
I'm building an application for my personal use that saves all my facebook messages in a database on my computer.
But I have a problem, as it seems only a few messages can be accessed through the Graph API.
I created a token with all the possible permissions.
When issuing a call:
/me/inbox
I get all the threads in my inbox, but for some of them the comments field, which contains the actual messages, is missing. It's mostly for conversations with people who are not friends with me on Facebook.
For those threads, when I try to get more information via /<id_of_the_thread>,
I get an error (code 100) "Unsupported get request" from the Graph API.
Is it a normal behaviour of the API?
What am I missing here?
Don't hesitate to suggest a better way of saving all my messages.
Another, somewhat inferior but much more accessible, way of obtaining your Facebook messages is to download a copy of your Facebook data through https://www.facebook.com/settings. This way you can download an archive with all your FB data, including your messages. They are, however, capped at 10,000 messages per conversation, and are all stored in one .htm file, which is not very practical if you want to do further operations on them.
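If you do go that route, here is a minimal sketch of pulling the messages out of the archive's messages.htm with BeautifulSoup; the file path and the class names ("thread", "message", "user", "meta") match older export formats and are an assumption, newer archives may use different markup entirely:
from bs4 import BeautifulSoup

with open("html/messages.htm", encoding="utf-8") as fh:
    soup = BeautifulSoup(fh, "html.parser")

for thread in soup.find_all("div", class_="thread"):
    for message in thread.find_all("div", class_="message"):
        sender = message.find("span", class_="user").get_text(strip=True)
        when = message.find("span", class_="meta").get_text(strip=True)
        # In the older format, the message body is the <p> right after the message div.
        body = message.find_next_sibling("p")
        text = body.get_text(strip=True) if body else ""
        print(sender, when, text[:80])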
No, I don't think we can specify the thread by ID; I usually sort the threads by sender instead. Correct me if I'm wrong.
I have a web site with various graphs embedded in it that are generated externally. Occasionally those graphs will fail to generate and I would like to catch that when it happens. These graphs are embedded in multiple pages and I would rather not check each page manually. Is there any kind of tool or perhaps a browser addon that could periodically take screenshots of different URLs and email them in a single email? It would be sufficient to have scaled-down screenshots of full pages emailed maybe once a day to me, allowing me to take a quick glance and see that all the graphs are there and look okay.
I'm a big fan of automation. Rather than having emails generated that you then have to look at, take a look at 'replacing custom missing images in jquery'. This runs a piece of JavaScript for each image that fails to load. Extending that to make a request to a URL you control, which could also include the broken URL (or just the filename that is broken), would not be too hard. That URL would then generate an email and store the broken URL so that it doesn't send 5000 emails if there's a flurry of hits to your page.
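For what it's worth, a minimal sketch of that server-side endpoint, assuming Flask (my choice) and a hypothetical send_alert_email helper; the page's image error handler would hit /report-broken-image?src=... for each failed image:
from flask import Flask, request

app = Flask(__name__)
already_reported = set()  # in-memory dedup; use a file or DB if you need persistence

def send_alert_email(broken_src):
    # Placeholder: email yourself about the broken graph (SMTP, an API, whatever you use).
    pass

@app.route("/report-broken-image")
def report_broken_image():
    src = request.args.get("src", "unknown")
    if src not in already_reported:  # avoid 5000 emails during a flurry of hits
        already_reported.add(src)
        send_alert_email(src)
    return "", 204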
Another idea, building on the above, is to effectively change the external 404 from the source site to a local one (e.g. /backend/missing-images/): the full path need not exist, you are just generating a local 404 record in your Apache logs. Logwatch will then email you a list of 404 pages from the Apache log daily (or more often, if you want).