Decoding Facebook's blob video URL (part II)

This is my first question here, so I could not add a comment to the original post.
I followed the solution and it worked as expected.
However, when I tried to download 2 private videos that were part of the same post, only the 1st URL produced the correct video. Inspecting the 2nd URL showed, as expected, that it is different from the 1st.
URL 1
URL 2
Although I do not think it makes a difference, I observed that the host name starts with either "sc-content" or "video".
Any thoughts?
Thank you!

The answer to the original question described how to bring up the developer window, go to the Network tab, and observe the various URLs scrolling by as the video plays. At some point you can copy one of the URLs from that window and strip out bytestart & byteend. This then constitutes the URL I mention.
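In case it helps, here is a minimal Python sketch of that last step (removing the two parameters from a copied URL); the function name is mine, not from the original answer:
# Minimal sketch: strip the bytestart & byteend query parameters from a URL
# copied out of the Network tab, leaving the rest of the URL untouched.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_byte_range(url):
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k not in ("bytestart", "byteend")]
    return urlunsplit(parts._replace(query=urlencode(query)))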

I did find the answer to my own question after reading some of the Graph API documentation on cookies. Thank you WhizKid!
If an FB post contains several videos, the steps below show how to download all of them using youtube-dl 2019.09.28:
press play on video #1
ctrl-right click and choose "Show Video URL"; select all & copy to a text editor
go back to the post
press play on video #2
ctrl-right click and choose "Show Video URL"; select all & copy to a text editor
KEY: go back to the post and reload the page
within the terminal window, enter "youtube-dl -u user -p pass URL#1" and/or the same for URL#2 (a scripted version is sketched after these steps)
NB: Occasionally you may get "Something went wrong. We are having trouble playing this video". You can reload and carry on; it will eventually play and you can get the video URL as described above.
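For reference, the terminal step above can also be scripted. A minimal sketch using the youtube_dl Python package (the same engine as the CLI); the URLs and credentials are placeholders:
import youtube_dl

# the two per-video URLs copied via "Show Video URL" (placeholders)
urls = ["URL#1", "URL#2"]

ydl_opts = {
    "username": "user",   # same credentials as "youtube-dl -u user -p pass"
    "password": "pass",
    "outtmpl": "%(title)s.%(ext)s",
}

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(urls)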

Related

Searching inside JSONs in Chrome devtools

Is there a way to search inside all JSON objects across all available responses in the Network tab? Currently it works only sporadically and isn't very reliable. Sometimes, especially with smaller responses, it's fine, but with more assets, searching for e.g. a specific parameter's value almost always fails. Do you know any smart solution to this issue? I've checked, and the first question about this is already a few years old and the Google devs still haven't responded.
Example: I have an object ID in a response body, but cannot find it with the search (Ctrl+F).
I think one way is to save all the responses to a file (manually or, if possible, automatically by using a browser extension).
After you have stored all the responses in a file, you can parse it and find things inside it with a script or just a regex.
You can save the responses manually as a HAR file (I use Firefox) by right-clicking on a network response inside the developer console panel.
I found that the same works in Chrome.
Look here:
https://developers.google.com/web/tools/chrome-devtools/network/reference
I didn't check whether there is a way to automatically store all the responses received by a browser. I'm not sure, but I think it isn't possible :/
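If you do export a HAR file, a short script can do the searching for you. A minimal sketch (the file name and search string are placeholders):
import json

with open("export.har", encoding="utf-8") as f:
    har = json.load(f)

needle = "OBJECT_ID"  # the value you could not find via Ctrl+F
for entry in har["log"]["entries"]:
    # in a HAR file, response bodies live under response.content.text
    body = entry["response"].get("content", {}).get("text") or ""
    if needle in body:
        print(entry["request"]["url"])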

How to capture network info in Chrome devtools when clicking a link pops up a new download tab that closes right away?

I'm trying to use Chrome devtools to see what the network requests are.
But for some links, a new tab is created to download a file, and once the file is downloaded the tab is closed right away.
There is no time for me to inspect which network requests are involved in the new tab. Is there a way to force the download in the original window so that I can still see the network activity?
As this answer suggests, you may want to use Chrome's net export via chrome://net-export/
How does it work?
You open a new tab and enter chrome://net-export/
Press the start logging to disk button and select a file
Do whatever
Press the stop recording button and inspect the file (it should be formatted to be readable)
How to reproduce?
<!-- Minimal repro: a button that opens a new tab whose traffic you can then find in the chrome://net-export capture -->
<script>
  function popup() {
    window.open('https://google.com', '_blank')
  }
</script>
<button onclick="popup()">
  click me
</button>
You will get WAY more information than you wished for, so be patient when going over all the traffic details, and also make your recording as targeted and short as possible.
Enjoy
EDIT
@Nathan raises a fair point in the comments - this method is not visual. A tool that may help visualize the data is the netlog viewer.
Use the link, press the choose file button and upload your JSON file
In the left menu select Events - this will display all events in a big table
Filter the table, for example by URL_REQUEST
Click each item to inspect it and get detailed information (such as URL, headers, method, etc.)
There are other handy tools there (such as the timeline), but it is different from Chrome devtools. This solution is just another set of tools for developers, that's all
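If you prefer a script to the viewer, the exported file is JSON and can be inspected directly. A hedged sketch; the field names reflect the netlog format as I understand it, and the file name is a placeholder:
import json

with open("netlog.json", encoding="utf-8") as f:
    log = json.load(f)

# each captured event may carry params; URL_REQUEST start events include the URL
for event in log.get("events", []):
    params = event.get("params") or {}
    if "url" in params:
        print(params["url"])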

How to save an image using its URL in Matlab

I have a problem saving an image from its URL. I have used the URLWRITE function, and it works well for most image URLs. However, for this one: http://www.freegreatpicture.com/files/157/1562-cute-little-cat.jpg , I cannot save the image to my disk using urlwrite(url, 'cat.jpg'). Can anyone help? Thanks!
PS. The saved image cannot be opened.
After I click the URL, the image looks like this:
The problem will be with that site and its URL forwarding (due to MVC, I guess). If you click your link, you don't get the exact image; you get a page where you need to click the download button and wait 10 or however many seconds. If you had the actual IMAGE link you would probably have no problem, but you don't here. It's not a problem with your script or with the urlwrite function; it is a "problem" (probably intentional) with that site.
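A quick way to confirm that diagnosis is to look at what the server actually returns for that URL. A minimal sketch (in Python rather than MATLAB), using the URL from the question:
import urllib.request

url = "http://www.freegreatpicture.com/files/157/1562-cute-little-cat.jpg"
with urllib.request.urlopen(url) as resp:
    # "text/html" here means you saved a web page, not a JPEG
    print(resp.headers.get("Content-Type"))
    head = resp.read(4)

# a real JPEG starts with the bytes FF D8 FF
print(head)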

How can I simply add a downloadable PDF file to my page?

I want to add PDF and Word versions of my resume to my portfolio page and make them downloadable. Does anyone have a simple script?
Add a link to the file and let the browser handle the download.
You may be over-complicating the problem. It's possible to use an href pointing to the location of the .pdf or .doc file; when a user clicks on it in their browser, they will generally be asked whether they would like to save or open the file, depending on their OS/configuration.
If this is still confusing, leave a comment and I'll explain anything you don't get.
Create the PDF. Upload it. Add a link.
Save yourself 30 minutes tossing around with PDFGEN code.
You will want to issue the Content-Disposition HTTP header to force the download; otherwise some browsers may recognize the common file extensions and try to open the file contents automatically. It will feel more professional if the link actually downloads the file instead of launching an app - important for a resume, I think.
As far as I know, Content-Disposition must be generated on the server side.
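For example, a minimal server-side sketch, assuming a Flask backend (the route and file names are placeholders):
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/resume.pdf")
def resume():
    # as_attachment=True sets "Content-Disposition: attachment", forcing a download
    return send_file("resume.pdf", as_attachment=True,
                     download_name="my-resume.pdf")  # download_name needs Flask 2.x

if __name__ == "__main__":
    app.run()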
Option:
Upload your resume to Google Docs.
Add a link to the file on your portfolio page just as I do in the menu of my blog:
Use the Google Docs Viewer, passing it the URL of the PDF, as you can see in this link.
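A hedged sketch of that last step - building a viewer link that passes the PDF's URL as a query parameter; the viewer endpoint and PDF URL here are assumptions on my part, so check them against the link in the answer:
from urllib.parse import quote

pdf_url = "https://example.com/my-resume.pdf"  # placeholder for your hosted PDF
# the docs.google.com/viewer endpoint is an assumption; verify it still works
viewer_link = "https://docs.google.com/viewer?url=" + quote(pdf_url, safe="")
print(viewer_link)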

List all videos from a URL

I am trying to list all videos from a URL. For this I am sending a request to YouTube at "http://www.youtube.com/" and want to list all available videos, but I didn't get anything back from that request. Any idea, or any documentation hint?
There are utilities for downloading youtube videos (for example Linux has youtube-dl), but it's not uncommon for sites with large numbers of downloadable files to prevent attempts to simply download everything - and even though you said you wanted to list rather than download all the videos, that's unfortunately what it would suggest to a website administrator.
Besides, files on youtube are not accessed by simple urls like http://www.youtube.com/filename
Something more is required. I don't think you can treat the (what is it?) 11-character alphabet soup as a filename; it's a parameter passed to the software that streams back the video.
EDIT: youtube-dl is a command-line program on Linux and probably BSD. You need to know the URL of the YouTube video, so you can type (for example)
youtube-dl http://www.youtube.com/watch?v=Z1JZ9O15280
If you had a list of these URLs, you could put them in a file and make a bulk download script (sketched below) - but that takes us back to your original question.
In Firefox I would right-click on a link to a YouTube video and choose 'Copy Link Location', then paste the URLs one at a time into a text file. But this question is drifting away from mere programming...
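A minimal sketch of the bulk-download script mentioned above, assuming a urls.txt file built as just described (again via the youtube_dl Python package):
import youtube_dl

# one YouTube URL per line, collected by copying link locations
with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

with youtube_dl.YoutubeDL({"outtmpl": "%(title)s.%(ext)s"}) as ydl:
    ydl.download(urls)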