List out all videos from a URL - iPhone

I am trying to list out all videos from a URL. For this I am sending a request to the YouTube URL "http://www.youtube.com/" and want to list all the available videos. But I don't get anything back from that request. Any idea, or any documentation hint?

There are utilities for downloading YouTube videos (for example, Linux has youtube-dl), but it's not uncommon for sites with large numbers of downloadable files to prevent attempts to simply download everything - and even though you said you wanted to list rather than download all the videos, that's unfortunately what such a request would suggest to a website administrator.
Besides, files on YouTube are not accessed by simple URLs like http://www.youtube.com/filename
Something more is required. I don't think you can treat the (what is it?) 11-character alphabet soup as a filename; it's a parameter passed to the software which streams back the video.
EDIT: youtube-dl is a command-line program in Linux and probably BSD. You need to know the URL of the YouTube video so you can type (for example)
youtube-dl http://www.youtube.com/watch?v=Z1JZ9O15280
If you had a list of these URLs you could put them in a file and make a bulk download script - but that takes us back to your original question.
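If you did have such a list, a minimal Python sketch of that bulk script might look like this (urls.txt and the loop are assumptions, not part of the original question; youtube-dl must be installed and on your PATH):

import subprocess

# Assumes urls.txt holds one YouTube watch URL per line.
with open("urls.txt") as f:
    for line in f:
        url = line.strip()
        if url:
            # Invoke the youtube-dl command-line tool once per URL.
            subprocess.run(["youtube-dl", url], check=True)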
In Firefox I would right-click on a link to a Youtube video and choose 'copy link location'. Then paste the URLs one at a time into a text file. But this question is drifting away from mere programming...
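For listing videos programmatically rather than scraping, the supported route is the YouTube Data API. A minimal sketch, assuming you have registered for a Data API v3 key (the key, the query, and the result handling are illustrative, not from the original question; note there is no endpoint that lists all videos on the site, only search results):

import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # assumption: a YouTube Data API v3 key

params = urllib.parse.urlencode({
    "part": "snippet",
    "q": "cats",        # search term; you can only list matches, not everything
    "type": "video",
    "maxResults": 25,
    "key": API_KEY,
})
url = "https://www.googleapis.com/youtube/v3/search?" + params

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Each search result carries the 11-character video ID mentioned above.
for item in data["items"]:
    print(item["id"]["videoId"], item["snippet"]["title"])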

Related

Searching inside JSONs in Chrome devtools

Is there a way to search inside all JSON objects from all available responses in the Network tab? Currently it works, but very erratically and not very reliably. Sometimes, especially with smaller responses, it's OK, but when you have more assets, looking for e.g. a specific parameter's value almost always ends unsuccessfully. Do you know any smart solution to this issue? I've checked, and the first question associated with it is already a few years old and the Google devs still haven't responded.
Example: I have an object ID in a response body, but cannot find it by searching with Ctrl+F.
I think one way is to save all the responses to a file (manually, or automatically if possible, e.g. with a browser extension).
After you have stored all the responses in a file, you can parse it and find things inside it using a script or just a regex.
You can save the responses (as a HAR file) manually (I use Firefox) by right-clicking on a network response inside the developer console panel.
I found that it's the same for Chrome.
Look here:
https://developers.google.com/web/tools/chrome-devtools/network/reference
I didn't check whether there is a way to automatically store all the responses received by a browser. I'm not sure, but I think it isn't possible :/
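For example, a short Python sketch that searches a saved HAR export for a value (the file name and the search string are placeholders; the log.entries[].response.content.text layout is part of the HAR format itself):

import json

HAR_PATH = "network-export.har"  # placeholder: the HAR file saved from devtools
NEEDLE = "objectId"              # placeholder: the value you are looking for

with open(HAR_PATH, encoding="utf-8") as f:
    har = json.load(f)

# Each HAR entry carries its response body (when captured)
# under response.content.text.
for entry in har["log"]["entries"]:
    body = entry.get("response", {}).get("content", {}).get("text") or ""
    if NEEDLE in body:
        print(entry["request"]["url"])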

wget behaves differently with different addresses

I have these two urls:
https://cdn.pixabay.com/photo/2017/06/24/09/13/dog-2437110_960_720.jpg
and
http://www.deutschland-machts-effizient.de/SiteGlobals/KAENEF/StyleBundles/Bilder/sublogo.png;jsessionid=DF603F2801D8F686FD4BCFAD770C3FC9?__blob=normal&v=3
Trying to access the pictures with wget works for the first one, but not for the second. Of course, the first more closely resembles a picture (its URL ends in .jpg), but every browser I tested displayed both as pictures I could download.
For the second, instead of a picture I download a 2000-line HTML file, which contains several img tags. I guess I could try any of those URLs, but I want to automate this for the general case, so that doesn't really help me.
What is the inherent difference between the two pictures in the way they are stored on their respective servers?
How can I download the second picture using wget?
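One thing worth ruling out first: in a shell, the unquoted ; and & in the second URL end and background the command, so wget never sees the full address unless you quote it. If you want to automate the general case without worrying about shell quoting, a Python sketch along these lines might help (the browser-like User-Agent is a guess at what a picky server might want, not something the question establishes):

import urllib.request

url = ("http://www.deutschland-machts-effizient.de/SiteGlobals/KAENEF/"
       "StyleBundles/Bilder/sublogo.png;jsessionid="
       "DF603F2801D8F686FD4BCFAD770C3FC9?__blob=normal&v=3")

# Assumption: some servers answer differently without a browser-like User-Agent.
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})

with urllib.request.urlopen(req) as resp:
    content_type = resp.headers.get("Content-Type", "")
    data = resp.read()

# The Content-Type tells you whether you got an image or an HTML page back.
if content_type.startswith("image/"):
    with open("sublogo.png", "wb") as f:
        f.write(data)
else:
    print("Got", content_type, "instead of an image")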

Powershell method of downloading file from a website with a changing URL?

I have been given a task that involves downloading a single file every day from a website. Let's call it "https://test.example.com". I have credentials that allow me to log in to the site, where a Flash interface then presents the files that are available for download. After the file is downloaded, it is processed in a variety of ways. I have already put together the PowerShell that handles all of that; I am just having a hard time automating the actual download of the file.
I used the Flash interface to download a few files while watching the network activity, and found that it is actually pulling the file from this URL:
https://test.example.com/link/EBDB7F67EF3B28XX99NCAD9920160423/file.zip
Therefore, I was able to put this together in order to automatically get the file via my PS script:
$url = 'https://test.example.com/link/EBDB7F67EF3B28XX99NCAD9920160423/file.zip'
$output = "C:\Downloads\file.zip"
Invoke-WebRequest -Uri $url -OutFile $output
However, the long string of numbers in the URL changes every day. The only discernible pattern I can find is that the last eight digits are always the date on which that particular file is posted.
Is there a good way to approach this? I've been experimenting with wildcards and patterns, as well as checking the HTML for elements that I can filter, but I am having a hard time finding the correct solution.
This is very hard to automate. You can't drive Flash from a script unless it is specifically designed for that. As I see it, your only options are:
Contact the site devs if possible; maybe they can give you details on the function that generates the link. This gives me an idea - perhaps you can reverse engineer the Flash code to find those details yourself. Use a Flash decompiler for this.
Simulate the user browsing the Flash site. This can be done in one of the following ways:
AutoHotkey - you can record mouse clicks relative to the browser window and replay the script. Unless the Flash interface is too dynamic and unpredictable, it will work.
Sikuli - another automation tool, which relies on recognizing segments of the screen image.
Both simulation methods produce fragile automation code, as they depend on browser settings (zoom, theme) and even OS settings. For this reason you will in all probability need to dedicate one machine to it (a virtual machine, of course). Decompiling the Flash code and re-implementing the URL-generating code in PowerShell will make it 100% reliable.
As somebody said in the comments, this is not a PowerShell question but a browser-automation question.
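If you can capture the day's traffic once (for example as a HAR export from the browser's network panel, as in the devtools question above), a Python sketch can pick out the link whose trailing eight digits are today's date. The /link/.../file.zip shape comes from the question; the capture file and everything else here are assumptions:

import datetime
import json
import re

HAR_PATH = "capture.har"  # assumption: a HAR export saved while the Flash UI ran

# The question reports links of the form .../link/<token><YYYYMMDD>/file.zip,
# where only the trailing eight digits (the posting date) are predictable.
today = datetime.date.today().strftime("%Y%m%d")
pattern = re.compile(r"https://test\.example\.com/link/\w+" + today + r"/file\.zip")

with open(HAR_PATH, encoding="utf-8") as f:
    har = json.load(f)

for entry in har["log"]["entries"]:
    url = entry["request"]["url"]
    if pattern.fullmatch(url):
        print(url)  # the day's download link, ready to hand to Invoke-WebRequest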

How can I simply add a downloadable PDF file to my page?

I want to add PDF and Word versions of my resume to my portfolio page and make them downloadable. Does anyone have a simple script?
Add a link to the file and let the browser handle the download.
You may be over-complicating the problem. It's possible to use an href pointing to the location of the .pdf or .doc file; when a user clicks on it in their browser, they will generally be asked whether they would like to save or open the file, depending on their OS/configuration.
If this is still confusing, leave a comment and I'll explain anything you don't get.
Create the PDF. Upload it. Add a link.
Save yourself 30 minutes of tossing around with PDF-generation code.
You will want to send the Content-Disposition HTTP header to force the download; otherwise some browsers may recognize the common file extensions and try to open the file contents automatically. It will feel more professional if the link actually downloads the file instead of launching an app - important for a resume, I think.
As far as I know, the Content-Disposition header must be generated server-side.
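As an illustration only (the question never mentions a server framework), a minimal Python/Flask sketch that serves the file with that header:

from flask import Flask, Response

app = Flask(__name__)

@app.route("/resume.pdf")
def resume():
    with open("resume.pdf", "rb") as f:
        data = f.read()
    # Content-Disposition: attachment forces a save/download prompt
    # instead of letting the browser render the PDF inline.
    return Response(
        data,
        mimetype="application/pdf",
        headers={"Content-Disposition": 'attachment; filename="resume.pdf"'},
    )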
Option:
Upload your resume to Google Docs.
Add a link to the file on your portfolio page, just as I do in the menu of my blog.
Use the Google Docs Viewer, passing it the URL of the PDF, as you can see in this link.

Streaming and playing an MP3 stream. .mp3 URL format

I used the sample code from http://cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html. It runs OK with the default URL. But when I replace it with my URL "http://dl.mp3.kapsule.info/fsfsdfdsfdserwrwq3/fc90613208cc3f16ae6d6ba05d21880c/4b5244f0/b/7e/b7e80afa18d06fdd3dd9f9fa44b51fc0.mp3?filename=Every-Day-I-Love-You.mp3", the app shows the message "Audio not Found". Yet when I put my URL into the address bar of a web browser, I can download the .mp3 file.
I really can't understand why this is.
Please tell me!
Thank you very much.
My guess would be that the app is designed to play an MP3-encoded audio stream with no limit on its length (which is different from your ordinary music file). To serve that up, you need a streaming server on the other end.
I think you can find out for sure by trying with a different radio station that transmits in MP3. If that works, it's most likely that your app doesn't like your file.
You should, as Vivek recommends, also try using a simpler download URL for your file, in case the App gets confused by the URL's length and/or structure.
As mentioned, this is due to the URL of the file. The AudioStreamer code specifically checks the file's extension and tries to figure out the audio type based on that. If you change that logic to handle your custom URLs, it will start working.
So, to point you in the right direction: open AudioStreamer.m and look for the references to
hintForFileExtension:
This method returns the type of the file based on its extension. If you know the file type won't change (always MP3), the quick and dirty solution is to always assign the MP3 type without any logic, like this:
err = AudioFileStreamOpen(self, MyPropertyListenerProc, MyPacketsProc, kAudioFileMP3Type, &audioFileStream);
Note: I've put the kAudioFileMP3Type constant in place of the calculated value.
PS: yes, it does work with static MP3 files, even though it's designed for streams and hence misses some of the functionality one would expect from a player of static server-side files (caching, prefetching, proper seeking).
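The pitfall generalizes beyond this player: guess the type from the URL's path, not from the whole string, so query parameters never reach the extension check. A language-neutral Python sketch of the idea (the mapping and names are illustrative, not AudioStreamer's actual code):

from urllib.parse import urlparse

# Illustrative mapping in the spirit of hintForFileExtension:.
HINTS = {"mp3": "kAudioFileMP3Type", "aac": "kAudioFileAAC_ADTSType"}

def hint_for_url(url):
    # Parse first, so that something like ?filename=song.mp3
    # cannot confuse the extension check.
    path = urlparse(url).path
    ext = path.rsplit(".", 1)[-1].lower() if "." in path else ""
    return HINTS.get(ext)

print(hint_for_url("http://example.com/b7e80afa.mp3?filename=Every-Day-I-Love-You.mp3"))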
That's because the default URL points directly to a file on the web server, whereas the URL you've mentioned is an HTTP (POST/GET) operation with parameters, which the application may not be designed to handle.
I suspect that your URL is one-time-use. When I try to visit it, I see 408 - Request Timeout.
Many links on mass file sharing websites are like this. If you could download the file directly, you wouldn't sit through a page of ads and premium account offers.
Try again with a file on a normal website, like this one.