Does anyone know if it is possible to save media from the internet on the local device using Sencha Touch? From what I've seen so far, I understand it's definitely possible to save XML or JSON data locally on the device, but I have had no luck finding ways to store media locally.
To be more specific, I am looking to program an app that provides the user with a series of audio seminars - like podcasts, really. The user would be able to stream those audio files directly from the internet, but I also need to provide the user with the ability to save an episode/seminar for later. This will be important for when a user is traveling and does not have a reliable internet connection or data plan.
The primary delivery platform would be iOS (iPhone, iPad, iPod Touch), and I would hope to reuse the same technology on Android devices, but that would be a secondary phase.
If this is possible, how would I go about saving material? And what, if any, would be the limitations on doing so? Any thoughts on this would be greatly appreciated.
I have done something similar using Sencha Touch 1 and PhoneGap to produce a hybrid app.
Basically, I use Sencha Touch to download the JSON, etc., and LocalStorage to hold the data. Downloading media/files to the actual device is not supported in Sencha Touch, as the framework doesn't have access to the file system.
I then use PhoneGap's APIs to tap into the device's native file system, download files into the app's Documents directory, and pass the file names/paths back to Sencha Touch for use in the app.
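Roughly, the download step looks like this using PhoneGap's File and FileTransfer APIs (a minimal sketch; the URL and file name are placeholders, and the exact plugin setup depends on your PhoneGap/Cordova version):

    // Minimal sketch: download an episode into the app's persistent storage
    // and hand the resulting local path back to the Sencha side.
    var remoteUrl = encodeURI("http://example.com/episodes/seminar-01.mp3"); // placeholder URL

    window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, function (fs) {
        var localPath = fs.root.toURL() + "seminar-01.mp3";
        var transfer = new FileTransfer();

        transfer.download(remoteUrl, localPath,
            function (entry) {
                // Store entry.toURL() in a Sencha model / LocalStorage record
                // so the player can use the offline copy later.
                console.log("Saved to " + entry.toURL());
            },
            function (error) {
                console.log("Download failed, code: " + error.code);
            });
    }, function (error) {
        console.log("Could not open file system, code: " + error.code);
    });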
Based on your question, I'm assuming you are looking to create a hybrid app, but if this is strictly a web app then there isn't much you can do.
To add to the above point, you could possibly base64-encode the file and store it within LocalStorage, but this isn't a sustainable model as LocalStorage only gives you 5 MB of space. If you go over 5 MB, the user is prompted (yes/no) to allow LocalStorage to use more space (in 5 MB increments). Since the files you reference could easily be 5 MB each, you can see how this would quickly become unmanageable for both you and the user.
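For illustration only (the URL and storage key are made up), the base64-into-LocalStorage approach would look roughly like this, and the try/catch is exactly where it falls over once the quota is hit:

    function saveAudioToLocalStorage(url, key) {
        return fetch(url)
            .then(function (response) { return response.blob(); })
            .then(function (blob) {
                return new Promise(function (resolve, reject) {
                    var reader = new FileReader();
                    reader.onload = function () {
                        try {
                            // reader.result is a base64 data URL of the whole file
                            localStorage.setItem(key, reader.result);
                            resolve(key);
                        } catch (e) {
                            // QuotaExceededError: a single episode can blow the ~5 MB budget
                            reject(e);
                        }
                    };
                    reader.onerror = reject;
                    reader.readAsDataURL(blob);
                });
            });
    }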
EDIT:
See http://phonegap.com/ for the native wrapper,
http://blog.clearlyinnovative.com/post/2056122828/phonegap-plugin-for-downloading-url-all-the-code for the PhoneGap download plugin,
and https://github.com/aaronksaunders/FileDownLoadApp for the code.
Check this website out and scroll down to "storing data offline". They discuss how Sencha Touch provides a set of data store and proxy classes that make it very easy to work with data coming from (and going to) a variety of sources, both server- and client-side. Hope this helped, cheers.
I'm fairly new to Flutter and am currently working on a course app that requires downloading videos to the app.
The downloaded video should only be accessible through the app, just like YouTube and Netflix, and should be hidden/encrypted from the gallery. I would greatly appreciate it if someone could point me in the right direction for building this feature.
On iOS and Android your app has its own isolated folder for storing documents. Items stored there are not intended to be accessible to the user outside of your app: the folder isn't scanned by the gallery or accessible to other apps on the device. (However, with a little effort a user can still access the files, so this is not a complete solution where security is a concern. You would need to add encryption if you want to stop a motivated user from copying the video file to a PC and playing it there.)
The path_provider plugin gives your Flutter app the common file locations on a device. The private app folder location is retrieved with getApplicationDocumentsDirectory().
"Download video" is a vague requirement. Most video on the internet (Netflix, Youtube) is provided via HLS or DASH for streaming, which you do download but the video is split up into many files- sometimes thousands of files for a single video. The dart:http package is likely what you're going to want to use to get/download the files (unless the video files aren't available via HTTP/HTTPS, then you'll need a different transport-specific library, like FTP, RTSP, etc.)
I ran into this problem in Safari where it appears that WebRTC is not fully supported. So when I call
navigator.webkitGetUserMedia()
I get an undefined error.
So my question to the community is: what is the best way to write a Meteor app that captures video on a mobile device and saves it on that device?
If you have done this, I would appreciate it very much if you could share with me and the community how you went about this.
Specific Answer
The modern API is: navigator.mediaDevices.getUserMedia(constraints). See the docs here.
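A minimal sketch of the modern capture flow (the element ID, MIME type, and constraints here are assumptions, and you should feature-detect MediaRecorder since older Safari versions lack it):

    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
        .then(function (stream) {
            // Show a live preview in a <video id="preview"> element
            document.querySelector('#preview').srcObject = stream;

            // Record the stream into a Blob you can upload or offer for download
            var recorder = new MediaRecorder(stream);
            var chunks = [];
            recorder.ondataavailable = function (e) { chunks.push(e.data); };
            recorder.onstop = function () {
                var blob = new Blob(chunks, { type: 'video/webm' });
                // e.g. POST the blob to your Meteor server, or save it client-side
            };
            recorder.start();
            setTimeout(function () { recorder.stop(); }, 10000); // stop after 10 s
        })
        .catch(function (err) {
            console.error('getUserMedia failed:', err);
        });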
In the past, I've been unsuccessful with getUserMedia on iOS, but according to this post it can be done on iOS 11.
As for saving it, you can write to the browser's file system, but that API is only supported in Chrome. If you want to write to the camera roll, you'd need native code in the mix.
General Advice
I've spent several years of my life dealing with recording, uploading, and processing video using Meteor. If you are doing anything more than trivial web recording, these observations may save you some time:
Chrome (on everything but iOS) has the best API for web recording. If you can require Chrome for recording, that's ideal. Firefox is a close second, only because it doesn't support the file system API.
If you need to record and upload long videos on iOS, build a native app. Don't consider any kind of hybrid - that's a serious trap. The number of corner cases and things you need to check is pretty astounding, and the only way to get over those hurdles is with native code.
I'm currently developing an iOS app with a lot of audio. I want to release versions in different languages, but this means bundling all the audio for every language, which would take up too much space.
Is there a way to release different versions of the app to the different app stores, including only the relevant audio? Can this be done under one app name/ID?
Thanks!
I believe the short answer is 'no', but there are workarounds.
Each app ID has one binary associated with it, although you can change the app name via localisation.
Audio is just a resource, so there is no reason not to download it separately from a web server under your control; I don't believe there is any restriction on downloading media.
Alternatively, you can store the audio on Apple's servers and offer it as downloadable content attached to a free in-app purchase.
I think you should use web services to fetch the audio files from a server, requesting only the relevant files, so that the app only ever contains the data it actually needs.
I want to build a Facebook app featuring a personalized video which imports content assets from the user's Facebook profile and their extended social graph and integrates these assets within the timeline. I am thinking of using Flash; however, a key stipulation is that the app works on mobile, so I would need to use HTML5. My question is: can I use Flash to build the application and then compile the app as HTML5, or is there an alternative solution in the form of an HTML5 video toolkit with a programming layer that would allow me to build a web app and access the Facebook API?
I have done this a few times over the years, and yes, Flash was the easiest. However, there are a few purely HTML5-based options available that I know of. Personally I'd stay away from Flash here, as it will just end up getting in the way:
1- The cleanest method is to use a video compositing tool on the server side which can be programmed to accept variables. Personally I have only ever done this using ffmpeg, however there are a couple of alternatives out there.
The basic process would be to grab the media from Facebook, then composite it at certain points on top of/below/around a base video sitting on the server, using a shell script that you pass the media assets to as variables. There are so many options for how you might want this done; it's probably best to have a look at some of these examples (there's also a rough sketch after these links and notes):
http://broadcasterproject.wordpress.com/2010/05/18/how-to-layerremix-videos-with-free-command-line-tools/
http://graphcomp.com/ffmpeg/
ffmpeg watermark without vhook?
Note that the last time I did this I used vhooks and custom filters; vhooks have since been deprecated.
This method will mean a reasonably heavy server load if your app is popular, but it's probably the most robust approach across devices.
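As a rough server-side sketch (shown here from Node.js rather than a raw shell script; ffmpeg must be installed, and all file names, coordinates, and timings are placeholders), overlaying a fetched profile image onto a base video could look like this:

    var execFile = require('child_process').execFile;

    function compositeOverlay(baseVideo, overlayImage, output, callback) {
        // Overlay the image at x=20, y=20, visible from second 3 to second 8
        var filter = "overlay=20:20:enable='between(t,3,8)'";
        execFile('ffmpeg', [
            '-y',
            '-i', baseVideo,
            '-i', overlayImage,
            '-filter_complex', filter,
            '-codec:a', 'copy',
            output
        ], function (err) {
            callback(err, output);
        });
    }

    compositeOverlay('base.mp4', 'profile.png', 'personalised.mp4', function (err, file) {
        if (err) { console.error('ffmpeg failed', err); return; }
        console.log('Rendered ' + file);
    });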
2- Use Popcorn.js and let the processing be done on the client side. You could hard-code it using CSS/JS/HTML, but Popcorn is pretty stable. I haven't seen how it runs on devices, but in theory it should work (it's all standardized technologies). Basically, the process would be to use JavaScript to fire the display of images overlaid on the base video file at preset cue points; Popcorn already has all of the methods and means for you to do this.
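A minimal Popcorn.js sketch of that idea (the element IDs, times, and image URL are placeholders; in the real app the URL would come from the Graph API):

    var pop = Popcorn('#baseVideo');
    var profilePhotoUrl = 'https://example.com/placeholder-profile.jpg'; // placeholder

    // Show the user's photo on top of the video at 5 seconds...
    pop.cue(5, function () {
        var img = document.createElement('img');
        img.id = 'fb-overlay';
        img.src = profilePhotoUrl;
        document.getElementById('overlayContainer').appendChild(img);
    });

    // ...and take it down again at 10 seconds.
    pop.cue(10, function () {
        var img = document.getElementById('fb-overlay');
        if (img) { img.parentNode.removeChild(img); }
    });

    pop.play();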
Hope this helps a bit. Good luck, sounds fun.
We have built several interactive video apps, and one recent project was quite like what your question describes.
We used Adobe Flash to track the motion and published the project via CreateJS. You could have an image sequence from within CreateJS, or put a video in a layer behind it. This video would then control the playhead time of the CreateJS motion-tracked sequence via jQuery.
It worked fine. Here is a link to a test setup with an image sequence.
Video integration would be the next step.
http://www.jungeroemer.net/projekte/testpersvid/elftest01.html
(The text is in German, sorry, but there's nothing important to read there. Just click the images and go for it.)
You can download the sources from the link; if you need it, I can also upload the Flash file to show you the motion tracking.
I want to make a small app that displays a PDF, presenting zoomable single pages with a previous/next page function.
The Core Graphics API is pretty much the same in Cocoa and Cocoa Touch. Read up on CGPDFDocument; it should provide you with everything you need to render PDF pages. You won't need to read the PDF spec or use a library to parse PDF files directly. You will probably need to learn more about Core Graphics / Quartz 2D to understand how to use those functions inside a Cocoa app.
Based on Apple's gradually evolving policy of rejecting application submissions that duplicate functionality already on the iPhone, I would worry about spending too much time, even as a newbie, on something that is part of the core iPhone feature set.
This is pretty trivial. The CGPDFDocument functions will allow you to do anything you'd want to do with a PDF file.
The iPhone and iPod Touch can view PDFs already; one of the TV adverts in the UK shows an email with a .pdf attachment (of swimming lessons) being viewed. They can also view .doc, .xls, and so on, so if you are creating a viewer-type application, supporting those as well could be a nice feature addition later on.
This means there is a PDF framework on these devices that you will need to access; presumably Apple can provide support here if you are a paid-up developer. Syncing the PDFs to the device is the real difficulty, as this isn't supported by iTunes. I assume you would need to write a network-based synchronisation tool, or have an online cloud for holding people's PDFs.
The device doesn't support Flash, so using PDF to Flash conversion tools will not work.
I found this HTML5 framework that should work on an iPad, http://bakerframework.com/, but I haven't tested it yet.