I know that, generally, it's no problem to overlay HTML (and even do advanced compositing operations) on top of HTML5 native video. I've seen cool tricks like keying out green screens in realtime, in the browser, for example.
What I haven't seen yet, though, is something that tracks in-video content, perhaps at the pixel level, and adjusts the composited overlay accordingly. Motion tracking, basically. A good example would be an augmented reality sort of app (though for simplicity's sake, let's say augmenting an overlay over on-demand video rather than live video).
Has anyone seen any projects like this, or even better, any frameworks for HTML5 video overlaying (other than transport controls)?
If you use the canvas tag to capture frames of the video, you can get at the pixel-level information. From there, I think you can do the motion detection yourself. HTML5's part of the job is probably just grabbing the pixel data; detecting the things you need is up to your own code.
I didn't find any such frameworks for the HTML5 video tag, as there is no common video format supported by all browsers.
Note that drawing video frames into a canvas is not supported on iOS.
Wirewax built an open API for that. It works quite well, even on iPhone.
http://www.wirewax.com/
I am building an ARKit app where we want to be able to take a photo of the scene. I am finding the image quality of the ARCamera view is not good enough to take photos with on an iPad Pro.
[Screenshots comparing the standard camera image and the ARCamera image]
I have seen an Apple forum post that mentions this could be iPad Pro 10.5 specific and is related to fixed lens position (https://forums.developer.apple.com/message/262950#262950).
Is there a public way to change the setting?
Alternatively, I have tried using AVCaptureSession to take a normal photo and apply it to sceneView.scene.background.contents, swapping the blurred image for a higher-res one at the moment the photo is taken, but I can't get AVCapturePhotoOutput to work alongside ARKit.
Update: Congrats to whoever filed feature requests! In iOS 11.3 (aka "ARKit 1.5"), you can control at least some of the capture settings. And you now get 1080p with autofocus enabled by default.
Check ARWorldTrackingConfiguration.supportedVideoFormats for a list of ARConfiguration.VideoFormat objects, each of which defines a resolution and frame rate. The first in the list is the default (and best) option supported on your current device, so if you just want the best resolution/framerate available you don't have to do anything. (And if you want to step down for performance reasons by setting videoFormat, it's probably better to do that based on array order rather than hardcoding sizes.)
Autofocus is on by default in iOS 11.3, so your example picture (with a subject relatively close to the camera) should come out much better. If for some reason you need to turn it off, there's a switch for that.
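Putting the two together, a minimal sketch might look like this (sceneView here is just a placeholder for your ARSCNView, and stepping down a format is optional):

import ARKit

let configuration = ARWorldTrackingConfiguration()

// The first supported format is the default (and best) one for this device;
// stepping down by array position trades image quality for performance.
if let fallbackFormat = ARWorldTrackingConfiguration.supportedVideoFormats.dropFirst().first {
    configuration.videoFormat = fallbackFormat
}

// On by default in iOS 11.3; disable it only if you really need a fixed focus.
configuration.isAutoFocusEnabled = false

sceneView.session.run(configuration)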
There's still no API for changing the camera settings for the underlying capture session used by ARKit.
According to engineers back at WWDC, ARKit uses a limited subset of camera capture capabilities to ensure a high frame rate with minimal impact on CPU and GPU usage. There's some processing overhead to producing higher quality live video, but there's also some processing overhead to the computer vision and motion sensor integration systems that make ARKit work — increase the overhead too much, and you start adding latency. And for a technology that's supposed to show users a "live" augmented view of their world, you don't want the "augmented" part to lag camera motion by multiple frames. (Plus, on top of all that, you probably want some CPU/GPU time left over for your app to render spiffy 3D content on top of the camera view.)
The situation is the same between iPhone and iPad devices, but you notice it more on the iPad just because the screen is so much larger — 720p video doesn't look so bad on a 4-5" screen, but it looks awful stretched to fill a 10-13" screen. (Luckily you get 1080p by default in iOS 11.3, which should look better.)
The AVCapture system does provide for taking higher resolution / higher quality still photos during video capture, but ARKit doesn't expose its internal capture session in any way, so you can't use AVCapturePhotoOutput with it. (Capturing high resolution stills during a session probably remains a good feature request.)
config.videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats[1]
It took me a while to figure out how to set the config, so maybe this will help somebody.
This picks the format with the highest resolution; you can change the sort so that it picks by highest frame rate instead, as in the variant after the snippet.
// Sort the supported formats by pixel count (ascending) and take the largest.
if let videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats
    .sorted(by: { ($0.imageResolution.width * $0.imageResolution.height) < ($1.imageResolution.width * $1.imageResolution.height) })
    .last {
    configuration.videoFormat = videoFormat
}
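A variant of the same idea that prefers frame rate over resolution could look like this (configuration is the same ARWorldTrackingConfiguration as above):

if let fastestFormat = ARWorldTrackingConfiguration.supportedVideoFormats
    .max(by: { $0.framesPerSecond < $1.framesPerSecond }) {
    configuration.videoFormat = fastestFormat
}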
AVFoundation allows a developer to create a custom Video Player instead of using Apple's own fullscreen MPMoviePlayer.
In my project, I'm using this approach to achieve something close to what the YouTube iOS app does, which is to have a "canvas" view up on top where playback takes place, along with several controls and text labels at the bottom.
As you know, a YouTube URL "points" to HTML data, and it is meant to be used within a UIWebView. Tapping on the thumbnail inside this UIWebView brings up a fullscreen Player, which is what I'm trying to avoid.
I would like to know if the YouTube API provides a URL pointing to the actual video (mp4 file), so that I may use it along AVFoundation.
I want to find a legal way to do this, so it should definitely comply with both Apple's and Google's terms of service.
Thanks,
The easiest way to achieve this is to add an overlay UIView, which you can place over any part of the screen. Put it on top of the UIWebView and set it not to respond to user interaction. Since you can change its size and placement and make it transparent, you can effectively choose which user interactions get through and which don't, and you won't run afoul of YouTube's ToS in any way.
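A minimal sketch of that, assuming webView is your UIWebView and you are inside a view controller (the names are illustrative):

let overlay = UIView(frame: webView.frame)   // size and position it over whatever region you want to cover
overlay.backgroundColor = .clear
overlay.isUserInteractionEnabled = false     // false lets touches pass through; set true to swallow them instead
view.addSubview(overlay)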
http://www.youtube.com/get_video_info?video_id= with the video id appended contains the mp4 URL; it may require some regex work (found here: https://github.com/hellozimi/HCYoutubeParser/blob/master/YoutubeParser/Classes/HCYoutubeParser.m)
I'm somewhat hesitant to ask this question because it is only borderline software development, but I've had zero luck finding what I need anywhere else.
I am building a landing page for an iPhone app and would like to have a video of the app in use that plays inside an image frame of an actual iPhone device (Apple does this on many of their marketing pages). I don't care about the video being played on anything pre-HTML 5, it can be as simple as an mp4.
Are there any tools out there that will easily generate this sort of functionality for me? Given how often I see this sort of thing, I'm surprised that I am having so much trouble finding something that just gives it to me out of the box.
Alternatively, I am not opposed to coding something myself, so if that's the only option, a suitable answer to this question would be a pointer to some info on the best way to generically overlay an HTML 5 video on top of an image (the iPhone frame).
The reason you don't find anything is probably because it's so simple. Put a <video> tag inside a <div> tag, set the iPhone frame as the div's background-image, add some padding so the video is centered within the frame, and voila.
In my application, when the user opens the camera, it should capture an image as soon as there is a difference compared to the previous image, and the camera should always stay in capturing mode.
This should happen automatically, without any user interaction. Please help me out, as I couldn't find a solution.
Thanks,
ravi
I don't think the iPhone camera can do what you want.
It sounds like you're doing a type of motion detection by comparing two snapshots taken at different times and seeing if something has changed between the older and the newer image. To do that, you need a way to take photos without user interaction and enough processing power to compare successive images quickly.
The camera is not set up to take photos automatically, and I don't think the hardware can support the level of processing needed to compare two images in enough detail to detect motion.
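For what it's worth, the comparison itself isn't much code; the cost is running it fast enough. A rough sketch (in Swift, purely illustrative, assuming you already have some way to obtain the two CGImage snapshots):

import CoreGraphics

// Mean absolute per-byte difference between two same-size images,
// as a stand-in for "has something in the scene changed?".
func meanDifference(between a: CGImage, and b: CGImage) -> Double? {
    guard a.width == b.width, a.height == b.height else { return nil }

    // Render an image into a plain RGBA byte buffer.
    func rgbaBytes(of image: CGImage) -> [UInt8]? {
        let width = image.width, height = image.height
        let bytesPerRow = 4 * width
        guard let context = CGContext(data: nil, width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        guard let data = context.data else { return nil }
        let pointer = data.bindMemory(to: UInt8.self, capacity: bytesPerRow * height)
        return Array(UnsafeBufferPointer(start: pointer, count: bytesPerRow * height))
    }

    guard let bytesA = rgbaBytes(of: a), let bytesB = rgbaBytes(of: b) else { return nil }
    let totalDifference = zip(bytesA, bytesB).reduce(0) { $0 + abs(Int($1.0) - Int($1.1)) }
    return Double(totalDifference) / Double(bytesA.count)  // 0...255; treat values above some threshold as motion
}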
Hmmmm, in thinking about it, you might be able to detect motion by somehow measuring the frame differentials in the compression of video. All the video codecs save space by only registering the parts of the video that change from frame-to-frame. So, a large change in the saved data would indicate a large change in the environment.
I have no idea how to go about doing that but it might give you a starting point.
You could try using OpenCV for motion detection based on differences between captured frames, but I'm not sure if the iPhone API allows reading multiple frames from the camera.
Look for motempl.c in the OpenCV distribution.
You can do a screenshot to automatically capture the image, using the UIGetScreenImage function.
I am interested in doing some image-hacking apps. To get a better sense of expected performance can someone give me some idea of the overhead of touching each pixel at fullscreen resolution?
Typical use case: The user pulls a photo out of the Photo Album, selects a visual effect and - unlike a Photoshop filter - gestural manipulation of the device drives the effect in realtime.
I'm just looking for ballpark performance numbers here. Obviously, the more compute intensive my effect, the more lag I can expect.
Cheers,
Doug
You will need to know OpenGL well to do this. The iPhone OpenGL ES hardware has a distinct advantage over many desktop systems in that there is only one place for memory - so textures don't really need to be 'uploaded to the card'. There are ways to access the memory of a texture pretty well directly.
The 3GS has a much faster OpenGL stack than the 3G, you will need to try it on the 3GS/equivalent touch.
Also compile and run the GLImageProcessing example code.
One thing that will make a big difference is if you're going to do this at device resolution or at the resolution of the photo itself. Typically, photos transferred from iTunes are scaled to 640x480 (twice the number of pixels of the 480x320 screen). Pictures from the camera roll will be larger than that - up to 3Mpix for 3GS photos.
I've only played around with this a little bit, but doing it the obvious way - i.e. a CGImage backed by an array in your code - you could see in the range of 5-10 FPS. If you want something more responsive than that, you'll have to come up with a more-creative solution. Maybe map the image as textures on a grid of points, and render with OpenGL?
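To make the "obvious way" concrete, here is a rough sketch (in Swift, for illustration only) of touching every pixel on the CPU through a CGContext-backed buffer; this per-pixel loop plus the redraw each frame is what keeps you in the single-digit FPS range:

import UIKit

// Naive CPU-side per-pixel pass: invert every pixel of an image.
func inverted(_ image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    let bytesPerRow = 4 * width
    let byteCount = bytesPerRow * height
    guard let context = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let data = context.data else { return nil }
    let pixels = data.bindMemory(to: UInt8.self, capacity: byteCount)

    // Walking every pixel like this is exactly the work that adds up at full resolution.
    for i in stride(from: 0, to: byteCount, by: 4) {
        pixels[i]     = 255 - pixels[i]       // red
        pixels[i + 1] = 255 - pixels[i + 1]   // green
        pixels[i + 2] = 255 - pixels[i + 2]   // blue
        // pixels[i + 3] is alpha; leave it alone
    }

    guard let output = context.makeImage() else { return nil }
    return UIImage(cgImage: output)
}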
Look up FaceGoo in the App Store. That's an example of an app that uses a straightforward OpenGL rendering loop to do something similar to what you're talking about.
Not doable, not with the current APIs and a generic image filter. Currently you can only access the screen through OpenGL or higher abstractions and OpenGL is not much suited to framebuffer operations. (Certainly not the OpenGL ES implementation on iPhone.) If you change the image every frame you have to upload new textures, which is too expensive. In my opinion the only solution is to do the effects on the GPU, using OpenGL operations on the texture.
My answer is: just wait a little until they get rid of the OpenGL ES 1.0 devices and finally bring Core Image over to the iPhone SDK.
With fragment shaders this is very doable on the newer devices.
I'm beginning to think the only way to pull this off is to write a suite of vertex/fragment shaders and do it all in OpenGL ES 2.0. I'd prefer not to incur the restriction of limiting the app to the iPhone 3GS, but I think that's the only viable way to go here.
I was really hoping there was some CoreGraphics approach that would work but that does not appear to be the case.
Thanks,
Doug