Not able to upload 360 video to YouTube - virtual-reality

I just bought my Theta S 360 camera and wanted to upload a video to YouTube. I followed a few tutorials, but it was unsuccessful. I have attached a screenshot of the video after uploading it to YouTube.
Thank you. This is the screenshot of the video I tried to upload, which failed.

You should edit your video to make it 'spherical'. It would look more like this: [panoramic image]
A 360 image is generally twice as wide as it is tall, since it covers 360° horizontally and 180° vertically (e.g. 3840x1920 pixels).

Not really related to computer programming, but here's the answer:
I have a Ricoh Theta S myself and ran into the same issue the first time I downloaded a video.
Download the Theta UVC Blender and drag-and-drop your video; the software outputs an "equirectangular / spherical" format that you can then upload to Facebook or YouTube.
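For YouTube specifically, the file also needs spherical metadata injected (Google publishes a Spatial Media Metadata Injector for this). Before uploading, it's also worth checking that the converted output really is 2:1 equirectangular. A minimal sketch of that check in Python (the function name is my own; the frame dimensions could come from your player or ffprobe):

```python
def is_equirectangular(width: int, height: int, tol: float = 0.01) -> bool:
    """An equirectangular frame covers 360° x 180°, so the width should be
    (almost exactly) twice the height, e.g. 3840x1920."""
    if height <= 0:
        return False
    return abs(width / height - 2.0) <= tol

print(is_equirectangular(3840, 1920))  # True  -> 2:1, spherical-ready
print(is_equirectangular(1920, 1080))  # False -> ordinary 16:9 frame
```

If the check fails, the video was probably exported as the raw dual-fisheye recording rather than the stitched equirectangular version.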

Related

How to preview video thumbnails while scrubbing the progress bar in video_player (Flutter)?

I am building a VOD app and am trying to implement YouTube-style previews: while scrubbing, show the thumbnail image for the corresponding time frame.
I have generated video thumbnails for every minute of the total video length. Is there any built-in way to show these for HLS videos, or do I need to build a custom solution?
If anyone has worked on this before, I would appreciate some insights.
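As far as I know, neither video_player nor Chewie ships a built-in scrub preview, so the usual approach is a custom overlay: listen to the drag position on the progress bar and swap in the pre-generated thumbnail for that timestamp. The lookup itself is just index arithmetic; here is a sketch of that mapping in Python (a Dart version is a direct translation; the names are mine):

```python
from typing import Optional

def thumbnail_index(position_s: float, interval_s: float = 60.0,
                    count: Optional[int] = None) -> int:
    """Map a scrub position (in seconds) to the index of the pre-generated
    thumbnail, assuming one thumbnail per `interval_s` of video."""
    idx = int(position_s // interval_s)
    if count is not None:
        idx = max(0, min(idx, count - 1))  # clamp to the available thumbnails
    return idx

print(thumbnail_index(0))             # 0 (first minute)
print(thumbnail_index(125))           # 2 (third minute)
print(thumbnail_index(999, count=5))  # 4 (clamped to the last thumbnail)
```

The clamp matters in practice, because the drag position can briefly report a value past the video duration.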

Why does the Chewie video player for Flutter take up the whole screen regardless of video dimensions?

I am new to Flutter and am trying to play videos in my app. I followed this tutorial on using Chewie to play videos, copying and pasting the code from the main.dart and chewie_list_item.dart snippets on the website into a fresh project (the GitHub project provided by the website is too outdated for me to debug, so I copy-pasted instead).
I expected to get something like this, with the player "wrapped" around the video:
However, on my Android virtual device I get this instead, with the video player taking up the entire screen regardless of the video dimensions. I tried setting an AspectRatio matching the video dimensions, but that only eliminated the stretching of the video; the main issue remains.
Why does it behave this way, and how do I achieve the result shown in the first image? This is the test project I made: https://github.com/nathantew14/chewie_test Thanks!

OpenCV detect iPhone orientation

I have a site where users can upload video. When testing uploads that are processed with OpenCV and Python, videos recorded on an iPhone are always treated as if they were taken in landscape mode with the phone rotated 90 degrees to the left: portrait videos come out sideways, and videos taken in the other landscape direction (90 degrees to the right) come out upside down.
I know I can use OpenCV to rotate videos, but is there a way to detect:
a) if the video is even taken with an iPhone or not
b) if so, what the orientation should be, i.e. how much to rotate the video by?
OpenCV is a computer vision library; as far as I know, it can't solve this problem on its own. What you need is the video's metadata, which contains all the information you need about the video, including its orientation. Here you can see what metadata contains. You should search for how to extract metadata from a video. Take a look at this.
Good luck!
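To make that concrete: the rotation is stored in the QuickTime/MP4 container, and ffprobe (part of FFmpeg) can read it out; OpenCV can then apply the correction with cv2.rotate. A sketch, assuming ffprobe is installed on PATH — note that how the value is exposed differs between FFmpeg versions (older builds write a tags.rotate entry, newer ones a side_data_list rotation with the opposite sign), so verify the sign convention on a sample clip:

```python
import json
import subprocess

def read_rotation(path: str) -> int:
    """Return the rotation of the first video stream in degrees (0 if
    none is recorded). Requires the ffprobe binary on PATH."""
    out = subprocess.check_output([
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_streams", path,
    ])
    for stream in json.loads(out).get("streams", []):
        if stream.get("codec_type") != "video":
            continue
        rotate = stream.get("tags", {}).get("rotate")  # older FFmpeg
        if rotate is not None:
            return int(rotate) % 360
        for sd in stream.get("side_data_list", []):    # newer FFmpeg
            if "rotation" in sd:
                return int(sd["rotation"]) % 360
    return 0

def cv2_rotate_code(rotation_deg: int):
    """Map a metadata rotation to the cv2.rotate() flag (None = no-op).
    The OpenCV constants have the numeric values 0/1/2 shown below."""
    return {0: None,
            90: 0,    # cv2.ROTATE_90_CLOCKWISE
            180: 1,   # cv2.ROTATE_180
            270: 2,   # cv2.ROTATE_90_COUNTERCLOCKWISE
            }[rotation_deg % 360]
```

This also answers part a): iPhone recordings carry Apple-specific QuickTime tags (e.g. a device model tag), but keying the fix on the rotation value alone is more robust, since Android phones write rotation metadata too.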

How to record the screen to video on iPhone with OpenGL (video preview layer) and UIKit elements?

I have searched everywhere and tried mixing and matching different bits of code but I haven't found anything that works or anyone with the same question.
Basically I want to be able to create video demos of iPhone apps that include standard UIKit elements and also the image coming from the camera (the video preview layer). I don't want to use AirPlay or the iOS Simulator to project onto the desktop and capture from there, because I want to be able to make videos outside in public. I have successfully captured the screen to video with this code, but the video preview layer comes out blank. I read that this is because it uses OpenGL, and what I'm capturing comes from the CPU, not the GPU. I have successfully used GPUImage from Brad Larson to capture the video preview layer, but it doesn't capture the rest of the UIView. I have seen code that combines both and converts to an image, but I'm not sure whether that would be too slow for real-time video capture. Can someone point me in the right direction?
It might not be the cleanest solution, but it will work nonetheless: did you consider jailbreaking? I hope Apple doesn't sue me for this one, but if you really want to record your screen, simply install a screen recorder. Enough options can be found: http://www.google.be/search?q=iphone+jailbreak+record+screen
And if you don't like it: restore your phone from a previous backup.
(For the record: I'm against jailbreaking and am posting this purely from a productivity point of view.)

How to make swf file smaller?

I'm trying to capture a video demo of my iPhone app so I can post it on YouTube.
I'm using the Jing application (http://www.techsmith.com/jing/) for Mac and was able to capture a video of the app. However, I looked at the resulting file size and it's 130MB!!! That's huge!
The video is 3:40 mins.
Any tips on how I can make the file smaller, so that I can easily share it and post it on YouTube?
Thanks!!
Make the dimensions smaller, don't use a keyframe on every single frame, change the video quality from best to good, do the same for audio (best to good), or change the format. Isn't it kind of strange that you're posting an SWF movie of an iPhone app? Haha, anyway, hope this works!
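To put numbers on it: file size ≈ bitrate × duration, so 130 MB over 3:40 (220 s) works out to roughly 4.7 Mbit/s, which is far more than a screencast needs. A quick sketch of the arithmetic for picking a target bitrate (decimal megabytes; the 128 kbit/s audio figure is an assumed value, not something from the question):

```python
def video_bitrate_kbps(target_mb: float, duration_s: float,
                       audio_kbps: float = 128.0) -> float:
    """Video bitrate (kbit/s) that fits a target file size, after
    reserving `audio_kbps` for the audio track. 1 MB = 8000 kbit."""
    return target_mb * 8000.0 / duration_s - audio_kbps

# The original 130 MB / 220 s capture:
print(round(130 * 8000 / 220))             # 4727 kbit/s overall
# To land at roughly 25 MB instead:
print(round(video_bitrate_kbps(25, 220)))  # 781 kbit/s for the video track
```

Re-encoding to H.264/MP4 at a bitrate in that range (instead of SWF) would also be a better fit for YouTube.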