Casting Oculus Quest 2 with scrcpy

I have a simple question.
When casting the Oculus Quest 2 with scrcpy, the cast video shows two screens, while the Oculus app's video shows one.
Why does the Quest 2's video in scrcpy show two screens, and how does the Oculus app get the view of just one eye?
[Screenshot: scrcpy]
[Screenshot: Oculus app]

You can use scrcpy and get a full-screen view using the following command:
scrcpy --crop 1730:974:1934:450 --max-fps 30

You are seeing two screens because that is literally what the display inside the Oculus headset is showing (one image for each eye).
To avoid this, you need to provide a crop parameter (-c / --crop, in the format width:height:x:y). Depending on the eye spacing you have set in the Oculus headset, you may need to tweak it a bit, but this should give you a good starting point:
scrcpy -b 25M -c 1440:1540:60:60
(The -b sets the bitrate.)

Following the previous answers: scrcpy captures the video rendered for the two displays in the Meta/Oculus Quest 2. Because the image is rendered for the screens to be viewed through the VR lenses, it has this unusual shape. For the Quest 2, the most suitable crop for landscape-oriented capture I could find is 1600:900:2017:510. It can be applied with this command:
scrcpy -b 25M --crop 1600:900:2017:510

Related

Hologram position in video stream from HoloLens to Android tablet has an "offset"

I am working with Unity on a project where I need to provide a video and audio connection between an Android tablet and the HoloLens. In addition, it should be possible to instantiate holograms via a click on the tablet (which shows the video of the HoloLens view, a kind of remote assistance). Most of this works as it should (using the FMETP Stream 3.0 asset for video and Photon Voice for audio), but the positions of the holograms visible in the HoloLens are not the same as those shown in the video on the tablet.
To solve this, I tried changing the relevant values in my scripts, such as the calculation of the hologram position from where the click is executed, as well as the settings of the assets that control which part of the video is shown on the tablet. I can't find a mistake so far; I would be very happy if anyone has tips, an idea of what I should check in more detail, or which parameters are the most important.
What I can achieve is that the holograms are placed at the position clicked on the tablet, but they then appear at a different position when looking through the HoloLens. Alternatively, I can map the tablet click so that the hologram position seen through the HoloLens is correct, but then the position shown in the tablet video is wrong. Getting both right seems impossible; the video shown on the tablet always shows the holograms in the wrong place. I have been researching for over two weeks now but cannot find a way to fix the "offset". I also noticed that the offset seems to vary with the distance between the hologram and the HoloLens.
I attached a sketch of the problem here; the labels in English, from the first line to the last, are:
(X) Position of the click on tablet
(arrow) Where the marking is shown at
(red) For Hololens correct
(blue) For tablet correct
Hope this helps to illustrate the problem.
Can anyone help me or has an idea why this could happen? Or does anyone know where I could get help with HoloLens development - paid help would be fine too? All I can find from e.g. Unity are subscriptions over a longer period, which does not make sense for me, because the problem should be fixed this week or next at the latest. Thanks in advance, I appreciate any help.
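One thing that may be worth ruling out (a minimal sketch only, not the poster's actual setup; the component name, the way the click arrives, and the 10 m raycast range are assumptions): a distance-dependent offset like the one described above often appears when the click is unprojected with a camera whose pose or projection does not exactly match the frame that was streamed to the tablet. The unprojection should use the same camera, with no extra cropping or letterboxing applied on the tablet side:

using UnityEngine;

// Hypothetical helper: maps a click on the streamed HoloLens video back into
// a world-space ray on the HoloLens and places a hologram where it hits.
public class RemoteClickPlacer : MonoBehaviour
{
    [SerializeField] Camera hololensCamera;     // the camera whose view is streamed to the tablet
    [SerializeField] GameObject hologramPrefab;

    // clickUV is the click position normalized to the streamed video:
    // (0,0) = bottom-left, (1,1) = top-right.
    public void PlaceFromNormalizedClick(Vector2 clickUV)
    {
        // This only lines up if the streamed frame covers the camera's full
        // viewport; any crop or letterbox on the tablet must be undone first.
        Ray ray = hololensCamera.ViewportPointToRay(new Vector3(clickUV.x, clickUV.y, 0f));

        if (Physics.Raycast(ray, out RaycastHit hit, 10f))
        {
            Instantiate(hologramPrefab, hit.point, Quaternion.identity);
        }
        else
        {
            // No surface hit: fall back to a fixed distance along the ray.
            Instantiate(hologramPrefab, ray.GetPoint(2f), Quaternion.identity);
        }
    }
}

If the streaming asset crops or scales the camera image, the same transform has to be applied in reverse to the click coordinates before the raycast; a mismatch there produces an error that grows with distance, which matches the behaviour described.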

How to draw lines from browser on remote mobile AR app?

I am looking for a solution to share the screen from a mobile AR app (ARKit or Unity AR Foundation).
The screen needs to be shared to a browser on the desktop, and it should be possible to draw lines on that screen from the browser with the mouse, with the lines appearing in the AR environment on the mobile app that is sharing the screen.
After some investigation, there does not seem to be a viable solution to truly share the same AR instance between browser and mobile, as you can with two mobile devices.
There should, however, be some sort of workaround possible, as it can be done with Vuforia Chalk AR.
Here is a GIF showing how it works:
AR Drawing demo
Sharing the video seems to be possible.
I am specifically trying to figure out how the line is drawn in the browser and then displayed on the mobile AR app.
How can you achieve the same functionality with open-source alternatives, or with Unity and custom code (no Vuforia)?
I am looking for a tutorial or some directions on how this could be implemented.
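For the mobile side, here is a rough sketch of the receiving end (assumptions: the browser sends stroke points as normalized 0..1 coordinates over some transport such as a WebSocket, which is not shown; class and field names are made up). Each incoming 2D point is unprojected against the AR camera and appended to a LineRenderer so the stroke appears in the AR scene:

using System.Collections.Generic;
using UnityEngine;

// Hypothetical receiver: turns normalized 2D points sent from the browser
// into a world-space stroke rendered by a LineRenderer.
public class RemoteStrokeRenderer : MonoBehaviour
{
    [SerializeField] Camera arCamera;          // the AR Foundation camera
    [SerializeField] LineRenderer lineRenderer;
    [SerializeField] float strokeDepth = 1.5f; // metres in front of the camera

    readonly List<Vector3> points = new List<Vector3>();

    // Call this for each normalized (0..1) point received from the browser.
    public void AddRemotePoint(Vector2 normalized)
    {
        // Unproject the 2D point to a world position at a fixed depth.
        Vector3 world = arCamera.ViewportToWorldPoint(
            new Vector3(normalized.x, normalized.y, strokeDepth));

        points.Add(world);
        lineRenderer.positionCount = points.Count;
        lineRenderer.SetPositions(points.ToArray());
    }

    public void ClearStroke()
    {
        points.Clear();
        lineRenderer.positionCount = 0;
    }
}

Anchoring the stroke to real geometry instead of a fixed depth would mean raycasting each point against the detected AR planes; the browser side only needs to capture mouse positions over the video element and send them normalized.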

Unity Standalone App: Resolution Scaling Issue (OSX)

I have developed a macOS standalone app in Unity3D (Canvas Scaler set to scale with a reference resolution of 1920x1080). Most of the time I present the app on my laptop connected to an external screen, which works pretty well.
But when I'm on the road and use the app on my laptop screen (1440x900), the UIs are all over the place...
I know, BUT: is there a way to run the app in a 16:9 ratio (with black bars at the top and bottom), the same as I can do in the Unity3D editor?
I don't want to go back and rescale everything to a lower resolution, as that would be a crazy amount of work :(
Is there any solution without re-doing it?
Cheers
Thanks, but sadly this didn't work for me. The problem is that my reference resolution in the Canvas Scaler is already set to 1920x1080. To fix it that way, I would have to lower the reference, which would mean redoing all the UIs.
However, I found a workaround, which is strange but works. I set the Default Screen Resolution in Unity to 1920x1080, as in the image/link below.
Now the strange part :)
1. Start the app with the option key on an HD screen and set the resolution to 1920x1080.
2. Save and Quit
3. Now the app is scaled down to 16:9 and keeps everything in place.
This works as long as you don't start this app, or any other Unity build, while holding the option key on a lower-resolution screen. In that case it will scale everything back again. You can redo steps 1-3 and it will work again.
Not sure if there is another way to do this, but at least I have kind of a solution.
Default Screen Resolution in Unity to 1920x1080
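An alternative to the workaround above (a sketch only; whether it helps depends on how the canvases are set up, and the component name is made up): force the main camera into a fixed 16:9 viewport so the rendered view is letterboxed. A canvas in Screen Space - Camera mode follows that viewport; a Screen Space - Overlay canvas would still span the whole window.

using UnityEngine;

// Hypothetical letterboxing script: forces the attached camera to render into a
// 16:9 viewport, adding bars on whichever axis the window has spare space.
[RequireComponent(typeof(Camera))]
public class Letterbox16x9 : MonoBehaviour
{
    const float targetAspect = 16f / 9f;

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        float windowAspect = (float)Screen.width / Screen.height;
        float scale = windowAspect / targetAspect;

        Rect rect = cam.rect;
        if (scale < 1f)
        {
            // Window is narrower than 16:9: bars on top and bottom.
            rect.width = 1f;
            rect.height = scale;
            rect.x = 0f;
            rect.y = (1f - scale) / 2f;
        }
        else
        {
            // Window is wider than 16:9: bars on the left and right.
            rect.width = 1f / scale;
            rect.height = 1f;
            rect.x = (1f - 1f / scale) / 2f;
            rect.y = 0f;
        }
        cam.rect = rect;
        // Note: Unity does not clear the area outside the viewport rect by itself;
        // a second camera behind this one (Clear Flags: Solid Color black,
        // Culling Mask: Nothing) is the usual way to fill the bars.
    }
}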
There are two things you should check: 'Resolution and Presentation' under 'Player Settings', and the canvas scalers. I have been in a situation similar to yours: I had built an iOS application meant for portrait orientation, but also wanted to test it on macOS. The application launched correctly in fullscreen mode, looking like this (the top status bar was intentionally brought down to show the aspect ratio):
Instead of using black bars on the sides, it seems that Unity simply used the colour of the current scene's canvas to fill in the gaps. These were my settings to get this result:
Only the 16:10 ratio (MacBook Pro) was checked, but the application still runs in the iPhone's portrait ratio. I did not have to manually rescale the canvas; I simply switched the build platform. This was made possible by the Canvas Scaler I added to every canvas with the following settings (UI Scale Mode: Scale With Screen Size | Screen Match Mode: Expand), which allows the canvas to expand automatically and scale to any aspect ratio without messing up the UI:
Hope this Helps!
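For reference, the Canvas Scaler settings described above can also be applied from code (a sketch; the same values can simply be set in the inspector instead, and the 1920x1080 reference resolution is taken from the question rather than from this answer):

using UnityEngine;
using UnityEngine.UI;

// Applies the Canvas Scaler settings mentioned above to the canvas it sits on.
[RequireComponent(typeof(CanvasScaler))]
public class ConfigureScaler : MonoBehaviour
{
    void Awake()
    {
        CanvasScaler scaler = GetComponent<CanvasScaler>();
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1920f, 1080f);
        scaler.screenMatchMode = CanvasScaler.ScreenMatchMode.Expand;
    }
}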

How to record the screen to video on iPhone with OpenGL (video preview layer) and UIKit elements?

I have searched everywhere and tried mixing and matching different bits of code, but I haven't found anything that works or anyone with the same question.
Basically, I want to be able to create video demos of iPhone apps that include standard UIKit elements and also the image coming from the camera (the video preview layer). I don't want to use AirPlay or the iOS Simulator to project onto the desktop and then capture, because I want to be able to make videos outdoors in public. I have successfully been able to capture the screen to video with this code, but the video preview layer stays blank. I read that this is because the preview uses OpenGL and what I'm capturing comes from the CPU, not the GPU. I have successfully used GPUImage by Brad Larson to capture the video preview layer, but it doesn't capture the rest of the UIView. I have seen code that combines both and converts the result to an image, but I'm not sure whether that would be too slow for real-time video capture. Can someone point me in the right direction?
It might not be the cleanest solution, but it will work nonetheless: did you consider jailbreaking? I hope Apple doesn't sue me for this one, but if you really want to record your screen, then simply install a screen recorder. Enough options can be found: http://www.google.be/search?q=iphone+jailbreak+record+screen
And if you don't like it: restore your phone from a previous backup.
(For the record: I'm against jailbreaking; I'm posting this purely from a productivity point of view.)

Applying Effect to iPhone Camera Preview "Video" Using OpenGL

My goal is to write a custom camera view controller that:
Can take photos in all four interface orientations with both the back and, when available, front camera.
Properly rotates and scales the preview "video" as well as the full resolution photo.
Allows a (simple) effect to be applied to BOTH the preview "video" and full resolution photo.
My previous effort is documented in this question. My latest attempt was to modify Apple's sample GLVideoFrame (from WWDC 2010). However, I have not been able to get the iPhone 4 to display the preview "video" properly when the session preset is AVCaptureSessionPresetPhoto.
Has anyone tried this or know why the example doesn't work with this preset?
Apple's example uses a preset with 640x480 video dimensions and a default texture size of 1280x720. The iPhone 4 back camera delivers only 852x640 when the preset is AVCaptureSessionPresetPhoto.
iOS device camera video/photo dimensions when preset is AVCaptureSessionPresetPhoto:
iPhone 4 back: video is 852x640 & photos are 2592x1936
iPhone 4 front: video & photos are 640x480
iPod Touch 4G back: video & photos are 960x720
iPod Touch 4G front: video & photos are 640x480
iPhone 3GS: video is 512x384 & photos are 2048x1536
Update
I got the same garbled video result when switching Brad Larson's ColorTracking example (blog post) to use AVCaptureSessionPresetPhoto.
The issue is that AVCaptureSessionPresetPhoto is now context-aware and runs at different resolutions depending on whether you are displaying video or capturing still images.
The live preview is different for this mode because it pads the rows with extra bytes. I'm guessing this is some sort of hardware optimization.
In any case, you can see how I solved the problem here:
iOS CVImageBuffer distorted from AVCaptureSessionDataOutput with AVCaptureSessionPresetPhoto
AVCaptureSessionPresetPhoto is for taking pictures, not for capturing a live feed. You can read about it here: http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html
(My belief is that this actually uses two different cameras or sensors, as they behave very differently, and there is a couple of seconds of delay just for switching between Photo and, say, 640x480.)
You can't even use both presets at the same time, and switching between them is a headache as well - How to get both the video output and full photo resolution image in AVFoundation Framework
HTH, although not what you wanted to hear...
Oded.