How to create a virtual PulseAudio stereo source from the Left Mic Channel & the Right Monitor Channel? - pulseaudio

I have just one sound card. 2 ins & 2 outs.
I want to make a new virtual stereo source to record from that is:
the left mic input panned hard left,
and whatever is currently playing on the right output panned hard right.
I know it must be possible, and I can get partway there, but I'm not quite getting it.
I've been trying permutations of load-module commands in ~/.config/pulse/default.pa
Help!
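One way to get there (a sketch only, not verified against this exact card; the device names are placeholders, so substitute the ones reported by pactl list short sources and pactl list short sinks): create a stereo null sink, loop only the mic's left channel into its left channel and only the output monitor's right channel into its right channel, then record from the null sink's monitor.

    # ~/.config/pulse/default.pa  (sketch; device names below are placeholders)

    # 1. A stereo null sink; its .monitor becomes the new virtual source to record from.
    load-module module-null-sink sink_name=virt_rec channels=2 channel_map=front-left,front-right sink_properties=device.description=VirtualRecord

    # 2. Only the LEFT channel of the mic into virt_rec's left channel.
    #    remix=no keeps PulseAudio from down/upmixing the mono stream across both channels
    #    (needs a reasonably recent PulseAudio; drop it if your version rejects the argument).
    load-module module-loopback source=alsa_input.pci-0000_00_1b.0.analog-stereo sink=virt_rec channels=1 channel_map=front-left remix=no latency_msec=20

    # 3. Only the RIGHT channel of whatever is playing (the output's monitor) into virt_rec's right channel.
    load-module module-loopback source=alsa_output.pci-0000_00_1b.0.analog-stereo.monitor sink=virt_rec channels=1 channel_map=front-right remix=no latency_msec=20

After that, recording from virt_rec.monitor (for example with parecord --device=virt_rec.monitor test.wav, or by picking it in the recorder's device list) should give left = mic left, right = playback right.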

Related

Wwise Resonance Audio Room behaving strangely in Unity

I am trying to implement Resonance Audio via Wwise for a game in Unity, but I have a problem I cannot solve for the life of me.
The problem is that the zones where I can hear the room reverb don't correspond to the set room boundaries. I've uploaded a video to illustrate the problem. Each room in the game has a WwiseResonanceAudioRoom component attached to it, with no change in scale, that fits the room. But for each of the three rooms, I can only hear the room reverb in some specific part of the room. Reverb for some objects at a longer distance can be heard from parts of the room where other reverb can't. I've added a debugging box to the game view which outputs the exact data that the ResonanceAudio.cs script is sending to Wwise.
In Wwise, all of the reverbed sounds are sent to a "RoomEffectsMasterBus" with the Resonance Audio mixer plugin on it, as well as to an ambisonic aux bus (which I turned the volume down on for the video). The regular sound works well, and I'm using Wwise Rooms to separate the room sounds.
How the hell do I make Resonance Audio output room reverb in the entire room? I haven't touched the Resonance Audio scripts (other than now adding the debug text output). Any help would be greatly appreciated!
I'm using
- Wwise 2019.2.0.7216
- Resonance Audio SDK 2019.2.1.91 from Wwise Launcher
- Unity 2019.2.17f1
- The ResonanceAudioForWWise 1.2.1 Unity-Wwise implementation wrapper scripts for Unity
Here's the video showing the issue:
https://youtu.be/0Y7GXG69IZ0
Screenshots: Room 2 boundaries; Room 1 boundaries.

Monitor movement on a real world track controls Unity Camera

I am working, or about to be working, on a project where I would like to move a monitor along a railed track, left to right and vice versa, and have the camera in Unity update its movement left or right to follow the track movement. I am attempting to do something like this Video Example but on a smaller scale, and to have, at certain points along the track, triggers that spawn different animations based on where on the track the monitor is, sort of like a timeline.
I guess my question is: does anyone have any thoughts or ideas on how this is accomplished? My initial thought is using an accelerometer to get movement data and using that with some scripting to control the camera, but I am not sure how big a factor the potential error may be with this approach. Another idea would be to use some sort of laser range finder strapped to the back of the monitor or to the track to get a constant distance reading; right now I am really just brainstorming.
On the Unity side of things I will be fine; I am just wondering if there are other ways, or if people have other ideas on how this is done, before I jump into spending money on trial and error. Thanks for any insight that anyone may be able to offer in advance!
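To give the rangefinder idea some shape, here is a rough sketch of the plumbing (every name in it is made up for illustration): read one distance value per line from the sensor over a serial port, normalize it to a 0..1 position along the track, and push it to Unity over UDP, where a script can map it to the camera's x position and fire the triggers. It assumes a sensor that prints millimetre readings on /dev/ttyUSB0, which may not match the actual hardware.

    # pip install pyserial
    import json
    import socket
    import serial  # pyserial

    PORT = "/dev/ttyUSB0"              # assumed serial port of the rangefinder
    TRACK_LENGTH_MM = 2000.0           # assumed physical length of the railed track
    UNITY_ADDR = ("127.0.0.1", 9050)   # UDP endpoint a Unity-side script listens on

    def main():
        sensor = serial.Serial(PORT, 115200, timeout=1)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            line = sensor.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                distance_mm = float(line)   # assumes one plain number per line
            except ValueError:
                continue                    # ignore malformed readings
            # Normalize to a 0..1 position along the track and clamp.
            t = max(0.0, min(1.0, distance_mm / TRACK_LENGTH_MM))
            sock.sendto(json.dumps({"trackPos": t}).encode(), UNITY_ADDR)

    if __name__ == "__main__":
        main()

The same loop would work with accelerometer data, but integrating acceleration twice tends to drift, which is why an absolute distance sensor is probably the safer bet.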

Using a Raspberry Pi to rotate a video

I want to build a kind of photo booth. A normal booth is so boring, so I decided to build a funky, fancy one.
I will use 2 Raspberry Pis: one to stream, shoot and print the photo, the other to display the live stream of the photo.
The streaming, shooting and printing are already done. Now I am building the video stream display part.
I will show the picture in 1:1 format, because I want to display every third shot rotated by a random angle. That way the people in front of the TV have to tilt their heads, and I will get strange and funny pictures. Maybe it is even possible to rotate constantly, like a hypnotic spiral.
On Windows with VLC the rotation of the stream works very well. How can I do this on a Raspberry Pi?
I did it now with HTML and a browser in fullscreen mode: rotate the video stream and crop it to 1:1.
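A different route than the HTML/browser approach would be to let Python/OpenCV on the Pi pull the stream, center-crop each frame to 1:1, rotate it by an arbitrary angle and show it fullscreen. This is just a sketch of that alternative; the stream URL and the angle below are placeholders.

    # pip install opencv-python
    import cv2

    STREAM_URL = "http://raspberrypi.local:8080/stream.mjpg"  # placeholder stream URL
    ANGLE_DEG = 15.0   # rotation angle; could be re-randomized for every third shot

    cap = cv2.VideoCapture(STREAM_URL)
    cv2.namedWindow("booth", cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty("booth", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        side = min(h, w)
        # Center-crop the frame to a 1:1 square.
        y0, x0 = (h - side) // 2, (w - side) // 2
        square = frame[y0:y0 + side, x0:x0 + side]
        # Rotate the square around its center.
        m = cv2.getRotationMatrix2D((side / 2, side / 2), ANGLE_DEG, 1.0)
        rotated = cv2.warpAffine(square, m, (side, side))
        cv2.imshow("booth", rotated)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()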

How to "hang on" to two distinct points coming from iPhone's camera input in live stream?

How could I get "hold of" two points coming from the iPhone's camera (in a live stream), like these people do: http://www.robots.ox.ac.uk/~gk/youtube.html (they're using this technique to bypass the need for markers in AR)?
I'm not interested in AR; I'm only interested in coming up with a way to "hang on" to such points coming from the camera's live stream and not lose them, regardless of whether I'm moving the camera closer to them or further away, to the left or right, etc.
Is it just a matter of writing code that scans the camera's input for something that "stands out" (because of a difference in color, high contrast, etc.)?
Thank you for any helpful ideas or starting points!
Check out http://opencv.willowgarage.com/wiki/
OpenCV is an open source library that can do lots of things around image recognition and tracking.
If you google for it along with iOS as a keyword, you should run into a few related projects that might help you further.
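As a concrete illustration of the kind of tracking OpenCV does, here is a minimal sketch of the classic corner-plus-optical-flow approach (in Python just to show the technique; the same goodFeaturesToTrack and calcOpticalFlowPyrLK calls exist in OpenCV's C++ API, which is what you would use on iOS):

    # pip install opencv-python
    import cv2

    cap = cv2.VideoCapture(0)               # any live camera works for the demo
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pick two strong corners -- points that "stand out" through contrast/texture.
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=2, qualityLevel=0.3, minDistance=30)

    while points is not None and len(points) > 0:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade optical flow follows the same points into the new
        # frame, even as the camera moves closer, further away, left or right.
        new_points, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        points = new_points[status.flatten() == 1].reshape(-1, 1, 2)
        for x, y in points.reshape(-1, 2):
            cv2.circle(frame, (int(x), int(y)), 6, (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:     # Esc to quit
            break
        prev_gray = gray

    cap.release()
    cv2.destroyAllWindows()

Points are dropped when the flow loses them; a real app would re-detect features when that happens.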

Is there any dev who has written iPhone wifi/bluetooth multiplayer before?

Is there any dev here who has written iPhone wifi/bluetooth multiplayer before?
Recently I've been trying to add Bluetooth multiplayer to my latest game, Doodle Kart. But I found that there is a huge amount of data that needs to be shared between the two devices:
- your car's position and direction
- your car's status (it is in a normal state, it is hit by a bullet, it is falling into a hole...)
- the CUP cars' positions, directions, and status
- the items' positions and status (pencil, bullet...)
I'm thinking about having one device calculate everything, and the other device just waiting and receiving the data to display on the screen. Does that make sense?
Hey, I should ask the most important question first: do you think it's possible to make Bluetooth multiplayer work for my game at all? It just seems like too much data to share between the devices.
Usually multiplayer games just share "events", like:
- Player begins to turn left/right.
- Player begins to accelerate.
- Player shoots from x/y/z in direction x/y/z.
- Item spawns at x/y/z.
- Player acquires item.
The other peers just calculate the rest themselves, as if everything were happening locally.
This reduces the data that needs to be transmitted, but requires periodic "full updates" that sync the game state again (e.g. every 10 seconds).
In short:
Transfer actions, not data.
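To make "transfer actions, not data" concrete, here is a rough sketch of what the messages could look like (in Python only to show the shape; every field name here is made up for the example, not taken from any particular networking API):

    import json
    import time

    # Small "action" events -- a few dozen bytes each, sent the moment they happen.
    def steer_event(player_id, direction):
        # direction: -1 = left, +1 = right, 0 = straight
        return {"t": "steer", "p": player_id, "d": direction, "ts": time.time()}

    def shoot_event(player_id, pos, aim):
        return {"t": "shoot", "p": player_id, "pos": pos, "aim": aim, "ts": time.time()}

    # Periodic full snapshot (e.g. every 10 seconds) that corrects any drift
    # between the two devices' local simulations.
    def full_state_event(cars, items):
        return {"t": "state", "cars": cars, "items": items, "ts": time.time()}

    def encode(event):
        return json.dumps(event).encode("utf-8")   # what actually goes over Bluetooth/Wi-Fi

    # A steering event is tiny compared to re-sending every car and item each frame.
    print(len(encode(steer_event(1, -1))), "bytes")

Each device applies incoming events to its own simulation, and the occasional full state snapshot overwrites whatever has drifted.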