How to run multiple applications that need an X server on a headless Ubuntu server? - remote-server

I have a single graphics card in a remote headless Ubuntu server.
I have set up nvidia-xconfig to create a virtual display.
I need a virtual X server, but not Xvfb, to run a Unity game headless while recording a video of the game's rendering.
Xvfb does work, but it does not use the graphics card, which makes rendering very slow.
This works fine with xinit if I run a single game:
xinit ./game.x86_64
This starts the game and renders gameplay without any problem.
However, when I try to start multiple games simultaneously on different X servers:
xinit ./game1.x86_64 -- :0
xinit ./game2.x86_64 -- :1
This does not render properly: one of the games (the one that started first) does not render. (I checked with the recorded video.) As far as I know, this is because a single graphics card can only have a single X server running.
Then I set up multiple screens by tweaking xorg.conf and tried:
xinit ./game1.x86_64 -- :0.0
xinit ./game2.x86_64 -- :0.1
However, since xinit tries to start a new server, the latter one fails, complaining that an X server is already running on display :0.
If I search for multi-monitor X server settings, all I can find are setups for real monitors, yet what I need is a virtual monitor setup.
Is there a way to start multiple applications that need a screen in a headless server?
What I think I need to know is:
a way to start an X server with multiple screens and tell each application which screen to use, OR
a way to use a window manager remotely from the console.
If there are any better solutions, or if I have missed something, that would be really helpful too.

I was misunderstanding how xinit works.
I figured this out by running xinit without any clients in the background (e.g. inside tmux):
xinit or xinit -- :0
and then specifying which display each game should use. Of course, the multiple monitors are set up in xorg.conf.
For Unity, it was enough to just export the DISPLAY environment variable:
export DISPLAY=:0.0 for game1
export DISPLAY=:0.1 for game2
The log says Unity recognizes both displays, but game1 says
:0.0 display is 'display 0 (Primary display)'
while game2 says
:0.1 display is 'display 0 (Primary display)'
My misunderstanding was that I thought xinit could only be used with a client application; in fact, xinit can run without any client, just sitting in the background.
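As a sketch of the approach above (assuming the multi-screen xorg.conf is in place and a bare `xinit -- :0` is already running in the background), each game can be started with its own DISPLAY value; the game binary names are just the hypothetical ones from the question:

```python
import os
import subprocess

def launch_on_screen(cmd, screen, **popen_kwargs):
    """Start `cmd` with DISPLAY pointing at one screen of the shared X server."""
    env = dict(os.environ, DISPLAY=screen)
    return subprocess.Popen(cmd, env=env, **popen_kwargs)

# The X server must already be running with both screens defined in xorg.conf:
# launch_on_screen(["./game1.x86_64"], ":0.0")
# launch_on_screen(["./game2.x86_64"], ":0.1")
```

This is the same thing the `export DISPLAY=...` lines do, just from one controlling script instead of two shells.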

Related

RPi Pyaudio/Portaudio + ALSA: How to select/change mux inputs

I'm working on a project that is using a Raspberry Pi with Raspbian and an SGTL5000-based sound card (FePi). I have no problem selecting the card and getting samples in both directions, once I have configured the multiplexer to properly select line in/out. I did this with alsamixer. I want to automate the process so that the only step required is to run the application.
I don't see a way to do this using PyAudio/PortAudio. Is my only option the ALSA API or is there a way to do this with PyAudio (or PortAudio) that I'm not spotting?
Thanks in advance for any insight you can provide.
Oz (in DFW)
I ran into a similar problem: I wanted to automate changing mux settings, but I also wanted to adjust inputs not exposed by alsamixer.
To deal with the limitations of the driver, I ended up porting the Teensy 3.x SGTL5000 control software over to the Pi yesterday:
https://github.com/Swap-File/pi-sgtl5000
You could force-feed the same commands over I2C from Python.
The only downside is that once you start force-feeding the sound card I2C commands, you break alsamixer (and anything else that might try to adjust its own volume settings).
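A minimal sketch of that force-feeding approach from Python using the smbus2 library (0x0A is the SGTL5000's usual 7-bit I2C address, but verify it for your board, and take real register numbers and values from the datasheet or the linked repo — the ones in the comment are placeholders):

```python
try:
    from smbus2 import SMBus, i2c_msg  # third-party: pip install smbus2
except ImportError:
    SMBus = i2c_msg = None  # no I2C library installed; packing still testable

SGTL5000_ADDR = 0x0A  # typical 7-bit address; verify for your wiring

def pack_reg(reg, value):
    """SGTL5000 register addresses and values are both 16-bit, sent MSB first."""
    return [reg >> 8, reg & 0xFF, value >> 8, value & 0xFF]

def write_reg(bus, reg, value):
    """Write one 16-bit value to one 16-bit register over I2C."""
    bus.i2c_rdwr(i2c_msg.write(SGTL5000_ADDR, pack_reg(reg, value)))

# e.g. (placeholder register/value -- consult the datasheet):
# with SMBus(1) as bus:
#     write_reg(bus, 0x0024, 0x0026)
```

As noted above, writes like this go behind the driver's back, so expect alsamixer's view of the chip to become stale.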

Unity UNET How to change online scene in sync with clients

I'm using the old Unity 2017.3 UNET implementation in my game. Players connect to a server and are placed in a Lobby scene until the party leader selects another level to go to. The implementation is just a slightly modified version of the default NetworkLobbyManager.
The trouble started now that I've begun heavily testing the networking code by running a compiled build for a client and then using the editor as the server. Very frequently, the server running in the editor will run a great deal slower than the compiled client build. So when I use NetworkManager.ServerChangeScene, the client will load the scene before the server, which will cause all NetworkIdentities in the client scene to be disabled (because they haven't been created on the server yet).
It's a lot less likely to happen if the server is running a compiled build, because the server will almost always load the scene before any clients. But it does surface a bigger issue with Unity itself: there's no guarantee that the server will be available when changing scenes.
Is there some other way of changing scenes in a networked game? Is there a way to guarantee that the server enters the scene before any clients? Or am I stuck just kind of hoping that the network remains stable between scene changes?
Well I thought about it more overnight and decided to look in other directions, because more investigation revealed that sometimes the Network Identities are still disabled when the server loads the scene first as well.
I was looking through the UNET source code and realized that the server should be accounting for situations where it loads the scene after the clients, although that code looks a little janky to me. This theory was backed up by documentation I found which also says that NetworkIdentities in the scene at startup are treated as if they were spawned dynamically when the server starts.
Knowing those things now, I'm starting to think that I'm just dumb and messed some stuff up on my end. The object that was being disabled is a manager that enables and disables other NetworkIdentity objects. I'm pretty sure the main problem is that it's disabling a network identity on the client, that is still enabled on the server, which is causing the whole thing to go haywire.
In the future, I'm just going to try and stay away from enabling and disabling game objects on a networked basis and stick to putting relevant functionality behind a flag of my own so that I can "soft disable" an object without bugging out any incoming RPCs or SyncVar data.

Object persistence in WSGI

I've been developing a web interface for a simple raspberry pi project. It's only turning lights on and off, but I've been trying to add a dimming feature with PWM.
I'm using mod_wsgi with Apache, and RPi.GPIO for GPIO access. For my prototype I'm using three SN74HC595s in series for the LED outputs, and am trying to PWM the OE line to dim the lights.
Operating the shift registers is easy, because they hold their outputs between updates. However, for PWM to work, the GPIO.PWM instance must stay active between WSGI requests. This is what I'm having trouble with. I've been working on this for a few days and saw a couple of similar questions here, but nothing for active objects like PWM, only simple counters and such.
My two thoughts are:
1) Use the global scope to hold the PWM object, and use PWM.ChangeDutyCycle() in the WSGI function to change brightness. This approach has worked before, but it seems like it might not here.
Or 2) Create a system-level daemon (or something) and make calls to it from within my WSGI function.
Very important with mod_wsgi: if you need things in memory to persist across requests, you must use mod_wsgi daemon mode, not embedded mode. Embedded mode is the default, though, so you need to make sure you configure daemon mode explicitly. The default for daemon mode is a single process, so requests will always hit the same process. It is still multithreaded, though, so make sure you protect global data access/updates with thread locking.
Details on embedded vs daemon mode in:
http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html
The documentation also has examples of daemon mode and explains how you should configure your virtual environment:
http://modwsgi.readthedocs.io/en/develop/user-guides/virtual-environments.html
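To illustrate the daemon-mode advice above, here is a minimal WSGI app sketch in which module-level state persists across requests and is guarded with a lock. The `_duty_cycle` integer is a made-up stand-in for the persistent GPIO.PWM object, not real RPi.GPIO code:

```python
import threading

# Module-level state survives between requests only in mod_wsgi daemon
# mode (default: one process); in embedded mode Apache may use several
# processes, each with its own copy of this module.
_lock = threading.Lock()
_duty_cycle = 0  # stand-in for a persistent GPIO.PWM instance

def application(environ, start_response):
    global _duty_cycle
    with _lock:  # daemon mode is still multithreaded
        _duty_cycle = (_duty_cycle + 10) % 100  # e.g. pwm.ChangeDutyCycle(...)
        body = str(_duty_cycle).encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```

Each request bumps the shared value by 10, which only behaves predictably because every request lands in the same single daemon process.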
For anyone looking at this in 2020:
I changed mod_wsgi to single-thread mode. I'm not sure if it's related to Python, mod_wsgi, or bad juju, but it still would not last long term. After a few hours the PWM would stop at full off.
I tried rolling my own PWM daemon, but ultimately went with the pigpio module (is Joan on SE?). It's been working perfectly for me.
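For reference, the pigpio route looks roughly like this. The PWM lives inside the pigpiod daemon rather than the Python process, so it keeps running regardless of what Apache/mod_wsgi does to its worker processes (GPIO 18 is just an example pin):

```python
try:
    import pigpio  # talks to the pigpiod daemon; hardware-only
except ImportError:
    pigpio = None  # not on a Pi; the conversion helper still works

def percent_to_duty(percent):
    """Map a 0-100% brightness to pigpio's 0-255 duty-cycle range."""
    return round(percent * 255 / 100)

def set_brightness(pi, gpio, percent):
    # The PWM itself runs inside pigpiod, independent of this process,
    # so it survives WSGI worker restarts.
    pi.set_PWM_dutycycle(gpio, percent_to_duty(percent))

# usage (example pin; requires pigpiod running):
# pi = pigpio.pi()
# set_brightness(pi, 18, 50)
```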

How do I programmatically turn on/off Mute Groups on my Behringer X32?

I've got a Behringer X32 rack, which uses an extension of the OSC (Open Sound Control) protocol. This particular rack communicates via UDP packets on port 10023. A fellow named Patrick Maillot actually has some pretty extensive albeit unofficial documentation of the protocol, including multiple executables you can download to interact with the system (outside of the official Behringer apps).
What I would like to do is pretty simple, though I'm having a hard time getting up to speed with this. I want to be able to mute and subsequently un-mute Mute Group 1 on my device. The mute group is already set up; all I want to do is utilize the protocol to either activate or deactivate it.
I can successfully connect to the rack using the X32_Command.exe program. But wading through the documentation, here's what I came up with as my best guess for which commands I should be sending:
/config/mute/1/ON
/config/mute/1/OFF
However, I don't think I have the syntax right (or maybe I've just got the wrong set of commands altogether), because those don't seem to do anything. In the X32_Command.exe console application I appear to receive the following responses when issuing those commands, respectively:
->X, 20 B: /config/mute/1/ON~~~
->X, 20 B: /config/mute/1/OFF~~
However, nothing actually happens on the rack. The mute group isn't affected at all when I issue these commands. How do I get this working? What am I doing wrong?
Just saw this (better late than never). The correct syntax for X32_Command.exe would be (as stated in the documentation):
/config/mute/1 ,i 0
/config/mute/1 ,i 1
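Those commands can also be sent straight from Python by hand-encoding the OSC message: an OSC message is the null-padded address string, a null-padded type-tag string (",i" for one int argument), then the argument as a big-endian int32. A sketch, with a placeholder IP for the rack:

```python
import socket

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def mute_group_msg(group, value):
    """Build '/config/mute/<group> ,i <value>' as raw OSC bytes."""
    return (osc_string(f"/config/mute/{group}")
            + osc_string(",i")
            + int(value).to_bytes(4, "big"))

# send to the rack (placeholder IP; the X32 listens on UDP port 10023):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(mute_group_msg(1, 1), ("192.168.1.32", 10023))
```

This sends the int-argument form from the answer above; whether 1 activates or deactivates the mute group is worth confirming against Patrick Maillot's documentation.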

How Can I intercept high level GDI draw commands?

I'm trying to make an application that allows remote access to other applications (running on different machines). The idea is to give users transparent access to certain applications. I basically have two options:
Application Streaming
Intercepting draw command and reproduce them in the client
(of course, the input is redirected from the client to the server)
I have a working version with application streaming, but I don't have a clue how to do it through hooking in the Win API...
Any ideas ?
What you're describing sounds a lot like a Windows metafile. The metafile captures all GDI drawing commands to a file; that file can then be passed to a remote PC and rendered there.
See CreateEnhMetaFile for starters. This returns a handle to a device context, which you draw to instead of drawing to the normal screen device context.