I want to run rendering software on a remote headless server. Xvfb is an option, but apparently it doesn't support hardware acceleration. There are some hints on how to set up an X server with GPU support without a physical display (e.g. for Unity and nvidia-xconfig), but they all seem to require root access. I cannot run commands as root on the server, since it is part of the university HPC cluster.
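For what it's worth, the no-root Xvfb fallback mentioned above can be scripted without any X configuration at all. A minimal sketch, assuming xvfb-run is installed on the cluster and using a hypothetical ./render command in place of the real rendering binary (this gives software rendering only, not GPU acceleration):

    # Minimal sketch: run a renderer under a virtual framebuffer without root.
    # Assumes xvfb-run is installed on the cluster; "./render --scene demo.blend"
    # is a hypothetical command standing in for the real rendering job.
    # Note: this gives software (Mesa/llvmpipe) rendering only, no GPU acceleration.
    import subprocess

    result = subprocess.run(
        ["xvfb-run", "-a",                    # -a picks a free display number
         "./render", "--scene", "demo.blend"],
        capture_output=True, text=True,
    )
    print(result.returncode, result.stdout)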
Related
I have a Raspberry Pi 3 which controls an irrigation system. The Pi is headless and is controlled via the home network through a Tomcat server, which provides a UI. Mostly the irrigation system runs automatically, but I can log in to the Tomcat server to see the log of its activities, change settings, update the planting plan, or manually control the sprinklers.
Now I'd like something similar, but in a place where there is no internet available. That is, I'd like to use the Pi as a standalone web server, so that I can access the UI using a browser on my phone. I've looked through the Raspberry Pi documentation, but it describes setting up access points, which seem to depend on a wired connection between the Pi and a router. I've read about connecting via SSH to a standalone setup, but that article said that support for a standalone server isn't available.
Is what I'm looking for possible?
Thanks,
John
I have a bunch of automated UI tests that currently require a physical monitor to run. Can I somehow create a virtual monitor in Windows 10 that functions like a real monitor to the OS? I want to run the UI tests in a remote cloud environment without screens.
I think I heard at some point that VR (Virtual Reality) development has had a similar problem, in that VR also needs a physical monitor attached (besides the VR headset), and that this was perhaps solved by Nvidia/Intel(?) with a fake monitor driver or something similar? Or was it virtual desktops in VR? I can't find the source for any of this anymore...
The easiest way is to use the Spacedesk utility:
https://spacedesk.net/
The Spacedesk server component is installed on your PC.
The client component (viewer) is also required: any Android or Windows device in the same LAN segment will work.
Small hack:
You can also install the Windows client on the same PC as the Spacedesk server and manually assign the client an IP address from another subnet. As a result, Windows ends up with a fake display that it treats like a real one.
As the title says, I'm looking for something like Pulseway.
Basically, I need to monitor 3,000 devices, and I was wondering if there is something that lets me install an agent on these devices and monitor/manage them securely via a web panel.
Thanks for your time
You can try Monitis Server/Device Monitoring. They have agents for both Linux and Windows systems; we are using it to check the performance of our internal servers. You can also install the agent on one machine and monitor other devices on your internal network using internal uptime monitoring, and a single agent can host as many monitors as you need. More details here: http://www.monitis.com/server-monitoring
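Not Monitis-specific, but to illustrate what an internal uptime check from one agent machine boils down to, here is a rough Python sketch; the device list and probe port are made-up placeholders:

    # Rough illustration of an internal uptime check from one agent machine.
    # The device list and probe port are placeholders, not anything Monitis-specific.
    import socket

    DEVICES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical internal IPs
    PORT = 22                                          # e.g. SSH as a liveness probe

    def is_up(host, port, timeout=3):
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in DEVICES:
        print(host, "UP" if is_up(host, PORT) else "DOWN")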
Question
I'm wondering whether it's possible for the VM guest machine to pop up a window on the MS Windows host machine once a task is done within the VM (not an email). If I'm not dreaming, how can I achieve that?
Why
The VM is a simulator for a production server. Code is written in the IDE on the host and tested straight in the VM, so files are transferred manually from the IDE to the VM and then automatically moved, formatted, chmoded, chowned and so on inside the VM. This process can take a while, so I want to notify the developer once it is over. The developer has no access to the VM and does not necessarily need one.
Config
Tool: VirtualBox 4.1
Host: MS Windows XP or Windows 7
Guest: Debian VM
Shared dir: yes
Network: bridged connection
If this ability existed, it would be quite a security hole in VirtualBox. Guest VMs gaining access to the host machine's OS is not a good thing! As such, I don't think it's possible to accomplish this in a supported manner.
Instead, think of it as two separate machines. What mechanisms do you have for causing alerts or popups on one machine from another? Is anything like IM, net send, etc. enabled in your environment?
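As one concrete option along those lines (plain networking over the bridged connection, not a VirtualBox feature): run a small listener on the Windows host that shows a popup whenever the guest connects to it after the processing finishes. A rough Python sketch, with the port number chosen arbitrarily:

    # Host side (Windows): listen on the bridged network for a "done" message
    # from the guest and show a popup. Port 5050 is an arbitrary choice.
    import socket
    import tkinter as tk
    from tkinter import messagebox

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 5050))
    srv.listen(1)

    while True:
        conn, addr = srv.accept()
        msg = conn.recv(1024).decode(errors="replace").strip()
        conn.close()
        root = tk.Tk()
        root.withdraw()  # hide the empty main window, show only the popup
        messagebox.showinfo("VM notification", msg or f"Task finished on {addr[0]}")
        root.destroy()

On the Debian guest, the last step of the move/chmod/chown script just needs to open a TCP connection to the host's IP on that port and send a short message (a few lines of Python, or a netcat one-liner, would do).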
I am confused about these two concepts: the Xen split driver model and paravirtualization. Are these two the same? Do you get the split driver model when Xen is running in fully virtualized mode?
Paravirtualization is the general concept of making modifications to the kernel of a guest Operating System to make it aware that it is running on virtual, rather than physical, hardware, and so exploit this for greater efficiency or performance or security or whatever. A paravirtualized kernel may not function on physical hardware at all, in a similar fashion to attempting to run an Operating System on incompatible hardware.
The Split Driver model is one technique for creating efficient virtual hardware. One device driver runs inside the guest Virtual Machine (aka domU) and communicates with another corresponding device driver inside the control domain Virtual Machine (aka dom0). This pair of co-designed device drivers functions together, and so can be considered a single "split" driver.
Examples of split device drivers are Xen's traditional block and network device drivers when running paravirtualized guests.
The situation is blurrier when running HVM guests. When you first install a guest Operating System within a HVM guest, it uses the OS's native device drivers that were designed for use with real physical hardware, and Xen and dom0 emulate those devices for the new guest. However, when you then install paravirtual drivers within the guest (these are the "tools" that you install in the guest on XenServer, or XenClient, and likely also on VMware, etc.) - well, then you're in a different configuration again. What you have there is a HVM guest, running a non-paravirtualized OS, but with paravirtual split device drivers.
So, to answer your question, when you're running in fully virtualized mode, you may or may not be using split device drivers -- it depends on whether or not they are actually installed to be used by the guest OS. Recent Linux kernels already include paravirtual drivers that can be active within a HVM domain.
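As a quick way to tell which case you are in, you can check inside a Linux guest whether the Xen frontend (split) drivers are actually loaded. A small sketch, assuming the frontends were built as modules named xen_blkfront / xen_netfront (on some kernels they are built in and won't show up in /proc/modules):

    # Sketch: inside a Linux guest, check whether the Xen paravirtual frontend
    # (split) drivers are loaded. Assumes they were built as modules named
    # xen_blkfront / xen_netfront; on some kernels they are built in instead.
    FRONTENDS = ("xen_blkfront", "xen_netfront")

    try:
        with open("/proc/modules") as f:
            loaded = {line.split()[0] for line in f}
    except FileNotFoundError:
        loaded = set()

    for name in FRONTENDS:
        print(name, "loaded" if name in loaded else "not loaded")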
As I understand it, they're closely related, though not exactly the same. Split drivers means that a driver in domU works by communicating with a corresponding driver in dom0. The communication is done via hypercalls that ask the Xen hypervisor to move data between domains. Paravirtualization means that a guest domain knows it's running under a hypervisor and talks to the hypervisor instead of trying to talk to real hardware, so a split driver is a paravirtualized driver, but paravirtualization is a broader concept.
Split drivers aren't used in an HVM domain because the guest OS uses its own normal drivers, which think they're talking to real hardware.