I'm invoking an application on a remote machine using PsExec (called from a C# test project). The invoked application has a GUI and also listens to a TCP port. At certain times I need to send it a command (using TCP) that takes a screenshot of the application and returns the screenshot back to my test.
The issue is that, for this to work properly, the application's window should be in front of other windows, but for some reason PsExec always opens it behind all other application windows.
Note that I do use the /i switch and I can see the window if I bring it to the front manually (e.g. using Alt+Tab), but I can't make PsExec open it in front of other applications.
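A workaround I could script on the remote side right after PsExec launches the app would look roughly like this (a sketch in PowerShell; the 'MyApp' window-title filter is a placeholder for the real application):

    # P/Invoke the Win32 calls needed to restore and foreground a window.
    $sig = '[DllImport("user32.dll")] public static extern bool ShowWindow(IntPtr hWnd, int nCmdShow); [DllImport("user32.dll")] public static extern bool SetForegroundWindow(IntPtr hWnd);'
    Add-Type -MemberDefinition $sig -Name Win32 -Namespace Native

    # Find the app by window title and pull it to the front.
    $proc = Get-Process | Where-Object { $_.MainWindowTitle -like '*MyApp*' } | Select-Object -First 1
    if ($proc) {
        [void][Native.Win32]::ShowWindow($proc.MainWindowHandle, 9)   # 9 = SW_RESTORE
        [void][Native.Win32]::SetForegroundWindow($proc.MainWindowHandle)
    }

But I'd rather have the window come up in front in the first place.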
I have the following scenario:
I set up a Docker container with access to the X11 socket; essentially I did this: https://stackoverflow.com/a/25334301
Then I installed Firefox within the container and started it using the "firefox" command in bash.
What I noticed: If Firefox was already running on my host machine when I started it in the container, it essentially "escaped" the container as it just opened a new window of the host instance of Firefox. It therefore had access to everything on the host machine and the container became useless.
This also works vice versa: If Firefox is not running on the host and I start an instance in the container, it is really running inside the container. If I then start Firefox on the host, the new instance is also running inside the container.
However, I couldn't reproduce this behavior with gvim instead of Firefox.
I am well aware of the security problems inherent with X11 socket sharing, but I cannot explain the scenario I described above. Why can a container start a "process"---or rather a window---outside of its restricted environment? And how is it even possible that my host system starts a process within a container only because the same program is already running inside a container?
(Please note that I didn't know how to call such a graphical instance of a program other than "process", although it's probably not a real process in this case...)
System: Ubuntu GNOME 14.10, Docker 1.5, ubuntu:latest Docker image.
UPDATE: This doesn't happen if I start Firefox using the -new-instance flag, so it seems to be more of a Firefox problem than an X11 socket problem.
UPDATE 2: Seems that this happens in other scenarios as well, for example using ssh with X-forwarding:
https://unix.stackexchange.com/questions/104476/why-starting-firefox-from-command-line-in-vm-starts-the-firefox-in-the-host-ma
and
https://superuser.com/questions/462055/launching-firefox-on-remote-server-causes-local-firefox-to-open-the-page-instead
Now the question is, how the hell does Firefox do this? What kind of X11 sorcery do they use to find out if Firefox is already running?
Because you forward the X11 socket into the container, any graphical program, whether inside the container or outside it, will be talking to the same Xorg server. This is the same as when using ssh with X-forwarding.
Now let's say that one firefox instance is already started and communicating with that X server. If we are the second firefox process starting up, we might find that first process by navigating the window tree from the root. We might be able to identify a window belonging to firefox through some properties that it sets on its windows. Once we find a window belonging to firefox, we might send a message to the process owning that window, asking it to add a new tab.
Perhaps if we find such a process and ask it to open a new tab, we just die off as our job is done.
Of course, we could always just look at the source and find out that indeed firefox does basically this. In particular they:
find an existing window
and then notify it
But they don't notify it with a client message. They do it by changing a window property. Presumably the process that creates the window also subscribes to property change notifications. In case you're curious the full path through the code is:
from parsing the command line, StartRemoteClient
which creates a client (note that they do this over D-Bus/Wayland as well) and then calls SendCommandLine()
which is a virtual function, so find its override XRemoteClient
and in there you can see where it calls the two functions linked above, FindBestWindow() and then DoSendCommandLine().
I've got a batch file dmx2vlc which will play a random video file through VLC-Player when called.
It works well locally, but I need this to happen on another machine on the network (it will be ad hoc) and the result (VLC-Player playing the video) must be visible on the remote screen.
I've tried SSH, PowerShell and PsExec, but all of them seem to run the batch file and the player in the session of the command line, even when applying a patch to allow multiple logins.
So even if I do get the batch file to run, it is never visible on screen.
Using TeamViewer and the like is not an option, as I need to be able to call all this programmatically from my dmx program.
I'm not bound to calling the batch file directly; it would be sufficient if I could somehow trigger it to run.
Sadly latency is a problem here as we are talking about a lighting (thus dmx) environment.
Any hints would be greatly appreciated!
You can use PsExec with the interactive parameter if the remote system is XP, provided you specify the session to interact with; session 0 would probably be the console (the person physically in front of the machine).
This has issues on Windows Vista and newer, as it first pops up a prompt asking the user to change their display mode.
From memory, you could create a scheduled task on the remote system pretty easily though, and as long as it's interactive, the logged-on user should see it.
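As a rough sketch of that scheduled-task idea (machine name, account, password and batch path are placeholders; someone has to be logged on at the remote console for /IT to show anything):

    # Create the interactive task once.
    schtasks /Create /S REMOTEPC /U REMOTEPC\dmxuser /P pass /TN "dmx2vlc" /TR "C:\scripts\dmx2vlc.bat" /SC ONCE /ST 00:00 /RU REMOTEPC\dmxuser /RP pass /IT /F

    # Trigger it whenever the dmx program needs a video played.
    schtasks /Run /S REMOTEPC /U REMOTEPC\dmxuser /P pass /TN "dmx2vlc"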
Good luck.
Try using the web interface. It is rather easy: VLC runs an HTTP server, and accessing particular URLs from the remote machine gives you full control over VLC. Documentation can be found here.
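For illustration, a rough PowerShell sketch of driving that interface, assuming VLC on the remote machine was started with the HTTP interface enabled (host, port, password and file path are placeholders):

    # Newer VLC versions require a password for the HTTP interface, e.g. start VLC with:
    #   vlc --extraintf http --http-port 8080 --http-password secret
    $auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(':secret'))
    $file = [uri]::EscapeDataString('file:///C:/videos/clip.mp4')
    # Queue the file and start playback via the requests API.
    Invoke-RestMethod -Headers @{ Authorization = "Basic $auth" } -Uri "http://REMOTEPC:8080/requests/status.xml?command=in_play&input=$file"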
I have a command line program that listens on a TCP port until the user types Q to exit. It works fine in a local PowerShell window. But when I try to run it on another machine using a PowerShell remote session, it just starts and quits. Is there a way to keep it running?
The remote script runs in a PowerShell instance that never becomes visible, so AFAICT it doesn't even get a console handle with which to read keyboard input.
You can take a look at the SysInternals utility - psexec. From my testing, that utility works for what you are trying to do.
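For example, something along these lines (REMOTEPC, the credentials and the program path are placeholders; adjust the session id as needed):

    # -i <session> runs the program in that interactive session; -d returns without waiting for it to exit.
    psexec \\REMOTEPC -u REMOTEPC\user -p pass -i 1 -d C:\tools\listener.exe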
Ensure you have PowerShell 3 or higher, since it adds support for disconnected sessions/background jobs.
Use a Remote Disconnected Session, described on Technet
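A minimal sketch of that approach (REMOTEPC, the session name and the program path are placeholders):

    # Start the listener in a session that survives the local console being closed.
    Invoke-Command -ComputerName REMOTEPC -InDisconnectedSession -SessionName TcpListener -ScriptBlock { & 'C:\tools\listener.exe' }

    # Later, reattach from any machine to check on it or shut it down.
    Get-PSSession -ComputerName REMOTEPC -Name TcpListener | Connect-PSSession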
With TestComplete 8 we have a script that is scheduled to start at 06:00 every morning with this command line:
"C:\Program Files\Automated QA\TestComplete 8\Bin\TestComplete.exe" "C:\Attracs\TestComplete\Attracs\AttracsTEST\AttracsTESTProject.mds" /r /e /SilentMode
The problem is that this often fails. The log remark says:
An error occurred while calling the "Keys" method or property of the "TcxCustomInnerTextEdit" object.
The object or one of its parent objects does not exist.
If I connect to the computer with Remote Desktop and manually run the script it works fine.
There is no screensaver active and the power scheme is set to never sleep.
I have noticed that TestComplete needs a handle to the GUI (the screen must be visible) or the script gets this kind of error. Could it be that when it starts, it has no handle to the GUI components because they aren't visible?
From the help topic Running Tests via Remote Desktop:
However, if you minimize the Remote Desktop window (the window that displays the remote computer's desktop), the operating system switches the remote session to GUI-less mode and does not display windows and controls. As a result, TestComplete (or TestExecute) is unable to interact with the tested application's GUI, as the GUI does not actually exist in this case, and your automated GUI test fails.
To avoid this issue, you can keep the Remote Desktop window visible during the test run, but this may be inconvenient as it occupies part of or even your entire screen and leaves less space for you to run your local applications.
Any solution for this?
There is a way to enable the console connection in Windows to be active at all times, which allows TestComplete to work without actually connecting with RDP.
From: Running Tests in Minimized Remote Desktop Windows
1. Log in to the computer from which you connect to remote computers.
2. Close all open Remote Desktop sessions.
3. Launch the Registry editor (Regedit.exe).
4. If you have a 32-bit operating system, locate the HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client registry key if you want to change the connection settings for the current user only, or the HKEY_LOCAL_MACHINE\Software\Microsoft\Terminal Server Client key if you want to change them for all users. Create a new DWORD value in this key, name it RemoteDesktop_SuppressWhenMinimized, and specify 2 as the value data.
5. If you have a 64-bit operating system, locate the HKEY_CURRENT_USER\Software\Wow6432Node\Microsoft\Terminal Server Client registry key if you want to change the connection settings for the current user only, or the HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Terminal Server Client key if you want to change them for all users. Add the RemoteDesktop_SuppressWhenMinimized value to the key.
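If you'd rather script the change, here is a PowerShell sketch of the current-user tweak on a 64-bit OS (use the other key paths from the steps above for the 32-bit or all-users cases):

    $key = 'HKCU:\Software\Wow6432Node\Microsoft\Terminal Server Client'
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    New-ItemProperty -Path $key -Name 'RemoteDesktop_SuppressWhenMinimized' -PropertyType DWord -Value 2 -Force | Out-Null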
I found this page
http://www.automatedqa.com/support/viewarticle/12567/viewarticle.aspx?aid=12567
It seems that a solution could be running TestComplete in a virtual machine.
/Roland
To run any UI test, the UI needs to be available. Hence, the machine should be unlocked so that TestComplete can perform user actions like mouse clicks, key presses, etc.
However, if you have a non-UI test, such as running web service tests, it will work.