Execute a task a certain amount of time after PC start - scheduled-tasks

For a museum installation I need to find a way (I guess with Task Scheduler) to handle this scenario:
1 - Museum staff start the PC (Windows 10).
2 - "Resolume" (a piece of software) starts when the PC boots (I have already achieved this by putting it in the list of applications that run at startup).
3 - Then, after a certain amount of time (because the Resolume project needs time to initialize), execute a macro/keyboard input that I have configured as a "starting key" in Resolume.
So the staff doesn't have to do anything except start the PC every morning :)
Step 3 is where I struggle: I can't find a way to run a task x amount of time after the PC starts. And in this case I guess the task would be launching a script that performs the keyboard input (if you have an answer for that as well, I'll take it, thanks).
Forgive my grammar errors, I'm French and I wrote this quickly; the museum opens tomorrow (I received the project this morning).
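In case it helps, here is a minimal sketch of the kind of helper script described in step 3, written in Python with the pyautogui package; the 120-second delay and the F5 key are placeholders for whatever the real setup uses:

# wait_and_press.py - hypothetical helper: wait for the Resolume project to
# finish initializing, then send the key configured as the "starting key".
import time
import pyautogui

time.sleep(120)          # adjust to however long the project takes to load
pyautogui.press('f5')    # replace 'f5' with the actual start key

Task Scheduler can also provide the delay itself: create a task with the "At startup" (or "At log on") trigger and set "Delay task for" in the trigger settings to the desired interval, so the script only has to send the key.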

Related

Close duplicate 32 or 64 bit application

I've released a small application to a couple of my friends. Some of them needed a 32-bit version. On a 64-bit OS you can run both the 64-bit and the 32-bit application at the same time, which would create a duplicate.
The question that comes to my mind is, can I prevent the same application from running twice?
--- Solution approach ---
I have tried working with WinExist, adding an if-statement at the very beginning that checks whether ahk_exe MyApplicationName.exe exists. However, this approach fails whenever the user changes the file name.
I have also tried creating a .txt file inside the Temp folder, leaving behind the current application's unique ID so I could close the duplicate. This does not seem sufficient either, as this method allows the user to alter the ID and bypass it.
--- Final words ---
Any other ideas on how one could prevent the user from running both versions at the same time?
Try adding this to the auto-execute section of the 32-bit version:
If (A_Is64bitOS)   ; on a 64-bit OS, the 64-bit build should be running instead
    ExitApp
Since the 64-bit build cannot run on a 32-bit OS in the first place, this ensures only one of the two builds can ever run on a given machine.
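For comparison, here is the same check as a rough Python sketch; the environment-variable test is an assumption about one way to detect a 64-bit Windows OS even from a 32-bit process:

import os
import sys

# Rough analogue of the AutoHotkey check above: on a 64-bit OS this (32-bit)
# build exits immediately, so only the 64-bit build keeps running.
is_64bit_os = (os.environ.get('PROCESSOR_ARCHITECTURE', '').endswith('64')
               or 'PROCESSOR_ARCHITEW6432' in os.environ)
if is_64bit_os:
    sys.exit(0)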

Is WinDbg supposed to be so excruciatingly slow?

I'm trying to analyze some mini crash dumps. I'm using Windows 10 Pro Build 1607 and WinDbg 10.0.14321.1024. I have my symbol file path set to
SRV*C:\SymCache*https://msdl.microsoft.com/download/symbols
Basically, whenever I load a minidump (all < 1 MB .dmp files), it takes WinDbg forever to actually analyze it. I understand the first run can take a long time, but mine took almost 12 hours before it would let me enter a command. I assumed that, since the symbols were then cached, it wouldn't take long at all to re-open the same .dmp. That is not the case. It loads up, goes almost instantly to "Loading Kernel Symbols", then takes another 30 minutes before it prints the "BugCheck" line. It's been another 30 minutes since then, and I still can't enter any commands.
My PC has a 512 GB SSD, 8 GB of RAM, and an i5-4590. I don't think it should be this slow.
What am I doing wrong?
This kind of complaint seems to occur more often lately, and I can reproduce it on my PC. This is not your fault but some issue with the Internet or the symbol server on Microsoft's side.
Monitoring the traffic with Wireshark and watching how the symbol cache on my disk gets populated, I can say:
Only one file is downloaded at a time.
The problem also occurs with older WinDbg versions (6.2.9200).
The problem occurs with both HTTP and HTTPS.
When symbols are found, the transfer starts very slowly and then speeds up. The effective transfer rate is down at 11 kb/s to 20 kb/s (on a line which can handle 6500 kb/s).
There is quite a high number of out-of-order packets, duplicate packets, etc., especially during the "lookup phase" where no file is downloaded yet. Such a lookup phase can easily take 8 minutes.
The "lookup phase" is performed even if the file already exists on disk.
The HTTP round-trip time (request to response) is 8 to 9 seconds.
This is the symbol server being really slow. Others have noticed it as well: https://twitter.com/BruceDawson0xB/status/772586358556667904
Your symbol path contains a local cache, so it should load faster the next time around, but it seems the cache is not effective; I can't really tell why (I suspect the downloaded symbols are not a perfect match and are therefore downloaded again every time).
I would recommend changing _NT_SYMBOL_PATH (or however your sympath is initialized) to SRV*C:\SymCache only, i.e. do not attempt to download automatically and just use the symbols you already have cached locally. The dump should then open fairly fast. Only enable the symbol server again if you discover missing symbols.
I ran into the same problem (extremely slow WinDbg), but loading/reloading/fixing/caching symbols did not help. By accident, I figured out that the slowness also occurs when I try to print memory at an address taken from a register, like
db rax
The rule of thumb is to always prefix the register name with @:
db @rax
Without this prefix, the debugger treats rax as a symbol name, spends time looking it up (how long depends on the number of symbols you have loaded), eventually fails to find it, and only then falls back to treating it as a register name. Printing memory from a register with the @ prefix works instantly, even with gigabytes of symbols loaded in memory. As you can see, this problem is also symbol-related, but in a different way.

MATLAB - save command window contents (but process already running)?

I have an optimization problem that has been running for hours. I was hoping it would be over by now, but it's not. I'm at a lab PC and I don't want someone to switch off the simulation when I leave. Of course I could put up a sign. Is there any other way to tell MATLAB to save the output of the currently running command to a file, or to switch diary on while a process is already running?

How to detect if cronned script is stuck

I have a few Perl scripts on a Solaris SunOS system which basically connect to other nodes on the network and fetch/process command logs. They run correctly 99% of the time when run manually, but sometimes they get stuck; in that case, I simply interrupt the script and run it again.
Now I intend to run them from cron, and I would like to know if there is a way to detect whether a script has got stuck in the middle of execution (for whatever reason), and preferably have it exit as soon as that happens, in order to release any system resources it may be occupying.
Any help much appreciated.
TMTOWTDI, but one possibility:
At the start of your script, write the process id to a temporary file.
At the end of the script, remove the temporary file.
In another script, see if there are any of these temporary files more than a few minutes/hours old.
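A minimal sketch of that check in Python (a Perl version would have the same structure); the marker-file path, the age threshold, and the decision to terminate the stuck run are assumptions:

# Companion script, run from cron: the main script writes its PID to a marker
# file at startup and removes the file on clean exit, so a marker that is
# older than expected indicates a stuck run.
import os
import signal
import time

PIDFILE = '/tmp/fetch_logs.pid'   # hypothetical marker written by the main script
MAX_AGE = 2 * 60 * 60             # two hours; adjust to the job's normal runtime

if os.path.exists(PIDFILE):
    age = time.time() - os.path.getmtime(PIDFILE)
    if age > MAX_AGE:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        print("PID %d appears stuck (marker is %d seconds old)" % (pid, age))
        os.kill(pid, signal.SIGTERM)   # optional: terminate it to free resources
        os.remove(PIDFILE)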

Why doesn't "coverage.py run -a" always increase my code coverage percent?

I have a GUI application for which I am trying to determine what is being used and what isn't. I have a number of test suites that have to be run manually to test the user-interface portions. Sometimes I run the same file a couple of times with "coverage.py run file_name -a" and perform different actions each time to exercise different interface tools. I would expect that each run with the -a argument could only increase the covered line count reported by coverage.py (at least unless new files are pulled in). However, it sometimes reports lower code coverage after an additional run - what could be causing this?
I am not editing source between runs and no new files are being pulled in as far as I can tell. I am using coverage.py version 3.5.1.
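For reference, the append workflow described above looks roughly like this on the command line (the script name is a placeholder):

coverage run -a my_gui_app.py    # first manual pass through the UI
coverage run -a my_gui_app.py    # second pass, exercising different tools
coverage report -m               # combined report over both passes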
That sounds odd indeed. If you can provide source code and a list of steps to reproduce the problem, I'd like to take a look at it. You can create a ticket for it here: https://bitbucket.org/ned/coveragepy/issues