I've released a small application to a couple of my friends. Some of them needed a 32-bit version. On a 64-bit OS, however, you can run the 64-bit and 32-bit builds at the same time, which creates a duplicate instance.
The question that comes to my mind is, can I prevent the same application from running twice?
--- Solution approach ---
I have tried working with WinExist, adding an if-statement at the very beginning that checks whether ahk_exe MyApplicationName.exe exists. However, this approach fails whenever the user renames the executable.
I have also tried creating a .txt file inside the Temp folder, leaving behind the current application's unique ID so I could close the duplicate. This does not seem sufficient either, as the user can alter the ID and bypass the check.
--- Final words ---
Any other ideas on how one could prevent the user from running both versions at the same time?
Try adding this to the auto-execute section of the 32-bit version:
If (A_Is64bitOS)
    ExitApp
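If you also want to catch a renamed copy of the same build, a named mutex works regardless of the executable's file name. A minimal AutoHotkey sketch, assuming a mutex name of your choosing ("MyApp_SingleInstance" below is an arbitrary placeholder):
; Hold a named mutex for the lifetime of the script; a second copy
; sees ERROR_ALREADY_EXISTS (183) and exits immediately.
hMutex := DllCall("CreateMutex", "Ptr", 0, "Int", 0, "Str", "MyApp_SingleInstance", "Ptr")
if (A_LastError = 183)  ; 183 = ERROR_ALREADY_EXISTS
    ExitApp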
I use many VS Code workspaces throughout the day. Most of them are backed by directories on NFS-mounted drives, which are only mounted while I'm VPN'd in to my employer's network. Opening VS Code while not VPN'd in will cause all of my windows to close, leaving me with blank/empty workspaces, and then I have to set them all back up again in the morning. It only takes a few minutes to do, but I'm lazy and it's not neat; I like things neat. I know that I can start VS Code without any workspaces using the -n option, which is great, but then the next time I start up the editor for real (i.e. for work purposes), all of my workspaces need to be reopened again (see previous statement re: I'm lazy and I like things neat).
Is there a way to indicate that I want to start VS Code without any project just this one time, and then the next time I start I want all of my old workspaces to reopen as normal? Alternately, does anyone know where the state information is stored and how to edit it? I have no qualms about saving it off and then restoring it after I'm done.
Absent any miracle solution, I've at least found the correct file to manipulate: the storage.json file, which on macOS is found at:
~/Library/Application Support/Code/storage.json
I wrote a Perl script to do the manipulation. When I want to go "offline", it reads in the JSON file, loops through the opened windows, identifies the ones I don't want, removes them using jq, and then launches VS Code. When I'm ready to go back "online", it reads a backup of the original file, finds the windows it previously removed, adds them back in (also using jq), and then launches VS Code.
The Perl script is a bit too rough around the edges to be posted publicly, but people might find the jq commands helpful. To delete, identify the windows to be removed as (zero-based) indexes into the array, then remove them with the following:
jq '. | del(.windowsState.openedWindows[1,2,5])' '/Users/me/backups/online-storage.json' >'/Users/me/Library/Application Support/Code/storage.json'
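To find those indexes in the first place, you can enumerate the array; a sketch, assuming each window entry carries the backupPath field shown in the restore example below:
jq -r '.windowsState.openedWindows | to_entries[] | "\(.key): \(.value.backupPath)"' '/Users/me/backups/online-storage.json'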
If you want to add them back in at some point, you extract the full JSON bits from the backup file, and then use the following command to append them to the back of the array:
jq '.windowsState.openedWindows += [{"backupPath":"...",...,"workspaceIdentifier": {...}}, {"backupPath":"...",...,"workspaceIdentifier": {...}}, {"backupPath":"...",...,"workspaceIdentifier": {...}}]' '/Users/me/backups/offline-storage.json' >'/Users/me/Library/Application Support/Code/storage.json'
The inserted JSON is elided for clarity; you'll want to include the full JSON strings, of course. I don't know what significance the ordering has, so pulling them out of the middle of the array and appending them to the end of the array will likely have some consequence; it's not significant for my purposes, but YMMV.
My problem is as follows. My script downloads files through an external call to cmd (using the system function, and then .NET to send keypresses). The issue is that when it tries to fopen the files it downloaded (file names read from a text file I write during the download), it doesn't find them, causing an error. When I run the script again after seeing it fail, it works, but only up to the point where it tries to download/call new files again, where it hits the same problem.
Are files downloaded while a script is running somehow not visible to the search path? The folder is most definitely on my search path, seeing as everything works for files downloaded outside of a running script. It's not that the files arrive too slowly either, because they appear in my folder almost instantly, and I've tried adding a delay to give MATLAB time to recognize them, but that didn't work either.
I'm not sure if it's important to note that the script calls an external function which tries to read the files from the .txt list I create in the main script.
Any ideas?
The script to download the files looks like so:
NET.addAssembly('System.Windows.Forms');
% Anonymous function that forwards a key sequence to the active window
sendkey = @(strkey) System.Windows.Forms.SendKeys.SendWait(strkey);
system('start cygwinbatch.bat')   % open the batch file in its own window
pause(.1)
sendkey(callStr1)
sendkey('{ENTER}')
pause(.1)
sendkey(callStr2)
sendkey('{ENTER}')
pause(.1)
sendkey('exit')
pause(2)
sendkey('{ENTER}')
But that is not the main reason I am asking: I am confident that the downloads occur when the script requests them, because I see the files appearing in my folder as they are called. I am more confused as to why MATLAB doesn't seem to know they are there while the script is running, so that I have to stop it and run it again before it recognizes the ones already downloaded.
Thank you,
Aaron
The answer here is probably to run the rehash function. MATLAB does not look for new files while executing an operation, and in some environments it misses new files even during interactive use.
Running rehash forces MATLAB to search through its full path and pick up any new files.
I've never tried to run rehash in the middle of an operation though. ...
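Still, a minimal sketch of the idea, with a placeholder file name standing in for the ones written during the download:
% Rescan the path before opening a file created mid-script.
rehash;                                   % force MATLAB to pick up new files
fid = fopen('downloaded_file.dat', 'r');  % placeholder name
if fid == -1
    error('File still not visible after rehash.');
end
data = fread(fid);
fclose(fid);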
My guess is that the MATLAB interpreter is trying to look ahead and is throwing errors based on a snapshot of what the filesystem looked like before the files were downloaded. Do you get different behavior if you run it one line at a time using F9? If that's the case then you may be able to prevent the interpreter from looking ahead by using eval().
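If the look-ahead theory holds, a hedged sketch of that eval() workaround (again with a placeholder file name) would be:
% Build the call as a string so the file name is only resolved at run time.
fname = 'downloaded_file.dat';            % placeholder
fid = eval(['fopen(''' fname ''', ''r'')']);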
Specs: SAS 9.3, GNU emacs 23.4.1 (2012-06-05), and what I think is kind of an old version of ESS (sometime in 2001 maybe?). Unix -- I think SunOS 5.10.
I'm a fairly new emacs user and a very new user of ESS[SAS]. I have noticed that code which works when submitted as an entire file will not work when I submit it region by region. Here is an example:
libname mylib "/.";
data temp; set mylib.temp; run;
Assume that I have a directory as specified in libname and there is a SAS dataset called temp in it -- my code works properly when I submit the whole thing, but if I run just the first line and then the second, it tells me that "mylib" is not defined.
Based on the behavior of the log files -- the second submission creates a log file that overwrites the first -- I think what is going on is that it's starting a new SAS session with each submission. Why might it be doing this? (ESS does not do that with R code -- I can run a snippet to define a variable and then that variable stays defined.) I find I make fewer programming errors if I test things incrementally rather than running all my code in batch mode, and the GUI for SAS in unix leaves something to be desired, so I would really like to find a fix to this.
The only other clue I have is a pair of warnings I get on every submission:
NOTE: Unable to open SASUSER.REGSTRY. WORK.REGSTRY will be opened
instead.
NOTE: All registry changes will be lost at the end of the session.
WARNING: Unable to copy SASUSER registry to WORK registry. Because of
this, you will not see registry customizations during this session.
NOTE: Unable to open SASUSER.PROFILE. WORK.PROFILE will be opened
instead.
NOTE: All profile changes will be lost at the end of the session.
I googled those warnings separately and found they're commonly associated with a corrupt profile, but unfortunately the recommended fix did not fix anything (after deleting the iffy profile, it just restored itself and the error persisted). I am not sure whether this is related or not.
I realize this question might be out of scope for stackoverflow; if it is, a redirect to a more appropriate forum would be much appreciated.
I have a few Perl scripts on a Solaris SunOS system which basically connect to other nodes on the network and fetch/process command logs. They run correctly 99% of the time when run manually, but sometimes they get stuck. In that case, I simply interrupt the script and run it again.
Now, I intend to cron them, and I would like to know if there is a way to detect if the script got stuck in the middle of execution (for whatever reason), and preferably exit as soon as that happens, in order to release any system resources it may be occupying.
Any help much appreciated.
TMTOWTDI, but one possibility:
At the start of your script, write the process id to a temporary file.
At the end of the script, remove the temporary file.
In another script, see if there are any of these temporary files more than a few minutes/hours old.
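A minimal Perl sketch of those three steps (the PID-file path and the one-hour threshold are placeholders):
#!/usr/bin/perl
use strict;
use warnings;

my $pidfile = '/tmp/fetch_logs.pid';   # placeholder path

# In the worker script: record the PID on startup...
open my $fh, '>', $pidfile or die "Cannot write $pidfile: $!";
print $fh $$;
close $fh;

# ... do the real work here ...

unlink $pidfile;   # ...and clean up on normal exit

# In a separate watchdog script run from cron: kill anything
# whose PID file has been sitting around for more than an hour.
if (-e $pidfile && -M $pidfile > 1/24) {   # -M gives file age in days
    open my $in, '<', $pidfile or die "Cannot read $pidfile: $!";
    chomp(my $pid = <$in>);
    close $in;
    kill 'TERM', $pid if $pid =~ /^\d+$/;
    unlink $pidfile;
}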
I have noticed that MATLAB (R2011b on Windows 7, 64 bit) tends to slow down if I am in debugging mode for a long period of time (e.g. 3 hours). I don't recall this happening on previous versions of MATLAB.
The slow down is small, but significant enough to have an impact on my productivity (sometimes MATLAB needs to wait for up to 1 sec before I can type on the command line or on the editor).
I usually spend hours in debug mode (e.g. after stopping at a keyboard statement), coding full projects in this mode. I find working in debug mode convenient for growing my code organically while being able to inspect it at any point of execution.
The odd thing is my machine has 16 GB of RAM and the total size of all workspaces while in debugging mode is usually less than 4 GB. I don't have any other large process running in the background, and my system reports ~8GB of free RAM.
Also, unfortunately, MATLAB does not let me call pack from debug mode; it complains with:
Warning: PACK can only be used from the MATLAB command line.
I have reproduced this behavior after restarting MATLAB, after rebooting my system, and on different days. With this, my questions are:
Has anybody else noticed this? Is there anything I could do to prevent this slowdown without exiting debugging mode?
Are there any technical notes or statements from Mathworks addressing this issue?
In case it matters, my code is on a network drive, so I added the following to my startup.m file, which should alleviate any performance impact from that:
system_dependent('RemoteCWDPolicy', 'None');
system_dependent('RemotePathPolicy', 'None');
system_dependent('DirChangeHandleWarn','Never');
I have experienced some similar issues. The problem ended up being that MathWorks changed how MATLAB caches files. For some users, it now stores data in the TMP folder defined by the environment variables. That folder was being scanned by antivirus software, causing a lot of performance problems. Of course, IT wouldn't let us exclude the TMP folder from scans, so we added a line to our startup script that points the TMP environment variable at a location inside an excluded folder.
You don't have to worry about changing the variable back or messing up other programs. When an application launches, it copies the environment variables into its own local instance; any changes affect only that local copy, not the system copy.
Here is the function you will need.
setenv('TEMP', 'C:\TEMP');
I'm not sure if it was TMP or TEMP. Check your environment variables to be sure.
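For example, you can inspect both and then redirect them from startup.m; a sketch (C:\TEMP is a placeholder for whatever folder your antivirus skips):
getenv('TMP')                 % inspect the current values first
getenv('TEMP')
setenv('TMP',  'C:\TEMP');    % then redirect both for this MATLAB session
setenv('TEMP', 'C:\TEMP');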
I am using MATLAB R2011 on Linux 10 and Windows 7 (32-bit).
I experienced MATLAB slowing down while printing simple variables in the Command Window.
It turned out that there was one .m file open in my Editor.
It was a big file with 10,000 lines of simple data that should have been saved as a .mat file. When I closed this file, the Editor was back to its normal speed.