I'm opening this one as it's related to another thread I've opened, but it's not the same problem.
I currently have two scripts that monitor a folder (a SharePoint site folder). That folder holds a PowerPoint presentation which runs on a remote laptop. The goal is to be able to change the presentation without having to go to the laptop itself (as there will eventually be a large number of them in remote locations).
So what I have so far is script 1, which monitors when a file is dropped into the folder and then shuts down the presentation currently running.
Another script monitors whether a .pptx file in the folder is renamed (the instructions will be: 1 - drop your file, 2 - delete Slide.pptx, 3 - rename your new file to Slide.pptx). It then kicks off PowerPoint with the presentation file.
But I don't know how to have both scripts running at the same time, or even to have script 1 call script 2 and, once script 2 has run, have it call script 1 again.
Any ideas ?
Thanks
Don't reinvent the wheel. You can use PowershellGuard. The original Ruby Guard is also what I have used for this purpose (it is far more mature).
Guard allows you to run multiple scripts, each mapped to a file specification.
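If you would rather stay with plain PowerShell instead of adding a tool, both jobs can also live in one script by registering the Created and Renamed events on a single FileSystemWatcher. A minimal sketch, where the watch path and the Slide.pptx convention are assumptions taken from the question:

```powershell
# Minimal sketch (assumed paths): one FileSystemWatcher, two event handlers.
$watchPath = 'C:\SharePointSync\Presentations'   # assumed local path of the synced folder

$watcher = New-Object System.IO.FileSystemWatcher $watchPath, '*.pptx'
$watcher.EnableRaisingEvents = $true

# Script 1's job: when any .pptx is dropped, close the running PowerPoint instance.
Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
    Get-Process POWERPNT -ErrorAction SilentlyContinue | Stop-Process -Force
} | Out-Null

# Script 2's job: when a file is renamed to Slide.pptx, open it again in PowerPoint.
Register-ObjectEvent -InputObject $watcher -EventName Renamed -Action {
    if ($Event.SourceEventArgs.Name -eq 'Slide.pptx') {
        Start-Process $Event.SourceEventArgs.FullPath
    }
} | Out-Null

# Keep the script alive so both handlers keep firing.
while ($true) { Start-Sleep -Seconds 5 }
```

Run it in one console (or as a scheduled task) and both handlers fire independently, so neither script needs to call the other.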
I am running VS Code on a Windows machine, with Ubuntu on WSL 2 mounting a remote drive from a Linux server using FUSE. This allows me to edit comfortably in VS Code while I run the documents on the server, and it generally works great. However, if I am editing and my computer briefly loses its Internet connection, the FUSE mount is lost. If I am in the middle of editing and don't notice, then when I save, VS Code sees nothing in the directory, creates a bunch of directories, and saves the file locally, which is not what I want.
For example, I might be editing a file under the folder mount, which is the remote mount point. I work on the file mount/somedir/somedir2/someFile.txt. If the Internet connection drops, the remote filesystem is unmounted. If I hit Ctrl-S, VS Code sees only an empty folder called mount. It then creates a somedir directory, then a somedir2 directory, then a someFile.txt file, and saves it there. It is often some time before I catch the problem, and while it is resolvable, I end up with multiple versions of the same file (one on my computer and one on the server); reconciling the two is a pain and, if I do it wrong, can end with me losing work and data (which has happened).
Is there a way to tell VS Code to give an error message when attempting to save a file to a suddenly nonexistent directory, rather than creating it automatically for me? That would make my life much easier.
I'm running into some issues with my existing PowerShell script that copies data from a remote location into a local folder; that local folder also happens to be synced with Google Drive for desktop.
I'm seeing incomplete files being uploaded, etc. To combat this, I think it would be easier/better to change where the initial remote-to-local copy puts its files: instead of copying directly into the sync folder, copy into a temp/staging location that is NOT the sync folder.
Once that copy is complete, use the PowerShell Move-Item cmdlet to simply 'move' the files, which just updates the file locators to point at the sync folder.
I think this will solve my issue.
Anyone see any problems with this approach?
If you have ruled out device connectivity, multiple files being uploaded at once versus a single file, and the mobile app or web browser, then there is nothing wrong with your approach. If you need any more assistance, please reply to this thread or mark this as the answer.
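For reference, a minimal sketch of the staging-then-move approach described above; all paths are assumptions, and the move is only a cheap rename (rather than a second full copy) when the staging folder and the sync folder are on the same volume:

```powershell
# Minimal sketch of the staging-then-move approach; all paths are assumptions.
$remoteSource = '\\RemoteServer\Share\Data'          # remote location being copied from
$staging      = 'C:\StagingArea'                     # NOT inside the Google Drive folder
$syncFolder   = 'C:\Users\Me\Google Drive\Data'      # folder synced by Drive for desktop

# 1. Copy from the remote location into the staging folder first.
Copy-Item -Path (Join-Path $remoteSource '*') -Destination $staging -Recurse -Force

# 2. Only after the copy has finished, move the completed files into the sync folder.
Move-Item -Path (Join-Path $staging '*') -Destination $syncFolder -Force
```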
I'm struggling to find any method that works with current Unity.
This is for a conventional Windows build (not a Universal Windows build via VS).
So, given the separate data, DLL, and other files of a build: how do you create a civilian-usable "single exe" for Windows with current Unity?
As said, afaik this has actually always been the case.
See e.g. Windows standalone Player build binaries for a list of the resulting output of a build. That documentation goes back at least to version 2017.2.
So the short answer is:
It is how it is. You will always get multiple files and the data folder as output.
What you can do, however, is use a packing tool which simply packs all of your folder's content into one single exe file.
One example is Appacker
BUT unfortunately there is one known issue: Windows Defender flags it, and every exe created with it, as malware. The reason for that is actually mentioned by the author in the link:
Spoiler: A self-extracting .exe file? Windows Defender hates that trick!
So with this tool, or any similar one, there is no real way around that, other than you trusting the tool and your users trusting you ^^
(The icon is also only used for the process window, not for the exe file itself ^^)
The long and correct way would probably be to create an actual installer for your final app which is then allowed to extract all the files to a certain location.
So in the end the user will again have an exe and the according data and DLL files, e.g. in the Program Files folder, but will get a registered shortcut in the Start Menu, which is just how any other application on Windows usually works.
Just to add to the answer.
In 2020, if it's a game, you should just use Steam, which makes auto-updates way easier for your users.
https://partner.steamgames.com/doc/gettingstarted
I have a legacy Windows application (no source code) that does something with files in a given directory, say C:\Pickup. The directory path is hard-coded into the application and cannot be changed. If I run multiple instances of this application, the instances will compete for the same files in C:\Pickup, which is not good.
This application does not have a GUI. I launch it from Task Scheduler many times a day, and it runs for anywhere from 1 minute to, say, 20 minutes, depending on the number of files it needs to process in C:\Pickup.
I am wondering if there is an easy-to-use virtualization technology that will allow me to launch instances of this application in some virtual space where each instance gets its own C:\Pickup folder?
EDIT 1: I am thinking of a solution like the one IE uses for plug-ins (ActiveX controls) that run inside of it. Somehow, when a plug-in accesses the file system, it gets its own view of the file system. Does anyone know how IE does this?
You can just spin up a series of VMs with something like VirtualBox. Create a share, mount it as D:\ on all of the VMs, then run a batch script to copy the files from your share to C:\Pickup.
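A minimal sketch of the per-VM script described above, written in PowerShell rather than batch; the share layout, pickup folder, and application path are all assumptions:

```powershell
# Sketch of the per-VM script described above, in PowerShell rather than batch.
# The share layout, pickup folder, and application path are assumptions.
$share  = 'D:\Incoming'                 # assumed per-VM subfolder on the mounted share,
                                        # so instances don't grab the same files
$pickup = 'C:\Pickup'                   # folder hard-coded into the legacy application
$app    = 'C:\LegacyApp\LegacyApp.exe'  # hypothetical install path of the application

# Pull this VM's files from the share into the local pickup folder.
Copy-Item -Path (Join-Path $share '*') -Destination $pickup -Force

# Run the legacy application and wait for it to finish processing C:\Pickup.
Start-Process -FilePath $app -Wait
```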
A little background
I'm working on a project that requires me to use a large, old (from 2006) system of MATLAB scripts. The scripts live in an archive folder on a cluster, but I need the system to run entirely from my own cluster folder. I've got it mostly running from my personal folder, but not entirely. It runs perfectly, but a Python script is called somewhere that doesn't exist in my personal directory.
What I want to do
Since the MATLAB code I'm running includes many different script files, which themselves call even more script files, poring through them to find information about the Python script would be very time-consuming.
Therefore, I would like to be able to tell MATLAB not to go to specific folders when calling a script, and instead return an error. For example, if a script is called from the directory /notmyfolder, I want it to return an error.
Is this possible?
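On the "poring through them" point: if the immediate goal is just to locate where the Python script gets called, a recursive text search over the .m files can do that quickly. A minimal sketch in PowerShell, where the script folder and search pattern are assumptions (grep on the cluster would work the same way):

```powershell
# Minimal sketch: find lines in any .m file that appear to invoke Python,
# so the calling script can be identified without reading everything.
$scriptRoot = '/home/me/matlab-system'   # assumed location of the MATLAB scripts

Get-ChildItem -Path $scriptRoot -Filter '*.m' -Recurse |
    Select-String -Pattern 'python|\.py\b' |
    Select-Object Path, LineNumber, Line
```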