PyCharm: Excluding a single file from autocomplete

So, at my job we have grunt.js running to compile all of our JS into a single file. This is a great little feature of grunt.js (with grunt:requirejs/growl), BUT it's causing a problem: PyCharm will frequently freeze for 3 - 10 seconds.
If I disable grunt, the freezing doesn't happen (since there is no 65 KLOC JS file). I've narrowed it down to the combined files being parsed for autocomplete. How would I remove a single file? I could potentially create a folder for the combined file, but I really do not want to...
Edit: Not "remove" a single file, but "exclude" a single file (sorry, brain.speed > finger.speed)

So the answer, the obvious one, is that the file system, with grunt, needs to have the automagically combined files in their own directory. Then that directory can be excluded (right-click the directory and use "Mark Directory as Excluded", toward the bottom of the menu).
Since I did that, my PyCharm has yet to freeze.

Related

Using PowerShell to only read the last month of folders, then copy in 7 days' worth of folders using robocopy

Apologies to start, as I'm new to PowerShell and robocopy.
I have a robocopy command that pulls in any files within its many subfolders that are within a maxage of 7. However, the main folder has a huge number of folders dating back years (and I only need the last 7 days each week it runs), so it's slow reading each file in each folder before it even copies.
It looks like PowerShell may be a way for me to limit the search for my robocopy; would this be possible? Currently robocopy searches each file in each folder in my main folder; ideally I would want it to be smart enough to only search, say, a month's worth of folders and then copy over the last 7 days. This would speed up the run time hugely.
To go even further if possible: I only want the CSV files in each of the folders in my main folder, but the current robocopy searches the other folders and their files as well, which takes time. All the CSV files are in a folder called "run" in each parent folder (each parent folder is a unique number within the main folder).
My robocopy command (a rough sketch of what I'm imagining follows it):
robocopy \\server\mainfolder \\server\new_main_folder /S /maxage:7 /r:0 /w:0
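Something like this is what I have in mind; an untested sketch, and I gather a parent folder's LastWriteTime only changes when its direct contents change, so the one-month pre-filter is a heuristic:

# Untested sketch: pre-filter parent folders to roughly the last month,
# then let robocopy copy only the last 7 days of CSVs from each "run" subfolder.
$cutoff = (Get-Date).AddDays(-30)
Get-ChildItem '\\server\mainfolder' -Directory |
    Where-Object { $_.LastWriteTime -ge $cutoff } |
    ForEach-Object {
        $src = Join-Path $_.FullName 'run'
        $dst = Join-Path '\\server\new_main_folder' ($_.Name + '\run')
        if (Test-Path $src) {
            robocopy $src $dst *.csv /S /maxage:7 /r:0 /w:0
        }
    }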
I was going to point you to either FastCopy or FreeFileSync; both handle long file name paths and work well for me. But I found problems running FastCopy when trying to filter folders the way you described (I wasn't getting the results I expected), so that leaves FreeFileSync. There is a little bit of a learning curve with FreeFileSync, but really, the only problem/complaint I've had with it is that the XML-based batch script you can use to automate the program kept changing formats, and they haven't been providing a way to read the old XML batch scripts with the new version of the software. Maybe that has changed; I haven't looked into it lately.
Maybe other people have had better experiences with RoboCopy, but I found it to take literally many multiples longer to do the same job as many other copy programs. I don't think FreeFileSync is as fast as FastCopy, but I've never seen it behave as badly as what I experienced with RoboCopy.
The way FreeFileSync works is:
You define 1 or more source/destination pairs.
There is a global setting at the top to set the defaults for all copy pairs.
There are individual settings for each copy pair that, when set, override the global settings.
In the Filter tab of the settings you can set "Time span:" to "Last x days:" and set it to the 7 days that you want.
You can change the include filter from * to something like \run\*.csv. I didn't try that exact pattern, but the patterns I did try worked as expected (unlike FastCopy).
The Synchronization tab is the tricky/fun one. You can do logs, versioning, tell the system to shut down or restart when done, maintain a database for tracking moved files (the "Detect moved files" checkbox), and make all kinds of adjustments to how it behaves when files don't match.
When done, there are, I believe, at least 2 options for saving the configuration, though I've always just created the XML-based batch script and called that from another scripting language or an icon on the desktop.
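Running that saved batch job is just a matter of passing the .ffs_batch file to the FreeFileSync executable; the paths in this example are hypothetical:

"C:\Program Files\FreeFileSync\FreeFileSync.exe" "C:\scripts\weekly-csv-copy.ffs_batch"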

How to automatically delete Dymola's build files after simulation?

Every time I simulate in Dymola, a number of (for me) useless files are created in the working directory, e.g. dsfinal.txt, dsin.txt, dslog.txt, dsmodel.c, dymosim.exe. I find it annoying, as it messes up my directory.
Is there a way to select only the desired output files to be kept after the simulations, without having to manually delete the undesired ones?
Those are temporary, but necessary, files for Dymola. As far as I know there is no option to delete them automatically. Of course you could script that (see the sketch below), but I don't see a real point to it, and those files are used by some functionality - e.g. dsfinal.txt is used when a simulation is continued.
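If you do script it, a minimal sketch could look like this (PowerShell, run from the working directory; it assumes the file names listed above, and remember that deleting dsfinal.txt means you can no longer continue the previous simulation):

# Minimal cleanup sketch for the Dymola working directory.
# dsfinal.txt is needed to continue a previous simulation, so only
# delete it if you are sure you will not need that.
Remove-Item dsfinal.txt, dsin.txt, dslog.txt, dsmodel.c, dymosim.exe -ErrorAction SilentlyContinue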
Some notes: Those files are created in the working directory, which should contain temporary files only. The working directory can be set via the GUI using File -> Options -> Settings:
A rather common problem is that there are both an Open and a Load function in Dymola:
As the description states, Load does not influence the working directory, whereas Open sets it to the directory from which a file is opened. The latter is also true when opening files e.g. via a double-click from the explorer. So usually it is better to go with Load.
My advice would be to keep the directories in which models/packages are stored separate from the working directory. This way the working directory's content can be fully deleted basically anytime...

Can I have VS Code skip opening previous workspaces one time only?

I use many VS Code workspaces throughout the day. Most of them are backed by directories on NFS-mounted drives, which are only mounted while I'm VPN'd in to my employer's network. Opening VS Code while not VPN'd in will cause all of my windows to close, leaving me with blank/empty workspaces, and then I have to set them all back up again in the morning. It only takes a few minutes to do, but I'm lazy and it's not neat; I like things neat. I know that I can start VS Code without any workspaces using the -n option, which is great, but then the next time I start up the editor for real (i.e. for work purposes), all of my workspaces need to be reopened again (see previous statement re: I'm lazy and I like things neat).
Is there a way to indicate that I want to start VS Code without any project just this one time, and then the next time I start I want all of my old workspaces to reopen as normal? Alternately, does anyone know where the state information is stored and how to edit it? I have no qualms about saving it off and then restoring it after I'm done.
Absent any miracle solution, I've at least found the correct file to manipulate: the storage.json file, which on MacOS is found at:
~/Library/Application Support/Code/storage.json
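VS Code appears to rewrite this file on exit, so make your backup copy (and any edits) while the app isn't running; the backup path here is just the one used in the examples below:

cp "$HOME/Library/Application Support/Code/storage.json" "$HOME/backups/online-storage.json"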
I wrote a Perl script to do the manipulation. When I want to go "offline", it reads in the JSON file, loops through the opened windows, identifies the ones I don't want, removes them using jq, and then launches VS Code. When I'm ready to go back "online", it reads a backup of the original file looking for the windows I previously removed, adds them back in (also using jq), and then launches VS Code.
The Perl script is a bit too rough around the edges to be posted publicly, but people might find the jq commands helpful. To delete, you want to identify the windows to be removed as (zero-based) indexes in the array, and then delete them with the following:
jq '. | del(.windowsState.openedWindows[1,2,5])' '/Users/me/backups/online-storage.json' >'/Users/me/Library/Application Support/Code/storage.json'
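To find those indexes in the first place, a to_entries query can list each window alongside its array position; the exact fields inside each entry vary by VS Code version, so treat this as a sketch:

jq '.windowsState.openedWindows | to_entries[] | {index: .key, id: .value.workspaceIdentifier}' '/Users/me/Library/Application Support/Code/storage.json'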
If you want to add them back in at some point, you extract the full JSON bits from the backup file, and then use the following command to append them to the back of the array:
jq '.windowsState.openedWindows += [{"backupPath":"...",...,"workspaceIdentifier": {...}}, {"backupPath":"...",...,"workspaceIdentifier": {...}}, {"backupPath":"...",...,"workspaceIdentifier": {...}}]' '/Users/me/backups/offline-storage.json' >'/Users/me/Library/Application Support/Code/storage.json'
The inserted JSON is elided for clarity; you'll want to include the full JSON strings, of course. I don't know what significance the ordering has, so pulling them out of the middle of the array and appending them to the end of the array will likely have some consequence; it's not significant for my purposes, but YMMV.

ClearCase, a makefile use case

I have an issue with the clearmake command in IBM ClearCase.
I use the clearmake command to run my own makefile so I can build my program from the C source code.
I want to put a command in the makefile, like shell cleartool -some-command, to ignore all checkouts and all private files.
The disadvantage is that in the config spec I must include the rule element * CHECKEDOUT.
But in my use case I want to work on files and at the same time be able to compile/build with the old files, so I can work faster and don't have to change views or edit config specs.
What I'm wondering is whether I can ignore the checked-out files with a command, without losing them.
Could you give me a solution?
I want to work on files and at the same time be able to compile/build with the old files
It would be easier to use two different snapshot views loaded on the disk at two different places.
In one (where no checkout has ever been done), you can set all files writable (through Windows, not ClearCase): all the files become hijacked, but modifiable, which works for compilation/testing purposes.
In the other view, you keep your checked-out files and your work in progress (but do not run your clearmake there).
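A minimal sketch of setting up that second, clean build view (the view tag, paths, and config spec file are hypothetical; the point is that its config spec omits element * CHECKEDOUT):

cleartool mkview -snapshot -tag build_view -stgloc -auto C:\views\build_view
cd C:\views\build_view
rem build.cs is a config spec without the "element * CHECKEDOUT" rule
cleartool setcs C:\views\build.cs
cleartool update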

CVS keeps adding code at the end of the file I want to commit

I have trouble with 4 files in my CVS project. Each time I commit one of those files, CVS keeps adding the same line of code at the end of it. This line of code is a repeated line from the current file (but not its last line).
I've tried several things: update, deleting lines and committing, deleting all lines and committing, adding lines and committing, adding a header and committing. But I always get the same line of code added to the end of my file. I could delete the files and recreate them, but I would lose all my history data.
I find it awkward that CVS modifies my file when I commit. Isn't it counterproductive, as it may add errors to compliant code?
I should add that my file is a .strings (text file, Unicode). I'm working on a branch, but recently merged it into the trunk.
More Details:
I'm using TortoiseSVN on a virtual Windows machine, which has access to my Documents folder on Mac OS X via a network drive between the two.
It turns out that my colleague, who has the same project but in a real Windows folder, could commit without any problem.
And now that he has done that, the problem is solved for me too.
But I have no idea what happened. My only clue would be a hidden character in Mac OS X that breaks TortoiseSVN. Is that possible?
I haven't experienced this issue with CVS, but note that you mention the file you are editing is Unicode text (you don't say whether this means UTF-8 or UTF-16, but either can cause issues).
Depending on how your CVS server was built, and how (and on what platform) it is being run, it is quite possible that the server is not Unicode-aware. This can cause a whole range of issues, including expanding RCS-style $ tags in places where the second (or a later) byte of a Unicode character is equal to ASCII '$'.
The workaround for this is to mark Unicode source files as binary objects. From the command line, this can be done using
cvs add -kb file-name
when adding a new file, or
cvs admin -kb file-name
for an existing file (replace file-name with the name of your file).
In the latter case, I'd recommend removing the (local copy of the) file and running 'cvs update' to get it back after changing the type.
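Putting the existing-file case together, the sequence is something like this (file-name is a placeholder, as above):

cvs admin -kb file-name
rm file-name
cvs update file-name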
Note that doing this is unlikely to help with changes you're already seeing in the file, so make sure to check the file and fix any existing problems after making this change.