Visual C++ 2008 keeps losing custom build rule setting - visual-c++-2008

So for various "boring" reasons, I'm stuck using VC++ 2008.
Now, I have a custom build rule that parses a ".h" file and produces a .cpp.
The build rule works fine when I can get the setting in the .vcproj (it appears as a <Tool Name="my rule"/> element as a child of the <FileConfiguration> elements for each <File> element).
In the ".rule" file, the FileExtension attribute I've specified is "*.hxx" and not "*.h" as I don't want the custom rule running on every .h, only the ones I want it to. Changing the extension of the files to run the rule on to something other than .h is not an option for reasons that are beyond my control.
The rule works fine, the generated .cpp gets compiled, the dependencies etc are all working - i.e. VC++ only does the custom step when the .h changes etc.
Manually hacking the xml in the .vcproj gets thing working, the issue is the Visual Studio GUI keeps messing with the tool setting and deleting it from the ".vcproj". I haven't definitively determined exactly under what conditions Visual Studio mucks up the setting, as it's somewhat random, but mostly when any change to the project needs to be saved is my observation.
Sometimes (not always) I can manually change the tool in the properties page in the Visual Studio GUI and it will save it for the active configuration (e.g. "Debug"), but when I try to add it to other configurations (e.g. "Release" or "All Configurations") the GUI gets confused and deletes the tool setting from all configurations instead of adding it to the other configuration.
This also seems to happen if I first change the active configuration: when you then go to set the custom tool, the GUI gets confused and deletes the setting from all configurations.
I've been able to get similar rules working fine when the input file for the custom rule has a unique extension. It seems to be related to the input name matching ".h", which has a default rule, while my ".rules" file doesn't specify a corresponding matching pattern.
Note that this setup with the non-matching FileExtensions pattern is what was recommended on MSDN for VC++ 2008, which is why I did it that way.
Anybody had any similar issues? Any clues at all on what a robust solution and/or workaround might be?
I just need the setting to survive in a context where noobs might be using the VS GUI, so you can't trust them not to do "certain things".

OK, so after much searching here and anywhere else I could think of, and trying a bunch of dastardly incantations, I've still got no joy on an actual solution to the problem of preventing the IDE from deleting the custom build tool setting when the extension pattern doesn't match the file.
So the workaround was to change the rule to use a "*.h" pattern and test whether the custom command actually needs to run. The following is my "FunkyRule.rules" file, in case anybody needs to see what I did:
<?xml version="1.0" encoding="utf-8"?>
<VisualStudioToolFile
	Name="Funky code generator"
	Version="8.00"
	>
	<Rules>
		<CustomBuildRule
			Name="Funky"
			DisplayName="Funky code generator"
			CommandLine="IF NOT EXIST $(InputFileName) goto :notFound
NeedsFunky $(InputFileName) 1>nul || goto :notFound
echo Generating Funky things for $(InputFileName)
FunkyCodeGenerator $(InputFileName) -o &quot;.\GeneratedFiles\funky_$(InputName).cpp&quot;
goto :done
:notFound
exit 0
:done
"
			Outputs=".\GeneratedFiles\funky_$(InputName).cpp"
			AdditionalDependencies="$(InputFileName)"
			FileExtensions="*.h"
			ShowOnlyRuleProperties="false"
			>
			<Properties>
			</Properties>
		</CustomBuildRule>
	</Rules>
</VisualStudioToolFile>
This CustomBuildRule needs two external commands, "NeedsFunky" and "FunkyCodeGenerator".
"NeedsFunky" is a command that returns 0 if the input .h is a valid input for "FunkyCodeGenerator", and non-zero otherwise.
"FunkyCodeGenerator" is the command that does the "funky-ness", taking the input .h as a parameter and requiring a -o option to specify the output file.
In this context (i.e. whatever environment Visual Studio runs the resulting temporary .bat file in), I had some trouble making an IF statement work using the usual "IF ERRORLEVEL" syntax, hence the unusual "cmd || goto :failLabel" construct.
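In other words, the two forms below should be equivalent, but only the second behaved reliably in that environment (foo.h is a placeholder):

REM classic form: jumps when the exit code is 1 or higher
NeedsFunky foo.h 1>nul
IF ERRORLEVEL 1 goto :notFound

REM form used in the rule: jumps whenever the command exits non-zero
NeedsFunky foo.h 1>nul || goto :notFound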
If you add this rules file to the custom build rules for a project you'll automatically get "some funkiness" on all your ".h" files :-)

Related

Step through my script, not its imports - is there such functionality?

It is actually a credit to the strength of PyDev/Eclipse that the debugger also steps through the corresponding parts of the imported numpy/pandas at the places where my script uses their functionality, e.g. df = pandas.DataFrame({...
But if I am confident that the imports work OK: Is there a way for the debugger to step only through my own 10 lines of script and not its imports? It would save a lot of inspection time.
(Eclipse for C/C++ on Windows 10 64bit)
Thank you!
There's actually such functionality available in the debugger, but it currently doesn't have a UI (I still haven't had time to implement it).
Still, you can set an environment variable to use it.
I.e.: add an environment variable named PYDEVD_FILTERS (you can add it in the interpreter configuration or by editing your launch) and set it to a ;-separated list of paths matching the directories you want to ignore (fnmatch style) -- those matches will be skipped by the debugger.
See: https://github.com/fabioz/PyDev.Debugger/blob/master/_pydevd_bundle/pydevd_utils.py#L191 as a reference for this (i.e.: pydevd_utils.is_ignored_by_filter).
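For example (hypothetical patterns, fnmatch style as described above), the launch environment could contain:

PYDEVD_FILTERS=*/site-packages/numpy/*;*/site-packages/pandas/*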

Doxygen \cond statement does not work

I am trying to separate out internal and external documentation using the doxygen \cond construct, but I just can't seem to get it working. I would essentially like to exclude some files completely, not conditionally. Regardless of where I add the tag (before the include, before the header guards, etc.), the files and source both show up.
What I have tried, in vain, is to take the test file for the conditional test from the doxygen repo and add it to the project.
Steps to reproduce [Linux]:
create a new directory.
copy-paste the above file (I had to rename it to .h, as .c was passed over?).
generate a dummy config via doxygen -g.
update the Doxyfile: ENABLED_SECTIONS = COND_ENABLED.
run doxygen.
check html/index.html.
The cond_enabled function, however, is still visible in the HTML documentation generated for the project. I have set the ENABLED_SECTIONS variable to other values, but the cond_enabled function still shows up. Running the testing directory of the (doxygen) project passes, so I am lost.
Any suggestions?
Tried with latest version 1.8.14.
Thanks!
Regarding the \cond problems (not directly an answer to the real problem you face, I think, but too long for a comment).
The mentioned file is used in the (limited) testing doxygen does, and its first lines contain some instructions on what to do. Furthermore, there is a default Doxyfile used with the tests. It is hard to run a separate test outside the doxygen build tree.
Regarding the remark "Running the testing directory of the project (doxygen) it passes": this is correct; here, at the moment, testing is only done against the XML output, and the generated output is compared to a previously accepted version of the XML output. No tests are done, at the moment, in respect to HTML or PDF / LaTeX. Recently the test framework has been slightly extended so that in the future this should be possible (comparing the xhtml and tex output), but some work still has to be done here.
The current version of the parser sees the \cond in the first line (a normal C comment) as a doxygen command and skips everything till the first \endcond (your friend in these cases is always doxygen -d preprocessor). I think that removing / modifying the first line will already give a better result. There is however another hiccup for e.g. HTML output: as the function cond_enabled is not documented and EXTRACT_ALL is not set to YES, the function will not appear in the documentation. So it is best to also add a line of documentation to the function cond_enabled.
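As a minimal sketch (a hypothetical header, just illustrating the behaviour described above), with the function documented and the \cond kept inside doxygen comments:

/** \cond COND_ENABLED */

/** A documented function, so it appears without EXTRACT_ALL.
 *  This block is kept only when COND_ENABLED is listed in
 *  ENABLED_SECTIONS in the Doxyfile. */
void cond_enabled(void);

/** \endcond */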
Regarding the seen HTML problems, I modified the relevant test in doxygen slightly and pushed a proposed patch to github (pull request 714, https://github.com/doxygen/doxygen/pull/714).
Note: the problem of skipping a \cond in a normal C comment is quite a bit harder to solve (given the logical complexity of the doxygen code in pre.l and commentcnv.l).
EDIT: 2018/06/10: The pull request has been integrated in the master version on github.

LLDB: Specifying source search folders for flattened folder structure

I'm using VSCode + CodeLLDB + LLDB to debug a JIT'ed language (KL), however I'm having trouble getting LLDB to recognize the source files.
This is a duplicate of LLDB equivalent of gdb "directory" command for specifying source search path?; however, its accepted answer doesn't work for me.
LLDB seems to think every source unit is compiled to the local directory - so if I execute
kl /MyWork/someFile.kl
and this file includes /any/other/path/external.kl, LLDB will believe the file is located at /MyWork/external.kl
So far, I'm (mostly) working around this problem by using
settings set target.source-map /MyWork/ /any/other/path/
However this only seems to work for a single folder. If I tried:
settings set target.source-map /MyWork/ /any/other/path/
settings set target.source-map /MyWork/ /I/use/many/dependencies/
Then LLDB seems unable to find any files in either folder. Interestingly, when I tried this, LLDB errored with the following messages:
can't find external.kl in /I/use/many/dependencies/
can't find dependencies.kl in /any/other/path/
These are accurate, but it seems almost as if LLDB is just looking for an excuse to error out :).
Note - I can set breakpoints and view locals, I just can't seem to view the source code at that spot.
Anywho - are there any suggestions on how to deal with this issue? There are 3 possibilities to work with:
- Modify/work with LLDB to find the source files
- Modify CodeLLDB to modify paths between LLDB + VSCode
- Somehow convince VSCode to ignore the path given to it, and search its own folders for any file matching the name.
My suspicion is that LLDB is the right place to fix this, but I am open to any suggestions (right up to the point of linking each and every source file into a flat folder I can redirect to).
lldb only knows about source paths from what is written in the debug information. The rule in DWARF (the debug format lldb uses) is that if an include file is just given by relative path or base name then it is taken to be relative to the compilation directory. Looks like that's what is happening in your case. That sounds like a compiler bug. lldb's not going to be able to reconstruct the file hierarchy at this point.
The source maps should give you a manual way to fix this, and from the sound of it, what you are describing should work. But maybe there's something else odd in the DWARF output from kl that is confusing it. You'll need to file a bug with some example binary to http://bugreporter.apple.com.
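One more thing you could try (an assumption about how settings set behaves generally, not something verified against KL's output): settings set replaces the previous value, so the second of the two commands above wipes out the first mapping. target.source-map accepts several pairs in a single command, which may keep both mappings active:

settings set target.source-map /MyWork/ /any/other/path/ /MyWork/ /I/use/many/dependencies/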

ClearCase, a makefile use case

I have an issue with the clearmake command in IBM ClearCase.
I use the clearmake command to run my own makefile so I can build my program from the C source code.
I want to put a command in the makefile, like shell cleartool -some-command, to ignore all checkouts and all private files.
The disadvantage is that the config spec must include the rule element * CHECKEDOUT.
But in my use case I want to work on files and at the same time be able to compile/build with the old files, so I can work faster and don't have to change views or edit config specs.
What I am wondering is whether I can ignore the checked-out files with a command, without losing them.
Could you give me a solution?
I want to work on files and at the same time be able to compile/build with the old files
It would be easier to use two different snapshot views loaded on the disk at two different places.
In one (where no checkout has ever been done), you can set all files writable (through Windows, not ClearCase): all the files become hijacked, but modifiable, which is fine for compilation/testing purposes.
In the other view, you keep your checked-out files and your work in progress (but do not run clearmake there).
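As a sketch (standard config spec rules; adapt the branch selector to your project), the two views could use config specs along these lines:

# build view: no CHECKEDOUT rule, so checkouts and view-private files are ignored
element * /main/LATEST

# work view: the usual rules, including your work in progress
element * CHECKEDOUT
element * /main/LATEST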

How can I add a custom package to the startup path in Dymola/Modelica?

I have a custom package that I find myself reusing repeatedly in Dymola models, and I'd like to put this package in a common directory that is automatically loaded whenever I start Dymola. My current strategy is to load the custom package when a model I'm working on is loaded, and then save total. This is not elegant, because the contents of the custom package end up saved in multiple locations across my hard drive, and if I change one of them, the changes are not reflected everywhere. I would like a more robust way of distributing this custom package to all of my models. Is there a way to tell Dymola to automatically load my custom package every time?
The trick is to add the following lines to settings.mos in c:/Users/USERNAME/AppData/Roaming/Dynasim:
Utilities.setenv("MODELICAPATH", "C:\Users\USERNAME\Documents\Dymola");
openModel("c:\Users\USERNAME\Documents\Dymola\UserDefined\package.mo")
The first line adds the directory to the path that Dymola uses to search for packages that have not been loaded prior to the first run of a model, and the second line loads the specified package. These two commands may be somewhat redundant, but I am doing both because I want to make sure my custom packages are on the path in addition to loading the UserDefined package.
Two suggestions. First, you need to add your package to the MODELICAPATH. You'll have to consult the Dymola documentation to figure out exactly what you need to do. But normally, what this means is that you have to set an environment variable that gives a list of directories (; separated) to be searched for your package. Now that will put it in your path so it can find it automatically, but it won't load it until it needs it.
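For instance (a hypothetical Windows example; check the Dymola documentation for the exact variable handling), the directory containing your package could be set persistently from a command prompt:

setx MODELICAPATH "C:\Users\USERNAME\Documents\Dymola;C:\Libraries"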
If you want it to always appear in the package browser, you'll probably need to set up a .mos file (script) to load it. Dymola has that capability, but you'll have to read the manual to figure out what that script has to be called and where Dymola expects to find it.
I hope that helps.
In the installation folder of Dymola 2018 -> insert -> dymola.mos, I've added the lines:
Utilities.setenv("MODELICAPATH", "C:\Users\XXXX\Documents\Dymola");
openModel("C:\Users\XXXX\Documents\Dymola\DCOL\package.mo");
openModel(“C:\Users\XXXX\Documents\Dymola\Annex60 1.0.0\package.mo”);
Now, I don't understand the Utilities line, as the DCOL package loads fine without it, and the 'Utilities' package added to the package menu is useless.
But it does not open the Annex60 package.
I've tried a lot of different combinations and can't get multiple packages to load. I also doubt that the "cd" and "Advanced.ParallelizeCode" commands, which likewise appear in the file, work.
The accepted answer does not work since Dymola 2017 FD01, as the file settings.mos is not used anymore. User settings are stored in the setup.dymx file instead, located in
C:\Users\USERNAME\AppData\Roaming\DassaultSystemes\Dymola
In contrast to the settings.mos file, you cannot include custom lines of Modelica script in setup.dymx.
The answer using dymola.mos still works, but you need admin privileges to modify this file.
Here is a simple solution which works with all Dymola versions:
You can pass a .mos script as the first parameter to dymola.exe.
This can be done, for example, like this:
Create a .mos script somewhere with commands like openModel(), etc.
Create a desktop shortcut to Dymola.exe
Open the properties of the shortcut and add the path to the .mos script in the Target text field. It will then look something like this:
"C:\Program Files\Dymola 2018 FD01\bin64\Dymola.exe" "C:\<some-path>\startup.mos"
Start Dymola with the desktop shortcut. The script will be executed, and any errors or messages are displayed in the Commands window.
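As a minimal sketch (hypothetical paths), such a startup.mos could simply chain several openModel() calls:

// startup.mos - run at startup via the shortcut described above
openModel("C:/Users/USERNAME/Documents/Dymola/UserDefined/package.mo");
openModel("C:/Libraries/AnotherLib/package.mo");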
Another suggestion, where you don't need to hardcode your package into an environment variable of your operating system (and which may be safer for inexperienced programmers):
Go to the folder where Dymola is installed (e.g. C:\Program Files\Dymola 2020).
Search for the Dymola.mos file in the 'insert' folder.
Open the script (e.g., in notepad++)
Add the path(s) to your library package.mo file(s) there with an openModel statement, e.g., openModel("C:/IDEAS/package.mo");
Save the script. Now, every time you open Dymola, your libraries will be loaded automatically.