How to import a CMake project which uses .sh scripts for building into VSCode?

I have a legacy code base with a CMake configuration and .sh scripts that invoke the build commands for each build type (Release, RelWithDebInfo, etc.), as well as doing a number of other things.
I have to work on this codebase. So far I have used STM32CubeIDE: I imported the project as an existing Makefile project and then changed the C/C++ Build directory to the location where the Makefiles are generated.
So whenever I change the code, I trigger the build from the UI and it runs make in the directory I configured above.
This works, but whenever I need to debug I have to take the .elf outputs and use the Ozone J-Link debug program.
I need to build and debug in the same environment, but the Eclipse-based IDE is slow and a struggle to work with.
How can I make VSCode host my legacy code so that I can build, debug, and also navigate the code (i.e. go to definition) in VSCode, not only build it?
Please let me know if anything from the existing code base needs to be shared, or if anything is still unclear.
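One way to approach this: register the .sh scripts as VSCode build tasks, and use the Cortex-Debug extension to drive the J-Link from inside VSCode. Below is a minimal sketch of .vscode/tasks.json; the script name build.sh and its release argument are placeholders for whatever your legacy scripts actually expect.

    // .vscode/tasks.json (sketch; adjust script name and arguments to your project)
    {
        "version": "2.0.0",
        "tasks": [
            {
                "label": "build (release)",
                "type": "shell",
                "command": "./build.sh",   // hypothetical name of your legacy build script
                "args": ["release"],       // whatever build-type argument your script expects
                "group": { "kind": "build", "isDefault": true },
                "problemMatcher": ["$gcc"] // surfaces compiler errors in the Problems panel
            }
        ]
    }

For debugging, the Cortex-Debug extension can launch a J-Link GDB server against your .elf output. Here is a sketch of .vscode/launch.json, assuming an STM32 target and the task above; both the executable path and the device name are assumptions to replace with your own:

    // .vscode/launch.json (sketch; executable path and device are assumptions)
    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Debug firmware (J-Link)",
                "type": "cortex-debug",
                "request": "launch",
                "servertype": "jlink",
                "interface": "swd",
                "cwd": "${workspaceFolder}",
                "executable": "${workspaceFolder}/build/firmware.elf", // hypothetical output path
                "device": "STM32F407VG",                               // replace with your MCU
                "preLaunchTask": "build (release)"
            }
        ]
    }

For code navigation, since the project already uses CMake, configuring with -DCMAKE_EXPORT_COMPILE_COMMANDS=ON and pointing the C/C++ extension's "compileCommands" setting (in c_cpp_properties.json) at the generated compile_commands.json usually gives accurate go-to-definition without changing the .sh-driven build.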

Related

VSCode: how to structure a simple Python package with a few modules and tests, debugging and linting?

I'm having more trouble than I'd like to admit structuring a simple project in Python to develop using Visual Studio Code.
How should I structure, in my file system, a project that is a simple Python package with a few modules? Just a bunch of *.py files together. My requirements are:
- I must be able to step-debug it in VSCode.
- It has a bunch of unit tests using pytest.
- I can select a specific test to debug from the VSCode testing tab, and it must stop at breakpoints.
- pylint must not show any false positives.
- The test files must be in a different directory from the main module files.
- I must be able to run all the tests from the console.
- The module is executed inside a virtual environment using the Python standard library module venv.
- The code will use type hints.
- I may use another linter, or even another test framework.
Nothing fancy, but I'm really having trouble getting it right. I want to know:
- How should I organize my subdirectories: a folder with the main files and a sibling folder with the tests? Or a subfolder with the code and a sub-subfolder with the tests?
- Which dirs must have an __init__.py file?
- How should the tests import the files from the module? Should I use relative imports?
- Should I create a pytest.ini file?
- Should I create a .env file?
- What should the content of launch.json, the debugger config file in VSCode, be?
Common dir structure:

    app/
        __init__.py
        yourappcode.py
    tests/                 (pytest looks for this)
        __init__.py
        test_yourunittests.py
    server.py              (if you have one)
    .env
    .coveragerc
    README.md
    Pipfile
    .gitignore
    pyproject.toml         (if you want)
    .vscode/               (helpful)
        launch.json
        settings.json
Or you could do one better: ignore my structure and look at the GitHub pages of some famous Python projects. FastAPI, Flask, asgi, and aiohttp are some that I can think of right now.
Also:
I think absolute imports are easier to work with compared to relative imports, though I could be wrong.
VSCode is able to use pytest. Make sure you have a testing extension; VSCode has one built in, I'm pretty sure. You can configure it to use pytest and specify your test directory. You can also run your tests from the command line: if you're at the root, just running pytest will recognise your tests dir by default, if it's named that. Also, your actual test files need to start with the prefix test_, I think.
The launch.json doesn't need to be anything special. When you click the settings button next to the play button in the debug panel, VSCode will ask what kind of app it is. For example, if it's a Flask app, select Python and then Flask, and it will auto-generate a config file which you can tweak however you want to get your app to run, e.g. if you want to expose a different port or the commands to run your app are different.
It sounds to me like you just need to spend a bit of time configuring VSCode for your specific Python needs. For example, you can use a virtualenv and linting in whichever way you want. You just need a settings.json file in the .vscode folder of your repo where you specify your settings; a minimal example is sketched below.
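A minimal sketch, assuming the app/tests layout above, a virtual environment in .venv, and pytest as the test runner (all of these names are assumptions to adapt):

    // .vscode/settings.json (sketch)
    {
        // on Windows this would be ${workspaceFolder}/.venv/Scripts/python.exe
        "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
        "python.testing.pytestEnabled": true,
        "python.testing.unittestEnabled": false,
        "python.testing.pytestArgs": ["tests"]
    }

    // .vscode/launch.json (sketch)
    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Python: Current File",
                "type": "python",
                "request": "launch",
                "program": "${file}",
                "console": "integratedTerminal"
            }
        ]
    }

With pytest enabled like this, individual tests can be run or debugged from the Testing tab and will stop at breakpoints, without any extra launch configuration.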

run pydev project from file-system (with imports from different packages)

I want to run my working PyDev project's Python code by double-clicking the main module (outside of Eclipse): xxx.py
The problem is that, because my imports live in different packages:
from src.apackage.amodule import obj
when xxx.py is double-clicked it complains that it doesn't know where the imports are (even though when I run xxx.py in PyDev it magically knows what I'm importing).
A simple workaround is to remove all of the packages and move all of the modules into one directory (that obviously works but is very inconvenient).
How can I run my code from the file system without that workaround?
This page answers my question excellently:
http://blog.habnab.it/blog/2013/07/21/python-packages-and-you/
The bottom line is: always execute your code from the top-level root directory (e.g. using a minimal main.py file that executes the main script of your program). Then always use absolute imports, and you will never have a missing-module issue, since you start the program from the top directory and all imports resolve from that 'home' path.
The problem you encountered is the natural behavior of most languages. A program only knows about its working path (the path it is started in), the paths registered in the environment variables, and, lastly, relative paths.
The "magic" of the executable you created is therefore that it collects all the scripts/modules needed and copies/combines them next to or into the executable. The executable then runs in the directory where all the other scripts also reside, and voilà.
If you are not happy with your workaround of creating an executable every time you want to run your project without PyDev, there are two alternatives.
The first, though not the one I would suggest, is registering the working path in the environment variables.
The second, which I think is much better: create a shortcut to the Python executable and alter the call string in the "Target:" field, appending the path to the script you would like to run. Then alter the "Start in:" field and enter the project directory. After you do this you should be able to start your project with a simple double-click.
(If you rely on external libraries which are neither on the path nor in your project, you could look into appending paths temporarily to the Python path via the sys module.)
I hope I could help a bit.

How to determine if a build is from the editor or command line?

I am building a C++ solution with Visual Studio 2005.
Sometimes I open the solution in Visual Studio and build it from within the development environment; other times I build it from the command line using msbuild.exe. I'm wondering if there is a way to determine, at compile time, which of these two types of build I'm using (for example, a macro or something like that). I want to change the path of my output files based on this determination: if I'm building from within Visual Studio I would put my output files in FolderA, but if I'm building from the command line I would put them in FolderB. Is this possible?
Perhaps you can pass in a command-line parameter when building from the command line (e.g. an msbuild /p: property) to indicate that the solution is being built that way; otherwise, you can assume you are building from within Visual Studio.
I don't have the answer to your general question, but in order to change the output path, have you thought of adding project configurations? You could copy existing project configurations and update the output path of the new ones.

How to setup a DotNetNuke Development Environment with Source Control?

My team is developing a new DotNetNuke web application and would like to know the recommended way to set up a development environment with source control and automated builds. We would like to keep the DNN source code separate from the source code of our custom modules and extensions.
The DotNetNuke Compiled Module template for Visual Studio wants us to store the source code in the DesktopModules directory of the DNN source code and output to the DNN source code's bin directory. Is this the recommended structure? I would rather keep the files in different locations, but then it becomes more difficult to run and debug locally, as it would require an install of the module for each change. Also, how should an automated build deploy any changes?
How have others set this up? Is there a recommended best practice?
For my source control, I develop modules in their own project. This contains the module code, test code, data provider code (if applicable) and anything else. This is checked into source control like any other project. Note that the module project contains no links to a specific DNN website, and DNN references are made in the project to a common "bin" directory that contains your target build. For example, in my projects folder I have \bin460, \bin480, \bin510, \bin520 etc. Each of these folders holds a set of binaries for a specific DNN version. That way you can build against a particular version but test against any version you like.
The problems with source-controlling a module in place in a DNN install are:
- sometimes not all of the module code is easily isolated under a single parent directory
- it doesn't lend itself well to a PA module approach
- it is not easy to shift the project to a different DNN version for development or testing
- it is easy to inadvertently source-control parts of the DNN solution, particularly with integrated VS source control solutions
This approach compiles quickly because you're not trying to compile the entire project. For test deployment I have a build script that copies the various parts of the module into a target website. This can be done as part of the compile (by linking in the build script) or just run in a cmd window after a successful compile. My build script has a 'target' environment switch, so that I can say 'dnn520' to deploy the build to my test dnn520 install. Note that you need to manually create the module configuration first before this will work, but this is a one-time effort, and you can use the export feature to create your .dnn module manifest.
To build your module package, invest the time in a comprehensive script which will take the various parts from your source directory and zip them into an install package. Keep all of the parts in your source control folder, copy them into a temp directory, then run a command-line zip utility (I use an ancient version of pkzip) to pack it into an installable file.
The benefits of this approach are:
- separation of module code from installed code
- a simple way of keeping only the module code in source control (you don't have to exclude all the website code)
- the ability to quickly test modules in different DNN versions
- the packaging script allows you to quickly and easily build a new version of a module for install testing/deployment
The drawbacks are:
- you can't use the magic green 'go' button in VS (you have to manually attach the debugger)
- more setup time than developing in place
We typically stick to keeping the module code in a folder under DesktopModules and building to the website's bin directory.
In source control, we just map the individual modules, rather than the entire website. Depending on what we're working on, a module may be an entire project in source control, or we may have multiple related modules in the same project, living next to each other.
Automatically deploying changes is somewhat difficult in DNN. It's highly recommended to have a build script that packages your module into an installable form. You can then copy installable packages into the website's Install/Module folder and request the URL /Install/Install.aspx?mode=InstallResources, which will install any packages in that folder.
In response to bduke's answer: you should not, and don't want to, build projects in the DesktopModules folder.
That's where all of the out-of-the-box source code for the site goes.
That's where your modules will be "installed", and thus if someone "updates" or re-installs one, it will be overwritten.
It can make upgrading your application far more difficult. Many developers don't understand the idea of not touching the original source code files to modify their behavior, because those changes will just be overwritten when you perform an upgrade.
If you want to build modules, create a solution folder called Modules and place your separate module projects there.
If you want to debug them, make the target debug output point to the web\bin folder.
If you want to install/deploy them, build in release mode and install them through the Module/Extension installer.

How do I build project files and packages for Borland C++ Builder 5 from the command line?

How do I build Borland C++ project files (.bpr) and package files (.bpk) from the command line? Project groups (.bpg) are apparently makefiles and can be compiled with make, but .bpk and .bpr files are XML-based, and the "Export to Makefile" output won't compile with make.
If I put a project in a .bpg, make can't seem to find any of the files specified in the .bpg, since they all appear to be relative references. I changed the references to absolute ones and make reports:
Fatal: Unable to open makefile
You don't need to compile a .bpr directly. Just create a .bpg (project group) which includes that single .bpr, and you can use make to compile it:
"c:\program files\borland\cbuilder5\bin\make" -B -s -fabc.bpg
If you also have other Borland compilers installed, do not call the make.exe from a different compiler.
EDIT: execute the make command in the directory where the .bpg and .bpr are located.
Using bpr2mak and make works just fine for me, so as Roger said, you need to give details on what errors you're getting. BPK files can also be processed with bpr2mak. I'm using this method to compile a large project with many components, without difficulties.
Perhaps you could give some more information on "won't compile", i.e. what error messages you are getting.
One frequent problem that comes up with make is addressed at the following:
http://www.delphigroups.info/3/8/36427.html