I'm working on a project that uses Eclipse to build an embedded application for several different platforms. Right now I have it set up to use a different build configuration for each platform to determine which compiler is used, but I'm having trouble getting the indexer to work properly.
The core of the problem is that I have some files that are platform-specific and simply won't compile anywhere else (e.g. code that sets hardware registers for an ARM won't work on a PowerPC). Building is easy: I'm using a custom makefile, so I can just exclude sources from the different builds. The Eclipse indexer, however, doesn't know that, so it tries to index both files at the same time, which causes problems because different compilers have different default include directories and predefined macros. I've figured out how to change the discovery options to discover one compiler or the other, but I can't get it to find both at once (unless I manually add each include directory and #define, which I'd like to avoid if possible).
Is there a way to automatically discover the include paths and #defines for two different compilers in the same project? Alternatively, is it possible to tell Eclipse not to try parsing files that aren't used in the build?
I figured out how to do it. Under the C/C++ General > Code Analysis section you can select certain files/folders to be excluded. I turned off the errors about unresolved symbols, functions, and types, and now it works fine.
I'm using VSCode with the PlatformIO plugin and the PlatformIO Bazel integration (https://github.com/mum4k/platformio_rules) to write code for several different types of microcontrollers (ATTiny84, ATTiny85, Arduino Nano, ESP32-S3). The code builds correctly, because the Bazel integration with PlatformIO selects the correct libraries, but since there are libraries with the same name made for different types of microcontrollers, IntelliSense will pick one seemingly at random and mark a bunch of symbols as not defined or libraries as not found (effectively, a lot of red squiggly marks).
As an example, I have some code that configures a timer for the ATTiny85 and accesses registers like TCCR1 and OCR1A, which are correctly defined in Arduino.h for that microcontroller, but are not defined in any of the other four versions of the library available.
Another example would be Arduino.h not defining Serial when built for the ATTiny, but defining it when working with the Arduino Nano.
The code I'm working on here is C++.
Because the whole project includes code for all of those microcontrollers (it makes use of several different types of micros), I need to put the library directories for all of the micros on the include path, which is what I think is causing this problem. I have tried to fully qualify the path for the library that I'm using in my code, but that doesn't work, because the libraries included inside those libraries are not (and cannot be) fully qualified, so at some point the problem happens again, just one level of indirection down. Also, this code might end up being open source at some point, so I cannot force my own absolute paths there.
I would like either:
For VSCode IntelliSense to get information about the right libraries to include through either PlatformIO or Bazel (the best solution), or
Failing that, the option to change the include path on a per-directory basis. This would add additional directory structure to my project, and I'm still not 100% sure it would work, but it would be a start.
At this point I feel like my only option is to disable the red squiggly lines completely, but that seems like a sub-optimal solution (one partial alternative is sketched below).
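One thing that may help, assuming the squiggles come from Microsoft's C/C++ extension rather than from PlatformIO itself: that extension reads .vscode/c_cpp_properties.json, which can hold one configuration per target, each with its own include paths and defines, switchable from the status bar. A minimal sketch, where every path and define is a placeholder (and note that PlatformIO normally generates this file, so it may overwrite manual edits):

{
    "version": 4,
    "configurations": [
        {
            "name": "attiny85",
            "includePath": [
                "${workspaceFolder}/src",
                "/path/to/attiny-core/**"
            ],
            "defines": ["__AVR_ATtiny85__"],
            "cppStandard": "c++17"
        },
        {
            "name": "esp32-s3",
            "includePath": [
                "${workspaceFolder}/src",
                "/path/to/esp32-arduino-core/**"
            ],
            "defines": ["ESP32"],
            "cppStandard": "c++17"
        }
    ]
}

Only one configuration is active at a time, so this doesn't pick the right library automatically, but it does stop headers from five different cores from being mixed into a single IntelliSense view.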
I'm developing a bare-metal embedded application; no OS or MMU. I'm using a toolchain that consists of arm-none-eabi-gcc, ld, and make. The application requires some plugins to be dynamically loaded and unloaded, and I don't know how to create a linker script for this configuration.
The host application has a defined API for the plugin system; it consists of function declarations for init_plugin() and execute_plugin().
There are several C files called plugin01.c, plugin02.c, ..., each of which implements that API. I want to compile them and then place all plugins in exactly the same address space. Only a single plugin is loaded at a time, so there is no problem with memory collisions. After compiling and linking I would extract the plugins from the output file and load them separately onto the target hardware.
I need help with solving two problems:
The linker should not complain about multiple definitions of the same functions.
The linker needs to place all code from the pluginXX.c files into the same memory range, resetting the location counter after linking each plugin. It should assign the same VMA but different LMAs: the same VMA allows a plugin to run when it is loaded at that location, and different LMAs allow me to extract the compiled and linked plugins from the output file.
For anyone interested, I managed to solve it.
The problem of conflicts in symbol names was solved by prefixing all section names with a .pluginXX string, and renaming the symbols to pluginXX_init_plugin and pluginXX_execute_plugin.
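The answer doesn't say which tool did the renaming; one hedged way to do it without touching the source is objcopy (the file and symbol names below follow the pluginXX convention from the answer):

# Prefix every section name with .plugin01 (.text becomes .plugin01.text)
# and rename the API symbols so they no longer collide across plugins.
arm-none-eabi-objcopy \
    --prefix-sections=.plugin01 \
    --redefine-sym init_plugin=plugin01_init_plugin \
    --redefine-sym execute_plugin=plugin01_execute_plugin \
    plugin01.o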
The problem of placing all plugin code into the same address space was solved by using the linker's OVERLAY feature. All plugins are linked together with the host in just one linker pass, which guarantees that everything is linked correctly.
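A minimal sketch of what the OVERLAY section can look like, assuming the .pluginXX section prefixes above; both addresses are placeholders for the target's actual RAM and flash layout:

SECTIONS
{
    /* Every member of the overlay gets the same VMA (the run address),
       while AT() sets the flash LMA; the members are laid out at
       consecutive LMAs so each plugin can be extracted separately. */
    OVERLAY 0x20000000 : AT (0x08010000)
    {
        .plugin01 { plugin01.o(.plugin01.text* .plugin01.rodata*) }
        .plugin02 { plugin02.o(.plugin02.text* .plugin02.rodata*) }
    }
}

Each plugin can then be pulled out of the final image with something like arm-none-eabi-objcopy -O binary --only-section=.plugin01 app.elf plugin01.bin.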
Before this, I tried two-step linking: partially linking only the host code into an object file, which would then be linked with each plugin separately. This was a waste of time. During the partial linking step, unused code could not be discarded (the --gc-sections option is not available together with the --relocatable option used for partial linking), and the code would not fit into the available memory.
First of all, I know how to write a basic/intermediate level
makefile. In my C++ projects I use a makefile that does a lot of stuff
automatically. The most important to me is that it automatically
detects all source files (which are always in the same folder) using
wildcards, uses that to predict the name (and location) of all object files, and compiles appropriately.
Recently I've been trying to achieve the same effect with my Scala
projects, but I've hit two obstacles.
Compiled class files which belong to packages are stored inside
subdirectories (like com/me/mypack/). This is a problem because
Make needs to find these files to check the timestamps (and I
have no idea how to do that automatically).
Some source files (such as those defining a package object)
generate class files with different naming standards. Again, Make
needs to know where these class files are and I don't know how to
do that automatically.
The consequence of this is that the "problematic" source files are
recompiled every time I run make (which is aggravated by Scala's long
compile times). I'd like to know how to fix that without having to
manually write out the entire list of expected class files.
EDIT: As an extra note, I'd like to avoid placing the source files in subdirectories; I like keeping them all in the same directory for several reasons.
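Since the follow-up answers don't show a makefile, here is one hedged way around the unpredictable class-file names: let make track a single stamp file instead of the class files themselves (scalac, a flat source directory, and a classes output directory are assumed):

SRC := $(wildcard *.scala)
OUT := classes

.PHONY: all
all: $(OUT)/.stamp

# The stamp stands in for the class files, whose names and locations
# (package subdirectories, package objects) are hard to predict.
# $? passes only the sources newer than the stamp; -cp $(OUT) lets
# them see previously compiled classes. Use $(SRC) instead of $? if
# stale cross-file dependencies become a problem.
$(OUT)/.stamp: $(SRC)
	mkdir -p $(OUT)
	scalac -cp $(OUT) -d $(OUT) $?
	touch $@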
You should use sbt or Maven for Scala. These are designed specifically for the way Scala and Java work, and they will be much easier to set up and use. They also provide many more features than make does.
These tools are used for a variety of things. Compiling is a big one, but they are also important for dependency management. Also, sbt (and probably Maven?) does "incremental compilation", so that only classes that have changed are recompiled, which speeds up compilation.
I personally use sbt, but I know people who prefer Maven.
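For what it's worth, the sbt setup for a project like this is small. A sketch in sbt 1.x syntax (the name and version are placeholders) that also points sbt at a flat source directory, since the asker wants to keep all sources in one folder:

// build.sbt
name := "myproject"          // placeholder
scalaVersion := "2.13.12"    // placeholder

// Look for .scala sources in the project root instead of src/main/scala.
Compile / scalaSource := baseDirectory.value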
I'm currently working on a project whose source code should be as portable as possible; that is, the project (in C#, but that is not very relevant) is an application that should run on Android (with Mono-Android), on iPhone (with MonoTouch), and on WinMobile (with the official Compact Framework). Without going into details, the corresponding MSBuild solution consists of a platform-independent library (from a source-code point of view, at least) that declares various interfaces and classes representing an abstraction of each feature that is not common to the various platforms (i.e. the UI). In addition, there is a corresponding library that specializes (for each platform) the "base library"; the actual application executable is a program that uses the abstraction and the common standard libraries.
Developing on WinMobile and Android is not really a problem: the Mono-Android add-in can be installed in VS 2010, so both platforms can be handled with MS VS.
Initially the solution was created in VS, so the initial configuration and the related projects (Android and WinMobile) were generated automatically.
After that I imported the solution into MonoDevelop on a Mac (the only platform officially supported by MonoTouch) and created the project for the iPhone library. After switching the configuration to generate the assemblies (iPhoneSimulator), the "base library" could not be compiled due to a missing project-type configuration; specifically, the GUID used by MonoTouch for <ProjectTypeGuids> is {E613F3A2-FE9C-494F-B74E-F63BCB86FEA6}. After adding this GUID I can now compile the "base library" in MonoDevelop.
The problem arises when I try to re-import the solution in VS: since there's no Windows version of MonoTouch, VS cannot find the add-in for the specified project type, and the project doesn't load.
Looking at the specification of the MSBuild project file format, it seems there are tons of options that cannot be set or modified from the project/solution editor in VS; however, the format is quite complicated, and now I'm asking for your help!
Is there a way to specify in the project file that a project type is present only if a particular configuration is selected, independently of the environment I'm using?
The general approach is something like this: a condition that progressively builds your property, referencing any value the property may already have:
<ProjectTypeGuids Condition="'$(BuildingInsideVisualStudio)' != 'true'">;{E613F3A2-FE9C-494F-B74E-F63BCB86FEA6}</ProjectTypeGuids>
<ProjectTypeGuids>{OTHER-GUIDS-HERE}$(ProjectTypeGuids)</ProjectTypeGuids>
This will detect the VS condition (when building) and omit the unknown GUID. I'm not sure, however, whether it will work when the project is opened; this property might only apply to building. There may be a similar "sentinel" property for building on Mono, and you can reverse the condition.
I solved an unrelated but very similar issue of cross-platform development by excluding the files that proved problematic when moving between Linux and Windows. I have my project under source control and used that to keep things working cooperatively.
http://www.aydabtudev.com/2011/05/what-goes-into-source-control-android.html
It's not a 1-to-1 for your issue, but it might give you clues/ideas on how to solve your problem.
Putting development tools (compilers, IDEs, editors, ...) and runtime environments (JRE, .NET Framework, interpreters, ...) under version control has a couple of nice benefits. First, you can easily compile/run your program just by checking out your repository; you don't need anything else. Second, the combination is known to be version-compatible, because you tested it once. However, it has its drawbacks. The main one is the large volume of big binary files that must be put under version control, which may make the VCS slower and the backup process harder. What's your opinion?
Tools and dependencies actually used to compile and build the project, absolutely - it is very useful if you ever have to debug an issue or develop a fix for an older version and you've moved on to newer versions that aren't quite compatible with the old ones.
IDEs & editors, no: ideally your project should be buildable from a script, so these should not be necessary. The generated output should be the same regardless of what you used to edit the source.
I include a text (and thus easily diff-able) file in every project root called "How-to-get-this-project-running" that includes any and all things necessary, including the correct .net version and service packs.
Also, for proprietary IDEs (e.g. Visual Studio) there can be licensing issues, as this makes it difficult to manage who is using which pieces of software.
Edit:
We also used to store batch files in source control that automatically checked out the source code (and all dependencies). Developers just check out the "Setup" folder and run the batch scripts, instead of having to search the repository for the appropriate bits and pieces.
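As a purely hypothetical sketch of such a script (the repository URLs are made up):

@echo off
rem setup.bat - check out the source and all dependencies in one go
svn checkout http://svn.example.com/project/trunk src
svn checkout http://svn.example.com/3rdparty/trunk lib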
What I find very nice and common (in the .NET projects I have experience with, anyway) is including any "non-default install" dependencies in a lib or dependencies folder under source control. The runtime is provided by the GAC and is more or less assumed.
First, you can easily compile/run your program just by checking out your repository.
Not true: it often isn't enough to just get/copy/check out a tool; the tool must often also be installed on the workstation.
Personally I've seen libraries and 3rd-party components in the source version control system, but not the tools.
I keep all dependencies in a folder under source control named "3rdParty". I agree that this is very convenient, and you can just pull down the source and get going. This really shouldn't affect the performance of the source control.
The only real drawback is that the initial pull can be fairly large. In my situation, anyone who pulls down the code usually runs it as well, so this is OK. But if you expect many people to pull down the source just to read it, this can be annoying.
I've seen this done in more than one place where I worked. In all cases, I've found it to be pretty convenient.