Importing source files and folders into IAR Workbench

I have a bunch of source files in a certain folder structure in my file system, and I want to use this structure for a project in the IAR Workbench. Coming from Eclipse, this could be so easy! But in the IAR Workbench the folders become "Groups", which are only a kind of virtual folder; the Workbench doesn't care about the folders on disk.
Is there some easy and fast way to import them?
Up to now I have had to add each group manually and then add the files to the groups, and that's really annoying!
Is there maybe a tool to generate a proper project file (*.ewp) from an existing file/folder structure?
This would help me a lot!

You should have a look at IAR's Project > Add Project Connection command.
Although IAR doesn't seem to have any public documentation on the XML syntax (or at least I couldn't find any), you can find Infineon DAVE (Config.xml) and Freescale Processor Expert (ProjectInfo.xml) files if you search around. These can be used as examples to figure out how to write your own XML files for one of these interfaces, which lets you specify where all your C, header, assembly and library files are, wherever they may be in your file system. They also let you define preprocessor includes for the compiler/assembler, and DAVE allows you to define a path variable, which is also very useful.
See: https://mcuoneclipse.com/2013/11/01/iar-arm-v6-7-comes-with-improved-processor-expert-support/
I have modified a DAVE Config.xml file and found it EXTREMELY useful for managing and migrating even just a handful of project files. For example, to upgrade to a new release where all files have a new directory root, you just change a single line in the XML file (defining the new root), and all source files, compiler includes etc. are updated to the new level. No more manually editing the preprocessor includes or replacing all the files in the project, and no more fiddling around with ../../ file-system navigation: you specify directly (or indirectly via a path variable) where the files are, rather than relative to wherever your project happens to be. VERY NICE.
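To give a flavour of what such a file looks like, here is a minimal sketch. Since IAR doesn't document the syntax, the element names below are purely illustrative (made up for this example, not the real DAVE/Processor Expert schema); crib the actual tags from a real Config.xml or ProjectInfo.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical project connection file: tag names are illustrative only.
         The key idea is the single root variable that every path hangs off. -->
    <ProjectConnection>
      <PathVariable name="SDK_ROOT" value="C:/vendor_sdk/v2.1"/>
      <IncludePaths>
        <Path>$SDK_ROOT$/inc</Path>
      </IncludePaths>
      <Files>
        <File>$SDK_ROOT$/src/uart.c</File>
        <File>$SDK_ROOT$/src/timer.c</File>
      </Files>
    </ProjectConnection>

Upgrading to a new SDK release then means editing only the PathVariable line.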
IAR should consider opening this up (documenting it) for general users, as it is very useful for project management and migration. While they're at it, they should also consider generalizing the XML syntax a little: allow IAR group heading names to be defined, allow the linker file name to be specified, and definitely allow multiple XML files to be included (connected), so that subprojects can easily be added or removed without affecting the other subprojects' definition files, and a few basic things like that.
If they were to do a bang-up job on this, they might consider allowing most or all aspects of IAR project configuration that a subproject might require to be defined in these XML files; then entire (sub)projects could just be plopped down anywhere and be up and running extremely quickly (OK, just let me dream a bit :)

For anyone who happens upon this: you may want to check out https://github.com/IARSystems/project-migration-tools. They have a tool there for pulling in file trees.

Related

Clean, standard method for referencing local files in a project-based IDE (such as Eclipse or Visual Studio)?

Recently I've begun taking advantage of the features offered by using robust IDEs, particularly the debugger and autocomplete found in Eclipse Juno and Visual Studio 2012.
However, many of my projects deal with lots of local files; for game projects I have custom content files, for data mining I have lots of data files that need to be referenced from a set of Python scripts, etc.
My issue is that storing these files within the project structure of the IDE seems hacky somehow (also, the IDEs tend to require a single entry point, which isn't so cool for working with data via a suite of scripts). The only other option I've found, using absolute paths relative to the drive, results in less-than-generalizable code.
My question: is there a good, clean method for referencing local data files (text files, XML, images, etc.) while still taking advantage of the features of a heavyweight IDE?
It seems there are ways such as "debug in directory" and "local reference folder" systems, but I'm wondering if there's some general way people deal with this.
Thank you for any information or suggestion!
As for me, I'm always just doing one of two things:
storing the files in the project dir, and using the version control system's (svn, git) ignore rules to exclude those files while keeping the directories they're in
symlinking the files, or whole directories, depending on the structure, into the project; if you use relative symlinks instead of absolute ones, it's pretty easy for multiple people to have different files/content while still working on the same project over one repository. As you seem to be using Windows (AFAIK Visual Studio is Windows-only?), newer Windows versions support symlinks as well, from Vista onwards as far as I remember; see the example below.
Edit: quick google search led me to: http://en.wikipedia.org/wiki/NTFS_symbolic_link
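For example (paths here are made up; on Windows, mklink needs an elevated prompt or Developer Mode, and note that the argument order is reversed compared to ln):

    # Linux/macOS: relative symlink, so it survives moving the repo around
    ln -s ../../shared/content content

    :: Windows cmd: directory symlink, link name first, then target
    mklink /D content ..\..\shared\content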

Custom Eclipse (CDT) project layout, different from folder structure

A good hello to you fellow Stackoverflow people.
I am stuck with a small dilemma here.
At my work we used to work with UltraEdit projects, but we want to migrate to Eclipse CDT. (We're not using its compiler/build options; we need an external SDK for this.)
On the hard disk we have a specific folder structure to keep things separate between two teams, namely the 'productcode' + 'applicationcode' group and the 'drivercode' group.
Both groups have their own folder where they place source code:
application
drivercode
productcode
The filenames are given a specific prefix, denoting to which 'layer' they belong.
os (operating system)
application
system
unit
component
IO
hardware
All of these files (except for application, which is only allowed in the application folder) can be in either the productcode or the drivercode folder.
In UltraEdit all of these files are grouped under their respective layer. So our project has the following folders:
0 Operating System
1 Application Layer
2 System Safety Layer
3 Unit Layer
4 Component Layer
5 IO Layer
6 Hardware Layer
Generic
XML
The virtual folder '0 Operating System' holds all os_xxx files from the real folders 'drivercode' and 'productcode', and the same goes for 2, 3, 4, 5 and 6.
TL;DR:
Is it possible to get the same (virtual) folder structure within Eclipse CDT?
To make things more complex, this whole folder structure is divided over 3 projects, e.g. proj-1, proj-2, proj-3, and there is also a shared folder that holds code shared among the projects.
I had a similar situation. Rather than a bunch of hunting and pecking through the Linked Resources dialogs, which tends to break the ability to reuse the .*project files elsewhere, I made a 'workspace setup' script that just symlinks the sources into the directories where their projects are. That way the default Eclipse mechanisms (build all source within a tree) just work out of the box.
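Something along these lines, for instance (a sketch; the project and folder names are invented):

    #!/bin/sh
    # workspace-setup.sh: symlink the shared sources into each project
    # so Eclipse's default "build everything under the project tree"
    # behaviour works without any linked-resource configuration.
    for proj in proj-1 proj-2 proj-3; do
        ln -sfn ../shared "$proj/shared"
    done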
I have found one way, but it is quite cumbersome: I can create the structure I want using Linked Resource folders and files.
However, this means I need to go through all the dialogs for every folder/file in order to add them. I hope there is another way, though, so I'll not accept my own answer just yet.
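For what it's worth, those dialogs just write <link> entries into the project's .project file, so one workaround is to generate the entries with a script instead of clicking through each one. Roughly what the generated XML looks like (the layer and file names below are examples based on the question):

    <linkedResources>
      <link>
        <name>0 Operating System</name>
        <type>2</type>
        <locationURI>virtual:/virtual</locationURI>
      </link>
      <link>
        <name>0 Operating System/os_scheduler.c</name>
        <type>1</type>
        <location>/work/drivercode/os_scheduler.c</location>
      </link>
    </linkedResources>

Here type 2 with virtual:/virtual declares a virtual folder, and type 1 links a real file into it.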
Eclipse CDT plays well with existing projects.
I guess you probably also have a hand-written Makefile? Then you only need to use File -> Import -> C/C++ -> Existing Code as Makefile Project.
This will leave all your source where it was, and team members who prefer not to use Eclipse can still use whatever they want and build from the command line.

How to create several flash applications sharing a common codebase in FlashDevelop/ActionScript 3.0?

Situation:
I need several swf/exe output files compiled in FlashDevelop from several projects. More than 60% of the ActionScript 3.0 source is common to all projects; the rest is project-specific. How can I organize that in FlashDevelop? I want to have a "one click to build all" setup without duplicating the common codebase (so when I need to fix something I don't need to copy-paste the solution into several files).
All sources are under development and will change very often.
A straightforward solution is to make an external classpath, for instance:
c:\dev\shared_src\
c:\dev\project1\
c:\dev\project2\
Then configure each project:
Project Properties > Classpath
Add Classpath > select '../shared_src'
PS: of course you should keep everything under source control.
Using svn:externals you could structure your repository in such a way that the common parts are stored just once in the source control system, so changes can be synchronised with just a single commit-and-update cycle.
For example, imagine that you have ^/ProjectA and ^/ProjectB, each of which requires ^/Common as a subdirectory.
Using svn:externals, pull ^/Common into both projects.
The exact way of doing this will depend on the version of svn you use, and on any client you use (such as TortoiseSVN). Refer to the relevant edition of the svn book for specifics; a command-line sketch follows below.
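As a sketch, with svn 1.5 or later (where repository-relative ^/ URLs are supported), from a working copy of ProjectA:

    svn propset svn:externals "^/Common Common" .
    svn commit -m "pull Common in via svn:externals"
    svn update

After the update, a Common subdirectory appears in the working copy, and the same property can be set on ProjectB.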
The ease of implementing this will depend quite a lot on how cleanly separated the common code currently is in your application; pulling in directories as directories is much more practical than trying to pull files into an existing directory, and unfortunately wildcards in file paths are not supported.
However, based on your description of your aim, this is the most straightforward solution I can imagine.
Hope this helps.

Managing a scala project with a Makefile

First of all, I know how to write a basic/intermediate level makefile. In my C++ projects I use a makefile that does a lot of stuff automatically. The most important part to me is that it automatically detects all source files (which are always in the same folder) using wildcards, uses that to predict the name (and location) of all object files, and compiles appropriately.
Recently I've been trying to achieve the same effect with my Scala projects, but I've hit two obstacles.
Compiled class files which belong to packages are stored inside subdirectories (like com/me/mypack/). This is a problem because Make needs to find these files to check the timestamps (and I have no idea how to do that automatically).
Some source files (such as those defining a package object) generate class files with different naming standards. Again, Make needs to know where these class files are, and I don't know how to do that automatically.
The consequence of this is that the "problematic" source files are recompiled every time I run make (which is aggravated by Scala's long compile times). I'd like to know how to fix that without having to manually write out the entire list of expected class files.
EDIT As an extra note: I'd like to avoid placing the source files in subdirectories. I like keeping them all in the same directory, for several reasons.
You should use sbt or Maven for Scala. These are designed specifically for the way Scala and Java work, and they will be much easier to set up and use. They also provide many more features than make does.
These tools are used for a variety of things. Compiling is a big one, but they are also important for dependency management. Also, sbt (and probably Maven?) does "incremental compilation", so that only classes that have changed are recompiled, which speeds up compilation.
I personally use sbt, but I know people who prefer Maven.
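As a minimal sketch, an sbt build definition could look like this (the project name and version numbers are only examples). The last line also addresses the edit in the question, keeping all sources in one flat directory instead of sbt's default src/main/scala layout:

    // build.sbt
    name := "myproject"
    scalaVersion := "2.13.12"

    // Assumption: all .scala files live directly in ./src, no subdirectories.
    Compile / scalaSource := baseDirectory.value / "src"

Running sbt compile (or ~compile to recompile on every change) then tracks class files and package objects for you; no hand-written list of expected class files is needed.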

Should I put my output files in source control?

I've been asked to put every single file in my project under source control, including the database file (not the schema, the complete file).
This seems wrong to me, but I can't explain why. Every resource I find about source control tells me not to put generated output files into a source control system. And I understand why: they're not "source" files.
However, I've been presented with the following reasoning:
Who cares? We have plenty of bandwidth.
I don't mind having to resolve a conflict each time I get the latest revision, it's just one click
It's so much more convenient than having to think about good ignore files
Also, if I have to add an external DLL to the bin folder now, I can't forget to put it in source control, since the bin folder is no longer ignored.
The simple solution for the last bullet-point is to add the file in a libraries folder and reference it from the project.
Please explain if and why putting generated output files under source control is wrong.
You haven't explained what "the database file" is.
I would certainly include 3rd-party libraries in source control, as they're necessary for the build and it's good to be able to reproduce a build at a later time with the library versions you used at that particular moment. But yes, those libraries should be included from a "libraries" folder rather than the output directory.
I wouldn't generally include my own libraries built from the sources elsewhere in the same repository - although I have been in situations where that's been worth doing, where some projects didn't use the "latest and greatest" version of a common library, but just occasionally updated.
The most important practical argument I'd give against including everything, in a world where disk, processor and network are considered free and instantaneous, is that it makes it harder to tell what really changed for any given commit. It's easier to look down a list of 3 source files than 3 source files and 150 binaries from the obj/bin directories.
Generated output files (in general) are "dangerous" in a VCS because:
what you really need to version is how to regenerate them: the day you actually need to update them, chances are you won't remember how to do it
they can contain private generated settings which make them work on the committer's desktop, but not on a client's ("works on my machine" TM syndrome)
some generated files are not easily stored as deltas (binaries especially), making them consume lots of space (and the topic of cleaning up that space will come up someday...)
External libraries are not generated directly by your project, and can be put in a VCS, although external repositories like a public Maven repo are better at this kind of management.
Do we also put in compiled object files such as class files, executables, and DLLs built from our source? What about when we're doing serious volume testing and that database becomes many gigabytes or terabytes in size?
The clue is in the name: it's a Source Code Management system.
I can understand the simplicity of putting everything in; it's less likely that a developer forgets some important file. But if you're doing regular automated builds, then surely that gets picked up anyway?
I think the key phrase is here:
It's so much more convenient than having to think about good ignore files
Are you explicitly forbidden from having good ignore files? My guess is that you are already excluding .exe and .class (or whatever) files. Suppose you did take the trouble to exclude your database: would that be a problem? Why? It's a conscious action that you are choosing to take for the common good. In Eclipse it's a couple of seconds' work to add a new file type to the workspace's CVS ignore rules for all projects.
A rule of "No Ignore Files" is almost self-evidently absurd. Once you have the freedom to have some ignore files, why not just use them intelligently to exclude the DB? Who is inconvenienced? Only yourself, if anyone, and you're prepared to do the extra work. A sample ignore file is sketched below.
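For comparison, an ignore file covering the kinds of generated output discussed above is only a handful of lines (a .gitignore-style sketch; svn and CVS have equivalent mechanisms, and the database pattern is an example):

    # build output
    bin/
    obj/
    *.class
    *.exe

    # the generated database file (pattern is an example)
    *.mdf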