Build system for non-compiled languages - gtk

I created a Python project with GNOME Builder using the GNOME Application template. I noticed that the template generates the entire project structure and adds build capabilities using the Meson build system, which made me curious: why use a build system for a language like Python that isn't compiled?

Build systems aren't only for compiling; they're also for distributing. Full applications often include other data such as CSS files, UI description files, application metadata files, settings schema files, etc. All of these need to be packaged with the application and installed into the right places.
An additional reason is that applications written in a non-compiled language like Python or JavaScript sometimes include a private library written in a compiled language for the parts that are performance-sensitive.
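As a minimal sketch of what this looks like in practice (the project and file names here are hypothetical, not the template's exact output), the meson.build rules for such an app install both the assets and the Python sources:

    project('myapp', version: '0.1.0')

    # Non-code assets go to their standard system locations so the
    # desktop shell and GLib can find them at runtime.
    install_data('data/com.example.MyApp.desktop',
                 install_dir: get_option('datadir') / 'applications')
    install_data('data/com.example.MyApp.gschema.xml',
                 install_dir: get_option('datadir') / 'glib-2.0' / 'schemas')

    # The Python sources aren't compiled, but they still get "built":
    # copied into the interpreter's site-packages directory on install.
    python = import('python').find_installation('python3')
    python.install_sources(['src/main.py', 'src/window.py'], subdir: 'myapp')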

Related

Cross platform build matrix with SBT for Scala + Native project

I'm seeking suggestions for publishing and releasing a Scala/Java project I'm currently working on that uses SBT as the primary build tool. The project structure looks like the following:
|- parent
| |- native
| |- core
Here:
The parent project is the project root; it contains all the plugins and build.sbt, and just acts as a wrapper for the two main submodules, native and core.
The native project contains no Scala/Java code, only Rust code that is accessed using JNI from the core module. This project has its own internal build system, Cargo, which builds native as a system-specific library (.so on Linux, .dylib on macOS and .dll on Windows).
core is the main module that will be consumed by end users; it provides a nice interface that depends on the native code.
Everything is working fine in development, as I'm relying on the sbt-jni plugin, but the final automated release process is what I'm confused about.
What I want to do is the following:
Publish the project as a platform-independent jar to a Maven repository.
This jar should preferably be built from the core submodule, containing the built core code together with all the platform libraries from native. That way, the one jar can work across a matrix of:
Java versions - 8, 11, 17
Scala versions - 2.12, 2.13, 3.x
OS and platforms - macOS (Apple silicon/Intel), Windows and Linux
Preferably not publish the native submodule at all, because it doesn't need to be exposed to end users and can't be used directly without core.
Preferably not rely on Maven dependency classifiers for the platform-specific libraries built from the native module (.so, .dll and .dylib).
Automate all of this on GitHub Actions via a release workflow.
I know that for different Scala versions there will be different final jars with suffixes like _2.12, and this is fine. Java versions can be handled with the -target or --release javac options.
I'm looking for any guidance and suggestions regarding this, as I do not want to end up following some anti-pattern. This is also the first Scala + Rust project that I have tried.
Thanks in advance.
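For reference, a rough build.sbt sketch of the layout described above (the version numbers and paths are assumptions, not a vetted release setup):

    // Sketch only: core is the sole published module and bundles the
    // prebuilt native libraries; the native module is never published.
    lazy val native = (project in file("native"))
      .settings(
        publish / skip := true  // JNI glue only, not usable on its own
      )

    lazy val core = (project in file("core"))
      .settings(
        crossScalaVersions := Seq("2.12.18", "2.13.12", "3.3.1"),
        // Ship every platform's library inside the jar (e.g. under
        // native/<os>-<arch>/), assuming CI has copied them there first.
        Compile / unmanagedResourceDirectories +=
          (ThisBuild / baseDirectory).value / "native" / "prebuilt"
      )

    lazy val parent = (project in file("."))
      .aggregate(native, core)
      .settings(publish / skip := true)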

Eclipse CDT: combine a make project with a cmake one

I have a standard C project in Eclipse CDT. Naturally it uses make. I have decided to add some JSON support to my application so it can load/save its configuration in a readable format that the user (should the desire occur) can alter manually and/or through an external tool. I've looked at two options so far, namely Jansson and JSON-C. The problem is that both use CMake, which, if I recall correctly, can be imported into Eclipse CDT without problems (though CDT itself can't create CMake projects).
I was thinking of adding a script as the pre-build step of my project that runs cmake (as an external command) and builds the JSON library (static and/or dynamic), so that when the build process of my project starts, the library file is already available.
Is there a better way to combine a CMake project with a make project in Eclipse CDT? I know that cmake basically generates a Makefile, but I've never done such a combination before.
Even if there is a JSON C library somewhere out there that uses make (I'm 99.9% sure there is such a thing :D), I'd still like to know how to tackle this situation.
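The pre-build idea is workable. A sketch of such a script, assuming a Jansson checkout under third_party/ (the path and library choice are assumptions):

    #!/bin/sh
    # Build the JSON library with CMake so its static library exists
    # before the main make build of the CDT project starts.
    set -e
    LIB=third_party/jansson
    mkdir -p "$LIB/build"
    cd "$LIB/build"
    cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF
    cmake --build .    # leaves the static library here for make to link

The script can be wired in as the project's pre-build command, or simply invoked from the first target of the existing Makefile.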

Creating NetBeans Platform application bundles

I am developing a NetBeans Platform app. I assume there will be three types of typical users, and each of these groups will use a slightly different set of modules. So I would like to create four different bundles (one for each of the three user types, plus everything). This is similar to what NetBeans itself offers: there are five different downloads (Java SE, Java EE, C/C++, HTML5 & PHP, and All). Note that this is just the default; the user can still download the Java SE bundle and then go to the Update Center and manually install all the plugins from the Java EE bundle.
How is this achieved? Do I have to manually create several different nbproject/project.properties and nbproject/platform.properties files (and then manually keep them up to date) and use external scripts to build the suite with each of them? Or is there some less hacky way?
Create three module suites. Each of them will target one user type and can contain one or more modules. Configure each to use its own cluster (you'll need to look this up in the docs; the README in NETBEANS_HOME/harness can be useful). Then it should be possible to customize the NetBeans installer to build what you need.
NetBeans distributions are the same thing: a set of clusters built together and wrapped with an installer.
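As a rough illustration (cluster.path is the standard suite harness property; the suite names are hypothetical), each bundle's suite lists the clusters it pulls in via nbproject/platform.properties:

    # nbproject/platform.properties - hypothetical excerpt for one bundle
    cluster.path=\
        ${nbplatform.active.dir}/platform:\
        ${nbplatform.active.dir}/harness:\
        ../common-suite/build/cluster:\
        ../usertype-a-suite/build/cluster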

Project management of both mixed and binary files

We develop software which also includes a lot of art assets (binary files). We would like to version-control our source code, but not track changes to the binary files (the artists work separately and upload new art assets). However, one checkout should produce the full tree of both source and art. This is needed because we want to run a continuous integration system afterwards (nightly builds, testing). What would you recommend? Maybe one of the web-hosted apps allows this.
I would recommend storing the binaries in an artifact repository like Nexus: it offers a simple filesystem-based store, with simple directories based on a naming convention.
This is very easy to maintain.
Such a model is linked to the source control through a declarative process: you declare in a file (pom.xml) which version of which artifact you need for your specific version of your code.
That pom.xml file is managed like the rest of your sources.
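For example (coordinates hypothetical), the pom.xml entry tying a source revision to an exact art-bundle version is just:

    <!-- this revision of the code needs exactly this art bundle -->
    <dependency>
      <groupId>com.example.assets</groupId>
      <artifactId>game-art</artifactId>
      <version>1.4.2</version>
      <type>zip</type>
    </dependency>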

How do you store third party libraries in your source control?

How do you store third party libraries that you use in your project in your source control?
When would you store binaries in your source control?
When would you store the code in your source control?
Would you ever store both? In what situations would you do this?
(Btw, I'm using .NET but it doesn't really matter for this question)
How: a vendor branch is generally a good approach (a concrete Subversion sketch follows below)
When (third parties): to minimize the number of repositories involved: you could add those libraries to a separate external repository (like a Maven repository), but that means you need access to that extra repository from each of your environments (development - integration - homologation - preproduction - production)
When (code): to manage the complexity of changes, when you know that updates and fixes to the versions currently running in production will be needed while new development is in progress.
Why (store both): for deployment reasons: you can manage a complete configuration (the list of elements you need) in one repository and query it wherever and whenever you need it, for:
development (you query what you need to develop and execute your code, including the third parties needed for compilation/execution)
tests (integration, homologation): you query the exact tags you want to update your testing workspace with
production: you identify exactly what goes into production from one source: your SCM.
For test and production environments, that also means your own product (the packaged result of what you are building) should go into your SCM as well (only the official releases, not the intermediate ones used internally).
If other projects depend on your product, they will build their own project against your packaged version stored in the SCM, not against your source code somehow recompiled.
Why is this important?
Because in the end, what will run in production is that packaged version of your product, not your "source code re-compiled". Hence the importance of doing all your tests on the final, target form of your product, clearly stored and tagged in your SCM.
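To make the "how" above concrete, here is the classic Subversion vendor-branch flow (the repository URLs and library name are hypothetical):

    # Import the third-party release into a vendor area, then tag it;
    # your own tree copies/merges from the vendor tag it depends on.
    svn import libfoo-1.2/ http://svn.example.com/repo/vendor/libfoo/current \
        -m "Import libfoo 1.2"
    svn copy http://svn.example.com/repo/vendor/libfoo/current \
             http://svn.example.com/repo/vendor/libfoo/1.2 \
        -m "Tag libfoo 1.2"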
Martin Lazar raises a legitimate point in his answer:
Source control is called "source" control, because it is supposed to control sources.
While that may have been historically true, every current RCS has evolved toward SCM (Source Code Management), which does not just control sources, but also manages changes to documents, programs, and other information stored as computer files.
Binaries can then be stored too (even stored with binary deltas).
Plus, that allows some of those SCMs to offer S"C"M features (as in Source Configuration Management).
That SCM (Configuration) not only stores any kind of "set of files", but also the relationships (a.k.a. dependencies) between those sets, so that you can query one set of files and "pull" every other delivery that that set depends on (to build, or to deploy, or to run).
How do you store third party libraries that you use in your project in your source control?
As binary or source or both. Depends on the library.
When would you store binaries in your source control?
A third-party library for which we don't have the source, or an internal library which we haven't made any changes to, or a library that is too huge to be built around.
When would you store the code in your source control?
Say we use an internal library A but have made some bug fixes specific to product X. Then product X's depot will keep the source and build it.
Would you ever store both? In what situations would you do this?
Yes, all the latest binaries are stored in source control.
So the tree looks like this:
product
|-- src
|-- build
|-- lib
|   |-- 3rdparty
|   |-- internal
...
Assuming you are using .NET:
I create a "Libraries" folder in my project and in source control that contains any third-party assemblies.
My solution then references those assemblies and my build process pulls that folder down to our build server.
Anyone pulling your code from source control should be able to compile it without having to hunt down references and assemblies.
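In the project file, such a reference looks roughly like this (the assembly name is hypothetical):

    <!-- .csproj excerpt: reference an assembly checked in under Libraries\ -->
    <Reference Include="ThirdParty.Json">
      <HintPath>..\Libraries\ThirdParty.Json.dll</HintPath>
      <Private>True</Private> <!-- copy to the output folder at build time -->
    </Reference>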
Source control is called "source" control, because it is supposed to control sources.
In Java it's a common pattern to use some version control system to store sources and other resources like configuration XML files or pictures, and then to use some dependency management tool like Apache Maven, which will store, download and manage your project's dependencies on 3rd-party libraries. Then when you reinstall your OS, Maven can automatically download your dependencies from the central repository (or from your own private repositories as well) and store them in a local cache on your disk. You don't even have to know where the cache is :)
Maven can also be used with other languages, and as far as I know plugins for .NET and C/C++ are available, but I haven't used it with anything other than Java.
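Declaring such a dependency takes a few lines in pom.xml (a real, commonly used library is shown here); Maven resolves it into the local cache, by default ~/.m2/repository:

    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.12.0</version>
    </dependency>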
I don't put 3rd party source or binaries in SC. My rationale was that I would not have to update SC just to update the libraries. I am starting to regret this though. When I have had to recreate the project I find myself running around looking for the lib sources instead of just syncing to SC.
On a recent Java project I switched to using Maven - this was quite nice since it meant I didn't need to store any third-party jars in a lib/ directory. At compile time Maven would pull in the dependencies. One nice side effect is that the jars have the version number in their filename.
My experience has been to create a "lib" folder and keep all third-party binaries in there. I will create a totally separate tree for the source code of these third parties if it is available.
One place where this might differ is open-source vs. retail third parties: with open-source solutions I tend to just include the code in my projects and not check in the binaries.
You don't need to store third party libraries in your source control repository. Those libraries (think of SDL, libcurl, etc.) should always be available on the web.
Just two recommendations:
make sure to state clearly in your code which version of the library you should compile against
be sure that that specific version is always available on the web
Generally speaking, I would do roughly what has been prescribed by other users.
In the case of Subversion, and I admit I can't speak to the inclusion of the feature in the case of other systems, one can use an External to link in a repository maintained elsewhere. In a checkout/export, you'll get a full copy of the source including the updated code from that external repository all in one go.
This was particularly nice for me with PHP/JS development as there is no issue regarding storing binaries. I'd keep an external of the Zend Framework, Doctrine ORM, and jQuery all in a /lib/ folder of the project. Every update would give me a complete, updated copy of -all- the necessary files without any more work than adding that repository URL to an svn property.
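Setting that up is a single property on the lib folder (the repository URL here is hypothetical):

    # svn:externals, old format "<local dir> <URL>": pin Zend into lib/Zend
    svn propset svn:externals \
        'Zend https://svn.example.com/zf/tags/1.11/library/Zend' lib
    svn update                    # now also pulls the external into lib/Zend
    svn propget svn:externals lib # inspect what is linked in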
If you are using git (which I recommend), and you have the source of the third party library, then storing the library in its own git repository and including it as a submodule is an excellent setup.
http://book.git-scm.com/5_submodules.html
http://git-scm.com/docs/git-submodule
If the source library is also using git then you can clone their library and push it to your own server so you'll never lose it.
Git submodules allow you to specify which revision of a library is required for a project, which is great for maintaining compatibility.
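In practice (the URL and tag are hypothetical):

    # Add the library as a submodule, pinned to the revision the project needs
    git submodule add https://example.com/vendor/libfoo.git lib/libfoo
    git -C lib/libfoo checkout v1.2.0
    git add .gitmodules lib/libfoo
    git commit -m "Vendor libfoo at v1.2.0"

    # Consumers get exactly that pinned revision in one step
    git clone --recurse-submodules https://example.com/you/yourproject.git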
Conceptually, you need to store at least the binaries (and the headers, if you work in C/C++).
That's usually the only way with third party libraries where you don't have the source.
If you have the source you can opt to store the source and build the third party libraries in your build process.
You should be able to install a fresh OS, get your sources from source control, build, and run. So yes, you should put them in source control.
It depends on how big they are. When binaries or installers are too big, it can cause havoc for remote users. The upside of storing binaries and installers is that everything a developer needs to get up and running is in source control and the versions are correct. If you have a separate installation location, versions can get messed up. So, in general I like to store small or moderate binaries in source control, but larger ones I leave out.
Edit: Oh, and I call mine "BinRef" :)
When would you store binaries in your source control?
I store binaries in source control when I want the ability to quickly revert to an old version of an application. There are a number of reasons I would do this, including the following:
Application testing in an environment that exactly matches production is not always possible.
The previous programmer may not have used source control or may have used it incorrectly. So it can be difficult to be sure the source code you are changing is the version that matches what the user is working with.
On source code and compiled libraries (most likely standalone):
If I don't use the third-party components (compiled libraries) or the provided source to build in-house software components, I simply take them off the shelf and install them as prescribed (which might include a compile; PL/SQL code, for example). I would not install them in a dependency management repo or any source control system, for the simple reason that I don't want them incorporated, accidentally or otherwise, in components I'm building, and I don't want to track them in any software cycle. They are an asset (a software asset) that should be tracked with other tools. If I'm not using them for software development, I don't and shouldn't see them in the tools I use for software development.
If I depend on them for building my own software, I would store them all day long.