develop custom package, avoid rsyncing source to the build directory - buildroot

I have a custom package whose source code lives directly in the package directory.
At the moment Buildroot copies the source code to the build directory.
Is it possible to avoid this unnecessary overhead? The Makefile supports srcdir != builddir.
In pre-2011 Buildroot it was possible to specify the SRCDIR and nothing was rsynced.

This is unfortunately not supported right now. We are working on supporting out-of-tree builds on a per-package basis in order to solve this issue. A first prototype was sent to the list some time ago, but it needs more work.
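For reference, a minimal sketch of the kind of package definition in question, using the generic-package infrastructure with a local source tree (the package name and paths are made up); with this setup Buildroot still rsyncs the source into the build directory before building:
# package/mypkg/mypkg.mk (hypothetical)
MYPKG_VERSION = 1.0
MYPKG_SITE = $(TOPDIR)/package/mypkg/src
MYPKG_SITE_METHOD = local

define MYPKG_BUILD_CMDS
	$(MAKE) $(TARGET_CONFIGURE_OPTS) -C $(@D)
endef

define MYPKG_INSTALL_TARGET_CMDS
	$(INSTALL) -D -m 0755 $(@D)/mypkg $(TARGET_DIR)/usr/bin/mypkg
endef

$(eval $(generic-package))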


How to download a specific version of Lens (k8slens)

I want to install a previous, specific version of k8slens (https://k8slens.dev/), but I cannot find it anywhere (neither for Mac nor Windows). Do I have to download the source code and rebuild it? Even then, there is no "install" section in the Makefile.
Why is it so difficult to find a specific version?
Yes, you can easily download the source code for a specific version tag and then compile and use it. The list of tags is here.
Once you get the source code of your desired version you can generate the binary with:
make build
And then simply run that binary to get your required version. Just know that "install" simply means copying a compiled binary to a known path on the system so it can be executed. There is nothing special about it.
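For example, a rough shell sketch of that approach (the tag name here is just an illustration; pick whichever tag you need from the tag list):
git clone https://github.com/lensapp/lens.git
cd lens
git checkout v4.2.3
make build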
The question was asked some time ago, but just in case you haven't found an answer yet: one thing I did to solve this problem was to go to the Lens repo (https://github.com/lensapp/lens/releases) and search through the old releases looking for a binary asset (.exe) (the newer releases provide the source code but not the binary). I managed to find the binary of version 4.2.3 (released Apr 26, 2021).
Worked perfectly for me. Hope it helps

error: unable to spawn process (Argument list too long) in Xcode Build

I am getting this error:
"error: unable to spawn process (Argument list too long)
** ARCHIVE FAILED **
The following build commands failed:
CompileSwiftSources normal arm64 com.apple.xcode.tools.swift.compiler
(1 failure)
Exitcode =65 "
I went through this link:
Xcode export localization throws error "Argument list too long"
This article provides a good temporary solution to the problem, suggesting to reduce the path hierarchy. But this does not seem like an appropriate approach. Can anyone suggest a different way of solving this problem?
In my case, it was about custom configurations in .xcconfig files.
My config files included the Pods configurations, like this:
// Development.xcconfig
#include "Pods/Target Support Files/Pods-MyProject/Pods-MyProject.debug (development).xcconfig"
#include "Pods/Target Support Files/Pods-MyProjectTests/Pods-MyProjectTests.debug (development).xcconfig"
#include "Pods/Target Support Files/Pods-MyProject/Pods-MyProject.release (development).xcconfig"
#include "Pods/Target Support Files/Pods-MyProjectTests/Pods-MyProjectTests.release (development).xcconfig"
// Production.xcconfig
#include "Pods/Target Support Files/Pods-MyProject/Pods-MyProject.debug (production).xcconfig"
#include "Pods/Target Support Files/Pods-MyProjectTests/Pods-MyProjectTests.debug (production).xcconfig"
#include "Pods/Target Support Files/Pods-MyProject/Pods-MyProject.release (production).xcconfig"
#include "Pods/Target Support Files/Pods-MyProjectTests/Pods-MyProjectTests.release (production).xcconfig"
This produced the error you mentioned when I added Firebase pods to my Podfile.
So to make this compile again I had to:
remove all the inclusions (#include ...),
set them explicitly in Project -> Info -> Configurations, as follows:
Quick tip:
If you don't want to set up the corresponding target configurations manually (those with a red icon), mark them as None and run pod install. This will set them automatically for you.
A few days ago I faced a similar challenge. I want to provide details and share my research with the SO community.
First of all, I found this thread and followed the link in the question.
And yes, that's right, the answer marked in the link is correct, but its solutions did not suit me.
Problem
In my case, I had this problem after I changed the folder hierarchy in my project to make it more convenient for me.
#oOEric's option did not suit me, because according to the rules the hierarchy of groups in Xcode should match the hierarchy of folders on disk.
But I already had about 1680 Swift files to compile.
The problem was that the paths to the compiled files were too long and there were too many of them.
Research
Then I started researching and found a Swift JIRA issue describing the same bug.
Here some links:
Main
Linked Issue 1
Linked Issue 2
Linked Issue 3
Bug on Open Radar
But I didn't find a solution for my case there.
Most of all, I was pleased by this response from the Swift developers:
Again, this is an Xcode-side issue, not a Swift-side issue. Commenting here won't make the Xcode engineers work any faster!
(We're not all the same people at Apple.)
Okay, after this answer I was finally convinced that if it is an Xcode bug, then the solution should be sought in Xcode.
Solutions
Temporary solution
You need to move your project higher in your filesystem hierarchy.
I chose this one because I have a really big project and the other solutions would have required more than a day from me.
In my case, I experimented and found that the path to the project should be no more than about 50 characters.
But this is a temporary solution. If your project grows further, you will have to shorten the path again or use the other solutions.
Cocoa Touch Framework target
This solution is suitable for files that do not use dependencies.
First of all, you need to add a Cocoa Touch Framework as a target to your main project.
This target should be added automatically to Embedded Binaries and Linked Frameworks and Libraries.
After this, find some files without dependencies and change their target membership to your "TestTarget".
Don't forget that classes, properties, methods, enums and protocols from the Cocoa Touch framework must have open or public access.
And don't forget to clean your DerivedData folder.
Modular iOS
This solution has a more integrated approach.
If you want to use any dependencies in your Cocoa Touch Frameworks, you should follow this guide and do a more complex refactoring of your big project!
Link to solution
I think this is the best solution.
I hope this big answer will help someone!
I solved this by setting the build system to Legacy Build System:
File -> Workspace Settings... -> Build System -> Legacy Build System
I solved this by reducing the hierarchy of groups in Xcode.
e.g. the original files were at project_name/project_name/About/Model/Text;
I removed the groups "Model" and "Text" and moved the files under project_name/project_name/About/
I made a simple script as a temporary fix for this problem: https://github.com/gregoryvit/flatter
It simply moves all the Swift files in an Xcode project to the root group.
Error - unable to spawn process (Argument list too long)
There are many reasons for this error. Some of them are mentioned below:
Your project might have many Swift files (say more than 2000)
Most of the Swift source files may be deeply nested inside directories
Many of these files may have absolute paths with more than 150 characters (e.g. /Macintosh HD/Users/jayprakashnd/mySampleProject/Module1…)
The Xcode Swift compiler passes the absolute paths of all source files while compiling, so the ARG_MAX limit is reached and the build fails.
This has been addressed in Xcode 11, where a flag allows an effectively unlimited number of Swift files.
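As a quick sanity check (not a fix), you can see the limit on your machine from a terminal:
getconf ARG_MAX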
Solutions:
Switch to Xcode 11 and add USE_SWIFT_RESPONSE_FILE set to YES in Build Settings -> User-Defined section (see the xcconfig sketch below)
If you cannot switch to Xcode 11, then take a new checkout of your project in the Macintosh HD ▸ Users directory with a folder name as short as possible.
Solution 2 worked for me like a charm!
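If you manage build settings through xcconfig files, the same user-defined flag can be set there as well (a sketch; the file name is just an example):
// MyApp.xcconfig
USE_SWIFT_RESPONSE_FILE = YES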
It happened to me when I was using an Xcode 11 beta with Live Preview. I solved it by restarting Xcode; after that the error was gone.
I fixed this issue by moving my Xcode project folder to the Mac root and renaming the folder to something with fewer characters.
Terminal: cd /
Then rename the folder to something short like BX (example).
In Xcode 11:
Build Settings
User-Defined
Add setting: USE_SWIFT_RESPONSE_FILE
Set value: YES
Doing this you allow Xcode to handle more files than would otherwise be allowed, but I'm not sure this always solves the problem.
I changed the build system to Legacy and it also resolved the problem for me, both locally and on our CI builder. In my case it was triggered during the Podfile.lock and Manifest.lock check. That step could probably be skipped in our pipeline, since we always install pods on CI.
If you faced this issue in your Flutter project while building in Release mode (or archiving), check out this answer of mine: https://stackoverflow.com/a/61446892/5502121
Long story short:
set your build system to New Build System in File > Project Settings…
remove ios and build_ios folders
run flutter create . to init a new ios module
run pod install
run flutter pub get
check your Xcode build config (it should be Release mode and Generic iOS Device)
and you're good to go
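The same steps as a shell sketch, run from the Flutter project root (note that removing ios/ discards any manual changes made there):
rm -rf ios build_ios
flutter create .
(cd ios && pod install)
flutter pub get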
I had the same problem. I fixed it with a temporary solution, but it worked for me.
My solution is to change the Derived Data folder to a directory with a shorter path.
The steps are as follows:
Xcode -> File -> Workspace Settings... -> select Custom Location for Derived Data and give a shorter path as the location.

install titanium module from github

I'm having some issues getting modules to work in my app - I keep being told that the requested module cannot be found.
It is entirely possible that I'm not installing the modules correctly - so, for the purposes of this question:
Once I've downloaded the zip file from GitHub using the green "download" button, what do I do to import the module into my project? Not how do I tell tiapp.xml to use the module - just what do I do to install it?
Can you please run through it using the Facebook module found at https://github.com/appcelerator-modules/ti.facebook
Thanks!
Inside the downloaded zip file you'll see there is a folder called modules. This is the same folder that is in the root of your project.
So, an iOS module should be installed in the /modules/iphone folder. Once added, you can add Facebook to your app like this:
<module platform="iphone">facebook</module>
If you want to specify a version you can do so like this:
<module platform="iphone" version="1.1.0">facebook</module>
Note: I made up the version number.
You can also add it through the tiapp editor in Appcelerator Studio, although it doesn't always seem to find the module. This might be a bug in Studio though; usually it works great.
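For orientation, the unzipped module should end up in a versioned folder inside the project, roughly like this (the module id and version number here are just examples):
MyProject/
  tiapp.xml
  modules/
    iphone/
      facebook/
        5.0.0/
          manifest
          ...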
HMMMM
Two main issues here, one directly relating, the other less so.
Issue 1
The link I gave to get the codebase from GitHub is wrong - well, it gets the codebase, but not in a form that can be used as a module. It is, in fact, the uncompiled version.
Versions for download can be found here.
So that takes care of issue one, what about
Issue 2
The latest version for use is a bit broken. It seems someone (from the Appcelerator team?) decided to give the latest 6.0.1 release a minsdk of the (at this time) as-yet-unreleased version 6.0.0, and set the apiversion to 3.
This breaks the current 5.5.1 release, so anyone reading this before Appcelerator 6.0.0 is out will want to use this release version.

Storing third-party framework/middleware into source control that needs to alter your compiler/IDE

I know there are posts that ask how one stores third-party libraries in source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler/IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source, I only store the header files / libs / JARs, so that they're ready to be linked.
Typically, you simply link libraries to your app, and you are good. But what about middleware/frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically these frameworks/middleware need to generate their own code before your application can link to it.
From the point of view of the developer, ideally they want to just check out and have everything ready to go. But then the IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd party libraries in source control. I have a branch for all the libraries.
2) Back up the entire toolchain that was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, which makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect, clean developer IDE setup, then make a VMware / Virtual PC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way because I often have to wade through Visual Studio 6 code which doesn't build properly.
I think that a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work well if your software needs attended installs, but I would try to avoid such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative option is to fail with a clear indication of what went wrong (e.g. 'CORBA_COMPILER_HOME not set, please set it and try again').
All that said, the most complete solution is of course to ship everything with your app (i.e OS, IDE, the works), but I doubt that that is applicable in the general case, how would you feel about that type of requirements to build a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
A NAnt script which is started with a bat file. The developer would only have to execute one .bat file; the bat file could start NAnt, and the NAnt script could be made to do anything you need.
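A minimal sketch of that one-step entry point (all paths and file names here are made up):
@echo off
rem build.bat - the only thing a developer has to run
tools\nant\bin\NAnt.exe -buildfile:build\default.build %*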
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages for us of this approach is:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change the middleware code and recompile it frequently. But this issue could also be solved via a continuous integration server like TeamCity that allows you to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
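The project repository would then just carry the prebuilt output in a versioned layout, something like this (names are purely illustrative):
thirdparty/
  middleware/
    1.4.2/
      include/
      lib/
      bin/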
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff; if you do, you need a different IDE or a system that generates/modifies the IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)
I've written a system (first in Ant/BeanShell at two different places, then rewritten in Python at my current job) where third-party packages are compiled separately (by someone), stored, and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied into a single directory for the current repo), and then Visual Studio is launched with /useenv.
Each module's file checks for the stuff it needs, and anything that needs installing and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it isn't there, the cmd file will fail with a nice error message. This way, you can also check that the correct version is in there.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or the source checkout, at least if you're doing heavy C++.
%work% <- local store for all temp files, unzip, and for each working copy's temp files
%work%/_cache <- downloaded zips (2 gb)
%work%/_local <- local zips (for development, or retrieved in other ways while travelling)
%work%/_unzip <- unzips of files in _cache (10 gb)
%work%/_content <- textures/3d models and other big files (synchronized manually; this is 5 gb today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using the bat file d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while if running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
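A rough sketch of what such a Visual Studio.cmd might look like (paths, library versions and the solution name are made up; devenv's /useenv switch makes it pick up INCLUDE/LIB/PATH from the environment):
@echo off
rem point the environment at the unzipped third-party packages for this working copy
set INCLUDE=%work%\_unzip\boost-1.37;%INCLUDE%
set LIB=%work%\_unzip\boost-1.37\lib;%LIB%
start "" "%VS90COMNTOOLS%..\IDE\devenv.exe" /useenv MySolution.sln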
In the repo, only a small set of bootstrap binaries need to be stored (i.e. wget and 7z).
We currently download about 2 GB of packed data, which is unzipped to 10 GB (PDB files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or anything. A new developer can set up their computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes a while, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not take up more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)
The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform.
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. It works well. I do think that CMake is one of the best answers for this, though.

What artifacts to save for a released build?

So, I now know what to save from nightly builds. What about when I give something to customers?
For example, I probably want to save debugging information (e.g. PDB).
What else?
We use:
installers
binaries
pdbs
tag of source files
any other source files that might not be in svn - for example config.status
build log
You made me wonder if I'm missing anything important
Compiler and library version information (it may not be part of the build log). Somebody else already mentioned the binaries themselves.
Linker map file (it can sometimes help with remote debugging of a problem).
Unstripped executable (if, on a Unix system, you strip the executable before making it available to clients).
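One common way to keep both with a GNU toolchain is to split out the debug info before stripping, so the full symbols can be archived alongside the release (a sketch; 'myapp' is a placeholder):
objcopy --only-keep-debug myapp myapp.debug
strip --strip-debug --strip-unneeded myapp
objcopy --add-gnu-debuglink=myapp.debug myapp
# archive myapp.debug with the release artifacts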
For the SDK releases we do include:
PDB and XML for the libraries (packaged with the latest snapshot of the samples)
Packaged snapshot of sources from SVN (just because we can)
Link to the online documentation (docs are generated from the source automatically)
Trace messages don't necessarily need to be generated by default, but the possibility to enable them can be very helpful.
Results and information generated from ATPs that are run on the build (probably as part of the build process).