Swift Package Manager For Development - swift

Question:
Is there a way to create a "development package" with SPM, similar to a development pod in CocoaPods, that will let me make changes to the actual source project of a dependency package (local path)?
Context:
I'm working on a project that needs to be split into three separate projects. One of these projects is shared by the other two (in this case a data model, shared by a server and a client). For the client, since it uses UIKit, I have a development CocoaPods setup that lets me work within the client workspace, make edits to the data model project, and then immediately compile and run. My changes to the data model are then saved in the data model project.
However, for the server, which is built entirely with SPM, if I want to make edits to the data model project (which I want reflected in the client as well), I currently have to make them in the data model project, retag it with a new minor version number, clean the server project, and rebuild. I'd love to just set this up the way I do with CocoaPods.
If I can't do that, is there at least a way to tell SPM to only update one of my dependencies to a new version number (or to the max version as specified within the Package.swift, i.e. a minor version of .4, so if I retag from .401 to .402 it would update)? I would have thought I could do this in the Package.pins, but that doesn't seem to work. Not sure why it's not a hidden file if editing it doesn't affect actual changes.

The concept you call a "development package" is called an editable package in Swift Package Manager:
For the packages which are in the editable state, swift build will always use the exact sources in this directory to build, regardless of its state, git repository status, tags, or the tag desired by dependency resolution. In other words, this will just build against the sources that are present.
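A minimal sketch of how to use it, assuming the shared package is named DataModel and its checkout sits in a sibling directory (both the name and the path are placeholders for your own setup):

swift package edit DataModel --path ../DataModel    # build against the local sources from now on
swift package unedit DataModel                      # return to the pinned, tagged version

Newer SwiftPM manifests can also declare the dependency as a local path directly, e.g. .package(path: "../DataModel"), which skips tagging entirely while you work on both projects side by side.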

Related

Reusing swift package binaries across project configurations (to save disk space)?

I have an iOS project with multiple (around 10) configurations - apart from Debug and Release, there are more configurations which only differ in adding more compilation conditions (like 'simulate free user', 'simulate paid user', 'update more often').
Unfortunately, this causes the Swift packages to be rebuilt for each configuration, and as e.g. Realm's derived data is about 3 GB, this takes up a lot of space (and build time when rebuilding a configuration I haven't needed for a while).
Is there a way to tell Xcode to reuse the packages built for the DEBUG configuration for all other configurations containing DEBUG (e.g. DEBUG FASTUPDATES)?
I think swift packages should not be getting my compilation conditions anyway, is that right?
Well... it's hard for Xcode to detect that two build configurations, with different build settings, would result in the exact same binaries.
Having different user-defined settings values, as you have, can easily lead to different builds in any of the packages. Xcode can't know that unless it goes through every package and checks whether the diff between the build settings results in a diff between the compiled binaries, which would be even slower than simply rebuilding the files.
CocoaPods had support for this, i.e. you could specify which build configuration from the Pods project matched each one from your project. You could try doing a similar setup; however, this would require a non-trivial amount of manual work.
The other alternative would be to extract those user-defined build settings into other forms: user defaults, settings pane, etc. But this also depends on the architecture of your project, and can also be time-consuming/risky.
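As a sketch of that last suggestion, a compilation condition such as SIMULATE_FREE_USER can often be turned into a runtime flag, so that all configurations share the same build settings. The flag name and the UserDefaults key below are invented for illustration:

import Foundation

enum FeatureFlags {
    // Replaces a compile-time SIMULATE_FREE_USER condition with a runtime switch,
    // so an extra build configuration is no longer needed just to flip it.
    static var simulateFreeUser: Bool {
        #if DEBUG
        return UserDefaults.standard.bool(forKey: "simulateFreeUser")
        #else
        return false
        #endif
    }
}

Call sites then check FeatureFlags.simulateFreeUser instead of wrapping code in #if blocks, and the value can be toggled from a debug settings pane or a launch argument.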

Cocoa application with SwiftPM

My general goal is to create an app that grabs all its data from a PostgreSQL database. First, I used the C API libpq to connect to my database. Then I found a nice wrapper around libpq to make my life easier, thanks to Perfect. To install this wrapper, I need to create a Package.swift, add a dependency, and regenerate my xcodeproj with swift package generate-xcodeproj.
But when I do that, the whole structure of my project is rebuilt; as a result, when I run the project, the simulator doesn't start, the build usually fails, and I lose track of what's happening.
(Screenshot: new project structure)
It would be nice if somebody could explain what happens when I generate a project after adding a Package.swift file, and how to keep everything working as new packages are added.
... when I run the project, the simulator doesn't start, the build usually fails, and I lose track of what's happening.
I suppose you already have libpq added and working. If this is an iOS project, try just adding the PostgresSQL.swift source file directly instead of using the package.
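For reference, a minimal manifest for that setup might look roughly like the sketch below. The package name, target name, dependency version and the product name PerfectPostgreSQL are assumptions here, so check the Perfect-PostgreSQL README for the exact values:

// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "MyApp",  // placeholder project name
    dependencies: [
        // Perfect's wrapper around libpq
        .package(url: "https://github.com/PerfectlySoft/Perfect-PostgreSQL.git", from: "3.0.0"),
    ],
    targets: [
        .target(name: "MyApp", dependencies: ["PerfectPostgreSQL"]),
    ]
)

Running swift package generate-xcodeproj then derives a fresh .xcodeproj purely from this manifest and the package's directory layout, which is likely why the previous hand-built project structure appears to be replaced; custom targets and settings have to be reapplied, or kept in a separate app project that links the generated products.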

Rust library development workflow

When developing a library in Rust (+ Cargo), how do I achieve the fast recompile/test cycle?
When developing an app, it's easy, I:
Make changes in the code
Switch to the terminal and run cargo run
See the compiler feedback
But now I want to extract parts of my app as a library and publish it on GitHub.
I would like to continue developing my app, but now with this library as a dependency. I'm going to develop both the library and the app in parallel.
How do I get the same quick feedback now?
Both the library and the app will be developed on the same machine; I would like to make changes to the library, update the app correspondingly, and see the compiler feedback.
I'm guessing I could use my library as a dependency in Cargo.toml and run cargo update each time I want to update my app's dependencies, but this would be somewhat slow because it would have to download the code from GitHub each time and recompile all dependencies.
You can use this somewhat undocumented feature of Cargo. Add the following line to the ~/.cargo/config file (or to /path/to/your/binary/project/.cargo/config to limit the effect to your binary project):
paths = ["/path/to/your/library"]
From now on, every Cargo package (or only those under the /path/to/your/binary/project root) that depends on your library will use /path/to/your/library as its source, regardless of what is specified in that package's manifest, so you can keep the Git repo URL in your program's manifest. Hopefully this feature will be documented in the future.
Update
This is now documented in the Cargo guide.
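A related, fully documented alternative is to declare the library as a path dependency in the binary crate's Cargo.toml while developing (the crate name and path below are placeholders):

[dependencies]
# Use the local checkout while hacking on both crates; switch back to a
# version or git requirement when you publish the binary crate.
mylib = { path = "../mylib" }

With this in place, cargo build picks up library changes immediately, with no cargo update and no download; a [patch] section can achieve the same override without touching the [dependencies] entry.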

MEF Plugin update strategies for winforms

I'm developing an extensible application with MEF. The application will have many types of plugins to collect and process data in different ways.
I'm thinking about building a versioned online repository for the plugins, that will enable the user to download new versions of the plugins when they become available.
It would be nice if MEF could load different versions of the same plugin simultaneously, though from what I understand this isn't possible (correct me if I'm wrong).
So I've resigned myself to the fact I will need to update the plugin and archive the previous version.
What would be the best strategy for doing this?
Example 1
The application downloads a new version of a loaded, running plugin. I can't place the plugin in the plugin directory as there is already a DLL with the same name. So I could rename the new plugin with a version suffix. I can't load the same assembly, so I guess I'll have to force a restart. On restart, it archives the old plugin and loads the new one.
--- This solution seems a little messy
Example 2
The Application downloads a new version of a loaded running plugin.
The plugin is encased in some type of installer.
The installer closes the host gracefully and archives the existing plugin.
The installer installs new plugin and restarts the host app.
--- This also seems a little messy
I am seeking any correction of my assumptions, or any insight into a successful strategy to achieve my goal.
The .NET Framework has a feature called Shadow Copy which allows you to update loaded assemblies. Basically, it copies the assemblies to a temp folder and loads them from there. This way the assemblies located in your application's installation folder will not be locked by the OS and you will be able to replace them. ASP.NET, unit test frameworks and many other applications use shadow copying.
To enable this feature you will need to load your application in a new AppDomain, since you cannot enable shadow copying on the main AppDomain. You can create a simple loader that creates an AppDomain and executes your application there. This is very straightforward. For an example of MEF + Shadow Copy, have a look at Glenn Block's Way of MEF and in particular the PartUpdatesInPlace sample.
Now as far as versioning is concerned you will need to be able to have two or more versions of an assembly loaded at the same time in the same application domain. There are two ways to do this:
Strong named assemblies in the GAC.
Assemblies with the version included in their name (like Plugin.v1.dll). Strong naming is optional in this case but a good idea nonetheless. The advantage of this approach is that two or more versions of a plugin can coexist in the same directory.
Have a look at this answer for an example of MEF + Versioning.
You can even use the recomposition feature of MEF and have your plugin container updated after:
A new plugin assembly is added
A plugin assembly is deleted
A plugin assembly is replaced
Have a look at this question for an example.

Storing third-party framework/middleware into source control that needs to alter your compiler/IDE

I know there are posts that ask how one stores third-party libraries into source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler/IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store header files / libs / JARs, so that they're ready to be linked.
Typically, you simply link libraries to your app, and you are good. But what about middleware / frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically these frameworks/middleware need to generate their own code before your application can link to it.
From the point of view of the developer, ideally he wants to just check out, and everything should be ready to go. But then my IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd party libraries in source control. I have a branch for all the libraries.
2) Back up the entire toolchain used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect, clean developer IDE setup, then make a VMWare / VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way because I often have to wade through Visual Studio 6 code which doesn't build properly.
I think a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work well if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative option is to fail with a clear indication of what went wrong (e.g. "'CORBA_COMPILER_HOME' not set, please set it and try again").
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that is applicable in the general case; how would you feel about that kind of requirement just to build a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
An NAnt script which is started with a bat file. The developer would only have to execute one .bat file; the bat file starts NAnt, and the NAnt script can be made to do anything you need.
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages for us of this approach is:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd-party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change middleware code and recompile it frequently. But this issue could also be solved via a continuous integration server like TeamCity that allows you to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff; if you do, you need a different IDE or a system that generates/modifies IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)
I've written a system (first in Ant/Beanshell at two different places, then rewritten in Python at my current job) where third-party libraries are compiled separately (by someone), then stored and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of each third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and Visual Studio is launched with /useenv.
Each module's setup file checks for the things it needs; anything that requires installing and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it's not there, the cmd file fails with a nice error message. This way, you can also check that the correct version is there.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or the source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzips, and each working copy's temp files
%work%/_cache <- downloaded zips (2 GB)
%work%/_local <- local zips (for development, or retrieved in other ways while travelling)
%work%/_unzip <- unzips of files in _cache (10 GB)
%work%/_content <- textures/3D models and other big files (synchronized manually; this is 5 GB today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using the bat file from d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while when running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
In the repo, only a small set of bootstrap binaries need to be stored (i.e. wget and 7z).
We currently download about 2 GB of packed data, which is unzipped to 10 GB (pdb files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or anything. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes a while, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)
The basics are in a repo on bitbucket but it needs more work before it's ready for the public. Apart from doc and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform.
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. Works well. I do think that CMake is one of the best answers for this, though.