Improving Fastlane's runtime for Scan -> Gym -> Deploy - swift

I've been trying to include Fastlane in a CI environment where the following should happen:
any commit to the master branch should trigger a test run and then a build into Testflight.
any commit to the development branch should trigger a test run and then a build into Fabric Beta.
any other commit or pull request should trigger a test run.
The lanes are working with all the code signing through match.
In order to avoid building twice I'm building through Gym and then Scan with skip_build: true and clean: false, as mentioned in #3353.
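In outline, the lane looks something like this (a simplified sketch, with a placeholder scheme name):

lane :ci do
  # Build once with gym
  gym(scheme: "MyApp")
  # Then test with scan, without rebuilding or cleaning
  scan(scheme: "MyApp", skip_build: true, clean: false)
end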
Although this does seem to help with the build time, the run still goes over the 50-minute limit on travis-ci.org due to the number of CocoaPods dependencies. (Feel free to check the build logs)
How can this be improved in terms of running time? (Aside from fixing the slow-compiling Swift functions mentioned in #3)
For reference, here's my Fastfile.

One way you can speed up your build phase is to use prebuilt frameworks. It's like importing AVFoundation or any other Apple toolkit into your project.
Try to identify which dependency is slowing the running time down and move it to a prebuilt framework.
Carthage is a nice tool that allows you to use prebuilt frameworks and manage dependencies as well. You can cache Carthage builds on your CI. Check out this great blog post on how you can achieve caching.
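For example, on Travis CI a minimal cache entry in .travis.yml could look like this (a sketch; adjust the directory to wherever your Carthage builds land):

cache:
  directories:
    - Carthage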

I don't know of a way to re-use pre-built derived data across scan, gym and snapshot. The main reason is that those are builds for different architectures, with potentially different xcconfigs.

Related

Improve deployment time of Yocto-based development

In our company we have switched to embedded Linux systems (2 different platforms). Our build is based on Yocto.
Unfortunately we are all quite inexperienced with it. In particular, I have some questions about collaboration within Yocto and about reusing its deployment output (the SDK).
In our company there are multiple teams involved in development:
Team1: Maintaining yocto and the Linux base system
Team2: Developing company-own libraries which will become part of the base system (own git repository, which is added in yocto recipe)
Team3: Developing software (based on Linux base system + company-own libraries)
The problem: Our current development workflow is quite slow.
For example:
There is some bug in a small company-own library LibA.
Team2 commits some bugfix for LibA (small project, a standalone build would take ~2min).
As a next step, a pull request has to update the commit ID in LibA's Yocto recipe (~15min including a small CI build + merge checks).
As soon as the pull request is merged, a release build can be triggered (CI). The result of this build is an SDK and an MfgTool (~120min for all platforms/variants).
Now Team3 has to download the new SDK and update their build to use the new SDK (~5min).
Then they trigger a release build of their software (~10min) which results in a flashable image.
So all in all a small change of LibA takes about 2 hours until it can be integrated by Team3, and another 15min for the complete software to be available to our test team.
What is the recommended standard way for this (I think we are not the only company with this issue)?
How can we improve the yocto build workflow?
Is there any way to avoid having to run the full Yocto build when only LibA changes?
Thank you very much for your help.
I would tackle your main issue first: a build time of 120 minutes seems very long.
I would recommend setting up a source mirror and an sstate mirror. Run one build job each night that populates them, and use a build machine with at least 8 cores and 16-32 GB of RAM.
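As a sketch, the relevant local.conf settings could look like this (the mirror URLs are placeholders for your own servers):

# On the nightly job: generate tarballs that can be shared via the mirrors
BB_GENERATE_MIRROR_TARBALLS = "1"
# On developer/CI machines: pull sources and sstate from the shared mirrors
INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "http://downloads.example.com/sources/"
SSTATE_MIRRORS = "file://.* http://sstate.example.com/PATH;downloadfilename=PATH"
# Match parallelism to the build machine
BB_NUMBER_THREADS = "8"
PARALLEL_MAKE = "-j 8"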

Azure DevOps: Build 1 Project in a Multi-Project Solution

I'm not sure if I'm searching correctly, but I'm hoping I can get some guidance here.
Current Setup
I have one solution with two projects:
Web API project
Node.js project
I'm using Azure DevOps with 2 builds, each with its own release, one for each project. Each build definition triggers only when its respective project is updated/changed.
This works great!
I've noticed that each build actually builds the entire solution. In order not to waste processing power, I'd prefer to have each build definition build only its own project, with a few caveats.
The Web Api project does not depend on changes to the Node.js project; however, the Node.js project can depend on changes to the Web Api. Because of this, if the Web Api fails, I don't want the Node.js project to build/release.
Goals
What I'm trying to do is set up my build definitions so that the Web Api build only builds the Web Api project. Whether it completes or errors out, let the build proceed as it normally does.
However, I want the Node.js build to build both the Web Api and the Node.js projects, so that if the Web Api build fails, the whole build fails even if the Node.js project would not have failed.
I've tried adding a new Visual Studio Build task and selecting only the project, but I got the following error:
[warning]C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(781,5): Warning : The OutputPath property is not set for project 'MyProject.csproj'. Please check to make sure that you have specified a valid combination of Configuration and Platform for this project. Configuration='release' Platform='any cpu'. You may be seeing this message because you are trying to build a project without a solution file, and have specified a non-default Configuration or Platform that doesn't exist for this project.
I'm currently looking on how to fix this, but I want to pose the following questions in case I'm headed in the wrong direction.
Questions
How can I set up the build definition to build only 1 project?
Is there a different configuration I should be creating instead of the one I mentioned?
Build Definition
What I'm trying to do is set up my build definitions so that the Web Api build only builds the Web Api project. Whether it completes or errors out, let the build proceed as it normally does.
This is the case where you want a build definition that is targeting the project file instead of the .sln. Your error is that building the .csproj requires the OutputPath property to have a value, so just add it to the MSBuild Arguments box: /p:OutputPath="$(build.binariesDirectory)\MyProject". Build.BinariesDirectory is a predefined variable, but is otherwise not a required directory value. You can use what makes sense for you.
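If you use YAML pipelines, the equivalent could look like this (a sketch; the project path is hypothetical):

- task: VSBuild@1
  inputs:
    solution: 'src/MyProject/MyProject.csproj'
    configuration: 'Release'
    platform: 'Any CPU'
    msbuildArgs: '/p:OutputPath="$(Build.BinariesDirectory)\MyProject"'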
I want the Node.js build to build both the Web Api and the Node.js projects, so that if the Web Api build fails, then the whole build fails even if the Node.js project would not have failed.
The simplest and least sophisticated way
From what I understand about your situation, this case doesn't require any additional changes. If you build the solution, then the pipeline will fail if either of the projects is broken. The downsides to this are:
The Node.js project doesn't "get" the newest changes to the Web API "dependency" until the Node.js project is changed and CI triggers a build
If you use a build completion trigger to mitigate downside 1 above, the Web API project gets built twice even though we know it should be successful both times
The more complex and sophisticated (but elegant?) way
Set a build completion trigger on the Node.js pipeline that will trigger a build when the Web API pipeline is successful. This is similar to what you have now, with some differences. With both a build completion trigger AND a CI trigger on your Node.js pipeline, the Web API build can succeed regardless of the result of the downstream Node.js build, but you will be building the Node.js project even when no changes have been made to that project explicitly. (This may not be what you want if you're trying to save on agent activity.)
Your Node.js pipeline can then have 2 separate build steps, each targeting one of the project files. However, the step for the Web API project build can have a condition to NOT run if the Build.Reason is BuildCompletion. This allows the Node.js project to be a downstream project of the Web API, but doesn't build the Web API if we already know it's successful.
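In YAML, that condition could look like this (a sketch; the project path is hypothetical):

- task: VSBuild@1
  condition: and(succeeded(), ne(variables['Build.Reason'], 'BuildCompletion'))
  inputs:
    solution: 'src/WebApi/WebApi.csproj'
    configuration: 'Release'
    platform: 'Any CPU'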
Note: depending on how your references work between these projects in this solution, you may need to add other tasks for downloading the build artifacts and what-not to make sure everything is where it should be for building.

How can I make Service Fabric package sizes practical?

I'm working on a Service Fabric application that is deployed to Azure. It currently consists of only 5 stateless services. The zipped archive weighs in at ~200MB, which is already becoming problematic.
By inspecting the contents of the archive, I can see the primary problem is that many files are required by all services. An exact duplicate of those files is therefore present in each service's folder. However, the zip compression format does not do anything clever with respect to duplicate files within the archive.
As an experiment, I wrote a little script to find all duplicate files in the deployment and delete all but one copy of each file. Then I tried zipping the result, and it came in at a much more practical 38MB.
I also noticed that system libraries are bundled, including:
System.Private.CoreLib.dll (12MB)
System.Private.Xml.dll (8MB)
coreclr.dll (5MB)
These are all big files, so I'd be interested to know if there is a way to bundle them only once. I've tried removing them altogether, but then Service Fabric fails to start the application.
Can anyone offer any advice as to how I can drastically reduce my deployment package size?
NOTE: I've already read the docs on compressing packages, but I am very confused as to why their compression method would help. Indeed, I tried it and it didn't. All they do is zip each subfolder inside the primary zip, but there is no de-duplication of files involved.
There is a way to reduce the size of the package, but I would say it isn't a good way, or the way things should be done. Still, I think it can be of use in some cases.
Please note: This approach requires target machines to have all prerequisites installed (including .NET Core Runtime etc.)
When building .NET Core app there are two deployment models: self-contained and framework-dependent.
In the self-contained mode, all required framework binaries are published alongside the application binaries, while in the framework-dependent mode only the application binaries are published.
By default, if the project has a runtime specified in the .csproj, e.g. <RuntimeIdentifier>win7-x64</RuntimeIdentifier>, then the publish operation is self-contained - that is why each of your services copies all the framework binaries.
To turn this off, simply add the <SelfContained>false</SelfContained> property to every service project you have.
Here is an example of new .NET Core stateless service project:
<PropertyGroup>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
  <IsServiceFabricServiceProject>True</IsServiceFabricServiceProject>
  <ServerGarbageCollection>True</ServerGarbageCollection>
  <RuntimeIdentifier>win7-x64</RuntimeIdentifier>
  <TargetLatestRuntimePatch>False</TargetLatestRuntimePatch>
  <SelfContained>false</SelfContained>
</PropertyGroup>
I did a small test and created a new Service Fabric application with five services. The uncompressed package size in Debug was around ~500 MB. After I modified all the projects, the package size dropped to ~30 MB.
The deployed application worked well on the Local Cluster, which demonstrates that this concept is a working way to reduce package size.
In the end I will highlight the warning one more time:
Please note: This approach requires target machines to have all prerequisites installed (including .NET Core Runtime etc.)
You usually don't want to know which node runs which service and you want to deploy service versions independently of each other, so sharing binaries between otherwise independent services creates a very unnatural run-time dependency. I'd advise against that, except for platform binaries like AspNet and DotNet of course.
However, did you read about creating differential packages? See https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade-advanced#upgrade-with-a-diff-package - that would reduce the size of upgrade packages after the initial 200MB hit.
Here's another option:
https://devblogs.microsoft.com/dotnet/app-trimming-in-net-5/
<SelfContained>True</SelfContained>
<PublishTrimmed>True</PublishTrimmed>
From a quick test just now, trimming one app reduced the package size from ~110 MB to ~70 MB (compared to ~25 MB for SelfContained=false).
The trimming process took several minutes for a single application, though, and the project I work on has 10-20 apps per Service Fabric project. I also suspect this process isn't safe when your code relies heavily on dependency injection.
For debug builds we do use SelfContained=false, because developers will have the required runtimes on their machines - but not for release deployments.
As a final note, since the OP mentioned file upload being a particular bottleneck:
A large proportion of the deployment time is just zipping and uploading the package
I noticed recently that we were using the deprecated Publish Build Artifacts task when uploading artifacts during our build pipeline. It was taking 20 minutes to upload 2GB of files. I switched over to the suggested Publish Pipeline Artifact task, and it took our publish step down to 10-20 seconds. From what I can tell, this newer task uses all kinds of tricks under the hood to speed up uploads (and downloads), including file deduplication. I suspect that zipping up build artifacts yourself at that point would actually hurt your upload times.
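For reference, the newer task is declared like this in YAML (a sketch; the path and artifact name are placeholders):

- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'drop'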

Salesforce.com deployment

We are currently working on a Salesforce.com custom APEX project that involves a lot of apex classes, triggers and Visualforce pages. We also have numerous applications from AppExchange that are part of the system.
We develop all the Apex classes, Visualforce pages, etc. in a test environment and then deploy them to the live environment using the Eclipse IDE. What happens is that every time we deploy changes to the live environment, all the test methods of all the classes (including those from AppExchange apps) seem to execute. So deployment of a simple change can end up taking a couple of minutes.
Is there a way in Apex to "package" classes by namespace or something like that, so that when we try to deploy a change, only the test methods relevant to that package are executed? If something like that exists, our deployments could happen much faster.
Unfortunately no, there is no partial testing for deployment of Apex code; every change, no matter how minute or self-contained, triggers a full test run. Among other things, this enforces code metrics (minimum total code coverage, for instance).
IMHO, this is proving to be a two-sided coin when it comes to enforcing code reliability. When we started using Apex, all of our tests were very comprehensive, performing actual testing of the code with lots of asserts and checks. Then we started having very, very long deploy times, so now our tests serve one and only one function: satisfying minimum code coverage. Even with that simplification, it takes almost 3 minutes to deploy anything, and we only use 20% of our Apex code allowance.
IMHO2, Apex is way too slow a coding platform to be enforcing this kind of testing. I can't even imagine how long the tests would run if we reached 50% of our allowance, not to mention more.
This is possible but you'll need to learn about Apache Ant and have a look at the Force.com Migration Toolkit. You can then use a Build file to determine which files are deployed as well as which tests are run.
I'm busy writing a whitepaper that'll touch on this and other related development strategies... I'll post to my blog when it's done.
If you use the Apache Ant migration tool, you have many options for deployment. For example, the deployCodeFailingTest target will skip the test classes.
If you want to run only specific test classes, use something similar to this in your build.xml:
<target name="deployCode">
  <sf:deploy
    username="${sf.username}"
    password="${sf.password}"
    serverurl="${sf.serverurl}"
    deployroot="codepkg">
    <runTest>SampleDeployClass</runTest>
  </sf:deploy>
</target>
For detailed reference, please see this link:
http://www.salesforce.com/us/developer/docs/daas/salesforce_migration_guide.pdf
I would recommend the following approach:
Git as the repository for all your SF code
Jenkins to deploy your code as CI/CD
PMD as the static code analyser
sfdx as the deployment method in Jenkins
Refer to the Trailhead link: https://trailhead.salesforce.com/users/strailhead/trailmixes/architect-dev-lifecycle-and-deployment
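As a sketch, a stage in a declarative Jenkinsfile using sfdx could look like this (the manifest path and credential variable are placeholders):

stage('Deploy') {
  steps {
    // Deploy the components listed in package.xml and run local tests
    sh 'sfdx force:source:deploy -x manifest/package.xml -u "$SF_USERNAME" -l RunLocalTests'
  }
}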

What steps are necessary to automate a build of iPhone app?

I have prior experience building an automated build process for .NET & Delphi projects, but now I want to automate the building of an iPhone project... not only simple builds, but also the final deployment.
I want a generic step list, with the command line actions that need to be performed, so anyone could adapt it to their particular build software.
Also, how do I build with support for 3.0 and 2.0 targets (or, more generally: how do I build for different deployment targets)?
So:
Preparation:
Set up support for application versioning with agvtool.
Build steps:
Check out the source code
Clean the project
Increase the version: agvtool bump -all
If it is for deployment, also run: agvtool new-marketing-version <new version here>
Build the project (how?)
Build the test suite
Run the test suite
What else?
Building the target is the easiest of the pieces. Use xcodebuild. It can easily target separate SDKs. It's also the tool that will build your test suites for you (by using a separate target generally). I recommend relying on xcodebuild as much as possible. I've only seen heartache come from trying to wrap xcodebuild calls with make, jam or ant. You have to build with xcodebuild eventually, so it's worth studying the Xcode Build System Guide and learning to make the most use of it. It's quite powerful. I have a few introductions to configuring it here.
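For example, a minimal command-line sketch (project, target and SDK names are illustrative) that builds the same project against two SDKs:

xcodebuild -project MyApp.xcodeproj -target MyApp -configuration Release -sdk iphoneos2.0 clean build
xcodebuild -project MyApp.xcodeproj -target MyApp -configuration Release -sdk iphoneos3.0 clean build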
Running the test suite is more difficult to automate for iPhone (especially if you need to test on device). There have been other discussions of this. For many apps, you may not be able to fully automate this step.