How to copy pods from one Xcode project to another? - iPhone

I usually use the same pods for multiple projects; however, I find myself having to run pod install each time. Isn't there a way to reuse existing pods for every new project?
I know that in Rails the way to do this is to copy Gemfile.lock and then run bundle install, which avoids having to download all the gems (i.e. packages/libraries) from their respective repos. Further, Podfile.lock is pretty much CocoaPods' counterpart to Gemfile.lock (i.e. it keeps track of the specific pod versions that got installed). I'm guessing there must be a way to transfer specific pods from one project to another, similar to how it's done in Rails.
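For what it's worth, the workflow is essentially the same as with Bundler: copy the Podfile and Podfile.lock into the new project and run pod install, which resolves to the exact versions recorded in the lockfile. A minimal sketch (the OldProject/NewProject directory names are only placeholders):

# Copy the dependency manifest and the lockfile into the new project
cp OldProject/Podfile OldProject/Podfile.lock NewProject/
# Install from the lockfile; versions are pinned to what Podfile.lock records
cd NewProject && pod install

CocoaPods also keeps a local download cache (under ~/Library/Caches/CocoaPods), so repeated pod install runs normally don't re-download pods that have already been fetched.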

Related

Persistent volume change: Restart a service in Kubernetes container

I have an HTTP application (Odoo). This app supports installing/updating modules (addons) dynamically.
I would like to run this app in a Kubernetes cluster, and I would like to install/update the modules dynamically.
I have 2 solutions for this problem. However, I was wondering if there are other solutions.
Solution 1:
Include the custom modules with the app in the Docker image
Every time I make a change to a custom module, I push it to a Git repository. Jenkins pulls the changes, creates a new image, and then applies the changes to the Kubernetes cluster.
Advantages: I can manage the Docker image versions and restart from a known image if something happens.
Drawbacks: This solution is not bad for production; however, the list of all custom module repositories has to be included in the Dockerfile. Suppose I have two custom modules, each in its own repository: a change to either one will lead to a rebuild of the whole Docker image.
Solution 2:
Have a persistent volume that contains only the custom modules.
If a change is made to a custom module it is updated in the persistent volume.
The changes then need to be applied to each pod running the app (I don't know how exactly, maybe by doing a restart).
Advantages: Small changes don't trigger an image build, and we don't need to recreate the pods each time.
Drawbacks: Controlling the versions of each update is difficult (I don't know whether Kubernetes provides version control for persistent volumes).
Questions:
Is there another solution to solve this problem?
For both methods, there is a command that should be executed in order for the module changes to be taken into account: odoo --update "module_name". This command should include the module name. For solution 2, how do I execute a command in each pod?
For solution 2, is it better to restart the app service (odoo) instead of restarting all the nodes? Meaning, if we can execute a command on each pod, we can just restart the app's service.
Thank you very much.
You will probably be better off with your first solution, especially if you already have the whole toolchain to rebuild and deploy images. It will be easier for you to roll back to previous versions and also to troubleshoot (since you know exactly which version is running in each pod).
There is an alternative solution that is sometimes used to provision static assets on web servers: you can add an emptyDir volume and a sidecar container to the pod. The sidecar pulls the changes from your plugin repositories into the emptyDir at a fixed interval. Finally, your app container, sharing the same emptyDir volume, will have access to the plugins.
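As a rough sketch of what such a sidecar could run (the repository URL, mount path, and interval are placeholders):

# Hypothetical sidecar command: keep the addons in the shared emptyDir up to date
git clone https://example.com/custom-modules.git /mnt/addons || true
while true; do
  git -C /mnt/addons pull
  sleep 60
done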
In any case, running the command to update the plugins is going to be complicated. You could do it at a fixed interval, but your app might not like it.
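To address the "execute a command in each pod" question directly: kubectl can do that with a label selector. A sketch, assuming the pods carry an app=odoo label (both the label and the module name are placeholders):

# Run the update command in every pod matching the label
for pod in $(kubectl get pods -l app=odoo -o jsonpath='{.items[*].metadata.name}'); do
  kubectl exec "$pod" -- odoo --update "module_name"
done

Whether running odoo --update next to the already-running server process works cleanly depends on your setup, which is part of the complication mentioned above.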

How can I make Service Fabric package sizes practical?

I'm working on a Service Fabric application that is deployed to Azure. It currently consists of only 5 stateless services. The zipped archive weighs in at ~200MB, which is already becoming problematic.
By inspecting the contents of the archive, I can see the primary problem is that many files are required by all services. An exact duplicate of those files is therefore present in each service's folder. However, the zip compression format does not do anything clever with respect to duplicate files within the archive.
As an experiment, I wrote a little script to find all duplicate files in the deployment and delete all but one copy of each. Then I tried zipping the result, and it comes in at a much more practical 38MB.
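For reference, that kind of duplicate scan can be approximated with a one-liner, assuming the package has been extracted to pkg/ and GNU coreutils are available:

# Hash every file and group identical hashes; each group of matching hashes is a set of duplicate files
find pkg/ -type f -exec sha256sum {} + | sort | uniq -w64 --all-repeated=separate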
I also noticed that system libraries are bundled, including:
System.Private.CoreLib.dll (12MB)
System.Private.Xml.dll (8MB)
coreclr.dll (5MB)
These are all big files, so I'd be interested to know if there is a way for me to only bundle them once. I've tried removing them altogether, but then Service Fabric fails to start the application.
Can anyone offer any advice as to how I can drastically reduce my deployment package size?
NOTE: I've already read the docs on compressing packages, but I am very confused as to why their compression method would help. Indeed, I tried it and it didn't. All it does is zip each subfolder inside the primary zip; there is no de-duplication of files involved.
There is a way to reduce the size of the package, but I would say it isn't a good way, or the way things should be done; still, I think it can be of use in some cases.
Please note: This approach requires target machines to have all prerequisites installed (including .NET Core Runtime etc.)
When building .NET Core app there are two deployment models: self-contained and framework-dependent.
In self-contained mode, all required framework binaries are published along with the application binaries, while in framework-dependent mode only the application binaries are published.
By default, if the project has a runtime specified in .csproj, e.g. <RuntimeIdentifier>win7-x64</RuntimeIdentifier>, then the publish operation is self-contained - that is why each of your services copies all the things.
In order to turn this off, you can simply add the <SelfContained>false</SelfContained> property to every service project you have.
Here is an example from a new .NET Core stateless service project:
<PropertyGroup>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
  <IsServiceFabricServiceProject>True</IsServiceFabricServiceProject>
  <ServerGarbageCollection>True</ServerGarbageCollection>
  <RuntimeIdentifier>win7-x64</RuntimeIdentifier>
  <TargetLatestRuntimePatch>False</TargetLatestRuntimePatch>
  <SelfContained>false</SelfContained>
</PropertyGroup>
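The same choice can also be made at publish time instead of in the project file, e.g. (a sketch; the configuration and runtime identifier are whatever your project uses):

dotnet publish -c Release -r win7-x64 --self-contained false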
I did a small test and created a new Service Fabric application with five services. The uncompressed package size in Debug was around ~500 MB. After I modified all the projects, the package size dropped to ~30 MB.
The deployed application worked well on the local cluster, which demonstrates that this is a working way to reduce package size.
In the end I will highlight the warning one more time:
Please note: This approach requires target machines to have all prerequisites installed (including .NET Core Runtime etc.)
You usually don't want to know which node runs which service and you want to deploy service versions independently of each other, so sharing binaries between otherwise independent services creates a very unnatural run-time dependency. I'd advise against that, except for platform binaries like AspNet and DotNet of course.
However, have you read about creating differential packages? https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade-advanced#upgrade-with-a-diff-package - that would reduce the size of upgrade packages after the initial 200 MB hit.
Here's another option:
https://devblogs.microsoft.com/dotnet/app-trimming-in-net-5/
<SelfContained>True</SelfContained>
<PublishTrimmed>True</PublishTrimmed>
From a quick test just now, trimming one app reduced the package size from ~110 MB to ~70 MB (compared to ~25 MB for SelfContained=false).
The trimming process took several minutes for a single application, though, and the project I work on has 10-20 apps per Service Fabric project. Also, I suspect that this process isn't safe when your code relies heavily on dependency injection.
For debug builds we do use SelfContained=False, because developers will have the required runtimes on their machines - but not for release deployments.
As a final note, since the OP mentioned file upload being a particular bottleneck:
A large proportion of the deployment time is just zipping and uploading the package
I noticed recently that we were using the deprecated Publish Build Artifacts task when uploading artifacts during our build pipeline. It was taking 20 minutes to upload 2GB of files. I switched over to the suggested Publish Pipeline Artifact task, and it took our publish step down to 10-20 seconds. From what I can tell, this newer task uses all kinds of tricks under the hood to speed up uploads (and downloads), including file deduplication. I suspect that zipping up build artifacts yourself at that point would actually hurt your upload times.

How do Strongloop process manager versions and rollback functionality work

This evening I noticed my staging server ran out of disk space.
When investigating, I saw that each time I deploy my loopback-js app with StrongLoop Process Manager, it installs a brand-new copy of the app in a new folder.
After deploying 20 times, I have 20 versions, each taking up 140 MB.
I assume those folders make it easy to switch between versions, but I cannot figure out how I should do that with strong-pm, whether I can specify how many versions should be saved, etc.
How do these versions and the rollback functionality work in StrongLoop Process Manager, and where can I find documentation?
At the moment there is no true "rollback" mechanism in strong-pm. The closest you can get is to deploy a previously deployed git commit, which will re-use the previous deployment that matches that commit's hash.
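In practice that looks roughly like the following (the host, port and commit are placeholders, and the exact slc invocation depends on how your strong-pm instance is set up):

# Check out the previously deployed commit, rebuild and deploy it again
git checkout <previous-commit-sha>
slc build
slc deploy http://your-pm-host:8701/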

Should docker image be bundled with code?

We are building a SaaS application. I don't have (for now - for this app) high demands on availability. It's mostly going to be used in a specific time zone and for business purposes only, so scheduled restarting at 3 in the morning shouldn't be a problem at all.
It is an ASP.NET application running on Mono with the FastCGI server. Each customer will have - for security reasons - his own application instance deployed. This is going to be done using Docker containers, with an Nginx server in front to distribute the requests based on URL. As I see it, the possible ways to deploy it are:
Create a docker image with the fcgi server only and run the code from a mount point
Create a docker image with the fcgi server and the code
Pros for 1 would seem to be:
It's easier to update the code, since the docker containers can keep running
Configuration can be bundled with the code
I could easily (if I ever wanted to) add minor changes for specific clients
Pros for 2 would seem to be:
everything is in an image, no need to mess around with additional files, just pull it and run it
Cons for 1:
a lot of folders for a lot of customers, in addition to the running containers
Cons for 2:
Configuration can't be in the image (or can it? Should I create customer-specific images with their configuration?) => still additional files for each customer
Updating a container is harder since I need to restart it - but not a big deal, as stated in the beginning
For now - the first year - the number of customers will be low, and when the demand is low, any solution is good enough. I'm looking rather at what is going to work with >100 customers.
Also, in the future I want to set up CI for this project, so we wouldn't need to update all customer instances manually. Docker images can have automated builds, but I'm not sure that will be enough.
My concern is basically: which solution is less messy and perhaps easier to automate?
I couldn't find any best practices with docker which cover a similar scenario.
It's likely that your application's dependencies are going to change along with the code, so you'll still have to rebuild the images and restart the containers from time to time (whenever you add a new dependency).
This means you would have two upgrade workflows:
One where you update just the code (when there are no dependency changes)
One where you update the images too, and restart the containers (when there are dependency changes)
This is most likely undesirable, because it's difficult to automate.
So, I would recommend bundling the code on the image.
You should definitely make sure that your application's configuration can be stored somewhere else, though (e.g. on a volume, or accessed through environment variables).
Ultimately, Docker is a platform to package, deploy and run applications, so packaging the application (i.e. bundling the code on the image) seems to be the better way to use it.
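To make that concrete, a per-customer container with the code baked into the image and the configuration injected from outside might be started like this (the image name, variable and paths are only illustrative):

# One container per customer; config comes from env vars and a read-only volume
docker run -d \
  --name app-customer1 \
  -e CONNECTION_STRING="..." \
  -v /srv/customers/customer1/config:/app/config:ro \
  myapp:1.4.2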

Introduction to Erlang/OTP production applications deployment

I would like to develop and deploy an Erlang/OTP application into production on a VPS.
I am pretty familiar with developing Erlang code on a local machine and my question is about deployment.
Basically, I would like to know what steps I should take in order to move Erlang code from a local machine to a production server and make it run, i.e. be available for users.
Note: I have read some documentation about Erlang and command line, Erlang code module, Erlang releases, but I am still not sure how to pursue the required task.
However, I guess that deploying Erlang-based software on a server is a bit trickier than doing sudo tasksel for a LAMP stack.
I plan to have an Erlang/OTP application which has Mochiweb, CouchDB (couchbeam) and boss_db as dependencies.
So, my newbie questions about deploying all that stuff on a production server are the following:
I plan to use Ubuntu Server 12.04; is there any better choice for a Linux distro to use for Erlang/OTP in production?
How should all the code be organized? Should I put my application into a /home/myapp/ dir and then put all the dependencies into /home/myapp/deps? Or should I put all dependencies into /usr/local/lib/erlang/lib (returned by code:get_path())? Should I somehow update the dependencies regularly, or should I freeze them?
How do I make the whole application start once the server starts? Should it be some kind of bash script or something else?
I know that Erlang allows hot code upgrades, but how should I organize that? In Rails I could update the code with git; does anything similar exist in the Erlang world?
There are two types of dependencies: internal and external. If you want to do it the right way(tm), it takes a bit of time to get working:
External dependencies:
Taking the latter first, an external dependency is some other thing that has to run before your application can run, for instance a PostgreSQL database or a Riak cluster. For those, you usually just use the usual Ubuntu facilities for making them start up properly. I've had good experience with using monit for these tasks:
http://mmonit.com/monit/
Internal Dependencies:
For internal dependencies, you need to arrange your program into applications inside the Erlang VM. These have dependencies on each other, like the external dependencies. Your main application may need a logger running before it starts, for instance. Then you create a release. A release copies the Erlang binaries and the necessary libraries/beams/applications into a release directory, forming a self-contained Erlang system. It contains a boot script which describes how to start up the applications in the right order and keep them running. You can then tarball up this release, copy it to the server and start it. There are some basics covered here:
http://learnyousomeerlang.com/release-is-the-word
but do also read the chapters before it on applications. You can also get rebar to call reltool for you to build a release. This is what I usually do.
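With rebar (rebar2, which was current at the time), that usually boils down to something like the following, assuming a rel/ directory with a reltool.config has been generated (for example via rebar create-node) and the release is called myapp (a placeholder name):

# Fetch dependencies, compile, and build a self-contained release under rel/
rebar get-deps compile
rebar generate

# Ship the release to the server and start it
tar czf myapp.tar.gz -C rel myapp
scp myapp.tar.gz user@server:
ssh user@server 'tar xzf myapp.tar.gz && myapp/bin/myapp start'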
Hot upgrades:
Handling hot upgrades in production can be done in a couple of ways. You can move the beam file to the machine, attach to the shell and then call l(Module) to load it into the running system. This works for smaller fixes. For large systematic upgrades you can do a release upgrade, which will upgrade the running system on the fly without stopping service. But if your system is mostly shared-nothing, it is usually not worth it. Instead, you can have multiple machines and upgrade them in sequence.
For instance, you can upgrade a machine and then use a system like HAProxy to send 2% of all requests to the new system. Then systematically turn up the request load weight.
While #I GIVE CRAP ANSWERS gave a pretty thorough summary, I feel compelled to throw in the use of sync, which helps to automate the hot-recompiling and reloading of modules.
The simple way is to specify sync as a rebar dependency; then, when you're getting ready to deploy an upgrade, you can run sync:go() on the Erlang node. This starts the sync engine, which watches for filesystem changes. Then you can use git to push to your server. Sync will notice the files change, recompile them, and load the new beams automatically.
Then, you can run sync:stop() right away to tell the system to stop watching for filesystem changes (it's generally not recommended to keep sync running on a live server, just to prevent accidental recompiling if, for whatever reason, a source file changes unintentionally).