I have a module layer (an embedded framework) within my application that serves as a generic logging service for several providers such as Fabric, Mixpanel, etc.
This module is a target dependency of the main app target.
I am initializing Fabric in the main app target's AppDelegate and later making calls to the module layer to send custom logs of this sort:
`CLSLogv("%@", getVaList(["Some Log"]))`
This, however, does not work unless I explicitly initialize Fabric in the module layer:
`Fabric.with([Crashlytics.self])`
(The same way I've done it in the AppDelegate.)
I have NOT added a run script phase for Fabric in the Module target.
My question is: is this really needed, given that the module is a target dependency of the main app target and the app already initializes Fabric in its AppDelegate?
If it is needed, is this going to cause any issues?
I have an issue with a service worker. I have two different projects on the same server but in different folders, and I want to precache the files of project number 2 using my service worker (my service worker is already working on project number 1). My question is: is it possible to do this? Is there any other way I can attack this? Any help is very much appreciated.
In general, yes, as long as the service worker is hosted at a URL that is at the same level as (or "higher" than) the root of each of those projects. That would ensure that each project is within the scope of the service worker.
I'm assuming that one of the challenges you're asking about relates to creating a precache manifest within that service worker that contains build artifacts from both projects. There are a few different ways to tackle that, but I think the most straightforward would be to ensure that you always run the build process for each project at the same time, and then when you use Workbox's build tooling to create the precache manifest, you ensure that you grab all the assets that were output by each of the projects.
The specifics of configuring that build process depends on what you're currently using. You mention that there's a service worker (presumably using Workbox's precaching) already in place for the first project, so I think just using the same build setup, with tweaks to pick up the additional assets, would be easiest.
The concrete question
For those who just want the direct questions:
Is there a way to temporarily disable default services on a Service Fabric application type so that a new application can be installed (using PowerShell) without automatically installing any default services?
A proposed solution here is to remove the default services from the manifest and later restore them. I am able to write a PowerShell script that adjusts the application manifest accordingly, but how do I update the application type using PowerShell, assuming I have already altered the manifest?
Any solution that solves the contextual problem without requiring manual config meddling is acceptable - my proposed solution is probably not the only possible one. We explicitly want to avoid manual meddling.
When meddling is allowed, we are already able to just comment out the default services when we need to. We're specifically looking for a solution that requires no meddling, as this reduces bugs and debugging effort.
The context
I'm running into an issue with using the application manifest's default services during local development.
I am aware of the general "don't use default services" advice, and it is being followed. During CI build, the default services are removed and will not be relied upon for any of our clusters in Azure. The only exception here is local developer machines, which use default services to keep the developer F5 experience nicer by enabling all services when starting a debug session.
We have written specialized scripts that provision a new tenant (SF application) with its own set of services (SF services). Not every tenant should get every service; we want to opt in to services, which is what the script already does (based on a mapping we manage elsewhere, which is not part of the current question, as the provisioning script exists and works).
However, when default services are enabled, every tenant already gets every service, rendering the actual opt-in provisioning useless. This is the issue we're trying to fix.
This same script works in our production cluster, since no default services are configured there. The question is solely focused on the local development environment.
Essentially, we're dealing with two scenarios during local development:
When debugging, we want the default services to be on because it allows us to run all of our services by pressing F5 (and not requiring any further action)
When testing our provisioning script, we don't want default services because they get in the way of our selective provisioning behavior
I'm aware that commenting the default services out of the manifest solves the issue, but this requires developers to constantly toggle the content of the manifest and reinstall the application type, which we'd like to avoid.
Ideally, we want to have the default services in the manifest (as is currently the case) but then have the provisioning script "disable" the default services for its own runtime (and restore the default services before exiting), as this gets us the desired behavior in both cases.
What is the solution that requires the least manual developer meddling to get the desired behavior in both scenarios?
I'm currently trying to implement it so that the provisioning script:
1. Copies the application manifest to a backup location
2. Removes the default services from the real manifest
3. Updates the application type using the new manifest (i.e. without default services)
4. Runs the provisioning logic
5. Restores the real manifest using the backup manifest from step 1
6. Updates the application type using the restored manifest (i.e. with default services)
It is specifically steps 3 and 6 that I do not know how to implement.
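For illustration, here is roughly the shape I imagine steps 3 and 6 would take with the Service Fabric PowerShell cmdlets; all names and paths below are placeholders, and I don't know whether this unregister/re-register approach is even the right one:

```powershell
# Placeholder names/paths - adjust to the actual environment.
$typeName    = 'MyAppType'
$typeVersion = '1.0.0'
$pkgPath     = 'C:\dev\MyApp\pkg'   # package containing the edited manifest
$storePath   = 'MyAppType'          # path inside the image store

Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'

# The same type/version cannot be re-registered in place, so remove the
# existing registration first. This only works while no application
# instances of this type/version are running; otherwise you would have to
# bump the manifest version and use Start-ServiceFabricApplicationUpgrade.
Unregister-ServiceFabricApplicationType -ApplicationTypeName $typeName `
    -ApplicationTypeVersion $typeVersion -Force

# Upload the package with the modified manifest and register it again.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath $pkgPath `
    -ImageStoreConnectionString 'fabric:ImageStore' `
    -ApplicationPackagePathInImageStore $storePath

Register-ServiceFabricApplicationType -ApplicationPathInImageStore $storePath
```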
Consider having two sfproj projects in the solution: one with default services, one without.
Also look into using a start-service.ps1 script instead of default services. This way the two projects can use the same application manifest.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-debugging-your-application#running-a-script-as-part-of-debugging
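A minimal sketch of what such a start-service.ps1 could contain, assuming placeholder application and service names; it simply creates the services that the default services section would otherwise declare:

```powershell
# start-service.ps1 - creates the services that would otherwise be
# declared as default services. All names below are placeholders.
Connect-ServiceFabricCluster -ConnectionEndpoint 'localhost:19000'

New-ServiceFabricService -ApplicationName 'fabric:/MyApp' `
    -ServiceName 'fabric:/MyApp/MyService' `
    -ServiceTypeName 'MyServiceType' `
    -Stateless -PartitionSchemeSingleton -InstanceCount 1
```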
I am using a library which searches the registry for a DLL. That DLL can be installed by running an MSI on the Service Fabric cluster, which sets this registry path.
But I wanted to avoid installing the MSI on the cluster, so I provided the required DLLs in the package itself. During startup of the service, I create the registry entry pointing to the location of the DLL in my package. Everything is working as expected.
Is this approach ideal? Are we allowed to make changes to registry? If not, how do we solve this problem? Any pointers are appreciated.
If the library has to use the registry, there is nothing you can do about it other than register the values. If you could change the DLL to retrieve this information from a configuration file instead, that would be the ideal solution.
You can do it in SF; the right way is to use the SetupEntryPoint option of the ServiceManifest for these management tasks. From the application manifest you can set policies to specify which user these setup tasks should run as. It is described here in more detail.
The main issue you have on SF with this approach is that your application might move around the cluster, so you have to register the entry on every node, and perhaps also remove it when the application no longer runs there, to avoid leaving garbage in the registry.
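As a minimal sketch (key path, value name, and DLL name are placeholders, not the library's actual ones), the setup script launched by SetupEntryPoint could look like this:

```powershell
# RegisterDll.ps1 - run from the service's SetupEntryPoint, typically
# under a privileged account configured via RunAsPolicy in the
# application manifest. Key path, value name, and DLL name are
# placeholders for this sketch.
$dllPath = Join-Path $PSScriptRoot 'ThirdParty.dll'
$keyPath = 'HKLM:\SOFTWARE\SomeVendor\SomeLibrary'

# Create the key if it does not exist, then point the value the library
# reads at the DLL shipped inside this code package.
New-Item -Path $keyPath -Force | Out-Null
Set-ItemProperty -Path $keyPath -Name 'DllLocation' -Value $dllPath
```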
I'm developing a Service Fabric-based trading platform that will host hundreds of different long-running trading algorithms, all of which conform to a common interface and share a good deal of common code but can be vastly different in their internal specifics. I could model each of the different algos as an application type (which I'd dynamically load), but given the large number of different algos I have to wonder if it makes more sense to create a single Plugin Runner application type and then implement the algos as plugins.
In a related question, I understand how to implement a plugin architecture in general, but I'm not quite sure where one would place the actual plugins so that they are discoverable by an instance running on Service Fabric.
Anyway, thanks for your help....
Both approaches can work, I think. Using lots of application types adds the (significant) overhead of running lots of processes, but allows you to use and upgrade multiple versions of the same algorithm simultaneously.
Using the plugin approach requires you to deal with versioning yourself.
Using the application approach probably requires some kind of request router, while the plugin service could make its own decisions (if it's stateless).
You can create a Stateful service that acts as the plugin repository, or mount a file share, or use a database; there are no restrictions from the platform here. You can use naming conventions to locate the proper plugin.
The following approach could work if an application upgrade is acceptable to you when changing the set of plugins needed for a given application instance.
Recall that Service Fabric apps must be packaged before deployment or upgrade. Using either MSBuild tasks or PowerShell, you could copy your plugin DLLs into the plugin runner service's code package as a post-packaging step, prior to the app upgrade. Your plugin DLLs would then be available to the service at startup using Assembly.Load and the code package's path, which is available in your service implementation's Context.CodePackageActivationContext.GetCodePackageObject("Your-Code-Package-Name").Path property. The code package's name is defined in ServiceManifest.xml and is named Code by default.
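For example, the post-packaging copy step might look roughly like this; both paths are placeholders for whatever your build layout actually is:

```powershell
# CopyPlugins.ps1 - run after packaging, before deploying/upgrading the app.
# Both paths below are placeholders.
$pluginDlls  = 'C:\build\plugins\*.dll'
$codePackage = 'C:\src\MyApp\pkg\Release\PluginRunnerPkg\Code'

# Drop the plugin assemblies into the plugin runner's code package so the
# service can load them at startup from its code package path.
Copy-Item -Path $pluginDlls -Destination $codePackage -Force
```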
I have made a macOS application that contains a normal app, a login item stored in the 'Contents/Library/LoginItems' directory, and an XPC service stored in the 'Contents/XPCServices' directory.
My main application can communicate with the XPC service like below:
`let connection = NSXPCConnection(serviceName: "me.wanyi.xxx-XPCService")`
It works fine.
But the login item agent can't: it reports that it can't communicate with the helper application. My guess is that it can't locate the XPC service.
I found it does work after I also embed the XPC service binary into the login item's bundle, but then there are two copies of the XPC bundle inside the same app. I don't think this is an elegant solution.
Is there another way to solve this problem?