How to specify core_pattern in systemd service file?

How can I specify core pattern for a particular service in systemd without affecting core patterns for other services?

Related

How can I distribute load to Kubernetes Pods?

I have work defined in a file/config with the following format,
config1,resource9
config3,resource21
config5,resource10
How can I spin up individual pods based on the configuration? If I add one more line to the configuration, Kubernetes needs to spin up one more pod and send that configuration line to the new pod.
How do I store the configuration in Kubernetes and spin up pods based on it?
Take a look at Kubernetes Operators. The pattern adds a Kubernetes management layer to an application. Basically you run a Kubernetes-native app (the operator) that connects to the Kubernetes API and takes care of deployment management for you.
If you are familiar with Helm, then a quick way to get started is with the Helm operator example. This example creates a new Nginx deployment for each Custom Resource you create; the Custom Resource contains all the Helm values Nginx requires for a deployment.
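For illustration, such a Custom Resource might look something like this (the API group, kind, and value names are hypothetical, modelled on the shape of the operator-sdk Nginx tutorial):

apiVersion: demo.example.com/v1alpha1   # hypothetical API group/version
kind: Nginx
metadata:
  name: my-nginx
spec:
  # Everything under spec is handed to the chart as Helm values.
  replicaCount: 2
  service:
    port: 8080

Creating a second resource with a different name and values makes the operator stand up a second, independent deployment, which maps naturally onto the "one pod per configuration line" requirement in the question.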
As a first step you could customise the example so that all you need to do is manage the single Custom Resource to deploy or update the app.
If you want to take it further, you may run into Helm's limitations pretty quickly; for advanced use cases you can use the Go operator-sdk directly.
There are a number of operator projects to browse at https://operatorhub.io/

Can (and should) Traefik be used in "file" mode in a Service Fabric cluster?

I'm investigating using Traefik as a reverse proxy within a Service Fabric cluster running Dockerized microservices. The official way of running Traefik within Service Fabric is to use the Service Fabric provider. Running Traefik within a Docker container is not recommended according to the GitHub README, because you can't reach the Service Fabric API through localhost:19080 but instead have to reach it through its public IP. This requires you to install SSL certificates to talk to the API securely, which is a bit of a hassle.
I'm curious whether there's any advantage to using the Traefik Service Fabric provider (which requires complex setup) rather than just running Traefik in a container with the File provider. Provided your services have ServiceDnsName attributes in the ApplicationManifest, thus allowing Service Fabric DNS to find them, it seems like a much simpler approach. The Traefik configuration would look something like:
# Frontends route incoming requests to backends by path prefix.
[frontends]
  [frontends.api]
  backend = "api"
  passHostHeader = true
    [frontends.api.routes.forwarder]
    rule = "PathPrefixStrip: /api/"
  [frontends.someservice]
  backend = "someservice"
  passHostHeader = true
    [frontends.someservice.routes.forwarder]
    rule = "PathPrefixStrip: /SomeService/"

# Backends point at the Service Fabric DNS names of the services.
[backends]
  [backends.api]
    [backends.api.servers.endpoint]
    url = "http://Api:11080"
  [backends.someservice]
    [backends.someservice.servers.endpoint]
    url = "http://SomeService:12080"
You'd map port 80 to your Traefik service, which would then dispatch each HTTP call to the appropriate internal service based on the URL prefix.
Advantages:
No need to talk to the Service Fabric API, which is somewhat hacky to do from within a container.
Possibly more secure; everything is internal, so there is no need to worry about installing certificates.
Disadvantages:
Your service routing is now tied to a TOML file within a Docker container rather than integrated into service and application manifest files. No way to modify this without redeploying that container.
I don't believe this would work unless all services were running in a container (though if you enable the reverse proxy, you can resolve a service by name using http://localhost:19081/AppName/ServiceName instead).
To me, it seems like a cleaner approach, provided you're not changing and adding services all the time; usually, this stays fairly static anyway.
Are there any gotchas that I'm not considering, or any strong argument against doing this?
I will add my two cents to your considerations:
It is a preference between dynamic and static configuration.
The benefit of the dynamic approach is that Traefik reloads the configuration every time a service carrying Traefik configuration comes up. Today you say "it won't change", but in a few months, weeks or perhaps days you might face a new requirement and have to update the file; if the system evolves quickly, as microservices solutions tend to, you will soon face the hard work of updating this file by hand.
If you are sure it will hardly ever change, don't feel limited to the Service Fabric or File providers; Traefik can load its rules from REST, etcd, DynamoDB and many other configuration providers.
For the File approach, you can create the file in an Azure file share and mount it as a volume in the container, so you don't need to rebuild the container for changes to take effect.
Then you can just update the file and either:
Restart the container to reload the file, or,
Better, use the file provider's watch setting: you don't need to restart the container, and Traefik will watch for any changes in the file (see the sketch below).
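As a minimal sketch, assuming Traefik v2's YAML file provider (the file path is hypothetical; in Traefik v1 the equivalent is watch = true under [file] in traefik.toml):

providers:
  file:
    # The dynamic routing rules live in a mounted file, e.g. on the Azure file share.
    filename: /etc/traefik/dynamic.yml
    watch: true   # pick up rule changes without restarting the container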

Using Envoy without pods (in an on-prem solution)

We are now on our journey to break our monolith (an on-prem package delivered as rpm/ova) into services (Docker containers).
In the process we are evaluating Envoy/Istio as our communication and security layer; it looks great when running as a sidecar in k8s, or with each service on a separate machine.
As we are going to deliver several services on one machine, and can't deliver them within k8s, I'm not sure whether we can use Envoy. I didn't find any reference to using Envoy in other ways; are there additional deployment methods I can use to enjoy it?
You can run part of your services on Kubernetes and part on VMs.
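To expand on that: Envoy itself is just a standalone proxy driven by a bootstrap file, so it can also run directly on the machine (or in a container) next to your services, with no Kubernetes and no control plane. A minimal sketch of a static v3 bootstrap fronting two co-located services (ports, paths and cluster names are hypothetical):

static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: local_services
              domains: ["*"]
              routes:
              # Route by path prefix to the two local services.
              - match: { prefix: "/billing/" }
                route: { cluster: billing }
              - match: { prefix: "/" }
                route: { cluster: frontend }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: billing
    type: STATIC
    connect_timeout: 1s
    load_assignment:
      cluster_name: billing
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 9001 }
  - name: frontend
    type: STATIC
    connect_timeout: 1s
    load_assignment:
      cluster_name: frontend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 9002 }

You'd start it with envoy -c bootstrap.yaml alongside the services; Istio only becomes necessary if you want a control plane to push this configuration dynamically.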

Disadvantages of using eureka for Service Discovery with kubernetes

Context
I am deploying a set of services that are containerised using Docker into AWS. No matter which deployment solution is chosen (e.g. raw EC2/ECS/Elastic Beanstalk/Fargate) we will face the issue of "service discovery".
To name just a few of the options for service discovery that I've considered:
AWS Route 53 Service Registry
Kubernetes
Hashicorp Consul
Spring Cloud Netflix Eureka
Specifics Of My Stack
I am developing Java Spring Boot applications using Spring Cloud with the target deployment environment being AWS.
Given that my stack is Spring-based, Spring Cloud Eureka made sense to me while developing locally. It was easy to set up a single node, it integrates well with the stack and ecosystem of choice, and it required very little setup.
Locally, we are using docker-compose (not swarm) to deploy services; one of the containers deployed is a single-node Eureka service discovery server.
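For reference, that local setup can be as small as something like this (image names hypothetical):

services:
  eureka:
    image: example/eureka-server:latest      # hypothetical single-node Eureka image
    ports:
      - "8761:8761"
  service-a:
    image: example/service-a:latest          # hypothetical application image
    environment:
      # Spring Boot's relaxed binding maps this to eureka.client.serviceUrl.defaultZone
      EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://eureka:8761/eureka/
    depends_on:
      - eureka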
However, when we progress outside of local development and into staging or production environment we are considering options like Kubernetes.
My Own Assessment Of Pros/Cons
AWS Route 53 Service Registry
Requires us to couple code specifically to AWS services. Not a problem per se; we are quite tied in anyway on other parts of the stack (SNS/SQS).
Makes running the stack locally slightly more difficult as it relies on Route 53; I suppose we could open up a certain hosted zone for local development.
AWS native, no managing service registries or extra "moving parts".
Spring Cloud Eureka
The downside is that this requires us to deploy and manage a highly available service registry cluster, and it needs more resources. Another "moving part" to manage.
The advantages are that it fits into our stack well (the Spring ecosystem: Spring Boot, Spring Cloud, Feign and Zuul all work well with it). It can also be run locally trivially.
I presume we need to configure the networks and registry zone to ensure that clients publish their host address rather than the Docker container's internal IP address. E.g. if service A is on host A and wants to talk to service B on host B, service B needs to advertise its EC2 address rather than some internal Docker IP.
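A sketch of what that might look like in application.yml (values hypothetical, injected per host at deploy time):

eureka:
  instance:
    # Advertise the host's address instead of the container-internal one.
    prefer-ip-address: false
    hostname: ${HOST_PUBLIC_DNS}                 # e.g. the EC2 instance's DNS name
    non-secure-port: ${HOST_MAPPED_PORT:8080}    # the host port mapped to the container
  client:
    service-url:
      defaultZone: http://eureka.internal.example.com:8761/eureka/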
Questions
If we use Kubernetes for orchestration, are there any disadvantages to using something like Spring Cloud Eureka over the built-in service discovery options described here: https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services
Given that Kube provides this, it seems suboptimal to then use Eureka, deployed via Kube, to perform discovery. I presume Kube can make some optimisations that improve availability and stability that might not be possible with Eureka. E.g. Kube knows when it is deploying a new version of a service, whereas Eureka has to rely on heartbeats/health checks, and depending on how those are configured (e.g. frequency) this could result in stale records. I presume Kube does not suffer from this for planned service shutdowns/restarts, though I guess it still does for unplanned failures such as a host failure or a network partition.
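To make that concrete: on Kubernetes the staleness window is governed by readiness checks driving the Service's endpoint list, roughly like this hedged sketch (names, health path and ports hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service-b
  template:
    metadata:
      labels:
        app: service-b
    spec:
      containers:
      - name: service-b
        image: example/service-b:latest   # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:
          # The pod is removed from Service endpoints as soon as this fails,
          # including during planned rolling updates.
          httpGet:
            path: /actuator/health
            port: 8080
          periodSeconds: 5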
Does anyone have any advice on this? Do people use Kubernetes but rely on other mechanisms for service discovery rather than those provided by Kube? Is there a good reason to do one or the other?
Possible Challenges I Anticipate
We could replace Eureka, but relying on Kube to perform discovery means we would need to run Kube locally to deploy, whereas currently we have a simple, tiny docker-compose file. I'll also have to look at how easy it will be to ensure that Ribbon, Zuul and Feign play nicely with this.
Currently we have Ribbon configured with a Eureka client so that service A can refer to service B simply as "service-b", for example, and have Ribbon resolve a healthy host via the Eureka client. I guess we could configure Ribbon to not use Eureka and instead use an external Kubernetes Service name, which would be resolved by Kube DNS at runtime...
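That should work; Ribbon can be told to skip Eureka and use a fixed server list, which can point at a Kubernetes Service name that Kube DNS resolves. A sketch (service and namespace names hypothetical):

ribbon:
  eureka:
    enabled: false        # stop Ribbon from looking servers up in Eureka
service-b:
  ribbon:
    # Kube DNS resolves the Service name; the Service load-balances across pods.
    listOfServers: service-b.default.svc.cluster.local:8080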
Final Note
Thanks in advance for any contribution or advice. I know this might elicit a primarily opinion focused response. But I am hoping someone can provide objective guidance on when one solution might be preferable to another.
Service discovery is something you get out of the box with Kubernetes, so another external registry in your platform would be one more application to maintain and deploy, and another potential point of failure. I would stick with the service discovery provided by Kubernetes.
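For comparison, the whole registration story on Kubernetes is a Service manifest; DNS and endpoint management come for free (names hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: service-b
spec:
  selector:
    app: service-b     # any ready pod with this label becomes an endpoint
  ports:
  - port: 8080
    targetPort: 8080

Clients then simply call http://service-b:8080 (or the full cluster DNS name) and Kubernetes keeps the endpoint set current.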

Difference between native and guest service with Service Fabric

I have a bunch of services deployed as guest executables to Service Fabric, and all seems fine. I was wondering if there is any point in porting them to be native Service Fabric services.
Looking at the documentation I can't seem to find any benefit to implementing them as such; am I missing something obvious?
If your services are stateless there is probably no compelling reason to migrate them into native stateless services. It could be different if your services were stateful; in this context I mean that they store some state inside the process.
The state in native stateful services is stored redundantly, so your services can cope with node failures. This could increase the resilience of your service. In general, you usually create native services in green field situations and rely on guest executables and containers in migration/hybrid situations.
A guest executable misses out on some advanced features, but it is up to you to decide whether you need them.
Benefits of running a guest executable in Service Fabric
There are several advantages to running a guest executable in a Service Fabric cluster:
High availability. Applications that run in Service Fabric are made highly available. Service Fabric ensures that instances of an application are running.
Health monitoring. Service Fabric health monitoring detects if an application is running, and provides diagnostic information if there is a failure.
Application lifecycle management. Besides providing upgrades with no downtime, Service Fabric provides automatic rollback to the previous version if there is a bad health event reported during an upgrade.
Density. You can run multiple applications in a cluster, which eliminates the need for each application to run on its own hardware.
Discoverability. Using REST you can call the Service Fabric Naming service to find other services in the cluster.
For example, there is something called Stateless Reliable Services; see http://www.jamessturtevant.com/posts/Service-Fabric-Service-Types/
The link above explains this in more detail.