Avoid default services in Service Fabric?

On the Service Fabric community Q&A today, one of the team members seemed to recommend not using default services. Is there some documentation which would elaborate on this further? Most importantly though, if there are no default services, how do any services get created, or is the implication that you'll need at least one default service which dynamically creates the rest?

Is there some documentation which would elaborate on this further?
There is no official documentation about it; most of the discussion on the topic is found in Q&A threads like this one.
if there are no default services, how do any services get created, or is the implication that you'll need at least one default service which dynamically creates the rest?
In Service Fabric, you have two options for creating your services:
The declarative way, via the Default Services feature, where you describe the services that should run as part of your application in the ApplicationManifest.xml.
And the dynamic (imperative) way, using PowerShell commands to create these services once the application is deployed. (Both are sketched below.)
I've given an extensive answer about it on another SO question here:
Auto creation of Service without DefaultServices on developer machines
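For illustration, a default service is declared in the application manifest roughly like this (the service and type names are hypothetical):

```xml
<!-- ApplicationManifest.xml (fragment): Service Fabric creates this service
     automatically every time an instance of the application is created. -->
<DefaultServices>
  <Service Name="MyStatelessService">
    <StatelessService ServiceTypeName="MyStatelessServiceType" InstanceCount="-1">
      <SingletonPartition />
    </StatelessService>
  </Service>
</DefaultServices>
```

The imperative equivalent, sketched with the Service Fabric PowerShell module, would look something like this:

```powershell
# Create the same (hypothetical) service dynamically, once the application
# instance exists; -InstanceCount -1 means one instance on every node.
New-ServiceFabricService -ApplicationName "fabric:/MyApp" `
    -ServiceName "fabric:/MyApp/MyStatelessService" `
    -ServiceTypeName "MyStatelessServiceType" `
    -Stateless -PartitionSchemeSingleton -InstanceCount -1
```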

Related

kubernetes point domain to a local service

I'm new to Kubernetes and trying to point all requests for a domain to another local service.
Both applications are running in the same cluster, under different namespaces.
Example domains
a.domain.com hosting first app
b.domain.com hosting the second app
When I make a curl request from the first app to the second app (b.domain.com), it travels over the internet to the second app.
Usually, what I would do is point b.domain.com to localhost in /etc/hosts.
What do we do in this case in Kubernetes?
I was looking into Network Policies, but I'm not sure that is the correct approach.
Also, as I understand it, we could just call service-name.namespace:port from the first app, as shown below. But I would like to keep the full URL.
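(For reference, that in-cluster call would look roughly like this; the service name, namespace, and port are placeholders:)

```sh
# Cluster-internal DNS form: <service>.<namespace>.svc.cluster.local
curl http://second-app.second-namespace.svc.cluster.local:8080/
```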
Let me know if you need more details to help me solve this.
The way to do it is by using the Kubernetes Gateway API. It is true that you can deploy your own implementation, since it is an open-source project, but there are already plenty of solutions built on it, and it would be much easier to learn how to use one of those instead.
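As a sketch, routing b.domain.com to the second app's Service with a Gateway API HTTPRoute could look like the following (all names, namespaces, and ports here are assumptions, and the current GA API version is shown):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: second-app-route
  namespace: second-namespace     # namespace of the second app (placeholder)
spec:
  parentRefs:
  - name: shared-gateway          # a Gateway provided by your implementation
    namespace: gateway-namespace
  hostnames:
  - "b.domain.com"
  rules:
  - backendRefs:
    - name: second-app            # the second app's Service (placeholder)
      port: 8080
```

Traffic for b.domain.com then reaches the Service through the gateway; whether it stays inside the cluster depends on where that hostname resolves.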
For what you want, Istio would fit your needs. If your cluster is hosted in a Cloud environment, you can take a look at Anthos, which is the managed version of Istio.
Finally, take a look at the blog post Welcome to the service mesh era: traffic management between services is one element of the service mesh paradigm, alongside others like monitoring, logging, etc.

GKE with Hashicorp Vault - Possible to use Google Cloud Run?

I'm looking into deploying a cluster on Google Kubernetes Engine in the near future. I've also been looking into using Vault by Hashicorp in order to manage the secrets that my cluster has access to. Specifically, I'd like to make use of dynamic secrets for greater security.
However, all of the documentation and Youtube videos that cover this type of setup always mention that a set of nodes strictly dedicated to Vault should operate as their own separate cluster - thus requiring more VMs.
I am curious if a serverless approach is possible here. Namely, using Google Cloud Run to create Vault containers on the fly.
This video (it should start at the right time) mentions that Vault can be run as a Deployment, so I don't see there being an issue with state. And since Google mentions that each Cloud Run service gets its own stable HTTPS endpoint, I believe that I can simply pass this endpoint to my configuration and all of the pods will be able to find the service, even if new instances are created. However, I'm new to Kubernetes, so I'm not sure if I'm entirely correct here.
Can anyone with more experience using Kubernetes and/or Vault point out any potential drawbacks with this approach? Thank you.
Secret Manager has been in beta for three weeks and is not yet officially announced (it should be within a couple of days); you can have a look at it. It's a serverless secret manager with, I think, all the basic features that you need.
The main reason it has not yet been announced is that the client libraries in several languages aren't yet released/finished.
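As an illustration, reading a secret with the Python client library looks roughly like this (the project and secret names are placeholders):

```python
# pip install google-cloud-secret-manager
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

# Fully qualified name of a secret version (placeholder project and secret).
name = "projects/my-project/secrets/my-secret/versions/latest"

response = client.access_secret_version(request={"name": name})
payload = response.payload.data.decode("UTF-8")
print(payload)
```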
The awesome guy on your video link, Seth Vargo, has been involved in this project.
He has also released Berglas. It's written in Go, uses KMS for encrypting the secrets and Google Cloud Storage for storing them. I also recommend it.
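For example, fetching a secret with the Berglas CLI looks roughly like this (the bucket and secret names are placeholders):

```sh
# Reads the encrypted blob from the GCS bucket and decrypts it via Cloud KMS.
berglas access my-secrets-bucket/my-api-key
```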
I built a Python library to make it easy to use Berglas secrets in Python.
Hope these secret management tools meet your expectations. In any case, they're serverless and quite cheap!

Difference between Kubernetes and Service Fabric

I have worked with Kubernetes and am currently reading about Service Fabric. I know Service Fabric provides microservice framework models such as stateful, stateless, and actor services, but it also supports guest executables and containers, which is what Kubernetes does as well: managing/orchestrating containers. Can anyone explain the differences between the two in detail?
You can see in the project paolosalvatori/service-fabric-acs-kubernetes-multi-container-app the same containers implemented both in Service Fabric and in Kubernetes.
Their "service" (for external ingress access) is different, with Kubernetes being a bit more complete and diverse: see Services.
The reality is: there are "two slightly different offerings" because of market pressure.
The Microsoft Azure platform, initially released in 2010, implemented its own Microsoft Azure Fabric Controller to ensure that services and the environment do not fail if one or more of the servers within the Microsoft data center fails, and to provide management of the user's web applications, such as memory allocation and load balancing.
But in order to attract other clients to its data centers, Microsoft had to adapt to Kubernetes, initially released in 2014, which is now (2018) either adopted or closely considered by... pretty much everybody (as reported in late December).
(That does not mean one is "better" than the other,
only that the "other" is more "visible" than the first ;) )
So it is less about "a detailed difference between the two", and more about the ability to integrate Kubernetes-based system on Microsoft Data Centers.
This is in line (source: detailed here) with Microsoft's continued, unprecedented shift toward an open (read: non-proprietary) staging platform for Azure (with Deis).
And the Kubernetes orchestrator has been available on Microsoft's Azure Container Service since February 2017.
You can see other differences in the architecture of a deployed application on each platform (diagrams: Service Fabric vs. Kubernetes).
thieme mentions in the comments the article "Service Fabric and Kubernetes comparison, part 1 – Distributed Systems Architecture", from Marcin Kosieradzki.
Both are different. Kubernetes manages rkt or other containers.
Service Fabric is not primarily for managing containers. The fact that it can manage some does not make that its purpose, and it does not make it directly comparable with Kubernetes.
For example, when a pod dies, Kubernetes immediately reschedules it onto another node, as sketched below. The part of Service Fabric that manages containers does not do this; that responsibility lives in another area of Service Fabric, outside of the container support, and it was not designed with containers in mind.
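To make the rescheduling point concrete, here is a minimal Kubernetes Deployment sketch (the name and image are placeholders): with replicas: 3, if a pod or its node dies, the controller immediately recreates the pod on another healthy node.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app           # hypothetical name
spec:
  replicas: 3                 # desired state; Kubernetes reconciles toward it
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: nginx:1.25     # placeholder image
```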

Running Concurrent Major versions of an API with google endpoints in Kubernetes

I'm struggling to find any documentation on configuring the Extensible Service Proxy and Google Endpoints that covers the correct pattern for deploying multiple versions of an API.
Brief overview: I have Docker building out two releases of an API.
They run in separate containers.
I currently have a Kubernetes pod with ESP and APIv1.
Really, I want to run a pod with ESP+APIv1 and a pod with ESP+APIv2, but I can't work out how this would work: my external IP and DNS would all point at one pod, and Endpoints doesn't seem to be consulted until the request reaches the ESP service. Is there some mechanism for passing requests to another ESP instance? I'm clearly missing something here.
OR, in order to run multiple versions, should I be running a single pod with ESP, APIv1, and APIv2 in it? That doesn't seem ideal from a scalability or management point of view.
Unless APIv1 and APIv2 are disjoint, you can probably implement methods supporting both versions in the same dockerized app, as sketched below. This approach is explained in more detail here:
https://cloud.google.com/endpoints/docs/lifecycle-management
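A sketch of that single-app approach in the Endpoints OpenAPI config (the title, host, paths, and operation IDs are hypothetical): one backend serves both version prefixes.

```yaml
# openapi.yaml fragment (Endpoints uses the Swagger 2.0 format).
swagger: "2.0"
info:
  title: my-api                                 # placeholder title
  version: "2.0.0"
host: my-api.endpoints.my-project.cloud.goog    # placeholder service name
paths:
  /v1/items:
    get:
      operationId: listItemsV1
      responses:
        "200":
          description: OK
  /v2/items:
    get:
      operationId: listItemsV2
      responses:
        "200":
          description: OK
```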

Google Container Engine Subnets

I'm trying to isolate services from one another.
Suppose ops-human has a bunch of mysql stores running on Google Container Engine, and dev-human has a bunch of node apps running on the same cluster. I do NOT want dev-human to be able to access any of ops-human's mysql instances in any way.
Simplest solution: put both of these in separate subnets. How do I do such a thing? I'm open to other implementations as well.
The Kubernetes Network-SIG team has been working on the network isolation issue for a while, and there is an experimental API in Kubernetes 1.2 to support this. The basic idea is to provide network policy on a per-namespace basis. A third-party network controller can then react to changes to these resources and enforce the policy. See the last blog post on this topic for the details.
EDIT: This answer is more about the open-source kubernetes, not for GKE specifically.
The NetworkPolicy resource is now available in GKE in alpha clusters (see the latest blog post). A sketch of a policy for this scenario follows.
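For reference, here is what such a policy looks like in the API's stable form (the namespace, labels, and names are placeholders): only pods in the same namespace as the MySQL pods may reach them, so apps in other namespaces are blocked.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-isolation
  namespace: ops              # placeholder namespace holding the MySQL pods
spec:
  podSelector:
    matchLabels:
      app: mysql              # placeholder label on the MySQL pods
  ingress:
  - from:
    - podSelector: {}         # allows traffic only from pods in this namespace
```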