We are evaluating Dapr for our microservice framework. We will be using Kubernetes. One of the advertised selling points for Dapr is service invocation and service discovery. Doesn't K8s already offer service discovery out of the box?
Short answer: Yes (Kubernetes offers Service Discovery)
While there are several patterns for Service Discovery (and tools implementing those patterns), Kubernetes offers service discovery at its core through Service objects, so you do not need any particular technology or tool on top of it for a basic container-managed runtime environment.
You can read more on Kubernetes Services in the official documentation.
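As a minimal illustration, assuming a Service named orders in a namespace shop exposing port 8080 with a /healthz path (all hypothetical names), any pod in the cluster can reach it purely through the DNS name Kubernetes maintains for the Service:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Kubernetes publishes a DNS record for every Service, so callers use a
	// stable name instead of discovering pod IPs themselves.
	// "orders", "shop", port 8080 and /healthz are hypothetical.
	resp, err := http.Get("http://orders.shop.svc.cluster.local:8080/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```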
It is worth noting that Dapr is a platform-agnostic, portable runtime that does not depend on Kubernetes or its core Service Discovery feature.
It offers more features than simply discovering your services (it is often compared to service mesh tools, since they can appear to do the same thing):
It offers transparent and secured service-to-service calls (see the sketch below)
It allows a Publish-Subscribe communication style
It offers a way to register triggers and resource bindings (allowing for a function-as-a-service development style)
It offers observability out-of-the-box
...
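As a sketch of the first point: with Dapr's HTTP service invocation you do not resolve the target service yourself; you call your local Dapr sidecar (default HTTP port 3500) and name the target by its Dapr app-id. The app-id orders and method status below are hypothetical:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The sidecar resolves the "orders" app-id, applies mTLS/retries as configured,
	// and forwards the call to that service's "status" method (both names hypothetical).
	resp, err := http.Get("http://localhost:3500/v1.0/invoke/orders/method/status")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```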
We have one project with two clusters in it. We would like to monitor and set alert policies for many parameters, such as kube_pod_status_phase, kube_pod_container_status_restarts_total, etc. We can see all these parameters in Metrics Explorer (with the prefix kubernetes.io/anthos/..), but they don't show any data. Can anyone guide us on whether any other configuration is missing in order to use Anthos Metrics, or provide a guide or steps for using Anthos Metrics?
Note: We have Istio configured in both clusters and we are using the Workload Identity feature as well.
Any help would be highly appreciated.
Thank you.
I don't think you want to use these metrics.
Anthos, Anthos GKE and GKE are three different Google products.
GKE:
is an enterprise-grade platform for containerized applications, including stateful and stateless, AI and ML, Linux and Windows, complex and simple web apps, API, and backend services. Leverage industry-first features like four-way auto-scaling and no-stress management. Optimize GPU and TPU provisioning, use integrated developer tools, and get multi-cluster support from SREs.
Anthos
is an open hybrid and multi-cloud application platform that enables you to modernize your existing applications, build new ones, and run them anywhere in a secure manner. Built on open source technologies pioneered by Google—including Kubernetes, Istio, and Knative—Anthos enables consistency between on-premises and cloud environments and helps accelerate application development.
Anthos GKE
is part of Anthos and lets you take advantage of Kubernetes and cloud technology in your data center and in the cloud. You get the Google Kubernetes Engine (GKE) experience with quick, managed, and simple installs as well as upgrades validated by Google. And Google Cloud Console gives you a single pane of glass view for managing your clusters across on-premises and cloud environments.
If you check the information about Anthos GKE pricing, you can read that:
Anthos is available as a monthly, term-based subscription service. Anthos subscription is required to use Anthos GKE. For pricing please contact sales.
So to get Anthos metrics, you would need to use Anthos GKE, which requires an Anthos subscription. This can add costs; for details you would probably need to contact sales.
For monitoring purposes you should check the possibilities described here and choose what fits you best.
However, the most common approaches are Prometheus on GKE and Stackdriver.
In addition, on the web you can find many how-tos about monitoring on GKE, like this tutorial.
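If you go the Prometheus route, here is a rough sketch of querying one of the metrics you listed through the Prometheus HTTP API from Go. It assumes kube-state-metrics is scraped and that Prometheus is reachable at the hypothetical address http://prometheus.monitoring:9090; the client_golang API may differ slightly between versions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// "http://prometheus.monitoring:9090" is a hypothetical in-cluster Prometheus address.
	client, err := api.NewClient(api.Config{Address: "http://prometheus.monitoring:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Instant query for one of the metrics from the question
	// (requires kube-state-metrics to be scraped by this Prometheus).
	result, warnings, err := promAPI.Query(ctx, "kube_pod_container_status_restarts_total", time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result)
}
```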
The Kubernetes OpenAPI specification is hosted here:
https://github.com/kubernetes/kubernetes/tree/master/api/openapi-spec
Additionally, various client APIs for Kubernetes are provided here:
https://kubernetes.io/docs/reference/using-api/client-libraries/
Using the OpenAPI specification, I am able to generate the server code, which provides the REST services. However, applications using these K8s client APIs (written in any language: Go, Java, etc.) do not use the REST API directly.
My objective is to mock the K8s server for use in test automation and to build a controlled environment for creating various test scenarios.
Is there any ready-to-use Kubernetes mock available? If not, how can we interface the client APIs with the OpenAPI-generated REST server above? That way, the applications would continue to use the client APIs but internally communicate with the mocked K8s server rather than the real one.
Please help with the options.
Not really a direct answer to your question, but most solutions I have seen implemented do not try to mock the K8s API; they actually use it, through either k3s (from Rancher Labs) or the KinD project (the official way).
You then connect to it like a normal Kubernetes cluster.
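For example, a sketch of connecting to such a k3s/KinD cluster with client-go, exactly as you would to any other cluster (the kubeconfig path is simply whatever kind/k3s wrote for you). If you really do want an in-memory mock instead, client-go also ships a fake clientset in k8s.io/client-go/kubernetes/fake:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// kind and k3s both write a regular kubeconfig, so the standard client-go
	// bootstrap works; adjust the path if your tool writes it elsewhere.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List pods across all namespaces, exactly as against a "real" cluster.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d pods in the test cluster\n", len(pods.Items))
}
```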
We deployed our Spring Boot application in GKE (Google Kubernetes Engine) and we are currently using Cloud Endpoints to secure our web services. We have developed 11 web services that will be consumed by external clients. Is there any way I can check the SLOs (response times, performance) of a web service in Cloud Endpoints or in Stackdriver?
You might want to check:
Spring Sleuth
Jaeger Operator
Jaeger implements the OpenTracing standard and can help you understand the values, and Sleuth is a tool that integrates with Spring. There are several options; you might also want to consider OpenCensus.
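If you lean towards OpenCensus, here is a minimal sketch of the idea (shown in Go; the Spring Sleuth / Jaeger integrations follow the same create-span-and-export pattern) using the Stackdriver Trace exporter. The project ID my-gcp-project and span name orders.get are placeholders:

```go
package main

import (
	"context"
	"log"
	"time"

	"contrib.go.opencensus.io/exporter/stackdriver"
	"go.opencensus.io/trace"
)

func main() {
	// "my-gcp-project" is a placeholder; credentials are picked up the usual GCP ways
	// (Workload Identity, GOOGLE_APPLICATION_CREDENTIALS, ...).
	exporter, err := stackdriver.NewExporter(stackdriver.Options{ProjectID: "my-gcp-project"})
	if err != nil {
		log.Fatal(err)
	}
	trace.RegisterExporter(exporter)
	trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})

	// One span per handled request; its latency shows up in Stackdriver Trace.
	_, span := trace.StartSpan(context.Background(), "orders.get")
	time.Sleep(50 * time.Millisecond) // stand-in for the real work
	span.End()

	exporter.Flush()
}
```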
First you need to expose metrics from your applications. Spring Sleuth is a great choice if you're using Spring Boot.
Then you need to collect the metrics and visualize them. Google provides a tool for that called Stackdriver Trace. It can also do metric-based alerts. You can find a sample setup for your use case here.
There are other performance monitoring services such as Dynatrace or Datadog.
If you want a self-hosted solution, you can use Zipkin, which is inspired by an internal Google system called Dapper.
Have you looked at the Google Cloud Console UI? Its "Endpoints" tab should show all the services your project is running.
We are planning to integrate a microservices platform running in Kubernetes, and the architecture will be similar to the reference architecture below.
https://www.ibm.com/devops/method/content/architecture/microservices/1_0
What are the core components that can be replaced by Istio in the above Kubernetes microservices architecture?
I understand it can help with request routing, traffic management, policies and control.
If I need to migrate a similar project, which components available in Istio can replace Kubernetes components, or vice versa?
It doesn't replace any Kubernetes core components. Istio is an add-on for Kubernetes to manage microservices.
In the IBM article, you can use Istio for routing instead of an Ingress Controller.
Is there a way to add a service to the System application of Service Fabric, or is that a no-go scenario? What I'd like to do is have a metadata service about the other applications in the cluster that can be queried.
The system services are built into the cluster and are not extensible. You can certainly build an application that offers centralized metadata about other applications/services in the cluster but you would need to maintain that metadata yourself.