I have a large application with lots of microservices that communicate through Kafka. Right now it's running on GKE.
We are moving Kafka to confluent.io, and we were planning to move some of the microservices to Google Cloud Run (fully managed).
But... it looks like Google Cloud Run (fully managed) does not support listening to Kafka events, right? Are there any plans to support it? Is there a workaround?
EDIT:
The post shared by andres-s shows that you can implement your own Cloud Run service and connect it to Confluent Kafka, on Anthos.
It would be great to have this option in the fully managed Google Cloud Run service.
But in the meantime, the question would be: is it possible to implement it in a regular GKE cluster (not Anthos)?
Google Cloud has a fully managed Kafka solution through its SaaS partner Confluent, which uses Cloud Run for Anthos (with GKE).
Google Pub/Sub is the GCP alternative to Kafka, but through Confluent you can use Kafka on GCP.
Cloud Run is just Knative Serving. It is stateless and spins up only when it receives requests; because of this, it can't subscribe to a topic and pull events itself.
Knative Eventing is more stateful in nature: it handles the pulls and then triggers the services running on Knative Serving. Ideally, the two are used together to give you the full serverless experience.
The good news is, there is a "hack": you can bridge Kafka to Pub/Sub, then have Pub/Sub push to Cloud Run (a sketch of such a bridge is below). If you are adventurous and don't mind OSS software, there are a number of Knative Eventing tutorials at serverlesseventing.com.
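For illustration, here is a minimal sketch of such a bridge, assuming hypothetical project, topic, and bootstrap-server names: a small long-lived service that consumes from Kafka and republishes each record to Pub/Sub. (It has to run somewhere that tolerates a long-lived consumer, such as GKE or a VM, not on Cloud Run itself.)

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class KafkaToPubSubBridge {
    public static void main(String[] args) throws Exception {
        // Kafka consumer settings; the bootstrap server is a placeholder for
        // the Confluent Cloud endpoint (SASL_SSL auth properties omitted).
        Properties props = new Properties();
        props.put("bootstrap.servers", "BOOTSTRAP_SERVER:9092");
        props.put("group.id", "pubsub-bridge");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Pub/Sub publisher; project and topic names are hypothetical.
        Publisher publisher = Publisher.newBuilder(
                TopicName.of("my-gcp-project", "orders")).build();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    PubsubMessage message = PubsubMessage.newBuilder()
                            .setData(ByteString.copyFromUtf8(record.value()))
                            .build();
                    // Fire-and-forget republish; production code would wait on
                    // the returned future before Kafka auto-commits offsets.
                    publisher.publish(message);
                }
            }
        }
    }
}
```

A Pub/Sub push subscription pointing at the Cloud Run service URL then completes the chain, so the Cloud Run service only ever sees plain HTTP requests.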
Related
I am trying to implement the Azure OSBA service broker on Google Cloud Shell to interact with Google Cloud Kubernetes and Azure services, but I am not able to run it; the commands always end in some error.
I have also installed Helm and Service Catalog. Please suggest a simple service broker for Google Cloud Shell that I can implement easily for demo purposes. Can I use Google Cloud MySQL (GCP)? Please provide any information in the form of a website link or GitHub repo.
You can use Config Connector to manage your Google Cloud Platform (GCP) resources through Kubernetes configuration, as the Google Cloud Platform Service Broker is deprecated.
This documentation will help you get started with Config Connector by managing a Cloud Spanner instance. You can also refer to this repository, which contains sample applications and resources, like Pub/Sub, for use with Config Connector.
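To make that concrete: a Config Connector resource is just a Kubernetes manifest you apply like any other. A minimal sketch (resource names here are made up) declaring a Pub/Sub topic and subscription could look like this:

```yaml
# Hypothetical names; apply with kubectl like any other Kubernetes resource.
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: demo-topic
---
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubSubscription
metadata:
  name: demo-subscription
spec:
  topicRef:
    name: demo-topic
```

Config Connector watches these objects and creates the corresponding GCP resources in your project, which is what replaces the deprecated service broker flow.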
I was thinking of testing out Google's Cloud Run for a simple app when I suddenly got to wondering whether Cloud Run is basically a managed K8s cluster. I'd really like to know when Cloud Run would be preferred over traditional K8s clusters, and why.
Thanks.
Technology-wise, Cloud Run is a managed Kubernetes cluster with Knative running the containers on top of it.
However, Cloud Run brings an additional advantage when you run fully managed: you only pay for the resources you use. In other words, Cloud Run can scale down to zero cost, rather than bottoming out at the cost of keeping a minimum-sized cluster running.
I have a 3-node on-prem cluster. Now I want to collect and analyze reverse proxy logs (and other Service Fabric system logs). I googled and found this article, which says:
Refer to Collect reverse proxy events to enable collecting events from
these channels in local and Azure Service Fabric clusters.
But that link describes how to enable, configure, and collect reverse proxy logs for clusters in Azure, and I don't understand how to do it on-prem.
Please help!
Service Fabric events are just ETW events; you have the option to use the built-in mechanism to collect and forward them to a monitoring application like Windows Azure Diagnostics, or you can build your own.
If you decide to follow the approach in the documents, it will work on Azure or on-premises; the only caveat is that on-premises it will still send the logs to Azure, but otherwise it works the same way.
On-premises, another option is to build your own collector using EventFlow: you can configure EventFlow to collect the reverse proxy ETW events and forward them to ELK or any other monitoring platform.
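As a rough sketch, the eventFlowConfig.json for such a collector could look like the following. The ETW provider name and the Elasticsearch address are assumptions you'd need to adapt to your own cluster:

```json
{
  "inputs": [
    {
      "type": "ETW",
      "providers": [
        // Assumed provider name - verify it against your cluster's ETW manifest.
        { "providerName": "Microsoft-ServiceFabric-ReverseProxy" }
      ]
    }
  ],
  "outputs": [
    {
      "type": "ElasticSearch",
      "serviceUri": "http://elk.example.com:9200",
      "indexNamePrefix": "sf-reverseproxy"
    }
  ],
  "schemaVersion": "2016-08-11"
}
```

EventFlow then runs as part of your service host process and streams the matching ETW events straight to the configured output, with no Azure dependency in the path.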
I want to create and deploy a Kafka cluster for a data pipeline.
What is the preferred way to deploy it in the cloud: VMs or Kubernetes?
Kafka can run either way. However, if you are asking this question, you might want to consider whether you really want to manage your own Kafka cluster in the cloud at all. Why not use an existing Kafka-as-a-service offering?
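With a managed offering such as Confluent Cloud, "deployment" mostly reduces to client configuration. A sketch of the properties a Java client would need (the endpoint and credentials below are placeholders):

```java
import java.util.Properties;

public class ManagedKafkaConfig {
    // Client-side configuration for a managed Kafka service such as
    // Confluent Cloud; the endpoint and credentials are placeholders.
    public static Properties clientProperties() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "BROKER_ENDPOINT:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"API_KEY\" password=\"API_SECRET\";");
        return props;
    }
}
```

The provider operates the brokers, storage, and upgrades; this client config is essentially all you keep, which is the trade-off the answer is pointing at.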
In Spring Cloud Data Flow, as per my understanding, each stream is a microservice but the Data Flow server is not. Am I right?
Is it possible to have multiple instances of the Spring Cloud Data Flow (SCDF) server? How do I load-balance the Data Flow server? I am planning to deploy it on AWS. The official documentation doesn't mention anything about load balancing the Data Flow server. If it is possible, how do the dashboard and shell work?
The SCDF server is a regular Spring MVC + Spring Boot application that serves the REST APIs, DSL commands, UI, and repository access for stream/task metadata persistence.
On platforms like Cloud Foundry and Kubernetes, when you scale the SCDF server, the platform automatically handles traffic routing and load balancing.
If you were to orchestrate the deployment on your own on AWS, you'd have to put a load balancer in front of the server instances. The shell, UI, and REST API clients would then hit the load balancer to interact with the SCDF server.
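For example, if the load balancer were reachable at a (hypothetical) scdf.example.com, you'd target it from the shell rather than any single instance:

```
dataflow config server --uri http://scdf.example.com:9393
```

One assumption worth noting: for multiple server instances to serve the same streams and tasks, they'd need to point at a shared backing database instead of the default embedded one.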