Kafka Mirror Maker With Azure Event Hub - apache-kafka

I am working on a hybrid cloud solution for moving data from on-premises to the cloud, and I want to use MirrorMaker to mirror an on-premises Apache Kafka cluster into Azure Event Hubs.
Is this possible, and if so, is a specific version of Kafka required?
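For context, Azure Event Hubs exposes a Kafka-compatible endpoint (Standard tier and above) on port 9093 using SASL_SSL with the PLAIN mechanism, and MirrorMaker 2 ships with Apache Kafka 2.4 and later. The following is only a rough sketch of pointing MirrorMaker 2 at that endpoint; the namespace, broker address, and connection string are placeholders, not values from the question.

```bash
# Sketch only: mirror an on-premises cluster into an Event Hubs namespace.
# "my-namespace" and "onprem-broker:9092" are hypothetical placeholders.
cat > mm2-eventhubs.properties <<'EOF'
clusters = onprem, eventhub
onprem.bootstrap.servers = onprem-broker:9092
eventhub.bootstrap.servers = my-namespace.servicebus.windows.net:9093
eventhub.security.protocol = SASL_SSL
eventhub.sasl.mechanism = PLAIN
eventhub.sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="$ConnectionString" \
  password="<Event Hubs namespace connection string>";
onprem->eventhub.enabled = true
onprem->eventhub.topics = .*
EOF

# connect-mirror-maker.sh is included in the Kafka distribution from 2.4 onwards.
bin/connect-mirror-maker.sh mm2-eventhubs.properties
```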

Related

Copy on-premises SQL Server hosted on a Linux server to Azure Data Lake using Azure Data Factory

I have a requirement to copy a table from an on-premises SQL database hosted on a Linux server to Azure Data Lake using Azure Data Factory. The self-hosted integration runtime only works natively on Windows systems. Can someone share their thoughts or a workaround to achieve this requirement?
Regards
Aravindan
Unfortunately, this cannot be achieved directly: ADF requires the self-hosted integration runtime (SHIR) to connect to on-premises data sources, and the SHIR's system requirements only cover Windows.
A workaround could be using SSIS packages in Linux to extract the data. For more information, please refer to this documentation: Extract, transform, and load data on Linux with SSIS

Apache Kafka patch release process

How does Kafka release patch updates?
How do users find out about Kafka patch updates?
Kafka is typically distributed as a zip/tar archive containing the binaries used to start, stop, and manage Kafka. To keep track of new releases, you may want to:
Subscribe to https://kafka.apache.org/downloads by generating a feed for it.
Subscribe to any feeds that give you updates.
Write a script that periodically checks https://downloads.apache.org/kafka/ for new Kafka releases and notifies you or downloads them (a sketch follows below).
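As an illustration of the last point, here is a rough sketch of such a check; the file paths are arbitrary and the notification step is left as a simple echo:

```bash
#!/usr/bin/env bash
# Rough sketch: list release directories on downloads.apache.org/kafka/ and
# report any versions not seen on the previous run.
KNOWN=/var/tmp/kafka-releases-known.txt
CURRENT=/var/tmp/kafka-releases-current.txt
touch "$KNOWN"

# Release directories are named after the version, e.g. 3.7.0/.
curl -s https://downloads.apache.org/kafka/ \
  | grep -oE 'href="[0-9]+\.[0-9]+\.[0-9]+/"' \
  | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' \
  | sort -u > "$CURRENT"

NEW=$(comm -13 "$KNOWN" "$CURRENT")
if [ -n "$NEW" ]; then
    echo "New Kafka release(s): $NEW"   # replace with mail/Slack/download as needed
fi
mv "$CURRENT" "$KNOWN"
```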
Kafka versions typically follow a major.minor.patch format.
Every time there is a new Kafka release, we need to download the latest archive, reuse the old configuration files (making changes if required), and start Kafka from the new binaries. The upgrade process is fully documented in the Upgrading section at https://kafka.apache.org/documentation
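For a single broker, that manual process could look roughly like the sketch below; the version number, Scala build, and paths are illustrative only, and a rolling upgrade of a real cluster should follow the Upgrading documentation referenced above.

```bash
# Illustrative only: replace binaries while reusing the existing configuration.
VERSION=3.7.0          # example version, not a recommendation
curl -O "https://downloads.apache.org/kafka/${VERSION}/kafka_2.13-${VERSION}.tgz"
tar -xzf "kafka_2.13-${VERSION}.tgz"

# Reuse the old broker configuration (the /opt/kafka path is hypothetical).
cp /opt/kafka/config/server.properties "kafka_2.13-${VERSION}/config/server.properties"

/opt/kafka/bin/kafka-server-stop.sh
"kafka_2.13-${VERSION}/bin/kafka-server-start.sh" -daemon "kafka_2.13-${VERSION}/config/server.properties"
```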
For production environments, we have several options:
1. Using a managed Kafka service (e.g. AWS, Azure, Confluent)
In this case we need not worry about patching and security updates to Kafka, because they are taken care of by the service provider. On AWS, you will typically get notifications in the console about when your Kafka update is scheduled.
It is easy to get started with a managed Kafka service for production environments.
2. Using self-hosted Kafka in Kubernetes (e.g. using Strimzi)
If you are running Kafka in a Kubernetes environment, you can use the Strimzi operator and helm upgrade to move to the version you require. First refresh the chart information from the repository using helm repo update.
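For instance, assuming the operator was installed from the Strimzi chart repository under the release name strimzi-cluster-operator (the release name, namespace, and chart version below are examples, not values from the question):

```bash
helm repo add strimzi https://strimzi.io/charts/    # no-op if already added
helm repo update
helm upgrade strimzi-cluster-operator strimzi/strimzi-kafka-operator \
  --namespace kafka --version 0.40.0                # example chart version
```

After upgrading the operator, the Kafka version itself is bumped by editing spec.kafka.version in the Kafka custom resource.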
Managed services and Kubernetes operators make this easy to manage; manually administering Kafka clusters is comparatively difficult.

Apache Ambari local repository (Cloudera)

I have a production cluster using Ambari from Hortonworks. Cloudera has now blocked all access to the HDP repository, because a paid support subscription is needed.
This hit us really hard because we have a big infrastructure using Ambari, Kafka, and Storm.
I'm trying to build Ambari from source, but I think a local HDP repository is needed.
Does anyone know how to build a repository starting from the Kafka and Storm sources?

How to integrate OnPrem Azure DevOps Server with the cloud one?

My firm has the Azure DevOps online version, where we have all our projects and repos. We were not able to configure CI/CD for the repos because our internal server network doesn't have access to the internet.
To overcome this, we built a new server that has access both to the internet and to the internal network. On the new server we installed and configured Azure DevOps Server 2019. We don't want to migrate our repos from the cloud version to the on-premises version.
I am trying to link the on-premises repo to the cloud repo, but it is not working. I issued a PAT on the cloud version and added it as a service connection under Pipelines in the on-premises version, but I am still not able to see and link the cloud repos.
I can clone the repo from the cloud to the on-premises server, but that will not get the latest code, as the code is being checked into the cloud repos.
Can anyone please guide me on how to link the two?
Thanks!!!
I don't think there's a meaningful way to integrate Azure DevOps Services and Azure DevOps Server, as they are essentially the same product. I assume (but don't know) that you're looking to connect Azure DevOps Services to on-premises builds and deployments, since you state that you want to keep the repos in Azure DevOps Services. So, in essence, you want to run build and deployment group agents in an on-premises environment.
Take a look at the agent documentation, especially the communication subsection:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops
Or this old blog post, from which the communication section originates:
https://devblogs.microsoft.com/devops/deploying-to-on-premises-environments-with-visual-studio-team-services-or-team-foundation-server/
The ideal solution would probably be to run self-hosted build agents on the server that's open to the internet and configure an agent pool for them in Azure DevOps Services. For deployments, you'll want to use deployment groups and install deployment group agents on the target servers, where they only need outbound access on port 443 to communicate with Azure DevOps Services.
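As a rough illustration of that setup, registering a self-hosted agent on the internet-facing server looks roughly like this; the organization, pool, and agent names are made up, and on Windows the script is config.cmd rather than config.sh.

```bash
# Sketch only: unattended registration of a self-hosted agent against
# Azure DevOps Services. "myorg", "OnPremPool", and "build-01" are placeholders.
./config.sh --unattended \
  --url https://dev.azure.com/myorg \
  --auth pat --token "<personal access token>" \
  --pool OnPremPool \
  --agent build-01
./run.sh
```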
If that's not possible, you'd have to install deployment agents on the build machine, which can then see your other on-premises servers, but this is a rather unsatisfactory solution since you'd either have to rely on WinRM capabilities for deployments or expose too much of the network between your build server and the other on-premises servers.

Any contrib package for Apache Beam where I can commit a Dataflow pipeline?

I made a Dataflow pipeline that connects Pub/Sub to BigQuery. Any ideas where the right place would be to contribute this upstream in Apache Beam?