I'm creating an MSK cluster with CloudFormation. However, I can't find how to create an MSK cluster configuration in the CloudFormation documentation.
So how can I create an MSK cluster configuration with CloudFormation, or is it not possible at all?
If there are any references on this, please let me know.
This is not possible using regular CloudFormation resources. See here for an open feature request to add it.
There are, however, still options if you want to provision this using CloudFormation, for example:
Using a Custom Resource
Using an Extension, e.g. by creating your own
Both of these are a lot more work than simply using a regular CloudFormation resource.
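To illustrate the Custom Resource route, here is a rough sketch of a Lambda handler that calls the MSK API directly via boto3. All names and properties here are illustrative assumptions (the event shape follows the standard CloudFormation custom-resource contract, but the property names are made up), not an official template:

```python
# Sketch of a Lambda-backed CloudFormation Custom Resource that creates
# an MSK cluster configuration. Property names in ResourceProperties are
# hypothetical; adapt them to your template.

def to_server_properties(props: dict) -> bytes:
    """Serialize a dict of Kafka settings into server.properties format,
    which is what the MSK CreateConfiguration API expects as bytes."""
    lines = [f"{key}={value}" for key, value in sorted(props.items())]
    return "\n".join(lines).encode("utf-8")

def handler(event, context):
    import boto3  # imported lazily; available in the Lambda runtime
    kafka = boto3.client("kafka")
    if event["RequestType"] == "Create":
        props = event["ResourceProperties"]
        resp = kafka.create_configuration(
            Name=props["Name"],
            KafkaVersions=props.get("KafkaVersions", ["2.8.1"]),
            ServerProperties=to_server_properties(
                props.get("ServerProperties", {})
            ),
        )
        return {"PhysicalResourceId": resp["Arn"]}
    # Update/Delete handling, and signalling success or failure back to
    # CloudFormation (e.g. via the cfn-response module), omitted for brevity.
```

You would then reference this Lambda from an `AWS::CloudFormation::CustomResource` (or `Custom::...`) resource in your template and pass the configuration settings as resource properties.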
I currently have an application deployed on AKS that produces to a Kafka topic, with Kafka deployed on HDInsight. I want to implement SASL/OAUTHBEARER as the security mechanism.
However, I'd also like the secrets to be stored in Azure Key Vault (AKV).
Is it possible to sync the secrets stored in AKV with Kafka on HDInsight?
I have not tried it yet, as I didn't find any documentation online indicating its feasibility, hence I'm looking for guidance on this.
I have highly available Cloud SQL Postgres instances.
If I add read replicas to each one, will it cause any downtime, and will it require a restart?
It seems likely there is no restart at all, but I could not find anything clear in the GCP Cloud SQL documentation.
According to the Cloud SQL MySQL documentation, creating a replica requires a restart if binary logging is not enabled.
Creating Cloud SQL replicas for Postgres and SQL Server doesn't have this caveat: read replicas do NOT require any restart or downtime.
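For reference, a read replica can be created with a single gcloud command while the primary keeps serving traffic (the instance names and region below are placeholders):

```shell
# "my-primary" and "my-replica" are placeholder instance names;
# pick a region appropriate for your workload.
gcloud sql instances create my-replica \
    --master-instance-name=my-primary \
    --region=europe-west1
```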
In researching the many AWS offerings and plans, I'm overwhelmed by the terminology and pricing around Docker, RDS, EC2, Beanstalk, and trying to wrap my head around it all. In the end, all we'd like is the cheapest way to host internal Angular 7+ apps that have a corresponding Spring Boot REST API which pulls from a PostgreSQL database. Of course, each app/REST/DB stack should have dev, test, and prod environments as well. Utilizing AWS, what is a good and cost-effective way to achieve these requirements?
Angular - Use S3 and CloudFront (static content)
Spring Boot REST APIs - Use EC2, Beanstalk, or Lambda (for serverless)
PostgreSQL - Use RDS or install it on an EC2 instance.
For Angular and the Spring Boot REST APIs, you can host both of them on an EC2 machine.
For the database, you can host the PostgreSQL servers on EC2 machines for the dev and test environments, and choose RDS for production.
Does anyone have a working example of using Snakemake with Azure Kubernetes Service (AKS)? If it is supported, which flags and setup are needed to use the Snakemake Kubernetes executor with AKS? What material there is out there is mostly on AWS with S3 buckets for storage.
I have never tried it, but you can basically take this as a blueprint and replace the Google storage part with a storage backend that works in Azure. As far as I know, Azure has its own storage API, but there are workarounds to expose an S3 interface (google for "Azure S3"). So the strategy would be to set up an S3 API and then use the S3 remote provider for Snakemake. In the future, Snakemake will also support Azure directly as a remote provider.
You are probably aware of this already, but for the benefit of others:
Support for AKS has now been built into Snakemake. This works even without a shared filesystem. Most parts of the original blog post describing the implementation have made it into the official Snakemake executor documentation.
In a nutshell: upload your data to Blob storage and deploy an AKS cluster. Then run Snakemake with these flags: --default-remote-prefix --default-remote-provider AzBlob and --envvars AZ_BLOB_ACCOUNT_URL AZ_BLOB_CREDENTIAL, where AZ_BLOB_CREDENTIAL is optional if you use a SAS in the account URL. You can use your Snakefile as is.
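Putting those flags together, an invocation might look roughly like this (the container name, account URL, and job count are placeholder assumptions; the SAS token is embedded in the account URL, so AZ_BLOB_CREDENTIAL is not needed here):

```shell
# Placeholder values: replace the account URL (with its SAS token),
# the container name "snakedata", and the job count with your own.
export AZ_BLOB_ACCOUNT_URL="https://<account>.blob.core.windows.net/?<sas-token>"

snakemake --kubernetes \
    --container-image snakemake/snakemake \
    --default-remote-prefix snakedata \
    --default-remote-provider AzBlob \
    --envvars AZ_BLOB_ACCOUNT_URL \
    --jobs 3
```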
I am looking to use MongoDB for my project but don't want the administrative overhead of managing MongoDB services myself.
As my project currently hosts most of its components on AWS, I am looking for a managed MongoDB service (if any) provided by AWS.
AWS provides DynamoDB as a managed service, and it's well documented, but how to access a managed MongoDB service on AWS is not clear to me.
I have read about MongoDB's managed service, Atlas, but I'm not sure whether I can access it as a service from my existing AWS instances.
Please provide your inputs on the best practice for this scenario.
There is no managed MongoDB service provided by AWS.
However, there are managed MongoDB services which provide hosting on AWS (in addition to Azure, GCP, etc.); MongoDB Atlas is an example.
MongoDB Atlas provides a managed MongoDB service with the option to host on AWS, and you may opt to use that. You can choose the region of your preference and then use the VPC Peering feature to let the application servers in your existing VPC/account communicate with the MongoDB Atlas setup.
You can read more about all of this at https://www.mongodb.com/cloud/atlas