I have two EC2 instances running in the same region. One acts as active and the other as passive.
Both IPs have been added to the ALB backend pool.
How do we configure Route 53 so that when the active instance goes down, the passive instance takes over as primary?
Any suggestions on how to achieve this?
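One common way to express this is a Route 53 failover record pair with a health check on the active instance. A minimal boto3 sketch, where the hosted zone ID, record name, instance IPs, and health check ID are all placeholders rather than values from this setup:

```python
import boto3

route53 = boto3.client("route53")

# Placeholder values -- substitute your own hosted zone, record name,
# instance IPs, and a health check that monitors the active instance.
HOSTED_ZONE_ID = "Z0000000000000000000"
RECORD_NAME = "app.example.com"
PRIMARY_IP = "203.0.113.10"
SECONDARY_IP = "203.0.113.20"
PRIMARY_HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"

def failover_record(set_id, failover_role, ip, health_check_id=None):
    """Build one member of a PRIMARY/SECONDARY failover record pair."""
    record = {
        "Name": RECORD_NAME,
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": failover_role,  # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        # Route 53 shifts traffic to the SECONDARY record while this
        # health check reports unhealthy.
        record["HealthCheckId"] = health_check_id
    return record

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": failover_record(
                "active", "PRIMARY", PRIMARY_IP, PRIMARY_HEALTH_CHECK_ID)},
            {"Action": "UPSERT", "ResourceRecordSet": failover_record(
                "passive", "SECONDARY", SECONDARY_IP)},
        ]
    },
)
```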
I have two tasks in AWS ECS.
A. is the default target, mysite.com
B. is a forwarding rule, path-based, "/api/*"
I also want to forward to the task B container when the request specifies a separate port, like
mysite.com:12345
Is it possible?
I tried to add a new listener in addition to 443 and 80, but it shows a warning that it's not reachable because the security group is not allowing it (and I don't think I can change the security group).
If you want requests to reach your load balancer on that port, at least one security group on the load balancer must allow inbound traffic on that port. You can either edit an existing SG attached to the load balancer, or add a new SG to the load balancer.
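As a rough boto3 illustration, assuming port 12345 from the question and a placeholder security group ID for a group attached to the load balancer:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder: the ID of a security group attached to the load balancer.
ALB_SECURITY_GROUP_ID = "sg-0123456789abcdef0"

# Allow inbound TCP on the new listener port so requests can reach the ALB.
ec2.authorize_security_group_ingress(
    GroupId=ALB_SECURITY_GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 12345,
        "ToPort": 12345,
        # Open to the internet here; narrow the CIDR if only specific
        # clients should be able to use this port.
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```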
On an AWS EKS cluster, I have deployed a stateful application.
In order to load balance my application across different pods and availability zones, I have added an HAProxy Ingress Controller which uses an external AWS NLB.
I have one NLB in this cluster which points to the HAProxy Service. On top of the NLB I have created a global accelerator and I've set the NLB as its target endpoint.
My requirement is to ensure that once a user connects to the DNS of the Global Accelerator, they will always be directed to the same endpoint server, i.e. the same HAProxy pod.
The connection workflow goes like this: Client User -> Global Accelerator -> NLB -> HAProxy pod.
While searching for ways to make this work, here's what I've done:
To ensure stickiness between the NLB and its targets (the HAProxy pods), I have enabled stickiness on the NLB target group.
Now, when it comes to the stickiness between the Global Accelerator and the NLB, it looks like the right thing to do is to set the Global Accelerator's Client Affinity attribute to "Source IP". According to the documentation, with this setting the Global Accelerator honors client affinity by routing all connections with the same source IP address to the same endpoint group.
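For reference, here is roughly how those two settings map to API calls with boto3; the target group and listener ARNs are placeholders, and this is only a sketch of the configuration described above, not the poster's actual setup:

```python
import boto3

# Placeholder ARNs for the NLB target group and the Global Accelerator listener.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/haproxy/abc123"
GA_LISTENER_ARN = "arn:aws:globalaccelerator::123456789012:accelerator/abcd-1234/listener/xyz789"

# Stickiness between the NLB and its targets (the HAProxy pods).
elbv2 = boto3.client("elbv2")
elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "source_ip"},
    ],
)

# Client affinity on the Global Accelerator listener.
# The Global Accelerator API is served from us-west-2 regardless of where
# the accelerator's endpoints live.
ga = boto3.client("globalaccelerator", region_name="us-west-2")
ga.update_listener(
    ListenerArn=GA_LISTENER_ARN,
    ClientAffinity="SOURCE_IP",
)
```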
My expectation was that with these attributes enabled, the user would always get connected to the same NLB, which would then connect them to the same HAProxy pod.
In testing, when I connected to my application via the NLB's DNS name, the goal was achieved and I got a sticky connection. However, when I connect via the Global Accelerator, my session keeps crashing.
Any ideas of why that might be?
Or are there any suggestions of a different way to work with this?
This is not something that AWS supports (as of June 2022).
See this document https://aws.amazon.com/blogs/networking-and-content-delivery/updating-aws-global-accelerator-ec2-endpoints-based-on-autoscaling-group-events/
They specifically state:
An example is when you want to send UDP traffic with client IP preservation to a handful of instances, with a guarantee that the same backend instances will handle requests from the same clients (client affinity). This is not possible with Application Load Balancers because they do not support UDP traffic, and Network Load Balancers do not support sticky sessions or client IP preservation with AWS Global Accelerator.
This is not a technical question but more of an architectural one.
I have followed this blog for setting up the MongoDB cluster. We have 2 private subnets in which I have configured a 3-member MongoDB replica set. Now I want to use a single DNS name like mongod.some_subdomain.example.com for the whole cluster.
I do not have access to Route 53, and setting/updating DNS records takes at least 2 hours in my case since I depend on our cloud support team for it. I am also not sure which server primarily responds to application requests in a MongoDB cluster.
So is there a way to put the whole cluster behind an ELB and use the ELB's DNS name to route traffic to the primary, such that after a failover the new primary is the member served by the ELB (excluding the arbiter node)?
The driver will attempt to connect to all nodes in the replica set configuration. If you put the nodes behind proxies, the driver will bypass the proxies and try to talk to the nodes directly.
You can proxy standalone and sharded cluster deployments, as the driver doesn't need a direct connection to data nodes in those, but mapping multiple mongos instances to a single address can create problems with retryable reads/writes, sessions, transactions, etc. This is not a supported configuration.
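In other words, rather than a load balancer you give the driver the replica set members and let it find the primary itself. A minimal PyMongo sketch, with made-up hostnames, database/collection names, and replica set name standing in for the real ones:

```python
from pymongo import MongoClient

# Hypothetical member hostnames -- substitute the DNS names or IPs of your
# three replica set members; the replica set name "rs0" is also an assumption.
client = MongoClient(
    "mongodb://mongo-a.internal:27017,mongo-b.internal:27017,mongo-c.internal:27017"
    "/?replicaSet=rs0"
)

# The driver discovers the topology from the members above and automatically
# routes writes to whichever node is currently primary, including after a failover.
db = client["mydatabase"]  # hypothetical database name
db.mycollection.insert_one({"ping": 1})
```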
I created my Kubernetes cluster with a specific security group for each EC2 server type; for example, the backend servers have backend-sg associated with them, plus the node-sg that is created with the cluster.
Now I am trying to restrict access to my backend EC2 instances and open only port 8090 inbound and port 8080 outbound to a specific security group (let's call it frontend-sg).
I managed to do so, but when I changed the inbound port to 8081 in order to check that those restrictions actually worked, I was still able to access port 8080 from the frontend-sg EC2 instance.
I think I am missing something...
Any help would be appreciated
I will try to illustrate the situation in this answer to make it clearer. If I'm understanding your case correctly, this is what you have so far:
Now if you test those ports from the frontend EC2 instance to the backend EC2 instance, and they are in the same security group (node-sg), you will have traffic there. If you want to check group isolation, then you should have one instance outside of node-sg and only in frontend-sg targeting any instance in backend-sg (assuming that neither node-sg nor backend-sg permits said ports for inbound traffic).
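For comparison, the intended rule would look roughly like this in boto3 (the group IDs are placeholders); note that security group rules are additive, so if node-sg is also attached to the instance and allows the port, the traffic still gets through:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs for the groups described above.
BACKEND_SG_ID = "sg-0123456789abcdef1"   # backend-sg
FRONTEND_SG_ID = "sg-0123456789abcdef2"  # frontend-sg

# Allow inbound TCP 8090 on backend-sg only from instances in frontend-sg.
ec2.authorize_security_group_ingress(
    GroupId=BACKEND_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8090,
        "ToPort": 8090,
        "UserIdGroupPairs": [{"GroupId": FRONTEND_SG_ID}],
    }],
)
```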
Finally, a small note... Kubernetes by default closes all traffic (you need an ingress, load balancer, upstream proxy, NodePort, or some other means to actually expose your front-facing services), so the traditional fine-graining of backend/frontend instances and security groups is not that "clear-cut" when using k8s, especially since you don't really want to schedule manually (or by labels, for that matter) which instances pods will actually run on, but instead leave that to the k8s scheduler for better utilization of resources.
I have a MongoDB replica set in Azure.
I have:
server1 Primary
server2 Secondary
server3 Arbiter
I have a dev environment on my local machine that I want to point to this MongoDB replica set.
What do I open on my Azure firewall to make sure this configuration is set up according to best practices?
Do I create a load balanced endpoint to the Primary and Secondary or do I create a single endpoint to the arbiter, or perhaps even something else?
thanks!
MongoDB will not play well with a load-balanced endpoint (as you might end up sending traffic to a secondary, and you'd have no control over this unless you implemented a custom probe for each VM, and then you'd need to update the probe's status based on the replicaset node's health, for each node). The MongoDB client-side driver is designed to work with a replicaset's topology to make the correct decision on which node to communicate with. Each replicaset node should have a discrete addressable ip:port. If you have all your instances in a single cloud service (e.g. myservice.cloudapp.net) then you'll need one port per instance (since they'd all share a single ip address). If each instance is in a different cloud service, then you can have the same port for each, with different dns name / ip address for each.
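To illustrate the single-cloud-service case (one shared DNS name, one port per node), a PyMongo connection string might look something like this; the ports and the replica set name are assumptions:

```python
from pymongo import MongoClient

# All three nodes share the cloud service's DNS name, so each node is exposed
# on its own port (assumed values shown here).
client = MongoClient(
    "mongodb://myservice.cloudapp.net:27017,"
    "myservice.cloudapp.net:27018,"
    "myservice.cloudapp.net:27019"
    "/?replicaSet=rs0"  # assumed replica set name
)

# The driver talks to the replica set members directly and picks the primary
# itself; no load-balanced endpoint is involved.
print(client.admin.command("ping"))
```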
With iptables, one option is to open the required ports with a rule restricted to a specific source IP. That keeps the setup open for your use case while remaining secure.