VPC flow logs: Why does AWS exclude some traffic - amazon-vpc

I read the VPC Flow Logs blog post here. It says traffic to the Amazon DNS servers is excluded, and so are DHCP traffic and requests for instance metadata.
I'm just curious why that is.

If you reach out to AWS about it, they will just refer to it as a system limitation at this point and say it may be added to the roadmap.
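For reference, here is a minimal boto3 sketch of enabling flow logs on a VPC; the VPC ID, log group name, and role ARN are placeholders. Even with TrafficType="ALL", the categories mentioned above (Amazon DNS, DHCP, instance metadata) are not captured.

```python
import boto3

ec2 = boto3.client("ec2")

# Enable flow logs for a VPC. IDs, names, and ARNs below are hypothetical.
# Even with TrafficType="ALL", traffic to the Amazon DNS servers, DHCP
# traffic, and instance metadata requests do not appear in the logs.
response = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="my-vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
print(response["FlowLogIds"])
```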

Related

How to point my domain to my EKS cluster?

I have followed the AWS getting started guide to provision an EKS cluster (3 public subnets and 3 private subnets). After creating it, I get the following API server endpoint https://XXXXXXXXXXXXXXXXXXXX.gr7.us-east-2.eks.amazonaws.com (replaced the URL with X's for privacy reasons).
Accessing the URL in the browser I get the expected output from the cluster endpoint.
Question: How do I point my registered domain in Route 53 to my cluster endpoint?
I can't use a CNAME record because my domain is a root domain, so I'd get an apex domain error.
I don't have access to a static IP, and I don't believe my EKS cluster has a public IP address I can use directly. That would mean I can't use an A record either (as I'd need an IP address).
Can I please get help/instructions as to how I can point my domain straight to my cluster?
Below is my AWS VPC architecture:
Don't try to assign a pretty name to the API endpoint. Your cluster endpoint is the address that's used to talk to the control plane. When you configure your kubectl tool, the API endpoint is what kubectl talks to.
Once you've got an application running on your EKS cluster, and have a load balancer, or Ingress, or something for incoming connections, that's when you worry about creating pretty names.
And yes, if you're dealing with AWS load balancers you don't get a stable IP address to put in an A record, so you can't use the apex of the domain, unless you're hosting DNS in Route 53, in which case you can use "alias" records to point the apex of a domain at a load balancer.
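As a rough sketch of what such an alias record looks like through the API (boto3), with a hypothetical hosted zone ID, load balancer DNS name, and region-specific ELB hosted zone ID:

```python
import boto3

route53 = boto3.client("route53")

# Point the zone apex at a load balancer with an alias record.
# All IDs and names below are hypothetical placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z111111QQQQQQQ",  # your Route 53 hosted zone ID
    ChangeBatch={
        "Comment": "Apex alias to the application load balancer",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com.",
                    "Type": "A",
                    "AliasTarget": {
                        # Hosted zone ID of the ELB itself (a region-specific
                        # value listed in the AWS docs), not your own zone ID.
                        "HostedZoneId": "ZELBZONEID00000",
                        "DNSName": "my-app-lb-1234567890.us-east-2.elb.amazonaws.com.",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ],
    },
)
```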
Kubernetes is a massively complex thing to try to understand and get running. Given the type of question you're asking, it sounds like you don't have the full picture yet. I recommend (1) joining the Kubernetes Slack; it'll be a much faster way to get help than SO, and (2) taking Jeff Geerling's excellent Kubernetes 101 course on YouTube.

Access restrictions when using Gcloud vpn with Kubernetes

This is my first question on Stack Overflow:
We are using Gcloud Kubernetes.
A customer specifically requested a VPN Tunnel to scrape a single service in our Cluster (I know ingress would be more suited for this).
Since the VPN is IP-based and Kubernetes changes service IPs, I can only configure the VPN to cover the whole service IP range.
I'm worried that the customer will get full access to all services if I do so.
I have been searching for days on how to treat incoming VPN traffic, but haven't found anything.
How can I restrict the access? Or is it restricted and I need netpols to unrestrict it?
Incoming VPN traffic can either be terminated at the service itself or at the ingress, as far as I see it. Termination at the ingress would probably be better, though.
I hope this is not too confusing. Thank you so much in advance.
As you mentioned, an Ingress would be more suited here, but if you must use GCP Cloud VPN then you can restrict access into your GKE cluster (and your GCP VPC in general) by using GCP firewall rules along with GKE internal load balancers (HTTP(S) or TCP).
As a general picture, something like this:
We need to add two firewall rules to the dedicated networks (project-a-network and project-b-network in this example). Go to Networking -> Networks and click project-[a|b]-network, then click "Add firewall rule". The first rule allows SSH traffic from the public internet so that we can SSH into the instances; the second rule allows ICMP traffic (ping uses the ICMP protocol) between the two networks.
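The firewall rules above control what can enter the VPC. Inside the cluster, a NetworkPolicy (the "netpols" mentioned in the question) can additionally limit which source ranges reach only the one service the customer should scrape. This is a minimal sketch using the Kubernetes Python client; it assumes NetworkPolicy enforcement is enabled on the GKE cluster, and the namespace, labels, VPN client CIDR, and port are hypothetical placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

# Allow ingress to the scrape target's pods only from the customer's VPN
# client range. Namespace, labels, CIDR, and port are placeholders.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-vpn-to-scrape-target", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "scrape-target"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        ip_block=client.V1IPBlock(cidr="10.100.0.0/24"),  # VPN client range
                    )
                ],
                ports=[client.V1NetworkPolicyPort(port=9100, protocol="TCP")],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)
```

Because the policy selects only the pods behind that single service, the rest of the cluster stays unreachable from the VPN range even though the tunnel routes to the whole IP range.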

How to setup manual fail over (PROD to DR) in IKS?

We have two IBM Kubernetes clusters, and whenever an issue happens in one cluster we need to fail over to DR. Can anyone tell me how to do that automatically? The clusters are in two different zones, Montreal and Toronto. We also have IBM Cloud Internet Services.
You could use the CIS Global Load Balancer offering to set up a globally load-balanced and health-checked URL for your applications. You'd create a GLB for, say, app.mydomain.com/app_path and then back it with the VIPs for your cluster ALBs in the same origin pool. Configure a health check at the GLB so traffic is only sent to endpoints that are healthy.
CIS GLB docs are covered at https://cloud.ibm.com/docs/cis?topic=cis-global-load-balancer-glb-concepts

Will PrivateLink allow firehose to access my private Redshift cluster?

I am trying to set up Firehose to send data from a Kinesis stream to a Redshift cluster. Firehose successfully inserts the data into my S3 bucket, but I am receiving the following error when Firehose attempts to execute the S3 -> Redshift COPY command:
The connection to the specified Amazon Redshift cluster failed. Ensure that security settings allow Firehose connections, that the cluster or database specified in the Amazon Redshift destination configuration JDBC URL is correct, and that the cluster is available.
I have performed every setup step according to this except for one: I did not make my Redshift cluster publicly accessible. I am unable to do this because the cluster is in a private VPC that does not have an internet gateway attached.
After researching the issue, I found this article which provides insight for how to set up an AWS PrivateLink with firehose. However, I have heard that some AWS services support PrivateLink and others do not. Would PrivateLink work for this case?
I am also concerned with how this would affect the security of my VPC. Could anyone provide insight into the possible risks of using PrivateLink?
I was able to solve this issue. Add an internet gateway route to your VPC route table.
Go to the Redshift cluster's VPC.
On the Routes tab of the route table (you should have 3 private route tables), choose Edit, then Add another route, and add the following route as necessary. Choose Save when you're done.
For IPv4 traffic, specify 0.0.0.0/0 in the Destination box and select the internet gateway ID in the Target list.
If you add the internet gateway ID to all 3 private route tables, you might see failures in other applications that use the same routes/VPC. To avoid that, update only one route table with the internet gateway ID and keep the NAT gateway as the target for 0.0.0.0/0 in the other two.
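For reference, the same route change can be made through the API. A minimal boto3 sketch, with hypothetical route table and internet gateway IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Add a default route to the internet gateway on ONE of the private route
# tables, as described above. IDs below are hypothetical placeholders.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",
)

# If that route table already has a 0.0.0.0/0 route (e.g. to a NAT gateway),
# create_route will fail; use replace_route to swap the target instead.
# ec2.replace_route(
#     RouteTableId="rtb-0123456789abcdef0",
#     DestinationCidrBlock="0.0.0.0/0",
#     GatewayId="igw-0123456789abcdef0",
# )
```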

Whitelist traffic to mysql from a kubernetes service

I have a Cloud MySQL instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?
The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container into the pod with your application that allows for traffic to be passed to Cloud SQL.
You can find instructions for setting it up here. (It says it's for GKE, but the principles are the same)
If you prefer something a little more hands-on, this codelab will walk you through taking an app from running locally to running on a Kubernetes cluster.
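To make the sidecar pattern concrete, here is a rough sketch of a Deployment built with the Kubernetes Python client: the app talks to MySQL on 127.0.0.1 and the proxy container forwards that traffic to Cloud SQL. The app image, instance connection name, proxy image tag, and port are assumptions, and credentials / Workload Identity setup is omitted, so check the Cloud SQL Proxy docs for the current flags.

```python
from kubernetes import client, config

config.load_kube_config()

# App container connects to 127.0.0.1:3306; the proxy sidecar forwards that
# traffic to Cloud SQL. Names, images, and the instance connection name are
# hypothetical placeholders.
app = client.V1Container(
    name="my-app",
    image="gcr.io/my-project/my-app:latest",
    env=[
        client.V1EnvVar(name="DB_HOST", value="127.0.0.1"),
        client.V1EnvVar(name="DB_PORT", value="3306"),
    ],
)

proxy = client.V1Container(
    name="cloud-sql-proxy",
    image="gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0",  # assumed tag
    # Flags per the v2 Cloud SQL Auth Proxy; verify against current docs.
    args=["--port=3306", "my-project:us-central1:my-instance"],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
            spec=client.V1PodSpec(containers=[app, proxy]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```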
I am using Google Cloud Platform, so my solution was to add the Google Compute Engine VM instance's external IP to the whitelist.