Log activity monitoring on ELK box within EKS - kubernetes

I have configured an ELK stack server with Filebeat, which monitors logs across several nodes in the EKS cluster on AWS.
I would like to expose the Kibana dashboard so that I can view these logs. As the machine containing the ELK stack has a private IP address (no public IP), how can I expose it to outside access so that I can view it from my desktop? There were a few recommendations to follow; however, the 1st and 3rd don't work very well, whereas the 2nd is not preferred:
1. Set up an ingress onto the ELK machine
2. Set up a DNS entry so that a Route 53 record points to the IP address of the ELK machine
3. Port forwarding
I would appreciate some insight into a potential solution.

Simply setting up port forwarding on the host machine worked for me.
ssh -v -N -L <local port>:<elk_host>:<remote port> <jump box>
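For example, assuming Kibana listens on its default port 5601 on the private ELK host and you reach the VPC through a bastion/jump host, the tunnel could look like this (the IP and bastion hostname are placeholders for your environment):
# Forward local port 5601 through the jump box to Kibana on the private ELK host
ssh -v -N -L 5601:10.0.12.34:5601 ec2-user@bastion.example.com
# Then browse to http://localhost:5601 on your desktop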

Related

How to make two local minikube clusters communicate with each other?

I have two minikube clusters (two separate profiles) running locally; call them minikube cluster A and minikube cluster B. Each of these clusters also has an ingress and a DNS name associated with it locally. The DNS names are hello.dnsa and hello.dnsb. I am able to ping and nslookup both of them, just as in https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/#testing
I want pod A in cluster A to be able to communicate with pod B in cluster B. How can I do that? I logged into pod A in cluster A and ran telnet hello.dnsb 80, and it doesn't connect; I suspect there is no route. Similarly, I logged into pod B in cluster B and ran telnet hello.dnsa 80, and it doesn't connect either. However, if I run telnet hello.dnsa 80 or telnet hello.dnsb 80 from my host machine, telnet works!
Is there any simple way to solve this problem for now? I am OK with any solution, even adding routes manually using ip route add if needed.
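As a rough illustration of the manual-route idea only: the profile names, node IP, and service CIDR below are hypothetical (10.96.0.0/12 is the common Kubernetes default), and whether this works at all depends on the minikube driver and network setup:
# Look up cluster B's node IP (assuming profiles named clusterA and clusterB)
minikube -p clusterB ip
# From cluster A's node, route cluster B's service CIDR via that node IP
minikube -p clusterA ssh
sudo ip route add 10.96.0.0/12 via 192.168.49.3   # 192.168.49.3 is a hypothetical node IP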
Skupper is a tool available for performing these actions. It is a service interconnect that facilitates secured communication between the clusters; for more information on Skupper, go through this documentation.
There are multiple examples in which minikube is integrated with Skupper; go through this configuration documentation for more details.
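As a rough sketch of what that flow looks like with the Skupper CLI (the context, deployment, and file names here are hypothetical; see the Skupper docs for the exact steps):
# Initialize a Skupper site in cluster A and create a connection token
kubectl config use-context clusterA
skupper init
skupper token create token.yaml
# Initialize a site in cluster B and link it to cluster A using the token
kubectl config use-context clusterB
skupper init
skupper link create token.yaml
# Expose a deployment from cluster B; pods in cluster A can then reach it by service name
skupper expose deployment/hello --port 80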

Access Kubernetes applications via localhost from the host system

Is there any way, other than port forwarding, to access the apps running inside my K8s cluster via http://localhost:port from my host operating system?
For example:
I am running a minikube setup to practice K8s, and I deployed three pods along with their services. I chose three different service types: ClusterIP, NodePort, and LoadBalancer.
For ClusterIP, I can use the port-forward option to access my app via localhost:port, but the problem is that I have to leave that command running, and if for some reason it is disrupted, the connection is dropped (see the sketch after this list). So is there any alternative solution here?
For NodePort, I can only access the app via the minikube node IP, not via localhost; therefore, if I have to access it remotely, I won't have a route to that node IP address.
For LoadBalancer, this is not a valid option, as I am running minikube on my local system, not in the cloud.
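For reference, the commands being described look roughly like this (the service name and ports are hypothetical); the port-forward stays in the foreground, and the tunnel dies with the process:
# Forward local port 8080 to port 80 of a ClusterIP service named my-app
kubectl port-forward service/my-app 8080:80
# NodePort access goes through the node IP instead of localhost
minikube ip                         # prints the node IP
curl http://$(minikube ip):30080    # 30080 is a hypothetical NodePort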
Please let me know if there is any other solution to this problem. The reason I am asking is that when I deploy the same application via Docker Compose, I can access all these services via localhost:port, and I can even call them via VM_IP:port from other systems.
Thanks,
-Rafi

I am not able to expose a service in a Kubernetes cluster to the internet

I have created a simple hello-world service in my Kubernetes cluster. I am not using any cloud provider and have created it on a plain Ubuntu 16.04 server from scratch.
I am able to access the service inside the cluster, but when I try to expose it to the internet, it does not work.
Here is the yml file - deployment.yml
And this is the result of the command - kubectl get all:
Now when I try to access the external IP with the port in my browser, i.e. 172.31.8.110:8080, it does not work.
NOTE: I also tried the NodePort service type, but it does not give me any external IP. The state remains pending under the "EXTERNAL-IP" column when I run "kubectl get services".
How do I resolve this?
I believe you might have a mix of networking problems tied together.
First of all, 172.31.8.110 belongs to a private network and is not routable over the Internet, so make sure that the location you are trying to browse from can reach the destination (i.e. is on the same private network).
As a quick test you can make an ssh connection to your master node and then check if you can open the page:
curl 172.31.8.110:8080
In order to expose it to the Internet, you need to use a public IP for your master node, not an internal one. Then update your Service's externalIPs accordingly.
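A minimal sketch of what that might look like, assuming the node's public IP is 203.0.113.10 (a placeholder) and the app and Service names/ports match your deployment:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 8080
    targetPort: 8080
  externalIPs:
  - 203.0.113.10   # the node's public IP (placeholder)
EOF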
Also make sure that your firewall allows network connections from the public Internet to port 8080 on the master node.
In any case, I suggest you use this configuration for testing purposes only, as it is generally a bad idea to use the master node for service exposure: it puts extra networking load on the master and widens the attack surface. Use something like an Ingress controller (NGINX or another) plus an Ingress resource instead.
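For completeness, a minimal Ingress for that route might look like the sketch below; the hostname, service name, and port are placeholders, and it assumes an NGINX ingress controller is already installed in the cluster:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  ingressClassName: nginx
  rules:
  - host: hello.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world      # placeholder service name
            port:
              number: 8080
EOF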
One option is also to do SSH local port forwarding.
ssh -L <local-port>:<private-ip-on-your-server>:<remote-port> <ip-of-your-server>
So in your case for example:
ssh -L 8888:172.31.8.110:8080 <ip-of-your-ubuntu-server>
Then you can simply open http://localhost:8888 in your browser. No proxy configuration is needed for a plain -L forward (a SOCKS proxy is what you would get with ssh -D instead).

Internal and external reverse proxy network using Kubernetes and Traefik, how?

I am trying to learn Kubernetes and Rancher. Here is what I want to accomplish:
I have a few Docker containers which I want to serve only on my internal network using x.mydomain.com
I have the same as above, but those containers will be accessible from the internet on x.mydomain.com
What I have at the moment is the following:
Rancher server
RancherOS to be used for the cluster and as one node
I have made a cluster, added the node from point 2, and disabled the NGINX controller.
Installed the Traefik app
I have forwarded ports 80 and 443 to my node.
Added a few containers
Added ingress rules
So at the moment it works for the external network: I can browse to app1.mydomain.com from the internet and everything works as it should.
Now my problem is: how can I add the internal network?
Do I create another cluster? Another node on the same host? Should I install two Traefik instances and then use an ingress class for the internal stuff?
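A rough sketch of that ingress-class idea (class names, hostnames, and service names below are placeholders; it assumes two Traefik deployments, one exposed externally and one bound only to the internal IP, each watching its own class):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web1
spec:
  ingressClassName: traefik-external   # handled by the internet-facing Traefik
  rules:
  - host: web1.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web1
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web3
spec:
  ingressClassName: traefik-internal   # handled by the internal-only Traefik
  rules:
  - host: web3.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web3
            port:
              number: 80
EOF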
My idea was to add another IP to the same interface on RancherOS and then add another node on the same host with the other IP, but I can't get it to work. Rancher sees both nodes with the same name and doesn't use the information I give it (I mean --address) when creating the node. Of course, even when I do this, it would require that I set up a DNS server internally so it knows which domains are served internally, but I haven't done that yet, since I can't figure out how to handle the two IPs on the host and use them in two different nodes. I am unsure what is required; maybe I am going down the wrong route.
I would appreciate it if somebody had some ideas.
Update:
I thought I had made clear what I want above. There is no YAML at the moment since I don't know how to do it. In my head what I want is simple. Let me try to boil it down with an example:
I want 2 Docker containers with a web server to be accessible from the internet on web1.mydomain.com and web2.mydomain.com, and at the same time I want 2 Docker containers with a web server that I can access only from the internal network on web3.mydomain.com and web4.mydomain.com.
Additional info:
- I only have one host that will be hosting the services.
- I only have one public IPv4 address.
- I can add an additional IP alias to the one host I have.
- I can configure an internal DNS server if required.
/donnib

Can't get to GCE instance from k8s pods on the same subnet

I have a cluster with container range 10.101.64.0/19 on a network A and subnet SA with range 10.101.0.0/18. On the same subnet, there is a VM in GCE with IP 10.101.0.4, and it can be pinged just fine from within the cluster, e.g. from a node with 10.101.0.3. However, if I go to a pod on this node which got the address 10.101.67.191 (which is expected; this node assigns addresses from 10.101.67.0/24 or so), I don't get a meaningful answer from the VM I want to access from this pod. Using tcpdump on ICMP, I can see that when I ping that VM from the pod, the ping gets there, but I don't receive a reply in the pod. It seems like the VM is just throwing it away.
Any idea how to resolve it? Some routes or firewall rules? I am using the same topology in the default subnet created by Kubernetes, where this works, but I cannot find anything relevant that could explain it (there are some routes and firewall rules which could influence it, but I wasn't successful when trying to mimic them in my subnet).
I think it is a firewall issue.
I've already provided the solution on Stack Overflow here.
It may help solve your case.
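For instance, a firewall rule along these lines would allow traffic from the pod range to reach VMs on the same network (the rule name is a placeholder; the network name and CIDR are taken from the question):
# Allow pod-range traffic (10.101.64.0/19) to reach VMs on network A
gcloud compute firewall-rules create allow-pods-to-vms \
    --network A \
    --direction INGRESS \
    --source-ranges 10.101.64.0/19 \
    --allow icmp,tcp,udp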