Kubernetes: send HTTP request from Pods

I currently have a Kubernetes cluster that analyzes video feeds and sends particular results based on the video. I want to send an HTTP request from my pod from time to time when the requested video needs to be retrieved over the internet. However, all of these requests fail. When I issue a curl command in the container's shell, I receive an error message saying
could not resolve host
I have looked into a few answers, and many of them involve exposing a port from the container running the server to the Kubernetes node and making it publicly available through an external IP, which I have already implemented.
I am still very much a novice in this area, so any guidance is appreciated.

Answered in the comments: kube-dns was unavailable due to resource constraints.
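For anyone hitting the same symptom, a quick way to confirm that cluster DNS is the culprit is to check the DNS pods and run a test lookup. This is a minimal sketch using standard kubectl commands; the busybox image and test pod name are arbitrary choices, not from the original thread.

    # Check that the cluster DNS pods (kube-dns or CoreDNS) are running;
    # resource-starved DNS pods often show OOMKilled or CrashLoopBackOff here.
    kubectl get pods -n kube-system -l k8s-app=kube-dns
    kubectl describe pods -n kube-system -l k8s-app=kube-dns

    # Run a throwaway pod and try to resolve an in-cluster name.
    kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- \
        nslookup kubernetes.default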

Related

Internal API call from one pod to another pod in the same Kubernetes cluster times out

I have two APIs that are publicly exposed, let's say xyz.com/apiA and xyz.com/apiB.
Both APIs run Node and are Dockerized services running as individual pods in the same namespace of a Kubernetes cluster.
Now, apiA calls apiB internally as part of its code logic. The apiA service makes a POST call to apiB and sends a somewhat large payload in the request body. This POST request times out if the payload is more than 30 KB.
We have checked the server logs, and that POST request is never seen.
The error shows a connection timeout to 20.xx.xx.xx, which is the public IP address of xyz.com.
I'm new to Kubernetes and would appreciate your help.
So far I have tried this, but it didn't help.
Please let me know if more information is needed.
Edit: kubectl client and server version is 1.22.0
To update the kind folks who took the time to understand the problem and suggest solutions: the issue was due to bad routing. Internal APIs (apiB in the example above) should not be called using the full domain name xyz.com/apiB; instead, they can be referenced directly through the cluster DNS name of their Service, as in
http://service-name.namespace.svc.cluster.local/apiB
This ensures internal calls are routed through Kubernetes DNS and don't have to go through the load balancer and nginx, greatly improving response time.
Every call made to apiA was creating a domino effect by generating hundreds of calls to apiB and overloading the server, which caused it to fail after only a few thousand requests.
Lesson learned: route all internal calls over the cluster's internal network.
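As a concrete illustration, here is roughly what such an internal call looks like from inside a pod. The service name apib-svc and the namespace default are hypothetical placeholders, not names from the original question.

    # Call apiB through its Service's cluster DNS name instead of xyz.com,
    # keeping the request on the cluster network and off the load balancer.
    curl -X POST \
        -H "Content-Type: application/json" \
        --data @payload.json \
        http://apib-svc.default.svc.cluster.local/apiB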

What are the minimum kubelet implementation requirements?

I’m looking for a breakdown of the minimal requirements for a kubelet implementation. Something like sequence diagrams/descriptions and APIs.
I’m looking to write a minimal kubelet I can run on a reasonably capable microcontroller so that app binaries can be loaded and managed from an existing cluster (the container engine would actually flash to a connected microcontroller and restart). I’ve been looking through the kubelet code and there’s a lot to follow so any starting points would be helpful.
A related question: does a kubelet need to run gRPC, or can it fall back to a RESTful API? (There's no existing gRPC I can run on the micro, but there is nanopb and there are existing HTTPS APIs.)
This probably won't be a full answer; however, there are some details that should help.
First, the related question about using gRPC and/or a REST API.
Based on the kubelet code, there is a server-creation path for handling HTTP requests. Taking this into account, we can conclude that the kubelet receives requests on an HTTPS endpoint.
This is also indirectly supported by the kubelet authentication/authorization documentation, which describes only the HTTPS endpoint.
Moving on to the API itself: it's still not documented properly, so the best way to find information is to look at the code, e.g. the endpoint definitions.
Finally, there is this useful page where a lot of information about the kubelet API has been gathered.
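To give a feel for that HTTPS API, here is a rough sketch of querying a kubelet directly. It assumes the conventional kubelet port 10250 and a service-account token with nodes/proxy permission; the node IP is a placeholder, and your cluster's authorization setup may differ.

    # Hit the kubelet's (largely undocumented) HTTPS endpoint directly.
    # 10250 is the default kubelet port; /pods lists the pods on that node.
    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    curl -sk -H "Authorization: Bearer ${TOKEN}" \
        https://<node-ip>:10250/pods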

Within a Kubernetes cluster, catch outgoing requests from a Pod and redirect them to a different target

I have a cluster with 3 nodes. On each node I have a frontend application running in a pod and a backend application running in a separate pod.
I send data from the frontend application to the backend application; to do this I use a ClusterIP Service and Kubernetes DNS.
I also have a function in my frontend where I send data to a separate service unrelated to my Kubernetes cluster. I send this data using a standard AJAX request to a URL with a payload, i.e. http://my-seperate-service-unrelated-tok8.com.
All of this works correctly and the cluster operates as I want; I have this cluster deployed to GKE.

I now want to run this cluster locally using minikube, which I have been able to do. However, when running locally I do not want to send data to my external service; instead I want to forward it to a new pod I will create, or just not send it at all.


The problem here is that I need a proxy to intercept outgoing network traffic, check whether the outgoing request is the one I am looking for, and if it is, redirect it.
I understand each node in a cluster runs a kube-proxy service, which is used to forward traffic to the relevant Services in the cluster.

I would like to either extend this service or create a new proxy service where I can listen for outgoing traffic to a specific URL and redirect it.

Is this possible to do in a Kubernetes cluster? I assume there is a Service I can create to listen for all outgoing requests and redirect specific requests based on rules I set.

I wasn't sure if Kubernetes clusters have a Service already configured that I can simply add to; that's why I thought of kube-proxy. Would anyone be able to advise on this?

I wanted to add this proxy so I don't have to change my code whether it's run locally in minikube or deployed to GKE.


Any help is greatly appreciated. Thanks!
I made a tool that helps you forward a service to another service, a local port, a service from another cluster, etc.
This way you can keep exactly the same URLs, ports, and code, but the underlying service gets "replaced"; if I understand correctly, this is what you are looking for.
Here is a quick example of a staging service being replaced with my local port 3000.
This is the repository with more info and examples: linker-tool
If you are interested, let me know if you need help or have any questions.
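If you'd rather not run extra tooling, a different technique worth mentioning (not part of the answer above) is to override the external hostname inside the pod itself, assuming the request actually originates server-side in the pod rather than in the user's browser. A hostAliases entry rewrites the pod's /etc/hosts, so the external domain resolves to an IP you choose, such as the ClusterIP of a local stub Service. The deployment name and IP below are hypothetical placeholders.

    # Minimal sketch: in minikube, point the external hostname at a stub Service
    # by editing the pod's /etc/hosts via hostAliases. 10.96.0.50 is assumed to
    # be the ClusterIP of a stub Service you created to receive the traffic.
    kubectl patch deployment frontend --type merge -p '
    spec:
      template:
        spec:
          hostAliases:
          - ip: "10.96.0.50"
            hostnames:
            - "my-seperate-service-unrelated-tok8.com"
    '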

GKE streaming large file download fails with partial response

I have an app hosted on GKE which, among many tasks, serves a zip file to clients. These zip files are constructed on the fly from many individual files on Google Cloud Storage.
The issue I'm facing is that when these zips get particularly large, the connection fails randomly partway through (anywhere between 1.4 GB and 2.5 GB). There doesn't seem to be any pattern with timing either; it can happen anywhere between 2 and 8 minutes in.
AFAIK, the connection is being dropped somewhere between the load balancer and my app. Is the GKE ingress (load balancer) known to close long/large connections?
GKE setup:
HTTP(S) load balancer ingress
NodePort backend service
Deployment (my app)
More details/debugging steps:
I can't reproduce it locally (without Kubernetes).
The load balancer logs statusDetails: "backend_connection_closed_after_partial_response_sent" while the response has a 200 status code. Googling this turned up nothing helpful.
Directly accessing the pod and downloading using kubectl port-forward worked successfully (see the sketch after this list).
My app logs that the request was cancelled (by the requester).
I can verify none of the files are corrupt (they can all be downloaded directly from storage).
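For reference, this is roughly what that port-forward test looks like; the pod name, port, and download path are placeholders for illustration.

    # In one terminal: tunnel straight to the pod, bypassing the load balancer.
    kubectl port-forward pod/my-app-pod 8080:8080
    # In another terminal: download through the tunnel.
    curl -o out.zip http://localhost:8080/download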
I believe your "backend_connection_closed_after_partial_response_sent" issue is caused by the WebSocket connection being killed by the back end prematurely. You can see the documentation on WebSocket proxying in nginx; it explains the nature of this process. In short, by default a WebSocket connection is killed after 10 minutes.
Why does it work when you download the file directly from the pod? Because you're bypassing the load balancer, and the WebSocket connection is kept alive properly. When you proxy a WebSocket, things start to happen, because WebSocket relies on hop-by-hop headers, which are not proxied.
A similar case was discussed here. It was resolved by sending ping frames from the back end to the client.
In my opinion your best shot is to do the same. I've found many cases with similar issues where a WebSocket was proxied, and most of them suggest using pings, because a ping resets the connection timer and keeps the connection alive.
Here's more about pinging the client using WebSocket, and about timeouts.
I work for Google and this is as far as I can help you; if this doesn't resolve your issue, you will have to contact GCP support.
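Separately from the WebSocket angle, one knob that is known to matter for long downloads behind GKE ingress is the backend service timeout, which defaults to 30 seconds and can be raised with a BackendConfig. This is a hedged sketch rather than a confirmed fix for this specific case; the config and service names are placeholders.

    # Raise the GCP load balancer's backend response timeout for long downloads,
    # then attach the BackendConfig to the Service via annotation.
    kubectl apply -f - <<'EOF'
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: long-download-config
    spec:
      timeoutSec: 3600
    EOF
    kubectl annotate service my-app-service \
        cloud.google.com/backend-config='{"default": "long-download-config"}'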

How to get the real IP from the request in a pod in Kubernetes

I need to get the real client IP from the request for my business logic. Currently I get 10.2.100.1 every time in my test environment. Is there any way to do this?
This is the same question as GCE + K8S - Accessing referral IP address and How to read client IP addresses from HTTP requests behind Kubernetes services?.
The answer, copied from them, is that this isn't yet possible in the released versions of Kubernetes.
Services go through kube-proxy, which answers the client connection and proxies through to the backend (your web server). The address that you'd see would be the IP of whichever kube-proxy the connection went through.
Work is actively being done on a solution that uses iptables as the proxy, which will cause your server to see the real client IP.
Try getting the IP of the Service that is associated with those pods.
One very roundabout way right now is to set up an HTTP liveness probe and watch the IP it originates from. Just be sure to also respond to it appropriately, or the kubelet will assume your pod is down.
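For readers on newer Kubernetes versions: the iptables proxy mentioned above has long since shipped, and the usual way to preserve the client source IP today is to set externalTrafficPolicy: Local on the Service. A minimal sketch, with the service name assumed:

    # Preserve the client source IP by keeping traffic on the node it arrived at.
    # Note: this disables cross-node load spreading for this Service.
    kubectl patch service my-service -p '{"spec":{"externalTrafficPolicy":"Local"}}'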