I know you can debug Java microservices with Telepresence, and Scala runs on the JVM, so it shouldn't be a problem, right? But for some reason I couldn't manage it.
I followed this video on YouTube as a Telepresence tutorial.
I intercepted the service I wanted to debug. It gave some errors with Kafka and MariaDB (MySQL), so in IntelliJ I changed their configuration according to their properties on AKS, e.g. host = "kafka.default:9092", because its namespace is default, its name is kafka and its port on AKS is 9092. Both Kafka and MySQL seem to be connected and currently they are not giving any errors.
In IntelliJ I am calling a Boot.scala class which reads the config and builds the project, as far as I understand. When I change this service's port to 80 in the local application.conf (as it is in AKS), it gives me the following error. On any other port it just listens.
I|2022-07-13 15:41:00,893|c.a.libats.http.tracing.Tracing$|Request tracing disabled in config
E|2022-07-13 15:41:01,901|akka.io.TcpListener|Bind failed for TCP channel on endpoint [/0.0.0.0:80]
Also, a fetch process on the website never finishes while I intercept this service with Telepresence. I have breakpoints nearly everywhere, I always run the project in debug mode, and it never hits any of them. So, what exactly is going on? I am also open to pointers to other approaches for remote debugging on AKS.
PS: This project is something I inherited and I don't have any previous Scala experience, so I may be missing something easy.
PS2: Nothing changes when I drop the Telepresence intercept and just run the service locally: same logs, same situation with port 80.
Okay, it works. Just run a Telepresence intercept: it intercepts the pod and routes traffic to your local application instead of the code running in the pod, so you have to set up your configuration accordingly, because your local process takes the place of the code running inside the pod. Don't bind to port 80 locally, even though "kubectl get svc -A" reports the service port as 80 (locally that port is usually privileged or already taken, which is why the bind fails). Just use telepresence intercept nameoftheservice without specifying the port; it will pick one for you, and then change the port accordingly in your local environment configuration.
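A rough sketch of the flow (the service name is a placeholder, and the local port Telepresence reports may differ):

# Connect to the cluster and see which services can be intercepted
telepresence connect
telepresence list

# Intercept without pinning a port; Telepresence reports the port mapping it chose
telepresence intercept nameoftheservice

# Put the reported local port into application.conf, then start the service in
# IntelliJ; traffic hitting the pod is now routed to your local process.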
I'm following a tutorial, How to Deploy a Dockerised Application on AWS ECS With Terraform, and running into a 503 error when trying to hit my App.
The App runs fine in a local Container (http://localhost:3000/contacts), but is unreachable via ECS deployment. When I check the AWS Console, I see health checks are failing, so there's no successful deployment of the App.
I've read through / watched a number of tutorials, and they all have the same configuration as in the tutorial mentioned above. I'm thinking something must have changed on the AWS side, but I can't figure it out.
I've also read a number of 503-related posts here, and tried various things such as opening different ports, and setting SG ingress wide open, but to no avail.
If anyone is interested in troubleshooting, and has a chance, here's a link to my code: https://github.com/CumulusCycles/tf-cloud-demo
Thanks for any insights anyone may have on this!
Cheers,
Rob
Your target group is configured to forward traffic to port 80 on the container. Your container is listening on port 3000. You need to modify your target group to forward traffic to the port your container is actually listening on:
resource "aws_lb_target_group" "target_group" {
name = "target-group"
port = 3000
protocol = "HTTP"
target_type = "ip"
vpc_id = "${aws_default_vpc.default_vpc.id}" # Referencing the default VPC
}
Your load balancer port is the only port external users will see. Your load balancer is listening on port 80 so people can hit it over HTTP without specifying a port. When the load balancer receives traffic on that port it forwards it to the target group. The target group receives traffic and then forwards it to an instance in the target group, on the configured port.
It does seem a bit redundant, but you need to specify the port(s) that your container listens on in the ECS task definition, and then configure that same port again in both the target group configuration, and the ECS service's load balancer configuration. You may even need to configure it again in the target group's health check configuration if the default health checks don't work for your application.
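While troubleshooting, the AWS CLI can show what the target group is actually configured to forward to and whether the targets are passing health checks. This is just a sketch; the target group name and ARN below are placeholders:

# What port and health check is the target group using?
aws elbv2 describe-target-groups --names target-group \
    --query 'TargetGroups[0].[Port,HealthCheckPort,HealthCheckPath]'

# After redeploying, are the ECS tasks passing their health checks?
aws elbv2 describe-target-health --target-group-arn <target-group-arn>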
Note: If you look at the comments on that blog post you linked, you'll see several people saying the same thing about the target group port mapping.
My application is running in a pod container in a Kubernetes cluster. Every time it starts in the container it allocates a random port. I would like to access my application from outside (from another pod or node, for example), but since it allocates a random port I cannot create a Service (NodePort or LoadBalancer) to map the application port to a specific port and make it accessible.
What are the options to handle this case in a kubernetes cluster?
Not supported, check out the issue here. Even with Docker, if your port range is overly broad, you can hit issues as well.
One option to solve this would be to use a dynamically configured sidecar proxy that takes care of routing the pod's traffic from a fixed port to the dynamic port of the application.
This has the upside that the service can be scaled even if all instances have different ports.
But it also has the potentially big downside that every request carries extra overhead due to the additional network hop; even if it's just a few milliseconds, this can have quite an impact depending on how the application works.
Regarding the dynamic configuration of the proxy you could e.g. use shared pod volumes.
Edit:
Containers within a pod are a little bit special: they are able to share predefined volumes and their network namespace.
For shared volumes this means you can define a volume, e.g. of type emptyDir, and mount it in both containers. The result is that you can share information between the containers by writing it into that volume from the first container and reading it in the second.
For networking, this makes communication between the containers of one pod easier because you can use the loopback interface: in your case the sidecar proxy container can reach your service by calling localhost:<dynamic-port>.
For further information take a look at the official docs.
So how is this going to help you?
You can use a proxy like Envoy and configure it to listen for dynamic configuration changes. The source of the dynamic configuration should be a volume shared between your application container and the sidecar proxy container.
After your application has started and allocated its dynamic port, you can automatically generate the configuration for the Envoy proxy and save it to the shared volume, the same source Envoy is watching thanks to the aforementioned configuration.
Envoy itself acts as a reverse proxy: it can listen statically on port 8080 (or any other port you like) and route the incoming traffic to your application's dynamic port by calling localhost:<dynamic-port>.
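To illustrate the mechanism with something much simpler than a full Envoy setup, here is a sketch of a sidecar entrypoint that does the same fixed-to-dynamic forwarding with socat. It assumes socat is available in the sidecar image and that the application writes its chosen port to a file on the shared volume:

#!/bin/sh
# Wait until the application has written its port to the shared volume
while [ ! -s /shared/port ]; do sleep 1; done
APP_PORT=$(cat /shared/port)

# Listen on a fixed port (8080) and forward every connection to the
# dynamically allocated application port over the pod's loopback interface
exec socat TCP-LISTEN:8080,fork,reuseaddr TCP:127.0.0.1:"$APP_PORT"

The Service can then point at the fixed port 8080 regardless of which port the application picked.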
Unfortunately I don't have any predefined configuration for this use case but if you need some inspiration you can take a look at istio - they use something quite similar.
As gohm'c mentioned, there is no built-in way to do this.
As a workaround you can run a script that adjusts the Service:
Start the application and retrieve the chosen random port from it
Modify the Kubernetes Service, load balancer, etc. with the new port
The downside of this is that whenever the port changes there will be a short delay until the Service has been updated.
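A minimal sketch of such a script; the names (my-app, my-app-svc) and the idea that the application writes its port to /tmp/app.port are assumptions to adjust to your setup:

#!/bin/sh
POD=$(kubectl get pods -l app=my-app -o jsonpath='{.items[0].metadata.name}')

# 1. Read the randomly chosen port from inside the pod
PORT=$(kubectl exec "$POD" -- cat /tmp/app.port)

# 2. Point the Service's targetPort at the new port
kubectl patch svc my-app-svc --type merge \
    -p "{\"spec\":{\"ports\":[{\"port\":80,\"targetPort\":$PORT}]}}"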
How about this:
Deploy and start your application in a Kubernetes cluster.
run kubectl exec <pod name here> -- netstat -tulpn | grep "application name" to identify the port number associated with your application.
run kubectl port-forward <pod name here> <port you like>:<random application port> to access the pod of your application.
Would this work for you?
Edit: this is very similar to @Lukas Eichler's response.
I discovered a strange behavior in K8s networking that can completely break some application designs.
I have two pods and one Service
Pod 1 is a stupid Reverse Proxy (I don't know the implementation)
Pod 2 is a Webserver
The mentioned Service belongs to pod 2, the webserver
After the initial start of my stack I discovered that Pod 1, the reverse proxy, is not able to reach the webserver on the first attempt for some reason; ping works fine and so does curl.
Now I tried wget mywebserver inside of Pod 1 (the reverse proxy) and got back the following:
wget mywebserver
--2020-11-16 20:07:37-- http://mywebserver/
Resolving mywebserver (mywebserver)... 10.244.0.34, 10.244.0.152, 10.244.1.125, ...
Connecting to mywebserver (mywebserver)|10.244.0.34|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.0.152|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.1.125|:80... failed: Connection refused.
Connecting to mywebserver (mywebserver)|10.244.2.177|:80... connected.
Where 10.244.2.177 is the Pod IP of the Webserver.
The problem, it seems, is that the reverse proxy does not retry forwarding the packet: it only tries once, fails just like in the wget output above, and the request gets dropped because the backend is not reachable due to fancy K8s iptables stuff.
If I configure the reverse proxy to use the pod IP (10.244.2.177) directly instead of the Service DNS name, everything works fine and as expected.
I already tried this with a variety of CNI providers (Flannel, Calico, Canal, Weave and also Cilium, where kube-proxy is not even used), but all of them failed, and all of them do fancy routing that nobody clearly understands out of the box. So my question is: how can I make K8s routing work immediately at this point? I even reimplemented my whole stack on docker-swarm just to see if it works, and it does, flawlessly! So this issue seems to be tied to the K8s routing scheme.
Just to exclude misconfiguration on my side, I also tried this with different ready-to-use K8s offerings, like managed K8s from DigitalOcean and self-hosted RKE. All show the same behavior.
Does somebody have an idea what the problem might be and how to fix this behavior of K8s?
It might also be very useful to know what actually happens with the wget request, as this remains a mystery to me.
Many thanks in advance!
It turned out that I had several misconfigurations in my K8s deployment.
I first removed ClusterIP: None, as this leads to the behavior wget shows above in my question: with a headless Service the DNS name resolves directly to the pod IPs of whatever the selector matches instead of a single service IP, which is why wget walked through several addresses. Besides that, I had set the app: and tier: labels wrong in my deployment. Anyway, now everything works fine and wget gets a proper connection.
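For reference, a minimal non-headless Service might look roughly like this; names and labels are placeholders, the important parts being the absence of ClusterIP: None and a selector that actually matches the webserver pod labels:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mywebserver
spec:
  selector:
    app: mywebserver   # must match the webserver pod template labels exactly
    tier: backend
  ports:
    - port: 80         # port the reverse proxy reaches via the service name
      targetPort: 8080 # port the webserver container actually listens on
EOF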
Thanks again
I was using a NodePort to host a webapp on Google Container Engine (GKE). It lets you point your domains directly at the node IP address instead of an expensive Google load balancer. Unfortunately, instances are created with HTTP ports blocked by default, and an update locked down manually changing the nodes, as they are now created from an Instance Group and an immutable Instance Template.
I need to open port 443 on my nodes. How do I do that with Kubernetes or GCE, preferably in an update-resistant way?
Related github question: https://github.com/nginxinc/kubernetes-ingress/issues/502
Using port 443 on your Kubernetes nodes is not standard practice. If you look at the docs you can see the kube-apiserver option --service-node-port-range, which defaults to 30000-32767. You could change it to 443-32767 or something. Note that every port under 1024 is restricted to root.
In summary, it's not a good idea/practice to run your Kubernetes services on port 443. A more typical scenario would be an external nginx/haproxy proxy that sends traffic to the NodePorts of your service. The other option you mentioned is using a cloud load balancer but you'd like to avoid that due to costs.
Update: a DaemonSet with a NodePort can handle the port opening for you. nginx/k8s-ingress has a NodePort on 443 which gets exposed by a custom firewall rule. The GCE UI will not show "Allow HTTPS traffic" as checked, because it's not using the default rule.
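For example, such a custom firewall rule could be created like this (the rule name, target tag and source range are assumptions to adapt to your cluster):

gcloud compute firewall-rules create allow-https-nodes \
    --network=default \
    --allow=tcp:443 \
    --target-tags=https-server \
    --source-ranges=0.0.0.0/0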
You can do everything you do in the GUI Google Cloud Console using the Cloud SDK, most easily through the Google Cloud Shell. Here is the command for adding a network tag to a running instance. This works even though the GUI disabled the ability to do so:
gcloud compute instances add-tags gke-clusty-pool-0-7696af58-52nf --zone=us-central1-b --tags https-server,http-server
This also works on the beta, meaning it should continue to work for a bit.
See https://cloud.google.com/sdk/docs/scripting-gcloud for examples on how to automate this. Perhaps consider running it from a webhook when downtime is detected. Obviously none of this is ideal.
Alternatively, you can change the templates themselves. With this method you can also add a startup script to new nodes, which allows you to do things like fire a webhook with the new IP address for round-robin, low-downtime dynamic DNS.
Source (he had the opposite problem, his problem is our solution): https://stackoverflow.com/a/51866195/370238
If I understand correctly, if nodes can be destroyed and recreated on their own, how can you rest assured that a certain service behind a port is reliably available in production without some sort of load balancer that takes care of routing and diverts port traffic to the new node(s)?
I'm running a small OpenShift cluster and would like to provide our developers with a hosted instance of Mongo on it, which they connect to externally.
Which is easy enough, I thought. Sadly it still looks like all traffic has to go through HAProxy and is limited to HTTP/HTTPS, but my developers need transparent access to the Mongo port 27017.
Is there some way to expose the internal pod port to the outside world, without knowing which pod it runs on?
Right now our dirty workaround is:
oc port-forward mongodb-1-2n1ov 27017:27017
and then the client does an SSH forward from their machine to this.
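Roughly, the client side of that workaround looks like this (the user and hostname of the machine running oc port-forward are placeholders):

# Tunnel the forwarded port from the machine running "oc port-forward"
ssh -N -L 27017:localhost:27017 user@port-forward-host

# The developer can then connect as if Mongo were local
mongo --host localhost --port 27017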
Instead we would rather have an automated solution that allows TCP forwarding for virtually defined hostnames.
Could anyone point me in the right direction, please?
You are right. We had a similar issue, and the only other way we thought of was to update the serviceCIDR so it was routable within our network. We did not go that route, though. HAProxy only handles HTTP/HTTPS, while Services do support TCP/UDP; MongoDB on 27017 is plain TCP, not HTTP.
I too would like to know more about this if anyone else can share.