Marathon service port uniqueness

While testing Marathon application/group deployments, I have observed that if I try to deploy an application specifying a service port that has already been assigned to another app, the Marathon /v2/apps endpoint rejects the request, as expected:
{"message":"Requested service port 8306 conflicts with a service port in app /dbaas01/mysql"}
Yet, it seems that service port uniqueness is not checked when submitting the deployment of an application group. I was able to deploy the same application group twice (changing only the root group name) while using the same service ports for the applications.
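For reference, a minimal sketch of the kind of group definition involved (POSTed to /v2/groups; the app command, resource values, and IDs here are illustrative, and the exact fields depend on the Marathon version):
{
  "id": "/dbaas02",
  "apps": [
    {
      "id": "mysql",
      "cmd": "run-mysql.sh",
      "cpus": 1,
      "mem": 1024,
      "instances": 1,
      "ports": [8306]
    }
  ]
}
Submitting the same group again under a different root ID with the same "ports" value is accepted by /v2/groups, whereas the same service port submitted through /v2/apps is rejected as shown above.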
Of course, this creates an issue with the haproxy-marathon-bridge: the load balancer configuration is modified so that the same port points to different services:
listen dbaas01_mysql-8306
  bind 0.0.0.0:8306
  mode tcp
  option tcplog
  balance leastconn
  server dbaas01_mysql-1 172.30.15.84:31841 check

listen dbaas02_mysql-8306
  bind 0.0.0.0:8306
  mode tcp
  option tcplog
  balance leastconn
  server dbaas02_mysql-1 172.30.15.85:31075 check
Is this the expected behavior? Why is the service port uniqueness check not performed on applications deployed through the /v2/groups endpoint?
Thank you in advance for any feedback.
Best regards,
Marica

Related

Communication failure between Zabbix Proxy and Server on port 10051 in a k8s cluster with HAProxy

I have a communication problem between Zabbix Proxy and Zabbix Server on port 10051. I'm using HAProxy version 2.0.13. Here is my Kubernetes scenario:
HAProxy works fine when I access my website zabbix.domain.com on ports 80 and 443.
Zabbix Proxy has a parameter "Server" that I set to the IP address of worker-1, and the communication works fine, but that is because the traffic doesn't pass through the HAProxy server. When I set the Server parameter to my domain address zabbix.domain.com, which goes to my HAProxy server, the communication doesn't work, giving the impression that HAProxy can't handle the request.
zabbix_proxy.conf: works with the worker-1 IP address, but doesn't work with the domain name.
As I said, the domain name points to the HAProxy server (10.0.0.110). I think the Zabbix Proxy is trying to reach port 10051 on the HAProxy server, and HAProxy can't forward the requests to my worker nodes.
This is my HAProxy configuration. I tested with frontend and backend sections, but for now I have just rewritten it with a listen section.
listen zabbix
  mode tcp
  bind :10051
  # note: option forwardfor applies only to HTTP mode and has no effect in TCP mode
  option forwardfor
  server worker-1 10.10.10.112:10051 check
  server worker-2 10.10.10.113:10051 check
  server worker-3 10.10.10.114:10051 check
Can someone help? Is there some way to point to my website zabbix.domain.com and have HAProxy forward the requests to my worker nodes on port 10051? Please tell me if you need more information.

GWT with HTTP load balancer gives invalid SID value

I have 2 Openfire servers with an Elastic Load Balancer in front of them, and I built a GWT application that uses HTTP bind on port 7070.
When connecting directly to one server it works fine, but when it connects to the load balancer on port 7070 it does not work and returns a 404 error with an invalid SID value.
Note:
When the load balancer works in TCP mode it works fine, but in HTTP mode it does not, and I need to set up sticky sessions for it.
That's because once a BOSH session is established on one machine, it's tied to that machine. Without sticky sessions enabled on the ELB, subsequent requests from the client can be routed to the second server, where there is no BOSH session that matches the request, which in turn results in an invalid SID (because the SID doesn't exist on the other machine).
An alternative solution would be (if the machines also exposed public IPs) to return the "host" information in the BOSH response, so the client could use that information and make subsequent requests to the correct machine. But if that's not possible, you have to use sticky sessions.
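For illustration, duration-based stickiness on a classic ELB can be enabled roughly like this with the AWS CLI (a sketch; the load balancer name, policy name, and listener port are assumptions, and stickiness only applies to HTTP/HTTPS listeners):
aws elb create-lb-cookie-stickiness-policy \
    --load-balancer-name my-openfire-elb \
    --policy-name bosh-sticky \
    --cookie-expiration-period 3600
aws elb set-load-balancer-policies-of-listener \
    --load-balancer-name my-openfire-elb \
    --load-balancer-port 7070 \
    --policy-names bosh-sticky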

How to connect two applications running within Kubernetes

I have an application running on my own server with Kubernetes. This application is supposed to work as a gateway and has a LoadBalancer service, which exposes it to "the world". Now I'd like to connect this application with other applications running within the very same Kubernetes cluster, so they can exchange HTTP requests with each other.
So let's say my gateway app runs on port 9000 and the app I'd like to call runs on 9001. When I run curl my_cluster_ip:9001 I get a response. Nevertheless, I never know what the cluster IP will be, so I can't hard-code it into my gateway app.
The use case is: type url_of_my_server:9000 into the web browser -> this calls the gateway -> it sends an HTTP request to the other app running in the cluster on port 9001 -> response goes back to the gateway -> response goes back to the user.
Where does the magic have to happen, and how can I easily make these two apps talk to each other, while only one is exposed to "the world" and the other is accessible only from within the cluster?
You can expose your app on port 9001 as a service (let's say myservice).
When you do that, myservice.<namespace>.svc.cluster.local will resolve to the IP address of your app. More info on DNS here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
And then you can access your app within Kubernetes cluster as:
http://myservice.<namespace>.svc.cluster.local:9001
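For illustration, a minimal Service manifest for this could look like the following (a sketch; the service name, namespace, and the app: backend label are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: default
spec:
  selector:
    app: backend        # label on the pods of the app listening on 9001 (assumed)
  ports:
    - port: 9001        # port the service exposes inside the cluster
      targetPort: 9001  # port the container listens on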
You have a couple of options for internal service discovery:
You can use the cluster-internal DNS service to find the other application, as detailed in the answer by bits.
If both the proxy and the app run in the same namespace, there are environment variables that expose the service IP and ports (see the sketch after this list). This may mean you have to restart the proxy if you remove and re-add the other application, as the ports may change.
You can run both apps as two different containers in the same pod; this ensures they get scheduled on the same host, which lets them communicate over localhost.
Also note that support for your HTTP proxy setup already exists in Kubernetes; take a look at Ingress and Ingress Controllers.
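As a rough sketch of the environment-variable option (assuming a service named myservice that already existed when the gateway pod started; Kubernetes injects these variables automatically):
# inside the gateway container
curl "http://${MYSERVICE_SERVICE_HOST}:${MYSERVICE_SERVICE_PORT}/"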

How can I get my services to register with a specific port in Eureka?

My Setup
I have some services that register with Eureka. This registration info is used by Zuul to route requests to my services. Most of these services run on a port like 9999 or 8080. Each service is on its own EC2 instance, and I have Nginx routing requests from port 80 to the server's port, so that I can keep my Security Group rules simple.
My Problem
When my service registers with Eureka, it gets registered with ${server.port}, which ends up being 8080 or 9999, etc. When Zuul attempts to route to {ec2host}:8080, it gets blocked by my Security Group rules. Based on the documentation, it looks like I should be able to specify a host and port with eureka.instance.hostname and eureka.instance.nonSecurePort. Whether I use those properties or not, my service registers with its specific port.
Is there a way to get the Eureka client to register my service with port 80, instead of the server's port?
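For reference, the properties mentioned above would look roughly like this in application.yml (a sketch; the hostname value is a placeholder, and whether the registered port actually changes depends on the Spring Cloud Netflix version in use):
eureka:
  instance:
    hostname: service1.example.com   # placeholder for the host Zuul should reach
    nonSecurePort: 80                # the port Nginx listens on, instead of ${server.port}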

Which ports does Secure Gateway Client use?

I plan to set up the Secure Gateway Client in a DMZ in my on-premises environment, so I need to open outbound ports for the SG Client to connect to the SG on Bluemix. The following question is similar to mine, but the answer doesn't show the needed ports.
For the Bluemix Secure Gateway service, how does the data center's network need to be configured?
The following Bluemix doc shows that outbound 443 is needed:
https://www.ng.bluemix.net/docs/troubleshoot/SecureGateway/ts_index-gentopic1.html#ts_sg_006
What are the best practices for running the Secure Gateway client?
Before you install the Docker client into your environment, ensure that both the internet and your on-premises assets are accessible and all host names are resolvable by DNS. The client uses outbound port 443 to connect to the IBM Bluemix environment; normally this port is open since it's secure. Ensure you check or modify additional firewall and IP table rules that might apply.
But the tcpdump I captured when I executed "docker run -it ibmcom/secure-gateway-client XXXX" showed that the SG Client used outbound 443 and 9000. Is it correct that the only ports the SG Client uses are outbound 443 and 9000?
Correct. If you are closing down both outbound and inbound ports with your firewall, then for outbound, allow ports 443 and 9000. So your initial assertion is correct.
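As a rough sketch of the corresponding outbound firewall rules (assuming an iptables-based firewall on the DMZ host with a default-deny OUTPUT policy; adjust to your environment):
# allow the Secure Gateway client to reach Bluemix on 443 and 9000
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 9000 -j ACCEPT
# allow established return traffic back in
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT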