How can I use WSO2 behind HAProxy (docker-compose)?

I use the dockerized WSO2 from https://github.com/wso2/docker-apim.
I want to use API Manager behind HAProxy.
My config is:
frontend app
    bind *:443 ssl crt /etc/ssl/wso2.pem
    default_backend wso2

backend wso2
    server node1 api-manager:9443 check ssl verify none
But with this config, when I open https://127.0.0.1/ in the browser, it redirects to https://127.0.0.1:9443/publisher/.
How can I fix it?

You can set the proxy ports in catalina-server.xml, located under repository/conf/tomcat [1]. For the 9443 connector you can set the proxy port to 443 and restart the server.
[1] - https://docs.wso2.com/display/Carbon440/Adding+a+Custom+Proxy+Path
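For reference, the relevant HTTPS connector in repository/conf/tomcat/catalina-server.xml would end up looking roughly like the sketch below; the attributes are trimmed for brevity, and proxyPort is the only addition, the rest should match what the distribution already ships.

    <!-- trimmed sketch of the 9443 connector: adding proxyPort="443" makes the
         server generate redirects against the HAProxy port instead of 9443 -->
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
               port="9443"
               proxyPort="443"
               scheme="https"
               secure="true"
               sslProtocol="TLS"/>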

Related

How to configure haproxy-ingress for serving GRPC

Has anyone been successful in configuring the HAProxy ingress controller to serve a gRPC server in the backend?
gRPC Client ----> Ingress ----> gRPC Server (k8s Service) --> gRPC Server (Pod)
I tried configuring it as per the documentation here (https://www.haproxy.com/blog/haproxy-1-9-2-adds-grpc-support/ and https://haproxy-ingress.github.io/docs/configuration/keys/#backend-protocol), but it is not working as expected. I wanted to check whether I have missed some configuration.
gRPC works on top of h2, and for compatibility reasons the client and server need to agree on the HTTP protocol version they want to speak. In HAProxy this is done using the alpn keyword on the bind line, which only works on TLS connections. By default HAProxy Ingress configures alpn with h2,http1.1, allowing h2 and gRPC out of the box on the client side, but only on HTTPS connections.
If you're using plain HTTP, the client and server don't have a way to agree on a protocol, and the default version used is HTTP/1. You can override this behavior by configuring bind-http with :80 proto h2, but this will likely break HTTP/1 clients.
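For the plain-HTTP case, a minimal sketch of that bind-http override in the controller's ConfigMap could look like the following; the ConfigMap name and namespace are assumptions and depend on how the controller was deployed.

    # hypothetical ConfigMap for the haproxy-ingress controller; only the
    # bind-http key matters here, and forcing h2 on :80 will likely break
    # plain HTTP/1 clients
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: haproxy-ingress
      namespace: ingress-controller
    data:
      bind-http: ":80 proto h2"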

Encrypt & Decrypt data between Kubernetes API Server and Client

I have two Kubernetes clusters set up with kubeadm, and I'm using HAProxy to redirect and load-balance traffic to the different clusters. Now I want to redirect requests to the respective API server of each cluster.
Therefore, I need to decrypt the SSL requests, read the "Host" HTTP header, and re-encrypt the traffic. My example HAProxy config file looks like this:
frontend k8s-api-server
    bind *:6443 ssl crt /root/k8s/ssl/apiserver.pem
    mode http
    default_backend k8s-prod-master-api-server

backend k8s-prod-master-api-server
    mode http
    option forwardfor
    server master 10.0.0.2:6443 ssl ca-file /root/k8s/ssl/ca.crt
If I now access the api server via kubectl, I get the following errors:
kubectl get pods
error: the server doesn't have a resource type "pods"
kubectl get nodes
error: the server doesn't have a resource type "nodes"
I think I'm using the wrong certificates for decryption and encryption.
Do I need to use the apiserver.crt, apiserver.key and ca.crt files from the directory /etc/kubernetes/pki?
Your setup probably entails authenticating with your Kubernetes API server via client certificates; when your HAProxy reinitiates the connection it is not doing so with the client key and certificate on your local machine, and it's likely making an unauthenticated request. As such, it probably doesn't have permission to know about the pod and node resources.
An alternative is to proxy at L4 by reading the SNI header and forwarding traffic that way. This way, you don't need to read any HTTP headers, and thus you don't need to decrypt and re-encrypt the traffic. This is possible to do with HAProxy.
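A minimal sketch of that L4 approach is below; the hostnames and the second backend are placeholders (only the 10.0.0.2 master comes from your config). Routing on the TLS SNI value in tcp mode lets the kubectl client's own certificate reach the API server untouched.

frontend k8s-api-servers
    bind *:6443
    mode tcp
    # wait for the TLS ClientHello so the SNI value is available
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend k8s-prod if { req_ssl_sni -i prod-api.example.com }
    use_backend k8s-dev  if { req_ssl_sni -i dev-api.example.com }

backend k8s-prod
    mode tcp
    server master 10.0.0.2:6443 check

backend k8s-dev
    mode tcp
    server master 10.0.1.2:6443 check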

Haproxy Backend module uses IP instead of domain name

I am trying to use a domain name in an HAProxy backend configuration:
backend prod_auth
    balance leastconn
    option httpchk GET /auth-service/generalhealthcheck
    http-check expect string "Service is up and reachable"
    server auth-service-1 domain-service.com:8080 check
But HAProxy uses the IP of domain-service.com (10.1.122.83) instead of the domain name itself to do the health check. This is an issue because my service works on the domain name, not on the IP:
root@ram:~$ curl http://domain-service.com/auth-service/generalhealthcheck
["Service is up and reachable"]
root@ram:~$ curl http://10.1.122.83/auth-service/generalhealthcheck
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
I cannot make my service work on the IP, as there are multiple other services running on the same server that use different domain names (rDNS).
I don't know why HAProxy is using the IP instead of the domain name; I verified it using Wireshark. Is there any way I can force HAProxy to use the domain name mentioned in the backend?
I tried the following in /etc/hosts, but that does not work:
domain-service.com domain-service.com
I am using HA-Proxy version 1.7.8 2017/07/07
You can customize the Host header sent to your domain using the same option httpchk directive, as follows:
backend domain-service-backend
    mode http
    option httpchk GET /get HTTP/1.1\r\nHost:\ domain-service.com
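Putting that together with the server line from the question, a complete backend could look roughly like this (path, expected string and server address are taken from the question):

backend prod_auth
    mode http
    balance leastconn
    # send the health check with an explicit Host header so the
    # name-based virtual host answers instead of the bare IP
    option httpchk GET /auth-service/generalhealthcheck HTTP/1.1\r\nHost:\ domain-service.com
    http-check expect string "Service is up and reachable"
    server auth-service-1 domain-service.com:8080 check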

HAProxy: connect with the client using a public SSL cert and connect to the server with insecure SSL

My scenario is essentially SSL/TLS bridging (re-encryption), but my backend server has a private cert that I don't have access to, so HAProxy needs to connect to the backend in insecure mode. How can I achieve this configuration?
Solved using:
frontend icinga
    bind *:5665 ssl no-sslv3 no-tlsv10 crt /etc/ssl/private/haproxy/haproxy-com.pem
    mode tcp
    default_backend sss

backend sss
    mode tcp
    server name url:port ssl verify none

Can I access a specific machine behind an HAProxy load balancer for health checks?

I have a website served by several machines behind an HAProxy load balancer. It uses sticky sessions based on cookies. I'm using UptimeRobot to check the machines' health, but I cannot configure it to use cookies, and I don't want to open the individual machines to the internet; the load balancer should be the only entry point.
Is there a way to configure the load balancer to access the machines by a URL parameter?
There is a way, but it isn't advisable. In the config, create duplicate backend blocks and url_param-based ACLs to route requests to specific servers based on a URL parameter.
Example:
frontend fe
    bind x.x.x.x:80
    bind x.x.x.x:443 ssl crt.. blah
    ...
    default_backend www
    acl is_healthchk_s1 url_param(CHECK) -i s1
    acl is_healthchk_s2 url_param(CHECK) -i s2
    acl is_healthchk_sn url_param(CHECK) -i sn
    use_backend be_healthchk_s1 if is_healthchk_s1
    use_backend be_healthchk_s2 if is_healthchk_s2
    use_backend be_healthchk_sn if is_healthchk_sn

backend www
    server s1 x.x.x.x:8080 check
    server s2 x.x.x.x:8080 check
    server sn x.x.x.x:8080 check

backend be_healthchk_s1
    server s1 x.x.x.x:8080 check

backend be_healthchk_s2
    server s2 x.x.x.x:8080 check

backend be_healthchk_sn
    server sn x.x.x.x:8080 check
So, your uptime robot can check these instead:
domain.com/?CHECK=s1
domain.com/?CHECK=s2
domain.com/?CHECK=sn