When I run a curl command against the server hosting OpenAM, it works fine as long as I use the FQDN:
curl https://<fqdn>:port/openam/XUI/#login/
However if I call it by IP address:
curl https://<ipaddress>:port/openam/XUI/#login/
it returns:
{"code":400,"reason":"Bad Request","message":"FQDN \"<ipaddress>\" is not valid."}
I set up /etc/hosts to include a line mapping <ipaddress> to <fqdn>, but still no luck. Any suggestions?
OpenAM uses cookies for SSO tokens, and its cookie domain is tied to an FQDN. You cannot use an IP address.
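If editing /etc/hosts is not convenient, curl's --resolve option can pin the FQDN to a given IP for a single request; the hostname, port, and address below are placeholders for the asker's actual values:

```shell
# Pin the FQDN to the server's IP for this one request, so the URL and
# Host header carry the FQDN that OpenAM expects (all values illustrative):
curl --resolve openam.example.com:8443:10.0.0.5 \
     "https://openam.example.com:8443/openam/XUI/#login/"
```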
I have a microservice architecture (implemented in Spring Boot) deployed on Google Kubernetes Engine. For it I have set up the following:
domain: comanddev.tk (free domain from Freenom)
a certificate for this domain
the following Ingress config:
The problem is that when I invoke a URL that I know should be working, https://comanddev.tk/customer-service/actuator/health, the response I get is ERR_TIMEDOUT. I checked the Ingress controller and it does not receive any requests, although URL forwarding is set.
Update: I tried to set a "glue record" as in the following picture, and the response I get is that the certificate is not valid (I have a certificate for comanddev.tk, not dev.comanddev.tk), and I get a 401 after agreeing to access the insecure URL.
I've dug a bit into this.
As I mentioned, when you run $ curl -IL http://comanddev.tk/customer-service/actuator/health you receive the nginx ingress response.
As the forwarder intercepts the request and redirects it to the destination server, I am not sure there is a point in using TLS.
I would suggest using a nameserver (a DNS record pointing at the IP of your Ingress) instead of URL forwarding. That way requests go straight to your Ingress. With URL forwarding you are relying on Freenom's redirection, and I am not sure how it is handled on their side.
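As a sketch of that suggestion (resource names are assumptions, adjust to your cluster), you can read the Ingress controller's external IP and point a DNS A record at it instead of using Freenom's URL forwarding:

```shell
# External IP assigned to the nginx ingress controller's LoadBalancer
# service (namespace and service name vary by installation):
kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Then create an A record for comanddev.tk pointing at that IP in
# Freenom's DNS management panel; requests reach the Ingress directly
# and the certificate for comanddev.tk is served as expected.
```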
I am trying to use a domain name in a HAProxy backend configuration:
backend prod_auth
balance leastconn
option httpchk GET /auth-service/generalhealthcheck
http-check expect string "Service is up and reachable"
server auth-service-1 domain-service.com:8080 check
but HAProxy uses the IP (10.1.122.83) of domain-service.com instead of the domain name itself to do the health check. This is a problem because my service answers on the domain name, not on the IP:
root@ram:~$ curl http://domain-service.com/auth-service/generalhealthcheck
["Service is up and reachable"]
root@ram:~$ curl http://10.1.122.83/auth-service/generalhealthcheck
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
I cannot make my service answer on the bare IP, as multiple other services run on the same server under different domain names (rDNS).
I don't know why HAProxy is using the IP instead of the domain name; I verified this with Wireshark. Is there any way I can force HAProxy to use the domain name mentioned in the backend module?
I tried adding this to /etc/hosts, but that does not work:
domain-service.com domain-service.com
I am using HA-Proxy version 1.7.8 2017/07/07
You can customize the Host header sent with the health check using the same option httpchk, as follows:
backend domain-service-backend
mode http
option httpchk GET /get HTTP/1.1\r\nHost:\ domain-service.com
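You can verify what the health check will now look like by hitting the backend by IP while presenting the domain in the Host header (IP and path taken from the question):

```shell
# Same request shape HAProxy will make: TCP connection to the IP,
# but with the Host header the virtual host expects:
curl -H "Host: domain-service.com" \
     http://10.1.122.83/auth-service/generalhealthcheck
```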
I'm having a problem setting up SSO for intranet websites. Currently I'm working with Tomcat 8.0 and Waffle 1.8.4. They work great, but there is one problem: the browsers (Firefox and IE, after the settings were applied as described here and here) cannot do Kerberos authentication, only NTLM.
I analyzed the traffic with Wireshark: Tomcat sends the HTTP header field "WWW-Authenticate" and the browser answers with a base64-encoded string in the "Authorization" header field that contains NTLMSSP. I guess this is not Kerberos, or is it?
I read a post (WWW-Authenticate uses NTLM and not Kerberos), that for Kerberos to work, the server has to be registered in the AD with the command setspn.exe.
I tried to get the right syntax for setspn (described here), but without any luck.
The server has the following parameters:
IP: 10.0.0.1
Service: Tomcat-Http
Port: 8080
Accountname: company-net\foobar
I use this command for setspn:
setspn -A "HTTP/10.0.0.1:8080 company-net\foobar"
but it does not work. Both the server and the client are in the same Windows domain, running Windows 10.
What is wrong with it?
Do I need anything else?
Kerberos relies on DNS (valid hostnames) and SPNs to function. It looks like you've done a bit of research so far, which is good. What isn't that well known is that when you point an otherwise perfectly working Kerberos client at the IP of a host, rather than at its DNS hostname, Kerberos will be bypassed and the fallback authentication mechanism, NTLM in this case, will be used instead.
Michael-O, the top Kerberos contributor on this forum, said it best in his answer about this back in 2012:
Kerberos does not work with IP addresses; it relies on domain names and
correct DNS entries only.
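Putting that together for the setup in the question: register the SPN against the server's DNS hostname, not its IP, and browse to it by that hostname. A sketch, run in an elevated PowerShell prompt, assuming the machine's DNS name is tomcat01.company.net (a placeholder):

```shell
# -S checks for duplicate SPNs before adding (safer than -A).
# Browsers request a ticket for HTTP/<hostname> regardless of the port,
# so the port is normally left out of the SPN.
setspn -S HTTP/tomcat01.company.net company-net\foobar

# Then open http://tomcat01.company.net:8080/ (not http://10.0.0.1:8080/)
# so the browser negotiates Kerberos instead of falling back to NTLM.
```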
I am trying to call an endpoint on a locally hosted Tomcat server using Postman. However, when I make the call to:
localhost:9000/api/postMethod
it redirects my request through the corporate proxy.
I don't see any options in Postman and I have tried removing the proxy settings in the internet options, but the problem persists.
When I look at the Postman Console, I see the below. Why is it sending proxy-authorization and proxy-connection to localhost?
Request Headers:
cache-control:"no-cache"
Postman-Token:"{token}"
User-Agent:"PostmanRuntime/3.0.11-hotfix.2"
Accept:"*/*"
accept-encoding:"gzip, deflate"
proxy-authorization:"Basic {auth}"
referer:"http://localhost:9000/api/postMethod"
Response Headers:
cache-control:"no-cache"
connection:"Keep-Alive"
content-length:"8063"
content-type:"text/html; charset=utf-8"
pragma:"no-cache"
proxy-connection:"Keep-Alive"
I had my http_proxy system environment variable set. It started working once I removed it and restarted Postman.
You must set the following environment variables (I do it in my .bashrc file):
$ env | grep PROXY
HTTP_PROXY=myProxy:9999
HTTPS_PROXY=myProxy:9999
NO_PROXY=localhost,127.0.0.1
Then tell Postman to use the system proxy in the Proxy settings section. It's just annoying because you can set your proxy in the settings, but you can't specify no-proxy hosts there.
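For reference, this is the shape of the .bashrc fragment being described (the proxy host and port are placeholders):

```shell
# ~/.bashrc - route external traffic through the corporate proxy,
# but keep local addresses direct so Postman/curl can reach localhost:
export HTTP_PROXY=myProxy:9999
export HTTPS_PROXY=myProxy:9999
export NO_PROXY=localhost,127.0.0.1
```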
If you're stuck behind a corporate proxy or similar and you're using an automatic proxy configuration that won't allow you to connect to localhost addresses, this is what helped me:
Try using your IP address instead of localhost as a host name in your client application, e.g. http://127.0.0.1/api/v1/...
I want to redirect all my browser request to abc.com when a request is sent to xyz.com
I was able to do this by adding an entry in the hosts file under windows.
However, I see that I can get to http://abc.com when I type in http://xyz.com:8080,
but I cannot get the same redirection over HTTPS.
I found out that you cannot specify ports in the hosts file.
I need some help with this.
HTTPS is specifically designed so that you can't do this - not only is one of the core points of SSL/TLS that the conversation be encrypted, it also ensures that you really are talking to who you think you are, that you haven't been redirected to a fake site via DNS.
That's not what the hosts file is for. It's about the hosts that you are referring to. abc.com and xyz.com are hosts.
All the hosts file does is associate a host name with an IP address. Nothing else is possible.
Get a clone of the part you need from the genuine site, put it on a local IIS, add an SSL binding using a self-signed certificate (see http://www.selfsignedcertificate.com), and add an entry to the hosts file. If you are in a rush with no time to play with IIS Manager, use appcmd.
You'll get a "not verified" warning for the untrusted issuer; add the certificate to the Trusted Root Certification Authorities store. http://www.robbagby.com/iis/self-signed-certificates-on-iis-7-the-easy-way-and-the-most-effective-way/
I've never tried a self-signed cert for this myself, though. Let us know how your testing goes.
The hosts file is a local DNS override: it resolves a domain name to an IP address, which has nothing to do with ports.
If you redirect from https://abc.com to https://xyz.com, then they will need to be different servers with different certificates, as an SSL certificate is bound to the domain name.
This means that if you use your hosts file to look up abc.com's IP address when you try https://xyz.com, it won't work: the certificate will be for abc.com and won't match the host name xyz.com sent by your browser.
If you are using the Windows command line for routing:
netsh interface portproxy add v4tov4 listenport=listen_port listenaddress=any_free_ip_address connectport=localhost_port connectaddress=127.0.0.1
The default port for HTTP requests is 80; if you are using HTTPS, use 443, as that is the default for HTTPS.
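A concrete sketch with example values (the ports and addresses are illustrative):

```shell
# Listen on 443 on all interfaces and forward to a local service on 8080:
netsh interface portproxy add v4tov4 listenport=443 listenaddress=0.0.0.0 connectport=8080 connectaddress=127.0.0.1

# Inspect the active forwardings:
netsh interface portproxy show v4tov4
```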
With HTTPS, it'll be down to the security certificate; most likely you can't get around that, or at least I hope not.
Putting an entry in your hosts file only associates a human-readable host name with an IP address; the rest happens in the application that makes HTTP requests.
Parts of a URI (diagram on Wikipedia):
https://upload.wikimedia.org/wikipedia/commons/thumb/d/d6/URI_syntax_diagram.svg/1068px-URI_syntax_diagram.svg.png
Whenever an application, say your browser, makes a request for a resource, it turns what you type into the address bar into a proper URI, which includes the scheme.
If you don't type https, or leave the scheme out, you get http. You still end up with https for some sites because they use SSL redirection, maybe something like this: https://www.linkedin.com/pulse/how-use-nginx-reverse-proxy-https-wss-self-signed-ramos-da-silva/?articleId=6678584723419226112
Use nslookup abc.com to get its IP address,
then map that IP to xyz.com in the hosts file (/etc/hosts on Linux),
so that the HTTPS domain name resolves to the other site's IP.
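A sketch of those steps on Linux (the IP shown is illustrative; use whatever nslookup actually returns):

```shell
# 1. Find the IP the genuine site resolves to:
nslookup abc.com

# 2. Map xyz.com to that IP in /etc/hosts (requires root):
echo "93.184.216.34  xyz.com" | sudo tee -a /etc/hosts

# Note: over HTTPS the certificate presented will still be abc.com's,
# so the browser will warn about a host name mismatch for xyz.com.
```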