Connect to DNS names through SSL and manually specify the IP of the DNS record (local DNS poisoning/spoofing) - PowerShell

I'm currently working on a script that will test the health of an ADFS service. The ADFS service uses the same domain name (split-brain DNS) for both intranet access and public DNS (for internet connections through the proxy servers). If I'm logged into an intranet device and I attempt to perform an SSL connection to the ADFS service, my device will use the intranet IP of the service. If I do the same from a device that is not in the intranet, I will connect to the public-facing IP.
I want my script to test the health of both the internal and external service, but I haven't found a way to perform an SSL connection to a certain hostname/FQDN while using a specific IP depending on the test I'm trying to perform (intranet vs. extranet). Connecting directly to the internal/external IP address is not an option, since the IP addresses are not part of the SSL cert's subject alternative names.
One option I found is to create a PSSession to a remote host that has public DNS servers configured and execute my extranet test through that PSSession, but ideally I would like to run both tests from a single server.
I'm trying to find an option that works in the context of my PowerShell session only. I don't want to change the server's DNS settings or the global DNS cache, since the server depends on that ADFS service for other services to work and changing them would cause problems.
Any help will be appreciated

I could not find a way to achieve exactly what I asked, so instead, what I did was to deploy a small REST API in Azure which calls my ADFS service. When I call that REST API, ADFS receives the query from the Internet, allowing me to test the health of my ADFS service from the internet.
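For reference, a minimal sketch (not a full ADFS health check) of one way to force the TCP connection to a chosen IP while still presenting the service FQDN for SNI and certificate validation; the IP and hostname below are placeholders:

    # Assumed values: replace with the intranet or extranet IP and the real ADFS FQDN.
    $targetIp = '203.0.113.10'
    $hostName = 'adfs.contoso.com'

    $client = [System.Net.Sockets.TcpClient]::new($targetIp, 443)
    try {
        $ssl = [System.Net.Security.SslStream]::new($client.GetStream(), $false)
        # The hostname passed here is used for SNI and certificate validation,
        # so the SAN check still succeeds even though we dialed a raw IP.
        $ssl.AuthenticateAsClient($hostName)
        "Handshake OK: $($ssl.SslProtocol), cert subject: $($ssl.RemoteCertificate.Subject)"
        $ssl.Dispose()
    }
    finally {
        $client.Dispose()
    }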

Related

SSL application load balancer on AWS WITHOUT a custom domain

Is it possible to give an application load balancer on AWS an SSL certificate, allowing only HTTPS connections, if I don't want to use a custom domain?
I'm currently developing some internal dashboard applications, so I have no need or desire for a domain name attached to them.
I can only dig up info and tutorials about creating a certificate in CloudFormation when you want to add a domain that forwards to the LB.
The SSL certificate has to have a valid DNS name associated with it in order to work. You need to request a certificate via ACM and then attach that to the ELB. You can configure the ELB to only have an HTTPS listener to force secure communication.
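As a rough sketch of those two steps with the AWS CLI (the domain name and the angle-bracket ARNs are placeholders; the same can be done through CloudFormation or the console):

    # Request an ACM certificate for a DNS name you control (placeholder domain).
    aws acm request-certificate --domain-name dashboard.example.com --validation-method DNS

    # Once the certificate is validated, attach it to an HTTPS-only listener on the ALB.
    aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 --certificates CertificateArn=<acm-cert-arn> --default-actions Type=forward,TargetGroupArn=<target-group-arn>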
Probably not.
It's not generally kosher to issue an SSL certificate for an IP address, and since all *.compute.amazonaws.com style DNS names are floating and could be reassigned at any moment, they damn well won't issue one for them either. (The same goes for Let's Encrypt, by the way: you have to have a DNS name that isn't issued by the provider.)
Just give your internal service a DNS name, be it something like mydashboard.internal.mycompany.com or whatever; it'll be easier to access, too.

How do you configure a domain name for openfire server? Do I just buy a domain and set it as my XMPP domain?

So I am setting up a server for a messaging application which is being developed. I am using Openfire server for this, which I have installed and running on a PC. Right now, the XMPP domain is set to my computer name and the server is working on my network, but obviously, as it's a local name, it cannot be accessed from the outside. I am able to access the server from multiple computers on the same network using the Spark messaging client to test the server. So, to be able to access my XMPP server from devices outside my network, do I just buy a domain name and set it as my XMPP domain in the Openfire settings?
To answer your question, yes, with the following caveats:
You will either have to host the DNS server yourself or have the DNS provider serve the records for you.
A domain must have a static IP address to point to. A home or typical small business Internet account does not include a static IP (some providers actively prevent home accounts from serving web pages/services).
You must also configure your firewall to allow a mapping to the internal server.
I would recommend using an external provider to handle the network and hosting services for your program.
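For example, once the domain is registered and hosted, XMPP clients locate the server through DNS SRV records; a sketch with placeholder names (example.com, xmpp.example.com), plus a PowerShell check that the records resolve:

    # SRV records the DNS provider would serve for the XMPP domain (placeholders):
    #   _xmpp-client._tcp.example.com. 3600 IN SRV 0 5 5222 xmpp.example.com.
    #   _xmpp-server._tcp.example.com. 3600 IN SRV 0 5 5269 xmpp.example.com.

    # Verify from outside the network that the records resolve:
    Resolve-DnsName -Name '_xmpp-client._tcp.example.com' -Type SRV
    Resolve-DnsName -Name '_xmpp-server._tcp.example.com' -Type SRV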

Hosting two different servers with one domain

I'm trying to host web pages using Windows Server 2016. Currently, I have Jira and my personal web (IIS) servers. Using AWS, I currently have "myec2.com:port1" and "myec2.com:port2" running fine, and I'm planning to buy a domain "myname.com" to be connected to "myec2.long.name.com".
What I hope to do is have "myname.com/jira" and "myname.com/mypage", or "jira.myname.com" and "mypage.myname.com", redirect to the Jira server and the IIS server respectively. Is there a way I can achieve this goal?
Thanks in advance.
If you buy a domain like myname.com you will be able to configure any number of sub-domains such as jira.myname.com or mypage.myname.com as you like.
Usually what you would do is point those sub-domains to your server's IP, then handle requests to those domains by setting up a web server (like Apache or nginx) and configuring a virtual host (Apache) or a server block (nginx) for each of those sub-domains.
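Since the question mentions IIS on Windows Server 2016, here is a hedged sketch of the host-header approach in PowerShell (the site name, path and domains are placeholders; the Apache/nginx equivalent is one virtual host / server block per sub-domain):

    Import-Module WebAdministration

    # Site that answers only for mypage.myname.com on port 80 (placeholder path).
    New-Website -Name 'mypage' -Port 80 -HostHeader 'mypage.myname.com' -PhysicalPath 'C:\inetpub\mypage'

    # Jira keeps listening on its own port (e.g. 8080); jira.myname.com is then
    # typically published through a reverse proxy (IIS ARR + URL Rewrite, or an
    # Apache/nginx virtual host) that forwards requests to http://localhost:8080.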

HTTPS for local IP address

I have a gadget[*] that connects to the user's WiFi network and responds to commands over a simple REST interface. The user uses a web app to control this gadget. The web app is currently served over http and the app's javascript does AJAX calls to the gadget's local IP address to control it. This scheme works well and I have no issues with it.
[*] By "gadget" I mean an actual, physical IoT device that the user buys and installs within their home, and configures to connect to their home WiFi network
Now, I want to serve this web app over https. I have no issue setting up https on the hosting side. The problem is, now the browser blocks access to the gadget (since the gadget's REST API is over http and not https).
The obvious solution is to have the gadget serve its REST API over https. But how? It has a local IP address and no one will issue a certificate for it. (Even if they did, I'd have to buy a boatload of certificates, one for each possible local IP address.) I could round-trip via the cloud (by adding additional logic on my server side to accept commands from the web app and forward them to the gadget over another connection), but this will increase latencies.
Is there a way around this problem? One possibility that I have in mind is to:
Get a wildcard certificate (say, *.mydomain.com)
Run my own DNS that maps sub-domains to a local IP address following a pattern (For example, 192-168-1-123.mydomain.com would map to 192.168.1.123)
Use the wild-card certificate in all the gadgets
My web app could then make AJAX calls to https://192-168-1-123.mydomain.com instead of http://192.168.1.123 and latencies would remain unaffected aside from the initial DNS lookup
Would this work? It's an expensive experiment to try out (wildcard certificates cost ~$200) and running a DNS server seems like a lot of work. Plus I find myself under-qualified to think through the security implications.
Perhaps there's already a service out there that solves this problem?
While this is a pretty old question, there are still no out-of-the-box solutions for it today.
Just as @Jaffa-the-cake posted in a comment, you can lean on how Plex did it, which Filippo Valsorda explained in his blog:
https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/
This is very similar to what you proposed yourself. You don't even need a wildcard certificate, but you can generate certificates on-the-fly using Let's Encrypt. (You can still use wildcard certificates, if you want, which Let's Encrypt supports now, too.)
Just yesterday I did a manual proof-of-concept for that workflow, which can be automated with the following steps:
Write a Web Service that can create DNS entries for individual devices dynamically and generate matching certificates via Let's Encrypt - this is pretty easy using certbot and e.g. Google Cloud DNS (see the sketch after these steps). I guess Azure, AWS and others have similar offerings, too. When you use certbot's DNS plugins, you don't even need to have an actual web server running on port 80/443.
On your local device, contact that Web Service to generate a unique DNS entry (e.g. a unique sub-domain under yourdns.com) and a certificate for that domain
Use that certificate in your local HTTPS server
Browse to that domain instead of your local IP
Now you will have an HTTPS connection to your local server, using a local IP, but a publicly resolved DNS entry.
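As a rough illustration of the certificate step, assuming the Google Cloud DNS plugin mentioned above (the domain, sub-domain and credentials path are placeholders, and the A record itself would be created by your web service):

    # DNS-01 challenge via the certbot Google Cloud DNS plugin, so the device
    # never needs to expose port 80/443 to the internet (placeholder values).
    certbot certonly --dns-google --dns-google-credentials /etc/secrets/dns-service-account.json -d 192-168-1-123.devices.example.com

    # The resulting certificate and key are then pushed to the gadget, which
    # serves its REST API over HTTPS at https://192-168-1-123.devices.example.com,
    # a name your DNS resolves to the local IP 192.168.1.123.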
The downside is that this does not work offline from arbitrary clients. And you need to think of a good security concept to create trust between the client that requests a DNS entry and certificate, and your web service that will generate those.
BTW, do you mind sharing what kind of gadget it is that you are building?
If all you want is to access the device APIs through the web browser, a simple solution would be to proxy all the requests to the device through your web server. This way, even self-signed certs for the devices won't be a problem. The only problem, though, is that the server would have to be on the same network as your devices.
If you are not on the same network, you can write a simple browser plugin (Chrome) to send the API requests to the IoT device, but then the dependency on the app/plugin will be clumsy.

For the Bluemix Secure Gateway service, how does the data center's network need to be configured?

I am going to use Secure Gateway service in Bluemix and I have some questions about how I should make it work.
Systems in my data center's intranet access the Internet through a proxy (with no authentication). Can Secure Gateway connect to Bluemix via a proxy?
Does it connect to Bluemix via HTTPS protocol?
The network admins asked me: what are the IPs (or the IP range) of Bluemix? Any idea?
Thank you very much.
A Secure Gateway instance runs in two parts, as shown in "Reaching enterprise backend with Bluemix Secure Gateway via console": the gateway and the gateway client. The gateway runs in Bluemix, the gateway client runs in the data center containing one or more systems of record to connect to. The gateway client needs network access to the Bluemix data center (typically via the Internet) and to the systems of record (via the data center's internal network). The gateway client initiates the connection, so it needs to know Bluemix's address, but Bluemix doesn't need to know the gateway client's address.
To answer your questions specifically:
A proxy isn't supported. The gateway and its client need direct access to each other.
The connection uses HTTPS for SSL encryption. The Transport Layer Security (TLS) options can be used to add authentication.
Bluemix's IP addresses aren't published.
For point 3:
The client connects outbound to the cloud services. Once the SecGW is connected, all additional destination connections flow through that connection; no additional firewall or iptables rules are needed. If they have a rule in place so that the on-premises machine where the Secure Gateway client is installed can make outbound connections on port 443 (HTTPS), that is all they need.
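For example, a quick way to confirm from the on-premises machine that outbound port 443 is open (the hostname below is a placeholder for whatever endpoint your Secure Gateway client is configured to reach):

    # Run on the machine that will host the Secure Gateway client; checks that an
    # outbound TCP connection on port 443 succeeds.
    Test-NetConnection -ComputerName 'your-secure-gateway-endpoint.example.com' -Port 443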