Is it possible to have mDNS / Bonjour publish the same service name on different servers?

What I was hoping to achieve was to have a list of servers that offer a specific service.
E.g. let's say that I have server01 that publishes a service called ControlSystem. Now I add server02 that also publishes a service called ControlSystem.
What I want is to be able to discover the list of servers that publish the ControlSystem service.
I think this should be possible (for example, you might have more than one printer that publishes a printing service), but when I register the service on two different servers with dns-sd -R, the output of dns-sd -L is simply:
Lookup ControlSystem._http._tcp.local
DATE: ---Mon 07 May 2018---
16:45:57.867 ...STARTING...
16:45:57.868 ControlSystem._http._tcp.local. can be reached at ControlSystem._http._tcp.local.:5000 (interface 11)
16:45:57.869 ControlSystem._http._tcp.local. can be reached at ControlSystem._http._tcp.local.:5000 (interface 11)
Which is not really useful because I'd like to have at least the IP address of the two servers.

As you stated it: "is it possible to publish the same service name" on different servers? The answer is yes.
But not as you have implemented it.
The service name is just the name of the service, not the name of the instance. For example, for a web server service, the service name would be _http._tcp.
That name can be the same across servers.
What cannot be the same across servers is the service instance name. It is usually formed by concatenating the device name, the service type, and the domain name (in Bonjour / Zeroconf, this is .local).
Continuing with the web servers example, it would result in server1._http._tcp.local and server2._http._tcp.local.
This instance name MUST be unique across servers (and even within the same server, in case it hosts several instances of the service, e.g. on different ports).
The device name should also be unique. In the case of Zeroconf, for example, this is specified in the Multicast DNS RFC (RFC 6762, Section 8: Probing and Announcing on Startup).
In your specific case, you are registering the same instance name on both servers, which, as stated above, is not allowed.
You should register in each server different instances, i.e.:
dns-sd -R server01 _http._tcp local 8088
dns-sd -R server02 _http._tcp local 8088
in each of the servers.
After doing this, you should be able to browse for HTTP services doing:
dns-sd -B _http._tcp local.
which should find both instances.
And finally, since you are trying to register a new, non-standard service (ControlSystem), you should just replace _http with your own service type (which must start with an underscore), and _tcp with the actual transport protocol (_tcp or _udp):
dns-sd -R server01 _controlsystem._tcp local 8088
dns-sd -R server02 _controlsystem._tcp local 8088
And the query:
dns-sd -B _controlsystem._tcp local.
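The naming rule above can be sketched in a few lines. This is purely illustrative (Python is used here only as a scratchpad; `_controlsystem._tcp` is an assumed spelling of your ControlSystem service type, not something the source mandates):

```python
# Illustrative sketch: how DNS-SD full instance names are composed.
# The instance part must differ per server; the service type may repeat.

def full_instance_name(instance: str, service_type: str, domain: str = "local.") -> str:
    """Compose <Instance>.<Service>.<Domain>, e.g. server01._controlsystem._tcp.local."""
    return f"{instance}.{service_type}.{domain}"

names = [
    full_instance_name("server01", "_controlsystem._tcp"),
    full_instance_name("server02", "_controlsystem._tcp"),
]

# Both entries share the service type, but the full instance names are
# unique, which is what mDNS probing (RFC 6762, Section 8) enforces.
assert len(set(names)) == len(names)
```

Browsing with dns-sd -B then returns one line per instance name, which is exactly the per-server list the question asks for.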

Related

How can I access simultaneously Synology-DSM of multiple NAS on remote Network with only one browser (Firefox)?

I have to access multiple NAS DSM interfaces simultaneously on a remote network with Firefox. They are reachable via the router's IP and a dedicated port each, like: https://xxx.xxx.xxx.xxx:5000, https://xxx.xxx.xxx.xxx:5100 and https://xxx.xxx.xxx.xxx:5200. The problem is that Firefox loses the connection to the first NAS as soon as I connect to the next NAS. Multiple simultaneous connections are only possible with multiple browsers; why? How can I manage to connect to all NAS simultaneously with only one browser?
I found a workaround.
You can start several separate Firefox profiles in such a way that no instance is aware of the others.
To do this, call Firefox with "-p" and create as many profiles as you need:
Standard-User001
Standard-User002
...
And store them in separate folders:
D:\FirefoxProfile\Standard-User001\
D:\FirefoxProfile\Standard-User002\
...
Then you can create links on your desktop:
"C:\Program Files\Mozilla Firefox\Firefox.exe" -p "Standard-User001" -no-remote
"C:\Program Files\Mozilla Firefox\Firefox.exe" -p "Standard-User002" -no-remote
...
Now you have several completely separate Firefox instances.
With a Synology NAS, when you log in to DSM, the generated PHPSESSID cookie is the same if the domain name is the same, no matter whether the port specified in the URL differs. So there is, as far as I know, no way to circumvent this mechanism (needless to say, it is actually a good thing for security in most cases).
So the simplest solution I found is to set up CNAMEs on a dynDNS entry, assuming you fulfill the following requirements:
you have your own domain name;
each DSM is configured to use a different port;
port forwarding on your router is set up (optional).
Now let's say you have two servers: nas1 reachable on port 5001, nas2 on port 5002, and a dynDNS entry, for instance nas-is-like.synology.me (in DSM, check the "DDNS" tab in the "External Access" settings of the configuration panel if you need to set this up first).
Then simply add wanted aliases in your DNS zone:
NAME TYPE TARGET
--------------------------------------------------
nas1.youdomain.tk CNAME nas-is-like.synology.me.
nas2.youdomain.tk CNAME nas-is-like.synology.me.
Then you will be able to log in on both DSM interfaces (nas1.youdomain.tk:5001 and nas2.youdomain.tk:5002) in the same browser and the same profile.
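The reason the CNAME trick works can be sketched in a few lines: browsers scope cookies by host name only, never by port (RFC 6265), so two distinct aliases yield two separate cookie jars even when they resolve to the same address. A minimal illustration, using the example host names from above:

```python
from urllib.parse import urlsplit

def cookie_host(url: str) -> str:
    # Cookies are scoped by host (RFC 6265); the port is ignored.
    return urlsplit(url).hostname

# Same host, different ports -> one shared cookie jar -> the DSM
# sessions overwrite each other's PHPSESSID.
assert cookie_host("https://nas-is-like.synology.me:5001") == \
       cookie_host("https://nas-is-like.synology.me:5002")

# Distinct CNAMEs -> distinct cookie scopes, even though both names
# resolve to the same router.
assert cookie_host("https://nas1.youdomain.tk:5001") != \
       cookie_host("https://nas2.youdomain.tk:5002")
```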

Is it possible to expose an Owin service?

We have created self-hosted services using OWIN. They are working fine inside the server, and we can request and retrieve information using http://localhost. We use a different port for each service, so we can get some information from http://localhost:8001, other information from http://localhost:8015, and so on.
Now, we need to expose the results of one of those self-hosted services for access over the internet. We'd like to provide a custom address such as http://ourpublicinfo.mydomain.com:8001, or use the server IP, such as http://209.111.145.73:8001.
Is that possible?
How can we implement it?
Our server OS is Windows Server 2012 R2
OWIN self-hosted apps can run in a Windows Service, as a console process and, if desired, as part of a more robust host like IIS.
Since you mention your app is running as a service, you're probably missing all the GUI goodies IIS provides. In reality, however, IIS works on top of http.sys, just as HttpListener does (which is probably what you're using to self-host your app). You just need to do some manual setup yourself:
First of all, you need to make a URL reservation in order to publish on a nonstandard port.
Why would you do that? Quite simply because you're no longer running under localhost alone on your own local machine, where you probably are an admin and/or have special privileges.
Since this is a server, and the user running the service is most probably not an admin, you need to give that user permission to use the URL... and this is where URL reservations come into the scene.
You pretty much have two options:
open up the URL to be used by any user:
netsh http add urlacl url=http://209.111.145.73:8001/ user="everyone" listen=yes
or open up the URL to be used by the user(s) running the service, e.g.: NETWORK SERVICE:
netsh http add urlacl url=http://209.111.145.73:8001/ user="NETWORK SERVICE" listen=yes
There is a way to make the reservation for several users too, using SDDL, user groups, etc., but I'll not get into it (you can look that up).
Second of all, you need to open a hole through your firewall (if you don't have one in this day and age, I pity you!).
There are plenty of tutorials on this. You can use a GUI, netsh.exe, and what not.
Pretty much all you need to do is make sure you allow incoming TCP connections through that port, and that should do the trick.
To make sure the hole is open through and through, you can use a tool like http://www.yougetsignal.com/tools/open-ports/ and insert 209.111.145.73 in the Remote Address and 8001 in the Port Number.
If for some reason it shows that the port is closed, even after creating an incoming rule in your firewall for it, then you probably have one or more firewalls between your server and the outside world.
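If you'd rather check from code than from a web tool, a plain TCP connect is enough. A minimal sketch (the local listener here only stands in for your self-hosted service; against the real server you'd pass its public IP and port):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Stand-in for the self-hosted service: listen on an ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

assert port_is_open("127.0.0.1", port)
server.close()
```

Note that a success from inside the server only proves the listener is up; to verify the firewall hole you must run the check from outside the network.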
With those two elements in place you should be able to access your self-hosted service from the outside.
As for accessing your service through an address like http://ourpublicinfo.mydomain.com:8001, you'll need to create a DNS entry somewhere, most likely on your Domain Registrar for mydomain.com, where you could create an A Record for your ourpublicinfo subdomain pointing to 209.111.145.73.
From this point on, you should be able to access your service through the direct IP and port, or through the aforementioned URL.
Best of luck!
Note:
If your service will be accessed from other domains, you might need to make sure you have CORS (Cross-Origin Resource Sharing) well defined and working on your service too ;)

remotely pulling configuration information from BIND9 nameserver

How do I remotely pull configuration information from a running bind name server without logging in as root on the server where it is running?
I searched a lot and read many materials about BIND9 but still no answers.
I know there are some commands to conduct zone transfer or update zone resource data, but I didn't find any way to pull configuration info from a name server.
In short: you cannot. The DNS protocol has no provision for sending server configuration. So whatever technology you use, it will NOT be DNS. And since BIND9 is designed only to serve DNS requests and send DNS replies, it cannot be coerced into sending its configuration the way you'd expect.
You have to install and configure some other piece of software to be able to access the configuration. SSH is one of the most widespread such technologies for managing server configurations.
You could use rndc -s dns-server dumpdb.
In named's configuration, point dump-file to a shared folder that is accessible from the system that runs rndc.
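For reference, the relevant named.conf pieces might look like the fragment below. This is a sketch under assumptions: the management host address, the dump path, and the key name are placeholders for your setup, and the matching key statement (plus the client-side rndc.conf) is elided:

```conf
// Allow rndc from a management host; a matching "key" statement
// with the shared secret must also be defined (elided here).
controls {
    inet * port 953 allow { 192.0.2.10; } keys { "rndc-key"; };
};

options {
    // "rndc dumpdb" writes here; make this the shared folder that
    // the remote system can read.
    dump-file "/var/shared/named_dump.db";
};
```

Keep in mind that dumpdb dumps the server's cache and zone data, not the named.conf contents themselves.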

Get Azure public IP address from deployed app

I'm implementing PASV mode in an FTP server, and I send the client the IP address and port of the data endpoint. This is stupid because the IP is actually the one the client is already connecting to, so there are two options:
How could I get the public IP address from a given instance? Not the VIP, but the public one.
How could I get the original target IP address that the user used, from a Socket object? Considering routers and load balancers in the middle :P
An answer to either of these questions would do, although there is another way that could work... may I get the public IP address by doing a DNS lookup of myapp.cloudapp.net?
A fourth option would be to use the Azure Management API library... but, too much trouble :P.
Cheers.
Not sure if you ever figured this out, but here's my take on it. The individual role instances are all behind the Windows Azure load balancer and have no idea what the original, outward-facing IP address is. Also, there's no Management API call that returns the IP address - Get Deployment returns the URL but not the IP address. I think the only option is going to be a DNS lookup.
Having said that: I don't think you can host a passive ftp server in your role instance (at least not elegantly). You may open up to 25 input endpoints on your role (up from 5 - see my recent blog post about this update), but there's manual work involved in the configuration. I don't know if your ftp application lets you limit your port range to such a small number of ports. Also:
You'd have to define each port as its own input endpoint (this is the manual-labor part I mentioned) - input endpoints don't allow a port range to be specified, unlike internal endpoints.
You'd have to specify the port number that's used internally, and the port numbers would need to be sequential.
One last thing on ftp: you should be able to host an sftp server with no trouble, since all traffic comes through one port.
The hack that I'm contemplating right now is to retrieve http://www.icanhazip.com/. It isn't elegant and is subject to the availability of that service, but it gets the job done. A better solution would be appreciated!
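That hack can at least be made defensive: validate whatever the echo service returns before handing it to FTP clients. A sketch under assumptions (the fetch step is injectable so the parsing can be exercised without network access; in production you would pass the body returned by an HTTP GET of the icanhazip URL mentioned above):

```python
import ipaddress

def parse_public_ip(raw: bytes) -> str:
    """Validate the body returned by an IP-echo service such as icanhazip."""
    text = raw.decode("ascii").strip()
    ipaddress.ip_address(text)  # raises ValueError if it is not an IP
    return text

# In real use you'd pass urllib.request.urlopen(url).read();
# here a canned response stands in for the service.
assert parse_public_ip(b"203.0.113.7\n") == "203.0.113.7"
```

This way an outage or error page from the service raises immediately instead of poisoning the PASV reply.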

difference between XMPP servername and XMPP servicename?

In Smack API, there is a configuration class for connection, described at this page
ConnectionConfiguration
I am confused about the distinction between service name and server name.
Suppose I have a computer named "mybox.mydomain.com", and I have ejabberd on it with a configured host called "myhost" (using the line {hosts, ["myhost"]}. in ejabberd.cfg),
what is the host name, server name and service name in this case?
myhost: service name (or XMPP domain)
mybox.mydomain.com: hostname and servername.
You can host an XMPP domain over any host, provided that you set the SRV records right in the DNS or if the client specifies to which host it is supposed to connect (like email).
Think of the JID you're using to log in, which contains username@domain. The domain is the logical name of the service you are using. For some services, like jabber.org, the service is run on a box that has the same name as the service. For many others, like WebEx Connect and GoogleTalk, the service domain is a starting point to figure out where to open a socket to, but not the name of the machine. If everything is set up right, you can look up the name of the machine to connect to in the DNS using an SRV record. For example, using dig:
$ dig +short -t SRV _xmpp-server._tcp.gmail.com
20 0 5269 xmpp-server4.l.google.com.
20 0 5269 xmpp-server2.l.google.com.
20 0 5269 xmpp-server1.l.google.com.
5 0 5269 xmpp-server.l.google.com.
20 0 5269 xmpp-server3.l.google.com.
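The client-side selection rule for those records can be sketched as follows: the lowest priority wins first, and weight biases the choice among equal priorities (simplified here to deterministic sorting; real resolvers randomize proportionally by weight, per RFC 2782):

```python
# (priority, weight, port, target) tuples, as in the dig output above.
records = [
    (20, 0, 5269, "xmpp-server4.l.google.com."),
    (20, 0, 5269, "xmpp-server2.l.google.com."),
    (5,  0, 5269, "xmpp-server.l.google.com."),
]

def pick_connect_host(srv):
    # Lowest priority first; higher weight preferred among equals.
    best = sorted(srv, key=lambda r: (r[0], -r[1]))[0]
    return best[3], best[2]

host, port = pick_connect_host(records)
assert (host, port) == ("xmpp-server.l.google.com.", 5269)
```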
If the service domain is not configured correctly in the DNS, or you're just testing things out, it's often useful to be able to specify this connect host separately from the domain. So for your example, you would use:
ConnectionConfiguration config =
    new ConnectionConfiguration("mybox.mydomain.com",
                                5222,
                                "myhost");
If you ever want this service to be accessed by people outside your network (either client-to-server or server-to-server), it would make sense to rename your service domain to something fully qualified, to which you can attach SRV records for those external entities to use.