Whois list of Top Level Domains against their corresponding registrars - whois

I'm trying to find a list of TLDs mapped to their corresponding whois servers, for example
.com americanWhoisServer
.net someOtherWhoisServer
.au australianWhoisServer
In the end I'm aiming for something like a Dictionary where the key is the TLD and the value is the whois server address (e.g. whois.apnic.net).
Ah snap, I just realised that I am given IP addresses and not domain names, but a list could still come in handy.
How can I determine which whois server to use given an IP address? Guess and check?

You can get the official (?) TLD list from http://data.iana.org/TLD/tlds-alpha-by-domain.txt and then query the IANA whois server (whois.iana.org on port 43) for each TLD to get information about it. That's slightly more export-friendly, and just as official, as the HTML TLD list at http://www.iana.org/domains/root/db/.
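If you want to automate that, here's a rough sketch in Perl of building the TLD-to-server map, using LWP::Simple to fetch the list and a plain TCP socket for the port 43 queries. Expect it to take a while, since it makes one query per TLD, and be gentle with the IANA server:

use strict;
use warnings;
use LWP::Simple qw(get);
use IO::Socket::INET;

# Build a hash of TLD => whois server from the IANA data.
my $list = get('http://data.iana.org/TLD/tlds-alpha-by-domain.txt')
    or die "could not fetch the TLD list\n";
my %whois_for_tld;

for my $tld (grep { !/^#/ } split /\n/, $list) {    # first line is a comment
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'whois.iana.org',
        PeerPort => 43,
        Proto    => 'tcp',
    ) or next;
    print $sock lc($tld), "\r\n";
    my $reply = do { local $/; <$sock> };            # slurp the whole answer
    close $sock;
    next unless defined $reply;
    # The registry's whois server is reported on a "whois:" line (if any).
    $whois_for_tld{lc $tld} = $1 if $reply =~ /^whois:\s*(\S+)/im;
}

printf "%s => %s\n", $_, $whois_for_tld{$_} for sort keys %whois_for_tld;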

Do not use a static, local list; whois servers may change over time (OK, not every day, but it can happen). To find the server for a given domain or IP, start by querying the IANA whois server: connect to whois.iana.org:43 and send the query string followed by "\r\n", for example "ibm.com\r\n" or "72.163.5.201\r\n". The IANA whois server will then return an answer containing a "hint"; for example, the query for 72.163.5.201 returns
% IANA WHOIS server
% for more information on IANA, visit http://www.iana.org
% This query returned 1 object
refer: whois.arin.net
inetnum:      72.0.0.0 - 72.255.255.255
organisation: ARIN
status:       ALLOCATED
whois:        whois.arin.net
changed:      2004-08
source:       IANA
Now parse the response looking for the "whois:" entry, extract the name of the whois server responsible for the IP or domain (whois.arin.net in this case), and repeat the query against that server. Notice, though, that in some cases (e.g. "com" domains and the VeriSign whois server) the second answer may itself contain a referral to yet another whois server. For example, querying the VeriSign whois server for the verisign.com domain returns
Domain Name: VERISIGN-GRS.COM
Registrar: CSC CORPORATE DOMAINS, INC.
Sponsoring Registrar IANA ID: 299
Whois Server: whois.corporatedomains.com
Referral URL: http://www.cscglobal.com/global/web/csc/digital-brand-services.html
Name Server: AV1.NSTLD.COM
Name Server: AV2.NSTLD.COM
Name Server: AV3.NSTLD.COM
Name Server: AV4.NSTLD.COM
Status: clientTransferProhibited https://www.icann.org/epp#clientTransferProhibited
Status: serverDeleteProhibited https://www.icann.org/epp#serverDeleteProhibited
Status: serverTransferProhibited https://www.icann.org/epp#serverTransferProhibited
Status: serverUpdateProhibited https://www.icann.org/epp#serverUpdateProhibited
Updated Date: 12-jan-2016
Creation Date: 08-sep-2000
Expiration Date: 08-sep-2016
In such a case you'll need to locate the "Whois Server:" line, extract the whois server name (whois.corporatedomains.com in this case) and repeat the query. One last caveat: in some cases the "referral" returned may point to the very server you just queried, so you should check for that condition to avoid an infinite loop.
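To illustrate the whole procedure, here's a minimal sketch of that referral-following logic in Perl, using only the core IO::Socket::INET module. The "whois:" and "Whois Server:" field names are the ones shown in the replies above, and the %seen hash guards against the infinite-loop case just mentioned:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Send a single WHOIS query to $server and return the raw reply.
sub whois_query {
    my ($server, $query) = @_;
    my $sock = IO::Socket::INET->new(
        PeerAddr => $server,
        PeerPort => 43,
        Proto    => 'tcp',
        Timeout  => 10,
    ) or die "cannot connect to $server: $!\n";
    print $sock "$query\r\n";
    my $reply = do { local $/; <$sock> };   # slurp the whole reply
    close $sock;
    return $reply // '';
}

# Start at IANA and follow "whois:" / "Whois Server:" referrals.
sub whois_lookup {
    my ($query) = @_;
    my $server = 'whois.iana.org';
    my %seen;
    my $reply = '';
    until ($seen{$server}++) {              # never query the same server twice
        $reply = whois_query($server, $query);
        my ($referral) = $reply =~ /^\s*(?:whois|Whois Server):\s*(\S+)/im;
        last unless $referral && lc($referral) ne lc($server);
        $server = $referral;
    }
    return $reply;
}

print whois_lookup($ARGV[0] // '72.163.5.201');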

Each whois client has its own way of getting this information, since no standard was ever adopted.
GNU whois (as used on Debian) has a hardwired list (not a configuration file, a file included at compile-time, named tld_serv_list).
The whois client on FreeBSD uses an online (unofficial) list, maintained in the DNS, at whois-servers.net:
% dig +short CNAME fr.whois-servers.net
whois.nic.fr.
% dig +short CNAME in.whois-servers.net
whois.inregistry.net.
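If you want to do the same lookup yourself, it's just a CNAME query against that zone. A small sketch using the Net::DNS CPAN module and the unofficial whois-servers.net list described above:

use strict;
use warnings;
use Net::DNS;

# Look up the whois server for a TLD via the whois-servers.net zone.
sub whois_server_for_tld {
    my ($tld) = @_;
    my $resolver = Net::DNS::Resolver->new;
    my $reply    = $resolver->query("$tld.whois-servers.net", 'CNAME')
        or return undef;                    # no entry for this TLD
    for my $rr ($reply->answer) {
        return $rr->cname if $rr->type eq 'CNAME';
    }
    return undef;
}

print whois_server_for_tld('fr') // 'not found', "\n";   # prints whois.nic.fr (per the dig output above)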

You might find the official IANA (Internet Assigned Numbers Authority) list at http://www.iana.org/domains/root/db/ a good starting/jumping-off point. It lists the WHOIS server (and name servers) for all allocated TLDs, and it's the official list, but it's not available in an easy-to-export format.

I regularly compile such a list from IANA and the PSL in XML: https://github.com/whois-server-list/whois-server-list
This list contains more than 900 top level domains and their respective whois servers. Additionally, it includes more than 300 second level domains. The list gets updated frequently.

A list of whois servers for TLDs and SLDs can be found at https://whois.sld.ro/servers-list.html and, for the root domains only, at https://www.iana.org/domains/root/db
SLD / TLD Whois Server Last modified
.aaa whois.nic.aaa 2020-10-28
.gov.af whois.nic.af 2020-10-28
.com.af whois.nic.af 2020-10-28
.org.af whois.nic.af 2020-10-28
.edu.af whois.nic.af 2020-10-28
.ai whois.nic.ai 2020-10-28
.off.ai whois.nic.ai 2020-10-28
.com.ai whois.nic.ai 2020-10-28
....


In docker-compose, what is the effect/purpose of `dns_search: .`

I am looking at the stackstorm docker-compose file, and within it almost all containers have a line dns_search: . According to docker-compose documentation, dns_search is for the purpose of configuring search domains.
I am used to seeing this in context of transparently adding a domain to unqualified short domains. For example if I add dns_search: mydomain.com, I would expect "host1" to transparently resolve as "host1.mydomain.com".
I have never seen this set as a single dot . before. What is the effect/purpose of doing this configuration?
I'm posting the answer from the StackStorm GitHub project issue (see the comment on "dns_search: ."). Paraphrasing: it was useful in old versions of Docker, before 2017, when the ndots configuration was not yet available. Nowadays that configuration has no impact, and in fact it has been removed from the stackstorm docker-compose file.
I believe this is because all domain names end in . under the hood, but browsers and other software abstract this away.
For example, under the hood www.google.com is actually www.google.com. (note the trailing dot).
So, in the docker-compose file, this would essentially be saying "Find me any domain"
A bit more detail on why there's an extra dot, if you're interested:
Domain name resolution is hierarchical, reading right to left, with each block, separated by a ., being a step in the process. A DNS resolver will first find a source for . (the root), which will be able to return the address of a resolver for the next block, and so on until it reaches the final block, where it returns the full DNS record.
Extending EdwardTeach's answer:
@ytjohn effectively said they did it in the past because putting dns_search: . configures the DNS search domains to be only . instead of inheriting the host's ones. I can't confirm that because I didn't test it.
Now, I tested what docker-compose does today, and in a container, cat /etc/resolv.conf returns:
nameserver 127.0.0.11
options ndots:0
Where options ndots:0 is (from resolv.conf docs):
ndots:n
Sets a threshold for the number of dots which must
appear in a name given to res_query(3) (see
resolver(3)) before an initial absolute query will
be made. The default for n is 1, meaning that if
there are any dots in a name, the name will be
tried first as an absolute name before any search
list elements are appended to it. The value for
this option is silently capped to 15.
With ndots:0, all domains will be attempted using the absolute name first, only then using the search list.
How I came to this conclusion
The Github comment:
If you don't set this dns_search: ., then whatever the host has in search in their /etc/resolv.conf will get put into your container's /etc/resolv.conf.
This doesn't happen. My host has search domain[0]: broadband (macOS command: scutil --dns), and in docker containers, it doesn't show broadband (linux command: cat /etc/resolv.conf). Instead, it says options ndots:0
dns_search docs:
dns_search defines custom DNS search domains to set on container network interface configuration. Can be a single value or a list.
What is a DNS search domain?
It is a list of domain suffixes used to resolve hostnames that are not fully qualified, e.g. hostname will be tried as hostname.example.com and then hostname.website.com if your search domain list is example.com, website.com. More information at https://superuser.com/a/184366
In another repo (crossdock), their dockerfile had the comment:
`dns_search: . # Ensures unified DNS config.`

Adding a custom domain name with surge.sh

I'm a newbie in domain names, DNS etc.
I'm using surge.sh for deploying my app. Now I want to add a custom domain that I registered with TransIP, and I can't get it working. I set the IP address to 45.55.110.124, as they explain here. All together, I entered the following settings:
Name: *
TTL: 1 min
Type: A
Address: 45.55.110.124
And another one, exactly the same but using the name @:
Name: @
TTL: 1 min
Type: A
Address: 45.55.110.124
I created a test page containing "hello domain" in a simple HTML file. Then I deployed the page by moving to the folder that contains the HTML file and running: surge ./ mydomain.io.
I waited over 5 minutes and nothing has changed.
Now, my questions are:
What am I doing wrong?
My domain provider suggests that I also use an IPv6 address, but which one should I use for Surge?
Why is there an option of setting TTL longer than 1 minute, who wants to wait longer before their deploy comes online?
For starters, you want to use a CNAME record instead of an A record if possible. The reason for this is that their IP address can change out from under you when infrastructure changes / updates / re-deploys. If possible, remove the A records and create CNAME records pointing to na-west1.surge.sh. instead.
Next, assuming that they want you to point to the same IP that na-west1.surge.sh resolves to, that IP is different from the one in the documentation (possibly even due to my previous explanation). You can ping the domain or use the host utility to get the current IP address:
$ host na-west1.surge.sh
na-west1.surge.sh has address 138.197.235.123
Armed with this information, try changing to CNAME records first. If that isn't possible, then use the updated IP address that you get from resolving their CNAME.

perl script to serve whois data as requested on port 43

Warning: This is long and can probably only be answered by a professional perl programmer or someone involved with a domain registrar or registry company.
I run a website hosting, design, and domain registration business. We are a registrar for some TLDs and a couple of them require us to have a whois server for domains registered with us. I have a whois server set up which is working but I know it's not doing it the right way, so I'm trying to figure out what I need to change.
My script is set up so going to whois.xxxxxxxxxx.com via browser or doing whois -h whois.xxxxxxxxxx.com from shell works. A whois on a domain registered with us gives whois data and a domain not registered with us says it's not registered with us.
If needed, I can give the whois url, or it can be figured out from my profile. I just don't want to put it here to look like advertising or for search engines to end up going to.
The problem is how my script does it.
My whois URL is set up in Apache's httpd.conf file as a normal subdomain listening on port 80, and it's also set up to listen on port 43. When called via a browser it works properly: it serves a form to enter a domain and checks our database for that domain. How it works when called from the shell is fine as well, but how it distinguishes between the two is weird, and how it gets the domain is also weird. It works, but it can't be the right way to do it.
How it distinguishes between shell and http is:
if ($ENV{REQUEST_METHOD} ne "GET") {
    &shell_process;
}
else {
    &http_process;
}
It would seem more logical for this to work:
if ($ENV{SERVER_PORT} eq 43) {
    &shell_process;
}
else {
    &http_process;
}
That doesn't work because even when called through port 43 as a whois request, the ENV vars are saying "SERVER_PORT = 80".
How it gets the domain name when called from shell is:
$domain = lc($ENV{REQUEST_METHOD});
You would think the domain would be the QUERY_STRING or more likely, in the ARGV vars, but it's not.
Here are the ENV vars (that matter) when called via http:
SERVER_NAME = whois.xxxxxxxxxxxxx.com
REQUEST_METHOD = GET
QUERY_STRING = domain=roughdraft.ws&submit=+Get+Whois+
SERVER_PORT = 80
REQUEST_URI = /index.cgi?domain=premierwebsitesolutions.ws&submit=+Get+Whois+
HTTP_HOST = whois.xxxxxxxxxxxxxx.com
Here are the ENV vars (that matter) when called via shell:
SERVER_NAME = whois.xxxxxxxxxxxxxx.com
REQUEST_METHOD = premierwebsitesolutions.ws
QUERY_STRING =
SERVER_PORT = 80
REQUEST_URI =
Notice the SERVER_PORT stays 80 either way, even though through shell it's set up on port 43.
Notice how via shell the REQUEST_METHOD is the domain being looked up.
I've done lots of searching and did find swhoisd (the Simple Whois Daemon), but that's only for small databases. I also found the Daemon::Whois Perl module, but it uses a CDB database which I know nothing about, it comes with no instructions, and it's a daemon, which I don't really need because the script works fine when called through Apache on port 43.
Does anyone know how this is supposed to be done?
Can I get the script to see that it was called via port 43?
Is it normal to use REQUEST_METHOD this way?
Is a whois server supposed to be running as a daemon?
Thanks for helping, or trying to.
Mike
WHOIS is not an HTTP-like protocol, so attempting to serve it through Apache on port 43 will not work correctly. You will need to write a separate daemon to serve WHOIS; if you don't want to use Daemon::Whois, you will probably at least want to use something like Net::Daemon to simplify things for you.
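To give an idea of the shape of such a daemon, here is a bare-bones, single-threaded sketch using only the core IO::Socket::INET module. The lookup_domain sub is a hypothetical stand-in for whatever database lookup your CGI script already does; in practice Net::Daemon (or Daemon::Whois) would add the forking, logging and privilege handling that this sketch leaves out:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Hypothetical stand-in for the database lookup the CGI script already does.
sub lookup_domain {
    my ($domain) = @_;
    # ... query the registration database here ...
    return "No match for \"$domain\".\r\n";
}

my $listener = IO::Socket::INET->new(
    LocalPort => 43,        # the WHOIS port; needs root or an equivalent capability
    Proto     => 'tcp',
    Listen    => 10,
    ReuseAddr => 1,
) or die "cannot listen on port 43: $!\n";

while (my $client = $listener->accept) {
    # A WHOIS request is a single line terminated by CRLF (RFC 3912).
    my $query = <$client>;
    next unless defined $query;
    $query =~ s/\r?\n\z//;
    print $client lookup_domain(lc $query);
    close $client;
}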
https://stackoverflow.com/a/933373/66519 mentions something that can be checked to detect CLI vs web invocation; it applies to PHP in this case. Based on the lack of answers here, maybe it will help you get to something useful. Sorry for the formatting, I am using the mobile SO app.

Is there an API for Ganglia?

Hello, I would like to enquire whether there is an API that can be used to retrieve Ganglia stats for all clients from a single Ganglia server.
The Ganglia gmetad component listens on ports 8651 and 8652 by default and replies with XML metric data. The XML data type definition can be seen on GitHub here.
Gmetad needs to be configured to allow XML replies to be sent to specific hosts or all hosts. By default only localhost is allowed. This can be changed in /etc/ganglia/gmetad.conf.
Connecting to port 8651 will get you a default XML report of all metrics as a response.
Port 8652 is the interactive port which allows for customized queries. Gmetad will recognize raw text queries sent to this port, i.e. not HTTP requests.
Here are examples of some queries:
/?filter=summary (returns a summary of the whole grid, i.e. all clusters)
/clusterName (returns raw data of a cluster called "clusterName")
/clusterName/hostName (returns raw data for host "hostName" in cluster "clusterName")
/clusterName?filter=summary (returns a summary of only cluster "clusterName")
The ?filter=summary parameter changes the output to contain the sum of each metric value over all hosts. The number of hosts is also provided for each metric so that the mean value may be calculated.
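As a small illustration of the interactive port, the query is just raw text over TCP. A sketch in Perl, assuming gmetad runs on localhost with the default interactive port 8652:

use strict;
use warnings;
use IO::Socket::INET;

# Ask gmetad's interactive port for a grid-wide summary and print the XML reply.
my $sock = IO::Socket::INET->new(
    PeerAddr => 'localhost',
    PeerPort => 8652,              # interactive gmetad port
    Proto    => 'tcp',
) or die "cannot connect to gmetad: $!\n";

print $sock "/?filter=summary\n";  # raw text query, not an HTTP request
print while <$sock>;               # gmetad streams the XML and then closes the connection
close $sock;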
Yes, there's an API for Ganglia: https://github.com/guardian/ganglia-api
You should check this presentation from 2012 Velocity Europe - it was really a great talk: http://www.guardian.co.uk/info/developer-blog/2012/oct/04/winning-the-metrics-battle
There is also an API you can install from PyPI with 'pip install gangliarest'; it sets up a configurable API backed by a Redis cache and indexer to improve performance.
https://pypi.python.org/pypi/gangliarest

Catchall Router on Exim does not work

I have set up a catchall router on Exim (used as the last router):
catchall:
  driver = redirect
  domains = +local_domains
  data = ${lookup{*@$domain}lsearch{/etc/aliases}}
  retry_use_local_part
This works perfectly when sending emails locally. However, if I log in to my GMail account and send an email to whatever@mydomain.com, then I get an "Unrouteable Address" error.
Thank you for any hints to solve this issue.
In the system_aliases: section of the config file you already have a section which does the lookup in /etc/aliases.
Replace
data = ${lookup{$local_part}lsearch{/etc/aliases}}
with
data = ${lookup{$local_part}lsearch*@{/etc/aliases}}
and make sure you have *:catchall_username in /etc/aliases
This works great for a single domain mail server which is already using /etc/aliases
For this router to work, make sure that
mydomain.com is in local_domains
there is an entry for *@mydomain.com in /etc/aliases
the MX record for mydomain.com is pointing to the server where you've configured this
This is old as heck, but I didn't see a good answer posted and someone else might want to know the answer.
This post is geared towards Debian with Exim4 in single-configuration-file mode. It should work on any Linux Exim4 install, though. For the purpose of explaining things we'll use test@example.com, configured with the hostname mail.example.com. The system will have a real user called test, and we want to create an alias for test called alias. So the end result will be that all email sent to alias@example.com is forwarded to test@example.com without having to create the user alias on the system.
First we need to create a place to store all of the alias files:
mkdir /etc/exim4/aliases.d
vim /etc/exim4/aliases.d/mail.example.com
Contents of the alias file for mail.example.com:
alias: test
vim /etc/exim4/exim4.conf.template
Now look for the section system_aliases. Here you’ll see data = ${lookup{$local_part}lsearch{/etc/aliases}} or something similar. Change that to
data = ${lookup{$local_part}lsearch{/etc/exim4/aliases.d/$domain}}
Save the file and restart exim. The alias should now work. To add support for other domains just add more alias files in the aliases.d directory with the correct hostname.
I copied and pasted this from my blog:
0xeb.info