perl script to serve whois data as requested on port 43

Warning: This is long and can probably only be answered by a professional perl programmer or someone involved with a domain registrar or registry company.
I run a website hosting, design, and domain registration business. We are a registrar for some TLDs and a couple of them require us to have a whois server for domains registered with us. I have a whois server set up which is working but I know it's not doing it the right way, so I'm trying to figure out what I need to change.
My script is set up so that visiting whois.xxxxxxxxxx.com in a browser, or running whois -h whois.xxxxxxxxxx.com from a shell, works. A whois on a domain registered with us returns whois data, and a domain not registered with us is reported as not registered with us.
If needed, I can give the whois url, or it can be figured out from my profile. I just don't want to put it here to look like advertising or for search engines to end up going to.
The problem is how my script does it.
My whois URL is set up in Apache's httpd.conf file as a normal subdomain listening on port 80, and it's also set up to listen on port 43. When called via browser it works properly: it gives a form to enter a domain and checks our database for that domain. How it works when called from the shell is fine as well, but how it distinguishes between the two is weird, and how it gets the domain is also weird. It works, but it can't be the right way to do it.
How it distinguishes between shell and http is:
if ($ENV{REQUEST_METHOD} ne "GET") {
&shell_process;
}
else {
&http_process;
}
It would seem more logical for this to work:
if ($ENV{SERVER_PORT} eq 43) {
    &shell_process;
}
else {
    &http_process;
}
That doesn't work because even when called through port 43 as a whois request, the ENV vars are saying "SERVER_PORT = 80".
How it gets the domain name when called from shell is:
$domain = lc($ENV{REQUEST_METHOD});
You would think the domain would be the QUERY_STRING or more likely, in the ARGV vars, but it's not.
Here are the ENV vars (that matter) when called via http:
SERVER_NAME = whois.xxxxxxxxxxxxx.com
REQUEST_METHOD = GET
QUERY_STRING = domain=roughdraft.ws&submit=+Get+Whois+
SERVER_PORT = 80
REQUEST_URI = /index.cgi?domain=premierwebsitesolutions.ws&submit=+Get+Whois+
HTTP_HOST = whois.xxxxxxxxxxxxxx.com
Here are the ENV vars (that matter) when called via shell:
SERVER_NAME = whois.xxxxxxxxxxxxxx.com
REQUEST_METHOD = premierwebsitesolutions.ws
QUERY_STRING =
SERVER_PORT = 80
REQUEST_URI =
Notice the SERVER_PORT stays 80 either way, even though through shell it's set up on port 43.
Notice how via shell the REQUEST_METHOD is the domain being looked up.
I've done lots of searching and did find swhoisd (Simple Whois Daemon), but that's only for small databases. I also found the Daemon::Whois Perl module, but it uses a cdb database I know nothing about, it comes with no instructions, and it's a daemon, which I don't really need because the script works fine when called through Apache on port 43.
Does anyone know how this is supposed to be done?
Can I get the script to see that it was called via port 43?
Is it normal to use REQUEST_METHOD this way?
Is a whois server supposed to be running as a daemon?
Thanks for helping, or trying to.
Mike

WHOIS is not an HTTP-like protocol, so attempting to serve it through Apache on port 43 will not work correctly. Your ENV dump shows why: a whois client sends nothing but "premierwebsitesolutions.ws\r\n", and Apache parses that lone line as an HTTP request line, taking its first (only) word as the request method. That's why the domain lands in REQUEST_METHOD, and since the request carries no Host header, Apache presumably falls back to the canonical port from its config, which is why SERVER_PORT stays 80. You will need to write a separate daemon to serve WHOIS. If you don't want to use Daemon::Whois, you will probably at least want to use something like Net::Daemon to simplify things for you.
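The protocol itself is tiny (RFC 3912): the client connects, sends one query line terminated by CRLF, and the server writes its answer and closes the connection. As a rough sketch of what a standalone server involves, here is a minimal single-client loop using only the core IO::Socket::INET module; lookup_domain() is a hypothetical stand-in for your own database query:

#!/usr/bin/perl
# Minimal WHOIS daemon sketch (RFC 3912). Serves one client at a time;
# lookup_domain() is a placeholder for your own registrant-database query.
use strict;
use warnings;
use IO::Socket::INET;

my $server = IO::Socket::INET->new(
    LocalPort => 43,
    Proto     => 'tcp',
    Listen    => 5,
    ReuseAddr => 1,
) or die "Cannot listen on port 43: $!";

while (my $client = $server->accept) {
    my $query = <$client>;            # the client sends a single line
    if (defined $query) {
        $query =~ s/\r?\n\z//;        # strip the CRLF terminator
        print $client lookup_domain(lc $query);
    }
    close $client;                    # the server closes after answering
}

A production daemon would fork per connection (which is what Net::Daemon handles for you) so one slow client can't block everyone else, but the wire protocol really is just that one line each way.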

https://stackoverflow.com/a/933373/66519 states something could be set to detect CLI vs. web invocation. It applies to PHP in this case. Given the lack of answers here, maybe it helps you get to something useful. Sorry for the formatting, I am using the mobile SO app.
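For what it's worth, the closest Perl analogue of that PHP check is testing $ENV{GATEWAY_INTERFACE}, which a web server sets when it invokes a script as CGI and a login shell does not. A sketch, with the caveat that it won't distinguish anything in the setup described above, because there Apache invokes the script on both port 80 and port 43, so both paths look like CGI:

# Hypothetical CLI-vs-CGI test; only helps if the shell path really is a
# direct invocation rather than Apache answering on port 43.
if (defined $ENV{GATEWAY_INTERFACE}) {
    &http_process;      # invoked by the web server as CGI
}
else {
    &shell_process;     # invoked directly from a command line
}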

Related

Process x509 client certificates in Perl

I am working with Web::ID and have some questions.
From the FAQ for Web::ID:
How can I use WebID in Perl?
[...]
Otherwise, you need to use Web::ID directly. Assuming you've configured your web server to request a client certificate from the browser, and you've managed to get that client certificate into Perl in PEM format, then it's just:
my $webid = Web::ID->new(certificate => $pem);
my $uri = $webid->uri;
And you have the URI.
Anyway, I'm stuck at the "get that client certificate into Perl" part.
I can see the client certificate is being passed along to the script by examining the %ENV environment variable. But I am still unsure how to actually process it the way Web::ID does, like examining the SAN.
According to the documentation of mod_ssl you will find the PEM encoded client certificate in the environment variable SSL_CLIENT_CERT, so all you need is to call
my $webid = Web::ID->new(certificate => $ENV{SSL_CLIENT_CERT});
However, Apache does not set the SSL_CLIENT_CERT environment variable by default. This is for performance reasons - setting a whole bunch of environment variables before spawning your Perl script (via mod_perl, or CGI, or whatever) is wasteful if your Perl script doesn't use them, so it only sets a small set of environment variables by default. You need to configure Apache correctly to tell it you want ALL DA STUFFZ. In particular you want something like this in .htaccess, or your virtual host config, or server config file:
SSLOptions +StdEnvVars +ExportCertData
While you're at it, you also want to make sure Apache is configured to ask clients to present a certificate. For that you want something like:
SSLVerifyClient optional_no_ca
All this is kind of covered in the documentation for Web::ID but not especially thoroughly.
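Putting those pieces together, a minimal CGI-style sketch; the directives above are assumed to be in place, and the Web::ID calls are the ones from the FAQ quoted earlier:

#!/usr/bin/perl
# Sketch: take the PEM certificate mod_ssl exported and hand it to Web::ID.
use strict;
use warnings;
use Web::ID;

my $pem = $ENV{SSL_CLIENT_CERT}
    or die "mod_ssl did not export a client certificate\n";

my $webid = Web::ID->new(certificate => $pem);

# uri() returns the subjectAltName URI from the certificate (the WebID),
# which is the "examine the SAN" part of the question.
print "Content-Type: text/plain\n\n";
print $webid->uri, "\n";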

Bind address in Sinatra Application

I'm running a classic style application in Sinatra and I would like to obtain a URL which the application is bound to. For example, if I start it in a development environment I would expect to get: http://localhost:4567/ while in production environment this would point to: http://example.com/
I know it is possible to retrieve it from the request. However, I need it in configuration block. How to do it?
Use the bind and port settings:
set :bind, 'example.com'
set :port, 80
should work.
Taken from here; at the beginning of the page you can find how to use them in your app (just put them in front of your get handlers).

How to fix Rebol Cheyenne 404 with domain name and configuration file?

On Windows Server 2008 I created
reboltutorial.com [
    root-dir %/www/
    default [%index.html %index.rsp %index.php]
]
It returns a 404 page-not-found error. Cheyenne only works with the IP address (http://88.191.118.45:2011/ is OK; http://reboltutorial.com is OK too, but that's served by IIS 7).
How to fix this ?
Update: error log
Error in [conf-parser] : Can't access file www/ws-apps/ws-test-app.r
Error in [conf-parser] : Can't access file www/ws-apps/chat.r !
You have to make sure you have a directory named www in the directory you installed Cheyenne in (the default dir is %www/).
After that make sure the missing www/ws-apps/ws-test-app.r and www/ws-apps/chat.r files also exist.
First of all, HTTP 1.1 sends the hostname the user typed inside the request itself (the Host: header). That's how one IP can serve multiple domains (Apache calls this VirtualHosts), so browsing by IP will send a different hostname to whatever web server gets the request.
Thus it's not a great technical mystery for your machine to be set up in a way that it serves a different page for an IP address vs. a domain. But since you put "reboltutorial.com" in your Cheyenne config it seems that--if anything--that would be working while the IP address version would be failing.
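To make that concrete, here is roughly what the two requests look like on the wire; what the server sees differs only in the Host line:

GET / HTTP/1.1
Host: reboltutorial.com

GET / HTTP/1.1
Host: 88.191.118.45:2011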
I don't run Cheyenne, and you haven't offered up more details about your configuration. But since no one has answered I looked at the source tree to offer some advice on what you might try.
We know Cheyenne is getting the request and making the decision to hand back the 404, because of the format of the error. The Apache one looks different:
http://reboltutorial.com/show-me-apache-404/
http://88.191.118.45:2011/show-me-cheyenne-404/
So Cheyenne is getting the request. That much we know. The decision to serve up a 404 is made in send-response in the HTTPd.r file. It's a pretty simple test:
if all [file? out/content not exists? out/content][
    log/error ["File not found: " mold out/content]
    out/code: 404
    out/content: none
]
If that's the place your 404 is being generated, then there should be a "File not found:" in your log and a mention of what file that is. If not, something strange is going on. You can throw something in there (even a quit if you're suspicious of the printed output) just to make sure it's getting to the line.
(FYI: In the future when you're looking at other Cheyenne problems, there is a setting called "verbosity" which affects the output, and you can see in on-received in the HTTPd.r file that for verbosity > 0 it will log when it receives a request:
if verbose > 0 [
    log/info ["================== NEW REQUEST =================="]
    log/info ["Request Line=>" trim/tail to-string data]
]
If you bump up the verbosity level you might find an indication of the problem pretty quickly. If not, the code is fairly readable and you can put in your own trace points.)

What is the difference on a Wamp or Web Server for the definition of $_SERVER['DOCUMENT_ROOT'] and where does it get defined on both?

And the reason I ask is I have a bunch of live websites that use the $_SERVER['DOCUMENT_ROOT'] definition.
On the live websites, $_SERVER['DOCUMENT_ROOT'] works perfectly.
On wamp server locally, it shows $_SERVER['DOCUMENT_ROOT'] as C:/wamp/www even though I have virtual hosts set up. Everything on wamp works except for the $_SERVER['DOCUMENT_ROOT'] declaration.
I used a PHP script suggested as a fix: a prepend script, set via auto_prepend_file in php.ini as follows:
auto_prepend_file = c:/wamp/www/prepend_script.php
$basePath = dirname(__FILE__); // assuming this script is in c:/wamp/www
$projectPath = preg_replace('#('.$basePath.'/[^/]+)/.*#i', '\\1', $_SERVER['PHP_SELF']);
$_SERVER['DOCUMENT_ROOT'] = $projectPath;
What that did was change $_SERVER['DOCUMENT_ROOT'] from C:/wamp/www to /itsaboutwirelessnetworks/index.php.
When in actuality, for it to function properly, it should be C:/wamp/www/itsaboutwirelessnetworks, not the /itsaboutwirelessnetworks/index.php the prepend script produced.
So in order for myself and others to understand how this can affect them, I would appreciate an explanation of why it works on my live server and not the wamp server.
Hence the original question:
What is the difference on a Wamp or Live Web Server for the definition of $_SERVER['DOCUMENT_ROOT'] and where does it get defined on both?
The documentation says about DOCUMENT_ROOT: (emph mine)
"The document root directory under which the current script is executing, as defined in the server's configuration file."
So you should look at your config file and check whether there is something strange there, I'd say.
(I think that unless you've changed it, C:/wamp/www sounds like a valid value to assign to your document root. If it should be something else, that must be defined in the config of the virtual host, as sketched below.)
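For example, a virtual-host entry along these lines (in WAMP that's typically conf/extra/httpd-vhosts.conf; the ServerName here is a placeholder) is what makes DOCUMENT_ROOT come out as the per-site directory:

<VirtualHost *:80>
    # DOCUMENT_ROOT will report this DocumentRoot for requests to this host
    ServerName itsaboutwirelessnetworks.local
    DocumentRoot "C:/wamp/www/itsaboutwirelessnetworks"
</VirtualHost>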
This seems like a bug in Apache, as it's defined in the Apache httpd.conf file.
http://www.devcha.com/2010/02/solved-incorrect-value-for.html
https://issues.apache.org/bugzilla/show_bug.cgi?id=26052
Apache/PHP/MySQL websites are most often run on Linux servers, while you're developing on Windows; the large differences between the two could well account for the different bug behaviour.

Catchall Router on Exim does not work

I have set up a catch-all router on Exim (used as the last router):
catchall:
  driver = redirect
  domains = +local_domains
  data = ${lookup{*@$domain}lsearch{/etc/aliases}}
  retry_use_local_part
This works perfectly when sending emails locally. However, if I log in to my GMail account and send an email to whatever@mydomain.com, then I get an "Unrouteable Address".
Thank you for any hints to solve this issue.
In the system_aliases: section of the config file you already have a section which does the lookup in /etc/aliases.
Replace
data = ${lookup{$local_part}lsearch{/etc/aliases}}
with
data = ${lookup{$local_part}lsearch*@{/etc/aliases}}
and make sure you have a "*: catchall_username" entry in /etc/aliases.
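For example, an /etc/aliases along these lines (catchall_username being whatever existing local user should receive the stray mail); the lsearch*@ search type falls back to the * entry when no exact alias matches:

# /etc/aliases
postmaster: root
*: catchall_username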
This works great for a single domain mail server which is already using /etc/aliases
For this router to work, make sure that:
- mydomain.com is in local_domains
- there is an entry for *@mydomain.com in /etc/aliases
- the MX record for mydomain.com points to the server where you've configured this
This is old as heck, but I didn't see a good answer posted and someone else might want to know the answer.
This post is geared towards Debian in single-configuration-file mode, but it should work on any Linux Exim4 install. For the purpose of explaining things we'll use test@example.com, which is configured with the hostname mail.example.com. The system will have a real user called test, and we want to create an alias for test called alias. So the end result will be all email sent to alias@example.com being forwarded to test@example.com, without having to create the user alias on the system.
First we need to create a place to store all of the alias files:
mkdir /etc/exim4/aliases.d
vim /etc/exim4/aliases.d/mail.example.com
The contents of the alias file for mail.example.com:
alias: test
vim /etc/exim4/exim4.conf.template
Now look for the section system_aliases. Here you’ll see data = ${lookup{$local_part}lsearch{/etc/aliases}} or something similar. Change that to
data = ${lookup{$local_part}lsearch{/etc/exim4/aliases.d/$domain}}
Save the file and restart exim. The alias should now work. To add support for other domains just add more alias files in the aliases.d directory with the correct hostname.
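To check an alias resolves without actually sending mail, Exim's address-testing mode is handy:

exim4 -bt alias@example.com

That should show the address being redirected to test@example.com.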
I copied and pasted this from my blog:
0xeb.info