Percona Server for MongoDB with LDAP authentication is not working for more than 2 concurrent threads, even though the connection pool is configured for more than 2.
MongoDB Configuration:
setParameter:
  saslauthdPath: /app/mongo/mongoldap/var/run/saslauthd/mux
  authenticationMechanisms: PLAIN,SCRAM-SHA-1
  ldapConnectionPoolSizePerHost: 10
  ldapUseConnectionPool: true
  ldapDebug: true
SASL Configuration:
ldap_servers: ldap://ldap.forumsys.com
ldap_mech: PLAIN
ldap_search_base: dc=example,dc=com
ldap_filter: (cn=%u)
ldap_bind_dn: cn=read-only-admin,dc=example,dc=com
ldap_password: password
Test Script (PHP):
<?php
use MongoDB\Driver\Manager as MongoDB;
use MongoDB\Driver\Query as Query;
use MongoDB\Driver\BulkWrite as BulkWrite;

try {
    for ($i = 0; $i < 3; $i++) {
        $handlerName = "handle".$i;
        // authSource must be the literal string $external, so it is
        // single-quoted to prevent PHP variable interpolation.
        $$handlerName = new MongoDB("mongodb://xx.xx.xx.xx", array(
            "authSource" => '$external',
            "authMechanism" => "PLAIN",
            "username" => "cn=read-only-admin,dc=example,dc=com",
            "password" => "password",
            "tls" => true,
            "tlsCertificateKeyFile" => "/xyzabc/dbs/mongoclient.pem",
            "tlsCAFile" => "/xyzabc/dbs/mongoca.pem",
            "tlsAllowInvalidCertificates" => true
        ));
        $filters = array();
        $options = array();
        $command = new Query($filters, $options);
        $query = "xyzabc.customerdetails";
        $result = $$handlerName->executeQuery($query, $command);
        $resultAsJson = $result->toArray();
        $resultAsArray = json_decode(json_encode($resultAsJson), true);
        print_r(count($resultAsArray));
        echo "\n";
        sleep(5);
    }
    for ($i = 0; $i < 3; $i++) {
        $handlerName = "handle".$i;
        $query = "xyzabc.client";
        $result = $$handlerName->executeQuery($query, $command);
        $resultAsJson = $result->toArray();
        $resultAsArray = json_decode(json_encode($resultAsJson), true);
        print_r(count($resultAsArray));
        echo "\n";
    }
    echo "Success";
} catch (Exception $e) {
    print_r($e);
    echo "Failed";
}
?>
Test Script (Shell script for nohup):
nohup php test.php > output1.log 2>&1 &
nohup php test.php > output2.log 2>&1 &
nohup php test.php > output3.log 2>&1 &
nohup php test.php > output4.log 2>&1 &
nohup php test.php > output5.log 2>&1 &
Test Results:
When the script is executed in a single thread (same process ID), there is no error; it works for any number of connections.
When the same script is executed via nohup (multi-threaded, i.e. multiple process IDs), only the first two threads work; the 3rd and above fail.
Error Message (MongoDB log):
LDAPLibraryError: Failed to authenticate 'cn=read-only-admin,dc=example,dc=com' using simple bind; LDAP error: Can't contact LDAP server
Percona MongoDB Version: 4.4.2-4
When the test PHP script is executed synchronously, there is no error regardless of the number of connections. I assume this is because the process ID is the same for all the DB connections, so they share the same connection pool.
On the other hand, when it is executed concurrently (with nohup), only the first 2 connections work. From this I assume only the first 2 connection pools are working, and requests from the 3rd connection pool onward fail.
Since ldapConnectionPoolSizePerHost is set to 10, I don't understand why this is not working as expected.
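One way to narrow down whether the limit is in mongod's LDAP pool or on the saslauthd/LDAP side is to bypass mongod and bind against saslauthd directly with several concurrent requests. This is only a sketch: it assumes cyrus-sasl's testsaslauthd utility is installed, and it reuses the mux path and test credentials from the configuration above.
#!/bin/bash
# Fire 5 concurrent binds at saslauthd's unix socket. If failures also start
# after 2 concurrent binds here, mongod's pool is not the bottleneck.
for i in $(seq 1 5); do
  testsaslauthd -u read-only-admin -p password \
    -f /app/mongo/mongoldap/var/run/saslauthd/mux &
done
wait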
Thanks in advance!
Resource I have used: https://metasploit.help.rapid7.com/v1/docs/rpc-api
First I started the msf RPC server:
msfrpcd -U msf -P test -f -S -a 127.0.0.1
[*] MSGRPC starting on 127.0.0.1:55553 (NO SSL):Msg...
[*] MSGRPC ready at 2019-01-11 00:56:29 +0900.
After that the server is up and reachable via a browser at http://127.0.0.1:55553
The script I used with XML::RPC to get data:
use strict;
use warnings;
use XML::RPC;

my $fm = XML::RPC->new('http://127.0.0.1:55553/api/');
my $session = $fm->call( 'auth.login', { username => 'msf', password => 'test' } );
my $x = $fm->call('group.command');    # api
The error when I run the script:
no data at /usr/local/share/perl/5.26.1/XML/RPC.pm line 288.
It seems that the API being used is not working, or perhaps something else is wrong.
Do you have a better way to get data from the msf RPC server?
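For what it's worth, the Metasploit RPC service speaks MessagePack over HTTP rather than XML-RPC, which would explain XML::RPC's "no data" error: the endpoint never returns an XML body. As a quick probe, here is a sketch that hand-encodes the msgpack array ["auth.login", "msf", "test"] (the same credentials as above) and posts it with curl; any response comes back as binary msgpack:
# \x93 = 3-element array; \xaa/\xa3/\xa4 = strings of length 10/3/4
# (bash's builtin printf understands \xHH escapes)
printf '\x93\xaaauth.login\xa3msf\xa4test' |
  curl -s -X POST http://127.0.0.1:55553/api/ \
       -H 'Content-Type: binary/message-pack' \
       --data-binary @- | xxd | head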
I'm running an application which builds and sends ICMP ECHO requests to a few different IP addresses. The application is written in Crystal. When attempting to open a socket from within the Crystal docker container, Crystal raises an exception: Permission Denied.
From within the container, I have no problem running ping 8.8.8.8.
Running the application on macOS, I have no problem.
After reading the https://docs.docker.com/engine/security/apparmor/ and https://docs.docker.com/engine/security/seccomp/ pages on AppArmor and seccomp, I was sure I'd found the solution, but the problem remains unresolved, even when running with docker run --rm --security-opt seccomp=unconfined --security-opt apparmor=unconfined socket_permission
Update/edit: after digging into capabilities(7), I added the following line to my Dockerfile: RUN setcap cap_net_raw+ep bin/ping, trying to let the socket be opened, but nothing changed.
Thanks!
Relevant Crystal socket code; the full working code sample is below:
# send request
address = Socket::IPAddress.new host, 0
socket = IPSocket.new Socket::Family::INET, Socket::Type::DGRAM, Socket::Protocol::ICMP
socket.send slice, to: address
Dockerfile:
FROM crystallang/crystal:0.23.1
WORKDIR /opt
COPY src/ping.cr src/
RUN mkdir bin
RUN crystal -v
RUN crystal build -o bin/ping src/ping.cr
ENTRYPOINT ["/bin/sh","-c"]
CMD ["/opt/bin/ping"]
Running the code, first natively, then via Docker:
#!/bin/bash
crystal run src/ping.cr
docker build -t socket_permission .
docker run --rm --security-opt seccomp=unconfined --security-opt apparmor=unconfined socket_permission
And finally, the roughly 50-line Crystal script which fails to open a socket in Docker:
require "socket"
TYPE = 8_u16
IP_HEADER_SIZE_8 = 20
PACKET_LENGTH_8 = 16
PACKET_LENGTH_16 = 8
MESSAGE = " ICMP"
def ping
sequence = 0_u16
sender_id = 0_u16
host = "8.8.8.8"
# initialize packet with MESSAGE
packet = Array(UInt16).new PACKET_LENGTH_16 do |i|
MESSAGE[ i % MESSAGE.size ].ord.to_u16
end
# build out ICMP header
packet[0] = (TYPE.to_u16 << 8)
packet[1] = 0_u16
packet[2] = sender_id
packet[3] = sequence
# calculate checksum
checksum = 0_u32
packet.each do |byte|
checksum += byte
end
checksum += checksum >> 16
checksum = checksum ^ 0xffff_ffff_u32
packet[1] = checksum.to_u16
# convert packet to 8 bit words
slice = Bytes.new(PACKET_LENGTH_8)
eight_bit_packet = packet.map do |word|
[(word >> 8), (word & 0xff)]
end.flatten.map(&.to_u8)
eight_bit_packet.each_with_index do |chr, i|
slice[i] = chr
end
# send request
address = Socket::IPAddress.new host, 0
socket = IPSocket.new Socket::Family::INET, Socket::Type::DGRAM, Socket::Protocol::ICMP
socket.send slice, to: address
# receive response
buffer = Bytes.new(PACKET_LENGTH_8 + IP_HEADER_SIZE_8)
count, address = socket.receive buffer
length = buffer.size
icmp_data = buffer[IP_HEADER_SIZE_8, length-IP_HEADER_SIZE_8]
end
ping
It turns out the answer is that Linux (and by extension Docker) does not grant the same permissions that macOS does for DGRAM sockets. Changing the socket declaration to socket = IPSocket.new Socket::Family::INET, Socket::Type::RAW, Socket::Protocol::ICMP allows the socket to connect under Docker.
A little more is still required to run the program in a non-root context. Because raw sockets are restricted to root, the binary must also be granted the capability for raw-socket access, CAP_NET_RAW. (In Docker, this isn't necessary, since the container process runs as root.) I was able to get the program to run outside of a super-user context by running sudo setcap cap_net_raw+ep bin/ping. This is a decent primer on capabilities and the setcap command.
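To make the capability grant concrete, here is a short sketch (the binary path is taken from the build above; the exact getcap output format varies slightly between libcap versions):
# grant the capability, verify it, then run without sudo
sudo setcap cap_net_raw+ep bin/ping
getcap bin/ping      # expect something like: bin/ping = cap_net_raw+ep
./bin/ping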
macOS doesn't use the same system of permissions, so setcap is just an unrecognized command. As a result, to get the above code to compile and run successfully on macOS without a super-user context, I changed the socket creation code to:
socket_type = Socket::Type::RAW
{% if flag?(:darwin) %}
  socket_type = Socket::Type::DGRAM
{% end %}
socket = IPSocket.new Socket::Family::INET, socket_type, Socket::Protocol::ICMP
Applying the CAP_NET_RAW capability for use on Linux happens elsewhere in the build process if needed.
With those changes, I'm not seeing any requirement to change seccomp or AppArmor from the defaults shipped with Docker in order to run the program.
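As a footnote on why the original DGRAM socket fails under Linux at all: unprivileged ICMP datagram sockets are only allowed for processes whose group falls inside the net.ipv4.ping_group_range sysctl, and its default value of "1 0" permits no group. So a hedged alternative to switching to RAW sockets (the wide group range below is an assumption, not something from the question) is to widen that range for the container:
# show the current range; the default "1 0" disables DGRAM ICMP entirely
sysctl net.ipv4.ping_group_range
# allow all groups inside the container so the original DGRAM socket
# works without CAP_NET_RAW
docker run --rm --sysctl net.ipv4.ping_group_range="0 2147483647" socket_permission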
I am trying to get logstash 2.3.3 websocket input working.
Logstash: https://download.elastic.co/logstash/logstash/logstash-2.3.3.tar.gz
Websocket Input Plugin for Logstash: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-websocket.html
Websocket server: https://github.com/joewalnes/websocketd/releases/download/v0.2.11/websocketd-0.2.11-linux_amd64.zip
Websocket Client: Chrome Plugin "Simple Web Socket Client"
I am aware of a bug filed last year against logstash 1.5.0 and the websocket input plugin: https://github.com/logstash-plugins/logstash-input-websocket/issues/3 I have also seen those same error messages, although I can't reproduce them anymore. The following is my current procedure and result. I am hoping that bug has since been fixed and I just can't find the correct config.
First I installed the plugin and confirmed it is listed as installed.
/app/bin/logstash-plugin list | grep "websocket"
Next, I checked that logstash was working with the following config:
input {
  stdin { }
}
output {
  file {
    path => "/app/logstash-2.3.3/logstash-log.txt"
  }
}
Logstash worked.
/app/logstash-2.3.3/bin/logstash agent --config /app/logstash-2.3.3/logstash.conf
Hello World
The file logstash-log.txt contained:
{"message":"Hello World","#version":"1","#timestamp":"2016-07-05T20:04:14.850Z","host":"server-name.domain.com"}
Next I opened port 9300
I wrote a simple bash script to return some numbers:
#!/bin/bash
case $1 in
  -t|--to)
    COUNTTO=$2
    shift
    ;;
esac
shift

printf 'Count to %i\n' $COUNTTO
for COUNT in $(seq 1 $COUNTTO); do
  echo $COUNT
  sleep 0.1
done
I started up websocketd pointing to my bash script
/app/websocketd --port=9300 /app/count.sh --to 7
I opened Simple Web Socket Client in Chrome and connected
ws://server-name.domain.com:9300
Success! It returned the following.
Count to 7
1
2
3
4
5
6
7
At this point I know websocketd works and logstash works. Now is when the trouble starts.
Logstash websocket input configuration file:
input {
  websocket {
    codec => "plain"
    url => "ws://127.0.0.1:9300/"
  }
}
output {
  file {
    path => "/app/logstash-2.3.3/logstash-log.txt"
  }
}
Run configtest
/app/logstash-2.3.3/bin/logstash agent --config /app/logstash-2.3.3/logstash.conf --configtest
Receive "Configuration OK"
Start up websocketd
/app/websocketd --port=9300 /app/logstash-2.3.3/bin/logstash agent --config /app/logstash-2.3.3/logstash.conf
Back in Simple Web Socket Client, I connect to ws://server-name.domain.com:9300. I see a message pop up that I started a session.
Tue, 05 Jul 2016 20:07:13 -0400 | ACCESS | session | url:'http://server-name.domain.com:9300/' id:'1467732248361139010' remote:'192.168.0.1' command:'/app/logstash-2.3.3/bin/logstash' origin:'chrome-extension://pfdhoblngbopfeibdeiidpjgfnlcodoo' | CONNECT
I try to send "hello world". Nothing apparent happens on the server. After about 15 seconds I see a disconnect message in my console window. logstash-log.txt is never created.
Any ideas for what to try? Thank you!
UPDATE 1:
I tried putting the following in a bash script called "launch_logstash.sh":
#!/bin/bash
exec /app/logstash-2.3.3/bin/logstash agent --config /app/logstash-2.3.3/logstash.conf
Then I started websocketd like so:
/app/websocketd --port=9300 /app/logstash-2.3.3/bin/launch_logstash.sh
Same result; no success.
Upon reading the websocketd documentation more closely, I saw that it sends the data received on the socket to the program's stdin. I was trying to listen on a socket in my logstash config, but the data actually arrives on the app's stdin. I changed my config to this:
input {
  stdin { }
}
output {
  file {
    path => "/app/logstash-2.3.3/logstash-log.txt"
  }
}
Then launched websocketd like this:
/app/websocketd --port=9300 /app/logstash-2.3.3/bin/logstash agent --config /app/logstash-2.3.3/logstash.conf
So in short: until logstash-input-websocket implements a server option, stdin{} and stdout{} are the input and output to use when websocketd is the web server.
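If you want to verify the websocketd stdin/stdout relay independently of logstash first, a minimal check (using cat as the backend, with the port from above) is the following; anything the Chrome client sends should then be echoed straight back:
# every websocket message goes to cat's stdin; cat's stdout goes back out
/app/websocketd --port=9300 cat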
I'm using the AWS CodeDeploy platform for automatic deployment of my REST services. The deployment script has a lot of steps that copy/configure/do other stuff. If any of the steps fails, the entire deployment fails for that server and I get a clear notification about it. So the last step I need is a health check: a validation that the configuration was appropriate and everything is up and running.
Of course, I can make a couple of curl POSTs, parse their results, and use some extracted values within more curl POSTs to get some sanity coverage, but all this parsing sounds like reinventing the wheel.
Is there any convenient testing framework/tool that can be easily "packed" and invoked in scripts without installing huge testing suites on each of my production servers?
Given that you're doing REST, you can probably rely on the status codes instead of parsing the body: if you get a code that's not in the 2xx range, then something is wrong.
If you want a more elaborate check, you could add a special endpoint that does some DB queries and maybe sends some harmless queries to its integrations.
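As a minimal sketch of the status-code approach (the /health path and the port here are hypothetical, not part of your setup):
# fail the deployment step unless the app answers with a 2xx code
status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/health)
if [ "$status" -lt 200 ] || [ "$status" -ge 300 ]; then
  echo "Health check failed with HTTP $status"
  exit 1
fi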
And the most complicated option would be to implement smart post-deployment steps that follow some workflow procedure. You'd need either elaborate bash scripting or more advanced programming languages and frameworks (like REST Assured in Java or RESTClient in Groovy).
Don't forget to wrap the health check in a loop with a timeout, since your first request may be sent too early, while the app is still being deployed.
Here is an example of a simple bash script that checks the status and the version of the app:
#!/usr/bin/env bash
# Helps to determine whether an application deployment was successful by
# checking a connection to an HTTP resource. If the page is loaded and the
# response is 200 or 201, the script finishes successfully. In case of
# connection refused or Service Unavailable (503), the script tries to
# connect again within the timeout period. Otherwise the script fails.
# Takes a required parameter url to the application and optional parameters
# timeout (180 by default) and artifact version. If the artifact version
# parameter is given and the response is 200 or 201, the script also checks
# that the deployed version (fetched from $url) equals the passed version.
# If not, the script fails. Example of usage in a bash script:
# sh post_deployment_test.sh http://blah.com/version 100 1.0.102-20160404.101644-5
# result=$?
#
# If $result equals 0, then the connection was successfully established;
# otherwise, it was not.
url=$1
timeout=$2
version=$3

if [ -z "$timeout" ]; then
  timeout=180
fi

counter=0
delay=3

while [ $counter -le $timeout ]; do
  command="curl -L -s -o /dev/null -w %{http_code} $url"
  echo "Executing: $command"
  status_code=$($command)
  curl_code=$?
  # Curl error code CURLE_COULDNT_CONNECT (7) means failure to connect to
  # host or proxy. It occurs, in particular, when the connection is refused.
  if [ $curl_code -ne 0 ] && [ $curl_code -ne 7 ]; then
    echo "Connection is not established"
    exit 1
  fi
  if [ $curl_code = 7 ] || [ $status_code = 503 ]; then
    echo "Connection has not been established yet, because connection refused or service unavailable. Trying to connect again"
    sleep $delay
    let counter=$counter+$delay
    continue
  elif [ $status_code = 200 ] || [ $status_code = 201 ]; then
    if [ -z "$version" ]; then
      echo "Connection is successfully established"
      exit 0
    else
      grep_result=`curl -L -s $url | grep $version`
      if [ -z "$grep_result" ]; then
        echo `curl -L -s $url`
        echo "Deployed version doesn't equal to expected"
        exit 1
      else
        echo "Connection is successfully established"
        exit 0
      fi
    fi
  else
    echo "Connection is not established"
    exit 1
  fi
done

echo "Connection is not established"
exit 1
I've found something nice, exactly what I was looking for: jasmine-node as a test runtime plus frisby.js as a validation scripting tool.
It's both really portable (I just run npm install during the deployment) and really convenient in terms of scripting, e.g. (official example from frisby):
var frisby = require('frisby');

frisby.create('Get Brightbit Twitter feed') // the create() call was missing; the chain needs a spec to hang off
  .get('https://api.twitter.com/1/statuses/user_timeline.json?screen_name=brightbit')
  .expectStatus(200)
  .expectHeaderContains('content-type', 'application/json')
  .expectJSON('0', {
    place: function(val) { expect(val).toMatchOrBeNull("Oklahoma City, OK"); }, // Custom matcher callback
    user: {
      verified: false,
      location: "Oklahoma City, OK",
      url: "http://brightb.it"
    }
  })
  .expectJSONTypes('0', {
    id_str: String,
    retweeted: Boolean,
    in_reply_to_screen_name: function(val) { expect(val).toBeTypeOrNull(String); }, // Custom matcher callback
    user: {
      verified: Boolean,
      location: String,
      url: String
    }
  })
  .toss();
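To run such a spec during a deployment, something like the following works (a sketch: the spec filename is illustrative, and jasmine-node discovers files whose names end in "spec.js"):
# install locally on the target box, then run the spec directory
npm install frisby jasmine-node
mkdir -p spec && cp healthcheck_spec.js spec/
./node_modules/.bin/jasmine-node spec/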
I am using the Net::OpenSSH module to connect to hosts with the (async => 1) option.
How is it possible to trap connection errors for the hosts that fail to connect? I do not want the error to appear in the terminal; instead it should be stored in a data structure, since I want to format all the data as a CGI script in the end. When I run the script, the hosts that have a connection problem throw errors in the terminal, yet the code carries on and tries to run commands on the disconnected hosts. I want to isolate the disconnected hosts.
my (%ssh, %ls);    # Code copied from CPAN Net::OpenSSH
my @hosts = qw(host1 host2 host3 host4);

# multiple connections are established in parallel:
for my $host (@hosts) {
    $ssh{$host} = Net::OpenSSH->new($host, async => 1);
    $ssh{$host}->error and die "no remote connection";    # <--- doesn't work here! :-(
}

# then to run some command in all the hosts (sequentially):
for my $host (@hosts) {
    $ssh{$host}->system('ls /');
}
The $ssh{$host}->error and die "no remote connection" check doesn't work there.
Any help will be appreciated.
Thanks
You run async connections, so the program continues working and doesn't wait until the connection is established.
After new with the async option you try to check the error, but it is not defined yet, because the connection is still in progress and there is no error information.
As I understand it, you need to wait after the first loop until the connection process has finished.
Try to use ->wait_for_master(0):
If a false value is given, it will finalize the connection process and wait until the multiplexing socket is available.
It returns a true value after the connection has been successfully established. False is returned if the connection process fails or if it has not yet completed (then, the "error" method can be used to distinguish between both cases).
for my $host (@hosts) {
    $ssh{$host} = Net::OpenSSH->new($host, async => 1);
}

for my $host (@hosts) {
    unless ($ssh{$host}->wait_for_master(0)) {
        # check $ssh{$host}->error here and, for example, delete $ssh{$host}
    }
}

# Do work here
I haven't tested this code.
PS: Sorry for my English. I hope it helps.