saslauthd 0: No authentication failed - mongodb

Here I have attached my saslauthd conf file. And when I run the command
testsaslauthd -u mongoldap -p coolcomp#123 -f /var/run/saslauthd/mux
the error "0: No authentication failed" occurs.
Please let me know if there is any solution to resolve this. Thanks in advance.
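For comparison, a minimal saslauthd LDAP setup (the server URL, base DN and bind credentials below are placeholders, not values from the original post) usually involves two files:
# /etc/sysconfig/saslauthd (RHEL/CentOS): select the LDAP mechanism
MECH=ldap

# /etc/saslauthd.conf -- placeholder values, adjust to your directory
ldap_servers: ldap://ldap.example.com
ldap_search_base: dc=example,dc=com
ldap_filter: (uid=%u)
# bind credentials, only needed if anonymous search is not allowed
ldap_bind_dn: cn=admin,dc=example,dc=com
ldap_bind_pw: secret
Restart saslauthd after editing these files, and make sure the socket path passed to testsaslauthd with -f matches the daemon's mux socket.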

Related

How to configure job execution for sudo users with the NOPASSWD option?

I am trying to run a command as another user remotely from Rundeck jobs.
Rundeck provides a sudo login/password mechanism for privilege escalation.
On my remote server the sudoers file has the NOPASSWD option. How can I configure Rundeck jobs in this case?
My node configuration:
<project>
  <node name="testServer"
        type="Node"
        description="testNode"
        hostname="IP_ADDRESS"
        username="${option.Login}"
        ssh-authentication="password"
        sudo-command-enabled="true"
        ssh-password-option="option.Password"
        />
</project>
Example command in a job to become another user:
sudo /bin/su suuser -
Errors after the attempted execution:
Remote command failed with exit status -1
08:57:13 Sudo execution password response failed: Failed waiting for input prompt: Expected input was not seen in 5000 milliseconds
08:57:13 Failed: NonZeroResultCode: Remote command failed with exit status -1
I added the following three attributes to my node and it appears to work:
sudo-command-enabled="true"
sudo-prompt-pattern=""
sudo-command-pattern="^sudo.*"
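Combined with the node definition from the question, the resulting resources entry would look like this (IP_ADDRESS and the option names are the same placeholders used above):
<project>
  <node name="testServer"
        type="Node"
        description="testNode"
        hostname="IP_ADDRESS"
        username="${option.Login}"
        ssh-authentication="password"
        ssh-password-option="option.Password"
        sudo-command-enabled="true"
        sudo-prompt-pattern=""
        sudo-command-pattern="^sudo.*"
        />
</project>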
You have to set the attributes below in the Edit Project Configuration File:
project.ssh-authentication=password
project.ssh-keypath=/home/rundeck/.ssh/id_rsa
project.ssh-password-storage-path=keys/xyz.password
project.sudo-command-enabled=true
project.sudo-password-option=option.jobPassword
project.sudo-password-storage-path=keys/xyz.password
project.sudo-prompt-pattern=^Password\:.*
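For project.sudo-password-option=option.jobPassword to take effect, the job itself needs a secure option named jobPassword. A rough sketch of that option in a YAML job definition, reusing the keys/xyz.password storage path from above (field names may vary slightly between Rundeck versions):
options:
- name: jobPassword
  secure: true
  valueExposed: false
  storagePath: keys/xyz.password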

Vagrant setup MongoDB errno 104

I'm getting the error below. Any help would be great, thanks in advance.
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.
SSL read: error:00000000:lib(0):func(0):reason(0), errno 104
I've changed the permissions:
sudo chown username:username -R ~/.vagrant.d
and also removed the temporary box data:
rm -rf ~/.vagrant.d/tmp/box*
Download the box using the --insecure flag, something like:
vagrant box add ubuntu/trusty64 --insecure
and replace ubuntu/trusty64 with the name of the box you are downloading.
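Alternatively, if the box is pulled in through a Vagrantfile, a similar workaround is the box_download_insecure setting; a minimal sketch using the box name from the example above:
# Vagrantfile -- skip SSL verification when downloading the box
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.box_download_insecure = true
end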

Hawq init failed -- "postgres" is needed by initdb

After building incubator-hawq on CentOS 7.1, I tried to init it, but the error below occurred:
20160516:18:10:43:002036 hawqinit.sh:host-172-16-0-105:hawqadmin-[INFO]:-Loading hawq_toolkit...
ALTER ROLE
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-20160516:18:10:43:002036 hawqinit.sh:host-172-16-0-105:hawqadmin-[INFO]:-Loading hawq_toolkit...
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-Master init successfully
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-Init segments in list: ['hawq-master']
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[DEBUG]:-Start to init segment on node 'hawq-master'
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-Total segment number is: 1
fgets failure: Success
The program "postgres" is needed by initdb but was either not found in the same directory as "/usr/hawq/bin/initdb" or failed unexpectedly.
Check your installation; "postgres -V" may have more information.
20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Postgres initdb failed
20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Segment init failed on host-172-16-0-105
20160516:18:10:45:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Postgres initdb failed
20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Segment init failed on host-172-16-0-105
20160516:18:10:45:001766 hawq_init:host-172-16-0-105:hawqadmin-[ERROR]:-HAWQ init failed on hawq-master
20160516:18:10:46:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-0 of 1 segments init successfully
20160516:18:10:46:001766 hawq_init:host-172-16-0-105:hawqadmin-[ERROR]:-Segments init failed, exit
When I run the command, the output is:
[hawqadmin@host-172-16-0-105 hawqAdminLogs]$ postgres -V
postgres (HAWQ) 8.2.15
Any advice? Thanks!
If "postgres -V" works, that means the postgres binary is good.
Before you do "hawq init cluster", please make sure:
1) $GPHOME in greenplum_path.sh is correctly set to the directory of the HAWQ binaries, i.e., /usr/hawq in your case
2) source $GPHOME/greenplum_path.sh
3) check that the initdb and postgres binaries are in $GPHOME/bin
From the error you pasted above, there are two possible causes:
(1) The postgres binary being called is not /usr/hawq/bin/postgres. You can use which postgres to check the path.
(2) The dependent libraries of postgres may be wrong. You can use ldd on Linux or otool on macOS to print all dependent library paths and check them.
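For example, checks (1) and (2) can be run like this (paths taken from the question):
# confirm which postgres binary is found first in PATH
which postgres
# compare with the one sitting next to initdb
/usr/hawq/bin/postgres -V
# list the shared libraries postgres is linked against (Linux)
ldd /usr/hawq/bin/postgres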
Moreover, if any error occurs when initializing HAWQ, check the logs in ~/hawqAdminLogs/; you may find the specific error message there.
Hope this helps you find the root cause.
Recently I faced the same error while initializing a cluster.
postgres -V showed the correct version, which postgres showed /usr/local/hawq/bin/postgres, and the PATH was already set, but I still got the above error.
It was finally resolved by setting LD_LIBRARY_PATH to /usr/local/hawq/lib/ and sourcing it via the .bashrc file.
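A sketch of the .bashrc addition described in that answer (assuming the /usr/local/hawq install location mentioned above):
# ~/.bashrc -- make the HAWQ shared libraries visible to postgres/initdb
export LD_LIBRARY_PATH=/usr/local/hawq/lib:$LD_LIBRARY_PATH
Run source ~/.bashrc (or open a new shell) before retrying hawq init.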
Looks like you might have installed the HAWQ binaries in a different directory. Please check the following:
1. Make sure all the right PATH entries are set.
2. Check that the HAWQ initdb binary is in the /usr/hawq/bin/ directory.
3. Make sure you have successfully compiled HAWQ and installed it.
4. Check that postgres is in the same directory as initdb.
5. If there is more than one postgres on your machine, make sure the path of the postgres that sits in the same directory as initdb comes first in your PATH.

forgerock openam ssoadm STS configuration error while running create-sub-cfg

I am getting the following exception while running ssoadm's create-sub-cfg on ForgeRock OpenAM 13. I would appreciate any leads or hints to resolve this. Thanks.
Command:
create-sub-cfg --servicename RestSecurityTokenService --subconfigname "test" --realm myrealm --datafile mydir1/my_realm_sts_attrs.properties
Exception:
Executing class, com.sun.identity.cli.schema.AddSubConfiguration.
com.sun.identity.cli.CLIException: Message:Unable to add subConfig test
at com.sun.identity.cli.schema.AddSubConfiguration.addSubConfigToRealm(AddSubConfiguration.java:150)
at com.sun.identity.cli.schema.AddSubConfiguration.handleRequest(AddSubConfiguration.java:103)
at com.sun.identity.cli.SubCommand.execute(SubCommand.java:296)
at com.sun.identity.cli.CLIRequest.process(CLIRequest.java:217)
at com.sun.identity.cli.CLIRequest.process(CLIRequest.java:139)
at com.sun.identity.cli.CommandManager.serviceRequestQueue(CommandManager.java:576)
at com.sun.identity.cli.CommandManager.<init>(CommandManager.java:173)
at com.sun.identity.cli.CommandManager.main(CommandManager.java:150)
Caused by: Message:Unable to add subConfig test
at com.sun.identity.sm.ServiceConfig.addSubConfig(ServiceConfig.java:343)
at com.sun.identity.cli.schema.AddSubConfiguration.addSubConfig(AddSubConfiguration.java:228)
at com.sun.identity.cli.schema.AddSubConfiguration.addSubConfigToRealm(AddSubConfiguration.java:131)
... 7 more
Unable to add subConfig test
Command process exited with value 127
You might want to have a look at OPENAM-8006
You probably need to replace:
--subconfigname "test"
with
--subconfigid "test"
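With that substitution, the command from the question would read (all other arguments unchanged):
create-sub-cfg --servicename RestSecurityTokenService --subconfigid "test" --realm myrealm --datafile mydir1/my_realm_sts_attrs.properties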

Auth GET failed: 500 Internal Server Error

I have a problem with Swift. When I execute
swift -V 2.0 -A http://xxx.xxx.x.xx:5000/v2.0/ -U cookbook:demo -K openstack stat
the output is:
Auth GET failed: http://xxx.xxx.x.xx:5000/v2.0/tokens 500 Internal Server Error
Any solution for me? :)
I hit this error while executing 'swift list':
Error: Account GET failed ... 503 Internal Server Error (first 60 chars of response)...
On the Swift storage node, I checked the log '/var/log/swift/account-server.log' and found this error message: [Errno 13] Permission denied '/srv/node/sdb1/accounts'
According to that message, the root cause is that, on the Swift storage node, the swift user doesn't have permission on the directory '/srv/node/'. Grant permission with: chown -R swift:swift /srv/node
And the problem is solved. Hope this is helpful.
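Put together, the checks and the fix described above look roughly like this (the device name sdb1 comes from the log message; the chown runs on the storage node):
# on the storage node: inspect the account-server log and the mount ownership
sudo tail /var/log/swift/account-server.log
ls -ld /srv/node/sdb1
# give the swift user ownership of the device directories
sudo chown -R swift:swift /srv/node
# then retry the client command from the question
swift -V 2.0 -A http://xxx.xxx.x.xx:5000/v2.0/ -U cookbook:demo -K openstack stat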