snmpget : Unknown user name - ubuntu-16.04

I am trying to install net-snmp from scratch to get SNMPv3 working on my computer.
I did install net-snmp and create the user, but when I run snmpget it rejects me with snmpget: Unknown user name.
To install net-snmp I followed the official guide.
I also installed the packages libperl-dev, snmp-mibs-downloader and snmp using sudo apt-get install.
Here is my /usr/local/share/snmp/snmpd.conf configuration, where you can find the relevant line rouser neutg:
###############################################################################
#
# EXAMPLE.conf:
# An example configuration file for configuring the Net-SNMP agent ('snmpd')
# See the 'snmpd.conf(5)' man page for details
#
# Some entries are deliberately commented out, and will need to be explicitly activated
#
###############################################################################
#
# AGENT BEHAVIOUR
#
# Listen for connections from the local system only
# agentAddress udp:127.0.0.1:161
# Listen for connections on all interfaces (both IPv4 *and* IPv6)
agentAddress udp:161,udp6:[::1]:161
###############################################################################
#
# SNMPv3 AUTHENTICATION
#
# Note that these particular settings don't actually belong here.
# They should be copied to the file /var/lib/snmp/snmpd.conf
# and the passwords changed, before being uncommented in that file *only*.
# Then restart the agent
# createUser authOnlyUser MD5 "remember to change this password"
# createUser authPrivUser SHA "remember to change this one too" DES
# createUser internalUser MD5 "this is only ever used internally, but still change the password"
# If you also change the usernames (which might be sensible),
# then remember to update the other occurrences in this example config file to match.
###############################################################################
#
# ACCESS CONTROL
#
# system + hrSystem groups only
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
# Full access from the local host
#rocommunity public localhost
# Default access to basic system info
rocommunity public default -V systemonly
# rocommunity6 is for IPv6
rocommunity6 public default -V systemonly
# Full access from an example network
# Adjust this network address to match your local
# settings, change the community string,
# and check the 'agentAddress' setting above
#rocommunity secret 10.0.0.0/16
# Full read-only access for SNMPv3
rouser authOnlyUser
# Full write access for encrypted requests
# Remember to activate the 'createUser' lines above
#rwuser authPrivUser priv
# It's no longer typically necessary to use the full 'com2sec/group/access' configuration
# r[ow]user and r[ow]community, together with suitable views, should cover most requirements
###############################################################################
#
# SYSTEM INFORMATION
#
# Note that setting these values here, results in the corresponding MIB objects being 'read-only'
# See snmpd.conf(5) for more details
sysLocation Sitting on the Dock of the Bay
sysContact Me <me@example.org>
# Application + End-to-End layers
sysServices 72
#
# Process Monitoring
#
# At least one 'mountd' process
proc mountd
# No more than 4 'ntalkd' processes - 0 is OK
proc ntalkd 4
# At least one 'sendmail' process, but no more than 10
proc sendmail 10 1
# Walk the UCD-SNMP-MIB::prTable to see the resulting output
# Note that this table will be empty if there are no "proc" entries in the snmpd.conf file
#
# Disk Monitoring
#
# 10MBs required on root disk, 5% free on /var, 10% free on all other disks
disk / 10000
disk /var 5%
includeAllDisks 10%
# Walk the UCD-SNMP-MIB::dskTable to see the resulting output
# Note that this table will be empty if there are no "disk" entries in the snmpd.conf file
#
# System Load
#
# Unacceptable 1-, 5-, and 15-minute load averages
load 12 10 5
# Walk the UCD-SNMP-MIB::laTable to see the resulting output
# Note that this table *will* be populated, even without a "load" entry in the snmpd.conf file
###############################################################################
#
# ACTIVE MONITORING
#
# send SNMPv1 traps
trapsink localhost public
# send SNMPv2c traps
#trap2sink localhost public
# send SNMPv2c INFORMs
#informsink localhost public
# Note that you typically only want *one* of these three lines
# Uncommenting two (or all three) will result in multiple copies of each notification.
#
# Event MIB - automatically generate alerts
#
# Remember to activate the 'createUser' lines above
iquerySecName internalUser
rouser internalUser
# generate traps on UCD error conditions
defaultMonitors yes
# generate traps on linkUp/Down
linkUpDownNotifications yes
###############################################################################
#
# EXTENDING THE AGENT
#
#
# Arbitrary extension commands
#
extend test1 /bin/echo Hello, world!
extend-sh test2 echo Hello, world! ; echo Hi there ; exit 35
#extend-sh test3 /bin/sh /tmp/shtest
# Note that this last entry requires the script '/tmp/shtest' to be created first,
# containing the same three shell commands, before the line is uncommented
# Walk the NET-SNMP-EXTEND-MIB tables (nsExtendConfigTable, nsExtendOutput1Table
# and nsExtendOutput2Table) to see the resulting output
# Note that the "extend" directive supercedes the previous "exec" and "sh" directives
# However, walking the UCD-SNMP-MIB::extTable should still returns the same output,
# as well as the fuller results in the above tables.
#
# "Pass-through" MIB extension command
#
#pass .1.3.6.1.4.1.8072.2.255 /bin/sh PREFIX/local/passtest
#pass .1.3.6.1.4.1.8072.2.255 /usr/bin/perl PREFIX/local/passtest.pl
# Note that this requires one of the two 'passtest' scripts to be installed first,
# before the appropriate line is uncommented.
# These scripts can be found in the 'local' directory of the source distribution,
# and are not installed automatically.
# Walk the NET-SNMP-PASS-MIB::netSnmpPassExamples subtree to see the resulting output
#
# AgentX Sub-agents
#
# Run as an AgentX master agent
master agentx
# Listen for network connections (from localhost)
# rather than the default named socket /var/agentx/master
#agentXSocket tcp:localhost:705
rouser neutg
Here is my persistent configuration file /var/net-snmp/snmpd.conf:
createUser neutg SHA "password" AES passphrase
The command I run is:
snmpget -u neutg -A password -a SHA -X 'passphrase' -x AES -l authPriv localhost -v 3 1.3.6.1.2.1.1
I don't understand why it does not take my user into account. (I did restart snmpd after adding the user - multiple times!)
The version of net-snmp I use :
Thanks in advance :)

After much research I've found what the problem is.
snmpd was not taking my configuration files into account. I saw it using this command:
snmpd -Dread_config -H 2>&1 | grep "Reading" | sort -u
This tells you which configuration files are loaded by snmpd.
You can also see it by looking at the configuration file /var/lib/snmp/snmpd.conf. When snmpd handles your users it creates special lines in that file. They look like:
usmUser 1 3 0x80001f888074336938f74f7c5a00000000 "neutg" "neutg" NULL .1.3.6.1.6.3.10.1.1.3 0xf965e4ab0f35eebb3f0e3b30\
6bc0797c025821c5 .1.3.6.1.6.3.10.1.2.4 0xe277044beccd9991d70144c4c8f4b672 0x
usmUser 1 3 0x80001f888074336938f74f7c5a00000000 "myuser" "myuser" NULL .1.3.6.1.6.3.10.1.1.2 0x2223c2d00758353b7c3076\
236be02152 .1.3.6.1.6.3.10.1.2.2 0x2223c2d00758353b7c3076236be02152 0x
setserialno 1424757026
So if you do not see any usmUser lines, you probably added your users incorrectly.
The solution:
sudo /usr/local/sbin/snmpd -c /var/net-snmp/snmpd.conf -c /usr/local/share/snmp/snmpd.conf
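After starting the agent that way, a quick way to confirm the user was actually loaded is to look for a usmUser line in the persistent file and then query a single scalar. This is only a sketch; the credentials are the ones from the question, and I query sysDescr.0 instead of the whole .1.3.6.1.2.1.1 subtree because snmpget expects an exact instance:
grep usmUser /var/lib/snmp/snmpd.conf /var/net-snmp/snmpd.conf
snmpget -v 3 -u neutg -l authPriv -a SHA -A password -x AES -X passphrase localhost 1.3.6.1.2.1.1.1.0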

Steps on how to set up SonarQube for an Xcode/Swift project

I have been trying to get this done for two days now and have had different errors to deal with. I am using an Apple M1 Mac Pro with Xcode 13.4, and it has been difficult to get SonarQube running. I finally found a Docker image which is M1-specific, and I have been able to get SonarQube running locally.
My current challenge is getting the test results sent to the SonarQube project.
I have tried several methods, including:
xcrun xccov view YourPathToThisFile/*.xccovreport --json
This command is not working for me, and in any case I wanted an XML format.
Is there a better way to get the code coverage report sent to SonarQube? I have SonarQube running, but the test results and coverage are not showing. The SonarQube page currently says "The main branch has no lines of code."
NB: I am running SonarQube with Docker.
Below is my SonarQube properties file.
#
# Swift SonarQube Plugin - Enables analysis of Swift and Objective-C projects into SonarQube.
# Copyright © 2015 Backelite (${email})
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Sonar Server details
sonar.host.url=http://localhost:9000/
sonar.login=782a04fee8bfc7ae181f04bbd13734eb89e5580c
# sonar.password=admin
# Project Details
sonar.projectKey=tinggios
sonar.projectName=TinggIOSApp
sonar.projectDescription=This is TinggiOS
# Comment if you have a project with mixed ObjC / Swift
sonar.language=swift
sonar.projectKey=tinggios
sonar.qualitygate.wait=true
# Path to source directories
# sonar.sources=SonarDemo,SonarDemoTests,SonarDemoUITests
sonar.sources=.
# Exclude directories
sonar.test.inclusions=**/*Test*/**
sonar.test.inclusions=*.swift
sonar.exclusions=**/*.xml,Pods/**/*,Reports/**/*
# sonar.inclusions=*.swift
# Path to test directories (comment if no test)
sonar.tests=TinggIOS/Core/Tests/CoreTests,TinggIOS/Home/Tests/HomeTests,TinggIOS/OnboardingUITest
# Destination Simulator to run surefire
# As string expected in destination argument of xcodebuild command
# Example = sonar.swift.simulator=platform=iOS Simulator,name=iPhone 6,OS=9.2
# sonar.swift.simulator=platform=iOS Simulator,name=iPhone 7,OS=12.0
sonar.swift.simulator=platform=iOS Simulator,name=iPhone 11,OS=15
# Xcode project configuration (.xcodeproj)
# and use the latter to specify which project(s) to include in the analysis (comma separated list)
# Specify either xcodeproj or xcodeproj + xcworkspace
sonar.swift.project=TinggIOS/TinggIOS.xcodeproj
sonar.swift.workspace=TinggIOS/TinggIOS.xcworkspace
sonar.language=swift
sonar.c.file.suffixes=-
sonar.cpp.file.suffixes=-
sonar.objc.file.suffixes=-
# Specify your appname.
# This will be something like "myApp"
# Use when basename is different from targeted scheme.
# Or when slather fails with 'No product binary found'
sonar.swift.appName=TinggIOS
# Scheme to build your application
sonar.swift.appScheme=TinggIOS
# Configuration to use for your scheme. if you do not specify that the default will be Debug
sonar.swift.appConfiguration=Debug
##########################
# Optional configuration #
##########################
# Encoding of the source code
sonar.sourceEncoding=UTF-8
# SCM
# sonar.scm.enabled=true
# sonar.scm.url=scm:git:http://xxx
# JUnit report generated by run-sonar.sh is stored in sonar-reports/TEST-report.xml
# Change it only if you generate the file on your own
# The XML files have to be prefixed by TEST- otherwise they are not processed
sonar.junit.reportsPath=sonar-reports/TEST-report.xml
# Cobertura report generated by run-sonar.sh is stored in sonar-reports/coverage-swift.xml
# Change it only if you generate the file on your own
sonar.swift.coverage.reportPattern=sonar-reports/coverage-swift*.xml
#sonar.coverageReportPaths=sonarqube-generic-coverage.xml
#sonar.swift.coverage.reportPattern=sonar-reports/cobertura.xml
# OCLint report generated by run-sonar.sh is stored in sonar-reports/oclint.xml
# Change it only if you generate the file on your own
sonar.swift.swiftlint.report=sonar-reports/*swiftlint.txt
# Change it only if you generate the file on your own
sonar.swift.tailor.report=sonar-reports/*tailor.txt
# Paths to exclude from coverage report (surefire, 3rd party libraries etc.)
# sonar.swift.excludedPathsFromCoverage=pattern1,pattern2
# sonar.swift.excludedPathsFromCoverage=.*Tests.*,
##########################
# Tailor configuration #
##########################
# Tailor configuration
# -l,--max-line-length=<0-999> maximum Line length (in characters)
# --list-files display Swift source files to be analyzed
# --max-class-length=<0-999> maximum Class length (in lines)
# --max-closure-length=<0-999> maximum Closure length (in lines)
# --max-file-length=<0-999> maximum File length (in lines)
# --max-function-length=<0-999> maximum Function length (in lines)
# --max-name-length=<0-999> maximum Identifier name length (in characters)
# --max-severity=<error|warning (default)> maximum severity
# --max-struct-length=<0-999> maximum Struct length (in lines)
# --min-name-length=<1-999> minimum Identifier name length (in characters)
sonar.swift.tailor.config=--no-color --max-line-length=100 --max-file-length=500 --max-name-length=40 --max-name-length=40 --min-name-length=4
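Since the coverage never shows up, one option I may try is SonarQube's generic coverage format instead of the Cobertura pattern: run the tests with a result bundle, convert the xccov data to generic-coverage XML, and point the (currently commented) sonar.coverageReportPaths property at the result. This is only a sketch; the workspace, scheme and simulator come from the properties above, and the conversion script name is the one shipped in SonarSource's sonar-scanning-examples repository, so treat both as assumptions to verify:
xcodebuild test -workspace TinggIOS/TinggIOS.xcworkspace -scheme TinggIOS \
  -destination 'platform=iOS Simulator,name=iPhone 11,OS=15' \
  -resultBundlePath build/TinggIOS.xcresult
bash xccov-to-sonarqube-generic.sh build/TinggIOS.xcresult > sonarqube-generic-coverage.xml
sonar-scanner -Dsonar.coverageReportPaths=sonarqube-generic-coverage.xml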

xrp {"result":{"error":"noNetwork","error_code":17,"error_message":"Not synced to Ripple network.","request":{"command":"fee"},"status":"error"}}

I'm trying to run a rippled non-validator node.
I'm using a 32 GB RAM C5-class instance in AWS with an external volume - io1 storage with 10000 IOPS.
I had to reboot the node for patching; since then it seems fine, but this request returns an error:
curl --data-binary '{"method": "fee","params": []}' -H 'content-type:text/plain;' http://:5005/
A normal response looks like:
{"result":{"current_ledger_size":"68","current_queue_size":"0","drops":{"base_fee":"10","median_fee":"5000","minimum_fee":"10","open_ledger_fee":"10"},"expected_ledger_size":"150","ledger_current_index":51375387,"levels":{"median_level":"128000","minimum_level":"256","open_ledger_level":"256","reference_level":"256"},"max_queue_size":"3000","status":"success"}}
The error I get instead is:
{"result":{"error":"noNetwork","error_code":17,"error_message":"Not synced to Ripple network.","request":{"command":"fee"},"status":"error"}}
Since the reboot, though, I just get this Not synced error.
When I spin up a node from fresh it has to download 95GB of data at about 1 a day, and it gets the same error while I'm waiting.
I'm wondering what I need to do to keep these nodes stable.
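For what it's worth, this is how I have been checking sync progress while waiting: once the node is caught up, result.info.server_state moves from "connected"/"syncing" to "full" (or "proposing" on a validator) and complete_ledgers stops being "empty". A sketch only; the admin RPC port 5005 matches the config below:
curl --data-binary '{"method": "server_info", "params": []}' -H 'content-type:text/plain;' http://127.0.0.1:5005/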
[server]
port_rpc_admin_local
port_peer
port_ws_admin_local
#port_ws_public
#ssl_key = /etc/ssl/private/server.key
#ssl_cert = /etc/ssl/certs/server.crt
[port_rpc_admin_local]
port = 5005
ip = 0.0.0.0
admin = 127.0.0.1
protocol = http
[port_peer]
port = 51235
ip = 0.0.0.0
# alternatively, to accept connections on IPv4 + IPv6, use:
#ip = ::
protocol = peer
[port_ws_admin_local]
port = 6006
ip = 0.0.0.0
admin = 127.0.0.1
protocol = ws
#[port_ws_public]
#port = 6005
#ip = 0.0.0.0
#protocol = wss
#-------------------------------------------------------------------------------
[node_size]
huge
# This is primary persistent datastore for rippled. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found here: https://ripple.com/wiki/NodeBackEnd
# delete old ledgers while maintaining at least 2000. Do not require an
# external administrative command to initiate deletion.
[node_db]
type=RocksDB
path=/data01/rippled/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
online_delete=120000
advisory_delete=0
# This is the persistent datastore for shards. It is important for the health
# of the ripple network that rippled operators shard as much as practical.
# NuDB requires SSD storage. Helpful information can be found here
# https://ripple.com/build/history-sharding
#[shard_db]
#path=/data01/rippled/db/shards/nudb
#max_size_gb=500
[database_path]
/data01/rippled/db
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/var/log/rippled/debug.log
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
# To use the XRP test network (see https://ripple.com/build/xrp-test-net/),
# use the following [ips] section:
# [ips]
# r.altnet.rippletest.net 51235
# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the
# folder in which the rippled.cfg file is located.
[validators_file]
validators.txt
# Turn down default logging to save disk space in the long run.
# Valid values here are trace, debug, info, warning, error, and fatal
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
# If ssl_verify is 1, certificates will be validated.
# To allow the use of self-signed certificates for development or internal use,
# set to ssl_verify to 0.
[ssl_verify]
1
This is my run command:
/opt/ripple/bin/rippled --silent --conf /etc/opt/ripple/rippled.cfg
I ran a C5.xlarge with an io1 storage volume with 10000 iops.
/opt/ripple/bin/rippled --net --silent --conf /etc/opt/ripple/rippled.cfg
[server]
port_rpc_admin_local
port_peer
port_ws_admin_local
#port_ws_public
#ssl_key = /etc/ssl/private/server.key
#ssl_cert = /etc/ssl/certs/server.crt
[port_rpc_admin_local]
port = 5005
ip = 0.0.0.0
admin = 127.0.0.1
protocol = http
[port_peer]
port = 51235
ip = 0.0.0.0
# alternatively, to accept connections on IPv4 + IPv6, use:
#ip = ::
protocol = peer
[port_ws_admin_local]
port = 6006
ip = 0.0.0.0
admin = 127.0.0.1
protocol = ws
#[port_ws_public]
#port = 6005
#ip = 0.0.0.0
#protocol = wss
#-------------------------------------------------------------------------------
[node_size]
medium
# This is primary persistent datastore for rippled. This includes transaction
# metadata, account states, and ledger headers. Helpful information can be
# found here: https://ripple.com/wiki/NodeBackEnd
# delete old ledgers while maintaining at least 2000. Do not require an
# external administrative command to initiate deletion.
[node_db]
type=RocksDB
path=/data01/rippled/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
online_delete=120000
advisory_delete=0
# This is the persistent datastore for shards. It is important for the health
# of the ripple network that rippled operators shard as much as practical.
# NuDB requires SSD storage. Helpful information can be found here
# https://ripple.com/build/history-sharding
#[shard_db]
#path=/data01/rippled/db/shards/nudb
#max_size_gb=500
[database_path]
/data01/rippled/db
# This needs to be an absolute directory reference, not a relative one.
# Modify this value as required.
[debug_logfile]
/var/log/rippled/debug.log
[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org
# To use the XRP test network (see https://ripple.com/build/xrp-test-net/),
# use the following [ips] section:
# [ips]
# r.altnet.rippletest.net 51235
# File containing trusted validator keys or validator list publishers.
# Unless an absolute path is specified, it will be considered relative to the
# folder in which the rippled.cfg file is located.
[validators_file]
validators.txt
# Turn down default logging to save disk space in the long run.
# Valid values here are trace, debug, info, warning, error, and fatal
[rpc_startup]
{ "command": "log_level", "severity": "warning" }
# If ssl_verify is 1, certificates will be validated.
# To allow the use of self-signed certificates for development or internal use,
# set to ssl_verify to 0.
[ssl_verify]
1
/etc/init.d/rippled
#
# rippled -- startup script for rippled
#
# chkconfig: - 85 15
# processname: rippled
#
### BEGIN INIT INFO
# Provides: rippled
# Required-Start: $local_fs $remote_fs $network
# Required-Stop: $local_fs $remote_fs $network
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start and stop rippled
### END INIT INFO
#
#
#PIDFILE="/data01/bitcoin/bitcoind.pid"
start() {
echo -n "Starting rippled: "
exec /opt/ripple/bin/rippled --net --conf /etc/opt/ripple/rippled.cfg "$@"
}
stop() {
echo "shutting down rippled"
# run rippled stop in the foreground so RETVAL reflects its exit status
/opt/ripple/bin/rippled stop
RETVAL=$?
[ $RETVAL -eq 0 ] && rm -f $PIDFILE
return $RETVAL
}
force_start() {
echo -n "Force starting Bitcoind: "
echo -n "Starting rippled: "
exec /opt/ripple/bin/rippled --net --conf /etc/opt/ripple/rippled.cfg "$#"
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
sleep 30
start
;;
force-start)
force_start
;;
*)
echo "Usage: {start|stop|restart|force-start}"
exit 1
;;
esac
exit $?

Why is the WildFly server asking me to add users even after adding a user?

This is my mgmt-users.properties file. The users are added in the file, but when I try to run localhost it says the server is running; if I then try to view the admin console it redirects to http://localhost:9990/error/index_win.html. So the server is running, but I cannot open the admin console.
#
# Properties declaration of users for the realm 'ManagementRealm' which is the default realm
# for new installations. Further authentication mechanism can be configured
# as part of the <management /> in standalone.xml.
#
# Users can be added to this properties file at any time, updates after the server has started
# will be automatically detected.
#
# By default the properties realm expects the entries to be in the format: -
# username=HEX( MD5( username ':' realm ':' password))
#
# A utility script is provided which can be executed from the bin folder to add the users: -
# - Linux
# bin/add-user.sh
#
# - Windows
# bin\add-user.bat
#
#$REALM_NAME=ManagementRealm$ This line is used by the add-user utility to identify the realm name already used in this file.
#
# On start-up the server will also automatically add a user $local - this user is specifically
# for local tools running against this AS installation.
#
# The following illustrates how an admin user could be defined, this
# is for illustration only and does not correspond to a usable password.
#
#admin=2a0923285184943425d1f53ddd58ec7a
tejaswini=25ab658c2861b2e64783aaa9ba95c2e5
aswini#19=388ced81791ddb1760b83dc4ec8b7a61
saisana=ff39d778414ab12d84fc4fa7fdacb634
alekya=d72e9c90345ce4d9290c3a2728b3cd60
prasad=c6c7c67cf343f6862d3b77bae9f61d17
teju=28b9e55b314fd60855a7843b4455dbed
Screenshot of the added users
Maybe you created an application user rather than a management user when you ran the add-user utility.
Please refer to the link below for the steps to register a user:
https://bgasparotto.com/add-user-wildfly
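If you need to redo it, the add-user utility can also be run non-interactively; the important part is creating a Management User (ManagementRealm), not an Application User. A sketch only - the username and password here are made up:
cd $JBOSS_HOME/bin
./add-user.sh -u 'adminuser' -p 'StrongPassword1!'
# note: adding the -a flag would create an Application User instead of a Management User
After that, the console at http://localhost:9990 should accept the new credentials.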

ORA-12505, TNS:listener does not currently know of SID given in connect descriptor using oracle sql developer

I have gone through the forum looking for a solution for hours on end, and I haven't encountered anyone with the same predicament as mine. I have installed Oracle 12c 64-bit on a Windows 8.1 64-bit system. When I try to make a new connection to the database using Oracle SQL Developer, I run into this error. Being a newbie to Oracle Database, I tried running lsnrctl start and got:
LSNRCTL for 64-bit Windows: Version 12.1.0.2.0 - Production on 18-OCT-2015 19:00:45
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1539)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
64-bit Windows Error: 61: Unknown error
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521_1)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
64-bit Windows Error: 2: No such file or directory
I tried to make sense of it but I couldn't.
Here is my listener.ora:
# copyright (c) 1997 by the Oracle Corporation
#
# NAME
# listener.ora
# FUNCTION
# Network Listener startup parameter file example
# NOTES
# This file contains all the parameters for listener.ora,
# and could be used to configure the listener by uncommenting
# and changing values. Multiple listeners can be configured
# in one listener.ora, so listener.ora parameters take the form
# of SID_LIST_<lsnr>, where <lsnr> is the name of the listener
# this parameter refers to. All parameters and values are
# case-insensitive.
# <lsnr>
# This parameter specifies both the name of the listener, and
# it listening address(es). Other parameters for this listener
# us this name in place of <lsnr>. When not specified,
# the name for <lsnr> defaults to "LISTENER", with the default
# address value as shown below.
#
# LISTENER =
# (ADDRESS_LIST=
# (ADDRESS=(PROTOCOL=tcp)(HOST=localhost(PORT=1521))
# (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY)))
# SID_LIST_<lsnr>
# List of services the listener knows about and can connect
# clients to. There is no default. See the Net8 Administrator's
# Guide for more information.
#
# SID_LIST_LISTENER=
# (SID_LIST=
# (SID_DESC=
#BEQUEATH CONFIG
# (GLOBAL_DBNAME=salesdb.mycompany)
# (SID_NAME=sid1)
# (ORACLE_HOME=/private/app/oracle/product/8.0.3)
# #PRESPAWN CONFIG
# (PRESPAWN_MAX=20)
# (PRESPAWN_LIST=
# (PRESPAWN_DESC=(PROTOCOL=tcp)(POOL_SIZE=2)(TIMEOUT=1))
# )
# )
# )
# PASSWORDS_<lsnr>
# Specifies a password to authenticate stopping the listener.
# Both encrypted and plain-text values can be set. Encrypted passwords
# can be set and stored using lsnrctl.
# LSNRCTL> change_password
# Will prompt for old and new passwords, and use encryption both
# to match the old password and to set the new one.
# LSNRCTL> set password
# Will prompt for the new password, for authentication with
# the listener. The password must be set before running the next
# command.
# LSNRCTL> save_config
# Will save the changed password to listener.ora. These last two
# steps are not necessary if SAVE_CONFIG_ON_STOP_<lsnr> is ON.
# See below.
#
# Default: NONE
#
# PASSWORDS_LISTENER = 20A22647832FB454 # "foobar"
# SAVE_CONFIG_ON_STOP_<lsnr>
# Tells the listener to save configuration changes to listener.ora when
# it shuts down. Changed parameter values will be written to the file,
# while preserving formatting and comments.
# Default: OFF
# Values: ON/OFF
#
# SAVE_CONFIG_ON_STOP_LISTENER = ON
# USE_PLUG_AND_PLAY_<lsnr>
# Tells the listener to contact an Onames server and register itself
# and its services with Onames.
# Values: ON/OFF
# Default: OFF
#
# USE_PLUG_AND_PLAY_LISTENER = ON
# LOG_FILE_<lsnr>
# Sets the name of the listener's log file. The .log extension
# is added automatically.
# Default=<lsnr>
#
# LOG_FILE_LISTENER = lsnr
# LOG_DIRECTORY_<lsnr>
# Sets the directory for the listener's log file.
# Default: <oracle_home>/network/log
#
# LOG_DIRECTORY_LISTENER = /private/app/oracle/product/8.0.3/network/log
# TRACE_LEVEL_<lsnr>
# Specifies desired tracing level.
# Default: OFF
# Values: OFF/USER/ADMIN/SUPPORT/0-16
#
# TRACE_LEVEL_LISTENER = SUPPORT
# TRACE_FILE_<lsnr>
# Sets the name of the listener's trace file. The .trc extension
# is added automatically.
# Default: <lsnr>
#
# TRACE_FILE_LISTENER = lsnr
# TRACE_DIRECTORY_<lsnr>
# Sets the directory for the listener's trace file.
# Default: <oracle_home>/network/trace
#
# TRACE_DIRECTORY_LISTENER=/private/app/oracle/product/8.0.3/network/trace
# CONNECT_TIMEOUT_<lsnr>
# Sets the number of seconds that the listener waits to get a
# valid database query after it has been started.
# Default: 10
#
# CONNECT_TIMEOUT_LISTENER=10
And here is my tnsnames.ora:
# This file contains the syntax information for
# the entries to be put in any tnsnames.ora file
# The entries in this file are need based.
# There are no defaults for entries in this file
# that Sqlnet/Net3 use that need to be overridden
#
# Typically you could have two tnsnames.ora files
# in the system, one that is set for the entire system
# and is called the system tnsnames.ora file, and a
# second file that is used by each user locally so that
# he can override the definitions dictated by the system
# tnsnames.ora file.
# The entries in tnsnames.ora are an alternative to using
# the names server with the onames adapter.
# They are a collection of aliases for the addresses that
# the listener(s) is(are) listening for a database or
# several databases.
# The following is the general syntax for any entry in
# a tnsnames.ora file. There could be several such entries
# tailored to the user's needs.
<alias>= [ (DESCRIPTION_LIST = # Optional depending on whether u have
# one or more descriptions
# If there is just one description, unnecessary ]
(DESCRIPTION=
[ (SDU=2048) ] # Optional, defaults to 2048
# Can take values between 512 and 32K
[ (ADDRESS_LIST= # Optional depending on whether u have
# one or more addresses
# If there is just one address, unnecessary ]
(ADDRESS=
[ (COMMUNITY=<community_name>) ]
(PROTOCOL=tcp)
(HOST=<hostname>)
(PORT=<portnumber (1521 is a standard port used)>)
)
[ (ADDRESS=
(PROTOCOL=ipc)
(KEY=<ipckey (PNPKEY is a standard key used)>)
)
]
[ (ADDRESS=
[ (COMMUNITY=<community_name>) ]
(PROTOCOL=decnet)
(NODE=<nodename>)
(OBJECT=<objectname>)
)
]
... # More addresses
[ ) ] # Optional depending on whether ADDRESS_LIST is used or not
[ (CONNECT_DATA=
(SID=<oracle_sid>)
[ (GLOBAL_NAME=<global_database_name>) ]
)
]
[ (SOURCE_ROUTE=yes) ]
)
(DESCRIPTION=
[ (SDU=2048) ] # Optional, defaults to 2048
# Can take values between 512 and 32K
[ (ADDRESS_LIST= ] # Optional depending on whether u have more
# than one address or not
# If there is just one address, unnecessary
(ADDRESS
[ (COMMUNITY=<community_name>) ]
(PROTOCOL=tcp)
(HOST=<hostname>)
(PORT=<portnumber (1521 is a standard port used)>)
)
[ (ADDRESS=
(PROTOCOL=ipc)
(KEY=<ipckey (PNPKEY is a standard key used)>)
)
]
... # More addresses
[ ) ] # Optional depending on whether ADDRESS_LIST
# is being used
[ (CONNECT_DATA=
(SID=<oracle_sid>)
[ (GLOBAL_NAME=<global_database_name>) ]
)
]
[ (SOURCE_ROUTE=yes) ]
)
[ (CONNECT_DATA=
(SID=<oracle_sid>)
[ (GLOBAL_NAME=<global_database_name>) ]
)
]
... # More descriptions
[ ) ] # Optional depending on whether DESCRIPTION_LIST is used or not
I'm sorry for the long post; I just didn't want to leave anything out. Is there a config that I'm missing?
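One thing that stands out: every line in the listener.ora above is commented out, so no listener address or SID list is actually defined. For comparison, a minimal uncommented entry would look roughly like this - the SID, GLOBAL_DBNAME, and ORACLE_HOME below are placeholders to adapt, not values taken from this install:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcl)
      (SID_NAME = orcl)
      (ORACLE_HOME = C:\app\oracle\product\12.1.0\dbhome_1)
    )
  )
With something like that saved, lsnrctl start should bring the listener up on port 1521, and the SQL Developer connection would then use that port and SID.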

Jailing users to their home directory doesn't work

I'm using proftpd on Debian 7.
I need to jail each user in their own home directory, so they can't see and access parent folders.
Currently each user logs in to their own home directory, but they can still see and access parent folders.
As you can see below, I have already tried DefaultRoot ~ developers and also DefaultRoot ~ .
How can I jail each user in their own home directory, so they can't see and access parent folders?
This is my proftpd.conf
#
# /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
# To really apply changes, reload proftpd after modifications, if
# it runs in daemon mode. It is not required in inetd/xinetd mode.
#
# Includes DSO modules
Include /etc/proftpd/modules.conf
# Set off to disable IPv6 support which is annoying on IPv4 only boxes.
UseIPv6 on
# If set on you can experience a longer connection delay in many cases.
IdentLookups off
ServerName "Debian"
ServerType standalone
DeferWelcome off
MultilineRFC2228 on
DefaultServer on
ShowSymlinks on
TimeoutNoTransfer 600
TimeoutStalled 600
TimeoutIdle 1200
DisplayLogin welcome.msg
DisplayChdir .message true
ListOptions "-l"
DenyFilter \*.*/
# Use this to jail all users in their homes
DefaultRoot ~ developers
#DocumentRoot ~
# Users require a valid shell listed in /etc/shells to login.
# Use this directive to release that constrain.
# RequireValidShell off
# Port 21 is the standard FTP port.
Port 21
# In some cases you have to specify passive ports range to by-pass
# firewall limitations. Ephemeral ports can be used for that, but
# feel free to use a more narrow range.
# PassivePorts 49152 65534
# If your host was NATted, this option is useful in order to
# allow passive tranfers to work. You have to use your public
# address and opening the passive ports used on your firewall as well.
# MasqueradeAddress 1.2.3.4
# This is useful for masquerading address with dynamic IPs:
# refresh any configured MasqueradeAddress directives every 8 hours
<IfModule mod_dynmasq.c>
# DynMasqRefresh 28800
</IfModule>
# To prevent DoS attacks, set the maximum number of child processes
# to 30. If you need to allow more than 30 concurrent connections
# at once, simply increase this value. Note that this ONLY works
# in standalone mode, in inetd mode you should use an inetd server
# that allows you to limit maximum number of processes per service
# (such as xinetd)
MaxInstances 30
# Set the user and group that the server normally runs at.
User proftpd
Group nogroup
# Umask 022 is a good standard umask to prevent new files and dirs
# (second parm) from being group and world writable.
Umask 022 022
# Normally, we want files to be overwriteable.
AllowOverwrite on
# Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
# PersistentPasswd off
# This is required to use both PAM-based authentication and local passwords
# AuthOrder mod_auth_pam.c* mod_auth_unix.c
# Be warned: use of this directive impacts CPU average load!
# Uncomment this if you like to see progress and transfer rate with ftpwho
# in downloads. That is not needed for uploads rates.
#
# UseSendFile off
TransferLog /var/log/proftpd/xferlog
SystemLog /var/log/proftpd/proftpd.log
# Logging onto /var/log/lastlog is enabled but set to off by default
#UseLastlog on
# In order to keep log file dates consistent after chroot, use timezone info
# from /etc/localtime. If this is not set, and proftpd is configured to
# chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
# savings timezone regardless of whether DST is in effect.
#SetEnv TZ :/etc/localtime
<IfModule mod_quotatab.c>
QuotaEngine off
</IfModule>
<IfModule mod_ratio.c>
Ratios off
</IfModule>
# Delay engine reduces impact of the so-called Timing Attack described in
# http://www.securityfocus.com/bid/11430/discuss
# It is on by default.
<IfModule mod_delay.c>
DelayEngine on
</IfModule>
<IfModule mod_ctrls.c>
ControlsEngine off
ControlsMaxClients 2
ControlsLog /var/log/proftpd/controls.log
ControlsInterval 5
ControlsSocket /var/run/proftpd/proftpd.sock
</IfModule>
<IfModule mod_ctrls_admin.c>
AdminControlsEngine off
</IfModule>
#
# Alternative authentication frameworks
#
#Include /etc/proftpd/ldap.conf
#Include /etc/proftpd/sql.conf
#
# This is used for FTPS connections
#
#Include /etc/proftpd/tls.conf
#
# Useful to keep VirtualHost/VirtualRoot directives separated
#
#Include /etc/proftpd/virtuals.conf
# A basic anonymous configuration, no upload directories.
# <Anonymous ~ftp>
# User ftp
# Group nogroup
# # We want clients to be able to login with "anonymous" as well as "ftp"
# UserAlias anonymous ftp
# # Cosmetic changes, all files belongs to ftp user
# DirFakeUser on ftp
# DirFakeGroup on ftp
#
# RequireValidShell off
#
# # Limit the maximum number of anonymous logins
# MaxClients 10
#
# # We want 'welcome.msg' displayed at login, and '.message' displayed
# # in each newly chdired directory.
# DisplayLogin welcome.msg
# DisplayChdir .message
#
# # Limit WRITE everywhere in the anonymous chroot
# <Directory *>
# <Limit WRITE>
# DenyAll
# </Limit>
# </Directory>
#
# # Uncomment this if you're brave.
# # <Directory incoming>
# # # Umask 022 is a good standard umask to prevent new files and dirs
# # # (second parm) from being group and world writable.
# # Umask 022 022
# # <Limit READ WRITE>
# # DenyAll
# # </Limit>
# # <Limit STOR>
# # AllowAll
# # </Limit>
# # </Directory>
#
# </Anonymous>
# Include other custom configuration files
Include /etc/proftpd/conf.d/
<Global>
AccessGrantMsg "Benvenuto sul server demo Up3Up! Ricordati di fare sempre un backup di cio' che modifichi!"
AccessDenyMsg "Accessi al server FTP demo di Up3Up errati!"
</Global>
This is how I create users and set their home directories:
#!/bin/bash
echo "Procedure for creating an FTP user . . ."
# Ask for the account name
read -p "Enter the name (without #up3up.net): " user
# Ask for the path
echo "Path for $user # up3up.net (without /var/www/up3upn/public_html/)"
read -p "Enter the path: " percorso
# Create the path if it does not already exist
mkdir /var/www/up3upn/public_html/"$percorso" &> /dev/null
# Warn that the password will be requested
echo "Enter the plain-text password for $user # up3up.net"
# Create the account and ask for the password
useradd -d /var/www/up3upn/public_html/"$percorso" "$user" &> /dev/null
usermod -m -d /var/www/up3upn/public_html/"$percorso" "$user" &> /dev/null
# Add the existing user to the developers group (a second useradd would fail)
usermod -aG developers "$user" &> /dev/null
passwd "$user"
echo "Account $user # up3up.net created with path /var/www/up3upn/public_html/$percorso"
# Restart the FTP service
service proftpd restart &> /dev/null
You have to create a group (like ftpjail) and add all users that should be jailed to this group.
Then add this line to your proftpd.conf (it must not be at the end of the file):
DefaultRoot ~ ftpjail # this must be a group!
Now restart your FTP server, and the users are chrooted and jailed.
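As a concrete sketch of that (the group name ftpjail and the username are examples, not taken from the poster's setup):
groupadd ftpjail
usermod -aG ftpjail someuser
# then in /etc/proftpd/proftpd.conf:
#   DefaultRoot ~ ftpjail
service proftpd restart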
I had the same problem. I found out that the DefaultRoot ~ developers line needs to be at the end of the config file.