I'm using DBD::Oracle in Perl, and whenever a connection fails, the client generates a sqlnet.log file with the error details.
The thing is, I already have the error trapped by Perl and written to my own log file. I really don't need this extra information.
So, is there a flag or environment variable to stop the creation of sqlnet.log?
As the Oracle documentation states: "To ensure that all errors are recorded, logging cannot be disabled on clients or Names Servers."
You can follow DCookie's suggestion and use /dev/null as the log directory. On Windows machines you can use NUL:.
From Metalink:
The logging is automatic; there is no way to turn logging off. But since you are on a Unix server, you can redirect the log file to a null device, thus eliminating the problem of disk space consumption.
In the SQLNET.ORA file, set LOG_DIRECTORY_CLIENT and LOG_DIRECTORY_SERVER equal to a null device.
For example:
LOG_DIRECTORY_CLIENT = /dev/null
LOG_FILE_CLIENT = /dev/null
in SQLNET.ORA suppresses client logging completely.
To disable the listener from logging, set this parameter in the LISTENER.ORA file:
logging_listener = off
Are your clients on Windows or *nix? If on *nix, you can set LOG_DIRECTORY_CLIENT=/dev/null in your sqlnet.ora file. Not sure if you can do much for a Windows client.
EDIT: Doesn't look like it's possible on Windows. The best you could do would be to set the sqlnet.ora parameter above to a fixed location and create a scheduled task to delete the file as desired.
Okay, as Thomas points out, there is a null device on Windows, so use the same paradigm.
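For example, the client side of sqlnet.ora on a Windows machine would presumably mirror the Unix example above (untested sketch):
# untested sketch for a Windows client
LOG_DIRECTORY_CLIENT = NUL:
LOG_FILE_CLIENT = NUL: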
IMPORTANT: Do NOT set LOG_FILE_CLIENT=/dev/null. This will cause the permissions of /dev/null to be reset each time you initialize the Oracle library, and when your umask does not permit the world readable/writable bits, those bits get removed from /dev/null, provided you have permission to chmod that file, i.e. you are running as root.
And running as root may be something trivial, like running php --version with the OCI PHP extension present!
Full details here:
http://lists.pld-linux.org/mailman/pipermail/pld-devel-en/2014-May/023931.html
Instead, you should use a path inside a directory that doesn't exist:
LOG_FILE_CLIENT = /dev/impossible/path
and hope nobody ever creates the directory /dev/impossible :)
For Windows, NUL: is probably fine, as it's not an actual file there...
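If you want to check whether /dev/null on a box has already been clobbered this way, a quick sanity check (the expected mode is the usual crw-rw-rw-):
ls -l /dev/null
# if the world read/write bits are missing, root can restore them:
# chmod 666 /dev/null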
I'm attempting to import an SQL dump in PgAdmin 4 using the psql client. However, the error message returned is: "The system cannot find the file specified."
Here is a screenshot of my psql client:
The file films.sql is currently stored on my desktop, but I suspect the default location that the psql client accesses is not my desktop. Is there any way to set the location that the client looks in, in order to resolve this?
The SQL file is viewable here: https://github.com/datacamp/courses-intro-to-sql/tree/master/datasets
I simply want to get the database onto my local machine so that I don't need to store queries in an online learning platform. It would be best if this database were available locally to query and practice on.
I've attempted to execute the whole SQL file as a query on the films database, but this does not seem to be working either and returns 'Asynchronous query execution/operation underway. Query returned successfully in 388 msec.' However, the asynchronous query never seems to complete when I refresh the database.
Can someone please help?
Just give the path to your file:
psql -d my_database -f /path/to/the/file.sql
psql -d my_database -f C:/path/to/the/file.sql
Depending on whether you are on a Unix/Linux machine or Windows.
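For the file from the question, assuming it is still on the Desktop and the target database is called films, the Windows invocation would look something like this (replace <your-user> with your actual Windows user name):
psql -d films -f C:/Users/<your-user>/Desktop/films.sql
Alternatively, from inside an already-open psql session, you can run the file with \i C:/Users/<your-user>/Desktop/films.sql.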
Oh, and if you aren't familiar with file paths you may want to take a step back and become more familiar with general computer terminology before diving into a RDBMS. Your learning will be much easier if you have a solid foundation to build upon.
I suspect this question might be moot for the asker at this point, but for anyone else stumbling upon it like I did: the interactive connection info prompts are provided by a batch script (on Windows; I'd guess there's an analogous shell script for Unix) called runpsql.bat, which then just passes your inputs as command-line arguments to the psql.exe executable. I was getting this error because I had migrated my Postgres installation and the batch script was calling a nonexistent path for psql.exe, hence "The system cannot find the file specified". I edited runpsql.bat to point to the correct location of psql.exe and that resolved the issue. So for the OP, I would look into PgAdmin4, see where it is (presumably) calling runpsql.bat, and make sure that it calls psql.exe with the correct path.
I am working on installing PgPoolAdmin on my local Ubuntu system, with the aim of installing it on a server later. Currently, I am able to log in, but I keep getting the error "Could not read .pcppass file. File not found." I have tried this and many other resources, but no luck. Where is it looking for this file?
The username and password in pcp.conf are the same as here; it's just that they're in plain text in .pcppass and MD5 in pcp.conf. Is that correct?
I have pcp.conf in two locations: /var/www/html and /var/www/html/admin-tool/.
Its contents:
#insert:hostname:port:username:password
*:*:akshay:PASSWORD
*:*:postgres:PASSWORD
Thank you.
.pcppass needs to be accessible by the user that runs your web server. For example, if you are serving pgpoolAdmin through apache2 with the default paths and users, the following should solve the issue:
cp ~/.pcppass /var/www/.pcppass
chown www-data:www-data /var/www/.pcppass
chmod 600 /var/www/.pcppass
By default, the .pcppass file should be located in the user's $HOME directory. If you have created it elsewhere, set the $PCPPASSFILE environment variable to its file path. Make sure the file is in this format: hostname:port:username:password. Then you should be able to access the database.
Note: avoid wildcards in the password file, as they can sometimes cause errors. It is better to use exact host/port values, which is also better for security.
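Putting that together, a rough example for the setup in the question (the password is a placeholder; adjust host, port and password to your installation):
# placeholder password below; 9898 is the usual default PCP port
echo 'localhost:9898:akshay:yourpassword' > /var/www/.pcppass
chown www-data:www-data /var/www/.pcppass
chmod 600 /var/www/.pcppass
# if the file lives somewhere other than that user's $HOME, point PCPPASSFILE at it, e.g.:
# export PCPPASSFILE=/some/other/path/.pcppass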
When I try to connect to my Postgres.app db using dbext, I get the following error:
dbext:PostgreSQL requires a '$HOME/.pgpass' file in order to authenticate. This file is
missing. The binary 'psql' does not accept commandline passwords.
Other programs connect just fine by using a "local" connection. (Postgres.app runs with my userid.)
In vim :!which psql correctly prints /opt/local/bin/psql (which I have symlinked to the one in the Postgres.app bin directory). And Postgres.app is set up to use "local" authentication and there's no clear sense of where a pg_hba.conf file would go (there is no etc directory in the app bundle). Moreover, Postgres.app doesn't have anything in its documentation about changing access configuration.
I've tried using dbext's :DBPromptForBufferParameters directly, as well as #tpope's vim-rails plugin (which returns without comment from dbext setup via :Rdbext).
So what do I do to get dbext to connect using a "local" connection?
Note - I spent a LOT of time trying to figure this out without trying the obvious, thus the post even when I already have the answer. I'm also curious to see if anyone else has a different approach.
It turns out you can just make an empty ~/.pgpass file (restricting read-write permissions to your userid only to avoid warnings). This was counterintuitive for me (since there is in fact no password), but I suppose in retrospect it's obvious I should have tried it.
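In concrete terms, something like:
touch ~/.pgpass
chmod 600 ~/.pgpass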
I'll point this out on the dbext issue tracker.
I'm an SELinux newbie and had to change the security context of a Mercurial repo and config file on a CentOS box to get it served from httpd.
I accidentally issued "chcon -Rv --type=httpd_sys_script_exec_t /", which I could only stop after masses of files and directories had already been modified.
I read about restorecon to restore something to its default context, but it doesn't work for me; I get "permission denied".
What can I do to restore the whole filesystem to its SELinux defaults?
You could try doing a fixfiles relabel to get things back in order. Otherwise, you could edit /etc/selinux/config and set the system to no longer enforce SELinux. Good luck!
You could do any of the following to fix this:
fixfiles
Create the file /.autorelabel and reboot the system.
restorecon -f file
Usually the contexts file will be /etc/selinux/targeted/contexts/files/file_contexts.
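Putting those together, roughly (run as root, which is also the likely fix for the "permission denied" from restorecon):
fixfiles relabel
# or schedule a full relabel on the next boot:
touch /.autorelabel
reboot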
I have to write a Perl script to automatically copy data from a remote server to my local system. The directory structure on the remote systems is:
../log/D1/<date>.tar.gz
../log/D2/<date>.gz
../log/D3/<date>.tar.gz
../log/D4/<date>
and the same on the other servers. I want to copy the data onto the local system in the format below:
../log/S1/D1/<date>.tar.gz
../log/S1/D2/<date>.gz
../log/S1/D3/<date>.tar.gz
../log/S1/D4/<date>
and the same for the other servers, i.e. S2, S3, etc.
Also, no SSH-capable Perl modules are available on the remote servers or on the local server, and I don't have permission to install any Perl modules. The only good thing is that the connectivity is through password-less SSH keys.
Can anyone please suggest Perl code to get this done?
I believe you can access shell commands from Perl.
So you can do this:
$cmd = "/usr/bin/scp remotefile localfile";
system $cmd;
NOTE: scp is secure-copy -- a buddy of ssh.
This does not require an SSH Perl module, but it does require ssh/scp support on both ends (which you have).
Hope this helps.
I started to suggest the scp command line program, but it seems that there's a CPAN module for that (no surprise). Check out Net::SCP.
By using scp on your client (where you can install new Perl modules) you can copy files without having to install any new software on the remote system. It just needs to have the ssh server running - which you've said it does.
I'd say stop trying to make life difficult for yourself and get the system to support the features you require.
Trying to develop for such a limited, locked-down platform is not going to be cost-effective in the long run: you'll develop stuff more slowly and it will have more bugs.
A little developer time is way more expensive than a decent hosted VM / hardware box.
Get a proper host, it will definitely save money (talk to your manager about this).
From your question above, I understand that you don't have permission to install Perl modules or make any changes that require administrative privileges. I love Perl, but to automate things like this you should use bash instead of Perl. Below is sample code I am using with password-less SSH keys.
#!/bin/bash
# Date string used in the remote file names; adjust the format to match your files.
DATE=$(date +%Y%m%d)
BASEDIR="/basedir"
cd "$BASEDIR" || exit 1
for HOST in S1 S2 S3
do
    # copy today's archive from each server into its own local subdirectory
    mkdir -p "$HOST/D1"
    scp -q "$HOST:$BASEDIR/D1/$DATE.tar.gz" "$HOST/D1/"
    echo "Data copy from $HOST done"
done
exit 0
You can use different date formats; for example, date +%Y%m%d (used above) gives the current date in YYYYMMDD format. You can also use this link to learn about other date formats.
Hope this helps.
You may not be able to install anything in system-wide lib directories, but there is nothing preventing you from installing modules in a location to which you have write-access. See How do I keep my own module/library directory?
This creates no more of a security issue than allowing you to write scripts on this system in the first place.
So, go forth and install Net::SCP.
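A rough sketch of that approach, assuming cpanm is available on the local machine (local::lib works just as well):
# assumes cpanm is installed for your user; installs into a per-user directory, no root needed
cpanm --local-lib ~/perl5 Net::SCP
# make Perl find it (add this to your shell profile)
export PERL5LIB=~/perl5/lib/perl5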
It sounds like you want rsync. You shouldn't have to do any programming at all.
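A hedged sketch of what that could look like, using the layout from the question (server names and the absolute path to the log tree are placeholders):
# server names and paths are placeholders; mirror each remote log tree into its own local subdirectory
rsync -az server1:/path/to/log/ /path/to/local/log/S1/
rsync -az server2:/path/to/log/ /path/to/local/log/S2/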