How to validate a custom data file with cloud-init

I am using custom-data to pass in some boot-time configuration to a server on Azure. I can see the file at
/var/lib/cloud/instances/<instance_id>/user-data.txt
and cloud-init is showing as done (cloud-init status).
The problem is that none of the script does anything, and no config files are produced.
I was wondering if anyone knew of a command to show whether the file at the location above is syntactically valid?
Thanks

I was wondering if anyone knew of a command to show whether the file at the location above is syntactically valid?
To validate the config, check out the cloud-init schema subcommand. Something like this may be what you want:
On a live system:
cloud-init schema --system
To see the userdata passed, try:
cloud-init query userdata
To see everything passed, try:
cloud-init query --all
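If you want to point the validator at the file on disk rather than the live system config, the schema subcommand also accepts a file path. A hedged sketch, using the path from the question (the instance ID is whatever your instance actually has):
cloud-init schema --config-file /var/lib/cloud/instances/<instance_id>/user-data.txt
Also note that for the user data to be processed as cloud-config at all, the file has to start with the #cloud-config header (or be a script starting with #!), so a quick first check is simply:
head -n 1 /var/lib/cloud/instances/<instance_id>/user-data.txt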

Related

New Sphinx version attempts a non-existent connection

I recently upgraded Sphinx to version 2.2.11 on Ubuntu.
Then I started getting daily emails where a process is attempting to connect and generating this error:
ERROR: index 'test1stemmed': sql_connect: Access denied for user 'test'#'localhost'
ERROR: index 'test1': sql_connect: Access denied for user 'test'#'localhost'
The warning email has a subject line which I assume points to the root of the problem:
. /etc/default/sphinxsearch && if [ "$START" = "yes" ] && [ -x /usr/bin/indexer ]; then /usr/bin/indexer --quiet --rotate --all; fi
So /etc/default/sphinxsearch does have the START variable set to yes,
but the /usr/bin/indexer output is total gibberish to me.
Such a user never existed on the system, AFAIK.
It would be interesting to know how this process got generated, but more importantly:
how can this process be safely stopped?
I've seen that happen; it comes from the Sphinx install package. Whoever set up that package created a cron task that runs that indexer --all command, which just tries to reindex every index (once a day, IIRC). The package maintainer thought they were being helpful :)
From https://packages.ubuntu.com/bionic/ppc64el/sphinxsearch/filelist
looks like it might be in
/etc/cron.d/sphinxsearch
You could remove that cron task if you don't want it.
Presumably you already have some other process for actually updating your real 'live' indexes (either dedicated cron tasks, or maybe RT indexes, or whatever).
Also it seems you still have these 'test' indexes in your sphinx.conf, maybe left over from the initial installation. I don't think installing a new package would overwrite sphinx.conf to add them later.
You may want to clear them out of your sphinx.conf if you don't use them; it could simplify the file.
(Although you probably still want to get rid of the --all cron, which just blindly reindexes everything daily!)
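If you do decide to drop the packaged cron job, it is a single file you can remove or neutralise. A hedged sketch, using the path from the package file list above:
sudo rm /etc/cron.d/sphinxsearch
# or, less destructively, keep the file but comment out every line:
sudo sed -i 's/^/# /' /etc/cron.d/sphinxsearch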

"mount" a PostgreSQL database from files not Backup

I've been given a project to extract data from a PostgreSQL database. I have no previous experience with PostgreSQL, but the project is to bug-fix existing code, so all the logic to connect to the engine and get data is already in place.
The problem I have is that the database has been given to me in the form of the folders and files straight from the source HDD, not a backup ("get the customer to give you a backup instead" isn't an option here).
The folders also contained the actual PostgreSQL binaries, so I looked at the version (9.4.14), downloaded the nearest (9.4.18) from the PostgreSQL site and installed it. Now all I have to do is somehow get it to look at the data files I was given.
I tried the obvious step of copying the contents of the data folder into the installed data folder, but after that the PostgreSQL service won't start.
I did find a option in the conf file:
#data_directory = 'ConfigDir'
I changed this to:
data_directory = 'C:\customer\data'
But again the service won't start after this.
The data directory used by the service is defined through the service's command line, which overrides any value defined in postgresql.conf.
You need to re-create the service in order to change the data directory, e.g.:
Remove the service:
pg_ctl unregister -N postgresql-9.1
postgresql-9.1 is the "real" name of the service, not the "Display Name". You can see that in the properties of the service inside the "services" app.
Then re-create the service with the correct data directory:
pg_ctl register -N postgresql-9.1 -D c:\customer\data
Another way of "debugging" startup errors in Windows, is to start Postgres from the command line (not through the service) because some errors during startup are not logged in the Postgres logfile but they are displayed on the command line. You can do that with e.g.:
pg_ctl start -D c:\customer\data
If the bin directory is not in your PATH you need to specify the full path to it on the command line, e.g.: c:\Postgres9.1\bin\pg_ctl
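Putting those pieces together for the data directory from the question, it would look something like this (a hedged sketch; the service name postgresql-9.4 and the install path c:\Postgres9.4 are assumptions you would replace with your own):
c:\Postgres9.4\bin\pg_ctl unregister -N postgresql-9.4
c:\Postgres9.4\bin\pg_ctl register -N postgresql-9.4 -D c:\customer\data
net start postgresql-9.4
If the service still won't start, running pg_ctl start -D c:\customer\data from a console as described above usually shows the actual error (version mismatch, permissions on the data directory, and so on).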

psql client failing to import dump file - the system cannot find the specified file

I'm attempting to import an SQL dump in pgAdmin 4 using the psql client; however, the error message returned is: The system cannot find the file specified.
Here is a screenshot of my psql client.
The file films.sql is currently stored on my desktop, but I suspect the default location that the psql client accesses is not my desktop? Is there any way to set the location that the client looks in, in order to resolve this?
The SQL file is viewable here: https://github.com/datacamp/courses-intro-to-sql/tree/master/datasets
I simply want to get the database onto my local machine so that I don't need to store queries in an online learning platform. It would be best if this database were available locally to query and practice on.
I've attempted to execute the whole SQL file as a query on the films database, but this does not seem to be working either and returns 'Asynchronous query execution/operation underway.
Query returned successfully in 388 msec.' However, it seems the asynchronous query never completes when I refresh the database.
Please can someone help?
Just give the path to your file:
psql -d my_database -f /path/to/the/file.sql
psql -d my_database -f C:/path/to/the/file.sql
Depending on whether you are on a unix/linux machine or Windows.
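For the films.sql file from the question, that would look something like this (a hedged example; the -U postgres user and the Windows user name placeholder are assumptions, while the films database name comes from the question):
psql -U postgres -d films -f "C:/Users/<your_user>/Desktop/films.sql"
Alternatively, from inside an already-open psql session, the \i meta-command reads the same file:
\i C:/Users/<your_user>/Desktop/films.sql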
Oh, and if you aren't familiar with file paths you may want to take a step back and become more familiar with general computer terminology before diving into a RDBMS. Your learning will be much easier if you have a solid foundation to build upon.
I suspect this question might be moot for the asker at this point, but for anyone else stumbling upon it like I did: the interactive connection info prompts are provided by a batch script (on Windows; I'd guess there's an analogous shell script for Unix) called runpsql.bat, which then just passes your inputs as command-line arguments to the psql.exe executable.
I was getting this error because I had migrated my Postgres installation and the batch script was calling a nonexistent path for psql.exe, hence The system cannot find the file specified. I edited runpsql.bat to point to the correct location of psql.exe and that resolved the issue.
So for OP, I would look into pgAdmin 4 and see where it's (presumably) calling runpsql.bat, then make sure that it calls psql.exe with the correct path.

Run tasks as another user

Using Capistrano v3, how can I run all remote tasks through su as another user? I cannot find anything in the official documentation (http://capistranorb.com/)
For my use case, there is one SSH user and one user for every virtual host. User A connects to the server and should run all commands as user B.
This isn't much of an answer, but I don't think what you are trying to do is possible without code modifications. Here's why:
There are two primary cases where you would use a different user:
Deployment needs to run as a particular user because of file ownership.
Deployment needs to run with root permissions.
In the first case, you generally would simply tell Capistrano to ssh as that user.
In the second case, you would tell Capistrano to run certain commands with passwordless sudo (http://capistranorb.com/documentation/getting-started/authentication-and-authorisation/#authorisation).
I can see a situation where only one user is available via SSH, but file ownership and permissions is based on another user, so you want to make su part of the workflow. I'm sure it is possible to do, but if I had to do it, I would be reading the source code of Capistrano and overriding how shell commands are executed. This would be non-trivial.
If you have a specific command like rm which needs to run as a different user, you may be able to use the SSHKit.config.command_map[:rm] = 'sudo rm' mechanism to do it.
In a nutshell, I don't think what you are asking for is, on its face, easily done with Capistrano. If you have a specific use case, we may be able to offer suggestions as to how you may approach the problem differently which plays better to Capistrano's strengths.
Good luck!
Update
Looking further, the capistrano-rbenv gem has a mechanism by which it has overridden the execution of all commands:
task :map_bins do
  SSHKit.config.default_env.merge!({ rbenv_root: fetch(:rbenv_path), rbenv_version: fetch(:rbenv_ruby) })
  rbenv_prefix = fetch(:rbenv_prefix, proc { "#{fetch(:rbenv_path)}/bin/rbenv exec" })
  SSHKit.config.command_map[:rbenv] = "#{fetch(:rbenv_path)}/bin/rbenv"
  fetch(:rbenv_map_bins).each do |command|
    SSHKit.config.command_map.prefix[command.to_sym].unshift(rbenv_prefix)
  end
end
https://github.com/capistrano/rbenv/blob/master/lib/capistrano/tasks/rbenv.rake#L17
You might have success with something similar.
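For example, if switching to the other user is allowed without a password, one option along those lines is to push a prefix onto the command map for the binaries you care about. A hedged sketch, not a tested recipe; the user name userb and the list of binaries are assumptions for illustration:
# e.g. in deploy.rb: run selected binaries as userb via passwordless sudo
%i[rm mkdir bundle rake].each do |cmd|
  SSHKit.config.command_map.prefix[cmd].unshift("sudo -u userb")
end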
In order to run all remote tasks through su as another user, I think you need to change ownership for that user.
I'm assuming the deployment folder name is /public_html/test.
sudo chown User:User /public_html/test  # chown changes the ownership so that the User user can read/write
umask 0002
sudo chown User:User /public_html/test/releases
sudo chown User:User /public_html/test/shared
Hope this will solve your issue!

Stop Oracle from generating sqlnet.log file

I'm using DBD::Oracle in perl, and whenever a connection fails, the client generates a sqlnet.log file with error details.
The thing is, I already have the error trapped by perl, and in my own log file. I really don't need this extra information.
So, is there a flag or environment variable for stopping the creation of sqlnet.log?
As the Oracle Documentation states: To ensure that all errors are recorded, logging cannot be disabled on clients or Names Servers.
You can follow the suggestion of DCookie and use the /dev/null as the log directory. You can use NUL: on windows machines.
From Metalink:
The logging is automatic, there is no way to turn logging off, but since you are on Unix server, you can redirect the log file to a null device, thus eliminating the problem of disk space consumption.
In the SQLNET.ORA file, set LOG_DIRECTORY_CLIENT and LOG_DIRECTORY_SERVER equal to a null device.
For example:
LOG_DIRECTORY_CLIENT = /dev/null
LOG_FILE_CLIENT = /dev/null
in SQLNET.ORA suppresses client logging completely.
To disable the listener from logging, set this parameter in the LISTENER.ORA file:
logging_listener = off
Are your clients on Windows, or *nix? If on *nix, you can set LOG_DIRECTORY_CLIENT=/dev/null in your sqlnet.ora file. Not sure if you can do much for a Windows client.
EDIT: Doesn't look like it's possible in Windows. The best you could do would be to set the sqlnet.ora parameter above to a fixed location and create a scheduled task to delete the file as desired.
Okay, as Thomas points out, there is a null device on Windows; use the same paradigm.
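For a Windows client, the equivalent would be something like this in sqlnet.ora (a hedged example based on the NUL: suggestion above):
LOG_DIRECTORY_CLIENT = NUL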
IMPORTANT: DO NOT SET "LOG_FILE_CLIENT=/dev/null". This causes the permissions of /dev/null to be reset each time you initialize the Oracle library, and when your umask does not permit the world-readable/writable bits, those get removed from /dev/null if the process has permission to chmod that file, i.e. when running as root.
And running as root may happen in something trivial, like php --version with the OCI PHP extension present!
Full details here:
http://lists.pld-linux.org/mailman/pipermail/pld-devel-en/2014-May/023931.html
You should instead use a path inside a directory that doesn't exist:
LOG_FILE_CLIENT = /dev/impossible/path
and hope nobody creates the directory /dev/impossible :)
For Windows, NUL is probably fine, as it's not an actual file there.