Get PHP to compile ant releases safely - CentOS

On CentOS I would like to give the apache user permission to run "ant release" in a home directory it does not own. How do I do that? The ant release I am using is part of the Android SDK. I have a directory /home/myuser/android_project/, and ant release runs fine from there, but I would like to give apache the permissions it needs to run it, so that I can run it as
<?php shell_exec('/home/myuser/android_project/ant release') ?>.
The gotcha
Also there is an issue: since I sign the ant release, I would like to have the password handled, perhaps in a file, so that PHP can somehow magically "sign" the ant release.
Note to Mr Tinker: hold the horses - I know this might fall foul of the forum topic police, but in my considered opinion it is a unix issue. I.e., I know how PHP does shell_exec, so I need no programming help; I know how to run ant release manually, so I need no installation help. I would like to sew together these two disparate manual "things" within Linux (the CentOS server), so I believe 100% this is a unix issue.

As you've already stated, you need to give the apache user permission to execute the /home/myuser/android_project/ant file.
tl;dr: run the following command (be warned, it might not be the most secure thing in the world):
chmod 777 /home/myuser/android_project/ant
If you're interested in why this might fix your problem, continue to read below.
First, you need to get some more information.
Run the following command:
ls -l /home/myuser/android_project/ant
The ls -l command will give you the read, write, and execute permissions for the specified file, along with the ownership information. The first column contains the permission information. The 3rd column indicates the owning user, and the 4th column indicates the owning group.
For example:
$ ls -l /etc/passwd
-rw-r--r--. 1 root root 2177 Aug 26 21:23 /etc/passwd
 ^^^^^^^^^
 |  |  |
 |  |  +--- All Users & Groups
 |  +------ Specified Group Owner
 +--------- Specified User Owner
This can be interpreted as user root and group root owning the /etc/passwd file.
The permissions are read as groups of 3 rwx characters. The first group applies to the owning user, the second to the owning group, and the third to everyone else on the system. The permissions in this example mean that the root user can read and write the file, the root group can read it, and everyone else can read it.
Now, each group of permissions can be represented as an octal digit:
--- == 0
--x == 1
-w- == 2
-wx == 3
r-- == 4
r-x == 5
rw- == 6
rwx == 7
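For example, reading the /etc/passwd permissions above as octal digits: rw- is 6 for the user, r-- is 4 for the group, and r-- is 4 for everyone else, so the same permissions could be set with:
chmod 644 /etc/passwd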
You now have enough information to understand why the chmod 777 command above works: it gives everyone on the system permission to read, write, and execute that ant file.
Ideally, you would only give the minimum permissions required to allow apache to execute the file; I'll leave that much as an exercise to the reader.
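As a starting point for that exercise, here is a sketch of a least-privilege setup. It assumes the apache user and group are both named apache (the CentOS default) and that the legacy Android ant build writes its output under bin/ and gen/ in the project directory:
# world-traverse on the home directory so apache can reach the project
chmod o+x /home/myuser
# hand the project to the apache group, read-only plus directory traversal
chgrp -R apache /home/myuser/android_project
chmod -R g+rX /home/myuser/android_project
# the build writes into bin/ and gen/, so the group needs write access there
chmod -R g+w /home/myuser/android_project/bin /home/myuser/android_project/gen
As for the signing gotcha: the legacy Android SDK ant build reads its keystore settings from an ant.properties file in the project directory, so the password can live in a file readable only by the owner and the apache group. The property names below are the ones the SDK's build.xml expects; the keystore path, alias, and passwords are placeholders:
cat >> /home/myuser/android_project/ant.properties <<'EOF'
key.store=/home/myuser/release.keystore
key.alias=releasekey
key.store.password=YourStorePassword
key.alias.password=YourKeyPassword
EOF
chown myuser:apache /home/myuser/android_project/ant.properties
chmod 640 /home/myuser/android_project/ant.properties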


Oracle mkdir command broken after update

I recently upgraded SQL Developer to 22.x; I can't remember the previous version it was on. Now commands such as mkdir and spool are failing in scripts I used to run daily.
For example
host mkdir "C:\Users\Isaac\Requests\"
This script was completely unchanged and now it fails with
The filename, directory name, or volume label syntax is incorrect
Spool also fails with
SP2-0556: Invalid file name.
Again, this was a script I would run every single day, for the past year. I can't find what is causing this. Any ideas would be really helpful.
Remove the "quotes"
clear screen
host mkdir c:\Users\JDSMITH\Requests\
cd c:\Users\JDSMITH\Requests\
spool regions.csv
select /*csv*/ * from regions;
spool off
!type regions.csv
!dir
There's a chance this is a side effect of going to Java 11 from Java 8.
If you need a directory name with spaces, you can also use SQLcl.
See 'sql.exe' in your bin directory, or download the latest from oracle.com/sqlcl
Disclaimer: I'm an Oracle employee and a product manager for SQL Developer.

Blast+ Local Configuration: How to configure nt and nr databases?

I am configuring Blast+ on my Mac (macOS Sierra) and am having trouble configuring the nr and nt databases that I downloaded locally. I am trying to follow NCBI's instructions here, and am getting hung up on the Configuration and Example Execution steps.
They say to change my .bash_profile so that it says:
export PATH=$PATH:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/ncbi-blast-2.6.0+/bin
That works fine, and they say to configure a path for BLASTDB "similarly," but pointing to where my DB will be, so I have done this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/nt.00
which specifies the exact folder that I got when I unzipped the nt tar file from their FTP. With this path, if I run the command...
blastn -query test_query.fa -db nt.00 -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
then it runs successfully and I get results, but I am worried that these are only being checked against the nt.00 section of the entire nt database, especially because if I run my test_query.fa sequence on the web BLAST, I get different results.
Also, their instructions say that the path only needs to point to the folder that contains the whole database folder nt.00 from the tar I unzipped - not the specific nt.00 itself - which in my case would just be "blastdb/" (as opposed to "blastdb/nt.00/", which then contains nt.00.nhd, nt.00.nal, etc.). That makes sense, because when I am working I want to be able to run blastn on the nt database but also blastp on the nr one, etc., by changing the -db flag on my command, and there shouldn't be a problem with having them all in this folder, right? But if I must specify the path for BLASTDB with the nt.00 DB added to the end, how could I ever use nr.00 in the same folder (blastdb/)? Essentially, I want to do as the instructions say and just have this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/
And then depending on what database I want to use I could just say so after the -db flag on my command. But when I make the path like that above, it gives me this error:
BLAST Database error: No alias or index file found for nucleotide database [nt] in search path [/Users/LJStout::/Users/LJStout/Documents/Luke/Research/Pedulla 17-18/blast/blastdb:]
I have tried running that same blastn command from above and swapping out "nt" for "nt.00", and have tried these commands with the path for BLASTDB ending in both "blastdb/" and "blastdb/nt" and of course "blastdb/nt.00" which is the only one that runs without errors.
Here's an example of another thread I read where the OP is worried about his executions not checking the entire nt.00 folder; that was different from my problem, however.
Thanks for your help!
This whole problem came down to having the nt.00 & nr.00 folders - the original folders that result from unzipping their respective .tar.gz's - in the same parent folder, when it should be their contents that are in the same parent folder. I simply deleted the folders they came in and copied the contents over to my new, singular parent. I was somewhat misled by the instructions; it was a simple mistake. Now I have one folder, blastdb/, that contains all of the contents of every database I plan on using, including nt, nr, and refseq.
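In shell terms, the fix looks something like this (a sketch using the paths from the question; nt.00/ and nr.00/ are the wrapper folders that came out of the tarballs):
cd "$HOME/Documents/Luke/Research/Pedulla 17-18/blast/blastdb"
# move each database's files up into the single parent folder, then
# remove the now-empty wrappers
mv nt.00/* nr.00/* .
rmdir nt.00 nr.00
# point BLASTDB at the parent folder only
export BLASTDB="$HOME/Documents/Luke/Research/Pedulla 17-18/blast/blastdb"
# -db now selects a database by name: nt for blastn, nr for blastp, etc.
blastn -query test_query.fa -db nt -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5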

Capistrano v3 not able to cleanup old releases

Since I'm running my rails app as root, it creates files that are owned by root in the tmp directory. Because of this
cap production deploy:cleanup
can't remove old releases because it is not run as root.
I've looked at the capistrano v3 code, but I don't see a way to run the cleanup command as root. Is this option missing, or is this problem occurring because I'm doing something wrong in another place of the deployment flow?
I start the app as root because I need to bind to port 80.
What you can also do is trigger a task just before cleaning up the old releases:
namespace :deploy do
  before :cleanup, :cleanup_permissions

  desc 'Set permissions on old releases before cleanup'
  task :cleanup_permissions do
    on release_roles :all do |host|
      releases = capture(:ls, '-x', releases_path).split
      if releases.count >= fetch(:keep_releases)
        info "Cleaning permissions on old releases"
        directories = (releases - releases.last(1))
        if directories.any?
          directories.each do |release|
            within releases_path.join(release) do
              execute :sudo, :chown, '-R', 'deployuser', 'path/to/your/files/written/by/root'
            end
          end
        else
          info t(:no_old_releases, host: host.to_s, keep_releases: fetch(:keep_releases))
        end
      end
    end
  end
end
Note that you'll need to give your deployment user the right to execute this specific sudo command (with a sudoers definition file).
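For example, a sudoers drop-in along these lines would do it (a sketch; deployuser and the release path are placeholders, and you should narrow the command pattern to your real paths):
# /etc/sudoers.d/deployuser
deployuser ALL=(root) NOPASSWD: /bin/chown -R deployuser /var/www/releases/*
Check the file with visudo -cf /etc/sudoers.d/deployuser before relying on it.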
I've looked at the capistrano v3 code, but I don't see a way to run the cleanup command as root. Is this option missing, or is this problem occurring because I'm doing something wrong in another place of the deployment flow?
There is no secret sauce in Capistrano; we rely on you having correctly set up the permissions for your deploy user as documented at http://www.capistranorb.com/
Removing directories requires write permissions on the parent directory, that is to say, given the following directory structure:
/var/www/releases/
\- 20131015180000
\- 20131015181500
\- 20131015183000
You need write permission on the /var/www/releases/ directory, because the list of files and directories in a directory is stored in the directory itself.
From a similar Stack Overflow question:
In UNIX and Linux, the ability to remove a file is not determined by the access bits of that file. It is determined by the access bits of the directory which contains the file.
From the Wikipedia article on Unix File Permissions:
The write permission grants the ability to modify a file. When set for a directory, this permission grants the ability to modify entries in the directory. This includes creating files, deleting files, and renaming files.
One of the things you may want to do is create a group called app or web on your Linux box and add root and the deploy user to it. Then, as part of your deployment, chmod the release_path permissions to g+s, which ensures that any new files created by the root user inherit the shared group; together with group write permission, that makes them manageable by the deploy user.
You should then be able to remove the old folders as deploy user.
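A sketch of that setup, assuming the group is called app and the deploy user is called deploy:
# shared group for root and the deploy user
groupadd app
usermod -aG app root
usermod -aG app deploy
chgrp -R app /var/www/releases
# g+s (setgid): new files and directories under releases/ inherit the app
# group; g+w on the directory is what actually permits deletion later
chmod g+ws /var/www/releases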
I was running into similar issues, so, to confirm, I logged into my web server via SSH and tried rm -rf [directory], which also failed due to the same permissions issues, even logged in as admin. Running chmod -R 755 [directory]/ and then rm -rf [directory]/ did work, though.
To fix it, in the project's silverstripe.rake file, I changed the command being run from:
execute :chown, "-R [user]:[group] /path/to/project"
to:
execute :chmod, "-R 755 /path/to/project"
So far, no more issues with deleting the oldest release when running cap [release name] deploy.

dpkg: How to use triggers?

I wrote a little CDN server that rebuilds its registry pool when new pool-content-packages are installed into that registry pool.
Instead of having each pool-content-package call the init.d of the cdn-server, I'd like to use triggers. That way it would restart the server only once at the end of an installation run, after all packages were installed.
What do I have to do to use triggers in my packages with debhelper support?
What you are looking for is dpkg-triggers.
One solution, using debhelper to build the Debian packages, is this:
Step 1)
Create the file debian/<serverPackageName>.triggers (replace <serverPackageName> with the name of your server package).
Step 1a)
Define a trigger that watches the directory of your pool. The content of the file would be:
interest /path/to/my/pool
Step 1b)
Alternatively, you can define a named trigger, which has to be fired explicitly (see step 3).
Content of the file:
interest cdn-pool-changed
The name of the trigger, cdn-pool-changed here, is up to you; you can choose whatever you want.
Step 2)
Add a handler for the trigger to the file debian/<serverPackageName>.postinst (replace <serverPackageName> with the name of your server package).
Example:
#!/bin/sh
set -e

case "$1" in
    configure)
    ;;
    triggered)
        # here is the handler
        /etc/init.d/<serverPackageName> restart
    ;;
    abort-upgrade|abort-remove|abort-deconfigure)
    ;;
    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

#DEBHELPER#

exit 0
Replace <serverPackageName> with the name of your server package.
Step 3) (only for named triggers, see step 1b)
Add to every content package the file debian/<contentPackageName>.triggers (replace <contentPackageName> with the name of each content package).
Content of the file:
activate cdn-pool-changed
Use the same trigger name you defined in step 1b.
More detailed Information
The best description of dpkg triggers I could find is "How to use dpkg triggers". You can get the corresponding git repository with examples here:
git clone git://anonscm.debian.org/users/seanius/dpkg-triggers-example.git
I had a need for triggers and read and re-read the docs many times. I think the process is not clearly explained - or rather, what goes where is not clearly explained. Here I hope to clarify the use of Debian package triggers.
Service with Configuration Directory
A service reading its settings in a specific directory can mark that directory as being of interest.
Say I create a new service which reads settings from /usr/share/my-service/config/...
That service gets two additions:
In its debian directory I add my-service.triggers
And here are the contents:
# my-service.triggers
interest /usr/share/my-service/config
This means if any other package installs or removes a file from that directory, the trigger enters its "needs to be run" state.
In its debian directory I also add my-service.postinst
And I have a script as follows to check whether the trigger happened and run a process as required:
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "/usr/share/my-service/config" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
That's it.
Now packages adding extensions to your service can add their own configuration file(s) under /usr/share/my-service/config (or a directory under /etc/my-service/my-service.d/... or /var/lib/my-service/..., although that last one should be reserved for dynamic files, not files installed from a package), and dpkg automatically calls your postinst script with:
postinst triggered /usr/share/my-service/config
# where /usr/share/my-service/config is your <interest-path>
This call happens only once, after all the packages were installed - hence the advantage of having a trigger in the first place. This way each package does not need to know that it has to restart my-service, and it does not happen more than once, which could cause all sorts of side effects (e.g. the service tries to listen on a TCP port and gets the error: address already in use).
IMPORTANT: keep in mind that the postinst should include a line with #DEBHELPER#.
So you do not have to do anything special in other packages. Only make sure to install the configuration files in the correct directory and dpkg picks up from there (i.e. in my example under /usr/share/my-service/config).
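For instance, a hypothetical extension package could ship its settings with nothing more than a debian/<contentPackageName>.install file containing a single line (the file name my-extension.conf is made up for this example):
conf/my-extension.conf usr/share/my-service/config
dh_install copies the file into place, and because the directory is registered as an interest, dpkg runs the my-service postinst with the triggered argument once the whole installation run finishes.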
I have an extension to BIND9 called ipmgr which makes use of .ini files saved in a specific folder. It uses the files to generate DNS zones (far fewer errors that way!), and it includes support for getting letsencrypt certificates and settings for dmarc/dkim. This package uses this case: a simple directory where configuration files get installed. Other packages do not need to do anything other than install files in the right place (/usr/share/ipmgr/zones, for this package).
Service without a Configuration Folder
In some (rare?) cases, you may need to trigger something in a service which is not driven by the installation of a new configuration file.
In this case, you can use an arbitrary name (it should include your package name to make sure it is unique since this name is global to the entire Debian/Ubuntu system).
To make this one work, you need three files, one of which is a triggers file in the other packages.
State the Interest
As above, we have an interest. In this case, the interest is stated as a name on its own. The dpkg system distinguishes between a name and a path because a name cannot include a slash (/) character. Names are limited to ASCII characters, excluding control characters and spaces. I would suggest you stick to a-z, 0-9 and dashes (-).
# my-service.triggers
interest my-service-settings
This is useful if you cannot simply track a folder. For example, the settings could come from a network connection that a package offers once installed.
Listen for the Triggers
Again, as above, you need a postinst script in your Service Package. This captures the trigger and allows you to run a command. The script is the same, only you test for the name instead of the folder (note that you can have any number of triggers, so you could also have both: a folder as above and a special name as here).
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "my-service-settings" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
The Trigger
As mentioned above, we need a third file. An arbitrary name is not going to be triggered automatically by dpkg; it has no way of knowing on its own that your other package wants to fire this trigger (although everything else is fairly automated as it is).
So in other packages, you create a trigger file which looks like this:
# other-package.triggers
activate my-service-settings
Now we recognize the name: it is the same as the interest stated above.
In other words, if the trigger needs to run for something other than just the installation of files in a given location, use a special name and add this triggers file with the activate keyword.
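Alternatively, a package can fire the named trigger explicitly from one of its maintainer scripts with the dpkg-trigger(1) tool, which is useful when the activation depends on more than just unpacking files. A sketch, guarded the same way as the postinst scripts above:
# other-package.postinst (excerpt)
if [ "$1" = "configure" ]
then
    dpkg-trigger my-service-settings
fi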
Other Features
I have not tested the other features of the dpkg-trigger(1) tool. There are other keywords supported in the triggers files:
interest
interest-await
interest-noawait
activate
activate-await
activate-noawait
The deb-triggers manual page has additional information about those. I am not too sure what await/noawait implies, other than that the trigger may happen at any time when noawait is used.
Automatic Trigger Added
The build system on Ubuntu (probably Debian too) automatically adds a triggers file with the following when your package includes a library:
$ cat triggers
# Triggers added by dh_makeshlibs/11.1.6ubuntu2
activate-noawait ldconfig
I suggest you exercise caution if your package includes libraries. If you have your own triggers file, I do not know whether this addition will still happen automatically.
This also shows us a special case where it wants to use noawait. If I understand correctly, it has to run the ldconfig trigger ASAP so your commands work as expected right after the unpack. Otherwise ldd would not know anything about your newly installed library.

How do I search a CVS repository for a particular file?

Is there any way to do it? I only have client access and no access to the server. Is there a command I've missed or some software that I can install locally that can connect and find a file by filename?
You could grep the output of
cvs rlog -Nh .
(note the period character at the end - this effectively means: the whole repository).
That should give you info about the whole shebang including removed files and files added on branches.
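For example, to look for a file by name (myfile is a placeholder), filter the header lines that name each RCS file:
cvs rlog -Nh . | grep -i 'RCS file:.*myfile'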
You can use
cvs rls -Rde <modulename>
which will give you all files in <modulename> recursively, e.g.
foo:
/x.py/1.2/Mon Dec 1 23:33:51 2008//
/y.py/1.1/Mon Dec 1 23:33:31 2008//
D/bar////
foo/bar:
/xxx/1.1/Mon Dec 1 23:36:38 2008//
Notice that the -d option also gives you deleted files; I'm not sure whether you wanted that. Without -e, it only gives you the file names.
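To search those listings for a particular file name (myfile is again a placeholder), you can pipe the recursive listing through grep:
cvs rls -Re <modulename> | grep -i myfile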