SVN Pre-Commit hook - perl

I am new to SVN. The repository is on Linux, and the developers work on Windows using the TortoiseSVN client. I implemented a pre-commit hook that requires a commit comment of at least 32 characters, and it works when committing on Linux. But when I commit from the TortoiseSVN client with a comment shorter than 32 characters, the commit still succeeds. Can anyone help me with this?
Here is the code:
$minchars = 10;
$svnlook = '/usr/bin/svnlook';
#--------------------------------------------
$repos = $ARGV[0];
$txn = $ARGV[1];
$comment = `$svnlook log -t "$txn" "$repos"`;
chomp($comment);
if ( length($comment) == 0 ) {
    print STDERR "A comment is required!";
    exit(1);
} elsif ( length($comment) < $minchars ) {
    print STDERR "Comment must be at least $minchars characters.";
    exit(1);
}
exit(0);

Try this:
Copy your script to another directory and modify it to use the -r parameter for the svnlook command rather than -t. Then, try it with a commit revision that should have failed.
For example:
$ cd $repo_dir/hooks
$ cp pre-commit $HOME
$ cd
$ vim pre-commit #Change from Transaction to Revision
$ # Revision #123 should have failed
$ ./pre-commit $repo $rev
If the script doesn't produce an error, you can try such things as printing out the comment in quotes to see whether or not it's zero in length, etc. It'll help you find the possible logic error in your script.
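For reference, the only line that should need to change for such a test is the svnlook call; something like this (assuming $rev holds the revision number you pass on the command line), plus a quoted debug print:

$comment = `$svnlook log -r "$rev" "$repos"`;
chomp($comment);
print "Comment was: '$comment'\n";   # the quotes make a zero-length comment obvious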
You should also put use strict; and use warnings; in your Perl scripts, because they easily pick up errors you might not realize you have. It's easy to forget that a particular variable was never set, or to mistype a variable name. These pragmas catch the kinds of mistakes that seem to cause about 90% of the problems in Perl:
#!/usr/bin/env perl

use strict;
use warnings;

my $svnlook  = "/usr/bin/svnlook";
my $minchars = 10;

my $repos = $ARGV[0];
my $txn   = $ARGV[1];

chomp( my $comment = qx($svnlook log -t $txn $repos) );

if ( not $comment ) {
    die "A comment is required!\n";
}
elsif ( length $comment < $minchars ) {
    die "Comment must be at least $minchars characters.\n";
}
exit 0;
You can also use my pre-commit script. It can verify the length and structure of the commit comment; for example, you could require the comment to include a defect ID. It also lets you control who has commit rights to which parts of your repository, and enforce the use of certain properties on certain files. For example, you might want to make sure all shell and Perl scripts have svn:eol-style set to either native or LF.
It can also allow users to create a tag, but not allow them to make changes to a tag once it's created. This prevents users from accidentally checking out a tag, making a change, and then committing it.
And, one more thing:
Take a look at a continuous build system such as Jenkins. One of the things I've discovered is that by merely doing continuous builds, developers naturally improve their commit messages without doing any sort of enforcement.
That's because commit messages are now easily visible. Jenkins shows the changes in each build alongside the commit comments, whether the build itself was successful, test results, and so on. Suddenly, the commit comments become much more useful to the developers themselves, and they simply write better ones.
You can look at an svn log and see when I implemented Jenkins: before, there were either no commit comments at all, or such useful things as "reformatted code" or the very helpful "made changes" (both longer than 10 characters). Afterward, the comments read "Fixed BUG-1233. Checked for null pointer before passing it to foo method".

Related

Handling Perforce message in Perl when there are no new files submitted

I am trying to write a Perl subroutine that returns an array of the files that have been modified and submitted to the Perforce repository from $previous_date until now. This is what the subroutine looks like:
sub p4_files {
    my ($previous_date) = @_;
    my $files = "//depot/project/design/...rtl.sv";
    my $p4cmd = "p4 files -e $files\@$previous_date,\@now";
    my @filelist = `$p4cmd`;
    chomp @filelist;
    return @filelist;
}
The subroutine works as expected if there are files submitted between the given dates. However, it can happen that no new changes have been made, and executing the p4 files command then returns a message instead:
prompt% p4 files -e //depot/project/design/...rtl.sv@25/05/2017,@now
//depot/project/design/...rtl.sv@25/05/2017,@now - no revision(s) after that date.
How should I handle this in my Perl script? I would like to exit the script when such a situation is encountered.
Unfortunately, p4 returns exit code 0 regardless of whether it finds some files or whether it returns the "no revision(s) after that date" message. That means you have to parse the output.
The simplest solution is probably to exit the script if $filelist[0] =~ / - no revision\(s\) after that date\./. The downside is that we don't know how "stable" that message is: will future versions of Perforce emit exactly this message, or might they reword it?
Another option is to use the -s switch: my $p4cmd = "p4 -s files -e $files\@$previous_date,\@now";. That causes p4 to prepend the "severity" to every line of output. If a file is found, the line starts with info:, while the "no revision(s) after that date" message starts with error:. That looks a bit more stable to me: exit if grep /^error:/, @filelist. Watch out for the last line: when you use the -s switch, you get an extra line with the exit code.
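A minimal sketch of that approach (variable names follow the question's code; the pop discards the trailing exit: line that -s appends):

my @output = `p4 -s files -e $files\@$previous_date,\@now`;
chomp @output;
pop @output;                          # drop the trailing "exit: 0" status line
exit if grep { /^error:/ } @output;   # e.g. "no revision(s) after that date"
my @filelist;
for (@output) {
    push @filelist, $1 if /^info: (.*)/;   # keep just the payload of info: lines
}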
Yet another option would be to use P4Perl. In that case you'd get the results as structured data, which will obviate the parsing. That's arguably the most elegant, but you'd need the P4Perl module first.
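If you go that route, a rough sketch could look like this (untested; method names are as documented for the P4 CPAN module):

use P4;

my $p4 = P4->new;
$p4->Connect() or die "Failed to connect to the Perforce server\n";

# In tagged mode (the default), each result is a hash ref with fields
# such as depotFile and rev, so there is nothing to parse.
my @results = $p4->Run( "files", "-e", "$files\@$previous_date,\@now" );

exit 0 if $p4->ErrorCount;   # "no revision(s) after that date" shows up as an error

my @filelist = map { $_->{depotFile} } @results;
$p4->Disconnect;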
I suggest using the -F flag to tame the output:
my $p4cmd = "p4 -F %depotFile% files -e $files\#$previous_date,\#now";
and then go ahead with the:
my #filelist = `$p4cmd`;
good_bye() unless #filelist; # Say goodbye and exit.
@filelist will be empty if there are no lines of output containing a %depotFile% field, and now your caller doesn't need to parse the depot path out of the standard p4 files output.
If you want to massage the p4 files output further, take a look at p4 -e files (args) so you can see what the different fields are that you can plug into -F.
Just do nothing if the array isn't populated.
my @filelist = `$p4cmd`;
good_bye() unless @filelist; # Say goodbye and exit.
chomp @filelist;
To suppress the message, just redirect stderr of the command to the bit bucket:
my $p4cmd = "p4 files -e $files\#$previous_date,\#now 2> /dev/null";

How do I run shell commands in a CGI program as the nobody user?

I want to run shell commands in a CGI program (written in Perl). My program doesn’t have root permission. It runs as nobody. I want to use this code:
use strict;
system <<'EEE';
awk '{a[$1]+=$2;b[$1]+=$3}END{for(i in a)print i, a[i], b[i]|"sort -nk 3"}' s.txt
EEE
I can run my code successfully with perl from the command line but not as a CGI program.
Based on the code in your question, there are at least four possibilities for failure.
1. The nobody user does not have permission to execute your program.
2. The Perl code in your question has no shebang (#!) line. You are trying to run awk, so I assume you are running on some form of Unix. If your code is missing this line, your operating system does not know how to run your program.
3. The file s.txt is either not in the executing program's working directory, or it is not readable by the nobody user.
4. For whatever reason, awk is not reachable via the PATH of your executing program's environment.
To quickly diagnose such low-level problems, try to make all error output show up in the browser. One way to do this is to add the following just after the shebang line in your code.
BEGIN {
    print "Content-type: text/plain\n\n";
    open STDERR, ">&", \*STDOUT or print "$0: dup: $!";
}
The output will render as plain text rather than HTML, but this is a temporary measure to see your program's output. By wrapping it in a BEGIN block, the code executes as soon as it has been parsed. Redirecting STDERR means your browser also gets anything written to the standard error.
Another way to do this is with the CGI::Carp module.
use CGI::Carp 'fatalsToBrowser';
This way, errors go to the browser and also to the web server’s error log.
If you still see 500-series errors from your server, the problem is happening at a lower level: probably some failure to start perl. Go examine your server’s error log. Once your program is executing, you can remove this temporary redirection of error output.
Finally, I recommend changing your program to
#! /usr/bin/perl -T
BEGIN { print "Content-type: text/plain\n\n"; }
use strict;
use warnings;
$ENV{PATH} = "/bin:/usr/bin";
my $input = "/path/to/your/s.txt";
my $buckets = <<'EOProgram';
{ a[$1] += $2; b[$1] += $3 }
END { for (i in a) print i, a[i], b[i] }
EOProgram
open STDIN, "-|", "awk", $buckets, $input or die "$0: open: $!";
exec "sort", "-nk", 3 or die "$0: exec: $!";
The -T switch enables taint mode, a security dataflow analysis that prevents you from using unsanitized input in system operations such as open, exec, and so on, which an attacker (or a benign user supplying unexpected input) could otherwise use to harm your system. You should always add -T to CGI programs and to any other code that runs on behalf of another user.
Given the nature of your awk program, a content type of text/plain seems reasonable. Output it as soon as possible.
With taint mode enabled, be explicit about the value of your PATH environment variable. If instead you stick with whatever untrusted PATH your program inherits, attempting to run external programs will fail.
Nail down the full path of your input. This will eliminate surprises.
Using the multi-argument forms of open and exec eliminates the shell and its argument parsing. (For completeness, system also has a similar multi-argument form.) Yes, writing it this way can mean being a little more deliberate (such as breaking out the arguments and setting up the pipeline yourself), but it also avoids nasty surprises.
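For completeness, a tiny illustration of the multi-argument system (reusing the $buckets program and $input path from the script above):

# Each argument is passed to awk verbatim; no shell is involved.
system("/usr/bin/awk", $buckets, $input) == 0
    or die "$0: awk failed: $?";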
I'm sure nobody is allowed to run shell commands. The problem is that nobody doesn't have permission to open the file s.txt. Add read permission for everyone to s.txt, and add execute permission to everyone on every directory up to s.txt.
I would suggest finding out the fully qualified path of awk and specifying it directly. The nobody user that httpd runs as likely has a very minimal $ENV{PATH}; displaying $ENV{PATH} from your CGI program will probably confirm this.
This is a good thing. I wouldn't modify the path, but just specify the full path, /usr/bin/awk or whatever it is.
If you have shell access and it works there, type 'which awk' to find this out.
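For example, a throwaway CGI snippet to display the inherited PATH (remove it once you've looked):

print "Content-type: text/plain\n\n";    # header first, so the browser renders it
print "PATH is: $ENV{PATH}\n";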
"I can run my code successfully with perl from the command line but not as a CGI program."
What web server are you running under? For instance, Apache requires printing a CGI header, i.e. print "Content-type: text/plain; charset=utf-8\n\n", or
use CGI;
my $q = CGI->new();
print $q->header('text/html');
(See CGI)
Apache will complain in its log (error.log) about "premature end of script headers" if this is the case.
You could just do it inline without having to fork out to another process...
if ( open my $fh, '<', 's.txt' ) {
    my %data;
    while (<$fh>) {
        my ($c1, $c2, $c3) = split;
        $data{a}{$c1} += $c2;
        $data{b}{$c1} += $c3;
    }
    foreach ( sort { $data{b}{$a} <=> $data{b}{$b} } keys %{ $data{b} } ) {
        print "$_ $data{a}{$_} $data{b}{$_}\n";
    }
} else {
    warn "Unable to open s.txt: $!\n";
}

How can I do a CVS checkout without using the Cvs module?

How can I do a CVS checkout (cvs co) in Perl without using the Cvs module?
Use system: http://perldoc.perl.org/functions/system.html
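A minimal sketch of that approach (the repository root and module name here are made up):

# Shell out to the cvs client; the multi-argument form of system
# avoids shell quoting issues.
my @cmd = ("cvs", "-d", ':pserver:anonymous@cvs.example.org:/cvsroot',
           "checkout", "mymodule");
system(@cmd) == 0 or die "cvs checkout failed: $?";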
While you asked not to use a module, I always recommend one anyway. CPAN kicks up Cvs::Simple; you may want to consider it, or at least use its source as a reference if you have business reasons for avoiding the module.
I wrote this up in my blog but here it is in plain text.
I had to download and install the Expect module (expectperl) and the IO::Tty Perl module. This little Perl script successfully does the cvs update, even with the ssh passphrase prompting.
#!/usr/bin/perl
use strict;
use warnings;
use Expect;

chdir("/files/hudson_local/jobs/MOJARRA_1_2X_ROLLING_GLASSFISH_2_1_1/workspace");
# Single quotes keep the @ from being interpolated as an array.
$ENV{"CVSROOT"} = ':ext:8bit@java.net/cvs/javaserverfaces-sources~cvs-repository';

my $cvs = Expect->spawn("cvs update -d -P")
    or die "Couldn't spawn cvs, $!";
unless ($cvs->expect(30, "Enter passphrase for key '/files/hudson_local/.ssh/id_rsa':")) {
    die "Never got the passphrase prompt";
}
print $cvs "not the real password\r";
unless ($cvs->expect(300, "cvs update: Updating www/legal/jsf-cddl")) {
    die "Never saw update starting";
}
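One hedged addition: as written, the script may exit while cvs is still working. Expect's soft_close waits for the spawned process to finish:

$cvs->soft_close();   # wait for the cvs process to run to completion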

How to redirect SVN stderrs to /dev/null using perl

I have a script that checks whether any files exist under an svn working copy path but have not been added to svn. It works fine for me, but it prints output like the following while adding and sending files:
Adding 1/a
Sending 1/a
Transmitting file data ...........
Committed revision 529.
Code:
use strict;
use warnings;

sub notAdded {
    my @svnstatus = `svn st`;
    foreach my $status (@svnstatus) {
        chomp($status);
        if ($status =~ m/^\?/) {
            my ($symbol, $left) = split(' ', $status);
            system("svn add $left");
        }
    }
}

&notAdded();
system("svn commit -m 'comment'");
Can anyone please suggest how I can redirect this output to /dev/null within the script?
The normal way to hide unwanted output with SVN is to use the -q (quiet) flag:
svn -q add nothere
displays nothing.
Or the really easy way:
system("svn add $left 2>/dev/null");

How to search an entire CVS repository (all branches/history/comments)?

If I want to essentially grep every line that has ever been in the repository, is there a way to do it? I know this would take a long time for large projects.
If not all inclusive, at least just the current branch and its entire source history?
Edit: I should have been more explicit. What if I don't have direct access to the server that the CVS repository is on? So I couldn't directly grep the filesystem that has the CVS repository.
There is no way to do this with the standard CVS tools without access to the repository. A third-party tool may be able to (I don't know of one, although CS-CVS seems to claim to). To do it programmatically, you would have to run cvs log on all the relevant files, and then retrieve and search each version reported in the logs (cvs log is a CVS command that shows you the revision history of a file, but not its contents).
Here's what I recently used, in a case where I didn't have access to the server. It seemed to work that time. Call it from inside a working copy, with cvs in the PATH. Note that this doesn't search commit messages, but you can simply grep cvs log output for that (see the one-liner after the script).
#!/usr/bin/perl
# Searches CVS diffs and first revisions behind the current working
# directory for an expression (perlre syntax).
# Synopsis: cvsgrep [-n] <search-expression> [<file_1> ... <file_n>]
# -n means that contents of matching files should not be printed to stdout.
use Getopt::Std;

my %options = ();
getopts( "n", \%options );
my $no_content_dump = $options{"n"};

my $search_term = shift
    or die "Error: usage is: cvsgrep [-n] <search-expression>"
         . " [<file_1> ... <file_n>]";

sub quote_fn {
    my $fn = shift;
    $fn =~ s/\'/\'\"\'\"\'/g;
    "'" . $fn . "'";
}

my $args_str;
while (@ARGV) {
    my $arg = shift;
    $args_str .= ' ' if $args_str;
    $args_str .= &quote_fn($arg);
}

print
    "Searching for term: $search_term",
    ( $args_str ? " in: $args_str" : "" ),
    "\n";

open CVSLOGH, "cvs log -N $args_str|" or die "Cannot execute cvs log: $!";
my @files_revisions = ();
my $cur_file;
my $cur_revision;
while (<CVSLOGH>) {
    chop;
    if (/^Working file\:\s*(.*)$/) {
        $cur_file     = $1;
        $cur_revision = '';
    }
    elsif (/^revision\s+(.*)$/) {
        $cur_revision = $1;
    }
    elsif ( (/^\=\=\=\=/ || /^\-\-\-\-/) && $cur_revision ) {
        push @files_revisions, { file => $cur_file, rev => $cur_revision };
    }
}
close CVSLOGH;

my $matchcount   = 0;
my $count        = 0;
my $progress_msg = "Scanned %d out of %d commit(s)\r";
my $erase_ln     = ( " " x ( length($progress_msg) + 20 ) ) . "\r";
foreach my $file_revision (@files_revisions) {
    printf( $progress_msg, $count++, scalar(@files_revisions) );
    my ( $file, $rev ) = ( $file_revision->{file}, $file_revision->{rev} );
    $rev =~ /^(.*\.)([0-9]+)/;
    my $revbase      = $1;
    my $revlastdigit = $2;
    my $rev1         = $revbase . ( $revlastdigit - 1 );
    my $diffcommand  = "cvs diff -N -r $rev1 -r $rev " . &quote_fn($file);
    open CVSDIFFH, "$diffcommand|" or die "Cannot execute cvs diff: $!";
    my $diffresult;
    while (<CVSDIFFH>) {
        if (/^[\<\>]/) {
            s/^.//;
            $diffresult .= $_;
        }
    }
    close CVSDIFFH;
    if ( $diffresult =~ /$search_term/s ) {
        print "${erase_ln}FOUND: in diff for $file $rev1:$rev\n";
        $matchcount++;
        system($diffcommand) unless $no_content_dump;
    }
}
print "${erase_ln}Done ($matchcount match(es)).\n";
It depends on what you're looking for. CVS version files contain all of the edits that have ever happened to the file, in plain text. So if you're simply looking for all files that contain a particular word, do a recursive grep on the repository (sketched below).
If you're looking to find specific versions that contain those words, then you're going to have to extract the versions from the repository, which is expensive. However, if you can limit the set of files by grepping the repository, then it's not so bad.
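For the recursive grep, something along these lines works on the server (CVS/RCS history files end in ,v; the repository path is hypothetical):

grep -r --include='*,v' 'search-term' /path/to/cvsroot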