How to patch with a .diff file? - diff

I found this patch on SourceForge (cocoa.diff), and it implies that I can patch using the cocoa.diff file. However, I can't seem to figure out how to use the .diff file.
Thanks for any help!
EDIT: I tried "patch p1 < cocoa.diff" and the output was "patch: ** Only garbage was found in the patch input." Does this mean that the .diff file is corrupt or incorrect? Also, I'm using Mac OS X 10.6.

The file cocoa.diff seems OK; however, the link failed the first time I tried it and returned some error HTML, which indeed looks like garbage to patch. Your command is nearly right (it lacks a dash: patch -p1 < cocoa.diff should work better).
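For reference, a minimal sketch of how this is typically run, assuming the file names inside cocoa.diff carry one leading directory component (which is what -p1 strips):
cd path/to/the/source/tree
patch -p1 < cocoa.diff
If the paths in the diff are already relative to the directory you run patch from, -p0 would be used instead.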

Related

Need help to write a basic Command Line code

I'm using Windows 10, if it matters, and I'm trying to feed a file to the "oeminst" app to convert it from .EDR to .CCSS. According to the app's website, its usage summary is this:
oeminst [-options] [inputfiles]
-v Verbose
-n Don't install, show where files would be installed
-c Don't install, save files to current directory
-S d Specify the install scope u = user (def.), l = local system]
infile Manufacturers setup.exe install file(s) or .dll(s) containing install files
infile.[edr|ccss|ccmx] EDR file(s) to translate and install or CCSS or CCMX files to install
If no file is provided, oeminst will look for the install CD.
More info can be found here: https://www.argyllcms.com/doc/oeminst.html
So far I tried this code:
C:\Users\PC>oeminst infile. [C:\Users\PC\testfile.edr]
oeminst: Error - Unable to load file 'infile [C:\Users\PC\testfile]'
I'd appreciate it if someone could at least tell me whether I'm doing it right or not.
P.S. sorry for the messed up text. Not sure how to fix it. It looks good in editing mode.
Try this: oeminst infile.edr C:\Users\PC\testfile.edr
Nevermind, I got it.
C:\Users\PC>oeminst C:\Users\PC\testfile.edr
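If you only wanted the converted .ccss written to the current directory rather than installed, the -c flag from the usage summary above should do that (a guess based on that summary, not something I've run):
oeminst -c C:\Users\PC\testfile.edr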

colorgcc perl script with output to non-tty enabled writing to C dependency files

Ok, so here's my issue. I have written a build script in bash that pipes output to tee and sorts different output to different log files (so I can summarize errors/warnings at the end and get some statistics on files built). I wanted to use the colorgcc perl script (colorgcc.1.3.2) to colorize the output from gcc, and had found elsewhere that this won't work when piping to tee, since the script checks whether it is writing to something that is not a tty. Having disabled this check, everything was working until I did a full build and discovered that some of the code we receive from another group builds C dependency files (we don't control this code; changing it or the build process for these isn't really an option).
The problem is that these .d files have the form as follows:
filename.o filename.d : filename.c \
dependant_file1.h \
dependant_file2.h (and so on for however many dependencies there are)
This output from GCC gets written into the .d file, but, since it is close enough to a warning/error message, colorgcc outputs color codes (I believe it's the check for filename:lineno:message, but I'm not 100% sure; it could be the filename:message check in the GCCOUT while loop). I've tried editing the regex so it doesn't match this, but my perl-fu is admittedly pretty weak. So what I end up with is a color code on each line of these dependency files, which obviously causes the build to fail.
I ended up just replacing the check for ! -t STDOUT with a check for a NO_COLOR environment variable that I set and unset in the build script for these directories (this emulates the previous behavior of no color for non-tty output). This works great if I run the full script, but not if I cd into the directory and just run make (obviously setting and unsetting it manually would work, but that is a pain to do every time). Does anyone have any ideas how to prevent this script from writing color codes into dependency files?
Here's how I worked around this. I added the following to colorgcc to search the gcc input for the flag to generate the .d files and just directly called the compiler in that case. This was inserted in place of the original TTY check.
# Look through the compiler arguments for the dependency-generation flags
# (-M, -MM and friends); if one is present, run the real compiler directly
# so no color codes end up in the generated .d files.
foreach my $argnum (0 .. $#ARGV)
{
    if ($ARGV[$argnum] =~ m/-M{1,2}/)
    {
        exec $compiler, @ARGV
            or die("Couldn't exec");
    }
}
I don't know if this is the proper 'perl' way of doing this sort of operation, but it seems to work. Compiling inside directories that build .d files no longer inserts color codes, and the normal source-file builds still get colorized output (both to the terminal and to my log files, like I wanted). I guess sometimes the answer is more hacks instead of "hey, did you try giving up?".

How to find exactly why patch failed?

I have a unified diff patch which patch rejects. A visual inspection of the diff file and the original code finds the code contains all text expected by the diff file in the correct location. I have tried --ignore-whitespace and -F3 for good measure but patch still fails. Does an option exist to report exactly what is causing the patch to fail?
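For reference, the kind of invocation described here would be something along these lines (fix.diff is a placeholder for the actual patch file; --dry-run tests the patch without modifying any files and --verbose prints extra information about what patch is doing):
patch -p1 --ignore-whitespace -F3 --dry-run --verbose < fix.diff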

Applying a .patch file

I want to apply a .patch file to one file.
I placed both in the same folder and tried this:
trinity@Zion ~/Desktop $ patch -i lalala.patch
patching file install.sub
patch unexpectedly ends in middle of line
Hunk #1 FAILED at 1562.
1 out of 1 hunk FAILED -- saving rejects to file install.sub.rej
But as you see in the output, it failed. The content of install.sub.rej is basically all the code from lalala.patch
I tried similar commands but I got the same results. I guess I'm doing something wrong.
I know applying a patch is just one command, but I'm really lost here. If someone could tell me the command (or directly patch the file and also tell me how), thanks.
original file
http://pastebin.com/raw.php?i=PKru8m5r
patch:
http://pastebin.com/raw.php?i=kkMUHtj8
Your patch command is fine. It is the patch file itself that gives the problem (at least for me, on Kubuntu 11.04; looking at the link you gave in the comment, all the patch files contain the same error...?!).
To solve the problem for me, find this line in the patch file:
@@ -1562,6 +1562,8 @@ set_timezone() {
and remove the set_timezone() { part and the error you describe is gone.
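The line should then read just:
@@ -1562,6 +1562,8 @@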
This part shows the function where the changes are made. Looking at the patches on the page you gave in your comment, all of them contain this extra information. As far as I know (but I am not a patch guru, so please correct me), this is not accepted by the default patch command.
(Unfortunately enough, your patch still fails and the expected lines in the patch file compared to the original file do not match...)
Quite likely, the generated patch is "correct", but double-check its encoding and make sure it's UTF-8.

How can I resume downloads in Perl?

I have a project that depends upon some other binaries being downloaded from the web at install time. For this, what I do is:
if ( -e "src/$file" ) {    # $file: placeholder for the binary we need
    # skip that file
} else {
    # use wget to download the file
}
The problem with this approach is that when I interrupt a download in the middle and then invoke the script again, the partially downloaded file is skipped (which is not desired); also, I want wget to resume the download of the partially downloaded file.
How should I go about it?
Possible Solutions I could think of:
Download to a temporary file, say download_tmp, and move it to the original file name if successful.
Handle $SIG{'INT'} to run proper cleanup code.
But neither of these helps resume a partial download.
Any insights?
First, I don't understand what this has to do with Perl, since you're using wget to do the downloading... You could use libwww-perl (perldoc LWP) and have more control over the download process.
Then I second your idea of downloading to a "tmp" filename and moving the file into place on success.
However, I think you need to go further and verify the integrity of the files. Computing an MD5 or SHA hash is very easy; match the hash of the downloaded file against what you're expecting. You can keep a short file on the server containing the checksum (filename.md5), and declare success only when you have a match.
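A minimal sketch of that "download to tmp, verify, rename" idea with libwww-perl and Digest::MD5 might look like this; the URL, the file names, and the assumption that the server publishes a matching .md5 file are placeholders for illustration, not something from the original question:
use strict;
use warnings;
use LWP::UserAgent;
use Digest::MD5;

my $url  = 'http://example.com/binaries/tool.bin';   # assumed URL
my $dest = 'src/tool.bin';                           # final location
my $tmp  = "$dest.download_tmp";                     # partial download lives here

my $ua = LWP::UserAgent->new;

# Fetch the expected checksum (assumes "$url.md5" exists on the server).
my $md5_resp = $ua->get("$url.md5");
die "cannot fetch checksum: ", $md5_resp->status_line unless $md5_resp->is_success;
my ($expected) = $md5_resp->decoded_content =~ /([0-9a-f]{32})/i
    or die "no MD5 found in checksum file";

# Download straight into the temporary file.
my $resp = $ua->get($url, ':content_file' => $tmp);
die "download failed: ", $resp->status_line unless $resp->is_success;

# Verify integrity before moving the file into place.
open my $fh, '<', $tmp or die "open $tmp: $!";
binmode $fh;
my $actual = Digest::MD5->new->addfile($fh)->hexdigest;
close $fh;
die "checksum mismatch for $tmp" unless lc $actual eq lc $expected;

rename $tmp, $dest or die "rename $tmp -> $dest: $!";   # only now does the real file appear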
Note that catching all the signals and generally trying to make the process unkillable, and then expecting it to have worked is bound to fail at one point or another. There could be a network timeout, a crash, power failure, configuration problem on the server ... you should instead assume downloads can fail, because they will, and code so that your process can recover.
Finally, you're not telling us what kind of binaries you're downloading and what you're doing with them. Since you use wget I'm going to assume you're on Unix; you should consider using RPM+Yum or the like, as they handle all this for you. RPMs are easy to write, really.
Use your first approach:
download to "FileName".tmp
move "FileName".tmp to "FileName" (move! not copy)
once per diem, clean out all .tmp files (paranoia rulez)
You could just use wget's -N and -c options and remove the entire "if file exists" logic.
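As a sketch (the URL is a placeholder):
wget -c -N http://example.com/path/to/binary
Here -c continues a partially downloaded file and -N only re-fetches it when the copy on the server is newer.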