Disable image writing support in libPNG

I'm trying to reduce the size of libPNG by disabling image writing support, since the software I'm writing does not require it. I thought that commenting out "option WRITE" in scripts/pnglibconf.dfa would achieve this, but pngwrite.c still gets built into the library, and the resulting library file is the same size as when "option WRITE" is left uncommented. Is there something else I need to do to disable image writing support in libPNG? Thanks in advance!

The contrib/pngminim/decoder directory in the libpng distribution includes an example pngusr.dfa to do what you want. It turns all options off and then turns on just the ones that are needed for sequential reading.
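As a sketch of the shape such a file takes: it first switches everything off, then re-enables only what is needed. The option names below do appear in pnglibconf.dfa, but treat this as illustrative and check the actual pngusr.dfa in contrib/pngminim/decoder for the authoritative list; the comments at the top of scripts/pnglibconf.dfa describe how to regenerate pnglibconf.h from it.

```
# pngusr.dfa (sketch): disable everything, then re-enable sequential reading
everything = off
option READ on
option SEQUENTIAL_READ on
option STDIO on
```

The key point is that editing pnglibconf.dfa alone is not enough; the generated pnglibconf.h has to be rebuilt for the changes to take effect.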

Related

Unable to open PNG file with kview or PictureViewer; opens fine with other viewers

I have a PNG file created using the libPNG library. The file opens perfectly in Windows Picture Viewer and MS Paint, but opening it with kview (on Linux RHEL5) or QuickTime PictureViewer (on Windows) fails - the former reports a "libpng read error" whereas the latter reports the file as corrupted. A similar problem occurs when trying to process the PNG with the ImageMagick library on Linux. Given that the PNG opens fine in some applications, it doesn't seem that the file is really corrupted; I therefore suspect some problem with version compatibility, but I am not sure. I tried searching the web but couldn't find any information on the root cause or a solution to this problem. Can someone please guide me on this?
Judging from the example image you posted in the comments, the problem is that your PNG lacks the ending IEND chunk - something you can test by opening it with TweakPNG and inspecting the structure visually, or by choosing "Check Validity" (F5). It is somewhat predictable that this kind of PNG is displayed by some viewers and rejected by others.
If you are using libpng, it seems you forgot to call png_write_end().

building tshark

I am porting tshark to a different OS. Can someone tell me which files/folders can be removed from the source code? I am aware that GTK isn't required, but it would be great if I could do away with all the unnecessary files/folders right at the beginning.
Thanks in advance.
Can someone tell me which files/folders can be removed from the source code ?
ui/gtk and, if you have it, ui/qt. Do not remove anything else under ui; at least some of that code is shared between TShark and the {GTK+,Qt} versions of Wireshark, and the code in ui/cli is the tap code for TShark.
help is Wireshark-specific, so you can probably remove that.
image is necessary if you're building for Windows (it arguably shouldn't contain both images for the GUI and .rc.in files for the Windows resource compiler, but maybe that makes referring to icons in image more complicated).
You could perhaps also remove doc and docbook if you don't plan to build any documentation.
You could perhaps remove test if you're not going to run tests, and packaging and debian if this OS isn't Windows or some flavor of UN*X for which Wireshark provides packaging mechanisms.
I don't know whether any of the autoconf or CMake stuff will break if you remove them, however. Unless you're running low on disk space, I'd leave all the directories and files in the source tree there, and just not bother porting the files that you don't need.

Unity3D Resource.LoadAll adding types to unity

I am trying to find a way to have Unity treat my .ai files as text files, so that Resource.Load and LoadAll will load them. Is there a configuration change I can make to allow for this? Right now it skips the .ai files completely.
We found that a direct solution was not possible, or at least nobody seems to know one. To get around the issue, we found that Unity has another folder that allows resources to be added without being compressed or mangled, and whose contents can be enumerated.
Use the StreamingAssets folder, and the problem goes away.

Eclipse diff for large file shows incorrect differences

I am not sure if anybody has experienced this.
I am working with a very large file of about 7000 lines of code.
I made a lot of changes, and when I compared the file with the repository version, it showed me incorrect differences.
I guess the diff algorithm buffers only a limited number of lines ahead/behind when searching for the current line, and on failing to find it, it simply shows a diff against the current line in the new file.
One such snapshot: http://picasaweb.google.com/lh/photo/ENwZ4gqXxiCF3SWqVnVAqA?feat=directlink
If anybody knows any workaround, please let me know.
Thanks
Easy workaround - use another diff tool. I'm serious. I wouldn't waste time splitting up my files, or wondering how to get it to work with Eclipse's diff tool if there's some known issue with really big files.
I recommend Beyond Compare 3; I say this having used many different diff tools. It's not free, but it's worth it. In the rare case that it gets confused, it lets you realign, with a couple of clicks, any areas it got confused on. I have used it with some pretty large files, and it rocks.
If you're concerned about Eclipse integration, there's even a plugin, BeyondCVS, that allows you to launch your Beyond Compare diffing from the Eclipse right click 'Compare' menus. Its name is kind of misleading though, as it doesn't appear to be related to CVS.
If you need something free, try one of these diff tools instead:
WinMerge
SourceGear DiffMerge
What version of Eclipse are you using? And what edition? (Java? CDT? ...)
Depending on those details, it could make a difference, since files with several thousand lines are known to be a problem for the diff algorithm.
See this thread for illustration.
And do check, as mentioned in the same thread, your error log to check if any particular message could help you to pinpoint the cause of the failed diff.

How can I rotate and compress Log4perl log files?

From what I can tell, neither Log4perl nor any of its related modules on CPAN supports rotation and compression of log files.
Rotation can be accomplished by using:
Log::Log4perl::Appender::File
Log::Dispatch::FileRotate.
But neither module supports both rotation and compression. (Log::Dispatch::FileRotate has it on its todo list, but it's not currently implemented.)
It is possible to do this using the standard Logrotate facility in Linux, by using either Log::Log4perl::Appender::File's recreate_check_interval or recreate_check_signal.
From initial tests, it looks like using logrotate with the delaycompress option will do the trick - even on a machine with high load - since once the file is moved, Log4perl will continue logging to the same filehandle until the signal is caught.
However, if delaycompress is not used and there is even a slight delay between the compression of the log file and the Perl program catching the signal, some logging data might be lost.
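As a sketch, the logrotate stanza for this setup might look like the following; the log path, schedule, rotation count, and PID file are placeholders, and the HUP in postrotate assumes the Log4perl file appender is configured with recreate_check_signal => 'HUP':

```
/var/log/myapp/app.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    postrotate
        kill -HUP "$(cat /var/run/myapp.pid)"
    endscript
}
```

With delaycompress, the most recently rotated file stays uncompressed for one cycle, so nothing written through the old filehandle before the signal is caught ends up inside a gzip stream.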
What do you think? Are there other options we did not think of?
Over the years, I've found that you almost always want to use external methods for log file rotation with Log4perl. You simply avoid a lot of subtle issues (log delays, permission issues) that internal log rotates inevitably run into.
You've mentioned two methods that work with logrotate on Linux; why not stick with them? The Log4perl FAQ describes using newsyslog, which is the FreeBSD equivalent of logrotate and provides similar features.
Have you thought about working with Log::Dispatch::FileRotate's maintainers to add the features it's missing and you need? It is open source, after all. :)
If you don't want to deal with that yourself, there are various CPAN support consultancies that do that for you.
I contacted the author of Log::Dispatch::FileRotate, as suggested here, and he explained the reason why compression is not yet implemented in Log::Dispatch::FileRotate.
Basically, compressing right after rotation might block the running process for the duration of the compression, which is pretty expensive.
One suggested option was to allow the user of Log::Dispatch::FileRotate to execute an arbitrary program on the file just after rotation, performing the compression in a separate, non-blocking process.
Another suggestion was to have a filesystem trigger (like inotify) kick off the compression when the file is closed for writing by the main process.
Yet another suggestion is to write the log file compressed through a gzip pipe or one of the Perl gzip modules. This works, but causes some problems: grep and less won't work on the file. zgrep and zless will, but zgrep prints an ugly warning when grepping a gzip file that is still open for writing. Using tail on the file will also not work, so this option isn't practical.