Generate PNG file with pre-known CRC

Is it possible to create a PNG file with a predefined CRC? (Kind of a programming challenge...)
I have a Python script that generates hex codes with the target CRC, but I'm not sure how to make a valid PNG out of it.
BTW, it may be that I'm talking nonsense, but it sounds possible in theory (right?)

You can use spoof.c to do that, either at the level of a PNG chunk or at the level of the entire file. (Note that a PNG file does not contain a CRC of the whole thing, only CRCs of the chunks.)
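For context, here is a minimal sketch (Python, standard library only, with a placeholder file name) that walks a PNG's chunks and recomputes each chunk's CRC-32 over the chunk type plus chunk data; a spoofing tool only has to make one chunk's payload produce the target CRC:

    import struct
    import zlib

    def chunk_crcs(path):
        """Yield (chunk_type, stored_crc, computed_crc) for each chunk of a PNG."""
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n"  # PNG signature
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, ctype = struct.unpack(">I4s", header)
                data = f.read(length)
                stored = struct.unpack(">I", f.read(4))[0]
                # Each chunk's CRC-32 covers the type and data fields only.
                yield ctype, stored, zlib.crc32(ctype + data) & 0xFFFFFFFF

    for ctype, stored, computed in chunk_crcs("example.png"):
        print(ctype, hex(stored), "OK" if stored == computed else "MISMATCH")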

Add the hash of the code in executable file

I have an STM32 application which uses two blocks of memory. In the 0th block I have the boot code (which runs just after power-on), and in the 7th block I have the application code (which may or may not run, depending on the authorization decision made by the boot code).
These two images are developed in, and hence generated by, two separate projects. They are flashed to their specific blocks (boot code to the 0th block, application code to the 7th block) of the STM32 NOR memory using the openocd tool, by giving an offset value to openocd's write_image command.
What I would like to do in the boot code, basically, is calculate the hash of the application code and compare it with a reference digest. If they are equal, I hand control over to the application code. For that, after I generate the executable (in ELF, hex, or bin format) of the application code, I want to:
Create another file (in any of the formats listed above) that is 128 KB in size
Copy the content of the executable file into the newly created file from its beginning (offset 0)
Write the hash of the executable to the last 32 bytes of the newly created file
Fill the gap in between with 0xFF
Finally, flash this file (if it is still a valid executable) to the 7th block of the memory
Do you think that it is doable and feasible? If so:
Which format should I use to generate the executable?
Is there anything I need to pay particular attention to in order to achieve this?
Lastly, do you think that it makes sense to do that or is there any other more standard way for this purpose?
Thanks a lot in advance.
You just need to add an additional step to your build sequence. After linking, extract the binary file from the ELF.
Then write a program in your favourite programming language that calculates the hash and appends the result to that .bin file, as sketched below.
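A minimal sketch of that step, assuming SHA-256 as the digest, a 128 KiB block, and placeholder file names:

    import hashlib

    BLOCK_SIZE = 128 * 1024
    HASH_SIZE = 32  # SHA-256 digest length

    with open("app.bin", "rb") as f:         # raw binary extracted from the ELF
        code = f.read()

    assert len(code) <= BLOCK_SIZE - HASH_SIZE, "application too large for block"

    digest = hashlib.sha256(code).digest()
    padding = b"\xff" * (BLOCK_SIZE - HASH_SIZE - len(code))

    with open("app_signed.bin", "wb") as f:  # code + 0xFF padding + digest = 128 KiB
        f.write(code + padding + digest)

One design point to consider: if the boot code has no way of knowing the application's exact length, it may be simpler to hash the code plus the 0xFF padding (i.e. everything except the last 32 bytes), so that the hashed region always has a fixed size.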

How to convert .SFF file format to .BMP or .PNG or .JPG?

I need to convert my SFF file to PDF, and then I need to verify the document, i.e. compare the SFF file and the converted file.
For that, I am thinking of converting the SFF file to an image file and the PDF file to an image file,
then comparing the two files using image processing.
To do this, I'm searching for a program to convert SFF to BMP.
Does anyone know such a program, or have another idea how to do the job?
Thank you in advance...
Looks like you need reaConverter. It appears to be a mature tool you can rely on, and there is also an online version of it.
I think
https://github.com/Sonderstorch/sfftools
will do what you need (convert SFF -> TIFF/JPEG/...), and then you can use ImageMagick (for example) to go to PDF.
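A hedged sketch of that pipeline (the sfftobmp output option and the file names are assumptions; check the tool's help output, since options vary between versions):

    import subprocess

    # sfftobmp is part of the sfftools project linked above.
    subprocess.run(["sfftobmp", "-jpg", "fax.sff", "fax.jpg"], check=True)
    # ImageMagick then converts the intermediate image to PDF.
    subprocess.run(["convert", "fax.jpg", "fax.pdf"], check=True)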
SFF is clearly not a widely used image format these days; however, if you have legacy .sff (Structured Fax Format) files, they are similar (though not exactly identical) to monochrome G4 fax images.
By far the simplest programmable method to convert them is IrfanView, which can read, modify, and resave them as other formats in batches.
Output can be any other modern image type, including mono BMP or G4 fax, or PDF (with or without Ghostscript).

Get Maximum Compression from 7zip compression algorithm

I am trying to compress some of my large document files, but most of the files get compressed by only 10% at most. I am using 7-Zip terminal commands:
7z a filename.7z -m0=LZMA -mx=9 -mmt=on -aoa -mfb=64 filename.pptx
Any suggestions on changing the parameters? I need at least a 30% compression ratio.
.pptx and .docx files are internally .zip archives, so you cannot expect much compression from compressing an already-compressed file.
The documentation states that LZMA2 handles incompressible data better, so you can try:
7z a -m0=lzma2 -mx filename.7z filename.pptx
But the required 30% is almost unreachable.
If you really need that compression, you could use the fact that a pptx is just a fancy zip file:
unzip the pptx, then compress it with 7-Zip. To recover an equivalent (but not identical) pptx, decompress with 7-Zip and recompress with zip. A sketch of this round trip follows below.
There are probably some complications; for example, with EPUB there is a certain file that must be stored uncompressed, as the first file in the archive, at a certain offset from the start. I'm not familiar with pptx, but it might have similar requirements.
I think it's unlikely that the small reduction in file size is worth the trouble, but it's the only approach I can think of.
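A rough sketch of that round trip using Python's standard library (all names are placeholders; the 7-Zip step is shown as a comment):

    import shutil
    import zipfile

    # Unpack the pptx (it is just a zip archive).
    with zipfile.ZipFile("deck.pptx") as z:
        z.extractall("deck_unpacked")

    # Now compress the unpacked directory with 7-Zip instead, e.g.:
    #   7z a -mx=9 deck.7z deck_unpacked

    # Later, to recover an equivalent (but not byte-identical) pptx,
    # re-zip the directory and rename the result.
    shutil.make_archive("deck_recovered", "zip", "deck_unpacked")
    shutil.move("deck_recovered.zip", "deck_recovered.pptx")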
Depending on what's responsible for the size of the pptx you could also try to compress the contained files. For example by recompressing png files with a better compressor, stripping unnecessary data (e.g. meta-data or change histories) or applying lossy compression with lower quality settings for jpeg files.
Well, just an idea to maximize compression:
'recompress' these .zip archives (the .docx, .pptx, .jar, ...) using -m0 (store = no compression), and then
apply LZMA2 to them.
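A sketch of that 'store' step with Python's zipfile module (file names are placeholders):

    import zipfile

    def null_compression(src, dst):
        # Rewrite a zip-based file (docx/pptx/jar/...) entry by entry,
        # storing every member uncompressed so that the outer compressor
        # (LZMA2, PAQ, ...) gets to see the raw XML.
        with zipfile.ZipFile(src) as zin, \
             zipfile.ZipFile(dst, "w", compression=zipfile.ZIP_STORED) as zout:
            for name in zin.namelist():
                zout.writestr(name, zin.read(name))

    null_compression("report.docx", "report_stored.docx")
    # then: 7z a -m0=lzma2 -mx=9 report.7z report_stored.docx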
LZMA2 is pretty good; however, if the file contains many JPEGs, consider giving the open-source packer PeaZip, or more specifically paq8o, a try. PAQ8 has a built-in JPEG compressor and supports range coding, so it can also cope with JPEGs that sit inside some other file. WinZip's zipx, in contrast, requires pure JPEG files and is useless in this case.
But again, to make PAQ compress your target file effectively, you'll need to 'null' the zip/deflate compression and turn the file into an uncompressed zip.
Admittedly, PAQ is a little exotic, but in my eyes it's more honest and transparent than zipx. PAQ is unsupported, so, as always, it's a good idea to just search for whatever you don't have or don't know, and you will find something.
Zipx, in contrast, can appear a little treacherous: it looks like a normal zip and its files are listed properly in WinRAR or 7-Zip, but extracting the JPEGs will fail, so an inexperienced user may conclude the zip is corrupted. It's much harder to work out that it is a zipx, which so far only WinZip or The Unarchiver (unar.exe) can handle properly.
PPTX, XLSX, and DOCX files can indeed be compressed effectively if there are many of them. By unzipping each of them into its own directory, an archiver can find commonalities between them, deduplicating the boilerplate XML as well as any text they share.
If you must use the ZIP format, first create a zero-compression "store" archive containing all of them, then ZIP that. This is necessary because each file in a ZIP archive is compressed from scratch, without taking advantage of redundancies across different files.
By taking advantage of boilerplate deduplication, 30% should be a piece of cake.
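A sketch of that nested approach with Python's zipfile module (file names are placeholders):

    import glob
    import zipfile

    # Inner archive: all documents together, stored without compression,
    # so the outer pass can deduplicate across them.
    with zipfile.ZipFile("docs_store.zip", "w", compression=zipfile.ZIP_STORED) as inner:
        for name in sorted(glob.glob("*.pptx")):
            inner.write(name)

    # Outer archive: compress the single stored container.
    with zipfile.ZipFile("docs.zip", "w", compression=zipfile.ZIP_DEFLATED) as outer:
        outer.write("docs_store.zip")

Note that the individual pptx members are themselves deflated zips, so for the deduplication to really bite they should first be rewritten in stored form as well, as in the earlier zipfile sketch.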

Method to decompress a PDF (non-Adobe) while retaining form fields?

I found a similar question that involves Acrobat, but in this case the PDF was made with a combination of MS Word and CenoPDF v3, with which I'm unfamiliar. Additionally, the PDF is version 1.3. I'd like to decompress it to see its low-level workings and make some changes. That's easy with Ghostscript's -dCompressPages=false parameter, but it simultaneously strips all the fill-in form functionality. Is there a method for decompressing the file while leaving everything else intact? A quick search of the docs for tcpdf and fpdi (cited in the link) didn't reveal a compression option.
Ghostscript and pdfwrite aren't a good combination for this. The PDF file you get out is NOT the same as the one you put in, because of the way Ghostscript and pdfwrite work: the input is fully interpreted into a sequence of graphics primitives, which is sent to the Ghostscript graphics library. These are then sent to the requested device; most devices render the result to a bitmap, but the pdfwrite family reassembles those graphics primitives into a new PDF file.
Note that the contents of the new PDF file have no relationship to the original, other than the appearance when rendered. Ghostscript and pdfwrite do maintain much of the non-marking content of PDF files, such as hyperlinks and so on (which obviously don't get turned into graphics primitives), by interpreting it into pdfmark operations (an extension to the PostScript language defined by Adobe). However, even if Ghostscript and pdfwrite maintained all of this content, the resulting PDF file still wouldn't be the same as the original one decompressed...
There are tools which will decompress PDF files, and I would recommend one of our other products, MuPDF. Part of MuPDF is mutool, and "mutool clean -d in.pdf out.pdf" will decompress pretty much everything in a PDF file.
QPDF can decompress PDF documents (among other things). I used this tool in the past and it preserved forms and data.
The tool has some issues with large PDFs (it can take too much time and memory for decompression), and it can produce incomplete output (with warnings in the console) for some partially broken or nonstandard PDFs.
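For example, a sketch using qpdf's QDF mode, which is intended for inspecting and hand-editing PDF internals (file names are placeholders):

    import subprocess

    # --qdf writes a decompressed, human-readable form of the file;
    # --object-streams=disable keeps each object individually visible.
    subprocess.run(
        ["qpdf", "--qdf", "--object-streams=disable", "in.pdf", "out.pdf"],
        check=True,
    )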

copy part of audio file into a new file in iOS

I want to copy part of an audio file, given a starting and an ending point (in terms of time, which I'll convert to packets or frames; is that conversion right?), and create a new audio file for the copied snippet. How do I do the copy?
Please advise.
Regards,
Namratha
For what file format(s)?
For the simplest and most common .WAV (RIFF) file format, you can just copy the canonical 44-byte header (after checking that the file really uses only this simple layout), update it with the target file length, then copy the selected sub-range of bytes from the source file (multiply time by sample rate by frame size) and append that PCM data to the copied header. Apple's codecs do not complain about audio files patched together this way.
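Here's a sketch of that approach in Python rather than iOS code (the wave module does the header bookkeeping for plain PCM files, but the byte arithmetic is the same in any language; names are placeholders):

    import wave

    def copy_segment(src, dst, start_s, end_s):
        with wave.open(src, "rb") as win:
            params = win.getparams()                         # channels/width/rate/...
            rate = params.framerate
            win.setpos(int(start_s * rate))                  # seek by frame index
            frames = win.readframes(int((end_s - start_s) * rate))
        with wave.open(dst, "wb") as wout:
            wout.setparams(params)                           # copy the source format
            wout.writeframes(frames)                         # header sizes patched on close

    copy_segment("source.wav", "clip.wav", 1.5, 4.0)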
For other formats, you might be able to convert them either to a simple WAV file or to an array of raw PCM samples of suitable sample rate, data type, and endianness, and then do the above.