I'm building a file sync application. For that I need to calculate both a weak and a strong checksum for a file in the client application and send them to the server. On the server side, the server needs to compare these checksums against those of a similar file (finding the similar file is already implemented). I need the weak checksum to be a 32-bit number and the strong checksum to be a 64-bit number.
I got the rsync source code, but I couldn't figure out how it works.
If anyone is familiar with these algorithms, please help me.
Thanks.
Refer to this: http://rsync.samba.org/tech_report/tech_report.html
It's pretty simple, and it is demonstrated in rsync's source code (see checksum.c).
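In case it helps to see the idea in code, here is a minimal C sketch of the weak rolling checksum that the tech report describes (for the strong checksum rsync uses an MD4/MD5 digest of the block; truncating such a digest to 64 bits would be one way to meet your size requirement):

#include <stddef.h>
#include <stdint.h>

/* Weak checksum from the rsync tech report:
   a = sum of the bytes in the window, b = sum of (len - i) * buf[i],
   both mod 2^16, packed into one 32-bit value. */
uint32_t weak_checksum(const unsigned char *buf, size_t len)
{
    uint32_t a = 0, b = 0;
    size_t i;

    for (i = 0; i < len; i++) {
        a += buf[i];
        b += (uint32_t)(len - i) * buf[i];
    }
    return (a & 0xffff) | ((b & 0xffff) << 16);
}

/* Slide the window one byte: drop 'out' (the oldest byte), add 'in'.
   'len' is the fixed window length. This rolling property is what lets
   rsync test the checksum at every byte offset cheaply. */
uint32_t roll_checksum(uint32_t s, unsigned char out, unsigned char in, size_t len)
{
    uint32_t a = s & 0xffff;
    uint32_t b = s >> 16;

    a = (a - out + in) & 0xffff;
    b = (b - (uint32_t)len * out + a) & 0xffff;
    return a | (b << 16);
}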
I would like, for educational purposes, to read a PLC's symbol table using libnodave (or an equivalent open-source library like snap7).
Currently, when I read data from markers (flag memory), I must know in advance what kind of variable will be present in the DB, partly because libnodave reads raw bytes in sequence.
I'm looking for a way to know in advance what kind of data the PLC programmer chose when storing it, so that when I read raw bytes I can easily monitor the variables and adapt my reading and visualization routine.
Thanks in advance.
A program in an S7-3xx/4xx PLC has no symbolic addressing downloaded, so libnodave or Snap7 cannot address anything by symbol.
TIA and the S7-12xx/15xx PLCs are different: they do have symbols downloaded. But as far as I know, libnodave and Snap7 cannot use these symbols yet.
One possible solution is to export the symbol table in Step7/TIA to an Excel or .csv file and read the symbols, with their format and address information, from that file, as sketched below.
(Libnodave does not support S7-12xx/15xx, use Snap7 instead.)
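To illustrate what the export-and-parse approach could look like, here is a small C sketch; the semicolon-separated name;address;type layout is purely an assumption, since the real column layout depends on the Step7/TIA version and the export options you pick:

#include <stdio.h>

/* Hypothetical reader for an exported symbol table. The
   "name;address;type" column layout below is an assumption --
   check what your Step7/TIA export actually produces. */
typedef struct {
    char name[64];      /* symbolic name, e.g. "MotorSpeed" */
    char address[32];   /* absolute address, e.g. "DB10.DBD4" */
    char type[16];      /* data type, e.g. "REAL" or "INT" */
} Symbol;

int load_symbols(const char *path, Symbol *syms, int max)
{
    FILE *f = fopen(path, "r");
    char line[256];
    int count = 0;

    if (!f)
        return -1;
    while (count < max && fgets(line, sizeof line, f) != NULL) {
        if (sscanf(line, " %63[^;];%31[^;];%15[^;\r\n]",
                   syms[count].name, syms[count].address,
                   syms[count].type) == 3)
            count++;
    }
    fclose(f);
    return count;   /* symbols parsed; use address + type to drive the raw reads */
}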
We have an ERP system running in our company based on Progress 8. Can you give an indication of how compatible OpenEdge 11 is with version 8? Is it a case of "compile the source" and it will run (with testing, of course :-)), or will every second line need rework?
I know it's a general question but maybe you can provide a general answer? :o)
Thanks,
Gunter
Yes. Convert the db and recompile.
Sometimes you might run across keyword conflicts. A quick fix for that is the -k parameter (the "keyword forget list"): it lets old code whose variables or table/field names have become new keywords compile while you work on renaming them.
You might also see the occasional situation where the compiler has tightened up the rules a bit. For instance, there was some tightening of rules around defining shared variables in the v8/v9 time frame -- most of what I remember about that was looking at the impacted code and asking myself "how did that ever compile to start with?"
Another potential issue -- if your application uses a framework (such as "smart objects") whose API might change from release to release it is important to make sure that you compile against the version of that framework that your code requires -- not something newer but different.
Obviously you need to test, but the overwhelming majority of code recompiles and runs without any issues.
We did the conversion from Progress 8.3E to OpenEdge 11 just a few days ago, and it went much as Tom describes: convert and recompile.
The only problem was one database that had originally been created in Progress version 7. There the conversion failed, but since it was a small database, it was quicker to dump, recreate, and load.
I want to generate an MD5 hash of a text file in ABAP, but I have not found any standard solution for generating it for a very big file. The function module CALCULATE_HASH_FOR_CHAR does not meet my requirements because it takes a string as an input parameter. Although that works for smaller files, with a 4 GB file, for example, one cannot construct such a big string.
Does anybody know whether there is a standard piece of code for doing this (my Google efforts did not bring up anything), or does someone perhaps have an MD5 algorithm in ABAP that calculates the hash of a file?
At first it looked as if implementing this algorithm in ABAP was impossible, because the language does not allow arithmetic overflows during calculations. That would also explain why it had not been implemented in the SAP system so far, and it would have left no option but to call an external tool, which of course is, regrettably, hardly platform independent.
EDIT: OK! With the great help of René and the code of the Fast MD5 Implementation in Java, I have created an implementation of the MD5 algorithm in ABAP. The implementation allows updating the calculated hash with more bytes, which of course might come from different sources.
There is no method that takes a file yet, but most of the work has been done.
Some simple ABAP Unit tests are included in the code, which also document how to use it.
Perhaps you could read the file in data blocks of a couple of megabytes, create a hash list of those blocks using the suggested function, and then create a single top hash from the generated hash list.
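Sketching that idea outside ABAP for a moment, here is what the hash list plus top hash could look like in C with OpenSSL's MD5 API (the 4 MB block size is arbitrary, and note that the resulting top hash is not the same as a plain MD5 of the whole file, so both sides must agree on the scheme):

#include <openssl/md5.h>   /* classic MD5_* API; deprecated in OpenSSL 3, but fine for a sketch */
#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE (4 * 1024 * 1024)   /* a couple of megabytes, as suggested */

/* Hash each block separately, then hash the concatenated block
   digests to obtain a single top hash for the whole file. */
int top_hash_of_file(const char *path, unsigned char top[MD5_DIGEST_LENGTH])
{
    FILE *f = fopen(path, "rb");
    unsigned char *buf = malloc(BLOCK_SIZE);
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5_CTX block_ctx, top_ctx;
    size_t n;

    if (!f || !buf) {
        if (f) fclose(f);
        free(buf);
        return -1;
    }
    MD5_Init(&top_ctx);
    while ((n = fread(buf, 1, BLOCK_SIZE, f)) > 0) {
        MD5_Init(&block_ctx);
        MD5_Update(&block_ctx, buf, n);
        MD5_Final(digest, &block_ctx);                 /* digest of one block */
        MD5_Update(&top_ctx, digest, sizeof digest);   /* feed it into the top hash */
    }
    free(buf);
    fclose(f);
    MD5_Final(top, &top_ctx);
    return 0;
}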
The SDN is usually a very good starting point for finding ABAP-related solutions. I was able to find this post: http://scn.sap.com/thread/1483479
The author suggests:
1. Upload the .txt file, BUT as BIN.
2. Calculate the hash code using the function module MD5_CALCULATE_HASH_FOR_RAW.
Are you able to get your file in binary format and use MD5_CALCULATE_HASH_FOR_RAW?
Edit: This post even has a more detailed answer using CALCULATE_HASH_FOR_RAW: http://scn.sap.com/thread/1298723
Quote of Shivanand Kalagi's answer:
STR_LEN = XSTRLEN( DATA ).

CALL FUNCTION 'CALCULATE_HASH_FOR_RAW'
  EXPORTING
    ALG    = 'MD5'
    DATA   = DATA
    LENGTH = STR_LEN
  IMPORTING
    HASH   = L_MD5_HASH.
I am trying to use CRC for error checking for downloaded files.
I used NSURLConnection to download the file.
But I have no idea where to start with using a CRC for error checking.
Do I need a library? If so, which one would you recommend?
Can anyone tell me where I can find examples or tutorials for CRC usage in C++ or Objective-C?
Thank you very much
First off, how is the file CRC handled on the server (if at all)? There are a multitude of algorithms (CRC32 for simple checksum, MD5/SHA for cryptographic hashes) which can be used, and you must use the same one. If the server does not provide any checksum information (say, from a checksum file), there is no use for a checksum as there is nothing to compare it to.
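If the server does publish a CRC-32, here is a minimal sketch of computing it on the client side with zlib, which ships with the iOS/OS X SDKs and is directly usable from Objective-C or C++ (link with -lz):

#include <stdio.h>
#include <zlib.h>   /* part of the iOS/OS X SDKs; link with -lz */

/* Compute the CRC-32 of a file incrementally, so even a large
   download never has to sit in memory in one piece. */
unsigned long crc32_of_file(const char *path)
{
    unsigned char buf[64 * 1024];
    unsigned long crc = crc32(0L, Z_NULL, 0);   /* zlib's initial CRC value */
    size_t n;
    FILE *f = fopen(path, "rb");

    if (!f)
        return 0;   /* caller should treat a missing file as an error */
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        crc = crc32(crc, buf, (uInt)n);
    fclose(f);
    return crc;
}

Compare the result with the value the server provides; a mismatch means the download is corrupt.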
I am looking for a cache library in Perl, but the ones I have found so far, like Cache::Cache and CHI, all seem to assume you want to read the file into a Perl data structure. I am only interested in caching the files to disk, without ever reading the file content into Perl.
The files I am dealing with are around 200 MB and will be downloaded from the net. I want a size limit for the cache and an expiry time for the cached files.
Any suggestions?
Edit: As I did not find any ready-made library for this, I have now implemented it myself. But if anyone can point to one anyway, it would of course be interesting.
Solve the problem with one layer of indirection: store references to files, not the files themselves, in the cache. What exactly a reference looks like depends on your use case; see the sketch below.
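For illustration only, here is the metadata such a cache entry might carry, sketched in C (all names are made up; in Perl this would simply be a small hash per file):

#include <stddef.h>
#include <time.h>

/* Shape of the indirection: the cache stores this bookkeeping
   record, never the 200 MB file content itself. */
typedef struct {
    char   key[128];      /* e.g. the source URL */
    char   path[256];     /* where the downloaded file lives on disk */
    time_t expires_at;    /* entry is stale once time(NULL) > expires_at */
    size_t size_bytes;    /* lets eviction enforce the total size limit */
} CacheEntry;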
Try the Cache::File module from CPAN.