In which file is the _AppInfo data stored in Beckhoff TwinCAT 3 PLC?

I'm looking for the 'AppTimeStamp' information so it can be used to verify that the code has not been updated/changed by service personnel.
Detect code changes on Beckhoff PLC using C#
At that question I already found part of the information I need, but I was not able to add a comment due to the 'new user' limitations.

You can find the AppTimestamp in the _AppInfo instance.
So just call _AppInfo.AppTimestamp in your program to know the time of the last application start.
Make sure you also check the number of online changes since the last download with the OnlineChangeCnt counter, which you will also find in the _AppInfo instance.
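As an illustration only (this is not part of the original answer), the same two values could also be read from a PC over ADS, for example with the third-party pyads library. The AMS Net ID and the symbol path below are placeholders and assumptions; adjust them for your target.

```python
# Hedged sketch: read the _AppInfo values over ADS with pyads.
# The AMS Net ID is a placeholder; on TwinCAT 3 the system variables are
# usually reachable via the TwinCAT_SystemInfoVarList global (adjust if needed).
import pyads

plc = pyads.Connection("192.168.0.10.1.1", pyads.PORT_TC3PLC1)  # PLC runtime port 851
plc.open()

# Time of the last application start (DT value)
app_timestamp = plc.read_by_name(
    "TwinCAT_SystemInfoVarList._AppInfo.AppTimestamp", pyads.PLCTYPE_DT
)
# Number of online changes since the last full download (UDINT value)
online_change_cnt = plc.read_by_name(
    "TwinCAT_SystemInfoVarList._AppInfo.OnlineChangeCnt", pyads.PLCTYPE_UDINT
)
plc.close()

print("Last application start:", app_timestamp)
print("Online changes since last download:", online_change_cnt)
```

Comparing both values against reference values recorded at commissioning time would show whether the application has been restarted or changed online since then.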

There are many possible places where this value could be saved. TwinCAT saves data to the C:\TwinCAT\3.1\Boot folder; the different files are explained here.
The ProjectName can be found, for example, in the configuration data (CurrentConfig.xml), at the end of the file (TcBootProject/ProjectInfo/ProjectName). The same file contains one date (<TcBootProject CreateTime="2019-06-10T13:14:17">), but that seems to be the build time of the boot project.
I couldn't find the AppTimestamp date in any of the files, but perhaps TwinCAT uses the creation time of the files in those folders? Or perhaps it's hidden somewhere in the binary.
When you update the software without updating the boot project, the file Port_851_act.tizip is updated, so you can check its timestamp. When you update the boot project too, Port_851_boot.tizip and other files are also updated.
So basically, to check whether the code has been updated by someone, check the modified dates of the files under the Boot directory. I suppose only the .bootdata files should update during normal operation, as they contain saved persistent data. Of course, the dates can easily be changed with a 3rd-party program, so one option is to compare the contents of the Port_851.crc file, since it contains the CRC check value of the code. It will always change when the boot project is updated.
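As a rough illustration of that idea (not from the original answer), a small script could report the modification times of the boot files and compare the CRC file against a reference copy taken when the machine was commissioned. The paths below assume a default TwinCAT 3 installation and PLC port 851.

```python
# Hedged sketch: compare the TwinCAT boot files against a stored reference.
# Paths are assumptions for a default installation; adjust BOOT_DIR as needed.
from datetime import datetime
from pathlib import Path

BOOT_DIR = Path(r"C:\TwinCAT\3.1\Boot")
REFERENCE_CRC = Path("reference_Port_851.crc")  # copy saved after commissioning

# 1) Report the modification times of the port-851 boot files
for f in sorted(BOOT_DIR.rglob("Port_851*")):
    mtime = datetime.fromtimestamp(f.stat().st_mtime)
    print(f"{f.relative_to(BOOT_DIR)}: last modified {mtime:%Y-%m-%d %H:%M:%S}")

# 2) Compare the CRC file contents against the reference copy
crc_files = list(BOOT_DIR.rglob("Port_851.crc"))
if not crc_files:
    print("No Port_851.crc file found under the boot directory.")
elif crc_files[0].read_bytes() != REFERENCE_CRC.read_bytes():
    print("Boot project CRC differs from the reference: the code has changed.")
else:
    print("Boot project CRC matches the reference.")
```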

Related

How to do duplicate file check in DataStage?

For instance:
File A is loaded, then the next day
File B is loaded, then the next day
File A is received again; this time the sequence should abort.
Can anyone help me out with this?
Thanks
There are multiple ways to solve this, but please don't use intentional aborts, as they will most likely come back at you like boomerangs.
Keep track of filenames and file hashes (like an MD5 sum) in a table and compare against that list before loading. If the file is known, handle/ignore it. (A sketch of this idea follows at the end of this answer.)
Just read the file again as if it were new or updated. Compare the old data with the new data using the Change Capture stage and handle the data as needed, e.g. write changed and new data to the target. (recommended)
I would not recommend writing a sequence that "should abort", as this is not the goal of an ETL process. If the file contains the very same content that is already known, just ignore it. If it has updated data, handle it as needed. Only abort if there is a technical issue, e.g. the given file is wrongly formatted. An abort of a job should indicate that something is wrong with the job. If you get a file twice, it's not the job that failed.
If an error is found in the data that needs to be fixed by others, write the information about it to a table. Have another, independent process monitor that table and tell the data producer about it (via dashboard, email, ...).
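To make the first option more concrete, here is a rough, hedged sketch of the filename-plus-hash bookkeeping as a small script. In DataStage itself this would live in a control table queried before the load; the file and column names below are made up for the example.

```python
# Hedged sketch: track loaded files by name and MD5 hash and skip known ones.
# "loaded_files.csv" stands in for a control table; names are illustrative only.
import csv
import hashlib
from pathlib import Path

CONTROL_TABLE = Path("loaded_files.csv")  # columns: filename, md5

def md5sum(path: Path) -> str:
    """Compute the MD5 hash of a file in chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def already_loaded(path: Path) -> bool:
    """Return True if a file with the same content hash was loaded before."""
    if not CONTROL_TABLE.exists():
        return False
    digest = md5sum(path)
    with CONTROL_TABLE.open(newline="") as f:
        return any(row["md5"] == digest for row in csv.DictReader(f))

def register(path: Path) -> None:
    """Record the filename and hash after a successful load."""
    is_new_table = not CONTROL_TABLE.exists()
    with CONTROL_TABLE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["filename", "md5"])
        if is_new_table:
            writer.writeheader()
        writer.writerow({"filename": path.name, "md5": md5sum(path)})

incoming = Path("File_A.csv")  # hypothetical incoming file
if already_loaded(incoming):
    print(f"{incoming.name} is already known - ignoring it instead of aborting.")
else:
    # ... run the actual load here ...
    register(incoming)
```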

Azure Data Factory For Each Loop is importing all my CSV files per iteration instead of just the file name I *think* I've told it to

I could really do with some help with ADF; I recently started trying to use it thinking it would be similar to SSIS, but wow am I having a hard time! Over the last few weeks I've built up this kind of complicated pipeline which basically reads a list of files from a folder and, from within a For Each loop, is supposed to check where the data starts per file and import it into a SQL table. I'll not bore you with all the issues I've had so far, but at the moment it seems to be working aside from the For Each part: it's importing all the files in the folder per iteration. It seems to be the dataset configuration that is not recognising the file name per iteration, because in the debugging I can see it pick up the list of files and set the DSFileName variable to the first of them, but the output of the data flow task is both files. So it seems like I've missed a step somewhere, and I've just spent the last 5 hours looking and could really do with some help :(
I reckon I've followed the instructions here: https://www.sqlshack.com/how-to-use-iterations-and-conditions-activities-in-azure-data-factory/
Some pictures to show the debugging I've done:
Here it shows it's picking up 2 files (after I filtered out folders and stuff)
Here it shows the first file name only being passed into the first data flow
Here it shows the output from it, where it has picked up both files somehow and displays a count of 2 files
Here it shows the Data Set setup where I believe I have correctly set the variable as the file name to be used
I just don't even know where to start now, to be honest; I reckon I've checked everything I can see, and I'm not using any wildcards or anything. I can see it passing the one file name per iteration into that variable, but on each iteration I can see 2x counts of the file going into the table, and the output of each data flow task shows both file counts.
Does anybody have any ideas or know what I've missed?
EDIT 23/07/22: Pics of the source as requested:
Data Source Settings
Data Source Options
So it turns out that adding .name to item() in the dataset parameter means it uses just the current file instead of them all... I'm confused by this, as all the documentation I've read states that item() references the CURRENT item within the For Each; did I misunderstand?
Adding .name to the dataset here is now importing just the current file per loop iteration

Using each plugin in Nutch separately

I'm using the extractor plugin with Nutch 1.15. The plugin makes use of parsed data.
The plugin works fine when used as a whole. The problem arises when a few changes are made to the custom-extractors.xml file.
The entire crawling process needs to be restarted even if there is only a small change in the custom-extractors.xml file.
Is there a way that a single plugin can be run separately on the parsed data?
Since this plugin is a Parser filter, it must be used as part of the Parse step, and is not stand-alone.
However, there are a number of things you can do.
If you are looking to change the configuration on the fly (only affecting newly parsed documents), you can use the extractor.file property to specify any location on HDFS and replace this file as needed; it will be read by each task.
If you want to reapply the changes to previously parsed documents, the answer depends on the specifics of your crawl, but you may be able to run the parse step again using nutch parse on the old segments (you will need to delete the existing parse folders in the segments).

Given a code base hosted on TFS, which command can tell me which file has changed the most?

I want to find the files under a given directory that have been updated the most. Is there any command which can display this info? Or is there any way to get the maximum version count for a given file, so I can write a script to get this info for all files and then sort descending?
Do you mean changed the most number of times, or undergone the most code churn?
Either way, looking at the report data might be the easiest option for you. Take a look at the following blog post I wrote explaining how to use Excel to look at TFS data; it uses churn as an example and allows you to drill down into folders and files, so you should be able to get the data you are looking for.
Getting Started with the TFS Data Warehouse

Appending a dataset in Core Data when an update is delivered through the iTunes Store

This one I also resolved myself.
A good answer would be:
New edition of the app: change the sqlite file name; check whether the old file exists, get the old edition's sqlite file name, remove it, and add the new file into that folder.
This is the best answer.
*New, more efficient answer
STEP 1: select the .xcdatamodel file
STEP 2: Xcode -> Design -> Data Model -> Add Model Version
STEP 3: make your changes in the new model version
For more detail about migrating or reorganizing the data model structure, see Apple's documentation <here>.
Although I wasn't quite sure how to add a model version before reading Apple's document, my old way still works.
Old resolved self-answer
In the 'Recipes' example given by Apple, the file is not replaced if it is already there.
However, if your code needs to update and get rid of the old .sqlite file, then an explicit check for the old sqlite file name and a delete step are required for the update.
The basic sequence would be:
CHECK 1: check whether the old file exists.
CHECK 2: if it does, remove it.
CHECK 3: then copy the new one.
CHECK 4: if the old file doesn't exist,
CHECK 5: (and if the new file doesn't exist),
CHECK 6: copy the new file into that folder.
CHECK 7: if the new file exists, do nothing.
Solved.
Original Posted Question
So I have completed my code work.
This is my first time releasing an app through the iTunes Store.
Currently the Core Data (.sqlite) file is already prefetched (it already contains information, like Apple's 'Recipes' sample).
Assume I have successfully released through the App Store and decide to push an update to my application for existing users.
Say the new sqlite file contains a bit more information than the previous SQLite file, under the same structure.
Question 1: Every time an update is delivered to an existing user, does it remove the previous data and install the new, updated application over it?
Question 2: If it does not, then HOW can I append to the existing SQL data?
If you do not change the underlying data model, then the answers are:
1) Updating from, say, v1.0 to v1.1 does not remove the previous contents already stored and managed by Core Data.
2) Simply check the version of your application using the main bundle and, if the retrieved version needs additional data to be inserted, do the insert. You can do the check and the insertion in
- (void)applicationDidFinishLaunching:(UIApplication *)application;