Using p2.artifacts.process without updating artifacts.size - Eclipse

I have a p2 repo with zipped binary data wrapped in features. While creating this repo, I calculate the unzipped data size and create artifacts.xml via a shell script. But after signing this repo, when I rerun the p2 metadata generator (using p2.artifacts.process in an ANT script) to fix the md5 checksums, it recalculates the sizes, and since those features are now zipped, it writes the wrong artifact.size into artifacts.xml.
Is there any way I can do any of the following:
Use p2.artifacts.process without updating the artifacts.size property?
Update the md5 checksum in some other way without changing the artifacts.size property?
Is there any better way to address this problem?
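For reference, this is roughly where the affected properties live in artifacts.xml; the entry below is only an illustrative sketch with made-up id, version, and values, not taken from the actual repository:
<!-- illustrative sketch only; id, version, and values are placeholders -->
<artifact classifier='org.eclipse.update.feature' id='com.example.feature' version='1.0.0'>
  <properties size='3'>
    <property name='artifact.size' value='123456'/>
    <property name='download.size' value='45678'/>
    <property name='download.md5' value='d41d8cd98f00b204e9800998ecf8427e'/>
  </properties>
</artifact>
The metadata generator recomputes these properties from the files it finds on disk, which is why the unpacked size calculated by the shell script gets overwritten.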

Merging two MS Access Forms with Git

I have two .mdb files, one being a copy of the other. When changes are made to the original .mdb, I want to merge them into my copy, which itself may have changed in the meantime.
Since I'm required to use Access 2002, there's a lack of helpful plugins, but I'd be fine with just using the SaveAsText and LoadFromText methods.
The problem is that when I change the file I generated with SaveAsText, the checksum at the top of the file no longer matches the content, and Access throws error 3011 when I try to do LoadFromText.
Does anyone know of a way to work around this issue?
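For reference, the round trip I'm attempting looks roughly like this (a sketch; the form name and paths are just examples, and SaveAsText/LoadFromText are undocumented members of the Application object):
' Sketch of the SaveAsText / LoadFromText round trip; form name and paths are examples.
Public Sub ExportFormAsText()
    Application.SaveAsText acForm, "frmMain", "C:\merge\frmMain.txt"
End Sub

Public Sub ImportFormFromText()
    ' This is the call that fails with error 3011 once the checksum header
    ' at the top of the edited text file no longer matches its content.
    Application.LoadFromText acForm, "frmMain", "C:\merge\frmMain.txt"
End Sub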

B&R Automation Studio transfer post event

Is there any way to execute a post project transfer event when transferring a project to a PLC?
I want to automatically change the value of a variable, e.g. using the PVI interface, every time I do a transfer.
I am not entirely sure what the use case for this is. However, the easiest way to get some kind of post-transfer script would be to use the Runtime Utility Center (RUC).
In the RUC, you can define instruction lists for a B&R PLC with an online connection. This includes instructions for transferring projects and setting values of process variables (PVs).
To transfer the project with the RUC, you need to create a RUC package. This can be done under Settings / Export to Runtime Utility Center. You could also do this from the command line. There are more details in the help under Project management / Project installation / Performing project installation / Export RUC (GUID: cfe34190-f436-4c14-b06d-3a4ca39be7e7).
This will create a zip, which you can then use in the RUC. For the transfer command there is a wizard, which opens when you double-click the command Transfer to target under Project installation. The result is a line in the instruction list that might look like this:
Transfer "C:\path\to\your\zip\project.zip", "InstallMode=Consistent InstallRestriction=AllowUpdatesWithoutDataLoss KeepPVValues=1 ExecuteInitExit=1"
After transferring, you can write your PV. Under Process variable functions in the RUC you can find the command Write process variable. Here, too, there is a wizard, and the result looks like this:
WriteVariable "taskname\VariableName", "USINT", "2"
I am using AS 4.4.6. There might be slight differences when using another version.
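Put together, the relevant part of the generated instruction list (PIL file) might look like the sketch below. The Connection line and its parameters are illustrative only; let the RUC connection wizard generate the real ones for your target:
; Instruction list sketch - the Connection parameters are placeholders, not real settings
Connection "IF=tcpip /SA=1 /DAIP=192.168.0.10 /REPO=11159", "WT=60"
Transfer "C:\path\to\your\zip\project.zip", "InstallMode=Consistent InstallRestriction=AllowUpdatesWithoutDataLoss KeepPVValues=1 ExecuteInitExit=1"
WriteVariable "taskname\VariableName", "USINT", "2"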

Unable to run experiment on Azure ML Studio after copying from different workspace

My simple experiment reads from an Azure Storage Table, selects a few columns, and writes to another Azure Storage Table. This experiment runs fine on its workspace (let's call it WorkSpace1).
Now I need to move this experiment as-is to another workspace (call it WorkSpace2) using PowerShell and be able to run it there.
I am currently using this library: https://github.com/hning86/azuremlps
Problem:
When I copy the experiment using 'Copy-AmlExperiment' from WorkSpace1 to WorkSpace2, the experiment and all its properties get copied, except the Azure Table account key.
Now, this experiment runs fine if I manually enter the account key for the Import/Export modules on studio.azureml.net.
But I am unable to do this via PowerShell. If I export (Export-AmlExperimentGraph) the copied experiment from WorkSpace2 as JSON, insert the account key into the JSON file, and import (Import-AmlExperiment) it back into WorkSpace2, the experiment fails to run.
In PowerShell I get an "Internal Server Error: 500".
While running on studio.azureml.net, I get the notification as "Your experiment cannot be run because it has been updated in another session. Please re-open this experiment to see the latest version."
Is there any way to move an experiment with external dependencies to another workspace and run it?
Edit: I think the problem has something to do with how the experiment handles the account key. When I enter it manually, it is converted into a JSON array comprising RecordKey and IndexInRecord. But when I upload the JSON experiment with the account key, it stays as-is and does not get resolved into RecordKey and IndexInRecord.
For me, publishing the experiment as a private experiment to the Cortana Gallery is one of the most useful options. Only people with the link can see and add the experiment from the gallery. In the link below I've explained the steps I followed.
https://naadispeaks.wordpress.com/2017/08/14/copying-migrating-azureml-experiments/
When the experiment is copied, the password is wiped for security reasons. If you want to programmatically inject it back, you have to set another metadata field to signal that what you are setting is a plain-text password, not an encrypted one. If you export the experiment in JSON format, you can easily figure this out.
I think I found out why you are unable to get the credentials accepted.
Export the JSON graph to your local disk, then update whatever parameter has to be updated.
You will also notice that the credentials are stored as 'Placeholders' instead of 'Literals'; they need to be changed to Literals.
You can do this by traversing the JSON to find the relevant parameters you need to update.
Here is a brief illustration of changing the Placeholder to a Literal.
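As a rough PowerShell sketch (the property path and parameter names below are illustrative, not the exact experiment-graph schema):
# Sketch only: the property path below is illustrative, not the exact Azure ML graph schema.
$graph = Get-Content 'experiment.json' -Raw | ConvertFrom-Json
$param = $graph.Graph.ModuleNodes[0].ModuleParameters | Where-Object { $_.Name -eq 'Account Key' }
$param.LinkedType = 'Literal'            # was 'Placeholder' after the copy
$param.Value = '<storage-account-key>'   # plain-text key, as described above
$graph | ConvertTo-Json -Depth 32 | Set-Content 'experiment.json'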

Recovering data from Firebird database partially-encrypted by ransomware

Our test server was hacked and ransomware (Cry36) was installed, for which there is no solution to date. We also didn't keep any snapshots up to date (lesson learned).
Since it's only a test server, I am not too worried. But we had stored a bunch of work in our Firebird DB (v2.5) which I would like to save.
Looking at the database in a hex editor, I can see that the data is encrypted up until offset 00006430.
Comparing this against the structure of a Firebird database, it looks like all the headers are encrypted (header page, PIP, ..., data page).
All the data is still there.
I've tried gfix and even copying the headers from an older version of the DB. But while that does fix the DB, the headers are wrong and most of the new pages are removed.
Does anyone have any idea how to restore the database or extract the tables?
Regards
I have used the following method to restore files encrypted on hard drives by ransomware: renaming the file in question back to its original filename and extension. You may be able to apply the same method to revert your database file(s) back to their pre-encrypted state.
From my testing:
The ransomed file is compressed and/or simply renamed; the encryption is either not actually applied (only implied), or a containing/renamed file is encrypted while the original file is never touched. Simply rename the file back to its original name and you can access it as you could before the attack. Example:
This is the Ransomed file:
Adobe Acrobat XI Pro 11.0.20.zip.id[42AF04FF-2275].[supportcrypt2019#cock.li].Adame
This is the Ransomed file, renamed and fixed:
Adobe Acrobat XI Pro 11.0.20.zip
The removed portion of the FileName is:
.id[42AF04FF-2275].[supportcrypt2019#cock.li].Adame
Upon renaming the file, you will be prompted to approve changing the file type it will be opened as (back to its original state) and which application will open it (its original association, as determined by the extension following the file name). The reason the file doesn't work while ransomed is the final extension in the renaming scheme: in this case .Adame is not a real file type but a made-up one, and no program will or can open it. Thus, the file cannot be opened as named.
You would need to do this for each file individually. Could you post more information on the database file and the encryption? This should work for you as well, since the ransom methodology should be the same. I cannot identify the naming scheme used on your system without more information about the unusual or newly added portions appended to your files.
For renaming multiple files you could try an application such as "Advanced Renamer" for bulk processing, or a short script as sketched below.
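A PowerShell sketch along these lines would strip the appended portion in bulk (the pattern is taken from the example name above; test on copies first):
# Remove the appended ransom suffix, e.g. ".id[42AF04FF-2275].[supportcrypt2019#cock.li].Adame"
Get-ChildItem -Recurse -File |
    Where-Object { $_.Name -match '\.id\[[^\]]+\]\.\[[^\]]+\]\.Adame$' } |
    Rename-Item -NewName { $_.Name -replace '\.id\[[^\]]+\]\.\[[^\]]+\]\.Adame$', '' }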

How to check generated file has been modified in Eclipse plugin development?

Currently the plugin generates a series of files in an IProject. I need to check whether a generated file has been modified by the user before; if it has, I need to handle the regeneration differently.
What I can think of is checking whether Creation Date == Modified Date. Since I delete the old file and create it again when the user has not touched it, the creation date should always equal the modification date. However, I did not see how to retrieve these two properties from IFile. Can anyone help me with this?
I am quite new to Eclipse plugin development; can anyone suggest another way around this?
*** Generated files cannot be locked, as they are source code.
The modification stamp of an IFile, or more generally an IResource, can be obtained with getModificationStamp(). The return value is not strictly a timestamp, but it should serve your needs; see the JavaDoc for details.
If, however, you would like to track whether the content of a file has changed, I would rather compute a hash of the content, for example with a MessageDigest, and then compare the two hashes to decide whether the file was changed.
This latter approach would regard a file as unchanged if it was changed - saved - changes reverted - saved again. The modification stamp on the other hand would declare the file as changed even though its content is the same again.
Whichever approach you choose, you can store the modification stamp (or content hash) at generation time by using IResource#setPersistentProperty() and later compare it with the current modification stamp. Persistent properties are stored on disk with the platform metadata and maintained across platform shutdown and restart.
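A small sketch of that idea; the plug-in ID and property key in the QualifiedName are arbitrary placeholders:
import org.eclipse.core.resources.IFile;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.QualifiedName;

// Key under which the generation-time stamp is stored; any key unique to your plug-in works.
private static final QualifiedName GENERATION_STAMP =
        new QualifiedName("com.example.myplugin", "generationStamp");

// Call right after generating the file.
void rememberGenerationStamp(IFile file) throws CoreException {
    file.setPersistentProperty(GENERATION_STAMP, Long.toString(file.getModificationStamp()));
}

// Later: true if the file was touched (or never stamped) since generation.
boolean isModifiedSinceGeneration(IFile file) throws CoreException {
    String stored = file.getPersistentProperty(GENERATION_STAMP);
    return stored == null || Long.parseLong(stored) != file.getModificationStamp();
}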
I found the answer:
private boolean isModified(IFile existingFile) throws CoreException {
    IFileState[] history = existingFile.getHistory(new NullProgressMonitor());
    return history.length > 0;
}
This feature is maintained by the Eclipse IDE, so it survives restarting Eclipse. If the file has been created and not modified since, the history length is zero.
You can clear local history by doing:
existingFile.clearHistory(new NullProgressMonitor());