Process SAZ files using FiddlerScript or extension

I have a series of Fiddler session archive (SAZ) files, about 150 of them, each with a huge number of sessions (~15k entries). Per the documentation I can use the AutoResponder feature to mimic the sessions for replay. However, I'm finding it awkward to import the sessions from the SAZ files into AutoResponder: the list gets pretty large, and the manual entries among the AutoResponder rules become hard to locate.
I was wondering whether there is a way to read and locate sessions from a SAZ file directly, using FiddlerScript or an extension, without going into the AutoResponder tab. I'm not familiar with JScript.NET or C#, but I'm trying to write some crude logic.
The closest I found was I have problems to readSessionArchive() in FiddlerScript. Using the snippet shared there, I could get it working just to list the sessions from the SAZ. Is there a way to map the response from the SAZ file to the request in context, just as when it's imported into AutoResponder?
A modified version of the snippet from the above link:
// Load the archive once; this is the only time the disk is touched.
var sSessions: Session[] = Utilities.ReadSessionArchive("C:\\temp\\sessions.saz", false); // path illustrative
for (var i1: int = 0; i1 < sSessions.Length; ++i1)
{
    FiddlerObject.log("sSessions: " + i1 + ": " + sSessions[i1].url);
    if (sSessions[i1].url === 'example.com/default.css') {
        //FiddlerObject.log("sSessions: " + i1 + ": " + sSessions[i1].GetResponseBodyAsString());
        //TODO: logic to map oSession's response to the response stored in the SAZ file
    }
}
Is there a better way to achieve this? Also, I feel that every time it parses through all the session entries in the SAZ there's a lot of I/O activity. Is there an alternative that doesn't involve going to a database?

Are you trying to implement your own AutoResponder using the extensibility model? If so, is the idea that you don't want to lose track of manual entries in the UI because there are so many imported entries? It seems like you could just keep your manual entries at the top?
If you like, you can selectively add Sessions (after you've loaded and/or modified them) to the AutoResponder UI using oAutoResponder.ImportSessions.
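For illustration, a minimal FiddlerScript sketch of that approach. It assumes the AutoResponder object is reachable from script as FiddlerApplication.oAutoResponder and that ImportSessions accepts a Session array; verify both against your Fiddler build. The path handling is illustrative.
static function ImportSazToAutoResponder(sPath: String) {
    // One disk read: load every Session in the archive...
    var oSessions: Session[] = Utilities.ReadSessionArchive(sPath, false);
    if (oSessions == null || oSessions.Length == 0) {
        FiddlerObject.log("Nothing loaded from " + sPath);
        return;
    }
    // ...then hand them (or a filtered subset) to the AutoResponder UI.
    FiddlerApplication.oAutoResponder.ImportSessions(oSessions);
}
You could wire this to the Tools menu with a ToolsAction attribute, or just call it from OnBoot.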
How big is the SAZ file?
Fiddler will only hit the disk when you call Utilities.ReadSessionArchive; after that, the Sessions are all in Fiddler's memory. If you're seeing disk IO here, it suggests that your machine is low on memory and your OS is paging memory to and from disk.
Is there a way to map the response from the SAZ file to the request in context
I'm not fully sure I understand what you're asking-- are you asking how Fiddler decides what a Session's URL is? Because it looks like you've already figured that out in your code snippet.
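If the underlying goal is the TODO in your snippet, answering a live request with the response recorded in the SAZ, here is a rough FiddlerScript sketch to merge into your existing OnBeforeRequest. The archive is cached in a script-scope variable so the disk is read only once; the path and the exact-URL match are illustrative, and the headers assignment may need an explicit cast if you port this to a C# extension.
static var gSazSessions: Session[] = null;

static function OnBeforeRequest(oSession: Session) {
    // Lazy-load the archive on first use; afterwards it lives in memory.
    if (gSazSessions == null) {
        gSazSessions = Utilities.ReadSessionArchive("C:\\temp\\sessions.saz", false);
    }
    for (var i: int = 0; i < gSazSessions.Length; ++i) {
        if (gSazSessions[i].url == oSession.url) {
            // Answer from the stored Session instead of hitting the server.
            oSession.utilCreateResponseAndBypassServer();
            oSession.oResponse.headers = gSazSessions[i].oResponse.headers.Clone();
            oSession.responseBodyBytes = gSazSessions[i].responseBodyBytes;
            return;
        }
    }
}
With ~15k entries per archive, a linear scan on every request will be slow; building a dictionary keyed on URL right after loading would make each lookup constant-time.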

Related

The best way to save game data in Unity that's secure & platform independent

I am looking for a way to save the user's progress for my game, something that can't be tampered with or modified. I would like it to be platform independent, and I would like it to be stored within the game files so the user can transfer their data to different computers (with a flash drive or something). (Correct me if I'm wrong in any area.) But I believe, because I need it to be secure and platform independent, that rules PlayerPrefs out. I know there is a way to save data by encrypting it with Binary, but I don't know how that works or whether the user could transfer the data from computer to computer. Is there a better way of saving data? Is encrypting it through Binary the way to go? If so, how would I do it? Thank you :) If you have any questions feel free to ask.
Edit: I understand nothing is completely secure, I'm looking for something that can stop the average user from going into a file, changing a float, and having tons and tons of money in game.
The previous answer mentions two good methods for storing data (although there are still some quirks regarding writing files on different platforms); I'd like to add to the subject of security, as mentioned in a comment here.
First of all, nothing is fully secure; there is always someone brighter out there who will find a flaw somewhere in your code, unless perhaps you go for full-on crypto, which is not trivial (key management, etc.).
I understand from the question that the asker either wants to prevent users from moving files between machines, or to allow moving the files between machines but seal them so that users cannot easily change the data stored in them.
In either case, a trivial solution would work: generate a hash code from your dataset and mangle it a little (salt it, or do whatever to jump to another hash code). So you could have something like:
{
  "name": "john",
  "score": "1234",
  "protection": "043DA33C"
}
If the 'protection' field is a hash code of "john1234", it will not match "john9999"; hence, if the user doesn't know how you salt your protection, you will be able to tell that the save has been tampered with.
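For illustration, a minimal C# sketch of that idea, using the name/score pair from the example. The salt constant is illustrative, and since it ships inside the binary this only deters casual editing, not a determined attacker.
using System.Security.Cryptography;
using System.Text;

public static class SaveProtection
{
    // Illustrative salt; assume a decompiler can find it eventually.
    // This stops casual float-editing, not real attacks.
    private const string Salt = "s3cret-salt";

    public static string Compute(string name, string score)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(name + score + Salt));
            return System.BitConverter.ToString(hash).Replace("-", "");
        }
    }
}
On save, write Compute(name, score) into the protection field; on load, recompute it from the loaded values and treat any mismatch as tampering.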
The first way to save data in Unity is to use PlayerPrefs, for example:
PlayerPrefs.SetString("key","value");
PlayerPrefs.SetFloat("key",0.0f);
PlayerPrefs.SetInt("key",0);
PlayerPrefs.Save();
and to read them back, you only need:
PlayerPrefs.GetString("key","default");
The second way, and the one that lets you store the data in an independent file, is serialization; my preferred way is to use a JSON file for it.
1) Make a class that will store the data (it does not need to extend MonoBehaviour):
[System.Serializable]
public class DataStorer {
    public string data1 = "default value";
    public int data2 = 4;
    public bool data3 = true;
    ....
}
and write it out from another class with:
DataStorer dataStorer = new DataStorer();
.... // some changes to its data
// Requires: using System.IO; and using UnityEngine;
string json = JsonUtility.ToJson(dataStorer, true); // 'true' pretty-prints so you can read the file
string path = Path.Combine(Application.persistentDataPath, "saved files", "data.json");
Directory.CreateDirectory(Path.GetDirectoryName(path)); // make sure the folder exists
File.WriteAllText(path, json);
and to read the data back:
string json = File.ReadAllText(path);
DataStorer dataStorer = new DataStorer();
JsonUtility.FromJsonOverwrite(json, dataStorer);
and now your dataStorer is loaded with the data from your JSON file.
I found a link to a data encryption tool which may be helpful for your needs, since you want to secure data on the device (nothing is 100% secure). It has three modes for securing data: App, Device, and Encryption key; you can choose as per your need.
See this link, it may help you.
https://forum.unity.com/threads/data-encryption-tool-on-assets-store.738299/

minifilter driver | tracking changes in files

What I'm trying to achieve is to intercept every write to a file and track the changes within the file. I want to track how different the file content is before and after the write.
So far in my minifilter driver I have registered for IRP_MJ_WRITE callbacks and can now intercept writes to a file. However, I'm still not sure how I can obtain the content of the file before [preoperation] and after [postoperation] the write.
The parameters I have within the callback functions are:
PCFLT_RELATED_OBJECTS and PFLT_CALLBACK_DATA, and I could not find anything related to the content of the file itself within these.
These are the operations that could change data in a file:
- IRP_MJ_WRITE
- IRP_MJ_SET_INFORMATION (specifically the FileEndOfFileInformation and FileValidDataLengthInformation information classes)
- IRP_MJ_FILE_SYSTEM_CONTROL (specifically the FSCTL_OFFLOAD_WRITE, FSCTL_WRITE_RAW_ENCRYPTED and FSCTL_SET_ZERO_DATA FSCTL codes)
As for the content of the file itself, you just need to read it yourself.
If you mean the buffers as they are being written, for example, check this out to find out more about the parameters of IRP_MJ_WRITE in the callback data. Essentially the buffer is at Data->Iopb->Parameters.Write.WriteBuffer/MdlAddress.
Make sure you handle that memory correctly, otherwise it will result in BSODs.
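For example, here is a rough sketch of a pre-write callback that gets at those bytes safely (the callback name is illustrative; register it for IRP_MJ_WRITE in your FLT_OPERATION_REGISTRATION):
FLT_PREOP_CALLBACK_STATUS
PreWriteCallback(
    PFLT_CALLBACK_DATA Data,
    PCFLT_RELATED_OBJECTS FltObjects,
    PVOID *CompletionContext
)
{
    PVOID buffer;
    ULONG length = Data->Iopb->Parameters.Write.Length;

    UNREFERENCED_PARAMETER(FltObjects);
    UNREFERENCED_PARAMETER(CompletionContext);

    if (Data->Iopb->Parameters.Write.MdlAddress != NULL) {
        // An MDL is present: map it to get a safe system-space address.
        buffer = MmGetSystemAddressForMdlSafe(
                     Data->Iopb->Parameters.Write.MdlAddress,
                     NormalPagePriority);
    } else {
        // Otherwise this may be a raw user-mode buffer.
        buffer = Data->Iopb->Parameters.Write.WriteBuffer;
    }

    if (buffer != NULL) {
        __try {
            // Hash or copy up to 'length' bytes here so the post-write
            // contents can be diffed against this pre-write state.
        } __except (EXCEPTION_EXECUTE_HANDLER) {
            // The user buffer was invalid; skip it.
        }
    }

    // Request the post-operation callback so the "after" state can be read.
    return FLT_PREOP_SUCCESS_WITH_CALLBACK;
}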
Good luck.

How do I get Gatling reports to show URLs instead of request_0 etc?

I'm new to Gatling, apologies if this is a complete noob question.
The "Details" tab of my Gatling report looks like this:
The left-hand menu contains all the requests that were made. My problem is that, in all but a few rare cases, they're just labelled "request_x" instead of the URL or filename. So where there is a bottleneck I can't tell what page or resource was causing it.
I found that if I manually edit the .scala file before running the simulation, I can change each one by hand, e.g. if I change...
.exec(http("request_0")
.get(uri01)
.headers(headers_0)
.resources(http("request_1")
.get(uri02)
.headers(headers_1)))
...to...
.exec(http(uri01)
.get(uri01)
.headers(headers_0)
.resources(http(uri02)
.get(uri02)
.headers(headers_1)))
...it seems to have the desired effect. But I don't want to have to change hundreds of these by hand every time I have a new test to run.
Surely there's a better way?
FWIW I'm generating this .scala file using Gatling's recorder with a HAR file exported from Chrome, as opposed to running the recorder as a proxy. But I have tried the proxy option and got the same end result.
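In the absence of a recorder setting for this, one workaround is a one-off script that rewrites the generated file, renaming each request to the uri variable used by the .get(...) that follows it. A crude Scala sketch, assuming the recorder's usual http("request_N") / .get(uriNN) layout and a hypothetical file name:
import scala.io.Source
import java.io.PrintWriter

object RenameRequests extends App {
  val path   = "RecordedSimulation.scala" // hypothetical
  val source = Source.fromFile(path)
  val text   = try source.mkString finally source.close()

  // http("request_0") ... .get(uri01)  becomes  http(uri01) ... .get(uri01)
  val fixed = text.replaceAll(
    """http\("request_\d+"\)(\s*\.get\((uri\d+)\))""",
    """http($2)$1"""
  )

  new PrintWriter(path) { write(fixed); close() }
}
Requests recorded with literal URL strings in .get(...) won't match this pattern and would still need renaming by hand.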

LibXML: Comment-out a block of Elements

Is there a way to add/initiate a comment (e.g. $dom->createComment ...) such that it comments out an entire block of XML tags? Basically I want to turn off the content inside the comment.
For example, it would look like this:
<TT>
<AA>keep</AA>
<!-- comment to blocking
<BB>hideme1</BB>
<CC>hideme2</CC>
-->
<DD>d's content is good</DD>
</TT>
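For what it's worth, a minimal XML::LibXML sketch that produces that output: serialize the nodes to be disabled, wrap the text in a comment node, and drop the originals. The file name is illustrative, and note that XML comments may not contain "--", so this fails if the serialized block happens to include one.
use strict;
use warnings;
use XML::LibXML;

my $dom = XML::LibXML->load_xml(location => 'config.xml');

# Grab the elements to disable (BB and CC from the example above).
my @nodes = $dom->findnodes('//BB | //CC');

# Serialize them, wrap the text in a comment, and put it in their place.
my $blocked = join "\n", map { $_->toString } @nodes;
my $comment = $dom->createComment(" comment to blocking\n$blocked\n");
$nodes[0]->parentNode->insertBefore($comment, $nodes[0]);
$_->unbindNode() for @nodes;

print $dom->toString(1);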
Actually, this question is a precursor to my attempt to figure out a method to mark up/label/identify changes to an XML file in support of new client software functionality, while keeping the ability to remove/back out those XML changes in the rare event the client needs to fall back to the previous software version (and no, I can't simply point back to the original XML file, because the client is allowed to make minor modifications to existing node text values). This is all going to be controlled via a Perl script and LibXML's core modules (I can't use modules the client doesn't have).
So basically I've identified three possible types of XML changes resulting from new client software functionality:
1.) ADD new element node(s) (typically to support new software functionality)
2.) DELETE element node(s), or blocks of them (would be rare, but nevertheless a possibility)
3.) CHANGE node text values (rare, but the new software may require a new value)
For all three types, the client needs the ability to back out the changes. One thing I was thinking of using is ATTRIBUTES, since the existing XML files don't use them. For example, for an ADD change type, I could include an attribute like ADD="sw version 4.1". This way, if it needs to be removed, I could simply have the Perl script find those attribute strings and delete the nodes (using LibXML methods). Same thing with the CHANGE type: I could use an attribute like CHG="newvalue_oldvalue", then use straight Perl (or LibXML) to switch the value back based on the contents of the attribute. The DELETE change type is giving me a problem though (as well as the others, lol!). I want to be able to "keep" the deleted lines in the XML file solely for the case where the software falls back a version (at some later point the Perl script could eventually clean them up/delete them).
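A rough sketch of backing out the ADD and CHANGE cases along those lines ($dom is an already-parsed document; the attribute values and the CHG="newvalue_oldvalue" convention are as described above, not an established scheme):
# Back out ADDed elements.
$_->unbindNode() for $dom->findnodes('//*[@ADD="sw version 4.1"]');

# Back out CHANGEd text values, assuming CHG="newvalue_oldvalue".
for my $node ($dom->findnodes('//*[@CHG]')) {
    my (undef, $old) = split /_/, $node->getAttribute('CHG'), 2;
    $node->removeChildNodes();
    $node->appendText($old);
    $node->removeAttribute('CHG');
}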
I know this is a lot; I'm new to LibXML (but not to Perl). I was just wondering if any of you have thoughts on how to go about it, or have seen anything resembling this kind of request... I'd be grateful for any kind of advice! Thank you...

How can I tell if two image files are the same in Perl?

I have a Perl script I wrote for my own personal use that fetches image files from a website periodically. It then saves these images to a folder. These image files are quite often the same from fetch to fetch, and I'd like not to save duplicates if I can avoid it.
My question: what would be the best way to compare/check if they are the same?
My only real thought so far is to open a file handle to the existing one, MD5 it, MD5 the $response->content from the fetch, and then compare them. Would that work?
Is there a better way?
EDIT:
Wow, already tons of great suggestions. Does it help if I tell you that this script runs daily via cron, i.e. it is guaranteed to always run at the exact same time every day? Also: I'm looking at the Last-Modified headers on some of these, and they don't look 100% accurate, i.e. there are some that have a Last-Modified of over a week ago when I know the image is more recent than that. I'm assuming that's because the image file itself hasn't been modified on the server since then... which doesn't help me much...
Don't open and hash the stored image each time - stash the hash alongside the image when you store it. Compare sizes as well.
Don't issue a GET request straight away; do a HEAD first and compare the size, last modification date and any ETags to what you got last time.
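A sketch combining both suggestions; load_stash, save_stash and save_image are hypothetical placeholders for however you persist state between runs, and the URL is illustrative.
use strict;
use warnings;
use LWP::UserAgent;
use Digest::MD5 qw(md5_hex);

my $ua  = LWP::UserAgent->new;
my $url = 'http://example.com/image.jpg';

my %seen = load_stash();   # hypothetical: { $url => { len => ..., md5 => ... } }

# Cheap check first: HEAD and compare the advertised size.
my $head = $ua->head($url);
my $len  = $head->header('Content-Length');
exit if defined $len && defined $seen{$url}{len} && $len == $seen{$url}{len};

# Size changed or is unknown: fetch and compare against the stored hash.
my $resp = $ua->get($url);
my $md5  = md5_hex($resp->content);
if (!defined $seen{$url}{md5} || $seen{$url}{md5} ne $md5) {
    save_image($url, $resp->content);   # hypothetical
    $seen{$url} = { len => length($resp->content), md5 => $md5 };
    save_stash(%seen);                  # hypothetical
}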
There are a number of HTTP headers you can use for this -- if you save the time that you last retrieved the file, you can do a conditional GET with
If-Modified-Since: <date>
Or, if the server returns an ETag header with the response, you can store that with the image (or a collection of all the ETags you have seen for that image), and do:
If-None-Match: <all of your etags here>
If the server supports conditional GETs, then you will get a "304 Not Modified" response, with no body.
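In LWP a conditional GET along those lines might look like this; $etag and $last_fetched are whatever you stored after the previous run. (If If-Modified-Since alone is enough, LWP::UserAgent's mirror method handles it for you based on the local file's timestamp.)
use LWP::UserAgent;
use HTTP::Date qw(time2str);

my $ua   = LWP::UserAgent->new;
my $resp = $ua->get('http://example.com/image.jpg',   # illustrative URL
    'If-None-Match'     => $etag,                     # from the last response
    'If-Modified-Since' => time2str($last_fetched),   # epoch time of last fetch
);

if ($resp->code == 304) {
    # Not modified: keep the copy you already have.
}
elsif ($resp->is_success) {
    # Fresh content: save it and remember the new validators.
    my $new_etag = $resp->header('ETag');
}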
Yep, that sounds right.
Depending on how you're getting the file and how frequently, you might also be able to check for HTTP 304 Not Modified and save yourself the download.
MD5 would work, but you'd still have to pull the file. Is there any useful metadata in the HTTP headers: Content-Length, Cache-Control directives, ETags, etc.?
There's also the nice fdupes tool for this purpose. I don't know what system you're using or what systems the tool can be built for.