How to replay/show a saved flow in the console without triggering requests? - mitmproxy

The official docs show me all the ways to replay or filter or transform flows, i.e. network data captures, but they do not show how to simply view a saved flow.
So let's assume I save a flow via mitmdump, or – and I guess, or rather hope, it is the same file format – via save_stream_file (i.e. the command-line option --save-stream-file).
Now, can I load that again – without replaying, i.e. just to view it?
So think of a collected dumpcap file: I know I can load it in the Wireshark GUI. Wireshark itself of course does not re-trigger the requests, it just shows them, so I can analyse them.
I want to do the same here. How can I do that?

Oh, it seems to be done with the rfile option (or -r for short) – not really an intuitive name, especially when the companion option for filtering it is called readfile_filter, i.e. not abbreviated.
Also, you do not want to bind a new proxy for that, so to do this, add -n. (This is another option that I could not find documented, but remembered from some example.)
So in the end, here is how to load it in mitmproxy e.g.:
$ mitmproxy --rfile ./mitmproxy-flow.cap -n
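
For completeness, the whole round trip might then look like this (the file name is just an example; -w is the short form of --save-stream-file, and -n is short for --no-server):

$ mitmdump --save-stream-file ./mitmproxy-flow.cap
$ mitmproxy -n -r ./mitmproxy-flow.cap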

Related

how to debug in simpy

I have a general question about how to debug in Simpy. Normal debugging tools don't seem to work, since everything is working on the event loop, and you can't step through the code line by line and inspect what exists at any point in time.
Primarily, I'm interested in finding what kinds of processes and callbacks are in existence at a particular time, and how to remove them at the appropriate point. Are there any best practices surrounding debugging in discrete event simulation generally?
I would just use a bunch of print()s.
One thing you might find useful are the state attributes exposed by primitives such as resources. For example, you can ask a resource how many users it currently has, or how big the queue to use the resource is:
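A minimal sketch (assuming a plain Resource; the capacity is arbitrary):

import simpy

env = simpy.Environment()
res = simpy.Resource(env, capacity=2)

print(res.count)       # number of processes currently using the resource
print(len(res.queue))  # number of requests waiting for the resource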
All of these attributes can be found in the documentation; here is the resource example: https://simpy.readthedocs.io/en/latest/api_reference/simpy.resources.html

How can I copy load module using rexx?

I want to copy a load module from one pds to another using REXX.
You could invoke IEBCOPY from within a Rexx, allocating the appropriate datasets to the appropriate ddnames before invoking IEBCOPY.
I'm unable to provide an example as I don't have the facilities/access.
Note that doing so may tie up your terminal/session.
You could also go into a more elaborate solution to build and submit a batch job, perhaps even having a panel front end, driving file tailoring/skeletons.
As #cshneid said, you can use IEBCOPY. Using IEBCOPY in REXX is basically the same as in JCL (a rough sketch follows the list), but:
use TSO Alloc to allocate the files
Call/invoke the program
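A rough sketch, assuming IEBCOPY is available to your TSO session and using placeholder dataset names; your site may need different allocation parameters or an authorized invocation:

/* REXX - sketch: copy a load module between two load libraries */
"ALLOC FI(SYSUT1) DA('IN.LOAD.LIBRARY') SHR REUSE"
"ALLOC FI(SYSUT2) DA('OUT.LOAD.LIBRARY') SHR REUSE"
"ALLOC FI(SYSPRINT) SYSOUT"
/* SYSIN holds the usual control cards, e.g. COPY OUTDD=SYSUT2,INDD=SYSUT1 */
/* plus a SELECT MEMBER=... statement for the module you want              */
"ALLOC FI(SYSIN) DA('MY.CNTL(COPYCARD)') SHR REUSE"
Address LINKMVS "IEBCOPY"
"FREE FI(SYSUT1 SYSUT2 SYSPRINT SYSIN)"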
If running under ISPF you can use LMCOPY. Roughly the following should work; you may need to issue an LMOPEN / LMCLOSE on the data-ids as well.
Address ISPEXEC
'LMINIT DATAID(DIDFrom) DATASET(in.data.set)'   /* returns the "from" data-id in DIDFrom */
'LMINIT DATAID(DIDTo) DATASET(to.data.set)'     /* returns the "to" data-id in DIDTo */
'LMCOPY FROMID('DIDFrom') FROMMEM(mymem) TODATAID('DIDTo') TOMEM(newMemberName)'
'LMFREE DATAID('DIDFrom')'
'LMFREE DATAID('DIDTo')'
If running foreground, the ISPF services used to have the advantage that they "co-ordinated" their actions with all other ISPF users – less likely to corrupt the PDS directory. Not sure if this is still an advantage.
Using just REXX, what you want to do is not possible; however, you can invoke IEBCOPY (or your site's equivalent) to perform the task for you.
You may want to investigate calling programs like IEBCOPY and passing it the appropriate control cards to perform your task.

Non-RESTful backend with backbone.js

I'm evaluating backbone.js as a potential javascript library for use in an application which will have a few different backends: WebSocket, REST, and a 3rd-party library producing JSON. I've read some opinions that backbone.js works beautifully with RESTful backends, so long as the API is 'by the book' and follows the appropriate HTTP verbs. Can someone elaborate on what this means?
Also, how much trouble is it to get backbone.js to connect to WebSockets? Lastly, are there any issues with integrating a backbone.js model with a function which returns JSON - in other words does the data model always need to be served via REST?
Backbone's power is that it has an incredibly flexible and modular structure. It means that you can use, extend, take out, or modify any part of Backbone. This includes the AJAX functionality.
Backbone doesn't "care" where you get the data for your collections or models. It will help you out by providing an out-of-the-box RESTful "ajax" solution, but it won't be mad if you want to use something else!
This allows you to find (or write) any plugin you want to handle the server interaction. Just look on backplug.io, Google, and Github.
Specifically for Sockets there is backbone.iobind.
Can't find a plugin? No worries. I can tell you exactly how to write one (it's 100x easier than it sounds).
The first thing that you need to understand is that overwriting behavior is SUPER easy. There are 2 main ways:
Globally:
Backbone.Collection.prototype.sync = function() {
    // screw you Backbone!!! You're completely useless, I am doing my own thing
};
Per instance:
var MySpecialCollection = Backbone.Collection.extend({
    sync: function() {
        // I like what you're doing with the ajax thing... Clever clever ;)
        // But for a few collections I wanna do it my way. That cool?
    }
});
And the only other thing you need to know is what happens when you call "fetch" on a collection. This is the "by the book"/"out of the box" behavior:
collection#fetch is triggered by user (YOU). fetch will delegate the ACTUAL fetching (ajax, sockets, local storage, or even a function that instantly returns json) to some other function (collection#sync). Whatever function is in collection.sync has to take 3 arguments:
action: create (for creating), read (for fetching), update (for updating), or delete (for deleting) = CRUD.
context (the this variable) - if you don't know what this does, don't worry about it; it's not important for now
options - where da magic is. We only care about 1 option though
success: a callback that gets called when the data is "ready". THIS is the callback that collection#fetch is interested in, because that's when it takes over and does its thing. The only requirement is that sync passes it the following 1st argument:
response: the actual data it got back
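To make that contract concrete, here is a minimal sketch of a per-collection sync that "reads" from a plain function returning JSON (getLocalData is a hypothetical stand-in for your non-REST data source):

var MyCollection = Backbone.Collection.extend({
    sync: function(method, collection, options) {
        if (method === "read") {
            var response = getLocalData(); // hypothetical synchronous JSON source
            options.success(response);     // hand the raw data back so fetch can take over
        }
    }
});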
Now, whatever you put in collection#sync has to call that success callback from its options when it's done getting the data. Whenever collection#sync is done doing its thing, collection#fetch takes back over (via that callback passed in to success) and does the following nifty steps:
Calls set or reset (for these purposes they're roughly the same).
When set finishes, it triggers a sync event on the collection broadcasting to the world "yo I'm ready!!"
So what happens in set? Well, a bunch of stuff (deduping, parsing, sorting, removing, creating models, propagating changes, and general maintenance). Don't worry about it. It works ;) What you need to worry about is how you can hook in to different parts of this process. The only two you should worry about (if your server wraps data in weird ways) are:
collection#parse for parsing a collection. Should accept raw JSON (or whatever format) that comes from the server/ajax/websocket/function/worker/who-knows-what and turn it into an ARRAY of objects. Takes resp (the JSON) as its 1st argument and should return the mutated response. Easy peasy.
model#parse. Same as collection#parse, but it takes in the raw objects (i.e. imagine you iterate over the output of collection#parse) and spits out an "unwrapped" object.
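For instance, if your server wraps responses in an envelope (the { results: [...] } shape below is just an assumption), a minimal parse sketch:

var MyCollection = Backbone.Collection.extend({
    parse: function(resp) {
        // assuming the server returns { results: [...], total: 42 }
        return resp.results; // must return an ARRAY of raw model objects
    }
});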
Get off your computer and go to the beach because you finished your work in 1/100th the time you thought it would take.
That's all you need to know in order to implement whatever server system you want in place of the vanilla "ajax requests".

How to save file at some point without closing it?

OUTPUT TO "logfile.txt".
FOR EACH ...:
    ...
    PUT "Some log data". OUTPUT CLOSE. OUTPUT TO "logfile.txt" APPEND.
    ...
END.
Haven't found an appropriate statement to save the file at some point. I don't wanna use UNBUFFERED APPEND because it is supposedly slower. Maybe there are built-in logging tools? Maybe STREAMS could help me? The problem with my solution is that I have to specify the log filename each time I open it with the OUTPUT TO statement. A nested procedure may not have a clue about the filename.
The question as it stands is still ambiguous.
If you want a way to route the output through a standard "service", similar to what LOG-MANAGER does, you can do that:
by using static members of a class,
by using an API in a persistent procedure and PUBLISHing to it, or
by using an API in a session super-procedure and calling its API.
STREAMS will give you a way to segregate output for a single procedure or class to a single file, and keep that output from getting mingled with the production output; however, it's limited to the current program, which means it's not a general solution for an application-wide logging facility.
There is no "save" option.
However... you can force output to be flushed with:
put control null(0).
"Supposedly slower" is awfully vague. Yes, there is potentially more IO with unbuffered output. But whether or not that really matters depends heavily on what you are doing and how it will be used. It is very unlikely that it actually matters.
A STREAM would certainly help to keep things organized and make it so that you don't have to know the name of the file in nested procedures.
Yes, there are built-in logging tools. Look at the LOG-MANAGER system handle.
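For instance, a minimal LOG-MANAGER sketch (the file name, logging level, and subsystem tag are all assumptions):

LOG-MANAGER:LOGFILE-NAME = "client.log".   /* opens the log file */
LOG-MANAGER:LOGGING-LEVEL = 2.
LOG-MANAGER:WRITE-MESSAGE("Some log data", "MYAPP").
LOG-MANAGER:CLOSE-LOG().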
The code in the question would be better written as:
define stream logStream.

output stream logStream to value( "log.txt" ) append unbuffered.

for each customer no-lock:
    put stream logStream custName skip.
    /* put stream logStream control null(0). */  /* if you want to try fooling with buffered output... */
end.

output stream logStream close.

Will inserting the same `<script>` into the DOM twice cause a second request in any browsers?

I've been working on a bit of JavaScript code that, under certain conditions, lazy-loads a couple of different libraries (Clicky Web Analytics and the Sizzle selector engine).
This script is downloaded millions of times per day, so performance optimization is a major concern. To date, I've employed a couple of flags like script_loading and script_loaded to try to ensure that I don't load either library more than once (by "load," I mean requesting the scripts after page load by inserting a <script> element into the DOM).
My question is: Rather than rely on these flags, which have gotten a little unwieldy and hard to follow in my code (think callbacks and all of the pitfalls of asynchronous code), is it cross-browser safe (i.e., back to IE 6) and not detrimental to performance to just call a simple function to insert a <script> element whenever I reach a code branch that needs one of these libraries?
The latter would still ensure that I only load either library when I need it, and would also simplify and reduce the weight of my code base, but I need to be absolutely sure that this won't result in additional, unnecessary browser requests.
My hunch is that appending a <script> element multiple times won't be harmful, as I assume browsers should recognize a duplicate src URL and rely on a local cached copy. But, you know what happens when we assume...
I'm hoping that someone is familiar enough with the behavior of various modern (and not-so-modern, such as IE 6) browsers to be able to speak to what will happen in this case.
In the meantime, I'll write a test to try to answer this first-hand. My hesitation is just that this may be difficult and cumbersome to verify with certainty in every browser that my script is expected to support.
Thanks in advance for any help and/or input!
Got an alternative solution.
At the point where you insert the new script element in the DOM, could you not do a quick scan of existing script elements to see if there is another one with the same src? If there is, don't insert another?
Javascript code on the same page can't run multithreaded, so you won't get any race conditions in the middle of this or anything.
Otherwise you are just relying on the caching behaviour of current browsers (and HTTP proxies).
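A minimal sketch of that scan, using only IE6-era DOM APIs (note that script.src reads back as an absolute URL, so resolving your URL first is an assumption baked into the comparison):

function isScriptPresent(src) {
    var scripts = document.getElementsByTagName("script");
    for (var i = 0; i < scripts.length; i++) {
        if (scripts[i].src === src) { return true; } // src is the resolved, absolute URL
    }
    return false;
}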
The page is processed as a stream. If you load the same script multiple times, it will be run every time it is included. Obviously, due to the browser cache, it will be requested from the server only once.
I would stay away from this approach of inserting script tags for the same script multiple times.
The way I solve this problem is to have a "test" function for every script to see if it is loaded. E.g. for sizzle this would be "function() { return !!window['Sizzle']; }". The script tag is only inserted if the test function returns false.
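A minimal sketch of that approach (the URL and the test registry are assumptions):

var scriptTests = {
    // maps script URLs to "is it already loaded?" tests
    "/js/sizzle.js": function() { return !!window["Sizzle"]; }
};

function loadScriptOnce(src) {
    var test = scriptTests[src];
    if (test && test()) { return; } // already loaded, skip the insert
    var s = document.createElement("script");
    s.src = src;
    document.getElementsByTagName("head")[0].appendChild(s);
}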
Each time you add a script to your page, even if it has the same src, the browser may find it in the local cache or ask the server whether the content has changed.
Using a variable to check whether the script is included is a good way to reduce loading, and it's very simple.
For example, this may work for you:
var LOADED_JS = {};

function js_isIncluded(name) { // returns true if the js is already loaded
    return LOADED_JS[name] !== undefined;
}

function include_js(name) {
    if (!js_isIncluded(name)) {
        YOUR_LAZY_LOADING_FUNCTION(name);
        LOADED_JS[name] = true;
    }
}
You can also get all script elements and check the src, but my solution is better because it has the speed and simplicity of a hash lookup, and a script's src property holds an absolute path even if you set it with a relative one.
You may also want to init the array with the scripts loaded normally (without lazy loading) at page init, to avoid a double request.
For what it's worth, if you define the scripts as type="module", they will only be loaded and executed once.