I would like to add a uniform delay to responses in all sessions that Fiddler intercepts. The use of "response-trickle-delay" is unacceptable, since that doesn't actually introduce a uniform delay, but rather delays each 1KB of transfer (which simulates low bandwidth, rather than high latency).
The only reference I could find was here, which used the following atrocity (DO NOT USE!):
static function wait(msecs)
{
    var start = new Date().getTime();
    var cur = start;
    while (cur - start < msecs)
    {
        cur = new Date().getTime();
    }
}
and wait(5000); is inserted into OnBeforeResponse.
As expected, it locked up my computer and started overheating my CPU and I had to quit Fiddler.
I'm looking for something:
Less stupid, and
As simple as possible.
It looks like FiddlerScript is written in JScript.NET, and from what I gather there is a setTimeout() function, but I'm having trouble calling it (and I don't know JavaScript or .NET at all). Here is my OnBeforeResponse:
static function OnBeforeResponse(oSession: Session) {
    if (m_Hide304s && oSession.responseCode == 304) {
        oSession["ui-hide"] = "true";
    }
    setTimeout(function(){}, 1000);
}
It just gives a syntax error at the setTimeout line. Can setTimeout() be used from a FiddlerScript to introduce uniform delay?
FiddlerScript uses JScript.NET, which can reference .NET assemblies, and the System.Threading namespace contains Thread.Sleep.
In Tools > Fiddler Options > Extensions, add the path to System.Threading.dll. On my machine, this was located at C:\Windows\Microsoft.NET\Framework\v4.0.30319.
In FiddlerScript, add import System.Threading;.
You can now add lines like Thread.Sleep(1000).
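Putting those pieces together, a minimal sketch of the resulting OnBeforeResponse (reusing the question's 304-hiding logic; the 1000 ms value is just an example delay) could look like this:
import System.Threading;  // at the top of the FiddlerScript, alongside the other imports

// ...

static function OnBeforeResponse(oSession: Session) {
    if (m_Hide304s && oSession.responseCode == 304) {
        oSession["ui-hide"] = "true";
    }
    // Uniform delay: block this session's worker thread for 1 second before the response is returned.
    Thread.Sleep(1000);
}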
Env: Mac OS 12.1, JDK 17, Vert.x 4.2.4
Question: how to capture command line input from a verticle? Tried so far the following in the public void start(Promise<Void> startPromise) throws Exception method:
getVertx().createSharedWorkerExecutor("sys-in").executeBlocking(promise -> {
    try (final BufferedReader br = new BufferedReader(new InputStreamReader(System.in))) {
        String line;
        int count = 0;
        do {
            System.out.print("message to MC: ");
            line = br.readLine();
            count++;
            //doSth(line); // e.g. send line over multicast
        } while (count < 3);
    } catch (Throwable t) {
        // log.info("<start> ", t);
    } finally {
        // bye(); // send a final message and close vertx
        promise.complete();
    }
});
This will start, get 3 nulls from br, and exit. I also tried a separate ExecutorService, in vain. I couldn't find any help in the Vert.x docs either. Any hints are appreciated. Two notes:
I'm aware of Vert.x's warnings about doing blocking work
Vert.x might not be meant to be used this way, but it would be nice if reading from the command line could be done with the same toolkit
I understand what you are trying to accomplish, but the problem is that it goes against the fundamentals of the verticle concept. Waiting for user input is a potentially infinitely blocking operation, i.e. there is no guarantee the user will ever enter a value. In that case you are left with a verticle that hangs forever, consuming resources and stuck in one spot. Multiply this if you are using worker verticles and you might have serious problems with the app. This issue is also emphasized here: https://vertx.io/docs/vertx-core/java/#blocking_code (under Warning).
The linked page also suggests a solution using a separate thread. A non-Vert.x thread won't mind being blocked, and once the user input arrives it can inform the Vert.x part of the application via the event bus that the input-dependent code can now run.
This might not be the solution you had in mind since it isn't pure Vert.x, but keep in mind that Vert.x is just another tool, and that tool is not a good fit for what you are trying to accomplish here. It does, however, pair well with plain Java, which won't mind.
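For illustration, here is a minimal sketch of that separate-thread approach; the verticle class name and the "user.input" event bus address are made up for the example:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ConsoleInputVerticle extends AbstractVerticle {

    @Override
    public void start(Promise<Void> startPromise) {
        // React to console input on the event loop, via the event bus.
        vertx.eventBus().<String>consumer("user.input",
                msg -> System.out.println("message to MC: " + msg.body()));

        // Plain (non-Vert.x) daemon thread that is allowed to block on System.in.
        Thread stdinReader = new Thread(() -> {
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            try {
                String line;
                while ((line = br.readLine()) != null) {
                    vertx.eventBus().publish("user.input", line);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }, "stdin-reader");
        stdinReader.setDaemon(true);
        stdinReader.start();

        startPromise.complete();
    }
}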
I'm having an issue with ClientRpc never being called on any client objects. I've trawled the internet for hours, but I can't find any clues as to why my implementation isn't working.
I'm following an order of events like so:
Add players in a lobby
Switch scene to the gameplay scene
Generate random tiles on the server
Send tile data to the clients using Rpc for rendering
However, the rendering function never gets called.
NetworkManager.cs
public override void OnServerReady(NetworkConnection conn)
{
    base.OnServerReady(conn);

    //Make sure they're all ready
    for (int i = RoomPlayers.Count - 1; i >= 0; i--)
    {
        if (!RoomPlayers[i].IsReady)
        {
            return;
        }
    }

    //Previously add SpawnTiles to OnServerReadied
    OnServerReadied?.Invoke(conn);
}
GameManager.cs
private void SpawnTiles(NetworkConnection conn)
{
    //Generate rawTiles beforehand
    Debug.Log(conn.isReady);
    Debug.Log("Entered spawn tiles");
    RpcSpawnTiles(rawTiles);
}

[ClientRpc]
public void RpcSpawnTiles(short[] rawTiles)
{
    Debug.Log("Client spawning tiles");
}
And this is my output when run on a host:
True
Entered spawn tiles
It appears to never enter the Rpc function :(
Is there something super obvious that I'm missing? My GameManager does have a NetworkIdentity attached to it.
There seemed to be a combination of things that I had to do to get it to work. I'm not entirely sure which one fixed it, but if you are having these problems, these are some of the things I had to check:
Just basic code. Make sure that the appropriate things are being created on the client, host, or server
Make sure that there aren't any race conditions, especially when working with changing scenes. You can call stuff from OnClientSceneChanged and OnServerSceneChanged
As derHugo pointed out, I was not checking whether my payload could even be sent. It turns out that the maximum message size that can be sent is 1200 bytes, while I was trying to send 100x100x4 bytes. This error didn't occur when I was testing on the host, only when there was an external client
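For that last point, one possible workaround (not necessarily what was actually done here; the chunk size, method names, and attributes below are illustrative assumptions) is to split the tile array into chunks that each stay under the transport's message size limit:
using UnityEngine;
using Mirror;

public class GameManager : NetworkBehaviour
{
    // Assumption: 256 shorts (512 bytes) stays comfortably under the ~1200-byte limit.
    private const int ChunkSize = 256;

    [Server]
    private void SpawnTilesChunked(short[] rawTiles)
    {
        // Send the map in small slices instead of one large payload.
        for (int offset = 0; offset < rawTiles.Length; offset += ChunkSize)
        {
            int count = Mathf.Min(ChunkSize, rawTiles.Length - offset);
            short[] chunk = new short[count];
            System.Array.Copy(rawTiles, offset, chunk, 0, count);
            RpcSpawnTileChunk(offset, chunk, rawTiles.Length);
        }
    }

    [ClientRpc]
    public void RpcSpawnTileChunk(int offset, short[] chunk, int totalLength)
    {
        // Each client reassembles (or renders) this slice of the map.
        Debug.Log($"Received tiles {offset}..{offset + chunk.Length - 1} of {totalLength}");
    }
}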
I need to detect when the currently playing audio/video is paused. I cannot find anything for 1.0. My app is a bit complex, but here is condensed code:
/* This function is called when the pipeline changes states. We use it to
 * keep track of the current state. */
static void state_changed_cb(GstBus *bus, GstMessage *msg, CustomData *data)
{
    GstState old_state, new_state, pending_state;
    gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
    if(GST_MESSAGE_SRC(msg) == GST_OBJECT(data->playbin))
    {
        g_print("State set to %s\n", gst_element_state_get_name(new_state));
    }
}
gst_init(&wxTheApp->argc, &argv);
m_playbin = gst_element_factory_make("playbin", "playbin");
if(!m_playbin)
{
    g_printerr("Not all elements could be created.\n");
    exit(1);
}
CustomData* data = new CustomData(xid, m_playbin);
GstBus *bus = gst_element_get_bus(m_playbin);
gst_bus_set_sync_handler(bus, (GstBusSyncHandler) create_window, data, NULL); //here I do video overlay stuff
g_signal_connect(G_OBJECT(bus), "message::state-changed", (GCallback) state_changed_cb, &data);
What am I doing wrong? I cannot find a working example of connecting such events in GStreamer 1.0, and 0.x seems a bit different from 1.0, so the many examples for 0.x don't help.
UPDATE
I have found a way to get messages. I run a wxWidgets timer with a 500 ms interval, and each time the timer fires I call
GstMessage* msg = gst_bus_pop(m_bus);
if(msg != NULL)
{
    g_print("New Message -- %s\n", gst_message_type_get_name(msg->type));
}
Now I get a lot of 'state-changed' messages. I still want to know whether a given message means Pause, Stop, Play, or End of Media (i.e. a way to differentiate which message it is) so that I can notify the UI.
So while I get messages now, the basic problem of getting specific notifications remains unsolved.
You have to call gst_bus_add_signal_watch() (like in 0.10) to enable emission of the signals. Without that you can only use the other ways to get notified about GstMessages on that bus.
Also, just to be sure: you need a running GLib main loop on the default main context for this to work. Otherwise you need to do things a bit differently.
For the updated question:
Check the documentation: gst_message_parse_state_changed() can be used to parse the old, new and pending state from the message. This is also still the same as in 0.10. From the application's point of view, conceptually not much has really changed between 0.10 and 1.0.
Also, you shouldn't do this timeout waiting, as it will block your wxWidgets main loop. The easiest solution would be to use a sync bus handler (which you already have) and dispatch all messages from there to some callback on the wxWidgets main loop.
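As a rough sketch (reusing the playbin, bus, and CustomData from the question; eos_cb is a made-up name and the state_changed_cb below just extends the question's version), enabling the signal watch and distinguishing pause/play/stop/end-of-media could look like this:
/* Only react to state changes of the playbin itself. */
static void state_changed_cb(GstBus *bus, GstMessage *msg, CustomData *data)
{
    GstState old_state, new_state, pending_state;
    gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
    if (GST_MESSAGE_SRC(msg) == GST_OBJECT(data->playbin))
    {
        if (new_state == GST_STATE_PAUSED)
            g_print("Paused\n");          /* notify the UI: paused */
        else if (new_state == GST_STATE_PLAYING)
            g_print("Playing\n");         /* notify the UI: playing */
        else if (new_state == GST_STATE_READY || new_state == GST_STATE_NULL)
            g_print("Stopped\n");         /* notify the UI: stopped */
    }
}

/* End of media. */
static void eos_cb(GstBus *bus, GstMessage *msg, CustomData *data)
{
    g_print("End of media\n");
}

/* After creating the pipeline (where the question already connects its handler).
 * gst_bus_add_signal_watch() needs a running GLib main loop on the default main context. */
GstBus *bus = gst_element_get_bus(m_playbin);
gst_bus_add_signal_watch(bus); /* enables emission of the message signals */
g_signal_connect(G_OBJECT(bus), "message::state-changed", (GCallback) state_changed_cb, data);
g_signal_connect(G_OBJECT(bus), "message::eos", (GCallback) eos_cb, data);
gst_object_unref(bus);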
I'm having a strange issue in my code. I have a method that does a very simple validation based on a string variable:
private void showNextStep(String psCondition, String poStepName){
    int liCurrentStep = Integer.valueOf(poStepName);
    String lsNextTrueStep = moSteps[liCurrentStep][4];
    String liNextFalseStep = moSteps[liCurrentStep][5];
    if ("Yes".equals(psCondition)){
        moFrmStepsContainer.getField(liNextFalseStep).hide();
        moFrmStepsContainer.getField(lsNextTrueStep).show();
    } else {
        moFrmStepsContainer.getField(liNextFalseStep).show();
        moFrmStepsContainer.getField(lsNextTrueStep).hide();
    }
}
Now, here is the tricky part: if I execute the application in debug mode, it does the validation right every time; however, if I don't, it always goes to the else block (or at least I think so). I tried using JS alerts (I have a class that calls JS methods) to debug manually and check the values of the variables; the values were all right and the validation was also good. This means that only when debugging, or when putting alerts at the beginning of the IF block, does it do the validation right; otherwise it always goes to the ELSE. What do you think it could be?
It might be worth mentioning that this is a web application made in NetBeans 6.9, using the GWT 2.1 framework. The application runs in Firefox 25.0.1.
Thank you!
UPDATE
Here is the code of the event handler that calls my method:
final ComboBoxItem loYesNo = new ComboBoxItem("cmbYesNo" + moSteps[liStepIndex][0], "");
loYesNo.setValueMap("Yes", "No");
loYesNo.setVisible(false);
loYesNo.setAttribute("parent", liStepIndex);
loYesNo.addChangedHandler(new ChangedHandler() {
    public void onChanged(ChangedEvent poEvent){
        String lsStepName = loYesNo.getName();
        FormItem loItem = moFrmStepsContainer.getField(lsStepName);
        String liStepNumber = String.valueOf(loItem.getAttributeAsInt("parent"));
        showNextStep((String) poEvent.getItem().getValue(), liStepNumber);
    }
});
We have a fairly simple program that's used for creating backups. I'm attempting to parallelize it, but I'm getting an OutOfMemoryException inside an AggregateException. Some of the source folders are quite large, and the program doesn't crash until about 40 minutes after it starts. I don't know where to start looking, so the code below is a near-exact dump of all the code, minus the directory structure and the exception-logging code. Any advice as to where to start looking?
using System;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;

namespace SelfBackup
{
    class Program
    {
        static readonly string[] saSrc = {
            "\\src\\dir1\\",
            //...
            "\\src\\dirN\\", //this folder is over 6 GB
        };
        static readonly string[] saDest = {
            "\\dest\\dir1\\",
            //...
            "\\dest\\dirN\\",
        };

        static void Main(string[] args)
        {
            Parallel.For(0, saDest.Length, i =>
            {
                string sDest = saDest[i];
                try
                {
                    if (Directory.Exists(sDest))
                    {
                        //Delete directory first so old stuff gets cleaned up
                        Directory.Delete(sDest, true);
                    }
                    //recursive function
                    clsCopyDirectory.copyDirectory(saSrc[i], sDest);
                }
                catch (Exception e)
                {
                    //standard error logging
                    CL.EmailError();
                }
            });
        }
    }
}
///////////////////////////////////////
using System.IO;
using System.Threading.Tasks;

namespace SelfBackup
{
    static class clsCopyDirectory
    {
        static public void copyDirectory(string Src, string Dst)
        {
            Directory.CreateDirectory(Dst);

            /* Copy all the files in the folder.
               If and when .NET 4.0 is installed, change
               Directory.GetFiles to Directory.EnumerateFiles for
               slightly better performance. */
            Parallel.ForEach<string>(Directory.GetFiles(Src), file =>
            {
                /* An exception thrown here may be arbitrarily deep into
                   this recursive function. There's also a good chance that
                   if one copy fails here, so too will other files in the
                   same directory, so we don't want to spam out hundreds of
                   error e-mails, but we don't want to abort altogether.
                   Instead, the best solution is probably to throw back up
                   to the original caller of copyDirectory and move on to
                   the next Src/Dst pair by not catching any possible
                   exception here. */
                File.Copy(file, //src
                    Path.Combine(Dst, Path.GetFileName(file)), //dest
                    true); //bool overwrite
            });

            //Call this function again for every directory in the folder.
            Parallel.ForEach(Directory.GetDirectories(Src), dir =>
            {
                copyDirectory(dir, Path.Combine(Dst, Path.GetFileName(dir)));
            });
        }
    }
}
The Threads debug window shows 417 Worker threads at the time of the exception.
EDIT: The copying is from one server to another. I'm now trying to run the code with the last Parallel.ForEach changed to a regular foreach.
Making a few guesses here as I haven't yet had feedback from the comment to your question.
I am guessing that the large number of worker threads arises because actions (an action being the unit of work carried out by the parallel foreach) are taking longer than a specified amount of time, so the underlying ThreadPool grows the number of threads. The ThreadPool follows an algorithm of growing the pool so that new tasks are not blocked by existing long-running tasks, e.g. "if all my current threads have been busy for half a second, I'll start adding more threads to the pool". However, you are going to get into trouble if all tasks are long-running and the new tasks you add make existing tasks run even longer. This is why you are probably seeing a large number of worker threads - possibly because of disk thrashing or slow network I/O (if networked drives are involved).
I am also guessing that files are being copied from one disk to another, or they are being copied from one location to another on the same disk. In this case, adding threads to the problem is not going to help out much. The source and destination disks only have one set of heads, so trying to make them do multiple things at once is likely to actually slow things down:
The disk heads will be lurching all over the place.
Your disk\OS caches may be frequently invalidated.
This may not be a great problem for parallelization.
Update
In answer to your comment, if you are getting a speed-up using multiple threads on smaller datasets, then you could experiment with lowering the maximum number of threads used in your parallel foreach, e.g.
ParallelOptions options = new ParallelOptions { MaxDegreeOfParallelism = 2 };
Parallel.ForEach(Directory.GetFiles(Src), options, file =>
{
    //Do stuff
});
But please do bear in mind that disk thrashing may negate any benefits from parallelization in the general case. Play about with it and measure your results.
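As a rough illustration of "measure your results" (assuming the saSrc/saDest arrays from the question are in scope; the thread-cap values here are arbitrary), a Stopwatch comparison along these lines could be used:
using System;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;

// Try a few thread caps and time a copy pass of one source folder with each.
foreach (int maxThreads in new[] { 1, 2, 4 })
{
    var options = new ParallelOptions { MaxDegreeOfParallelism = maxThreads };
    var sw = Stopwatch.StartNew();

    Parallel.ForEach(Directory.GetFiles(saSrc[0]), options, file =>
    {
        File.Copy(file, Path.Combine(saDest[0], Path.GetFileName(file)), true);
    });

    sw.Stop();
    Console.WriteLine("MaxDegreeOfParallelism={0}: {1} ms", maxThreads, sw.ElapsedMilliseconds);
}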