How to interpret memcached cachedump? - memcached

I added an entry to memcached and I'm looking at the cache dump:
set a 0 0 5
hello
STORED
stats cachedump 1 5
ITEM a [5 b; 1312548967 s]
END
The first value, 5 b, is the size of the item: 5 bytes. The second one is what I'm confused about; it looks like seconds since the epoch.
I tried adding another entry, and even that entry got exactly the same value.
set b 0 0 5
hello
STORED
stats cachedump 1 5
ITEM b [5 b; 1312548967 s]
ITEM a [5 b; 1312548967 s]
END
It looks like some kind of timestamp, but I'm not sure what it really is. Can someone explain it?

This is meant for debugging purposes only and will very likely be removed soon.
You're invited to participate in that conversation. Please let us know what you're interested in.

Related

promql example with related fields but different labels

I'm using Prometheus and Grafana to track a web server app.
I want to graph the average duration (in ms) of a particular query. I think I can get there from the data below, but I'm struggling.
My two sets of values:
rate(http_server_request_duration_seconds_sum[5m])
Element Value
{instance="dbserver:5000",job="control-tower",method="get",path="/api/control/v1/node/config.json"} 0.0010491088980113385
{instance="dbserver:5000",job="control-tower",method="get",path="/api/schedule/v1/programs/:id.json"} 0
{instance="dbserver:5000",job="control-tower",method="get",path="/api/schedule/v1/users.json"} 0
{instance="dbserver:5000",job="control-tower",method="get",path="/metrics"} 0.00009133616130826839
{instance="dbserver:5000",job="control-tower",method="post",path="/api/caption/v1/messages.json"} 0
{instance="dbserver:5000",job="control-tower",method="post",path="/api/caption/v1/sessions.json"} 0
{instance="dbserver:5000",job="control-tower",method="post",path="/api/schedule/v1/programs.json"} 0
{instance="dbserver:5000",job="control-tower",method="put",path="/api/caption/v1/sessions/captioners.json"} 0
{instance="dbserver:5000",job="control-tower",method="put",path="/api/control/v1/agents/:id.json"}
rate(http_server_requests_total[5m])
Element Value
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="get",path="/api/control/v1/node/config.json"} 0.03511075688258612
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="get",path="/api/schedule/v1/programs/:id.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="get",path="/api/schedule/v1/users.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="get",path="/metrics"} 0.06671043807691363
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="post",path="/api/caption/v1/sessions.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="post",path="/api/schedule/v1/programs.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="put",path="/api/caption/v1/sessions/captioners.json"} 0
{code="200",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="put",path="/api/control/v1/agents/:id.json"} 0
{code="422",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="post",path="/api/schedule/v1/programs.json"} 0
{code="502",host="dbserver:5000",instance="dbserver:5000",job="control-tower",method="post",path="/api/caption/v1/messages.json"}
They have different labels. For this, I only care about path="/api/caption/v1/messages.json".
I think I need to use a combination of rate, sum, and "on" or "ignoring", but I haven't been able to get either of them to work at all.
I can get the numerator (in seconds) with:
rate( http_server_request_duration_seconds_sum { path="/api/caption/v1/messages.json" }[5m])
And that returns:
{instance="dbserver:5000", job="control-tower", method="post", path="/api/caption/v1/messages.json"}
But the denominator can have different return codes, so I have to sum those, and I need some form of ignoring or on, but I haven't found an example that helps me out, and I'm really new at this.
Anyone?
Okay, I continued to play. Because I only have one path I worry about, I figured out I could sum the rates. I think this works:
sum( rate( http_server_request_duration_seconds_sum {path="/api/caption/v1/messages.json"}[2h])) / sum( rate( http_server_requests_total{ path="/api/caption/v1/messages.json"}[2h]))
I changed the range window to 2 hours because my sample data had fallen off the 5-minute window and I was getting zeros.
I THINK what this is doing is summing the rates, which gets rid of all the labels. And I THINK it's also using 2 hours of data; the rate value is how quickly the value changed over that 2-hour period.
I would love comments.
This solution won't work if I want one chart to include other paths, and I'm still not sure what to do about that. So this solves my current problem, but still doesn't help me figure out how to do something similar with ignoring or on.
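For the multi-path version, one sketch (assuming the same metric names as above) is to aggregate both sides of the division by the labels you want to keep. Both vectors then carry identical label sets, so they match one-to-one without needing on or ignoring at all:

```
sum by (path) (rate(http_server_request_duration_seconds_sum[2h]))
/
sum by (path) (rate(http_server_requests_total[2h]))
```

Multiplying by 1000 would convert the result to ms. If you want to keep the other labels instead, ignoring(code, host) on the division is the alternative route, but you would still have to sum away the code label on the right-hand side first, since the per-code series would otherwise not match one-to-one.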

Possible bug in Pd patch

I have made a very simple patch in which, when a bang is triggered, it is meant to output a unique number between 0 and 2; in other words, no numbers are repeated.
The way I set it up, it should work in theory. Even my programming mentor said that it should work, in theory, and he's generally a very smart man. He's informally known as the boffin of the academy.
A few more details:
This happens in both Purr Data and Pure Data, with the exact same setup.
No external libraries are used, just plain vanilla objects.
Since there doesn't seem to be a way to attach the actual file itself, I will instead post an image of the code:
The problem is the depth-first processing used by Pd and the related stack unrolling, which can lead to setting the 2nd inlet of [select] to an old value (which you didn't expect).
Example
Note: sel:in0 means the left-most inlet of [select], sel:out1 its second outlet, and so on.
Imagine [select] is initialized to 0 and the [random 3] object outputs the sequence 2 0 0 2 0 2 ... (hint: [seed 96().
The expected output would be 2 0 2 0 2 ..., however the output really is 2 0 2 2 2 ...
Now this is what happens if you consecutively send [bang( to the random generator:
1. random generates 2
   - 2 is sent to sel:in0, which compares it to 0 (no match)
   - so it is sent out of sel:out1 (the reject outlet), displaying the number 2
   - after that the number is sent to sel:in1, setting its internal state to 2.
2. random generates 0
   - 0 is sent to sel:in0, which compares it to 2 (no match)
   - so it is sent out of sel:out1, displaying the number 0
   - after that the number is sent to sel:in1, setting its internal state to 0.
3. random generates 0
   - 0 is sent to sel:in0, which compares it to 0 (match!)
   - so a bang is sent through sel:out0 (the match outlet),
   - triggering a new call to random, which now generates 2
   - 2 is sent to sel:in0, which compares it to 0 (no match)
   - so it is sent out of sel:out1, displaying the number 2
   - after that the number 2 is sent to sel:in1, setting its internal state to 2.
   - after that the number 0 (still pending in trigger:out0) is sent to sel:in1, setting its internal state to 0!!!
4. random generates 0
   - 0 is sent to sel:in0, which compares it to 0 (match!)
   - so a bang is sent through sel:out0,
   - triggering a new call to random, which now generates 2
   - 2 is sent to sel:in0, which compares it to 0 (no match)
   - so it is sent out of sel:out1, displaying the number 2
   - after that the number 2 is sent to sel:in1, setting its internal state to 2.
   - after that the number 0 (still pending in trigger:out0) is sent to sel:in1, setting its internal state to 0!!!
As you can see, at the end of step 3 the internal state of [select] is 0, even though the last number generated by [random] was 2 (because the left-most outlet of [trigger] only sends the 0 after the 2 has been fully processed, due to stack unrolling).
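A minimal Python sketch (my own illustration, not real Pd code) of the same message ordering: on a match, the recursive re-trigger of random runs first, and only afterwards does the stale number still pending on the stack overwrite the stored state:

```python
# A toy model of the patch: bang() plays the role of
# [bang( -> [random 3] -> [select], and `state` is [select]'s right inlet.
seq = iter([2, 0, 0, 2, 0, 2])   # pretend [random 3] output sequence
state = 0                         # [select]'s stored comparison value
displayed = []                    # numbers that reach the number box

def bang():
    global state
    n = next(seq)
    if n == state:
        # match outlet: re-trigger random *recursively*, exactly like
        # Pd's depth-first stack unrolling
        bang()
    else:
        # reject outlet: the number is displayed first ...
        displayed.append(n)
    # ... and only then does the pending trigger output set the state,
    # so after a match this overwrites it with the stale old number
    state = n

for _ in range(4):
    bang()

print(displayed)  # [2, 0, 2, 2] -- the unwanted repeated 2 from the question
```

After four bangs the displayed sequence is 2 0 2 2, matching the faulty behaviour described above: the final state is 0 even though the last displayed number was 2.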
Solution
The solution is simple: make sure that the state of [select] contains the last displayed value, rather than the last one generated on the stack, and avoid feedback when modifying the internal state.
E.g. (using local send/receive pairs for nicer ASCII art):
[r $0-again]
|
[bang(
|
[random 3]
|
| [r $0-last]
| |
[select]
| |
| [t f f]
| | |
| | [s $0-last]
| |
| [print]
|
[s $0-again]

validation digital multi signature PDF [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 5 years ago.
Improve this question
I need help from someone who wants to collaborate on the following problem. I am validating the digital signatures of a PDF with iText, and this works very well. The problem is that I cannot validate correctly when there are 2 signatures and the first one is invalid, as indicated by Adobe Acrobat, but the iText examples consider it valid. Any help, please?
To begin with, Adobe Reader does not complain about a signature becoming invalid because of changes to the signed bytes of the first signed revision, but because of additions in the incremental update leading to the second signed revision, which it considers disallowed.
Neither iText nor PDFBox (which originally also had been mentioned in a tag of the question) check this in their standard signature validation code. Thus, even if the addition in question was disallowed, neither iText nor PDFBox signature validation code would have recognized that.
Actually I don't know whether any PDF product other than Adobe Acrobat / Adobe Reader tries to check whether additions in incremental updates to signed PDFs are disallowed or not.
That being said, even Adobe's tests in this regard are not very good: when they check whether some change is allowed, they often actually check whether the change was executed in the manner Adobe itself would have executed an allowed change. Thus, if you make an allowed change in a different way, Adobe is likely to claim a disallowed change.
This also is the case in the sample document Con firma fallada.pdf.
In the first signed revision the PDF Catalog object looks like this (pretty printed):
1 0 obj
<<
/Type/Catalog
/ViewerPreferences 2 0 R
/Pages 29 0 R
/AcroForm
<<
/DA(/Helv 0 Tf 0 g )
/Fields[32 0 R]
/SigFlags 3
/DR<</Font<</Helv 33 0 R/ZaDb 34 0 R>>>>
>>
/Outlines 3 0 R
>>
endobj
In the incremental update to the second signed revision an additional signature field has been added, so the object had to be rewritten. The signing software additionally refactored the direct AcroForm dictionary object into a new indirect object:
1 0 obj
<<
/Pages 29 0 R
/AcroForm 35 0 R
/Type /Catalog
/ViewerPreferences 2 0 R
/Outlines 3 0 R
>>
endobj
35 0 obj
<<
/DA (/Helv 0 Tf 0 g )
/DR << /Font << /Helv 33 0 R/ZaDb 34 0 R >> >>
/Fields [ 32 0 R 36 0 R ]
/SigFlags 3
>>
endobj
With this in place, Adobe claims a disallowed change.
If one replaces the above with an equivalent version using a direct AcroForm dictionary object, Adobe does not claim that disallowed change anymore:
1 0 obj
<<
/Pages 29 0 R
/AcroForm
<<
/DA (/Helv 0 Tf 0 g )
/DR << /Font << /Helv 33 0 R/ZaDb 34 0 R >> >>
/Fields [ 32 0 R 36 0 R ]
/SigFlags 3
>>
/Type /Catalog
/ViewerPreferences 2 0 R
/Outlines 3 0 R
>>
endobj
This, by the way, is exactly how the Catalog looks in the incremental update with a second signature in Con firma buena.pdf.
(Curiously, though, that file also contains a copy of the direct AcroForm dictionary object in indirect object 35, probably a sign of someone testing. As that object is not referenced, though, it does not disturb any check...)

Akka Reactive Streams always one message behind

For some reason, my Akka streams always wait for a second message before "emitting"(?) the first.
Here is some example code that demonstrates my problem.
val rx = Source((1 to 100).toStream.map { t =>
  Thread.sleep(1000)
  println(s"doing $t")
  t
})
rx.runForeach(println)
yields output:
doing 1
doing 2
1
doing 3
2
doing 4
3
doing 5
4
doing 6
5
...
What I want:
doing 1
1
doing 2
2
doing 3
3
doing 4
4
doing 5
5
doing 6
6
...
The way your code is set up now, you are completely transforming the Source before it's allowed to start emitting elements downstream. You can clearly see that behavior (as @slouc stated) by removing the toStream on the range of numbers that represents the source. If you do that, you will see the Source be completely transformed first, before it starts responding to downstream demand. If you actually want to run a Source into a Sink with a transformation step in the middle, you can structure things like this:
val transform = Flow[Int].map { t =>
  Thread.sleep(1000)
  println(s"doing $t")
  t
}

Source((1 to 100).toStream)
  .via(transform)
  .to(Sink.foreach(println))
  .run()
If you make that change, then you will get the desired effect, which is that an element flowing downstream gets processed all the way through the flow before the next element starts to be processed.
You are using .toStream, which means that the whole collection is lazy. Without it, your output would first be a hundred "doing"s followed by the numbers from 1 to 100. However, a Stream evaluates only its first element, which gives the "doing 1" output, and stops there. The next element is evaluated when needed.
Now, I couldn't find any details on this in the docs, but I presume that runForeach has an implementation that takes the next element before invoking the function on the current one. So before calling println on element n, it first examines element n+1 (e.g. checks whether it exists), which results in the "doing n+1" message. Then it performs your println on the current element, which results in the message "n".
Do you really need to map() before you runForeach? I mean, do you need two traversals through the data? I know I'm probably stating the obvious, but if you just process your data in one go like this:
val rx = Source((1 to 100).toStream)
rx.runForeach { t =>
  Thread.sleep(1000)
  println(s"doing $t")
  // do something with 't', which is now equal to what "doing" says
}
then you don't have a problem of what's evaluated when.

How can I prevent duplicate results in a fetch from CoreData?

I'm running a very simple fetch, unitType like $GIVEN_TYPE, with a substitution dictionary. It consistently returns 40 objects when there should be about 5. It seems to repeat the same results in different orders, like permutations of them or something, except there are 120 permutations of 5, not 40.
It returns :
A B C D E
A E D C B
B A C D E
E D B C A
C E B D A
A D E C B
C D B E A
B E A C D
consistently every time.
I'm 99% sure there aren't all these repetitions of the same instances. I would check to make sure, but I'm not really sure how to check; I had expected this query to return one of each.
Any help is appreciated in narrowing this down.
Update
Here's the basic code I'm using for the lookup. The fetch request is defined in Xcode, but it's a one-liner; it just says unitType like $GIVEN_TYPE
NSError * e = nil;
NSArray * results = nil;
NSManagedObjectModel * model = [[cont persistentStoreCoordinator] managedObjectModel];
NSDictionary * substDict = [NSDictionary dictionaryWithObject:name forKey:@"GIVEN_TYPE"];
NSFetchRequest * fetReq = [model fetchRequestFromTemplateWithName:@"UnitLookup" substitutionVariables:substDict];
results = [cont executeFetchRequest:fetReq error:&e];
@macworth - I did check now with Base, and I was right that there are only 5 objects with unitType equal to the value I put in (I tried changing like to ==). I was pretty sure, because I'm populating the database myself at the start of the test, and I tried it repeatedly after deleting the app from the simulator and re-running.
You can open the Core Data database in any SQLite browser - Base is a good one.
Go find the iPhone Simulator directory, go to the application directory inside it, and look for the SQLite database. Open it in a SQLite browser and look through the tables until you find the one representing the entity in question, and see how many objects you have.
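For example, with the sqlite3 command-line tool instead of a GUI browser (a sketch: the store path and the table/column names below are stand-ins; Core Data names each entity's table Z<ENTITYNAME>, and yours will depend on the app):

```shell
# Fabricate a tiny stand-in store to show the kind of query you would
# run against the real .sqlite file found in the simulator directory.
DB=/tmp/demo_store.sqlite
rm -f "$DB"
sqlite3 "$DB" "CREATE TABLE ZUNIT (ZUNITTYPE TEXT);"
sqlite3 "$DB" "INSERT INTO ZUNIT VALUES ('foo'),('foo'),('bar');"
# Count objects per unitType -- unexpected duplicates show up immediately:
sqlite3 "$DB" "SELECT ZUNITTYPE, COUNT(*) FROM ZUNIT GROUP BY ZUNITTYPE;"
```

Running the GROUP BY against the real store tells you directly whether the 40 results come from 40 rows in the database or from the fetch itself.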