Reducibility: E_TM undecidable - reduction

I want to ask about reduction.
In the proof that E_TM is undecidable, the definition of M1 is:
1. If x != w, reject.
2. If x == w, run M on input w and accept if M does.
In many proofs I come across I see that second line, but I cannot understand how I can carry it out, because I do not know whether M will stop.
I would be more than happy to know where I am wrong.
Thanks.

Accepting with a Turing Machine means to stop in an accepting configuration. So if you simulate M and it accepts, it will stop and you will be able to notice this.
If M does not stop, this means that it does not accept w. In this case you should not accept either. One way of not accepting is running forever. So if your simulation of M runs forever, this makes you run forever, which is just what you should do.
Thus there is no need for you to know whether M will stop. Note that this does not work the other way around: "run M on input w and accept if M rejects" would not be computable, because you would need to detect infinite computations and accept their input.
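To make the construction concrete, here is a rough Python-flavoured sketch (an illustration only, not part of the original proof; M is assumed to be given as a Python function that returns True when the underlying machine accepts, and that may loop forever otherwise):

def make_M1(M, w):
    # Build M1 from <M, w>, exactly as in the proof sketch above.
    def M1(x):
        if x != w:
            return False   # 1. if x != w, reject
        return M(w)        # 2. if x == w, run M on w and accept if M does;
                           #    this call may never return, which is fine,
                           #    because M1 only has to recognise its language
    return M1

# If M accepts w, then L(M1) = {w}, which is non-empty;
# if M loops on w, then M1 accepts nothing and L(M1) is empty.
always_accept = lambda s: True        # stand-in machine that accepts everything
M1 = make_M1(always_accept, "w")
print(M1("w"), M1("something else"))  # True False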


How to evaluate the square of a binomial in Coq?

I have reached a point in a structural induction proof where I have two equivalent algebraic expressions on different sides of the equation. One of them is just the expanded form of the other. I hoped reflexivity. would pick up on that, but apparently I still need some simplification. I'm not sure, however, which command can help me here.
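For concreteness, the kind of identity involved (with placeholder variables, not my actual terms) is the square of a binomial:

(a + b) * (a + b) = a*a + 2*a*b + b*b

where the right-hand side is just the expanded form of the left-hand side.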
It sounds like a job for the ring tactic (which requires Require Import Ring). In case this is about nat (which does not form a ring) you might be able to convert your goal to Z using the zify tactic (which should be included in Require Import Lia). In case your term is linear (does not multiply variables) you can also try lia instead of ring.
It is hard to tell why reflexivity does not work without looking at your code. You might need a few rewriting steps before reflexivity can do the job. Note that sometimes two expressions might look definitionally equal when they are printed even though they actually aren't. For instance, there could be invisible implicit arguments that are not definitionally equal and that are preventing unification. It might help to use Set Printing All to double check if you are missing such issues.

In Paxos, what happens if a proposer goes down after its proposal is rejected?

In this figure, the proposal of X is rejected.
At the end of the timeline, S1 and S2 accept X while S3, S4 and S5 accept Y. Proposer X is now supposed to re-send the proposal with value Y.
But what happens if proposer X goes down at that time? How do S1 and S2 eventually learn the value Y?
Thanks in advance!
It is a little hard to answer this from the fragment of a diagram that you've shared since it is not clear what exactly it means. It would be helpful if you could link to the source of that diagram so we can see more of the context of your question. The rest of this answer is based on a guess as to its meaning.
There are three distinct roles in Paxos, commonly known as proposer, acceptor and learner, and I think it aids understanding to divide things into these three roles. The diagram you've shared looks like it is illustrating a set of five acceptors and the messages that they have sent as part of the basic Synod algorithm (a.k.a. single-instance Paxos). In general there's no relationship between the sets of learners and acceptors in a system: there might be a single learner, or there might be thousands, and I think it helps to separate these concepts out. Since S1 and S2 are acceptors, not learners, it doesn't make sense to ask about them learning a value. It is, however, valid to ask about how to deal with a learner that didn't learn a value.
In practical systems there is usually also another role of leader which takes responsibility for pushing the system forward using timeouts and retries and fault detectors and so on, to ensure that all learners eventually learn the chosen value or die trying, but this is outside the scope of the basic algorithm that seems to be illustrated here. In other words, this algorithm guarantees safety ("nothing bad happens") but does not guarantee liveness ("something good happens"). It is acceptable here if some of the learners never learn the chosen value.
The leader can do various things to ensure that all learners eventually learn the chosen value. One of the simplest strategies is to get the learned value from any learner and broadcast it to the other learners, which is efficient and works as long as there is at least one running learner that's successfully learned the chosen value. If there is no such learner, the leader can trigger another round of the algorithm, which will normally result in the chosen value being learned. If it doesn't then its only option is to retry, and keep retrying until eventually one of these rounds succeeds.
In this figure, the proposal of X is rejected.
My reading of the diagram is that it is an "accept request" that is rejected. Page 5, paragraph 1, of Paxos Made Simple describes this message type.
Proposer X is now supposed to re-send the proposal with value Y.
The diagram does not indicate that. Only if Y had been seen in response to the blue proposer's initial propose messages would the blue proposer have to choose Y. Yet the blue proposer chose X as the value in its "accept request". If it is properly following Paxos it could not have "seen Y" in response to its initial propose messages; if it had seen it, it would have had to choose it, and so it wouldn't have sent X.
In order to really know what is happening, you would need to know what responses were seen by each proposer. We cannot see from the diagram what values, if any, were returned in response to the first three blue propose messages. We do not see in the diagram whether X had previously been accepted at any node. We don't know whether the blue proposer was "free to choose" its own X or had to use an X that was already accepted at one or more nodes.
But what happens if proposer X goes down at that time?
If the blue proposer dies then this is not a problem. The green proposer has successfully fixed the value Y at a majority of the nodes.
How do S1 and S2 eventually learn the value Y?
The more interesting scenario is what happens if the green proposer dies. The green proposer may have sent its accept request messages containing Y and died immediately afterwards. As three of those messages succeed, the value Y has been fixed, yet the original proposer may not be alive to see the accept response messages. For any further progress to be made, a new proposer needs to send a new propose message. As three of the nodes will reply with Y, the new proposer will choose Y as the value of its accept request message. This will be sent to all nodes, and if all messages get through, and no other proposer interrupts, then S1 and S2 will become consistent.
The essence of the algorithm is collaboration. If a proposer dies, the next proposer will collaborate and adopt the value of the highest-numbered proposal that has already been accepted, if any exists.
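To illustrate that rule, here is a small Python sketch (my own illustration, not taken from any real Paxos implementation; a "promise" is assumed to be a dict carrying the number and value of the highest proposal the acceptor has already accepted, or None if it has accepted nothing):

def choose_value(promises, own_value):
    # A new proposer must adopt the value of the highest-numbered proposal
    # reported back in the promises; only if no acceptor reports an accepted
    # value is it free to propose its own.
    accepted = [p for p in promises if p["accepted_number"] is not None]
    if not accepted:
        return own_value
    best = max(accepted, key=lambda p: p["accepted_number"])
    return best["accepted_value"]

# Example: three of five acceptors already accepted Y under proposal number 2,
# so a recovering proposer that wanted to propose X ends up re-proposing Y,
# and S1 and S2 eventually become consistent.
promises = [
    {"accepted_number": None, "accepted_value": None},
    {"accepted_number": None, "accepted_value": None},
    {"accepted_number": 2, "accepted_value": "Y"},
    {"accepted_number": 2, "accepted_value": "Y"},
    {"accepted_number": 2, "accepted_value": "Y"},
]
print(choose_value(promises, "X"))  # prints Y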

Searching for max-min in MATLAB

I am writing MATLAB code where I calculate a max-min.
I am using MATLAB's fminimax to solve the following problem:
ki = G(i,:);   % i-th row of the system matrix G
ki(i) = 0;     % exclude the diagonal term G(i,i) from the sum
fs(i) = -((G(i,i)*pt(i) + sum(ki.*pt) + C1) - (C2*(sum(ki.*pt) + C1)));  % i-th objective, negated for fminimax
G is the system matrix; pt is the optimization variable.
When the actual system matrix is used, fminimax stops after one iteration and returns the initial value of pt, no matter what that initial value is, i.e. no solution is found (the initial value is called X0 in the documentation). The system has the following magnitudes: G is on the order of 1e-11, pt on the order of 1e-1, and C1 on the order of 1e-14.
When I try a randomly generated test matrix and different parameters, fminimax finds a solution for the problem and everything works fine: G on the order of 1e-2, pt on the order of 1e-2, and C1 on the order of 1e-7.
I tried to scale the actual system: fminimax then lasted more than one iteration; however, it still returned the initial value of pt, i.e. it could not find a solution.
I tried to change the tolerances of fminimax using options (StepTolerance, OptimalityTolerance, ConstraintTolerance and FunctionTolerance). There was no impact at all; still no solution.
I thought that the problem might be that the precision of fminimax is not that high, or that it is not suitable for solving this problem. I also find it slow.
I downloaded CPLX, and I wanted to transform the max-min problem into a linear program, using a method I found in a book. However, when I tried my code on a simple minimax problem it did not give the same solution.
I thought of using CVX, for example, but the problem is not convex.
What might be the problem?
P.S. The system matrix G has different realizations; I tried some of them. However, fminimax responds in the same way for all of them, i.e. it was not able to find an adequate solution.
I am not convinced that the optimization solvers are broken. If the problem is nonconvex, then there can be multiple local minimizers. Given the information you have provided, we have no way of knowing whether your initial condition already sits at (or very near) one of them.
The first place you need to start is by getting more information from the optimization exit condition... Did it finish because it hit the iteration limit? (I hope not since it isn't doing many iterations)... Did it finish because a tolerance was hit (e.g. the function did not change by more than xxxx)? Or perhaps it could not find a feasible solution? (I don't know if you have any constraints that need to be met).
More than likely, I would guess that you are starting at a local minimizer without realizing it. So you need to determine whether you are indeed at a local minimizer by looking at the Jacobian of the objective functions evaluated at your initial guess. Either calculate it analytically or use a finite-difference approximation.
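As a generic illustration of that last suggestion (written in Python rather than MATLAB, with a placeholder objective instead of the questioner's actual fs), a central finite-difference estimate of the gradient at the initial guess looks like this; if it comes out essentially zero, the solver may simply be reporting that X0 already satisfies its first-order optimality test:

import numpy as np

def finite_difference_gradient(f, x0, h=1e-6):
    # Central-difference estimate of the gradient of f at x0.
    x0 = np.asarray(x0, dtype=float)
    grad = np.zeros_like(x0)
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = h
        grad[i] = (f(x0 + step) - f(x0 - step)) / (2 * h)
    return grad

f = lambda x: np.max(x**2)          # placeholder max-type objective
x0 = np.array([0.1, -0.2, 0.05])    # placeholder initial guess
print(finite_difference_gradient(f, x0))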

How to simulate block-diagram-based (Simulink-like) time-domain models?

I have been wondering this for some time now and was curious about the most logical implementation for simulating block-diagram-based time-domain models.
I don't know if that term is correct, but if you know Simulink you know what I mean.
The reason I am wondering is that I have made some simulation models myself now, but I always get stuck when I am creating feedback loops. Most of the time this is not a problem when I am working with blocks that I can translate to the state-space domain, but when I get more complex elements this becomes a problem.
Practically, I cannot seem to get my head around how Simulink solves this.
I had thought that, for every time sample, every block calculates its current state and passes that to the connected blocks for the next time sample. However, when you have:
->A->B->C->D
  ^-----|
i.e. four blocks with an external input to A and a feedback from C to A, it takes 3 time samples for the signal to reach C, after which C will start emitting to A again. Before that, C would have been emitting its initial value. It takes 4 time samples to reach D.
When A, B, C and D are simple Laplace-like elements, this is easily combined into a single state-space model or transfer function from A to D; however, the results will be monumentally different, because in that combined form it takes only 1 time sample to get from A to D and from C to A. I know that the transfer function requires you, in general, to specify initial conditions, but these conditions are not translatable (or I cannot see how) to the block-diagram solution.
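Concretely, the per-sample scheme I have in mind looks something like this rough Python sketch (the gains are placeholders, not a real model): every block computes its new output from the values its inputs produced on the previous sample, so each connection adds one sample of delay.

def simulate(input_signal):
    # Outputs from the previous sample double as the initial conditions.
    a = b = c = d = 0.0
    history = []
    for u in input_signal:
        # Compute all new outputs from last sample's values, then commit them
        # together, so the update order of the blocks does not matter.
        a_next = u + c          # A sums the external input and C's feedback
        b_next = 0.5 * a        # B and C are placeholder gains
        c_next = 0.5 * b
        d_next = c              # D just forwards C's previous output
        a, b, c, d = a_next, b_next, c_next, d_next
        history.append(d)
    return history

print(simulate([1.0] * 10))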
How do you tackle this problem in a generic way?

What's the difference between NOT second-preimage resistant and NOT collision resistant? [closed]

By definition, Not 2nd-preimage resistant means: there exists at least one x (which is known) such that it is easy to find another x', such that h(x) = h(x').
Meanwhile, not collision resistant indicates: it is easy to find at least one pair (x, x') such that h(x) = h(x').
I don't see any difference here; can anyone tell? Or did I give the wrong definitions?
Also, it is said that "not collision resistant does not necessarily mean not 2nd-preimage resistant"; why is that?
Putting this into another answer because it's just too much to type for a comment.
The definition of 2nd-preimage resistant is: you have x (and thus h(x)), and can't create a different x' with the same hash.
The definition of preimage resistant (without "second"!) is: you have only h(x), and can't create any matching x.
And the definition of collision resistant is: you have nothing given and may choose any x and x' (and thus h(x)) yourself, yet you still can't produce a colliding pair.
If you use the hash to sign a plaintext message, you need 2nd-preimage resistance, but not collision resistance. It doesn't matter to you if someone can find two colliding messages whose hash is different from yours, but you want to make sure no one is able to craft a different message that has your hash, even if they know your plaintext.
If you use the hash to store hashed passwords, you don't care about collision resistance, and you don't care about 2nd-preimage resistance; preimage resistance is all you need. If an attacker already knows one password, you don't really care whether he can use it to find a different one that works as well.
So these were two examples where collision resistance is not required, but preimage-resistance or 2nd-preimage-resistance is.
As to why "not collision resistant does not necessarily mean not 2nd-preimage resistant": consider the hash function "if x has fewer than 24 bits, then h(x) = 0, else h(x) = sha256(x)". This is very obviously not collision resistant (choose any two different inputs shorter than three characters), but, as long as your text is longer than that, this function is preimage-resistant and 2nd-preimage-resistant (assuming SHA-256 hasn't been broken yet).
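Here is a small Python sketch of that toy construction (my own illustration of the idea, using the real SHA-256 from hashlib):

import hashlib

def toy_hash(data: bytes) -> bytes:
    # Inputs shorter than 24 bits (fewer than 3 bytes) all hash to zero,
    # so collisions are trivial; longer inputs go through SHA-256 unchanged.
    if len(data) < 3:
        return b"\x00" * 32
    return hashlib.sha256(data).digest()

# Trivial collision: two different short inputs with the same digest.
print(toy_hash(b"a") == toy_hash(b"b"))        # True
# But finding a (second) preimage for a long input still means breaking SHA-256.
print(toy_hash(b"a longer message").hex())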
2nd-preimage resistant means there is no (easy) way to find a second input x' with the same hash when you are given a specific x (and hence h(x)).
Not collision resistant means there is an (easy) way to find some pair (x, x') with h(x) = h(x'), where the attacker is free to pick both.
So the second is the weaker attack. Think about what happened to MD5 a while ago: there is an algorithm that finds pairs of inputs that produce the same output. But this works only for specifically constructed input, not for an arbitrary given input. So, while it is possible to find messages that collide, the generic problem "x is some specific message; find a second message that has the same MD5 as x" is not solved yet.