I have found that many problems listed as satisfiable in the SATLIB SAT instances are in fact unsatisfiable, because they contain one or more clauses that have an exact anticlause against them.
For instance, the download link below for SATLIB CNF instances with 20 variables and 91 clauses (1000 instances, all satisfiable)
has, in its very first file, clauses 7 and 86 as exact inverses of each other, so this formula can never be satisfiable.
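A small script along these lines can scan a DIMACS .cnf file and report any pair of clauses that are exact inverses of each other (a rough sketch, not part of the original check; the filename uf20-01.cnf is just an example from that set, and it assumes one clause per line):

    # Sketch: find clause/anticlause pairs in a DIMACS CNF file.
    # Assumes one clause per line, terminated by 0, as in the SATLIB uf20 files.

    def read_clauses(path):
        clauses = []
        with open(path) as f:
            for line in f:
                tokens = line.split()
                # skip comments, the problem line, and trailing '%' / '0' markers
                if not tokens or tokens[0] in ("c", "p", "%"):
                    continue
                lits = [int(x) for x in tokens if x != "0"]
                if lits:
                    clauses.append(frozenset(lits))
        return clauses

    clauses = read_clauses("uf20-01.cnf")   # example filename from the 20-variable set
    for i in range(len(clauses)):
        for j in range(i + 1, len(clauses)):
            if frozenset(-lit for lit in clauses[i]) == clauses[j]:
                print(f"clause {i + 1} and clause {j + 1} are exact inverses")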
I have already posted a question here regarding this but have not received any reply so far: Old question on P=NP
Any comments at all are really welcome, as I would really like to know whether these benchmark problems are still used for competitions; if they are, then those competitions are useless indeed. So, my question is:
Am I correct in identifying these errors, exposing them to the public, and asking for comments? Also, are these findings of any use?
I have sent a few emails to the benchmark website's admin asking for a reply, but getting no response after two months is discouraging.
I could not find a proper definition of "anticlause", so I have to speculate a bit.
Two clauses of size > 1 in which each literal is inverted are not a contradiction by themselves. Given the clauses
1 2 3 0
-1 -2 -3 0
We can find multiple solutions that satisfy both clauses as we only need to fulfil one literal per clause. Some partial solutions are
1 -2
-1 2
2 -3
...
For these clauses we only need to select one positive and one negative literal.
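A brute-force check confirms this (a quick sketch; over 3 variables, 6 of the 8 assignments satisfy both clauses at once):

    # Enumerate all assignments over x1..x3 and count those satisfying
    # both the clause and its exact "anticlause".

    from itertools import product

    clause     = [1, 2, 3]      # x1 or x2 or x3
    anticlause = [-1, -2, -3]   # not x1 or not x2 or not x3

    def satisfied(cls, assignment):
        # assignment maps variable number -> True/False
        return any(assignment[abs(lit)] == (lit > 0) for lit in cls)

    models = [dict(zip([1, 2, 3], values))
              for values in product([True, False], repeat=3)]
    both = [m for m in models if satisfied(clause, m) and satisfied(anticlause, m)]
    print(len(both))   # 6 -> the pair of clauses is far from a contradiction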
I am reading Microservices Patterns by Chris Richardson. In the book he gives an example in section 5.2.1, "The problem with fuzzy boundaries", which I was not able to understand.
Here is the link to read online. Can someone please look into section 5.2.1 and help me understand what exactly the issue with fuzzy boundaries is?
I especially did not clearly understand the statement below:
In this scenario, Sam reduces the order total by $X and Mary reduces it by $Y. As a result, the Order is no longer valid, even though the application verified that the order still satisfied the order minimum after each consumer’s update
In the above statement, can someone please explain why the Order is no longer valid?
The business problem that Chris Richardson is using in this example assumes that (a) the system should ensure that orders are always valid, and (b) that valid orders exceed some minimum amount.
The amount checked against the minimum is the sum of the order_items associated with a specific order.
The "fuzzy boundary" issue comes about because the code in question allows Sam and Mary to manipulate order_items directly; in other words, writing changes to order items does not lock the other items of the order.
If Sam and Mary were forced to acquire a lock on the entire order before validating their changes, then you wouldn't have a problem; the second person would see the changes made by the first.
Alternatively, locking at the level of the order_item would be fine if you weren't trying to ensure that the set of order items satisfy some property. Take away the constraint on the total order cost, and Sam and Mary only need to get locks on their specific item.
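Here is a minimal sketch of that lost-update interleaving (not from the book; the $100 total, $25 minimum, and $40 discounts are made up, and the concurrency is simulated by taking both snapshots before either write):

    # Sam and Mary each validate against a stale snapshot of the order total,
    # so both checks pass even though the combined result breaks the invariant.

    ORDER_MINIMUM = 25
    order_items = {"item-1": 60, "item-2": 40}   # total = 100

    def current_total():
        return sum(order_items.values())

    # both read the order before either one writes
    sam_snapshot = current_total()    # 100
    mary_snapshot = current_total()   # 100

    # Sam reduces item-1 by $40; validation uses his stale snapshot
    assert sam_snapshot - 40 >= ORDER_MINIMUM    # 60 >= 25, looks valid
    order_items["item-1"] -= 40

    # Mary reduces item-2 by $40; her snapshot is also stale
    assert mary_snapshot - 40 >= ORDER_MINIMUM   # 60 >= 25, looks valid
    order_items["item-2"] -= 40

    print(current_total())   # 20, below the $25 minimum: the order is no longer valid

Locking the whole order (or re-validating against the actual current total inside one transaction) is exactly what prevents this.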
Last week I did a phone interview and got stuck on one question:
Bank 1 has 5 tellers, each serving one customer at a time
independently; Bank 2 has 5 tellers, sharing a queue of customers to
serve. Which bank do you prefer? Why?
I don't know what the interviewer wanted to learn from this question. All I could do was say that Bank 2 is better, since most banks have only one queue, and a single queue ensures no one waits too long if one teller gets stuck.
But the interviewer did not seem satisfied.
Does anyone know the best answer to this question?
Your answer is not considering the real question the interviewer is asking - "How do you think about this type of problem?". Your answer given is "other people do it this way, so do it that way." That is a cop-out, which is why it was unsatisfactory. Instead, consider that they are comparing single-threading and multi-threading as operations. Discuss the advantages and disadvantages of each. Discuss the reasons why you would prefer one over the other based upon technical concerns. You only addressed one edge case - one teller gets "stuck". What about optimizing wait times, considering types of tasks performed at each station, etc?
Interviewers care about how you think, not about the answer you give.
With Bank 1 you have 5 tellers and 5 lines, one for each teller. That means if 5 people get in line for the first teller, they have to wait and be processed one at a time by that teller, while the other 4 tellers do nothing. With Bank 2 you have 5 tellers and 1 line. If 5 people all get in line, they are dispersed to the five tellers and all helped at the same time. So Bank 2 is the more efficient design.
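If you want to make that argument quantitative, a rough simulation helps (my own sketch; the arrival and service rates are made-up parameters). It compares the average wait with 5 independent lines chosen at random versus one shared line feeding 5 tellers:

    # Compare Bank 1 (5 separate lines, random line choice, no switching)
    # with Bank 2 (one shared FIFO line feeding 5 tellers).

    import heapq, random

    random.seed(1)
    N_CUSTOMERS, N_TELLERS = 10_000, 5
    arrivals, t = [], 0.0
    for _ in range(N_CUSTOMERS):
        t += random.expovariate(4.5)                  # ~4.5 arrivals per minute
        arrivals.append(t)
    services = [random.expovariate(1.0) for _ in range(N_CUSTOMERS)]   # ~1 minute each

    def shared_queue():
        free_at = [0.0] * N_TELLERS                   # next-free time of each teller
        heapq.heapify(free_at)
        total_wait = 0.0
        for a, s in zip(arrivals, services):
            start = max(a, heapq.heappop(free_at))    # next customer goes to earliest free teller
            total_wait += start - a
            heapq.heappush(free_at, start + s)
        return total_wait / N_CUSTOMERS

    def separate_queues():
        free_at = [0.0] * N_TELLERS
        total_wait = 0.0
        for a, s in zip(arrivals, services):
            k = random.randrange(N_TELLERS)           # customer picks a line at random
            start = max(a, free_at[k])
            total_wait += start - a
            free_at[k] = start + s
        return total_wait / N_CUSTOMERS

    print("average wait, shared line:   ", round(shared_queue(), 2))
    print("average wait, separate lines:", round(separate_queues(), 2))

With the load close to saturation, the shared line's average wait comes out much lower, which is the quantitative version of the argument above.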
What I was trying to do is test whether OptaPlanner is suitable for our requirements.
So I created our own dataset of ~280 courses.
I "believe" the XML I prepared is valid for the sample, since it loads and OptaPlanner can start solving it.
However, right during the CH (Construction Heuristic) phase, it finds some (-220) hard constraint violations, specifically for the rule "conflictingLecturesDifferentCourseInSamePeriod".
And no matter how long it runs, those violations remain.
But when I check the violations, they do not look like real violations.
They involve two different courses in the same period, but in different rooms and with different teachers, so there should be no violation in this scenario.
Also, when I scan the schedule by eye, I don't see any conflict.
So I am lost right now...
Here is a link to the XML dataset.
Actually, I found the problem; well, it is not a problem in the first place :)
Maybe the rule name is a little misleading.
Anyway, the problem is actually overcrowded curricula. We had curricula with 30-40 courses, which makes 80-100 lectures, and for a 45-hour week it is impossible to fit everything.
And I assume the rule "conflictingLecturesDifferentCourseInSamePeriod" checks "different" courses of the same curriculum.
So when I reduced the course counts by splitting each curriculum into 4, the violations dropped to 0.
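In hindsight, a simple pigeonhole check would have flagged this right away. Here is a sketch with made-up numbers (not OptaPlanner's API): any curriculum whose total lecture count exceeds the number of periods in the week cannot avoid putting two of its lectures in the same period.

    # Feasibility check: lectures per curriculum vs. available periods per week.

    PERIODS_PER_WEEK = 45   # from the 45-hour week mentioned above

    # Hypothetical input: curriculum name -> list of weekly lectures per course.
    curricula = {
        "Curriculum-A": [3, 2, 3, 2, 3] * 7,   # 35 courses, 91 lectures
        "Curriculum-B": [3, 2, 2, 2] * 3,      # 12 courses, 27 lectures
    }

    for name, lectures_per_course in curricula.items():
        total = sum(lectures_per_course)
        if total > PERIODS_PER_WEEK:
            print(f"{name}: {total} lectures cannot fit into {PERIODS_PER_WEEK} periods "
                  f"without same-period conflicts")
        else:
            print(f"{name}: {total} lectures can fit into {PERIODS_PER_WEEK} periods")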
I believe this will be valuable info for anyone who couldn't understand the purpose of this rule.
Thanks.
I have developed a system that allows visitors to submit typo corrections for my blog. It works by having a small client-side app which then sends unified diffs to a server. Behind that, I have an interface which allows me to see all diffs in a nice graphical way, sort them, etc.
However I am thinking that as time passes, many visitors will submit corrections for the same things before I have time to fix them. So I would need a way to group similar or identical diffs together.
Identical diffs are easy enough. But there might be people who fix errors differently, e.g. using American or British spellings, different rules for punctuation, varying understandings of unclear phrases, that kind of thing. Grouping similar diffs would be tremendously helpful.
Are there techniques, algorithms, or tools that are specifically designed or can be used to compute the similarity of diffs?
I believe you have two problems to solve: 1. recognizing fixes for the same text (e.g. the same typo location), 2. potentially removing those with the same or nearly equal solutions, and at least grouping all the patches related to that location.
Problem 1: The unified diff format is somewhat OK, as it gives you the lines, but a word-level or character-level diff (for example, counting each word as a line, as wdiff does) might be more precise and help you group the patches more accurately.
Problem 2: If the patches are identical, as you noted, it is trivial; if they are different, solving problem 1 has already done much of the work. You can of course apply normalization such as removing inflected word endings (dropping 's', 'ing' and so on at the end of words, for example) or lower-casing before comparing the replacement parts of the unified diffs, which helps group nearly identical solutions together.
Problem 1 is the problem posed by the integration or merging of patches. Problem 2 is more relevant to your particular case.
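As a concrete illustration, here is a small sketch (the submitted fixes and the deliberately crude normalizer are made up) that assumes fixes have already been bucketed by location (problem 1) and then scores the similarity of their normalized replacement text (problem 2) using the standard library:

    import difflib
    import re

    def normalize(text):
        # lower-case and crudely strip common English endings, as suggested above
        words = re.findall(r"[a-z']+", text.lower())
        return " ".join(re.sub(r"(ing|ed|s)$", "", w) for w in words)

    def similarity(fix_a, fix_b):
        return difflib.SequenceMatcher(None, normalize(fix_a), normalize(fix_b)).ratio()

    # three hypothetical replacement texts submitted for the same typo location
    fixes = ["its behaviour is analysed",
             "its behavior is analyzed",
             "the behaviour was analysed"]

    print(similarity(fixes[0], fixes[1]))   # close to 1.0: only spelling variants differ
    print(similarity(fixes[0], fixes[2]))   # lower, but still clearly similar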
Maybe you could adopt the Damerau-Levenshtein algorithm. It is used to calculate the distance between two strings.
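For reference, a compact sketch of the restricted Damerau-Levenshtein (optimal string alignment) distance, which counts insertions, deletions, substitutions, and transpositions of adjacent characters:

    def osa_distance(a, b):
        # dynamic-programming table: d[i][j] = distance between a[:i] and b[:j]
        d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(len(a) + 1):
            d[i][0] = i
        for j in range(len(b) + 1):
            d[0][j] = j
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution
                if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)   # adjacent transposition
        return d[len(a)][len(b)]

    print(osa_distance("recieve", "receive"))   # 1: one adjacent transposition
    print(osa_distance("colour", "color"))      # 1: one deletion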
The question is pretty simple, but I couldn't find an answer to it... Basically, my application generates filenames with md5(time());.
What are the chances, if any, that using this technique I'll get 2 equal results?
P.S. Since my question title says hashes, not one exact hash: what are the chances, again, of generating equal results with other hash types such as sha1();, sha512();, etc.?
Thanks in advance!
My estimation is that it is unsafe due to possible changes to the clock by humans and by other processes such as NTP, which FrankH has kindly noted. I highly recommend using a cryptographically secure RNG (random number generator) if your framework allows it.
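For example (the question uses PHP, but the idea is language-independent; this is just a Python sketch): derive the filename from secure random bytes instead of the clock.

    # Filenames built from a CSPRNG: two requests in the same second cannot
    # collide except with negligible probability.

    import secrets

    def random_filename(extension=".png"):
        return secrets.token_hex(16) + extension   # 128 bits of randomness

    print(random_filename())   # e.g. a 32-character hex name, different every call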
Equal results are unlikely to result from this; you can validate that yourself by checking the uniqueness of md5(0) ... md5(INT32_MAX), since that's the total range of a time_t. I don't think there are collisions in that input space for any of the hashes you've named.
Predictable results are another matter, though. By choosing time() as your input supplier, you restrict yourself to, well, one unique hash per second, no more than 86400 per day, ...
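To see the practical consequence (a Python sketch of the same idea; the original is PHP): two requests handled within the same second hash the same input and therefore get the same "unique" filename. The hash function is not the weak point, the input is.

    import hashlib, time

    def filename_from_time():
        second = str(int(time.time()))              # same string for the whole second
        return hashlib.md5(second.encode()).hexdigest()

    a = filename_from_time()
    b = filename_from_time()    # almost certainly within the same second
    print(a == b)               # True: identical "unique" filenames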