Serial port parity under Windows XP

Are there any known problems with using a serial port set for 8 data bits, 1 stop bit and even parity, under Windows XP?

I suspect this is a case of "my code is failing and I really hope there is a problem with Windows XP." Most likely it's a case of "select is broken" ("this isn't working, but it can't possibly be my code").
If you're having problems, post the code and the real issue so we can help you. You'll also have better luck that way than asking for links to specific issues with Windows XP and the serial port (I have no idea what issues actually exist).
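For reference, the standard way to ask for 8-E-1 under Win32 is through a DCB via GetCommState/SetCommState. A minimal sketch follows; the port name, baud rate, and error handling are placeholder choices, not part of the original question:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "COM1" and CBR_9600 are just example values. */
    HANDLE h = CreateFileA("COM1", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    DCB dcb = {0};
    dcb.DCBlength = sizeof(dcb);
    if (!GetCommState(h, &dcb)) {      /* start from the current settings */
        fprintf(stderr, "GetCommState failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    dcb.BaudRate = CBR_9600;
    dcb.ByteSize = 8;                  /* 8 data bits */
    dcb.Parity   = EVENPARITY;         /* even parity */
    dcb.fParity  = TRUE;               /* enable parity error checking */
    dcb.StopBits = ONESTOPBIT;         /* 1 stop bit */

    if (!SetCommState(h, &dcb)) {
        fprintf(stderr, "SetCommState failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    CloseHandle(h);
    return 0;
}

If that sequence succeeds and communication is still garbled, the problem is more likely a framing or baud mismatch with the device than an XP-specific parity bug.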

Porting word2vec to RISC-V: potential proxy kernel issue?

We are trying to port word2vec to RISC-V. Towards this end, we have compiled word2vec with a cross compiler and are trying to run it on Spike.
The cross compiler builds the standard RISC-V benchmarks, and they run without failure on Spike, but when we use the same setup for word2vec, it fails with "bad syscall #179!". We tried two different versions; both fail around the same place, a minute or two into the run, while executing the same instructions. After going through the loop several hundred thousand times, we see C1 and C2 printed and then the crash. We think this is more of a Spike/pk issue than a word2vec issue.
Has anyone had similar experiences when porting code to RISC-V? Any ideas on how we might track down whether the proxy kernel is at fault?
A related question is about getting gdb working with Spike; I'll post that separately.
Thank you.
The riscv-pk does not support all possible syscalls. You'll need to track down which syscall it is and decide whether you can implement it in riscv-pk or whether you need to move to running on a different kernel. For example, riscv-pk does not support any threading-related syscalls, as multithreaded kernel support is explicitly a riscv-pk non-goal.
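As a first step, you can map the rejected number back to a name using your cross toolchain's headers. RISC-V Linux uses the asm-generic syscall table, where, if I'm reading it correctly, 179 is sysinfo, so that would be my first suspect. A quick sketch (the riscv64-unknown-linux-gnu toolchain is an assumption; any target sharing the asm-generic numbering prints the same value):

/* Compile with e.g. riscv64-unknown-linux-gnu-gcc and run it, or just
   inspect the preprocessed output, to confirm which name maps to the
   rejected syscall number. */
#include <stdio.h>
#include <sys/syscall.h>

int main(void)
{
#ifdef SYS_sysinfo
    /* Expect 179 on riscv64, per the asm-generic table. */
    printf("sysinfo = syscall #%d\n", (int)SYS_sysinfo);
#endif
    return 0;
}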
I would also be wary of using riscv-pk in general. It's a very simple, thin kernel, which is great for running newlib user applications at first, but it lacks rigorous testing and validation, so running applications that stress the virtual memory system, rely on lots of syscalls (ioctl and friends), or expect a more glibc-like environment may prove problematic.

How to seed rand() in IBM Swift Sandbox?

I am new to Stack Overflow, so please correct me if there is a better way to post a question that is a specific case of an existing question.
Alberto Barrera answered
How does one seed the random number generator in Swift?
with
let time = UInt32(NSDate().timeIntervalSinceReferenceDate)
srand(time)
print("Random number: \(rand()%10)")
which generally works, but when I try it in the IBM Swift Sandbox it gives the same number sequence every run, at least over the space of half an hour.
import Foundation
import CoreFoundation
let time = UInt32(NSDate().timeIntervalSinceReferenceDate)
srand(time)
print("Random number: \(rand()%10)")
At the moment, every run prints 5.
Has anyone found a way to do this in the IBM Sandbox? I have found that random() and srandom() produce a different number sequence, but it is likewise identical on every run. I haven't found arc4random() in Foundation, CoreFoundation, Darwin, or Glibc.
As an aside, I humbly suggest someone with reputation above 1500 creates a tag IBM-Swift-Sandbox.
This was an issue with the way we implemented server-side caching in the Sandbox; non-deterministic code would continually return the same answer even though it should not have. We've disabled it for now, and you should be getting different results with each run. We're currently working on better mechanisms to ensure the scalability of the Sandbox.
I'll see about that tag, as well!
srand is working as expected. If you replace NSDate().timeIntervalSinceReferenceDate in let time = UInt32(NSDate().timeIntervalSinceReferenceDate) with a different literal number each time (changing the seed by hand), the output changes.
Maybe this is a caching issue: the Sandbox doesn't see any changes in the code, so it doesn't send it for recompilation. :)
I don't know what is going on, but today it is working, so I guess the question is answered:
srand(UInt32(NSDate().timeIntervalSinceReferenceDate))
works fine.
(I think something must have changed. It was behaving the same way (generating the same number with repeated attempts) on two different computers for about 10 days... Bizarre.)

Guava Bloom Filter does not support large insertions?

I was using BloomFilter in Guava v11.0.1, and it seems like I am getting an exception when the number of insertions is large. I tried 10 million expected insertions with 0.001 fpp, and it failed.
java.lang.IllegalArgumentException: Number of bits must be positive
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at com.google.common.hash.BloomFilterStrategies.checkPositiveAndMakeMultipleOf64(BloomFilterStrategies.java:72)
at com.google.common.hash.BloomFilterStrategies.access$000(BloomFilterStrategies.java:18)
at com.google.common.hash.BloomFilterStrategies$From128ToN.withBits(BloomFilterStrategies.java:37)
at com.google.common.hash.BloomFilter.create(BloomFilter.java:192)
at com.ipg.collection.BloomFilterWritable.impl(BloomFilterWritable.java:43)
at com.ipg.collection.BloomFilterWritable.put(BloomFilterWritable.java:62)
at com.ipg.prophet.twitter.twitflow.archive.UnzipTweetsProcessAndUpload$ProcessorConsumer.process(UnzipTweetsProcessAndUpload.java:107)
at com.ipg.prophet.twitter.twitflow.archive.UnzipTweetsProcessAndUpload$ProcessorConsumer.run(UnzipTweetsProcessAndUpload.java:84)
at java.lang.Thread.run(Thread.java:662)
I would think it should at least support this many insertions at this fpp, shouldn't it?
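For reference, the textbook sizing formula m = -n ln(p) / (ln 2)^2 puts this request nowhere near any 32-bit limit, which is why I suspect a bug rather than a real capacity ceiling. A quick back-of-the-envelope check in C (this is the standard formula, not necessarily Guava's exact internals):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double n = 10000000.0;  /* expected insertions */
    double p = 0.001;       /* desired false-positive probability */
    /* Optimal bit count for a Bloom filter: m = -n * ln(p) / (ln 2)^2 */
    double m = -n * log(p) / (log(2.0) * log(2.0));
    printf("bits: %.0f (about %.1f MB)\n", m, m / 8.0 / (1024.0 * 1024.0));
    /* Prints roughly 143.8 million bits, i.e. about 17 MB:
       comfortably positive and well under 2^31. */
    return 0;
}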
Sorry about this, I'm the culprit :)
Hopefully we will be able to push the next version soon. This may not be the time to mention it, but there is an upside to this accident: it means we can definitely kill the current serial form of BloomFilter and its supporting code (which was an accident itself), something I have been trying to fix for a month now. Incidentally, that fix also fixes this problem.
Edit: more information here (and in Louis' filed issue)
This should probably be filed as an issue on Guava, not on StackOverflow. (I can confirm it, by the way, and I've mostly figured out what's going on.)
UPDATE: I've filed an issue and started a patch.

How to handle MEMCACHED_SERVER_MARKED_DEAD?

I have a cluster of 10 memcached servers using consistent hashing. When the key passed to memcached_get() hashes to an unavailable server, all I get is a MEMCACHED_SERVER_MARKED_DEAD response (return value).
I would expect the key to be redistributed to the next available server in this case, so that I get NOTFOUND from the next memcached_get() call. However, I keep getting MEMCACHED_SERVER_MARKED_DEAD, and so I'm unable to set a new value.
I discovered that I can call memcached_behavior_set(..., MEMCACHED_BEHAVIOR_DISTRIBUTION). This triggers hash redistribution, and then it works the way I want. However, I'm not sure it is a good approach. Is it?
Generally you want to enable MEMCACHED_BEHAVIOR_DISTRIBUTION from the start if you are dealing with multiple memcached pools. So yes, that solution will work.
If you are having further problems, take a look at MEMCACHED_BEHAVIOR_REMOVE_FAILED_SERVERS, which will automatically purge failed servers from the pool after a given number of failures.
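A rough sketch of wiring both behaviors up with libmemcached (the server addresses and the failure limit are example values, and REMOVE_FAILED_SERVERS needs a reasonably recent libmemcached):

#include <libmemcached/memcached.h>

int main(void)
{
    memcached_st *memc = memcached_create(NULL);

    /* Example pool; replace with your ten servers. */
    memcached_server_add(memc, "10.0.0.1", 11211);
    memcached_server_add(memc, "10.0.0.2", 11211);

    /* Consistent (ketama) distribution: keys rehash to the
       surviving servers when a node drops out of the pool. */
    memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_DISTRIBUTION,
                           MEMCACHED_DISTRIBUTION_CONSISTENT);

    /* Eject a server after 5 consecutive failures. */
    memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_SERVER_FAILURE_LIMIT, 5);
    memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_REMOVE_FAILED_SERVERS, 1);

    /* ... memcached_get()/memcached_set() as before ... */

    memcached_free(memc);
    return 0;
}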
I found the answer myself.
https://bugs.launchpad.net/libmemcached/+bug/777672
Applying the patch solved all my problems. I'm surprised it has been broken since 0.39 and nobody has cared.

TLS sequence number

I'm working on a college paper about TLS, and I am asked why the TLS sequence number counter is a 64-bit number when TLS only uses a 32-bit sequence number in its messages. I've looked around for a while, even checked the RFC, and I have found nothing so far. Can anyone help me?
Looks to me like the question is just plain wrong. TLS uses 64-bit sequence numbers, and these are implicit (i.e. not transmitted as part of TLS messages).
Maybe the original question is confusing SQNs in TLS with SQNs in IPsec: there, 32-bit sequence numbers are included in ESP and AH header fields, but 64-bit extended sequence numbers (ESNs) are permitted by the relevant RFCs.
I take it the following quote from RFC 2246, page 74, first paragraph, fifth sentence is an insufficient answer?
Since sequence numbers are 64-bits long, they should never overflow.
There can be, and often are, differences between the wording of the specification and any particular conforming implementation. English is an imprecise language for algorithm specification.
You don't specify whether the implementation you are looking at never overflows into bit 33, or whether you have simply not seen it happen. Claiming that you have seen the counter wrap modulo 2^32 would be a different claim altogether.
Please first make sure you understand what you are asking. What is a "TLS message"? Are you referring to TLS records? TLS uses a 64-bit counter for records, and this number is not included in the records themselves; it is used implicitly.
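To make the "implicit" part concrete: each side keeps its own 64-bit counter, increments it for every record, and mixes it into the record MAC without ever transmitting it. A toy sketch of the MAC input layout from the TLS 1.0/1.2 RFCs (illustrative only, not a real TLS implementation):

#include <stdint.h>
#include <string.h>

/* One implicit 64-bit sequence number per direction; it is reset to
   zero whenever a new cipher state is installed and is never carried
   in any record field. */
struct record_state {
    uint64_t seq;
};

/* Build the bytes the RFCs say are MACed per record:
   seq_num || type || version || length || fragment. */
static size_t build_mac_input(struct record_state *st,
                              uint8_t type, uint8_t ver_major,
                              uint8_t ver_minor,
                              const uint8_t *frag, uint16_t frag_len,
                              uint8_t *out)
{
    size_t off = 0;
    for (int i = 7; i >= 0; i--)            /* 64-bit big-endian seq_num */
        out[off++] = (uint8_t)(st->seq >> (8 * i));
    out[off++] = type;
    out[off++] = ver_major;
    out[off++] = ver_minor;
    out[off++] = (uint8_t)(frag_len >> 8);  /* 16-bit fragment length */
    out[off++] = (uint8_t)frag_len;
    memcpy(out + off, frag, frag_len);
    off += frag_len;

    st->seq++;  /* both ends advance in lockstep, so no wire bytes are needed */
    return off;
}

Because the peer maintains an identical counter, a dropped, replayed, or reordered record makes the MACs disagree, which is exactly why the number never needs to appear in the message.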