I'm working on an STM32F103RB Nucleo board. I want to know how CAN messages are sorted into the FIFOs upon reception of data. And what happens after a FIFO is full (more than 3 messages)?
When you configure a filter bank, you also specify the receive FIFO it is assigned to (there are 2 of them, each holding up to 3 messages). Messages accepted by a filter bank go into the associated FIFO.
FIFO (mailbox) overrun can trigger an interrupt if enabled. The behavior of the FIFO and the fate of the incoming messages are determined by the RFLM bit of the CAN->MCR register.
RFLM = 0 -> The last (3rd) message is overwritten (destroyed) by newly arriving messages. The first (oldest) 2 messages are preserved until you read them.
RFLM = 1 -> The FIFO is locked. Newly arriving messages are discarded. The oldest 3 messages are preserved.
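For reference, a rough sketch of how this could look with the recent STM32 HAL CAN driver (assuming a CAN_HandleTypeDef whose bit-timing fields are already filled in elsewhere; the filter values and function name are placeholders, not from the question):

#include "stm32f1xx_hal.h"   /* sketch only: recent STM32F1 HAL assumed */

static void can_rx_setup(CAN_HandleTypeDef *hcan)
{
    /* RFLM lives in CAN->MCR and can only be changed in initialization mode;
     * recent HAL versions expose it as ReceiveFifoLocked in the init struct. */
    hcan->Init.ReceiveFifoLocked = ENABLE;   /* RFLM = 1: lock FIFO when full */
    HAL_CAN_Init(hcan);

    CAN_FilterTypeDef filter = {0};
    filter.FilterBank           = 0;
    filter.FilterMode           = CAN_FILTERMODE_IDMASK;
    filter.FilterScale          = CAN_FILTERSCALE_32BIT;
    filter.FilterIdHigh         = 0x0000;                /* mask 0: accept every ID */
    filter.FilterMaskIdHigh     = 0x0000;
    filter.FilterFIFOAssignment = CAN_RX_FIFO0;          /* matches go to FIFO 0    */
    filter.FilterActivation     = ENABLE;
    HAL_CAN_ConfigFilter(hcan, &filter);

    HAL_CAN_Start(hcan);
}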
And what happens after FIFO is full(more than 3 messages)?
Then you are basically done for - you'll lose data on an Rx FIFO overflow, which is often unacceptable in real-time CAN systems. So if your MCU is too busy to always meet the 3-message deadline, you have to implement some ugly system with interrupts + ring buffers.
This is one reason why CAN controllers from somewhere around the late 90s/early 2000s started to use rx buffers of some 5 to 8 messages. bxCAN is apparently ancient, since it is worse than those 20+ year old controllers.
Hopefully you can DMA the messages, which is much prettier than the mentioned interrupt/ring buffer complexity. If that's not an option, then you should perhaps go for a modern CAN controller instead. Basically any other CAN controller on the market has a larger rx FIFO than this one.
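For completeness, a minimal sketch of the interrupt + ring buffer approach mentioned above, again assuming the recent STM32 HAL CAN driver (HAL_CAN_GetRxMessage and the FIFO 0 message-pending callback); the ring size and names are placeholders:

#include "stm32f1xx_hal.h"   /* sketch only: recent STM32 HAL CAN driver assumed */

#define RX_RING_SIZE 32u      /* power of two so the indices can wrap freely */

typedef struct {
    CAN_RxHeaderTypeDef header;
    uint8_t             data[8];
} can_frame_t;

static can_frame_t       rx_ring[RX_RING_SIZE];
static volatile uint32_t rx_head;   /* written by the ISR       */
static volatile uint32_t rx_tail;   /* written by the main loop */

/* Called by the HAL when FIFO 0 has a pending message (enable with
 * HAL_CAN_ActivateNotification(hcan, CAN_IT_RX_FIFO0_MSG_PENDING)). */
void HAL_CAN_RxFifo0MsgPendingCallback(CAN_HandleTypeDef *hcan)
{
    can_frame_t scratch, *slot = &scratch;

    /* Use the next ring slot if there is room, otherwise read into a scratch
     * frame so the hardware FIFO is still released (that frame is lost). */
    if ((rx_head - rx_tail) < RX_RING_SIZE)
        slot = &rx_ring[rx_head % RX_RING_SIZE];

    if (HAL_CAN_GetRxMessage(hcan, CAN_RX_FIFO0, &slot->header, slot->data) == HAL_OK
        && slot != &scratch)
        rx_head++;
}

/* Main-loop consumer: drain whatever the ISR has buffered. */
void can_process_pending(void)
{
    while (rx_tail != rx_head) {
        can_frame_t *frame = &rx_ring[rx_tail % RX_RING_SIZE];
        /* ... handle frame->header / frame->data ... */
        rx_tail++;
    }
}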
I must solve this problem:
Two processes, A and B, each need three records, 1, 2, and 3, in a
database. If A asks for them in the order 1, 2, 3, and B asks for them
in the same order, deadlock is not possible. However, if B asks for
them in the order 3, 2, 1, then deadlock is possible. With three
resources, there are 3! or six possible combinations in which each
process can request them. What fraction of all the combinations is
guaranteed to be deadlock free?
And I've seen the solution to this problem in a book:
123 deadlock free
132 deadlock free
213 possible deadlock
231 possible deadlock
312 possible deadlock
321 possible deadlock
Since four of the six may lead to deadlock, there is a 1/3 chance of
avoiding a deadlock and a 2/3 chance of getting one.
But I can't figure out the logic behind this solution.
Would someone please explain why this solution is correct?
I've searched a lot but didn't find anything, and all of the answers to this problem were without a clear explanation.
Deadlock occurs when both threads must wait to acquire a lock that the other thread already acquired (causing both threads to wait forever).
If both threads try to acquire the same lock first, then only one thread can succeed and the other must wait. The waiting thread waits while not holding any locks, so deadlock can't happen: the thread that acquired lock 1 will be able to acquire all the other locks it wants and will be able to release lock 1 when it's finished (which allows the waiting thread to continue).
E.g. if A and B try to acquire lock 1 first and A wins, then B waits while not holding any lock and A can acquire any other locks in any order it wants because B isn't holding any locks (and then A will release lock 1 and B can stop waiting).
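To make the lock-ordering argument concrete, here is a small self-contained sketch in C with POSIX threads (the three mutexes stand in for the three records; the names and output are illustrative only, not part of the original problem):

#include <pthread.h>
#include <stdio.h>

/* One mutex per "record" in the database. */
static pthread_mutex_t record[3] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

/* Both A and B lock the records in the same global order: 1, 2, 3.
 * Whoever wins record 1 never waits on anything the other thread holds,
 * so it can always finish and release, and deadlock is impossible.
 * If B instead locked in the order 3, 2, 1, A could hold record 1 while
 * B holds record 3, and each would then wait forever for the other. */
static void *worker(void *name)
{
    for (int i = 0; i < 3; i++)
        pthread_mutex_lock(&record[i]);

    printf("%s holds records 1, 2, 3\n", (const char *)name);

    for (int i = 2; i >= 0; i--)
        pthread_mutex_unlock(&record[i]);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "A");
    pthread_create(&b, NULL, worker, "B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}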
I need help with an architecture issue.
I am developing a CEP system based on Kafka technology with Java.
The CEP system should have the following characteristics:
distributed (cluster)
scalable
fault-tolerant
The CEP system should perform the following actions:
create events from different sources, which are actually multi-partitioned Kafka topics (ETL part)
analyze sequences of those events and, if they match special patterns (scenarios), put a reaction record into some store (analyze part)
every X period, query this store and communicate with the client if it is time (schedule part)
If a cancel event appears during the X period, I remove the reaction record from the store.
I created that system using the Kafka Streams library, but the resulting architecture is not so good.
Kafka Streams uses RocksDB as a backend to store state. There are many problems with managing stores in cluster mode and keeping the data consistent. Also, I can't run SQL queries against the stores, which would save me from iterating over every record to check whether the time for a reaction has come.
I'm not an architect, and I am the only one working on this task. I was advised to look at Kafka Streams and Flink for building a CEP program. But do these technologies really fit?
There is no question about the ETL part.
But how can I build the analyze part and (more interestingly) the query part? What tools can I use?
I'm grateful for any help and advice.
[UPD]
About queries and stores:
We need to check whether the time to send a communication has come. If it has, we communicate with the person: push message, email, or any other channel.
select
...where event_time + wait_time < now
After that, we need to update that record in the store to the next message of this scenario (and repeat this algorithm until the person reaches the last message of the scenario or performs the cancel action).
Sequence of scenario A:
ev A -> ev B -> ev C -> ev D -----> start scenario -----> ev E or msg c was sent -----> cancel scenario
Messages for scenario A:
msg a (send after wait_time: 10 minutes)
msg b (send after wait_time: 1 day)
msg c (send after wait_time: 7 days) - last
update
... where user_id = xxx and scenario_id = A
If the action in the 2nd point was performed, we also need to update the userStore (it holds some information about users, including special counters; they help us avoid spamming the client and sending messages at night).
update
... where user_id = xxx
I wrote an engine for CEP with some rules, which I save in a special store - scenarioStore.
Thus, there are several stores:
initialStore (keeps the last event of a scenario sequence with message parameters, waiting for the time it should be sent) - ev D
scenarioStore (sequences of events by scenarios) - CEP rules
messageStore (texts and other properties of messages) - msg rules
userStore (information about users)
You can definitely do complex event processing (CEP) with Kafka Streams. There are even open-source libraries for that, such as kafkastreams-cep.
The Kafka Streams framework supports interactive queries, where you can query your state stores to retrieve the required data. You can add a REST layer to make them queryable through a REST API. Please see the code example WordCountInteractiveQueriesExample.
I've read many Stack Overflow questions similar to this, but I don't think any of the answers really satisfied my curiosity. I have an example below on which I would like some clarification.
Suppose the client is blocking on socket.recv(1024):
socket.recv(1024)
print("Received")
Also, suppose I have a server sending 600 bytes to the client. Let us assume that these 600 bytes are broken into 4 small packets (of 150 bytes each) and sent over the network. Now suppose the packets reach the client at different timings with a difference of 0.0001 seconds (eg. one packet arrives at 12.00.0001pm and another packet arrives at 12.00.0002pm, and so on..).
How does socket.recv(1024) decide when to return execution to the program and allow the print() function to execute? Does it return execution immediately after receiving the 1st packet of 150 bytes? Or does it wait for some arbitrary amount of time (eg. 1 second, for which by then all packets would have arrived)? If so, how long is this "arbitrary amount of time"? Who determines it?
Well, that will depend on many things, including the OS and the speed of the network interface. For a 100 gigabit interface, the 100us is "forever," but for a 10 mbit interface, you can't even transmit the packets that fast. So I won't pay too much attention to the exact timing you specified.
Back in the day when TCP was being designed, networks were slow and CPUs were weak. Among the flags in the TCP header is the "Push" flag to signal that the payload should be immediately delivered to the application. So if we hop into the Waybak
machine the answer would have been something like it depends on whether or not the PSH flag is set in the packets. However, there is generally no user space API to control whether or not the flag is set. Generally what would happen is that for a single write that gets broken into several packets, the final packet would have the PSH flag set. So the answer for a slow network and weakling CPU might be that if it was a single write, the application would likely receive the 600 bytes. You might then think that using four separate writes would result in four separate reads of 150 bytes, but after the introduction of Nagle's algorithm the data from the second to fourth writes might well be sent in a single packet unless Nagle's algorithm was disabled with the TCP_NODELAY socket option, since Nagle's algorithm will wait for the ACK of the first packet before sending anything less than a full frame.
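As a small aside, this is roughly how Nagle's algorithm is disabled with the TCP_NODELAY option on a POSIX socket (sketch only; fd is a connected TCP socket provided by the caller and disable_nagle is an illustrative helper, not from the question):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm so small writes are sent immediately
 * instead of being held back while an earlier packet is unacknowledged. */
static int disable_nagle(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}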
If we return from our trip in the Waybak machine to the modern age where 100 Gigabit interfaces and 24 core machines are common, our problems are very different and you will have a hard time finding an explicit check for the PSH flag being set in the Linux kernel. What is driving the design of the receive side is that networks are getting way faster while the packet size/MTU has been largely fixed and CPU speed is flatlining but cores are abundant. Reducing per packet overhead (including hardware interrupts) and distributing the packets efficiently across multiple cores is imperative. At the same time it is imperative to get the data from that 100+ Gigabit firehose up to the application ASAP. One hundred microseconds of data on such a nic is a considerable amount of data to be holding onto for no reason.
I think one of the reasons that there are so many questions of the form "What the heck does receive do?" is that it can be difficult to wrap your head around what is a thoroughly asynchronous process, whereas the send side has a more familiar control flow where it is much easier to trace the flow of packets to the NIC and where we are in full control of when a packet will be sent. On the receive side packets just arrive when they want to.
Let's assume that a TCP connection has been set up and is idle, there is no missing or unacknowledged data, the reader is blocked on recv, and the reader is running a fresh version of the Linux kernel. And then a writer writes 150 bytes to the socket and the 150 bytes gets transmitted in a single packet. On arrival at the NIC, the packet will be copied by DMA into a ring buffer, and, if interrupts are enabled, it will raise a hardware interrupt to let the driver know there is fresh data in the ring buffer. The driver, which desires to return from the hardware interrupt in as few cycles as possible, disables hardware interrupts, starts a soft IRQ poll loop if necessary, and returns from the interrupt. Incoming data from the NIC will now be processed in the poll loop until there is no more data to be read from the NIC, at which point it will re-enable the hardware interrupt. The general purpose of this design is to reduce the hardware interrupt rate from a high speed NIC.
Now here is where things get a little weird, especially if you have been looking at nice clean diagrams of the OSI model where higher levels of the stack fit cleanly on top of each other. Oh no, my friend, the real world is far more complicated than that. That NIC that you might have been thinking of as a straightforward layer 2 device, for example, knows how to direct packets from the same TCP flow to the same CPU/ring buffer. It also knows how to coalesce adjacent TCP packets into larger packets (although this capability is not used by Linux and is instead done in software). If you have ever looked at a network capture and seen a jumbo frame and scratched your head because you sure thought the MTU was 1500, this is because this processing is at such a low level it occurs before netfilter can get its hands on the packet. This packet coalescing is part of a capability known as receive offloading, and in particular let's assume that your NIC/driver has generic receive offload (GRO) enabled (which is not the only possible flavor of receive offloading), the purpose of which is to reduce the per packet overhead from your firehose NIC by reducing the number of packets that flow through the system.
So what happens next is that the poll loop keeps pulling packets off of the ring buffer (as long as more data is coming in) and handing it off to GRO to consolidate if it can, and then it gets handed off to the protocol layer. As best I know, the Linux TCP/IP stack is just trying to get the data up to the application as quickly as it can, so I think your question boils down to "Will GRO do any consolidation on my 4 packets, and are there any knobs I can turn that affect this?"
Well, the first thing you can do is disable any form of receive offloading (e.g. via ethtool), which I think should get you 4 reads of 150 bytes for 4 packets arriving like this in order, but I'm prepared to be told I have overlooked another reason why the Linux TCP/IP stack won't send such data straight to the application if the application is blocked on a read as in your example.
The other knob you have if GRO is enabled is GRO_FLUSH_TIMEOUT which is a per NIC timeout in nanoseconds which can be (and I think defaults to) 0. If it is 0, I think your packets may get consolidated (there are many details here including the value of MAX_GRO_SKBS) if they arrive while the soft IRQ poll loop for the NIC is still active, which in turn depends on many things unrelated to your four packets in your TCP flow. If non-zero, they may get consolidated if they arrive within GRO_FLUSH_TIMEOUT nanoseconds, though to be honest I don't know if this interval could span more than one instantiation of a poll loop for the NIC.
There is a nice writeup on the Linux kernel receive side here which can help guide you through the implementation.
A normal blocking receive on a TCP connection returns as soon as there is at least one byte to return to the caller. If the caller would like to receive more bytes, they can simply call the receive function again.
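To illustrate that last point, here is a minimal C sketch of a receive loop that keeps calling recv() until an expected byte count has arrived (POSIX sockets; recv_exact and its parameters are illustrative, not from the original question):

#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

/* Sketch only: read exactly `want` bytes from a connected TCP socket.
 * A single recv() may return as little as one byte, so keep looping.
 * Returns 0 on success, -1 on error or if the peer closed early. */
static int recv_exact(int fd, void *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(fd, (char *)buf + got, want - got, 0);
        if (n <= 0)            /* 0: peer closed, <0: error */
            return -1;
        got += (size_t)n;
    }
    return 0;
}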
I have a doubt about a paradigm in distributed systems.
Consider the condition variables that the signal operation unblocks. If we say that the processes are signaled in Last In, First Out order, what advantages and disadvantages do we get from this?
Disadvantages and advantages relative to what?... Assuming it is relative to having no order, I would say that a disadvantage is that if many processes are constantly being put to wait on that condition, we may see starvation, because only the most recent processes will wake up, making it impossible for the first ones to ever wake up unless processes stop being put to wait.
The advantages I'm not so certain about, but we can always say that at least we have some order and the signal won't just wake a random process, which we may use to our benefit.
There may be other advantages or disadvantages that I didn't think about so it may be best to wait for other answers.
We are working on a web application that has a feature to generate metrics based on how the user is using the app. We are exploring using Storm to process the user events and generate the metrics.
The high-level approach we are planning:
On the client side (browser), a JavaScript component captures user events and posts them to the server; the event message is then posted to RabbitMQ.
A Storm spout consumes the messages from RabbitMQ.
A Storm bolt processes the messages and computes the metrics.
Finally, the metrics are saved to MongoDB.
Question:
The bolt has to accumulate the events' metrics before saving to MongoDB for two reasons: to avoid I/O load on MongoDB, and because the metrics logic depends on multiple events. So we need intermediate persistence for the bolt without impacting performance.
How can we add temporary persistence within the Storm topology while we calculate statistics on the data pulled from RabbitMQ, and then save the metrics to permanent persistence (MongoDB) only at some interval or on some other logical trigger?
Please clarify if I don't fully answer your question, but the general gist of your query seems to be: how can we persist within our Storm topology while we calculate statistics on the data pulled from RabbitMQ?
Luckily for you, Storm has already considered this question and developed Storm Trident, which performs real-time aggregation on incoming tuples and allows the topology to persist the aggregated state for DRPC queries and for situations requiring high availability and persistence.
For example, in your particular scenario, you would have this kind of TridentTopology:
TridentTopology topology = new TridentTopology();
TridentState metricsState = topology.newStream("rabbitmq-spout", new RabbitMQConsumer())
    .each(new Fields("rawData"), new ComputeMetricsFunction(), new Fields("output"))
    .groupBy(new Fields("output"))
    .persistentAggregate(new MemoryMapState.Factory(), new AggregatorOfYourChoice(), new Fields("aggregationResult"));
Note: the code isn't 100% accurate and should be considered more as pseudo-code. See Nathan's word count example for a concrete implementation (https://github.com/nathanmarz/storm/wiki/Trident-tutorial).