I read the I2C specification provided by NXP, but I am still not clear on some points. Can you explain them to me?
Normally the slave is controlled by the I2C master. If acknowledgment is enabled by the I2C master, how does the slave generate the I2C acknowledgment?
The I2C slave address and an I2C data byte are both one byte of data, so how does the I2C slave differentiate between them?
Assuming the NXP I2C specification is identical to the industry standard:
1) I don't think that "acknowledgement is enabled by the Master" is the correct term here. After each (full) byte sent by the master, it waits for the slave to send back an acknowledgement bit (ACK) or a not-acknowledgement bit (NACK). The slave does this by pulling the SDA line low (ACK) or leaving it high (NACK) during the ninth clock pulse.
2) For transferring data from master to slave (and back) it is very important to keep the order of the bytes sent. The slave address is always the first byte after a Start (or repeated Start) condition, so the slave knows from a byte's position whether it is an address or data. A typical example would be like this:
Master sends: Start Signal
Master sends: Slave Address + Write-Bit
Slave sends: ACK
Master sends: Register Address the master wants to read
Slave sends: ACK
Master sends: repeated Start Signal
Master sends: Slave Address + Read-Bit
Slave sends: ACK
Slave sends: content of the register at the address the master requested
Master sends: NACK
Master sends: Stop Signal
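The sequence above can be sketched in code. This is only an illustrative model (a 0x48 slave and register 0x01 are made-up values); it shows how the 7-bit slave address and the R/W bit combine into the first byte after a Start, which is the positional rule that distinguishes an address byte from a data byte:

```python
# Sketch of the byte sequence for the register-read transaction above.
# The 7-bit slave address is shifted left and the R/W bit (0 = write,
# 1 = read) goes in the LSB. The first byte after a START is interpreted
# as an address only because of its position, not because of its value.

WRITE, READ = 0, 1

def address_byte(addr7: int, rw: int) -> int:
    """Combine a 7-bit slave address with the R/W bit."""
    return (addr7 << 1) | rw

def register_read_frames(addr7: int, reg: int):
    """Return the signals/bytes the master puts on the bus, in order."""
    return [
        ("START", None),
        ("ADDR+W", address_byte(addr7, WRITE)),  # slave ACKs
        ("REG", reg),                            # slave ACKs
        ("REP-START", None),
        ("ADDR+R", address_byte(addr7, READ)),   # slave ACKs, then sends data
        ("STOP", None),
    ]

for label, byte in register_read_frames(0x48, 0x01):
    print(label, f"0x{byte:02X}" if byte is not None else "")
```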
If you are looking for more details on I2C, there is plenty of information available online. For me, section 21 of this data sheet helped a lot in understanding it.
I have to produce messages based on virtual IPs (those IPs all target the same Kafka cluster behind them).
So I need to extract the IP from the URL (producer request) to route the message to a specific topic before the message is persisted to Kafka.
**Example**
Static IPs available on the host machine:
192.168.0.2
192.168.0.3
192.168.0.4
192.168.0.5
Destination topics:
Dest02 for IP 192.168.0.2
Dest03 for IP 192.168.0.3
Dest04 for IP 192.168.0.4
Dest05 for IP 192.168.0.5
So I publish a record 001 to topicA (with the virtual IP 192.168.0.2 set in the producer config) in the service
=> record001 is routed to the Dest02 destination topic
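The routing rule in the example can be sketched as a simple lookup. This is only an illustration of the decision logic (the `DestNN` naming scheme is inferred from the example above, not something Kafka provides):

```python
# Hypothetical routing rule: derive the destination topic from the last
# octet of the virtual IP, e.g. 192.168.0.2 -> Dest02. The "DestNN"
# naming scheme is an assumption based on the example above.
def destination_topic(virtual_ip: str) -> str:
    last_octet = int(virtual_ip.rsplit(".", 1)[1])
    return f"Dest{last_octet:02d}"

print(destination_topic("192.168.0.2"))  # Dest02
```

In a Kafka Connect SMT or an interceptor, this function would run per record, rewriting the target topic before the record is written to the broker.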
If you wonder why I want to route my messages this way, it's because I cannot change the upstream service (producer) nor the downstream services (consumers).
One more thing: I need to base this logic on the virtual IP, as it is the discriminant element for the decision; otherwise I would not know where to route my message.
Thanks for your help.
I am investigating SMTs with the HTTP source connector to try to catch the message before it is written to the Kafka brokers, but maybe that's not a good approach.
I would like to use an STM32F105's I2C bus in both master and slave modes.
I'd like it to listen as a slave, except when it is sending data or listening for a response to a packet it sent as master.
Does the CubeMX HAL allow this?
We have a network of Red Hat AMQ 7.2 brokers with Master/Slave configuration. The client application publish / subscribe to topics on the broker cluster.
How do we handle the situation where the network connectivity between the client application and the broker cluster goes down? Does Red Hat AMQ have a native solution, such as a client-local queue and a JMS-to-JMS bridge between the local queue and the remote broker, so that a network connectivity failure will not result in loss of messages?
It would be possible for you to craft a solution where your clients use a local broker and that local broker bridges messages to the remote broker. The local broker will, of course, never lose network connectivity with the local clients since everything is local. However, if the local broker loses connectivity with the remote broker it will act as a buffer and store messages until connectivity with the remote broker is restored. Once connectivity is restored then the local broker will forward the stored messages to the remote broker. This will allow the producers to keep working as if nothing has actually failed. However, you would need to configure all this manually.
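The store-and-forward setup described above maps to a core bridge in Red Hat AMQ (ActiveMQ Artemis). A rough `broker.xml` sketch for the local broker, where the queue name, forwarding address, and connector URL are placeholders for your own configuration:

```xml
<!-- Sketch only: names and addresses below are placeholders. -->
<connectors>
  <connector name="remote-broker">tcp://remote-host:61616</connector>
</connectors>

<bridges>
  <bridge name="local-to-remote">
    <!-- local store-and-forward queue the clients publish to -->
    <queue-name>local.outbound</queue-name>
    <!-- address on the remote broker to forward messages to -->
    <forwarding-address>orders</forwarding-address>
    <!-- retry forever while the remote broker is unreachable -->
    <reconnect-attempts>-1</reconnect-attempts>
    <static-connectors>
      <connector-ref>remote-broker</connector-ref>
    </static-connectors>
  </bridge>
</bridges>
```

While the remote broker is unreachable the messages accumulate in `local.outbound`; once connectivity returns, the bridge drains the queue to the remote address.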
That said, even if you don't implement such a solution there is absolutely no need for any message loss even when clients encounter a loss of network connectivity. If you send durable (i.e. persistent) messages then by default the client will wait for a response from the broker telling the client that the broker successfully received and persisted the message to disk. More complex interactions might require local JMS transactions and even more complex interactions may require XA transactions. In any event, there are ways to eliminate the possibility of message loss without implementing some kind of local broker solution.
Version : ActiveMQ Artemis 2.10.1
When we use ha-policy and replication, is the synchronization mode between the master and the slave full synchronization? Can we choose between full synchronization and asynchronous replication?
I'm not 100% certain what you mean by "full synchronization" so I'll just explain how the brokers behave...
When a master broker receives a durable (i.e. persistent) message it will write the message to disk and send the message to the slave in parallel. The broker will then wait for the local disk write operation to complete as well as receive a response from the slave that it accepted the message before it responds to the client who originally sent the message.
This behavior is not configurable.
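For reference, the replicated pairing itself is declared via `ha-policy` in each broker's `broker.xml`; a minimal sketch (the replication behavior described above applies once this pairing is established):

```xml
<!-- on the master broker -->
<ha-policy>
  <replication>
    <master/>
  </replication>
</ha-policy>

<!-- on the slave broker -->
<ha-policy>
  <replication>
    <slave/>
  </replication>
</ha-policy>
```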
I'm a newbie with Kafka. I've been testing Kafka for sending messages.
This is my situation, now.
add.java in my local VM regularly sends messages to Kafka in my local VM.
relay.java on another server polls from Kafka in my local VM and produces to Kafka on that other server.
While messages were flowing from Kafka in my local VM to Kafka on the other server,
I pulled the LAN cable out of my laptop. A few seconds later, I plugged it back in.
I then found that some messages were lost while the LAN cable was disconnected.
However, when the network is reconnected, I want to receive all of the messages from the disconnection period without
missing any.
Are there any suggestions?
Any help would be highly appreciated.
First of all, I suggest you use MirrorMaker (1 or 2) because it supports exactly this use case of consuming and producing to another cluster.
Secondly, add.java should not be dropping messages if your LAN is disconnected.
Whether you end up with dropped messages on the way from relay.java depends on your consumer and producer settings there. For example, you should definitely disable automatic offset commits and only commit after you have received a completion event and acknowledgement for the corresponding producer action. This results in at-least-once delivery.
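The settings mentioned above would look roughly like this in relay.java's client configuration (standard Kafka client properties; exact tuning depends on your setup):

```properties
# consumer side: take manual control of offset commits
enable.auto.commit=false

# producer side: wait for all in-sync replicas to acknowledge,
# and avoid duplicates when retrying after a network blip
acks=all
enable.idempotence=true
```

With `enable.auto.commit=false`, relay.java should commit the consumer offset only inside the producer's send callback, after the broker has acknowledged the forwarded record.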
You can find multiple posts about processing guarantees in Kafka.