Why are consecutive timestamps in rosbag equal once in a while? - matlab

I'm using ROS Noetic on Ubuntu 20.04 (kernel 5.15.0-53-generic) on an MSI GF66, and I have encountered a strange problem when analyzing a recorded rosbag.
I publish messages at 10 Hz from a Simulink model to the /cmd_vel topic of a Turtlebot that moves in Gazebo, and I record the /odom and /cmd_vel topics. When analyzing the recorded bag, I notice something strange: every once in a while, two consecutive timestamps are exactly equal, even though the values of the two corresponding messages are not (this holds for both /odom and /cmd_vel).
I use the following script in Matlab to extract the information from the bag:
bagSelect = rosbag('BagPubSimulink.bag');
odomBag = select(bagSelect, 'Time', [bagSelect.StartTime bagSelect.EndTime], 'Topic', '/odom');
odomStructs = readMessages(odomBag, 'DataFormat','struct');
odomTime = odomBag.MessageList.Time;
Then I loop over odomStructs to extract the values I need, say odomX.
Taking two instants k and k + 1 when the problem occurs:
odomTime(k : k + 1) = {149.674000000000; 149.674000000000}
odomX(k : k + 1) = {-0.790906331505904; -0.787962666465643}
I've noticed that this happens more frequently when the recorded topic has a high publishing frequency: e.g., if I record the /clock topic, the duplication is magnified and can span more than two consecutive timestamps.
Can you please help me with this problem?
In order to install ROS, I've followed the instructions at https://emanual.robotis.com/docs/en/platform/turtlebot3/quick-start/ up to paragraph 1.1.5.
I actually had to add some commands shown in the video linked on that page, because they are not written out there.
I'm sorry if something is unclear or if I haven't used the correct wording: I'm new to both Ubuntu and ROS and have a lot to learn.
Please tell me if I need to provide more details to work out a solution.
Edit
The problem is not that the duplicated timestamps belong to messages of the two different topics I've recorded. In fact, this is the MessageList of the variable bagSelect:
Time Topic MessageType FileOffset
99.3160000000000 '/cmd_vel' 'geometry_msgs/Twist' 402403
99.3170000000000 '/odom' 'nav_msgs/Odometry' 402497
99.3270000000000 '/odom' 'nav_msgs/Odometry' 403261
99.3690000000000 '/odom' 'nav_msgs/Odometry' 404025
99.4150000000000 '/cmd_vel' 'geometry_msgs/Twist' 404789
99.4170000000000 '/odom' 'nav_msgs/Odometry' 404883
99.4610000000000 '/odom' 'nav_msgs/Odometry' 405647
99.4610000000000 '/odom' 'nav_msgs/Odometry' 406411
99.5050000000000 '/odom' 'nav_msgs/Odometry' 407175
99.5160000000000 '/cmd_vel' 'geometry_msgs/Twist' 407939
99.5270000000000 '/odom' 'nav_msgs/Odometry' 408033
99.5730000000000 '/odom' 'nav_msgs/Odometry' 408797
99.6160000000000 '/cmd_vel' 'geometry_msgs/Twist' 409561
99.6170000000000 '/odom' 'nav_msgs/Odometry' 409655
99.6650000000000 '/odom' 'nav_msgs/Odometry' 410419
99.6650000000000 '/odom' 'nav_msgs/Odometry' 411183
99.7120000000000 '/odom' 'nav_msgs/Odometry' 411947
99.7150000000000 '/cmd_vel' 'geometry_msgs/Twist' 412711
Interestingly, /odom is the only topic in this bag that suffers from the timestamp duplication. Thus, it seems that the problem doesn't affect the topics that I publish myself.
As a matter of fact, I've tried recording only the /clock topic with the Turtlebot standing still in the Gazebo world, and inside the MessageList I get bunches of equal timestamps referring to different time instants, meaning that the /clock messages themselves are correctly distinct from one another.

I think what you are reading in odomTime is not the timestamp of the message on the /odom topic. When you use MessageList, the table contains one row for each message in the bag (in your case a row for /odom and a row for /cmd_vel), which is why you are getting the timestamp twice.
Please refer to this tutorial for more info.

I solved the problem by doing the following:
for ii = 1 : length(odomStructs)
    % Combine the integer Sec/Nsec fields of header.stamp into one double
    sec = cast(odomStructs{ii}.Header.Stamp.Sec, 'double');
    nsec = cast(odomStructs{ii}.Header.Stamp.Nsec, 'double');
    odomTime(ii, 1) = sec + nsec*1e-9;
end
This way I read header.stamp, which isn't affected by the problem.
Don't mind the inefficiency of the loop: I implemented it this way just to make sure I wasn't taking the wrong element.
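For reference, the same fix can be done outside MATLAB. Below is a rough Python sketch assuming the rosbag API that ships with ROS Noetic (the bag file name is the one from the question; adjust to your file):
import rosbag

odom_times = []
with rosbag.Bag('BagPubSimulink.bag') as bag:
    for topic, msg, record_time in bag.read_messages(topics=['/odom']):
        # record_time is when rosbag wrote the message to disk (the column
        # showing duplicates); msg.header.stamp is when the publishing node
        # stamped it, which is the value worth analyzing.
        odom_times.append(msg.header.stamp.to_sec())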

Related

rxdart: Get buffered elements on stream subscription cancel

I'm using a rxdart ZipStream within my app to combine two streams of incoming bluetooth data. Those streams are used along with "bufferCount" to collect 500 elements each before emitting. Everything works fine so far, but if the stream subscription gets cancelled at some point, there might be a number of elements in those buffers that are omitted after that. I could wait for a "buffer cycle" to complete before cancelling the stream subscription, but as this might take some time depending on the configured sample rate, I wonder if there is a solution to get those buffers as they are even if the number of elements might be less than 500.
Here is some simplified code for explanation:
subscription = ZipStream.zip2(
  streamA.bufferCount(500),
  streamB.bufferCount(500),
  (streamABuffer, streamBBuffer) {
    return ...;
  },
).listen((data) {
  ...
});
Thanks in advance!
So for anyone wondering: as bufferCount is implemented with BufferCountStreamTransformer, which extends BackpressureStreamTransformer, there is a dispatchOnClose property that defaults to true. That means that if the underlying stream whose emitted elements are being buffered is closed, the remaining elements in the buffer are emitted one final time. This also applies to the example above. My mistake was closing the stream and cancelling the stream subscription instantly. By awaiting the stream's close and cancelling the subscription afterwards, everything works as expected.
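The close-then-flush behaviour is easy to illustrate outside rxdart. Here is a minimal Python analogue (buffer_count and the demo stream are hypothetical stand-ins, not rxdart API):
import asyncio

async def buffer_count(source, n):
    """Analogue of rxdart's bufferCount with dispatchOnClose=True:
    emit full buffers of n items, then flush the partial buffer
    when the source stream closes."""
    buf = []
    async for item in source:
        buf.append(item)
        if len(buf) == n:
            yield buf
            buf = []
    if buf:  # remaining elements dispatched on close
        yield buf

async def demo():
    async def source():
        for i in range(7):
            yield i

    async for chunk in buffer_count(source(), 3):
        print(chunk)  # [0, 1, 2] then [3, 4, 5] then [6]

asyncio.run(demo())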

QuickfixJ - I am getting sec definition details in logs but message.getGroup method fails when I try to retrieve field information

I am receiving a 35=d (SecurityDefinition) message from ICE, with all the details visible in my logs, when requesting it via a Java application that uses QuickFIX/J.
In my onMessage implementation I am trying to read the group data and the values of individual fields, but my code fails at getGroup() with a "field not found" error.
quickfix.fix44.SecurityDefinition.NoUnderlyings group = new quickfix.fix44.SecurityDefinition.NoUnderlyings();
message.getGroup(count, group);
This getGroup method internally calls QuickFIX/J's getGroups function, which fails at the line below:
this.getGroups(group.getFieldTag()); // group.getFieldTag() is 711, i.e. NoUnderlyings
Is there anything I am missing here? I have tried different ways to get the fields, but no luck; help would be much appreciated.
Just an observation: in the fromApp/onMessage method, I am not seeing the full message when I call message.toString(). I only see the first part; the second part, which has the actual security data (many groups), is not displayed. I'm not sure if there is any way other than toString() to see the full message inside these methods.
Message that I am getting in fromApp/onMessage from message.toString():
<20190828-12:14:47, FIX.4.4:XXXX/1/0->ICE, incoming> (8=FIX.4.4 9=XXXXX 35=d 49=ICE 34=5 52=20190828-12:14:47.695 56=XXXX 57=1 322=10342 323=4 320=1566994457340_0 393=91 82=1 67=1 9064=0 711=91
Message that I am getting in the logs:
<20190828-12:14:47, FIX.4.4:XXXX/1/0->ICE, incoming> (8=FIX.4.4 9=XXXXX 35=d 49=ICE 34=5 52=20190828-12:14:47.695 56=XXXX 57=1 322=10342 323=4 320=1566994457340_0 393=91 82=1 67=1 9064=0 711=91
311=5094981 309=UPL FQF0021.H0021 305=8 463=FXXXXX 307=UK Peak Electricity Futures (Gregorian) - UK Grid - Q1 21 308=IFEU 542=20201230 436=1.0 9013=0.01 9014=1.0 9040=0.01 9041=1.0 9017=1.0 9022=768 9024=1.0 9025=Y 916=20210101 917=20210331 9201=1900 9200=15 9202=Q1 21 9300=8148 9301=IPE UK Grid, UK (Peak) 9302=UK Grid 9303=UPL 998=MWh 9100=GBP 9101=GBP / MWh 9085=hourly 9083=2 9084=0 9061=4639 9062=UK Peak Electricity Futures (Gregorian) 9063=UK Power Peakload Futures (Gregorian) 9032=0.01 9215=1 9216=0 763=800
311=5094980 309=UPL FMH0021! 305=8 463=FXXXXX 307=UK Peak Electricity Futures (Gregorian) - UK Grid - Mar21 308=IFEU 542=20210225 436=1.0 9013=0.01 9014=1.0 9040=0.01 9041=1.0 9017=1.0 9022=276 9024=1.0 9025=Y 916=20210301 917=20210331 9201=1875 9200=12 9202=Mar21 9300=8148 9301=IPE UK Grid, UK (Peak) 9302=UK Grid 9303=UPL 998=MWh 9100=GBP 9101=GBP / MWh 9085=hourly 9083=2 9084=0 9061=4639 9062=UK Peak Electricity Futures (Gregorian) 9063=UK Power Peakload Futures (Gregorian) 9032=0.01 9215=1 9216=0 457=1 458=GB00H1RK4P63 459=U4 763=0
Which version of QuickFIX/J are you using? In some older versions the message got truncated when there were unknown fields.
That brings me to the question of whether the data dictionary you use really contains all the fields that ICE is sending. Do you have all those 9000+ tags in your dictionary? Please double-check that.
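For what it's worth, the usual pattern for reading a repeating group is a counted loop over the group entries. Here is a sketch of that pattern using the quickfix Python bindings rather than QuickFIX/J (the field classes are standard FIX 4.4 names, but treat the exact binding API as an assumption):
import quickfix as fix
import quickfix44 as fix44

def read_underlyings(message):
    # Number of entries in the repeating group (tag 711, NoUnderlyings)
    count = fix.NoUnderlyings()
    message.getField(count)
    group = fix44.SecurityDefinition.NoUnderlyings()
    for i in range(1, count.getValue() + 1):
        # getGroup raises FieldNotFound when the message was parsed without
        # a data dictionary that declares this group and its member tags,
        # which matches the failure described in the question.
        message.getGroup(i, group)
        symbol = fix.UnderlyingSymbol()  # tag 311
        group.getField(symbol)
        print(symbol.getValue())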

apache storm missing event detection based on time

I want to detect a missing event in a data stream (e.g. detect a customer request that has not been responded to within 1 hour of its reception).
Here, I want to detect the missing "Response" event and raise an alert.
I tried using tick tuples by setting TOPOLOGY_TICK_TUPLE_FREQ_SECS, but that is configured at the bolt level, and a tick might only arrive, say, 15 minutes after a customer request has been received.
@Override
public Map<String, Object> getComponentConfiguration() {
    Config conf = new Config();
    conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 1800);
    return conf;
}
This doesn't work.
Let me know in the comments if any other information is required. Thanks in advance for the help.
This might help: http://storm.apache.org/releases/1.0.3/Windowing.html
You can define 5-minute windows, check the status of the events in the last window, and alert based on what was received.
Alternatively, create an intermediate bolt that maintains these windows and emits normal alert tuples (instead of tick tuples) in case of timeouts, as sketched below.
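The intermediate-bolt idea boils down to remembering each request until its response arrives and periodically sweeping for overdue ones. Storm bolts are written in Java, but the bookkeeping can be sketched in Python (all names here are hypothetical, not Storm API):
import time

class PendingRequestTracker:
    """Bookkeeping an intermediate bolt could keep: remember when each
    request arrived and flag any request whose response has not been
    seen within timeout_s seconds."""

    def __init__(self, timeout_s=3600):
        self.timeout_s = timeout_s
        self.pending = {}  # request_id -> arrival time (epoch seconds)

    def on_request(self, request_id):
        self.pending[request_id] = time.time()

    def on_response(self, request_id):
        self.pending.pop(request_id, None)

    def sweep(self):
        """Call periodically (e.g. on each window boundary or tick);
        returns the ids whose response is overdue, so the caller can
        emit alert tuples for them."""
        now = time.time()
        overdue = [rid for rid, t in self.pending.items() if now - t > self.timeout_s]
        for rid in overdue:
            del self.pending[rid]
        return overdue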

Read from Kinesis is giving empty records when run using previous sequence number or timestamp

I am trying to read the messages pushed to a Kinesis stream with the help of the
get_records() and get_shard_iterator() APIs.
My producer keeps pushing records as it processes them at its end, and my consumer runs as a cron job every 30 minutes. So I tried storing the sequence number of the last message read in my database and using the AFTER_SEQUENCE_NUMBER shard iterator together with that sequence number. However, this doesn't work the second time around (the first run successfully reads all messages in the stream) after new messages are pushed.
I also tried using AT_TIMESTAMP together with the message timestamp that the producer pushes to the stream as part of each message, storing that timestamp for later use. Again, the first run processes all messages, and from the second run onwards I get empty records.
I am really not sure where I am going wrong; I would appreciate it if someone could help me with this.
The code below uses the timestamp approach, but the same is done for the sequence-number method too.
import datetime
import json

import boto3

# SETTINGS, mongo_coll, generic_config_coll, process_message and logger
# are defined elsewhere in the application.

def listen_to_kinesis_stream():
    kinesis_client = boto3.client('kinesis', region_name=SETTINGS['region_name'])
    stream_response = kinesis_client.describe_stream(StreamName=SETTINGS['kinesis_stream'])
    for shard_info in stream_response['StreamDescription']['Shards']:
        kinesis_stream_status = mongo_coll.find_one({'_id': "DOC_ID"})
        # Fall back to the epoch if this shard has never been read
        last_read_ts = kinesis_stream_status.get('state', {}).get(
            shard_info['ShardId'],
            datetime.date(1970, 1, 1).strftime("%Y-%m-%dT%H:%M:%S.%f"))
        shard_iterator = kinesis_client.get_shard_iterator(
            StreamName=SETTINGS['kinesis_stream'],
            ShardId=shard_info['ShardId'],
            ShardIteratorType='AT_TIMESTAMP',
            Timestamp=last_read_ts)
        get_response = kinesis_client.get_records(ShardIterator=shard_iterator['ShardIterator'], Limit=1)
        if len(get_response['Records']) == 0:
            continue
        message = json.loads(get_response['Records'][0]['Data'])
        process_resp = process_message(message)
        if process_resp['success'] is False:
            print(process_resp)
        generic_config_coll.update({'_id': "DOC_ID"}, {'$set': {'state.{0}'.format(shard_info['ShardId']): message['ts']}})
        print("Processed {0}".format(message))
        while 'NextShardIterator' in get_response:
            get_response = kinesis_client.get_records(ShardIterator=get_response['NextShardIterator'], Limit=1)
            if len(get_response['Records']) == 0:
                break
            message = json.loads(get_response['Records'][0]['Data'])
            process_resp = process_message(message)
            if process_resp['success'] is False:
                print(process_resp)
            mongo_coll.update({'_id': "DOC_ID"}, {'$set': {'state.{0}'.format(shard_info['ShardId']): message['ts']}})
            print("Processed {0}".format(message))
    logger.debug("Processed all messages from Kinesis stream")
    print("Processed all messages from Kinesis stream")
As per my discussion with an AWS technical support person, get_records can return a few responses with empty Records even when the shard holds more data, so it is not a good idea to break when len(get_response['Records']) == 0.
The better approach suggested was to keep a counter for the maximum number of messages to read in one run and to exit the loop after reading that many, as sketched below.
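A minimal sketch of that suggestion, reusing names from the code above (the cap and Limit values are arbitrary placeholders to tune; process_message is the function from the question):
import json

MAX_MESSAGES_PER_RUN = 10000  # arbitrary cap; size it to the 30-minute cron window

def drain_shard(kinesis_client, first_shard_iterator):
    """Read until the cap is hit or the shard is exhausted. An empty
    Records list alone is NOT treated as end-of-data, since get_records
    can return empty batches mid-stream."""
    shard_iterator = first_shard_iterator
    messages_read = 0
    while shard_iterator and messages_read < MAX_MESSAGES_PER_RUN:
        response = kinesis_client.get_records(ShardIterator=shard_iterator, Limit=100)
        for record in response['Records']:
            process_message(json.loads(record['Data']))
            messages_read += 1
        if not response['Records'] and response.get('MillisBehindLatest', 0) == 0:
            break  # caught up with the tip of an open shard
        shard_iterator = response.get('NextShardIterator')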

kafka 0.72, minimum number of brokers

I'm trying to create a Kafka producer that sends messages directly to Kafka brokers (and not via ZooKeeper).
I know that the better practice is to work with ZooKeeper, but for the moment I would like to send messages directly to a broker.
To do that, I'm setting the property "broker.list" as described in the documentation. The thing is, it appears that it requires a minimum of 3 brokers in order to work (otherwise I get an exception).
In the Kafka source code I can see:
if(brokerInfo.size < 3) throw new InvalidConfigException("broker.list has invalid value")
This is weird because my data center holds only 2 Kafka nodes (and 3 ZooKeeper nodes). What can I do in this case?
Is there a way around this?
The brokerInfo is obtained by splitting the individual broker info, NOT by counting the number of brokers. If you check the source code more carefully you will see something like
// check if each individual broker info is valid => (brokerId: brokerHost: brokerPort)
and then the info is split as below:
brokerInfoList.foreach { bInfo =>
  val brokerInfo = bInfo.split(":")
  if(brokerInfo.size < 3) throw new InvalidConfigException("broker.list has invalid value")
}
So every single broker entry is expected to have an id, a host name, and a port separated by the : delimiter.
Regarding the number of brokers, it basically just does this:
val brokerInfoList = config.brokerList.split(",")
if(brokerInfoList.size == 0) throw new InvalidConfigException("broker.list is empty")
So you should be fine, I guess: just pass a single broker in the brokerId:host:port format and it should work (the sketch below mirrors the check in isolation). Let us know how it goes.
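To see why a bare host:port entry trips the exception, here is the quoted validation mirrored in Python (a standalone sketch, not actual Kafka code):
def validate_broker_list(broker_list):
    # Mirrors the Kafka 0.7 checks quoted above: entries are comma-separated
    # and each must be brokerId:brokerHost:brokerPort (3 colon-separated parts).
    entries = broker_list.split(",")
    if len(entries) == 0:
        raise ValueError("broker.list is empty")
    for entry in entries:
        if len(entry.split(":")) < 3:
            raise ValueError("broker.list has invalid value")

validate_broker_list("0:myhost:9092")  # passes: id, host, port
validate_broker_list("myhost:9092")    # raises: only 2 parts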
Apparently when writing
props.put("broker.list", "0:" + <host:port>);
It works (I added the "0:" to the original string).
I have found it in section 9 of the quick start guide.
I'm not sure I fully get it; maybe this zero is the partition number, maybe something else (it would be nice if someone could shed some light here).