I am running a ROS publisher/subscriber node which receives a single image from the /image_pub topic, does some processing, and publishes the results on the /results topic. The /image_pub topic is publishing at 20 Hz, but my node runs at only 12 Hz (I measured it with rostopic hz /results). Is there any way to improve the speed or tell my program to run at 20 Hz? At first it was running at 20 Hz. Then I shut down my Linux machine for lunch, came back, and restarted my program; now it runs at 12 Hz. I have restarted it again and again, but it still runs at 12 Hz. Any solution?
If your image processing takes longer than 1/20 second, then there is no way you can achieve 20 Hz. If that is not the case, then a main loop like the following will do the job:
ros::Rate publish_rate(20); // target loop frequency: 20 Hz
while (ros::ok())
{
    ros::spinOnce(); // let the image subscriber callback run
    // do some processing
    publisher.publish(image);
    publish_rate.sleep(); // sleep for whatever is left of the 50 ms cycle
}
The ros::Rate will make sure to sleep for exactly the amount of time left in each cycle, so the loop runs at 20 Hz. Also make sure to compile in Release mode (catkin_make -DCMAKE_BUILD_TYPE=Release), as this will speed up your code by a good margin.
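If your node is written in Python rather than C++, the same pattern looks like this in rospy. This is a minimal sketch, not your actual node: the topic names match your question, but process_image and the message type are placeholders.

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image

latest_image = None

def image_callback(msg):
    # keep only the most recent frame; processing happens in the main loop
    global latest_image
    latest_image = msg

def process_image(msg):
    # placeholder for your actual processing
    return msg

if __name__ == '__main__':
    rospy.init_node('image_processor')
    pub = rospy.Publisher('/results', Image, queue_size=1)
    rospy.Subscriber('/image_pub', Image, image_callback)
    rate = rospy.Rate(20)  # target 20 Hz
    while not rospy.is_shutdown():
        if latest_image is not None:
            pub.publish(process_image(latest_image))
        rate.sleep()  # no spinOnce needed; rospy runs callbacks in background threads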
I am writing an external bootloader for the STM32F730Z8. (Why? I need one Windows program that can run the bootloader for the STM32, or use the STM32 to reprogram a connected ATF1508, for my client.) I've done this before, using the info in AN3155 and AN2606. On lesser CPUs (e.g. the STM32L4P5) this posed no difficulty. In this case, I try the same:
1-Cycle \RESET & BOOT0 to boot into supervisor mode.
2-Autobaud successfully.
3-Send 0x00 to get the list of commands: succeeds.
4-Send 0x01 to get the version and protection: succeeds (version 49, rp and nt both 0).
5-Send 0x02 to get the chip ID (0x0452): succeeds.
6-Send 0x73 to write-unprotect the flash: succeeds (i.e. I receive back two ACKs).
7-Send 0x44 to begin an extended erase (intending to erase only sector 0).
This is where it fails. I get neither ACK nor NACK; it just times out. I don't even get to the second half of the extended-erase command, where I send the sector info. (On the STM32L4P5 this step succeeds easily, and it goes on to finish erasing, then to write code successfully.)
I've tried very long waits and retry loops waiting for the ACK (many minutes). From past experience this should be fast; it is only the second stage, where I tell it how much flash to erase, that takes any significant time.
I've inspected the protection option areas of memory, at 0x1FFF0010 and 0x1FFF0018, and they are unprotected, as per factory defaults.
I'm communicating over an FT231XS-R, using the D2XX driver calls. I can mess with the baud rates and such, but that only prevents the autobaud, and we're autobauding fine (9600/8/1/E). I've played with the D2XX SetTimeouts; set too hastily, that only breaks the earlier commands. I'm wired to a 20 MHz crystal, and the application runs at 200 MHz, but my understanding is that the bootloader just runs at the internal RC clock rate.
I'm certainly missing something stupid, but I didn't see it in the documentation. Help?
Jeff Casey / Rockfield Research Inc. / Las Vegas, NV
Fixed, disregard.
The fine print of AN3155 clued me in. The description of the Write Unprotect command says that a system reset will be performed after completion. How did I miss this on the STM32L4P5? I just didn't read it. But why did it work then? In the really fine print on the next page, in a footnote to the flowchart, it says that they were just foolin': a system reset is only performed for some devices (list omitted), and for other STM32 products no system reset is performed.
My earlier success had the following sequence:
reboot-supervisor
autobaud
get
gvrp
gid
wpun
xerase
wpun
write
verify
reboot-user
Obviously that doesn't work for the F730. What works is:
reboot-super
autobaud
get
gvrp
gid
wpun
reboot-super
autobaud
get
gvrp
gid
xerase
reboot-super
autobaud
get
gvrp
gid
write
verify
reboot-user
(Obviously I can skip a few of the repeated steps, like get-id, but basically it needed a reboot and re-autobaud.)
Note that I had to reboot-super a third time: the write attempt timed out after the xerase unless I went through the whole sequence again. Funny, though, the spec doesn't say anything about resetting after an erase. I cross-posted this question on the STM32 community site, and I'll do the same with this answer and ping them on this.
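For anyone hitting the same wall, here is a rough host-side sketch of the byte-level exchange. It uses pyserial instead of the D2XX calls from the post, and the port name, timeout, and helper functions are illustrative only; the command bytes and framing are from AN3155.

import serial

ACK = 0x79  # AN3155 acknowledge byte

def send_cmd(port, code):
    # every AN3155 command is the code byte followed by its complement
    port.write(bytes([code, code ^ 0xFF]))
    reply = port.read(1)
    return len(reply) == 1 and reply[0] == ACK

def autobaud(port):
    port.write(b'\x7F')  # sync byte; the bootloader measures the baud rate from it
    return port.read(1) == bytes([ACK])

port = serial.Serial('COM3', 9600, parity=serial.PARITY_EVEN, timeout=2.0)

autobaud(port)
send_cmd(port, 0x73)   # Write Unprotect (first ACK)
port.read(1)           # second ACK; on the F730 the chip then resets itself

# the bootloader is gone after the reset: cycle \RESET & BOOT0 again,
# then re-sync before any further command
autobaud(port)
send_cmd(port, 0x44)   # Extended Erase
# second half: sector count minus one (2 bytes), sector numbers (2 bytes each),
# then an XOR checksum over those bytes; here, just sector 0
payload = bytes([0x00, 0x00, 0x00, 0x00])
checksum = 0
for b in payload:
    checksum ^= b
port.write(payload + bytes([checksum]))
port.read(1)           # ACK arrives when the erase completes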
Thanks for reading, cheers. Jeff
Good morning,
I have created a callback in Dash that does the job of a scheduler.
Every 10 minutes (with the help of an Interval component), my callback runs to fetch data from a server and update the CSV file that I use in my app.
The problem is that my callback is called only while I have the webpage open. As soon as I close the page, the scheduler stops, and it runs again only when I open the page again.
As the data-update process can sometimes take a long time, I want the scheduler to always run and fetch the data every 10 minutes.
I assume that a callback is a client-side process, right? So how can I make it run on the server side?
Thank you,
Dash is probably not the right solution for this. I think it would make more sense to put the Python code you need for this job in a simple .py script file, and set up a cron job to run that script every 10 minutes.
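For example, a crontab entry along these lines would do it (the interpreter and script paths are placeholders):

*/10 * * * * /usr/bin/python3 /path/to/fetch_data.py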
Thank you @coralvanda for the help.
I finally wrote a Python script in my container A that restarts container B every 10 minutes; container B does the data fetching.
It does the job.
import schedule
import time
import docker

def worker_restart():
    # restart the worker container, which fetches the data when it starts
    client = docker.from_env()
    container = client.containers.get('container_worker')
    container.restart()

schedule.every(10).minutes.do(worker_restart)

while True:
    schedule.run_pending()
    time.sleep(1)
I'm having an issue configuring the passage of time in an AnyLogic model: I would like every tick of model time to correspond to 5 minutes at 1x.
To be clear, everything I did was done on the project components shown in the "Projects" tab.
Reading guides and manuals, I saw that by clicking on the project root I could set the model time unit to minutes, which lets me run it at 1 minute per tick.
I tried modifying the Simulation experiment options, setting "Real-time with scale" to 5, but when I run the experiment it automatically starts at 5x.
Is there any way to achieve what I need?
Thanks a lot.
P
No matter what, the best way to control this is to do it programmatically:
getEngine().setRealTimeMode(true); // to be sure you are not using virtual mode
getEngine().setRealTimeScale(5); // 5 would be the 5x, otherwise put a different number
For instance, you can run the model at 1x when it starts (in the "On startup" action of your Main properties) and then, with a button or after some time, change it to whatever you want.
I have a simple script that uses music21 to process the notes in a midi file:
import music21
score = music21.converter.parse('http://www.vgmusic.com/music/console/nintendo/nes/zanac1a.mid')
for i in score.flat.notes:
print(i.offset, i.quarterLength, i.pitch.midi)
Is there a way to also obtain a note's voicing / midi program using a flat score? Any pointers would be appreciated!
MIDI channels and programs are stored on Instrument instances, so use getContextByClass(instrument.Instrument) to find the closest Instrument in the stream, and then access its .midiProgram.
Be careful:
.midiChannel and .midiProgram are 0-indexed, so MIDI channel 10 will be 9 in music21, etc. (we're discussing changing this behavior in the next release)
Some information might be missing if you're not running the bleeding edge version (we merged a patch yesterday on this topic), so I advise pulling from git: pip install git+https://github.com/cuthbertLab/music21
.flat is going to kill you, though, if the file is multitrack: if you follow my advice on a flattened stream, you'll just get the last instrument for every note. 90% of the time, people using .flat actually want .recurse().
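Putting this together with the loop from the question, a minimal sketch might look like this (assuming a recent music21 version; the URL is the one from the question):

import music21
from music21 import instrument

score = music21.converter.parse('http://www.vgmusic.com/music/console/nintendo/nes/zanac1a.mid')
for n in score.recurse().notes:
    # find the nearest enclosing Instrument for this note
    inst = n.getContextByClass(instrument.Instrument)
    program = inst.midiProgram if inst is not None else None  # 0-indexed
    if n.isNote:  # .notes also yields Chords, which have no single .pitch
        print(n.offset, n.quarterLength, n.pitch.midi, program)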
I'm trying to configure a RaspberryPi2 to record video data from the camera module to a rosbag. To get the camera working with ROS, I used code I found here: https://github.com/fpasteau/raspicam_node.
This works fine, but I have a problem capturing the data to a rosbag. When capturing in raw mode at a high frame rate, it captures smoothly for a few seconds, then freezes for a few seconds, and keeps alternating between the two.
For instance, I tried capturing a file with 640x480@30FPS, and this is what rosbag info yields:
duration: 2:51s (171s)
size: 2.9 GB
messages: 5049
compression: none [2504/2504 chunks]
types: rosgraph_msgs/Log [acffd30cd6b6de30f120938c17c593fb]
sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
topics: /camera/camera_info 2505 msgs : sensor_msgs/CameraInfo
/camera/image 2504 msgs : sensor_msgs/Image
/rosout 22 msgs : rosgraph_msgs/Log (2 connections)
/rosout_agg 18 msgs : rosgraph_msgs/Log
So if we have 171 seconds of video at 90 FPS, that should give 15390 messages; we only got 2504, which is about 14 FPS. The file itself is 2.9 GB, which means an average writing speed of ~17.5 MB/s. Eventually I found a command to test the write speed of the SD card (dd if=/dev/zero of=~/test.tmp bs=500K count=1024), which says my write speed is about ~19 MB/s on average.
So my questions are:
If the SD writing speed is causing the problem, how come the RaspberryPi can't utilise the full 90MB/s?
Can I tune the RaspberryPi to write quicker to the SD card?
I thought about getting a BananaPi, which comes with SATA, so I could connect a SATA drive and shouldn't run into any write-speed issues. Before making that investment: does anyone have experience with BananaPis? I saw a test here: http://314256.blogspot.co.uk/2014/11/banana-pi-sata-disk-throughput-test.html, which suggests the BananaPi should be able to handle it.
Any other ideas how to make it work on the RaspberryPi?
It looks like raspicam_node publishes images with bgra8 encoding (raspicam_raw_node.cpp#L266), so we need to store 4 * 640 * 480 * 30 bytes/second = 36.86 MB/s.
However ~18 MB/s seems to be pretty much the limit on a Raspberry 2 (microSD card performance comparison).
Instead of trying to save all the raw data, have rosbag store the sensor_msgs/CompressedImage messages from the /camera/image/compressed topic. You can tune the <base_topic>/compressed/jpeg_quality parameter (see compressed_image_transport's dynamic reconfigure parameters), but with the default of 80 you should get around a 30:1 compression ratio, i.e. 1.23 MB/s.
The Raspberry should be able to handle this easily. Given the image quality of the tiny Raspberry camera, you will probably not even perceive any difference in quality.
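Concretely, the recording could look like this (a sketch only: it assumes the compressed topic is actually being published, and the bag file name is a placeholder):

rosbag record -O camera.bag /camera/camera_info /camera/image/compressed

If you need a smaller file still, lowering the JPEG quality via dynamic reconfigure should work, e.g. rosrun dynamic_reconfigure dynparam set /camera/image/compressed jpeg_quality 60 (assuming the compressed publisher exposes its reconfigure server under that namespace).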