If we have a MIDI file and we obtain a .musicxml file from it by some method, such as by using MuseScore, how do we make the whole song sound as if the sustain pedal (of a piano) were held down? Can we do it globally, or must it be added for each note?
I found an answer by taking a piano piece without sustain, adding sustain to the whole song, and then diffing the two .musicxml files. Under this part:
<part id="P1">
<measure number="1" width="125.73">
<print> ... </print>
<attributes> ... </attributes>
<direction placement="above"> ... </direction>
there is this new addition:
<direction placement="below">
<direction-type>
<pedal type="start" line="yes" default-y="-65.00"/>
</direction-type>
<staff>1</staff>
</direction>
There is also one with <staff>2</staff> below it that looks the same as the above, except the staff is changed to 2; that one should be for the bass staff.
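If you don't want to hand-edit every part, a small script can insert that element globally. Here is a minimal sketch (my own, not taken from the diff) that adds a pedal-start direction to the first measure of every part using Python's standard library; the file names are placeholders, it only handles staff 1, and note that ElementTree does not preserve the DOCTYPE header:
import xml.etree.ElementTree as ET

tree = ET.parse("song.musicxml")          # assumed uncompressed .musicxml
root = tree.getroot()

for part in root.findall("part"):
    measure = part.find("measure")        # first measure of the part
    if measure is None:
        continue
    # Build the <direction> carrying the pedal-start mark, as in the diff above
    direction = ET.Element("direction", {"placement": "below"})
    dtype = ET.SubElement(direction, "direction-type")
    ET.SubElement(dtype, "pedal", {"type": "start", "line": "yes"})
    ET.SubElement(direction, "staff").text = "1"
    # Insert before the first <note> so the pedal starts at the top of the piece
    children = list(measure)
    idx = next((i for i, el in enumerate(children) if el.tag == "note"), len(children))
    measure.insert(idx, direction)

tree.write("song_with_pedal.musicxml", encoding="UTF-8", xml_declaration=True)
Repeat the element with <staff>2</staff> where needed for the bass staff, as shown in the diff.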
Is there a possibility to limit the number of selectable images for the media_selection content type? According to the documentation there is none, but maybe there is still a way?
The reason is that I want to allow adding an image to a text, but only one.
Maybe:
<property name="image" type="media_selection">
<param name="maxSelectionAmount" value="1"/>
</property>
There is nothing like that at the moment. What we have implemented in the alphas of 2.0 is a separate single_media_selection content type. This works well for limiting the assigned media to one, but it still doesn't allow restricting the selection to an arbitrary number.
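For the single-image case, the property definition in those 2.0 alphas would look roughly like this (a sketch; the single_media_selection type name is from the alpha releases and the title metadata is only illustrative):
<property name="image" type="single_media_selection">
    <meta>
        <title lang="en">Image</title>
    </meta>
</property>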
I want to simulate an algorithm we wrote in Gazebo. The base robot I am using is the Clearpath Husky, and I need to put a Velodyne VLP-16 lidar on it to extract point clouds. My approach is to make my own custom world and launch the Husky launch file, which spawns the Husky in that custom world. I just want to know the best way to put the lidar on top of the Husky. Can I just modify the world file, or will I need to change the Clearpath files? I downloaded the velodyne_simulator package from Velodyne and it comes with a VLP-16.urdf.xacro. Is there any way I can use that? Any help would be appreciated. Thanks
The Husky allows additional accessories to be added via the husky.urdf.xacro file, indicated by this line in the file: <xacro:include filename="$(arg urdf_extras)" />
The file can be found here: https://github.com/husky/husky/blob/kinetic-devel/husky_description/urdf/husky.urdf.xacro
The final step is to add these lines of VLP-16 code to /path/to/my/file.urdf:
<VLP-16 parent="base_link" name="velodyne" topic="/velodyne_points">
<!-- The 0.2 will most likely have to be adjusted to fit -->
<origin xyz="0 0 0.2" rpy="0 0 0" />
</VLP-16>
Now the Husky can be launched with the VLP-16 in Gazebo.
roslaunch husky_gazebo husky_playpen.launch urdf_extras:=/path/to/my/file.urdf
The answer by #mastash3ff is mostly correct, except there are a few things missing. As per the latest husky package, you can bring up the Velodyne VLP-16 (or any model) by setting the HUSKY_URDF_EXTRAS environment variable to the URDF file you want to include.
export HUSKY_URDF_EXTRAS=/path/to/vlp.urdf
In my case vlp.urdf looked like this; you can follow the same pattern:
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="example">
<xacro:include filename="$(find velodyne_description)/urdf/VLP-16.urdf.xacro"/>
<VLP-16 parent="top_plate_link" name="velodyne" topic="/velodyne_points" hz="10" samples="440">
<origin xyz="0.2206 0.0 0.00635" rpy="0 0 0" />
</VLP-16>
</robot>
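With that file in place and the variable exported, launching as in the earlier answer should spawn the Husky with the lidar. As a quick sanity check (assuming the standard husky_gazebo launch file, with the topic name taken from the macro call above):
export HUSKY_URDF_EXTRAS=/path/to/vlp.urdf
roslaunch husky_gazebo husky_playpen.launch
rostopic hz /velodyne_points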
Hope this helps.
I have created a custom Plone content type in my package i.e. my.product.
I need to integrate working copy support, so that a "published" document (in my case, a published content type) stays online while it is being edited. Basically, I want to take advantage of the 'Working Copy Support (Iterate)' provided by plone.app.iterate to achieve what is explained here. This will give me the ability to check in / check out my changes.
Is this possible in Plone 4 with custom content types using Archetypes? If so, how would one go about it?
I added the following two files inside my.product/my/product/profiles/default folder and it appears to work:
diff_tool.xml
<?xml version="1.0"?>
<object>
<difftypes>
<type portal_type="MyCustomType">
<field name="any" difftype="Compound Diff for AT types"/>
</type>
</difftypes>
</object>
repositorytool.xml
<?xml version="1.0"?>
<repositorytool>
<policymap>
<type name="MyCustomType">
<policy name="at_edit_autoversion"/>
<policy name="version_on_revert"/>
</type>
</policymap>
</repositorytool>
I have never used plone.app.iterate, but here is the generic approach to solving the problem.
Actions are installed by the plone.app.iterate GenericSetup profile. You can see the actions here:
https://github.com/plone/plone.app.iterate/blob/master/plone/app/iterate/profiles/default/actions.xml
Note the available_expr attribute, which tells Plone when to show the action. It points to a helper view with the condition.
The view is defined here:
https://github.com/plone/plone.app.iterate/blob/master/plone/app/iterate/browser/configure.zcml#L7
The checks performed to decide whether the content item is archiveable are here:
https://github.com/plone/plone.app.iterate/blob/master/plone/app/iterate/browser/control.py#L47
Most likely the failure comes from the if not interfaces.IIterateAware.providedBy(...) condition: your custom content type must declare this interface. You can confirm this by putting a pdb breakpoint in checkin_allowed(self) and stepping through it line by line to see what happens with your content type.
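If that check is indeed the culprit, declaring the interface on the content class is usually enough. A minimal sketch for a Plone 4 / Archetypes class (the module path and base class are illustrative, not taken from my.product):
# my/product/content/mycustomtype.py  (hypothetical module path)
from zope.interface import implements
from plone.app.iterate.interfaces import IIterateAware
from Products.ATContentTypes.content.base import ATCTContent

class MyCustomType(ATCTContent):
    """Custom Archetypes content type with working-copy (iterate) support."""
    implements(IIterateAware)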
I am trying to play a sequence of .wav files with the following code. My problem is that sometimes all the files play at the same time, and sometimes they play one after the other; simply put, the xyz.wav files play in a random fashion. Is there a way to Thread.sleep or to wait until one file finishes playing its audio?
<ui:repeat value="#{captcha.imageSpeechFiles}" var="cart" rendered="#{captcha.play}">
<ice:outputMedia source="#{cart}" mimeType="audio/x-wav" player="windows" style="width:0px;height:0px;">
<param name="play" value="true"/>
</ice:outputMedia>
</ui:repeat>
I solved my own question: instead of the ice:outputMedia tag, if we use embed as shown below it works fine.
<ui:repeat value="#{popup.imageSpeechFiles}" var="cart" rendered="#{popup.play}">
<embed id="embdwav" src="#{cart}" autostart="true" hidden="true" />
</ui:repeat>
I'm trying to parse an XML tree file with multiple NSXMLParserDelegate parsers, but I'm running into the following issue.
My XML structure is something like this:
<Object1>
<Name>Ricky</Name>
<Surname>Woodstock</Surname>
<Adress>
<City>Los Angeles</City>
<State>California</State>
<Country>USA</Country>
</Adress>
<Items>
<Item>
<Id>1</Id>
<Description>Sports Bag</Description>
<Price>13.45</Price>
</Item>
<Item>
<Id>2</Id>
<Description>Baseball Cap</Description>
<Price>6.90</Price>
</Item>
</Items>
<Total>20.15</Total>
</Object1>
My issue is that when I change delegates, the new delegate does not start parsing at the top-level tag; instead it begins parsing at the first nested tag.
For example.
I begin parsing the XML with the XMLObject parser, which parses the Object1 element.
When it reaches the Adress tag I set the delegate to XMLAdressParser (and set XMLAdressParser's parent to self for returning), but XMLAdressParser begins parsing at the first nested tag, City.
And it is much the same with the Items and Item tags.
I think this is normal given how the parser works, but that's the question:
Is there any way for the delegate to start parsing at the corresponding tag, i.e. the first one?
Thanks in advance for any help.
Sergio
Rather than using multiple delegates, I suggest using a single delegate and having it distribute the work as you feel appropriate.
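For illustration, here is a rough sketch of that single-delegate approach in Swift (XMLParser is the current name of NSXMLParser); the class and property names are my own, chosen to match the sample XML above, not taken from your code:
import Foundation

// One delegate tracks the current element and routes character data
// into the right place, so no delegate swapping is needed.
final class Object1Parser: NSObject, XMLParserDelegate {
    private var currentText = ""
    private var currentItem: [String: String]?

    var name = "", surname = "", city = ""
    var items: [[String: String]] = []

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String] = [:]) {
        currentText = ""
        if elementName == "Item" { currentItem = [:] }   // start collecting an Item
    }

    func parser(_ parser: XMLParser, foundCharacters string: String) {
        currentText += string
    }

    func parser(_ parser: XMLParser, didEndElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?) {
        let value = currentText.trimmingCharacters(in: .whitespacesAndNewlines)
        switch elementName {
        case "Name": name = value
        case "Surname": surname = value
        case "City": city = value
        case "Id", "Description", "Price": currentItem?[elementName] = value
        case "Item":
            if let item = currentItem { items.append(item) }
            currentItem = nil
        default: break
        }
    }
}
Create one instance, set it as parser.delegate, and call parse(); there is no need to hand off to a second delegate when Adress or Items is reached.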