Is anyone able to tell me how to get input from the brick buttons on the LEGO Mindstorms EV3 brick, for use in a script, with the ROBOTC program ('ROBOTC for LEGO Mindstorms 4.X')? The program gives me suggestions when I type in 'button', like 'buttonLeft' and 'BUTTONTYPE'. I assume it should be possible, because I can find numerous tutorials that use the LEGO Mindstorms Education EV3 Student Edition software, but none that use ROBOTC.
buttonLeft and the other button types are enum values used with the getButtonPress function. If you want to get input, you will have to use a while loop and check the button's status on each iteration.
while (true) {
    if (getButtonPress(buttonUp)) {
        // runs the code here on every iteration while the top button is down
    }
}
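If you want to react once per physical press (instead of on every loop iteration while the button is held), a common pattern is to wait for the release before continuing. A minimal sketch, assuming the enter button and ROBOTC's task main entry point:

task main()
{
    while (true) {
        if (getButtonPress(buttonEnter)) {
            // wait until the button is released so one press is handled once
            while (getButtonPress(buttonEnter)) {}
            // react to the single press here
        }
    }
}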
I am new to Pybricks and have found very little documentation to help answer my own query. I have written what I thought would be a simple program to spin my robot on the spot until the UltrasonicSensor sees something; it will then push forwards. If it is pushed backwards and sees a black line, it should try to swing out of the way.
The following code "works", but its response to the ultrasonic and light sensors is significantly delayed:
#!/usr/bin/env pybricks-micropython
from pybricks.hubs import EV3Brick
from pybricks.ev3devices import Motor, ColorSensor, UltrasonicSensor
from pybricks.parameters import Port
from pybricks.tools import wait
ev3 = EV3Brick()
eyes = UltrasonicSensor(Port.S2)
left_motor = Motor(Port.B)
right_motor = Motor(Port.A)
right_light = ColorSensor(Port.S1)
left_light = ColorSensor(Port.S4)
while True:
    if right_light.reflection() < 50:
        ev3.speaker.say('black')
        left_motor.run(500)
        right_motor.run(-100)
        wait(2000)
        left_motor.run(500)
        right_motor.run(500)
        wait(1000)
    if eyes.distance() > 200:
        left_motor.run(500)
        right_motor.run(-500)
    else:
        left_motor.run(-500)
        right_motor.run(-500)
I can see in the (limited) documentation that you can apparently change motor settings, but I can't find guidance on how to do this (or even whether it would be useful). Any help would be appreciated.
ev3.speaker.say(text) synthesizes speech as it goes. This is fun, but it is very slow. This is especially noticeable in a control loop like yours.
I'd recommend using ev3.speaker.beep() instead. You could even select the frequency based on the reflection value so you can "hear" what the sensor "sees".
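A minimal sketch of that idea, assuming the sensor and speaker objects from the code above (the scaling constants are arbitrary):

reflection = right_light.reflection()  # 0-100 %
ev3.speaker.beep(frequency=200 + 20 * reflection, duration=10)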
So the problem was that I used the "run" command to move the motors. Run appears to have an acceleration and deceleration component.
I used "dc" instead of "run" and the motors instantly respond to the sensor data now.
I'm pretty new to coding MC plugins, and I am currently making one with Skript. I need to be able to find what dimension a player is currently in, for example the nether or overworld.
However, this kind of thing doesn't work as I would expect:
{dimOfPlayer} = dimension of player
send "Dim: %{dimOfPlayer}%" # I get an error: "Can't understand this condition or effect"
Is there any way to actually get the dimension and print it to chat, or test what it is? Thanks!
You can do this by checking what world the player is in. For example, if you create a normal world, there should be these three worlds: "world", "world_nether" and "world_the_end". I think you can then check the dimension of the player like this:
if player is in world "World name":
    # Do stuff
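To actually print it to chat, something along these lines should work (a rough sketch; I have not tested the exact syntax, and the /dim command name is just an example):

command /dim:
    trigger:
        send "Dim: %world of player%" to player

Once you can see the world's name, you can test against it with a condition like the one above.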
I have to simulate traffic on a highway using AnyLogic 8 Learning Edition. What I want to do is control the car speed on a per-road basis: for example, if my car moves from road1 to road2 through CarMoveTo, I want to change its speed when it enters road2. I tried to use the "On enter" and "On exit" actions of the CarMoveTo block with no success, and I even tried the Car API with no success. I think I have missed where the suitable location is to write the following code:
if (getRoad().equals("Road2"))
    setPreferredSpeed(0, MPH);
Any help?
First, I think your getRoad().equals("Road2") may be problematic. According to IntelliSense, getRoad returns a road object, not a string. Try taking out your quotation marks.
To set the speed at a certain road, try one of the following:
1) Use a stop line and, on crossing the line, call your code to set the speed. There is no need to figure out what road you are on, because the stop line will inherently be on the road of interest.
2) Use a road network descriptor, and call your code in its "On enter road" action.
If the move-to only applies to road2, you could also set the speed there. However, if your move-to block gives the car an overall destination that just happens to pass through road2, then this would not be the right place, as it would only be called the first time the car enters the move-to block.
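As a rough sketch of option 2, and of comparing against the road object rather than a string: assuming the road markup element on the canvas is named road2, and that the descriptor's "On enter road" action exposes the entering car and the road (the variable names and types here are assumptions, so a cast may be needed):

// In the road network descriptor's "On enter road" action
if (road == road2) {
    car.setPreferredSpeed(0, MPH);
}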
I am currently using Max/MSP to create an interactive system between lights and sound.
I am using Philips Hue lighting, which I have hooked up to Max/MSP, and I now want to trigger an increase in brightness/saturation on the input of a note from a MIDI instrument. Does anyone have any ideas how this might be accomplished?
I have built this.
I used the [shell] object and then fed an array of parameters into it via a JavaScript file that uses the Hue API. There is a lag time of 1/6 of a second between commands.
JavaScript file:
inlets = 1;
outlets = 1;

var bridge = "192.168.0.100";  // IP address of the Hue bridge
var hash = "newdeveloper";     // API username registered with the bridge
var bulb = 1;
var brt = 200;
var satn = 250;
var hcolor = 10000;

// Receives a Max list: bulb, hue, brightness, saturation, transition time
function list(bulb, hcolor, brt, satn, tran) {
    execute('PUT', 'http://'+bridge+'/api/'+hash+'/lights/'+bulb+'/state', '"{\\\"on\\\":true,\\\"hue\\\":'+hcolor+', \\\"bri\\\":'+brt+',\\\"sat\\\":'+satn+',\\\"transitiontime\\\":'+tran+'}"');
}

function execute($method, $url, $message) {
    outlet(0, "curl --request", $method, "--data", $message, $url);
}
To control Philips Hue you need to issue calls to a RESTful HTTP-based API (see http://www.developers.meethue.com/documentation/core-concepts), using the [jweb] or [maxweb] objects: https://cycling74.com/forums/topic/making-rest-call-from-max-6-and-saving-the-return/
Generally, however, to control lights you use DMX, the standard protocol for professional lighting control. Here is a somewhat lengthy post on the topic: https://cycling74.com/forums/topic/controlling-video-and-lighting-with-max/ (scroll down to my post from April 11, 2014, 3:42 AM).
How to change the bri/sat of your lights is explained in the following link (registration/login required):
http://www.developers.meethue.com/documentation/lights-api#16_set_light_state
You will need to know the IP address of your Hue bridge, which is explained here: http://www.developers.meethue.com/documentation/getting-started, and a valid username.
Also bear in mind the performance limitations. As a general rule you can send up to 10 lightstate commands per second. I would recommend having a 100ms gap between each one, to prevent flooding the bridge (and losing commands).
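For reference, setting brightness and saturation boils down to a single PUT to the bridge, roughly like this (the IP address, username, and light number are the placeholder values used elsewhere in this thread):

curl --request PUT --data '{"on":true, "hue":10000, "bri":200, "sat":250, "transitiontime":1}' http://192.168.0.100/api/newdeveloper/lights/1/state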
Are you interested in finding out the details of how to map this data from a MIDI input to the Philips Hue lights within Max, or are you already familiar with Max?
Using Tommy b's JavaScript (which you could put into a [js] object), you could, for example, scale the MIDI messages you want to use with the [midiin] and [borax] objects and map them to the outputs you want using the [scale] object. Karlheinz Essl's RTC library is a good place to start with algorithmic composition if you want to transform the data at all: http://www.essl.at/software.html
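As a small, hypothetical example of such a mapping inside a [js] object, a note's velocity could be scaled to the Hue brightness range before being sent on to the bridge:

// Hypothetical helper: scale a MIDI velocity (0-127) to a Hue brightness (0-254)
function velocityToBri(velocity) {
    return Math.round((velocity / 127) * 254);
}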
+1 for DMX light control via Max. There are lots of good max-to-dmx tutorials and USB-DMX hardware is getting pretty cheap. However, as someone who previously believed in dragging a bunch of computer equipment on stage just to control a light or two with an instrument, I'd recommend researching and purchasing a simple one channel "color organ" circuit kit (e.g., Velleman MK 110). Controlling a 120/240V light bulb via audio is easier than you might think; a computer for this type of application is usually overkill. Keep it simple and good luck!
I have a Perl script which listens on a port and filters messages and, based on them, proposes either to take action or to ignore the event.
I'd like to make it show a notification window (not a dialogue window) with buttons 'take action' and 'ignore', which would disappear after a certain timeout.
So far I have something like this:
my @react = ("somecommand", "someoptions");  # based on what regex a message matched
my $cmd = "xmessage";
my $cmd_args = "-print -timeout 7 -buttons React,Dismiss $message";  # raw message from port
open XMSG, "$cmd $cmd_args |" or die "cannot run xmessage: $!";
while (<XMSG>) {
    if ($_ eq "React\n") {
        system(@react);  # do something, e.g. run the proposed reaction command
    }
}
But it would handle only one notification at a time, and the next message would not appear until the previous one was dismissed, reacted to, or timed out, so it's quite a bad solution. I cannot do anything until I get the return code from xmessage, and I can't get xmessage to run a command. Well, I probably could if I introduced event IDs and listened on a socket where xmessage prints, but that would make things too complicated, I guess.
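(I realise I could fork a child per message so that several xmessage windows can be open at once, roughly as in the sketch below, but that still means rolling my own notification handling.)

# rough sketch: one child per incoming message
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {                 # child: show the notification and react
    open my $xmsg, "$cmd $cmd_args |" or die "cannot run xmessage: $!";
    while (<$xmsg>) {
        system(@react) if $_ eq "React\n";
    }
    exit 0;
}
# parent: goes straight back to listening on the port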
So I wonder: is there a library or a utility for Linux to draw notify-like windows with buttons, each of which would trigger a command?
I'm sorry I didn't see this one when it was first posted. There are several GUI toolkits which could do something along these lines. Prima is a toolkit built especially for Perl and has no external library dependencies.
For when you just need a popup dialog, there is the Ask module, which delegates the task of popping up windows to any available library.
In case anyone's interested, I ended up writing a small Tcl/Tk program for that. The full code (all 48 lines) can be found here: http://cloudcabin.org/read/twobutton_notify; you can ignore the text in Russian around it.