Pure Data MIDI Toggle Switch

I am looking to build a tool that allows me to toggle certain GEMheads in my patch on and off based on MIDI pad input. Pressing one pad would turn on one render chain, and pressing a different pad would activate another render chain. Pressing the MIDI pad for any active render chain should disable its rendering.
I'm having quite a bit of trouble building this into my current patches, which look like this:
[screenshots: pad_control and template patches]
I grab the MIDI note number and use it to generate a one or a zero; however, I'm not certain of the logic to keep the output at 1 when pressing different MIDI pads, or how to add the toggle functionality. As of now, only one chain can be active at any given time.
Any help is appreciated, thank you!

I think that's basically what you are after, with the toggle representing the MIDI pad. The change object is only there to protect against cases like a keyboard, where repeated strokes are sent; it's probably not necessary for a MIDI pad, but it doesn't hurt either.
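In case a text version of the logic helps, here is the same per-pad toggle behaviour sketched in Python with the mido library (the note numbers 36-39 are placeholders; substitute whatever your pads actually send):

    import mido  # pip install mido python-rtmidi

    # Placeholder pad note numbers; substitute what your pads send.
    PAD_NOTES = [36, 37, 38, 39]
    active = {note: False for note in PAD_NOTES}  # one independent toggle per pad

    with mido.open_input() as port:  # opens the default MIDI input
        for msg in port:
            # React only to real presses: ignore note_off and zero-velocity
            # note_on messages so a press-and-release flips the toggle
            # exactly once.
            if msg.type == 'note_on' and msg.velocity > 0 and msg.note in active:
                active[msg.note] = not active[msg.note]  # flip this pad only
                state = 1 if active[msg.note] else 0
                print('pad', msg.note, '-> gemhead state', state)

The point is that each pad owns its own state, so toggling one chain never touches the others; in patch terms, that is one toggle per pad, each fed by its own select on the note number.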

For those wondering, this is what my solution looks like, tuned specifically to my MIDI settings:
[screenshot: controller_1 patch]
Hope this helps anyone else!

Related

GTK prevent custom widget from grabbing focus

I've implemented a musical keyboard as a subclass of Fixed, where each individual key is a subclass of DrawingArea, and so far it works great: custom drawing code in expose, press+release functionality working... kind of. See, here's the problem: I want the user to be able to drag the mouse across the keyboard with the button held down to play it. I currently capture the button press and release signals, as well as enter and leave notify. Unfortunately, this doesn't quite work because the widget seems to grab focus of the mouse as soon as the mouse is pressed over it. This makes sense for normal buttons, but not for a musical keyboard. Is there any good way to remedy this other than rewriting the entire keyboard to be one massive DrawingArea?
Also, it shouldn't matter, but in case it does I'm using GTK#.
You might consider using GooCanvas: you can represent each of the keys as CanvasPolylines and fill them with the colors you need. Each canvas item behaves much like a widget for input purposes, so you can act on events like enter, leave, button-pressed etc.
This method seems to make more sense (to me) than separate DrawingAreas. As each drawn element is still accessible, you can even change colors/sizes and other properties dynamically. Also, Polyline lets you make more complex shapes.
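If it helps, here is a minimal sketch of that approach in Python, assuming the GooCanvas 2.0 introspection bindings are installed (CanvasRect is used for brevity; CanvasPolyline would let you draw the L-shaped white keys, and the note numbering is purely illustrative):

    import gi
    gi.require_version('Gtk', '3.0')
    gi.require_version('GooCanvas', '2.0')
    from gi.repository import Gtk, GooCanvas

    def on_enter(item, target, event):
        # Enter/leave notifications fire per item even while a button is
        # held down, which is what makes drag-to-play possible: no single
        # key ever grabs the pointer.
        print('entered key', item.note)
        return True

    canvas = GooCanvas.Canvas()
    canvas.set_size_request(7 * 40, 120)
    root = canvas.get_root_item()

    for i in range(7):  # seven white keys drawn as plain rectangles
        key = GooCanvas.CanvasRect(parent=root, x=i * 40, y=0,
                                   width=40, height=120,
                                   fill_color='white', stroke_color='black')
        key.note = 60 + i  # stash the MIDI note on the item
        key.connect('enter-notify-event', on_enter)
        key.connect('button-press-event',
                    lambda it, tgt, ev: print('pressed key', it.note))

    win = Gtk.Window()
    win.connect('destroy', Gtk.main_quit)
    win.add(canvas)
    win.show_all()
    Gtk.main()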

How to access Face Manipulation Mode?

I am fairly new to Blender and I am trying to join objects together on Blender for a simulation. I have researched for an answer, and found one source which seemed to work best with what I was trying to do. I have been using the answer given on this question. I have switched to Object Mode, selected the objects, and pressed Ctrl+J to join them. I am then supposed to enter Edit Mode, and then Face Manipulation Mode. I do not know how to access Face Manipulation Mode, or Vertex Manipulation Mode, and cannot find any online resource showing me how to access it. Does someone know what hotkeys I can press / tabs I can open to get to this?
Use the Tab key to switch between Object Mode and Edit Mode.
"Face manipulation" mode is not really a thing, just select a face (RMB while in edit mode) and manipulate it just like anything else. Make sure that the face selection is enabled (three little buttons on the horizontal bar below the 3d view let you modify the selection possibilities to vertex, edges, and/or faces. (They look like icons with selected those-things on them, respectively)

What application has mouse control?

One way users can cheat with games (desktop or web) is by having "robots" monitor the screen and move the mouse for them. Is there a way (of course with transparency and user permission) to monitor if an application is controlling the mouse? I am primarily interested in a windows app, but if there is a way for other OS's that would be useful to know as well.
Thanks!
There shouldn't be. Any sensibly designed UI layer will only pass input events (mouse, keyboard, etc.) to applications. Those events will typically not include information about how the event was generated (you're not supposed to care, so why pay for that overhead?).
One way might be to scan the system for processes having names of known "event-fakers", much like some anti-virus programs blacklist applications by name.
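A toy version of that scan in Python using psutil (the names in the blacklist are just examples of well-known automation tools, not a real detection list):

    import psutil  # pip install psutil

    # Example names only; a real product would ship a maintained list.
    BLACKLIST = {'autohotkey.exe', 'autoit3.exe'}

    for proc in psutil.process_iter(['name']):
        name = (proc.info['name'] or '').lower()
        if name in BLACKLIST:
            print('possible event-faker running:', name, 'pid', proc.pid)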
On Windows you can add a hook to monitor for injected keyboard or mouse messages, and remove them if you like. But I'm not sure if you can find the source of the messages.
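Here is a sketch of that hook in Python via ctypes: a low-level mouse hook (WH_MOUSE_LL) whose callback inspects the LLMHF_INJECTED flag, which Windows sets on events synthesized through SendInput or mouse_event. Returning nonzero from the callback removes the event; note the flag only tells you that the event was injected, not which process injected it:

    import ctypes
    from ctypes import wintypes

    user32 = ctypes.windll.user32

    WH_MOUSE_LL = 14
    LLMHF_INJECTED = 0x00000001

    class MSLLHOOKSTRUCT(ctypes.Structure):
        _fields_ = [('pt', wintypes.POINT),
                    ('mouseData', wintypes.DWORD),
                    ('flags', wintypes.DWORD),
                    ('time', wintypes.DWORD),
                    ('dwExtraInfo', ctypes.c_void_p)]

    HOOKPROC = ctypes.WINFUNCTYPE(ctypes.c_ssize_t, ctypes.c_int,
                                  wintypes.WPARAM, wintypes.LPARAM)
    user32.SetWindowsHookExW.restype = ctypes.c_void_p
    user32.CallNextHookEx.argtypes = (ctypes.c_void_p, ctypes.c_int,
                                      wintypes.WPARAM, wintypes.LPARAM)

    def mouse_proc(n_code, w_param, l_param):
        if n_code >= 0:
            info = ctypes.cast(l_param, ctypes.POINTER(MSLLHOOKSTRUCT)).contents
            if info.flags & LLMHF_INJECTED:
                print('injected mouse event at', info.pt.x, info.pt.y)
                return 1  # nonzero swallows (removes) the event
        return user32.CallNextHookEx(None, n_code, w_param, l_param)

    proc = HOOKPROC(mouse_proc)  # keep a reference so it is not garbage collected
    hook = user32.SetWindowsHookExW(WH_MOUSE_LL, proc, None, 0)  # hMod ignored for LL hooks

    msg = wintypes.MSG()  # low-level hooks need a message loop on this thread
    while user32.GetMessageW(ctypes.byref(msg), None, 0, 0):
        pass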
Just an idea: poll the current mouse position and check for impossibly fast position changes, like jumping from (10, 15) to (1000, 400) in one step. Most robots just set a new position outright and don't imitate human mouse movement.
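A crude version of that idea in Python (GetCursorPos is the real Win32 call; the 20 ms sampling interval and 500-pixel threshold are arbitrary values you'd have to tune):

    import ctypes
    import math
    import time
    from ctypes import wintypes

    user32 = ctypes.windll.user32

    def cursor_pos():
        pt = wintypes.POINT()
        user32.GetCursorPos(ctypes.byref(pt))
        return pt.x, pt.y

    last = cursor_pos()
    while True:
        time.sleep(0.02)                # roughly 50 samples per second
        cur = cursor_pos()
        if math.dist(last, cur) > 500:  # jumped too far in one sample
            print('suspiciously fast jump:', last, '->', cur)
        last = cur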

What Icon is Appropriate For Sorted by Time?

I have a duplicate bridge scoring application with two different sorting modes.
Let me fill you in on how duplicate bridge works so you have an idea what I'm looking for. You sit and play a few hands of bridge against one set of opponents. Then you and the boards move and you play a few more hands of bridge against a new set of opponents. Repeat until the end of the night (usually around 24 boards). You don't necessarily play the boards in order. For instance you may play 1-3, 7-9, 13-15, ..., and eventually 4-6. Other people play them in a different order.
So now for the two sorting modes. There's sort by board order (fairly easy to come up with an icon like the "1-24" I settled on) and there's sort by order played.
Which of these choices is appropriate?
A. A clock
B. A calendar
C. Something else
P.S. I remember reading an article a while back about how using a clock for this would be cause for rejection, but haven't been able to find it.
Thanks in advance for any help/suggestions!
I think a down arrow with a clock.
An hourglass (it saves you from the clock metaphor) with a horizontal arrow, as time is more likely to be perceived as a horizontal flow.

Things to consider when writing for touch screen?

I'm starting a new project which involves developing an interface for a machine that measures wedge and roundness of lenses and stores the information in a database and reports on it. There's a decent chance we're going to be putting a touch screen on this machine so that it doesn't need to have a mouse or keyboard...
I don't have any experience developing for full size touch screens, so I'm looking for advice/tips/info from you guys...
I can imagine you want to make the elements a little larger than normal... space buttons out a bit more... things like that... Anyone have anything else to add?
A few things to consider:
You need to account for parallax error when touching controls. Basically, the user may touch the screen above or below your actual control and therefore miss it. This is a combination of the size of the control (eg you can make the active area larger than the visual control to let the user miss and still activate it), the viewing angle of the user (which you may or may not be able to predict/control) and the type of touch screen you're using. If you know where the user will be placed relative to the screen when using it, you can usually accommodate this with appropriate calibration.
Depending on the type of touch screen, you may need to ensure that your users aren't wearing gloves or using an implement other than their fingers (eg the end of a pen) to touch the screen. Some screens (eg those depending on conductance) don't respond well to anything other than flesh and blood.
Avoid using double clicks because it can be very hard for users to reliably double click a control. This can be partly mitigated if you've got experienced/trained users working in a fairly controlled environment where they're used to the screens.
Linked to the above, if you are using double clicks, you may find the double click activated when the user only wants to single click. This is because it's very easy for the user's finger to bounce slightly on touching the screen and, depending on how sensitive the double click settings are, trigger a double rather than a single click. For this and the previous reason, we always disable double clicks and only use single clicks (or similar single activation controls).
However big you think you need to make the controls to allow for touch activation, they almost certainly need to be bigger still. Make sure you test the interface with real users in the real deployment environment (or as close to it as you can get). For example, we deployed some screens with nice big buttons you couldn't miss only to find that the control room was unheated and that the users were wearing thick gloves in the middle of winter, making their fingers way bigger than we had allowed for.
Don't put any controls near the edges of the screen - it's very hard to get your finger into the edges (particularly if the screen has a deep bezel) and a slight calibration problem can easily shift the control too close to the edge to use. Standard menus and scroll bars are a good example of controls that can be very tricky to use on a touch screen and you should either avoid them (which is preferable - they're not good for touch screens) or replicate them with jumbo equivalents.
Remember that the user's hand will be over the screen, obscuring some of the screen and controls (typically those below where the user is touching, but it depends on the position of the user relative to the screen). Don't put instructions or indicators where the user's hand or arm will obscure them when trying to use the control they relate to (eg typically put them above rather than below the control).
Depending on the environment, make sure your touch screen is suitably proofed against dust, damp, grease etc and make sure it's easy to clean without damaging it. You wouldn't believe the slime that can quickly accumulate on a touch screen in an industrial or public setting.
The other obvious one is that there's no equivalent of pointer 'hover'. Not that that affects many apps though.
If you decide to put in analog controls (scrollbars, rotation widgets, etc) be sure to put in a digital control also. Some companies think that a touch screen means perfect control over something with your fingers. In real life, this translates to minutes of frustration trying to fix a number that's just a little off.
The most obvious thing is that everything on the GUI needs to be big enough for a fingertip to hit, which is sometimes bigger than you think.
As has been mentioned, there's really no way for a right-click action to happen. Also, double-clicking can be tricky with a fingertip on a touch screen.
The other major thing is that you'll want to create an on-screen keyboard that pops up for text entry, and an on-screen numpad for number-only fields.
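For flavour, here is a bare-bones Tkinter numpad that pops up when a number field is clicked; purely illustrative, since a real on-screen keyboard also needs backspace, layout and dismissal handling:

    import tkinter as tk

    def open_numpad(entry):
        if getattr(entry, 'pad', None) and entry.pad.winfo_exists():
            return  # only one pad at a time
        pad = entry.pad = tk.Toplevel(entry)
        pad.title('Numpad')
        keys = ['7', '8', '9', '4', '5', '6', '1', '2', '3', '0', '.', 'OK']
        for i, key in enumerate(keys):
            cmd = pad.destroy if key == 'OK' else (lambda k=key: entry.insert(tk.END, k))
            # Chunky buttons: finger-sized, per the advice above.
            tk.Button(pad, text=key, width=4, height=2,
                      command=cmd).grid(row=i // 3, column=i % 3)

    root = tk.Tk()
    field = tk.Entry(root, font=('Sans', 18))
    field.pack(padx=10, pady=10)
    field.bind('<Button-1>', lambda event: open_numpad(field))  # a touch arrives as a click
    root.mainloop()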
I wrote my own set of controls for a POS application designed specifically to be touchscreen friendly.
Remember to allow enough real estate for stubby fingers and talons. In our application the users can have manicures that force them to use the pad of their finger instead of the tip. This means that you need to allow more space for activation areas than you would normally consider in any other type of application.
I would also recommend accommodating yourself as a programmer, both from a testing standpoint and because things change: there may need to be a keyboard/mouse attached, or the app may have to run on a non-touch workstation. I cannot tell you how many times I went to touch my flat panel LCD expecting something to happen, before remembering that I had to use the mouse.
Make sure to read up on your basic UI principles, like Fitts's law (the time to acquire a target is a function of the distance to and size of the target).
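In its common Shannon formulation that is T = a + b * log2(D/W + 1), where D is the distance to the target and W its width: the practical upshot is that frequently used touch targets should be big and close to where the user's finger already is.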
Also consider whether or not the device is stationary when it is in use (e.g., handheld like a Palm Pilot or iPhone); research shows that you must accommodate that in your design.
Larger GUI elements are the major thing. But it applies to all elements: scroll bars, tabs and even text fields.
The other major thing that I can think of: it's hard for the user to right click. So things that require a right click should be avoided; context menus are the only thing that comes to mind at the moment.
The other responses are pretty good, but are you totally sure that a touch screen would actually be easier to use? There are a lot of devices where a touch screen actually makes them much harder to use, not easier. The main problem is that you can't use the device when you're not looking at it. If users are going to be doing a lot of repetitive actions, a keyboard could be a lot more efficient.
Also, a touch screen might be a lot harder to use by someone with a disability, if you think there's even a small chance that could happen.
Even though this is quite old now, I found it still useful as a starting point for design considerations.
http://www.sapdesignguild.org/resources/tsdesigngl/index.htm
If you've not already done so, have a look at some of the documentation available for developers on mobile platforms, eg Windows Mobile, iPhone.