Use Z-order and position to organize open forms in MS Access

For MS Access 2010, I need a way to flexibly maintain the position and Z-order when a dozen forms are open. There can be multiple instances of the Parent form, and each one can lead to multiple instances of the Child form (some background here).
I want the user to be able to choose which form is top-most -- which means I don't want any forms set as Popup. Also, I want the Z-order essentially preserved when a new Child opens. As the Child opens, the Parent loses the focus; at that point I'd like the Parent to drop back to its former position in the Z-order. I could add requirements along these lines, but you get the idea ... I imagined a default behavior might do what I want, but if I have to assign Z-order locations from an array or something like that, I could accept that.
I also want to control the on-screen position of the Child forms (I mean only when they are first opened; they can be repositioned later). If they open with the same X,Y coordinates, they'll appear stacked on top of each other, and the user will have to reposition the top instance in order to see the others. That is inconvenient and, more importantly I think, disorienting.
So far I'm not able to have it all. I can get a nice cascade result by specifying X,Y positions, but it stops working when I use the flags to poke at the Z-order.
I've been using the API...
Declare Sub SetWindowPos Lib "user32" ( _
    ByVal Hwnd&, _
    ByVal hWndInsertAfter&, _
    ByVal X&, ByVal Y&, ByVal cX&, _
    ByVal cY&, ByVal wFlags&)

Global Const HWND_TOP = 0
Global Const HWND_TOPMOST = -1
Global Const SWP_NOSIZE = &H1

' Place the form at (lngPosX, lngPosY) without changing its size.
SetWindowPos Hwnd, HWND_TOP, lngPosX, lngPosY, 0, 0, SWP_NOSIZE
I get different results when I try different options for hWndInsertAfter& and wFlags&, and also when I set forms as Popup (the results are better then, but as mentioned, I want the user to be able to bring any form to the top; therefore no Popup).
(Hmm... I bet Popup (and Modal) are precisely what bring the API into best usage, because while a "must-answer" dialog is showing, control basically reverts to Windows. Confirm?)
My biggest frustration is that documentation for the API seems fragmentary and incoherent. And I wonder, am I stuck with that API? Is there something else I can use? I'd love a VBA solution apart from the API, but I guess this is what the API is for. Still, is there a method I'm missing?
I can post my variant attempts in more detail, but I feel I've been shooting in the dark, so I will wait on your feedback.
Update
I tried Reading The Manual. I tried twiddling with "form ownership" and with HWND_NOTOPMOST/HWND_TOPMOST. For the Child form, I still have to choose between:
Being able to set the position upon opening
Being able to bring the Parent form back "on top" of the Child

Sorry for the late answer! I bumped into this while searching for a related issue.
One way to manage Z-order 'Access-only' is to use Form.SetFocus. The general solution outline:
Keep an array or collection of your form names and their Z-orders
When Z-order changes:
Resort your list to reflect the new Z-order
Turn screen updating off: Application.Echo False
Iterate through your list of forms in reverse Z-order. Use Form.SetFocus for each form. This will put the highest form on top.
Turn screen updating back on: Application.Echo True
This should work as long as all of your forms are non-modal.
If you need modal forms, be aware that they are by default on top, and you can only have one modal form open at a time. You can still use the above logic, just be sure to set Form.Modal = False for every form not on the top.
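A minimal VBA sketch of that reverse-order SetFocus loop, assuming a module-level array aFormNames() already sorted from the top of the Z-order down (the array and procedure names are just illustrative):

Public Sub RestackForms(aFormNames() As String)
    Dim i As Long
    ' Suspend repainting so the user doesn't see each form flash forward.
    Application.Echo False
    On Error GoTo Cleanup
    ' Walk the list bottom-up; the final SetFocus leaves the top form in front.
    For i = UBound(aFormNames) To LBound(aFormNames) Step -1
        Forms(aFormNames(i)).SetFocus
    Next i
Cleanup:
    Application.Echo True
End Sub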
This is the 'how' answer, but I can't offer advice as to whether this is a sound approach for your application.

I believe the solution doesn't exist, or isn't worth pursuing, because it would lean on Windows API libraries that may not be available in a few years. (This pessimism is not based on specific insights, but in general I see big pressures on the Windows user interface, so it's easy to imagine things shifting.)
I see some other hazards. Users will open numerous windows; resources will fail at some point, and probably before then they'll have lost any advantages from a human analytical point of view. Nonetheless they'll continue past the point of diminishing returns. Also I can expect to find a few pitfalls that gobble development time and lead in the end to complaints no matter how much time I spend mitigating them. Think "multi-user" and you'll know what I mean.
Instead, I need to re-think the approach. The application offers complicated and sometimes voluminous information. There's a way to do it. Not this way.
I might delete this OP, but it's gotten three up-votes, so I'll wait and see what you think. I can always punt to community wiki.


hyperHTML for 10,000 Buttons

I created a test page where I'm using hyperHTML to showcase 10,000 buttons. The code is a little large to post onto stackoverflow, but you can view source on this page here to see the code (expect delay after clicking).
hyperHTML is taking more time than expected to complete its work, which makes me think I'm misusing it.
Any suggested optimizations?
Update: It seems I was using an older version of hyperHTML. The current version is blazing fast on this test.
Update: besides the test not being a real-world use case, there was room for improvement in linearly holed template literals, so the original 7 seconds are now down to roughly 70 ms ... however, the rest still applies: that is not how you use hyperHTML.
I created a test page where I'm using hyperHTML to showcase 10,000 buttons
You are not using hyperHTML properly at all. It's a declarative library that wants you to forget document.createElement, addEventListener, and even setAttribute.
It looks like you are really trying hard to avoid all its utility with this example, and since this is not your first question about hyperHTML, it looks like you are avoiding its documentation and examples on purpose.
In such case, what are you trying to achieve?
The code is a little large to post onto stackoverflow
That code is absolute nonsense, IMO. No sane person would ever write 10,000 buttons inline like you did there, and I bet it was machine-generated.
The code to create 10K buttons in hyperHTML (or one way to do it) fits very easily in this forum:
// bind and wire come from the hyperHTML global (or its ES module)
const { bind, wire } = hyperHTML;

function createButton(content) {
  // one wire per unique id keeps each button's DOM node stable across renders
  return wire(document, ':' + content)`
    <button onclick=${onclick}>${content}</button>`;
}

function onclick(e) {
  alert(`You clicked a button labeled: ${e.target.textContent}.`);
}

const buttons = [];
for (let i = 0; i < 10000; i++)
  buttons.push(createButton('btn-' + i));

bind(document.body)`${buttons}`;
That's it. You can eventually optimize the container that renders such content. To preserve your original demo, you can also add some text content between the buttons; its meaning is very doubtful, but in this specific case it needs just a createTextNode, something again not really needed but the only thing that makes sense for a benchmark. The result is the one shown in this Code Pen, and the execution time here is 19.152 ms, meaning you can show 10,000 buttons at 50 FPS.
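For reference, a hedged sketch of that interleaved text content (the newline separator is an assumption about the demo's layout; hyperHTML accepts DOM nodes inside interpolated arrays):

// interleave a text node with each button, as in the original demo
const nodes = [];
for (let i = 0; i < 10000; i++)
  nodes.push(createButton('btn-' + i), document.createTextNode('\n'));
bind(document.body)`${nodes}`;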
However, showing 10,000 buttons all at once has close to zero use cases in the real world, so you should rather understand what hyperHTML is, what it solves, and how to benefit from it, instead of using it as innerHTML.
hyperHTML is 100% different from innerHTML; the sooner you understand this, the better.
If you need innerHTML, don't use hyperHTML.
If you want to use hyperHTML, forget any DOM operation that is not declarative unless it's really needed, which wasn't the case here at all.

How can I scroll a Clutter.ScrollActor with a scrollbar?

I have a GtkClutter.Embed that holds a complete graph of Clutter actors. The most important actor is container_actor, which holds a variable number of actors (laid out with a FlowLayout) that may overflow the height allocated to the parent Embed.
At some point, the container_actor takes over the stage and becomes the only actor displayed (along with its children).
At this point I would like to be able to scroll through the content of container_actor.
Making my Embed implement Gtk.Scrollable gives me the ability to have a scrollbar. Also, I've noticed that Clutter offers a Clutter.ScrollActor.
Is using those two classes the recommended way to go?
Or do I need to implement Gtk.Scrollable and move my container_actor manually on vadjustment.value_changed?
Edit: here's a sample in C for ScrollActor.
ClutterScrollActor does not know anything about GtkScrollable or GtkAdjustment, so you will have to implement scrolling manually. It's not necessary to implement GtkScrollable — you just need a GtkScrollbar widget, a GtkAdjustment and some code that connects to the GtkAdjustment::value-changed signal to determine the point to which you wish to scroll the contents of the ClutterScrollActor.
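A minimal C sketch of that wiring, assuming the scroll actor and the scrollbar's adjustment already exist (the variable names are illustrative):

static void
on_vadjustment_value_changed (GtkAdjustment *adjustment,
                              gpointer       user_data)
{
  ClutterScrollActor *scroll_actor = CLUTTER_SCROLL_ACTOR (user_data);
  ClutterPoint point;

  /* Scroll the actor's contents to track the scrollbar position. */
  point.x = 0;
  point.y = gtk_adjustment_get_value (adjustment);
  clutter_scroll_actor_scroll_to_point (scroll_actor, &point);
}

/* ... after creating the GtkScrollbar and its GtkAdjustment ... */
g_signal_connect (vadjustment, "value-changed",
                  G_CALLBACK (on_vadjustment_value_changed), scroll_actor);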

Can two panels share a uicontrol in a MATLAB GUI?

I've got a MATLAB GUI that has different aspects of functionality, each with their own panel of uicontrols. When one panel is selected, the other one is set to invisible, and vice-versa. However, they share some of the same inputs in the form of a popup menu. Can I include a 'clone' instance of the menu on the second panel somehow? I'd like to avoid as many redundant callbacks and uicontrols as possible.
I guess that if the uicontrol were a direct child of the figure, you might be able to put it in front of everything.
A much simpler solution is to use the same callback for multiple uicontrols. In the property editor, you can modify the callback name and set it to a common callback function. Additionally, you can create a field (e.g. myPopupH) in the OpeningFcn of the GUI, in which you store the handles of the popups that should behave the same way. Then, in the callback, you'd use hObject, i.e. the first input argument, for all the get calls (to access the modified state of the popup-menu), but you'd use handles.myPopupH in all the set calls, so that you can ensure that both popups always have the same state. Thus, the ui-object may be redundant, but all the code (which is much more critical) only exists in a single copy.
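A minimal GUIDE-style sketch of that pattern (the field name myPopupH and the popup tags are illustrative):

% In the GUI's OpeningFcn: remember the handles of the twin popups.
handles.myPopupH = [handles.popupPanel1, handles.popupPanel2];
guidata(hObject, handles);

% Shared callback, assigned to both popups in the property editor.
function myPopup_Callback(hObject, eventdata, handles)
% Read the new state from the popup the user actually changed ...
newValue = get(hObject, 'Value');
% ... and push it to every clone so both panels stay in sync.
set(handles.myPopupH, 'Value', newValue);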
One place where I routinely use a single callback for multiple UI elements is the close request function, which is accessed from the "Cancel" button as well as from the "X" that closes the figure, and possibly from one of the "File" menu items.

What's a good maintainable way to name methods that are intended to be called by IBActions?

I am creating a function (for example) to validate content; if it is valid, close the view, and if it is not, present further instructions to the user. (Or other such actions.) When I go to name it, I find myself wondering: should I call it -doneButtonPressed or -validateViewRepairAndClose? Would it be better to name the method after the UI action that calls it, or after what it does? Sometimes it seems simple: things like -save are pretty clear cut. Other times (I can't think of a specific example right off), naming them after what they do seems so long and confusing that it feels better to just call them xButtonPressed, where x is the word on the button.
It's a huge problem!!! I have lost sleep over this.
Purely FWIW ... my vote is for "theSaveButton" "theButtonAtTheTopRight" "userClickedTheLaunchButton" "doubleClickedOnTheRedBox" and so on.
Generally we name all those routines that way. However... often I just have them go straight to another routine: "launchTheRocket", "saveAFile", and so on.
Has this proved useful? It has, because often you want to launch the rocket yourself... in that case, call the launchTheRocket routine, versus the user pressing the button that then launches the rocket. If you want to launch the rocket yourself and you call userClickedTheLaunchButton, it does not feel right and looks more confusing in the code. (Are you trying to specifically simulate a press on the screen, or what?) Debugging and so on is much easier when they are separate, so you know who called what.
It has proved slightly useful for example in gathering statistics. The user has requested a rocket launch 198 times, and overall we've launched the rocket 273 times.
Furthermore -- this may be the clincher -- say from another part of your code you are launching the rocket, using the launch-the-rocket message. It makes it much clearer that you are actually doing that rather than something to do with the button. Conversely the userClickedTheLaunchButton concept could change over time, it might normally launch the rocket but sometimes it might just bring up a message, or who knows what.
Indeed, clicking the button may also trigger ancillary stuff (perhaps an animation or the like) and that's the perfect place to do that, inside 'clickedTheButton', as well as then calling the gutsy function 'launchTheRocket'.
So I actually advocate the third even more ridiculously complicated solution of having separate "userDidThis" functions, and then having separate "startANewGame" functions. Even if that means normally the former does almost nothing, just calling the latter!
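A sketch of that split in Objective-C (the method names are illustrative):

// Thin UI-facing handler: does any button-related extras, then defers.
- (IBAction)userClickedTheLaunchButton:(id)sender {
    // e.g. statistics or a button animation could go here
    [self launchTheRocket];
}

// The gutsy routine, callable from anywhere in the code.
- (void)launchTheRocket {
    // ... actual launch logic ...
}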
BTW another naming option would be combining the two... "topButtonLaunchesRockets" "glowingCubeConnectsSocialWeb" etc.
Finally! Don't forget you might typically set them up as an action, which changes everything stylistically.
[theYellowButton addTarget:.. action:@selector(launchRockets) ..];
[theGreenButton addTarget:.. action:@selector(cleanUpSequence) ..];
[thatAnimatingButtonSallyBuiltForUs addTarget:.. action:@selector(resetAll) ..];
[redGlowingArea addTarget:.. action:@selector(tryGetRatingOnAppStore) ..];
perhaps that's the best way, documentation-wise! This is one of the best questions ever asked on SO, thanks!
I would also go with something along the lines of xButtonPressed: or handleXTap: and then call another method from within the handler.
- (IBAction)handleDoneTap:(id)sender {
    [self closeView];
}

- (void)closeView {
    if ([self validate]) {
        // save and close
    }
    else {
        // display error information
    }
}

Event propagation in a Morphic GUI

I have an image in a Squeak Morphic GUI that contains some transparent parts and thus should not accept any mouse events etc., but should just be visible. However, it needs to stay visible in front of other morphs.
That's why I thought it would be useful to propagate the incoming mouse events to the underlying morphs. Does anyone know a solution to my problem, or have another suggestion for solving it?
V <- mouseDownEvent
_____________________________ <- transparent image (BorderedMorph)
_____ _____ _____
_| |___| |___| |__ <- buttons waiting for click and drop events
_____________________________ <- basic morph
I hope that illustrates my problem.
The best thing I can think of is something along the following lines (in increasing order of smoothness, and decreasing order of likelihood to work):
Record the event, tab the transparent image away, and replay the event. This seems like an inefficient and poor way of doing it.
Somehow keep track of what has focus behind your transparent image, and pass the event to it. I'm not familiar with the libraries in question, so I don't know if it's possible to do it like that. If you have control over the other layers, this is most likely the way to go. (You can directly call their 'a mouse event happened' functions with that mouseDownEvent, though you do still have to identify which one would receive it).
Simply declare it as something that doesn't get mouse events passed to it at whatever level is available. OSD windows tend to do this, I'm not sure how. If you can do it this way, I would advise it... but given that you're asking this question, you probably can't.
By default, Morphic mouse events are handled in the top-most morph. However, a parent morph is able to intercept #mouseDown to children using #mouseDownPriority.
Your transparent image gets all the clicks because it is top-most. Take a look at #rejectsEvent:. It just combines #isLocked and #visible to reject events. You may want to override it in order to reject events even while the morph is visible.
For example:
MyMorph>>rejectsEvent: anEvent
    ^ true "Ignores all events."