Embedded MSHTML: mouse wheel ignored

In my VC++ application I have an embedded browser (MSHTML). It works fine and handles the mouse properly (for instance, clicks and selects are processed OK). However, mouse wheel rotations over the embedded browser do not have any effect. This is my problem.
I am not very familiar with the internals of MSHTML embedding, or with OLE in general. This is a wxWidgets application (wxWidgets is a C++ GUI library), and I am using its IEHTMLWin component (which hosts an MSHTML control and wraps it in the wxWindow interface). However, I do have the source and am willing to do some debugging.
Forgetting wxWidgets and speaking purely about OLE and MSHTML, where is the right place to start looking for the cause of the problem? I tried naive googling for variants of "mshtml mouse events" or "mshtml wheel", but that didn't turn up any good pointers.
Should you want to take a look at the code of IEHTMLWin, it can be browsed here. The iehtmlwin.c file (about 1,500 lines) contains all the OLE-related code and implements all the interfaces needed to host a web browser control. It's worth noting that mouse events never reach the containing wxWindow at all (OnMouse is never called).
Update: the MSHTML version is 6.00.2900.3314. Other applications that host this control (including IE) support the wheel.
jdigital's hint (regarding Winspector) was very helpful. After some message sniffing, I realized that the problem is focus-related. A click on the browser control somehow does not set focus on it (unlike, say, a RichEdit control), so WM_MOUSEWHEEL is never sent to it. So the new problem is setting the focus.
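A minimal Win32 sketch of one way to wire that up, assuming the usual "Internet Explorer_Server" class name for MSHTML's input window; the helper names here are hypothetical, not part of IEHTMLWin:

    #include <windows.h>
    #include <cstring>

    static BOOL CALLBACK FindIEServer(HWND hwnd, LPARAM lparam)
    {
        char cls[64] = {0};
        GetClassNameA(hwnd, cls, sizeof(cls));
        if (std::strcmp(cls, "Internet Explorer_Server") == 0) {
            *reinterpret_cast<HWND *>(lparam) = hwnd;   // found it
            return FALSE;                               // stop enumerating
        }
        return TRUE;                                    // keep looking
    }

    // Hypothetical helper: call it from the host's mouse-down handler,
    // passing the HWND of the wxWindow that embeds the browser, so that
    // subsequent WM_MOUSEWHEEL messages are routed to MSHTML.
    void FocusEmbeddedBrowser(HWND hostHwnd)
    {
        HWND ieServer = NULL;
        EnumChildWindows(hostHwnd, FindIEServer,
                         reinterpret_cast<LPARAM>(&ieServer));
        if (ieServer != NULL && GetFocus() != ieServer)
            SetFocus(ieServer);
    }

A cleaner, pure-OLE alternative would be to UI-activate the control from the container via IOleObject::DoVerb(OLEIVERB_UIACTIVATE, ...).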

Try Winspector (http://www.windows-spy.com/), which will allow you to see the Windows messages. Make sure that the scroll wheel events are getting passed through.

Related

How to close a browser window after an exit event trigger in a Unity WebGL game (for accessibility's sake)?

I have to develop a browser-based game with WebGL (for cross-platform support) using Unity that allows people with severe disabilities to take full control of their experience. This also includes opening and closing the application on their own. Opening the browser and the game is pretty straightforward, since these people can simply open them using a program provided by another company (like Tobii), but now I am facing an issue when trying to close the window again, since there doesn't seem to be a way to achieve this from within the browser/game.
My question is, is there a way to close the browser with JavaScript or maybe even in Unity itself? Or should I look towards creating an application outside of the browser with something like Java (for cross-platform support) that manages the browser window?
I already looked into ways of doing it via JavaScript or even from within Unity, but I simply couldn't find a solution. I tried using JavaScript's window.close() function, but by the looks of it that only works on windows opened from within JavaScript itself. Looking at a stand-alone application then leaves the question of how to detect an exit request from the user when they are done playing the game.
What I am looking for is a way for them to select an 'exit' button within the game which then closes the browser, so they can return to their assistance program, without the help from another person.
Currently, the user is only able to make use of a single button and can't control mouse or cursor themselves, meaning that they can't close the browser on their own.
tl;dr: how can I close a browser window using an exit button in a WebGL Unity game for a person who isn't able to do so themselves due to a handicap?
Pretty sure you can't
And that goes back to window.close only working on windows opened with JS. Originally it could close any window, but people started abusing that fact (think about things like the self-retweeting tweet, except it also closes your browser tab!)
So the restriction got added.
This is why we can't have nice things.

Black windows issue

I am developing a web browser in Rust, based on GTK+ and webkit2gtk, and sometimes all GTK+ windows become black.
Even the GTK+ inspector window that you get with the environment variable GTK_DEBUG=interactive is black.
Even though the windows are black, the UI is still responsive, since I can navigate the web with the keyboard (I see the window title updating, showing the new page URL and load progress).
Here are two actions that trigger this issue every time they happen:
Destroying the web view
Running the application a second time (it sends a message via a Unix domain socket to the first process so that it creates a new web view)
Unfortunately, I have no small example that reproduces the issue. If you want, I can show you the code of the project, but it is big, non-trivial and uses many abstraction layers over GTK+.
I know I'm not giving you a lot to work with, but if you can give me some explanations about how the rendering works and how to debug it, it would be much appreciated.
Can you give me some hints on how to debug this issue?
Is there a global OpenGL (or whatever) context for the GTK+ windows?
Are there some debugging tools to help me? (Setting G_MESSAGES_DEBUG=all does not show anything relevant.)
With strace, I was able to debug this issue:
I found out that the FD used for IPC communication was still being polled after it had been closed, so it was returning POLLNVAL.
Removing the FD with g_source_remove_unix_fd() fixed this issue.
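For reference, here is a hedged sketch of what that fix looks like against the GLib API named above; the IpcSource struct and helper names are hypothetical, not from the actual project. The point is that the tag returned by g_source_add_unix_fd() must be removed before the fd is closed, otherwise GLib keeps poll()ing a stale descriptor and gets POLLNVAL forever.

    #include <glib.h>
    #include <unistd.h>

    typedef struct {
        GSource  source;   /* must come first: GLib treats this as a GSource */
        int      fd;       /* the Unix-domain-socket fd used for IPC */
        gpointer fd_tag;   /* tag returned by g_source_add_unix_fd() */
    } IpcSource;

    /* Attach the fd to the source when setting up the IPC watch. */
    static void ipc_source_watch(IpcSource *ipc, int fd)
    {
        ipc->fd = fd;
        ipc->fd_tag = g_source_add_unix_fd((GSource *) ipc, fd, G_IO_IN);
    }

    /* The important part: detach the fd from the source before closing it,
     * so the main loop stops polling it. */
    static void ipc_source_close(IpcSource *ipc)
    {
        if (ipc->fd_tag != NULL) {
            g_source_remove_unix_fd((GSource *) ipc, ipc->fd_tag);
            ipc->fd_tag = NULL;
        }
        close(ipc->fd);
        ipc->fd = -1;
    }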

Assign a command to the central soft button in Java ME

I have a mobile Java ME application that has been working on Nokia phones. However, now I'm porting it to the Samsung 5611, and I've run into the following problem: no command is assigned to the central soft button; all of them are contained in the right-button menu. When the same MIDlet was launched on a Nokia 3110c, one command was placed on the central button, and the others (if there were two or more) were grouped into the options menu.
I tried Item.setDefaultCommand (no effect) and Display.getInstance().setThirdSoftButton(true) (that method is not supported in SDK 3.4). I also tried changing the type of one command to OK or SCREEN, and changing the priorities, all without success.
Thanks in advance. Any ideas will be helpful.
Sadly, there's no way for the developer to decide exactly which softbuttons the commands end up on. It is the individual device that decides. Some devices have two softbuttons, and some have three.
You can fiddle a bit with priorities, but you still can't force commands to specific softbuttons.
That's high-level GUI (Form) for you.
If you want to have control of such things, you need to go with a low-level GUI (Canvas / GameCanvas). Nowadays there are several APIs you can use to create Form-like low-level GUIs. Check out LWUIT, for example, which I imagine would make it easy for you to port your high-level code to the low-level approach.
But even when using low-level coding, you have to be aware of different devices having different keycodes for the softbuttons.

Using NPAPI to detect browser minimize

Is there a way to use NPAPI to determine whether the browser is minimized?
Not directly. Depending on which platform you want (you should really specify things like that) there might be a way.
For example, on Windows you might be able to get the browser HWND (NPN_GetValue with NPNVnetscapeWindow) and then check the state of that window with Windows API calls, as sketched below.
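A hedged sketch of that Windows approach, assuming sBrowserFuncs is the NPNetscapeFuncs table the browser handed to the plugin at initialization (as in a typical NPAPI plugin):

    #include <windows.h>
    #include "npapi.h"        // NPAPI SDK headers
    #include "npfunctions.h"

    extern NPNetscapeFuncs *sBrowserFuncs;   // assumption: saved at NP_Initialize

    bool BrowserIsMinimized(NPP instance)
    {
        // Ask the browser for the HWND of its top-level window.
        HWND browserHwnd = NULL;
        NPError err = sBrowserFuncs->getvalue(instance, NPNVnetscapeWindow,
                                              &browserHwnd);
        if (err != NPERR_NO_ERROR || browserHwnd == NULL)
            return false;                    // can't tell; assume visible

        // IsIconic() reports whether a top-level window is minimized.
        return IsIconic(browserHwnd) != FALSE;
    }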
On mac you're going to have a harder time of it; you could possibly intuit from the clipping information passed into NPP_SetWindow, but that doesn't tell you if the browser is minimized or if the plugin (or even the tab) is just not visible. Again, you'd need to try to figure out a way to use system calls to find your way back to the real window, but on Mac that's going to be very non-trivial.
On Linux I'm not sure; you get a GtkSocket if you use XEmbed (the only thing Chromium supports), and I haven't a clue whether you can use that to get back to where you'd need to be to check the minimized state.
So the short answer is no; NPAPI doesn't provide anything like that. You'd just have to try to find something that it does provide that gives you enough info to hack it.
Since I was using a Core Animation layer, I put in a timer which checks how often the draw callback is called. If the time difference between two callbacks is greater than a second, I assume that my plug-in is either minimized or hidden.

Desktop icon functionality in a window

My wife complains that I have too many icons on the Windows XP-Pro desktop.
I like to be able to quickly drop a file onto the icon of the application I want to have open it. And I like to follow a link to open often-used, deeply nested folders rather than navigate there. Thus, I have over 100 icons on the desktop.
(We share the same user account because we switch back and forth so often and because we both need to access the same e-mail, so separate accounts isn't the answer.)
I'd like to write a program which would have functionality similar to the Windows desktop's. Then I could open that window to do the drag-and-drop work but, when it's minimized, the desktop display would be left sparsely populated for my wife. As an added bonus, I could implement better organization of the icons than the desktop allows.
This is similar to what an Explorer window does, with the key exception that the desktop allows you to do some arrangement of icons. (For instance, program icons on the left (with the most used ones near the top), folders at the top, data files on the right.)
How do I go about getting an icon to display in a Windows Form (or on an appropriate control on the form)? (For instance, if I drop in a link to Notepad or a link to a file folder.)
How do I take the same action that the desktop does if the icon is double clicked? (For instance, if a link to a folder is double clicked.)
How do I take the same action that the desktop does if the icon has something dragged onto it? (For instance, a text file is dragged onto the Notepad icon.)
I'm using Visual Studio and C#.NET for programming.
I know how to do basic drag and drop.
I do not know:
A. what controls to use on the form to display the icons
B. how to find the icon
C. what commands are built by the desktop under various situations (so I can emulate the functionality)
I apologize that this is a multi-part question, but it was hard to break apart without explaining the whole story again.
This is a big question, but I'll give you some quick thoughts to get things moving in the right direction. WinForms exposes the functionality needed to make this happen; it's just a matter of wiring everything up the way you want it.
The key piece you will want to look into is drag/drop, which is very well supported by WinForms. If you implement your icons as PictureBox controls, you can set the AllowDrop property on the program icons, then handle the DragDrop event and have it call System.Diagnostics.Process.Start() to launch the application with the dropped filename as an argument.
As far as finding icons, most programs have their icon included as a resource in their .EXE file or in a related .DLL.
Regarding question C, the underlying question is: which behaviors of the desktop would you like to have in your program? Explorer.exe is a massive application that does far more than you will ever need or want to implement. Once you decide what functionality you want, play around with the IntelliSense list of events for the form and PictureBox controls. You'll find that a lot of behavior is given to you for free in the Windows common controls, and additional behavior is fairly easy to add by handling the appropriate events.
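To make questions B and C concrete, here is a hedged Win32 sketch (in C++; the same calls can be P/Invoked from C#, and the rough managed equivalents are Icon.ExtractAssociatedIcon and Process.Start with UseShellExecute). The helper names and paths are illustrative only:

    #include <windows.h>
    #include <shellapi.h>

    // Question B: get the icon the desktop would show for any file,
    // folder, or shortcut. Caller frees the icon with DestroyIcon().
    HICON GetShellIcon(const wchar_t *path)
    {
        SHFILEINFOW sfi = {};
        SHGetFileInfoW(path, 0, &sfi, sizeof(sfi),
                       SHGFI_ICON | SHGFI_LARGEICON);
        return sfi.hIcon;
    }

    // Question C, double-click: let the shell "open" the item, which
    // launches a program, opens a folder in Explorer, or follows a link.
    void OpenLikeDesktop(const wchar_t *path)
    {
        ShellExecuteW(NULL, L"open", path, NULL, NULL, SW_SHOWNORMAL);
    }

    // Question C, drop: run the target program with the dropped file as
    // its argument, e.g. a .txt file dragged onto the Notepad icon.
    void DropLikeDesktop(const wchar_t *program, const wchar_t *droppedFile)
    {
        ShellExecuteW(NULL, L"open", program, droppedFile, NULL, SW_SHOWNORMAL);
    }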
Why don't you just use a virtual desktop?
Try http://virtuawin.sourceforge.net/
You will skip a lot of coding.
Right from their page:
"VirtuaWin is a virtual desktop manager for the Windows operating system (Win9x/ME/NT/Win2K/XP/Win2003/Vista). A virtual desktop manager lets you organize applications over several virtual desktops (also called 'workspaces'). Virtual desktops are very common in Unix/Linux, and once you get accustomed to using them, they become an essential part of a productive workflow."