Best tool to reverse-engineer a WinXP PS/2 touchpad driver?

I have a PS/2 touchpad which I would like to write a driver for (I'm just a web guy so this is unfamiliar territory to me). The touchpad comes with a Windows XP driver, which apparently sends messages to enable/disable tap-to-click. I'm trying to find out what message it is sending but I'm not sure how to start. Would software like "Syser Debugger" work? I want to intercept outgoing messages being sent to the PS/2 bus.

IDA Pro won't be much use to you if you want to find out what 'messages' are being sent. You should realise that this is a very big step up for most web developers, but you probably already knew that.
I would start by deciding whether you really need to work at the driver (kernel) level; user mode may be where you want to look first. Use a tool like WinSpy or another Windows message-spying tool to find out what messages are getting passed around by your driver's configuration software and the mouse applet in Control Panel. Once you know the messages, you can send them yourself from user mode with the Windows API function SendMessage().
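Once WinSpy (or a similar tool) shows you the message, replaying it from your own user-mode code is only a couple of Win32 calls. A minimal sketch; the window class name, message number, and parameters below are hypothetical placeholders for whatever you actually observe:

    #include <windows.h>

    int main() {
        // Hypothetical: find the driver's settings window by its class name.
        HWND hwnd = FindWindowW(L"SynTPConfigClass", nullptr);
        if (hwnd != nullptr) {
            // Hypothetical private message and parameters observed with WinSpy.
            const UINT WM_APP_TAPTOCLICK = WM_APP + 1;
            SendMessageW(hwnd, WM_APP_TAPTOCLICK, /*enable=*/TRUE, 0);
        }
        return 0;
    }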
Your first stop for device driver development should be the Windows DDK docs and OSR Online.

I suggest reading the Synaptics touchpad specification (most of the touchpads installed in notebooks are Synaptics devices), available here: http://www.synaptics.com/decaf/utilities/ACF126.pdf
I believe on page 18 you'll find the feature you are looking for. At least you'll know what to expect.
So, very likely, the touchpad driver "converts" the command coming from user mode into this PS/2 command.
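If the feature does turn out to be a bit in the Synaptics mode byte, the spec's command encoding is distinctive: the 8-bit argument is smuggled through four standard Set Resolution (0xE8) commands, two bits at a time, then committed with Set Sample Rate 20 (0xF3 0x14). A rough sketch of that sequence, assuming a hypothetical ps2_write() helper; check the spec for the actual bit assignments:

    #include <cstdint>

    // Hypothetical helper: writes one byte to the PS/2 device and waits for
    // the 0xFA ACK. The real implementation is platform/controller specific.
    void ps2_write(uint8_t) { /* talk to the i8042 controller here */ }

    // Encode an 8-bit mode byte as four Set Resolution commands (two bits
    // each, most significant pair first), then commit it with Set Sample
    // Rate 20, which Synaptics pads interpret as "apply mode byte".
    void synaptics_set_mode(uint8_t mode) {
        for (int shift = 6; shift >= 0; shift -= 2) {
            ps2_write(0xE8);                  // Set Resolution
            ps2_write((mode >> shift) & 0x3); // two bits of the mode byte
        }
        ps2_write(0xF3); // Set Sample Rate...
        ps2_write(0x14); // ...of 20 decimal: commit the mode byte
    }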
I don't know the specifics of the touchpad's PS/2 driver, but I see two major ways for the user-mode panel to communicate with the driver:
- update some key in the registry (this is actually very common)
- the driver provides an alternate "channel" that the user mode app opens and writes specific commands to
You may want to try using Process Monitor from Sysinternals to log registry activity when setting/resetting the feature.
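Once Process Monitor shows you which value the panel touches, toggling it from your own code is straightforward. A sketch with an entirely hypothetical key and value name; substitute whatever Process Monitor actually reports (and note the driver may only re-read the value on certain events):

    #include <windows.h>

    int main() {
        HKEY key;
        // Hypothetical key path and value name; use what Process Monitor shows.
        if (RegOpenKeyExW(HKEY_CURRENT_USER, L"Software\\VendorPad\\Settings",
                          0, KEY_SET_VALUE, &key) == ERROR_SUCCESS) {
            DWORD tapToClick = 0; // 0 = disable, 1 = enable (assumed encoding)
            RegSetValueExW(key, L"TapToClick", 0, REG_DWORD,
                           reinterpret_cast<const BYTE*>(&tapToClick),
                           sizeof(tapToClick));
            RegCloseKey(key);
        }
        return 0;
    }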
As for option 2, you may want to try IRP Tracker from OSR and see if there's any specific communication between the panel and the driver (in the form of IRPs going back and forth). In this case, some kernel programming knowledge is required.
The Windows kernel debugger may also be useful to see if the PS/2 driver exposes such an alternate channel.
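If the driver does expose such an alternate channel, the user-mode side typically opens a device name with CreateFile and talks to it with DeviceIoControl. A sketch; the device name and IOCTL code here are purely hypothetical, the real ones are what IRP Tracker should reveal:

    #include <windows.h>
    #include <winioctl.h>
    #include <cstdio>

    int main() {
        // Hypothetical device name exposed by the touchpad driver.
        HANDLE dev = CreateFileW(L"\\\\.\\TouchPadCtl",
                                 GENERIC_READ | GENERIC_WRITE,
                                 0, nullptr, OPEN_EXISTING, 0, nullptr);
        if (dev == INVALID_HANDLE_VALUE) {
            std::printf("no such device\n");
            return 1;
        }
        // Hypothetical IOCTL code; the real one comes out of the IRPs you trace.
        const DWORD IOCTL_TOUCHPAD_SET_TAP =
            CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS);
        DWORD enableTap = 1, bytesReturned = 0;
        DeviceIoControl(dev, IOCTL_TOUCHPAD_SET_TAP,
                        &enableTap, sizeof(enableTap),
                        nullptr, 0, &bytesReturned, nullptr);
        CloseHandle(dev);
        return 0;
    }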

Have a look at IDA Pro - The Interactive Disassembler. It is an amazing disassembler.
If you want to debug, not just reverse-engineer, try PEBrowse Professional Interactive from SmidgeonSoft.

Related

Livecode and Biopac Interface

My lab recently purchased the BIOPAC system to measure skin conductance as part of our experiments.
My experiments are normally coded using Livecode.
I need to be able to tell Livecode to automatically score and label digital event marks in the skin conductance responses in the BIOPAC System.
Does anyone have any experience interfacing the BIOPAC system and Livecode? Does anyone have some code they have used?
Thanks!
There is a gadget from Bonig und Kallenbach:
http://www.bkohg.com/serviceusbplus_e.html
This has analog inputs that you could easily configure to measure skin resistance. It comes with a framework for LiveCode, and connects through the USB port.
I have used these in many applications to connect the real world to my computer. All your processing is done in the usual LC way.
I think there is no direct example using Biopac hardware.
To tinker with your own software outside AcqKnowledge, you have to purchase the BHAPI (MPDEV.dll) from Biopac. The problem is that BHAPI only supports Windows, not macOS or Linux.
Another way is to stream data through AcqKnowledge 5.x: start an acquisition in AcqKnowledge and stream it out, then receive the data stream in LiveCode and process it.

Control Host playback from JUCE audio VST plugin

I am trying to find a way to control the playback position / tempo of a VST host from a VST plugin built with JUCE.
I am not sure if this is possible.
I found a setPlayHead function on the AudioProcessor, and I think this might be what I am looking for.
https://www.juce.com/doc/classAudioProcessor#a9015f8476c07b173e3c9919b3036339d
But in the documentation for setPlayHead I read this:
Tells the processor to use this playhead object.
So can anybody tell me whether this means that the new AudioPlayHead set on the AudioProcessor will be used for the host's playback (e.g. Cubase), or that only the AudioProcessor of my VST plugin will use this AudioPlayHead while the host's AudioPlayHead remains unaffected?
Thanks for any help / input on this.
A sequencer cannot be controlled by a VST plugin in this way. The VST API doesn't allow for anything like this. The method you've found is actually part of the Juce API which allows a sequencer to pass a playhead structure to a plugin.
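To illustrate the direction of the information flow: inside a plugin you read the playhead the host installed, you don't replace it. A sketch against the JUCE 4/5-era API (MyProcessor is a placeholder class name):

    // In a Projucer-generated project: #include <JuceHeader.h>
    // Inside your plugin's AudioProcessor subclass:
    void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                    juce::MidiBuffer& midiMessages)
    {
        // getPlayHead() returns the playhead the HOST installed via
        // setPlayHead(); the plugin only ever reads from it.
        if (juce::AudioPlayHead* playHead = getPlayHead())
        {
            juce::AudioPlayHead::CurrentPositionInfo info;
            if (playHead->getCurrentPosition (info))
            {
                // info.bpm, info.timeInSeconds, info.isPlaying, etc. are the
                // host's values; modifying this struct changes nothing in
                // the host.
            }
        }
        // ...your normal audio processing on `buffer`...
    }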
To be fair, there is no technical reason that a plugin couldn't control the host in principle. The host would have to supply an unofficial custom opcode and an associated canDo for the feature. However, it would not be part of the VST standard, and would only work for that specific host.
As far as I know, no major VST host (including Ableton Live, Cubase, etc) has a mechanism to allow this. Thinking from the host's standpoint, it would be a bit crazy to provide such a mechanism. Just imagine multiple plugins trying to stop/play the host's playback at the same time!
So yeah, sorry, but this is not really possible in the way that you are thinking. However, it would be possible for a VST plugin to control the host's tempo (but not playback state) via Ableton Link. Since Link works over a local network socket, and doesn't have any concept of master/slave, a VST plugin could theoretically send tempo changes to the host in this manner.
Right now (where "now" == September 2016), Ableton Live is the only sequencer which supports Link, but Ableton has said that they are working with other companies to help them add support for Link, so I wouldn't be surprised if more sequencers start to add Link support in the near future.

PJSIP via CLI on Raspberry

I am new to Raspberry Pi and VoIP. I am interested in making a door intercom system using Raspberry Pis. I read most of the posts here, and they were really helpful; one of the Pis acts as the server (Asterisk PBX). I was able to call using sflphone, but that was only for desktop mode, and I am interested in calling from the CLI. For that I installed PJSIP, as recommended by most users, but I don't have any idea what to do next (I mean, how I should start). The installation notes say I am supposed to try "pjsua" and "pjsystest" in pjsip-apps/bin, but that doesn't ring any bell for me.
Sorry for my beginner level, but if you don't begin, how are you supposed to master it?
I shall be very thankful.
I can't explain all of the stuff you need to know, but with this site you should be able to register to your PBX. It's the documentation for the high-level API of PJSUA. If you go further on that site, it will lead you through the things you have to do to establish a call.
This can be very frustrating, though, because many errors can appear. There are some Python test applications under pjproject/pjsip-apps/src/python/samples; here you should first try to establish a call between two clients to be sure your server is well configured.
Just for info, the dest_uri you have to give in call.py looks like sip:ip:port, and you have to change the values in register.py to your client's info. Then run register.py in one shell and call.py in another shell.
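If you later outgrow the sample scripts and want your own command-line tool, the same register-and-call flow is available through the PJSUA C API. A minimal sketch with error checking omitted; every address, extension, and password below is a placeholder for your own PBX setup:

    #include <pjsua-lib/pjsua.h>

    int main() {
        pjsua_create();
        pjsua_config cfg;
        pjsua_config_default(&cfg);
        pjsua_init(&cfg, NULL, NULL); // default logging and media config

        // UDP transport on the standard SIP port.
        pjsua_transport_config tcfg;
        pjsua_transport_config_default(&tcfg);
        tcfg.port = 5060;
        pjsua_transport_create(PJSIP_TRANSPORT_UDP, &tcfg, NULL);
        pjsua_start();

        // Register an account (placeholder extension/credentials).
        pjsua_acc_config acc;
        pjsua_acc_config_default(&acc);
        acc.id = pj_str((char*)"sip:1001@192.168.1.10");   // your extension
        acc.reg_uri = pj_str((char*)"sip:192.168.1.10");   // your Asterisk box
        acc.cred_count = 1;
        acc.cred_info[0].realm = pj_str((char*)"*");
        acc.cred_info[0].scheme = pj_str((char*)"digest");
        acc.cred_info[0].username = pj_str((char*)"1001");
        acc.cred_info[0].data_type = PJSIP_CRED_DATA_PLAIN_PASSWD;
        acc.cred_info[0].data = pj_str((char*)"secret");
        pjsua_acc_id acc_id;
        pjsua_acc_add(&acc, PJ_TRUE, &acc_id);

        // Ring another extension (placeholder URI).
        pj_str_t dest = pj_str((char*)"sip:1002@192.168.1.10");
        pjsua_call_make_call(acc_id, &dest, NULL, NULL, NULL, NULL);

        pj_thread_sleep(30000); // let the call run for a bit
        pjsua_destroy();
        return 0;
    }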
Hopefully I could help you over some barriers; feel free to ask if something was unclear.

Print service with user payment

Working at a company, I've been set to develop a print service solution which will be used from multiple platforms. The service should be available at least from Windows (native print dialog), OS X, iOS and Android. I need to be able to see which user is printing, how many pages, etc.
I'm looking for a system like CUPS for Windows or Linux, which allows me to add/connect to this payment system. The payment system confirms the user has enough points to be able to print the given document. The system should be as transparent as possible for the user, and he/she should be able to print to it like a normal network printer.
The payment system is an existing product, so my system should only handle printing and user authentication.
My first thought was to develop a simple listener that would run on the server; clients could connect to it, add files to the print queue, and print if they had enough points. However, I could not find any tutorials or similar projects for this approach, so I'm looking into adjusting an already existing product to my needs.
I have made a drawing of how I think the system should look.
I found a solution using CUPS with Tea4Cups. Tea4Cups provides pre/post-hooks where the user can define some scripts/commands to run before and after the document is sent to the printer.
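To give a concrete idea of the shape of it, here is a tea4cups.conf sketch. The hook names and script paths are hypothetical, and you should verify the exact directive syntax and job-cancellation semantics against the Tea4Cups documentation for your version:

    # /etc/cups/tea4cups.conf (illustrative only)
    [global]
    # Run our quota check before the job is handed to the real backend;
    # Tea4Cups exports job metadata (user, job id, ...) into the hook's
    # environment, so the script can ask the payment system about points.
    prehook_quota : /usr/local/bin/check-print-quota.sh
    # Charge the finished job against the user's account afterwards.
    posthook_accounting : /usr/local/bin/charge-print-job.sh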
More information about this here:
https://serverfault.com/questions/208268/run-command-before-and-after-printing-with-cups
Run a script when user press print, and not start spooling before script ends (linux, cups)

What are the legitimate uses of global keyboard hooks?

Other than for app launch shortcuts, which should only be provided by the O/S, what are the legitimate uses of things like Windows keyboard hooks? It seems to me that we only have problems with things like key loggers because operating systems provide hooks to do things that should not be permitted by anyone under any condition except the O/S kernel itself.
EDIT: OK, so given some legitimate places where they may be required, should not the O/S provide a high level ability to globally turn them off, and only allow exceptions on a program-by-program basis?
I'm currently working on a mobile application platform / hardware abstraction layer for an enterprise client, and one requirement was that a screensaver would be brought up after a certain period of inactivity. Since mobile devices don't have mice to move, "activity" consists of key presses or screen taps. One of our devices doesn't have a touchscreen, and, to make a long story longer, the mobile hardware vendor didn't properly implement the Win32 API calls that would allow me to get the time since the last user input.
Since the hardware vendor was unwilling to implement the Win32 API properly, the next best way I knew of to ensure that my console application could trap key presses in any application on the system was to install a global keyboard hook.
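For reference, that user-mode fallback is only a few lines. A minimal sketch of a global low-level keyboard hook that merely records the time of the last key press (error handling omitted):

    #include <windows.h>

    static volatile DWORD g_lastKeyTick = 0;

    // Called by Windows for every keyboard event, system-wide.
    static LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam) {
        if (code == HC_ACTION && (wParam == WM_KEYDOWN || wParam == WM_SYSKEYDOWN))
            g_lastKeyTick = GetTickCount(); // note the activity; don't swallow it
        return CallNextHookEx(nullptr, code, wParam, lParam);
    }

    int main() {
        // WH_KEYBOARD_LL is global and needs no DLL injection, but the
        // installing thread must pump messages for the hook to be called.
        HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, KeyboardProc,
                                       GetModuleHandleW(nullptr), 0);
        MSG msg;
        while (GetMessageW(&msg, nullptr, 0, 0)) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        UnhookWindowsHookEx(hook);
        return 0;
    }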
That said, I agree that the average consumer scenario is very different from mine, and the OS should allow the user to whitelist activities like this.
Not true. There are environments where the owner of the computer may want to stop things such as Ctrl+Alt+Delete. For example: a kiosk, or... Best Buy?
For example, I have installed two applications:
One maps Windows+V to paste unformatted text.
The other modifies how Caps Lock works.
I think both of them require a hook.
I wrote an app that let me place virtual sticky notes on my monitor. I used an OS keyboard hook to bind a hotkey to it.
I had an option in settings to disable the hook.
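Worth noting: for a single global hotkey like that, RegisterHotKey can avoid a hook entirely, since the OS only ever hands you your registered combination and no other keystrokes. A sketch, with an arbitrary example hotkey:

    #include <windows.h>

    int main() {
        // Ask the OS for Ctrl+Alt+N system-wide; unlike a keyboard hook,
        // no other keystrokes are ever visible to this program.
        RegisterHotKey(nullptr, 1, MOD_CONTROL | MOD_ALT, 'N');
        MSG msg;
        while (GetMessageW(&msg, nullptr, 0, 0)) {
            if (msg.message == WM_HOTKEY && msg.wParam == 1) {
                // create a new sticky note here
            }
        }
        UnregisterHotKey(nullptr, 1);
        return 0;
    }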
There may not be a lot of legitimate uses. However, I'm of the opinion that you shouldn't intentionally limit the features of a system simply to make it more secure.
Also, a key-logger isn't a bad thing if you know it's there and you installed it yourself.