SCORM 1.2 Player: Command Line Launch

Is there a SCORM 1.2 player that can be launched from the command line?
I'm looking to integrate the player into an Ant script.

SCORM is a protocol that specifies how one JavaScript component communicates with another JavaScript component. You could in theory have this communication take place in the context of a command line rather than the context of the browser, but you'd have to have an LMS that supported this kind of communication. I think it's unlikely you'll find a component that will do this.
If you are looking for something to script browser behavior, I'd look into Selenium. It will allow you to do automated testing through Firefox. This thread may also be helpful: Automated Web UI Testing

Related

Integrate Chrome with command-line tools

I'm trying to set up some integration between Chrome and various command-line tools and build systems that I have. Almost everything that I want to do within Chrome is supported by the extensions API, so I figured I'd make an extension, set up communication between it and my external tools, and go from there.
Unfortunately, I can't find any sane way to get messages in and out of Chrome. The only thing I could find that would plausibly work at all would be to introduce a local web server as a message broker, having the extension connect to it with WebSockets and having the command-line utilities do the same. But that's way too much complexity - it'd basically mean writing a whole IPC framework.
Is there any reasonable way to do this?
There is currently no way to let extensions communicate outside Chrome other than XHR/WebSockets/Socket API or traditional methods like image URLs, JavaScript URLs, etc.
If you don't mind overkill, you could try creating an NPAPI plugin that writes protocol messages to a file on disk (much as the Apache web server writes its logs), plus a standalone script (Python or any other scripting language) that tails that file. So your API would basically read the file that the NPAPI plugin creates.
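For what it's worth, the tailing side is simple in most languages. Here is a hedged sketch of it in C# rather than Python (the file name, message format, and polling interval are all invented for illustration):

    // Tail the file the NPAPI plugin appends protocol messages to.
    using System;
    using System.IO;
    using System.Text;
    using System.Threading;

    class MessageTail
    {
        static void Main()
        {
            // Share the file for writes so the plugin can keep appending.
            using (var fs = new FileStream("chrome-messages.log",
                FileMode.OpenOrCreate, FileAccess.Read, FileShare.ReadWrite))
            using (var reader = new StreamReader(fs, Encoding.UTF8))
            {
                fs.Seek(0, SeekOrigin.End);          // start at the current end
                while (true)
                {
                    string line = reader.ReadLine(); // next protocol message, if any
                    if (line != null)
                        Console.WriteLine("got: " + line);
                    else
                        Thread.Sleep(250);           // nothing new yet; poll again
                }
            }
        }
    }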

Interaction between browser and external hardware?

I'd like to know what are the different ways for a browser to interact with external hardware.
Something important: I have control over the machine. That means I can install add-ons (Firefox, Chrome) and run executables on the machine.
I already have a Java program that can communicate with the hardware, and I'd like to know how to expose its interface to the browser. That's one possibility I'm investigating, but I'd like to know if there is any other way to do it.
Thank you
I had a similar problem. The only ways presented were to either use an addon, or to use a tiny C server that uses HTML as its GUI.
I know you are using Java, and this thread is C++ related, but the basic principles should still work: link here.
You can expose a COM interface from your Java application and use Silverlight to talk to it. This is significantly simpler if the desktop application is in .NET. Check out: http://www.wintellect.com/CS/blogs/jprosise/archive/2009/12/14/silverlight-4-s-new-com-automation-support.aspx
HTML5 will have a device element that will allow you to connect external devices. Right now, the only choice you have is using plugins to communicate to external hardware.
You can look into NPAPI (a new API called PPAPI is in the making), which allows you to create a plugin that communicates with native code to do whatever you want.

http client that executes javascript...?

Does anyone know of an HTTP client that is scripting friendly (i.e. the basics: GETs, POSTs) and is capable of executing JavaScript (all of it, not just location redirects)? And one which isn't just launching another browser.
There are now tools to achieve exactly what you are asking. The best class of tool, if not the only one, is probably the "headless browser".
There have apparently been a few attempts at headless browsers, but the one that seems to have got it right is called PhantomJS.
PhantomJS is basically a WebKit browser without any display, so all the layout logic, JavaScript, etc. is in there along with the basic HTTP client, just like in a browser - because it is a browser.
PhantomJS exposes a JavaScript interface, but apparently it's not so easy to use on its own. Another project, CasperJS, has popped up to make it more useful.
One more project deserves mention here: SpookyJS. Its job is to act as a middleman between node.js and PhantomJS; since each implements its own JavaScript event loop, they are not easy to integrate directly. With SpookyJS you can script an HTTP client in JavaScript on your desktop or server.
As far as I know there is no such thing available (although I'm keeping an eye on this thread hoping to be proved wrong).
However, if you're prepared to roll up your sleeves and do some work, it should be possible to implement such a thing based on Firefox with a XUL script - or you might consider looking at, for example, Rhino, which is a JavaScript engine without a browser.
ELinks is a text-mode browser with JavaScript support, so it would probably be simpler to run that in a pty than to implement your own browser component and expose the DOM to Rhino.

Should I include a command line mode in my applications?

For learning purposes I'm developing a class-generation application in C# and WinForms. I think it could be useful to include a command-line mode that allows the application to be used in scripts.
Is it good practice to include a command-line mode in my applications? Would it be better to have two different programs, one with a GUI and one for the command line?
Actually, having a C# application be both console and GUI is problematic. Console applications (/t:exe) are launched by the command prompt, which then waits for them to finish. GUI applications (/t:winexe) are launched by the command shell, which returns immediately. While you can create and run forms from a 'console' application, it will always have a background console displayed. On the other hand, 'Forms' applications don't have stdin, stdout and stderr connected and, while they can behave as command-line tools and process command arguments, they have problems when embedded in scripts (because the standard input/output is not hooked up).
If you want to expose the functionality from both GUI-driven applications and scriptable/pipe-able batch processing too, the best way is to compile your functionality into a class library, then build two separate applications (one GUI, one console) that leverage that library.
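As a rough sketch of that layout (all names below are invented; the question's class-generation app is just a stand-in):

    // Core logic lives in a class library that both front-ends reference.
    using System;

    namespace ClassGen.Core
    {
        public static class Generator
        {
            // The reusable operation both the GUI and the CLI call into.
            public static string GenerateClass(string className)
            {
                return "public class " + className + "\n{\n}";
            }
        }
    }

    namespace ClassGen.Cli
    {
        // Compiled with /t:exe so stdin/stdout are hooked up for scripting.
        public static class Program
        {
            public static int Main(string[] args)
            {
                if (args.Length != 1)
                {
                    Console.Error.WriteLine("usage: classgen <ClassName>");
                    return 1;
                }
                Console.WriteLine(ClassGen.Core.Generator.GenerateClass(args[0]));
                return 0;
            }
        }
    }

The GUI would be a second, /t:winexe project referencing the same ClassGen.Core assembly.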
I'm not a C# programmer, but when I program in C++, I find it most useful to:
1.) Create a shared library with both a C and a C++ API for performing the core app functionality.
2.) Create one or more command-line binaries accessible to the shell interpreter.
3.) Create a GUI application for typical end users, implemented with the library (not by invoking the binaries).
This separates the logic of the application from its interface, and enables third-party developers to create alternative interfaces for the same application functionality. It also makes it easy to script, while at the same time catering to typical end users who want a nice, shiny GUI.
Yes. If you think the program will be useful in a scripted environment, then include a command-line mode (without UI) so it can be used in scripts.
It doesn't have to be a separate application, but it can be. Whether you want to do that or not is entirely up to you. I'd imagine that if you had two applications they'd share the same logic assemblies but the interface (one a GUI the other a command line) would just be different.
I agree with michaelsafyan about creating a library with core functionality.
What I would add is that you should check out PowerShell cmdlets as well.
Much command-line activity will be migrating to PowerShell, and it brings a lot to the table.
http://en.wikipedia.org/wiki/Windows_PowerShell
I very often create such a utility as an API. If I need to use it from a simple command-line utility, that's easy - it just calls the API. If the command-line gets too complex, maybe it's time for a Winforms application - which can also call the API. If I wanted to use it from PowerShell, or from an MSBUILD task, those are still easy - they just call the API.
Creating an application on the Windows platform that behaves correctly as a console application can be problematic; it's an issue with the Windows kernel architecture, as console and GUI programs are considered two different types of application (they have a different subsystem that you generally specify in the compiler or linker options). You can still manually redirect the IO and open a console from a Win32 application via the Win32 function AllocConsole() and friends, but this also has some issues. See this Old New Thing post for more information.
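For reference, here is a minimal sketch of the AllocConsole() route from managed code; the P/Invoke declarations are the standard kernel32 ones, but error handling is omitted and the caveats from the linked post still apply (for instance, the shell has already returned by the time you attach):

    using System;
    using System.Runtime.InteropServices;

    static class ConsoleSupport
    {
        [DllImport("kernel32.dll")]
        static extern bool AttachConsole(int dwProcessId);

        [DllImport("kernel32.dll")]
        static extern bool AllocConsole();

        const int ATTACH_PARENT_PROCESS = -1;

        public static void EnsureConsole()
        {
            // Prefer the console of the shell that launched us; if there is
            // none (launched from Explorer), create a fresh one.
            if (!AttachConsole(ATTACH_PARENT_PROCESS))
                AllocConsole();
            Console.WriteLine("console output works from here on");
        }
    }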
If you want your utility/program to run in scripts, you can expose it as COM.
Many scripting languages for Windows have the ability to use COM objects directly.
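As a hedged illustration of the COM route in C# (the ProgId, GUID and method are all invented, and the assembly would need to be registered with regasm.exe first):

    using System.Runtime.InteropServices;

    [ComVisible(true)]
    [Guid("8D4A2E6B-0F21-4C8A-9B57-3E1D6A0C4F92")]
    [ProgId("MyApp.Automation")]
    public class Automation
    {
        // Any public method becomes callable from COM-aware script hosts.
        public string GenerateClass(string name)
        {
            return "public class " + name + " { }";
        }
    }

A script could then drive it with, for example, CreateObject("MyApp.Automation") from VBScript.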
You should include a command-line interface in your application if it enhances usability and comfort.
For instance, calling a CLI command might be faster than starting the GUI and navigating through several menu layers to reach the same functionality.
You might ask the users of your application, if they would find it useful to have a CLI mode.
Some words on marrying CLI & GUI on Windows:
A Windows application is either a GUI application or a console application, but not both. This is an OS issue and there is probably nothing one can do about it.
The console subsystem in Windows is horrible and PowerShell didn't change that.
Your implementation options on Windows are:
the two-files approach:
Provide two files: one .com with a console, one .exe with the GUI.
Because of executable probing on the command line, the .com file will get executed before the .exe.
the console-flickering approach:
Compile your GUI application with console mode on; then, immediately after the GUI starts, call FreeConsole() to close the console.
It's a bit annoying, but it works. Con: now you have a flickering console window. Pro: still one file.
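A sketch of the console-flickering variant in C#, assuming a WinForms app built as /t:exe (the argument handling and bare Form are invented placeholders):

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    static class Program
    {
        [DllImport("kernel32.dll")]
        static extern bool FreeConsole();

        [STAThread]
        static void Main(string[] args)
        {
            if (args.Length > 0)
            {
                // CLI mode: keep the console and do the scripted work here.
                Console.WriteLine("running in command-line mode");
            }
            else
            {
                FreeConsole();               // close the briefly flashed console
                Application.Run(new Form()); // GUI mode
            }
        }
    }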
I agree with @Remus Rusanu:
you should create a class library with your core functionality and then build a GUI app (wrapper) for it.
One other benefit is that you might not even need to create a command-line app, as you can access your .NET DLL's features using PowerShell.
You can find one example over here
Another great idea is to embed a scripting language. Then your program can be controlled by a script, and you get all the logic, branching, etc from the scripting language "for free."
There are many choices of what you can embed. Lua is one of the most popular and intended for just that purpose and is an excellent choice.
However, for a general-purpose app, I'd take a hard look at embedding Python. Python is so popular that you'd have a larger group of people willing to put in the effort to write a script for your app.
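To make the idea concrete, here is a hedged sketch using IronPython as the embedded engine (NLua would be the rough Lua equivalent). It assumes the IronPython and Microsoft.Scripting assemblies are referenced; the "app" variable name is invented:

    using IronPython.Hosting;
    using Microsoft.Scripting.Hosting;

    public class ScriptHost
    {
        public void RunUserScript(string code, object app)
        {
            ScriptEngine engine = Python.CreateEngine();
            ScriptScope scope = engine.CreateScope();
            scope.SetVariable("app", app); // expose your application object
            engine.Execute(code, scope);   // the script can now drive the app
        }
    }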

GUI Automation testing - Window handle questions

Our company is currently writing a GUI automation testing tool for compact framework applications. We have initially searched many tools but none of them was right for us.
Using the tool, you can record test cases and group them together into test suites. For every test suite, an application is generated which launches the application under test and simulates user input.
In general the tool works fine, but since we are using window handles to simulate user input, there isn't very much you can do. For example, it is impossible for us to get the name of a control (we just get the caption).
Another problem with using window handles is checking for a change. At the moment we simulate a click on a control and, depending on the result, we know whether the application has gone to the next step.
Is there any other (simpler) way for doing such things (for example the message queue or anything else)?
Interesting problem! I've not done any low-level (think Win32) Windows programming in a while, but here's what I would do.
Use a named pipe and have your application listen to it. Using this named pipe as a communication medium, implement a real simple protocol whereby you can query the application for the name of a control given its HWND, or other things you find useful. Make sure the protocol is rich enough that there is sufficient information exchanged between your application and the test framework. Also make sure that the test framework does not trigger too much "special behavior" in the app, because then you wouldn't really be testing the features, but rather your test framework.
There are probably more elegant and cooler ways to implement this, but this is what I remember off the top of my head, using only simple Win32 API calls.
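To make that concrete, here is a rough agent-side sketch using .NET named pipes instead of raw Win32 (the pipe name, the "NAME <hwnd>" query protocol, and the lookup helper are all invented; on the Compact Framework you'd need the underlying Win32 pipe calls instead):

    using System.IO;
    using System.IO.Pipes;

    class TestAgent
    {
        public void Serve()
        {
            using (var pipe = new NamedPipeServerStream("uitest-agent"))
            {
                pipe.WaitForConnection();
                var reader = new StreamReader(pipe);
                var writer = new StreamWriter(pipe) { AutoFlush = true };
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    // e.g. the test framework sends "NAME 1234567"
                    if (line.StartsWith("NAME "))
                        writer.WriteLine(LookupControlName(line.Substring(5)));
                    else
                        writer.WriteLine("ERR unknown command");
                }
            }
        }

        // App-side lookup: map an HWND string to a control's Name property,
        // e.g. via Control.FromHandle in a Windows Forms app.
        string LookupControlName(string hwnd)
        {
            return "unknown";
        }
    }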
Another approach, which we have implemented for our product at work, is to record user events, such as mouse clicks and key events, in an event script. The script should be rich enough that you can have the application play it back, artificially injecting those events into the message queue, and have it behave the same way it did when you first recorded the script. You basically simulate the user when you play back the script.
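As a hedged sketch of the playback half, here is one possible injection mechanism via PostMessage (the packed coordinates follow the usual Win32 lParam layout; the event format itself is invented, and a real recorder captures much more):

    using System;
    using System.Runtime.InteropServices;

    static class Replayer
    {
        [DllImport("user32.dll")]
        static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

        const uint WM_LBUTTONDOWN = 0x0201;
        const uint WM_LBUTTONUP   = 0x0202;
        const int  MK_LBUTTON     = 0x0001;

        // Replay a left click at client coordinates (x, y) of the target window.
        public static void Click(IntPtr hwnd, int x, int y)
        {
            IntPtr pos = (IntPtr)((y << 16) | (x & 0xFFFF)); // pack x,y into lParam
            PostMessage(hwnd, WM_LBUTTONDOWN, (IntPtr)MK_LBUTTON, pos);
            PostMessage(hwnd, WM_LBUTTONUP, IntPtr.Zero, pos);
        }
    }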
In addition to that, you can record any important state (user's document, preferences, GUI control hierarchy, etc.), once when you record the script and once when you play it back. This gives you two sets of data you can compare, to make sure for instance that everything stays the same. This solution gives you tests that are not easy to modify (you have to re-record if your GUI changes), but that provide awesome regression testing.
(EDIT: This is also a terrific QA tool during beta testing, for instance: just have your users record their actions, and if there's a crash, you have a good chance of easily reproducing the problem by just playing back the script)
Good luck!
Carl
If the automated GUI testing tool has knowledge of the framework the application is written in, it can use that information to create better or more advanced scripts. TestComplete, for example, knows about Borland's VCL and WinForms. And if you test applications built using Windows Presentation Foundation, note that WPF has advanced support for this built in.
Use NUnitForms. I've used it with great success for single- and multi-threaded apps, and you don't have to worry about handles and stuff like that.
Here are some posts about NUnitForms worth reading
NUnitForms and failed DragDrop registration - problem of MTA vs STA
Compiled application exe GUI testing with NUnitForms
I finally found a solution for communicating between the testing application and the application under test: Managed Spy. It's basically a .NET application built on top of ManagedSpyLib.
ManagedSpyLib allows programmatic access to the Windows Forms controls of another process. For this it uses window hooks and memory-mapped files.
Thanks for all who helped me to get to this solution!
Managed Spy does not provide a solution for Compact Framework applications.
The company Jamo Solutions (www.jamosolutions.com) meets the requirements for automation testing on mobile devices, including .NET Compact Framework applications.