I have a document that contains the commands for EPSON printers. Now I have to write a driver to use in an iPhone application. This is the link to that document: http://qasimshah.sitesled.com/BettorSidekick/ESCPOS_Commands_FAQs.pdf
Now please tell me how I can send these commands to the printer? I am confused about how to print from the iPhone. The printer is not AirPrint-supported, so please guide me on how to do it.
regards
I presume you need the Epson Communication Libraries for iOS. Register and download at EpsonExpert.com. It will be in the technical resources section. Choose printer TM-P60.
EPSON receipt printers generally support ESC POS (native), OPOS, JPOS, OPOS for .NET (all three of these are UPOS API wrappers for ESC POS), and EPSON Advanced Printer Driver (for Windows printing APIs).
Since iOS doesn't do Java, JPOS is out.
Since iOS doesn't do OLE COM or .NET, OPOS and OPOS for .NET are out.
Since iOS doesn't do Windows Printing APIs, APD is out.
That leaves ESC POS as the only viable language for talking to EPSON receipt printers on iOS, unless they give you something else with the ECL package I mentioned above. ESC POS is pretty trivial.
If you need high fidelity to what's on the screen (fonts, etc.), render it as a 150 DPI BMP at 8-bit colour depth or less and send it to the printer with ESC POS. If you just want to print receipts, use the ESC POS commands for printing text.
POS receipt printers use their own command sets (as opposed to PostScript or PCL) because they do special things, like paper cutting, etc.
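To give an idea of how trivial it is, a plain-text receipt job is just a byte stream: initialise the printer, send text terminated by line feeds, then cut. Below is a minimal sketch in C++ (the helper name is made up; the byte values are from the standard ESC/POS command set). How the buffer actually reaches the printer (Wi-Fi socket, Bluetooth, or the Epson library's transport) depends on your setup and is not shown.

    #include <cstdint>
    #include <initializer_list>
    #include <string>
    #include <vector>

    // Build a minimal ESC/POS job: initialise, print two lines, feed and cut.
    std::vector<uint8_t> buildReceipt(const std::string& line1, const std::string& line2)
    {
        std::vector<uint8_t> job;
        auto put = [&job](std::initializer_list<uint8_t> bytes) {
            job.insert(job.end(), bytes);
        };

        put({0x1B, 0x40});               // ESC @  : initialise printer
        put({0x1B, 0x61, 0x01});         // ESC a 1: centre alignment
        job.insert(job.end(), line1.begin(), line1.end());
        put({0x0A});                     // LF     : print buffer and feed one line
        put({0x1B, 0x61, 0x00});         // ESC a 0: back to left alignment
        job.insert(job.end(), line2.begin(), line2.end());
        put({0x0A, 0x0A, 0x0A});         // feed a few lines past the cutter
        put({0x1D, 0x56, 0x00});         // GS V 0 : full cut
        return job;
    }

The resulting buffer is what you hand to whatever transport you end up with; bitmap printing works the same way, just with the raster image commands instead of text.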
I don't think it is possible to build printer drivers directly into your app.
Your best chance would be to set up an AirPrint-enabled print server to manage the Epson printer.
Also, although I haven't tested this, this module claims to be able to make any printer AirPrint-enabled.
I'm using a GUI application framework (EGT) on an Atmel/Microchip SAMA5D4. The framework features DRM/KMS and X11 backends.
I've looked at using tslib to calibrate a resistive touchscreen for the device, but due to EGT limitations it looks like I'm going to have to use libinput for the moment.
Is there a calibration mechanism (equivalent to tslib) available for libinput? I've looked at xlibinput_calibrator and it seems like it could be a solution, but I'll have to sort out the dependencies in the Yocto build.
Thanks,
For anyone looking at this: xlibinput_calibrator requires X11, so it was not an option.
I eventually ended up using a startup script to prevent EGT from capturing raw events from the touchscreen (by deleting /dev/event0 or similar) and instead used the calibrated source from ts_input.
I'm trying out OGRE and I'd like to ask a question about the OGRE config dialog.
The dialog, which can be opened with Ogre::Root::showConfigDialog(), lists only "800 x 600 @ 32-bit colour" for Video Mode, both for the "Direct3D9 Rendering Subsystem" and the "Direct3D11 Rendering Subsystem".
My question is: why is there only 800x600x32? Is there a way to make it list more video modes, like 1024x768x32, 1920x1080x32, etc.?
I've tried searching Google, but the closest thing I found was how to change the video mode without using the config dialog.
Any help would be appreciated, thank you!
EDIT:
Here's a link to my screenshot of OGRE Engine Rendering Setup dialog, since I don't have enough reputation to upload images.
http://imgur.com/kNDy48E
In general, this list will automatically contain all available video modes reported by the selected rendering API's drivers. If you are certain that your current API and drivers should allow more, you could debug the respective _initialise() function, e.g. for D3D11 in OgreD3D11RenderSystem.cpp:
RenderWindow* D3D11RenderSystem::_initialise( bool autoCreateWindow, const String& windowTitle )
Internally, the function D3D11VideoModeList::enumerate() will be used to enumerate all possible values from the driver.
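If you want to see what the render systems actually report without stepping through driver code, you can also dump the config options Ogre has collected; the dialog's video-mode list is populated from the same data. A small sketch (the helper name is made up), assuming a Root whose render system plugins have already been loaded:

    #include <Ogre.h>
    #include <iostream>

    // Print every video mode each available render system reports to Ogre.
    void dumpVideoModes(Ogre::Root& root)
    {
        for (Ogre::RenderSystem* rs : root.getAvailableRenderers())
        {
            std::cout << rs->getName() << std::endl;
            Ogre::ConfigOptionMap& opts = rs->getConfigOptions();
            Ogre::ConfigOptionMap::iterator it = opts.find("Video Mode");
            if (it == opts.end())
                continue;
            for (const Ogre::String& mode : it->second.possibleValues)
                std::cout << "  " << mode << std::endl;
        }
    }

If that loop only ever prints 800 x 600, the restriction is coming from the driver enumeration rather than from the dialog itself.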
I saw that your GPU uses "NVIDIA Optimus". Did you try to tell NVIDIA to use the correct GPU for your Ogre application? I saw that in the config dialog the GTX is selected, but just to make sure: How to select Optimus GPU.
Also, this Optimus policy trick might help:
NVIDIA released Optimus rendering policy guidelines not long ago.
If the user has driver 302 or higher, we can hint the driver to use the dedicated GPU. All we need to do is export a variable:
extern "C" {
_declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}
My first question, so be gentle.. :)
I would like to get a certain program located on my PC to run (instead of iTunes) every time I connect my iPhone to the PC.
Specifically, iTools, which is kind of a substitute for iTunes.
From my research I gathered that it involves adding/manipulating some registry values (something like "ServicesAutoStartOnConnect") at this location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows CE Services. When I look there, under Microsoft, there is no Windows CE Services subfolder.
I'm using Windows 7 Professional and, as mentioned, the device is my iPhone 4 (with iOS 6.0.1 on it).
Can anybody give me some organized steps to follow in order to accomplish this?
Not sure if you are aware of SuperUser.com. It is a site much like this one, but it caters more to the type of question you have asked here. For example, I did a search on 'iphone' + 'connect' + 'itunes' and found some possible helpers: SuperUser Search
Is there any API/SDK to support TTS (text to speech) in Marmalade?
I've had some success porting Flite (http://www.speech.cs.cmu.edu/flite/) to the Marmalade environment. It produces wave files and raw, in-memory buffers (which can then be played using s3eSound directly) fine.
The s3eSound adapter (which plays the text directly from within flite) is a work in progress, so, while it does produce something close to recognisable speech, it is also obviously bugged. For my purposes, the raw buffers are more useful anyway, but feel free to try to fix it up.
You can see what I've done here https://github.com/madmaw/marmalade-flite
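For reference, the Flite side of this is fairly small. The sketch below shows the underlying Flite C calls (not necessarily how the linked port is structured); register_cmu_us_kal comes from whichever voice you compile in, the helper name is made up, and handing the samples to s3eSound is omitted because the channel setup is Marmalade-specific.

    extern "C" {
    #include <flite.h>
    // Provided by the compiled-in cmu_us_kal voice.
    cst_voice* register_cmu_us_kal(const char* voxdir);
    }

    // Synthesise a phrase into an in-memory waveform.
    cst_wave* synthesise(const char* text)
    {
        flite_init();
        cst_voice* voice = register_cmu_us_kal(NULL);
        cst_wave* wave = flite_text_to_wave(text, voice);
        // wave->samples holds 16-bit PCM, wave->num_samples long, at
        // wave->sample_rate Hz; that buffer is what you would queue on an
        // s3eSound channel, or save with cst_wave_save_riff(wave, "out.wav")
        // to get a wave file instead.
        return wave;
    }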
There is no specific API provided by Marmalade; however, you may be able to use the EDK if the native APIs provide this functionality on iOS or Android.
https://www.madewithmarmalade.com/devnet/docs#/main/extensions.html
I'm creating an NPAPI plugin that isn't supposed to have a UI (for use from Javascript only). What windowing model should I use (windowed/windowless/xembed) to support as many browsers (and browser versions) as possible?
I currently implement the following functions:
NPP_SetWindow: do nothing, return NPERR_NO_ERROR
NPP_Event: do nothing, return kNPEventNotHandled (0)
NPP_SetValue: do nothing, return NPERR_NO_ERROR
NPP_GetValue: if asked for NPPVpluginNeedsXEmbed, answer yes if the browser supports it (NPNVSupportsXEmbedBool), no otherwise
For this plugin I am supporting Linux and Windows only for now. NPPVpluginNeedsXEmbed was necessary for Chrome on Linux (bug 38229); however, some old versions may not support it, as the MDC page says the sample XEmbed plugin is only supported on Firefox 2.0+. The NPP_GetValue handling is sketched below.
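A minimal sketch of that NPP_GetValue case (assuming browserFuncs is the NPNetscapeFuncs table saved in NP_Initialize; error handling and the other NPPVariable cases are omitted):

    #include <npapi.h>
    #include <npfunctions.h>

    extern NPNetscapeFuncs* browserFuncs;  // saved in NP_Initialize

    // Ask for XEmbed only when the browser says it supports it.
    NPError NPP_GetValue(NPP instance, NPPVariable variable, void* value)
    {
        if (variable == NPPVpluginNeedsXEmbed) {
            NPBool supportsXEmbed = false;
            NPError err = browserFuncs->getvalue(instance, NPNVSupportsXEmbedBool,
                                                 &supportsXEmbed);
            *static_cast<NPBool*>(value) = (err == NPERR_NO_ERROR) && supportsXEmbed;
            return NPERR_NO_ERROR;
        }
        return NPERR_GENERIC_ERROR;
    }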
The most common approach I have seen is to not care about the windowing mode at all and to set the object tag to a 1x1 size (you can try 0x0, but I've seen browser bugs related to that), in which case it doesn't really matter which window mode you use. However, I would go windowless myself, since it never causes the trademark block that floats over all other DOM elements, which a normal windowed (XEmbed or not) plugin gives you.
This is what FireBreath does if the FB_GUI_DISABLED flag is set.
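If you do go windowless, the negotiation is just one call during instance creation. A rough sketch under the same assumptions as above (browserFuncs saved in NP_Initialize, no error handling):

    #include <npapi.h>
    #include <npfunctions.h>

    extern NPNetscapeFuncs* browserFuncs;  // saved in NP_Initialize

    // Request windowless operation so the plugin never creates a native widget.
    NPError NPP_New(NPMIMEType, NPP instance, uint16_t, int16_t, char*[], char*[], NPSavedData*)
    {
        browserFuncs->setvalue(instance, NPPVpluginWindowBool, (void*)false);
        return NPERR_NO_ERROR;
    }

The browser will then call NPP_SetWindow with a drawable instead of a native window, which a do-nothing NPP_SetWindow like yours already handles.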