I'm wondering whether it is possible to design an interface that can automatically determine which lirc.conf to use.
Currently, I'm using a Raspberry Pi to control my TV, using the lirc.conf specific to my TV's remote.
Now, if I move the device to a different area with a different TV, how could I determine which conf to use?
Say we know some basic information, such as the brand of the TV: is there any potential to create a combined lirc.conf that would work for all models of that brand?
Well, if you only have to handle a few remotes, you can use all of the lirc.confs involved: lircd will try to decode the signal against each configured remote in turn and use the first match.
The details depend heavily on your lirc version. For modern lirc, see http://lirc.org/html/configuration-guide.html#appendix-8.
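For example, a combined configuration can simply include (or concatenate) the per-remote files; the file names here are hypothetical:

    # /etc/lirc/lircd.conf -- hypothetical combined file
    include "/etc/lirc/lircd.conf.d/samsung_tv.conf"
    include "/etc/lirc/lircd.conf.d/lg_tv.conf"

Each included file holds one "begin remote ... end remote" block, and lircd matches incoming signals against every remote it has loaded.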
Another option is to use the kernel decoding. If your remotes are supported by the kernel, you could just use the devinput driver and its lircd.conf file, trusting the kernel decoding, which identifies the remote at runtime.
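With recent lirc releases the driver is selected in lirc_options.conf; as a sketch (option names may differ slightly between versions):

    # /etc/lirc/lirc_options.conf -- relevant lines only
    [lircd]
    driver = devinput
    device = auto

lirc also ships a generic devinput.lircd.conf that maps the kernel's decoded scancodes to symbolic key names.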
I wish to use the Sync FIFO interface of an FT232H on a custom board, from Python on a Raspberry Pi. I would use PyFTDI, but PyFTDI doesn't implement the Sync FIFO interface mode: the constant for the Sync FIFO mode is defined in PyFTDI but never used. I plan on accessing Sync FIFO by using PyUSB directly, with PyFTDI as a reference. However, as PyFTDI never uses the Sync FIFO mode, I don't know what FTDI commands are used to select the mode on the USB endpoints. The documentation I have been able to find from FTDI tells how to use the proprietary library, as opposed to the low-level command structure actually sent to the chip. I have done a bit of searching, but FTDI provides many documents and it is a bit of an information overload.
Does anyone know where the documentation is that covers the low-level command codes and arguments sent to the FTDI USB endpoints? I am assuming the authors of PyFTDI were referencing something besides wire sniffing.
According to their knowledge base, FTDI provides an API document under NDA in some circumstances:
In some circumstances, it may be desirable to develop a custom driver for an exotic operating system or an embedded system. In these circumstances, an API document may be obtained from FTDI under NDA to allow driver development for FTDI devices. To request a copy of the API document, please contact FTDI Support support1#ftdichip.com.
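For what it's worth, the relevant control request is also visible in the open-source libftdi headers (which PyFTDI mirrors in its constants), so a PyUSB sketch along the following lines may be enough to enter Sync FIFO mode. Treat the constants and endpoint numbers as assumptions to verify against libftdi, not as vendor-confirmed values:

    import usb.core

    # Constants as defined in libftdi's ftdi.h (mirrored in PyFTDI)
    SIO_RESET_REQUEST = 0x00        # reset the port
    SIO_SET_BITMODE_REQUEST = 0x0B  # select the chip's bit mode
    BITMODE_SYNCFF = 0x40           # FT245-style synchronous FIFO

    REQTYPE_OUT = 0x40              # vendor request, host to device
    PORT_INDEX = 1                  # FTDI port index is 1-based (port A)

    dev = usb.core.find(idVendor=0x0403, idProduct=0x6014)  # FT232H
    if dev is None:
        raise IOError("FT232H not found")
    dev.set_configuration()

    # Reset, then select Sync FIFO: wValue = (mode << 8) | direction mask
    dev.ctrl_transfer(REQTYPE_OUT, SIO_RESET_REQUEST, 0, PORT_INDEX)
    dev.ctrl_transfer(REQTYPE_OUT, SIO_SET_BITMODE_REQUEST,
                      (BITMODE_SYNCFF << 8) | 0xFF, PORT_INDEX)

    # Data then flows as plain bulk transfers on endpoints 0x02/0x81.

Note that on the FT232H the Sync FIFO mode also requires the EEPROM to be programmed for FT245 FIFO mode; the bitmode request alone is not sufficient.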
Is it possible to access external hardware without using a driver, i.e. without the driver abstraction layer between the program and the external device?
Can you use a device by implementing your own driver-like controlling/handling directly in your program code?
I'm trying to understand a program that implements the Modbus protocol and some very specific Modbus configurations. Now, I don't know exactly how it communicates with the Modbus devices.
It looks to me that this is very similar to what a driver does.
But can it even communicate DIRECTLY with the device without having a driver installed?
Yes; several microkernel OSes are always configured this way: drivers are implemented entirely outside of the kernel.
The first thing you will likely need is access to the device's registers, typically obtained with mmap(); you may need to dig around a bit to find the right settings for cacheability, etc.
The second problem is interrupts. Unless you are running something like QNX, you won't have a way to have interrupts signal your program directly. You will probably have to turn them off and poll the device periodically.
If you are using Linux and need I/O ports (inb, outb, etc.), see man ioperm for more information.
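As a rough illustration of the mmap() approach on Linux, here is a sketch that maps a physical register block through /dev/mem and reads one register. The base address and offset are hypothetical (they happen to match the GPIO block on a BCM283x-based Raspberry Pi):

    import mmap, os, struct

    REG_BASE = 0x3F200000  # hypothetical: GPIO block base (Pi 2/3, BCM283x)
    REG_SIZE = 4096        # map one page of registers

    fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)  # requires root
    regs = mmap.mmap(fd, REG_SIZE, mmap.MAP_SHARED,
                     mmap.PROT_READ | mmap.PROT_WRITE, offset=REG_BASE)
    os.close(fd)

    # Read the 32-bit register at offset 0x34 (GPLEV0: current pin levels)
    value = struct.unpack_from("<I", regs, 0x34)[0]
    print(f"GPLEV0 = {value:#010x}")

    regs.close()

Without interrupt delivery, a real program would wrap such reads in the polling loop mentioned above.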
We need to do some stress testing of our system, and we would like to be able to simulate non-ideal situations: things like latency, jitter, etc. In particular, we would like to simulate behavior of data over a cellular network.
Do you know of any hardware/software/both solutions that would work?
Thanks
Ideally, you would get some idea about parametrization from a real network simulator like ns-3, or write one yourself.
Additionally, you could use the Linux kernel's built-in QoS stack, which provides the netem module for exactly these purposes. netem provides network emulation functionality for testing protocols by emulating the properties of wide area networks. The current version emulates variable delay (jitter), loss, packet corruption, duplication and re-ordering. It supports distribution-based operation, and you can script it to change values at run time.
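For example (the interface name and the numbers are illustrative, not measurements of a real cellular network):

    # Add cellular-ish latency, jitter, loss and reordering on eth0
    tc qdisc add dev eth0 root netem delay 300ms 100ms 25% loss 1% reorder 5% 50%
    # Change the parameters on the fly
    tc qdisc change dev eth0 root netem delay 150ms 50ms
    # Remove the emulation again
    tc qdisc del dev eth0 root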
A WiFi card with an older access point/router: simply take the test station to the edge of the range and you should be able to reliably cause the connection to fail and reconnect. The only reason I suggest an older model is that the range generally wasn't that fantastic on the older 802.11b stuff.
Other than giving you a lossy connection, though, I am not sure you'd be able to use this setup to test the specific characteristics of a cellular connection, but it should work.
If you are in the US, an iPhone on AT&T would probably do it.
You probably need something along the lines of:
a USRP board, OpenBTS, and Trixbox/Asterisk.
You can check out OpenBTS (http://openbts.sourceforge.net/) and see if it will do what you need. You could have it use the USRP board as a tower, then use it much like a loopback. I do know that the above combination will allow phones to connect to it like a cell tower (see Burning Man/DEF CON 18), so in theory it should allow you to broadcast out to saturate the spectrum.
OpenBTS-UMTS includes 3G data: http://openbts.org/w/index.php?title=OpenBTS-UMTS
You can download and compile it on Ubuntu 16.04; there are some dependency issues on Ubuntu 18.04.
As for hardware, I have used both the Ettus USRP N210 and the X310.
I am communicating across USB, using a proprietary protocol, with some custom hardware I've built. I have a GUI that handles all the communication/interaction with that hardware, and a (C#) DLL which exposes all the relevant USB functionality. I need to write a LabVIEW driver (VI) for communicating with the hardware. My thought is to have LabVIEW open my GUI and expose all the relevant control to LabVIEW through a socket. Is it possible to open a socket in LabVIEW and communicate with my GUI? Is this a bad approach, or should I instead make LabVIEW invoke the DLL and handle the hardware control in place of my GUI (polled communications, solicited/unsolicited commands, etc.)?
Is there a reason you want to use only your GUI? In terms of time, I would say build a good front panel in LabVIEW and just communicate with the hardware directly using the DLL. Adding the GUI is just an extra layer of complexity which might be difficult to maintain later on. Why not do everything in LabVIEW if you can?
Yes, LabVIEW supports sockets using both TCP/IP and UDP.
You should be able to create a program/service that continually runs, acting as a TCP/IP server. You can send commands and receive responses as strings. If you need to pack data, you can use the Flatten To String function.
Essentially, your application should be structured as one loop running the TCP/IP server and another loop that actually communicates with the instrument. This might change if you need to get data back from the devices to your TCP client. A producer-consumer model, if you will :)
To get you started off, open up the NI Example Finder (Help -> Find Examples) and browse to Networking->TCP and UDP-> Simple Data Server.vi
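To exercise such a server from the GUI side, any TCP client will do; here is a minimal sketch in Python, assuming a hypothetical newline-terminated command protocol on port 6340 (both the port and the command name are made up):

    import socket

    HOST, PORT = "localhost", 6340  # hypothetical LabVIEW TCP server

    def send_command(cmd: str) -> str:
        """Send one newline-terminated command and return the reply line."""
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            sock.sendall((cmd + "\n").encode("ascii"))
            reply = sock.makefile("r", encoding="ascii").readline()
        return reply.strip()

    print(send_command("READ:STATUS?"))  # hypothetical command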
It depends who is going to be using the LabVIEW driver and for what. If you're handing over this hardware to someone else who is going to want to create their own application(s) for it, they would probably prefer to talk directly to the DLL rather than go through your GUI. If it's more about automating your existing software from LabVIEW to do testing or repetitive tasks on the hardware, for example, then driving your GUI from LabVIEW might be less work.
I would like to make a simple VST plugin that does this:
analyze an audio stream (volume, beat, etc.)
trigger on the analyzer's output (e.g. do something when volume > threshold)
generate MIDI events based on those triggers
This is to be able to chain plugins, even if they are not designed for it. For example, I could control the gain of a compressor with the envelope of an audio stream, simply by connecting the MIDI OUT of my plugin to the MIDI IN controlling the compressor's gain knob.
The problem is I don't know how to do this. Is there support for direct MIDI connections like this in VSTs? Or do I need some sort of "virtual MIDI device" for the interconnects?
Your hunch here is probably correct; this task will be easier to accomplish by writing a virtual MIDI device instead of a VST plugin. It is possible to send MIDI events to a sequencer using the sendVstEventsToHost() call, but the problem is that the documentation never specifies how the host is required to react to these events. Many hosts simply ignore them, and I certainly can't think of one which allows easy routing from a plugin to a MIDI channel (maybe plogue bidule?).
You might be able to accomplish this with Audio Units with the kAudioUnitType_Generator plugin type... though I've never written such a plugin, my impression was that this is what you'd use to generate MIDI to the host. But again, the problem here is that I'm not sure how the host would allow you to route audio to the plugin and accept MIDI from it.
At any rate, implementing your idea as a plugin will be the most difficult route if you want its behavior to be consistent across the most widely used sequencers. I think a far easier way to accomplish what you want is to create a virtual MIDI device, as you'd already thought of, and then use ReWire to route an input signal to your program.
Edit: Here are some resources on writing MIDI drivers for various systems:
Audio device driver programming in OS X
Windows MIDI driver API guide
VST plugins do not support direct MIDI connections; they can only have MIDI in/out ports.
It is still possible to do it, though: you just need a host that supports routing MIDI from one plugin to another. Modular hosts such as energyXT, Bidule, AudioMulch and Console excel here. They all allow audio and MIDI signals to be routed freely (with no feedback paths, however). It may also be possible in hosts with more 'traditional' mixer-style VST racks (for example, AFAIK Reaper will forward any MIDI from one plugin to the next).
If you want to build your plugin in .NET, take a look at VST.NET.