I need to enumerate DirectSound devices on Windows and serialise the actual device used to output a particular channel. Normally this appears to be done by saving the DirectSound device GUID. However, I need to connect to the same hardware if it exists on a different computer. I've tried using the GUID, but it is different on different computers with exactly the same audio device plugged in.
Since it's the audio hardware I'm binding to, rather than a role, I believe I should be using a device interface path, as shown in Windows Device Manager, but there doesn't seem to be a way to get from the DirectSound object to the Device Manager path.
Is it possible to make this mapping?
There is an example here: http://www.chrisnet.net/code.htm which shows how to use the CLSID_DirectSoundPrivate object. It is non-trivial, almost impossible to find via MSDN if you don't know what to look for, and has an awkward interface involving multiple calls that is not explained anywhere other than in this example.
I took this example and ended up getting stack violations when attempting to call the Get method on the property set.
It turns out that DirectShow defines an IKsPropertySet interface with the same GUID but a different vtable, causing horrible vtable-related problems if you #include dshow.h or strmif.h before dsound.h. Needless to say, I'm unimpressed.
The calls needed are as follows:
DSPROPERTY_DIRECTSOUNDDEVICE_DESCRIPTION_DATA sDirectSoundDeviceDescription = {0};
PDSPROPERTY_DIRECTSOUNDDEVICE_DESCRIPTION_DATA psDirectSoundDeviceDescription = NULL;
ULONG ulBytesReturned = 0;

// DeviceId must be set to the GUID of the device to query (obtained elsewhere,
// e.g. via DirectSoundEnumerate).
sDirectSoundDeviceDescription.DeviceId = guidDevice;

// First call: with only the fixed-size struct, it reports how much memory is
// needed to also receive the strings.
hr = pKsPropertySet->Get(DSPROPSETID_DirectSoundDevice,
                         DSPROPERTY_DIRECTSOUNDDEVICE_DESCRIPTION,
                         NULL,
                         0,
                         &sDirectSoundDeviceDescription,
                         sizeof(sDirectSoundDeviceDescription),
                         &ulBytesReturned);

if (ulBytesReturned)
{
    // Allocate the required memory; the strings are placed in the memory space
    // directly after the struct.
    psDirectSoundDeviceDescription = (PDSPROPERTY_DIRECTSOUNDDEVICE_DESCRIPTION_DATA)new BYTE[ulBytesReturned];
    *psDirectSoundDeviceDescription = sDirectSoundDeviceDescription;

    // Second call: same property, but with the full-size buffer this time.
    hr = pKsPropertySet->Get(DSPROPSETID_DirectSoundDevice,
                             DSPROPERTY_DIRECTSOUNDDEVICE_DESCRIPTION,
                             NULL,
                             0,
                             psDirectSoundDeviceDescription,
                             ulBytesReturned,
                             &ulBytesReturned);
}
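The description structure returned by the second call is what answers the original question: its Interface member holds the device interface path (the same style of path Device Manager shows), alongside the friendly name and driver module. A minimal sketch of reading it, assuming a Unicode build (the W variant of the structure) and that the second Get above succeeded:

if (SUCCEEDED(hr))
{
    // Interface is the device interface path; Description and Module are the
    // friendly name and the driver module name.
    wprintf(L"Description: %s\n", psDirectSoundDeviceDescription->Description);
    wprintf(L"Module:      %s\n", psDirectSoundDeviceDescription->Module);
    wprintf(L"Interface:   %s\n", psDirectSoundDeviceDescription->Interface);
}
delete [] (BYTE*)psDirectSoundDeviceDescription;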
I am trying out this MCU / SoC emulator, Renode.
I loaded their existing model template under platforms/cpus/stm32l072.repl, which just includes the repl file for stm32l071 and adds one little thing.
When I then load and run a program binary built with STM32CubeIDE and ST's LL library, the code reaches the initial SystemClock_Config() function, where the Flash:ACR register is polled in a loop waiting for an expected change in value. It gets stuck there, and the Renode Monitor window outputs:
[WARNING] sysbus: Read from an unimplemented register Flash:ACR (0x40022000), returning a value from SVD: 0x0
This seems to be expected; not all existing templates model everything out of the box. I also found that the stm32l071 model is missing some of the USARTs and NVIC channels. I can see roughly how the latter might be added, but none of the default models seems to define that Flash:ACR register, so there is no example I could use.
How would one add such a missing register for this particular MCU model?
Note 1: For this test I'm using an STM32 firmware binary which works as intended on actual hardware, i.e. a devboard for this MCU.
Note 2: The stated advantage of Renode over QEMU, which apparently does not emulate peripherals, is that it also lets you stick together a more complex system out of mocked external devices, e.g. I2C and others (apparently C# modules; I haven't looked into that yet).
They say "use the same binary as on the real system".
That is my reason for trying this out: it sounds like a lot of potential for developing systems where the hardware is not yet fully available, and also for automated testing.
So the obvious workaround of commenting out large parts of the init code, in order to test only some hardware-independent code while sidestepping such issues, would defeat the purpose here.
If you just want to provide the flash ACR register so that your init code passes, use a tag.
You can either provide it via REPL (recommended, like here https://github.com/renode/renode/blob/master/platforms/cpus/stm32l071.repl#L175) or via RESC.
Assume your software expects to read the value 0xDEADBEEF. In the repl you'd use:
sysbus:
    init:
        Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
In the resc or in the Monitor it would be just:
sysbus Tag <0x40022000, 0x40022003> "ACR" 0xDEADBEEF
If you want more complex logic, you can use a Python peripheral, as described in the docs (https://renode.readthedocs.io/en/latest/basic/using-python.html#python-peripherals-in-a-platform-description):
flash: Python.PythonPeripheral @ sysbus 0x40022000
    size: 0x1000
    initable: false
    filename: "script_with_complex_python_logic.py"
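If you go the Python route, the script itself is small. A minimal sketch of what script_with_complex_python_logic.py could contain, assuming the request object (isRead/isWrite, value, offset) described in the Python peripherals documentation; 0xDEADBEEF is just the example value from above:

# Report the value the init loop is waiting for on every read; ignore writes.
# request.offset says which register within the peripheral's range was accessed.
if request.isRead:
    request.value = 0xDEADBEEF
elif request.isWrite:
    pass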
If you really need an advanced implementation, then you'll have to create a complete C# model.
As you correctly mentioned, we do not want you to modify your binary. But we're ok with mocking some parts we're not interested in for a particular use case if the software passes with these mocks.
Disclaimer: I'm one of the Renode developers.
I'm actually using the EcoStruxure skin on CODESYS, trying to attach the output of a Modicon encoder to determine the speed a motor should run at.
However, I can't figure out how to grab the variable that would correspond to that data. Any help appreciated.
The screenshots below indicate where I'm stuck. I need to create a variable of type ENC_REF_M262, but double-clicking on the device does not take me to a screen where I can map it, like I've seen with a number of other devices. Obviously I'm missing something. The link between this physical device and actually getting data from it is the part I'm struggling with.
Edit: The system now seems to have correctly accepted the settings, where before the reference to the Incremental Encoder variable could not be found. It seems to take a while to catch up.
I'm new to SUMO, Veins, OMNeT++ and simulations, with a bit of networking background. I have successfully set up the environment and run the Veins 4.6 demo application. On Google I found that, unlike RSUs, car modules are added on the fly.
In the demo example, car nodes send AirFrame11p messages, but I don't see where this message is populated: in the TraCIDemo11p.cc methods (onWSA, onWSM, handleSelfMsg, handlePositionUpdate) we deal with WSM message types, and the BaseWaveApplLayer::checkAndTrackPacket method ensures that the message being sent is either a BSM, WSM or WSA.
The file AirFrame11p.msg exists in veins\src\veins\modules\messages, but searching the project for references to "AirFrame11p" only finds matches in AirFrame11p_m.h and AirFrame11p_m.cc. If the demo is not using these files, what were they added for, and where does the simulation get the AirFrame11p annotation from?
I'm trying to simulate a car accident scenario without an RSU, using V2V communication. I have replaced the demo map with my own and generated random routes, and I am now trying to remove the RSU from the demo application and exploring how to send customized messages (including geo location, speed, direction, time, etc.) to nearby vehicles within a specified range, e.g. 100 meters, using Wi-Fi Direct.
If I'm confusing something, please guide me. Thanks.
The short answer: the AirFrame11p message is a lower-level message that encapsulates the upper-layer messages. Just use the application message type that is appropriate for your application. If you want to replace the physical layer with Wi-Fi Direct instead of 11p and you're starting from scratch, you're probably in for quite a bit of work, since the VEINS PHY implementation is very intricate. If you have an existing implementation of Wi-Fi Direct, it may be worth investigating integrating VEINS' TraCI implementation with that code.
Encapsulation in VEINS
You are correct that the message types at the application layer are more diverse -- these message types (BSM and WSM) are used to encapsulate "application" behavior; it's just not very well visualized in the simulation execution. You can pause the simulation and look (for example) under scheduled events, where the queued packets can be examined visually.
Unlike regular networks, where such messages would be packaged in IP, MAC and PHY encapsulations, VEINS uses the following encapsulation process: BSMs are packaged in MAC frames (80211Pkt), which in turn are encapsulated by AirFrame11p signals. So basically, you should choose the correct message type for your application.
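For illustration, a rough sketch of sending an application-level message in the style of the Veins 4.6 demo; this assumes BaseWaveApplLayer's populateWSM()/sendDown() helpers and the wsmData field of WaveShortMessage as shipped with that version, so check the names against your generated *_m.h headers:

// Inside a BaseWaveApplLayer subclass such as TraCIDemo11p.
WaveShortMessage* wsm = new WaveShortMessage();
populateWSM(wsm);                                // fill in the WAVE headers
wsm->setWsmData(mobility->getRoadId().c_str());  // your payload (position, speed, ...)
sendDown(wsm);                                   // MAC wraps it in an 802.11p frame,
                                                 // the PHY wraps that in an AirFrame11p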
Footnote regarding application behavior:
Technically speaking, these messages would be more correctly placed at the Facilities layer (see e.g. ETSI's spec), since the periodic exchange of messages provides data stored in the facilities layer, which is then used by cITS/VANET applications that run on top. If you need this, look at Artery (as Ventu suggested in the comments).
I'm trying to understand why there are ioctl calls in socket.c. The modified kernel that I am using has some ioctl calls which load the required modules when the calls are made.
I was wondering why these calls ended up in socket.c. Isn't a socket kind of not-a-device, while ioctls are primarily used for devices?
I'm talking about a heavily modified 2.6.32 kernel here.
ioctl suffers from its historic name. While originally developed to perform I/O controls on devices, its construct is generic enough that it may be used for arbitrary service requests to the kernel in the context of a file descriptor. A file descriptor is an opaque value (just an int) provided by the kernel that can be associated with anything.
Now if you treat everything as a file descriptor and think of things as files, which most *nix constructs do, open/read/write/close isn't enough. What if you want to label a file (rename)? What if you want to wait for a file to become available (ioctl)? What if you want to terminate everything if a file closes (termios)? All the "meta" operations that don't make sense in the core read/write context are lumped under ioctls, fcntls, etc., unless they are used so frequently that they deserve their own system call (e.g. the flock(2) functionality in BSD 4.2).
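As a concrete illustration (a minimal standalone sketch, not taken from the kernel in question), FIONREAD on a socket is exactly such a "meta" request: it asks the kernel something about the descriptor that read()/write() cannot express, even though a socket is not a device:

#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* a socket, not a "device" */
    int pending = 0;

    /* FIONREAD: how many bytes are waiting to be read on this descriptor. */
    if (ioctl(fd, FIONREAD, &pending) == 0)
        printf("bytes pending: %d\n", pending);

    close(fd);
    return 0;
}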
Consider testing the project you've just implemented. If it uses the system clock in any way, testing it becomes an issue. The first solution that comes to mind is simulation: manually manipulate the system clock to fool all the components of your software into believing that time is ticking the way you want it to. How do you implement such a solution?
My solution is:
Use a virtual machine (e.g. VMware Player), install a Linux distribution (I leave the choice to you), and manipulate the virtual system's clock to create the illusion of time passing. The only problem is that the clock keeps ticking while your code is running. I myself am looking for a solution where time actually stops and won't change unless I tell it to.
Constraints:
You can't restrict the list of components used in the project, as they might be anything. For instance, I used MySQL date/time functions and I want to fool them without amending MySQL's code in any way (that would be too costly, since you might end up recompiling every single component of your project).
Write a small program that changes the system clock when you want, and by how much you want. For example, each second, advance the clock by an extra 59 seconds.
The small program should either
keep track of what it did, so it can undo it, or
use the Network Time Protocol to get the clock back to its old value (take a reference before, remember the difference, query NTP afterwards, apply the difference).
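A minimal sketch of that idea on Linux, assuming gettimeofday()/settimeofday() and root privileges; undoing the change or resynchronising via NTP is left out:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
    for (;;) {
        sleep(1);                         /* once per real second...           */
        struct timeval tv;
        gettimeofday(&tv, NULL);
        tv.tv_sec += 59;                  /* ...push the clock 59 s further on */
        if (settimeofday(&tv, NULL) != 0) {
            perror("settimeofday");       /* typically requires root           */
            return 1;
        }
    }
}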
From your additional explanation in the comments (maybe you could add them to your question?), my thoughts are:
You may already have solved 1 & 2, but they relate to the problem, if not the question.
1) This is a web application, so you only need to concern yourself with your server's clock. Don't trust any clock that is controlled by the client.
2) You only seem to need elapsed time as opposed to absolute time. Therefore why not keep track of the time at which the server request starts and ends, then add the elapsed server time back on to the remaining 'time-bank' (or whatever the constraint is)?
3) As far as testing goes, you don't need to concern yourself with any actual 'clock' at all. As Gilbert Le Blanc suggests, write a wrapper around your system calls that you can then use to return dummy test data. So if you had a method getTime() which returned the current system time, you could wrap it in another method or overload it with a parameter that returns an arbitrary offset.
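For example, a minimal sketch of such a getTime() wrapper in Java (the class and method names here are made up): production code asks it for the time instead of the system clock, and tests move the offset around without any real time passing:

public class TestableClock {

    private static long offsetMillis = 0;

    // Production code calls this instead of System.currentTimeMillis().
    public static long getTime() {
        return System.currentTimeMillis() + offsetMillis;
    }

    // Test code only: pretend an arbitrary amount of time has passed.
    public static void advance(long millis) {
        offsetMillis += millis;
    }
}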
Encapsulate your system calls in their own methods, and you can replace the system calls with simulation calls for testing.
Edited to show an example.
I write Java games. Here's a simple Java Font class that puts the font for the game in one place, in case I decide to change the font later.
package xxx.xxx.minesweeper.view;

import java.awt.Font;

public class MinesweeperFont {

    protected static final String FONT_NAME = "Comic Sans MS";

    public static Font getBoldFont(int pointSize) {
        return new Font(FONT_NAME, Font.BOLD, pointSize);
    }
}
Again, using Java, here's a simple method of encapsulating a System call.
public static void printConsole(String text) {
    System.out.println(text);
}
Replace every instance of System.out.println in your code with printConsole, and your system call exists in only one place.
By overriding or modifying the encapsulated methods, you can test them.
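For instance, a test can temporarily swap in its own stream before calling the encapsulated method (a minimal sketch using only standard java.io classes; the test class name is made up):

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class PrintConsoleTest {

    public static void main(String[] args) {
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        PrintStream original = System.out;

        // Redirect System.out so the wrapper's output can be inspected.
        System.setOut(new PrintStream(captured));
        printConsole("hello");
        System.setOut(original);

        System.out.println("captured: " + captured.toString().trim());
    }

    // The encapsulated system call from the example above.
    public static void printConsole(String text) {
        System.out.println(text);
    }
}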
Another solution would be to debug and manipulate the values returned by time functions, setting them to anything you want.