This is rather annoying:
Since I started using MPMoviePlayerController, the console has been flooded with messages from MPAVController, e.g.:
[MPAVController] Autoplay: _streamLikelyToKeepUp: 1 -> 1
[MPAVController] Autoplay: Disabling autoplay
This is annoying because I always have to search for my own logged information in between.
Is there a way to turn off logging for specific objects or frameworks?
I don't think such filtering is possible out of the box. But it is possible to redirect stderr (which NSLog writes to) into a pipe, read from that pipe on a background thread, and then print the messages that pass your filter to stdout (which the debugger captures as well). This code does the job:
int main(int argc, char *argv[])
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^(void) {
        size_t const BUFFER_SIZE = 2048;
        // Create a pipe
        int pipe_in_out[2];
        if (pipe(pipe_in_out) == -1)
            return;
        // Redirect stderr into the write end of the pipe
        if (dup2(pipe_in_out[1], STDERR_FILENO) == -1)
            return;
        char *buffer = malloc(BUFFER_SIZE);
        if (buffer == NULL)
            return;
        for (;;)
        {
            // Read from the read end of the pipe
            ssize_t bytes_read = read(pipe_in_out[0], buffer, BUFFER_SIZE - 1);
            if (bytes_read <= 0)
                break;
            // NUL-terminate so the filter can treat the chunk as a C string
            buffer[bytes_read] = '\0';
            // Filter and print to stdout
            if (should_show(buffer)) // TODO: Apply filters here
                fwrite(buffer, 1, bytes_read, stdout);
        }
        free(buffer);
        close(pipe_in_out[0]);
        close(pipe_in_out[1]);
    });
    // Rest of main
}
Please note that this code is quite simple and doesn't handle all corner cases. First of all, it captures all stderr output, not just NSLog. If that's a problem, it could be filtered by checking the content: NSLog output always starts with the date and time.
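For example, a minimal should_show() along those lines could treat any chunk that starts with a digit (the date) as NSLog output and then drop chunks that mention the noisy source. This is only a sketch, not a complete parser; the "[MPAVController]" tag is the one from the question, so substitute your own blacklist:

#include <ctype.h>
#include <stdbool.h>
#include <string.h>

// Sketch: decide whether a chunk read from the pipe should be printed.
// Assumes the chunk has been NUL-terminated as in the loop above.
static bool should_show(const char *chunk)
{
    // NSLog lines start with a date, e.g. "2012-11-23 10:40:01.234 ..."
    if (!isdigit((unsigned char)chunk[0]))
        return true; // not NSLog output; let it through unchanged
    return strstr(chunk, "[MPAVController]") == NULL;
}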
The second problem with this code is that it makes no attempt to split or join the strings it reads from the pipe. There is no guarantee of one NSLog message per read: several messages may arrive in a single chunk, or a long message may be split across two reads. Handling this would require additional processing of the data read from the pipe.
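One way to do that additional processing, sketched here under the assumption that messages are newline-terminated, is to wrap the read end of the pipe in a FILE* so that stdio does the line splitting for you:

// Sketch: filter line by line instead of chunk by chunk.
FILE *in = fdopen(pipe_in_out[0], "r");
char line[2048];
while (in != NULL && fgets(line, sizeof(line), in) != NULL)
{
    // Lines longer than the buffer are still delivered in pieces,
    // but this is good enough for typical log output.
    if (should_show(line))
        fputs(line, stdout);
}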
Anyway, for many practical purposes this should be enough.
You should look into NSLogger. While NSLog doesn't give you any selectivity about what you see from run to run, NSLogger does. NSLogger displays output from the device (or simulator) in its own window on OS X.
Basically, it adds the concept of a facility and a level to output. Unix wizards might find fault with the comparison, but I see it as very similar to syslog. The NSLogger viewer lets you display messages for one or more facilities (which you define) that also meet a minimum level.
Macros define what you see in the output window. Here's an excerpt:
#ifdef DEBUG
#define LOG_GENERAL(level, ...) LogMessageF(__FILE__,__LINE__,__FUNCTION__,@"general",level,__VA_ARGS__)
#else
#define LOG_GENERAL(...) do{}while(0)
#endif
When DEBUG is off, no messages appear. When it's on, if you have a LOG_GENERAL() statement in your code, your viewer is configured to display the facility "general", and the message's level meets the display threshold, you get a message.
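Usage then looks something like this (the level scale is your own convention, and the arguments here are hypothetical):

// Level 0 = most important, in this hypothetical convention.
LOG_GENERAL(0, @"Connected to %@ in %.2f s", serverName, elapsedSeconds);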
It's incredibly flexible and I like it a lot. It takes about five minutes to add to your project. Take a look at the NSLogger page on GitHub for full details and the download.
(This will not solve the problem of MPAVController filling the console with messages, but it does put the messages you want in a new window, making it much easier to control, filter and interpret what you are interested in.)
Another option, if you can use it, is to run either a simulator or a device running iOS < 6.0.
The MPAVController log messages do not appear for me when using a 5.0 device or the 5.1 Simulator. But they definitely appear in the 6.0 Simulator.
Of course one should generally use the current OS, but if you are working on a video-heavy section of a project, running an earlier simulator or device while working on that particular set of tasks is a way to alleviate this logging headache.
This also provides some backward compatibility testing as a bonus.
I know this forum dislikes "open" questions like this; nevertheless, I'd like somebody to help untie the knot in my head. Much appreciated.
The goal is simple:
read a stereo 32-bit 44100 S/s I2S signal from 2 Adafruit SPH0645 mics
create a WAV header and store the data onto an SD card
I've been at this for a few days now, and I know that this will be much more complicated than I originally thought. The main reason: signal quality. As in most tutorials on this subject, the simplest "hello world" for these mics is a loop polling for I2S samples: poll, fill a buffer, output via serial or write to the SD card. This returns a choppy, noisy, sped-up version of the real audio. The filling of the internal DMA buffers can be taken as constant, but the rest is mostly chaos, so:
how do I sync these DMA buffers with the rest of my code?
From experience with the STM32 HAL, I'd imagine some register that can be set to throw an interrupt whenever a buffer is full, or an event that can be sent between tasks via queues. Examples on this subject either poll in a main loop, in mono, at an abysmal sample rate and bit depth, or use pages of overkill code and never address what it does ("just copy and it works"; not good). Does the ESP32 Arduino framework provide some way to do this properly? The Espressif documentation isn't something to look forward to, since some of their I2S interface functions don't even work (if you are researching this topic as well, you too might have noticed that i2s_read only returns zeros). Just a hint in the right direction would help; I'm writing my own code anyway. Interrupts? Events? Timers? Polling for full buffers? Only you might know.
Have a good one, thx.
Thanks to https://github.com/atomic14/ I now have an answer: a syncing method that works very well. This method was also tried by https://esp32.com/viewtopic.php?t=12546, who likewise didn't fully understand what was going on: the Espressif I2S interface offers a flag, stored in an event, which is triggered every time one of the specified DMA buffers has received a full set of data, ergo, is full. It looks like this:
while (<your condition>) {
    i2s_event_t evt;
    if (xQueueReceive(<your queue>, &evt, portMAX_DELAY) == pdPASS) {
        if (evt.type == I2S_EVENT_RX_DONE) {
            size_t bytesRead = 0;
            do {
                // read data via i2s_read (or the older i2s_read_bytes), e.g.:
                i2s_read(I2S_NUM_0, <your buffer>, <buffer size>, &bytesRead, 0);
            } while (bytesRead > 0); // drain until the DMA buffers are empty
        }
    }
}
No data is stored in this queue, but rather a flag, which can then be used to synchronize the DMA filling with your further buffering/calculating/sending of the read data.
HOWEVER, this only works if you install the I2S driver with a specific setup. Instead of using
i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
in your setup, you enable event delivery by passing a queue handle and a queue length:
i2s_driver_install(I2S_NUM_0, &i2s_config, 4, &<your queue>);
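For completeness, a concrete version of that setup (as I understand the API, the driver creates the queue itself during installation and hands the handle back through the last argument, so you only declare the handle):

#include "driver/i2s.h" // legacy ESP-IDF I2S driver, available in Arduino-ESP32

// Sketch: the driver allocates an event queue of length 4 and writes
// its handle into i2s_event_queue during installation.
QueueHandle_t i2s_event_queue;
i2s_driver_install(I2S_NUM_0, &i2s_config, 4, &i2s_event_queue);

// Later, in the reader task, block on that same handle:
i2s_event_t evt;
xQueueReceive(i2s_event_queue, &evt, portMAX_DELAY);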
Hope this helps you get started; it sure did help me.
I have been trying to write a program that takes in data from a UDP socket and displays it in an edit control box as the data is received (my exposure to C++ is also only about a week :P, I have only done embedded C code before). I have a working program that can send and output data on a button click, but I want something that can do it in real time. The aim is to scale this up into a larger GUI program that can send control data to hardware and get responses back.
I have run into various problems including:
The program just not executing my OnReceive function (derived from CAsyncSocket)
Getting the OnReceive function to run on a separate thread, so that it can still run after a button click has sent a control packet to the client and the code is waiting for a response in a while loop
Not being able to output the data in the edit box (I tried using both CEdit and CString)
A ReplaceSel error saying that the type char is incompatible with LPCTSTR
My code is based on this codeproject.com tutorial, which does almost exactly what I want, but I get the error in 4.
EDIT: the error in 4. disappears when I change the buffer to TCHAR, but then it outputs random Chinese characters. The codeproject.com tutorial outputs the correct characters regardless of whether char or TCHAR is declared. When debugged, my code has type wchar_t instead of type char like the other code.
(Screenshot: Chinese characters in the edit box)
In the working program, echoBuffer[0] (the character sent and displayed) was a 1.
UINT ReceiveData(LPVOID pParam)
{
    CTesterDlg *dlg = (CTesterDlg*)pParam;
    AfxSocketInit(NULL);
    CSocket echoServer;
    // Create socket for sending/receiving datagrams
    if (echoServer.Create(12345, SOCK_DGRAM, NULL) == 0)
    {
        AfxMessageBox(_T("Create() failed"));
        return 1; // no point continuing without a socket
    }
    for (;;)
    { // Run forever
        // Client address
        SOCKADDR_IN echoClntAddr;
        // Set the size of the in-out parameter
        int clntAddrLen = sizeof(echoClntAddr);
        // Buffer for echo string
        char echoBuffer[ECHOMAX];
        // Block until a message is received from a client;
        // leave room for the terminator
        int recvMsgSize = echoServer.ReceiveFrom(echoBuffer, ECHOMAX - 1, (SOCKADDR*)&echoClntAddr, &clntAddrLen, 0);
        if (recvMsgSize < 0)
        {
            AfxMessageBox(_T("RecvFrom() failed"));
            continue; // don't index the buffer with a negative size
        }
        echoBuffer[recvMsgSize] = '\0';
        dlg->m_edit.ReplaceSel(echoBuffer);
        dlg->m_edit.ReplaceSel(_T("\r\n"));
    }
}
After reading the link that @IInspectable provided about working with strings, and checking the settings differences between the two programs, it became clear that the issue lay with an incorrect conversion to Unicode. My program does not require Unicode, so I disabled it.
This cleared up the issue in 4. and provided solutions for 2. and 3.
I also think I know why another instance of my program would not run OnReceive in 1. (because that file was not being defined by one that was already being run by the program), but that is now irrelevant.
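If disabling Unicode is not an option for you, an alternative (sketched here, assuming the ATL conversion helpers that ship alongside MFC) is to convert the narrow buffer at the call site instead:

#include <atlconv.h>  // ATL string conversion helpers (CA2T)

// Convert the received char buffer to the TCHAR string ReplaceSel expects.
dlg->m_edit.ReplaceSel(CA2T(echoBuffer));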
I'm trying to read from my XBox 360 controller without polling it. (To be precise, I'm actually using a Logitech F310, but my Windows 10 PC sees it as an XBox 360 controller.) I've written some rather nasty HID code that uses overlapping I/O to block in a thread on two events, one that indicates there is a report ready to read from the HID device, the other indicating the UI thread has requested the HID thread to exit. That works fine, but the HID driver behaves somewhat differently than XInput does. In particular, it consolidates the two triggers into a single value, only passing their difference (on the curious claim that games expect HID values to be 0x80 when the player's finger is off the control). XInput treats them as two distinct values, which is a big improvement. Also, XInput reports the hat switches as four bits, which means you can actually get ten states out of it: unpressed, N, NE, E, SE, S, SW, W, NW, and all-down (that last might be hard to use successfully, but at least it's there if you want it; I've been using it to exit my polling loop).
The downside, to me, of XInput is that there appears to be no way to block on a read request until the controller changes one of its values or buttons. As an HID device, the ReadFile call will block (more exactly, WaitForMultipleObjects blocks until there is data available). XInput seems to anticipate polling. For a game that would naturally be written to poll the controller as often as it updates the game state (maybe once for each new video frame displayed, for example), that makes sense. But if you want to use the controller for some other purpose (I'm working on a theatrical application), you might want a purely asynchronous system like the one the HID API supplies. But, again, the HID API combines the two value triggers.
Now, when you read the device with XInput, not only do you get the state of all the controls, you also get a packet number. MSDN says the packet number only changes when the state of a control changes. That way, if consecutive packet numbers are the same, you don't have to bother with any processing after the first one, because you know the controller state hasn't changed. But you are still polling which, to me, is somewhat vulgar.
What intrigues me, however, is that when I put a big delay in between my polls (100ms) I can see that the packet numbers go up by more than one when the value controls (the triggers or sticks) are being moved. This, I think, suggests that the device is sending packets without waiting to be polled, and that I am only getting the most recent packet each time I poll. If that is the case, it seems that I ought to be able to block until a packet is sent, and react only when that happens, rather than having to poll at all. But I can't find any indication that this is an option. Because I can block with the HID API, I don't want to give up without trying (including asking for advice here).
Short of writing my own driver for the controller (which I'm not sure is even an option without proprietary documentation), does anyone know how I can use overlapping I/O (or any other blocking method) to read the XBox 360 controller the way XInput does, with the triggers as separate values, and the hat as four buttons?
Below is some code I wrote that reads the controller and shows that the packet numbers can jump by more than one between reads:
#include <Windows.h>
#include <Xinput.h>
#include <stdio.h>

#pragma comment(lib, "XInput.lib") // link the XInput import library

#define MAX_CONTROLLERS 4

int main()
{
    DWORD userIndex;
    XINPUT_STATE xs;
    XInputEnable(TRUE);
    // Which one are we?
    for (userIndex = 0; userIndex < XUSER_MAX_COUNT; ++userIndex)
        if (XInputGetState(userIndex, &xs) == ERROR_SUCCESS)
            break;
    if (userIndex == XUSER_MAX_COUNT)
    {
        printf("Couldn't find an Xbox 360 controller.\n");
        getchar();
        return -1;
    }
    printf("Using controller #%1lu.\n", userIndex);
    while (TRUE)
    {
        DWORD res = XInputGetState(userIndex, &xs);
        printf("%5lu %6lu: %3d %3d %3d %3d %3d %3d 0x%04X\n",
            res,
            xs.dwPacketNumber,
            xs.Gamepad.bLeftTrigger & 0xFF,
            xs.Gamepad.bRightTrigger & 0xFF,
            xs.Gamepad.sThumbLX & 0xFF,
            xs.Gamepad.sThumbLY & 0xFF,
            xs.Gamepad.sThumbRX & 0xFF,
            xs.Gamepad.sThumbRY & 0xFF,
            xs.Gamepad.wButtons);
        if (xs.Gamepad.wButtons == 0x000F) // all four hat buttons mashed down
            break;
        Sleep(100);
    }
    getchar();
    return 0;
}
Please note that DirectInput isn't much help, as it also combines the triggers into one value.
Thanks!
Not sure there is any advantage to this, but you could write a thread that polls on a regular interval and sets a semaphore (or some other signal) when the state has changed. Your main thread could then block waiting for the signal from the polling thread. Potentially, though, there might not be any advantage to this system, because on some controllers the values of the thumbsticks change slightly every frame whether you move them or not (noise). You could of course ignore small changes and only signal your semaphore when a large change occurs.
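A minimal sketch of that idea, assuming a single controller at user index 0 and using a Win32 event as the signal (names like g_stateChanged are placeholders, not a real API):

#include <Windows.h>
#include <Xinput.h>

#pragma comment(lib, "XInput.lib")

static HANDLE g_stateChanged;   // auto-reset event signalled on change
static XINPUT_STATE g_latest;   // last state seen by the poller

static DWORD WINAPI PollThread(LPVOID unused)
{
    (void)unused;
    DWORD lastPacket = 0;
    for (;;) // never returns; a real version would honour a shutdown flag
    {
        XINPUT_STATE xs;
        if (XInputGetState(0, &xs) == ERROR_SUCCESS &&
            xs.dwPacketNumber != lastPacket)
        {
            lastPacket = xs.dwPacketNumber;
            g_latest = xs;            // racy for a sketch; use a lock in real code
            SetEvent(g_stateChanged); // wake the blocked main thread
        }
        Sleep(10); // poll interval: trade latency against CPU use
    }
}

int main(void)
{
    g_stateChanged = CreateEvent(NULL, FALSE, FALSE, NULL);
    CreateThread(NULL, 0, PollThread, NULL, 0, NULL);
    for (;;)
    {
        WaitForSingleObject(g_stateChanged, INFINITE); // block, don't poll
        // react to g_latest here
    }
}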
I need to detect when the currently playing audio/video is paused. I cannot find anything for 1.0. My app is a bit complex, but here is condensed code:
/* This function is called when the pipeline changes states. We use it to
* keep track of the current state. */
static void state_changed_cb(GstBus *bus, GstMessage *msg, CustomData *data)
{
GstState old_state, new_state, pending_state;
gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
if(GST_MESSAGE_SRC(msg) == GST_OBJECT(data->playbin))
{
g_print("State set to %s\n", gst_element_state_get_name(new_state));
}
}
gst_init(&wxTheApp->argc, &argv);
m_playbin = gst_element_factory_make("playbin", "playbin");
if(!m_playbin)
{
g_printerr("Not all elements could be created.\n");
exit(1);
}
CustomData* data = new CustomData(xid, m_playbin);
GstBus *bus = gst_element_get_bus(m_playbin);
gst_bus_set_sync_handler(bus, (GstBusSyncHandler) create_window, data, NULL); // here I do the video overlay stuff
g_signal_connect (G_OBJECT (bus), "message::state-changed", (GCallback)state_changed_cb, &data);
What am I doing wrong? I cannot find a working example of connecting such events in GStreamer 1.0, and 0.x seems a bit different from 1.0, so the vast number of examples for it don't help.
UPDATE
I have found a way to get messages: I run a wxWidgets timer with a 500 ms span, and each time the timer fires I call
GstMessage* msg = gst_bus_pop(m_bus);
if (msg != NULL)
{
    g_print("New Message -- %s\n", gst_message_type_get_name(msg->type));
    gst_message_unref(msg); // gst_bus_pop transfers ownership of the message
}
Now I get a lot of state-changed messages. But I still want to know whether a given message means Pause, Stop, Play, or End of Media (that is, a way to differentiate which message this is) so that I can notify the UI.
So while I get messages now, the basic problem, getting at the specific state, remains unsolved.
You have to call gst_bus_add_signal_watch() (like in 0.10) to enable emission of the signals; without it, you can only use the other ways of getting notified about GstMessages on that bus.
Also, just to be sure: you need a running GLib main loop on the default main context for this to work. Otherwise you have to do things a bit differently.
For the updated question:
Check the documentation: gst_message_parse_state_changed() can be used to parse the old, new, and pending state from the message. This is also still the same as in 0.10; from the application's point of view, conceptually not much has changed between 0.10 and 1.0.
Also, you shouldn't do this timeout waiting, as it will block your wxWidgets main loop. The easiest solution would be to use a sync bus handler (which you already have) and dispatch all messages from there to some callback on the wxWidgets main loop.
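Putting both points together, a minimal sketch reusing the bus and callback from the question (note it passes data, not &data, since the callback expects the CustomData pointer itself):

/* Enable signal emission on the bus, then connect the callback. */
gst_bus_add_signal_watch(bus);
g_signal_connect(G_OBJECT(bus), "message::state-changed",
                 (GCallback)state_changed_cb, data);

/* In the callback, the parsed new_state tells pause and play apart. */
static void state_changed_cb(GstBus *bus, GstMessage *msg, CustomData *data)
{
    GstState old_state, new_state, pending_state;
    gst_message_parse_state_changed(msg, &old_state, &new_state, &pending_state);
    if (GST_MESSAGE_SRC(msg) == GST_OBJECT(data->playbin)) {
        if (new_state == GST_STATE_PAUSED)
            g_print("Paused\n");
        else if (new_state == GST_STATE_PLAYING)
            g_print("Playing\n");
    }
}

/* End of Media arrives as a separate message type, with its own detail: */
g_signal_connect(G_OBJECT(bus), "message::eos",
                 (GCallback)eos_cb, data); /* eos_cb is yours to define */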
I have a GPUImageColorDodgeBlend filter with two inputs connected:
A GPUImageVideoCamera which is getting frames from the iPhone video camera.
A GPUImageMovie which is an (MP4) video file that I want to have laid over the live camera feed.
The GPUImageColorDodgeBlend is then connected to two outputs:
A GPUImageView to provide a live preview of the blend in action.
A GPUImageMovieWriter to write the movie to storage once a record button is pressed.
Now, before the video starts recording, everything works OK 100% of the time. The GPUImageMovie is blended over the live camera video fine, and no issues or warnings are reported.
However, when the GPUImageMovieWriter starts recording, things start to go wrong randomly. About 80-90% of the time the GPUImageMovieWriter works perfectly: there are no errors or warnings, and the output video is written correctly.
However, about 10-20% of the time (and from what I can see, this is fairly random), things seem to go wrong during the recording process (although the on-screen preview continues to work fine).
Specifically, I start getting hundreds and hundreds of Program appending pixel buffer at time: errors.
This error originates from the - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex method in GPUImageMovieWriter.
This issue is triggered by problems with the frameTime values that are reported to this method.
From what I can see, the problem is caused by the writer sometimes receiving frames timestamped by the video camera (which tend to have extremely high time values like 64616612394291 with a timescale of 1000000000), but sometimes receiving frames timestamped by the GPUImageMovie, which are much lower (like 200200 with a timescale of 30000).
It seems that GPUImageMovieWriter is happy as long as the frame times are increasing, but once a frame time decreases, it stops writing and just emits Program appending pixel buffer at time: errors.
I seem to be doing something fairly common, and this hasn't been reported anywhere as a bug, so my questions are (answers to any or all of these are appreciated -- they don't all need to necessarily be answered sequentially as separate questions):
Where do the frameTime values come from -- why does it seem so arbitrary whether the frameTime is numbered according to the GPUImageVideoCamera source or the GPUImageMovie source? Why does it alternate between the two -- shouldn't the frame numbering scheme be uniform across all frames?
Am I correct in thinking that this issue is caused by non-increasing frameTimes?
...if so, why does GPUImageView accept and display the frameTimes just fine on the screen 100% of the time, yet GPUImageMovieWriter requires them to be ordered?
...and if so, how can I ensure that the frameTimes that come in are valid? I tried adding if (frameTime.value < previousFrameTime.value) return; to skip any lesser-numbered frames, and it works -- most of the time. Unfortunately, when I set playsAtActualSpeed on the GPUImageMovie, this becomes far less effective, as all the frames end up getting skipped after a certain point.
...or perhaps this is a bug, in which case I'll need to report it on GitHub -- but I'd be interested to know if there's something I've overlooked here in how the frameTimes work.
I've found a potential solution to this issue, which I've implemented as a hack for now, but could conceivably be extended to a proper solution.
I've traced the source of the timing back to GPUImageTwoInputFilter which essentially multiplexes the two input sources into a single output of frames.
In the method - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex, the filter waits until it has collected a frame from the first source (textureInput == 0) and the second, and then forwards on these frames to its targets.
The problem (the way I see it) is that the method simply uses the frameTime of whichever frame comes in second (excluding the cases of still images, for which CMTIME_IS_INDEFINITE(frameTime) == YES; I'm not considering those for now because I don't work with still images), and that may not always be the same input (for whatever reason).
The relevant code which checks for both frames and sends them on for processing is as follows:
if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    [super newFrameReadyAtTime:frameTime atIndex:0]; // this line has the problem
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}
What I've done is adjust the above code to [super newFrameReadyAtTime:firstFrameTime atIndex:0], so that it always uses the frameTime from the first input and totally ignores the frameTime from the second input. So far, it's all working fine like this. (I'd still be interested for someone to let me know why it's written this way, given that GPUImageMovieWriter seems to insist on increasing frameTimes, which the method as-is doesn't guarantee.)
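For reference, the adjusted block looks like this (firstFrameTime is assumed here to be the timestamp the filter captured when the first input's frame arrived):

if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    // Key the output off the first input's timestamp so the sequence of
    // frameTimes handed to GPUImageMovieWriter stays monotonically increasing.
    [super newFrameReadyAtTime:firstFrameTime atIndex:0];
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}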
Caveat: This will almost certainly break entirely if you work only with still images, in which case you will have CMTIME_IS_INDEFINITE(frameTime) == YES for your first input's frameTime.