I'm following this tutorial to use the Lottie animation library on watchOS. It works until I try to change the animation while the app is running: even though I change it, the animation stays the same until I run the watch app again from Xcode. (If I simply close the app and open it again, it doesn't change.)
What I tried:
clearing the cache from the watch file system
clearing URLCache
clearing SDImageCache memory and disk
You were right that caching causes your problem. However, you didn't go deep enough.
You use SDWebImageLottieCoder to display your animations, which in turn uses rlottie.
According to their readme:
"rlottie is a platform independent standalone c++ library for rendering vector based animations and art in realtime."
The point is that the animation is cached by the rlottie framework, so neither clearing URLCache nor SDImageCache will solve this.
The easiest way to solve this is to modify the code in the rlottie C++ library. In your Xcode project navigate to Pods -> Pods -> librlottie and look for the following file: lottieanimation_capi.cpp
In this file search for the following code:
RLOTTIE_API Lottie_Animation_S *lottie_animation_from_data(const char *data, const char *key, const char *resourcePath)
{
    if (auto animation = Animation::loadFromData(data, key, resourcePath)) {
        Lottie_Animation_S *handle = new Lottie_Animation_S();
        handle->mAnimation = std::move(animation);
        return handle;
    } else {
        return nullptr;
    }
}
This is the code that loads your Lottie animation on the device and is responsible for the caching. loadFromData takes an additional fourth parameter called cachePolicy, a boolean, and nothing is passed for it here.
So if you don't want your animation to be cached, pass false in this call.
Modify the code to this and it should work:
RLOTTIE_API Lottie_Animation_S *lottie_animation_from_data(const char *data, const char *key, const char *resourcePath)
{
    if (auto animation = Animation::loadFromData(data, key, resourcePath, false)) {
        Lottie_Animation_S *handle = new Lottie_Animation_S();
        handle->mAnimation = std::move(animation);
        return handle;
    } else {
        return nullptr;
    }
}
I'm trying to make an application that streams video through a GTK draw area. The pipeline I'm currently trying to run is videotestsrc ! ximagesink. My problem is that when I run my program, it displays the videotestsrc output, but only as a still image. This is different from running "gst-launch-1.0 videotestsrc ! ximagesink" in a terminal, where the static in the bottom right moves.
Any ideas on what I'm doing wrong?
int main(int argc, char* argv[])
{
    Gst::init(argc, argv);
    auto app = Gtk::Application::create(argc, argv, "gtkmm.video.sunshine.test");
    Program_Window window;
    return app->run(window);
}
class Program_Window : public Gtk::Window
{
public:
    Program_Window();
    virtual ~Program_Window();

protected:
    Gtk::DrawingArea* display;
    Glib::RefPtr<Gst::Pipeline> playbin;
    gulong window_handler;
    GstVideoOverlay* overlay;

    void on_display_realize();
};
Program_Window::Program_Window()
{
    //initialize variables
    display = new Gtk::DrawingArea();
    window_handler = 0;

    //connect realize callback
    display->signal_realize().connect( sigc::mem_fun( *this, &Program_Window::on_display_realize ));

    //create playbin
    playbin = Gst::PlayBin::create("playbin");

    //prepare elements for the pipeline
    Glib::RefPtr<Gst::Element> source = Gst::ElementFactory::create_element("videotestsrc", "src");
    Glib::RefPtr<Gst::Element> sink = Gst::ElementFactory::create_element("ximagesink", "sink");

    //add elements to the pipeline
    playbin->add(source)->add(sink);

    //link elements
    source->link(sink);

    //prep video overlay interface
    overlay = (GstVideoOverlay*) sink->gobj();

    //add drawing area to main window
    add(*display);
    show_all_children();
}
void Program_Window::on_display_realize()
{
    //acquire an xwindow pointer to our draw area
    window_handler = GDK_WINDOW_XID( display->get_window()->gobj() );

    //give xwindow pointer to our pipeline via video overlay interface
    gst_video_overlay_set_window_handle(overlay, window_handler);

    //start video
    playbin->set_state(Gst::STATE_PLAYING);
}
It could be that things are a tad slower in the app, causing subsequent frames to miss their clock times and get discarded. Try setting the sync=false option on the video sink and check whether it changes anything (a minimal sketch is below). Otherwise, use GST_DEBUG to get some logs from the pipeline about what is happening.
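For example, a minimal sketch using the sink created in the constructor above; set_property is the gstreamermm wrapper around the sink's standard sync property:

// Right after creating the sink in Program_Window's constructor:
Glib::RefPtr<Gst::Element> sink = Gst::ElementFactory::create_element("ximagesink", "sink");
sink->set_property("sync", false);   // same as g_object_set(sink->gobj(), "sync", FALSE, NULL)

// For diagnostics, start the app with pipeline logging enabled, e.g.:
//   GST_DEBUG=3 ./your_app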
P.S. When using GTK, consider using gtksink or gtkglsink to make your life easier.
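For example, a rough sketch of the gtksink route, assuming the gtksink plugin is installed; it exposes the widget it renders into via its "widget" property, so no GstVideoOverlay or X window handle is needed:

// In Program_Window's constructor, instead of ximagesink + GstVideoOverlay:
Glib::RefPtr<Gst::Element> sink = Gst::ElementFactory::create_element("gtksink", "sink");

// Fetch the GtkWidget that gtksink draws into and pack it into the window.
GtkWidget *video_widget = nullptr;
g_object_get(sink->gobj(), "widget", &video_widget, NULL);
add(*Glib::wrap(video_widget));
show_all_children();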
Fixed it. For whatever reason, the program didn't like the way I used the playbin. Changing it to a normal Gst::Pipeline worked.
//create playbin //in the above code
//playbin = Gst::PlayBin::create("playbin"); //change this
playbin = Gst::Pipeline::create("pipeline"); //to this
I recently tried to develop a Flutter plugin with CameraX, but I found there was no way to simply bind the Preview to Flutter's Texture.
In the past, I only needed to use camera.setPreviewTexture(surfaceTexture.surfaceTexture()) to bind the camera and the texture; now I can't find that API.
val previewConfig = PreviewConfig.Builder().apply {
    setTargetAspectRatio(Rational(1, 1))
    setTargetResolution(Size(640, 640))
}.build()

// Build the viewfinder use case
val preview = Preview(previewConfig).also {
}

preview.setOnPreviewOutputUpdateListener {
    // it.surfaceTexture = this.surfaceTexture.surfaceTexture()
}

// How to bind the CameraX Preview surfaceTexture and the Flutter surfaceTexture?
I think you can bind the texture via Preview.SurfaceProvider:
final CameraSelector cameraSelector = new CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build();
final ListenableFuture<ProcessCameraProvider> listenableFuture = ProcessCameraProvider.getInstance(appCompatActivity.getBaseContext());

listenableFuture.addListener(() -> {
    try {
        ProcessCameraProvider cameraProvider = listenableFuture.get();

        Preview preview = new Preview.Builder()
                .setTargetResolution(new Size(720, 1280))
                .build();

        cameraProvider.unbindAll();
        Camera camera = cameraProvider.bindToLifecycle(appCompatActivity, cameraSelector, preview);

        Preview.SurfaceProvider surfaceProvider = request -> {
            Size resolution = request.getResolution();
            surfaceTexture.setDefaultBufferSize(resolution.getWidth(), resolution.getHeight());
            Surface surface = new Surface(surfaceTexture);
            request.provideSurface(surface, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()), result -> {
            });
        };

        preview.setSurfaceProvider(surfaceProvider);
    } catch (Exception e) {
        e.printStackTrace();
    }
}, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()));
Update: since this answer was written, CameraX has added functionality that now allows this, but the following might still be useful to someone. See this answer for details.
It seems as though using CameraX for this is difficult to impossible, because it abstracts the more complicated things away and so doesn't expose the things you need, like the ability to pass in your own SurfaceTexture (which is normally created by Flutter).
So the simple answer is that you can't use CameraX.
That being said, with some work you may be able to get this working, but I have no idea whether it actually will. It's ugly and hacky, so I wouldn't recommend it. YMMV.
If we're going to do this, let's first look at how the Flutter view creates a texture:
@Override
public TextureRegistry.SurfaceTextureEntry createSurfaceTexture() {
    final SurfaceTexture surfaceTexture = new SurfaceTexture(0);
    surfaceTexture.detachFromGLContext();
    final SurfaceTextureRegistryEntry entry = new SurfaceTextureRegistryEntry(nextTextureId.getAndIncrement(),
            surfaceTexture);
    mNativeView.getFlutterJNI().registerTexture(entry.id(), surfaceTexture);
    return entry;
}
Most of that is replicable, so we may be able to do it with the surface texture the camera gives us.
You can get ahold of the texture the camera creates this way:
preview.setOnPreviewOutputUpdateListener { previewOutput ->
    val texture: SurfaceTexture = previewOutput.surfaceTexture
}
What you're going to have to do now is to pass a reference to your FlutterView into your plugin (I'll leave that for you to figure out). Then call flutterView.getFlutterNativeView() to get ahold of the FlutterNativeView.
Unfortunately, FlutterNativeView's getFlutterJni is package-private. So this is where it gets really hacky: you can create a class in that same package that exposes the package-private method through a publicly accessible one. It's super ugly, and you may have to fiddle around with Gradle to get the compilation security settings to allow it, but it should be possible.
After that, it should be simple enough to create a SurfaceTextureRegistryEntry and register the texture with the Flutter JNI. I don't think you want to detach from the OpenGL context, and I really have no idea if this will actually work. But if you want to try it out and report back what you find, I would be interested in hearing the result!
I am trying to detect when a user locks the device (vs. pressing the home button, for instance).
Found this:
CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(), // center
                                NULL,                                        // observer
                                lockStateChanged,                            // callback
                                CFSTR("com.apple.springboard.lockstate"),    // event name
                                NULL,                                        // object
                                CFNotificationSuspensionBehaviorDeliverImmediately);
static void lockStateChanged(CFNotificationCenterRef center, void *observer, CFStringRef name, const void *object, CFDictionaryRef userInfo) {
    NSLog(@"event received!");
    // you might try inspecting the `userInfo` dictionary, to see
    // if it contains any useful info
    if (userInfo != nil) {
        CFShow(userInfo);
    }
}
I imagine that using com.apple.springboard.lockstate is like calling a private API? Or is this fine?
Assuming all the CF... functions are public, you are probably OK, but it's a murky area for sure. The next release of iOS could break your code if Apple changes that string.
What I did in a similar situation for an approved, shipping app was to avoid using the string directly: create an array of its components, then join them with a period separator (e.g. with NSArray's componentsJoinedByString:) rather than writing com.apple.springboard.lockstate literally.
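A rough sketch of that idea using the CoreFoundation calls already in play (the names parts, partsArray and eventName are just illustrative):

// Build "com.apple.springboard.lockstate" at runtime instead of embedding the literal.
CFStringRef parts[] = { CFSTR("com"), CFSTR("apple"), CFSTR("springboard"), CFSTR("lockstate") };
CFArrayRef partsArray = CFArrayCreate(kCFAllocatorDefault, (const void **)parts, 4, &kCFTypeArrayCallBacks);
CFStringRef eventName = CFStringCreateByCombiningStrings(kCFAllocatorDefault, partsArray, CFSTR("."));
CFRelease(partsArray);

// Register exactly as before, just with the assembled name.
CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(),
                                NULL,
                                lockStateChanged,
                                eventName,
                                NULL,
                                CFNotificationSuspensionBehaviorDeliverImmediately);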
YMMV
I've been trying to figure out some method to cause a GtkTreeView to redraw after I update the bound GtkListStore from a background thread created with pthreads.
Generally, the widget does not update until something obscures an existing row (even the mouse cursor).
Most of my searches for this problem turn up "your tree model doesn't/isn't generating the correct signals"...
I'm running an old Red Hat 9 with gtk+ 2.0.0, for industrial embedded applications. Most of the data comes from IPC/sockets/pipes and gets displayed by a GTK app. Unfortunately, so do CRITICAL alarms, which have a habit of not showing when they should. We will (one day) move to a current kernel, but I need to get something working with the existing software.
I've tried emitting the "row-changed" signal, tried calling gtk_widget_queue_draw, and also tried connecting to the "expose-event" signal, where I've tried various things that either don't work or segfault.
server.c
bool Server::Start()
{
    // ....
    // pthread_t _id;
    //
    pthread_create( & _id, NULL, &StaticServerThread, this );
    // ....
}
viewer.c
bool Viewer::ReadFinished( SocketArgs * args )
{
    gdk_threads_enter();

    // Populate the buffer and message
    //
    // GtkListStore *_outputStore;
    // gchar *buffer;
    // gchar *message;
    GtkTreeIter iter;
    gtk_list_store_insert_with_values( _outputStore, &iter, 0,
                                       0, buffer, 1, message, -1 );
    // ....

    gdk_threads_leave();
}
You can perform the updates to the list store in the main thread instead. For example, you can use g_idle_add() from the worker thread, as sketched below.
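A minimal sketch of that approach, reusing the _outputStore, buffer and message members from the question; RowData and add_row_idle are illustrative names, not existing API:

struct RowData
{
    GtkListStore *store;
    gchar        *buffer;
    gchar        *message;
};

// Runs in the main GTK thread, so no gdk_threads_enter()/leave() is needed here.
static gboolean add_row_idle( gpointer data )
{
    RowData *row = static_cast<RowData *>( data );
    GtkTreeIter iter;

    gtk_list_store_insert_with_values( row->store, &iter, 0,
                                       0, row->buffer, 1, row->message, -1 );

    g_free( row->buffer );
    g_free( row->message );
    delete row;
    return FALSE;   // one-shot: remove the idle source after it runs
}

// Called from the worker thread: copy the strings and hand them to the main loop.
bool Viewer::ReadFinished( SocketArgs * args )
{
    RowData *row = new RowData;
    row->store   = _outputStore;
    row->buffer  = g_strdup( buffer );
    row->message = g_strdup( message );

    g_idle_add( add_row_idle, row );
    return true;
}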
I'm using the Audio Queue Services API to play audio streamed from a server over a TCP socket connection on an iPhone. I can play the buffers that were filled from the socket connection; I just cannot seem to make my AudioQueue call my AudioQueueOutputCallback function, and I'm out of ideas.
High level design
Data is passed to the player from the socket connection, and written immediately into circular buffers in memory.
As AudioQueueBuffers become available, data is copied from the circular buffers into the available AudioQueueBuffer, which is immediately re-queued. (Or would be, if my callback happened.)
What happens
The buffers are all filled and enqueued successfully, and I hear the audio stream clearly. For testing, I use a large number of buffers (15) and all of them play through seamlessly, but the AudioQueueOutputCallback is never called, so I never re-queue any of those buffers, despite the fact that everything seems to be working perfectly. If I don't wait for my callback, assuming it will never be called, and instead drive the enqueueing of buffers based on the data as it is written, I can play the audio stream indefinitely, reusing and re-enqueueing buffers as if they had been explicitly returned to me by the callback. That fact, that I can play the stream perfectly while reusing buffers as needed, is what confuses me the most. Why isn't the callback being called?
Possibly Relevant Code
The format of the stream is 16 bit linear PCM, 8 kHz, Mono:
_streamDescription.mSampleRate = 8000.0f;
_streamDescription.mFormatID = kAudioFormatLinearPCM;
_streamDescription.mBytesPerPacket = 2;
_streamDescription.mFramesPerPacket = 1;
_streamDescription.mBytesPerFrame = sizeof(AudioSampleType);
_streamDescription.mChannelsPerFrame = 1;
_streamDescription.mBitsPerChannel = 8 * sizeof(AudioSampleType);
_streamDescription.mReserved = 0;
_streamDescription.mFormatFlags = (kLinearPCMFormatFlagIsBigEndian |
                                   kLinearPCMFormatFlagIsPacked);
My prototype and implementation of the callback are as follows. Nothing fancy, and pretty much identical to every example I've seen so far:
// Prototype, declared above the class's @implementation
void AQBufferCallback(void* inUserData, AudioQueueRef inAudioQueue, AudioQueueBufferRef inAudioQueueBuffer);

// Definition at the bottom of the file.
void AQBufferCallback(void* inUserData, AudioQueueRef inAudioQueue, AudioQueueBufferRef inAudioQueueBuffer) {
    printf("callback\n");
    [(MyAudioPlayer *)inUserData audioQueue:inAudioQueue didAquireBufferForReuse:inAudioQueueBuffer];
}
I create the AudioQueue like this:
OSStatus status = 0;
status = AudioQueueNewOutput(&_streamDescription,
                             AQBufferCallback, // <-- Doesn't work...
                             self,
                             CFRunLoopGetCurrent(),
                             kCFRunLoopCommonModes,
                             0,
                             &_audioQueue);
if (status) {
    // This is not called...
    NSLog(@"Error creating new audio output queue: %@", [MyAudioPlayer stringForOSStatus:status]);
    return;
}
And I enqueue buffers like this. At this point, it is known that the local buffer contains the correct amount of data for copying:
memcpy(aqBuffer->mAudioData, localBuffer, kAQBufferSize);
aqBuffer->mAudioDataByteSize = kAQBufferSize;

OSStatus status = AudioQueueEnqueueBuffer(_audioQueue, aqBuffer, 0, NULL);
if (status) {
    // This is also not called.
    NSLog(@"Error enqueueing buffer %@", [MyAudioPlayer stringForOSStatus:status]);
}
Please save me.
Is this executed on the main thread or a background thread? It's probably not good if CFRunLoopGetCurrent() returns the run loop of a thread that could disappear (a thread pool, etc.) or a run loop that doesn't care about kCFRunLoopCommonModes.
Try changing CFRunLoopGetCurrent() to CFRunLoopGetMain(), or make sure AudioQueueNewOutput() and CFRunLoopGetCurrent() are executed on the main thread or on a thread that you control and that has a proper run loop.
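For example, the call from the question with only the run loop argument changed:

status = AudioQueueNewOutput(&_streamDescription,
                             AQBufferCallback,
                             self,
                             CFRunLoopGetMain(),    // was CFRunLoopGetCurrent()
                             kCFRunLoopCommonModes,
                             0,
                             &_audioQueue);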
Try changing self to (void*)self, like this:
status = AudioQueueNewOutput(&_streamDescription,
                             AQBufferCallback,
                             (void*)self,
                             CFRunLoopGetCurrent(),
                             kCFRunLoopCommonModes,
                             0,
                             &_audioQueue);