Wait for eglSwapBuffers posting to complete - framebuffer

I need to know when posting completes after eglSwapBuffers. I was thinking eglWaitNative might halt execution until posting is complete, but I find it unclear reading the spec, chapter 3.8:
https://www.khronos.org/registry/egl/specs/eglspec.1.5.pdf
It would appear eglWaitNative is used to synchronize with a "native" rendering API such as Xlib or GDI. However, as far as I know, eglSwapBuffers might be running on top of Wayland, which can't do any rendering of its own. Still, it would seem reasonable to believe the EGL_CORE_NATIVE_ENGINE engine always refers to the "marking engine" doing the buffer swaps...
From 3.10.3 I read:
Subsequent client API commands can be issued immediately, but will not
be executed until posting is completed.
I suppose I could do something like this, but I'd rather use "pure" EGL if possible:
eglSwapBuffers(...);
glClear(...); // "Dummy" command.
My project is using OpenGL Safety Critical profile 1.0.1, EGL 1.3 and some vendor specific extensions. Sync objects are not available.
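For what it's worth, a minimal sketch of the "dummy command" idea above, assuming display and surface are the usual EGL handles (placeholders here) and that glFinish is available in the profile:

eglSwapBuffers(display, surface);
glClear(GL_COLOR_BUFFER_BIT); /* dummy command; per 3.10.3 it is not executed until posting completes */
glFinish();                   /* blocks the CPU until the dummy command, and hence posting, has completed */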

Related

macOS : programmatic check if process runs as a launchDaemon or launchAgent or from command-line

I'd like to get an indication of the context my process is running in. I'd like to distinguish between the following cases:
It runs as a persistent scheduled task (launchDaemon/launchAgent)
It was called on demand and created by launchd, via the open command line or a double-click.
It was called directly from a command-line terminal (i.e. > /bin/myProg from a terminal)
Is there any indication of the process context available through an Objective-C/Swift framework, or any other way? I wish to avoid reinventing the wheel here :-)
Thanks
There is definitely no simple public API or framework for doing this, and doing it is hard.
Some parts of this info could possibly be retrieved by your process itself through workarounds that will work on some system versions:
There is a launchctl C-based API, which you can try to use to enumerate all
launch daemon/agent tasks and search for your app path/pid. You may
require root rights for your process to do this.
Launching via the open command line can sometimes be traced through the environment
variables it sets for your process.
Running directly from the command line could leave responsible_pid filled correctly (which is a private API from libquarantine, unless you are observing it with Endpoint Security, available starting from macOS 11.something)
All of these, except the launchctl API, are not public, are not reliable, could be broken at any time by Apple, and may not be sufficient for your needs.
But they are worth a try, because there is nothing better :)
You could potentially distinguish all the cases you want by monitoring system events from some other (root-permitted) process you control, possibly adopting the Endpoint Security framework (requires an entitlement from Apple, can't be distributed via the App Store), calling a lot of private APIs and doing a bunch of reversing tricks.
The open resource I could suggest on this topic is here
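(Not from the answer above, just a rough sketch of the POSIX-level heuristic that the "terminal vs. launchd" part often boils down to; note it cannot tell a daemon/agent apart from an open/double-click launch, since launchd is the parent in both cases.)

#include <unistd.h>

bool parentIsLaunchd  = (getppid() == 1);              // launchd is always pid 1 on macOS
bool hasControllingTty = (isatty(STDIN_FILENO) != 0);  // stdin is a TTY when started from a shell
// parentIsLaunchd && !hasControllingTty  -> launchDaemon/launchAgent, or launched via open/double-click
// !parentIsLaunchd && hasControllingTty  -> started directly from a terminal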

Using a vkFence with an std::condition_variable

A VkFence can be waited upon or queried about its state. Is it possible to have a callback invoked by the Vulkan implementation when the fence is ready instead?
This would allow it to be used with objects such as a std::condition_variable: when the fence becomes ready, the condition_variable would get notified.
Such an approach would also allow integration with libraries like Boost.Fiber, removing the need for the thread to sleep at all; it could instead do useful work while waiting upon the fence.
If this is not possible in base Vulkan, is there an extension that allows it?
Vulkan doesn't work that way. Vulkan devices and queues execute independently of the CPU. Indeed, with one or two exceptions, Vulkan implementations only ever use CPU resources within the scope of a particular function call and only on the thread on which this call was made. Even debug callbacks are made within the scope of the function that caused the error.
There is no mechanism for Vulkan implementations to use CPU resources without the explicit consent of the user of the API (again, minus one or two exceptions). So no callbacks that act outside of an API call.
Vulkan does have a way to extract a native synchronization object from a VkFence, but it is surprisingly not useful in Windows. While you can get a HANDLE, it cannot be used by the Win32 API for waiting on it. This is mainly for interop with other APIs (like converting it to a D3D12 sync object), not for waiting on it yourself. But the file descriptor extraction operation can get a fully functional sync object... if the implementation lets you.
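Given that, the usual fallback (a sketch of the obvious workaround, not anything the API provides for you) is to dedicate a helper thread to vkWaitForFences and do the condition_variable notification yourself; device and fence are assumed to be valid handles created elsewhere, and the helper thread must not outlive the objects it notifies:

#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vulkan/vulkan.h>

void notifyWhenSignaled(VkDevice device, VkFence fence,
                        std::mutex& m, std::condition_variable& cv, bool& signaled)
{
    std::thread([=, &m, &cv, &signaled] {
        // Block this helper thread inside Vulkan until the fence is signaled.
        vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
        std::lock_guard<std::mutex> lock(m);
        signaled = true;
        cv.notify_all();   // wake anyone sleeping on the condition_variable
    }).detach();
}

// Consumer side: wait on the condition_variable (or plug it into a fiber scheduler):
// std::unique_lock<std::mutex> lock(m);
// cv.wait(lock, [&] { return signaled; });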

Scheduling/delaying of jobs/tasks in Play framework 2.x app

In a typical web application, there are some things that I would prefer to run as delayed jobs/tasks. They tend to have some or all of the following properties:
Takes a long time (anywhere from multiple seconds to multiple minutes to multiple hours).
Occupy some resource heavily (CPU, network, disk, external API limits, etc.)
Result not immediately necessary. Can complete HTTP response without it. OK (and possibly even preferable) to delay until later.
Can be (and is possibly preferable to be) run on (a) different machine(s) than the web server(s). The machine(s) are potentially dedicated job/task runners.
Should be run in response to other event(s), or started periodically.
What would be the preferred way(s) to set up, enqueue, schedule, and run delayed jobs/tasks in a Scala + Play Framework 2.x app?
For more details...
The pattern I have used in the past, and which I would like to replicate if applicable, is:
In handler of web request, or in cron-like call, enqueue job(s)
In job runner(s), repeatedly dequeue and run one job at a time
Possibly handle recording job results
This seems to be a relatively simple yet still relatively flexible pattern.
Examples I have encountered in the past include:
Updating derived data in DB
Analytics/tracking API calls for a web request
Delete expired sessions or other stale/outdated DB records
Periodic batch ETLs
In other languages/frameworks, I would typically use a job/task framework. Examples include:
Resque in a Ruby + Rails app
Celery in a Python + Django app
I have found the following existing materials, but unfortunately, I don't think they fit my use case directly.
Play 1.x asynchronous jobs API (+ various SO questions referencing it). Appears to have been removed in 2.x line. No reference to what replaced it.
Play 2.x Akka integration. Seems very general-purpose. I'd imagine it's possible to use Akka for the above, but I'd prefer not to write a jobs/tasks framework if one already exists. Also, no info on how to separate the job runner machine(s) from your web server(s).
This SO answer. Seems potentially promising for the "short to medium duration IO bound" case, e.g. analytics calls, but not necessarily for the "CPU bound" case (probably shouldn't tie up CPU on web server, prefer to ship off to different node), the "lots of network" case, or the "multiple hour" case (probably shouldn't leave that in the background on the web server, even if it isn't eating up too many resources).
This SO question, and related questions. Similar to above, it seems to me that this covers only the cases where it would be appropriate to run on the same web server.
Some further clarification on use-cases (as per commenters' request). There are two main use-cases that I have experienced with something like resque or celery that I am trying to replicate here:
Some event on the site (Most often, an incoming web request causes task to be enqueued.)
Task should run periodically. (Most often, this is implemented as: periodically, enqueue task to be run as above.)
In the case of resque or celery, the tasks enqueued by both use-cases enter queues the same way and are treated the same way by the runner/worker process. Barring other Scala or Play-specific considerations, that would be my initial guess for how to approach this.
Some further clarification on why I do not believe the Akka scheduler fits my use case out-of-the-box (as per commenters' request):
While it is no doubt possible to construct a fitting solution using some combination of the Akka scheduler (for periodic jobs), akka-remote and akka-cluster (for communicating between the job caller and the job runner), that approach requires a certain amount of glue code which is almost a delayed job framework in and of itself. If it exists, I would prefer to use an existing out-of-the-box solution rather than reinvent the wheel.

Does ProfileOptimization actually work?

One of the new performance enhancements for .NET 4.5 is the introduction of 'Multicore JIT'.
See here for more details.
I have tried this, but it seems to have no effect on my application.
The reason why I am interested is that my app (IronScheme) takes a good long time to start up if not NGEN'd, which implies a fair amount of JIT'ing is involved at startup. (1.4 sec vs 0.1 sec when NGEN'd).
I have followed the instructions on how to enable this, and I can see that a 'small' (4-12 KB) profile file is created. But on subsequent startups, it seems to have absolutely no effect on improving the startup time. It is still 1.4 sec.
Has anyone actually seen (or made) this work in practice?
Also, are there any limitations on which code will be 'tracked'? Eg: assembly loading contexts, transient assemblies, etc. I ask this as the created file never seems to grow, but I am in fact generating a fair amount of code (in a transient assembly).
One bug that I did encounter was that SetProfileRoot does not seem to understand / as a path separator; make sure to use \ .
The rule of thumb we use at Microsoft is that Multicore JIT gets you about half way towards NGEN startup performance. Thus if your app starts in 0.1 seconds with NGEN and 1.4 seconds without NGEN, we would expect Multicore JIT startup to take about 0.75 seconds.
That being said, we had to put some limitations in place to guarantee that program execution order is the same with and without MCJ. MCJ will sometimes pause the background thread waiting for modules to be loaded by the foreground thread, and will abort background compilation if there is an assembly resolve or module resolve event.
If you want to find out what's happening in your case, we have ETW (Event Tracing for Windows) instrumentation of the MCJ feature, and we will be releasing a version of PerfView soon which will be able to collect these events if you take a trace of your app startup.
Update: PerfView has been updated to be able to show background JIT information. Here are the steps to diagnosing with the latest version (1.2.2.0):
Collect a trace using PerfView of your application startup, either using Collect->Run or Collect->Collect from the main PerfView menu.
Assuming you used Collect->Run, put the name of your .exe in the Command text box, pick a filename (e.g. IronScheme.etl), select Background JIT from Advanced Options, and click Run Command.
Close your application and double click on the IronScheme.etl file that gets generated.
Double click on the JIT Stats view in the list underneath IronScheme.etl, you should see something like this in the view that pops up:
This process uses Background JIT compilation (System.Runtime.ProfileOptimize)
Methods Background JITTed : 2,951
Percent # Methods Background JITTed : 52.9%
MSec Background JITTing : 3,901
Percent Time JITTing is Background : 50.9%
Background JIT Thread : 11308
You can click on "View Raw Background Jit Diagnostics" to see all of the MCJ events in Excel. One question I forgot to ask: are you running this on a multicore machine or a multicore VM? It is a common mistake to test out MCJ in a VM that only has a single logical processor.
Calling Activator.CreateInstance during startup seems to kill MCJ?
Or rather, that triggered an Assembly Resolve, which seems to completely stop MCJ, and it never works after that. Maybe the MSDN docs should mention this.

OpenFeint achievements performance

I've decided to integrate OpenFeint into my new game to have achievements and leaderboards.
The game is dynamic and I would like the user to be rewarded immediately for some successful results, but it seems to me that OpenFeint's achievements are a bit sluggish: the visual notification is shown only when it receives confirmation from the server.
Is it possible to change something in the settings, or hack it a little bit, so that the notification is shown immediately, as soon as only the local database has been checked to see whether the achievement has already been unlocked?
Not sure if this relates to the Android version of the SDK (which seems even slower), but we couldn't figure out how to make it faster. It was so unacceptably slow that we started developing our own framework that fixes most of OpenFeint's shortcomings and then some. Check out Swarm; it might fit your needs better.
There are several things you can do to more tightly control the timing of these notifications. I'll explain one approach and you can use it as a starting point to explore further on your own. These suggestions apply specifically to iOS apps. One caveat is that they refer to internal APIs in OFSDK 2.8 for iOS, are not ordinarily recommended for high-level use, and are subject to change in future versions.
The first thing I recommend is that you build the sample app with your own product key. Use the standard sample app to experiment before applying the result to your own code.
You are going to get the snappiest response by separating the notification pop-up UI from the process of submitting the achievement. This way you don't have to worry about getting wrapped up in the logic for deciding whether the submission is going just to the local db or is doing the full confirmation on an async network transaction.
See the declaration of "showAchievementNotice" in "OFNotification.h". Performing a search in the sample app, you will see that this is the internal API used for displaying the achievement pop-up when an achievement is earned. It does not actually submit the achievement. You can call this method directly as it is called from "OFAchievementService.mm" to directly control when the message appears. You can then use the following article to disable the pop-up from being called when the actual submission occurs:
http://support.openfeint.com/dev/notification-pop-ups-in-ios/
This gives you complete freedom to call the submission at a later time provided you keep track of the need to do so. For example, you could locally serialize a flag to take care of the actual submission either after the level is done or the next time the app starts up. Don't forget that the user could quit out of a game without cleanly finishing a level.