Is it possible to view the traces of a running LTTng tracing session?

I am aware of the lttng view command, but it can only be used to view a session after it has been stopped.

To view trace data before a session is stopped or finished, either use the rotation feature, introduced in LTTng 2.11, or use the live session mode.
Both modes have pros and cons, but the rotation feature is the way to go most of the time, especially if trace analysis automation is on your roadmap.
You can also use the snapshot session mode, but that is really intended for "flight recorder" style tracing workloads.
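For example, with a hypothetical session named my-session, the two approaches look roughly like this (babeltrace2 is assumed to be installed as the live viewer; the output path and event selection are assumptions as well):

    # Rotation (LTTng 2.11+): archive completed chunks while tracing runs.
    lttng create my-session --output=/tmp/my-traces
    lttng enable-event --kernel --all     # assumed: kernel tracing
    lttng start
    lttng rotate my-session               # each rotation yields a
                                          # self-contained, readable chunk

    # Live mode: stream events to a viewer while the session runs.
    lttng create my-live-session --live
    lttng enable-event --kernel --all
    lttng start
    babeltrace2 "net://localhost/host/$(hostname)/my-live-session"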

Related

Unity UNET: How to change the online scene in sync with clients

I'm using the old Unity 2017.3 UNET implementation in my game. Players connect to a server and are placed in a Lobby scene until the party leader selects another level to go to. The implementation is just a slightly modified version of the default NetworkLobbyManager.
The trouble started now that I've begun heavily testing the networking code by running a compiled build for a client and then using the editor as the server. Very frequently, the server running in the editor runs a great deal slower than the compiled client build, so when I use NetworkManager.ServerChangeScene the client loads the scene before the server, which causes all NetworkIdentities in the client's scene to be disabled (because they haven't been created on the server yet).
It's a lot less likely to happen if the server is running a compiled build, because the server will almost always load the scene before any clients. But it does surface a bigger issue with Unity itself: there's no guarantee that the server will be available when changing scenes.
Is there some other way of changing scenes in a networked game? Is there a way to guarantee that the server enters the scene before any clients? Or am I stuck just kind of hoping that the network remains stable between scene changes?
Well, I thought about it more overnight and decided to look in other directions, because further investigation revealed that the NetworkIdentities are sometimes still disabled even when the server loads the scene first.
I was looking through the UNET source code and realized that the server should be accounting for situations where it loads the scene after the clients, although that code looks a little janky to me. This theory was backed up by the documentation I found, which also says that NetworkIdentities in the scene on startup are treated as if they were spawned dynamically when the server starts.
Knowing those things now, I'm starting to think that I'm just dumb and messed some stuff up on my end. The object that was being disabled is a manager that enables and disables other NetworkIdentity objects. I'm pretty sure the main problem is that it disables a NetworkIdentity on the client that is still enabled on the server, which causes the whole thing to go haywire.
In the future, I'm just going to try and stay away from enabling and disabling game objects on a networked basis and stick to putting relevant functionality behind a flag of my own so that I can "soft disable" an object without bugging out any incoming RPCs or SyncVar data.

Object persistence in WSGI

I've been developing a web interface for a simple raspberry pi project. It's only turning lights on and off, but I've been trying to add a dimming feature with PWM.
I'm using mod_wsgi with Apache, and RPi.GPIO for GPIO access. For my prototype I'm using three SN74HC595s in series for the LED outputs, and I'm trying to PWM the OE line to dim the lights.
Operating the shift registers is easy, because they hold their outputs between updates. However, for PWM to work, the GPIO.PWM instance must stay active between WSGI requests. This is what I'm having trouble with. I've been working on this for a few days and saw a couple of similar questions here, but nothing about active objects like PWM, only simple counters and such.
My two thoughts are:
1) Use the global scope to hold the PWM object, and use PWM.ChangeDutyCycle() in the WSGI function to change brightness. This approach has worked for me before, but it seems like it might not work here.
Or 2) Create a system-level daemon (or something) and make calls to it from within my WSGI function.
If you need things in memory to persist across requests with mod_wsgi, it is very important to use daemon mode and not embedded mode. Embedded mode is the default, though, so you need to make sure you configure daemon mode explicitly. Daemon mode defaults to a single process, so requests will always hit the same process. It is still multithreaded, however, so make sure you protect global data access and updates with thread locking.
Details on embedded vs daemon mode in:
http://modwsgi.readthedocs.io/en/develop/user-guides/processes-and-threading.html
You will see some examples of daemon mode in the following document, which also explains how you should configure your virtual environment:
http://modwsgi.readthedocs.io/en/develop/user-guides/virtual-environments.html
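As a minimal sketch of the advice above, assuming daemon mode is configured in the Apache virtual host; the pin number, PWM frequency, and process/thread counts are assumptions, not details from the question:

    # Apache side (assumed vhost config): run the app in daemon mode.
    #   WSGIDaemonProcess lights processes=1 threads=5
    #   WSGIProcessGroup lights

    import threading
    import RPi.GPIO as GPIO

    # Module-level state persists for the life of the daemon process.
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.OUT)      # assumed OE pin on BCM 18
    pwm = GPIO.PWM(18, 1000)      # assumed 1 kHz PWM frequency
    pwm.start(0)
    pwm_lock = threading.Lock()

    def application(environ, start_response):
        # Read the requested brightness (0-100) from the query string.
        try:
            duty = float(environ.get("QUERY_STRING", "") or "0")
        except ValueError:
            duty = 0.0
        duty = max(0.0, min(100.0, duty))
        with pwm_lock:            # daemon mode is still multithreaded
            pwm.ChangeDutyCycle(duty)
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [("duty=%.1f\n" % duty).encode()]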
For anyone looking at this in 2020:
I changed mod_wsgi to single-thread mode. I'm not sure if it's related to Python, mod_wsgi, or bad juju, but it still would not last long term; after a few hours the PWM would get stuck at fully off.
I tried rolling my own PWM daemon, but ultimately went with the pigpio module (is Joan on SE?). It's been working perfectly for me.
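For reference, a minimal sketch of the pigpio approach; the GPIO number, frequency, and duty value are assumptions. Because pigpio delegates the actual PWM to its system daemon (pigpiod), the output keeps running regardless of what happens to the WSGI processes:

    import pigpio

    pi = pigpio.pi()               # connect to the pigpiod daemon
    if not pi.connected:
        raise RuntimeError("pigpiod is not running")

    # PWM on (assumed) BCM 18; the default duty-cycle range is 0-255.
    pi.set_PWM_frequency(18, 1000)
    pi.set_PWM_dutycycle(18, 128)  # roughly 50% brightness

    pi.stop()                      # close this client connection; the
                                   # daemon keeps the PWM running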

What should the memory protection strategy be for an ARM Cortex CPU?

I need to implement a multitasking system with an MPU for ARM Cortex-M3/M4 processors.
In that system there will be a kernel, which manages resources in privileged mode, and user applications running in unprivileged mode. I want to separate each user application from the rest of the system and from system resources.
Therefore, when I switch to a new task, I release the stack and global memory area of the user application.
This can be done easily using the ARM Cortex MPU registers.
The problem is that when a context switch occurs, I also need to access some of the kernel's global variables.
For example, I call a function in the PendSV handler during the context switch to get the next TCB, but the task pool is outside the user application area and is protected from user applications.
So it seems there has to be a balance, right? What are secure and efficient strategies for memory protection?
Privileged mode could be entered before the context switch, when the yield function is called, but that does not seem like a good solution.
What are the general strategies for this issue?
Perhaps you might take a look at an existing open-source implementation and see what design decisions were made there. FreeRTOS, for example, has Cortex-M MPU support; it may not answer your exact question directly, and you may have to inspect the source code to get complete details.
Possibly divide the data memory into three regions: user, kernel, and shared.

What does sys_vm86old syscall do?

My question is quite simple.
I encountered this sys_vm86old syscall (when reverse engineering) and I am trying to understand what it does.
I found two sources that could give me something, but I'm still not sure that I fully understand; these sources are
The Source Code and this page, which gives me the following paragraph (it's more readable directly at the link):
config GRKERNSEC_VM86
    bool "Restrict VM86 mode"
    depends on X86_32
    help
      If you say Y here, only processes with CAP_SYS_RAWIO will be able to
      make use of a special execution mode on 32bit x86 processors called
      Virtual 8086 (VM86) mode. XFree86 may need vm86 mode for certain
      video cards and will still work with this option enabled. The purpose
      of the option is to prevent exploitation of emulation errors in
      virtualization of vm86 mode like the one discovered in VMWare in 2009.
      Nearly all users should be able to enable this option.
From what I understood, it ensures that the calling process has CAP_SYS_RAWIO. But this doesn't help me much...
Can anybody tell me?
Thank you
The syscall is used to execute code in VM86 mode. This mode allows you to run old 16-bit "real mode" code (like that present in some BIOSes) inside a protected-mode 32-bit OS.
See for example the Wikipedia article on it: https://en.wikipedia.org/wiki/Virtual_8086_mode
The setting you found means you need CAP_SYS_RAWIO to invoke the syscall.
I think X11 in particular uses it to call BIOS methods for switching the video mode. There are two syscalls; the one with the old suffix offers fewer operations but is retained for binary (ABI) compatibility.

Real-time application with a graphical interface

I need to develop a real-time application that can handle user input (from an external control panel) as fast as possible and provide output to an LCD monitor (also very fast).
To be more exact, I need to handle fixed-period interrupts (with a period of 1 ms) to recalculate an internal model, using the current state fetched from the external control panel.
When the internal model changes, I need to update the picture on the LCD monitor (right now I think the most appropriate way is to update on each interrupt). I don't want any delays here either.
What is the most suitable platform to implement this? And which one is the most cost-effective?
I've heard about QNX, IntervalZero RTX, and RTLinux, but I don't know the details and capabilities of each one.
Thanks!
As far as the different OSes go, I know QNX has very good "hard" real-time behavior and has been built and optimized for it from the ground up. It also now has Qt running on it (QNX 6.5) for a full-featured GUI.
I have heard (second-hand) anecdotal information that RTLinux is very close to hard real time (guaranteed real time), but it can sometimes be late if a driver (usually third-party) is not coded well. [This was from an RTOS vendor, so take it for what it is worth.]
As a design issue, I'd decouple the three separate operations into three threads with different priorities: one thread to fetch the data and set a semaphore when new data is ready, one thread to update the model and set a semaphore when the model is ready, and one thread to update the GUI. I would run the GUI thread at a much slower update rate; most monitors update in the 60-120 Hz range, so why update faster than the data can be shown on the screen? A rough sketch of this structure follows.
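Purely as a structural sketch of that three-thread design (written in Python for brevity; a real implementation would use the RTOS's own threads and priorities, and every rate, name, and stub here is an assumption):

    import threading
    import time

    data_ready = threading.Semaphore(0)
    model_ready = threading.Semaphore(0)
    state_lock = threading.Lock()
    latest_input = {}
    model = {}

    def read_control_panel():
        return {"t": time.time()}          # stub for the real driver

    def recalculate(inputs):
        return {"value": inputs.get("t")}  # stub for the real model

    def draw_lcd(frame):
        pass                               # stub for the real LCD update

    def fetch_thread():
        # Highest priority: sample the control panel every 1 ms.
        while True:
            sample = read_control_panel()
            with state_lock:
                latest_input.update(sample)
            data_ready.release()           # signal: new data is ready
            time.sleep(0.001)

    def model_thread():
        # Middle priority: recompute the model whenever data arrives.
        while True:
            data_ready.acquire()
            with state_lock:
                model.update(recalculate(latest_input))
            model_ready.release()          # signal: model is ready

    def gui_thread():
        # Lowest priority: redraw at ~60 Hz, coalescing queued updates.
        while True:
            model_ready.acquire()
            while model_ready.acquire(blocking=False):
                pass
            with state_lock:
                frame = dict(model)
            draw_lcd(frame)
            time.sleep(1 / 60)

    if __name__ == "__main__":
        for fn in (fetch_thread, model_thread, gui_thread):
            threading.Thread(target=fn, daemon=True).start()
        time.sleep(2)                      # let the demo run briefly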