To get to my point, I need to explain the context first:
I have a daemon process that opens a POSIX message queue (mq) for communication. Clients are in the same group as the daemon so they can communicate with it. The clients also open POSIX mqs and subscribe to the daemon. To be able to communicate, the client mqs must have the same group so that the daemon can answer them.
So far so good: I made the client setgid (chmod g+s client). On a Qt-based desktop (LXQt), the client starts and works as expected. On a GTK+-based desktop (LXDE on a Raspberry Pi), it fails to start because GTK+ prevents setuid/setgid programs from using its library.
As a result, I extracted the mq_open() call into an external executable (the mq-opener) that is setgid (chmod g+s) and uses setegid() to switch to the saved set-group-ID.
The client creates a socketpair(), fork()s and execve()s the mq-opener, which sends the mq file descriptor back through the socketpair (AF_UNIX/SOCK_STREAM) to the client.
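The fd handoff itself is the standard SCM_RIGHTS trick. A minimal sketch (the helper name send_fd() is illustrative, not my actual code):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass an open fd across the AF_UNIX socketpair in a control message. */
static int send_fd(int sock, int fd)
{
    char dummy = '*';                       /* must send at least one byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = { 0 };

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;           /* kernel duplicates the fd */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}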
The requirements I need to fulfill:
mqs must be readable by all members of the mqclients group
rights must be 0660 on the mqs
avoid setgid (chmod g+s) on the client
keep the possible security impact of chmod g+s as small as possible
Now to my questions:
I would like to avoid handling SIGCHLD and calling kill()/wait() for the mq-opener in the client and daemon. I would just like to recvmsg() on the socketpair and get an error if the mq-opener dies for whatever reason. Can I simply not install a signal handler for SIGCHLD?
The fork() procedure and connection with the mq-opener is pretty big. Is there a simpler way to do this?
Can the mq-opener (which starts as an unprivileged user) do the double fork() and drop its parent/child connection to the parent? At what point does the parent/child relationship end? (See the sketch after the diagram.)
Would it be better to create an mq-opener daemon that just handles the creation of the mqs?
To make it a little clearer, here is a diagram:
+-----------+ +------------+
|Daemon | |Client |
+-----------+ +------------+
|File | |File |
|User | |User |
|mqdaemon | |pi |
| | | |
|Group | |Group |
|mqdaemon | |pi |
| | | |
|Rights | +------------+ |Rights |
|a-s | |mq-opener | |a-s |
+-----------+ +------------+ +------------+
|Process | |File | |Process |
|User | |User | |User |
|mqdaemon | |mq-opener | |pi |
| | | | | |
|Group | |Group | |Group |
|mqclients | |mqclients | |pi |
+----+------+ | | +--+---------+
^ | |Rights | | ^
| | |g+s | | |
| | | | | |
| | +------------+ | |
| | fork() |Process | fork() | |
| +------------>|User |<---------+ |
| |(forked) | |
| send_fd() | | send_fd() |
+---------------+|Group |+------------+
|mqclients |
+------------+
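For concreteness, here is the double-fork pattern I mean; my understanding (which is part of what I want confirmed) is that the parent/child link ends when the intermediate child exits, because the grandchild is then reparented to init (PID 1). A sketch:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Detach a worker so the caller keeps no parent/child link to it. */
int fork_detached(void)
{
    pid_t mid = fork();
    if (mid < 0)
        return -1;
    if (mid > 0) {
        waitpid(mid, NULL, 0);   /* reap the short-lived middle child */
        return 1;                /* caller: no child of ours remains */
    }
    /* middle child */
    setsid();                    /* new session, no controlling terminal */
    if (fork() != 0)
        _exit(0);                /* middle child exits immediately */
    /* grandchild: now a child of init, invisible to the original parent */
    return 0;
}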
Note: I've already gone over related questions like the following, which don't address my query:
SQL: how to pick one row for each set of rows with duplicate value in one column?
Fill missing values with first non-null following value in Redshift
I have a sparse, unclean dataset like this:
| id | operation | title | channel_type | mode |
|-----|-----------|----------|--------------|------|
| abc | Start | | | |
| abc | Start | recovery | | Link |
| abc | Start | recovery | SMS | |
| abc | Set | | Email | |
| abc | Verify | | Email | |
| pqr | Start | | | OTP |
| pqr | Verfiy | sign_in | Push | |
| pqr | Verify | | | |
| xyz | Start | sign_up | | Link |
and I need to fill the empty fields of each id's rows with the non-empty data available in its other rows:
| id | operation | title | channel_type | mode |
|-----|-----------|----------|--------------|------|
| abc | Start | recovery | SMS | Link |
| abc | Start | recovery | SMS | Link |
| abc | Start | recovery | SMS | Link |
| abc | Set | recovery | Email | Link |
| abc | Verify | recovery | Email | Link |
| pqr | Start | sign_in | Push | OTP |
| pqr | Verfiy | sign_in | Push | OTP |
| pqr | Verify | sign_in | Push | OTP |
| xyz | Start | sign_up | | Link |
Notes:
some ids can have a certain field empty in all rows
while most ids will have the same non-empty value for each field, edge cases could have different values; for such groups, filling in any non-empty value across all rows is acceptable [this is too rare in my dataset and can be ignored]
another pattern: certain fields are mostly present only on rows of certain operations, e.g. mode is only present on operation='Start' rows
I've tried grouping rows by id while performing listagg over the title, channel_type and mode columns, followed by coalesce, along the lines of this:
WITH my_data AS (
SELECT
id,
operation,
title,
channel_type,
mode
FROM
my_db.my_table
),
list_aggregated_data AS (
SELECT
id,
listagg(title) AS titles,
listagg(channel_type) AS channel_types,
listagg(mode) AS modes
FROM
my_data
GROUP BY
id
),
coalesced_data AS (
SELECT DISTINCT
id,
coalesce(titles) AS title,
coalesce(channel_types) AS channel_type,
coalesce(modes) AS mode
FROM
list_aggregated_data
),
joined_data AS (
SELECT
md.id,
md.operation,
cd.title,
cd.channel_type,
cd.mode
FROM
my_data AS md
LEFT JOIN
coalesced_data AS cd ON cd.id = md.id
)
SELECT
*
FROM
joined_data
ORDER BY
id,
operation
But for some reason this is resulting in concatenated values (presumably from the coalesce operation), where I get:
| id | operation | title | channel_type | mode |
|-----|-----------|------------------|--------------|------|
| abc | Start | recoveryrecovery | SMS | Link |
| abc | Start | recoveryrecovery | SMS | Link |
| abc | Start | recoveryrecovery | SMS | Link |
| abc | Set | recoveryrecovery | Email | Link |
| abc | Verify | recoveryrecovery | Email | Link |
| pqr | Start | sign_in | Push | OTP |
| pqr | Verfiy | sign_in | Push | OTP |
| pqr | Verify | sign_in | Push | OTP |
| xyz | Start | sign_up | | Link |
What's the correct way to approach this problem?
I'd start with the first_value() window function with the ignore nulls option. You will partition by the first two columns and will need to work out the edge cases with some data massaging, likely in the order by clause of the window function.
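A minimal sketch of that idea, using the table and column names from the question; here I partition by id alone so values propagate across operations, and the order by inside the window is just a placeholder for the edge-case massaging:

SELECT
    id,
    operation,
    first_value(title) IGNORE NULLS OVER (
        PARTITION BY id ORDER BY operation
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    ) AS title,
    first_value(channel_type) IGNORE NULLS OVER (
        PARTITION BY id ORDER BY operation
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    ) AS channel_type,
    first_value(mode) IGNORE NULLS OVER (
        PARTITION BY id ORDER BY operation
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    ) AS mode
FROM
    my_db.my_table
ORDER BY
    id,
    operation

The unbounded frame matters: without it the window only looks at preceding rows, so any row before the first non-null value would stay null.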
I'm currently coding a fitness app that lets a user record all of their personal records.
I'm really new to Cloud Firestore from Firebase, so I don't know how I should structure the database.
In my mind, I have two options:
OPTION 1
Users
|
+--UserID
| |
| +--Name
| +--Phone
| +--etc..
|
|
Users-records
|
+--UserID
| |
| +--RecordName
| | |
| | +--recordValue
| | +--recordType
| |
| +--RecordName
| | +--recordValue
| | +--recordType
OPTION 2
Users
|
+--UserID
| |
| +--Name
| +--Phone
| +--etc..
| +--Records
| | |
| | +--RecordName
| | | |
| | | +--recordValue
| | | +--recordType
| | +--RecordName
| | | |
| | | +--recordValue
| | | +--recordType
My questions are: Do I have to split the user's data into separate collections?
Do you think this architecture is well designed for the purpose (i.e., recording users' personal records)?
Thank you very much
Your database structure really depends on how you are going to use it. Keep in mind that whenever you observe a node, you are also observing all of its child nodes.
So I'd probably go with something closer to Option 2, maybe like this:
Users
|
+--UserID
| |
| +--UserInfo
| | |
| | +--Name
| | +--Phone
| | +--etc..
| |
| +--Records
| | |
| | +--RecordName
| | | |
| | | +--recordValue
| | | +--recordType
| | +--RecordName
| | | |
| | | +--recordValue
| | | +--recordType
I'd choose this because I'd imagine you'd want to get all of the UserInfo at once, so we can observe that "UserInfo" node and get all of its children: name, phone, etc.
Then I'd think you'd also want to get all of the records at once, so we can observe that "Records" node and get all of that data.
Additionally, if you wanted, you could get everything at once by observing the UserID!
However, if you were going to fetch a list of all the users, then you definitely don't want all this data in one spot, and this design wouldn't work, because that is a lot of data to observe just to get the list of users.
In summary: choose an option which makes it easiest for you to get what you need, without getting extra data you don't want!
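For illustration, reading that layout with the Firebase JS SDK could look roughly like this (a sketch, not tested; the collection and field names just mirror the tree above, with Records as a subcollection of the user document):

import { getFirestore, doc, getDoc, collection, getDocs } from "firebase/firestore";

const db = getFirestore();

async function loadUser(userId: string) {
  // One read for the user's own info...
  const info = await getDoc(doc(db, "Users", userId));

  // ...and a separate read for all of their records, so a plain
  // user list elsewhere never has to pull this data along.
  const records = await getDocs(collection(db, "Users", userId, "Records"));

  return {
    info: info.data(),
    records: records.docs.map((d) => ({ name: d.id, ...d.data() })),
  };
}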
I'd like to build a Flow such as represented in the following asciiFlow :
Custom Flow
+-------------------------------------------------------------+
| |
| +------------------+ |
| | | |
| | +---------------------------------------------->
| | | |
+---------> CustomFanOut2 | +--------------------+ |
| | | | | |
| | +-------> CustomSink | |
| +------------------+ | | |
| +--------------------+ |
| |
+-------------------------------------------------------------+
Of course, I can use GraphDSL, but it boils down to just putting a sink on one of the outlets of CustomFanOut2 (see the sketch below), so it seems that there could be a method
Graph[FanOutShape2[I, O0, O1], Mat1].to1(sink: Sink[O1, Any]): Flow[I, O0, Mat1]
or equivalents on other inlets and outlets, for graphs other than Source, Flow, Sink and Bidi.
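For reference, the GraphDSL version I want to avoid boils down to this (a sketch; customFanOut2 and customSink stand for the custom stages in the diagram):

import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{Flow, GraphDSL}

val flow: Flow[I, O0, NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._
    val fanOut = b.add(customFanOut2)  // FanOutShape2[I, O0, O1]
    fanOut.out1 ~> customSink          // terminate the second outlet here
    FlowShape(fanOut.in, fanOut.out0)  // expose the remainder as a plain Flow
  })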
Does such a method exist, or could it exist in some future version of akka-stream? If it is not possible, why is that so?
I can see that it is possible to add metadata to a Rackspace virtual machine instance.
I want to get a list of running instances, filtered by a particular metatag value.
However, I can't see how to do so in the documentation.
Is it possible?
You should be able to do so using the openstack client... but it depends on which metatag you're interested in.
You can get a list of all servers:
openstack server list
This will spit out something like
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| 97606ae9-7f18-4a3c-903a-1583d446119b | trysmallwin | ERROR | |
| cb78b8d5-2f03-4a3f-ab26-f389acbd0b76 | Win-try again | ERROR | public=2607:f298:5:101d:f816:3eff:fe9e:5cd4, 208.113.133.90, 2607:f298:5:101d:f816:3eff:fe36:da45, |
| | | | 208.113.133.93, 2607:f298:5:101d:f816:3eff:fe40:57d5, 208.113.133.95 |
| 040751d1-c4c5-47aa-8dec-1d69a468be1c | hnxhdkwskrvwvdwr | ACTIVE | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
Note the ID of the server and investigate deeper:
openstack server show 040751d1-c4c5-47aa-8dec-1d69a468be1c
+--------------------------------------+------------------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | iad-2 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-07-26T17:32:01.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
| config_drive | True |
| created | 2016-07-26T17:31:51Z |
| flavor | gp1.semisonic (50) |
| hostId | e1efd75d1e8f6a7f5bb228a35db13647281996087d39c65af8ce83d9 |
| id | 040751d1-c4c5-47aa-8dec-1d69a468be1c |
| image | Ubuntu-14.04 (03f89ff2-d66e-49f5-ae61-656a006bbbe9) |
| key_name | stef |
| name | hnxhdkwskrvwvdwr |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | d2fb6996496044158cf977c2129c8660 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2016-07-26T17:32:01Z |
| user_id | 5b2ca246f39a425f9a833460bf322603 |
+--------------------------------------+------------------------------------------------------------+
openstack -f json will output the same stuff in JSON format, which you can manipulate programmatically more easily.
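For example, to filter on a metadata key/value pair, a small shell loop over that output could work (a sketch; mytag/myvalue are placeholders, and the exact rendering of properties may vary between client versions):

for id in $(openstack server list -f value -c ID); do
  props=$(openstack server show "$id" -f value -c properties)
  case "$props" in
    *"mytag='myvalue'"*) echo "$id" ;;   # this server carries the tag
  esac
done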
HTH
Usually I edit source code in Emacs with two (Emacs) windows side by side, the second window opened via C-x 3. Like this:
+------------+-------------+
| | |
| src1 | src2 |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
+------------+-------------+
| mini-buffer |
+------------+-------------+
When I now start a compilation, e.g. with F9, the new *compilation* buffer replaces one of my source buffers.
Instead, I would like the *compilation* buffer to open on top of the mini-buffer if it is not already visible (if it is, it should be reused, of course):
+------------+-------------+
| | |
| src1 | src2 |
| | |
| | |
| | |
| | |
+------------+-------------+
| |
| *compilation* |
| |
+------------+-------------+
| mini-buffer |
+------------+-------------+
The *compilation* buffer should have a height of about 30% of the whole frame, or 6-10 lines.
How can I accomplish that?
One way to achieve this would be to use popwin.el. I've never used it, but it seems pretty customizable, and the default config already includes *compilation*.
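A minimal, untested sketch of what that configuration might look like (the :height value approximates the 30% requirement):

(require 'popwin)
(popwin-mode 1)
;; Show *compilation* in a popup window at the bottom, about 30% of the
;; frame high, keep point in the source window (:noselect), and reuse
;; the popup if it is already visible (:stick).
(push '(compilation-mode :height 0.3 :noselect t :stick t)
      popwin:special-display-config)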