Perl's getpwnam returns deleted entries

I am using Perl's getpwnam to check whether an entry exists in the LDAP database I'm using to store user information. This is used in a script that deletes LDAP entries.
When I run the script once, it succeeds: the entry is deleted from the LDAP database and no longer shows up via the Unix command getent passwd. The problem is that when I run the script again and ask it to delete the same user entry (to check that it's idempotent), the getpwnam test still reports success (and prints the entry that was just deleted), which causes the script to throw an error about attempting to delete a non-existent entry.
Why is Perl's getpwnam behaving like this? Is there a more robust test for LDAP entry existence short of binding to the LDAP server and querying it?

Apparently the nscd cache is not keeping track of your deletions.
I'm reluctant to call this an "answer" since I don't know if nscd is supposed to stay synchronized with deletions, or how to fix it. The only thing I've ever done with nscd is remove it.
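If the stale result really is coming from nscd's passwd cache, one possible workaround is to invalidate that cache right after the LDAP delete, so the next getpwnam call reflects the directory again. A rough sketch (the user name is a placeholder, and it assumes the script has enough privileges to talk to nscd):

use strict;
use warnings;

# ... LDAP deletion of the user happens above this point ...

# ask nscd to drop its cached passwd entries so NSS lookups see the deletion
system('nscd', '-i', 'passwd') == 0
    or warn "could not invalidate the nscd passwd cache (status $?)\n";

# re-check: getpwnam should now agree with the directory
my $user = 'someuser';    # placeholder user name
if (defined getpwnam($user)) {
    warn "$user still resolves via getpwnam\n";
} else {
    print "$user is gone\n";
}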

Related

Better way to "mutex" than with a .lock file over the network?

I have a small setup consisting of n clients (CL0, CL1, ... CLn) that access a Windows share on the server.
On that server, a JSON file holds important data that needs to be readable and writable by all players in the game. It holds key-value pairs that are constantly read and changed:
{
    "CurrentActive": "CL1",
    "DataToProcess": false,
    "NeedsReboot": false,
    "Timestamp": "2020-05-25 16:10"
}
I already got the following done with PowerShell:
If a client writes the file, a lock file is generated that holds the hostname and the timestamp of the access. After the access, the lock file is removed. Each write "job" first checks whether there is a lock file and whether its timestamp is still valid, and only writes to the file once the lock has been removed.
# Some pseudo code:
if (!(Test-Path $lockfile)) {
    # no lock present: take the lock, then read and update the JSON
    Set-Content $lockfile "$env:COMPUTERNAME $(Get-Date -Format s)"
    $data = Get-Content $json -Raw | ConvertFrom-Json
    # ... change values in $data here ...
    $data | ConvertTo-Json | Set-Content $json
    Remove-Item $lockfile
} else {
    # wait for some time and try again
    # check if the lock is from my own hostname
    # check if the timestamp in the lock is still valid
}
This works OK, but it was quite complex to build, since I had to implement the locking mechanism myself, plus a way to force-remove the lock when a client is unable to remove the file for whatever reason, and so on (and I am sure I also introduced some errors...). On top of that, reading the file sometimes returns an empty one. I assume this happens in the window while another client is writing the file, after it has been flushed but before it is filled with the new content.
I was looking at other options such as a mutex, and it works like a charm for multiple threads on a single client, but since it relies on SafeHandles scoped to one system, it does not work with multiple clients over the network. Is it possible to get a mutex running over the network with PowerShell?
I also stumbled upon AlphaFS, which would allow me to do transactional processing on the file system, but that doesn't fix the root cause: multiple clients accessing one file at the same time.
Is there a better way to store the data? I was thinking about the Windows Registry, but I could not find anything about a mutex there.
Any thoughts highly appreciated!

Firebase REST API: Delete sometimes fails

I'm currently building a web frontend for a Matlab program. I'm using webread/webwrite to interface with the Firebase realtime database (though I'll be shifting to urlread2 soon for compatibility reasons). The Matlab end has to delete nodes from the database on a regular basis. I do this by using webwrite to send a POST request and putting "X-HTTP-Method-Override: DELETE" in the header. This works, but after a few deletes it stops working until data is either added to or removed from the database. It seems completely random; my teammate and I have been trying to find a pattern for a few days and we've found nothing.
Here is the relevant Matlab code:
% build the URL of the node to delete: <database url>/<key>.json
modurl = strcat(url, modkey, '.json');
modurlstr = char(modurl);
% override the POST with DELETE so Firebase removes the node
webop = weboptions('KeyName', 'X-HTTP-Method-Override', 'KeyValue','DELETE');
webwrite(modurlstr, webop);
Where url is our database url and modkey is the key of the node we're trying to delete. There's no authentication because the database is set to public (Security is not an issue for us).
The database is organized pretty simply. The root node just has a bunch of children. We only delete a whole child (i.e. we don't ever try to delete the individual components of a child).
Are we doing something wrong?
Thanks in advance!
We found out that some of the keys had hyphens in them, which were getting translated to their ASCII representation. The reason it seemed random was that the delete was only failing on the nodes whose keys contained a hyphen. When we switched them back, everything worked fine.

Can SQLite DB files be made read-only?

Information from an SQLite DB is presented to the user through a web server (displayed in an HTML browser). The DB is loaded once and for all by a small application independent of the web server. DB data cannot be changed from the user's browser (this is a read-only service).
As the web server runs under its own user ID, it accesses the SQLite DB file with "other" permissions. For security reasons, I would like to set the DB file permissions to rw-rw-r--.
Unfortunately, with this permission set, I get a warning attempt to write a readonly database at line xxx, which points to a line containing a SELECT transaction (which in principle is read-only). Of course, I get no result.
If the permissions are changed to rw-rw-rw-, everything works fine, but that means anybody can tamper with the DB.
Is there any reason why SQLite DB cannot be accessed read-only?
Are there "behind-the-scene" processings which need write access, even for SELECT transactions?
A look around StackOverflow shows that people usually complain about the opposite situation: a read-only permission preventing them from writing to the DB. My goal is to protect my DB against ANY change attempt.
For the complete story, my web app is written in Perl and uses DBD::SQLite.
You must connect to your SQLite db in readonly mode.
From the docs:
You can also set sqlite_open_flags (only) when you connect to a database:
use DBD::SQLite;
my $dbh = DBI->connect("dbi:SQLite:$dbfile", undef, undef, {
    sqlite_open_flags => DBD::SQLite::OPEN_READONLY,
});
-- https://metacpan.org/pod/DBD::SQLite#Database-Name-Is-A-File-Name
The solution is given in the answer to this question: Perl DBI treats setting SQLite DB cache_size as a write operation when subclassing DBI.
It turns out that AutoCommit cannot be set to 0 with a read-only SQLite DB. Explicitly forcing it to 1 in the read-only case solved the problem.
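Putting both pieces together, here is a minimal sketch of a connect call that satisfies both constraints; the database path and table name below are placeholders, not taken from the original setup:

use strict;
use warnings;
use DBI;
use DBD::SQLite;

my $dbfile = '/path/to/data.db';    # placeholder path

# open the database strictly read-only; AutoCommit has to stay at 1,
# since it cannot be set to 0 on a read-only SQLite database
my $dbh = DBI->connect("dbi:SQLite:$dbfile", undef, undef, {
    sqlite_open_flags => DBD::SQLite::OPEN_READONLY,
    AutoCommit        => 1,
    RaiseError        => 1,
});

# SELECTs work as usual; any write attempt fails with a readonly error
my $rows = $dbh->selectall_arrayref('SELECT * FROM some_table');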
Thanks to all who gave clues and leads.

Can I log the script that invokes DELETE query?

I have to investigate who or what caused table rows to disappear.
So I am thinking about creating a BEFORE DELETE trigger that logs whatever invokes the deletion. Is this possible? Can I get the DB client name or, even better, the script that issued the DELETE query, and log it to another, temporarily created log table?
I am open to other solutions, too.
Thanks in advance!
You can't get "the script" which issued the delete statement, but you can get various other information:
current_user will return the current Postgres user that initiated the delete statement
inet_client_addr() will return the IP address of the client's computer
current_query() will return the complete statement that caused the trigger to fire
More details about these functions are available in the manual:
http://www.postgresql.org/docs/current/static/functions-info.html
The Postgres Wiki contains two examples of such an audit trigger:
https://wiki.postgresql.org/wiki/Audit_trigger_91plus
https://wiki.postgresql.org/wiki/Audit_trigger (somewhat outdated)
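As a rough illustration of how those functions fit together (the wiki triggers above are far more complete), here is a hypothetical Perl/DBI sketch that creates a small log table and a BEFORE DELETE trigger; the table name my_table and the connection settings are placeholders, not anything taken from the question:

use strict;
use warnings;
use DBI;

# placeholder connection settings
my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'postgres', undef, { RaiseError => 1 });

# a simple log table for delete events
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS delete_log (
        logged_at   timestamptz DEFAULT now(),
        db_user     text,
        client_addr inet,
        query       text
    )
});

# trigger function recording who deleted what, using the functions above
$dbh->do(q{
    CREATE OR REPLACE FUNCTION log_delete() RETURNS trigger AS $body$
    BEGIN
        INSERT INTO delete_log (db_user, client_addr, query)
        VALUES (current_user, inet_client_addr(), current_query());
        RETURN OLD;
    END;
    $body$ LANGUAGE plpgsql
});

# attach the trigger to the table whose rows are disappearing
$dbh->do(q{
    CREATE TRIGGER log_delete_trigger
    BEFORE DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE log_delete()
});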

Can Primary-Keys be re-used once deleted?

0x80040237 Cannot insert duplicate key.
I'm trying to write an import routine for MSCRM4.0 through the CrmService.
This has been successful up until this point. Initially I was just letting CRM generate the primary keys of the records. But my client wanted the ability to set the key of our custom entity to predefined values. Potentially this enables us to know what data was created by our installer, and what data was created post-install.
I tested to ensure that the Guids can be set when calling the CrmService.Update() method, and the results indicated that records were created with our desired values. I ran my import and everything seemed successful. While modifying the validation code for my import files, I deleted the data (through the CRM browser interface) and tried to re-import. Unfortunately, it now throws a duplicate key error.
Why is this error being thrown? Does the CRM interface actually delete the record, or does it still exist but hidden from the user's eyes? Is there a way to ensure that a deleted record is permanently deleted and its Guid becomes free? In a live environment these Guids would never have existed, but during my development I need these imports to be successful.
By the way, considering I'm having this issue, does this imply that statically setting Guids is not a recommended practice?
As far as I can tell, entities are soft-deleted, so it would not be possible to reuse that Guid unless you (or the deletion service) deleted the entity from the database.
For example, in the LeadBase table you will find a field called DeletionStateCode; a value of 0 means the record has not been deleted.
A value of 2 marks the record for deletion. There's a deletion service that runs every 2(?) hours to physically delete those records from the table.
I think Zahir is right, try running the deletion service and try again. There's some info here: http://blogs.msdn.com/crm/archive/2006/10/24/purging-old-instances-of-workflow-in-microsoft-crm.aspx
Zahir is correct.
After you import and delete the records, you can kick off the deletion service at a time you choose with this tool. That will make it easier to test imports and reimports.