SSTable folder naming convention - hash

I just noticed that my newly created sstable folder has a combination of numbers and letters attached. For my table "tweets" it looks like:
/var/lib/cassandra/data/twitter/tweets-a6da23906d8211e8a057ffb9a095df5c
on the disk. Does anybody know what this attached hash is?
Thanks!
Christian

The folder name consists of the table name and the table ID, a UUID that is generated anew every time the table is created. This is done to prevent race conditions when a table is created, dropped, and created again under the same name: each incarnation keeps its own data directory.
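If you want to confirm which ID belongs to the current incarnation of a table, you can look it up in the system schema; a minimal CQL example, assuming Cassandra 3.0 or later:

SELECT id FROM system_schema.tables
WHERE keyspace_name = 'twitter' AND table_name = 'tweets';

The returned UUID, with its hyphens removed, matches the suffix on the data directory.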

Related

Is there a way to include a column from one table in many other tables (while maintaining consistency) in PostgreSQL?

I'm trying to build a database (in PostgreSQL 9.6.6) that allows for one "master column" (items.id) to be replicated into many (automatically generated) tables (e.g. rank1.id, rank2.id, rank3.id, ...). Only items will have INSERTs (or DELETEs) performed, and when they are, the newly added ids should also show up (or be removed) in the rankX table(s). To be more concrete:
items:
id | name | description
rank1:
id | rank
rank2:
id | rank
...
Where the ids are always the same, and there is always the same number of rows in each of the tables. The rankX.rank values, however, will be different (imagine users ranking how funny a series of images are -- the images all have the same ids, but different users might rank them differently).
What I was thinking was that when a new user is added and a new rankX table is created, I would do the following (see the sketch after this list):
Have rankX.id reference items.id as a foreign key (with ON DELETE CASCADE)
Copy any items.id values that already exist
Auto-generate a trigger function that mirrors INSERTs on items to the rankX table
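In PostgreSQL 9.6 terms, that plan might look roughly like this (a sketch only; the trigger and function names are made up):

CREATE TABLE items (
    id          serial PRIMARY KEY,
    name        text,
    description text
);

CREATE TABLE rank1 (
    id   integer PRIMARY KEY REFERENCES items(id) ON DELETE CASCADE,
    rank integer
);

-- Copy the ids that already exist.
INSERT INTO rank1 (id) SELECT id FROM items;

-- Mirror future INSERTs on items into rank1.
CREATE FUNCTION mirror_to_rank1() RETURNS trigger AS $$
BEGIN
    INSERT INTO rank1 (id) VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_to_rank1
    AFTER INSERT ON items
    FOR EACH ROW EXECUTE PROCEDURE mirror_to_rank1();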
This seems cumbersome and wasteful of space, since all of the rankX.id columns are identical and I will end up with hundreds or thousands of trigger functions. As someone new to relational databases, I was hoping there was an easier way to achieve this.
So, I have a few questions:
Is there a more efficient way to define my tables such that all of this copying isn't necessary?
If this the best way, can you give an example of how you would set up the triggers (and associated functions)?
Do I need to worry about running out of space on the server as I create (potentially many) sets of triggers of this type?

Merging Data in FMPro with modification of ID values

We are attempting to merge multiple datasets created in FileMaker Pro.
These datasets have multiple tables, and each entry within each table has a local ID that is used to relate entries between tables. The local ID values for all the entries were serially generated, but some of the ID values are repeated between the different datasets, even though the records they point to are not equivalent.
How can the ID values be updated in the data that is being imported to remove these overlaps without destroying the relationships that depend on them?
If you have access to the original database, you can try to migrate the IDs over to UUIDs or something similarly unique before exporting. This has to be done manually, either by hand with cut/paste or by a script.
Such a script will have to do the following (a rough FileMaker sketch follows the list):
Loop through the parent records
For each record go to the related records
Generate a UUID with the Get(UUID) function and put it in a variable
Replace the parent ID in the related record with this variable
Return to the parent record and replace the record ID with the variable.
Move to the next record.
Repeat until all records have been updated.
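Put together as FileMaker script steps, the loop might look something like this (the layout, table, and field names are placeholders, not from the original question, and it assumes every parent has at least one related record):

Go to Layout [ "Parents" ]
Go to Record/Request/Page [ First ]
Loop
    Set Variable [ $uuid ; Value: Get ( UUID ) ]
    Go to Related Record [ Show only related records ; From table: "Children" ; Using layout: "Children" ]
    Replace Field Contents [ Children::ParentID ; Replace with calculation: $uuid ]
    Go to Layout [ original layout ]
    Set Field [ Parents::ParentID ; $uuid ]
    Go to Record/Request/Page [ Next ; Exit after last ]
End Loop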

Referential integrity: delete all children but not the parent [PostgreSQL]

I have an imports table that contains information on file imports that are done: idimport (SERIAL PRIMARY KEY), file name, import date, etc.
Several tables have a field idimport INTEGER REFERENCES imports(idimport) ON DELETE CASCADE.
To "unimport" a file, all I have to do is DELETE the row in the imports table.
The problem I'm facing is that some users tell me that they definitely imported a file but find no trace of the imported data. Usually, they unimported the file and forgot to reimport it, but I have no proof of that (except the missing idimport, which is far from enough).
So I would like to keep track of the imports that have been deleted. Ideally, I would like PostgreSQL to delete all the child rows, keep the parent (imports) row and I would mark that row as deleted, the user who deleted it and when the deletion was made (and maybe a reason for the deletion).
The idea I have here is to create an ON DELETE trigger that would memorize the "interesting" fields in the imports table, let the delete operation run, and recreate an imports row with the interesting fields (including the idimport) plus the ones I want to add.
But I want both "before" (memorizing) and "after" (recreating the row) actions, so that would be two triggers, and I don't know how I could make them communicate the interesting fields.
Of course, I could do this client-side or in a stored procedure, but I'd prefer a completely integrated solution (one that works with a plain DELETE FROM imports WHERE idimport = 12).
Rather than burden the imports table with deleted rows, add an audit table, populated by a trigger, that records a row whenever an import is deleted. This table can be purged and truncated easily when necessary and does not burden your application.
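A minimal sketch of that approach (the audit table's layout and the names beyond idimport are illustrative):

CREATE TABLE imports_audit (
    idimport   integer,
    filename   text,
    deleted_by text        DEFAULT current_user,
    deleted_at timestamptz DEFAULT now(),
    reason     text
);

CREATE FUNCTION log_import_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO imports_audit (idimport, filename)
    VALUES (OLD.idimport, OLD.filename);
    RETURN OLD;  -- let the DELETE (and its cascades) proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER imports_delete_audit
    BEFORE DELETE ON imports
    FOR EACH ROW EXECUTE PROCEDURE log_import_delete();

A plain DELETE FROM imports WHERE idimport = 12 then removes the child rows through the cascades while leaving an audit record behind.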

Filemaker Pro 14 History tables

With a few solutions I've worked with, I've created temp tables or history tables. Normally I script it to take a handful of fields needed from a main table and copy them over to the other table by setting a variable, then setting a field to the variable for each field in the new table / new record.
I have a situation now where I'm building a history table that needs to copy the current record as is: a snapshot where all fields from that instance of the record are copied to the history table.
Rather than setting a variable and then setting a field to the variable, I'd like some input on a quicker way to get this done, where I can work at the record level and not type it out field by field. Also, if fields are added to both tables, I have to make sure my script gets updated.
I'll keep hunting around... appreciate any help.
-Rich
Do you have a sample of copying a record from one table to another, including all fields and setting some fields?
As I suggested in comments, use the Import Records[] script step, and select the same file as the source. If you choose Arrange by: [ matching names ] in the Import Field Mapping dialog, it will automatically map all source fields to their similarly named counterparts.
Note that you must establish a found set in the source table before importing.
For "setting some fields", you can define auto-enter options and activate them during the import, or run Replace Field Contents[] immediately after the import.

How can I (partially) automate the transfer of a FileMaker database structure and field contents to a second database?

I'm trying to copy some field values to a duplicate database, one record at a time. This is used for history, and so that I can delete some records in the original database to keep it fast.
I don't want to manually save the values in variables because there are hundreds of fields. So I want to go to the first field, save the field name and value, then go over to the other database and save the data. Then run a 'Go to Next Field' and loop through all the fields.
This works perfectly, but here is the problem: When a field is a calculation you cannot tab into it and therefore 'Go to Next Field' doesn't work. It skips it.
I thought of doing a 'Go to Object' but then I need to name all the objects, and I can't find a script to name objects.
Can anyone out there think of a solution?
Thanks!
This is one of those problems where I always found it easier to do an export/import.
Export all the data you want from the one database, and then import it into the other database. All you need to do is:
Manually specify which fields you want to copy
Map the data from the export to the right fields in the new database/table
You can even write a script to do these things for you.
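For instance, such a script might look roughly like this (the layout names and transfer file are placeholders; the .mer merge format is used here because it keeps field names with the data, which allows a matching-names import):

Go to Layout [ "Source" ]    # in the original file
Show All Records
Export Records [ With dialog: Off ; "transfer.mer" ]
# ...then, in the duplicate file:
Go to Layout [ "Target" ]
Import Records [ With dialog: Off ; "transfer.mer" ]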
There are several ways to achieve this.
To make a "history file", I have found there are several cases out there, so lets take a look.
CASE ONE
Single file: I just want to "keep" a very large file with historical data, because I need to erase all data in my main file.
In this case, you should create a "clone" table (in the same file or in another file, it's the same). Then change any calculation field to the type of the calculation result (number, text, date, and so on). Remove any auto-entered value or calculation from any field (auto number, auto creation date, etc.). You will have a "plain" table with no calculations or auto-entered data.
Then add a field to control duplicate data. If you have, let's say, an invoice number (unique) for each record, you can use that. But if you do not have a unique field that identifies the record, then you have to create one...
To create such a field, I recommend adding a new field on the clone table, set up as an auto-entered calculation that combines fields into something unique... something like this: invoiceNumber & "-" & lineNumber & "-" & date.
On the clone table, make sure that validation is set to "Always", that empty values are not allowed, and that the value must be unique.
Once you set up the clone table, you can import your records, making sure that the option to perform auto-enter options while importing is on. You can do it as many times as you like; new records will be added and duplicates rejected.
If you want, you can make a script that moves all the current records to the historical table before deleting them.
NOTE:
This technique works fine when the data you are trying to keep does not change over time. That is, once a record is created it has no changes.
CASE TWO
A historical table must be created, but some fields are updated.
In the beginning I thought historical data never changes. In some cases I found this is not so, such as when I want to track historical invoices but at the same time keep track of whether they are paid or not...
In this case you can use the same technique as above, but instead of importing data, you must update data based on the "unique" field that identifies the record.
Hope this technique helps.
FileMaker's FieldNames() function, along with GetField(), can give you a list of field names and then their values.
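For example, a loop along these lines could walk every field on a layout without typing them out one by one (a sketch; the "Main" layout name is a placeholder):

Set Variable [ $fields ; Value: FieldNames ( Get ( FileName ) ; "Main" ) ]
Set Variable [ $i ; Value: 1 ]
Loop
    Exit Loop If [ $i > ValueCount ( $fields ) ]
    Set Variable [ $name ; Value: GetValue ( $fields ; $i ) ]
    Set Variable [ $value ; Value: GetField ( $name ) ]
    # ...carry $name and $value over to the history record here...
    Set Variable [ $i ; Value: $i + 1 ]
End Loop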