How to sort a subset of a DataView? - ado.net

How to sort a subset of a DataView? I use the Sort property of a DataView. Is it possible to sort all records except the last one added (let's call it the "new record"), waiting until the user has finished entering data in the fields of this new record and only then sorting the whole DataView?

I know it's been a while and you've moved on. But for future generations: if you use DataView.AddNew() and, before you call Row.EndEdit(), set your DataView.RowStateFilter to DataViewRowState.OriginalRows, you can get at the other rows.
You retain the new row and can work with the rows that match the filter. I would imagine that you then set the filter back to DataViewRowState.CurrentRows before doing Row.EndEdit(), but I've never done this particular move.
You can also use DataView.Table.NewRow(). The new row won't appear in your DataView until you perform DataView.Table.Rows.Add(Row). That is how I always do it - I think it is easier.
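For illustration, here is a minimal C# sketch of that second approach; the table variable ("myTable") and the column name are made up:

// assumes an existing DataTable named myTable with a "Name" column
DataView view = new DataView(myTable);
view.Sort = "Name ASC";               // existing rows stay sorted

DataRow row = view.Table.NewRow();    // detached row - not part of the view yet
row["Name"] = "value the user is still typing";

view.Table.Rows.Add(row);             // only now does the row join the view, sorted into place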

Related

MongoDB - Getting first set of $lt

I'm using MongoDB to store data, and when retrieving some of it I need a subset that I'm not sure how to obtain.
The situation is this: items are created in batches, with about a month between them. When a new batch is added, the previous batch gets its deleted_on date set.
Now, depending on when a customer is created, they can always retrieve the current (not deleted) set of items, plus all items in the one batch that had not yet been deleted when they registered.
Thus, I want to retrieve records whose deleted_on is either null or equal to the closest deleted_on date after the customer's added_on date.
In all of my solutions, I run into one of the following problems:
I can get all items that were deleted before the customer was created - but they include all batches - not just the latest one.
I can get the first item that was deleted after the customer was created, but nothing else from the same batch.
I can get all items, but I have to modify the result set afterwards to remove all items that don't apply.
Having to modify the result afterwards is fine, I guess, but undesirable. What's the best way to handle this?
Thanks!
PS. The added_on field (on the customer) and deleted_on (on the items) both have indexes.
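The two-step approach hinted at above (find the first deletion date after the customer registered, then fetch everything that is current or carries that date) could look roughly like this in the mongo shell; the collection and variable names here are assumptions:

// 1) closest batch deletion date on or after the customer's added_on
var next = db.items.find(
    { deleted_on: { $gte: customer.added_on } },
    { deleted_on: 1 }
).sort({ deleted_on: 1 }).limit(1).toArray();

// 2) current items plus that whole batch
var batchDate = next.length ? next[0].deleted_on : null;
var result = db.items.find({
    $or: [
        { deleted_on: null },
        { deleted_on: batchDate }
    ]
});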

Keeping two fields in sync when one of them is updated

The application I'm working on is a character-based application. There is a field adfc.unme-code in one table and another field arbu.unit-code-shu in another. These two fields are shown in different windows, but they must stay in sync: when I update unme-code, unit-code-shu must be updated too.
Would a simple ASSIGN statement be enough, or should I do something else? Also, should I use the actual fields or a buffer?
You can use the ASSIGN statement in your code to assign both values at the same time. If there's a possibility of some other process changing those fields, you can create database trigger procedures on each field to copy the value over to the other field. In the Data Dictionary, view the field properties and click on the "Triggers..." button to do that.
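As a rough sketch (not from the original answer), a field-level ASSIGN trigger on adfc.unme-code could look something like this; the arbu.unit-id = adfc.unit-id condition is a made-up link, since only your schema knows what identifies the matching arbu record:

TRIGGER PROCEDURE FOR ASSIGN OF adfc.unme-code.

/* hypothetical link between the two tables - adjust to your schema */
FIND FIRST arbu EXCLUSIVE-LOCK
    WHERE arbu.unit-id = adfc.unit-id NO-ERROR.
IF AVAILABLE arbu THEN
    ASSIGN arbu.unit-code-shu = adfc.unme-code.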
Yes, ASSIGN is used to set the value of a buffer's/record's fields. However, certain criteria need to be met:
The buffer/record needs to be available.
You must have the "lock" of the buffer/record.
If you have the record available and locked you can simply do:
ASSIGN arbu.unit-code-shu = adfc.unme-code.
To make this code "safer" you can first make sure that arbu is available and not locked by any other user, and finally handle the case where the ASSIGN fails because you hold no lock at all:
IF AVAILABLE arbu AND NOT LOCKED arbu THEN DO:
    ASSIGN arbu.unit-code-shu = adfc.unme-code NO-ERROR.
    IF ERROR-STATUS:GET-NUMBER(1) = 396 THEN DO:
        MESSAGE "Apparently the record is locked." VIEW-AS ALERT-BOX ERROR.
    END.
END.
If you do not have the record (or the right lock) you need to look into your application and see how you can add this feature. What identifies the right record in the second table? A single id? Something else? Is there always a 1-to-1 relation between arbu and adfc, etc.?
Perhaps your application has a simple way of setting the value of a field. If there is an architecture in place you should try to stick to it...

How to create table occurrences for filtered data?

I have a table called transactions. Within that is a field called ipn_type. I would like to create separate table occurrences for the different ipn types I may have.
For example, one value for ipn_type is "dispute". In the past I would create a global field called "rel_dispute" and I would populate that with the value of "dispute". Then I could create a new table occurrence of the transactions table, and make a relationship based on transactions::ipn_type = transactions::rel_dispute. This way only the dispute records would show up in my new table occurrence.
Not long ago, somebody pointed out to me that this is no longer necessary, and that there is a simpler way to set up such a relationship to create a new table occurrence. I can't for the life of me remember how that was done, though.
Any information on this would be greatly appreciated. Thanks!
To show a found set of only one type, you must either perform a find or use the Go to Related Record script step to show only related records. What you describe as your previous setup fits the latter.
The simpler way is to perform a find - either on demand, or by a script triggered OnLayoutEnter.
The new 'easy' way is probably:
using one base relationship only, and
filtering only the displayed portal by type. This can be done with a global field or a global variable containing the current display type. Multiple portals with different filter conditions are possible as well.
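For example, the portal filter calculation could be as simple as this (the global field and variable names are only illustrative):

transactions::ipn_type = transactions::g_display_type
/* or, with a global variable */
transactions::ipn_type = $$displayType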
~jens

Filemaker: Best way to set a certain field in every related record

I have a FileMaker script which calculates a value. I have 1 record from table A from which a relation points to n records of table B. What is the best way to set B::Field to this value for each of these n related records?
Doing Set Field [B::Field; $Value] will only set the value of the first of the n related records. What works however is the following:
Go to Related Record [Show only related records; From table: "B"; Using layout: "B_layout" (B)]
Loop
Set Field [B::Field; $Value]
Go To Record/Request/Page [Next; Exit after last]
End Loop
Go to Layout [original layout]
Is there a better way to accomplish this? I dislike the fact that in order to set some value (model) programmatically (controller), I have to create a layout (view) and switch to it, even though the user is not supposed to notice anything like a changing view.
FileMaker always was primarily an end-user tool, so all its scripts are more like macros that repeat user actions. It is nowhere near as flexible as programmer-oriented environments. Going to another layout is, actually, a standard method to manipulate related values. You would have to do this anyway if you, say, want to duplicate a related record or print a report.
So:
Your script is quite good, except that you can use the Replace Field Contents script step instead of the loop. Also add a Freeze Window script step at the beginning; it will prevent the screen from updating.
If you have a portal to the related table, you may loop over portal rows.
FileMaker plug-in API can execute SQL and there are some plug-ins that expose this functionality. So if you really want, this is also an option.
I myself would prefer the first variant.
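Put together, that first variant would look roughly like this (script-step options paraphrased in the same listing style as above):

Freeze Window
Go to Related Record [Show only related records; From table: "B"; Using layout: "B_layout" (B)]
Replace Field Contents [No dialog; B::Field; Replace with calculation: $Value]
Go to Layout [original layout]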
Loop through a Portal of Related Records
Looping through a portal that shows the related records and setting the field there has a couple of advantages over either Replace Field Contents or the Go to Record / Set Field loop:
You don't have to leave the layout. The portal can be hidden or placed off screen if it isn't already on the layout.
You can do it transactionally, i.e. you can make sure that either all the records get edited or none of them do. This is important since, in a multi-user networked solution, records may not always be editable. Neither Replace Field Contents nor looping through the records without a portal is transaction-safe.
Here is some info on FileMaker transactions.
You can loop through a portal using Go To Portal Row. Like so:
Go To Portal Row [First]
Loop
Set Field [B::Field; $Value]
Go To Portal Row [Next; Exit after last]
End Loop
It depends on what you're using the value for. If you need to hard wire a certain field, then it doesn't sound like you've got a very normalised data structure. The simplest way would be a calculation in TableB instead of a stored field, or if this is something that is stored, could it be a lookup field instead that is set on record creation?
What is the field in TableB being used for and how?

How can I (partially) automate the transfer of a FileMaker database structure and field contents to a second database?

I'm trying to copy some field values to a duplicate database. One record at a time. This is used for history and so I can delete some records in the original database to keep it fast.
I don't want to manually save the values in a variable because there are hundreds of fields. So I want to go to the first field, save the field name and value and then go over to the other database and save the data. Then run a 'Go to Next Field' and loop through all the fields.
This works perfectly, but here is the problem: When a field is a calculation you cannot tab into it and therefore 'Go to Next Field' doesn't work. It skips it.
I thought of doing a 'Go to Object' but then I would need to name all the objects, and I can't find a script step to name objects.
Can anyone out there think of a solution?
Thanks!
This is one of those problems where I always found it easier to do an export/import.
Export all the data you want from the one database, and then import it into the other database. All you need to do is:
Manually specify which fields you want to copy
Map the data from the export to the right fields in the new database/table
You can even write a script to do these things for you.
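Very roughly, such a script boils down to two steps; the file name and the option wording here are placeholders, not the exact step options:

Export Records [No dialog; "transactions_export.mer"]
# switch context to the history file/table, then:
Import Records [No dialog; "transactions_export.mer"; Add; Matching names]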
There are several ways to achieve this.
To make a "history file", I have found there are several cases out there, so lets take a look.
CASE ONE
Single file: I just want to "keep" a very large file with historical data, because I need to erase all data in my main file.
In this case, you should create a "clone" table (in the same file or in another file, it's the same). Then change any calculation field to the type of the calculation result (number, text, date, and so on...). Remove any auto-entered value or calculation from any field (like auto number, auto creation date, etc.). You will have a "plain table" with no calculations or auto-entered data.
Then add a field to control duplicate data. If you have, let's say, a unique invoice number for each record, you can use that for this task. But if you do not have a field that identifies the record as unique, then you have to create one...
To create such a field, I recommend adding a new field on the clone table, set up as an auto-entered calculation combining fields into something unique... something like this: invoiceNumber & "-" & lineNumber & "-" & date.
On the clone table make sure that validation is set to "Always", that empty values are not allowed, and that the value must be unique.
Once you set up the clone table, you can import your records, making sure that the auto-enter option is on. You can do it as many times as you like; new records will be added and no duplicates.
If you want, you can make a script that moves all the current records to the historical table before deleting them.
NOTE:
This technique works fine when the data you are trying to keep does not change over time. That is, once a record is created it never changes.
CASE TWO
A historical table must be created but some fields are updated.
In the beginning I thought historical data never changes. In some cases I found this is not true, for example when I want to track historical invoices but at the same time keep track of whether they are paid or not...
In this case you may use the same technique above, but instead of importing data... you must update data based on the "unique" fields that identify the record.
Hope this technique helps
FileMaker's FieldNames() function, along with GetField(), can give you a list of field names and then their values.
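As a sketch of that idea (purely illustrative; what you do with $name and $value, e.g. writing them over to the history file, is up to you):

Set Variable [$fields; Value: FieldNames(Get(FileName); Get(LayoutName))]
Set Variable [$i; Value: 1]
Loop
Set Variable [$name; Value: GetValue($fields; $i)]
Set Variable [$value; Value: GetField($name)]
# ...copy $name / $value over to the other database here...
Exit Loop If [$i >= ValueCount($fields)]
Set Variable [$i; Value: $i + 1]
End Loop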