MongoDB - Getting first set of $lt

I'm using MongoDB to store data, and when retrieving it I need a subset that I'm uncertain how to obtain.
The situation is this: items are created in batches, with about a month between them. When a new batch is added, the previous batch has its deleted_on date set.
Now, depending on when a customer was created, they can always retrieve the current (not deleted) set of items, plus all items in the one batch that was still not deleted when they registered.
Thus, I want to retrieve all records whose deleted_on is null, plus all records whose deleted_on is the closest date after the customer's added_on date.
In all of my solutions, I run into one of the below problems:
I can get all items that were deleted before the customer was created - but they include all batches - not just the latest one.
I can get the first item that was deleted after the customer was created, but nothing else from the same batch.
I can get all items, but I have to modify the result set afterwards to remove all items that don't apply.
Having to modify the result afterwards is fine, I guess, but undesirable. What's the best way to handle this?
Thanks!
PS. Both added_on (on the customer) and deleted_on (on the items) are indexed.
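One way that seems to fit, sketched as two queries in the mongo shell (the items collection name and an already-loaded customer document are assumptions; only added_on and deleted_on come from the question):

// Find the batch that was current when the customer registered:
// the smallest deleted_on that is still later than customer.added_on.
var batch = db.items.find({ deleted_on: { $gt: customer.added_on } })
                    .sort({ deleted_on: 1 })
                    .limit(1)
                    .toArray()[0];

// Fetch the live items plus, if such a batch exists, that entire batch.
var visible = db.items.find(
    batch ? { $or: [ { deleted_on: null }, { deleted_on: batch.deleted_on } ] }
          : { deleted_on: null }
);

Both queries can use the deleted_on index, and matching the exact deleted_on date pulls in the whole batch rather than a single item, so no post-filtering of the result set is needed.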

Related

How to sort subset of DataView?

I use the Sort property of DataView. Is it possible to sort all records except the last one added (let's call it the "new record"), waiting until the user has finished entering data in the fields of this new record and then sorting the whole DataView?
I know it's been a while and you've moved on. But for future generations: if you use DataView.NewRow(), then before you call Row.EndEdit() you can get at the other rows by setting DataView.RowStateFilter to DataViewRowState.OriginalRows.
You retain the new row and can work with the rows that match the filter. I would imagine you then set the filter back to current rows before calling Row.EndEdit(), but I've never done this particular move.
You can also use DataView.Table.NewRow(). The new row won't be added to your DataView until you call DataView.Table.Rows.Add(Row). That is how I always do it - I think it is easier.
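A minimal C# sketch of that second approach (the table, column, and sample values are made up for illustration):

using System.Data;

class Demo
{
    static void Main()
    {
        var table = new DataTable();
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add("Beta");
        table.Rows.Add("Alpha");

        // The view sorts only the rows that are actually in the table.
        var view = new DataView(table) { Sort = "Name ASC" };

        // NewRow() returns a detached row: it is not yet part of the
        // table, so the view keeps sorting the existing records without it.
        DataRow newRow = table.NewRow();
        newRow["Name"] = "Gamma";   // ...user fills in the fields here...

        // Only now does the row join the table, the view, and the sort.
        table.Rows.Add(newRow);
    }
}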

How do I determine the portal row number?

How do I determine the portal row number when dragging a document into a container field in FileMaker? Dragging a document into a container field doesn't change focus, so Get(ActivePortalRowNumber) doesn't work.
Well, you could have a second relationship to the same related records, sorted by the modification dates of the related records, most recent first. The record that most recently had a document dragged into its container field would then appear as the first related record via that relationship, and you could grab an ID field from it.
Alternatively, each related record in the portal could contain an unstored calculation field set to "get(recordnumber)". Within a portal, that will evaluate to the portal row number of the record. Maybe you could use that somehow. Without more information on what you're trying to accomplish, though, it's tough to say.

How to update a long list of documents (rows) in MongoDB based on the expiry date?

I need a cron job that finds all the rows where the travelling path has expired (travellingPath.endDate < now) and sets travellingPath.isActive = false. The travelling path has a toCity property. I then want to update the quantity of the toCity based on the quantity of the travellingPath and another settings collection.
For example:
a travelling path expired
cron job catches it
get the toCity from the travelling path
get the conversionRate from another collection
based on toCity.quantity, travellingPath.quantity, the conversionRate and a random factor, I update toCity.quantity to a new value, and I might also change toCity.owner
update the travelling path to isActive=false
My idea would be to query each travelling path that has expired, but this could return 100,000 results, so it's not great. I might limit it to 250 results so it works properly. Then for each travellingPath I get its toCity, make the calculations and update the toCity and the travellingPath.
But this seems so inefficient...
Do you have better ideas? Thanks (:
Yes, that's the way to go. MongoDB updates don't support expressions that depend on other fields. So you're stuck with this:
Retrieve documents one by one or in small batches;
Calculate new values for the fields;
Send updates to the database (one by one or in batches);
Get the next portion of documents; repeat until done.
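A rough sketch of that loop in the mongo shell, sized to the 250-document batches mentioned in the question (the travellingPaths, cities, and settings collection names and the conversion formula are assumptions; only the field names come from the question):

// Process expired, still-active paths in batches. Because each pass
// flips isActive to false, re-running the query gets the next batch.
db.travellingPaths.find({ endDate: { $lt: new Date() }, isActive: true })
                  .limit(250)
                  .forEach(function (path) {
    var city = db.cities.findOne({ _id: path.toCity });
    var rate = db.settings.findOne({ _id: "conversionRate" }).value;

    // Placeholder for the asker's own quantity/owner rules.
    var newQuantity = city.quantity + path.quantity * rate * Math.random();

    db.cities.update({ _id: city._id }, { $set: { quantity: newQuantity } });
    db.travellingPaths.update({ _id: path._id },
                              { $set: { isActive: false } });
});

Running this from cron until the find() returns nothing implements the "repeat until done" step above.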

How can I (partially) automate the transfer of a FileMaker database structure and field contents to a second database?

I'm trying to copy some field values to a duplicate database. One record at a time. This is used for history and so I can delete some records in the original database to keep it fast.
I don't want to manually save the values in a variable because there are hundreds of fields. So I want to go to the first field, save the field name and value and then go over to the other database and save the data. Then run a 'Go to Next Field' and loop through all the fields.
This works perfectly, but here is the problem: When a field is a calculation you cannot tab into it and therefore 'Go to Next Field' doesn't work. It skips it.
I thought of doing a 'Go to Object', but then I would need to name all the objects, and I can't find a way to name objects from a script.
Can anyone out there think of a solution?
Thanks!
This is one of those problems where I always found it easier to do an export/import.
Export all the data you want from the one database, and then import it into the other database. All you need to do is:
Manually specify which fields you want to copy
Map the data from the export to the right fields in the new database/table
You can even write a script to do these things for you.
There are several ways to achieve this.
To make a "history file", I have found there are several cases out there, so lets take a look.
CASE ONE
Single file: I just want to keep a very large file with historical data, because I need to erase all data in my main file.
In this case, you should create a "clone" table (in the same file or in another file, it makes no difference). Then change any calculation field to the type of the calculation result (number, text, date, and so on). Remove any auto-entered value or calculation from any field (auto number, auto creation date, etc.). You will have a "plain" table with no calculations or auto-entered data.
Then add a field to control duplicate data. If you have, let's say, a unique invoice number for each record, you can use it for this task. But if you do not have a unique field that identifies each record, then you have to create one...
To create such a field, I recommend adding a new field on the clone table, set as an auto-entered calculation that combines fields into something unique... something like this: invoiceNumber & "-" & lineNumber & "-" & date.
On the clone table, make sure validation is set to "always", no empty values are allowed, and the value must be unique.
Once you set up the clone table, you can import your records, making sure the auto-enter option is on. You can do it as many times as you like; new records will be added and no duplicates created.
If you want, you can write a script that moves all the current records to the historical table before deleting them.
NOTE:
This technique works fine when the data you want to keep does not change over time. This means that once a record is created, it is never modified.
CASE TWO
A historical table must be created, but some fields are updated.
In the beginning I thought historical data never changes. In some cases I found this is not so, such as when I want to keep historical invoices but, at the same time, track whether they are paid or not...
In this case you can use the same technique as above, but instead of importing data you must update data based on the "unique" fields that identify each record.
Hope this technique helps.
FileMaker's FieldNames() function, along with GetField(), can give you a list of field names and then their values.
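A rough sketch of such a loop in FileMaker script-step notation - the current layout is assumed to show every field you want to copy, and the step that writes into the history table is left as a comment:

Set Variable [ $fields ; Value: FieldNames ( Get ( FileName ) ; Get ( LayoutName ) ) ]
Set Variable [ $i ; Value: 1 ]
Loop
    Exit Loop If [ $i > ValueCount ( $fields ) ]
    Set Variable [ $name ; Value: GetValue ( $fields ; $i ) ]
    Set Variable [ $value ; Value: GetField ( $name ) ]
    # ...write $name / $value into the history record here...
    Set Variable [ $i ; Value: $i + 1 ]
End Loop

Because GetField() reads the value directly, it works for calculation fields too - no tabbing into the field required.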

Sybase select variable logic

Ok, I have a question relating to an issue I've previously had. I know how to fix it, but we are having problems trying to reproduce the error.
We have a series of procedures that create records based on other records. The records are linked to the primary record by way of a link_id. In a procedure that grabs this link_id, the query is
select #p_link_id = id --of the parent
from table
where thingy_id = (blah)
Now, there are multiple rows in the table for the activity. Some can be cancelled. The code I have doesn't exclude cancelled rows in the select statement, so if there are previously cancelled rows, those ids will appear in the select. There is always going to be exactly one 'open' record, which is selected correctly if I exclude cancelled rows (by appending where status != 'C').
That solves the issue. However, I need to be able to reproduce the issue in our development environment.
I've gone through a process of entering a whole heap of data - opening, cancelling, etc. - trying to get this select statement to return an invalid id. However, whenever I run the select, the ids are returned in order (they're sequence generated), whereas in the case where this error occurred, the select statement returned what seems to be the first value into the variable.
For example.
ID Status
1 Cancelled
2 Cancelled
3 Cancelled
4 Open
Given the above, if I do a select for the ID I want, I want to get '4'. In the error, the result is 1. However, even if I enter in 10 cancelled records, I still get the last one in the select.
In Oracle, I know that if you select into a variable and more than one row is returned, you get an error (I think). Sybase apparently can assign multiple values to a variable without erroring.
I'm thinking that either it's something to do with how the data is selected from the table, where the ids aren't returned in ascending order without an explicit sort, or there's a dboption whereby a select into a variable saves the first or last value queried.
Edit: it looks like we can reproduce this error by rolling back stored procedure changes. However, the procs don't go anywhere near this link_id column. Is it possible that changes to the database architecture could break an index or something?
If more than one row is returned, the value that is stored in the variable will be the one from the last row returned.
If you haven't specified an order for retrieval via ORDER BY, then the order returned will be at the convenience of the database engine. It may very well vary by the database instance. It may be in the order created, or even appear "random" because of where the data is placed within the database block structure.
The moral of the story:
Always make singleton SELECTs return a single row
When #1 can't be done, use an ORDER BY to make sure the one you care about comes last
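Applied to the snippet from the question, the two rules look like this (a sketch reusing the question's own placeholder names):

select #p_link_id = id      -- assigned once per qualifying row
from table
where thingy_id = (blah)
  and status != 'C'         -- rule 1: only the open row qualifies
order by id asc             -- rule 2: the row you want comes last

With the status filter in place only the open row should qualify; the ORDER BY is a belt-and-braces guard so that, if more than one row ever slips through, the last value assigned is still the one you want.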