I'm using Lotus Notes 8.5.2 & made a backup of my mail application in order to preserve everything in a specific folder before deleting its contents from my main application. The backup is a local copy, created by going to File --> Application --> New Copy, setting the Server to Local, and giving it a title & file name that I save in a folder. All of this works okay.
Once I have that, I go into All Documents & delete everything except the contents of the folder(s) I want this application to preserve. When finished, I can select all and see approximately 800 documents.
However, there are a couple of other things I have noticed. First, the Document Count: right-click on the newly created application, go to Properties, and select the "i" tab, which shows Disk Space & a Document count. That document count doesn't match what is shown when you open the application & go to All Documents. The All Documents count matches the roughly 800 I had after deleting all but the contents I wanted to preserve, but the properties say the application has almost double that amount (1500+), with a fairly large file size.
I know about the Unread Document count, and in this particular application I checked "Don't maintain unread marks" on the last property tab. There is no red number in the application, but neither the document count nor the file size changed when that was selected. Compacting the application makes no difference.
I'm concerned that although I've trimmed down what I want to preserve in this Lotus Notes application, there's a lot of excess baggage with it. Also, since the document count appears to be inflated, I suspect the file size is too.
How do you make a backup copy of a Lotus Notes application, then keep only what you want & have the Document Count and File Size reflect what you have actually preserved? Would appreciate any help or advice.
Thanks!
This question might really belong on ServerFault or SuperUser, because it's more of an admin or user question than a development question, but I can give you an answer from a developer angle...
Open your mailbox in Domino Designer, and look at the selection formula for the ($All) view. It should look something like this:
SELECT @IsNotMember("A"; ExcludeFromView) & IsMailStationery != 1 & Form != "Group" & Form != "Person"
That should tell you, first of all, that indeed "All Documents" doesn't really mean all documents. If you take a closer look, you'll see that three types of documents are not included in All Documents:
Stationery documents
Person and Group documents (i.e., synchronized contacts)
Any other docs that any IBM, 3rd party, or local developer has decided to mark with an "A" in the ExcludeFromView field. (I think that repeat calendar appointment info probably falls into this category.)
One or more of those things is accounting for the difference in your document count.
If you want, you can create a view with the inverse of that selection formula by reversing each comparison and changing the Ands to Ors:
SELECT @IsMember("A"; ExcludeFromView) | (IsMailStationery = 1) | (Form = "Group" | Form = "Person")
Or, for that matter, you can get the same result by taking the original formula, surrounding it with parentheses, and prefixing it with a logical not.
Either way, that view should show you everything that's not in All Documents, and you can delete anything there that you don't want.
For a procedure that doesn't involve mucking around with Domino Designer, I would suggest making a local replica instead of a local copy, and using the selective replication option to replicate only documents from specific folders (Space Savers under More Options). But that answer belongs on ServerFault or SuperUser so if you have any questions about it please enter a new question there.
Due to an oversight in a flow routine that was meant to tag certain folders on upload into the cloud, a huge number of unwanted files were also tagged in the process. Now there are thousands upon thousands of files that have the wrong tag and need to be untagged. Neither doing this by hand nor re-uploading with the correct flow routine is really a workable option. Is there a way to do the following:
Crawl through every entry in a folder
If it's a file, untag it; if it's a folder, don't
Everything I found about tags and Nextcloud was concerned with handling them when files are uploaded, but never with going over existing files to change their tags.
Is this possible?
Nextcloud stores this data in the configured database, so you can simply remove the assignments from the DB.
The assignments are stored in oc_systemtag_object_mapping, while the tags themselves are in oc_systemtag. Once you have found the ID of the tag to remove (let's say 4), you can simply remove all of its assignments from the DB:
DELETE FROM oc_systemtag_object_mapping WHERE systemtagid = 4;
If you would like to do this only for a specific folder, it doesn't get much more complicated. Files (including their folder structure!) are stored in oc_filecache, and oc_systemtag_object_mapping.objectid references oc_filecache.fileid. So with some joining and LIKEing, you can limit the rows to delete. If your tag is also used for non-file objects, your condition should include oc_systemtag_object_mapping.objecttype = 'files'.
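For illustration, a minimal sketch of such a folder-limited delete, assuming MySQL/MariaDB, tag ID 4, and a folder whose oc_filecache path is files/Photos/Unwanted (the path and tag ID are assumptions you need to adapt; back up the database first):

-- Remove tag 4 only from files under the assumed folder path.
DELETE m
FROM oc_systemtag_object_mapping AS m
JOIN oc_filecache AS f ON f.fileid = m.objectid
WHERE m.systemtagid = 4
  AND m.objecttype = 'files'
  AND f.path LIKE 'files/Photos/Unwanted/%';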
I started using django-simple-history in order to keep history, but when I delete an object (from the admin page at least) I notice that it is gone for good.
I suppose I could create tags and "hide" objects instead of deleting them in my views, but it would be nice if there were an easier way with django-simple-history, one that would also cover admin operations.
When objects are deleted, that deletion is also recorded in history. The object does not exist anymore, but its history is safe.
If you browse your database, you should find a table named:
[app_name]_historical[model_name]
It contains a row with the last state of the object. That row also contains additional columns: history_id, history_change_reason, history_date, and history_type. For a deletion, history_type will be set to "-" (a minus sign).
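For example, to list the deletion records directly in SQL (hypothetical app "polls" and model "question"; substitute your own app and model names):

SELECT history_id, id, history_date, history_type
FROM polls_historicalquestion
WHERE history_type = '-'
ORDER BY history_date DESC;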
Knowing that, it is possible to revert a deletion programmatically, but not through the Django Admin. Have a look at the django-simple-history documentation for details on how to do it programmatically.
Hope that helps!
I hope my question makes sense; I'll try to give as much info as possible. I should probably start off by saying this is the first Access database (any database) I have ever built, and my knowledge comes from trial and error as well as YouTube and the occasional Google search... NOOB
So I'm attempting to build a database using Microsoft Access (2007) for the first time (student records in my department). I have pulled in all the data I had available (names, major, graduate, advisor, etc.) and made several appended tables for additional data using an append query (usually just pulling over name, ID#, and major, and then adding the information that relates to that particular table).
Now I am going through the paper files (which we would like to get rid of) to update any missing data or add new students that we didn't have stored anywhere electronically.
I have created a form in which I can add new records or edit/add already available data that I need.
The problem is that the form pulls up pretty much everything I need except the occasional record. I search by ID# in the search field at the bottom, don't find the student, and figure I must not have them, so I add a new record; but when I hit save it tells me the record can't be added because there is already a conflicting value. And when I check my table, sure enough, the record is there. In the form's query, where I check which tables each field's information is pulled from, I have no criteria that would filter any information out, and the relationships are all based on the ID# (which is my primary key in all tables). When I check the data everything seems to be correct (not a wrong major, etc.), so I can't quite figure out why some records are not being pulled up.
My question is: why, and what can I do to fix it?
I hope my explanation is not too confusing. Thank you in advance.
I have an ACCDB that I split a while ago. It contains many forms with subforms (based on tables), over two hundred tables in the BE (almost all are small lookup tables for vehicle objects), and 400+ queries. There also happens to exist another ACCDB with a single table of 6.5M rows of basic history info that the FE links to. The two back ends do not link to each other in any way. The FE is 14MB, the BE is 1.2GB, and the single-table DB is 900MB, all with primary keys and indexes set up appropriately. The DB is 100% normalized. Both BEs grow 5% every month. The DB is currently slated to be migrated to an Oracle 11g environment later this year.
Question:
I found out recently that if I compact and repair the back end or the front end, none of the forms containing subforms open; the whole FE just freezes to white. Even if all three files are compacted and repaired I still have issues. BUT if I compact/repair all three and also relink the entire front end to the two back ends, the forms all of a sudden start working. It was only recently that this behavior began.
Why do I have to relink to make the forms work again?
You should not have to re-link anything here at all after a C+R.
The only thing that comes to mind is the user who is doing the C+R has some restricted rights in the folder or directory where the C+R occurs.
Remember, when the user does the C+R, a COPY of the file is created, and thus the NEW file can inherit the CURRENT user's rights when it is created. So it sounds like some permissions issue exists on the folder, or the user doing the C+R has some special (different) rights (perhaps some inherited rights due to membership in some security group).
Of course you should ensure that you are using UNC path names, and of course the front end needs to be placed on each machine.
Perhaps the user doing the C+R has "different" drive mappings, and thus the links to the back-end databases are wrong due to a different drive letter. So, as a general rule, I would STRONGLY suggest avoiding drive letters and using UNC path names (if you are not already).
If you are using UNC path names, then the likely issue is permissions.
There is also a possibility that the user doing the C+R is running the front end from a non-trusted location.
Also, the table of 6.5 million rows seems a bit large, and I assume the 1.2 gig size is RIGHT AFTER a C+R? (but this issue is for another post).
This suggests a drive mapping issue, a permissions issue, or perhaps the user launching the application is messing up references. I would Shift-bypass into the application and ensure that the user doing the C+R can compile the application, and from the VBA editor I would take CAREFUL note that, say, Office 14 references are not being hijacked to Office 15 references.
You're reaching the "hassle-free" viable (as opposed to "documented") limits of Access as a database. Remember, the queries need to be compiled, which means resolving all the table links and verifying existing indexes and other metadata. It's possible that simply overwriting this information by manually using the Linked Table Manager, as you have, may be more efficient.
Here are a few prescribed tips which might help you out:
http://office.microsoft.com/en-gb/access-help/improve-performance-of-an-access-database-HP005187453.aspx
And some more...
http://www.fmsinc.com/MicrosoftAccess/Performance.html#Linked%20Tables
And a related thread from this site:
Proper way to program a Microsoft Access Backend Database in a Multiuser Environment
Issues which may not be helping you:
queries which don't restrict the dataset sufficiently, particularly those running a dynaset
back-end database files sitting too low in the Windows folder structure (the higher the better)
As the 2nd link suggests, the truth is there are so many variables at work that resolving this will require some tinkering, with trial & error playing a major part.
All that, or you can upsize to SQL Server Express :)
http://office.microsoft.com/en-gb/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx
I keep seeing questions floating through that make reference to a column in a database table named something like DateLastUpdated. I don't get it.
The only companion field I've ever seen is LastUpdateUserId or such. There's never an indicator of why the update took place, or even what the update was.
On top of that, this field is sometimes written from within a trigger, where even less context is available.
It certainly doesn't even come close to being an audit trail, so that can't be the justification. And if there is an audit trail somewhere in a log or whatever, this field would be redundant.
What am I missing? Why is this pattern so popular?
Such a field can be used to detect whether there are conflicting edits made by different processes. When you retrieve a record from the database, you get the previous DateLastUpdated field. After making changes to other fields, you submit the record back to the database layer. The database layer checks that the DateLastUpdated you submit matches the one still in the database. If it matches, then the update is performed (and DateLastUpdated is updated to the current time). However, if it does not match, then some other process has changed the record in the meantime and the current update can be aborted.
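A minimal SQL sketch of that optimistic-concurrency check (the Customer table, column names, and :placeholders are hypothetical; the application supplies the DateLastUpdated value it originally read):

-- If this statement affects 0 rows, another process changed the record
-- since it was read, and the update should be aborted or retried.
UPDATE Customer
SET Name = :new_name,
    DateLastUpdated = CURRENT_TIMESTAMP
WHERE CustomerId = :id
  AND DateLastUpdated = :original_last_updated;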
It depends on the exact circumstance, but a timestamp like that can be very useful for autogenerated data - you can figure out whether something needs to be recalculated because a dependency has changed later on (this is how build systems decide which files need to be recompiled).
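A sketch of that staleness check in SQL, assuming hypothetical Source and DerivedSummary tables where DerivedSummary.CalculatedAt records when the derived row was last computed:

-- Rows whose source changed after the derived value was computed need recalculating.
SELECT s.SourceId
FROM Source AS s
JOIN DerivedSummary AS d ON d.SourceId = s.SourceId
WHERE s.DateLastUpdated > d.CalculatedAt;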
Also, many websites will have data marking "Last changed" on a page, particularly news sites that may edit content. The exact reason isn't necessary (and there likely exist backups in case an audit trail is really necessary), but this data needs to be visible to the end user.
These sorts of things are typically used for business applications where user action is required to initiate the update. Typically, there will be some kind of business app (eg a CRM desktop application) and for most updates there tends to be only one way of making the update.
If you're looking at address data, that was done through the "Maintain Address" screen, etc.
Such database auditing is there to augment business-level auditing, not to replace it. Call centres will sometimes (or always in the case of financial services providers in Australia, as one example) record phone calls. That's part of the audit trail too but doesn't tend to be part of the IT solution as far as the desktop application (and related infrastructure) goes, although that is by no means a hard and fast rule.
Call centre staff will also typically have some sort of "Notes" or "Log" functionality where they can type freeform text as to why the customer called and what action was taken so the next operator can pick up where they left off when the customer rings back.
Triggers will often be used to record exactly what was changed (eg writing the old record to an audit table). The purpose of all this is that with all the information (the notes, recorded call, database audit trail and logs) the previous state of the data can be reconstructed as can the resulting action. This may be to find/resolve bugs in the system or simply as a conflict resolution process with the customer.
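A sketch of such an audit trigger, using SQL Server syntax and hypothetical Address and Address_Audit tables (adapt the names and columns to your schema):

CREATE TRIGGER trg_Address_Audit
ON Address
AFTER UPDATE
AS
BEGIN
    -- Copy the pre-update rows into the audit table with a timestamp.
    INSERT INTO Address_Audit (AddressId, Street, City, AuditDate)
    SELECT d.AddressId, d.Street, d.City, GETDATE()
    FROM deleted AS d;
END;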
It is certainly popular - Rails, for example, has a shorthand for it, as well as for a creation timestamp (:timestamps).
At the application level it's very useful, as the same pattern is very common in views - look at the questions here for example (answered 56 secs ago, etc).
It can also be used retrospectively in reporting to generate stats (e.g. what is the growth curve of the number of records in the DB).
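For example, a simple reporting query of that sort (hypothetical Customer table with a DateCreated column):

-- Number of records created per day, i.e. the raw data for a growth curve.
SELECT CAST(DateCreated AS DATE) AS CreatedDay, COUNT(*) AS NewRecords
FROM Customer
GROUP BY CAST(DateCreated AS DATE)
ORDER BY CreatedDay;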
There are a couple of scenarios.
Let's say you have an address table for your customers:
You have your CRM app; a customer calls to say his address changed a month ago, and with the LastUpdate column you can see that this row for this customer hasn't been touched in four months.
Usually you use triggers to populate a history table so that you can see all the older versions; if you see that the creation date and updated date are the same, there is no point hitting the history table, since you won't find anything.
You calculate indexes (stock market); you can easily see that an index was recalculated just by looking at this column.
There are two DB servers; by comparing the date column you can find out whether all the changes have been replicated or not, etc.
This is also very useful if you have to send delta feeds out to clients, that is, feeds where only the records that have been changed or inserted since the date of the last feed are sent.
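A sketch of such a delta-feed extract (hypothetical Customer table; :last_feed_date is when the previous feed ran, and DateLastUpdated is assumed to be set on insert as well as on update):

SELECT CustomerId, Name, Address, DateLastUpdated
FROM Customer
WHERE DateLastUpdated > :last_feed_date;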