How can I insert a set of pages into all existing documents within an existing batch in Kofax Capture

We are using Kofax Capture 11.1
I have a use case where a business unit will give us 2 stacks of paper:
Stack 1: Single-page cover sheets printed from accounting software with all required index values, which will be processed with zonal recognition. Anywhere from 50-200 of these at a time.
Stack 2: A 50-100 page document containing the supporting documentation that covers each/all of the items in Stack 1.
I'm trying to reduce manual work by having users scan Stack 1 and do the separation into the 50-200 individual documents within the batch. THEN, I want them to be able to scan the second stack and have it automatically inserted into, or associated with, each of those original documents in the batch, without having to scan it once per document or manually copy/paste it.

The basic idea is to have those cover sheets inserted before the actual documents:
Take a page from Stack 1 and put it before the beginning of its document in Stack 2.
Continue the preparation for all the other documents.
If you set up your separation correctly to use the cover sheet, then everything should happen automatically. (In most cases you can even choose whether the cover sheet is exported with the actual document.)
If you are stuck with having to scan Stack 1 as a complete set and then Stack 2, you should look into the foldering options in Kofax Capture.
Documents will be sorted the way you want, but they will still be exported as separate documents. (You can alternatively merge them with a custom module or custom export.)

Related

Growing Tables with aggregations

I'm looking at creating a table that could potentially be loaded with hundreds of rows, and I was hoping to use the growing option that the tables provide. I have a few questions.
If I have an aggregation that totals all the rows in a column, will it be the total of all rows or only of those that have been loaded? Or can this be controlled with a setting?
Similarly, for the select-all feature that ticks all the rows: will it select every row, even the ones not yet loaded, or just the loaded rows? Again, is this something I can set?
This is my first time really using any of the UI5 table elements, and the SAP documentation says this, which I didn't really understand:
"Show Aggregations
Show aggregations (such as totals) on the table footer (sap.m.Column, aggregation: footer).
Do not show aggregations in “growing” mode. It is not clear, if an aggregation will only aggregate the items loaded into the front end, or all items."
For growing tables, by default all actions and aggregations are only processed for the data already loaded. Your citation from SAP means that it is not clear to the end user whether the aggregated data refers only to the items loaded into the front end or to all items.
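For example, with a client-side JSONModel the full data set is already on the client, so you can compute the footer total yourself over all rows instead of relying on the table (a minimal sketch; the "/items" path, the amount property, and the totalText footer control are invented):

// Sum over ALL rows in the model, not just the rows the growing
// table has rendered so far, and show the result in the column footer.
var aAllItems = this.getView().getModel().getProperty("/items");
var fTotal = aAllItems.reduce(function (fSum, oItem) {
    return fSum + oItem.amount;
}, 0);
this.byId("totalText").setText("Total: " + fTotal.toFixed(2));

With an OData model, the loaded rows really are only a subset, so a total like this should be calculated in the back end instead.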
If you want to implement something like "Select all" or "Delete All", it would be better to implement this in the backend. From the guidelines of sap.m.List:
In multiple selection mode, users can (de)select all items using the shortcut CTRL+A. This only affects items that have already been loaded to the front end. All other items are not (de)selected before they are loaded (for example, items added via lazy loading with growingScrollToLoad). This conflicts with the guideline that all items the user can reach by scrolling must be (de)selected.
To process all items, listen to the selectionChange event and to its flag selectAll. This indicates whether CTRL+A was triggered. As soon as an action is triggered, process the items accordingly. Depending on the number of items, consider processing them in the back end.
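A minimal sketch of that pattern (the control ids and the two _delete... helpers are hypothetical):

// Remember whether the last selection change came from CTRL+A / select-all.
var oTable = this.byId("itemsTable"); // sap.m.Table, mode="MultiSelect", growing="true"
oTable.attachSelectionChange(function (oEvent) {
    this._bSelectAll = oEvent.getParameter("selectAll");
}, this);

// Route "act on everything" to the back end; otherwise act on the loaded selection.
this.byId("deleteButton").attachPress(function () {
    if (this._bSelectAll) {
        this._deleteAllInBackend();                   // hypothetical: server-side bulk delete
    } else {
        this._deleteItems(oTable.getSelectedItems()); // only loaded, selected rows
    }
}, this);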

PowerApps datasource to overcome 500 visible or searchable items limit

For PowerApps, what data sources, other than SharePoint lists, are accessible via PowerShell?
There are actually two issues I am dealing with: the first is dynamic updating, and the second is the 500-item limit that SharePoint lists are subject to.
I need to dynamically update my data source, which I am currently doing with PowerShell. My data source is not static, and updating records by hand is time-consuming and error-prone. The driving force behind my question is that the SharePoint list view threshold is 5,000 records, but you are limited to 500 visible and searchable records when using SharePoint lists in the gallery view, and my data source contains more than 500 but fewer than 1,000 records. If any items beyond the 500th record match the filter criteria, they will not be found. So SharePoint lists are not an option for me until that limitation is remediated.
Reference: https://powerapps.microsoft.com/en-us/tutorials/function-filter-lookup/
To your first question: PowerShell can be used for almost anything on the Microsoft stack. You could use SQL Server, Dynamics 365, SharePoint, or Azure, and in the future there will be an SDK for the Common Data Service. There are a lot of connectors, and PowerShell can work with a good majority of them.
Take note that working with these data structures through PowerShell is independent from PowerApps; PowerApps just displays whatever data the data connector gives it. If you have something updating the data in the background (PowerShell, a cron job, etc.) and want a dynamic list of items, you can use a Timer control and a Refresh function on your data source to update the list every ~5-20 seconds, as sketched below.
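For example (a sketch; 'MyList' is a placeholder for your data source), a Timer control with properties along these lines would re-pull the data every 10 seconds:

Duration: 10000
Repeat: true
AutoStart: true
OnTimerEnd: Refresh('MyList')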
To your second question about SharePoint: there is an article that came out around the time you asked this regarding working with large lists. I wouldn't say it completely solves your problem, but it seems to state that using the "Filter" function on basic column types could work for you:
...if you’d like to filter the set of items that you are showing in the gallery control, you will make use of a “Filter” expression, rather than the “Search” expression, which is the default that existing apps used. With our changes, SharePoint connector now supports “equals” type of queries on columns that support filtering (Single line of text, choice, numbers, dates and people), so make sure that the columns and the expressions you use are supported and watch for the same warning to avoid reverting back to the top 500 items.
It also notes that if you want to pull from a list larger than the 5k threshold, you would need to use indexed columns. I have not fully tested this yet, but it seems this could potentially solve your problem.
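So, as a hedged example (assuming a SharePoint list named 'Invoices' with a single-line-of-text column Status; both names are made up), a gallery Items formula like the following should be delegated to SharePoint rather than being limited to the first 500 items:

Filter('Invoices', Status = "Approved")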

Lotus Notes application Document count and disk space

I'm using Lotus Notes 8.5.2 and made a backup of my mail application in order to preserve everything in a specific folder before deleting the contents of it from my main application. The backup is a local copy, created by going to File --> Application --> New Copy, setting the Server to Local, and giving it a title and file name that I save in a folder. All of this works okay.
Once I have that, I go into All Documents and delete everything except the contents of the folder(s) I want this application to preserve. When finished, I can select all and see approximately 800 documents.
However, there are a couple of other things I have noticed. First, the document count: right-click on the newly created application, go to Properties, and select the "i" tab, which shows Disk Space and a Document count. That document count doesn't match what is shown when you open the application and go to All Documents. The All Documents count matches the 800 I had after deleting all but the contents I wanted to preserve; the application properties, however, report almost double that amount (1,500+), with a fairly large file size.
I know about the Unread Document count, and in this particular application I checked "Don't maintain unread marks" on the last property tab. There is no red number in the application, but neither the document count nor the file size changed when that was selected. Compacting the application makes no difference.
I'm concerned that although I've trimmed down what I want to preserve in this Lotus Notes application, there's a lot of excess baggage with it. And since the document count appears to be inflated, I suspect the file size is as well.
How do you make a backup copy of a Lotus Notes application, then keep only what you want, and have the document count and file size reflect what you have actually preserved? I would appreciate any help or advice.
Thanks!
This question might really belong on ServerFault or SuperUser, because it's more of an admin or user question than a development question, but I can give you an answer from a developer angle...
Open your mailbox in Domino Designer and look at the selection formula for the $All view. It should look something like this:
SELECT @IsNotMember("A"; ExcludeFromView) & IsMailStationery != 1 & Form != "Group" & Form != "Person"
That should tell you, first of all, that "All Documents" doesn't really mean all documents. If you take a closer look, you'll see that three types of documents are not included in All Documents:
Stationery documents
Person and Group documents (i.e., synchronized contacts)
Any other docs that any IBM, 3rd party, or local developer has decided to mark with an "A" in the ExcludeFromView field. (I think that repeat calendar appointment info probably falls into this category.)
One or more of those things is accounting for the difference in your document count.
If you want, you can create a view with the inverse of that selection formula by reversing each comparison and changing the ANDs to ORs:
SELECT @IsMember("A"; ExcludeFromView) | (IsMailStationery = 1) | (Form = "Group" | Form = "Person")
Or, for that matter, you can get the same result just by taking the original formula, surrounding it with parentheses, and prefixing it with a logical not:
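SELECT !(@IsNotMember("A"; ExcludeFromView) & IsMailStationery != 1 & Form != "Group" & Form != "Person")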
Either way, that view should show you everything that's not in All Documents, and you can delete anything there that you don't want.
For a procedure that doesn't involve mucking around in Domino Designer, I would suggest making a local replica instead of a local copy and using the selective replication option to replicate only documents from specific folders (Space Savers under More Options). But that answer belongs on ServerFault or SuperUser, so if you have any questions about it, please ask a new question there.

Mail Merge with multiple child records

I have a mail merge template that includes a bunch of information about the entity it is associated with in CRM. However, I need a way to add all of the child records from my main entity into the mail merge template as well.
Is there a way to have a sub record set inside a template?
The easiest solution might be Invantive Composition, by inserting an <invantive:foreach>...</invantive:foreach> or through the insert building block in the ribbon (note that I work there, but there is also a free version). Alternative solutions I've used in the past are programming it completely (using RTF generation outside of Word, or VBA or VSTO in Word), but this is quite hard to get right for tables. When the number of sub-records is somewhat limited, you might use PIVOT (see this for example) to turn it all into one big record and insert the fields in your document; in the document you may then need to hide the placeholders for the sub-records not present in a specific instance.
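As a rough T-SQL sketch of the PIVOT idea (the ChildRecord table, its columns, and the cap of three children per parent are invented for illustration):

-- Flatten up to three child records per parent into one wide row,
-- so each child lands in its own mail merge field (Child1..Child3).
SELECT ParentId, [1] AS Child1, [2] AS Child2, [3] AS Child3
FROM (
    SELECT ParentId, ChildName,
           ROW_NUMBER() OVER (PARTITION BY ParentId ORDER BY ChildId) AS rn
    FROM ChildRecord
) AS src
PIVOT (MAX(ChildName) FOR rn IN ([1], [2], [3])) AS pvt;

Each parent then comes back as a single row, which maps cleanly onto ordinary merge fields; placeholders for missing children can be hidden in the document.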

Preserve "everything" count and get filtered results in T-SQL?

I have created a complex SQL Server 2008/ColdFusion search page that searches through a variety of tables.
On the left is a list of the categories, plus an "everything" category; next to each category or type of result is the total number of results of that type found in the current search result.
I have everything working, but I am hoping there is a more optimal approach, because every time I filter the search to a specific category, I still have to get all the results in order to keep the "everything" category's totals correct.
This has made me realize it is a problem I've had in lots of other ColdFusion/SQL programs: you want to reduce the number of results by some field in the select, but you need to keep the original record-count total, and you really don't want to re-run the whole massive query every time when you just need the trimmed results.
This program is one CFC, one CFM, and one stored procedure, with jQuery/AJAX inside the CFM to call the CFC.
The CFM calls the CFC when it originally gets a form-submitted search request, and any filtering does the same thing. However, if there are more than 20 results, a button at the bottom fetches 20 more records via AJAX.
My main goal is to improve performance and keep an accurate record of what the record count was before any filtering, without having to rerun the unfiltered query every time.
This is a kind of complex problem, so there might not be any answers...
Thank you all for trying.
I would run the "big" query once, then pop it into a SESSION variable. Then I'd use Query-of-Query to return subsets based on filters.
The main query always exists, so you can query against that or use metadata like bigQuery.recordCount. Your QofQ is a smaller set of data you can use for display, and you can re-apply filters without having to return to the database.
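A sketch of that approach (session.bigQuery and the category column are placeholders):

<!--- run the expensive query once per search and cache it --->
<cfset session.bigQuery = bigQuery>

<!--- filter in memory with a query-of-queries; no database round trip --->
<cfquery name="filteredResults" dbtype="query">
    SELECT *
    FROM session.bigQuery
    WHERE category = 'Widgets'
</cfquery>

<!--- the unfiltered total is always available from the cached query --->
<cfset totalCount = session.bigQuery.recordCount>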
Well, you need to run the query (or a count(*)) at least once to get the total number. You could either:
Cache this query and refer to the cached query's recordCount again and again, or
Store the record count in the session scope until the next time the query is run for this user.
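Alternatively, since this is SQL Server 2008, a windowed count can return the filtered rows and the pre-filter total in one statement. This isn't from either answer above, just a sketch with invented names:

SELECT x.*
FROM (
    SELECT r.*,
           COUNT(*) OVER () AS totalCount -- computed over ALL rows, before filtering
    FROM SearchResults AS r
) AS x
WHERE x.category = 'Widgets'; -- filter afterwards; totalCount still reflects everything

Every returned row carries the unfiltered total, so the "everything" count stays correct without a second query.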