Copying contents of one Word document into another document with different names

I have Word 2003. I basically have over 100 documents whose contents pertain to a specific unit of a process, say Process 1. I have four other areas with different names, but the content will stay the same. How do I copy the contents from the first set of documents to the remaining sets without changing the names?
There are two tables in each document, and I only want to copy the second table from the first set of documents to each of the other documents. The first table has the file name and other info in it that needs to stay unique to that document.
Any help would be greatly appreciated, as I have thousands of these that will need to be copied over eventually, and doing it manually would pretty much kill me.
Thank you,
David at Work

I'm assuming the documents are .doc binary files (since you say Word 2003).
Given this, your two most feasible options are to do something from within Word (e.g. a macro or add-in), or to automate Word from an external script; a sketch of the latter follows.
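For example, a script driving Word through COM can copy the second table from a master document and replace the second table in each target. A minimal sketch in Python using the pywin32 package; the folder paths and file names are placeholders, and the "delete old table, paste at end" placement is a simplification you may need to adjust:

    import glob
    import win32com.client

    word = win32com.client.Dispatch("Word.Application")
    word.Visible = False

    # Hypothetical paths: one master document, one folder of targets.
    source = word.Documents.Open(r"C:\Docs\Process1\master.doc")
    source.Tables(2).Range.Copy()            # tables are 1-indexed in Word's object model

    for path in glob.glob(r"C:\Docs\Process2\*.doc"):
        target = word.Documents.Open(path)
        if target.Tables.Count >= 2:
            target.Tables(2).Range.Delete()  # drop the outdated second table
        rng = target.Content
        rng.Collapse(0)                      # 0 = wdCollapseEnd
        rng.Paste()                          # paste the copied table at the end
        target.Save()
        target.Close()

    source.Close(False)
    word.Quit()

The same logic can be written directly as a VBA macro inside Word if running an external script is not an option.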

Related

How to split large XLIFF file into smaller files and merge them back

I have large XLIFF 1.2 files (each 50 MB+), and I am looking for a safe utility that can split such a file into smaller files based on certain criteria (for example size, number of elements, etc.) and then merge them back when needed.
I imagine that the process would look like this:
The big file is split into several smaller files, each keeping the necessary information that sits outside the <body> element (the file attributes and header). The small files can then be processed (translated). After this, the utility would merge the files back into the one big file.
Any advice?
Thank you very much in advance!
Jan
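One way to approach this is a small script that splits on trans-unit boundaries. A rough sketch in Python, assuming XLIFF 1.2 with a single <file> element whose <body> holds the <trans-unit> elements (chunk size and output names are arbitrary); each part is a deep copy of the original tree, so the <file> attributes and <header> survive the round trip:

    import copy
    import xml.etree.ElementTree as ET

    NS = "urn:oasis:names:tc:xliff:document:1.2"
    ET.register_namespace("", NS)    # write output without ns0: prefixes

    def split(path, units_per_file=1000):
        tree = ET.parse(path)
        body = tree.getroot().find(".//{%s}body" % NS)
        units = list(body)
        for i in range(0, len(units), units_per_file):
            part = copy.deepcopy(tree)          # keeps everything outside <body>
            part_body = part.getroot().find(".//{%s}body" % NS)
            for u in list(part_body):
                part_body.remove(u)             # empty the copied body...
            part_body.extend(units[i:i + units_per_file])   # ...and refill one chunk
            part.write("part_%04d.xlf" % (i // units_per_file),
                       encoding="utf-8", xml_declaration=True)

    def merge(paths, out_path):
        # Re-merge in order: the first part supplies the wrapper and header.
        tree = ET.parse(paths[0])
        body = tree.getroot().find(".//{%s}body" % NS)
        for p in paths[1:]:
            body.extend(ET.parse(p).getroot().find(".//{%s}body" % NS))
        tree.write(out_path, encoding="utf-8", xml_declaration=True)

Note that this loads the whole tree into memory, which is usually still workable at 50 MB; a streaming parser (iterparse) would be the next step if it is not.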

How to remove words from a document on a column-by-column basis instead of whole lines in word

Perhaps a stupid question, but I have a document with a large number of numerical values arranged in columns (although not in Word's actual column formatting), and I want to delete certain columns while leaving one intact. Here's a link to a part of my document.
Data
As can be seen, there are four columns, and I only want to keep the 3rd column; but when I select any of this in Word, it selects the whole line. Is there a way I can select data in Word as a column rather than as whole lines? If not, can this be done in other word-processing programs?
Generally, spreadsheet apps or subprograms are what you need for deleting and modifying data in column or row format.
Microsoft's spreadsheet equivalent is Excel, part of the Microsoft Office Suite that Word came with. I believe Google Docs has a free spreadsheet tool online as well.
I have not looked at the uploaded file, but if it is small enough, you might be able to paste one row of data at a time into a spreadsheet, and then do your operation on the column data all at once.
There may be other solutions to this problem, but that's a start. If you're comfortable running a small script, that works too; see the sketch below.
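A minimal sketch in Python, assuming the columns are whitespace-separated and that the third column is the one to keep (the file names are placeholders):

    # Keep only the third whitespace-separated column of each line.
    with open("data.txt") as src, open("column3.txt", "w") as dst:
        for line in src:
            fields = line.split()
            if len(fields) >= 3:
                dst.write(fields[2] + "\n")   # fields are 0-indexed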

Import match from Excel unreliable

I have a script set up in FileMaker 11 which should import data from an Excel sheet. There's a field containing a unique number in both the FileMaker database and the .xlsx file, which is used to match already-existing entries. "Update matching records in found set" and "Add remaining data as new record" are both enabled.
Unfortunately, FileMaker seems to behave completely arbitrarily here. Using the same script and the same .xlsx file several times in a row, the results are completely unpredictable. Sometimes already-existing records are correctly skipped or updated; sometimes they are added a second (or third, or fifth …) time.
Is this a bug, maybe specific to version 11, which was sorted out later? Am I missing something about importing?
Here's my official answer to your question:
Imports into FileMaker databases are found set sensitive.
If you're importing records with a method that acts on existing records (update matching), FileMaker matches against the found set showing in the layout of your active window rather than against the entire table.
Regarding its localness (per your comment above), this allows you to do more precise imports. In a case where you want to make sure you only match specific records (e.g. you have a spreadsheet of data from employees at company A and you only want to update employee records for company A), you can perform a find and limit the found set to just your desired records before importing with matching turned on. This way the import will ONLY look at the records in your found set when doing its evaluation. That means less processing, because FM has to look at fewer records, and also less risk of finding a match that you didn't want (depending on what your match criteria are). A sketch of such a script follows.
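As a hypothetical script outline (the step names are real FileMaker script steps, but the layout, find criteria, and file name are placeholders):

    Go to Layout [ "Employees" ]
    Perform Find [ Restore ]    // limit the found set, e.g. to company A only
    Import Records [ With dialog: Off ; "employees.xlsx" ;
        Update matching records in found set ; Add remaining data as new records ]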
I'm having a hard time finding a good and up-to-date reference for you; all I can find is this one, which is from the FM10 days. I would suggest bookmarking the FileMaker 13 help pages. It's the same set of help documents available from the Help menu in FileMaker Pro, but I find it much easier to search the docs via a browser.

Force indexing of a filestream in SQL Server 2012

Is it possible to somehow force the indexing service of MS SQL Server 2012 to index a particular filestream/record of a FileTable?
If not, is there any way to know if a filestream/record has been indexed?
Thank you very much!
Edit: I found something. I'm not able to index a single file, but I may be able to find out which files have been indexed.
Using this query: EXEC sp_fulltext_keymappings @table_id; you'll get every record that has been indexed, which is better than nothing...
It sounds like you want to full-text index a subset of the files within a single FileTable. (If that's not the case, clarify your question and I'll edit the answer.) There are two ways you could approach this.
One approach is to use two distinct FileTables (MyTable_A and MyTable_B): place the files you want indexed in MyTable_A and the non-indexed ones in MyTable_B, then apply a full-text index to A but not B. If you need the files to appear in a unified fashion within SQL, just gate access through a view that UNIONs the two FileTables (sketched below). A potential pitfall is that this requires two distinct directory structures; if you need a unified file-system structure, this approach won't work.
Another approach is to create an INDEXED VIEW of the files you want full-text indexed, then apply a full-text index to the view. Disclaimer: I have not tried this approach, but apparently it works.
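A rough T-SQL sketch of the first approach; all object and catalog names here are hypothetical, and the full-text key relies on naming the FileTable's stream_id unique constraint up front:

    -- Two FileTables: A gets full-text indexed, B does not.
    CREATE TABLE dbo.Files_A AS FILETABLE
    WITH (FILETABLE_DIRECTORY = N'FilesA',
          FILETABLE_STREAMID_UNIQUE_CONSTRAINT_NAME = UQ_FilesA_StreamId);

    CREATE TABLE dbo.Files_B AS FILETABLE
    WITH (FILETABLE_DIRECTORY = N'FilesB');

    CREATE FULLTEXT CATALOG ftFiles AS DEFAULT;

    CREATE FULLTEXT INDEX ON dbo.Files_A
        (file_stream TYPE COLUMN file_type)   -- file_type tells FTS which filter to use
        KEY INDEX UQ_FilesA_StreamId
        ON ftFiles;
    GO

    -- Optional: present both tables as one result set.
    CREATE VIEW dbo.AllFiles AS
        SELECT * FROM dbo.Files_A
        UNION ALL
        SELECT * FROM dbo.Files_B;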

How much data can I store per node in Neo4j

I need to save big chunks of Unicode text strings in Neo4j nodes. The documentation does not mention anything about the size of data one can store per node.
Does anyone know that?
I just tried the following with the Neo4j web interface:
I wrote a line of 26 characters and duplicated it through 32000 lines, which makes a total of 832000 characters.
I created a node with a property "text" and copied my text in it, and it worked perfectly.
I tried again with 64000 lines with white spaces at the end of lines, for a total of 1728000 characters. I created a new node, then queried the node and copied the result back into a file to check the size (you never know), and wc gave me 1728001 (the extra character is, I suppose, an artifact of the copy/paste process).
It didn't seem to complain.
FYI, this is equivalent to a text of 345600 words of average length 4 plus a space (5 characters per word), i.e. more than a 1000-page book at 300 words per page.
I don't know, however, how this could impact performance if there are too many nodes. If it doesn't work well because of this, you could always use Neo4j for storing information about the relationships, with an ID property serving as a key into another, document-oriented database that holds the text (or simply a path property pointing to a file).
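For what it's worth, the same experiment can be reproduced programmatically. A minimal sketch with the official Neo4j Python driver; the URI, credentials, and label are placeholders:

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    big_text = "abcdefghijklmnopqrstuvwxyz" * 32000   # 832000 characters

    with driver.session() as session:
        # Store the large string as a plain node property.
        session.run("CREATE (n:Document {text: $text})", text=big_text)
        # Read it back and check the stored length.
        record = session.run("MATCH (n:Document) "
                             "RETURN size(n.text) AS chars LIMIT 1").single()
        print(record["chars"])                        # expect 832000

    driver.close()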
Neo4j is by default indexed using Lucene. Lucene was built as a full text search toolbox (with Solr being the de facto search engine implementation). Since Lucene was intended to search over large amounts of text, my suspicion is that you can put as much text into a node as you want and it will work just fine.
Neo4j is a very nice solution for managing relationships between objects. As you may already know, these relationships can have properties, as can the nodes themselves. But I think you cannot store "a big chunk" of data on these nodes; I think Neo4j was intended to be used alongside another database such as MongoDB or even MySQL. You get the information you need first "really fast", and then look it up in full using another engine. On my projects I store usernames, names, dates of birth, IDs, and that kind of information, but not very large text strings.