In Word Online, I'm seeing that tables can be inserted directly after one another in a document without a paragraph element between them.
The Office JS API in Word Online reports these as two separate table objects in a table collection. However, when the same document is opened in Word for Windows, they appear to merge into a single table object.
When viewing the range OOXML, the two tables are formatted as though they were one.
Word Online:
Word for Windows:
This can be reproduced by inserting two tables one after the other in Word, logging the selected range, and inspecting its tables collection property:
await Word.run(async (context) => {
    // Load the tables collection of the current selection.
    const rangeObject = context.document.getSelection();
    rangeObject.tables.load('items');
    await context.sync();
    // Word Online reports two tables here; Word for Windows reports one.
    console.log(rangeObject.tables.items.length);
});
Which of these is the intended behaviour?
This is a known discrepancy in behavior between Word for Windows and Word Online. Since it does not currently cause any particularly bad experiences, Word Online doesn't have any solid plans to "fix" it; Word Online simply considers them two different tables.
In the future, if we did feel the need to address this discrepancy in experience, we would likely just align Word Online more closely with the desktop experience by disabling the ability to create a table immediately after another table.
Related
I have been asked to develop a web application, to be hosted on SharePoint 2013, that has to work with lots of data. It basically consists of a huge form used to save and edit a lot of information.
Unfortunately, due to some work restrictions, I do not have access to the backend, so it has to be done entirely client-side.
I already know how to programmatically create SharePoint lists with site columns and save data to them with REST.
The problem is that I need to create a SharePoint list (to be used as a database) with at least 379 site columns (fields), of which 271 have to be single lines of text and 108 multiple lines of text, and by doing so I think I would exceed the threshold limit (too many site columns on a single list).
Is there any way I could make this work? Is there any other solution for saving large amounts of data on SharePoint using only client-side techniques (e.g. REST)? Maybe there is a way to save an XML or JSON file on SharePoint through REST?
I don't remember whether there is actually a hard limit on the number of columns in SP 2013. There is certainly a limit when you use lookup columns (up to 12 in one view), but since you are using only text columns this should not be blocking. There is also the list view threshold on the number of rows returned in one view (5,000 for a normal user, 20,000 for an admin user).
Best to check all the boundaries and limits here: https://learn.microsoft.com/en-us/sharepoint/install/software-boundaries-and-limits
As described by MS:
The sum of all columns in a SharePoint list cannot exceed 8,000 bytes.
Also, you may create one Note column and store all the data as a JSON structure saved in that column, or create a library (not a list) and store JSON documents in it (just be aware that by default the JSON format is blocked from being stored in SharePoint, and you need to change those settings in Central Admin for the web application -> link). Just be aware that with this approach you will not be able to use many OOB SharePoint features, like column filters, which might be handy.
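For illustration, here is a rough sketch of the single-Note-column idea from a classic SharePoint page (the list name FormData, the Payload column, and the SP.Data.FormDataListItem entity type name are assumptions; the type name depends on how the list was created):

// Sketch: save the whole form as one JSON blob in a Note (multi-line text) column.
const payload = { BloodPressure: '120/80', SubjectID: 'S-001' };  // example form data

fetch(_spPageContextInfo.webAbsoluteUrl + "/_api/web/lists/getbytitle('FormData')/items", {
    method: 'POST',
    credentials: 'same-origin',
    headers: {
        'Accept': 'application/json;odata=verbose',
        'Content-Type': 'application/json;odata=verbose',
        'X-RequestDigest': document.getElementById('__REQUESTDIGEST').value
    },
    body: JSON.stringify({
        __metadata: { type: 'SP.Data.FormDataListItem' },  // assumed entity type name
        Title: 'Submission 1',
        Payload: JSON.stringify(payload)                   // the whole form as JSON text
    })
})
.then(response => response.json())
.then(data => console.log('Created item', data.d.Id));

Reading it back is just a GET on the same endpoint followed by JSON.parse on the Payload field.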
Perhaps a stupid question, but I have a document with a large number of numerical values arranged in columns (although not in Word's actual column formatting), and I want to delete certain columns while leaving one intact. Here's a link to a part of my document.
Data
As can be seen, there are four columns and I only want to keep the third one, but when I select any of this in Word, it selects the whole line. Is there a way I can select data in Word as a column rather than as whole lines? If not, can this be done in other word-processing programs?
Generally, spreadsheet apps or subprograms are what you need for deleting and modifying data in column or row format.
Microsoft's spreadsheet equivalent is Excel, part of the Microsoft Office Suite that Word came with. I believe Google Docs has a free spreadsheet tool online as well.
I have not looked at the uploaded file, but if it is small enough, you might be able to paste one row of data at a time into a spreadsheet, and then do your operation on the column data all at once.
There may be other solutions to this problem, but that's a start.
I have a script set up in FileMaker 11 which should import data from an Excel sheet. There's a field containing a unique number in both the FileMaker database and the .xlsx file, which is used to match already existing entries. "Update matching records in found set" and "Add remaining data as new record" are both enabled.
Unfortunately, FileMaker seems to behave completely arbitrarily here. Using the same script and the same .xlsx file several times in a row, the results are completely unpredictable. Sometimes already existing records are correctly skipped or updated; sometimes they are added a second (or third, or fifth …) time.
Is this a bug, maybe specific to version 11, which was sorted out later? Am I missing something about importing?
Here's my official answer to your question:
Imports into FileMaker databases are found set sensitive.
If you're importing records with a method that acts on existing records (update matching), FileMaker does the matching against the found set showing in the layout of your active window rather than against the entire table.
Regarding its localness (per your comment above), it allows you to do more precise imports. In a case where you want to make sure you only match specific records (e.g. you have a spreadsheet of data from employees at company A and you only want to update employee records for company A), you could perform a find and limit the found set to just your desired records before importing with matching turned on. This way the import will ONLY look at the records in your found set to do its evaluation. This means less processing, because FM has to look at fewer records, and also less risk that you're going to find a match that you didn't want (depending on what your match criteria are).
I'm having a hard time finding a good and up-to-date reference for you. All I can find is this one, which is from the FM10 days. I would suggest bookmarking the FileMaker 13 help pages. It's the same set of help documents available when you use the Help menu in FileMaker Pro, but I find it much easier to search the docs via a browser.
I was given the task to decide whether our stack of technologies is adequate to complete the project we have at hand or should we change it (and to which technologies exactly).
The problem is that I'm just a SQL Server DBA and I have a few days to come up with a solution...
This is what our client wants:
They want a web application to centralize pharmaceutical research studies, separated into topics, or "projects" in their jargon. These studies are sent as CSV files and they are somewhat structured as follows:
Project (just a name for the project)
Segment (could be behavioral, toxicology, etc. There is a finite set of about 10 segments. Each CSV file holds one segment)
Mandatory fixed fields (a small set of fields that are always present, like Date, subject IDs, etc. These will be the PKs).
Dynamic fields (could be anything here, but always as key/value pairs, and there shouldn't be more than 200 fields)
Any files (images, PDFs, etc.) that are associated with the project.
At the moment, they just want to store these files and retrieve them through a simple search mechanism.
They don't want to crunch the numbers at this point.
98% of the files have a couple of thousand lines, but the remaining 2% have a couple of million rows (and around 200 fields).
This is what we are developing so far:
The back-end is SQL Server 2008 R2. I've designed an EAV schema for each segment (before anything, please keep in mind that this is not our first EAV design; it worked well before with less data), and the mid-tier/front-end is PHP 5.3 with the Laravel 4 framework and Bootstrap.
The issue we are experiencing is that PHP chokes on the big files. It can't insert into SQL in a timely fashion when there are more than 100k rows, because there's a lot of pivoting involved and, on top of that, PHP needs to fetch all the field IDs first before it can start inserting. I'll explain: this is necessary because the client wants some sort of control over the field names. We created a repository of all the possible fields to try and minimize ambiguity problems; fields named, for instance, "Blood Pressure", "BP", "BloodPressure" or "Blood-Pressure" should all be stored under the same name in the database. So, to minimize the issue, the user has to insert his CSV fields into another table first, which we called the properties table. This won't completely solve the problem, but as he's inserting the fields, he sees possible matches already inserted. When the user types in "blood", a panel shows all the fields already used that contain the word "blood". If the user thinks it's the same thing, he has to change the CSV header to the existing field name. Anyway, all this is to explain that it's not a simple EAV structure and there's a lot of back and forth of IDs.
This issue is giving us second thoughts about our technology stack, but we have limitations on our possible choices: I have only worked with relational DBs so far (only SQL Server, actually), and the other guys know only PHP. I guess a full MS stack is out of the question.
It seems to me that a non-SQL approach would be the best fit. I read a lot about MongoDB, but honestly I think it would be a super steep learning curve for us, and if they want to start crunching the numbers or even have some reporting capabilities, I guess Mongo wouldn't be up to that. I'm reading about PostgreSQL, which is relational, and its famous hstore type. So here is where my questions start:
Do you guys think that Postgres would be a better fit than SQL Server for this project?
Would we be able to convert the CSV files into JSON objects (or similar) to be stored in hstore fields and still be somewhat queryable? (See the rough sketch after this list.)
Are there any issues with Postgres sitting on a Windows box? I don't think our client has Linux admins. Nor do we, for that matter...
Is its licensing free for commercial applications?
Or should we stick with what we have and try to sort the problem out with staging tables, bulk insert, or another technique that relies on the back-end to do the heavy lifting?
Sorry for the long post and thanks for your input guys, I appreciate all answers as I'm pulling my hair out here :)
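To make question 2 a bit more concrete, here is the kind of thing I'm imagining, purely as a sketch: the table and field names are invented, it uses the hstore(keys, values) constructor from the hstore extension, it ignores batching and performance for now, and it assumes a Node.js loader with the pg client, which is not part of our current stack.

// Sketch: each CSV line becomes one row; the dynamic fields go into an hstore column.
// Assumes CREATE EXTENSION hstore and a table like:
//   CREATE TABLE research_rows (project text, segment text, subject_id text,
//                               recorded_on date, fields hstore);
const { Client } = require('pg');

async function loadCsv(project, segment, header, rows) {
    // header: array of column names from the CSV header line
    // rows:   array of parsed CSV lines (arrays of string values)
    const client = new Client('postgres://user:pass@localhost/research');  // placeholder
    await client.connect();
    for (const values of rows) {
        await client.query(
            'INSERT INTO research_rows (project, segment, subject_id, recorded_on, fields) ' +
            'VALUES ($1, $2, $3, $4, hstore($5::text[], $6::text[]))',
            [project, segment,
             values[header.indexOf('SubjectID')],   // mandatory fixed fields...
             values[header.indexOf('Date')],
             header, values]);                      // ...plus everything as key/value pairs
    }
    await client.end();
}

// Querying a dynamic field later would stay fairly simple, e.g.:
//   SELECT * FROM research_rows WHERE fields -> 'BloodPressure' = '120/80';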
I need to save big chunks of Unicode text strings in Neo4j nodes. The documentation does not mention anything about the size of data one can store per node.
Does anyone know that?
I just tried the following with the Neo4j web interface:
I wrote a line of 26 characters and duplicated it through 32000 lines, which makes a total of 832000 characters.
I created a node with a property "text" and copied my text in it, and it worked perfectly.
I tried again with 64000 lines, with white spaces at the end of the lines, for a total of 1728000 characters. I created a new node, then queried the node and copied the result back into a file to check the size (you never know), and wc gave me 1728001 (the extra one must be an error in the copy/paste process, I suppose).
It didn't seem to complain.
FYI, this is equivalent to a text of 345600 words averaging 4 letters plus a space (5 characters), or roughly a book of over 1000 pages at 300 words per page.
I don't know, however, how this could impact performance if there are too many nodes. If it doesn't work well because of this, you could always consider using Neo4j to store information about the relationships, with an ID property referencing another, document-oriented database used to retrieve the text (or simply the path of a file as a path property).
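For what it's worth, the same experiment can be scripted; here is a minimal sketch using the official neo4j-driver package for JavaScript (the connection details, credentials, and the Document label are just placeholders):

// Minimal sketch: store a large text property on one node and read it back.
const neo4j = require('neo4j-driver');

const driver = neo4j.driver('bolt://localhost:7687',
    neo4j.auth.basic('neo4j', 'password'));  // placeholder connection and credentials

async function storeBigText(text) {
    const session = driver.session();
    try {
        const result = await session.run(
            'CREATE (d:Document {text: $text}) RETURN d.text AS text',
            { text });
        console.log('Stored characters:', result.records[0].get('text').length);
    } finally {
        await session.close();
    }
}

// Roughly the 832000-character test described above.
storeBigText('abcdefghijklmnopqrstuvwxyz\n'.repeat(32000))
    .then(() => driver.close());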
Neo4j is by default indexed using Lucene. Lucene was built as a full text search toolbox (with Solr being the de facto search engine implementation). Since Lucene was intended to search over large amounts of text, my suspicion is that you can put as much text into a node as you want and it will work just fine.
Neo4j is a very nice solution for managing relationships between objects. As you may already know, these relationships can have properties, as well as the nodes themselves. But I think you cannot store "a big chunk" of data on these nodes. I think Neo4j was intended to be used with another database such as MongoDB or even MySQL: you get the information you need first, "really fast", and then look it up using another engine. On my projects I store usernames, names, dates of birth, IDs, and that kind of information, but not very large text strings.