tiki-wiki: How to copy a structure into another, or how to duplicate a structure

I am new to Tiki. I just learned about structures and find them very useful and powerful. My requirement is to create a hierarchy of empty pages and have users of the wiki reuse that hierarchy when creating certain content.
I find Tiki structures to fit my hierarchy needs perfectly. However, I can't find any way to duplicate a structure (so that users can have their own copy of it) and edit their own copy.
Is this possible in Tiki, and if so, how?

You cannot have two wiki pages with the same name, so you need to use prefixes or suffixes.
You can make a dump of the existing structure (or an XML zip export) and then copy-paste the dump tree into the new structure's tree textarea, or import the zip.
Then add the prefix or suffix manually. It will be something like:
My Structure
Subpage 1
Subpage 2
Modify it to:
user2 - My Structure
user2 - Subpage 1
user2 - Subpage 2

Related

In Power Query, when duplicating the source query should I duplicate the Transform File folder as well?

My apologies in advance if this question has already been asked; if so, I could not find it.
So, I have this huge database divided by country, where I need to import each country's database individually and then, in Power Query, append the queries into one.
When I imported the US files, Power Query automatically generated a Transform File folder with 4 helper queries.
Then I just duplicated the query US - Sales, named it UK - Sales and pointed it to the UK sales folder.
The Transform File folder didn't duplicate, though.
Everything seems to be working just fine right now; however, I'd like to know if this could become a problem in the near future, because I still have several countries to go. Should I manually import new queries as new connections instead of just duplicating them, or does it just not matter?
Many thanks!
The Transform Files Folder group contains the code that is called to transform a list of files. It is re-usable code. You can see the Sample File, which serves as the template for the transform actions.
As long as the file that is arrived at for the Sample File has the same structure as the files that you are feeding into the command, then you can use any query with any list of files.
One thing you need to make sure of is that the Sample File is not removed from your data source. You may want to create a new dummy file just for that purpose, make sure it won't be deleted, and then point the Sample File query to pull just that file.
The Transform helper queries are special queries: you may edit them, but you cannot delete them and recreate your own manually. They are automatically created by Power Query when combining a list of contents and are inherently linked to the parent query.
That said, you cannot replicate them yourself, and must use the Combine function provided by Power Query to create the helper queries.
You may, however, avoid duplicating the queries: instead, replicate your steps in the parent query and use a table union to join the file lists before combining the contents with the same helper queries.
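As a rough illustration of that last point, here is a minimal Power Query M sketch, assuming two folders of identically structured files; the folder paths and the name of the generated helper function (Transform File) are placeholders that will differ in your workbook:
let
    // List the files of each country's folder
    UsFiles = Folder.Files("C:\Data\US Sales"),
    UkFiles = Folder.Files("C:\Data\UK Sales"),
    // Union the file lists into one table before combining the contents
    AllFiles = Table.Combine({UsFiles, UkFiles}),
    // Apply the same generated helper function to every file
    Transformed = Table.AddColumn(AllFiles, "Data", each #"Transform File"([Content])),
    Expanded = Table.ExpandTableColumn(Transformed, "Data",
        Table.ColumnNames(#"Transform File"(UsFiles{0}[Content])))
in
    Expanded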

Nextcloud - mass removal of collaborative tags from files

Due to an oversight in a flow routine that was meant to tag certain folders on upload into the cloud, a huge number of unwanted files were also tagged in the process. Now there are thousands upon thousands of files that have the wrong tag and need to be untagged. Neither doing this by hand nor re-uploading with the correct flow routine is really a workable option. Is there a way to do the following:
Crawl through every entry in a folder
If it's a file, untag it; if it's a folder, don't
Everything I found about tags and Nextcloud was concerned with handling them when files are uploaded, but never with going over existing files and changing their tags.
Is this possible?
Nextcloud stores this data in the configured database, so you could simply remove the assignments from the DB.
The assignments are stored in oc_systemtag_object_mapping, while the tags themselves are in oc_systemtag. Once you have found the ID of the tag to remove (let's say 4), you can simply remove all of its assignments from the DB:
DELETE FROM oc_systemtag_object_mapping WHERE systemtagid = 4;
If you would like to do this only for a specific folder, it doesn't get much more complicated. Files (including their folder structure!) are stored in oc_filecache, and oc_systemtag_object_mapping.objectid references oc_filecache.fileid. So with some joining and LIKEing, you can limit the rows to delete. If your tag is also used for non-files, your condition should include oc_systemtag_object_mapping.objecttype = 'files'.
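As a rough MySQL/MariaDB sketch of that folder-limited variant (back up the database before running bulk deletes); the tag ID 4 is taken from the example above, while the tag name and the path prefix are placeholders:
-- Look up the tag ID if you don't know it yet (the tag name is a placeholder):
SELECT id, name FROM oc_systemtag WHERE name = 'unwanted-tag';
-- Delete only the assignments whose file lives under the given folder:
DELETE m
FROM oc_systemtag_object_mapping m
JOIN oc_filecache f ON f.fileid = m.objectid
WHERE m.systemtagid = 4
  AND m.objecttype = 'files'
  AND f.path LIKE 'files/Projects/%';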

Migration of tx_news from one TYPO3 to another

I want to migrate existing news from one instance to another, including relations to FAL and content elements.
What is the best practice? I tried T3D export, but it needs too much memory. Is this the only solution, or do you have better ones?
First, you can try to export smaller chunks.
Second, you can do the export by hand. But then you must know which records and which files are involved, and you must handle UID collisions!
Start with a simple query for your news records.
Then you need all related records: FAL, tt_content, categories.
Depending on your tt_content records you might need further related records, typically FAL again.
Then you need to identify all the files.
Before you import all the records, make sure the used UIDs are unused in your target installation. Otherwise you need to modify your UIDs (e.g. you can add a constant value of 10000 to all of them).
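A rough SQL sketch of that manual approach, assuming the default tx_news tables; the storage page ID (123) and the offset (10000) are placeholders for illustration:
-- News records of one storage folder:
SELECT * FROM tx_news_domain_model_news WHERE pid = 123 AND deleted = 0;
-- Related FAL references, categories and content elements:
SELECT * FROM sys_file_reference
  WHERE tablenames = 'tx_news_domain_model_news' AND deleted = 0;
SELECT * FROM sys_category_record_mm
  WHERE tablenames = 'tx_news_domain_model_news' AND fieldname = 'categories';
SELECT * FROM tt_content WHERE tx_news_related_news != 0 AND deleted = 0;
-- Before importing, add a constant offset (e.g. 10000) to every exported uid,
-- and to every field referencing those uids, so they do not collide with
-- existing records in the target installation.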

Mongo schema: Todo-list with groups

I want to learn MongoDB and decided to create a more complex todo application for learning purposes.
The basic idea is a task list where tasks are grouped in folders. Users may have different access to those folders (read, write), and tasks may be moved to other folders. Usually (especially for syncing) tasks will be requested by folder and not individually.
Basically I thought about three approaches and would like to hear your opinion on them. Maybe I missed some points or am just thinking about it the wrong way.
A - List of References
Collections: User, Folder, Task
Folders contain references to Users
Folders contain references to Tasks
Problem
When updating a Task, a reference to the Folder is needed. Either that reference is stored within the Task (redundancy) or it must be passed with each API call.
B - Subdocuments
Collections: User, Folder
Folders contain references to Users
Tasks are subdocuments within Folders
Problem
No way to update a Task without knowing the Folder. Both need to be transmitted as well, but there is no redundancy compared to A.
C - References
Collections: User, Folder, Task
Folders contain references to Users
Tasks keep a reference to their Folders
Problem
Requesting a folder means searching in a long list instead of having direct references (A) or just returning the folder (B).
If you don't need any metadata for the folder except the name, you could also go with:
Collections: User, Task
Task has a field folder
User has arrays read_access and write_access
Then
You can get a list of all folders with
db.task.distinct("folder")
The folders a specific user can access are automatically retrieved when you retrieve the user document, so those are basically known at login.
You can get all tasks a user can read with
db.task.find( { folder: { $in: read_access } } )
with read_access being the respective array you got from your user's document. The same goes for write_access.
You can find all tasks within a folder with a simple find query for the folder name.
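For instance (the folder name is a placeholder):
db.task.find( { folder: "Inbox" } )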
Renaming a folder can be achieved with one update query on each of the collections.
Creating a folder or moving a task to another folder can also be achieved in a similarly simple manner.
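A minimal mongo shell sketch of those operations, using the collection and field names from above; the folder names and the taskId variable are placeholders:
// Rename a folder: one update on task, plus one per access array on user
db.task.updateMany( { folder: "Inbox" }, { $set: { folder: "Archive" } } )
db.user.updateMany( { read_access: "Inbox" }, { $set: { "read_access.$": "Archive" } } )
db.user.updateMany( { write_access: "Inbox" }, { $set: { "write_access.$": "Archive" } } )
// Move a single task to another folder
db.task.updateOne( { _id: taskId }, { $set: { folder: "Archive" } } )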
So without metadata for folders, that is what I would do. If you need metadata for folders, it can become a little more complicated, but basically you could manage it independently of the tasks and users above, using a folder collection containing the metadata, with _id being the folder name referenced in user and task.
Edit:
Comparison of the different approaches
I stumbled over this link, which might be of interest for you. In there is a discussion of transitioning from a relational database model to MongoDB. The difference being that in a relational database you usually try to go for third normal form, where one of the goals is to avoid bias towards any particular access pattern, whereas in MongoDB you can try to model your data to best fit your access patterns (while keeping in mind not to introduce possible data anomalies through redundancy).
So with that in mind:
Your model A is how you could do it in a relational database (each type of information in one table, referenced by id).
Model B would be tailored for an access pattern where you always list a complete folder and tasks are only edited when the folder is opened (if you retrieve one folder, you have all the tasks without an additional query).
C would be a different relational model than A, and I think a little closer to third normal form (without knowing the exact tables).
My suggestion would not support folder access as optimally as B, but it would make it easier to show and edit single tasks.
Problems that could come up with the schemas: since A and C are basically relational, you can get a problem with foreign keys, because MongoDB does not enforce foreign key constraints (e.g. you could delete a folder while there are still tasks referencing it in C, or delete a task without deleting its reference in the folder in A). You could circumvent this problem by enforcing it from the application. For B, the 16MB document limit could become a problem, circumventable by allowing folders to split into multiple documents when they reach a certain task count.
So, new conclusion: I think A and C might not show you the advantages of MongoDB (and might even be more work to build in MongoDB than in SQL), since they are what you would do in a traditional relational database, which is not what MongoDB was designed for (e.g. the missing join statement, no foreign key constraints). In sum, B best matches your access pattern "Usually (especially for syncing) tasks will be requested by-folder" while still allowing you to easily edit and move tasks once the folder is opened.

Can I use VSTO instead of Open XML to manipulate altChunk features?

I would like to embed one Word document (call it "hidden.docx") into another Word document (call it "host.docx"). The document hidden.docx would not be visible at all when host.docx is opened in Word by an end-user. Document hidden.docx would only be carried inside host.docx, sort of as unstructured cargo data.
All research I have done points me to the use of something called altChunk, offered by the Open XML SDK. I have installed the Open XML SDK and got a sample working: http://msdn.microsoft.com/en-us/library/gg490656%28v=office.14%29.aspx
My question: In order to insert an altChunk into a docx, do I really need the Open XML SDK? Can this not be accomplished using VSTO? If so, how?
[PS: My ultimate goal is, for a pair of documents where one document is the original text and the other is its translated version in another language, to be able to preserve the original document within the translated document, so as not to lose it. For any document pair, there's always the risk that the two documents become separated through misplacement of one of them.]
Yes and No.
1.) That's not what AltChunks do. AltChunks are a way to embed one document into another such that they get merged together; they are not hidden. If you create a docx package with an AltChunk in it and then open it in Word, Word will immediately merge that AltChunk into the document. (If that AltChunk is another Word document that also contains child AltChunks, they will be recursively merged into the parent as well.) Basically, it's an easy way to merge content together without having to reconsolidate all the styles, rIDs, etc. -- if you save the document and examine it, the AltChunk will be gone, and you will notice that Word has merged everything back into a single document again.
2.) Range.InsertXML, if provided a valid flat package for a full Word document, will, under the hood, invoke the same merge functionality (down to having the same bugs, etc.) that you would get from an AltChunk. The two behave identically, and you can even create a document package with the Open XML SDK that contains embedded AltChunks and insert those (I've done this in Word 2007, 2010, and 2013) -- of course, as I mentioned above, the AltChunks are never persisted; they're immediately merged into the document.
If you want to save hidden data in a document, I recommend using Custom XML (take a look at Document.CustomXMLParts). Keep in mind though, at least in Word 2010, Undo does not revert changes to CustomXML parts.
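A minimal VSTO-style sketch of that idea, assuming a Word add-in where document is a Microsoft.Office.Interop.Word.Document; the XML payload is a placeholder:
// Store hidden cargo data as a Custom XML part; it travels inside the .docx
// but is never rendered in the document body.
Microsoft.Office.Core.CustomXMLPart hiddenData =
    document.CustomXMLParts.Add("<hidden xmlns=\"urn:example:hidden\">cargo</hidden>");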
If you simply want to include some file in the Open XML package, then the simplest way is to use the packaging API of the Open XML SDK (DocumentFormat.OpenXml.Packaging namespace). First obtain a reference to the main document part of the host document:
// Embed the hidden document as an embedded package part of the host's main document part.
EmbeddedPackagePart hiddenDocumentPart = mainDocumentPart.AddEmbeddedPackagePart("application/vnd.openxmlformats-officedocument.wordprocessingml.document");
using (FileStream stream = File.Open(hiddenDocumentFile, FileMode.Open))
    hiddenDocumentPart.FeedData(stream);
Just to be sure: this way, the hidden document will in no way be part of the host document's content. It will only be part of its file (package). You can later extract it with a similar method: get the main part of the host document, find the embedded (hidden) part, and read the data from it.
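A rough sketch of the extraction side, assuming the Open XML SDK and the content type used above; the file paths are placeholders:
// Requires: using DocumentFormat.OpenXml.Packaging; using System.IO; using System.Linq;
using (WordprocessingDocument host = WordprocessingDocument.Open("host.docx", false))
{
    MainDocumentPart mainPart = host.MainDocumentPart;
    // Find the embedded (hidden) package part by its content type.
    EmbeddedPackagePart hiddenPart = mainPart.GetPartsOfType<EmbeddedPackagePart>()
        .First(p => p.ContentType == "application/vnd.openxmlformats-officedocument.wordprocessingml.document");
    // Copy its data back out to a standalone .docx file.
    using (Stream source = hiddenPart.GetStream())
    using (FileStream target = File.Create("hidden-extracted.docx"))
        source.CopyTo(target);
}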