tiki-wiki: How to batch import categorized articles from a CSV file

Importing categories from a CSV file works fine in tiki-wiki, but how do I import categorized articles into the system? Tiki supports MediaWiki and WordPress imports, while my database is in a different format.
Is there any module for CSV upload/import?
Is there any flexible migration-like script to import articles?
If the answer to both is no, could you please give me a clue about writing the proper code to import articles directly into the database?
I went through the "Understanding the database" documentation, but I think accessing the database directly should be my last resort!

For custom import jobs, you can use Tiki's profiles, which are a YAML-based format. While imports are not their primary use, they might be the easiest way to do what you want.
Normally you need to place the profile in a repository to execute it, but there is a developer option to load content from a textarea, so you can simply prepare your YAML definition and paste it there.
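As a rough illustration, a profile that creates one article might look something like the sketch below; the exact field names are assumptions on my part, so check the Tiki Profiles documentation for the real schema.

```yaml
# Hypothetical sketch only -- the field names under "data" are
# assumptions; verify them against the Tiki Profiles documentation.
objects:
 -
  type: article
  data:
   title: First imported article
   heading: Short summary shown in listings
   body: Full article text goes here.
```

A small script could then turn each row of your CSV into one such object and concatenate them all into a single profile to paste into the textarea.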
Categorizing elements through the database could be harder than it seems, as many tables are involved.

Related

How do I make use of my exported Firestore data?

Following these instructions for exporting Firestore to Google Cloud Storage, I was able to produce an export folder in GCS.
After looking at the data, it's kind of familiar; I recognize parts of it. It's filled with stuff like this: zamount �* !ÆG·zT8#z. I recognize the word "amount" from one of my fields.
I don't understand how this export data is helpful or meaningful. If I needed to reconstitute my database from these files, how would I even start?
Clearly I'm missing something because this is borderline gibberish.
The format of a Firestore export is undocumented. It's a binary representation of the document data. The only supported ways to import the data from that export are those described in the related documentation: gcloud firestore import or the GCP console.
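For reference, the documented round trip looks like this (the bucket name and prefix are placeholders):

```
gcloud firestore export gs://my-bucket/my-export
gcloud firestore import gs://my-bucket/my-export
```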
If you want an export whose format you can parse and work with programmatically, you should find a different mechanism. I'm sure there are other libraries out there that can do what you want.
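For instance, here is a minimal sketch of a parseable export using the official google-cloud-firestore Python client; the collection and file names are placeholders:

```python
import json

from google.cloud import firestore

db = firestore.Client()

# Walk one collection and dump the documents as plain JSON.
docs = {doc.id: doc.to_dict() for doc in db.collection("payments").stream()}

with open("payments.json", "w") as fh:
    json.dump(docs, fh, default=str)  # default=str renders Firestore timestamps
```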

Akeneo 2.1: Import/export best practices to set it all up

I'm currently setting up an Akeneo (2.1) instance that needs to communicate with an e-commerce solution. I was wondering what the best practices are when it comes to importing and exporting data. The documentation is somewhat lacking here; it tells you how to set things up, but I'm missing practical use cases.
Here's what I'm thinking of:
I want our customer to be able to upload their images / CSV files using an FTP connection.
Akeneo should ideally only start importing when a change in this (FTP) destination folder is detected.
Exporting should only be done once or twice a day, and upon completion the archive should be transferred via (S)FTP to a different location.
I'm currently having trouble figuring out how to implement this flow in Akeneo, because if I look at what comes out of the box, I can only come up with the following:
I can set up an FTP account that ends up in `app/uploads/product/` and allow the customer to upload to that location
Akeneo does not detect file system changes, so I can only set up a cronjob that tries to import every hour or so. The drawback of this approach is that Akeneo will copy the CSV file(s) to `app/archive/import` every time. If you have big CSV files, this can make disk usage grow considerably.
I can set up a cronjob to export twice a day, but again: Akeneo will create archives on every export, so `app/archive/export` will grow bigger every day. Please note that my customer has 4GB+ of assets (images, documents, etc.). Does Akeneo clean up the `app/archive` folder every now and then?
Every exported archive lands in a new folder with an ever-incrementing job number (`app/archive/export/csv_product_export/28/`, for example), so I'm wondering how I can detect this new folder and trigger the upload of the archive to the remote (S)FTP server after the export completes.
I was just wondering how other people who work with Akeneo have handled these challenges. I know I can write my own custom bundle and hook into a ton of events, or write shell scripts that do a lot of the magic for me, but I am wondering what Akeneo itself already offers on this subject.
Any thoughts / ideas / suggestions / experiences on this topic are welcome!
To answer your questions:
Akeneo doesn't need to have the CSV uploaded to the `app/uploads/product/` folder. You can define the CSV location in the import profile, so you can use whatever location you want.
To import images, you need to zip them together with the CSV file (to see what the structure of the archive should look like, you can export some products with media on demo.akeneo.com).
Setting up a cronjob seems like a good idea (see the crontab sketch after this list). If disk usage is a problem, this cronjob could also clean the folder after the import.
To export twice a day, you can use the export builder to export only products that have been updated since the last export (a delta export). That way, you don't waste space for nothing.
Again, the `app/archive/export/csv_product_export/28/` path is only for internal use. It is a working directory used by Akeneo during the export (before zipping, for example), and the final file (CSV or ZIP) is moved to the destination defined in the job configuration.
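As a sketch of the scheduling side (the paths and job codes below are assumptions, not Akeneo defaults):

```
# Hourly import, twice-daily export; adjust paths and job codes to your setup.
0 * * * *    php /var/www/akeneo/bin/console akeneo:batch:job csv_product_import
0 6,18 * * * php /var/www/akeneo/bin/console akeneo:batch:job csv_product_export
```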
With all that information, here is my recommendation:
Write a simple bash/PHP script that detects changes in a folder and, if there are any, moves the file to another location and launches the import (see the sketch after this list).
If you want to handle images, you can add a step to your script that generates the ZIP file in the right format.
Then, to export to your e-commerce solution, set up a cronjob that exports every hour and sends only new or updated products to the desired destination.
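A minimal sketch of the watch-and-import script from the first point; the directory layout and the job code are assumptions:

```bash
#!/bin/bash
# Watch a drop folder; when a CSV appears, move it aside and launch the
# Akeneo import job. All paths and the job code here are assumptions.
DROP_DIR=/home/ftp/incoming
WORK_DIR=/var/akeneo/import
AKENEO=/var/www/akeneo

for f in "$DROP_DIR"/*.csv; do
  [ -e "$f" ] || continue               # no new file, nothing to do
  mv "$f" "$WORK_DIR/import.csv"        # the import profile points here
  php "$AKENEO/bin/console" akeneo:batch:job csv_product_import
done
```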
Another way would be to use the new REST API, which is well documented at https://api.akeneo.com/.
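To give a taste of the API route, here is a hedged Python sketch of a delta product fetch; the host and credentials are placeholders, and the endpoints follow the api.akeneo.com documentation:

```python
import requests

BASE = "https://pim.example.com"  # placeholder host

# Authenticate with the password grant described at api.akeneo.com.
token = requests.post(
    f"{BASE}/api/oauth/v1/token",
    auth=("client_id", "secret"),  # placeholder API client credentials
    json={"grant_type": "password", "username": "admin", "password": "admin"},
).json()["access_token"]

# Fetch only products updated in the last day (a delta export).
params = {"search": '{"updated":[{"operator":"SINCE LAST N DAYS","value":1}]}'}
resp = requests.get(
    f"{BASE}/api/rest/v1/products",
    headers={"Authorization": f"Bearer {token}"},
    params=params,
)
for product in resp.json()["_embedded"]["items"]:
    print(product["identifier"])
```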

Import/Export Meteor Collections

I'd like to be able to provide an import/export-to-CSV feature for a specific Subscription (a subset of a Collection). However, I'm not sure at all where I should start. I assume this has to be done server-side, since a file needs to be uploaded for the import feature, so it probably involves a Meteor.methods function on the server which I call from the client. I'm not sure how you would return a file for download, or temporarily upload one (for the import feature; I don't want to keep the file around).
Any ideas on the best way to approach this with Meteor?
I won't pretend this is the best way, but do check out CollectionFS, a Meteor package that implements file uploads (so a logged-in user can upload files) and file handlers (in this sense, a function or series of functions automatically run on an uploaded file).
For exports, you could pipe this through CollectionFS again, or you could use FileSaver.js to just directly serve the export file.
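A rough sketch of the export half along those lines: a server method builds the CSV string and the client saves it with FileSaver.js. The Items collection and its fields are made up for illustration:

```javascript
// server: build a CSV string from a subset of the collection
Meteor.methods({
  exportItems(filter) {
    // Naive CSV; real data would need quoting/escaping of commas.
    const rows = Items.find(filter).map(
      (doc) => [doc._id, doc.name, doc.qty].join(",")
    );
    return ["_id,name,qty"].concat(rows).join("\n");
  },
});

// client: fetch the CSV and hand it to FileSaver.js
Meteor.call("exportItems", { owner: Meteor.userId() }, (err, csv) => {
  if (!err) {
    saveAs(new Blob([csv], { type: "text/csv" }), "items.csv");
  }
});
```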

Importing a table from an HTML file directly to Tableau

I recall seeing a table in a Wikipedia HTML document imported directly into Tableau. Tableau's website has a method that involves using the Google Docs ImportHtml function
=ImportHtml("http://en.wikipedia.org/wiki/List_of_bank_failures_in_the_United_States_(2008%E2%80%93present)","table",2)
to first import the table into a Google spreadsheet, and later download the spreadsheet so it can be linked to Tableau.
Is there any way to import a Wikipedia table, or any other HTML document, directly into Tableau?
Thanks
Tableau lets you paste data from any application that puts CSV-delimited text on the clipboard (not all apps do). Here's a knowledge-base article on it.
http://onlinehelp.tableausoftware.com/current/pro/online/en-us/clipboard_datasource.html
Unfortunately, it didn't seem to work on the table you pointed out in your example; perhaps the HTML is too complex. But if you can somehow export the table as CSV, it can be loaded into Tableau.
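For example, one way to produce that CSV outside Tableau is pandas' read_html; this is just one option, and the table index below is a guess, so inspect the returned list first:

```python
import pandas as pd  # read_html also needs lxml or html5lib installed

url = ("http://en.wikipedia.org/wiki/"
       "List_of_bank_failures_in_the_United_States_(2008%E2%80%93present)")

tables = pd.read_html(url)  # one DataFrame per <table> found on the page
print(len(tables))          # inspect to find the table you want

tables[1].to_csv("bank_failures.csv", index=False)  # index 1 is a guess
```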
Tableau Version 8 is due out soon and has a couple of new APIs available that should help with connecting to data from various sources.

Anyone have file structure documentation for Windows Live Mail contacts.edb file?

Does anyone have any documentation or info on the file structure of the contacts.edb file as it is used in Windows Live Mail? I can't seem to find any way to import the file back into WLM (I'm told it HAS no way, and must be "backed up" via manual exports to CSV files).
Given the apparent lack of means to reimport a backed-up EDB file, I figured that perhaps I could just write a program to read all the contacts out into vCards or something, and then import those. (I'm actually looking ultimately to get them into Outlook and ditch WLM for this user, because I don't feel like dealing with these maintenance headaches.)
Problem is that I cannot find anywhere any information on the file format/structure for WLM's EDB files.
Thanks in advance, any help GREATLY appreciated!
To anyone finding this in the future, THIS CAN BE DONE, without needing to code anything! Use this program - EseDbViewer - to export the file to a CSV file, and then you can import that wherever you want.
Note that you pretty much need the whole directory structure containing the EDB, not just the .edb file.
Hat-tip to #MicrosoftHelps on twitter.