How to upload an entire structured folder to an eXist-db database through REST - Eclipse

Could someone tell me how to upload an entire structured folder to an eXist-db database through the REST interface?
Here is what I am trying to achieve: I have a folder containing data files, with sub-folders forming a hierarchy under that root folder. Is it possible to upload the entire root folder of data to the local eXist-db database using REST, so that I can access the data files like this:
http://localhost:8080/exist/rest/db/basefolder/branch1/dev/documents/File.xml
in Eclipse.
Thank you very much.

You need to write a script to do this. There are two main options:
1) Write it client-side in the language of your choice: a script that loops through your files, HTTP PUT'ing each of them to the database (see the sketch after these options).
2) Write it server-side in XQuery. You then just Zip/Gzip your directory structure and HTTP POST the archive to your XQuery installed in eXist. Your XQuery should unpack the Zip and store each entry from the archive into the database.
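
As an illustration of option 1, here is a minimal client-side sketch in Python using the requests package. The base collection URL, credentials and local folder path are placeholders; the script walks the local folder and HTTP PUTs every file to the matching path under /exist/rest/db, so the collection hierarchy mirrors the folder tree (eXist normally creates missing collections on PUT, but check your version).

    import os
    import requests  # assumes the 'requests' package is installed

    BASE = "http://localhost:8080/exist/rest/db/basefolder"  # target collection (placeholder)
    AUTH = ("admin", "")                                      # default eXist credentials; change as needed
    LOCAL_ROOT = "/path/to/basefolder"                        # local folder to mirror (placeholder)

    for dirpath, _dirs, files in os.walk(LOCAL_ROOT):
        for name in files:
            local_path = os.path.join(dirpath, name)
            # Rebuild the relative path so the collection hierarchy mirrors the folder tree
            rel = os.path.relpath(local_path, LOCAL_ROOT).replace(os.sep, "/")
            url = f"{BASE}/{rel}"
            mime = "application/xml" if name.endswith(".xml") else "application/octet-stream"
            with open(local_path, "rb") as fh:
                response = requests.put(url, data=fh, auth=AUTH,
                                        headers={"Content-Type": mime})
            response.raise_for_status()
            print("stored", url)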

Related

Count files in a ZIP file over SFTP using PowerShell

I am connecting to SFTP with a host, port, username and password using PowerShell. I want to count the number of files in a particular ZIP file without having to download it to my local machine and count them there. Please share the logic that would do this. I looked into this, but it seems a bit tricky when it comes to doing it inside a ZIP file.
That's not an easy task to do. There's no API in SFTP to do that completely remotely. There are basically two solutions:
Use SFTP to download only the ZIP central directory (basically the listing that is placed at the very end of the ZIP file) and decode the directory locally. For C#, this is covered in my answer to List files inside ZIP file located on SFTP server in C#. Though, as mentioned there, there's a bug in SSH.NET that requires a workaround involving implementing an interface. While that's probably doable in PowerShell too, I've never done it. (A sketch of this approach follows at the end of this answer.)
If you have SSH shell access to the server, use a remote zip command to list the contents of the file. Or build another API (like a web service).
Btw, note that there's nothing like a ZIP "folder". ZIP is an archive file. It's only Windows that calls ZIP files "folders".
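
For what it's worth, the first approach is straightforward if switching from PowerShell to Python is an option, using paramiko plus the standard zipfile module. The host, credentials and remote path below are placeholders. Because the SFTP file handle is seekable, ZipFile reads only the central directory at the end of the archive, not the whole file:

    import zipfile
    import paramiko  # assumes the 'paramiko' package is installed

    # Placeholder host, credentials and remote path
    transport = paramiko.Transport(("sftp.example.com", 22))
    transport.connect(username="user", password="secret")
    sftp = paramiko.SFTPClient.from_transport(transport)

    # The SFTP file handle supports seek/read, so ZipFile parses only the
    # central directory at the end of the archive, not the compressed data.
    with sftp.open("/remote/path/archive.zip", "rb") as remote_file:
        with zipfile.ZipFile(remote_file) as archive:
            files = [entry for entry in archive.infolist() if not entry.is_dir()]
            print(f"{len(files)} files in the archive")

    sftp.close()
    transport.close()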

Data Fusion: GCS Create creating folders, not objects

I am trying to create a GCS object (a file) with the GCS Create plugin of Data Fusion, but it is creating a folder instead.
How can I have a file created instead of a folder?
It seems that the description of the plugin leads to a misunderstanding. Cloud Storage doesn't work like a conventional filesystem, so you cannot strictly create empty files. The gsutil command doesn't have an equivalent of the touch command (on Linux), and the basic operations in this product are limited to the cp command (uploading and downloading files).
Therefore, since there is no file at the storage URL you specify, it's expected that a folder will be created instead of a file.
Based on this, I would like to suggest two workarounds:
If you are using this plugin to create a file as a 'flag', you can continue using the plugin, since the created folder also serves as a flag (to trigger a Cloud Function, for example).
If you need to create a file, you can write one with the GCS plugin located in the 'Sink' plugin group, which writes records to one or more files in a directory on Google Cloud Storage. Files can be written in various formats such as CSV, Avro, Parquet, and JSON.
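
If what you really need is a zero-byte flag object rather than a folder placeholder, one option outside the Data Fusion plugin is to write it with the Cloud Storage client library, for example from a Cloud Function or a small script. A minimal sketch, with a hypothetical bucket and object name:

    from google.cloud import storage  # assumes the 'google-cloud-storage' package is installed

    client = storage.Client()                    # uses your default GCP credentials
    bucket = client.bucket("my-bucket")          # hypothetical bucket name
    blob = bucket.blob("flags/pipeline_done")    # hypothetical object name

    # Uploading zero bytes creates a real (empty) object rather than a folder placeholder
    blob.upload_from_string(b"", content_type="text/plain")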

How to import a site into Local by Flywheel using only an existing file directory and no SQL file

This guide indicates that you need both a file directory and an SQL file to accomplish this; does anyone know a workaround?
https://localwp.com/help-docs/how-to-import-a-wordpress-site-into-local/
You can retrieve the backup archives from the starting-site folder. Within your WordPress folder, navigate to wp-content -> uploads -> backwpup-xxxxxx-backups and open the archive. Inside you'll find a .sql file (local.sql).

Talend issue while copying local files to HDFS

Hi, I want to know how to copy files from the source file system (the local file system) to HDFS using Talend. If a source file has already been copied to HDFS, how can I skip or ignore that file so it is not copied to HDFS again?
Thanks
Venkat
To copy files from the local file system to HDFS, you need to use the tHDFSPut component if you have Talend for Big Data. If you use Talend for Data Integration, you can easily use the tSystem component with the right command.
To avoid duplicated files, you need to create a table in an RDBMS and keep track of all copied files. Each time the job starts copying a file, it should check whether that file already exists in the table.
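
Outside Talend, the same pattern looks roughly like the Python sketch below (a tracking table in SQLite, copying via the hdfs dfs -put command); in a Talend job the equivalent is a lookup against the tracking table followed by tHDFSPut, or tSystem running the same command. Paths and names are placeholders:

    import sqlite3
    import subprocess
    from pathlib import Path

    SOURCE_DIR = Path("/data/incoming")        # local source folder (placeholder)
    HDFS_TARGET = "/user/talend/incoming"      # HDFS target directory (placeholder)

    db = sqlite3.connect("copied_files.db")
    db.execute("CREATE TABLE IF NOT EXISTS copied (name TEXT PRIMARY KEY)")

    for path in SOURCE_DIR.iterdir():
        if not path.is_file():
            continue
        if db.execute("SELECT 1 FROM copied WHERE name = ?", (path.name,)).fetchone():
            continue  # already copied on a previous run, skip it
        subprocess.run(["hdfs", "dfs", "-put", str(path), HDFS_TARGET], check=True)
        db.execute("INSERT INTO copied (name) VALUES (?)", (path.name,))
        db.commit()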

Configuration and content management with automated deployment tools for a ZF-based app

I am trying to automate deployments of a particular project and am a bit lost as to how to handle the config file as well as user assets.
(The application is based on Zend Framework, btw.)
Main application folder is structured as follows:
./app
./config.ini <----- config file
./modules
./controllers
./models
./views
./libs
./public
That config file is where all the configs are stored.
So the 'app' folder contains a whole bunch of PHP code, and 'public' contains a whole bunch of JavaScript, HTML/CSS and things like that (basically everything that's web-accessible).
If I follow Capistrano's model, where each package is expanded into its own folder that is then symlinked to, how do I handle that config.ini file?
What about all the user content that is uploaded into ./public folder?
Thanks!
The Capistrano approach to this is to have a structure like this on your remote server:
releases/
20100901172311/
20101001101232/
[...]
current/ (symlink to current release)
shared/
In the shared directory you include your config file and any user-generated content (e.g. shared/files). Then on each deployment, once you've checked out the code, you automatically create symlinks from the checkout into the relevant shared directories. E.g.:
releases/20101001101232/public/files -> shared/files
releases/20101001101232/application/configs/config.ini -> shared/config.ini
That way, when a user uploads a file to public/files it is actually stored in shared/files.
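
Capistrano itself does this linking in Ruby, but the step is easy to script in any language. As a rough illustration only, here is a Python sketch of the post-checkout linking and the switch of the "current" symlink, with hypothetical paths:

    import os
    import shutil

    DEPLOY_ROOT = "/var/www/myapp"   # hypothetical deploy root; adjust to your layout
    release = os.path.join(DEPLOY_ROOT, "releases", "20101001101232")
    shared = os.path.join(DEPLOY_ROOT, "shared")

    # After checking out the release, point the persistent paths at shared/
    for rel_path, shared_name in [("public/files", "files"),
                                  ("application/configs/config.ini", "config.ini")]:
        link = os.path.join(release, rel_path)
        if os.path.islink(link) or os.path.isfile(link):
            os.remove(link)          # drop whatever the checkout created there
        elif os.path.isdir(link):
            shutil.rmtree(link)
        os.symlink(os.path.join(shared, shared_name), link)

    # Finally, flip the "current" symlink to the new release
    current = os.path.join(DEPLOY_ROOT, "current")
    if os.path.islink(current):
        os.remove(current)
    os.symlink(release, current)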