Mass delete from JasperServer

I want to remove all the PDF files that reside in the reports output folder in one go. There are about 1,000 files, so it will not be easy to delete them one by one.

Related

How to pick the latest files according to the latest upload date and process them in ADF

I have a folder into which new files are dumped every week. I need to pick the latest files and process them using ADF.
How can I achieve this scenario?
You can use the Get Metadata activity to scan all the files within the folder and filter the file names based on the last processed date, then iteratively copy the files to the destination and update the last processed date.

Is it possible to truncate sys_file_processedfile?

I am stuck with a sys_file_processedfile table with more than 200,000 entries. Is it possible to truncate the table and empty the folder /fileadmin/_processed_ without breaking anything?
Thanks!
It is possible.
In the Admin Tools (Install Tool), under Maintenance, there is a card named Remove Temporary Assets which you should use to do so.
TYPO3 stores processed files and cached images in a dedicated directory. This directory is likely to grow quickly.
With this action you can delete the files in this folder. Afterwards, you should also clear the cache database tables.
The File Abstraction Layer additionally stores a database record for every file it needs to process (e.g. image thumbnails). If you have modified some graphics settings (All Configuration [GFX]) and need all processed files to be regenerated, you can use this tool to remove the "processed" ones.

Merge files in Azure Blob storage using PowerShell

I have a requirement to merge multiple files that share the same keyword but have different timestamps in Azure Blob storage, and to move them from one folder to another so that a downstream service can consume them.
I was able to move the files from one folder to another, but I cannot find a way to concatenate them into one single file using PowerShell (within the blob folders). Is there any way to achieve this, specifically using PowerShell?
Note: all files in the folder are text/CSV files with the same layout.
The Copy Blob operation cannot concatenate/join/combine blobs; it is intended as a background copy operation and can only make copies, with the destination blob being overwritten each time.
For your use case, this means the content of all the required blobs has to be retrieved, a new blob constructed locally, and that blob then uploaded to the destination.
Get-AzureStorageBlobContent will bring down the blob content (reference: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-powershell#download-blobs).
Concatenate the downloaded files locally, then upload the combined blob to the container: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-powershell#upload-blobs-to-the-container
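Putting those steps together, here is a rough PowerShell sketch of the download, concatenate, and upload flow. It uses the newer Az.Storage cmdlet names (Get-AzStorageBlob, Get-AzStorageBlobContent, Set-AzStorageBlobContent); the storage account key, container name, and folder prefixes are placeholders for this example.

# Placeholders: storage account, key, container and folder names are examples only.
$key       = "<storage-account-key>"
$ctx       = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey $key
$container = "mycontainer"
$prefix    = "input-folder/report_"        # blobs sharing the same keyword
$inDir     = Join-Path $env:TEMP "blob-merge"
New-Item -ItemType Directory -Path $inDir -Force | Out-Null

# 1. Download every blob that matches the prefix
Get-AzStorageBlob -Container $container -Context $ctx -Prefix $prefix |
    ForEach-Object {
        Get-AzStorageBlobContent -Container $container -Blob $_.Name `
            -Destination $inDir -Context $ctx -Force | Out-Null
    }

# 2. Concatenate locally; the files share one layout, so keep only the first header line
$merged = Join-Path $env:TEMP "merged.csv"
$first  = $true
Get-ChildItem $inDir -Recurse -Filter *.csv | Sort-Object Name | ForEach-Object {
    if ($first) { Get-Content $_.FullName | Set-Content $merged; $first = $false }
    else        { Get-Content $_.FullName | Select-Object -Skip 1 | Add-Content $merged }
}

# 3. Upload the combined file to the destination "folder"
Set-AzStorageBlobContent -File $merged -Container $container `
    -Blob "output-folder/merged.csv" -Context $ctx -Force

If the files are plain text without a header row, step 2 reduces to a simple Get-Content | Add-Content loop.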

Can Talend (Open Studio) be used to automate a data load from a folder to Vertica?

I have been looking for a way to automate my data loads into Vertica instead of manually exporting flat files each time, and stumbled upon the ETL tool Talend.
I have been working with a test folder containing multiple CSV files, and am attempting to find a way to build a job so the files can be loaded into Vertica.
However, I see that in the (free) Open Studio version, if your files do not have the same schema, this becomes next to impossible without the dynamic schema option, which is only available in the Enterprise version.
I start with tFileList and attempt to iterate through tFileInputDelimited, but the schemas are not uniform, so of course it will stop the processing.
So, long story short, am I correct in assuming that there is no way to automate data loads in the free version of Talend if you have a folder consisting of files with different schemas?
If anyone has suggestions for other open-source ETLs to look at, or a solution, that would be great.
You can access the CURRENT_FILE variable from a tFileList component and then send each file down a different route depending on the file name, creating a tFileInputDelimited for each file. For example, if you had two files named file1.csv and file2.csv, right-click the tFileList and choose Trigger > Run If. In the Run If condition, type

((String)globalMap.get("tFileList_1_CURRENT_FILE")).toLowerCase().matches("file1.csv")

and drag the trigger to the tFileInputDelimited set up to handle file1.csv. Do the same for file2.csv, changing the file name in the Run If condition.

Import a directory file by file

I have a large number of CSV files that I need to import and compare some of the fields to a list. The files are generated into the directory every hour. The analysis I want to perform needs to be applied to each file in the directory individually, not to all of them at once. Is there a way to take in each file one at a time?
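One common pattern, whatever the tool, is to enumerate the directory and run the analysis inside a per-file loop. The question does not say which tool is in use, so purely as an illustration here is a minimal PowerShell sketch; the directory path, the AccountId column, and the reference list are made up for the example.

# Placeholders: directory path, column name, and reference list are examples only.
$referenceList = @("A123", "B456", "C789")
$files = Get-ChildItem -Path "C:\data\hourly" -Filter *.csv | Sort-Object LastWriteTime

foreach ($file in $files) {
    # Import and analyse one file at a time, independently of the others
    $rows    = Import-Csv -Path $file.FullName
    $matched = @($rows | Where-Object { $referenceList -contains $_.AccountId })
    Write-Output ("{0}: {1} of {2} rows matched the list" -f $file.Name, $matched.Count, @($rows).Count)
}

The same per-file structure carries over to R, Python, or any other tool: list the files, then loop and process each one before moving on to the next.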