There is a problem with your environment because the Application Express files have not been loaded. Please verify that you have copied the images directory to your application server as instructed in the Installation Guide. In addition, please verify that your image prefix path is correct. Your current path is /i/ (it should contain both starting and ending forward slashes, such as the default /i/). Use the SQL script reset_image_prefix.sql if you need to change it.
Starting with APEX 18.1, the CDN location should be used as the static path for images instead of a local path; the local path will not be supported going forward.
Take a look at the announcement below:
https://blogs.oracle.com/apex/announcing-oracle-apex-static-resources-on-oracle-content-delivery-network
The CDN also makes the application faster.
Coming to the problem you mentioned, it can be resolved by performing the steps below:
1. Locate reset_image_prefix.sql. It should be under apex/utilities.
2. Change to that directory:
cd apex/utilities
3. Connect to the database as SYS and check the current image prefix:
SQL> set serveroutput on
SQL> begin
  2    dbms_output.put_line(apex_200100.wwv_flow_image_prefix.g_image_prefix);
  3  end;
  4  /
It should list /i/ or whatever other location you have configured.
Note: whenever you run the command above, use the APEX schema that matches your installed version; for APEX 19.1 that is apex_190100, for 20.1 it is apex_200100, and so on.
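If you are not sure which APEX version (and therefore which schema) is installed, a minimal sketch to check it, assuming sqlplus is on your PATH and that you can connect as SYS (adjust the connect string for your environment), is to query the APEX_RELEASE view:
sqlplus / as sysdba <<'EOF'
-- the reported version (e.g. 19.2.0.00.18) tells you the schema name (apex_190200)
select version_no from apex_release;
EOF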
4. Now, check the CDN address. For example, for APEX 20.1.X.X.XX the location is https://static.oracle.com/cdn/apex/20.1.0.00.13/
The same can be checked at https://blogs.oracle.com/apex/announcing-oracle-apex-static-resources-on-oracle-content-delivery-network
5. Now it is time to run the script (assuming you have APEX 19.2.0.X.XX):
SQL> @reset_image_prefix.sql
Enter the Application Express image prefix [/i/] https://static.oracle.com/cdn/apex/19.2.0.00.18/
...Changing Application Express image prefix
NEW_IMAGE_PREFIX
------------------------------------------------
https://static.oracle.com/cdn/apex/19.2.0.00.18/
6. Go to http://your-host:your-port/ords/apex_admin
The problem should now be resolved!
I get INTERNAL SERVER ERROR when trying to upload a file that is larger than ~15 MB.
It happens on many Typo3 installations.
I have set post_max_size and upload_max_filesize to 100 megabytes.
I have also set max_execution_time and max_input_time to 1000.
This happens with Typo3 6.2 and 7.6.
You also have to look at PHP's max execution time and max input time:
max_execution_time = value_in_seconds
max_input_time = value_in_seconds
If this does not help, check your Apache error log to see which error is reported.
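For example, on a typical Linux setup (log locations vary by distribution; these paths are assumptions), the last few lines usually show which limit was hit:
tail -n 50 /var/log/apache2/error.log   # Debian/Ubuntu
tail -n 50 /var/log/httpd/error_log     # RHEL/CentOS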
Other possible problems:
Does the filename contain characters that are not compatible with your system?
Is the user (you) allowed to upload to the upload directory?
Are the permissions for the target folder set correctly (see Install Tool -> Folder Structure)?
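As a quick check (the path and user name are examples; adjust them to your installation), confirm that the web server user can write to the upload target:
ls -ld /var/www/typo3/fileadmin/user_upload   # should be writable by the web server user, e.g. www-data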
I am exploring the StreamSets tool. I have a log file and need to parse it with StreamSets, so I pass the file from the Directory origin to the Log Parser (set to the Common Log Format), and the destination is Local FS. When I start the pipeline it runs, but I am not getting any output. Could anyone please help me?
I want to start by thanking you all for your help ahead of time, as this will help clear up a detail left out of the readthedocs.io guide. What I need is to compress several files into a single gzip; however, the guide only shows how to compress a list of files as individual gzipped files. Again, I appreciate any help, as there are very few resources and little documentation for this setup. (If there is some extra info, please include links to sources.)
After I had set up the grid engine, I ran through the samples in the guide.
Am I right in assuming there is not a script for combining multiple files into one gzip using grid-computing-tools?
Are there any solutions on the Elasticluster Grid Engine setup to compress multiple files into 1 gzip?
What changes can be made to the grid-engine-tools to make it work?
EDIT
The reason we are considering a cluster is that we expect multiple operations to occur simultaneously: files are zipped up per order, and this happens systematically, so that a vendor can download a single compressed file per order.
May I state the definition of the problem, and you can let me know if I understood it correctly, as both Matt and I provided the exact same solution and somehow it doesn't seem sufficient.
Problem Definition
You have an Order defining the start of a task to process some data.
The processing of data would be split among several compute nodes, each producing a resulting file stored on GS directories.
The goal is:
Collect the files from GS bucket (that were produced by each of the nodes),
Archive the collection of files as one file,
Then compress that archive, and
Push it back to a different GS location.
Let me know if I summarized it properly,
Thanks,
Paul
Are the files in question in Cloud Storage?
Are the files in question on a local or network drive?
In your description, you indicate "What I need is to compress several files into a single gzip". It isn't clear to me that a cluster of computers is needed for this. It sounds more like you just want to use tar along with gzip.
The tar utility will create an archive file and can compress it as well. For example:
$ # Create a directory with a few input files
$ mkdir myfiles
$ echo "This is file1" > myfiles/file1.txt
$ echo "This is file2" > myfiles/file2.txt
$ # (C)reate a compressed archive
$ tar cvfz archive.tgz myfiles/*
a myfiles/file1.txt
a myfiles/file2.txt
$ # (V)erify the archive
$ tar tvfz archive.tgz
-rw-r--r-- 0 myuser mygroup 14 Jul 20 15:19 myfiles/file1.txt
-rw-r--r-- 0 myuser mygroup 14 Jul 20 15:19 myfiles/file2.txt
To extract the contents use:
$ # E(x)tract the archive contents
$ tar xvfz archive.tgz
x myfiles/file1.txt
x myfiles/file2.txt
UPDATE:
In your updated problem description, you have indicated that you may have multiple orders processed simultaneously. If the frequency at which results need to be tarred is low, and providing the tarred results is not extremely time-sensitive, then you could likely do this with a single node.
However, as the scale of the problem ramps up, you might take a look at using the Pipelines API.
Rather than keeping a fixed cluster running, you could initiate a "pipeline" (in this case a single task) when a customer's order completes.
A call to the Pipelines API would start a VM whose sole purpose is to download the customer's files, tar them up, and push the resulting tar file into Cloud Storage. The Pipelines API infrastructure does the copying from and to Cloud Storage for you. You would effectively just need to supply the tar command line.
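As a rough illustration of that command line (a sketch only; the directory variables are placeholders, not part of the Pipelines API itself):
# INPUT_DIR is where the pipeline staged the downloaded files; OUTPUT_DIR is what
# gets copied back to Cloud Storage. Both names are hypothetical.
tar cvfz "${OUTPUT_DIR}/order.tgz" -C "${INPUT_DIR}" .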
There is an example that does something similar here:
https://github.com/googlegenomics/pipelines-api-examples/tree/master/compress
This example will download a list of files and compress each of them independently. It could be easily modified to tar the list of input files.
Take a look at the https://github.com/googlegenomics/pipelines-api-examples github repository for more information and examples.
-Matt
So there are many ways to do it, but the thing is that you cannot directly compress a collection of files - or a directory - into one file on Google Storage; you would need to perform the tar/gzip combination locally before transferring it.
If you want, you can have the data compressed automatically during upload via:
gsutil cp -Z
Which is detailed at the following link:
https://cloud.google.com/storage/docs/gsutil/commands/cp#changing-temp-directories
And the nice thing is that you retrieve uncompressed results from compressed data on Google Storage, because it has the ability to perform Decompressive Transcoding:
https://cloud.google.com/storage/docs/transcoding#decompressive_transcoding
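For example (a sketch; the bucket name and file paths are placeholders):
# Upload with gzip content-encoding applied to each file
gsutil cp -Z results/*.txt gs://my-bucket/compressed/
# Fetching the object back should return the uncompressed data via decompressive transcoding
gsutil cp gs://my-bucket/compressed/file1.txt .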
You will notice on the last line in the following script:
https://github.com/googlegenomics/grid-computing-tools/blob/master/src/compress/do_compress.sh
The following line will basically copy the current compressed file to Google Cloud Storage:
gcs_util::upload "${WS_OUT_DIR}/*" "${OUTPUT_PATH}/"
What you will need is to first perform the tar/gzip on the files in the local scratch directory, and then gsutil copy the compressed file over to Google Storage. Make sure that all the files that need to be compressed are in the scratch directory before starting to compress them. Most likely you would need to copy them via scp to one of the nodes (i.e. the master), and then have the master tar/gzip the whole directory before sending it over to Google Storage. I am assuming each GCE instance has its own scratch disk, but the "gsutil cp" transfer is very fast when working on GCE.
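A minimal sketch of that manual approach (bucket names, host names, and paths are placeholders):
SCRATCH=/tmp/order_123
mkdir -p "${SCRATCH}"
# Gather the per-node result files on the master first
scp node1:/scratch/order_123/* "${SCRATCH}/"
# Archive and compress everything in the scratch directory into a single file
tar cvfz order_123.tgz -C "${SCRATCH}" .
# Push the single compressed archive to the destination bucket
gsutil cp order_123.tgz gs://my-bucket/archives/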
Since Google Storage is fast at data transfers with Google Compute instances, the easiest second option to pursue is to comment out lines 66-69 in the do_compress.sh file:
https://github.com/googlegenomics/grid-computing-tools/blob/master/src/compress/do_compress.sh
This way no compression happens, but the copy still happens on the last line via gcs_util::upload, so all the uncompressed files are transferred to the same Google Storage bucket. Then, using "gsutil cp" from the master node, you would copy them back locally, compress them locally via tar/gzip, and copy the compressed archive back to the bucket using "gsutil cp".
Hope it helps but it's tricky,
Paul
Setup:
- Prestashop 1.6 Fresh Install
- Products CSV export from a live site (Prestashop 1.4)
Goal:
What I want to accomplish is to completely test the CSV Import on localhost first before doing it on a new site. I have already tried it on the live site and I encountered a bunch of errors, so I thought it would be better to test it first.
Problem:
Now the problem is that the CSV Import doesn't seem to work on localhost. Whenever I try to upload the CSV I get a "products_stream.csv (382.23 KB) : File is too large" error.
I have also tried copying the CSV file directly to the admin/import folder to see if it would appear in the 'Choose from history / FTP' list, but that also failed.
Would greatly appreciate any help! Cheers!
It depends on PHP configuration variables. Edit your php.ini file and increase these values; for localhost development you can do this without fear.
Example PHP.ini modifications
; Maximum allowed size for uploaded files.
; Modified for Prestashop development
upload_max_filesize = 256M
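Note that post_max_size also caps the total size of the upload request, so it should be at least as large as upload_max_filesize. To confirm which values PHP actually uses, a quick check (the CLI values can differ from those of the Apache/FPM SAPI, so also verify via phpinfo() if in doubt):
php -i | grep -E 'upload_max_filesize|post_max_size|memory_limit|max_input_vars'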
I also suggest changing these other values for smoother Prestashop development:
memory_limit = 512M ; Maximum amount of memory a script may consume
max_input_vars = 5000 ; Maximum number of input (POST) variables
After changing the values, don't forget to restart your LAMP or WAMP server (at least Apache) and to clear your browser cookies before retrying the import process.
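For example, on a typical Linux LAMP setup (service names vary by distribution; these are assumptions):
# Check which php.ini file is actually loaded, then restart Apache so the new limits take effect
php --ini
sudo systemctl restart apache2   # Debian/Ubuntu; on RHEL/CentOS: sudo systemctl restart httpd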
Best regards