Aspera Node API /files/{id}/files endpoint not returning up-to-date data - ibm-cloud

I am working on a web app for transferring files with Aspera. We are using AoC as the transfer server and an S3 bucket for storage.
When I upload a file to my S3 bucket using Aspera Connect, everything appears to be successful: I see the file in the bucket, and I see it in the directory listing when I run /files/browse on the parent folder.
I am refactoring my code to use the /files/{id}/files endpoint to list the directory, because the documentation says it is faster than /files/browse. However, after the upload completes, when I run the /files/{id}/files GET request the new file does not show up in the returned data right away. It only becomes available after a few minutes.
Is there some caching mechanism in place? I can't find anything about this in the documentation. When I make a transfer in the AoC dashboard, everything updates right away.
Thanks,
Tim

Yes, the file-id-based system uses an in-memory cache (Redis).
This cache is updated when a new file is uploaded through Aspera. For files moved directly on the storage, a daemon periodically scans the storage and picks up new files.
If you want to bypass the cache and have the API read the storage directly, you can add this header to the request:
X-Aspera-Cache-Control: no-cache
Another possibility is to trigger a scan by reading /files/{id} for the folder id.
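For example, a minimal sketch with Python's requests library; the Node API base URL, folder id and bearer token below are placeholders you would replace with your own values:

import requests

NODE_API = "https://node.example.com:9092"            # placeholder Node API endpoint
FOLDER_ID = "1234"                                    # placeholder file id of the parent folder
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token

# Bypass the in-memory cache so the listing is read from the storage directly.
listing = requests.get(
    f"{NODE_API}/files/{FOLDER_ID}/files",
    headers={**HEADERS, "X-Aspera-Cache-Control": "no-cache"},
)
listing.raise_for_status()
print(listing.json())

# Alternatively, trigger a rescan by reading the folder itself first.
requests.get(f"{NODE_API}/files/{FOLDER_ID}", headers=HEADERS).raise_for_status()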

Related

De-serialize JSON metadata to .qvf using Qlik Sense API

I am aware of the Qlik Sense serialize-app operation, where we generate a JSON object containing the metadata of a .qvf file using the Qlik Sense API.
I want to do the reverse operation, i.e. generate the .qvf file back from the JSON metadata.
After a lot of research I only found this GitHub link, and it does not have complete information.
Any solution would be helpful.
Technically you can't create a .qvf directly from JSON. You'll have to create an empty .qvf and then use various APIs to import the JSON.
Qlik has a very nice tool for unbuilding/building apps (and more): qlik-cli has dedicated commands for unbuild/build.
If you are looking for something more "programmable", I've created some enigma.js mixins for the same purpose - enigma-mixin. I still need to perform more detailed testing there, but it was working fine in simpler tests.
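Purely as a rough sketch of the "create an empty app first" step, here is how the Engine JSON-RPC API's CreateApp call could be driven from Python (this assumes Qlik Sense Desktop listening on ws://localhost:4848/app; a server deployment needs a different URL plus authentication, and the app name is just a placeholder). After this you would still import the JSON using qlik-cli or enigma-mixin as described below:

import json
from websocket import create_connection  # pip install websocket-client

ws = create_connection("ws://localhost:4848/app")
print(ws.recv())  # initial OnConnected notification from the engine

# Ask the Global object (handle -1) to create a new, empty app.
ws.send(json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "handle": -1,
    "method": "CreateApp",
    "params": {"qAppName": "rebuilt-from-json"},  # placeholder app name
}))
print(ws.recv())  # response contains the qAppId of the new, empty app
ws.close()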
Update 08/10/2021
Using qlik-cli
Set up a context.
First, unbuild an app:
qlik app unbuild --app 11111111-2222-3333-4444-555555555555
This will create a new folder in the current folder named <app_name>-unbuild. The folder will contain all the information about the app in JSON and/or YAML files.
Once these files are available, you can use them to build another app. Note that the target app should exist before the build is run:
qlik.exe app build --config ./config.yml --app 55555555-4444-3333-2222-111111111111
The above command will use all the available files (specified in config.yml) and update the target app.
If you don't want all files to be used and, for example, only want to update the data connections, the build command can be run with different arguments:
qlik.exe app build --connections ./connections.yml --app 55555555-4444-3333-2222-111111111111
This command will only update the data connections in the target app and will not update anything else.

Media not found Exception In Email Business Process (Hybris)

I've created a process to be able to send an email to the user on order confirmation.
The problem is that on the DEV environment everything goes well, but when I deployed to the UAT server
I got an exception during the task execution ("Media not found (requested media location: hf0/h27/8861015965726.bin)").
Any ideas what could be happening?
What causes this issue, and how can it be resolved?
hybris creates emails using Velocity templates. Those Velocity templates are stored as Medias on the hybris servers. hybris Medias consist of two parts: an entry in the respective database table and a file on the hard drive. The database entry stores metadata about the media, while the file stores the actual content.
Now what hybris is telling you is that the file on the hard drive is missing: the database entry points to a file that does not exist. There could be a lot of reasons why that file is missing:
It was deleted during deployment.
It wasn't created during deployment.
The hybris server has no access/access rights to that directory.
In a clustered environment the file could have been stored on another node and is not accessible on the current node.
The media could be the email itself, as Johannes stated, but it can also be a part of the email, for example an image set from the CMS Cockpit.
To fix this issue you have to master your ImpEx flows.
First, be sure that the ImpEx files contain all the data needed to create the email properly.
Then know what is imported when you deploy and when you update your system.
Be sure that mandatory files are imported during initialization.
Be sure that data that can be managed by webmasters is not reset by ImpEx during an update.
If data is created during an update because initialization has already been done, be sure it won't be replayed after every subsequent update.
As the media file is not found, you can:
1. go to hMC --> Multimedia --> Media; in the search panel,
2. click the "search additional attributes" dropdown box and select "PK of file",
3. use "8861015965726" as the PK of file to search.
Then you can find out which file is missing, and you can import an ImpEx or upload the file via hMC to fix this problem.

Downloading a public data directory from Google Cloud Storage with command-line utilities like wget

I would like to download publicly available data from Google Cloud Storage. However, because I need to work in a Python 3.x environment, it is not possible to use gsutil. I can download individual files with wget as
wget http://storage.googleapis.com/path-to-file/output_filename -O output_filename
However, commands like
wget -r --no-parent https://console.cloud.google.com/path_to_directory/output_directoryname -O output_directoryname
do not seem to work; they just download an index file for the directory. Initial attempts with rsync and curl did not work either. Any idea how to download publicly available data on Google Cloud Storage as a directory?
The approach you mentioned above does not work because Google Cloud Storage doesn't have real "directories". As an example, "path/to/some/files/file.txt" is the entire name of that object. A similarly named object, "path/to/some/files/file2.txt", just happens to share the same naming prefix.
As for how you could fetch these files: The GCS APIs (both XML and JSON) allow you to do an object listing against the parent bucket, specifying a prefix; in this case, you'd want all objects starting with the prefix "path/to/some/files/". You could then make individual HTTP requests for each of the objects specified in the response body. That being said, you'd probably find this much easier to do via one of the GCS client libraries, such as the Python library.
Also, gsutil currently has a GitHub issue open to track adding support for Python 3.
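If it helps, here is a minimal sketch of that listing-by-prefix approach with the Python client library (pip install google-cloud-storage), which works under Python 3. The bucket name and prefix are placeholders, and an anonymous client is used since the data is public:

from google.cloud import storage

client = storage.Client.create_anonymous_client()  # public data, no credentials needed
bucket = client.bucket("your-public-bucket")       # placeholder bucket name

# List every object whose name starts with the "directory" prefix and download it.
for blob in client.list_blobs(bucket, prefix="path/to/some/files/"):
    if blob.name.endswith("/"):                    # skip zero-byte "folder" placeholders
        continue
    destination = blob.name.replace("/", "_")      # flatten the object key into a local filename
    blob.download_to_filename(destination)
    print(f"Downloaded {blob.name} -> {destination}")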

Move Cloud Storage file to different bucket with Java API

How can I move a file from one bucket to another with the Cloud Storage Java API? I can find examples of file creation but not copying or deletion - and I imagine I'd have to copy the file and delete it in order to execute a move from one bucket to another.
You're correct. Do the copy and then delete the original after. There are some examples on GitHub. Here's the gist of it:
// Copy the blob into the target bucket, then delete the source to complete the "move".
CopyWriter copyWriter = originalBlob.copyTo(BlobId.of(bucketName, blobName));
Blob copiedBlob = copyWriter.getResult();
boolean deleted = originalBlob.delete();

OneDrive REST API - Upload - Files > 4GB

Uploading files greater than 4GB using the OneDrive REST API fails.
Sample request:
PUT https://apis.live.net/v5.0/folder.<removed>/files/test.vmdk HTTP/1.1
<removed>
Content-Length: 10000000000
Host: apis.live.net
Since it is now possible to upload files of up to 10 GB using the OneDrive website and the desktop client, it would be great if this were also possible with the REST API.
We're getting this published on the documentation site in the next content refresh, but I wrote up a quick gist on how to upload files larger than the REST API's 100 MB limit.
https://gist.github.com/rgregg/37ba8929768a62131e85
For large files, the best results are achieved by splitting the file into multiple fragments and uploading those fragments. That way, if the connection drops after you have uploaded 90% of the file (in smaller fragments), you can resume the upload from the last fragment instead of starting all over again.
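As a rough illustration of the fragment pattern only (the exact upload URL, session setup and protocol details are in the gist and documentation, so the URL and token below are hypothetical placeholders):

import os
import requests

UPLOAD_URL = "https://<upload-url-from-the-docs>"     # hypothetical placeholder
HEADERS = {"Authorization": "Bearer <access-token>"}  # hypothetical placeholder
CHUNK_SIZE = 10 * 1024 * 1024                         # 10 MB fragments

def upload_in_fragments(path):
    total = os.path.getsize(path)
    with open(path, "rb") as f:
        offset = 0
        while offset < total:
            chunk = f.read(CHUNK_SIZE)
            end = offset + len(chunk) - 1
            resp = requests.put(
                UPLOAD_URL,
                headers={
                    **HEADERS,
                    # Tell the service which byte range this fragment covers.
                    "Content-Range": f"bytes {offset}-{end}/{total}",
                },
                data=chunk,
            )
            resp.raise_for_status()  # on failure, retry/resume from this offset
            offset = end + 1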