Following the instructions on the official site https://docs.unity3d.com/ScriptReference/Caching.html, I can't get the AssetBundleManifest either from the server or from local storage. The end goal is to get the AssetBundle's hash.
The main error message in the console is:
Failed to decompress data for the AssetBundle "Memory"
I tried to use the following:
UnityWebRequest - to load from the server
AssetBundle.LoadFromFile - to load from local storage
How do I do this correctly?
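For reference, the pattern I'm trying to get working looks roughly like this; the manifest bundle path and the bundle name ("memory") are placeholders from my project, not something prescribed by the docs:
'''
using System.IO;
using UnityEngine;

public class ManifestHashExample : MonoBehaviour
{
    void Start()
    {
        // The manifest bundle is the one named after the build output folder
        // (e.g. "StandaloneWindows"); the path here is just a placeholder.
        string manifestBundlePath = Path.Combine(Application.streamingAssetsPath, "StandaloneWindows");

        AssetBundle manifestBundle = AssetBundle.LoadFromFile(manifestBundlePath);
        if (manifestBundle == null)
        {
            Debug.LogError("Failed to load the manifest AssetBundle.");
            return;
        }

        AssetBundleManifest manifest =
            manifestBundle.LoadAsset<AssetBundleManifest>("AssetBundleManifest");

        // "memory" is a placeholder bundle name; this hash is what I'm ultimately after.
        Hash128 hash = manifest.GetAssetBundleHash("memory");
        Debug.Log("Bundle hash: " + hash);

        manifestBundle.Unload(false);
    }
}
'''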
I am working on a webapp for transferring files with Aspera. We are using AoC for the transfer server and an S3 bucket for storage.
When I upload a file to my S3 bucket using Aspera Connect, everything appears to be successful: I see it in the bucket, and I see the new file in the directory when I run /files/browse on the parent folder.
I am refactoring my code to use the /files/{id}/files endpoint to list the directory, because the documentation says it is faster than the /files/browse endpoint. After the upload is complete, when I run the /files/{id}/files GET request, the new file does not show up in the returned data right away. It only becomes available after a few minutes.
Is there some caching mechanism in place? I can't find anything about this in the documentation. When I make a transfer in the AoC dashboard everything updates right away.
Thanks,
Tim
Yes, the file-id based system uses an in-memory cache (Redis).
This cache is updated when a new file is uploaded using Aspera. But for file movements made directly on the storage, there is a daemon that periodically scans for new files.
If you want to bypass the cache and have the API read the storage directly, you can add this header to the request:
X-Aspera-Cache-Control: no-cache
Another possibility is to trigger a scan by reading:
/files/{id}
for the folder id
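For example, with a plain HTTP client (the base URL, folder id, and bearer token below are only placeholders to illustrate the header and endpoints):
'''
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class AsperaCacheBypassExample
{
    static async Task Main()
    {
        using var client = new HttpClient
        {
            BaseAddress = new Uri("https://api.example.com/api/v1/") // placeholder base URL
        };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access-token>"); // placeholder token

        // Bypass the in-memory cache so the API reads the storage directly.
        client.DefaultRequestHeaders.Add("X-Aspera-Cache-Control", "no-cache");

        // List the folder contents (12345 is a placeholder folder id).
        string listing = await client.GetStringAsync("files/12345/files");
        Console.WriteLine(listing);

        // Alternatively, trigger a scan by reading the folder itself.
        string folder = await client.GetStringAsync("files/12345");
        Console.WriteLine(folder);
    }
}
'''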
I'm using a Copy Data activity that copies a file from a File Share to Blob storage. I always get this error:
'''
ErrorCode=UserErrorFailedToCreateAzureBlobContainer,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Unable to create Azure Blob container. Endpoint: https://xxx.blob.core.windows.net/, Container Name: temp-archive.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=Unable to connect to the remote server,Source=Microsoft.WindowsAzure.Storage,''Type=System.Net.WebException,Message=Unable to connect to the remote server,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond xx.xxx.xxx.xxx:xxx,Source=System,'
'''
Both the Blob and File Share storage are correctly set up, as I have used them elsewhere in my pipeline. It's only when I'm copying from File Share to Blob that this happens. Is it a limitation?
Please follow these steps to identify the error:
Test the connections of the linked services for the File Storage and the Blob Storage.
Preview the source data to check whether the connection works. If it does, then the error is caused by the sink side.
Delete and recreate the linked service of the sink dataset.
Check the Blob Storage firewall settings to make sure Data Factory can access it.
I have an online/offline project.
I need to load a wav/ogg/mp3 file from Application.persistentDataPath on the WebGL platform.
I tried WWW/UnityWebRequest.
For example: WWW("file://" + Application.persistentDataPath + filePath);
But I always get the error: Failed to load: Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https.
Could you help me?
P.S. Loading from a remote server works fine.
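For context, the UnityWebRequest version of what I'm doing looks roughly like this (the file name and audio type are just examples):
'''
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class LocalClipLoader : MonoBehaviour
{
    IEnumerator Start()
    {
        // Example path; in the real project the file was written here earlier.
        string url = "file://" + Application.persistentDataPath + "/music.ogg";

        using (UnityWebRequest request = UnityWebRequestMultimedia.GetAudioClip(url, AudioType.OGGVORBIS))
        {
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
            {
                // On WebGL this is where the cross-origin error shows up.
                Debug.LogError(request.error);
                yield break;
            }

            AudioClip clip = DownloadHandlerAudioClip.GetContent(request);
            GetComponent<AudioSource>().clip = clip;
        }
    }
}
'''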
You cannot load local files in a browser, as it's a security risk. If you could, then any webpage could read your hard drive and steal your files.
If you're just testing, you can run a local server.
If you want the user to supply a file, you can let them choose one with a file picker.
I have a Parse Server running on top of MongoDB, and it's running 100% fine on my Dev Server, which is hosted on DigitalOcean. There I'm able to send GET requests to my server to obtain the image, as well as access the image via its Parse-Dashboard.
I cloned that droplet to set up a Production Server, and everything is running fine... except I can't access the images from Parse that were either cloned from the Dev Server or uploaded after I initialized the new Production Server. I'm able to send GET requests to obtain all other fields, except for the image files. I also can't access the image files via the Parse-Dashboard - it returns a 404 - Oh no, we can't find that page! error on the following URL: http://server.ip/parse/files/ProdServer/de632aeb61f7265926e554fabfb25180_image1.png
Other key things to note:
The Dev Server is hosted on a domain that has an SSL certificate; could it be an SSL issue?
I'm initializing the parse-dashboard with the --allowInsecureHTTP flag
Everything (even before the SSL) was working on the Dev Server beforehand
All packages + dependencies are up to date.
tl;dr
How do I access the image files from my Parse Server, via Parse-Dashboard or GET request?
A couple of methods I tried... Since this was an elaborate process for me, allow me to document the steps I took to resolve this issue:
The first issue was, do the files exist? If so, where are they stored?
By accessing my parse-dashboard on port 4040, I tried to view the image path via the URL... So I knew it existed somewhere, and I recursively searched my entire server for the file path, but to no avail.
Then, with more research, I found that any file over 16MB gets converted into a GridFS object, i.e. the images are stored in my MongoDB. You access these objects through a utility called mongofiles.
By running mongofiles -d dbname list I was able to list all of the images stored on my Parse-Server.
Just to ensure the images weren't corrupt, I also sftp'd the images over to my local machine, and fortunately I could view them. So the problem was that the images weren't being served correctly...
The next issue was: why weren't the images being served correctly?
My parse-dashboard was being served on port 4040, but for some reason the image file paths within their respective URLs were being prefixed with that same port 4040... It turns out that within my Parse-Server config, the parse-server URL was pointing to port 4040, but the server was actually being served on ****. By changing my URL back to ****, my images were able to render properly on my parse-dashboard, and I was able to send HTTP requests for the images as well :)
tl;dr make sure your image file paths point to the same port your parse-server is actually being served on
From my local host, I connected to Bluemix with
cf api https://api.ng.bluemix.net
I logged in and then I pushed the changes with
cf push
However, in the console I get:
Uploading MY_PROJECT...
Uploading app files from: /Users/MyName/Documents/MY_PROJECT
Uploading 437.7K, 386 files
Done uploading
FAILED
Error processing app files: Error uploading application.
The resource file mode is invalid: File mode '0444' is invalid.
(venv) My-iMac:MY_PROJECT MyName$
How do I troubleshoot this?
According to this link: https://github.com/cloudfoundry/cli/issues/685, the file mode must be at least 600, so I guess you should "raise" the permissions for your resources folder, even if 444 would technically be OK.
Concerning troubleshooting: the error message is right there in your output. If you need more log output, you can use the command
cf logs APP-NAME
See https://docs.cloudfoundry.org/devguide/deploy-apps/streaming-logs.html for further details.