Migrating files from Parse.com to our hosted Parse Server (MongoDB)

I have migrated files from Parse.com to my hosted Parse Server using the https://github.com/parse-server-modules/parse-files-utils tool, applying "Option 2".
My problem is that when I click on an image in my hosted Parse Server dashboard, it shows the message "File not found." and the URL looks like:
http://ip of my server:1337/parse/files/OE9gP1wrd2OT9avp3RBmt8zysmM25wRTMtDOxsfe/tfss-6ca44378-72fb-4ddf-aef2-11af0485b11b-profile-pic
If I upload a new image from the mobile app, it works fine.
I have installed MongoDB and migrated the Parse.com data to a newly created database in MongoDB.
I am not using any FileAdapter in my newly created Parse Server.
Thanks in advance; please look into this issue and help me figure out how to display the migrated images on our hosted Parse Server.

What is the HTTP error you are seeing (404?) Are you sure the error is "file not found"? Maybe your server folder permission is not set to public, so you can't publicly access the files (that should be a 403 error).
You usually store image files in a container (storage) that can be accessed via APIs, such as Amazon (AWS) S3 or Microsoft Azure Storage. It's usually more efficient to keep your local server file storage small and have fast access speeds to your images.
You can find out how to set up an Amazon S3 bucket or Google Cloud Storage here.
You can find out how to set up Azure Storage here and connect it to your parse-server using this adapter.
I'm not sure about AWS, but Google and Azure give you free credits if you sign up, and (at least for Azure) the storage isn't too expensive, so the free credits can last you a while...

Related

Unable to make connections from a deployed App Maker app to Cloud SQL

I had a working App Maker application which uses the Directory API and the default Cloud SQL instance that gets created for the App Maker.
Before it was working fine and I was able to retrieve data from the Admin Console and insert them into the Cloud SQL database. Now it stopped working and when I check logs, I see the following:
Exception: Authorization Failed. More information: Unable to fetch
tokens for CloudSql connection:
I have not made any changes to the code and I did not modify anything. I only created a new deployment, and I changed the product name in the OAuth consent screen in the app's project properties to make it more user friendly...
I don't know which parts of the code to share, since I did not change or add any code, and nothing in the error above points to anything specific in the code...
Thanks a lot for any feedback and help on this!

Use only a domain and disable https://storage.googleapis.com URL access

I am a newbie at cloud servers and I've set up a Google Cloud Storage bucket to host image files. I've verified my domain and configured it so the images can be viewed via my domain. The problem is that the same file is accessible both via my domain, example.com/images/tiny.png, and via storage.googleapis.com/example.com/images/tiny.png. Is there any solution to disable access via storage.googleapis.com and use only my domain?
Google Cloud Platform Support Version:
NOTE: This is the reply from Google Cloud Platform Support when contacted via email...
I understand that you have set up a domain name for one of your Cloud Storage buckets and you want to make sure only URLs starting with your domain name have access to this bucket.
I am afraid that this is not possible because of how Cloud Storage permissions work.
Making a Cloud Storage bucket publicly readable also gives each of its files a public link. And currently this public link can’t be disabled.
A workaround would be to implement a proxy program and run it on a Compute Engine virtual machine. This VM will need a static external IP address so that you can map your domain to it. The proxy program will be in charge of returning the requested file from a predefined Cloud Storage bucket while the bucket itself remains inaccessible to the public. (A rough sketch of such a proxy is shown after the reference list below.)
You may find these documents helpful if you are interested in this workaround:
1. Quick start to set up a Linux VM (1).
2. Python API for accessing Cloud Storage files (2).
3. How to download service account keys to grant a program access to a set of services (3).
4. Pricing calculator for getting a picture on how much a VM may cost (4).
(1) https://cloud.google.com/compute/docs/quickstart-linux
(2) https://pypi.org/project/google-cloud-storage/
(3) https://cloud.google.com/iam/docs/creating-managing-service-account-keys
(4) https://cloud.google.com/products/calculator/
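To make the proxy idea concrete, here is a minimal sketch in Python using Flask and the google-cloud-storage client (the Python API referenced in (2) above). The key file name, bucket name, and URL path are placeholders, not part of the support reply; treat this as an illustration under those assumptions rather than a finished implementation.

# Minimal proxy sketch: serve objects from a private GCS bucket over HTTP.
# "key.json", "example.com" and the /images/ route are placeholders.
from flask import Flask, Response, abort
from google.cloud import storage

app = Flask(__name__)
# Authenticate with a service account key that can read the private bucket.
client = storage.Client.from_service_account_json("key.json")
bucket = client.bucket("example.com")

@app.route("/images/<path:object_name>")
def serve_image(object_name):
    # get_blob also fetches metadata such as content_type.
    blob = bucket.get_blob(f"images/{object_name}")
    if blob is None:
        abort(404)
    data = blob.download_as_bytes()
    return Response(data, mimetype=blob.content_type or "application/octet-stream")

if __name__ == "__main__":
    # Bind to all interfaces so requests reaching the VM's static IP are served.
    app.run(host="0.0.0.0", port=8080)

You would then point your domain's DNS at the VM's static IP and keep the bucket private; only the proxy's service account can read it.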
My Version:
It seems the solution to this question is really simple: just mount the Google Cloud Storage bucket on a VM instance using Cloud Storage FUSE.
After mounting, private files from GCS can be accessed through the VM's IP address; it makes the Google Cloud Storage bucket act like a local directory.
The detailed documentation about how to set up FUSE in Google Cloud is here.
There is, but it requires you to do more work.
Your current solution works because you've made access to the GCS bucket (example.com) public and you're then DNS-aliasing it from your domain.
An alternative approach would be to limit access to the GCS bucket to one (or possibly several) accounts and then run a web server that uses one of those accounts to access your image files. You could then either permit access to your web server to anyone or also limit access to it.
More work for you (and possibly cost) but more control.

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

Trying to get Google Cloud Storage working on my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
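As a rough illustration of the difference, here is a sketch using the Python Cloud Storage client (shown in Python purely for illustration; the Rails configuration in the next answer achieves the equivalent). The key file path and bucket/object names are placeholders.

# Sketch: signing a URL requires a key that can sign, so load an explicit
# service account key instead of relying on ambient App Engine credentials.
from datetime import timedelta
from google.cloud import storage

# Explicit key file (placeholder path) rather than ambient OAuth tokens.
client = storage.Client.from_service_account_json("keyfile.json")
blob = client.bucket("my-bucket").blob("9A95rZATRKNpGbMNDbu7RqJx")

# With a full key the client can produce a signed URL; with token-only
# ambient credentials, signing is unavailable, which is the same failure
# the Ruby gem reports as SignedUrlUnavailable.
url = blob.generate_signed_url(version="v4", expiration=timedelta(minutes=15))
print(url)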
I was able to figure this out. I had been following the Rails guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google
Compute Engine, the environment already provides a service account's
authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my rails app.
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. Calling Google::Auth.get_application_default as in the documentation returned an empty object for me, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash as the project segment in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid) and that the "issuer" string is correct: the full email address of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the object you need to sign. I think these are project-global rights; unfortunately, I couldn't get it to work with more fine-grained permissions (i.e. one app being admin for a specific Storage bucket).
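For what it's worth, here is a hedged Python sketch of why serviceAccountTokenCreator matters: the google-cloud-storage Python client can delegate signing to the IAM signBlob API when you pass a service account email and an access token instead of a local private key. The project, bucket, object, and service account names are placeholders.

# Sketch: sign a URL without a local private key by delegating to the IAM
# signBlob API; the caller needs roles/iam.serviceAccountTokenCreator on the
# service account, which is why that role appears in the policy check above.
from datetime import timedelta
import google.auth
from google.auth.transport.requests import Request
from google.cloud import storage

credentials, project = google.auth.default()
credentials.refresh(Request())  # obtain a fresh access token

client = storage.Client(credentials=credentials, project=project)
blob = client.bucket("my-bucket").blob("some/object.png")  # placeholders

url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    service_account_email="my-sa@my-project.iam.gserviceaccount.com",
    access_token=credentials.token,
)
print(url)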

ibm data connect connection for ibm cloud object storage

I have created a bucket in IBM Cloud Object Storage and am successfully able to upload files to it. Now I want to refine those files using IBM Data Connect. I am trying to create a connection in Data Connect and selected Bluemix Object Storage.
I don't understand what information is being asked for under the connection details. Am I selecting the wrong connection type, or what exactly should I look for in my Cloud Object Storage?
Data Connect was deprecated in December 2017 and reached end-of-life 3/30/2018. Data Refinery is the successor, and does connect to IBM Cloud Object Storage natively. Information about migrating from Data Connect can be found here.

Upload files to Google Cloud Storage without downloading them locally?

I want to upload 120 files, each around 1.2GB so about 150GB in total, from an HTTPS website onto my Google Cloud Storage.
I really, really don't want to have to download them all locally, and then upload them individually.
Is there any way around this? Surely I can just give Google Cloud Storage a URL to pull from? I don't control the HTTPS server.
It seems to be possible to upload from S3 to Google Cloud Storage, but S3 seems to suffer from the same problem.
If your website allows public access, you can use the GCS Transfer Service to do it: https://cloud.google.com/storage/transfer/
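For a concrete picture, here is a hedged Python sketch that creates such a transfer job through the storagetransfer API (via the google-api-python-client library). The project ID, bucket name, and URL-list location are placeholders; the Transfer Service reads a publicly accessible TSV "URL list" and copies each file into your bucket server-side, so nothing passes through your machine.

# Sketch: create a Storage Transfer Service job that pulls files from a list
# of HTTPS URLs directly into a GCS bucket (no local download). Assumes
# application-default credentials with access to the project.
import googleapiclient.discovery

storagetransfer = googleapiclient.discovery.build("storagetransfer", "v1")

transfer_job = {
    "description": "Pull the 120 files from the HTTPS site into GCS",
    "status": "ENABLED",
    "projectId": "my-project-id",  # placeholder
    "transferSpec": {
        # A publicly readable TSV file starting with "TsvHttpData-1.0",
        # followed by one line per file (see the URL-list docs for columns).
        "httpDataSource": {"listUrl": "https://example.com/urls.tsv"},
        "gcsDataSink": {"bucketName": "my-destination-bucket"},
    },
    "schedule": {
        # Same start and end date means the job runs once.
        "scheduleStartDate": {"year": 2024, "month": 1, "day": 1},
        "scheduleEndDate": {"year": 2024, "month": 1, "day": 1},
    },
}

result = storagetransfer.transferJobs().create(body=transfer_job).execute()
print("Created transfer job:", result.get("name"))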