I imported old files into the S3 bucket - how can I access them with the Filepicker API?

I have old files from before Filepicker that I have now copied to the S3 bucket. Can I access them with the Filepicker API to get them cropped?
I didn't find any relevant info in the documentation. According to Stack Overflow threads, it seems I should store them again. Is that right?

To use the filepicker.io conversion feature, the file has to be available via the Filepicker API.
So first, store the Amazon URL:
https://s3.amazonaws.com/your_own_bucket/ZynOv436QOirPYbJIr3Y_5qYoopVTsixCJJiqSWSE.png
using the Filepicker REST API:
curl -X POST -d url="https://s3.amazonaws.com/your_own_bucket/ZynOv436QOirPYbJIr3Y_5qYoopVTsixCJJiqSWSE.png" https://www.filepicker.io/api/store/S3?key=MY_API_KEY
Sample response:
{"container": "kg_bucket", "url": "https://www.filepicker.io/api/file/EmlSqNgR0CcgiKJQ70aV", "filename": "ZynOv436QOirPYbJIr3Y_5qYoopVTsixCJJiqSWSE.png", "key": "ILeMnXRB7ucPF1ILzQ9a_ZynOv436QOirPYbJIr3Y_5qYoopVTsixCJJiqSWSE.png", "type": "image/png", "size": 175210}
Now you can convert the Filepicker URL https://www.filepicker.io/api/file/EmlSqNgR0CcgiKJQ70aV
using a GET request:
https://www.filepicker.io/api/file/EmlSqNgR0CcgiKJQ70aV/convert?w=200&h=250
Or use a POST request to store the converted file:
curl -X POST "https://www.filepicker.io/api/file/EmlSqNgR0CcgiKJQ70aV/convert?format=jpg&quality=30&storeLocation=S3&storePath=/myFolder/myFile.png&key=MY_API_KEY"


Using a login token in subsequent GET calls after successful login

I have an application written in PHP that exposes a REST endpoint to allow a client to download a document stored in the application. One can also access this PHP system via a web browser and download a document while logged into the system.
When using the REST endpoint this system provides for downloading documents, I expect a PDF or some other form, e.g. a link, in the response. However, this is what the response actually contains.
curl -X GET --header 'Accept: application/json' --header 'DOLAPIKEY: XXXXXXXXXX' 'http://dxxx.com.zw/api/index.php/documents/download?modulepart=order_supplier&original_file=000008'
Response
{
"filename": "000008",
"content-type": "application/octet-stream",
"filesize": 4096,
"content": "",
"encoding": "base64"
}
This response is not helpful. Nonetheless, I know for sure that if I am logged into this system, the internal links allow me to download the file. If I am not logged in, the system prompts for a username and password. This system provides a REST login endpoint, which is working, and this is what it returns.
{
"success": {
"code": 200,
"token": "XXXXXXXXXX",
"entity": "0",
"message": "Welcome doketera - This is your token (recorded for your user). You can use it to make any REST API call, or enter it into the DOLAPIKEY field to use the Dolibarr API explorer."
}
}
So my question, then, is how do I use the information in the login response to emulate the same actions I would perform in the web browser, using the GET method in Postman, for example, or in any other REST client. This is the internal link provided in the web browser:
http://dxxx.com.zw/document.php?modulepart=commande_fournisseur&file=000009%2F000009.pdf&entity=1
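For what it's worth, a hedged sketch of one way this could work, based only on the responses quoted above: the login endpoint path, its query parameters, and the JSON field names are assumptions, and the sed-based parsing is just for illustration.
# Log in and capture the token from the JSON response
TOKEN=$(curl -s "http://dxxx.com.zw/api/index.php/login?login=doketera&password=SECRET" \
  | sed -n 's/.*"token": *"\([^"]*\)".*/\1/p')
# Reuse the token via the same DOLAPIKEY header as the download call above,
# then decode the base64 "content" field into a real file
curl -s -H "Accept: application/json" -H "DOLAPIKEY: $TOKEN" \
  "http://dxxx.com.zw/api/index.php/documents/download?modulepart=order_supplier&original_file=000008" \
  | sed -n 's/.*"content": *"\([^"]*\)".*/\1/p' | base64 -d > 000008.pdf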

How to copy a file from AWS REST API Gateway to an S3 bucket?

Using API Gateway, I created an endpoint to copy an image (image/jpeg) into an S3 bucket. This website describes how I can upload images using Amazon's API Gateway: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-upload-image-s3/.
When I type the URL by adding the bucket and object name, I get the following error:
{"message":"Missing Authentication Token"}
I would like to know, first of all, where the API can find my image. How can I verify that the image is a local file, as stated in the introduction? In addition to that, should I use the curl command to complete the transformation? I am unable to use Postman.
I have included a note from the link: how can I change the type of header in a PUT request?
What is the best way to copy files from the API gateway to S3 without using the Lambda function?
Assuming you set up your API and bucket following the instructions in the link, you upload a local file to your bucket using a curl command like the one below.
curl -X PUT -H "Accept: image/jpeg" --data-binary "@my-local-file.jpg" https://my-api-id.execute-api.us-east-2.amazonaws.com/my-stage-name/my-bucket-name/my-local-file.jpg
Note that the header indicates it will accept JPEG files only, and the --data-binary flag is what actually sends the local file's bytes as the request body. Depending on the file you are uploading to S3, you will change/add this header.
To answer the questions directly:
What is the best way to copy files from the API gateway to S3 without using the Lambda function? - Follow the steps in this link: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-upload-image-s3/
Where can the API find my image? - Your image is located on your local host/computer. You use a curl command to upload it via the API that you created.
Should I use the curl command to complete the transformation? - Not sure what you meant by transformation, but you use a curl command to upload the image file to your S3 bucket via API Gateway.
How can I change the type of header in a PUT request? - You use -H to add headers to your curl command, as in the example below.
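For example (the file name and endpoint are placeholders carried over from above), uploading a PNG instead would just swap the header and the file:
# Same PUT as before, with the header and local file changed for a PNG
curl -X PUT -H "Accept: image/png" --data-binary "@my-diagram.png" \
  "https://my-api-id.execute-api.us-east-2.amazonaws.com/my-stage-name/my-bucket-name/my-diagram.png"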

How to download files from Google Cloud Storage using the REST API

Scenario: there are multiple folders and many files stored in a storage bucket related to the DCM API (click, impression, daily aggregate files, etc.).
https://console.cloud.google.com/storage/browser/dcmaccountno
Is it possible to download files using the REST API? I currently have a service account and private key.
We don't have much exposure to Google Cloud Storage, so any small help would be really appreciated.
Thank you for any help!
You can make calls to either of the two REST APIs: JSON or XML. In either case, you will need to get an authorization access token from OAuth 2.0, as detailed in the documentation, and then use cURL with a GET Object request:
JSON API:
curl -X GET \
-H "Authorization: Bearer [OAUTH2_TOKEN]" \
-o "[SAVE_TO_LOCATION]" \
"https://www.googleapis.com/storage/v1/b/[BUCKET_NAME]/o/[OBJECT_NAME]?alt=media"
XML API:
curl -X GET \
-H "Authorization: Bearer [OAUTH2_TOKEN]" \
-o "[SAVE_TO_LOCATION]" \
"https://storage.googleapis.com/[BUCKET_NAME]/[OBJECT_NAME]"
Note that for multiple files, you will have to program the requests, so if you want to easily download all the objects in a bucket or subdirectory, it's better to use gsutil instead.
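If you want to try this quickly from a shell, one hedged way to obtain the bearer token is through the gcloud CLI (assuming it is installed and key.json is your service-account key; the bucket and object names here are placeholders):
# Authenticate as the service account, then mint an access token for the curl calls above
gcloud auth activate-service-account --key-file=key.json
TOKEN=$(gcloud auth print-access-token)
curl -X GET -H "Authorization: Bearer $TOKEN" \
  -o "daily_aggregate.csv" \
  "https://storage.googleapis.com/my-dcm-bucket/daily_aggregate.csv"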
Using the REST API you can download/upload files from Google Storage the way I already did in the answer referenced below:
Reference: https://stackoverflow.com/a/53955058/4345389
Instead of the UploadData method of WebClient, you can use the DownloadData method in the following way:
// Create a new WebClient instance.
using (WebClient client = new WebClient())
{
    client.Headers.Add(HttpRequestHeader.Authorization, "Bearer " + bearerToken);
    client.Headers.Add(HttpRequestHeader.ContentType, "application/octet-stream");
    // Download the web resource and save it into a data buffer.
    byte[] bytes = client.DownloadData(body.SourceUrl);
    MemoryStream memoryStream = new MemoryStream(bytes);
    // TODO write further functionality to consume the MemoryStream
}
Do some tweaks as per your requirements.

Is it possible to wget / curl protected files from GCS?

Is it possible to wget / curl protected files from Google Cloud Storage without making them public? I don't mind a fixed predefined token. I just want to avoid the case where my public file gets leeched, costing me good dollars.
Another way, if, as you say, you don't mind getting a token externally, is to use curl to set the Authorization header in your call to GCS, like so:
curl -H "Authorization: Bearer 1/fFBGRNJru1FQd44AzqT3Zg" https://www.googleapis.com/storage/v1/b/bucket/o/object?alt=media
The 'alt=media' query string parameter is necessary to download the object data directly instead of receiving a JSON response.
You can obtain such a token separately in the OAuth 2.0 Playground by authorizing the Cloud Storage JSON API scope, then copy and paste it into the command.
See also:
https://cloud.google.com/storage/docs/access-control
https://cloud.google.com/storage/docs/json_api/v1/objects/get
You can use Signed URLs. This allows you to create a signed URL that can be used to download an object without additional authentication.
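As a hedged illustration (assuming the gsutil CLI and a service-account key file key.json; the bucket and object names are placeholders), a signed URL can be minted like this:
# Create a URL valid for 10 minutes, then fetch the object with it - no auth header needed
gsutil signurl -d 10m key.json gs://my-bucket/my-object.bin
curl -o my-object.bin "<signed-url-printed-by-the-command-above>"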
You could also use the CurlWget Chrome extension, so whenever you download something in Chrome it will create the URL with the headers etc. that allow you to wget your file.

Bitbucket services API to create a POST service

Does anyone know the parameters to create a POST service through the BitBucket API?
Currently the documentation is missing and there is an open ticket to write it. It follows the same format as the rest of the API, so I'm hoping someone has figured it out.
So far the only parameter I can set is the type:
curl --user name:pw https://api.bitbucket.org/1.0/repositories/{account}/{repository}/services --data 'type={POST/Twitter/AgileZen/etc}'
This successfully creates an empty POST service.
Here's a link to the docs in case it helps.
It's kinda shoddy that we failed to document that properly. Anyway, here's how you add a POST service that posts to google.com:
$ curl -X POST https://username:passwd@bitbucket.org/api/1.0/repositories/evzijst/interruptingcow/services \
-d type=POST -d URL=http://google.com
{
"id": 507781,
"service": {
"fields": [
{
"name": "URL",
"value": "http://google.com"
}
],
"type": "POST"
}
}
The way this endpoint works is that you always specify the "type" parameter, which must contain the name of the service (as presented in the dropdown menu), and then configure it by passing additional POST parameters.
Each service has its own configuration parameters. You can find out what they are by simply adding the service on a repo and looking at the fields. Your parameters must match the available fields.
Individual parameters can be modified by doing a PUT, as sketched below.
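A hedged sketch of such a PUT (the service id 507781 is taken from the response above; addressing an individual service as /services/{id} is an assumption about the endpoint shape):
# Update the URL field of the service created earlier
$ curl -X PUT https://username:passwd@bitbucket.org/api/1.0/repositories/evzijst/interruptingcow/services/507781 \
  -d URL=http://example.com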
This is documented; just got lost in the shuffle when I revised everything:
https://confluence.atlassian.com/display/BITBUCKET/services+Resource
You can also test it out in our REST browser:
http://restbrowser.bitbucket.org/