I'm in the process of getting some data from Salesforce in order to store it in GCP. It seems that there doesn't exist any tool that directly connects both sides, so the way I'm doing it is by using Postman to send a REST request to Salesforce and thereby getting the data. Now, my question is how I should proceed in order to store that data in Cloud Storage or BigQuery, as I can't find a way to create a channel between GCP and Postman (if that is even the right thing to do). Any advice would be much appreciated.
I think it would be best to at least code a prototype for doing this, or a Python script. But you could probably use curl to hit the Salesforce API and push the response to a local file, then use the Cloud SDK CLI (see the example from the docs) to send it to Cloud Storage, bearing in mind that the results from the API call to SF will be in raw JSON format. You can probably combine the different commands into a single bash script to make the end-to-end run repeatable once you have the individual commands working correctly; see the sketch after the two commands below.
curl https://***instance_name***.salesforce.com/services/data/v42.0/ -H "Authorization: Bearer access_token_from_auth_call" > response.txt
gsutil cp ./response.txt gs://your-gs-bucket
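To make that concrete, here is a minimal sketch of such a bash script. The instance name, token handling, bucket, SOQL query, and dataset/table names are all placeholder assumptions to adapt, not values from the question:

#!/usr/bin/env bash
# Sketch: pull records from the Salesforce REST API and copy them to Cloud Storage.
# Requires curl, jq (optional, for the BigQuery step) and the Cloud SDK (gsutil, bq).
set -euo pipefail

SF_INSTANCE="your_instance.salesforce.com"        # assumption: your Salesforce instance
ACCESS_TOKEN="access_token_from_auth_call"        # token from your usual Salesforce OAuth flow
BUCKET="gs://your-gs-bucket"
QUERY="SELECT+Id,Name+FROM+Account"               # example SOQL, URL-encoded

# 1. Pull the records as raw JSON from the Salesforce REST query endpoint.
curl -s "https://${SF_INSTANCE}/services/data/v42.0/query/?q=${QUERY}" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" > response.json

# 2. Copy the raw response to Cloud Storage.
gsutil cp ./response.json "${BUCKET}/salesforce/response_$(date +%Y%m%d%H%M%S).json"

# 3. (Optional) Convert to newline-delimited JSON and load into BigQuery.
# jq -c '.records[]' response.json > records.ndjson
# bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON your_dataset.your_table ./records.ndjson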
Using API Gateway, I created an S3 bucket to copy an image (image/jpg) into. This page describes how I can upload images using Amazon's API Gateway: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-upload-image-s3/.
When I type the URL by adding the bucket and object name, I get the following error:
{"message":"Missing Authentication Token"}
I would like to know, first of all, where the API can find my image. How can I verify that the image is a local file, as stated in the introduction? In addition to that, should I use the curl command to complete the transformation? I am unable to use Postman.
I have included a note from the link: how can I change the type of header in a PUT request?
What is the best way to copy files from the API gateway to S3 without using the Lambda function?
Assuming you set up your API and bucket following the instructions in the link, you upload a local file to your bucket using a curl command like the one below.
curl -X PUT -H "Content-Type: image/jpeg" --data-binary "@./my-local-file.jpg" https://my-api-id.execute-api.us-east-2.amazonaws.com/my-stage-name/my-bucket-name/my-local-file.jpg
Note that the Content-Type header indicates a JPEG file is being uploaded, and --data-binary points at the local file to send as the request body. Depending on the file you are uploading to S3, you will change/add this header.
To answer the questions directly:
What is the best way to copy files from API Gateway to S3 without using a Lambda function? - Follow the steps in this link: https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-upload-image-s3/
Where can the API find my image? - Your image is located on your local host/computer. You use the curl command to upload it via the API that you created.
Should I use the curl command to complete the transformation? - Not sure what you meant by transformation, but you use the curl command to upload the image file to your S3 bucket via API Gateway.
How can I change the type of header in a PUT request? - You use -H to add headers to your curl command.
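For example (reusing the same placeholder API ID, stage, and bucket names as above, which are assumptions), uploading a PNG instead of a JPEG would only change the header and the file:

curl -X PUT -H "Content-Type: image/png" --data-binary "@./my-local-file.png" https://my-api-id.execute-api.us-east-2.amazonaws.com/my-stage-name/my-bucket-name/my-local-file.png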
I am trying to understand what the equivalent REST API is for the CLI command:
gcloud logging read ....
I need to execute a query on logs programmatically from a client library (Ruby if possible).
I cannot find anything in the REST API docs.
Any suggestions?
Google's documentation for using client libraries with its services is mostly very good.
Here's the documentation for Ruby for Cloud Logging.
https://cloud.google.com/logging/docs/reference/libraries#client-libraries-install-ruby
I encourage you to use Google's excellent APIs Explorer when developing with the client libraries too. APIs Explorer helps you craft REST requests and see the responses, which is very helpful when writing code and debugging:
Here's the APIs Explorer page for the Logging service's method to list log entries:
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list
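As a quick illustration before you port it to the Ruby client, the same entries.list method can be exercised directly from the command line (the project ID and filter here are placeholders, and gcloud is used only to mint an access token):

curl -s -X POST "https://logging.googleapis.com/v2/entries:list" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "resourceNames": ["projects/my-project-id"],
        "filter": "resource.type=\"gce_instance\"",
        "pageSize": 10
      }'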
Another very helpful facility is that you can append --log-http to any gcloud command and it will show you the underlying REST calls that are being made. This is also helpful for determining how the gcloud commands work, so that you may write your own code that replicates or extends the functionality:
gcloud logging read "${FILTER}" \
--project=${PROJECT} \
--log-http
And, finally, the Console's Log Explorer is an excellent way to prototype the filters you'll probably need to use in your client code:
https://console.cloud.google.com/logs/query
Hint: when constructing filters, don't forget to escape " characters, for example FILTER="resource.type=\"gce_instance\"". In this case, the value gce_instance must be quoted (") in the filter's value.
I have been unable to use the Db2 on Cloud REST API to load data from a file in IBM Cloud Object Storage (COS). This is preventing a hybrid integration POC.
Another user has reported a similar REST API issue using the SERVER configuration; see the IBM Developer thread at https://developer.ibm.com/answers/questions/526660/how-to-use-db2-on-cloud-rest-api-to-load-data-from.html
I cannot seem to get the parameters correct, and I think the docs have errors in them for current Cloud Object Storage with HMAC keys ... such as for the endpoint to use, and whether auth_id should be the access_key_id.
I've tried a variety of data load commands, like the following, but none work. Can someone provide an example of a command that works (with any considerations/explanations for values)?
curl -H "x-amz-date: 20200112T120000Z" -H "Content-Type: application/json"
-H "Authorization: Bearer <auth_token>"
-X POST "https://dashdb-xxxx.services.eu-gb.bluemix.net:8443/dbapi/v3/load_jobs"
-d '{"load_source": "SOFTLAYER", "schema": "MDW84075",
"table": "SALES", "file_options":
{"code_page": "1208", "column_delimiter": ",",
"string_delimiter": "", "date_format": "YYYY-MM-DD", "time_format":
"HH:MM:SS", "timestamp_format": "YYYY-MM-DD HH:MM:SS",
"cde_analyze_frequency": 0 }, "cloud_source":
{"endpoint": "https://s3-api.us-geo.objectstorage.softlayer.net/auth/v2.0",
"path": "<bucket>/sales_data_test.csv", "auth_id": "<access_key_id>",
"auth_secret": "<secret_access_key>"} }'
Different attempts with the API call fail with a variety of messages, which usually do not have enough information to debug (and searches in the docs/web do not find the messages); e.g.:
{"trace":"","errors":[{"code":"not_found", "message":"HWCBAS0030E: The
requested resource is not found in service admin.",
"target":{"type":"","name":""},"more_info":""}]}
P.S. I was able to use the Db2 on Cloud UI to load data from the file in COS S3, with the same access key values.
P.P.S. Perhaps "load_source": "SOFTLAYER" is an issue, but it is the only option that might map to IBM Cloud Object Storage. The API docs do not give any other option that might work with IBM COS S3.
If you use the Db2 on Cloud REST API with Cloud Object Storage, then for LOAD the type should be S3. Both Amazon S3 and IBM COS use the S3 protocol. Softlayer had its own SWIFT protocol before, but it is not available (anymore) for IBM COS.
Also see here for some docs on loading data using LOAD; the examples use Amazon and IBM COS, both with the S3 protocol. A sketch of the adjusted call is below.
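For illustration only, here is a hedged sketch of the request from the question with "load_source" switched to "S3". The host, endpoint, bucket path, and HMAC credentials are placeholders, and the field names are taken from the question rather than verified against the current API docs:

curl -X POST "https://<db2-host>/dbapi/v3/load_jobs" \
  -H "Authorization: Bearer <auth_token>" \
  -H "Content-Type: application/json" \
  -d '{"load_source": "S3", "schema": "MDW84075", "table": "SALES",
       "file_options": {"code_page": "1208", "column_delimiter": ",",
         "date_format": "YYYY-MM-DD", "time_format": "HH:MM:SS",
         "timestamp_format": "YYYY-MM-DD HH:MM:SS", "cde_analyze_frequency": 0},
       "cloud_source": {"endpoint": "<your-cos-s3-endpoint>",
         "path": "<bucket>/sales_data_test.csv", "auth_id": "<access_key_id>",
         "auth_secret": "<secret_access_key>"}}'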
I want to generate a Swagger file for the REST V2 connector in Informatica Cloud with these details.
POST CALL:
Accept: application/json
Content-Type: application/x-www-form-urlencoded
Raw Body:
token=XXXXXXX&content=record&format=csv
But Informatica Cloud does not have an option for application/x-www-form-urlencoded.
I am able to make the same request in Postman, as Postman has all the required functionality.
I even tried to put the Content-Type separately in the headers section while generating the Swagger file in Informatica Cloud, but it still didn't work.
Someone told me to use this website for creating the Swagger file: http://specgen.apistudio.io, but the site does not seem secure and thus I cannot enter any sensitive data.
Is there any way I could generate the file through a website or through Informatica itself?
A Swagger file cannot be generated for the header “Content-Type: application/x-www-form-urlencoded” in Informatica Cloud.
What can be done instead is to use curl for the REST API call in the pre/post-processing command of the Mapping Task/Data Synchronization Task. You can take a look at example curl commands here:
https://www.baeldung.com/curl-rest
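As a rough sketch only (the URL and output file name are placeholders; the headers and body fields are taken from the question), such a pre/post-processing command could look like this:

curl -X POST "https://<api-host>/<endpoint>" \
  -H "Accept: application/json" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "token=XXXXXXX" \
  --data-urlencode "content=record" \
  --data-urlencode "format=csv" \
  -o response.out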
Another way, if you want to avoid using curl, is to create a 'service connector' for the REST call in Application Integration.
It is also possible to run Data Integration tasks from Application Integration if you want to run them after using the service connector.
The way it works is:
Create a service connector
Create the connection for the service connector
Create a process.
Inside the process, use various services. The first service can run the API connection that you just made; then you can use another service to run a Data Integration task, which is available under 'System service -> Run cloud task'.
This way you can get the work done without creating a Swagger file, since Informatica Cloud does not accept “Content-Type: application/x-www-form-urlencoded”.
I have a Google Compute VM (LAMP) webserver set up to copy files to a Google Storage Bucket, which then need to be accessed (read and write) by a program on a Google Compute VM (Windows 2008). I can't seem to find any documentation about how a Google Compute Engine Windows VM can access storage buckets.
Is there a way this is possible? Thanks.
I'm doing the same thing, though not with a Windows VM, but I think the principle is the same.
First you need to allow project access for your VM from the Google Cloud Console: https://console.developers.google.com/project
Once you've done this, you need to call the metadata server from your program to get an access token. You make an HTTP call to the metadata server; here is an example from the docs (https://cloud.google.com/compute/docs/authentication) using curl. Bear in mind that when programming this you also need to provide the header "Metadata-Flavor: Google":
$ curl "http://metadata/computeMetadata/v1/instance/service-accounts/default/token" \
-H "Metadata-Flavor: Google"
{
"access_token":"ya29.AHES6ZRN3-HlhAPya30GnW_bHSb_QtAS08i85nHq39HE3C2LTrCARA",
"expires_in":3599,
"token_type":"Bearer"
}
You obviously need to code this HTTP call and the parsing of the JSON data in whichever programming language you are using for your program, and extract the "access_token". Based on the "expires_in" field, you might also need to implement a mechanism to fetch a new token once it expires. You can then use the Google-supplied Cloud Storage client library (https://cloud.google.com/storage/docs/json_api/v1/libraries) for your programming language and use the access token above to authenticate calls to Cloud Storage. I use Java, and the Storage class in the API library has this method that can be used:
.setOauthToken("blah")
You can mount the bucket as a drive with CloudBerry. I would like to find a better way to do it, though, using only Google Cloud. Please let me know if you find anything better.