IBM Cloud Object Storage credentials - ibm-cloud

I am trying to set up a Raspberry Pi that connects to an Object Storage service on IBM Cloud. In all tutorials on Object Storage, credentials are of this format:
{
  "auth_url": "https://identity.open.softlayer.com",
  "project": "object_storage_xxxxxxxx_xxxx_xxxx_b35a_6d007e3f9118",
  "projectId": "512xxxxxxxxxxxxxxxxxxxxxe00fe4e1",
  "region": "dallas",
  "userId": "e8c19efxxxxxxxxxxxxxxxxxxx91d53e",
  "username": "admin_1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxa66",
  "password": "fTxxxxxxxxxxw8}l",
  "domainId": "15xxxxxxxxxxxxxxxxxxxxxxxxxxxx2a",
  "domainName": "77xxx3",
  "role": "admin"
}
See here, for example, where the following comment is given:
Inside the IBM Cloud web interface you can create or read existing credentials. If your program runs on IBM Cloud (Cloudfoundry or Kubernetes) the credentials are also available via the VCAP environment variable
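(For context, on Cloud Foundry that environment variable is VCAP_SERVICES, a JSON document keyed by service name. A minimal sketch of reading it from Python, assuming a service label of "Object-Storage" -- the label varies, so check your own environment:)
import json
import os

# Only applies when the app itself runs on IBM Cloud; the service label is an assumption.
vcap = json.loads(os.environ.get('VCAP_SERVICES', '{}'))
creds = vcap.get('Object-Storage', [{}])[0].get('credentials', {})
print(creds.get('username'))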
However, I am not running my Python script on IBM Cloud, but rather on an RPi that sends data to it. In my Object Storage service, there is a "Service credentials" tab, which has the following form:
{
  "apikey": "XXXXXX-_XXXXXXXXXXXXXXXXXX_XXXXXX",
  "endpoints": "https://cos-service.bluemix.net/endpoints",
  "iam_apikey_description": "Auto generated apikey during resource-key operation for Instance - crn:v1:bluemix:public:cloud-object-storage:global:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "iam_apikey_name": "auto-generated-apikey-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
  "iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX::serviceid:ServiceId-XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
So how do I find the credentials needed so I can use the SWIFT protocol in Python to send data from my Raspberry Pi to my Object Storage service?

Instead of Swift, which I don't think is supported, you can use IBM's flavour of the S3 object storage protocol. There is a Python library you can use to make this easy.
For example, to connect to COS S3:
import ibm_boto3
from ibm_botocore.client import Config

# api_key comes from the "apikey" field and service_instance_id from the
# "resource_instance_id" field of the service credentials shown above.
api_key = 'API_KEY'
service_instance_id = 'RESOURCE_INSTANCE_ID'
auth_endpoint = 'https://iam.bluemix.net/oidc/token'
service_endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'

s3 = ibm_boto3.resource('s3',
                        ibm_api_key_id=api_key,
                        ibm_service_instance_id=service_instance_id,
                        ibm_auth_endpoint=auth_endpoint,
                        config=Config(signature_version='oauth'),
                        endpoint_url=service_endpoint)
The ibm_boto3 library is very similar to the boto3 library that is used to connect to Amazon S3 object storage. The main difference is in setting up the initial connection, which I have shown above. After you have done that, you can find plenty of examples online for using boto3; here is one:
# Upload a new file
data = open('test.jpg', 'rb')
s3.Bucket('my-bucket').put_object(Key='test.jpg', Body=data)
From: http://boto3.readthedocs.io/en/latest/guide/quickstart.html
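For completeness, reading an object back or listing a bucket uses the same resource API as plain boto3. A rough sketch reusing the s3 resource created above (the bucket and key names are placeholders):
# Download an object (placeholder bucket/key names).
body = s3.Object('my-bucket', 'test.jpg').get()['Body'].read()
with open('downloaded.jpg', 'wb') as f:
    f.write(body)

# List everything in the bucket.
for obj in s3.Bucket('my-bucket').objects.all():
    print(obj.key)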

You might want to look at the question/answer I link below. Basically, what you need is an access key and a secret key to add to your Python code to connect to your Cloud Object Storage account.
https://stackoverflow.com/a/48936053/9392933
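As a rough sketch (not the linked answer verbatim), once you have generated HMAC-style credentials (an access key/secret key pair) for the instance, the connection with plain boto3 looks something like this; the endpoint and key values are placeholders:
import boto3

# Placeholder endpoint and credentials -- substitute your own.
s3 = boto3.resource('s3',
                    endpoint_url='https://s3-api.us-geo.objectstorage.softlayer.net',
                    aws_access_key_id='YOUR_ACCESS_KEY',
                    aws_secret_access_key='YOUR_SECRET_KEY')

for bucket in s3.buckets.all():
    print(bucket.name)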

Related

How to connect the Power BI service to Google Cloud Storage (bucket)

I was working with Azure Blob Storage, where I stored some CSV files, and then built some dashboards in Power BI using those CSV files.
The connection between Power BI and Azure Blob Storage is straightforward.
Now I want to use the same concept, only replacing Azure Blob Storage with a Google Cloud Storage bucket (GCS-B).
My problem is that I can't connect Power BI to GCS-B. Any ideas?
After a lot of googling and documentation reading, here is what I got:
- The only way to connect the Power BI service to Google Cloud Storage is to run a virtual machine as a gateway, and then run a script inside that machine that gets the data from GCS and loads it into Power BI. I don't think this is a practical way to do it.
- What I tried, and it works fine (but unfortunately only for Power BI Desktop), is the following R script, which I run in Power BI as a data source option:
wd <- getwd()
setwd(wd)
file.create("service_account.json")
json <- '{"type": "service_account",
  "project_id": "xxxxxxx",
  "private_key_id": "xxxxx",
  "private_key": "-----BEGIN PRIVATE KEY-----\\n...
  ...}'
write(json, "service_account.json")
options(googleAuthR.scopes.selected = "https://www.googleapis.com/auth/cloud-platform")
library(googleCloudStorageR)
Sys.setenv("GCS_AUTH_FILE" = "service_account.json")
### optional, if you haven't set the environment variable GCS_AUTH_FILE
gcs_auth()
gcs_global_bucket("xxxxx")
gcs_get_global_bucket()
df <- gcs_get_object(gcs_list_objects()$name[[1]])
With this script you can load the first CSV file from the specified bucket into Power BI Desktop.
Alternatively, make the bucket or the object public (how to do that is described here:),
then copy the link for each file (the "Public access" column) and add it to Power BI as a Web source.

Extending S/4HANA OData service to SCP

I want to extend a custom OData service created in an S/4HANA system. I added a Cloud Connector to my machine, but I don't know how to go on from there. The idea is that I want people to access the service from SCP, and that I don't need multiple accounts accessing the service on the S/4 system, just the one coming from SCP. Any ideas?
OK, I feel silly doing this, but it seems to work. My test is actually inconclusive because I don't have a Cloud Connector handy, but it works proxying Google.
I'm still thinking about how to make it publicly accessible. There might be people with better answers than this.
Create the Cloud Connector destination.
Make a new folder in Web IDE.
Create a file neo-app.json with this content:
{
  "routes": [{
    "path": "/google",
    "target": {
      "type": "destination",
      "name": "google"
    },
    "description": "google"
  }],
  "sendWelcomeFileRedirect": false
}
"path" is the proxy path in your app, so myapp.scp-account/google here. The target name is your destination; I called mine just "google", but you will put your Cloud Connector destination there.
Deploy.
My test app with a destination named google going to https://www.google.com came out looking like this. The paths are relative, so the page doesn't fully work, but Google appears to be proxied.
You'll still have to handle authentication, etc.

How do I deploy Perfect (Swift) backend code + PostgreSQL to Google App Engine

I'm pretty new to web development and even more so to Google Cloud, so sorry for any mistakes.
Basically, I'm writing the backend part of an app in Swift (using Perfect), and it runs smoothly and okay on my local computer against a local Postgres database (using PostgresStORM in my application).
But when I deploy it to Google Cloud, it does not recognize the database. I've created an identical Postgres database on a Compute Engine instance AND a Cloud SQL instance (the Postgres service of Google Cloud) with the same names and credentials, but again, when the app is on the cloud, it does not recognize the database. What am I missing? How should I do it? Install another Docker image with Postgres?
Here's my DBConnector code:
import PostgresStORM

func setupDBCredentials() -> PostgresConnector.Type {
    let connection = PostgresConnector.self
    connection.host = "localhost" // or the connection name of the Google Cloud instance, it doesn't work as well
    connection.username = "postgres"
    connection.password = "nearby"
    connection.database = "nearby"
    connection.port = 5432
    return connection
}
Basically, how do I make my Google App Engine code connect to any database at all?
Also, if it helps, I'm using the Perfect Assistant to deploy my code to Google Cloud, using Docker.
Thanks already!
You'll likely need to do a few things to get connected, such as granting access to the Cloud SQL instance. Here is the link to the PHP docs that cover the broad steps you'll want to follow; it also shows a representation of the connection string you'll need.
I believe your connection string needs to look something like this:
pgsql:dbname=DATABASE;host=/cloudsql/CONNECTION_NAME
where CONNECTION_NAME is in the format PROJECT_ID:CLOUD_SQL_REGION_ID:CLOUD_SQL_INSTANCE_ID

How to share information across notebooks in a DSX project

Is it possible to share information (such as credentials) across multiple notebooks in a DSX project, e.g. with environment variables?
For example, a Cloud Foundry application in Bluemix has a control setting where environment variables can be defined; is there a similar concept for a DSX project? (I couldn't see anything in the various project-level settings.)
Separate notebooks have separate runtimes in the background, and at the moment it is not possible to share credentials among notebooks by defining environment variables. But there are helper methods for the most common credential requirements in a project, known as "Insert to code".
For example, if you have an object store associated with your project:
Select the "Data" tab in the top bar.
Add a file to the object store by browsing or a simple drag and drop.
Insert the credentials of that object store container into your notebook by selecting the "Insert credentials" option, right beside your file in the right-hand side panel.
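For illustration only, the snippet that "Insert credentials" generates looks roughly like the dictionary below; the variable name and the exact fields vary by service and by file, so treat this as a sketch rather than the exact output:
# Illustrative only -- the generated name (credentials_1) and the fields differ per service.
credentials_1 = {
    'auth_url': 'https://identity.open.softlayer.com',
    'project_id': 'xxxx',
    'region': 'dallas',
    'user_id': 'xxxx',
    'username': 'xxxx',
    'password': 'xxxx',
    'container': 'my_container',
    'filename': 'my_data.csv'
}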
You can then directly reuse those credentials (step 3) in any other notebook in that project.
Besides "Insert to code" there are other helper functions like "Insert SparkR dataframe", "Insert pandas DataFrame", etc. to speed up the analytics process for data scientists. Hope that was a bit helpful.
FYI - I've added a feature request on UserVoice to allow Bluemix services to be bound to a project and then have the credentials accessed in the same way a Bluemix application accesses credentials. Please vote if you think this would be useful.
Currently, one pattern I use quite a lot is to create a notebook in my project that is used to save credentials to a file on DSX:
! echo '{ "username": "xxxx", "password": "xxxx", ... }' > cloudant_creds.json
That file is now available to all of the notebooks in the project. NOTE: the file is saved on the Spark service file system. If you use the same Spark service in other DSX projects, they will also be able to access the file.
The credentials for Cloudant normally include other fields such as host; I haven't shown those fields here to keep the example simple, and have indicated that there are more fields with the "...". I normally copy this JSON from the Bluemix service credentials field.
In your other notebooks, you would read the credentials something like this:
import json

with open('cloudant_creds.json') as data_file:
    sourceDB = json.load(data_file)
You can then refer to the credentials like this:
# json.load returns a dict, so the fields are accessed with key lookups.
dfReader = sqlContext.read.format("com.cloudant.spark")
dfReader.option("cloudant.host", sourceDB['host'])
if sourceDB.get('username'):
    dfReader.option("cloudant.username", sourceDB['username'])
if sourceDB.get('password'):
    dfReader.option("cloudant.password", sourceDB['password'])
df = dfReader.load(sourceDB['database']).cache()

How do you use storage service in Bluemix?

I'm trying to store some data on Bluemix. I searched many wiki pages but couldn't work out how to proceed. Can anyone tell me how to store images and files in Bluemix storage through code in any language (Java, Node.js)?
You have several options at your disposal for storing files for your app. None of them involve doing it in the app container file system, as the file space is ephemeral and will be recreated from the droplet each time a new instance of your app is created.
You can use services like MongoLab, Cloudant, Object Storage, and Redis to store all kinds of blob data.
Assuming that you're using Bluemix to write a Cloud Foundry application, another option is sshfs. At your app's startup time, you can use sshfs to create a connection to a remote server that is mounted as a local directory. For example, you could create a ./data directory that points to a remote SSH server and provides a persistent storage location for your app.
Here is a blog post explaining how this strategy works and a source repo showing it used to host a Wordpress blog in a Cloud Foundry app.
Note that as others have suggested, there are a number of services for storing object data. Go to the Bluemix Catalog [1] and select "Data Management" in the left hand margin. Each of those services should have sufficient documentation to get you started, including many sample applications and tutorials. Just click on a service tile, and then click on the "View Docs" button to find the relevant documentation.
[1] https://console.ng.bluemix.net/?ace_base=true/#/store/cloudOEPaneId=store
Check out https://www.ng.bluemix.net/docs/#services/ObjectStorageV2/index.html#gettingstarted. The storage service in Bluemix is OpenStack Swift running in Softlayer. Check out this page (http://docs.openstack.org/developer/swift/) for docs on Swift.
Here is a page that lists some clients for Swift.
https://wiki.openstack.org/wiki/SDKs
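If you do want to talk Swift from Python (which is what this older, Keystone-based Object Storage service speaks), a minimal sketch with the python-swiftclient package would look roughly like this; the values map from the Swift-style credentials shown at the top of this question, and the container/object names are made up:
import swiftclient

# Values come from the Swift-style credentials (auth_url, userId, password,
# projectId, region); everything shown here is a placeholder.
conn = swiftclient.Connection(
    authurl='https://identity.open.softlayer.com/v3',
    key='password_from_credentials',
    auth_version='3',
    os_options={'project_id': '512xxxx',
                'user_id': 'e8c19efxxxx',
                'region_name': 'dallas'})

conn.put_container('my-container')
conn.put_object('my-container', 'hello.txt', contents=b'hello from the Pi',
                content_type='text/plain')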
As far as I can tell, there was a service named Object Storage that was created by IBM, but at the moment I can't see it in the Bluemix Catalog. I guess they withdrew it and will publish a new service in the future.
Be aware that the object store in Bluemix is now S3 compatible, so you can for instance use Boto or boto3 (for the Python folks); it is 100% API compatible.
See some examples here: https://ibm-public-cos.github.io/crs-docs/crs-python.html
This script lists all objects in all buckets recursively:
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
s3 = boto3.resource('s3', endpoint_url=endpoint)
for bucket in s3.buckets.all():
    print(bucket.name)
    for obj in bucket.objects.all():
        print(" - %s" % obj.key)
If you want to specify your credentials, this would be:
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
s3 = boto3.resource('s3', endpoint_url=endpoint, aws_access_key_id='YourAccessKeyGeneratedOnYourBluemixDashboard', aws_secret_access_key='TheSecretKeyThatComesWithYourAccessKey', use_ssl=True)
for bucket in s3.buckets.all():
    print(bucket.name)
    for obj in bucket.objects.all():
        print(" - %s" % obj.key)
If you want to create a "hello.txt" file in a new bucket:
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
s3 = boto3.resource('s3', endpoint_url=endpoint, aws_access_key_id='YourAccessKeyGeneratedOnYourBluemixDashboard', aws_secret_access_key='TheSecretKeyThatComesWithYourAccessKey', use_ssl=True)
my_bucket = s3.create_bucket(Bucket='my-new-bucket')
s3.Object(my_bucket.name, 'hello.txt').put(Body=b"I'm a test file")
If you want to upload a file to a new bucket:
import time
import boto3

endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
s3 = boto3.resource('s3', endpoint_url=endpoint, aws_access_key_id='YourAccessKeyGeneratedOnYourBluemixDashboard', aws_secret_access_key='TheSecretKeyThatComesWithYourAccessKey', use_ssl=True)
my_bucket = s3.create_bucket(Bucket='my-new-bucket')
timestampstr = str(time.time())  # any timestamp string will do
my_bucket.upload_file(<location of yourfile>, <your file name>,
                      ExtraArgs={"ACL": "public-read", "Metadata": {"METADATA1": "resultat", "METADATA2": "1000", "gid": "blabala000", "timestamp": timestampstr}})