Using this guide, I have been trying to access my containers on IBM Object Storage. I have installed the python-swiftclient library and am running this command (AUTH_URL, USERNAME and KEY are from the IBM Bluemix Object Storage credentials section):
swift -A <AUTH_URL> -U <USERNAME> -K <KEY> stat -v
I get the following error:
Auth GET failed: https://identity.open.softlayer.com/ 300 Multiple Choices [first 60 chars of response] {"versions": {"values": [{"status": "stable", "updated": "20
I have tried other credentials as well and looked online, but no luck so far. What is wrong here?
If you are referring to Cloud Object Storage (the S3-compatible version), look at https://ibm-public-cos.github.io/crs-docs/crs-python.html instead. The example in the KnowledgeLayer is for the Swift-based option. The new Cloud Object Storage uses S3-style API commands.
Use the following:
swift \
--os-auth-url=https://identity.open.softlayer.com/v3 \
--auth-version=3 \
--os-project-id=<projectId> \
--os-region-name=<region> \
--os-username=<username> \
--os-password=<password> \
--os-user-domain-id=<domainId> \
stat -v
You will find the values for projectId, region, username, password, domainId in the credentials section of your Object Storage service in the Bluemix dashboard.
Another option is to set the environment variables OS_AUTH_URL, OS_AUTH_VERSION, OS_PROJECT_ID, OS_REGION_NAME, OS_USERNAME (or OS_USER_ID), OS_PASSWORD and OS_DOMAIN_ID.
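For example, a minimal sketch of the environment-variable approach, mirroring the variables listed above (the placeholder values are assumptions; substitute the ones from your credentials section):
export OS_AUTH_URL=https://identity.open.softlayer.com/v3
export OS_AUTH_VERSION=3
export OS_PROJECT_ID=<projectId>
export OS_REGION_NAME=<region>
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_DOMAIN_ID=<domainId>
# With the variables set, no -A/-U/-K or --os-* flags are needed
swift stat -v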
I have an up-and-running SolrCloud v8.11 cluster on Kubernetes, deployed with solr-operator.
Backups are enabled to an S3 bucket.
How can I correctly write the request to perform a RESTORE of a backup stored in an S3 bucket?
I'm unable to figure out what location and snapshotName I have to provide in the Restore API request made to Solr.
In order to discover those values, I tried executing the LISTBACKUP action, but there the location value is apparently wrong as well...
$ curl https://my-solrcloud.example.org/solr/admin/collections\?action=LISTBACKUP\&name=collection-name\&repository=collection-backup\&location=my-s3-bucket/collection-backup
{
  "responseHeader":{
    "status":400,
    "QTime":70},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","org.apache.solr.common.SolrException"],
    "msg":"specified location s3:///my-s3-bucket/collection-backup/ does not exist.",
    "code":400}}
The log in the cluster shows:
org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist. => org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist.
In any case, the recurring backup works as expected, but sooner or later a RESTORE will have to be performed, and it is not clear how to do that correctly.
Thank you in advance.
A bit late, but I came across this question while searching for the same answer. There was a thread on the mailing list that helped me to figure out how this is supposed to work.
I found the documentation on this pretty confusing, but the location seems to be relative to the backup repository. So, the repository argument already accounts for the bucket name, and the name argument would be the name of the backup you are attempting to list. Solr then builds the S3 path as {repository bucket} + {location} + {backup name}. So, location should simply be: /
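For instance, with the bucket and backup name used in the examples below, the pieces compose as:
s3://my-s3-bucket (repository bucket) + / (location) + my-collection-backup (backup name) -> s3://my-s3-bucket/my-collection-backup/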
Assume you've set up a backupRepository for the SolrCloud deployment like the following:
backupRepositories:
  - name: "my-backup-repo"
    s3:
      region: "us-east-1"
      bucket: "my-s3-bucket"
and you have created a SolrBackup like the following:
---
apiVersion: solr.apache.org/v1beta1
kind: SolrBackup
metadata:
  name: "my-collection-backup"
spec:
  repositoryName: "my-backup-repo"
  solrCloud: "my-solr-cloud"
  collections:
    - "my-collection"
The full cURL command for LISTBACKUP would be:
$ curl https://my-solrcloud.example.org/solr/admin/collections \
-d action=LISTBACKUP \
-d name=my-collection-backup \
-d repository=my-backup-repo \
-d location=/
Similarly for the RESTORE command:
$ curl https://my-solrcloud.example.org/solr/admin/collections \
-d action=RESTORE \
-d name=my-collection-backup \
-d repository=my-backup-repo \
-d location=/ \
-d collection=my-collection-restore
I am using "Cloud Foundry apps" to deploy my applications to the IBM Cloud. Haven't used this service before.
I am working on Windows. I did everything step by step as described on the site.
Logged into my IBM Cloud account and selected the API endpoint.
ibmcloud login
Targeting Cloud Foundry organization and space:
ibmcloud target --cf
But at the third step, when I need to push my app to IBM Cloud from the get-start-python directory, I get an error when running this command:
ibmcloud cf push
error:
$ ibmcloud cf push
FAILED
No CF API endpoint set.
Use 'C:\Users\ami\.bluemix\.cf\cfcli\ibmcloud.exe target --cf-api ENDPOINT [-o ORG] [-s SPACE]' to target Cloud Foundry, or 'C:\Users\ami\.bluemix\.cf\cfcli\ibmcloud.exe target --cf' to target it interactively.
Then I decided to look at cf target
and I get this message:
No org or space targeted, use 'cf.exe target -o ORG -s SPACE'
What am I doing wrong?
My understanding is that you are trying to create a Cloud Foundry org in us-south, but your home region is eu-uk. By definition, Lite accounts can only create one org, in one region. That region is your "home" region.
You would need to set the API endpoint to api.eu-uk.cf.cloud.ibm.com and set the region with ibmcloud target -r eu-uk. Then try again with creating the org or setting the org and space.
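Put together, the sequence would look roughly like this (a sketch using the endpoint from above; the org and space names are placeholders):
ibmcloud login
# Target the home region and its Cloud Foundry API endpoint
ibmcloud target -r eu-uk
ibmcloud target --cf-api https://api.eu-uk.cf.cloud.ibm.com
# Select the org and space (or use 'ibmcloud target --cf' to pick them interactively)
ibmcloud target -o <ORG> -s <SPACE>
# Now the push should find a CF API endpoint
ibmcloud cf push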
The docs state that Hasura needs the Postgres connection string in the HASURA_GRAPHQL_DATABASE_URL env var.
Example:
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
hasura/graphql-engine:latest
It looks like my problem is that the server instance connection name for Google Cloud SQL, which looks like PROJECT_ID:REGION:INSTANCE_ID, is not a TCP host.
From the cloud run docs (https://cloud.google.com/sql/docs/postgres/connect-run) I got this example:
postgres://<db_user>:<db_pass>@/<db_name>?unix_sock=/cloudsql/<cloud_sql_instance_name>/.s.PGSQL.5432 but it does not seem to work. Ideas?
I'm currently adding the cloud_sql_proxy as a workaround to the container so that I can connect to TCP 127.0.0.1:5432, but I'm looking for a direct connection to google-cloud-sql.
EDIT: Thanks for the comments, beta8 did most of the trick, but I had also missed the --set-cloudsql-instances parameter: https://cloud.google.com/sdk/gcloud/reference/beta/run/deploy#--set-cloudsql-instances
My full cloud-run command:
gcloud beta run deploy \
--image gcr.io/<PROJECT_ID>/graphql-server:latest \
--region <CLOUD_RUN_REGION> \
--platform managed \
--set-env-vars HASURA_GRAPHQL_DATABASE_URL="postgres://<DB_USER>:<DB_PASS>@/<DB_NAME>?host=/cloudsql/<PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>" \
--timeout 900 \
--set-cloudsql-instances <PROJECT_ID>:<CLOUD_SQL_REGION>:<INSTANCE_ID>
As of v1.0.0-beta.8, which has better support for Postgres connection string parameters, I've managed to get the Unix-socket connection working from Cloud Run to Cloud SQL without embedding the proxy in the container.
The connection should look something like this:
postgres://<user>:<password>@/<database>?host=/cloudsql/<instance_name>
Notice that the client will add the suffix /.s.PGSQL.5432 for you.
Also make sure you have added the Cloud SQL Client permission.
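If that role is still missing, a sketch of granting it with gcloud (the project ID and the Cloud Run service account email are placeholders you would replace with your own):
# Grant the Cloud SQL Client role to the service account used by the Cloud Run service
gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member serviceAccount:<SERVICE_ACCOUNT_EMAIL> \
  --role roles/cloudsql.client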
If the Hasura database requires that exact connection string format, you can use it. However, you cannot use Cloud Run's Cloud SQL support. You will need to whitelist the entire Internet so that your Cloud Run instance can connect. Cloud Run does not publish a CIDR block of addresses. This method is not recommended.
The Unix socket method is for the Cloud SQL Proxy that Cloud Run supports. This is the connection method used inside your container when Cloud Run manages the connection to Cloud SQL. Note that with this method, IP-based hostnames are not supported by your client when connecting to Cloud Run's Cloud SQL Proxy.
You can embed the Cloud SQL Proxy directly in your container. Then you can use 127.0.0.1 as the hostname part for the connection string. This will require that you create a shell script as your Cloud Run entrypoint to launch both the proxy and your application. Based on your scenario, I recommend this method.
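A minimal sketch of such an entrypoint, assuming the v1 cloud_sql_proxy binary has been copied into the image and using a placeholder instance connection name:
#!/bin/sh
# start.sh -- launch the Cloud SQL Proxy in the background, then the app
/cloud_sql_proxy -instances=<PROJECT_ID>:<REGION>:<INSTANCE_ID>=tcp:5432 &
# Give the proxy a moment to start listening on 127.0.0.1:5432
sleep 2
# Start the application (here assumed to be Hasura's graphql-engine; adjust to your image),
# pointing it at 127.0.0.1:5432 via HASURA_GRAPHQL_DATABASE_URL
exec graphql-engine serve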
The Cloud SQL Proxy is written in Go and the source code is published.
If you choose to embed the proxy, don't forget to add the Cloud SQL Client role to the Cloud Run service account.
I am trying to set up Datalab from my Chromebook using the following tutorial: https://cloud.google.com/dataproc/docs/tutorials/dataproc-datalab. However, when trying to set up an SSH tunnel using the following guidelines https://cloud.google.com/dataproc/docs/concepts/accessing/cluster-web-interfaces#create_an_ssh_tunnel I keep receiving the following error.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
 - Project 57800607318 is not found and cannot be used for API calls. If it is recently created, enable Compute Engine API by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=57800607318 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
The error message would lead me to believe my "Compute Engine API" is not enabled. However, I have double checked and "Compute Engine API" is enabled.
Here is what I am entering into the cloud shell
gcloud compute ssh ${test-cluster-m} \
--project=${datalab-test-229519} --zone=${us-west1-b} -- \
-4 -N -L ${8080}:${test-cluster-m}:${8080}
The ${} syntax is for accessing local environment variables. You set them in the previous step with:
export PROJECT=project;export HOSTNAME=hostname;export ZONE=zone;PORT=number
In this case that would be:
export PROJECT=datalab-test-229519;export HOSTNAME=test-cluster-m;export ZONE=us-west1-b;PORT=8080
Either try this:
gcloud compute ssh test-cluster-m \
--project datalab-test-229519 --zone us-west1-b -- \
-D 8080 -N
Or access the environment variables with:
gcloud compute ssh ${HOSTNAME} \
--project=${PROJECT} --zone=${ZONE} -- \
-D ${PORT} -N
Also check that the VM you are trying to access is running.
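For instance, a quick way to check the instance status (a sketch using the cluster, project and zone names from the question):
# Prints the instance status; it should report RUNNING
gcloud compute instances describe test-cluster-m \
  --project datalab-test-229519 --zone us-west1-b \
  --format="value(status)"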
I'm trying to run the default configuration for HashiCorp Vault and Mongo, but I can't complete the tutorial from here: https://www.vaultproject.io/docs/secrets/databases/mongodb.html.
It crashes here:
vault write database/config/mongodb \
plugin_name=mongodb-database-plugin \
allowed_roles="readonly" \
connection_url="mongodb://admin:Password!#mongodb.acme.com:27017/admin?ssl=true"
-bash: !mongodb.acme.com: event not found
I have mongo installed and have correctly run vault mount database.
There are several things to change in that command.
vault write database/config/mongodb \
plugin_name=mongodb-database-plugin \
allowed_roles="readonly" \
connection_url="mongodb://admin:passwd#127.0.0.1:27017/admin"
admin:Password! has to be changed to your actual admin:password credentials (keep in mind that Mongo doesn't have any admin password on a fresh installation).
@mongodb.acme.com had to be changed to the IP of the machine where Mongo is running. Note also that the ! in the example password is what triggers bash history expansion and produces the "event not found" error; put the URL in single quotes or remove the !.
Finally, I had to disable SSL (ssl=false) or remove the parameter entirely.
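If you do need to keep a ! in the password, a sketch of the command with the URL in single quotes so bash does not expand it (the credentials and IP are placeholders):
vault write database/config/mongodb \
  plugin_name=mongodb-database-plugin \
  allowed_roles="readonly" \
  connection_url='mongodb://admin:Password!@127.0.0.1:27017/admin?ssl=false'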