How to restore Solr backup from S3 bucket - kubernetes

I have an up-and-running SolrCloud v8.11 cluster on Kubernetes, deployed with the solr-operator.
Backups are enabled to an S3 bucket.
How can I correctly write the request to perform a RESTORE of a backup stored in an S3 bucket?
I'm unable to figure out what the location and the snapshotName should be in the Restore API request made to Solr.
To discover those values I tried to execute the LISTBACKUP action, but in that case the location value appears to be wrong as well...
$ curl https://my-solrcloud.example.org/solr/admin/collections\?action=LISTBACKUP\&name=collection-name\&repository=collection-backup\&location=my-s3-bucket/collection-backup
{
  "responseHeader":{
    "status":400,
    "QTime":70},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","org.apache.solr.common.SolrException"],
    "msg":"specified location s3:///my-s3-bucket/collection-backup/ does not exist.",
    "code":400}}
## The log in the cluster shows:
org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist. => org.apache.solr.common.SolrException: specified location s3:///my-s3-bucket/collection-backup/ does not exist.
In any case, the recurring backup works as expected, but sooner or later a RESTORE action will have to be performed, and it's not clear how it can be done correctly.
Thank you in advance.

A bit late, but I came across this question while searching for the same answer. There was a thread on the mailing list that helped me to figure out how this is supposed to work.
I found the documentation on this pretty confusing, but the location seems to be relative to the backup repository. So, the repository argument already accounts for the bucket name, and the name argument would be the name of the backup you are attempting to list. Solr then builds the S3 path as {repository bucket} + {location} + {backup name}. So, location should simply be: /
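For example, with the repository and backup defined below (bucket my-s3-bucket, backup name my-collection-backup) and location=/, Solr should end up looking roughly under s3://my-s3-bucket/my-collection-backup/. If you want to sanity-check what actually exists in the bucket before calling the API, a quick listing with the AWS CLI can help (assuming you have credentials for the bucket; the path simply mirrors the example names):
$ aws s3 ls s3://my-s3-bucket/my-collection-backup/ --recursive | head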
Assume you've set up a backupRepository for the SolrCloud deployment like the following:
backupRepositories:
  - name: "my-backup-repo"
    s3:
      region: "us-east-1"
      bucket: "my-s3-bucket"
and you have created a SolrBackup like the following:
---
apiVersion: solr.apache.org/v1beta1
kind: SolrBackup
metadata:
  name: "my-collection-backup"
spec:
  repositoryName: "my-backup-repo"
  solrCloud: "my-solr-cloud"
  collections:
    - "my-collection"
The full cURL command for LISTBACKUP would be:
$ curl https://my-solrcloud.example.org/solr/admin/collections \
-d action=LISTBACKUP \
-d name=my-collection-backup \
-d repository=my-backup-repo \
-d location=/
Similarly for the RESTORE command:
$ curl https://my-solrcloud.example.org/solr/admin/collections \
-d action=RESTORE \
-d name=my-collection-backup \
-d repository=my-backup-repo \
-d location=/ \
-d collection=my-collection-restore
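For a large collection it can be worth running the restore asynchronously, which the Collections API supports through the async parameter, and then polling the request id with REQUESTSTATUS (the request id below is just an example value):
$ curl https://my-solrcloud.example.org/solr/admin/collections \
    -d action=RESTORE \
    -d name=my-collection-backup \
    -d repository=my-backup-repo \
    -d location=/ \
    -d collection=my-collection-restore \
    -d async=restore-my-collection-01

$ curl https://my-solrcloud.example.org/solr/admin/collections \
    -d action=REQUESTSTATUS \
    -d requestid=restore-my-collection-01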

Related

Nexus return 401 Unauthorized, after build image from Dockerfile

I'm new to Docker and tried to google this issue, but found nothing.
I have to create a Nexus image from sonatype/nexus3 and change the password in the admin.password file after creating the image.
This is my Dockerfile:
FROM sonatype/nexus3
WORKDIR /nexus-data
RUN ["/bin/bash", "-c", "echo root >> admin.password"]
and when I check the admin.password file (docker exec <container> cat admin.password) I get this result:
root
And authorization works if I run a container from the sonatype/nexus3 image from Docker Hub (with the default UUID password).
What should I do?
I'm thinking that maybe I overwrote the admin profile or deleted it somehow?
The way it works is that the sonatype/nexus3 image contains an already installed version and the random password has been written to admin.password. But that file is just a log of the generated password, not the value Nexus actually uses, so appending to it changes nothing.
What you want to do has already been answered here How to set admin user/pwd when launching Nexus docker image
Here is a detailed walkthrough to change the admin password from the CLI after starting a fresh nexus3 docker container. You can easily script that once you understand how it works.
Important note to clear up a possible misunderstanding: you don't build a nexus3 image containing predefined data like your admin password. You start a fresh image, which initializes fresh data when using an empty nexus-data volume (including a random admin password), and you then use that password to change it to your own value.
Start a docker container from the official image. Note: this is a minimal and trashable (i.e. --rm) start just for the example. Read the documentation to secure your data.
docker run -d --rm --name testnexus -p 8081:8081 sonatype/nexus3:latest
Wait a bit for Nexus to start (you can check the logs with docker logs testnexus), then read the generated password into a variable:
CURRENT_PASSWORD=$(docker exec testnexus cat /nexus-data/admin.password)
Set the desired admin password in another variable:
NEW_PASSWORD=v3rys3cur3
Use the Nexus API to change the admin password:
curl -X PUT \
-u "admin:${CURRENT_PASSWORD}" \
-d "${NEW_PASSWORD}" \
-H 'accept: application/json' \
-H 'Content-Type: text/plain' \
http://localhost:8081/service/rest/v1/security/users/admin/change-password
Access the Nexus GUI with your browser at http://localhost:8081, log in with your newly changed password, and enjoy.
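If you want to automate the whole sequence, a rough script along these lines should work. It's a sketch, not a hardened implementation: the image tag and new password are placeholders, and the wait loops are simplistic.
#!/usr/bin/env bash
# Sketch: start a throwaway Nexus, wait for the generated password, change it.
set -euo pipefail

NEW_PASSWORD=v3rys3cur3   # placeholder, use your own value

docker run -d --rm --name testnexus -p 8081:8081 sonatype/nexus3:latest

# Wait until the generated admin password file shows up in the container.
until docker exec testnexus test -f /nexus-data/admin.password 2>/dev/null; do
  sleep 5
done

# Wait until the REST API actually answers before changing the password.
until curl -sf -o /dev/null http://localhost:8081/service/rest/v1/status; do
  sleep 5
done

CURRENT_PASSWORD=$(docker exec testnexus cat /nexus-data/admin.password)

curl -fsS -X PUT \
  -u "admin:${CURRENT_PASSWORD}" \
  -H 'accept: application/json' \
  -H 'Content-Type: text/plain' \
  -d "${NEW_PASSWORD}" \
  http://localhost:8081/service/rest/v1/security/users/admin/change-password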

How to create and place a .env file in Kafka cluster so a Kafka Connector Config can reference it

I'm trying to create a Kafka connector from Kafka to Snowflake using the Kafka-to-Snowflake section of this tutorial.
Here is a full sample of the connect config that I'm starting with, contained in a curl request. As you can see, it references ${file:/data/credentials.properties:ENV_VAR_NAME} multiple times to grab environment variables:
curl -i -X PUT -H "Content-Type:application/json" \
http://localhost:8083/connectors/sink_snowflake_01/config \
-d '{
    "connector.class":"com.snowflake.kafka.connector.SnowflakeSinkConnector",
    "tasks.max":1,
    "topics":"mssql-01-mssql.dbo.ORDERS",
    "snowflake.url.name":"${file:/data/credentials.properties:SNOWFLAKE_HOST}",
    "snowflake.user.name":"${file:/data/credentials.properties:SNOWFLAKE_USER}",
    "snowflake.user.role":"SYSADMIN",
    "snowflake.private.key":"${file:/data/credentials.properties:SNOWFLAKE_PRIVATE_KEY}",
    "snowflake.database.name":"DEMO_DB",
    "snowflake.schema.name":"PUBLIC",
    "key.converter":"org.apache.kafka.connect.storage.StringConverter",
    "value.converter":"com.snowflake.kafka.connector.records.SnowflakeAvroConverter",
    "value.converter.schema.registry.url":"https://${file:/data/credentials.properties:CCLOUD_SCHEMA_REGISTRY_HOST}",
    "value.converter.basic.auth.credentials.source":"USER_INFO",
    "value.converter.basic.auth.user.info":"${file:/data/credentials.properties:CCLOUD_SCHEMA_REGISTRY_API_KEY}:${file:/data/credentials.properties:CCLOUD_SCHEMA_REGISTRY_API_SECRET}"
  }'
My question is: how do I put a .env file at /data/credentials.properties within the cluster, so that my connect config can access the env vars in that file using the ${...} syntax, as in this line of the example connect config JSON:
"snowflake.url.name":"${file:/data/credentials.properties:SNOWFLAKE_HOST}",
Robin's tutorial (thank you for the link) makes specific reference to that credentials file:
All the code shown here is based on this github repo. If you’re following along then make sure you set up .env (copy the template from .env.example) with all of your cloud details. This .env file gets mounted in the Docker container to /data/credentials.properties, which is what’s referenced in the connector configurations below.
So if you're using his repo, then that file is already dropped in the right spot for you, as you can see at https://github.com/confluentinc/demo-scene/blob/master/pipeline-to-the-cloud/docker-compose.yml#L73
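If you are not using that docker-compose file, you can reproduce the same effect by mounting your .env file into the Connect worker container yourself and making sure the worker has Kafka's FileConfigProvider enabled. A rough sketch with plain docker run (the image tag is illustrative, the rest of the worker configuration is omitted, and the CONNECT_* variables are just the Confluent image's way of setting config.providers and config.providers.file.class):
# Mount the local .env file where the connector config expects it,
# and enable the file config provider on the worker.
docker run -d --name kafka-connect \
  -v "$PWD/.env:/data/credentials.properties" \
  -e CONNECT_CONFIG_PROVIDERS=file \
  -e CONNECT_CONFIG_PROVIDERS_FILE_CLASS=org.apache.kafka.common.config.provider.FileConfigProvider \
  confluentinc/cp-kafka-connect:6.2.0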

HashiCorp Vault Mongo error

I'm trying to run the default configuration for HashiCorp Vault and MongoDB, but I can't complete the tutorial from here: https://www.vaultproject.io/docs/secrets/databases/mongodb.html.
It fails here:
vault write database/config/mongodb \
plugin_name=mongodb-database-plugin \
allowed_roles="readonly" \
connection_url="mongodb://admin:Password!#mongodb.acme.com:27017/admin?ssl=true"
-bash: !mongodb.acme.com: event not found
I have MongoDB installed and have correctly run vault mount database.
There are several things to change in that command.
vault write database/config/mongodb \
plugin_name=mongodb-database-plugin \
allowed_roles="readonly" \
connection_url="mongodb://admin:passwd#127.0.0.1:27017/admin"
Admin:Password has to be changed to your actual admin:password credentials (keep in mind that Mongo doesn't have any admin:password set up on a fresh installation).
!#mongodb.acme.com had to be changed to the IP of the machine where Mongo runs.
Finally, I had to disable SSL with ssl=false, or remove the parameter entirely.
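As a side note, the -bash: !mongodb.acme.com: event not found message itself comes from bash performing history expansion on the ! inside double quotes; it's not a Vault or Mongo error. If you do need to keep a password containing !, single-quoting the connection string avoids the expansion (credentials and host below are placeholders, and the user/host separator should be @):
vault write database/config/mongodb \
    plugin_name=mongodb-database-plugin \
    allowed_roles="readonly" \
    connection_url='mongodb://admin:Password!@mongodb.acme.com:27017/admin?ssl=true'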

how to mount secret in openshift with uid:gid set correctly

I'm using this Dockerfile to deploy PostgreSQL on OpenShift: https://github.com/sclorg/postgresql-container/tree/master/9.5
It works fine until I enable ssl=on and inject the server.crt and server.key files into the postgres pod via the volume mount option.
The secret is created like this:
$ oc secret new postgres-secrets \
server.key=postgres/server.key \
server.crt=postgres/server.crt \
root-ca.crt=ca-cert
The volume is created as below and attached to the DeploymentConfig of postgres.
$ oc volume dc/postgres \
--add --type=secret \
--secret-name=postgres-secrets \
--default-mode=0600 \
-m /var/lib/pgdata/data/secrets/secrets/
The problem is that the mounted server.crt and server.key files are owned by the root user, but postgres expects them to be owned by the postgres user. Because of that, the postgres server won't come up and reports this error:
waiting for server to start....FATAL: could not load server certificate file "/var/lib/pgdata/data/secrets/secrets/server.crt": Permission denied
stopped waiting
pg_ctl: could not start server
How can we mount a volume and update the uid:gid of the files in it?
It looks like this is not trivial, as it requires setting a volume security context so that all the containers in the pod run as a certain user: https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/pod_security_context.html
In the Kubernetes project this is still under discussion (https://github.com/kubernetes/kubernetes/issues/2630), but it seems that you may have to use Security Contexts and PodSecurityPolicies in order to make it work.
I think the easiest option (without using the above) would be to use a container entrypoint that fixes the ownership of the files for the proper user (postgres in this case) before actually executing PostgreSQL.
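A minimal sketch of such a wrapper entrypoint, assuming the mount path used above and the sclorg image's run-postgresql start script (both assumptions). Copying the files has the useful side effect that the copies are owned by whatever user the container runs as, so an explicit chown, which would need extra privileges, isn't required:
#!/bin/bash
# Wrapper entrypoint sketch: make the mounted TLS files usable by postgres,
# then hand off to the image's normal start command.
set -e

SRC=/var/lib/pgdata/data/secrets/secrets   # where the secret is mounted
DST=/var/lib/pgdata/data                   # assumed postgres data directory

# The copies are created by the current (postgres) user; tighten permissions.
cp "$SRC/server.crt" "$SRC/server.key" "$DST/"
chmod 0600 "$DST/server.crt" "$DST/server.key"

# run-postgresql is the sclorg image's start script (assumption).
exec run-postgresql "$@"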

IBM Object Storage Command Line Access

Using this guide, I have been trying to access my containers on IBM Object Storage. I have installed the python-swiftclient library and am running this command (AUTH_URL, USERNAME and KEY are from the IBM Bluemix Object Storage credentials section):
swift -A <AUTH_URL> -U <USERNAME> -K <KEY> stat -v
I get the following error:
Auth GET failed: https://identity.open.softlayer.com/ 300 Multiple Choices [first 60 chars of response] {"versions": {"values": [{"status": "stable", "updated": "20
I have tried with other credentials as well and looked online, with no luck so far. What is wrong with this?
If you are referring to Cloud Object Storage (the S3-compatible version), look at https://ibm-public-cos.github.io/crs-docs/crs-python.html instead. The example in the KnowledgeLayer is for the Swift-based option. The new Cloud Object Storage uses S3 API style commands.
Use the following:
swift \
--os-auth-url=https://identity.open.softlayer.com/v3 \
--auth-version=3 \
--os-project-id=<projectId> \
--os-region-name=<region> \
--os-username=<username> \
--os-password=<password> \
--os-user-domain-id=<domainId> \
stat -v
You will find the values for projectId, region, username, password, domainId in the credentials section of your Object Storage service in the Bluemix dashboard.
Another option is to set the environment variables OS_AUTH_URL, OS_AUTH_VERSION, OS_PROJECT_ID, OS_REGION_NAME, OS_USERNAME (or OS_USER_ID), OS_PASSWORD and OS_DOMAIN_ID.
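For reference, the environment-variable form of the same call might look like this (values are placeholders; OS_USER_DOMAIN_ID is assumed to be the variable matching the --os-user-domain-id flag above):
export OS_AUTH_URL=https://identity.open.softlayer.com/v3
export OS_AUTH_VERSION=3
export OS_PROJECT_ID=<projectId>
export OS_REGION_NAME=<region>
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_USER_DOMAIN_ID=<domainId>

# With the variables exported, the command reduces to:
swift stat -v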