I'm trying to use the REST API on Couchbase 2.2 and I'm finding two things that I cannot seem to do via REST:
Init a new cluster when no other nodes exist.
CLI version:
couchbase-cli cluster-init -u admin -p mypw -c localhost:8091 --cluster-init-ramsize=1024
Remove a healthy node from the cluster.
CLI version:
couchbase-cli rebalance -u admin -p mypw -c 10.10.1.10:8091 --server-remove=10.10.1.12
As for removing a node, I've tried:
curl -u admin:mypw -d otpNode=ns_1@10.10.1.12 \
http://10.10.1.10:8091/controller/ejectNode
Which returns: "Cannot remove active server."
I've also tried:
curl -s -u Administrator:myclusterpw \
-d 'ejectedNodes=ns_1%4010.10.1.12&knownNodes=ns_1%4010.10.1.10%2Cns_1%4010.10.1.11' \
http://10.10.1.10:8091/controller/rebalance
Which returns: {"mismatch":1} (presumably due to the node actually not being marked for ejection?)
Am I crazy, or are there no ways to do these things using curl?
I spun up a two-node cluster on AWS (10.170.76.236 and 10.182.151.86) and was able to remove node 10.182.151.86 using the curl request below:
curl -v -u Administrator:password -X POST 'http://10.182.151.86:8091/controller/rebalance' -d 'ejectedNodes=ns_1@10.182.151.86&knownNodes=ns_1@10.182.151.86,ns_1@10.170.76.236'
That removes the node and performs the rebalance, leaving '10.170.76.236' as the single remaining node. Running the request below results in 'Cannot remove active server', as you have experienced:
curl -u Administrator:password -d otpNode=ns_1@10.170.76.236 http://10.170.76.236:8091/controller/ejectNode
This is because you can't remove the last node, as you can't perform a rebalance; this issue is covered here: http://www.couchbase.com/issues/browse/MB-7517
I left in the real IPs I used so the curl requests are as clear as possible; I've terminated the nodes now, though :)
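For completeness, here is a minimal sketch of the whole removal flow against the same two nodes, assuming the same Administrator credentials; the rebalanceProgress and pools/default calls are only there to confirm the result:
# eject 10.182.151.86 and start the rebalance
curl -s -u Administrator:password -X POST \
  -d 'ejectedNodes=ns_1@10.182.151.86&knownNodes=ns_1@10.182.151.86,ns_1@10.170.76.236' \
  http://10.170.76.236:8091/controller/rebalance
# poll this until the rebalance reports it has finished
curl -s -u Administrator:password \
  http://10.170.76.236:8091/pools/default/rebalanceProgress
# list the remaining nodes to confirm the ejected one is gone
curl -s -u Administrator:password \
  http://10.170.76.236:8091/pools/default | grep -o '"otpNode":"[^"]*"'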
Initialising a new cluster is a combo of:
curl -X POST -u admin:password -d username=Administrator \
-d password=letmein \
-d port=8091 \
http://localhost:8091/settings/web
and
curl -X POST -u admin:password -d memoryQuota=400 \
http://localhost:8091/pools/default
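Putting the two together, a small init sketch mirroring what couchbase-cli cluster-init does might look like this (the quota, username and password are placeholders; note that once /settings/web has been called, any further requests must authenticate with the new Administrator credentials):
# set the per-node RAM quota (in MB)
curl -s -X POST -u admin:password \
  -d memoryQuota=1024 \
  http://localhost:8091/pools/default
# set the admin credentials and port; use the new credentials from here on
curl -s -X POST -u admin:password \
  -d username=Administrator \
  -d password=letmein \
  -d port=8091 \
  http://localhost:8091/settings/web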
The ticket raised against this indicates that the ejectNode command itself won't work by design.
Seemingly, the server needs to be in a pending or failover state to use that command.
How can we install multiple PgBouncer instances with different pool modes on a single server (Ubuntu 18.04)?
When I tried to install it a second time, it said it was already installed.
Is there any other way to install it on a different port?
You could install a container runtime (e.g. Docker) and run multiple containers, each containing a PgBouncer installation, e.g. using this image: https://github.com/edoburu/docker-pgbouncer
First, install docker:
sudo apt install docker.io
Then you can start as many PgBouncers as you like.
pgbouncer-1:
sudo docker run --rm -d \
  --name pgbouncer-session \
  -e DATABASE_URL="postgres://user:pass@postgres-host/database" \
  -e POOL_MODE=session \
  -p 5432:5432 \
  edoburu/pgbouncer
pgbouncer-2:
sudo docker run --rm -d \
  --name pgbouncer-transaction \
  -e DATABASE_URL="postgres://user:pass@postgres-host/database" \
  -e POOL_MODE=transaction \
  -p 5433:5432 \
  edoburu/pgbouncer
Note that the containers use different ports on the host (the first one uses 5432, the second one 5433).
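As a quick check, you can connect through either instance by picking the host port (this assumes the psql client is installed and reuses the user/pass/database from the DATABASE_URL above):
# session-pooling instance
psql 'postgres://user:pass@localhost:5432/database'
# transaction-pooling instance
psql 'postgres://user:pass@localhost:5433/database'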
If you have lots of configuration, you might want to use a bind-mount for the configuration files.
Also, for a steady setup, I would recommend using docker-compose instead of raw docker commands.
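A bind-mount might look like the sketch below; the container paths are an assumption based on the usual PgBouncer layout (/etc/pgbouncer), so check the image's README for the exact locations it expects:
sudo docker run --rm -d \
  --name pgbouncer-custom \
  -v "$(pwd)/pgbouncer.ini":/etc/pgbouncer/pgbouncer.ini:ro \
  -v "$(pwd)/userlist.txt":/etc/pgbouncer/userlist.txt:ro \
  -p 5434:5432 \
  edoburu/pgbouncer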
I have a use case that requires the retrieval of 4.x OpenShift session tokens. This shell command for the 3.11 endpoint works fine:
export TOKEN=$(curl -u user1:test#123 -kI 'https://myose01:8443/oauth/authorize?client_id=openshift-challenging-client&response_type=token' | grep -oP "access_token=\K[^&]*")
However, OpenShift 4.4 seems to have different endpoints and I'm having trouble reproducing the same result. Does anyone know what the 4.4 equivalent is?
Using the OpenShift CLI is not an option.
First, get your endpoints with this command:
oc get --raw '/.well-known/oauth-authorization-server'
You are looking for: authorization_endpoint
Then add this Header to your request:
-H "X-CSRF-Token: 100"
So if you run:
curl -u user1:test#123 'https://authorization_endpoint_URL/oauth/authorize?client_id=openshift-challenging-client&response_type=token' -kI -H "X-CSRF-Token: 100" | grep -oP "access_token=\K[^&]*"
you will get your Token.
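If the oc client really isn't available, the same discovery can be done with plain curl against the API server's well-known endpoint; this is a sketch, and the API server URL, user and password are placeholders:
# discover the OAuth endpoints (served unauthenticated by the API server)
AUTH_URL=$(curl -sk https://api.mycluster.example.com:6443/.well-known/oauth-authorization-server \
  | grep -oP '"authorization_endpoint"\s*:\s*"\K[^"]*')
# request a token via the challenging client
TOKEN=$(curl -sk -I -u user1:test#123 -H "X-CSRF-Token: 100" \
  "${AUTH_URL}?client_id=openshift-challenging-client&response_type=token" \
  | grep -oP "access_token=\K[^&]*")
echo "$TOKEN"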
I ran the Keycloak instance with:
docker run -d --name keycloak \
-e ROOT_LOGLEVEL=INFO \
-e KEYCLOAK_LOGLEVEL=INFO \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=admin \
-p 8080:8080 \
-it jboss/keycloak:master -b 0.0.0.0
docker logs -f keycloak
And then visited http://localhost:8080/auth/realms/master/protocol/openid-connect/token and got an Internal Server Error.
So,
How do I get the error log? docker logs keycloak stays at the startup information; there is no new request log.
What is wrong, and how do I fix the internal server error?
Why do you need a GET request to /auth/realms/master/protocol/openid-connect/token?
The token endpoint is for POST requests, not GET requests; see the OIDC spec: https://openid.net/specs/openid-connect-core-1_0.html#TokenRequest
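For example, with the admin/admin account created by the docker run above, a password-grant POST against the built-in admin-cli client returns a token (a minimal sketch using Keycloak's out-of-the-box defaults):
curl -s -X POST http://localhost:8080/auth/realms/master/protocol/openid-connect/token \
  -d 'client_id=admin-cli' \
  -d 'grant_type=password' \
  -d 'username=admin' \
  -d 'password=admin'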
We used to have soft delete and recently enabled hard delete in Atlas 1.1. Now we are trying to clean up the soft-deleted entities via the delete-by-guid API and are not able to clear them. Is there a way to delete/clean soft-deleted entities after enabling hard delete?
I even tried updating the entities with ACTIVE status to make them active, but the status is still "DELETED".
Apache Atlas Version: 1.1
API: DELETE /v2/entity/guid/{guid}
Atlas seems to have added a purge API (/admin/purge) to delete soft-deleted entities.
You can go through the jira: https://issues.apache.org/jira/browse/ATLAS-3477 for more details.
---------------- soft delete
curl -iv -u admin:admin -X DELETE http://localhost:21000/api/atlas/v2/entity/guid/88f13750-f2f9-4e31-89f7-06d313fe5d39
---------------- then hard delete (purge)
curl -i -X PUT -H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-u admin:admin 'http://localhost:21000/api/atlas/admin/purge/' \
-d '["88f13750-f2f9-4e31-89f7-06d313fe5d39"]'
I am following the instructions at http://apiaxle.com/docs/statistics-and-analytics-in-apiaxle/. Unfortunately, currently (May 17, 2014) ApiAxle is redirecting me to the endpoint server and I am not getting statistics.
menelaos:~$ curl 'http://localhost:3000/v/api/test/stats?granularity=hour&format_timestamp=ISO'
Response:
{"meta":{"version":1,"status_code":404},"results":{"error":
{"type":"ApiUnknown","message":"No api specified (via subdomain)"}}}
I also tried using the subdomain but that didn't work either:
menelaos:~$ curl 'http://test.api.localhost:3000/v/api/test/stats?granularity=hour&format_timestamp=ISO'
Typically you run multiple instances of apiaxle-proxy (which provides access to your endpoints), and a single instance of apiaxle-api (which provides access to statistics, key creation, and other API management functionality).
For example, you might be running the proxy like this:
apiaxle-proxy -f 1 -p 3000 -q
To run the API, you would run something like this:
apiaxle-api -f 1 -p 5000 -q
Note that the API needs to run on a separate port. Also note that it shouldn't be accessible to the outside world as it doesn't have any authentication.
Using the above example, your curl command would look like this:
curl -H 'content-type: application/json' \
-X GET \
'http://localhost:5000/v1/api/test/stats' \
-d '{"granularity":"hour","format_timestamp":"ISO"}'
Note that the parameters need to be sent as JSON.
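If you have jq installed, piping the response through it makes the result easier to read (purely a convenience; the .results key is taken from the response shape shown in the question):
curl -s -H 'content-type: application/json' \
  -X GET \
  'http://localhost:5000/v1/api/test/stats' \
  -d '{"granularity":"hour","format_timestamp":"ISO"}' | jq '.results'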