OpenShift 4.x Session Token Retrieval Using REST API Calls

I have a use case that requires the retrieval of OpenShift 4.x session tokens. This shell command against the 3.11 endpoint works fine:
export TOKEN=$(curl -u user1:test#123 -kI 'https://myose01:8443/oauth/authorize?client_id=openshift-challenging-client&response_type=token' | grep -oP "access_token=\K[^&]*")
However, OpenShift 4.4 seems to have different endpoints, and I'm having trouble reproducing the same result. Does anyone know what the 4.4 equivalent is?
Using the OpenShift CLI is not an option.

First, get your endpoints with this command:
oc get --raw '/.well-known/oauth-authorization-server'
You are looking for: authorization_endpoint
Then add this header to your request:
-H "X-CSRF-Token: 100"
So if you run:
curl -u user1:test#123 'https://authorization_endpoint_URL/oauth/authorize?client_id=openshift-challenging-client&response_type=token' -kI -H "X-CSRF-Token: 100" | grep -oP "access_token=\K[^&]*"
you will get your token.
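If the oc binary really is off the table, the discovery step can also be done with plain curl against the cluster's API server. A minimal sketch, assuming a placeholder API server URL of https://api.mycluster.example.com:6443 and that the discovery document is readable anonymously (it normally is); note that authorization_endpoint already contains the full /oauth/authorize path:
# Hypothetical API server URL; substitute your own cluster's.
API_URL='https://api.mycluster.example.com:6443'
# Equivalent of 'oc get --raw /.well-known/oauth-authorization-server'.
AUTH_EP=$(curl -sk "${API_URL}/.well-known/oauth-authorization-server" \
  | grep -oP '"authorization_endpoint":\s*"\K[^"]*')
# Request a token through the challenging client; it comes back in the Location header.
TOKEN=$(curl -u user1:test#123 -kI -H "X-CSRF-Token: 100" \
  "${AUTH_EP}?client_id=openshift-challenging-client&response_type=token" \
  | grep -oP "access_token=\K[^&]*")
echo "${TOKEN}"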

Related

How do I search by checksum (SHA1, etc.) with the Nexus 3 REST API?

I had a look at the API documentation but did not find a listing of all the possible query parameters; in particular, I could not find an appropriate parameter to search by SHA1 or another checksum:
https://help.sonatype.com/repomanager3/integrations/rest-and-integration-api/search-api
https://help.sonatype.com/repomanager3/integrations/rest-and-integration-api/assets-api
Previously, in Nexus 2, it was possible to do this with two endpoints, data_index and lucene:
https://repository.sonatype.org/service/local/lucene/search?sha1=686ef3410bcf4ab8ce7fd0b899e832aaba5facf7
https://nexus.xwiki.org/nexus/service/local/data_index?sha1=686ef3410bcf4ab8ce7fd0b899e832aaba5facf7
I had a look at which endpoint Nexus 3 queries internally, and it is yet another endpoint, called extdirect, which uses POST.
I found in another post that it is already deprecated: https://groups.google.com/a/glists.sonatype.com/g/nexus-users/c/8_DyIZrt9mM
Other answers didn't help; in fact, I couldn't find two answers that agree on the parameter names.
Here, in "Can't download using Nexus 3 REST API and CURL", artifact is oddly spelled artefact, with an 'e':
curl -u username:password -L -X GET "https://mynexusserver/service/rest/v1/search/assets/download?sort=version&repository=snapshotsrepo&maven.groupId=mygroup&maven.artefactId=myartefact&maven.extension=zip" -H "accept: application/json" -o myartefact.zip
In this answer, the parameters are again different:
https://stackoverflow.com/a/71126636/8315843
curl -u token:tokenPassword -L -X GET "https://MY_NEXUS/service/rest/v1/search/assets/download?sort=version&repository=MY-REPO&group=MY_GROUP&name=MY_ARTIFACT_NAME&version=MY_Version&maven.extension=zip" --output My_Artifact.zip
So, for artifact, is it maven.artifactId or name?
For group, is it maven.groupId or group?
How would I get token:tokenPassword? Can't I just use username:password?
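Not an authoritative answer, but a hedged sketch based on how the Nexus 3 search API is documented: reasonably recent versions accept checksum parameters such as sha1 on /service/rest/v1/search and /service/rest/v1/search/assets, and for Maven repositories the generic group/name parameters and the Maven-specific maven.groupId/maven.artifactId ones should both be accepted (the artefact spelling in the quoted command looks like a typo rather than a real parameter name). token:tokenPassword refers to Nexus user tokens as an alternative credential; plain username:password basic auth works too. Host, repository, and coordinates below are placeholders:
NEXUS='https://mynexusserver'   # placeholder host
# Search assets by SHA1 checksum (assumes a Nexus 3 version whose search API exposes sha1).
curl -u username:password "${NEXUS}/service/rest/v1/search/assets?sha1=686ef3410bcf4ab8ce7fd0b899e832aaba5facf7"
# Generic parameters...
curl -u username:password "${NEXUS}/service/rest/v1/search/assets?repository=snapshotsrepo&group=mygroup&name=myartifact&maven.extension=zip"
# ...and the Maven-specific equivalents.
curl -u username:password "${NEXUS}/service/rest/v1/search/assets?repository=snapshotsrepo&maven.groupId=mygroup&maven.artifactId=myartifact&maven.extension=zip"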

Is it possible to run a curl command with a splunk dbxquery?

I am developing a dashboard that connects to Splunk via the REST API and displays data on various charts/graphs, etc. In order to get the data I have to make a POST request via curl (node.js). Everything is working great. However, when I try to make a POST request with a dbxquery, it fails and returns 'fatal dbxquery unknown command.' I was wondering if anyone had encountered this before.
curl -H 'Authorization: Basic auth token' -k https://devfg.com:8089/services/search/jobs -d search=" | dbxquery query=\"SELECT count(*) FROM db.table\" connection=\"connection\"" -d output_mode=json
Are the permissions for the dbxquery command set to be executable from any app? Check under app permissions to see if the command is globally exported.
Alternatively, you may need to escape the *, so \*.
Otherwise, you should be able to run the dbxquery via a curl command.
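For completeness, a sketch of the same request with the asterisk escaped as suggested above; the host, token, and connection name are the placeholders from the question, and whether the escape is actually needed depends on how your SPL is parsed. The only change from the original command is count(\*) instead of count(*):
curl -k -H 'Authorization: Basic auth token' \
  https://devfg.com:8089/services/search/jobs \
  -d search=" | dbxquery query=\"SELECT count(\*) FROM db.table\" connection=\"connection\"" \
  -d output_mode=json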

How to create an SSH tunnel in gcloud, but keep getting an API error

I am trying to set up Datalab from my Chromebook using the following tutorial: https://cloud.google.com/dataproc/docs/tutorials/dataproc-datalab. However, when trying to set up an SSH tunnel using the following guidelines, https://cloud.google.com/dataproc/docs/concepts/accessing/cluster-web-interfaces#create_an_ssh_tunnel, I keep receiving the following error.
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- Project 57800607318 is not found and cannot be used for API calls. If it is recently created, enable Compute Engine API by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=57800607318 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
The error message would lead me to believe my "Compute Engine API" is not enabled. However, I have double-checked and "Compute Engine API" is enabled.
Here is what I am entering into the cloud shell:
gcloud compute ssh ${test-cluster-m} \
--project=${datalab-test-229519} --zone=${us-west1-b} -- \
-4 -N -L ${8080}:${test-cluster-m}:${8080}
The ${} syntax is for accessing local environment variables. You set them in the previous step with:
export PROJECT=project;export HOSTNAME=hostname;export ZONE=zone;PORT=number
In this case it would be:
export PROJECT=datalab-test-229519;export HOSTNAME=test-cluster-m;export ZONE=us-west1-b;PORT=8080
Either try this:
gcloud compute ssh test-cluster-m \
--project datalab-test-229519 --zone us-west1-b -- \
-D 8080 -N
Or access the environment variables with:
gcloud compute ssh ${HOSTNAME} \
--project=${PROJECT} --zone=${ZONE} -- \
-D ${PORT} -N
Also check that the VM you are trying to access is running.
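As a sanity check for both points (project ID and instance name taken from the question; adjust to your own), something along these lines should confirm the API status and the VM state:
# Is the Compute Engine API actually enabled for this project?
gcloud services list --enabled --project datalab-test-229519 | grep compute.googleapis.com
# Does the instance exist, and is it RUNNING?
gcloud compute instances describe test-cluster-m --zone us-west1-b \
  --project datalab-test-229519 --format='value(status)'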

ApiAxle: cannot access stat URL in order to view analytics

I am following the instructions at http://apiaxle.com/docs/statistics-and-analytics-in-apiaxle/. Unfortunately, currently (May 17, 2014) ApiAxle is redirecting me to the endpoint server and I am not getting statistics.
menelaos:~$ curl 'http://localhost:3000/v/api/test/stats?granularity=hour&format_timestamp=ISO'
Response:
{"meta":{"version":1,"status_code":404},"results":{"error":
{"type":"ApiUnknown","message":"No api specified (via subdomain)"}}}
I also tried using the subdomain but that didn't work either:
menelaos:~$ curl 'http://test.api.localhost:3000/v/api/test/stats?granularity=hour&format_timestamp=ISO'
Typically you run multiple instances of apiaxle-proxy (which provides access to your endpoints), and a single instance of apiaxle-api (which provides access to statistics, key creation, and other API management functionality).
For example, you might be running the proxy like this:
apiaxle-proxy -f 1 -p 3000 -q
To run the API, you would run something like this:
apiaxle-api -f 1 -p 5000 -q
Note that the API needs to run on a separate port. Also note that it shouldn't be accessible to the outside world as it doesn't have any authentication.
Using the above example, your curl command would look like this:
curl -H 'content-type: application/json' \
-X GET \
'http://localhost:5000/v1/api/test/stats' \
-d '{"granularity":"hour","format_timestamp":"ISO"}'
Note that the parameters need to be sent as JSON.
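If you want to script it, here is a small wrapper around the same call, assuming the apiaxle-api instance from the example is listening on port 5000 and the API is named test; python -m json.tool is only there to pretty-print the response:
API_PORT=5000
API_NAME=test
curl -s -H 'content-type: application/json' \
  -X GET "http://localhost:${API_PORT}/v1/api/${API_NAME}/stats" \
  -d '{"granularity":"hour","format_timestamp":"ISO"}' | python -m json.tool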

Couchbase REST API vs CLI

I'm trying to use the REST API on Couchbase 2.2 and I'm finding two things that I cannot seem to do via REST:
Init a new cluster when no other nodes exist.
CLI version:
couchbase-cli cluster-init -u admin -p mypw -c localhost:8091 --cluster-init-ramsize=1024
Remove a healthy node from the cluster.
CLI version:
couchbase-cli rebalance -u admin -p mypw -c 10.10.1.10:8091 --server-remove=10.10.1.12
As for removing a node, I've tried:
curl -u admin:mypw -d otpNode=ns_1#10.10.1.12 \
http://10.10.1.10:8091/controller/ejectNode
Which returns: "Cannot remove active server."
I've also tried:
curl -s -u Administrator:myclusterpw \
-d 'ejectedNodes=ns_1%4010.10.1.12&knownNodes=ns_1%4010.10.1.10%2Cns_1%4010.10.1.11' \
http://10.10.1.10:8091/controller/rebalance
Which returns: {"mismatch":1} (presumably due to the node actually not being marked for ejection?)
Am I crazy, or are there no ways to do these things using curl?
I spun up a two-node cluster on AWS (10.170.76.236 and 10.182.151.86) and was able to remove node 10.182.151.86 using the curl request below:
curl -v -u Administrator:password -X POST 'http://10.182.151.86:8091/controller/rebalance' -d 'ejectedNodes=ns_1#10.182.151.86&knownNodes=ns_1#10.182.151.86,ns_1#10.170.76.236'
That removes the node and performs the rebalance, leaving '10.170.76.236' as the single remaining node. Running the request below results in 'Cannot remove active server', as you have experienced:
curl -u Administrator:password -d otpNode=ns_1#10.170.76.236 http://10.170.76.236:8091/controller/ejectNode
This is because you can't remove the last node, as you can't perform a rebalance; this issue is covered here: http://www.couchbase.com/issues/browse/MB-7517
I left in the real IPs that I used so the curl requests are as clear as possible; I've terminated the nodes now, though :)
Combo of:
curl -X POST -u admin:password -d username=Administrator \
-d password=letmein \
-d port=8091 \
http://localhost:8091/settings/web
and
curl -X POST -u admin:password -d memoryQuota=400 \
http://localhost:8091/pools/default
The ticket raised against this indicates that the ejectNode command itself won't work by design.
The server seemingly needs to be in a pending or failed-over state to use that command.
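Putting that together, a sketch of removing a still-active node over REST (IPs and credentials reused from the question; the endpoint names are the standard 2.x ones, so treat this as a starting point rather than a verified recipe): fail the node over first, then rebalance it out, making sure knownNodes lists every node including the one being ejected, which is presumably what caused the {"mismatch":1} response you saw.
# Hard failover so the node is no longer an active server.
curl -u admin:mypw -d 'otpNode=ns_1@10.10.1.12' \
  http://10.10.1.10:8091/controller/failOver
# Rebalance it out of the cluster.
curl -u admin:mypw \
  -d 'ejectedNodes=ns_1@10.10.1.12&knownNodes=ns_1@10.10.1.10,ns_1@10.10.1.11,ns_1@10.10.1.12' \
  http://10.10.1.10:8091/controller/rebalance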