Node not seeing funds - cardano

A transfer was made and went through 72 hours ago, but the relay node is returning:
No funds on the Address!
Is it because the node is not synced yet? Is there a way to verify the sync status?

Assuming your cardano-node config file has the default EKG port 12788 set, you can query the node for the current BlockNo by executing this command on your local node:
curl --silent -H 'Accept: application/json' 'http://127.0.0.1:12788/' | jq -r .cardano.node.metrics.blockNum.int.val
(You may first need to install jq on your system.)
Then go to https://pooltool.io/ and compare your node's current value with the block height reported there.
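If you want to watch the node catch up rather than checking by hand, a minimal shell sketch along these lines (assuming the same default EKG port 12788 and that jq is installed) prints the local block height every 30 seconds so you can compare it against the tip shown on pooltool.io:
#!/usr/bin/env bash
# Sketch: poll the local EKG metrics endpoint and print the current block height
# so it can be compared against the chain tip reported by pooltool.io.
# Assumes the default EKG port 12788 and that jq is installed.
while true; do
  height=$(curl --silent -H 'Accept: application/json' 'http://127.0.0.1:12788/' \
    | jq -r .cardano.node.metrics.blockNum.int.val)
  echo "$(date -u '+%H:%M:%S') local block height: ${height}"
  sleep 30
done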

Related

Get "Upgrade request required" from kubectl exec from Windows10/cygwin

For quite a while, I've been running kubectl v1.17 on Windows10/Cygwin to connect to the clusters for our application. Every once in a while, I use "kubectl exec" to perform an operation within a container. I've never had a problem doing this, until the last couple of days.
A couple of days ago, this attempt failed with "Upgrade request required". I talked to a colleague with a similar setup, and he hadn't been seeing this error. He was using v1.18, so I upgraded, and that seemed to fix the problem. I then used that for a few hours yesterday with no problem.
This morning, I'm getting "Upgrade request required" again, so the "upgrade" didn't actually fix it.
From the occurrences of this on the web, I see it has something to do with the connection handshake, but that's about all I know.
Our clusters are running v1.13.5 of k8s.
I tried running the command with "-v=10" to get more info.
The actual internal curl command that gets this appears to be this (with some elisions):
I0612 10:40:31.032729 10408 round_trippers.go:423] curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl.exe/v1.18.0 (windows/amd64) kubernetes/9e99141" -H "Authorization: Bearer ..." 'https://...'
Is there anything in this that might indicate what might be going wrong here?
Update:
I discovered this morning that the issue is definitely related to Cygwin. If I take the resulting "kubectl exec" command and execute it in a Windows cmd shell, it works perfectly fine. No error.
A relevant portion of my "uname -a" string might be "3.1.5(0.340/5/3) 2020-06-01 08:59 x86_64 Cygwin".
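There is no accepted fix in this excerpt, but based on the observation above that the same command succeeds in a Windows cmd shell, one hedged workaround sketch from the Cygwin prompt is to hand the call to cmd.exe (the pod name and command here are placeholders):
# Workaround sketch only: run the Windows kubectl binary through cmd.exe so the
# exec handshake happens outside Cygwin. "my-pod" and the command are placeholders.
cmd /c "kubectl exec my-pod -- ls /"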

In Apache Atlas, is there a way to delete/clean soft deleted entities after enabling hard delete?

We used to have soft delete and recently enabled hard delete in Atlas 1.1. Now we are trying to clean up the soft-deleted entities via the delete-by-GUID API but are not able to clear them. Is there a way to delete/clean soft-deleted entities after enabling hard delete?
I even tried updating the entities with an ACTIVE status to reactivate them, but the status is still "DELETED".
Apache Atlas Version: 1.1
API: DELETE /v2/entity/guid/{guid}
Atlas seems to have added a purge API (/admin/purge) to delete soft-deleted entities.
You can go through the JIRA https://issues.apache.org/jira/browse/ATLAS-3477 for more details.
Soft delete:
curl -iv -u admin:admin -X DELETE http://localhost:21000/api/atlas/v2/entity/guid/88f13750-f2f9-4e31-89f7-06d313fe5d39
Then hard delete (purge):
curl -i -X PUT -H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-u admin:admin 'http://localhost:21000/api/atlas/admin/purge/' \
-d '["88f13750-f2f9-4e31-89f7-06d313fe5d39"]'

Is it possible to run a curl command with a splunk dbxquery?

I am developing a dashboard that connects to Splunk via the REST API and displays data on various charts/graphs, etc. In order to get the data I have to make a POST request via curl (Node.js). Everything is working great. However, when I try to make a POST request with a dbxquery, it fails and returns 'fatal dbxquery unknown command.' I was wondering if anyone has encountered this before.
curl -H 'Authorization: Basic auth token' -k https://devfg.com:8089/services/search/jobs -d search=" | dbxquery query=\"SELECT count(*) FROM db.table\" connection=\"connection\"" -d output_mode=json
Are the permissions for the dbxquery command set to be executable from any app? Check under app permissions to see if the command is globally exported.
Alternatively, you may need to escape the *, so \*.
Otherwise, you should be able to run the dbxquery via a curl command.
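For what it's worth, the POST above only creates a search job; a hedged sketch of fetching the results afterwards (assuming the sid comes back in the JSON response and using the standard /services/search/jobs/{sid}/results endpoint) could look like this:
# Sketch: submit the dbxquery search, capture the job sid, then fetch results.
# Assumes output_mode=json and the same auth token and host as above.
sid=$(curl -s -H 'Authorization: Basic auth token' -k \
  https://devfg.com:8089/services/search/jobs \
  -d search=' | dbxquery query="SELECT count(*) FROM db.table" connection="connection"' \
  -d output_mode=json | jq -r .sid)

# In practice you would poll until the job is done; here we just fetch what is available.
curl -s -H 'Authorization: Basic auth token' -k \
  "https://devfg.com:8089/services/search/jobs/${sid}/results?output_mode=json" | jq .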

ApiAxle: cannot access stat URL in order to view analytics

I am following the instructions at http://apiaxle.com/docs/statistics-and-analytics-in-apiaxle/. Unfortunately, currently (May 17, 2014) ApiAxle is redirecting me to the endpoint server and I am not getting statistics.
menelaos:~$ curl 'http://localhost:3000/v/api/test/stats?granularity=hour&format_timestamp=ISO'
Response:
{"meta":{"version":1,"status_code":404},"results":{"error":
{"type":"ApiUnknown","message":"No api specified (via subdomain)"}}}
I also tried using the subdomain but that didn't work either:
menelaos:~$ curl 'http://test.api.localhost:3000/v/api/test/stats?granularity=hour&format_timestamp=ISO'
Typically you run multiple instances of apiaxle-proxy (which provides access to your endpoints), and a single instance of apiaxle-api (which provides access to statistics, key creation, and other API management functionality).
For example, you might be running the proxy like this:
apiaxle-proxy -f 1 -p 3000 -q
To run the API, you would run something like this:
apiaxle-api -f 1 -p 5000 -q
Note that the API needs to run on a separate port. Also note that it shouldn't be accessible to the outside world as it doesn't have any authentication.
Using the above example, your curl command would look like this:
curl -H 'content-type: application/json' \
-X GET \
'http://localhost:5000/v1/api/test/stats' \
-d '{"granularity":"hour","format_timestamp":"ISO"}'
Note that the parameters need to be sent as JSON.
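Putting those pieces together, a minimal sketch of the whole setup (same example ports as above, with an API named test already registered) would be:
# Sketch using the example ports above: proxy on 3000, management API on 5000.
apiaxle-proxy -f 1 -p 3000 -q &
apiaxle-api   -f 1 -p 5000 -q &

# Query hourly stats for the "test" API via the management API, parameters sent as JSON.
curl -H 'content-type: application/json' \
     -X GET \
     'http://localhost:5000/v1/api/test/stats' \
     -d '{"granularity":"hour","format_timestamp":"ISO"}'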

Couchbase REST API vs CLI

I'm trying to use the REST API on Couchbase 2.2 and I'm finding two things that I cannot seem to do via REST:
Init a new cluster when no other nodes exist.
CLI version:
couchbase-cli cluster-init -u admin -p mypw -c localhost:8091 --cluster-init-ramsize=1024
Remove a healthy node from the cluster.
CLI version:
couchbase-cli rebalance -u admin -p mypw -c 10.10.1.10:8091 --server-remove=10.10.1.12
As for removing a node, I've tried:
curl -u admin:mypw -d otpNode=ns_1#10.10.1.12 \
http://10.10.1.10:8091/controller/ejectNode
Which returns: "Cannot remove active server."
I've also tried:
curl -s -u Administrator:myclusterpw \
-d 'ejectedNodes=ns_1%4010.10.1.12&knownNodes=ns_1%4010.10.1.10%2Cns_1%4010.10.1.11' \
http://10.10.1.10:8091/controller/rebalance
Which returns: {"mismatch":1} (presumably due to the node actually not being marked for ejection?)
Am I crazy, or are there no ways to do these things using curl?
I spun up a two-node cluster on AWS (10.170.76.236 and 10.182.151.86) and was able to remove node 10.182.151.86 using the curl request below:
curl -v -u Administrator:password -X POST 'http://10.182.151.86:8091/controller/rebalance' -d 'ejectedNodes=ns_1#10.182.151.86&knownNodes=ns_1#10.182.151.86,ns_1#10.170.76.236'
That removes the node and performs the rebalance, leaving '10.170.76.236' as the single remaining node. Running the request below results in 'Cannot remove active server', as you have experienced:
curl -u Administrator:password -d otpNode=ns_1#10.170.76.236 http://10.170.76.236:8091/controller/ejectNode
This is because you can't remove the last node, as you can't perform a rebalance; this issue is covered at http://www.couchbase.com/issues/browse/MB-7517.
I left in the real IPs I used so the curl requests are as clear as possible; I've terminated the nodes now though :)
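Since the rebalance kicked off by that request runs asynchronously, a small sketch like this (same credentials and the surviving node's address; rebalanceProgress is the standard status endpoint) lets you watch it complete:
# Sketch: poll the rebalance status after submitting the ejection request above.
# Assumes the same Administrator credentials and the surviving node's address.
while true; do
  curl -s -u Administrator:password \
    http://10.170.76.236:8091/pools/default/rebalanceProgress
  echo
  sleep 5
done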
To init a new cluster, use a combo of:
curl -X POST -u admin:password -d username=Administrator \
-d password=letmein \
-d port=8091 \
http://localhost:8091/settings/web
and
curl -X POST -u admin:password -d memoryQuota=400 \
http://localhost:8091/pools/default
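As a single script, a hedged REST equivalent of couchbase-cli cluster-init using those two calls (example credentials, port, and quota from above) might look like this:
# Sketch: REST equivalent of "couchbase-cli cluster-init", combining the two
# calls above. Example credentials, port, and quota; adjust for your node.
curl -X POST -u admin:password -d username=Administrator \
     -d password=letmein \
     -d port=8091 \
     http://localhost:8091/settings/web

# Note: /settings/web changes the admin credentials, so the next call may need
# the new Administrator:letmein credentials rather than the original ones.
curl -X POST -u Administrator:letmein -d memoryQuota=400 \
     http://localhost:8091/pools/default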
A ticket raised against the ejectNode behaviour indicates that the command itself won't work by design; the server seemingly needs to be in a pending or failover state for that command to apply.