How can I set up my local Waves network successfully? - wavesplatform

I have 2 questions.
I set up a local Waves network and want to run 2 miner nodes.
The first node booted fine and is mining blocks.
The second node booted fine but is only syncing blocks; it isn't mining any.
The second node also has "miner.enable=yes" set and holds 1000 WAVES.
Is there anything else that needs to be set for this node to be a miner? Or does this node need time to participate in the mining schedule?
I want to get miner info using the REST API.
My local node's config is set like the following:
api-key-hash = "H6nsiifwYKYEx6YzYD7woP1XCn72RVvx6tC1zjjLXqsu"
And I called the API like this:
curl -X GET http://127.0.0.1:6869/debug/minerInfo -H "Content-Type:application/json" -H "api_key: H6nsiifwYKYEx6YzYD7woP1XCn72RVvx6tC1zjjLXqsu"
But I got this error message:
{"error":2,"message":"Provided API key is not correct"}
I also called the same API at "https://nodes-testnet.wavesnodes.com/api-docs/index.html#/debug/minerInfo_1"
and got the same error message.
How can I call this API successfully?

That should be enough, but if your first node has 99.9999 million WAVES and the second one only 1000, the first one will generate 99.9999% of the blocks, so maybe it is simply not yet the second node's turn to generate a block.
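As a back-of-the-envelope sketch (plain Python, not the actual Waves PoS implementation), the share of blocks each node generates is roughly proportional to its generating balance:

```python
# Rough illustration only (not the real Waves PoS algorithm): in a
# balance-weighted lottery, a node's chance to generate the next block
# is approximately proportional to its generating balance.
def block_share(balances):
    """Return each node's approximate share of generated blocks."""
    total = sum(balances.values())
    return {node: bal / total for node, bal in balances.items()}

shares = block_share({"node1": 99_999_900, "node2": 1_000})
# node2's share is tiny, so it may take a long time before it wins a slot
print(f"node2 share: {shares['node2']:.6%}")
```

With a share that small, waiting longer (or moving more WAVES to the second node) is the expected fix.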
You should add the header X-Api-Key with the actual API key, not with its hash. For example, if you had "myawesomekey" and put its hash (H6nsiifwYKYEx6YzYD7woP1XCn72RVvx6tC1zjjLXqsu) in the config, then you send the header X-Api-Key: myawesomekey.
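A minimal sketch of why the plaintext key must be sent. This is illustrative only: it uses SHA-256, whereas a Waves node actually uses its own secure hash and Base58 encoding, but the comparison logic is the same idea:

```python
import hashlib

# Illustration only: the node's config stores a *hash* of the key, and the
# node compares it against the hash of whatever the client sends.
# (Waves really uses its own SecureHash + Base58, not plain SHA-256.)
def check_api_key(sent_key: str, stored_hash: str) -> bool:
    return hashlib.sha256(sent_key.encode()).hexdigest() == stored_hash

stored = hashlib.sha256(b"myawesomekey").hexdigest()  # what goes in the config
assert check_api_key("myawesomekey", stored)   # sending the plaintext key: OK
assert not check_api_key(stored, stored)       # sending the hash itself fails
```

Sending the hash gets hashed again on the server side, which is why the node answers "Provided API key is not correct".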

Related

JMeter recorded script is not adding data on the frontend side

I recorded my JMeter script on server X and made it dynamic, then ran the same script on server Y. It fetches all data via the Post-Processor and gives no errors, but the data is not added on the frontend. How can I solve this, and what could be the reason behind it? (The website is the same; only the server was changed for testing.)
Expected: data should be added on the frontend, e.g. a lead created on server Y (it is created successfully on server X).
Actual: data is not added on server Y.
Most probably you need to correlate your script, as it is not doing what it is supposed to be doing.
You can run your test with 1 virtual user and 1 iteration configured in the Thread Group and inspect request and response details using the View Results Tree listener.
My expectation is that you are either not getting logged in (you have added an HTTP Cookie Manager to your Test Plan, haven't you?) or failing to provide valid dynamic parameters. Modern web applications widely use dynamic parameters, for example for client-side state tracking or for CSRF protection.
You can easily detect dynamic parameters by recording the same scenario one more time and comparing the generated scripts. All the values which differ need to be correlated, that is, extracted from the previous response using a suitable Post-Processor and stored in a JMeter Variable. Once done, you will need to replace the recorded hard-coded values with the aforementioned JMeter Variable.
Check out How to Handle Correlation in JMeter article for comprehensive information with examples.
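As an illustration of what correlation means (the response body, token name, and regex below are made up for this example), a Post-Processor essentially does the following:

```python
import re

# Hypothetical server response; the field name "csrf_token" and its value
# are assumptions, just to show what a Regular Expression Extractor does.
response = '<input type="hidden" name="csrf_token" value="a1b2c3d4"/>'

# Step 1 (the Post-Processor): extract the dynamic value from the response
match = re.search(r'name="csrf_token" value="([^"]+)"', response)
token = match.group(1)  # stored into a "variable"

# Step 2: reuse the extracted value in the next request, instead of the
# hard-coded value captured at recording time
next_request_params = {"csrf_token": token, "lead_name": "test"}
```

A recorded script replays the token from recording time, which server Y rejects silently; correlation replaces it with the live value.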

DELETE different resources with one request - is it OK, or should we try to merge those resources into one?

Let's assume that I have a collection with a /playrequests endpoint. It is a collection (list) of players who want to find another player to start a match.
The server will check this collection periodically and, if it finds two unassigned players, it will create another resource in another collection with a /quickmatchs endpoint and also change (for example) a field in the playRequests collection for both players to show that they are assigned to a quickMatch.
At this point, players can send a PUT or PATCH request to set the (for example) "ready" field of their related quickMatch resource to true, so the server and each of them can find out whether both are ready and the match can start.
(The issue part is below...)
Also, both before a playRequest is assigned to a match and after it is assigned, a player can send a DELETE request to the /playrequests endpoint to tell the server that they want to give up the request. If the match hasn't been created yet, it's simple: the resource related to the player is removed from the playRequests collection. But if the player is already assigned to a match, the server must delete the related playRequest and also delete the related quickMatch resource from the /quickmatchs collection. (We should also modify the playRequest related to the other player to indicate that it's unassigned now, or we can check and change it later when he checks the status of his related resources in both collections. That is not the main issue for now.)
So, my question is: is it OK to change a resource related to the given endpoint and also change another resource accordingly, if necessary? (I mean, is it OK to manipulate different resources with different endpoints in one request? I don't want to send multiple requests.) Or do I need to merge those two collections to avoid such an action?
I know that many things (different strategies) are possible, but I want to know (from a RESTful viewpoint) what is standard/appropriate and what is not. (Consider that I am kind of new to REST.)
is it OK to change a resource that is related to the given endpoint and also change another resource accordingly
Yes, and there are consequences.
Suppose we have two resources, /fizz and /buzz, and the representations of those resources are related via some implementation details on the server. On the client, we have accessed both of these resources, so we have cached copies of each of them.
The potential issue here is that the server changes the representations of these resources together, but as far as the client is concerned, they are separate.
For instance, if the client sends an unsafe request to change /fizz, a successful response message from the server will invalidate the locally cached copy of that representation, but the stale representation of /buzz does not get evicted. In effect, the client now has a view of the world with version 0 /buzz and version 1 /fizz.
Is that OK? "It depends" -- will expensive things happen to you if your clients are in a state of unmatched representations? Can you patch over the "problem" in other ways (for instance, by telling the client to check resources for updates more often)?
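A toy sketch of the staleness problem, assuming a client-side cache keyed by URL:

```python
# Toy illustration of the caching issue: the client caches both
# representations, but an unsafe request on /fizz only invalidates /fizz.
cache = {"/fizz": {"version": 0}, "/buzz": {"version": 0}}

def unsafe_request(path):
    # Per HTTP caching semantics, a successful response to an unsafe
    # method (POST/PUT/PATCH/DELETE) invalidates the cached copy of the
    # *target* resource only.
    cache.pop(path, None)

unsafe_request("/fizz")  # the server may have changed /buzz too...
# ...but the client still holds the stale version 0 of /buzz:
assert "/fizz" not in cache
assert cache["/buzz"]["version"] == 0
```

The client is now mixing a fresh /fizz with a stale /buzz, which is exactly the "unmatched representations" state described above.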

Finding all the users in Jira using the REST API

I'm trying to list all the users in Jira using the REST API. I'm currently using the user search feature via GET: https://docs.atlassian.com/jira/REST/server/#api/2/user-findUsers
The thing is, it says that the result will by default contain the first 50 matches, and that this can be expanded up to 1000. Compared to other features available in the REST API, pagination here is not specified.
An example is the group member feature: https://docs.atlassian.com/jira/REST/server/#api/2/group-getUsersFromGroup
So I ran a test on my test Jira instance with 2 members: I requested only one result to see whether there was any indication referring to the rest of the results.
The response only contains the results themselves, with no way to know whether there were more than 1000 (or 1, in my example). That may be logical, but in the case of an organization with more than 1000 members, listing all the users with http://jira/rest/api/2/user/search?username=.&maxResults=1000&includeInactive=true will give at most 1000 results.
I'm getting all the users no matter what their names are by using . as the matching character.
Thanks for your help!
What you can do is calculate the number of users manually.
Let's say you have 98 users in your system.
The first search will give you 50 users. Now you have an array, and you can get the length of that array, which is 50.
Since you do not know if there are 50 or 51 users, you execute another search with the parameter &startAt=50.
This time the array length is 48 instead of 50 and you know that you've reached all the users in the system.
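The counting approach above can be sketched like this (the fetch_page stub stands in for a real GET to /rest/api/2/user/search with the startAt and maxResults parameters):

```python
# Sketch of the page-counting approach, with a stand-in for the Jira call.
# In practice fetch_page would GET /rest/api/2/user/search with
# username=., startAt=start and maxResults=50, and parse the JSON array.
ALL_USERS = [f"user{i}" for i in range(98)]  # pretend the instance has 98 users

def fetch_page(start, max_results=50):
    return ALL_USERS[start:start + max_results]

def fetch_all_users():
    users, start = [], 0
    while True:
        page = fetch_page(start)
        users.extend(page)
        if len(page) < 50:  # a short page means we've reached the end
            return users
        start += 50

assert len(fetch_all_users()) == 98
```

Paging in increments of 50 with startAt sidesteps the need to know the total up front, though it cannot get past the endpoint's 1,000-result cap mentioned below.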
From speaking to Atlassian support, it seems like the user/search endpoint has a bug where it will only ever return the first 1,000 results at most.
One possible other way to get all of the users in your JIRA instance is to use the Crowd API's /rest/usermanagement/1/search endpoint:
curl -X GET \
'https://jira.url/rest/usermanagement/1/search?entity-type=user&start-index=0&max-results=1000&expand=user' \
-H 'Accept: application/json' -u username:password
You'll need to create a new JIRA User Server entry to create Crowd credentials (the username:password parameter above) for your application to use in its REST API calls:
Go to User Management.
Select JIRA User Server.
Add an application.
Enter the application name and password that the application will use when accessing your JIRA server application.
Enter the IP address, addresses, or IP CIDR block of the application, and click Save.

Is it safe to use Celery task IDs in HTTP requests?

I am starting to use Celery in a Flask-based web application to run async tasks on the server side.
Several resources get an '/action' sub-resource to which the user/client can send a POST including a JSON-body specifying an action, for example:
curl -H "Content-Type: application/json" -X POST \
-d '{"doPostprocessing": {"update": true}}' \
"http://localhost:5000/api/results/123/action"
They get a 202 ACCEPTED response with a header
Location: http://localhost:5000/api/results/123/action/8c742418-4ade-474f-8c54-55deed09b9e5
which they can poll to get the final result (or get another 202 ACCEPTED if the task is still running).
The ID I am returning for the action is the celery.result.AsyncResult.id.
Is this a safe thing to do? What kind of problems do I create when passing Celery task ids directly to the public?
If not, is there a recommended way to do it? Preferably one which avoids having to track the state of the tasks explicitly.
You will be fine using the task ID. Celery uses Kombu's uuid function, which in turn uses uuid4 by default. uuid4 is randomly generated, rather than based on the MAC address (as uuid1 is), so it will be 'random enough'.
The only other way would be to have an API endpoint that returns the status of all running tasks for the user. i.e. remove any task ID. But you will then remove the ability to query an individual task. Other options will effectively mask the task ID behind a different random number, so you'll have the same brute force problem.
I'd recommend having a look through the security Stack Exchange for UUID questions (https://security.stackexchange.com/search?q=uuid). Some of these will no doubt be equivalent to what you're looking for.
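A quick illustration of the relevant uuid4 properties using Python's standard library:

```python
import uuid

# Celery task IDs are uuid4 by default: 122 random bits, with no MAC
# address or timestamp embedded, so one ID reveals nothing about others
# and cannot practically be guessed.
task_id = uuid.uuid4()
assert task_id.version == 4

# uuid1, by contrast, embeds the machine's MAC address and a timestamp,
# so consecutive IDs from the same host are closely related:
assert uuid.uuid1().node == uuid.uuid1().node  # same machine, same node field
```

So exposing the ID leaks nothing; whether knowing an ID should grant access to the result is a separate authorization question for your endpoint.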

Can firebase server timestamps be written without making two requests?

The Firebase REST API describes how to write server values (currently only timestamps are supported) at a location, but it appears that one must submit a separate request in order to do this. Is there (or has there been planned) any way of setting timestamps (like createdAt) at the same time one submits other data? Seems like this would really help reduce traffic and improve performance.
Sure, this is possible. The documentation is admittedly a little unclear, but all you need to do is include the {".sv": "timestamp"} object as part of your JSON payload. Here's an example that saves it to a key timestamp.
curl -X PUT -d '{"something":"something", "timestamp":{".sv": "timestamp"}}' https://abc.firebaseio-demo.com/.json
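For example, the same payload can be built programmatically (Python here; the target URL is a placeholder for your database):

```python
import json

# The {".sv": "timestamp"} marker is a server-value sentinel: Firebase
# replaces it with the server's epoch-milliseconds time at write time,
# so no second request is needed.
payload = json.dumps({
    "something": "something",
    "timestamp": {".sv": "timestamp"},
})
# This string is what you'd PUT to https://<your-db>.firebaseio.com/.json
```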