I’ve noticed that when I fetch balances from the Waves Platform API, the reported balances sometimes vary and appear inaccurate. Is this a common issue? Is there perhaps a delay in updating balances after trades?
This usually happens when:
1. You are using the official Waves nodes pool (nodes.wavesnodes.com) - some responses there are cached.
2. At the moment of the request, the block-generating node and the node you are querying are not yet in sync. In that case you have to wait a few seconds for the new microblocks to propagate.
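If fresh numbers matter, a simple workaround is to query a node you run yourself (or retry after a short pause so microblocks can catch up). Below is a minimal sketch of that idea; it assumes the standard node REST endpoint /addresses/balance/{address}, and the node URL and address are placeholders:

```scala
import scala.io.Source

object WavesBalanceCheck extends App {
  // Node URL and address are placeholders; point this at a node you run yourself
  // if you need fresh, uncached balances instead of the public pool.
  val nodeUrl = "https://nodes.wavesnodes.com"
  val address = "3P...yourAddressHere"

  def fetchBalance(): String =
    Source.fromURL(s"$nodeUrl/addresses/balance/$address").mkString

  // Read twice with a short pause so newly generated microblocks have time to sync.
  val first = fetchBalance()
  Thread.sleep(5000)
  val second = fetchBalance()

  if (first != second)
    println(s"Balance changed between reads (likely cache or sync lag): $first -> $second")
  else
    println(s"Stable balance response: $first")
}
```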
We’re using a standard 3-node Atlas replica set in a dedicated cluster (M10, MongoDB 6.0.3, AWS) and have configured an alert to fire if the ‘Restarts in Last Hour is’ rule exceeds 0 for any node.
https://www.mongodb.com/docs/atlas/reference/alert-conditions/#mongodb-alert-Restarts-in-Last-Hour-is
We’re seeing this alert fire every now and then, and we’re wondering what it means for a node in a dedicated cluster and whether it is something to be concerned about, since I don’t think we have any control over it. Should we disable this rule or increase the restart threshold?
Thanks in advance for any advice.
(Note: I've asked this over at the MongoDB community support site as well, but haven't gotten any traction yet, so I'm asking here too.)
I got an excellent response on my question at the Mongo community support site:
A node restarting is not necessarily a cause for concern. However, you should investigate the cause of the restart itself to better determine whether it is an issue or not. Take a look at your Project Activity Feed to see if you can determine why the nodes are restarting. Since you have noted this is an M10 cluster, you should have access to the MongoDB logs, so you can also check those to try to determine the cause of the node restart. If you do not have access to the logs, you can consider working with Atlas in-app chat support to diagnose the issue.
It’s always good to keep the alerts active, as they can indicate a potential problem as soon as they occur. You can consider increasing the restart threshold to reduce alert noise after concluding whether the restarts are expected or not.
In my case, having checked the activity feed, I was able to match up all the alerts we were seeing with MongoDB version auto-updates on the nodes. We still wanted to keep the alert, so we've increased our threshold to fire on >1 restart per hour rather than >0, on the assumption that auto-updates won't be applied multiple times in the same hour.
I've used Fabric for quite a few applications, but I was curious about the performance when a single application submits potentially hundreds of events per minute.
For this example I'm going to use a pedometer application, in which I want to keep track of the number of steps users take. Considering the average user walks about 100 steps per minute, I wouldn't want the application to be sending several dozen updates to the server.
How would Fabric handle this? Would it just tell the server "Hey, there were 273 step events in the last 5 minutes with this metadata", or would it send 273 individual step events?
Pedometer applications typically run in the background, so how would we get data to Fabric without the user opening the application?
Great question! Todd from Fabric here. Events get batched and sent at time intervals, and certain events (like installs) also trigger an upload of the queued event data. You can watch our traffic in the Xcode debugger if you are curious about the specifics for your app.
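If you would rather not rely solely on SDK batching, you can also aggregate on the client and log one summary event per interval instead of hundreds of individual step events. A rough sketch of that idea (the aggregator class, the flush interval, and the send callback are my own illustrations, not Fabric APIs):

```scala
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.{Executors, TimeUnit}

// Accumulates step events locally and flushes a single summary per interval,
// instead of handing hundreds of individual events to the analytics SDK.
class StepAggregator(period: Long, unit: TimeUnit, send: Int => Unit) {
  private val pending = new AtomicInteger(0)
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  def recordStep(): Unit = pending.incrementAndGet()

  def shutdown(): Unit = scheduler.shutdown()

  scheduler.scheduleAtFixedRate(() => {
    val steps = pending.getAndSet(0)
    if (steps > 0) send(steps) // e.g. one "steps_taken" event carrying a count attribute
  }, period, period, unit)
}

object StepAggregatorDemo extends App {
  // A 5-second interval just for the demo; a real app might flush every 5 minutes.
  val aggregator = new StepAggregator(5, TimeUnit.SECONDS,
    steps => println(s"logging one summary event: $steps steps"))

  (1 to 273).foreach(_ => aggregator.recordStep()) // 273 steps collapse into one event
  Thread.sleep(6000)
  aggregator.shutdown()
}
```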
Server frameworks: Scala, Play 2.2, ReactiveMongo, Heroku
I think I have quite an interesting brain teaser for you:
In my trip-planning application I want to display a weather forecast on a map (similar to this). I'm using a paid REST service to query weather data. To speed up the user experience and reduce costs, I plan to cache the weather data for each location for one hour.
There are a few not-so-obvious things to consider:
Displaying one weather map might require querying up to 100 locations.
Weather must be queried in parallel, because querying it serially would take too long given network latency.
However, launching 100 threads for each user request is not an option either (imagine just 5 users looking at a map at the same time).
The solution is to have, let's say, 50 workers that query weather for user requests.
Multiple users might be viewing the same portion of the map.
There is a possible race condition where one location is queried multiple times.
However, it should be queried only once and then cached.
The application is running in a clustered environment, meaning there will be several Play instances.
Coming from a Java EE background I can come up with a pretty good solution using the Java EE stack.
However, I wonder how to do this using something more natural to the Scala/Play stack: Akka. There is an example (google "heroku scala akka") for a similar problem, but it doesn't solve one issue: the race condition when multiple users query the same data at once.
How would you implement this?
EDIT: I have decided that the requirement to ensure that weather data is updated only once is not necessary. The situation would happen far too infrequently to be a real problem, and all the proposed solutions would add too much overhead and complexity to the system to be viable.
Thanks everyone for your time and effort. I hope the answers to this question will help someone with a similar problem in the future.
In Akka you can choose from multiple routing strategies. ConsistentHashingRoutingLogic could serve you well in this situation. Since actors are single-threaded, you can easily maintain a cache in each actor. This routing logic will ensure that two equal messages always hit the same actor.
Each actor can work in the following way (see the sketch after this list):
1. Check the local cache (for example an Apache Commons LRUMap).
   - If found, return.
2. Check the global cache (distributed memcache or any other key-value store).
   - If found, store the result in the local cache and return.
3. Query the REST service.
4. Store the result in the global and local caches.
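A minimal sketch of this with classic Akka routers (the message type, the worker count, and the REST/global-cache calls are placeholders for illustration):

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.ConsistentHashingPool
import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope

case class GetWeather(locationId: String)

class WeatherActor extends Actor {
  // The actor processes one message at a time, so a plain var is a safe local cache.
  private var localCache = Map.empty[String, String]

  def receive = {
    case GetWeather(loc) =>
      val weather = localCache.getOrElse(loc, {
        // In the real flow you would first check the global cache (memcache/Redis)
        // and only fall back to the paid REST service on a miss.
        val fetched = queryRestService(loc)
        localCache += loc -> fetched
        fetched
      })
      sender() ! weather
  }

  private def queryRestService(loc: String): String = s"forecast-for-$loc" // placeholder
}

object WeatherRouterApp extends App {
  val system = ActorSystem("weather")
  // 50 workers; consistent hashing routes requests for the same location to the same
  // actor, so each location's cache entry lives in exactly one place.
  val router = system.actorOf(ConsistentHashingPool(50).props(Props[WeatherActor]), "weatherRouter")

  router ! ConsistentHashableEnvelope(GetWeather("52.52,13.40"), hashKey = "52.52,13.40")
}
```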
You can have a look at this question, which I based my answer on.
I decided that I'll post my JMS solution as well.
The controller that processes the request for weather does the following (a sketch of the worker side follows the list):
Query the DB for weather data. If there are NO locations with out-of-date data, reply immediately. Otherwise, continue:
Start listening on a topic (explained later).
For each location: check whether the weather for that location is already being updated.
If not, send a weather-update request message to the queue.
A certain number of workers (50?) listen on that queue.
A worker first marks the location's weather as being updated.
The worker retrieves the updated weather and updates the DB.
The worker sends a message to the topic with the weather data for that location.
When the controller receives (via the topic) weather updates for all out-of-date locations, it combines them with the up-to-date locations and replies.
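For reference, here is a rough Scala sketch of the worker side of this flow. It assumes ActiveMQ as the JMS provider; the broker URL, queue/topic names, and the DB/REST helpers are placeholders:

```scala
import javax.jms.{Session, TextMessage}
import org.apache.activemq.ActiveMQConnectionFactory

object WeatherUpdateWorker extends App {
  // Broker URL, queue and topic names are assumptions for this sketch.
  val connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection()
  connection.start()
  val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)

  val requests = session.createConsumer(session.createQueue("weather.update.requests"))
  val updates  = session.createProducer(session.createTopic("weather.updates"))

  // Loop forever: take a location off the queue, refresh it, publish the result.
  while (true) {
    requests.receive() match {
      case msg: TextMessage =>
        val locationId = msg.getText
        markAsUpdating(locationId)             // flag in the DB so controllers skip it
        val weather = fetchWeather(locationId) // call to the paid REST service
        saveToDb(locationId, weather)
        updates.send(session.createTextMessage(s"$locationId:$weather"))
      case _ => // ignore anything that isn't a text message
    }
  }

  // Placeholder helpers; real implementations would hit the DB and the REST API.
  def markAsUpdating(id: String): Unit = ()
  def fetchWeather(id: String): String = s"forecast-for-$id"
  def saveToDb(id: String, weather: String): Unit = ()
}
```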
I'm working on a tracking site that tracks a player's levels for a game from day to day.
It's going to have to process around 25,000 API calls once a day. I'd like to be able to get this done in 1 hour but I would be okay with processing them all in 2 hours for now.
This is the API I would need to call for each player in my database to get their information: http://hiscore.runescape.com/index_lite.ws?player=Zezima
My site and database are hosted on a VPS.
My thought on how to achieve this is to spin up a handful of DigitalOcean VPS instances when it's time to make the API calls, have my main VPS distribute the calls across the DO instances, and have those instances make the API calls and insert the results back into my database.
Parallelization is your friend here. Pool your queue listeners and have them run on a machine with adequate CPU and memory.
How fast is your process? Completing 25,000 transactions in one hour means about 7 per second. Do you have timing data to help guide the number of instances you'll need?
I'm assuming that your database will allow simultaneous INSERTs. You don't want those getting in each other's way.
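One way to hit that rate without spinning up extra machines is a bounded worker pool pulling player names from your list. A rough sketch under those assumptions (the pool size of 20 and the player list are placeholders to tune against observed request latency):

```scala
import java.util.concurrent.Executors
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.io.Source

object HiscoreFetcher extends App {
  // A fixed pool bounds concurrency; 20 workers is a guess to tune against
  // observed latency (25,000 calls in an hour is roughly 7 per second).
  implicit val ec: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(20))

  // In reality these ~25,000 names would come from your database.
  val players = Seq("Zezima", "SomeOtherPlayer")

  def fetchHiscore(player: String): String =
    Source.fromURL(s"http://hiscore.runescape.com/index_lite.ws?player=$player").mkString

  val work = Future.traverse(players) { player =>
    Future {
      val stats = fetchHiscore(player)
      // Insert `stats` into the database here; batched INSERTs keep the writers
      // from getting in each other's way.
      player -> stats
    }
  }

  Await.result(work, 2.hours).foreach { case (p, stats) => println(s"$p: ${stats.take(40)}...") }
}
```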
The documentation states that the API is limited to one email at a time per user, and that we should create threads and process multiple users at once.
Does anyone know if there is any other type of limitation? How many GB per hour?
I have to plan a migration of tens of thousands of accounts, and hardware resources are practically unlimited. Will I raise a flag somewhere or get blocked if I start migrating over 1,000 users at a time?
Thanks
The limits for the API are posted at https://developers.google.com/google-apps/email-migration/limits. There is a per-user rate limit of one request per second. If you exceed this you will start seeing 503 errors returned. The best way to deal with this is to implement an exponential backoff algorithm to handle the errors and retry the request.
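A simple exponential-backoff retry helper could look like the sketch below. The retry count, base delay, and the callApi placeholder are assumptions for illustration, not part of the migration API:

```scala
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object BackoffRetry {
  // Retry a call, doubling the delay (plus jitter) after each failure --
  // the usual way to handle 503 rate-limit responses.
  @tailrec
  def withBackoff[T](maxRetries: Int, delayMs: Long = 1000)(call: => T): T =
    Try(call) match {
      case Success(result) => result
      case Failure(_) if maxRetries > 0 =>
        Thread.sleep(delayMs + scala.util.Random.nextInt(500))
        withBackoff(maxRetries - 1, delayMs * 2)(call)
      case Failure(e) => throw e
    }
}

object BackoffDemo extends App {
  // callApi stands in for a real migration request that may throw on a 503 response.
  def callApi(user: String): String =
    if (scala.util.Random.nextBoolean()) s"migrated mail for $user"
    else throw new RuntimeException("503 Service Unavailable")

  println(BackoffRetry.withBackoff(maxRetries = 5) { callApi("someone@example.com") })
}
```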