How much data can I store in chaincode state in IBM Hyperledger?
I'm using IBM Bluemix hosting.
I'm not able to find any link that specifies the size limit.
Currently there is no size limit in bytes.
What we have seen in internal testing for one use case: the chaincode wrote so much to the state that the peers couldn't transfer it all before the connection timed out. So I guess the answer is that there is no hard limit, but you are indeed limited by the capacity of your network. I hope this helps.
Related
I have an AWS EC2 server running an application that is connected to a MongoDB Atlas sharded cluster. Periodically the application will slow down and I will receive alerts from MongoDB about high CPU steal %. I am looking to upgrade my MongoDB server tier, and I see that the only difference between the options is more storage space and more RAM; the number of vCPUs is the same. I'm wondering if anyone has any insight into whether this increased RAM will help with the CPU steal % alerts I am receiving, and whether it will help speed up the app. Otherwise, am I better off upgrading my AWS server tier to get more CPU that way?
Any help is appreciated! Thanks :)
I don't think more RAM will necessarily help if you're mostly CPU-bound. However, if you're using MongoDB Atlas, the higher tiers definitely do provide more vCPUs as you go up the scaling options.
You can also enable auto-scaling and set your minimum and maximum tiers to let the database scale as necessary: https://docs.atlas.mongodb.com/cluster-autoscaling/
However, be warned that MongoDB Atlas has a pretty aggressive scale-out and a pretty sluggish scale-in. I think the scale-in only happens after 24 hours, so it can get costly.
I am a little confused about my message server's network bottleneck. I can clearly see that the problem is caused by the large number of network operations, but I am not sure why, or how to identify it.
Currently we are using GCP VMs, with 4 cores/8 GB RAM for our message server. Redis and Cassandra run on other servers in the same location. The problem occurs in the network operations to the Redis server and the Cassandra server.
I need to handle 3000+ requests at once to save data to Redis and 12000+ requests to the Cassandra server.
My task consumes all my CPU power, and CPU usage drops right after I merge the Redis and Cassandra requests into a kind of batch request. The penalty is that I have to delay saving the data.
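The batching idea can be sketched like this (the names `BatchWriter` and `write_many` are made up for illustration; in a real setup the flush would go into a Redis pipeline or a Cassandra batch statement):

```python
import time

class BatchWriter:
    """Accumulate writes and flush them in one round trip.

    `backend` is any object with a `write_many(items)` method. The
    trade-off is exactly the one described above: data sits in the
    buffer for up to `max_delay` seconds before it is actually saved.
    """

    def __init__(self, backend, max_items=100, max_delay=0.05):
        self.backend = backend
        self.max_items = max_items
        self.max_delay = max_delay
        self.buffer = []
        self.last_flush = time.monotonic()

    def write(self, item):
        self.buffer.append(item)
        if (len(self.buffer) >= self.max_items
                or time.monotonic() - self.last_flush >= self.max_delay):
            self.flush()

    def flush(self):
        if self.buffer:
            self.backend.write_many(self.buffer)  # one network round trip
            self.buffer = []
        self.last_flush = time.monotonic()

class CountingBackend:
    """Stand-in for Redis/Cassandra that just counts round trips."""
    def __init__(self):
        self.round_trips = 0
        self.items = []
    def write_many(self, items):
        self.round_trips += 1
        self.items.extend(items)

backend = CountingBackend()
writer = BatchWriter(backend, max_items=100)
for i in range(3000):
    writer.write(i)
writer.flush()
print(backend.round_trips)  # typically ~30 here; always far fewer than 3000
```

This is why CPU usage drops: the per-request network and serialization overhead is paid once per batch instead of once per item.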
What I want to know is how I can determine the network capability of my system. How many requests per second is a reasonable load? From my testing it seems obvious that the bottleneck is the network operations, but I can't prove it, and I don't even know how to estimate a reasonable network usage for my system. Are there tools or anything else that can help me confirm the network problem? Or is this just a misconfiguration of my GCP system?
Thanks,
Eric
There is a "Monitoring" tab for each instance where you can check graphs of values like instance CPU, network, and RAM usage.
But to further check the performance of your instance you should use Stackdriver Logging and Monitoring. They store a lot of information from the internal servers and about system performance; for that you will need to install the agent on the instance. Stackdriver also stores information about your load balancer, in case you are using one with your web application, which is very advisable since it scales your resources up or down with intelligent autoscaling.
But in order to test your network you will need to use a third-party tool to put load on it. There are multiple tools to achieve this, like JMeter.
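If you'd rather script a quick throughput measurement than configure JMeter, a minimal sketch in Python (standard library only; the tiny in-process server here just stands in for your real endpoint):

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Tiny local server standing in for the service under test.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def one_request(_):
    with urllib.request.urlopen(url) as resp:
        return resp.status

# Fire concurrent requests and measure elapsed wall-clock time.
n_requests = 200
start = time.monotonic()
with ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(one_request, range(n_requests)))
elapsed = time.monotonic() - start

print(f"{n_requests} requests in {elapsed:.2f}s "
      f"({n_requests / elapsed:.0f} req/s)")
server.shutdown()
```

Pointing `url` at your real service (from a separate load-generator VM, so the client isn't competing for the server's CPU) gives you a concrete requests-per-second number to compare against the instance's network graphs.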
The server will run on a single Compute Engine instance. What could limit its serving capacity, and how much load can a single instance (4 vCPUs and 15 GB memory) handle?
Note: I've already looked at Kubernetes and even load-balancing multiple instances, but accessing the database from multiple clients is a little too complicated for me right now. So please keep in mind, if you're going to suggest containerisation, that I'm a beginner.
Any and all advice is welcome. Thanks!
The serving capacity of the server depends on a number of factors, including the requests you receive from clients, any additional applications running on it, etc. For a 4-core CPU, as per this Help Center article, you get a peak network performance of 8 Gb/s, which is good for a single instance. Since you are running a single Parse Server alone on the VM, it should work very well with the above-mentioned configuration.
A container is a tool for a developer. It contains all the dependencies and libraries required to run/test a particular application. Applications running in containers are easily portable.
There is a Help Center article which might give you a more precise idea of containers and their usage, while Kubernetes Engine will help you deploy/manage these containerized applications.
I have started exploring network programming in Linux using sockets. I am wondering how web servers like Yahoo, Google, etc. are able to establish millions or billions of connections. I believe the core of it is just socket programming to reach the remote server. If that is the case, how are millions of people able to connect to the server at once? That would mean millions of socket connections; this shouldn't be possible, right? The spec seems to say a maximum of 5 socket connections only. What is the mystery behind it?
Can you also explain it in terms of this API?
listen(sock,5);
To get an idea of tuning an individual server, you may want to start with Apache Performance Tuning and maybe Linux Tuning Parameters, though the latter is somewhat outdated. Also see Upper limit of file descriptors in Linux.
Once you have a number of finely tuned servers, a network load balancer is used; it typically distributes IP traffic across a cluster of such hosts. Sometimes DNS load balancing is used in addition, to further split traffic between the IP balancers.
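In its simplest form, the distribution policy behind either kind of balancing is just round-robin assignment; a toy sketch (the host addresses are hypothetical):

```python
import itertools

# Hypothetical pool of finely tuned backend hosts behind one balancer.
backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
rr = itertools.cycle(backends)

def pick_backend():
    """Round-robin choice: the simplest policy a balancer can use."""
    return next(rr)

assignments = [pick_backend() for _ in range(6)]
print(assignments)  # each host receives every third connection
```

Real balancers layer health checks and connection affinity on top, but the effect is the same: no single host ever sees more than its share of the total connection load.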
If you are interested, you can follow Google's Compute Engine Load Balancing, which provides a single IP address and does away with the need for additional DNS balancing, and reproduce their results:
The following instructions walk you step by step through setting up a Google Compute Engine load balancer benchmark that achieves 1,000,000 requests per second. It is the code and steps that were used when writing a post for the Google Cloud Platform blog. You can find the Google Cloud Platform blog at http://googlecloudplatform.blogspot.com/. This gist is a combination of instructions and scripts from Eric Hankland and Anthony F. Voellm. You are free to reuse the code snippets.
https://gist.github.com/voellm/1370e09f7f394e3be724
It doesn't say 'maximum 5 connections only'. The argument to listen() that you refer to is the backlog, not the total number of connections. It is the number of incoming connections that TCP will accept and hold on the 'backlog' queue before the application gets hold of them via accept().
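A small Python sketch (standard library only) makes the distinction concrete: the server below calls listen(5) yet serves 20 simultaneous connections, because each accept() removes a connection from the backlog queue:

```python
import socket

# Server socket with a backlog of 5 *pending* connections -- not a cap
# on the total number of connections it can hold open.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(5)
host, port = server.getsockname()

clients, accepted = [], []
for _ in range(20):             # 20 connections, 4x the backlog
    c = socket.create_connection((host, port))
    clients.append(c)
    conn, _addr = server.accept()   # dequeues from the backlog queue
    accepted.append(conn)

print(len(accepted))            # 20 -- far more than the backlog of 5

for s in clients + accepted + [server]:
    s.close()
```

The backlog only limits how many completed handshakes can sit waiting before the application calls accept(); once accepted, connections are bounded by file descriptors and memory, not by that argument.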
I'm preparing to set up an APNS message server, and I was wondering if anybody has done any analysis of APNS server load that they would be able to share: minimum server specs, maximum messages per second, anything like that.
Thanks!
edit: I'm planning to implement this with .NET, but info about any platform would be incredibly useful.
For my application (which has about 24,000 downloads) I am seeing an average of about 1,300 messages sent a day.
Those are low numbers, but then my client base isn't that large either. But I figure I might as well contribute some info. :-)
My notification provider is idle most of the time so there is MUCH more capacity available if I need it.
It's also using very little RAM at this point (somewhere around 13 MB; I implemented my provider in Python and suspect most of that is taken up by the runtime).
I am running on a Media Temple dv (specifically the Base configuration).
I haven't extrapolated the numbers to find what my theoretical maximum would be, but because of the niche market of my application it's not something that worries me at this point. I have plenty of capacity to scale.
Hope that helps a bit.
chris.
One of the Apple devs mentioned that 100,000 messages is not considered a large amount. That doesn't really answer your question, but I wouldn't expect sending the actual messages to be the bottleneck.
Any server that can handle your database work should be fine for sending the messages out. The protocol is intentionally light-weight.
There is no maximum number of messages per second.
You should keep in mind that every message must be smaller than 256 bytes; otherwise Apple will reject it. You can also check out MonoPush; AFAIK they are building their product on top of the .NET Framework.
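A quick sketch of that size check (plain Python for illustration, even though you're planning .NET; `payload_fits` is a made-up helper, and the 256-byte figure is the limit quoted above):

```python
import json

def payload_fits(payload, limit=256):
    """Return True if the JSON-encoded payload is within `limit` bytes.

    The size limit applies to the serialized bytes, so encode first
    (compact separators, UTF-8) and then measure.
    """
    encoded = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    return len(encoded) <= limit

small = {"aps": {"alert": "Hello", "badge": 1}}
big = {"aps": {"alert": "x" * 300}}
print(payload_fits(small), payload_fits(big))  # True False
```

Validating on your side before sending is worthwhile because a rejected message gives you little feedback about which payload was the problem.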