Is there a way to avoid keeping full Blockchain ledger copies? - distributed-computing

I have a few philosophical questions about Blockchain:
Bitcoin's current blockchain size is over 150 GB, right?
When a new peer joins a blockchain network, it has to download a full copy of the ledger, right?
I am wondering whether there is any way in a blockchain for a participating peer not to be forced to keep a full copy.
I am having these thoughts because it would be nice if low-end devices could join the network without this huge storage requirement; imagine how great it would be to have smartphones and IoT devices contributing to the network.

Interesting thought. Blockchain platforms like Bitcoin are not meant to run on low-end devices. If you are a non-miner, why would you spend 150 GB of storage on keeping and validating blocks?
Blockchain solutions like IOTA, which uses a Directed Acyclic Graph, allow you to be either a full node or a partial node, where you can opt to store the full ledger or not. That is meant for small devices that want to be part of the network without having to store the full data.
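As a rough illustration of the full-vs-partial idea (a generic sketch, not IOTA's actual protocol; all names below are hypothetical), a partial node might keep only block headers and fetch block bodies from peers on demand:

```python
# Hypothetical sketch of a node that can run in "full" or "partial"
# mode. Not any real platform's API.
from dataclasses import dataclass

@dataclass
class BlockHeader:
    height: int
    prev_hash: str
    merkle_root: str   # enough to verify inclusion proofs later

class Node:
    def __init__(self, full_mode: bool = False):
        self.full_mode = full_mode
        self.headers = []   # always kept: a few MB, not 150 GB
        self.blocks = {}    # only populated in full mode

    def accept_block(self, header: BlockHeader, body: bytes):
        self.headers.append(header)
        if self.full_mode:
            self.blocks[header.height] = body
        # A partial node discards the body and would re-fetch it
        # from full peers if it ever needed it.
```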

Related

Is there an alternative for game servers that works like Google Cloud Functions?

I was really satisfied with the billing structure of Cloud Functions: they only charge during the runtime of a function, based on how many resources it consumes. I am looking for a similar solution for my game servers.
I checked various cloud hosting options like VMs, but all providers (DigitalOcean, Google, Amazon, etc.) charge even when there is no load on the VM (and I completely understand the reason for that).
I am looking for an option where I can deploy my game server and be charged only when it consumes resources (based on how much it consumes), with no charge (or a minimal one) when the server is idle. In other words, it should auto-scale based on the current load.
Thank you in advance for all the answers.
You can try Squids.io (https://squids.io/), a fully managed database service provider that allows you to power off when your server is idle and restart it at any time of your choice. Plus, it charges no management fees.

How can I improve response time when the remote server is physically very far away?

I want to know how to lay out servers physically in this situation.
Let's assume that my service operates in the USA.
My business is quite successful, so I want to expand it to Asia.
But I don't want to localize the service, so I just set up some API servers in Asia that simply call the APIs located at headquarters; my main components are still in the USA.
The problem is that my API servers in Asia need to call the headquarters API in the USA, and the response is quite often slow because of the large physical distance.
How can I overcome this?
My plan is to get a CDN for static content, but I have no idea how to improve the API response time problem that originates from physical distance.
If this is a stupid question, please understand; I'm quite a newbie at architecture.
EDIT:
Also, how should I set up database replication in this situation?
If I run a replica in Asia that replicates from the USA, I think replication performance will be quite poor because of the physical distance.
How do Amazon or other global services construct this?
Replication performance can be quite poor. It is important to understand how much of your data is changing so that you can estimate the bandwidth required and understand whether your replication can keep up.
Amazon and other global services deal with this via a combination of replication, edge-caching (CDN), and other methodologies that bring the data closer to the consumer.
As a first step, you also might want to look at just making your API more coarse-grained. The fewer calls you have to make, the higher the performance (as the problem is likely latency, not bandwidth). See if you can batch things up instead of handling them one-at-a-time.
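For example (a hypothetical endpoint; the URLs and code below are illustrative, not your actual API):

```python
import requests

BASE = "https://api.example-hq.example"  # hypothetical HQ endpoint

# Fine-grained: one trans-Pacific round trip per product.
def get_products_one_by_one(ids):
    return [requests.get(f"{BASE}/products/{i}").json() for i in ids]

# Coarse-grained: a single batched round trip pays the latency
# once instead of len(ids) times.
def get_products_batched(ids):
    params = {"ids": ",".join(map(str, ids))}
    return requests.get(f"{BASE}/products", params=params).json()
```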
You can also look critically at caching. Instead of making your read-only API calls all the time, introduce some cache-control headers to specify the acceptable age of your requests. A lot of data is very static: things like user data, departments, product info, etc. Some of this data can leverage caching layers to become much more performant.
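A minimal sketch of such a header, assuming a Python/Flask origin and an invented /departments route:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/departments")
def departments():
    resp = jsonify([{"id": 1, "name": "Sales"}])
    # Allow any client or intermediate cache to reuse this response
    # for up to an hour instead of crossing the ocean every time.
    resp.headers["Cache-Control"] = "public, max-age=3600"
    return resp
```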
If you want to use AWS and host your main components in a specific region, you may consider hosting them yourself on EC2 instances (as the origin servers) in the region of your choice and using CloudFront (CDN) to serve the content globally. AWS employs its own high-speed backbone network to reduce latency between geographically distant locations by reducing the number of network hops.
From a caching standpoint, as Rob rightly said, CloudFront applies different caching mechanisms for hot and warm objects (edge caching, regional caching). The origin servers can also send minimum and maximum expiration times in HTTP headers to define the caching TTL.
If, however, you don't want to take advantage of the high-speed backbone network, you should design your endpoints and functionality with latency as a constraint, use appropriate TTLs for object caching, and define an appropriate caching strategy, keeping in mind the read/write ratio of your application.
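For instance, the origin can give a CDN a longer shared TTL than browsers get; a sketch with illustrative values (max-age is honored by browsers, s-maxage by shared caches such as CDN edges):

```python
# Illustrative header values only; tune the TTLs to the
# read/write ratio of each endpoint.
CACHE_HEADERS = {
    # Browsers may reuse for 60 s; CDN edges for a day.
    "Cache-Control": "public, max-age=60, s-maxage=86400",
}
```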

How do high-frequency trading systems connect to an exchange?

I'm trying to learn about high-frequency trading systems. What mechanism does an HFT system use to connect to the exchange, and what is the procedure? (Does it have to go through a broker, or is it direct access? If it's direct access, what sort of connection information do I require?)
Thanks in advance for your answers.
Understand that there are two different "connections" in an HFT engine. The first is the connection to a market data source. The second is to a clearing resource. As mentioned in kpavlov's answer, a very expensive COLO (co-location) is needed to get as close to the data source/target as possible. Depending on their nominal latency, these COLO resources cost thousands of dollars per month.
With both connections, your trading engine must be certified by the provider (ICE, CME, etc.) to comply with their requirements. With CME the certification process is automated; with ICE it employs human review. In either case, certification requires that your software demonstrate conformance to standards and freedom from undesirable network side effects.
You must also subscribe to your data source(s) and clearing service; neither is inexpensive, and pricing varies over a pretty wide range. During the subscription process you'll gain access to the service provider's technical data specification(s), a critical part of designing your trading engine. Using old data that you find on the Internet for design purposes is a recipe for problems later. Subscription also gets you access to the providers' test sites. It is on these test sites that you test and debug your engine.
After you think your engine is ready for deployment, you begin connecting to the data/clearing production servers. This connection will get you into a place of shadows: port roulette. Not every port at the provider's network edge has the same latency. Here you'll learn that you can have the shortest latency yet seldom have orders filled first. Traditional load balancing does little to help this, and CME has begun deployment of FPGA-based systems to ensure correct temporal sequencing of inbound orders, but it's still early in its deployment process.
Once you're running, you then get to learn that mistakes can be very expensive. If you place an order prior to a market pre-open event, the order is automatically rejected. Do it too often and the clearing provider will charge you a very stiff penalty. Other things can also get you penalized or even kicked off the service if your systems are determined to be implementing strategies to block others from access, etc.
All the major exchanges' web sites have links to public data and educational resources to help you decide whether HFT is "for you" and how to go about it.
It usually requires approval from the exchange to grant access from outside. They protect their servers with firewalls, so your server/network needs to be authorized for access.
A special certification procedure with a technician (by phone) is usually required before they authorize you.
Most liquidity providers use the FIX protocol or custom APIs. You may consider starting your connector implementation with QuickFIX, but it may become a bottleneck later, when your traffic grows.
The information you need to access a service via FIX is:
Server IP
Server port
FIX protocol credentials:
    SenderCompID
    TargetCompID
    Username
    Password
    Other fields
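To make those fields concrete, here is a minimal QuickFIX initiator sketch in Python. Everything below (hosts, ports, CompIDs, credentials, file paths) is a placeholder for values your provider assigns; treat it as an outline, not a production connector.

```python
import quickfix as fix  # pip install quickfix

# connector.cfg -- placeholder values only:
#   [DEFAULT]
#   ConnectionType=initiator
#   HeartBtInt=30
#   UseDataDictionary=N
#   FileStorePath=store
#   [SESSION]
#   BeginString=FIX.4.4
#   SenderCompID=MY_SENDER        # assigned by the provider
#   TargetCompID=PROVIDER         # assigned by the provider
#   SocketConnectHost=203.0.113.1 # the server IP you were given
#   SocketConnectPort=9876        # the server port you were given
#   StartTime=00:00:00
#   EndTime=23:59:59

class Connector(fix.Application):
    def onCreate(self, sessionID): pass
    def onLogon(self, sessionID): print("logged on:", sessionID)
    def onLogout(self, sessionID): print("logged out:", sessionID)

    def toAdmin(self, message, sessionID):
        # Stamp Username(553)/Password(554) onto outgoing Logon messages.
        msgtype = fix.MsgType()
        message.getHeader().getField(msgtype)
        if msgtype.getValue() == fix.MsgType_Logon:
            message.setField(fix.Username("my-user"))    # placeholder
            message.setField(fix.Password("my-secret"))  # placeholder

    def fromAdmin(self, message, sessionID): pass
    def toApp(self, message, sessionID): pass
    def fromApp(self, message, sessionID):
        print("received:", message)  # execution reports land here

settings = fix.SessionSettings("connector.cfg")
initiator = fix.SocketInitiator(Connector(),
                                fix.FileStoreFactory(settings),
                                settings)
initiator.start()
```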

Horizontal scalability for distributed apps: how can I achieve it?

I would like to disregard web applications here, because to scale them horizontally, i.e. to use multiple server instances together, it is "sufficient" to duplicate the server software across the machines and use some sort of router that forwards requests to the least busy server machine.
But what if my server application allows users to engage with each other in real time?
If the response to the request of a certain client X depends on the context of a client Y whose connection is managed by another machine, then "inter-machine" communication is needed.
I'd like to know the kinds of "design solutions" people have used in such cases.
For example, the people at Facebook must have encountered this situation when building the chat feature of their social app.
Thank you in advance for any advice.
One solution to achieve that is to use distributed caches like memcached (Facebook also uses that approach).
Then all the information that is needed on all nodes is stored in that cache (and in a database if it needs to be permanent), so all nodes can access that information (with very small latency between the nodes).
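A minimal sketch of that shared-cache approach, using pymemcache against made-up node addresses and keys:

```python
from pymemcache.client.hash import HashClient  # pip install pymemcache

# Consistent-hashing client that spreads keys over the cache nodes.
# Every app server constructs the same client, so they all see the
# same shared state. Node addresses are placeholders.
cache = HashClient([("cache1.internal", 11211),
                    ("cache2.internal", 11211)])

def set_user_presence(user_id, status):
    # Visible from any app server within its 60-second lifetime.
    cache.set(f"presence:{user_id}", status, expire=60)

def get_user_presence(user_id):
    return cache.get(f"presence:{user_id}")  # bytes, or None on a miss
```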
Regards
You should consider solutions that provide transparent horizontal database scalability and guarantee ACID semantics. There are many solutions that offer this at various levels. The people at Facebook whom you reference have solved the problem by accepting eventual consistency, but your question leads me to believe that you can't accept eventual consistency.

How to sync application state across multiple iPhones on the same network?

I am developing an iPhone application that basically lets the user click through a series of actions. These series are predefined and synced with a common configuration server.
The app might be running on multiple devices at the same time. All devices are assumed to have the same series of actions defined on them. All devices are considered equal; there is no server with multiple clients or anything like that.
(Only) one of these devices is used by a person at any given time; it is, however, possible that the person switches to a different device at any moment. All "passive" devices need to be synchronized with the active one, so that they display the same action.
The whole thing should happen as automatically as possible: no selection of devices, no configuration; all devices in the same network take part in the same series of actions.
One additional requirement is that a device can join during a presentation (a series of actions) and needs to jump to the currently active action.
Right now, I see two options for implementing the networking/communication part:
Bonjour. I have implemented a working prototype that can automatically connect with one (1) other device in the network and communicate with it. I am not sure at this point how much additional work the "multiple devices" requirement is. Would I have to open a connection for every device and manually send the sync events to all of them? Is there a better way, or does Bonjour provide anything to help me with that? What does Bonjour provide, given that I want to communicate with every device in the network anyway?
Multicast with AsyncUdpSocket. Simply define a port and send multicast sync events out to that port. I guess the main issue compared to using Bonjour with TCP is that delivery is not reliable and packets could be lost; however, this is a private, protected WLAN network with low traffic, if that would really be an issue. Are there other disadvantages that I'm not seeing? Because that sounds like a relatively easy option at this point...
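A rough sketch of that second option, in Python for brevity rather than the actual AsyncUdpSocket code (group address and port chosen arbitrarily):

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007  # arbitrary multicast group/port

def send_sync(action_index: int):
    # The active device broadcasts the currently active action.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(str(action_index).encode(), (GROUP, PORT))

def listen_for_sync():
    # Each passive device joins the group and follows along.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(1024)
        print("jump to action", int(data))
```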
Which one would you suggest? Or is there another, better alternative that I'm not thinking of?
You should check out GameKit (built into iOS); it has a lot of the machinery you need in a convenient package. You can easily discover peers on the network and easily send data back and forth between clients (broadcast or peer-to-peer).
In my experience, Bonjour is perfect for what you want. There's an excellent tutorial with associated source code, Chatty, that can easily be modified to suit your purposes.
I cobbled together a distributed message bus for the iPhone (no centralized server) that would work great for this. It should be noted that the UI guy made a mess of the code, so thar' be dragons there: https://code.google.com/p/iphonebusmiddleware/
The basic idea is to use Bonjour to form a network with leader election. The leader becomes the hub through which all the slaves subscribe to topics of interest. Any message sent to a given topic is then delivered to every node subscribed to that topic. A master disconnection simply means restarting the leader election process.
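A toy sketch of that topic routing, with the Bonjour discovery and the election itself left out (illustrative only, not the linked project's API):

```python
from collections import defaultdict

class Hub:
    """The elected leader: routes each message to a topic's subscribers."""
    def __init__(self):
        self.topics = defaultdict(list)   # topic -> delivery callbacks

    def subscribe(self, topic, deliver):
        self.topics[topic].append(deliver)

    def publish(self, topic, message):
        for deliver in self.topics[topic]:
            deliver(message)

hub = Hub()
hub.subscribe("presentation/actions", lambda m: print("device A got", m))
hub.subscribe("presentation/actions", lambda m: print("device B got", m))
hub.publish("presentation/actions", {"active_action": 7})
```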