Getting updates on every blockchain append - callback

I am working on an art project which displays bitcoin transactions graphically.
Thus I need some way of getting an update whenever a transaction is recorded in the blockchain.
Is there any way to accomplish this without copying the whole blockchain, since I do not need any precise information about the transactions?

If you really want to get all the transactions that are happening, then you have to parse each new block as it comes in. Look into the RPC calls to fetch each individual block.
If you just want to watch certain addresses for transactions, you can look into the walletnotify option of the bitcoind node.
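As a minimal sketch of the RPC approach: the loop below polls a local bitcoind for a new chain tip and prints the txids of each new block. getbestblockhash and getblock are standard Bitcoin Core RPC calls; the credentials, port, and the rpc helper are placeholders for this example, and a pruned node is enough, since only recent blocks are read.

```python
# Poll a local bitcoind for new blocks and print their txids.
# Assumes bitcoind runs with server=1 and RPC credentials in bitcoin.conf;
# the URL and credentials below are placeholders.
import time
import requests

RPC_URL = "http://127.0.0.1:8332/"      # default mainnet RPC port
RPC_AUTH = ("rpcuser", "rpcpassword")   # placeholder credentials

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": "art", "method": method, "params": list(params)}
    resp = requests.post(RPC_URL, json=payload, auth=RPC_AUTH)
    resp.raise_for_status()
    return resp.json()["result"]

last_tip = None
while True:
    tip = rpc("getbestblockhash")
    if tip != last_tip:
        block = rpc("getblock", tip, 1)  # verbosity 1: header plus txid list
        for txid in block["tx"]:
            print("new transaction:", txid)  # feed this to the visualization
        last_tip = tip
    time.sleep(10)  # blocks arrive roughly every ten minutes
```

For simplicity the sketch only inspects the new tip; a more careful version would walk previousblockhash links to catch blocks that arrive between polls. If you'd rather not poll at all, bitcoind's -blocknotify=&lt;command&gt; option runs a command of your choice whenever a new block is accepted, and you can fetch the block's transactions from there.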

Prevent Modification of PayPal Orders from JavaScript

I'm starting to integrate PayPal checkouts with a server workflow.
My basic need is to create an order on the server and ensure that the client can not modify it in any way.
Because of this requirement, I have already ruled out using the "simple" JavaScript-only solution, and I'm instead going for a server integration, calling my own URL endpoints for creating and capturing orders.
However, I have found that the client can just abuse the actions.order.patch() method to modify almost every aspect of the order, including the amount and the custom_id that I'm attaching to the purchase_item.
Basically, it looks like I have absolutely no guarantee on the order contents, even if I created it on the server. Is this correct?
In that case, it means I have to check each order's contents against my application's orders database. That is possible, but I was hoping not to have to do that.
Any clues? How do you deal with this issue?
Thanks!
If you are particularly concerned about this scenario of patching down the total or other details before capture, the only way to ensure the order has not changed is to do a server-side ‘get details’ call before the capture and validate at least the total amount, as well as any other field you're concerned about.
Otherwise, the usual general safety solution in ecommerce (for this as well as other potential issues that might crop up) is to simply capture and validate the total in the capture response. If the capture has a total you don't expect, issue an immediate refund or flag the occurrence for review before fulfilling anything.
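To illustrate the first option, here is a hedged sketch against PayPal's Orders v2 REST API (GET /v2/checkout/orders/{id}, then the capture call); the access-token handling and the expected values passed in are assumptions of this example, not anything PayPal prescribes.

```python
# Server-side verification before capture: re-fetch the order and compare the
# amount against what we originally created, so client-side patches are caught.
import requests

PAYPAL_API = "https://api-m.sandbox.paypal.com"  # sandbox host; swap for live

def verify_and_capture(order_id, expected_total, expected_currency, access_token):
    # expected_total is the string amount we created the order with, e.g. "10.00"
    headers = {"Authorization": f"Bearer {access_token}",
               "Content-Type": "application/json"}

    # The server-side 'get details' call: the client cannot tamper with this view.
    order = requests.get(f"{PAYPAL_API}/v2/checkout/orders/{order_id}",
                         headers=headers).json()
    amount = order["purchase_units"][0]["amount"]
    if (amount["value"], amount["currency_code"]) != (expected_total, expected_currency):
        raise ValueError(f"order {order_id} was modified client-side: {amount}")

    # The amount still matches the order we created, so capture it.
    return requests.post(f"{PAYPAL_API}/v2/checkout/orders/{order_id}/capture",
                         headers=headers).json()
```

The same comparison works after the fact on the capture response, if you prefer the refund-or-flag approach described below.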

Stuck with understanding how to build a scalable system

I need some guidance on how to properly build out a system that will be able to scale. I will give you some information about what I am trying to do and then ask my specific question.
I have a site where I want visitors to send some data to be processed. They input the data into a textarea or upload it as a file. Simple. The data is somewhat preprocessed on the client side before a POST request is made to a REST endpoint.
What I am stuck on is this: what is a good way to take this posted data, store it, and associate an id with it that references the user, given that I cannot process the data fast enough to return it to the user in a reasonable amount of time?
This question is a bit vague and open to opinion, I admit. I just need a push in the right direction to keep moving. What I have been considering is throwing the data into a message queue, having some workers process the data elsewhere, and, once the data is processed, alerting the user where to find it with some sort of link to an S3 bucket or just a URL to a file. The other idea was to have the client loop over the items and run each one against another endpoint that already processes individual records. The problem with that idea is as follows:
Processing the data may take somewhere from 30 minutes to 2 hours depending upon the amount they want processed. It's not ideal for them to just sit there and wait for it to finish, so I have mostly ruled this out.
Any guidance would be very much appreciated as I don't have any coworkers to bounce things off of, nor do I know many people with the domain knowledge that I could freely ask. If this isn't the right place to ask this, could you point me in the right direction as to where it should be asked?
Chris
If I've understood you correctly, your pipeline is:
1. Accept an item from the user
2. Possibly preprocess/validate it (?)
3. Put it into some queue
4. Process the data
5. Return the result.
You may use one or several queues at stage (3). Each entity from a user gets added to one of the queues. If it's big enough, it can be stored in S3 or similar storage, with only info about it put into the queue: a link, the add date, and the user id (or email or the like). Processors then pull items from the queue and give feedback to users.
If you have no strict requirements on ordering, things get much simpler: you don't need any synchronization between components. Treat all the components (upload acceptors, queues, storage, and processors) as independent pools of processes. Monitor each pool separately. If there is a bottleneck somewhere, add machines to that pool.
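A minimal sketch of that layout, using AWS SQS as the queue and S3 for payloads via boto3; the bucket and queue names are placeholders, and process/notify_user are stand-ins for your own logic. RabbitMQ or any other broker would fit the same shape.

```python
# Accept -> store -> enqueue on the web side; dequeue -> process -> notify on
# the worker side. Each pool (acceptors, workers) can be scaled independently.
import json
import uuid
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
BUCKET = "my-upload-bucket"                                          # placeholder
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

def process(data):
    return data      # stand-in for the 30-minute-to-2-hour job

def notify_user(user_id, result_key):
    pass             # stand-in: e-mail the user a (signed) link to the result

def accept_upload(user_id, payload):
    """Called by the POST endpoint: store the data, enqueue a job, return an id."""
    job_id = str(uuid.uuid4())
    key = f"incoming/{job_id}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(
        {"job_id": job_id, "user_id": user_id, "key": key}))
    return job_id    # the client can poll a status endpoint with this id

def worker_loop():
    """Runs on any number of worker machines, pulling jobs as they come."""
    while True:
        msgs = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=20).get("Messages", [])
        for msg in msgs:
            job = json.loads(msg["Body"])
            data = s3.get_object(Bucket=BUCKET, Key=job["key"])["Body"].read()
            result_key = f"results/{job['job_id']}"
            s3.put_object(Bucket=BUCKET, Key=result_key, Body=process(data))
            notify_user(job["user_id"], result_key)
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```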

Converting resources in a RESTful manner

I'm currently stuck with designing my endpoints in a way that conforms to the REST principles but also ensures the integrity of the underlying data.
I have two resources, ShadowUser and RealUser, where the first one only has a first name, a last name, and an e-mail address.
The second one has many more properties, such as an ID under which the real user can be addressed elsewhere in the system.
My use-case is to convert specific ShadowUsers into real users.
In my head the flow seems pretty simple:
get the shadow users: GET /api/ShadowUsers?someProperty=someValue
create new real users with the data fetched: POST /api/RealUsers
delete the shadow users: DELETE /api/ShadowUsers?someProperty=someValue
But what happens when there is a problem between the creation of new users and the deletion of the shadow ones? The data would now be inconsistent.
The case is even simpler when there is only a single user, but the issue stays the same: something could go wrong between steps 2 and 3, leaving the user existing as both shadow and real.
So the question is: how can this be done in a "transactional" manner, where either everything succeeds and is persisted, or something goes wrong and nothing is changed in the underlying data-store?
Are there any "best practices" or "design-patterns" which can be used?
Perhaps the RESTful API could take on the role of GETting and POSTing those real users in batch (I asked a question some weeks ago about a related issue: Updating RESTful resources against aggregate roots only).
On the API side, POSTed users wouldn't be handled directly; instead they would be enqueued in a reliable messaging queue (for example RabbitMQ). A background process subscribed to the queue would process both the creation of the real users and the removal of the shadow users.
The point of using a reliable messaging system is that you can implement retry policies. If the operation is interrupted in the middle of its work, you can retry it and detect which changes are still pending to complete the task.
In summary, using this approach you can implement that operation in a transactional way.
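As a sketch of that background process, with in-memory dicts standing in for the real data store: the key property is that both steps are idempotent, so a message that fails mid-way can simply be redelivered and retried until both the creation and the removal have happened.

```python
# Sketch of the queued conversion job. The two dicts stand in for the real
# data store; every step is idempotent, so the message can be retried safely.
shadow_users = {}   # shadow_id -> {"first_name": ..., "last_name": ..., "email": ...}
real_users = {}     # real_id -> user dict, including "origin_shadow_id"

def convert_shadow_user(shadow_id):
    shadow = shadow_users.get(shadow_id)
    if shadow is None:
        return  # already fully converted on a previous attempt: nothing to do

    # Step 1: create the real user, keyed back to the shadow id, so a retry
    # that crashed after creation finds the existing record instead of
    # duplicating it.
    real = next((u for u in real_users.values()
                 if u.get("origin_shadow_id") == shadow_id), None)
    if real is None:
        real_id = f"real-{shadow_id}"      # placeholder id scheme
        real_users[real_id] = {**shadow, "origin_shadow_id": shadow_id}

    # Step 2: delete the shadow user. If the process dies before this line,
    # the retried message re-runs step 1 as a no-op and then completes here.
    shadow_users.pop(shadow_id, None)
```

On the HTTP side, the conversion endpoint could simply return 202 Accepted once the message is enqueued, with a link the client can poll for the outcome.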

Rally REST API transactions

Is there any way to achieve an atomic transaction using the Rally WSAPI? I know a transaction implies state among the consecutive requests, but REST is obviously a stateless protocol, so that might be an issue.
I need to be able to pull a portfolioitem/feature and then immediately write it back if I have the most recent version of it. I have a custom field on portfolioitem/feature that WILL be edited by multiple people simultaneously, and I need to make sure that each update happens in the correct order.
Since I don't have access to Rally's server side, I must do all this client-side, and I can't figure out how to do it. I will be doing this with the Rally SDK as well.
I don't think the WSAPI supports atomic transactions. A scenario where updates occur as one atomic transaction, so that, for example, if one of the updates fails they are all rolled back, is not supported. In the example you mentioned, each update will be a distinct transaction, and in the case of a mid-air collision, where the same artifact is updated by different users, one of the users will receive a concurrency error.
I am in the same boat as the OP, the only difference being that hours may pass between the read and subsequent write. Interestingly, I only seem to get concurrency errors when I attempt to update a record while there's another transaction of mine in flight. I don't see any exception raised when I am updating a record using a stale version thereof, i.e. one that someone else has changed from under me.
I will be attempting to fix this soon as it's becoming an issue. The chosen approach is to forcibly chain a GET before every POST and throw an exception if the VersionID of the record I GET doesn't match the one I have stored in memory. In case of a mismatch, the app will refresh the local record (and thus the view) and prompt the user to resubmit their changes. Yes, this is inconvenient for the user, but in my app most changes are a single click away, so it's reasonable.
I too would like to know if there is a better approach to this problem. One would assume that with every record having a VersionID, this would be handled server-side, with proper support from WsapiProxy on the client end. Maybe I'm missing something obvious, like explicitly fetching the VersionID?
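For what it's worth, here is a schematic of that GET-before-POST guard; fetch_feature and update_feature are hypothetical stand-ins for the actual SDK/WSAPI calls, and VersionId is the artifact attribute being compared.

```python
# Schematic of the GET-before-POST version check described above.
class StaleRecordError(Exception):
    """Raised when the server copy changed since we last read it."""

def fetch_feature(ref):
    raise NotImplementedError  # placeholder: GET the artifact from the WSAPI

def update_feature(ref, changes):
    raise NotImplementedError  # placeholder: POST the edit to the WSAPI

def guarded_update(feature_ref, local_version_id, changes):
    current = fetch_feature(feature_ref)       # GET immediately before the POST
    if current["VersionId"] != local_version_id:
        # Someone else changed the record since we read it: surface the fresh
        # copy so the UI can refresh and the user can resubmit.
        raise StaleRecordError(current)
    update_feature(feature_ref, changes)
```

Note that this narrows the race window rather than closing it: another write can still land between the GET and the POST, so the refresh-and-resubmit path remains necessary.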

New/Read Flags in CQRS

I am currently drafting a concept for a (mostly) HTML-based collaboration suite which I plan to implement using CQRS. This software will contain messages that can be sent to the user (which can be either read or unread, obviously) and other elements which shall be marked "new" if they were created after the user's last login.
Hardly anything new, but I am not quite sure how this would be correctly implemented using CQRS. As I understand it, change of any kind should, without exception, only be possible via commands. But creating a command for every single (new) element that is being accessed seems a bit much, not to mention the overhead.
I don't know if I need it, but what would be the best way to implement a last-accessed timestamp on elements? Basically the same problem as above, with the difference that the change happens EVERY time the element is accessed, not only the first time for each user.
CQRS seems to be an awesome concept but it really needs more learning material. Can't wait till a book is released :)
Regards
[Edit] No one? I wouldn't have thought that this is such a complicated issue.
I assume you're using event sourcing, in which case, once you allow your query-service/event-handlers to raise appropriate events, this becomes fairly easy to solve.
For your messages/elements: when handling the specific creation events of your elements, either add to existing event-handlers or create additional ones, and store the element into a messages read-model with a status of "new" and appropriate information about the element.
As part of your user login, I don't see why you can't raise a user-logged-in event (from the security/query service, depending on how you're implementing authentication) to say the user has logged in. An event-handler could capture this and write the last-login timestamp to a specific user-last-login read-model.
In addition, the user-logged-in event-handler would need to update all the "new" messages (for that user) to an "unread" status. Seeing as we're changing the status of the messages as the user logs in, do you still need to store the last-login timestamp?
For your last-accessed timestamp, perhaps you could just work this into your query service as queries for your different elements complete: raise a query-completed event with element id/type information.
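To make that concrete, here is a toy sketch of those handlers, assuming event-sourced events routed to handlers by some dispatcher; the event shapes and the dict-backed read-models are stand-ins for whatever your framework provides.

```python
# Toy sketch of the event-handlers described above.
from dataclasses import dataclass
from datetime import datetime, timezone

messages_read_model = {}   # (user_id, message_id) -> {"status": ..., ...}
user_last_login = {}       # user_id -> last-login timestamp

@dataclass
class MessageCreated:
    recipient_id: str
    message_id: str
    subject: str

@dataclass
class UserLoggedIn:
    user_id: str

@dataclass
class QueryCompleted:      # raised by the query service after serving an element
    user_id: str
    element_id: str

def on_message_created(event: MessageCreated):
    # Creation handler: record the element in the read-model as "new".
    messages_read_model[(event.recipient_id, event.message_id)] = {
        "status": "new", "subject": event.subject}

def on_user_logged_in(event: UserLoggedIn):
    # Login handler: stamp the last login and demote this user's "new" items.
    user_last_login[event.user_id] = datetime.now(timezone.utc)
    for (user_id, _), entry in messages_read_model.items():
        if user_id == event.user_id and entry["status"] == "new":
            entry["status"] = "unread"

def on_query_completed(event: QueryCompleted):
    # Last-accessed handler: updated on EVERY read, without a client command.
    entry = messages_read_model.get((event.user_id, event.element_id))
    if entry is not None:
        entry["last_accessed"] = datetime.now(timezone.utc)
```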