Can an NFT have metadata that updates based on smart contract variable values?

Can someone please explain the specifics of how NFT metadata is stored (with both on-chain and off-chain examples?) and how it's fetched by apps like OpenSea or Decentraland?
Additionally, would it be especially challenging to use dynamic metadata for an NFT, i.e. metadata that updates regularly based on changing smart contract variables, which themselves change due to user interactions with the contract? E.g. imagine an updatable "Countdown" NFT where the image shows the integer "days until X", which updates each day as time passes but can also be updated by changing X in the NFT smart contract... made this up on the spot, but actually an interesting idea? :)
Is this doable? Are there storage challenges? Do apps call metadata repeatedly or would they call it once and never reflect updates?

From my understanding, you generally set a baseURI in an ERC-721 contract. Then, if you have an auto-incrementing token ID, the first token's URI will be the baseURI plus "1".
In other words:
baseURI: "https://myserver.com/api/metadata",
tokenId: 1
the token metadata is hosted at https://myserver.com/api/metadata/1
That keeps the storage off-chain; if every token stored its own unique URL on-chain, it would add a bit of unnecessary junk to the chain.
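To make the dynamic part concrete: since the token URI just points at a server you control, that server can compute the metadata on every request. Below is a minimal sketch, assuming an Express server behind the baseURI; the countdown date, image route, and port are made-up placeholders (in a real setup you might read the target date from the contract over JSON-RPC so the metadata tracks on-chain state).

```typescript
// Hypothetical sketch of a dynamic metadata endpoint behind the contract's baseURI.
import express from "express";

const app = express();

// Hypothetical countdown target; a real implementation might read this value
// from the smart contract so the metadata reflects on-chain state.
const targetDate = new Date("2030-01-01T00:00:00Z");

app.get("/api/metadata/:tokenId", (req, res) => {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysUntilX = Math.max(0, Math.ceil((targetDate.getTime() - Date.now()) / msPerDay));

  // Standard ERC-721 metadata JSON; the image URL is also dynamic, so the picture
  // a marketplace renders changes as the countdown ticks down.
  res.json({
    name: `Countdown #${req.params.tokenId}`,
    description: `${daysUntilX} days until X`,
    image: `https://myserver.com/api/images/countdown-${daysUntilX}.png`,
  });
});

app.listen(3000);
```

One caveat on the last part of the question: marketplaces such as OpenSea typically cache metadata, so an update may not be visible until the item's metadata is refreshed.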

Related

What is the best way to import data into holochain from another source, like mongo?

MongoDB => Holochain Rust DHT
How to import, if possible
If I am using a different app backend, like mongo, and I get my holochain set up correctly and configured, is there a way to get the data from mongo to holochain? How would I do that?
Here is the question in context
Definitely technologically possible; you could write a nodejs script, fire up a Holochain container with the holochain-nodejs library, and import all the data as one agent. Then when users join the HC-based network, they vouch for their identity in some way and 'claim' all the data as theirs.
Here's a sketch of how it could look:
You (let's call you 'agent 0') import all the data.
For each user, you create an 'anchor' with the user's ID (I'll explain anchors in a sec) and link each piece of data to the anchor. You also record that user's password hash as a private entry on your own source chain.
A user joins the network and is required to prove continuity of identity. They do this by using node-to-node messaging to send their user ID and their password hash to you privately. You authorise them to claim their identity by publishing an entry that says "agent public key x = user ID". (You would probably want to link from your authorisation entry to their user ID anchor and their public key too, for convenience's sake.)
The user collects all their data by asking for all the links to their user ID anchor.
The user then publishes each piece of their data to their own source chain as a way of 'claiming' ownership of it.
Now, every redundant copy of the data in the DHT has two authors in its metadata fields -- you and the user that actually owns the data. Peers validate that piece of data by saying, "Is agent 0 already the author of this piece of data? If so, has agent 0 published an authorisation entry that says that the new author of this data is allowed to claim/republish it?"
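Purely as an illustration (this is plain TypeScript, not the actual Holochain HDK API, and every name in it is hypothetical), that validation rule amounts to something like:

```typescript
// Illustrative only: the claim/republish check expressed as a plain function.
interface DataEntry {
  authors: string[]; // agent public keys found in the DHT metadata
  content: unknown;
}

interface AuthorisationEntry {
  issuedBy: string;  // should be agent 0's public key
  userId: string;    // legacy user ID being claimed
  newAuthor: string; // public key of the claiming agent
}

const AGENT_0 = "agent-0-public-key"; // hypothetical

function mayRepublish(
  entry: DataEntry,
  claimant: string,
  authorisations: AuthorisationEntry[]
): boolean {
  // "Is agent 0 already the author of this piece of data?"
  const importedByAgent0 = entry.authors.includes(AGENT_0);
  // "Has agent 0 published an authorisation entry naming this new author?"
  const authorised = authorisations.some(
    (a) => a.issuedBy === AGENT_0 && a.newAuthor === claimant
  );
  return importedByAgent0 && authorised;
}
```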
Problems with this approach (not insurmountable):
Agent 0 has to be online all the time, because they never know when a new user is going to sign up and try to claim their data.
Agent 0 has to import a ton of data. (I don't think it'd be vastly time-prohibitive, though.)
For relational data, there's the chicken-and-egg problem of how to create links if the data doesn't exist yet. I'm thinking not of linking data to data -- that can be done on initial import -- but of linking data to humans, who now have a public key which might not exist on the DHT yet because they haven't joined the network. That would always have to happen per-user once they join, and it could create some cyclic dependency problems.
Anchors
Re: anchors, an anchor is just a pattern that consists of a base and a link -- the base is a simple string, so it's easy for anyone who knows the string to find it by hash. It acts as, well, an anchor to hang links off of. That's why I'm recommending using it to connect legacy user IDs to pieces of content. You can get sample source code for implementing the anchor pattern at https://github.com/holochain/mixins/tree/master/anchors (note that this is for the legacy version of Holochain, so it's written in JavaScript).
(answer provided by pauldaoust)

Avoid duplicate POSTs with REST

I have been using POST in a REST API to create objects. Every once in a while, the server will create the object, but the client will be disconnected before it receives the 201 Created response. The client only sees a failed POST request, and tries again later, and the server happily creates a duplicate object...
Others must have had this problem, right? But I google around, and everyone just seems to ignore it.
I have 2 solutions:
A) Use PUT instead, and create the (GU)ID on the client.
B) Add a GUID to all objects created on the client, and have the server enforce their UNIQUE-ness.
A doesn't match existing frameworks very well, and B feels like a hack. How do other people solve this in the real world?
Edit:
With Backbone.js, you can set a GUID as the id when you create an object on the client. When it is saved, Backbone will do a PUT request. Make your REST backend handle PUT to non-existent IDs, and you're set.
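For anyone not using Backbone, here is a framework-agnostic sketch of option A with fetch; the /api/objects route is a made-up example:

```typescript
// Sketch of option A: the client mints the ID and PUTs to it, so retrying the
// same request cannot create a second object.
async function createObject(payload: object): Promise<string> {
  // crypto.randomUUID() is available in modern browsers and recent Node versions.
  const id = crypto.randomUUID(); // client-generated (GU)ID

  await fetch(`/api/objects/${id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });

  return id;
}
```

Because the client owns the ID and PUT is idempotent, retrying after a dropped response simply overwrites the same resource.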
Another solution that's been proposed for this is POST Once Exactly (POE), in which the server generates single-use POST URIs that, when used more than once, will cause the server to return a 405 response.
The downsides are that 1) the POE draft was allowed to expire without any further progress on standardization, and thus 2) implementing it requires changes to clients to make use of the new POE headers, and extra work by servers to implement the POE semantics.
By googling you can find a few APIs that are using it though.
Another idea I had for solving this problem is that of a conditional POST, which I described and asked for feedback on here.
There seems to be no consensus on the best way to prevent duplicate resource creation in cases where the client cannot generate the unique URI itself (so PUT can't be used) and POST is therefore needed.
I always use B -- detection of duplicates, whatever the cause, belongs on the server side.
Detection of duplicates is a kludge, and can get very complicated. Genuine distinct but similar requests can arrive at the same time, perhaps because a network connection is restored. And repeat requests can arrive hours or days apart if a network connection drops out.
All of the discussion of identifiers in the other answers is with the goal of giving an error in response to duplicate requests, but this will normally just incite a client to get or generate a new ID and try again.
A simple and robust pattern to solve this problem is as follows: server applications should store all responses to unsafe requests, then, if they see a duplicate request, they can repeat the previous response and do nothing else. Do this for all unsafe requests and you will solve a bunch of thorny problems. Repeat DELETE requests will get the original confirmation, not a 404 error. Repeat POSTs do not create duplicates. Repeated updates do not overwrite subsequent changes, etc.
"Duplicate" is determined by an application-level id (that serves just to identify the action, not the underlying resource). This can be either a client-generated GUID or a server-generated sequence number. In this second case, a request-response should be dedicated just to exchanging the id. I like this solution because the dedicated step makes clients think they're getting something precious that they need to look after. If they can generate their own identifiers, they're more likely to put this line inside the loop and every bloody request will have a new id.
Using this scheme, all POSTs are empty, and POST is used only for retrieving an action identifier. All PUTs and DELETEs are fully idempotent: successive requests get the same (stored and replayed) response and cause nothing further to happen. The nicest thing about this pattern is its Kung-Fu (Panda) quality. It takes a weakness: the propensity for clients to repeat a request any time they get an unexpected response, and turns it into a force :-)
I have a little google doc here if anyone cares.
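To make the replay pattern concrete, here is a minimal sketch in Express; the route, the in-memory store, and the "Idempotency-Key" header name are assumptions for illustration (any client-supplied action ID would do):

```typescript
// Store the response to each unsafe request, keyed by the action ID, and replay
// it verbatim when a duplicate arrives.
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

// In a real system this would be durable storage shared by all server nodes.
const storedResponses = new Map<string, { status: number; body: unknown }>();

app.post("/orders", (req, res) => {
  const actionId = req.header("Idempotency-Key");
  if (!actionId) {
    res.status(400).json({ error: "Idempotency-Key header required" });
    return;
  }

  const previous = storedResponses.get(actionId);
  if (previous) {
    // Duplicate request: repeat the stored response and do nothing else.
    res.status(previous.status).json(previous.body);
    return;
  }

  const order = { id: randomUUID(), ...req.body }; // perform the unsafe action once
  const response = { status: 201, body: order };
  storedResponses.set(actionId, response);
  res.status(response.status).json(response.body);
});

app.listen(3000);
```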
You could try a two step approach. You request an object to be created, which returns a token. Then in a second request, ask for a status using the token. Until the status is requested using the token, you leave it in a "staged" state.
If the client disconnects after the first request, they won't have the token and the object stays "staged" indefinitely or until you remove it with another process.
If the first request succeeds, you have a valid token and you can grab the created object as many times as you want without it recreating anything.
There's no reason why the token can't be the ID of the object in the data store. You can create the object during the first request. The second request really just updates the "staged" field.
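A rough sketch of that two-step flow (routes and storage are hypothetical):

```typescript
// Step 1 stages the object and returns a token; step 2 flips it to "active".
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

const objects = new Map<string, { status: "staged" | "active"; data: unknown }>();

// Step 1: create in the "staged" state; the returned token is also the object's ID.
app.post("/objects", (req, res) => {
  const token = randomUUID();
  objects.set(token, { status: "staged", data: req.body });
  res.status(201).json({ token });
});

// Step 2: asking for status with the token activates the object; repeats are harmless.
app.post("/objects/:token/confirm", (req, res) => {
  const obj = objects.get(req.params.token);
  if (!obj) {
    res.status(404).end();
    return;
  }
  obj.status = "active";
  res.json({ id: req.params.token, status: obj.status });
});

app.listen(3000);
```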
Server-issued Identifiers
If you are dealing with the case where it is the server that issues the identifiers, create the object in a temporary, staged state. (This is an inherently non-idempotent operation, so it should be done with POST.) The client then has to do a further operation on it to transfer it from the staged state into the active/preserved state (which might be a PUT of a property of the resource, or a suitable POST to the resource).
Each client ought to be able to GET a list of their resources in the staged state somehow (maybe mixed with other resources) and ought to be able to DELETE resources they've created if they're still just staged. You can also periodically delete staged resources that have been inactive for some time.
You do not need to reveal one client's staged resources to any other client; they need exist globally only after the confirmatory step.
Client-issued Identifiers
The alternative is for the client to issue the identifiers. This is mainly useful where you are modeling something like a filestore, as the names of files are typically significant to user code. In this case, you can use PUT to do the creation of the resource as you can do it all idempotently.
The down-side of this is that clients are able to create IDs, and so you have no control at all over what IDs they use.
There is another variation of this problem. Having a client generate a unique ID means we are asking the customer to solve this problem for us. Consider an environment with publicly exposed APIs and hundreds of clients integrating with them. Practically, we have no control over the client code or the correctness of its implementation of uniqueness. Hence, it would probably be better for the server itself to detect whether a request is a duplicate. One simple approach here would be to calculate and store a checksum of every request based on attributes from the user input, define a time threshold (x minutes), and compare every new request from the same client against the ones received in the past x minutes. If the checksum matches, it could be a duplicate request, so add a challenge mechanism for the client to resolve it.
If a client makes two different requests with the same parameters within x minutes, it might be worth ensuring that this is intentional, even if each request comes with a unique request ID.
This approach may not be suitable for every use case; however, I think it will be useful for cases where the business impact of executing the second call is high and can potentially cost a customer. Consider a payment processing engine where an intermediate layer ends up retrying a failed request, or a customer double-clicks and the client layer ends up submitting two requests.
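A minimal sketch of that checksum check (names and the window length are illustrative):

```typescript
// Hash the client ID plus the request attributes and flag repeats seen within the window.
import { createHash } from "node:crypto";

const WINDOW_MS = 5 * 60 * 1000; // "x mins"
const recentChecksums = new Map<string, number>(); // checksum -> last-seen timestamp

function isLikelyDuplicate(clientId: string, payload: object): boolean {
  const checksum = createHash("sha256")
    .update(`${clientId}:${JSON.stringify(payload)}`)
    .digest("hex");

  const seenAt = recentChecksums.get(checksum);
  const now = Date.now();
  recentChecksums.set(checksum, now);

  // If the same checksum was seen recently, challenge the client before processing.
  return seenAt !== undefined && now - seenAt < WINDOW_MS;
}
```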
Design
Automatic (without the need to maintain a manual black list)
Memory optimized
Disk optimized
Algorithm [solution 1]
REST arrives with UUID
Web server checks if UUID is in Memory cache black list table (if yes, answer 409)
Server writes the request to DB (if it was not filtered by the memory cache / ETS)
DB checks whether the UUID already exists before writing
If yes, answer 409 to the server, and add the UUID to the blacklist in the Memory Cache and on Disk
If not repeated, write to DB and answer 200
Algorithm [solution 2]
REST arrives with UUID
Web server checks if the UUID is in the Memory Cache blacklist table (if yes, return HTTP 409)
Save the UUID in the Memory Cache table (expires after 30 days)
Server writes the request to DB (return HTTP 200)
In solution 2, the Memory Cache blacklist is maintained ONLY in memory, so the DB is never checked for duplicates. The definition of 'duplicate' here is "any request that arrives within a given period of time". We also replicate the Memory Cache table on disk, so we can fill it before starting up the server.
In solution 1, there will never be a duplicate, because we always check the disk ONLY once before writing, and if it's a duplicate, the next round trips will be handled by the Memory Cache. This solution is better for BigQuery, because requests there are not idempotent, but it's also less optimized.
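A rough sketch of solution 2 (the route, store, and persistence step are stand-ins):

```typescript
// In-memory UUID blacklist with a 30-day expiry, checked before the DB write.
import express from "express";

const app = express();
app.use(express.json());

const TTL_MS = 30 * 24 * 60 * 60 * 1000; // expire after 30 days
const seenUuids = new Map<string, number>(); // uuid -> first-seen timestamp

app.post("/requests", (req, res) => {
  const uuid: string = req.body.uuid;
  const seenAt = seenUuids.get(uuid);

  if (seenAt !== undefined && Date.now() - seenAt < TTL_MS) {
    res.status(409).end(); // duplicate within the window
    return;
  }

  seenUuids.set(uuid, Date.now());
  // ... write the request to the DB here (and persist the cache to disk) ...
  res.status(200).end();
});

app.listen(3000);
```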

RESTful API creates a globally unique resource

In our system we have accounts which contain items. An item is always associated with a single account but also has a globally unique id in the system. Sometimes it is desirable to work with an item when only its id is known.
Is it incorrect to allow access to a subordinate resource (the item) from outside its owner (the account)? In other words, is it wrong to have two URIs to the same resource? This is a little tricky to explain, so here is an example:
POST /inventory/accountId
#Request Body contains new item
#Response body contains new item's id
GET|PUT|DELETE /inventory/accountId/guid #obviously works and makes sense
GET|PUT|DELETE /inventory/guid #does this make sense?
Perhaps I should rethink my resource layout and not use accounts to create items but instead take the account as a query string parameter or field on the item?
POST /inventory
# Request body contains item w/ account name set on it
GET|POST|DELETE /inventory/uuid #makes sense
GET|POST|DELETE /inventory/accountId/uuid #not allowed
I think having two URIs point to the same item is asking for trouble. In my experience, these sorts of things lead to craziness as you scale out (caching, multiple nodes in a cluster going out of sync and so on). As long as the item's ID is indeed globally unique, there's no reason not to simply refer to it as /inventory/guid.
POST /inventory/accountId
GET|PUT|DELETE /inventory/accountId/guid #obviously works and makes sense
GET|PUT|DELETE /inventory/guid #does this make sense?
It makes the most sense when /inventory/guid redirects to /inventory/accountId/guid (or, I'd argue, vice versa). Having a single canonical entity, with multiple URIs redirecting to it, allows your caching scheme to remain the most straightforward. If the two URIs instead return the same data, then a user is inevitably going to PUT a new representation to one and then be confused when a GET to the other returns an old copy, because the cache was only invalidated for the former. A similar problem can occur for subsequent GETs on the two. Redirects keep that a lot cleaner (not perfectly synchronous, but cleaner).
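As a sketch of that redirect approach (the account lookup below is a hypothetical stand-in for a real query):

```typescript
// The short URI permanently redirects to the canonical /inventory/:accountId/:guid URI.
import express from "express";

const app = express();

async function accountIdForItem(guid: string): Promise<string | undefined> {
  // Stand-in: look up which account owns this globally unique item ID.
  return guid ? "acct-42" : undefined;
}

app.get("/inventory/:guid", async (req, res) => {
  const accountId = await accountIdForItem(req.params.guid);
  if (!accountId) {
    res.status(404).end();
    return;
  }
  // Caches and clients only ever deal with one canonical entity.
  res.redirect(301, `/inventory/${accountId}/${req.params.guid}`);
});

app.listen(3000);
```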
Whether to make items subordinate to accounts depends on whether items can exist without an account. If the data of an item is a subset of the data of an account, then go ahead and make it subordinate. If you find that an account is just one kind of container, or that some items exist without any container, then promote them to the top level.
In other words, is it wrong to have two URIs to the same resource?
No. It is not wrong to have multiple URIs identifying the same resource. I don't see anything wrong with your first approach either. Remember, URIs are unique identifiers and should be opaque to clients. If they uniquely identify a resource, then you don't have to worry too much about making your URLs look pretty. I am not saying resource modeling is not important, but IMO we shouldn't spend too much time on it. If your business needs the guid directly under inventory and also under individual accounts, so be it.
Are you concerned about this because of a potential security hole in letting data be available to unauthorized users? Or is your concern purely design driven?
If security is not your concern, I agree that it is perfectly fine to have two URIs pointing to the same resource.

How to generate one-time-use links? Any CMS or framework solutions?

I'm making a site for a writers management company. They get tons of script submissions every day from prospective and often unsolicited writers. The new site will allow a prospective writer to submit a short logline / sample of his or her idea. This idea gets sent to an email account at the management group. If the management group likes what they see, they want to be able to approve that submission from within the email and have a unique link dispatched to the submitter to upload their full script. This link would either only work once, or only for a certain amount of time so that only the intended recipient could use it.
So, can anyone point me in the direction of some sort of (I'm assuming PHP + MySQL) CMS or framework that could accomplish this? I've searched a lot, but I can't seem to figure out the right way to phrase this query for a search engine.
I have moderate programming experience, but not much with PHP outside of some simple Wordpress hacks.
Thanks!
I will just give you general guidelines on a simple way to construct such a system.
I assume that the Writer is somehow Registered into the system, and his/her profile contains a valid mail address.
So, when he submits the sample, you would create an entry in the "Sample" table. Then you would mail a Manager with the sample and a link. This link would point to a script taking the database "id" of the sample as a parameter (this script should verify that the manager is logged in -- if not, show the login screen and, after successful login, redirect him back).
This script would then be aware of the Manager's intention to allow the Writer to submit his work. Now the fun begins.
There are many possibilities:
You can create an entry in an appropriate "SubmitAuthorizations" DB table containing the ID of the Writer and the date this authorization was given (i.e., the date when the row was added to your DB). Then you simply send a mail to the Writer with a link like "upload.php?id=42", where the id is the authorization ID. This script would check if the logged-in user is the correct Writer, and if he is within the allowed timeframe (by comparing the stored "authorization date" and the current date).
The next is the one I prefer: no special table just for handling something trivial (let's say you will never want to "edit" an authorization, nor "cancel" it, but it may still "expire"). You simply give the Writer a link with two parameters: the date the authorization was given and an authorization key, like: "upload.php?authDate=20091030&key=87a62d726ef7..."
Let me explain how it works.
The script would first verify if the Writer is logged on (if not, show the login page with a redirection after successful login).
So, now it's time to validate the request: that is, check that this is not a "forged" link. How to do this? It's just a "smart" way of constructing this authorization key.
You can do something like:
key = hash(concat(userId, ";", authDate, ";", seed));
Well, here hash() is what we call a "one-way function", like MD5, SHA1, etc. Then concat() is simply a string concatenation function. Finally, seed is something like a "master password": completely random and never changing (if you change it, all the issued links stop working), there just to increase security -- let's say a hacker correctly guesses you are using MD5 (which is easy) and then tries to attack your system by hashing some combinations of the username and the date; without knowing the seed, those guesses won't produce valid keys.
Also, for a request to be valid, it must be in the correct time frame.
So, if the key is valid and the date is within the time frame, you can accept the upload.
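Here is the same scheme sketched in TypeScript rather than PHP; the seed value, date format, and seven-day window are placeholders:

```typescript
// key = hash(concat(userId, ";", authDate, ";", seed)), plus a time-frame check.
import { createHash } from "node:crypto";

const SEED = "long-random-master-secret"; // must never change once links are issued

function makeKey(userId: string, authDate: string): string {
  return createHash("sha1").update(`${userId};${authDate};${SEED}`).digest("hex");
}

function isAuthorised(userId: string, authDate: string, key: string, maxDays = 7): boolean {
  const expectedKey = makeKey(userId, authDate);

  // authDate is in yyyymmdd form, e.g. "20091030"
  const issued = new Date(
    `${authDate.slice(0, 4)}-${authDate.slice(4, 6)}-${authDate.slice(6, 8)}T00:00:00Z`
  );
  const ageDays = (Date.now() - issued.getTime()) / (24 * 60 * 60 * 1000);

  return key === expectedKey && ageDays >= 0 && ageDays <= maxDays;
}
```

In a production system you would probably use an HMAC and a constant-time comparison rather than a bare hash and ===, but the idea is the same.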
Some points to note:
This is a very simple system, but might be exactly what you need.
You should avoid MD5 for the hashing function, take something like SHA1 instead.
For the link sent to the Writer, you could "obfuscate" the parameter names, ie, call them "k" for the "key" and "d" for the "authDate".
For the date, you could chose another format, more "cryptic", like the unix epoch.
Finally, you can encode the parameters with something like base64 (or simply apply some character-replacement function like rot13, but one that takes digits into account as well) just to make them harder to guess.
Just for completeness: in the validation script you can also check whether the Writer has already sent a file within the time frame, making it impossible for him to send many files within that window.
I have recently implemented something like this twice at the company I work for, for two completely different uses. Once you get the idea, it is extremely simple to implement -- maybe less than 10 lines of code for the whole key-generation and validation process.
In one of them, the agent equivalent to your Writer had no account in the system (actually it would be his first contact with the system) -- there was only his "profile" on the system, managed by someone else. In that case, you would have to include the Writer's ID in the parameters to the "Upload" script as well.
I hope this helps, and that it was clear enough. If I find the time, I will blog about it with a working example in some language.

Appending to a resource's attribute RESTfully

This is a follow up to Updating a value RESTfully with Post
How do I simply append to a resource's attribute using REST? Imagine I have customer.balance, and balance is an int. Let's say I just want to tell the server to add 5 to whatever the current balance is. Can I do this RESTfully? If so, how?
Keep in mind that the client doesn't know the customer's existing balance, so it can't just
get customer
customer.balance += 5
post customer
(there would also be concurrency issues with the above.)
Simple, slightly ugly:
This is a simpler variation of my answer to your other question.
I think you're still within the constraints of REST if you do the following. However, I'm curious about what others think about this situation as well, so I hope to hear from others.
Your URI will be:
/customer/21/credits
You POST a credit resource (maybe <credit>5</credit>) to the URI, and the server can then take the customer's balance and += it with the provided value. Additionally, you can support negative credits (e.g. <credit>-10</credit>).
Note that /customer/21/credits doesn't have to support all methods. Supporting POST only is perfectly acceptable.
However, this gets a little weird if credits aren't a true resource within your system. The HTTP spec says:
If a resource has been created on the origin server, the response SHOULD be 201 (Created) and contain an entity which describes the status of the request and refers to the new resource, and a Location header.
Technically you're not creating a resource here, you're appending to the customer's balance (which is really an aggregate of all previous credits in the system). Since you're not keeping the credit around (presumably), you wouldn't really be able to return a reference to the newly "created" credit resource. You could probably return the customer's balance, or the <customer> itself, but that's a bit unintuitive to clients. This is why I think treating each credit as a new resource in the system is easier to work with (see below).
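For what it's worth, here is a minimal sketch of that simpler variant in Express, using JSON instead of the XML above; the balance store is an in-memory stand-in:

```typescript
// POST a credit amount and let the server apply it to the balance.
import express from "express";

const app = express();
app.use(express.json());

const balances = new Map<number, number>([[21, 40.03]]);

app.post("/customer/:id/credits", (req, res) => {
  const customerId = Number(req.params.id);
  const amount = Number(req.body.credit); // e.g. { "credit": 5 } or { "credit": -10 }

  const balance = (balances.get(customerId) ?? 0) + amount;
  balances.set(customerId, balance);

  // No credit resource is kept around, so there is nothing for a Location header
  // to point at; here we just echo the new balance back.
  res.status(200).json({ balance });
});

app.listen(3000);
```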
My preferred solution:
This is adapted from my answer in your other question. Here I'll try to approach it from the perspective of what the client/server are doing:
Client:
Builds a new credit resource:
<credit>
<amount>5</amount>
</credit>
POSTs resource to /customer/21/credits
POSTing here means, "append this new <credit> I'm providing to the list of <credit>s for this customer."
Server:
Receives POST to /customer/21/credits
Takes the amount from the request and +=s it to the customer's balance
Saves the credit and its information for later retrieval
Sends response to client:
<credit href="/customer/21/credits/credit-id-4321234">
<amount>5</amount>
<date>2009-10-16 12:00:23</date>
<ending-balance>45.03</ending-balance>
</credit>
This gives you the following advantages:
Credits can be accessed at a later date by id (with GET /customer/21/credits/[id])
You have a complete audit trail of credit history
Clients can, if you support it, update or remove credits by id (with PUT or DELETE)
Clients can retrieve an ordered list of credits, if you support it; e.g. GET /customer/21/credits might return:
<credits href="/customer/21/credits">
<credit href="/customer/21/credits/credit-id-7382134">
<amount>13</amount>
...
</credit>
<credit href="/customer/21/credits/credit-id-134u482">
...
</credit>
...
</credits>
Makes sense, since the customer's balance is really the end result of all credits applied to that customer.
To think about this in a RESTful way, you would need to think about the action itself as a resource. For example, if this were banking and you wanted to update the balance on an account, you would create a deposit resource and then add one of those. The consequence of this would be to update the customer's balance.
This also helps deal with concurrency issues, because you would be submitting a +5 action rather than requiring prior knowledge of the customer's balance. And you would also be able to recall that resource (say deposit/51 for the deposit with an ID of 51) and see other details about it (i.e. reason for deposit, date of deposit, etc.).
EDIT: Realised that using an id of 5 for the deposit actually confuses the issue, so changed it to 51.
Well, there is an alternative to #Rob-Hruska's solution.
The fundamental idea is the same: treat each credit/debit operation as a standalone transaction. However, I once used a backend which supports storing schema-less data as JSON, so I ended up defining the API as a PUT with dynamic field names. Something like this:
PUT /customer/21
{"transaction_yyyymmddHHMMSS": 5}
I know this is NOT appropriate in the "credit/debit" context, because an active account could have ever-growing transaction records. But in my context I was using this tactic to store finite data (actually I was storing different batches of GPS waypoints during a driving trip).
Cons: This API style depends heavily on the backend's schema-less storage feature.
Pros: At least my approach is fully RESTful from the semantic point of view.
By contrast, #Rob-Hruska's "Simple, slightly ugly" solution 1 does not have a valid Location header to return in the "201 Created" response, which is not common RESTful behavior. (Or perhaps we can let #Rob-Hruska's solution 1 also return a dummy Location header, which points to a "410 Gone" or "404 Not Found" page. Is this more RESTful? Comments are welcome!)