When exactly do the NIC/kernel generate socket timestamps?

I just found that there are three socket options for generating timestamps:
SOF_TIMESTAMPING_TX_HARDWARE
SOF_TIMESTAMPING_TX_SOFTWARE
SOF_TIMESTAMPING_TX_SCHED
It turns out that they give me slightly different results, but I'm still curious about exactly when the timestamp is recorded with each of those three options. I have already read this document, but I need a more detailed description. Can you explain the difference between them?
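For reference, here is a minimal Linux-only sketch of requesting all three TX timestamps on one socket and reading them back from the error queue, based on the kernel's Documentation/networking/timestamping documentation (the loopback destination and the single recvmsg call are illustrative only; hardware stamps additionally need NIC/driver support, typically enabled via the SIOCSHWTSTAMP ioctl):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <linux/net_tstamp.h>
    #include <linux/errqueue.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        /* Per the kernel docs:
         *   TX_SCHED    - stamp taken just before the packet enters the
         *                 packet scheduler (qdisc)
         *   TX_SOFTWARE - stamp taken in the device driver, just before
         *                 the packet is handed to the NIC
         *   TX_HARDWARE - stamp taken by the NIC hardware itself, at or
         *                 near the moment the packet hits the wire
         * The SOFTWARE/RAW_HARDWARE flags select which stamps are
         * reported back to the socket. */
        unsigned int flags = SOF_TIMESTAMPING_TX_HARDWARE
                           | SOF_TIMESTAMPING_TX_SOFTWARE
                           | SOF_TIMESTAMPING_TX_SCHED
                           | SOF_TIMESTAMPING_SOFTWARE
                           | SOF_TIMESTAMPING_RAW_HARDWARE;
        if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
                       &flags, sizeof(flags)) < 0) {
            perror("SO_TIMESTAMPING");
            return 1;
        }

        struct sockaddr_in dst = { .sin_family = AF_INET,
                                   .sin_port = htons(9) };
        inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
        sendto(fd, "x", 1, 0, (struct sockaddr *)&dst, sizeof(dst));

        /* Completed timestamps come back on the socket error queue;
         * a real program would poll() until they arrive. */
        char data[256], ctrl[512];
        struct iovec iov = { data, sizeof(data) };
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                              .msg_control = ctrl,
                              .msg_controllen = sizeof(ctrl) };
        if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
            perror("recvmsg");
            return 1;
        }
        for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c;
             c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == SOL_SOCKET &&
                c->cmsg_type == SCM_TIMESTAMPING) {
                struct scm_timestamping *ts = (void *)CMSG_DATA(c);
                /* ts[0] = software stamp, ts[2] = raw hardware stamp */
                printf("sw %ld.%09ld  hw %ld.%09ld\n",
                       (long)ts->ts[0].tv_sec, ts->ts[0].tv_nsec,
                       (long)ts->ts[2].tv_sec, ts->ts[2].tv_nsec);
            }
        }
        close(fd);
        return 0;
    }

Each stamp is reported as its own error-queue message as it is generated, so the small differences you see correspond to qdisc entry (SCHED), driver handoff (SOFTWARE), and wire transmission (HARDWARE) for the same packet.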

Security of smart contract data not returned by a view function

I've been looking through some of the NEAR demos and came across the one regarding posting quizzes that can be answered for a reward.
Source code here: https://github.com/Learn-NEAR/NCD-02--riddles
Video here: https://www.youtube.com/watch?v=u4jP2a2mbiI
My question is related to how secure the answer hash is. In the current implementation, the answer hash is returned with the quizzes, but I imagine it would be better if that wasn't the case. Even then, if the hash was stored on the NEAR network without being returned by any view function, how secure would that be? If there were code in this contract to only allow a certain number of guesses per account before denying additional attempts, would someone be able to get the hash through some other means, and then have as many chances to answer as they want by locally hashing answers with sha256 and seeing if one matches?
Thanks,
Christopher
For sure, all data on chain is public, so storing anything means sharing it with the world.
One reasonable way to handle something like this would be to store the hash but accept the raw string, then hash it and compare the two for a possible win.
If you choose a secure hashing algorithm, it would be nearly impossible to guess the required input string based on seeing the hash.
Update: it was pointed out to me that this answer is incomplete or misleading, because if the set of possible answers is small, this would still be a bad design: you could just quickly hash all the possible answers (e.g. in a multiple-choice question) and compare those hashes with the stored answer hash.
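To make both points concrete, here is a sketch in C using OpenSSL's SHA256 (the answer "blue" and the candidate list are made up for illustration; the real contract would do the equivalent on-chain in its own language): only the digest is stored, each submitted guess is hashed and compared, and the update's caveat shows up as a trivial offline brute force when the answer space is small.

    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>    /* link with -lcrypto */

    /* Return 1 if sha256(guess) matches the stored digest. */
    static int check_guess(const unsigned char *stored, const char *guess) {
        unsigned char h[SHA256_DIGEST_LENGTH];
        SHA256((const unsigned char *)guess, strlen(guess), h);
        return memcmp(h, stored, SHA256_DIGEST_LENGTH) == 0;
    }

    int main(void) {
        /* In the contract, only this 32-byte digest would live on-chain. */
        unsigned char stored[SHA256_DIGEST_LENGTH];
        SHA256((const unsigned char *)"blue", 4, stored);

        printf("guess red:  %d\n", check_guess(stored, "red"));   /* 0 */
        printf("guess blue: %d\n", check_guess(stored, "blue"));  /* 1 */

        /* The caveat: with a small answer space (e.g. multiple choice),
         * anyone can hash every candidate offline and recover the answer. */
        const char *candidates[] = { "red", "green", "blue", "yellow" };
        for (int i = 0; i < 4; i++)
            if (check_guess(stored, candidates[i]))
                printf("brute-forced: %s\n", candidates[i]);
        return 0;
    }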
Heads up!
Everything in that GitHub org that starts with NCD is a student project submitted after just a week of learning about NEAR, so there is a huge pile of mistakes there just waiting to be refactored and commented on by experts in the community.
The projects that are presented for study all start with the prefix sample. Those are the ones we generated to help students explore the possibilities of contracts on the NEAR platform, along with all our core contracts, Sputnik contracts, and others.
Sign up to learn more about NEAR Certified Developer Programs here: https://near.training

Using found set of records as basis for value list

Beginner question. I would like to have a value list display only the records in a found set.
For example, in a law firm database that has two tables, Clients and Cases, I can easily create a value list that displays all cases for all clients.
But that is a lot of cases to pick from, and invites user mistakes. I would like the selection from the value list to be restricted to cases matched to a particular client.
I have tried this method https://support.claris.com/s/article/Creating-conditional-Value-Lists-1503692929150?language=en_US and it works up to a point, but it requires too much data entry and too many tables.
It seems like there ought to be a simpler method using the Find function. Any help or ideas would be greatly appreciated.

How to break up large document into smaller answer units on Retrieve and Rank?

I am still very new to Retrieve and Rank, and Document Conversion services, so I have been playing around with that lately.
I encountered a situation where, when I upload a large document (100+ pages), Retrieve and Rank automatically breaks it up into answer units for me, which is great and helpful.
However, some questions only need ONE small line within those big chunks of answer units. Is there a way that I can manually break down the answer units that the Retrieve and Rank service has produced even further?
I heard that you can do it through JavaScript, but is there a way to do it through the UI?
I am contemplating manually breaking up the huge doc into multiple smaller documents, but that could potentially lead to hundreds of them, which is probably the last option I'd resort to.
Any help or suggestions are greatly appreciated!
Thank you all!
First off, one clarification:
Retrieve and Rank does not break up your documents into answer units. That is something that the Document Conversion Service does when your conversion target is ANSWER_UNITS.
Regarding your question:
I don't fully understand exactly what you're trying to do, but if the answer units that are produced by default don't meet your requirements, you can customize different steps of the conversion process to adjust the produced answer units. Take a look at the documentation here.
Specifically, you want to make sure that the heading levels (for Word, PDF, or HTML, depending on your document type) are defined in a way that detects the start of each answer unit. Then, make sure that the heading levels you defined (h1, h2, h3, etc.) are included in the selector_tags list within the answer_units section.
Once your custom Document Conversion Service configuration produces the answer units you are looking for, you will be ready to send them to Retrieve and Rank to be indexed.
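As a concrete illustration, a custom conversion configuration along these lines (a sketch only; the font-size thresholds are made-up values you would tune for your documents, and the full schema is in the Document Conversion docs) maps font sizes to heading levels and then splits answer units on those headings:

    {
      "conversion_target": "ANSWER_UNITS",
      "word": {
        "heading": {
          "fonts": [
            { "level": 1, "min_size": 24 },
            { "level": 2, "min_size": 16, "max_size": 24 }
          ]
        }
      },
      "answer_units": {
        "selector_tags": ["h1", "h2", "h3", "h4"]
      }
    }

The more heading levels you map and include in selector_tags, the finer the resulting answer units, which is the lever for the "one small line" problem described above.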

Is there any test link for MPEG-DASH that is both type="dynamic" and has multiple representations (bitrates)?

As the title says:
I have tried to find some, but in most cases, if the test URL is of type="dynamic", there is ONLY ONE representation (a single bitrate; bitrate switching CANNOT be applied).
Does anyone know of such a test link?
Thanks
There are several DASH data sets and test vectors out there; lots of them are listed in this blog post. Many don't have live streams, but some do (at least simulated live streams).
The DASH IF Test Vectors might be a good starting point: there are several live streams (look at the column mpd_type and search for the value dynamic), and at least some should have multiple representations.
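For reference, the kind of manifest to look for combines both properties, roughly like this hand-written sketch (not one of the linked test vectors; all attribute values are made up): type="dynamic" on the MPD element, and more than one Representation inside an AdaptationSet so the player can switch bitrates.

    <MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
         type="dynamic"
         availabilityStartTime="2024-01-01T00:00:00Z"
         minimumUpdatePeriod="PT10S"
         profiles="urn:mpeg:dash:profile:isoff-live:2011">
      <Period start="PT0S">
        <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
          <SegmentTemplate timescale="90000" duration="180000"
                           initialization="$RepresentationID$/init.mp4"
                           media="$RepresentationID$/$Number$.m4s"/>
          <!-- two Representations = two bitrates to switch between -->
          <Representation id="low"  bandwidth="500000"  width="640"
                          height="360" codecs="avc1.42c01e"/>
          <Representation id="high" bandwidth="3000000" width="1280"
                          height="720" codecs="avc1.640020"/>
        </AdaptationSet>
      </Period>
    </MPD>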

Should dateTime elements include time zone information in SOAP messages?

I've been searching for a definitive answer to this, and the XML schema data types document seems to suggest that timezones are accepted, yet I found at least one implementation which does not properly convert time zones (NUSOAP).
To make sure that the problem is not at my end, I'd like to know if a format such as 2009-11-05T11:53:22+02:00 is indeed valid and should be parsed with timezone information, i.e. as 2009-11-05T13:53:22.
Given the following sentences from the w3c schema documentation:
"Local" or untimezoned times are
presumed to be the time in the
timezone of some unspecified locality
as prescribed by the appropriate legal
authority;
and
When a timezone is added to a UTC dateTime, the result is the date and time "in that timezone".
it does not sound like there is a definitive answer to this. I would assume that it is the usual ambiguity: both versions are valid in principle, and the question of which version to use depends on the configuration/behavior/expectations of the system one is interfacing with.
And even if there were a definitive answer, I would definitely not rely on it, but rather expect that every other web service and library has its own way of dealing with this :/
You converted the timezone incorrectly.
2009-11-05T11:53:22+02:00
is equivalent to
2009-11-05T09:53:22Z
Is that what NUSOAP did?
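To make the normalization concrete, here is a minimal C sketch (assuming the plain offset form only: no fractional seconds, no trailing "Z", no validation; timegm() is a GNU/BSD extension) that parses the example above and prints it in UTC:

    #define _DEFAULT_SOURCE     /* for timegm() on glibc */
    #include <stdio.h>
    #include <time.h>

    /* Normalize a simple xsd:dateTime with a numeric offset to UTC. */
    int main(void) {
        const char *in = "2009-11-05T11:53:22+02:00";
        struct tm tm = {0};
        char sign;
        int oh, om;

        if (sscanf(in, "%d-%d-%dT%d:%d:%d%c%d:%d",
                   &tm.tm_year, &tm.tm_mon, &tm.tm_mday,
                   &tm.tm_hour, &tm.tm_min, &tm.tm_sec,
                   &sign, &oh, &om) != 9)
            return 1;
        tm.tm_year -= 1900;   /* struct tm counts years from 1900 */
        tm.tm_mon  -= 1;      /* ...and months from 0 */

        /* timegm() treats the struct as UTC; subtracting the offset
         * then yields the actual UTC instant. */
        time_t t = timegm(&tm);
        t -= (time_t)(oh * 3600 + om * 60) * (sign == '+' ? 1 : -1);

        struct tm utc;
        gmtime_r(&t, &utc);
        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", &utc);
        printf("%s\n", buf);   /* prints 2009-11-05T09:53:22Z */
        return 0;
    }

Note the direction of the arithmetic: a positive offset means the local clock is ahead of UTC, so you subtract it, which is exactly the correction the answer above makes.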