How would you do a speed test from server A to client B? Write a program to measure bandwidth.

This was a follow-up question to a larger design question I was asked.
The objective is to find and report the result of a speed test between two endpoints in the network (server and client).
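A minimal sketch of the classic approach (what tools like iperf do): the server streams a known number of bytes, the client times the transfer and divides. All names, sizes and the loopback demo below are illustrative; in a real test the server and client run on the two machines.

```python
import socket
import threading
import time

CHUNK = 64 * 1024
TOTAL_BYTES = 4 * 1024 * 1024  # size of the test payload (4 MiB)

def serve_payload(srv):
    """Server A: accept one connection and stream TOTAL_BYTES to it."""
    conn, _ = srv.accept()
    with conn:
        sent = 0
        buf = b"\x00" * CHUNK
        while sent < TOTAL_BYTES:
            conn.sendall(buf)
            sent += CHUNK
    srv.close()

def measure_mbps(host, port):
    """Client B: receive the payload and compute throughput in Mbit/s."""
    cli = socket.create_connection((host, port))
    received = 0
    start = time.perf_counter()
    while received < TOTAL_BYTES:
        data = cli.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.perf_counter() - start
    cli.close()
    return (received * 8) / (elapsed * 1_000_000)

# Demo over loopback; replace 127.0.0.1 with the server's address in practice.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_payload, args=(srv,), daemon=True).start()
mbps = measure_mbps("127.0.0.1", port)
```

Note that a loopback run measures the local stack, not the network; the point is the structure of the measurement, not the number it prints.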

In Redis, what does the letter "z" in zadd and zscore mean? [duplicate]

When I was studying Redis for my database, I learned that 'Zset' means 'Sorted Set'.
What does 'Zset' actually stand for, and why does it mean 'Sorted Set'?
It may be a simple or overly broad question, but I want to understand exactly what I learned.
A similar question was asked before on Redis's GitHub page, and the creator of Redis answered it:
Hello. Z is as in XYZ, so the idea is, sets with another dimension: the
order. It's a far association... I know :)
Set commands start with s
Hash commands start with h
List commands start with l
Sorted set commands start with z
Stream commands start with x
Hyperloglog commands start with pf
Sorted sets could have been named sset, but that is unpronounceable, and "ss" has a bad connotation in Europe. So, maybe due to this, or just for fun, they chose the zset name.
The essence of the given name Zset stands for seriousness, thought, intuition, intent and wisdom.
So it is highly probable that the chosen name has little to do with technology and more to do with culture. This is what I can work out from the available sources, but it is only probably true. If you need factual precision, you might want to send a message to the authors and ask them.
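Whatever the name's origin, the semantics behind the z-prefixed commands are easy to illustrate: each member carries a score, and ranges come back ordered by score. This tiny in-memory model mimics ZADD/ZSCORE/ZRANGE behaviour as a sketch; it is not how Redis actually implements zsets.

```python
class MiniZSet:
    """A toy model of a Redis sorted set: a set with one extra dimension, order."""

    def __init__(self):
        self._scores = {}

    def zadd(self, score, member):
        """Add a member with a score, or update its score (like ZADD)."""
        self._scores[member] = score

    def zscore(self, member):
        """Return the member's score, or None if absent (like ZSCORE)."""
        return self._scores.get(member)

    def zrange(self):
        """All members in ascending score order, ties broken
        lexicographically (like ZRANGE key 0 -1)."""
        return sorted(self._scores, key=lambda m: (self._scores[m], m))

z = MiniZSet()
z.zadd(300, "alice")
z.zadd(100, "bob")
z.zadd(200, "carol")
# z.zrange() -> ["bob", "carol", "alice"], ordered by score
```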

Security of smart contract data not returned by a view function

I've been looking through some of the NEAR demos and came across the one regarding posting quizzes that can be answered for a reward.
Source code here: https://github.com/Learn-NEAR/NCD-02--riddles
Video here: https://www.youtube.com/watch?v=u4jP2a2mbiI
My question is related to how secure the answer hash is. In the current implementation, the answer hash is returned with the quizzes, but I imagine it would be better if that wasn't the case.
Even then, if the hash was stored on the NEAR network without being returned by any view function, how secure would that be? If the contract only allowed a certain number of guesses per account before denying additional attempts, could someone get the hash through some other means and then have as many chances as they want, by locally hashing candidate answers with sha256 and seeing if one matches?
Thanks,
Christopher
For sure, all data on chain is public, so storing anything means sharing it with the world.
One reasonable way to handle something like this would be to store the hash but accept the raw string, then hash it and compare the two for a possible win.
If you choose a secure hashing algorithm, it would be nearly impossible to guess the required input string based on seeing the hash.
Update: it was pointed out to me that this answer is incomplete or misleading, because if the set of possible answers is small then this would still be a bad design: you could just quickly hash all the possible answers (e.g. in a multiple-choice question) and compare those hashes with the stored one.
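The store-the-hash-and-compare approach, together with the small-answer-space caveat from the update, can be sketched in a few lines. The answer string and function names here are hypothetical, and this is plain Python rather than NEAR contract code:

```python
import hashlib

# Hypothetical contract state: only the SHA-256 hash of the answer is stored
# on chain; the raw answer never is.
STORED_HASH = hashlib.sha256(b"riddle answer").hexdigest()

def check_guess(guess: str) -> bool:
    """Hash the raw guess and compare it against the stored hash."""
    return hashlib.sha256(guess.encode()).hexdigest() == STORED_HASH

# The caveat: because the hash itself is public, anyone can test candidates
# offline. If the answer space is small (e.g. multiple choice), this breaks
# any on-chain guess limit.
def brute_force(candidates):
    return [c for c in candidates if check_guess(c)]
```

This is why hashing only helps when the input space is large enough that offline enumeration is infeasible; a per-quiz random salt does not fix a small answer space either, since the salt must also be on chain.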
Heads up!
Everything in that GitHub org that starts with NCD is a student project submitted after just a week of learning about NEAR, so there is a huge pile of mistakes there just waiting to be refactored and commented on by experts in the community.
The projects that are presented for study all start with the prefix "sample"; those are the ones we generated to help students explore the possibilities of contracts on the NEAR platform, along with all our core contracts, Sputnik contracts and others.
Sign up to learn more about NEAR Certified Developer Programs here: https://near.training

With Bluemix Retrieve & Rank, how do we implement a system that continuously learns?

With reference to the Web page below, using the Retrieve & Rank service of IBM Bluemix, we are creating a bot that can respond to inquiries.
Question:
After training the ranker once, how can we construct a mechanism that continuously learns from users' responses to inquiries and improves response accuracy?
Assumption:
Because there is no R&R API for continuously learning from users' inquiry results,
I suppose it is necessary to periodically tune the GroundTruth file and retrain the ranker.
Assumed tuning of the GT file:
If there is a new question, add a question-and-answer pair.
If an existing question could not be answered well, increase or decrease the relevance scores of the responses
(if the bot answered incorrectly, lower the score; if there is a useful answer, raise the score).
In order to continuously learn, you will want to do the following:
capture new examples, i.e. each user input and its corresponding result
review those examples, create new ranker examples, adjust relevance scores, etc.
add those new examples to the ranker
retrain the ranker using your new and existing examples
NOTE: Be sure to validate that new updates to the ranker data improves the overall system performance. k-fold validation is a great way to measure this.
All in all, learning is a continuous process that should be repeated indefinitely, or until system performance is deemed sufficient.
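The k-fold validation mentioned in the NOTE can be sketched without any R&R specifics: split the ground-truth examples into k folds, hold each fold out in turn, and average the metric. The `evaluate` callback below is a placeholder for "retrain the ranker on `train`, then score it on `validation`"; the actual service calls are omitted.

```python
def k_fold_splits(examples, k):
    """Yield (train, validation) partitions for k-fold cross-validation."""
    folds = [examples[i::k] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, validation

def k_fold_score(examples, k, evaluate):
    """Average a quality metric over the k splits.

    `evaluate(train, validation)` stands in for training on `train`
    and measuring accuracy on `validation`.
    """
    scores = [evaluate(train, validation)
              for train, validation in k_fold_splits(examples, k)]
    return sum(scores) / len(scores)
```

Comparing this averaged score before and after adding new ground-truth examples tells you whether the update actually improved the system, which is the point of the NOTE above.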

What would be a good/correct term to designate a group of microservices? [closed]

Closed 3 years ago. This question is opinion-based and is not currently accepting answers.
I am opening this topic looking for advice to help solve the following problem:
I am currently working with other people on a project with a microservices architecture, using cloud computing.
There are 6 different microservices, and some pairs of microservices are not compatible and therefore cannot be instantiated on the same machine.
Each microservice has a version number.
In order to launch one or more new instances of any microservice, we have to define which microservices will run on this new machine, via a static configuration.
This static configuration, which so far we have called a "deploy", contains the microservices being deployed and the version of each. (ex: (XY,[(X,v1),(Y,v2)]) - X and Y are microservices, and the XY deploy instantiates version 1 of X and version 2 of Y)
Those "deploys" also have their own version number. Altering the version number of a microservice within a deploy requires altering the version of any "deploy" containing the microservice. (ex: (XY,v1,[(X,v1),(Y,v2)]) and (XY,v2,[(X,v1),(Y,v3)]))
The question is: what would be a correct, or at least, a good term to refer to this entity that I have previously called a "deploy"?
Many developers are writing programs around our architecture and using different names for this entity, which causes syntactic and semantic incompatibility inside our team.
Of those different names, all have pros and cons:
deploy: makes sense because you are deploying all the microservices in the list. However, the term "deploy" already designates another part of our process, and the same term could be overused. (Deploying the XY deploy will deploy microservices X and Y on a machine.)
cluster: good name for a group of things, but you can deploy multiple machines from a configuration, and the term cluster already applies to this group of machines.
service: a service would be a group of microservices. Makes sense, but many pieces of code refer to a microservice as a 'service', which could lead to confusion. (def get_version(service) - is this a service or a microservice?)
Could any of you give us an opinion or some enlightenment on this question?
Thanks!
You might take a hint from the 12-factor App, and call them releases (http://12factor.net/build-release-run)
You then deploy a versioned release.
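Under that naming, the (XY,v2,[(X,v1),(Y,v3)]) example from the question might be captured as a small release manifest; the field names here are purely illustrative:

```python
# A hypothetical "release" manifest: a versioned, immutable list of
# microservices and the version of each, mirroring (XY, v2, [(X, v1), (Y, v3)]).
release = {
    "name": "XY",
    "version": "v2",
    "services": [
        {"service": "X", "version": "v1"},
        {"service": "Y", "version": "v3"},
    ],
}
```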
It sounds like you want a suitable collective noun. I suggest you Google "collective nouns", to find numerous lists. Read some of the lists and pick a noun that you think is appropriate.
Alternatively, the term cooperative (or co-op for short) might be suitable if one of the defining characteristics of an instantiated collection of microservices is that they complement, or cooperate with, each other.
I have used the term "complex" (as in the "mortgage risk" complex vs the "compliance" complex). It seemed unambiguous.
People also used the term within a project for deployed sets of microservices (e.g the production complex vs the test complex).

Looking for examples where knowledge of discrete mathematics is helpful [closed]

Closed 8 years ago. This question needs to be more focused and is not currently accepting answers.
Inspired after watching Michael Feathers's SCNA talk "Self-Education and the Craftsman", I am interested to hear about practical examples in software development where discrete mathematics has proved helpful.
Discrete math has touched every aspect of software development, as software development is based on computer science at its core.
http://en.wikipedia.org/wiki/Discrete_math
Read that link. You will see that there are numerous practical applications, although this wikipedia entry speaks mainly in theoretical terms.
Techniques I learned in my discrete math course from university helped me quite a bit with the Professor Layton games.
That counts as helpful... right?
There are a lot of real-life examples where map coloring algorithms are helpful, besides just for coloring maps. The question on my final exam had to do with traffic light programming on a six-way intersection.
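To give a flavour of how that exam question reduces to graph coloring: traffic movements that conflict become adjacent vertices, and each color is a signal phase. A greedy coloring sketch follows; it is not optimal in general (it only guarantees at most max-degree + 1 colors), and the little intersection below is invented:

```python
def greedy_coloring(graph):
    """Assign each vertex the smallest color unused by its neighbors.

    graph: dict mapping vertex -> list of adjacent vertices
    (adjacency must be symmetric for the coloring to be valid).
    """
    colors = {}
    for v in graph:
        taken = {colors[u] for u in graph[v] if u in colors}
        color = 0
        while color in taken:
            color += 1
        colors[v] = color
    return colors

# Toy conflict graph: an edge joins two traffic movements that cannot
# have a green light at the same time.
conflicts = {
    "N-S": ["E-W", "E-S"],
    "E-W": ["N-S", "S-N"],
    "E-S": ["N-S"],
    "S-N": ["E-W"],
}
phases = greedy_coloring(conflicts)  # movements sharing a color share a phase
```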
As San Jacinto indicates, the fundamentals of programming are very much bound up in discrete mathematics. Moreover, 'discrete mathematics' is a very broad term. These things perhaps make it harder to pick out particular examples. I can come up with a handful, but there are many, many others.
Compiler implementation is a good source of examples: obviously there's automata / formal language theory in there; register allocation can be expressed in terms of graph colouring; the classic data flow analyses used in optimizing compilers can be expressed in terms of functions on lattice-like algebraic structures.
A simple example of the use of directed graphs is a build system that orders its individual tasks by performing a topological sort of the dependency graph. I suspect that if you tried to solve this problem without having the concept of a directed graph then you'd probably end up trying to track the dependencies all the way through the build with fiddly book-keeping code (and then finding that your handling of cyclic dependencies was less than elegant).
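A minimal sketch of that idea using Kahn's algorithm, on a made-up dependency map; note that cycle detection falls out of the same bookkeeping rather than needing extra code:

```python
from collections import deque

def topo_sort(deps):
    """Kahn's algorithm. deps maps task -> list of tasks it depends on.

    Returns the tasks in an order where every task comes after all of its
    dependencies; raises ValueError if the dependencies contain a cycle.
    """
    indegree = {t: 0 for t in deps}
    dependents = {t: [] for t in deps}
    for task, prereqs in deps.items():
        for p in prereqs:
            indegree[task] += 1
            dependents[p].append(task)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(deps):
        raise ValueError("cyclic dependency detected")
    return order

# Hypothetical build: app needs lib and assets; lib needs codegen.
build = {"app": ["lib", "assets"], "lib": ["codegen"], "assets": [], "codegen": []}
order = topo_sort(build)
```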
Clearly most programmers don't write their own optimizing compilers or build systems, so I'll pick an example from my own experience. There is a company that provides road data for satnav systems. They wanted automatic integrity checks on their data, one of which was that the network should all be connected up, i.e. it should be possible to get to anywhere from any starting point. Checking the data by trying to find routes between all pairs of positions would be impractical. However, it is possible to derive a directed graph from the road network data (in such a way as it encodes stuff like turning restrictions, etc) such that the problem is reduced to finding the strongly connected components of the graph - a standard graph-theoretic concept which is solved by an efficient algorithm.
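The strongly-connected-components check can be sketched with Kosaraju's algorithm. The tiny "road network" below is invented, but the integrity test is the one described: the network is fully navigable exactly when there is a single SCC.

```python
def strongly_connected_components(graph):
    """Kosaraju's algorithm. graph: dict vertex -> list of successors.

    Returns a list of SCCs, each a set of vertices.
    """
    order, visited = [], set()

    def dfs(v):
        visited.add(v)
        for w in graph.get(v, []):
            if w not in visited:
                dfs(w)
        order.append(v)  # record v after all its descendants

    for v in graph:
        if v not in visited:
            dfs(v)

    # Build the reversed graph.
    reverse = {v: [] for v in graph}
    for v, succs in graph.items():
        for w in succs:
            reverse[w].append(v)

    # Sweep vertices in reverse finish order; each sweep of the reversed
    # graph collects exactly one strongly connected component.
    components, assigned = [], set()
    for v in reversed(order):
        if v in assigned:
            continue
        component, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in assigned:
                continue
            assigned.add(u)
            component.add(u)
            stack.extend(reverse[u])
        components.append(component)
    return components

# Toy road network: A and B reach each other, but C is a dead end
# (reachable from A, with no way back), so the network fails the check.
roads = {"A": ["B", "C"], "B": ["A"], "C": []}
sccs = strongly_connected_components(roads)
fully_navigable = len(sccs) == 1
```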
I've been taking a course on software testing, and 3 of the lectures were dedicated to reviewing discrete mathematics, in relation to testing. Thinking about test plans in those terms seems to really help make testing more effective.
Understanding of set theory in particular is especially important for database development.
I'm sure there are numerous other applications, but those are two that come to mind here.
Just one example of many, many...
In build systems it is popular to use topological sorting of the jobs to be done.
By "build system" I mean any system where we have to manage jobs with a dependency relation.
It can be compiling a program, generating a document, constructing a building, or organizing a conference - so there are applications in task-management tools, collaboration tools, etc.
I believe testing itself properly proceeds from modus tollens, a concept of propositional logic (and hence discrete math), modus tollens being:
P=>Q. !Q, therefore !P.
If you plug in "If the feature is working properly, the test will pass" for P=>Q, and then take !Q as given ("the test did not pass"), then, if all these statements are factually correct, you have a valid, sound basis for returning the feature for a fix. By contrast, many, maybe most testers operate by the principle:
"If the program is working properly, the test will pass. The test passed, therefore the program is working properly."
This can be written as: P=>Q. Q, therefore P.
But this is the fallacy of "affirming the consequent" and does not show what the tester believes it shows. That is, they mistakenly believe that the feature has been "validated" and can be shipped. When Q is given, P may in fact be either true or untrue under P=>Q, and this can be shown with a truth table.
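That truth-table check can be done mechanically. The snippet below enumerates every assignment of P and Q, and confirms that modus tollens is valid (in every row where P=>Q holds and Q is false, P is false) while affirming the consequent is not (there is a row where P=>Q holds and Q is true but P is false):

```python
from itertools import product

def implies(p, q):
    """Material implication: P => Q is false only when P is true and Q is false."""
    return (not p) or q

# Modus tollens: from (P => Q) and not-Q, conclude not-P.
modus_tollens_valid = all(
    not p
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and not q
)

# Affirming the consequent: from (P => Q) and Q, conclude P.
affirming_consequent_valid = all(
    p
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q
)
```

The failing row for affirming the consequent is P=False, Q=True: the test passed, yet the feature is broken.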
Modus tollens is core to Karl Popper's notion of science as falsification, and testing should proceed in much the same way. We're attempting to falsify the claim that the feature always works under every explicit and implicit circumstance, rather than attempting to verify that it works in the narrow sense that it can work in some prescribed way.