How does Microsoft generate the Event ID for winqual events?

I am curious about the IDs that are used to identify items in winqual.
I assume it is some sort of hash. Is this published?
Hash of the module name, its version and an offset?
That seems appropriate but the size is too small to cover the possibilities unless some truncation/lossy algorithm is used.
This seems to be generated with the minidump on the client machine.

If you're interested in the gory details, read Debugging in the (Very) Large: Ten Years of Implementation and Experience (PDF), which was presented by some Microsoft folks at the 22nd ACM Symposium on Operating Systems Principles (SOSP 2009). The paper goes into a lot of detail about the bucketing process.
It's essentially a hash and it isn't perfect: "The ideal bucketing algorithm should strictly maintain a property of orthogonality: one bug per bucket and one bucket per bug. WER approaches orthogonality through two phases of bucketing."
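The exact scheme isn't published, but the speculation above (module name, version, and an offset, reduced to a small ID) can be sketched roughly as follows. This is purely illustrative: the field list and the truncation are assumptions, not Microsoft's actual algorithm.

```python
import hashlib

def bucket_id(app_name, app_version, mod_name, mod_version, offset, exc_code):
    """Illustrative only: combine the kinds of fields a crash label might use
    and reduce them to a fixed-size identifier with a hash."""
    label = "|".join([app_name, app_version, mod_name, mod_version,
                      format(offset, "x"), format(exc_code, "x")])
    # Truncating the digest is lossy, which is one reason two distinct
    # bugs can end up in the same bucket.
    return hashlib.sha256(label.encode("utf-8")).hexdigest()[:16]

print(bucket_id("myapp.exe", "1.2.3.4", "mylib.dll", "1.0.0.0", 0x4F2A, 0xC0000005))
```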

Related

How can I make sure that a hash function won't produce the same hash for 2+ different entries?

Edit: some people flagged this question as a potential duplicate of this other one. While I agree that it's useful to know how the birthday paradox applies to hash functions, the two questions (and respective answers) address two different, albeit related, subjects.
The other question asks "what are the odds of a collision", whereas this question's main focus is "how can I make sure that a collision never happens".
I have a data lake stored in S3 where each day an ETL script dumps additional data from the day before.
Due to how the pipeline is built, it is possible for a very inconsiderate user with admin access to produce duplicates in said data lake by manually interacting with the dump files coming from our OLTP database and triggering the ETL script when it's not supposed to run.
I thought that a good idea to prevent data duplication was to insert a form of security measure in my ETL script:
Produce a hash for each entry.
Store said hashes somewhere else (like a dynamodb table).
Whenever new data comes in, hash that as well and compare it with the already existing hashes.
If any new hash matches an existing hash, reject the associated entry entirely.
However, I know very little about hashing and I was reading that, although unlikely, 2 different sources can produce the same hash.
I understand it's really hard for it to happen in this situation, but I was wondering if there is a way to be 100% sure about it.
Any idea is much appreciated.
Long answer: what you want to study and explore is called "perfect hashing", i.e. hashing guaranteed not to have collisions: https://en.wikipedia.org/wiki/Perfect_hash_function
Short answer: a cryptographic, collision-resistant algorithm like SHA-1 is probably safe to use for all but the largest (petabytes per day) datasets, and even then it's probably all right. Git uses SHA-1 internally, and code repositories probably deal with more files than anything else on the planet yet rarely see collisions.
See for details: https://ericsink.com/vcbe/html/cryptographic_hashes.html#:~:text=Git%20uses%20hashes%20in%20two,computed%20when%20it%20was%20stored.
Medium answer: this is actually a pretty hard problem overall and a frequent area of study in computer science, and a lot depends on your particular use case and the context you're operating in. Cuckoo hashing, collision-resistant algorithms, and hashing in general are all good terms to research. There's also a lot of art and science behind the trade-off between space (memory) and time (compute power) when picking these methods. A good rule of thumb is that perfect hashing will generally take more space and time than a collision-resistant cryptographic hash like SHA-1.
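To make the short answer concrete, here is a minimal sketch of the check described in the question, using SHA-256 over a canonical serialization of each entry. The in-memory set stands in for the external hash store (e.g. a DynamoDB table); all names are illustrative, not part of the original pipeline.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Hash a canonical serialization of the entry so logically equal
    entries always produce the same digest."""
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Stand-in for the external hash store (e.g. a DynamoDB table).
seen_hashes = set()

def filter_new(entries):
    accepted = []
    for entry in entries:
        h = entry_hash(entry)
        if h in seen_hashes:
            continue              # duplicate: reject the entry entirely
        seen_hashes.add(h)
        accepted.append(entry)
    return accepted

batch = [{"id": 1, "amount": 10.0}, {"id": 1, "amount": 10.0}, {"id": 2, "amount": 5.5}]
print(filter_new(batch))          # the second copy of id 1 is dropped
```

With a 256-bit digest, an accidental collision between two genuinely different entries is so improbable that the rest of the pipeline is far more likely to fail first; perfect hashing is the only way to be literally 100% sure, and it requires knowing the full key set in advance.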

Kafka: Generating unique IDs for strings across partitions

I'm trying to assess whether Kafka could be used to scale out our current solution.
I can identify partitions easily. Currently the requirement is for 1,500 partitions, each having 1-2 events per second, but in the future that might go as high as 10,000 partitions.
But there is one part of our solution that I don't know how to solve in Kafka.
The problem is that each message contains a string and I want to assign a unique ID to each string across the whole topic. So same strings have the same ID while different strings have different IDs. The IDs don't need to be sequential, nor do they need to be always-growing.
The IDs will then be used down-stream as unique keys to identify those strings. The strings can be hundreds of characters long, so I don't think they would make efficient keys.
More advanced usage would be where messages might have different "kinds" of strings, so there would be multiple unique sequences of IDs. And messages will contain only some of those kinds depending on the type of the message.
Another advanced usage would be that the values are not strings but structures, and whether two structures are the same is decided by some more elaborate rule, e.g. if PropA is equal, then the structures are equal; if not, then the structures are equal if PropB is equal.
To illustrate the problem: each partition is a computer in a network. Each event is an action on the computer. Events need to be ordered per computer so that events that change the state of the computer (e.g. a user logged in) can affect other types of events, and ordering is critical for that. E.g. the user opened an application, a file is written, a flash drive is inserted, etc. And I need each application, file, flash drive, and many other things to have unique identifiers across all computers. This is then used to calculate statistics downstream. And sometimes an event can refer to multiple of those, e.g. an operation on a specific file on a specific flash drive.
There is a very nice post about Kafka and blockchain. It is collective work, and I think it could solve your ID scalability issue. For the solution, refer to the "Blockchain: reasons." part. All credit goes to the respective authors.
The idea is simple yet efficient (a minimal sketch follows the list below):
Data is hash-based, with a link to the previous block
The data may well just be hashes, with links to the respective blocks of each type
A custom blockchain solution means you are in control of data encoding/decoding
Each hash chain is self-contained, and may essentially represent one of your processes (HDD/RAM/CPU/word/app, etc.)
Each hash chain may be a message itself
Bonus: statistics and analytics may very well be stored in the blockchain too, with good support for compression and replication. Consumers are pretty cheap in that context (scalability).
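Here is a minimal sketch of that idea, with entirely made-up field names: each value gets a deterministic content-derived ID (so identical strings always map to the same ID, on any partition), and each block links to the previous block's hash, preserving per-chain order.

```python
import hashlib
import json

def content_id(value: str) -> str:
    # Deterministic ID: the same string always yields the same ID,
    # no matter which partition or consumer computes it.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:16]

def make_block(prev_hash: str, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + body).encode("utf-8")).hexdigest()
    return {"prev": prev_hash, "hash": block_hash, "payload": payload}

chain, prev = [], "0" * 64
for event in [{"file": "report.doc"}, {"file": "report.doc"}, {"file": "notes.txt"}]:
    event["file_id"] = content_id(event["file"])   # same file -> same ID everywhere
    block = make_block(prev, event)
    chain.append(block)
    prev = block["hash"]

for b in chain:
    print(b["payload"]["file_id"], b["hash"][:12])
```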
Pros:
Unique identifier issue solved
All records are linked and, thanks to Kafka and the blockchain structure, highly ordered
Data extendable
Kafka properties applied
Cons:
Encryption/Decryption is CPU intensive
Growing level of hash calculation complexity
Problem: without more context it's hard to estimate the limitations that would need to be addressed further. However, assuming the computed solution is finite in nature, you should have no issues scaling it in the usual way.
Bottom line:
Without knowing the requirements in terms of speed/cost/quality it's hard to give a better, well-backed answer with a working example. Extra cloud CPU may be comparatively cheap; data storage depends on how long and how much data you want to store, replayability, etc. It's a good chunk of work. A prototype? The concept is in the referenced article.

In NoSQL, how do you handle massive updates to common dependant data?

I really want to understand the NoSQL approach, but some aspects baffle me. And the most prominent docs don't seem to address them (none that I've found so far, anyway).
For example, I'm looking at the CouchDB website...
Self-Contained Data
An invoice contains all the pertinent information about a single transaction: the seller, the buyer, the date, and a list of the items or services sold. [...] With self-contained documents, there's no abstract reference on this piece of paper that points to some other piece of paper with the seller's name and address. Accountants appreciate the simplicity of having everything in one place. And given the choice, programmers appreciate that, too.
By "abstract reference" I think they mean an FK, right? And in an analogous SQL DB the "some other piece of paper" would be a row in a sellers table?
OK, but what happens when it turns out someone messed up and the seller's address is actually on Maple Avenue, not Maple Lane? And you have 96,487 invoices that say Maple Lane.
What is the orthodox NoSQL way of dealing with that inevitability?
Do you scan your 4.8 million invoice "documents" for the 96k with "Lane", dredge them up, and execute 96k writes?
And if so, in this described CouchDB-based app, WHO goes in and performs that? Because, guessing here, but I imagine your front end probably doesn't have a view with a Seller form. Because your sellers are all embedded inside invoices, right? So in NoSQL, does this sort of data correction & maintenance become the DBA's job?
(Also, do you actually repeat all of the seller's info on every single invoice involving that seller? Doesn't that get expensive?)
And in a huge, busy system, how do you ensure that all that repeated seller data is correct and consistent?
I'm considering which storage technology to look at for a series of upcoming projects. NoSQL is obviously extremely popular and widely adopted. In some domains it's kind of the "Golden Path"/default choice. If I want to use PostgreSQL with Node.js I'll have to scrounge for info about less popular libraries and support.
So there's significant real-world pressure towards MongoDB, CouchDB, etc.
Yet in the systems I'm designing, the questions I mention above are going to really matter. Is there a proven, established, and practical way of addressing these concerns?
What is the orthodox NoSQL way of dealing with that inevitability?
Two possible approaches:
Essentially the same as the pre-SQL (i.e. paper filing cabinets) way:
Update the master file for the customer.
Use the new address on all new invoices.
Historical invoices will continue to have wrong data. But that's okay, and arguably even better than the RDBMS way, because it accurately reflects history.
Go to the extra work of updating all the affected documents. With properly built indexes or views, this isn't that hard (you won't have to scan all 4.8 million invoices--your view will direct you straight to the ones actually affected by the change); see the sketch below.
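As an illustration of such a view, here is how one might define and query a CouchDB view that indexes invoices by seller name. The database URL and document fields are assumptions for the sketch; the map function is ordinary CouchDB JavaScript, embedded as a string and shipped over the HTTP API.

```python
import requests

COUCH = "http://localhost:5984/invoices"   # assumed database URL

# Design document with a view that emits each invoice keyed by seller name.
design = {
    "views": {
        "by_seller": {
            "map": "function (doc) {"
                   "  if (doc.type === 'invoice') { emit(doc.seller.name, null); }"
                   "}"
        }
    }
}
requests.put(f"{COUCH}/_design/invoices", json=design)

# Fetch only the invoices for one seller instead of scanning every document.
resp = requests.get(f"{COUCH}/_design/invoices/_view/by_seller",
                    params={"key": '"Acme Corp"', "include_docs": "true"})
for row in resp.json()["rows"]:
    print(row["doc"]["_id"], row["doc"]["seller"]["address"])
```

CouchDB builds and maintains the view incrementally, so finding the affected invoices is an index lookup rather than a full scan; fixing them is then a batch of updates to just those documents.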
I imagine your front end probably doesn't have a view with a Seller form.
Why not? If you do seller-based queries, I sure hope you have a seller-based view (or several).
Because your sellers are all embedded inside invoices, right?
That's irrelevant. Views can index any part of the data.
do you actually repeat all of the seller's info on every single invoice involving that seller?
Of course. You would repeat it every time you print an invoice on paper, right? Your database document is a "document", same as a printed invoice is.
Doesn't that get expensive?
If you're storing your entire database on a mobile phone, maybe. Otherwise, hard drives are cheap these days.
Yet in the systems I'm designing, the questions I mention above are going to really matter.
NoSQL isn't right for every job. If transactional integrity is important (and it likely is for a financial app like the one you seem to be discussing), it likely is not the right tool.
Think of CouchDB as a sync protocol with a database tacked on for good luck.
If your core feature is the ability to sync, then CouchDB is probably a good fit. If that's not a feature core to your application, then it's probably the wrong tool for the job.

Item recommendation service

I'm supposed to make a book recommendation service using MyMediaLite. So far I have collected books from a website using the Nutch crawler and I'm storing the info in HBase. The problem is that I don't fully understand how all of this works. Judging from the examples, I have to pass test data and training data files with user-item ID pairs and ratings. But what about other information about a book, like categories and authors? How is it possible to find "similar" books by their information etc., without information about users (so far)? Is it possible to pass data directly from HBase, without storing it to a file and then loading it in?
Or is Apache Mahout or LibRec better suited for this job?
User-item-rating information, often in a matrix, is the basis for collaborative filtering algorithms (user-user CF, item-item CF, matrix factorization, and others). You're using other people's opinions to form recommendations. There's no innate recognition of the content of the items themselves. For that, you'll need some sort of content-based filtering algorithm or data mining technique. These are often used in the "user cold start" scenario you described: you have lots of information about items but not about a particular user's preferences.
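To make the distinction concrete, here is a toy item-item collaborative-filtering sketch over a user-item rating matrix (made-up ratings, cosine similarity). Note that nothing about the books themselves (categories, authors) is used; that is exactly what content-based filtering would add.

```python
import numpy as np

# Rows = users, columns = books, values = ratings (0 = not rated).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 0],
    [0, 0, 4, 5],
    [1, 0, 5, 4],
], dtype=float)

# Cosine similarity between item (column) vectors.
norms = np.linalg.norm(ratings, axis=0)
sim = (ratings.T @ ratings) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)

# The book most similar to book 0, judged purely from co-ratings.
print("Most similar to book 0:", int(np.argmax(sim[0])))
```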
First, think about your end goal and the data you have. Based on your product needs and available data, you can choose the right algorithm for your purposes. I highly recommend the RecSys course on Coursera for learning more: https://www.coursera.org/learn/recommender-systems. It's taught by a leader in the field.

Words Prediction - Get most frequent predecessor and successor

Given a word I want to get the list of most frequent predecessors and successors of the word in English language.
I have developed code that does bigram analysis on any corpus (I have used the Enron email corpus) and can predict the most frequent next possible word, but I want some other solution because:
a) I want to check the working / accuracy of my prediction
b) Corpus or dataset based solutions fail for an unseen word
For example, given the word "excellent" I want to get the words that are most likely to come before and after "excellent".
My question is whether any particular service or API exists for this purpose.
Any solution to this problem is bound to be a corpus-based method; you just need a bigger corpus. I'm not aware of any web service or library that does this for you, but there are ways to obtain bigger corpora:
Google has published a huge corpus of n-grams collected from the English part of the web. It's available via the Linguistic Data Consortium (LDC), but I believe you must be an LDC member to obtain it. (Many universities are.)
If you're not an LDC member, try downloading a Wikipedia database dump (get enwiki) and training your predictor on that.
If you happen to be using Python, check out the nice set of corpora (and tools) delivered with NLTK.
As for the unseen words problem, there are ways to tackle it, e.g. by replacing all words that occur less often than some threshold by a special token like <unseen> prior to training. That will make your evaluation a bit harder.
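A minimal sketch of that <unseen> trick (the threshold of 2 is arbitrary):

```python
from collections import Counter

def replace_rare(tokens, min_count=2, unk="<unseen>"):
    """Replace every token occurring fewer than min_count times with a single
    placeholder so the model has statistics for unknown words too."""
    counts = Counter(tokens)
    return [t if counts[t] >= min_count else unk for t in tokens]

corpus = "the food was excellent and the service was excellent but pricey".split()
print(replace_rare(corpus))
```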
You have got to give some more instances or context for an "unseen" word so that the algorithm can make some inference.
One indirect way could be to read the rest of the words in the sentence and look in a dictionary for the words alongside which those words are usually encountered.
In general, you can't expect the algorithm to learn and understand the inference the first time. Think about yourself: if you were given a new word, how well could you make out its meaning? Probably by looking at how it has been used in the sentence and at how good your own understanding is; you make an educated guess, and over a period of time you come to understand the meaning.
I just re-read the original question and I realize the answers, mine included, got off base. I think the original poster just wanted to solve a simple programming problem, not look for datasets.
If you list all distinct word-pairs and count them, then you can answer your question with simple math on that list.
Of course you have to do a lot of processing to generate the list. While it's true that if the total number of distinct words is as much as 30,000 then there are nearly a billion possible pairs, I doubt that in practice there are anywhere near that many. So you can probably make a program with a huge hash table in memory (or on disk) and just count them all. If you don't need the insignificant pairs you could write a program that flushes out the less important ones periodically while scanning. Also, you can segment the word list and generate pairs of a hundred words versus the rest, then the next hundred and so on, and calculate in passes.
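A minimal sketch of that counting approach, using an in-memory Counter as the "huge hash table" (for a really big corpus you would swap in an on-disk store and the periodic flushing described above):

```python
from collections import Counter

text = ("the meal was excellent service and the view was excellent overall "
        "an excellent evening").split()

predecessors = Counter()
successors = Counter()
for prev, nxt in zip(text, text[1:]):        # every adjacent word pair
    if nxt == "excellent":
        predecessors[prev] += 1
    if prev == "excellent":
        successors[nxt] += 1

print("before 'excellent':", predecessors.most_common(3))
print("after  'excellent':", successors.most_common(3))
```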
My original answer is here; I'm leaving it because it's about my own related question:
I'm interested in something similar (I'm writing an entry system that suggests word completions and punctuation, and I would like it to be multilingual).
I found a download page for Google's n-gram files, but they're not that good; they're full of scanning errors: 'i's become '1's, words run together, etc. Hopefully Google has improved its scanning technology since then.
The just-download-Wikipedia-unpack-it-and-strip-the-XML idea is a bust for me; I don't have a fast computer (heh, I have a choice between an Atom netbook here and an Android device). Imagine how long it would take me to unpack a 3-gigabyte bz2 file into what, 100 gigabytes of XML, then process it with Beautiful Soup and filters that the author admits crash partway through each file and need to be restarted.
For your purpose (previous and following words) you could create a dictionary of real words and filter the ngram lists to exclude the mis-scanned words. One might hope that the scanning was good enough that you could exclude misscans by only taking the most popular words... But I saw some signs of constant mistakes.
The n-gram datasets are here, by the way: http://books.google.com/ngrams/datasets
This site may have what you want: http://www.wordfrequency.info/