Suppressing renumbering of ordered lists in export - org-mode

I would like to refer to a few of Alan Perlis' Epigrams on Programming by their original numbers, but in an Org Mode ordered list.
When I export my document, the numbers I provide for the list items are discarded and replaced with new numbers, beginning with 1.
The raw source text:
#+begin_example
A few of Alan J. Perlis\rsquo{} [[http://www-pu.informatik.uni-tuebingen.de/users/klaeren/epigrams.html][Epigrams on Programming]]:
8. A programming language is low level when its programs require attention to the irrelevant.
15. Everything should be built top-down, except the first time.
31. Simplicity does not precede complexity, but follows it.
54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
#+end_example
The text as rendered, and renumbered, by export:
#+begin_quote
A few of Alan J. Perlis\rsquo{} [[http://www-pu.informatik.uni-tuebingen.de/users/klaeren/epigrams.html][Epigrams on Programming]]:
1. A programming language is low level when its programs require attention to the irrelevant.
2. Everything should be built top-down, except the first time.
3. Simplicity does not precede complexity, but follows it.
4. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
#+end_quote

You can set the item number to whatever you want by beginning the text of the item with [@8] (for example). See the Org manual's documentation on plain lists.
A working example:
A list with custom ordering:
#+begin_example
1. [@8] apple
1. [@77] orange
1. [@101] lime
#+end_example
When you export the document, the list numbers will be 8, 77, and 101.

diff text documents but ignore single character differences? Set a minimum edit distance filter?

I have two versions of a large book in txt format and I'd like to compare them to find significant changes between the versions, ignoring small single-character differences.
There are lots of diffing tools that can ignore whitespace differences, but I also want to ignore small typos and differences of one or two characters. For example, one version of the book has a repeated misspelling of leige hundreds of times, and this is corrected in the next version to liege. Some proper nouns have also changed their spelling. (I could make custom workarounds for each misspelling, but I would like something more general-purpose.)
Since I only care about more significant multi-word differences, what I really want is to set a filter that ignores changes to a line unless the Levenshtein edit distance is above some threshold.
Looking around, all the diff/comparison tools I find seem to have code in mind, so they lack any feature for ignoring small text changes. Google's diff_match_patch library is great for diffing plain text and ignoring whitespace changes (demo here), but it doesn't seem to have an out-of-the-box way to ignore single-character non-whitespace differences.
tl;dr: Are there any diff tools that can compare text documents but filter out minor single-character non-whitespace differences?
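(For reference, the thresholding idea itself is easy to sketch once corresponding lines have been paired up by an ordinary line-level diff; the pairing step is left to an existing tool or library. The Objective-C helpers below use illustrative names and are only a sketch of the filter, not a ready-made tool.)
#import <Foundation/Foundation.h>
#include <stdlib.h>
#include <string.h>
// Classic two-row dynamic-programming Levenshtein distance over UTF-16 code units.
static NSUInteger LevenshteinDistance(NSString *a, NSString *b) {
    NSUInteger n = a.length, m = b.length;
    NSUInteger *prev = calloc(m + 1, sizeof(NSUInteger));
    NSUInteger *cur  = calloc(m + 1, sizeof(NSUInteger));
    for (NSUInteger j = 0; j <= m; j++) prev[j] = j;
    for (NSUInteger i = 1; i <= n; i++) {
        cur[0] = i;
        for (NSUInteger j = 1; j <= m; j++) {
            NSUInteger cost = ([a characterAtIndex:i - 1] == [b characterAtIndex:j - 1]) ? 0 : 1;
            NSUInteger best = prev[j] + 1;                            // deletion
            if (cur[j - 1] + 1 < best)     best = cur[j - 1] + 1;     // insertion
            if (prev[j - 1] + cost < best) best = prev[j - 1] + cost; // substitution
            cur[j] = best;
        }
        memcpy(prev, cur, (m + 1) * sizeof(NSUInteger));
    }
    NSUInteger d = prev[m];
    free(prev);
    free(cur);
    return d;
}
// Keep a changed line pair only if it differs by more than `threshold` edits.
static BOOL IsSignificantChange(NSString *oldLine, NSString *newLine, NSUInteger threshold) {
    return LevenshteinDistance(oldLine, newLine) > threshold;
}
With a threshold of, say, 3, the leige/liege pairs would be filtered out (distance 2) while genuinely reworded lines would still show up.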
In Beyond Compare you can define "replacements".
An example:
Differences are marked red:
Then you can go to Session->Session Settings and set a replacement:
Or even easier: mark the text and define the replacement immediately:
Now the difference is unimportant and marked blue:
With one click you can ignore the unimportant differences (red arrow in the screenshot).
Technical remark: I use BC4, Pro edition.

Can you create a programming language with just one symbol?

Can you create a programming language with just one symbol, like brainfuck?
Yes, it has been done before - see Unary.
Basically it's a strange encoding of brainfuck. Each BF command is mapped to a short binary code. The whole program then becomes a single number, created by concatenating those codes together (with an extra 1 at the front, for unambiguous decoding). Write that number in the unary numeral system (i.e., the count of symbols is the number itself) and you're done.
Note however that programs in this language tend to be very large - a cat implemented in Unary is (according to the information on the page) 56623 characters long.
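To make the encoding concrete, here is a small Objective-C sketch that computes how many symbols the Unary version of a BF program needs. The 3-bit command table is my reading of the Unary spec (it does reproduce the 56623 figure quoted above for the classic cat program ",[.,]"); anything beyond toy programs overflows a 64-bit integer, so this is illustration only.
#import <Foundation/Foundation.h>
// Map each BF command to 3 bits, prepend a 1 bit, and read the result as a binary
// number; that number is how many copies of the single symbol the Unary program has.
static unsigned long long UnaryLength(NSString *bf) {
    NSDictionary *bits = @{ @">": @0, @"<": @1, @"+": @2, @"-": @3,
                            @".": @4, @",": @5, @"[": @6, @"]": @7 };
    unsigned long long n = 1;                       // the leading 1 bit
    for (NSUInteger i = 0; i < bf.length; i++) {
        NSString *cmd = [bf substringWithRange:NSMakeRange(i, 1)];
        NSNumber *code = bits[cmd];
        if (code == nil) continue;                  // ignore non-command characters
        n = (n << 3) | code.unsignedLongLongValue;  // append this command's 3 bits
    }
    return n;
}
// UnaryLength(@",[.,]") == 56623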
MGIFOS, Lenguage and Ellipsis follow the same principle. Note that, for example, a hello world in MGIFOS
has more characters than there are particles in the observable universe.
Len(language, encoding) then extends this principle to any language.
Languages with a single instruction are called OISCs: One-Instruction Set Computers.
The first one I know of is Melzak's Arithmetic Machine (1961), whose single instruction is:
z = x - y, or jump if y > x
You also have Zero Instruction Set Computers, which are more like neural nets.
Not forgetting the amazing FRACTRAN of Conway & Guy (1996), which has no instructions at all: it interprets a series of fractions (the program) in a Turing-complete way.
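For a feel of how little a one-instruction machine needs, here is a minimal Objective-C (plain C, really) sketch of a subleq-style interpreter. Subleq ("subtract and branch if less than or equal to zero") is the usual textbook OISC; Melzak's original machine differs in detail, so treat this purely as an illustration of the idea, with no bounds checking and no I/O.
// One instruction: mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
// A negative jump target (or running off the end of memory) halts the machine.
static void RunSubleq(long *mem, long memLen) {
    long pc = 0;
    while (pc >= 0 && pc + 2 < memLen) {
        long a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
}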

Zebra Programming Language (ZPL) II using ^FB or ^TB truncates text at specific lengths

I am writing code to print labels for botanic gardens. Each label is printed individually but with different information on each label. Each label contains a scientific name which can vary greatly in size and thus can go over 2 lines (our label size is 10cm wide by 2.5cm high).
Our problem occurs mainly with the name when it goes over 24 characters (see the line marked with ** below).
If we choose a name that has 24 characters or less, then it prints fine.
Anything more and it will not print.
If we take all the other "items" off the label and just leave the "name" element then it prints only the first 24 characters and truncates the rest (we did this to test whether a possible overlap between our ^FB block and another element could be causing this problem).
We tried this with other elements that use a ^FB and we found that they displayed the same behaviour but varied in the length at which this issue occurred: for example "cc" (short for country code) had a limit of 21 characters.
For added information: we compile this code within a BASIC environment and use variables such as ":name:" and ":accdt:", as seen below. Our database provides this information, and we have checked for any internal routines that could have truncated long names. Our code was working fine in ZPL, but we recently had to move to ZPL II (we purchased a newer model, a GX430t) and had to modify our ZPL code, at which point this problem started to occur.
Here is our code:
^XA
^LH40,40
^MMT
^PW1200
^LL1200
^FO16,05^A0N,35,^FDAcc. num.^FS
^FO170,05^A0,35,^FV":accnum:"^FS
^FO360,05^A0,35,^FV":qual:"^FS
^FO350,35^A0N,30,^FDAcc.dt.^FS
^FO450,35^A0N,30,^FB790,3,0,L,
^FH\^FV":accdt:"^FS
^FO430,70^^A0N,25,^FB790,3,0,L,
^FH\^FDProv. type^FS
^FO560,70^A0N,25,^FV":provtype:"^FS
^FO800,225^A0N,30,^FB790,3,0,L,
^FV":cc:"^FS
**^FO10,100^A0N,40,^FB790,3,0,L,
^FV":name:"^FS**
^FO1000,05^A0,35,^FV":proptype:"^FS
^FO5,225^A0,25^FVColl.^FS
^FO55,225^A0,25^FV":coll:"^FS
^FO375,225^A0,25,^FV":consstat:"^FS
^FO1000,70^A0,25,^FV":reqby:"^FS
^FO535,180^BCN,55,N,N,N^FV":qual:"^FS
^FO60,45^BCN,35,N,N,N^FV":accnum:"^FS
^PQ1,0,1,Y
^XZ
Here is what we have tried to fix this (apologies if some seem like shots in the dark):
Changing font type, size, and location on label;
Changing ^FO to ^FT;
Looked at our internal database logic;
Taking away ^FH\;
Changing the values within the ^FB line (we tried nearly all possible permutations);
Manually typed in a name longer than 24 characters (using notepad - no database/compiler) - same issue.
Any thoughts on this would be greatly appreciated.
Kerry
I've had this issue before, and across printer manufacturers, firmwares and languages.
First, some paraphrased explanations straight out of the 2014 ZPL II Programming Guide (P1012728-009 Rev. A).
"The ^TB command prints a text block with defined width and height. The text block has an automatic word-wrap function. If the text exceeds the block height, the text is truncated."
"The ^FB (Field Block) command allows you to print text into a defined block type format. It can format a ^FD (Field Data) string into a block of text using the origin, font, and
rotation specified for the text string, and it contains an automatic word-wrap function."
Technically, the difference between a text block and a field block is that height is in dots for the former and in lines for the latter.
Also notice that, although it is not mentioned, the ^FB command also truncates text that does not fit in the number of lines specified. This is where the font size of the ^A0 command and the line spacing of the ^FB command play an important role in determining whether that second or third line is shown or truncated.
Incidentally, in other languages such as TSPL there is no truncation of text blocks: if you tell the block to be 3 lines high but there is enough text for 4 lines, line 4 overlaps line 3 to indicate this. That may seem awful, but it is better than the silent data loss of truncation.
For both commands:
"Using ^FT (Field Typeset) for your data takes the baseline origin of
the last possible line of text, meaning that the field block will be
filled from bottom to top."
"Using ^FO (Field Origin) means that the field block will be filled from top to bottom."
In reality, I have only been able to make the ^FB command work as expected, but that may be because ^TB is not implemented in the firmware I've worked with (ZPL II "compliant" Bluetooth printers).
You can test the following snippet for a 2x2 label in the Labelary Viewer:
^XA
~TA0
^MTD
^MNW
^MMT
^MFN
~SD15
^PR6
^PON
^PMN
^PW406
^LS0
^LRN
^LL406
^LT0
^LH0,0
^CI0
^XZ
^XA
^FO324,10,0^FB386,2,0,C,0^A0R,36,28.8^FH^FD"The King" Cupcake^FS
^FO278,10,0^FB386,1,0,C,0^A0R,28,22.4^FH^FDUse By 11/24/2015 02:45 PM^FS
^FO152,10,0^FB386,1,0,C,0^A0R,24,19.2^FH^FD11/24/2015 02:45 PM^FS
^FO62,140,0^FB250,1,0,R,0^A0R,24,19.2^FH^FDSL: 4 hours^FS
^FO38,10,0^FB386,1,0,L,0^A0R,18,14.4^FH^FDPREP DATE:^FS
^FO8,10,0^FB386,1,0,L,0^A0R,28,22.4^FH^FD11/24/2015 10:45 AM^FS
^FO62,10,0^FB50,1,0,L,0^A0R,24,19.2^FH^FDEMP:^FS
^FO92,10,0^FB376,3,0,J,0^A0R,18,14.4^FH^FDIngredients: 1 1/2 cups all-purpose flour, 1 teaspoon baking powder, 1/2 teaspoon salt, 8 tablespoons (1 stick) unsalted butter, room temperature, 1 cup sugar, 3 large eggs, 1 1/2 teaspoons pure vanilla extract, 3/4 cup milk.^FS
^PQ3,,,Y
^XZ
In particular, I've preceded the ^A0 and ^FD commands with ^FB. Using the viewer, you can quickly test the effects of switching between ^FT and ^FO in the ingredients line, of changing the ^A0 font sizes, and of changing the ^FB number of lines from, say, 3 to 2 (note that the viewer does not truncate text).
Of course, there is no substitute for actually printing a label, since your ZPL II "compliant" printer may or may not truncate text depending on its manufacturer and firmware version.
I hope that helps!

Count the number of words in NSString

I'm trying to implement a word count function for my app that uses UITextView.
There's a space between two words in English, so it's really easy to count the number of words in an English sentence.
The problem occurs with Chinese and Japanese word counting because usually there are no spaces in the entire sentence.
I checked three different iPad text editors that have a word count feature and compared them with MS Word.
For example, here's a series of Japanese characters meaning the world's idea: 世界(the world)の('s)アイデア(idea)
世界のアイデア
1) Pages for iPad and MS Word count each character as one word, so the phrase counts as 7 words.
2) iPad text editor P*** counts the entire phrase as one word --> it just uses spaces to separate words.
3) iPad text editor i*** counts it as three words --> I believe it uses CFStringTokenizer with kCFStringTokenizerUnitWord (because I could get the same result).
I've researched this on the Internet, and the word counting in Pages and MS Word seems to be correct because each Chinese character has a meaning.
I couldn't find any class that counts words the way Pages or MS Word do, and it would be very hard to implement from scratch because, besides Japanese and Chinese, the iPad supports a lot of other foreign languages.
I think CFStringTokenizer with kCFStringTokenizerUnitWord is the best option though.
Is there a way to count words in an NSString like Pages and MS Word do?
Thank you
I recommend sticking with CFStringTokenizer. Because it is a platform feature, it will improve as the platform is upgraded, and many people at Apple work hard to reflect real cultural differences that are hard for ordinary developers to know about.
This is hard because it is not essentially a programming problem; it is a human cultural and linguistic problem. You would need a language specialist for each culture. For Japanese, you need a Japanese culture specialist. However, I don't think Japanese people need a word count feature that badly, because, as I understand it, the concept of a word itself is not so important in Japanese culture. You should define the concept of a word first.
And I can't understand why you want to force the concept of a word count onto a character count. Take the Kanji word you gave as an example: splitting it by meaning is like counting "universe" as 2 words by splitting it into uni + verse. That is not even logical. Splitting a word by its meaning is sometimes completely wrong and useless by the very definition of a word, because the definition of a word itself differs across cultures. In my language, Korean, a word is just a formal unit, not a unit of meaning. The idea that each word maps to a meaning really only holds in Roman-character cultures.
Just offer another feature, like character counting, for users in East Asia if you think it is needed. Counting characters in a Unicode string is easy with the -[NSString length] method.
I'm a Korean speaker (so maybe outside your case :), and in many cases we count characters instead of words. In fact, I have never seen people counting words in my whole life. I laughed at the word counting feature in MS Word because I guessed nobody would use it. (However, now I know it's important in Roman-character cultures.) I have used the word counting feature only once, just to see that it really works :) I believe this is similar in Chinese or Japanese. Maybe Japanese users use word counting because their basic alphabet is similar to Roman characters, which have no concept of composition; however, they also use Kanji heavily, which is a completely compositional, character-centric system.
If you make a word counting feature work perfectly on those languages (whose users do not even feel any need to split sentences into smaller formal units!), it's hard to imagine who would use it. And without a linguistic specialist, the feature will not be correct.
This is a really hard problem if your string doesn't contain tokens identifying word breaks (like spaces). One way I know of, derived from attempting to solve anagrams, is this:
At the start of the string you start with one character. Is it a word? It could be a word like "A" but it could also be a part of a word like "AN" or "ANALOG". So the decision about what is a word has to be made considering all of the string. You would consider the next characters to see if you can make another word starting with the first character following the first word you think you might have found. If you decide the word is "A" and you are left with "NALOG" then you will soon find that there are no more words to be found. When you start finding words in the dictionary (see below) then you know you are making the right choices about where to break the words. When you stop finding words you know you have made a wrong choice and you need to backtrack.
A big part of this is having dictionaries sufficient to contain any word you might encounter. For English the resource would be TWL06 or SOWPODS or another scrabble dictionary, since these contain many obscure words. You need a lot of memory to do this: if you check the words against a simple array containing all of the possible words, your program will run incredibly slowly. If you parse your dictionary once, persist it as a plist and recreate the dictionary from that, your checking will be quick enough, but it will require a lot more space on disk and more space in memory. One of these big scrabble dictionaries can expand to about 10MB, with the actual words as keys and a simple NSNumber as a placeholder for the value - you don't care what the value is, just that the key exists in the dictionary, which tells you that the word is recognised as valid.
If you maintain an array as you count, you get to do [array count] in a triumphant manner as you add the last word containing the last characters to it, but you also have an easy way of backtracking. If at some point you stop finding valid words, you can pop the lastObject off the array and put it back at the start of the remaining string, then start looking for alternative words. If that fails to get you back on the right track, pop another word.
I would proceed by experimentation, looking for a potential three words ahead as you parse the string - when you have identified three potential words, take the first away, store it in the array and look for another word. If you find it is too slow to do it this way and you are getting OK results considering only two words ahead, drop it to two. If you find you are running up too many dead ends with your word division strategy then increase the number of words ahead you consider.
Another way would be to employ natural language rules - for example "A" and "NALOG" might look OK because a consonant follows "A", but "A" and "ARDVARK" would be ruled out because it would be correct for a word beginning with a vowel to follow "AN", not "A". This can get as complicated as you like to make it - I don't know whether this gets simpler in Japanese or not, but there are certainly common verb endings like "masu".
(edit: started a bounty, I'd like to know the very best way to do this if my way isn't it.)
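A condensed Objective-C sketch of the dictionary-plus-backtracking idea described above. The tiny word set is a stand-in for a real TWL06/SOWPODS dictionary, and this variant tries the longest candidate first; it is only an illustration, not a tuned implementation.
#import <Foundation/Foundation.h>
// Recursive word segmentation against a dictionary, with backtracking: try every
// dictionary word that prefixes the remaining text and give up on a branch
// (return nil) when nothing fits, so the caller tries a shorter word instead.
static NSArray *Segment(NSString *text, NSSet *dictionary) {
    if (text.length == 0) return @[];                    // nothing left to match: success
    for (NSUInteger len = text.length; len >= 1; len--) {
        NSString *head = [text substringToIndex:len];
        if (![dictionary containsObject:head]) continue;
        NSArray *rest = Segment([text substringFromIndex:len], dictionary);
        if (rest != nil) return [@[head] arrayByAddingObjectsFromArray:rest];
    }
    return nil;                                          // dead end: caller backtracks
}
// Example with a toy dictionary:
//   NSSet *dict = [NSSet setWithObjects:@"a", @"an", @"analog", @"log", nil];
//   Segment(@"analog", dict) -> @[ @"analog" ]; the word count is then just [result count].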
If you are using iOS 4, you can do something like
// Count the word boundaries that Foundation itself identifies.
NSRange range = NSMakeRange(0, [string length]);
__block int count = 0;
[string enumerateSubstringsInRange:range
                           options:NSStringEnumerationByWords
                        usingBlock:^(NSString *word,
                                     NSRange wordRange,
                                     NSRange enclosingRange,
                                     BOOL *stop)
{
    count++;
}];
More information in the NSString class reference.
There is also a WWDC 2010 session (number 110) about advanced text handling that explains this, around minute 10 or so.
I think CFStringTokenizer with kCFStringTokenizerUnitWord is the best option though.
That's right, you have to iterate through the text and simply count the number of word tokens encountered along the way.
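A minimal Objective-C sketch of that loop, using CFStringTokenizer with kCFStringTokenizerUnitWord and the current locale (the function name is just illustrative):
#import <Foundation/Foundation.h>
// Count the word tokens CFStringTokenizer finds in a string.
static NSUInteger WordTokenCount(NSString *text) {
    CFLocaleRef locale = CFLocaleCopyCurrent();
    CFStringTokenizerRef tokenizer =
        CFStringTokenizerCreate(kCFAllocatorDefault,
                                (__bridge CFStringRef)text,
                                CFRangeMake(0, (CFIndex)text.length),
                                kCFStringTokenizerUnitWord,
                                locale);
    NSUInteger count = 0;
    while (CFStringTokenizerAdvanceToNextToken(tokenizer) != kCFStringTokenizerTokenNone) {
        count++;
    }
    CFRelease(tokenizer);
    CFRelease(locale);
    return count;
}
This should reproduce the three-word count the question attributes to CFStringTokenizer for 世界のアイデア.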
Not a native Chinese/Japanese speaker, but here's my two cents.
Each Chinese character does have a meaning, but the concept of a word is a combination of letters/characters that represents an idea, isn't it?
In that sense, there are probably 3 words in "sekai no aidia" (or 2 if you don't count particles like NO/GA/DE/WA, etc.). Same as English - "world's idea" is two words, while "idea of world" is 3, and let's forget about the required "the", hehe.
That given, counting words is not as useful in non-Roman languages, in my opinion, similar to what Eonil mentioned. It's probably better to count the number of characters for those languages. Check with Chinese/Japanese native speakers and see what they think.
If I were to do it, I would tokenize the string on spaces and particles (at least for Japanese and Korean) and count the tokens. Not sure about Chinese.
With Japanese you can create a grammar parser and I think it is the same with Chinese. However, that is easier said than done because natural language tends to have many exceptions, but it is not impossible.
Please note it won't really be efficient since you have to parse each sentence before being able to count the words.
I would also recommend using a parser generator rather than building the parser yourself, so that at the very least you can concentrate on writing the grammar instead of the parser. It's not efficient, but it should get the job done.
Also, have a fallback algorithm in case your grammar doesn't parse the input correctly (perhaps the input really didn't make sense to begin with); in that case you can fall back to the length of the string to make things easier on yourself.
If you build it, there could be a market opportunity for you to use it as a natural language Domain Specific Language for Japanese/Chinese business rules as well.
Just use the length method:
[#"世界のアイデア" length]; // is 7
That being said, as a Japanese speaker, I think 3 is the right answer.

Theory: "Lexical Encoding"

I am using the term "Lexical Encoding" for my lack of a better one.
A Word is arguably the fundamental unit of communication as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language, is a Glyph to another. Unicode 5.1 assigns more than 100,000 values to these Glyphs currently. Out of the approximately 180,000 Words being used in Modern English, it is said that with a vocabulary of about 2,000 Words, you should be able to converse in general terms. A "Lexical Encoding" would encode each Word not each Letter, and encapsulate them within a Sentence.
// A simplified example of a "Lexical Encoding"
String sentence = "How are you today?";
int[] encoded  = { 93, 22, 14, 330, QUERY };
In this example each Token in the String was encoded as an Integer. The Encoding Scheme here simply assigned an int value based on generalised statistical ranking of word usage, and assigned a constant to the question mark.
Ultimately, a Word has both a Spelling & Meaning though. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning ..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure.
What are other examples of "Lexical Encoding" techniques?
If you were interested in where the word-usage statistics come from:
http://www.wordcount.org
This question impinges on linguistics more than programming, but for languages which are highly synthetic (having words which are comprised of multiple combined morphemes), it can be a highly complex problem to try to "number" all possible words, as opposed to languages like English which are at least somewhat isolating, or languages like Chinese which are highly analytic.
That is, words may not be easily broken down and counted based on their constituent glyphs in some languages.
This Wikipedia article on Isolating languages may be helpful in explaining the problem.
There are several major problems with this idea. In most languages, the meaning of a word, and the word associated with a meaning, change very swiftly.
No sooner would you have a number assigned to a word than the meaning of the word would change. For instance, the word "gay" used to mean only "happy" or "merry", but it is now used mostly to mean homosexual. Another example is the morpheme "thank you", which originally came from German "danke", which is just one word. Yet another example is "goodbye", which is a shortening of "God be with ye".
Another problem is that even if one takes a snapshot of a word at any point of time, the meaning and usage of the word would be under contention, even within the same province. When dictionaries are being written, it is not uncommon for the academics responsible to argue over a single word.
In short, you wouldn't be able to do it with an existing language. You would have to consider inventing a language of your own, for the purpose, or using a fairly static language that has already been invented, such as Interlingua or Esperanto. However, even these would not be perfect for the purpose of defining static morphemes in an ever-standard lexicon.
Even in Chinese, where there is rough mapping of character to meaning, it still would not work. Many characters change their meanings depending on both context, and which characters either precede or postfix them.
The problem is at its worst when you try and translate between languages. There may be one word in English, that can be used in various cases, but cannot be directly used in another language. An example of this is "free". In Spanish, either "libre" meaning "free" as in speech, or "gratis" meaning "free" as in beer can be used (and using the wrong word in place of "free" would look very funny).
There are other words which are even more difficult to place a meaning on, such as the word beautiful in Korean; when calling a girl beautiful, there would be several candidates for substitution; but when calling food beautiful, unless you mean the food is good looking, there are several other candidates which are completely different.
What it comes down to is that although we only use about 200k words in English, our vocabularies are actually larger in some respects because we assign many different meanings to the same word. The same problems apply to Esperanto and Interlingua, and to every other language meaningful for conversation. Human speech is not a well-defined, well-oiled machine. So, although you could create such a lexicon where each "word" had its own unique meaning, it would be very difficult, and nigh on impossible, for machines using current techniques to translate from any human language into your special standardised lexicon.
This is why machine translation still sucks, and will for a long time to come. If you can do better (and I hope you can) then you should probably consider doing it with some sort of scholarship and/or university/government funding, working towards a PhD; or simply make a heap of money, whatever keeps your ship steaming.
It's easy enough to invent one for yourself. Turn each word into a canonical bytestream (say, lower-case decomposed UCS32), then hash it down to an integer. 32 bits would probably be enough, but if not then 64 bits certainly would.
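A sketch of that in Objective-C, assuming "canonical" means lower-cased and canonically decomposed, serialised as UTF-32, and hashed with 64-bit FNV-1a (the function name and the choice of hash are mine, purely for illustration):
#import <Foundation/Foundation.h>
// Canonicalise a word and hash it down to a 64-bit identifier.
static uint64_t LexicalCode(NSString *word) {
    NSString *canonical = [[word lowercaseString] decomposedStringWithCanonicalMapping];
    NSData *bytes = [canonical dataUsingEncoding:NSUTF32LittleEndianStringEncoding];
    uint64_t hash = 0xcbf29ce484222325ULL;              // FNV-1a offset basis
    const uint8_t *p = (const uint8_t *)bytes.bytes;
    for (NSUInteger i = 0; i < bytes.length; i++) {
        hash ^= p[i];
        hash *= 0x100000001b3ULL;                       // FNV-1a prime
    }
    return hash;
}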
Before you ding me for giving you a snarky answer, consider that the purpose of Unicode is simply to assign each glyph a unique identifier. Not to rank or sort or group them, but just to map each one onto a unique identifier that everyone agrees on.
How would the system handle pluralization of nouns or conjugation of verbs? Would these each have their own "Unicode" value?
As a translation scheme, this is probably not going to work without a lot more work. You'd like to think that you can assign a number to each word, then mechanically translate that to another language. In reality, languages have the problem of multiple words that are spelled the same: "the wind blew her hair back" versus "wind your watch".
For transmitting text, where you'd presumably have an alphabet per language, it would work fine, although I wonder what you'd gain there as opposed to using a variable-length dictionary, like ZIP uses.
This is an interesting question, but I suspect you are asking it for the wrong reasons. Are you thinking of this 'lexical Unicode' as something that would allow you to break down sentences into language-neutral atomic elements of meaning and then be able to reconstitute them in some other concrete language? As a means to achieve a universal translator, perhaps?
Even if you can encode and store, say, an English sentence using a 'lexical Unicode', you cannot expect to read it and magically render it in, say, Chinese while keeping the meaning intact.
Your analogy to Unicode, however, is very useful.
Bear in mind that Unicode, whilst a 'universal' code, does not embody the pronunciation, meaning or usage of the character in question. Each code point refers to a specific glyph in a specific language (or rather the script used by a group of languages). It is elemental at the visual representation level of a glyph (within the bounds of style, formatting and fonts). The Unicode code point for the Latin letter 'A' is just that. It is the Latin letter 'A'. It cannot automagically be rendered as, say, the Arabic letter Alif (ﺍ) or the Devanagari letter 'A' (अ).
Keeping to the Unicode analogy, your Lexical Unicode would have code points for each word (word form) in each language. Unicode has ranges of code points for each specific script; your Lexical Unicode would have to have a range of codes for each language. Different words in different languages, even if they have the same meaning (synonyms), would have to have different code points. The same word having different meanings, or different pronunciations (homonyms), would have to have different code points.
In Unicode, for some languages (but not all) where the same character has a different shape depending on its position in the word - e.g. in Hebrew and Arabic, the shape of a glyph changes at the end of the word - it has a different code point. Likewise, in your Lexical Unicode, if a word has a different form depending on its position in the sentence, it may warrant its own code point.
Perhaps the easiest way to come up with code points for the English Language would be to base your system on, say, a particular edition of the Oxford English Dictionary and assign a unique code to each word sequentially. You will have to use a different code for each different meaning of the same word, and you will have to use a different code for different forms - e.g. if the same word can be used as a noun and as a verb, then you will need two codes.
Then you will have to do the same for each other language you want to include - using the most authoritative dictionary for that language.
Chances are that this exercise is more effort than it is worth. If you decide to include all the world's living languages, plus some historic dead ones and some fictional ones - as Unicode does - you will end up with a code space so large that your code would have to be extremely wide to accommodate it. You will not gain anything in terms of compression - it is likely that a sentence represented as a String in the original language would take up less space than the same sentence represented as codes.
P.S. For those who are saying this is an impossible task because word meanings change, I do not see that as a problem. To use the Unicode analogy, the usage of letters has changed (admittedly not as rapidly as the meaning of words), but it is of no concern to Unicode that the letter thorn ('th') used to be written in a form resembling 'y' in the Middle Ages. Unicode has a code point for 't', 'h' and 'y', and they each serve their purpose.
P.P.S. Actually, it is of some concern to Unicode that 'oe' is also 'œ', or that 'ss' can be written 'ß' in German.
This is an interesting little exercise, but I would urge you to consider it nothing more than an introduction to the concept of the difference in natural language between types and tokens.
A type is the word form itself, standing in for all of its occurrences; a token is a single count for each individual occurrence of the word. Let me explain this with the following example:
"John went to the bread store. He bought the bread."
Here are some frequency counts for this example, with the counts meaning the number of tokens:
John: 1
went: 1
to: 1
the: 2
store: 1
he: 1
bought: 1
bread: 2
Note that "the" is counted twice--there are two tokens of "the". However, note that while there are ten words, there are only eight of these word-to-frequency pairs. Words being broken down to types and paired with their token count.
Types and tokens are useful in statistical NLP. "Lexical encoding", on the other hand, I would watch out for. It is a segue into much more old-fashioned approaches to NLP, where preprogramming and rationalism abound. I don't know of any statistical MT system that actually assigns a specific "address" to a word. There are too many relationships between words, for one thing, to build any kind of well-thought-out numerical ontology, and if we're just throwing numbers at words to categorize them, we should be thinking about things like memory management and allocation for speed.
I would suggest checking out NLTK, the Natural Language Toolkit, written in Python, for a more extensive introduction to NLP and its practical uses.
Actually, you only need about 600 words for a half-decent vocabulary.