java.lang.NumberFormatException: For input string: "Some(12)" - scala

Can anyone tell me what is wrong with my code?
Below is my Spark code in Scala:
import java.text.SimpleDateFormat
import org.apache.spark.sql.SparkSession
import scala.xml.XML
object TopTenTags09 {
  def main(args: Array[String]) {
    val format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS")
    val format2 = new SimpleDateFormat("yyyy-MM")
    val spark = SparkSession.builder().appName("Number of posts which are questions and contains specified words").master("local").getOrCreate()
    val data = spark.read.textFile("/home/harsh/Hunny/HadoopPractice/Spark/DF/StackOverFlow/Posts.xml").rdd
    val result = data.filter { line => line.trim().startsWith("<row") }
      .filter { line => line.contains("PostTypeId=\"1\"") }
      .map { line =>
        val xml = XML.loadString(line)
        if (xml.attribute("Tags").mkString.toLowerCase().contains("hadoop") ||
            xml.attribute("Tags").mkString.toLowerCase().contains("spark")) {
          (Integer.parseInt(xml.attribute("Score").toString()), Integer.parseInt(xml.attribute("Score").toString()))
        }
      } /*.filter(line => line._1 > 2)
      .sortByKey(false)*/
    result.foreach(println) // throwing error while printing
    spark.stop
  }
}
And below is the error I am getting while running it:
java.lang.NumberFormatException: For input string: "Some(12)"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
I am new to Spark and the error is driving me crazy because, as mentioned in the error, there is no "Some" in the code or in the data. Can anyone help me, please?
Sample data
<row Id="5" PostTypeId="1" CreationDate="2014-05-13T23:58:30.457" Score="7" ViewCount="286" Body="<p>I've always been interested in machine learning, but I can't figure out one thing about starting out with a simple "Hello World" example - how can I avoid hard-coding behavior?</p>
<p>For example, if I wanted to "teach" a bot how to avoid randomly placed obstacles, I couldn't just use relative motion, because the obstacles move around, but I don't want to hard code, say, distance, because that ruins the whole point of machine learning.</p>
<p>Obviously, randomly generating code would be impractical, so how could I do this?</p>
" OwnerUserId="5" LastActivityDate="2014-05-14T00:36:31.077" Title="How can I do simple machine learning without hard-coding behavior?" Tags="<machine-learning>" AnswerCount="1" CommentCount="1" FavoriteCount="1" ClosedDate="2014-05-14T14:40:25.950" />
<row Id="7" PostTypeId="1" AcceptedAnswerId="10" CreationDate="2014-05-14T00:11:06.457" Score="2" ViewCount="266" Body="<p>As a researcher and instructor, I'm looking for open-source books (or similar materials) that provide a relatively thorough overview of data science from an applied perspective. To be clear, I'm especially interested in a thorough overview that provides material suitable for a college-level course, not particular pieces or papers.</p>
" OwnerUserId="36" LastEditorUserId="97" LastEditDate="2014-05-16T13:45:00.237" LastActivityDate="2014-05-16T13:45:00.237" Title="What open-source books (or other materials) provide a relatively thorough overview of data science?" Tags="<education><open-source>" AnswerCount="3" CommentCount="4" FavoriteCount="1" ClosedDate="2014-05-14T08:40:54.950" />
<row Id="9" PostTypeId="2" ParentId="5" CreationDate="2014-05-14T00:36:31.077" Score="4" Body="<p>Not sure if this fits the scope of this SE, but here's a stab at an answer anyway.</p>
<p>With all AI approaches you have to decide what it is you're modelling and what kind of uncertainty there is. Once you pick a framework that allows modelling of your situation, you then see which elements are "fixed" and which are flexible. For example, the model may allow you to define your own network structure (or even learn it) with certain constraints. You have to decide whether this flexibility is sufficient for your purposes. Then within a particular network structure, you can learn parameters given a specific training dataset.</p>
<p>You rarely hard-code behavior in AI/ML solutions. It's all about modelling the underlying situation and accommodating different situations by tweaking elements of the model.</p>
<p>In your example, perhaps you might have the robot learn how to detect obstacles (by analyzing elements in the environment), or you might have it keep track of where the obstacles were and which way they were moving.</p>
" OwnerUserId="51" LastActivityDate="2014-05-14T00:36:31.077" CommentCount="0" />
<row Id="10" PostTypeId="2" ParentId="7" CreationDate="2014-05-14T00:53:43.273" Score="9" Body="<p>One book that's freely available is "The Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman (published by Springer): <a href="http://statweb.stanford.edu/~tibs/ElemStatLearn/">see Tibshirani's website</a>.</p>
<p>Another fantastic source, although it isn't a book, is Andrew Ng's Machine Learning course on Coursera. This has a much more applied-focus than the above book, and Prof. Ng does a great job of explaining the thinking behind several different machine learning algorithms/situations.</p>
" OwnerUserId="22" LastActivityDate="2014-05-14T00:53:43.273" CommentCount="1" />
<row Id="14" PostTypeId="1" CreationDate="2014-05-14T01:25:59.677" Score="14" ViewCount="686" Body="<p>I am sure data science as will be discussed in this forum has several synonyms or at least related fields where large data is analyzed.</p>
<p>My particular question is in regards to Data Mining. I took a graduate class in Data Mining a few years back. What are the differences between Data Science and Data Mining and in particular what more would I need to look at to become proficient in Data Mining?</p>
" OwnerUserId="66" LastEditorUserId="322" LastEditDate="2014-06-17T16:17:20.473" LastActivityDate="2014-06-20T17:36:05.023" Title="Is Data Science the Same as Data Mining?" Tags="<data-mining><definitions>" AnswerCount="4" CommentCount="1" FavoriteCount="2" />

I assume that
Integer.parseInt(xml.attribute("Score").toString())
throws the above-mentioned exception, because xml is of type Elem, and calling the method attribute on it returns an Option[Seq[Node]], not just a single string with the number.
You probably want to replace both occurrences of this expression with
Integer.parseInt(xml.attribute("Score").get.toString())
Moreover, you could also replace the cumbersome Integer.parseInt with
xml.attribute("Score").get.toString.toInt
Isolated demo:
scala> val e = XML.loadString("""<foo Score="42" Bar="58"/>""")
e: scala.xml.Elem = <foo Bar="58" Score="42"/>
scala> e.attribute("Score").get.toString.toInt
res4: Int = 42
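For completeness, here is how the whole job might look once the Option is unwrapped. This is only a rough, untested sketch based on the code in the question, not the asker's final solution: it keeps each post as a (score, tags) pair (the original parses "Score" twice, which looks unintentional), it uses flatMap over an Option so that non-matching rows are dropped instead of the if-without-else producing Unit values, and it re-enables the commented-out filter and sort.

import org.apache.spark.sql.SparkSession
import scala.xml.XML

object TopTenTags09 {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("Number of posts which are questions and contains specified words")
      .master("local")
      .getOrCreate()

    val data = spark.read
      .textFile("/home/harsh/Hunny/HadoopPractice/Spark/DF/StackOverFlow/Posts.xml")
      .rdd

    val result = data
      .filter(line => line.trim().startsWith("<row"))
      .filter(line => line.contains("PostTypeId=\"1\""))
      .flatMap { line =>
        val xml = XML.loadString(line)
        // attribute() returns Option[Seq[Node]]; unwrap it before converting to Int
        val tags = xml.attribute("Tags").map(_.toString.toLowerCase).getOrElse("")
        if (tags.contains("hadoop") || tags.contains("spark"))
          xml.attribute("Score").map(score => (score.toString.toInt, tags))
        else
          None
      }
      .filter(_._1 > 2)
      .sortByKey(ascending = false)

    result.foreach(println)
    spark.stop()
  }
}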

Related

NLP ELMo model pruning input

I am trying to retrieve embeddings for words based on the pretrained ELMo model available on tensorflow hub. The code I am using is modified from here: https://www.geeksforgeeks.org/overview-of-word-embedding-using-embeddings-from-language-models-elmo/
The sentence that I am inputting is
bod =" is coming up in and every project is expected to do a video due on we look forward to discussing this with you at our meeting this this time they have laid out the selection criteria for the video award s go for the top spot this time "
and these are the keywords I want embeddings for:
words=["do", "a", "video"]
embeddings = elmo([bod],
                  signature="default",
                  as_dict=True)["elmo"]
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
This sentence is 236 characters long. But when I put it into the ELMo model, the tensor that comes back only has length 48, and this becomes a problem when I try to extract embeddings for keywords whose indices fall outside that 48-length limit.
This is the code I used to get the indices for the words in 'bod' (as shown above):
num_list = []
for item in words:
    print(item)
    index = bod.index(item)
    num_list.append(index)
num_list
But I keep running into an error.
I tried looking for ELMo documentation to explain why this is happening but I have not found anything related to this problem of pruned input.
Any advice is much appreciated!
Thank You
This is not really an AllenNLP issue since you are using a tensorflow-based implementation of ELMo.
That said, I think the problem is that ELMo embeds tokens, not characters. You are getting 48 embeddings because the string has 48 tokens.

Mozilla Deep Speech STT suddenly can't spell

I am using deep speech for speech to text. Up to 0.8.1, when I ran transcriptions like:
byte_encoding = subprocess.check_output(
"deepspeech --model deepspeech-0.8.1-models.pbmm --scorer deepspeech-0.8.1-models.scorer --audio audio/2830-3980-0043.wav", shell=True)
transcription = byte_encoding.decode("utf-8").rstrip("\n")
I would get back results that were pretty good. But since 0.8.2, where the scorer argument was removed, my results are just rife with misspellings that make me think I am now getting a character-level model where I used to get a word-level model. The errors are in a direction that suggests the model isn't correctly specified somehow.
Now when I call:
byte_encoding = subprocess.check_output(
    ['deepspeech', '--model', 'deepspeech-0.8.2-models.pbmm', '--audio', myfile])
transcription = byte_encoding.decode("utf-8").rstrip("\n")
I now see errors like
endless -> "endules"
service -> "servic"
legacy -> "legaci"
earning -> "erting"
before -> "befir"
I'm not 100% sure that it is related to removing the scorer from the API, but it is one thing I see changing between releases, and the documentation suggested accuracy improvements in particular.
Short: The scorer matches letter output from the audio to actual words. You shouldn't leave it out.
Long: If you leave out the scorer argument, you won't be able to detect real world sentences as it matches the output from the acoustic model to words and word combinations present in the textual language model that is part of the scorer. And bear in mind that each scorer has specific lm_alpha and lm_beta values that make the search even more accurate.
The 0.8.2 version should be able to take the scorer argument. Otherwise, update to 0.9.0, which has it as well. Maybe your environment has changed in some way; I would start over in a new directory and a fresh venv.
Assuming you are using Python, you could add this to your code:
ds.enableExternalScorer(args.scorer)
ds.setScorerAlphaBeta(args.lm_alpha, args.lm_beta)
And check the example script.

Scala: Handling special encoding characters in String

I have a string containing many umlauts (ä, ö, ü) and the euro symbol (€). Is there any library or existing method in Scala that transforms them to (a, o, u) and "Euro" (or its equivalent), respectively?
I am aware of similar libraries in Python that do the job but can't seem to find one in Scala.
Consider this example: val str = "Köln and München are great cities. The average bus ticket costs €4.5"
I want it to be converted to something like this or its equivalent: "Koln and Munchen are great cities. The average bus ticket costs Euros 4.5"
You can build your own translator with whatever rules you need to apply.
val str="Köln and München are great cities. The average bus ticket costs €4.5"
val deUm: Map[Char, String] =
  Map('ä' -> "a", 'ö' -> "o", 'ü' -> "u", '€' -> "Euros ").withDefault(_.toString)
str.flatMap(deUm(_))
//res0: String = Koln and Munchen are great cities. The average bus ticket costs Euros 4.5
Do you really need a library for this? You can just use the string replace function, as shown here:
http://gordon.koefner.at/blog/coding/replacing-german-umlauts/
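If you only need to strip the diacritics, another option is the JDK's java.text.Normalizer, which decomposes accented characters so the combining marks can be removed with a regex. This is a minimal sketch (it maps ü to u rather than the German transliteration ue, and the euro sign is not a diacritic, so it still needs its own replacement rule):

import java.text.Normalizer

val str = "Köln and München are great cities. The average bus ticket costs €4.5"

// Decompose characters (ö -> o + combining diaeresis), then drop the combining marks.
val noDiacritics = Normalizer.normalize(str, Normalizer.Form.NFD)
  .replaceAll("\\p{M}", "")

// The euro sign is untouched by normalization, so handle it separately.
val result = noDiacritics.replace("€", "Euros ")
// result: String = Koln and Munchen are great cities. The average bus ticket costs Euros 4.5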

Unicode character usage statistics [closed]

I am looking for some statistical data on the usage of Unicode characters in textual documents (with any markup). Googling brought no results.
Background: I am currently developing a finite-state-machine-based text processing tool. Statistical data on characters might help in searching for the right transitions. For instance, Latin characters are probably the most used, so it might make sense to check for those first.
Did anyone by chance gather or see such statistics?
(I'm not focused on specific languages or locales. Think general-purpose parser like an XML parser.)
To sum up current findings and ideas:
Tom Christiansen gathered such statistics for the PubMed Open Access Corpus (see this question). I have asked whether he could share these statistics and am waiting for the answer.
As @Boldewyn and @nwellnhof suggested, I could run the analysis on the complete Wikipedia dump or on CommonCrawl data. I think these are good suggestions; I'll probably go with CommonCrawl.
So sorry, this is not an answer, but a good research direction.
UPDATE: I wrote a small Hadoop job and ran it on one of the CommonCrawl segments. I have posted my results in a spreadsheet here. Below are the first 50 characters:
0x000020 14627262
0x000065 7492745 e
0x000061 5144406 a
0x000069 4791953 i
0x00006f 4717551 o
0x000074 4566615 t
0x00006e 4296796 n
0x000072 4293069 r
0x000073 4025542 s
0x00000a 3140215
0x00006c 2841723 l
0x000064 2132449 d
0x000063 2026755 c
0x000075 1927266 u
0x000068 1793540 h
0x00006d 1628606 m
0x00fffd 1579150
0x000067 1279990 g
0x000070 1277983 p
0x000066 997775 f
0x000079 949434 y
0x000062 851830 b
0x00002e 844102 .
0x000030 822410 0
0x0000a0 797309
0x000053 718313 S
0x000076 691534 v
0x000077 682472 w
0x000031 648470 1
0x000041 624279 A
0x00006b 555419 k
0x000032 548220 2
0x00002c 513342 ,
0x00002d 510054 -
0x000043 498244 C
0x000054 495323 T
0x000045 455061 E
0x00004d 426545 M
0x000050 423790 P
0x000049 405276 I
0x000052 393218 R
0x000044 381975 D
0x00004c 365834 L
0x000042 353770 B
0x000033 334689 3
0x00004e 325299 N
0x000029 302497 )
0x000028 301057 (
0x000035 298087 5
0x000046 295148 F
To be honest, I have no idea whether these results are representative. As I said, I only analysed one segment. It looks quite plausible to me. One can also easily spot that the markup has already been stripped off, so the distribution is not directly suitable for my XML parser. But it gives valuable hints on which character ranges to check first.
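For reference, a per-character frequency count like the one above only takes a few lines. The following is a hypothetical sketch in Spark/Scala (the original poster used a Hadoop job, and the input path here is a placeholder, not the actual CommonCrawl segment):

import org.apache.spark.sql.SparkSession

// Hypothetical sketch: count Unicode code points across a set of plain-text files.
object CharFrequency {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("Character frequency").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val counts = sc.textFile("/path/to/extracted/text/*")
      .flatMap(line => line.codePoints().toArray) // one record per code point
      .map(cp => (cp, 1L))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)

    // Print the most frequent code points in the same "hex count char" layout as above.
    counts.take(50).foreach { case (cp, n) =>
      println(f"0x$cp%06x $n%d ${new String(Character.toChars(cp))}")
    }
    spark.stop()
  }
}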
I personally think the link to http://emojitracker.com/ in the near-duplicate question is the most promising resource for this. I have not examined the sources (I don't speak Ruby), but from a real-time Twitter feed of character frequencies I would expect quite a different result than from static web pages, and probably a radically different language distribution (I see lots more Arabic and Turkish on Twitter than in my otherwise ordinary life). It's probably not exactly what you are looking for, but if we just look at the title of your question (which most visitors will probably have followed to get here), then that is what I would suggest as the answer.
Of course, this raises the question of what kind of usage you are attempting to model. For static XML, which you seem to be after, maybe the Common Crawl set is a better starting point after all. Text coming out of an editorial process (however informal) looks quite different from spontaneous text.
Out of the suggested options so far, Wikipedia (and/or Wiktionary) is probably the easiest, since it's small enough for local download, far better standardized than a random web dump (all UTF-8, all properly tagged, most of it properly tagged by language and proofread for markup errors, orthography, and occasionally facts), and yet large enough (and probably already overkill by an order of magnitude or more) to give you credible statistics. But again, if the domain is different from the domain you actually want to model, the statistics will probably be wrong nevertheless.

Text classification using Weka

I'm a beginner to Weka and I'm trying to use it for text classification. I have seen how to use the StringToWordVector filter for classification. My question is: is there any way to add more features to the text I'm classifying? For example, if I wanted to add POS tags and named-entity tags to the text, how would I use these features in a classifier?
It depends on the format of your dataset and the preprocessing steps you perform. For instance, let us suppose that you have pre-POS-tagged your texts, so they look like:
The_det dog_n barks_v ._p
So you can build a specific tokenizer (see weka.core.tokenizers) to generate two tokens per word: one would be "The" and the other would be "The_det", so you keep the tag information.
If you want only tagged words, then you can just ensure that "_" is not a delimiter in the weka.core.tokenizers.WordTokenizer.
My advice is to have both the words and the tagged words, so a simpler way would be to write a script that joins the texts and the tagged texts. From a file containing "The dog barks" and another one containing "The_det dog_n barks_v ._p", it would generate a file with "The The_det dog dog_n barks barks_v . ._p". You may even forget about the order unless you are going to make use of n-grams.
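A small sketch of such a joining script in Scala (the file names and the whitespace tokenization are assumptions; it simply interleaves each word with its tagged counterpart, one document per line):

import scala.io.Source
import java.io.PrintWriter

// Hypothetical helper: merge "The dog barks ." and "The_det dog_n barks_v ._p"
// into "The The_det dog dog_n barks barks_v . ._p" so Weka's tokenizer sees
// both the plain word and the tagged word. File names are placeholders.
object JoinTaggedText {
  def main(args: Array[String]): Unit = {
    val plain  = Source.fromFile("texts.txt").getLines().toSeq
    val tagged = Source.fromFile("tagged_texts.txt").getLines().toSeq

    val out = new PrintWriter("joined_texts.txt")
    plain.zip(tagged).foreach { case (p, t) =>
      val joined = p.split("\\s+").zip(t.split("\\s+"))
        .map { case (w, wt) => s"$w $wt" }
        .mkString(" ")
      out.println(joined)
    }
    out.close()
  }
}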