mahout - seq2sparse not creating vectors

I have twitter data (usernames and their tweets) which I am trying to cluster. The text file is 151.7 MB in size.
I converted the raw txt text data to a mahout sequence file.
I inspected this sequence file, it's full of data. It's also 151.7 MB.
I tried to convert the sequence file to sparse vectors.
At this point something has clearly gone wrong. It claims success, but it only creates vector files that are a few bytes in size. My TFIDF vector file is only 90 bytes, which is obviously wrong when the original txt file and the sequence file are both 151 MB.
What confuses me most is that I can't see what's so different between the data I have and the reuters dataset which is used in the clustering example in 'Mahout in Action'. They're both just text.
Here are the exact commands I used:
--- Turned raw text txt file into a mahout sequence file. I've also checked the sequence file using seqdumper, it's full of username/tweet data. ---
sudo /opt/mahout/bin/mahout seqdirectory -c UTF-8 -i /home/efx/Desktop/tweetQueryOutput.txt -o /home/efx/Desktop/allNYCdataseqfiles
(Inspect the sequence file, it's full of username/tweet data)
sudo /opt/mahout/bin/mahout seqdumper -i /home/efx/Desktop/allNYCdataseqfiles/chunk-0 -o /home/efx/Desktop/allNYCdataseqfiles/sequenceDumperOutput
--- Then tried to convert the sequence file to sparse vectors. ---
sudo /opt/mahout/bin/mahout seq2sparse -o /home/efx/Desktop/allNYC_DataVectors -i /home/efx/Desktop/allNYCdataseqfiles/ -seq

Under Mahout 0.8+cdh5.0.2, you have to do the following:
sudo /opt/mahout/bin/mahout seq2sparse
-o /home/efx/Desktop/allNYC_DataVectors
-i /home/efx/Desktop/allNYCdataseqfiles/
-seq
--maxDFPercent 100
The --maxDFPercent option represents the maximum percentage of documents for the DF. It can be used to remove really high-frequency terms. By default the value is 99, but if you also use --maxDFSigma, it will override this value.
This works fine for me but I'm not sure about the 0.7 version of Mahout.
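To double-check that the rerun actually produced vectors, you can inspect the output with seqdumper, just like the sequence file above. This is only a sketch; the exact part-file name under tfidf-vectors may differ depending on the Mahout version:
sudo /opt/mahout/bin/mahout seqdumper -i /home/efx/Desktop/allNYC_DataVectors/tfidf-vectors/part-r-00000 -o /home/efx/Desktop/allNYC_DataVectors/tfidfDumperOutput
If the vectors were created properly, the dump should contain one sparse vector per tweet instead of an almost empty file.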

Related

ffmpeg blackdetect, start_black value to mkv chapter?

I'm trying to do automatic chapter detection with blackdetect in ffmpeg.
When I use blackdetect I get a result, but what is the result? It's not frames? Also, is it possible to make a script/bat-file (for Windows 10, PowerShell or cmd) to convert the result to a "mkv xml-file" so it can be imported with mkvtoolnix?
ffmpeg -i "movie.mp4" -vf blackdetect=d=0.232:pix_th=0.1 -an -f null - 2>&1 | findstr black_duration > output.txt
result:
black_start:2457.04 black_end:2460.04 black_duration:3
black_start:3149.46 black_end:3152.88 black_duration:3.41667
black_start:3265.62 black_end:3268.83 black_duration:3.20833
black_start:3381.42 black_end:3381.92 black_duration:0.5
black_start:3386.88 black_end:3387.38 black_duration:0.5
black_start:3390.83 black_end:3391.33 black_duration:0.5
black_start:3824.29 black_end:3824.58 black_duration:0.291667
black_start:3832.71 black_end:3833.08 black_duration:0.375
black_start:3916.29 black_end:3920.29 black_duration:4
Please see the documentation on this function here. Specifically this line:
Output lines contains the time for the start, end and duration of the detected black interval expressed in seconds.
If you read further down the page you will see another function blackframe which does a similar thing but outputs the frame number rather than seconds.
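For example, a minimal blackframe run in the same style as your command might look like this (the amount and threshold values are only illustrative):
ffmpeg -i "movie.mp4" -vf blackframe=amount=98:threshold=32 -an -f null - 2>&1 | findstr pblack
Each matching line then reports a frame number (frame:...) and the percentage of black pixels (pblack:...) instead of timestamps in seconds.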
If the mkvtoolnix xml file has a chapter definition facility you will be able to create a script that takes the ffmpeg output and dumps it into the correct format in any language of your choice.
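As a sketch of that idea, here is an awk script (runnable from any Unix-like shell, or via gawk on Windows; file names and chapter titles are just placeholders) that turns the black_start values from output.txt into a minimal mkvtoolnix chapter XML, using the start of each black interval as a chapter mark:
awk -F'[: ]' '
BEGIN {
    print "<?xml version=\"1.0\"?>"
    print "<!DOCTYPE Chapters SYSTEM \"matroskachapters.dtd\">"
    print "<Chapters><EditionEntry>"
}
/black_start/ {
    t = $2                    # black_start value in seconds, assuming lines look exactly like the output above
    h = int(t / 3600); m = int((t % 3600) / 60); s = t % 60
    printf "  <ChapterAtom><ChapterTimeStart>%02d:%02d:%06.3f</ChapterTimeStart>", h, m, s
    printf "<ChapterDisplay><ChapterString>Chapter %d</ChapterString>", ++n
    print "<ChapterLanguage>eng</ChapterLanguage></ChapterDisplay></ChapterAtom>"
}
END {
    print "</EditionEntry></Chapters>"
}' output.txt > chapters.xml
The resulting chapters.xml can then be imported in the mkvtoolnix GUI or passed to mkvmerge with --chapters chapters.xml.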

mongoimport: set type for all fields when importing CSV

I have multiple problems with importing a CSV with mongoimport that has a headerline.
Following is the case:
I have a big CSV file with the names of the fields in the first line.
I know you can set this line to use as field names with: --headerline.
I want all field types to be strings, but mongoimport sets the types automatically to what it looks like.
IDs such as 0001 will be turned into 1, which can have bad side effects.
Unfortunately, there is (as far as I know) no way of setting them all as strings with a single option, only by naming each field and setting its type with
--columnsHaveTypes --fields "name.string(), ... "
When I did that, the next problem appeared.
The headerline (with all field names) got imported as values in a separate document.
So basically, my questions are:
Is there a way of setting all field types as string using the --headerline option?
Alternatively, is there a way to ignore the first line?
I had this problem when uploading a 41-million-record CSV file into mongodb.
./mongoimport -d testdb -c testcollection --type csv --columnsHaveTypes -f
"RECEIVEDDATE.date(2006-01-02 15:04:05)" --file location/test.csv
As above, we have the '-f' or '--fields' option to upload a file with data types, but when we use it on a file that contains a header line, mongodb uploads the first row (the header row) as well, which either leads to a 'cannot convert to datatype' error or imports the column names as a data record.
Unfortunately, we cannot use the '--headerline' option instead of '--fields' here.
Here are the solutions that I found for this problem.
1) Remove the header row and upload using the '--fields' option as in the command above. If you are in a Linux environment, you can use the command below to remove the first row (the header line) of the huge file. It took 2-3 minutes for me, depending on machine performance.
sed -i -e "1d" location/test.csv
2) Upload the file using the '--headerline' option, so that mongodb imports the file with the data types it detects by default. Then open the mongodb shell, 'use testdb', and run a JavaScript command that reads each record and converts it to the specific data types. If you have a huge file this will take time.
I found this solution on stackoverflow:
db.testcollection.find().forEach(function (x) {
    x.RECEIVEDDATE = new Date(x.RECEIVEDDATE);
    db.testcollection.save(x);
});
If you want to skip the rows that do not fit the data types, use the option below (see the mongodb documentation):
--parseGrace skipRow
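Combined with the earlier command, it would look something like this (same hypothetical database, collection and file as above):
./mongoimport -d testdb -c testcollection --type csv --columnsHaveTypes --parseGrace skipRow -f "RECEIVEDDATE.date(2006-01-02 15:04:05)" --file location/test.csv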
I found a solution that I am comfortable with.
Basically, I wanted to use mongoimport within my Clojure code to import a CSV file into the DB and do a lot of stuff with it automatically. Due to the above-mentioned problems I had to find a workaround to delete this wrong document.
I did following to "solve" this problem:
To set the types as I wanted, I wrote a function that reads the first line, puts it into a vector, and then uses string concatenation to set these as fields.
Turning this: id,name,age,hometown,street
into this: id.string(),name.string(),age.string() etc
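As an aside, that header transformation can also be sketched directly in the shell (data.csv is a hypothetical file name):
head -n 1 data.csv | sed 's/,/.string(),/g; s/$/.string()/'
This prints the comma-separated header back with .string() appended to every field name, ready to paste into --fields.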
Then I used the values from the vector to identify the document with
{ name: "name",
  age: "age",
  etc: "etc" }
and then deleted it with a simple remove/find command.
Hope this helps anyone dealing with the same kind of problem.
https://docs.mongodb.com/manual/reference/program/mongoimport/#example-csv-import-types reads:
MongoDB 3.4 added support for specifying field types. Specify field names and types in the form <colName>.<type>() using --fields, --fieldFile, or --headerline.
so your first line within the csv file should have names with types. e.g.:
name.string(), ...
and the mongoimport parameters
--columnsHaveTypes --headerline --file <filename.csv>
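For a hypothetical people.csv, that means the first line of the file would be, e.g.:
name.string(),zip.string(),age.string()
followed by the data rows, and the import becomes something like:
mongoimport -d testdb -c people --type csv --columnsHaveTypes --headerline --file people.csv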
As to the question of how to remove the first line, you can employ pipes. mongoimport reads from STDIN if no --file option is passed. E.g.:
tail -n+2 <filename.csv> | mongoimport --columnsHaveTypes --fields "name.string(), ... "

Replace matches of one regex expression with matches from another, across two files

I am currently helping a friend reorganise several hundred images on a database driven website. I have generated a list of the new, reorganised image paths offline and would like to replace each matching image reference in the sql export of the database with the new paths.
EDIT: Here is an example of what I am trying to achieve
The new_paths_list.txt is a file that I generated using a batch script after I had organised all of the existing images into folders. Prior to this all of the images were in just a few folders. A sample of this generated list might be:
image/data/product_photos/telephones/snom/snom_xyz.jpg
image/data/product_photos/telephones/gigaset/giga_xyz.jpg
A sample of my_exported_db.sql (the database exported from the website) might be:
...
,(110,32,'data/phones/snom_xyz.jpg',3),(213,50,'data/telephones/giga_xyz.jpg',0),
...
The result I want is my_exported_db.sql to be:
...
,(110,32,'data/product_photos/telephones/snom/snom_xyz.jpg',3),(213,50,'data/product_photos/telephones/gigaset/giga_xyz.jpg',0),
...
Some pseudo code to illustrate:
1/ Find the first image name in my_exported_db.sql, such as 'snom_xyz.jpg'.
2/ Find the same image name in new_paths_list.txt
3/ If it is present, copy the whole line (the path and filename)
4/ Replace the whole path of this image in my_exported_db.sql with the copied line
5/ Repeat for all other image names in my_exported_db.sql
A regex expression that appears to match image names is:
([^)''"/])+\.(?:jpg|jpeg|gif|png)
and one to match image names, complete with path (for relative or absolute) is:
\bdata[^)''"\s]+\.(?:jpg|jpeg|gif|png)
I have looked around and have seen that Sed or Awk may be capable of doing this, but some pointers would be greatly appreciated. I understand that this will only work accurately if there are no duplicated filenames.
You can use sed to convert new_paths_list.txt into a set of sed replacement commands:
sed 's|\(.*\(/[^/]*$\)\)|s#data\2#\1#|' new_paths_list.txt > rules.sed
The file rules.sed will look like this:
s#data/snom_xyz.jpg#image/data/product_photos/telephones/snom/snom_xyz.jpg#
s#data/giga_xyz.jpg#image/data/product_photos/telephones/gigaset/giga_xyz.jpg#
Then use sed again to translate my_exported_db.sql:
sed -i -f rules.sed my_exported_db.sql
I think in some shells it's possible to combine these steps and do without rules.sed:
sed 's|\(.*\(/[^/]*$\)\)|s#data\2#\1#|' new_paths_list.txt | sed -i -f - my_exported_db.sql
but I'm not certain about that.
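In bash you can sidestep that question with process substitution, which hands sed a readable file without creating rules.sed on disk (a sketch, using the same expressions as above):
sed -i -f <(sed 's|\(.*\(/[^/]*$\)\)|s#data\2#\1#|' new_paths_list.txt) my_exported_db.sql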
EDIT:
If the images are in several directories under data/, make this change:
sed "s|image/\(.*\(/[^/]*$\)\)|s#[^']*\2#\1#|" new_paths_list.txt > rules.sed

Pick random lines from file with fixed seed (pseudo-random)

I would like to randomly pick some lines (e.g. 20) from a file and print them into another file, but I want to have the seed fixed so that I get the same output if the input file is the same.
The examples I've found that pick several lines produce different output every time, e.g.:
perl -e '$f="inputfile";$_=`wc -l $f`;@l=split( );$r=int rand(@l[0]);system("head -n$r $f|tail -20")'> outputfile
And those that talk about fixed seed and pseudo-random are just for printing numbers, not extracting lines from files, or just extract a single line. Is there a command for unix or some code in perl or similar? (sort -R, --random- & shuf didn't work (using Mac OS X 10.5.8)).
You can set the seed via srand() (for example, srand(5)) to get a fixed seed for rand.
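For instance, a sketch that fixes the seed and pulls 20 distinct random lines in one go (it assumes List::Util's shuffle, which ships with any recent Perl and uses rand internally, so srand makes it reproducible):
perl -MList::Util=shuffle -e 'srand(5); print +(shuffle(<>))[0..19];' inputfile > outputfile
Running it twice on the same inputfile gives the same 20 lines; change srand(5) to another constant for a different but still reproducible sample.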

Can kdb read from a named pipe?

I hope I'm doing something wrong, but it seems like kdb can't read data from named pipes (at least on Solaris). It blocks until they're written to but then returns none of the data that was written.
I can create a text file:
$ echo Mary had a little lamb > lamb.txt
and kdb will happily read it:
q) read0 `:/tmp/lamb.txt
enlist "Mary had a little lamb"
I can create a named pipe:
$ mkfifo lamb.pipe
and trying to read from it:
q) read0 `:/tmp/lamb.pipe
will cause kdb to block. Writing to the pipe:
$ cat lamb.txt > lamb.pipe
will cause kdb to return the empty list:
()
Can kdb read from named pipes? Should I just give up? I don't think it's a permissions thing (I tried setting -m 777 on my mkfifo command but that made no difference).
With release kdb+ v3.4, q has support for named pipes. Depending on whether you want to implement a streaming algorithm or just read from the pipe, use either .Q.fps or read1 on a fifo pipe:
To implement streaming you can do something like:
q).Q.fps[0N!]`:lamb.pipe
Then $ cat lamb.txt > lamb.pipe
will print
,"Mary had a little lamb"
in your q session. More meaningful algorithms can be implemented by replacing 0N! with an appropriate function.
To read the contents of your file into a variable, do:
q)h:hopen`:fifo://lamb.pipe
q)myText: `char$read1(h)
q)myText
"Mary had a little lamb\n"
See more about named pipes here.
When read0 fails, you can frequently fake it with system"cat ...". (I found this originally when trying to read stuff from /proc, which also doesn't cooperate with read0.)
q)system"cat /tmp/lamb.pipe"
<blocks until you cat into the pipe in the other window>
"Mary had a little lamb"
q)
Just be aware there's a reasonably high overhead (as such things go in q) for invoking system: it spawns a whole shell process just to run whatever your command is.
You might also be able to do it directly with a custom C extension, probably calling read(2) directly...
The algorithm for read0 is not available to see what it is doing under the hood but, as far as I can tell, it expects a finite stream rather than a continuous one, so it will block until it receives an EOF signal.
Streaming from a pipe is supported from v3.4.
Detailed steps:
Remove the pipe file if it already exists:
rm -f /path/dataPipeFileName
Create the named pipe:
mkfifo /path/dataPipeFileName
Feed data into the pipe (here $1 stands for the shell command that fetches the data and writes it into the pipe):
q).util.system[$1]; / $1 = command to fetch data > /path/dataPipeFileName &
Connect to the pipe using kdb's .Q.fps:
q).Q.fps[0N!]`$":/path/",dataPipeFileName;
Reference:
.Q.fps (streaming algorithm)
Syntax: .Q.fps[x;y] where x is a unary function and y is a filepath.
.Q.fps is .Q.fs for pipes. (Since V3.4.) It reads conveniently sized lumps of complete "\n"-delimited records from a pipe and applies a function to each record. This enables you to implement a streaming algorithm to convert a large CSV file into an on-disk kdb+ database without holding the data in memory all at once.