MongoPasswordField setPassword + save - scala

I've started working a bit with Lift + Scala + MongoRecord, but I found a small annoyance.
Usually, to create a record (document) easily, I just do:
User.createRecord.loginName("user").firstName("Name").lastName("LastName").save
But when I use MongoPasswordField, it's impossible to do it in just one line:
val userRecord = User.createRecord.loginName("user").firstName("Name").lastName("LastName")
userRecord.password.setPassword("SomePassword")
userRecord.save
Source code for the field is at http://scala-tools.org/mvnsites/liftweb-2.2/framework/scaladocs/lift-persistence/lift-mongodb-record/src/main/scala/net/liftweb/mongodb/record/field/MongoPasswordField.scala.html
Is there any way of doing this in just one line?
Or at least, can the field code be modified in some way to actually allow doing this?

I think you could do this:
User.createRecord.loginName("user").firstName("Name").lastName("LastName").password(Password("Some password")).save


How to fill tables in PowerPoint from a cell array in MATLAB?

I've downloaded saveppt2, jrichter's code, and even WritetoWordfromMatlab, and tried reading through them to figure it out, with no luck. I already have something of my own built, so I just need to figure out how to get tables to work.
Whenever I try something like:
myTable.Cell(1,1).TextFrame.Text = 'textstring'
or
myTable.Table.Cell(1,1) = 'textstring'
With any combination of table/text commands, I end up with an error saying there is no such property or function as Cell for table objects. Every COM/VBA/C library I can find, as well as some Python code (PandastoPowerPoint from GitHub) that does what I'm aiming to do, says that Table.Cell(row,col) should work. Is this specifically a problem with MATLAB trying to use (#,#) as a form of indexing?
Try
myTable.Cell(1, 1).Shape.TextFrame.TextRange.Text = "TextString"
or = 'TextString' if that's what MATLAB prefers.
Thanks Steve R! With a little tweak, I got it to work, finally. So here's the answer:
% add table to existing slide object
myTable = slide.Shapes.addTable(nRows, nCols, x0, y0, rowWidth*nRows, colHeight*nCols);
myTable.Table.Cell(1, 1).Shape.TextFrame.TextRange.Text = 'TextString';
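For comparison, the same COM call chain also works from Python via pywin32 (a sketch, assuming Windows with PowerPoint and the pywin32 package installed; all sizes and positions are illustrative):
# Hypothetical illustration: drive PowerPoint over COM from Python.
import win32com.client

app = win32com.client.Dispatch("PowerPoint.Application")
app.Visible = True
pres = app.Presentations.Add()
slide = pres.Slides.Add(1, 12)  # 12 = ppLayoutBlank

# Shapes.AddTable(NumRows, NumColumns, Left, Top, Width, Height)
shape = slide.Shapes.AddTable(2, 2, 50, 50, 400, 100)
shape.Table.Cell(1, 1).Shape.TextFrame.TextRange.Text = "TextString"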

DataFrame keying using the pandas groupby method

I'm new to pandas and trying to learn how to work with it. I'm having a problem when trying to use an example I saw in one of Wes's videos and notebooks on my own data. I have a CSV file that looks like this:
filePath,vp,score
E:\Audio\7168965711_5601_4.wav,Cust_9709495726,-2
E:\Audio\7168965711_5601_4.wav,Cust_9708568031,-80
E:\Audio\7168965711_5601_4.wav,Cust_9702445777,-2
E:\Audio\7168965711_5601_4.wav,Cust_7023544759,-35
E:\Audio\7168965711_5601_4.wav,Cust_9702229339,-77
E:\Audio\7168965711_5601_4.wav,Cust_9513243289,25
E:\Audio\7168965711_5601_4.wav,Cust_2102513187,18
E:\Audio\7168965711_5601_4.wav,Cust_6625625104,-56
E:\Audio\7168965711_5601_4.wav,Cust_6073165338,-40
E:\Audio\7168965711_5601_4.wav,Cust_5105831247,-30
E:\Audio\7168965711_5601_4.wav,Cust_9513082770,-55
E:\Audio\7168965711_5601_4.wav,Cust_5753907026,-79
E:\Audio\7168965711_5601_4.wav,Cust_7403410322,11
E:\Audio\7168965711_5601_4.wav,Cust_4062144116,-70
I load it into a DataFrame and then group it by "filePath" and "vp"; the code is:
res = df.groupby(['filePath','vp']).size()
res.index
and the output is:
[E:\Audio\7168965711_5601_4.wav Cust_2102513187,
Cust_4062144116, Cust_5105831247,
Cust_5753907026, Cust_6073165338,
Cust_6625625104, Cust_7023544759,
Cust_7403410322, Cust_9513082770,
Cust_9513243289, Cust_9702229339,
Cust_9702445777, Cust_9708568031,
Cust_9709495726]
Now I'm trying to access the index like a dict, as I saw in examples, but when I do
res['Cust_4062144116']
I get an error:
KeyError: 'Cust_4062144116'
I do succeed in getting a result when I use the file path, but as I understand it, and as I saw in previous examples, I should be able to use the vp keys as well. Isn't that so?
Sorry if it's a trivial one; I just can't understand why it works in one example but not in the other.
Rutger, you are not correct. It is possible to partially index a MultiIndex Series; I simply did it the wrong way.
The index's first level is the file name (e.g. E:\Audio\7168965711_5601_4.wav above) and the second level is vp. Meaning, for each file name I have multiple vps.
Now, this is correct:
res['E:\Audio\7168965711_5601_4.wav']
and will return:
Cust_2102513187 2
Cust_4062144116 8
....
but trying to index by the inner index (the Cust_ indexes) will fail.
You group by two columns and therefore get a MultiIndex in return. This means you also have to slice using those two columns, not with a single index value.
Your .size() on the groupby object converts it into a Series. If you force it into a DataFrame, you can use the .xs method to slice a single level:
res = pd.DataFrame(df.groupby(['filePath','vp']).size())
res.xs('Cust_4062144116', level=1)
That works. If you want to keep it as a Series, boolean indexing can help, something like:
res[res.index.get_level_values(1) == 'Cust_4062144116']
The last option is a bit less readable, but sometimes also more flexible; you could test for multiple values at once, for example:
res[res.index.get_level_values(1).isin(['Cust_4062144116', 'Cust_6073165338'])]
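For a self-contained illustration of the above, here is a minimal sketch (the file names and customer IDs are shortened stand-ins for the data in the question):
import pandas as pd

df = pd.DataFrame({
    'filePath': ['a.wav', 'a.wav', 'a.wav', 'b.wav', 'b.wav'],
    'vp':       ['Cust_1', 'Cust_2', 'Cust_1', 'Cust_2', 'Cust_3'],
    'score':    [-2, -80, 25, 11, -70],
})

res = df.groupby(['filePath', 'vp']).size()  # Series with a 2-level MultiIndex

print(res['a.wav'])   # partial indexing on the outer level works
# res['Cust_1']       # ...but the inner level alone raises KeyError

print(res.xs('Cust_1', level=1))                       # cross-section
print(res[res.index.get_level_values(1) == 'Cust_1'])  # boolean mask
Note that .xs also works directly on the Series here, so forcing it into a DataFrame is optional.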

whoosh doesn't search for short words like "C#"

I am using Whoosh to index over 200,000 books, but I have encountered some problems with it.
The Whoosh query parser returns NullQuery for words like "C#" and "C++" that have meta-characters in them, and also for some other short words. These words are used in the title and body of some documents, so I am not using the KEYWORD type for them. I guess the problem is in the analysis or query-parsing phase of searching or indexing, but I can't touch my data blindly. Can anyone help me correct this issue? Thanks.
I fixed the problem by creating a StandardAnalyzer with a regex pattern that meets my requirements. Here is the pattern:
'\w+[#+.\w]*'
This makes the tokenizing of fields succeed, and the searching goes well too.
But when I use queries like "some query++*" or "some##*", the parsed query becomes a single Every query, just '*'. I also found that this is not related to my analyzer; it is Whoosh's default behavior. So here is my new question: is this behavior correct, or is it a bug?
Note: removing the WildcardPlugin from the query parser solves this problem, but I also need the WildcardPlugin.
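For reference, dropping the plugin looks something like this (a sketch; the schema and field name are placeholders):
from whoosh import fields, qparser

schema = fields.Schema(title=fields.TEXT)  # stand-in schema

# Build a parser without wildcard support (not acceptable here, since
# wildcard queries are still needed, but shown for completeness):
parser = qparser.QueryParser("title", schema)
parser.remove_plugin_class(qparser.WildcardPlugin)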
Now I am using the following code:
from whoosh import analysis
from whoosh.util import rcompile

# for matching words like '.NET', 'C++' and 'C#'
word_pattern = rcompile(r'(\.|[\w]+)(\.?\w+|#|\+\+)*')

# I don't need words shorter than two characters, so I don't change the minsize default
analyzer = analysis.StandardAnalyzer(expression=word_pattern)
... now in my schema:
...
title = fields.TEXT(analyzer=analyzer),
...
This solves my first problem, yes. But the main problem is in searching: I don't want to let users search using the Every query ('*'), but when I parse queries like C++* I end up with an Every ('*') query. I know that there is some problem, but I can't figure out what it is.
I had the same issue and found out that StandardAnalyzer() uses minsize=2 by default. So in your schema, you have to tell it otherwise.
schema = whoosh.fields.Schema(
    name=whoosh.fields.TEXT(stored=True, analyzer=whoosh.analysis.StandardAnalyzer(minsize=1)),
    # ...
)
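Putting the two fixes together (the custom token pattern plus minsize=1), here is a minimal end-to-end sketch; the index directory, field name, and document text are made up for illustration:
import re
import tempfile
from whoosh import analysis, fields, index
from whoosh.qparser import QueryParser

# custom token pattern so 'C#' and 'C++' stay single tokens,
# plus minsize=1 so even one-character tokens survive
word_pattern = re.compile(r'\w+[#+.\w]*')
analyzer = analysis.StandardAnalyzer(expression=word_pattern, minsize=1)

schema = fields.Schema(title=fields.TEXT(stored=True, analyzer=analyzer))

ix = index.create_in(tempfile.mkdtemp(), schema)
with ix.writer() as w:
    w.add_document(title=u"Programming in C# and C++")

with ix.searcher() as s:
    q = QueryParser("title", ix.schema).parse(u"C#")
    print([hit["title"] for hit in s.search(q)])  # the document matches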

T-SQL SELECT Statement to Exclude Text Between ( and ) Without Update or Variables

I'm wondering if this is possible. I have a messy set of first-name data to work with, and it's shared by other applications, so I don't want to run any UPDATE statements. But the data will sometimes include an AKA, such as:
Robert (AKA Bob)
And I am trying to get clean data where it just says "Robert".
One way I thought of is to use a temp table, then CHARINDEX for ( and ), then REPLACE what's between them. This seems like a long-winded way to do it.
Is there a smarter way?
EDIT: More examples of the data hell. Sometimes the parentheses come at the front or mixed in, such as:
(Bill) William
Richard (Dick) Jr.
Untested:
FirstName = case when charindex('(AKA', FirstName) = 0
                 then FirstName
                 else rtrim(substring(FirstName, 1, charindex('(AKA', FirstName) - 1))
            end
If the noisy string pattern is unpredictable, it's better to use a SQL CLR TVF (table-valued function) and C# regex. It can take an XML parameter containing the data you need to process and return a table with the processed data.
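For illustration, the regex such a CLR function would wrap could look like this (shown in Python purely to demonstrate the pattern; clean_name is a hypothetical helper):
import re

def clean_name(name):
    # hypothetical helper: drop any parenthesized chunk, tidy whitespace
    return re.sub(r'\s*\([^)]*\)\s*', ' ', name).strip()

print(clean_name('Robert (AKA Bob)'))    # Robert
print(clean_name('(Bill) William'))      # William
print(clean_name('Richard (Dick) Jr.'))  # Richard Jr.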
Pieter got the ball rolling for me! Thank you for getting me started on the right track!
SELECT
    CASE WHEN FirstName LIKE '%)%'
         THEN REPLACE(FirstName,
                      SUBSTRING(FirstName, CHARINDEX('(', FirstName),
                                CHARINDEX(')', FirstName) - CHARINDEX('(', FirstName) + 1),
                      '')
         ELSE FirstName
    END

exporting MongoDB to CSV using pymongo

I would like to write a script to generate a CSV file from my MongoDB database, and I would like to know the most convenient way to do it!
First, let me begin with the structure of the collections.
MyDataBase -> setting
users
fruits
In setting I have something like:
setting -> _id
data
_tenant
and the thing I am after is making a CSV file out of the profiles in data,
which have some fields/properties like "name", "address", "postalcode", "email", "age", etc. Not all profiles necessarily have all of these fields/properties, and some of them even look like collections (have sub-branches), which I am not interested in at all!
So far, my Python code looks like this:
import csv

# db is an existing pymongo database handle
myquery = db.settings.find()  # I am getting everything!
output = csv.writer(open('some.csv', 'wt'))  # writing into this file
for items in myquery[0:10]:  # first 10 entries
    # sub-documents are imported as dictionaries; I turn the values into a list
    a = list(items['data']['Profile'].values())
    tt = list()
    for chiz in a:
        if chiz is not None:
            tt.append(chiz.encode('ascii', 'ignore'))  # encoding
        else:
            tt.append("none")
    output.writerow(tt)
These profiles don't necessarily all have the same fields, and some fields are themselves collections (with sub-branches) and get imported as dictionaries! So I have to convert them to lists, and all in all there are quite a few things to take care of in such a process; it doesn't look that straightforward!
My question might sound very general, but is this a typical way to make such a report? If not, can someone make it clear?
Yes, I am using the same approach.
It is clear and fast, and it also works without any additional libraries.
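For reference, here is a minimal sketch of the same approach using csv.DictWriter, which handles the missing-field and sub-document issues directly (the connection details and field names are illustrative assumptions):
import csv
from pymongo import MongoClient

db = MongoClient()['MyDataBase']  # adjust connection details as needed

# assumed flat fields of interest; extend as needed
fieldnames = ['name', 'address', 'postalcode', 'email', 'age']

with open('some.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction='ignore')
    writer.writeheader()  # header row with the field names
    for doc in db.settings.find().limit(10):
        profile = doc.get('data', {}).get('Profile', {})
        # keep scalar values only; sub-documents and lists are skipped
        row = {k: v for k, v in profile.items()
               if not isinstance(v, (dict, list))}
        writer.writerow(row)
Missing fields are written as empty cells, and any extra keys are ignored thanks to extrasaction='ignore'.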