I am trying to ask the user to input data to perform the insert. It works whenever I enter numbers, but when I enter letters it gives me the error "LettersUsed" is not defined. I tried converting the input with str(input('Whatever')), but that did not do the trick. Any idea why it does this?
import pymongo
import sys

# get a connection to the database
connection = pymongo.MongoClient('mongodb://localhost')
# get a handle to the database
db = connection.test
vehicles = db.vehicles

vehicle_VIN = input('What is the Vehicle VIN number? ')
vehicle_Make = input('What is the Vehicle Make? ')
newVehicle = {'VIN': vehicle_VIN.upper(), 'Make': vehicle_Make}

try:
    vehicles.insert_one(newVehicle)
    print('New Vehicle Inserted')
except Exception as e:
    print 'Unexpected error:', type(e), e

# print results
results = vehicles.find()
print()

# display documents in collection
for record in results:
    print(record['VIN'] + ',', record['Make'])

# close the connection to MongoDB
connection.close()
Message: name 'DAFEQF' is not defined
In Python 2, input() is equivalent to eval(raw_input(prompt)). This means that whatever input you enter, Python 2 tries to evaluate it as a Python expression, and it complains that a bare word like DAFEQF is not defined.
Make sure you replace input() with raw_input() (this applies to Python 2 only!).
Replace
vehicle_VIN = input('What is the Vehicle VIN number? ')
vehicle_Make = input('What is the Vehicle Make? ')
with
vehicle_VIN = raw_input('What is the Vehicle VIN number? ')
vehicle_Make = raw_input('What is the Vehicle Make? ')
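In Python 3, input() already returns a plain string, so no change is needed there. If the script has to run under both interpreters, one common pattern is to pick the prompt function once at startup. A minimal sketch (read_line is just an illustrative name):

import sys

# On Python 2, input() is eval(raw_input()), so alias the safe reader once
if sys.version_info[0] < 3:
    read_line = raw_input  # raw_input only exists on Python 2
else:
    read_line = input

vehicle_VIN = read_line('What is the Vehicle VIN number? ')
vehicle_Make = read_line('What is the Vehicle Make? ')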
When performing a PostgreSQL query that searches a database for matches from two lists, I receive the following error:
ProgrammingError: operator does not exist: character varying ~~ text[]
LINE 1: ...FROM public."Phosphosite_table" WHERE "GENE_NAME" LIKE ARRAY...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
My code is the following:
import csv
import time

start = time.time()

input_file = "az20.tsv"
names = []
residuelist = []
kinase = []
fclist = []

with open(input_file, "r") as data:
    data = csv.reader(data, delimiter='\t')
    next(data, None)
    for row in data:
        data = row[0:6]
        if "(" in data[0]:
            name = data[0].split("(")[0]
            residue1 = data[0].split("(")[1]
            residue = residue1.split(")")[0]
            fc = data[3]
        else:
            pass
        if "_" in name:
            name = name.split("_")[0]
        if residue != "None":
            names.append(str(name))
            residuelist.append(str(residue))
            fclist.append(fc)

genename = names
location = residuelist

connection = pg.connect(HAHAHA)
cur = connection.cursor()
cur.execute('SELECT "KINASE_NAME" FROM public."Phosphosite_table" WHERE "GENE_NAME" LIKE %s and "RESIDUE" LIKE %s',
            (genename, location))
query = cur.fetchall()
print query
connection.close()

end = time.time()
print str((end - start)/60) + " Minutes"
I have done some research, and it appears that PostgreSQL does not perform any implicit typecasting here. However, I thought this would be a comparison of a string against a string, which is why I cast the values to str before appending them to my lists. Does anyone have any advice?
Connor
However, I thought it would be a comparison of a string against a string
The part character varying ~~ text[] of the error message tells you that you are comparing a string ("character varying") with an array ("text[]").
If you want to compare a single value with all elements of an array you need to use the ANY operator:
WHERE "GENE_NAME" LIKE any(%s)
assuming that %s is passed as a native Postgres array from your code (which seems to be the case, given the error message).
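For example, with psycopg2 (a hedged sketch; I am assuming pg in your code is psycopg2, which adapts a Python list to a native Postgres array):

cur.execute(
    'SELECT "KINASE_NAME" FROM public."Phosphosite_table" '
    'WHERE "GENE_NAME" LIKE ANY(%s) AND "RESIDUE" LIKE ANY(%s)',
    (genename, location)
)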
I have an 'em dash' character in my Python code, which I use to split a line in a certain txt file.
with open(path, 'r') as r:
    number = r.readline()
    num = number.split(' — ')[1].replace('\n',' — ')
It worked fine under Ubuntu with Python 3.4, but when running the code under Windows 7 (Python 3.4) I get the following error:
num = number.split(' \u2014 ')[1].replace('\n',' \u2014 ')
IndexError: list index out of range
I'm sure that it should work, and it seems that the problem is in the encoding.
I will appreciate any help to fix my program. I've tried to set "# -*- coding: utf-8 -*-" without any result.
SOLUTION WAS: open(path, mode, encoding='UTF8')
When you do:
num = number.split(' — ')[1].replace('\n',' — ')
you assume that the string number contains a dash and then take the second field ([1]). If number does not contain a dash, then [1] does not exist, only [0] exists, and you get the index-out-of-range error. Guard against that case first:
if ' — ' in number:
    num = number.split(' — ')[1].replace('\n',' — ')
else:
    num = number.replace('\n',' — ')
Furthermore, as you are now on Windows, you might want to check for '\r\n' as well as '\n', depending on what the file uses as its end-of-line character(s).
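Putting the accepted encoding fix and this guard together, a minimal sketch (assuming the file is UTF-8):

with open(path, 'r', encoding='utf8') as r:
    number = r.readline()
    # text mode uses universal newlines, so '\r\n' is already
    # translated to '\n' by the time we see the line
    if ' — ' in number:
        num = number.split(' — ')[1].replace('\n', ' — ')
    else:
        num = number.replace('\n', ' — ')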
I am trying the following:
from pandas.io.sql import get_schema
tbl_schema = get_schema(contracts, 'my_contracts', keys=['country', 'contract_id'], con=db_engine)
I am getting this error:
ArgumentError: Element ['country', 'contract_id'] is not a string name or column element
which seems likely to be coming from this:
def _to_schema_column_or_string(element):
    if hasattr(element, '__clause_element__'):
        element = element.__clause_element__()
    if not isinstance(element, util.string_types + (ColumnElement, )):
        msg = "Element %r is not a string name or column element"
        raise exc.ArgumentError(msg % element)
    return element
I am not sure I understand how multiple primary keys should be formatted to be parsed properly, and I don't really understand this: util.string_types + (ColumnElement, ). I was hoping I could just point to the frame's columns without having to define the whole SQLAlchemy schema.
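For what it's worth, util.string_types + (ColumnElement, ) just builds a tuple of types, because isinstance() accepts a tuple; on Python 3 it boils down to (str, ColumnElement). A minimal sketch of why the keys list itself fails that check (assuming a SQLAlchemy version where ColumnElement is importable from sqlalchemy.sql.expression):

from sqlalchemy.sql.expression import ColumnElement

# isinstance() accepts a tuple of types; each key must individually be
# a plain string or a SQLAlchemy ColumnElement
print(isinstance('country', (str, ColumnElement)))                    # True
print(isinstance(['country', 'contract_id'], (str, ColumnElement)))  # False

Since the error message shows the whole list ['country', 'contract_id'] as a single element, it looks like the list is reaching this check as one key rather than being iterated over.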
I am creating a simple chat program in QBasic that will answer questions based on specific key words present in the user input. Therefore I need a way to search for a substring (i.e. a specific word) within a string.
So, please help me.
To find out if a string contains a certain (sub-)string, you can do this:
text$ = "nonsense !"
IF INSTR( text$, "sense" ) >= 1 THEN
  PRINT "This text makes sense !"
END IF
And no, I was not able to test this, as I no longer have QBasic on my PC ;-)
According to the link from the comment above, >= 1 is OK.
I think INSTR is usually used as follows:
sent$ = "This is a sentence"
PRINT INSTR(1, sent$, "is")
PRINT INSTR(4, sent$, "is")
PRINT INSTR(1, sent$, "word")
The first PRINT command will print '3', since the first occurrence of "is" within the sentence is at position 3 (the 'is' in 'This').
The second PRINT command starts searching at position 4 (the 's' in 'This'), and so finds the "is" at position 6; it will therefore print '6'.
The third PRINT command will print '0', since there is no instance of "word" in the sentence.
This counts the occurrences of a substring within a string:
T$ = "text to be searched and to be displayed"
S$ = "to"
l = 1
DO
  x = INSTR(l, T$, S$)
  IF x THEN
    n = n + 1
    l = x + LEN(S$)
  ELSE
    EXIT DO
  END IF
LOOP
PRINT "text '"; S$; "' matches"; n; "times."
I was using ActiveRecord to get my stories and then generate a CSV, the standard way as done in the RailsCast. But I have a lot of rows, and it takes minutes. I think if I could get PostgreSQL to do the CSV rendering, then I could save some time.
Here's what I have right now:
query = "COPY stories TO STDOUT WITH CSV HEADER;"
results = ActiveRecord::Base.connection.execute(query);
But the results are empty for this query:
=> #<PG::Result:0x00000006ea0488 @connection=#<PG::Connection:0x00000006c62fb8 @socket_io=nil, @notice_receiver=nil, @notice_processor=nil>>
2.0.0-p247 :053 > results.count
=> 0
A better way of knowing:
2.0.0-p247 :059 > results.to_json
=> "[]"
I suspect my controller will look something like this:
format.csv { send_data raw_results }
This works for normal queries; I just can't figure out the SQL syntax to have the CSV results returned to Rails.
UPDATE
Got the CSV export from 120000 msec down to 290 msec
My model:
def self.to_csv(story_ids)
  csv = []
  conn = ActiveRecord::Base.connection.raw_connection
  conn.copy_data("COPY (SELECT * FROM stories WHERE stories.id IN (#{story_ids.join(',')})) TO STDOUT WITH (FORMAT CSV, HEADER TRUE, FORCE_QUOTE *, ESCAPE E'\\\\');") do
    while row = conn.get_copy_data
      csv.push(row)
    end
  end
  csv.join("\r\n")
end
My controller:
send_data Story.to_csv(Story.order(:created_at).pluck(:id))
AFAIK you need to use the copy_data method on the underlying PostgreSQL database connection for this:
- (Object) copy_data(sql)
call-seq:
conn.copy_data( sql ) {|sql_result| ... } -> PG::Result
Execute a copy process for transferring [sic] data to or from the server.
This issues the SQL COPY command via #exec. The response to this (if there is no error in the command) is a PG::Result object that is passed to the block, bearing a status code of PGRES_COPY_OUT or PGRES_COPY_IN (depending on the specified copy direction). The application should then use #put_copy_data or #get_copy_data to receive or transmit data rows and should return from the block when finished.
And there's even an example:
conn.copy_data "COPY my_table TO STDOUT CSV" do
  while row = conn.get_copy_data
    p row
  end
end
ActiveRecord's wrapper for the raw database connection doesn't know what copy_data is but you can use raw_connection to unwrap it:
conn = ActiveRecord::Base.connection.raw_connection
csv = [ ]
conn.copy_data('copy stories to stdout with csv header') do
  while row = conn.get_copy_data
    csv.push(row)
  end
end
That would leave you with an array of CSV strings in csv (one CSV row per array entry) and you could csv.join("\r\n") to get the final CSV data.
This answer builds on the answer provided by @mu-is-too-short, but streams the response instead of building up a temporary object.
headers['X-Accel-Buffering'] = 'no'
headers["Cache-Control"] = 'no-cache'
headers["Transfer-Encoding"] = 'chunked'
headers['Content-Type'] = 'text/csv; charset=utf-8'
headers['Content-Disposition'] = 'inline; filename="data.csv"'
headers.delete('Content-Length')
sql = "SELECT * FROM stories WHERE stories.id IN (#{story_ids.join(',')})"
self.response_body = Enumerator.new do |chunk|
  conn = ActiveRecord::Base.connection.raw_connection
  conn.copy_data("COPY (#{sql.chomp(';')}) TO STDOUT WITH (FORMAT CSV, HEADER TRUE, FORCE_QUOTE *, ESCAPE E'\\\\');") do
    while row = conn.get_copy_data
      chunk << "#{row.length.to_s(16)}\r\n"
      chunk << row
      chunk << "\r\n"
    end
    chunk << "0\r\n\r\n"
  end
end
You can also use gz = Zlib::GzipWriter.new(Stream.new(chunk)) and gz.write row with a class akin to:
class Stream
  def initialize(block)
    @block = block
  end

  def write(row)
    @block << "#{row.length.to_s(16)}\r\n"
    @block << row
    @block << "\r\n"
  end
end
And remember headers['Content-Encoding'] = 'gzip'. See also this gist.
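Wiring the gzip writer into the enumerator could then look roughly like this (a hedged sketch: the COPY options are trimmed for brevity, and gz.close is needed so the gzip trailer is flushed through Stream#write before the terminating chunk):

require 'zlib'

self.response_body = Enumerator.new do |chunk|
  gz = Zlib::GzipWriter.new(Stream.new(chunk))
  conn = ActiveRecord::Base.connection.raw_connection
  conn.copy_data("COPY (#{sql.chomp(';')}) TO STDOUT WITH (FORMAT CSV, HEADER TRUE)") do
    while row = conn.get_copy_data
      gz.write row  # compressed bytes reach the client via Stream#write
    end
  end
  gz.close              # flush remaining gzip data and the trailer
  chunk << "0\r\n\r\n"  # terminating chunk for Transfer-Encoding: chunked
end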