Python mysql.connector .fetchall() returns (redundant) data in different formats - mysql-connector-python

I have this method to execute Queries:
def exeQuery(query, data, dbEdit):
    try:
        myDatabase = mysql.connector.connect(**dbLoginInfo)
        cursor = myDatabase.cursor()
    except mysql.connector.Error as e:
        print('[ERROR WHILE CONNECTING TO DATABASE]: ', e)
    else:
        if dbEdit == True:
            if data == None:
                res = cursor.execute(query)
                myDatabase.commit()
            else:
                res = cursor.execute(query, data)
                myDatabase.commit()
        else:
            if data == None:
                cursor.execute(query)
                res = cursor.fetchall()
            else:
                cursor.execute(query, data)
                res = cursor.fetchall()
        cursor.close()
        myDatabase.close()
        #print(type(res))
        #print(res)
        return res
And I call it here:
@app.route("/profil/delete", methods=['POST'])
@token_required
def deleteProfil():
    dicUser = decodeToken(request.args.get('token'))
    profilName = request.args.get('profilName')
    path = exeQuery('SELECT profilbild FROM Profil WHERE profilName = %s AND konto_email = %s', (profilName, dicUser['user']), False)
    print(path)
    return Response(status=200)
It should print ('pics/jj#gmail.de/DelProf.png',) once.
For some reason I get the right data, but the formatting sometimes changes. Every time there was only one row in the database that matched the query. These are the three outputs I have gotten so far:
[('pics/jj#gmail.de/DelProf.png',), ('pics/jj#gmail.de/DelProf.png',), ('pics/jj#gmail.de/DelProf.png',), ('pics/jj#gmail.de/DelProf.png',)]
[('pics/jj#gmail.de/DelProf.png',)]
('pics/jj#gmail.de/DelProf.png',)

The method MySQLCursor.fetchall() returns a list of tuples, as documented here: https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-fetchall.html
Even if there are no results, it will still return a list, just an empty one ([]).
If you got a list with repeated values, it is possible that your query actually matched those rows.
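If you want the calling code to handle whatever comes back in a consistent way, a small normalisation step after the call can help. This is only a sketch of one option, reusing the variables from the question; it is not part of the original code:
path_rows = exeQuery(
    'SELECT profilbild FROM Profil WHERE profilName = %s AND konto_email = %s',
    (profilName, dicUser['user']),
    False,
)
# for a SELECT, exeQuery returns fetchall()'s list of tuples,
# so take the first column of the first row if anything matched
profilbild = path_rows[0][0] if path_rows else None
print(profilbild)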


Why is the same query being executed twice with the two methods?

def execCypher(conn:ext.connection, graphName:str, cypherStmt:str, cols:list=None, params:tuple=None) -> ext.cursor:
    if conn == None or conn.closed:
        raise _EXCEPTION_NoConnection
    cursor = conn.cursor()
    # clean up the string for modification
    cypherStmt = cypherStmt.replace("\n", "")
    cypherStmt = cypherStmt.replace("\t", "")
    cypher = str(cursor.mogrify(cypherStmt, params))
    cypher = cypher[2:len(cypher)-1]
    preparedStmt = "SELECT * FROM age_prepare_cypher({graphName},{cypherStmt})"
    cursor = conn.cursor()
    try:
        cursor.execute(sql.SQL(preparedStmt).format(graphName=sql.Literal(graphName), cypherStmt=sql.Literal(cypher)))
    except SyntaxError as cause:
        conn.rollback()
        raise cause
    except Exception as cause:
        conn.rollback()
        raise SqlExecutionError("Execution ERR[" + str(cause) + "](" + preparedStmt + ")", cause)
    stmt = buildCypher(graphName, cypher, cols)
    cursor = conn.cursor()
    try:
        cursor.execute(stmt)
        return cursor
    except SyntaxError as cause:
        conn.rollback()
        raise cause
    except Exception as cause:
        conn.rollback()
        raise SqlExecutionError("Execution ERR[" + str(cause) + "](" + stmt + ")", cause)
I just want to understand: if the two execution calls (the prepared statement and the one built by buildCypher) do the same thing, why are both of them executed in the same function with the same parameters?
The two execution calls are not doing the same thing. SELECT * FROM age_prepare_cypher({graphName},{cypherStmt}) only stores the graphName and cypherStmt in the session.
The buildCypher function builds the statement below; it doesn't even touch your cypherStmt:
def buildCypher(graphName:str, cypherStmt:str, columns:list) -> str:
    if graphName == None:
        raise _EXCEPTION_GraphNotSet
    columnExp = []
    if columns != None and len(columns) > 0:
        for col in columns:
            if col.strip() == '':
                continue
            elif WHITESPACE.search(col) != None:
                columnExp.append(col)
            else:
                columnExp.append(col + " agtype")
    else:
        columnExp.append('v agtype')
    stmtArr = []
    stmtArr.append("SELECT * from cypher(NULL,NULL) as (")
    stmtArr.append(','.join(columnExp))
    stmtArr.append(");")
    return "".join(stmtArr)
Later on, the cypher(NULL, NULL) call must be getting the graphName and cypherStmt back out of the session to use, so it needs both executions to work.
See more at https://stackoverflow.com/a/75461986/6796081

Flutter: when clearing a list that has already been added to another list, the values are gone

I need to group the elements by their date.
For example:
When I have 4 entries and 2 of them have the same date, the result will be 3 entries in the result list
This is my code:
Future<List<List<MoodData>>> moodData() async {
  var result = await database
      .ref()
      .child("users/")
      .child(user!.uid)
      .child("moodData")
      .once();
  List<MoodData> x = [];
  List<List<MoodData>> resultdata = [];
  result.snapshot.children.forEach((element) {
    maxID = int.parse(element.key.toString());
    print(element.child("date").value);
    if (x.length != 2) {
      x.add(MoodData(
          id: int.parse(element.key.toString()),
          date: element.child("date").value.toString(),
          moodValue: double.parse(element.child("y_value").value.toString()),
          text: element.child("text").value.toString()));
    } else {
      resultdata.add(x);
      x.clear();
    }
  });
  print(resultdata);
  return resultdata;
}
The problem is that in the result list, all the elements are empty lists.
What is my code doing wrong?
When you add x to resultdata, it does not produce a copy of x; x itself just becomes an element of resultdata.
After that you have two ways of accessing the data in x:
using the given name x
getting it from resultdata by index
So when you call x.clear() after resultdata.add(x), it is the same as calling resultdata.last.clear().
The right solution is to add a copy of x ([...x]) to resultdata:
resultdata.add([...x]);
x.clear();
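(An alternative that should work just as well here is resultdata.add(x.toList()), since toList() also creates a new, independent list.)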

Upsert statement with Flask-SQLAlchemy

I have a Flask app that parses a CSV of public election data and inserts the results into a Postgres database. It's a port of an old, not-Flask, Python 2 app that uses various libraries that no longer work. I'm mostly trying to base the application's structure on this tutorial. I've been using Flask-SQLAlchemy to construct some models for the database tables and populate the data from the CSV.
In this case I'm working with an Area model, which corresponds to a geographic area that might have an election (house district, school board district, etc). Here's what I've got in my basic blueprint route:
election = None

@bp.route('/areas')
def scrape_areas():
    area = Area()
    sources = area.read_sources()
    election = area.set_election()
    if election not in sources:
        return
    # Get metadata about election
    election_meta = sources[election]['meta'] if 'meta' in sources[election] else {}
    for i in sources[election]:
        source = sources[election][i]
        if 'type' in source and source['type'] == 'areas':
            rows = area.parse_election(source, election_meta)
            count = 0
            for row in rows:
                parsed = area.parser(row, i)
                area = Area()
                area.from_dict(parsed, new=True)
                # this shows the generated string of area_id
                # which is a UNIQUE key in the database
                print(area)
                db.session.add(area)
                db.session.commit()
                count = count + 1
    return count
And here's the models.py:
import logging
import os
import json
import re
import csv
import urllib.request
import calendar
import datetime
from flask import current_app
from app import db

LOG = logging.getLogger(__name__)

scraper_sources_inline = None


class ScraperModel(object):

    nonpartisan_parties = ['NP', 'WI', 'N P']

    def __init__(self, group_type=None):
        """
        Constructor
        """
        # this is where scraperwiki was creating and connecting to its database
        # we do this in the imported sql file instead
        self.read_sources()

    def read_sources(self):
        """
        Read the scraper_sources.json file.
        """
        if scraper_sources_inline is not None:
            self.sources = json.loads(scraper_sources_inline)
        else:
            #sources_file = current_app.config['SOURCES_FILE']
            sources_file = os.path.join(current_app.root_path, '../scraper_sources.json')
            data = open(sources_file)
            self.sources = json.load(data)
        return self.sources

    def set_election(self):
        # Get the newest set
        newest = 0
        for s in self.sources:
            newest = int(s) if int(s) > newest else newest
        newest_election = str(newest)
        election = newest_election
        # Usually we just want the newest election but allow for other situations
        election = election if election is not None and election != '' else newest_election
        return election

    def parse_election(self, source, election_meta={}):
        # Ensure we have a valid parser for this type
        parser_method = getattr(self, "parser", None)
        if callable(parser_method):
            # Check if election has base_url
            source['url'] = election_meta['base_url'] + source['url'] if 'base_url' in election_meta else source['url']
            # Get data from URL
            try:
                response = urllib.request.urlopen(source['url'])
                lines = [l.decode('latin-1') for l in response.readlines()]
                rows = csv.reader(lines, delimiter=';')
                return rows
            except Exception as err:
                LOG.exception('[%s] Error when trying to read URL and parse CSV: %s' % (source['type'], source['url']))
                raise

    def from_dict(self, data, new=False):
        for field in data:
            setattr(self, field, data[field])
class Area(ScraperModel, db.Model):

    __tablename__ = "areas"

    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    area_id = db.Column(db.String(255), unique=True, nullable=False)
    areas_group = db.Column(db.String(255))
    county_id = db.Column(db.String(255))
    county_name = db.Column(db.String(255))
    ward_id = db.Column(db.String(255))
    precinct_id = db.Column(db.String(255))
    precinct_name = db.Column(db.String(255))
    state_senate_id = db.Column(db.String(255))
    state_house_id = db.Column(db.String(255))
    county_commissioner_id = db.Column(db.String(255))
    district_court_id = db.Column(db.String(255))
    soil_water_id = db.Column(db.String(255))
    school_district_id = db.Column(db.String(255))
    school_district_name = db.Column(db.String(255))
    mcd_id = db.Column(db.String(255))
    precincts = db.Column(db.String(255))
    name = db.Column(db.String(255))
    updated = db.Column(db.DateTime, default=db.func.current_timestamp(), onupdate=db.func.current_timestamp())

    def __repr__(self):
        return '<Area {}>'.format(self.area_id)

    def parser(self, row, group):
        # General data
        parsed = {
            'area_id': group + '-',
            'areas_group': group,
            'county_id': None,
            'county_name': None,
            'ward_id': None,
            'precinct_id': None,
            'precinct_name': '',
            'state_senate_id': None,
            'state_house_id': None,
            'county_commissioner_id': None,
            'district_court_id': None,
            'soil_water_id': None,
            'school_district_id': None,
            'school_district_name': '',
            'mcd_id': None,
            'precincts': None,
            'name': ''
        }
        if group == 'municipalities':
            parsed['area_id'] = parsed['area_id'] + row[0] + '-' + row[2]
            parsed['county_id'] = row[0]
            parsed['county_name'] = row[1]
            parsed['mcd_id'] = "{0:05d}".format(int(row[2]))  # enforce 5 digits
            parsed['name'] = row[1]
        if group == 'counties':
            parsed['area_id'] = parsed['area_id'] + row[0]
            parsed['county_id'] = row[0]
            parsed['county_name'] = row[1]
            parsed['precincts'] = row[2]
        if group == 'precincts':
            parsed['area_id'] = parsed['area_id'] + row[0] + '-' + row[1]
            parsed['county_id'] = row[0]
            parsed['precinct_id'] = row[1]
            parsed['precinct_name'] = row[2]
            parsed['state_senate_id'] = row[3]
            parsed['state_house_id'] = row[4]
            parsed['county_commissioner_id'] = row[5]
            parsed['district_court_id'] = row[6]
            parsed['soil_water_id'] = row[7]
            parsed['mcd_id'] = row[8]
        if group == 'school_districts':
            parsed['area_id'] = parsed['area_id'] + row[0]
            parsed['school_district_id'] = row[0]
            parsed['school_district_name'] = row[1]
            parsed['county_id'] = row[2]
            parsed['county_name'] = row[3]
        return parsed
So Area is an extension of my default model class because it allows me to set up the fields that are specific to a given area based on the CSV.
Where this code fails is a (relatively) rare case in the CSV data where there might be multiple rows that, in the old application, correspond to the same row in the table. That old application had an array (usually with just one item, representing a UNIQUE column in the database) to instruct the code to run an UPDATE on those rows.
It returns an error like this:
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "areas_area_id_key"
DETAIL: Key (area_id)=(counties-01) already exists.
An example of how it runs when I'm just logging my UNIQUE key value from the model instead of inserting it:
<Area precincts-87-0140>
<Area precincts-87-0145>
<Area precincts-87-0150>
<Area precincts-87-0155>
<Area precincts-87-0160>
<Area precincts-87-0165>
<Area school_districts-0001>
<Area school_districts-0001>
<Area school_districts-0001>
<Area school_districts-0002>
<Area school_districts-0004>
<Area school_districts-0006>
<Area school_districts-0012>
<Area school_districts-0013>
<Area school_districts-0014>
So I've been looking at different methods that Flask can use to run an UPSERT statement, because I'd need to update all of the fields, and they differ based on which type of area it is (and the same applies to the other models, such as election contests or results). Most of what I'm finding uses plain SQLAlchemy rather than Flask-SQLAlchemy.
I found this answer that looked promising. Here's what I added to my model:
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import Insert
Then I modified the ScraperModel class like this:
class ScraperModel(object):

    @compiles(Insert)
    def compile_upsert(insert_stmt, compiler, **kwargs):
        """
        converts every SQL insert to an upsert i.e;
        INSERT INTO test (foo, bar) VALUES (1, 'a')
        becomes:
        INSERT INTO test (foo, bar) VALUES (1, 'a') ON CONFLICT(foo) DO UPDATE SET (bar = EXCLUDED.bar)
        (assuming foo is a primary key)
        :param insert_stmt: Original insert statement
        :param compiler: SQL Compiler
        :param kwargs: optional arguments
        :return: upsert statement
        """
        pk = insert_stmt.table.primary_key
        insert = compiler.visit_insert(insert_stmt, **kwargs)
        ondup = f'ON CONFLICT ({",".join(c.name for c in pk)}) DO UPDATE SET'
        updates = ', '.join(f"{c.name}=EXCLUDED.{c.name}" for c in insert_stmt.table.columns)
        upsert = ' '.join((insert, ondup, updates))
        return upsert
I'm clearly misunderstanding how the insert_stmt works because of how the query comes out, but here's the error that it generates:
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "ON"
LINE 1: ..., '54', '', CURRENT_TIMESTAMP) RETURNING areas.id ON CONFLIC...
^
[SQL: INSERT INTO areas (area_id, areas_group, county_id, county_name, ward_id, precinct_id, precinct_name, state_senate_id, state_house_id, county_commissioner_id, district_court_id, soil_water_id, school_district_id, school_district_name, mcd_id, precincts, name, updated) VALUES (%(area_id)s, %(areas_group)s, %(county_id)s, %(county_name)s, %(ward_id)s, %(precinct_id)s, %(precinct_name)s, %(state_senate_id)s, %(state_house_id)s, %(county_commissioner_id)s, %(district_court_id)s, %(soil_water_id)s, %(school_district_id)s, %(school_district_name)s, %(mcd_id)s, %(precincts)s, %(name)s, CURRENT_TIMESTAMP) RETURNING areas.id ON CONFLICT (id) DO UPDATE SET id=EXCLUDED.id, area_id=EXCLUDED.area_id, areas_group=EXCLUDED.areas_group, county_id=EXCLUDED.county_id, county_name=EXCLUDED.county_name, ward_id=EXCLUDED.ward_id, precinct_id=EXCLUDED.precinct_id, precinct_name=EXCLUDED.precinct_name, state_senate_id=EXCLUDED.state_senate_id, state_house_id=EXCLUDED.state_house_id, county_commissioner_id=EXCLUDED.county_commissioner_id, district_court_id=EXCLUDED.district_court_id, soil_water_id=EXCLUDED.soil_water_id, school_district_id=EXCLUDED.school_district_id, school_district_name=EXCLUDED.school_district_name, mcd_id=EXCLUDED.mcd_id, precincts=EXCLUDED.precincts, name=EXCLUDED.name, updated=EXCLUDED.updated]
[parameters: {'area_id': 'counties-01', 'areas_group': 'counties', 'county_id': '01', 'county_name': 'Aitkin', 'ward_id': None, 'precinct_id': None, 'precinct_name': '', 'state_senate_id': None, 'state_house_id': None, 'county_commissioner_id': None, 'district_court_id': None, 'soil_water_id': None, 'school_district_id': None, 'school_district_name': '', 'mcd_id': None, 'precincts': '54', 'name': ''}]
(Background on this error at: https://sqlalche.me/e/14/f405)
I'm hoping I didn't paste too much to be helpful there.
I also found this answer, which I read as creating its own insert statement instead of compiling the built-in one. Here's what I changed. In the blueprint's imports:
from sqlalchemy.dialects.postgresql import insert
And in the blueprint's loop:
for i in sources[election]:
    source = sources[election][i]
    if 'type' in source and source['type'] == 'areas':
        rows = area.parse_election(source, election_meta)
        count = 0
        for row in rows:
            parsed = area.parser(row, i)
            area = Area()
            area.from_dict(parsed, new=True)
            stmt = insert(Area.__table__).values(parsed)
            stmt = stmt.on_conflict_do_update(
                # Let's use the constraint name which was visible in the original posts error msg
                constraint="['area_id']",
                # The columns that should be updated on conflict
                set_={
                    parsed
                }
            )
            db.session.execute(stmt)
            count = count + 1
return count
It results in a different error:
TypeError: unhashable type: 'dict'
All that to say, I'm currently at a loss. It's clear to me that I need to modify the INSERT statement, but it's not clear to me which route I should take to modify it, how to make sure that it matches on the correct field (which is called area_id and the key is called areas_id_unique), or how to make sure it updates the correct fields when it does find a match.
What I think I'm finding is that none of this would work because I wasn't matching on a primary key, but on a unique key. What I've done is change the unique key area_id to a primary key. Then, I can use the upsert statement from above.
@compiles(Insert)
def compile_upsert(insert_stmt, compiler, **kwargs):
    """
    converts every SQL insert to an upsert i.e;
    INSERT INTO test (foo, bar) VALUES (1, 'a')
    becomes:
    INSERT INTO test (foo, bar) VALUES (1, 'a') ON CONFLICT(foo) DO UPDATE SET (bar = EXCLUDED.bar)
    (assuming foo is a primary key)
    :param insert_stmt: Original insert statement
    :param compiler: SQL Compiler
    :param kwargs: optional arguments
    :return: upsert statement
    """
    pk = insert_stmt.table.primary_key
    insert = compiler.visit_insert(insert_stmt, **kwargs)
    ondup = f'ON CONFLICT ({",".join(c.name for c in pk)}) DO UPDATE SET'
    updates = ', '.join(f"{c.name}=EXCLUDED.{c.name}" for c in insert_stmt.table.columns)
    upsert = ' '.join((insert, ondup, updates))
    return upsert
I had been trying to change the pk = insert_stmt.table.primary_key line to check for the unique key instead, with no success, but it works just like this once I make that field the primary key.
Changing the primary key also fixed the other solution I was trying:
group = []
for row in rows:
    parsed = area.parser(row, i)
    area = Area()
    area.from_dict(parsed, new=True)
    group.append(area)
insert(db.session, Area, group)

def insert(session, model, rows):
    table = model.__table__
    stmt = insert(table)
    primary_keys = [key.name for key in inspect(table).primary_key]
    update_dict = {c.name: c for c in stmt.excluded if not c.primary_key}
    if not update_dict:
        raise ValueError("insert_or_update resulted in an empty update_dict")
    stmt = stmt.on_conflict_do_update(
        index_elements=primary_keys,
        set_=update_dict
    )
So both solutions were (relatively) workable, but only with a primary key instead of a unique key and that just hadn't been clear to me.
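For reference, here is a minimal sketch of how the second approach can be wired up without the custom compiler, assuming (as described above) that area_id has been made the primary key. The helper name upsert_area is mine, not from the original code:
from sqlalchemy.dialects.postgresql import insert

def upsert_area(session, parsed):
    # Build INSERT ... ON CONFLICT (area_id) DO UPDATE from the parsed dict
    stmt = insert(Area.__table__).values(parsed)
    stmt = stmt.on_conflict_do_update(
        index_elements=['area_id'],
        # update only the columns actually being inserted, minus the conflict target
        set_={c.name: c for c in stmt.excluded if c.name in parsed and c.name != 'area_id'},
    )
    session.execute(stmt)
    session.commit()
Note that set_ expects a dict mapping column names to values, which is also why the earlier set_={parsed} attempt failed with TypeError: unhashable type: 'dict' ({parsed} is a set literal containing a dict).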

Insert to postgres DB: invalid String format (parsing exception error)

I am creating a log DB where I track execution steps and catch where they might or might not fail.
I am having issues inserting the exceptions thrown into the DB.
I am calling an endpoint first, which in turn queries a Django database.
In this case I wanted to catch a mistake in the select_query, which makes it impossible to fetch the data from the DB.
I want to log this message.
Example:
Exception catch
try:
    select_query = "SELECT columnn from row"
    cur.execute(select_query)
    selected_vals = cur.fetchall()
except Exception as e:
    response = insert_into_logging(message = f"{e}", logging_id = 3)
Error
File "/usr/lib/python3.6/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
This is because of the value passed in the message argument (message=f"{e}"); it is not insertable into the DB.
The value of "e" is:
auto_close_alarm relation "row" does not exist
LINE 1: SELECT columnn from row
^
I tried to pass
str(e)
Value passed: identical output as above
repr(e.args[0])
Output: 'relation "row" does not exist\nLINE 1: SELECT columnn from row\n ^\n'
repr(e)
Value passed: UndefinedTable('relation "row" does not exist\nLINE 1: SELECT columnn from row\n ^\n',)
And multiple other ways, but to no avail.
They all result in the same JSONDecodeError.
Basically, the key problem is that this is not a string format that can be parsed on its way into the DB.
All I need is a representation of this exception error that can be inserted.
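For what it's worth, the exception text itself normally survives JSON serialisation, because json.dumps escapes newlines and quotes. A quick sanity check along these lines (just a sketch, not part of the original code) can help narrow down whether the problem really is the message or something else in the request/response cycle:
import json

msg = 'relation "row" does not exist\nLINE 1: SELECT columnn from row\n        ^\n'
payload = json.dumps({"message": msg, "logging_id": 3})
# the message round-trips cleanly: newlines and quotes are escaped by json.dumps
assert json.loads(payload)["message"] == msg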
Flow of data:
insert function
def insert_into_logging(message, logging_id, date_time):
    url = f"""url/insert-into-logging/"""
    data = {
        "message": message,
        "logging_id": logging_id,
    }
    response = r.post(url, data=json.dumps(data, indent=4, sort_keys=True, default=str))
    print(response)
    return response.json()
Querying DB
def insert_to_logging(request):
    from datetime import datetime, timedelta
    if request.method != 'POST':
        print(request.method)
        return JsonResponse({'status': 'error', 'message': 'Client insertion only accepts post requests.' + str(request.method)})
    today = datetime.today()
    body = json.loads(request.body)
    message = body.get('message')
    logging_id = body.get('logging_id')
    Logging.objects.create(message=message, logging_id=logging_id, time_stamp=today)
    return JsonResponse({'status': 'OK', 'message': 'Logging successful'})
Django Model
class Logging(models.Model):
    objects = GetOrNoneManager()
    message = models.CharField(max_length=64, blank=True, null=True)
    logging_id = models.IntegerField(blank=True, null=True)
    time_stamp = models.DateTimeField(blank=True, null=True)

    class Meta:
        db_table = 'logging'

Efficient way to optimise Scala code that reads a large file which doesn't fit in memory

Problem statement:
We have a large log file which stores user interactions with an application. The entries in the log file follow the following schema: {userId, timestamp, actionType} where actionType is one of two possible values: [open, close]
Constraints:
The log file is too big to fit in memory on one machine. Also assume that the aggregated data doesn’t fit into memory.
Code has to be able to run on a single machine.
Should not use an out-of-the box implementation of mapreduce or 3rd party database; don’t assume we have a Hadoop or Spark or other distributed computing framework.
There can be multiple entries of each actionType for each user, and there might be missing entries in the log file. So a user might be missing a close record between two open records or vice versa.
Timestamps will come in strictly ascending order.
For this problem, we need to implement a class/classes that computes the average time spent by each user between open and close. Keep in mind that there are missing entries for some users, so we will have to make a choice about how to handle these entries when making our calculations. Code should follow a consistent policy with regards to how we make that choice.
The desired output for the solution should be [{userId, timeSpent},….] for all the users in the log file.
Sample log file (comma-separated, text file)
1,1435456566,open
2,1435457643,open
3,1435458912,open
1,1435459567,close
4,1435460345,open
1,1435461234,open
2,1435462567,close
1,1435463456,open
3,1435464398,close
4,1435465122,close
1,1435466775,close
Approach
Below is the code I've written in Scala and Python. It doesn't seem efficient enough or up to the expectations of the given scenario, so I'd like feedback from the community of developers in this forum on how we could better optimise this code for the scenario.
Scala implementation
import java.io.FileInputStream
import java.util.{Scanner, Map, LinkedList}
import java.lang.Long
import scala.collection.mutable

object UserMetrics extends App {
  if (args.length == 0) {
    println("Please provide input data file name for processing")
  }
  val userMetrics = new UserMetrics()
  userMetrics.readInputFile(args(0), if (args.length == 1) 600000 else args(1).toInt)
}

case class UserInfo(userId: Integer, prevTimeStamp: Long, prevStatus: String, timeSpent: Long, occurence: Integer)

class UserMetrics {

  val usermap = mutable.Map[Integer, LinkedList[UserInfo]]()

  def readInputFile(stArr: String, timeOut: Int) {
    var inputStream: FileInputStream = null
    var sc: Scanner = null
    try {
      inputStream = new FileInputStream(stArr);
      sc = new Scanner(inputStream, "UTF-8");
      while (sc.hasNextLine()) {
        val line: String = sc.nextLine();
        processInput(line, timeOut)
      }
      for ((key: Integer, userLs: LinkedList[UserInfo]) <- usermap) {
        val userInfo: UserInfo = userLs.get(0)
        val timespent = if (userInfo.occurence > 0) userInfo.timeSpent / userInfo.occurence else 0
        println("{" + key + "," + timespent + "}")
      }
      if (sc.ioException() != null) {
        throw sc.ioException();
      }
    } finally {
      if (inputStream != null) {
        inputStream.close();
      }
      if (sc != null) {
        sc.close();
      }
    }
  }

  def processInput(line: String, timeOut: Int) {
    val strSp = line.split(",")
    val userId: Integer = Integer.parseInt(strSp(0))
    val curTimeStamp = Long.parseLong(strSp(1))
    val status = strSp(2)
    val uInfo: UserInfo = UserInfo(userId, curTimeStamp, status, 0, 0)
    val emptyUserInfo: LinkedList[UserInfo] = new LinkedList[UserInfo]()
    val lsUserInfo: LinkedList[UserInfo] = usermap.getOrElse(userId, emptyUserInfo)
    if (lsUserInfo != null && lsUserInfo.size() > 0) {
      val lastUserInfo: UserInfo = lsUserInfo.get(lsUserInfo.size() - 1)
      val prevTimeStamp: Long = lastUserInfo.prevTimeStamp
      val prevStatus: String = lastUserInfo.prevStatus
      if (prevStatus.equals("open")) {
        if (status.equals(lastUserInfo.prevStatus)) {
          val timeSelector = if ((curTimeStamp - prevTimeStamp) > timeOut) timeOut else curTimeStamp - prevTimeStamp
          val timeDiff = lastUserInfo.timeSpent + timeSelector
          lsUserInfo.remove()
          lsUserInfo.add(UserInfo(userId, curTimeStamp, status, timeDiff, lastUserInfo.occurence + 1))
        } else if (!status.equals(lastUserInfo.prevStatus)) {
          val timeDiff = lastUserInfo.timeSpent + curTimeStamp - prevTimeStamp
          lsUserInfo.remove()
          lsUserInfo.add(UserInfo(userId, curTimeStamp, status, timeDiff, lastUserInfo.occurence + 1))
        }
      } else if (prevStatus.equals("close")) {
        if (status.equals(lastUserInfo.prevStatus)) {
          lsUserInfo.remove()
          val timeSelector = if ((curTimeStamp - prevTimeStamp) > timeOut) timeOut else curTimeStamp - prevTimeStamp
          lsUserInfo.add(UserInfo(userId, curTimeStamp, status, lastUserInfo.timeSpent + timeSelector, lastUserInfo.occurence + 1))
        } else if (!status.equals(lastUserInfo.prevStatus)) {
          lsUserInfo.remove()
          lsUserInfo.add(UserInfo(userId, curTimeStamp, status, lastUserInfo.timeSpent, lastUserInfo.occurence))
        }
      }
    } else if (lsUserInfo.size() == 0) {
      lsUserInfo.add(uInfo)
    }
    usermap.put(userId, lsUserInfo)
  }
}
Python Implementation
import sys

def fileBlockStream(fp, number_of_blocks, block):
    # A generator that splits a file into blocks and iterates over the lines of one of the blocks.
    assert 0 <= block and block < number_of_blocks  # Assertions to validate number of blocks given
    assert 0 < number_of_blocks
    fp.seek(0, 2)  # seek to end of file to compute block size
    file_size = fp.tell()
    ini = file_size * block / number_of_blocks  # compute start & end point of file block
    end = file_size * (1 + block) / number_of_blocks
    if ini <= 0:
        fp.seek(0)
    else:
        fp.seek(ini - 1)
        fp.readline()
    while fp.tell() < end:
        yield fp.readline()  # iterate over lines of the particular chunk or block

def computeResultDS(chunk, avgTimeSpentDict, defaultTimeOut):
    countPos, totTmPos, openTmPos, closeTmPos, nextEventPos = 0, 1, 2, 3, 4
    for rows in chunk.splitlines():
        if len(rows.split(",")) != 3:
            continue
        userKeyID = rows.split(",")[0]
        try:
            curTimeStamp = int(rows.split(",")[1])
        except ValueError:
            print("Invalid Timestamp for ID:" + str(userKeyID))
            continue
        curEvent = rows.split(",")[2]
        if userKeyID in avgTimeSpentDict.keys() and avgTimeSpentDict[userKeyID][nextEventPos] == 1 and curEvent == "close":
            # Check if already existing userID with expected Close event 0 - Open; 1 - Close
            # Array value within dictionary stores [No. of pair events, total time spent (Close tm-Open tm), Last Open Tm, Last Close Tm, Next expected Event]
            curTotalTime = curTimeStamp - avgTimeSpentDict[userKeyID][openTmPos]
            totalTime = curTotalTime + avgTimeSpentDict[userKeyID][totTmPos]
            eventCount = avgTimeSpentDict[userKeyID][countPos] + 1
            avgTimeSpentDict[userKeyID][countPos] = eventCount
            avgTimeSpentDict[userKeyID][totTmPos] = totalTime
            avgTimeSpentDict[userKeyID][closeTmPos] = curTimeStamp
            avgTimeSpentDict[userKeyID][nextEventPos] = 0  # Change next expected event to Open
        elif userKeyID in avgTimeSpentDict.keys() and avgTimeSpentDict[userKeyID][nextEventPos] == 0 and curEvent == "open":
            avgTimeSpentDict[userKeyID][openTmPos] = curTimeStamp
            avgTimeSpentDict[userKeyID][nextEventPos] = 1  # Change next expected event to Close
        elif userKeyID in avgTimeSpentDict.keys() and avgTimeSpentDict[userKeyID][nextEventPos] == 1 and curEvent == "open":
            curTotalTime, closeTime = missingHandler(defaultTimeOut, avgTimeSpentDict[userKeyID][openTmPos], curTimeStamp)
            totalTime = curTotalTime + avgTimeSpentDict[userKeyID][totTmPos]
            avgTimeSpentDict[userKeyID][totTmPos] = totalTime
            avgTimeSpentDict[userKeyID][closeTmPos] = closeTime
            avgTimeSpentDict[userKeyID][openTmPos] = curTimeStamp
            eventCount = avgTimeSpentDict[userKeyID][countPos] + 1
            avgTimeSpentDict[userKeyID][countPos] = eventCount
        elif userKeyID in avgTimeSpentDict.keys() and avgTimeSpentDict[userKeyID][nextEventPos] == 0 and curEvent == "close":
            curTotalTime, openTime = missingHandler(defaultTimeOut, avgTimeSpentDict[userKeyID][closeTmPos], curTimeStamp)
            totalTime = curTotalTime + avgTimeSpentDict[userKeyID][totTmPos]
            avgTimeSpentDict[userKeyID][totTmPos] = totalTime
            avgTimeSpentDict[userKeyID][openTmPos] = openTime
            eventCount = avgTimeSpentDict[userKeyID][countPos] + 1
            avgTimeSpentDict[userKeyID][countPos] = eventCount
        elif curEvent == "open":
            # Initialize userid with Open event
            avgTimeSpentDict[userKeyID] = [0, 0, curTimeStamp, 0, 1]
        elif curEvent == "close":
            # Initialize userid with missing handler function since there is no Open event for this User
            totaltime, OpenTime = missingHandler(defaultTimeOut, 0, curTimeStamp)
            avgTimeSpentDict[userKeyID] = [1, totaltime, OpenTime, curTimeStamp, 0]

def missingHandler(defaultTimeOut, curTimeVal, lastTimeVal):
    if lastTimeVal - curTimeVal > defaultTimeOut:
        return defaultTimeOut, curTimeVal
    else:
        return lastTimeVal - curTimeVal, curTimeVal

def computeAvg(avgTimeSpentDict, defaultTimeOut):
    resDict = {}
    for k, v in avgTimeSpentDict.iteritems():
        if v[0] == 0:
            resDict[k] = 0
        else:
            resDict[k] = v[1] / v[0]
    return resDict

if __name__ == "__main__":
    avgTimeSpentDict = {}
    if len(sys.argv) < 2:
        print("Please provide input data file name for processing")
        sys.exit(1)
    fileObj = open(sys.argv[1])
    number_of_chunks = 4 if len(sys.argv) < 3 else int(sys.argv[2])
    defaultTimeOut = 60000 if len(sys.argv) < 4 else int(sys.argv[3])
    for chunk_number in range(number_of_chunks):
        for chunk in fileBlockStream(fileObj, number_of_chunks, chunk_number):
            computeResultDS(chunk, avgTimeSpentDict, defaultTimeOut)
    print(computeAvg(avgTimeSpentDict, defaultTimeOut))
    avgTimeSpentDict.clear()  # Nullify dictionary
    fileObj.close  # Close the file object
Both programs above give the desired output, but efficiency is what matters for this particular scenario. Let me know if you have anything better or any suggestions on the existing implementations.
Thanks in advance!
What you are after is iterator usage. I'm not going to rewrite your code, but the trick here is likely to be using an iterator. Fortunately, Scala provides decent out-of-the-box tooling for the job.
import scala.io.Source

object ReadBigFiles {

  def read(fileName: String): Unit = {
    val lines: Iterator[String] = Source.fromFile(fileName).getLines
    // now you get iterator semantics for the file line traversal
    // that means you can only go through the lines once, but you don't incur a penalty on heap usage
  }
}
For your use case, you seem to require a lastUser, so you're dealing with groups of 2 entries. I think you have two choices: either go for iterator.sliding(2), which will produce an iterator for every pair, or simply add recursion to the mix using options.
def navigate(source: Iterator[String], last: Option[User]): ResultType = {
  if (source.hasNext) {
    val current = source.next()
    last match {
      case Some(existing) => // compare with previous user etc
      case None => navigate(source, Some(current))
    }
  } else {
    // exit recursion, return result
  }
}
You can avoid all the code you've written to read the file and so on. If you need to count occurrences, simply build a Map inside your recursion, and increment the occurrences at every step based on your business logic.
# Another take in Python: stack each user's events in a LifoQueue, then pair them up.
from queue import LifoQueue, Queue

def averageTime() -> float:
    logs = {}
    records = Queue()
    with open("log.txt") as fp:
        for line in fp:
            # split "userId,timestamp,actionType" into its three fields
            userId, timestamp, action = line.strip().split(",")
            if userId not in logs:
                logs[userId] = LifoQueue()
            logs[userId].put((int(timestamp), action))
    for k in logs:
        somme = 0
        count = 0
        while not logs[k].empty():
            timestamp, action = logs[k].get()
            # close timestamps are added and open timestamps subtracted,
            # so each open/close pair contributes (close - open) to the total
            somme = (somme - timestamp) if action == "open" else (somme + timestamp)
            count = count + 1
        records.put([k, somme, count // 2])
    while not records.empty():
        record = records.get()
        print(f"UserId={record[0]} Avg={record[1] / record[2]}")