How to set the width of a table column? - swt

I am trying to set the width of the columns in the code below, but the width does not change. What is wrong with this code?
table_2 = new Table(shell, SWT.CHECK | SWT.BORDER | SWT.V_SCROLL | SWT.SINGLE | SWT.FULL_SELECTION | SWT.H_SCROLL);
table_2.setBounds(321, 47, 85, 98);
table_2.setLinesVisible(true);
TableColumn tbl2_tc1 = new TableColumn(table_2,SWT.NONE);
TableColumn tbl2_tc2 = new TableColumn(table_2,SWT.NONE);
tbl2_tc1.setWidth(120);
tbl2_tc1.pack();
tbl2_tc2.setWidth(40);
tbl2_tc2.pack();
table_2.pack();

Don't call TableColumn.pack() after setWidth(): pack() resizes the column to its preferred size, which overwrites the width you just set. Either drop the pack() calls or call setWidth() after pack().


What is the best PySpark practice to dynamically retrieve/filter a Spark frame based on widget values at the top of a notebook in Databricks?

Let's say I have the following Spark dataframe, called df:
                     <------ Time resolution ------>
+------+------------+-------+-------+-------+
| Name |    date    | 00-24 | 00-12 | 12-24 |
+------+------------+-------+-------+-------+
| X1   | 2020-10-20 |   137 |    68 |    69 |
| X2   | 2020-10-22 |   132 |    66 |    66 |
| X3   | 2020-10-24 |   132 |    64 |    68 |
| X4   | 2020-10-25 |   587 |   292 |   295 |
| X5   | 2020-10-29 |   134 |    67 |    67 |
+------+------------+-------+-------+-------+
I want to create 4 widgets at the top of my Databricks notebook using PySpark, in the form of dbutils.widgets.dropdown(), populated from the available data:
DATE_FROM
DATE_TO
Time_Resolution_Of_Interest (one of 00-24|00-12|12-24)
Name_Of_Interest (top 3 names based on descending sort of interested Time-resolution column)
What I have tried, based on this answer and that answer:
I managed to do this for the 1st and 2nd items as below:
dbutils.widgets.removeAll()
# compute the list of all dates from maximum date available till today
max_date = df.select(F.max('date')).first()['max(date)']
min_date = df.select(F.min('date')).first()['min(date)']
print(min_date)
print(max_date)
dbutils.widgets.dropdown(name = "DATE_FROM", defaultValue = min_date , choices = ['date'])
dbutils.widgets.dropdown(name = "DATE_TO", defaultValue = max_date, choices = ['date'])
#dbutils.widgets.text(name = "DATE_FROM", defaultValue = min_date)
#dbutils.widgets.text(name = "DATE_TO", defaultValue = max_date)
For the 3rd item I only have a naive idea:
channel = ['00-24', '00-12', '12-24']
dbutils.widgets.dropdown(name = "Time_Resolution_Of_Interest", defaultValue = "00-24" , choices = [str(x) for x in channel] + ["None"])
For the last item I want to build a list of the names of interest, but I couldn't manage to map the String column and pass it the way the Scala version does:
#Get interested Time resolution from widget
dropdownColumn = dbutils.widgets.get("Time_Resolution_Of_Interest")
# compute the list 5 top names in interested time resolution
max_Top_Name = df.select(F.max(dropdownColumn)).first()[dropdownColumn]
NUM_OF_NAMES_FOR_DROPDOWN = 5
#Scala version works
#val Name_list = df.select("Name").take(NUM_OF_NAMES_FOR_DROPDOWN).map(i=>i.getAs[String]("Name"))
#dbutils.widgets.dropdown("Name", "X1", Name_list.toSeq , "Username Of Interes")
#PySpark version doesn't work
Name_list = df.select("Name").take(NUM_OF_NAMES_FOR_DROPDOWN).rdd.flatMap(lambda x: x).collect()
dbutils.widgets.dropdown(name = "Name", defaultValue = max_Top_Name , choices = [str(x) for x in Name_list] + ["None"])
In the end I want to filter the records for that specific Name and the selected time resolution over the date range and update the frame, following this answer as below:
selected_widgets = ['DATE_FROM', 'DATE_TO', 'Time_Resolution_Of_Interest', 'Name_Of_Interest']
myList = getArgument(selected_widgets).split(",")
display(df.filter(df.isin(myList)))
I expect to reach the following table for, say, widget values Name: X1 and Time-resolution: 00-24 over the date range 2020-10-20 to 2020-11-20:
+------+------------+-------+
| Name |    date    | 00-24 |
+------+------------+-------+
| X1   | 2020-10-20 |   137 |
| X1   | 2020-10-21 |   111 |
| X1   | 2020-10-22 |    99 |
| X1   | 2020-10-23 |   123 |
| X1   | 2020-10-24 |   101 |
| ...  |    ...     |   ... |
+------+------------+-------+
What you can do is first build the widgets as you are doing, then get the individual value from each widget and use those values to filter the frame for the end result. See the sample code below; it may not match your requirement 1-to-1, but it should guide you to what you want.
Create date widgets:
from pyspark.sql.functions import min, max
dbutils.widgets.removeAll()
# compute the list of all dates from maximum date available till today
date = [date[0] for date in data.select("date").collect()]
max_min_date = data.select(max('date'),min('date')).first()
min_date = max_min_date['min(date)']
max_date = max_min_date['max(date)']
print(date)
print(min_date)
print(max_date)
dbutils.widgets.dropdown(name = "DATE_FROM", defaultValue = min_date , choices = date)
dbutils.widgets.dropdown(name = "DATE_TO", defaultValue = max_date, choices = date)
Create Time Resolution Widget using schema, this will allow you to build dynamic list of time columns:
channel = [f.name for f in data.schema.fields if f.name not in ['name', 'date']]
print(channel)
dbutils.widgets.dropdown(name = "Time_Resolution_Of_Interest", defaultValue = "00-24" , choices = [str(x) for x in channel] + ["None"])
Create Name widget:
from pyspark.sql.functions import col
dropdownColumn = dbutils.widgets.get("Time_Resolution_Of_Interest")
NUM_OF_NAMES_FOR_DROPDOWN = 5
#sort by selected time column desc and take 5 rows
name_limit = [name[0] for name in
data.select("Name").orderBy(col(dropdownColumn), ascending=False).take(NUM_OF_NAMES_FOR_DROPDOWN)]
dbutils.widgets.dropdown(name = "Name", defaultValue = 'X1' , choices = [str(x) for x in name_limit] + ["None"])
Finally, filter data based on widget values:
date_from_val = dbutils.widgets.get("DATE_FROM")
date_to_val = dbutils.widgets.get("DATE_TO")
time_val = dbutils.widgets.get("Time_Resolution_Of_Interest")
name_val = dbutils.widgets.get("Name")
result = data.select("name", time_val).where(f"name = '{name_val}' and date between '{date_from_val}' and '{date_to_val}'")
display(result)

Postgres returning empty result if one of the outcomes is null

For a scenario, consider this table:
table: books

 id | title | is_free
----+-------+---------
  1 | A     | true
  2 | B     | false
select 'some_text' as col, b.title
from (select title from books
where id = 3) as b;
In this case, the number of rows returned is 0.
col | title |
(0 rows)
How can I get NULL returned as the value instead?
col | title |
some_text | NULL |
(1 row)
Use a scalar subquery instead:
select 'some_text' as col,
(select title from books where id = 3);
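An equivalent formulation, if you prefer join syntax, is a LEFT JOIN from a one-row source: the literal row always survives, and title comes back as NULL when no book matches (a sketch of the same query):

```sql
select 'some_text' as col, b.title
from (values (1)) as dummy(n)
left join (select title from books where id = 3) as b on true;
-- col       | title
-- some_text | NULL
```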

cannot update row value for integer column in postgresql table

This is the dummy function I wrote to update the counter.
def updateTable(tableName, visitorId, dtWithZone):
db_uri = app.config["SQLALCHEMY_DATABASE_URI"]
engine = create_engine(db_uri, connect_args={"options": "-c timezone={}".format(dtWithZone.timetz().tzinfo.zone)})
# create session
Session = sessionmaker()
Session.configure(bind=engine)
session = Session()
meta = MetaData(engine, reflect=True)
table = meta.tables[tableName]
print dir(table)
# update row to database
row = session.query(table).filter(
table.c.visitorId == visitorId).first()
print 'original:', row.count
row.count = row.count + 1
print "updated {}".format(row.count)
session.commit()
conn.close()
but when it reaches the line row.count = row.count + 1 it throws an error:
AttributeError: can't set attribute
this is the table
\d visitorinfo;
Table "public.visitorinfo"
    Column    |           Type           | Modifiers
--------------+--------------------------+-----------
 platform     | character varying(15)    |
 browser      | character varying(10)    |
 visitorId    | character varying(10)    | not null
 language     | character varying(10)    |
 version      | character varying(20)    |
 cl_lat       | double precision         |
 cl_lng       | double precision         |
 count        | integer                  |
 ip           | character varying(20)    |
 visitor_time | timestamp with time zone |
Indexes:
    "visitorinfo_pkey" PRIMARY KEY, btree ("visitorId")
What am I doing wrong? Why is it saying it can't set the attribute?
part of updated code:
# update row to database
row = session.query(table).filter(
table.c.visitorId == visitorId).first()
print 'original:', row.count
val = row.count
row.count = val + 1
print "updated {}".format(row.count)
The rows returned by session.query(table) on a reflected Table object are read-only named tuples, so assigning to row.count fails. Use an UPDATE query instead and let the database do the increment:
UPDATE public.visitorinfo SET count = count + 1 WHERE "visitorId" = 'VisitorID';
(Note the column is count, not counter, and the mixed-case "visitorId" identifier needs double quotes.) Make sure the 'VisitorID' value is supplied by your application.
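The same increment can also be issued through SQLAlchemy itself without loading the row. A minimal sketch, run here against an in-memory SQLite database as a stand-in for Postgres; the table definition mirrors only the two relevant columns:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        String, Integer, select)

engine = create_engine("sqlite:///:memory:")
meta = MetaData()
visitorinfo = Table(
    "visitorinfo", meta,
    Column("visitorId", String(10), primary_key=True),
    Column("count", Integer),
)
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(visitorinfo.insert().values(visitorId="abc", count=1))
    # Let the database do the increment: no read-modify-write,
    # and no row object to mutate.
    conn.execute(
        visitorinfo.update()
        .where(visitorinfo.c.visitorId == "abc")
        .values(count=visitorinfo.c.count + 1)
    )
    new_count = conn.execute(
        select(visitorinfo.c.count)
        .where(visitorinfo.c.visitorId == "abc")
    ).scalar_one()

print(new_count)  # 2
```

Expressing the increment as a column expression (count = count + 1) makes it atomic on the server side, which also avoids the race of reading the value into Python and writing it back.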

How to create a Postgres trigger that calculates values

How would you create a trigger that transforms a value from the row being inserted before it is stored?
Let's say I have this table labor_rates,
+---------------+-----------------+--------------+------------+
| labor_rate_id | rate_per_minute | unit_minutes | created_at |
+---------------+-----------------+--------------+------------+
| bigint | numeric | numeric | timestamp |
+---------------+-----------------+--------------+------------+
Each time a new record is created, I need the rate to be calculated as rate/unit (the smallest unit here is a minute).
So, for example, when inserting a new record:
INSERT INTO labor_rates(rate, unit)
VALUES (60, 480);
It would create a new record with these values:
+---------------+-----------------+--------------+----------------------------+
| labor_rate_id | rate_per_minute | unit_minutes | created_at |
+---------------+-----------------+--------------+----------------------------+
| 1000000 | 1.1979 | 60 | 2017-03-16 01:59:47.208111 |
+---------------+-----------------+--------------+----------------------------+
One could argue that this should be left as a calculated field instead of storing the calculated value. But in this case, it would be best if the calculated value is stored.
I am fairly new to triggers so any help would be much appreciated.
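No answer is quoted here, but the usual shape is a BEFORE INSERT row trigger that rewrites NEW before the row is stored. A sketch, assuming the raw rate and unit are inserted into rate_per_minute and unit_minutes (the INSERT in the question names columns the table doesn't show, so these names are guesses):

```sql
-- Hypothetical: assumes rate_per_minute receives the raw rate and
-- unit_minutes the unit; the trigger converts the rate before storage.
CREATE OR REPLACE FUNCTION labor_rates_calc()
RETURNS trigger AS $$
BEGIN
    NEW.rate_per_minute := NEW.rate_per_minute / NEW.unit_minutes;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER labor_rates_calc_trg
BEFORE INSERT ON labor_rates
FOR EACH ROW
EXECUTE FUNCTION labor_rates_calc();  -- EXECUTE PROCEDURE on Postgres < 11
```

Because the trigger fires BEFORE INSERT FOR EACH ROW and returns NEW, the modified value is what actually lands in the table.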

systemLayoutSizeFittingSize not sizing UITableViewCell correctly

I have a UITableViewCell with a label inside of it. I would like to calculate the cell size based on the contents inside of it.
I don't just set constraints in the content view though, I also add constraints to the enclosing UITableViewCell:
+--------------------------+
|UITableViewCell |
| | inset |
| +------------------+ |
| |contentView | |
| | |inset | |
| | +------------+ | |
|-- |--| Label |--| --|
| | +------------+ | |
| | |inset | |
| +------------------+ |
| |inset |
+--------------------------+
And here is the code that calculates the size:
override public class func cellSize(item: ItemInterface?, fittingSize: CGSize) -> CGSize {
struct Static {
static var onceToken : dispatch_once_t = 0
static var sizingCell : LabelTableViewCell!
}
dispatch_once(&Static.onceToken, {
Static.sizingCell = NSBundle.mainBundle().loadNibNamed("LabelTableViewCell", owner: self, options: nil)[0] as! LabelTableViewCell
})
let sizingCell = Static.sizingCell
// sets the text of the label and also adds constraints
// from label to enclosing content view
sizingCell.setupCell(text: "asdkfjklsd")
// for multi line support
sizingCell.label.preferredMaxLayoutWidth = fittingSize.width
// update all the constraints
sizingCell.setNeedsUpdateConstraints()
sizingCell.updateConstraintsIfNeeded()
// re-layout cell
sizingCell.setNeedsLayout()
sizingCell.layoutIfNeeded()
// calculate size for the whole cell (not just contentView)
let size = sizingCell.systemLayoutSizeFittingSize(UILayoutFittingCompressedSize)
return CGSizeMake(size.width, size.height)
}
What I end up getting is a cell that is squished. The label ends up being too small and therefore you almost don't see the label at all:
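A common cause of this squished result is asking the whole cell for a fitting size without first fixing the target width, so Auto Layout solves for the smallest width as well as height. A sketch of the usual fix, using the Swift 2-era APIs from the question (the exact constraints depend on your XIB, so treat this as an assumption):

```swift
// Sketch: pin the sizing cell to the target width, then let Auto Layout
// solve only for the height.
let targetWidth = fittingSize.width
sizingCell.bounds = CGRectMake(0, 0, targetWidth, sizingCell.bounds.height)
sizingCell.contentView.bounds = sizingCell.bounds
// subtract your horizontal insets here if the label is inset from the edges
sizingCell.label.preferredMaxLayoutWidth = targetWidth
sizingCell.setNeedsLayout()
sizingCell.layoutIfNeeded()
// Ask the contentView (not the cell) for its compressed height
let height = sizingCell.contentView.systemLayoutSizeFittingSize(UILayoutFittingCompressedSize).height
return CGSizeMake(targetWidth, height)
```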