visualisation using pyvis is empty - networkx

Hi, when I don't include select_menu=True inside Network, I am able to see the graph.
nodes=['Short St', 'Jefferson St', 'South St', 'Sunset Dr', 'River Rd', 'Walnut St', 'Church St', 'Lincoln St', '3rd Av', 'Fairway Dr', 'Jackson St']
edges=[('Short St', 'Jefferson St', {}), ('South St', 'Sunset Dr', {}), ('South St', 'Walnut St', {}), ('Sunset Dr', 'River Rd', {}), ('Sunset Dr', 'Walnut St', {}), ('Walnut St', 'Church St', {}), ('Lincoln St', '3rd Av', {}), ('Lincoln St', 'Jackson St', {}), ('3rd Av', 'Fairway Dr', {})]
nt1 = Network(height="500px", width="500px", bgcolor="#222222", font_color="white",)
nt1.from_nx(G)
nt1.repulsion()
nt1.show(r'output\original_graph.html')
However, when I include select_menu=True, none of the nodes show up:
nt1 = Network(height="500px", width="500px", bgcolor="#222222", font_color="white", select_menu=True)
Why does this happen? Also, how can I make the edges straight rather than curved?
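For reference, here is a minimal sketch of the same setup with vis.js edge smoothing switched off so the edges are drawn as straight lines. This assumes a reasonably recent pyvis release; the exact format set_options accepts has changed between versions, and the "smooth" option comes from vis.js, not from the question.
from pyvis.network import Network
import networkx as nx

# Rebuild the graph from the node/edge lists printed above.
G = nx.Graph()
G.add_edges_from([('Short St', 'Jefferson St'), ('South St', 'Sunset Dr'),
                  ('South St', 'Walnut St'), ('Sunset Dr', 'River Rd'),
                  ('Sunset Dr', 'Walnut St'), ('Walnut St', 'Church St'),
                  ('Lincoln St', '3rd Av'), ('Lincoln St', 'Jackson St'),
                  ('3rd Av', 'Fairway Dr')])

nt1 = Network(height="500px", width="500px", bgcolor="#222222",
              font_color="white", select_menu=True)
nt1.from_nx(G)

# Straight edges: disable vis.js edge smoothing. Note that set_options may
# override options set by helpers such as repulsion(), so any physics
# settings would need to go into the same option string.
nt1.set_options("""
var options = {
  "edges": {
    "smooth": false
  }
}
""")

nt1.show(r'output\original_graph.html')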

Related

OpenStreetMap using OSMnx: how to get building height?

I am trying to find a way to extract building heights. Here is what I've tried so far:
place_name = "Uptown, Dallas, Texas"
buildings = ox.geometries_from_place(place_name, tags={'building':True})
print(buildings.columns)
outputs:
Index(['amenity', 'geometry', 'nodes', 'addr:housenumber', 'addr:street',
'building', 'building:levels', 'height', 'name', 'office', 'wikidata',
'wikipedia', 'parking', 'addr:city', 'addr:postcode', 'addr:state',
'layer', 'cuisine', 'access', 'addr:country', 'brand', 'brand:wikidata',
'brand:wikipedia', 'opening_hours', 'operator', 'operator:wikidata',
'operator:wikipedia', 'phone', 'ref:walmart', 'shop', 'website',
'wheelchair', 'beds', 'emergency', 'gnis:feature_id', 'healthcare',
'old_name', 'ele', 'gnis:county_name', 'gnis:import_uuid',
'gnis:reviewed', 'source', 'fee', 'smoking', 'roof:levels',
'roof:shape', 'addr:unit', 'tourism', 'short_name', 'contact:website',
'outdoor_seating', 'ways', 'type'],
dtype='object')
The height parameter is NaN for most rows. The closest parameter is building:levels, but that is just the number of storeys in a building.
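When height is missing, a common rough workaround is to estimate it from building:levels using a nominal storey height; the ~3 m per level figure below is an assumption for illustration, not something tagged in OSM. A minimal sketch:
import osmnx as ox
import pandas as pd

place_name = "Uptown, Dallas, Texas"
buildings = ox.geometries_from_place(place_name, tags={'building': True})

# OSM heights are free-text strings (metres); coerce both columns to numbers.
height = pd.to_numeric(buildings['height'], errors='coerce')
levels = pd.to_numeric(buildings['building:levels'], errors='coerce')

# Fall back to an assumed ~3 m per storey where no explicit height is tagged.
buildings['height_est_m'] = height.fillna(levels * 3.0)
print(buildings['height_est_m'].describe())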

How to schedule notification at specific time using awesome package in Flutter?

I am using the awesome_notifications package and am trying to show a notification at a specific time using this package.
Future<void> showNotificationWithIconsAndActionButtons(int id) async {
  AwesomeNotifications().initialize(
    '',
    [
      NotificationChannel(
        channelKey: 'basic_channel',
        channelName: 'Basic notifications',
        channelDescription: 'Notification channel for basic tests',
        defaultColor: Color(0xFF9D50DD),
        ledColor: Colors.white,
        playSound: true,
        importance: NotificationImportance.Max,
        defaultRingtoneType: DefaultRingtoneType.Notification,
      )
    ],
  );
  await AwesomeNotifications().createNotification(
    content: NotificationContent(
      id: id,
      channelKey: 'basic_channel',
      title: 'Anonymous says:',
      body: 'Hi there!',
      payload: {'uuid': 'user-profile-uuid'},
      displayOnBackground: true,
      displayOnForeground: true,
    ),
  );
}
I need to show the notification at a particular time.
The same plugin provides a way to schedule notifications as needed.
Check out this link from its description:
https://pub.dev/packages/awesome_notifications#scheduling-a-notification
Basically, showing it at a specific time will look something like this:
await AwesomeNotifications().createNotification(
    content: NotificationContent(
        id: id,
        channelKey: 'scheduled',
        title: 'Just in time!',
        body: 'This notification was schedule to shows at ' +
            (Utils.DateUtils.parseDateToString(scheduleTime.toLocal()) ?? '?') +
            ' $timeZoneIdentifier (' +
            (Utils.DateUtils.parseDateToString(scheduleTime.toUtc()) ?? '?') +
            ' utc)',
        notificationLayout: NotificationLayout.BigPicture,
        bigPicture: 'asset://assets/images/delivery.jpeg',
        payload: {'uuid': 'uuid-test'},
        autoCancel: false,
    ),
    schedule: NotificationCalendar.fromDate(date: scheduleTime));
Note: The code sample is from the plugin's Readme. I have not tested this yet.
Timer.periodic(Duration(minutes: 1), (timer) {
  final target = DateTime.parse("2021-07-20 20:18:04Z"); // 8:18 pm (UTC)
  // DateTime.now() will almost never equal the target to the millisecond,
  // so fire once the target time has been reached and stop polling.
  if (!DateTime.now().isBefore(target)) {
    timer.cancel();
    showNotificationWithIconsAndActionButtons(1); // any notification id
  }
});
This will let it check the time every minute.

How to properly use SQL/Hive variables in the new Databricks Connect

I'm testing the new Databricks Connect. I often use SQL variables in my Python scripts on Databricks, but I'm not able to use those variables through dbconnect. The example below works fine in Databricks but not in dbconnect:
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
import pandas as pd
spark = SparkSession.builder.getOrCreate()
sqlContext = SQLContext(spark)
df = spark.createDataFrame(pd.DataFrame({'a':[2,5,8], 'b':[3,5,5]}))
df.createOrReplaceTempView('test_view')
sqlContext.sql("set a_value = 2")
sqlContext.sql("select * from test_view where a = ${a_value}")
In dbconnect I receive the following:
---------------------------------------------------------------------------
ParseException Traceback (most recent call last)
<ipython-input-50-404f4c5b017c> in <module>
10
11 sqlContext.sql("set a_value = 2")
---> 12 sqlContext.sql("select * from test_view where a = ${a_value}")
c:\users\pc\miniconda3\lib\site-packages\pyspark\sql\context.py in sql(self, sqlQuery)
369 [Row(f1=1, f2=u'row1'), Row(f1=2, f2=u'row2'), Row(f1=3, f2=u'row3')]
370 """
--> 371 return self.sparkSession.sql(sqlQuery)
372
373 #since(1.0)
c:\users\pc\miniconda3\lib\site-packages\pyspark\sql\session.py in sql(self, sqlQuery)
702 [Row(f1=1, f2=u'row1'), Row(f1=2, f2=u'row2'), Row(f1=3, f2=u'row3')]
703 """
--> 704 return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
705
706 #since(2.0)
c:\users\pc\miniconda3\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
1303 answer = self.gateway_client.send_command(command)
1304 return_value = get_return_value(
-> 1305 answer, self.gateway_client, self.target_id, self.name)
1306
1307 for temp_arg in temp_args:
c:\users\pc\miniconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
132 # Hide where the exception came from that shows a non-Pythonic
133 # JVM exception message.
--> 134 raise_from(converted)
135 else:
136 raise
c:\users\pc\miniconda3\lib\site-packages\pyspark\sql\utils.py in raise_from(e)
ParseException:
mismatched input '<EOF>' expecting {'(', 'COLLECT', 'CONVERT', 'DELTA', 'HISTORY', 'MATCHED', 'MERGE', 'OPTIMIZE', 'SAMPLE', 'TIMESTAMP', 'UPDATE', 'VERSION', 'ZORDER', 'ADD', 'AFTER', 'ALL', 'ALTER', 'ANALYZE', 'AND', 'ANTI', 'ANY', 'ARCHIVE', 'ARRAY', 'AS', 'ASC', 'AT', 'AUTHORIZATION', 'BETWEEN', 'BOTH', 'BUCKET', 'BUCKETS', 'BY', 'CACHE', 'CASCADE', 'CASE', 'CAST', 'CHANGE', 'CHECK', 'CLEAR', 'CLONE', 'CLUSTER', 'CLUSTERED', 'CODEGEN', 'COLLATE', 'COLLECTION', 'COLUMN', 'COLUMNS', 'COMMENT', 'COMMIT', 'COMPACT', 'COMPACTIONS', 'COMPUTE', 'CONCATENATE', 'CONSTRAINT', 'COPY', 'COPY_OPTIONS', 'COST', 'CREATE', 'CREDENTIALS', 'CROSS', 'CUBE', 'CURRENT', 'CURRENT_DATE', 'CURRENT_TIME', 'CURRENT_TIMESTAMP', 'CURRENT_USER', 'DATA', 'DATABASE', DATABASES, 'DAY', 'DBPROPERTIES', 'DEEP', 'DEFINED', 'DELETE', 'DELIMITED', 'DESC', 'DESCRIBE', 'DFS', 'DIRECTORIES', 'DIRECTORY', 'DISTINCT', 'DISTRIBUTE', 'DROP', 'ELSE', 'ENCRYPTION', 'END', 'ESCAPE', 'ESCAPED', 'EXCEPT', 'EXCHANGE', 'EXISTS', 'EXPLAIN', 'EXPORT', 'EXTENDED', 'EXTERNAL', 'EXTRACT', 'FALSE', 'FETCH', 'FIELDS', 'FILTER', 'FILEFORMAT', 'FILES', 'FIRST', 'FOLLOWING', 'FOR', 'FOREIGN', 'FORMAT', 'FORMAT_OPTIONS', 'FORMATTED', 'FROM', 'FULL', 'FUNCTION', 'FUNCTIONS', 'GLOBAL', 'GRANT', 'GROUP', 'GROUPING', 'HAVING', 'HOUR', 'IF', 'IGNORE', 'IMPORT', 'IN', 'INDEX', 'INDEXES', 'INNER', 'INPATH', 'INPUTFORMAT', 'INSERT', 'INTERSECT', 'INTERVAL', 'INTO', 'IS', 'ITEMS', 'JOIN', 'KEYS', 'LAST', 'LATERAL', 'LAZY', 'LEADING', 'LEFT', 'LIKE', 'LIMIT', 'LINES', 'LIST', 'LOAD', 'LOCAL', 'LOCATION', 'LOCK', 'LOCKS', 'LOGICAL', 'MACRO', 'MAP', 'MINUTE', 'MONTH', 'MSCK', 'NAMESPACE', 'NAMESPACES', 'NATURAL', 'NO', NOT, 'NULL', 'NULLS', 'OF', 'ON', 'ONLY', 'OPTION', 'OPTIONS', 'OR', 'ORDER', 'OUT', 'OUTER', 'OUTPUTFORMAT', 'OVER', 'OVERLAPS', 'OVERLAY', 'OVERWRITE', 'PARTITION', 'PARTITIONED', 'PARTITIONS', 'PATTERN', 'PERCENT', 'PIVOT', 'PLACING', 'POSITION', 'PRECEDING', 'PRIMARY', 'PRINCIPALS', 'PROPERTIES', 'PURGE', 'QUERY', 'RANGE', 'RECORDREADER', 'RECORDWRITER', 'RECOVER', 'REDUCE', 'REFERENCES', 'REFRESH', 'RENAME', 'REPAIR', 'REPLACE', 'RESET', 'RESTRICT', 'REVOKE', 'RIGHT', RLIKE, 'ROLE', 'ROLES', 'ROLLBACK', 'ROLLUP', 'ROW', 'ROWS', 'SCHEMA', 'SECOND', 'SELECT', 'SEMI', 'SEPARATED', 'SERDE', 'SERDEPROPERTIES', 'SESSION_USER', 'SET', 'MINUS', 'SETS', 'SHALLOW', 'SHOW', 'SKEWED', 'SOME', 'SORT', 'SORTED', 'START', 'STATISTICS', 'STORED', 'STRATIFY', 'STRUCT', 'SUBSTR', 'SUBSTRING', 'TABLE', 'TABLES', 'TABLESAMPLE', 'TBLPROPERTIES', TEMPORARY, 'TERMINATED', 'THEN', 'TO', 'TOUCH', 'TRAILING', 'TRANSACTION', 'TRANSACTIONS', 'TRANSFORM', 'TRIM', 'TRUE', 'TRUNCATE', 'TYPE', 'UNARCHIVE', 'UNBOUNDED', 'UNCACHE', 'UNION', 'UNIQUE', 'UNKNOWN', 'UNLOCK', 'UNSET', 'USE', 'USER', 'USING', 'VALUES', 'VIEW', 'VIEWS', 'WHEN', 'WHERE', 'WINDOW', 'WITH', 'YEAR', '+', '-', '*', 'DIV', '~', STRING, BIGINT_LITERAL, SMALLINT_LITERAL, TINYINT_LITERAL, INTEGER_VALUE, EXPONENT_VALUE, DECIMAL_VALUE, DOUBLE_LITERAL, BIGDECIMAL_LITERAL, IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 34)
== SQL ==
select * from test_view where a =
----------------------------------^^^
So, has anyone managed to make these variables work?
Thanks
You can pass parameters/arguments to your SQL statements by programmatically creating the SQL string in Python (or Scala) and passing the finished string to sqlContext.sql(string):
a_value = 2
sqlContext.sql(f"select * from test_view where a = {a_value}").show()

Spark SQL error : org.apache.spark.sql.catalyst.parser.ParseException: extraneous input '$' expecting

I am forming a query in a StringBuilder like below:
println(dataQuery)
Execution started at 2019-10-31 02:58:24.006019 PST
res245: String =
" SELECT transaction_created_date, txn_mth, txn_mth_id, breakout_y_n, cast($counter as Int) AS arrival_days, cast(date_sub(date_add(transaction_created_date,$counter),day(transaction_created_date)) as String) as Arrival_date,trim(cast(getDayOfWeek(cast(date_sub(date_add(transaction_created_date,$counter),day(transaction_created_date)) as String)) as String)) as weekday,cast(ceil($counter/7)as Int) as week_no, sum(if(arrival_day_base<=$counter,gross,0)) as GROSS, sum(if(arrival_day_base<=$counter,nbc,0)) as NBC, sum(if(arrival_day_base<=$counter,nbr,0)) as NBR, sum(if(arrival_day_base<=$counter,dp,0)) as DP, sum(if(arrival_day_base==$counter,gross,0)) as DAYGROSS, sum(if(arrival_day_base==$counter,nbc,0)) as DAYNBC, sum(if(arrival_day_base==$counter,nbr,0)) as DAYNBR, , sum(if(arrival_day_base==$counter,dp,0)) as DAYDP,
FROM BASE_DLV
GROUP BY transaction_created_date, txn_mth, txn_mth_id, breakout_y_n, arrival_days, arrival_date, weekday, week_no
When executing it as SQL with val data3 = spark.sql(dataQuery), I get the error below:
org.apache.spark.sql.catalyst.parser.ParseException:
extraneous input '$' expecting {'SELECT', 'FROM', 'ADD', 'AS', 'ALL', 'DISTINCT', 'WHERE', 'GROUP', 'BY', 'GROUPING', 'SETS', 'CUBE', 'ROLLUP', 'ORDER', 'HAVING', 'LIMIT', 'AT', 'OR', 'AND', 'IN', NOT, 'NO', 'EXISTS', 'BETWEEN', 'LIKE', RLIKE, 'IS', 'NULL', 'TRUE', 'FALSE', 'NULLS', 'ASC', 'DESC', 'FOR', 'INTERVAL', 'CASE', 'WHEN', 'THEN', 'ELSE', 'END', 'JOIN', 'CROSS', 'OUTER', 'INNER', 'LEFT', 'SEMI', 'RIGHT', 'FULL', 'NATURAL', 'ON', 'LATERAL', 'WINDOW', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'UNBOUNDED', 'PRECEDING', 'FOLLOWING', 'CURRENT', 'FIRST', 'AFTER', 'LAST', 'ROW', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'DIRECTORY', 'VIEW', 'REPLACE', 'INSERT', 'DELETE', 'INTO', 'DESCRIBE', 'EXPLAIN', 'FORMAT', 'LOGICAL', 'CODEGEN', 'COST', 'CAST', 'SHOW', 'TABLES', 'COLUMNS', 'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'DROP', 'UNION', 'EXCEPT', 'MINUS', 'INTERSECT', 'TO', 'TABLESAMPLE', 'STRATIFY', 'ALTER', 'RENAME', 'ARRAY', 'MAP', 'STRUCT', 'COMMENT', 'SET', 'RESET', 'DATA', 'START', 'TRANSACTION', 'COMMIT', 'ROLLBACK', 'MACRO', 'IGNORE', 'BOTH', 'LEADING', 'TRAILING', 'IF', 'POSITION', 'DIV', 'PERCENT', 'BUCKET', 'OUT', 'OF', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'OVERWRITE', 'TRANSFORM', 'REDUCE', 'SERDE', 'SERDEPROPERTIES', 'RECORDREADER', 'RECORDWRITER', 'DELIMITED', 'FIELDS', 'TERMINATED', 'COLLECTION', 'ITEMS', 'KEYS', 'ESCAPED', 'LINES', 'SEPARATED', 'FUNCTION', 'EXTENDED', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 'LAZY', 'FORMATTED', 'GLOBAL', TEMPORARY, 'OPTIONS', 'UNSET', 'TBLPROPERTIES', 'DBPROPERTIES', 'BUCKETS', 'SKEWED', 'STORED', 'DIRECTORIES', 'LOCATION', 'EXCHANGE', 'ARCHIVE', 'UNARCHIVE', 'FILEFORMAT', 'TOUCH', 'COMPACT', 'CONCATENATE', 'CHANGE', 'CASCADE', 'RESTRICT', 'CLUSTERED', 'SORTED', 'PURGE', 'INPUTFORMAT', 'OUTPUTFORMAT', DATABASE, DATABASES, 'DFS', 'TRUNCATE', 'ANALYZE', 'COMPUTE', 'LIST', 'STATISTICS', 'PARTITIONED', 'EXTERNAL', 'DEFINED', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'REPAIR', 'RECOVER', 'EXPORT', 'IMPORT', 'LOAD', 'ROLE', 'ROLES', 'COMPACTIONS', 'PRINCIPALS', 'TRANSACTIONS', 'INDEX', 'INDEXES', 'LOCKS', 'OPTION', 'ANTI', 'LOCAL', 'INPATH', IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 74)
== SQL ==
SELECT transaction_created_date, txn_mth, txn_mth_id, breakout_y_n, cast($counter as Int) AS arrival_days, cast(date_sub(date_add(transaction_created_date,$counter),day(transaction_created_date)) as String) as Arrival_date,trim(cast(getDayOfWeek(cast(date_sub(date_add(transaction_created_date,$counter),day(transaction_created_date)) as String)) as String)) as weekday,cast(ceil($counter/7)as Int) as week_no, sum(if(arrival_day_base<=$counter,gross,0)) as GROSS, sum(if(arrival_day_base<=$counter,nbc,0)) as NBC, sum(if(arrival_day_base<=$counter,nbr,0)) as NBR, sum(if(arrival_day_base<=$counter,dp,0)) as DP, sum(if(arrival_day_base==$counter,gross,0)) as DAYGROSS, sum(if(arrival_day_base==$counter,nbc,0)) as DAYNBC, sum(if(arrival_day_base==$counter,nbr,0)) as DAYNBR, sum(if(arrival_day_base==$counter,dp,0)) as DAYDP
--------------------------------------------------------------------------^^^
FROM BASE_DLV
GROUP BY transaction_created_date, txn_mth, txn_mth_id, breakout_y_n, arrival_days, arrival_date, weekday, week_no
at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:239)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:115)
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:69)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:641)
... 71 elided
I also tried to run the same query directly:
val data2 =spark.sql(s"""SELECT transaction_created_date, txn_mth, txn_mth_id, breakout_y_n,
cast($counter as Int) AS arrival_days,
cast(date_sub(date_add(transaction_created_date,$counter),day(transaction_created_date)) as String) as Arrival_date,
trim(cast(getDayOfWeek(cast(date_sub(date_add(transaction_created_date,$counter),day(transaction_created_date)) as String)) as String)) as weekday,
cast(ceil($counter/7)as Int) as week_no,
sum(if(arrival_day_base<=$counter,gross,0)) as GROSS,
sum(if(arrival_day_base<=$counter,nbc,0)) as NBC,
sum(if(arrival_day_base<=$counter,nbr,0)) as NBR,
sum(if(arrival_day_base<=$counter,dp,0)) as DP,
sum(if(arrival_day_base==$counter,gross,0)) as DAYGROSS,
sum(if(arrival_day_base==$counter,nbc,0)) as DAYNBC,
sum(if(arrival_day_base==$counter,nbr,0)) as DAYNBR,
sum(if(arrival_day_base==$counter,dp,0)) as DAYDP
FROM BASE_DLV
GROUP BY transaction_created_date, txn_mth, txn_mth_id, breakout_y_n, arrival_days, arrival_date, weekday, week_no""")
and it executes successfully:
Execution started at 2019-10-31 02:51:32.451289 PST
data2: org.apache.spark.sql.DataFrame = [transaction_created_date: string, txn_mth: string ... 14 more fields]
Execution completed at 2019-10-31 02:51:34.532190 PST in 2.08 s
but I get the same parse error when trying
val data3 = spark.sql(s"""$dataQuery""")
Can anyone please help with using the StringBuilder in spark.sql() without this issue?
dataQuery should have counter defined and evaluated when the string is built:
val counter = 10
val dataQuery = s"select $counter as cnt" //gives select 10 as cnt
spark.sql(s"$dataQuery").show()
shows
+---+
|cnt|
+---+
| 10|
+---+
I think what you are noticing is that in Scala, multi-line SQL statements need to be wrapped in triple quotes (""").

What format should I use for payload and headers in my chai REST-API test?

I am setting up a REST API test within my CodeceptJS testing framework, which uses integrated Chai.
After checking the very basic documentation on the subject in the CodeceptJS docs, I can't seem to get my test to work.
const expect = require('chai').expect;
Feature('Checkout');
Scenario('Create a customer', async I => {
const payload = ({
first_name: 'Dummy title',
last_name: 'Dummy description',
email: 'john#test.com',
locale: 'fr-CA',
billing_address[first_name]: 'John',
billing_address[last_name]: 'Doe',
billing_address[line1]: 'PO Box 9999',
billing_address[city]: 'Walnut',
billing_address[state]: 'California',
billing_address[zip]: '91789',
billing_address[country]: 'US'
})
const header = ({Content-Type: 'application/x-www-form-urlencoded'})
const res = await I.sendPostRequest('https://mytesturl-api.com/api/',payload,header);
expect(res.code).eql(200);
});
I have put my payload and header in a variable for ease of use and readability.
But it doesn't work and keeps giving me Unexpected token [
I figured it out.
The bracketed payload keys have to be formatted as strings, i.e. quoted (see the example below):
const expect = require('chai').expect;
Feature('Checkout');
Scenario('Create a customer', async I => {
const payload = ({
first_name: 'Dummy title',
last_name: 'Dummy description',
email: 'john#test.com',
locale: 'fr-CA',
'billing_address[first_name]': 'John',
'billing_address[last_name]': 'Doe',
'billing_address[line1]': 'PO Box 9999',
'billing_address[city]': 'Walnut',
'billing_address[state]': 'California',
'billing_address[zip]': '91789',
'billing_address[country]': 'US'
})
const header = ({'Content-Type': 'application/x-www-form-urlencoded'})
const res = await I.sendPostRequest('https://mytesturl-api.com/api/',payload,header);
expect(res.code).eql(200);
});