Print output to file no longer working - RPi 2, PN532 NFC RDW - raspberry-pi

For my classroom, I have a PN532 NFC card reader/writer hooked up via UART to a Raspberry Pi 2, and I'm using Type 2 NXP NTAG213 NFC cards to store information, specifically in the text record. While I'm weak in Python, I used the example under subheader 8.3 of the nfcpy documentation to write to the card, and followed "How to redirect 'print' output to a file using python?" to send the output to a text file. For a while, reading, writing, and outputting to my text file all worked:
import nfc
import nfc.ndef
import nfc.tag
import os, sys
import subprocess
import glob
from os import path
import datetime

f = open('BankTransactions.txt', 'a')
sys.stdout = f
path = '/home/pi/BankTransactions.txt'

def connected(tag): print(tag); return False

clf = nfc.ContactlessFrontend('tty:AMA0:pn532')
clf.connect(rdwr={'on-connect': connected})
tag = clf.connect(rdwr={'on-connect': connected})
record_1 = tag.ndef.message[0]
signature = nfc.tag.tt2_nxp.NTAG213
today = datetime.date.today()
print(record_1.pretty())
if tag.ndef is not None:
    print(tag.ndef.message.pretty())
    if tag.ndef.is_writeable:
        text_record = nfc.ndef.TextRecord("Jessica has 19 GP on card")
        tag.ndef.message = nfc.ndef.Message(text_record)
print >> f, "Edited by Roman", today, record_1, signature, '\n'
f.close()
Now, however, when I use the same card for testing, it will not append the data to the text file. The data is still being written to the card, as I can read the information on the card with a simple read program.
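For reference, a minimal sketch of appending print output to the file without redirecting sys.stdout (Python 3 syntax; the question's later print >> f line is the Python 2 spelling of the same idea, and the path and text are taken from the question):
import datetime

today = datetime.date.today()
# the with block flushes and closes the file even if an exception occurs,
# so each run's output actually lands in the file
with open('/home/pi/BankTransactions.txt', 'a') as f:
    print("Edited by Roman", today, file=f)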

Related

Pyspark - How to calculate file hashes

I have a bunch of CSV files in a mounted blob container, and I need to calculate the SHA1 hash value of every file to store as inventory. I'm very new to Azure cloud and PySpark, so I'm not sure how this can be achieved efficiently. I have written the following code in Python with Pandas, and I'm trying to use it in PySpark. It seems to work; however, it takes quite a while to run, as there are thousands of CSV files. I understand that things work differently in PySpark, so can someone please advise whether my approach is correct, or whether there is a better piece of code I can use to accomplish this task?
import os
import hashlib
import pandas as pd

class File:
    def __init__(self, path):
        self.path = path

    def get_hash(self):
        # read the file in 4 KB chunks so large files don't fill memory
        hash = hashlib.sha1()
        with open(self.path, "rb") as f:
            for chunk in iter(lambda: f.read(4096), b""):
                hash.update(chunk)
        self.sha1hash = hash.hexdigest()
        return self.sha1hash

path = '/dbfs/mnt/data/My_Folder'  # path to CSV files
cnt = 0
rlist = []

for path, subdirs, files in os.walk(path):
    for fi in files:
        if cnt < 10:  # check only 10 files for now as it takes ages!
            f = File(os.path.join(path, fi))
            cnt += 1
            hash_value = f.get_hash()
            results = {'File_Name': fi, 'File_Path': f.path, 'SHA1_Hash_Value': hash_value}
            rlist.append(results)
            print(fi)

print(str(cnt) + ' files processed')
df = pd.DataFrame(rlist)
#df.to_csv('/dbfs/mnt/workspace/Inventory/File_Hashes.csv', mode='a', header=False) #not sure how to write files in pyspark!
display(df)
Thanks
Since you want to treat the files as blobs and not read them into a table, I would recommend using spark.sparkContext.binaryFiles. This gives you an RDD of key-value pairs where the key is the file path and the value is the content of the file as bytes, on which you can calculate the hash in a map function (rdd.mapValues(calculate_hash_of_bytes)).
For more information, refer to the documentation: https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.SparkContext.binaryFiles.html#pyspark.SparkContext.binaryFiles
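A minimal sketch of that approach (untested; it assumes a SparkSession is already available as spark, as on Databricks, and that the mount is visible to Spark under /mnt/data/My_Folder):
import hashlib

def sha1_of_bytes(content):
    # content is the whole file as a bytes object
    return hashlib.sha1(content).hexdigest()

# each record is a (file path, file contents as bytes) pair
files = spark.sparkContext.binaryFiles('/mnt/data/My_Folder')
hashes = files.mapValues(sha1_of_bytes)

# persist the (path, hash) pairs as a DataFrame
df = hashes.toDF(['File_Path', 'SHA1_Hash_Value'])
df.write.mode('append').csv('/mnt/workspace/Inventory/File_Hashes')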

How do you use BeautifulSoup to fetch data in specific format?

I would like to write Python code that fetches the average volume of a stock from a given link using BeautifulSoup.
What I have done so far:
import bs4
import requests
from bs4 import BeautifulSoup
r=requests.get('https://finance.yahoo.com/quote/M/key-statistics?p=M')
soup=BeautifulSoup(r.content,"html.parser")
# p = soup.find_all(class_="Fw(500) Ta(end) Pstart(10px) Miw(60px)")[1].get_text
# p = soup.find_all('td')[2].get_text
# p = soup.find_all('table', class_='W(100%) Bdcl(c)')[70].tr.get_text
Anyway, I was able to get that number directly from the browser console using this command:
document.querySelectorAll('table tbody tr td')[71].innerText
"21.07M"
Please help with a basic explanation; I only know a little about the DOM.
You can use this logic to make it easier:
Use find_all to find all the spans in the html file
Search the spans for the correct label (Avg Vol...)
Use parent to go up the hierarchy to the full table row
Use find_all again from the parent to get the last cell which contains the value
Here is the updated code:
import bs4
import requests
from bs4 import BeautifulSoup
r=requests.get('https://finance.yahoo.com/quote/M/key-statistics?p=M')
soup=BeautifulSoup(r.content,"html.parser")
p = soup.find_all('span')
for s in p:  # each span
    if s.text == 'Avg Vol (10 day)':  # starting cell
        pnt = s.parent.parent  # up 2 levels, table row
        print(pnt.find_all('td')[-1].text)  # last table cell
Output
21.76M
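An equivalent, slightly more direct variant of the same logic, continuing from the soup object above (an untested sketch; it assumes the label span contains only the text itself):
label = soup.find('span', string='Avg Vol (10 day)')
if label is not None:
    row = label.find_parent('tr')        # climb straight to the enclosing table row
    print(row.find_all('td')[-1].text)   # the last cell holds the value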

Find and replace import file values in SPSS Modeler

I'm using Modeler 18.0 and I'm new to the tool.
I inherited 30 stream files. Each stream starts with 30 Excel source nodes with file names like *_2017Q3.xlsx (e.g. c:\source\regl_sales_01_WI_2017Q3.xlsx). I need to update all 900 nodes to point to the 2017Q4 versions of the source files.
Can I do this with some type of script where I can find and replace? Would this be a standalone script? It seems like I could use something like node.setPropertyValue("full_filename", "c:\source\regl_sales_01_WI_2017Q3.xlsx"), if only I could identify the stream and node.
Thank you
I think your problem can be solved using standalone scripting and the stream.findAll() functionality.
You could do something like this:
from os import listdir
from os.path import isfile, join

session = modeler.script.session()
tasks = session.getTaskRunner()

mypath = 'C:\\yourpath'
streams = [f for f in listdir(mypath) if (isfile(join(mypath, f)) and f.endswith(".str"))]

for streamFile in streams:
    stream = tasks.openStreamFromFile(join(mypath, streamFile), True)
    print(stream.getName() + " is getting processed.")
    inputNodes = stream.findAll("excelimport", None)
    for node in inputNodes:  # 'in' is a reserved word, so use another name
        ff = node.getPropertyValue("full_filename")
        ff = ff.replace("2017Q3.xlsx", "2017Q4.xlsx")  # replace() returns a new string
        node.setPropertyValue("full_filename", ff)
    print(stream.getName() + " is processed.")
    stream.close()
I did not test this, but it shouldn't need a lot of tweaking to work.

Sending data to a Nextion 2.4 display from a Raspberry Pi

Hi, I am trying to send data to a Nextion 2.4 display from a Raspberry Pi. I want to make a change such as t0.txt="abc", but I don't know how to do it with Python.
I tried this code block, but it doesn't work:
import serial
import time
import struct

ser = serial.Serial("/dev/ttyAMA0")
print ser
time.sleep(1)

i = 1
k = struct.pack('B', 0xff)

while True:
    ser.write(b"t0.txt=")
    ser.write(str(i))
    ser.write(k)
    ser.write(k)
    ser.write(k)
    print " NEXT"
    time.sleep(1)
    i = i + 1
You're missing the quotation marks around str(i). You're sending t0.txt=1, t0.txt=2, etc. It needs to be t0.txt="1".
Something like this should do the trick, I guess:
ser.write(b"t0.txt=")
ser.write('"')
ser.write(str(i))
ser.write('"')
ser.write(k)
ser.write(k)
ser.write(k)
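Putting it together, a minimal sketch of the corrected loop (Python 2, matching the question's print syntax; the serial port and the t0 component name are taken from the question):
import serial
import struct
import time

ser = serial.Serial("/dev/ttyAMA0", 9600)  # Nextion displays default to 9600 baud
end = struct.pack('B', 0xff) * 3           # every Nextion command ends with three 0xff bytes

i = 1
while True:
    ser.write('t0.txt="%d"' % i)  # the quotes around the value are required
    ser.write(end)
    time.sleep(1)
    i += 1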

Python with Gtk3 not setting unicode properly

I have some simple code that isn't working as expected. First, the docs say that Gtk.Clipboard.get(Gdk.SELECTION_PRIMARY).set_text() should be able to accept just the text, with the length argument optional, but that doesn't work (see below). Second, pasting a unicode ° symbol breaks setting the text: trying to retrieve it from the clipboard fails (and it won't paste into other programs), and it gives this warning:
Gdk-WARNING **: Error converting selection from UTF8_STRING
>>> from gi.repository.Gtk import Clipboard
>>> from gi.repository.Gdk import SELECTION_PRIMARY
>>> d='\u00B0'
>>> print(d)
°
>>> cb=Clipboard
Clipboard
>>> cb=Clipboard.get(SELECTION_PRIMARY)
>>> cb.set_text(d) #this should work
Traceback (most recent call last):
  File "<ipython-input-6-b563adc3e800>", line 1, in <module>
    cb.set_text(d)
  File "/usr/lib/python3/dist-packages/gi/types.py", line 43, in function
    return info.invoke(*args, **kwargs)
TypeError: set_text() takes exactly 3 arguments (2 given)
>>> cb.set_text(d, len(d))
>>> cb.wait_for_text()
(.:13153): Gdk-WARNING **: Error converting selection from UTF8_STRING
'\\Uffffffff\\Uffffffff'
From the documentation for Gtk.Clipboard, it looks like the method set_text needs a second argument: the first is the text, the second is the length of the text in bytes. If you don't want to provide the length, you can pass -1 to let it calculate the length itself.
gtk.Clipboard.set_text
def set_text(text, len=-1)
text : a string.
len : the length of text, in bytes, or -1, to calculate the length.
I've tested it on Python 3 and it works with cb.set_text(d, -1).
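Put together, a minimal fixed version of the question's snippet (same imports and selection as the question; -1 tells GTK to calculate the length itself):
from gi.repository.Gtk import Clipboard
from gi.repository.Gdk import SELECTION_PRIMARY

d = '\u00B0'
cb = Clipboard.get(SELECTION_PRIMARY)
cb.set_text(d, -1)         # -1: let GTK work out the byte length
print(cb.wait_for_text())  # should print the degree sign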
Since GTK version 3.16 there is an easier way of getting the clipboard: you can get it with the get_default() method:
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk

string = 'some text to copy'  # the text you want on the clipboard
display = Gdk.Display.get_default()
clipboard = Gtk.Clipboard.get_default(display)
clipboard.set_text(string, -1)

For me it also worked without calling clipboard.store().
Reference: https://lazka.github.io/pgi-docs/Gtk-3.0/classes/Clipboard.html#Gtk.Clipboard.get_default
In Python 3.4, this is only needed for a Gtk.EntryBuffer. With a Gtk.TextBuffer, set_text works without the second parameter.
Example 1 works as usual:
settinginfo = 'serveradres = ' + server + '\n poortnummer = ' + poort
GtkTextBuffer2.set_text(settinginfo)
Example 2 needs the extra len parameter:
ErrorTextDate = 'choose earlier date'
GtkEntryBuffer1.set_text(ErrorTextDate, -1)
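As a small sketch of that difference (assuming Gtk 3 via PyGObject; the sample strings are taken from the examples above):
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

text_buffer = Gtk.TextBuffer()
text_buffer.set_text('serveradres = 10.0.0.1')       # length parameter can be omitted

entry_buffer = Gtk.EntryBuffer()
entry_buffer.set_text('choose earlier date', -1)     # length parameter is required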