Hi, I've bought the Display Pack 2.0 (https://shop.pimoroni.com/products/pico-display-pack-2-0?variant=39374122582099) and am trying to display an image. If I download the image and copy it onto the Pi Pico W, the image displays fine. I'm trying to get the image downloaded from a URL and displayed, but am getting
MemoryError: memory allocation failed, allocating 21760 bytes
I'm new to this sort of coding and am struggling to see what I'm doing wrong. Here is my full Python code:
import network
import urequests
import time
import picographics
import jpegdec
from pimoroni import Button
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("SSID","password")
time.sleep(5)
print(wlan.isconnected())
display = picographics.PicoGraphics(display=picographics.DISPLAY_PICO_DISPLAY_2, rotate=0)
display.set_backlight(0.8)
# Create a new JPEG decoder for our PicoGraphics
j = jpegdec.JPEG(display)
# Open the JPEG file
#j.open_file("squid.jpg")
# Decode the JPEG
#j.decode(0, 0, jpegdec.JPEG_SCALE_FULL)
if wlan.isconnected():
    res = urequests.get(url='https://squirrel365.io/tmp/squid.jpg')
    j.open_RAM(memoryview(res.content))
    # Decode the JPEG
    j.decode(0, 0, jpegdec.JPEG_SCALE_FULL)
    # Display the result
    display.update()
Any ideas?
Kedge
I've tested and I can get plain text back from the URL; as soon as I try to fetch an image I get the memory error.
You're not doing anything "wrong"; you're just working with a device that has very limited resources. What if you rearrange your code to perform the fetch before initializing any graphics resources, write the data to a file, and then have the jpegdec module read it from disk? Something like:
import jpegdec
import network
import picographics
import time
import urequests
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("SSID","password")
while not wlan.isconnected():
    print('waiting for network')
    time.sleep(1)

res = urequests.get(url='https://squirrel365.io/tmp/squid.jpg')

with open('squid.jpg', 'wb') as fd:
    fd.write(res.content)
display = picographics.PicoGraphics(display=picographics.DISPLAY_PICO_DISPLAY_2, rotate=0)
display.set_backlight(0.8)
# Create a new JPEG decoder for our PicoGraphics
j = jpegdec.JPEG(display)
# Open the JPEG file
j.open_file("squid.jpg")
# Decode the JPEG
j.decode(0, 0, jpegdec.JPEG_SCALE_FULL)
# Display the result
display.update()
You can sometimes free up additional memory by adding an explicit garbage collection to your code:
import gc
.
.
.
with open('squid.jpg', 'wb') as fd:
    fd.write(res.content)

gc.collect()
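If that still isn't enough, releasing the HTTP response object before PicoGraphics is created can free a further chunk of RAM, since res.content keeps the whole download in memory. A small sketch of that variant (my addition, assuming the response isn't needed once the file is on disk):
import gc

res = urequests.get(url='https://squirrel365.io/tmp/squid.jpg')

with open('squid.jpg', 'wb') as fd:
    fd.write(res.content)

res.close()   # close the socket held by urequests
del res       # drop the last reference to the downloaded bytes
gc.collect()  # reclaim that memory before PicoGraphics allocates its framebuffer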
Related
I had a motor controller connected to GP0 and GP1, so I know those pins work; however, I can't seem to get a response from the SIM controller. Without the Pico attached to the board I can get it to work, but when I add the Pico it seems like it won't send AT commands or translate received data, if the Pico is getting any at all. I have tried running the code line by line in a live session, and all I get is a single number equal to the number of characters in the string I am sending to the SIM controller, i.e. uart.write(bytearray(b'ATE1\r\n')) returns >>> 6, 6 being the number of characters after the b. I'm ordering a new Pico to see if maybe it was just my subpar soldering, but in the meantime I'd like to see if anyone more experienced than I am can point out an error.
import machine
import os
import utime
import time
import binascii
from machine import UART
pwr_enable = 22 # EG25_4G Power key connected on GP22
uart_port = 0
uart_baud = 115200
# Initialize UART0
uart = machine.UART(uart_port, uart_baud)
print(os.uname())
def wait_resp_info(timeout=3000):
    prvmills = utime.ticks_ms()
    info = b""
    while (utime.ticks_ms()-prvmills) < timeout:
        if uart.any():
            info = b"".join([info, uart.read(1)])
    print(info.decode())
    return info

def Check_and_start():  # Initialize SIM Module
    while True:
        uart.write(bytearray(b'ATE1\r\n'))
        utime.sleep(10)
        uart.write(bytearray(b'AT\r\n'))
        rec_temp = wait_resp_info()
        print(wait_resp_info())
        print(rec_temp)
        print(rec_temp.decode())
        utime.sleep(10)
        if 'OK' in rec_temp.decode():
            print('OK response from AT command\r\n' + rec_temp.decode())
            break
        else:
            power = machine.Pin(pwr_enable, machine.Pin.OUT)
            power.value(1)
            utime.sleep(2)
            power.value(0)
            print('No response, restarting\r\n')
            utime.sleep(10)

def Network_check():  # Network connectivity check
    for i in range(1, 3):
        if Send_command("AT+CGREG?", "0,1") == 1:
            print('Connected\r\n')
            break
        else:
            print('Device is NOT connected\r\n')
            utime.sleep(2)
            continue

def Str_to_hex_str(string):
    str_bin = string.encode('utf-8')
    return binascii.hexlify(str_bin).decode('utf-8')
Check_and_start()
Network_check()
The response is:
>>> Check_and_start()
b''
b'\x00\x00'
No response, restarting
A new Pico fixed my issue; I believe my inadequate soldering created the problem. The symptom was that no UART data was being transmitted or received through UART pins 0 and 1. The solution was a new Pico board inserted in place of the old one; the same code was uploaded and ran successfully first time.
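For anyone who hits the same symptom: uart.write() returning 6 only means six bytes were queued on the Pico side, not that the module answered. A minimal loopback sketch (my addition, not from the original post): jumper GP0 to GP1 with the SIM board disconnected and check whether the bytes come straight back.
from machine import UART, Pin
import time

uart = UART(0, baudrate=115200, tx=Pin(0), rx=Pin(1))

n = uart.write(b'ATE1\r\n')   # returns the number of bytes written, e.g. 6
time.sleep(0.1)
print(n, uart.read())         # expect b'ATE1\r\n' back if the pins and solder joints are good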
I have a class that I want to share between different files or computers. I can pickle it and load it from the same Jupyter notebook; however, I cannot load it from a different machine or a different notebook. I have tried the following.
Initialize and save from a Jupyter notebook:
# Simplified class definition
class MyClass:
    def __init__(self, name):
        self.name = name
        self.dataToIndex = {}
        self.index = 0

    def addData(self, dataReceived):
        self.dataToIndex[dataReceived] = self.index
        self.index += 1
# initialize
my_dataset = MyClass("My_test_dataset")
# add some data
my_dataset.addData("One")
my_dataset.addData("Two")
# check data
my_dataset.dataToIndex
# save from one Jupyter notebook
import pickle
with open("path.obj", "wb") as inp:
pickle.dump(my_dataset, inp, pickle.HIGHEST_PROTOCOL)
# Read from the same jupyter notebook
with open("path.obj", 'rb') as inp:
transferred = pickle.load(inp)
# Output looks good
transferred.dataToIndex
{'One': 0, 'Two': 1}
Read from the new Jupyter notebook:
import pickle
with open("path.obj", 'rb') as inp:
transferred = pickle.load(inp)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-44a1a18ebb1b> in <module>
2 # Read from the same jupyter notebook
3 with open("path.obj", 'rb') as inp:
----> 4 transferred = pickle.load(inp)
AttributeError: Can't get attribute 'MyClass' on <module '__main__'>
Now, I want to be able to load it into a different Python script or a different Jupyter notebook.
I have checked Saving an Object (Data persistence)
and
https://www.stefaanlippens.net/python-pickling-and-dealing-with-attributeerror-module-object-has-no-attribute-thing.html
but could not figure out a solution. Any help is appreciated.
There are tons of helpful posts on this; however, most of them cover things like how to save from within the class. For a future beginner like me, I just want to provide the simple solution that worked for me.
Take the class out of the notebook and put it in a script such as my_class_def.py:
class MyClass:
    def __init__(self, name):
        self.name = name
        self.dataToIndex = {}
        self.index = 0

    def addData(self, dataReceived):
        self.dataToIndex[dataReceived] = self.index
        self.index += 1
Then use pickle to save the object as before.
In the different (new) script, import the class first with
from my_class_def import MyClass
and then load the pickled file.
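A minimal sketch of the loading side, assuming my_class_def.py sits next to the new script or notebook:
# new_notebook_or_script.py (hypothetical name)
import pickle

from my_class_def import MyClass  # makes MyClass resolvable while unpickling

with open("path.obj", "rb") as inp:
    transferred = pickle.load(inp)

print(transferred.dataToIndex)  # {'One': 0, 'Two': 1}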
It is puzzling to me that there is a tfdv.load_statistics() function but no corresponding tfdv.write_statistics() function. How do I go about saving the statistics and then loading them again?
e.g.
import tensorflow_data_validation as tfdv
stats = tfdv.generate_statistics_from_dataframe(df)
# how do I save?
# load back for later use
saved_stats = tfdv.load_statistics('saved_stats.stats')
I can save the string representation to a file, but this is not the format that load_statistics expects.
with open('saved_stats.stats', 'w') as o:
    o.write(str(stats))
Pointers anyone?
Have you tried this: tfdv.utils.stats_util.write_stats_text?
In the current tfdv version 1.3.0 there are the following methods that can be used:
load_stats_text
write_stats_text
Example:
import tensorflow_data_validation as tfdv
stats = tfdv.generate_statistics_from_dataframe(df)
stats_path = "my-stats-file.stats"
# saving
tfdv.write_stats_text(stats, stats_path)
# loading
stats = tfdv.load_stats_text(stats_path)
Okay, I figured out this hacky way to do it.
df = ... # create pandas df
from tensorflow_metadata.proto.v0 import statistics_pb2
import tensorflow_data_validation as tfdv
stats = tfdv.generate_statistics_from_dataframe(df)
# save it
with open('saved_stats.stats', 'wb') as o:
    o.write(stats.SerializeToString())

# load back for later use
with open('saved_stats.stats', 'rb') as i:
    loaded_stats = statistics_pb2.DatasetFeatureStatisticsList.FromString(i.read())
There's a function called tfdv.load_stats_binary that you can use to solve this problem.
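That pairs nicely with the SerializeToString() approach above; a minimal sketch, assuming load_stats_binary takes the path of the serialized proto:
import tensorflow_data_validation as tfdv

stats = tfdv.generate_statistics_from_dataframe(df)

# write the raw serialized DatasetFeatureStatisticsList proto
with open('saved_stats.stats', 'wb') as o:
    o.write(stats.SerializeToString())

# read it back without touching statistics_pb2 directly
loaded_stats = tfdv.load_stats_binary('saved_stats.stats')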
I'm trying to follow the AI Platform tutorial to upload a model and a prediction routine, but one part fails and I don't understand why.
My prediction class is the same as in their tutorial:
%%writefile predictor.py
import os
import pickle
import numpy as np
from sklearn.datasets import load_iris
from sklearn.externals import joblib
class MyPredictor(object):
    def __init__(self, model, preprocessor):
        self._model = model
        self._preprocessor = preprocessor
        self._class_names = load_iris().target_names

    def predict(self, instances, **kwargs):
        inputs = np.asarray(instances)
        preprocessed_inputs = self._preprocessor.preprocess(inputs)
        if kwargs.get('probabilities'):
            probabilities = self._model.predict_proba(preprocessed_inputs)
            return probabilities.tolist()
        else:
            outputs = self._model.predict(preprocessed_inputs)
            return [self._class_names[class_num] for class_num in outputs]

    @classmethod
    def from_path(cls, model_dir):
        model_path = os.path.join(model_dir, 'model.joblib')
        model = joblib.load(model_path)
        preprocessor_path = os.path.join(model_dir, 'preprocessor.pkl')
        with open(preprocessor_path, 'rb') as f:
            preprocessor = pickle.load(f)
        return cls(model, preprocessor)
The code I use to create my model in the cloud is:
! gcloud beta ai-platform versions create {VERSION_NAME} \
--model {MODEL_NAME} \
--runtime-version 1.13 \
--python-version 3.5 \
--origin gs://{BUCKET_NAME}/custom_prediction_routine_tutorial/model/ \
--package-uris gs://{BUCKET_NAME}/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz \
--prediction-class predictor.MyPredictor
But I end up with such an odd error:
ERROR: (gcloud.beta.ai-platform.versions.create) Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'ascii' codec can't decode byte 0xf9 in position 2: ordinal not in range(128) (Error code: 0)"
The thing is, when I run the same command without
--prediction-class predictor.MyPredictor
it works fine.
Does anyone know the reason for this? I think model.joblib might have an encoding problem, but when I load it myself there is nothing wrong.
I've found the solution.
In the tutorial, they use pickle to save the preprocessor object and joblib to save the model.
You need to use joblib to save both, and then upload them to Google Storage.
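A minimal sketch of that export step (my addition; the file names match what the tutorial's from_path expects, and from_path would then load the preprocessor with joblib.load instead of pickle.load):
from sklearn.externals import joblib  # matches the tutorial's runtime; newer scikit-learn uses the standalone joblib package

# save both artifacts with joblib before copying them to
# gs://{BUCKET_NAME}/custom_prediction_routine_tutorial/model/
joblib.dump(model, 'model.joblib')
joblib.dump(preprocessor, 'preprocessor.pkl')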
I have huge data in Hadoop archive .har format. Since .har doesn't include any compression, I am trying to further gzip it and store it in HDFS. The only thing I can get to work without error is:
harFile.coalesce(1, "true")
.saveAsTextFile("hdfs://namenode/archive/GzipOutput", classOf[org.apache.hadoop.io.compress.GzipCodec])
//`coalesce` because Gzip isn't splittable.
But this doesn't give me the correct results: a gzipped file is generated, but with invalid output (a single line giving the RDD type, etc.).
Any help will be appreciated. I am also open to any other approaches.
Thanks.
A Java code snippet to create a compressed version of an existing HDFS file.
It was built in a hurry, in a text editor, from bits and pieces of a Java app I wrote some time ago, hence not tested; some typos and gaps are to be expected.
// HDFS API
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.FileStatus;
// native Hadoop compression libraries
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.io.compress.Lz4Codec;
// plain Java I/O stream types used by the copy loop below
import java.io.InputStream;
import java.io.OutputStream;
..............
// Hadoop "Configuration" (and its derivatives for HDFS, HBase etc.) constructors try to auto-magically
// find their config files by searching CLASSPATH for directories, and searching each dir for hard-coded
// name "core-site.xml", plus "hdfs-site.xml" and/or "hbase-site.xml" etc.
// WARNING - if these config files are not found, the "Configuration" reverts to hard-coded defaults without
// any warning, resulting in bizarre error messages later, so let's run some explicit checks here
Configuration cnfHadoop = new Configuration() ;
String propDefaultFs =cnfHadoop.get("fs.defaultFS") ;
if (propDefaultFs ==null || ! propDefaultFs.startsWith("hdfs://"))
{ throw new IllegalArgumentException(
"HDFS configuration is missing - no proper \"core-site.xml\" found, please add\n"
+"directory /etc/hadoop/conf/ (or custom dir with custom XML conf files) in CLASSPATH"
) ;
}
/*
// for a Kerberised cluster, either you already have a valid TGT in the default
// ticket cache (via "kinit"), or you have to authenticate by code
UserGroupInformation.setConfiguration(cnfHadoop) ;
UserGroupInformation.loginUserFromKeytab("user@REALM", "/some/path/to/user.keytab") ;
*/
FileSystem fsCluster =FileSystem.get(cnfHadoop) ;
Path source = new Path("/some/hdfs/path/to/XXX.har") ;
Path target = new Path("/some/hdfs/path/to/XXX.har.gz") ;
// alternative: "BZip2Codec" for better compression (but higher CPU cost)
// alternative: "SnappyCodec" or "Lz4Codec" for lower compression (but much lower CPU cost)
CompressionCodecFactory codecBootstrap = new CompressionCodecFactory(cnfHadoop) ;
CompressionCodec codecHadoop =codecBootstrap.getCodecByClassName(GzipCodec.class.getName()) ;
Compressor compressorHadoop =codecHadoop.createCompressor() ;
byte[] buffer = new byte[16*1024*1024] ;
int bufUsedCapacity ;
InputStream sourceStream =fsCluster.open(source) ;
OutputStream targetStream =codecHadoop.createOutputStream(fsCluster.create(target, true), compressorHadoop) ;
while ((bufUsedCapacity =sourceStream.read(buffer)) >0)
{ targetStream.write(buffer, 0, bufUsedCapacity) ; }
targetStream.close() ;
sourceStream.close() ;
..............