>>> import httplib
>>> x = httplib.HTTPConnection('localhost', 8080)
>>> x.connect()
>>> x.request('GET','/camera/store?fn=aaa&ts='+str.encode('2015-06-15T14:45:21.982600+00:00','ascii')+'&cam=ddd')
>>> y=x.getresponse()
>>> z=y.read()
>>> z
'error: Invalid format: "2015-06-15T14:45:21.982600 00:00" is malformed at " 00:00"'
The system shows me this error. I want the timestamp to be encoded like this: 2015-06-15T14%3A45%3A21.982600%2B00%3A00
>>> import urllib
>>> f = { 'fn' : 'aaa', 'ts' : "2015-06-15T14:45:21.982600+00:00"}
>>> urllib.urlencode(f)
From "How to urlencode a querystring in Python?":
url = "http://example.com?p=" + urllib.quote(query)
It works with this!
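To tie it back to the original request, here is a minimal sketch (untested; it assumes the same localhost:8080 endpoint and parameters from the question) that lets urllib.urlencode percent-encode the timestamp before it goes into the query string:
import httplib
import urllib

# urlencode uses quote_plus, so ':' becomes %3A and '+' becomes %2B
params = urllib.urlencode({
    'fn': 'aaa',
    'ts': '2015-06-15T14:45:21.982600+00:00',
    'cam': 'ddd',
})

conn = httplib.HTTPConnection('localhost', 8080)
conn.connect()
conn.request('GET', '/camera/store?' + params)
print conn.getresponse().read()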
I am trying to train my point cloud data with PointCNN, so I need to convert my dataset to HDF5 as used in PointCNN. PointCNN uses the modelnet40_ply_hdf5_2048 dataset.
I have tried converting my custom dataset, but I am having issues with the label.
I tried this to get the labels/shape names:
shape_ids = {}
shape_ids = [line.rstrip() for line in open(os.path.join(PATH, 'filelist1.txt'))]
shape_names = ['_'.join(x.split('_')[0:-1]) for x in shape_ids]
datapath = [(shape_names[i], os.path.join(PATH, shape_names[i], shape_ids[i]))
            for i in range(len(shape_ids))]
Convert to h5py file
import numpy as np
from tqdm import tqdm
import h5py

filenames = [line.rstrip() for line in open(os.path.join(PATH))]
f = h5py.File("filename", 'w')
data = np.zeros((len(filenames), 1024, 3))
for i in range(0, len(datapath)):
    fn = datapath[i]
    cls = classes[datapath[i][0]]
    label = np.array([cls]).astype(np.int32)
    csvreader = np.genfromtxt("data1/" + filenames[i] + ".csv", delimiter=",").astype(np.float32)
    for j in range(0, 1024):
        data[i, j] = [csvreader[j][0], csvreader[j][1], csvreader[j][2]]
    label

dset1 = f.create_dataset("data", data=data, compression="gzip", compression_opts=4)
dset2 = f.create_dataset("label", data=label, compression="gzip", compression_opts=1)
f.close()
It did convert successfully, but when I tried to train PointCNN I got:
PointCNN training
------Building model-------
------Successfully Built model-------
Traceback (most recent call last):
  File "train_pytorch.py", line 174, in <module>
    current_data, current_label, _ = provider.shuffle_data(current_data, np.squeeze(current_label))
  File "provider.py", line 28, in shuffle_data
    idx = np.arange(len(labels))
TypeError: len() of unsized object
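For context on that error: in the conversion loop above, label only ever holds the single class of the last sample, so np.squeeze(current_label) turns it into a 0-d array, and len() of a 0-d array raises exactly this TypeError. A hedged sketch of one way to collect one label per cloud (it reuses datapath, classes and data from the snippets above and assumes they are defined as in the question):
import numpy as np
import h5py

# One label per point cloud, shaped (N, 1), instead of only the last class.
labels = np.zeros((len(datapath), 1), dtype=np.int32)
for i in range(len(datapath)):
    labels[i, 0] = classes[datapath[i][0]]

with h5py.File("filename", "w") as f:
    f.create_dataset("data", data=data, compression="gzip", compression_opts=4)
    f.create_dataset("label", data=labels, compression="gzip", compression_opts=1)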
I have a question about the data type of the result returned by SymPy's Poly.all_coeffs(). I have only recently started to use SymPy.
My SymPy transfer function is the following:
Then I run this code:
from sympy import fraction, Poly  # Gs and the symbol s are defined earlier

n, d = fraction(Gs)
num = Poly(n, s)
den = Poly(d, s)
num_c = num.all_coeffs()
den_c = den.all_coeffs()
I get:
Then I run this code:
from scipy import signal
#nu = [5000000.0]
#de = [4.99, 509000.0]
nu = num_c
de = den_c
sys = signal.lti(nu, de)
w,mag,phase = signal.bode(sys)
plt.plot(w/(2*np.pi), mag)
and the result is:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-131-fb960684259c> in <module>
4 nu = num_c
5 de = den_c
----> 6 sys = signal.lti(nu, de)
But if I use the commented-out 'nu' and 'de' plain Python lists instead, the program works. So what is wrong here?
Why did you show only a bit of the error? Why not the full message, maybe even the full traceback?
In [60]: sys = signal.lti(num_c, den_c)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-60-21f71ecd8884> in <module>
----> 1 sys = signal.lti(num_c, den_c)
/usr/local/lib/python3.6/dist-packages/scipy/signal/ltisys.py in __init__(self, *system, **kwargs)
590 self._den = None
591
--> 592 self.num, self.den = normalize(*system)
593
594 def __repr__(self):
/usr/local/lib/python3.6/dist-packages/scipy/signal/filter_design.py in normalize(b, a)
1609 leading_zeros = 0
1610 for col in num.T:
-> 1611 if np.allclose(col, 0, atol=1e-14):
1612 leading_zeros += 1
1613 else:
<__array_function__ internals> in allclose(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/numpy/core/numeric.py in allclose(a, b, rtol, atol, equal_nan)
2169
2170 """
-> 2171 res = all(isclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan))
2172 return bool(res)
2173
<__array_function__ internals> in isclose(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/numpy/core/numeric.py in isclose(a, b, rtol, atol, equal_nan)
2267 y = array(y, dtype=dt, copy=False, subok=True)
2268
-> 2269 xfin = isfinite(x)
2270 yfin = isfinite(y)
2271 if all(xfin) and all(yfin):
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
Now look at the elements of the num_c list (same for den_c):
In [55]: num_c[0]
Out[55]: 500000.000000000
In [56]: type(_)
Out[56]: sympy.core.numbers.Float
The SciPy code is doing NumPy testing on the inputs, so it has first turned the lists into arrays:
In [61]: np.array(num_c)
Out[61]: array([500000.000000000], dtype=object)
This array contains SymPy object(s). NumPy can't cast that to float under the 'safe' casting rule, but an explicit astype uses 'unsafe' casting by default:
In [63]: np.array(num_c).astype(float)
Out[63]: array([500000.])
So let's convert both lists into valid NumPy float arrays:
In [64]: sys = signal.lti(np.array(num_c).astype(float), np.array(den_c).astype(float))
In [65]: sys
Out[65]:
TransferFunctionContinuous(
array([100200.4008016]),
array([1.00000000e+00, 1.02004008e+05]),
dt: None
)
Conversion in a list comprehension also works:
sys = signal.lti([float(i) for i in num_c],[float(i) for i in den_c])
You likely need to convert the SymPy objects to floats / lists of floats.
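A minimal end-to-end sketch of that conversion step; the transfer function below is a made-up placeholder (the one from the question is not shown), so only the float conversion is the point here:
import numpy as np
import matplotlib.pyplot as plt
import sympy as sp
from scipy import signal

s = sp.symbols('s')
Gs = 5 / (0.001 * s + 7)                # hypothetical transfer function
n, d = sp.fraction(Gs)
num_c = sp.Poly(n, s).all_coeffs()      # SymPy numbers, not Python floats
den_c = sp.Poly(d, s).all_coeffs()

# Convert to plain floats before handing the coefficients to scipy.signal
sys = signal.lti([float(c) for c in num_c], [float(c) for c in den_c])
w, mag, phase = signal.bode(sys)
plt.plot(w / (2 * np.pi), mag)
plt.show()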
I would like to know the right way to read float fields using QuickFIX (Python). I was getting a string and then casting it to float.
For instance:
>>> m = fix.Message()
>>> m.setField(fix.BidPx(1.12))
>>> m.getField(fix.BidPx()).getString()
'1.12'
>>> float(m.getField(fix.BidPx()).getString())
1.12
The approach above works fine for floats with fewer than 15 digits of precision, but I get the following error for float numbers with more than 15 digits of precision:
>>> m = fix.Message()
>>> m.setField(fix.BidPx(1.123456789123456))
>>> m.getField(fix.BidPx()).getString()
'\x00\xe1}\xf5\x82U\x00\x0078912346'
>>> float(m.getField(fix.BidPx()).getString())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: could not convert string to float:
I am not sure that sample works; maybe you should explain how you import "fix".
Anyway, this sample works with Python 3.7 and quickfix 1.15.1:
>>> import quickfix as fix
>>> m = fix.Message()
>>> m.setField(fix.BidPx(1.123456789123456789123456789))
>>> m.getField(fix.BidPx().getField())
'1.12345678912346'
>>>
If you need more precision in the float number, you can do:
>>> m.setField(fix.StringField(fix.BidPx().getField(),"1.123456789123456789123456789"))
>>> m.getField(fix.BidPx().getField())
'1.123456789123456789123456789'
>>>
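If you then need the value back as a number without losing digits, one option (just a sketch built from the calls above plus the standard library) is to parse the string with decimal.Decimal:
>>> from decimal import Decimal
>>> px = Decimal(m.getField(fix.BidPx().getField()))
>>> px
Decimal('1.123456789123456789123456789')
>>> float(px)
1.1234567891234568
float() still rounds to double precision, so keep the Decimal around if you need every digit.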
I hope I've helped
I have captured a raw packet using Python's sockets:
import socket

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
while True:
    message = s.recv(4096)
    test = []
    print(len(message))
    print(repr(message))
I assumed that the packet returned would be in hex string format; however, the printout of print(repr(message)) gives me something like this:
b'\x00\x1b\xac\x00Gd\x00\x14\xd1+\x1f\x19\x05\n\x124VxC!UUUU\x00\x00\x00\x00\xcd\xcc\xcc=\xcd\xccL>\x9a\x99\x99>\xcd\xcc\xcc>\x00\x00\x00?\x9a\x......'
which has weird non-hex characters like !UUUU or =. What encoding is this, and how do I decode the packet?
I know what the packet looks like ahead of time for now, since I'm the one generating the packets using winpcapy:
import sys
from ctypes import *
from winpcapy import *
import zlib
import binascii
import time
from ChanPackets import base, FrMessage, FrTodSync, FrChanConfig, FlChan, RlChan

while (1):
    now = time.time()
    errbuf = create_string_buffer(PCAP_ERRBUF_SIZE)
    fp = pcap_t
    deviceName = b'\\Device\\NPF_{8F5BD2E9-253F-4659-8256-B3BCD882AFBC}'
    fp = pcap_open_live(deviceName, 65536, 1, 1000, errbuf)
    if not bool(fp):
        print("\nUnable to open the adapter. %s is not supported by WinPcap\n" % deviceName)
        sys.exit(2)
    # FrMessage is a custom class that creates the packet
    test = FrMessage('00:1b:ac:00:47:64', '00:14:d1:2b:1f:19', 0x12345678, 0x4321, 0x55555555, list(i/10 for i in range(320)))
    # test.get_Raw_Packet() returns a c_bytes array needed for winpcap to send the packet
    if (pcap_sendpacket(fp, test.get_Raw_Packet(), test.packet_size) != 0):
        print("\nError sending the packet: %s\n" % pcap_geterr(fp))
        sys.exit(3)
    elapsed = time.time() - now
    if elapsed < 0.02 and elapsed > 0:
        time.sleep(0.02 - elapsed)
pcap_close(fp)
Note: I would like to get an array of hex values representing each byte
What encoding is this, and how do I decode the packet?
What you see is the representation of a bytes object in Python. As you might have guessed, \xab represents the byte 0xab (171).
which has weird non-hex characters like !UUUU or =
Printable ASCII characters represent themselves, i.e., instead of \x55 the representation contains just U.
What you have is a sequence of bytes. How to decode them depends on your application. For example, to decode a data packet that contains an Ethernet frame, you could use scapy (Python 2):
>>> b = '\x00\x02\x157\xa2D\x00\xae\xf3R\xaa\xd1\x08\x00E\x00\x00C\x00\x01\x00\x00#\x06x<\xc0\xa8\x05\x15B#\xfa\x97\x00\x14\x00P\x00\x00\x00\x00\x00\x00\x00\x00P\x02 \x00\xbb9\x00\x00GET /index.html HTTP/1.0 \n\n'
>>> c = Ether(b)
>>> c.hide_defaults()
>>> c
<Ether dst=00:02:15:37:a2:44 src=00:ae:f3:52:aa:d1 type=0x800 |
<IP ihl=5L len=67 frag=0 proto=tcp chksum=0x783c src=192.168.5.21 dst=66.35.250.151 |
<TCP dataofs=5L chksum=0xbb39 options=[] |
<Raw load='GET /index.html HTTP/1.0 \n\n' |>>>>
I would like to get an array of hex values representing each byte
You could use binascii.hexlify():
>>> pkt = b'\x00\x1b\xac\x00Gd\x00'
>>> import binascii
>>> binascii.hexlify(pkt)
b'001bac00476400'
Or, if you want a list of hex-value strings:
>>> hexvalue = binascii.hexlify(pkt).decode()
>>> [hexvalue[i:i+2] for i in range(0, len(hexvalue), 2)]
['00', '1b', 'ac', '00', '47', '64', '00']
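On Python 3 the same result is also available through the built-in bytes.hex() method, for example:
>>> pkt = b'\x00\x1b\xac\x00Gd\x00'
>>> pkt.hex()
'001bac00476400'
>>> [pkt.hex()[i:i+2] for i in range(0, len(pkt.hex()), 2)]
['00', '1b', 'ac', '00', '47', '64', '00']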
In Python, raw packet decoding can be done using the scapy classes IP(), TCP(), UDP(), etc.:
import sys
import socket
from scapy.all import *

s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
while 1:
    packet = s.recvfrom(2000)
    packet = packet[0]
    ip = IP(packet)
    ip.show()
I use pandas DataFrames heavily and need to attach some data to them, for example to record the creation time of the DataFrame, an additional description of it, etc.
I just can't find a reserved field of the DataFrame class to keep such data.
So I changed the core\frame.py file and added a line _reserved_slot = {} to solve my issue. I am posting the question here just to ask whether it is OK to do so, or whether there is a better way to attach metadata to a DataFrame/column/row, etc.
#----------------------------------------------------------------------
# DataFrame class

class DataFrame(NDFrame):
    _auto_consolidate = True
    _verbose_info = True
    _het_axis = 1
    _col_klass = Series

    _AXIS_NUMBERS = {
        'index': 0,
        'columns': 1
    }
    _reserved_slot = {}  # Add by bigbug to keep extra data for dataframe
    _AXIS_NAMES = dict((v, k) for k, v in _AXIS_NUMBERS.iteritems())
EDIT: (adding a demo of witingkuo's approach)
>>> df = pd.DataFrame(np.random.randn(10,5), columns=list('ABCDEFGHIJKLMN')[0:5])
>>> df
A B C D E
0 0.5890 -0.7683 -1.9752 0.7745 0.8019
1 1.1835 0.0873 0.3492 0.7749 1.1318
2 0.7476 0.4116 0.3427 -0.1355 1.8557
3 1.2738 0.7225 -0.8639 -0.7190 -0.2598
4 -0.3644 -0.4676 0.0837 0.1685 0.8199
5 0.4621 -0.2965 0.7061 -1.3920 0.6838
6 -0.4135 -0.4991 0.7277 -0.6099 1.8606
7 -1.0804 -0.3456 0.8979 0.3319 -1.1907
8 -0.3892 1.2319 -0.4735 0.8516 1.2431
9 -1.0527 0.9307 0.2740 -0.6909 0.4924
>>> df._test = 'hello'
>>> df2 = df.shift(1)
>>> print df2._test
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Python\lib\site-packages\pandas\core\frame.py", line 2051, in __getattr__
(type(self).__name__, name))
AttributeError: 'DataFrame' object has no attribute '_test'
>>>
This is not supported right now. See https://github.com/pydata/pandas/issues/2485. The reason is that the propagation of these attributes is non-trivial. You can certainly assign data, but almost all pandas operations return a new object, where the assigned data will be lost.
Your _reserved_slot will become a class variable. That might not work if you want to assign different values to different DataFrames. You can probably assign what you want to the instance directly.
In [6]: import pandas as pd
In [7]: df = pd.DataFrame()
In [8]: df._test = 'hello'
In [9]: df._test
Out[9]: 'hello'
I think a decent workaround is putting your DataFrame into a dictionary with your metadata under other keys. So if you have a DataFrame with cash flows, like:
df = pd.DataFrame({'Amount': [-20, 15, 25, 30, 100]},index=pd.date_range(start='1/1/2018', periods=5))
You can create your dictionary with additional metadata and put the DataFrame there:
out = {'metadata': {'Name': 'Whatever', 'Account': 'Something else'}, 'df': df}
and then access it as out['df'].
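A short usage sketch of that dictionary approach (the keys below are just the ones from the example above):
import pandas as pd

df = pd.DataFrame({'Amount': [-20, 15, 25, 30, 100]},
                  index=pd.date_range(start='1/1/2018', periods=5))
out = {'metadata': {'Name': 'Whatever', 'Account': 'Something else'}, 'df': df}

# The metadata travels alongside the DataFrame in the same object.
print(out['metadata']['Name'])    # Whatever
print(out['df']['Amount'].sum())  # 150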