Example 1: My understanding is that the patched xyz.pi returns 1.0 while math.pi still returns the original constant.
Example 2: I guess this happens because the value is taken from the original math.pi and not from xyz.pi.
Example 3: Patching math.pi directly replaces the object at that path, so the mocked value is used.
Question:
Why doesn't example 2 work? Can it be made to work without adding "import math" and "print(math.pi)"?
# Example 1
# xyz.py
from unittest.mock import patch
from math import pi

with patch("xyz.pi") as pi_mock:
    pi_mock.return_value = 1.0
    print(float(pi))

# Result
1.0
3.141592653589793
# Example 2
# xyz.py
from unittest.mock import patch
from math import pi

with patch("math.pi") as pi_mock:
    pi_mock.return_value = 1.0
    print(float(pi))

# Result
3.141592653589793
# Example 3
# xyz.py
from unittest.mock import patch
import math

with patch("math.pi") as pi_mock:
    pi_mock.return_value = 1.0
    print(float(math.pi))

# Result
1.0
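For reference, a minimal sketch (not from the original post) of the usual approach: patch the name in the namespace where it is looked up, i.e. xyz.pi, and supply a plain replacement value so that float() sees 1.0 directly instead of a MagicMock. It assumes a module xyz that contains "from math import pi":

# sketch: patch the name where it is used (xyz.pi) with a plain value
from unittest.mock import patch
import xyz  # assumed to contain: from math import pi

with patch("xyz.pi", 1.0):
    print(float(xyz.pi))  # 1.0 while the patch is active

print(float(xyz.pi))      # original math.pi value again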
I am currently working on a migration from Python to PySpark, and I have one step where I find the best-fit distribution using a modified version of the function from "Fitting empirical distribution to theoretical ones with Scipy (Python)?". I apply best_fit_distribution to each group of Ids and save the output in a dictionary. Is there some way to do that in PySpark? I have been researching PySpark statistics and I haven't found any library that could help me.
For the needs of this development I have to do this part in PySpark, so keeping it in plain Python is not an option.
import scipy.stats as st
import numpy as np
import pandas as pd
import warnings

def best_fit_distribution(data, bins=200, ax=None):
    # Build a normalized histogram of the data and use the bin centers as x
    y, x = np.histogram(data, bins=bins, density=True)
    x = (x + np.roll(x, -1))[:-1] / 2.0

    # Distributions to check
    distribution_list = [st.alpha, st.chi2, st.pearson3]  # This is an example

    # Best holders
    best_distribution = st.norm
    best_params = (0.0, 1.0)
    best_sse = np.inf

    for distribution in distribution_list:
        try:
            with warnings.catch_warnings():
                warnings.filterwarnings("ignore")
                params = distribution.fit(data)
                arg = params[:-2]
                loc = params[-2]
                scale = params[-1]
                pdf = distribution.pdf(x, loc=loc, scale=scale, *arg)
                sse = np.sum(np.power(y - pdf, 2.0))

                # if an axis was passed in, add the fitted pdf to the plot
                try:
                    if ax:
                        pd.Series(pdf, x).plot(ax=ax)
                except Exception:
                    pass

                # identify if this distribution is better
                if best_sse > sse > 0:
                    best_distribution = distribution
                    best_params = params
                    best_sse = sse
        except Exception:
            pass

    return (best_distribution.name, best_params)
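As a quick, illustrative check (my addition, not from the question), the function can be exercised on a synthetic sample; the gamma parameters below are arbitrary:

# illustrative only: run the fitter on synthetic data
import numpy as np

sample = np.random.default_rng(0).gamma(shape=2.0, scale=10.0, size=5000)
name, params = best_fit_distribution(sample, bins=100)
print(name, params)   # name of the lowest-SSE distribution and its fit parameters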
This is an example and description of my df:
Id    Values
8     59.25
8     25.1
8     39.0333
8     138.3737
8     79.5002
8     52.9
8     0.1674
9     33.8667
9     0.75
9     78.05
9     76.9167
9     14.6667
9     80.3166
9     32.7333
9     0.8333
9     76.95
9     84.4
9     23.1667
9     23.1
9     76.6667
summary    Id       Values
count      34052    1983107
min        8        0.0
max        2558     59646.1712
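One possible approach, sketched here as an assumption rather than a tested solution, is a grouped pandas UDF: groupBy('Id').applyInPandas lets the existing best_fit_distribution run per Id group on the executors. It assumes Spark 3.x with pyarrow installed, that best_fit_distribution (plus numpy/scipy) is importable on the workers, and the Id/Values column names from the example df above; the fit parameters are serialized to a string for simplicity:

# sketch: apply best_fit_distribution per Id with a grouped pandas UDF
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sdf = spark.createDataFrame(
    [(8, 59.25), (8, 25.1), (9, 33.8667), (9, 0.75)],  # toy rows
    ["Id", "Values"],
)

def fit_group(pdf: pd.DataFrame) -> pd.DataFrame:
    name, params = best_fit_distribution(pdf["Values"].values, bins=200)
    return pd.DataFrame(
        {"Id": [pdf["Id"].iloc[0]],
         "distribution": [name],
         "params": [str(params)]}
    )

result = sdf.groupBy("Id").applyInPandas(
    fit_group, schema="Id long, distribution string, params string"
)

# collect into the dictionary keyed by Id that the original code produced
best_fits = {row["Id"]: (row["distribution"], row["params"])
             for row in result.collect()}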
I am running MicroPython on a Raspberry Pi Pico. I can set the position of the servo by changing the duty cycles:
from machine import Pin, PWM
servo = PWM(Pin(0))
servo.freq(50)
servo.duty_u16(1350) # sets position to 0 degrees
I may have missed something, but I have read through the docs and couldn't find any way to read the current position of the servo. Is there any way to do this?
Most servos do not provide any sort of position information. You know what the position is because you set it. You can write code that will keep track of this value for you. For example, something like:
from machine import Pin, PWM

class Servo:
    min_duty = 40    # ~1 ms pulse at 50 Hz (0 degrees)
    max_duty = 115   # ~2 ms pulse at 50 Hz (180 degrees)

    def __init__(self, pin):
        servo = PWM(Pin(pin))
        servo.freq(50)
        self.servo = servo
        self.setpos(90)

    def setpos(self, pos):
        '''Scale the angular position to a value between self.min_duty
        and self.max_duty.'''
        if pos < 0 or pos > 180:
            raise ValueError(pos)
        self.pos = pos
        duty = int((pos/180) * (self.max_duty - self.min_duty) + self.min_duty)
        self.servo.duty(duty)  # note: the rp2 (Pico) port exposes duty_u16() rather than duty()
You would use the Servo class like this:
>>> s = Servo(18)
>>> s.pos
90
>>> s.setpos(180)
>>> s.pos
180
>>> s.setpos(0)
>>> s.pos
0
>>>
In your question, you have:
servo.duty_u16(1350)
I'm suspicious of this value: at 50Hz, the duty cycle is typically between 40 and 115 (+/- some small amount at either end), corresponding to ≈ 1ms (duty=40) to 2ms (duty=115).
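For the Pico specifically, here is a minimal sketch (my addition, not part of the original answer) of the same 1-2 ms pulse window expressed on duty_u16()'s 0-65535 scale at 50 Hz; the 1 ms and 2 ms endpoints are assumptions you may need to tune for your servo:

# sketch: map an angle onto duty_u16's 0-65535 range at 50 Hz (20 ms period)
MIN_U16 = 3277   # ~1 ms pulse -> 0 degrees (assumed limit)
MAX_U16 = 6554   # ~2 ms pulse -> 180 degrees (assumed limit)

def angle_to_duty_u16(pos):
    if pos < 0 or pos > 180:
        raise ValueError(pos)
    return int(pos / 180 * (MAX_U16 - MIN_U16) + MIN_U16)

servo.duty_u16(angle_to_duty_u16(90))   # reuses the servo PWM object from the question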
I am working on a project about brain tumor segmentation. To apply N4 bias correction to my files (.mha), I have tried both Slicer and SimpleITK.
Slicer performs well but is time-consuming, because I don't know how to script it to run through all my files; I just use the Slicer N4ITK module and process each file by hand.
Then I tried SimpleITK with Python, and problems show up. First, it runs very slowly on each .mha file and produces a really big file (36.7MB compared with 4.4MB using Slicer) after applying N4 bias field correction. Second, to speed it up I set the shrink factor to 4, but the whole .mha file becomes really blurred, which does not happen with Slicer.
So can anyone tell me whether this is normal? Are there any methods to speed it up without blurring my files? Or could you please show an example of applying N4BiasFieldCorrection from the Slicer Python interactor?
Thanks!!
# -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
from __future__ import print_function
import SimpleITK as sitk
import sys
import os
#from skimage import io
from glob import glob
import numpy as np

def n4process(inputimage, outpath):
    inputImage = sitk.ReadImage( inputimage )
    # numberFittingLevels = 4
    maskImage = sitk.OtsuThreshold( inputImage, 0, 1, 200 )
    # inputImage = sitk.Shrink( inputImage, [ 2 ] * inputImage.GetDimension() )
    # maskImage = sitk.Shrink( maskImage, [ 2 ] * inputImage.GetDimension() )
    inputImage = sitk.Cast( inputImage, sitk.sitkFloat32 )
    corrector = sitk.N4BiasFieldCorrectionImageFilter()
    corrector.SetConvergenceThreshold(0.001)
    corrector.SetBiasFieldFullWidthAtHalfMaximum(0.15)
    corrector.SetMaximumNumberOfIterations([50] * 4)  # one entry per fitting level
    corrector.SetNumberOfControlPoints([4] * inputImage.GetDimension())
    corrector.SetNumberOfHistogramBins(200)
    corrector.SetSplineOrder(3)
    corrector.SetWienerFilterNoise(0.1)
    output = corrector.Execute( inputImage, maskImage )
    sitk.WriteImage( output, outpath )

input_path = '/Users/chenrui/Desktop/BRATS2015_Training/HGG/'
patientpath = glob('/Users/chenrui/Desktop/BRATS2015_Training/HGG/*')
num = 0
for i in patientpath:
    num = num + 1
    #i = '/Users/chenrui/Desktop/BRATS2015_Training/HGG/brats_2013_pat0001_1'
    flair = glob(i + '/*Flair*/*.mha')
    flair_outpath = '/Users/chenrui/Desktop/BRATS2015_Training/test/' + 'Flair/' + str(num) + '.mha'
    n4process(flair[0], flair_outpath)
    t2 = glob(i + '/*T2*/*.mha')
    t2_outpath = '/Users/chenrui/Desktop/BRATS2015_Training/HGG_n4/' + 'T2/' + str(num) + '.mha'
    n4process(t2[0], t2_outpath)
    t1c = glob(i + '/*_T1c*/*.mha')
    t1c_outpath = '/Users/chenrui/Desktop/BRATS2015_Training/HGG_n4/' + 'T1c/' + str(num) + '.mha'
    n4process(t1c[0], t1c_outpath)
    t1 = glob(i + '/*_T1*/*.mha')
    t1 = [scan for scan in t1 if scan not in t1c]
    t1_outpath = '/Users/chenrui/Desktop/BRATS2015_Training/HGG_n4/' + 'T1/' + str(num) + '.mha'
    n4process(t1[0], t1_outpath)
Take a look at the original implementation: http://www.insight-journal.org/browse/publication/640
You can download it and build the example to test on your data. The parameters you set appear to be the same as the defaults, except for WienerFilterNoise, which should be 0.01 unless you've changed it for a reason - is this the blurring issue?
The size difference (an ~8x increase) is probably because you've saved the data out as 64-bit instead of 8-bit or similar; checking the MetaImage header will show this. It can be resolved with casting.
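To illustrate the casting suggestion (my sketch, not the answer's code): record the input's pixel type before the float32 conversion and cast back before writing, so the output file stays roughly the original size. If your SimpleITK is 2.0 or newer, the filter also exposes GetLogBiasFieldAsImage, which lets you fit on a shrunk image but correct the full-resolution one, avoiding the blurring that comes from writing the shrunk result:

# sketch of the casting fix (names as in n4process above)
original_pixel_id = sitk.ReadImage(inputimage).GetPixelID()  # type before the float32 cast
output = corrector.Execute(inputImage, maskImage)
output = sitk.Cast(output, original_pixel_id)                # cast back so the file stays small
sitk.WriteImage(output, outpath)

# optional (SimpleITK >= 2.0): fit on a shrunk copy, correct at full resolution
shrunk = sitk.Shrink(inputImage, [4] * inputImage.GetDimension())
shrunk_mask = sitk.Shrink(maskImage, [4] * inputImage.GetDimension())
corrector.Execute(shrunk, shrunk_mask)
log_bias = corrector.GetLogBiasFieldAsImage(inputImage)
corrected_full = inputImage / sitk.Exp(log_bias)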
When I use sympy to get the square root of 8, the output is ugly:
2*2**(1/2)
import sympy
In [2]: sympy.sqrt(8)
Out[2]: 2*2**(1/2)
Is there any way to make sympy print output in proper mathematical notation (i.e. using the proper symbol for square root)?
UPDATE:
When I follow the suggestions from @pqnet:
from sympy import *
x, y, z = symbols('x y z')
init_printing()
init_session()
I get the following error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-23-21d886bf3e54> in <module>()
2 x, y, z = symbols('x y z')
3 init_printing()
----> 4 init_session()
/usr/lib/python2.7/dist-packages/sympy/interactive/session.pyc in init_session(ipython, pretty_print, order, use_unicode, quiet, argv)
154 # and False means don't add the line to IPython's history.
155 ip.runsource = lambda src, symbol='exec': ip.run_cell(src, False)
--> 156 mainloop = ip.mainloop
157 else:
158 mainloop = ip.interact
AttributeError: 'ZMQInteractiveShell' object has no attribute 'mainloop'
In an IPython notebook you can enable SymPy's graphical math typesetting with the init_printing function:
import sympy
sympy.init_printing(use_latex='mathjax')
After that, sympy will intercept the output of each cell and format it using math fonts and symbols. Try:
sympy.sqrt(8)
See also:
Printing section in the Sympy Tutorial.
The simplest way to do it is this:
sympy.pprint(sympy.sqrt(8))
For me (using rxvt-unicode and ipython) it gives
    ___
2⋅╲╱ 2
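A related option (my addition, not from either answer): calling init_printing once also makes a plain terminal session pretty-print every subsequent result automatically, so you don't have to wrap each expression in pprint:

import sympy
sympy.init_printing(use_unicode=True)   # pretty-print all following results
sympy.sqrt(8)                           # now rendered with the radical symbol, e.g. 2⋅√2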
I was not able to locate complete Yhat documentation to answer this question, so using the R version of ggplot as a guide I've attempted to iteratively back into a solution.
What is the correct syntax for annotating a Python ggplot plot with text in general, and more specifically with a variable from statsmodels (everything works except the last line of the code block below)?
from ggplot import *
ggplot(aes(x='rundiff', y='winpct'), data=mlb_df) +\
geom_point() + geom_text(aes(label='team'),hjust=0, vjust=0, size=10) +\
stat_smooth(method='lm', color='blue') +\
ggtitle('Contenders vs Pretenders') +\
ggannotate('text', x = 4, y = 7, label = 'R^2')
Thanks.
You can use geom_text as a provisional solution:
from ggplot import *
import pandas as pd
dataText = pd.DataFrame({'x': [4], 'y': [7], 'text': ['R^2']})
ggplot(aes(x='rundiff', y='winpct'), data=mlb_df) +\
geom_point() + geom_text(aes(label='team'),hjust=0, vjust=0, size=10) +\
stat_smooth(method='lm', color='blue') +\
ggtitle('Contenders vs Pretenders') +\
geom_text(aes(x='x', y='y', label='text'), data=dataText)
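Since the question mentions pulling the value from statsmodels, here is a hedged sketch of building the R^2 label from an OLS fit; the rundiff/winpct columns and mlb_df come from the question, while the formula call and label formatting are my assumptions:

# sketch: compute R^2 with statsmodels and use it as the annotation text
import pandas as pd
import statsmodels.formula.api as smf

model = smf.ols('winpct ~ rundiff', data=mlb_df).fit()
label = 'R^2 = {:.3f}'.format(model.rsquared)

dataText = pd.DataFrame({'x': [4], 'y': [7], 'text': [label]})
# then pass it to geom_text(aes(x='x', y='y', label='text'), data=dataText) as above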