string = pytesseract.image_to_string(res, lang='eng', config=config)
I am getting the following error:
pytesseract.pytesseract.TesseractError: (255, '')
I am cropping the images and performing some image processing tasks. After that I want to run OCR, but the call throws the error at this line:
string = pytesseract.image_to_string(res, lang='eng', config=config)
I expected the OCR result, but Tesseract raises the error and execution stops.
Sometimes the text lies on the border of the image, so padding the image along the border resolves the issue.
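A minimal sketch of that padding idea, assuming OpenCV is available and that res is the cropped image; the 20-pixel border, the white fill color and the --psm 6 config are arbitrary placeholder choices:
import cv2
import pytesseract

# Pad the cropped image on every side so text touching the border
# gets some whitespace around it before OCR.
padded = cv2.copyMakeBorder(res, 20, 20, 20, 20,
                            cv2.BORDER_CONSTANT,
                            value=(255, 255, 255))  # white border

string = pytesseract.image_to_string(padded, lang='eng', config='--psm 6')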
I have a full Stable Diffusion image-to-image model working on Colab, powered by Gradio. However, it requires an NVIDIA GPU. When I deploy it to Hugging Face Spaces, a runtime error occurs:
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
How do I deploy the model? Is there some other way to deploy it, for example resources for deploying it on AWS or Azure for free?
Here's the main code of the model:
def predict(img, strength, seed, prompt):
    seed = int(seed)
    img1 = np.asarray(img)
    img2 = Image.fromarray(img1)
    init_image = img2.resize((768, 512))
    generator = torch.Generator(device=device).manual_seed(seed)
    with autocast("cuda"):
        image = pipe(prompt=prompt, init_image=init_image, strength=strength, guidance_scale=5, generator=generator).images[0]
    return image
gr.Interface(
    predict,
    title='Image to Image using Diffusers',
    inputs=[
        gr.Image(),
        gr.Slider(0, 1, value=0.05, label="strength (keep it close to 0 to make minimal changes to the image, such as 0.1, 0.2, 0.3)"),
        gr.Number(label="seed (any number, generally 1024. But it's totally random. Change it and see different outputs)"),
        gr.Textbox(label="Prompt, empty by default")
    ],
    outputs=[
        gr.Image()
    ]
).launch()
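For what it's worth, here is a minimal sketch of making the pipeline device-agnostic so the app can at least start on a CPU-only Space (slow, but it avoids the missing NVIDIA driver). The checkpoint id is a placeholder, and note that older diffusers releases take init_image= in the img2img call while newer ones take image=:
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Pick CUDA when available, otherwise fall back to CPU with float32.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=dtype,
).to(device)

# On CPU, drop the autocast("cuda") context and call the pipeline directly.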
In PyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing: img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically, I want the images shown in the Synthetic Camera RGB Data and Synthetic Camera Depth Data windows (especially the latter), which are the camera windows you can see on the left of the following picture.
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]
depthBuffer = img[3]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I already ran the following, so I don't know if it is related to the settings: p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand whether it needs to be treated separately because of the pixel color information or whether I need to work with the projection and view matrices.
I need to save it as a .tiff because I get some "cannot save F to png" errors. I tried playing a bit with the bit depth but accomplished nothing. In case you ask, this is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image:
And to end, just to remark that these depth images keep changing: looking at all of them, then at the RGB images, and then going back to the depth images shows different images, even though they come from the same capture. I have never seen anything like this before.
I thought "I managed to fix this some time ago, might as well post the answer found".
The data structure of img has to be taken into account!
img = p.getCameraImage(224, 224, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
# img[2] holds the RGBA pixels, img[3] the depth buffer, img[4] the segmentation mask.
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])
# Linearize the depth buffer into actual distances using the near/far clip planes.
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')  # drop the alpha channel so the image can be saved as JPEG
rgbim_no_alpha.save('dataset/'+obj_name+'/'+ obj_name +'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+ obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
Final Images:
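If the goal is to store the metric depth itself rather than a matplotlib visualization, here is a possible sketch; the camera pose, IMG_SIZE, near and far values are assumptions for illustration and must match the projection matrix that is actually used:
import numpy as np
import pybullet as p
from PIL import Image

IMG_SIZE = 224
near, far = 0.01, 3.0  # clip planes used below

# Build explicit view/projection matrices so near and far are known.
view = p.computeViewMatrix(cameraEyePosition=[0.5, 0.0, 1.0],
                           cameraTargetPosition=[0.0, 0.0, 0.785],
                           cameraUpVector=[0.0, 0.0, 1.0])
proj = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=near, farVal=far)

img = p.getCameraImage(IMG_SIZE, IMG_SIZE, viewMatrix=view, projectionMatrix=proj,
                       renderer=p.ER_BULLET_HARDWARE_OPENGL)

depth_buffer = np.reshape(img[3], (IMG_SIZE, IMG_SIZE))
depth_m = far * near / (far - (far - near) * depth_buffer)  # distance in metres

# Map 0..far onto the uint16 range; record the scale if absolute values matter.
depth_u16 = np.clip(depth_m / far * 65535, 0, 65535).astype(np.uint16)
Image.fromarray(depth_u16).save('depth_u16.tiff')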
I'm trying to make a script that takes an input image and fades it out. So far this is my script:
imgObject = im.open(imageName)
toAppend = []
for i in range(256):
    imgObject.putalpha(i)
    toAppend.append(imgObject)
    #imgObject.save('images/'+str(i)+'.png', 'PNG')
imgObject.save('finished.gif', save_all=True, append_images=toAppend)
When this is run, the output GIF is just a still of the input with no changes. But if I save each image as a PNG, the transparency works! It saves 255 different images where you can see it fade out. I've also tried stitching these photos together after the fact, but the same or similar problems occurred.
I've also tried this, this, this, and this. All producing the same effect.
I read more into the docs and found this:
imgObject = im.open(imageName)
bpiObject = im.open(backroundName)
toAppend = []
for i in range(100):
    imgObject = im.blend(imgObject, bpiObject, i/100)
    toAppend.append(imgObject)
imgObject.save('finished.gif', save_all=True, append_images=toAppend, loop=0)
This worked for me; the two images do have to have the same dimensions, though.
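If a second photo is not wanted, a possible variation on the same idea (a sketch, assuming Pillow and placeholder file names) is to blend each frame independently against a solid color, which gives a linear fade and lets you set a frame duration:
from PIL import Image

img = Image.open('input.png').convert('RGB')
black = Image.new('RGB', img.size, (0, 0, 0))  # fade target, same size and mode

# Blend from the original (alpha 0) to fully black (alpha 1), one frame per step.
frames = [Image.blend(img, black, i / 100) for i in range(101)]
frames[0].save('faded.gif', save_all=True, append_images=frames[1:],
               duration=40, loop=0)  # 40 ms per frame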
I am making a game with freehand drawing and sprites that animate when they pass over it. So I have to use color detection and trigger an event when the sprite encounters a change in color on the screen it passes over. For this I am using glReadPixels() with RGBA_8888 and GLES20, and I read the value in red/green/blue form, but every time it returns 0 for everything. I tried changing the pixel format and many other trial-and-error attempts, but no success. Can you please help?
My code:
ByteBuffer PixelBuffer = ByteBuffer.allocateDirect(4);
PixelBuffer.order(ByteOrder.nativeOrder());
PixelBuffer.position(0);
int mTemp = 0;
GLES20.glReadPixels(100, 100, 1,1,GLES20.GL_RGBA,GLES20.GL_UNSIGNED_BYTE, PixelBuffer);
byte b[] = new byte[4];
PixelBuffer.get(b);
Log.e("COLOR", "R:" + PixelBuffer.get(0) + PixelBuffer.get(1) + PixelBuffer.get(2));
Result
Logcat : COLOR R: 000.
I tried using a non-black background and having a red color at the screen coordinate provided.
Thanks in advance
I have the following Circos diagram, which I rendered as an SVG file and then converted to PNG, for the purposes of illustration:
The text labels that circle the outer rim are drawn correctly from 12 o'clock to 9 o'clock, oriented outwards, away from the grey arcs.
Between 9 and 12, the text labels are oriented inwards, overlapping the grey arc. This is not expected.
Here's a close-up, to clarify the issue:
If I output a PNG from Circos, instead of SVG, the labels are drawn correctly, but then I lose the ability to mark up the vector-formatted SVG figure in Adobe Illustrator or Inkscape. So I need the SVG output.
Here's a snippet of the circos.conf file relevant to the addition of the labels:
<image>
dir = /tmp
file = circos.png
png = yes
radius = 3000p
background = white
angle_offset = -176
</image>
...
<plots>
<plot>
type = text
color = black
file = factorList.txt
r0 = 1r
r1 = 1r+200p
label_size = 12p
label_font = condensedbold
padding = 0p
rpadding = 0p
label_snuggle = yes
max_snuggle_distance = 1r
snuggle_sampling = 2
snuggle_tolerance = 0.25r
snuggle_link_overlap_test = yes
snuggle_link_overlap_tolerance = 2p
snuggle_refine = yes
</plot>
</plots>
I'm not sure what other options I can apply to try to resolve this. My question is: what should I try, in this or another configuration file, to fix the SVG output? Thanks for your advice.
Maybe you can try this option:
label_rotate = no
I also think the layout above is caused by the snuggle options. Also, check your conf file and make sure that all the "r0" and "r1" values of the plot (type=text) are bigger than 1r, as in the snippet below.
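For example, the text plot block might become something like the following; the exact radii are illustrative, untested values:
<plot>
type = text
color = black
file = factorList.txt
# keep the label track fully outside the ideograms
r0 = 1.05r
r1 = 1.05r+200p
label_rotate = no
label_size = 12p
label_font = condensedbold
</plot>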
Here is the Circos lessons example.