How do I deploy a GPU-enabled model on Hugging Face? - deployment

I have a full Stable Diffusion image-to-image model working on Colab, powered by Gradio. However, it requires an NVIDIA GPU. When I deploy it to Hugging Face Spaces, a runtime error occurs:
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
How do I deploy the model? Is there some other way to deploy it, e.g. some resources for deploying on AWS or Azure for free?
Here's the main code of the model:
import numpy as np
import torch
import gradio as gr
from PIL import Image
from torch import autocast

# `device` and `pipe` are defined earlier in the notebook.
def predict(img, strength, seed, prompt):
    seed = int(seed)
    # Round-trip through numpy to make sure we have a resizable PIL image.
    img1 = np.asarray(img)
    img2 = Image.fromarray(img1)
    init_image = img2.resize((768, 512))
    generator = torch.Generator(device=device).manual_seed(seed)
    with autocast("cuda"):
        image = pipe(prompt=prompt, init_image=init_image, strength=strength,
                     guidance_scale=5, generator=generator).images[0]
    return image

gr.Interface(
    predict,
    title='Image to Image using Diffusers',
    inputs=[
        gr.Image(),
        gr.Slider(0, 1, value=0.05,
                  label="strength (keep it close to 0, e.g. 0.1, 0.2, 0.3, to make minimal changes to the image)"),
        gr.Number(label="seed (any number, generally 1024; it's totally random, change it and see different outputs)"),
        gr.Textbox(label="Prompt, empty by default"),
    ],
    outputs=[
        gr.Image()
    ],
).launch()
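Hugging Face Spaces run on CPU-only hardware by default (GPU hardware is a paid upgrade, or a community grant, assigned in the Space settings), which is why the driver error appears. One workaround is to pick the device at runtime so the same app still runs, slowly, on the free tier. A minimal sketch, assuming the model is loaded with diffusers' StableDiffusionImg2ImgPipeline; the model id and helper name below are illustrative:

import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Fall back to CPU when no NVIDIA driver is available (the free Spaces tier).
device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model id, substitute your own
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

def predict_any_device(prompt, init_image, strength, seed):
    generator = torch.Generator(device=device).manual_seed(int(seed))
    # No autocast here: autocast("cuda") raises on CPU-only hosts.
    # Note: newer diffusers versions renamed `init_image` to `image`.
    return pipe(prompt=prompt, init_image=init_image, strength=strength,
                guidance_scale=5, generator=generator).images[0]

On CPU this takes minutes per image, but it avoids the RuntimeError; for interactive use, assigning GPU hardware to the Space (or deploying to a GPU instance on AWS/Azure) is the practical route.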

Related

How to get depth images from the camera in pyBullet

In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing: img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically: to get the images that are seen in the Synthetic Camera RGB data and Synthetic Camera Depth data windows (especially the latter), i.e. the camera preview windows of the debug visualizer.
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch,
                             cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]    # RGBA pixel data
depthBuffer = img[3]  # non-linear OpenGL depth buffer, values in [0, 1]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest' + str(counter) + '.jpg')
depim.save('test_img/depth' + str(counter) + '.tiff')
counter += 1
I have already run the following, so I don't know if it is related to the settings: p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand if it needs to be treated separately because of the pixel color information, or if I need to work with the projection and view matrices.
I need to save it as a .tiff because I get some "cannot save F to png" errors. I tried playing a bit with the bit information but accomplished nothing. In case you ask, I also tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
The following is an example of the .tiff image.
And to end, just a remark: these depth images keep changing (looking at all of them, then at the RGB, and passing again to the depth images shows different images, despite being the same scene). I have never seen anything like this before.
I thought, "I managed to fix this some time ago, might as well post the answer I found."
The data structure of img has to be taken into account!
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

IMG_SIZE = 224
near, far = 0.01, 100  # example values; must match the camera's clipping planes

img = p.getCameraImage(IMG_SIZE, IMG_SIZE, shadow=False,
                       renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])
# Linearize the non-linear OpenGL depth buffer into actual distances
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.

rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')  # drop alpha before saving as JPEG
rgbim_no_alpha.save('dataset/' + obj_name + '/' + obj_name + '_rgb_' + str(counter) + '.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/' + obj_name + '/' + obj_name + '_depth_' + str(counter) + '.jpg', depth_buffer_opengl)
# plt.show()
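As a side note on the "cannot save F to png" error from the question: PIL cannot write mode-"F" float images as PNG, but the linearized depth can be scaled into the uint16 range first. A minimal sketch; max_depth is an assumed scene-specific cutoff:

import numpy as np
from PIL import Image

def save_depth_uint16(depth_m, path, max_depth=10.0):
    # Normalize metric depth to [0, 1], then scale to the uint16 range;
    # PIL writes 16-bit grayscale ("I;16") images as PNG or TIFF.
    d = np.clip(depth_m / max_depth, 0.0, 1.0)
    Image.fromarray((d * 65535).astype(np.uint16)).save(path)

save_depth_uint16(depth_opengl, 'dataset/depth16_' + str(counter) + '.png')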
Final Images:

SCNNode's flattenedNode behavior in Swift

In a Swift iOS app, I have a main SCNNode containing thousands of nodes which all contain the SAME geometry. I am cloning the nodes using .copy():
let firstNode = SCNNode(geometry: myGeo)
for _ in 0...10000 {
    let newNode = firstNode.copy() as! SCNNode  // copy() returns Any, so cast
    rootNode.addChildNode(newNode)
    // Change position
    ...
}
scene.rootNode.addChildNode(rootNode)
All the nodes are properly displayed, but performance is extremely slow. I am hence using flattenedClone(), hoping to get a single optimized node that efficiently exploits the fact that I am only using one geometry:
// Replaces the previous "scene.rootNode.addChildNode(rootNode)"
let clo = rootNode.flattenedClone()
scene.rootNode.addChildNode(clo)
However the app crashes with the following error:
-[MTLDebugDevice newBufferWithBytes:length:options:], line 644: error 'Buffer Validation
newBufferWith*:length 0x120ba300 must not exceed 256 MB.'
As I am only using one geometry, is it normal that flattenedClone() generates such a huge buffer?

pytesseract.pytesseract.TesseractError: (255, '')

string = pytesseract.image_to_string(res,lang ='eng',config = config)
I am getting an error:
pytesseract.pytesseract.TesseractError: (255, '')
I am cropping the images and performing some image-processing tasks. After that I want to do OCR; on running the OCR, I get the error.
I expected the OCR result, but Tesseract throws an error and stops executing.
Sometimes the text may lie on the border of the image, so padding the image along the border resolves the issue.
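A minimal sketch of that padding with OpenCV before calling Tesseract; the input path and the 10-pixel border width are assumptions:

import cv2
import pytesseract

img = cv2.imread("cropped.png")  # hypothetical cropped input

# Add a white border on every side so the text no longer touches the edge.
padded = cv2.copyMakeBorder(img, 10, 10, 10, 10,
                            borderType=cv2.BORDER_CONSTANT,
                            value=(255, 255, 255))

string = pytesseract.image_to_string(padded, lang='eng')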

How to draw concave shape using Stencil test on Metal

This is the first time I'm trying to use the stencil test. I have seen some examples using OpenGL, and a few on Metal that focus on the depth test instead. I understand the theory behind the stencil test, but I don't know how to set it up on Metal.
I want to draw irregular shapes. For the sake of simplicity, let's consider the following 2D polygon:
I want the stencil to pass where the number of overlapping triangles is odd, so that I can reach something like this, where the white area is the area to be ignored:
I'm doing the following steps in this exact order:
Setting the depthStencilPixelFormat:
mtkView.depthStencilPixelFormat = .stencil8
mtkView.clearStencil = .allZeros
Stencil attachment:
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .stencil8,
    width: drawable.texture.width,
    height: drawable.texture.height,
    mipmapped: true)
textureDescriptor.textureType = .type2D
textureDescriptor.storageMode = .private
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
mainPassStencilTexture = device.makeTexture(descriptor: textureDescriptor)

let stencilAttachment = MTLRenderPassStencilAttachmentDescriptor()
stencilAttachment.texture = mainPassStencilTexture
stencilAttachment.clearStencil = 0
stencilAttachment.loadAction = .clear
stencilAttachment.storeAction = .store
renderPassDescriptor.stencilAttachment = stencilAttachment
Stencil descriptor:
let stencilDescriptor = MTLDepthStencilDescriptor()
stencilDescriptor.depthCompareFunction = MTLCompareFunction.always
stencilDescriptor.isDepthWriteEnabled = true
stencilDescriptor.frontFaceStencil.stencilCompareFunction = MTLCompareFunction.equal
stencilDescriptor.frontFaceStencil.stencilFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthFailureOperation = MTLStencilOperation.keep
stencilDescriptor.frontFaceStencil.depthStencilPassOperation = MTLStencilOperation.invert
stencilDescriptor.frontFaceStencil.readMask = 0x1
stencilDescriptor.frontFaceStencil.writeMask = 0x1
stencilDescriptor.backFaceStencil = nil
depthStencilState = device.makeDepthStencilState(descriptor: stencilDescriptor)
And lastly, I'm setting the reference value and the stencil state in the main pass:
renderEncoder.setStencilReferenceValue(0x1)
renderEncoder.setDepthStencilState(self.depthStencilState)
Am I missing something? The result I get is just as if there were no stencil at all. I can see some differences when changing the settings of the depth test, but nothing happens when changing the settings of the stencil...
Any clue?
Thank you in advance.
You're clearing the stencil texture to 0. The reference value is 1. The comparison function is "equal". So, the comparison will fail (1 does not equal 0). The operation for when the stencil comparison fails is "keep", so the stencil texture remains 0. Nothing changes for subsequent fragments.
I would expect that you'd get no rendering, although depending on the order of your vertices and the front-face winding mode, you may be looking at the back faces of your triangles, in which case the stencil test is effectively disabled. If you don't otherwise care about front vs. back, just set both stencil descriptors the same way.
I think you need to do two passes: first, a stencil-only render; second, the color render governed by the stencil buffer. For the stencil only, you would make the compare function .always. This will toggle (invert) the low bit for each triangle that's drawn over a given pixel, giving you an indication of even or odd count. Because neither the compare function nor the operation involve the reference value, it doesn't matter what it is.
For the second pass, you'd set the compare function to .equal and the reference value to 1. The operations should all be .keep. Also, make sure to set the stencil attachment load action to .load (not .clear).

Is there a limit to the amount of data you can put in a MATLAB pie/pie3 chart?

I have everything going swimmingly on my pie chart and 3D pie charts within MATLAB for a dataset. However, I noticed that even though 21 pieces of data are being fed into the pie-chart call, only 17 appear.
PieChartNums = [Facebook_count, Google_count, YouTube_count, ThePirateBay_count, ...
    StackOverflow_count, SourceForge_count, PythonOrg_count, Reddit_count, ...
    KUmail_count, Imgur_count, WOWhead_count, BattleNet_count, Gmail_count, ...
    Wired_count, Amazon_count, Twitter_count, IMDB_count, SoundCloud_count, ...
    LinkedIn_count, APOD_count, PhysOrg_count];
labels = {'Facebook', 'Google', 'YouTube', 'ThePirateBay', 'StackOverflow', ...
    'SourceForge', 'Python.org', 'Reddit', 'KU-Email', 'Imgur', 'WOWhead', ...
    'BattleNet', 'Gmail', 'Wired', 'Amazon', 'Twitter', 'IMDB', 'SoundCloud', ...
    'LinkedIn', 'APOD', 'PhysOrg'};
pie3(PieChartNums)
legend(labels, 'Location', 'eastoutside', 'Orientation', 'vertical')
This goes for the labels and the physical graph itself.
Excuse the poor formatting in terms of the percentage cluster; this is just a rough version. I tried every orientation, and even splitting the labels between the orientations, without any luck.
Quasi-better resolution for Pie Chart -- Imgur Link
Like Daniel said, it appears that the data for the missing slices simply isn't positive. I tried reproducing your problem with the following initialization, yet it resulted in a normal-looking chart:
[ Facebook_count, Google_count, YouTube_count, ThePirateBay_count, ...
StackOverflow_count, SourceForge_count, PythonOrg_count, Reddit_count, ...
KUmail_count, Imgur_count, WOWhead_count, BattleNet_count, Gmail_count, ...
Wired_count, Amazon_count, Twitter_count, IMDB_count, SoundCloud_count, ...
LinkedIn_count, APOD_count, PhysOrg_count] = deal(0.04);
In order to verify this hypothesis, could you provide the data you're using for the chart? Do you get any warnings when plotting it?
From inside the code of pie.m:
if any(nonpositive)
    warning(message('MATLAB:pie:NonPositiveData'));
    x(nonpositive) = [];
end
and:
for i=1:length(x)
    if x(i)<.01,
        txtlabels{i} = '< 1%';
    else
        txtlabels{i} = sprintf('%d%%',round(x(i)*100));
    end
end
You can see that MATLAB doesn't delete valid slices; it only relabels them when the data values are small.