Drawing on a headless server in Java - PNG

In Java, using the Processing API, we are drawing and rendering to PNG. We do not need to actually display a window; the question is whether there is any way of drawing to an image buffer directly.
Processing directly supports writing to PDF, but PNG is not listed:
size(600, 600, PDF);
Processing also supports an OpenGL renderer. In OpenGL I have seen code that renders to a buffer, but I have never seen that done anywhere in Processing. Is there any way to do it?

Sure. Just draw to an instance of PGraphics.
Specifically, just add the Processing core jar to your classpath; then you can use the PGraphics class from your Java server code to create an image and do whatever you want with it.
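Here is a minimal sketch of that approach, assuming Processing 3.x with core.jar on the classpath (the names HeadlessPng and output.png are mine, and the setup sequence differs between Processing versions). Note that the default Java2D renderer still initializes AWT, so on a truly display-less server you may also need a virtual framebuffer such as Xvfb.

    import processing.core.PApplet;
    import processing.core.PGraphics;

    // Renders into an offscreen PGraphics buffer and saves it as a PNG.
    public class HeadlessPng extends PApplet {
        @Override
        public void settings() {
            size(600, 600);  // sketch size; required before setup() in Processing 3.x
        }

        @Override
        public void setup() {
            PGraphics pg = createGraphics(600, 600);  // offscreen Java2D buffer
            pg.beginDraw();
            pg.background(255);
            pg.stroke(0);
            pg.ellipse(300, 300, 200, 200);
            pg.endDraw();
            pg.save("output.png");  // PGraphics extends PImage, so save() writes the file
            exit();
        }

        public static void main(String[] args) {
            PApplet.main(HeadlessPng.class.getName());
        }
    }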

Related

How to draw a Pixmap with partial transparency in GTK application

I'm just getting started with Mono programming using GTK, and have been pleasantly surprised. However, I have come across a hurdle I haven't been able to get over yet.
In the app I'm working on, I am able to load a JPEG image into a Pixmap and draw it to my GUI's Drawing Area. That works fine. However, I want to be able to take a second JPEG image, make it partially transparent, and draw it over the first. So far, I haven't been able to figure out a decent way to do this.
Is it somehow possible to change the alpha value of an entire Pixmap before I draw it? I'm not sure where to go from here.
If you're using GtkDrawingArea, you should be using Cairo to do the drawing itself. As an alternative to cairo_paint(), there is cairo_paint_with_alpha(), which lets you specify the opacity you wish to paint with.
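Since that answer is in terms of Cairo's C API, here is the same draw-with-opacity idea sketched in Java2D (the language of the main question on this page), purely as an illustration of the concept; the file names are hypothetical.

    import java.awt.AlphaComposite;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    // Draws a second image over a first at 50% opacity, the same effect
    // cairo_paint_with_alpha() produces in a Cairo context.
    public class AlphaOverlay {
        public static void main(String[] args) throws Exception {
            BufferedImage base = ImageIO.read(new File("base.jpg"));        // hypothetical input
            BufferedImage overlay = ImageIO.read(new File("overlay.jpg"));  // hypothetical input

            Graphics2D g = base.createGraphics();
            // Everything drawn after this call is blended at 50% opacity.
            g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
            g.drawImage(overlay, 0, 0, null);
            g.dispose();

            ImageIO.write(base, "png", new File("combined.png"));
        }
    }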

Previewing OpenGLES Render Buffers under Android

I've started to play a bit with Frame Buffer Objects and Render Buffers in OpenGL ES. One thing that bugs me is that I'm not able to see what data is currently in my Render Buffer instance, or simply put, what I've drawn into it. I know that I could draw my data into a texture and then sample it onto a rectangle, but I don't want to do that. Is anybody aware of some sort of plugin, preferably an Eclipse plugin, or possibly a standalone application, that would present me with the graphical data of the Render Buffer of my choice?
I will answer my own question: there are some tools dedicated to NVIDIA Tegra chipsets that are really helpful when dealing with problems in the OpenGL scope (PerfHUD ES, for example).
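Short of a dedicated tool, a common workaround is to read the attached buffer back with glReadPixels while the FBO is bound and dump it to a PNG for inspection. A sketch using Android's standard GLES20 bindings (the class and method names are my own; it must run on the GL thread):

    import java.io.FileOutputStream;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import android.graphics.Bitmap;
    import android.opengl.GLES20;

    // Debug helper: while the framebuffer object under inspection is bound,
    // reads its color attachment back to the CPU and saves it as a PNG.
    // Note: the result is vertically flipped relative to screen coordinates.
    public class FboDump {
        public static void dump(int width, int height, String path) throws Exception {
            ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                    .order(ByteOrder.nativeOrder());
            GLES20.glReadPixels(0, 0, width, height,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

            Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            pixels.rewind();
            bitmap.copyPixelsFromBuffer(pixels);  // RGBA bytes match ARGB_8888 layout

            try (FileOutputStream out = new FileOutputStream(path)) {
                bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
            }
        }
    }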

OpenGL 2D editor?

I want to draw different 2D objects in OpenGL, for example a path or road. Is there a program I could use to draw them in a GUI and then export them as points, so I could use them in my program?
I have personally used Inkscape to do this. If you save your data as SVG, then any standard XML parsing library should make it relatively easy to extract your data. Even better, you might even find an SVG parsing library that will make it even easier. I created one in Python, based on the work of Martin O'Leary of supereffective:
http://pypi.python.org/pypi/svgbatch
It's very fragile and incomplete (it barfs on SVG elements it doesn't recognise), but if you stick to the SVG elements it does recognise (closed polygon paths, no curves) then it works, and it might help you put together one of your own.
Somewhat heavy-handed, but you could use Inkscape to create SVG files and then just parse out the path vertices.
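To sketch what "parse out the path vertices" looks like with a standard XML parser, here is a minimal Java example. The file name is hypothetical, and like the library above it only understands straight-line path data (absolute M/L commands, no curves, no transforms).

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Prints the coordinates found in each <path> element's "d" attribute.
    public class SvgPathPoints {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new File("drawing.svg"));
            NodeList paths = doc.getElementsByTagName("path");
            for (int i = 0; i < paths.getLength(); i++) {
                String d = ((Element) paths.item(i)).getAttribute("d");
                // e.g. d = "M 10,20 L 30,40 50,60 Z"
                for (String token : d.split("[\\s,]+")) {
                    if (token.isEmpty() || token.matches("[A-Za-z]")) {
                        continue;  // skip command letters such as M, L, Z
                    }
                    System.out.print(token + " ");  // x and y values alternate
                }
                System.out.println();
            }
        }
    }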

Fastest iPhone Blit Routine?

I have a UIView subclass onto which I need to blit a UIImage. There are several ways to skin this cat depending on which series of APIs you prefer to use, and I'm interested in the fastest. Would it be UIImage's drawAtPoint or drawRect? Or perhaps the C-based CoreGraphics routines, or something else? I have no qualms about altering my source image data format if it'll make the blitting that much faster.
To describe my situation my app has anywhere from ~10 to ~200 small UIViews (64x64), a subset of which will need to be redrawn based on user interaction. My current implementation is a call to drawAtPoint inside my UIView subclass' drawRect routine. If you can think of a better way to handle this kind of scenario, I'm all ears (well, eyes).
Using an OpenGL view may be fastest of all. Keep a cache of images aged by last use (or, if you know a better way to determine when certain images can be removed from the cache, by all means use that) and preload as many images as you can while the app is idle. It should be very quick, with almost no Objective-C calls involved (just -draw).
While not a "blit" at all, given the requirements of the problem (many small images with various state changes), I kept the different states to redraw in their own separate UIImageView instances and just showed/hid the appropriate instance when the state changed.
Since CALayer is lightweight and fast, I would give it a try.
Thierry
The fastest blit implementation you are going to find is in my AVAnimator library; it contains an ARM asm implementation of a blit for a CoreGraphics buffer, so have a look at the source. The way you could make use of it would be to create a single graphics context the size of the whole screen, blit your specific image changes into that single graphics context, then create a UIImage from it and set it as the image of a UIImageView. That involves one GPU upload per refresh, so the cost will not depend on how many images you render into the buffer.
But you will likely not need to go that low level. You should first try making each 64x64 image into a CALayer, then update each layer with the contents of an image that is exactly the size of the layer (64x64). The only tricky thing is that you will want to decompress each of your original images if they come from PNG or JPEG files. You do that by creating another pixel buffer and rendering the original image into it, so that all the PNG or JPEG decompression is done before you start setting CALayer contents.
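The CoreGraphics and CALayer specifics above do not translate directly into Java, but the structure of the single-buffer approach does. As an illustration in Java2D, to match the rest of this page: composite every dirty tile into one screen-sized buffer, then hand that single image to the display layer once per refresh (the Tile class and dimensions here are hypothetical).

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.util.List;

    // Composites all changed 64x64 tiles into one screen-sized image, so
    // the display layer receives a single image (one upload) per refresh.
    public class TileCompositor {
        private final BufferedImage screen =
                new BufferedImage(320, 480, BufferedImage.TYPE_INT_ARGB);

        // Hypothetical holder for a tile image and its screen position.
        public static final class Tile {
            final BufferedImage image;
            final int x, y;
            Tile(BufferedImage image, int x, int y) {
                this.image = image; this.x = x; this.y = y;
            }
        }

        public BufferedImage refresh(List<Tile> dirtyTiles) {
            Graphics2D g = screen.createGraphics();
            for (Tile t : dirtyTiles) {
                g.drawImage(t.image, t.x, t.y, null);  // blit only what changed
            }
            g.dispose();
            return screen;  // one composited image per refresh
        }
    }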

Generating graphics at runtime with Cocos2D - How to display?

I'm trying to create dynamic graphics for my game, which I'm building with Cocos2D. The graphics generation will occur at predictable, finite points, such as level loading. I'm having a hard time figuring out how to actually draw this at runtime. From what I can tell, the easiest way would be to draw into a PNG file at runtime and then load an AtlasSprite based on the PNG file, but I can't seem to figure out if this is indeed the best way or how to go about doing it. Any suggestions?
I'm not sure how Cocos2D loads Sprites or Atlases, so this is a more general answer.
It might be worth taking a look at the Texture2D class that comes with the old CrashLanding example app. It uses a bitmap graphics context to generate a texture of a string for drawing with OpenGL. The code uses the CGBitmapContextCreate function to create a context. You can draw whatever you want onto it.
Then once you've finished drawing, you can either save the file as a PNG or you can call glTexImage2D on the data to use it with OpenGL.
There's more information about it in the Graphics and Drawing documentation, specifically the section Creating and Drawing Images.
Edit: It looks like Cocos2D comes with Texture2D, so you should be in good shape. Check out the initWithString method here.
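The answer above is iOS-specific, but the same draw-into-a-bitmap-context-then-glTexImage2D flow can be shown in Java using Android's standard APIs, with Bitmap and Canvas standing in for CGBitmapContextCreate, and GLUtils.texImage2D wrapping glTexImage2D. This is an illustration of the technique, not Cocos2D code; the class and method names are my own.

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.opengl.GLES20;
    import android.opengl.GLUtils;

    // Generates a texture at runtime: draws into an offscreen bitmap with a
    // 2D canvas, then uploads the pixels to a new GL texture. Must be called
    // on a thread with a current GL context.
    public class RuntimeTexture {
        public static int createTextTexture(String text) {
            Bitmap bitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(bitmap);  // plays the role of CGBitmapContext
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(Color.WHITE);
            paint.setTextSize(32f);
            canvas.drawText(text, 8f, 44f, paint);

            int[] tex = new int[1];
            GLES20.glGenTextures(1, tex, 0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);  // upload pixels
            bitmap.recycle();
            return tex[0];
        }
    }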