In my iPhone app, I'd like to draw a few formulas. How can I manage that with Quartz 2D? Is there a way to build formulas like, for example, in LaTeX? Or are there any existing frameworks? Thanks.
As the developer of an iPhone application which does just that, trust me when I say typesetting equations is not a trivial undertaking. In my case, I used Core Animation layers to construct the sub-elements of a parsed equation. The equations are constructed hierarchically, and the operations that compose them are laid out as such. Each operation is contained within its parent operation's layer, and laid out following the rules of that particular operation.
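To make the hierarchy concrete, here is a sketch of what such a parsed-equation tree might look like, written in C++ for illustration only (the app itself uses Objective-C and CALayer; the EquationNode type and its layout rule are hypothetical):

    #include <algorithm>
    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical node in a parsed equation tree. Each operation owns its
    // operands and computes its own bounds from theirs, mirroring the way
    // each sub-layer is laid out inside its parent operation's layer.
    struct EquationNode {
        std::string symbol;                                // e.g. "+", "frac", "x"
        std::vector<std::unique_ptr<EquationNode>> children;
        double width = 0, height = 0;                      // computed bounds

        // Lay out children first, then size this node around them.
        // (A real version would measure leaf glyphs; omitted here.)
        void layout() {
            for (auto& child : children) child->layout();
            if (symbol == "frac" && children.size() == 2) {
                // Fraction: numerator stacked above denominator.
                width  = std::max(children[0]->width, children[1]->width);
                height = children[0]->height + children[1]->height;
            } else {
                // Default rule: children placed side by side.
                for (auto& child : children) {
                    width += child->width;
                    height = std::max(height, child->height);
                }
            }
        }
    };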
For rendering the visual elements of the operations within an equation, I used Quartz to draw lines, symbols, etc., but most of the drawing was simply text drawn within a CALayer using the NSString text drawing extensions.
I did override the standard CALayer rendering architecture for the generation of PDFs from these equations, because CALayers don't render as vector elements by default. For an example of how this rendering works in my application, see the open source Core Plot project, which does the same thing at its base level.
I do output to LaTeX from the equations, which is pretty simple once you've parsed them into a hierarchical data structure, but parsing them from LaTeX into that structure is proving a little trickier.
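For illustration, the LaTeX output side can be a simple recursive walk over that tree. This sketch reuses the hypothetical EquationNode type from above and handles only leaves, fractions, and infix operators:

    #include <string>

    // Recursively serialize the (hypothetical) equation tree to LaTeX.
    // Going the other way - parsing LaTeX back into the tree - is the
    // harder direction, since LaTeX input is far less constrained.
    std::string toLatex(const EquationNode& node) {
        if (node.children.empty()) return node.symbol;     // leaf: variable/number
        if (node.symbol == "frac" && node.children.size() == 2)
            return "\\frac{" + toLatex(*node.children[0]) + "}{" +
                   toLatex(*node.children[1]) + "}";
        // Infix operators: interleave the symbol between the operands.
        std::string out = toLatex(*node.children[0]);
        for (size_t i = 1; i < node.children.size(); ++i)
            out += " " + node.symbol + " " + toLatex(*node.children[i]);
        return out;
    }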
For simple text equation input and evaluation, you might find Graham Cox's GCMathParser to be of use.
There's no real TeX engine implemented for Cocoa Touch that I have heard of, but there is a lightweight JavaScript formula-layout engine: jsMath.
I'm trying to figure out a good way to programmatically generate contours describing a 2D surface from a 3D STEP model. The application is generating NC code for a laser-cutting program from a 3D model.
Note: it's easy enough to do this in a wide variety of CAD systems. I am writing software that needs to do it automatically.
For example, a STEP model of a part needs to become a vector file, like an SVG or a DXF.
Perhaps the most obvious way of tackling the problem is to parse the STEP model and run some kind of algorithm to detect planes and select the largest as the cut surface, then generate the contour. Not a simple task!
I've also considered using a pre-existing SDK to render the model with an orthographic camera, capturing a high-res image, and then operating on it to generate the appropriate contours. This method would work, but it would be CPU-heavy, and its accuracy would be limited to the pixel resolution of the rendered image - not ideal.
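For concreteness, the image-based route could look something like this with OpenCV (a sketch only; the threshold value and the assumption of a grayscale render with the part distinguishable from the background are placeholders):

    #include <vector>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>

    // Extract part outlines from an orthographic render of the model.
    // Accuracy is limited by the render resolution, as noted above.
    std::vector<std::vector<cv::Point>> extractContours(const char* renderPath) {
        cv::Mat img = cv::imread(renderPath, cv::IMREAD_GRAYSCALE);
        cv::Mat mask;
        cv::threshold(img, mask, 128, 255, cv::THRESH_BINARY);  // part vs. background
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        std::vector<std::vector<cv::Point>> simplified;          // reduce to polygons
        for (const auto& c : contours) {
            std::vector<cv::Point> poly;
            cv::approxPolyDP(c, poly, 1.0, true);
            simplified.push_back(poly);
        }
        return simplified;
    }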
This is perhaps a long shot, but does anyone have thoughts about this? Cheers!
I would use a CAD library to load the STEP file (not a CAD API), look for the planar face with the highest number of edge curves in the face loop, and transpose those curves onto the XY plane. Afterward, finding the 2D geometry's min/max for centering etc. would be pretty easy.
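A rough sketch of that approach using the open-source Open CASCADE kernel as the CAD library (my choice for illustration; the face-selection heuristic is the one described above, and error handling is omitted):

    #include <BRepAdaptor_Surface.hxx>
    #include <STEPControl_Reader.hxx>
    #include <TopExp_Explorer.hxx>
    #include <TopoDS.hxx>
    #include <TopoDS_Face.hxx>

    // Load a STEP file and return the planar face with the most edges,
    // which we take as the cut surface for contour generation.
    TopoDS_Face findCutFace(const char* path) {
        STEPControl_Reader reader;
        reader.ReadFile(path);
        reader.TransferRoots();                    // translate to a shape
        TopoDS_Shape shape = reader.OneShape();

        TopoDS_Face best;
        int bestEdgeCount = -1;
        for (TopExp_Explorer f(shape, TopAbs_FACE); f.More(); f.Next()) {
            TopoDS_Face face = TopoDS::Face(f.Current());
            BRepAdaptor_Surface surface(face);
            if (surface.GetType() != GeomAbs_Plane) continue;  // planar faces only
            int edgeCount = 0;
            for (TopExp_Explorer e(face, TopAbs_EDGE); e.More(); e.Next())
                ++edgeCount;
            if (edgeCount > bestEdgeCount) { bestEdgeCount = edgeCount; best = face; }
        }
        return best;   // caller would then project its edges onto the XY plane
    }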
Depending on the programming language you are using, I would search for "CAD control" or "CAD component" on Google, combining it with "STEP import".
I use MATLAB to render a complex mesh (using trimesh, material, camlight, view, ...) and don't need to display it to the user, just to get the rendered image. This is discussed in another question.
Using any of the suggested solutions (saving as an image, saving into a video object, or using the undocumented hardcopy) is very slow (~1 sec), especially compared to rendering the plot itself, which, including painting on the screen, takes less than 0.5 sec.
I believe this is caused by the hardcopy method not utilizing the GPU, while rendering the original plot for display does use the GPU; using the GPU-Z monitoring software, I see the GPU working during plotting but not during hardcopy.
The figure uses 'opengl' as its renderer, but hardcopy, which is the underlying implementation of all the suggested methods, doesn't seem to respect this...
Any suggestion on how to configure it to use the GPU?
EDIT: following this thread, I've moved to using the following, but GPU usage is still a flatline.
    cdata = hardcopy(f, '-Dopengl', '-r0')
Being used to MATLAB and its great vector-graphics drawing capabilities, I am looking for something similar in OpenCV. OpenCV's drawing functions seem to rasterize lines and points at the pixel level. Currently, I dump the data to text, copy-paste it into MATLAB, and do all the plots there. I have also thought about using the MATLAB Engine to pass it the parameters and run the plots, but it seems like too much of a mess for a simple debugging operation.
I want to be able to do the following:
Zoom in, out of the image
Draw a line/point that is re-rasterized each time I zoom, like in MATLAB.
So far, I have found the Image Watch plugin to take care of zooming, but it does not help with the second part.
Any ideas?
OpenCV has a lot of capabilities for processing an image but only minimal ones for displaying the result. It has nothing that can display vector graphics like MATLAB does. When I need to see polygons on an image (or just polygons), I dump them to a file and use a third-party viewer (usually the Giv viewer).
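To illustrate that workflow, here is a minimal sketch (not part of the original answer) that dumps polygons - for example, output from cv::findContours - to an SVG file, which any vector viewer re-rasterizes at every zoom level:

    #include <fstream>
    #include <string>
    #include <vector>
    #include <opencv2/core.hpp>

    // Write a set of polygons as an SVG file for viewing in a vector-capable
    // viewer; the polylines stay crisp at any zoom level.
    void writeSvg(const std::string& path,
                  const std::vector<std::vector<cv::Point>>& polygons,
                  int width, int height) {
        std::ofstream out(path);
        out << "<svg xmlns='http://www.w3.org/2000/svg' width='" << width
            << "' height='" << height << "'>\n";
        for (const auto& poly : polygons) {
            out << "<polyline fill='none' stroke='red' points='";
            for (const auto& p : poly) out << p.x << "," << p.y << " ";
            out << "'/>\n";
        }
        out << "</svg>\n";
    }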
I'd like to reproject the images from this map: http://earthquake.usgs.gov/regional/nca/soiltype/map/ from a linear projection into a Leaflet map. The source tiles are at known but nonstandard zoom levels, and Leaflet maps want Mercator XYZ tiles. In principle, I know how to do this - I have functions for converting XY coordinates into lat-lng coordinates in the two maps, and I just need to map pixels for the target map in terms of pixels in the source map.
This is unfortunately nontrivial, as the source pixels are spread across hundreds of different image files, and I am trying to put them into hundreds more. Is there a software package that makes this a little more straightforward? If there is no library for dealing with this kind of data, it seems like there really should be one...
PostGIS has the RT_ST_Transform method, which under the hood uses gdalwarp. So you have at least these two options. If you use PostGIS, you will need to actually register/import the images into PostGIS using raster2pgsql, then call RT_ST_Transform on each one and dump them out again -- which could be scripted to some extent using PL/pgSQL (Postgres's scripting language). There is something of a learning curve involved in using PostGIS raster, which may be worthwhile if you plan to do any other image-processing analysis. You could also write a shell script (or similar) to automate gdalwarp if you don't wish to go the PostGIS route.
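If you take the gdalwarp route, the same warp can also be driven programmatically through GDAL's utility API rather than a shell script. A sketch, assuming the source tiles carry georeferencing and targeting the Web Mercator (EPSG:3857) projection Leaflet's standard XYZ tiles expect:

    #include <gdal_priv.h>
    #include <gdal_utils.h>

    // Reproject one source tile to EPSG:3857. Equivalent to running:
    //   gdalwarp -t_srs EPSG:3857 in.tif out.tif
    int warpTile(const char* src, const char* dst) {
        GDALAllRegister();
        GDALDatasetH in = GDALOpen(src, GA_ReadOnly);
        if (!in) return 1;

        char* argv[] = {const_cast<char*>("-t_srs"),
                        const_cast<char*>("EPSG:3857"), nullptr};
        GDALWarpAppOptions* opts = GDALWarpAppOptionsNew(argv, nullptr);
        GDALDatasetH out = GDALWarp(dst, nullptr, 1, &in, opts, nullptr);

        GDALWarpAppOptionsFree(opts);
        GDALClose(in);
        if (out) GDALClose(out);
        return out ? 0 : 1;
    }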
For a less formal method than gdalwarp (an excellent program), you can check out the Leaflet plugin Leaflet.imageTransform, which can transform an image on the fly in the browser.
I have already asked this question on Scientific Computing and wondered if this forum could offer alternatives.
I need to simulate the movement of a large number of agents undergoing soft body deformation. The processes that govern the agents' movement are complex and so the entire process requires parallelisation.
The simulation needs to be visualised in 3D. As I will be running this simulation across many different nodes (MPI or even MPI+GPGPU) I do not want the visualisation to run in real time, rather the simulation should output a video file after it is finished.
(I'm not looking for awesome AAA video-game-quality graphics; in addition, the movement code will take up enough CPU time that I don't want to further slow the application down by adding heavyweight rendering code.)
I think there are three ways of producing such a video:
Write raw pixel information to BMPs and stitch them together - I have done this in 2D, but I don't know how this would work in 3D.
Use an offline analogue of OpenGL/Direct3D, rendering to a buffer instead of the screen (see the sketch after this list).
Write some sort of telemetry data to a file, indicating each agent's position, deformation, etc. for each time interval, and then, after the simulation has finished, use it as input to an OpenGL/Direct3D program.
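For option 2, the usual technique is to render into a framebuffer object and read the pixels back, then encode the frames. A minimal sketch of that readback step (it assumes a GL context already exists, e.g. via a hidden window or an EGL pbuffer, and that GLEW has been initialized):

    #include <GL/glew.h>
    #include <cstdio>
    #include <vector>

    // Render one frame offscreen into a framebuffer object and dump it as
    // a binary PPM; the numbered frames can then be stitched into a video
    // (e.g. with ffmpeg).
    void renderFrameToFile(int w, int h, const char* path) {
        GLuint fbo, color;
        glGenFramebuffers(1, &fbo);
        glGenRenderbuffers(1, &color);
        glBindRenderbuffer(GL_RENDERBUFFER, color);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, w, h);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, color);

        glViewport(0, 0, w, h);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the agents for this timestep here ...

        std::vector<unsigned char> pixels(3 * (size_t)w * h);
        glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

        FILE* f = std::fopen(path, "wb");
        std::fprintf(f, "P6\n%d %d\n255\n", w, h);
        std::fwrite(pixels.data(), 1, pixels.size(), f);  // rows come back bottom-up
        std::fclose(f);

        glDeleteRenderbuffers(1, &color);
        glDeleteFramebuffers(1, &fbo);
    }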
This problem MUST have been solved before - there's plenty of visualisation in HPC.
In summary: how does one easily render to a video in an offline manner (very basic graphics, not Toy Story - I just need 3D blobs) without impacting performance in a big way?
My idea would be to store the different states/positions of the vertices as single frames of a vertex animation in a suitable file format. A suitable format would be COLLADA, which is an intermediate format for 3D scenes based on XML, so it can be easily parsed and written with general-purpose XML libraries. There are also special-purpose libraries for COLLADA, like COLLADA DOM and pycollada. The COLLADA file containing the vertex animation could then be rendered directly to a video file with the rendering software of your choice (3D Studio Max, Blender, Maya, ...).