Can a cairo_surface be attached to a wl_shell_surface?

I am using a cairo surface (cairo_surface_t) and a wl_shell_surface to display an object on a Wayland client's surface. Could you explain the difference between these surface types? Are they required to exist together, or are they separate kinds of surface? Thanks for your help.

They are separate kinds of object that work together. A cairo surface (cairo_surface_t) is a drawing target: cairo is strong at rendering 2D content (for example text and shapes) onto it. A wl_shell_surface, on the other hand, is a Wayland protocol object that gives a window role to a wl_surface; it does not draw anything itself. In a typical client you render with cairo into a buffer and then attach that buffer to the Wayland surface.
Refer to https://www.cairographics.org/
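As a minimal sketch of the cairo half of that pairing, assuming the pycairo bindings (the Wayland side, wl_shm and wl_surface.attach/commit, needs a full client and is only noted in comments):

```python
# Minimal pycairo sketch: draw 2D content into an image surface whose raw
# pixel buffer is the kind of data a Wayland shared-memory wl_buffer wraps.
# The Wayland attach/commit step is omitted here.
import cairo

WIDTH, HEIGHT = 256, 256

# FORMAT_ARGB32 matches the WL_SHM_FORMAT_ARGB8888 layout commonly used
# for Wayland shared-memory buffers.
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(surface)

# The kind of 2D work cairo is good at: shapes and text.
ctx.set_source_rgb(0.2, 0.4, 0.8)
ctx.rectangle(32, 32, 192, 192)
ctx.fill()
ctx.set_source_rgb(1, 1, 1)
ctx.move_to(48, 128)
ctx.show_text("hello wayland")

surface.flush()
pixels = surface.get_data()  # in a real client, a wl_buffer would wrap this memory
```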

Related

How to create a triangular support in CadQuery?

I need to create a right angle with a triangular support, as seen in the example below (which I created in Blender). I know how to create the right angle, but I don't really know how to create the triangular support. The only way I was able to think of would be to create a 2D polygon forming the outer face of the object where the support is, extrude it to the thickness of the support, and then extrude the rest to complete the object. This, however, seems clumsy and against the idea of CQ, where the code should be (kind of) similar to how a human would describe such an object.
Is it possible to create the right angle first and then add the support? How?
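For what it's worth, here is a hedged CadQuery sketch of the polygon-and-extrude approach described in the question; all names and dimensions are made-up placeholders, and there may well be a more idiomatic way:

```python
# Sketch of the "draw the outer face as a 2D polygon, then extrude" idea.
# Dimensions are placeholders: L = leg length, H = leg height,
# T = wall thickness, S = size of the triangular support.
import cadquery as cq

L, H, T, S = 40.0, 40.0, 5.0, 20.0

# Profile of the right angle plus the triangular support, drawn in one
# plane and then extruded to the support's thickness.
result = (
    cq.Workplane("XZ")
    .polyline([
        (0, 0), (L, 0), (L, T),      # bottom leg
        (T + S, T), (T, T + S),      # hypotenuse of the support
        (T, H), (0, H),              # vertical leg
    ])
    .close()
    .extrude(T)
)
```

An alternative that reads more like "make the right angle first, then add the support" would be to build the plain L-shaped profile on its own and then .union() a separate triangular wedge onto it.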

Break a 3D object into two parts

I need to break a 3D object into two parts such that the "connection surface" looks realistic. I know there are plenty of cell-fracture add-ons in Blender, but they all create many new flat pieces. I need the opposite: only two resulting parts, but with many triangles in the break surface.
I wonder if there is any simple functionality for this in Blender, 3ds Max, or directly in Unity. I am also thinking of first creating the surface (e.g. a Bezier surface) and then somehow "cutting" the object by intersecting this surface with it.
I am pretty new to 3D graphics and not familiar with the tools and engines, so it doesn't matter to me which tool I use; I just need to perform this procedure. Any suggestions? Thanks
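For what it's worth, here is a hedged Blender Python sketch of exactly that "cut with a noisy surface" idea, using a displaced plane as a boolean cutter; the object name and all parameter values are assumptions:

```python
# Sketch: split the object named "Target" into two parts along a noisy
# surface. A subdivided plane is displaced with a noise texture, turned
# into a closed slab with Solidify, and used as a boolean cutter twice.
# Run inside Blender; "Target" and all numbers are placeholders.
import bpy

target = bpy.data.objects["Target"]  # assumed name

# Build the noisy cutting surface.
bpy.ops.mesh.primitive_plane_add(size=4.0, location=target.location)
cutter = bpy.context.active_object
sub = cutter.modifiers.new("Subdiv", type='SUBSURF')
sub.subdivision_type = 'SIMPLE'
sub.levels = 5
noise = bpy.data.textures.new("BreakNoise", type='CLOUDS')
disp = cutter.modifiers.new("Noise", type='DISPLACE')
disp.texture = noise
disp.strength = 0.3
solid = cutter.modifiers.new("Slab", type='SOLIDIFY')
solid.thickness = 10.0  # thick enough to cover everything below the cut

# One copy keeps the material outside the slab, the other the inside.
part_b = target.copy()
part_b.data = target.data.copy()
bpy.context.collection.objects.link(part_b)

for obj, op in ((target, 'DIFFERENCE'), (part_b, 'INTERSECT')):
    mod = obj.modifiers.new("Break", type='BOOLEAN')
    mod.object = cutter
    mod.operation = op

cutter.hide_set(True)  # apply the booleans and delete the cutter when happy
```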

How to identify Points Outside a 3D mesh in Matlab

I have two large data sets, one that represents the outside of an object, and one that represents the fluid flow inside the object. I am worried that with the mesh I have, some of the data might be misrepresented, or not modeled well, and lies outside the first data set.
In Matlab, I used trisurf to create a mesh from the first data set and was curious whether there is a way to check for points outside the mesh. I've seen the 2D version, inpolygon, and some threshold functions, but the surface is not very regular and those don't really account for meshes. Thanks for the help!
You didn't specify what kind of data/format your object is defined as. If, for example, you have a Delaunay tetrahedralization/mesh of your object (if not, you can use delaunay to create one from a point cloud), you can use the tsearchn function to determine whether points are inside or outside the object (mesh).
https://www.mathworks.com/help/matlab/ref/tsearchn.html
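For readers outside MATLAB, the same delaunay + tsearchn idea translates to Python with scipy, where Delaunay.find_simplex plays the role of tsearchn and returns -1 for points outside the triangulation. Note that, like tsearchn, this tests against a tetrahedralization that fills the convex hull, so it is only exact for convex objects:

```python
# Python analogue of delaunay + tsearchn: tetrahedralize the object's
# point cloud, then ask which flow points fall outside of it.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
object_points = rng.random((500, 3))       # stand-in for the object's surface data
flow_points = rng.random((100, 3)) * 1.2   # stand-in for the flow data

tri = Delaunay(object_points)
inside = tri.find_simplex(flow_points) >= 0  # -1 means "outside"

print(f"{np.count_nonzero(~inside)} of {len(flow_points)} points lie outside")
```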

Why do 3D games need to separate a material into so many textures for a static object?

Perhaps the question is not quite right; the textures are really more like channels, although I know they are eventually mixed together in the shader.
I know that understanding the various textures is very important, but it is also a bit hard to grasp completely.
From my understanding:
diffuse - the 'real' color of an object, without lighting involved.
light - for static objects; lighting is rendered (baked) into the texture beforehand.
specular - marks the areas that have direct, mirror-like reflection.
ao - ambient occlusion; how much indirect light each area of the object receives.
alpha - transparency, used to 'shape' the object.
emissive - self-illumination.
normal - per-pixel normal vectors used when computing lighting.
bump - (I don't know the exact difference from a normal map).
height - stores height (Z) values; used to generate terrain, displace vertices, etc.
And the items below are related to PBR materials, which I'm not familiar with:
translucency / cavity / metalness / roughness etc...
Please correct me if I have misunderstood something.
In any case, my question is: why do we need to separate these textures for a material, rather than just rendering them all together into the diffuse map directly for a static object?
Some examples would be appreciated (especially for PBR). Thank you very much.
If I can bake everything into the diffuse map beforehand and apply it to my mesh, why do I need to apply so many different textures?
Re-usability:
Most games re-use textures to reduce the size of the game, which you can't do if you combine them. For example, when you have two similar objects but want to randomize their looks (an aging effect), you can make them share the same color (albedo) map but use different AO maps. This becomes important when there are hundreds of objects: you can use different combinations of texture maps on similar objects to create unique objects. If you had combined everything into one texture, it would be impossible to share it between similar objects that are meant to look slightly different.
Customizable:
If you separate them, you can change how strongly each texture affects the object. For example, the slider on the metallic slot of the Standard shader. There are more of these sliders on other map slots, but they only appear once you plug a texture into the slot. You can't do this when the textures are combined into one.
Shader:
The Standard shader can't work that way, so you would have to learn to write shaders: you can't use one image with the Standard shader and get the effects that all those texture maps provide. A custom shader is required, along with a way to read each map's information out of the combined texture.
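To make the trade-off concrete, here is a small illustration using Python and Pillow of what "baking everything into the diffuse map" amounts to; the file names are placeholders and the images are assumed to be the same size:

```python
# "Baking" the AO map into the albedo map is just a per-pixel multiply,
# like a shader would do at run time. Once combined, the AO contribution
# can no longer be adjusted, swapped, or reused independently.
from PIL import Image, ImageChops

albedo = Image.open("albedo.png").convert("RGB")
ao = Image.open("ao.png").convert("RGB")  # grayscale AO expanded to RGB

baked = ImageChops.multiply(albedo, ao)
baked.save("albedo_with_baked_ao.png")

# With separate maps, a shader can instead weight the AO at run time,
# e.g. mixing between 1.0 and ao with an "AO strength" slider -- which is
# exactly the kind of per-slot control described above.
```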
This seems like a reasonable place to start:
https://en.wikipedia.org/wiki/Texture_mapping
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3d model formats or material definitions, and assembled into resource bundles.
I would add that the shape or polygon doesn't have to belong to a 3D object as one might imagine it. If you render two triangles as a rectangle, you can run all sorts of computations and store the results in a "live" texture.
Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.
What this detail represents is either some agreed-upon format for representing some property (say, "roughness" within some BRDF model), which is what you will encounter if you are using some kind of engine.
Or it is whatever you decide that detail to be, if you are writing your own engine: you can store whatever you want, however you want.
You'll notice on that page that different "mapping" techniques are mentioned, each with its own page. Each is the result of a person or group who did some research and published a paper detailing the technique. Other people adopt it, and that's how these techniques find their way into engines.
There is no rule saying these can't be combined.
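Indeed, packing several scalar maps into the channels of a single texture is common practice (for example the occlusion/roughness/metalness layout that some engines and glTF use). A hedged Pillow sketch, with placeholder file names:

```python
# Channel packing: store three scalar maps in the R, G and B channels of
# one texture. The shader that samples it must agree on this layout.
from PIL import Image

ao = Image.open("ao.png").convert("L")                # R channel
roughness = Image.open("roughness.png").convert("L")  # G channel
metalness = Image.open("metalness.png").convert("L")  # B channel

packed = Image.merge("RGB", (ao, roughness, metalness))
packed.save("orm_packed.png")
```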

Identify different shapes drawn using UIBezierPath?

I am able to draw shapes using the UIBezierPath object. Now I want to identify the different shapes drawn with it, e.g. rectangle, square, triangle, circle, etc. Next, the user should be able to select a particular shape and move the whole shape to a different location on the screen. The actual requirement is even more complex, but if I can get this much working I can figure out the rest.
Any suggestions, links, or pointers on how to start with this are welcome. I am thinking of writing a separate view to handle each shape, but I'm not sure how to do that.
Thank you all in advance!
I recommend David Gelphman’s Programming with Quartz.
In his chapter “Drawing with Paths” he has a section on “Path Construction Primitives” which provides a crossroads:
If you use CGContextAddLineToPoint, your user can make straight lines defined by known Cartesian points. You would use basic math to deduce the geometric shapes defined by those points (see the sketch after this list).
If you use CGContextAddCurveToPoint, your user can make curved lines defined by known points; the curve passes through its end points (though not its two control points), so you could still use basic math to determine at least an approximation of the types of shapes formed.
But if you use CGContextAddQuadCurveToPoint, the control point defines a framework outside of the drawn curve. You'd need more advanced math to determine the shapes formed by curves specified via tangents.
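As a language-neutral illustration of the "basic math on known points" route (shown here in Python; the thresholds and helper names are my own), classifying a closed shape built from straight segments might look like this:

```python
# Sketch: classify a closed polygon from its ordered vertices, the kind
# of data you would collect while building a path with
# CGContextAddLineToPoint. Tolerances are arbitrary placeholders.
import math

def corner_angle(a, b, c):
    """Interior angle at vertex b, in degrees."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

def classify(points):
    n = len(points)
    if n == 3:
        return "triangle"
    if n == 4:
        angles = [corner_angle(points[i - 1], points[i], points[(i + 1) % n])
                  for i in range(n)]
        if all(abs(a - 90) < 5 for a in angles):  # four right angles
            sides = [math.dist(points[i], points[(i + 1) % n]) for i in range(n)]
            return "square" if max(sides) - min(sides) < 0.05 * max(sides) else "rectangle"
        return "quadrilateral"
    return "polygon with %d sides" % n

print(classify([(0, 0), (4, 0), (4, 3), (0, 3)]))  # -> rectangle
```

A circle drawn as many short segments could be detected along the same lines, e.g. by checking that all vertices lie at roughly the same distance from the centroid.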
Gelphman also discusses “Path Utility Functions,” like getting a bounding box and checking whether a given point is inside the path.
As for moving the completed paths, I think you would use CGContextTranslateCTM.