I suspect what I'm asking is impossible, but here goes. I'm using CoffeeScript to create a simple WebGL library, and I'm looking at textures and shaders with the following setup. In my example file I have this:
cgl = new CoffeeGL.App('webgl-canvas')
shader = new CoffeeGL.Shader.Basic(cgl).bind()
cgl.drawLoop = () ->
  @gl.clearColor(0.15, 0.15, 0.15, 1.0)
  @gl.clear(@gl.COLOR_BUFFER_BIT | @gl.DEPTH_BUFFER_BIT)
  @updateCamera(c)
  @draw(n0) if n0?
Take a look at the Shader: it is passed an App. The App class is essentially an object that deals with the context; the GL context, the canvas and all of that is encapsulated nicely in one class. Since a shader needs the context throughout its life (the same goes for textures and geometry that have been sent to the graphics card), it is given the context when it is created.
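For illustration, here is a minimal sketch of that pattern; the @app and @program internals are my assumptions, not the actual library code:

class CoffeeGL.Shader.Basic
  constructor: (@app) ->
    # keep a reference to the App so its GL context stays available for life
    @program = @app.gl.createProgram()
  bind: ->
    @app.gl.useProgram(@program)
    @  # return this so calls can be chained, as in the example above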
This strikes me as not that elegant. Is it possible to write something like this instead:
cgl = new CoffeeGL.App('webgl-canvas')
cgl.init = () ->
shader = new CoffeeGL.Shader.Basic().bind()
cgl.drawLoop = () ->
  @gl.clearColor(0.15, 0.15, 0.15, 1.0)
  @gl.clear(@gl.COLOR_BUFFER_BIT | @gl.DEPTH_BUFFER_BIT)
  @updateCamera(c)
  @draw(n0) if n0?
where the context is actually gleaned from the fact that it's the context that is calling the constructor (or the compile or bind call; it doesn't have to be a constructor per se)?
I figure that since CoffeeScript wraps everything in a closure (especially if you are using node.js to combine your scripts), this is just not possible.
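For reference, this is the wrapper the CoffeeScript compiler emits around every file (generic compiler output, not code from the project), which is why there is no ambient "calling context" a constructor could inspect:

(function() {
  var cgl, shader;
  // ...compiled module code lives here, sealed inside the closure...
}).call(this);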
I have a question for anyone who's been using nDisplay with Vive trackers.
I set up using Ben Kidd's videos on YouTube with the nDisplay new-project template, and created this blueprint by making a blueprint subclass of DisplayClusterRootActor: https://blueprintue.com/blueprint/fgg1zcub/
I put GetVRPNTrackerLocation not being available as a function down to my being on a different version of UE (4.26).
In VRPN, I'm getting the following data from the controller (not using a tracker at the moment):
Tracker openvr/controller/LHR-F7FD3B46#192.168.0.41:3884, sensor 0:
pos (-0.08, 0.78, -0.36); quat ( 0.20, 0.07, -0.15, 0.96)
Tracker openvr/controller/LHR-F7FD3B46#192.168.0.41:3884, sensor 0:
pos (-0.08, 0.78, -0.36); quat ( 0.20, 0.07, -0.16, 0.96)
...
and that's coming through in my Print String,
so I know the data is passing through from the controller -> VRPN -> UE4 / nDisplay, and it looks similar to Ben's (numbers from roughly -2 to 2).
Lastly, in my nDisplay .cfg I have (alongside my monitor setup):
...
[camera] id="camera_static" loc="X=0,Y=0,Z=0" parent="eye_level" eye_swap="false" eye_dist="0.064" force_offset="0" tracker_id="ViveVRPN" tracker_ch="0"
[input] id="ViveVRPN" type="tracker" addr="openvr/controller/LHR-F7FD3B46#192.168.0.41:3884" loc="X=0,Y=0,Z=0" rot="P=0,Y=0,R=0" front="-Z" right="X" up="Y"
...
However, the movement of the camera is really tiny and not representative of the actual camera movements.
Finally, it's not only me with this issue.
The issue is that GetVRPNTrackerLocation seemingly only takes the first axis into consideration, assigns it to front, and sets the rest to positive X.
As for the underlying problem, I have no idea where this happens, but since I needed a quick fix I just hardcoded these values into the engine code and didn't look for a more permanent fix.
So, in case you want to use this workaround, here's what I did:
Follow these steps to obtain a source version of Unreal 4.26 (I would recommend the 4.26 branch):
https://docs.unrealengine.com/en-US/ProductionPipelines/DevelopmentSetup/BuildingUnrealEngine/index.html
Find {UnrealSourceFolder}\Engine\Plugins\Runtime\nDisplay\Source\DisplayCluster\Private\Input\Devices\VRPN\Tracker\DisplayClusterVrpnTrackerInputDevice.cpp
Modify these two methods:
FVector FDisplayClusterVrpnTrackerInputDevice::GetMappedLocation(const FVector& Loc, const AxisMapType Front, const AxisMapType Right, const AxisMapType Up) const
{
    static TLocGetter funcs[] = { &LocGetX, &LocGetNX, &LocGetY, &LocGetNY, &LocGetZ, &LocGetNZ };
    // Original line, which uses the (incorrectly parsed) axis mapping:
    //return FVector(funcs[Front](Loc), funcs[Right](Loc), funcs[Up](Loc));
    // Hardcoded workaround: force front=-Z, right=X, up=Y regardless of the cfg.
    return FVector(funcs[AxisMapType::NZ](Loc), funcs[AxisMapType::X](Loc), funcs[AxisMapType::Y](Loc));
}
FQuat FDisplayClusterVrpnTrackerInputDevice::GetMappedQuat(const FQuat& Quat, const AxisMapType Front, const AxisMapType Right, const AxisMapType Up, const AxisMapType InAxisW) const
{
    static TRotGetter funcs[] = { &RotGetX, &RotGetNX, &RotGetY, &RotGetNY, &RotGetZ, &RotGetNZ, &RotGetW, &RotGetNW };
    // Original line:
    //return FQuat(funcs[Front](Quat), funcs[Right](Quat), funcs[Up](Quat), -Quat.W);// funcs[axisW](quat));
    // Same hardcoded axis mapping applied to the rotation quaternion.
    return FQuat(funcs[AxisMapType::NZ](Quat), funcs[AxisMapType::X](Quat), funcs[AxisMapType::Y](Quat), -Quat.W);
}
So I just upgraded from a working 4.25 version to 4.26 and came across the same issue. I built the engine and debugged my way through to find the cause and a possible solution for this problem.
It seems like it is a problem with the .cfg text files and the axes getting parsed incorrectly.
An easy solution is to import the .cfg file into the editor so it gets converted to the new .json file format. There you can see the issue with the wrongly assigned axes on the tracker and can also correct it in the new file. Afterwards, just use the new .json file for your nDisplay configuration and it should work correctly.
I'm trying to set the background color of a GUI.Box:
void OnGUI()
{
    string LatLong = map.calc.prettyCurrentLatLon;
    var mousePosition = Input.mousePosition;
    float x = mousePosition.x + 10;
    float y = Screen.height - mousePosition.y + 10;
    GUI.backgroundColor = Color.red;
    GUI.Box(new Rect(x, y, 200, 200), LatLong);
}
However, the box is showing in a semi-transparent black, and the white text is subdued, not opaque white.
You have to use a GUIStyle:
private GUIStyle currentStyle = null;

void OnGUI()
{
    InitStyles();
    GUI.Box( new Rect( 0, 0, 100, 100 ), "Hello", currentStyle );
}

// Lazily create the style once; OnGUI runs many times per second.
private void InitStyles()
{
    if( currentStyle == null )
    {
        currentStyle = new GUIStyle( GUI.skin.box );
        currentStyle.normal.background = MakeTex( 2, 2, new Color( 0f, 1f, 0f, 0.5f ) );
    }
}

// Builds a small solid-color texture to use as the box background.
private Texture2D MakeTex( int width, int height, Color col )
{
    Color[] pix = new Color[width * height];
    for( int i = 0; i < pix.Length; ++i )
    {
        pix[ i ] = col;
    }
    Texture2D result = new Texture2D( width, height );
    result.SetPixels( pix );
    result.Apply();
    return result;
}
Taken from the Unity forum.
I'm gonna slide in with a more elegant solution here before this question gets old. I saw Thomas's answer and started to wonder if there is a way to do it without having to call InitStyles in the OnGUI loop. Ideally you want to init the GUI style only once, in Awake or Start or wherever, and then never check whether it's null ever again.
Anyway, after some trial and error, I came up with this.
private void Awake() {
    // this texture is stored in a field on the class;
    // it's a 1-pixel image, so there's only 1 color to set
    consoleBackground = new Texture2D(1, 1, TextureFormat.RGBAFloat, false);
    consoleBackground.SetPixel(0, 0, new Color(1, 1, 1, 0.25f));
    consoleBackground.Apply(); // uploads the pixel data to the GPU, so this is necessary

    // basically just create a copy of the "none style"
    // and then change the properties as desired
    debugStyle = new GUIStyle(GUIStyle.none);
    debugStyle.fontSize = 24;
    debugStyle.normal.textColor = Color.white;
    debugStyle.normal.background = consoleBackground;
}
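With that in place, OnGUI can use the style directly, with no per-frame initialization. A minimal usage sketch (debugStyle being the field set up in Awake above):

void OnGUI()
{
    // no InitStyles call needed here; the style was built once in Awake()
    GUI.Box(new Rect(10, 10, 300, 40), "Hello", debugStyle);
}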
REVISION - 17 July 2022 - GUI Style Creation and Storage
Prelude
Style creation through the methods provided by others is certainly a functional way of giving your custom editors a unique look. Those methods have some fundamental issues I should point out, which my method doesn't outright correct, just alleviates. This method still needs to be expanded upon and is still a partly experimental progression from a personal plugin.
Creating styles on every OnGUI call generates unnecessary extra instructions for your editor window. This doesn't scale well past a handful (~4) of styles.
By creating styles every time OnGUI is called, you're also repeatedly creating textures for the background colour (not good). Over prolonged use of this method, memory leaks can occur, although that is unlikely.
What does my method do differently?
Creates GUIStyle and Texture2D files. GUIStyles are saved as .json files, which suit [JSON <-> GUIStyle] conversion and storage well (see the sketch after this list).
Texture2Ds are encoded from raw data to PNG format through UnityEngine.
Checks if a style file is null before fetching or creating any missing styles again.
Contains a list of all styles through the use of a Style Manifest (struct) to store the names of all textures and styles to iteratively load on fetch.
Only creates styles if they are missing; it does not spend resources recreating pre-existing styles.
GUIStyles (as JSONs) and Texture2D files are stored in a Resources folder within the Plugin folder.
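As a rough illustration of the save/load half of this approach, here is a minimal sketch; the class name, paths and helper layout are my assumptions, and it presumes GUIStyle survives a JsonUtility round trip:

using System.IO;
using UnityEngine;

public static class StyleStore
{
    // Hypothetical storage location inside the plugin's Resources folder.
    static readonly string Root = Path.Combine(Application.dataPath, "MyPlugin/Resources/Styles");

    public static void SaveStyle(string name, GUIStyle style, Texture2D background)
    {
        Directory.CreateDirectory(Root);
        // GUIStyle is serializable, so JsonUtility can convert it to JSON text.
        File.WriteAllText(Path.Combine(Root, name + ".json"), JsonUtility.ToJson(style));
        // Encode the raw texture data to PNG format for storage on disk.
        File.WriteAllBytes(Path.Combine(Root, name + ".png"), background.EncodeToPNG());
    }

    public static GUIStyle LoadStyle(string name)
    {
        string jsonPath = Path.Combine(Root, name + ".json");
        if (!File.Exists(jsonPath))
            return null; // caller is expected to recreate missing styles
        GUIStyle style = JsonUtility.FromJson<GUIStyle>(File.ReadAllText(jsonPath));
        string pngPath = Path.Combine(Root, name + ".png");
        if (File.Exists(pngPath))
        {
            // Relink the background texture that was encoded to PNG earlier.
            var tex = new Texture2D(2, 2);
            tex.LoadImage(File.ReadAllBytes(pngPath));
            style.normal.background = tex;
        }
        return style;
    }
}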
It should be noted that my style methods are written with the express understanding that GUISkins exist; they are simply not suitable for my UI/UX needs.
How is this done?
[Figure: Plugin Style Handling Diagram]
I separate style loading into its own namespace for style handling, which contains the functions as well as the public variables for global plugin access. This namespace creates, loads, and sends back styles on request from other scripts.
A call to a function is made when the plugin is opened to load the style manifest; subsequently, all styles and textures are loaded and relinked for use.
If the styles manifest is missing, it is recreated along with all GUIStyle files. If the styles manifest is present but a style is missing, that style is recreated and the manifest is modified to reference the new style.
Textures are handled separately from GUIStyle loading and are collected in a separate array. They are independently checked to see if they still exist; missing textures are recreated and encoded from raw data to PNG format, with the style manifest being modified when necessary.
Instead of repeatedly creating all styles or loading them each frame, the plugin fetches the first style from memory and checks whether the result is null. If the first style returns null, the plugin assumes all styles have been dereferenced and reloads or recreates the relevant GUIStyle files (this can happen because of the engine entering/exiting play mode, and the check is necessary to preserve UI/UX legibility).
If the style returns as a valid reference, my plugins do use it, but this is risky; it's a good idea to also check at least one texture, because textures are at risk of being dereferenced from the Texture2D array. A sketch of this fetch-and-check cycle follows.
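Something like this (again, StyleStore and cachedStyle are assumed names, not the actual plugin code):

GUIStyle cachedStyle;

void OnGUI()
{
    // Reload only when the engine has dereferenced the cached style or its
    // background texture, e.g. after entering or exiting play mode.
    if (cachedStyle == null || cachedStyle.normal.background == null)
        cachedStyle = StyleStore.LoadStyle("MyBoxStyle") ?? new GUIStyle(GUI.skin.box);
    GUI.Box(new Rect(0, 0, 100, 100), "Hello", cachedStyle);
}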
Once each check is done, the plugin renders the layout as normal and the next cycle begins. This method requires extra processing time and extra storage space for the plugin overall, but in turn it is:
Quicker over a longer period of time, because styles are created or loaded only when necessary.
Easier to modify themes for, allowing individuals to customize the tools to their preferred theme. This can also be expanded into custom "theme packs" for a plugin.
Scalable, to an extent, for large numbers of styles.
This method still requires experience in GUI Styles, Data Saving, JSON Utilisation and C#.
I recently tried to develop a Flutter plugin with CameraX, but I found there was no way to simply bind the Preview to Flutter's Texture.
In the past, I only needed to call camera.setPreviewTexture(surfaceTexture.surfaceTexture()) to bind the camera to a texture; now I can't find that API.
camera.setPreviewTexture(surfaceTexture.surfaceTexture())
val previewConfig = PreviewConfig.Builder().apply {
    setTargetAspectRatio(Rational(1, 1))
    setTargetResolution(Size(640, 640))
}.build()

// Build the viewfinder use case
val preview = Preview(previewConfig)

preview.setOnPreviewOutputUpdateListener {
    // it.surfaceTexture = this.surfaceTexture.surfaceTexture()
}

// how to bind the CameraX Preview surfaceTexture and the flutter surfaceTexture?
I think you can bind the texture via Preview.SurfaceProvider.
final CameraSelector cameraSelector = new CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build();
final ListenableFuture<ProcessCameraProvider> listenableFuture = ProcessCameraProvider.getInstance(appCompatActivity.getBaseContext());
listenableFuture.addListener(() -> {
    try {
        ProcessCameraProvider cameraProvider = listenableFuture.get();
        Preview preview = new Preview.Builder()
                .setTargetResolution(new Size(720, 1280))
                .build();
        cameraProvider.unbindAll();
        Camera camera = cameraProvider.bindToLifecycle(appCompatActivity, cameraSelector, preview);
        // Feed CameraX frames into the SurfaceTexture that Flutter registered.
        Preview.SurfaceProvider surfaceProvider = request -> {
            Size resolution = request.getResolution();
            surfaceTexture.setDefaultBufferSize(resolution.getWidth(), resolution.getHeight());
            Surface surface = new Surface(surfaceTexture);
            request.provideSurface(surface, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()), result -> {
            });
        };
        preview.setSurfaceProvider(surfaceProvider);
    } catch (Exception e) {
        e.printStackTrace();
    }
}, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()));
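For completeness, the surfaceTexture used above would be the one Flutter hands out through its TextureRegistry; something like this on the plugin side (a sketch; where textureRegistry comes from depends on your plugin setup):

// Obtain a SurfaceTexture from Flutter's TextureRegistry (e.g. in
// onAttachedToEngine, where the registry is available on the binding).
TextureRegistry.SurfaceTextureEntry entry = textureRegistry.createSurfaceTexture();
SurfaceTexture surfaceTexture = entry.surfaceTexture();
long textureId = entry.id(); // send this id to Dart so a Texture widget can display it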
Update: CameraX has added functionality which will now allow this since this answer was written, but this might still be useful to someone. See this answer for details.
It seems as though using CameraX is difficult to impossible, because it abstracts the more complicated things away and so doesn't expose the things you need, like being able to pass in your own SurfaceTexture (which is normally created by Flutter).
So the simple answer is that you can't use CameraX.
That being said, with some work you may be able to get it working, but I have no idea whether it will work for sure. It's ugly and hacky, so I wouldn't recommend it. YMMV.
If we're going to do this, let's first look at how the Flutter view creates a texture:
@Override
public TextureRegistry.SurfaceTextureEntry createSurfaceTexture() {
    final SurfaceTexture surfaceTexture = new SurfaceTexture(0);
    surfaceTexture.detachFromGLContext();
    final SurfaceTextureRegistryEntry entry = new SurfaceTextureRegistryEntry(nextTextureId.getAndIncrement(),
            surfaceTexture);
    mNativeView.getFlutterJNI().registerTexture(entry.id(), surfaceTexture);
    return entry;
}
Most of that is replicable, so we may be able to do the same with the surface texture the camera gives us.
You can get ahold of the texture the camera creates this way:
preview.setOnPreviewOutputUpdateListener { previewOutput ->
    val texture: SurfaceTexture = previewOutput.surfaceTexture
}
What you're going to have to do now is pass a reference to your FlutterView into your plugin (I'll leave that for you to figure out). Then call flutterView.getFlutterNativeView() to get hold of the FlutterNativeView.
Unfortunately, FlutterNativeView's getFlutterJNI is package-private. So this is where it gets really hacky: you can create a class in that same package that calls that package-private method in a publicly accessible method. It's super ugly, and you may have to fiddle around with Gradle to get the compilation security settings to allow it, but it should be possible.
After that, it should be simple enough to create a SurfaceTextureRegistryEntry and register the texture with the Flutter JNI. I don't think you want to detach from the OpenGL context, and I really have no idea if this will actually work. But if you want to try it out and report back what you find, I would be interested in hearing the result!
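To make the package-private trick concrete, the bridge class would look something like this. This is a sketch only; it assumes getFlutterJNI() is the package-private accessor on FlutterNativeView described above, and that the FlutterJNI type is importable in your Flutter embedding version:

// Must live in the same package as FlutterNativeView so the
// package-private getFlutterJNI() is visible.
package io.flutter.view;

import io.flutter.embedding.engine.FlutterJNI;

public final class FlutterJNIAccessor {
    private FlutterJNIAccessor() {}

    // Re-expose the package-private accessor publicly.
    public static FlutterJNI getFlutterJNI(FlutterNativeView view) {
        return view.getFlutterJNI();
    }
}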
I'm looking at SceneKit's handle binding method with the SCNBufferBindingBlock callback, as described here:
https://developer.apple.com/documentation/scenekit/scnbufferbindingblock
Does anyone have an example of how this works?
let program = SCNProgram()
program.handleBinding(ofBufferNamed: "", frequency: .perFrame) { (steam, theNode, theShadable, theRenderer) in
}
To me it reads like I can use a *.metal shader on an SCNNode without having to go through the hassle of SCNTechniques... any takers?
Just posting this in case someone else came here looking for a concise example. Here's how SCNProgram's handleBinding() method can be used with Metal:
First define a data structure in your .metal shader file:
struct MyShaderUniforms {
    float myFloatParam;
    float2 myFloat2Param;
};
Then pass this as an argument to a shader function:
fragment half4 myFragmentFunction(MyVertex vertexIn [[stage_in]],
                                  constant MyShaderUniforms& shaderUniforms [[buffer(0)]]) {
    ...
}
Next, define the same data structure in your Swift file:
struct MyShaderUniforms {
    var myFloatParam: Float = 1.0
    var myFloat2Param = simd_float2()
}
Now create an instance of this data structure, change its values, and define the SCNBufferBindingBlock:
var myUniforms = MyShaderUniforms()
myUniforms.myFloatParam = 3.0
...
program.handleBinding(ofBufferNamed: "shaderUniforms", frequency: .perFrame) { (bufferStream, node, shadable, renderer) in
    bufferStream.writeBytes(&myUniforms, count: MemoryLayout<MyShaderUniforms>.stride)
}
Here, the string passed to ofBufferNamed: corresponds to the argument name in the fragment function. Inside the block, the updated values of the user-defined MyShaderUniforms instance are then written to the bufferStream parameter.
The .handleBinding(ofBufferNamed:frequency:handler:) method registers a block for SceneKit to call at render time to bind a Metal buffer to the shader program. This method can only be used with Metal or OpenGL shading language based programs. An SCNProgram object helps perform this custom rendering: it contains a vertex shader and a fragment shader, and using a program object completely replaces SceneKit's rendering. Your shaders take input from SceneKit and become responsible for all the transform, lighting and shading effects you want to produce. Use the .handleBinding() method to associate a block with a Metal shader program to handle setup of a buffer used in that shader.
Here's a link to the Developer Documentation on the SCNProgram class.
You also need the instance method writeBytes(_:count:), which copies all the necessary data bytes into the underlying Metal buffer for use by a shader.
The SCNTechnique class is specifically made for post-processing SceneKit's rendering of a scene, using additional drawing passes with custom Metal or OpenGL shaders. Using SCNTechnique you can create effects such as color grading or displacement, motion blur, and ambient occlusion, as well as other render passes.
Here is a first code excerpt showing how to properly use the .handleBinding() method:
func useTheseAPIs(shadable: SCNShadable,
                  bufferStream: SCNBufferStream,
                  voidPtr: UnsafeMutableRawPointer,
                  bindingBlock: @escaping SCNBindingBlock,
                  bufferFrequency: SCNBufferFrequency,
                  bufferBindingBlock: @escaping SCNBufferBindingBlock,
                  program: SCNProgram) {

    bufferStream.writeBytes(voidPtr, count: 4)

    shadable.handleBinding!(ofSymbol: "symbol", handler: bindingBlock)
    shadable.handleUnbinding!(ofSymbol: "symbol", handler: bindingBlock)

    program.handleBinding(ofBufferNamed: "pass",
                          frequency: bufferFrequency,
                          handler: bufferBindingBlock)
}
And here is a second code excerpt:
let program = SCNProgram()
program.delegate = self as? SCNProgramDelegate
program.vertexShader = NextLevelGLContextYUVVertexShader
program.fragmentShader = NextLevelGLContextYUVFragmentShader

program.setSemantic(SCNGeometrySource.Semantic.vertex.rawValue,
                    forSymbol: NextLevelGLContextAttributeVertex,
                    options: nil)
program.setSemantic(SCNGeometrySource.Semantic.texcoord.rawValue,
                    forSymbol: NextLevelGLContextAttributeTextureCoord,
                    options: nil)

if let material = self._material {
    material.program = program

    material.handleBinding(ofSymbol: NextLevelGLContextUniformTextureSamplerY, handler: {
        (programId: UInt32, location: UInt32, node: SCNNode?, renderer: SCNRenderer) in
        glUniform1i(GLint(location), 0)
    })
    material.handleBinding(ofSymbol: NextLevelGLContextUniformTextureSamplerUV, handler: {
        (programId: UInt32, location: UInt32, node: SCNNode?, renderer: SCNRenderer) in
        glUniform1i(GLint(location), 1)
    })
}
Also, have a look at the Simulating refraction in SceneKit SO post.
I'm trying to alpha blend some layers: [CGImageRef] in the drawLayer(thisLayer:inContext:) routine of my custom NSView. Until now I used CGContextDrawImage() to draw those layers into the drawLayer context. While profiling, I noticed that CGContextDrawImage() needs 70% of the CPU time, so I decided to try the Accelerate framework. I changed the code, but it just crashes and I have no clue what the reason could be.
I'm creating those layers like this:
func addLayer() {
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
    let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)
    var layerContext = CGBitmapContextCreate(nil, UInt(canvasSize.width), UInt(canvasSize.height), 8, UInt(canvasSize.width * 4), colorSpace, bitmapInfo)
    var newLayer = CGBitmapContextCreateImage(layerContext)
    layers.append( newLayer )
}
My drawLayer routine looks like this:
override func drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!)
{
    var ctxImageBuffer = vImage_Buffer(data: CGBitmapContextGetData(ctx),
                                       height: CGBitmapContextGetHeight(ctx),
                                       width: CGBitmapContextGetWidth(ctx),
                                       rowBytes: CGBitmapContextGetBytesPerRow(ctx))
    for imageLayer in layers
    {
        //CGContextDrawImage(ctx, CGRect(origin: frameOffset, size: canvasSize), imageLayer)
        var inProvider: CGDataProviderRef = CGImageGetDataProvider(imageLayer)
        var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)
        var buffer: vImage_Buffer = vImage_Buffer(data: &inBitmapData,
                                                  height: CGImageGetHeight(imageLayer),
                                                  width: CGImageGetWidth(imageLayer),
                                                  rowBytes: CGImageGetBytesPerRow(imageLayer))
        vImageAlphaBlend_ARGB8888(&buffer, &ctxImageBuffer, &ctxImageBuffer, 0)
    }
}
The canvasSize is always the same and all the layers have the same size, so I don't understand why the last line crashes.
Also, I don't see how to use the new convenience functions to create vImage_Buffers directly from CGLayerRefs. That's why I do it the complicated way.
Any help appreciated.
EDIT
inBitmapData indeed holds pixel data that reflects the background color I set. However, the debugger cannot po &inBitmapData and fails with this message:
error: reference to 'CFData' not used to initialize a inout parameter &inBitmapData
So I looked for a way to get a pointer to inBitmapData. This is what I came up with:
var bitmapPtr: UnsafeMutablePointer<CFDataRef> = UnsafeMutablePointer<CFDataRef>.alloc(1)
bitmapPtr.initialize(inBitmapData)
I also had to change the way I point at my data for both buffers needed as input to the alpha blend. Now it's not crashing anymore, and luckily the speed boost is visible in the profiler (vImageAlphaBlend only takes about a third of CGContextDrawImage). Unfortunately, though, the result is a transparent image with pixel errors instead of the white image background.
So far I don't get any runtime errors anymore, but since the result is not as expected, I fear that I still don't use the alpha blend function correctly.
vImage_Buffer.data should point to the CFData's data (the pixel bytes), not to the CFDataRef.
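For example, a sketch in the same Swift era as the question above (the exact pointer conversions vary by Swift version):

var inProvider: CGDataProviderRef = CGImageGetDataProvider(imageLayer)
var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)
// CFDataGetBytePtr returns a pointer to the pixel bytes held by the CFData;
// pass that (not &inBitmapData) as the buffer's data pointer.
var buffer: vImage_Buffer = vImage_Buffer(data: UnsafeMutablePointer(CFDataGetBytePtr(inBitmapData)),
                                          height: CGImageGetHeight(imageLayer),
                                          width: CGImageGetWidth(imageLayer),
                                          rowBytes: CGImageGetBytesPerRow(imageLayer))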
Also, not all images store their data as four-channel, 8-bit-per-channel data. If it turns out to be three-channel, or RGBA, or monochrome, you may get more crashing or funny colors. And you have assumed that the raw image data is not premultiplied, which may not be a safe assumption.
You are better off using vImageBuffer_InitWithCGImage so that you can guarantee the format and colorspace of the raw image data. A more specific question about that function might help us resolve your confusion about it.
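For example, something along these lines (a sketch only; the exact initializer spelling and constant names vary by Swift version):

var format = vImage_CGImageFormat(
    bitsPerComponent: 8,
    bitsPerPixel: 32,
    colorSpace: Unmanaged.passRetained(CGColorSpaceCreateDeviceRGB()),
    bitmapInfo: CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue),
    version: 0,
    decode: nil,
    renderingIntent: kCGRenderingIntentDefault)
var buffer = vImage_Buffer()
// Decodes the CGImage into a freshly allocated buffer with the requested format.
let error = vImageBuffer_InitWithCGImage(&buffer, &format, nil, imageLayer, vImage_Flags(kvImageNoFlags))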
Some CG calls fall back on vImage to do the work, so rewriting your code in this way might be unprofitable in such cases. Usually the right thing to do first is to look carefully at the backtraces in the CG call to try to understand why you are causing so much work for it. Often the answer is colorspace conversion. I would look carefully at the CGBitmapInfo and colorspace of the drawing surface and your images, and see if there isn't something you could do to get those to match up a bit better.
IIRC, CALayers usually have their data in non-cacheable storage for better GPU access. That could cause problems for the CPU. If the data is in a CALayer, I would use CA to do the compositing. Also, I think CALayers are nearly always BGRA 8-bit premultiplied. If you are not going to use CA to do the compositing, then the right vImage function is probably vImagePremultipliedAlphaBlend_RGBA/BGRA8888.