OpenGL renderer not working with Intel HD Graphics 2500 - ILNumerics

I have been unsuccessful trying to use the OpenGL driver with ILNumerics visualizations. I am trying to do just a basic visualization following the Quick Start guide, but every time I launch the application I get the error message "no compatible hardware accelerated driver could be found or activated", with the reported error "Attempted to read or write protected memory. This is often an indication that other memory is corrupt". The graphics driver falls back to GDI, which is really slow.
I have tried all of the suggested fixes for this problem. I installed the latest Intel HD graphics driver, and I ran OpenGL Extensions Viewer, which indicates that OpenGL 4.0 is supported. The ILNumerics documentation states that OpenGL 3.1+ is required, which my system appears to support.
So I am at a loss here. Is there a way to use hardware rendering with this Intel card, or not?

I have also been trying to use the ILNumerics OpenGL driver, but with an Intel HD 4000. I get the same error, and the debug log shows that ILNumerics crashes at the glDrawElements call.
I found a workaround when initializing an ILPlotCube so that the OpenGL driver will not crash. I am using the Windows Forms ILPanel control and ILNumerics 3.2.2.0 from NuGet.
In the ilPanel1_Load event handler, create an ILPlotCube, set the x-axis scale to logarithmic, and add the plot cube to the scene.
Then add an ILPoints element to the plot cube; I fill it with random data.
For me this runs and the plot control loads using the OpenGL driver without crashing.
void ilPanel1_Load(object sender, EventArgs e)
{
    var pc = new ILPlotCube(twoDMode: false);
    // Set an axis scale to logarithmic so the GL driver will not crash
    pc.ScaleModes.XAxisScale = AxisScale.Logarithmic;
    // Create a new scene
    var scene = new ILScene();
    scene.Add(pc);
    this.ilPanel1.Scene = scene;
    // Add points to the scene so the GL driver will not crash
    this.AddPoints();
}
/// <summary>
/// Add an ILPoints object so GL driver will not crash
/// </summary>
private void AddPoints()
{
    var pc = ilPanel1.Scene.First<ILPlotCube>();
    ILArray<float> A = ILMath.tosingle(ILMath.rand(3, 1000));
    var points = new ILPoints
    {
        Positions = A,
        Colors = A,
        Size = 2,
    };
    pc.Add(points);
    this.points = points;
}
If the control loads successfully with the OpenGL driver, remove the points element from the scene, set the axis scale as desired, and add the charting elements that plot what you actually want to plot.
// Remove the ILPoints shape
if (this.points != null && ilPanel1.Scene.Contains(points))
{
    ilPanel1.Scene.Remove(this.points);
    this.points = null;
}
// Set the axis scale back to linear
var pcsm = ilPanel1.Scene.First<ILPlotCube>().ScaleModes;
pcsm.XAxisScale = AxisScale.Linear;
// Add actual plots here
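As an example of that last step, the sketch below adds a simple line plot. The sine data and variable names here are made up for illustration and are not part of the workaround itself:
// Hypothetical example: plot a sine curve over 100 samples.
ILArray<float> data = ILMath.tosingle(ILMath.sin(ILMath.linspace(0, 10, 100)));
var plotCube = ilPanel1.Scene.First<ILPlotCube>();
plotCube.Add(new ILLinePlot(data));
plotCube.Configure();
ilPanel1.Refresh();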

Intel HD graphics often causes problems with OpenGL. You should file a bug report on the Intel bug tracker and switch to a graphics card which properly supports OpenGL 3.1 - really.

Related

Extracting the ARMeshGeometry data to apply an AudioKit sound generator

How can I extract and work with the ARMeshGeometry generated by the new SceneReconstruction API on the iPad Pro? I am using Apple's Visualising Scene Semantics sample app/code.
I am trying to attach an AudioKit AKOscillator() to the centre of each 'face' as a 3D sound source as it's created in real time.
I can see from the LiDAR example code that this 'seems' to be the point at which a 'face' is created, however I am having trouble combining both the extraction/viewing of the 'face' data and adding the AudioKit sound source.
Here is where I believe the face is determined (I am new to Swift, so I could be very wrong):
DispatchQueue.global().async {
    for anchor in meshAnchors {
        for index in 0..<anchor.geometry.faces.count {
            // Get the center of the face so that we can compare it to the given location.
            let geometricCenterOfFace = anchor.geometry.centerOf(faceWithIndex: index)
            // Convert the face's center to world coordinates.
            var centerLocalTransform = matrix_identity_float4x4
            centerLocalTransform.columns.3 = SIMD4<Float>(geometricCenterOfFace.0, geometricCenterOfFace.1, geometricCenterOfFace.2, 1)
            let centerWorldPosition = (anchor.transform * centerLocalTransform).position
I would really benefit from seeing the raw array data, if that is achievable. Is this from ARGeometrySource? Can it be printed, viewed, or extracted?
I then want to add something like an oscillator/noise generator at the 3D world location of that 'face' and have it spatialised using the array/location data, using something like:
var oscillator = AKOscillator()               // Create the sound generator
AudioKit.output = oscillator                  // Tell AudioKit what to output
AudioKit.start()                              // Start up AudioKit
oscillator.start()                            // Start the oscillator
oscillator.frequency = random(in: 220...880)  // Set oscillator parameters
I appreciate this is almost a two-fold question; however, any approach to the ARMeshGeometry data extraction/use, or to the implementation of a sound source at the centre of each 'face', or both, is welcome.
Further code for LiDAR visualising scene semantics example in link above.
Thanks, your assistance is much appreciated,
R

Unity resolution problem, mesh has holes when zooming out

I have a server which renders a mesh using OpenGL with a custom framebuffer. In my fragment shader I write gl_PrimitiveID into an output variable. Afterwards I call glReadPixels to read the IDs out. Then I know which triangles were rendered (and are therefore visible), and I send all those triangles to a client which runs on Unity. On this client I add the vertex and index data to a GameObject and it renders without a problem. I get the exact same rendering result in Unity as I got with OpenGL, except when I start to zoom out.
Here are pictures of the mesh rendered with Unity:
My first thought was that I have different resolutions, but this is not the case: I have 1920*1080 on both server and client. I use the same view and projection matrices from the client on my server, so this also shouldn't be the problem. What could be the cause of this error?
In case you need to see some of the code I wrote:
Here is my vertex shader code:
#version 330 core

layout (location = 0) in vec3 position;

uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * vec4(position, 1.0f);
}
Here is my fragment shader code:
#version 330 core

layout(location = 0) out int primitiveID;

void main(void)
{
    primitiveID = gl_PrimitiveID + 1; // +1 because the first primitive is 0
}
and here is my getVisibleTriangles method:
std::set<int> getVisibleTriangles() {
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RED_INTEGER, GL_INT, &pixels[0]);
    std::set<int> visibleTriangles;
    for (int i = 0; i < pixelsLength; i += 4) {
        int id = *(int*)&pixels[i];
        if (id != 0) {
            visibleTriangles.insert(id - 1); // id 0 is NO_PRIMITIVE
        }
    }
    return visibleTriangles;
}
Oh my god, I can't believe I made such a stupid mistake.
After all it WAS a resolution problem.
I didn't call glViewport (only when resizing the window). When creating a window with glfwCreateWindow, GLFW creates the window, but since the parameters are only hints and not hard constraints (as stated here: glfw reference), it is possible that they are not fulfilled exactly. Since I passed a desired size of 1920*1080 (which is also my screen resolution), the drawing area did not actually get a size of 1920*1080, because some space is also needed for the menu etc. So the server rendered at a lower resolution (1920*1061 to be exact), which results in missing triangles on the client.
Before getting into the shader details for this problem, are you sure that the problem doesn't lie with the way the zoom-out functionality has been implemented in Unity? It's just a hunch, since I have seen it in older projects, but if the zoom in/out works by actually moving the camera, then the movement of the clipping planes will create those "holes" when the mesh surfaces go outside their range. The placement of some of those holes in the shared image makes me doubt that this is the case, but you never know.
If this happens to be the way the zoom function works then you can confirm this by looking at the editor mode while zooming out. It will display the position of the clipping planes of the camera in relation to the mesh.
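For reference, here is a minimal sketch of the two zoom styles (the camera field, speed value, and scroll input are assumptions for illustration): zooming via the field of view leaves the clipping planes where they are, while zooming by moving the camera shifts them along with the transform.
using UnityEngine;

// Hypothetical zoom helper, only to illustrate the two approaches discussed above.
public class ZoomExample : MonoBehaviour
{
    public Camera cam;            // assumed to be assigned in the inspector
    public float zoomSpeed = 10f;

    void Update()
    {
        float scroll = Input.GetAxis("Mouse ScrollWheel");

        // Option A: zoom by narrowing/widening the field of view.
        // The near/far clipping planes stay where they are.
        cam.fieldOfView = Mathf.Clamp(cam.fieldOfView - scroll * zoomSpeed, 10f, 90f);

        // Option B: zoom by moving the camera along its forward axis.
        // The clipping planes move with the camera and can start cutting the mesh.
        // cam.transform.position += cam.transform.forward * scroll * zoomSpeed;
    }
}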

Unity and VR for complex construction CAD models

I work in construction and we are trying to visualize our projects using Unity and Oculus Rift.
Basically, all our models are created in Revit and exported to FBX for import into Unity. For each discipline model we have (electrical, mechanical, architectural, facade...) we generate an FBX in Revit and bring it into Unity.
The models have around 3,000 to 60,000 objects (meshes) and around 3 million to 40 million polygons. When we try to visualize the models in Unity we get very low FPS, around 2 to 3, with around 15,000 to 20,000 batched draw calls.
I believe the problem is the combined complexity of all the models we bring into Unity. I wonder if there is any way to optimize this; I have already tried decimating the meshes, disabling shadows, and occlusion culling, but nothing seems to work. Collapsing the models into a single object is not an option because we have to allow the user to select and inspect individual elements.
I am working on something similar, and I can share some experience with tasks like this involving many vertices or meshes. I am trying to visualize point clouds in Unity, and it is a very challenging task. In my case, though, I create the point clouds myself and I do not triangulate them. That helps, but I still have to apply optimizations.
From my experience, once you have more than about 10 million vertices rendered in a frame you start to have FPS issues. This can vary based on your hardware, of course, and I am sure it will be even worse with triangulated meshes. What I have done to optimize things is the following:
I first started by rendering only the objects that are in the camera frustum. To do this I used a function called IsVisibleFrom, which is an extension to Renderer, like this:
using UnityEngine;

public static class RendererExtensions
{
    public static bool IsVisibleFrom(this Renderer renderer, Camera camera)
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(camera);
        return GeometryUtility.TestPlanesAABB(planes, renderer.bounds);
    }
}
Then you can use it like this by traversing all the meshes you have:
// Assumes these fields exist on the component: the camera to test against
// and the parent object that holds all the point cloud chunks.
public Camera cam;
public GameObject PointCloud;

Renderer grid;

IEnumerator RenderVisibleGameObject()
{
    for (int i = 0; i < PointCloud.transform.childCount; i++)
    {
        grid = PointCloud.transform.GetChild(i).GetComponent<Renderer>();
        if (!grid.IsVisibleFrom(cam))
        {
            grid.gameObject.SetActive(false);
        }
        else
        {
            grid.gameObject.SetActive(true);
        }
        if (i == (PointCloud.transform.childCount - 1))
            yield return null;
    }
    StartCoroutine(RenderVisibleGameObject());
}
The second option, if possible, is to create lower-detail versions of your meshes and use Level of Detail (LOD). Basically, this renders low-detail meshes for objects that are further away from the camera.
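As a rough sketch of how LODs can be assigned from script (the high/low-detail renderers here are assumptions; the low-detail meshes would come from your decimation step), Unity's LODGroup API can be used like this:
using UnityEngine;

// Hypothetical LOD setup: attach a LODGroup to an object that has a
// high-detail and a pre-made low-detail child renderer.
public static class LodSetup
{
    public static void AddLods(GameObject target, Renderer highDetail, Renderer lowDetail)
    {
        LODGroup lodGroup = target.AddComponent<LODGroup>();
        LOD[] lods = new LOD[2];
        lods[0] = new LOD(0.5f, new[] { highDetail }); // used while the object covers more than 50% of screen height
        lods[1] = new LOD(0.1f, new[] { lowDetail });  // used down to 10%; below that the object is culled
        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}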
The last option I can recommend is Occlusion Culling. This is similar to the first option, but it also takes occlusion into account, which was not relevant in my case because I only had points.
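Occlusion culling is mostly an editor workflow (the occlusion data has to be baked in the Occlusion Culling window before it has any effect); the only script-side part worth showing is the per-camera switch, sketched here with an assumed camera reference:
using UnityEngine;

public class OcclusionToggle : MonoBehaviour
{
    public Camera cam; // assumed to be assigned in the inspector

    void Start()
    {
        // Occlusion culling is on by default; this just makes the setting explicit.
        // It only has an effect after occlusion data has been baked for the scene.
        cam.useOcclusionCulling = true;
    }
}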
You may also find the Forge Unity AR/VR toolkit of interest:
Overview
Introduction
23-minute video
As you probably know, Forge is highly optimised for professional visualisation of large CAD models.

Unity spawning lots of objects at runtime running slow

I've created a simple project where approx. 7000 cubes are created in the scene (with a VR camera), but the problem is that when I move the camera to see all the cubes the FPS becomes very bad, something like 5-6 frames per second. My PC is an i7 with a GTX 1070, and I thought it should be able to draw hundreds of thousands of cubes without any problems. Really, I've seen Minecraft; it seems to have no problem drawing cubes.
So the question is: is it possible to optimize the scene so that all cubes are drawn in one call, or something similar, to get good performance?
I've actually made all the cubes static, and there are no textures, only the Standard material...
Here is how it looks now:
I'm using the default Directional Light, and it would be good not to change the Standard shader, because it works great with lighting.
Here is how I'm generating the cubes:
private void AddCube(Vector3 coords)
{
    var particle = (Transform)MonoBehaviour.Instantiate(prototype.transform, holder.transform);
    SetScale(particle);
    SetPosition(particle, coords);
    cubes.Add(particle.gameObject);
    particle.gameObject.isStatic = true;
}

private void SetScale(Transform particle)
{
    particle.localScale = new Vector3(Scale, Scale, Scale);
}

private void SetPosition(Transform particle, Vector3 coords)
{
    particle.position = coords;
}
This is the screenshot of Stats:
It shows 41 fps here because I moved the camera away from the cubes to get a clean background for the stats panel. Actually, after making the cubes static, the FPS depends on whether the cubes are visible on the screen or not.
The problem is most likely caused by the number of individual objects you are instantiating. If the cubes don't change their transform after being generated, you should be able to use StaticBatchingUtility to combine them into one batch.
Call this after generating the cubes, passing the common parent of the generated cubes:
StaticBatchingUtility.Combine(cubesRoot);
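For context, here is a minimal sketch of how that call could be wired into the generation code from the question (the holder parent and the GenerateCubes method are assumptions for illustration):
using UnityEngine;

// Hypothetical driver script, assuming cubes are parented under `holder` as in the question.
public class CubeField : MonoBehaviour
{
    public GameObject holder;   // common parent of all generated cubes

    void Start()
    {
        GenerateCubes();        // assumed to call AddCube(...) for every cube position

        // Combine all static child renderers under `holder` into static batches.
        // The cubes must not move, rotate or scale after this call.
        StaticBatchingUtility.Combine(holder);
    }

    void GenerateCubes()
    {
        // ... the AddCube(coords) calls from the question go here ...
    }
}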

Displaying a Mesh based on pointcloud data

I am sampling data from the point cloud and trying to display the selected points using a mesh renderer.
I have the data, but I can't visualize it. I am using the Augmented Reality application as a template.
I am doing the point saving and mesh population in a coroutine. There are no errors but I can't see any resulting mesh.
I am wondering if there is a conflict with an existing mesh component from the point cloud example that I use for creating the cloud.
I pick a point on the screen (touch) and use the index to find coordinates and populate a Vector3[]. The mesh receives the vertices (5000 points out of the 500,000 in the point cloud).
This is where I set the mesh:
if (m_updateSubPointsMesh)
{
    int[] indices = new int[ctr];
    for (int i = 0; i < ctr; ++i)
    {
        indices[i] = i;
    }
    m_submesh.Clear();
    m_submesh.vertices = m_subpoints;
    int vertsInMesh = m_submesh.vertexCount;
    m_submesh.SetIndices(indices, MeshTopology.Points, 0);
}
m_subrenderer.material.SetColor("_SpecColor", Color.yellow);
I am using Unity Pro 5.3.3 and VS 2015 on Windows 10.
Comments and advice are very much appreciated even if they are not themselves a solution.
Jose
I sorted it out. The meshing was right; it turned out to be a bug in a transform (not Tango-defined). The mesh was rendered at another point. I had to walk around to find it.
Thanks
You must convert the Tango mesh data to Unity mesh data; it's not structured in the same way (I believe it's the triangles that are different). You also need to set triangles and normals on the mesh.
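As a rough illustration of that last point (this is not the Tango SDK's own conversion; the array parameters are assumptions), a Unity Mesh that actually renders as triangles needs vertices, triangle indices, and normals:
using UnityEngine;

// Hypothetical conversion sketch: `vertices` and `triangles` are assumed to come
// from the source mesh data, already converted to Unity's coordinate system.
public static class MeshBuilder
{
    public static Mesh Build(Vector3[] vertices, int[] triangles)
    {
        var mesh = new Mesh();
        mesh.vertices = vertices;
        mesh.triangles = triangles;      // three indices per triangle
        mesh.RecalculateNormals();       // generate normals so lighting works
        mesh.RecalculateBounds();        // make sure the mesh is not culled away
        return mesh;
    }
}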