I am trying to port this particular MATLAB command to OpenCV. I am working on Linux (Ubuntu). Can you help me figure out the OpenCV code for this?
out_vector=hist(G_vector,0:17:255);
G_vector is an array of size 1x100 which represents one component of an image; 0:17:255 specifies the 16 bin centers 0, 17, ..., 255.
I am using the following code
vector<Mat> rgb_planes;
split(image,rgb_planes);
int histSize = 255;
/// Set the ranges ( for R,G,B) )
float range[] = { 0,17, 255 } ;
const float* histRange = { range };
bool uniform = true; bool accumulate = false;
Mat r_hist, g_hist, b_hist,g_hist1;
/// Compute the histograms:
calcHist( &rgb_planes[0], 1, 0, Mat(), r_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &rgb_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, &histRange, uniform, accumulate );
calcHist( &rgb_planes[2], 1, 0, Mat(), b_hist, 1, &histSize, &histRange, uniform, accumulate );
// Draw the histograms for R, G and B
int hist_w = 400; int hist_h = 400;
int bin_w = cvRound( (double) hist_w/histSize );
Mat histImage( hist_w, hist_h, CV_8UC3, Scalar( 0,0,0) );
/// Normalize the result to [ 0, histImage.rows ]
normalize(r_hist, r_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(g_hist, g_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
normalize(b_hist, b_hist, 0, histImage.rows, NORM_MINMAX, -1, Mat() );
Can you suggest where I should make changes here?
histSize controls the number of bins in each dimension. If you want 16 bins in those one-dimensional histograms, use:
int histSize[] = { 16 };
or I guess just int histSize = 16; would work.
Set the bin boundaries to be from 0 to 255 like this:
float intensity_range[] = { 0, 256 }; // upper bound is exclusive
const float * histRange[] = { intensity_range };
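Putting that together, the call that mirrors hist(G_vector, 0:17:255) would look roughly like this (a sketch using your variable names; note that MATLAB's hist takes bin centers while calcHist takes bin edges, so the outermost bins differ by half a bin):
int histSize = 16; // 16 bins, matching the 16 bin centers in 0:17:255
calcHist( &rgb_planes[1], 1, 0, Mat(), g_hist, 1, &histSize, histRange, true, false );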
OpenCV's calcHist provides a degree of configurability that few people will ever need. Check out the method description and example code here.
In CImg, I have split an RGBA image apart into multiple single-channel images, with code like:
CImg<unsigned char> input("foo.png");
CImg<unsigned char> r = input.get_channel(0), g = input.get_channel(1), b = input.get_channel(2), a = input.get_channel(3);
Then I try to swizzle the channel order:
CImg<unsigned char> output(input.width(), input.height(), 1, input.channels());
output.channel(0) = g;
output.channel(1) = b;
output.channel(2) = r;
output.channel(3) = a;
When I save the image out, however, it turns out grayscale, apparently based on the alpha channel value (the example input and output images are omitted here).
How do I specify the image color format so that CImg saves into the correct color space?
Simply copying a channel does not work like that (in CImg, channel(c) reduces the image to that single channel in place rather than returning an assignable view); a better approach is to copy the pixel data with std::copy:
std::copy(g.begin(), g.end(), &output.atX(0, 0, 0, 0));
std::copy(b.begin(), b.end(), &output.atX(0, 0, 0, 1));
std::copy(r.begin(), r.end(), &output.atX(0, 0, 0, 2));
std::copy(a.begin(), a.end(), &output.atX(0, 0, 0, 3));
This results in an output image with the channels swizzled as intended (image omitted).
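For reference, here is the whole swizzle as a self-contained sketch (assuming foo.png really has 4 channels; only CImg.h and <algorithm> are needed):
#include "CImg.h"
#include <algorithm>
using namespace cimg_library;

int main() {
    CImg<unsigned char> input("foo.png");
    CImg<unsigned char> r = input.get_channel(0), g = input.get_channel(1),
                        b = input.get_channel(2), a = input.get_channel(3);
    CImg<unsigned char> output(input.width(), input.height(), 1, input.channels());
    // CImg stores pixels planar (all of channel 0, then channel 1, ...),
    // so each source channel is one contiguous block in the destination:
    std::copy(g.begin(), g.end(), &output.atX(0, 0, 0, 0));
    std::copy(b.begin(), b.end(), &output.atX(0, 0, 0, 1));
    std::copy(r.begin(), r.end(), &output.atX(0, 0, 0, 2));
    std::copy(a.begin(), a.end(), &output.atX(0, 0, 0, 3));
    output.save("swizzled.png");
    return 0;
}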
I'm trying to optimize some mesh generation using MeshData & the Job System, but for some reason when I try to use 2 params in meshData.SetVertexBufferParams, the resulting meshData.GetVertexData is half the length it should be (I set the vertex count to 5120, but the resulting VertexData NativeArray is only 2560 items long).
When I force it to be double the length (SetVertexBufferParams(numVerts * 2, ...)), it creates a mesh that appears to treat the normals and vertex positions as all position data, and it also makes the screen go black, so no screenshot.
Here's my code:
// generate 256 height values
int[] arr = new int[256];
for (int i = 0; i < arr.Length; i++)
{
arr[i] = (int) (Mathf.PerlinNoise(i / 16 / 16f, i % 16 / 16f) * 5);
}
// put it in a NativeArray
NativeArray<int> heights = new NativeArray<int>(arr, Allocator.TempJob);
// 4 verts per face * 5 faces = 20
int numVerts = heights.Length * 20; // this value is always 5120
// 2 tris per face * 5 faces * 3 indices = 30
int indices = heights.Length * 30;
// MeshData setup
Mesh.MeshDataArray meshDataArray = Mesh.AllocateWritableMeshData(1);
Mesh.MeshData meshData = meshDataArray[0];
meshData.SetVertexBufferParams(numVerts,
new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3, stream:0),
new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float32, 3, stream:1)
);
meshData.SetIndexBufferParams(indices, IndexFormat.UInt16);
// Create job
Job job = new Job
{
Heights = heights,
MeshData = meshData
};
// run job
job.Schedule().Complete();
// struct I'm using for vertex data
[System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Sequential)]
public struct VData
{
public float3 Vert;
public float3 Norm;
}
// Here's some parts of the job
public struct Job : IJob
{
[ReadOnly]
public NativeArray<int> Heights;
public Mesh.MeshData MeshData;
public void Execute()
{
NativeArray<VData> Verts = MeshData.GetVertexData<VData>();
NativeArray<ushort> Tris = MeshData.GetIndexData<ushort>();
// loops from 0 to 255
for (int i = 0; i < Heights.Length; i++)
{
ushort t1 = (ushort)(w1 + 16); // w1 comes from code omitted here
// This indicates that Verts.Length is 2560 when it should be 5120
Debug.Log(Verts.Length);
int t = i * 30; // tris
int height = Heights[i];
// x and y coordinate in chunk
int x = i / 16;
int y = i % 16;
float3 up = new float3(0, 1, 0);
// This throws an index out of bounds error because t1 becomes larger than Verts.Length
Verts[t1] = new VData { Vert = new float3(x + 1, height, y + 1), Norm = up};
// ...
}
}
}
meshData.SetVertexBufferParams(numVerts,
new VertexAttributeDescriptor(VertexAttribute.Position, VertexAttributeFormat.Float32, 3, stream:0),
new VertexAttributeDescriptor(VertexAttribute.Normal, VertexAttributeFormat.Float32, 3, stream:1)
);
Your SetVertexBufferParams here places VertexAttribute.Position and VertexAttribute.Normal on separate streams, thus halving the size of each stream's buffer; if a buffer is later reinterpreted with the wrong struct by mistake, the resulting array is half the length it should be.
This is how the documentation explains streams:
Vertex data is laid out in separate "streams" (each stream goes into a separate vertex buffer in the underlying graphics API). While Unity supports up to 4 vertex streams, most meshes use just one. Separate streams are most useful when some vertex attributes don't need to be processed, for example skinned meshes often use two vertex streams (one containing all the skinned data: positions, normals, tangents; while the other stream contains all the non-skinned data: colors and texture coordinates).
But why might it end up reinterpreted as half the length? Well, because of this line:
NativeArray<VData> Verts = MeshData.GetVertexData<VData>();
How? Because there is an implicit stream parameter there (doc):
public NativeArray<T> GetVertexData(int stream = 0);
and it defaults to 0. So what happens here is this:
var Verts = Positions_Only.Reinterpret<Position_And_Normals>();
or in other words:
var Verts = NativeArray<float3>().Reinterpret<float3x2>();
case solved :T
TL;DR:
Change stream:1 to stream:0 so both vertex attributes end up in the same stream, matching your VData struct;
or read each stream separately: var Positions = MeshData.GetVertexData<float3>(0); and var Normals = MeshData.GetVertexData<float3>(1);
or create a dedicated struct per stream: var Stream0 = MeshData.GetVertexData<VStream0>(0); and var Stream1 = MeshData.GetVertexData<VStream1>(1); (see the sketch below).
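For instance, the per-stream-struct option could look like this (a sketch only; VStream0 and VStream1 are names I made up, and each struct must match its stream's VertexAttributeDescriptor layout):
[System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Sequential)]
public struct VStream0 { public float3 Position; } // stream 0: positions
[System.Runtime.InteropServices.StructLayout(System.Runtime.InteropServices.LayoutKind.Sequential)]
public struct VStream1 { public float3 Normal; } // stream 1: normals

// Inside Execute(), each stream is fetched and written separately:
NativeArray<VStream0> positions = MeshData.GetVertexData<VStream0>(0);
NativeArray<VStream1> normals = MeshData.GetVertexData<VStream1>(1);
positions[t1] = new VStream0 { Position = new float3(x + 1, height, y + 1) };
normals[t1] = new VStream1 { Normal = new float3(0, 1, 0) };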
I have a 4x4x4 3D texture which I am initializing and showing correctly to color my 4x4x4 grid of vertices (see the attached red grid with one white pixel at 0,0,0).
However, when I render the 4 layers in a framebuffer (all four at one time, using gl.COLOR_ATTACHMENT0 through gl.COLOR_ATTACHMENT3), only four of the sixteen pixels on a layer are successfully rendered by my fragment shader (to be turned green).
When I only do one layer, with gl.COLOR_ATTACHMENT0, the same 4 pixels show up correctly altered for that 1 layer, and the other 3 layers keep their original color unchanged. When I change gl.viewport(0, 0, size, size) (size = 4 in this example) to something else, like the whole screen or sizes other than 4, different pixels are written, but never more than 4. My goal is to individually specify all 16 pixels of each layer precisely. I'm using colors for now, as a learning experience, but the texture is really for position and velocity information for each vertex for a physics simulation. I'm assuming (faulty assumption?) that with 64 points/vertices, I'm running the vertex shader and the fragment shader 64 times each, coloring one pixel each invocation.
I've removed all but the vital code from the shaders. I've left the javascript unaltered. I suspect my problem is initializing and passing the array of vertex positions incorrectly.
// Set x,y position coordinates used to extract data from one plane of our data cube.
// Remember, we handle z as the layer index of our cube, which is a stack of x-y planes.
const oneLayerVertices = new Float32Array(size * size * 2);
count = 0;
for (var j = 0; j < (size); j++) {
for (var i = 0; i < (size); i++) {
oneLayerVertices[count] = i;
count++;
oneLayerVertices[count] = j;
count++;
//oneLayerVertices[count] = 0;
//count++;
//oneLayerVertices[count] = 0;
//count++;
}
}
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: oneLayerVertices,
},
});
And then I'm using the bufferInfo as follows:
gl.useProgram(computeProgramInfo.program);
twgl.setBuffersAndAttributes(gl, computeProgramInfo, bufferInfo);
gl.viewport(0, 0, size, size); //remember size = 4
outFramebuffers.forEach((fb, ndx) => {
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3
]);
const baseLayerTexCoord = (ndx * numLayersPerFramebuffer);
console.log("My baseLayerTexCoord is "+baseLayerTexCoord);
twgl.setUniforms(computeProgramInfo, {
baseLayerTexCoord,
u_kernel: [
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 1,
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 0,
0, 0, 0,
],
u_position: inPos,
u_velocity: inVel,
loopCounter: loopCounter,
numLayersPerFramebuffer: numLayersPerFramebuffer
});
gl.drawArrays(gl.POINTS, 0, (16));
});
VERTEX SHADER:
calc_vertex:
const compute_vs = `#version 300 es
precision highp float;
in vec4 position;
void main() {
gl_Position = position;
}
`;
FRAGMENT SHADER:
calc_fragment:
const compute_fs = `#version 300 es
precision highp float;
out vec4 ourOutput[4];
void main() {
ourOutput[0] = vec4(0,1,0,1);
ourOutput[1] = vec4(0,1,0,1);
ourOutput[2] = vec4(0,1,0,1);
ourOutput[3] = vec4(0,1,0,1);
}
`;
I’m not sure what you’re trying to do and what you think the positions will do.
You have 2 options for GPU simulation in WebGL2
use transform feedback.
In this case you pass in attributes and generate data in buffers. Effectively you have in attributes and out attributes, and generally you only run the vertex shader. To put it another way, your varyings, the output of your vertex shader, get written to a buffer. So you have at least 2 sets of buffers, currentState and nextState, and your vertex shader reads attributes from currentState and writes them to nextState.
There is an example of writing to buffers via transform feedback here, though that example only uses transform feedback at the start to fill buffers once.
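A minimal sketch of that update loop (all names except the WebGL2 calls are made up; shader and buffer creation are omitted):
// At init time, before linking: declare which varyings to capture.
gl.transformFeedbackVaryings(prog, ['nextPosition'], gl.SEPARATE_ATTRIBS);
gl.linkProgram(prog);

// Each update: read attributes from currentStateBuffer, capture the
// vertex shader's varyings into nextStateBuffer.
gl.useProgram(prog);
gl.bindBuffer(gl.ARRAY_BUFFER, currentStateBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 4, gl.FLOAT, false, 0, 0);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, nextStateBuffer);
gl.enable(gl.RASTERIZER_DISCARD); // vertex shader only, nothing is rasterized
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, numParticles);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, null);
// ...then swap currentStateBuffer and nextStateBuffer for the next frame.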
use textures attached to framebuffers
in this case, similarly, you have 2 textures, currentState and nextState. You set nextState to be your render target and read from currentState to generate the next state.
the difficulty is that you can only render to textures by outputting primitives in the vertex shader. If currentState and nextState are 2D textures that's trivial. Just output a -1.0 to +1.0 quad from the vertex shader and all pixels in nextState will be rendered to.
If you’re using a 3D texture then same thing except you can only render to 4 layers at a time (well, gl.getParameter(gl.MAX_DRAW_BUFFERS)). so you’d have to do something like
for(let layer = 0; layer < numLayers; layer += 4) {
// setup framebuffer to use these 4 layers
gl.drawXXX(...) // draw to 4 layers
}
or better
// at init time
const fbs = [];
for(let layer = 0; layer < numLayers; layer += 4) {
fbs.push(createFramebufferForThese4Layers(layer));
}
// at draw time
fbs.forEach((fb, ndx) => {
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.drawXXX(...) // draw to 4 layers
});
I'm guessing multiple draw calls are slower than one draw call, so another solution is to instead treat a 2D texture as a 3D array and calculate texture coordinates appropriately.
I don’t know which is better. If you’re simulating particles and they only need to look at their own currentState then transform feedback is easier. If need each particle to be able to look at the state of other particles, in other words you need random access to all the data, then your only option is to store the data in textures.
As for positions, I don't understand your code. Positions define primitives, either POINTS, LINES, or TRIANGLES, so how does passing integer X, Y values into your vertex shader help you define POINTS, LINES, or TRIANGLES?
It looks like you're trying to use POINTS, in which case you need to set gl_PointSize to the size of the point you want to draw (1.0), and you need to convert those positions into clip space:
gl_PointSize = 1.0;
gl_Position = vec4((position.xy + 0.5) / resolution * 2.0 - 1.0, 0, 1);
where resolution is the size of the texture.
But doing it this way will be slow. Much better to just draw a full-size (-1 to +1) clip-space quad. For every pixel in the destination the fragment shader will be called. gl_FragCoord.xy will be the location of the center of the pixel currently being rendered, so for the first pixel in the bottom left corner gl_FragCoord.xy will be (0.5, 0.5). The pixel to the right of that will be (1.5, 0.5), the one to the right of that (2.5, 0.5), and so on. You can use that value to calculate how to access currentState. Assuming a 1x1 mapping, the easiest way would be:
int n = numberOfLayerThatsAttachedToCOLOR_ATTACHMENT0;
vec4 currentStateValueForLayerN = texelFetch(
currentStateTexture, ivec3(gl_FragCoord.xy, n + 0), 0);
vec4 currentStateValueForLayerNPlus1 = texelFetch(
currentStateTexture, ivec3(gl_FragCoord.xy, n + 1), 0);
vec4 currentStateValueForLayerNPlus2 = texelFetch(
currentStateTexture, ivec3(gl_FragCoord.xy, n + 2), 0);
...
vec4 nextStateForLayerN = computeNextStateFromCurrentState(currentStateValueForLayerN);
vec4 nextStateForLayerNPlus1 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus1);
vec4 nextStateForLayerNPlus2 = computeNextStateFromCurrentState(currentStateValueForLayerNPlus2);
...
outColor[0] = nextStateForLayerN;
outColor[1] = nextStateForLayerNPlus1;
outColor[2] = nextStateForLayerNPlus2;
...
I don't know if you needed this, but just to test, here's a simple example that renders a different color to every pixel of a 4x4x4 texture and then displays them.
const pointVS = `#version 300 es
uniform int size;
uniform highp sampler3D tex;
out vec4 v_color;
void main() {
int x = gl_VertexID % size;
int y = (gl_VertexID / size) % size;
int z = gl_VertexID / (size * size);
v_color = texelFetch(tex, ivec3(x, y, z), 0);
gl_PointSize = 8.0;
vec3 normPos = vec3(x, y, z) / float(size);
gl_Position = vec4(
mix(-0.9, 0.6, normPos.x) + mix(0.0, 0.3, normPos.y),
mix(-0.6, 0.9, normPos.z) + mix(0.0, -0.3, normPos.y),
0,
1);
}
`;
const pointFS = `#version 300 es
precision highp float;
in vec4 v_color;
out vec4 outColor;
void main() {
outColor = v_color;
}
`;
const rtVS = `#version 300 es
in vec4 position;
void main() {
gl_Position = position;
}
`;
const rtFS = `#version 300 es
precision highp float;
uniform vec2 resolution;
out vec4 outColor[4];
void main() {
vec2 xy = gl_FragCoord.xy / resolution;
outColor[0] = vec4(1, 0, xy.x, 1);
outColor[1] = vec4(0.5, xy.yx, 1);
outColor[2] = vec4(xy, 0, 1);
outColor[3] = vec4(1, vec2(1) - xy, 1);
}
`;
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert('need webgl2');
}
const pointProgramInfo = twgl.createProgramInfo(gl, [pointVS, pointFS]);
const rtProgramInfo = twgl.createProgramInfo(gl, [rtVS, rtFS]);
const size = 4;
const numPoints = size * size * size;
const tex = twgl.createTexture(gl, {
target: gl.TEXTURE_3D,
width: size,
height: size,
depth: size,
});
const clipspaceFullSizeQuadBufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
data: [
-1, -1,
1, -1,
-1, 1,
-1, 1,
1, -1,
1, 1,
],
numComponents: 2,
},
});
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
for (let i = 0; i < 4; ++i) {
gl.framebufferTextureLayer(
gl.FRAMEBUFFER,
gl.COLOR_ATTACHMENT0 + i,
tex,
0, // mip level
i, // layer
);
}
gl.drawBuffers([
gl.COLOR_ATTACHMENT0,
gl.COLOR_ATTACHMENT1,
gl.COLOR_ATTACHMENT2,
gl.COLOR_ATTACHMENT3,
]);
gl.viewport(0, 0, size, size);
gl.useProgram(rtProgramInfo.program);
twgl.setBuffersAndAttributes(
gl,
rtProgramInfo,
clipspaceFullSizeQuadBufferInfo);
twgl.setUniforms(rtProgramInfo, {
resolution: [size, size],
});
twgl.drawBufferInfo(gl, clipspaceFullSizeQuadBufferInfo);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.drawBuffers([
gl.BACK,
]);
gl.useProgram(pointProgramInfo.program);
twgl.setUniforms(pointProgramInfo, {
tex,
size,
});
gl.drawArrays(gl.POINTS, 0, numPoints);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
I am working on a Unity3D project which at the moment relies on a 3D texture.
The problem is, Unity only allows Pro users to make use of Texture3D. Hence I'm looking for an alternative to Texture3D, perhaps a one-dimensional texture (although not natively available in Unity) that is interpreted as 3-dimensional in the shader (which uses the 3D texture).
Is there a way to do this whilst (preferably) keeping subpixel information?
(GLSL and Cg tags added because here lies the core of the problem)
Edit: The problem is addressed here as well: webgl glsl emulate texture3d
However this is not yet finished and working properly.
Edit: For the time being I disregard proper subpixel information. So any help on converting a 2D texture to contain 3D information is appreciated!
Edit: I retracted my own answer as it isn't sufficient as of yet:
float2 uvFromUvw( float3 uvw ) {
float2 uv = float2(uvw.x, uvw.y / _VolumeTextureSize.z);
uv.y += float(round(uvw.z * (_VolumeTextureSize.z - 1))) / _VolumeTextureSize.z;
return uv;
}
With initialization as Texture2D(volumeWidth, volumeHeight * volumeDepth).
Most of the time it works, but sometimes it shows wrong pixels, probably because of subpixel information it is picking up on. How can I fix this? Clamping the input doesn't work.
I'm using this for my 3D clouds if that helps:
float SampleNoiseTexture( float3 _UVW, float _MipLevel )
{
float2 WrappedUW = fmod( 16.0 * (1000.0 + _UVW.xz), 16.0 ); // UW wrapped in [0,16[
float IntW = floor( WrappedUW.y ); // Integer slice number
float dw = WrappedUW.y - IntW; // Remainder for interpolating between slices
_UVW.x = (17.0 * IntW + WrappedUW.x + 0.25) * 0.00367647058823529411764705882353; // divided by 17*16 = 272
float4 Value = tex2D( _TexNoise3D, float4( _UVW.xy, 0.0, 0.0 ) );
return lerp( Value.x, Value.y, dw );
}
The "3D texture" is packed as 16 slices of 17 pixels wide in a 272x16 texture, with the 17th column of each slice being a copy of the 1st column (wrap address mode)...
Of course, no mip-mapping allowed with this technique.
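With the constants factored out, the addressing scheme looks roughly like this (my own sketch for an SxSxS volume packed as S slices each S+1 texels wide; it ignores the fine sub-texel offset tuning above and assumes it is called from a fragment shader):
float SamplePacked3D( sampler2D _Packed, float3 uvw, float S )
{
    float sliceW = S + 1.0;        // slice width including the padded column
    float w = frac( uvw.z ) * S;   // continuous slice coordinate
    float slice = floor( w );      // integer slice index
    float dw = w - slice;          // blend factor toward the next slice
    float u = (slice * sliceW + frac( uvw.x ) * S + 0.5) / (sliceW * S);
    float4 texel = tex2D( _Packed, float2( u, uvw.y ) );
    return lerp( texel.r, texel.g, dw ); // red = slice w, green = slice w+1
}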
Here's the code I'm using to create the 3D texture, if that's what's bothering you:
const int NOISE3D_TEXTURE_POT = 4;
const int NOISE3D_TEXTURE_SIZE = 1 << NOISE3D_TEXTURE_POT;
/// <summary>
/// Create the "3D noise" texture
/// To simulate 3D textures that are not available in Unity, I create a single long 2D slice of (17*16) x 16
/// The width is 17*16 so that all 3D slices are packed into a single line, and I use 17 as a single slice width
/// because I pad the last pixel with the first column of the same slice so bilinear interpolation is correct.
/// The texture contains 2 significant values in Red and Green :
/// Red is the noise value in the current W slice
/// Green is the noise value in the next W slice
/// Then, the actual 3D noise value is an interpolation of red and green based on the W remainder
/// </summary>
protected NuajTexture2D Build3DNoise()
{
// Build first noise mip level
float[,,] NoiseValues = new float[NOISE3D_TEXTURE_SIZE,NOISE3D_TEXTURE_SIZE,NOISE3D_TEXTURE_SIZE];
for ( int W=0; W < NOISE3D_TEXTURE_SIZE; W++ )
for ( int V=0; V < NOISE3D_TEXTURE_SIZE; V++ )
for ( int U=0; U < NOISE3D_TEXTURE_SIZE; U++ )
NoiseValues[U,V,W] = (float) SimpleRNG.GetUniform();
// Build actual texture
int MipLevel = 0; // In my original code, I build several textures for several mips...
int MipSize = NOISE3D_TEXTURE_SIZE >> MipLevel;
int Width = MipSize*(MipSize+1); // Pad with an additional column
Color[] Content = new Color[MipSize*Width];
// Build content
for ( int W=0; W < MipSize; W++ )
{
int Offset = W * (MipSize+1); // W Slice offset
for ( int V=0; V < MipSize; V++ )
{
for ( int U=0; U <= MipSize; U++ )
{
Content[Offset+Width*V+U].r = NoiseValues[U & (MipSize-1),V,W];
Content[Offset+Width*V+U].g = NoiseValues[U & (MipSize-1),V,(W+1) & (MipSize-1)];
}
}
}
// Create texture
NuajTexture2D Result = Help.CreateTexture( "Noise3D", Width, MipSize, TextureFormat.ARGB32, false, FilterMode.Bilinear, TextureWrapMode.Repeat );
Result.SetPixels( Content, 0 );
Result.Apply( false, true );
return Result;
}
I followed Patapom's response and came to the following. However, it's still not working as it should.
float getAlpha(float3 position)
{
float2 WrappedUW = fmod( _Volume.xz * (1000.0 + position.xz), _Volume.xz ); // UW wrapped into [0, _Volume.xz[
float IntW = floor( WrappedUW.y ); // Integer slice number
float dw = WrappedUW.y - IntW; // Remainder for interpolating between slices
position.x = ((_Volume.z + 1.0) * IntW + WrappedUW.x + 0.25) / ((_Volume.z + 1.0) * _Volume.x); // normalized by (slices+1)*width, e.g. 17*16 = 272
float4 Value = tex2Dlod( _VolumeTex, float4( position.xy, 0.0, 0.0 ) );
return lerp( Value.x, Value.y, dw );
}
public int GetPixelId(int x, int y, int z) {
return y * (volumeWidth + 1) * volumeDepth + z * (volumeWidth + 1) + x;
}
// Code to set the pixelbuffer one pixel at a time starting from a clean slate
pixelBuffer[GetPixelId(x, y, z)].r = color.r;
if (z > 0)
pixelBuffer[GetPixelId(x, y, z - 1)].g = color.r;
if (z == volumeDepth - 1 || z == 0)
pixelBuffer[GetPixelId(x, y, z)].g = color.r;
if (x == 0) {
pixelBuffer[GetPixelId(volumeWidth, y, z)].r = color.r;
if (z > 0)
pixelBuffer[GetPixelId(volumeWidth, y, z - 1)].g = color.r;
if (z == volumeDepth - 1 || z == 0)
pixelBuffer[GetPixelId(volumeWidth, y, z)].g = color.r;
}
I have successfully used the C++ example project to draw graphs from my C++ project using ZedGraph. However, there is no example with a date axis for C++.
The following code is taken from the C# example found at http://zedgraph.org/wiki/index.php?title=Tutorial:Date_Axis_Chart_Demo. Please see my comments marked //JEM// to see where my problem is:
PointPairList list = new PointPairList();
for ( int i=0; i<36; i++ )
{
double x = (double) new XDate( 1995, 5, i+11 ); //JEM // This line doesn't work in C++.
double y = Math.Sin( (double) i * Math.PI / 15.0 );
list.Add( x, y );
}
....missing code...
// Set the XAxis to date type
myPane.XAxis.Type = AxisType.Date;
//JEM //This one also doesn't work even if I change it to the
//syntax that C++ understands, that is,
myPane->XAxis->Type = AxisType->Date;
Maybe C++ has some problems with the anonymous variables? Try creating an XDate object first, before converting it to double:
XDate date = new XDate( 1995, 5, i+11 );
double x = (double)date;
Thanks Gacek.
This is how it finally went down. Your answer was the turning point!!!
for ( int i = 0; i < basin.DY; i++ ){
XDate dato(1995,9,i,0,0,0); //date
double x = (double)dato;
//double x = i;
double y = basin.Qsim[i];
double y2 = basin.Qobs[i];
list->Add( x, y );
list2->Add( x, y2 );
}
// set the XAxis to date type
myPane->XAxis->Type = AxisType::Date;
Here is the constructor for the XDate type for C++, from the ZedGraph documentation on SourceForge (documentation/html/M_ZedGraph_XDate__ctor_3.htm):
XDate (int year, int month, int day, int hour, int minute, double second)
I have also found a detailed example at this link: http://www.c-plusplus.de/forum/viewtopic-var-t-is-186422-and-view-is-next.html, with the following code:
/// ZedGraph curve //////////
private: void CreateGraph( ZedGraphControl ^zgc )
{
GraphPane ^myPane = zgc->GraphPane;
// Set the titles and axis labels
myPane->Title->Text = "Gewichtskurve";
myPane->XAxis->Title->Text = "Tag";
myPane->YAxis->Title->Text = "Gewicht in Kg";
// Make up some data points from the Sine function
double x,y;
PointPairList ^list = gcnew PointPairList();
for ( int i=0; i<36; i++ )
{
x = (double) gcnew XDate( 1995, 5, i+11 );
y = Math::Sin( (double) i * Math::PI / 15.0 );
list->Add( x, y );
}
// Generate a blue curve with circle symbols, and "My Curve 2" in the legend
LineItem ^myCurve = myPane->AddCurve( "Trainingskurve", list, Color::Blue,
SymbolType::Circle );
XDate ^xdatum = gcnew XDate ( 1995, 1, 1);
xdatum->AddDays ( 1 );
myPane->XAxis->Type = AxisType::Date;