iTextSharp 7 - Scaled and centered image as watermark

I started using iTextSharp 7 a few days ago; I used to work with iTextSharp 5 for years.
I can't manage to add a scaled image at the center of the page as a watermark with iText 7.
My code with iTextSharp 5:
using (PdfStamper pdfStamper = new PdfStamper(pdfReader, memoryStream))
{
    for (int pageIndex = 1; pageIndex <= pdfReader.NumberOfPages; pageIndex++)
    {
        pdfStamper.FormFlattening = false;
        iTextSharp.text.Rectangle pageRectangle = pdfReader.GetPageSizeWithRotation(pageIndex);
        PdfContentByte pdfData = pdfStamper.GetOverContent(pageIndex);
        PdfGState graphicsState = new PdfGState();
        graphicsState.FillOpacity = 0.4F;
        pdfData.SetGState(graphicsState);
        pdfData.BeginText();
        Image imageWM = Image.GetInstance(image_WM_Path);
        float width = pageRectangle.Width;
        float height = pageRectangle.Height;
        // scale image
        imageWM.ScaleToFit(width / 3, height / 3);
        // center image
        imageWM.SetAbsolutePosition(width / 2 - imageWM.ScaledWidth / 2, height / 2 - imageWM.ScaledHeight / 2);
        pdfData.AddImage(imageWM);
        pdfData.EndText();
    }
    pdfStamper.Close();
    return memoryStream.ToArray();
}
Here is my attempt with iTextSharp 7 (code based on the iText 7 examples):
PdfDocument pdfDoc = new PdfDocument(new PdfReader(sourceFile), new PdfWriter(destinationPath));
Document document = new Document(pdfDoc);
PdfCanvas over;
PdfExtGState gs1 = new PdfExtGState();
gs1.SetFillOpacity(0.5f);
int n = pdfDoc.GetNumberOfPages();
Rectangle pagesize;
float x, y;
ImageData img = ImageDataFactory.Create(image_WM_Path);
float w = img.GetWidth();
float h = img.GetHeight();
for (int i = 1; i <= n; i++)
{
    PdfPage pdfPage = pdfDoc.GetPage(i);
    pagesize = pdfDoc.GetPage(i).GetPageSize();
    pdfPage.SetIgnorePageRotationForContent(true);
    x = (pagesize.GetLeft() + pagesize.GetRight()) / 2;
    y = (pagesize.GetTop() + pagesize.GetBottom()) / 2;
    over = new PdfCanvas(pdfDoc.GetPage(i));
    over.SaveState();
    over.SetExtGState(gs1);
    over.AddImage(img, w, 0, 0, h, x - (w / 2), y - (h / 2), true);
    over.RestoreState();
}
document.Close();
pdfDoc.Close();
The image is centered, but I can't manage to scale it with the AddImage method.
Maybe it is easily done, but I am struggling with this.
Any help appreciated.

I have adapted your example to Java, but that shouldn't matter much, since it's the math that is important:
public static final String SRC = "src/main/resources/pdfs/hello.pdf";
public static final String DEST = "results/text/watermark.pdf";
public static final String IMG = "src/main/resources/img/mascot.png";

public static void main(String[] args) throws IOException {
    File file = new File(DEST);
    file.getParentFile().mkdirs();
    new Watermark().createPdf(SRC, DEST);
}

public void createPdf(String src, String dest) throws IOException {
    PdfDocument pdfDoc = new PdfDocument(
        new PdfReader(src), new PdfWriter(dest));
    Document document = new Document(pdfDoc);
    PdfCanvas over;
    PdfExtGState gs1 = new PdfExtGState();
    gs1.setFillOpacity(0.5f);
    int n = pdfDoc.getNumberOfPages();
    Rectangle pagesize;
    ImageData img = ImageDataFactory.create(IMG);
    float iW = img.getWidth();
    float iH = img.getHeight();
    float pW, pH, sW, sH, f, x, y;
    for (int i = 1; i <= n; i++) {
        PdfPage pdfPage = pdfDoc.getPage(i);
        pagesize = pdfPage.getPageSize();
        pW = pagesize.getWidth();
        pH = pagesize.getHeight();
        f = (pW / iW) * 0.5f;
        sW = iW * f;
        sH = iH * f;
        x = pagesize.getLeft() + (pW / 2) - (sW / 2);
        y = pagesize.getBottom() + (pH / 2) - (sH / 2);
        over = new PdfCanvas(pdfDoc.getPage(i));
        over.saveState();
        over.setExtGState(gs1);
        over.addImage(img, sW, 0, 0, sH, x, y);
        over.restoreState();
    }
    document.close();
    pdfDoc.close();
}
The result of this code looks exactly the way I expect it to.
Some explanation.
I have an image mascot.png with dimensions iW x iH.
I have pages with dimensions pW x pH.
I want to scale the image so that it takes up 50% of the page width, hence I create a variable f with value f = (pW / iW) * 0.5f (50% of the page-width-to-image-width ratio).
I apply the factor f to the initial values of the images, resulting in the scaled dimensions sW x sH.
I define an offset (x, y) for the image by subtracting half of the scaled width and height from the middle of the page.
Now I have the values I need for the addImage() method: over.addImage(img, sW, 0, 0, sH, x, y);
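To sanity-check the arithmetic, here is the same math as stand-alone Java with assumed dimensions (an A4-sized page of 595 x 842 pt and a hypothetical 400 x 300 px image); it has no iText dependency and the numbers are illustrative only:

```java
// Same scaling math as above, with assumed page/image dimensions.
public class WatermarkMath {
    // Returns {sW, sH, x, y}: dimensions of the image scaled to 50% of the
    // page width, and the lower-left position that centers it on the page.
    static float[] fit(float pW, float pH, float iW, float iH) {
        float f = (pW / iW) * 0.5f;     // scale factor
        float sW = iW * f;              // scaled width == pW / 2
        float sH = iH * f;              // scaled height (aspect ratio kept)
        float x = (pW / 2) - (sW / 2);  // centered horizontally
        float y = (pH / 2) - (sH / 2);  // centered vertically
        return new float[]{sW, sH, x, y};
    }

    public static void main(String[] args) {
        float[] r = fit(595f, 842f, 400f, 300f);
        System.out.printf("sW=%.2f sH=%.2f x=%.2f y=%.2f%n", r[0], r[1], r[2], r[3]);
    }
}
```

Note that sW always comes out as exactly half the page width, which is the whole point of choosing f that way.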
Note: you were adding the image as an inline image. That's a bad idea, because it leads to bloated PDF files, especially in the case of watermarks: by adding an image as an inline image to each page, you add the image bytes redundantly, as many times as there are pages. It's much better to add the image as an Image XObject, in which case the image bytes are added to the document only once, no matter how many times you use that same image. Please remove the true value from the parameters of the addImage() method (make a before and an after PDF, and compare the file sizes to understand what I mean).

Maybe you can use AddImageFittedIntoRectangle:
var x = width / 2 - imageWM.ScaledWidth / 2;
var y = height / 2 - imageWM.ScaledHeight / 2;
var w = width / 3;
var h = height / 3;
over.AddImage(img, new Rectangle(x, y, w, h), false);

Related

Unity3d terrain: Only a sub-area of the terrain is generated

I want to generate a Terrain in Unity from SRTM data. Width and length of the terrain is specified. The terrain itself is 3061 x 2950 (width, length). A smaller area starting from (0,0) to (~2300, ~2130) has the correct terrain. The remainder is flat surface with 0 height.
The relevant code:
LocalizedMercatorProjection mercator; // basic Mercator projection
SRTM_Reader srtm; // wrapper around https://github.com/itinero/srtm with SRTM data "N49E009.hgt"

// coordinate bounds of Flein, Germany
public float max_lat = 49.1117000f;
public float min_lat = 49.0943000f;
public float min_lon = 9.1985000f;
public float max_lon = 9.2260000f;

public void GenerateTerrain()
{
    ConfigureTerrainData();
    float[,] heights = GenerateTerrainData();
    terrain.terrainData.SetHeights(0, 0, heights);
}

void ConfigureTerrainData()
{
    this.depth = Math.Abs(Convert.ToInt32(mercator.latToY(max_lat) - mercator.latToY(min_lat)));
    // this.depth = 2950;
    this.width = Math.Abs(Convert.ToInt32(mercator.lonToX(max_lon) - mercator.lonToX(min_lon)));
    // this.width = 3061;
    this.height = 400;
    this.terrain.terrainData.heightmapResolution = width + 1;
    this.terrain.terrainData.size = new Vector3(width, height, depth);
}

float[,] GenerateTerrainData()
{
    float[,] heights = new float[depth, width];
    for (int x = 0; x < width; x += 1)
    {
        for (int y = 0; y < depth; y += 1)
        {
            heights[y, x] = CalculateHeight(x, y);
        }
    }
    return heights;
}

float CalculateHeight(int x, int y)
{
    // uses SRTMData.GetElevationBilinear(lat, lon) under the hood
    float elevation = srtm.GetElevationAtSync(mercator.yToLat(y), mercator.xToLon(x));
    if (elevation >= height) return 1.0f;
    return elevation / height;
}
Does anyone have an idea why only a smaller area of the terrain is filled?
Edit 1: Setting the values for depth and width to 4097 mitigates the problem. This is not a perfect solution to me, so the question still stands.

How to convert RGB pixmap to ui.Image in Dart?

Currently I have a Uint8List, formatted like [R,G,B,R,G,B,...] for all the pixels of the image. And of course I have its width and height.
I found decodeImageFromPixels while searching, but it only takes the RGBA/BGRA formats. I converted my pixmap from RGB to RGBA and this function works fine.
However, my code now looks like this:
Uint8List rawPixel = raw.value.asTypedList(w * h * channel);

List<int> rgba = [];
for (int i = 0; i < rawPixel.length; i++) {
  rgba.add(rawPixel[i]);
  if ((i + 1) % 3 == 0) {
    rgba.add(0);
  }
}
Uint8List rgbaList = Uint8List.fromList(rgba);

Completer<Image> c = Completer<Image>();
decodeImageFromPixels(rgbaList, w, h, PixelFormat.rgba8888, (Image img) {
  c.complete(img);
});
I have to make a new list (a waste of space) and iterate through the entire list (a waste of time).
This is too inefficient in my opinion; is there any way to make this more elegant, like adding a new PixelFormat.rgb888?
Thanks in advance.
You may find that this loop is faster, as it doesn't keep appending to the list and then copying it at the end.
final rawPixel = raw.value.asTypedList(w * h * channel);
final rgbaList = Uint8List(w * h * 4); // create the Uint8List directly as we know the width and height
for (var i = 0; i < w * h; i++) {
  final rgbOffset = i * 3;
  final rgbaOffset = i * 4;
  rgbaList[rgbaOffset] = rawPixel[rgbOffset]; // red
  rgbaList[rgbaOffset + 1] = rawPixel[rgbOffset + 1]; // green
  rgbaList[rgbaOffset + 2] = rawPixel[rgbOffset + 2]; // blue
  rgbaList[rgbaOffset + 3] = 255; // alpha: fully opaque
}
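If it helps to see the index arithmetic in isolation, here is the same RGB-to-RGBA packing written as self-contained Java (purely illustrative; the class and method names are made up):

```java
// RGB triples -> RGBA quads with a constant opaque alpha, using the same
// source/destination offset arithmetic as the Dart loop above.
public class RgbToRgba {
    static byte[] toRgba(byte[] rgb, int w, int h) {
        byte[] rgba = new byte[w * h * 4];
        for (int i = 0; i < w * h; i++) {
            int s = i * 3; // source offset: 3 bytes per pixel
            int d = i * 4; // destination offset: 4 bytes per pixel
            rgba[d] = rgb[s];         // red
            rgba[d + 1] = rgb[s + 1]; // green
            rgba[d + 2] = rgb[s + 2]; // blue
            rgba[d + 3] = (byte) 255; // alpha: fully opaque
        }
        return rgba;
    }

    public static void main(String[] args) {
        // Two pixels: (10, 20, 30) and (40, 50, 60)
        byte[] out = toRgba(new byte[]{10, 20, 30, 40, 50, 60}, 2, 1);
        System.out.println(out.length); // 8 bytes: 2 pixels x RGBA
    }
}
```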
An alternative is to prepend the array with a BMP header by adapting this answer (though it would be simpler, as there would be no palette) and passing that bitmap to instantiateImageCodec, as that code is presumably highly optimized for parsing bitmaps.

What are the meanings of itextpdf pdfcontentbyte addtemplate's parameters

I am using itextpdf to merge some PDFs into a single one.
What are the meanings of the parameters of itextpdf PdfContentByte's addTemplate method? There are no docs describing them.
public void addTemplate(PdfTemplate template,
    double a, double b, double c, double d, double e, double f)
The six values a, b, c, d, e, and f are elements of a matrix that has three rows and three columns:

| a b 0 |
| c d 0 |
| e f 1 |

You can use this matrix to express a transformation in a two-dimensional system: the transformed coordinates are obtained by multiplying the row vector (x, y, 1) by this matrix.
Carrying out this multiplication results in this:
x' = a * x + c * y + e
y' = b * x + d * y + f
The third column in the matrix is fixed: you're working in two dimensions, so you don't need to calculate a new z coordinate.
When studying analytical geometry in high school, you've probably learned how to apply transformations to objects.
In PDF, we use a slightly different approach: instead of transforming objects, we transform the coordinate system.
The e and f values can be used for a translation; the a, b, c, and d values can be used for a rotation and/or scaling operation.
By default, the Current Transformation Matrix (CTM) is the identity matrix (a = d = 1 and b = c = e = f = 0), which leaves all coordinates unchanged.
With the addTemplate() method, you can add a Form XObject to a canvas and define a position using e and f, e.g.:
canvas.addTemplate(template, 36, 36);
This will add template at coordinate x = 36; y = 36.
By introducing a, b, c, and d, you can also rotate and/or scale the template.
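As a quick illustration of those formulas (plain Java, no iText dependency; the numbers are made up), applying (a, b, c, d, e, f) to a point:

```java
// Apply the PDF transformation matrix (a, b, c, d, e, f) to a point,
// per x' = a*x + c*y + e and y' = b*x + d*y + f.
public class CtmDemo {
    static double[] apply(double a, double b, double c, double d,
                          double e, double f, double x, double y) {
        return new double[]{a * x + c * y + e, b * x + d * y + f};
    }

    public static void main(String[] args) {
        // Pure translation, like canvas.addTemplate(template, 36, 36):
        double[] p = apply(1, 0, 0, 1, 36, 36, 0, 0);
        // Uniform 50% scale combined with the same translation:
        double[] q = apply(0.5, 0, 0, 0.5, 36, 36, 100, 100);
        System.out.println(p[0] + "," + p[1]); // 36.0,36.0
        System.out.println(q[0] + "," + q[1]); // 86.0,86.0
    }
}
```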
Update: as mentioned in the comments, you might want to use the overloaded methods that accept an AffineTransform parameter if you don't like the Algebra of the transformation matrix.
The code below did the trick; thanks to the guys who helped me.
FileInputStream pdfInput = new FileInputStream(pdf);
PdfReader pdfReader = new PdfReader(pdfInput);
for (int index = 1; index <= pdfReader.getNumberOfPages(); index++) {
    main.newPage();
    PdfImportedPage page = pdfWriter.getImportedPage(pdfReader, index);
    Rectangle pagesize = pdfReader.getPageSizeWithRotation(index);
    float oWidth = pagesize.getWidth();
    float oHeight = pagesize.getHeight();
    float scale = getScale(oWidth, oHeight);
    float scaledWidth = oWidth * scale;
    float scaledHeight = oHeight * scale;
    int rotation = pagesize.getRotation();
    AffineTransform transform = new AffineTransform(scale, 0, 0, scale, 0, 0);
    switch (rotation) {
        case 0:
            cb.addTemplate(page, transform);
            break;
        case 90:
            AffineTransform rotate90 = new AffineTransform(0, -1f, 1f, 0, 0, scaledHeight);
            rotate90.concatenate(transform);
            cb.addTemplate(page, rotate90);
            break;
        case 180:
            AffineTransform rotate180 = new AffineTransform(-1f, 0, 0, -1f, scaledWidth, scaledHeight);
            rotate180.concatenate(transform);
            cb.addTemplate(page, rotate180);
            break;
        case 270:
            AffineTransform rotate270 = new AffineTransform(0, 1f, -1f, 0, scaledWidth, 0);
            rotate270.concatenate(transform);
            cb.addTemplate(page, rotate270);
            break;
        default:
            cb.addTemplate(page, scale, 0, 0, scale, 0, 0);
    }
}

private static float getScale(float width, float height) {
    float scaleX = PageSize.A4.getWidth() / width;
    float scaleY = PageSize.A4.getHeight() / height;
    return Math.min(scaleX, scaleY);
}
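The getScale() method just picks the larger shrink needed to fit A4 in both directions; a stand-alone sketch (with A4 taken as roughly 595 x 842 pt, an assumed approximation) shows the effect:

```java
// Fit-to-A4 scale: the smaller of the two axis ratios guarantees the
// scaled page fits within A4 in both width and height.
public class FitScale {
    static final float A4_W = 595f, A4_H = 842f; // approximate A4 in points

    static float getScale(float width, float height) {
        float scaleX = A4_W / width;
        float scaleY = A4_H / height;
        return Math.min(scaleX, scaleY);
    }

    public static void main(String[] args) {
        // A landscape US Letter page (792 x 612 pt) is limited by its width:
        float s = getScale(792f, 612f);
        System.out.println(792f * s + " x " + 612f * s); // fits inside 595 x 842
    }
}
```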

OpenCV: how to rotate IplImage?

I need to rotate an image by a very small angle, like 1-5 degrees. Does OpenCV provide a simple way of doing that? From reading the docs I can assume that getAffineTransform() should be involved, but there is no direct example of doing something like:
IplImage *rotateImage( IplImage *source, double angle);
If you use OpenCV > 2.0 it is as easy as:
using namespace cv;

Mat rotateImage(const Mat& source, double angle)
{
    Point2f src_center(source.cols/2.0F, source.rows/2.0F);
    Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0);
    Mat dst;
    warpAffine(source, dst, rot_mat, source.size());
    return dst;
}
Note: angle is in degrees, not radians.
See the C++ interface documentation for more details and adapt as you need:
getRotationMatrix
warpAffine
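For reference, getRotationMatrix2D builds its 2x3 matrix from a = scale * cos(angle) and b = scale * sin(angle); the sketch below reproduces that formula (as given in the OpenCV docs) in plain Java with no OpenCV dependency, and checks that the rotation center stays fixed:

```java
// The 2x3 matrix produced by getRotationMatrix2D (per the OpenCV docs):
// [  a  b  (1-a)*cx - b*cy ]
// [ -b  a  b*cx + (1-a)*cy ]   with a = s*cos(angle), b = s*sin(angle).
public class RotMat {
    static double[][] rotationMatrix2D(double cx, double cy,
                                       double angleDeg, double scale) {
        double rad = Math.toRadians(angleDeg); // the API takes degrees
        double a = scale * Math.cos(rad);
        double b = scale * Math.sin(rad);
        return new double[][]{
            { a, b, (1 - a) * cx - b * cy},
            {-b, a, b * cx + (1 - a) * cy}
        };
    }

    public static void main(String[] args) {
        double[][] m = rotationMatrix2D(100, 50, 5, 1.0);
        // The center of rotation maps onto itself:
        double x = m[0][0] * 100 + m[0][1] * 50 + m[0][2];
        double y = m[1][0] * 100 + m[1][1] * 50 + m[1][2];
        System.out.println(x + ", " + y);
    }
}
```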
Edit: To the downvoter: please comment on the reason for downvoting tried and tested code.
#include "cv.h"
#include "highgui.h"
#include "math.h"
int main( int argc, char** argv )
{
IplImage* src = cvLoadImage("lena.jpg", 1);
IplImage* dst = cvCloneImage( src );
int delta = 1;
int angle = 0;
int opt = 1; // 1: rotate & zoom
// 0: rotate only
double factor;
cvNamedWindow("src", 1);
cvShowImage("src", src);
for(;;)
{
float m[6];
CvMat M = cvMat(2, 3, CV_32F, m);
int w = src->width;
int h = src->height;
if(opt)
factor = (cos(angle*CV_PI/180.) + 1.05) * 2;
else
factor = 1;
m[0] = (float)(factor*cos(-angle*2*CV_PI/180.));
m[1] = (float)(factor*sin(-angle*2*CV_PI/180.));
m[3] = -m[1];
m[4] = m[0];
m[2] = w*0.5f;
m[5] = h*0.5f;
cvGetQuadrangleSubPix( src, dst, &M);
cvNamedWindow("dst", 1);
cvShowImage("dst", dst);
if( cvWaitKey(1) == 27 )
break;
angle =(int)(angle + delta) % 360;
}
return 0;
}
UPDATE: See the following code for rotation using warpAffine:
https://code.google.com/p/opencvjp-sample/source/browse/trunk/cpp/affine2_cpp.cpp?r=48
#include <cv.h>
#include <highgui.h>
using namespace cv;
int
main(int argc, char **argv)
{
// (1)load a specified file as a 3-channel color image,
// set its ROI, and allocate a destination image
const string imagename = argc > 1 ? argv[1] : "../image/building.png";
Mat src_img = imread(imagename);
if(!src_img.data)
return -1;
Mat dst_img = src_img.clone();
// (2)set ROI
Rect roi_rect(cvRound(src_img.cols*0.25), cvRound(src_img.rows*0.25), cvRound(src_img.cols*0.5), cvRound(src_img.rows*0.5));
Mat src_roi(src_img, roi_rect);
Mat dst_roi(dst_img, roi_rect);
// (2)With specified three parameters (angle, rotation center, scale)
// calculate an affine transformation matrix by cv2DRotationMatrix
double angle = -45.0, scale = 1.0;
Point2d center(src_roi.cols*0.5, src_roi.rows*0.5);
const Mat affine_matrix = getRotationMatrix2D( center, angle, scale );
// (3)rotate the image by warpAffine taking the affine matrix
warpAffine(src_roi, dst_roi, affine_matrix, dst_roi.size(), INTER_LINEAR, BORDER_CONSTANT, Scalar::all(255));
// (4)show source and destination images with a rectangle indicating ROI
rectangle(src_img, roi_rect.tl(), roi_rect.br(), Scalar(255,0,255), 2);
namedWindow("src", CV_WINDOW_AUTOSIZE);
namedWindow("dst", CV_WINDOW_AUTOSIZE);
imshow("src", src_img);
imshow("dst", dst_img);
waitKey(0);
return 0;
}
Check my answer to a similar problem:
Rotating an image in C/C++
Essentially, use cvWarpAffine - I've described how to get the 2x3 transformation matrix from the angle in my previous answer.
Updating full answer for OpenCV 2.4 and up
// ROTATE p by R
/**
 * Rotate p according to rotation matrix R (from getRotationMatrix2D()).
 * @param R rotation matrix from getRotationMatrix2D()
 * @param p Point2f to rotate
 * @return the rotated coordinates in a Point2f
 */
Point2f rotPoint(const Mat &R, const Point2f &p)
{
Point2f rp;
rp.x = (float)(R.at<double>(0,0)*p.x + R.at<double>(0,1)*p.y + R.at<double>(0,2));
rp.y = (float)(R.at<double>(1,0)*p.x + R.at<double>(1,1)*p.y + R.at<double>(1,2));
return rp;
}
//COMPUTE THE SIZE NEEDED TO LOSSLESSLY STORE A ROTATED IMAGE
/**
 * Return the size needed to contain bounding box bb when rotated by R.
 * @param R rotation matrix from getRotationMatrix2D()
 * @param bb bounding box rectangle to be rotated by R
 * @return size (width, height) of an image that will completely contain bb when rotated by R
 */
Size rotatedImageBB(const Mat &R, const Rect &bb)
{
//Rotate the rectangle coordinates
vector<Point2f> rp;
rp.push_back(rotPoint(R,Point2f(bb.x,bb.y)));
rp.push_back(rotPoint(R,Point2f(bb.x + bb.width,bb.y)));
rp.push_back(rotPoint(R,Point2f(bb.x + bb.width,bb.y+bb.height)));
rp.push_back(rotPoint(R,Point2f(bb.x,bb.y+bb.height)));
//Find float bounding box r
float x = rp[0].x;
float y = rp[0].y;
float left = x, right = x, up = y, down = y;
for(int i = 1; i<4; ++i)
{
x = rp[i].x;
y = rp[i].y;
if(left > x) left = x;
if(right < x) right = x;
if(up > y) up = y;
if(down < y) down = y;
}
int w = (int)(right - left + 0.5);
int h = (int)(down - up + 0.5);
return Size(w,h);
}
/**
 * Rotate region "fromroi" in image "fromI" a total of "angle" degrees and put it in "toI" if toI exists.
 * If toI doesn't exist, create it such that it will hold the entire rotated region. Return toI, the rotated image.
 * This will put the rotated fromroi piece of fromI into the toI image.
 *
 * @param fromI input image to be rotated
 * @param toI output image if provided (else, if toI == 0, a Mat will be created, filled with the rotated image roi, and returned)
 * @param fromroi roi region in fromI to be rotated
 * @param angle angle in degrees to rotate
 * @return the rotated image (you can ignore it if you passed in toI)
 */
Mat rotateImage(const Mat &fromI, Mat *toI, const Rect &fromroi, double angle)
{
//CHECK STUFF
// you should protect against bad parameters here ... omitted ...
//MAKE OR GET THE "toI" MATRIX
Point2f cx((float)fromroi.x + (float)fromroi.width/2.0,fromroi.y +
(float)fromroi.height/2.0);
Mat R = getRotationMatrix2D(cx,angle,1);
Mat rotI;
if(toI)
rotI = *toI;
else
{
Size rs = rotatedImageBB(R, fromroi);
rotI.create(rs,fromI.type());
}
//ADJUST FOR SHIFTS
double wdiff = (double)((cx.x - rotI.cols/2.0));
double hdiff = (double)((cx.y - rotI.rows/2.0));
R.at<double>(0,2) -= wdiff; //Adjust the rotation point to the middle of the dst image
R.at<double>(1,2) -= hdiff;
//ROTATE
warpAffine(fromI, rotI, R, rotI.size(), INTER_CUBIC, BORDER_CONSTANT, Scalar::all(0));
//& OUT
return(rotI);
}
IplImage* rotate(double angle, float centreX, float centreY, IplImage* src, bool crop)
{
int w=src->width;
int h=src->height;
CvPoint2D32f centre;
centre.x = centreX;
centre.y = centreY;
CvMat* warp_mat = cvCreateMat(2, 3, CV_32FC1);
cv2DRotationMatrix(centre, angle, 1.0, warp_mat);
double m11= cvmGet(warp_mat,0,0);
double m12= cvmGet(warp_mat,0,1);
double m13= cvmGet(warp_mat,0,2);
double m21= cvmGet(warp_mat,1,0);
double m22= cvmGet(warp_mat,1,1);
double m23= cvmGet(warp_mat,1,2);
double m31= 0;
double m32= 0;
double m33= 1;
double x=0;
double y=0;
double u0= (m11*x + m12*y + m13)/(m31*x + m32*y + m33);
double v0= (m21*x + m22*y + m23)/(m31*x + m32*y + m33);
x=w;
y=0;
double u1= (m11*x + m12*y + m13)/(m31*x + m32*y + m33);
double v1= (m21*x + m22*y + m23)/(m31*x + m32*y + m33);
x=0;
y=h;
double u2= (m11*x + m12*y + m13)/(m31*x + m32*y + m33);
double v2= (m21*x + m22*y + m23)/(m31*x + m32*y + m33);
x=w;
y=h;
double u3= (m11*x + m12*y + m13)/(m31*x + m32*y + m33);
double v3= (m21*x + m22*y + m23)/(m31*x + m32*y + m33);
int left= MAX(MAX(u0,u2),0);
int right= MIN(MIN(u1,u3),w);
int top= MAX(MAX(v0,v1),0);
int bottom= MIN(MIN(v2,v3),h);
ASSERT(left<right&&top<bottom); // throw message?
if (left<right&&top<bottom)
{
IplImage* dst= cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, src->nChannels);
cvWarpAffine(src, dst, warp_mat/*, CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0)*/);
if (crop) // crop and resize to initial size
{
IplImage* dst_crop= cvCreateImage(cvSize(right-left, bottom-top), IPL_DEPTH_8U, src->nChannels);
cvSetImageROI(dst,cvRect(left,top,right-left,bottom-top));
cvCopy(dst,dst_crop);
cvReleaseImage(&dst);
cvReleaseMat(&warp_mat);
//ver1
//return dst_crop;
// ver2 resize
IplImage* out= cvCreateImage(cvSize(w, h), IPL_DEPTH_8U, src->nChannels);
cvResize(dst_crop,out);
cvReleaseImage(&dst_crop);
return out;
}
else
{
/*cvLine( dst, cvPoint(left,top),cvPoint(left, bottom), cvScalar(0, 0, 255, 0) ,1,CV_AA);
cvLine( dst, cvPoint(right,top),cvPoint(right, bottom), cvScalar(0, 0, 255, 0) ,1,CV_AA);
cvLine( dst, cvPoint(left,top),cvPoint(right, top), cvScalar(0, 0, 255, 0) ,1,CV_AA);
cvLine( dst, cvPoint(left,bottom),cvPoint(right, bottom), cvScalar(0, 0, 255, 0) ,1,CV_AA);*/
cvReleaseMat(&warp_mat);
return dst;
}
}
else
{
return NULL; //assert?
}
}

How do you draw a cylinder with OpenGLES?

How do you draw a cylinder with OpenGLES?
The first step is to write a subroutine that draws a triangle; I'll leave that up to you. Then just draw a series of triangles that make up the shape of a cylinder. The trick is to approximate a circle with a polygon with a large number of sides, like 64. Here's some pseudo-code off the top of my head:
for (i = 0; i < 64; i++)
{
    angle = 360 * i / 63; // Or perhaps 2 * PI * i / 63
    cx[i] = sin(angle);
    cy[i] = cos(angle);
}

for (i = 0; i < 63; i++)
{
    v0 = Vertex(cx[i], cy[i], 0);
    v1 = Vertex(cx[i + 1], cy[i + 1], 0);
    v2 = Vertex(cx[i], cy[i], 1);
    v3 = Vertex(cx[i + 1], cy[i + 1], 1);
    DrawTriangle(v0, v1, v2);
    DrawTriangle(v1, v3, v2);
    // If you have it: DrawQuad(v0, v1, v3, v2);
}
There is almost certainly a mistake in the code. Most likely is that I've screwed up the winding order in the triangle draws so you could end up with only half the triangles apparently visible or a very odd case with only the back visible.
For performance you'll soon want to draw triangle strips and fans, but this should get you started.
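A runnable version of the ring computation (Java here just for concreteness; note that Math.sin/Math.cos take radians, which is likely the mistake hinted at in the pseudo-code above). This variant uses n distinct angles and relies on the draw loop wrapping around with modulo instead of duplicating the seam vertex:

```java
// Generate n points on the unit circle for the top/bottom rings of a
// cylinder; side quads pair point i with point (i + 1) % n.
public class CylinderRing {
    static double[][] ring(int n) {
        double[] cx = new double[n];
        double[] cy = new double[n];
        for (int i = 0; i < n; i++) {
            double angle = 2 * Math.PI * i / n; // radians, n distinct angles
            cx[i] = Math.sin(angle);
            cy[i] = Math.cos(angle);
        }
        return new double[][]{cx, cy};
    }

    public static void main(String[] args) {
        double[][] r = ring(64);
        // Every point lies on the unit circle:
        double len0 = Math.hypot(r[0][0], r[1][0]);
        double len17 = Math.hypot(r[0][17], r[1][17]);
        System.out.println(len0 + " " + len17);
    }
}
```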
You'll need to do it via object loading; you can't call on 3D shape primitives using OpenGL ES.
Look through Jeff Lamarche's blog; there are lots of really good resources on how to object load there.
You can indeed draw a cylinder in OpenGL ES by calculating the geometry of the object. The open source GLUT|ES project has geometry drawing routines for solids (cylinders, spheres, etc.) within its glutes_geometry.c source file. Unfortunately, these functions use the glBegin() and glEnd() calls, which aren't present in OpenGL ES.
Code for a partially working cylinder implementation for OpenGL ES can be found in the forum thread here.
I hope this can help you. This is my implementation of a cylinder for Android, using the OpenGL ES 1.x fixed-function API (GL10):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;
public class Cylinder {
public Cylinder(int n) {
this.numOfVertex = n;
float[] vertex = new float[3 * (n + 1) * 2];
byte[] baseIndex = new byte[n];
byte[] topIndex = new byte[n];
byte[] edgeIndex = new byte[n*2 + 2];
double perAngle = 2 * Math.PI / n;
for (int i = 0; i < n; i++) {
double angle = i * perAngle;
int offset = 6 * i;
vertex[offset + 0] = (float)(Math.cos(angle) * radious) + cx;
vertex[offset + 1] = -height;
vertex[offset + 2] = (float)(Math.sin(angle) * radious) + cy;
vertex[offset + 3] = (float)(Math.cos(angle) * radious) + cx;
vertex[offset + 4] = height;
vertex[offset + 5] = (float)(Math.sin(angle) * radious) + cy;
topIndex[i] = (byte)(2*i);
baseIndex[i] = (byte)(2*i +1);
edgeIndex[2*i + 1] = baseIndex[i];
edgeIndex[2*i] = topIndex[i];
}
edgeIndex[2*n] = topIndex[0];
edgeIndex[2*n+1] = baseIndex[0];
ByteBuffer vbb = ByteBuffer
.allocateDirect(vertex.length * 4)
.order(ByteOrder.nativeOrder());
mFVertexBuffer = vbb.asFloatBuffer();
mFVertexBuffer.put(vertex);
mFVertexBuffer.position(0);
normalBuffer = mFVertexBuffer;
mCircleBottom = ByteBuffer.allocateDirect(baseIndex.length);
mCircleBottom.put(baseIndex);
mCircleBottom.position(0);
mCircleTop = ByteBuffer.allocateDirect(topIndex.length);
mCircleTop.put(topIndex);
mCircleTop.position(0);
mEdge = ByteBuffer.allocateDirect(edgeIndex.length);
mEdge.put(edgeIndex);
mEdge.position(0);
}
public void draw(GL10 gl)
{
gl.glCullFace(GL10.GL_BACK);
gl.glEnable(GL10.GL_CULL_FACE);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mFVertexBuffer);
gl.glNormalPointer(GL10.GL_FLOAT, 0, normalBuffer);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glPushMatrix();
gl.glColor4f(1f, 0, 0, 0);
gl.glDrawElements( GL10.GL_TRIANGLE_STRIP, numOfVertex * 2 + 2, GL10.GL_UNSIGNED_BYTE, mEdge);
gl.glPopMatrix();
gl.glPushMatrix();
gl.glColor4f(0.9f, 0, 0, 0);
gl.glDrawElements( GL10.GL_TRIANGLE_FAN, numOfVertex, GL10.GL_UNSIGNED_BYTE, mCircleTop);
gl.glPopMatrix();
gl.glPushMatrix();
gl.glTranslatef(0, 2*height, 0);
gl.glRotatef(-180, 1, 0, 0);
gl.glColor4f(0.9f,0, 0, 0);
gl.glDrawElements( GL10.GL_TRIANGLE_FAN, numOfVertex , GL10.GL_UNSIGNED_BYTE, mCircleBottom);
gl.glPopMatrix();
}
private FloatBuffer mFVertexBuffer;
private FloatBuffer normalBuffer;
private ByteBuffer mCircleBottom;
private ByteBuffer mCircleTop;
private ByteBuffer mEdge;
private int numOfVertex;
private int cx = 0;
private int cy = 0;
private int height = 1;
private float radious = 1;
}
You can draw a cylinder procedurally by calculating the geometry. On top of that, though, you should make it support triangle stripping, and you also need to calculate the texture mapping coordinates and possibly the normals too. So it will take a bit of thinking to do from scratch.
I have created a module for Unity3D in C# that does exactly this and allows you to tweak the parameters. You should be able to easily convert to C or C++ as the geometry calculation is the same everywhere. Watch the video to see what it's about and download the code from GitHub.