Underlining with pdfnet results in different line thickness - annotations

Here is the code I use for underlining a selection of text. I begin by calling the AddUnderline() method; the other methods are helpers.
private pdftron.SDF.Obj CreateUnderlineAppearance(pdftron.PDF.Rect bbox)
{
    ElementBuilder builder = new ElementBuilder();
    ElementWriter writer = new ElementWriter();
    builder.PathBegin();
    builder.MoveTo(bbox.x1, bbox.y1);
    builder.LineTo(bbox.x2, bbox.y1);
    Element line = builder.PathEnd();
    //Set color attributes for the line...
    line.SetPathFill(false);
    line.SetPathStroke(true);
    GState gs = line.GetGState();
    gs.SetStrokeColorSpace(ColorSpace.CreateDeviceRGB());
    gs.SetStrokeColor(new ColorPt(0, 0, 0)); // black
    gs.SetLineWidth(2);
    writer.Begin(m_document);
    writer.WriteElement(line);
    pdftron.SDF.Obj stm = writer.End();
    builder.Dispose();
    writer.Dispose();
    // Set the bounding box
    stm.PutRect("BBox", bbox.x1, bbox.y1, bbox.x2, bbox.y2);
    stm.PutName("Subtype", "Form");
    return stm;
}
public Annot CreateUnderlineAnnot(pdftron.PDF.Rect rect)
{
    Annot underlineAnnot = Annot.Create(m_document, Annot.Type.e_Underline, rect);
    underlineAnnot.SetAppearance(CreateUnderlineAppearance(rect));
    return underlineAnnot;
}
public void AddUnderline()
{
    if (m_document != null)
    {
        PDFViewCtrl.Selection selection = m_pdfViewer.GetSelection();
        int pageNumber = selection.GetPageNum();
        double[] quads = selection.GetQuads();
        int numQuads = quads.Length / 8;
        if (quads.Length % 8 == 0) // each quad is 8 numbers (4 x/y points), so the length must be a multiple of 8
        {
            Console.WriteLine("GetRectsFromQuads - numQuads: " + numQuads.ToString());
            for (int i = 0; i < numQuads; i++)
            {
                Rect selectionRect = GetSelectionRect(ref quads, i);
                //Console.WriteLine("GetRectsFromQuads - aRect: " + rectX1.ToString() + " | " + rectY1.ToString() + " | " + rectX2.ToString() + " | " + rectY2.ToString());
                Annot underlineAnnot = CreateUnderlineAnnot(selectionRect);
                m_pdfViewer.AddUnderlineAnnotationToPage(underlineAnnot, pageNumber);
                //m_pdfViewer.Refresh(); --> to see how this algorithm works when debugging
            }
            m_pdfViewer.RefreshAnnotations();
        }
    }
}
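GetSelectionRect() is not shown above. For reference, a minimal sketch of such a helper is below; it assumes GetQuads() returns eight doubles per quad (four x/y corner points) and simply takes the bounding box of those corners. This is an illustration, not the original helper.
private pdftron.PDF.Rect GetSelectionRect(ref double[] quads, int quadIndex)
{
    // Each quad occupies 8 consecutive doubles: x1,y1, x2,y2, x3,y3, x4,y4 (assumed layout).
    int o = quadIndex * 8;
    double minX = Math.Min(Math.Min(quads[o], quads[o + 2]), Math.Min(quads[o + 4], quads[o + 6]));
    double maxX = Math.Max(Math.Max(quads[o], quads[o + 2]), Math.Max(quads[o + 4], quads[o + 6]));
    double minY = Math.Min(Math.Min(quads[o + 1], quads[o + 3]), Math.Min(quads[o + 5], quads[o + 7]));
    double maxY = Math.Max(Math.Max(quads[o + 1], quads[o + 3]), Math.Max(quads[o + 5], quads[o + 7]));
    return new pdftron.PDF.Rect(minX, minY, maxX, maxY);
}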
You can see in the image, if you look closely, that some lines are thicker or thinner than others. Is this fixable? By the way, when I zoom in or out the problem goes away...

You need to set the following on your PDF view control:
PDFViewCtrl.SetThinLineAdjustment(true, true);
That will remove the aliasing on the lines, and means that lines which would otherwise render at 1.5px are drawn at 1px, and so on. See here: https://www.pdftron.com/pdfnet/mobile/docs/WinRT/html/M_pdftron_PDF_PDFViewCtrl_SetThinLineAdjustment.htm
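For example, assuming m_pdfViewer is the PDFViewCtrl instance from the question, the call can go wherever the viewer is set up (a sketch based on the linked documentation, not the original poster's code):
// Snap thin lines to the pixel grid and apply stroke adjustment so that
// underlines keep a consistent on-screen thickness at the current zoom.
m_pdfViewer.SetThinLineAdjustment(true, true);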

Related

Uno platform: load embedded resource file

How do I load an embedded resource for the Android head? My code below works for UWP:
Assembly assembly = GetType ().GetTypeInfo ().Assembly;
string[] names = assembly.GetManifestResourceNames ();
Console.WriteLine ("Resource Names");
foreach (var name in names)
    Console.WriteLine (" " + name);
using (var stream = assembly.GetManifestResourceStream (Source))
{
    bmpSrc = SKBitmap.Decode (stream);
}
And, the XAML is
<controls:ExpandableImage
...
Source="UnoTest.Assets.icons.folder_tab.png"
/>
The file resides in UnoTest.Shared/Assets/ and is marked as "embedded resource".
The debug output shows that one of the "names" is
"UnoTest.Droid.Assets.icons.folder_tab.png"
indicating that my URI should be referring to the Android head.
EDIT
Ultimately, in this experiment, I intend to paint the left part of the bitmap at the left of the target area, the right part at the right, and fill the middle by repeating a vertical stripe from the bitmap's mid-section. Then, draw some text over it.
private void OnPaintSurface(object sender, SKPaintSurfaceEventArgs e)
{
    ...
    // identify left, right halves and a 10px wide swath of the middle of the source bitmap
    SKRect rectSrcLeft = new SKRect(0, 0, bmpSrc.Width / 2, bmpSrc.Height);
    SKRect rectSrcRight = new SKRect(bmpSrc.Width / 2, 0, bmpSrc.Width, bmpSrc.Height);
    SKRect rectSrcMid = new SKRect(bmpSrc.Width / 2 - 5, 0, bmpSrc.Width / 2 + 5, bmpSrc.Height);
    // create a new bitmap containing a 10 pixel wide swatch from middle of bmpSrc
    SKBitmap bmpSrcMid = new SKBitmap(10, bmpSrc.Height);
    using (SKCanvas tempCanvas = new SKCanvas(bmpSrcMid))
    {
        SKRect rectDest = new SKRect(0, 0, rectSrcMid.Width, rectSrcRight.Height);
        tempCanvas.DrawBitmap(bmpSrc, rectSrcMid, rectDest);
    }
    var canvas = e.Surface.Canvas;
    using (SKPaint paint = new SKPaint())
    {
        canvas.Save();
        float hDest = canvas.DeviceClipBounds.Height;
        float scale = hDest / (float)bmpSrc.Height;
        canvas.Scale(scale);
        paint.IsAntialias = true;
        // determine dest rect for middle section
        float rightDest = (float)textBounds.Width / scale; // rightmost point of whole target area
        SKRect rectDestMid = new SKRect(rectSrcLeft.Width, 0, rightDest - rectSrcRight.Width, rectSrcRight.Height);
        // left part of tab
        canvas.DrawBitmap(bmpSrc, rectSrcLeft, rectSrcLeft, paint);
        // right part of tab
        {
            SKRect rectDest = new SKRect(rectDestMid.Right, 0, rightDest, rectSrcRight.Height);
            canvas.DrawBitmap(bmpSrc, rectSrcRight, rectDest, paint);
        }
        // mid part of tab
        paint.Shader = SKShader.CreateBitmap(bmpSrcMid,
            SKShaderTileMode.Repeat,
            SKShaderTileMode.Repeat);
        canvas.DrawRect(rectDestMid, paint);
        canvas.Restore(); // back to orig scale
    }
    using (SKPaint paint = new SKPaint { Color = SKColors.Black })
    {
        float leftText = 20; // matches padding in ListPage.xaml
        float bottomText = canvas.DeviceClipBounds.Height / 2 + textCoreHeight / 2;
        canvas.DrawText(Label, new SKPoint(leftText, bottomText), paint);
    }
}
The XAML for the control is:
<UserControl
x:Class="UnoTest.Shared.Controls.ExpandableImage"
...
<skia:SKXamlCanvas x:Name="EICanvas" PaintSurface="OnPaintSurface" />
</UserControl>
Do I need to write some code to modify the "generic" URI for the Android case?
Embedded resources defined in a shared project take on the default namespace of whichever project references that shared project, so by default the resource name is different in every head project in the Uno templates.
You have multiple options:
Change the default namespace to be the same in all projects.
Skip everything up to the second dot of the manifest resource name and use the remainder as the base.
Use a different project (a .NET Standard 2.0 project will do) and place your resources there.
Here's what I ended up doing. It is not 100% robust but pretty close. And it's really simple.
if (Source == null)
    return;
string sourceWithNameSpace = null;
Assembly assembly = GetType ().GetTypeInfo ().Assembly;
string[] names = assembly.GetManifestResourceNames ();
foreach (var name in names)
{
    if (name.EndsWith (Source))
    {
        sourceWithNameSpace = name;
        break;
    }
}
if (sourceWithNameSpace == null)
    return;
using (var stream = assembly.GetManifestResourceStream (sourceWithNameSpace))
{
    bmpSrc = SKBitmap.Decode (stream);
}
And, in the XAML, leave off the project head from the Source path, e.g.:
Source="Assets.icons.folder_tab.png"

AR camera distance measurement

I have a question about AR (augmented reality).
I want to know how to show the distance (in centimeters, for example) between the AR camera and a target object, using a smartphone.
Can I do that in Unity? Should I use AR Foundation, and with ARCore? How would I write the code?
I tried to find some relevant code (below), but it seems to just print the distance between two objects, nothing about the "AR camera"...
var other : Transform;
if (other) {
    var dist = Vector3.Distance(other.position, transform.position);
    print ("Distance to other: " + dist);
}
Thanks again!
Here is how to do it in Unity with AR Foundation 4.1.
This example script prints the depth in meters at the depth texture's center and works both with ARCore and ARKit:
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Assertions;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;
public class GetDepthOfCenterPixel : MonoBehaviour {
    // assign this field in inspector
    [SerializeField] AROcclusionManager manager = null;
    IEnumerator Start() {
        while (ARSession.state < ARSessionState.SessionInitializing) {
            // manager.descriptor.supportsEnvironmentDepthImage will return a correct value if ARSession.state >= ARSessionState.SessionInitializing
            yield return null;
        }
        if (!manager.descriptor.supportsEnvironmentDepthImage) {
            Debug.LogError("!manager.descriptor.supportsEnvironmentDepthImage");
            yield break;
        }
        while (true) {
            if (manager.TryAcquireEnvironmentDepthCpuImage(out var cpuImage) && cpuImage.valid) {
                using (cpuImage) {
                    Assert.IsTrue(cpuImage.planeCount == 1);
                    var plane = cpuImage.GetPlane(0);
                    var dataLength = plane.data.Length;
                    var pixelStride = plane.pixelStride;
                    var rowStride = plane.rowStride;
                    Assert.AreEqual(0, dataLength % rowStride, "dataLength should be divisible by rowStride without a remainder");
                    Assert.AreEqual(0, rowStride % pixelStride, "rowStride should be divisible by pixelStride without a remainder");
                    var numOfRows = dataLength / rowStride;
                    var centerRowIndex = numOfRows / 2;
                    var centerPixelIndex = rowStride / (pixelStride * 2);
                    var centerPixelData = plane.data.GetSubArray(centerRowIndex * rowStride + centerPixelIndex * pixelStride, pixelStride);
                    var depthInMeters = convertPixelDataToDistanceInMeters(centerPixelData.ToArray(), cpuImage.format);
                    print($"depth texture size: ({cpuImage.width},{cpuImage.height}), pixelStride: {pixelStride}, rowStride: {rowStride}, pixel pos: ({centerPixelIndex}, {centerRowIndex}), depthInMeters of the center pixel: {depthInMeters}");
                }
            }
            yield return null;
        }
    }
    float convertPixelDataToDistanceInMeters(byte[] data, XRCpuImage.Format format) {
        switch (format) {
            case XRCpuImage.Format.DepthUint16:
                return BitConverter.ToUInt16(data, 0) / 1000f;
            case XRCpuImage.Format.DepthFloat32:
                return BitConverter.ToSingle(data, 0);
            default:
                throw new Exception($"Format not supported: {format}");
        }
    }
}
I'm working on AR depth images as well, and the basic idea is:
Acquire an image using the API; it's normally in the Depth16 format.
Read the image as a ShortBuffer, since Depth16 means each pixel is 16 bits.
Get the distance value, which is stored in the lower 13 bits of each sample; you can do this with (shortSample & 0x1FFF). That gives you the distance for each pixel, normally in millimeters.
By doing this for all the pixels, you can create a depth image and store it as JPEG or another format. Here's sample code using AR Engine to get the distance:
try (Image depthImage = arFrame.acquireDepthImage()) {
    int imwidth = depthImage.getWidth();
    int imheight = depthImage.getHeight();
    Image.Plane plane = depthImage.getPlanes()[0];
    ShortBuffer shortDepthBuffer = plane.getBuffer().asShortBuffer();
    File sdCardFile = Environment.getExternalStorageDirectory();
    Log.i(TAG, "The storage path is " + sdCardFile);
    File file = new File(sdCardFile, "RawdepthImage.jpg");
    Bitmap disBitmap = Bitmap.createBitmap(imwidth, imheight, Bitmap.Config.RGB_565);
    for (int i = 0; i < imheight; i++) {
        for (int j = 0; j < imwidth; j++) {
            int index = (i * imwidth + j);
            shortDepthBuffer.position(index);
            short depthSample = shortDepthBuffer.get();
            short depthRange = (short) (depthSample & 0x1FFF);
            // If you only want the distance value, here it is
            byte value = (byte) depthRange;
            disBitmap.setPixel(j, i, Color.rgb(value, value, value));
        }
    }
    // I rotate the image for a better view
    Matrix matrix = new Matrix();
    matrix.setRotate(90);
    Bitmap rotatedBitmap = Bitmap.createBitmap(disBitmap, 0, 0, imwidth, imheight, matrix, true);
    try {
        FileOutputStream out = new FileOutputStream(file);
        rotatedBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
        MainActivity.num++;
    } catch (Exception e) {
        e.printStackTrace();
    }
} catch (Exception e) {
    e.printStackTrace();
}
While the answers are great, they may be too complicated and advanced for this question, which is about the distance between the ARCamera and another object, and not about the depth of pixels and their occlusion.
transform.position gives you the position of whatever game object you attach the script to in the hierarchy. So attach the script to the ARCamera object. And obviously, other should be the target object.
Alternatively, you can get references to the two game objects using inspector variables or GetComponent, as sketched below.
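For illustration, a minimal script along those lines could look like this (my own example, not code from the question; the target field name and the centimeter conversion are assumptions):
using UnityEngine;
// Attach this script to the AR Camera object; assign the target object in the Inspector.
public class DistanceToTarget : MonoBehaviour
{
    [SerializeField] Transform target; // the object to measure to (assumed field name)
    void Update()
    {
        if (target == null) return;
        // transform.position is the AR camera's world position because the script sits on the camera.
        float meters = Vector3.Distance(transform.position, target.position);
        Debug.Log($"Distance to target: {meters * 100f:F1} cm");
    }
}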
// raycasting should be done in Update()
Ray ray = new Ray(cam.transform.position, cam.transform.forward);
if (Physics.Raycast(ray, out RaycastHit info, 50f, layerMaskAR)) // 50 meter detection range because of the 50f
{
    distanca.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
}
This is the function that does what you need; distanca is of course a UI Text element, and the layer is assigned to the object/prefab.
int layerMaskAR = 1 << 6; // 6 because my custom layer "layerMaskAR" is the 6th layer
This raycasts only against objects in that layer; everything else is ignored. (If you don't want to ignore anything, remove the layer mask from the raycast and it will report the name of anything with a collider.)
Totally doable with this line of code:
Vector3.Distance(gameObject.transform.position, Camera.main.transform.position)

iTextSharp ColumnText drawing and height not correct

I'm trying to place text into a ColumnText object and calculate its exact height. I'm using iTextSharp 5.5.9
What I'm finding is that there appears to be some "padding" at the top of the ColumnText object and it causes the height calculation to be flawed and the text to be misplaced. I'm trying to understand exactly what's happening - here's my code:
var doc = new Document(PageSize.LETTER, DocumentRenderer.PageMarginSize, DocumentRenderer.PageMarginSize, DocumentRenderer.PageTopMarginSize, DocumentRenderer.PageMarginSize);
doc.SetPageSize(PageSize.LETTER.Rotate());
var writer = PdfWriter.GetInstance(doc, fs);
doc.Open();
var x = 5;
var y = doc.PageSize.Height - 5;
var width = doc.PageSize.Width/ 2;
var height = doc.PageSize.Height - 10;
var lines = new List<string> { "Test string", };// "Test string 2", "Test string 3", "Test string 4" };
for (int i = 0; i < 6; i++)
{
    lines.Add("Test String " + i);
}
lines.Add(lines.Aggregate((a, b) => a + ", " + b));
lines.Add(lines.Aggregate((a, b) => a + "\n" + b));
PdfContentByte cb = writer.DirectContent;
cb.SetTextRenderingMode(PdfContentByte.TEXT_RENDER_MODE_FILL);
cb.SetCMYKColorFill(DocumentRenderer.DarkColor.C, DocumentRenderer.DarkColor.M, DocumentRenderer.DarkColor.Y, DocumentRenderer.DarkColor.K);
Font ListFont = new Font(DocumentRenderer.TextFont, 12);
ColumnText ct = new ColumnText(cb);
ct.Leading = 14;
//ct.UseAscender = true;
ct.Alignment = Element.ALIGN_LEFT;
ct.SetSimpleColumn(x, y - height, x + width, y);
lines.ForEach(line =>
{
    ct.AddText(new Phrase(line + "\n", ListFont));
});
ct.Go();
var height1 = (ct.LinesWritten * ct.Leading);
var height2 = y - ct.YLine;
DocumentRenderer.DrawBox(ref doc, ref writer, new BoxInfo(x, y, width, height2), DocumentRenderer.HighlightColor);
if (doc.IsOpen())
    doc.Close();
Here are the results - please note that I highlighted the text with my cursor for effect.
I stumbled onto this post (iText placement of Phrase within ColumnText) regarding UseAscender and tried it out by uncommenting that line in the code. You can see the result on the right in the picture; it didn't work the way I was hoping.
What I really want is for the text to still have its leading drawn correctly, but for the text to start at the top of the ColumnText. Also, I'm trying to get an accurate height for the text. ct.YLine appears to give me the line at the bottom of the text, but the text overhangs below that slightly. I'm not sure why that's not correct, but I need to know the exact height if that's possible...
Anyone know how I might achieve these two things?

How to target a MovieClip in Animate CC in this drag and drop code

Is there a way to modify this code for Animate CC so that I can create an object on the stage and interact with it?
It is a bit of a pain to do drag and drop in CreateJS for Animate CC.
There is nothing on the web that describes how to do it for Animate CC or Flash CC; even the documentation says nothing about drag and drop on the canvas.
//Stage
var stage = new createjs.Stage("demoCanvas");
//VARIABLES
//Drag Object Size
dragRadius = 40;
//Destination Size
destHeight = 100;
destWidth = 100;
//Circle Creation
var label = new createjs.Text("DRAG ME", "14px Lato", "#fff");
label.textAlign="center";
label.y -= 7;
var circle = new createjs.Shape();
circle.graphics.setStrokeStyle(2).beginStroke("black")
.beginFill("red").drawCircle(0,0, dragRadius);
//Drag Object Creation
//Placed inside a container to hold both label and shape
var dragger = new createjs.Container();
dragger.x = dragger.y = 100;
dragger.addChild(circle, label);
dragger.setBounds(100, 100, dragRadius*2, dragRadius*2);
//DragRadius * 2 because 2*r = width of the bounding box
var label2 = new createjs.Text("HERE", "bold 14px Lato", "#000");
label2.textAlign = "center";
label2.x += 50;
label2.y += 40;
var box = new createjs.Shape();
box.graphics.setStrokeStyle(2).beginStroke("black").rect(0, 0, destHeight, destWidth);
var destination = new createjs.Container();
destination.x = 350;
destination.y = 50;
destination.setBounds(350, 50, destHeight, destWidth);
destination.addChild(label2, box);
//DRAG FUNCTIONALITY =====================
dragger.on("pressmove", function(evt){
evt.currentTarget.x = evt.stageX;
evt.currentTarget.y = evt.stageY;
stage.update(); //much smoother because it refreshes the screen every pixel movement instead of the FPS set on the Ticker
if(intersect(evt.currentTarget, destination)){
evt.currentTarget.alpha=0.2;
box.graphics.clear();
box.graphics.setStrokeStyle(3)
.beginStroke("#0066A4")
.rect(0, 0, destHeight, destWidth);
}else{
evt.currentTarget.alpha=1;
box.graphics.clear(); box.graphics.setStrokeStyle(2).beginStroke("black").rect(0, 0, destHeight, destWidth);
}
});
//Mouse UP and SNAP====================
dragger.on("pressup", function(evt) {
if(intersect(evt.currentTarget, destination)){
dragger.x = destination.x + destWidth/2;
dragger.y = destination.y + destHeight/2;
dragger.alpha = 1;
box.graphics.clear();
box.graphics.setStrokeStyle(2).beginStroke("black").rect(0, 0, destHeight, destWidth);
stage.update(evt);
}
});
//Tests if two objects are intersecting
//Sees if obj1 passes through the first and last line of its
//bounding box in the x and y sectors
//Utilizes globalToLocal to get the x and y of obj1 in relation
//to obj2
//PRE: Must have bounds set for each object
//Post: Returns true or false
function intersect(obj1, obj2){
    var objBounds1 = obj1.getBounds().clone();
    var objBounds2 = obj2.getBounds().clone();
    var pt = obj1.globalToLocal(objBounds2.x, objBounds2.y);
    var h1 = -(objBounds1.height / 2 + objBounds2.height);
    var h2 = objBounds2.width / 2;
    var w1 = -(objBounds1.width / 2 + objBounds2.width);
    var w2 = objBounds2.width / 2;
    if(pt.x > w2 || pt.x < w1) return false;
    if(pt.y > h2 || pt.y < h1) return false;
    return true;
}
//Adds the object into stage
stage.addChild(destination, dragger);
stage.mouseMoveOutside = true;
stage.update();
Thanks.
I am not exactly sure what you are asking. The demo you showed works fine (looks like it came from this codepen), and it is not clear what you are trying to add. This demo was made directly in code, not with Animate CC - which is really good for building assets, animations, and display list structure, but you should write application code around what gets exported.
There is plenty of documentation, and there are plenty of examples of drag and drop online, in the EaselJS GitHub and the EaselJS docs:
DragAndDrop demo in GitHub
Live demo on EaselJS demos page
Documentation on pressMove
Tutorial on Mouse Events which includes Drag and Drop
I recommend narrowing down what you are trying to do, show what code or approaches you have tried so far, and posting specific questions here.
Lastly, here is the first part of an ongoing series for working with Animate CC: http://blog.gskinner.com/archives/2015/04/introduction-to-the-flash-cc-html5-canvas-document.html
Cheers.

Strange mouse event position problem

I have a function which returns mouse event positions.
// returns the xy point where the mouse event occurred.
function getXY(ev){
    var xypoint = new Point();
    if (ev.layerX || ev.layerY) { // Firefox
        xypoint.x = ev.layerX;
        xypoint.y = ev.layerY;
    } else if (ev.offsetX || ev.offsetX == 0) { // Opera
        xypoint.x = ev.offsetX;
        xypoint.y = ev.offsetY;
    }
    return xypoint;
}
I am capturing mouse events to perform drawings on an HTML5 canvas. Sometimes I get negative values for xypoint. When I debug the application using Firebug I see really strange behavior. For example, if I put my breakpoint at the 4th line of this function with the condition (xypoint.x<0 || xypoint.y<0), it stops at the breakpoint and I can see that ev.layerX and ev.layerY are positive and correct, but xypoint.x or xypoint.y is negative. If I reassign the values using the Firebug console I get correct values in xypoint. Can anyone explain what is happening?
The above works fine if I move the mouse at normal speed. If I move the mouse very rapidly I get this behavior.
Thanks
Handling mouse position was an absolute pain with Canvas. You have to make a ton of adjustments. I use this, which has a few minor errors, but works even with the drag-and-drop divs I use in my app:
getCurrentMousePosition = function(e) {
    // Take mouse position, subtract element position to get relative position.
    if (document.layers) {
        xMousePos = e.pageX;
        yMousePos = e.pageY;
        xMousePosMax = window.innerWidth+window.pageXOffset;
        yMousePosMax = window.innerHeight+window.pageYOffset;
    } else if (document.all) {
        xMousePos = window.event.x+document.body.scrollLeft;
        yMousePos = window.event.y+document.body.scrollTop;
        xMousePosMax = document.body.clientWidth+document.body.scrollLeft;
        yMousePosMax = document.body.clientHeight+document.body.scrollTop;
    } else if (document.getElementById) {
        xMousePos = e.pageX;
        yMousePos = e.pageY;
        xMousePosMax = window.innerWidth+window.pageXOffset;
        yMousePosMax = window.innerHeight+window.pageYOffset;
    }
    elPos = getElementPosition(document.getElementById("cvs"));
    xMousePos = xMousePos - elPos.left;
    yMousePos = yMousePos - elPos.top;
    return {x: xMousePos, y: yMousePos};
}
getElementPosition = function(el) {
    var _x = 0,
        _y = 0;
    if(document.body.style.marginLeft == "" && document.body.style.marginRight == "" ) {
        _x += (window.innerWidth - document.body.offsetWidth) / 2;
    }
    if(el.offsetParent != null) while(1) {
        _x += el.offsetLeft;
        _y += el.offsetTop;
        if(!el.offsetParent) break;
        el = el.offsetParent;
    } else if(el.x || el.y) {
        if(el.x) _x = el.x;
        if(el.y) _y = el.y;
    }
    return { top: _y, left: _x };
}
Moral of the story? You need to figure in the offset of the canvas to have proper results. You're capturing the XY from the event, which has an offset that's not being captured in relation to the window's XY. Make sense?