Android iText Pdf SignatureField - itext

iText kernel:
When I pass a specific rectangle and create a signature field, the field ends up at a different location than the one I specified.
val pdfReader: com.itextpdf.kernel.pdf.PdfReader =
    com.itextpdf.kernel.pdf.PdfReader(FileInputStream(file2))
val pdfWriter = PdfWriter(dest)
val document = PdfDocument(pdfReader, pdfWriter)
val page: PdfPage = document.getPage(page)
val extractionStrategy = TextPlusXYExtractionStrategy()
val parser = PdfCanvasProcessor(extractionStrategy)
parser.processPageContent(page)
for (i in signatures) {
    val rWidth = 200f
    val rHeight = 200f
    val acroForm = PdfAcroForm.getAcroForm(document, true)
    val signature: PdfFormField = PdfSignatureFormField.createSignature(
        document,
        com.itextpdf.kernel.geom.Rectangle(i.rect.left.toFloat(), i.rect.top.toFloat(), rWidth, rHeight)
    )
    signature.setVisibility(0)
    signature.setFieldName(System.currentTimeMillis().toString() + UUID.randomUUID())
    acroForm.addField(signature, page)
}
document.close()
///////////////////////////////////////////////////////////////////
width = 200
height = 200
Rect 1:
x = 383.0
y = 209.0
Rect 2:
x = 440.0
y = 530.0
Rect 3:
x = 464.0
y = 879.0
Rect 4:
x = 242.0
y = 872.0
Rect 5:
x = 255.0
y = 493.0
Result: (screenshot of the misplaced signature fields omitted)
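One thing worth checking (an assumption, since the full code is not shown): iText's Rectangle is given in PDF user space, where y is measured upward from the bottom-left of the page, while Android's Rect measures top downward from the top-left. Passing i.rect.top directly as the PDF y would therefore shift the field. A minimal Java sketch of the conversion (toPdfRect is a hypothetical helper, not an iText API):

```java
public class PdfCoords {
    // Convert a top-left-origin rectangle (e.g. an Android Rect) to PDF
    // user-space coordinates, where y runs upward from the bottom-left.
    static float[] toPdfRect(float left, float top, float width, float height,
                             float pageHeight) {
        // PDF lower-left y = pageHeight - top-based y - rectangle height
        float lly = pageHeight - top - height;
        return new float[]{left, lly, width, height};
    }

    public static void main(String[] args) {
        // A 200x200 box whose top edge sits 209pt below the top of an
        // A4-height page (842pt) has its PDF lower-left corner at y = 433.
        float[] r = toPdfRect(383f, 209f, 200f, 200f, 842f);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]);
    }
}
```

The page height can be read from document.getPage(n).getPageSize().getHeight() before building the field rectangle.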


Shrink method in array stack (Scala)

I am trying to implement a resize function that shrinks and grows an array stack depending on its number of elements and the array's size, in Scala 3. Here is my code:
package adt

import scala.reflect.ClassTag

class ArrayStack[A: ClassTag]:
  private var dataArray: Array[A] = Array.fill(10)(null.asInstanceOf[A])
  private var top: Int = 0
  private var sz: Int = 10

  def push(elem: A): Unit =
    if top == sz then resize(grow = true)
    dataArray(top) = elem
    top += 1

  def pop(): A =
    if top == (sz / 2) - 1 then resize(grow = false)
    top -= 1
    dataArray(top)

  def peek(): A =
    dataArray(top - 1)

  def isEmpty(): Boolean =
    top == 0

  def resize(grow: Boolean): Unit =
    val newSize = if grow then sz * 2 else sz / 2
    val newArray: Array[A] = Array.fill(newSize)(null.asInstanceOf[A])
    for (a <- 0 until dataArray.length) do newArray(a) = dataArray(a)
    dataArray = newArray
    sz = newSize
However, in my JUnit tests, I get an array index out of bounds exception.
@Test def pushMultiple(): Unit =
  val stack = new ArrayStack[Int]
  val pushArr = Array.tabulate(100)(i => Math.round(i * 100))
  pushArr.foreach(stack.push(_))
  for (n <- pushArr.reverse) do assertEquals(n, stack.pop())

@Test def popMultiple(): Unit =
  val stack = new ArrayStack[Int]
  val pushArr = Array.tabulate(100)(i => Math.round(i * 100))
  pushArr.foreach(stack.push(_))
  for (i <- 1 to 1000) do stack.pop()
  assertTrue(stack.isEmpty())
Error message:
Test arraystack_test.pushMultiple failed: java.lang.ArrayIndexOutOfBoundsException: Index 80 out of bounds for length 80, took 0.032 sec
    at scala.runtime.ScalaRunTime$.array_update(ScalaRunTime.scala:75)
    at adt.ArrayStack.resize$$anonfun$1(ArrayStack.scala:30)
    at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.scala:18)
    at scala.collection.immutable.Range.foreach(Range.scala:190)
    at adt.ArrayStack.resize(ArrayStack.scala:30)
    at adt.ArrayStack.pop(ArrayStack.scala:17)
    at arraystack_test.pushMultiple$$anonfun$2(arraystack_test.scala:26)
    at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.scala:18)
    at scala.collection.ArrayOps$.foreach$extension(ArrayOps.scala:1324)
    at arraystack_test.pushMultiple(arraystack_test.scala:26)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:566)
    ...
Test arraystack_test.popMultiple failed: java.lang.ArrayIndexOutOfBoundsException: Index 80 out of bounds for length 80, took 0.0 sec
    (same stack trace as above, reaching resize via adt.ArrayStack.pop at ArrayStack.scala:17 from arraystack_test.popMultiple at arraystack_test.scala:32)
Can anyone please help me figure out why my resize shrink function does not work?
Nevermind, I figured it out.
In the resize method, this line causes the index out of bounds exception:
for (a <- 0 until dataArray.length) do newArray(a) = dataArray(a)
When the method is called to grow, it works fine. But when it is called to shrink, newArray is only half the length of dataArray, so copying every index of dataArray runs past the end of newArray.
To fix this:
def resize(grow: Boolean): Unit =
  val newSize = if grow then sz * 2 else sz / 2
  val endRange: Int = if grow then dataArray.length else newSize
  val newArray: Array[A] = Array.fill(newSize)(null.asInstanceOf[A])
  for (a <- 0 until endRange) do newArray(a) = dataArray(a)
  dataArray = newArray
  sz = newSize
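The same fix can be stated more directly as "copy min(old length, new length) elements", which covers both the grow and shrink cases with a single bound. A minimal Java sketch of that idea (resizeCopy is a hypothetical helper, not part of the original code):

```java
import java.util.Arrays;

public class ResizeDemo {
    // Copy into a resized array, bounded by the smaller of the two lengths,
    // so shrinking never reads past the end of the destination array.
    static int[] resizeCopy(int[] data, int newSize) {
        int[] out = new int[newSize];
        int limit = Math.min(data.length, newSize);
        for (int i = 0; i < limit; i++) {
            out[i] = data[i];
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(resizeCopy(new int[]{1, 2, 3, 4}, 2))); // shrink
        System.out.println(Arrays.toString(resizeCopy(new int[]{1, 2}, 4)));       // grow
    }
}
```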

Writing files takes a lot of time

I am writing three Lists of TripleInts with approximately 277,270 rows each.
My class TripleInts is the following:
class tripleInt (var sub: Int, var pre: Int, var obj: Int)
Additionally, I create each list with Apache Jena components from an RDF file: I transform the RDF elements to ids and store these ids in the different lists. Once I have the lists, I write the files with the following code:
class Indexes(val listSPO: List[tripleInt], val listPSO: List[tripleInt], val listOSP: List[tripleInt]) {
  val sl = listSPO.sortBy(l => (l.sub, l.pre))
  val pl = listPSO.sortBy(l => (l.sub, l.pre))
  //val ol = listOSP.sortBy(l => (l.sub, l.pre))
  var y1: Int = 0
  var y2: Int = 0
  var y3: Int = 0
  val fstream: FileWriter = new FileWriter("patSPO.dat")
  var out: BufferedWriter = new BufferedWriter(fstream)
  //val fstream: FileOutputStream = new FileOutputStream("patSPO.dat")
  //var out: ObjectOutputStream = new ObjectOutputStream(fstream)
  //out.writeObject(listSPO)
  val fstream2: FileWriter = new FileWriter("patPSO.dat")
  var out2: BufferedWriter = new BufferedWriter(fstream2)
  /*val fstream3: FileOutputStream = new FileOutputStream("patOSP.dat")
  var out3: BufferedOutputStream = new BufferedOutputStream(fstream3)*/
  for (a <- 0 to sl.size - 1) {
    y1 = sl(a).sub
    y2 = sl(a).pre
    y3 = sl(a).obj
    out.write(y1.toString + "," + y2.toString + "," + y3.toString + "\n")
  }
  for (a <- 0 to pl.size - 1) {
    y1 = pl(a).sub
    y2 = pl(a).pre
    y3 = pl(a).obj
    out2.write(y1.toString + "," + y2.toString + "," + y3.toString + "\n")
  }
  out.close()
  out2.close()
}
This process takes about 30 minutes. My PC has 16 GB of RAM and a Core i7, so I don't understand why it takes so long. Is there a way to optimize this?
Thank you
Yes, you need to choose your data structures wisely. List is built for sequential access (Seq), not random access (IndexedSeq). What you are doing is O(n^2) because of the repeated indexing into large Lists. The following should be much faster (O(n)), and hopefully easier to read too:
class Indexes(val listSPO: List[tripleInt], val listPSO: List[tripleInt], val listOSP: List[tripleInt]) {
  val sl = listSPO.sortBy(l => (l.sub, l.pre))
  val pl = listPSO.sortBy(l => (l.sub, l.pre))
  var y1: Int = 0
  var y2: Int = 0
  var y3: Int = 0
  val fstream: FileWriter = new FileWriter("patSPO.dat")
  val out: BufferedWriter = new BufferedWriter(fstream)
  for (s <- sl) {
    y1 = s.sub
    y2 = s.pre
    y3 = s.obj
    out.write(s"$y1,$y2,$y3\n")
  }
  // TODO close in finally
  out.close()
  val fstream2: FileWriter = new FileWriter("patPSO.dat")
  val out2: BufferedWriter = new BufferedWriter(fstream2)
  for (p <- pl) {
    y1 = p.sub
    y2 = p.pre
    y3 = p.obj
    out2.write(s"$y1,$y2,$y3\n")
  }
  // TODO close in finally
  out2.close()
}
(It would not hurt using IndexedSeq/Vector as inputs, but there might be constraints why List is preferred in your case.)
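The linear-vs-quadratic point can be illustrated with plain Java collections: building the output by iterating the list once is what keeps the write loop O(n), whereas calling get(i) on a linked structure costs O(n) per lookup and O(n^2) overall. (An illustrative sketch; render is a hypothetical helper, not from the question's code.)

```java
import java.util.LinkedList;
import java.util.List;

public class RenderDemo {
    // Build the same "sub,pre,obj" lines as the answer's loop, but by
    // iterating the list once instead of indexing it repeatedly.
    static String render(List<int[]> triples) {
        StringBuilder sb = new StringBuilder();
        for (int[] t : triples) {
            sb.append(t[0]).append(',').append(t[1]).append(',').append(t[2]).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // LinkedList stands in for Scala's List: both are sequential-access
        // structures where indexed lookup walks the nodes from the head.
        List<int[]> rows = new LinkedList<>();
        rows.add(new int[]{1, 2, 3});
        rows.add(new int[]{4, 5, 6});
        System.out.print(render(rows));
    }
}
```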

Scala: Save result in a toDF temp table

I'm trying to save some analysis results in a toDF temp table, but I receive the following error: ":215: error: value toDF is not a member of Double".
I'm reading data from a Cassandra table and doing some calculations, and I want to save the results in a temp table.
I'm new to Scala; can somebody help me, please?
My code:
case class Consumo(consumo: Double, consumo_mensal: Double, mes: org.joda.time.DateTime, ano: org.joda.time.DateTime, soma_pf: Double, empo_gasto: Double)

object Analysegridata {
  val conf = new SparkConf(true)
    .set("spark.cassandra.connection.host", "127.0.0.1").setAppName("LiniarRegression")
    .set("spark.cassandra.connection.port", "9042")
    .set("spark.driver.allowMultipleContexts", "true")
    .set("spark.streaming.receiver.writeAheadLog.enable", "true")
  val sc = new SparkContext(conf)
  val ssc = new StreamingContext(sc, Seconds(1))
  val sqlContext = new org.apache.spark.sql.SQLContext(sc)
  val checkpointDirectory = "/var/lib/cassandra/data"
  ssc.checkpoint(checkpointDirectory) // set checkpoint directory
  // val context = StreamingContext.getOrCreate(checkpointDirectory)
  import sqlContext.implicits._
  JavaSparkContext.fromSparkContext(sc)

  def rddconsumo(rddData: Double): Double = {
    val rddData: Double = {
      implicit val data = conf
      val grid = sc.cassandraTable("smartgrids", "analyzer").as((r: Double) => (r)).collect
      def goto(cs: Array[Double]): Double = {
        var consumo = 0.0
        var totaldias = 0
        var soma_pf = 0.0
        var somamc = 0.0
        var tempo_gasto = 0.0
        var consumo_mensal = 0.0
        var i = 0
        for (i <- 0 until grid.length) {
          val minutos = sc.cassandraTable("smartgrids", "analyzer_temp").select("timecol", "MINUTE")
          val horas = sc.cassandraTable("smartgrids", "analyzer_temp").select("timecol", "HOUR_OF_DAY")
          val dia = sc.cassandraTable("smartgrids", "analyzer_temp").select("timecol", "DAY_OF_MONTH")
          val ano = sc.cassandraTable("smartgrids", "analyzer_temp").select("timecol", "YEAR")
          val mes = sc.cassandraTable("smartgrids", "analyzer_temp").select("timecol", "MONTH")
          val potencia = sc.cassandraTable("smartgrids", "analyzer_temp").select("n_pf1", "n_pf2", "n_pf3")
          def convert_minutos(minuto: Int): Double = {
            minuto / 60
          }
          dia.foreach(i => {
            def adSum(potencia: Array[Double]) = {
              var i = 0
              while (i < potencia.length) {
                soma_pf += potencia(i)
                i += 1
                soma_pf
                println("Potemcia =" + soma_pf)
              }
            }
            def tempo(minutos: Array[Int]) = {
              var i = 0
              while (i < minutos.length) {
                somamc += convert_minutos(minutos(i))
                i += 1
                somamc
              }
            }
            def tempogasto(horas: Array[Int]) = {
              var i = 0
              while (i < horas.length) {
                tempo_gasto = horas(i) + somamc
                i += 1
                tempo_gasto
                println("Temo que o aparelho esteve ligado =" + tempo_gasto)
              }
            }
            def consumof(dia: Array[Int]) = {
              var i = 0
              while (i < dia.length) {
                consumo = soma_pf * tempo_gasto
                i += 1
                consumo
                println("Consumo diario =" + consumo)
              }
            }
          })
          mes.foreach(i => {
            def totaltempo(dia: Array[Int]) = {
              var i = 0
              while (i < dia.length) {
                totaldias += dia(i)
                i += 1
                totaldias
                println("Numero total de dias =" + totaldias)
              }
            }
            def consumomensal(mes: Array[Int]) = {
              var i = 0
              while (i < mes.length) {
                consumo_mensal = consumo * totaldias
                i += 1
                consumo_mensal
                println("Consumo Mensal =" + consumo_mensal)
              }
            }
          })
        }
        consumo
        totaldias
        consumo_mensal
        soma_pf
        tempo_gasto
        somamc
      }
      rddData
    }
    rddData.toDF().registerTempTable("rddData")
  }
  ssc.start()
  ssc.awaitTermination()
}
error: value toDF is not a member of Double
It's rather unclear what you're trying to do exactly (there is a lot of code here; try providing a minimal example), but there are a few apparent issues:
rddData has type Double: it seems like it should be RDD[Double], which is a distributed collection of Double values. Trying to save a single Double value as a table doesn't make much sense, and indeed doesn't work: toDF can be called on an RDD, not on any type, and specifically not on Double, as the compiler warns.
You collect() the data: if you want to load an RDD, transform it using some manipulation, and then save it as a table, collect() should probably not be called on that RDD. collect() sends all the data (distributed across the cluster) to the single "driver" machine (the one running this code), after which you're no longer taking advantage of the cluster, and you no longer have an RDD, so you can't convert the data into a DataFrame using toDF.
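The type distinction can be mimicked with plain collections (an illustrative Java sketch, not the Spark API): a transformation yields another collection, which could still be turned into a table, while a reduction collapses everything to a single Double, leaving nothing table-like to convert.

```java
import java.util.List;
import java.util.stream.Collectors;

public class RddAnalogy {
    public static void main(String[] args) {
        // A map over the whole dataset yields another collection, the
        // analogue of RDD[Double], which toDF() can turn into a DataFrame.
        List<Double> transformed = List.of(1.0, 2.0, 3.0).stream()
                .map(x -> x * 2)
                .collect(Collectors.toList());
        System.out.println(transformed);

        // A reduction collapses to one Double, the analogue of the
        // question's rddData, which has no toDF-like view at all.
        double reduced = transformed.stream().mapToDouble(Double::doubleValue).sum();
        System.out.println(reduced);
    }
}
```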

jFreeChart contour plot rendering incorrectly

Code:
package vu.co.kaiyin.sfreechart.plots

import java.awt.{Shape, Stroke, RenderingHints}
import javax.swing.JFrame
import org.jfree.chart.plot.{PlotOrientation, XYPlot}
import org.jfree.chart.{ChartFactory => cf}
import org.jfree.chart.renderer.GrayPaintScale
import org.jfree.chart.renderer.xy.XYBlockRenderer
import org.jfree.chart.title.PaintScaleLegend
import org.jfree.chart._
import org.jfree.chart.axis.{AxisLocation, NumberAxis}
import org.jfree.data.Range
import org.jfree.data.general.DatasetUtilities
import org.jfree.data.statistics.HistogramDataset
import org.jfree.data.xy.{IntervalXYDataset, XYZDataset}
import org.jfree.ui.{RectangleEdge, RectangleInsets}
import vu.co.kaiyin.sfreechart.{ColorPaintScale, ExtendedFastScatterPlot}
import vu.co.kaiyin.sfreechart.implicits._

import scala.util.Random.nextGaussian

/**
 * Created by kaiyin on 2/10/16.
 */
object Plots {

  def histogram(
    dataset: IntervalXYDataset,
    title: String = "Histogram",
    xAxisLabel: String = "Intervals",
    yAxisLabel: String = "Count",
    orientation: PlotOrientation = PlotOrientation.VERTICAL,
    legend: Boolean = true,
    tooltips: Boolean = true,
    urls: Boolean = true,
    alpha: Float = 0.5F,
    pannable: Boolean = false
  ): JFreeChart = {
    val hist = cf.createHistogram(
      title, xAxisLabel, yAxisLabel, dataset, orientation, legend, tooltips, urls
    )
    val xyPlot = hist.getPlot.asInstanceOf[XYPlot]
    if (pannable) {
      xyPlot.setDomainPannable(true)
      xyPlot.setRangePannable(true)
    }
    xyPlot.setForegroundAlpha(alpha)
    hist
  }

  def controuPlot(dataset: XYZDataset, title: String = "Contour plot", scaleTitle: String = "Scale"): JFreeChart = {
    val xAxis = new NumberAxis("x")
    val yAxis = new NumberAxis("y")
    val blockRenderer = new XYBlockRenderer
    val zBounds: Range = DatasetUtilities.findZBounds(dataset)
    println(zBounds.getLowerBound, zBounds.getUpperBound)
    val paintScale = new ColorPaintScale(zBounds.getLowerBound, zBounds.getUpperBound)
    blockRenderer.setPaintScale(paintScale)
    val xyPlot = new XYPlot(dataset, xAxis, yAxis, blockRenderer)
    xyPlot.setAxisOffset(new RectangleInsets(1D, 1D, 1D, 1D))
    xyPlot.setDomainPannable(true)
    xyPlot.setRangePannable(true)
    val chart = new JFreeChart(title, xyPlot)
    chart.removeLegend()
    val scaleAxis = new NumberAxis(scaleTitle)
    val paintScaleLegend = new PaintScaleLegend(paintScale, scaleAxis)
    paintScaleLegend.setAxisLocation(AxisLocation.BOTTOM_OR_LEFT)
    paintScaleLegend.setPosition(RectangleEdge.BOTTOM)
    chart.addSubtitle(paintScaleLegend)
    chart
  }

  def fastScatter(data: Array[Array[Float]], title: String = "Scatter plot", pointSize: Int = 5, pointAlpha: Float = 0.3F): JFreeChart = {
    val xAxis = new NumberAxis("x")
    val yAxis = new NumberAxis("y")
    xAxis.setAutoRangeIncludesZero(false)
    yAxis.setAutoRangeIncludesZero(false)
    val fsPlot = new ExtendedFastScatterPlot(data, xAxis, yAxis, pointSize, pointAlpha)
    fsPlot.setDomainPannable(true)
    fsPlot.setRangePannable(true)
    val chart = new JFreeChart(title, fsPlot)
    chart.getRenderingHints.put(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON)
    chart
  }

  def main(args: Array[String]) {
    // fastScatter(Array(Array(1.0F, 2.0F, 3.0F), Array(1.0F, 2.0F, 3.0F))).vis()
    val x = (1 to 10000).map(_.toFloat).toArray
    val y = x.map(i => i * nextGaussian().toFloat * 3F).toArray
    fastScatter(Array(x, y)).vis()
    val x1 = (-13.0 to 13.0 by 0.2).toArray
    val y1 = (-13.0 to 13.0 by 0.2).toArray
    val xyzData = (for {
      i <- x1
      j <- y1
      if i > j
    } yield Array(i, j, math.sin(i) + math.cos(j))).transpose
    controuPlot(xyzData.toXYZDataset()).vis()
    histogram((1 to 10000).map(_ => nextGaussian()).toArray.toHistogramDataset()).vis()
  }
}
Full project can be found here: https://github.com/kindlychung/sfreechart
Running the above code will give you this (screenshot omitted):
If you look carefully, you will find a narrow band of pixels along the diagonal edge that doesn't quite fit (this is a contour plot of sin(x) + cos(y)), as if there were a tear and shift. But if I comment out the `if i > j` line, then the plot looks normal (screenshot omitted):
What went wrong, and how can this be solved?
Update
Actually, if you look carefully at the right edge of the second figure above, there is also a strip that shouldn't be there.
I managed to fake a contour plot with a scatter plot:
val x = (-12.0 to 12.0 by 0.1).toArray
val y = (-12.0 to 12.0 by 0.1).toArray
val xyzData = (for {
  i <- x
  j <- y
} yield {
  val s = math.sin(i)
  val c = math.cos(j)
  Array(i, j, s + c)
}).transpose
fastScatter(xyzData.toFloats, grid = (false, false), pointSize = 4, pointAlpha = 1F).vis()
Implementation of fastScatter can be found here: https://github.com/kindlychung/sfreechart (disclosure: I am the author.)

Which Java API is best for converting PDF to image?

I tried three Java APIs for PDF, but none of them worked properly:
1. PDFFile
2. PDDocument
3. PDFDocumentReader
My PDF has two layers, and the upper one is slightly transparent. When the three APIs above convert it to an image, only the upper layer appears, with no transparency; both layers should appear.
Can you suggest another API that fulfills this requirement?
Code for PDFFile:
val raf = new RandomAccessFile(file, "r")
val channel = raf.getChannel()
val buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
raf.close()
val pdffile = new PDFFile(buf)
val numPgs = pdffile.getNumPages() + 1
for (i <- 1 until numPgs) {
  val page = pdffile.getPage(i)
  val pwdt = page.getBBox().getWidth().toDouble
  val phgt = page.getBBox().getHeight().toDouble
  val rect = new Rectangle(0, 0, pwdt.toInt, phgt.toInt)
  val rsiz = resize(method, size, pwdt, phgt)
  val img = page.getImage(rsiz("width"), rsiz("height"), rect, null, true, true)
  result ::= buffer(img)
}
Code for PDDocument:
val doc = PDDocument.load(new FileInputStream(file))
val pages = doc.getDocumentCatalog().getAllPages()
for (i <- 0 until pages.size()) {
  val page = pages.get(i)
  val before = page.asInstanceOf[PDPage].convertToImage()
}
Code for PDFDocumentReader:
val inputStream = new FileInputStream(file)
val document = new PDFDocumentReader(inputStream)
val numPgs = document.getNumberOfPages
for (i <- 0 until numPgs) {
  val pageDetail = new PageDetail("", "", i, "")
  val resourceDetails = document.getPageAsImage(pageDetail)
  val image = ImageIO.read(new ByteArrayInputStream(resourceDetails.getBytes()))
  result ::= image
}