I am creating a simple C# Windows application and I want to access my RichTextBox value in another form. I am able to access the RichTextBox, but when I try to access its value it gives me null.
Any suggestion is helpful.
If you use a WinForms RichTextBox, then you can simply do the following:
richtextbox2.Text = richtextbox1.Text;
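If the null comes from reaching into another form's control directly (for example before that form has been created), a common WinForms pattern is to expose the value through a public property on the form that owns the control. A minimal sketch with assumed names (Form2, richTextBox1):
public partial class Form2 : Form
{
    public Form2()
    {
        InitializeComponent();
    }

    // Other forms read or set the text through this property
    // instead of touching richTextBox1 directly.
    public string EditorText
    {
        get { return richTextBox1.Text; }
        set { richTextBox1.Text = value; }
    }
}

// Usage from another form, e.g. in a button click handler:
// var form2 = new Form2();
// form2.EditorText = richTextBox1.Text;  // push this form's value across
// form2.Show();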
If you use a WPF RichTextBox, look at the MSDN examples for loading and saving TextRanges from/to streams:
save: http://msdn.microsoft.com/en-us/library/ms598701.aspx
load: http://msdn.microsoft.com/en-us/library/system.windows.documents.textrange.load.aspx
Here is the code example from MSDN:
// This method accepts an input stream and a corresponding data format. The method
// will attempt to load the input stream into a TextRange selection, apply Bold formatting
// to the selection, save the reformatted selection to an alternate stream, and return
// the reformatted stream.
Stream BoldFormatStream(Stream inputStream, string dataFormat)
{
    // A text container to read the stream into.
    FlowDocument workDoc = new FlowDocument();
    TextRange selection = new TextRange(workDoc.ContentStart, workDoc.ContentEnd);
    Stream outputStream = new MemoryStream();

    try
    {
        // Check for a valid data format, and then attempt to load the input stream
        // into the current selection. Note that CanLoad ONLY checks whether dataFormat
        // is a currently supported data format for loading a TextRange. It does not
        // verify that the stream actually contains the specified format. An exception
        // may be raised when there is a mismatch between the specified data format and
        // the data in the stream.
        if (selection.CanLoad(dataFormat))
            selection.Load(inputStream, dataFormat);
    }
    catch (Exception) { return outputStream; /* Load failure; return the empty stream. */ }

    // Apply Bold formatting to the selection, if it is not empty.
    if (!selection.IsEmpty)
        selection.ApplyPropertyValue(TextElement.FontWeightProperty, FontWeights.Bold);

    // Save the formatted selection to a stream, and return the stream.
    if (selection.CanSave(dataFormat))
        selection.Save(outputStream, dataFormat);

    return outputStream;
}
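If you are using WPF and only need the plain-text value of the RichTextBox (it has no Text property), a TextRange over its document is enough. A short sketch, assuming the control is named richTextBox1:
// Sketch: read the plain text of a WPF RichTextBox via a TextRange.
string GetPlainText(System.Windows.Controls.RichTextBox richTextBox1)
{
    var range = new TextRange(richTextBox1.Document.ContentStart,
                              richTextBox1.Document.ContentEnd);
    return range.Text;
}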
I am new to Vert.x and I want to read a PDF using the "GET" method. I know that a Buffer will be used, but there are no resources on the internet on how to do that.
Omitting the details of how you would get the file from your data store (Couchbase), it is fair to assume the data is read correctly into a byte[].
Once the data is read, you can feed it to an io.vertx.core.buffer.Buffer, which can be used to shuffle the data to the HttpServerResponse as follows:
public void sendPDFFile(byte[] fileBytes, HttpServerResponse response) {
    Buffer buffer = Buffer.buffer(fileBytes);
    response.putHeader("Content-Type", "application/pdf")
            .putHeader("Content-Length", String.valueOf(buffer.length()))
            .setStatusCode(200)
            .end(buffer);
}
I need to create a large tab-separated file as a response to an HTTP GET request.
In my route I create some Scala objects and then I want to write a custom representation of those objects to the output stream.
It is not just serialization to tab-separated instead of JSON, because I also need to create a header with the column names, so IMHO this can't be solved with custom marshalling.
So how can I get a writer or an OutputStream from the HttpRequest?
Something like:
~ path("export") {
  get {
    val sampleExonRPKMs = exonRPKMService.getRPKMs(samples)
    val writer = HttpResponse().getWriter // this does not exist
    writeHeader(writer)
    ... // write objects tab separated
  }
}
You can complete an Akka HTTP route with a marshallable source. If you don't want to use custom marshallers, you can always complete with a Source[ByteString, _]. See the docs for more info.
Your route will look something like
get {
  val sampleExonRPKMs = exonRPKMService.getRPKMs(samples)
  val headers: String = ???
  val source = Source.single(headers)
    .concat(Source(sampleExonRPKMs).map(_.toTSVLine))
    .intersperse("\n")
    .map(ByteString.apply)
  complete(HttpEntity(ContentTypes.`text/plain(UTF-8)`, source))
}
Note, as a separate issue: if you are dealing with large amounts of data, the getRPKMs call will result in loading all of it into memory.
I have a BizTalk custom pipeline component where I'm parsing a PDF attachment with iTextSharp into a custom model. The pipeline is bound to a POP3 receive port.
If, in the newly created message, I return the attachment stream (outputMessage.GetPart("Body").Data = ms), everything looks good in the BizTalk administration console. I was able to save the message from there manually, and it was parsed correctly using the same parsing method as in the pipeline.
When parsing the PDF directly in the pipeline, I get the following error: Rebuild failed: trailer not found.; Original message: xref subsection not found at file pointer 1620729
If I remove the default XML disassembler component from the pipeline, the parsing error disappears, but in the console the message Body is empty, even though AttachmentSizeInBytes=1788.
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    return ExtractMessagePartToMessage(pContext, pInMsg);
}

private IBaseMessage ExtractMessagePartToMessage(IPipelineContext pContext, IBaseMessage pInMsg)
{
    if (pInMsg.PartCount <= 1)
    {
        throw new InvalidOperationException("The email had no attachment, apparently.");
    }

    string partName;
    IBaseMessagePart attachmentPart = pInMsg.GetPartByIndex(1, out partName);
    Stream attachmentPartStream = attachmentPart.GetOriginalDataStream();

    IBaseMessage outputMessage;
    outputMessage = pContext.GetMessageFactory().CreateMessage();
    outputMessage.AddPart("Body", pContext.GetMessageFactory().CreateMessagePart(), true);
    outputMessage.Context = pInMsg.Context;

    var ms = new MemoryStream();
    attachmentPartStream.CopyTo(ms);
    ms.Seek(0L, SeekOrigin.Begin);

    Stream orderStream = PdfFormParser.Parse(ms);
    outputMessage.GetPart("Body").Data = orderStream;

    outputMessage.Context.Write("AttachmentName", "http://schemas.microsoft.com/BizTalk/2003/file-properties", partName);
    outputMessage.Context.Write("AttachmentSizeInBytes", "http://schemas.microsoft.com/BizTalk/2003/file-properties", orderStream.Length.ToString());

    pContext.ResourceTracker.AddResource(ms);
    pContext.ResourceTracker.AddResource(orderStream);

    return outputMessage;
}
public static Stream Parse(Stream pdfDocument)
{
    using (var reader = new PdfReader(pdfDocument))
    {
        var outputStream = new MemoryStream();
        var pdfForm = ParseInternal(reader);
        var xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(pdfForm.Serialize());
        xmlDocument.Save(outputStream);
        return outputStream;
    }
}
In pipelines, when you read or write a Stream, you have to rewind it back to the beginning if something else is going to use it (especially the final message that you expect BizTalk to process).
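For example, the MemoryStream returned by PdfFormParser.Parse above is left positioned at the end of the data after xmlDocument.Save, so whatever reads the Body part next starts at the end and sees nothing. A minimal sketch of the fix, reusing the question's own method and just rewinding before returning:
public static Stream Parse(Stream pdfDocument)
{
    using (var reader = new PdfReader(pdfDocument))
    {
        var outputStream = new MemoryStream();
        var pdfForm = ParseInternal(reader);
        var xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(pdfForm.Serialize());
        xmlDocument.Save(outputStream);

        // Rewind so the next consumer starts reading at position 0
        // instead of at the end of the freshly written data.
        outputStream.Seek(0, SeekOrigin.Begin);
        return outputStream;
    }
}
The same applies to any stream you consume from the incoming message: if something downstream still needs it, seek it back to the beginning after you have read it.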
I have a Java agent that loops through a view and gets the attachment from each document. The attachment is a .dxl file containing the document's XML data. I extract the file to a temp directory and try to import the extracted .dxl as soon as it is extracted.
The problem is that it only imports the first document's attachment in the loop and then throws this error in the Java debug console:
NotesException: DXL importer operation failed
at lotus.domino.local.DxlImporter.importDxl(Unknown Source)
at JavaAgent.NotesMain(Unknown Source)
at lotus.domino.AgentBase.runNotes(Unknown Source)
at lotus.domino.NotesThread.run(Unknown Source)
My Java agent code is:
public class JavaAgent extends AgentBase {

    static DxlImporter importer = null;

    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();

            // Get the current database
            Database db = agentContext.getCurrentDatabase();
            View v = db.getView("DXLProcessing_mails");
            DocumentCollection dxl_tranfered_mail = v.getAllDocumentsByKey("dxl_tranfered_mail");
            Document dxlDoc = dxl_tranfered_mail.getFirstDocument();

            while (dxlDoc != null) {
                RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
                Vector allObjects = rt.getEmbeddedObjects();
                System.out.println("File name is " + allObjects.get(0));
                EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());

                if (eo.getFileSize() > 0) {
                    eo.extractFile(System.getProperty("java.io.tmpdir") + eo.getName());
                    System.out.println("Extracted File to " + System.getProperty("java.io.tmpdir") + eo.getName());
                    String filePath = System.getProperty("java.io.tmpdir") + eo.getName();

                    Stream stream = session.createStream();
                    if (stream.open(filePath) && (stream.getBytes() > 0)) {
                        System.out.println("In If" + System.getProperty("java.io.tmpdir"));
                        importer = session.createDxlImporter();
                        importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
                        System.out.println("Break Point");
                        importer.importDxl(stream, db);
                        System.out.println("Imported Successfully");
                    } else {
                        System.out.println("In else" + stream.getBytes());
                    }
                }
                dxlDoc = dxl_tranfered_mail.getNextDocument();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The code executes until it prints "Break Point" and then throws the error, but the attachment does get imported the first time.
On the other hand, if I hard-code the filePath to a specific .dxl file on the file system, it imports the DXL as a document into the database with no errors.
I am wondering whether the issue is that the stream being passed isn't fully processed before the next loop iteration executes.
Any kind of suggestion would be helpful.
I can't see any part where your while loop would move on from the first document.
Usually you would have something like:
Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
dxlDoc.recycle();
dxlDoc = nextDoc;
near the end of the loop, to advance to the next document. As your code currently stands, it looks like it would never advance and would always stay on the first document.
If you do not know about the need to 'recycle' Domino objects, I suggest you search for some blog posts or articles that explain why it is necessary.
It is a little complicated, but basically the Java objects are just a 'wrapper' for the objects in the C API.
Whenever you create a Domino object (such as a Document, View, DocumentCollection, etc.), a memory handle is allocated in the underlying 'C' layer. This needs to be released (or recycled), and that will eventually happen when the session is recycled; however, when you are processing in a loop it is much more important to recycle explicitly, as you can easily exhaust the available memory handles and cause a crash.
Also, it's possible you may need to close (and recycle) each Stream after you have finished importing each file.
Lastly, double-check that the extracted file causing the exception is definitely a valid DXL file; it could simply be that some of the attachments are not valid DXL and will always throw an exception.
You could put a try/catch within the loop to handle that scenario (and report the problem files), which will allow the agent to continue without halting.
I'm trying to use the methods described by kuujinbo here:
PDF Compression with iTextSharp
This is my code, and it results in this error:
"Rebuild failed: trailer not found.; Original message: PDF startxref not found."
PdfReader reader = new PdfReader(output.ToArray());
ReduceResolution(reader, 9);

// Save the altered PDF; then you can pass the byte array to a database, etc.
using (MemoryStream ms = new MemoryStream())
{
    using (PdfStamper stamper = new PdfStamper(reader, ms))
    {
    }

    document.Close();

    Response.ContentType = "application/pdf";
    Response.AddHeader("Content-Disposition", string.Format("attachment;filename=Produktark-{0}.pdf", myItem.Key));
    Response.BinaryWrite(output.ToArray());
}
What might I be missing?
An exception stating Rebuild failed: ...; Original message: ... is thrown by iText only during PdfReader initialization, i.e. in your case in the line
PdfReader reader = new PdfReader(output.ToArray());
and it indicates that the read data, i.e. output.ToArray(), does not constitute a valid PDF. You should write output.ToArray() to some file, too, and inspect it.
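For example (assuming output is the MemoryStream the PDF was supposed to be written into), dumping it to disk makes it easy to open in a viewer or a text editor and check whether it is a PDF at all and ends with a proper startxref/%%EOF:
// Hypothetical debugging snippet; the path is just an example.
System.IO.File.WriteAllBytes(@"C:\temp\debug-output.pdf", output.ToArray());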
If you wonder why the message mentions a failed rebuild: you actually don't get the initial error but a follow-up one. The PDF digesting code has multiple blocks like this:
try {
    read some part of the PDF;
} catch (Exception) {
    try {
        try to repair that part of the PDF and read it;
    } catch (Exception) {
        throw "Rebuild failed: ...; Original message: ...";
    }
}
In your case the part of interest was the cross reference table/stream and the issue was that the PDF startxref (a statement containing the offset of the cross reference start in the document) was not found.
When I receive this error message, it is caused by not closing the PdfStamper that I am using to edit the form fields.
stamper.Close();
You must call this before closing the PDF, or it will throw the specified error.
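In terms of the code in the question, the stamped bytes in ms only form a complete PDF once the stamper has been closed. A minimal sketch with assumed variable names, not the exact code from the question:
// PdfStamper.Close() writes the cross-reference table and trailer,
// so the MemoryStream only contains a complete PDF after that call.
var ms = new MemoryStream();
var stamper = new PdfStamper(reader, ms);

// ... edit form fields / reduce resolution here ...

stamper.Close();   // finishes the PDF in ms
reader.Close();

byte[] stampedPdf = ms.ToArray();
Response.ContentType = "application/pdf";
Response.BinaryWrite(stampedPdf);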