Protractor - Create a txt file as a report with the "Expect..." result

I'm trying to create a report for my scenario. I want to execute some validations, add the results to a string, and then write that string to a TXT file (for each validation I would like to add the result and then execute the next one, until the last item), something like this:
it ("Perform the loop to search for different strings", function()
{
browser.waitForAngularEnabled(false);
browser.get("http://WebSite.es");
//strings[] contains 57 strings inside the json file
for (var i = 0; i == jsonfile.strings.length ; ++i)
{
var valuetoInput = json.Strings[i];
var writeInFile;
browser.wait;
httpGet("http://website.es/search/offers/list/"+valuetoInput+"?page=1&pages=3&limit=20").then(function(result) {
writeInFile = writeInFile + "Validation for String: "+ json.Strings[i] + " Results is: " + expect(result.statusCode).toBe(200) + "\n";
});
if (i == jsonfile.strings.length)
{
console.log("Executions finished");
var fs = require('fs');
var outputFilename = "Output.txt";
fs.writeFile(outputFilename, "Validation of Get requests with each string:\n " + writeInFile, function(err) {
if(err)
{
console.log(err);
}
else {
console.log("File saved to " + outputFilename);
}
});
}
};
});
But when I check my file I only get the first row written the way I want and nothing else. Could you please let me know what I am doing wrong?
*The validation works properly on screen for each of the strings in the file I use as a data source.
**I'm a newbie with Protractor.
Thank you a lot!!

From the writeFile documentation:
Asynchronously writes data to a file, replacing the file if it already exists.
You are overwriting the file every time, which is why it only has 1 line.
The easiest way would probably (in my opinion) be appendFile. It writes to a file without overwriting existing data and will also create the file if it doesn't exist in the first place.
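For example, a minimal sketch of that approach (the file name and message text here are just placeholders):

var fs = require('fs');

var outputFilename = "Output.txt";
var line = "Validation for String: someString Result is: 200\n";

// appendFile adds to the end of the file and creates it if it doesn't exist yet
fs.appendFile(outputFilename, line, function(err) {
    if (err) {
        console.log(err);
    } else {
        console.log("Line appended to " + outputFilename);
    }
});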
You could also re-read that log file, store that data in a variable, and re-write to that file with the old AND new data included in it. You could also create a writeStream etc.
There are quite a few ways to go about it and plenty of other answers
on SO specifically on those functions that can provide more info.
Node.js Write a line into a .txt file
Node.js read and write file lines
As a final note, if you are using Jasmine you can also create a custom Jasmine reporter. Reporters have hooks that receive exactly what you want (status pass/fail, actual vs. expected values, etc.) and are fairly easy to set up with Protractor.
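For example, a rough sketch of a custom reporter registered in the Protractor config (the output file name and the exact report text are assumptions):

// In protractor's conf.js
var fs = require('fs');

exports.config = {
    // ... the rest of your config ...
    onPrepare: function() {
        jasmine.getEnv().addReporter({
            // called once per spec, with its status and failure details
            specDone: function(result) {
                fs.appendFileSync("Output.txt",
                    result.fullName + " : " + result.status + "\n");
            }
        });
    }
};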

Related

How to edit pasted content using the Open XML SDK

I have a custom template in which I'd like to control (as best I can) the types of content that can exist in a document. To that end, I disable controls, and I also intercept pastes to remove some of those content types, e.g. charts. I am aware that this content can also be drag-and-dropped, so I also check for it later, but I'd prefer to stop or warn the user as soon as possible.
I have tried a few strategies:
RTF manipulation
Open XML manipulation
RTF manipulation is so far working fairly well, but I'd really prefer to use Open XML as I expect it to be more useful in the future. I just can't get it working.
Open XML Manipulation
The wonderfully-undocumented (as far as I can tell) "Embed Source" appears to contain a compound document object, which I can use to modify the copied content using the Open XML SDK. But I have been unable to put the modified content back into an object that lets it be pasted correctly.
The modification part seems to work fine. I can see, if I save the modified content to a temporary .docx file, that the changes are being made correctly. It's the return to the clipboard that seems to be giving me trouble.
I have tried assigning just the Embed Source object back to the clipboard (so that the other types such as RTF get wiped out), and in this case nothing at all gets pasted. I've also tried re-assigning the Embed Source object back to the clipboard's data object, so that the remaining data types are still there (but with mismatched content, probably), which results in an empty embedded document getting pasted.
Here's a sample of what I'm doing with Open XML:
using OpenMcdf;
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;
...
Forms.IDataObject dataObj = Forms.Clipboard.GetDataObject();
object embedSrcObj = dataObj.GetData("Embed Source");
if (embedSrcObj is Stream)
{
    // read it with OpenMCDF
    Stream stream = embedSrcObj as Stream;
    CompoundFile cf = new CompoundFile(stream);
    CFStream cfs = cf.RootStorage.GetStream("package");
    byte[] bytes = cfs.GetData();
    string savedDoc = Path.GetTempFileName() + ".docx";
    File.WriteAllBytes(savedDoc, bytes);

    // And then use the OpenXML SDK to read/edit the document:
    using (WordprocessingDocument openDoc = WordprocessingDocument.Open(savedDoc, true))
    {
        OpenXmlElement body = openDoc.MainDocumentPart.RootElement.ChildElements[0];
        foreach (OpenXmlElement ele in body.ChildElements)
        {
            if (ele is Paragraph)
            {
                Paragraph para = (Paragraph)ele;
                if (para.ParagraphProperties != null && para.ParagraphProperties.ParagraphStyleId != null)
                {
                    string styleName = para.ParagraphProperties.ParagraphStyleId.Val;
                    Run run = para.LastChild as Run; // I know I'm assuming things here but it's sufficient for a test case
                    run.RunProperties = new RunProperties();
                    run.RunProperties.AppendChild(new DocumentFormat.OpenXml.Wordprocessing.Text("test"));
                }
            }
            // etc.
        }
        openDoc.MainDocumentPart.Document.Save(); // I think this is redundant in later versions than what I'm using
    }

    // repackage the document
    bytes = File.ReadAllBytes(savedDoc);
    cf.RootStorage.Delete("Package");
    cfs = cf.RootStorage.AddStream("Package");
    cfs.Append(bytes);
    MemoryStream ms = new MemoryStream();
    cf.Save(ms);
    ms.Position = 0;
    dataObj.SetData("Embed Source", ms);
    // or,
    // Clipboard.SetData("Embed Source", ms);
}
Question
What am I doing wrong? Is this just a bad/unworkable approach?

How to print a file with Jscript

Goal
I want to print a file via a PDF printer which isn't the default printer. I was able to temporarily change the default printer to the PDF printer.
Problem
But I don't know how to print a .doc, .txt or .xls file via JScript. Also, I can't find a way to save the default printer name so I can switch back after I've printed the file.
JScript code
var objShell = new ActiveXObject("Shell.Application");
var objFSO = new ActiveXObject("Scripting.FileSystemObject");
var objNet = new ActiveXObject("WScript.Network"); // provides SetDefaultPrinter, used below
try {
    var PDFCreatorQueue = new ActiveXObject("PDFCreatorBeta.JobQueue");
    PDFCreatorQueue.Initialize();
    var sourceFile = WScript.Arguments(0);
    var sourceFolder = objFSO.GetParentFolderName(sourceFile);
    var sourceName = objFSO.GetBaseName(sourceFile);
    var targetFile = sourceFolder + "\\" + sourceName + ".pdf";
    //HERE GOES THE COMMAND TO SAVE THE CURRENT DEFAULT PRINTER NAME TO A TEMP VARIABLE
    objNet.SetDefaultPrinter("PDFCreator");
    //HERE GOES THE PRINT COMMAND WHICH I DON'T KNOW
    //HERE GOES THE COMMAND TO CHANGE BACK TO THE OLD DEFAULT PRINTER
    if (!PDFCreatorQueue.WaitForJob(3)) {
        WScript.Echo("The print job did not reach the queue within " + 3 + " seconds");
    }
    else {
        var job = PDFCreatorQueue.NextJob;
        job.SetProfileByGUID("DefaultGuid");
        job.ConvertTo(targetFile);
        if (!job.IsFinished || !job.IsSuccessful) {
            WScript.Echo("Could not convert the file: " + targetFile);
        }
    }
    PDFCreatorQueue.ReleaseCom();
}
catch(e) {
    WScript.Echo(e.message);
    PDFCreatorQueue.ReleaseCom();
}
Use the ShellFolderItem.InvokeVerbEx() function. The JScript example code in the MSDN article shows how to use it. Make the first argument "print" and the second argument the name of the printer. So you can remove the code that tinkers with the default printer.
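A rough, untested sketch of that approach (the file and printer names mirror the question's code; the verb arguments follow the suggestion above):

var objShell = new ActiveXObject("Shell.Application");
var objFSO = new ActiveXObject("Scripting.FileSystemObject");

var sourceFile = WScript.Arguments(0);
var sourceFolder = objFSO.GetParentFolderName(sourceFile);
var fileName = objFSO.GetFileName(sourceFile);

// Get the file as a shell folder item and invoke the "print" verb on it,
// passing the target printer so the default printer is left untouched
var folderItem = objShell.NameSpace(sourceFolder).ParseName(fileName);
folderItem.InvokeVerbEx("print", "PDFCreator");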
Printing a web page from JS is quite easy; you could use the window.print() method on an iframe (this only works with file formats which can be displayed in a web page, so it doesn't work with the .doc extension).
<iframe id="textfile" src="text.txt"></iframe>
<button onclick="print()">Print</button>
<script type="text/javascript">
function print() {
var iframe = document.getElementById('textfile');
iframe.contentWindow.print();
}
</script>
This will show you a dialog box to select which printer you want to use and so on.
What you are asking for seems to be silent printing, but it isn't standardized across browsers.
P.S. I don't think it's a good idea to use the printer to save this file to PDF; you could look at jsPDF (a JS tool to create PDFs) or consider doing the PDF generation server-side.

Receiving data in node.js

I have a Java program sending data over a specific socket to my node.js application. I want to be able to obtain all of the data, which is information from a SQLite database, and send it off to something else.
I've found that something like the following can work, but it seems unreliable: data is missing and sometimes doesn't even show up.
stream.addListener('data', function(data) {
    buffer.write(data.toString());
});
On a side note, I need the socket to stay open, so I can't rely on the "end" event.
I really don't have any attachment to stream.addListener, so I can use something else if it works how I want. Basically what I'm asking is: what is the most effective way to obtain data from a socket using node.js?
P.S. thank you for your time
The data event is not guaranteed to receive all the data sent to it in one go. You'll need to build up a buffer over multiple events and watch for delimiters of some kind (newlines, null characters, whatever you prefer). Here's an example from a project where I'm parsing data from IRC (converted from CoffeeScript); parseData is the event handler for the data event (e.g. socket.on('data', this.parseData);):
IrcConnection.prototype.parseData = function(data) {
    var line, lines, i;
    // normalize all line endings in this chunk and append it to the running buffer
    data = data.toString().replace(/\r\n/g, "\n");
    this.buffer += data;
    lines = this.buffer.split("\n");
    this.buffer = "";
    /* Put the last line back in the buffer if it was incomplete */
    if (lines[lines.length - 1] !== '') {
        this.buffer = lines[lines.length - 1];
    }
    /* Remove the final \n or incomplete line from the array */
    lines = lines.splice(0, lines.length - 1);
    for (i = 0; i < lines.length; i++) {
        line = lines[i];
        this.emit('raw', line);
    }
};

Protovis - dealing with a text source

Let's say I have a text file with lines such as:
[4/20/11 17:07:12:875 CEST] 00000059 FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I: FFDC Incident emitted on D:/Prgs/testing/WebSphere/AppServer/profiles/ProcCtr01/logs/ffdc/server1_3d203d20_11.04.20_17.07.12.8755227341908890183253.txt com.test.testserver.management.cmdframework.CmdNotificationListener 134
[4/20/11 17:07:27:609 CEST] 0000005d wle E CWLLG2229E: An exception occurred in an EJB call. Error: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
com.lombardisoftware.core.TeamWorksException: Snapshot with ID Snapshot.8fdaaf3f-ce3f-426e-9347-3ac7e8a3863e not found.
at com.lombardisoftware.server.ejb.persistence.CommonDAO.assertNotNull(CommonDAO.java:70)
Is there any way to easily import a data source such as this into Protovis? If not, what would be the easiest way to parse this into a JSON format? For example, the first entry might be parsed like so:
[
    {
        "Date": "4/20/11 17:07:12:875 CEST",
        "Status": "00000059",
        "Msg": "FfdcProvider W com.test.ws.ffdc.impl.FfdcProvider logIncident FFDC1003I"
    }
]
Thanks, David
Protovis itself doesn't offer any utilities for parsing text files, so your options are:
Use Javascript to parse the text into an object, most likely using regex.
Pre-process the text using the text-parsing language or utility of your choice, exporting a JSON file.
Which you choose depends on several factors:
Is the data somewhat static, or are you going to be running this on a new or dynamic file each time you look at it? With static data, it might be easiest to pre-process; with dynamic data, this may add an annoying extra step.
How much data do you have? Parsing a 20K text file in Javascript is totally fine; parsing a 2MB file will be really slow, and will cause the browser to hang while it's working (unless you use Workers).
If there's a lot of processing involved, would you rather put that load on the server (by using a server-side script for pre-processing) or on the client (by doing it in the browser)?
If you wanted to do this in Javascript, based on the sample you provided, you might do something like this:
// Assumes var text = 'your text';
// use the utility of your choice to load your text file into the
// variable (e.g. jQuery.get()), or just paste it in.
var lines = text.split(/[\r\n\f]+/),
// regex to match your log entry beginning
patt = /^\[(\d\d?\/\d\d?\/\d\d? \d\d:\d\d:\d\d:\d{3} [A-Z]+)\] (\d{8})/,
items = [],
currentItem;
// loop through the lines in the file
lines.forEach(function(line) {
// look for the beginning of a log entry
var initialData = line.match(patt);
if (initialData) {
// start a new item, using the captured matches
currentItem = {
Date: initialData[1],
Status: initialData[2],
Msg: line.substr(initialData[0].length + 1)
}
items.push(currentItem);
} else {
// this is a continuation of the last item
currentItem.Msg += "\n" + line;
}
});
// items now contains an array of objects with your data

merge word documents to a single document

I used the code in the link mentioned below to merge word files into a single file
http://devpinoy.org/blogs/keithrull/archive/2007/06/09/updated-how-to-merge-multiple-microsoft-word-documents.aspx
However, looking at the output file I realized that it was unable to copy the header image in the first document. How do we merge documents while preserving format and content?
I would suggest using GroupDocs.Merger Cloud for merging multiple Word documents into a single Word document; it keeps the formatting and contents of the source documents. It is a platform-independent REST API solution that does not depend on any third-party tool or software.
Sample C# code:
var configuration = new GroupDocs.Merger.Cloud.Sdk.Client.Configuration(MyAppSid, MyAppKey);
var apiInstance_Document = new GroupDocs.Merger.Cloud.Sdk.Api.DocumentApi(configuration);
var apiInstance_File = new GroupDocs.Merger.Cloud.Sdk.Api.FileApi(configuration);
var pathToSourceFiles = @"C:/Temp/input/";
var remoteFolder = "Temp/";
var joinItem_list = new List<JoinItem>();
try
{
    DirectoryInfo dir = new DirectoryInfo(pathToSourceFiles);
    System.IO.FileInfo[] files = dir.GetFiles();
    foreach (System.IO.FileInfo file in files)
    {
        // upload each source document to cloud storage, then register it as a join item
        var request_upload = new GroupDocs.Merger.Cloud.Sdk.Model.Requests.UploadFileRequest(remoteFolder + file.Name, File.Open(file.FullName, FileMode.Open));
        var response_upload = apiInstance_File.UploadFile(request_upload);
        var item = new JoinItem
        {
            FileInfo = new GroupDocs.Merger.Cloud.Sdk.Model.FileInfo
            { FilePath = remoteFolder + file.Name }
        };
        joinItem_list.Add(item);
    }
    var options = new JoinOptions
    {
        JoinItems = joinItem_list,
        OutputPath = remoteFolder + "Merged_Document.docx"
    };
    var request = new JoinRequest(options);
    var response = apiInstance_Document.Join(request);
    Console.WriteLine("Output file path: " + response.Path);
}
catch (Exception e)
{
    Console.WriteLine("Exception while Merging Documents: " + e.Message);
}
That code is inserting a page break after each file.
Since sections control headers, if a second or subsequent document has a header, you'll probably want to keep the original section properties and insert those after your first document.
If you look at your original document as a docx, you'll probably see that your section is defined by a document-level section properties element.
The easiest way around your problem may be to create a second section properties element inside the last paragraph (which contains the header information). Then it should just stay there when the documents are merged (i.e. other paragraphs are added after it).
That's the theory. See also http://www.pcreview.co.uk/forums/thread-898133.php
But I haven't tried it; it assumes InsertFile behaves as I expect it should.