JSON file load issue - iPhone

I'm currently stuck on this issue; it would be great if someone could give me some pointers or best practices. Titanium SDK version: 1.6.1, iPhone SDK version: 4.2.
My app fetches a remote JSON file to sync my SQLite database, but it hits this error while parsing the file:
[INFO] Due to memory conditions, 0 of 0 images were unloaded from cache.
Everything works fine with smaller JSON files, but now the file has reached 7 MB and my code quits on me.
Is this due to a Titanium JSON parsing limitation? I cannot bundle the database with the app installation because the content is dynamic, so that solution is already ruled out.
code:
function syncDatabase() {
    if (Titanium.Network.networkType != Titanium.Network.NETWORK_NONE) {
        Ti.API.info("There is a network connection, trying to update database..");
        var conn = Ti.Network.createHTTPClient();
        conn.setTimeout(20000);
        var lastUpdated = Ti.App.Properties.getInt("lastUpdated");
        conn.open('GET', 'http://example.com/get/all/' + lastUpdated);
        var filename = "db"; // was missing "var", leaking into the global scope
        conn.onload = function() {
            try {
                if (conn.status == 200) {
                    var f = Ti.Filesystem.getFile(Titanium.Filesystem.applicationDataDirectory, filename);
                    f.write(this.responseData);
                }
                fillDatabase();
            }
            catch (e) {
                // don't swallow errors silently; log what went wrong
                Ti.API.error("Error handling the response: " + e);
            }
        };
        conn.send();
    }
}
function fillDatabase()
{
    try {
        var file = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory + "/db");
        var json = JSON.parse(file.read().text);
        var db = Titanium.Database.open('db');
        for (var i = 0; i < json.length; i++) {
            Ti.API.info("Found foobar: With id [" + json[i].id + "] [" + json[i].foo + "]");
            var syncid = json[i].id;
            var foo = json[i].foo;
            var bar = json[i].bar;
            db.execute('REPLACE INTO objects (id,foo,bar) VALUES (?,?,?)', syncid, foo, bar);
        }
        Ti.App.Properties.setInt('lastUpdated', Math.floor(new Date().getTime() / 1000));
        db.close();
    }
    catch (e) {
        // log the actual error instead of a bare marker
        Ti.API.error("THERE IS AN ERROR UPDATING THE DATABASE: " + e);
    }
}
Any help is appreciated.

I have no idea, actually, but what do you mean by "dynamic content"? Usually it should work with your database as well.
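If the sheer size of the payload is what kills the parse, one workaround worth trying (a sketch, not from the thread; the page/limit parameters are hypothetical and the server would need to support them) is to sync in smaller batches so that no single JSON.parse call has to handle 7 MB at once:
function syncPage(page) {
    var lastUpdated = Ti.App.Properties.getInt("lastUpdated");
    var conn = Ti.Network.createHTTPClient();
    conn.setTimeout(20000);
    conn.onload = function() {
        var batch = JSON.parse(this.responseText); // each batch stays small
        if (batch.length === 0) {
            // no more pages; record the sync time once everything is in
            Ti.App.Properties.setInt('lastUpdated', Math.floor(new Date().getTime() / 1000));
            return;
        }
        var db = Titanium.Database.open('db');
        for (var i = 0; i < batch.length; i++) {
            db.execute('REPLACE INTO objects (id,foo,bar) VALUES (?,?,?)', batch[i].id, batch[i].foo, batch[i].bar);
        }
        db.close();
        syncPage(page + 1); // fetch the next batch
    };
    // "page" and "limit" are hypothetical query parameters
    conn.open('GET', 'http://example.com/get/all/' + lastUpdated + '?page=' + page + '&limit=500');
    conn.send();
}
syncPage(1);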

Related

Power BI REST API ExportToFileInGroup Not Working

I am able to programmatically log in to the Power BI client, gather my workspaces, and get a specific report from a specific workspace. I need to programmatically render that report to a .pdf or .xlsx file. Allegedly this is possible with the ExportToFileInGroup/ExportToFileInGroupAsync methods. I even created a very simple report without any parameters, and I can embed it using the sample app from here, so at least I know the backend setup is in place. But it fails when I try to run the ExportToFileInGroupAsync method (errors below the code).
My Code is:
var accessToken = await tokenAcquisition.GetAccessTokenForUserAsync(new string[] {
    PowerBiScopes.ReadReport,
    PowerBiScopes.ReadDataset,
});
var userInfo = await graphServiceClient.Me.Request().GetAsync();
var userName = userInfo.Mail;
AuthDetails authDetails = new AuthDetails {
    UserName = userName,
    AccessToken = accessToken,
};
var credentials = new TokenCredentials($"{accessToken}", "Bearer");
PowerBIClient powerBIClient = new PowerBIClient(credentials);
var groups = await powerBIClient.Groups.GetGroupsAsync();
var theGroup = groups.Value
    .Where(x => x.Name == "SWIFT Application Development")
    .FirstOrDefault();
var groupReports = await powerBIClient.Reports.GetReportsAsync(theGroup.Id);
var theReport = groupReports.Value
    .Where(x => x.Name == "No Param Test")
    .FirstOrDefault();
var exportRequest = new ExportReportRequest {
    Format = FileFormat.PDF,
};
string result = "";
try {
    var response = await powerBIClient.Reports.ExportToFileInGroupAsync(theGroup.Id, theReport.Id, exportRequest);
    result = response.ReportId.ToString();
} catch (Exception e) {
    result = e.Message;
}
return result;
It gets to the line in the try block and then throws the following errors:
An error occurred while sending the request.
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host..
UPDATE
In response to @AndreyNikolov's question, here is our Embedded capacity (screenshot omitted).
After this was implemented, there was no change: the exact same error.
It turns out the issue was on our side; more specifically, security/firewall settings. Here is the exact quote from our networking guru:
"After some more investigation we determined that our firewall was causing this issue when it was terminating the SSL connection. We were able to add a bypass for the URL and it is now working as expected."

Mirth Connect - post DB query executes only after checking pre-query send status - Error/sent? Advice required

Basically, I am new to Mirth Connect, so please advise me on this.
When I use something like this in the Run Post-Process script:
try {
    dbConn = DatabaseConnectionFactory.createDatabaseConnection('com.mysql.jdbc.Driver','jdbc:mysql://localhost:3306/mirth','XYZ','XYZ');
    a = $('his_user_id');
    responseStatus = Response.getStatus();
    loger.info(responseStatus);
    if (responseStatus == SENT) {
        var result = dbConn.executeUpdate("UPDATE his_user SET status = 0 WHERE id = " + a);
        return result;
    }
}
finally {
    if (dbConn) {
        dbConn.close();
    }
}
I am getting the below error:
SOURCE CODE:
53: var dbConn;
54:
55: try {
56:     dbConn = DatabaseConnectionFactory.createDatabaseConnection('com.mysql.jdbc.Driver','jdbc:mysql://localhost:3306/mirth','root','root');
57:     a = $('his_user_id');
58:     responseStatus = Response.getStatus();
59:     loger.info(responseStatus);
60:     if (responseStatus == SENT)
61:     {
62: ...
LINE NUMBER: 58
DETAILS: Java class "com.mirth.connect.userutil.Response" has no public instance field or method named "getStatus".
    at 0462ff2d-8942-4898-9afb-802bfe68a63d:58 (doScript)
    at 0462ff2d-8942-4898-9afb-802bfe68a63d:74
This is my pre-process script in the DB Writer:
var dbConn;
try {
    dbConn = DatabaseConnectionFactory.createDatabaseConnection('com.mysql.jdbc.Driver','jdbc:mysql://localhost:3306/mirth','root','root');
    var result = dbConn.executeCachedQuery("SELECT his_user.Id AS his_user_Id, his_user.His_username AS his_user_His_username, his_user.His_useraddress AS his_user_His_useraddress, his_user.status AS his_user_status FROM his_user where his_user.status='1'");
    return result;
}
finally {
    if (dbConn) {
        dbConn.close();
    }
}
Change your dbConn to use this instead and it should work...
importPackage(java.sql);
var dbConn = java.sql.DriverManager.getConnection('jdbc:jtds:sqlserver://localhost:1433/dbname', 'user', 'pass');
If you do not feel like changing your dbConn, you may be able to circumvent that by replacing "dbConn.prepareStatement" with "dbConn.getConnection().prepareStatement". If that doesn't work, you may also need to include "importPackage(java.sql);" at the beginning of your transformer code.
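For reference, here is a minimal sketch of the suggested java.sql connection in use (my own illustration, not from the answer; the table and column names are taken from the question, and a prepared statement replaces the string concatenation):
importPackage(java.sql);
java.lang.Class.forName('com.mysql.jdbc.Driver'); // make sure the driver is registered
var dbConn = java.sql.DriverManager.getConnection('jdbc:mysql://localhost:3306/mirth', 'XYZ', 'XYZ');
try {
    // a prepared statement avoids concatenating values into the SQL string
    var stmt = dbConn.prepareStatement("UPDATE his_user SET status = 0 WHERE id = ?");
    stmt.setString(1, $('his_user_id'));
    var rowsUpdated = stmt.executeUpdate();
    stmt.close();
}
finally {
    if (dbConn) {
        dbConn.close();
    }
}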

Sending a mail with attachment on failed expect

I am trying to attach a .png file to the mail that is sent via nodemailer on an expect/spec failure during a Protractor run.
Worth mentioning is that I am using protractor-jasmine2-screenshot-reporter for screenshot capture.
What I am doing:
browser.driver.wait(function() {
    return helper.checkURLAddress(browser.params.Test.URL.mojOLX); // will return false
}, 2000)
.then(function() {
    // success code
},
// failure code goes below
function() {
    var htmlFilePath = 'D:/Test/target/screenshots/my-report.html';
    var htmlFileContent = String(fs.readFileSync(htmlFilePath));
    var screenshotDirectory = "D:/Test/target/screenshots/chrome";
    helper.sendHTMLMail(htmlFileContent, helper.getMostRecentFileName(screenshotDirectory));
}); // note: the original snippet was missing this closing parenthesis
The function for getting the most recent file:
function getMostRecentFileName(dir) {
    var files = fs.readdirSync(dir);
    // note: this returns only the file name, not the full path
    return _.max(files, function (f) {
        var fullpath = path.join(dir, f);
        return fs.statSync(fullpath).ctime;
    });
}
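A side note (my observation, not from the thread): the iteratee form of _.max only exists in lodash 3.x; in lodash 4.x it was split out into _.maxBy, so on a newer lodash this helper would silently compare the raw file names instead of their timestamps. A 4.x-compatible sketch:
// lodash 4.x equivalent of the helper above
function getMostRecentFileName(dir) {
    var files = fs.readdirSync(dir);
    return _.maxBy(files, function (f) {
        return fs.statSync(path.join(dir, f)).ctime.getTime();
    });
}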
And the mailOptions with attachments:
var mailOptions = {
    // from, to, subject go here
    attachments: [{
        // nodemailer expects an array of attachment objects
        path: htmlFilePath
    }]
};
The error I am getting is:
Error: ENOENT: no such file or directory, open 'D:/Test/Screenshotname.png'.
The file path of the screenshot is actually incorrect: it is missing three directories (target, screenshots, chrome) on the way to the PNG.
I presume that this is because the directories have not been created yet, as mentioned in this thread. But the solution there is to wait for the PDF creation, which is triggered by the user, and that is not the case here.
When exactly is the screenshot saved?
Why does the function not use the file it shows in the error?
EDITED Question:
How can I call sendHTMLMail after the screenshots are created?
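One approach worth sketching (an assumption on my part, not a confirmed answer): protractor-jasmine2-screenshot-reporter flushes its report and screenshots as the run finishes, so sending the mail from Protractor's afterLaunch hook, which runs after all specs and reporters have completed, avoids racing the screenshot writes:
// protractor.conf.js (sketch; the paths and the helper module are the ones from the question)
exports.config = {
    // ... specs, capabilities, reporter setup ...
    afterLaunch: function(exitCode) {
        var fs = require('fs');
        var helper = require('./helper'); // hypothetical module exposing the helpers used above
        var htmlFilePath = 'D:/Test/target/screenshots/my-report.html';
        var htmlFileContent = String(fs.readFileSync(htmlFilePath));
        var screenshotDirectory = 'D:/Test/target/screenshots/chrome';
        // returning a promise makes Protractor wait before shutting down
        return helper.sendHTMLMail(htmlFileContent, helper.getMostRecentFileName(screenshotDirectory));
    }
};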

socket.io for react native (sending query problems)

I'm using this library and I can connect without problems.
Usually when I have worked with sockets, the code I used is:
socket = io.connect(url, { query: 'token=' + token });
and I can see this info by reading socket.request._query.
Using socket.io for React Native, I'm trying to send params:
this.socket = new SocketIO('http://localhost:3000', { query: 'token=' + token });
but in socket.request._query I can only see this log:
{ transport: 'polling', b64: '1' }
The library mentions some options like connectParams, but I don't know how I can see that info.
Related: link
It's not very well documented in the repo, but connectParams is a key/value object, and the values you send in it will be appended to the URL, as shown here:
if connectParams != nil {
    for (key, value) in connectParams! {
        let keyEsc = key.urlEncode()!
        let valueEsc = "\(value)".urlEncode()!
        queryString += "&\(keyEsc)=\(valueEsc)"
    }
}
(Source)
So, you should try using connectParams like this (though I'm not sure how you tried it before):
this.socket = new SocketIO('http://localhost:3000', {
connectParams: {
myAwesomeQueryStringParam: "someRandomValue"
}
});
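Since the values end up in the connection URL's query string, on a standard Node socket.io server you should then be able to read them from the handshake object (a sketch, assuming a plain socket.io server on the other end):
// server side: parameters appended to the connection URL
// surface in the handshake query object
io.on('connection', function(socket) {
    console.log(socket.handshake.query.myAwesomeQueryStringParam); // "someRandomValue"
});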
PS: Forgive me, my English is pretty bad.

Using Sailsjs Skipper file uploading with Flowjs

I'm trying to use skipper and flowjs (with ng-flow) together for big file uploads.
Based on the Node.js sample located in the flowjs repository, I've created my Sails controller and service to handle file uploads. When I upload a small file it works fine, but if I try to upload a bigger file (e.g. a 200 MB video), I receive the errors listed below and the array req.file('file')._files is empty. Interestingly, this happens only a few times per upload. For example, if flowjs cuts the file into 150 chunks, these errors appear in the Sails console only 3-5 times. So almost all chunks are uploaded to the server, but a few are lost, and as a result the file is corrupted.
verbose: Unable to expose body parameter `flowChunkNumber` in streaming upload! Client tried to send a text parameter (flowChunkNumber) after one or more files had already been sent. Make sure you always send text params first, then your files.
These errors appear for all flowjs parameters.
I know that text parameters must be sent before the files for skipper to work correctly, and in the Chrome network console I've verified that flowjs sends the data in the correct order.
Any suggestions?
Controller method
upload: function (req, res) {
    flow.post(req, function (status, filename, original_filename, identifier) {
        sails.log.debug('Flow: POST', status, original_filename, identifier);
        res.status(status).send();
    });
}
Service post method
$.post = function(req, callback) {
    var fields = req.body;
    var file = req.file($.fileParameterName);
    if (!file || !file._files.length) {
        console.log('no file', req);
        file.upload(function() {});
    }
    var stream = file._files[0].stream;
    var chunkNumber = fields.flowChunkNumber;
    var chunkSize = fields.flowChunkSize;
    var totalSize = fields.flowTotalSize;
    var identifier = cleanIdentifier(fields.flowIdentifier);
    var filename = fields.flowFilename;
    if (file._files.length === 0 || !stream.byteCount) {
        callback('invalid_flow_request', null, null, null);
        return;
    }
    var original_filename = stream.filename;
    var validation = validateRequest(chunkNumber, chunkSize, totalSize, identifier, filename, stream.byteCount);
    if (validation == 'valid') {
        var chunkFilename = getChunkFilename(chunkNumber, identifier);
        // Save the chunk via the skipper file upload API
        file.upload({saveAs: chunkFilename}, function(err, uploadedFiles) {
            // Do we have all the chunks?
            var currentTestChunk = 1;
            var numberOfChunks = Math.max(Math.floor(totalSize / (chunkSize * 1.0)), 1);
            var testChunkExists = function() {
                fs.exists(getChunkFilename(currentTestChunk, identifier), function(exists) {
                    if (exists) {
                        currentTestChunk++;
                        if (currentTestChunk > numberOfChunks) {
                            callback('done', filename, original_filename, identifier);
                        } else {
                            // Recursion
                            testChunkExists();
                        }
                    } else {
                        callback('partly_done', filename, original_filename, identifier);
                    }
                });
            };
            testChunkExists();
        });
    } else {
        callback(validation, filename, original_filename, identifier);
    }
};
Edit
I found a solution: set the flowjs property maxChunkRetries: 5, because by default it is 0.
On the server side, if req.file('file')._files is empty, I throw a non-permanent (in flowjs terms) error.
This solves my problem, but the question of why it behaves this way is still open. The sample code for flowjs and Node.js uses connect-multiparty and has no additional error-handling code, so it's most likely a skipper body-parser bug.
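For reference, a minimal sketch of that client-side setting (my illustration; the target URL is hypothetical, while maxChunkRetries and chunkRetryInterval are documented flow.js options):
// flow.js / ng-flow client configuration (sketch)
var flow = new Flow({
    target: '/api/upload',    // hypothetical upload endpoint
    maxChunkRetries: 5,       // default is 0, so a single failed chunk aborts the whole upload
    chunkRetryInterval: 1000  // wait 1 s between retries
});
// Any HTTP status listed in flow.js's permanentErrors option fails the chunk
// immediately; other failures are retried up to maxChunkRetries times, which is
// why the server should answer a transient problem with a non-permanent status.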