I have a large logging and data entry program that receives a DataTable object containing a single DataRow. The DataTable has an arbitrary number of columns, with arbitrary column names and types, so I cannot define a class for each one. Using the DataColumn entries I get each DataType and build a BsonDocument from scratch.
Here's a short example:
public void ParseData(DataTable table)
{
    // create the document
    var document = new BsonDocument();
    // get the only row in the table
    var row = table.Rows[0];
    // for each column we add the property
    foreach (DataColumn column in table.Columns)
    {
        // create an empty value
        BsonValue value = null;
        // current column value
        var columnValue = row[column.ColumnName];
        // set the value based on the datatype
        if (column.DataType == typeof(string)) value = new BsonString(columnValue.ToString());
        else if (column.DataType == typeof(int)) value = new BsonInt32(Convert.ToInt32(columnValue));
        else if (column.DataType == typeof(float)) value = new BsonDouble(Convert.ToDouble(columnValue));
        else if (column.DataType == typeof(double)) value = new BsonDouble(Convert.ToDouble(columnValue));
        else if (column.DataType == typeof(bool)) value = new BsonBoolean(Convert.ToBoolean(columnValue));
        else if (column.DataType == typeof(DateTime)) value = new BsonDateTime(Convert.ToDateTime(columnValue));
        // add the element (skipping types not handled in this trimmed-down example)
        if (value != null) document.Add(new BsonElement(column.ColumnName, value));
    }
    // insert the document in the generic collection
    InsertDocument(document);
}
As you can see it's pretty simple. I have removed a lot of types from the list, since many custom types can pass through, and just kept the basic ones here. The problem is that I cannot figure out how to force the BsonDateTime to be saved as local time in the collection; filtering from legacy apps doesn't work otherwise. I need the values saved as local time. This was never an issue in the past, but because of legacy apps from the early 90's that still need support, I have to figure something out.
I also need to reload them as local time. If I could, I would save them as strings, but I can't: since all the columns are random, I have no way to know at load time whether a given BsonString is really a string or a DateTime. And for reloading, I must not actually convert to local time; I must reload the exact value stored in the database. I only control the creation of the document. On the reading side I control only a few consumers, written in C#, Java and C++; the rest are legacy apps from companies that don't even exist anymore.
I did try adjusting every date entering the system to account for UTC, so that the value stored as UTC matches the local time; filters from legacy apps then still work, but all the .NET, Java and C++ apps load the wrong value instead of the written one.
Is there a way to simply disable UTC for a specific collection or database in MongoDB, the way you can in SQL Server?
MongoDB stores times in UTC and does not have time zone support. You can store any values you like but they will be interpreted as UTC timestamps by most MongoDB-related software.
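If every reader you control applies the same shim, one workaround is to store the local wall-clock value as if it were UTC and reinterpret it the same way on read: the raw BSON date then carries exactly the digits the legacy filters expect, and the modern readers recover the written value. A minimal sketch of the idea in Java (the class and method names are illustrative; the C# and C++ readers would need the equivalent conversions):

import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.util.Date;

public final class LocalTimeShim {

    // Write path: treat the local wall-clock value as if it were UTC, so the
    // stored BSON date literally carries the local time.
    public static Date toStored(LocalDateTime wallClock) {
        return Date.from(wallClock.toInstant(ZoneOffset.UTC));
    }

    // Read path: reinterpret the stored instant as wall-clock time, recovering
    // the exact value that was written.
    public static LocalDateTime fromStored(Date stored) {
        return LocalDateTime.ofInstant(stored.toInstant(), ZoneOffset.UTC);
    }
}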
I find the getVersion API a bit hard to grasp. After some manual experiments with workflow changes, I found out that it's perfectly fine to have a piece of code like this:
val version = Workflow.getVersion("change#1", 1, 1);
val anotherVersion = Workflow.getVersion("change#2", 2, 2);
Does this mean that the integer version is assigned to a changeId and not to a workflow instance? Does a single workflow instance/execution keep a set of integer-based versions?
What is the purpose of the minSupported and maxSupported parameters? Why not simply use an API like the one below?
val version = Workflow.getVersion("change#1")
if (version) {
// code after "change#1" changes
} else {
// code before "#change#1" changes
}
You are correct: the version is assigned to a changeId, not to a workflow instance. This allows versioning each piece of the workflow code independently, and it makes it possible to fix bugs while a workflow is already running but hasn't yet reached that part of the code.
The main reason is validation. The getVersion call records maxVersion in the workflow history when the code is executed for the first time, so on replay the recorded version is used, guaranteeing a correct replay even if maxVersion has changed since then. When a branch is removed, minVersion is incremented. Imagine such code being deployed by mistake while a workflow that still needs the removed branch is running. getVersion will detect that minVersion is larger than the version recorded in the history and fail the decision task, essentially blocking the workflow execution instead of breaking it. The same happens if the recorded version is higher than the maxVersion argument.
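For reference, the usual pattern is to start with minSupported equal to DEFAULT_VERSION and raise it only once the old branch is removed. A minimal sketch in Java (oldActivity and newActivity are placeholder calls):

int version = Workflow.getVersion("change#1", Workflow.DEFAULT_VERSION, 1);
if (version == Workflow.DEFAULT_VERSION) {
    // history recorded before change#1 existed
    oldActivity();
} else {
    // history recorded with change#1 in place
    newActivity();
}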
Update: Answer to the comment
In other words, I'm trying to come up with a situation where using
many different changeIds and not exceeding maxVersion=1 is not enough
They are enough if you never remove branches. But if you do, validation of the minimal version is very convenient. For example, look at the following code:
val version = Workflow.getVersion("change", 0, 2);
if (version == DEFAULT_VERSION) {
// before change
} else if (version == 1) {
// first change
} else {
// second hange
}
Let's remove the default version:
val version = Workflow.getVersion("change", 1, 2);
if (version == 1) {
// first change
} else {
// second hange
}
Now look at the version without min and max:
var version1 = Workflow.getVersion("change1");
var version2 = Workflow.getVersion("change2");
if (version1 == DEFAULT_VERSION) {
// before change
} else if (version2 == DEFAULT_VERSION) {
// first change
} else {
// second hange
}
Let's remove the default branch:
var version2 = Workflow.getVersion("change2");
if (version2 == DEFAULT_VERSION) {
// first change
} else {
// second hange
}
Note that a workflow that used the last code sample is going to break in an unpredictable way if it is routed by mistake to a worker that doesn't know about version2, only about the original default version. The first example, with min and max versions, will detect the issue gracefully.
I need to import a large table of data into my database in one go. I currently have it as an Excel file, but I am happy to copy it to Google Sheets etc.
So far I've added a few entries manually, directly via Cloud Firestore.
Is there a solution already out there to achieve this?
I think the easiest way to export table data into Firestore is to use a Google Apps Script library (for Google Sheets).
Step 1
Make a copy of THIS example Google Spreadsheet I created
Step 2
From the menu of YOUR copy of the example Google Spreadsheet from step 1, click Tools > Script Editor. This should open up the example Google Apps Script associated with the spreadsheet.
Step 3
Follow the Steps for installing this library and then update the script with the following:
email
key
projectID
These variables are generated by going to the Google Service Accounts page. This requires that you already have a Firebase or Google Cloud account set up. I won't repeat all the steps that are already covered in the aforementioned GitHub writeup. Just follow them carefully, and realize that the private_key is THE ENTIRE KEY: starting with -----BEGIN PRIVATE KEY-----\n, EVERYTHING in between, and ending with \n-----END PRIVATE KEY-----\n
Step 4
Insert a page on your spreadsheet that contains your data, and EDIT the script to use your new sheet name and your data. I have HEAVILY commented the script, so it's pretty clear what almost every line of code is doing. For those of you that just want to peek at the Google Apps Script behind this spreadsheet, here's the code:
// Note this Script uses an external library as per this page:
// https://github.com/grahamearley/FirestoreGoogleAppsScript
// This solution requires a Google Spreadsheet and a Firebase Account
// FOLLOW THE INSTRUCTIONS ON THAT GITHUB REPO TO SETUP NEEDED API KEYS!!!

// Global Variables
const ss = SpreadsheetApp.getActiveSpreadsheet(); // Gets the active "workbook"
const sheet = ss.getSheetByName('Restaurants'); // CHANGE TO YOUR SHEET NAME
const headerRowNumber = 1; // If you have more than one header row, change this value to the number of header rows

// If you want to mark modified cells, then set up a trigger for the following function:
// Edit > Current Project Triggers > (+ Add Trigger) > On Edit Spreadsheet etc
function onEdit(e) {
  var cell = ss.getActiveCell(); // This will also effectively get our row
  var dataRange = sheet.getDataRange(); // This checks for all rows/columns with data
  var modifiedCol = dataRange.getLastColumn() - 1; // Our "modified" column should be the second to last
  if (cell.getColumn() < modifiedCol && cell.getRow() > headerRowNumber) { // If we edit any cell to the left of our modified column and below our header...
    var celltoMark = sheet.getRange(cell.getRowIndex(), modifiedCol); // Get the R/C coordinates of the cell to place the modified time in
    celltoMark.setValue(new Date()); // Write the timestamp to that cell
  }
}

// This will parse any comma separated lists you create in any of your fields (useful for search words, or attributes, etc)
function listToArray(list) {
  var ogArray = list.split(","); // Input is a comma separated list
  let trimmedArr = ogArray.map(string => string.trim()); // Strip out the leading/trailing whitespace, if any
  return trimmedArr; // Return the cleaned array
}

function writeToFireStore() {
  const email = 'sheets@yourprojectid.iam.gserviceaccount.com'; // CHANGE THIS!!!
  const key = '-----BEGIN PRIVATE KEY-----\nYOURPRIVATEKEY\n-----END PRIVATE KEY-----\n'; // CHANGE THIS!!!
  const projectID = 'yourprojectid'; // CHANGE THIS!!!
  var firestore = FirestoreApp.getFirestore(email, key, projectID);
  const collection = "MySpreadsheetData"; // Name of your Firestore Database "Collection"
  var dataRange = sheet.getDataRange().offset(headerRowNumber, 0, sheet.getLastRow() - headerRowNumber); // This is your data range
  var data = dataRange.getValues(); // This is an array of your data range's values
  var lastCol = dataRange.getLastColumn(); // This is the last column with a header
  var newDoc = {}; // Instantiate your data object. Each one will become the data for your Firestore documents
  // r = row number in this case
  for (let r = 0; r < data.length; r++) {
    //Logger.log("R = ", r);
    var cellMod = dataRange.getCell(r + 1, lastCol - 1);
    var cellFS = dataRange.getCell(r + 1, lastCol);
    var cellModVal = cellMod.getValue();
    var cellFSVal = cellFS.getValue();
    //
    // IMPORTANT READ THIS IMPORTANT READ THIS IMPORTANT READ THIS IMPORTANT READ THIS IMPORTANT READ THIS!!!
    // Well, read the line below...
    if (r > 2) break; // Keep this line while testing so you don't write all your rows to Firestore on every run; comment it out for the real import
    newDoc[r] = {
      name: data[r][1],
      category: data[r][2],
      cuisine: data[r][3],
      address: {
        add1: data[r][4],
        add2: data[r][5],
        city: data[r][6],
        state: data[r][7],
        zip: data[r][8]
      },
      tel: data[r][9],
      searchterms: listToArray(data[r][10]) // Let's turn a csv list into an array
    };
    // For the sake of efficiency and to save $, we WON'T create documents that have already been created...
    // ...and we won't update documents whose Firestore timestamp is newer than the Modified timestamp.
    // If there's no Firestore timestamp in our spreadsheet, let's create the Firestore document and set the Firestore stamp:
    if (!cellFSVal) {
      var now = new Date(); // Generate timestamp right now
      try {
        firestore.createDocument(collection + "/" + data[r][0], newDoc[r]); // To use your own document ID
        // Now let's insert a timestamp in our Firestore TS column of the sheet so we know it's been added to Firestore
        cellFS.setValue(now);
        Logger.log("Row ", r, " (", data[r][1], ") is NEW and was added to FireStore successfully");
      } catch (e) {
        Logger.log("Error: ", e, " : Document with the same name already existed in Firestore.");
      }
    }
    // If a Firestore timestamp exists but the Modified timestamp is newer, let's update the Firestore document:
    else if ((cellFSVal) && (cellModVal > cellFSVal)) {
      var now = new Date(); // Fresh timestamp (the declaration above sits in a branch that may not have run)
      try {
        firestore.updateDocument(collection + "/" + data[r][0], newDoc[r]);
        // Now let's insert a timestamp in our Firestore TS column of the sheet so we know it's been updated in Firestore
        cellFS.setValue(now);
        Logger.log("Row ", r, " (", data[r][1], ") updated/edited.");
      } catch (e) {
        Logger.log("Error: ", e, " : Document existed, we tried updating it, but the update failed.");
      }
    }
    else {
      Logger.log("Row ", r, " (", data[r][1], ") is already in Firestore & hasn't been modified. Skipped.");
    }
  }
}
Step 5
Once your script is modified to your needs, it's time to run it. Save it (File > Save), choose the function "writeToFireStore" from the "Select function" dropdown in the menu bar (between the bug icon and the lightbulb), then hit the PLAY icon (to the left of the bug icon). At this point you will likely be prompted to accept the permissions the script needs, which you must accept for it to run. Once you've accepted the permissions, run the "writeToFireStore" function again if it hasn't already run, and voila!
NOTES:
I created a function that automatically writes a Modified timestamp to the second-to-last column of the target sheet, and the export function writes a Firestore timestamp to the last column (so you know which rows have been successfully exported to Firestore). This way, if you run the Firestore function again and you haven't changed the data on your sheet, it won't bother updating the database with the same data (saving you money and/or server resources). For this functionality to work, you must set up project triggers (which is explained in the comments inside the script).
I am trying to select rows that are older than 7 days from the current date. The database used is DB2 version 9.
Can you please tell me how exactly I can use the datetime in the query? The date field in the table is of type TIMESTAMP.
I am able to run the query manually without issues. However, when I use it in a prepared statement, the app hangs when executing result = pselect.executeQuery();, and as a result we need to restart the DB2 instance to clear it.
Can you please help with what might be the issue? I do not see any exceptions at all. Other parts of the code work fine if I remove the select_query part.
try {
    String select_query = "SELECT URL_ID ,URLVAL FROM URL_TAB WHERE " +
        "UPDATED_DATE < TIMESTAMP(CURRENT_DATE - 7 DAYS, '00.00.00')";
    System.out.println("select_query=" + select_query);
    conn = JDBCDataObjectFactoryManager
            .getConnection("JDBCConnectionFactory-SDE");
    pselect = conn.prepareStatement(select_query);
    System.out.println("pselect=" + pselect);
    try {
        System.out.println("inside try");
        result = pselect.executeQuery();
        System.out.println("result=" + result);
    } catch (Exception e) {
        System.out.println("inside catch");
        System.out.println("error message==============>" + e.getMessage());
    }
    if ((result != null) && (result.next())) {
        System.out.println("3 >>>>>>>>>>>>>>>>>>>>>>>>>");
        url_id = result.getInt(1);
        url = result.getString(2);
    } //end if
There are two possibilities: either the query is in a lock wait, or it runs for so long that it appears to be hung.
Check the value of the LOCKTIMEOUT database configuration parameter; by default it is -1, which means an infinite wait, and you normally want to set it to a more reasonable value, typically 30 or 60 seconds. If a lock wait is what causes your application to "hang", you would then get an exception instead, which will help you debug further.
If the issue is caused by poor query performance, you'll need to work with your DBAs to figure out the root cause and resolve it.
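Independently of the server-side setting, you can also bound the statement on the client with JDBC's standard query timeout, so that a lock wait or a runaway query surfaces as an SQLException instead of an apparent hang. A minimal sketch against the code above (the 60-second value is arbitrary, and support depends on the driver; recent DB2 JDBC drivers honor it):

pselect = conn.prepareStatement(select_query);
pselect.setQueryTimeout(60); // seconds; the statement is cancelled and an exception thrown on expiry
result = pselect.executeQuery();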
I have implemented a GWT incubator table following the example at http://zenoconsulting.wikidot.com/blog:17
This works great; however, the order of the rows is not the same as the order in which they were passed in via the list object.
The data in this example is set in the following part of the code:
public long showMessages(ArrayList<Message> list) {
    long start = System.currentTimeMillis();
    // update the count
    countLabel.setText("There are " + list.size() + " messages.");
    // reset the table model data
    tableModel.setData(list);
    // reset the table model row count
    tableModel.setRowCount(list.size());
    // clear the cache
    cachedTableModel.clearCache();
    // reset the cached model row count
    cachedTableModel.setRowCount(list.size());
    // force to page zero with a reload
    pagingScrollTable.gotoPage(0, true);
    long end = System.currentTimeMillis();
    return end - start;
}
Please provide some solution to fix this issue.
Awesome, thank you! Somehow I ignored that part: internally, further down the line, the code was using a HashMap to manage the table data, which obviously does not retain insertion order. I changed it to a LinkedHashMap and it worked.
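For anyone hitting the same symptom, the difference is easy to demonstrate: HashMap makes no guarantee about iteration order, while LinkedHashMap iterates in insertion order. A small standalone sketch:

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hash = new HashMap<>();
        Map<String, Integer> linked = new LinkedHashMap<>();
        for (String key : new String[] {"delta", "alpha", "charlie", "bravo"}) {
            hash.put(key, key.length());
            linked.put(key, key.length());
        }
        System.out.println(hash.keySet());   // unspecified order, e.g. [charlie, delta, bravo, alpha]
        System.out.println(linked.keySet()); // always [delta, alpha, charlie, bravo]
    }
}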