Combining multiple databases using ProcessMaker triggers

I'm having difficulty combining (UNION-ing) two different databases in ProcessMaker using a trigger. I have no issue getting values from $oldDB, but I have no idea how to combine both $oldDB and $newDB from the same table using UNION.
$oldDB = '18863150963a29172059b73074177825';
$newDB = '65738461163a2918f0d9097091237694';
$query1 = "SELECT app_uid, app_number, app_status FROM table WHERE drNumber = '$insertDrNo'";
$result1 = executeQuery($query1, $oldDB);
The script above works, but I need to combine/UNION the same table from both $oldDB and $newDB. Any help is much appreciated; thanks in advance.

Related

SQLAlchemy - Bulk insert with multiple cascade & back_populates relationships

I have tried to optimize our insertions to the database, which is currently the bottleneck slowing down our pipeline. I decided to start by speeding up our data_generator used for testing; all the tables are empty at first and are then populated and used in various tests. I thought it would be an easy place to start.
Currently, we do pretty much all insertions with Session.add(entry) or, in some cases, bulked entries with add_all(entries), which does not improve the speed much.
The goal was to do more insertions at once and spend less time communicating back and forth with the database. I tried various bulk-insert methods (bulk_save_objects, bulk_insert_mappings, and ORM/Core methods with INSERT INTO, COPY, IMPORT ...), but I got nothing to work properly: foreign key constraints, duplicated keys, or tables not getting populated.
I will show an example of a table that would previously be added with add_all() inside a run_transaction.
class News(NewsBase):
    __tablename__ = 'news'

    news_id = Column(UUID(as_uuid=True), primary_key=True, nullable=False)
    url_visit_count = Column('url_visit_count', Integer, default=0)

    # One to many
    sab_news = relationship("sab_news", back_populates="news")
    sent_news = relationship("SenNews", back_populates="news")
    scope_news = relationship("ScopeNews", back_populates="news")
    news_content = relationship("NewsContent", back_populates="news")

    # One to one
    other_news = relationship("other_news", uselist=False, back_populates="news")

    # Many to many
    companies = relationship('CompanyNews', back_populates='news', cascade="all, delete")
    aggregating_news_sources = relationship("AggregatingNewsSource", secondary=NewsAggregatingNewsSource,
                                            back_populates="news")

    def __init__(self, title, language, news_url, publish_time):
        self.news_id = uuid4()
        super().__init__(title, language, news_url, publish_time)
We have many tables built like this, some with more relations, and my conclusion now is that having many different relationships that back_populate and update one another does not allow for fast bulk insertions. Am I wrong?
One of my current solutions, which decreased our execution time from 120 s to 15 s for a regular data_generator run for testing, looks like this:
def write_news_to_db(news, news_types, news_sources, company_news):
    write_bulk_in_chunks(news_types)
    write_bulk_in_chunks(news_sources)

    def write_news(session):
        enable_batch_inserting(session)
        session.add_all(news)

    def write_company_news(session):
        session.add_all(company_news)

    engine = create_engine(
        get_connection_string("name"),
        echo=False,
        executemany_mode="values")

    run_transaction(create_session(engine=engine), lambda s: write_news(s))
    run_transaction(create_session(), lambda s: write_company_news(s))
I used the sqlalchemy_batch_inserts library (on GitHub) together with Psycopg2 Fast Execution Helpers, setting executemany_mode="values".
I did this by creating a new engine just for these insertions - it did work, but this in itself seems like bad practice. It does work with the same database.
Anyway, this does seem to work, but it is still not the execution speed I want - especially when we are initially working with empty tables.
Ideally, I wouldn't want to do this hacky solution and avoid bulk_insertions since SQLAlchemy does not recommend using them - to avoid problems that I have faced.
But how does one construct queries to properly do bulk insertions for complex tables like these - should we redesign our tables, or is it possible?
Using multi-row insertions within run_transaction with the ORM or Core would be ideal, but I haven't been able to get it to work.
Any recommendations or help would be much appreciated!
TL;DR: Bulk insertion with multiple relationships, back_populates, and cascade - how is it supposed to be done?
CockroachDB supports bulk insertions using multi-row insert for existing tables as well as IMPORT statements for new tables - https://www.cockroachlabs.com/docs/stable/insert.html. Have you considered using these options directly?
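One way to get multi-row INSERTs out of SQLAlchemy is to hand Core a list of dictionaries and insert parent rows before child rows, so foreign keys are satisfied without per-row round trips. A minimal sketch against an in-memory SQLite database; the news/company_news schema here is invented for illustration, not the question's real one:

```python
# Sketch: multi-row INSERT with SQLAlchemy Core, parents before children.
# Table and column names are illustrative, not the question's real schema.
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, ForeignKey, insert, select, func)

metadata = MetaData()

news = Table(
    "news", metadata,
    Column("news_id", Integer, primary_key=True),
    Column("title", String, nullable=False),
)

company_news = Table(
    "company_news", metadata,
    Column("id", Integer, primary_key=True),
    Column("news_id", Integer, ForeignKey("news.news_id"), nullable=False),
    Column("company", String, nullable=False),
)

engine = create_engine("sqlite://")  # in-memory database for the demo
metadata.create_all(engine)

with engine.begin() as conn:
    # Passing a list of dicts makes the driver use executemany /
    # multi-row INSERT instead of one round trip per row.
    conn.execute(insert(news), [
        {"news_id": 1, "title": "first"},
        {"news_id": 2, "title": "second"},
    ])
    # Children go in a second bulk statement, after their parents exist.
    conn.execute(insert(company_news), [
        {"news_id": 1, "company": "acme"},
        {"news_id": 2, "company": "acme"},
        {"news_id": 2, "company": "globex"},
    ])

with engine.connect() as conn:
    n_children = conn.execute(
        select(func.count()).select_from(company_news)).scalar_one()
    print(n_children)  # 3
```

In SQLAlchemy 1.4+/2.0, session.execute(insert(News), list_of_dicts) batches the same way at the ORM level; the point is to flatten each table's rows into one statement per table, ordered parents-first, rather than relying on relationship cascades.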

Is there a way to combine the data of two rows with some intersecting data in PostgreSQL?

Good morning! I am currently working on creating a PostgreSQL database with some client information; however, I ran into an issue which I wasn't able to solve with my basic knowledge of SQL. Searching for this method also returned no results which I found useful or applicable.
I have two tables: 'mskMobile' and 'emailData'. Both of those tables contain a column named 'email' and some of those emails overlap. I figured out that I can view those intersecting emails by requesting
SELECT "mailData".email
FROM "mailData"
JOIN "mskMobile"
ON "mailData".email="mskMobile".email;
Now I want to write the data of two other columns of those common rows in 'mskMobile' named 'name' and 'surname' to the corresponding columns in 'emailData' (named identically), however I cannot find any answer on how to do so. Any suggestions on how to execute this action?
UPDATE "mailData" SET name = "mskMobile".name, surname = "mskMobile".surname
FROM "mskMobile"
WHERE "mailData".email = "mskMobile".email;
After a bit more research I came up with the following way of declaring it:
SELECT "mailData".email, "mskMobile".num, "mskMobile".name
FROM "mailData"
INNER JOIN "mskMobile"
ON "mailData".email="mskMobile".email;
This allowed me to build a new table with the data combined.
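For the goal stated in the question (copying name and surname across the intersecting emails), the UPDATE ... FROM form can be rehearsed end-to-end with Python's stdlib sqlite3, which also supports UPDATE ... FROM (SQLite 3.33+); the sample rows here are invented:

```python
# Sketch: rehearsing Postgres-style UPDATE ... FROM on a mskMobile /
# mailData layout using stdlib sqlite3. Sample rows are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE mskMobile (email TEXT, name TEXT, surname TEXT);
    CREATE TABLE mailData  (email TEXT, name TEXT, surname TEXT);

    INSERT INTO mskMobile VALUES ('a@x.com', 'Anna', 'Ivanova');
    INSERT INTO mailData  VALUES ('a@x.com', NULL, NULL),
                                 ('b@x.com', NULL, NULL);
""")

# Copy name/surname for the intersecting emails only; the second,
# non-matching row is left untouched.
con.execute("""
    UPDATE mailData
    SET name = mskMobile.name, surname = mskMobile.surname
    FROM mskMobile
    WHERE mailData.email = mskMobile.email
""")

rows = con.execute(
    "SELECT email, name, surname FROM mailData ORDER BY email").fetchall()
print(rows)  # [('a@x.com', 'Anna', 'Ivanova'), ('b@x.com', None, None)]
```

The WHERE clause restricts the update to the intersection, so rows without a match keep their old values rather than being set to NULL.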

What is the best and fastest way to get the database data type of a table field in C# for Postgres?

I have code:
MyDataAdapter.SelectCommand = MySelectCommand;
MyDataAdapter.Fill(MyDataTable);
MyDataGrid.ItemsSource = MyDataTable.DefaultView;
MyDataGrid.Columns.Add(MyDataGridTextColumn1);
MyDataGrid.Columns.Add(MyDataGridTextColumn2);
MyDataGrid.Columns.Add(MyDataGridTextColumn3);
...
MyDataGrid.Columns.Add(MyDataGridTextColumnXXX);
At the next step I want to format the added columns according to their database data types, but there is a time limit for building MyDataGrid. What is the best and fastest way to do it?
MyDataTable.Columns[i].DataType.Name;
or
NpgsqlDataReader dr = MyDataAdapter.SelectCommand.ExecuteReader();
dr.GetDataTypeName(i);
where i is the index of the column of interest. Note that DataType.Name gives the mapped .NET type (e.g. "Int32"), whereas GetDataTypeName gives the backend's own type name (e.g. "integer"), so only the latter reports the actual PostgreSQL data type.

How to work with queries from multiple databases

I have two queries from two different databases, and I need to create a report with those two queries. Please suggest how.
Not knowing anything about your databases, the general way to combine data from multiple databases is the UNION command. An example:
SELECT * FROM database1.Table1
UNION ALL
SELECT * FROM database2.Table1
You can also create a view of the combined result by adding CREATE VIEW CombinedTable1 AS as the first line.
Here is a good introduction on how to use UNION.
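When both databases are reachable from one connection, the query above works as-is with qualified names. The same pattern can be tried with Python's stdlib sqlite3, where ATTACH brings a second database into one connection; the Table1 schema and rows here are invented:

```python
# Sketch: UNION ALL across two databases from one connection, shown with
# stdlib sqlite3 and ATTACH; the Table1 schema here is invented.
import sqlite3

con = sqlite3.connect(":memory:")          # this is "database1" (main)
con.execute("ATTACH DATABASE ':memory:' AS database2")

con.execute("CREATE TABLE main.Table1 (id INTEGER, source TEXT)")
con.execute("CREATE TABLE database2.Table1 (id INTEGER, source TEXT)")
con.execute("INSERT INTO main.Table1 VALUES (1, 'db1')")
con.execute("INSERT INTO database2.Table1 VALUES (2, 'db2')")

rows = con.execute("""
    SELECT * FROM main.Table1
    UNION ALL
    SELECT * FROM database2.Table1
""").fetchall()
print(sorted(rows))  # [(1, 'db1'), (2, 'db2')]
```

UNION ALL keeps duplicates and skips the de-duplication pass; use plain UNION if you need duplicate rows removed.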
Hope that helps,
Chris

SQL and Crystal Reports in ADO.NET

I've created Crystal Reports using ADO.NET datasets, and the reports are working just fine, but now my question is:
How do I make my report get data from multiple tables? I tried using an INNER JOIN and it displayed nothing.
So can someone give me an idea of how to make a Crystal Report get data from multiple tables using VB.NET? And tell me the purpose of "SQL Expression" in Crystal Reports.
Thank you.
Con.close
Con.open
Query = "select customers.cust_id,customers.cust_fname,customers.cust_lname,invoices.inv_id,invoices.inv_date from customers Inner Join Invoices ON customers.cust_id = invoices.cust_id where customers.cust_id = '"& txtcustnr.text & "'"
Con.close
Read this to get you started on the purpose of SQL Expression Fields. And without any code for your table joins nobody will be able to help you out.
EDIT:
Instead of where customers.cust_id = '" & txtcustnr.text & "'" try where customers.cust_id = " & txtcustnr.text (without the quotes).
I don't know what data is in "txtcustnr.text", but if you want the value from txtcustnr.text to be compared to customers.cust_id as a number rather than a string, try the above WHERE clause.