How to pass database name dynamically in ZIO ZLayer dependency - Scala

Problem: I use a different database for every customer (each customer has a separate sub-domain), and the database name is derived from the domain name: e.g. for customer1.abc.com the database "db_customer1" is used for the PostgreSQL DB connection (e.g. at line 31 of https://github.com/pawank/zio-playground/blob/master/src/main/scala/dev/xymox/zio/playground/zhttp/ItemEndpoints.scala the new DB information is to be used).
zio-http is to be used for the implementation (e.g. https://github.com/pawank/zio-playground/blob/master/src/main/scala/dev/xymox/zio/playground/zhttp/ItemServer.scala).
How do I either pass a database name into every repository function so the DB library can use it, or provide the ZLayer dynamically with the new DB name instead of the one taken from the config file? (In the example code, at line 21 of https://github.com/pawank/zio-playground/blob/master/src/main/scala/dev/xymox/zio/playground/zhttp/ItemServer.scala, the data source ZioQuillContext.dataSourceLayer is statically set.)
Any suggestions on how to provide the layer dependency within the HTTP methods?
Steps to run the minimal reproducible example:
$ git clone git@github.com:pawank/zio-playground.git
$ sbt
> run
(choose main class number 40, for example)
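One possible direction (a minimal sketch only, not code from the linked repo; it assumes ZIO 2.x, the PostgreSQL JDBC driver, and hypothetical names ItemService, ItemServiceLive and dbNameFromHost): keep the repository layer generic over a javax.sql.DataSource and build that DataSource layer per request from the Host header, providing it inside the route handler instead of wiring ZioQuillContext.dataSourceLayer once at startup.
import zio._
import javax.sql.DataSource
import org.postgresql.ds.PGSimpleDataSource

trait ItemService {
  def all: Task[List[String]]
}

// Hypothetical implementation; the real one would run the Quill queries against `ds`.
final case class ItemServiceLive(ds: DataSource) extends ItemService {
  def all: Task[List[String]] = ZIO.succeed(Nil) // placeholder
}

object ItemService {
  val live: ZLayer[DataSource, Nothing, ItemService] =
    ZLayer.fromFunction(ItemServiceLive.apply _)
}

object DynamicDb {
  // e.g. "customer1.abc.com" -> "db_customer1"
  def dbNameFromHost(host: String): String =
    "db_" + host.takeWhile(_ != '.')

  // Build a DataSource layer for one database name (no pooling here, just a sketch).
  def dataSourceLayer(dbName: String): ZLayer[Any, Throwable, DataSource] =
    ZLayer.fromZIO(ZIO.attempt {
      val ds = new PGSimpleDataSource()
      ds.setDatabaseName(dbName)
      ds
    })

  // Called from the HTTP route handler with the request's Host header:
  // the per-customer layer is provided only to the effect that needs it.
  def listItemsFor(host: String): Task[List[String]] =
    ZIO
      .serviceWithZIO[ItemService](_.all)
      .provide(dataSourceLayer(dbNameFromHost(host)), ItemService.live)
}
A zio-http route would extract the host from the request headers and call listItemsFor(host); since creating a DataSource per request is wasteful, in practice the per-database layers would be memoized (e.g. in a Ref keyed by database name).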

Related

How to match a list of values from Database1 with a column in Database2 using JDBC Request in JMeter?

I am quite new to JMeter, so I am looking for the best approach to do this: I want to get a list of messageIDs from Database1, check whether these messageID values are found in Database2, and then check the ErrorMessage column for these IDs against what I expect.
I have the JDBC Request working for extracting the list of messageIDs from Database1. JMeter returns the list to me, but now I'm stuck. I am not sure how to handle the Variable Names and Result Variable Names fields in the JDBC Request and how to use them in the next Throughput Controller loop for the JDBC Request against Database2.
My JDBC request looks like this (PostgreSQL):
SELECT messageID FROM database1
ORDER BY created DESC
FETCH FIRST 20 ROWS ONLY
Variable names: messageid
Result variable names: resultDB1
Then I use the BeanShell Assertion to see whether the connection to the database is present, or whether the response is empty.
But now I have to connect to a different database, so I need to make a new Throughput Controller with a new JDBC configuration, Request, etc. in there, but I don't know how to pass the messageID list on to this new request.
What I thought about was writing the list of results from Database1 into a file and then reading the values from that file for Database2, but that seems unnecessarily complicated to me; it feels like JMeter should already have a solution for this. Also, I am running my JMeter tests on a remote Linux server, so I don't want to make it more complicated by creating new files and saving them somewhere.
You can convert your resultDB1 into a JMeter Property like:
props.put("resultDB1", vars.getObject("resultDB1"));
As per JMeter Documentation:
Properties are not the same as variables. Variables are local to a thread; properties are common to all threads
So basically JMeter Properties are a subset of Java Properties, which are global for the whole JVM.
Once done you will be able to access the value in other Thread Groups like:
// each result variable is an ArrayList of row maps (column name -> value)
ArrayList resultDB1 = (ArrayList) props.get("resultDB1");      // Database1 rows, shared via props
ArrayList resultDB2 = (ArrayList) vars.getObject("resultDB2"); // Database2 rows from the JDBC Request in this Thread Group
//your code to compare 2 result sets here
Also be aware that since JMeter 3.1 you should be using JSR223 Test Elements and the Groovy language for scripting, so consider migrating to the JSR223 Assertion at the next available opportunity.

connecting several database using tFileInputProperties in Talend

I'm completely new to Talend and currently I need to build a job that reads parameter values like
Database Name
Server Name
Host Name
Password
username
from a properties file and passes those values to a tPostgresqlConnection component.
This is what I have tried so far:
tFileInputProperties -------> tContextLoad ------> tPostgresqlConnection
Talend Job Image:
The problem is that the job takes only one database connection instead of looping over all the database connections I have defined in the properties file.
Can someone advise how I can create a job that will loop the parameter values through the tPostgresqlConnection component?
I suggest you create a properties file per database connection, and in each file you define your connection parameters:
server=myServer
port=myPort
user=myUser
password=myPassword
...
(Make sure your parameters have the same keys across files.)
Then you can do something like this:
tFileList -- Iterate -- tFileInputProperties -- Main -- tContextLoad -- OnComponentOk -- tRunJob
The tContextLoad component will populate the context parameters (which need to exist in both your main job and your child job).
In the tRunJob, you encapsulate your job logic: connect to your database using the connection parameters (server, port, user, etc.) passed down from the parent job, do what you need to do, and close your connection at the end of the job.
Make sure to check the option "Transmit whole context" in your tRunJob component settings.
Of course you could do it all in a single job using OnSubjobOk links, but I find this approach more readable (and more easily maintainable).

Replacing Data Source from one Server to Another

We are deploying Tableau for a bank.
We had created 6 test dashboards using dummy data on a staging database over a SQL connection, let's say with IP 10.10.10.10.
Now we need to use the same views we had built with the dummy data on Live data, but using a different connection, which is again a SQL engine, with IP let's say 20.20.20.20. All the variable names and other properties are the same; the only difference is that the Live data will not have the calculated fields, which we can deploy on the Live environment.
The challenge is that the LIVE data, being a bank's data, is highly confidential and cannot be used from outside the operations site; rather, we need to deploy it from an ODC [restricted environment]. Hence we simply cannot do a replace data source.
Hence we are planning to move the twbx files and data extracts for each of these views to the ODC using a shared folder. Then the process would be as below:
As the LIVE SQL database is different from the dummy SQL database, we will get an error
We will select edit data connection
Will select tableau data extract for each sheet and dashboard
Will then select the option of replace data source and select LIVE SQL database
Will extract the new data
The visualization should work fine
Earlier we had just moved the TWBX files, hence it failed. Is there a different approach to this?
I did something similar to this.
For that, you must:
have the same schema in the Live database as in the dummy database
not change the name of any source table or column
create your viz
send it in .twb form, which is an editable XML format
Now the hard part: open your .twb in Notepad and replace all the connection details with the new ones
save it and open it in Tableau
tell me if it didn't work
One method would be to modify the hosts file on your local computer, pointing the production server name to the staging instance of the database. For example, let's say your production database is prod.url.com and you have a reporting staging DB server instance called reportstage.otherurl.com.
Open your hosts file. Add an entry for prod.url.com. Point it to reportstage.otherurl.com
Develop the report in Desktop, with the db connection string to prod.url.com.
When you publish the twb file to Server, no connection string changes are needed.
Another, easier, way is to publish the twb to Server with your staging connection string and then edit the connection string in the data source on Server.
Develop the twb file on your local computer against your staging database.
Publish the twb file to Server.
Go to the workbook on Server and instead of looking at the views, click on Data Sources.
Edit the data source(s) connection information. This allows you to edit the server name, port, username, or password.
I've used this second method quite a bit. We have an environment where we can't hit the production db outside of the data center. Our staging environment doesn't have that restriction. We develop against the stage db, deploy, and edit the server name in the data source.

Can I connect to a specific Enterprise architect model?

Using .NET, I'd like to connect to an EA model from an external application.
If I have more than one EA model open -- e.g. two remote SQL Server-hosted models -- how do I specify extracting data from only one of the models?
// Is there any way to specify a specific data source?
var r = new EA.Repository();
// I don't think this is what I want, because:
// 1) I didn't want to Open a document -- just connect to it
// 2) I don't have a filename -- just a model name and/or ConnString...
bool isOpen = r.OpenFile("C:/Sparx-EA/Test Project.EAP");
// etc.
Element ele = r.GetElementByID(10);
Thank you!
This is documented in the manual:
OpenFile (string Filename): Boolean
Notes: This is the main point for opening an Enterprise Architect project file from an automation client, and working with the contained objects.
If the required project is a DBMS repository, and you have created a shortcut .eap file containing the database connection string, you can call this shortcut file to access the DBMS repository.
You can also connect to a SQL database by passing in the connection string itself instead of a filename.
A valid connection string can be obtained from the Open Project dialog by selecting a recently opened SQL repository.
Parameters: Filename: String - the filename of the Enterprise Architect project to open
Repository.OpenFile() is the correct method to use. You can pass it a connection string, it doesn't have to be a file.
A Repository object can only be connected to one EA project at a time. So in order to connect to two EA projects in the same process, create two Repository objects.
Finally, the numeric identities used in calls like Repository.GetElementByID() are valid only within a repository. The number 10 likely refers to two different elements in two repositories -- or might have been deleted in one of them. If you know you've got the same element in two repositories, use Repository.GetElementByGuid() instead.

How does DSpace process a query in JSPUI?

How is a query processed in DSpace, and how is data managed between the front end and PostgreSQL?
Like every other webapp running in a Servlet Container like Tomcat, the file WEB-INF/web.xml controls how a query is processed. In case of DSpace's JSPUI you'll find this file in [dspace-install]/webapps/jspui/WEB-INF/web.xml. The JSPUI defines several filters, listeners and servlets to process a request.
The filters are used to report that the JSPUI is running, to ensure that restricted areas can be seen only by authenticated users (or by authenticated administrators only), and to handle content negotiation.
The listeners ensure that DSpace has started correctly. During its start DSpace loads the configuration, opens the database connections that it uses in a connection pool, lets Spring do its IoC magic and so on.
To begin with, the most important parts for seeing how a query is processed are the servlets and the servlet-mappings. A servlet-mapping defines which servlet is used to process a request with a specific request path: e.g. all requests to example.com/dspace-jspui/handle/* will be processed by org.dspace.app.webui.servlet.HandleServlet, while all requests to example.com/dspace-jspui/submit will be processed by org.dspace.app.webui.servlet.SubmissionController.
The servlets use their Java code ;-) and the DSpace Java API to process the request. You'll find most of it in the dspace-api module (see [dspace-source]/dspace-api/src/main/java/...) and a smaller part in the dspace-services module ([dspace-source]/dspace-services/src/main/java/...). Within the DSpace Java API there are two important classes if you're interested in the communication with the database:
One is org.dspace.core.Context. The context contains information on whether and which user is logged in, an initialized and connected database connection (if all went well), and a cache. The methods Context.abort(), Context.commit() and Context.complete() are used to manage the database transaction. That is the reason why almost all methods that manipulate the database require a Context as a method parameter: it controls the database connection and the database transaction.
The other one is org.dspace.storage.rdbms.DatabaseManager. The DatabaseManager is used to handle database queries, updates, deletes and so on. Every DSpaceObject contains a TableRow object which holds the information about the object as stored in the database. Inside the DSpaceObject classes (e.g. org.dspace.content.Item, org.dspace.content.Collection, ...) the TableRow may be manipulated and the changes stored back to the database by using DatabaseManager.update(Context, DSpaceObject). The DatabaseManager provides several methods to send SQL queries to the database and to update, delete, insert or even create data in the database. Just take a look at its API, or search for "SELECT" in the DSpace source to get an example.
In the JSPUI it is important to call Context.commit() if you want to commit the database state. If a request is processed and Context.commit() was not called, the transaction will be aborted and the changes get lost. If you call Context.complete(), the transaction will be committed, the database connection will be freed and the context is marked as finished. After you have called Context.complete() the context cannot be used for a database connection any more.
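As a hedged illustration of that lifecycle (pre-6.x DSpace API assumed; withDSpaceContext is a made-up helper, and Scala is used only to match the other example on this page, the DSpace API itself being Java), the usual pattern is: do the work, call complete() on success and abort() on failure.
import org.dspace.core.Context

object DSpaceTx {
  // Hypothetical helper, not part of DSpace: runs `work` in a fresh Context,
  // committing via complete() on success and rolling back via abort() on failure.
  def withDSpaceContext[A](work: Context => A): A = {
    val context = new Context()
    try {
      val result = work(context)
      context.complete() // commits the transaction and frees the connection
      result
    } catch {
      case e: Exception =>
        if (context.isValid) context.abort() // rolls back; uncommitted changes are lost
        throw e
    }
  }
}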
DSpace is quite a huge project and a lot more could be written about its ORM, the initialization of the database and so on. But this should already help you to start developing for DSpace. I would recommend you read the "Architecture" part of the DSpace manual: https://wiki.duraspace.org/display/DSDOC5x/Architecture
If you have more specific questions you are always invited to ask them here on Stack Overflow or on our mailing lists (http://sourceforge.net/p/dspace/mailman/): dspace-tech (for any question about DSpace) and dspace-devel (for questions regarding the development of DSpace).
It depends on the version of DSpace you are running, along with your configuration.
In DSpace 4.0 or above, by default, the DSpace JSPUI uses Apache Solr for all searching and browsing. DSpace performs all indexing and querying of Solr via its Discovery module. The Discovery (Solr) based search/indexing classes are available under the "org.dspace.discovery" package.
In earlier versions of DSpace (3.x or below), by default, the DSpace JSPUI uses Apache Lucene directly. In these older versions, DSpace called Lucene directly for all indexing and searching. The Lucene based search/indexing classes are available under the "org.dspace.search" package.
In both situations, queries are passed directly to either Solr or Lucene (again depending on the version of DSpace). The results are parsed and displayed within the DSpace UI.