Finding all input parameters and the queries corresponding to those input parameters - PostgreSQL

I have a PostgreSQL DB on my PC and I'm trying to connect different database applications to PostgreSQL. But before that (a research issue), for each application I need to see all the input parameters and all the queries corresponding to those input parameters that the application can issue.
How?

Look in the code of every application and see what calls are being made. In addition, figure out all the parameter values that can be sent, based on the almost infinite combination of characters and numbers the user can select from.
Or, to remain sane, turn on PostgreSQL statement logging, let the users do their thing, and analyse what calls are being made.
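For example, a minimal sketch of turning statement logging on (assuming superuser access; the exact settings you want may differ):

-- log every statement the applications send; parameter values for prepared statements
-- typically show up in DETAIL lines of the log
ALTER SYSTEM SET log_statement = 'all';
-- apply the change without restarting the server
SELECT pg_reload_conf();

The statements then appear in the server log, which you can analyse while the applications run.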

Related

Postgres AUTONOMOUS_TRANSACTION equivalent on the same DB

I'm currently working on a Spring Batch application that should insert some logs in case a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything done, and that's perfect, but it also rolls back the error logs.
I need to achieve something similar to the AUTONOMOUS_TRANSACTION of Oracle while using PostgreSQL (14).
I've seen DBLINK and it seems the only thing close to an alternative, but I have found some problems:
I need to avoid the connection string because the database host/port/name changes between environments; is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database, just use the calling connection.
Is it possible to create a function/procedure that takes care of all of this so that I only have to call it from the Java side? Maybe that way I could pass the connection data as a parameter, in case it is not possible to avoid it.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
That is, a call that without connection arguments targets the same database where it is being executed.
The problem is that I need this to be done without specifying any connection data. This will be inside a function on the executing DB, in the same schema; that function will move from one environment to the next and the code needs to stay the same, so any host/user/password must be avoided since they change per environment. And since everything happens in the same DB and schema, technically they can be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to get some information first.
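A minimal sketch of the loopback idea described above, assuming the dblink extension is installed and that pg_hba.conf allows a local, password-less connection for the current user (the function and table names are hypothetical):

CREATE OR REPLACE FUNCTION log_error_autonomous(p_message text)
RETURNS void
LANGUAGE plpgsql
AS $$
DECLARE
    -- infer the connection data from the current session instead of hard-coding it per environment
    v_conn text := format('dbname=%s port=%s user=%s',
                          current_database(),
                          current_setting('port'),
                          current_user);
BEGIN
    -- the INSERT runs on its own connection, so it survives a rollback of the calling transaction
    PERFORM dblink_exec(v_conn,
        format('INSERT INTO error_log(message) VALUES (%L)', p_message));
END;
$$;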

Problem with connecting an ADODB.Recordset to a form's Recordset in the On Open event of the form

I have an Access project that is "linked" to a SQL database and now works like a charm. The last problem I solved was making sure any Boolean fields were turned into bits with a default of 0, and adding the TIMESTAMP column in SQL, because Access is not so much of a genius with record locking (so I was told).
Now I have tried to connect directly to SQL Server by using an ADODB.Recordset and setting the form's Recordset to that recordset in the form's OnOpen event (the recordset runs a stored procedure in SQL). I get the data fine, but I get the locking error (write conflict) back.
The ADODB.Recordset's CursorLocation is set to adUseClient.
Obviously I no longer have the form's RecordSource attached or assigned to the linked SQL table.
Am I missing something? Do I need to assign anything to the form's RecordSource?
The idea is to connect directly through the use of stored procedures instead of linked tables.
Thanks so much for any help.
Adding the timestamp is a VERY good idea. And do not confuse the term/name "timestamp" with an actual date/time column; the correct term is "row version".
This issue has ZERO to do with locking. The REASON you want this column added is that Access will use it to determine when the record is dirty and, more important, to figure out whether the record has been changed. If you omit this column, Access reverts to a column-by-column testing approach. Not only does this cause more network traffic, but worse, for real (floating point) values, rounding means you can get the dreaded "this record has been changed by another user" message even though it has not been changed; columns with floating point values will cause Access to error out with that changed-record complaint.
So, do this for all tables; you even see the option included in SSMA (the Access to SQL migration wizard), and I believe it is on by default.
So yes, it is highly, in fact VERY highly, recommended that you include/add a rowversion column to all tables - this will help Access in a HUGE way.
And as noted, there is a long-standing issue with bit fields that don't have a default setting: you don't want to allow bit fields to be added/created with a null value. So, ensure that there is a default value of 0 (you set this on the SQL Server side).
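For example, a sketch of those two changes on the SQL Server side (table and column names are hypothetical):

-- add a rowversion ("timestamp") column so Access can detect changed rows
ALTER TABLE dbo.Invoices ADD RowVer rowversion;
-- make sure bit columns cannot be null and default to 0
UPDATE dbo.Invoices SET IsPaid = 0 WHERE IsPaid IS NULL;
ALTER TABLE dbo.Invoices ALTER COLUMN IsPaid bit NOT NULL;
ALTER TABLE dbo.Invoices ADD CONSTRAINT DF_Invoices_IsPaid DEFAULT 0 FOR IsPaid;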
Ok, now that we have the above cleared up?
It is not really all that clear why you want or need to adopt a stored procedure and code to load/fill up the form. You will not see any better performance than if you bind the form DIRECTLY to the linked table. Access will ONLY pull the records you tell that form to load.
So, bind the form directly to the linked table. Then you can launch/open the form to, say, one record with this:
DoCmd.OpenForm "frmInvoices", , , "InvoiceNum = 123"
Now, you would of course change the above "123" to some variable or some way to prompt the user for what invoice to work on.
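For example, a small VBA sketch of that (the InputBox prompt is just a placeholder for however you choose the invoice):

Dim strInvoice As String
' ask the user which invoice to open; only that record comes down the network pipe
strInvoice = InputBox("Enter the invoice number to open")
If Len(strInvoice) > 0 Then
    DoCmd.OpenForm "frmInvoices", , , "InvoiceNum = " & strInvoice
End If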
The invoice form will then load that ONE record. So, even if the form's bound (linked) table has 2 million rows? Only ONE record will come down the network pipe. So, all that extra work of a stored procedure, creating a recordset and pulling it? You will gain ZERO in terms of performance; you are writing all kinds of code when it simply is not required, and you will not achieve any performance superior to the above one line of code, which will automatically filter and ONLY pull down the record that meets the given criteria (in this example, the invoice number).
So:
Yes, all tables need a PK
Yes, all tables should have a rowversion (though it is called a timestamp column - nothing to do with the actual time).
Yes, all bit fields need a default of 0 - don't allow null values.
And last but not least?
I don't see any gains in performance, or even any advantages, in attempting to code your way through this by adopting stored procedures and introducing recordset code when none is required; worse, it will not gain you any performance anyway.

Can we change multiCapabilities between test runs in Protractor?

I am using the protractor-cucumber framework (Protractor 5.2.2 and Cucumber 3.2.0).
I have a requirement like this - posting some details (from a DB) to an application with different user credentials.
Currently, I am doing this with a single login credential. In beforeLaunch() I call one function (which creates a temporary table holding all the data to be entered for that user) that splits the data into sets (say Set 1, Set 2 and Set 3). I am running the automation script on 3 nodes via Selenium Grid, passing the set number to the query that fetches data from the temporary table for that set.
I have a loop in my JS file to enter data row by row, and I set getMultiCapabilities() dynamically (by dividing the total number of rows of the table for the given user by a constant).
I can run it successfully like this. But when I need to run it for multiple users, each node may have data for a different user. So I need to run it in such a way that all threads process one user at a time and then move on to the next user.
Is it possible to do it like this? Thanks in advance.
You have a tricky way of running your tests. I'm sure it could be done in an easier-to-understand way.
But if it does not break your flow, I think you could achieve what you want by creating several config files, where you keep the specific data for each user.
Better to split the logic. The test spec files should contain nothing user-specific, just something like const user = someClass.getUser(). Separately, you should have a class that manages these users. And again separately, a class where you fetch the data about user X from a DB, filesystem, API or whatever.
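A minimal sketch of the per-user config idea, assuming a shared base config and hypothetical file names:

// conf.user1.js - one config file per user, reusing a shared base config
const base = require('./conf.base.js').config;

exports.config = Object.assign({}, base, {
    // user-specific data lives here, not in the spec files
    params: {
        user: { name: 'user1', credentialsKey: 'USER1' }
    }
});

The specs would then read the current user from browser.params (or from the user-manager class suggested above) and stay identical across runs.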

SSRS won't allow parameters in embedded dataset based on data source

Whenever I construct a report that uses an embedded dataset and try to use a parameter (such as @StartDate and @EndDate), I receive an error that states I must declare the scalar variable. However, this error only comes up if I set a data source that uses the "credentials stored securely in the report server" option. If I set the data source to use "Windows integrated security," I do not receive the error.
I am at a complete loss. These reports need to be accessed by a large number of people. We have granted them "Browser" privileges through an Active Directory group in SSRS, including on the data sources.
What is the best way to proceed? Is there an easy fix?
I generally deploy with the option already set by going into the data source and choosing the 'Log on to SQL Server' section > 'Use SQL Server Authentication' > (set your user and settings). When you use a Windows user as your main user, there can be issues after you deploy.
The other question would be: does this work correctly at all times in Business Intelligence Development Studio (BIDS) and just not on the server? It is very interesting that a permission issue alone would cause a scalar error. Generally, when users have to get to the report they may still get the error, whereas not storing the credentials merely asks them for credentials. It would help to know more about the datasets and what they are returning, or are supposed to return. A Start and an End are typically defined as 'DateTime' parameters in SSRS and used in a predicate like 'WHERE thing BETWEEN @Start AND @End', with the dates chosen from a calendar by the user. If you are binding them to other datasets and there is the possibility of a user selecting multiple values, that could present an issue.
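For illustration, a dataset query of the kind described (table and column names are hypothetical; @StartDate and @EndDate are the report parameters):

SELECT OrderID, OrderDate, TotalAmount
FROM dbo.Orders
WHERE OrderDate BETWEEN @StartDate AND @EndDate

Also worth checking: an ODBC data source does not understand named @ parameters (it expects positional ? markers), so the same query can throw a "must declare the scalar variable" error under ODBC while working fine against a Microsoft SQL Server data source.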
I took a look at the data source that had been set up by our DBA. It was set up as an ODBC connection. I changed it to Microsoft SQL, and it works now. I do not understand why and would appreciate it if a more seasoned individual could explain.

Should Commands / Handlers hold the full aggregate or only its id?

I'm trying to play around with DDD and CQRS.
And I've got these two solutions:
Add the AggregateId to my command / event. It's nice because I can use my command as my web service's parameter, and I can also return instances of my commands to my forms to say "you can do this command, this one and this one".
Add my full aggregate to my command / event. It's nice because I'm sure I won't load my aggregate 100 times if there are a lot of events going on; I'll just pass the reference around (for instance, I won't load it in both my command's validator and my command handler). But I'd have to create a parameter class for each command with only the id.
For now I have the id in the commands and the full model in the events (I trust my unit of work to cache Load(aggregateId), so I won't execute the same request 100 times for one command).
Is there a right / better way ?
Yes, your current approach is correct - reference the aggregate with an identity value on the command. A command is meant to be serialized and sent across process boundaries. Also, a command is normally constructed by a client that may not have enough information to create an entire aggregate instance; this is also why an identity should be used. And yes, your unit of work should take care of caching an aggregate for the duration of the unit of work, if need be.
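A minimal Java sketch of that shape, with hypothetical class names (the repository is assumed to cache loads for the duration of a unit of work):

import java.util.UUID;

// hypothetical aggregate and repository, just enough to show the shape
class Customer {
    private final UUID id;
    private String name;
    Customer(UUID id, String name) { this.id = id; this.name = name; }
    void rename(String newName) { this.name = newName; }
}

interface CustomerRepository {
    Customer load(UUID id);      // assumed to cache within the unit of work
    void save(Customer customer);
}

// the command carries only the identity plus the data needed for the change
final class RenameCustomerCommand {
    final UUID customerId;
    final String newName;
    RenameCustomerCommand(UUID customerId, String newName) {
        this.customerId = customerId;
        this.newName = newName;
    }
}

// the handler, not the client, loads the aggregate by its identity
final class RenameCustomerHandler {
    private final CustomerRepository repository;
    RenameCustomerHandler(CustomerRepository repository) { this.repository = repository; }

    void handle(RenameCustomerCommand command) {
        Customer customer = repository.load(command.customerId);
        customer.rename(command.newName);
        repository.save(customer);
    }
}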