Using static objects in custom keyword methods for katalon - katalon-studio

As per the Katalon documentation (https://docs.katalon.com/katalon-studio/docs/handling-databases.html), the database connection object is declared static:
private static Connection connection = null;
This will be used for creating, querying, and closing the connection. These methods are called through Katalon's custom keyword feature:
CustomKeywords.'dataProvider.MySQL.connectDB'()
With a single test case this is fine, but what happens during parallel execution? Will the connection object be shared across all the threads, and will that cause problems?
Any help will be greatly appreciated.

You need to create a standalone keyword for each test case to implement different executeQuery() methods and to execute different SQL query strings.
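If the static field has to stay, one common way to make it safe under parallel execution is to hold one connection per thread in a ThreadLocal. A minimal Java sketch of the idea, not Katalon API; the StringBuilder stands in for java.sql.Connection so the sketch runs without a database driver, and all names are illustrative:

```java
class PerThreadConnection {
    // Each thread lazily gets its own "connection" instead of sharing one
    // static instance across all threads. StringBuilder stands in for
    // java.sql.Connection so this runs without a database.
    private static final ThreadLocal<StringBuilder> CONNECTION =
            ThreadLocal.withInitial(() -> new StringBuilder("conn@" + Thread.currentThread().getName()));

    static StringBuilder connectDB() {
        return CONNECTION.get();
    }

    static void closeDB() {
        // A real implementation would call connection.close() before discarding it.
        CONNECTION.remove();
    }
}
```

With this pattern, repeated calls on the same thread reuse one connection, while a second thread gets a distinct one, so nothing is shared.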


ADO.NET Best Practices for Connection and DataAdaptor Object Scope - follow-up

I had the exact same question that Mark Lansdown asked some time ago (Mark's question). The answers in that thread were somewhat helpful but left me still extremely puzzled, particularly as they relate to the recommended practice of employing "using" blocks.
The first answer seemed to indicate that the Connection object and the DataAdapter object should be created within using blocks...
DataTable dt = new DataTable();
using (OleDbConnection conn = new OleDbConnection("my_connection_string"))
using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * from Employees", conn))
{
adapter.Fill(dt);
}
Thus, the DataTable object is retained but both the DataAdapter and Connection object go out of scope the instant the table is filled.
Yet a follow-on answer indicated that the DataAdapter Object should be retained. That makes perfect sense to me as it does appear to me that the DataAdapter was designed with handling multiple commands in mind.
So this leaves me with multiple questions:
BTW, I am using vb.net with SQL Server
Question 1) In order to retain the DataAdapter object doesn't that mean I cannot create it with a using block?
Question 2) In order to create an instance of a DataAdapter don't I need an instance of a Connection object which would make it impractical to create the Connection object with a using block?
How would I implement the using blocks in code like this?
Public Class frmMain
    Private adapter As SqlDataAdapter
    Private conn As SqlConnection
    Private MyDataSet As New DataSet()
    Private Const MyTableName As String = "Employees"

    Private Sub frmMain_Load(sender As Object, e As EventArgs) Handles Me.Load
        conn = New SqlConnection("My_Connection_String")
        adapter = New SqlDataAdapter("SELECT * FROM Employees", conn)
        adapter.Fill(MyDataSet, MyTableName)
    End Sub

    Private Sub SaveButton_Click(sender As Object, e As EventArgs) Handles SaveButton.Click
        adapter.Update(MyDataSet, MyTableName)
    End Sub
End Class
I have seen a bunch of sample code for all this on MSDN, and every sample incorporated using blocks, but they always created a table and performed updates in code, all inside the using blocks, which seems like it could never work in the real world.
Thanks for any advice.
a follow-on answer indicated that the DataAdapter Object should be retained...
Question 1) In order to retain the DataAdapter object doesn't that mean I cannot create it with a using block?
No. That response is flawed. Remember, DataAdapter arrived in .NET 1.0. At that time there was no good way to dispose of your objects, so you did the best you could. Using blocks and proper IDisposable support were added in .NET 2.0, and this caused a change in direction for how things should be done. Yes, a DataAdapter can hold different kinds of queries and can support longer lifetimes if that's really what you want, but it's rarely a good choice to use it that way anymore.
If you really want, you can still create another DataAdapter later on if you need one for a different type of query, or you can use ExecuteNonQuery() for things like DELETE, INSERT, and UPDATE. But if you want the framework to do more of that kind of work for you, you should really go for a full ORM. If you write your own SELECT statements in code, you're usually better off also writing your own INSERT, UPDATE, and DELETE statements by hand (and be sure to use parameterized queries!).
Question 2) In order to create an instance of a DataAdapter don't I need an instance of a Connection object which would make it impractical to create the Connection object with a using block?
How would I implement the using blocks in code like this?
Don't write code like that. Rather than repeat myself I'll link to a previous answer explaining why:
https://softwareengineering.stackexchange.com/questions/142065/creating-database-connections-do-it-once-or-for-each-query/398790#398790
But the short version is that an ADO.NET connection object is a thin wrapper around much heavier, more expensive items in a connection pool. When you try to re-use a connection throughout a class or application, you gain efficiency in the relatively cheap wrapper at the expense of the much larger underlying connections. You really are much better off creating a new connection each time; only the connection string should be preserved for re-use.
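The pooling argument can be sketched in a few lines of Java. This is a toy pool, not ADO.NET or any real driver: the expensive "physical" connection is created once and recycled by the pool, while the cheap wrapper the caller opens per query is created and thrown away freely.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy connection pool: the expensive "physical" connection is created once
// and recycled; the wrapper handed out per query is cheap and disposable.
class ToyPool {
    static int physicalCreated = 0;              // counts expensive creations
    private final Deque<String> idle = new ArrayDeque<>();

    class Wrapper implements AutoCloseable {
        final String physical;
        Wrapper(String physical) { this.physical = physical; }
        @Override
        public void close() { idle.push(physical); }  // returned to the pool, not destroyed
    }

    Wrapper open() {
        String phys = idle.isEmpty() ? ("phys-" + (++physicalCreated)) : idle.pop();
        return new Wrapper(phys);
    }
}
```

Ten sequential open/close cycles against this pool create only one physical connection, which is why "new connection object per query" is cheap in practice when the provider pools.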
every sample code I saw incorporated using blocks but always created a table via code and performed updates via code all inside the using blocks which seems to me like it could never work in the real world.
I assure you, it works very well. Again, if you don't like it, maybe you're looking for a full ORM like EntityFramework.

Is it OK to globally set the mybatis executor mode to BATCH?

I am currently developing a Spring Boot app, which uses mybatis for its persistence layer. I want to optimize the batch insertion of entities in the following scenario:
// flightSerieMapper and legMapper are used to create a series of flights.
// legMapper needs to use batch insertion.
@Transactional
public FlightSerie add(FlightSerie flightSerie) {
    Integer flightSerieId = flightSeriesSequenceGenerator.getNext();
    flightSerie.setFlightSerieId(flightSerieId);
    flightSerieMapper.create(flightSerie);
    // create legs in batch mode
    for (Leg leg : flightSerie.getFlightLegs()) {
        Integer flightLegId = flightLegsSequenceGenerator.getNext();
        leg.setLegId(flightLegId);
        legMapper.create(leg);
    }
    return flightSerie;
}
mybatis is configured as follows in application.properties:
# this can be externalized if necessary
mybatis.config-location=classpath:mybatis-config.xml
mybatis.executor-type=BATCH
This means that mybatis will execute all statements in batch mode by default, including single insert/update/delete statements. Is this OK? Are there any issues I should be aware of?
Another approach would be to use a dedicated SQLSession specifically for the LegMapper. Which approach is the best (dedicated SQLSession vs global setting in application.properties)?
Note: I have seen other examples where "batch inserts" are built with a <foreach/> loop directly in the MyBatis XML mapper file. I don't want to use that approach because it does not actually perform a batch insert.
As @Ian Lim said, if you globally set the executor type to BATCH, make sure you annotate mapper methods that perform inserts and updates with the @Flush annotation.

Another approach would be to use a dedicated SQLSession specifically for the LegMapper. Which approach is the best (dedicated SQLSession vs global setting in application.properties)?

Keep in mind that if you use different SQL sessions for different mappers, there will be a separate transaction for each SQL session. If a service or service method annotated with @Transactional uses several mappers backed by different SQL sessions, it will allocate a separate SQL transaction for each. So it is impossible to perform an atomic data operation that involves mappers with different SQL sessions.
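Why the @Flush advice matters can be seen with a toy model of BATCH executor semantics, written in plain Java rather than against the MyBatis API: in batch mode, statements are buffered until they are flushed, so a read issued mid-batch does not see the pending writes.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a BATCH-mode session: inserts are buffered and only take
// effect when flushStatements() runs (MyBatis flushes on commit, or
// earlier when a mapper method is annotated with @Flush).
class ToyBatchSession {
    private final List<String> pending = new ArrayList<>();
    private final List<String> table = new ArrayList<>();

    void insert(String row) { pending.add(row); }   // queued, not executed yet

    List<String> select() { return table; }         // sees only flushed rows

    int flushStatements() {                         // returns number of statements executed
        int executed = pending.size();
        table.addAll(pending);
        pending.clear();
        return executed;
    }
}
```

Two inserts followed by a select return nothing until flushStatements() runs, which is exactly the surprise a globally batched executor can spring on code that expects immediate writes.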

Reworking EF nested connections to avoid MSDTC on Azure

I've deployed to Azure and Azure SQL, which doesn't support MSDTC, and I'm having trouble understanding how to rework my code to prevent what I assume are nested connections. I'm fairly new to EF, my knowledge of TransactionScope is not wonderful, and I'm not sure I have the right pattern.
I am trying to use repos, which call on a shared instance of the ObjectContext (I tried to dispose of it on EndRequest but had issues, so this is another problem for me).
I have a transaction which calls SaveChanges on the ObjectContext instance several times, but at some point the context becomes disposed. What governs this, and what can I do to get it working correctly?
If you want to avoid issues with distributed transactions, you must handle the connection manually, because you need only one open connection per TransactionScope: one context instance, with one connection, used for all queries and database updates. The code should look like this:
using (var context = new YourObjectContext()) {
context.Connection.Open();
...
}
I am trying to use repos, which call on a shared instance of the ObjectContext (I tried to dispose of it on EndRequest but had issues, so this is another problem for me).

If you share your context instance among multiple requests, or, even worse, use a single context instance to handle all your requests, you should stop now and completely redesign your application. Otherwise it will not work correctly.
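The "context becomes disposed" symptom is exactly what sharing produces: one consumer closes the instance while another still holds a reference to it. A language-neutral toy in Java (not the EF API; the class and method names are made up for illustration):

```java
// Toy "context": usable until closed, then every operation throws,
// mimicking calls on a disposed ObjectContext held by another caller.
class ToyContext implements AutoCloseable {
    private boolean closed = false;

    int saveChanges() {
        if (closed) throw new IllegalStateException("context is disposed");
        return 0;  // number of affected entries, zero in this toy
    }

    @Override
    public void close() { closed = true; }
}
```

One context per unit of work, created and closed inside a single using/try-with-resources scope, makes this failure mode impossible because no one else can close your instance.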

Entity Framework 4: How expensive is it to create an EntityConnection?

How expensive is it to create an EF4 EntityConnection? I am creating an EF4 desktop app with SQL Compact, and the user will be able to open database files using a File Open dialog. My code then builds an EntityConnection, like this:
// Configure a SQL CE connection string
var sqlCompactConnectionString = string.Format("Data Source={0}", filePath);
// Create an Entity Connection String Builder
var builder = new EntityConnectionStringBuilder();
// Configure Builder
builder.Metadata = string.Format("res://*/{0}.csdl|res://*/{0}.ssdl|res://*/{0}.msl", m_EdmName);
builder.Provider = "System.Data.SqlServerCe.4.0";
builder.ProviderConnectionString = sqlCompactConnectionString;
var edmConnectionString = builder.ToString();
// Create an EDM connection
var edmConnection = new EntityConnection(edmConnectionString);
I have an ObjectContextFactory class that creates object contexts for Repository classes as needed.
So, here's my question: Is it better practice to build the EntityConnection once, when I initialize the factory, or should the factory build a new connection each time it creates an object context? Thanks for your help.
As far as I know, the overhead in EF4 is minimal (please verify this yourself): basically it comes down to opening a new database connection, and even that cost is small if the provider supports connection pooling (which SQL Server does).
Metadata from the MetadataWorkspace is cached globally, so this will not decrease performance (that probably wasn't true in 2009, when the blog post linked in the other answer was written).
Connection strings from the config file (the other performance problem indicated in that blog post) are also cached in memory, so I can't see how this could negatively impact performance either.
I would definitely use a new entity connection for each unit of work.
Take a look at this blog post. It seems that creating a new EntityConnection for each context is an expensive operation and the source of some major performance problems. The root of these performance issues (in your case) is the creation of the connection metadata. The other performance hit mentioned in the article (getting the connection string from config) would not apply to you, as you are supplying your own connection string. In my opinion, you should create a single EntityConnection.
One thing to keep in mind is that according to the documentation, EntityConnection is not guaranteed to be thread safe. If you are going to be accessing these connections from different threads then you will run into problems, and the safest way to solve this would be to not reuse the EntityConnection.

Entity Framework: Calling 'Read' when DataReader is closed

Entity Framework: Calling 'Read' when DataReader is closed
I am getting this problem intermittently when I pound my service with parallel asynchronous calls.
I understand that the reader is accessed when calling .ToList() on my defined EF query.
I would like to find out the best practice for constructing EF queries to avoid this and similar problems.
My architecture is as follows:
My Entity Data Layer is a static class, with a static constructor, which instantiates my Entities (_myEntities). It also sets properties on my entities such as MergeOption.
This static class exposes public static methods which simply access the Entities.
public static List<SomeEntity> GetSomeEntity(Criteria c) {
    ...
    var q = _myEntities.SomeEntity.Where(predicate);
    return q.ToList();
}
This has been working in production for some time, but the error above and the one here happen intermittently, especially under heavy load from clients.
I am also currently setting MultipleActiveResultSets=True in my connection string.
And that is the source of all your problems. Don't use a shared context, and don't use a shared context as a data cache or central data access object; this should be one of the main rules in EF. It is also the reason why you need MARS (our discussion from the previous question is resolved now). When multiple clients execute queries on your shared context at the same time, they open multiple DataReaders on the same database connection.
I'm not sure why you get this particular exception, but I am sure you should redesign your data access approach, all the more so if you also modify data on the shared context.
The issue may also come from a connection timeout when fetching a large amount of data from your database, so try setting the command timeout in your code as below.
Entity Framework 5:
((IObjectContextAdapter)this.context).ObjectContext.CommandTimeout = 1800;
Entity Framework 6 and later:
this.context.Database.CommandTimeout = 1800;