ServiceStack OrmLite and PostgreSQL - timeouts

I am updating large amounts of data using ServiceStack's OrmLite with a connection to PostgreSQL; however, I am getting a large number of timeouts.
Sample Code:
public class AccountService : Service
{
    public object Any(ImportAccounts request)
    {
        var sourceAccountService = this.ResolveService<SourceAccountService>();
        var sourceAccounts = (GetSourceAccountsResponse)sourceAccountService.Get(new GetSourceAccounts());
        foreach (var a in sourceAccounts.Result)
        {
            Db.Save(a.ConvertTo<Account>());
        }
        return null;
    }
}
The SourceAccountService, which sits in the same project and accesses the same Db:
public class SourceAccountService : Service
{
    public object Get(GetSourceAccounts request)
    {
        return new GetSourceAccountsResponse { Result = Db.Select<SourceAccounts>().ToList() };
    }
}
Questions:
Should I be expecting a large number of timeouts considering the above setup?
Is it better to use using (IDbConnection db = DbFactory.OpenDbConnection()) instead of Db?

If you're resolving and executing a Service, you should do it in a using statement so its open Db connection and other resources are properly disposed of:
using (var service = this.ResolveService<SourceAccountService>())
{
    var sourceAccounts = (GetSourceAccountsResponse)service.Get(new GetSourceAccounts());
    foreach (var a in sourceAccounts.Result)
    {
        Db.Save(a.ConvertTo<Account>());
    }
}
If you're executing other Services, it's better to specify the return type on the Service for added type safety and reduced boilerplate at each call site, e.g.:
public class SourceAccountService : Service
{
    public GetSourceAccountsResponse Get(GetSourceAccounts request)
    {
        return new GetSourceAccountsResponse {
            Result = Db.Select<SourceAccounts>()
        };
    }
}
Note: Db.Select<T> returns a List, so .ToList() is unnecessary.
Another alternative for executing a Service instead of ResolveService<T> is to use:
var sourceAccounts = (GetSourceAccountsResponse)base.ExecuteRequest(new GetSourceAccounts());
This does the same as above and executes the Service within a using block.
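As for reducing the timeouts themselves, one common cause is issuing one INSERT round trip per row. A minimal sketch of batching the work, assuming the typed service above and OrmLite's OpenTransaction/SaveAll APIs (plus ServiceStack's Map list extension):

public object Any(ImportAccounts request)
{
    using (var service = this.ResolveService<SourceAccountService>())
    using (var trans = Db.OpenTransaction())
    {
        // One transaction and one commit instead of one round trip per row
        var sourceAccounts = service.Get(new GetSourceAccounts());
        Db.SaveAll(sourceAccounts.Result.Map(x => x.ConvertTo<Account>()));
        trans.Commit();
        return null;
    }
}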

avoid concurrent access of postgres db

We have two .NET services (.NET Core console applications) which are accessing a Postgres DB table.
Service 1 inserts some 500 rows every minute. It runs as a background thread.
Service 2 reads data from the same table continuously. There is an MQTT publisher which keeps reading data from this table whenever new data is requested. This also happens very frequently, i.e. at least 4-5 times a minute.
We are getting a "FATAL: sorry, too many clients already" error.
What I am assuming is that since writes and reads are happening simultaneously and too frequently, the connections are not getting disposed properly.
Is there a way to avoid reads whenever a write is happening?
EDITED
Thanks for the reply. I know some connection pooling is happening, but I'm not sure where, so my question was how to avoid concurrent access of the Postgres DB.
I was not sure what part of the code I could post to make the question clearer.
I have a using clause on the DbContext and also dispose it explicitly, like below.
This is the retrieval section:
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
    try
    {
        var data = platinumDBContext.TrendPoints.Where(x => ids.Contains(x.TrendPointID) && x.TimeStamp >= DateTime.Now.AddHours(-timeinHours));
        result = data.Select(x => new Last24hours
        {
            Label = x.TrendPointID.ToString(),
            Value = (double)x.TrendPointValue,
            time = x.TimeStamp.ToString("MM/dd/yyyy HH:mm:ss")
        }).ToList();
    }
    catch (Exception oE)
    {
    }
    finally
    {
        platinumDBContext.Dispose();
    }
}
This is the insertion section:
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
    try
    {
        foreach (var point in trendPoints)
        {
            if (point != null)
            {
                TrendPoint item = new TrendPoint();
                item.CreatedDate = DateTime.Now;
                item.ObjectState = ObjectState.Added;
                item.TrendPointID = point.TrendID;
                item.TrendPointValue = double.IsNaN(point.Value) ? decimal.MinValue : (decimal)point.Value;
                item.TimeStamp = new DateTime(point.TimeStamp);
                platinumDBContext.Add(item);
            }
        }
        platinumDBContext.SaveChanges();
    }
    catch (Exception ex)
    {
    }
    finally
    {
        platinumDBContext.Dispose();
    }
}
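For reference, "FATAL: sorry, too many clients already" means the server ran out of max_connections slots, so the usual mitigation is to cap each service's client-side pool rather than trying to serialize reads and writes. A minimal sketch, assuming Npgsql as the EF Core provider (all connection string values are placeholders):

// Hypothetical PlatinumDBContext configuration; Maximum Pool Size and
// Connection Idle Lifetime are standard Npgsql connection string keywords.
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseNpgsql(
        "Host=localhost;Database=platinum;Username=app;Password=secret;" +
        "Maximum Pool Size=20;Connection Idle Lifetime=60");
}

With both services capped like this, the two processes hold at most 40 connections between them, comfortably below PostgreSQL's default max_connections of 100.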

Spring WebFlux - how to get a value from the database and use it to create an object

I really don't know how to create an object with data from Cassandra without breaking my reactive chain.
I have some private method that is part of the whole reactive chain:
private Mono<SecurityData> createSecurityData(Security securityOfType) {
    return jobsProgressRepository
        .findByAgentId(securityOfType.getAgentId()) // Flux<JobsProgress>
        .collectList()                              // Mono<List<JobsProgress>>
        .flatMap(this::getJobsProgressSummary)      // Mono<JobsProgressSummary>
        .flatMap(job -> mapToSecurityData(job, securityOfType));
}
And then I want to prepare some object:
private Mono<SecurityData> mapToSecurityData(JobsProgressSummary job, Security security) {
    SecurityData securityData = new SecurityData();
    securityData.setAgentId(security.getAgentId());
    securityData.setGroupId(security.getGroupId());
    securityData.setHostname(getHostname(security)); // <-- here is the problem!!!
    return Mono.just(securityData);
}
And the getHostname method:
private String getHostname(Security security) {
    String hostname = "";
    switch (security.getProductType()) {
        case VM:
            hostname = vmRepository
                .findByAgentId(security.getAgentId()).blockFirst().getHostname();
            break;
        case HYPER:
            hostname = hyperRepository
                .findByAgentId(security.getAgentId()).blockFirst().getHostname();
            break;
        default:
            break;
    }
    return hostname;
}
My repos look like:
public interface HostRepository extends ReactiveCassandraRepository<Host, MapId> {
    Flux<Host> findByAgentId(UUID agentId);
}
Maybe my approach is wrong? Of course, I can't use
hostRepository
    .findByAgentId(security.getAgentId()).subscribe() // or blockFirst()
because I don't want to break my reactive chain...
How can I solve my problem? Please don't hesitate to give any, even very small tips:)
UPDATE
Here I added the missing body of the method getJobsProgressSummary:
private Mono<JobsProgressSummary> getJobsProgressSummary(List<JobsProgress> jobs) {
    JobsProgressSummary jobsProgressSummary = new JobsProgressSummary();
    jobs.forEach(
        job -> {
            if (job.getStatus().toUpperCase(Locale.ROOT).equals(StatusEnum.RUNNING.name())) {
                jobsProgressSummary.setRunningJobs(jobsProgressSummary.getRunningJobs() + 1);
            } else if (job.getStatus().toUpperCase(Locale.ROOT).equals(StatusEnum.FAILED.name())) {
                jobsProgressSummary.setAmountOfErrors(jobsProgressSummary.getAmountOfErrors() + 1);
            } else if (isScheduledJob(job.getStartTime())) {
                jobsProgressSummary.setScheduledJobs(jobsProgressSummary.getScheduledJobs() + 1);
            }
        });
    Instant lastActivity =
        jobs.stream()
            .map(JobsProgress::getStartTime)
            .map(startTime -> Instant.ofEpochMilli(Long.parseLong(startTime)))
            .max(Instant::compareTo)
            .orElse(null); // orElseGet(null) would throw a NullPointerException on an empty stream
    jobsProgressSummary.setLastActivity(lastActivity);
    return Mono.just(jobsProgressSummary);
}
You need to chain everything together; your code is currently a mix of imperative and reactive styles. Also, you should never need to call block.
Something like the below should work:
private Mono<SecurityData> mapToSecurityData(JobsProgressSummary job, Security security) {
    // Try to get the hostname first, then process the result
    return getHostname(security)
        // Map it. Probably should use a builder or all-args constructor to reduce code here
        .map(hostname -> {
            SecurityData securityData = new SecurityData();
            securityData.setAgentId(security.getAgentId());
            securityData.setGroupId(security.getGroupId());
            securityData.setHostname(hostname);
            return securityData;
        });
}
private Mono<String> getHostname(Security security) {
    Mono<String> hostname = Mono.empty();
    switch (security.getProductType()) {
        // Also assuming hostname is a field in Security;
        // just change Security to the right class name if not
        case VM:
            hostname = vmRepository.findByAgentId(security.getAgentId())
                .next()
                .map(Security::getHostname);
            break; // without the break, VM falls through to HYPER
        case HYPER:
            hostname = hyperRepository.findByAgentId(security.getAgentId())
                .next()
                .map(Security::getHostname);
            break;
    }
    return hostname;
}

How to do Async Http Call with Apache Beam (Java)?

The input PCollection is HTTP requests, which is a bounded dataset. I want to make an async HTTP call (Java) in a ParDo, parse the response, and put the results into an output PCollection. My code is below, and I am getting the following exception.
I couldn't figure out the reason and need some guidance.
java.util.concurrent.CompletionException: java.lang.IllegalStateException: Can't add element ValueInGlobalWindow{value=streaming.mapserver.backfill.EnrichedPoint@2c59e, pane=PaneInfo.NO_FIRING} to committed bundle in PCollection Call Map Server With Rate Throttle/ParMultiDo(ProcessRequests).output [PCollection]
Code:
public class ProcessRequestsFn extends DoFn<PreparedRequest, EnrichedPoint> {

    private static AsyncHttpClient _HttpClientAsync;
    private static ExecutorService _ExecutorService;

    static {
        AsyncHttpClientConfig cg = config()
            .setKeepAlive(true)
            .setDisableHttpsEndpointIdentificationAlgorithm(true)
            .setUseInsecureTrustManager(true)
            .addRequestFilter(new RateLimitedThrottleRequestFilter(100, 1000))
            .build();
        _HttpClientAsync = asyncHttpClient(cg);
        _ExecutorService = Executors.newCachedThreadPool();
    }

    @DoFn.ProcessElement
    public void processElement(ProcessContext c) {
        PreparedRequest request = c.element();
        if (request == null)
            return;
        _HttpClientAsync.prepareGet(request.getRequest())
            .execute()
            .toCompletableFuture()
            .thenApply(response -> {
                if (response.getStatusCode() == HttpStatusCodes.STATUS_CODE_OK) {
                    return response.getResponseBody();
                }
                return null;
            })
            .thenApply(responseBody -> {
                List<EnrichedPoint> resList = new ArrayList<>();
                /*some process logic here*/
                System.out.printf("%d enriched points back%n", resList.size());
                return resList;
            })
            .thenAccept(resList -> {
                for (EnrichedPoint enrichedPoint : resList) {
                    c.output(enrichedPoint);
                }
            })
            .exceptionally(ex -> {
                System.out.println(ex);
                return null;
            });
    }
}
The Scio library implements a DoFn which deals with asynchronous operations. The BaseAsyncDoFn might provide the handling you need. Since you're dealing with CompletableFuture, also take a look at the JavaAsyncDoFn.
Please note that you don't necessarily need to use the Scio library; you can take the main idea of the BaseAsyncDoFn, since it's independent of the rest of Scio.
The issue you're hitting is that you're outputting outside the context of a processElement or finishBundle call.
You'll want to gather all your outputs in memory, output them eagerly during future processElement calls, and emit the rest at the end within finishBundle, blocking until all your calls finish.

Cannot attach database file when using Entity Framework Core Migration commands

I am using EntityFramework Core commands to migrate the database. The command I am using is as the docs suggest: dnx . ef migration apply. The problem is that when specifying AttachDbFileName in the connection string, the following error appears: Unable to Attach database file as database xxxxxxx. This is the connection string I am using:
Data Source=(LocalDB)\mssqllocaldb;Integrated Security=True;Initial Catalog=EfGetStarted2;AttachDbFileName=D:\EfGetStarted2.mdf
How can I attach the db file at another location?
Thanks
EF Core seems to have trouble with AttachDbFileName, or doesn't handle it at all.
EnsureDeleted changes the database name to master but keeps any AttachDbFileName value, which leads to an error since we cannot attach the master database to another file.
EnsureCreated opens a connection using the provided AttachDbFileName value, which leads to an error since the file of the database we want to create does not yet exist.
EF6 has some logic to handle these use cases, see SqlProviderServices.DbCreateDatabase, so everything worked quite fine.
As a workaround I wrote some hacky code to handle these scenarios:
public static void EnsureDatabase(this DbContext context, bool reset = false)
{
    if (context == null)
        throw new ArgumentNullException(nameof(context));
    if (reset)
    {
        try
        {
            context.Database.EnsureDeleted();
        }
        catch (SqlException ex) when (ex.Number == 1801)
        {
            // HACK: EF doesn't interpret error 1801 as already existing database
            ExecuteStatement(context, BuildDropStatement);
        }
        catch (SqlException ex) when (ex.Number == 1832)
        {
            // nothing to do here (see below)
        }
    }
    try
    {
        context.Database.EnsureCreated();
    }
    catch (SqlException ex) when (ex.Number == 1832)
    {
        // HACK: EF doesn't interpret error 1832 as non existing database
        ExecuteStatement(context, BuildCreateStatement);
        // this takes some time (?)
        WaitDatabaseCreated(context);
        // re-ensure create for tables and stuff
        context.Database.EnsureCreated();
    }
}

private static void WaitDatabaseCreated(DbContext context)
{
    var timeout = DateTime.UtcNow + TimeSpan.FromMinutes(1);
    while (true)
    {
        try
        {
            context.Database.OpenConnection();
            context.Database.CloseConnection();
        }
        catch (SqlException)
        {
            if (DateTime.UtcNow > timeout)
                throw;
            continue;
        }
        break;
    }
}

private static void ExecuteStatement(DbContext context, Func<SqlConnectionStringBuilder, string> statement)
{
    var builder = new SqlConnectionStringBuilder(context.Database.GetDbConnection().ConnectionString);
    using (var connection = new SqlConnection($"Data Source={builder.DataSource}"))
    {
        connection.Open();
        using (var command = connection.CreateCommand())
        {
            command.CommandText = statement(builder);
            command.ExecuteNonQuery();
        }
    }
}

private static string BuildDropStatement(SqlConnectionStringBuilder builder)
{
    var database = builder.InitialCatalog;
    return $"drop database [{database}]";
}

private static string BuildCreateStatement(SqlConnectionStringBuilder builder)
{
    var database = builder.InitialCatalog;
    var datafile = builder.AttachDBFilename;
    var dataname = Path.GetFileNameWithoutExtension(datafile);
    var logfile = Path.ChangeExtension(datafile, ".ldf");
    var logname = dataname + "_log";
    return $"create database [{database}] on primary (name = '{dataname}', filename = '{datafile}') log on (name = '{logname}', filename = '{logfile}')";
}
It's far from nice, but I'm using it for integration testing anyway. For "real world" scenarios using EF migrations should be the way to go, but maybe the root cause of this issue is the same...
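For illustration, calling the helper from a test setup might look like this (AppDbContext is a placeholder for your own context type):

// Hypothetical usage in an integration-test fixture
using (var context = new AppDbContext())
{
    // Drops and recreates the database, working around errors 1801/1832
    context.EnsureDatabase(reset: true);
}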
Update
The next version will include support for AttachDBFilename.
There may be a different *.mdf file already attached to a database named EfGetStarted2. Try dropping or detaching that database, then try again.
You might also be running into problems if the user that LocalDB runs as doesn't have the correct permissions for the path.

ADO.NET - Bad Practice?

I was reading an article on MSDN several months ago and have recently started using the following snippet to execute ADO.NET code, but I get the feeling it could be bad. Am I overreacting, or is it perfectly acceptable?
private void Execute(Action<SqlConnection> action)
{
    SqlConnection conn = null;
    try {
        conn = new SqlConnection(ConnectionString);
        conn.Open();
        action.Invoke(conn);
    } finally {
        if (conn != null && conn.State == ConnectionState.Open) {
            try {
                conn.Close();
            } catch {
            }
        }
    }
}
public SomeThing GetSomethingById() {
    SomeThing aSomething = null;
    Execute(conn =>
    {
        using (SqlCommand cmd = conn.CreateCommand()) {
            cmd.CommandText = ....
            ...
            SqlDataReader reader = cmd.ExecuteReader();
            ...
            aSomething = new SomeThing(Convert.ToString(reader["aDbField"]));
        }
    });
    return aSomething;
}
What is the point of doing that when you can do this?
public SomeThing GetSomethingById(int id)
{
    using (var con = new SqlConnection(ConnectionString))
    {
        con.Open();
        using (var cmd = con.CreateCommand())
        {
            // prepare command
            using (var rdr = cmd.ExecuteReader())
            {
                // read fields
                return new SomeThing(data);
            }
        }
    }
}
You can promote code reuse by doing something like this.
public static void ExecuteToReader(string connectionString, string commandText, IEnumerable<KeyValuePair<string, object>> parameters, Action<IDataReader> action)
{
    using (var con = new SqlConnection(connectionString))
    {
        con.Open();
        using (var cmd = con.CreateCommand())
        {
            cmd.CommandText = commandText;
            foreach (var pair in parameters)
            {
                var parameter = cmd.CreateParameter();
                parameter.ParameterName = pair.Key;
                parameter.Value = pair.Value;
                cmd.Parameters.Add(parameter);
            }
            using (var rdr = cmd.ExecuteReader())
            {
                action(rdr);
            }
        }
    }
}
You could use it like this:
//At the top create an alias (using-alias directives need the fully qualified name)
using DbParams = System.Collections.Generic.Dictionary<string, object>;

ExecuteToReader(
    connectionString,
    commandText,
    new DbParams() { { "key1", 1 }, { "key2", 2 } },
    reader =>
    {
        // ...
        // No need to dispose
    }
);
IMHO it is indeed a bad practice, since you're creating and opening a new database connection for every statement that you execute.
Why is it bad:
Performance-wise (although connection pooling helps decrease the hit): you should open your connection, execute all the statements that have to be executed, and only close the connection when you don't know when the next statement will be executed.
But certainly context-wise: how will you handle transactions? Where are your transaction boundaries? Your application layer knows when a transaction has to be started and committed, but with this way of working you're unable to span multiple statements in the same SQL transaction.
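To make the transaction point concrete, here is a sketch of what the Execute wrapper cannot express: two statements committed atomically over one connection (the table names and SQL are made up for illustration):

using (var con = new SqlConnection(ConnectionString))
{
    con.Open();
    using (var tx = con.BeginTransaction())
    using (var cmd = con.CreateCommand())
    {
        cmd.Transaction = tx;
        cmd.CommandText = "insert into OrderHeader (CustomerId) values (@id)";
        cmd.Parameters.AddWithValue("@id", 42);
        cmd.ExecuteNonQuery();
        cmd.CommandText = "insert into OrderLine (OrderId, Sku) values (1, 'ABC')";
        cmd.Parameters.Clear();
        cmd.ExecuteNonQuery();
        // Both statements succeed or fail together
        tx.Commit();
    }
}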
This is a very reasonable approach to use.
By wrapping your connection logic into a method which takes an Action<SqlConnection>, you're helping prevent duplicated code and the potential for introduced error. Since we can now use lambdas, this becomes an easy, safe way to handle this situation.
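One refinement worth noting: a Func-based overload removes the awkward closure assignment in GetSomethingById. A sketch along the same lines (hypothetical, following the question's pattern):

private T Execute<T>(Func<SqlConnection, T> func)
{
    using (var conn = new SqlConnection(ConnectionString))
    {
        conn.Open();
        // The value flows out of the lambda instead of being captured
        return func(conn);
    }
}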
That's acceptable. I created a SqlUtilities class two years ago that had a similar method. You can take it one step further if you like.
EDIT: Couldn't find the code, but I typed a small example (probably with many syntax errors ;))
SqlUtilities
public delegate T CreateMethod<T>(SqlDataReader reader);

public static T CreateEntity<T>(string query, CreateMethod<T> createMethod, params SqlParameter[] parameters) {
    // Open the Sql connection and create a Sql command with the query/sp and parameters
    using (var conn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(query, conn)) {
        cmd.Parameters.AddRange(parameters);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader()) {
            return createMethod(reader);
        }
    }
}
Calling code
private SomeThing Create(SqlDataReader reader) {
    SomeThing something = new SomeThing();
    something.ID = Convert.ToInt32(reader["ID"]);
    ...
    return something;
}

public SomeThing GetSomeThingByID(int id) {
    return SqlUtilities.CreateEntity<SomeThing>("something_getbyid", Create, ....);
}
Of course you could use a lambda expression instead of the Create method, and you could easily make a CreateCollection method and reuse the existing Create method.
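For example, a CreateCollection method might look like this (a sketch following the same pattern, reusing the per-row Create method; ConnectionString is assumed to be in scope as above):

public static List<T> CreateCollection<T>(string query, CreateMethod<T> createMethod, params SqlParameter[] parameters) {
    var results = new List<T>();
    using (var conn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(query, conn)) {
        cmd.Parameters.AddRange(parameters);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader()) {
            // Invoke the single-entity Create method once per row
            while (reader.Read())
                results.Add(createMethod(reader));
        }
    }
    return results;
}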
However, if this is a new project, check out LINQ to Entities. It is far easier and more flexible than ADO.NET.
In my opinion, check an approach before going through with it. Something that works doesn't mean it is good programming practice; find a concrete example of its benefit first. If you are considering this for big projects, it would be better to use a framework like NHibernate, because a lot of projects and even frameworks have been developed on top of it, like http://www.cuyahoga-project.org/.