Timeout expired during linked server query - T-SQL
I have a SQL Server 2017 database with 10 tables, each filled by a 3rd-party application.
Whenever a row is inserted or updated in any of these tables, a trigger fires. The trigger updates a table in a remote SQL Server 2017 instance and database, reached through a linked server.
That remote database contains a single table that consolidates the information from the 10 source tables, so I have 10 AFTER INSERT triggers and 10 AFTER UPDATE triggers. Each table has about 172 columns.
Sometimes I get this error:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding
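Before looking at the triggers themselves, it may be worth checking which timeout is actually firing: the instance-wide remote query timeout, or the linked server's own `query timeout` option. A diagnostic sketch (using `TargetServer`, the linked server name from the triggers; the value `120` is just an illustrative choice, and raising a timeout is a workaround rather than a fix if the remote DML is simply slow):

```sql
-- Instance-wide remote query timeout, in seconds (default 600; 0 = no timeout)
EXEC sp_configure 'remote query timeout';

-- Per-linked-server timeout (0 = fall back to the instance-wide setting)
EXEC sp_serveroption @server   = N'TargetServer',
                     @optname  = N'query timeout',
                     @optvalue = N'120';
```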
My after insert trigger is:
ALTER TRIGGER [dbo].[AfterINSERTTriggerONE] on [dbo].[ONE]
FOR INSERT AS
INSERT INTO [TargetServer].[TargetDb].[dbo].[DataLogs](
[DeviceId],
[DateTime],
[TimeStampUTC],
[StationIsOpen],
[PanelIsOpen],
[PowerIsOscillating],
[Heater1_FlameIsOn],
[Heater1_Flame2IsOn],
[Heater1_WaterLevelSwitch],
[Heater2_FlameIsOn],
[Heater2_Flame2IsOn],
[Heater2_WaterLevelSwitch],
[Heater3_FlameIsOn],
[Heater3_Flame2IsOn],
[Heater3_WaterLevelSwitch],
[Heater4_FlameIsOn],
[Heater4_Flame2IsOn],
[Heater4_WaterLevelSwitch],
[Ups_BatteryMode],
[Ups_BatteryCharging],
[Ups_ShutdownEvent],
[Ups_Remote],
[Filter1],
[Filter2],
[Filter3],
[AirConditionerIsOn],
[R1_P],
[R1_T],
[R1_Qb],
[R1_Qm],
[R1_C],
[R1_CarbonDioxide],
[R1_Nitrogen],
[R1_TbX],
[R1_PbX],
[R1_Methane],
[R1_Ethane],
[R1_Propane],
[R1_iButane],
[R1_BatRemain],
[R1_Productivity],
[R2_P],
[R2_T],
[R2_Qb],
[R2_Qm],
[R2_C],
[R2_CarbonDioxide],
[R2_Nitrogen],
[R2_TbX],
[R2_PbX],
[R2_Methane],
[R2_Ethane],
[R2_Propane],
[R2_iButane],
[R2_BatRemain],
[R2_Productivity],
[R3_P],
[R3_T],
[R3_Qb],
[R3_Qm],
[R3_C],
[R3_CarbonDioxide],
[R3_Nitrogen],
[R3_TbX],
[R3_PbX],
[R3_Methane],
[R3_Ethane],
[R3_Propane],
[R3_iButane],
[R3_BatRemain],
[R3_Productivity],
[R4_P],
[R4_T],
[R4_Qb],
[R4_Qm],
[R4_C],
[R4_CarbonDioxide],
[R4_Nitrogen],
[R4_TbX],
[R4_PbX],
[R4_Methane],
[R4_Ethane],
[R4_Propane],
[R4_iButane],
[R4_BatRemain],
[R4_Productivity],
[Total_Qb],
[Total_Qm],
[InGasP],
[InGasT],
[OutGasP],
[OutGasT],
[AirT],
[AirHumadity],
[PanelT],
[PanelHumadity],
[ConexTemperature],
[ConexHumadity],
[StationCapacity],
[Productivity],
[Heater1_InGasT],
[Heater1_WaterT],
[Heater1_OutGasT],
[Heater1_WaterLevel],
[Heater2_InGasT],
[Heater2_WaterT],
[Heater2_OutGasT],
[Heater2_WaterLevel],
[Heater3_InGasT],
[Heater3_WaterT],
[Heater3_OutGasT],
[Heater3_WaterLevel],
[Heater4_InGasT],
[Heater4_WaterT],
[Heater4_OutGasT],
[Heater4_WaterLevel],
[OdorizerLevel1],
[OdorizerLevel2],
[Ups_AmbientTemperature],
[Ups_BatteryDischargeCurrent],
[Ups_BatteryVoltage],
[Ups_OutputVoltage],
[TBS_OutGasP],
[R1_VbT],
[R1_Vb],
[R1_Vbd],
[R1_VmT],
[R1_Vm],
[R1_Vmd],
[R1_Vb_PrevDay],
[R1_Vm_PrevDay],
[R2_VbT],
[R2_Vb],
[R2_Vbd],
[R2_VmT],
[R2_Vm],
[R2_Vmd],
[R2_Vb_PrevDay],
[R2_Vm_PrevDay],
[R3_VbT],
[R3_Vb],
[R3_Vbd],
[R3_VmT],
[R3_Vm],
[R3_Vmd],
[R3_Vb_PrevDay],
[R3_Vm_PrevDay],
[R4_VbT],
[R4_Vb],
[R4_Vbd],
[R4_VmT],
[R4_Vm],
[R4_Vmd],
[R4_Vb_PrevDay],
[R4_Vm_PrevDay],
[Total_Vb_PrevDay]
)
select
StationName,
Time_Stamp,
Utc,
StationIsOpen,
PanelIsOpen,
PowerIsOscillating,
Heater1_FlameIsOn,
Heater1_Flame2IsOn,
Heater1_WaterLevelSwitch,
Heater2_FlameIsOn,
Heater2_Flame2IsOn,
Heater2_WaterLevelSwitch,
Heater3_FlameIsOn,
Heater3_Flame2IsOn,
Heater3_WaterLevelSwitch,
Heater4_FlameIsOn,
Heater4_Flame2IsOn,
Heater4_WaterLevelSwitch,
Ups_BatteryMode,
Ups_BatteryCharging,
Ups_ShutdownEvent,
Ups_Remote,
Filter1,
Filter2,
Filter3,
AirConditionerIsOn,
R1_P,
R1_T,
R1_Qb,
R1_Qm,
R1_C,
R1_CarbonDioxide,
R1_Nitrogen,
R1_TbX,
R1_PbX,
R1_Methane,
R1_Ethane,
R1_Propane,
R1_iButane,
R1_BatRemain,
R1_Productivity,
R2_P,
R2_T,
R2_Qb,
R2_Qm,
R2_C,
R2_CarbonDioxide,
R2_Nitrogen,
R2_TbX,
R2_PbX,
R2_Methane,
R2_Ethane,
R2_Propane,
R2_iButane,
R2_BatRemain,
R2_Productivity,
R3_P,
R3_T,
R3_Qb,
R3_Qm,
R3_C,
R3_CarbonDioxide,
R3_Nitrogen,
R3_TbX,
R3_PbX,
R3_Methane,
R3_Ethane,
R3_Propane,
R3_iButane,
R3_BatRemain,
R3_Productivity,
R4_P,
R4_T,
R4_Qb,
R4_Qm,
R4_C,
R4_CarbonDioxide,
R4_Nitrogen,
R4_TbX,
R4_PbX,
R4_Methane,
R4_Ethane,
R4_Propane,
R4_iButane,
R4_BatRemain,
R4_Productivity,
Total_Qb,
Total_Qm,
InGasP,
InGasT,
OutGasP,
OutGasT,
AirT,
AirHumadity,
PanelT,
PanelHumadity,
ConexTemperature,
ConexHumadity,
StationCapacity,
Productivity,
Heater1_InGasT,
Heater1_WaterT,
Heater1_OutGasT,
Heater1_WaterLevel,
Heater2_InGasT,
Heater2_WaterT,
Heater2_OutGasT,
Heater2_WaterLevel,
Heater3_InGasT,
Heater3_WaterT,
Heater3_OutGasT,
Heater3_WaterLevel,
Heater4_InGasT,
Heater4_WaterT,
Heater4_OutGasT,
Heater4_WaterLevel,
OdorizerLevel1,
OdorizerLevel2,
Ups_AmbientTemperature,
Ups_BatteryDischargeCurrent,
Ups_BatteryVoltage,
Ups_OutputVoltage,
TBS_OutGasP,
R1_VbT,
R1_Vb,
R1_Vbd,
R1_VmT,
R1_Vm,
R1_Vmd,
R1_Vb_PrevDay,
R1_Vm_PrevDay,
R2_VbT,
R2_Vb,
R2_Vbd,
R2_VmT,
R2_Vm,
R2_Vmd,
R2_Vb_PrevDay,
R2_Vm_PrevDay,
R3_VbT,
R3_Vb,
R3_Vbd,
R3_VmT,
R3_Vm,
R3_Vmd,
R3_Vb_PrevDay,
R3_Vm_PrevDay,
R4_VbT,
R4_Vb,
R4_Vbd,
R4_VmT,
R4_Vm,
R4_Vmd,
R4_Vb_PrevDay,
R4_Vm_PrevDay,
Total_Vb_PrevDay
from inserted;
PRINT 'We Successfully Fired the AFTER INSERT Trigger for Table [ONE] in SQL Server.'
And my AFTER UPDATE trigger:
ALTER TRIGGER [dbo].[AfterUpdateTriggerONE] ON [dbo].[ONE]
AFTER UPDATE
AS
IF (UPDATE (StationIsOpen))
BEGIN
    UPDATE Trg
    SET Trg.StationIsOpen = ins.StationIsOpen, Trg.Updated = 1
    FROM [TargetServer].[TargetDb].[dbo].[DataLogs] Trg
    JOIN inserted ins
        ON  Trg.[DateTime] = ins.[Time_Stamp]
        AND Trg.[DeviceId] = ins.[StationName]
END
IF (UPDATE (PanelIsOpen))
BEGIN
    UPDATE Trg
    SET Trg.PanelIsOpen = ins.PanelIsOpen, Trg.Updated = 1
    FROM [TargetServer].[TargetDb].[dbo].[DataLogs] Trg
    JOIN inserted ins
        ON  Trg.[DateTime] = ins.[Time_Stamp]
        AND Trg.[DeviceId] = ins.[StationName]
END
...

and likewise for the other 170 columns.
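Since the remote INSERT/UPDATE runs synchronously inside the trigger (and therefore inside the source table's transaction), one commonly suggested way to avoid such timeouts is to decouple the two servers: the trigger writes only to a local staging table, and a SQL Agent job periodically pushes queued rows across the linked server. A minimal sketch, assuming a hypothetical local queue table `dbo.DataLogsQueue` with the same columns as `DataLogs` plus a `Pushed` flag (column lists elided, as above):

```sql
-- In the trigger: local insert only, so no linked-server call holds up the app.
-- INSERT INTO dbo.DataLogsQueue ([DeviceId], [DateTime], /* ... */, Pushed)
-- SELECT StationName, Time_Stamp, /* ... */, 0
-- FROM inserted;

-- In a SQL Agent job running every minute or so:
BEGIN TRAN;

INSERT INTO [TargetServer].[TargetDb].[dbo].[DataLogs]
    ([DeviceId], [DateTime] /* ... remaining columns ... */)
SELECT [DeviceId], [DateTime] /* ... remaining columns ... */
FROM dbo.DataLogsQueue WITH (UPDLOCK, HOLDLOCK)  -- prevent new rows sneaking in mid-push
WHERE Pushed = 0;

UPDATE dbo.DataLogsQueue
SET Pushed = 1
WHERE Pushed = 0;

COMMIT;
```

With this pattern a slow or unreachable remote server delays only the job, not the 3rd-party application's inserts and updates.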