How to reset auto increment in JavaDB

I migrated from MySQL to Java DB to use the Embedded Driver for my application.
In Java I used these lines of code to reset an auto-increment column, such as an id:
String s1 = "DELETE from keeplog where idk = '"+1+"'";
String s2 = "ALTER TABLE keeplog drop idk";
String s3 = "ALTER TABLE keeplog add idk int not null auto_increment primary key";
I searched on the Internet and found these methods:
String q = "ALTER TABLE init ALTER COLUMN idinit RESTART WITH 1";
and
String query = "ALTER TABLE pers ALTER idp SET INCREMENT BY 1";
I tried both of them, but neither works.
So, what code do I need to put in my Java app to restart an auto-increment column in Java DB?
P.S.
I use the derby.jar from my jdk1.8.0_25 installation.
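For reference, ALTER TABLE ... ALTER COLUMN ... RESTART WITH is Derby's syntax for resetting an identity column, so a minimal sketch against the keeplog table might look like the following. It assumes idk was created as an identity column; the database name in the URL and the class name are placeholders, not from the question.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ResetIdentity {
    public static void main(String[] args) throws SQLException {
        // "myDB" is a placeholder; use your own embedded database name or path.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
             Statement st = conn.createStatement()) {
            // Remove the existing rows first so the restarted values cannot collide.
            st.executeUpdate("DELETE FROM keeplog");
            // Derby's way to reset an identity column (Derby has no auto_increment keyword).
            st.executeUpdate("ALTER TABLE keeplog ALTER COLUMN idk RESTART WITH 1");
        }
    }
}

If this still fails, the SQLException raised by Derby should say why, for example if idk is not actually defined as an identity column.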

How to reset the auto generated primary key in PostgreSQL

My class for the table topics is below. The primary key is an auto-generated serial key. While testing, I deleted rows from the table and was trying to re-insert them. The UUID is not getting reset.
class Topics(db.Model):
    """ User Model for different topics """
    __tablename__ = 'topics'
    uuid = db.Column(db.Integer, primary_key=True)
    topics_name = db.Column(db.String(256), index=True)

    def __repr__(self):
        return '<Post %r>' % self.topics_name
I tried the below command to reset the key
ALTER SEQUENCE topics_uuid_seq RESTART WITH 1;
It did not work.
I would appreciate any form of suggestion!
If it's indeed a serial ID, you can reset the owned SEQUENCE with:
SELECT setval(pg_get_serial_sequence('topics', 'uuid'), max(uuid)) FROM topics;
See:
How to reset postgres' primary key sequence when it falls out of sync?
But why would the column be named uuid? UUIDs are not integer numbers and not serial. Also, it's not entirely clear what's going wrong when you write:
The UUID is not getting reset.
About ALTER SEQUENCE ... RESTART:
Postgres manually alter sequence
In order to avoid duplicate id errors that may arise when resetting the sequence, try:
UPDATE table SET id = DEFAULT;
ALTER SEQUENCE seq RESTART;
UPDATE table SET id = DEFAULT;
For added context:
'table' = your table name
'id' = your id column name
'seq' = find the name of your sequence with:
SELECT pg_get_serial_sequence('table', 'id');
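Applied to the question's model, those placeholders are the table topics, the column uuid, and the sequence topics_uuid_seq. The question uses SQLAlchemy, but the SQL is the same from any client; here is a minimal JDBC sketch of that sequence of statements, where the class name, connection URL, and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ResetTopicsSequence {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; substitute your own database, user and password.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             Statement st = conn.createStatement()) {
            // Re-number the existing rows from the current sequence position.
            st.executeUpdate("UPDATE topics SET uuid = DEFAULT");
            // Restart the owned sequence at its start value.
            st.executeUpdate("ALTER SEQUENCE topics_uuid_seq RESTART");
            // Re-number again so the rows pick up values starting from 1.
            st.executeUpdate("UPDATE topics SET uuid = DEFAULT");
        }
    }
}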

How to set values for one column based on another?

Goal: when the column remote in table DB matches the column thrunode in table SO, set the column customer in table DB from table SO (its environment column), where:
DB = tbl_db_collecting
SO = tb_systemshc
SQL:
UPDATE tbl_db_collecting SET
tbl_db_collecting.customer = tb_systemshc.environment
FROM tb_systemshc
WHERE tbl_db_collecting.lower(remote) = tb_systemshc.lower(thrunode)
output:
SQL Error [3F000]: ERROR: schema "tbl_db_collecting" does not exist
Is this what you are looking for?
update tbl_db_collecting
set customer = tb_systemshc.environment
from tb_systemshc
where lower(tbl_db_collecting.remote) = lower(tb_systemshc.thrunode);
When you write tbl_db_collecting.lower(remote), PostgreSQL parses that as if you are looking for a lower() function defined in schema tbl_db_collecting.
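If this needs to run from application code rather than an SQL console, a minimal JDBC sketch of the corrected statement might look like this; the class name, connection URL, and credentials are placeholders, not from the question.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class UpdateCustomerFromThrunode {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details; adjust for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             Statement st = conn.createStatement()) {
            // Correlated UPDATE ... FROM: qualify the columns, not the lower() function.
            int updated = st.executeUpdate(
                "UPDATE tbl_db_collecting " +
                "SET customer = tb_systemshc.environment " +
                "FROM tb_systemshc " +
                "WHERE lower(tbl_db_collecting.remote) = lower(tb_systemshc.thrunode)");
            System.out.println(updated + " row(s) updated");
        }
    }
}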

DB2 Update statement not working using JDBC

I have a few rows stored in a source table (as defined as $schema.$sourceTable in the UPDATE query below). This table has 3 columns: TABLE_NAME, PERMISSION_TAG_COL, PT_DEPLOYED
I have an update statement stored in a string like:
var update_PT_Deploy = s"UPDATE $schema.$sourceTable SET PT_DEPLOYED = 'Y' WHERE TABLE_NAME = '$tableName';"
My source table does have rows with TABLE_NAME as $tableName (parameter) as I inserted rows into this table using another function of my program. The default value of PT_DEPLOYED when I inserted the rows was specified as NULL.
I'm trying to execute update using JDBC in the following manner:
println(update_PT_Deploy)
val preparedStatement: PreparedStatement = connection.prepareStatement(update_PT_Deploy)
val row = preparedStatement.execute()
println(row)
println("row updated in table successfully")
preparedStatement.close()
The above piece of code does not throw any exception, but when I query my table in a tool like DBeaver, the NULL value of PT_DEPLOYED does not get updated to Y.
If I execute the same query from update_PT_Deploy inside DBeaver, it works and the table updates. I am sure I am following the correct steps.
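Purely as things to check (the question does not confirm either cause): executeUpdate() returns the number of rows actually changed, and if auto-commit is disabled on the connection, the change will not be visible from another session such as DBeaver until the transaction commits. Below is a minimal JDBC sketch, shown in Java (the calls are identical from Scala); it also drops the trailing semicolon and binds the TABLE_NAME value as a parameter.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CheckUpdate {
    // Hypothetical helper: 'connection' comes from wherever the application obtains it.
    static void updatePtDeployed(Connection connection, String schema,
                                 String sourceTable, String tableName) throws SQLException {
        // No trailing semicolon inside the statement text.
        String sql = "UPDATE " + schema + "." + sourceTable
                + " SET PT_DEPLOYED = 'Y' WHERE TABLE_NAME = ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, tableName);
            // executeUpdate reports how many rows were actually changed.
            int rows = ps.executeUpdate();
            System.out.println(rows + " row(s) updated");
        }
        // If auto-commit is off, other sessions (e.g. DBeaver) will not see the
        // change until the transaction is committed.
        if (!connection.getAutoCommit()) {
            connection.commit();
        }
    }
}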

Save file (.pdf) in database with Python 2.7

Craig Ringer, I cannot work with the large object functions.
My database looks like this; this is my table:
-- Table: files
--
DROP TABLE files;
CREATE TABLE files
(
id serial NOT NULL,
orig_filename text NOT NULL,
file_data bytea NOT NULL,
CONSTRAINT files_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE files
I want to save a .pdf in my database. I saw your last answer about this (read the file and convert it to a buffer object, or use the large object functions), but I am using Python 2.7.
The code I wrote looks like this:
path="D:/me/A/Res.pdf"
listaderuta = path.split("/")
longitud=len(listaderuta)
f = open(path,'rb')
f.read().__str__()
cursor = con.cursor()
cursor.execute("INSERT INTO files(id, orig_filename, file_data) VALUES (DEFAULT,%s,%s) RETURNING id", (listaderuta[longitud-1], f.read()))
Then, to download it, I use:
fula = open("D:/INSTALL/pepe.pdf",'wb')
cursor.execute("SELECT file_data, orig_filename FROM files WHERE id = %s", (int(17),))
(file_data, orig_filename) = cursor.fetchone()
fula.write(file_data)
fula.close()
But the downloaded file cannot be opened; it is damaged. I repeat, I cannot work with the large object functions.
This is what I tried and what I got. Can you help?
I am thinking that the psycopg2 Binary function does not use the large object functions, thus I used:
path="salman.pdf"
f = open(path,'rb')
dat = f.read()
binary = psycopg2.Binary(dat)
cursor.execute("INSERT INTO files(id, file_data) VALUES ('1',%s)", (binary,))
conn.commit()
One correction to the INSERT statement: it will fail with null value in column "orig_filename" violates not-null constraint, because orig_filename is defined as NOT NULL. Use instead:
cursor.execute("INSERT INTO files(id, orig_filename, file_data) VALUES ('1', 'filename.pdf', %s)", (binary,))

IBM DB2 recreate index on truncated table

After truncating the table and inserting new values, the auto-increment values are not reset to the start value of 1. When inserting new values, the column remembers the last auto-increment value.
Column in table named: ID
Index: PRIMARY,
Initial Value: 1
Cache size: 1
Increment: 1
[checked on IBM DB2 Control Center]
This query:
TRUNCATE TABLE ".$this->_schema.$table." DROP STORAGE IGNORE DELETE TRIGGERS IMMEDIATE
leaves the table EMPTY.
After inserting new values, for example INSERT INTO DB2INST1.db (val) VALUES ('abc'), the row is inserted with the LAST value:
ID | val
55 | abc
But it SHOULD BE:
ID | val
1 | abc
I'm guessing here that your question is "how do you restart the IDENTITY sequence?" If that is the case, then you can reset it with the following SQL:
ALTER TABLE <table name> ALTER COLUMN <IDENTITY column> RESTART WITH 1
However, as @Ian said, what you are seeing is the expected behavior of a TRUNCATE.
First, look up the name of the IDENTITY column in the table schema.
Query 1:
SELECT COLNAME FROM SYSCAT.COLUMNS WHERE TABSCHEMA = 'DB2INST1' AND
TABNAME = 'DB' AND IDENTITY = 'Y'
Then, after truncating the table, use the returned column name (here ID) to reset the identity.
Query 2:
ALTER TABLE DB2INST1.DB ALTER COLUMN ID RESTART WITH 1
Replace ID in Query 2 with the column name returned by Query 1. A JDBC sketch chaining both steps is shown below.
SOLVED!
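For completeness, here is a minimal JDBC sketch that chains the two queries from the answer. The schema and table names (DB2INST1, DB) come from the question; the class name, connection URL, and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class RestartDb2Identity {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details for the DB2 JDBC driver.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/SAMPLE", "db2inst1", "password")) {
            String identityColumn = null;
            // Query 1: find the IDENTITY column of DB2INST1.DB in the catalog.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT COLNAME FROM SYSCAT.COLUMNS "
                    + "WHERE TABSCHEMA = 'DB2INST1' AND TABNAME = 'DB' AND IDENTITY = 'Y'");
                 ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    identityColumn = rs.getString(1);
                }
            }
            if (identityColumn != null) {
                // Query 2: restart the identity column at 1.
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate("ALTER TABLE DB2INST1.DB ALTER COLUMN "
                            + identityColumn + " RESTART WITH 1");
                }
            }
        }
    }
}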