I have the following migration file:
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class CreateHeadersTable extends Migration
{
    public function up()
    {
        Schema::create('headers', function (Blueprint $table) {
            $table->increments('entry')->unsigned();
            $table->string('id', 16);
            $table->string('address', 256);
            $table->string('desciption', 512)->nullable()->default('NULL');
            $table->tinyInteger('currency')->default('0');
            $table->decimal('sum', 13, 4);
            $table->datetime('entered');
            $table->tinyInteger('aborted')->default('0');
            $table->primary('entry');
        });
    }

    public function down()
    {
        Schema::dropIfExists('headers');
    }
}
This file was automatically generated by an online tool from a SQL file. However, when I ran "php artisan migrate," I got the following error:
SQLSTATE[42S01]: Base table or view already exists: 1050 Table 'headers'
already exists (SQL: create table `headers` (`entry` int unsigned not null
auto_increment primary key, `id` varchar(16) not null, `address`
varchar(256) not null, `desciption` varchar(512) null default 'NULL',
`currency` tinyint not null default '0', `sum` decimal(13, 4) not null,
`entered` datetime not null, `aborted` tinyint not null default '0')
default character set utf8mb4 collate 'utf8mb4_unicode_ci')
at vendor/laravel/framework/src/Illuminate/Database/Connection.php:712
708▕ // If an exception occurs when attempting to run a query, we'll format the error
709▕ // message to include the bindings with SQL, which will make this exception a
710▕ // lot more helpful to the developer instead of just the database's errors.
711▕ catch (Exception $e) {
➜ 712▕ throw new QueryException(
713▕ $query, $this->prepareBindings($bindings), $e
714▕ );
715▕ }
716▕ }
+9 vendor frames
10 database/migrations/2023_01_31_1675138133_create_headers_table.php:23
Illuminate\Support\Facades\Facade::__callStatic()
+22 vendor frames
33 artisan:37
Illuminate\Foundation\Console\Kernel::handle()
I am not familiar with Laravel migration files. How can I fix this? Many thanks!
Do not drop the table manually. Run php artisan migrate:rollback (which calls the migration's down() method) and then re-try php artisan migrate.
The error you got states that the table 'headers' already exists in the database. If you have already deleted the table from the database, check the migrations table: there may still be an entry for this migration. Delete that entry and run the migration again.
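For example (a sketch against Laravel's default migrations table; the migration name below is the one from your stack trace):

-- Find the stale entry left behind by the earlier, partially applied run...
SELECT * FROM migrations WHERE migration LIKE '%create_headers_table%';
-- ...and remove it so "php artisan migrate" will pick the file up again.
DELETE FROM migrations WHERE migration = '2023_01_31_1675138133_create_headers_table';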
I would like to change the database in my existing project from MySQL to PostgreSQL.
I have configured the database and regenerated the migrations, which work, but the problem appears in the fixtures.
When trying to load the fixtures, an error appears like this:
An exception occurred while executing 'INSERT INTO user (nickname, password, id, created_at, updated_at, email) VALUES (?, ?, ?, ?, ?, ?)' with params ["user", "$2y$13$rgHtT56Vlk2
avmf3gX2W7.QYcQ5d6AXRzr41ebRMGfxREqLZQfsTG", "017c4562-d487-ddff-c303-108c1916d6dd", "2021-10-03 11:01:16", "2021-10-03 11:01:16", "user#user.pl"]:
SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "user"
LINE 1: INSERT INTO user (nickname, password, id, created_at, update...
This is possibly caused by the entity manager generating the MySQL dialect instead of the PostgreSQL dialect.
A similar error occurs during a GET request on the user entity:
"hydra:description": "An exception occurred while executing 'SELECT u0_.nickname AS nickname_0, u0_.password AS password_1, u0_.id AS id_2, u0_.created_at AS created_at_3, u0_.updated_at AS updated_at_4, u0_.email AS email_5 FROM user u0_':\n\nSQLSTATE[42703]: Undefined column: 7 BŁĄD: kolumna u0_.nickname nie istnieje\nLINE 1: SELECT u0_.nickname AS nickname_0, u0_.password AS password_.
Here is my doctrine.yaml configuration:
doctrine:
    dbal:
        url: '%env(resolve:DATABASE_URL)%'
        driver: 'pdo_pgsql'
        charset: utf8
        # IMPORTANT: You MUST configure your server version,
        # either here or in the DATABASE_URL env var (see .env file)
        #server_version: '13'
    orm:
        auto_generate_proxy_classes: true
        naming_strategy: doctrine.orm.naming_strategy.underscore_number_aware
        auto_mapping: true
        mappings:
            App:
                is_bundle: false
                type: annotation
                dir: '%kernel.project_dir%/src/Entity'
                prefix: 'App\Entity'
                alias: App
Could someone help me get rid of the bug? :)
I have a CSV file (an MS SQL Server table export) and I would like to import it into an Aurora Serverless PostgreSQL database table. I did a basic preprocessing of the CSV file to replace all of the NULL values in it (i.e. empty strings) with the literal string "NULL". The file looks like this:
CSV file:
ID,DRAW_WORKS
10000002,NULL
10000005,NULL
10000004,FLEXRIG3
10000003,FLEXRIG3
The PostgreSQL table has the following schema:
CREATE TABLE T_RIG_ACTIVITY_STATUS_DATE (
ID varchar(20) NOT NULL,
DRAW_WORKS_RATING int NULL
)
The code I am using to read and insert the CSV file is the following:
import boto3
import csv

rds_client = boto3.client('rds-data')
...

def batch_execute_statement(sql, sql_parameter_sets, transaction_id=None):
    parameters = {
        'secretArn': db_credentials_secrets_store_arn,
        'database': database_name,
        'resourceArn': db_cluster_arn,
        'sql': sql,
        'parameterSets': sql_parameter_sets
    }
    if transaction_id is not None:
        parameters['transactionId'] = transaction_id
    response = rds_client.batch_execute_statement(**parameters)
    return response

transaction = rds_client.begin_transaction(
    secretArn=db_credentials_secrets_store_arn,
    resourceArn=db_cluster_arn,
    database=database_name)

sql = 'INSERT INTO T_RIG_ACTIVITY_STATUS_DATE VALUES (:ID, :DRAW_WORKS);'
parameter_set = []

with open('test.csv', 'r') as file:
    reader = csv.DictReader(file, delimiter=',')
    for row in reader:
        entry = [
            {'name': 'ID', 'value': {'stringValue': row['ID']}},
            {'name': 'DRAW_WORKS', 'value': {'longValue': row['DRAW_WORKS']}}
        ]
        parameter_set.append(entry)

response = batch_execute_statement(
    sql, parameter_set, transaction['transactionId'])
However, the error that gets returned suggests there is a type mismatch:
Invalid type for parameter parameterSets[0][5].value.longValue,
value: NULL, type: <class 'str'>, valid types: <class 'int'>"
Is there a way to configure Aurora to accept NULL values for types such as int?
Reading the boto3 documentation more carefully, I found that we can set the isNull value to True when a field is NULL. The code snippet below shows how to insert a null value into the database:
...
entry = [
    {'name': 'ID', 'value': {'stringValue': row['ID']}}
]
if row['DRAW_WORKS'] == 'NULL':
    entry.append({'name': 'DRAW_WORKS', 'value': {'isNull': True}})
else:
    entry.append({'name': 'DRAW_WORKS', 'value': {'longValue': int(row['DRAW_WORKS'])}})
parameter_set.append(entry)
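As a side note, if the preprocessing step is skipped and missing values stay as empty strings, the conversion can also be pushed into the SQL itself, passing DRAW_WORKS as a stringValue in every row. A sketch of that variant:

-- NULLIF turns the empty string into SQL NULL; the cast then yields
-- either an integer or NULL for the int column.
INSERT INTO T_RIG_ACTIVITY_STATUS_DATE (ID, DRAW_WORKS_RATING)
VALUES (:ID, NULLIF(:DRAW_WORKS, '')::int);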
I am trying to connect to a Postgres 12 DB running in Cloud SQL from a Cloud Function written in TypeScript.
I create the database client with the following:
import * as Knex from "knex"
const { username, password, instance } = ... // username, password, connection name (<app-name>:<region>:<database>)
const config = {
    client: 'pg',
    connection: {
        user: username,
        password: password,
        database: 'ingredients',
        host: `/cloudsql/${instance}`
    },
    pool: { min: 1, max: 1 }
}
const knex = Knex(config as Knex.Config)
I am then querying the database using:
const query = ... // passed in as param
const result = await knex('tableName').where('name', 'ilike', query).select('*')
When I run this code, I get the following error in the Cloud Functions logs:
Unhandled error { error: select * from "tableName" where "name" ilike $1 - relation "tableName" does not exist
at Parser.parseErrorMessage (/workspace/node_modules/pg-protocol/dist/parser.js:278:15)
at Parser.handlePacket (/workspace/node_modules/pg-protocol/dist/parser.js:126:29)
at Parser.parse (/workspace/node_modules/pg-protocol/dist/parser.js:39:38)
at Socket.stream.on (/workspace/node_modules/pg-protocol/dist/index.js:10:42)
at Socket.emit (events.js:198:13)
at Socket.EventEmitter.emit (domain.js:448:20)
at addChunk (_stream_readable.js:288:12)
at readableAddChunk (_stream_readable.js:269:11)
at Socket.Readable.push (_stream_readable.js:224:10)
at Pipe.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
I created the table using the following commands in the GCP Cloud Shell (then populated with a data from a CSV):
\connect ingredients;
CREATE TABLE tableName (name VARCHAR(255), otherField VARCHAR(255), ... );
In that console, if I run the query SELECT * FROM tableName;, I see the correct data listed.
Why does Knex not see the table tableName, while the GCP Cloud Shell does?
BTW, I am definitely connecting to the correct db, as I see the same error logs in the Cloud SQL logging interface.
Looks like you are creating the table tableName without quoting, which makes the name actually lower case (unquoted identifiers are case-insensitive and folded to lower case). So when creating the schema do:
CREATE TABLE "tableName" ("name" VARCHAR(255), "otherField" VARCHAR(255), ... );
or use only lower-case table / column names.
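You can confirm what PostgreSQL actually stored by querying the catalog in the same Cloud Shell:

-- Unquoted names are folded, so the earlier CREATE shows up as "tablename":
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';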
I am trying to save changes via WFS-T using GeoServer.
This is my code that loads features from GeoServer:
var sourceWFS = new ol.source.Vector({
    loader: function (extent) {
        $.ajax('http://127.0.0.1:8080/geoserver/kairosDB/ows', {
            type: 'GET',
            data: {
                service: 'WFS',
                version: '1.1.0',
                request: 'getFeature',
                typename: 'wfs_geom',
                srsname: 'EPSG:3857',
                bbox: extent.join(',') + ',EPSG:3857'
            }
        }).done(function (response) {
            sourceWFS.addFeatures(formatWFS.readFeatures(response));
        });
    },
    // strategy: ol.loadingstrategy.tile(ol.tilegrid.createXYZ()),
    strategy: ol.loadingstrategy.bbox,
    projection: 'EPSG:3857'
});
And this is the code that saves features to the server:
var formatGML = new ol.format.GML({
    featureNS: 'http://127.0.0.1:8080/geoserver/kairosDB',
    featureType: 'wfs_geom',
    srsName: 'EPSG:3857'
});
But when I save a feature to the server, this error occurs:
2017-02-01 14:30:02,339 ERROR [geoserver.ows] -
org.geoserver.wfs.WFSTransactionException: {http://127.0.0.1:8080/geoserver/kairosDB}wfs_geom is read-only
at org.geoserver.wfs.Transaction.execute(Transaction.java:269)
at org.geoserver.wfs.Transaction.transaction(Transaction.java:106)
at org.geoserver.wfs.DefaultWebFeatureService.transaction(DefaultWebFeatureService.java:172)
at sun.reflect.GeneratedMethodAccessor195.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
The wfs_geom table already has a primary key; here is its create script:
CREATE TABLE public.wfs_geom
(
    id bigint NOT NULL,
    geometry geometry,
    CONSTRAINT wfs_geom_pkey PRIMARY KEY (id)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

ALTER TABLE public.wfs_geom
    OWNER to postgres;

GRANT ALL ON TABLE public.wfs_geom TO postgres;

-- Index: sidx_wfs_geom
-- DROP INDEX public.sidx_wfs_geom;

CREATE INDEX sidx_wfs_geom
    ON public.wfs_geom USING gist
    (geometry)
    TABLESPACE pg_default;
Could you help me?
This works like a charm: http://osgeo-org.1560.x6.nabble.com/Read-only-error-when-editing-a-WFS-T-td5284537.html
There is a rule in the "Data security" section that does allow writing to this workspace for all, but writing is not allowed for the anonymous user across the whole workspace (..w). The strange behaviour is that yesterday I could edit the layer using a client, and I do not understand why; moreover, I cannot edit the layer using QGIS, even though QGIS supports WFS-T.
There are two things you need to change in GeoServer to make this possible:
1. Enable "Transactional" under the WFS tab in GeoServer.
2. Under the Data tab in GeoServer, edit the rule ..w to enable the role ROLE_ANONYMOUS.
After doing these two things I was able to get rid of this error and post data to GeoServer.
Add a primary key constraint to the table. GeoServer treats a PostGIS table without a primary key as read-only, which can also produce this error.
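If the published table really were missing its key, a constraint like this would make it writable (a sketch, assuming the id column is unique and not null):

ALTER TABLE public.wfs_geom
    ADD CONSTRAINT wfs_geom_pkey PRIMARY KEY (id);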
I'm trying to use the timer service provided by Glassfish. Thus, I have to create a table named EJB__TIMER__TBL and configure a JDBC resource in Glassfish.
I want to store this table in PostgreSQL in a schema named glassfish. So my DDL is this one (I replaced the BLOB type with BYTEA):
CREATE SCHEMA glassfish;
CREATE TABLE glassfish.EJB__TIMER__TBL (
    CREATIONTIMERAW BIGINT NOT NULL,
    BLOB BYTEA,
    TIMERID VARCHAR(255) NOT NULL,
    CONTAINERID BIGINT NOT NULL,
    OWNERID VARCHAR(255) NULL,
    STATE INTEGER NOT NULL,
    PKHASHCODE INTEGER NOT NULL,
    INTERVALDURATION BIGINT NOT NULL,
    INITIALEXPIRATIONRAW BIGINT NOT NULL,
    LASTEXPIRATIONRAW BIGINT NOT NULL,
    SCHEDULE VARCHAR(255) NULL,
    APPLICATIONID BIGINT NOT NULL,
    CONSTRAINT PK_EJB__TIMER__TBL PRIMARY KEY (TIMERID)
);
DROP ROLE IF EXISTS glassfish;
CREATE ROLE glassfish WITH NOINHERIT LOGIN PASSWORD '...';
REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA glassfish FROM glassfish;
REVOKE ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA glassfish FROM glassfish;
GRANT USAGE ON SCHEMA glassfish TO glassfish;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA glassfish TO glassfish;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA glassfish TO glassfish;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA glassfish TO glassfish;
ALTER USER glassfish SET search_path to 'glassfish';
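As a sanity check (plain psql, in a fresh session as the glassfish role), I can confirm where unqualified table names will resolve:

SHOW search_path;         -- expected: glassfish
SELECT current_schema();  -- expected: glassfish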
I configured a jdbc pool and resource for Glassfish :
asadmin create-jdbc-connection-pool
--datasourceclassname org.postgresql.ds.PGConnectionPoolDataSource
--restype javax.sql.ConnectionPoolDataSource
--property User=glassfish:Password=...:PortNumber=5432:DatabaseName=...:ServerName=localhost jdbc/simPool/glassfish
asadmin create-jdbc-resource --connectionpoolid jdbc/simPool/glassfish jdbc/sim/glassfish
And I properly set jdbc/sim/glassfish as the JDBC resource to use for the timer service in the Glassfish GUI.
Each time I deploy my app, I receive this exception:
[#|2013-02-18T11:42:42.562+0100|WARNING|glassfish3.1.2|org.eclipse.persistence.session.file
:/E:/softs/serveurs/glassfish3_1122/glassfish/domains/domain1/applications/ejb-timer-service-app/WEB-INF/classes/___EJB__Timer__App|_ThreadID=58;_ThreadName=Thread-2;|Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: relation « EJB__TIMER__TBL » does not exist
Position: 193
Error Code: 0
Call: SELECT "TIMERID", "APPLICATIONID", "BLOB", "CONTAINERID", "CREATIONTIMERAW", "INITIALEXPIRATIONRAW", "INTERVALDURATION", "LASTEXPIRATIONRAW", "OWNERID", "PKHASHCODE", "SCHEDULE", "STATE" FROM "EJB__TIMER__TBL" WHERE (("OWNERID" = ?) AND ("STATE" = ?))
bind => [2 parameters bound]
Query: ReadAllQuery(name="findTimersByOwnerAndState" referenceClass=TimerState sql="SELECT "TIMERID", "APPLICATIONID", "BLOB", "CONTAINERID", "CREATIONTIMERAW", "INITIALEXPIRATIONRAW", "INTERVALDURATION", "LASTEXPIRATIONRAW", "OWNERID", "PKHASHCODE", "SCHEDULE", "STATE" FROM "EJB__TIMER__TBL" WHERE (("OWNERID" = ?) AND ("STATE" = ?))")
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:333)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:644)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:535)
at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:1717)
So my table EJB__TIMER__TBL doesn't seem to be accessible by Glassfish.
When I create another project, configure a persistence.xml file with the same credentials as my pooled connection above, and run a simple query SELECT COUNT(*) FROM EJB__TIMER__TBL, I receive 0, so my connection is well established and the default schema accessed is glassfish, as expected.
In ${glassfish_root}\glassfish\lib\install\databases there are some DDLs, but none for PostgreSQL... so where am I going wrong?
NB: when I configure the timer service with a MySQL JDBC resource, it works...
Thanks for the help!
OK, I found the solution to my problem.
I didn't know that SQL identifiers can be case-sensitive. Glassfish issues SELECT ... FROM "EJB__TIMER__TBL" with double quotes, so I have to create a table named "EJB__TIMER__TBL", not "ejb__timer__tbl" or anything else.
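The rule is easy to check in psql: unquoted identifiers are folded to lower case, while double-quoted ones are stored verbatim, so the two spellings name different relations:

CREATE TABLE EJB__TIMER__TBL (id int);  -- actually stored as ejb__timer__tbl
SELECT * FROM "EJB__TIMER__TBL";        -- fails: relation does not exist
SELECT * FROM ejb__timer__tbl;          -- works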
The workaround is just to recreate my table with double quotes:
CREATE TABLE glassfish."EJB__TIMER__TBL" (
    "CREATIONTIMERAW" BIGINT NOT NULL,
    "BLOB" BYTEA,
    "TIMERID" VARCHAR(255) NOT NULL,
    "CONTAINERID" BIGINT NOT NULL,
    "OWNERID" VARCHAR(255) NULL,
    "STATE" INTEGER NOT NULL,
    "PKHASHCODE" INTEGER NOT NULL,
    "INTERVALDURATION" BIGINT NOT NULL,
    "INITIALEXPIRATIONRAW" BIGINT NOT NULL,
    "LASTEXPIRATIONRAW" BIGINT NOT NULL,
    "SCHEDULE" VARCHAR(255) NULL,
    "APPLICATIONID" BIGINT NOT NULL,
    CONSTRAINT "PK_EJB__TIMER__TBL" PRIMARY KEY ("TIMERID")
);