I'm using and following the documentation at https://godoc.org/github.com/lib/pq, but after hours and hours of research online I can't find any good example of passing variables to db.Exec().
I'm building a program that will create new tables whose names depend on the command-line arguments.
db.Exec(`CREATE TABLE $1(
ID INT PRIMARY KEY NOT NULL,
HOST TEXT NOT NULL,
PORTS TEXT,
BANNERS TEXT,
JAVASCRIPT TEXT,
HEADERS TEXT,
COMMENTS TEXT,
ROBOTS TEXT,
EMAILS TEXT,
CMS TEXT,
URLS TEXT,
BUSTIN TEXT,
VULN TEXT
)`, tablename)
But no luck. I have obviously tried changing things around; I even tried building the CREATE TABLE statement in a string and passing that as db.Exec(string), but no luck either...
Can someone give me a hand?
Thanks
You can check at https://golang.org/src/database/sql/sql.go?s=39599:39668#L1437, around line 1478, that SQL statements are first prepared and then executed.
In PostgreSQL, prepared statements are only valid for SELECT, INSERT, UPDATE, DELETE, or VALUES (https://www.postgresql.org/docs/10/static/sql-prepare.html), and a parameter like $1 can only stand in for a value, not for an identifier such as a table name, so the CREATE TABLE above cannot work.
Here you can use Go's fmt.Sprintf to build the statement for different tables and check the table name manually. SQL table names can contain many special characters, but you can narrow the allowed set; my validation is regexp.MustCompile("^[a-zA-Z_]+[0-9a-zA-Z_]*$").
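If you would rather keep the dynamic table name on the database side instead of building the string in Go, here is a minimal sketch (the function name and the shortened column list are illustrative, not from the question) using PostgreSQL's format() with %I, which quotes its argument as an identifier, together with EXECUTE:
-- Hypothetical helper: %I quotes the argument as an identifier, so the
-- dynamic table name cannot break out of the CREATE TABLE statement.
CREATE OR REPLACE FUNCTION create_scan_table(tablename text) RETURNS void AS $$
BEGIN
    EXECUTE format('CREATE TABLE %I (
        id      INT PRIMARY KEY NOT NULL,
        host    TEXT NOT NULL,
        ports   TEXT,
        banners TEXT
    )', tablename);
END;
$$ LANGUAGE plpgsql;

SELECT create_scan_table('example_scan');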
When I search for varchar as the column type I get 'No results found'.
I need to know which data type I should select for an 'email' column in a PostgreSQL database.
In the past I used text, varchar, or character varying.
Apart from using VARCHAR (as suggested by @Maria), you might get some insight from this link:
https://www.dbrnd.com/2018/04/postgresql-how-to-validate-the-email-address-column/
and from this one: https://dba.stackexchange.com/questions/68266/what-is-the-best-way-to-store-an-email-address-in-postgresql
If you read parts of them, you will see they created their own functions or constraints, which should help you understand PostgreSQL better.
TL/DR, in case the links change in the future (shamelessly taken from one of them):
CREATE EXTENSION citext;
CREATE DOMAIN domain_email AS citext
CHECK(
    VALUE ~ '^\w+@[a-zA-Z_]+?\.[a-zA-Z]{2,3}$'
);
-- valid samples
SELECT 'some_email@gmail.com'::domain_email;
SELECT 'accountant@dbrnd.org'::domain_email;
-- an invalid sample
SELECT 'dba@aol.info'::domain_email;
As Neil pointed out, yes, it's just like using custom TYPEs.
CREATE DOMAIN creates a new domain. A domain is essentially a data type with optional constraints (restrictions on the allowed set of values). (source)
For those of you unfamiliar with the odd characters used to check the value: it's a regex pattern.
And an example used with a table:
CREATE TABLE sample_table ( id SERIAL PRIMARY KEY, email domain_email );
-- The following is invalid, because ".info" has 4 characters
-- and the regex pattern only allows 2-3 characters
INSERT INTO sample_table (email) VALUES ('sample_email@gmail.info');
ERROR: value for domain domain_email violates check constraint "domain_email_check"
-- The following query is valid
INSERT INTO sample_table (email) VALUES ('sample_email@gmail.com');
SELECT * FROM sample_table;
 id |         email
----+------------------------
  1 | sample_email@gmail.com
(1 row)
Thanks Neil for the suggestion.
I recommend you use the CITEXT type, which ignores case when comparing values. That is important for email, to prevent duplicates like username@example.com and UserName@example.com.
This type is part of the citext extension, which can be activated by the following query:
CREATE EXTENSION citext;
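A minimal sketch of what that buys you (the table and column names here are made up):
CREATE TABLE app_user (
    id    SERIAL PRIMARY KEY,
    email CITEXT UNIQUE
);

INSERT INTO app_user (email) VALUES ('username@example.com');

-- fails with a unique violation, because citext compares case-insensitively
INSERT INTO app_user (email) VALUES ('UserName@example.com');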
We're in the process of converting from SQL Server to Postgres. I have a scenario that I am trying to accommodate. It involves inserting records from one table into another WITHOUT listing out all of the columns. I realize this is not recommended practice, but let's set that aside for now.
drop table if exists pk_test_table;
create table public.pk_test_table
(
recordid SERIAL PRIMARY KEY NOT NULL,
name text
);
--example 1: works and will insert a record with an id of 1
insert into pk_test_table values(default,'puppies');
--example 2: fails
insert into pk_test_table
select first_name from person_test;
Error I receive in the second example:
column "recordid" is of type integer but expression is of type
character varying Hint: You will need to rewrite or cast the
expression.
The default keyword tells the database to grab the next value.
Is there any way to use this keyword in the second example? Or some way to tell the database to ignore the auto-incremented columns and just let them be populated as normal?
I would prefer not to use a subquery to grab the next "id".
This works in SQL Server, hence the question.
Thanks in advance for your help!
If you can't list column names, you should instead use the DEFAULT keyword, as you've done in the simple insert example. That won't work with an insert into ... select ....
For that, you need to invoke nextval. A subquery is not required, just:
insert into pk_test_table
select nextval('pk_test_table_recordid_seq'), first_name from person_test;
You do need to know the sequence name. You could get that from information_schema based on the table name and inferring its primary key, using a function that takes just the table name as an argument. It'd be ugly, but it'd work. I don't think there's any way around needing to know the table name.
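A simpler option than going through information_schema is the built-in pg_get_serial_sequence(), which looks the sequence up from the table and column names (same tables as above):
insert into pk_test_table
select nextval(pg_get_serial_sequence('pk_test_table', 'recordid')), first_name
from person_test;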
You're inserting the value into the first column, but you need it to go into the second one.
For that you can use the INSERT INTO table(field) VALUES(value) syntax.
Since you need to fetch the values from another table, drop VALUES and put the query there instead:
insert into pk_test_table(name)
select first_name from person_test;
I hope this helps.
I do it this way via a separate function, though I think I'm really getting around the issue at the table level by having DEFAULT settings on a per-field basis.
create table public.pk_test_table
(
recordid integer NOT NULL DEFAULT nextval('pk_test_table_id_seq'), -- this sequence must already exist
name text,
field3 integer NOT NULL DEFAULT 64,
null_field_if_not_set integer,
CONSTRAINT pk_test_table_pkey PRIMARY KEY ("recordid")
);
With function:
CREATE OR REPLACE FUNCTION func_pk_test_table() RETURNS void AS
$BODY$
INSERT INTO pk_test_table (name)
SELECT first_name FROM person_test;
$BODY$
LANGUAGE sql VOLATILE;
Then just execute the function via SELECT func_pk_test_table();
Notice it doesn't have to specify all the fields, as long as the constraints allow it.
Hello, I am fairly new to PostgreSQL and I keep getting the following error:
ERROR: relation "contact" does not exist
********** Error **********
ERROR: relation "contact" does not exist
SQL state: 42P01
Questions about this error have come up a lot on Stack Overflow and elsewhere; however, I have checked for any quoting that might make the identifiers case-sensitive and was unable to find any.
This is how I've attempted to create the table:
CREATE TABLE CONTACT (
CONTACT_ID INTEGER,
BUILDING_NO INTEGER,
POSTCODE VARCHAR,
PHONE_NO INTEGER,
EMAIL VARCHAR,
CONSTRAINT PK_CONTACT_ID PRIMARY KEY (CONTACT_ID));
I would appreciate anyone's help, and I'm sorry if this question has been asked before. Thank you :)
The issue is that you are creating your objects (tables) in the wrong order.
Create contact before you create the tables that use contact as a foreign key, or create the foreign keys after you've created all of your tables. The first table, "student", references contact, which has not been created yet.
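For illustration, a minimal sketch of that ordering (the student table's columns are invented here, since the question doesn't show its full DDL):
CREATE TABLE CONTACT (
    CONTACT_ID INTEGER,
    CONSTRAINT PK_CONTACT_ID PRIMARY KEY (CONTACT_ID));

-- created afterwards, so the reference can be resolved
CREATE TABLE STUDENT (
    STUDENT_ID INTEGER,
    CONTACT_ID INTEGER,
    CONSTRAINT PK_STUDENT_ID PRIMARY KEY (STUDENT_ID),
    CONSTRAINT FK_CONTACT_ID FOREIGN KEY (CONTACT_ID) REFERENCES CONTACT (CONTACT_ID));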
EDIT:
Also, your phone number fields should not be integers; they should be text or varchar. If you're dead set on a numeric type, use bigint instead of integer.
I copied and pasted your code and it worked for me. Could it be something with your PostgreSQL setup?
EDITS BELOW
Alright, so I'm going to add more to the solution.
Like Joe Love said, you have issues with the ordering. The proper order for your tables should be something like Contact, Status, Student, Company, Application, and then the rest of the statements.
In your APPLICATION table, the line
CONSTRAINT FK_STATUS_ID FOREIGN KEY (STATUS_ID) REFERENCES STATUS (STATUS_ID),
will give you an error.
That's because in the STATUS table you have defined the ID as a VARCHAR, while in the APPLICATION table STATUS_ID is an INTEGER; a foreign key column must have the same type as the column it references (see the sketch below).
You also still have some issues with the APPLICATION table referencing the COMPANY table. If you fix those, you should be good.
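For example (illustrative only, since the full DDL for STATUS and APPLICATION isn't shown in the question):
CREATE TABLE STATUS (
    STATUS_ID INTEGER,              -- was VARCHAR in the original post
    STATUS_NAME VARCHAR,
    CONSTRAINT PK_STATUS_ID PRIMARY KEY (STATUS_ID));

CREATE TABLE APPLICATION (
    APPLICATION_ID INTEGER,
    STATUS_ID INTEGER,              -- same type as STATUS.STATUS_ID
    CONSTRAINT PK_APPLICATION_ID PRIMARY KEY (APPLICATION_ID),
    CONSTRAINT FK_STATUS_ID FOREIGN KEY (STATUS_ID) REFERENCES STATUS (STATUS_ID));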
We have a PostgreSQL database, and we have several tables that need to keep certain data in several languages (the list of possible languages is, thankfully, defined system-wide).
For example lets start with:
create table blah (id serial, foo text, bar text);
Now, let's make it multilingual.
How about:
create table blah (id serial, foo_en text, foo_de text, foo_jp text,
bar_en text, bar_de text, bar_jp text);
That would be good for full-text search in Postgres. Just add a tsvector column
for each language.
But is it optimal?
Maybe we should use another table to keep the translations?
Like:
create table texts (id serial, colspec text, obj_id int, language text, data text);
Maybe, just maybe, we should use something else - something out of the SQL world?
Any help is appreciated.
I think it is best if you create two tables: one for the ids, one for the per-language texts, and so on. Something like:
first_table( id )
second_table( s_id, id_first_table, language_id, language_text)
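A minimal sketch of that layout in SQL, reusing the blah table from the question (the translation table's name and columns are illustrative):
create table blah (
    id serial primary key
);

create table blah_translation (
    blah_id  integer not null references blah (id),
    language text    not null,   -- e.g. 'en', 'de', 'jp'
    foo      text,
    bar      text,
    primary key (blah_id, language)
);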
Here's a great article the Mozilla developers put together on making their database multilingual. It's specific to CakePHP, but the info can easily be applied to other systems. Also, note that it makes SQL queries significantly more complex, which is a drawback. That will generally be true regardless of your i18n implementation, though.
Part 1
Part 2
Part 3
I need to make a function that is triggered after every UPDATE and INSERT operation and checks the key fields of the table the operation is performed on against some conditions.
The function (and the trigger) needs to be universal: it shouldn't have the table name or field names hardcoded.
I got stuck on the part where I need to access the table name and its schema, i.e. check which fields are part of the PRIMARY KEY.
After getting the primary key info as already posted in the first answer, you can check the code in http://github.com/fgp/pg_record_inspect to get record field values dynamically in PL/pgSQL.
Have a look at How do I get the primary key(s) of a table from Postgres via plpgsql? The answer in that one should be able to help you.
Note that you can't dynamically reference the fields of a record (such as NEW) in plain PL/pgSQL; it's too strongly typed a language for that. You'll have more luck with PL/Perl, in which you can access a hash of the columns and use regular Perl accessors to check them. (PL/Python would also work, but sadly that's an untrusted language only. PL/Tcl works too.)
In 8.4 you can use EXECUTE 'something' USING NEW, which in some cases is able to do the job.
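Putting those pieces together, here is a minimal sketch of such a generic trigger function (the function and trigger names are made up, and the RAISE NOTICE stands in for whatever condition checks you actually need); it looks up the primary key columns of whichever table fired it and reads their values out of NEW with EXECUTE ... USING:
CREATE OR REPLACE FUNCTION check_pk_fields() RETURNS trigger AS $$
DECLARE
    pk_col text;
    pk_val text;
BEGIN
    -- find the primary key columns of the table that fired the trigger
    FOR pk_col IN
        SELECT a.attname
        FROM   pg_index i
        JOIN   pg_attribute a ON a.attrelid = i.indrelid
                             AND a.attnum = ANY (i.indkey)
        WHERE  i.indrelid = TG_RELID
        AND    i.indisprimary
    LOOP
        -- read that column's value out of NEW (needs 8.4+ for USING)
        EXECUTE 'SELECT ($1).' || quote_ident(pk_col) || '::text'
        INTO pk_val
        USING NEW;
        RAISE NOTICE 'table %, key column % = %', TG_TABLE_NAME, pk_col, pk_val;
    END LOOP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- attach it to any table, for example:
-- CREATE TRIGGER check_pk_fields_trg
--     AFTER INSERT OR UPDATE ON contact
--     FOR EACH ROW EXECUTE PROCEDURE check_pk_fields();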