I've created this table in psql:
CREATE TABLE admin (id INT NOT NULL, username VARCHAR(180) NOT NULL, roles JSON NOT NULL, password VARCHAR(255) NOT NULL, PRIMARY KEY(id));
and then
INSERT INTO admin (id, username, roles, password) VALUES (nextval('admin_id_seq'), 'admin', '["ROLE_ADMIN"]', '$argon2id$v=19$m=65536,t=4,p=1$Kcm6sv104bqBtb+FEh+dDQ$caej4aGFAYMvBqKNHgFAGw6+1rua1Iwk/g09mYbOCLMx’);
There's not a single error on my end in the terminal, but 0 rows in the table. Is there any code that I missed?
If you look closely at your values (truncated below!), you'll see that the very last single quote is actually a "right single quotation mark" (U+2019) and not a "single quote" / "apostrophe" (U+0027), while all the other quotes are apostrophes, as they should be.
(nextval('admin_id_seq'), 'admin', '["ROLE_ADMIN"]', '$argon2id$v=19$...’)
^-- this one
If you look very closely, you can see that it even looks different. You can zoom in to see the difference more easily.
The solution is to replace the last one with an apostrophe / single quote (U+0027):
INSERT INTO admin (id, username, roles, password) VALUES (nextval('admin_id_seq'), 'admin', '["ROLE_ADMIN"]', '$argon2id$v=19$m=65536,t=4,p=1$Kcm6sv104bqBtb+FEh+dDQ$caej4aGFAYMvBqKNHgFAGw6+1rua1Iwk/g09mYbOCLMx');
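Incidentally, that also explains why you saw no error and no rows: because the final quote is the wrong character, the string literal is never closed, so psql just sits there waiting for the rest of the statement and the INSERT never executes. Sketched below (values truncated):
test=> INSERT INTO admin (id, username, roles, password)
test-> VALUES (nextval('admin_id_seq'), 'admin', '["ROLE_ADMIN"]', '$argon2id$v=19$...’);
test'>
The '> in the prompt means psql is still inside an open string literal.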
I want to make a username field in a PostgreSQL database. Of course, usernames have a pattern: only the characters 0-9, a-z, A-Z, and _ are allowed. How do I do such a thing?
I tried making the server check the username before it inserts it into the database. It works, but it feels clunky.
You could create your table with a check constraint on the username field:
CREATE TABLE courses (
...,
username VARCHAR(50) CHECK (username ~* '^[A-Z0-9_]+$'),
...
);
The check constraint above uses a case-insensitive regular expression match (the ~* operator) to assert that the username contains only letters, digits, or underscores.
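A quick way to sanity-check the constraint (using a minimal, hypothetical version of the table, since the original is elided):
CREATE TABLE courses (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) CHECK (username ~* '^[A-Z0-9_]+$')
);

INSERT INTO courses (username) VALUES ('alice_01');  -- accepted
INSERT INTO courses (username) VALUES ('alice-01');  -- rejected with:
-- ERROR:  new row for relation "courses" violates check constraint "courses_username_check"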
Assuming I have a (over simplified, non secure) table that looks like:
CREATE TABLE users (id SERIAL PRIMARY KEY, user VARCHAR(25), _password VARCHAR(25), email VARCHAR(80));
I want to add an additional failsafe on the column _password that prevents it from being returned by a SELECT * FROM users call. Is this possible in PostgreSQL, and if so, how?
I tried some versions of https://stackoverflow.com/a/7250991/929999, but that probably isn't what I was looking for. It got me thinking, though, that there might be a constraint that could be created. I can't find anyone who's tried this or asked about it before, so I'm kind of lost, seeing as I'm not a database expert by any means.
So for now I dump all results from the database into a custom dictionary placeholder in Python with a function called .safe_dump() that removes any keys starting with _<key>.
And I guess I could create a separate table containing a list of sensitive keys and match those on every SELECT statement via a JOIN or similar, but that would just move the risk of accidentally retrieving a sensitive key from the SELECT call to keeping that "JOIN table" updated.
Is there a flag in PostgreSQL that can filter out or block calls trying to access a column while still allowing it to be used in WHERE x=y clauses?
You can deny permission for that column:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
"user" VARCHAR(25),
_password VARCHAR(25),
email VARCHAR(80)
);
REVOKE ALL ON users FROM laurenz;  -- 'laurenz' is the ordinary user we'll test as
GRANT SELECT (id, "user", email) ON users TO public;
test=> SELECT * FROM users;
ERROR: permission denied for relation users
test=> SELECT id, "user", email FROM users;
id | user | email
----+------+-------
(0 rows)
If you'd rather exclude the column from the output entirely, use a view:
CREATE VIEW users_v AS SELECT id, "user", email FROM users;
GRANT SELECT ON users_v TO PUBLIC;
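Since the view simply omits _password, SELECT * on it is now safe. Note that by default a view is executed with the privileges of its owner, so readers of the view don't need any rights on the underlying table. For example:
test=> SELECT * FROM users_v;
 id | user | email
----+------+-------
(0 rows)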
I'm currently using the pq lib for Go to communicate with my PostgreSQL database. Error checking is proving to be a little more difficult than anticipated. The easiest way to describe my question is through an example scenario.
Imagine a web form:
Username ________
Email ________
Voucher ________
Password ________
A rough schema:
username VARCHAR(255) UNIQUE NOT NULL,
email VARCHAR(255) UNIQUE NOT NULL,
voucher VARCHAR(255) UNIQUE NOT NULL,
password VARCHAR(255) NOT NULL
Ignore the presumed plain-text password for now. If a person submits the form, I can do all of my validation to verify constraints such as length, allowed characters, etc.
Now it comes to putting it in the database, so we write a prepared statement and execute it. If the validation was done correctly, the only thing that can really go wrong is the UNIQUE constraints. In the event that someone attempts to enter an existing username, database/sql is going to fire back an error.
My problem is that I have no idea what to do with that error and how to recover from (what should be) a recoverable error. pq provides some support for this, but there still appears to be some ambiguity about what's returned.
I can see two solutions, neither of which sounds particularly appealing to me: a SERIALIZABLE transaction that checks every single form value prior to insertion, or alternatively some form of parsing of the pq error struct.
Is there a common pattern for implementing such a system? I'd like to be able to say to a user "Sorry, that username exists" rather than "Sorry, something bad happened".
As a sidenote, the PostgreSQL documentation states:
The fields for schema name, table name, column name, data type name, and constraint name are supplied only for a limited number of error types; see Appendix A.
but the linked page isn't very helpful with respect to values returned in the database object.
If the validation was done correctly, the only thing that can really go wrong is the UNIQUE constraints.
No, the client could lack sufficient privileges, the client might have entered a valid password that's not the right password, the client might have entered a valid voucher that belongs to a different client, etc.
Using "A SERIALIZABLE transaction which checks every single form value prior to insertion" doesn't make sense. Just insert data, and trap errors.
At the very least, your code needs to examine and respond to the C (Code) field, which is always present in the error struct. You don't need to parse the error struct, but you do need to read it.
If you violate a unique constraint, PostgreSQL will return SQL state 23505 in the Code field. It will also return the name of the first constraint that's violated. It doesn't return the column name, probably because a unique constraint can include more than one column.
You can select the column(s) the constraint refers to by querying the information_schema views.
Here's a simple version of your table.
create table test (
username VARCHAR(255) UNIQUE NOT NULL,
email VARCHAR(255) UNIQUE NOT NULL,
voucher VARCHAR(255) UNIQUE NOT NULL,
password VARCHAR(255) NOT NULL
);
insert into test values ('msherrill', 'me@example.com', 'a', 'wibble');
This quick-and-dirty Go program inserts the same row again; it violates every unique constraint.
package main

import (
    "database/sql"
    "fmt"
    "log"

    "github.com/lib/pq" // imported directly (not blank) so we can type-assert on *pq.Error
)

func main() {
    db, err := sql.Open("postgres", "host=localhost port=5435 user=postgres password=xxxxxxxx dbname=scratch sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }

    // Insert a duplicate row; this violates the unique constraint on username.
    res, err := db.Exec("insert into public.test values ('msherrill', 'me@example.com', 'a', 'wibble');")

    // lib/pq returns errors as *pq.Error; on success the assertion simply
    // fails and this block is skipped.
    if err, ok := err.(*pq.Error); ok {
        fmt.Println("Severity:", err.Severity)
        fmt.Println("Code:", err.Code)
        fmt.Println("Message:", err.Message)
        fmt.Println("Detail:", err.Detail)
        fmt.Println("Hint:", err.Hint)
        fmt.Println("Position:", err.Position)
        fmt.Println("InternalPosition:", err.InternalPosition)
        fmt.Println("Where:", err.Where)
        fmt.Println("Schema:", err.Schema)
        fmt.Println("Table:", err.Table)
        fmt.Println("Column:", err.Column)
        fmt.Println("DataTypeName:", err.DataTypeName)
        fmt.Println("Constraint:", err.Constraint)
        fmt.Println("File:", err.File)
        fmt.Println("Line:", err.Line)
        fmt.Println("Routine:", err.Routine)
    }
    fmt.Println(res) // nil when the insert failed
}
Here's the output.
Severity: ERROR
Code: 23505
Message: duplicate key value violates unique constraint "test_username_key"
Detail: Key (username)=(msherrill) already exists.
Hint:
Position:
InternalPosition:
Where:
Schema: public
Table: test
Column:
DataTypeName:
Constraint: test_username_key
File: nbtinsert.c
Line: 406
Routine: _bt_check_unique
You have the schema, table, and constraint names. You presumably know the database (catalog) name, too. Use these values to select the schema, table, and column names from information_schema views. You're lucky; in this case you need only one view.
select table_catalog, table_schema, table_name, column_name
from information_schema.key_column_usage
where
table_catalog = 'scratch' and -- Database name
table_schema = 'public' and -- value returned by err.Schema
table_name = 'test' and -- value returned by err.Table
constraint_name = 'test_username_key' -- value returned by err.Constraint
order by constraint_catalog, constraint_schema, constraint_name, ordinal_position;
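Run against the example above, that query should come back with the column behind the violated constraint (output sketched for illustration):
 table_catalog | table_schema | table_name | column_name
---------------+--------------+------------+-------------
 scratch       | public       | test       | username
(1 row)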
I have a User table in sqlite3 with schema:
CREATE TABLE "users" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"email" varchar(255) DEFAULT '' NOT NULL,
"encrypted_password" varchar(255) DEFAULT '' NOT NULL,
"admin" boolean);
It has a boolean field, admin.
I add a user record from the Rails console:
User.create!({email: "example@example.com", encrypted_password: "sample string", admin: 1})
Then, when I query user records by the admin field:
select * from users where admin=1;
It returns an empty result set.
I had a look at the SQLite users table; the admin field is saved as the strings 't' and 'f'.
This causes a problem: when I use a custom query in Rails, the admin filter can't be written so that it works with both SQLite3 (my test database) and PostgreSQL (my dev and production databases).
How could I overcome this problem?
If you must use raw SQL, then you should use the same DBMS in development, testing, and production. PostgreSQL will run fine on Windows, Linux, and OS X.
This is especially important when it comes to SQLite, which doesn't enforce column data types the way other SQL databases do. SQLite allows this kind of literal nonsense.
sqlite> create table test (n integer not null);
sqlite> insert into test values ('wibble');
sqlite> select n from test;
wibble
But the query that's giving you trouble won't run in PostgreSQL anyway. This query
select * from users where admin=1;
will raise an error in PostgreSQL, because the integer 1 isn't a value in the domain of Boolean values. The string '1' is a valid value, though. So this will work.
select * from users where admin='1';
As will this.
select * from users where admin='t';
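To see both behaviours side by side, here is a sketch of a PostgreSQL session (assuming the users table with a boolean admin column):
test=> SELECT * FROM users WHERE admin = 1;
ERROR:  operator does not exist: boolean = integer
HINT:  No operator matches the given name and argument types. You might need to add explicit type casts.
test=> SELECT * FROM users WHERE admin = 't';
 id | email | encrypted_password | admin
----+-------+--------------------+-------
(0 rows)
PostgreSQL accepts 't', 'true', 'y', 'yes', 'on', and '1' (and their false counterparts) as boolean literals.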
I'm new to Cassandra and am trying to understand the data model. I know how to insert a row when "bob" is following "james". I also know how to query for a list of all people who follow "bob", and how to query for a list of who "bob" is following.
My question is: given the below, what does the query look like to find out whether "bob" is following "james"? (Yes or no.)
Is this the right query?
SELECT * FROM followers WHERE username='bob' AND following='james'
Do I need to set a secondary index on following to be able to execute the above query?
-- User storage
CREATE TABLE users (username text PRIMARY KEY, password text);
-- Users user is following
CREATE TABLE following (
username text,
followed text,
PRIMARY KEY(username, followed)
);
-- Users who follow user
CREATE TABLE followers (
username text,
following text,
PRIMARY KEY(username, following)
);
No need for a secondary index in this case. You can always test quick ideas like this using the cqlsh shell.
cqlsh> use ks;
cqlsh:ks> CREATE TABLE followers (
... username text,
... following text,
... PRIMARY KEY(username, following)
... );
cqlsh:ks> INSERT INTO followers (username, following ) VALUES ( 'bob', 'james' );
cqlsh:ks> SELECT * FROM followers WHERE username='bob' and following='james';
username | following
----------+-----------
bob | james
The reason you don't have to create a secondary index (nor should you, if you would like to perform this sort of query at scale) is that following is specified as a clustering key. This means following determines the layout of the data within the partition, so we can filter on following very quickly.
As an aside, if a frequently performed query requires a secondary index (or ALLOW FILTERING), that is an indication that you should rethink your data model.
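As a closing sketch: the usual pattern with this kind of denormalized model is to keep both tables in sync at write time. Following the same convention as the cqlsh demo above (an assumption, since the two tables mirror each other), recording that 'bob' follows 'james' could look like:
BEGIN BATCH
    INSERT INTO following (username, followed)  VALUES ('bob', 'james');
    INSERT INTO followers (username, following) VALUES ('bob', 'james');
APPLY BATCH;
A logged batch makes the two mirror writes succeed or fail together.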