How to add new tokens into PostgreSQL? [closed] - postgresql

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I would like to add some new tokens, such as OPEN, to PostgreSQL. What procedure should I follow? I did not find any corresponding documentation. Thanks.

(Assuming you mean "the PostgreSQL server", not "the command-line client psql", and that by "token" you mean "SQL command / statement type"):
... yeah, that's not super simple.
If it's a utility command that does not require query planning it isn't super hard. You can take the existing utility commands as guidance on how they work. They're all quite different though. Start with ProcessUtility.
If it's intended to produce a query plan, like SELECT, INSERT, UPDATE, DELETE, CREATE TABLE AS, etc ... well, that tends to be a lot more complicated.
This sort of thing will require some quality time reading the PostgreSQL source code and developer documentation. It's way too complex to give you a step-by-step how-to here, especially since you have not even explained what the command you wish to add is supposed to do.
If at all possible, you should develop the functionality you need as a user-defined function first. Start with PL/pgSQL, PL/Perl, or whatever, and if you hit the limitations of that, develop it as a C extension.
Once you have all the functionality you want as C functions, then think about whether it makes sense to extend the actual SQL syntax.
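As a sketch of that function-first approach (all names here are hypothetical, since you haven't said what OPEN should actually do):

```sql
-- Hypothetical: expose the desired behaviour as a function instead of new syntax.
CREATE OR REPLACE FUNCTION open_resource(resource_name text)
RETURNS void
LANGUAGE plpgsql AS
$$
BEGIN
    -- real logic would go here; RAISE NOTICE is just a placeholder
    RAISE NOTICE 'opening %', resource_name;
END;
$$;

-- Called with existing SQL syntax, so no grammar changes are needed:
SELECT open_resource('my_resource');
```

You get the behaviour immediately, and only the grammar question remains if you later decide `SELECT open_resource(...)` isn't good enough.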

Related

What's the difference between these two Gorm way of query things? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I'm using Go and Gorm for Postgresql.
I want to understand what the difference is between
this:
var name = "myName"
var user User
db.Where("user like ?", name).Find(&user)
and this:
var user User
db.Where("user like " + name).Find(&user)
The SQL query is the same.
I mean, why do we use ORMs?
Can the #1 become a prepared statement?
Is the #1 "more optimized" than the #2?
What does it mean "more optimized"?
To answer your questions specifically:
Can the #1 become a prepared statement?
Usually ORMs will build prepared statements, so that (as explained by Flimzy) you can avoid SQL injection and the DB engine does not need to recalculate query plans.
Gorm has a specific configuration option for caching prepared statements:
https://gorm.io/docs/v2_release_note.html#Prepared-Statement-Mode
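A minimal sketch of that mode, assuming GORM v2 and its Postgres driver (`dsn` is a placeholder connection string):

```go
// PrepareStmt tells GORM to create prepared statements and cache them
// for reuse when the same query is executed again.
db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{
	PrepareStmt: true,
})
```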
Is the #1 "more optimized" than the #2?
You can look at this from the database perspective and from the language perspective.
Database: because #1 uses prepared statements, if you execute the same query again and again, the DB engine doesn't need to re-plan it (it reuses the prepared statement).
Language: Go creates one string for "user like ", then another string for the concatenation "user like " + name. If this code is executed many times (in a loop, for example) you will see execution time increase, simply because each concatenation allocates a new string.
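You can see the database-side effect by writing out by hand what the driver's prepared statement does (the table, column, and statement names here are made up):

```sql
-- Parsed and planned once:
PREPARE user_search(text) AS
    SELECT * FROM users WHERE name LIKE $1;

-- Executed many times against the cached statement:
EXECUTE user_search('myName');
EXECUTE user_search('otherName');
```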
What does it mean "more optimized"?
More optimized means faster.
As explained above:
Prepared statements let the query plan be calculated only once, so the next time the same query is executed the DB engine doesn't need to recalculate it, saving time. (You usually see this difference with complex queries.)
String concatenation in the language can be resource-consuming.
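To make the two points above concrete, here is a small Go sketch (the table and column names are made up) contrasting the two ways of building the query text:

```go
package main

import "fmt"

// placeholderQuery returns the SQL text a driver would prepare for style #1.
// The text is constant, so the engine can plan it once and reuse that plan.
func placeholderQuery() string {
	return "SELECT * FROM users WHERE name LIKE $1"
}

// concatQuery builds the SQL for style #2. Every distinct value produces a
// distinct string (a fresh allocation), and the engine sees a brand-new
// statement each time. It is also open to SQL injection.
func concatQuery(name string) string {
	return "SELECT * FROM users WHERE name LIKE '" + name + "'"
}

func main() {
	fmt.Println(placeholderQuery())
	for _, n := range []string{"alice", "bob"} {
		fmt.Println(concatQuery(n)) // different SQL text on every iteration
	}
}
```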

Best ways to apply joins inside an UPDATE query in PostgreSQL (performance-wise) [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I want to update a table column after checking multiple conditions on multiple tables.
I think UPDATE ... SET ... FROM is often more practical, and faster, than subquerying. See this simple example.
UPDATE t_name AS t
SET attr1 = r.attr2
FROM any_list AS r
WHERE t.anylist_id = r.id
A join lets the RDBMS choose an execution plan that optimizes data loading and processing, whereas a subquery may be executed as a separate step whose results are then fed to the outer query for processing.
More on subqueries can be found here
The subquery will generally only be executed far enough to determine whether at least one row is returned, not all the way to completion like in joins. It is unwise to write a subquery that has any side effects (such as calling sequence functions); whether the side effects occur or not may be difficult to predict.
I don't know how well they perform, but I see at least the following two possibilities:
A subquery
UPDATE ... SET ... FROM syntax (described in the PostgreSQL documentation, too)
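For comparison, the first option written as a correlated subquery against the same hypothetical tables as the example above:

```sql
-- Correlated subquery: the inner SELECT is evaluated per target row; the
-- EXISTS guard stops non-matching rows from being set to NULL.
UPDATE t_name AS t
SET attr1 = (SELECT r.attr2 FROM any_list AS r WHERE r.id = t.anylist_id)
WHERE EXISTS (SELECT 1 FROM any_list AS r WHERE r.id = t.anylist_id);
```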

Entity Framework or SqlDataReader [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I appreciate there are one or two similar questions on SO but we are a few years on and I know EF's speed and general performance has been enhanced so those may be out of date.
I am writing a new webservice to replace an old one. Replicating the existing functionality it needs to do just a handful of database operations. These are:
Call existing stored procedures to get data (2)
Send SQL to the database to be executed (should be stored procedures I know) (5)
Update records (2)
Insert records (1)
So 10 operations in total. The database is HUGE but I am only dealing with 3 tables directly (stored procedures do some complex JOINs).
When getting the data I build an array of objects (e.g. Employees) which then get returned by the web service.
From my experience with Entity Framework, and because I'm not doing anything clever with the data, I believe EF is not the right tool for my purpose and SqlDataReader is better (I imagine it is going to be lighter and faster).
Entity Framework focuses mostly on developer productivity - easy to use, easy to get things done.
EF does add some abstraction layers on top of "raw" ADO.NET. It's not designed for large-scale, bulk operations, and it will be slower than "raw" ADO.NET.
Using a SqlDataReader will be faster - but it's also a lot more (developer) work.
Pick whichever is more important to you - getting things done quickly and easily (as a developer), or getting top speed by doing it "the hard way".
There's really no good "one single answer" to this "question" ... pick the right tool / the right approach for the job at hand and use it.

why doesn't PostgreSQL have ON DUPLICATE KEY? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Is this a perpetually denied feature request or something specific to Postgres?
I've read that Postgres potentially has higher performance than InnoDB, but also potentially a larger chance of less serialization (I apologize that I don't have a source; please give that statement a wide berth because of my noobity and memory), and I wonder if it might have something to do with that.
Postgres is amazingly functional compared to MySQL, and that's why I've switched. Already I've cut down lines of code & unnecessary replication immensely.
This is just a small annoyance, but I'm curious whether it's considered unnecessary because of the UPDATE-then-INSERT workaround, or whether it's very difficult to develop (relative to the perceived added value), like boost::lockfree::queue's ability to pass "anything", or whether it's something else.
PostgreSQL committers are working on a patch to introduce "INSERT ... ON DUPLICATE KEY" functionality, which is functionally equivalent to an "upsert". MySQL and Oracle already have this functionality (in Oracle it is called MERGE).
A link to the PostgreSQL archives where the functionality is discussed and a patch introduced: http://www.postgresql.org/message-id/CAM3SWZThwrKtvurf1aWAiH8qThGNMZAfyDcNw8QJu7pqHk5AGQ@mail.gmail.com
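For readers arriving later: that work eventually shipped in PostgreSQL 9.5 as INSERT ... ON CONFLICT. A sketch with a hypothetical table:

```sql
-- PostgreSQL 9.5+ upsert; the table and columns are made up for illustration.
CREATE TABLE counters (name text PRIMARY KEY, hits int NOT NULL DEFAULT 0);

INSERT INTO counters (name, hits)
VALUES ('home', 1)
ON CONFLICT (name)
DO UPDATE SET hits = counters.hits + EXCLUDED.hits;
```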

NoSQL or SQL Server [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I'm starting out to design a site that has some requirements that I've never really dealt with. Specifically, the data objects will have similar, but not exact, attributes. Yes, I could probably figure out most of the possible attributes and then just not populate the ones that don't make sense, and therefore keep a traditional "Relational" table and column design, but I'm thinking this might be a really good time to learn NoSQL.
In addition, the user will have 1, and only 1, textbox to search, and I will need to search all data objects and their attributes to find that string.
Ideally, I'd like to have the search return in order of "importance", meaning that if a match for the user's entered string is found in a "name" attribute, it would be returned as a higher confidence match than if the string was matched on a sub-attribute.
Anyone have any experience in this sort of situation? What have you tried that worked or didn't work? Am I wrong in thinking that this project is very well suited to a NoSQL type of database?
Stick with a traditional relational database such as MySQL or PostgreSQL. I would suggest sorting by relevance in your application code after obtaining the matching results. The size of your result set should drive your design choices, but if you will have fewer than 1-2k results, just keep it simple and don't worry too much about optimization.
NoSQL is just a dumb key value store, a persistent dictionary that can be shared across multiple application instances. It can solve scalability issues, but introduces new ones since you now just have a dumb data store. Relational databases have had years of performance tuning and do a great job.
I find NoSQL to be much better suited to storing state data, like a user's preferences or a cache. If you are analyzing relationships between data, then you need a relational database.