Closed. This question needs debugging details. It is not currently accepting answers.
Closed 4 days ago.
please will you help me?
I'm writing a web-based Perl program that accesses a PostgreSQL table with 562,279 rows; the table structure is the following:

I created indexes for fast performance, and the response from the PostgreSQL server itself is FAST, but from the WEB it is still SLOW.
https://aws.amazon.com/es/blogs/database/tune-sorting-operations-in-postgresql-with-work_mem/
So I ran the following SQL statements:
1) postgresql=# show log_temp_files;
   log_temp_files = -1
2) postgresql=# show client_min_messages;
   client_min_messages = notice
3) postgresql=# show trace_sort;
   trace_sort = off
4) postgresql=# show work_mem;
   work_mem = 1GB
5) postgresql=# show shared_buffers;
   shared_buffers = 1GB
But the response time did not improve. Please, can you help me reduce the server response time from the web?
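One thing that would help narrow this down is the query plan. A sketch like the one below (the table and column names are only placeholders, since the real structure is not shown above) reports whether the sort fits in work_mem or spills to temporary files:

postgresql=# EXPLAIN (ANALYZE, BUFFERS)
             SELECT *
             FROM my_table            -- placeholder table name
             ORDER BY some_column;    -- placeholder column
-- In the output, "Sort Method: quicksort  Memory: ..." means the sort stayed in memory,
-- while "Sort Method: external merge  Disk: ..." means it spilled to temp files.

If the plan is fast in psql but the page is still slow, the time is probably being spent in the Perl/web layer or in sending all 562,279 rows to the browser, not in PostgreSQL itself.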
Regards
Xochitl
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I have a dataset that consists of 1500 columns and 6500 rows, and I am trying to figure out the best way to shape the data for web-based, user-interactive visualizations.
What I am trying to do is make the data more interactive and create an admin console that allows anyone to filter the data visually.
The front end could potentially be based on Crossfilter, D3 and DC.js and give the user basically endless filtering possibilities (date, value, country). In addition there will be some predefined views, like the top and bottom 10 values.
I have seen and tested some great examples like this one, but after testing it did not really fit the large number of columns I had, and it was based on a full JSON dump from MongoDB. This resulted in very long loading times and a loss of full interactivity with the data.
So in the end my question is: what is the best approach (starting with normalization) to getting the data shaped the right way so it can be manipulated from a front end? Changing the number of columns is a priority.
A quick look at the piece of data that you shared suggests that the dataset is highly denormalized. To allow for querying and visualization from a database backend, I would suggest normalizing. This is no small bit of software work, but in the end you will have relational data, which is much easier to deal with.
It's hard to guess where you would start but from the bit of data you showed there would be a country table, an event table of some sort and probably some tables of enumerated values.
In any case you will have a hard time finding a DB engine that allows that many columns. The row count is not a problem. I think in the end you will want a DB with dozens of tables.
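As a rough illustration only (all names here are invented, since the actual columns are not shown), the normalized layout described above might start out something like this:

-- Sketch only: hypothetical tables and columns.
CREATE TABLE country (
    country_id serial PRIMARY KEY,
    name       text NOT NULL UNIQUE
);

CREATE TABLE event_type (            -- one of the enumerated-value tables
    event_type_id serial PRIMARY KEY,
    label         text NOT NULL UNIQUE
);

CREATE TABLE event (
    event_id      serial PRIMARY KEY,
    country_id    integer NOT NULL REFERENCES country,
    event_type_id integer NOT NULL REFERENCES event_type,
    event_date    date,
    value         numeric
);

The front end would then request narrow slices (for example, the top 10 values per country) instead of loading one huge dump.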
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I appreciate there are one or two similar questions on SO, but we are a few years on and I know EF's speed and general performance have been improved, so those may be out of date.
I am writing a new web service to replace an old one. Replicating the existing functionality, it needs to do just a handful of database operations. These are:
Call existing stored procedures to get data (2)
Send SQL to the database to be executed (should be stored procedures I know) (5)
Update records (2)
Insert records (1)
So 10 operations in total. The database is HUGE but I am only dealing with 3 tables directly (stored procedures do some complex JOINs).
When getting the data I build an array of objects (e.g. Employees) which then get returned by the web service.
From my experience with Entity Framework, and because I'm not doing anything clever with the data, I believe EF is not the right tool for my purpose and SqlDataReader is better (I imagine it is going to be lighter and faster).
Entity Framework focuses mostly on developer productivity - easy to use, easy to get things done.
EF does add some abstraction layers on top of "raw" ADO.NET. It's not designed for large-scale, bulk operations, and it will be slower than "raw" ADO.NET.
Using a SqlDataReader will be faster - but it's also a lot more (developer's) work, too.
Pick whichever is more important to you - getting things done quickly and easily (as a developer), or getting top speed by doing it "the hard way".
There's really no good "one single answer" to this "question" ... pick the right tool / the right approach for the job at hand and use it.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Is this (the lack of an upsert statement) a perpetually denied feature request, or something specific to Postgres?
I've read that Postgres has potentially higher performance than InnoDB, but also potentially a greater chance of less serialization (I apologize that I don't have a source, and please give that statement a wide berth because of my noobity and faulty memory), and I wonder if it might have something to do with that.
Postgres is amazingly functional compared to MySQL, and that's why I've switched. Already I've cut down lines of code & unnecessary replication immensely.
This is just a small annoyance, but I'm curious whether it's considered unnecessary because of the UPDATE-then-INSERT workaround, whether it's very difficult to develop (possibly versus the perceived added value), like boost::lockfree::queue's ability to pass "anything", or whether it's something else.
PostgreSQL committers are working on a patch to introduce "INSERT ... ON DUPLICATE KEY", which is functionally equivalent to an "upsert". MySQL and Oracle already have this functionality (in Oracle it is called "MERGE").
A link to the PostgreSQL archives where the functionality is discussed and a patch introduced: http://www.postgresql.org/message-id/CAM3SWZThwrKtvurf1aWAiH8qThGNMZAfyDcNw8QJu7pqHk5AGQ#mail.gmail.com
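In the meantime, the UPDATE-then-INSERT workaround mentioned in the question looks roughly like this (the counters table and its columns are made up purely for the example):

-- Try the UPDATE first; if it touched no row, INSERT instead.
-- (Illustration only: the "counters" table is hypothetical.)
UPDATE counters SET hits = hits + 1 WHERE name = 'homepage';

INSERT INTO counters (name, hits)
SELECT 'homepage', 1
WHERE NOT EXISTS (SELECT 1 FROM counters WHERE name = 'homepage');

Under concurrent writers this can still race (two sessions may both see no existing row), which is exactly why a native upsert is being worked on.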
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I would like to add some more tokens, such as OPEN, to PostgreSQL. What procedure should I follow? I did not find any corresponding documentation. Thanks.
(Assuming you mean "the PostgreSQL server", not "the command-line client psql", and that by "token" you mean "SQL command / statement type"):
... yeah, that's not super simple.
If it's a utility command that does not require query planning it isn't super hard. You can take the existing utility commands as guidance on how they work. They're all quite different though. Start with ProcessUtility.
If it's intended to produce a query plan, like SELECT, INSERT, UPDATE, DELETE, CREATE TABLE AS, etc ... well, that tends to be a lot more complicated.
This sort of thing will require some quality time reading the PostgreSQL source code and developer documentation. It's way too complex to give you a step-by-step how-to here, especially since you have not even explained what the command you wish to add is supposed to do.
If at all possible you should develop the functionality you need as a user defined function first. Start with PL/PgSQL, PL/Perl, or whatever, and if you hit the limitations of that develop it as a C extension.
Once you have all the functionality you want as C functions, then think about whether it makes sense to extend the actual SQL syntax.
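For instance, a first prototype as a PL/pgSQL function could look something like the sketch below (the function name and behaviour are invented, since the intended meaning of OPEN has not been described):

-- Hypothetical prototype: whatever OPEN is meant to do, try it as a function first.
CREATE OR REPLACE FUNCTION open_thing(thing_name text)
RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN
    RAISE NOTICE 'OPEN requested for %', thing_name;
    -- the real behaviour would go here
END;
$$;

-- Called with plain SQL, so no grammar changes are needed:
SELECT open_thing('something');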
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I need to collect SNMP traps and display them in a web interface. The application already includes:
* Ruby on Rails
* Linux
* delayed_job (for queueing)
* PostgreSQL
* a few cron jobs that do SNMP queries
Now I need to run something like snmptrapd to collect alarms. Would it be possible for snmptrapd to write its traps to a queue that I can process with a cron job? Something like the built-in mqueue of Linux would be great, or even writing them to a PostgreSQL database (I know it supports MySQL, but there is no mention of Postgres anywhere).
Does anyone know how I can redirect the output of snmptrapd into something I can process with a cron job?
I did something similar in Perl but you can do that with Ruby as well.
First you need to tell snmptrapd what the default handler for traps is. In snmptrapd.conf you can define it as follows:
traphandle default /yourpluginpath/yourplugin
Now every time a trap occurs, its data will be sent to yourplugin; it is up to that plugin to handle it. Once you have the packet you can store it in any DB you want; it doesn't matter whether it is MySQL or PostgreSQL.
The only tricky part is the library that parses the data that comes with the trap. In Perl I used SNMP::Trapinfo, but since I have never used Ruby I don't know the equivalent; I am sure someone else can point you to the right library. You can even parse it yourself. Actually, you can also use a basic shell script if you prefer something else to store the data in the DB.
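If you go the PostgreSQL route, a minimal table for the handler to insert into could look like this sketch (all names are invented for the illustration):

-- Hypothetical table for parsed traps; a cron job can poll for processed = false.
CREATE TABLE snmp_traps (
    id          serial PRIMARY KEY,
    received_at timestamptz NOT NULL DEFAULT now(),
    source_host text,
    trap_oid    text,
    varbinds    text,
    processed   boolean NOT NULL DEFAULT false
);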