VPS server is slow - webserver

I have a VPS with Linode. At first it was very fast, but now it takes up to 23 seconds to load a web page, which is appalling. I have tried tuning Apache and the MySQL key buffer, but to no avail. Is there a recommended baseline VPS configuration?
My VPS is a Linode with 16 GB RAM and a 320 GB SSD. According to Google Analytics, I get about 2,073,000 page views per month. What configuration does the server need to be fast, and how can I improve the server response time? (Currently, Google PageSpeed reports a server response time of 1.2 seconds, which is very slow.) This has been a real challenge for me.

Maybe the host machine is overloaded by other VMs. I would benchmark the server with some tools, e.g. test how fast it can copy data to disk.
By the way, for $80 I would buy myself a real (dedicated) server instead. Netcup, for example, offers something like this for $52:
Intel® Xeon® Gold 6140
64 GB DDR4 RAM (ECC)
14 dedicated cores
240 GB SSD / 2.0 TB SAS
If you would rather stay on your current server, it would be good to know which software you are running; sometimes the mistake is somewhere else entirely. My own page once had a load time of 16 seconds because I had forgotten to set two index keys in the database (a quick way to check for that is sketched below).
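For example, running EXPLAIN against a suspect query shows whether MySQL can use an index for it. Here is a minimal JDBC sketch; the connection URL, the table orders, and the column customer_id are made-up placeholders, and MySQL Connector/J is assumed to be on the classpath:

import java.sql.*;

public class ExplainCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust database, user and password.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/shop", "user", "password");
             Statement st = con.createStatement();
             // EXPLAIN reports whether MySQL can use an index for this query.
             ResultSet rs = st.executeQuery(
                 "EXPLAIN SELECT * FROM orders WHERE customer_id = 42")) {
            while (rs.next()) {
                // 'key' is NULL when no index is used, i.e. a full table scan.
                System.out.println("table=" + rs.getString("table")
                        + " key=" + rs.getString("key")
                        + " rows=" + rs.getString("rows"));
            }
        }
    }
}

If the key column comes back NULL, MySQL is scanning the whole table, and adding an index on the filtered column (CREATE INDEX ... ON orders (customer_id)) is usually the fix.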

Related

TYPO3 10.4 sometimes page rendering is very slow

System:
TYPO3: 10.4.22
PHP: 7.4.30
Apache: 2.4.46
OS: Windows Server 2019 (x64)
Hello!
I have an issue where page rendering is sometimes extremely slow. I added a simple page, and "normal" page generation takes about 200 ms, but roughly every 10th load of the page is very slow - about 20 seconds.
I looked into it in the Admin Panel: in the TS tree there is a very large value in the "Script Start" row (e.g. +21075 for a 21,048 ms run vs. +235 for a 205 ms run).
I checked the server's performance (memory, network, CPU) and everything looks good. I also ran a test when no one else was working (it is an intranet installation), so I was the only one hitting the page, and still ran into the same issue. If I reload right after a slow load, I get a fast load immediately.
Any ideas what is happening on these slow runs?
Thank you
Christian
Try '127.0.0.1' instead of 'localhost'
(in typo3conf/LocalConfiguration.php, 'DB' => 'Connections' => 'Default' => 'host')
The problem was an old file storage setting. The file storage was configured as a local filesystem pointing at a directory that was a symlink to a file share on another server, and that server had been switched off. The file storage was no longer in use, but the record still existed in TYPO3 and was set to "online". I have no clue why it slowed down page loading only every 5th or 8th time...

Postgres ODBC Bulk Loading Slow on IBM SPSS

I have the official Postgres ODBC drivers installed and am using IBM SPSS to try to load 4 million records from an MS SQL Server data source. I have the option set to bulk load via ODBC, but the performance is REALLY slow. SQL Server to SQL Server performs well, and Postgres to Postgres performs well, but SQL Server to Postgres takes about 2.5 hours to load the records.
It's almost as if it's not bulk loading at all. Looking at the output, it seems to read the batched record count from the source very quickly (10,000 records), but the insert on the Postgres side takes forever. Watching the record count every few seconds, it jumps from 0 to 10,000 but takes minutes to get there, whereas it should take seconds.
Interestingly, I downloaded a third-party driver from DevArt and the load went from 2.5 hours to 9 minutes. Still not super quick, but much better. Either Postgres ODBC does not support bulk loading (unlikely, since Postgres to Postgres loads so quickly) or there is some configuration option at play in either the ODBC driver config or the SPSS config.
Has anybody experienced this? I've been looking at options for the ODBC driver, but can't really see anything related to bulk loading.
IBM SPSS Statistics uses the IBM SPSS Data Access Pack (SDAP). These are third-party drivers from Progress/DataDirect. I can't speak to performance with other ODBC drivers, but if you are using the IBM SPSS Data Access Pack "IBM SPSS OEM 7.1 PostgreSQL Wire Protocol" ODBC driver, then there are resources for you.
The latest release of the IBM SPSS Data Access Pack (SDAP) is version 8.0. It is available from Passport Advantage (where you would have downloaded your IBM SPSS Statistics software) as "IBM SPSS Data Access Pack V8.0 Multiplatform English (CC0NQEN)".
Once installed, see the Help. On Windows it will be here:
C:\ProgramData\Microsoft\Windows\Start Menu\Programs\IBM SPSS OEM Connect and ConnectXE for ODBC 8.0\
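This doesn't answer the SPSS question directly, but as a rough illustration of why true batching matters so much on the Postgres side, here is a minimal JDBC sketch; the table target_table and the connection details are placeholders. With reWriteBatchedInserts=true, the pgjdbc driver rewrites a batch into multi-row INSERT statements, which is the difference between minutes and seconds for millions of rows:

import java.sql.*;

public class BatchInsertSketch {
    public static void main(String[] args) throws SQLException {
        // reWriteBatchedInserts=true lets pgjdbc collapse a batch into multi-row
        // INSERTs, cutting network round trips dramatically.
        String url = "jdbc:postgresql://localhost:5432/target?reWriteBatchedInserts=true";
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {
            con.setAutoCommit(false); // commit per batch, not per row
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO target_table (id, payload) VALUES (?, ?)")) {
                for (int i = 0; i < 4_000_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();
                    if (i % 10_000 == 0) { // flush every 10,000 rows, like the SPSS batch size
                        ps.executeBatch();
                        con.commit();
                    }
                }
                ps.executeBatch();
                con.commit();
            }
        }
    }
}

If the ODBC path ends up sending one INSERT per row instead of doing something equivalent, 4 million round trips alone would explain the 2.5 hours.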

Postgres 11 issue: "SSL error: DATA_LENGTH_TOO_LONG" error on server

Looking for any thoughts on what is going on here:
Environment:
Java 11 GCP Function that copies data into a table
Postgres 11 Cloud SQL using JDBC driver
(org.postgresql:postgresql:42.2.5)
No changes to any code or configuration in 2 weeks.
I'm connecting to the private SQL IP address, so similar to
jdbc:postgresql://10.90.123.4/...
I'm not requiring an SSL cert
There is Serverless VPC Access set up between the Function and SQL.
This is happening across two different GCP projects and SQL servers.
Prior to this Saturday (2/22), everything was working fine. We are using Postgres' CopyManager to load data into a table: copyManager.copyIn(sql, this.reader);
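For context, that call is pgjdbc's CopyManager API. A self-contained sketch of the same pattern follows; the connection string, the table my_table, and the CSV file are placeholders, not the actual Function code:

import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyInSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://10.90.123.4/mydb", "user", "password")) {
            // CopyManager drives the COPY protocol: the whole stream goes over in
            // one operation instead of row-by-row INSERTs.
            CopyManager copyManager = con.unwrap(PGConnection.class).getCopyAPI();
            try (Reader reader = new FileReader("data.csv")) {
                long rows = copyManager.copyIn(
                        "COPY my_table FROM STDIN WITH (FORMAT csv)", reader);
                System.out.println("Copied " + rows + " rows");
            }
        }
    }
}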
After 2/22, this started failing with "SSL error: DATA_LENGTH_TOO_LONG", as seen in the SQL server log. The failures are 100% consistent and still happen. I can see that the SQL instance was restarted by Google a few hours before the issue started, and I'm wondering whether this is somehow related to whatever maintenance happened - perhaps a SQL version upgrade? I'm not sure what version we had before Saturday, but it's now 11.6.
Interestingly enough, I can avoid the error if the file loaded into the table is under a certain size:
14,052 bytes (16 KB on disk): This fails every time.
14,051 bytes (16 KB on disk): This works every time.
I'd appreciate it if someone from Google could confirm what took place during the maintenance window that might be causing this error. We are currently blocked by this, as we load datasets into the database that are much larger than ~14,000 bytes.
FYI, this was caused by a JDK issue with TLS 1.3 that was addressed in JDK 11.0.5. Google will likely upgrade the JDK used for the Cloud Functions JVMs from 11.0.4 to something newer next week. See https://bugs.openjdk.java.net/browse/JDK-8221253
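Until the runtime JDK is updated, one possible stop-gap (my assumption, not something confirmed in the thread) is to pin the JDBC client to TLS 1.2 via the standard jdk.tls.client.protocols JSSE system property, set before the first connection is opened:

import java.sql.Connection;
import java.sql.DriverManager;

public class Tls12Pin {
    public static void main(String[] args) throws Exception {
        // Assumption: limiting the client to TLSv1.2 sidesteps the TLS 1.3 record-size
        // bug until a fixed JDK ships. Must be set before the first TLS handshake.
        System.setProperty("jdk.tls.client.protocols", "TLSv1.2");

        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://10.90.123.4/mydb", "user", "password")) {
            // ...run CopyManager.copyIn(...) as before...
            System.out.println("Connected to " + con.getMetaData().getDatabaseProductVersion());
        }
    }
}

Whether a GCP Function lets you set this property early enough in its lifecycle is another assumption worth verifying.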

How to replicate a postgresql database from local to web server

I am new to the forum and also new to PostgreSQL.
Normally I use MySQL for my projects, but I've decided to start migrating to PostgreSQL for some good reasons I found in this database.
Expanding on the problem:
I need to analyze data with some mathematical formulas, but to do this I need to get the data from the software via its API.
The software, the API, and PostgreSQL 11.4, which I installed on a desktop, all run on Windows. So far I've managed to fetch the data via the API and import it into PostgreSQL.
My problem is how to transfer this data from the local PostgreSQL (on the PC) to a PostgreSQL instance installed on a web server running Linux.
For example, if I fetch data from the software via the API every five minutes and put it into the local PostgreSQL database, how can I transfer this data (automatically, if possible) to the database on the Linux web server? I rejected a data dump, because importing the whole database every time is not viable.
What I would like is to import only each five-minute batch of data, which gradually adds to the previous data (see the sketch after this post).
I also rejected the idea of a master-slave setup because I don't know the total amount of data: the web server has almost 2 TB of disk, while the local PC has only one disk, which serves only to collect the data and then send it to the web server for analysis.
Could someone please help by giving some good advice regarding how to achieve this objective?
Thanks to all for any answers.
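One way to do what the poster describes - my sketch, not an answer from the thread - is a small sync job that copies only the rows added since the last transfer. It assumes a hypothetical table measurements with a monotonically increasing created_at column and a numeric value column, plus placeholder connection strings:

import java.sql.*;

public class IncrementalSync {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection strings; adjust hosts, databases and credentials.
        try (Connection local = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/acquisition", "user", "pw");
             Connection remote = DriverManager.getConnection(
                 "jdbc:postgresql://web.example.com:5432/analysis", "user", "pw")) {

            // Find the newest row already present on the web server.
            Timestamp last;
            try (Statement st = remote.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT COALESCE(MAX(created_at), TIMESTAMP 'epoch') FROM measurements")) {
                rs.next();
                last = rs.getTimestamp(1);
            }

            // Copy only the rows added since then.
            remote.setAutoCommit(false);
            try (PreparedStatement sel = local.prepareStatement(
                     "SELECT created_at, value FROM measurements WHERE created_at > ?");
                 PreparedStatement ins = remote.prepareStatement(
                     "INSERT INTO measurements (created_at, value) VALUES (?, ?)")) {
                sel.setTimestamp(1, last);
                try (ResultSet rs = sel.executeQuery()) {
                    while (rs.next()) {
                        ins.setTimestamp(1, rs.getTimestamp(1));
                        ins.setBigDecimal(2, rs.getBigDecimal(2));
                        ins.addBatch();
                    }
                }
                ins.executeBatch();
                remote.commit();
            }
        }
    }
}

Run it every five minutes from the Windows Task Scheduler on the collecting PC. Built-in alternatives worth a look are PostgreSQL's logical replication (a publication/subscription on just that table) or a foreign data wrapper, either of which avoids custom code.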

Segmentation fault when starting G-WAN 3.12.26 32-bit on linux fc14

I have a 32-bit FC14 system with a custom-compiled 2.6.35.13 kernel.
When I try to start G-WAN I get a "Segmentation fault". I've made no changes; I just downloaded and unpacked the files from the G-WAN site.
In the log file I have:
"[Wed Dec 26 16:39:04 2012 GMT] Available network interfaces (16)"
which is not true; on the machine I have around 1,000 interfaces, mostly ppp interfaces.
I think the crash has something to do with detecting interfaces/IP addresses, because after the above line the log contains 16 lines with IPs belonging to the FC14 machine and then about 1,000 lines with "0.0.0.0" or "random" IP addresses.
I ran G-WAN 3.3.7 64-bit on an FC16 machine with about the same number of interfaces and had no problem; it still reported a wrong number of interfaces (16), but it did not crash, and the log file contained only the 16 lines with IP addresses belonging to the FC16 machine.
Any ideas?
Thanks
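As an aside (not from the thread): to double-check how many interfaces the OS actually exposes, independently of what G-WAN logs, a quick sketch - written here in Java rather than C - is:

import java.net.NetworkInterface;
import java.util.Collections;

public class CountInterfaces {
    public static void main(String[] args) throws Exception {
        // Lists every interface the OS reports, independent of G-WAN's own detection.
        int count = 0;
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            System.out.println(nif.getName() + " up=" + nif.isUp());
            count++;
        }
        System.out.println("Total interfaces: " + count);
    }
}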
"I have around 1k interfaces mostly ppp interfaces"
Only the first 16 will be listed as this information becomes irrelevant with more interfaces (the intent was to let users find why a listen attempt failed).
The crash is probably due to the long 1K list; many things have changed internally since the allocator was redesigned from scratch. Thank you for reporting the bug.
I also confirm the comment that says the maintenance script crashes. Thanks for that.
Note that bandwidth shaping will be modified to avoid the newer Linux syscalls, so the GLIBC 2.7 requirement will be waived.
...with a custom compiled kernel
As a general rule, check again on a standard system like Debian 6.x before asking a question: there is room enough for trouble with a known system - there's no need to add custom system components.
Thank you all for the tons(!) of emails received these two last days about the new release!
I had a similar "Segmentation fault" error; mine happens any time I go to 9 GB of RAM or more. The exact same machine with 8 GB works fine, and with 10 GB it doesn't even report an error; it just returns to the prompt.
Interesting behavior... Have you tried adjusting the amount of RAM to see what happens?
(running G-WAN 4.1.25 on Debian 6.x)