Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds - CentOS

readline-6.2-11.el7.x86_64.rpm FAILED
http://mirror.verinomi.com/centos/7.9.2009/os/x86_64/Packages/readline-6.2-11.el7.x86_64.rpm: [Errno 12] Timeout on http://mirror.verinomi.com/centos/7.9.2009/os/x86_64/Packages/readline-6.2-11.el7.x86_64.rpm: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
I got the above error while running yum update.

Try:
time wget http://mirror.verinomi.com/centos/7.9.2009/os/x86_64/Packages/readline-6.2-11.el7.x86_64.rpm
time wget http://mirror.verinomi.com/centos/7.9.2009/os/x86_64/Packages/readline-6.2-11.el7.x86_64.rpm
--2022-08-25 11:24:31-- http://mirror.verinomi.com/centos/7.9.2009/os/x86_64/Packages/readline-6.2-11.el7.x86_64.rpm
Resolving mirror.verinomi.com (mirror.verinomi.com)... 193.162.43.250
Connecting to mirror.verinomi.com (mirror.verinomi.com)|193.162.43.250|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 197696 (193K) [application/x-rpm]
Saving to: ‘readline-6.2-11.el7.x86_64.rpm’
readline-6.2-11.el7.x86_64.rpm 100%[=====================================================================================>] 193,06K 555KB/s in 0,3s
2022-08-25 11:24:32 (555 KB/s) - ‘readline-6.2-11.el7.x86_64.rpm’ saved [197696/197696]
real 0m0,758s
user 0m0,010s
sys 0m0,009s
If there is a long delay, check your internet connection. If the download speed really is that low (less than 1000 bytes/sec over 30 seconds, as the error says), yum will abort the download from this mirror and may try another.
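If the mirror is slow but otherwise fine, you can also relax yum's speed check instead of letting it fail over. A minimal sketch, assuming the stock /etc/yum.conf on CentOS 7 (the defaults, minrate=1000 and timeout=30, are exactly what the error message reflects):

# /etc/yum.conf -- add or adjust in the [main] section
# bytes/sec below which a transfer counts as stalled (default 1000)
minrate=1
# seconds yum tolerates such a stall before aborting (default 30)
timeout=300

Alternatively, running yum clean all and retrying often picks a different, faster mirror.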

Related

How to fill data up to a size on multiple disks?

I am creating 4 mount-point disks in Windows. I need to copy files up to a threshold value (say 50 GB).
I tried vdbench. It works fine, but it throws an exception at the end.
compratio=4
dedupratio=1
dedupunit=256k
* Host Definition section
hd=default,user=Administrator,shell=vdbench,jvms=1
hd=localhost,system=localhost
********************************************************************************
* Storage Definition section
fsd=fsd1,anchor=C:\UnMapTest-Volume1\disk1\,depth=1,width=1,files=1,size=5g
fsd=fsd2,anchor=C:\UnMapTest-Volume2\disk2\,depth=1,width=1,files=1,size=5g
fwd=fwd1,fsd=fsd*,operation=write,xfersize=1m,fileio=sequential,fileselect=random,threads=10
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=1h,interval=1
Below is the exception from vdbench. Because of this, my calling script fails.
05:29:14.287 Message from slave localhost-0:
05:29:14.289 file=C:\UnMapTest-Volume1\disk1\\vdb.1_1.dir\vdb_f0001.file,busy=true
05:29:14.290 Thread: FwgThread write C:\UnMapTest-Volume1\disk1\ rd=rd1 For loops: None
05:29:14.291
05:29:14.292 last_ok_request: Thu Dec 28 05:28:57 PST 2017
05:29:14.292 Duration: 16.92 seconds
05:29:14.293 consecutive_blocks: 10001
05:29:14.294 last_block: FILE_BUSY File busy
05:29:14.294 operation: write
05:29:14.295
05:29:14.296 Do you maybe have more threads running than that you have
05:29:14.296 files and therefore some threads ultimately give up after 10000 tries?
05:29:14.300 *
05:29:14.301 ******************************************************
05:29:14.302 * Slave localhost-0 aborting: Too many thread blocks *
05:29:14.302 ******************************************************
05:29:14.303 *
05:29:21.235
05:29:21.235 Slave localhost-0 prematurely terminated.
05:29:21.235
05:29:21.235 Slave aborted. Abort message received:
05:29:21.235 Too many thread blocks
05:29:21.235
05:29:21.235 Look at file localhost-0.stdout.html for more information.
05:29:21.735
05:29:21.735 Slave localhost-0 prematurely terminated.
05:29:21.735
java.lang.RuntimeException: Slave localhost-0 prematurely terminated.
at Vdb.common.failure(common.java:335)
at Vdb.SlaveStarter.startSlave(SlaveStarter.java:198)
at Vdb.SlaveStarter.run(SlaveStarter.java:47)
I am using PowerShell on a Windows machine. If some other tool such as Diskspd has a way to fill data up to a threshold, please let me know.
I found the answer myself.
I did this using Diskspd.exe as below.
The following command fills 50 GB of data in the given disk folder:
.\diskspd.exe -c50G -b4K -t2 C:\UnMapTest-Volume1\disk1\testfile1.dat
It is much simpler than Vdbench for my requirement.
Caution: it does not write real data, so the consumed size on the array side does not reflect the file size.
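If you need to fill all four mount points, a small PowerShell loop around Diskspd works. A sketch, assuming the same UnMapTest-VolumeN\diskN layout as above (paths and sizes are placeholders to adjust):

# Creates and fills a 50 GB test file on each of the four mount-point disks;
# -b4K and -t2 match the single-disk command above.
foreach ($i in 1..4) {
    .\diskspd.exe -c50G -b4K -t2 "C:\UnMapTest-Volume$i\disk$i\testfile$i.dat"
}

As an aside, vdbench's abort message already hints at the original problem: the workload ran threads=10 against file system definitions with files=1, so ten threads fought over one busy file. Raising files to at least the thread count would likely have avoided the FILE_BUSY aborts.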

How to make more than 1000 requests per second through Gatling?

I am using the following setup, which should generate more than 3000 rps, since the requirement is to test with up to 4k rps.
The setup that I am using is:
setUp(scn.inject(constantUsersPerSec(8) during (10 minutes)).protocols(httpConf)).throttle(
  reachRps(1000) in (20 seconds),
  holdFor(5 minutes),
  jumpToRps(2000),
  holdFor(5 minutes)
)
The error that I am getting is the following:
17:18:15.603 [gatling-http-thread-1-1] WARN i.gatling.http.ahc.ResponseProcessor - Request 'Home' failed: j.n.ConnectException: handshake timed out
But Gatling seems to fail above 1000 rps. Is there a way to do that?
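One thing worth checking: in Gatling, throttle only caps the request rate; it never generates load, so the injection profile must produce enough traffic on its own. With constantUsersPerSec(8), and assuming each virtual user sends a single request, the setup tops out near 8 rps. A sketch of an injection-driven profile (the 4000 users/sec figure is an assumption matching the stated 4k rps target; scn and httpConf are from the original setup):

setUp(
  scn.inject(constantUsersPerSec(4000) during (10 minutes)).protocols(httpConf)
).throttle(
  reachRps(1000) in (20 seconds),
  holdFor(5 minutes),
  jumpToRps(2000),
  holdFor(5 minutes)
)

If the handshake timed out errors persist at higher rates, the bottleneck is likely the target server (or its TLS termination) rather than Gatling itself.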

Where does cPanel store cron job result files?

When a cron job runs, I get an email that says
HTTP request sent, awaiting response... 200 OK
Length: 19 [text/html]
Saving to: “filefeed.16”
0K 100% 4.93M=0s
2017-03-23 10:10:04 (4.93 MB/s) - “filefeed.16” saved [19/19]
So it's my understanding that Saving to: “filefeed.16” means it is storing this file somewhere on my server. Where is it?
After looking for a few hours, I found it was quite simple: the file is stored in the home directory of the user who runs the cron job. For example, if the job runs as user_03, it is saved in /home/user_03/.
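If you would rather not hunt for these files, you can tell wget exactly where to put them in the crontab entry itself. A sketch (the schedule, path, and feed URL are placeholders):

# Without -O or -P, wget saves into the cron user's home directory
# (e.g. /home/user_03/); -O pins the output to an explicit path instead.
*/15 * * * * wget -q -O /home/user_03/feeds/filefeed.latest http://example.com/feed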

Kitura slow, or low requests per second?

I've downloaded Kitura 0.20 and created a new project for a benchmark, built with swift build -c release:
import Kitura

let router = Router()
router.get("/") { request, response, next in
    response.send("Hello, World!")
    next()
}
Kitura.addHTTPServer(onPort: 8090, with: router)
Kitura.run()
and the score appears to be low compared to Zewo and Vapor, which can reportedly hit 400k+ requests/s:
MacBook-Pro:hello2 yanli$ wrk -t1 -c100 -d30 --latency http://localhost:8090
Running 30s test @ http://localhost:8090
1 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 415.36us 137.54us 3.09ms 91.11%
Req/Sec 5.80k 2.47k 7.19k 85.71%
Latency Distribution
50% 391.00us
75% 443.00us
90% 513.00us
99% 0.93ms
16229 requests in 30.01s, 1.67MB read
Socket errors: connect 0, read 342, write 55, timeout 0
Requests/sec: 540.84
Transfer/sec: 57.04KB
I suspect you are running out of ephemeral ports. Your issue is probably the same as this one: 'ab' program freezes after lots of requests, why?
Kitura currently does not support HTTP keepalive, and so every request requires a new connection. One symptom of this is that regardless of how many seconds you attempt to drive load, you'll see a similar number of completed requests (16229 in your example).
On OS X, there are 16,384 ephemeral ports available by default, and these will be rapidly exhausted unless you tune the network settings.
[1] http://danielmendel.github.io/blog/2013/04/07/benchmarkers-beware-the-ephemeral-port-limit/
[2] https://rolande.wordpress.com/2010/12/30/performance-tuning-the-network-stack-on-mac-osx-10-6/
My approach has been to reduce the Maximum Segment Lifetime tunable (which defaults to 15000, or 15 seconds) and increase the range of available ports temporarily while benchmarking, for example:
sudo sysctl -w net.inet.tcp.msl=1000
sudo sysctl -w net.inet.ip.portrange.first=32768
<run benchmark>
sudo sysctl -w net.inet.tcp.msl=15000
sudo sysctl -w net.inet.ip.portrange.first=49152
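To confirm that port exhaustion is what is happening, it helps to watch connection states while wrk runs. A quick check (macOS commands; the tunable names match those used above):

# Thousands of TIME_WAIT entries mean ephemeral ports are being consumed
# faster than the OS reclaims them.
netstat -an | grep -c TIME_WAIT
# Show the current values of the tunables before and after changing them
sysctl net.inet.tcp.msl net.inet.ip.portrange.first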

SQL timeout on Azure website suddenly started when returning a large number (1500) of rows

An Azure Website with EF6 just started getting timeouts on pages where I retrieve more than about 1000 rows (unsure about the exact limit; it works with 400 or fewer and fails with 1500 or more).
[Win32Exception (0x80004005): The wait operation timed out]
[SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. This failure occurred while attempting to connect to the routing destination. The duration spent while attempting to connect to the original server was - [Pre-Login] initialization=1; handshake=21; [Login] initialization=0; authentication=0; [Post-Login] complete=1; ]
The app has been running smoothly for several months; I only noticed this today. Any ideas?
(In case the error is still present: page with the error: http://fartslek.no/fartslek/15 ; page without the error: http://fartslek.no/fartslek/3 )