I am using Flutter to build a project. Running the pub get command has been stuck for more than an hour. Is it possible to output a detailed message so I can find the problem? It gets stuck like this in Android Studio:
/Users/dolphin/apps/flutter/bin/flutter --no-color pub get
Running "flutter pub get" in Cruise...
When I execute the command in a terminal it works fine:
~/source/third-party/Cruise on master! ⌚ 10:55:37
$ ~/apps/flutter/bin/flutter pub get
Running "flutter pub get" in Cruise... 0.6s
These are my Android Studio proxy settings:
What should I do to make it work in Android Studio?
Running this command finds everything connected on the proxy port on localhost:
$ lsof -i:7890
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
Google 535 dolphin 20u IPv4 0x46a2e89295f84f31 0t0 TCP localhost:54480->localhost:7890 (ESTABLISHED)
Google 535 dolphin 26u IPv4 0x46a2e89278d11dd1 0t0 TCP localhost:54554->localhost:7890 (ESTABLISHED)
Google 535 dolphin 28u IPv4 0x46a2e8927887b191 0t0 TCP localhost:54610->localhost:7890 (ESTABLISHED)
Google 535 dolphin 30u IPv4 0x46a2e89298b92551 0t0 TCP localhost:54558->localhost:7890 (ESTABLISHED)
Google 535 dolphin 32u IPv4 0x46a2e89298ebe7b1 0t0 TCP localhost:54310->localhost:7890 (CLOSE_WAIT)
Google 535 dolphin 35u IPv4 0x46a2e892776e7551 0t0 TCP localhost:54316->localhost:7890 (CLOSE_WAIT)
Google 535 dolphin 38u IPv4 0x46a2e892776e8911 0t0 TCP localhost:54330->localhost:7890 (ESTABLISHED)
Google 535 dolphin 48u IPv4 0x46a2e89280aa4551 0t0 TCP localhost:54322->localhost:7890 (CLOSE_WAIT)
Google 535 dolphin 54u IPv4 0x46a2e89278d127b1 0t0 TCP localhost:54323->localhost:7890 (CLOSE_WAIT)
Google 535 dolphin 56u IPv4 0x46a2e89298febf31 0t0 TCP localhost:54331->localhost:7890 (ESTABLISHED)
So tweak the Android Studio proxy settings to match that port:
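To get the detailed output the question asks for, flutter accepts a global verbose flag. A minimal sketch (the flutter path is the one from the question; comparing proxy variables is an assumption that the IDE and the shell disagree about the proxy):

```shell
# Re-run pub get with verbose (-v) logging to see exactly where it hangs;
# FLUTTER points at the install path shown in the question
FLUTTER=~/apps/flutter/bin/flutter
if [ -x "$FLUTTER" ]; then
  "$FLUTTER" pub get -v
fi

# Android Studio does not automatically inherit the shell's proxy variables;
# compare what the working terminal exports with the IDE's HTTP proxy settings
env | grep -i proxy || echo "no proxy variables set in this shell"
```

If the verbose log stalls on a package download, the IDE's proxy settings are the first thing to compare against the terminal's environment.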
My Rancher Desktop was working just fine until today, when I switched the container runtime from containerd to dockerd. When I wanted to change it back to containerd, it said:
Error Starting Kubernetes
Error: unable to verify the first certificate
Some recent logfile lines:
client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUV1eXhYdFYvTDZOQmZsZVV0Mnp5ekhNUmlzK2xXRzUxUzBlWklKMmZ5MHJvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFNGdQODBWNllIVzBMSW13Q3lBT2RWT1FzeGNhcnlsWU8zMm1YUFNvQ2Z2aTBvL29UcklMSApCV2NZdUt3VnVuK1liS3hEb0VackdvbTJ2bFJTWkZUZTZ3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
2022-09-02T13:03:15.834Z: Error starting lima: Error: unable to verify the first certificate
at TLSSocket.onConnectSecure (node:_tls_wrap:1530:34)
at TLSSocket.emit (node:events:390:28)
at TLSSocket._finishInit (node:_tls_wrap:944:8)
at TLSWrap.ssl.onhandshakedone (node:_tls_wrap:725:12) {
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
}
I tried reinstalling, a factory reset, etc., but no luck. I am using version 1.24.4.
TL;DR: Try turning off Docker (or whatever else is binding to port 6443), reset Kubernetes in Rancher Desktop, then try again.
Check whether anything else is listening on port 6443, which is needed by kubernetes:rancher-desktop.
In my case, lsof -i :6443 gave me...
~ lsof -i :6443
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 63385 ~~~~~~~~~~~~ 150u IPv4 0x44822db677e8e087 0t0 TCP localhost:sun-sr-https (LISTEN)
ssh 82481 ~~~~~~~~~~~~ 27u IPv4 0x44822db677ebb1e7 0t0 TCP *:sun-sr-https (LISTEN)
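To act on that, a small helper can list whoever holds the port. This is a sketch assuming `lsof` is available; `port_owner` is a name made up here, and 6443 is the Kubernetes API port rancher-desktop needs:

```shell
# port_owner: print the PIDs of processes using a given TCP port (empty if free)
port_owner() {
  lsof -ti :"$1" 2>/dev/null || true
}

pids=$(port_owner 6443)
if [ -n "$pids" ]; then
  # stop Docker Desktop / the ssh tunnel first, then reset Kubernetes
  echo "port 6443 is held by PID(s): $pids"
else
  echo "port 6443 is free"
fi
```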
I'm trying to configure Sendmail to listen on port 110 (POP3) on an EC2 server. I need it for a newsletter app so that it can check for bounces. When I try to telnet in on port 110, I get a connection error.
root:/# telnet sub.domain.com 110
Trying 5?.??.?.?0...
telnet: Unable to connect to remote host: Connection refused
root:/# telnet sub.domain.com 25
Trying 5?.??.?.?0...
Connected to sub.domain.com.
Escape character is '^]'.
220 ip-172-31-54-114.ec2.internal ESMTP Sendmail 8.14.4/8.14.4/Debian-4.1ubuntu1; Wed, 30 Nov 2016 10:24:50 GMT; (No UCE/UBE) logging access from: [5?.??.?.?0](FORGED)-ec2-5?-??-?-?0.compute-1.amazonaws.com [5?.??.?.?0] (may be forged)
^]
telnet> quit
Connection closed.
When I run lsof on port 25 I can see that it's listening, but nothing shows up on 110.
root:/# lsof -n -i :25
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sendmail- 4279 root 4u IPv4 2349285 0t0 TCP *:smtp (LISTEN)
root:/# lsof -n -i :110
root:/#
Do I need to edit the sendmail.mc file? Previously I commented out the lines below so that SMTP would listen on all IPs.
dnl DAEMON_OPTIONS(`Family=inet, Name=MTA-v4, Port=smtp, Addr=127.0.0.1')dnl
dnl DAEMON_OPTIONS(`Family=inet, Name=MSP-v4, Port=submission, M=Ea, Addr=127.0.0.1')dnl
I've searched sendmail.cf and sendmail.mc for any references to POP3/port 110 configuration but can't see anything.
Sendmail is an MTA and acts as an SMTP server. You need a separate program/server to provide the POP3 protocol, e.g. the Dovecot IMAP/POP server.
Sendmail-FAQ-4.19 : How do I configure sendmail for POP/IMAP/...?
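As a concrete sketch (the package and service names assume Debian/Ubuntu, matching the Debian sendmail build in the question; these commands are illustrative additions, not from the original answer):

```shell
# Sendmail only speaks SMTP, so nothing will ever answer on 110 until a
# POP3 daemon runs. On Debian/Ubuntu that could be:
#   sudo apt-get install dovecot-pop3d
#   sudo service dovecot start

# /etc/services maps port 110 to the "pop3" name that lsof displays
grep -E '^pop3[[:space:]]+110/tcp' /etc/services || true

# After Dovecot is running, this should show a LISTEN line instead of nothing
lsof -n -i :110 2>/dev/null || echo "nothing listening on port 110 yet"
```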
I installed MongoDB on a remote server via Vagrant. I can access Postgres from my local system, but Mongo is not reachable. When I log in via SSH and check Mongo's status, it says mongod is running, and I can make queries too. When I try to connect from my local system using this command:
mongo 192.168.192.168:27017
I get an error
MongoDB shell version: 2.6.5
connecting to: 192.168.192.168:27017/test
2014-12-27T22:19:19.417+0100 warning: Failed to connect to 192.168.192.168:27017, reason: errno:111 Connection refused
2014-12-27T22:19:19.418+0100 Error: couldn't connect to server 192.168.192.168:27017 (192.168.192.168), connection attempt failed at src/mongo/shell/mongo.js:148
exception: connect failed
It looks like Mongo is not listening for connections from other IPs? I commented out bind_ip in the Mongo settings, but it doesn't help.
Services on 192.168.192.168 according to nmap:
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
5432/tcp open postgresql
9000/tcp open cslistener
It looks like mongod is listening:
sudo lsof -iTCP -sTCP:LISTEN | grep mongo
mongod 1988 mongodb 6u IPv4 5407 0t0 TCP *:27017 (LISTEN)
mongod 1988 mongodb 8u IPv4 5411 0t0 TCP *:28017 (LISTEN)
Firewall rules
sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Update
My mongo config
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
#bind_ip = 127.0.0.1
#port = 27017
# Enable journaling, http://www.mongodb.org/display/DOCS/Journaling
journal=true
# Enables periodic logging of CPU utilization and I/O wait
#cpu = true
# Turn on/off security. Off is currently the default
#noauth = true
#auth = true
The solution is to change the Mongo configuration:
bind_ip = 0.0.0.0
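A sketch of applying that change (the config path /etc/mongodb.conf and the service name are assumptions based on a typical Debian/Ubuntu install; the edit is demonstrated on a temp copy so it is safe to run anywhere):

```shell
# Demonstrate the edit on a copy of the config shown in the question
cfg=$(mktemp)
printf '#bind_ip = 127.0.0.1\n#port = 27017\n' > "$cfg"

# Uncomment bind_ip and bind to all interfaces. Note: 0.0.0.0 exposes mongod
# on every interface -- pair it with auth or firewall rules before doing this
# on a machine reachable from untrusted networks.
sed -i 's/^#bind_ip = 127.0.0.1/bind_ip = 0.0.0.0/' "$cfg"
cat "$cfg"
rm -f "$cfg"

# On the real server: make the same edit in /etc/mongodb.conf, then
#   sudo service mongodb restart
# and re-check with: sudo lsof -iTCP -sTCP:LISTEN | grep mongod
```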
I am unable to bind to my regular port 9000 with the typical error message:
[error] org.jboss.netty.channel.ChannelException: Failed to bind to: /0.0.0.0:9000
However, I do not have anything currently running on that port.
Checking what is listening on port 9000:
sudo lsof -i -P | grep "9000"
gives me:
java 2642 ow 137u IPv6 0xe9a3870d7acf02fd 0t0 TCP *:9000 (LISTEN)
java 2642 ow 142u IPv6 0xe9a3870d7e430f1d 0t0 TCP localhost:9000->localhost:62403 (CLOSE_WAIT)
java 2642 ow 156u IPv6 0xe9a3870d856676dd 0t0 TCP localhost:9000->localhost:60860 (CLOSE_WAIT)
Any idea how to close this?
Edit
It turns out Google Chrome has a connection to my port 9000, which is kind of weird:
Google 51558 ow 125u IPv4 0xe9a3870d8683581d 0t0 TCP localhost:61238->localhost:9000 (ESTABLISHED)
When I killed it, Chrome crashed.
Guess I'll have to start using a different port!
Play isn't running anymore?
Otherwise, for reference, one can find the Play process with ps auxwww | grep play and kill it with kill <pid> or kill -9 <pid>.
I had the same issue with the Play framework using scalaVersion := "2.11.7".
java 19068 ecamur 342u IPv6 40371923 0t0 TCP *:9000 (LISTEN)
I killed it using the command below:
kill -9 19068
Nothing appeared to crash; I then ran the application without any issue.
I often have the same problem when my Play application hangs without releasing the socket.
The easiest solution I found is to restart the network interface:
ifconfig en0 down
ifconfig en0 up
(Assuming en0 is your main interface)
I am running a sample Hadoop job on my CentOS 6.2 (64-bit) machine for debugging:
hadoop jar hadoop-examples-0.20.2-cdh3u3.jar randomtextwriter o
and it appears that after the job is completed, the connections to datanodes still remain.
java 8979 username 51u IPv6 326596025 0t0 TCP localhost:50010->localhost:56126 (ESTABLISHED)
java 8979 username 54u IPv6 326621990 0t0 TCP localhost:50010->localhost:56394 (ESTABLISHED)
java 8979 username 59u IPv6 326578719 0t0 TCP *:50010 (LISTEN)
java 8979 username 75u IPv6 326596390 0t0 TCP localhost:50010->localhost:56131 (ESTABLISHED)
java 8979 username 84u IPv6 326621621 0t0 TCP localhost:50010->localhost:56388 (ESTABLISHED)
java 8979 username 85u IPv6 326622171 0t0 TCP localhost:50010->localhost:56395 (ESTABLISHED)
java 9276 username 77u IPv6 326621714 0t0 TCP localhost:56388->localhost:50010 (ESTABLISHED)
java 9276 username 78u IPv6 326596118 0t0 TCP localhost:56126->localhost:50010 (ESTABLISHED)
java 9408 username 75u IPv6 326596482 0t0 TCP localhost:56131->localhost:50010 (ESTABLISHED)
java 9408 username 76u IPv6 326622170 0t0 TCP localhost:56394->localhost:50010 (ESTABLISHED)
java 9408 username 77u IPv6 326622930 0t0 TCP localhost:56395->localhost:50010 (ESTABLISHED)
Eventually, after some time, I get this error in the datanode logs:
2012-04-12 15:56:29,151 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-591618896-176.9.25.36-50010-1333654003291, infoPort=50075, ipcPort=50020):DataXceiver
java.io.FileNotFoundException: /tmp/hadoop-serendio/dfs/data/current/subdir4/blk_-4401902756916730461_31251.meta (Too many open files)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:137)
at org.apache.hadoop.hdfs.server.datanode.FSDataset.getMetaDataInputStream(FSDataset.java:996)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:125)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:258)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:163)
This leads to issues in the production system, namely the datanode running out of xcievers.
This behaviour does not seem to happen on my Ubuntu development box. We are using Cloudera hadoop-0.20.2-cdh3u3.
Any pointers to resolve this issue?
Add this to hdfs-site.xml if you have not already specified it:
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
The default is 256, I think.
This is the rule-of-thumb formula for how many xcievers you need to avoid such errors:
# of xcievers = ((# of storefiles + # of regions * 4 + # of regionservers * 2) / # of datanodes) + 20% reserve
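The rule of thumb above can be sketched in shell arithmetic; all four input numbers below are made-up examples, not values from the question:

```shell
# Example inputs (assumptions for illustration only)
storefiles=3000
regions=100
regionservers=5
datanodes=4

# ((storefiles + regions*4 + regionservers*2) / datanodes) + ~20% reserve
base=$(( (storefiles + regions * 4 + regionservers * 2) / datanodes ))
xcievers=$(( base + base / 5 ))
echo "suggested dfs.datanode.max.xcievers: $xcievers"
```

Round the result up to a comfortable power-of-two-ish value (as the answer's 4096 does) rather than setting it exactly.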