Flow server and VS Code Flow extension breaking after updating to Catalina and Mojave - visual-studio-code

Has anyone encountered this issue?
Connection to server got closed. Server will not be restarted.
I get this when I check out an old commit in locus-dashboard (which has an old version of Flow) and then switch back to the current one. After that it starts throwing the error Connection to server got closed. Server will not be restarted.
These are the Flow logs:
[Info - 12:03:15 PM - locus-dashboard-v2/.flowconfig] Found flow using option `useNPMPackagedFlow`
[Info - 12:03:16 PM - locus-dashboard-v2/.flowconfig] Using flow '/Users/shubanusharma/workspace/locus-dashboard-v2/node_modules/flow-bin/flow-osx-v0.111.3/flow' (v0.111.3)
Unhandled exception: (Sys_error "/tmp/daemon_param688afa.bin: Permission denied")
Raised by primitive operation at file "stdlib.ml", line 316, characters 29-55
Called from file "filename.ml", line 259, characters 7-73
Re-raised at file "filename.ml", line 261, characters 30-37
Called from file "hack/utils/sys/daemon.ml", line 267, characters 2-53
Called from file "hack/utils/jsonrpc/jsonrpc.ml", line 215, characters 4-357
Called from file "src/lsp/flowLsp.ml", line 1555, characters 15-36
Called from file "src/commands/commandUtils.ml", line 13, characters 4-32
[Error - 12:03:16 PM] Connection to server got closed. Server will not be restarted.
I've tried cleaning up node_modules, clearing the yarn and npm caches, and reinstalling the extension.
This seems to be a Catalina permissions issue: running Flow with sudo works, but the VS Code extension still has the same problem.

Not a permanent solution, but changing the /tmp folder permissions to 777 fixes this:
Go to your repo dir and stop the server: yarn flow stop
Change the /tmp dir permissions: sudo chmod 777 /tmp
Start the Flow server: yarn flow start
Restart the VS Code Flow client: Cmd+Shift+P (Ctrl+Shift+P on Windows), type "restart client", press Enter
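For reference, the whole workaround in one terminal session might look like this (a sketch; it assumes flow-bin is in the project's devDependencies, as the log above suggests, and the repo path is a placeholder):
cd /path/to/locus-dashboard-v2    # your repo directory
yarn flow stop                    # stop any running Flow server
sudo chmod 777 /tmp               # note: /tmp is conventionally mode 1777 (world-writable plus sticky bit)
yarn flow start                   # start the Flow server again
# Then in VS Code: Cmd+Shift+P (Ctrl+Shift+P on Windows) and run the Flow "restart client" command.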
[EDIT 15-Jun-2021]
In most cases just fixing the permissions is enough; there is no need to stop the server.
Also, this does not happen only on Catalina. I've found the issue on both:
macOS Catalina
macOS Mojave

Related

How to debug the problem of not being able to translate an OID with a new MIB file (UPS-MIB)?

On CentOS, I ran into the following error:
sudo snmptrap -v 2c -c read localhost '' UPS-MIB::upsTraps
MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs
Cannot find module (UPS-MIB): At line 0 in (none)
UPS-MIB::upsTraps: Unknown Object Identifier
The above error happened after I:
copied UPS-MIB.txt to /usr/share/snmp/mibs
and started snmptrapd:
snmptrapd -f -Lo -Dread-config -m ALL
The Net-SNMP version is 5.2.x.
The same procedure works fine on Ubuntu 18.04 / Net-SNMP 5.3.7.
How can I debug and fix the problem?
Besides the Net-SNMP version difference: on Ubuntu I found an instruction to install mib-download-tool, run it after installing Net-SNMP, and comment out the lines beginning with min: in snmp.conf in order to fix errors about missing MIBs.
However, on CentOS I found no such instruction and there is no error message about missing MIBs, so I have not done this.
The MIB file was downloaded from https://tools.ietf.org/rfc/rfc1628.txt
and renamed to UPS-MIB.txt. (It seems to me that the name of the MIB file does not matter, as long as it's unique? I tried different names, upsMIB.txt and rfc1628.txt, but it did not help.)
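For debugging, the standard Net-SNMP snmptranslate tool can show whether the MIB parses at all (a sketch; -Dparse-mibs prints the MIB parser's debug output, and -m ALL loads every MIB on the search path):
# Try to resolve the bare symbol; -IR enables "random access" name lookup.
snmptranslate -m ALL -IR upsTraps
# If that fails, dump the MIB parser's debugging output to see why the file is rejected.
snmptranslate -Dparse-mibs -m ALL -IR upsTraps 2>&1 | less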
I solved the problem as follows:
manually copied /usr/share/snmp/mibs/ietf/UPS-MIB from an Ubuntu machine with Net-SNMP 5.7.3 installed to /usr/share/snmp/mibs/UPS-MIB on the CentOS machine,
then restarted snmpd with the command:
service snmpd restart
After that the OIDs of UPS-MIB become visible and accessible.
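A quick check after installing the file (a sketch using the same snmptranslate tool):
# Should now print the numeric OID instead of "Unknown Object Identifier".
snmptranslate -m ALL -On UPS-MIB::upsTraps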
Maybe the file downloaded from https://tools.ietf.org/rfc/rfc1628.txt is not suitable as-is? The RFC text wraps the MIB module in page headers and footers, which the MIB parser may reject, whereas the Ubuntu package ships the extracted module.

OCI runtime exec failed: exec failed: container_linux.go:348 : starting container process caused "no such file or directory": unknown

I am trying to bring up my Fabric network.
I got my orderers organization started.
I got my peer organizations started.
I got my cli started.
After that, the request fails with:
OCI runtime exec failed:
exec failed: container_linux.go:348 : starting container process caused "no such file or directory": unknown
The error means that either working_dir is undefined, or the directory it points to does not exist.
Check the cli section in your docker-compose file for this setting.
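A quick way to verify both conditions (a sketch; it assumes the container is named cli, as in the common Fabric compose files):
# Print the working_dir the cli container was created with,
# then check that the directory actually exists inside the container.
WORKDIR=$(docker inspect --format '{{.Config.WorkingDir}}' cli)
echo "$WORKDIR"
docker exec cli ls -d "$WORKDIR"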
If you are working on Windows OS, a possible cause is the file encoding (should be in Unix format).
You could open this page:
https://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
and search for "No such file or directory". There is some related troubleshooting there.
Just a short description:
Ensure that the file in question is encoded in the Unix format. This was most likely caused by not setting core.autocrlf to false in your Git configuration. There are several ways of fixing this. If you have access to the vim editor for instance, open the file:
vim ./path/to/the/related-file
Then change its format by executing the following vim command:
:set ff=unix
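If vim is not handy, the encoding can also be checked and fixed from the shell (a sketch; dos2unix may need to be installed separately, and sed -i behaves differently on macOS):
# "with CRLF line terminators" in the output indicates a Windows-style file.
file ./path/to/the/related-file
# Strip the carriage returns in place (dos2unix works too, if installed).
sed -i 's/\r$//' ./path/to/the/related-file
# Stop Git from converting line endings on checkout in this repository.
git config core.autocrlf false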

FTPD Server Issue

So I am trying to use my XAMPP server and for the life of me can't understand why ProFTPD will not start. It only became a cause for concern when I saw the word "bogon" in the application log. Can anyone explain what the application log means and maybe how I go about troubleshooting the problem?
Stopping all servers...
Stopping Apache Web Server...
/Applications/XAMPP/xamppfiles/apache2/scripts/ctl.sh : httpd stopped
Stopping MySQL Database...
/Applications/XAMPP/xamppfiles/mysql/scripts/ctl.sh : mysql stopped
Starting ProFTPD...
Exit code: 8
Stdout:
Checking syntax of configuration file
proftpd config test fails, aborting
Stderr:
bogon proftpd[3948]: warning: unable to determine IP address of 'bogon'
bogon proftpd[3948]: error: no valid servers configured
bogon proftpd[3948]: Fatal: error processing configuration file '/Applications/XAMPP/xamppfiles/etc/proftpd.conf'
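The stderr lines suggest ProFTPD cannot resolve the machine's hostname (bogon) to an IP address, so no server ends up configured. A hedged troubleshooting sketch (the loopback mapping and the proftpd binary path are assumptions based on a typical XAMPP layout):
# Check what hostname the machine reports and whether it resolves locally.
hostname
ping -c 1 "$(hostname)"
# If it does not resolve, a common workaround is mapping it to loopback in /etc/hosts
# (replace "bogon" with whatever the hostname command printed).
echo "127.0.0.1   bogon" | sudo tee -a /etc/hosts
# Re-run the ProFTPD configuration test (binary path assumed from the XAMPP layout).
sudo /Applications/XAMPP/xamppfiles/sbin/proftpd -t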

GConf Error: "Failed to contact configuration server ... 1: Not running within active session"

I have installed Gnumeric on CentOS 6.5 and use the ssconvert command to convert .xls/.xlsx files to CSV, but I get the following error:
$ ssconvert
GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you
have stale NFS locks due to a system crash. See
http://projects.gnome.org/gconf/ for information. (Details - 1: Not
running within active session)
GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you
have stale NFS locks due to a system crash. See
http://projects.gnome.org/gconf/ for information. (Details - 1: Not
running within active session)
** (ssconvert:5725): WARNING **: Configured default font 'Sans 10.000000' not available, trying fallback...
** (ssconvert:5725): WARNING **: Fallback font 'Sans 10.000000' not available, trying 'fixed'...
** (ssconvert:5725): WARNING **: Even 'fixed 10' failed ?? We're going to exit now,there is something wrong with your font
configuration
Can you help me?
As this is an old question, this answer is for anyone who gets the same error (like me today on CentOS 6.8): you are missing the Sans font:
yum install gnu-free-sans-fonts
For Docker Alpine images this might help:
RUN apk add --update \
msttcorefonts-installer fontconfig \
ttf-opensans
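Once a usable font is installed, a minimal conversion looks like this (file names here are placeholders):
# ssconvert infers the output format from the file extension.
ssconvert report.xlsx report.csv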

Unable to run Mongo shell (Mac)

I'm new to web development and I wanted to get started with some RoR (using Locomotive CMS).
One of the things Locomotive requires is MongoDB. I installed it using Homebrew by following this link: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/
It installs fine, but then I'm not able to run it!
When I type mongo in the terminal I get the following output:
"MongoDB shell version: 2.4.3
connecting to: test
Mon May 6 11:12:28.927
JavaScript execution failed:
Error: couldn't connect to server
127.0.0.1:27017 at src/mongo/shell/mongo.js:L112
exception: connect failed"
BACKGROUND TO HELP DEBUGGING (in Terminal):
1. When I type mongod I get the following:
"all output going to: /usr/local/var/log/mongodb/mongo.log"
Ownership of mongo.log :
-rw-r--r-- 1 username admin 22133 May 6 11:13 mongo.log
2. When I run mongod --fork I get the following:
about to fork child process, waiting until server is ready for connections.
forked process: 77566
all output going to: /usr/local/var/log/mongodb/mongo.log
ERROR: child process failed, exited with error number 100
3. Typing mongod --help gives the following warning:
* WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
4. I have a folder called data (which acts as a MongoDB database; is this where it should be?) in root (PATH: /data). Ownership of the data folder:
"drwxr-xr-x 3 username wheel 102 Apr 23 21:38 data"
5. Checking if the port is free: lsof -i :27017 gives no output. I've also tried to check for a running mongo process using Activity Monitor and found zilch!
6. I've also tried mongo --repair. Didn't help!
I've been stuck on this for a while; I've looked at most responses on Stack Overflow and searched around for a solution, but nothing has helped so far!
UPDATE:
When I tried to start the mongo shell, I was getting the following log message from mongo.log:
5/6/13 1:33:27.616 PM com.apple.launchd:
(org.mongodb.mongod[79133])
open("/private/var/log/mongodb/output.log", ...): Permission denied
So I did a chmod 777 on that folder and now the shell launches!
Although I still get a warning when it launches:
Server has startup warnings:
Mon May 6 13:33:27.693 [initandlisten]
Mon May 6 13:33:27.693 [initandlisten]
** WARNING: soft rlimits too low.
Number of files is 256, should be at least 1000
Any idea how I can silence these warnings?
To get the information needed to determine the cause of the failure, look in (and post for us) the output from /usr/local/var/log/mongodb/mongo.log from when it is trying to start.
However, the most common reason for the failure is the lack of the default database path - at /data/db. Either create that folder (and don't forget to make sure your user has permission to read/write to it) or specify a different path with the --dbpath option.
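A minimal sketch of that fix (the /data/db path is MongoDB's historical default; the alternative directory name is just an example):
# Create the default data directory and hand it to the current user.
sudo mkdir -p /data/db
sudo chown "$(whoami)" /data/db
# ...or point mongod at a directory you already own.
mkdir -p "$HOME/mongodb-data"
mongod --dbpath "$HOME/mongodb-data"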
UPDATE: as you have since found, bad permissions on the log file can cause the issue, in a similar way to bad permissions on the data path.
In terms of the warning, the information you need is here:
https://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
It is just that though, a warning: you can run MongoDB without an issue with those limits as long as it is not under heavy load. So, if this is a development environment, unless you plan on load testing, you should be fine.
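If you still want the warning gone in a development setup, one low-effort option is raising the limit for the shell you start mongod from (a sketch; this only affects that terminal session, see the linked answer for a persistent fix):
# Raise the per-process open-file limit for this shell session, then start mongod from it.
ulimit -n 1024
mongod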