My clean Munin install is not generating graphs. I did everything listed in this URL to debug it:
http://munin-monitoring.org/wiki/FAQ_no_graphs#Ijustreadtheaboveanswerandtherestillarentanygraphs
I can successfully connect to the node via telnet.
- The nodes command displays the node correctly.
- The list command shows the list of plugins correctly.
- The fetch command returns empty output!
I opened the munin-node.log file to see what's going on, and every log entry (I think there is one entry per plugin) shows one of these two error messages (when I restart the node the error switches from one to the other):
1-
Error output from threads:
2013/08/08-07:38:04 [2659] Insecure directory in $ENV{PATH} while running with -T switch at /usr/share/perl5/Munin/Node/Service.pm line 241.
2013/08/08-07:38:04 [2659] Service 'threads' exited with status 255/0.
2-
Error output from vmstat:
2013/08/08-07:35:02 [1096] 2013/08/08-07:35:02 [1186] Died at /usr/share/perl5/Munin/Common/Timeout.pm line 66, line 74.
I have no clue how to solve this.
I'm using Ubuntu 12.04 and the installed Munin version is 1.4, since 2.x is not available for Ubuntu 12.04.
Maybe someone with Perl experience could point me to how to edit the source to get rid of the error.
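For context, Perl's taint mode (the -T switch in the first error) refuses to use $ENV{PATH} when any directory in it is group- or world-writable, so one quick check is to look at the permissions of every directory in your PATH (the PATH munin-node passes to its plugins may differ, but this shows the idea). A minimal sketch, assuming a standard shell:

# list permissions for each directory in PATH; look for group/world write bits
echo "$PATH" | tr ':' '\n' | xargs -r ls -ld

Tightening the permissions on the offending directory, or removing it from the PATH munin-node exports to its plugins, is the usual way to silence that warning.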
I spent several hours looking at this Lens error for K8s. I installed Python and the OCI CLI for Windows 10 (I downloaded the oci-cli offline installation and ran python install.py) and configured cluster access. Using CMD works OK:
The kubectl command works fine; even the get pods command works.
But when connecting with Lens it gives me this error:
Error getting Credentials: exec: executable oci not found
What am I missing?
I finally found the solution: it was to download kubectl.exe from
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/windows/amd64/kubectl.exe
I put it in a folder on the disk, for example c:\kubernetes, and
added that folder to the PATH environment variable.
Then restart the PC; without a reboot it didn't work.
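To double-check after the reboot that the binaries Lens invokes are actually resolvable from PATH, a quick sanity check from a fresh CMD window (not part of the original fix, just verification) is:

where kubectl
where oci
kubectl version --client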
I am coming from Notepad, and am learning how to work with VS Code.
I am now trying to access my online repository on the web server.
I followed the guide here: https://code.visualstudio.com/docs/remote/ssh
I did manage to access my server through the terminal window
(ssh user@domain + password).
When connecting, this shows in the log:
"Linux infong-eu27 4.4.246-icpu-061 #2 SMP Thu Nov 26 10:58:41 UTC 2020 x86_64"
This tells me that it is working on Linux.
If I type "ls", I can see my folders and navigate among them.
So far, so good!
Second phase: connecting through the Remote Explorer.
Step 1:
I configured the SSH host with the same credentials I used in the terminal.
Step 2:
I open the Remote Explorer and can see my server's name. I right-click on it and select "Connect".
Step 3:
I am then asked to choose the operating system. I pick Linux, as shown earlier when connecting through the terminal.
Step 4: I enter the same password I used before to connect in the terminal.
Step 5: Infinite loading, or a very long one, until I get two notifications / errors:
Could not fetch remote environment
Failed to connect to the remote extension host (error time limit..)
That being said, it also says in the bottom left corner, in the "remote window" indicator, that I am connected. This does not seem right.
Any chance someone could help?
I am frustrated because it connects in a second using the terminal, but not through the Remote Explorer.
UPDATE:
I found this article on Medium that paraphrases the official documentation:
https://medium.com/@sujaypillai/connect-to-your-remote-servers-from-visual-studio-code-eb5a5875e348
I managed, through Git Bash, to create an SSH key pair and to copy the public key to my server.
I then followed the instructions on how to set it up on VS Code successfully!
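For reference, the Remote-SSH setup essentially boils down to an entry like this in the ~/.ssh/config file on the local machine (the host alias, user, and key path below are placeholders, not values from my setup):

Host myserver
    HostName example.com
    User myuser
    IdentityFile ~/.ssh/id_rsa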
Now, when I try to connect, I am asked for my key passphrase:
But when I enter it: a very long loading, and the same error message.
When I looked at my server with an SFTP client, I saw that VS Code did manage to connect, as files were installed in a folder it created:
This is reported in issue 4415 (no answer) and issue 4204
The last one includes:
This might be caused by our new automatic port forwarding feature which scans the remote OS for available ports in order to forward them locally (microsoft/vscode#112843)
This is fixed by PR 113342, for the next 1.54 Feb. 2021 release. That bug is about setting remote.autoForwardPorts to false and... still seeing VSCode auto-forward ports!
Check on your server (while VSCode attempts to connect) whether:
- the CPU is high
- any services are running on a public port on said server
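If the port scan does turn out to be the culprit, the remote.autoForwardPorts setting mentioned above can be switched off in the local settings.json (only a sketch; as noted, the linked bug is precisely that this setting was not always honored before the 1.54 fix):

// User settings.json on the local machine
"remote.autoForwardPorts": false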
I solved a similar issue by following the error logs from the Remote - SSH extension. I had to install libatomic1 on the remote server with:
sudo apt-get install libatomic1
I've got a problem completing the pgAdmin 4 installation through the sudo /usr/pgadmin4/bin/setup-web.sh command.
During this process the installer does not recognize that Apache is running and asks me if I want to start it:
The Apache web server is not running. We can enable and start the web server for you to finish pgAdmin 4 installation. Continue (y/n)? y
Then it just spits out some errors:
Too few arguments.
Error enabling . Please check the systemd logs
Too few arguments.
Error starting . Please check the systemd logs
So far I haven't found where the logs are stored.
As for Apache, I am quite sure that my server is running, because I can connect to it through the browser, phpMyAdmin is working properly, and service apache2 status returns * apache2 is running. To my understanding, apache2 is just a fancy name for the httpd service, and there is no other service called simply apache.
PostgreSQL seems to work properly from the command line. I haven't tested whether I can connect to it yet, but that shouldn't be the issue, right?
I am using
**PostgreSQL:** 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1)
**Ubuntu:** Ubuntu 20.04 LTS
**Server:** Apache/2.4.41 (Ubuntu)
I had the same issue on Debian 10 and Ubuntu 20. The /usr/pgadmin4/bin/setup-web.sh script uses 'uname -a', which doesn't contain the "Debian" identifier in its return string. Updating it to read /proc/version instead allows APACHE to be set to the Debian variant of apache2.
Change:
UNAME=$(uname -a)
To:
UNAME=$(cat /proc/version)
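A quick way to see what the script is actually matching against on your own machine (nothing more than running both sources side by side):

# the string the script currently checks
uname -a
# the string the fix reads instead (it usually includes the distribution's build info)
cat /proc/version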
I had a similar problem with Ubuntu running inside WSL 2. I managed to resolve it by modifying the /usr/pgadmin4/bin/setup-web.sh script: I moved these lines outside of the conditional:
IS_DEBIAN=1
APACHE=apache2
This allowed the installation to progress past the Too few arguments. error. There was still an error, however:
System has not been booted with systemd as init system (PID 1). Can't operate.
Error restarting apache2. Please check the systemd logs
I resolved this by running:
sudo service apache2 restart
After this I tried bringing up the admin page by visiting http://127.0.0.1/pgadmin4 from the Windows host. This still didn't work, and I had to connect using the Ubuntu machine's IP address (you can find it with ifconfig), which finally allowed me to see the login page.
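If ifconfig isn't available in the WSL distribution (it needs the net-tools package), the same address can be read with, for example:

ip addr show eth0
# or simply
hostname -I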
I'm using CentOS 7 and have installed LibreOffice 6.3. It was working fine before my computer's IP was changed, but after changing the IP it is not working, and I have since reinstalled it many times; it's still not working.
Commands and outputs below:
libreoffice6.3 --version ==== LibreOffice 6.3.4.2 60da17e045e08f1793c57c00ba83cdfce946d0aa
soffice --version ===== -bash: soffice: command not found
libreoffice6.3 ===== Failed to open display
I know this is old, but just a tip: every time you need or want to use LibreOffice's functionality without starting up the GUI, add --headless to the soffice command. soffice --help says:
Starts in "headless mode" which allows using the
application without GUI. This special mode can be used
when the application is controlled by external clients
via the API.
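As a concrete example of that headless mode, converting a document to PDF with no X display involved looks roughly like this (the file name is just a placeholder, and on the CentOS install from the question the binary may be called libreoffice6.3 rather than soffice):

soffice --headless --convert-to pdf report.odt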
I've updated the JDK from 1.8_131 to 1.8_151 for CDH5, so I need to restart the cluster to make it take effect. In the beginning I used the Cloudera Manager web page to restart it, but it failed when ZooKeeper started, which is the first step. Then I made a bad choice, which was to shut down Cloudera Manager from the terminal, including kill -9 of the postgresql process. After that, I couldn't open the Cloudera Manager web page.
I used the following commands to start the cluster:
service cloudera-scm-server-db start
service cloudera-scm-server start
service cloudera-scm-agent start
All of them failed, because /var/log/cloudera-scm-server and /var/log/cloudera-scm-agent had disappeared.
So I created these two directories manually, also including dg.log and cloudera-scm-agent.log.
After that, the server and agent could start, but server-db still could not. Here are some details:
Starting cloudera-scm-server-db (via systemctl): Job for
cloudera-scm-server-db.service failed because the control process
exited with error code. See "systemctl status
cloudera-scm-server-db.service" and "journalctl -xe" for details
journalctl -xe
The CM is using external DB. Failed to start embedded DB service, giving up
service --status-all
What I've done:
So, what should I do now? Thank you very much!
The above problem has been solved.
If you open the /etc/cloudera-scm-server/db.properties file, it looks like this:
# cat /etc/cloudera-scm-server/db.properties
# Auto-generated by scm_prepare_database.sh
#
# Sat Oct 1 12:19:15 PDT 201
#
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=localhost
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=TXqEESuhj5
com.cloudera.cmf.db.setupType=EXTERNAL
EXTERNAL is the crux.
In my CDH service, I use the embedded PostgreSQL as my server database, but that is not recommended by Cloudera officially. I'm new to Cloudera, so I made a mistake.
I wrongly used a command which is only meant for a Cloudera Manager Server external database:
/usr/share/cmf/schema/scm_prepare_database.sh postgresql scm scm scm_password
The above command configures db.properties.
Once you run the above command, com.cloudera.cmf.db.setupType will be set to EXTERNAL (you can find more details about this in the Cloudera docs).
The most direct and effective way is to reset the password of the scm user.
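One way to do that reset, only as a sketch: it assumes the embedded PostgreSQL is reachable on localhost:7432 and that psql can authenticate as a database superuser (here via the cloudera-scm system user; your setup may differ):

# set a new password for the scm database user in the embedded PostgreSQL
sudo -u cloudera-scm psql -h localhost -p 7432 -U cloudera-scm -d postgres -c "ALTER USER scm WITH PASSWORD 'new_password';"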
Then, in db.properties:
- update the password
- set setupType to EMBEDDED
- make sure port 7432 is listening (you can use netstat -nltp to check)
# vim /etc/cloudera-scm-server/db.properties
# Auto-generated by scm_prepare_database.sh
# Sat Oct 1 12:19:15 PDT 201
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=localhost:7432
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=new_password
com.cloudera.cmf.db.setupType=EMBEDDED
Now stop all cloudera-scm services and restart them in order: server-db, server, agent.
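Using the same service commands from the question, that is:

service cloudera-scm-server-db restart
service cloudera-scm-server restart
service cloudera-scm-agent restart
# confirm the embedded DB is listening on 7432
netstat -nltp | grep 7432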
If /var/log was wrongly cleared:
You can recreate directories such as /var/log/cloudera-scm-server and /var/log/cloudera-scm-agent manually.
Note that you should create these directories as the cloudera-scm user; otherwise the logs cannot be written, and you won't be able to find out what error happened from the log files.
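A minimal sketch of that, assuming the service user and group are both named cloudera-scm as on a default install:

mkdir -p /var/log/cloudera-scm-server /var/log/cloudera-scm-agent
chown cloudera-scm:cloudera-scm /var/log/cloudera-scm-server /var/log/cloudera-scm-agent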