Telnet is necessary in this case to maintain compatibility with older software. I'm working with the Yocto Rocko 2.4.2 distribution. When I try to telnet to the board, I get the oh-so-detailed message "connection refused".
Using the method here and the options here, I modified the busybox configuration as suggested. After booting the board and logging in, running telnet prints usage info, and a quick directory check shows that telnet is installed at /usr/bin/telnet. My guess is that the telnet client is installed but the telnet server is not running?
I need to get telnetd to start manually, at least, so I know it will work once an init script is in place. The second reference link there suggests that 'telnetd will not be started automatically though...' and that there will need to be an init script. How can I start telnetd manually for testing?
systemctl enable telnetd
returns: Unit telnetd.service could not be found
UPDATE
telnetd is located at /usr/sbin/telnetd. I was able to start it manually from there for testing, and after that telnet login works. I'm now looking into writing a systemd unit to auto-start the telnetd service, so I suppose this issue is closed, unless anyone would like to offer up detailed busybox telnet configuration and setup steps as an answer to 'How to configure telnet service for a Yocto image'.
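For reference, the manual start was just running the daemon directly from a root shell (busybox telnetd backgrounds itself by default):

/usr/sbin/telnetd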
UPDATE
Perhaps there is something more? I created a unit file that looks like this:
[Unit]
Description=auto start telnetd
[Service]
ExecStart=/usr/sbin/telnetd
[Install]
WantedBy=multi-user.target
On reboot, systemd indicates the process executed and succeeded:
systemctl status telnetd
...
Process: 466 ExecStart=/usr/sbin/telnetd (code=exited, status=0/SUCCESS)
...
The service is not running, however: netstat -l does not list it, and telnet login fails. Am I missing something?
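A likely explanation, worth checking: busybox telnetd forks into the background by default, so with the implicit Type=simple, systemd sees the parent process exit immediately and considers the unit finished. A unit that keeps telnetd in the foreground might look like this (a sketch; it assumes this busybox build was compiled with standalone mode and the -F flag):

[Unit]
Description=auto start telnetd
After=network.target

[Service]
# -F keeps busybox telnetd in the foreground so systemd can track it
ExecStart=/usr/sbin/telnetd -F

[Install]
WantedBy=multi-user.target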
Last update... I think
So, following this post, I managed to get the telnet.socket service to start up on reboot.
systemctl status telnet.socket
shows that it is running and listening on port 23. Now, however, when I try to remote in with telnet, I get
Connection closed by foreign host
Everything I've read so far talks about the xinetd service (which I do not have...). What's confusing is that if I just navigate to /usr/sbin/ and execute telnetd, the server is up and running and I can telnet into the board. So I don't believe I'm missing any utilities or services (like the above-mentioned xinetd); something is still not configured correctly. Any ideas?
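One possibility: with Accept=yes in telnet.socket, systemd expects a matching per-connection template unit (telnet@.service) that it spawns for each incoming connection; if that unit is missing or wrong, the connection is accepted and immediately dropped, which looks exactly like "Connection closed by foreign host". A minimal sketch of the pair, assuming busybox telnetd supports inetd mode (-i):

# telnet.socket
[Unit]
Description=Telnet server socket

[Socket]
ListenStream=23
Accept=yes

[Install]
WantedBy=sockets.target

# telnet@.service (one instance spawned per connection)
[Unit]
Description=Telnet per-connection server

[Service]
# -i serves the single connection handed over on stdin/stdout
ExecStart=-/usr/sbin/telnetd -i
StandardInput=socket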
Related
Which config files could disable the automatic start of the ssh server, making a headless connection impossible?
I need to know which config files might prevent the ssh server from starting normally at boot.
I believe you are looking for the following commands (assuming you are running the latest version of Raspbian):
sudo systemctl stop sshd
sudo systemctl disable sshd
sudo systemctl mask sshd
stop basically stops the service immediately. disable prevents the service from starting at boot. Additionally, mask makes it impossible to load the service.
Digging deeper into what each command does: on modern Linux distributions there are configuration files for each service, called unit files, usually stored in /usr/lib/systemd/system. These are essentially the evolution of init scripts for starting services.
The stop command just sends the sshd.service unit a stop request in order to shut down the server.
The disable (or enable) command removes (or creates) a symlink to the unit file in a directory systemd looks at when starting services at boot (usually /etc/systemd/system/<target>.wants/).
systemctl mask creates a symlink to /dev/null in place of the unit file. That way the service can't be loaded.
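As an illustration, the effect of each command is visible on disk (the paths below are the usual ones and can vary by distribution and by which target wants the unit):

# enable creates a symlink in the target's .wants directory
sudo systemctl enable sshd
ls -l /etc/systemd/system/multi-user.target.wants/sshd.service

# mask makes the unit name itself point at /dev/null
sudo systemctl mask sshd
ls -l /etc/systemd/system/sshd.service   # -> /dev/null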
I've got a problem completing the pgadmin4 installation via the sudo /usr/pgadmin4/bin/setup-web.sh command.
During this process the installer does not recognize that Apache is running and asks me if I want to start it:
The Apache web server is not running. We can enable and start the web server for you to finish pgAdmin 4 installation. Continue (y/n)? y
Then it just spits out some errors:
Too few arguments.
Error enabling . Please check the systemd logs
Too few arguments.
Error starting . Please check the systemd logs
So far I haven't found where the logs are stored.
As for my Apache, I'm quite sure the server is running: I can connect to it through the browser, phpMyAdmin works properly, and service apache2 status returns * apache2 is running. As I understand it, apache2 is just a fancy name for the httpd service, and there is no other service called simply apache.
PostgreSQL seems to work properly from the command line. I haven't tested whether I can connect to it yet, but that shouldn't be the issue, right?
I am using
**PostgreSQL:** 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1)
**Ubuntu:** Ubuntu 20.04 LTS
**Server:** Apache/2.4.41 (Ubuntu)
I had the same issue on Debian 10 and Ubuntu 20. The /usr/pgadmin4/bin/setup-web.sh script uses 'uname -a', whose output doesn't contain the "Debian" identifier. Updating it to read /proc/version instead allows APACHE to be set to the Debian variant, apache2.
Change:
UNAME=$(uname -a)
To:
UNAME=$(cat /proc/version)
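The reason this helps: on Debian-family systems the string in /proc/version usually embeds the distribution name via the compiler build string, while uname -a often shows only the kernel and hostname. Hypothetically, the detection the script is effectively doing looks something like this (a sketch, not the script's actual code):

UNAME=$(cat /proc/version)
# "Debian" or "Ubuntu" typically appears in the gcc build string
if echo "$UNAME" | grep -qiE 'debian|ubuntu'; then
    APACHE=apache2   # Debian-family service name
else
    APACHE=httpd     # Red Hat-family service name
fi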
I had a similar problem with Ubuntu running inside WSL 2 and managed to resolve it by modifying the /usr/pgadmin4/bin/setup-web.sh script. I moved these lines outside of the conditional:
IS_DEBIAN=1
APACHE=apache2
This allowed the installation to progress beyond the Too few arguments. error. There was still an error, however:
System has not been booted with systemd as init system (PID 1). Can't operate.
Error restarting apache2. Please check the systemd logs
I resolved this by running:
sudo service apache2 restart
After this I tried bringing up the admin page by visiting http://127.0.0.1/pgadmin4 from the Windows host. This still didn't work; I had to connect using the Ubuntu machine's IP address (you can find it via ifconfig), which finally allowed me to see the login page.
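For reference, the address can be found from inside the Ubuntu/WSL shell with any of the following (ifconfig is the one mentioned above; the interface name may differ on your setup):

hostname -I
ip addr show eth0
ifconfig eth0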
I've updated the JDK from 1.8_131 to 1.8_151 for CDH 5, so I need to restart the cluster for the change to take effect. In the beginning I used the Cloudera Manager web page to restart, but it failed when starting ZooKeeper, which is the first step. Then I made a bad choice: I shut Cloudera Manager down from the terminal, including kill -9 on the postgresql process. After that, I couldn't open the Cloudera Manager web page.
I used the following commands to start the cluster:
service cloudera-scm-server-db start
service cloudera-scm-server start
service cloudera-scm-agent start
All of them failed, because /var/log/cloudera-scm-server and /var/log/cloudera-scm-agent had disappeared.
So I created these two directories manually, along with dg.log and cloudera-scm-agent.log.
At that point the server and agent could start, but server-db still could not. Here are some details:
Starting cloudera-scm-server-db (via systemctl): Job for cloudera-scm-server-db.service failed because the control process exited with error code. See "systemctl status cloudera-scm-server-db.service" and "journalctl -xe" for details.
journalctl -xe reports:
The CM is using external DB. Failed to start embedded DB service, giving up
service --status-all
What I've done:
So, what should I do now? Thank you very much!
The above problem has been solved.
Open the /etc/cloudera-scm-server/db.properties file; it looks like this:
# cat /etc/cloudera-scm-server/db.properties
# Auto-generated by scm_prepare_database.sh
#
# Sat Oct 1 12:19:15 PDT 201
#
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=localhost
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=TXqEESuhj5
com.cloudera.cmf.db.setupType=EXTERNAL
EXTERNAL is the crux.
In my CDH setup, I use the embedded postgresql as the server database, although Cloudera officially recommends against it. I'm new to Cloudera, so I made a mistake.
I mistakenly ran a command that is only meant for preparing an external Cloudera Manager Server database:
/usr/share/cmf/schema/scm_prepare_database.sh postgresql scm scm scm_password
The above command rewrites db.properties.
Once you run the above command, com.cloudera.cmf.db.setupType is set to EXTERNAL (you can find more details on this in the Cloudera docs).
The most direct and effective fix is to reset the password of the scm user.
Then, in db.properties:
update the password
set setupType to EMBEDDED
make sure port 7432 is listening (you can check with netstat -nltp)
# vim /etc/cloudera-scm-server/db.properties
# Auto-generated by scm_prepare_database.sh
#
# Sat Oct 1 12:19:15 PDT 201
#
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=localhost:7432
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=new_password
com.cloudera.cmf.db.setupType=EMBEDDED
Now stop all cloudera-scm services and restart them in order: server-db, server, agent.
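That is, using the same service commands as above (stop everything first, then start in the stated order):

service cloudera-scm-agent stop
service cloudera-scm-server stop
service cloudera-scm-server-db stop

service cloudera-scm-server-db start
service cloudera-scm-server start
service cloudera-scm-agent start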
If /var/log was wrongly cleared, you can recreate directories such as /var/log/cloudera-scm-server and /var/log/cloudera-scm-agent manually. Note that you should create them as the cloudera-scm user; otherwise the logs cannot be written, and you won't be able to tell from the log files what went wrong.
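A minimal sketch of the recreation, assuming the group name matches the user (check with id cloudera-scm):

mkdir -p /var/log/cloudera-scm-server /var/log/cloudera-scm-agent
chown -R cloudera-scm:cloudera-scm /var/log/cloudera-scm-server /var/log/cloudera-scm-agent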
I have set up a basic infrastructure using Chef. This includes a local Chef server (Ubuntu-based), a workstation, and an Ubuntu-based server (to be used as the node). Note that the entire infrastructure sits behind the firewall on my office network, and I have made the necessary proxy settings for the servers to access the internet.
So here is the problem. When I try to bootstrap the node using
knife bootstrap <node's ip> --sudo -x <username> -P <password> -N "<name>"
I get the following error:
<node's ip> --2014-02-19 10:47:10-- https://www.opscode.com/chef/install.sh
<node's ip> Resolving www.opscode.com (www.opscode.com)... 184.106.28.91
<node's ip>1 Connecting to www.opscode.com (www.opscode.com)|184.106.28.91|:443... failed:Connection refused.
<node's ip> bash: line 83: chef-client: command not found
I was not able to find a solution to this. However, I came across the knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/" setting that can be added to knife.rb. I did this (entering my office proxy details), and the connection during bootstrap then succeeded: the chef client was downloaded onto the node. However, this setting only defines the proxy that should be used by the node, so it led to http_proxy = "http://username:password@proxyIP:port/" being set in client.rb. But because I had already made all the proxy settings on my server, the chef client failed to launch. So I manually removed the http_proxy and https_proxy settings from client.rb and ran chef-client, which then succeeded.
I have two questions:
1) Why did knife[:bootstrap_proxy] = "http://username:password@proxyIP:port/" work, given that it only defines the proxy that should be used by the node?
2) Also, all the proxy settings for the node have already been made. I do not want any proxy settings in client.rb. How do I achieve this?
Please help!
When it comes to your client.rb, I'd suggest looking into https://github.com/opscode-cookbooks/chef-client
It's a cookbook that manages client.rb(s) for you.
Not sure about your knife[:bootstrap_proxy], though. Ideally that cookbook should take care of it. If you are still stumped, you can run chef-client -VV and knife -VV to see exactly what each is doing.
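If the goal is simply to keep proxy settings out of client.rb for hosts the node can reach directly, knife also has a bootstrap_no_proxy setting (whether your bootstrap template honors it depends on your Chef version); a sketch of knife.rb:

knife[:bootstrap_proxy]    = "http://username:password@proxyIP:port/"
# hosts/domains the node should reach without the proxy
knife[:bootstrap_no_proxy] = "localhost,127.0.0.1"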
When veewee displays the following message, what exactly is it waiting on?
Waiting for ssh login on 127.0.0.1 with user veewee to sshd on port => 7222 to work, timeout=10000 sec
As far as I can tell, veewee has put up an ssh server on port 7222 on the host and is waiting on that, which would mean something in the guest is going to connect back to it. However, I can't figure out what that thing might be, and thus I can't debug further.
Further details
I'm trying to build a VirtualBox image for Vagrant with the CentOS-6.3-x86_64-minimal template. My steps:
bundle exec veewee vbox define 'ejs-centos6.3-1' 'CentOS-6.3-x86_64-minimal'
wget http://mirror.symnds.com/distributions/CentOS-vault/6.3/isos/x86_64/CentOS-6.3-x86_64-minimal.iso
bundle exec veewee vbox build 'ejs-centos6.3-1'
The CentOS install appeared to run without error, but it's stuck waiting for the ssh login.
You're right that there's an SSH server listening on port 7222, but it's on the guest (VM), not the host.
The host (Veewee) is waiting to connect to it. This SSH service is supposed to become available when the VM install process finishes; it's one of the signals Veewee uses to conclude that the setup went fine and that the VM is ready.
If Veewee blocks and never gets this SSH connection, I think there could be multiple reasons:
VM setup went wrong and something prevents it from finishing successfully. Check the Veewee output and the VirtualBox VM graphical console that should have opened when launching veewee vbox build.
Something is preventing your host computer from connecting to the VM at the network level.
The VM image doesn't have sshd installed, and/or the veewee box configuration files (in veewee/definitions/ejs-centos6.3-1/) are missing instructions to install the ssh package.
You should try to log in to the VM using the VirtualBox console window and check whether the openssh-server package is installed (rpm -qa | grep openssh-server) and a process named sshd is running.
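For debugging, both ends can be poked by hand (the user and port are taken from the message in the question; the login password is whatever the definition files set for the veewee user):

# from the host: test the forwarded port veewee is polling
ssh -p 7222 veewee@127.0.0.1

# inside the VM console: is sshd installed and running?
rpm -qa | grep openssh-server
ps aux | grep '[s]shd'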
I ran Veewee against CentOS 7 built with the GUI on, and it got stuck on anaconda asking for the package source. I checked the ks.cfg and it was pointing to a dead resource (404). After pointing it to a valid URL, the build went through.