I created a target file to group all my personal services on my Fedora 18 system. I tested the services and I can start them individually, but when I try to enable the target I get an error message:
[root@ghostrider system]# systemctl enable developer.target
Failed to issue method call: Invalid argument
And here is the target file:
###########################################################################
# Target to activate Java development services
###########################################################################
#/etc/systemd/system/developer.target
[Unit]
Description=Processes Java
After=default.target
[Install]
Alias=developer.target
I really don't understand what that message means, any clues?
You have an Alias with the same name as the unit (developer.target). This caused the same issue for me while writing a service file. It is completely redundant, so just remove it.
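For reference, a corrected developer.target could look like the sketch below. The WantedBy line is an assumption (pick whichever target should pull yours in); without some [Install] directive other than the redundant Alias, enabling the target has nothing to do:
# /etc/systemd/system/developer.target
[Unit]
Description=Processes Java
After=default.target

[Install]
WantedBy=multi-user.target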
systemd creates a symlink in /etc/systemd/system/xxx.target.wants/. If your /etc/systemd/system/xxx.target.wants/xxx.service is not a symlink, systemd cannot create the symlink and throws this error.
Please clean up your /etc/systemd/system/xxx.target.wants/ directory.
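A quick way to check, sketched here with the developer.target name from the question (the stale service name is hypothetical):
ls -l /etc/systemd/system/developer.target.wants/
# entries without the "->" symlink arrow are plain files; remove them
rm /etc/systemd/system/developer.target.wants/stale.service
systemctl daemon-reload
systemctl enable developer.target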
I am running the following command in the directory where my root composer.json file is located:
./vendor/bin/typo3 extension:activate slickcarousel
However, I get the following error in return:
In ConnectionPool.php line 110: The requested database connection named "Default" has not been configured.
This happens even though I have configured my database in my LocalConfiguration.php. I also cannot find the ConnectionPool.php file in the vendor directory. How do I fix this error?
Are you perhaps using a different TYPO3 context? Then you need to set that as well:
TYPO3_CONTEXT=Development ./vendor/bin/typo3 extension:activate slickcarousel
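If you always work in that context, you can also export the variable once per shell instead of prefixing every command (Development is just the example context from above):
export TYPO3_CONTEXT=Development
./vendor/bin/typo3 extension:activate slickcarousel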
I've installed the Bro IDS, but when I try to start the service I get this error:
Error: error occurred while trying to send mail: send-mail: SENDMAIL-NOTFOUND not found
starting ...
starting bro ...
bro terminated immediately after starting; check output with "diag"
I've already used broctl install and broctl update but still got the same error.
Kindly help
I checked the configuration file, i.e. node.cfg under /nsm/bro/etc, and changed the default interface eth0 to my system's interface.
Now Bro has started.
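For reference, a minimal standalone node.cfg looks roughly like this; the interface name here is only an example, so use whatever ip link (or ifconfig) reports on your machine:
# /nsm/bro/etc/node.cfg
[bro]
type=standalone
host=localhost
interface=ens33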
Bro can run without sendmail present, but you may have been hit by a bug in Bro where it failed to include the sendmail location in the config files. While the bug report says it was supposed to be fixed, I've also seen the problem in Bro 2.5. So an easy fix is to do as suggested in the bug report and add SendMail = /usr/sbin/sendmail to /usr/local/bro/etc/broctl.cfg, then rerun the deployment command:
sudo /usr/local/bro/bin/broctl deploy
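The added line in broctl.cfg would look like this (the path is the usual sendmail location; verify it with which sendmail on your system):
# /usr/local/bro/etc/broctl.cfg
SendMail = /usr/sbin/sendmail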
After building incubator-hawq on CentOS 7.1, I tried to initialize it, but the error below occurs:
20160516:18:10:43:002036 hawqinit.sh:host-172-16-0-105:hawqadmin-[INFO]:-Loading hawq_toolkit...
ALTER ROLE
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-20160516:18:10:43:002036 hawqinit.sh:host-172-16-0-105:hawqadmin-[INFO]:-Loading hawq_toolkit...
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-Master init successfully
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-Init segments in list: ['hawq-master']
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[DEBUG]:-Start to init segment on node 'hawq-master'
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-Total segment number is: 1
fgets failure: Success
The program "postgres" is needed by initdb but was either not found in the same directory as "/usr/hawq/bin/initdb" or failed unexpectedly.
Check your installation; "postgres -V" may have more information.
20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Postgres initdb failed
20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Segment init failed on host-172-16-0-105
20160516:18:10:45:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Postgres initdb failed
20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Segment init failed on host-172-16-0-105
20160516:18:10:45:001766 hawq_init:host-172-16-0-105:hawqadmin-[ERROR]:-HAWQ init failed on hawq-master
20160516:18:10:46:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-0 of 1 segments init successfully
20160516:18:10:46:001766 hawq_init:host-172-16-0-105:hawqadmin-[ERROR]:-Segments init failed, exit
When I type the command, I see the following:
[hawqadmin@host-172-16-0-105 hawqAdminLogs]$ postgres -V
postgres (HAWQ) 8.2.15
Any advice? Thanks!
If "postgres -V" works, that means the postgres binary is good.
Before you do "hawq init cluster", please make sure:
1) $GPHOME in greenplum_path.sh is correctly set to the directory of the hawq binaries, i.e., /usr/hawq in your case
2) source $GPHOME/greenplum_path.sh
3) check that the initdb and postgres binaries are in $GPHOME/bin
From the error you pasted above, there are 2 possible causes:
(1) The postgres binary being called is not /usr/hawq/bin/postgres. You can use which postgres to check the path.
(2) The dependent libraries of postgres may be wrong. You can use ldd on Linux or otool on macOS to print all dependent library paths and check them.
Moreover, if there is any error when initializing hawq, please check the logs in ~/hawqAdminLogs/; you may find the specific error message there.
Hope this helps you find the root cause. A quick check sequence is sketched below.
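A quick check sequence, assuming the /usr/hawq install location from the question:
source /usr/hawq/greenplum_path.sh
which postgres                                   # should print /usr/hawq/bin/postgres
ls /usr/hawq/bin/initdb /usr/hawq/bin/postgres   # both binaries should exist
ldd /usr/hawq/bin/postgres | grep "not found"    # any output means a missing library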
Recently I faced the same error while initializing the cluster.
postgres -V showed the correct version, which postgres showed /usr/local/hawq/bin/postgres, and the path was already set, yet I still faced the above error.
I finally resolved it by setting LD_LIBRARY_PATH to /usr/local/hawq/lib/ and sourcing it via the .bashrc file.
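Concretely, the line added to ~/.bashrc was roughly this (the path matches the /usr/local/hawq install mentioned above), followed by reloading the file:
export LD_LIBRARY_PATH=/usr/local/hawq/lib:$LD_LIBRARY_PATH
source ~/.bashrc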
It looks like you might have installed the hawq binaries in a different directory. Please check the following:
1. Make sure you have the right PATH set.
2. Check that the hawq initdb binary is in the /usr/hawq/bin/ directory.
3. Make sure you have successfully compiled hawq and installed it.
4. Check that postgres is in the same directory as initdb.
5. If there is more than one postgres on your machine, make sure the path of the postgres that sits next to initdb is on your PATH (see the check below).
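To spot duplicate binaries, something like this helps:
which -a postgres    # lists every postgres found on PATH, in order
which -a initdb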
I have installed Gnumeric on CentOS 6.5 and then used the ssconvert command to convert .xls/.xlsx files to CSV, but I get the following error:
$ ssconvert
GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you
have stale NFS locks due to a system crash. See
http://projects.gnome.org/gconf/ for information. (Details - 1: Not
running within active session)
GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you
have stale NFS locks due to a system crash. See
http://projects.gnome.org/gconf/ for information. (Details - 1: Not
running within active session)
** (ssconvert:5725): WARNING **: Configured default font 'Sans 10.000000' not available, trying fallback...
** (ssconvert:5725): WARNING **: Fallback font 'Sans 10.000000' not available, trying 'fixed'...
** (ssconvert:5725): WARNING **: Even 'fixed 10' failed ?? We're going to exit now,there is something wrong with your font
configuration
Can you help me?
As this is an old question, the answer is here in case someone gets the same error (like me today on CentOS 6.8): you missed installing the Sans font:
yum install gnu-free-sans-fonts
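After installing the font, a conversion like the following should then work (the file names are just examples):
ssconvert report.xlsx report.csv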
For Docker Alpine images this might help:
# Install the MS core fonts installer, fontconfig and the Open Sans font
RUN apk add --update \
    msttcorefonts-installer fontconfig \
    ttf-opensans
Environment:
Glassfish 4.0 (only one DAS), Windows Server 2012 R2, Java 1.7.0_51
The DAS instance service was created by using the create-service subcommand.
Issue:
The maximum history files attribute has been set; however, GlassFish Server couldn't remove the old log files due to the lock file server.log.lck.
Path --> C:\glassfish4\glassfish\domains\domain1\config\logging.properties
com.sun.enterprise.server.logging.GFFileHandler.maxHistoryFiles=10
Log Snippet:
[2014-12-10T18:00:39.372+0900] [glassfish 4.0] [SEVERE] [] [] [tid: _ThreadID=16 _ThreadName=Thread-5] [timeMillis: 1418202039372] [levelValue: 1000] [[
java.util.logging.ErrorManager: 0: FATAL ERROR: COULD NOT DELETE LOG FILE.]]
[2014-12-10T18:00:39.372+0900] [glassfish 4.0] [SEVERE] [] [] [tid: _ThreadID=16 _ThreadName=Thread-5] [timeMillis: 1418202039372] [levelValue: 1000] [[
java.io.IOException: Could not delete log file: C:\glassfish4\glassfish\domains\domain1\logs\server.log.lck
at com.sun.enterprise.server.logging.GFFileHandler.cleanUpHistoryLogFiles(GFFileHandler.java:725)
at com.sun.enterprise.server.logging.GFFileHandler$4.run(GFFileHandler.java:802)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.enterprise.server.logging.GFFileHandler.rotate(GFFileHandler.java:744)
at com.sun.enterprise.server.logging.GFFileHandler$1.run(GFFileHandler.java:301)
at com.sun.enterprise.server.logging.LogRotationTimerTask.run(LogRotationTimerTask.java:68)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)]]
Findings:
1. If the lock file “server.log.lck” exists in the log folder, the issue occurs, and the above errors can be found in the log every day when GlassFish Server tries to remove the old log files. If there is no “server.log.lck” in the log folder, there is no issue and everything works properly.
2. If the DAS instance is started by the command “asadmin start-domain domain1”, no lock file “server.log.lck” is generated in the log folder. But if the DAS instance is started as a Windows service, the lock file “server.log.lck” is generated automatically and stays at 0 KB until the service is stopped, at which point the file is removed automatically.
3. If the DAS instance is started by the command “asadmin start-domain -w domain1”, which adds the watchdog option, the lock file “server.log.lck” is generated automatically and exists until the service is stopped.
4. When the lock file “server.log.lck” appears, there is always one extra java.exe process running. Therefore, when the DAS instance is started from the Windows service, there are two “java.exe” processes running and “server.log.lck” is in use by one of them.
Questions:
1. I'd like to start/stop the DAS instance as a Windows service, not by using the subcommand. Moreover, I don't want to keep all GlassFish logs on my server, since that would cause a disk-full issue, so I would prefer to use the GlassFish maximum history files logging option. Is there any workaround or solution for that?
2. Is this a defect in GlassFish, or is it just a configuration issue? I tried installing on other servers and they all had the same issue.
3. Why are there two java.exe processes running when the instance is started as a Windows service? Is the second one used for the “watchdog”?
Thank you so much for your help, and please let me know if there is any further information you'd like or other tests you want me to run.
In case someone is still struggling, I found a solution.
When you create a GF service in a Windows environment via asadmin create-service, GF creates a file domain1Service.xml in glassfish\domains\domain1\bin which contains the parameters for the server to start.
It looks something like the following:
<service>
<id>domain1</id>
<name>domain1 GlassFish Server</name>
<description>GlassFish Server</description>
<executable>C:/Supertel-NMSv3/glassfish-4.1/glassfish/lib/nadmin.bat</executable>
<logpath>C:\\Supertel-NMSv3\\glassfish-4.1\\glassfish\\domains/domain1/bin</logpath>
<logmode>reset</logmode>
<depend>tcpip</depend>
<startargument>start-domain</startargument>
<startargument>--watchdog</startargument>
<startargument>--domaindir</startargument>
<startargument>C:\\Supertel-NMSv3\\glassfish-4.1\\glassfish\\domains</startargument>
<startargument>domain1</startargument>
<stopargument>stop-domain</stopargument>
<stopargument>--domaindir</stopargument>
<stopargument>C:\\Supertel-NMSv3\\glassfish-4.1\\glassfish\\domains</stopargument>
<stopargument>domain1</stopargument>
</service>
The line <startargument>--watchdog</startargument> is responsible for launching the watchdog process, which prevents the log file from being deleted.
You can't just delete this startargument element (the service won't start), but you can switch it off by setting the flag to false, like this:
<startargument>--watchdog=false</startargument>
After that, the service will start just like via the manual start-domain command, without the watchdog process.
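After editing domain1Service.xml, restart the Windows service so the wrapper re-reads the file; the service id below is taken from the <id> element of the XML above, so adjust it if yours differs:
sc stop domain1
sc start domain1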
You would have to do this after every service creation, which could be pretty annoying, so I did some further research.
It turns out that asadmin creates the OS-specific domainService.xml using templates located in glassfish\lib\install\templates. Those templates are also OS-specific, and the template for Windows (named Domain-service-winsw.xml.template) looks like this:
<service>
<id>%%%NAME%%%</id>
<name>%%%DISPLAY_NAME%%%</name>
<description>GlassFish Server</description>
<executable>%%%AS_ADMIN_PATH%%%</executable>
<logpath>%%%LOCATION%%%/%%%ENTITY_NAME%%%/bin</logpath>
<logmode>reset</logmode>
<depend>tcpip</depend>
<startargument>%%%START_COMMAND%%%</startargument>
<startargument>--watchdog</startargument>
%%%CREDENTIALS_START%%%%%%LOCATION_ARGS_START%%%<startargument>%%%ENTITY_NAME%%%</startargument>
<stopargument>%%%STOP_COMMAND%%%</stopargument>
%%%CREDENTIALS_STOP%%%%%%LOCATION_ARGS_STOP%%%<stopargument>%%%ENTITY_NAME%%%</stopargument>
</service>
So you can edit the template directly, setting the parameter to --watchdog=false, and this change will be reflected in every domainService.xml created in the future.
Hope it helps.
That's not the right solution. The watchdog has an important function: it monitors whether the service is running or not. Without the watchdog, GlassFish starts correctly, but shortly afterwards the system no longer knows whether the service is still running or has crashed. In the Services GUI, only the “Start” button is active (always!); “Stop” and “Restart” cannot be used.
A proper solution would be the ability to change the path to the lock file.