Is it possible for lsyncd to keep working when a destination node is unavailable on the network? I need to sync folders across different servers, but these servers are in an Azure Availability Set (the equivalent of an EC2 auto-scaling group on AWS), so they can be turned on or off according to the load on my application, and lsyncd has to cope with that.
I cannot use a file share for these folders because there are a lot of micro-files.
Thanks in advance for your help.
https://axkibe.github.io/lsyncd/manual/config/file/
According to the manual, you can start with the 'insist' option to allow lsyncd to start up without all of the destinations being available.
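For illustration, a minimal lsyncd config sketch with insist enabled; the source folder and target host below are hypothetical:

settings {
    insist = true,  -- keep running even if a target is unreachable at startup
}

sync {
    default.rsync,
    source = "/data/app",           -- hypothetical local folder
    target = "10.0.0.5:/data/app",  -- hypothetical destination node
}

Note that insist covers targets that are down at startup; once running, lsyncd already retries failed transfers on its own.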
I have set up a Prefect backend server on a remote machine. I was able to connect local agents from several other machines to the server by modifying the config.toml in the .prefect folder:
[server]
endpoint = "http://server_ip:port/graphql"
[server.ui]
apollo_url = "http://server_ip:port/graphql"
As it stands, I can create a local agent on each machine, register flows and run them on the respective machines. Now I would like to have a central computer where I can develop and register my flows.
Unfortunately, when I run a flow on Machine B that was registered on Machine A, I get a "ModuleNotFoundError". I have read that this happens because each machine looks for the flows only in its own local storage.
Without using Git, GCS, etc., is it possible to use, for example, a NAS where all flows are stored and which all machines can use to access the flows?
If so, how must flows, agents, and storage be configured? Unfortunately, I have not found any good documentation on this.
Many setups use Docker agents and run into similar problems, or use remote storage directly.
There is no native NAS storage interface available in the core library, but we provide recipes and guidance on how you may solve the ModuleNotFoundError. Check out this Discourse wiki page, which dives into how you may solve that.
I was able to find a solution to my problem. The prerequisite is shared storage (e.g. a NAS) that is accessible on all machines under the same path. The flows are stored in this storage as .py files. Neither the flows nor the local agents involved need any special preparation.
I simply registered my flows from the CLI with
prefect register --project "PREFECT_PROJECT_NAME" --path "PATH_TO_.py"
I was able to deploy all my flows from machine A and execute or schedule them on any other machine.
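For anyone reproducing this, here is a minimal sketch of what such a flow file on the shared path might look like, assuming Prefect 1.x; the flow name and NAS path are hypothetical:

from prefect import Flow, task
from prefect.storage import Local

@task
def say_hello():
    print("hello from shared storage")

with Flow("nas-example-flow") as flow:
    say_hello()

# Optional: make the shared location explicit by storing the flow as a
# script at the NAS path that every machine mounts (path is an assumption).
flow.storage = Local(path="/mnt/nas/flows/nas_example_flow.py", stored_as_script=True)

Because every machine sees the same path, any agent can load the script at run time.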
I'm trying to deploy an Infinispan cluster (2 machines) using domain mode, but I can't find any working example of the domain.xml and host.xml config files.
This cluster would be used by Keycloak as a cache server.
Has anyone here already worked on this?
You need to download Infinispan 9.4.14 (or any 9.4 release) and start the bin/domain.sh [bat] script.
That's it: you have a running domain with two servers.
If you want to add a second machine, copy the server distribution there and start the domain script with "--host-config=host-slave.xml". You also need to set "jboss.domain.master.address" via "-D" so the process knows where the domain master is (see the example commands below).
Another option is to rename host-slave.xml to host.xml and edit the domain-controller discovery-options.
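For example, the startup commands might look like this; the master address is an assumption, and paths are relative to the Infinispan server directory:

# On the first (master) machine:
bin/domain.sh

# On the second machine, after copying the server distribution:
bin/domain.sh --host-config=host-slave.xml -Djboss.domain.master.address=192.168.1.10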
More information can be found here -> http://infinispan.org/docs/stable/server_guide/server_guide.html#domain_mode
I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM are set up on the same machine. The management server is running without problems, and I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I try to add an instance, no templates are shown in the second stage, as you can see in the screenshot.
However, I am able to see two templates under the Templates link in the web UI.
But when I select the template and navigate to the Zone tab, I see "Timeout waiting for response from storage host" and the Ready field shows "No".
When I check the management server logs, it seems there is an error when CloudStack tries to mount the secondary storage. The segment below from the cloudstack-management.log file shows this error.
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId: 279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer": {"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: GetRootDir for nfs://172.16.10.2/export/secondary failed due to com.cloud.utils.exception.CloudRuntimeException: Unable to mount 172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-817a77b6cfc6 due to mount.nfs: Connection timed out
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:2080)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:1829)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:265)
    at com.cloud.agent.Agent.processRequest(Agent.java:525)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
","wait":0}}] }
Can anyone please guide me on how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP with the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would cut off my internet access. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static with the same configuration settings as above and ran the CloudStack management server. Everything worked fine until I bumped into this template issue. Please help me figure out what might have gone wrong.
I know I'm late, but for anyone trying this in the future, here it goes:
I hope you successfully added a host as described in the Quick Install Guide before you changed your IP to static, since that step auto-configures VLANs for the different traffic types and creates two bridges, generally named 'cloud' or 'cloudbr'. CloudStack uses the Secondary Storage System VM (SSVM) for all storage-related operations in each zone and cluster. What seems to be the problem here is that the SSVM cannot communicate with the management server on port 8250. To verify, try manually mounting the NFS server's mount points from inside the SSVM shell. You can SSH into the SSVM with:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<Private or link-local IP address of SSVM>
Once inside the SSVM (assuming it is running and has its private, public, and link-local IP addresses), I suggest you run /usr/local/cloud/systemvm/ssvm-check.sh. If that doesn't help much, take a look at the secondary storage troubleshooting docs for CloudStack.
If anyone runs into similar issues in the future, I would further recommend the following.

First, check that the SSVM is running and in the "Up" state in the System VMs section of the Infrastructure tab, and that you can open a console session to it from the browser. If that works, run the ssvm-check.sh script mentioned above, which systematically checks every point of operation the SSVM depends on. Even if a console session cannot be opened, you can still SSH in using the SSVM's link-local IP address (shown in the SSVM's details view) and execute the script there.

If the script says it cannot communicate with the management server on port 8250, check the iptables rules on the management server and make sure all traffic is allowed on port 8250. A quick way to test this is nc -v <mngmnt-server-ip> 8250; a simple search will show you how to open port 8250 in your iptables rules if it is closed.

Next, you mentioned you used CentOS 6.8, which ships older NFS versions, so execute exportfs -a on your NFS server to make sure all the NFS shares are properly exported and there are no errors.

I would also recommend waiting for the built-in "CentOS 5.5 no GUI" KVM template to finish downloading and show a Ready status of 'Yes' before you start importing your own templates and ISOs to run VMs from.

Finally, if ssvm-check.sh reports everything is good and the download still does not start, run service cloud restart inside the SSVM and check that the service actually got a PID with service cloud status, as the older system VM templates sometimes need a manual service cloud start even after the restart command. Restarting the cloud service in the SSVM triggers the download of all remaining templates and ISOs to resume. (The commands are collected below.)

Side note: the system VMs use a Debian kernel if you want to do some more troubleshooting. Hope this helps.
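For reference, a condensed checklist of the commands mentioned above; the management server IP is a placeholder:

# Inside the SSVM: run the built-in health check
/usr/local/cloud/systemvm/ssvm-check.sh

# Inside the SSVM: test connectivity to the management server on port 8250
nc -v <mngmnt-server-ip> 8250

# On the NFS server: re-export all shares
exportfs -a

# Inside the SSVM: restart the cloud service and confirm it got a PID
service cloud restart
service cloud status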
I have three managed servers running in a WebLogic domain. Now I need to configure Node Manager so that I can stop and start each of the managed servers.
My question is: do I need to define a separate 'Machine' and 'Node Manager port' for each of the managed servers? Or can a single 'Machine' and 'Node Manager port' combination be used to start/stop multiple managed servers?
Thanks in advance
Yes, it is possible, but the configuration depends on how your Machines are distributed across your hosts, which determines whether you need to use different ports, etc. Oracle provides quite a detailed tutorial on this here; its contents are too long to replicate on SO.
I recommend you follow the tutorial and then post any specific questions you may have as a new question.
Step 1: Start the WebLogic server, open the WebLogic console in a browser, and log in with the correct credentials.
Step 2: Expand Environment and click the Machines link.
Step 3: Click New to add a new machine and give it a name. Then click Next.
Step 4: Enter the Listen Address (server IP) and the port on which Node Manager will run, then click Finish.
Please use the link below for a more detailed walkthrough:
https://fi-sm.com/blog/how-to-add-new-server-on-admin-server-in-web-logic-12c-server/
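To tie this back to the original question: one Node Manager per host (one Machine, one port) can control every managed server assigned to that Machine. A minimal WLST sketch, in which all credentials, hostnames, ports, paths, and server names are hypothetical:

# Connect to the single Node Manager on this host and drive several servers
nmConnect('weblogic', 'welcome1', 'host1.example.com', '5556', 'mydomain', '/u01/domains/mydomain', 'ssl')
nmStart('ManagedServer1')   # start the first managed server
nmStart('ManagedServer2')   # the same Node Manager starts a second one
nmKill('ManagedServer1')    # stop a managed server again
nmDisconnect()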
I have a FinalBuilder job that, as a final step, deploys the compiled app and DLLs to a network share on another server.
About 50% of the time, it just fails with
Win32 Error : The network path was not found
Changing the target from \\myserver\myshare to \\myserver.mydomain.com\myshare will often fix it temporarily - the first 2-3 runs after modifying the build file will work, after which it'll start failing again.
The FinalBuilder task runs with domain credentials that grant admin access on the target box, and copying files to/from shares on that server via Windows Explorer works reliably.
I'm completely stumped.
Finally tracked this down: the target server was a virtual machine, and the Hyper-V host network settings were set to "Virtual Network" instead of "Virtual Teamed Network".
I have no idea what that means, but having changed it to Virtual Teamed Network, it works flawlessly. O_o
The network path was not found.
This is related to DNS/WINS not being able to look up the name. When I have seen this, it has been due to problems with our DNS servers.
Adding an entry to the lmhosts file would prevent the system from looking in DNS/WINS.
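For example, a hypothetical lmhosts entry (typically %SystemRoot%\System32\drivers\etc\lmhosts on the build machine) pinning the server name to a fixed IP; the address and name here are made up:

192.168.1.20    MYSERVER    #PRE

The #PRE tag preloads the entry into the NetBIOS name cache so it is consulted before any DNS/WINS lookup.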
If that does not work, another option to consider is to increase the number of retries on the action. This can be done from the "Runtime" tab of the action by clicking on "Timing Properties".