Moodle on AWS ECS

I am looking to host Moodle LMS on AWS ECS.
I have seen some resources such as Bitnami's Docker images for Moodle, but I'm not sure whether that is the best or easiest option.
Is there any other good documentation, or a set of steps, that I can follow to set it up?

After you have created an AWS account, log in to your account:
Search for EC2 and, under the search results, click on EC2.
Create and then launch an EC2 instance [recommendation: Ubuntu].
You are given public and private IP addresses and a DNS name.
An access key file [anyname.pem] is provided; save that file because you will need it to connect with the SFTP file manager client.
Wait for the instance state to show running, then connect to that EC2 instance.
Uploading Moodle files to the EC2 instance
Install the FileZilla client and launch it.
Go to the Edit tab and click on Settings.
In the Settings dialog, select SFTP, then add the access key file downloaded from EC2; save and close that dialog.
Then, under Host, enter your public IP address, enter the username [ubuntu], enter port 22, then click Quickconnect.
When the file directory loads, you need to set up Apache virtual hosts (a minimal example is sketched after these steps).
Check this documentation to create Apache virtual hosts:
https://www.digitalocean.com/community/tutorials/how-to-set-up-apache-virtual-hosts-on-ubuntu-18-04-quickstart
After setting up the virtual hosts, follow the Moodle docs for the step-by-step installation instructions for Moodle.
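For reference, a minimal Apache virtual host for Moodle could look like the sketch below; the server name and document root are placeholders, and the linked tutorial covers the details of enabling the site.

<VirtualHost *:80>
    ServerName moodle.example.com
    ServerAdmin webmaster@moodle.example.com
    DocumentRoot /var/www/moodle.example.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

# Enable the site and reload Apache (file name is a placeholder):
sudo a2ensite moodle.example.com.conf
sudo systemctl reload apache2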

The Moodle docs provide full step-by-step installation instructions for Moodle on Ubuntu, covering all the standard dependencies. You can find the instructions here:
https://docs.moodle.org/310/en/Step-by-step_Installation_Guide_for_Ubuntu
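As a rough outline of what that guide walks through, a minimal sketch of the dependency install is shown below; the exact package names (and PHP version) vary by Ubuntu release, and the Moodle branch is an assumption matching the 3.10 guide linked above.

sudo apt update
sudo apt install apache2 mysql-server php libapache2-mod-php \
    php-mysql php-xml php-mbstring php-curl php-zip php-gd php-intl php-soap \
    graphviz aspell ghostscript clamav git
# Fetch the Moodle code and point your virtual host's DocumentRoot at it:
sudo git clone -b MOODLE_310_STABLE https://github.com/moodle/moodle.git /var/www/html/moodle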
Otherwise, you can certainly use third-party images such as Bitnami's Docker images, but the instructions for getting those working, and for maintaining them going forward, would need to come from the organisation providing the image.
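If you do go the Bitnami route (the more natural fit for ECS, since ECS runs containers rather than plain EC2 installs), a minimal local sketch is shown below. The image names are Bitnami's published bitnami/mariadb and bitnami/moodle images; the environment variable names are assumptions to be checked against the image README, and an ECS task definition would express the same containers, ports, and variables.

docker network create moodle-net
# Database container (empty password only for a quick local test):
docker run -d --name mariadb --network moodle-net \
    -e ALLOW_EMPTY_PASSWORD=yes -e MARIADB_DATABASE=bitnami_moodle -e MARIADB_USER=bn_moodle \
    bitnami/mariadb:latest
# Moodle container pointing at the database container:
docker run -d --name moodle --network moodle-net -p 80:8080 -p 443:8443 \
    -e ALLOW_EMPTY_PASSWORD=yes -e MOODLE_DATABASE_HOST=mariadb \
    -e MOODLE_DATABASE_USER=bn_moodle -e MOODLE_DATABASE_NAME=bitnami_moodle \
    bitnami/moodle:latest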


Apache CloudStack: No templates showing when adding instance

I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM are set up on the same machine. The management server is running without problems, and I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I tried to add an instance, no templates were shown in the second stage, as you can see in the screenshot.
However, I am able to see two templates under the Templates link in the web UI.
But when I select the template and navigate to the Zone tab, I see "Timeout waiting for response from storage host" and the Ready field shows "no".
When I check the management server logs, it seems there is an error when CloudStack tries to mount the secondary storage for use. The segment below from the cloudstack-management.log file describes this error.
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId: 279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":
{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: GetRootDir for nfs://172.16.10.2/export/secondary failed due to com.cloud.utils.exception.CloudRuntimeException: Unable to mount 172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-817a77b6cfc6 due to mount.nfs: Connection timed out
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:2080)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:1829)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:265)
    at com.cloud.agent.Agent.processRequest(Agent.java:525)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
","wait":0}}] }
Can anyone please guide me on how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed further.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP by making the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would cut off my internet access. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static with the same configuration settings as above and ran the CloudStack management server. Everything worked fine until I bumped into this template issue. Please help me figure out what might have gone wrong.
I know I'm late, but for people trying this out in the future, here it goes:
I hope you successfully added a host as described in the Quick Install Guide before you changed your IP to static, as that step auto-configures VLANs for the different traffic types and creates two bridges, generally named 'cloud' or 'cloudbr'. CloudStack uses the Secondary Storage System VM (SSVM) to do all the storage-related operations in each zone and cluster. What seems to be the problem is that the SSVM is not able to communicate with the management server on port 8250. To confirm, try manually mounting the NFS server's mount points from the SSVM shell. You can SSH into the SSVM using the command below:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<private or link-local IP address of the SSVM>
I suggest you run /usr/local/cloud/systemvm/ssvm-check.sh after SSHing into the secondary storage system VM (assuming it is running and has its private, public, and link-local IP addresses). If that doesn't help much, take a look at the secondary storage troubleshooting docs for CloudStack.
If anyone runs into similar issues in the future, I would further recommend the following checks (a sketch of the corresponding commands follows this answer). Check that the SSVM is running and in the "Up" state in the System VMs section of the Infrastructure tab, and that you can open a console session to it from the browser. If that works, run the ssvm-check.sh script mentioned above, which systematically checks every point of operation the SSVM depends on. Even if a console session cannot be opened, you can still SSH in using the SSVM's link-local IP address, which you can find in the SSVM's details, and then execute the script.
If the script says it cannot communicate with the management server on port 8250, check the iptables rules on the management server and make sure all traffic is allowed on port 8250. A quick way to test this is nc -v <mngmnt-server-ip> 8250, and a simple search will show how to add port 8250 to your iptables rules if it is not open.
Next, you mentioned you used CentOS 6.8, which probably ships an older version of NFS, so execute exportfs -a on your NFS server to make sure all the NFS shares are properly exported and there are no errors.
I would also recommend waiting for the built-in "CentOS 5.5 (no GUI) KVM" template to finish downloading and show a Ready status of 'Yes' before you start importing your own templates and ISOs to run on VMs.
Finally, if ssvm-check.sh shows everything is good and the download still does not start, run service cloud restart inside the SSVM and check that the service actually got a PID with service cloud status; older system VM templates sometimes require a manual service cloud start even after the restart command. Restarting the cloud service in the SSVM triggers the download of all remaining templates and ISOs.
Side note: the system VMs use a Debian kernel, in case you want to do some more troubleshooting. Hope this helps.
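A minimal sketch of the checks described above, run as shell commands; the NFS address is the one from the question, and the management server IP is a placeholder.

# From the SSVM (or the KVM host), test whether the NFS export is reachable and mountable:
showmount -e 172.16.10.2
mkdir -p /tmp/secstorage-test
mount -t nfs 172.16.10.2:/export/secondary /tmp/secstorage-test && umount /tmp/secstorage-test
# From the SSVM, verify it can reach the management server on port 8250:
nc -v <mngmnt-server-ip> 8250
# On the management server (CentOS 6 / iptables), open port 8250 if it is blocked:
iptables -I INPUT -p tcp --dport 8250 -j ACCEPT
service iptables save
# On the NFS server, re-export all shares:
exportfs -a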

Can we configure a single node manager for multiple managed servers in Weblogic

I have three managed servers running in a WebLogic domain. Now I need to configure the Node Manager so that I can stop and start each of the managed servers.
My question is: do I need to define a separate 'Machine' and 'Node Manager port' for each of the managed servers, or can a single 'Machine' and 'Node Manager port' combination be used to start/stop multiple managed servers?
Thanks in advance.
Yes, it is possible, but the configuration depends on how your Machines are distributed across your hosts and on whether you need to use different ports, etc. Oracle provides quite a detailed tutorial on this here; its contents are too extensive to replicate on SO.
I recommend you follow the tutorial and then post any specific questions you may have as a new question.
Step 1: Start the WebLogic server, open the WebLogic console in a browser, and log in with the correct credentials.
Step 2: Expand Environment and click on the Machines link.
Step 3: Click on New to add a new machine and give the machine a name, then click Next.
Step 4: Enter the Listen Address (server IP) and the port on which the Node Manager will run, then click Finish.
Please use the link below for a more detailed walkthrough:
https://fi-sm.com/blog/how-to-add-new-server-on-admin-server-in-web-logic-12c-server/
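To illustrate that one Node Manager per machine is enough, the sketch below starts and stops two managed servers through a single Node Manager connection from WLST. It assumes the Node Manager is already running on that machine; the WLST path, domain name, domain directory, port, and credentials are placeholders.

# Launch WLST and drive the Node Manager (values below are placeholders):
$MW_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
nmConnect('weblogic', '<password>', '<machine-listen-address>', '5556', 'base_domain', '/u01/domains/base_domain', 'plain')
nmStart('managed_server_1')
nmStart('managed_server_2')
nmKill('managed_server_2')
nmDisconnect()
EOF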

Add endpoint in Cloud Integration service

I failed to add an endpoint in the Cloud Integration service, following the steps below:
Log in to Bluemix.
Create a Cloud Integration service.
Click on the Secure Connections tab.
Download the connector and install it on the controller node.
Provide the public key.
Refresh the connection. It should show connected.
Try to add the endpoint. It gives the error "fail to connect endpoint".
Each time you create a new basic connection, a new installation .tar file is created that is specific to that installation. They all have unique /home/nativeapiadmin/mgmt.tunnel files that are configured specifically for that connector. If you want to reuse an existing copy of the installation, you must edit the mgmt.tunnel file with the correct host name or IP address and port numbers, then restart the connector.
If the above does not resolve your problem, run the following procedure to clean up and recreate the endpoint (a sketch of the host-side cleanup commands follows this list):
Delete your basic connector from Cloud Integration.
Create a new basic connector, with a new name.
Download the Linux 64-bit installer and make sure the file size is around 844,128 bytes.
Remove the older connector from the system.
Delete the "nativeapiadmin" user and the user's directory.
Delete the known_hosts file in the /root/.ssh directory.
Reinstall the connector; please read the INSTALL_README file that is included in the zip.
Upload the id_rsa.pub key from the same machine on which the connector was installed.
Create the endpoint.
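A minimal sketch of the host-side cleanup steps above, assuming the connector's default locations; adjust the paths if your installation differs.

# Remove the old connector user and its home directory:
sudo userdel -r nativeapiadmin
# Remove the stale SSH known_hosts file used by the connector:
sudo rm /root/.ssh/known_hosts
# Then reinstall the connector following the INSTALL_README in the downloaded archive.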

How can I setup a cell and collective in Bluemix

I'm trying to set up a cell and a collective in a WAS for Bluemix service. I've found a few steps online for a generic Liberty setup, but nothing specific to a Bluemix collective or cell. Can someone point me in the right direction?
At a high level, you should be able to do the following for a cell:
Log in to the Admin Console as wsadmin.
Create a server.
Open all the ports on each host for each server created by running the openFirewallPorts.sh script. Below you will find the standard ports for a new server, given that only one server exists on each host. You may need to open more ports for additional servers on the same host, since ports can be unique per server. Try the following:
cd WAS_HOME/virtual/bin
export serverPorts=2810:TCP,2810:UDP,8880:TCP,8880:UDP,9101:TCP,9101:UDP,9061:TCP,9061:UDP,9080:TCP,9080:UDP,9354:TCP,9354:UDP,9044:TCP,9044:UDP,9443:TCP,9443:UDP,5060:TCP,5060:UDP,5061:TCP,5061:UDP,11005:TCP,11005:UDP,11007:TCP,11007:UDP,9633:TCP,9633:UDP,7276:TCP,7276:UDP,7286:TCP,7286:UDP,5558:TCP,5558:UDP,5578:TCP,5578:UDP
sudo ./openFirewallPorts.sh -ports $serverPorts -persist true
Start your server.
Deploy your application.
There are a few slight differences for a Liberty collective, but again, at a high level, you should be able to try the following:
Switch your user to wsadmin, or ssh to your host using wsadmin / password.
On each host, create a server and join it to the collective. Be sure to use the full host name of the controller for the --host parameter.
cd WAS_HOME/bin
./server create server
./collective join server --host=yourhostname --port=9443 --user=wsadmin --password=xxxxxxxx --keystorePassword=yyyyyyyy
Accept the chain certificate (y/n) y
Save the output from each join so you can paste it into each host's application server.xml file before deploying your application.
Install the features required by your application on each host. The features listed below are an example.
cd /opt/IBM/WebSphere/Liberty/bin
./featureManager install --acceptLicense ejblite-3.2 websocket-1.0 jsp-2.3 jdbc-4.1 jaxrs-2.0 cdi-1.2 beanValidation-1.1
NOTE: Output from this command will contain a message similar to:
chmod: changing permissions of `/opt/IBM/WebSphere/Liberty/bin/featureManager': Operation not permitted
This is OK. You should see this message upon completion:
Product validation completed successfully.
Update your application's server.xml file with the information saved in Step 2.
Start your server.
Deploy your application.
Verify your application is reachable at <hostname>:9080/appname (a quick check is sketched below).
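A quick way to check that the server is up and the application responds; the host name and context root are placeholders, and the server name matches the one created above.

cd /opt/IBM/WebSphere/Liberty/bin
./server status server
curl -I http://<hostname>:9080/appname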

Configure users using OpenSIPS 1.11 (Ubuntu 14.04)

After installing OpenSIPS (it would be better if I didn't have to use the OpenSIPS Control Panel), how can I add users and make a test call?
Note:
I am a newbie, and I am following this guide for installation:
http://www.opensips.org/Documentation/Install-CompileAndInstall-1-11
Instead of using the Control Panel, you can use opensipsctl to add new subscribers. All you need to do is:
opensipsctl add liviu@opensips.org mypassword
For more help on opensipsctl, simply type:
opensipsctl
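For example, you could add two test subscribers and then check their registrations once two SIP clients have registered against the server; the user names, password, and domain below are placeholders, and this assumes a database backend is configured for opensipsctl.

# Add two test subscribers (placeholder credentials and domain):
opensipsctl add alice@<your-server-ip> alicepass
opensipsctl add bob@<your-server-ip> bobpass
# After registering two SIP clients with these credentials, list the location records:
opensipsctl ul show

With both clients registered, dialling one user from the other should place a test call through OpenSIPS.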
For any user who is trying to install the package under Ubuntu using the instructions from the official manual, please make sure that you also read the setup manual from the GitHub page, sections [C] and [D]:
https://github.com/OpenSIPS/opensips/blob/master/INSTALL
I tried to do a fresh setup of OpenSIPS on a virtual machine to test the functions. The packages provided in the Jessie branch of Debian (which is supported by Ubuntu 14.04) do not include the MySQL database deployment.
For a quick test I'm using DBText as the DB engine, and the command to add a user will not succeed, because the DBText engine requires an email field that the opensipsctl interface doesn't handle. So we should add subscribers by adding lines to the subscriber table, which is located under the path /usr/local/etc/opensips/dbtext, e.g.:
1:brian:192.168.186.129:password:123456:xxx:xxx:xxx
2:julia:192.168.186.129:password:123456:xxx:xxx:xxx
The example above uses the virtual machine's IP address as the domain.
Good luck.