Image scanning not working in Sysdig Secure - ibm-cloud

I am working with Sysdig Secure to enable image scanning. I have followed all the steps to enable image scanning (https://docs.sysdig.com/en/image-scanning.html#UUID-4db6c413-043d-8661-7599-c12c0ce0d7cf_UUID-1b8910ba-b5f9-4150-9854-67419de3f302). On IBM Cloud, our plan is the Graduated Tier - Sysdig Secure + Monitor, which covers both Sysdig Secure and Monitor. Still, I am not able to scan images of my OpenShift cluster, whereas Monitor is working fine. Please help.

Related

Moodle on AWS ECS

I am looking to host Moodle LMS on AWS ECS.
I have seen some resources like Bitnami's Docker images for Moodle, but I'm not sure if that's the best or the easiest option.
Is there any other good documentation or steps that I can follow to set it up?
After you have created an AWS account, log in to your account:
Search for EC2 and, under the search results, click on EC2.
Create and then launch an EC2 instance [recommendation: Ubuntu].
You are given public and private IP addresses and a DNS name.
An access key file [anyname.pem] will be provided; save that file because you will need it to connect with the SFTP client.
Wait for the instance state to show running, then connect to that EC2 instance.
Uploading Moodle files to the EC2 instance
Install the FileZilla client and launch it.
Go to the Edit tab and click on Settings.
In the settings dialog, select SFTP, then add the access key file you downloaded from EC2, save, and close that tab.
Then, under Host, enter your public IP address, enter the username [ubuntu], set the port to 22, and click Quickconnect. (A command-line alternative is sketched at the end of this answer.)
When the file directory loads, you need to set up Apache virtual hosts.
Check this documentation to create Apache virtual hosts (a minimal example is also sketched at the end of this answer):
https://www.digitalocean.com/community/tutorials/how-to-set-up-apache-virtual-hosts-on-ubuntu-18-04-quickstart
After setting up the virtual hosts, follow the Moodle docs, which provide full step-by-step installation instructions for Moodle on Ubuntu and cover all the standard dependencies:
https://docs.moodle.org/310/en/Step-by-step_Installation_Guide_for_Ubuntu
Otherwise, you can certainly use third-party images such as Bitnami's Docker images, but obviously the instructions for getting those working, and maintaining them going forward, would need to come from the organisation providing the image.
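If you prefer the command line to FileZilla, the same connection can be made with the stock OpenSSH sftp client; this is a minimal sketch, assuming the key file name and public IP from the steps above:

# Connect over SFTP using the downloaded access key (names are placeholders)
chmod 400 anyname.pem
sftp -i anyname.pem ubuntu@<public-ip>
# then, inside the sftp session, upload the files: put -r /path/to/moodle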
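For the virtual host step, here is a minimal sketch of an Apache virtual host for Moodle, assuming the files live under /var/www/moodle and using a placeholder domain; adapt the names and paths to your setup:

# /etc/apache2/sites-available/moodle.conf (hypothetical file name)
<VirtualHost *:80>
    ServerName moodle.example.com          # placeholder domain
    DocumentRoot /var/www/moodle           # assumed Moodle directory
    <Directory /var/www/moodle>
        AllowOverride All
        Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/moodle-error.log
    CustomLog ${APACHE_LOG_DIR}/moodle-access.log combined
</VirtualHost>
# Enable it with: sudo a2ensite moodle.conf && sudo systemctl reload apache2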

Docker Security: Is just blocking the port sufficient?

I have a little problem. I've tried to secure the MongoDB in my Docker container (by the way, I'm using docker-compose) by restricting access from outside the Docker network. I simply removed the ports from the mongo service in docker-compose, and it worked: I could not access it from outside. But is that enough? And is it the right decision? Maybe someone has another solution.
Here are a few best practices you can follow from a security point of view (a Dockerfile sketch illustrating several of them follows this list):
Prefer a minimal base image: The base image you select can itself have vulnerabilities, so scan it for security vulnerabilities before selecting it. Prefer a minimal base image, since fewer packages mean a smaller attack surface and usually fewer vulnerabilities.
Least-privileged user: If no user is specified in the Dockerfile, the container runs as root by default. To restrict access, create a dedicated user and user group in the Docker image.
Sign and verify images: Since we run Docker images in production, it is important to authenticate an image before using it. Sign your Docker images, and verify the signatures before running them.
Use security software and linters: Use security software to scan your Docker images for vulnerabilities, and use a linter that statically analyzes your Dockerfile and warns when there is a security problem.
Don't leak sensitive information into Docker images: Secrets must be kept out of the Dockerfile. If you copy a secret in, it gets cached in an intermediate image layer; to avoid this, use a multi-stage build or Docker secrets.
Credits: Thanks to Liran Tal and Omer Levi Hevroni; I learned these best practices from their blog, which covers these points in more detail along with a few more.
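As an illustration of several of these points, here is a minimal Dockerfile sketch combining a slim base image, a dedicated non-root user, and a multi-stage build; the base image, user name, and paths are assumptions (a Node.js app is assumed), not a definitive setup:

# Build stage: dependencies (and any build-time artifacts) stay in this intermediate image
FROM node:18-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: only the built app is copied over, and the container runs as a non-root user
FROM node:18-slim
RUN groupadd -r app && useradd -r -g app app   # dedicated least-privileged user and group
WORKDIR /app
COPY --from=build /app /app
USER app                                       # drop root before the process starts
CMD ["node", "server.js"]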

Deploy code to multiple production servers under a load balancer without continuous deployments

I am the only developer (full-stack) in my company, and right now I have too much other work to automate the deployments myself. In the future, we may hire a DevOps person for this.
Problem: We have 3 servers under a load balancer. I don't want to take the 2nd and 3rd servers out of rotation until the 1st server is updated, and then repeat the same for the 2nd and 3rd, because a single server might face huge traffic initially and could fail at some point before the other servers go live.
                                Server 1
Users ----> Load Balancer ----> Server 2 ----> Database
                                Server 3
Personal opinion: Is there a way to pull the code by running scripts on the load balancer? I could replace the traditional DigitalOcean load balancer with an Nginx server, making it a reverse proxy.
NOTE: I know there are plenty of other questions asked on Stack Overflow about this, but none of them solves my problem.
Solutions I know
Git hooks - I know a bit about Git hooks, but I don't want to use them, because if I commit to the master branch by mistake, it would sync to production and create havoc on the live server for live users.
Open multiple tabs of the servers and do it manually (current scenario). Believe me, it's a pain in the ass :)
Any suggestions or redirects to the solutions will be really helpful for me. Thanks in advance.
One solution is to write an Ansible playbook for this. With Ansible, you can specify that the playbook run against one host at a time, and as the last step you can include a verification check that tests whether your application responds with status code 200, or queries some endpoint that reports your application's status. If the check fails, Ansible stops the execution. For example, in your case, Server 1 deploys fine, but on Server 2 the deployment fails: the playbook stops, and you still have Servers 1 and 3 running.
I have done it myself. Works fine in environments without continuous deployments.
Here is one example:
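A minimal sketch of such a playbook, assuming a hypothetical inventory group web_servers, a placeholder repository URL, and a /health endpoint; adapt these to your application:

# deploy.yml - rolling deployment, one host at a time
- hosts: web_servers          # hypothetical inventory group for the 3 servers
  serial: 1                   # take only one server out of rotation at a time
  tasks:
    - name: Pull the latest release
      git:
        repo: https://example.com/your/app.git   # placeholder repository
        dest: /var/www/app
        version: production                      # deploy a release branch, not master

    - name: Verify the application responds with HTTP 200
      uri:
        url: "http://{{ inventory_hostname }}/health"   # assumed health endpoint
        status_code: 200
      register: health
      until: health.status == 200
      retries: 3
      delay: 5
      # If this check keeps failing, Ansible stops here and the remaining servers keep serving traffic.

Run it with ansible-playbook -i inventory deploy.yml; because of serial: 1, a failure on one server leaves the others untouched.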

Apache CloudStack: No templates showing when adding instance

I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM are set up on the same machine, and the management server is running without problems. I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I try to add an instance, no templates are shown in the second step, as you can see in the screenshot.
However, I am able to see two templates under the Templates link in the web UI.
But when I select a template and navigate to the Zone tab, I see "Timeout waiting for response from storage host" and the Ready field shows "no".
When I check the management server logs, it seems there is an error when CloudStack tries to mount the secondary storage. The segment below from the cloudstack-management.log file shows this error.
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId: 279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":
{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException:
GetRootDir for nfs://172.16.10.2/export/secondary failed due to
com.cloud.utils.exception.CloudRuntimeException: Unable to mount
172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-817a77b6cfc6
due to mount.nfs: Connection timed out
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:2080)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:1829)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:265)
    at com.cloud.agent.Agent.processRequest(Agent.java:525)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
","wait":0}}] }
Can anyone please guide me on how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed further.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP by making the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would stop my internet access. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static using the same configuration settings as above and ran the CloudStack management server. Everything worked fine until I bumped into this template issue. Please help me figure out what might have gone wrong.
I know I'm late, but for people trying this out in the future, here it goes:
I hope you successfully added a host as described in the Quick Install Guide before you changed your IP to static, as that step auto-configures VLANs for the different traffic types and creates two bridges, generally named 'cloud' or 'cloudbr'. CloudStack uses the Secondary Storage VM (SSVM) for all storage-related operations in each zone and cluster. What seems to be the problem is that the SSVM is not able to communicate with the management server on port 8250. If that is not the issue, try manually mounting the NFS server's mount points from the SSVM shell (a sketch follows below). You can SSH into the SSVM using the command below:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<private or link-local IP address of SSVM>
I suggest you run /usr/local/cloud/systemvm/ssvm-check.sh after SSHing into the secondary storage VM (assuming it is running and has its private, public, and link-local IP addresses). If that doesn't help much, take a look at the secondary storage troubleshooting docs for CloudStack.
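For the manual mount test mentioned above, here is a minimal sketch from inside the SSVM shell, using the NFS path from the error in the question (the mount point is a placeholder):

# Inside the SSVM: try mounting the secondary storage export by hand
mkdir -p /tmp/secstorage
mount -t nfs 172.16.10.2:/export/secondary /tmp/secstorage
# If this also times out, the problem is in the network or NFS server, not CloudStack itself
umount /tmp/secstorage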
If anyone runs into similar issues in the future, I would further recommend the following checks (the commands are collected in one block after this list):
Check that the SSVM is running and in the "Up" state in the System VMs section of the Infrastructure tab, and that you are able to open a console session to it from the browser.
If that works, run the ssvm-check.sh script mentioned above, which systematically checks every point of operation that the SSVM performs. Even if a console session cannot be opened, you can still SSH in using the SSVM's link-local IP address (shown in the SSVM's details) and then execute the script.
If the script says it cannot communicate with the management server on port 8250, check the iptables rules on the management server and make sure all traffic is allowed on port 8250. A quick way to test this is nc -v <mngmnt-server-ip> 8250. A simple search will show you how to open port 8250 in your iptables rules if it is closed.
Next, you mentioned you use CentOS 6.8, which ships older NFS versions, so execute exportfs -a on your NFS server to make sure all the NFS shares are properly exported and there are no errors.
Wait for the built-in "CentOS 5.5 (no GUI) KVM" template to finish downloading and for its Ready status to show "Yes" before you start importing your own templates and ISOs to run on VMs.
Finally, if the ssvm-check.sh script shows everything is good and the download still does not start, run service cloud restart inside the SSVM and check that the service actually got a PID with service cloud status; older system VM templates sometimes need a manual service cloud start even after the restart command. Restarting the cloud service in the SSVM triggers the download of all remaining templates and ISOs.
Side note: the system VMs use a Debian kernel, in case you want to do some more troubleshooting. Hope this helps.
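For convenience, the commands from the checklist above, collected in one block; the management server IP is a placeholder, and the iptables rule assumes a default INPUT chain on CentOS 6:

# Inside the SSVM: run the built-in health check
/usr/local/cloud/systemvm/ssvm-check.sh

# From the SSVM: test connectivity to the management server on port 8250
nc -v <mngmnt-server-ip> 8250

# On the management server (CentOS 6): open port 8250 if iptables blocks it
iptables -I INPUT -p tcp --dport 8250 -j ACCEPT
service iptables save

# On the NFS server: re-export all shares and list what is exported
exportfs -a
exportfs -v

# Inside the SSVM: restart the cloud service and confirm it got a PID
service cloud restart
service cloud status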

Docker: multiple linked containers for each customer

I'm developing a platform that monitors emails, saves the results in a Mongo database (through Parse Server), and displays the results in a web app (using AngularJS).
Basically, for each customer I want an SMTP server, a Parse Server, a MongoDB instance, and a web platform.
I thought of using Docker for more scalability, and the idea is to automatically create the containers when a user signs up on my website, but I don't really understand how to link these containers together: web1|smtp1 connected to parse1 connected to mongo1, and web2|smtp2 connected to parse2 connected to mongo2.
In the end, I want the customers to access the web app through web1.website.com, so I think I should also use HAProxy.
I'm just wondering if it's really the best way to do it, as I'm going crazy with the automation process, and whether you have any tips for doing this.
With Docker (specifically Docker Compose), linking containers together is very easy; in fact, it happens out of the box! If you compose all your containers at the same time, a Docker network is created for your "composed app", and all the containers can see each other.
See the documentation here.
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
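To make this concrete, here is a minimal docker-compose sketch for one customer's stack; the service names (web1, smtp1, parse1, mongo1), images, and environment values are assumptions for illustration, not a definitive setup:

# docker-compose.yml for customer 1 (hypothetical names and images)
version: "2"
services:
  mongo1:
    image: mongo
    # no "ports:" entry, so the database is reachable only on the internal network
  parse1:
    image: parseplatform/parse-server        # assumed Parse Server image
    environment:
      PARSE_SERVER_APPLICATION_ID: customer1App                     # placeholder app id
      PARSE_SERVER_MASTER_KEY: change-me                            # placeholder secret
      PARSE_SERVER_DATABASE_URI: mongodb://mongo1:27017/customer1   # resolves by service name
    depends_on:
      - mongo1
  smtp1:
    image: namshi/smtp                       # placeholder SMTP relay image
  web1:
    image: nginx                             # placeholder for the AngularJS front end
    ports:
      - "80:80"                              # only the web front end is published
    depends_on:
      - parse1

Each customer then gets their own Compose project (for example docker-compose -p customer1 up -d), which puts each stack on its own network, and a front-end proxy such as HAProxy or Nginx routes web1.website.com, web2.website.com, and so on to the matching stack.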