LibreOffice pipe and socket connections on AWS Lambda fail in _get_remote_context (pyuno, sockets)

Hi folks, I am trying to use LibreOffice in a custom runtime environment deployed as an ECR image on AWS Lambda. This whole setup works on Windows as well as on Linux, but as soon as I push it to Lambda as an ECR image, the Lambda function fails to open a pipe or socket to communicate with LibreOffice. The same image works fine on my local Linux system.
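For reference, the connection is attempted roughly like this (a minimal sketch, not my exact code; the port and profile path are placeholders). Note that on Lambda only /tmp is writable, so HOME and the LibreOffice user profile likely have to live there:

export HOME=/tmp
soffice --headless --invisible --norestore \
  -env:UserInstallation=file:///tmp/lo_profile \
  --accept="socket,host=127.0.0.1,port=2002;urp;" &

# a pyuno client would then resolve:
#   uno:socket,host=127.0.0.1,port=2002;urp;StarOffice.ComponentContext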

Related

Use Google Authenticator 2FA in VS Code SSH with Google Compute Engine VM

My VS Code has an SSH-FS connection to my GCE VM and can view/edit files and folder structures directly from the sidebar of the VS Code client app on my macOS machine. To strengthen the security of my VM, I enabled 2FA on the instance by adding the metadata enable-oslogin-2fa=TRUE and enable-oslogin=TRUE (see the sketch after the environment details below). However, the VS Code SSH connection, which was working perfectly, now fails with the error "error while connecting to ssh fs all configured authentication methods failed". Is there a way to keep the 2FA setting on my VM instance while maintaining an automatic SSH connection for VS Code?
Source of Google 2FA
https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication#configure_2fa
Environment
VS Code: 1.52.1
Google Compute Engine VM Instance: Debian GNU/Linux 9.13 (stretch)
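For reference, the metadata in question would be set roughly like this (a sketch; the instance name and zone below are placeholders, not my actual values):

gcloud compute instances add-metadata my-vm --zone=us-central1-a \
  --metadata enable-oslogin=TRUE,enable-oslogin-2fa=TRUE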

Installing RavenDB on the Raspberry Pi

I'm currently trying to install RavenDB 4.1.5-patch-41012 for the Raspberry Pi on my Raspberry Pi 3 Model B running Raspbian Stretch Lite. When I run the run.sh script, it gives an error about not being able to open a browser, even if I set Setup.Mode to None in the settings. After that I'm able to run server commands, but I'm not able to access RavenDB Studio or the RavenDB server locally or over my local network. Are there extra steps I have to take, or things I have to keep in mind, when installing RavenDB on the Raspberry Pi?
Raspbian Stretch Lite doesn't ship with a local web browser, therefore you may need to enable outside access before using the web setup. The following link describes the server's configuration options: https://ravendb.net/docs/article-page/4.1/csharp/server/configuration/configuration-options
Modify Server/settings.json so that it fits your security needs, as follows (replace 10.0.0.90 with your Pi's IP).
Totally unsecured access from anywhere (ATTENTION: this will give anyone who can reach this instance access to the database):
{
    "ServerUrl": "http://0.0.0.0:8080",
    "PublicServerUrl": "http://10.0.0.90:8080",
    "Setup.Mode": "None",
    "Security.UnsecuredAccessAllowed": "PublicNetwork"
}
Access from the host machine or other machines on your local LAN:
{
    "ServerUrl": "http://10.0.0.90:8080",
    "Setup.Mode": "None",
    "PublicServerUrl": "http://10.0.0.90:8080",
    "Security.UnsecuredAccessAllowed": "PrivateNetwork",
    "License.Eula.Accepted": true
}
Browsing to http://10.0.0.90:8080 should work at this point.
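A quick way to verify reachability from another machine on the LAN, before involving a browser, is a plain HTTP request (a hedged check; substitute your Pi's IP):

curl -I http://10.0.0.90:8080

Any HTTP response here means the server is listening and the Studio should load in a browser.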
You can also use the CLI; see: https://ravendb.net/docs/article-page/4.1/Csharp/server/configuration/command-line-arguments
Example:
cd ~/RavenDB/Server
./Raven.Server --Security.UnsecuredAccessAllowed=PublicNetwork --ServerUrl=http://0.0.0.0:8080 --PublicServerUrl=http://10.0.0.90:8080 --Setup.Mode="None" --DataDir=/mnt/ExternalDisk/RavenDB
As a side note: I recommend pointing "DataDir" at an externally mounted USB disk rather than the default SD-card data path, if that applies to your setup.
Later on, you may want to use scripts to add RavenDB as a service on your Pi (see install-daemon.sh here: https://github.com/ravendb/ravendb/tree/v4.2/scripts/linux).
run.sh tries to start a browser the first time you start RavenDB, to give you access to it. Since you are running the Lite version, there is no such browser.
See Adi's comment on how to access RavenDB from outside the Pi machine.
You can just call Server/Raven.Server instead of run.sh to start RavenDB.

Apache CloudStack: No templates showing when adding instance

I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM host are set up on the same machine, and the management server runs without problems. I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I try to add an instance, no templates show up in the second step, as you can see in the screenshot.
However, I am able to see two templates under the Templates link in the web UI.
But when I select the template and navigate to the Zone tab, I see "Timeout waiting for response from storage host" and the Ready field shows "No".
When I check the management server logs, it seems there is an error when CloudStack tries to mount the secondary storage for use. The segment below from the cloudstack-management.log file shows this error:
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId: 279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer": {"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: GetRootDir for nfs://172.16.10.2/export/secondary failed due to com.cloud.utils.exception.CloudRuntimeException: Unable to mount 172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-817a77b6cfc6 due to mount.nfs: Connection timed out
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:2080)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:1829)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:265)
    at com.cloud.agent.Agent.processRequest(Agent.java:525)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
","wait":0}}] }
Can anyone please guide me on how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP with the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would cut off my internet access. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static using the same configuration as above and started the CloudStack management server. Everything worked fine until I ran into this template issue. Please help me figure out what might have gone wrong.
I know I'm late, but for people trying this out in the future, here goes:
I hope you successfully added a host as described in the Quick Install Guide before you changed your IP to static, as that step autoconfigures VLANs for the different traffic types and creates two bridges, generally named 'cloud' or 'cloudbr'. CloudStack uses the Secondary Storage System VM (SSVM) for all storage-related operations in each zone and cluster. What seems to be the problem is that the SSVM is not able to communicate with the management server at port 8250. If that is not the issue, try manually mounting the NFS server's export from the SSVM shell. You can SSH into the SSVM using the command below:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<private or link-local IP address of the SSVM>
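Once inside the SSVM, manually mounting the secondary storage export looks roughly like this (a sketch; the test mount point is a placeholder):

mkdir -p /mnt/nfstest
mount -t nfs 172.16.10.2:/export/secondary /mnt/nfstest

If this also times out, the problem is at the network/firewall/NFS level rather than inside CloudStack.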
I suggest you run /usr/local/cloud/systemvm/ssvm-check.sh after SSHing into the secondary storage system VM (assuming it is running and has its private, public, and link-local IP addresses). If that doesn't help much, take a look at the secondary storage troubleshooting docs for CloudStack.
If anyone runs into similar issues in the future, I would further recommend the following:
1. Check that the SSVM is running and in the "Up" state in the System VMs section of the Infrastructure tab, and that you can open a console session to it from the browser. If that works, run the ssvm-check.sh script mentioned above, which systematically checks every point of operation the SSVM depends on. Even if a console session cannot be opened, you can still SSH in using the SSVM's link-local IP address (shown in the SSVM's details) and then execute the script.
2. If the script says it cannot communicate with the management server at port 8250, check the management server's iptables rules and make sure all traffic is allowed on port 8250. A quick check is nc -v <mngmnt-server-ip> 8250 (see the sketch at the end of this answer). A simple search will show how to add port 8250 to your iptables rules if it is not open.
3. You mentioned CentOS 6.8, which probably ships older NFS versions, so execute exportfs -a on your NFS server to make sure all the NFS shares are properly exported and error-free.
4. Wait for the built-in "CentOS 5.5 (no GUI) KVM" template to finish downloading and show a Ready status of "Yes" before importing your own templates and ISOs to run on VMs.
5. If ssvm-check.sh reports that everything is good and the download still does not start, run service cloud restart inside the SSVM, and verify that the service actually got a PID with service cloud status; older system VM templates sometimes require a manual service cloud start even after the restart command. Restarting the cloud service in the SSVM re-triggers the download of all remaining templates and ISOs.
Side note: the system VMs run a Debian kernel, if you want to do more troubleshooting. Hope this helps.
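As the sketch referenced in point 2 above (the iptables commands are CentOS 6 style; the rule position and persistence step are assumptions about this setup):

# On the SSVM: test connectivity to the management server
nc -v <mngmnt-server-ip> 8250
# On the management server: allow inbound TCP 8250 and persist the rule
iptables -I INPUT -p tcp --dport 8250 -j ACCEPT
service iptables save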

I used sysprep on a VM (new portal) and lost connectivity to the machine

In the new portal, there's an icon that says 'Capture'. I assume this is for capturing an image of a VM (a snapshot), but it was greyed out. After a little reading, several posts suggested running sysprep to prepare the machine for a capture.
I ran it according to those instructions; the machine appears to reboot, but then all connectivity is lost.
Does anyone know what's going on or how to fix it? Also, are there any ways to capture a snapshot in the new portal, or do we need to use PowerShell scripts?
the machine appears to reboot, but all connectivity is lost.
This is by-design behavior. Before capturing a VM image, we should use sysprep to generalize the VM. Generalizing a VM removes all your personal account information, among other things, and prepares the machine to be used as an image.
After we run sysprep, we lose all connectivity. When running sysprep, we should select Shutdown:
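For reference, that generalize-and-shutdown run corresponds roughly to the following command from an elevated prompt (a sketch; /oobe and /shutdown are the usual switches for an Azure capture):

%WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown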
For now, we can't capture a VM image via the new Azure portal. We can use PowerShell to capture a VM image instead; we can refer to this link.
you could create a virtual machine from an image. I can't find the same function in the new portal.
We can't use the new Azure portal to create a VM from an image either; we can use PowerShell to create a VM from an image, referring to the link.
Most important:
Before you capture a VM image, you should back up your VM's VHD first, because the process deletes the original virtual machine after it's captured.
The latest version of Azure PowerShell is 3.6.0; you can install it from this page.

gsutil authentication code failure

I need to download files from GCS to my local machine. I tried gsutil config and download on my machine; on my Win 8 64-bit box everything worked fine. However, when I try the same setup on another dedicated machine, a Win Vista 32-bit box, entering the authentication code just shows 'Failure'.
Running python gsutil.py config -b pops up the browser with the request URL; I get the authentication code in the browser, but pasting it gives 'Failure'.
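For context, the flow I am attempting looks roughly like this (the bucket and file names are placeholders, not my actual data):

python gsutil.py config -b
# paste the authentication code from the browser when prompted
python gsutil.py cp gs://my-bucket/path/file.csv .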
I am new to Python and could not trace the problem back. Does gsutil have any limitation on the number of machines that can be configured for the same login/project?
Appreciate any help in debugging this.
Also, is it possible to move GCS data into Google Cloud SQL? I am assuming a script running on Google App Engine that reads from GCS, parses the data, and inserts it into Cloud SQL would work. Are there any documentation/tools for this?
Thanks
Dhurka