In other words: is it some kind of containerization/VM technology, or is it just my computer doing the whole thing? And where is the downloaded data stored?
Things I tried:
Code here; uncomment and use node index.js to run (a sketch of it is reproduced below, after item 3).
1- Checking system info using systeminformation gives (obviously not my specs):
manufacturer: 'Intel',
brand: 'Core™ i9-9880H',
The other information wasn't very useful, at least at my level of experience.
2- Testing network interfaces
iface: 'en0',
ifaceName: 'en0',
default: false,
ip4: '192.168.1.104',
I checked my host's local IP using ifconfig; it's not the same.
3- Checking external IP / network speed
I wasn't able to do that. I guess only connections to the npm server are allowed for downloading packages; fetching other webpages or connecting to speed-test servers doesn't work.
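For reference, checks 1 and 2 above came from code roughly like the following, a minimal sketch using the systeminformation package (the exact fields logged are my guess at what produced the output shown):

// index.js — probe the environment the code actually runs in (sketch)
const si = require('systeminformation');

// CPU vendor/model, e.g. manufacturer: 'Intel', brand: 'Core™ i9-9880H'
si.cpu().then((cpu) => console.log(cpu));
// Network interfaces, e.g. iface: 'en0', ip4: '192.168.1.104'
si.networkInterfaces().then((ifaces) => console.log(ifaces));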
So far it seemed like it's not my computer. BUT then I tried running npm i largest-package and immediately cut my PC's connectivity (the command should have continued running on the server, and I should have found the package installed when I reconnected); this, however, did not happen.
As for the data
I checked the cached data in the browser... very small (in my humble opinion).
Finally
Checking the documentation yields (09/03/2022): link
I'd appreciate your help wrapping my head around this.
Hello, I created a WebRTC screen-sharing feature on my website, and I want to create and host my own signaling server, in case the one present in the code ('https://socketio-over-nodejs2.herokuapp.com:443/') stops working in the future. I need a server reachable over the internet (not localhost:...).
How can I proceed? Thanks
var config = {
    openSocket: function (config) {
        var SIGNALING_SERVER = 'https://socketio-over-nodejs2.herokuapp.com:443/';
        // ...
    }
};
You can go on AWS and start up a remote machine for free (be sure to pick one in a country near you).
Then remote onto your server, install Node.js, and put the signalling server code into an index.js file in a folder somewhere. Then go to that directory in a command prompt, type npm install to install any dependencies, then node index.js to run your server. Be sure to open up the correct port on your remote machine.
See https://codelabs.developers.google.com/codelabs/webrtc-web/#6 for example code of a Node.js signalling server.
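For a rough idea of what that index.js might contain, here is a minimal socket.io relay sketch; the room and event names are illustrative assumptions, not the codelab's exact API:

// index.js — minimal WebRTC signaling relay over socket.io (sketch, not hardened)
const http = require('http');
const socketIO = require('socket.io');

const server = http.createServer();
const io = socketIO(server);

io.on('connection', (socket) => {
  // Forward SDP offers/answers and ICE candidates to peers in the same room.
  socket.on('join', (room) => socket.join(room));
  socket.on('message', (room, msg) => socket.to(room).emit('message', msg));
});

server.listen(process.env.PORT || 8080);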
I'm currently trying to install RavenDB 4.1.5-patch-41012 for the Raspberry Pi on my Raspberry Pi 3 Model B running Raspbian Stretch Lite. When I run the run.sh script it gives an error about not being able to open a browser, even if I set Setup.Mode in the settings to none. After that I'm able to run server commands, but I'm not able to access RavenDB Studio or the RavenDB server, either locally or over my local network. Are there extra steps I have to take, or things I have to keep in mind, when installing RavenDB on the Raspberry Pi?
Raspbian Stretch Lite isn't equipped with a local web browser, therefore you may need to allow outside access before using the web setup. In the following link you can find a description of the Server's configuration options: https://ravendb.net/docs/article-page/4.1/csharp/server/configuration/configuration-options
Modify Server/settings.json in a way that fits your security needs, as follows (replace 10.0.0.90 with your Pi's IP).
Totally unsecured access from anywhere (ATTENTION: this will give access to the database to anyone with access to this Docker instance):
{
    "ServerUrl": "http://0.0.0.0:8080",
    "PublicServerUrl": "http://10.0.0.90:8080",
    "Setup.Mode": "None",
    "Security.UnsecuredAccessAllowed": "PublicNetwork"
}
Access from Docker's host machine or other machines on your local LAN:
{
    "ServerUrl": "http://10.0.0.90:8080",
    "Setup.Mode": "None",
    "PublicServerUrl": "http://10.0.0.90:8080",
    "Security.UnsecuredAccessAllowed": "PrivateNetwork",
    "License.Eula.Accepted": true
}
Browsing to http://10.0.0.90:8080 should work at this point.
You can use the CLI; read: https://ravendb.net/docs/article-page/4.1/Csharp/server/configuration/command-line-arguments
Example:
cd ~/RavenDB/Server
./Raven.Server --Security.UnsecuredAccessAllowed=PublicNetwork --ServerUrl=http://0.0.0.0:8080 --PublicServerUrl=http://10.0.0.90:8080 --Setup.Mode="None" --DataDir=/mnt/ExternalDisk/RavenDB
As a side note: I do recommend setting "DataDir" to an externally mounted USB disk rather than using the default SD card data path, if that applies to your case.
And later on you may want to use scripts for adding RavenDB as a service on your Pi (see install-daemon.sh here: https://github.com/ravendb/ravendb/tree/v4.2/scripts/linux)
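If you'd rather wire the service up by hand instead of using that script, a systemd unit along these lines is a rough sketch (install-daemon.sh is the supported route; the user and paths below are assumptions about your layout):

# /etc/systemd/system/ravendb.service — hand-rolled sketch, paths assumed
[Unit]
Description=RavenDB Server
After=network.target

[Service]
User=pi
WorkingDirectory=/home/pi/RavenDB/Server
ExecStart=/home/pi/RavenDB/Server/Raven.Server
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then systemctl enable --now ravendb should start it on boot (Raspbian Stretch is systemd-based).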
The run.sh is trying to start a browser the first time you start RavenDB to give you access to it. Given that you are running the Lite version, there is no such browser, obviously.
See Adi's comment on how to access RavenDB from outside the Pi machine.
You can just call Server/Raven.Server instead of run.sh to start RavenDB.
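For example, with the same layout as the CLI example above:

cd ~/RavenDB/Server
./Raven.Server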
I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM are set up on the same machine. The management server is running without problems. I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I try to add an instance, it does not show any templates in the second stage, as you can see in the screenshot.
However, I am able to see two templates under the Templates link in the web UI.
But when I select the template and navigate to the Zone tab, I see 'Timeout waiting for response from storage host' and the Ready field shows 'no'.
When I check the management server logs, it seems there is an error when CloudStack tries to mount the secondary storage for use. The segment below from the cloudstack-management.log file shows this error.
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId: 279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer": {"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: GetRootDir for nfs://172.16.10.2/export/secondary failed due to com.cloud.utils.exception.CloudRuntimeException: Unable to mount 172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-817a77b6cfc6 due to mount.nfs: Connection timed out
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(NfsSecondaryStorageResource.java:2080)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSecondaryStorageResource.java:1829)
    at org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeRequest(NfsSecondaryStorageResource.java:265)
    at com.cloud.agent.Agent.processRequest(Agent.java:525)
    at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)
    at com.cloud.utils.nio.Task.call(Task.java:83)
    at com.cloud.utils.nio.Task.call(Task.java:29)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
","wait":0}}] }
Can anyone please guide me on how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed further.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP with the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would stop my internet. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static with the same configuration settings as above and ran the CloudStack management server. Everything worked fine until I bumped into this template thing. Please help me figure out what might have gone wrong.
I know I'm late, but for people trying this out in the future, here goes:
I hope you successfully added a host as mentioned in the Quick Install Guide before you changed your IP to static, as that step autoconfigures VLANs for the different traffic types and creates two bridges, generally named 'cloud' or 'cloudbr'. CloudStack uses the Secondary Storage System VM (SSVM) to do all the storage-related operations in each Zone and Cluster. What seems to be the problem is that the SSVM is not able to communicate with the management server on port 8250. Also try manually mounting the NFS server's mount points from the SSVM shell. You can SSH into the SSVM using the command below:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<Private or Link local IP address of SSVM>
I suggest you run /usr/local/cloud/systemvm/ssvm-check.sh after SSHing into the secondary storage system VM (assuming it is running and has its private, public, and link-local IP addresses). If that doesn't help you much, take a look at the secondary storage troubleshooting docs at CloudStack.
I would further recommend, if anyone in the future runs into similar issues: check that the SSVM is running and in the "Up" state in the System VMs section of the Infrastructure tab, and that you are able to open a console session to it from the browser. If that works, go on to run the ssvm-check.sh script mentioned above, which systematically checks every point of operation that the SSVM executes. Even if a console session cannot be opened, you can still SSH in using the link-local IP address of the SSVM (shown in the SSVM's details) and then execute the script.

If it says it cannot communicate with the management server on port 8250, check the iptables rules of the management server and make sure all traffic is allowed on port 8250. A quick command to check this is nc -v <mngmnt-server-ip> 8250. You can do a simple search to learn how to add port 8250 to your iptables rules if it is not open.

Next, you mentioned you used CentOS 6.8, which probably ships older versions of NFS, so execute exportfs -a on your NFS server to make sure all the NFS shares are properly exported without errors. I would also recommend waiting for the download of the 'CentOS 5.5 no GUI kvm' template to complete and its Ready status to show 'Yes' before you start importing your own templates and ISOs to run on VMs.

Finally, if the ssvm-check.sh script shows everything is good and the download still does not start, run service cloud restart and then check that the service actually got a PID with service cloud status; older system VM templates sometimes need a manual service cloud start even after the restart command. Restarting the cloud service in the SSVM triggers a restart of the download of all remaining templates and ISOs.

Side note: the system VMs use a Debian kernel, if you want to do some more troubleshooting. Hope this helps.
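Collected as a quick checklist (the commands referenced above; replace the angle-bracket placeholders with your own addresses):

# On the management server: confirm port 8250 is reachable/open.
nc -v <mngmnt-server-ip> 8250

# SSH into the SSVM (use its link-local or private IP, port 3922).
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root@<ssvm-ip>

# Inside the SSVM: run the health-check script.
/usr/local/cloud/systemvm/ssvm-check.sh

# On the NFS server: re-export all shares.
exportfs -a

# Inside the SSVM, if downloads still don't start:
service cloud restart
service cloud status   # verify a PID; if none, run: service cloud start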
I use the gsutil tool to download archives from Google Storage.
I use the following CMD command:
python c:\gsutil\gsutil cp gs://pubsite_prod_rev_XXXXXXXXXXXXX/YYYYY/*.zip C:\Tmp\gs
Everything works fine, but if I run that command from behind the corporate proxy, I receive an error:
Caught socket error, retrying: [Errno 10051] A socket operation was attempted to an unreachable network
I tried several times to set the proxy settings in the .boto file, but to no avail.
Has anyone faced such a problem?
Thanks!
Please see the section "I'm connecting through a proxy server, what do I need to do?" at https://developers.google.com/storage/docs/faq#troubleshooting
Basically, you need to configure the proxy settings in your .boto file, and you need to ensure that your proxy allows traffic to accounts.google.com as well as to *.storage.googleapis.com.
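For reference, the relevant section of the .boto file looks roughly like this (host, port, and credentials here are placeholders for your corporate proxy):

# .boto — proxy settings sketch
[Boto]
proxy = proxy.example.com
proxy_port = 8080
# Only if your proxy requires authentication:
proxy_user = myuser
proxy_pass = mypassword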
A change was just merged on GitHub yesterday that fixes some of the proxy support. Please try it out, or specifically, overwrite your current copy with this file:
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/util.py
I believe I am having the same problem, with the proxy settings being ignored under Linux (Ubuntu 12.04.4 LTS) and gsutil 4.2 (downloaded today).
I've been watching tcpdump on the host to confirm that gsutil is attempting to route directly to Google IPs instead of to my proxy server.
It seems that on the first execution of a simple command like "gsutil -d ls", it will use the proxy settings specified in .boto for the first POST, and then switch back to attempting to route directly to Google instead of through my proxy server.
Then, if I Ctrl-C and re-run the exact same command, the proxy setting is no longer used at all. This difference in behaviour baffles me. If I wait long enough, it will work for the initial request again, so this suggests some form of caching taking place. I'm not 100% sure of this behaviour yet because I haven't been able to predict when it occurs.
I also noticed that it always first tries to connect to 169.254.169.254 on port 80 regardless of proxy settings. A grep shows that this is hardcoded into oauth2_client.py, test_utils.py, layer1.py, and utils.py (under different subdirectories of the gsutil root).
I've tried setting the http_proxy environment variable but it appears that there is code that unsets this.
CentOS 5.5
I have a web application running on a server, and it needs access to another CentOS server's file system running in the same network (via private IP). After a bunch of googling, it looks like mounting the drive via NFS is a good way to go, but I'm not finding any good step-by-step instructions on how to go about it. I've read the man docs on the mount command and read some docs on the CentOS wiki as well, but I feel like I'm missing something. Here is what I'm trying:
mount -t nfs my.ip.address:/somePath /somePath/mount
I keep getting a 'no route to host' error, but I can ping the server just fine. I'm guessing that I'm possibly missing a port I need to open or something, but again, I can't find information that makes sense to a non-sysadmin like myself.
Thanks for any help.
I ran across this, followed it step by step, and now I'm up and running!
http://www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/
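For anyone landing here later, the broad strokes of a guide like that look roughly as follows; the export path, client subnet, and port are placeholders/assumptions, and a 'no route to host' despite a working ping usually means a firewall is blocking the NFS ports:

# On the server (the machine exporting the file system):
echo '/somePath 192.168.0.0/24(rw,sync)' >> /etc/exports
service portmap start && service nfs start   # portmap on CentOS 5; rpcbind on 6
exportfs -a

# Open the NFS port in iptables if the firewall is on, e.g.:
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT

# On the client: create the mount point and mount.
mkdir -p /somePath/mount
mount -t nfs my.ip.address:/somePath /somePath/mount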