Hello, I created a WebRTC screen-sharing feature on my website, and I want to set up and run my own signaling server in case the one currently in the code ('https://socketio-over-nodejs2.herokuapp.com:443/') stops working in the future. I need a server reachable over the internet (not localhost:...).
How can I proceed? Thanks
var config = {
    openSocket: function (config) {
        var SIGNALING_SERVER = 'https://socketio-over-nodejs2.herokuapp.com:443/';
You can go on AWS and start up a remote machine for free (be sure to pick one in a country near you).
Then remote onto your server, install Node.js, and put the signalling server code into an index.js file in a folder somewhere. Go to that directory in a command prompt, type npm install to install any dependencies, then node index.js to run your server. Be sure to open up the correct port on your remote machine.
See https://codelabs.developers.google.com/codelabs/webrtc-web/#6 for example code of a Node.js signalling server.
In other words: is it some kind of containerization/VM technology, or is it just my computer doing the whole thing? And where is the downloaded data stored?
Things I tried:
Code is here; uncomment it and use node index.js to run.
1- Checking system info using systeminformation gives (obviously not my specs):
manufacturer: 'Intel',
brand: 'Core™ i9-9880H',
The other information wasn't very useful, at least for my level of experience.
2- Testing network interfaces
iface: 'en0',
ifaceName: 'en0',
default: false,
ip4: '192.168.1.104',
I checked my host's local IP using ifconfig; it's not the same.
3- Checking external IP / network speed
I wasn't able to do that. I guess only connections to the npm registry are allowed (for downloading packages); fetching other webpages or connecting to speed-test servers doesn't work.
So far it seemed like it's not my computer, BUT then I tried running npm i largest-package and immediately cut my PC's connectivity (if this were a remote server, the command should keep running and I should find the package installed when I reconnect). This, however, did not happen.
As for the data
I checked the cached data in the browser... it's very small (in my humble opinion).
Finally
Checking the documentation yields (09/03/2022): link
I'd appreciate your help wrapping my head around this.
I have a Scala application with Akka Streams. The flow of my application is like this:
1. Check if the file exists on the FTP server - I'm doing this with org.apache.commons.net.ftp.FTPClient (roughly as sketched below)
2. If it exists, stream it via the Alpakka library (and apply some stream transformations)
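Roughly, the step-1 check looks like this (a simplified sketch; host, user, pass and the path are placeholders, and error handling is omitted):
import org.apache.commons.net.ftp.FTPClient
def fileExists(host: String, user: String, pass: String, path: String): Boolean = {
  val client = new FTPClient()
  client.connect(host)
  client.login(user, pass)
  try {
    // listFiles returns an empty array when nothing matches the given path
    client.listFiles(path).nonEmpty
  } finally {
    client.logout()
    client.disconnect()
  }
}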
My application works locally and it can connect to the server.
The problem is when it is deployed to DC/OS/Mesos. I get this issue:
java.io.IOException: /path/file.txt: No such file or directory
I can say for sure that the file still exists there. Also, when I try to connect locally from a Docker container through ftp, I get something like this:
ftp> open some.ftp.address.com
Connected to some.ftp.address.com.
220 Microsoft FTP Service
Name (some.ftp.address.com:root): USER
331 Password required
Password:
230 User logged in.
Remote system type is Windows_NT.
ftp> dir
501 Server cannot accept argument.
ftp: bind: Address already in use
ftp>
Not sure if it's still helpful, but I also got my FTP client transferring data from inside a Docker container after changing the data connection to passive mode. I think active mode requires the client to have open ports to which the server connects back when returning file listing results and during data transfer. However, the client's ports are not reachable from outside of the Docker container, since the requests are not routed through (as in a NATed network).
I found this post explaining active vs. passive FTP connections:
https://labs.daemon.com.au/t/active-vs-passive-ftp/182
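If the existence check from the question (commons-net FTPClient) also runs inside the container, the same switch applies there; a minimal sketch, assuming the host, user and pass values from your own config:
import org.apache.commons.net.ftp.FTPClient
val client = new FTPClient()
client.connect(host)
client.login(user, pass)
// In passive mode the client opens the data connection towards the server,
// so no inbound connection to the containerized client is needed
client.enterLocalPassiveMode()
val listing = client.listFiles("/path")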
So my problem was really weird, but I've managed to fix it this way.
Quick answer: I was using the Alpakka FTP lib this way:
Ftp
  .fromPath(url, user, pass, Paths.get(s"/path/$fileName"))
But this way it works:
val ftpSettings = FtpSettings(
  host = InetAddress.getByName(url),
  port = 21,
  NonAnonFtpCredentials(user, pass),
  binary = true,
  passiveMode = true
)
Ftp
  .fromPath(Paths.get(s"/path/$fileName"), ftpSettings)
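For completeness, a sketch of how the resulting source can be consumed (assuming Akka 2.6+, where the implicit ActorSystem provides the materializer; older versions need an explicit ActorMaterializer, and import paths may differ between Alpakka versions; the local target path is just for illustration):
import java.nio.file.Paths
import akka.actor.ActorSystem
import akka.stream.scaladsl.FileIO
import akka.stream.alpakka.ftp.scaladsl.Ftp
implicit val system: ActorSystem = ActorSystem("ftp-example")
// Stream the remote file to a local path; materializes to a Future[IOResult]
val result = Ftp
  .fromPath(Paths.get(s"/path/$fileName"), ftpSettings)
  .runWith(FileIO.toPath(Paths.get(s"/tmp/$fileName")))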
Longer answer: I started investigating the Alpakka lib and discovered that it uses the same library that works for me when checking whether the file exists!
https://github.com/akka/alpakka/blob/master/ftp/src/main/scala/akka/stream/alpakka/ftp/impl/FtpOperations.scala
So I started digging, and it seems that setting passive mode to true was most likely the solution. But it's weird, because I've read that the Windows FTP server does not support passive mode...
I hope someone could clarify my doubts one day, but at the moment I'm happy because it works :)
I'm trying to set up a cell and a collective in a WebSphere Application Server for Bluemix service. I've found a few steps online for a generic Liberty setup, but nothing specific to a Bluemix collective or cell. Can someone point me in the right direction?
At a high level, you should be able to do the following for a Cell:
Login to the Admin Console as wsadmin
Create a server.
Open all the ports on each host for each server created by running the openFirewallPorts.sh script. Below, you will find the standard ports for a new server, given that only one server exists on each host. You may need to open more ports for additional servers on the same host, since ports can be unique per server. Try the following:
cd WAS_HOME/virtual/bin
export serverPorts=2810:TCP,2810:UDP,8880:TCP,8880:UDP,9101:TCP,9101:UDP,9061:TCP,9061:UDP,9080:TCP,9080:UDP,9354:TCP,9354:UDP,9044:TCP,9044:UDP,9443:TCP,9443:UDP,5060:TCP,5060:UDP,5061:TCP,5061:UDP,11005:TCP,11005:UDP,11007:TCP,11007:UDP,9633:TCP,9633:UDP,7276:TCP,7276:UDP,7286:TCP,7286:UDP,5558:TCP,5558:UDP,5578:TCP,5578:UDP
sudo ./openFirewallPorts.sh -ports $serverPorts -persist true
Start your server.
Deploy your application.
There are a few slight differences for a Liberty Collective, but again, at a high level, you should be able to try the following:
Switch your user to wsadmin, or SSH to your host using the wsadmin user and password.
On each host, create a server and join it to the collective. Be sure to use the full host name of the controller for the --host parameter.
cd WAS_HOME/bin
./server create server
./collective join server --host=yourhostname --port=9443 --user=wsadmin --password=xxxxxxxx --keystorePassword=yyyyyyyy
Accept the chain certificate (y/n) y
Save the output from each join so you can paste it into each host's application server.xml file before deploying your application.
Install the features required by your application on each host. The features listed below are an example.
cd /opt/IBM/WebSphere/Liberty/bin
./featureManager install --acceptLicense ejblite-3.2 websocket-1.0 jsp-2.3 jdbc-4.1 jaxrs-2.0 cdi-1.2 beanValidation-1.1
NOTE: Output from this command will contain messages similar to:
chmod: changing permissions of `/opt/IBM/WebSphere/Liberty/bin/featureManager': Operation not permitted
This is OK. You should see this message upon completion:
Product validation completed successfully.
Update your application's server.xml file with the information saved in Step 2.
Start your server.
Deploy your application.
Verify your application is reachable at <host>:9080/appname
I am not sure if I can get help for this on here, but I thought it was worth a try.
I have a 3-node cluster on AWS running MapR M3, and I installed Storm, Kafka, Divolte Collector and Cassandra. I would like to try some of the clickstream examples, and I am running into an issue with the tcp-consumer example. Being quite new to Java and distributed processing, I also have some clarification questions. I am not quite sure where to post this, because it feels Divolte Collector specific, and I also have some gaps in my understanding of the Javadoc concept and of building and running jar files; but I figured someone could point me to some resources or help with some clarifications. I can't get the JSON string to appear in the console running the netcat socket listening for clicks:
Divolte tcp-kafka-consumer example
Everything works until the netcat part (step 7), and my knowledge gap is with step 6.
Step 1: install and configure Divolte Collector
The install works, and the hello-world click collection is promising :-)
Step 2: download, unpack and run Kafka
# In one terminal session
cd kafka_2.10-0.8.1.1/bin
./zookeeper-server-start.sh ../config/zookeeper.properties
# Leave Zookeeper running and in another terminal session, do:
cd kafka_2.10-0.8.1.1/bin
./kafka-server-start.sh ../config/server.properties
No errors, plus I tested the Kafka examples, so Kafka seems to be working as well.
Step 3: start Divolte Collector
Go into the bin directory of your installation and run:
cd divolte-collector-0.2/bin
./divolte-collector
Step 3 went without a hitch; I can load the default divolte-collector test page.
Step 4: host your Javadoc files
Set up an HTTP server that serves the Javadoc files that you generated or downloaded for the examples. If you have Python installed, you can use this:
cd <your-javadoc-directory>
python -m SimpleHTTPServer
OK, so I can reach the Javadoc pages.
Step 5: listen on TCP port 1234
nc -kl 1234
Note: when using netcat (nc) as TCP server, make sure that you configure the Kafka consumer to use only 1 thread, because nc won't handle multiple incoming connections.
I tested netcat by opening the port and sending messages, so I figured I don't have any port issues on AWS.
Step 6: run the example
cd divolte-examples/tcp-kafka-consumer
mvn clean package
java -jar target/tcp-kafka-consumer-*-jar-with-dependencies.jar
Note: for this to work, you need to have the avro-schema project installed into your local Maven repository.
I installed the avro-schema project with mvn clean install in the avro project that comes with the examples, as per the instructions here.
Step 7: click around and check that you see events being flushed to the console where you run netcat
When you click around the Javadoc pages, your console should show events in JSON format similar to this:
I don't see the clicks in my netcat window :(
Investigating the issue, I viewed the console and network tabs using Chrome developer tools; it seems Divolte is running, but I am not sure how to dig further. This is the console view. Any ideas or pointers?
Thanks anyways
Initializing Divolte.
divolte.js:140 Divolte base URL detected http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8290/
divolte.js:280 Divolte party/session/pageview identifiers ["0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh"]
divolte.js:307 Module initialized. Object {partyId: "0:i6i3g0jy:nxGMDVdU9~f1wF3RGqwmCKKICn4d1Sb9", sessionId: "0:i6qx4rmi:IXc1i6Qcr17pespL5lIlQZql956XOqzk", pageViewId: "0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh", isNewPartyId: false, isFirstInSession: false…}
divolte.js:21 Signalling event: pageView 0:6ZIHf9BHzVt_vVNj76KFjKmknXJixquh0
allclasses-frame.html:9 GET http://ec2-x-x-x-x.us-west-x.compute.amazonaws.com:8000/resources/fonts/dejavu.css
overview-summary.html:200 GET http://localhost:8290/divolte.js net::ERR_CONNECTION_REFUSED
(Intro: I work on Divolte Collector)
It seems that you are running the example on an AWS instance somewhere. If you are using the pre-packaged JavaDoc files that come with the examples, they have hard-coded the divolte location as http://localhost:8290/divolte.js. So if you are running somewhere other than localhost, you should probably create your own JavaDoc for the example, using the correct hostname for the Divolte Collector server.
You can do so using this command. Be sure to run it from the directory where your source tree is rooted, and of course change localhost to the hostname where you are running the collector.
javadoc -d YOUR_OUTPUT_DIRECTORY \
-bottom '<script src="//localhost:8290/divolte.js" defer async></script>' \
-subpackages .
As an alternative, you could also just try to run the examples locally first (possibly in a virtual machine, if you are on a Windows machine).
There doesn't seem to be anything MapR-specific about the issue you are seeing so far. The Kafka-based examples and pipeline should work in any environment that has the required components installed. This doesn't touch MapR-FS or anything else MapR specific. Writing to the distributed filesystem is another story.
We don't compile Divolte Collector against MapR Hadoop currently, but incidentally I have given it a run on the MapR sandbox VM. When installing from the RPM distribution, create a /etc/divolte/divolte-env.sh with the following env var setting:
HADOOP_CONF_DIR=/usr/share/divolte/lib/guava-18.0.jar:/usr/share/divolte/lib/avro-1.7.7.jar:$(hadoop classpath)
Obviously this is a bit of a hack to get around classpath peculiarities and we hope to provide a distribution compiled against MapR that works out of the box in the future.
Also, you need Java 8 to run Divolte. If you install this from the Oracle RPM, add the proper JAVA_HOME to divolte-env.sh as well, e.g.:
JAVA_HOME=/usr/java/jdk1.8.0_31
With these settings I'm able to run the server, collect Avro files on MapR-FS, create an external Hive table on those files and run a query.
Here in our institute we have a server where students log in via PuTTY and write code -- they create a file and write code in the vi editor (generally they copy and paste code into vi), and they can also upload files via FTP transfer (using Ammyy Admin). The coding languages can be Java, Perl, ...
We need an Eclipse environment for each individual user to access their code from PuTTY, where each user must be able to run and debug their code on the server they connect to through PuTTY.
This is to reduce the time students spend working across two environments, and also to keep assignments on the server organized per user.
After you paste your code in vi, you can run the program on the server in debug mode, e.g. with
java -agentlib:jdwp=suspend=y,transport=dt_socket,address=8123,server=y com.company.Main
and the program will listen on port 8123 until a debugger attaches to it. Then you can remote-debug it with Eclipse:
Run > Debug Configurations > double-click "Remote Java Application" > set project and host:port.
And you don't need PuTTY for that, unless you are accessing the server through an SSH tunnel.
Eclipse does not support automatic code transfer to a remote server, or starting a program on a remote server; the program must be started from the shell as shown above, and then Eclipse can attach to it.