I am asking this question on behalf of my sysadmin. We have a rather large print server with over 250 copiers. Currently, to adjust configuration we have to use a web GUI and go through a bunch of clicks per IP address. With 250+ copiers, this is very time consuming. We are looking for a way to bulk-configure these copiers by running a single process. I have captured the HTTP request that adjusts the setting on a copier. I was hoping someone out there knows whether it is possible to take this HTTP request and transmit it to all copiers at the press of a button, automating the whole process. I have very light knowledge of packets and the protocols associated with them, but if someone has the knowledge, I would love to pick your brain...
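If the copiers accept the setting change over plain HTTP, you usually don't need to resend the raw packet; you can replay the same request in a loop over all the IP addresses with a short script. A minimal sketch in Python, where the path and form fields are placeholders you would copy from your own capture:

```python
# Minimal sketch: replay a captured HTTP setting change against many copiers.
# The URL path, form fields, and IP range below are placeholders -- copy them
# from the request you captured (browser dev tools or Wireshark).
import requests

COPIER_IPS = ["10.0.0.%d" % i for i in range(1, 251)]  # or load from a file

# Placeholder values -- replace with the path and form data from your capture.
SETTING_PATH = "/web/config/someSetting.cgi"
FORM_DATA = {"settingName": "value"}

for ip in COPIER_IPS:
    url = "http://%s%s" % (ip, SETTING_PATH)
    try:
        resp = requests.post(url, data=FORM_DATA, timeout=5)
        print(ip, resp.status_code)
    except requests.RequestException as exc:
        print(ip, "failed:", exc)
```

Note that many copier web UIs require a login first, so you may also have to replay the authentication request or include a session cookie before the setting change is accepted.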
Related
I'm trying, finally, to understand eBPF and maybe use it in an upcoming project.
For the sake of simplicity I started by reading the bcc documentation.
In my project I'll need to send some data over the network upon certain kernel function calls.
Can that be done without sending the data to userspace first?
I see that I can redirect skbs from one socket to another etc., and I see that I can submit custom data to user space. Is there a way to get the best of both worlds?
EDIT: I'm trying to log some file system events to another server that will collect this data from multiple machines. Those machines can be fairly busy in some situations. It should be real-time and low latency.
I'd love to avoid going through userspace, to prevent copying the data back and forth and to reduce software overhead as much as possible.
Thank you all!
It seems this question can be summarized as: is it possible to send data over the network from a BPF tracing program (kprobes, tracepoints, etc.)?
The answer to that question is no. As far as I know, there is currently no way to craft and send packets over the network from BPF programs. You can resend a received packet to the network with some helpers, but they are only available to networking BPF programs.
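So for a tracing use case like this, the data has to pass through userspace at some point. With bcc, the usual pattern is to perf_submit the event from the kernel program and have a small Python loop forward it over the network. A rough sketch, where the traced function (vfs_open) and the collector address are just example choices:

```python
# Rough sketch: trace a kernel function with bcc, push events to userspace via
# a perf buffer, and forward them to a collector over UDP.
import socket
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>

struct event_t {
    u32 pid;
    char comm[16];
};

BPF_PERF_OUTPUT(events);

int trace_open(struct pt_regs *ctx) {
    struct event_t ev = {};
    ev.pid = bpf_get_current_pid_tgid() >> 32;
    bpf_get_current_comm(&ev.comm, sizeof(ev.comm));
    events.perf_submit(ctx, &ev, sizeof(ev));
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="vfs_open", fn_name="trace_open")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
collector = ("192.0.2.10", 9999)  # example collector address

def handle_event(cpu, data, size):
    ev = b["events"].event(data)
    msg = ("%d %s" % (ev.pid, ev.comm.decode(errors="replace"))).encode()
    sock.sendto(msg, collector)

b["events"].open_perf_buffer(handle_event)
while True:
    b.perf_buffer_poll()
```

The copy to userspace is unavoidable here, but batching events before sending them to the collector keeps the per-event overhead reasonable even on busy machines.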
I'm trying to improve the reliability of a server-app-website architecture that another programmer developed.
At the moment, Android smartphones open a TCP connection to a server component to exchange data. The server takes the data and writes it into a DB, and another user can look at the data through a website. The problem is that the smartphones are very regularly in locations where connectivity is really bad. The consequence is that the smartphones lose the TCP connection and it's hard to reconnect. My question is whether there are any protocols lightweight or accommodating enough with regard to bad connectivity that the data exchange could work better or more reliably.
For example, I was thinking about replacing the raw TCP interface with a RESTful API, but I don't really know how well REST works in this scenario, as I don't have any experience in this area.
Maybe useful to know for answering this question: the server component is written in C#, and the connecting clients are Android smartphones.
Please understand that I'm not adding any code to this question because, in my opinion, it's a purely theoretical question.
Thank you in advance!
REST runs over HTTP, which runs over TCP, so it would have the same connectivity issues.
Moving up the stack to the application, you could perhaps think in terms of 'interference'. I quite often have to use technical equipment in remote areas with limited reception, and it reminds me of trying to communicate in a storm. If you think about it, when you're trying to get someone to do something in a storm where they can hardly hear you and the words get blown away (dropped signal), you don't read them the manual on how to fix something; you shout key words such as 'handle', 'pull', 'pull', 'PULL', 'ok'. So the information reaches them in small bursts you can repeat (pull, what? pull, eh? PULL! oh righto!)
Can you redesign the communications between the Android app and the server so the server can recognise key 'words' with corresponding data and build up the request over a period of time? If you consider idempotency, each burst of data would not alter the request if it has already been received (pull, PULL!), and over time the Android app could send/receive smaller chunks of the request. If the signal stays up, just keep sending. If it goes down, note which parts of the request haven't been sent and retry them when the signal comes back.
So you're sending the request jigsaw-style but the server knows how to reassemble the pieces in the right order. A STOP word at the end tells the server ok this request is complete, go work on it. Until that word arrives the server can store the incomplete request or discard it if no more data comes in.
If the server responds to the first request chunk with an id, the app can use that id to fetch the response and keep trying until the full response comes back, at which point the server can remove the response from its jigsaw cache. A fair amount of work, though.
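To make the shape of that idea concrete, here is a rough sketch of the client side in Python (your client is Android, so this is only illustrative; the endpoint, field names, and retry delay are made up):

```python
# Rough sketch of the "jigsaw" client: send a request in small, idempotent
# chunks, retrying each chunk until the server acknowledges it, and mark the
# last one so the server knows the request is complete.
import time
import requests

SERVER = "https://example.com/api/chunks"  # hypothetical endpoint

def send_chunk(request_id, seq, payload, last=False):
    """Retry one chunk until the server acknowledges it."""
    body = {"request_id": request_id, "seq": seq, "data": payload, "last": last}
    while True:
        try:
            resp = requests.post(SERVER, json=body, timeout=10)
            if resp.ok:
                return
        except requests.RequestException:
            pass  # signal dropped -- back off and retry the same chunk
        time.sleep(5)

def send_request(request_id, data, chunk_size=1024):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for seq, chunk in enumerate(chunks):
        send_chunk(request_id, seq, chunk, last=(seq == len(chunks) - 1))
```

Because every chunk carries the request id and a sequence number, resending a chunk the server already has is harmless, and the server can reassemble the pieces in order regardless of how many retries were needed.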
I am a newbie to SDN and have to implement a latency monitor with the Ryu controller.
I am thinking of sending a packet from switch to switch, remembering when the packet was sent, and then calculating the delay when I receive it at the destination switch.
The problem is I don't know how to tell apart the packets I send. I was thinking of putting a string into them that would tell me: "hey, I am packet number 23." But I don't know if that is possible. I have read the Ryu wiki several times and looked over the examples.
I just don't know how to move forward.
I have answered a similar question over here about how to measure latency; you can have a look. But if you want to go ahead with your current approach, you can try something like this:
Record the switch details and the current time in the packet and send the packet to the next switch (via the link whose latency you want to measure).
When that packet is received on the other switch, parse the recorded information.
Subtract the recorded send time from the current time to get the link delay (see the sketch at the end of this answer).
For example, you can have a look at the Ryu implementation here, which uses a similar mechanism to discover the topology. LLDP packets are generated by the controller and sent to one switch to be forwarded out a specific port; when another switch receives such a packet, it parses it to obtain the sender switch's ID and port and sends this information back to the controller, which in turn infers that there is a link between these switches.
But I would suggest you have a look at the papers I mentioned before implementing your approach (if you have not already done the hard work).
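A very rough Ryu sketch of the probe idea: the controller sends out a frame whose payload carries a sequence number and a send timestamp, and computes the delay when the neighbouring switch punts it back via packet-in. The EtherType, MAC addresses, and port numbers are arbitrary choices, and the flow rule that punts probes back to the controller is not shown.

```python
# Rough Ryu sketch: send probe frames carrying (sequence number, send time)
# out of a switch port, and compute latency when the neighbouring switch
# punts them back to the controller via packet-in.
import struct
import time

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet, ethernet

PROBE_ETHERTYPE = 0x88B5  # experimental EtherType, used to recognise our probes


class LatencyProbe(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(LatencyProbe, self).__init__(*args, **kwargs)
        self.seq = 0

    def send_probe(self, datapath, out_port):
        """Build an Ethernet frame whose payload is (seq, send_time)."""
        pkt = packet.Packet()
        pkt.add_protocol(ethernet.ethernet(ethertype=PROBE_ETHERTYPE,
                                           src='02:00:00:00:00:01',
                                           dst='ff:ff:ff:ff:ff:ff'))
        pkt.add_protocol(struct.pack('!Id', self.seq, time.time()))
        pkt.serialize()
        self.seq += 1

        parser = datapath.ofproto_parser
        ofproto = datapath.ofproto
        out = parser.OFPPacketOut(datapath=datapath,
                                  buffer_id=ofproto.OFP_NO_BUFFER,
                                  in_port=ofproto.OFPP_CONTROLLER,
                                  actions=[parser.OFPActionOutput(out_port)],
                                  data=pkt.data)
        datapath.send_msg(out)

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        data = ev.msg.data
        eth = packet.Packet(data).get_protocol(ethernet.ethernet)
        if eth.ethertype != PROBE_ETHERTYPE or len(data) < 26:
            return
        seq, sent = struct.unpack('!Id', data[14:26])  # 4-byte seq + 8-byte time
        self.logger.info("probe %d latency: %.3f ms", seq,
                         (time.time() - sent) * 1000)
```

Note that this measures the controller-switch-link-switch-controller round trip, so, as in the papers mentioned above, you would typically subtract the controller-to-switch delays (measured separately, e.g. with echo requests) to isolate the link latency.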
I have a setup that involves exactly one client and one server. The client can generate 32K chunks of data really fast. As I generate that data I'd like to send it in parallel over TCP to the server and have it reassembled in the same order that I sent it out in.
So my thought is that I add each 32K chunk to a queue on the client, and then something sends those chunks out in parallel. On the server, the chunks are received in some random order, but then put back in order into a queue, from which I can simply dequeue them. I have a picture of this setup:
Does this setup have a name? Fan-out/fan-in? What should I be searching for? It seems middleware such as ZeroMQ might help, but I haven't been able to find any specific examples that show this type of architecture.
I assume this is a solved problem with some nice open source libraries out there, but my assumptions have been wrong in the past.
Thanks for any help.
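One way to read the described setup: the client fans sequence-numbered chunks out over several connections, and the server fans them back in, reordering by sequence number before handing them to the consumer. A minimal sketch of that server-side reordering in Python (names and framing are purely illustrative, not from any particular library):

```python
# Minimal sketch of the fan-in/reorder side: chunks arrive tagged with a
# sequence number in arbitrary order (e.g. from several parallel TCP
# connections) and are released strictly in order.
import heapq
import queue
import threading

class Reassembler:
    def __init__(self):
        self.heap = []            # (seq, payload) pairs waiting for their turn
        self.next_seq = 0         # next sequence number we may emit
        self.lock = threading.Lock()
        self.out = queue.Queue()  # in-order output the consumer dequeues from

    def push(self, seq, payload):
        """Called by whichever thread receives a chunk from one connection."""
        with self.lock:
            heapq.heappush(self.heap, (seq, payload))
            # Release everything that is now contiguous.
            while self.heap and self.heap[0][0] == self.next_seq:
                _, data = heapq.heappop(self.heap)
                self.out.put(data)
                self.next_seq += 1

    def pop(self):
        """Blocks until the next in-order chunk is available."""
        return self.out.get()
```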
I was recently approached by my management with an interesting problem. I'm pretty sure I'm giving my bosses the correct information, but I really want to make sure.
I am being asked to develop some software that has this function:
An application at one location is constantly processing real-time data every second and only generates data if the underlying data has changed in any way.
In the event that the data has changed, it sends the results to another box over the network.
It maintains a persistent connection between the two machines, alerting the remote box if for some reason the network connection goes down.
From what I understand, I imagine I need to do some reading on TCP/IP socket-level programming. That way, if the connection is dropped, the remote location will be aware that the data it has received may be stale.
However, management seems very convinced that this can be accomplished using SOAP. I was under the impression that SOAP is more or less a way for a client to invoke a procedure on a server and get some results via HTTP. Am I wrong in assuming this? I haven't been able to find much information on how SOAP might solve a problem like this.
I feel like a lot of people around my office are using SOAP as a buzzword and that has generated a bit of confusion over what SOAP actually is - and is capable of.
Any thoughts on how to accomplish this task would be appreciated!
I think SOAP is the wrong tool. SOAP is a spec for exchanging structured data. For your problem, the simplest thing would be to write a program to just transfer data and figure out if the other end is alive. Sockets are a good way to go. There are lots of socket programming tutorials on the net. Pick your language, and ask Mr. Google. Write a couple of demo programs to teach yourself how it works. Ask if you have more specific questions.
For the problem, you'll need a sender and a receiver. The sender sends data when it gets it; the receiver waits for data and hands it off when it arrives. Get that working first. Next, add heartbeats: a message that says "I'm alive", sent periodically. Get that working next. You'll need to determine the exact behavior you want: whether both sides send heartbeats to the other end, the maximum time you are willing to wait for a heartbeat, and what action you take should heartbeats stop arriving. The network connection can drop, the other end can crash, the other end can hang, and perhaps there are other conditions you should think about (e.g., what if the real-time data is nonsense?). Figure out how to handle each condition, and code up the error handling. Test it out, and serve with a side of documentation.
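To make the heartbeat part concrete, here is a minimal sketch of a receiver (shown in Python for brevity; your server is C#, but the idea carries over directly). The newline framing, port, and timeout values are just example choices:

```python
# Minimal sketch of the receiver: newline-delimited messages over TCP, with a
# heartbeat timeout so stale data can be detected.
import socket

HEARTBEAT_TIMEOUT = 15  # seconds without any traffic => assume data is stale

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen(1)

conn, addr = srv.accept()
conn.settimeout(HEARTBEAT_TIMEOUT)
buf = b""
try:
    while True:
        try:
            chunk = conn.recv(4096)
        except socket.timeout:
            print("no heartbeat for %ds; data may be stale" % HEARTBEAT_TIMEOUT)
            continue
        if not chunk:
            print("connection closed by sender")
            break
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line == b"HEARTBEAT":
                continue  # liveness only, nothing to hand off
            print("got data:", line)
finally:
    conn.close()
```

The sender side is symmetric: write a data message whenever the underlying data changes, and write a `HEARTBEAT` line every few seconds so the receiver can distinguish "nothing changed" from "connection is dead".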
SOAP certainly won't tell you when the data source goes down, though you could use "heartbeats" to add that.
Probably you are right, and they are just repeating a buzzword without actually knowing much about what SOAP is or does, or having any real argument for why it ought to be used here.