Where have you used gSOAP?

Can you give examples of how you used gSOAP and how well it integrated into your existing architecture? Have you found development bottlenecks with gSOAP?

We used gSOAP for a bunch of ARM clients to communicate with an AXIS Web Service server. Pros of gSOAP:
- very powerful, supports nearly all Web Service constructs
- easy to use: its abstraction of WS calls into plain C functions hides all Web Service complexity from the programmer (a minimal client sketch follows at the end of this answer)
- elegant interfaces in both C and C++
However, we ran into several development bottlenecks:
- when using custom datatypes like maps or sets, it takes quite some hacking to get the gSOAP compiler to handle them (marshalling/unmarshalling); this is especially bad with dynamic data structures
- debugging is hard because of its intrinsically complex network, parsing, and memory-allocation internals; do everything possible to stick with static memory allocation
- the mailing list is alive, but the developers are not very active on it; simple questions can get answered quickly, but the toughest problems often go unanswered
- forget about optimization: linking in gSOAP eats about 1 MB of memory at runtime (-Os); runtime performance is fine on our 32 MB Linux-based ARM board, but there is little room for optimization if you need it
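To give a feel for that function-call abstraction, here is roughly what a gSOAP C client looks like. This is a sketch based on gSOAP's stock calculator demo, not our project: the soapH.h header, the calc.nsmap namespace table, and the soap_call_ns__add() stub are all generated by the wsdl2h and soapcpp2 tools from the service's WSDL.

    #include <stdio.h>
    #include "soapH.h"      /* generated by soapcpp2 */
    #include "calc.nsmap"   /* generated XML namespace table */

    int main(void)
    {
        struct soap ctx;
        double sum = 0.0;
        soap_init(&ctx);                 /* set up the gSOAP runtime context */

        /* One generated stub call hides the connect/serialize/parse cycle. */
        if (soap_call_ns__add(&ctx, "http://example.com/calc", NULL,
                              1.0, 2.0, &sum) == SOAP_OK)
            printf("sum = %g\n", sum);
        else
            soap_print_fault(&ctx, stderr);  /* SOAP fault or transport error */

        soap_end(&ctx);                  /* free deserialized data */
        soap_done(&ctx);                 /* release the context */
        return 0;
    }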

We used gSOAP in a C++-based web server about 4 years back. Overall it worked fine. The only major issue was that the interface was in C and procedural (I understand it is difficult to design a good non-procedural interface). There can be a lot of repeated code when implementing the interface, which you may have to factor out with macros (we didn't explore the template option very far at the time).

We are using gSOAP to deploy a web service onto an embedded Linux device running an ARM MX processor.

We are using gSOAP to consume a WCF-based web service from an application deployed on a Linux device running on an ARM processor. The experience has been good to a large extent.

We used gSOAP in a web server on a 400 MHz ARM9 device.
The gSOAP daemon connects through the ZeroMQ library to a database daemon running on the same device.
It supports more than 1000 basic requests which do not require a database connection.
Disabling support for multi-referenced SOAP objects via the WITH_NOIDREF define cut serialization time roughly fourfold on big requests with a large number of serialization nodes.
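For reference, WITH_NOIDREF is a compile-time macro, so the gSOAP engine and the generated serializers have to be rebuilt with it. Alternatively, gSOAP exposes a per-context runtime flag with a similar tree-serialization effect; the file names below are just the usual generated ones:

    /* Rebuild the engine with multi-ref (id/href) support compiled out:
     *   gcc -Os -DWITH_NOIDREF -c stdsoap2.c soapC.c soapServer.c ...
     */

    /* Or disable multi-ref handling per context at runtime: */
    struct soap *ctx = soap_new1(SOAP_XML_TREE);  /* serialize plain XML trees */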

Related

Advantages of using an Erlang web server for a web application

Note: This question is heavily affected by the main requirement of the web application that I am building: high availability and fault tolerance. All the other requirements (like scalability and number of users) are not in question here.
I got advice from one of the members of this community to use an Erlang web server as the back end for my web application.
The suggestion was that I could use something like Mochiweb as a backend and Django/Ruby on Rails as a front end using JSON and the Service Oriented Model.
The only obvious advantage of this approach that I can understand is that the development of the front-end part is 'as usual' - regular MVC stuff, Ruby on Rails or any other common framework of someone's choice.
But what about other advantages? Do they actually exist?
Sure, Erlang/OTP adds fault tolerance to the system in question, but doesn't adding a web front-end layer reduce that fault tolerance to a much lower level?
Don't we introduce a 'single point of failure' by coupling Ruby on Rails with Mochiweb? Of course, Mochiweb can cope with faults, but what if something wrong happens on the front-end side?
Technically, the Erlang/OTP platform does not do anything about fault tolerance and high availability on its own. It just makes it easy to implement concurrent and distributed software. An Erlang web server running on a single machine can fail like all others do - simply because the hardware fails.
So for HA sites it's more important to have proper redundancy and fallback scenarios for cases of hardware and software failures, rather than using any specific software stack or platform. Probably it will just be slightly easier to implement in Erlang compared to other platforms (if you are familiar with it, of course), but absolutely the same results can be achieved using pure Ruby/Python/Java/C or almost any other.
The web industry has tons of experience setting up fault-tolerant front ends. It's just a matter of setting up multiple web machines (often light reverse proxies) and some sort of HA manager (built into many load-balancing solutions). The back end is usually the harder part.
I wouldn't use Erlang as a front-end web server if the back end is some other technology.
Many of the benefits of Erlang as a web server come about when the back end is also using Erlang. The biggest of these is lower I/O costs. When your front end and back end are completely separate software stacks, you lose that benefit.
If you're going to build something on Rails, you might as well use something you can get more help with on the front end, such as nginx.

Why use a backend server and RPC in web server infrastructure?

I'm interested in creating a web application, and I've just done some research on what makes a good web server. I've searched through Facebook, Twitter, and Foursquare. They share what software they used to build their infrastructure.
Some of the software used is new to me. I'd like to ask some questions here.
Why create a back-end server - isn't a web server running PHP enough? Why use Java/Scala for the back end? Do we really need an RPC framework such as Thrift or Protocol Buffers? What is such an RPC framework used for? Is it used for communication between front-end and back-end servers?
I'd really appreciate any answers, or any books you would suggest I read. Thank you.
It sounds as though you'd like to build a scalable backend infrastructure that ultimately will be used to do the following:
1. Serve content. This is the web server layer.
2. Perform some type of back-end processing for user requests coming in from the web server layer and communicate with the data store. Call this the application server layer.
3. Save session state and user data in a distributed, fault-tolerant, eventually consistent key-value store.
Also, it sounds as though you want to do this using commodity PC hardware.
This is a tall order.
Foursquare uses Scala with the Lift framework and Jetty for their web server.
Facebook uses many different technologies. I know that for their data store they use HBase (they previously used Cassandra).
Yahoo uses HBase to keep track of user statistics.
Twitter started as a Ruby-backed web site and moved to Scala. Twitter is incrementally moving from MySQL (I assume sharded) to Cassandra using their proprietary incremental database conversion tool.
As far as scaling on the application server and web server end, what really counts is having a language that can spawn new user processes in user space, with a manager process that assigns incoming requests to new worker processes. Think of it as running a very efficient company: the more work you've got coming in, the more people you hire. This is the Actor model. Some languages have actors built in (Erlang); others have actors implemented as frameworks (Akka) or libraries (Scala's native actors).

Apparently Scala's native actors are buggy, so some people got together and implemented the Akka framework for Scala and Java. There's a lot of discussion online regarding actors and which language and libraries one should use. Erlang has a lot going for it out of the box; however, Scala runs in the JVM and allows you to reuse a lot of the existing Java web libraries (which could have some issues if they happen to declare static objects). Erlang has actors and the OTP libraries, but apparently does not have the rich libraries that Java has. So, for me it really boils down to Scala (with Akka) or Erlang.
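As a rough illustration of that dispatch pattern only (a real actor system adds per-actor mailboxes, isolation, and supervision on top), here is a minimal manager/worker queue in C using pthreads; all names and sizes are made up:

    #include <pthread.h>
    #include <stdio.h>

    #define WORKERS    4
    #define QUEUE_SIZE 64

    static int queue[QUEUE_SIZE];        /* pending "requests" (just ints here) */
    static int head = 0, tail = 0, count = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

    /* Worker: block until the manager queues a request, then handle it. */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&not_empty, &lock);
            int req = queue[head];
            head = (head + 1) % QUEUE_SIZE;
            count--;
            pthread_mutex_unlock(&lock);
            printf("worker %ld handling request %d\n", id, req);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t pool[WORKERS];
        for (long i = 0; i < WORKERS; i++)
            pthread_create(&pool[i], NULL, worker, (void *)i);

        /* "Manager": accept incoming requests (here just 0..15), enqueue them. */
        for (int req = 0; req < 16; req++) {
            pthread_mutex_lock(&lock);
            if (count < QUEUE_SIZE) {
                queue[tail] = req;
                tail = (tail + 1) % QUEUE_SIZE;
                count++;
                pthread_cond_signal(&not_empty);
            }
            pthread_mutex_unlock(&lock);
        }
        pthread_join(pool[0], NULL);     /* workers loop forever in this toy */
        return 0;
    }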
For the web server, with Scala, you can use any Java app server. Foursquare uses Jetty for most things. It's not written in Scala, but since Scala compiles down to bytecode that runs on the JVM, it easily interoperates with any Java app server.
People also say that there aren't that many Erlang programmers, and that Erlang is harder to learn (functional programming vs. imperative programming). Scala is functional and imperative at the same time (meaning you can do either).
Erlang is functional. Now, functional programming has a lot going for it: one expert functional programmer can get a lot more done than an expert imperative programmer. Yahoo! Store was originally written and maintained in Lisp (a functional language) by one man. On the other hand, imperative programming is easier to learn and widely used in team settings. Imperative languages are good for some things, functional languages for others. The right tool for the right job.
Back to the web server discussion: with Erlang, you can use Yaws or you can run a framework (Chicago Boss).
On the database end, you have a lot of choices.
You can even eschew the database altogether and save your data in Mnesia (Erlang's runtime data store).
My answer is not complete, as this topic (scaling app servers, databases, and web servers) is very complicated and full of debate. Some frameworks even blur the distinction between the tiers (web server, application server, database) and integrate a lot of the functionality of these layers within the framework itself.
For example, I encountered a lot of problems developing a complex web app using PHP only. PHP has no threads, and it lacks many of the good things that Scala or other modern languages with rich syntax have. PHP is slow compared to a compiled JVM language, and less secure in my opinion. It is good for fetching a bunch of data and rendering it as an HTML page, but heavy processing under high load is not its strength. RPC, as you suggest, serves as the communication layer.

Socket vs HTTP based communication for a mobile client/server application

I've recently decided to take on a pretty big software engineering project that will involve developing a client-server based application. My plan is to develop as many clients as possible: including native iPhone, Android and Blackberry Apps as well as a web-based app.
For my server I'm planning on using a VPS (possibly from slicehost.com) running a flavor of Linux with a MySQL database. My first question is: what should my strategy be for clients to interface with the server? My ideas are:
1. HTTP POST- or GET-based communication with a PHP script.
This is something I'm very familiar with - passing information to a PHP script from a form, working with it and returning output. I'm assuming I'd want to return output to clients as some sort of XML or JSON based string. I'm also assuming I'd want to create a well defined API for clients that want to interface with my server.
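For what it's worth, the client side of this option can be as small as the libcurl sketch below (the endpoint URL and form fields are placeholders; iOS/Android clients would use their platform HTTP stacks, but the shape is the same):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl) {
            /* POST form data to the (hypothetical) PHP endpoint */
            curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api.php");
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "event=login&user=42");
            CURLcode res = curl_easy_perform(curl);  /* body goes to stdout */
            if (res != CURLE_OK)
                fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
            curl_easy_cleanup(curl);
        }
        curl_global_cleanup();
        return 0;
    }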
2. Socket-based communication with a PHP script, Java program, or C++ program.
This I'm less familiar with. I've worked with basic tutorials on creating a script or simple application that creates a socket, listens for a connection and returns data. I'm assuming there is far less communication data-overhead with this method than an HTTP based method. My dream is for there to be A LOT of concurrent clients in use, all working with the server/database. I'm not sure if a simple HTTP/PHP script based communication design can scale effectively to meet the needs of many clients. Also, I may eventually want the capability of a Server-Push to clients triggered by various server events. I'm also unsure of what programming language is best suited for this. If efficiency is a big concern I'd imagine a PHP script might not be efficient enough?
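A bare-bones sketch of this option's server side in C looks like the following (error handling omitted, one blocking client at a time, arbitrary port; a real server would fork, thread, or use epoll per connection):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);

        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 16);

        for (;;) {
            int cli = accept(srv, NULL, NULL);  /* blocks until a client connects */
            char buf[256];
            ssize_t n = read(cli, buf, sizeof(buf));
            if (n > 0)
                write(cli, buf, (size_t)n);     /* echo the payload back */
            close(cli);
        }
    }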
Is there a commonly accepted way of doing this? For me this is an attempt to bridge a gap between some of my current skills. I have a lot of experience with PHP and interfacing with a MySQL database to serve dynamic web pages. I also have a lot of experience developing native iPhone applications (though none that have had any significant server-based communication). I've also worked with Java/C++, and I've developed applications in both languages that have interfaced with MySQL.
I don't anticipate my clients sending/receiving a tremendous amount of data to/from the server. Something on par with a set of strings per given client-side event.
Another question: Using a VPS - good idea? I obviously don't want to pay for a full-dedicated server (slicehost offers a VPS starting at ~ $20/month), and I'm assuming a VPS will be capable of meeting the requirements of a few initial clients. As more and more users begin to interface with my server, I'm assuming it will be possible to migrate to larger and larger 'slices' and possibly eventually moving to a full-dedicated server if necessary.
Thanks for the advice! :)
I'd say go with the simplicity of HTTP, at least until your needs outgrow its capabilities. (The more stateful your application needs to be, the less HTTP fits).
For low cost and scalability, you probably can't go wrong with a cloud like Rackspace's or Amazon's. But I'm just getting started with those; my servers have been VPSs from tektonic until now.

Does anyone have first-hand experience with the G-WAN web server?

The only place where I found information on the G-WAN web server was the project web site, and it read very much like advertisement.
What I would really like to know is, for someone who is proficient in C, whether it is as easy to use and extend as other architectures. For now I would mostly focus on its scripting abilities.
Are C scripts in G-WAN easy to write?
Can you easily update and upload new C scripts to the server (say, as easily as PHP or Java pages on other architectures)? Do you have to restart the server when doing so?
Can you easily extend it with third-party or existing C libraries?
Any other feedback welcome.
Well, now that G-WAN is available under Linux, I have been using it for more than 6 months.
The C scripts are fully ANSI C compatible, so there is no difference for any seasoned C programmer.
To update them on the server, you can edit them directly in the /csp folder (remotely via SSH) or locally on a test machine (and copy them later): G-WAN reloads scripts on the fly when they have changed on disk (no server stop required).
G-WAN C scripts can use any existing library (starting with all those under /usr/lib) without any configuration or interface: you just have to write a '#pragma link' directive followed by the name of the library at the top of your script.
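For a flavor of what such a script looks like, here is a minimal servlet along the lines of the hello-world examples shipped in G-WAN's /csp folder (gwan.h, get_reply() and xbuf_cat() are part of the published servlet API; the #pragma link "m" line is just an example of pulling in a system library, here libm):

    #pragma link "m"     /* example: link a system library (libm) */
    #include "gwan.h"    /* G-WAN servlet API */

    int main(int argc, char *argv[])
    {
        /* Append to the response buffer; the return value is the HTTP status. */
        xbuf_cat(get_reply(argv), "Hello, world!");
        return 200;
    }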
What I found really useful is the ability to edit C scripts and refresh the view in the Internet browser to see how my code works.
If there is a compilation error, then G-WAN outputs the line in the source code (just like any C compiler).
But where it enters extraordinary territory is when you have a C script crash: there, too, it gives you THE LINE NUMBER IN THE SOURCE CODE (with the faulty call and the backtrace).
Kind of black-magic when you are used to Apache modules.
My experience with G-WAN and its C scripts is:
The G-WAN community is very small. Questions you have are mostly answered by its single developer.
I consider the API not mature: it's not as "clean" as Java APIs.
The limitation, but at the same time the power, of C: it's a systems programming language. So writing application logic in it must be done carefully.
You generally need to be a good developer to get good results: if you do something wrong, the server crashes fast and hard (Unix-style).
I've written some scripts now, to try out G-WAN. Overall, it's been very "productive": not many bugs, and it works if you follow the guidelines and don't expect too much of the funky stuff mature web servers have. However, I got the feeling I was reinventing the wheel a lot of the time.
G-WAN also supports scripts written in other programming languages (C++, Objective-C, Java, etc.), so you can benefit from whatever native libraries each language implements.
For C scripts, well, the /usr/lib directory lists more than 1,500 libraries that G-WAN can re-use with a simple #pragma link "library".
I found it neat to be able to write a Web application with a part in C, another in C++ and a third one in Java!
This benchmark shows G-WAN faring poorly at handling these tests:
http://joshitech.blogspot.sg/2012/04/performance-nginx-netty-cppcms.html
I have been using G-WAN for about two years. I consider it highly stable and production-ready for static files. I have had a number of static sites running for over a year with no issues.
I have built some small-scale dynamic sites in C with it as demo/test projects: a BitTorrent tracker and a real-time analytics platform, both using the KV store for data backing.
In my view, building large-scale dynamic sites in G-WAN is possible, but only with a significant investment in development and support. G-WAN is better suited to building robust, highly scalable, "enterprise grade" applications than to tossing something together over a weekend.
I use G-WAN for a CMS, http://solicms.com, but for now I use Ruby as the primary language.
I have used G-WAN for some preliminary testing, and it does benchmark well. However, I have found a few points of concern that make it unlikely I will use it for any of my projects. It seems to cache responses for about 0.5 seconds to boost the responses-per-second figure, and I can't arrange for only some of the responses to hit the application code. Also, the key/value store is great for caching and temporary data storage, but I'm not sure how well it would work as a real back-end storage method.

Any success using Apache Thrift on iPhone?

Has anybody done or seen a deployment of Apache Thrift in an iPhone app?
I am wondering if it is a reasonable solution for a high-volume, low(er)-latency network service for iPhones compared to HTTP.
One noteworthy thing I found is a bug report about running Thrift on the iPhone, which seems to have been fixed. But that doesn't necessarily indicate that it's a done deal.
Thrift and HTTP aren't mutually exclusive. In fact, Thrift now ships with an HTTP transport implementation. It's also a really nice way to auto-generate server/client code that avoids a lot of marshalling/unmarshalling boilerplate while still being really fast. Its internal representation is basically binary JSON, so it's very similar to a RESTful web service (except easier to code against and much, much faster).
So... anyone able to answer the original question? If not, I'll dive in myself with Thrift's included Cocoa support and see how it works on the iPhone.
Just my two cents...
The accepted answer to this question is an opinion not to use a technology, not an answer to whether it is possible.
Thrift is an interface definition language (IDL), like Protobuf and Cap'n Proto. IDLs permit the definition of a platform-agnostic client/server protocol. JSON and plist don't provide the same level of type conformance.
Having previously led an iOS team with tens of millions of MAUs using Google Protobuf v2.5 on iOS, Android, Windows, and server teams, I can attest that IDLs are great on mobile. Apple uses them for syncing iWork content.
My current team uses Thrift for iOS and Android clients, with a mostly Scala backend. I much prefer it to Protobuf.
We send Thrift payloads over HTTPS and WebSockets. Once you have defined your wire communication protocol in Thrift (i.e. the frame structure), it's very easy to evolve your APIs.
However, on iOS in particular there are some implementation issues. The current version of the library is quite poorly packaged, and if you hope to make an Objective-C framework (e.g. for iOS 8+), then you will not be able to do so out of the box with v0.9.2. This is because the library headers use local imports (#import "TProtocol.h" instead of #import <Thrift/TProtocol.h>) with no umbrella headers. Worst of all, the code generator emits very messy Objective-C classes that also use local imports from the Thrift library.
Some of these issues are pretty damning. They indicate to me that while using an IDL is very much a good engineering decision, not many iOS teams are using Thrift, unless they're huge and have the resources to write their own library.
I've always disliked frameworks that use a common interface definition to build out both server and client code. It keeps both sides too much in lockstep, when in reality server API changes must be very flexible about which client versions are communicating with them.
There are helpful libraries that make JSON or plist communication over HTTP pretty easy, and decades of debugging and understanding have gone into the HTTP protocol and how to use it well. Ignore that at your peril.
I have used Thrift's Objective-C bindings for a large iPhone app with a few million users. As one of the posters mentioned, we can use HTTP, which gets the best of both worlds. However, there is no asynchronous HTTP client for Thrift, so we had to build an event-based wrapper to allow non-blocking I/O calls. The underlying layer still issues one call at a time, which hit us in a big way: we have one server call that takes a long time but does not block UI flow, and another really fast one that does block UI flow. If the underlying layer is busy with the slow command, our fast command just has to wait. I am trying to build async HTTP in C++ that can then be used on the iPhone, but that is some way off from being ready.
Thrift as an external API doesn't make sense. Use it internally and rock and roll.