I have used WebClient and WebDrive to connect to a WebDAV server. WebDrive is faster than WebClient, but it is not open source, and I need an open-source module to act as the WebDAV client. Can anyone suggest a solution?
Alternatively, does anyone know how to speed WebClient up so that it is as fast as WebDrive?
Is there a specific reason for the open-source requirement?
One of the benefits of WebDrive is that the same engineering team has been working on the product regularly for nearly 15 years. For you, the end user, this means that any issues that arise with the product can be addressed quickly and efficiently by a team of engineers who are well versed in the source code.
I am a new user of github.com and I am trying to create a mobile app that has two parts: client code and server code. The client code will be written in Java for Android, Swift for iOS, and C# for Windows Phone. The server side will be written in PHP, HTML, JavaScript, and CSS.
My question is this: how should I structure the code? Should I put the server code and the client code in different repositories or should I just put them in different folders in the same repository? Also, since there are multiple clients (Android, iOS and Windows) should I put them in different repositories or different folders?
I already know that either one can be done, but I want to know what the standard procedure is and what the advantages/disadvantages are.
There are many possible solutions to this issue. The answer provided by saljuama is the most complete.
After looking at a few projects on GitHub, I found that one way to do this is to create separate repositories for the client and the server. If you have multiple clients (as in my case), you can use the same repository for them but with different makefiles.
I found this way of doing things at https://github.com/CasparCG. The server is in a different repository from the client. Additionally, the clients share the same src, lib and other folders, but each has its own build scripts; an illustrative layout is sketched below. As saljuama noted, this approach makes sense when your clients share the same codebase. If your clients do not share the same codebase, see saljuama's answer.
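To make that concrete, here is a hypothetical layout along those lines (folder names are purely illustrative, not taken from CasparCG):

    client-repo/
        src/              (shared client code)
        lib/              (shared libraries)
        build-android/    (per-platform build scripts)
        build-ios/
        build-windows/

    server-repo/          (the server lives in its own repository)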
This is not the only possible way. You can do it in your own way.
The topic is quite broad and it might be opinion based, but I'll try to stay as neutral as possible.
According to the first of the 12-factor principles, Codebase, each application should have its own codebase (or repository). Since the client and the server are different applications, they shouldn't share the same repo.
One reason why having both in the same repository is a bad idea shows up when using Continuous Integration and Continuous Delivery systems. Since a repository push is usually what triggers these processes, a change on one side would make the CI/CD server process both sides of the application when there is no need for it.
The rest of the reasons I can think of would likely be categorized as opinion based, so I won't state them.
Update:
To answer the update to your question: if the different clients have the exact same codebase then, if you keep reading the 12-factor app principles on configuration and builds, one repo is what you should use.
At our company, we are looking at replacing a number of legacy systems that bring information from our customers into our company. A typical system lets the user drop a file via FTP somewhere. This file is then transformed by a number of programs and eventually ends up in some kind of database. In total we have 30+ different "systems" or applications that do this, and it is more or less a mess.
We believe we lack a common system to manage these flows: triggered by an upload or possibly another event, register the data, create some sort of "job" (or process) from it, pass it through the various services/transformation programs it needs to go through, provide feedback to the customer, provide information about progress etc. to us, handle failures, and so on. Sort of like Jenkins (/Hudson/CruiseControl/similar), but for information-transformation jobs rather than build jobs, and with a job being more of a "process instance" of a job than the job definition itself (e.g. different data should trigger the job several times, running concurrently).
We are capable of writing such software ourselves, but surely software like this already exists(?) I have been googling around and found that what we need may possibly be "job scheduling" software or "business process management" software. However, these are all new domains for us, and I am quite uncertain as to what kind of software would fit our needs. It appears one could invest quite a deal of resources into this type of software before...
So, what I am looking for is pointers to what kind of software or systems that could solve the kind of needs we have. Preferably Open Source, Java based, running in a Java EE container or similar, but really, at this point, almost any pointer/hint will be welcomed :-)
Thanks in advance
P.S. I realise I may be out of scope for Stackexchange, but I have been unable to locate another forum where this kind of question might be answered, so I hope it is OK.
I know of the following products:
Redwood Cronacle (I worked with it from 1994 to 1997 and it still runs). Commercial product. Oracle and C based. Strong in multiple server platforms. Embeddable.
Oracle E-Business Suite core. Commercial product. Oracle based. Strong for integration with the same ERP system. Weak for multiple server platforms.
Invantive Vision (I developed it :-). Commercial product. Oracle and Java based. Strong in integration with ETL (Pentaho open source). Weak for multiple server platforms. Embeddable.
Quartz Scheduler. Apache license. Java based. I worked with it around 2004. Strong focus on embedding (a minimal sketch of that follows below).
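For a feel of what embedding Quartz looks like, here is a minimal sketch, assuming Quartz 2.x is on the classpath; the job class, identities and file path are purely illustrative and not tied to any of the products above:

    import org.quartz.Job;
    import org.quartz.JobBuilder;
    import org.quartz.JobDetail;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import org.quartz.Scheduler;
    import org.quartz.Trigger;
    import org.quartz.TriggerBuilder;
    import org.quartz.impl.StdSchedulerFactory;

    public class FeedImportExample {

        // One "process instance" of a transformation job: pick up an uploaded file,
        // run it through the transformations, report progress/failures.
        public static class FeedImportJob implements Job {
            @Override
            public void execute(JobExecutionContext context) throws JobExecutionException {
                String file = context.getMergedJobDataMap().getString("file");
                System.out.println("Transforming " + file);
                // ...call the various transformation services, update status, notify the customer...
            }
        }

        public static void main(String[] args) throws Exception {
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
            scheduler.start();

            // In practice each uploaded file would get its own JobDetail with a unique
            // identity, so several imports can run concurrently.
            JobDetail job = JobBuilder.newJob(FeedImportJob.class)
                    .withIdentity("import-customer-feed")           // illustrative name
                    .usingJobData("file", "/uploads/feed-001.csv")  // illustrative path
                    .build();

            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("import-now")
                    .startNow()
                    .build();

            scheduler.scheduleJob(job, trigger);
        }
    }

Note that Quartz only covers the scheduling/triggering part; the progress reporting, customer feedback and failure handling you describe would still have to be built around it, which is where the BPM-style products above go further.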
Hi, I don't know whether you will find that solution in open source or Java. It sounds like bespoke or custom software to me. I would advise you to look for a software developer with a strong background in IT and data warehousing, and ask for a bespoke, customized installation with a real-time database. I think that will solve your problem.
Related question: What is the most efficient way to break up a centralised database?
I'm going to try and make this question fairly general so it will benefit others.
About 3 years ago, I implemented an integrated CRM and website. Because I wanted to impress the customer, I implemented the cheapest architecture I could think of, which was to host the central database and website on the web server. I created a desktop application which communicates with the web server via a web service (this application runs from their main office).
In hindsight this was rather foolish, as now that the company has grown, their internet connection becomes slower and slower each month. Because of the speed issues, the desktop software now times out on a regular basis, and the customer is left with three options:
Purchase a faster internet connection.
Move the database (and website) to an in-house server.
Re-design the architecture so that the CRM and web databases are separate.
The first option is the "easiest", but certainly not the cheapest long term. Second option: if we move the website to in-house hosting, the client has to contend with issues like an overloaded/poor/offline internet connection, loss of power, etc. And the final option: the client is loath to pay a whole whack of cash for me to re-design and re-code the architecture, and I can't afford to do this for free (I need to eat).
Is there any way to recover when you've screwed up the design of a distributed system so badly that none of the options work? Or is it a case of cutting your losses and just learning from the mistake? I feel terrible that there's no quick fix for this problem.
You didn't screw up. The customer wanted the cheapest option, you gave it to them, this is the cost that they put off. I hope you haven't assumed blame with your customer. If they're blaming you, it's a classic case of them paying for a Chevy while wanting a Mercedes.
Pursuant to that:
Your customer needs to make a business decision about what to do. Your job is to explain to them the consequences of each of the choices in as honest and professional a way as possible and leave the choice up to them.
Just remember, you didn't screw up! You provided for them a solution that served their needs for years, and they were happy with it until they exceeded the system's design basis. If they don't want to have to maintain the system's scalability again three years from now, they're going to have to be willing to pay for it now. Software isn't magic.
I wouldn't call it a screw up unless:
It was known how much traffic or performance requirements would grow. And
You deliberately designed the system to under-perform. And
You deliberately designed the system to be rigid and non adaptable to change.
A screw up would have been to over-engineer a highly complex system costing more than what the scale at the time demanded.
In fact it is good practice to only invest as much as can currently be leveraged by the business, using growth to fund further investment in scalability, should it be required. It is simple risk management.
Surely as the business has grown over time, presumably with the help of your software, they have also set aside something for the next level up. They should be thanking you for helping grow their business beyond expectations, and throwing money at you so you can help them carry through to the next level of growth.
All of those three options could be good. Which one is the best depends on cost benefits analysis, ROI etc. It is partially a technical decision but mostly a business one.
Congratulations on helping build a growing business up til now, and on to the future.
Are you sure that the cause of the timeouts is the internet connection, and not some performance issue in the web service / CRM system? By "timeout" I'm going to assume you mean something like ~30 seconds, in which case:
Either the internet connection is to blame, in which case you would see these sorts of timeouts to other websites (e.g. Google) as well, which is clearly unacceptable, and so sorting out the connection is your only real option.
Or the timeout is caused by the desktop application, the web service, or excessively large amounts of information being passed backwards and forwards, in which case you should either address the performance issue as you would any other bug, or look into ways of optimising the desktop application so that less information is passed back and forth (a rough latency probe is sketched after this answer).
In short: the architecture that you currently have seems (fundamentally) fine to me, on the basis that (performance problems aside) access for the company to the CRM system should be comparable to access for the public to the system. As long as your customers have reasonable response times, so should the company.
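As a starting point for that diagnosis, a rough probe like the following can help separate "slow connection" from "slow service". This is a sketch in Java purely for illustration; the CRM URL is a placeholder, and the real desktop app may well be written in something else:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class LatencyProbe {

        // Time a single GET request, in milliseconds.
        static long timeGet(String url) throws Exception {
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(30_000);
            conn.setReadTimeout(30_000);
            conn.getResponseCode();   // forces the full request/response cycle
            conn.disconnect();
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws Exception {
            // If both are slow, suspect the connection; if only the CRM call is slow,
            // suspect the service or the payload size.
            System.out.println("public site:     " + timeGet("https://www.google.com") + " ms");
            System.out.println("CRM web service: " + timeGet("https://crm.example.com/service") + " ms");
        }
    }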
Install a copy of the database on the local network. Then let the client software communicate with the local copy and let the database software do the synchronization between the local database server and the database on the webserver. It depends on which database you use, but some of them have tools to make that work. In MSSQL it is called replication.
First things first: how much of the code do you really have to throw away? What language did you use for the desktop client? If it is something .NET, you may be able to salvage a good chunk of the logic of the system and only need to redo the UI and some of the connections.
My thoughts are that options 1 and 2 are out of the question. While 1 might be a good idea, it doesn't solve the real problem, and we as engineers should try to build solutions that don't depend on the client whenever possible. And 2 drags them into something they aren't experts at; it is better to keep the hosting elsewhere.
Also, since you mention a web service, is the UI all you are really losing? You can always reuse the web services for the web server interface.
Lastly, you could look at using a framework to help provide a simple web-based CRUD interface to start with, and then expand from there.
Are you sure the connection is saturated? You could be hitting all sorts of network, I/O and database problems... Unless you've already done so, use Wireshark to analyze the traffic; measure the throughput and share the results with us.
We are considering using ClearCase Multisite to enable the offshore development team. The other option is the ClearCase Remote Client using the local (onshore) ClearCase installation. Has anyone had experiences using Multisite? Is the synchronization and management hassle worth offshore being able to use the fat client?
That is a good question. I believe it is worth using MultiSite as long as you can figure out the mastership of elements. If an element is mastered at site A, you can't edit it at site B until you have transferred the mastership. So if each site is working on the same pieces of code, then MultiSite is going to be more trouble than it is worth; if the coverage is disjoint, then MultiSite is a good call. ClearCase is very chatty on the network, so keeping as much as possible local is a good idea.
I agree with stimms. Actually, unless you have massive concurrent development on the same set of files, MultiSite is quite heavy to set up and maintain...
And if the coverage is disjoint... actually, we have switched to CCRC (but we are on CC 7.0.1 here; CCRC in 6.0 was not advanced enough). That requires a good connection so your users can reach the web server that hosts CCRC and accesses your VOBs for them.
Your remote clients will either use a "semi-fat" client (the Eclipse RCP ClearCase client talking to your CCRC server) or a web interface for setting up their snapshot views.
The other point that drove us away from MultiSite is the licensing system: you cannot convert a VOB into a 'MultiSite-compliant' one without using (more expensive) MultiSite licenses, even for your local users...
So if you want to use MultiSite licenses only for your remote users, you have to isolate your data into a MultiSite VOB, and then replicate that data into a normal VOB!
All in all, I believe MultiSite is not the only answer for an offshore development team.
BUT, that being said, one strong point of the MultiSite mechanism is its ability to synchronize itself from deltas coming from various sources:
regular reception of packages
files
even a CD burned with the latest delta can do it!
That means that if your connection to the remote site is not always up, MultiSite can be a valid option.
One of the really big differences between MultiSite and CCRC is the fact that you can only use snapshot views (actually called "web views") with CCRC, whereas MultiSite can do both snapshot and dynamic views.
As the previous poster stated, there are also monetary and administration costs to consider.
Without more information about the size of the offshore team, what they are likely to develop, how long you are going to be using the solution for, the size of the business, the administration experience and time of your ClearCase staff...well, it'd be tricky to answer this accurately.
MultiSite is a great product, and truly enables remote sites in a way CCRC does not. It also serves as a backup replica for your VOBs. There are many things to consider, but don't let the complexity of MultiSite turn you away... I suggest you look into CM/InSync to automate MultiSite into a hands-free setup.
The CCRC client is OK, but still lackluster in comparison to native dynamic views. It very much depends on your requirements and needs.
d.
Our project is held in a SourceSafe database. We have an automated build, which runs every evening on a dedicated build machine. As part of our build process, we get the source and associated data for the installation from SourceSafe. This can take quite some time and makes up the bulk of the build process (which is otherwise dominated by the creation of installation files).
Currently, we use the command line tool, ss.exe, to interact with SourceSafe. The commands we use are for a recursive get of the project source and data, checkout of version files, check-in of updated version files, and labeling. However, I know that SourceSafe also supports an object model.
Does anyone have any experience with this object model?
Does it provide any advantages over using the command line tool that might be useful in our process?
Are there any disadvantages?
Would we gain any performance increase from using the object model over the command line?
I should imagine the command line is implemented internally with the same code as you'd find in the object model, so unless there's a large amount of startup required, it shouldn't make much of a difference.
The cost of rewriting to use the object model is probably more than would be saved in just leaving it go as it is. Unless you have a definite problem with the time taken, I doubt this will be much of a solution for you.
You could investigate shadow directories so the latest version is always available, so you don't have to perform a 'getlatest' every time, and you could ensure that you're talking to a local VSS (as all commands are performed directly on the filesystem, so WAN operations are tremendously expensive).
Otherwise, you're stuck unless you'd like to go with a different SCM (and I recommend SVN: there's an excellent converter available on CodePlex for it, with example code showing how to use the VSS and SVN object models).
VSS uses a mounted file system to share the database. When you get a file from SourceSafe, it works at the file-system level, which means that instead of just transferring the file, it transfers all the disk blocks needed to locate the file as well as the file itself. This adds up to a lot more transactions and extra data.
When using VSS over a remote or slow connection or with huge projects it can be pretty much unusable.
There is a product which, amongst other things, improves the speed of VSS by roughly 12 times when used over a network. It does this by implementing a client-server protocol. The traffic can additionally be encrypted, which is useful when using VSS over the internet.
I don't work for them or have any connection with them; I just used it at a previous company.
See SourceOffSite at www.sourcegear.com.
In answer to the only part of your question which seems to have any substance: no, switching to the object model will not be any quicker, as the "slowness" comes from the protocol used for sharing the files between VSS and the database (see my other answer).
The product I mentioned works alongside VSS to address the problem you have. You still use VSS and have to have licences to use it... it just speeds things up where you need them.
Not sure why you marked me down?!
We've since upgraded our source control to Team Foundation Server. When we were using VSS, I noticed the same thing in the CruiseControl.Net build logs (caveat: I never researched what CC uses; I'm assuming the command line).
Based on my experience, I would say the problem is VSS. Our TFS is located over 1000 miles away and gets are faster than when the servers were separated by about 6 feet of ethernet cables.
Edit: To put on my business hat, if you add up the time spent waiting for builds and the time spent trying to speed them up, it may be enough to warrant upgrading or buying the VSS add-on mentioned in another post (already +1'd it). I wouldn't spend much of your time building a solution on VSS.
I'm betting that running the object model will be slower by at least 2 hours... ;-)
How is the command line tool used? You're not by chance calling the tool once per file?
It doesn't sound like it ('recursive get' pretty much implies you're not), but I thought I'd throw this thought in. Others may have similar problems to yours, and this seems frighteningly common with source control systems.
ClearCase at one client performed like a complete dog because the client's backend scripts did this. Each command line call created a connection, authenticated the user, got a file, and closed the connection. Tens of thousands of times. Oh, the dangers of a command line interface and a little bit of Perl.
With the API, you're very likely to properly hold the session open between actions.
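To illustrate the difference, here is a sketch only; "scmtool" is a hypothetical stand-in for whatever CLI is involved (ss.exe, cleartool, ...) and the file list is made up. It contrasts spawning the tool once per file with a single recursive call or a held-open API session:

    import java.util.List;

    public class PerFileAntiPattern {
        public static void main(String[] args) throws Exception {
            List<String> files = List.of("src/A.java", "src/B.java"); // made-up file list

            // Anti-pattern: one process, one connection and one authentication per file.
            for (String file : files) {
                new ProcessBuilder("scmtool", "get", file).inheritIO().start().waitFor();
            }

            // Better: a single recursive get (or one API session held open across actions)
            // pays the connection and authentication cost only once.
            new ProcessBuilder("scmtool", "get", "-R", "$/Project").inheritIO().start().waitFor();
        }
    }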