I thought of migrating from Joomla 1.5.23 to 1.7 and, like almost everyone, I too ran into problems (good thing I backed up my site).
The problem I am facing is that jUpgrade gets stuck at 'Migrating undefined'. Joomla 1.7 downloads completely and also extracts correctly. I think I am still facing this problem because I somehow run out of space during the installation. What I wanted to know is: how much disk space does the migration require?
I have about 25 MB free on my server, and I am only allowed 100 MB in total.
Thank you.
By the way, I also tried unchecking the 'skip downloads' option; that didn't work for me either.
You will probably need more disk space than you have available. Your current site, plus the downloaded zip file, plus space for extracting the files, plus any backups you keep on the server, are likely to exceed your 100 MB.
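As a rough back-of-the-envelope check, here is what that can add up to; the package sizes below are assumptions for illustration, only the quota and free space come from the question itself.

```python
# Back-of-the-envelope disk estimate; package sizes are assumed, not measured.
quota_mb = 100
free_mb = 25                   # stated free space
used_mb = quota_mb - free_mb   # current site + existing backups ~= 75 MB

download_zip_mb = 10           # assumed size of the Joomla 1.7 package
extracted_mb = 35              # assumed size once the package is unpacked

needed_mb = used_mb + download_zip_mb + extracted_mb
print(needed_mb, "MB needed vs", quota_mb, "MB quota")  # 120 MB vs 100 MB
```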
I'd recommend taking a backup of your site, setting up the site on a localhost server (XAMPP, WAMP, etc.) on your own machine, and running the migration there. This has the benefit of not hitting the arbitrary limits of what sounds like a very low-budget web host.
Obviously you'll have the extra complexity of setting up your own server on your PC, but there are many tutorials out there that will walk you through the process, and learning new skills is always a good thing.
I've been idly bashing away at an issue with Postgres for months now. I have a piece of software (custom in-house stuff) that on 24 out of 25 servers runs a certain process absolutely fine, no issues whatsoever.
On the 25th server, though, the process wouldn't quite complete properly; it would fail at the final hurdle, which was a simple date change.
It had been a back-burner type of issue, so I hadn't committed much time to working it out until management started to get antsy, at which point I spent most of yesterday bashing away at it.
Obvious checks were done first:
Postgres version (9.6)
Software version
Windows patches (Server 2019)
GPOs
NTFS permissions
etc.
All checked out as matches across every server. Went through the Postgres and the in-house software logs at length, had one of the developers build a stand-alone executable for the process with a ridiculous amount of logging, still no dice. No indicators. Procmon and Wireshark logs told the same story: nothing clear at all as to what was going on.
So then we take a backup of the database, load it in under a different name for testing, and run the process, only to find that it now works fine on the cloned database. This leads us to think there is maybe some kind of formatting issue in the database, on the theory that doing the backup and restore would shake things around. So we go back to the live database, back it up again, delete the DB from Postgres, and restore from the backup.
No dice. Still broken.
Cue some serious confusion. We have done essentially the same thing as when cloning live to test, and are still getting the same fault at the end of the process.
After some head-scratching and more prodding around in the logs, I hit upon the idea of taking a fresh backup of the live DB, deleting the database, restoring the backup under a different name, and then pointing the live software install at the newly named live DB and testing the process again (roughly the sequence sketched below).
It works!
For clarity, the filenames are basic alpha only. Upper case and lower case. No numbers, no symbols. Less than 15 characters in length.
I'm at a loss as to why it's now working and I'd love to get some input from the community.
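For anyone wanting to try the same workaround, here is a minimal sketch of the backup / restore-under-a-new-name sequence described above, driven from Python. The database names and dump path are placeholders, and it assumes the standard PostgreSQL client tools (pg_dump, dropdb, createdb, pg_restore) are on the PATH with authentication already configured (e.g. via .pgpass).

```python
import subprocess

OLD_DB = "livedb"           # placeholder: the original live database name
NEW_DB = "livedb_renamed"   # placeholder: the new name the software will point at
DUMP_FILE = "/tmp/livedb.dump"

def run(cmd):
    """Run a command, echo it, and fail loudly on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Take a fresh custom-format backup of the live database.
run(["pg_dump", "--format=custom", "--file", DUMP_FILE, OLD_DB])

# 2. Drop the original database (nothing may be connected to it at this point).
run(["dropdb", OLD_DB])

# 3. Create an empty database under the new name and restore the dump into it.
run(["createdb", NEW_DB])
run(["pg_restore", "--dbname", NEW_DB, DUMP_FILE])

# 4. Finally, the live software's connection settings are repointed at NEW_DB.
```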
I have an e-commerce website on Magento 2.2.2 and it keeps going down almost every day. Whenever it goes down, users get a 'site took too long to respond' error and it never loads. To get the website working again I have to restart the server, and then it works.
Total space on the server is 50 GB, of which the whole website is around 18 GB (11 GB of media files, plus vendor files, etc.). Here are the things I cannot figure out:
a.) The server shows that 33 GB has been used, although it should show only 18 GB used. I have checked everywhere and I can't find what is consuming the additional 15 GB of space. The complete HTML folder is only 18 GB.
b.) When I checked the log files, I found the following:
WARNING: Memory size allocated for the temporary table is more than 20% of innodb_buffer_pool_size. Please update innodb_buffer_pool_size or decrease batch size value (which decreases memory usages for the temporary table). Current batch size: 100000; Allocated memory size: 280000000 bytes; InnoDB buffer pool size: 1073741824 bytes.
I have already set innodb_buffer_pool_size to 2 GB, but this warning keeps appearing.
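As a sanity check on the numbers in that warning (assuming the threshold is exactly the 20% relationship the message itself describes), note that the warning reports a 1 GiB pool, which suggests the configured 2 GB value was not actually in effect when it was logged:

```python
# Figures taken directly from the logged warning above.
allocated = 280_000_000        # bytes allocated for the temporary table
buffer_pool = 1_073_741_824    # innodb_buffer_pool_size reported in the warning (1 GiB)

threshold = 0.2 * buffer_pool  # 214,748,364.8 bytes
print(allocated > threshold)   # True -> the warning fires

# Smallest buffer pool that would silence it at the current batch size (100000):
print(allocated / 0.2)         # 1,400,000,000 bytes, i.e. roughly 1.3 GiB
```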
The server is an Amazon EC2 instance and Magento is in production mode. Will allocating 100 GB instead of 50 GB solve the problem?
I have since increased innodb_buffer_pool_size to 10 GB and the logs no longer show the warning, but the server still goes down every day. Since our server has only 4 GB of RAM, could that be the main cause? Everyone seems to suggest at least 8 GB of RAM.
Try the things below.
Magento 2 produces large log files and cache data, so it may be the files in the var folder that are growing.
You should also check whether your site has more than 3,000 or so products with large product images, all of which you are storing on the server itself.
My suggestion: if your site has that many products, use a CDN for better performance, so that images are served by a third party.
Next, set up Cloudflare to reduce downtime errors and the impact on customers. You can serve your index page while the server is down, and you should also write a script to restart the site automatically when it goes down.
On the server side, check the PHP memory limit; it is better to set it to 2G.
On the MySQL side: check whether connections are sitting in the Sleep state (see the sketch below). If they come from your custom extension code, ask your developer to optimize it.
For example, the code may be loading an entire collection just to fetch a single item.
You can use a tool like New Relic.
If everything is fine on the development side, try to optimize on the server side: memory limits, killing runaway MySQL queries, and so on.
Meanwhile, Magento is a big platform for the e-commerce sector, so it covers a lot of ground by default. It is better to remove unwanted modules from your active site, for example by disabling core modules that you are not using.
For an average site, use 16 GB of RAM.
And did you restart MySQL to make the setting take effect?
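Here is a minimal sketch of the "check for sleeping MySQL connections" suggestion above, using the PyMySQL package; the host, credentials, and the 300-second idle threshold are placeholders.

```python
import pymysql

# Placeholder connection details.
conn = pymysql.connect(host="127.0.0.1", user="magento", password="secret", database="magento")
try:
    with conn.cursor() as cur:
        cur.execute("SHOW FULL PROCESSLIST")
        for row in cur.fetchall():
            # Standard columns: Id, User, Host, db, Command, Time, State, Info
            thread_id, user, host, command, idle_seconds = row[0], row[1], row[2], row[4], row[5]
            # Long-lived Sleep connections often point at an extension that holds
            # connections open, or at an overly generous wait_timeout.
            if command == "Sleep" and idle_seconds > 300:
                print(f"Sleeping connection {thread_id} from {user}@{host}, idle {idle_seconds}s")
finally:
    conn.close()
```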
Also you need to set that buffer up to 20971520000 bytes; that's around 20 GB.
Magento uses a lot of sessions and cache.
I have a MongoDB database with several million users.
I wanted to free up space, so I created a bot to remove users who have been inactive for more than 6 months.
I have been watching the disk for several minutes
and I have seen the usage vary, but it does not release any significant space, not even 1 MB. That's weird.
I've read that "remove" does not actually free the space on disk, but simply marks it as deletable or available to be overwritten. Is that true?
That seemed to make a lot of sense to me. So I looked for something that forces the space to really be freed...
I applied repairDatabase() and I think I've made a mistake.
Everything is now blocked!
I tried my luck and restarted the server.
The MongoDB service is there, but its status stays at "Starting" (not Running).
I'm reading on other sites that repairDatabase() requires twice as much free space as the original size of the database, which it does not have.
I do not know what it is doing, and this could go on for several hours, or days...
Is the database lost? I think I will stop all services and delete the database.
repairDatabase is similar to fsck. That is, it attempts to clean the database of any corrupt documents which may be preventing MongoDB from starting up. How it works in detail differs depending on your storage engine, but repairDatabase could potentially remove documents from the database.
The details of what the command does is outlined quite clearly (with all the warnings) in the MongoDB documentation page: https://docs.mongodb.com/manual/reference/command/repairDatabase/
I would suggest that next time it's better to read the official documentation first rather than what people say in forums. Second-hand information like that can be outdated, or just plain wrong.
Having said that, you should leave the process running until completion, and perform any troubleshooting if the database cannot be started. It may require 2x the disk space of your data, but it's also possible that the command just needs time to finish.
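For reference, here is a minimal PyMongo sketch of the inactive-user cleanup described in the question, plus a way to see why the disk usage does not shrink afterwards; the database name, collection name, and last_seen field are hypothetical.

```python
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]                                # hypothetical database name

cutoff = datetime.utcnow() - timedelta(days=183)   # roughly 6 months

# Delete users whose last recorded activity is older than the cutoff.
result = db.users.delete_many({"last_seen": {"$lt": cutoff}})
print("deleted:", result.deleted_count)

# dbStats shows logical data size vs. storage actually allocated on disk.
# After deletes, storageSize usually stays the same: the freed space is kept
# for reuse inside the data files rather than returned to the OS.
stats = db.command("dbStats")
print("dataSize:", stats["dataSize"], "storageSize:", stats["storageSize"])
```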
I have been running a large-ish site for years, with a typical nginx/Apache setup, and all the "pages" are mod_perl. Up until recently, I was running on FreeBSD. After a hardware replacement, and for other reasons, I was forced to migrate to Ubuntu (12.04.2 LTS), which I use on many other servers, so no big deal. However, I now have a problem with my logs.
For some reason, more and more "actions" are no longer logged through Log4Perl. This was never a problem on my previous setup, but now I seem to "lose" between 2 and 15% of my log entries. This has been checked and verified by logging the same data to a database at the same time.
Does anyone have a clue why this would happen?
Is there something I should know about large log files and Ubuntu? (It's not that large, to be honest: 390 MB at the moment.)
I get nothing in my error logs anywhere, and as the database logging happens AFTER the $log->info("ENTRY HERE"), the script obviously doesn't crash. But I am missing a lot of those ENTRY HEREs :)
The log in question is "hit" about once per second on average, but I wouldn't have thought that would be a big problem?
Could there be "too many processes" trying to write to the log in parallel, causing locking issues and preventing data from being appended to the file? Are there any typical Ubuntu settings that might need adjusting for something like this?
Any help would be greatly appreciated.
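The concurrent-writer concern in the last question can be illustrated outside Log4Perl. Below is a minimal Python analogue of the usual single-writer pattern (workers push records onto a queue; one listener owns the file), shown only to make the idea concrete; it is not the poster's mod_perl setup.

```python
import logging
import logging.handlers
import multiprocessing

def worker(queue, worker_id):
    # Workers never touch the log file directly; they only hand records to the queue.
    log = logging.getLogger(f"worker-{worker_id}")
    log.addHandler(logging.handlers.QueueHandler(queue))
    log.setLevel(logging.INFO)
    for i in range(100):
        log.info("ENTRY HERE from worker %s, message %s", worker_id, i)

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    file_handler = logging.FileHandler("app.log")
    file_handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(message)s"))

    # A single listener drains the queue and is the only writer to the file.
    listener = logging.handlers.QueueListener(queue, file_handler)
    listener.start()

    procs = [multiprocessing.Process(target=worker, args=(queue, i)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    listener.stop()  # flush anything still queued before exiting
```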
How is performance in the current version (4.7) of AccuRev?
Time to check out, per 100 MB or per GB?
Time to commit, per number of files or MB?
Responsiveness of the GUI with 100+ streams?
I just had a demo of AccuRev, and the streams look like a lightweight way to model workflow around code/projects. I've heard people praising AccuRev for the streams back end and complaining about performance. AccuRev appears to have worked on the performance, but I'd like to get some real-world data to make sure it isn't a case of demos-well-but-runs-less-well.
Does anyone have AccuRev performance anecdotes or (even better) data from testing?
I don't have any numbers but I can tell you where we have noticed performance issues.
Our builds typically use 30-40K files from source control. In my workspace currently there are over 66K files, including build intermediate and output files, over 15 GB in size. To keep AccuRev working responsively we aggressively use ignore elements so AccuRev ignores any intermediate files such as *.obj. In addition we use the timestamp optimization. In general running an update is quick, but the project sizes are typically 5-10 people, so normally only a couple of dozen files come down if you update daily. Even if someone made changes that touched lots of files, speed is not an issue. On the other hand, a full populate of all 30K+ files is slow. I don't have a time since I seldom do this, and on the rare occasion I do, I run the populate when I'm going to lunch or a meeting. I expect it could be as much as 10 minutes. In general source files come down very quickly, but we have some large binary files, 10-20 MB, that take a couple of seconds each.
If the exclude rules and ignore elements are not correctly configured, AccuRev can take a couple of minutes to run an update for workspaces of this size. When I hear other developers complaining about the speed, I know something is misconfigured and we get it straightened out.
A year or so ago one of the projects updated Boost, with 25K+ files, and also added Firefox to the repository (I forget the size, but it made Boost look small). They also added ICU, wrote a lot of software, and modified countless files. In all I recall there were approximately 250K+ files sitting in a stream. I unfortunately decided that all their good code should be promoted to the root so all projects could share. This turned out to be a little beyond what AccuRev could handle well. It was a multi-hour process getting all the changes promoted. As I recall, once Firefox was promoted the rest went smoothly - perhaps a single transaction with over 100K files was the issue?
I recently updated Boost and so had to keep and promote 25K+ files. It took a minute or two, but that's not unreasonable considering the number of files and the size of the binaries.
As for the number of streams, we have over 800 streams and workspaces. Performance here is not an issue. In general I find the large number of streams hard to navigate, so I run a filtered view of just my workspaces and just the streams I'm interested in. However, when I need to look at the unfiltered list to find something, performance is fine.
As a final note, AccuRev support is terrific - we call them the voice in the sky. Every now and again we shoot ourselves in the foot using AccuRev and wind up clueless about how to fix things. Almost always we did something dumb and then tried something dumber to fix it. Eventually we place a support request, and the next thing we know they are walking us through the steps to righteousness, either on the phone or in a GoTo Meeting. I've even contacted them for trivial things that I just don't have time to figure out when I'm having a hectic day, and they kindly walk me through it rather than telling me to RTFM.
Edit 2014: We can now get acceptable X-Windows performance by using the commercial version of RealVNC.
Original comment: This answer applies to any version of AccuRev, not just 4.7.
Firstly, GUI performance might be OK if you can use the web client. If you can't use the web client and you want GUI performance, then you'd better be using Windows, or have all your developers in one place, i.e. where the AccuRev server is located. Try to run the GUI on X Windows over a WAN? Forget it: our experience has been dozens of seconds or minutes for basic point-and-click operations. This is over a fairly good WAN about 800 miles distant, with an almost optimal ping time. This is not a failing of AccuRev but of X Windows, and you'll likely have similar problems with other X applications over a WAN. So avoid basic X if you possibly can. Currently we cannot, and our WAN users are forcibly relegated to command-line only.
The basic problem is that AccuRev is centralized and you can't increase the speed of light. I believe you can get around WAN latency by running AccuRev Replication Servers, but that still does not properly address the problem if you have remote developers at single-person offices over VPN. It is ironic that the replication servers somewhat turn this centralized VCS into a form of DVCS.
If you don't have replication servers, then a horrible but somewhat workable work-around is to use a delta-synchronization tool such as rsync to sync your source tree between your local machine, where you can run the GUI (i.e. GUI running directly on your Windows or Linux laptop), and the machine where you're actually working (e.g. a UNIX machine 1,000 miles away). Another option is to use something like VNC, which works better over a WAN than X, connecting to a virtual desktop at the AccuRev server's location, and running X from there.
At my workplace more than one team has resorted to using Mercurial on the side and promoting to AccuRev only when it's strictly necessary. As Stephen Nutt points out above, other necessary work is to use the timestamp optimization and ignores. We also have our AccuRev admins (yes, it requires you to employ people to babysit it) complain when we need to include large numbers of files, despite the fact that they form a core part of our product and MUST be included and version controlled. Draw your own conclusions.