I am new to Magento 2 and I was wondering if it is possible to take a backup of Magento without putting it in maintenance mode? (It's a client requirement, since they do it daily.)
Thanks in advance :)
Kind Regards
Sajid
You can set up scheduled backups from the back office:
Stores => Configuration => Advanced => System => Scheduled Backup Settings, and configure the backup to run without maintenance mode.
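If you only need the database backed up daily, a plain mysqldump cron job is another option that never touches maintenance mode. This is a sketch, not the Magento-built-in mechanism: the user/database names below are placeholders, and in a real install you would read the credentials from app/etc/env.php.

```shell
# Placeholder credentials; take the real ones from app/etc/env.php.
DB_USER="magento_user"
DB_NAME="magento_db"
OUT="magento-db-$(date +%F).sql.gz"

# --single-transaction takes a consistent InnoDB snapshot without locking
# tables, which is why the store can stay online during the dump.
if command -v mysqldump >/dev/null 2>&1; then
  mysqldump --single-transaction --routines -u "$DB_USER" "$DB_NAME" | gzip > "$OUT"
else
  echo "mysqldump not installed; would have written $OUT"
fi
```

Drop a line like this into cron and you get a dated, compressed dump every day with no downtime.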
I have a database project in Visual Studio 2017. Our database project is managed just like any other library of code, where multiple developers can update the project as necessary. To ease the pain of deployments, I have built a custom deployment task in our TFS 2018 (vNext) build process: a PowerShell script that calls SqlPackage.exe. SqlPackage compares our compiled database project (*.dacpac file) to our target database (in Dev, QA, etc.). I have the custom step configured so that it first writes the expected changes to disk, so I have a record of what was changed, and then SqlPackage runs a second pass to apply the changes to the intended target database.
My DBA enabled the Query Store in our SQL 2016 database. During my SqlPackage deployment, one of the initial steps is to turn the Query Store off, which makes my DBA unhappy. He wants the ability to compare pre- and post-deployment changes, but if the Query Store gets turned off, we lose the history.
I have tried several of the switches in the documentation (https://msdn.microsoft.com/en-us/library/hh550080(v=vs.103).aspx#Publish%20Parameters,%20Properties,%20and%20SQLCMD%20Variables) but I can't seem to find the magic parameter.
How do I stop SqlPackage from turning off the query store?
My current script:
sqlPackage.exe /Action:Script /SourceFile:"myPath\MyDatabaseName.dacpac" /OutputPath:"myPath\TheseAreMyChangesThatWillBeApplied.sql" /TargetConnectionString:"Integrated Security=true;server=MyServer;database=MyDatabase;" /p:DropObjectsNotInSource=false /p:DropPermissionsNotInSource=false /p:DropRoleMembersNotInSource=false /p:BlockOnPossibleDataLoss=True /Variables:"CrossDatabaseRefs=CrossDatabaseRefs"
Is there a better way? I am surprised that I had to write a custom TFS Build Task to do this. Which makes me think that I might be doing it the hard way. (But this has worked pretty well for me for the last several years). I love database projects, I love that they enforce references and ensure that we don't break other objects when a column is dropped (for instance).
Any insight would be greatly appreciated.
Either disable the scripting of database properties using /p:ScriptDatabaseOptions=false, or update the database properties in the project to reflect the desired Query Store settings.
To set the Query Store settings in the project, right-click the database project in Solution Explorer and open Properties. From there, in the Project Settings tab find the "Database Settings..." button. In the Database Settings dialog, click the Operational tab and scroll down to find the Query Store settings.
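Applied to the script in the question, the first option is just one more /p: property on the command line. A sketch (same paths and Script action as in the question; verify the flag name against your SqlPackage version's documentation):

```
sqlPackage.exe /Action:Script ^
  /SourceFile:"myPath\MyDatabaseName.dacpac" ^
  /OutputPath:"myPath\TheseAreMyChangesThatWillBeApplied.sql" ^
  /TargetConnectionString:"Integrated Security=true;server=MyServer;database=MyDatabase;" ^
  /p:ScriptDatabaseOptions=false ^
  /p:DropObjectsNotInSource=false
```

With ScriptDatabaseOptions disabled, SqlPackage stops scripting database-level settings entirely, so it no longer emits the statement that switches the Query Store off.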
Apparently, all we needed to do was add a post deployment script to re-enable the Query Store. Hope this helps someone out there...
USE [master];
GO
ALTER DATABASE [MyDbName] SET QUERY_STORE = ON;
GO
We are using Confluence 5.8.5 (Server) for small internal teams in my company. But we have been asked to move our Confluence to the enterprise Confluence handled by another, larger Confluence admin team in our company. So I thought the best way would be to export each space to XML and import it into the enterprise instance. I tried to export one space and got the error below.
There was an error in the export. Please check your log files. :java.io.IOException: No space left on device
I observed that I do not have free space on the "/var" mount of my Confluence 5.8.5 server. I raised a request to increase the space, which may take 3 business days. As a workaround, is there any way I can export the space to another mount, or directly from the UI to my desktop like we do for Jira?
Exporting and importing a space in Confluence needs free storage space on the server. Beyond that, exporting/importing a space or site requires free memory as well, so please ensure that you have allocated enough memory to the JVM. I believe you will have to wait for the extra storage at this stage.
Also, importing into the next major version is not good practice for Confluence. I would try it in a test environment before importing directly into production.
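If waiting isn't an option, one workaround (assuming your Confluence writes space exports under the Confluence home directory's temp folder, and with Confluence stopped while you do this) is to point that temp directory at a mount that does have room via a symlink. The paths below are throwaway demo directories so the sketch is safe to run as-is; in production CONFLUENCE_HOME would be your real home directory and BIG_MOUNT the roomy mount:

```shell
# Demo stand-ins for the real paths (adjust for your server).
CONFLUENCE_HOME=$(mktemp -d)
BIG_MOUNT=$(mktemp -d)

# Relocate <confluence-home>/temp onto the mount with free space.
mkdir -p "$BIG_MOUNT/confluence-temp"
rm -rf "$CONFLUENCE_HOME/temp"
ln -s "$BIG_MOUNT/confluence-temp" "$CONFLUENCE_HOME/temp"

# Anything written to <confluence-home>/temp now lands on the big mount:
touch "$CONFLUENCE_HOME/temp/Confluence-space-export.xml.zip"
ls "$BIG_MOUNT/confluence-temp/"
```

Restart Confluence afterwards and retry the export; undo the symlink once the "/var" expansion comes through.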
I've come across a strange kind of bug or issue after updating my MODx Revo 2.2.15-pl to MODx Revo 2.3.0-pl:
The right side of the admin panel just disappeared after the update.
However, if I install a fresh copy of MODX from scratch, this problem doesn't appear at all.
I have updated my site several times before without problems, always following the steps described in the official manual, Upgrading MODX Revolution 2.x.
Maybe someone familiar with this situation can help me find out what's wrong?
Also be careful with the .htaccess file in the MODx root folder; try not to change it, as in some cases it causes issues after an upgrade.
I find this is generally a symptom of permissions on the /assets/ directory:
Make sure the webserver has permission to read/write to /assets/components/ [and /manager/ and /connectors/].
Clear the cache [always!]
Disable JS/CSS compression.
Make sure the left panel is not actually collapsed [in older versions the little arrow was difficult to see!]
Errors/problems should show up in the core/cache/logs/ installation log.
Anyway, what has probably happened here is that you uploaded all your files via FTP as the FTP user, and the webserver cannot overwrite those files. To make life easier on you, use the advanced install; that way all you have to upload are the /setup/ and /core/ directories [both should be read/writable by the webserver as well].
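The permission and cache steps above can be sketched as follows. SITE here is a throwaway demo tree so the commands are safe to try as-is; on a real host you would point it at your MODX root (e.g. /var/www/modx, an assumption) and additionally chown the directories to the webserver user (e.g. www-data), which needs root:

```shell
SITE=$(mktemp -d)   # stand-in for the real MODX root
mkdir -p "$SITE/assets/components" "$SITE/manager" "$SITE/connectors" "$SITE/core/cache/logs"
touch "$SITE/core/cache/stale-entry"

# 1. Make the directories listed above read/writable. u+rwX adds execute
#    only on directories, so plain files don't become executable.
#    In production, run: chown -R www-data:www-data <these dirs> first.
chmod -R u+rwX "$SITE/assets/components" "$SITE/manager" "$SITE/connectors"

# 2. Clear the MODX cache [always!]:
rm -rf "$SITE/core/cache/"*

ls -A "$SITE/core/cache"   # nothing left - cache is empty
```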
I have an issue updating content in Umbraco. Whenever I update something in Umbraco, I have to wait at least one hour, sometimes 12 hours, to see the changes on the front end.
The only way to see the changes immediately on the front end is to empty the connection string value umbracoDbDSN and refresh the page, then put the connection string back and refresh the page again. I have to do this every time I update something in the CMS.
Do you guys have any idea what is happening here? Thanks.
The problem was that Umbraco was configured to run on load-balanced servers on our old infrastructure. I had to turn that off on the new server:
<distributedCall enable="false"> in umbracoSettings.config
What version are you running? When v5 first came out, I had a big problem with that (and solved it like you did, by touching the web.config to force a reset). Hopefully you are not using v5, as it has been discontinued and has extreme performance issues.
I have not had that problem in any v4.x versions that I can remember; changes should show up instantly after you republish.
Are you running a standard configuration? Using a web farm by any chance?
Is the ~/App_Data/umbraco.config file being written to on publish? This is the XML cache file used to display your website.
When you publish a node, the data is serialized into XML, stored in a database table and then written out to the umbraco.config cache.
This could be some kind of permissions issue, if Umbraco doesn't have rights to read/write the file. Or you could have a corrupt DLL that just isn't writing to it correctly. Or perhaps it's writing it out just fine, but your server is caching your pages in a weird way. Either way, I'd first take a look at umbraco.config and make sure the data is being written to it on publish.
I work in a large, evolving code base and use Perforce to manage it. The problem is that I need to update it every day, and it takes a long time to do. I am looking for ways to automate this process.
I first thought of writing a script and making it a scheduled task. I could not do it, since running "p4 sync" gives me a "p4 protect" related error. I don't have, and will not get, admin rights to the server, so I can't add myself to the protect table.
Since I can sync through P4V (the Perforce UI), I guess there should be a way to achieve this through custom tools or something similar.
Can you please provide pointers on how to approach this problem, or point me to an existing solution for it?
If 'p4 sync' gets a protect error in your script, but not when you use P4V, you most likely have the wrong environment in your script. P4PORT, P4USER, and P4CLIENT need to have exactly the same settings in your script as they do in your P4V connection.
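A minimal scheduled-sync sketch along those lines. The server, user, and workspace values below are placeholders; copy the exact values shown in P4V's connection settings:

```shell
# Placeholders: substitute the values your P4V connection shows.
export P4PORT="perforce.example.com:1666"
export P4USER="myuser"
export P4CLIENT="myuser_workspace"

# With the same environment P4V uses, 'p4 sync' should pass the same
# 'p4 protect' checks that P4V does.
if command -v p4 >/dev/null 2>&1; then
  p4 sync || echo "p4 sync failed: re-check P4PORT/P4USER/P4CLIENT against P4V"
else
  echo "p4 CLI not installed; would sync workspace $P4CLIENT on $P4PORT"
fi
```

Once this runs cleanly by hand, point your scheduled task at the script; no admin rights on the server are needed, since it uses the same permissions your P4V session already has.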