My question may be stupid, but I'm not good enough at Microsoft Word to solve this problem myself. I tried some solutions, of course not all of them, because I have limited time.
My problem may be simple: I can't copy some Word tables into a new Word file. The trouble is that the tables I want to copy seem to be badly constructed (I guess), as if they were made to resist copying...
Here is the file: https://gofile.io/d/09vnQf. The first example is on page 90.
The problem tables look like this at the end: Example Image.
Only the tables that end like that refuse to copy; when I try, their formatting just goes wild. What should I do?
My apologies in advance if this question has already been asked; if so, I could not find it.
I have a huge database divided by country. I need to import each country's database individually and then, in Power Query, append the queries into one.
When I imported the US files, Power Query automatically generated a Transform File folder with 4 helper queries:
Then I just duplicated the query US - Sales, named it UK - Sales, and pointed it to the UK sales folder:
The Transform File folder didn't duplicate, though.
Everything seems to be working just fine right now; however, I'd like to know whether this could become a problem down the line, because I still have several countries to go. Should I manually import new queries as new connections instead of just duplicating them, or does it not matter?
Many thanks!
The Transform File folder group contains the code that is called to transform a list of files; it is reusable code. You can see the Sample File query, which serves as the template for the transform actions.
As long as the file that the Sample File resolves to has the same structure as the files you are feeding into the command, you can use any query with any list of files.
One thing you need to make sure of is that the Sample File is not removed from your data source. You may want to create a dummy file just for that purpose, make sure it won't be deleted, and then point the Sample File query at that file alone, along the lines of the sketch below.
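For instance, a minimal M sketch of a Sample File query pinned to one known-stable dummy file (the folder path and the file name template.csv are assumptions):

    let
        Source = Folder.Files("C:\Data\US Sales"),
        // Pick the dummy template file by name, so adding or removing real
        // data files can never change which file the helpers are built on
        SampleFile = Source{[Name = "template.csv"]}[Content]
    in
        SampleFile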
The Transform helper queries are special: you may edit them, but you cannot delete them and recreate your own manually. They are automatically created by PQ when combining a list of contents and are inherently linked to the parent query.
That is to say, you cannot replicate them yourself; you must use the Combine function provided by PQ to create the helper queries.
You can, however, avoid duplicating the queries: replicate your steps in the parent query and use a table union to join the file lists before combining the contents with the same helper queries, as in the sketch below.
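As a minimal M sketch (the folder paths are assumptions; "Transform File" and "Sample File" are the names Power Query generates by default when you first combine a folder):

    let
        UsFiles = Folder.Files("C:\Data\US Sales"),
        UkFiles = Folder.Files("C:\Data\UK Sales"),
        // Union the file lists first, so one set of helper queries serves every country
        AllFiles = Table.Combine({UsFiles, UkFiles}),
        // Invoke the helper function PQ generated when the US folder was first combined
        WithData = Table.AddColumn(AllFiles, "Data", each #"Transform File"([Content])),
        Expanded = Table.ExpandTableColumn(WithData, "Data",
            Table.ColumnNames(#"Transform File"(#"Sample File")))
    in
        Expanded

Each new country then only needs its Folder.Files step added to the union, rather than its own copy of the helper queries.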
I am working on my portfolio for grad school and I am having an issue with inserting one Word document into another while keeping the original formatting of both. I built the main document so that all I would need to do is insert my supporting documentation, which has to use a different format. I tried next-page section breaks; they generate the next page, but all the formatting is still tied to the main document. Thanks in advance.
Just do a regular copy and paste: Ctrl+C on the source and Ctrl+V at the destination. Best.
Hello, I want to be able to compare values before and after form handling, so that I can process them before flush.
What I do is collect the old values in an array before handleRequest().
I then compare new values to the old values in the array.
It works perfectly for simple variables, strings for instance.
However, I want this to work for uploaded files. I am able to get their full paths and names before handling the form, but when I read the values after checking that the form is valid, I still get the same old value.
I tried calling both $entity->getVar() and $form->getData()->getVar(), and I get the same output either way....
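For reference, a stripped-down sketch of the pattern (entity and getter names are illustrative):

    // Old values collected before handling the request
    $oldPath = $entity->getFilePath();

    $form->handleRequest($request);

    if ($form->isSubmitted() && $form->isValid()) {
        // Fine for scalar fields, but for the upload this still
        // returns the old path at this point
        $newPath = $form->getData()->getFilePath();
        if ($newPath !== $oldPath) {
            // process the change before flush
        }
    }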
Hello, I actually found a solution, though it departs from the strategy announced in my question, which I now realize was somewhat truncated relative to my objective. The objective was to compare old file names and new file names (those names actually include the full path) for changes, so that I could unlink the old names that were no longer in the new name list. Basically, I wanted to clean up after a file was uploaded to replace another, without the first one being deleted first, and to save the webmaster the hassle of sorting between uniqid-named files that are still used by the website and those that are useless.
The problem is that my upload functions, which closely follow the file upload examples shown on the official documentation pages, only seemed to take effect at flush time.
So, since what I wanted to do with those files had nothing to do with database operations, I resorted to having the step-two code run after flush (outlined below), which works fine.
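In outline, the post-flush cleanup looks something like this (entity and method names are illustrative):

    // Paths recorded before handleRequest()
    $oldPaths = $entity->getFilePaths();

    $em->persist($entity);
    $em->flush(); // the upload functions move the files here

    // After flush the entity reports the new names, so stale files can go
    foreach (array_diff($oldPaths, $entity->getFilePaths()) as $stalePath) {
        if (is_file($stalePath)) {
            unlink($stalePath);
        }
    }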
However, I am intrigued by your solutions, as they are both strategies I hadn't thought of. Thank you for the suggestions.
That said, I am not sure cloning the whole object would be as straightforward as comparing two arrays of file names.
Recently we added an "audit_logs" table to the database, and after some frustration I realised that there was already an "auditlog" table in the database for some reason. It wasn't being used, so I dropped it. I deleted the Auditlog.pm and AuditLogs.pm files from my schema and then regenerated. For some reason DBIx::Class::Schema::Loader (DCSL) again created AuditLogs.pm for the "audit_logs" table, even though there was no longer an "auditlog" table or Auditlog.pm file to conflict with it.
I have tried just about everything I can think of to get it to generate AuditLog.pm, without success. The only explanation I can come up with is that it is caching the moniker map somewhere, and I cannot seem to reset it.
I eventually tracked this problem down to an issue with the Lingua inflector. It was treating "logs" as a singular verb instead of a plural noun, because it followed the word "audit", which ends in "it". Basically, I had to write a custom moniker_map function that added an exception for audit_logs, as sketched below.
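A minimal sketch of such a moniker_map (the schema name, dump directory, and connection details are placeholders):

    use DBIx::Class::Schema::Loader qw(make_schema_at);

    make_schema_at(
        'My::Schema',
        {
            dump_directory => './lib',
            moniker_map    => sub {
                my ($table) = @_;
                # Exception for the name the inflector gets wrong;
                # undef falls back to the default moniker logic
                return 'AuditLog' if "$table" eq 'audit_logs';
                return undef;
            },
        },
        [ 'dbi:Pg:dbname=mydb', 'user', 'pass' ],
    );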
Edit: This was a bogus question. The problem was that I had quotes inside my description field. The entire field should be wrapped in one set of quotes with none inside; I changed the inner quotes to apostrophes to fix it. Magento is working correctly.
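For anyone who hits the same thing, the difference looks roughly like this (the column layout is illustrative; standard CSV would escape an embedded quote by doubling it, but avoiding inner quotes entirely is the simpler fix):

    Breaks the import (stray quotes inside the quoted field):
        ABC-1,"A desk with a "floating" top"
    Imports cleanly (apostrophes instead):
        ABC-1,"A desk with a 'floating' top"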
I am using a Profile in the Import/Export section of my Magento admin to import a CSV document.
My description fields are very long (around 10 KB each). Two issues are occurring:
On the published product, only the first 50% or so of the description is present.
The Magento system does not import the next column on the import document (brief description).
Does anybody know how to fix this?
The problem is much more likely the application you export your product data with, which creates the CSV.
Did you check whether the CSV contains the full description prior to importing it? Maybe the application only allows a certain number of characters in a column and truncates the rest.
I believe I read there is a bug (not necessarily confirmed) in the CSV import: if your 'short_description' is more than a sentence or two long, it causes problems elsewhere. You've got long, long descriptions, but you didn't mention how short your short descriptions are. Could you try importing with a one-sentence 'short_description' and see what happens?
I'm not sure of the protocol for recommending a commercial product here, but there's a Windows program (I run it in VMware) that does imports/exports over a direct connection to the Magento database, skipping the long-winded Dataflow API. I've imported products with it in much faster time frames without issue, though I've never had to deal with long descriptions. It's not cheap at $200, but the time saved has been worth it for me. It's the first result for 'magento manager' in Google.
Have you confirmed, by creating a single product by hand with a huge description, that Magento doesn't choke on it?