I am trying to create a program that automatically downloads the attached files sent to us from a certain email address, then uses SAS to change the delimiter of those attached CSV files and passes them through a flow I have already created.
I have managed to create a program that processes the CSVs the way I want, with the delimiter I want; the problem is that the automated download of the files from Outlook does not work.
What I have done is create a rule that runs the following VBA code, which I found on the internet:
Public Sub SaveAttachmentsToDisk(MItem As Outlook.MailItem)
    Dim oAttachment As Outlook.Attachment
    Dim sSaveFolder As String
    ' Note the trailing backslash: without it the folder name and the
    ' attachment name are concatenated into a single path.
    sSaveFolder = "C:\Users\ES010246\Desktop\"
    For Each oAttachment In MItem.Attachments
        oAttachment.SaveAsFile sSaveFolder & oAttachment.DisplayName
    Next
End Sub
I have changed the path to my personal folder where I want the files to be downloaded.
website: https://es.extendoffice.com/documents/outlook/3747-outlook
The problem is that this code does not work for me; it does absolutely nothing, and no matter how much I search the internet, only this code turns up.
Is there any other way to do what I want with SAS, namely automatically download the 8 CSV files that are sent to me through Outlook? Or has anyone experienced the same thing as me with VBA?
I have followed all the steps about 7 times, so I don't think the error is in copying the code or selecting the wrong options. In fact, I copied and pasted the code and only later modified the path where I wanted the files to be saved, but it doesn't work. Does anyone know why?
I will be tremendously grateful, thank you very much for everything!
First of all, you need to make sure the file name and path don't include forbidden symbols.
The VBA macro used for the rule in Outlook is perfectly valid, except that a mail item may contain several attachments with the same name, so a file saved to disk may be overwritten by another one saved under the same name. That's why I'd suggest generating file names with your own unique IDs, and making sure that the DisplayName property is not empty and contains a name that is valid for use as a file name (i.e. excludes forbidden symbols).
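For example, something along these lines would make each saved name unique (the folder and the naming scheme are only an illustration):

Public Sub SaveAttachmentsUnique(MItem As Outlook.MailItem)
    Dim oAttachment As Outlook.Attachment
    Dim sSaveFolder As String
    Dim i As Long
    sSaveFolder = "C:\Users\ES010246\Desktop\"
    For Each oAttachment In MItem.Attachments
        i = i + 1
        ' Prefix with the received time and a counter so identically
        ' named attachments never overwrite each other.
        oAttachment.SaveAsFile sSaveFolder & _
            Format(MItem.ReceivedTime, "yyyymmdd_hhnnss") & "_" & i & "_" & _
            oAttachment.DisplayName
    Next
End Sub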
Also you may consider handling the NewMailEx event of the Application class which is fired when a new message arrives in the Inbox and before client rule processing occurs. Use the Entry ID returned in the EntryIDCollection string to call the NameSpace.GetItemFromID method and process the item. This event fires once for every received item that is processed by Microsoft Outlook. The item can be one of several different item types, for example, MailItem, MeetingItem, or SharingItem.
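A minimal sketch of that approach, placed in the ThisOutlookSession module and reusing the routine above, could look like this:

Private Sub Application_NewMailEx(ByVal EntryIDCollection As String)
    ' Depending on the Outlook version, EntryIDCollection holds a single
    ' Entry ID or a comma-separated list, so split it defensively.
    Dim ids() As String
    Dim i As Long
    Dim itm As Object
    ids = Split(EntryIDCollection, ",")
    For i = LBound(ids) To UBound(ids)
        Set itm = Application.Session.GetItemFromID(ids(i))
        If TypeOf itm Is Outlook.MailItem Then
            SaveAttachmentsToDisk itm
        End If
    Next i
End Sub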
The Items.ItemAdd event can be helpful when items are moved to a folder (from Inbox). This event does not run when a large number of items are added to the folder at once.
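For completeness, a sketch of the ItemAdd variant watching the default Inbox (again in ThisOutlookSession; use whatever folder your rule moves items into):

Private WithEvents inboxItems As Outlook.Items

Private Sub Application_Startup()
    ' Keep a WithEvents reference to the folder's Items collection.
    Set inboxItems = Application.Session.GetDefaultFolder(olFolderInbox).Items
End Sub

Private Sub inboxItems_ItemAdd(ByVal Item As Object)
    If TypeOf Item Is Outlook.MailItem Then SaveAttachmentsToDisk Item
End Sub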
Years ago, I wrote a small app in itext2 to gather reports on a weekly basis and concatenate them into one PDF. The app used com.lowagie.text.pdf.PdfCopy to copy and merge the PDFs. And it worked fine. Performed exactly as expected.
A few weeks ago I looked into migrating the application to itext7. To that end, I used the copyPagesTo method of com.itextpdf.kernel.pdf.PdfDocument. When run on the same file set, this produces warnings like:
WARN PdfNameTree - Name "section.1" already exists in the name tree; old value will be replaced by the new one.
When I click on the link to "section.1" in the first document of the merged PDF, I am taken to "section.1" of the last document. Not what I expected, and not what happens with the itext2 app. In the PDFs produced by itext2, if I click on the link to "section.1" of the first document in the combined PDF, I am taken to section 1 of the first document.
There is a hint in Javadocs for copyPagesTo saying
If outlines destination names are the same in different documents, all
such outlines will lead to a single location in the resultant
document. In this case iText will log a warning. This can be avoided
by renaming destinations names in the source document.
There is, however, no explanation of how this should be done. I find it odd that this should be necessary in itext7 when it wasn't in itext2.
Is there a simple way to get around this problem?
I've also tried the Sejda desktop app and it produces correct results, but I would prefer to automate the process through a batch script.
My guess is iText 2 didn't even know it might be a problem.
If iText can't deduplicate destination names, the procedure is roughly:
Follow /Catalog -> /Names -> /Dests in each document to find the destination name tree.
Deduplicate the names, by adding suffixes. Remember that a name with a suffix added might be equal to an existing name in the same or another document. Be careful!
Now you can rewrite the destination name trees. Since you have only used suffixes, you can do this in place - the lexicographic ordering of the names is unaltered so the search tree structure is not broken.
Now, rewrite destination links in each PDF for the new names. For example any dictionary entry with key /Dest, or any /D in a /GoTo action.
Now, after all this preprocessing, the files will merge without name clashes.
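If you wanted to attempt the renaming and link-rewriting steps with iText 7's low-level kernel API, a rough, untested sketch might look like the following (the class name, file names and suffix are placeholders, and outlines and other places that hold actions would need the same treatment):

import com.itextpdf.kernel.pdf.*;

public class RenameDestinations {

    // Walk a /Dests name-tree node and append the suffix to every name in place.
    static void suffixNames(PdfDictionary node, String suffix) {
        if (node == null) return;
        PdfArray names = node.getAsArray(PdfName.Names);
        if (names != null) {
            // Leaf entries come in (name, destination) pairs; rewrite the names only.
            for (int i = 0; i < names.size(); i += 2) {
                PdfString old = names.getAsString(i);
                if (old != null) names.set(i, new PdfString(old.getValue() + suffix));
            }
        }
        PdfArray limits = node.getAsArray(PdfName.Limits);
        if (limits != null) {
            // Keep the node's search bounds in sync with the renamed keys.
            for (int i = 0; i < limits.size(); i++) {
                PdfString old = limits.getAsString(i);
                if (old != null) limits.set(i, new PdfString(old.getValue() + suffix));
            }
        }
        PdfArray kids = node.getAsArray(PdfName.Kids);
        if (kids != null) {
            for (int i = 0; i < kids.size(); i++) {
                suffixNames(kids.getAsDictionary(i), suffix);
            }
        }
    }

    // Rewrite named destinations referenced from page annotations (/Dest and /GoTo /D).
    static void suffixLinks(PdfDocument doc, String suffix) {
        for (int p = 1; p <= doc.getNumberOfPages(); p++) {
            PdfArray annots = doc.getPage(p).getPdfObject().getAsArray(PdfName.Annots);
            if (annots == null) continue;
            for (int i = 0; i < annots.size(); i++) {
                PdfDictionary annot = annots.getAsDictionary(i);
                if (annot == null) continue;
                PdfString dest = annot.getAsString(PdfName.Dest);
                if (dest != null) annot.put(PdfName.Dest, new PdfString(dest.getValue() + suffix));
                PdfDictionary action = annot.getAsDictionary(PdfName.A);
                if (action != null && PdfName.GoTo.equals(action.getAsName(PdfName.S))) {
                    PdfString d = action.getAsString(PdfName.D);
                    if (d != null) action.put(PdfName.D, new PdfString(d.getValue() + suffix));
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        PdfDocument doc = new PdfDocument(new PdfReader("in.pdf"), new PdfWriter("renamed.pdf"));
        PdfDictionary names = doc.getCatalog().getPdfObject().getAsDictionary(PdfName.Names);
        if (names != null) suffixNames(names.getAsDictionary(PdfName.Dests), "-doc1");
        suffixLinks(doc, "-doc1");
        doc.close();
    }
}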
(I know all this because I've just implemented it for my own PDF software. It's slightly hairy stuff, but not intractable.)
If you like, I can provide a development version of cpdf with this functionality for you to test.
I have a password-protected zip file with 3 folders: PFC, STA and SYS. When the zip file is loaded for the first time, it reads all the text files in the PFC folder and displays the data on the screen. I perform some operations on the screen which generate new files; they are added to the STA and SYS folders and the same initial zip file is updated.
However, when I try to load the same zip file, which now should have all the initial files and folders along with the newly created files, it gives me a Wrong CRC Value error. I don't know what I am doing wrong.
We have tried to reproduce the issue by preparing a sample with your scenario. However, we are facing another issue related to the password, which we will fix in our upcoming release. The sample can be downloaded from the link below.
Sample link : https://www.syncfusion.com/downloads/support/directtrac/general/ze/Password-1299200154.zip
Kindly modify the sample to reproduce the “CRC value error” issue and send it back to us along with your input documents. It will help us analyze this further and provide you with a solution at the earliest.
Note : I work for Syncfusion.
You can send your sample or confidential information by email to support@syncfusion.com.
Regards,
Mohan.
I'm using the OpenXml SDK to generate Word 2013 files. I'm running on a server (as part of a server solution), so automation is not an option.
Basically I have an xml file that is output from a backend system. Here's a very simplified example:
<my:Data
xmlns:my="https://schemas.mycorp.com">
<my:Customer>
<my:Details>
<my:Name>Customer Template</my:Name>
</my:Details>
<my:Orders>
<my:Count>2</my:Count>
<my:OrderList>
<my:Order>
<my:Id>1</my:Id>
<my:Date>19/04/2017 10:16:04</my:Date>
</my:Order>
<my:Order>
<my:Id>2</my:Id>
<my:Date>20/04/2017 10:16:04</my:Date>
</my:Order>
</my:OrderList>
</my:Orders>
</my:Customer>
</my:Data>
Then I use Word's XML Mapping pane to map this data to content controls:
I simply duplicate the word file, and write new Xml data when generating new files.
This is working as expected. When I update the xml part, it reflects the data from my backend.
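For reference, the replacement of the XML part is nothing fancy; simplified, and with illustrative file names, it is roughly this (feeding the new data into the existing custom XML part so the content-control bindings stay intact):

using System.IO;
using System.Linq;
using DocumentFormat.OpenXml.Packaging;

class ReplaceXmlData
{
    static void Main()
    {
        // Work on a copy of the template so the original stays untouched.
        File.Copy(@"c:\temp\template.docx", @"c:\temp\output.docx", true);

        using (var doc = WordprocessingDocument.Open(@"c:\temp\output.docx", true))
        {
            // Overwrite the content of the first custom XML part in place.
            var xmlPart = doc.MainDocumentPart.CustomXmlParts.First();
            using (var stream = File.OpenRead(@"c:\temp\customer.xml"))
            {
                xmlPart.FeedData(stream);
            }
        }
    }
}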
However, there's a case that does not work: if a customer has no orders, the template content is kept in the document. The XML data is:
<my:Data
xmlns:my="https://schemas.mycorp.com">
<my:Customer>
<my:Details>
<my:Name>Some customer</my:Name>
</my:Details>
<my:Orders>
<my:Count>0</my:Count>
<my:OrderList>
</my:OrderList>
</my:Orders>
</my:Customer>
</my:Data>
(see the empty order list).
In Word, the xml pane reflects the correct data (meaning no Order node):
But as you can see, the template content is still here.
Basically, I'd like to hide the order list when there's no order (or at least an empty table).
How can I do that?
PS: If it can help, I uploaded the word and xml files, and a small PowerShell script that injects the data : repro.zip
Thanks for sharing your files so we can better help you.
I had a difficult time trying to solve your problem with your existing Word Content Controls, XML files and the PowerShell script that added the XML to the Word document. I found what seemed to be Microsoft's VSTO example solution to your problem, but I couldn't get this to work cleanly.
I was, however, able to write a simple C# console application that generates a Word file based on your XML data. The OpenXML code to generate the Word file was produced with the Open XML Productivity Tool. I then added some logic to read your XML file and generate the second table's rows dynamically, depending on how many orders there are in the data. I have uploaded the code for you to use if you are interested in this solution. Note: the XML data file should be in c:\temp, and the generated Word files will be written to c:\temp as well.
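In outline, the logic looks something like this simplified sketch (not the exact uploaded code; it only handles a single customer, and the paths and element names follow your sample data):

using System.Linq;
using System.Xml.Linq;
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

class Program
{
    static void Main()
    {
        XNamespace my = "https://schemas.mycorp.com";
        var data = XDocument.Load(@"c:\temp\customer.xml");
        var name = (string)data.Descendants(my + "Name").First();
        var orders = data.Descendants(my + "Order").ToList();

        using (var doc = WordprocessingDocument.Create(
            $@"c:\temp\customer_{name}.docx", WordprocessingDocumentType.Document))
        {
            var main = doc.AddMainDocumentPart();
            main.Document = new Document(new Body());
            var body = main.Document.Body;

            body.AppendChild(new Paragraph(new Run(new Text($"Customer: {name}"))));

            // Only build the orders table when there is at least one order,
            // so an empty <my:OrderList> leaves no leftover template content.
            if (orders.Count > 0)
            {
                var table = new Table();
                foreach (var order in orders)
                {
                    var id = (string)order.Element(my + "Id");
                    var date = (string)order.Element(my + "Date");
                    table.AppendChild(new TableRow(
                        new TableCell(new Paragraph(new Run(new Text(id)))),
                        new TableCell(new Paragraph(new Run(new Text(date))))));
                }
                body.AppendChild(table);
            }
            main.Document.Save();
        }
    }
}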
Another added bonus to this solution is if you were to add all of the customer data into one XML file, the application will create separate word files in your temp directory like so:
customer_<name1>.docx
customer_<name2>.docx
customer_<name3>.docx
etc.
Here is the document generated from the first xml file
Here is the document generated from the second xml file with the empty row
Hope this helps.
I'm seeing some very strange behavior with FileMaker 14. I'm using LayoutObjectNames for some required functionality. On the development system it's working fine. It returns the list of named objects on the layout.
I close the file, zip it up and send it to the client, and that required functionality isn't working. He sends the file back and I open it and get a data viewer up. The function returns nothing. I go into layout mode and confirm that there are named objects on the layout.
The first time this happened, I tried recovering the file. In the recovered file it worked, so I assumed some corruption had happened on his end. I told him to trash the file I had given him and work with a new version I supplied. The problem came up again.
This morning he sent me the oldest version that the problem manifested in. I confirmed the problem, tried recovering it again, but this time it didn't fix the problem.
I'm at a loss. It works in the version I send him, doesn't on his system. We're both using FileMaker 14, although I'm using Advanced. My next step will be to work from a served file instead of a local one, but I have never seen this type of behavior in FileMaker. Has anyone seen anything similar? Any ideas on a fix? I'm almost ready to just scrap the file and build it again from scratch since we're not too far into the project.
Thanks, Chuck
There is a known issue with the Get (FileName) function when the file name contains dots (other than the one before the extension). I will amend my answer later with more details and a possible solution (I have to look it up).
Here's a quote from 2008:
This is a known issue. It affects not only the ValueListItems() function, but any function that requires the file name. The solution is to include the file extension explicitly in the file name. This works even if you use Get (FileName) to return the file name dynamically:

ValueListItems ( Get ( FileName ) & ".fp7" ; "MyValueList" )

Of course, this is not required if you take care not to use periods when naming your files.
http://fmforums.com/forums/topic/60368-fm-bug-with-valuelistitems-function/?do=findComment&comment=285448
Apparently the issue is still with us - I wonder if the solution is still the same (I cannot test this at the moment).
Is there a way to add/initiate a comment (e.g. $dom->createComment ...) such that it comments out an entire block of XML tags? Basically I want to turn off the content inside the comment.
For example, it would look like this:
<TT>
<AA>keep</AA>
<!-- comment to blocking
<BB>hideme1</BB>
<CC>hideme2</CC>
-->
<DD>d's content is good</DD>
</TT>
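Here is roughly what I have been sketching to get that output (assuming XML::LibXML's core behaviour; the element names and file name are just from the example above), though I'm not sure it's the right approach:

#!/usr/bin/perl
use strict;
use warnings;
use XML::LibXML;

my $doc  = XML::LibXML->load_xml(location => 'test.xml');
my ($tt) = $doc->findnodes('/TT');

# Serialize the nodes to be "turned off" so their text survives inside the comment.
my @targets = $tt->findnodes('./BB | ./CC');
my $text = join "\n", map { $_->toString } @targets;

# One comment node holding the whole block (comment text must not contain "--").
my $comment = $doc->createComment(" comment to blocking\n$text\n");
$tt->insertBefore($comment, $targets[0]);
$tt->removeChild($_) for @targets;

print $doc->toString(1);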
Actually, this question is a precursor to my attempt to figure out a method to mark up/label/identify the changes made to an XML file in support of new client software functionality, while keeping the ability to remove / back out these XML changes in the rare event the client needs to fall back to the previous software version (and no, I can't simply point back to the original XML file, because the client is allowed to make minor modifications to existing node text values). This is all going to be controlled via a Perl script and LibXML's core modules (I can't use modules the client doesn't have).
So basically I've identified three possible types of xml changes as a result of new client sw functionality:
1.) ADD new element node(s) (typically to support new sw functionality)
2.) DELETE element node(s), or blocks of them (would be rare, but nevertheless a possibility)
3.) CHANGE node text values (rare, but the new sw may require a new value)
For all three types, the client needs the ability to back out the changes. One thing I was thinking of using is ATTRIBUTES, since the existing XML files don't use them. For example, for an ADD change type, I could include an attribute like ADD="sw version 4.1". This way, if it needs to be removed, I could simply have the Perl script find those attribute strings and delete the nodes (using LibXML methods). Same thing with the CHANGE change type: I could use an attribute like CHG="newvalue_oldvalue", then again use straight Perl (or LibXML) to switch the value back based on the contents of the attribute. The DELETE change type is giving me a problem though (as well as the others, lol!). I want to be able to "keep" the deleted lines in the XML file solely for the case where the software falls back a version (at some later point the Perl script could eventually clean them up/delete them).
I know this is a lot; I'm new to LibXML (but not to Perl). I was just wondering if any of you have thoughts on how to go about it, or have seen anything resembling this kind of request ... I'd be grateful for any kind of advice! Thank you...