How to upload a workbook object created using xlsxwriter to Google Cloud Storage using Python - google-cloud-storage

I have working knowledge of uploading/creating objects in Google Cloud Storage, but when trying to do the same with a Workbook created using xlsxwriter it throws an exception and won't let me create the object. I tried converting the workbook to bytes/a string, since those are supported by Cloud Storage for upload.
The code below was used to create the workbook in the required format:
workbook = xlsxwriter.Workbook('SampleDownload.xlsx')
worksheet = workbook.add_worksheet("xyz")
worksheet.freeze_panes(1, 1)
formater = workbook.add_format()
formater.set_bold()
formater.set_bg_color('red')
---code block for adding data---
workbook.close()
Now, while trying to upload the workbook to Cloud Storage using the blob upload_from_file method, it throws an exception: 'Workbook' object has no attribute 'tell'.
So to overcome this I tried converting the workbook to bytes, since bytes can be uploaded to Google Cloud Storage, using io.BytesIO(workbook), but it throws TypeError: a bytes-like object is required, not 'Workbook'.
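One approach that should work, as a minimal sketch: have xlsxwriter write into an io.BytesIO buffer instead of a file name, close the workbook so the .xlsx contents are fully written, then upload the buffer (not the Workbook object) with upload_from_file. The bucket name my-bucket below is a placeholder.
import io
import xlsxwriter
from google.cloud import storage

# Build the workbook entirely in memory instead of writing SampleDownload.xlsx to disk.
output = io.BytesIO()
workbook = xlsxwriter.Workbook(output, {'in_memory': True})
worksheet = workbook.add_worksheet("xyz")
worksheet.freeze_panes(1, 1)
formater = workbook.add_format()
formater.set_bold()
formater.set_bg_color('red')
# ---code block for adding data---
workbook.close()  # finishes writing the xlsx data into the buffer

# Rewind the buffer and upload its contents; upload_from_file needs a file-like object.
output.seek(0)
client = storage.Client()
bucket = client.bucket('my-bucket')  # placeholder bucket name
blob = bucket.blob('SampleDownload.xlsx')
blob.upload_from_file(
    output,
    content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
)
Alternatively, blob.upload_from_string(output.getvalue()) with the same content type should also work, since getvalue() returns the raw bytes that io.BytesIO(workbook) was trying (incorrectly) to obtain from the Workbook object.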

Related

ADF Copy only when a new CSV file is placed in the source and copy to the Container

I want to copy the file from the source to the target container, but only when the source file is new (the latest file is placed in the source). I am not sure how to proceed, and not sure about the syntax to check whether the source file is newer than the target. Should I use two Get Metadata activities to check the source and target last modified dates and then use an If Condition? I tried a few ways but it didn't work.
Any help will be handy.
The syntax I used for the condition is giving me the error:
#if(greaterOrEquals(ticks(activity('Get Metadata_File').output.lastModified),activity('Get Metadata_File2')),True,False)
Error message:
The function 'greaterOrEquals' expects all of its parameters to be either integer or decimal numbers. Found invalid parameter types: 'Object'
You can try one of the Pipeline Templates that ADF offers.
Use this template to copy new and changed files only by using LastModifiedDate. This template first selects the new and changed files only by their attributes "LastModifiedDate", and then copies them from the data source store to the data destination store. You can also go to "Copy Data Tool" to get the pipeline for the same scenario with more connectors.
View documentation
OR...
You can use Storage Event Triggers to trigger the pipeline with copy activity to copy when each new file is written to storage.
Follow detailed example here: Create a trigger that runs a pipeline in response to a storage event

Capacitor unable to write blob to a file and read it back

I'm trying to write a blob from fetch to a file using Capacitor's Filesystem.writeFile.
When I write the file, this is the data I write:
...etc (you get the point, the full PNG file as a blob)
But when I read the data back, this is ALL the data returned:
{data: "ZumJPw=="}
Any idea what's going on? How do I fix it?

Not able to see saved files inside azure storage container

string filePath = @"C:\test.pdf";
CloudBlockBlob blockBlob = container.GetBlockBlobReference("DxRecordForm");
FileStream localDirDxRecordForm = File.Create(filePath);
localDirDxRecordForm.Close();
dxCodeReport.ExportToDisk(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat, filePath);
using (var fileStream = System.IO.File.OpenRead(filePath))
{
    blockBlob.UploadFromStream(fileStream);
}
I am exporting a Crystal Report into PDF format and then saving the PDF in an Azure storage container. In the above code, dxCodeReport is an instance of the Crystal Report. When I view my storage container, I see the block blob named DxRecordForm. When I click that, I am also able to see the PDF version of my Crystal Report.
I am not sure why I don't see the file test.pdf inside my container. I just see the block blob with content-type application/octet-stream.
Your code snippet above uploads the file C:\test.pdf to a block blob named DxRecordForm. This would not result in creating a blob named test.pdf. If you would like to upload it to a blob named test.pdf, use "test.pdf" when getting the block blob reference.

Binary files in GridFS (MongoDB) to be stored on a local drive in Python 2.7

I have stored some attachments into GridFS (MongoDB) using the put command in GridFS.
x12 = 'c:\\test\\' + str10
attachment.SaveAsFile(x12)
with open(x12, 'rb') as content_file:
    content = content_file.read()
object_id = fs.put(strattach, filename=str10)
strattach is obtained as follows:
attachment = A1.Item(1)  # processing email attachments using MAPI
strattach = str(attachment)  # converting to string
If I am not doing this I get a TypeError saying: can only write strings or file like objects.
A1 is the attachments collection and attachment is the object obtained.
Now, the put was successful and I got the object ID object_id, which was stored in MongoDB along with the file name.
Now I need to build my binary file again using the object_id and file name in Python 2.7.
To do this I read from GridFS using f2 = object_id.read() and tried to apply the write method on f2, which is failing. When I read the manual it said read in Python 2.7 returns a string instance.
Could you please help me with how I can save that instance back as a binary file in Python 2.7?
Any alternate suggestions will also be helpful.
Thanks
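A minimal sketch of how the read-back side could look, assuming fs is the same gridfs.GridFS instance used for the put; the database name and output directory below are placeholders. The key point is that read() is called on the file-like object returned by fs.get(object_id), not on the ObjectId itself, and the result is written back with a binary-mode file handle.
import gridfs
from pymongo import MongoClient

client = MongoClient()            # assumes a locally running MongoDB
db = client['mydatabase']         # placeholder database name
fs = gridfs.GridFS(db)

grid_out = fs.get(object_id)      # GridOut file-like object for the stored attachment
data = grid_out.read()            # in Python 2.7 this is a binary str

with open('c:\\restored\\' + grid_out.filename, 'wb') as out_file:
    out_file.write(data)          # 'wb' keeps the attachment byte-for-byte intact
If the restored file still looks wrong, it may be worth checking that what was passed to fs.put was the raw attachment bytes rather than str(attachment) of the MAPI object.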

CrystalReportSource binding

Hello, I have a CrystalReportViewer and a CrystalReportSource on a web form.
I need to be able to bind the reportSource at run time to different report files.
I have the file data stored in a blob in a DB.
The mechanism I am using now is to save the blob to a file and then
this.CrystalReportSource1.Report.FileName = myFileName;
The issue is that I want to avoid saving the file on disk and somehow bind the report directly to a file stream.
Is that possible?
Thanks
In C#, I believe that you should be able to use something like the following, but I haven't tested it out on my system.
ReportDocument rpt = new ReportDocument();
rpt.Load(filestream); //filestream is your filestream object
Give it a try and let me know if you have issues.