How to optimize a large mtar for TMS upload? - sap-cloud-platform

We are using MTA to structure our application, which consists of 11 micro-services (Java + Node.js + Python).
Using the SAP Cloud SDK Pipeline, we package these micro-services into an mtar file and then upload the generated mtar to the Transport Management landscape, which subsequently deploys the application to SAP Cloud Foundry.
The issue is the large size of the generated mtar. In our case the file ranges between 900 MB and 1.2 GB, which is a pain point because it exceeds the 400 MB limit set for TMS uploads.
The mtar is this large mainly because the node_modules folders (around 350 MB each) of the two Node.js micro-services are packaged into it.
We understand that the mtar size limit for TMS uploads can be configured, but we are looking for suggestions on best practices to optimize the size of the mtar file.
We would appreciate guidance on efficient ways to handle node_modules in the mtar and thereby reduce its overall size.
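For illustration, one common way to shrink the Node.js modules is to have the build install only production dependencies, so that dev dependencies (test frameworks, linters, compilers) never end up in the archive. The snippet below is only a sketch assuming the Cloud MTA Build Tool; the module name and path are placeholders, and the custom builder commands should be verified against the mbt documentation.

```yaml
modules:
  # Hypothetical Node.js module; name and path are placeholders.
  - name: my-node-service
    type: nodejs
    path: node-service
    build-parameters:
      builder: custom
      commands:
        # Install only production dependencies to keep node_modules small.
        - npm ci --production
```

Dev dependencies are frequently the bulk of node_modules, so pruning them at build time is usually the first thing to try before raising the TMS upload limit.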

Related

Weblogic slow server startup and publishing

I am using Weblogic 12.2.1.3 on Windows 10 to deploy a web application (WAR) of about 200 MB.
I have around 3 GB allocated to the Weblogic server in setDomainEnv.cmd. The problem is that server startup and application deployment take more than 30 minutes.
How can I speed this up and reduce the time?
I have tried setting a different random generator in the java.security file, but it didn't help.
I have also tried disabling XML validation, but that didn't help either.
Any pointers on troubleshooting slow startup times?
Thanks

Does Azure Artifacts store the deltas of Universal Packages?

I'm already familiar with Azure Artifacts, and lately I've been trying to optimize billing expenses. Since Azure Artifacts charges per GB, I've been wondering: does the Universal Packages feature attempt to optimize storage usage by storing only the differences between one version of a package and the next?
Agree with Jonathan: customers are billed for the full size of each artifact stored on the service, regardless of how it is physically stored.
That is because Azure Artifacts cannot reliably extract deltas from version 2 of your package (often only the contents of existing files change), unless you deliberately package only the deltas when you create it. In that case, however, the delta package would effectively be a different package rather than version 2, since it would not contain the files from version 1.
On the other hand, while you use package version 2, version 1 remains usable independently. Azure Artifacts still provides service for version 1, so it is reasonable to pay for it. If you do not want to be billed for the full size of each stored artifact, you can delete version 1 after uploading version 2.
Note: Every organization can use up to 2 GB storage for free. Additional storage usage is charged according to tiered rates starting at $2 per GB and decreasing to $0.25 per GB:
Rate card
0 - 2 GB = Free
2 - 10 GB = $2 per GB
10 - 100 GB = $1 per GB
100 - 1,000 GB = $0.50 per GB
1,000+ GB = $0.25 per GB
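For example, assuming the tiers are applied to each marginal GB, storing 15 GB in total would be billed as 2 GB free + (8 GB × $2) + (5 GB × $1) = $21 per month.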
So, as long as your total storage stays within the free 2 GB tier, there are no additional charges.
Hope this helps.

Why does Azure CDN return the old version of a file on a custom domain?

I have a file uploaded to my Azure Storage account, and I have now replaced it with a new version of that file.
The old file size was 22 MB; the new version is about 10 MB.
After the replacement, when I try to download the file through my custom domain, it still downloads the old 22 MB file.
But when I download it via its original URL (storageName.blob.core.windows.net), I get the correct file.
I have tried setting a very short cache-control header (max-age=1) using Microsoft Azure Storage Explorer, but it didn't help.
Why does this behavior occur, and how can I solve the problem?
When you have a CDN configured in front of Azure Storage and you update the file in Storage, the CDN will keep serving the cached old file until the TTL expires.
So you should either do a purge or configure the caching rules to get the desired behavior.
You can read more about caching rules in Azure CDN here.
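For illustration, a purge can be triggered from the Azure CLI; the resource group, profile, and endpoint names below are placeholders:

```bash
az cdn endpoint purge \
  --resource-group my-resource-group \
  --profile-name my-cdn-profile \
  --name my-cdn-endpoint \
  --content-paths '/*'
```

Purging '/*' clears the whole endpoint; you can pass a narrower path to purge only the one file, and the purge itself can take a few minutes to complete.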

Swift, Mac OS X: uploading large files to an AWS S3 bucket

I have my first app in Swift and I need to upload large files to an S3 bucket. I tried uploading via Alamofire, but S3 has a 5 GB maximum object size limit when I use a PUT request. Also, I haven't found an AWS SDK for Swift. Does anyone have any ideas? Thank you!
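For objects larger than what a single 5 GB PUT allows, S3's multipart upload is the usual route. Below is only a minimal sketch of uploading one part with plain URLSession, assuming a backend (or signing code elsewhere in the app) has already started the multipart upload and handed the app a presigned URL for this part; the function and its parameters are hypothetical.

```swift
import Foundation

// Minimal sketch: PUT one chunk of a large file to a presigned S3 UploadPart URL.
// The presigned URL and the part splitting are assumed to be handled elsewhere.
func uploadPart(_ data: Data, to presignedURL: URL,
                completion: @escaping (_ etag: String?) -> Void) {
    var request = URLRequest(url: presignedURL)
    request.httpMethod = "PUT"
    let task = URLSession.shared.uploadTask(with: request, from: data) { _, response, error in
        guard error == nil,
              let http = response as? HTTPURLResponse,
              http.statusCode == 200 else {
            completion(nil)
            return
        }
        // S3 returns an ETag per part; collect them for CompleteMultipartUpload.
        completion(http.value(forHTTPHeaderField: "ETag"))
    }
    task.resume()
}
```

Each part (except the last) must be at least 5 MB, and the ETags returned for the parts are needed to complete the multipart upload afterwards.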

How to get Azure VM's disk utilization details using Java SDK or REST API?

I am able to get an Azure VM's provisioned disk size by using the Java SDK.
But I want to know how much of the disk is actually utilized. Is there any way to get the used size of the disk using the Java SDK or the REST API?
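As far as I know, the Azure compute/disk APIs only report the provisioned disk size, not how much is used inside the guest OS; utilization usually has to come from a monitoring agent. Below is a hedged sketch that reads the LogicalDisk "% Free Space" counter from a Log Analytics workspace with the com.azure:azure-monitor-query library (plus azure-identity), assuming the agent on the VM is collecting that counter; the workspace ID is a placeholder.

```java
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.monitor.query.LogsQueryClient;
import com.azure.monitor.query.LogsQueryClientBuilder;
import com.azure.monitor.query.models.LogsQueryResult;
import com.azure.monitor.query.models.LogsTableRow;
import com.azure.monitor.query.models.QueryTimeInterval;

import java.time.Duration;

public class DiskUsageQuery {
    public static void main(String[] args) {
        // Authenticate with whatever DefaultAzureCredential can find (CLI login, env vars, MSI, ...).
        LogsQueryClient client = new LogsQueryClientBuilder()
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        // KQL over the Perf table; assumes the monitoring agent collects "% Free Space".
        String query = "Perf"
                + " | where ObjectName == 'LogicalDisk' and CounterName == '% Free Space'"
                + " | where InstanceName != '_Total'"
                + " | summarize avg(CounterValue) by Computer, InstanceName";

        LogsQueryResult result = client.queryWorkspace(
                "<log-analytics-workspace-id>",        // placeholder workspace ID
                query,
                new QueryTimeInterval(Duration.ofHours(1)));

        // Print one row per disk: computer, drive letter / mount point, average % free.
        for (LogsTableRow row : result.getTable().getRows()) {
            System.out.println(row.getRow());
        }
    }
}
```

The same Perf data can also be queried through the Log Analytics REST API with the identical KQL query, which covers the REST API side of the question.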