Bulk file upload into Azure Media Services not finishing - azure-media-services

When bulk-uploading a file to Azure Media Services using an IngestManifest, the Asset is created, the IngestManifest blob container is created, and the process reaches the stage where the file appears to be fully present in both the Manifest and Asset blob containers. However, the Asset is marked as empty (size 0), and the IngestManifest reports 1 file pending and 0 finished indefinitely. This happens in the same way when ingesting a file (an MP4, for testing) with our own code using the .NET AMS SDK, and also with the AMS Explorer tool. The AMS account is a new account, with a new V2 storage account as the primary storage behind it, so I would think it is more likely some configuration or setting, but otherwise it is a complete mystery.

This turned out to be erroneous behaviour on the service side and was resolved by Microsoft.

Related

How to use multiple client certificates stored in keystore.p12 and set JMeter's system.properties when setting up a test in the Azure Load Testing service?

I am trying to set up a test (manual/YAML) in the Azure Load Testing service. My test uses client certificates, so I uploaded the .jmx file, the keystore (.p12), and a CSV file (containing the aliases of the certificates in the keystore) to the test plan.
In Azure Load Testing, where can I set the javax.net.ssl.keyStoreType, javax.net.ssl.keyStore, javax.net.ssl.keyStorePassword, https.use.cached.ssl.context, https.keyStoreStartIndex and https.keyStoreEndIndex properties?
With plain JMeter, I would set the above properties in JMeter's system.properties file, but with Azure Load Testing I am not sure how to get this working.
Please suggest. Thanks!
As per Configure a load test in YAML:
configurationFiles: List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script.
So my expectation is that if you upload the system.properties file along with the .jmx script and the CSV file with certificate aliases, the Azure Load Testing engine should pick them up and apply them. For example:
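As a sketch, assuming the keystore and CSV sit alongside the .jmx script, the system.properties file would carry the properties from the question (values are placeholders):

javax.net.ssl.keyStoreType=pkcs12
javax.net.ssl.keyStore=keystore.p12
javax.net.ssl.keyStorePassword=changeit
https.use.cached.ssl.context=false
https.keyStoreStartIndex=0
https.keyStoreEndIndex=2

and a YAML test configuration referencing it (file names are hypothetical):

version: v0.1
testName: client-cert-test
testPlan: test.jmx
engineInstances: 1
configurationFiles:
  - system.properties
  - keystore.p12
  - cert_aliases.csv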
It should also be possible to do this via the GUI.
More information: How to Use Multiple Certificates When Load Testing Secure Websites

Aspera Node API /files/{id}/files endpoint not returning up-to-date data

I am working on a web app for transferring files with Aspera. We are using AoC as the transfer server and an S3 bucket for storage.
When I upload a file to my S3 bucket using Aspera Connect, everything appears to be successful: I see it in the bucket, and I see the new file in the directory when I run /files/browse on the parent folder.
I am refactoring my code to use the /files/{id}/files endpoint to list the directory, because the documentation says it is faster than the /files/browse endpoint. After the upload is complete, when I run the /files/{id}/files GET request, the new file does not show up in the returned data right away. It only becomes available after a few minutes.
Is there some caching mechanism in place? I can't find anything about this in the documentation. When I make a transfer in the AoC dashboard, everything updates right away.
Thanks,
Tim
Yes, the file-id-based system uses an in-memory cache (Redis).
This cache is updated when a new file is uploaded using Aspera. But for files moved directly on the storage, there is a daemon that periodically scans for and registers new files.
If you want to bypass the cache and have the API read the storage, you can add this header to the request:
X-Aspera-Cache-Control: no-cache
Another possibility is to trigger a scan by reading /files/{id} for the folder id.
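A sketch of both calls with curl; the node host, file id, and bearer token are placeholders, and your authentication scheme may differ:

# list a folder's contents, bypassing the Redis cache
curl -H "Authorization: Bearer $TOKEN" \
     -H "X-Aspera-Cache-Control: no-cache" \
     "https://node.example.com/files/123/files"

# read the folder itself to trigger a scan
curl -H "Authorization: Bearer $TOKEN" \
     "https://node.example.com/files/123"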

Azure Devops: Pipeline fails to deploy to Linux Web App

I have a pipeline deploying to my Azure web app that most of the time errors out because it couldn't deploy to the web app. The task takes around 25 minutes:
...
Copying file: 'frontend/.gitignore'
Copying file: 'frontend/README.md'
Copying file: 'frontend/package.json'
Copying file: 'frontend/tsconfig.json'
Copying file: 'frontend/yarn.lock'
Omitting next output lines...
An error has occurred during web site deployment.
Kudu Sync failed
/opt/Kudu/Scripts/starter.sh "/home/site/deployments/tools/deploy.sh"
##[error]Failed to deploy web package to App Service.
##[error]To debug further please check Kudu stack trace URL : https://$someapp:***#someapp.scm.azurewebsites.net/api/vfs/LogFiles/kudu/trace
##[error]Error: Package deployment using ZIP Deploy failed. Refer logs for more details.
...
When I enable system.debug = true, I see these logs repeated many times before it starts copying the artifact files:
POLL URL RESULT: {"statusCode":202,"statusMessage":"Accepted","headers":{"transfer-encoding":"chunked","content-type":"application/json; charset=utf-8","location":"http://XXXXXXXXX.scm.azurewebsites.net:80/api/deployments/latest?deployer=VSTS_ZIP_DEPLOY&time=2021-07-09_09-01-41Z","server":"Kestrel","date":"Fri, 09 Jul 2021 09:23:37 GMT","connection":"close"},"body":{"id":"68a7a8811796416b993924437493ff87","status":0,"status_text":"Building and Deploying '68a7a8811796416b993924437493ff87'.","author_email":"N/A","author":"N/A","deployer":"VSTS_ZIP_DEPLOY","message":"Created via a push deployment","progress":"Running deployment command...","received_time":"2021-07-09T09:01:50.4159225Z","start_time":"2021-07-09T09:01:51.775357Z","end_time":null,"last_success_end_time":null,"complete":false,"active":false,"is_temp":false,"is_readonly":true,"url":null,"log_url":null,"site_name":"XXXXXXXXXXXXe"}}
Deployment status: 0 'Building and Deploying '68a7a8811796416b993924437493ff87'.'. retry after 5 seconds
setting affinity cookie ["ARRAffinity=c06e9bb74f52245b3695b3079a52f6acbc70c3ee812f67e4fa3f5f65088ff4f7;Path=/;HttpOnly;Secure;Domain=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX.scm.azurewebsites.net","ARRAffinitySameSite=c06e9bb74f52245b3695b3079a52f6acbc70c3ee812f67e4fa3f5f65088ff4f7;Path=/;HttpOnly;SameSite=None;Secure;Domain=XXXXXXXXXXXXXXX.scm.azurewebsites.net"]
[GET]https://XXXXXXXXXXX-test.scm.azurewebsites.net:443/api/deployments/latest?deployer=VSTS_ZIP_DEPLOY&time=2021-07-09_09-01-41Z
POLL URL RESULT: {"statusCode":202,"statusMessage":"Accepted","headers":{"transfer-encoding":"chunked","content-type":"application/json; charset=utf-8","location":"http://XXXXXXXXXXXXXXXXXXX.scm.azurewebsites.net:80/api/deployments/latest?deployer=VSTS_ZIP_DEPLOY&time=2021-07-09_09-01-41Z","server":"Kestrel","date":"Fri, 09 Jul 2021 09:23:45 GMT","connection":"close"},"body":{"id":"68a7a8811796416b993924437493ff87","status":0,"status_text":"Building and Deploying '68a7a8811796416b993924437493ff87'.","author_email":"N/A","author":"N/A","deployer":"VSTS_ZIP_DEPLOY","message":"Created via a push deployment","progress":"Running deployment command...","received_time":"2021-07-09T09:01:50.4159225Z","start_time":"2021-07-09T09:01:51.775357Z","end_time":null,"last_success_end_time":null,"complete":false,"active":false,"is_temp":false,"is_readonly":true,"url":null,"log_url":null,"site_name":"XXXXXXXXXXXX"}}
Deployment status: 0 'Building and Deploying '68a7a8811796416b993924437493ff87'.'. retry after 5 seconds
setting affinity cookie ["ARRAffinity=c06e9bb74f52245b3695b3079a52f6acbc70c3ee812f67e4fa3f5f65088ff4f7;Path=/;HttpOnly;Secure;Domain=XXXXXXXXXXXXXXXXXXXXXX.scm.azurewebsites.net","ARRAffinitySameSite=c06e9bb74f52245b3695b3079a52f6acbc70c3ee812f67e4fa3f5f65088ff4f7;Path=/;HttpOnly;SameSite=None;Secure;Domain=XXXXXXXXXXXXXXXXXX"]
This task fails only in one specific slot of my web app; the author slots and the production slot work fine, and there the job takes around 6 minutes.
Any ideas what could be wrong?
As per the discussion and troubleshooting performed here, I tried to set up a Linux App Service on the Standard S1 pricing tier with 5 (max) slots enabled and CI/CD configured via Azure Pipelines. Unfortunately, I wasn't able to reproduce the same error as yours despite multiple different trials.
I'd suggest you try the following:
The Kudu Sync failed message in the deployment log resembles this open issue from about a year ago: ZipDelpoy on azure web app linux fails during kudu sync #2972. Please check the trace/deployment log files on Kudu at https://<appname>.scm.azurewebsites.net/api/vfs/LogFiles/kudu/trace or /deployment, or from Kudu's debug console (/LogFiles/kudu/*), and check whether this is caused by deployment lock failures. In that case, check out this wiki for dealing with locked files during deployment.
Try a different deployment method, such as run-from-package (to avoid resource locking), FTP/S, or local Git deployment.
This should help you narrow down whether the issue is caused by the App Service/deployment method or by the ADO pipeline/task.
Scale up to the next higher tier and re-trigger your pipeline. If it succeeds, you can scale back down to the original tier. This also indirectly restarts your SCM site.
If the above workarounds don't help, you could check the following:
Customize your deploy task with options like TakeAppOfflineFlag, DeploymentType, or RenameFilesFlag to streamline your deployment (see the task sketch after this list).
Try restarting the app/slot just before the deployment in order to recycle the app pool.
Check if your app is running into any of the prescribed limits (e.g. file system storage) for your tier.
Drill down into available metrics for your app to identify any CPU/Memory anomalies.
Try the Diagnose and solve problems tool for any additional insights about your app.
If your environment permits, try setting up and deploying to a new slot within your App Service, or try verifying if this happens to another app in a different region.
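To illustrate the first suggestion, here is a minimal sketch of an AzureRmWebAppDeployment@4 task with those options set; the service connection, resource group, app and slot names are placeholders, and whether each flag applies depends on your app type and package:

- task: AzureRmWebAppDeployment@4
  inputs:
    ConnectionType: 'AzureRM'
    azureSubscription: 'my-service-connection'   # placeholder
    appType: 'webAppLinux'
    WebAppName: 'someapp'                        # placeholder
    deployToSlotOrASE: true
    ResourceGroupName: 'my-rg'                   # placeholder
    SlotName: 'staging'                          # placeholder
    packageForLinux: '$(Pipeline.Workspace)/drop/*.zip'
    TakeAppOfflineFlag: true                     # place app_offline.htm while deploying
    enableCustomDeployment: true
    DeploymentType: 'runFromZip'                 # run from package, avoids file locks
    RenameFilesFlag: true                        # rename locked files instead of failing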

Media not found Exception In Email Business Process (Hybris)

I've created a process to send an email to the user on order confirmation.
The problem is that on the DEV environment everything goes well, but when I deployed to the UAT server
I got an exception during task execution: "Media not found (requested media location: hf0/h27/8861015965726.bin)".
Any ideas what could be happening?
What causes this issue, and how can it be resolved?
hybris creates emails using Velocity templates. Those Velocity templates are stored as Media items on the hybris servers. A hybris Media consists of two parts: an entry in the respective table in the database, and a file on the hard drive. The database entry stores metadata about the media, while the file stores the actual content.
Now, what hybris is telling you is that the file on the hard drive is missing: the database entry points to a file that does not exist. There could be a lot of reasons why that file is missing:
It was deleted during deployment.
It wasn't created during deployment.
The hybris server has no access/access rights to that directory.
In a clustered environment, the file could have been stored on another node and not be accessible from the current node.
The Media could be the email itself, as Johannes stated, but it can also be part of the email, for example an image set from the CMS Cockpit.
To fix this issue you have to master your ImpEx flows.
First, be sure the ImpEx files contain all the data needed to create the email properly.
Then, know what is imported when you deploy and when you update your system.
Be sure that mandatory files are imported during initialization.
Be sure that data that can be managed by webmasters is not reset by ImpEx during an update.
If some data is created during an update (because initialization has already been done), be sure it won't be replayed on every subsequent update. A typical email-template import is sketched below.
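As a hedged sketch of such an import, this is how a Velocity email body is commonly declared as a Media in ImpEx; the extension constants class, resource path, and media code are hypothetical, while the translator class is the standard hybris media importer:

$emailResource=jar:com.acme.core.constants.AcmeCoreConstants&/acmecore/import/emails
INSERT_UPDATE Media;code[unique=true];@media[translator=de.hybris.platform.impex.jalo.media.MediaDataTranslator];mime[default='text/plain']
;email-orderConfirmationBody;$emailResource/orderConfirmationBody.vm;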
As the media file is not found, you can:
1. go to hMC -> Multimedia -> Media; in the search panel,
2. click the "search additional attributes" dropdown box and select "PK of file",
3. use "8861015965726" as the PK of the file to search.
Then you can find out which file is missing, and you can import ImpEx or upload the file using hMC to fix this problem.
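Equivalently, a FlexibleSearch sketch to locate the broken Media row from the hAC console (assuming the missing file's identifier appears in the standard Media location attribute):

SELECT {pk}, {code}, {location}
FROM {Media}
WHERE {location} LIKE '%8861015965726%'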

Not Able to Publish ADF Incremental Package

As posted in an earlier thread about syncing data from on-premises MySQL to Azure SQL (referring to this article), I found that the lookup component for watermark detection is only available for SQL Server.
So I tried a workaround: in the Copy data flow task, pick the data from MySQL that is greater than the last stored watermark.
Issue:
Able to validate the package successfully, but not able to publish it.
Question:
In the Copy data flow task I'm using the query below to get the data from MySQL that is greater than the available watermark.
Can't we use a query like the one below on other relational sources like MySQL?
select * from #{item().TABLE_NAME} where #{item().WaterMark_Column} > '#{activity('LookupOldWaterMark').output.firstRow.WatermarkValue}'
Screenshots: Copy task SQL query preview; validates successfully; error with no details; debugs successfully.
Errors after following the steps mentioned by Franky:
1. Azure SQL linked service error (resolved by reconfiguring the connection / editing the credentials in the connection tab)
2. Source query went blank (resolved by re-selecting the source type and rewriting the query)
Could you verify whether you have access to create a template deployment in the Azure portal?
1) Export the ARM template: in the top-right of the ADFv2 portal, click ARM Template -> Export ARM Template, extract the zip file, and copy the content of the "arm_template.json" file.
2) Create the ARM template deployment: go to https://portal.azure.com/#create/Microsoft.Template and log in with the same credentials you use in the ADFv2 portal (you can also get to this page in the Azure portal by clicking "Create a resource" and searching for "Template deployment"). Now click "Build your own template in the editor", paste the ARM template from the previous step into the editor, and save.
3) Deploy the template: click on existing resource group and select the same resource group as the one where your Data Factory is. Fill out the missing parameters (for this test it doesn't really matter whether the values are valid); the factory name should already be there. Agree to the terms and click Purchase.
4) Verify that the deployment succeeded. If not, let me know the error; it might be an access issue, which would explain why your publish fails. (The ADF team is working on providing a better error for this issue.)
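The same check can also be run from the Azure CLI, as a sketch (the resource group name is a placeholder, and the template file is the one exported in step 1; any missing parameters can be supplied with --parameters):

az deployment group create \
  --resource-group my-adf-rg \
  --template-file arm_template.json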
Did any of the objects publish into your Data Factory?