How to use multiple client certificates stored in keystore.p12 and set JMeter's system.properties when setting up a test in the Azure Load Testing service? - azure-devops

I am trying to set up a test (manual/YAML) in the Azure Load Testing service. My test uses client certificates, so I uploaded the .jmx script, the keystore (.p12), and a CSV file (containing the aliases of the certificates in the keystore) to the test plan.
In Azure Load Testing, where can I set the javax.net.ssl.keyStoreType, javax.net.ssl.keyStore, javax.net.ssl.keyStorePassword, https.use.cached.ssl.context, https.keyStoreStartIndex, and https.keyStoreEndIndex properties?
With standalone JMeter, I would set the above properties in JMeter's system.properties file, but I'm not sure how to get this working in Azure Load Testing.
Please suggest. Thanks.

As per Configure a load test in YAML:
configurationFiles: List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script.
So my expectation is that if you upload the system.properties file along with the .jmx script and the CSV file with certificate aliases, the Azure Load Testing engine should pick them up and apply them.
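A minimal YAML test configuration along these lines should then work (a sketch only - the file names are illustrative and the keys follow the schema described in the article above, so double-check them against the current docs):

version: v0.1
testName: client-cert-test
testPlan: loadtest.jmx
engineInstances: 1
configurationFiles:
  - keystore.p12              # keystore holding the client certificates
  - certificate-aliases.csv   # CSV with the certificate aliases
  - system.properties         # JMeter system properties (see below)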
It should also be possible to do this via the GUI, by uploading the same files when creating or editing the test.
More information: How to Use Multiple Certificates When Load Testing Secure Websites
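The uploaded system.properties file itself would then carry the properties from the question; a sketch with placeholder values:

# placeholder values - adjust the keystore name, password and alias index range
javax.net.ssl.keyStoreType=pkcs12
javax.net.ssl.keyStore=keystore.p12
javax.net.ssl.keyStorePassword=changeit
https.use.cached.ssl.context=false
https.keyStoreStartIndex=0
https.keyStoreEndIndex=9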

Related

Aspera Node API /files/{id}/files endpoint not returning up-to-date data

I am working on a web app for transferring files with Aspera. We are using AoC for the transfer server and an S3 bucket for storage.
When I upload a file to my S3 bucket using Aspera Connect, everything appears to be successful: I see it in the bucket, and I see the new file in the directory when I run /files/browse on the parent folder.
I am refactoring my code to use the /files/{id}/files endpoint to list the directory, because the documentation says it is faster than the /files/browse endpoint. After the upload is complete, when I run the /files/{id}/files GET request, the new file does not show up in the returned data right away. It only becomes available after a few minutes.
Is there some caching mechanism in place? I can't find anything about this in the documentation. When I make a transfer in the AoC dashboard, everything updates right away.
Thanks,
Tim
Yes, the file-id-based system uses an in-memory cache (Redis).
This cache is updated when a new file is uploaded using Aspera. But for file movements made directly on the storage, there is a daemon that periodically scans for and finds new files.
If you want to bypass the cache and have the API read the storage, you can add this header to the request:
X-Aspera-Cache-Control: no-cache
Another possibility is to trigger a scan by reading:
/files/{id}
for the folder id
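For example (a sketch only - the node URL, file id and access token are placeholders; the header is the one mentioned above):

# bypass the Redis cache and have the node API read the storage directly
curl -H "Authorization: Bearer $NODE_TOKEN" \
     -H "X-Aspera-Cache-Control: no-cache" \
     "https://node.example.com/files/12345/files"

# or trigger a scan by reading the folder id itself
curl -H "Authorization: Bearer $NODE_TOKEN" \
     "https://node.example.com/files/12345"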

De-serialize JSON metadata to .qvf using the Qlik Sense API

I am aware of the Qlik Sense serialize app, where we generate a JSON object containing the metadata of a .qvf file using the Qlik Sense API.
I want to do the reverse operation, i.e. generate the .qvf file back from the JSON metadata.
After much research I only found this GitHub link, and it does not have complete information.
Any solution would be helpful.
Technically you can't create a .qvf directly from JSON. You'll have to create an empty .qvf and then use various APIs to import the JSON.
Qlik has a very nice tool for unbuilding/building apps (and more): qlik-cli has dedicated commands for unbuild/build.
If you are looking for something more "programmable", I've created an enigma.js mixin for the same purpose - enigma-mixin. I still need to perform more detailed testing there, but it was working OK with simpler tests.
Update 08/10/2021
Using qlik-cli
Set up a context first:
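For example (a sketch - the context name, tenant URL and API key are placeholders; check qlik context create --help for the exact flags in your qlik-cli version):

# create a context for the tenant/server and make it the current one
qlik context create mytenant --server https://your-tenant.us.qlikcloud.com --api-key <api-key>
qlik context use mytenant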
Then unbuild an app:
qlik app unbuild --app 11111111-2222-3333-4444-555555555555
This will create a new folder in the current directory, named <app_name>-unbuild. The folder will contain all the info about the app in JSON and/or YAML files.
Once these files are available, you can use them to build another app. Note that the target app should exist before the build is run:
qlik.exe app build --config ./config.yml --app 55555555-4444-3333-2222-111111111111
The above command will use all the available files (specified in config.yml) and update the target app.
If you don't want all the files to be used and, for example, only want to update the data connections, the build command can be run with different arguments:
qlik.exe app build --connections ./connections.yml --app 55555555-4444-3333-2222-111111111111
This command will only update the data connections in the target app and will not update anything else.

How to read a local CSV file using Azure Data Factory and a self-hosted integration runtime?

I have a Windows Server VM with the ADF integration runtime installed, running under a local account called deploy. This account is a member of the local Administrators group. The server is not domain-joined.
I created a new linked service (File System) and pointed it at a CSV file in the root of the C: drive as a test. When I test the connection I get Connection failed:
Error occurred when trying to access the file in Folder 'C:\etr.csv', File filter: ''. The directory name is invalid. Activity ID: 1b892702-7cc3-48d5-83c7-c680d6d15afd.
Any ideas on a fix?
The linked service needs to point to a folder on the target machine. In your screenshot, change C:\etr.csv to C:\ and then define a new dataset that uses the linked service and selects etr.csv.
The dataset represents the structure of the data within the linked data store, while the linked service defines the connection to the data source. So the linked service should point to the folder instead of the file: it should be C:\ instead of C:\etr.csv.
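As a rough sketch (names and the password placeholder are illustrative), the File System linked service points at the folder via the self-hosted integration runtime:

{
    "name": "LocalFileSystem",
    "properties": {
        "type": "FileServer",
        "typeProperties": {
            "host": "C:\\",
            "userId": "deploy",
            "password": { "type": "SecureString", "value": "<password>" }
        },
        "connectVia": {
            "referenceName": "<self-hosted IR name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}

and a DelimitedText dataset then selects the file:

{
    "name": "EtrCsv",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": { "referenceName": "LocalFileSystem", "type": "LinkedServiceReference" },
        "typeProperties": {
            "location": { "type": "FileServerLocation", "fileName": "etr.csv" },
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        }
    }
}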

fabric-samples:balance-transfer example - v1.1.0 - Missing instructions?

In the fabric-samples balance-transfer example (v1.1.0), on a customized network with cryptogen-generated crypto, the fabric-client-kv* contents are failing to be created. Are instructions missing? Please advise what needs to be done to create these folders and their contents in the sample's root directory and in the /tmp directory for the wallet setup.
Created a customized network
Generated cryptogen content for the customized network
Brought up the network and verified it to be running correctly
Adapted the runApps.sh and testAPIs.sh scripts to use the customized network with its crypto
The user enrollment and registration process failed due to missing fabric-client-kv* contents.
This is not an issue when the sample itself is run; the fabric-client-kv* contents are generated or re-generated.
What is missing and what needs to be done to succeed?
If you regenerate the certificates, the same should be updated in docker-compose and network-config. If you're adding a new organization to the network, you need to create a network connection profile configuration, which has the settings for keyValueStore and cryptoStore. In the balance-transfer example the crypto materials are stored in the /tmp folder, so if you restart the system you will lose those materials. You can change these configurations in org*.yaml.
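For reference, the relevant section of the per-org connection profile in the balance-transfer sample (org1.yaml; the /tmp paths are the sample's defaults and can be pointed elsewhere) looks roughly like this:

client:
  organization: Org1
  credentialStore:
    # where the fabric-client-kv-* key/value store contents are written
    path: "/tmp/fabric-client-kv-org1"
    cryptoStore:
      # where the user's crypto material (keys) is stored
      path: "/tmp/fabric-client-kv-org1"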

Deploying SSAS cube to environments

We are using BIDS 2008 locally (on our workstations) to develop our OLAP objects/cube. Come the time of promotion to Development, we can deploy via BIDS. However, when a hands-off deployment is required (e.g. to UAT or Live) we generate an XMLA file. This (the generated XMLA file) of course contains environment-specific information (e.g. server name, database name, etc.). If we would like to automate the generation of the XMLA file for deployment to each environment, is there a config-type process to parameterise these values (like .NET web.config appSettings, or SSIS dtsConfig files)?
Note that we could parse the XMLA file and replace these values depending on the environment (e.g. via xmlpoke), but this is a little messy and depends on the XML path structure, so we would rather avoid this approach.
This should point you in the right direction: http://blog.kejser.org/2006/11/28/automating-build-of-analysis-services-projects/
Here's more on the deployment utility and command line switches: http://msdn.microsoft.com/en-us/library/ms162758(v=sql.105).aspx
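The deployment utility can then be scripted in output mode to produce an environment-specific XMLA file; a sketch (the SSAS 2008 install path and project paths are illustrative):

rem generate an XMLA deployment script for UAT without connecting to the target server
"%ProgramFiles(x86)%\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" ^
  "C:\Builds\MyOlapProject\bin\MyOlapProject.asdatabase" ^
  /o:"C:\Builds\MyOlapProject_UAT.xmla" /d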
Before using Microsoft.AnalysisServices.Deployment to generate the XMLA file to deploy to an AS instance, we need to update the files below to change all connection strings and deployment options:
project.asdatabase
project.deploymenttargets
project.configsettings
project.deploymentoptions
Regards,