I have a Fastfile that performs the uploadToTestFlight action:
uploadToTestflight(
  username: "foo@example.com",
  skipWaitingForBuildProcessing: false,
  distributeExternal: true
)
This succeeded when I ran it. However, it didn't actually distribute the build to anyone. When I look at the build on App Store Connect > My Apps > Foo App > TestFlight > iOS, it says "Approved" near the build name, which implies that it already went through the review process.
However, when I click on the build, I notice that the only group or users to which it was released is App Store Connect Users, meaning it wasn't actually released externally.
I have a group named Foo Group which I would like to release it to whenever I run fastlane. How do I do that?
I tried to resolve this via the documentation for Pilot, but it doesn't have an example of distributing externally.
In the Fastlane repo on GitHub, I found this code in pilot/lib/pilot/build_manager.rb
if options[:distribute_external] && options[:groups].nil?
  # Legacy Spaceship::TestFlight API used to have a `default_external_group` that would automatically
  # get selected but this no longer exists with Spaceship::ConnectAPI
  UI.user_error!("You must specify at least one group using the `:groups` option to distribute externally")
end
My guess is that you didn't notice this warning in the output of your fastlane run. Did you specify the groups parameter?
It's also worth specifying the changelog param if you're doing external releases fully automatically.
optional_changelog = %Q{
Your changelog
}
upload_to_testflight(
  ...
  changelog: optional_changelog,
  distribute_external: true,
  distribute_only: true, # set to false if you also want to upload the ipa
  groups: [
    "Your group",
    "Your other group"
  ],
  skip_submission: false, # defaults to false if not specified
  skip_waiting_for_build_processing: false, # defaults to false if not specified
)
FTR, I was having issues uploading to an external group, and the issue was that skip_waiting_for_build_processing needs to be set to false.
So ensure that you have the following params set (combined into a sketch after the documentation link below):
distribute_external: true,
groups: ['Name of your group'],
skip_submission: true,
notify_external_testers: true,
skip_waiting_for_build_processing: false,
https://docs.fastlane.tools/actions/testflight/#Parameters
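Putting those params together, a minimal Fastfile lane might look like the sketch below; the lane name is a placeholder and "Foo Group" stands in for your actual TestFlight group name:

lane :beta_external do
  upload_to_testflight(
    distribute_external: true,
    groups: ["Foo Group"],                    # must match the external tester group name in App Store Connect
    skip_submission: true,
    notify_external_testers: true,
    skip_waiting_for_build_processing: false  # fastlane must wait for processing before it can distribute the build
  )
end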
We are currently trying to use ownCloud for our office storage, and everything went smoothly until there was an update for the ownCloud server and the app started asking everyone visiting the site to update. Even before login, ownCloud asks visitors to update to the latest version.
Apart from several changes we made to the ownCloud core source to suit our requirements, there is also a concern from our security policy (which is best practice in most cases) that no one can update the apps except an admin. Even then, admins must test updates before deploying them to production.
In the documentation there are several config items for controlling updates; here is our config for disabling updates:
'upgrade.disable-web' => false,
'upgrade.automatic-app-update' => false,
'updatechecker' => false,
'updater.server.url' => '127.0.0.1',
Yet the update notice still appeared. Our team then tried blocking all outgoing traffic, and a new error 500 screen showed up several days later; here is what we found in the log:
{"reqId":"L75LhvGcWcDjxjqxL3sD","level":3,"time":"2022-11-11T04:47:08+00:00","remoteAddr":"xxx.xxx.xxx.xxx","user":"--","app":"index","method":"GET","url":"\/","message":"Exception: {\"Exception\":\"OC\\\\NeedsUpdateException\",\"Message\":\"\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/lib\\\/private\\\/legacy\\\/app.php(124): OC_App::loadApp()\\n#1 \\\/var\\\/www\\\/owncloud\\\/lib\\\/base.php(904): OC_App::loadApps()\\n#2 \\\/var\\\/www\\\/owncloud\\\/index.php(54): OC::handleRequest()\\n#3 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/lib\\\/private\\\/legacy\\\/app.php\",\"Line\":188}"}
Has anyone had the same issue? If this keeps happening, I think we will dump ownCloud completely and build our own.
Thanks
Azure Functions swap functionality is not working after enabling a private endpoint (with the selected-networks option) for the function app's linked storage account (webjobstorage).
I created private endpoints for blob, file, and table storage.
Below are the additional app settings I am adding:
{
  "name": "WEBSITE_CONTENTOVERVNET",
  "value": "1",
  "slotSetting": false
},
{
  "name": "WEBSITE_CONTENTSHARE",
  "value": "production",
  "slotSetting": false
},
{
  "name": "WEBSITE_DNS_SERVER",
  "value": "168.63.129.16",
  "slotSetting": false
},
{
  "name": "WEBSITE_VNET_ROUTE_ALL",
  "value": "1",
  "slotSetting": false
}
I referred to this article: Secure storage account linked to Function App with private endpoint.
From Azure DevOps I am trying to deploy the code to the staging slot first and then swap it with the prod slot; at this step it is failing.
I also tried to swap from the portal, and that failed as well.
I am getting the errors below.
From the DevOps swap task:
##[error]Error: Failed to swap App Service 'testmgmt-fa-min-go' slots - 'staging' and 'production'. Error: InternalServerError - There was an unexpected error swapping slots 'staging' and 'production' for site 'testmgmt-fa-min-go(staging)'. Please try to cancel your swap operation. (CODE: 500)
From the portal:
This was caused by an internal platform component, and I'll update this question when the component fix has been fully released. Unfortunately, the ETA for a full rollout is within the next 3 to 4 months.
Thanks to @UBK; your comment helped me resolve the same swapping issue in my Azure private endpoint function app.
I tried to reproduce the issue by following the given documentation: Secure storage account linked to Function App with private endpoint - Microsoft Tech Community
I solved the swapping issue by allowing access from all networks in the storage account's Networking settings.
The fix is deployed but we had to introduce a new app setting that you should set on your production slot (or the swap slot if you're swapping between two subslots) called WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS and set it to 0 (zero). I.e.,
WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS=0
This will allow you to swap the slots when the storage account is network restricted. Here is our documentation on app settings. This should not have any impact on your Azure Monitor related diagnostics settings configuration and is related to the legacy Application Log Settings configuration, which was preventing Premium Functions slot swaps from occurring.
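For reference, a minimal sketch of setting this on the production slot with the Azure CLI might look like the following; the function app and resource group names are placeholders:

# Add the override setting to the production slot of the function app
az functionapp config appsettings set \
  --name my-func-app \
  --resource-group my-rg \
  --settings WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS=0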
Next steps on our side are:
We will add a backlog work item to make this setting the default for Premium Functions, so you won't have to add it yourself; there is currently no ETA for this, so the above is the current final solution.
We will add the app setting to our App Settings list documentation.
I have reached a point where Amplify fails to push any change I make, with a non-existent UserPool clientId exception.
Something like:
Resource Name: XXXXXXXXXXX (AWS::Cognito::UserPoolClient)
Event Type: update
Reason: User pool client does not exist. (Service: AWSCognitoIdentityProviderService; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: YYYYYYYYYYYYYYYYYY)
URL: https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/xxxxxxxxxxx
I have explained my whole journey in a GitHub issue for the Amplify CLI, which you can see here; unfortunately, I'm not getting much support from the Amplify team, as you can see there.
I have also created a Stack Overflow question with the initial problem I was facing, which you can check here.
After digging into this issue for 3-4 long days (it is blocking my deployment), here is my guess at what happened:
I added auth to my Amplify project months ago.
Eventually, I noticed one of the created clients was not being used, so I deleted it using the Cognito console.
I did not update the auth category for months.
Now that I have introduced social authentication, Amplify tries to update auth, and because the client ID no longer exists, it can't and raises the error above.
Now anything I try to update fails, and I guess the reason is this mismatch between what Amplify expects and what the infrastructure actually is.
Every time I `amplify pull --restore` my environment, my amplify-meta.json gets updated with this invalid client ID (and yes, I have tried changing it in the local amplify-meta.json and pushing), something like:
"auth": {
"myproject": {
"service": "Cognito",
"providerPlugin": "awscloudformation",
"output": {
"GoogleWebClient": "111111111.apps.googleusercontent.com",
"AppClientSecret": "aaaaaaaaaaa",
"UserPoolId": "region-pooId",
"AppClientIDWeb": "VALID ID",
"AppClientID": "INVALID ID",
"FacebookWebClient": "2222222222",
"IdentityPoolId": "region:Id",
"IdentityPoolName": "myproject__env",
"UserPoolName": "mypoolname"
},
"lastPushTimeStamp": "2020-05-13T20:48:29.797Z",
"providerMetadata": {
"s3TemplateURL": "https://s3.amazonaws.com/myproject-deployment/amplify-cfn-templates/auth/lexis-cloudformation-template.yml",
"logicalId": "authmyproject"
},
"lastPushDirHash": "XXXXXXXXXXXXXX="
}
},
I have a different, valid client ID in my Cognito user pool, so as a last resort I tried going directly to the s3TemplateURL referenced in this config and updating the ID there to the valid one; my guess was that this file was the single source of truth for Amplify.
But no success; I still get the same wrong ID after `pull --restore`.
Any idea how I can get Amplify back in sync, making it aware that this client ID doesn't exist anymore and just getting rid of it in the CloudFormation templates?
The Amplify CLI does not support this feature.
I had the same problem.
I updated AppSync and Cognito in the cloud and could not pull the changes into my project.
When I ran `amplify status`, it said there were no changes.
So I contacted AWS Support and they said this is an upcoming feature.
The workaround is to make all changes through the Amplify CLI and manage Amplify from there; don't change anything directly in the cloud.
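In practice that means driving every change through the CLI; a rough sketch of that kind of workflow (the exact sequence is an assumption, not from the answer above) is:

amplify pull --restore   # re-sync the local project from the cloud backend
amplify update auth      # make auth changes via the CLI instead of the Cognito console
amplify status           # confirm which categories will be updated
amplify push             # push the regenerated CloudFormation templates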
I'm trying to automatically create review apps as part of my pipeline and testing procedure when pull requests are created on the corresponding GitHub repository. When the PR is created, it appears as a review app, but doesn't actually get created.
In the DevTools console there is a 404 error about review-app-config. I'm not sure whether this is directly related, as I've successfully created a review app on a different pipeline (with a different owner) despite the same error.
This 404 error alternates between the file not being available at all and the file returning an error. In the latter case, the file contains the following:
{"id":"missing_version","error":"Please specify a version along with Heroku's API MIME type. For example, `Accept: application/vnd.heroku+json; version=3`.\n"}
I'm creating and managing all of the apps/pipelines with the GUI on dashboard.heroku.com. The version Accept header appears to be needed for the Heroku API, but I've no idea how to implement it. Any help would be greatly appreciated!
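For reference, that error message is the Heroku Platform API asking for an explicit version in the Accept header; a direct API call outside the dashboard GUI would normally send it like this (a sketch, assuming a valid token in $HEROKU_API_KEY):

curl https://api.heroku.com/apps \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $HEROKU_API_KEY"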
Firstly, check that your app.json file is valid JSON. If it isn't, that will cause the deployment to fall over.
Secondly, check whether you have anything in the app.json scripts key. If you have entries here and they are incorrect, this will also cause it to hang and fall over with no warning displayed.
{
  "name": "App name",
  "scripts": {
    "deploy": "command that won't work!!"
  },
  ...
}
You may not need any scripts here, so it can also be empty:
{
  "name": "App name",
  "scripts": {},
  ...
}
How is it possible to use Metadata in on_success/on_failure? For example, to send emails via https://github.com/pivotal-cf/email-resource?
I haven't found a way, as I can't change the content of the files the email resource reads (subject/body), because the metadata is not available to tasks.
And yes, this might be a duplicate of Concourse CI and Build number,
but IMHO my question is still a valid use case for notifications.
The metadata you are referring to is, I assume, the set of environment variables provided to resources, not tasks.
These can be used with the Slack resource to provide information about which build failed.
For example:
on_failure:
  put: slack-alert
  params:
    text: |
      The `science` pipeline has failed. Please resolve any issues and ensure the pipeline lock was released. Check it out at:
      $ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
The email resource you're referencing has an open PR to support these environment variables; I'd discuss your need for that feature there.