Error using CLI for cloud functions with IAM namespaces - ibm-cloud

I'm trying to create an IBM Cloud Function web action from some Python code. This code has a dependency which isn't in the runtime, so I've followed the steps here to package the dependency with my code. I now need to create the action on the cloud for this package, using the steps described here. I've run into several issues.
The first is that I want to check that this will go into the right namespace. However, though I have several namespaces, none of them show up when I run ibmcloud fn namespace list; I just get an empty table with headers. I checked that I was targeting the right region using ibmcloud target -r eu-gb.
The second is that when I try to bypass the problem above by creating a namespace from the command line using ibmcloud fn namespace create myNamespaceName, it works, but when I then check the web UI, the new namespace has been created in the Dallas region instead of the London one. I can't seem to get it to create a namespace in the region I am currently targeting; it's always Dallas.
The third problem is that when I try to follow steps 2 and 3 from here regardless, accepting that the action will end up in the unwanted Dallas namespace, by running the equivalent of ibmcloud fn action create demo/hello <filepath>/hello.js --web true, it keeps telling me I need to target an org and a space. But my namespace is an IAM namespace; it doesn't have an org and a space, so there are none to give.
Please let me know if I'm missing something obvious or have misunderstood something, because to me it feels like the CLI is not respecting the targeted region and not handling IAM namespaces correctly.
Edit: adding code as suggested, but this code runs fine locally; it's the CLI part that I'm struggling with.
import sys
import requests
import pandas as pd
import json
from ibm_ai_openscale import APIClient

def main(dict):
    # Get AI OpenScale GUID
    AIOS_GUID = None
    token_data = {
        'grant_type': 'urn:ibm:params:oauth:grant-type:apikey',
        'response_type': 'cloud_iam',
        'apikey': 'SOMEAPIKEYHERE'
    }
    response = requests.post('https://iam.bluemix.net/identity/token', data=token_data)
    iam_token = response.json()['access_token']
    iam_headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer %s' % iam_token
    }
    resources = json.loads(requests.get('https://resource-controller.cloud.ibm.com/v2/resource_instances', headers=iam_headers).text)['resources']
    for resource in resources:
        if "aiopenscale" in resource['id'].lower():
            AIOS_GUID = resource['guid']
    AIOS_CREDENTIALS = {
        "instance_guid": AIOS_GUID,
        "apikey": 'SOMEAPIKEYHERE',
        "url": "https://api.aiopenscale.cloud.ibm.com"
    }
    if AIOS_GUID is None:
        print('AI OpenScale GUID NOT FOUND')
    else:
        print('AI OpenScale FOUND')

    # Get OpenScale subscription
    ai_client = APIClient(aios_credentials=AIOS_CREDENTIALS)
    subscriptions_uids = ai_client.data_mart.subscriptions.get_uids()
    for sub in subscriptions_uids:
        if ai_client.data_mart.subscriptions.get_details(sub)['entity']['asset']['name'] == "MYMODELNAME":
            subscription = ai_client.data_mart.subscriptions.get(sub)

    # Explainability test
    sample_transaction_id = "SAMPLEID"
    run_details = subscription.explainability.run(transaction_id=sample_transaction_id, cem=False)

    # Format the results
    run_details_json = json.dumps(run_details)
    return run_details_json

I know the OP said they were 'targeting the right region'. But I want to make it clear that the 'right region' is the exact region in which the namespaces you want to list or target are located.
Unless you target this region, you won't be able to list or target any of those namespaces.
This is counterintuitive because:
You are able to list the Service IDs of namespaces in regions other than the one you are targeting.
The web portal allows you to see namespaces in all regions, so why shouldn't the CLI?
I was having an issue very similar to the OP's first problem, but once I targeted the correct region it worked fine.
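For reference, a minimal command sequence along those lines (a sketch; the namespace name and zip file are illustrative placeholders):
ibmcloud target -r eu-gb                       # target the region that actually contains the namespaces
ibmcloud fn namespace list                     # the eu-gb namespaces should now appear
ibmcloud fn namespace target myNamespaceName   # targeting an IAM namespace also avoids the org/space requirement
ibmcloud fn package create demo
ibmcloud fn action create demo/hello hello.zip --kind python:3.7 --web true
Once an IAM namespace is targeted with ibmcloud fn namespace target, the CLI stops asking for a Cloud Foundry org and space, which addresses the third problem in the question.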

Related

Pulumi ensure dependency order

I need a way of ensuring some services have been stood up and their URLs formalised in GCP before I craft an OpenAPI spec - using substitutions. The URLs are relatively dynamic as this environment is torn down nightly.
One solution I have is
import { helloWorldUrl } from './cloud-run/hello-world';
import { anotherHelloWorldUrl } from './cloud-run/another-hello-world-service';
import * as pulumi from '@pulumi/pulumi';
import * as fs from 'fs';
import * as Mustache from 'mustache';

pulumi.all([helloWorldUrl, anotherHelloWorldUrl])
    .apply(([hello, another]) => {
        let gatewayOpenAPI = fs.readFileSync('./api-gateway/open-api/gateway.yaml').toString();
        gatewayOpenAPI = Mustache.render(gatewayOpenAPI, { helloWorldUrl: hello, anotherHelloWorld: another });
        fs.writeFileSync(`./api-gateway/open-api/gateway-${pulumi.getStack()}.yaml`, gatewayOpenAPI);
        // create api gateway infra here.
        // cannot return outputs here :(
    });
but this does not allow me to set Outputs. Is there a more elegant solution to this?
Cheers
KH
If you want a strict dependency order, you should use Pulumi Component Resources. You can then pass your URLs as inputs to the component resource, and access any outputs created by it. You should also note that creating resources in the callback of the apply method is not allowed.
You might find the following example helpful to see the component resources in action: https://www.pulumi.com/registry/packages/aws/how-to-guides/s3-folder-component/
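For illustration, here is a minimal sketch of such a component resource under the question's setup; the type token custom:infra:ApiGateway, the class name, and the specPath output are invented for the example:

import * as pulumi from '@pulumi/pulumi';
import * as fs from 'fs';
import * as Mustache from 'mustache';

interface ApiGatewayArgs {
    helloWorldUrl: pulumi.Input<string>;
    anotherHelloWorldUrl: pulumi.Input<string>;
}

class ApiGateway extends pulumi.ComponentResource {
    public readonly specPath: pulumi.Output<string>;

    constructor(name: string, args: ApiGatewayArgs, opts?: pulumi.ComponentResourceOptions) {
        super('custom:infra:ApiGateway', name, {}, opts);
        // Returning a plain value from apply is fine; creating resources inside it is not.
        this.specPath = pulumi.all([args.helloWorldUrl, args.anotherHelloWorldUrl])
            .apply(([hello, another]) => {
                let spec = fs.readFileSync('./api-gateway/open-api/gateway.yaml').toString();
                spec = Mustache.render(spec, { helloWorldUrl: hello, anotherHelloWorld: another });
                const out = `./api-gateway/open-api/gateway-${pulumi.getStack()}.yaml`;
                fs.writeFileSync(out, spec);
                return out;
            });
        // Child resources created here with { parent: this } are ordered after
        // the URL inputs resolve, and the component can expose real outputs.
        this.registerOutputs({ specPath: this.specPath });
    }
}

// Usage: outputs are now exportable from the stack program.
const gateway = new ApiGateway('gateway', { helloWorldUrl, anotherHelloWorldUrl });
export const specPath = gateway.specPath;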

Resolution error: Cannot use resource 'x' in a cross-environment fashion, the resource's physical name must be explicit set

I'm trying to pass an ECS cluster from one stack to another stack.
I get this error:
Error: Resolution error: Resolution error: Resolution error: Cannot use resource 'BackendAPIStack/BackendAPICluster' in a cross-environment fashion, the resource's physical name must be explicit set or use `PhysicalName.GENERATE_IF_NEEDED`.
The cluster is defined as below in BackendAPIStack:
this.cluster = new ecs.Cluster(this, 'BackendAPICluster', {
    vpc: this.vpc
});
The stacks are defined as follows:
const backendAPIStack = new BackendAPIStack(app, `BackendAPIStack${settingsForThisEnv.stackVersion}`, {
    env: {
        account: process.env.CDK_DEFAULT_ACCOUNT,
        region: process.env.CDK_DEFAULT_REGION
    },
    digicallPolicyQueue: digicallPolicyQueue,
    environmentName,
    ...settingsForThisEnv
});
const metabaseStack = new MetabaseStack(app, 'MetabaseStack', backendAPIStack.vpc, backendAPIStack.cluster, {
    vpc: backendAPIStack.vpc,
    cluster: backendAPIStack.cluster
});
metabaseStack.addDependency(backendAPIStack);
Here's the constructor for metabaseStack:
constructor(scope: cdk.Construct, id: string, vpc: ec2.Vpc, cluster: ecs.Cluster, props: MetabaseStackProps) {
    super(scope, id, props);
    console.log('cluster', cluster);
    this.vpc = vpc;
    this.cluster = cluster;
    this.setupMetabase();
}
and then I'm using the cluster here:
const metabaseService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'Metabase', {
    assignPublicIp: false,
    cluster: this.cluster,
    ...
I can't find documentation on how to do what I'm trying to do.
You're creating a Region/Account-specific Stack with BackendAPIStack because you're binding the stack to a specific account and region via the env prop value.
Then you're creating a Region/Account-agnostic stack by creating the MetabaseStack without any env prop value.
In general, having two independent stacks like this is fine, but here you're linking them together by passing a reference from the BackendAPIStack to the MetabaseStack, which won't work.
This is a problem because CDK normally links Stacks together by performing Stack Exports and Imports of values, but CloudFormation does not support cross-region or cross-account Stack references.
So, your possible solutions are:
(A) Set up your MetabaseStack to use the same account/region as your BackendAPIStack
Under the hood this will set up the Cluster's ARN as a Stack export from BackendAPIStack, which MetabaseStack will then be able to import.
(B1) Create BackendAPICluster with a clusterName that you pick.
i.e. new Cluster(..., {vpc: this.vpc, clusterName: 'backendCluster' })
By not providing a name, you're using the default of a CloudFormation-generated name, which is the basis of the issue CDK is reporting, albeit in a confusing way.
When you do provide a name, then the ARN for the cluster is deterministic (not picked by CloudFormation at deployment time) so CDK then has enough information at build time to determine what the Cluster's ARN will be and can provide that to your MetabaseStack.
(B2) Create BackendAPICluster with a clusterName and let CDK pick
This is done by setting the clusterName to PhysicalName.GENERATE_IF_NEEDED
i.e. new Cluster(..., {clusterName: PhysicalName.GENERATE_IF_NEEDED })
PhysicalName.GENERATE_IF_NEEDED is a marker indicating that a physical name will only be generated by the CDK if it is needed for cross-environment references; otherwise, it will be allocated by CloudFormation.
This is what the error is trying to tell you, but I didn't understand it either...
If possible, I would go with (A). I suspect it was just an oversight anyway that you weren't passing the same env values to the MetabaseStack and you probably want both of these stacks in the same region to reduce latency and all that.
If not, then I would personally then go with (B2) next because I try to not give any of my resources explicit names unless they are part of some contract with another group. I.e. Assume the role named 'ServiceWorker' in Account XYZ or Download the data from Bucket 'ABC'.
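For what it's worth, a minimal sketch of option (A), assuming MetabaseStackProps extends cdk.StackProps so that it accepts an env value:

const metabaseStack = new MetabaseStack(app, 'MetabaseStack', backendAPIStack.vpc, backendAPIStack.cluster, {
    env: {
        account: process.env.CDK_DEFAULT_ACCOUNT, // same account as BackendAPIStack
        region: process.env.CDK_DEFAULT_REGION    // same region as BackendAPIStack
    },
    vpc: backendAPIStack.vpc,
    cluster: backendAPIStack.cluster
});
metabaseStack.addDependency(backendAPIStack);

With both stacks bound to the same environment, CDK can wire the cluster reference through an ordinary CloudFormation export/import and the error goes away.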

Accessing Raw Gamer Profile Picture

I am using the new Xbox Live API for C# (https://github.com/Microsoft/xbox-live-api-csharp) for official access through a UWP app.
I am able to authenticate fine and reference the Xbox Live user in context.
XboxLiveUser user = new XboxLiveUser();
SignInResult result = await user.SignInAsync();
Success! However, I can't seem to find an appropriate API call to return XboxUserProfile or XboxSocialProfile. Both of these classes contain URLs to the player's raw gamer pics. After reviewing the MSDN documentation and the GitHub library, it isn't clear to me how this is achieved. Any help is greatly appreciated.
The sample below should work if you meet the following prerequisites:
Reference the Shared Project that contains the API from your project, and don't reference the "Microsoft.Xbox.Services.UWP.CSharp" project
Copy all source code files from the "Microsoft.Xbox.Services.UWP.CSharp" project into your project
Include the Newtonsoft.Json NuGet package in your project
Steps 1 & 2 are important, as this allows you to access the "internal" constructors that would otherwise be protected from you.
Code to retrieve the profile data:
XboxLiveUser user = new XboxLiveUser();
await user.SignInSilentlyAsync();
if (user.IsSignedIn)
{
    XboxLiveContext context = new XboxLiveContext(user);
    PeopleHubService peoplehub = new PeopleHubService(context.Settings, context.AppConfig);
    XboxSocialUser socialuser = await peoplehub.GetProfileInfo(user, SocialManagerExtraDetailLevel.None);
    // Do whatever you want to do with the data in socialuser
}
You may still run into an issue like I did. When building the project you may face the following error:
Error CS0103 The name 'UserPicker' does not exist in the current context ...\System\UserImpl.cs 142 Active
If you get that error make sure you target Win 10.0 Build 14393.

Trigger will not run (added more information)

I have been tinkering with this trigger for hours now, and I think I have pinpointed the issue.
I have set up an example trigger as in the ML8 documentation.
Now I have modified it into a more real-world action.
The issue seems to be that I use a library module that holds my own functions, in a lib.xqy. I have tested the lib itself in Query Console; all functions run fine.
The alert action itself also runs fine in QC.
The simpleTrigger works ok.
The more complex one runs IF I REMOVE the function that uses my own lib.
It seems that the trigger is run by a user, or from a place, where it cannot find my module (which is in the modules db). I have set the trigger-db to point to the content-db.
The triggers look at a directory for new documents (document create).
If I want to use my own lib function the Error thrown is:
[1.0-ml] XDMP-MODNOTFOUND: (err:XQST0059) xdmp:eval("xquery version "1.0-ml";
let $uri := '/marklo...", (),
<options xmlns="xdmp:eval"><database>12436607035003930594</database>
<modules>32519102440328...</options>)
-- Module /lib/sccss-lib.xqy not found
The module is in the modules-db...
Another thing that bothers me is that the example in the ML documentation does a
xdmp:document-insert("/modules/log.xqy",
    text{ "
        xquery version '1.0-ml';
        ..."
    }, xdmp:permission('app-user', 'execute'))
What does the app-user permission do in this case?
Anyway, my main question: why does the trigger not run if I use a custom module in the trigger action?
I have seen this question and think it is related but I do not understand the answer there...
EDIT start, more information on the trigger create statement:
xquery version "1.0-ml";
import module namespace trgr="http://marklogic.com/xdmp/triggers"
    at "/MarkLogic/triggers.xqy";
trgr:create-trigger("sensorTrigger",
    "Simple trigger for connection systems sensor; the action checks how long this device is around the sensor",
    trgr:trigger-data-event(
        trgr:directory-scope("/marklogic.solutions.obi/source/", "1"),
        trgr:document-content("create"),
        trgr:post-commit()),
    trgr:trigger-module(xdmp:database("cluey-app-content"), "/triggers/", "check-time-at-sensor.xqy"),
    fn:true(), xdmp:default-permissions())
Also, the trigger is indeed created from QC, so indeed as admin (I have yet to figure out how to do that by adding code to app_specific.rb). The trigger action is also loaded from QC, with a document-insert statement equivalent to the trigger example in the docs.
For completeness, I added this to app_specific.rb per Geert's suggestion:
alias_method :original_deploy_modules, :deploy_modules
def deploy_modules()
  original_deploy_modules
  # and apply correct permissions
  r = execute_query %Q{
    xquery version "1.0-ml";
    for $uri in cts:uris()
    return (
      $uri,
      xdmp:document-set-permissions($uri, (
        xdmp:permission("#{@properties["ml.app-name"]}-role", "read"),
        xdmp:permission("#{@properties["ml.app-name"]}-role", "execute")
      ))
    )
  },
  { :db_name => @properties["ml.modules-db"] }
end
For testing I also loaded it as part of the content (using ./ml local deploy content). As said before, when the action is there it will run, so there seems to be no issue with the permissions of the action document itself. What I do not understand is that as soon as I try to use my own module in the action, it fails to find the module, or (see David's comment) does not have the right permissions on the module, so the trigger action fails to run... The module is loaded with Roxy under /src/lib/lib.xqy.
SECOND EDIT
I added all the trigger stuff to Roxy by adding the following to app_specific.rb:
# HK: for using modules that have no REST permissions in a REST extension
alias_method :original_deploy_modules, :deploy_modules
def deploy_modules()
  original_deploy_modules
  # Create triggers
  r = execute_query(%Q{
    xquery version "1.0-ml";
    import module namespace trgr="http://marklogic.com/xdmp/triggers"
      at "/MarkLogic/triggers.xqy";
    xdmp:log("Installing triggers.."),
    try {
      trgr:remove-trigger("sensorTrigger")
    } catch ($ignore) {
    };
    xquery version "1.0-ml";
    import module namespace trgr="http://marklogic.com/xdmp/triggers"
      at "/MarkLogic/triggers.xqy";
    trgr:create-trigger("sensorTrigger", "Trigger to check duration at sensor",
      trgr:trigger-data-event(
        trgr:directory-scope("/marklogic.solutions.obi/source/", "1"),
        trgr:document-content("create"),
        trgr:post-commit()
      ),
      trgr:trigger-module(xdmp:modules-database(), "/", "/triggers/check-time-at-sensor.xqy"),
      fn:true(),
      xdmp:default-permissions(),
      fn:false()
    )
  },
  ######## THIRD EDIT ###############
  # { :app_name => @properties["ml.app-name"] }
  { :db_name => @properties["ml.modules-db"] }
  )
  # and apply correct permissions
  r = execute_query %Q{
    xquery version "1.0-ml";
    for $uri in cts:uris()
    return (
      $uri,
      xdmp:document-set-permissions($uri, (
        xdmp:permission("#{@properties["ml.app-name"]}-role", "read"),
        xdmp:permission("#{@properties["ml.app-name"]}-role", "execute")
      ))
    )
  },
  { :db_name => @properties["ml.modules-db"] }
end
As you can see, the root path is now "/" in the line
trgr:trigger-module(xdmp:modules-database(), "/", "/triggers/check-time-at-sensor.xqy")
I also added permissions by hand, but still, as soon as I add the line pointing to sccss-lib.xqy, my trigger fails...
There are a number of criteria that need to be met for a trigger to work properly. David already mentioned some of them. Let me try to complete the list:
You need to have a database that contains the trigger definition. That is the database against which the trgr:create-trigger was executed. Typically Triggers or some app-triggers.
That trigger database needs to be assigned as triggers-database to the content database (not the other way around!).
You point to a trigger module that contains the code that will get executed as soon as a trigger event occurs. The trgr:trigger-module explicitly points to the uri of the module, and the database in which it is contained.
Any libraries used by that trigger module, need to be in the same database as the trigger module. Typically both trigger module, and related libraries are stored within Modules or some app-modules.
With regard to permissions, the following applies:
First of all, you need the privileges to insert the document (URI and collection privileges).
Then, to be able to execute the trigger, the user that does the insert needs to have a role (directly, inherited, or via amps) with read and execute permission on the trigger module, as well as on all related libraries.
Then that same user needs the privileges to do whatever the trigger module needs to do.
Looking at your create-trigger statement, I notice that the trigger module points to the app-content database. That means it will look for its libraries in the app-content database as well. I would recommend putting both the trigger module and its libraries in the app-modules database.
Also, regarding the app-user execute permission: that is just a convention. The nobody user has the app-user role, which is typically used to allow the nobody user to run rewriter code.
HTH!
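As an illustration of the permissions point, here is a minimal sketch that grants read and execute to a role; run it against the modules database. The app-user role is just the convention mentioned above (your app-specific role may differ), and the URIs are the ones from the question:

xquery version "1.0-ml";
(: grant read + execute on the trigger module and the library it imports :)
for $uri in ("/triggers/check-time-at-sensor.xqy", "/lib/sccss-lib.xqy")
return xdmp:document-set-permissions($uri, (
    xdmp:permission("app-user", "read"),
    xdmp:permission("app-user", "execute")
))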
Could you please provide a bit more information - like perhaps the entire trigger create statement?
For creating the trigger, keep in mind:
the trigger database that you insert the trigger into has to be the one defined in the content database you refer to in the trigger and
trgr:trigger-module allows you to define the modules database and the module to run. With this defined properly, I cannot see how /lib/sccss-lib.xqy would not be found - unless it is a permissions issue...
Now on to the other item in your question: you test stuff in Query Console, which runs with the roles of your user - and people often run it as admin... MarkLogic also gives a 'not found' message when a document is there but you simply do not have access to it. So it is possible that there is a problem with the permissions of the documents in your modules database.
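A quick way to check that from Query Console (a sketch; run it against the modules database) is to inspect the permissions on the library module:

xquery version "1.0-ml";
xdmp:document-get-permissions("/lib/sccss-lib.xqy")

If this returns nothing, or no read/execute pair for a role the triggering user holds, then the XDMP-MODNOTFOUND above is consistent with a permissions problem rather than a genuinely missing document.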

Redmine REST API called from Ruby is ignoring updates to some fields

I have some code which was working at one point but no longer works, which strongly suggests that the Redmine configuration is involved somehow (I'm not the Redmine admin), but the lack of any error messages makes it hard to determine what is wrong. Here is the code:
#!/usr/bin/env ruby
require "rubygems"
gem "activeresource", "2.3.14"
require "active_resource"

class Issue < ActiveResource::Base
  self.site = "https://redmine.mydomain.com/"
end

Issue.user = "myname"
Issue.password = "mypassword" # Don't hard-code real passwords :-)

issue = Issue.find 19342 # Created manually to avoid messing up real tickets.
field = issue.custom_fields.select { |x| x.name == "Release Number" }.first
issue.notes = "Testing at #{Time.now}"
issue.custom_field_values = { field.id => "Release-1.2.3" }
success = issue.save

puts "field.id: #{field.id}"
puts "success: #{success}"
puts "errors: #{issue.errors.full_messages}"
When this runs, the output is:
field.id: 40
success: true
errors: []
So far so good, except that when I go back to the GUI and look at this ticket, the "notes" part is updated correctly but the custom field is unchanged. I did put some tracing in the ActiveResource code, and it appears to be sending out my desired updates, so I suspect the problem is on the server side.
BTW, if you know of any good collections of examples of accessing Redmine from Ruby via the REST API, that would be really helpful too. I may just be looking in the wrong places, but all I've found are a few trivial ones that are just enough to whet one's appetite for more, and the docs I've seen on the Redmine site don't even list all the available fields. (Ideally, it would be nice if the examples also specified which version of Redmine they work with.)
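One way to narrow down client vs. server (a sketch, not from the original post) is to send Redmine's documented REST payload directly, bypassing ActiveResource serialization entirely. The REST API accepts custom fields as a custom_fields array of {id, value} pairs; the URL, credentials, issue id, and field id 40 below are taken from the question:

#!/usr/bin/env ruby
require "net/http"
require "json"
require "uri"

uri = URI("https://redmine.mydomain.com/issues/19342.json")
req = Net::HTTP::Put.new(uri, "Content-Type" => "application/json")
req.basic_auth("myname", "mypassword")
req.body = {
  issue: {
    notes: "Testing custom field via raw REST call",
    custom_fields: [{ id: 40, value: "Release-1.2.3" }]
  }
}.to_json

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
puts res.code  # 2xx means Redmine accepted the request

If the raw call updates the field, the problem is in the client-side serialization; if the field still comes back unchanged, the server is silently ignoring it, which would point at configuration (for example, the field not being editable for your role) rather than the code.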