Why does the "vmc services" command only return one table?

The vmc documentation from Cloud Foundry says the "vmc services" command returns two tables: the first contains the available service types, and the second contains the provisioned service instances. But I found that, with the latest version of vmc, "vmc services" only returns one table, which contains the provisioned services. This is very inconvenient, as I cannot see what kinds of services the system supports.
Note: I found that a very old version of vmc does list both tables.
Has anyone else met this issue?

For the latest version of vmc, please use
vmc info --services
to list all the available services. Use
vmc help COMMAND
to get the usage of each command.
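For example, to list the available service types and then check the usage of the services command itself:
vmc info --services
vmc help services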
I believe the CF team is working on refining the documentation for all of this; it should improve soon.


How to add the NO ENCRYPT DB2 option to db2RestoreStruct

I am trying to restore an encrypted DB to a non-encrypted DB. I made changes by setting piDbEncOpts to SQL_ENCRYPT_DB_NO, but the restore still fails. Is there DB2 sample code where I can check how to set the "NO Encrypt" option in DB2? I am adding the code snippet below.
db2RestoreStruct->piDbEncOpts->encryptDb = SQL_ENCRYPT_DB_NO
The C API named db2Restore will restore an encrypted image to an unencrypted database when used correctly.
You can use a modified version of IBM's sample files (dbrestore.sqc and related files) to see how to do it.
Depending on your C compiler version and settings, you might get a lot of warnings from IBM's code, because IBM does not appear to maintain its sample code as the years pass. However, you do not need to run IBM's sample code; you can study it to understand how to fix your own C code.
If installed, the samples component must match your Db2 server version and fixpack, and you must use the C include files that come with that version and fixpack to get the relevant definitions.
The modifications to IBM's sample code include the following (a sketch putting them together follows below):
When using the db2Restore API, ensure its first argument has a value that is compatible with your Db2 server version and fixpack so that the required functionality is available. If you specify the wrong version number for the first argument (for example, a version of Db2 that did not support this functionality), the API will fail. For example, on my Db2-LUW v11.1.4.6, I used the predefined db2Version1113, like this:
db2Restore(db2Version1113, &restoreStruct, &sqlca);
When setting the restore iOptions field, enable the flag DB2RESTORE_NOENCRYPT. For example, in IBM's sample, include the additional flag:
restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB | DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD | DB2RESTORE_NOENCRYPT;
Ensure the restoredDbAlias differs from the encrypted-backup alias name.
I tested with Db2 v11.1.4.6 (db2Version1113 in the API) with gcc 9.3.
I also tested with Db2 v11.5 (db2Version11500 in the API) with gcc 9.3.
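Putting the above together, here is a minimal, untested sketch modelled on IBM's dbrestore.sqc sample. Only the iOptions flags, db2Version1113, and the db2Restore call are quoted from this thread; the other field names, constants, and headers (piSourceDBAlias, piTargetDBAlias, iCallerAction, DB2RESTORE_RESTORE, sqlenv.h, db2ApiDf.h) are taken from the sample and may need adjusting for your Db2 version and fixpack:
#include <string.h>
#include <sqlenv.h>
#include <db2ApiDf.h>

/* Restore an encrypted backup image into an unencrypted target database. */
int restore_no_encrypt(char *sourceAlias, char *targetAlias)
{
    struct sqlca sqlca;
    db2RestoreStruct restoreStruct;

    memset(&restoreStruct, 0, sizeof(restoreStruct));
    memset(&sqlca, 0, sizeof(sqlca));

    restoreStruct.piSourceDBAlias = sourceAlias;  /* alias of the encrypted backup image        */
    restoreStruct.piTargetDBAlias = targetAlias;  /* must differ from the encrypted-backup alias */
    restoreStruct.iCallerAction   = DB2RESTORE_RESTORE;

    /* The extra DB2RESTORE_NOENCRYPT flag is what requests an unencrypted target. */
    restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB |
                             DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD |
                             DB2RESTORE_NOENCRYPT;

    /* The first argument must be compatible with the server's version and fixpack. */
    db2Restore(db2Version1113, &restoreStruct, &sqlca);

    return sqlca.sqlcode;
}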

ADMG0007E: The configuration data type ConfigSynchronizationService is not valid

I'm trying to automate the WebSphere deployment process for zero downtime using the steps you can find in this link.
According to the documentation's first step, we should disable "Automatic Synchronization" for each node. To automate this, I'm using the script given in the documentation, but when I try to apply the command below:
set syncServ [$AdminConfig list ConfigSynchronizationService $na_id]
I'm facing this error: ADMG0007E: The configuration data type ConfigSynchronizationService is not valid
I checked the IBM documentation but couldn't find any resource that refers to this problem.
Does anyone have any workaround or direct solution for it?
Thanks in advance for your suggestions.
PS: I should mention that the document was written for z/OS, but I'm trying to apply the same methodology to WebSphere Application Server on Linux.

Get-AzureRmResourceGroupDeployment lists machines I cannot see in the web interface

I'm tasked with automating the creation of Azure VMs, and naturally I go through a number of more or less broken iterations of trying to deploy a VM image. As part of this, I automatically allocate serial hostnames, but for a strange reason it's not working:
The code in the link above works very well, but the contents of my ResourceGroup are not as expected. Every time I deploy (successfully or not), a new entry is created in whatever list is returned by Get-AzureRmResourceGroupDeployment; however, in the Azure web interface I can only see a few of these entries. If, for instance, I omit a parameter for the JSON file, Azure cannot even begin to deploy anything, but the hostname is somehow reserved anyway.
Where is this list? How can I clean up after broken deployments?
Currently, Get-AzureRmResourceGroupDeployment returns:
azure-w10-tfs13
azure-w10-tfs12
azure-w10-tfs11
azure-w10-tfs10
azure-w10-tfs09
azure-w10-tfs08
azure-w10-tfs07
azure-w10-tfs06
azure-w10-tfs05
azure-w10-tfs02
azure-w7-tfs01
azure-w10-tfs19
azure-w10-tfs1
although the web interface only lists:
azure-w10-tfs12
azure-w10-tfs13
azure-w10-tfs09
azure-w10-tfs05
azure-w10-tfs02
Solved using the following code:
$siblings = (Get-AzureRmResource).Name | Where-Object { $_ -match "^$hostname\d+$" }
(PS. If you have tips for better tags, please feel free to edit this question!)
If you create a VM in Azure Resource Management mode, it will have a deployment attached to it. In fact, if you create any resource at all, it will have a resource deployment attached.
If you delete the resource, you will still have the deployment record there, because you did deploy it at some stage. Consider deployments part of the audit trail of what has happened within the account.
You can delete deployment records with Remove-AzureRmResourceGroupDeployment, but there is very little point, since deployments have no bearing on the operation of Azure. There is no cost associated; they are just historical records.
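If you do want to tidy up after a run of broken deployments, a minimal sketch would be the following (the resource group name is a placeholder; the deployment name is one of those listed in the question):
Remove-AzureRmResourceGroupDeployment -ResourceGroupName "MyResourceGroup" -Name "azure-w10-tfs19"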
Querying deployments with Get-AzureRmResourceGroupDeployment will yield the following fields:
DeploymentName
Mode
Outputs
OutputsString
Parameters
ParametersString
ProvisioningState
ResourceGroupName
TemplateLink
TemplateLinkString
Timestamp
So you can tell whether the deployment was successful via ProvisioningState, see which templates you used via TemplateLink and TemplateLinkString, check the outputs of the deployment, and so on. This can be useful for figuring out which template worked and which didn't.
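For example, a quick sketch (the resource group name is a placeholder) that lists only the failed deployments and when they happened:
Get-AzureRmResourceGroupDeployment -ResourceGroupName "MyResourceGroup" |
    Where-Object { $_.ProvisioningState -eq "Failed" } |
    Select-Object DeploymentName, ProvisioningState, Timestamp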
If you want to see the actual resources, which you are potentially being charged for, you can use Get-AzureRmResource.
If you just want to retrieve a list of the names of VMs that exist within an Azure subscription, you can use
(Get-AzureRmVM).Name
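Combining that with the naming pattern from the question (the $hostname prefix below is the question's own example), you can list just the VMs that actually exist with that prefix:
$hostname = "azure-w10-tfs"
(Get-AzureRmVM).Name | Where-Object { $_ -match "^$hostname\d+$" }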

Retrieving Project Metrics using Rest API and Timemachine from SonarQube

I'm trying to retrieve project metrics using the REST API. First I query the projects using "/api/projects/index", and afterwards I retrieve the metrics using "/api/metrics/search". Both work fine, and I end up with:
[id:35476, k:com.test:TestProject, nm:TestProject, qu:TRK, sc:PRJ]
[custom:false, description:Cyclomatic complexity, direction:-1, domain:Complexity, hidden:false, id:10019, key:complexity, name:Complexity, qualitative:false, type:INT]
Now I want to retrieve a project's metrics, so I use the following URL:
https://MYHOST/sonarqube/api/timemachine/index?resource=35476&metric=10019&fromDateTime=2010-12-25T23:59:59+0100&toDateTime=2018-12-25T23:59:59+0100
But the server returns only: [{"cols":[],"cells":[]}]
This surprises me, because when I open the Sonar web interface for the project, I can see numbers. I tried some other metrics, but all ended with the same result. What am I doing wrong?
You didn't mention server version, so I'll assume the latest: 5.2.
I got the same result for a bare query (http://nemo.sonarqube.org/api/timemachine/index), and for a query which specified resource but not metrics (http://nemo.sonarqube.org/api/timemachine/index?resource=org.sonarsource.sonarqube%3Asonarqube).
So I'm guessing there's a problem with either your resource or metric id. Try using the keys (com.test:TestProject and complexity) instead.
And yes, the ids you got back from the other web services should work here, but what's meant by "id" can be a little... ah... variable from service to service.
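For example, the query from the question rewritten with keys instead of ids (everything else kept as in the question; the colon in the project key may need to be URL-encoded as %3A):
https://MYHOST/sonarqube/api/timemachine/index?resource=com.test:TestProject&metric=complexity&fromDateTime=2010-12-25T23:59:59+0100&toDateTime=2018-12-25T23:59:59+0100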

Trouble adding a new service

I have followed the instructions at https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service and copied the echo service to create my new service. (That document is somewhat out of date in that "excluded components" no longer exists.)
In any case, my service shows up as running with a gateway and a node when I look at 'vcap status' on the server. However, when I look at 'vmc services' from the client, my service is not in the list. Where is this list maintained, and why is my service not on it?
Various services, including blob, filesystem, mongodb, etc., are shown in the 'vmc services' list even though they have never been included in my config. Where is this maintained, and why are other services on this list?
The cloud_controller.log file shows a "Create service request:" for echo every minute. This service is not in my config file (it was once but it was removed and I repeated the deployment). What is prompting this request for a service that was not defined in the config?
The _gateway.log for my service shows the following:
INFO -- Sending info to cloud controller: ...api.vcap.me/services/v1/offerings
INFO -- Fetching handles from cloud controller .../offerings/.../handles
ERROR -- Failed registering with cloud controller, status=400
DEBUG -- [GaaS-Provisioner] Connected to node mbus..
ERROR -- Failed fetching handles, status=404
Why does my gateway fail to register with the cloud controller? I have found some reports that suggest that the problem is with domain name mapping. I have verified that the server can find itself:
$ curl api.vcap.me
Welcome to VMware's Cloud Application Platform
What can I do to register my service?
You can also try asking your question on the vcap_dev google group.
https://groups.google.com/a/cloudfoundry.org/forum/?fromgroups#!forum/vcap-dev
They are focused on answering and discussing OSS topics for Cloud Foundry!
If you follow the document correctly, things should work just fine. I understand that the mechanism for maintaining the excluded list of components has changed, which can be a point of confusion when following the steps mentioned in the article (just ignore that step entirely).
ERROR -- Failed registering with cloud controller, status=400
Well this is a point of worry. I recently followed the article step by step and was able to add a new service.
Is the echo service showing up in vmc services?
Have you copied the yml files for node and gateway at ./cloudfoundry/.deployments/devbox/config?
Are the tokens for your gateway unique, and do they match in the two files ./cloudfoundry/.deployments/devbox/config/cloud_controller.yml and ./cloudfoundry/.deployments/devbox/config/*_gateway.yml?
I would recommend that you first concentrate on getting the echo service to be listed in the vmc services output. Once that is done, you should replicate the steps (taking absolute care to modify things like the token) to get your custom service working.
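For illustration only, here is a hypothetical sketch of what matching token entries might look like; the exact key names and file layout depend on your vcap checkout, so treat this as an assumption and compare it against the generated echo entries:
# cloud_controller.yml (hypothetical excerpt)
builtin_services:
  myservice:
    token: "changeme-unique-token"   # must be unique per service
# myservice_gateway.yml (hypothetical excerpt)
token: "changeme-unique-token"       # must match the value in cloud_controller.yml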
Cheers,
Ankit
You should follow this guide.
It worked for me.
Regards.