Unable to import sample data into Apache Atlas

I have installed Apache Atlas using Docker with the help of the below URL:
https://github.com/michalmiklas/atlas-docker
Now, while importing sample data into Apache Atlas using the command below,
bash-4.4# ./apache-atlas/bin/quick_start.py http://localhost:21000/
it throws the following error:
Exception in thread "main" org.apache.atlas.AtlasServiceException: Metadata service API org.apache.atlas.AtlasClientV2$API_V2@30f842ca failed with status 403 (Forbidden) Response Body ({"errorCode":"ATLAS-403-00-001","errorMessage":"bird is not authorized to perform create classification-def Dimension"})
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:395)
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:323)
at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:211)
at org.apache.atlas.AtlasClientV2.createAtlasTypeDefs(AtlasClientV2.java:227)
at org.apache.atlas.examples.QuickStartV2.createTypes(QuickStartV2.java:185)
at org.apache.atlas.examples.QuickStartV2.runQuickstart(QuickStartV2.java:141)
at org.apache.atlas.examples.QuickStartV2.main(QuickStartV2.java:126)
No sample data added to Apache Atlas Server.
Below is the full log for reference:
./bin/apache-atlas/bin/quick_start.py http://localhost:21000/
log4j:WARN No such property [maxFileSize] in org.apache.log4j.PatternLayout.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.PatternLayout.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.PatternLayout.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.PatternLayout.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.PatternLayout.
Enter username for atlas :- bird
Enter password for atlas :-
Creating sample types:
Exception in thread "main" org.apache.atlas.AtlasServiceException: Metadata service API org.apache.atlas.AtlasClientV2$API_V2@30f842ca failed with status 403 (Forbidden) Response Body ({"errorCode":"ATLAS-403-00-001","errorMessage":"bird is not authorized to perform create classification-def Dimension"})
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:395)
at org.apache.atlas.AtlasBaseClient.callAPIWithResource(AtlasBaseClient.java:323)
at org.apache.atlas.AtlasBaseClient.callAPI(AtlasBaseClient.java:211)
at org.apache.atlas.AtlasClientV2.createAtlasTypeDefs(AtlasClientV2.java:227)
at org.apache.atlas.examples.QuickStartV2.createTypes(QuickStartV2.java:185)
at org.apache.atlas.examples.QuickStartV2.runQuickstart(QuickStartV2.java:141)
at org.apache.atlas.examples.QuickStartV2.main(QuickStartV2.java:126)
No sample data added to Apache Atlas Server.
FYI: bird is a user in the admin user group, and I have also tried with the DATA_STEWARD and DATA_SCIENTIST user groups, but the result is the same.

You have to use an existing username and password to import data into Apache Atlas.
Default username: admin (case sensitive)
Default password: admin
Once you have installed Apache Atlas, first check the ZooKeeper server status, and do not change any user configurations.
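For example, rerunning the quick start and entering the default credentials at the prompts (assuming the docker image ships with the stock authorization policy, where admin is mapped to the admin role; the password typed at the hidden prompt is admin):
bash-4.4# ./apache-atlas/bin/quick_start.py http://localhost:21000/
Enter username for atlas :- admin
Enter password for atlas :-
With the admin/admin login, the sample type and entity creation should complete without the 403.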
Thanks for your help

Related

How to register app from private repo in Spring Cloud dataflow 2.6.1

I'm using SCDF 2.6.1 on OpenShift 3, and I'm facing an error while registering an app. The error log is below:
java.lang.NullPointerException: null
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getRegistryRequest(DefaultContainerImageMetadataResolver.java:162)
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getImageLabels(DefaultContainerImageMetadataResolver.java:110)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.resolvePortNamesFromContainerImage(BootApplicationConfigurationMetadataResolver.java:215)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.listPortNames(BootApplicationConfigurationMetadataResolver.java:163)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.getInfo(AppRegistryController.java:193)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.info(AppRegistryController.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I checked the code at DefaultContainerImageMetadataResolver.java:162:
// Convert the image name into a well-formed ContainerImage
ContainerImage containerImage = this.containerImageParser.parse(imageName);
// Find a registry configuration that matches the image's registry host
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
// Retrieve a registry authorizer that supports the configured authorization type.
RegistryAuthorizer registryAuthorizer = this.registryAuthorizerMap.get(registryConf.getAuthorizationType());
I'm pretty sure the error occurs because registryConf is null as a result of
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
How do I put my private repo URI into registryConfigurationMap?
I have tried putting an imagePullSecret registered with the private repo into the deployment.yml, but it doesn't seem to work, because in the startup log I still see:
2020-09-03 04:55:24.111 INFO 1 --- [ main] urationMetadataResolverAutoConfiguration :
Final Registry Configurations: {registry-1.docker.io=RegistryConfiguration{registryHost='registry-1.docker.io', user='null', secret='****'', authorizationType=dockeroauth2, manifestMediaType='application/vnd.docker.distribution.manifest.v2+json', disableSslVerification='false',
extra={registryAuthUri=https://auth.docker.io/token?service=registry.docker.io&scope=repository:{repository}:pull&offline_token=1&client_id=shell }}}
The only place where the SCDF server downloads a container image layer is when it looks for app metadata.
Currently, this is configured to use the Docker registry host (as this is where all the out-of-the-box applications are hosted).
If you want to override it, you can modify these property values at server startup.
Note that this configuration is only needed to download the app metadata layer of the image, not to pull the entire container image on the SCDF server side.
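For illustration, a sketch of such an override using SCDF's container registry configuration properties at server startup (the registry name myregistry, its host, and the credentials are placeholders; check the exact property names against the SCDF 2.6.x reference):
spring.cloud.dataflow.container.registry-configurations[myregistry].registry-host=myregistry.example.com:5000
spring.cloud.dataflow.container.registry-configurations[myregistry].authorization-type=basicauth
spring.cloud.dataflow.container.registry-configurations[myregistry].user=myuser
spring.cloud.dataflow.container.registry-configurations[myregistry].secret=mysecret
With a matching entry present, an image such as myregistry.example.com:5000/myapp:1.0.0 should resolve a configuration in registryConfigurationMap instead of returning null.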

WSO2 API endpoint creation failed: 404 resource not found

I am trying to create a REST API with WSO2 API Manager to gather data from a Postgres database (for learning purposes). I am struggling to do so, and I would like to know whether:
I did not understand the WSO2 components' roles properly (the technology and subject are new to me),
or there is an error in the way I configured the manager.
System setup
I used this official Docker image, added the Postgres JDBC jar to /repository/components/lib/, and added the following to /repository/conf/datasources/master-datasources.xml:
<datasource>
    <name>s0m3dAtabas3</name>
    <description>The db used for testing purposes</description>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://sandor_postgres:5432/s0m3dAtabas3</url>
            <driverClassName>org.postgresql.Driver</driverClassName>
            <username>s0m3us3rfr0mdAtAMaj0r</username>
            <password>N0t5uchAs1mple1</password>
            <maxActive>80</maxActive>
            <minIdle>5</minIdle>
            <maxWait>60000</maxWait>
            <defaultAutoCommit>false</defaultAutoCommit>
            <testOnBorrow>true</testOnBorrow>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
I made sure that the Postgres container named sandor_postgres is accessible from the WSO2 container with these credentials. In this database, I have a table called something. The image comes with the following UIs:
admin
publisher
store
Graphical API creation
I first followed the WorldBank tutorial, which seemed crystal clear (though I am not quite sure where the data came from), and then tried to adapt it.
Step 1: Design
I used the database name as the context (s0m3dAtabas3), version 1.0.0. Since the table is called something, the URL pattern I end up with is /s0m3dAtabas3/1.0.0/something.
Step 2: Implement
This is where things get confusing. No matter what resource path I use in the endpoint (endpoint type REST), I get a 404, and the logs are not very helpful:
http://192.168.8.111:8280 -> 404
http://192.168.8.111:8280/something -> 404
http://192.168.8.111:9443/tried_several -> Invalid - Error connecting to backend
http://192.168.8.111:8243/tried_several -> Invalid - Error connecting to backend
INFO - InboundDBSyncRequestEvent Running DB sync task.
INFO - LogMediator STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /s0m3dAtabas3/1.0.0
INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2019-10-29 11:42:31,030+0000]
INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2019-10-29 11:42:31,197+0000]
INFO - LogMediator STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /s0m3dAtabas3/bullshit
INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2019-10-29 11:48:30,649+0000]
INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2019-10-29 11:48:30,790+0000]
INFO - LogMediator STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /
INFO - InboundDBSyncRequestEvent Running DB sync task.
INFO - LogMediator STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /
Did I miss some important configuration step, or is WSO2 API Manager not the standalone component I thought it was, requiring another component to achieve what I am looking for?
It seems there is a conceptual misunderstanding.
Here is the basic idea of a typical API Management solution.
You have a web service (REST, SOAP, etc.) that you need to expose as a managed API. You can then front your service with API Manager and expose it as a managed API with security, rate limiting, a managed life cycle, etc.
In your case, it seems you don't have such a service, but only a database table. So, before using API Manager to front your service, you first need to expose your table as a service. For that purpose, I'd suggest using the data services component of WSO2 EI 7.0.0. See [1] for how to do that, and the sketch below. Once your service is ready, you can use API Manager to expose it as a managed API.
[1] https://ei.docs.wso2.com/en/latest/micro-integrator/use-cases/tutorials/sending-a-simple-message-to-a-datasource/
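As an illustration of that approach, a minimal data service definition (a .dbs file deployed to WSO2 EI; the service name, query, and the id/name columns are assumptions, since the question only mentions a table called something):
<data name="SomethingService" transports="http https">
    <config id="default">
        <property name="driverClassName">org.postgresql.Driver</property>
        <property name="url">jdbc:postgresql://sandor_postgres:5432/s0m3dAtabas3</property>
        <property name="username">s0m3us3rfr0mdAtAMaj0r</property>
        <property name="password">N0t5uchAs1mple1</property>
    </config>
    <query id="selectAllSomething" useConfig="default">
        <sql>SELECT id, name FROM something</sql>
        <result element="somethings" rowName="something">
            <element column="id" name="id" xsdType="xs:integer"/>
            <element column="name" name="name" xsdType="xs:string"/>
        </result>
    </query>
    <resource method="GET" path="something">
        <call-query href="selectAllSomething"/>
    </resource>
</data>
This exposes a plain REST resource under the deployed service (e.g. GET .../services/SomethingService/something), which can then be used as the backend endpoint when designing the API in the publisher.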

ning.http.client Kerberos Example

How can I support Kerberos-based authentication with the ning HTTP client?
I am extending existing code which has support for NTLMAuth and I want to be able to include support for Kerberos, which is used on some of the websites that I need to test.
I want to be able to supply the user and password programmatically; I do not want to use a keytab or set up krb5 configuration on the system where this is running.
I have the following code block:
import com.ning.http.client.Realm;
import com.ning.http.client.Realm.AuthScheme;
import com.ning.http.client.Realm.RealmBuilder;
import com.ning.http.client.RequestBuilder;
....
Realm myRealm = new RealmBuilder()
        .setScheme(AuthScheme.KERBEROS)
        .setUsePreemptiveAuth(true)
        .setNtlmDomain(getDomain())
        .setNtlmHost(getHost())
        .setPrincipal(getUsername())
        .setPassword(getUserPassword())
        .build();
RequestBuilder rb = new RequestBuilder()
        .setMethod(site.getMethod())
        .setUrl(site.getUrl())
        .setFollowRedirects(site.isFollowRedirects())
        .setRealm(myRealm);
Currently I get the error response:
FAILED: Invalid name provided (Mechanism level: KrbException: Cannot locate default realm)
Does anyone have a good example of how to do this correctly?
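One detail worth noting: "Cannot locate default realm" means the JVM's Kerberos machinery cannot find a realm/KDC, and these can be supplied as standard JVM system properties instead of a krb5.conf file. A minimal sketch, where EXAMPLE.COM and kdc.example.com are placeholders for your environment:
public class KerberosJvmConfig {
    public static void main(String[] args) {
        // EXAMPLE.COM and kdc.example.com are placeholders; use your
        // environment's Kerberos realm and KDC host. These standard JVM
        // properties let Kerberos resolve a default realm without a
        // krb5.conf file on the machine running the client.
        System.setProperty("java.security.krb5.realm", "EXAMPLE.COM");
        System.setProperty("java.security.krb5.kdc", "kdc.example.com");
    }
}
These properties must be set before the first Kerberos-authenticated request is issued.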

ClientResponseException in OpenStack4J on IBM Bluemix Object Storage service

I am following this guide to connect to IBM Object Storage for Bluemix with Java:
https://developer.ibm.com/recipes/tutorials/connecting-to-ibm-object-storage-for-bluemix-with-java/
I have double-checked the values against the credentials in the service, but when I execute the authenticate() method I get the following exception:
Caused by: ClientResponseException{message=Not Found, status=404, status-code=NOT_FOUND}
at org.openstack4j.core.transport.HttpExceptionHandler.mapException(HttpExceptionHandler.java:38)
at org.openstack4j.core.transport.HttpExceptionHandler.mapException(HttpExceptionHandler.java:23)
at org.openstack4j.openstack.internal.OSAuthenticator.authenticateV3(OSAuthenticator.java:158)
at org.openstack4j.openstack.internal.OSAuthenticator.invoke(OSAuthenticator.java:70)
at org.openstack4j.openstack.client.OSClientBuilder$ClientV3.authenticate(OSClientBuilder.java:165)
at org.openstack4j.openstack.client.OSClientBuilder$ClientV3.authenticate(OSClientBuilder.java:128)
at com.servengine.objectstorage.ObjectStorageClient.postConstruct(ObjectStorageClient.java:32)
... 85 more
Is there any way I can tell what is wrong (URL, userId, password, project, domain, ...)?
Thanks
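For reference, a minimal OpenStack4J Identity V3 authentication sketch along the lines of that guide (the endpoint, credentials, and project/domain names are placeholders to be replaced with the values from the service credentials):
import org.openstack4j.api.OSClient.OSClientV3;
import org.openstack4j.model.common.Identifier;
import org.openstack4j.openstack.OSFactory;

public class ObjectStorageAuthSketch {
    public static void main(String[] args) {
        OSClientV3 os = OSFactory.builderV3()
                // Must be a V3 identity endpoint; a 404 thrown from
                // authenticateV3 usually means this URL is wrong or is
                // missing the /v3 path.
                .endpoint("https://identity.open.softlayer.com/v3")
                .credentials("userId", "password")
                .scopeToProject(Identifier.byName("projectName"),
                                Identifier.byName("domainName"))
                .authenticate();
        System.out.println("Token: " + os.getToken().getId());
    }
}
Since the exception originates in OSAuthenticator.authenticateV3, checking that the endpoint actually resolves to a .../v3 identity URL is a good first step.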

ATG: Error while baseline indexing - Unable to process any CSF calls as the Credential Store server is not enabled

I am getting the following error while running a baseline index of my Endeca application in ATG:
15:26:47,891 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-201) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,913 INFO [nucleusNamespace.atg.commerce.search.StoreLocationOutputConfig] (Thread-201) Starting bulk load
15:26:47,915 INFO [nucleusNamespace.atg.commerce.endeca.index.CategoryToDimensionOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/endeca/index/CategoryToDimensionOutputConfig, probably because no bulk load was running.
15:26:47,916 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-203) Opening configuration repository connection for application logistore
15:26:47,917 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-203) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,916 INFO [nucleusNamespace.atg.commerce.search.ProductCatalogOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/search/ProductCatalogOutputConfig, probably because no bulk load was running.
15:26:47,917 INFO [nucleusNamespace.atg.commerce.search.StoreLocationOutputConfig] (index-/atg/commerce/endeca/index/ProductCatalogSimpleIndexingAdmin) Failed to cancel incremental load of /atg/commerce/search/StoreLocationOutputConfig, probably because no bulk load was running.
15:26:47,919 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-199) Opening configuration repository connection for application logistore
15:26:47,919 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-199) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,919 INFO [nucleusNamespace.atg.commerce.endeca.index.ProductCatalogSimpleIndexingAdmin] (Thread-203) Indexing process cancelled, Endeca says: Could not retrieve workbench credential properties from credential store.
15:26:47,919 INFO [nucleusNamespace.atg.endeca.index.ConfigImportDocumentSubmitter] (Thread-207) Opening configuration repository connection for application logistore
15:26:47,920 ERROR [nucleusNamespace.atg.dynamo.security.opss.csf.CredentialStoreManager] (Thread-207) Unable to process any CSF calls as the Credential Store server is not enabled. Please check log for more details
15:26:47,921 INFO [nucleusNamespace.atg.commerce.endeca.index.ProductCatalogSimpleIndexingAdmin] (Thread-207) Indexing process cancelled, Endeca says: Could not retrieve workbench credential properties from credential store.
After doing extensive research, I found that C:\ATG\ATG11.2\home\servers\atg_production_lockserver\localconfig\atg\dynamo\server\OPSSInitializer.properties has the path for jps-config.xml, i.e.
JPSConfigurationLocation=C:/ATG/ATG11.2/home/../home/security/jps-config.xml
This jps-config.xml has some CSF-related configuration.
How can I get rid of this error so the baseline indexing succeeds? I am stuck on this part.
This happens if you change the default workbench password. The simple solution is to change the Endeca Experience Manager password back to admin and try again.
Otherwise, the password needs to be changed in multiple places.
Thanks,
Ajay Agrawal
Go to the OPSSInitializer component in dyn/admin and check whether the path to jps-config.xml specified there is correct. If not, correct the path.
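For example, the localconfig override might end up looking like this (the path is the one quoted in the question, normalized; verify it points at an existing jps-config.xml on your server):
# localconfig/atg/dynamo/server/OPSSInitializer.properties
JPSConfigurationLocation=C:/ATG/ATG11.2/home/security/jps-config.xml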