We are working with the ATG/Endeca integration, version 11.2. One of the client requirements is near-real-time access to some of the attributes from the Product Catalog. However, this isn't possible as things stand, since a baseline update has to run before the Endeca index is refreshed.
Is the only solution to query ATG directly, using droplets, for the attributes that are critical, or is there another way to accomplish this? We are pushing the product catalog to CAS using ProductCatalogSimpleIndexingAdmin (no Forge is being used).
I've been trying to implement a way to download all the changes made by a particular user in Salesforce using a PowerShell script, and to create a package from them. The changes could be anything, whether added or modified: Apex classes, profiles, Accounts, etc., selected by the modifying user, component ID, timestamp, and so on. Below is the URL that documents the relevant API, but it does not explain any way to do this from a script.
https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_listmetadata.htm
Does anyone know how I can implement this?
Regards,
Kramer
Salesforce orgs other than scratch orgs do not currently provide source tracking, the feature that makes it possible to pinpoint user changes in metadata and extract only those changes. That extraction is done by an SFDX/Metadata API client, like Salesforce DX or CumulusCI (disclaimer: I'm on the CumulusCI team).
I would not try to implement a Metadata API client in PowerShell; instead, harness one of the existing tools. Without source tracking, to identify user changes you can either:
Attempt to extract all metadata and diff it against your version control, which is considerably harder than it sounds and is implemented by a variety of commercial DevOps tools for Salesforce (Gearset, Copado, etc.); or
Have the user manually add components to a Change Set or Unmanaged Package, and use a Metadata API client as above to retrieve the contents of that package. (Little-known fact: a Change Set can be retrieved as a package!)
To emphasize: DevOps on Salesforce does not work like it does on other platforms. Working with the Metadata API requires a fair amount of time investment and specialization. Harness the existing work of the Salesforce community where you can, but be aware that the task you are laying out may be rather more involved than you think, and it's not necessarily something you can just throw together from off-the-shelf components.
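That said, if you want to script a first approximation, the Metadata API's listMetadata call reports lastModifiedByName and lastModifiedDate for every component, and the Salesforce CLI exposes it directly. Here is a minimal sketch (in Python, but it translates line-for-line to PowerShell); the metadata types and username are illustrative assumptions, and it assumes an authenticated sfdx CLI on your PATH:

```python
# Sketch: list metadata components last modified by one user, via the Salesforce CLI.
# Assumes the `sfdx` CLI is installed and already authenticated to the target org.
import json
import subprocess

USER = "Kramer"                   # illustrative: lastModifiedByName to filter on
TYPES = ["ApexClass", "Profile"]  # illustrative: metadata types to scan

for mdtype in TYPES:
    proc = subprocess.run(
        ["sfdx", "force:mdapi:listmetadata", "-m", mdtype, "--json"],
        capture_output=True, text=True, check=True,
    )
    for item in json.loads(proc.stdout).get("result", []):
        if item.get("lastModifiedByName") == USER:
            print(mdtype, item["fullName"], item["lastModifiedDate"])
```

Note this only tells you which components the user touched last, not what changed inside them; to actually extract the components, feed the resulting names into a retrieve (for example, of a Change Set, as described above).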
I need to connect Salesforce to an external database we have, and constantly keep both the database and Salesforce updated, in as close to real time as we can get. I have tried Google searching for possible solutions, but nearly all of them are outdated by over a year. Any ideas?
Thank you!
It is quite difficult to give you a proper answer without knowing your exact scenario.
However, off the top of my head, I would suggest two Salesforce products.
Salesforce Connect
https://www.salesforce.com/products/platform/products/salesforce-connect/
Salesforce Connect allows you to connect to various data sources (for example MySQL, Microsoft SQL Server, Oracle) and turn the tables/objects of those data sources into SObjects. There are limitations, so it would be better to talk to a Certified Architect about such an implementation.
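For example, once a table has been mapped in, it becomes an external object (API names end in __x) and can be queried like any other SObject. A minimal sketch; the object and field names, instance URL, and token are illustrative assumptions:

```python
# Sketch: query a Salesforce Connect external object through the ordinary
# REST query endpoint. All names and credentials here are illustrative.
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"  # illustrative
TOKEN = "<access token from OAuth>"                  # illustrative

resp = requests.get(
    f"{INSTANCE}/services/data/v52.0/query",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"q": "SELECT ExternalId, OrderTotal__c FROM Order__x LIMIT 10"},
)
resp.raise_for_status()
for rec in resp.json()["records"]:
    print(rec)
```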
Heroku Connect
https://www.heroku.com/connect
Heroku Connect allows you to sync a Heroku data source (Heroku Postgres) with a Salesforce object. The sync is not immediate, but there are quite a few customisations inside the product to make the sync as "live" as possible. There are limitations, so it would be better to talk to a Certified Architect about such an implementation.
Salesforce Connect has limitations. It's good for presenting data via the interface, but if you need to act on the data or report on it, it might not be the best bet.
For close-to-real-time, hand-coded sync, look at the Streaming API or at Salesforce Platform Events.
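For example, the external side can publish a Platform Event through the ordinary REST sObject endpoint whenever a row changes, and a subscriber on the Salesforce side (an Apex trigger on the event, or a CometD client) applies the change. A minimal sketch; the event and field names, instance URL, and token are illustrative assumptions:

```python
# Sketch: publish a Platform Event from the external system via the REST API.
# Assumes a platform event Row_Change__e with a Payload__c text field exists,
# and that an OAuth access token has already been obtained.
import json
import requests

INSTANCE = "https://yourInstance.my.salesforce.com"  # illustrative
TOKEN = "<access token from OAuth>"                  # illustrative

resp = requests.post(
    f"{INSTANCE}/services/data/v52.0/sobjects/Row_Change__e",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"Payload__c": json.dumps({"table": "customer", "id": 42, "op": "update"})},
)
resp.raise_for_status()
print(resp.json())  # e.g. {'id': '...', 'success': True, 'errors': []}
```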
If you want to use an ETL tool, my organization has had decent luck with DBAmp, a SQL Server add-on product that is fairly inexpensive compared to a lot of ETL tools ($1625 annually): http://www.forceamp.com/ We're able to replicate the entire SF database offline in SQL Server with DBAmp, push changes to the offline SQL copy, and upsert the changes. It's also a good backup solution via the offline full data copy. We got very good support from them as well when we encountered challenges.
Hope this helps.
Not sure if you are syncing one object or multiple objects, but you have a few options.
You can try the Salesforce-provided feature Salesforce Connect, which allows you to view and update data from your external source in Salesforce, but there are limitations with reporting and other considerations you should weigh.
If you make use of Heroku, Heroku Connect is your best bet.
You can also use middleware/ESB solutions like MuleSoft, which can orchestrate keeping data in sync across multiple data sources and do batch loads; but depending on how often changes occur, keep an eye on API limits for inbound calls to Salesforce.
You can roll your own solution using Outbound Messages in workflow to send changes from Salesforce to a homegrown service that writes to your database, and have that homegrown solution write back to Salesforce using the SOAP or REST API (a sketch of such a listener follows below). You could instead use triggers that initiate an Apex class that calls out, but that is more cumbersome: you have to build custom error handling and retry logic, which you get for free with Outbound Messages. This would take some time to build, and you would still need to be aware of API limits, depending on how many updates are made on the non-Salesforce side.
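As a rough sketch of the receiving side (the parsing and the database write are illustrative; Salesforce requires an HTTPS endpoint in production and keeps retrying until it receives the SOAP Ack):

```python
# Sketch: minimal listener for Salesforce Outbound Messages.
# Production use needs TLS, caller validation, and idempotent writes,
# since Salesforce retries delivery until it receives <Ack>true</Ack>.
from http.server import BaseHTTPRequestHandler, HTTPServer
import xml.etree.ElementTree as ET

OUT_NS = "http://soap.sforce.com/2005/09/outbound"
ACK = (b'<?xml version="1.0" encoding="UTF-8"?>'
       b'<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
       b'<soapenv:Body>'
       b'<notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">'
       b'<Ack>true</Ack></notificationsResponse>'
       b'</soapenv:Body></soapenv:Envelope>')

def write_to_database(fields):
    print("upsert:", fields)  # illustrative placeholder for your DB write

class OutboundMessageHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        root = ET.fromstring(body)
        # Each <Notification> wraps one sObject; collect its field elements.
        for notification in root.iter(f"{{{OUT_NS}}}Notification"):
            sobject = notification.find(f"{{{OUT_NS}}}sObject")
            if sobject is not None:
                write_to_database(
                    {child.tag.split("}")[1]: child.text for child in sobject}
                )
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(ACK)  # without this Ack, Salesforce re-sends the message

HTTPServer(("", 8080), OutboundMessageHandler).serve_forever()
```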
You can create a Canvas App which displays data from your DB in Salesforce as a tab, hooked up via SSO so users are automatically logged in. But again, there would be no reporting or other Salesforce features that you could take advantage of.
But I really think you should spend some time determining which system is your source of truth, because that determines how the data should be synced. You should also investigate whether you really need the sync to be real time or near real time, or whether you can manage with something like an hourly true-up on the system that is not the source of truth.
Hey guys, I wonder if anyone can help with this.
I am facing a problem at my company. We are developing a Magento 2 Community multistore installation for our customers.
The idea is to have several stores in the same Magento 2 installation, where each store belongs to an independent company. The problem is the integration with our ERP system. Through the REST API we have full control over the whole installation, even without the master admin credential. If we run a request like this in Postman: https://magentostore.com/rest/V1/orders?searchCriteria
we get all the orders in the installation, across all stores. So companies using their own credentials would have the same control, which is a serious security problem: the stores would have access to each other's data.
We have tried extensions for advanced permissions, such as Aitoc and Amasty, but they only work at the frontend level and have no effect on the REST API. We know that Magento was not made for this kind of thing, so my question is:
Is it possible to change the REST API to filter the queries by store? And where are these REST API routes defined?
Thanks so much.
You can override API calls using a webapi.xml file in your module: just point the route at your own service interface, and change the ACL if you want. In your service implementation, inject the original repository and apply your store filter before calling it.
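As a rough sketch of this first approach (the vendor, module, and interface names are hypothetical), the route override in your module's etc/webapi.xml could look like the following; the referenced interface would then be implemented by a PHP class that injects the core Magento\Sales\Api\OrderRepositoryInterface and forces a store_id filter into the search criteria before delegating to it:

```xml
<?xml version="1.0"?>
<!-- etc/webapi.xml of a hypothetical Vendor_StoreOrders module: remaps the stock
     GET /V1/orders route to a custom service that enforces a per-store filter. -->
<routes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:module:Magento_Webapi:etc/webapi.xsd">
    <route url="/V1/orders" method="GET">
        <service class="Vendor\StoreOrders\Api\StoreOrderRepositoryInterface" method="getList"/>
        <resources>
            <resource ref="Magento_Sales::sales"/>
        </resources>
    </route>
</routes>
```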
The second approach is to write a plugin on OrderRepositoryInterface and add the filter there. (The first solution is better, though, because that service is used not only by the API, so you may not want to restrict all calls.)
I'm developing an ecommerce site based on WebSphere Commerce 7 (WCS7). I need to import products from an external supplier, which exposes a web service. I've already implemented a Controller Command performing all the operations needed to extract the products from the remote service, and I have them available as custom Java classes.
I'm a little confused about the approach I should follow here. I've defined the attributes needed in my scenario and used the Data Load utility to import them into the DB. What should I do next? I expect to be able to "create" WCS products programmatically from my Controller Command, but I don't know how to use the attributes I've defined in a programmatic insert.
Can someone point me in the right direction for this kind of operation? I went through the documentation but, being quite new to the WCS environment, I don't know how to proceed according to current best practices.
It is possible to create a new catalog entry programmatically if you copy what is being done in the LOBTools, though I have not done this myself. I have always added new products via the data loads, and when we did need to add products from an external service, I just output the information to a file and loaded it along with our other products. The reason was to keep the catalog in sync with the product management system.
Have a look at the various DataBean classes in WCS, like CatalogEntryDataBean.
See here for WCS data beans: Link
And here for "activation" of a DataBean: Link
A question regarding the integration of the document management system Alfresco into Oracle Application Express (APEX), based on a CMIS repository:
The aim is to use APEX as the portal page and have Alfresco show its results (document lists) based on search parameters coming from APEX.
A search result from a CMIS-query should be displayed in an APEX page-region.
Unfortunately I have no experience in this sector (REST, CMIS) - so any advice would be welcome!
A related question arises regarding user authentication and authorization via CMIS.
Has anyone out there implemented something like this or used these components together, yet?
The first thing that pops into my mind is choosing where you want your communication with the repository to take place: client side or server side?
Alfresco supports Web Scripts, so it would be possible to create a JavaScript-heavy thick client which connects to your repository, gets information about your files, and redirects to their download links.
The alternative would be to design some way to connect to the repository from the database server. Again, there are many ways to do this. You can connect to the repository during your page load and use PL/SQL regions to fire scripts that connect to your repository, get the data you want, and render your region with that information.
Another way would be to periodically check the repository for changes and maintain a 'shadow copy' of the repository within your Oracle database tables.
Of course all of these solutions have their own drawbacks.
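Whichever side you choose, the CMIS interaction itself is small. For illustration, here is a minimal query sketch using Apache Chemistry's cmislib in Python; the equivalent CMIS AtomPub requests could be issued server-side from PL/SQL with APEX_WEB_SERVICE. The repository URL and credentials are illustrative assumptions, and you should check the AtomPub binding URL for your Alfresco version:

```python
# Sketch: run a CMIS query against Alfresco with Apache Chemistry's cmislib.
# URL and credentials are illustrative; authentication and authorization are
# handled by the repository itself via the credentials you connect with.
from cmislib import CmisClient

client = CmisClient(
    "http://alfresco.example.com:8080/alfresco/api/-default-/public/cmis/versions/1.0/atom",
    "admin", "admin",
)
repo = client.defaultRepository

# CMIS Query Language is SQL-like; CONTAINS() does full-text search.
for doc in repo.query("SELECT cmis:objectId, cmis:name FROM cmis:document "
                      "WHERE CONTAINS('invoice') ORDER BY cmis:name"):
    print(doc.properties["cmis:name"], doc.properties["cmis:objectId"])
```

The result set gives you the object IDs and names to render in an APEX page region, from which you can build download links against the repository.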