I am new to K2 and have to check how similar it is to MS Access. So I need to know whether we can connect external data, for example from SQL Server, to K2.
Yes, K2 uses SmartObjects to connect to external data sources (like SQL Server).
Absolutely! Connecting to many disparate data repositories is one of K2's great strengths.
To connect to SQL Server, you simply create an instance of the SQL Server service broker with the details of the server and database you want to read from. Then you can create a SmartObject for each table, view, or stored procedure within that database that you need to interact with.
The following thread on K2 Community should get you started: http://community.k2.com/t5/K2-blackpearl/How-to-connect-K2-blackpearl-with-MS-SQL-R2/td-p/53993
K2 is not really similar to Access: it is a larger platform that meets enterprise workflow automation needs, whereas Access lets you build department-level apps with limited flexibility - so the comparison doesn't hold up in terms of feature set, product positioning, or pricing.
K2 has three major pillars, tightly integrated with each other:
Workflow Engine (manages execution of the steps defined for the process you are automating)
SmartForms (let you build a web UI for your apps and processes)
SmartObjects - an abstraction layer offering a set of out-of-the-box (OOB) connectors that let you read and write data in a variety of external LOB systems: SQL Server, Oracle, SharePoint, and many more. Custom brokers can be created to connect to any other LOB system not covered by the OOB broker set.
So in terms of connecting to external data you won't have any problems, and the capabilities are far greater than those you may find in MS Access. Comparing the two is almost like comparing an SMB shared folder with SharePoint Server.
Also, the product is marketed (and built) to allow "code-less development": it has a really gentle learning curve and lets you start building your applications quickly.
I need to connect Salesforce to an external database we have, and constantly keep both the database and Salesforce updated in as close to real time as we can get. I have tried Google searching for possible solutions, but nearly all of them are outdated by over a year. Any ideas?
Thank You!
Without knowing your exact scenario, it is quite difficult to give you a proper answer.
However, off the top of my head, I would suggest two Salesforce products.
Salesforce Connect
https://www.salesforce.com/products/platform/products/salesforce-connect/
Salesforce Connect allows you to connect to various data sources and turn the tables/objects of that data source into SObjects - for example MySQL, Microsoft SQL Server, Oracle, etc. There are limitations, and thus it would be better to talk to a Certified Architect about such an implementation.
Heroku Connect
https://www.heroku.com/connect
Heroku Connect allows you to connect a Heroku data source with a Salesforce Object. The sync is not immediate but there are quite a few customisations inside the product to make the sync as "live" as possible. There are limitations and thus it would be better to talk to a Certified Architect about such an implementation.
Salesforce Connect has limitations. It's good for presenting data via the interface, but if you need to act on the data and report on it, it might not be the best bet.
For close to real time hand coded sync, look at the streaming API, or using Salesforce Platform Events.
If you want to use an ETL tool, my organization has had decent luck with DBAmp, which is a SQL Server add-on product and fairly inexpensive compared to a lot of ETL tools ($1,625 annually): http://www.forceamp.com/ We're able to replicate the entire SF database offline in SQL Server with DBAmp, push changes to the offline SQL copy, and upsert changes. It's also a good backup solution via an offline full data copy. We got very good support from them when we encountered challenges.
Hope this helps.
Not sure if you are syncing one object or multiple objects, but you have a few options.
You can try the Salesforce-provided feature Salesforce Connect, which allows you to view and update data from your external source in Salesforce, but there are limitations with reporting and other considerations to keep in mind.
If you make use of Heroku, Heroku Connect is your best bet.
You can also use a middleware/ESB solution like MuleSoft, which can orchestrate keeping data in sync across multiple data sources and do batch loads; but depending on how often the data changes, keep an eye on API limits for inbound calls to Salesforce.
You can roll your own solution using Outbound Messages in a workflow (or triggers that initiate an Apex class that calls out, but that is more cumbersome: you have to write custom error handling and retry logic, which you get for free with outbound messages) to send changes from Salesforce to your homegrown service, which writes to your database; your homegrown solution then writes back to Salesforce using the SOAP or REST API (see the sketch below). That would probably take you some time to build. You would also still need to be aware of API limits, depending on how many updates are made on the non-Salesforce side.
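For the write-back half, here is a minimal C# sketch using the REST API's upsert-by-external-ID call. The object name (Invoice__c), external ID field (ExternalId__c), and field values are all made up, and you would obtain the access token via your preferred OAuth flow:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class SalesforceWriter
    {
        // PATCH /services/data/vXX.0/sobjects/<Object>/<ExternalIdField>/<value>
        // creates the record if the external ID is new, updates it otherwise.
        static async Task UpsertAsync(string instanceUrl, string accessToken)
        {
            using (var http = new HttpClient())
            {
                http.DefaultRequestHeaders.Add("Authorization", "Bearer " + accessToken);

                var url = instanceUrl +
                    "/services/data/v45.0/sobjects/Invoice__c/ExternalId__c/INV-0042";
                var body = new StringContent(
                    "{\"Amount__c\": 199.00, \"Status__c\": \"Paid\"}",
                    Encoding.UTF8, "application/json");

                // Older frameworks have no HttpClient.PatchAsync, so send PATCH manually.
                var request = new HttpRequestMessage(new HttpMethod("PATCH"), url)
                {
                    Content = body
                };
                var response = await http.SendAsync(request);
                response.EnsureSuccessStatusCode(); // 201 = created, 204 = updated
            }
        }
    }

Either way, wrap the call in retry logic, since that is the part outbound messages would otherwise give you for free.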
You can create a Canvas App which displays data from your DB in Salesforce as a tab, and hook it up via SSO so users are automatically logged in. But again, there would be no reporting or any other Salesforce features you can take advantage of.
But I really think you should spend some time determining which system is your source of truth, because that will determine how the data should be synced. You should also investigate whether you really need the sync to be real-time or near real-time, or if you can manage with something like an hourly true-up on the system that is not the source of truth.
I am a h/w engineer interested in using Bluemix for an IoT application. Other than C, I do not know any programming language, but I am willing to learn whatever is necessary. My application is as follows:
My sensor nodes would upload data to an existing h/w server that has the capability to upload the data to an external SQL server. I want to analyze this data on the SQL server on a periodic basis and generate reports that I can publish to a mobile application or even a web-page to begin with.
Questions:
Is it possible to implement the "SQL server --> Data analysis --> Report generation + data visualization --> HTML(?) Publish" flow on Bluemix?
What modern/efficient languages can I learn in order to do this with the least effort?
Is there a standard implementation/example that I can use as reference for the flow described above?
This question actually has little to do with IoT (that just happens to be the source of the data) and focuses on how to process data for analysis, report generation, and publishing. You can do this mostly using services in Bluemix, such that there's little if any code to write, and so the programming language of the runtime may not matter.
First, to store the data, you could use SQL Database or dashDB. The former is "just" a database, whereas the latter includes R and R-Studio for data analysis. Second, for report generation, you can use Embeddable Reporting, which has Cognos (IBM Cognos Business Intelligence) built in for producing reports.
The way Cloud Foundry in Bluemix works, you'll need to create a runtime in some language, then bind the service instances to it so you can use them. But you may not have any code to write, in which case the language doesn't matter. If you do need to write some code, choose whichever language you think you can learn most easily. Programmers with a Java background may prefer Java (or Go), but those require compiling; you'll probably have an easier time with Node.js or PHP, which are popular interpreted languages.
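Whatever runtime you pick, the app reads its bound services' credentials from Cloud Foundry's VCAP_SERVICES environment variable. Below is a minimal C# sketch of that (the same idea applies in Node.js or PHP); the "sqldb" service label and the credential field names are assumptions, so check the JSON your app actually receives:

    using System;
    using Newtonsoft.Json.Linq;

    class ServiceCredentials
    {
        static void Main()
        {
            // Cloud Foundry injects bound-service credentials as a JSON document.
            var raw = Environment.GetEnvironmentVariable("VCAP_SERVICES");
            if (raw == null)
            {
                Console.WriteLine("Not running on Cloud Foundry");
                return;
            }

            var services = JObject.Parse(raw);
            // First bound instance of the (assumed) "sqldb" service:
            var creds = services["sqldb"][0]["credentials"];
            Console.WriteLine("Host: {0}, DB: {1}", creds["hostname"], creds["db"]);
        }
    }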
A couple of resources for further info:
"Embed rich reports in your applications" shows how to use Embeddable Reporting with dashDB.
"Leverage IBM Cognos on IBM Bluemix using the Embeddable Reporting service" shows how to use Embeddable Reporting with SQL Database.
"Embed Reports and visualize Data in your Bluemix Applications" gives an overview of both approaches.
BTW, Bluemix also has a neat service called Internet of Things, which helps connect your Bluemix app to lots of things all over the Internet. Sounds like you already have this handled for this example, but as you continue to use Bluemix for IoT applications, you might want to look into this service too. The Internet of Things Foundation Starter helps you get started using Node.js, Cloudant, and Node-RED.
I am developing a c# web application that will be hosted in Windows Azure and use Table Data Storage (TDS).
I want to architect my application such that I can also (as an option) deploy the application to a traditional IIS server with some other NoSQL back-end. Basically, I want to give my customers the option to either pay me under a software-as-a-service model, OR purchase a license for my application that they can install on a (non-Azure) production server of their own.
How can I best architect my data layer and middle tier to achieve both goals?
I will likely need a Windows Azure Worker Role and an Azure Queue. How complicated is it to replicate these? Can I substitute a custom Windows Service and some other queuing technology?
How can the entities in my data model be written such that I can deploy to Azure TDS or some other storage when not deploying to Azure? Would MongoDB or similar be useful for this?
Surely there is a way to architect for Azure without being married to it.
I will likely need a Windows Azure Worker Role and an Azure Queue. How complicated is it to replicate these? Can I substitute a custom Windows Service and some other queuing technology?
Yes - a Windows service with some other queuing technology would fit this reasonably well - and worker roles have a main/Run loop which is easy to use within a Windows Service.
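As a rough sketch of what that substitution might look like (IQueueClient and the class names here are hypothetical, not part of any SDK), the service hosts the same poll-process-sleep loop a worker role's Run() method would contain:

    using System;
    using System.Threading;

    public interface IQueueClient
    {
        string TryDequeue();          // returns null when the queue is empty
        void Enqueue(string message);
    }

    public class Worker
    {
        private readonly IQueueClient queue;
        public Worker(IQueueClient queue) { this.queue = queue; }

        // Same shape as a worker role's Run(): poll, process, back off when idle.
        public void Run(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                var message = queue.TryDequeue();
                if (message == null) { Thread.Sleep(1000); continue; }
                Process(message);
            }
        }

        private void Process(string message) { /* application logic here */ }
    }

One implementation of IQueueClient would wrap Azure Queues; another could wrap MSMQ or a database table.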
How can the entities in my data model be written such that I can deploy to Azure TDS or some other storage when not deploying to Azure? Would MongoDB or similar be useful for this?
NoSQL is a general term encapsulating lots of different technologies. I think Azure TDS currently belongs to the key-value store family of NoSQL, while MongoDB is more of a document database offering much richer functionality than TDS - see http://en.wikipedia.org/wiki/NoSQL_(concept). For mimicking Azure TDS, I think a variant of something like Redis might work (although I believe Redis itself has wider functionality than TDS currently).
In general, it depends on the shape of your data, but I suspect if you can fit it in Azure TDS, then you'll be able to fit it into your choice of other storage too.
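One common way to keep that choice open is to put a small repository interface between your middle tier and the store. This is a hypothetical sketch, not a specific library:

    public interface ITableRepository<T> where T : class
    {
        T Find(string partitionKey, string rowKey);
        void Upsert(T entity);
        void Delete(string partitionKey, string rowKey);
    }

    // e.g. an AzureTableRepository<T> wraps the Azure storage client, while a
    // MongoRepository<T> maps (partitionKey, rowKey) onto a compound _id.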
Surely there is a way to architect for Azure without being married to it.
Yes - as you've suggested in your question, you can architect your app so it can work on other technologies instead. In fact, this is quite a similar challenge to traditional SQL data abstraction methods. However, I think there are a few places where you'll find TDS pushing you in certain directions which won't fit well with other stores - e.g. Azure pushes you much more towards data replication; has very specific rules on keys; offers high performance using very specific mechanisms; and offers limited transaction integrity in very specific situations. These factors may mean that you do indeed have to change some middle-tier layers as well as some data layers in order to get the most out of your app in both its Azure and non-Azure variations.
One other thought: it might be easier to offer your clients a multi-tenant SaaS version on Azure, and a single-tenant version also hosted on Azure - but this does depend on the clients!
I found a viable solution: I can use EF Code First with SQL Server or SQL CE if I design my entities with the same PartitionKey & RowKey compound key structure that Azure Table Storage requires.
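For illustration, the entity shape that works in both worlds looks roughly like this (a sketch assuming EF Code First data annotations; the Customer class itself is made up):

    using System.ComponentModel.DataAnnotations;
    using System.ComponentModel.DataAnnotations.Schema;

    public class Customer
    {
        [Key, Column(Order = 0)]
        public string PartitionKey { get; set; }

        [Key, Column(Order = 1)]
        public string RowKey { get; set; }

        public string Name { get; set; }
    }

    // Against SQL Server / SQL CE this maps to a two-column primary key;
    // against Azure Table Storage the same two properties are used natively.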
With a little help from Lokad Cloud (http://code.google.com/p/lokad-cloud/) to perform the interaction with Azure Table Storage, I was able to craft a common DataContext that provides crud operations against either EF's DbContext OR Lokad's TableStorageProvider.
I even found a nice way to manage relationships between entities and lazy-load them properly.
The solution is a bit complex and needs more testing. I will blog about it and post the link here when ready.
We're currently using the SSO component of Oracle 10g App Server to authenticate users on our external / internet facing client "portal" (think similar to online banking)
SSO uses Oracle Internet Directory to store its data, and we've been able to use PL/SQL and Java to access and modify the data held in OID (e.g. create/drop users, change/verify passwords, etc.)
With the advent of 11g, Oracle appears to have "orphaned" SSO… it is available, but only as an add-on, and it appears to have been superseded by Oracle Access Manager. I'm guessing it will have been dropped altogether by 12g. Plus it looks pretty difficult to install and get running correctly.
So, I'm wondering if anyone has any experience of having had the same migration problem as us? If so, what did you do?
Alternatively, does anyone have any experience of doing something similar using Oracle Access Manager? Do you think it will do what we want?
Or is there a better road to go down? Is there something else I should be considering?
Sorry for the very broad question, but it's one of those situations where a person's experience of what does and doesn't work can make an enormous difference to us making some progress in a timely fashion. Thanks.
To my knowledge, Oracle Internet Directory (OID) is an LDAP-compliant directory, whereas Oracle Access Manager (OAM) is much more complex and consists of two main systems:
Identity System (users, groups, workflows)
Access System (single/multi-domain SSO solution for Web and non-Web based applications)
Access Manager relies on an Identity Server, which is a stand-alone server process that communicates with any directory server (AD, OID, Sun Directory Server, etc.).
So you can use the new OAM and link it with your existing OID to retrieve users/groups and metadata. Everything you could do with OID will be doable with OAM, as it adds more abstraction layers.
But in my opinion, and considering your case, directly accessing LDAP servers (OID, AD, etc.) with a light, "home made" SSO system is cheaper than relying on those big systems. I think OAM is a useful solution when you have lots of heterogeneous applications (web, non-web, mobile, ...) and/or multiple organizations/domains with links between them, and/or you need a very scalable approach.
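For the "home made" route, authenticating a user is essentially a simple LDAP bind. Here is a minimal C# sketch using System.DirectoryServices.Protocols; the host, port, and DN layout are assumptions you would adjust to your directory tree:

    using System.DirectoryServices.Protocols;
    using System.Net;

    public static class LdapAuthenticator
    {
        public static bool Authenticate(string userDn, string password)
        {
            using (var connection = new LdapConnection("oid.example.com:389"))
            {
                connection.AuthType = AuthType.Basic;
                try
                {
                    // A successful simple bind means the credentials are valid.
                    connection.Bind(new NetworkCredential(userDn, password));
                    return true;
                }
                catch (LdapException)
                {
                    return false;
                }
            }
        }
    }

    // Usage: LdapAuthenticator.Authenticate("cn=jsmith,cn=Users,dc=example,dc=com", pwd);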
I'm looking for advice and suggestions on how to synchronise data between two databases.
The first database is a SQL Server 2008 Express instance that runs on disconnected laptops (no network or internet access). The second (main) database is a VFP 9.0 database that runs on a server.
When users connect their laptops to the network, I want the synchronisation process to run.
Other than the different database engines, I have the following items to take into account:
The tables don't necessarily have the same structure
The primary keys are not the same (GUID in the SQL Server and often a combination of character fields in VFP)
Synchronisation of the tables must be done in a certain order to respect the parent-child relationships
On some insert on the SQL Server side, a new primary key must be generated and synchronised in the VFP table
A bunch of validations must be made, and some feedback from the user is sometimes needed
Not all records need to be synchronised
Some records on the SQL Server need to be deleted after the synchronisation
Need to take into account deleted records from both sides
Minimal modifications need to be done on the VFP database
There are probably other points I'm forgetting now, but I think you get the idea of the challenge I face. My guess right now is that I will need to build a custom synchronisation module, but I want your input before I go on, in case I overlooked some options, and to get some tips on how to approach this.
I took a quick look at Microsoft Sync Framework, but with all the restrictions I have and the fact that there is no VFP client already built (AFAIK), I don't think it will be of great help.
Thanks in advance for your feedback.
Update: The laptop application is a C# WinForm application and is using SQL Server 2008 Express.
The complexity of the situation and requirements leads me to believe you need to write a Visual FoxPro application. Visual FoxPro connects to SQL Server 2008 data easily. The complexity of the code is in matching the requirements and identifying the data that needs to be synched, not in the syntax. Visual FoxPro's strength is in its data manipulation language and its ability to connect to almost any data source (native DBFs, ODBC, ADO, and XML).
SQL Server can read VFP 9 data via the VFP 9 OLE DB driver. You could write T-SQL stored procedures to get to the VFP data. Not sure how it would recognize the laptop being connected to the network though.
Another approach is to use SQL Server XML Diffgrams. I am not an expert by any stretch of the imagination on this approach, but it would be something you can research.
Since my expertise is with Visual FoxPro I would find it way easier to go the other way though, but that is just me. You have to go with the skillset of the resources you have for the project.
VFP reads and writes SQL Server data via a connection (DSN, ConnectionString) and any technique involving SQL Passthrough (SQLConnect(), SQLExec() and SQLDisconnect()), CursorAdapters, Remote Views, or a combination of the three.
A Visual FoxPro program can also recognize Windows events like connecting to a network. The application could be installed on each laptop and left running to watch for that event. Once the event is raised, the application can attempt to connect to the SQL Server database (it's possible the laptop is connecting to a network where the SQL Server is unavailable, or to a different network).
Once connected it runs the logic to check and synchronize the databases.
Sounds like you don't have a lot of control over the application writing to the VFP 9 data on the server. If you do have control over the application writing to the VFP 9 database, you might consider changing it to write to SQL Server instead; then you could use SQL Server replication between it and the laptops' SQL Server Express instances to manage the synchronisation. Not a trivial task though, and SQL Server replication, while getting better with each release, does cause hair loss in DBAs. Definitely a lot of work going this route.
Rick Schummer
Visual FoxPro MVP
I would encourage you to take another look at the MS Sync Framework. We have a situation where we want to synchronize occasionally-connected C# client apps with our Java/Oracle backend. You can use the Sync Framework providers for the C# client and implement your own custom subclass of KnowledgeSyncProvider for the backend. This will get you halfway there, and show you a good pattern to apply for the rest.
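Whichever route you take (Sync Framework, VFP, or fully custom), the core logic tends to look roughly like the parent-first, key-mapping loop below. This is a compilable C# sketch with stubbed-out data access; all table names and helper methods are hypothetical:

    using System;
    using System.Collections.Generic;

    public class PendingRow
    {
        public Guid Id; // SQL Server GUID primary key
        public Dictionary<string, object> Fields = new Dictionary<string, object>();
    }

    public class SyncEngine
    {
        // Parents before children so foreign keys resolve on the VFP side.
        private static readonly string[] TablesInOrder =
            { "Customers", "Orders", "OrderLines" };

        // GUID key on the SQL Server side -> character key on the VFP side.
        private readonly Dictionary<Guid, string> keyMap =
            new Dictionary<Guid, string>();

        public void Run()
        {
            foreach (var table in TablesInOrder)
                foreach (var row in LoadPendingRows(table))
                {
                    string vfpKey = UpsertIntoVfp(table, row, keyMap);
                    keyMap[row.Id] = vfpKey;
                    WriteKeyBack(table, row.Id, vfpKey); // sync new keys back to SQL Server
                }
        }

        // Stubs standing in for real data access (e.g. via the VFP OLE DB driver).
        private IEnumerable<PendingRow> LoadPendingRows(string table)
        { return new List<PendingRow>(); }

        private string UpsertIntoVfp(string table, PendingRow row,
            Dictionary<Guid, string> map)
        { return Guid.NewGuid().ToString("N"); }

        private void WriteKeyBack(string table, Guid id, string vfpKey) { }
    }

Deletion and validation handling would hook into the same loop, per the requirement list in the question.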