Can a workflow-driven business process help to manage reference data on both SAP and Siebel?

We are determining our strategy for managing reference data, in particular customer and contract data. We are moving from an environment of many legacy systems, each of which maintains its own version of the customer, to Siebel managing the front office, SAP managing the financial accounting functions, and various assorted systems, legacy and otherwise, running the operational logistics and distribution systems.

The question I have is over the model for maintaining reference data, since Siebel, SAP and the operational systems will all contribute data towards a customer record. We are putting a reference data repository in place to consolidate these multiple sources of customer data into one logical customer record. However, we still have an area of doubt: should we implement a workflow-driven business process to maintain customers across SAP, Siebel and the operational systems, using each system to enter the data appropriate to its function, and then use the repository to source customer information for other systems (e.g., e-business)? Or should we enter information directly into the repository and then synchronize it back out to SAP, Siebel and the operational systems? The latter option seems, on the face of it, simpler. I would be interested to know whether any similar cases are known to you and, if so, what course of action was taken in those instances. Many thanks for any advice you are able to provide.

This is one of the most challenging implementations of the Corporate Information Factory. It appears your company's primary business objective is data interoperability: the integration and synchronization of quality data across operational support systems. Interoperability can be achieved through succinct business procedures that ensure updates are entered consistently across the appropriate applications, in the appropriate maintenance sequence. These procedures would also define the steps for performing reconciliation and for correcting the reconciling items.

The goal of implementing a data inter-operability solution is typically two-fold: improve productivity of the organization and improve data integrity.

The key to executing your strategy is as follows:

* Make sure you identify the system-of-record (SOR) for each data element. Reference data can mean many things to many people. I view reference data as those things that classify and characterize the fundamental data of your business (tables of codes and descriptions implemented as look-up tables). They usually have fixed domains, and they are used to describe the functional transactions of your business (customer acquisition, customer service, order management, etc.). You will have redundant data across your applications. Help your business owners pick the SOR for each redundant data element; those applications become your publishers. Your business process model should help you discern the workflow needed to integrate and synchronize redundant elements across the applications that need to subscribe to them.
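The SOR mapping above can be captured as a simple registry before any middleware is chosen. This is a minimal sketch; the element names and system assignments are illustrative assumptions, not taken from the question.

```python
# Hypothetical system-of-record (SOR) registry: each redundant data
# element is mapped to exactly one publishing application and any
# number of subscribing applications. All names are illustrative.
SOR_REGISTRY = {
    "customer_name":    {"publisher": "Siebel",    "subscribers": ["SAP", "ODS", "Logistics"]},
    "credit_limit":     {"publisher": "SAP",       "subscribers": ["Siebel", "ODS"]},
    "delivery_address": {"publisher": "Logistics", "subscribers": ["Siebel", "SAP", "ODS"]},
}

def publisher_of(element: str) -> str:
    """Return the single system-of-record for a data element."""
    return SOR_REGISTRY[element]["publisher"]

def subscribers_of(element: str) -> list:
    """Return every application that must receive updates to the element."""
    return SOR_REGISTRY[element]["subscribers"]
```

Forcing exactly one publisher per element is what keeps the later synchronization workflow unambiguous: any update arriving from a non-publisher can be rejected or routed back to the SOR for entry.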

* Make sure you identify the refresh frequency for each data element. Once you know the publisher (should only be one) and subscribers (can be many) for each data element, you can define the currency rules for the subscribers. Schedule-based updates are easier/cheaper to implement; event-based updates require a more robust and comprehensive Enterprise Application Integration (EAI) architecture.
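The currency rules can be recorded alongside the SOR registry and used to decide which integration path each element takes. A minimal sketch, with assumed element names and frequencies:

```python
# Illustrative currency rules: each data element is refreshed either on
# a schedule (cheaper, handled by batch ETL) or on events (requires an
# EAI architecture). Names and frequencies are assumptions.
CURRENCY_RULES = {
    "customer_name":    {"mode": "scheduled", "frequency": "daily"},
    "credit_limit":     {"mode": "event"},   # mission-critical: real-time
    "delivery_address": {"mode": "scheduled", "frequency": "hourly"},
}

def needs_eai(element: str) -> bool:
    """Event-based elements must travel the real-time EAI path;
    scheduled elements can go through the batch ETL path."""
    return CURRENCY_RULES[element]["mode"] == "event"
```

Classifying elements this way up front tells you how much of the expensive event-based infrastructure you actually need, versus what the job scheduler can carry.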

* If you are going to implement a hub-and-spoke, publish-and-subscribe data integration architecture, avoid allowing direct maintenance of data on the hub. The hub role is usually fulfilled by an Operational Data Store (ODS). The ODS is integrator of everything, owner of nothing.

  • If the majority of your data refresh requirements can be satisfied on a scheduled basis, you can use the power of an ETL tool and job scheduler to process deltas, post them to the ODS, and update subscriber applications with those deltas from the ODS.
  • If you have a representative amount of mission-critical business information that MUST be updated on a real-time basis, augment your hub-and-spoke architecture with an event-based data movement tool. There are several EAI tools on the market that have done the hard work for you: they have developed the sockets, connectors and adapters for applications that hide their data structures by embedding referential integrity and table/column names within their procedural code. SAP and Siebel have this architecture.
  • The next generation EAI tools are showing great promise for enabling interoperability across applications. They are rules-based, support a metadata repository and provide a fast and relatively non-intrusive means for building peer-to-peer data interfaces. If your mission is to address interoperability, you can implement these EAI tools without the hub and spoke architecture. However, if you also need a means for providing tactical and strategic decision support reporting on an enterprise-wide basis, consider keeping the ODS in the architecture and build an extra subscription to the ODS for each data element being shared across your applications. The ODS provides a means for enterprise tactical reporting/viewing and provides the foundation of cleansed/integrated data to the atomic history and data mart layers.
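The "integrator of everything, owner of nothing" role of the ODS in a hub-and-spoke, publish-and-subscribe design can be sketched as follows. This is a toy in-process model, assuming hypothetical element and key names; a real implementation would sit behind ETL or EAI middleware rather than direct callbacks.

```python
from dataclasses import dataclass, field

@dataclass
class Delta:
    """A single change published by a system-of-record."""
    element: str   # e.g. "customer_name" (hypothetical)
    key: str       # business key, e.g. a customer number
    value: object

@dataclass
class OperationalDataStore:
    """Hub of a hub-and-spoke architecture. It stores the latest
    integrated value for reporting and fans each delta out to the
    subscribing applications; no data is maintained directly on it."""
    store: dict = field(default_factory=dict)
    subscribers: dict = field(default_factory=dict)  # element -> callbacks

    def subscribe(self, element, callback):
        self.subscribers.setdefault(element, []).append(callback)

    def publish(self, delta: Delta):
        # Post the delta to the ODS, then push it to every subscriber.
        self.store[(delta.element, delta.key)] = delta.value
        for callback in self.subscribers.get(delta.element, []):
            callback(delta)

# Usage: Siebel (as publisher of customer_name) emits a delta; a
# subscribing application receives it via its registered callback.
ods = OperationalDataStore()
received = []
ods.subscribe("customer_name", received.append)
ods.publish(Delta("customer_name", "C001", "Acme Ltd"))
```

Because the ODS retains the latest value for every shared element, it doubles as the cleansed, integrated foundation for the tactical reporting and data mart layers described above.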
