Decision management (DM) can be considered a subset of business process management that automates and streamlines transactions and interactions with customers and partners. Organizations apply DM to high-volume transactions through business rules and decision trees that, for example, set thresholds for the approval or denial of credit or limit the ability to access information. DM also requires a strategy for managing exceptions to defined business rules, which can include alerts and human intervention. More recently, DM has been supported by predictive analytics, which can estimate likely behavior from a small profile of information and thus compress complex decision processes.
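As a minimal sketch of the rules-plus-exceptions pattern described above, the example below applies threshold rules to a credit application and escalates incomplete or borderline cases to a human reviewer. All field names, thresholds and the scoring logic are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of rule-based decision management for a credit
# approval flow. Thresholds and field names are illustrative.

def decide_credit(application: dict) -> str:
    """Apply business rules; escalate exceptions to a human reviewer."""
    score = application.get("credit_score")
    amount = application.get("requested_amount")

    # Exception strategy: incomplete profiles go to manual review
    # rather than being auto-decided.
    if score is None or amount is None:
        return "ESCALATE: incomplete application data"

    # Simple decision-tree style thresholds.
    if score >= 720 and amount <= 50_000:
        return "APPROVE"
    if score < 580:
        return "DENY"

    # Everything in between is an exception handled by a person.
    return "ESCALATE: borderline case, human review required"

print(decide_credit({"credit_score": 750, "requested_amount": 20_000}))  # APPROVE
print(decide_credit({"credit_score": 600, "requested_amount": 80_000}))  # ESCALATE
```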
Data governance is a process and structure for formally managing information as a resource; it ensures that the appropriate people representing business processes, data and technology are involved in decision-making.
Data integration refers to the organization’s inventory of data and information assets, as well as the tools, strategies and philosophies by which fragmented data assets are aligned to support business goals. Data integration can pursue several strategies, including single, federated or virtual compilations of data for a given business purpose. Increasingly, businesses are striving to deliver consistent data views through master data management (MDM), which is meant to deliver a near real-time, hub-based and synchronized presentation of information to any seat or point of view in the organization. Data integration today is still heavily focused on middleware, software and management tools that connect software and data end points through connectors and adaptors. Over time, companies are migrating to the philosophy of a service-oriented architecture (SOA) that applies Web protocols and standards for self-identifying application and data end points. This transition is proceeding slowly and selectively as companies are reluctant to abandon proven systems, including mainframes and traditional messaging, which remain mission-critical to business operations.
Extract, transform and load (ETL) is the core process of data integration and is typically associated with data warehousing. ETL tools extract data from a chosen source or sources, transform it into new formats according to business rules, and then load it into the target data structures. The increasing diversity of data sources and the high volumes of data that ETL must accommodate make management, performance and cost the primary challenges for users. The traditional ETL approach requires users to map each physical data item to a unique metadata description; newer ETL tools allow the user to create an abstraction layer of common business definitions and map all similar data items to the same definition before applying target-specific business rules, isolating business rules from data and making ETL easier to manage.
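The sketch below illustrates the abstraction-layer idea: source fields are first mapped to shared business definitions, and target-specific rules are applied afterwards. All source systems, field names and mappings are hypothetical.

```python
# Minimal ETL sketch: map source fields onto common business
# definitions (the abstraction layer), then apply target rules.

# Source-to-business-definition mappings (the abstraction layer).
FIELD_MAP = {
    "crm":     {"cust_nm": "customer_name", "bal": "account_balance"},
    "billing": {"name":    "customer_name", "balance": "account_balance"},
}

def extract(source: str):
    """Stand-in for pulling rows from a real source system."""
    sample = {
        "crm":     [{"cust_nm": "Acme Corp", "bal": "1203.50"}],
        "billing": [{"name": "Acme Corp", "balance": "1,203.50"}],
    }
    return sample[source]

def transform(source: str, rows):
    """Map each row onto the common definitions, then apply rules."""
    for row in rows:
        mapped = {FIELD_MAP[source][k]: v for k, v in row.items()}
        # Target-specific business rule: balances become numeric.
        mapped["account_balance"] = float(
            str(mapped["account_balance"]).replace(",", ""))
        yield mapped

def load(rows, warehouse: list):
    warehouse.extend(rows)

warehouse = []
for src in ("crm", "billing"):
    load(transform(src, extract(src)), warehouse)
print(warehouse)
```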
Data warehousing is a foundational practice that supports enterprise reporting, business intelligence and decision support. Data warehouses and data marts vary in sophistication and philosophical approach, but typically involve extracting and transforming data from operational/transactional databases and loading it into a repository for shared use and analysis. The DW Basics channel has been created to capture and aggregate stories related to fundamental approaches and practices for creating, managing, utilizing and maintaining a data warehouse.
Data modeling
Data modeling is the process of designing and validating a database that will be used to meet a business challenge. Data modelers use terms and symbols to identify and represent all of the data objects needed for a business operation to function.
Data models document entities (the persons, places and things [product, warehouse, partner etc.] an organization encounters in the course of business); the relationships of entities (e.g. employee WORKS in warehouse, MANAGES product and SHIPS to partner); and the attributes of entities (description, order number, address, account balance etc.).
There are three common types of data models. Conceptual data models define and describe business concepts at a high level for stakeholders addressing a business challenge. Logical data models are more detailed and describe entities, attributes and relationships in business terms. Physical data models define database objects, schema and the actual columns and tables of data that will be created in the database.
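To make the progression concrete, the sketch below expresses a tiny physical data model for the warehouse example above as SQLite DDL: entities become tables, attributes become columns, and relationships become foreign keys. Table and column names are illustrative.

```python
# A minimal physical data model, derived from the entity/relationship
# example above. Uses the Python standard library's sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE warehouse (
    warehouse_id INTEGER PRIMARY KEY,
    address      TEXT NOT NULL
);
CREATE TABLE employee (
    employee_id  INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    -- Relationship: employee WORKS in warehouse.
    warehouse_id INTEGER NOT NULL REFERENCES warehouse(warehouse_id)
);
CREATE TABLE product (
    product_id   INTEGER PRIMARY KEY,
    description  TEXT,
    -- Relationship: employee MANAGES product.
    manager_id   INTEGER REFERENCES employee(employee_id)
);
""")
print("physical model created")
```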
Like the blueprint of a building, a data model is the design specification for a database. Data modeling can be helped by off-the-shelf data models that can be adapted to a specific use. But data architects warn that without proper time and attention to "design before you build," organizations face inaccurate reporting, incorrect data, costly remediation and difficulty in meeting new user requirements.
Data Quality
If there is a single pitfall that undermines any given data management initiative, it is most likely to be found in the realm of data quality, a requirement for sound decision-making. Whether taken individually or in combination, databases are almost certain to contain entry errors, duplicate entries and other redundancies that inevitably lead to incorrect or incomplete identification of customers, products and locations. Data quality is therefore a critical prerequisite to any BI initiative; without it, meaning is skewed or obscured in the reporting and analytic outputs of databases, reporting tools, dashboards and scorecards.
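The sketch below shows the duplicate-entry problem in miniature: the same customer captured twice with minor variations, flagged by normalization plus a similarity threshold. The records and threshold are illustrative assumptions.

```python
# Sketch of duplicate detection: normalize names, then flag pairs
# whose similarity exceeds a threshold. Uses the standard library.
from difflib import SequenceMatcher

customers = [
    {"id": 1, "name": "Acme Corp.", "city": "Boston"},
    {"id": 2, "name": "ACME Corp",  "city": "Boston"},
    {"id": 3, "name": "Widget LLC", "city": "Denver"},
]

def normalize(name: str) -> str:
    return name.lower().replace(".", "").strip()

def likely_duplicates(records, threshold=0.9):
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            score = SequenceMatcher(None, normalize(a["name"]),
                                    normalize(b["name"])).ratio()
            if score >= threshold:
                yield a["id"], b["id"], round(score, 2)

print(list(likely_duplicates(customers)))  # [(1, 2, 1.0)]
```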
The current macroeconomic malaise means organizations in almost every sector are finding it tough. IT budgets are under scrutiny and many new projects have been put on hold or cancelled. An economic downturn forces many businesses to focus inward on reducing costs and improving efficiencies to prop up their revenues and profit margins. Although companies are relying even more on IT systems to run their businesses in a leaner and meaner fashion, new IT investments are invariably affected as well. CIOs are left scratching their heads over how to sustain and fund important information management projects such as master data management (MDM) and data quality.
Master data management (MDM) is meant to deliver a near real-time, hub-based and synchronized master record of information to any seat or point of view in the organization. Master records are created with data that is defined, integrated and reconciled from multiple systems (customer relationship management, financial, supply chain, marketing etc.) and classified by type (e.g. product master, customer master, location master etc.). MDM is often pursued by data type through programs that address customer data integration (CDI) or product information management (PIM), though many observers believe true MDM requires reconciliation of all data types. Critical to MDM are the notions of data quality and matching, which technology tools can help to automate.
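The sketch below shows one way a master ("golden") record can be reconciled from several source systems. The survivorship rule used here, preferring the most recently updated non-empty value, is one common choice and purely illustrative.

```python
# Sketch of building a golden record from multiple source systems.
# Source systems, fields and the survivorship rule are illustrative.
from datetime import date

source_records = [
    {"system": "crm", "updated": date(2024, 3, 1),
     "email": "info@acme.example", "phone": ""},
    {"system": "billing", "updated": date(2024, 5, 9),
     "email": "", "phone": "+1-555-0100"},
]

def golden_record(records):
    merged = {}
    # Newer records win, but never overwrite with an empty value.
    for rec in sorted(records, key=lambda r: r["updated"]):
        for field, value in rec.items():
            if field in ("system", "updated"):
                continue
            if value:
                merged[field] = value
    return merged

print(golden_record(source_records))
# {'email': 'info@acme.example', 'phone': '+1-555-0100'}
```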
Master data management doesn't stop with managing references and identifiers. Manage your master data's business vocabulary, facts and rules to move from knowing that two business concepts are the same to knowing what those concepts mean.
According to Gartner, master data is a consistent and uniform set of identifiers and extended attributes that describe the core entities of the enterprise. Master data is frequently used across multiple business processes. Master Data Management is the set of processes and tools for organizationally defining and managing the master data.
Clearly, master data concerns your key business assets, such as customer, project, account and product. As early as 1989, Charlie Bachman noted that any large organization will have a couple of thousand of them. But how do you start with master data management when different definitions hinder consolidated views of these assets?
Manage your master data on a business level.
It is more than clear that these key business assets need to be semantically described: what defines a customer? What other concepts are related to a customer? What terms are used to refer to a customer in different speech communities? What business rules are applied to customers in various settings? Because master data plays a more central role in the overall data environment than transactional or analytical data, it is highly important to get these things understood.
By clarifying the semantics of the concepts that form your master data, you obtain an organizational understanding of this data, while also readying it for use in data services (such as data translation and data validation services). As such, business semantics help you achieve the value of master data: cost savings through removal of duplicated, invalid and outdated data, increased revenue because of consistent views across systems, and overall improved competitive advantage through better control of your data.
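As a toy illustration of the idea, the sketch below represents one shared concept ("customer") together with the terms different speech communities use for it, its fact types, and its business rules. This data structure is a hypothetical sketch, not the SBVR standard itself.

```python
# A toy business-semantics entry: one concept, many community terms,
# plus fact types and rules. Structure and content are hypothetical.

customer_concept = {
    "concept": "customer",
    "definition": "A party that has agreed to purchase goods or services.",
    # Different speech communities, same underlying concept.
    "terms": {"sales": "client", "billing": "account holder", "crm": "customer"},
    # Fact types relate the concept to other concepts.
    "fact_types": ["customer places order", "customer has billing address"],
    # Business rules constrain valid instances.
    "rules": ["each customer has exactly one primary billing address"],
}

def term_for(concept: dict, community: str) -> str:
    """Resolve the term a given community uses for the shared concept."""
    return concept["terms"].get(community, concept["concept"])

print(term_for(customer_concept, "billing"))  # account holder
```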
Benefits
Reduce risk and costs, improve business/IT alignment, and increase your Master Data Management project's success by:
- Removing interpretation conflicts,
- Avoiding unnecessary debate,
- Involving the business in defining and governing your master data definitions,
- Leveraging these business definitions on your data integration and middleware infrastructure to reduce integration complexity and costs,
- Increasing governance and compliance.
Data governance is an emerging discipline with an evolving definition. The discipline embodies a convergence of data quality, data management, business process management, and risk management surrounding the handling of data in an organization. Through data governance, organizations are looking to exercise positive control over the processes and methods used by their data stewards to handle data.
Collibra's solution allows you to manage the business context in which data is interpreted as meaningful information.
It will enable you to identify and solve the semantic conflicts between different communities of interest. This results in better communication, better alignment between different stakeholders, and higher IT project success!
Reduce complexity and eliminate guesswork
In a classical data integration approach, point-to-point mappings are created between disparate data formats. These mappings, possibly supported by a technical metadata repository, only say how an element in one data format relates to an element in another data format. They do not specify what each of these elements means. The essential step of making the meaning of these elements explicit, so that it becomes clear to all stakeholders, is skipped. Formats are transformed in a point-to-point fashion, which comes with a number of critical problems (a sketch contrasting this with a canonical approach follows the list below).
- Ad-hoc solution: Manual creation of data translations is a black art with little methodological support. The solutions are very specific and mastered only by a few.
- Error prone: Much of the guesswork is left to the data mapper. Assumptions can be very dangerous and costly when they turn out to be wrong. It can take a long time before problems are discovered. The longer it takes, the higher the risks and the more expensive the consequences.
- Inefficient: Creating format translations is a very time-consuming process that provides little reuse. It is especially problematic when mappings have to be changed or maintained over time. Because mappings remain at a technical level, they are very difficult to understand, even for the original developers.
- Unsustainable: If you consider the points above and multiply them by the complexity and time frame of your organization, your organization is left with an ever-growing legacy that requires permanent maintenance.
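The promised sketch below contrasts the two approaches: with point-to-point integration, n formats need on the order of n*(n-1) directed mappings, whereas mapping every format once onto a shared canonical model needs only n. Formats and fields are hypothetical.

```python
# Translating between formats via a shared canonical model:
# one mapping per format, not one per pair of formats.

CANONICAL_MAPS = {
    "erp":  {"CUST_NO": "customer_id", "TOT": "order_total"},
    "shop": {"customerId": "customer_id", "total": "order_total"},
}

def to_canonical(fmt: str, record: dict) -> dict:
    return {CANONICAL_MAPS[fmt][k]: v for k, v in record.items()}

def from_canonical(fmt: str, record: dict) -> dict:
    inverse = {v: k for k, v in CANONICAL_MAPS[fmt].items()}
    return {inverse[k]: v for k, v in record.items()}

# Translate an ERP record into the web-shop format via the hub.
erp_record = {"CUST_NO": "C-17", "TOT": 99.95}
print(from_canonical("shop", to_canonical("erp", erp_record)))
# {'customerId': 'C-17', 'total': 99.95}
```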
Business-context driven integration
Collibra's Semantic Data Integration solution enables business stakeholders to define and agree on the meaning of the business context on a business level. We are the first enterprise software company to implement the OMG's Semantics of Business Vocabulary and Business Rules (SBVR) standard to enable business stakeholders to do that in a very easy and effective manner. The Business Semantics Glossary also makes it very easy to import existing data models (UML, XSD, Excel, Word, ...) to bootstrap this process. Next, the Business Semantics Studio enables technical stakeholders to link existing applications or data sources (XML, database, web services, ...) to this business context. These links, or what we call commitments, can be loaded into the Business Semantics Enabler to automatically generate data transformation services that solve your data integration needs.
Figure: overview of the solution, aligning business & IT through semantic integration.
Collibra's solution enables you to define, manage and govern your information model on a business level. It allows you to capture the semantics of your business and automatically generate technical canonical models (UML, XSD, OWL, ...), data transformation and semantic validation services. Add semantics and business context to your SOA architecture to reduce complexity, increase agility, and improve governance.
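As a hypothetical sketch of the semantic validation idea described above, the example below captures business rules once, at the business level, and evaluates records from any connected system against them. This is not Collibra's API; the rule names and record fields are invented.

```python
# Hypothetical semantic validation service: business rules defined
# once and checked against any incoming record.

RULES = {
    "order_total must be non-negative":
        lambda r: r.get("order_total", 0) >= 0,
    "each order must reference a customer":
        lambda r: bool(r.get("customer_id")),
}

def validate(record: dict):
    """Return the list of business rules the record violates."""
    return [name for name, check in RULES.items() if not check(record)]

print(validate({"customer_id": "C-17", "order_total": 99.95}))  # []
print(validate({"order_total": -5}))
# ['order_total must be non-negative',
#  'each order must reference a customer']
```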
Bringing people and systems closer together
The ultimate goal of a service-oriented architecture is to unify the existing application stovepipes and build an infrastructure on which it is easy to combine existing applications and build new ones. All large software vendors provide you with the core infrastructure: messaging, routing, adapters, service governance, etc. The information part of your infrastructure, however, has not received the attention it needs. When moving from a stovepipe approach to a unified approach, your information model expands to encompass all the different services. This is a challenge standard software products are not prepared for. Collibra helps manage this challenge by involving all stakeholders, business and IT, in defining and governing your enterprise information model.
Benefits
Reduce risk and costs, improve business/IT alignment, and increase your Data Integration project's success by:
- Removing interpretation conflicts,
- Avoiding unnecessary debate,
- Involving the business in defining and governing your data definitions,
- Leveraging these business definitions on your data integration and middleware infrastructure to reduce integration complexity and costs,
- Increasing governance and compliance.
Semantic alignment vision paper
In today’s business ecosystems, information has become a competitive and strategic asset. Being able to exchange data, and to interpret the information in that data in the right context and within a reasonable time, is a top priority for many organizations.
Starting from three simple but serious questions regarding data semantics, data utilization, and data governance that pop up daily in information-intensive enterprises, we easily identify a value proposition for semantic alignment. However, current techniques that claim to create semantic alignment in this sense are unsatisfactory, both theoretically and as far as the quality of the results is concerned. They systematically ignore the subtle gap that looms between information sharing among people (i.e. knowledge sharing) at the business/social level on the one hand, and information sharing between computer systems (i.e. data exchange) at the operational/technical level on the other.
A solution requires organizations to look beyond mere technical fits and think in terms of mechanisms that transcend their IT infrastructure to a sustainable information-centric infrastructure that meaningfully aligns business with IT. To achieve this goal, we pinpoint two essential requirements: business semantics management and data services.
Business Semantics for effective Application Integration and SOA
Why adding semantics and business context to your integration architecture will reduce complexity, increase agility, and improve governance.
In today’s business ecosystems, information has become a competitive and strategic asset. Being able to exchange data and to interpret this data in the right context and within a reasonable time is a top priority for many organizations. In this white paper, we zoom in on the challenges of adopting a shared or canonical data model for application integration. This integration model is considered a best practice when integrating applications on an Enterprise Service Bus or in the context of an SOA environment. The challenges to implementing this architecture successfully are threefold:
- lack of semantic alignment,
- lack of flexibility, and
- lack of governance.
The white paper shows how business semantics address these challenges by (a closing sketch follows this list):
- adding business context to your disparate data sources and letting it drive the integration process,
- involving technical and business stakeholders by de-coupling structure from meaning in terms of business vocabularies, facts and rules, and
- leveraging these business semantics operationally by deploying them as data services on your Enterprise Service Bus.
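The closing sketch below, self-contained and under the same assumptions as the earlier examples, combines the two operational pieces into one "data service": semantic validation against shared business rules, followed by translation to a target format. All rule names, formats and fields are illustrative; a real deployment would sit on an Enterprise Service Bus rather than in-process.

```python
# A toy data service: validate against business rules, then
# translate field names to a target format. All names illustrative.

RULES = [("order total must be non-negative",
          lambda r: r.get("order_total", 0) >= 0)]

TARGET_MAP = {"shop": {"customer_id": "customerId",
                       "order_total": "total"}}

def data_service(record: dict, target: str) -> dict:
    """Validate against business rules, then translate field names."""
    violations = [name for name, ok in RULES if not ok(record)]
    if violations:
        raise ValueError(f"record rejected: {violations}")
    return {TARGET_MAP[target][k]: v for k, v in record.items()}

print(data_service({"customer_id": "C-17", "order_total": 99.95}, "shop"))
# {'customerId': 'C-17', 'total': 99.95}
```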