Gamified Data Governance and Employee Engagement

Gamified Data Governance (GDG) refers to an approach to governance that applies game mechanics to increase participation levels and enable performance monitoring. In a GDG program, the responsibility for Data Governance is taken on by the business and IT user community as a whole.

The process uses gamification features, such as points, levels, and awards, to set group targets and monitor the progress of the program through measurable sets of activities and milestones. The first blog in this series defined the need for data governance gamification (Gamified Data Governance: Defining the Need). The second blog introduced how a GDG program can help facilitate an organization’s progression through the levels of the capability maturity model (Gamified Data Governance and the Maturity Model). This blog looks at how gamification addresses the problem of employee engagement.

Employee involvement in data governance programs has traditionally faced a number of challenges:

  • Complexity. This is a general problem with technology, which presents users with ever changing sets of controls, often involving multi-step processes. The array of features on offer can pose a barrier to new users.
  • Scope. The scale of the undertaking can be daunting. Business glossaries are comprised of thousands of terms, each categorized, defined and associated with business policies and rules.
  • Ownership. A shared understanding of business language is essential to the accuracy, efficiency, and productivity of the enterprise. How information is communicated and received is a shared responsibility. When employees have no ownership of that language, communication breaks down.

Gamified data governance addresses these concerns directly:

  1. Keeping it Simple. The gamified approach narrows the focus of user activities, according to the goals of the program. This is done through assigning specific tasks according to roles and through realignment of the user interface to those roles.
  2. Targeting Goals. The full scope of the program is broken down into more manageable units. This can align with strategic goals, such as focusing on a given subject area (for example, Customer information) or focusing on a specific aspect of governance (for example, ensuring that all terms have corresponding business policies identified). A key part of GDG is that these targeted goals are measurable.
  3. Engaging Users. Gamified data governance is a collective activity, with immediate feedback and acknowledgement of each participant’s contribution. The program invites active involvement in making terminology clear and concise, thus providing ownership of the process to participants.

The GDG program revolves around the activities of 3 groups of users, each with a distinct set of tasks to perform.

Group 1: Data Stewards

The primary goal of the data stewards is to enter business terms into the business glossary.  Data stewards are presented with a form allowing them to enter, modify, or delete various aspects of a term. They can also load batches of terms at once.

Group 2: Data Owners

It is the data owners’ job to ensure the quality of business glossary entries. The data owners review and approve or critique selections and entries made by the data stewards. As part of the gamification workflow, these actions are automatically fed back to the data stewards, with email alerts and running scores reflected on a leaderboard.

Group 3: IT Stewards

It falls to the IT stewards to attach data assets to terms. The IT stewards link data points in reports and databases to the business terminology entries. This is an essential step to relate business concepts to usage and to be able to measure data quality in light of the meaning and rules associated with it.
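
The feedback loop across these three groups can be sketched in a few lines of code. This is a minimal illustration only; the point values, activity names, and leaderboard logic are assumptions made for the sketch, not features of any particular governance tool.

```python
from collections import defaultdict

# Hypothetical point values for each governed activity (illustrative).
POINTS = {
    "term_entered": 10,      # data steward adds a glossary term
    "term_approved": 5,      # data owner approves an entry
    "term_critiqued": 2,     # data owner sends an entry back for rework
    "asset_linked": 8,       # IT steward attaches a data asset to a term
}

scores = defaultdict(int)

def record_activity(user: str, activity: str) -> int:
    """Award points for an activity and return the user's running score."""
    scores[user] += POINTS[activity]
    return scores[user]

def leaderboard():
    """Return participants ranked by total points, highest first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

record_activity("alice", "term_entered")
record_activity("alice", "term_approved")
record_activity("bob", "asset_linked")
```

In a real program, the email alerts and leaderboard display mentioned above would hang off the same event stream that `record_activity` represents here.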

In Loyalty 3.0, Rajat Paharia outlines 5 intrinsic motivators that serve to encourage people to do something for its own sake, rather than because of some external driving force compelling them to do it. He argues that these key motivators are inherent in gamification.

  1. Autonomy. The gamified approach places governance in the hands of the stakeholders, democratizing the process and empowering the participants to define business terminology.
  2. Mastery. Determining names and definitions is a creative act, requiring communication skills that improve with practice.
  3. Purpose. Data governance is a business imperative, and so the purpose of the program is built in.
  4. Progress. The program builds up a body of work, the business glossary, and participants see the measurable progression of their efforts, both in the gamified metrics (e.g., the count of terms added) and in the daily use of the business glossary throughout the organization.
  5. Social Interaction. Participants in the program engage in the work together, in part through division of labor and also collaboratively through peer reviews. Combining a competitive element through teams and individual acknowledgment also contributes to social interactivity.

Gamification can help to overcome some of the biggest challenges to a successful data governance program: simplifying the process, providing management levers, and engaging users. The GDG program is open to customization and experimentation, providing a mechanism to align an actively engaged set of users with strategic data governance goals.

Stream Integration Data Governance specialist, Eric Landis, was consulted for this article.

This article was originally published on the Stream Integration website.


Gamified Data Governance and the Maturity Model

There’s a classic maxim applied to many games: a minute to learn, a lifetime to master. In taking a gamified approach to data governance (GDG), it would be wise to keep this in mind. The steps are relatively simple, with the end goal being to bring order to complex systems.

Gamified Data Governance (GDG) refers to an approach to governance that applies game mechanics to increase participation levels and enable performance monitoring. In a GDG program, the responsibility for Data Governance is taken on by the business and IT user community as a whole.

The process uses gamification techniques, such as points, levels, and awards, to set group targets and monitor the progress of the program through measurable sets of activities and milestones. The need for the gamification of data governance has been discussed. This blog introduces how a GDG program can help facilitate an organization’s progression through maturity levels.

Data Governance Capability Maturity Model

Many organizations follow the five levels of the Capability Maturity Model, which takes a set of business processes from an initial ad hoc state, to one that is managed, measured, and continually improving. The five levels of maturity, in the context of data governance, are:

  1. Initial. This level is characterized as being chaotic, driven through the singular efforts of individuals. At this stage, business users become aware of inconsistencies in their understanding of report fields, leading to siloed data quality requirements. Efforts tend to be piecemeal and uncoordinated, though individuals work hard to remedy problems as they arise.
  2. Repeatable. At this level it is possible to repeat documented processes. Processes to address data concerns are captured in guidelines and manuals; basic checklists have been created for data design, data quality assurance, and ETL error handling. However, there remains a minimal level of formality.
  3. Defined. At this level, standard practices are established, documented, and managed. Data governance is formally embedded in the workflow processes of the data management development lifecycle, from business requirements gathering through analysis, data quality assurance, data design and integration, and business intelligence.
  4. Managed. Beyond establishing defined processes, this level incorporates measuring results. Here, the efficacy of data governance processes can be measured in terms of data accuracy, data management efficiencies, and overall productivity. The value of the organization’s data, as well as the quality of the governance being applied, is continually assessed and remediated.
  5. Optimizing. The last level ensures a continual evolution of the process management by striving for ongoing improvement, documentation, enforcement and measurement.
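
To make the progression concrete, here is a toy sketch that maps observed governance capabilities to a maturity level. The gating questions (documented, embedded, measured, improving) are a simplifying assumption drawn from the level descriptions above, not a formal assessment method.

```python
# Each maturity level presupposes the capabilities of the level below it.
LEVELS = [
    (1, "Initial"),
    (2, "Repeatable"),
    (3, "Defined"),
    (4, "Managed"),
    (5, "Optimizing"),
]

def maturity_level(documented: bool, embedded: bool,
                   measured: bool, improving: bool) -> tuple:
    """Walk up the levels, stopping at the first capability gate that fails."""
    level = 1
    for gate in (documented, embedded, measured, improving):
        if not gate:
            break
        level += 1
    return LEVELS[level - 1]

# An organization with documented, embedded processes but no measurement
# sits at level 3 ("Defined").
assessment = maturity_level(documented=True, embedded=True,
                            measured=False, improving=False)
```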

Gamification and the Capability Maturity Model

Gamification of data governance provides a platform on which to build towards an optimizing level of governance. It does this through a number of means:

  • GDG recognizes individual contributions, providing instant feedback in the form of points, badges, and achievement levels. This aspect of gamification acknowledges the heroic aspects of the initial maturity level.
  • GDG processes are repeatable, with an interface that isolates specific tasks. This is partly a byproduct of engineering the process to facilitate scoring, but it serves to clarify and streamline the processes as well.
  • GDG processes are explicitly defined, in terms of the steps to be taken and the roles of participants. Built into the gamification are the checks and balances of peer reviews and workflow controls.
  • Also part of the gamified aspect of governance is the ability to measure results in terms of raw productivity numbers, levels of participation, and impact to data quality and usage. The program is designed to be managed, with capabilities to guide user activities by setting and monitoring specific goals.
  • With management controls in place and active participation from data owners, data stewards, and IT stewards, GDG is positioned to be continuously optimizing, putting the focus on the most valuable activities, tweaking the generation of data policies and rules, refining business terminology, and managing data assets with some level of dexterity.

A gamified data governance program may not jump the organization to level 5 out of the box, but it will provide all the pieces to get there faster. With business and IT playing from the same rulebook, data governance can be mastered.

Stream Integration Data Governance specialist, Eric Landis, was consulted for this article, which was originally posted on the Stream Integration website.

Data Design Principles

Obey the principles without being bound by them.

- Bruce Lee

Taking a practical approach to developing a well-formed enterprise data warehouse – and by that, I mean one that is accurate, efficient and productive – involves basing it on sound design principles. These principles are specific to each sector of the reference architecture, each sector enabling specific capabilities and serving specific functions. Here, I would like to lay out the principles of the Information Warehousing layer’s normalized central repository – the system of record.

The Information Warehousing layer is designed as a normalized repository for the data that has been processed “upstream”. It arrives cleansed, transformed and mastered; and is consolidated here into a single “System of Record”.  The discipline of normalization restructures the data, removing it from the confines of the single perspective of the source system, into the multiple perspectives across the enterprise. The data is modelled according to its “essence” rather than its “use”.

This process of redesign imposes a strict order on the data that promotes data integrity and retains a high degree of flexibility. The way the data is broken apart into separate tables makes it challenging to query, but this is not its main purpose. Data integrity and flexibility are the primary goals, and tuning is geared towards load performance rather than data access.
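
As a concrete illustration of modelling by "essence" rather than "use", the sketch below lands customers and vendors from different source systems in a single Party table, with their roles held separately. The table and column names are assumptions for the example, not a prescribed model.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE party (
        party_id   INTEGER PRIMARY KEY,  -- surrogate key
        party_name TEXT NOT NULL
    );
    CREATE TABLE party_role (
        party_id  INTEGER REFERENCES party(party_id),
        role_code TEXT NOT NULL          -- e.g. CUSTOMER, VENDOR
    );
""")

conn.execute("INSERT INTO party (party_id, party_name) VALUES (1, 'Acme Ltd')")
# The same party plays two roles -- no duplicated master record.
conn.executemany("INSERT INTO party_role VALUES (?, ?)",
                 [(1, "CUSTOMER"), (1, "VENDOR")])

roles = [r[0] for r in conn.execute(
    "SELECT role_code FROM party_role WHERE party_id = 1 ORDER BY role_code")]
```

The design choice here is that "Customer" and "Vendor" describe how a party is used, not what it is; the normalized model keeps the essence (Party) stable while uses accumulate as roles.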

  1. Data Integrity
    Take a proactive stance to protect referential integrity and reduce instances of redundancy or potential for inconsistency.
  2. Scalability
    Allow for increases in volumes or additional sources of existing information, both within subject areas (e.g., issuers, counterparties) and core concepts (e.g., parties, locations).
  3. Flexibility
    Allow for additional sources or changes in existing sources, so that the design is neither tied to a given source nor simply mirrors it. Design will give primary consideration to reuse, then extension, and finally modification of existing structures.
  4. Consistency
    Apply standard patterns for data design to promote efficiencies of data and ETL design. The decision-making process will be expedited as will data modelling work and ETL development.
  5. Efficiency
    Focus efficiency on three aspects:

    1. Implementation
      Use of repeatable patterns for data design will minimize data modelling and ETL work effort.
    2. Operation
      Ease ongoing maintenance by keeping the number of data objects to a minimum; maintain consistent standards; and apply logical structures for ease of navigation and use.
    3. Load Performance
      Priority given to performance of ETL load processes; including those that use the System of Record as a source to load the Data Mart sector.
  6. Enterprise Perspective
    For all data objects, retain, and remain open to, a full range of existing and potential relationships between entities, to ensure that data reflects an enterprise perspective that is not limited to the perspective of any given project’s requirements.

These foundational principles are implemented through strategies that impact storage of history, hierarchical structures, degree of normalization, classifications, surrogate keys and a number of other aspects of design. The principles form the criteria to judge the best approach to take in a given situation. It’s not always straightforward, even with the principles in place – at times one has to favour one principle over another (e.g., flexibility over load performance), but this list provides the guidance to frame the debate and take a considered approach.

As suggested by the opening quote, I’m not advocating a blind conformance to a set of rules; but, in my experience, one of the greatest obstacles to efficient and effective development is the decision-making process. Limiting the parameters of debate with intelligent guidelines can facilitate decisions being made quickly and correctly.

Feel free to suggest additions or amendments to this list of design principles for the System of Record.

Nine ETL Design Principles

The principles of ETL design define the guidelines by which data migration will be constructed. Below are 9 principles to guide architectural decisions.

1. Performance
In almost all cases, the prime concern of any ETL implementation is to migrate the data from source to target as quickly as possible. There is usually a load “window” specified as part of the non-functional requirements; a duration of time that is available to complete the process. The constraints are based on either the availability of the source system data, the need for the business to have access to the information, or a combination of both.

2. Simplicity
As with all programming, a premium is placed on simplicity of design. This is in the interests of productivity of development time, consideration of ongoing maintenance, and a likely improvement in performance. The fewer steps involved, the less chance of mistakes being made, or places for things to go wrong. When changes need to be made, or fixes applied, the fewer touch points, the better. During the life of the processes, ownership will likely change hands. The system’s simplicity will aid clarity for those who need to take it on.

3. Repeatability
One needs to be able to re-run jobs to achieve consistent and predictable results each time. This means it needs to be applicable to all relevant incoming sources, and in no way dependent on specific time parameters. If sources change, the process needs to handle those changes gracefully and consistently.
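
One common way to achieve repeatability is an idempotent delete-then-insert keyed by batch: re-running the same batch produces identical results. This is a minimal sketch; the schema and the batch-keying convention are assumptions for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (batch_id TEXT, cust_id INTEGER, name TEXT)")

def load_batch(batch_id: str, rows: list) -> None:
    """Delete the batch's prior rows, then insert -- safe to re-run."""
    conn.execute("DELETE FROM target WHERE batch_id = ?", (batch_id,))
    conn.executemany("INSERT INTO target VALUES (?, ?, ?)",
                     [(batch_id, cid, name) for cid, name in rows])

load_batch("2024-01-31", [(1, "Acme"), (2, "Globex")])
load_batch("2024-01-31", [(1, "Acme"), (2, "Globex")])  # re-run: no duplicates
count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
```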

4. Modularity
Units of work should be designed with an eye to repeatable patterns, interdependencies and discrete operations that function in isolation. The goal is to have as few modules as possible to be applied as templates to all future development work. This principle assists with clarity and efficiency of design, as well as reusability.

5. Reusability
As a principle, the goal should be not only to repeat modular patterns, but where possible re-use existing jobs and apply parameterization. This optimizes the efficiency of development and reduces the cycles required for testing.
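
Parameterization can be sketched as a job factory: one generic definition, configured per source. The source names and key columns below are illustrative assumptions, not references to real systems.

```python
def make_load_job(source_name: str, key_column: str):
    """Return a load function configured for a particular source."""
    def load(rows: list) -> list:
        # Shared logic for every instantiation: tag provenance and
        # drop rows that are missing the configured key.
        return [dict(r, source=source_name)
                for r in rows if r.get(key_column) is not None]
    return load

# One tested pattern, reused for two sources via parameters alone.
load_customers = make_load_job("CRM", key_column="cust_id")
load_vendors = make_load_job("ERP", key_column="vendor_id")

out = load_customers([{"cust_id": 1}, {"cust_id": None}])
```

Because the shared logic lives in one place, a fix or enhancement is made once and inherited by every parameterized instance, which is where the reduced testing cycles come from.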

6. Extensibility
Rather than “bring everything” from a given source when a data migration process is first built, it should be possible to include only that which is identified as valuable to the business in the context of a given project or release cycle. Over time, additional data elements from the sources can be added to the ETL jobs, with potentially, new targets. The ETL job should take this iterative approach into account.

7. Revocability
This refers to the ability to reset the database after a run, returning it to the state it was in prior to running the process. This will be important for testing cycles during the development process, but also in production, in the event the database becomes corrupted or requires rolling back to a previous day.

8. Subject-orientation
Workloads are to be divided into units based on business subject areas rather than source-system groupings or strictly target table structures. This recognizes that a given source table may contain information about more than one subject area (e.g., Customers and Accounts). In addition, a given subject area may be composed of multiple source tables, which may populate multiple targets. Simply limiting the ETL jobs to a single source and a single target may compromise the other principles, particularly performance and simplicity. Similarly, orienting the ETL jobs to either the source or target layouts may degrade the efficiency of the design.

9. Auditability
It is essential to be able to trace the path that data takes from source to target and be able to identify any transformations that are applied on values along the way.
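
A lightweight way to support auditability is to carry a lineage trail with each record, appending an entry for every transformation applied. The trail format and step names below are assumptions for the sketch.

```python
def transform(record: dict, step: str, fn) -> dict:
    """Apply fn to a copy of the record and log the step in its trail."""
    out = fn(dict(record))
    out["audit_trail"] = record.get("audit_trail", []) + [step]
    return out

# A record extracted from a hypothetical source, then transformed once.
rec = {"amount": "100.50", "audit_trail": ["extracted:src_sales"]}
rec = transform(rec, "cast:amount->float",
                lambda r: {**r, "amount": float(r["amount"])})
```

With every transformation logged, the path from source to target value can be reconstructed for any row, which is the essence of the principle.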

With these guiding principles, specific strategies can be employed with a set of criteria to judge their applicability.

I would be interested to hear feedback on this list; and am open to any additions.

In Defense of Surrogate Keys

Employing surrogate keys is an essential strategy to protect the integrity of data assets in the EDW. For those looking to defend such a position against those who would prefer human-readable business keys (a.k.a. natural keys), this article lays out some arguments. I hope by the end the non-believers will be converted, and those who already know the value of surrogates can move ahead without opposition.

Michelle A. Poolet has provided four criteria to determine whether a surrogate key is required:

1. Is the primary key unique?

Where a natural key is not unique, such as a person’s full name, a surrogate key must be used. However, a unique composite alternate key will be needed to match on the record for updates, to attach attribution or establish relationships.

2. Does it apply to all rows?

Take an example such as Customer. Customers may all come from the same source and be loaded into a Party table. However, parties that play different roles and that are loaded from different source systems may have natural keys of different formats (e.g., company business number, vendor identifier).

3. Is it minimal?

Composite keys and long descriptive fields (e.g., Postal Address) are obvious candidates for transformation to surrogate keys.

4. Is it stable over time?

Identifiers generated by a Master Data Management (MDM) solution are designed not to alter over time. However, if the MDM solution was to change technology, or some sources were to be passed into the EDW through a different channel, the MDM key would be problematic. It is preferable for surrogate keys to be generated internally to the Information Warehousing layer.
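
The preference for internally generated surrogates can be sketched as a lookup-or-mint routine: the natural (business) key is retained as a unique alternate key for matching, while the small integer surrogate is used everywhere else. The schema here is an illustrative assumption.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE party (
        party_sk     INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
        business_num TEXT NOT NULL UNIQUE                -- natural key
    )
""")

def get_or_create_sk(business_num: str) -> int:
    """Return the surrogate for a natural key, minting one if needed."""
    row = conn.execute("SELECT party_sk FROM party WHERE business_num = ?",
                       (business_num,)).fetchone()
    if row:
        return row[0]
    cur = conn.execute("INSERT INTO party (business_num) VALUES (?)",
                       (business_num,))
    return cur.lastrowid

sk1 = get_or_create_sk("BN-123456789")
sk2 = get_or_create_sk("BN-123456789")  # same natural key, same surrogate
```

Because the surrogate is generated inside the warehouse layer, a change of MDM technology or source channel alters only the matching on `business_num`, never the keys already propagated through the model.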

Rationale for preferring surrogate keys over natural keys:

  1. Primary and Foreign Key Size
    Natural key: Natural keys and their indexes can be large, depending on the format and components involved in making a record unique.
    Surrogate key: Surrogate keys are usually a single column of type integer, making them as small as possible.
  2. Optionality and Applicability
    Natural key: The population of natural keys may not be enforced in the source data.
    Surrogate key: Surrogate keys can be attached to any record.
  3. Uniqueness
    Natural key: Natural keys always have the possibility of contention. Even where a single source has protected against it, contention may exist across sources.
    Surrogate key: Surrogate values are guaranteed to be unique.
  4. Privacy
    Natural key: Natural keys may be prone to privacy or security concerns.
    Surrogate key: Surrogate keys are cryptic values with no intrinsic meaning.
  5. Cascading Updates
    Natural key: If the business or source changes the format or datatype of the key, it will need to be updated everywhere it is used.
    Surrogate key: Surrogate values never change.
  6. Load Efficiencies
    Natural key: With the exception of very short text values, VARCHAR values are less efficient to load than INTEGER.
    Surrogate key: Surrogate values can be loaded more efficiently.
  7. Join Efficiencies
    Natural key: With the exception of very short text values, joins using VARCHAR values are less efficient than INTEGER.
    Surrogate key: Surrogate values can be joined more efficiently.
  8. Common Key Strategy
    Natural key: A lack of a common key strategy will lead to a longer decision-making process and more patterns for code development.
    Surrogate key: A common key strategy based on the use of surrogates will streamline design and development.
  9. Relationships
    Natural key: Natural keys with different data types will lead to the creation of more objects in the database to accommodate the associations between entities.
    Surrogate key: The use of surrogates promotes reuse of associative entities for multiple types of relationships.
  10. History
    Natural key: Related to the points above, the use of natural keys inhibits the flexibility and reuse of objects and patterns to support change capture.
    Surrogate key: Surrogate keys enable a strategic design to store history using associatives.

The list above is drawn in part from an article by Lee Richardson, discussing the pros and cons of surrogate keys.

In summary, the reasons to employ surrogate keys as a design policy relate to two factors:

1. Performance

In the majority of cases the surrogate key will perform more efficiently for both loading and joining tables.

2. Productivity

Even in the minority of cases where a natural key will perform as well as a surrogate, by adopting a consistent surrogate key policy there are benefits to efficiencies gained in design, development and maintenance.

EDW Roles and Responsibilities: Part I

In the next couple of articles, we look at the organizational structure of the EDW development team. This needs to be considered in the context of the development life cycle and areas of governance that I have discussed elsewhere, with particular reference to Release Management.

A few things to note while reading this list:

  • Technical infrastructure is not included – this is taken as a given. In my experience, the hardware and software is set up and maintained by a different group, who are not usually involved directly in any design or development decisions. Of course, they are involved when it comes to the underlying technical architecture of the overall solution, and the infrastructure needs to support the flow of data through the warehouse layers.
  • The organization tree structure diagram above is conceptual, and intended to represent a generic organization. Some organizations are larger and have many resources, internal and external within any branch. Others have a small core team that does all the development, with a single person handling many roles.
  • The main thing to take away from the diagram is that the EDW Program team performs the analysis, data modelling, ETL, and QA of its work, while the project streams gather requirements, and specify and build reports. The EDW Council is a body of higher authority that can arbitrate design decisions and provide governance of the process.

Program/Release Manager

The program manager is responsible for the overall management of the EDW release. This includes the fulfillment of all data requirements, adjusted after identification and analysis of the sources; as well as delivery of the validated, repeatable processes to populate the target databases.

Throughout the release life cycle, the program manager is accountable for all the major milestone deliverables that sit within the responsibility of the EDW development team. This excludes only the detailing of requirements for each project stream and the development of the business intelligence solution that is built on top of the data mart structures.

The program manager must be part of the intake process to determine the scope of each release. To this end, he/she is consulted on the definition of each project’s requirements.

While the program manager is not directly involved in, or responsible for, business intelligence, he/she is kept informed of BI progress to remain responsive to needs that may arise.

Project Manager

The project manager of each stream is directly accountable for the detailed definition of requirements feeding into the EDW, as well as the documentation of BI specifications and the delivery of BI reporting that will draw from the data mart.

The project manager will not be involved in the development of the EDW, but will remain informed of its progress. Early in the process, it is particularly important that the project manager be apprised of data quality issues that arise through identification and analysis of sources. The system of record is not relevant to an individual project, while the data mart will be a critical aspect of the project’s success; and so the project manager is informed of major milestones in the development of the data mart.

Business Analyst

The business analyst is responsible for compiling the detailed definition of requirements. It is also the business analyst’s responsibility to identify the sources that will fulfill those requirements. It is to be expected that not all analysts will have the business knowledge to complete the first task as well as the technical knowledge to complete the second. This is a case where a single role could be performed by different people.

The business analyst supports processes that involve, or require input from, analysis. This means that he/she may be actively involved in the analysis of profile results and in writing SQL queries to interrogate the source data. He/she will find, amend, and compose business definitions of fields. When it comes time to map sources to targets, the analyst will be engaged to communicate his/her understanding to the person writing the document. It may even be the case that the analyst’s skills in documentation mean that he/she does the writing.

Data content governance involves insight into the nature and quality of the data. To this end, the business analyst supports this governance process. The analyst is also consulted on the information and reporting requirements to determine suitable structures for the data mart.

Data Steward

The data steward is responsible for governing what data will be brought into the EDW, the governance of definitions for each of the data elements, and the identification and definition of business rules to ensure data quality. This means that, while the business analyst will see that definitions are attached to each field, the data steward must ensure that the definitions are complete and accurate.

The data steward supports processes that depend on an understanding of the business meaning of data. This includes the definition of requirements, the identification and analysis of sources, and the mapping of those sources to the system of record.

The data steward also supports naming standards, although, due to each steward being attached to a given subject area domain, or set of domains, they are not responsible for setting or maintaining these standards.

Data Quality Analyst

The data quality analyst is responsible for assessing the quality of data at various layers. Initially, the quality analyst is engaged in the activity of running the data profiling functions and any interrogation queries. While the business analyst may draft queries, it is ultimately the responsibility of the quality analyst to ensure that the analyst’s questions get answered. Later in the development life cycle, the data quality analyst is responsible for testing the data populating the system of record and the data mart.

The quality analyst enables data analysis, and so supports the governance of data content in assessing the integrity and suitability of the sources.

The data quality analyst may be consulted on questions arising from mapping sources to the system of record. They may offer technical insights or be able to perform additional quality tests to support the mapping exercise.

The requirements and source fields are an input to the quality analyst’s activities, and so he/she is informed of the results of these processes.

Enterprise Data Architect

The enterprise data architect is responsible for the governance of data architecture, including the establishment of principles and guidelines, as well as reviews and final sign-off of the data models. He/she is also responsible for establishing guidelines for logical and physical naming standards.

The enterprise data architect is consulted on all data modelling activities, relating to both the system of record and the data mart. Data models for any other component of the architecture will also pass through enterprise architecture for consultation.

The governance of data content relates to the placement of relevant data within the context of the reference architecture. The governance process involves the consultation of the enterprise data architect.

The requirements and source fields are an input to the enterprise data architect’s activities, and so he/she is informed of the results of these processes.

Data Modeller

The data modeller is responsible for all processes that are directly related to data model artifacts, including the logical mapping of the source to the system of record, logical modelling and physical modelling.

The data modeller supports processes that modelling depends on, such as the analysis of the sources through profiling and interrogation, as well as processes that depend on the models, such as the documentation of the physical mappings to populate both the system of record and the data mart.

The governance of business definitions, naming standards and data architecture all pertain directly to the data models; and so the data modeller supports these processes, by ensuring that all fields contain complete, well-formed and accurate definitions and conform to standard naming conventions, and that the structure aligns with the design standards of the given component within the reference architecture.

The data modeller is informed of requirements and source fields, as these are inputs to the process of designing data structures.

Part II will complete the list of EDW Roles and Responsibilities.

Measuring Success

When measuring success, the notion of what value is being offered to the business is always raised. This is fine and well – and should be assessed. However, this is done as part of the business rationale given in the requirements for a project, or even earlier, as part of the project charter. If the requirements of a given project are met, then it follows that the value of that project’s goals is realized. But that doesn’t give the EDW the ability to measure itself. How long is it taking to deliver projects? How do you quantify how much is being delivered? Is the work being done with sufficient accuracy and efficiency? Once it is delivered, is it being used?

EDW management needs a set of measures to determine its own performance; in terms of its own qualitative capabilities, as well as quantitative measures of data quality, process quality and utilization of the system.

Here is a list of potential measures of the EDW; measures that can continue to be applied beyond initial delivery to monitor ongoing performance. Success criteria are broken into capabilities and measures, with the measures further divided into three sections:

  • Data Quality – measuring accuracy
  • Process Quality – measuring efficiency
  • Utilization – measuring productivity


Capabilities

  • Data Quality Assessment
    The ability to assess the accuracy and integrity of data. Data profiling and interrogation capabilities are acquired by setting up a suitable staging area with profiling and SQL query tools.
  • Data Lineage
    The ability to track the source of data elements used in reports and found in the various layers of the EDW; as well as make visible the transformations that may be applied along the way.
  • Impact Analysis
    The ability to assess the impact of a change in a source data element to EDW objects and processes such as target fields, ETL processes and reports.
  • Model Validation
    The ability to assess the validity and completeness of data models with regards to information and reporting requirements.
  • Release Management
    The ability to assess the extent to which requirements have been met and at what stage of development each required data element sits; as well as identify overlapping requirements across projects.
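Impact analysis, as described above, amounts to walking a dependency graph from a source element to everything downstream of it. The sketch below is a minimal illustration of that traversal; the lineage entries (source fields, system-of-record fields, mart fields, reports) are hypothetical examples, not metadata from the article, and in practice the edges would be loaded from a metadata repository or ETL tool.

```python
# Hypothetical lineage edges: each element maps to the downstream
# EDW objects (target fields, ETL outputs, reports) it feeds.
LINEAGE = {
    "src.trade.amount": ["sor.position.amount"],
    "sor.position.amount": ["mart.fact_position.amount"],
    "mart.fact_position.amount": ["report.daily_pnl"],
}

def impact_of(element, lineage=LINEAGE):
    """Return every downstream object affected by a change to `element`."""
    impacted, stack = set(), [element]
    while stack:
        for child in lineage.get(stack.pop(), []):
            if child not in impacted:
                impacted.add(child)
                stack.append(child)
    return impacted

print(sorted(impact_of("src.trade.amount")))
# → ['mart.fact_position.amount', 'report.daily_pnl', 'sor.position.amount']
```

Reversing the direction of the same edges gives data lineage: from a report field back to its sources.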

Data Quality Measures

  • Missing Values Count
The number of fields whose proportion of null values exceeds the acceptable threshold.
  • Single Values Count
The number of fields that contain only a single distinct non-null value where more than one value is expected.
  • Zero Values Count
The number of numeric fields whose proportion of zero or null values exceeds the acceptable threshold.
  • Special Characters Count
The number of alphanumeric fields containing one of a set of special characters (e.g., #, $, &).
  • Invalid Values Count
    The number of fields with values that are outside the expected range. For classifications, this can be automated with a cross-reference of expected and actual values.
  • Invalid Dates Count
    The number of date fields with invalid dates, either malformed, inconsistently formatted or outside the expected range.
  • Multiple Domains Count
    The number of fields that contain multiple domains of values; either due to lack of rationalization or overloading.
  • Aberrant Records Count
    The number of records that are rejected, for any reason, from the source as they are processed.
  • Misidentified Fields Count
    The number of fields, the name and/or definition of which, do not accurately describe their content.
  • Orphan Records Count
    The number of records that lack expected relationships to other entities. (e.g., Positions without Securities, Securities without Issuers, Transactions without Stores)
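Several of these counts can be produced directly in a staging area with profiling or SQL query tools. As a minimal sketch, assuming a staged table is available as plain Python rows (the sample data and field names are hypothetical), three of the measures above could be computed like this:

```python
import re
from datetime import datetime

# Hypothetical sample of a staged source table.
ROWS = [
    {"qty": 0,    "code": "A",  "trade_dt": "2013-02-30"},
    {"qty": None, "code": "A#", "trade_dt": "2013-02-01"},
    {"qty": 5,    "code": "A",  "trade_dt": None},
]

def null_ratio(rows, field):
    """Input to the Missing Values Count: proportion of nulls in a field."""
    return sum(1 for r in rows if r[field] is None) / len(rows)

def special_char_fields(rows, fields, pattern=re.compile(r"[#$&]")):
    """Special Characters Count: fields containing one of a set of characters."""
    return {f for f in fields for r in rows
            if isinstance(r[f], str) and pattern.search(r[f])}

def invalid_dates(rows, field, fmt="%Y-%m-%d"):
    """Invalid Dates Count: malformed or impossible dates in a field."""
    bad = 0
    for r in rows:
        if r[field] is None:
            continue
        try:
            datetime.strptime(r[field], fmt)
        except ValueError:
            bad += 1
    return bad

print(round(null_ratio(ROWS, "qty"), 2))   # → 0.33 (1 of 3 rows is null)
print(special_char_fields(ROWS, ["code"])) # → {'code'}
print(invalid_dates(ROWS, "trade_dt"))     # → 1 ("2013-02-30" is impossible)
```

Each function returns a raw count or ratio; comparing it against the acceptable threshold for the field yields the pass/fail tally reported by the measure.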

Process Quality Measures

  • Malformed Definitions Count
    The number of fields that are blank or have only the logical name as a description.
  • Non-standard Names Count
    The number of tables and fields with names that contain components not found in the list of standard abbreviations.
  • Source Mappings Count
    The number of required source fields with no mapping to target tables.
  • Target Mappings Count
    The number of target fields with no mapping to source fields where a source is expected.
  • Productivity Count
The number of fields added to, and populated in, the System of Record and Data Mart each month. The other EDW component sectors can be tracked in this way as well.
  • ETL Performance Measure
    The length of time it takes to load the System of Record and Data Mart. (Use number of records, tables and columns being processed to determine a relative measure.)
  • Rework Count
The number of issue tickets generated. These may be further broken down by type and severity.
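The first two process quality measures above lend themselves to automated checks against model metadata. This is a minimal sketch assuming field metadata is available as plain dictionaries; the abbreviation list and sample names are illustrative, not taken from any actual standard.

```python
# Hypothetical approved abbreviation list for physical names.
STANDARD_ABBREVIATIONS = {"CUST", "ACCT", "BAL", "DT", "ID", "TXN"}

def nonstandard_names(names, standard=STANDARD_ABBREVIATIONS):
    """Non-standard Names Count: names with a component not in the list."""
    return [n for n in names if any(p not in standard for p in n.split("_"))]

def malformed_definitions(fields):
    """Malformed Definitions Count: descriptions that are blank or merely
    repeat the logical name."""
    return sum(1 for f in fields
               if not f["description"].strip()
               or f["description"].strip().lower() == f["logical_name"].lower())

print(nonstandard_names(["CUST_ACCT_ID", "TXN_DT", "CUSTOMER_BAL"]))
# → ['CUSTOMER_BAL'] ("CUSTOMER" is not an approved abbreviation)

print(malformed_definitions([
    {"logical_name": "Customer Balance", "description": "customer balance"},
    {"logical_name": "Trade Date", "description": "Date the trade was executed."},
    {"logical_name": "Account Id", "description": ""},
]))
# → 2 (one echo of the name, one blank)
```

Running such checks on every model release turns the counts into a trend that can be monitored over time.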

Utilization Measures

  • Reports Count
The number of reports in active use each period.
  • Users Count
    The number of users running reports each period.
  • Access Count
    The number of access requests (through business intelligence tools) by table each period.
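All three utilization measures can be derived from a single access log captured by the business intelligence layer. The sketch below assumes a hypothetical log of (period, user, report, table) tuples; the field layout and sample values are illustrative only.

```python
from collections import Counter

# Hypothetical BI-tool access log: (period, user, report, table accessed).
ACCESS_LOG = [
    ("2013-06", "alice", "daily_pnl", "fact_position"),
    ("2013-06", "bob",   "daily_pnl", "fact_position"),
    ("2013-06", "alice", "exposure",  "fact_exposure"),
    ("2013-07", "alice", "daily_pnl", "fact_position"),
]

def utilization(log, period):
    """Compute the three utilization measures for one period."""
    rows = [r for r in log if r[0] == period]
    return {
        "reports": len({r[2] for r in rows}),            # Reports Count
        "users": len({r[1] for r in rows}),              # Users Count
        "access_by_table": Counter(r[3] for r in rows),  # Access Count
    }

print(utilization(ACCESS_LOG, "2013-06"))
# → {'reports': 2, 'users': 2,
#    'access_by_table': Counter({'fact_position': 2, 'fact_exposure': 1})}
```

Tracking these per period shows whether delivered content is actually being used, which is the productivity question the section opens with.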
