
Archive for the ‘EDIQ’ Category

Yesterday, as I was driving to work, there was fog everywhere in the area where I live; we were fogged in, so to speak. The typical commute from my house to the nearest freeway takes about 10 minutes; yesterday, it took 25. Visibility was poor; I could hardly see more than 100 yards ahead of me, and about the same distance behind. This meant I was driving very cautiously, not at all confident about what was ahead of me. While driving through this dense fog, a thought came to my mind: isn't it true that business decision-makers face a similar predicament when they lack reliable, high-quality data for decision-making?

Poor data quality means less visibility into the organization's performance; it also impairs decision-making based on actual data. As with fog, poor data quality means business decisions are made slowly, overcautiously, and often on gut feel rather than facts. Slow decision-making can cost a business its edge over the competition. I feel there is a lot in common between driving through fog and trying to run a business on poor-quality data.

As the sun rises and the temperature climbs, fog burns off. In the same way, effective data quality and data governance initiatives help burn away the fog created by poor data quality. Burning off the fog is a slow and steady process; all the right conditions need to exist before the fog disappears. It is the same with addressing data quality holistically within the enterprise. The right conditions need to be created, in terms of executive sponsorship, an understanding of the importance of good data quality, a clear demonstration of the value created by data assets, and so on, before the true fruits of data quality initiatives can be harvested.

Superior data quality, and the timely availability of high-quality data, have a significant impact on day-to-day business operations as well as on the strategic initiatives a business undertakes.



This is the third post in a series addressing the "Why, What, and How?" of getting executive sponsorship for data governance initiatives. In my last post, Data Governance Litmus Test: Know thy KPIs, I explored the importance of knowing the KPIs in order to link the outcomes of data governance initiatives to organizational strategy. In this post, I explore why it is important to know the specific goals behind the KPIs that executives monitor periodically in pursuit of that strategy.

Data governance initiatives typically span multiple organizations, key business processes, heterogeneous systems and applications, and people from several lines of business. Any time one is dealing with such a complex cast of players and stakeholders, it is crucial to articulate the business goals and the impact the work at hand will have on them. Once people understand the magnitude of that impact, and how they will be responsible for it, getting their cooperation and alignment becomes relatively easy.

Once you understand the KPIs that matter organizationally, you need to drill down one level to understand which specific goals are important. The process of understanding the specific goals will undoubtedly reveal many factors that contribute to fulfilling the overall goals.

For example:

Suppose one of the major KPIs executives are tracking is overall spend. At this stage, it is important for the data governance initiative team to understand the specific goals around this KPI. For example, they could be:

1. The chief procurement officer has been asked to reduce spend by 2% within four quarters.

2. A 2% reduction across the board represents $80 million in savings.

3. This savings alone would improve the organization's profitability by almost a penny per share, which should ultimately reflect positively in the share price and benefit all employees of the organization.
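As a back-of-the-envelope check, these goals already pin down the scale of the numbers involved. Here is a minimal sketch of the implied arithmetic (the figures are derived from the example goals above, ignoring taxes and other real-world adjustments; they are illustrative, not real company data):

```python
# Figures implied by the example goals above; illustrative, not real company data.
target_reduction = 0.02          # CPO asked to reduce spend by 2%
projected_savings = 80_000_000   # the 2% reduction is said to equal $80M

# Implied total addressable spend: $80M / 2% = $4B
total_spend = projected_savings / target_reduction
print(f"Implied total spend: ${total_spend:,.0f}")

# "Almost a penny per share" implies roughly 8 billion shares outstanding
# (pre-tax, so treat this as a rough upper bound).
eps_gain = 0.01
implied_shares = projected_savings / eps_gain
print(f"Implied shares outstanding: {implied_shares:,.0f}")
```

Knowing the implied spend base makes it much easier to size the contribution of any single data quality fix against the overall target.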

Once such details are known, establishing a dialogue with the chief procurement officer and his or her key advisers might further reveal that:

1. Their focus is going to be on three specific areas (specific products/raw materials).

2. Not having a single view of suppliers is a key concern. Because of this, they are unable to negotiate consistent pricing contracts with suppliers. They believe that streamlining contracts based on overall spend with suppliers and their subsidiaries will get them more than 70% of the way to their goal.

3. Supplier contracts are not being renewed consistently, resulting in higher costs in terms of minimum business guarantees and price-point guarantees.

Equipped with this information, it becomes much easier for the data governance team to highlight their efforts and link them to the overall goal of reducing spend. For example, with just the information gathered so far, one can already pinpoint that the teams working on supplier development, contract negotiations, pricing, and so on will be critical to bring on board with the data governance initiative. It is also clear from these nuggets that overall spend, the number of suppliers, and the number of materials/products being procured will be key metrics, and that the interrelationships among those metrics will be critical for linking ROI to initiatives such as cleaning supplier data or building supplier MDM.

With this information, the data governance team can now communicate, not only to their team members but also to the executives, that X percent of duplicate records in the supplier master potentially represents Y dollars of excess spend. The team will be able to explain not only how this can be fixed but also what is required to maintain this hygiene on an ongoing basis, given the impact it has on overall excess spend.
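To make the X-and-Y relationship concrete, here is a minimal sketch of one way to model it; the duplicate rate and the leakage factor are hypothetical assumptions for illustration, not figures from the example:

```python
# Hypothetical model linking duplicate supplier records (X%) to excess spend (Y$).
# All constants are illustrative assumptions.
total_spend = 4_000_000_000   # implied spend base from the earlier example
duplicate_rate = 0.08         # X: assume 8% of supplier master records are duplicates

# Assumed leakage: the fraction of spend flowing through duplicated supplier
# records that is lost because volume discounts are negotiated per record
# instead of per consolidated supplier.
leakage = 0.03

spend_through_duplicates = total_spend * duplicate_rate
excess_spend = spend_through_duplicates * leakage   # Y
print(f"Estimated excess spend: ${excess_spend:,.0f}")   # ~$9.6M in this sketch
```

Even a rough model like this gives the team a defensible dollar figure to place alongside the $80 million target when talking to executives.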

In summary, it is really important to understand the goals behind the "what?" of the organizational strategy. Other indirect benefits of this kind of exercise are:

1. Establishing communication and contacts with the business stakeholders.

2. Understanding where you can focus up front for the highest impact.

3. Learning the language you can use to effectively communicate the ROI of data governance back to the executives.

In my next post, I will explore who puts these KPIs together for the executives today. These people are the most critical players on the data governance team, at both the execution and implementation levels, once the initiatives kick off.

Previous posts on this or related topics:

Litmus Test for Data Governance Initiatives: What do you need to do to garner executive sponsorship?

Data Governance Litmus Test: Know thy KPIs

Suggested next posts:

Data Governance Litmus Test: How and who is putting together metrics/KPIs for executives?

Data Governance Litmus Test: Do You Have Access to the Artifacts Used by Executives?


Part I

Many of us have been using Agile methodologies very successfully for product development or IT projects. I have noticed that Agile methodologies are also very well suited to addressing enterprise data and information quality (EDIQ) initiatives. I almost feel that Agile and enterprise data quality are a match made in heaven.

Let us inspect the key tenets of Agile methodologies and relate them to what one has to go through in addressing enterprise data/information quality (EDIQ) issues.

  1. Active user involvement (a collaborative and cooperative approach): Fixing data quality issues has to be done in collaboration with the data stakeholders and business users. Creating a virtual team in which business, IT, and data owners all participate is critical to the success of data quality initiatives. While IT provides the necessary firepower in terms of technology and the means to correct data quality, it is ultimately the business and data owners who decide on the actual fixes.
  2. Team empowerment to make and implement decisions: Executive sponsorship and empowerment of the team working on data quality are key components of a successful enterprise data/information quality initiative. Teams should be empowered to make the necessary fixes to the data and the processes, and to enforce the newly defined or refined processes, both to address immediate data quality issues and to ensure the ongoing data quality standard is met.
  3. Incremental, small releases, iterated: As we know, a big-bang, fix-it-all approach to data quality does not work. To address data quality realistically, an incremental approach with iterative correction is the best way to go. This has been discussed in a couple of recent articles: in "Missed It by That Much" by Jim Harris, and in my own article.
  4. Time-boxing the activity: Requirements for data quality evolve, and the scope of activities usually expands once the team starts working on the initiative. The key to success is to chunk the work and demonstrate incremental improvements in a time-boxed fashion. This almost always forces data quality teams to prioritize data quality issues and address them in priority order, which helps deploy the organization's resources optimally for the biggest bang for the buck.
  5. Testing/validation is integrated into the process, early and often: This is my favorite. Many times, data quality is addressed by first fixing the data itself in environments like data warehouses/marts or alternate repositories for immediate impact on business initiatives. Testing these fixes for accuracy and validating their impact provides a framework for how you ultimately want data quality issues fixed (what additional processes you might inject, what additional checks you might run on the source-system side, what patterns you are seeing in the data, etc.). Early testing and validation create immediate impact on business initiatives, and the business side will be more inclined to invest and dedicate resources to addressing data quality on an ongoing basis.
  6. Transparency and visibility: Throughout the work of fixing data quality, it is extremely important to provide clear visibility into the issues and impact of poor data quality, the effort and commitment it will take to fix it, and the ongoing progress toward the business's data and information quality goals. Maintaining a scorecard of all data quality fixes and showing the trend of improvements is a good way to provide that visibility (see the sketch after this list). I discussed this in my last article, and here is a sample scorecard.
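To illustrate tenets 5 and 6, here is a minimal sketch of validation checks feeding a simple scorecard row; the checks, field names, and sample records are hypothetical stand-ins for whatever profiling your initiative actually runs:

```python
# Minimal sketch: validation checks feeding a data quality scorecard.
# Field names, checks, and sample records are hypothetical.
from datetime import date

def completeness(records, field):
    """Fraction of records with a non-empty value for `field`."""
    filled = sum(1 for r in records if r.get(field))
    return filled / len(records) if records else 0.0

def duplicate_rate(records, key_fields):
    """Fraction of records whose normalized key collides with an earlier record."""
    seen, dupes = set(), 0
    for r in records:
        key = tuple(str(r.get(f, "")).strip().lower() for f in key_fields)
        if key in seen:
            dupes += 1
        seen.add(key)
    return dupes / len(records) if records else 0.0

suppliers = [
    {"name": "Acme Corp", "tax_id": "12-345", "country": "US"},
    {"name": "ACME corp", "tax_id": "12-345", "country": "US"},  # duplicate
    {"name": "Globex",    "tax_id": "",       "country": "DE"},  # incomplete
]

# One scorecard row per validation run; appending rows over time yields the trend.
scorecard_row = {
    "run_date": date.today().isoformat(),
    "tax_id_completeness": completeness(suppliers, "tax_id"),
    "duplicate_rate": duplicate_rate(suppliers, ["name", "tax_id"]),
}
print(scorecard_row)
```

Running such checks early and often (tenet 5) and appending one row per run produces the trend of improvements that gives the initiative its visibility (tenet 6).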

There are many other aspects of Agile methodologies that apply to enterprise data quality initiatives:

a) Capturing requirements at a high level and then elaborating them in visual formats in a team setting

b) Not expecting to build a perfect solution in the first iteration

c) Completing the task at hand before taking up another task

d) Having a clear definition of what task "completion" means, etc.

In summary, I really feel that Agile methodologies can be easily adapted and used to implement enterprise data/information quality initiatives. Using Agile methodologies will make success likelier and faster. Agile and EDIQ are like a perfect couple, "made for each other."

In my next post, I will take a real-life example and compare and contrast actual artifacts of Agile methodologies with the artifacts that must be created for enterprise data/information quality (EDIQ) initiatives.

Resources: There are several sites about Agile methodologies; I really like a couple of them:

  1. Manifesto for Agile Software Development
  2. The book "Scrum and XP from the Trenches" by Henrik Kniberg (an agile war story)
  3. Agile Project Management Blog
