
Archive for the ‘Management’ Category

The year 2011 has been somewhat different for the data management field in terms of manifestos and predictions about what the future holds. At least in data management (DQ/DG, etc.), there have been predictions about what will not happen. In her blog post "Jill's anti-predictions of 2011," Jill Dyche identifies what will not happen in 2011 as far as data management, data governance and MDM are concerned within organizations, and Dylan Jones, in his post proposing an (anti) data quality manifesto, suggests creating an anti-manifesto as a viral (anonymous) marketing device to force awareness of how data is perceived within management ranks. This trend is rooted in our collective experience, built up over the years, of management's lack of commitment and sponsorship to support, plan and execute holistic data governance strategies that deliver high-quality data throughout the organization for decision-making and operations.

The more I think about it, the more I feel that, in many instances, ignorance about the state of the data (or of data quality) is by design rather than the result of a genuine lack of understanding. Businesses usually hire smart people into executive management roles because of the capabilities and the smarts they bring to the table. Many of these executives probably have MBAs and have gone through coursework that highlights the importance of using accurate data in decision-making. Many have probably worked with data in some capacity throughout their careers and have learned the importance of fact-based rather than gut-feel decision-making.

Unwillingness to commit to data management strategies may stem from factors that have to do with how executives' success is evaluated and rewarded:

1. Pressure to perform on a quarterly basis (managing expenses). Many public companies report their financial results quarterly; every executive's focus is on maximizing sales and profitability in those 90 days, and every attempt is made to curb expenses that seem unwarranted (in the executive's mind).

2. Short-term contracts that CEOs (and other executives) have with the board are not conducive to finding long-term solutions; executives have to prove their worth in a short period of time. (In the paper "CEO Employment Contract Horizon and Myopic Behavior," Moqui Xu concludes that CEOs with short-term contracts invest less than their peers and tend to sacrifice long-term investments for short-term value maximization.)

3. The attitude of "If I can get 'my' numbers and (presumably correct) data without investing more, why spend money and effort on it?" Little do they know about the manual effort involved in getting that accurate data to them, day in and day out.

So how can we make a case for investments in Data Management initiatives?

It's human nature to work hard to avoid pain and negative outcomes; as human beings, we will do more to avoid pain and a negative outcome than to ensure a positive result. Executive management will pay more attention to your proposals and business cases for data management when faced with situations that threaten the performance of the overall business. A restatement of financial results, fines from governing bodies, de-certification, refusal by auditors to sign off on compliance, or the introduction of new compliance and regulatory legislation (it's no coincidence that highly regulated industries such as insurance and health care are further ahead in implementing and adopting data quality and data governance initiatives) are examples of major negative events within organizational operations, which I call compelling events, that can be used effectively to get executives to listen to the business case for data management.

Be ready with your business case and proposal at all times. When the time is right, present the case to executive management for approval and sponsorship, and highlight how the initiatives you are proposing will either help avoid these negative situations or lessen their impact and, as a bonus, support the strategic goals of the organization.

For example, Utopia, Inc. recently highlighted in their blog how an inaccurate statement of revenues presented to their executives rekindled the focus on the data quality/governance initiative within the organization. This is not to say that they were not committed to data governance or data management; in fact, they already had some of it in place. But the incident gave the data issues executive visibility and, with it, renewed executive commitment to the data governance and data management initiatives.

I'm not saying this is the only way to get executive sponsorship for data management initiatives. There are organizations that will adopt data management initiatives proactively. Many CEOs understand the strategic inflection points in their industry (in the sense of Andy Grove's Only the Paranoid Survive) and realize the importance of effective data management in navigating changing business conditions. That almost always results in proactive investment in, and adoption of, data management business cases.

In an ideal world, businesses would adopt best practices for data management from the ground up, which would help them leverage data as an asset; effective data management would help organizations avoid getting into unfavorable situations in the first place. Sometimes, though, to make the business case one has to choose the appropriate timing, even when it seems counterintuitive to do so. Sometimes it has to be that way…



This is the eighth entry in a blog series highlighting the criticality and importance of executive sponsorship for data governance initiatives. So far I have explored a step-by-step approach to developing the case for data governance by connecting initiatives under the data governance umbrella with business strategy and outcomes.

In the last post we explored how assessing data quality, reliability and timeliness helps establish a baseline of the data issues within the organization.

As data governance teams explore the data and the policies and procedures around handling it, it is important to catalog key findings in a way that lets key stakeholders clearly understand the issues at hand and their impact. Clearly documenting data quality issues, policy issues and any other systemic issues associated with the specific business process (the one with the highest influence on strategy outcomes) is essential for ultimately gaining executive sponsorship for the initiatives that strive to fix those issues.

Attached is a sample that demonstrates how findings could be summarized. If you are using repositories or tools to capture some of this metadata, I would highly recommend taking the effort to summarize those findings in an easy-to-understand fashion. This will help in articulating how data management issues are impacting the overall business and, specifically, some of the key goals the organization is trying to manage.

The key here is clarity, simplicity and relevance. Providing data/metrics showing how current the data management issues are helps establish the credibility of your findings (in many instances these metrics may not be readily available; work with your counterparts on the business side and capture them as part of your discovery process). Always remember that your findings are only as good as stakeholders' understanding of them and of their impact. That is why it is important to present the findings in a simple yet impactful form.
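As a rough illustration (not taken from the attached sample), here is a minimal sketch in Python of how such findings might be summarized for stakeholders; the field names and example rows are hypothetical assumptions.

```python
# A minimal, illustrative findings summary: each finding ties a data issue to the
# business process and KPI it affects, plus a rough measure of its extent and impact.
# Field names and example rows are assumptions for illustration only.
findings = [
    {
        "finding": "Duplicate customer records in CRM",
        "business_process": "Order-to-cash",
        "kpi_affected": "Days Sales Outstanding",
        "measured_extent": "4.2% of active customer records",
        "impact": "Invoices sent to wrong account; collections delayed",
    },
    {
        "finding": "Missing industry code on opportunities",
        "business_process": "Sales forecasting",
        "kpi_affected": "Forecast accuracy by vertical",
        "measured_extent": "11% of open opportunities",
        "impact": "Vertical-level forecasts unreliable",
    },
]

# Render a simple text summary a stakeholder can scan in seconds.
for f in findings:
    print(f"{f['finding']}")
    print(f"  Process: {f['business_process']} | KPI: {f['kpi_affected']}")
    print(f"  Extent:  {f['measured_extent']}")
    print(f"  Impact:  {f['impact']}\n")
```

However the summary is produced, the point is the same: one finding per line of sight, tied to a process and a KPI the stakeholder already cares about.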

At this point you are three-fourths of the way to executive buy-in: you have done your homework, identified the data management issues, and presented your findings and their impact on key performance indicators. What remains is a formal agreement (a handshake) with stakeholders on a common understanding of the impact and the possible course of action. In the next blog posts, I will discuss how to go about reaching this agreement and what additional information it might take to get there.

In the meantime, feel free to share your presentations, ideas or thoughts on how you explained your findings to key stakeholders in support of ongoing data governance investments.

Previous Relevant Posts:

Litmus Test for Data Governance Initiatives: What do you need to do to garner executive sponsorship?

Data Governance Litmus Test: Know thy KPIs

Data Governance Litmus Test: Know goals behind KPIs

Data Governance Litmus Test: How and who is putting together metrics/KPIs for executives?

Data Governance Litmus Test: Do You Have Access to the Artifacts Used by Executives?

Data Governance Litmus Test: Systems, Processes and Data Behind the KPIs and Goals

Data Governance Litmus Test: Quality, Reliability and Timeliness of the Data


This is the sixth entry in a blog series highlighting the critical nature and importance of executive sponsorship for data governance initiatives. In the last few entries, I explored the need to understand the KPIs, the goals behind those KPIs, and the necessity of getting your hands on the actual artifacts executives use to review those KPIs and goals.

My approach has been very simple and straightforward: data governance initiatives absolutely need to be able to demonstrate impact on the top and bottom lines by helping executives improve the KPIs that are used as the means to achieve higher profitability, lower costs and compliance. The process of garnering executive sponsorship is a continuous one. Visibility of the data governance organization and of its impact across the board helps establish awareness and understanding of how data governance initiatives help the organization, and that visibility and awareness make it easier to maintain ongoing executive sponsorship.

Once you, as a data governance team, clearly understand the KPIs and the goals behind them, and have access to the artifacts used by executives, it is time to go back to the technical details. At this stage it is extremely important to map which systems, which business processes automated by those systems, and which data are directly or indirectly responsible for the outcome of those KPIs. This process of mapping the dependencies between KPIs, systems, business processes and data can be partly automated using metadata management repositories. It is important to capture this information with tools and technologies so that it is readily available and can be shared with other teams and systems; a technology solution will also facilitate change management and impact analysis in the future. The lineage and metadata I am talking about here go beyond technical metadata and into the realm of business (process and semantic) metadata as well.
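As a rough illustration of the kind of dependency mapping described above, here is a minimal sketch using plain Python structures; in practice a metadata repository would hold and version this, and every KPI, system and data set name here is a hypothetical assumption.

```python
# Illustrative KPI lineage map: which business processes, systems and data sets
# feed each KPI. All names are hypothetical; a metadata repository would normally
# hold, share and version this information.
kpi_lineage = {
    "Net new vs. repeat revenue": {
        "business_processes": ["Opportunity management", "Order booking"],
        "systems": ["CRM", "ERP"],
        "data_sets": ["crm.opportunities", "erp.sales_orders", "mdm.customers"],
    },
    "Days Sales Outstanding": {
        "business_processes": ["Invoicing", "Collections"],
        "systems": ["ERP"],
        "data_sets": ["erp.invoices", "erp.payments", "mdm.customers"],
    },
}

def impacted_kpis(system_name):
    """Simple impact analysis: which KPIs depend on a given system?"""
    return [kpi for kpi, deps in kpi_lineage.items() if system_name in deps["systems"]]

print(impacted_kpis("CRM"))  # -> ['Net new vs. repeat revenue']
```

Even a toy map like this makes impact analysis ("if we change the CRM, which executive KPIs could move?") a lookup rather than a meeting.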

This dependency information will come in very handy in establishing the scope and definition of the efforts being planned for a specific data governance initiative or project. When collecting information about the systems, the business processes they automate and the data, it is important to capture relevant information with a long-term, repeatable perspective. Information such as:

1. System name and general information.

2. Landscape information (where is it installed/managed/housed? which hardware and software are used? what are the touch points with other systems? etc.)

3. Ownership and responsibility information from both the business and technology perspectives (Which technology teams are responsible for managing, changing and maintaining these systems? Who are the business stakeholders who approve any changes to the behavior of these systems? etc.)

4. Change management processes and procedures concerning the systems and data.

5. End-user/consumer information (Who uses it? How, when and for what do they use it?).

6. Any life cycle management processes and procedures (for data and systems) which might currently be in place.

7. The specific business processes and functions being automated by the systems.

Many times, some of this information is already available with the teams managing these systems. This exercise should identify that information and make note of it; the point is not to duplicate it. Where the information does not exist, the exercise will help capture it in a form that is relevant not only for the data governance initiatives but also usable by system owners and other stakeholders.

The goal of this step is to baseline information about the systems, the business processes they automate and the data. This information will help in subsequent stages with establishing change management processes, defining policies, and possibly implementing and monitoring policies around data management and governance.
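To make the seven kinds of information listed above concrete, here is a minimal capture-schema sketch; the field names and the example record are illustrative assumptions, not a prescribed standard, and a metadata management tool would normally provide a richer, versioned structure.

```python
from dataclasses import dataclass, field

# A minimal capture schema for the system inventory described above (items 1-7).
# Field names and the example record are illustrative assumptions.
@dataclass
class SystemRecord:
    name: str                                             # 1. system name and information
    landscape: dict = field(default_factory=dict)         # 2. hosting, hardware/software, touch points
    business_owner: str = ""                              # 3. ownership and responsibility
    technology_owner: str = ""
    change_management: str = ""                           # 4. change processes and procedures
    consumers: list = field(default_factory=list)         # 5. end users / consumers
    lifecycle_processes: list = field(default_factory=list)   # 6. data/system life cycle management
    business_processes: list = field(default_factory=list)    # 7. business processes automated

crm = SystemRecord(
    name="CRM",
    landscape={"hosting": "on-premise", "touch_points": ["ERP", "marketing automation"]},
    business_owner="VP Sales Operations",
    technology_owner="CRM platform team",
    change_management="Monthly release board",
    consumers=["Sales reps", "Sales operations", "Finance"],
    lifecycle_processes=["Annual archival of closed opportunities"],
    business_processes=["Opportunity management", "Quoting"],
)
```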

From this phase onward, the data governance initiative starts transitioning into the nuts and bolts of the IT systems and landscape. In the next few blog posts, I will cover various aspects the data governance team should consider as it starts making progress toward establishing an official program and working on it.

Previous Relevant Posts:

Litmus Test for Data Governance Initiatives: What do you need to do to garner executive sponsorship?

Data Governance Litmus Test: Know thy KPIs

Data Governance Litmus Test: Know goals behind KPIs

Data Governance Litmus Test: How and who is putting together metrics/KPIs for executives?

Data Governance Litmus Test: Do You Have Access to the Artifacts Used by Executives?


Yesterday, as I was driving to work, there was fog everywhere in the area where I live; we were fogged in, so to speak. The typical commute from my house to the nearest freeway takes about 10 minutes; yesterday it took 25. Visibility was poor. I could hardly see more than 100 yards ahead of me, and about the same distance behind. This meant I was driving very cautiously, not at all confident about what was ahead of me. While driving through this dense fog, a thought came to mind: don't business decision makers go through a similar predicament when faced with a lack of reliable, high-quality data for decision-making?

Poor data quality means less visibility into the performance of the organization; it also impairs decision-making based on actual data. As in a fog, poor data quality means business decisions are made slowly, overcautiously and often on gut feel rather than facts. Slow decision-making can mean losing the edge a business has over its competition. There is a lot in common between driving through fog and trying to run a business on poor-quality data.

As the sun rises and the temperature increases, fog burns off. In the same way, effective data quality and data governance initiatives help burn away the fog created by lackluster data quality. Burning off the fog is a slow and steady process; all the right conditions need to exist before the fog disappears. It is the same with addressing data quality holistically within the enterprise: the right conditions need to be created, in terms of executive sponsorship, understanding of the importance of good data quality, clear demonstration of the value created by data assets and so on, before the true fruits of data quality initiatives can be harvested.

Superior data quality, and the timely availability of high-quality data, has a significant impact on day-to-day business operations as well as on the strategic initiatives a business undertakes.


This is the fifth entry in a series of blog posts highlighting how to go about securing executive sponsorship for data governance initiatives. In my last post I highlighted the need to understand the KPIs tracked by executives and the importance of clear and very specific knowledge of the goals behind those KPIs.

As you may have noticed, the steps one goes through to answer the litmus test questions help the data governance organization establish a direct relationship between data governance initiatives and organizational priorities. Getting executive sponsorship is not a one-shot deal; it is an ongoing process which needs to be initiated and maintained throughout the lifecycle of data governance initiatives.

It is important to get actual copies of the reports, presentations and summaries which executives use to review progress against the key KPIs in executive management meetings. This will help the data governance team in multiple ways:

  1. You will have a very clear understanding of how the information provided by the KPIs is consumed by executive management: who looks at it, and at what frequency.
  2. The process of getting these copies gives you access to executives, or to people around executives who can give you that access. This is extremely important as data governance programs seek executive sponsorship.
  3. It makes executives and the people around them aware that the data governance team is a critical recipient of these artifacts, so that in the future, should any KPIs, goals or expectations change, the executive office will notify the data governance team. This establishes the data governance team as part (or at least a recipient) of the priority/goal change management process.
  4. The artifacts will help you understand individual executives' styles around data presentation and consumption, which will be of immense help when you present the data governance ROI and case to them.
  5. Periodic copies of these artifacts let you establish a baseline for the KPIs and use that baseline to report progress on the data governance initiatives (a minimal sketch of such a baseline follows this list).
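Here is a minimal sketch of the KPI baseline idea from item 5; the KPI names, values, goals and dates are hypothetical assumptions.

```python
from datetime import date

# Illustrative KPI baseline log built from periodic copies of executive artifacts.
# KPI names, values, goals and dates are hypothetical.
kpi_baseline = {
    "Forecast accuracy": {"baseline": 0.72, "goal": 0.90, "as_of": date(2011, 1, 3)},
    "Repeat business share": {"baseline": 0.31, "goal": 0.40, "as_of": date(2011, 1, 3)},
}

def progress(kpi, current_value):
    """Share of the gap between baseline and goal closed so far."""
    b = kpi_baseline[kpi]
    gap = b["goal"] - b["baseline"]
    return (current_value - b["baseline"]) / gap if gap else 1.0

print(f"{progress('Repeat business share', 0.35):.0%} of the way to goal")  # ~44%
```

Reporting data governance progress against the same baseline the executives themselves reviewed is what makes the progress credible.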

As I write about these 10 litmus test questions for evaluating the level and extent of executive sponsorship for data governance programs, my approach has been to use them to create a journey for the data governance team, one which ultimately helps the team garner executive and business sponsorship. As you can see, working on answering these questions creates the necessary awareness and visibility among executives and business stakeholders, so that when the time comes to secure executive sponsorship, it is not a surprise to the key people who will be asked for their support.

Previous posts on this or related topics:

Litmus Test for Data Governance Initiatives: What do you need to do to garner executive sponsorship?

Data Governance Litmus Test: Know thy KPIs

Data Governance Litmus Test: Know goals behind KPIs

Data Governance Litmus Test: How and who is putting together metrics/KPIs for executives?


This is the fourth entry in a series of blog posts highlighting how to go about securing executive sponsorship for data governance initiatives. In previous posts, I have highlighted the need to understand the specific KPIs/metrics which executives track and the tangible goals being set against those KPIs.

Almost always, there is an individual or a group of individuals who work tirelessly to produce these reports with KPIs/metrics for executives. Often these individuals have a clear and precise understanding of how the metrics/KPIs are calculated and of what issues, if any, exist in the underlying data which supports them.

It is worthwhile to spend time with these people to get a leg up on understanding the metric/KPI definitions and the known data issues (data quality, consistency, system of record). Engaging these individuals also helps in winning the confidence of the people who know the actual details of the KPIs/metrics and of the processes for calculating and reporting on them. These individuals are likely to become part of your data governance team, and they are crucial players in winning the vote of confidence from executives regarding the value data governance initiatives create.

In one of my engagements, with a B2B customer, executive management had the goal of improving business with existing customers, and therefore wanted to track net new versus repeat business. Initially the sales operations team had no way of reporting on this KPI, so in the early days they reported using statistical sampling. Ultimately they created a field in their CRM system to capture new or repeat business on their opportunity data, and this field was used for the net new versus repeat business KPI reporting. Unfortunately, the field was always entered manually by a sales rep while creating the opportunity record. The sales operations team knew this was not entirely accurate, but had no way of getting around it.

In my early discussions with the sales operations team, when I learned about this, I did a quick assessment of a quarter's worth of data. After basic de-duping and some cleansing, I compared my numbers with theirs, and there was a significant difference. This really helped me get the sales operations team on board with data cleansing, and ultimately with data governance around opportunity, customer and prospect data. The discussion also helped us clearly define what business should be considered net new versus repeat.
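As a rough sketch of that kind of quick assessment (not the actual analysis), the following assumes a pandas DataFrame of opportunity data with hypothetical column names, derives new versus repeat from account history, and compares the result with the manually entered flag.

```python
import pandas as pd

# Rough sketch of a quick new-vs-repeat assessment. Column names and data are
# hypothetical; real work would also involve more thorough cleansing.
opps = pd.DataFrame({
    "opportunity_id": [1, 2, 3, 4, 5],
    "account_name":   ["Acme Corp", "ACME Corp.", "Globex", "Initech", "Globex"],
    "close_date":     pd.to_datetime(["2011-01-15", "2011-02-20", "2011-02-25",
                                      "2011-03-01", "2011-03-10"]),
    "rep_entered_type": ["New", "New", "New", "Repeat", "New"],  # manual flag
})

# Basic cleansing/de-duping: normalize the account key.
opps["account_key"] = (opps["account_name"].str.upper()
                                           .str.replace(r"[^A-Z0-9]", "", regex=True))

# Derive new vs. repeat: the first opportunity seen for an account is "New";
# any later opportunity for the same account is "Repeat".
opps = opps.sort_values("close_date")
opps["derived_type"] = opps.duplicated("account_key").map({False: "New", True: "Repeat"})

# Compare the derived classification with the manually entered flag.
print(pd.crosstab(opps["rep_entered_type"], opps["derived_type"]))
```

A cross-tabulation like this makes the gap between the manual flag and the derived answer visible at a glance, which is exactly the kind of evidence that gets an operations team on board.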

Obviously, as one goes through this process of collecting information about the metrics, the underlying data and the process by which the numbers are crunched, it helps to have proper tools and technology in place to capture this knowledge, for example:

a) Capturing definitions of the metrics

b) Capturing metadata around the data sources

c) Capturing lineage and the actual calculations behind the metrics, etc.

This process of capturing definitions, metadata, lineage and so on gives high-level visibility into the scope of things to come. The metadata and lineage can be used to identify the business processes and systems which impact the KPIs.
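A minimal sketch of what capturing items (a) through (c) might look like; the metric, its formula, its sources and its known issues are all hypothetical assumptions.

```python
# Illustrative capture of a metric definition, its sources and its lineage
# (items a-c above). All names, formulas and issues are hypothetical.
metric_catalog = {
    "repeat_business_rate": {
        "definition": "Share of closed-won revenue from accounts with at least "
                      "one prior closed-won opportunity",
        "calculation": "repeat_closed_won_revenue / total_closed_won_revenue",
        "sources": ["crm.opportunities", "mdm.customers"],
        "produced_by": "Sales operations weekly KPI workbook",
        "known_issues": ["'new vs. repeat' flag is entered manually by reps"],
    },
}
```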

In summary, finding the people behind the operation of putting together the KPIs helps identify subject matter experts who can give you clear, high-value pointers to the areas data governance initiatives need to focus on early in the process. It will also ultimately help you recruit people with the right skills and knowledge into your cross-functional data governance team.

Previous posts on this or related topics:

Litmus Test for Data Governance Initiatives: What do you need to do to garner executive sponsorship?

Data Governance Litmus Test: Know thy KPIs

Data Governance Litmus Test: Know goals behind KPIs

Suggested next posts:

Data Governance Litmus Test: Do You Have Access to the Artifacts Used by Executives?


It was a long day in Cincinnati: a full day of conference sessions, demos and discussions around data governance and data quality topics. Some of my colleagues and friends in the industry decided to retire to a bowling alley. Over the bowling and a few beers we naturally resorted to talking about the same topics we had been discussing all day. After a couple of hours of discussion, lanes #7, #8 and #9 came to the same conclusion: one of the toughest parts of data governance initiatives is ongoing executive sponsorship and the need to demonstrate tangible ROI.

On my flight back home I jotted down some thoughts on this topic and decided to create a basic list of questions which could be used as a litmus test to validate whether all the right steps are being taken to ensure ongoing executive sponsorship and tangible ROI proof points for a data governance initiative.

Everybody involved in any sort of data governance initiative knows the criticality and importance of executive sponsorship for the overall success and viability of the program.

It is really important to have the right level of understanding of organizational goals and drivers. With specific knowledge of organizational initiatives, it is much easier to link a data governance initiative to specific goals and drivers, and creating this link is what builds the necessary ROI case for data governance and garners executive sponsorship.

So here is the list of 10 simple yet relevant questions which I propose every data governance team use as a litmus test from time to time, to validate whether they are on the right track to ensure ongoing executive sponsorship and the capacity to demonstrate tangible ROI to the organization.

1. Every Monday morning the CEO and his direct reports meet to review organizational KPIs. Do you know precisely which metrics are looked at on a weekly basis?

2. Do you know what the goals are for those KPIs?

3. Do you know how each of those metrics/KPIs is put together, and by whom?

4. Do you know which KPIs are not meeting their desired goals?

5. Do you have a sample presentation or report which all of the executives look at in the Monday morning meeting?

6. Once you have an idea of the key metrics and of the people who put them together for executives, do you know which systems are responsible for generating and managing the raw data required for those metrics?

7. Do you have an understanding of the quality, reliability and timeliness of the data being used to put together those metrics?

8. Have you found issues with the quality, reliability or timeliness of the data, or with how the data is managed on an ongoing basis?

9. Have you shared your findings about the quality and reliability of the raw data behind the weekly KPIs with the executives responsible for those KPIs?

10. Have you reached a common understanding with the key executives whose KPIs are affected about the need to address data quality, reliability, timeliness or data management issues, and about the benefits of such actions/initiatives?

If you answered yes to all of the questions above, you are well on your way to generating tangible ROI and garnering executive sponsorship for your data governance initiatives, and you have a very high chance of achieving all of the goals of your data governance initiative.

On the other hand, if you did not answer yes to one or more of the questions, it is time to go back to the whiteboard and ask how well you have really justified the ROI, how realistic that case is, and whether you truly have executive sponsorship and support for your data governance initiatives.

I would love to hear your thoughts on this topic, and specifically whether you would add more questions, take away any of the above, or simplify any of them.

In my opinion, how and when you ask these questions and take appropriate action will differ based on where you are in the life cycle of the project. In future posts, I will discuss the relevancy of this litmus test, and other factors influencing your actions based on it, for two scenarios:

1. Teams that are in the beginning phases of proposing a data governance program that has not yet started.

2. Teams that are already working through a data governance program.

Suggested reading next:

Data Governance Litmus Test: Know thy KPIs

Data Governance Litmus Test: Know goals behind KPIs

Data Governance Litmus Test: How and who is putting together metrics/KPIs for executives?

Data Governance Litmus Test: Do You Have Access to the Artifacts Used by Executives?


Score Card for Prioritizing Data Quality Issues

A couple of weeks back I was having a conversation with a fellow CTO, who was demonstrating an analytics product to me. In many places in the demo (dashboards/reports) a lot of dimensional data was missing (for example industry verticals, product category, etc.). Naturally, a discussion around data quality broke out during the demo. The CTO mentioned that, from an analytics perspective, they do not encourage customers to spend time and energy fixing data quality issues if the records involved represent less than 1% of the overall dollar figures. I kind of agreed with his argument and justification (again, from a directional analytics perspective) for not fixing these data quality issues, because:

1) These issues do not interfere with the analysis if the analysis hinges on the directionality of the business.

2) The ROI from fixing these issues is not significant, as the data involved has less than a 1% impact on the directionality of the analysis (which is statistically insignificant).

This got me thinking that data quality is truly a multi-dimensional problem (like the story of the blindfolded men describing an elephant: each concludes it is a different object even though everyone is feeling the same elephant). As data quality professionals, it is important for all of us to bring that perspective into any data quality initiative. The best way to do this is to build a data quality scorecard combining the quality assessment with its impact on each context in which the data will be used. This type of scorecard can and should be used to prioritize the fixing of data quality issues, and it also helps in justifying the ROI of fixing them.

As indicated in the chart, each context is analyzed from the perspective of the data quality attributes, and each context is given a red, yellow or green indicator. Obviously any red indicators need to be addressed before the data can be used in that context. In this example, the scorecard demonstrates that compliance reporting requirements cannot be met until the data quality issues associated with credit ratings and addresses are completely resolved. This helps demonstrate the need, build the necessary ROI case and prioritize which attributes to address first.
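A minimal sketch of such a context-by-attribute scorecard; the contexts, attributes, completeness figures and thresholds are illustrative assumptions, not values taken from the chart.

```python
# Minimal sketch of a context-by-attribute data quality scorecard with
# red/yellow/green indicators. All contexts, attributes, completeness shares
# and thresholds are illustrative assumptions.
completeness = {            # share of records with a usable value
    ("Compliance reporting", "credit_rating"): 0.82,
    ("Compliance reporting", "address"):       0.90,
    ("Directional analytics", "industry_vertical"): 0.991,
    ("Directional analytics", "product_category"):  0.995,
}

def rag(share_complete, red_below=0.95, yellow_below=0.99):
    """Translate a completeness share into a Red/Yellow/Green indicator."""
    if share_complete < red_below:
        return "RED"
    if share_complete < yellow_below:
        return "YELLOW"
    return "GREEN"

for (context, attribute), share in sorted(completeness.items()):
    print(f"{context:22} {attribute:18} {share:6.1%}  {rag(share)}")
```

In practice the thresholds, and even the attributes measured, would differ per context; the point is the context-by-attribute view, not the specific numbers.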

I would love to hear how you have prioritized and justified data quality initiatives, and what tools and techniques you used.


Part I

Many of us have been using Agile methodologies very successfully for product development or IT projects. I have noticed that Agile methodologies are also very well suited to enterprise data and information quality (EDIQ) initiatives; I almost feel that Agile and enterprise data quality are a match made in heaven.

Let us inspect the key tenets of Agile methodologies and relate them to what one has to go through in addressing enterprise data/information quality (EDIQ) issues.

  1. Active user involvement (a collaborative and cooperative approach): Fixing data quality issues has to be done in collaboration with the data stakeholders and business users. Creating a virtual team in which business, IT and data owners participate is critical for the success of data quality initiatives. While IT provides the necessary firepower in terms of technology and the means to correct data quality, it is ultimately the business/data owners who decide on the actual fixes.
  2. Team empowerment for making and implementing decisions: Executive sponsorship and empowerment of the team working on data quality are key components of a successful enterprise data/information quality initiative. Teams should be empowered to make the necessary fixes to the data and the processes, and also to implement and enforce the newly defined or refined processes, both for addressing immediate data quality and for ensuring the ongoing data quality standard is met.
  3. Incremental, small releases, and iterate: As we know, a big-bang, fix-it-all approach to data quality does not work. To address data quality realistically, an incremental approach with iterative correction is the best way to go. This has been discussed in a couple of recent articles, in "Missed It by That Much" by Jim Harris and in my own article.
  4. Time-boxing the activity: Requirements for data quality evolve, and the scope of activities will usually expand as the team starts working on a data quality initiative. The key to success is to chunk the work and demonstrate incremental improvements in a time-boxed fashion. This almost always forces data quality teams to prioritize the data quality issues and address them in priority order, which helps deploy the organization's resources optimally and get the biggest bang for the buck (a rough prioritization sketch follows this list).
  5. Testing/validation is integrated into the process and done early and often: This is my favorite. Many times data quality is first addressed by fixing the data itself in environments like data warehouses/marts or alternate repositories, for immediate impact on business initiatives. Testing these fixes for accuracy and validating their impact provides a framework for how you ultimately want data quality issues fixed (what additional processes you might want to inject, what additional checks you might want on the source system side, what patterns you are seeing in the data, etc.). Early testing and validation create immediate impact on business initiatives, and the business side will be more inclined to invest and dedicate resources to addressing data quality on an ongoing basis.
  6. Transparency and visibility: Throughout the work of fixing data quality, it is extremely important to provide clear visibility into the data quality issues and their impact, the effort and commitment it will take to fix them, and the ongoing progress made toward the business goals for data and information quality. Maintaining a scorecard for all data quality fixes and showing the trend of improvement is a good way to provide visibility into the improvements made in enterprise data quality. I discussed this in my last article, and here is a sample scorecard.
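As a rough illustration of tenets 3 and 4, here is a minimal sketch of planning one time-boxed iteration from a prioritized backlog of data quality issues; the issues, impact scores and capacity are hypothetical assumptions.

```python
# Rough sketch of a time-boxed, prioritized iteration over data quality issues
# (tenets 3 and 4 above). Issue names, impact scores and capacity are hypothetical.
backlog = [
    {"issue": "Duplicate customer records", "impact": 9, "effort_days": 8},
    {"issue": "Missing industry codes",     "impact": 6, "effort_days": 3},
    {"issue": "Stale credit ratings",       "impact": 8, "effort_days": 5},
    {"issue": "Inconsistent country codes", "impact": 3, "effort_days": 2},
]

def plan_iteration(backlog, capacity_days=10):
    """Pick the highest-impact issues that fit into one time-boxed iteration."""
    plan, remaining = [], capacity_days
    for item in sorted(backlog, key=lambda i: i["impact"], reverse=True):
        if item["effort_days"] <= remaining:
            plan.append(item["issue"])
            remaining -= item["effort_days"]
    return plan

print(plan_iteration(backlog))
# -> ['Duplicate customer records', 'Inconsistent country codes']
```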

There are many other aspects of Agile methodologies which apply to enterprise data quality initiatives, such as:

a) Capturing requirements at a high level and then elaborating them in visual formats in a team setting

b) Not expecting to build a perfect solution in the first iteration

c) Completing the task at hand before taking up another task

d) A clear definition of what "completion" of a task means, etc.

In summary, I really feel that Agile methodologies can easily be adapted and used in implementing enterprise data/information quality initiatives, and that using them will ensure greater and quicker success. They are like a perfect couple, made for each other.

In my next post, I will take a real-life example and compare and contrast actual Agile artifacts with the artifacts that need to be created for enterprise data/information quality (EDIQ) initiatives.

Resources: There are several sites about Agile methodologies; I really like a couple of them:

  1. Manifesto for Agile Software Development
  2. There is a nice book, "An Agile War Story: Scrum and XP from the Trenches" by Henrik Kniberg
  3. Agile Project Management Blog


How many times in their careers are data quality professionals called upon only after data quality has become a pervasive issue in the organization? I always wonder why it has to be this way. Why can't data quality be considered, and injected into business initiatives, before it becomes such a big issue?

I personally think there is hope, and there is a way around it. We (data quality, integration, application and IT professionals) have to make sure that data quality becomes one of the most critical initiatives on the radar of executive management, and that it is championed by CIOs. IT can act as the enabler and evangelist in the whole process.

In a recent interview of Jim Harris conducted by Ajay Ohri, Jim gives an example in which a financial services company had a critical data source in the form of applications received through the mail or over the phone. The data entry personnel were compensated on how many applications they entered in a given time frame, whereas the most critical piece of information for the financial institution was a correct social security number (and there lies the issue). As Jim explains, whenever the social security number was missing or illegible, data entry personnel entered their own social security number in order to proceed with data entry. This clearly creates a very low-value database for the financial services company.

Had an IT/CIO data quality evangelist participated in the process of designing how this information is captured, and had they asked some critical questions about the intent and usage of the information (e.g., What is the most critical information on this application form? What happens if the information is not accurate? Are there ways to validate the information being entered?), one could easily have:

  1. Put checks in place to ensure the social security number is valid (format checks, cross-referencing with existing customers, etc.); a minimal validation sketch follows this list.
  2. Tied the reward model for the team to the number of applications entered with accurate, valid data rather than to the raw number of applications (aligning the goals of the organization with correct data quality through the right rewards model).
  3. Created a process in which doubtful transactions/records can be triaged and corrected before being accepted as complete records.
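A minimal sketch of the kind of front-end checks suggested in item 1 above; the format rules and the duplicate check are illustrative assumptions, not a complete validator.

```python
import re

# Rough sketch of front-end checks for item 1: validate the SSN format and flag
# values that repeat across apparently unrelated applicants. The rules here are
# illustrative assumptions, not a complete or authoritative validator.
SSN_PATTERN = re.compile(r"^(?!000|666|9\d{2})\d{3}-(?!00)\d{2}-(?!0000)\d{4}$")

def ssn_checks(ssn, applicant_name, seen_ssns):
    """Return a list of issues to triage before the record is accepted."""
    issues = []
    if not SSN_PATTERN.match(ssn):
        issues.append("SSN fails format/range check")
    prior_applicants = seen_ssns.get(ssn, set())
    if prior_applicants and applicant_name not in prior_applicants:
        issues.append("SSN already used by a different applicant")
    seen_ssns.setdefault(ssn, set()).add(applicant_name)
    return issues

seen = {}
print(ssn_checks("123-45-6789", "A. Smith", seen))   # []
print(ssn_checks("123-45-6789", "B. Jones", seen))   # ['SSN already used by a different applicant']
print(ssn_checks("000-12-3456", "C. Doe", seen))     # ['SSN fails format/range check']
```

Checks this simple would have caught the "enter your own SSN" workaround almost immediately, long before the downstream database lost its value.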

Again, my intent is not to pick on one instance or industry. The point I am making is that data/information quality should always be injected at the front end of a business process rather than bolted on at the back end, for both new and existing initiatives. How data quality will be ensured should be one of the inputs/drivers in implementing any new business initiative, side by side with the business objectives. Data quality is not just a technology issue but a business issue (hence, in the example above, I suggest that questions and suggestions from a data quality evangelist could influence how the team is compensated).

I know this discussion naturally leads into data/information governance and management, but there are small steps any IT organization (and its CIO) can take to fix this issue incrementally. Create a role whose responsibility (one of many) is to make sure that:

  1. Any business process or initiative which captures or modifies data has a set of requirements around the intended use of the information and assumptions about what that information will be.
  2. The data quality/validity checks which must be performed to achieve clean data for the intended business use are outlined.
  3. A monitoring system is in place to ensure that the outlined data quality validations are actually implemented and acted upon.
  4. The importance of correct data, and the correlation between data quality and information quality, is evangelized.

I would love to get your thoughts on this topic. I think that prevention is better than cure (we cannot always prevent, but we can try), and the time has come, given the explosion of data and the emphasis on objective, data-driven rather than gut-feel decisions, for all of us to start pushing for data quality at the front end of the process.

