Archive for the ‘Business Process’ Category

In his recent blog post titled Data and Process Transparency, Jim Harris makes the case that “a more proactive approach to data quality begins with data and process transparency”. This is very true of any organization striving for the highest quality data for its decision-making as well as for efficient business processes.

In order to embed transparency in every situation across the organization, organizations need to be data driven. What does it mean to be a data driven organization, you may ask? There is a great deal of literature on this topic, in print as well as on the web. I will try to simplify the discussion and say that, to me, an organization has become data driven when its culture of decision making is based purely on factual data (KPIs, metrics, etc.) and not on the gut feel, emotions and subjectivity of the individuals making the decisions. Of course, this is a very simplistic definition (just for the purpose of this blog).

Depending on organizational maturity, you may find organizations which are completely data driven, as well as organizations which are more mature in one area (vis-a-vis data driven decision-making) than in others. For example, in some cases the finance side of the organization might be much more data driven than either the marketing or the sales side.

Using data to make decisions drives both data and process transparency across the organization. It discourages the use of anecdotal information (and gut feel) and forces people to think in terms of realistic data and the evidence that data presents. Using specific KPIs/metrics also allows organizations to define issues in the underlying data or business processes much more readily.

For example, in a data driven organization, if the sales operations team is discussing order return rates, they cannot simply say that order returns are low despite poor addresses. They will say that they have a 1% order return rate, on an average of 125,000 orders shipped every month, because of poor shipping addresses. Expressing performance this way not only helps everyone involved understand the importance of good data quality, but also helps the organization build sensitivity around capturing good data in the first place. It also provides a ready-made business case for supporting the underlying data management initiatives.
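
To make the arithmetic behind this example concrete, here is a minimal sketch of how such a KPI statement translates into counts and dollars. The order volume and return rate come from the example above; the cost-per-return figure is an illustrative assumption, not a number from any real organization.

    # Return-rate KPI expressed in concrete terms (cost_per_return is an assumed figure).
    monthly_orders = 125_000            # average orders shipped per month
    bad_address_return_rate = 0.01      # 1% of orders returned due to poor shipping addresses
    cost_per_return = 12.50             # assumed handling + reshipping cost per returned order

    returned_orders = monthly_orders * bad_address_return_rate
    monthly_cost = returned_orders * cost_per_return

    print(f"{returned_orders:.0f} orders returned per month because of poor addresses")
    print(f"Estimated monthly cost of poor address data: ${monthly_cost:,.2f}")

Numbers like these turn a vague "we have a return problem" into a quantified business case for fixing address data.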

Transforming an organization into a data driven organization is a gargantuan change management task. It requires a significant cultural/thinking change up and down the organizational hierarchy. During such a transformation, the organization's operational DNA is completely changed. Obviously, the benefits and rewards of being a data driven organization are immense and worth the effort of the transformation.

On the other hand, if during this transformation an organization finds that its data is not of reliable quality, that finding will force a data management discussion across the organization and help kick-start initiatives to fix the data, as more and more people in the organization start using data for decision making.

In the end, I would encourage everyone to be as data driven as possible in their decision making and to influence other areas within their organization to be data driven. As data professionals, this will allow us to be more proactive in addressing the organization's data management challenges.

Read Full Post »

This is the sixth entry in a blog series highlighting the critical nature and importance of executive sponsorship for data governance initiatives. In the last few entries, I explored the need to understand the KPIs, the goals behind those KPIs, and the necessity of getting your hands on the actual artifacts executives use to review those KPIs and goals.

My approach has been very simple and straightforward: data governance initiatives absolutely need to demonstrate impact on the top and bottom lines by helping executives improve the KPIs that serve as the means to achieve higher profitability, lower costs and compliance. The process of garnering executive sponsorship is a continuous one. Visibility of the data governance organization and its impact across the board helps establish awareness and understanding of how data governance initiatives help the organization. This visibility and awareness make it easier to maintain ongoing executive sponsorship.

Once you, as a data governance team, have clearly understood the KPIs and the goals behind them, and have access to the artifacts used by executives, it is time to go back to the technical details. At this stage it is extremely important to map which systems, which business processes automated by those systems, and which data are directly or indirectly responsible for the outcome of those KPIs. This process of mapping the dependencies between KPIs, systems, business processes and data can be partially automated using metadata management repositories. It is important to capture this information using tools and technologies so that it can be readily available and shared with other teams and systems. A technology solution will also facilitate change management and impact analysis in the future. The lineage and metadata I am talking about here go beyond technical metadata and into the realm of business (process and semantic) metadata as well.
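
As an illustration of this kind of dependency mapping, here is a minimal sketch that represents KPI-to-system-to-process-to-data relationships as a simple structure. The KPI, system and data set names are hypothetical; a metadata management repository would capture the same relationships with far richer attributes.

    # Hypothetical KPI dependency map; a metadata repository would hold richer detail.
    kpi_dependencies = {
        "Order Return Rate": {
            "systems": ["OrderManagement", "ShippingSystem"],
            "business_processes": ["Order Fulfillment", "Returns Processing"],
            "data_sets": ["orders", "shipping_addresses", "return_reasons"],
        },
        "Net New vs Repeat Business": {
            "systems": ["CRM"],
            "business_processes": ["Opportunity Management"],
            "data_sets": ["opportunities", "accounts"],
        },
    }

    def impacted_kpis(system_name):
        """Simple impact analysis: which KPIs depend on a given system?"""
        return [kpi for kpi, deps in kpi_dependencies.items()
                if system_name in deps["systems"]]

    print(impacted_kpis("CRM"))  # ['Net New vs Repeat Business']

Even a lightweight structure like this makes impact analysis repeatable instead of relying on tribal knowledge.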

This dependency information will come in very handy when establishing the scope and definition of the efforts being planned for a specific data governance initiative/project. When collecting information about the systems, the business processes they automate, and the data, it is important to capture the relevant information with a long-term, repeatable perspective; a sketch of one way to structure such an inventory record follows the list below. Information such as:

1. System name and general information

2. Landscape information (where is it installed/managed/housed, which hardware/software is used, touch points with other systems, etc.)

3. Ownership and responsibility information from both business and technology perspectives (which technology teams are responsible for managing, changing and maintaining these systems? Who are the business stakeholders who approve any changes to the behavior of these systems? etc.)

4. Change management processes and procedures concerning the systems and data

5. End-user/consumer information (who uses it? How do they use it? When do they use it? For what do they use it?)

6. Any life cycle management processes and procedures (for data and systems) which might currently be in existence

7. Specific business processes and functions which are being automated by the systems
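
Below is a minimal sketch of how one such inventory record might be structured so that it stays repeatable and shareable. The field names and the example system are assumptions for illustration, not a prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class SystemInventoryRecord:
        """One entry in a system/process/data inventory (illustrative fields only)."""
        system_name: str
        landscape: str                      # where it is installed/managed/housed
        technology_owner: str               # team responsible for managing the system
        business_owner: str                 # stakeholder approving behavior changes
        change_management_process: str
        end_users: list = field(default_factory=list)
        lifecycle_processes: list = field(default_factory=list)
        automated_business_processes: list = field(default_factory=list)

    crm = SystemInventoryRecord(
        system_name="CRM",
        landscape="Hosted SaaS, integrated with order management and marketing systems",
        technology_owner="Sales Applications Team",
        business_owner="VP of Sales Operations",
        change_management_process="Quarterly release cycle with change board approval",
        end_users=["Sales reps", "Sales operations"],
        lifecycle_processes=["Annual archiving of inactive accounts"],
        automated_business_processes=["Opportunity Management", "Account Management"],
    )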

Often, some of this information is already available with the teams managing these systems. This exercise should identify that information and make a note of it; the point is not to duplicate it. If the information does not exist, this exercise will help capture it in a form that is relevant not only for the data governance initiative but is also usable by system owners and other stakeholders.

The goal of this step/question is to baseline information about the systems, the business processes they automate, and the data. This information will help in subsequent stages with establishing change management processes, defining policies, and possibly implementing and monitoring policies around data management/governance.

From this phase/question onward, the data governance initiative starts transitioning into the nuts and bolts of the IT systems and landscape. In the next few blog posts, I will cover various aspects the data governance team should consider as it starts making progress towards establishing an official program and working on it.

Previous Relevant Posts:

Litmus Test for Data Governance Initiatives: What do you need to do to garner executive sponsorship?

Data Governance Litmus Test: Know thy KPIs

Data Governance Litmus Test: Know goals behind KPIs

Data Governance Litmus Test: How and who is putting together metrics/KPIs for executives?

Data Governance Litmus Test: Do You Have Access to the Artifacts Used by Executives?

Read Full Post »

Yesterday as I was driving to work, there was fog everywhere in the area where I live. We were fogged in, so to speak. The typical commute from my house to the nearest freeway takes about 10 minutes on a given day; yesterday, it took 25 minutes. Visibility was poor; I could hardly see more than 100 yards ahead of me and about the same distance behind me. This meant I was driving very cautiously, not at all confident about what was ahead of me. While I was driving through this dense fog a thought came to my mind: isn’t it true that business decision makers go through a similar predicament when faced with a lack of reliable, high quality data for decision-making?

Poor data quality means less visibility into the performance of the organization; it also impairs decision-making based on actual data. As with fog, poor data quality means business decisions are made slowly, overcautiously and often based on gut feel rather than factual data. Slowness in decision-making can mean losing the edge a business has over its competition. There is a lot in common between driving through fog and trying to run a business with poor quality data.

As the sun rises and the temperature increases, the fog burns off. In the same way, effective data quality and data governance initiatives will help burn away the fog created by lackluster data quality. Burning off the fog is a slow and steady process; all the right conditions need to exist before the fog disappears. It is the same with addressing data quality holistically within an enterprise. The right conditions need to be created in terms of executive sponsorship, understanding of the importance of good data quality, clear demonstration of the value created by data assets, etc., before the true fruits of data quality initiatives can be harvested.

Superior data quality and the timely availability of high-quality data have a significant impact on day-to-day business operations as well as on the strategic initiatives a business undertakes.

Read Full Post »

This is the fifth entry in a series of blog posts highlighting how to go about securing executive sponsorship for data governance initiatives. In my last post I highlighted the need to understand the KPIs tracked by executives and the importance of clear and very specific knowledge of the goals behind those KPIs.

As you might have already noticed, the steps one goes through to answer the litmus test questions help the data governance organization establish a direct relationship between data governance initiatives and organizational priorities. Getting executive sponsorship is not a one-shot deal. It is an ongoing process which needs to be initiated and maintained throughout the lifecycle of data governance initiatives.

It is important to get actual copies of the reports/presentations/summaries which executives use in executive management meetings to review progress against the key KPIs. This will help the data governance team in multiple ways.

  1. You will have a very clear understanding of how the information provided by the KPIs is consumed by executive management: who is looking at this information, and at what frequency?
  2. The process of getting these copies will give you access to executives, or to the people around executives who can give you access to them. This is extremely important as data governance programs seek executive sponsorship.
  3. Making executives and the people around them aware that the data governance team is a critical recipient of the artifacts used by executives means that, in the future, should any KPIs, goals or expectations change, the executives/executive office will notify the data governance team. This allows you to establish the data governance team as part (or a recipient) of the priority/goal change management process.
  4. These artifacts will help you understand individual executives’ styles of data presentation and consumption. This will be of immense help when you present the data governance case and ROI to the executives.
  5. Periodic copies of these artifacts will help you establish a baseline for the KPIs and use that baseline to report progress on data governance initiatives.

As I write about these 10 litmus test questions for evaluating the level and extent of executive sponsorship for data governance programs, my approach has been to use them to create a journey for the data governance team, one which ultimately helps the team garner executive/business sponsorship. As you can see, working on getting answers to these questions creates the necessary awareness and visibility amongst executives and business stakeholders, so that when the time comes to secure executive sponsorship, it is not a surprise to the key people who will be asked for their support.

Previous posts on this or related topics:

Litmus Test for Data Governance Initiatives: What do you need to do to garner executive sponsorship?

Data Governance Litmus Test: Know thy KPIs

Data Governance Litmus Test: Know goals behind KPIs

Data Governance Litmus Test: How and who is putting together metrics/KPIs for executives?

Read Full Post »

This is the fourth entry in a series of blog posts highlighting how to go about securing executive sponsorship for data governance initiatives. In previous posts, I have highlighted the need to understand the specific KPIs/metrics which executives track and the tangible goals being set against those KPIs.

Almost always, there is an individual or a group of individuals who work tirelessly on producing the necessary KPI/metric reports for executives. Often these individuals have a clear and precise understanding of how the metrics/KPIs are calculated and of what data issues, if any, exist in the underlying data which supports them.

It is worthwhile to spend time with these groups of people to get a leg up on understanding metric/KPI definitions and the known data issues (data quality, consistency, system of record). The process of engaging these individuals will also help in winning the confidence of the people who know the actual details of the KPIs/metrics and the processes for calculating and reporting on them. These individuals will likely be part of your data governance team and are crucial players in winning a vote of confidence from executives on the value data governance initiatives create.

In one of my engagements with a B2B customer, executive management had the goal of improving business with existing customers. Hence executive management wanted to track net new versus repeat business. Initially the sales operations team had no way of reporting on this KPI, so in the early days they reported using statistical sampling. Ultimately, they created a field in their CRM system to capture new or repeat business on their opportunity data, and this field was used for new versus repeat business KPI reporting. Unfortunately, the field was always entered manually by a sales rep while creating the opportunity record. While the sales operations team knew this was not entirely accurate, they had no way of getting around it.

In my early discussions with the sales operations team, when I learned about this, I did a quick assessment of a quarter's worth of data. After doing basic de-duping and some cleansing, I compared my numbers against theirs, and there was a significant difference. This really helped me get the sales operations team on board with data cleansing, and ultimately with data governance around opportunity, customer and prospect data. This discussion/interaction also helped us clearly define what business should be considered net new and what should be considered repeat.
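
A minimal sketch of that kind of quick assessment is shown below. The records, field names and the matching rule (a normalized account name) are simplified assumptions for illustration, not the actual logic used on that engagement.

    # Hypothetical opportunity records for one quarter, as extracted from a CRM.
    opportunities = [
        {"account": "Acme Corp",  "opportunity_id": "OP-001", "rep_flagged_repeat": False},
        {"account": "ACME Corp.", "opportunity_id": "OP-014", "rep_flagged_repeat": False},
        {"account": "Globex",     "opportunity_id": "OP-022", "rep_flagged_repeat": False},
    ]

    def normalize(name):
        """Very basic cleansing: lowercase and strip punctuation/whitespace."""
        return "".join(ch for ch in name.lower() if ch.isalnum())

    # Derive repeat business from de-duped account history instead of the manual flag:
    # an account already seen earlier in the quarter counts as repeat business.
    seen = set()
    derived_repeat = 0
    for opp in sorted(opportunities, key=lambda o: o["opportunity_id"]):
        key = normalize(opp["account"])
        if key in seen:
            derived_repeat += 1
        seen.add(key)

    manual_repeat = sum(1 for o in opportunities if o["rep_flagged_repeat"])
    print(f"Repeat per manual flag: {manual_repeat}, repeat after de-duping: {derived_repeat}")

Even this toy comparison surfaces a discrepancy (the manual flag misses the duplicated Acme account), which is exactly the kind of gap that gets a sales operations team interested in data cleansing.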

Obviously, as one goes through this process of collecting information about the metrics, the underlying data, and the process by which these numbers are crunched, it helps to have proper tools and technology in place to capture this knowledge (a simple sketch of capturing a metric definition follows the list below). For example:

a) Capturing definitions of metrics

b) Capturing metadata around data sources

c) Capturing lineage and the actual calculations behind the metrics, etc.
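
Here is a minimal sketch of what capturing a metric definition together with its sources and calculation might look like. The structure and the example entries are assumptions for illustration, not the format of any particular metadata tool.

    # Hypothetical metric definition record; a metadata tool would store the same facts formally.
    metric_definitions = {
        "Net New vs Repeat Business": {
            "definition": "Share of closed-won revenue coming from accounts with prior closed-won business",
            "data_sources": ["CRM.opportunities", "CRM.accounts"],
            "calculation": "repeat_closed_won_revenue / total_closed_won_revenue, per quarter",
            "known_data_issues": [
                "New/repeat flag entered manually by sales reps",
                "Duplicate account records inflate the net-new count",
            ],
        },
    }

    for name, meta in metric_definitions.items():
        print(f"{name}: {meta['calculation']} (sources: {', '.join(meta['data_sources'])})")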

This process of capturing definitions, metadata, lineage, etc. will provide high-level visibility into the scope of things to come. The metadata and lineage can then be used to identify the business processes and systems which are impacting the KPIs.

In summary, this process of finding the people behind the operation of putting together the KPIs helps in identifying subject matter experts who can give you clear, high-value pointers to the areas data governance initiatives need to focus on early in the process. It will ultimately help you recruit people with the right skill set and knowledge into your cross-functional data governance team.

Previous posts on this or related topics:

Litmus Test for Data Governance Initiatives: What do you need to do to garner executive sponsorship?

Data Governance Litmus Test: Know thy KPIs

Data Governance Litmus Test: Know goals behind KPIs

Suggested next posts:

Data Governance Litmus Test: Do You Have Access to the Artifacts Used by Executives?

Read Full Post »

This is the third blog post in a series geared towards addressing the “Why, What and How?” of getting executive sponsorship for data governance initiatives. In my last post, Data Governance Litmus Test: Know thy KPIs, I explored the importance of knowing the KPIs in order to build the link between data governance outcomes and organizational strategy. In this post I’m going to explore why it is important to know the specific goals behind the KPIs which executives monitor on a periodic basis towards fulfilling the organizational strategy.

Data governance initiatives typically span multiple organizations, key business processes, heterogeneous systems/applications and several people from different lines of business. Any time one is dealing with such a complex set of players and stakeholders, it is extremely important to be articulate about the business goals and about the impact of the actions at hand on those goals. Once people understand the magnitude of the impact, and how they will be responsible for it, getting their cooperation and alignment becomes relatively easy.

Once you understand the KPIs which are important organizationally, you need to drill down one level to understand which specific goals are important. The process of understanding the specific goals will undoubtedly reveal many contributing factors to the fulfillment of the overall goals.

For example:

Suppose one of the major KPIs which executives are tracking is overall spend. At this stage it is important for the data governance team to understand the specific goals around this KPI. For example, the specific goals could be:

1. The chief procurement officer has been asked to reduce spend by 2% within four quarters.

2. A 2% reduction across the board represents $80 million in savings.

3. This savings alone would allow the organization to improve its profitability by almost a penny per share, which will ultimately reflect positively in the share price and benefit all employees of the organization.

Once such details are known, establishing a dialogue with the chief procurement officer and his/her key advisers might further reveal that:

1. Their focus is going to be on three specific areas (specific products/raw materials).

2. Not having a singular view of suppliers is a key concern. Because of this issue they are not able to negotiate consistent pricing contracts with suppliers. They believe that streamlining contracts based on overall spend with suppliers and their subsidiaries will help them achieve more than 70% of their goal.

3. Supplier contracts are not being negotiated consistently, resulting in higher costs in terms of minimum business guarantees and price point guarantees.

Equipped with this information, it will be much easier for the data governance team to highlight and link their efforts to the overall goal of reducing spend. For example, with some of this information gathered, one can already pinpoint that the teams working on supplier development, contract negotiations, pricing, etc. will be critical to get on board with the data governance initiative. It is also clear from these nuggets of information that overall spend, the number of suppliers, and the number of materials/products being procured will be some of the key metrics, and that the interrelationships between those metrics will be critical to linking any ROI to initiatives such as cleaning supplier data or building supplier MDM.

With this information, the data governance team can now communicate, not only to their team members but also to the executives, that X percent of duplicate data in the supplier master potentially represents Y dollars of excess spend. The data governance team will be able to explain not only how this can be fixed, but also what is required to maintain this hygiene on an ongoing basis, because of the impact it has on overall excess spend.
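
As a back-of-the-envelope illustration of that X-percent-to-Y-dollars link, the calculation might look like the sketch below. All of the rates and dollar figures are assumptions for illustration; only the spend base is chosen to be consistent with the 2% / $80 million example above.

    # Illustrative assumptions only.
    total_annual_spend = 4_000_000_000   # $4B spend base, consistent with 2% ~= $80M savings
    duplicate_supplier_rate = 0.05       # X: assume 5% of supplier master records are duplicates
    missed_discount_rate = 0.01          # assume fragmented spend forfeits ~1% in volume discounts

    # Spend that is split across duplicate supplier records and misses negotiated pricing.
    fragmented_spend = total_annual_spend * duplicate_supplier_rate
    estimated_excess_spend = fragmented_spend * missed_discount_rate   # Y

    print(f"Estimated excess spend from duplicate suppliers: ${estimated_excess_spend:,.0f}")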

In summary, it is really important to understand the goals behind the “what?” of the organizational strategy. Other indirect benefits of this kind of exercise are:

1. Establishing communication and contacts with the business stakeholders.

2. Understanding the areas where you can focus upfront for the highest impact.

3. Understanding and learning the language you can use to effectively communicate the ROI of data governance back to the executives.

In my next post, I will explore who is behind putting together these KPIs for executives in the current situation. These people are ‘the most critical’ players on the data governance team at both the execution and implementation levels as the initiatives kick off.

Previous posts on this or related topics:

Litmus Test for Data Governance Initiatives: What do you need to do to garner executive sponsorship?

Data Governance Litmus Test: Know thy KPIs

Suggested next posts:

Data Governance Litmus Test: How and who is putting together metrics/KPIs for executives?

Data Governance Litmus Test: Do You Have Access to the Artifacts Used by Executives?

Read Full Post »

There were many predictions in the software industry for 2010. One of the industry's thought leaders, Nenshad Bardoliwalla, had his predictions in the area of “Trends in Analytics, BI and Performance Management.” His predictions about vendors offering packaged/strategy-driven execution applications, slice-and-dice capabilities from BI vendors returning to their decision-centric roots, and advanced visualization capabilities got me thinking about my favorite topic: purpose built applications.

What is a purpose driven/built analytic application (PDAA) after all? It is an analytic application which addresses a narrowly focused business area or process and provides insight into the opportunities (for improvement) and the challenges (performance). In order for such an analytic application to provide insight:

  1. It needs to be designed for a specific purpose (or with a specific problem in mind), and that purpose or focus really needs to be narrow (to be able to provide holistic insight).
  2. It needs to rely on purpose built visualization and use Web 2.0 style technologies to make analytic insight pervasive (some examples follow).
  3. It needs to provide descriptive, prescriptive and predictive capabilities to deliver holistic insight:
    1. Descriptive capabilities provide a view into the current state of affairs.
    2. Prescriptive capabilities tell users what to focus on as a follow-up; they also guide users as to what questions to ask next to build holistic insight.
    3. Predictive capabilities facilitate what-if analysis and provide insight into what the business might expect should the current situation continue.
  4. It implicitly provides users with the questions they should ask in a given situation and provides either complete answers or the data points leading up to those answers.

Often, because of the very specific purpose and narrow focus, most of the insights provided by purpose built analytic applications can be manifested right in the operational application via purpose built gadgets or even purpose built controls. A single dashboard with interactivity around its widgets/gadgets will typically provide complete insight into the focus/purpose of the analytic application.

Let us discuss an example of what a purpose built analytic application could be. Every organization with a sales force actively selling its products/services has a weekly call to review the pipeline. This is typically done region by region, and the data is then rolled up to a global level. A purpose driven analytic application in this situation would be a “Weekly Pipeline Review” application. Rather than providing free-form slicing/dicing/reporting capabilities around pipeline data (which would be the traditional way), this type of application would focus on:

  1. The current pipeline.
  2. Changes to the pipeline from last week (positive and negative; this is what is really watched closely on this call, to make sure the forecast numbers can be achieved).
  3. The impact of the pipeline changes on achieving the goals/forecast; based on these changes, extrapolating the impact on the sales organization's plans (what-if).
  4. Visibility into deals which might be problematic, based on past performance and heuristics (this is what I call prescriptive).
  5. Visibility into deals which are likely to move and close faster, again based on past performance (again prescriptive).
  6. Account names where incremental up-sell can be done (again based on past performance in similar accounts) but where there are no active deals/opportunities, etc.
  7. Visibility into individuals and regions at risk of missing their forecast, based on their past and current performance.

There are different visualizations which can be used to build this type of application. The focus of this analytic application is to help sales VPs and sales operations get through the weekly pipeline review call quickly by concentrating on exceptions (both positive and negative) and providing full insight into the impact of changes, the areas they should focus on, etc. Hopefully this explains in detail the difference between a purpose built analytic application and a traditional data warehouse or traditional analytic application.
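
To ground the “changes from last week” part of this example, here is a minimal sketch of the week-over-week pipeline delta such an application would compute behind its dashboard. The deal identifiers and amounts are invented for illustration.

    # Hypothetical pipeline snapshots: opportunity id -> open pipeline amount in dollars.
    last_week = {"OP-101": 50_000, "OP-102": 120_000, "OP-103": 75_000}
    this_week = {"OP-101": 50_000, "OP-102": 90_000, "OP-104": 200_000}

    new_deals     = {k: v for k, v in this_week.items() if k not in last_week}
    dropped_deals = {k: v for k, v in last_week.items() if k not in this_week}
    changed_deals = {k: this_week[k] - last_week[k]
                     for k in this_week.keys() & last_week.keys()
                     if this_week[k] != last_week[k]}

    net_change = sum(this_week.values()) - sum(last_week.values())
    print("New deals:", new_deals)
    print("Dropped deals:", dropped_deals)
    print("Changed deals:", changed_deals)
    print(f"Net week-over-week pipeline change: ${net_change:,}")

The purpose built application would present these deltas (and their impact on the forecast) directly, rather than leaving users to slice and dice raw pipeline data.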

Let us now briefly look at how a purpose built UI supports some of the important aspects (holistic insight) of purpose built applications. Many of you have used the Google portal and added iGoogle gadgets. One can look at iGoogle gadgets as purpose built applications which focus on one specific area of interest to you. Take a look at one of the samples put together by Pallav Nadhani to demonstrate FusionCharts visualizations. This gadget is a perfect example of how a purpose built UI helps in creating the focus and holistic insight of an analytic application: it provides a complete weather picture for a location, for today or for the future.

There is a company out of New Zealand, Sonar6, which provides a product solution around performance management (very focused, purpose driven)/talent management. They have done a fantastic job of building a purpose built application and delivering it through a purpose built UI. I especially like the way they have provided analytic and reporting capabilities (a helicopter view) around performance management. You can register for their demo or look at their brochure/PowerPoint presentations.

There are several other vendors who have made purpose built analytics pervasive in our day-to-day lives. The recommendation engine built by Amazon is a perfect example of purpose built analytics.

In the end, I truly believe that purpose built analytic applications can and will maximize the value/insight delivered to end users/customers while keeping the focus of the analytics narrow.

I would love to know your thoughts on purpose built applications. What has been your experience?

Read Full Post »


Part I

Many of us have been using Agile methodologies very successfully for product development or for IT projects. I have noticed that Agile methodologies are also very well suited for addressing enterprise data and information quality (EDIQ) initiatives. I almost feel that Agile and enterprise data quality are a match made in heaven.

Let us inspect the key tenets of Agile methodologies and relate them to what one has to go through in addressing enterprise data/information quality (EDIQ) issues.

  1. Active user involvement (a collaborative and cooperative approach): Fixing/addressing data quality issues has to be done in collaboration with the data stakeholders and business users. Creating a virtual team in which business, IT and data owners participate is critical to the success of data quality initiatives. While IT provides the necessary firepower in terms of technology and the means to correct data quality, ultimately it is the business owners/data owners who decide on the actual fixes.
  2. Team empowerment for making and implementing decisions: Executive sponsorship and empowerment of the team working on data quality are key components of a successful enterprise data/information quality initiative. Teams should be empowered to make the necessary fixes to the data and the processes. They should also be empowered to enforce and implement the newly defined/refined processes, both to address immediate data quality and to ensure the ongoing data quality standard is met.
  3. Incremental, small releases, and iteration: As we know, a big bang, fix-it-all approach to data quality does not work. To address data quality realistically, an incremental approach with iterative correction is the best way to go. This has been discussed in a couple of recent articles, in “Missed it by that much” by Jim Harris and in my own article.
  4. Time boxing the activity: Requirements for data quality evolve, and the scope of activities will usually expand as the team starts working on a data quality initiative. The key to success is to chunk the work and demonstrate incremental improvements in a time boxed fashion. This almost always forces data quality teams to prioritize data quality issues and address them in priority order (which really helps in deploying the organization's resources optimally, to get the biggest bang for the buck).
  5. Testing/validation is integrated into the process and done early and often: This is my favorite. Many times data quality is addressed by first fixing the data itself in environments like data warehouses/marts or alternate repositories, for immediate impact on business initiatives. Testing these fixes for accuracy and validating their impact will provide a framework for how you ultimately want data quality issues fixed (what additional processes you might want to inject, what additional checks you might want to do on the source system side, what patterns you are seeing in the data, etc.). Early testing/validation creates immediate impact on business initiatives, and the business side will be more inclined to invest and dedicate resources to addressing data quality on an ongoing basis.
  6. Transparency and visibility: Throughout the work one does to fix data quality, it is extremely important to provide clear visibility into the data quality issues and their impact, the effort and commitment it will take to fix them, and the ongoing progress made towards the business goals for data and information quality. Maintaining a scorecard for all data quality fixes and showing the trend of improvements is a good way to provide visibility into enterprise data quality improvements (a minimal scorecard sketch follows this list). I discussed this in my last article, and here is a sample scorecard.
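
Here is a minimal sketch of the kind of data quality scorecard trend mentioned in point 6. The quality dimensions, iteration names and scores are invented for illustration and are not taken from the sample scorecard referenced above.

    # Hypothetical scorecard: data quality scores (0-100) per dimension, per iteration.
    scorecard = {
        "Completeness": {"Sprint 1": 72, "Sprint 2": 80, "Sprint 3": 88},
        "Uniqueness":   {"Sprint 1": 65, "Sprint 2": 74, "Sprint 3": 83},
        "Validity":     {"Sprint 1": 78, "Sprint 2": 81, "Sprint 3": 90},
    }

    for dimension, scores in scorecard.items():
        first, last = scores["Sprint 1"], scores["Sprint 3"]
        sign = "+" if last >= first else "-"
        print(f"{dimension:13s} {first:3d} -> {last:3d}  ({sign}{abs(last - first)} points)")

Publishing a trend like this at the end of every iteration keeps the improvement visible to business stakeholders, which is exactly the transparency point 6 calls for.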

There are many other aspects of Agile methodologies which are applicable to enterprise data quality initiatives:

a) Capturing requirements at a high level and then elaborating on them in visual formats in a team setting

b) Not expecting to build a perfect solution in the first iteration

c) Completing the task at hand before taking up another task

d) A clear definition of what task “completion” means, etc.

In summary, I really feel that Agile methodologies can be easily adapted and used in implementing enterprise data/information quality initiatives. Using Agile methodologies will ensure greater and quicker success. They are like a perfect couple, made for each other.

In my next post, I will take a real-life example and compare and contrast actual Agile artifacts with the artifacts required for enterprise data/information quality (EDIQ) initiatives.

Resources: There are several sites about Agile methodologies; I really like a couple of them:

  1. Manifesto for Agile Software Development
  2. There is a nice book, “Scrum and XP from the Trenches” (an agile war story), by Henrik Kniberg
  3. Agile Project Management Blog

Read Full Post »
