Janet Amos Pribanic says: ‘Business Analytics – It’s really OK that it’s not perfect first time out!’

Janet Amos Pribanic
Chief Operating Officer
John Daniel Associates, Inc.

Business analytics is changing rapidly. Traditional BI is being challenged by the rate at which we are not only collecting data but also seeking to leverage that data for business advantage.

A recent survey of over 40 technology vendors’ clients by one of the largest IT research firms showed that, ultimately, the customer experience matters. The value delivered to the customer is what matters, always. So, how do we get there?

Even if not perfect – get it in the hands of the business fast

When we evaluate successful analytics customers, many of them started with a very inexpensive (read: free trial) solution and leveraged that experience to build a successful solution and architecture. What better way to learn than to fail (or partially fail) and take that experience forward? The term “fail” here does not necessarily mean a solution has been built and thrown away. It means that what has been learned via a trial process or prototype has given us valuable insight into what is most meaningful for moving forward with analytics and a successful analytics architecture. A functional architecture, or methodology, is the best place to start. Here are the five steps:

Functional BI Architecture

  1. Identify the business problem we are out to solve with data (analytics)
  2. Gather the data
  3. Build the model to support the business
  4. View and explore the data
  5. Deploy the operational insights

And note – this is an iterative process. There is value in getting this out there even when it is not exactly right. Get it out there fast: you will gather feedback sooner, and the business can see the results and take corrective action sooner. Disparities and incorrect business rules are exposed faster, and with that exposure the organization can act and leverage powerful, inexpensive architectures, like Hadoop, that let you take in massive amounts of data and store it very cheaply. Now you are prepared to take on strategic business analytics.

Let’s look at a possible example. You have identified and gathered data around a problem in your supply chain. After gathering the data, the next move is to explore it. You sample the data in the data set, test it and analyze it. You develop hypotheses which are then tested. For example, you might do analysis to figure out which two or three vendors are late in deliveries, resulting in customer satisfaction issues.
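To make this concrete, here is a minimal pandas sketch of that exploration step. The file and column names (vendor, order_id, promised_date, delivered_date) are hypothetical stand-ins, not part of the original example:

  import pandas as pd

  # Hypothetical delivery data: one row per purchase order.
  orders = pd.read_csv("supply_chain_orders.csv")
  orders["days_late"] = (
      pd.to_datetime(orders["delivered_date"]) - pd.to_datetime(orders["promised_date"])
  ).dt.days.clip(lower=0)

  # Late-delivery rate and average lateness per vendor.
  by_vendor = (
      orders.groupby("vendor")
      .agg(orders_n=("order_id", "count"),
           late_rate=("days_late", lambda d: (d > 0).mean()),
           avg_days_late=("days_late", "mean"))
      .sort_values("late_rate", ascending=False)
  )
  print(by_vendor.head(3))  # the two or three vendors driving the problem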

After you determine why those vendors are not performing, you operationalize the insights you’ve gained, building them into your business logic and workflows. If ABC Company is consistently late on deliveries and triggers the conditions that map to the supply chain model, corrective action can begin before additional clients are affected.
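Continuing the sketch above, operationalizing that insight can be as simple as a rule embedded in the order workflow. The thresholds below are assumptions for illustration, not figures from the article:

  # Hypothetical business rule: flag a vendor for corrective action once it
  # crosses agreed lateness thresholds, before more clients are affected.
  LATE_RATE_LIMIT = 0.20  # assumed threshold: 20% of orders late
  MIN_ORDERS = 10         # assumed: enough history to judge fairly

  def needs_corrective_action(orders_n: int, late_rate: float) -> bool:
      return orders_n >= MIN_ORDERS and late_rate > LATE_RATE_LIMIT

  for vendor, row in by_vendor.iterrows():
      if needs_corrective_action(row["orders_n"], row["late_rate"]):
          print(f"Open corrective-action review for {vendor}")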

Once insights are operationalized, it is important to close the model feedback loop. The models you build could (and likely will) change over time; the factors that caused this year’s supply chain challenges may not hold a year from now as market and other factors change. Test your hypotheses to see if the model still holds or needs adjustment. For example, as new forms of vendor interaction are introduced, the supply chain variables may also change.

Advanced analytics should become the mainstay of your secret sauce. The point is to use all of your resources effectively: data modelers and the many people with business domain knowledge.

Get analytics out there fast, even if not perfect: it’s a counterintuitive way to make rapid progress. Take in only as much data as you need, analyze and create operational models, and refine those models. Then take those models and begin to build your functional architecture.

To deliver successful analytics, you will also need to plan and staff correctly. To do advanced analytics at scale, there are two approaches from a staffing perspective:

  1. Hire lots of expensive data modelers
  2. Leverage people who are technically savvy in your company with strong business acumen

The first way is certainly challenging. In fact, it can’t scale: even with unlimited resources, there is not an abundant supply of data modelers to hire.

The second way takes a different, proven approach. Do not be constrained by the technology that enables analytics; instead, focus on the strength of the logic that powers it.

In practical terms, ask data modelers to partner with you to research ways to solve business problems, and then have them build consumable models that solve those problems. With models that serve as templates, businesspeople can perform analytics on their own, working with the models the data modelers developed, and then extend those models to new areas as business logic dictates.

Your company will benefit significantly from this secret sauce. Choose to make it part of your core competency!

Source by analyticsweek

Oct 25, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Trust the data


[ AnalyticsWeek BYTES]

>> The convoluted world of data scientist by v1shal

>> Ten Guidelines for Clean Customer Feedback Data by bobehayes

>> Why Cloud-native is more than software just running on someone else’s computer by analyticsweekpick

[ NEWS BYTES]

>> How the Windows 10 October 2018 update will impact your enterprise IoT deployments – TechRepublic Under IOT

>> Global Advanced Analytics Market research report 2018: Techniques, Region, Feature Analysis, Study Methodology … – IDA Report Under Social Analytics

>> Hybrid cloud data specialist Datrium nabs $60M led by Samsung at a $282M valuation – TechCrunch Under Hybrid Cloud

[ FEATURED COURSE]

Machine Learning

6.867 is an introductory course on machine learning which gives an overview of many concepts, techniques, and algorithms in machine learning, beginning with topics such as classification and linear regression and ending … more

[ FEATURED READ]

The Future of the Professions: How Technology Will Transform the Work of Human Experts

This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want … more

[ TIPS & TRICKS OF THE WEEK]

Keeping Biases Checked during the last mile of decision making
Today a data-driven leader, data scientist or data-driven expert is constantly put to the test by helping his or her team solve problems using skill and expertise. Believe it or not, part of that decision tree is derived from intuition, which adds a bias to our judgment and can taint the suggestions. Most skilled professionals understand and handle these biases well, but in a few cases we give in to tiny traps and can find ourselves caught in biases that impair judgment. So, it is important to keep intuition bias in check when working on a data problem.

[ DATA SCIENCE Q&A]

Q: What does NLP stand for?
A: Natural Language Processing: the interaction between computers and human (natural) languages
* Involves natural language understanding

Major tasks:
– Machine translation
– Question answering: “what’s the capital of Canada?”
– Sentiment analysis: extract subjective information from a set of documents, identify trends or public opinion in social media
– Information retrieval

[ VIDEO OF THE WEEK]

@Schmarzo @DellEMC on Ingredients of healthy #DataScience practice #FutureOfData #Podcast

[ QUOTE OF THE WEEK]

Torture the data, and it will confess to anything. – Ronald Coase

[ PODCAST OF THE WEEK]

@DrewConway on fabric of an IOT Startup #FutureOfData #Podcast

[ FACT OF THE WEEK]

39 percent of marketers say that their data is collected ‘too infrequently or not real-time enough.’

Sourced from: Analytics.CLUB #WEB Newsletter

When Buying a Company, Use Customer Feedback to Improve Due Diligence

I have been doing some work on how investment professionals can use customer feedback as part of their valuation process. Below, I include a case study of an investment firm that used customer feedback to help confirm the valuation of a target company (it did) and to decide where to start in managing the business to secure its future.

Investment professionals take a huge risk when they purchase or make a significant investment in a business. To identify and minimize their investment risk, these professionals conduct due diligence on the business. Due diligence is an investigation or audit of a potential investment. Investors typically examine such matters as the business’ finances, proprietary information, employees, insurance, equipment and property, and litigation claims, to name a few. (Entrepreneur.com offers due diligence questions and a downloadable checklist; Forbes has its own checklist; Inc.com has a checklist as well and even offers advice for conducting due diligence.)

While some due diligence efforts include an examination of customer data, they typically focus on identifying the number and types of customers (e.g., where they are located, their size). Even in cases where customer feedback is included in the due diligence process, the business sellers hand-pick a few customers to be interviewed by the buyer, resulting in potentially biased information about the health of the business and inflating its perceived value. If customer feedback is to be of value, a more rigorous approach is needed. In this post, I outline a more in-depth approach to using systematic customer feedback in the due diligence process.

1. Ask a Representative Sample of Customers

When soliciting customer feedback, take steps to ensure the feedback is representative of all possible feedback from the population of customers.  Ask for a complete customer list from the seller and randomly select the customers you want to give you feedback. If you are particularly concerned about specific customer segments, use stratified random sampling (random selection occurs within each customer segment) to ensure you get enough respondents for the segments in question.

While a census is unnecessary to get a reliable picture of the entire customer base, I recommend that, when possible, you invite all customers to provide feedback. For B2C companies, surveys need to be targeted to the buyer of the products/services. For B2B companies, due to the nature of the buying process, surveys need to be targeted to all parties who are directly and indirectly involved in buying the company’s products/services (e.g., decision makers and decision influencers).

Verify the quality of the sample of customers by comparing the demographic make-up of the sample to that of the entire customer base. The extent to which the sample is representative of the population will determine the quality of the inferences you are able to make about the population. To make any meaningful conclusions about the value of the target company, the customers you ask need to be a representative sample of the population of customers.
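As a rough illustration, here is a minimal pandas sketch of both sampling approaches; the file name, segment column and sample sizes are hypothetical:

  import pandas as pd

  customers = pd.read_csv("customer_list.csv")  # assumed: one row per customer

  # Simple random sample of the whole customer base.
  invitees = customers.sample(n=500, random_state=42)

  # Stratified random sampling: sample within each customer segment so that
  # small but important segments are adequately represented.
  invitees = (
      customers.groupby("segment", group_keys=False)
               .apply(lambda seg: seg.sample(frac=0.25, random_state=42))
  )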

2. Ask about Customer Loyalty

The value of a company is directly impacted by customer loyalty: the greater the customer loyalty, the higher the company’s value. Customers can increase the value of a company by engaging in three different types of loyalty behaviors. As illustrated in the Customer Loyalty Measurement Framework, these are: 1) retention loyalty (valuable customers stay around for a long time), 2) advocacy loyalty (customers tell family and friends about the company, driving new customer growth) and 3) purchasing loyalty (customers increase their share-of-wallet, driving growth in average revenue per user/customer (ARPU)).

Because customers can exhibit their loyalty to a company in different ways, you need to ask the right loyalty questions for your specific needs. Does the target company have a history of high defection rates? If so, ask customers about their intention of staying. Does the target company have stagnant ARPU growth? If so, ask customers about their intention of buying different products. Does the target company historically have low new customer growth? If so, ask customers about their intention of recommending the company to their friends.

As a starting point, consider including a loyalty question for each of the three general types of loyalty behaviors: retention, advocacy and purchasing (see the RAPID loyalty approach):

  1. Likelihood to switch providers (retention)
  2. Likelihood to renew service contract (retention)
  3. Likelihood to recommend (advocacy)
  4. Overall satisfaction (advocacy)
  5. Likelihood to purchase different solutions from <Company Name> (purchasing)
  6. Likelihood to expand use of <Company Name’s> products throughout company (purchasing)

3. Ask about Customer Experience (CX)

Customer loyalty is impacted by the customer experience. According to Wikipedia, the customer experience (CX) is the sum of all experiences a customer has with a supplier of goods and services over the duration of that relationship. Customers who are satisfied with their experience with the supplier stay longer, recommend more, and buy more from the supplier compared to customers who are less satisfied with their experience.

Ask customers about their experience with the company. While you could ask customers literally hundreds of CX questions about each specific aspect of their experience, research shows that you only need a few CX questions to understand what drives their loyalty. For example, ask customers how satisfied they are with the target company in each of these areas:

  1. Ease of doing business
  2. Product quality
  3. Account Management/Sales
  4. Customer service
  5. Technical support
  6. Communications from the Company
  7. Future Product/Company Direction

4. Ask about Relative Performance

Companies do not perform in a vacuum; competitors are vying for the same customers and limited prospects as the target company you are purchasing. If the target company has plenty of competitors in its space, you need to understand where it ranks relative to the competition. After all, top-ranked companies receive a greater share of wallet than their bottom-ranked competitors. All things equal, a company that is ranked lowest is less valuable than a company that is ranked highest.

Ask customers about how the company compares to its competitors. Toward that end, the Relative Performance Assessment (RPA), a competitive analytics solution, helps investors understand the relative ranking of the target company and identify ways to increase their ranking, and consequently, increase share of wallet. In its basic form, the RPA method requires two questions:

  1. What best describes Company’s performance compared to the competitors you use?
  2. Please tell us why you rank Company’s performance the way you do. This question allows each customer to indicate the reasons behind his/her rating. The content of the customers’ comments can be examined to identify underlying themes that help diagnose the reasons for high or low rankings.

To understand the value of the company you are purchasing, you need to know how you measure up to the competition. More importantly, after the purchase, the RPA will help you know what you need to do to improve your ranking in the industry.

5. Ask about Company-Specific Issues

Investors may have a need to ask additional questions that are specific to the target company. These questions, driven by specific business needs, can include demographic questions (if not included in their CRM system), open-ended questions, and targeted questions. Typical questions in B2B relationship surveys include:

  • Time as a customer
  • Job function (e.g., Marketing, Sales, IT, Service)
  • Job level (executive, director, manager, individual contributor)
  • Level of influence in purchasing decisions of <Company Name> solutions (Primary decision maker, Decision influencer, No influence)

Include one or two open-ended questions that allow respondents to provide additional feedback in their own words. Depending on how the questions are phrased, customers’ remarks can provide additional insight about the health of the customer relationship. Text analytics helps you understand both the primary content of words as well as the sentiment behind them. To understand potential improvement areas, a question I commonly use is:

  • If you were in charge of <Company Name>, what improvements, if any, would you make?

Customer relationship surveys can be used to collect feedback about specific topics that are of interest to executive management. Give careful consideration to asking additional questions. As with any survey question, you must know exactly how the data from each question will be used to improve customer loyalty. Some popular topics of interest include measuring 1) perceived benefits of solutions and 2) perceived value. Some sample questions are:

  • How much improvement did you experience in productivity due to <Company Name’s> solutions?
  • Satisfaction with price of the solution given the value received

Next, I will present an example of how one investment firm used customer feedback to help in their due diligence process.

Case Study

An investment firm wanted to expand their portfolio of companies by purchasing an existing B2B company. As part of the due diligence process, the investment firm worked with the target company to acquire their customer email list for a Web-based customer survey. The investment firm used the Customer Relationship Diagnostic (CRD) to collect customer feedback. The CRD is a brief survey that asks customers about different types of customer loyalty, satisfaction with general CX touch points, relative performance and a few company-specific questions.

Figure 1. Customer loyalty ratings for the target company.

The response rate for the survey was about 70%; respondents were primarily decision makers and decision influencers (~80%), and most were managers, directors or executives (~70%).

Case Study: Loyalty Results

Customer loyalty results are located in Figure 1. As you can see, customers reported moderate levels of customer loyalty for most of the loyalty questions (e.g., advocacy and retention). For purchasing loyalty, customers reported low likelihood of buying different products and low likelihood of expanding the use of the target company’s solutions.

Case Study: CX Results

Figure 2. Customer Experience (CX) ratings for the target company.

Results of the CX ratings can be found in Figure 2. Based on the survey results, the customers were moderately satisfied with their experiences across the touch points, except for Communications from the Company and Future Product/Company Direction.

Between 20% and 50% of the customers said they were dissatisfied with each of the seven customer touch points.

Case Study: Relative Performance Assessment Results

Results of the Relative Performance Assessment ratings are located in Figure 3. As you can see, only 42% of the customers indicated that the company was better than the competition; almost 60% indicated that the company was the same as or worse than most other competitors.

Figure 3. Relative Performance Assessment ratings of the target company.

After re-scaling the values of the 5-point rating scale (1 = worst to 5 = best) to a 0-100 scale, I estimated that the target company falls roughly at the 54th percentile in its industry; that is, the company’s performance is typical compared to its competition.
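The re-scaling itself is a simple linear map: a rating x on the 1-5 scale becomes 100 · (x − 1) / 4 on the 0-100 scale. A one-function sketch:

  def rescale_1to5(x: float) -> float:
      """Map a 1-5 rating onto a 0-100 scale (1 -> 0, 3 -> 50, 5 -> 100)."""
      return (x - 1.0) / 4.0 * 100.0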

Case Study: Determining Dollar Value of Loyalty

To estimate the expected revenue gains/losses of the target company, I worked with the investment firm to translate the customer loyalty ratings into a dollar value. We employed subject matter experts (SMEs) and analyzed existing financial reports of the target company to arrive at our best estimates of expected annual revenue gains through new customers (~$300k) and existing customers purchasing new/different products (~$160k), and of the annual revenue at risk due to churn (customers stopping use of the company’s products, ~$450k).
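The resulting arithmetic is straightforward; a tiny sketch using the rounded case-study estimates above:

  expected_gain_new_customers = 300_000  # ~$300k from advocacy-driven growth
  expected_gain_expansion     = 160_000  # ~$160k from new/different products
  revenue_at_risk_churn       = 450_000  # ~$450k at risk from customer churn

  net_exposure = (expected_gain_new_customers
                  + expected_gain_expansion
                  - revenue_at_risk_churn)
  print(f"Net annual revenue exposure: ${net_exposure:,}")  # -> $10,000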

Case Study: The Decision

Overall, the customer feedback confirmed the valuation of the company. While the target company was perceived to be in the middle of the pack in their industry (ranked at 54th percentile) and the future direction of their products/company appeared dismal (50% are dissatisfied), investors believed they had the management team that could address these shortcomings. The investment company decided to buy the company.

Case Study: Where to Make Improvements

Figure 4. A driver matrix helps you identify the best areas to allocate company resources (e.g., money, time) to maximize the ROI of your investment dollars.

The investors now became the business owners and consequently needed to manage the business to secure its future. The survey results were analyzed to help decide where to best allocate resources in areas that would improve customer loyalty (and revenue) while minimizing improvement costs.

Using driver analysis on the existing data, the investment firm found that there were three key drivers of customer loyalty: 1) product quality, 2) communications from the company and 3) future product/company direction. Again, using SMEs, we were able to estimate the ROI for improving each of the three key drivers. It turns out that the greatest ROI for CX improvements would be achieved by improving communications from the company and future product/company direction.
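A common way to run such a driver analysis is to correlate each CX rating with a loyalty outcome and treat the strongest correlates as candidate key drivers. The sketch below illustrates that general technique with hypothetical column names; it is not necessarily the firm’s exact method:

  import pandas as pd

  survey = pd.read_csv("crd_survey.csv")  # assumed: one row per respondent
  cx_cols = ["ease_of_business", "product_quality", "account_mgmt",
             "customer_service", "tech_support", "communications",
             "future_direction"]

  # Rank CX touch points by their correlation with overall loyalty;
  # the strongest correlates are candidate key drivers.
  drivers = survey[cx_cols].corrwith(survey["likelihood_to_recommend"])
  print(drivers.sort_values(ascending=False))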

Benefits of Using Customer Feedback in your Due Diligence Process

You can significantly enhance your due diligence process through a systematic approach of collecting and analyzing customer feedback. Using the questions I proposed above, here are some benefits you can achieve when you use customer feedback as part of your due diligence process when purchasing a company:

  • Identify investment opportunities others miss and avoid investing in poor opportunities. Discover the quality of products and services from the people who matter: The customers.
  • Estimate revenue gains/losses.  Using survey data and financial data, you can estimate annual revenue at risk due to customer churn and revenue growth due to new customers and expanding relationships with current customers.
  • Understand your competitive advantage/disadvantage.  Your relative performance will impact how much incremental money your customers will spend with you. Collecting customer feedback can help you identify what you need to do to beat your competition to improve your growth.
  • Understand the ROI of different improvement efforts.

Summary

Investors can gain valuable insight about a target company they are buying by simply asking customers the right questions. Be sure you ask a representative sample of customers so the feedback you get is meaningful and reflects the entire customer base. Ask customers about different types of loyalty behaviors in which they are likely to engage. This feedback can help you estimate revenue gains and risks. Ask customers about their customer experience to identify company strengths as well as potential problems. Ask customers about the company’s relative performance compared to other companies. This insight can help you understand the competitive landscape in the company’s industry and identify ways to improve/maintain your competitive advantage.

When purchasing a company, a systematic approach to surveying the customers (and analyzing the data correctly) can significantly augment the information in your due diligence process and provide a lot of insight about the value of the company. Asking the customers of the target company could mean the difference between acquiring a valuable company or a lemon.


Oct 18, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data shortage


[ AnalyticsWeek BYTES]

>> Apr 19, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

>> Dec 07, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

>> The ultimate customer experience [infographic] by v1shal

[ NEWS BYTES]

>> Nikkei: Japan probes Facebook data security – Seeking Alpha Under Data Security

>> How Is Artificial Intelligence Changing The Business Landscape? – Forbes Under Artificial Intelligence

>> Global Risk Analytics Market report provides the data on the past progress, ongoing market scenarios and future … – The Business Investor Under Risk Analytics

[ FEATURED COURSE]

Intro to Machine Learning

Machine Learning is a first-class ticket to the most exciting careers in data analysis today. As data sources proliferate along with the computing power to process them, going straight to the data is one of the most stra… more

[ FEATURED READ]

Superintelligence: Paths, Dangers, Strategies

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but … more

[ TIPS & TRICKS OF THE WEEK]

Data aids, not replace judgement
Data is a tool and a means to help build consensus and facilitate human decision-making, not replace it. Analysis converts data into information; information, via context, leads to insight; insights lead to decisions, which ultimately lead to outcomes that bring value. So, data is just the start; context and intuition play a role.

[ DATA SCIENCE Q&A]

Q:How to clean data?
A: 1. First: detect anomalies and contradictions
Common issues:
* Tidy data (Hadley Wickham’s paper):
– column names are values, not names, e.g. 26-45
– multiple variables are stored in one column, e.g. m1534 (males aged 15-34)
– variables are stored in both rows and columns, e.g. tmax, tmin in the same column
– multiple types of observational units are stored in the same table, e.g. a song dataset and a rank dataset in the same table
– a single observational unit is stored in multiple tables (can be combined)
* Data-type constraints: values in a particular column must be of a particular type: integer, numeric, factor, boolean
* Range constraints: numbers or dates must fall within a certain range (minimum/maximum permissible values)
* Mandatory constraints: certain columns can’t be empty
* Unique constraints: a field must be unique across a dataset, e.g. the same person must have a unique SS number
* Set-membership constraints: the values for a column must come from a set of discrete values or codes, e.g. gender must be female or male
* Regular expression patterns: for example, a phone number may be required to have the pattern (999)999-9999
* Misspellings
* Missing values
* Outliers
* Cross-field validation: certain conditions that involve multiple fields must hold. For instance, in laboratory medicine the percentages of the different white blood cell types must sum to 100; in a hospital database, a patient’s date of discharge can’t be earlier than the admission date
2. Clean the data using:
* Regular expressions: misspellings, regular expression patterns
* KNN-impute and other missing values imputing methods
* Coercing: data-type constraints
* Melting: tidy data issues
* Date/time parsing
* Removing observations
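A few of these checks translate directly into pandas; a minimal sketch with hypothetical file and column names:

  import pandas as pd

  df = pd.read_csv("patients.csv", parse_dates=["admitted", "discharged"])

  # Mandatory constraint: key fields can't be empty.
  df = df.dropna(subset=["patient_id"])

  # Simple missing-value imputation for a numeric column.
  df["age"] = df["age"].fillna(df["age"].median())

  # Range constraint: ages must be plausible.
  bad_age = ~df["age"].between(0, 120)

  # Cross-field validation: discharge can't precede admission.
  bad_dates = df["discharged"] < df["admitted"]

  df = df[~(bad_age | bad_dates)]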

[ VIDEO OF THE WEEK]

@JohnTLangton from @Wolters_Kluwer discussed his #AI Lead Startup Journey #FutureOfData #Podcast

[ QUOTE OF THE WEEK]

If you can’t explain it simply, you don’t understand it well enough. – Albert Einstein

[ PODCAST OF THE WEEK]

@chrisbishop on futurist’s lens on #JobsOfFuture #FutureofWork #JobsOfFuture #Podcast

[ FACT OF THE WEEK]

Facebook stores, accesses, and analyzes 30+ Petabytes of user generated data.

Sourced from: Analytics.CLUB #WEB Newsletter

Oracle releases database integration tool to ease big data analytics

Oracle has bolstered its database portfolio with the Oracle Data Integrator (ODI), a piece of middleware designed to help analysts sift through big data across a variety of sources.

As the name suggests, the ODI effectively eases the process of linking data in different formats and from diverse databases and clusters, such as Hadoop, NoSQL and relational databases.

This enables Oracle customers to conduct analysis on large and varied datasets without first dedicating time and resources to preparing the data in an integrated and secure way.

In effect, the ODI allows huge pools of data to be treated as just another data source to be used alongside more regularly accessed data warehouses and structured databases.

Jeff Pollock, vice president of product management at Oracle, claimed that the ODI allows customers to work like experts in extract, transform and load (ETL) tools without learning the code needed to carry out such actions.

“Oracle is the only vendor that can automatically generate Spark, Hive and Pig transformations from a single mapping which allows our customers to focus on business value and the overall architecture rather than multiple programming languages,” he said.

Avoiding the need for proprietary code means that the ODI can be run natively with a company’s existing Hadoop cluster, bypassing the need to invest in additional development.

Cluster databases like Hadoop and Spark have traditionally been geared towards programmers with knowledge of the coding needed to manipulate them. On the flipside, analysts would mostly use software tools to carry out enterprise-level data analytics.

The ODI gives the non-code savvy analyst the ability to harness Hadoop and other data sources without requiring the coding knowledge to do so.

It also means that a company’s developers need not retrain to handle multiple databases. Oracle is touting this as a way for companies to save money and time on big data analysis.

Oracle’s move to build out its portfolio to deliver direct data insights for its customers is indicative of the business-focused direction big data analytics is heading, underlined by Visa’s head of analytics saying big data projects must focus on making money.

Originally posted via “Oracle releases database integration tool to ease big data analytics”

Source: Oracle releases database integration tool to ease big data analytics

Creating Value from Analytics: The Nine Levers of Business Success

IBM just released the results of a global study on how businesses can get the most value from Big Data and analytics. They found nine areas that are critical to creating value from analytics. You can download the entire study here.

IBM Institute for Business Value surveyed 900 IT and business executives from 70 countries from June through August 2013. The 50+ survey questions were designed to help translate concepts relating to generating value from analytics into actions.

Nine Levers to Value Creation

Figure 1. Nine Levers to Value Creation from Analytics.

The researchers identified nine levers that help organizations create value from data. They compared leaders (those who identified their organization as substantially outperforming their industry peers) with the rest of the sample. They found that the leaders (19% of the sample) implement the nine levers to a greater degree than the non-leaders. These nine levers are:

  1. Source of value: Actions and decisions that generate results. Leaders tend to focus primarily on their ability to increase revenue and less so on cost reduction.
  2. Measurement: Evaluating the impact on business outcomes. Leaders ensure they know how their analytics impact business outcomes.
  3. Platform: Integrated capabilities delivered by hardware and software. Sixty percent of Leaders have predictive analytic capabilities, as well as simulation (55%) and optimization (67%) capabilities.
  4. Culture: Availability and use of data and analytics within an organization. Leaders make more than half of their decisions based on data and analytics.
  5. Data: Structure and formality of the organization’s data governance process and the security of its data. Two-thirds of Leaders trust the quality of their data and analytics. A majority of leaders (57%) adopt enterprise-level standards, policies and practices to integrate data across the organization.
  6. Trust: Organizational confidence. Leaders demonstrate a high degree of trust between individual employees (60% between executives, 53% between business and IT executives).
  7. Sponsorship: Executive support and involvement. Leaders (56%) oversee the use of data and analytics within their own departments, guided by an enterprise-level strategy, common policies and metrics, and standardized methodologies compared to the rest (20%).
  8. Funding: Financial rigor in the analytics funding process. Nearly two-thirds of Leaders pool resources to fund analytic investments. They evaluate these investments through pilot testing, cost/benefit analysis and forecasting KPIs.
  9. Expertise: Development of and access to data management and analytic skills and capabilities. Leaders share advanced analytics subject matter experts across projects, where analytics employees have formalized roles, clearly defined career paths and experience investments to develop their skills.

The researchers state that each of the nine levers has a different impact on the organization’s ability to deliver value from data and analytics; that is, all nine levers distinguish Leaders from the rest, but each lever impacts value creation in different ways. Enable levers need to be in place before value can be seen through the Drive and Amplify levers. The nine levers are organized into three levels:

  1. Enable: These levers form the basis for big data and analytics.
  2. Drive: These levers are needed to realize value from data and analytics; lack of sophistication within these levers will impede value creation.
  3. Amplify: These levers boost value creation.

Recommendations: Creating an Analytic Blueprint

Figure 2. Analytics Blueprint for Creating Value from Data.

Next, the researchers offered a blueprint on how business leaders can translate the research findings into real changes for their own businesses. This operational blueprint consists of three areas: 1) Strategy, 2) Technology and 3) Organization.

1. Strategy

Strategy is about the deliberateness with which the organization approaches analytics. Businesses need to adopt practices around Sponsorship, Source of value and Funding to instill a sense of purpose to data and analytics that connects the strategic visions to the tactical activities.

2. Technology

Technology is about the enabling capabilities and resources an organization has available to manage, process, analyze, interpret and store data. Businesses need to adopt practices around Expertise, Data and Platform to create a foundation for analytic discovery to address today’s problems while planning for future data challenges.

3. Organization

Organization is about the actions taken to use data and analytics to create value. Businesses need to adopt practices around Culture, Measurement and Trust to enable the organization to be driven by fact-based decisions.

Summary

One way businesses are trying to outperform their competitors is through the use of analytics on their treasure trove of data. The IBM researchers were able to identify the necessary ingredients to extract value from analytics. The current research supports prior research on the benefits of analytics in business:

  1. Top-performing businesses are twice as likely to use analytics to guide future strategies and guide day-to-day operations compared to their low-performing counterparts.
  2. Analytic innovators 1) use analytics primarily to increase value to the customer rather than to decrease costs/allocate resources, 2) aggregate/integrate different business data silos and look for relationships among once-disparate metric and 3) secure executive support around the use of analytics that encourage sharing of best practices and data-driven insights throughout their company.

Businesses, to extract value from analytics, need to focus on improving the strategic, technological and organizational aspects of how they treat data and analytics. The research identified nine areas, or levers, that executives can use to improve the value they generate from their data.

For the interested reader, I recently provided a case study (see: The Total Customer Experience: How Oracle Builds their Business Around the Customer) that illustrates how one company uses analytical best practices to help improve the customer experience and increase customer loyalty.

————————–

Buy TCE: Total Customer Experience at Amazon >>

In TCE: Total Customer Experience, learn more about how you can integrate your business data around the customer and apply a customer-centric analytics approach to gain deeper customer insights.

Source by bobehayes

Oct 11, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Conditional Risk


[ AnalyticsWeek BYTES]

>> Nick Howe (@Area9Nick) talks about fabric of learning organization to bring #JobsOfFuture #podcast by v1shal

>> How do we cut through the jumble of Business Analytics? -Janet Amos Pribanic by analyticsweek

>> Why Your Company Should Use Data Science to Make Better Decisions by analyticsweekpick

[ NEWS BYTES]

>> Deliver desktop and app virtualization for mobile devices – TechTarget Under Virtualization

>> Continental Protects Vehicles From Cyber Attacks – Modern Tire Dealer Under cyber security

>> Cyber security unit has no strategic plan, C&AG finds – Irish Times Under cyber security

[ FEATURED COURSE]

Statistical Thinking and Data Analysis

This course is an introduction to statistical data analysis. Topics are chosen from applied probability, sampling, estimation, hypothesis testing, linear regression, analysis of variance, categorical data analysis, and n… more

[ FEATURED READ]

The Industries of the Future

The New York Times bestseller, from leading innovation expert Alec Ross, a “fascinating vision” (Forbes) of what’s next for the world and how to navigate the changes the future will bring…. more

[ TIPS & TRICKS OF THE WEEK]

Grow at the speed of collaboration
Research by Cornerstone OnDemand pointed out the need for better collaboration within the workforce, and the data analytics domain is no different. A rapidly changing and growing industry like data analytics is very difficult for an isolated workforce to keep up with. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and a greater ability to cut through the noise. So, embrace collaborative team dynamics.

[ DATA SCIENCE Q&A]

Q:Give examples of bad and good visualizations?
A: Bad visualization:
– Pie charts: difficult to make comparisons between items when area is used, especially when there are lots of items
– Color choice for classes: abundant use of red, orange and blue. Readers can think the colors mean good (blue) versus bad (orange and red) when they are just associated with a specific segment
– 3D charts: can distort perception and therefore skew data
– Using a solid line in a line chart: dashed and dotted lines can be distracting

Good visualization:
– Heat map with a single color: some colors stand out more than others, giving more weight to that data; a single color with varying shades shows intensity better
– Adding a trend line (regression line) to a scatter plot helps the reader see trends
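As a small illustration of the last point, a matplotlib sketch that adds a least-squares trend line to a scatter plot of toy data:

  import numpy as np
  import matplotlib.pyplot as plt

  rng = np.random.default_rng(0)
  x = rng.uniform(0, 10, 100)
  y = 2.0 * x + rng.normal(0, 2, 100)  # toy data with a linear trend

  slope, intercept = np.polyfit(x, y, 1)  # least-squares fit
  plt.scatter(x, y, alpha=0.6)
  plt.plot(np.sort(x), slope * np.sort(x) + intercept, linewidth=2)
  plt.xlabel("x")
  plt.ylabel("y")
  plt.show()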

[ VIDEO OF THE WEEK]

Using Topological Data Analysis on your BigData

[ QUOTE OF THE WEEK]

Information is the oil of the 21st century, and analytics is the combustion engine. – Peter Sondergaard

[ PODCAST OF THE WEEK]

@AlexWG on Unwrapping Intelligence in #ArtificialIntelligence #FutureOfData #Podcast

[ FACT OF THE WEEK]

Data volumes are exploding: more data has been created in the past two years than in the entire previous history of the human race.

Sourced from: Analytics.CLUB #WEB Newsletter

10 Ways Big Data is Changing K-12 Education

You may have heard the term “big data” in reference to companies like Netflix, Google or Facebook. It’s the collection of all those little data points about your choices and decision making process that allows companies to know exactly what movie you’re in the mood for when you plop down on your couch with a bowl of popcorn after a long day. Recently, big data has also made a foray into the educational realm. Whether through information gathered through standardized testing or the use of adaptive learning systems, big data is well on its way to completely transforming K-12 education.

Here are 10 ways Big Data is changing K-12 education:

1. Different pace of learning
One of the main challenges that educators currently face is adapting their instruction so it accommodates many different students who learn at different paces. The tools used to collect data, like intelligent adaptive learning systems, are designed to shift the pace of instruction depending on the prior knowledge, abilities and interests of each student. Teachers, in turn, can use this data to inform their pace of instruction going forward.

2. Sharing information
When students change schools or move across state lines, it has often been a challenge for their new teachers to get a firm grasp of what they have covered and which content areas may need more attention. The Common Core standards make data interchangeable across schools and districts.

3. Pinpoint problem areas
A unique feature of big data is that it allows teachers and administrators to pinpoint academic problem areas in students as they learn rather than after they take the test. For example, if a student is working through an adaptive learning program and the data collected reveals that he or she needs more help understanding the fundamental concepts behind fractions, teachers or the adaptive learning system can set aside time to work individually with that student to address and overcome the problem.

4. Need for analysts
Of course, the collection of all of this data isn’t helpful for anyone if it just sits there – school districts are beginning to need analysts to interpret it all. Disparate data sets must be linked so that decision makers in a school district can view, sort and analyze the information to develop both long- and short-term plans for improving education. School districts may also need to set up workshops to show teachers how they can use all of this data effectively.

5. Different means of educational advancement
Traditionally, readiness for educational advancement has been determined more by age than whether or not the student was ready to learn more challenging material. Gifted students may be advanced, but they often stay in the same class as their peers because information about what they know can only be collected sporadically. Big data allows teachers and administrators to get a continuous sense of where students are falling academically, and whether or not they are ready to advance.

6. Smooth transitions
The collection of data is not only allowing for smoother transition between schools, but also grade levels. Access to information databases about what exactly students know could prove quite useful to school districts that are in the process of implementing the Common Core State Standards. Because the CCSS are changing academic requirements, some students find that they’ve inadvertently missed learning something important because it was shifted to the grade below. Data can pinpoint this problem so it can be addressed.

7. Personalized activities
Personalized learning has become a much-heralded approach to education, and big data is helping teachers tailor activities to individual learners. Technology, in particular, is playing a central role. Tech-savvy students can use computer games and adaptive learning programs to complete educational activities that are interactive and take their skill level into account.

8. Using analytics
One significant change that schools are seeing is the increasing use of analytics to inform their approaches. For example, big data can be analyzed to create plans to improve academic results, decrease dropout rates and influence the day-to-day decision making of administrators and teachers.

9. Engage parents and students
It’s extremely important for parents to be involved in their children’s education, and big data is providing a means of engaging both parents and students. If, at parent/teacher conferences, educators can pinpoint exactly where a child is excelling and where more work is needed, and can provide data to back up those claims, parents will have a clearer understanding of what they can do to help their children succeed in school.

10. Customized instruction
Perhaps most exciting for teachers and students alike is the ability for customized instruction that big data provides. This differs greatly from the approach to education in the past, when teachers would deliver one lesson and expect all students to understand, even if they learned in very different ways.

Is your school using big data? What changes are you seeing?

Dan Kerns

Originally posted via “10 Ways Big Data is Changing K-12 Education”

Source by analyticsweekpick

Oct 04, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Trust the data


[ AnalyticsWeek BYTES]

>> Can Analytics Improve your Game? by bobehayes

>> @DrewConway on creating socially responsible data science practice #FutureOfData #Podcast by v1shal

>> Avoiding a Data Science Hype Bubble by analyticsweek

[ NEWS BYTES]

>> Why It’s Time For Retail To Stop Experimenting With IoT, And Start Implementing It – PYMNTS.com Under Internet Of Things

>> Panzura Barreling Toward IPO In $68B Cloud Data Management Market – Forbes Under Cloud

>> As Hadoop landscape evolves, Hortonworks CEO plots future in hybrid cloud and IoT – SiliconANGLE News (blog) Under Hadoop

[ FEATURED COURSE]

Lean Analytics Workshop – Alistair Croll and Ben Yoskovitz

Use data to build a better startup faster in partnership with Geckoboard… more

[ FEATURED READ]

Storytelling with Data: A Data Visualization Guide for Business Professionals

Storytelling with Data teaches you the fundamentals of data visualization and how to communicate effectively with data. You’ll discover the power of storytelling and the way to make data a pivotal point in your story. Th… more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from zombie apocalypse from unscalable models
One living and breathing zombie in today’s analytical models is the absence of error bars. Not every model is scalable or holds ground as data grows. The error bars attached to almost every model should be duly calibrated. As business models take in more data, error bars keep them sensible and in check. If error bars are not accounted for, our models become susceptible to failure, leading to a Halloween we never want to see.

[ DATA SCIENCE Q&A]

Q: Explain what a local optimum is and why it is important in a specific context, such as K-means clustering. What are specific ways of determining if you have a local optimum problem? What can be done to avoid local optima?

A: * A solution that is optimal within a neighboring set of candidate solutions
* In contrast with a global optimum: the optimal solution among all candidates

* K-means clustering context:
It’s proven that the objective cost function will always decrease until a local optimum is reached.
Results depend on the initial random cluster assignment.

* Determining if you have a local optimum problem:
Tendency toward premature convergence
Different initializations induce different optima

* Avoiding local optima in a K-means context: repeat K-means with different initializations and take the solution that has the lowest cost
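A minimal scikit-learn sketch of that restart strategy; KMeans’s n_init parameter does exactly this, repeating the algorithm from different random initializations and keeping the solution with the lowest cost (inertia):

  from sklearn.cluster import KMeans
  from sklearn.datasets import make_blobs

  X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

  # n_init=20: run K-means 20 times from different random initializations
  # and keep the clustering with the lowest within-cluster sum of squares.
  km = KMeans(n_clusters=4, n_init=20, random_state=0).fit(X)
  print(km.inertia_)  # cost of the best of the 20 runs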

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Juan Gorricho, @disney

[ QUOTE OF THE WEEK]

You can use all the quantitative data you can get, but you still have to distrust it and use your own intelligence and judgment. – Alvin Toffler

[ PODCAST OF THE WEEK]

#DataScience Approach to Reducing #Employee #Attrition

[ FACT OF THE WEEK]

YouTube users upload 48 hours of new video every minute of the day.

Sourced from: Analytics.CLUB #WEB Newsletter

Data Science Skills and the Improbable Unicorn

The role of data and analytics in business continues to grow. To make sense of their plethora of data, businesses are looking to data scientists for help. The job site Indeed.com shows continued growth in “data scientist” positions. To better understand the field of data science, we studied hundreds of data professionals.

In that study, we found that data scientists are not created equal. That is, data professionals differ with respect to the skills they possess. For example, some professionals are proficient in statistical and mathematical skills while others are proficient in computer science skills. Still others have a strong business acumen. In the current analysis, I want to determine the breadth of talent that data professionals possess to better understand the possibility of finding a single data scientist who is skilled in all areas. First, let’s review the study sample and the method of how we measured talent.

Assessing Proficiency in Data Skills

We surveyed hundreds of data professionals to tell us about their skills in five areas: Business, Technology, Math & Modeling, Programming and Statistics. Each skill area included five specific skills, totaling 25 different data skills in all.

For example, in the Business Skills area, data professionals were asked to rate their proficiency in such specific skills as “Business development,” and “Governance & Compliance (e.g., security).” In the Technology Skills area, they were asked to rate their proficiency in such skills as “Big and Distributed Data (e.g., Hadoop, Map/Reduce, Spark),” and “Managing unstructured data (e.g., noSQL).” In the Statistics Skills, they were asked to rate their proficiency in such skills as “Statistics and statistical modeling (e.g., general linear model, ANOVA, MANOVA, Spatio-temporal, Geographical Information System (GIS)),” and “Science/Scientific Method (e.g., experimental design, research design).”

For each of the 25 skills, respondents were asked to tell us their level of proficiency using the following scale:

  • Don’t know (0)
  • Fundamental Knowledge (20)
  • Novice (40)
  • Intermediate (60)
  • Advanced (80)
  • Expert (100)

This rating scale is based on a proficiency rating scale used by NIH. Definitions for each proficiency level were fully defined in the instructions to the data professionals.

Standard of Performance

Figure 1. Proficiency in data skills varies by job role.

The different levels of proficiency are defined around the data scientist’s ability to give or need to receive help. In the instructions to the data professionals, the “Intermediate” level of proficiency was defined as the ability “to successfully complete tasks as requested.” We used that proficiency level (i.e., Intermediate) as the minimum acceptable level of proficiency for each data skill. The proficiency levels below the Intermediate level (i.e., Novice, Fundamental Knowledge, Don’t Know) were defined by an increasing need for help on the part of the data professional. Proficiency levels above the Intermediate level (i.e., Advanced, Expert) were defined by the data professional’s increasing ability to give help or be known by others as “a person to ask.”

We looked at the level of proficiency for the 25 different data skills across four different job roles. As is seen in Figure 1, data professionals tend to be skilled in areas that are appropriate for their job role (see green-shaded areas in Figure 1). Specifically, Business Management data professionals show the most proficiency in Business Skills. Researchers, on the other hand, show lowest level of proficiency in Business Skills and the highest in Statistics Skills.

For many of the data skills, the typical data professional does not have the minimum level of proficiency to be successful at work, no matter their role (see yellow- and red-shaded areas in Figure 1). These data skills include the following: Unstructured data, NLP, Machine Learning, Big and distributed data, Cloud management, Front-end programming, Optimization, Graphic models, Algorithms and Bayesian statistics.

In Search of the Elite Data Scientist

Figure 2. There are only a handful of data professionals who are proficient in all skill areas.

There are a couple of ways an organization can build its data science capability. It can either hire a single individual who is skilled in all data science areas or hire a team of data professionals who have complementary skills. In both cases, the organization has all the skills necessary to use data intelligently. However, the likelihood of finding a data professional who is an expert in all five skill areas is quite low (see Figure 2). In our sample, we looked at three levels of proficiency: Intermediate, Advanced and Expert. We found that only 10% of the data professionals indicated they had, at least, an Intermediate level of proficiency in all five skill areas. The picture looks bleaker when you look for data professionals with Advanced or Expert proficiencies: the chance of finding a data professional with Advanced skills or better in all five skill areas drops to less than 1%, and no data professionals rated themselves as Experts in all five skill areas.
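For illustration, the “unicorn” check is just a conjunction across the five skill areas. A sketch with hypothetical file and column names, using the 0-100 proficiency coding listed earlier (Intermediate = 60, Advanced = 80):

  import pandas as pd

  skills = pd.read_csv("data_pro_survey.csv")  # assumed: one row per respondent
  areas = ["business", "technology", "math_modeling",
           "programming", "statistics"]

  INTERMEDIATE, ADVANCED = 60, 80
  pct_all_intermediate = (skills[areas] >= INTERMEDIATE).all(axis=1).mean() * 100
  pct_all_advanced = (skills[areas] >= ADVANCED).all(axis=1).mean() * 100
  print(f"{pct_all_intermediate:.0f}% at Intermediate+ in all five areas")  # study: ~10%
  print(f"{pct_all_advanced:.1f}% at Advanced+ in all five areas")          # study: <1%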

Figure 3. Proficiency levels by industry.

We looked at proficiency differences across five industries: Consulting (n = 52), Education / Science (n = 50), Financial, (n = 52), Healthcare (n = 50) and IT (n = 95). We identified data professionals who had an advanced level of proficiency across the different skills. We found that data professionals in the Education / Science industry have more advanced skills (54% of data professionals have at least an advanced level of proficiency in at least one skill area) compared to data professionals in the Financial (37%) and IT (34%) industries.

Summary

The term “data scientist” is ambiguous. There are different types of data scientists, each defined by their level of proficiency in one of five skill areas: Business, Technology, Programming, Math & Modeling and Statistics. Data scientists can be defined by the skills they possess. So, when somebody tells you they are a data scientist, be sure you know what type they are.

Finding a data professional who is proficient in all data science skill areas is extremely difficult. As our study shows, data professionals rarely possess proficiency in all five skill areas at the level needed to be successful at work. The chance of finding a data professional with Expert skills in all five areas (even in 3 or 4 skill areas) is akin to finding a unicorn; they just don’t exist. There were very few data professionals who even had the basic minimum level of proficiency (i.e., Intermediate level of proficiency) in all five skill areas. Additionally, our initial findings on industry differences in skill proficiency suggest that skilled data professionals might be easier to find in specific industries. These industry differences could impact recruitment and management of data professionals. An under-supply of data science talent in one industry could require companies to use more dramatic recruitment efforts to attract data professionals from outside the industry. In industries where there are plenty of skilled data professionals, companies can be more selective in their hiring efforts.

Optimizing the value of business data is dependent on the skills of the data professionals who process the data. We took a skills-based approach to understanding how organizations can extract value from their data. Based on our findings, we recommend that organizations avoid trying to find a single data professional who has the skills that span the entire spectrum of data science. Rather, a better approach is to consider building up your data science capability through the formation of teams of data professionals who have complementary skills.

Originally Posted at: Data Science Skills and the Improbable Unicorn by bobehayes