Dec 28, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Weak data  Source

[ AnalyticsWeek BYTES]

>> Semantic Technology Unlocks Big Data’s Full Value by analyticsweekpick

>> Can Big Data Tell Us What Clinical Trials Don’t? by analyticsweekpick

>> Media firms are excelling at social: Reach grows by 236% by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> China to publish unified GDP data in fraud crackdown: statistics bureau – Reuters Under Statistics

>> Putting the “AI” in ThAInksgiving – TechCrunch Under Machine Learning

>> Accelerite Takes On VMware, Nutanix with Hybrid Cloud Platform – SDxCentral Under Hybrid Cloud

More NEWS ? Click Here

[ FEATURED COURSE]

Process Mining: Data Science in Action


Process mining is the missing link between model-based process analysis and data-oriented analysis techniques. Through concrete data sets and easy-to-use software, the course provides data science knowledge that can be ap… more

[ FEATURED READ]

Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th Edition


The eagerly anticipated Fourth Edition of the title that pioneered the comparison of qualitative, quantitative, and mixed methods research design is here! For all three approaches, Creswell includes a preliminary conside… more

[ TIPS & TRICKS OF THE WEEK]

Finding success in data science? Find a mentor
Yes, most of us don't feel the need, but most of us could really use one. Because most data science professionals work in isolation, getting an unbiased perspective is not easy. It is also often hard to see how a data science career will progress. Building a network of mentors addresses these issues: it gives data professionals an outside perspective and an unbiased ally. It is extremely important for successful data science professionals to build a mentor network and use it throughout their careers.

[ DATA SCIENCE Q&A]

Q:You have data on the durations of calls to a call center. Generate a plan for how you would code and analyze these data. Explain a plausible scenario for what the distribution of these durations might look like. How could you test, even graphically, whether your expectations are borne out?
A: 1. Exploratory data analysis
* Histogram of durations
* Histograms of durations per service type, per day of week, per hour of day (durations can be systematically longer from 10am to 1pm, for instance), per employee…
2. Distribution: lognormal?

3. Test graphically with a QQ plot: sample quantiles of log(durations) vs. normal quantiles
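
A minimal Python sketch of this plan, assuming pandas, SciPy, and matplotlib, and a hypothetical file call_durations.csv with columns duration, service_type, and hour (these names are placeholders, not from the original answer):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical call-center export: one row per call.
calls = pd.read_csv("call_durations.csv")  # assumed columns: duration (seconds), service_type, hour

# 1. Exploratory plots: overall histogram, then sliced by service type.
calls["duration"].hist(bins=50)
plt.xlabel("Call duration (s)")
plt.show()

calls["duration"].hist(by=calls["service_type"], bins=30)  # one panel per service type
plt.show()

# 2./3. If durations are roughly lognormal, log(duration) should look normal:
# on a QQ plot against the normal distribution, the points should fall on a straight line.
log_d = np.log(calls["duration"].clip(lower=1))
stats.probplot(log_d, dist="norm", plot=plt)
plt.show()
```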

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData with Jon Gibs(@jonathangibs) @L2_Digital


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

The world is one big data problem. – Andrew McAfee

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @MPFlowersNYC, @enigma_data


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Market research firm IDC has released a new forecast that shows the big data market is expected to grow from $3.2 billion in 2010 to $16.9 billion in 2015.

Sourced from: Analytics.CLUB #WEB Newsletter

IBM and Hadoop Challenge You to Use Big Data for Good

Big Data is about solving problems by bringing technology, data and people together. Sure, we can identify ways to get customers to buy more stuff or click on more ads, but the ultimate value of Big Data is in its ability to make this world a better place for all. IBM and Hadoop recently launched the Big Data for Social Good Challenge for developers, hackers and data enthusiasts to take a deep dive into real world civic issues.

Requirements

Individuals and organizations are eligible to participate in the challenge. Participants, using publicly available data sets (IBM’s curated data sets or others – here are the data set requirements), can win up to $20,000. Participants must create a working, clickable, and interactive data visualization utilizing the Analytics for Hadoop service on IBM Bluemix. The official rules page is here.

Go to the Big Data for Social Good Challenge page to learn more about how to enter the challenge and how the challenge will be judged (full disclosure: I’m a judge).  Also, check out your competition below.

Source: IBM and Hadoop Challenge You to Use Big Data for Good

Dec 21, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data Accuracy  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ NEWS BYTES]

>> Big data brings big challenges, big opportunities – BioWorld Online Under Big Data

>> Why data science is one of today’s fastest growing IT careers – Mashable Under Data Scientist

>> What do data teams actually do? 8 Chicago companies weigh in – Built In Chicago Under Business Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

The Analytics Edge


This is an Archived Course
EdX keeps courses open for enrollment after they end to allow learners to explore content and continue learning. All features and materials may not be available, and course content will not be… more

[ FEATURED READ]

Antifragile: Things That Gain from Disorder


Antifragile is a standalone book in Nassim Nicholas Taleb’s landmark Incerto series, an investigation of opacity, luck, uncertainty, probability, human error, risk, and decision-making in a world we don’t understand. The… more

[ TIPS & TRICKS OF THE WEEK]

Winter is coming, warm your Analytics Club
Yes and yes! As we head into winter, what better time to talk about our increasing dependence on data analytics to support decision making. Data- and analytics-driven decision making is rapidly making its way into our core corporate DNA, yet we are not building practice grounds to test those models fast enough. Snug-looking models can hide nails that cause uncharted pain if they go unchecked. Now is the right time to think about establishing an Analytics Club [a Data Analytics CoE] in your workplace to lab out best practices and provide a test environment for those models.

[ DATA SCIENCE Q&A]

Q:What is: lift, KPI, robustness, model fitting, design of experiments, 80/20 rule?
A: Lift:
It's a measure of the performance of a targeting model (or rule) at predicting or classifying cases as having an enhanced response (relative to the population as a whole), measured against a random-choice targeting model. Lift is simply: target response / average response.

Suppose a population has an average response rate of 5% (to a mailing, for instance), and a certain model (or rule) identifies a segment with a response rate of 20%; then lift = 20/5 = 4.

Typically, the modeler divides the population into quantiles and ranks the quantiles by lift. He can then consider each quantile and, by weighing the predicted response rate against the cost, decide whether or not to market to that quantile. For example:
“if we use the probability scores on customers, we can get 60% of the total responders we’d get mailing randomly by only mailing the top 30% of the scored customers”.
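
As a hypothetical illustration of computing lift by score decile (random placeholder data; pandas and NumPy assumed):

```python
import numpy as np
import pandas as pd

# Placeholder data: a model score and an observed response (1 = responded) per customer.
rng = np.random.default_rng(0)
df = pd.DataFrame({"score": rng.random(10_000),
                   "responded": rng.binomial(1, 0.05, 10_000)})

overall_rate = df["responded"].mean()

# Decile 1 holds the highest-scoring customers, decile 10 the lowest.
df["decile"] = pd.qcut(df["score"].rank(method="first", ascending=False), 10, labels=range(1, 11))

lift_table = (df.groupby("decile")["responded"].mean()
                .rename("response_rate").to_frame()
                .assign(lift=lambda t: t["response_rate"] / overall_rate))
print(lift_table)  # lift > 1 means that decile responds better than the population average
```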

KPI:
– Key performance indicator
– A type of performance measurement
– Examples: 0 defects, 10/10 customer satisfaction
– Relies upon a good understanding of what is important to the organization

More examples:

Marketing & Sales:
– New customers acquisition
– Customer attrition
– Revenue (turnover) generated by segments of the customer population
– Often done with a data management platform

IT operations:
– Mean time between failure
– Mean time to repair

Robustness:
– Statistics with good performance even if the underlying distribution is not normal
– Statistics that are not affected by outliers
– A learning algorithm that can reduce the chance of fitting noise is called robust
– Median is a robust measure of central tendency, while mean is not
– Median absolute deviation is also more robust than the standard deviation (see the quick numeric check below)
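
A quick numeric check of these claims (NumPy and SciPy assumed; the numbers are made up):

```python
import numpy as np
from scipy import stats

x = np.array([9, 9, 10, 10, 10, 10, 11, 11, 12, 200])  # one gross outlier

print(np.mean(x), np.median(x))                    # mean jumps to 29.2, median stays at 10
print(np.std(x), stats.median_abs_deviation(x))    # sd explodes, MAD stays small
```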

Model fitting:
– How well a statistical model fits a set of observations
– Examples: AIC, R², Kolmogorov-Smirnov test, chi-squared, deviance (GLM)

Design of experiments:
The design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect the variation.
In its simplest form, an experiment aims at predicting the outcome by changing the preconditions, the predictors.
– Selection of the suitable predictors and outcomes
– Delivery of the experiment under statistically optimal conditions
– Randomization
– Blocking: an experiment may be conducted with the same equipment to avoid any unwanted variations in the input
– Replication: performing the same combination run more than once, in order to get an estimate for the amount of random error that could be part of the process
– Interaction: when an experiment has 3 or more variables, the situation in which the interaction of two variables on a third is not additive

80/20 rule:
– Pareto principle
– 80% of the effects come from 20% of the causes
– 80% of your sales come from 20% of your clients
– 80% of a company's complaints come from 20% of its customers

Source

[ VIDEO OF THE WEEK]

Unconference Panel Discussion: #Workforce #Analytics Leadership Panel


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

The most valuable commodity I know of is information. – Gordon Gekko

[ PODCAST OF THE WEEK]

Using Analytics to build A #BigData #Workforce


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

As of April 2011, the U.S. Library of Congress had collected 235 terabytes of data.

Sourced from: Analytics.CLUB #WEB Newsletter

5 Steps to Proofing with Big Data


A Big Data project is “BIG” in terms of the resources it requires. So if a business does not have adequate resources and wants to venture into one, how should it go about it? Proofing a big-data project is tricky and should be planned with utmost caution. The following five points will serve as a guide to help businesses proof their big-data projects better.

1. Use Your Own Data, But Let Them Model It:
The first rule for running an effective Big Data POC is to use your own real, production data for the evaluation. If that is not possible, develop a reasonably representative data set that can be handed over to vendors. Another thing when it comes to playing with data: no matter how tempting it is to do it yourself, let the vendors model your data. You have some inherent bias and want the solution to cater to your immediate needs, so convey your requirements to the vendors and let them carve out the best solution for all current business scenarios.

2. Prepare for Realistic Hardware:
The first thing to understand is that this is a Big Data POC. It has to cope with ever-increasing data demand and must be scalable, so the POC should involve hardware that can easily scale to fit your business need. Hardware is often the party pooper when it comes to implementation, so make sure to get it right. Go into deep discussion with your vendors about data, possible growth, and your business requirements. As a rule of thumb, a POC data set should be at least 2-4 TB and 18%-25% of your production load. What makes for good hardware is another topic for another day, but work out the kinks around hardware scalability with your vendors.

3. Include All Of Your Workload:
Another key issue is preparing the workload in the data set. Make sure to include all workflow representations in your data. This helps the modeler make sure insights are generated across your current business workflows. The more you plan upfront, the better, faster, and cheaper the trajectory will be. It is a common perception in the Big Data space that 80% of the work is data preparation, so give it its due attention. The cost of a shortcut is huge, and it will bite you back eventually. Split your workload among immediate pain points/needs, the mission-critical path, and other surrounding scenarios.

4. Let Them Change & Work With Them:
One point where most businesses go wrong is that they keep adding their influence on vendors to steer their play. This is not the time or place for that. You invest not only in vendors' help but also in their bias. Make sure they have enough room to play with your load and workflows, and work with them to figure out the cost and kinks around their findings. You need their best work to make sure all the right things are pegged and all the right findings are made. By giving them room, you ensure no pitfalls lurk in your workflows, and you get to evaluate your vendor's capability in handling your load.

5. Grab The Wheel & Take Her For A Ride:
It is not all a waiting and paying game for you. Every now and then, make provision to drive the damn thing yourself and experience what is coming. This will not only help you understand where your big-data proof is going, it will also give your vendors valuable pointers on what to expect from the model. So plan for your test drives and inform vendors ahead of time so they can plan accordingly.

Having a proof made is not a one-off task but a learning curve that everyone should commit to. Make sure you plan your schedule accordingly and, if possible, afford multiple vendors the chance to work with your data. This gives you a safety net on what works best. Proofing is an expensive job if not done properly upfront, so never shy away from giving it its due; otherwise, as the common saying goes, it will always cost you more and take longer.

Source

What to Look for in a Healthcare Big Data Analytics Vendor

Healthcare big data analytics is a booming business, which is both a good and a bad thing for providers seeking to bulk up their infrastructure to supplement their EHRs with sophisticated tools for clinical analytics, population health management, and predictive insights.

The number of up-and-coming big data vendors is growing every day as providers recognize the need to treat data as a resource instead of a burden, and picking a winner out of the pack isn’t always easy for healthcare organizations constrained by finances and concerned about developing long-term, effective partnerships.

If you understand your healthcare big data analytics technology options, are preparing to put your team into action, and are ready to move forward with a strategy to harness big data as a way to drive quality improvements and organizational efficiencies, it’s time to dive into the murky world of vendor selection.

HealthITAnalytics.com explores what to look for in a healthcare big data analytics vendor in order to ensure that a provider gets the right technology for its needs in the short term while keeping options open for shifting and changing strategic goals.

Matching what you have to what you want

As specialists trying to participate in the EHR Incentive Programs have learned to their cost, one size doesn’t fit all when it comes to health IT initiatives.  A large, well-known corporation may be able to boast about their brand recognition and have a client list a mile long, but not all healthcare organizations – or big data sets – are created equal.

Healthcare organizations must have a clear idea of what their data sets look like before they can match their needs and goals to a service provider.  Those that have invested heavily in structuring their EHR input may wish to begin their big data programs with general clinical analytics, as many hospitals do.  Others focused more on research, complex cases, or bolstering their clinical decision support might want to turn to companies that offer cognitive computing or natural language processing that can comb through bulky narrative text.

Providers must also examine their existing infrastructure and decide whether they can build upon technologies already in place, or if they would prefer to rip everything out and start again.  Can the vendor accommodate your legacy systems?  Do you need to invest in basic infrastructure like a data warehouse or master patient index in order to benefit from your potential vendor’s wares?  What are the costs involved in bringing your infrastructure up to baseline, and how long will it take to see a satisfactory return on these investments?

The majority of healthcare organizations do not feel fully prepared to tackle these questions at the moment, but that is quickly changing as experience replaces trepidation.  Healthcare big data analytics is a messy business at the best of times, but don’t let an overeager vendor trivialize how much work must be done in order to get the most out of a contract.

A commitment to interoperability and data standards

Vendors must treat interoperability as more than a buzzword these days as federal agencies, consumers, payers, and patients all crack down on data siloes that make big data analytics such a headache.  After Congress raised questions about vendors who actively block the type of information sharing that is vital for care coordination and population health management and the ONC responded with a widely-read report on the matter, vendors have started to change their tune on interoperability.

The rise of interoperability coalitions like Carequality and the CommonWell Health Alliance may make it a little easier for healthcare providers to identify vendors who are committed to health information exchange, but even the combined might of both organizations does not include a majority of the big data analytics companies on the market.

It is up to healthcare providers to ask about the foundations of a vendor’s technologies and how they will interact with other products, providers, and partners.  A few important questions to ask include:

• Is your product built on open standards or proprietary architecture?  Does it accept APIs, and is anyone actively developing them?

• How easy will it be for my organization to participate in large-scale analytics or health information exchange with a state or local entity, my accountable care organization, public health departments, and research organizations?

• How will your product interface with my existing health IT systems?  What sort of user experience can my clinicians and other staff expect?

• Have you considered the growing importance of medical device integration and the Internet of Things?  How will your technology adapt to the need to integrate additional data sources as patient-generated health data becomes more critical to providing quality care?

Transparent business practices and pricing structures

Taking the pledge for interoperability is just one part of having sound business practices that will encourage long-term partnerships.  While the ONC’s data blocking report may have reportedly spooked some vendors into dropping data exchange fees, the question of who has the rights to demand cash for patient data in motion and at rest has sparked some serious debates.

In 2013, the ONC released a guide for providers looking to negotiate EHR replacement contracts, urging them to pay attention to terms that would limit the transfer of patient information to a new system or cut off access to data during a dispute.  The advice about contract negotiation applies equally to an EHR system or a big data technology, each of which can be licensed for use on an organization’s own technology or provided as a service in the cloud.

The ONC warns providers to pay close attention to liability language that may exonerate the vendor from any responsibility should patient harm arise from unexpected downtime, a privacy violation, or an error or omission in the data.  “Developer contract language often includes indemnification language that shifts liability to you without regard to the cause of the problem or whose ‘acts or omissions’ may have given rise to the claim,” the guide says.

“You may want to negotiate with the EHR technology developer a mutual approach to indemnification that makes each party responsible for its own acts and omissions, so that each party is responsible for harm it caused or was in the best position to prevent,” the ONC suggests.

The guide also suggests courses of action for dispute resolution, intellectual property issues, warranties, and confidentiality agreements.  Most vendors are willing to negotiate these terms to some degree, but be wary of those who insist on an all-or-nothing approach. Before signing on the dotted line, providers should be sure they are clear about their expectations and responsibilities, and ensure they understand the pricing structures for data storage and transfer without falling victim to hidden fees or sudden hikes in a payment plan.

A balance of track record and innovation

Healthcare big data analytics is all about discovering novel and ingenious ways to use information, but providers investing millions of dollars in new infrastructure want to be sure that they aren’t throwing money down the drain.  Despite the general enthusiasm around embracing new ideas for analytics, executive leaders are still a relatively conservative bunch.

This year’s HIMSS Leadership Survey indicated a very high level of board room support for expanding innovative health IT and data analytics capabilities, yet more than a third of organizational leaders would prefer that such innovation had been tested at another organization first.  Just 24 percent of respondents said that their executive leaders were “open to trying ‘bleeding edge’ technology,” which puts big data analytics purchasers in a quandary.  After all, someone has to be the first one to try something new – and to possibly reap the rewards of being adventurous.

But investing in start-up technology companies with big dreams and little real-world experience can be a risky proposition for providers who are looking to stretch every dollar they invest.  Venture capital investment in population health management and analytics companies is through the roof, but not every outfit that receives funding gets bought by a major player or scores a huge IPO.

Healthcare organizations should look for vendors who have secured adequate funding for their products, have working, bug-free examples of their software or hardware to display, offer robust customer support services, have firm timelines and plans for implementation, and don’t make promises they seem unlikely to be able to keep.

The ability to expand and grow with you as strategic plans change

Healthcare organizations are constantly being bombarded with new initiatives, shifting goals for federal mandates, and major changes to health IT programs, reimbursement structures, and quality improvement goals.  As the industry begins to embrace value-based payments and care structures driven by the need to provide high quality services and produce better outcomes, organizational needs and goals must be flexible.

Vendors have to be flexible, too, and be able to provide the right insights at the right time for the task at hand.  While technology turnovers are inevitable as new capabilities and standards move through the market, healthcare providers are looking for products that can carry them through at least a few years of turmoil without requiring a complete overhaul.

Healthcare providers can help themselves make the right choices by having a solid strategic vision for their organization over the next three to five years as meaningful use winds down and accountable care heats up.  Providers may wish to ask themselves:

  • How will I tackle population health management and the increasingly expensive proposition of caring for patients with complex chronic disease needs?  Will our patient demographics change significantly over the next few years?  How can we be proactive about addressing their needs?
  • How will the shift to value-based reimbursement drive the need for improved operational efficiencies within my organization, and how do I think big data will help?
  • What data exchange and interoperability capabilities do I need to ensure care coordination across the continuum?  How can my business partners and I work together to bring data-driven healthcare insights to our community?
  • What patient safety and care quality goals are we hoping to meet?  How can gaining deeper insights into our clinical care produce better patient outcomes?
  • What revenue cycle management issues do we need to address?  Can we turn patient behavior data into better collections, or will an investment in preventative care keep high-cost services to a minimum?
  • How can we improve our data integrity and data governance to maximize our investment in healthcare big data analytics?  Do we need to retrain our EHR users, hire more health information management professionals, or build a dedicated team of data scientists?

Healthcare big data analytics is such a rapidly expanding field that capabilities that seem commonplace today didn’t exist five years ago, and will probably be outdated five years from now.  But understanding your organizational objectives will help you make the best possible decisions with the information available at the moment, and hopefully set up your big data program for long-term future success.

Choosing the right vendor is a critical component of seeing the benefits of big data, and providers should not underestimate the degree to which open communication during this type of ongoing partnership will be required.

After thoroughly considering how a technology purchase will impact their goals, providers should look for stable, responsible, capable, and innovative vendors that offer high quality products with transparent, reasonable pricing structures if they wish to be pioneers in the field of big data.

Originally posted via “What to Look for in a Healthcare Big Data Analytics Vendor”

Originally Posted at: What to Look for in a Healthcare Big Data Analytics Vendor

Dec 14, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

statistical anomaly  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Are U.S. Hospitals Delivering a Better Patient Experience? by bobehayes

>> Is data analytics about causes … or correlations? by analyticsweekpick

>> October 3, 2016 Health and Biotech Analytics News Roundup by pstein

Wanna write? Click Here

[ NEWS BYTES]

>> Aurora approves data center expansion – Crain’s Chicago Business Under Data Center

>> Will Big Data Analytics Rescue Lackluster Electronic Health Records? – Health IT Analytics Under Health Analytics

>> Schill announces interdisciplinary data science initiative – AroundtheO Under Data Science

More NEWS ? Click Here

[ FEATURED COURSE]

Artificial Intelligence


This course includes interactive demonstrations which are intended to stimulate interest and to help students gain intuition about how artificial intelligence methods work under a variety of circumstances…. more

[ FEATURED READ]

Rise of the Robots: Technology and the Threat of a Jobless Future


What are the jobs of the future? How many will there be? And who will have them? As technology continues to accelerate and machines begin taking care of themselves, fewer people will be necessary. Artificial intelligence… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle that data can lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric. This is the metric that matters most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation, and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q:How do you know if one algorithm is better than another?
A: * In terms of performance on a given data set?
* In terms of performance on several data sets?
* In terms of efficiency?
In terms of performance on several data sets:

– “Does learning algorithm A have a higher chance of producing a better predictor than learning algorithm B in the given context?”
– “Bayesian Comparison of Machine Learning Algorithms on Single and Multiple Datasets”, A. Lacoste and F. Laviolette
– “Statistical Comparisons of Classifiers over Multiple Data Sets”, Janez Demsar

In terms of performance on a given data set:
– One wants to choose between two learning algorithms
– Need to compare their performances and assess the statistical significance

One approach (Not preferred in the literature):
– Multiple k-fold cross validation: run CV multiple times and take the mean and sd
– You have: algorithm A (mean and sd) and algorithm B (mean and sd)
– Is the difference meaningful? (Paired t-test)

Sign-test (classification context):
Simply count the number of times A has a better metric than B and assume this comes from a binomial distribution. Then we can obtain a p-value for the null hypothesis H0: A and B are equal in terms of performance.

Wilcoxon signed rank test (classification context):
Like the sign test, but the wins (A is better than B) are weighted and assumed to come from a symmetric distribution around a common median. Then we obtain a p-value for the null hypothesis H0.

Other (without hypothesis testing):
– AUC
– F-Score
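
A minimal sketch of the single-data-set comparison using a paired t-test and the Wilcoxon signed-rank test, assuming scikit-learn and SciPy (the data set and the two learners here are placeholders):

```python
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)

# Same folds for both learners, so the scores are paired.
scores_a = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=cv, scoring="accuracy")
scores_b = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv, scoring="accuracy")

# Paired t-test on fold-by-fold scores (the "not preferred" but common approach above).
t_stat, p_t = stats.ttest_rel(scores_a, scores_b)
# Non-parametric alternative on the paired differences.
w_stat, p_w = stats.wilcoxon(scores_a, scores_b)

print(f"A: {scores_a.mean():.3f}, B: {scores_b.mean():.3f}")
print(f"paired t-test p = {p_t:.3f}, Wilcoxon p = {p_w:.3f}")
```

Note that cross-validation folds overlap, so these p-values are optimistic; the papers cited above discuss corrections and multi-data-set procedures.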

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with  John Young, @Epsilonmktg


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

If you can’t explain it simply, you don’t understand it well enough. – Albert Einstein

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @Beena_Ammanath, @GE


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Market research firm IDC has released a new forecast that shows the big data market is expected to grow from $3.2 billion in 2010 to $16.9 billion in 2015.

Sourced from: Analytics.CLUB #WEB Newsletter

Using Driver Analysis to Improve Employee Loyalty

Researchers have shown a consistent relationship between employee attitudes and customer attitudes. Specifically, they have found that satisfied/loyal employees, compared to dissatisfied/disloyal employees, have more satisfied customers. Examining different bank branches, Schneider & Bowen (1985) found that branches with satisfied employees have customers who are more satisfied with service and are less likely to churn than customers of branches with dissatisfied employees. Companies must consider employees’ needs and attitudes as part of their overall Customer Experience Management (CEM) strategy. Employees, after all, impact everything the customer sees, feels, and experiences. From marketing and sales to service, employees impact each phase of the customer life cycle, either strengthening or weakening your company’s relationship with the customer.

Ensuring employees are satisfied and loyal is essential to building long-lasting relationships with your customers. In my prior post, I presented an employee survey that you can use to ensure you are providing your employees with the necessary tools, information, work environment and support for them to be satisfied with and successful at their job. In this week’s post, I will demonstrate how to analyze the resulting data from that employee survey. The goal of the analysis is to help you prioritize efforts to improve the quality of the employee relationship.

The Optimal Employee Survey

Your optimal employee relationship survey needs to include a set of questions that are designed to help you improve the employee experience at work and employee loyalty. I have created an employee survey, the Employee Relationship Diagnostic, that measures the four key areas regarding the employee relationship. These sections and their questions are:

  1. Employee Loyalty – 3 questions (overall sat, recommend, intent to leave)
  2. Employee Experience – 26 employee experience questions for work attributes across the employee life cycle
  3. Relative Performance – 2 questions asking about competitive ranking and reasons behind ranking
  4. Company-Specific Questions – (e.g., reasons driving ratings, demographics)

This employee survey is designed to help companies gain key employee insights in 4 areas: 1) Determining employee loyalty and satisfaction levels; 2) Identifying reasons behind dis/loyalty; 3) Prioritizing improvement efforts; 4) Gaining competitive benchmark.

Analyzing the Employee Survey Data: Two Key Pieces of Information

After the employee survey is conducted and the employees have provided their feedback, the next step is analyzing the survey data. We will focus on two of the sections of the survey: Employee Loyalty and Employee Experience. Using the Employee Relationship Diagnostic, here are the measures:

  1. Employee Loyalty: Measures that assess the likelihood of engaging in positive behaviors. I use three questions to measure employee loyalty: 1) Overall satisfaction, 2) Likelihood to recommend and 3) Likelihood to leave (reverse coded). Using a 0 (Not at all likely) to 10 (Extremely likely) scale, higher ratings indicate higher levels of employee loyalty. A single employee loyalty score, the Employee Loyalty Index (ELI), is calculated by averaging the responses across the three loyalty questions.
  2. Satisfaction with the Employee Experience:  Measures that assess the quality of the employee experience. The employee survey includes 26 specific employee experience questions that fall into five general work areas: 1) senior management, 2) focus on the customer, 3) training, 4) performance management and 5) compensation. Using a 0 (Extremely Dissatisfied) to 10 (Extremely Satisfied) scale, higher ratings indicate a better employee experience (higher employee satisfaction).

Summarizing the Data

You need to understand only two things about each of the 26 employee experience questions: 1) How well you are performing and 2) The impact on employee loyalty (e.g., how important it is in predicting employee loyalty):

  1. Performance:  The level of performance is summarized by a summary statistic for each employee experience question. Different approaches provide basically the same results; pick one that senior executives are familiar with and use it. Some use the mean score (sum of all responses divided by the number of respondents). Others use the “top-box” approach which is simply the percent of respondents who gave you a rating of, say, 9 or 10 (on the 0-10 scale).  So, you will calculate 26 performance scores, one for each work attribute. Low scores reflect a poor employee experience while high scores reflect good employee experience.
  2. Impact:  The impact on employee loyalty can be calculated by simply correlating the ratings of the work attribute with the employee loyalty ratings. This correlation is referred to as the “derived importance” of a particular work attribute. So, if the survey has measures of 26 work attributes, we will calculate 26 correlations. The correlation between the satisfaction scores of a work attribute and the employee loyalty index indicates the degree to which performance on the work attribute has an impact on employee loyalty behavior. Correlations can be calculated using Excel or any statistical software package (see the sketch after Figure 1). Higher correlations (max is 1.0) indicate a strong relationship between the employee experience and employee loyalty (e.g., work attribute is important to employees). Low correlations (near 0.0) indicate a weak relationship between the employee experience and employee loyalty (e.g., work attribute is not important to employees).
Figure 1. Employee Loyalty Driver Matrix helps you prioritize improvement initiatives.
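
A minimal sketch of these two calculations in Python (pandas and NumPy assumed; the file and column names are hypothetical, not taken from the diagnostic itself):

```python
import numpy as np
import pandas as pd

# Hypothetical survey export: one row per employee, 0-10 ratings.
# Columns: overall_sat, recommend, intent_to_leave, plus 26 attribute columns attr_01..attr_26.
df = pd.read_csv("employee_survey.csv")

# Employee Loyalty Index: mean of the three loyalty items, with intent-to-leave reverse-coded.
df["eli"] = (df["overall_sat"] + df["recommend"] + (10 - df["intent_to_leave"])) / 3

attrs = [c for c in df.columns if c.startswith("attr_")]

performance = (df[attrs] >= 9).mean()      # top-box %: share of 9-10 ratings per attribute
impact = df[attrs].corrwith(df["eli"])     # derived importance: correlation with the ELI

driver = pd.DataFrame({"performance": performance, "impact": impact})

# Quadrants are defined relative to the average of each axis.
hi_impact = driver["impact"] >= driver["impact"].mean()
hi_perf = driver["performance"] >= driver["performance"].mean()
driver["quadrant"] = np.select(
    [~hi_perf & hi_impact, hi_perf & hi_impact, hi_perf & ~hi_impact],
    ["Key Driver", "Hidden Driver", "Visible Driver"],
    default="Weak Driver")
print(driver.sort_values("impact", ascending=False))
```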

Graphing the Results: The Loyalty Driver Matrix

So, we now have the two pieces of information for each work attribute: 1) Performance and 2) Impact. Using both the performance index and derived importance for a business area, we plot these two pieces of information for each business area.

The abscissa (x-axis) of the Loyalty Driver Matrix is the performance index (e.g., mean score, top box percentage) of the work attributes. The ordinate (y-axis) of the Loyalty Driver Matrix is the impact (correlation) of the work attribute on employee loyalty.

The resulting matrix is referred to as a Loyalty Driver Matrix (see Figure 1). By plotting all 26 data points, we can visually examine all work attributes at one time, relative to each other.
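
Continuing the sketch above, the matrix itself can be drawn with matplotlib, splitting the plane at the average performance and the average impact (the `driver` table is the hypothetical one built earlier):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter(driver["performance"], driver["impact"])
ax.axvline(driver["performance"].mean(), linestyle="--")  # vertical split at average performance
ax.axhline(driver["impact"].mean(), linestyle="--")       # horizontal split at average impact
for name, row in driver.iterrows():
    ax.annotate(name, (row["performance"], row["impact"]), fontsize=8)
ax.set_xlabel("Performance (top-box %)")
ax.set_ylabel("Impact on Employee Loyalty Index (correlation)")
ax.set_title("Loyalty Driver Matrix")
plt.show()
```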

Understanding the Loyalty Driver Matrix: Making Your Business Decisions

The Loyalty Driver Matrix is divided into quadrants using the average score for each of the axes. Each of the work attributes will fall into one of the four quadrants. The business decisions you make about improving the employee experience will depend on the quadrant in which each work attribute falls:

  1. Key Drivers: Work attributes that appear in the upper left quadrant are referred to as Key Drivers. Key drivers reflect work attributes that have both a high impact on employee loyalty and have low performance ratings relative to the other work attributes. These work attributes reflect good areas for potential employee experience improvement efforts because we have ample room for improvement and we know work attributes are linked to employee loyalty; when these work attributes are improved, you will likely see improvements in employee loyalty.
  2. Hidden Drivers: Work attributes that appear in the upper right quadrant are referred to as Hidden Drivers. Hidden drivers reflect work attributes that have a high impact on employee loyalty and have high performance ratings relative to other work attributes. These work attributes reflect the company’s strengths that keep the employee base loyal. Consider using these work attributes in recruitment and training collateral.
  3. Visible Drivers: Work attributes that appear in the lower right quadrant are referred to as Visible Drivers. Visible drivers reflect work attributes that have a low impact on employee loyalty and have high performance ratings relative to other work attributes. These work attributes reflect the company’s strengths. These areas may not impact employee loyalty but they are areas in which you are performing well. Consider using these work attributes in recruitment and hiring collateral.
  4. Weak Drivers: Work attributes that appear in the lower left quadrant are referred to as Weak Drivers. Weak drivers reflect work attributes that have a low impact on employee loyalty and have low performance ratings relative to other work attributes. These work attributes are the lowest priorities for investment. They are of low priority because, despite the fact that performance is low in these areas, these areas do not have a substantial impact on whether or not employees will be loyal toward your company.
Figure 2. Results of employee loyalty metrics.

Example

A software company wanted to understand how their employees felt about their work environment. Using an employee survey, they solicited feedback from all employees and received completed surveys from nearly 80% of them. The results of the employee loyalty questions appear in Figure 2. While employee loyalty appears good, we see that there is room for improvement.

Applying driver analysis to this set of data resulted in the Loyalty Driver Matrix in Figure 3. The results of this driver analysis show that Career opportunities, Training and Company communications are key drivers of employee loyalty; these work attributes are the top candidates for potential employee experience improvement efforts; they have a large impact on employee loyalty AND there is room for improvement.

Figure 3. Employee Loyalty Driver Chart

While the Loyalty Driver Matrix helps steer you in the right direction with respect to making improvements, you must consider the cost of making those improvements. Senior management needs to balance the insights from the feedback results with the cost (labor hours, financial resources) of making improvements happen. ROI is maximized when you minimize the costs while maximizing employee loyalty. Senior executives of this software company might find that improving communications requires relatively little investment but would result in significant improvements in employee loyalty.

Summary

Loyalty Driver Analysis is a business intelligence solution that helps companies understand and improve the health of the employee relationship. The Loyalty Driver Matrix is based on two key pieces of information: 1) Performance of the work attributes and 2) Impact of those work attributes on employee loyalty. Using these two key pieces of information for each work attribute, senior executives are able to make better business decisions that improve employee loyalty, which in turn improves customer loyalty and accelerates business growth.

Originally Posted at: Using Driver Analysis to Improve Employee Loyalty by bobehayes

Apple partners with IBM on new health data analysis

Apple is part of a collective formed by IBM to develop new technology that will help health care companies analyze patient data collected from millions of wearable Apple devices.

IBM on Monday unveiled Watson Health Cloud, a cloud-based platform that will allow health researchers not only to store and share patient data but also to tap IBM’s data mining and analytics capabilities. IBM’s platform, which harnesses the same cognitive computing power that made Watson a household name to millions of “Jeopardy” fans, draws on the vast amounts of consumer health data that can be collected using Apple’s ResearchKit and HealthKit, frameworks that help developers create apps that can gather and share medical information about their users.

“Our deep understanding and history in the health care industry will help ensure that doctors and researchers can maximize the insights available through Apple’s HealthKit and ResearchKit data,” John E. Kelly III, senior vice president for IBM research and solutions portfolio, said in a statement. “IBM’s secure data storage and analytics solutions will enable doctors and researchers to draw on real-time insights from consumer health and behavioral data at a scale never before possible.”

Apple unveiled HealthKit during its Worldwide Developer Conference in June. The software lets consumers track health-related data and serves as a hub for that information. ResearchKit, which was unveiled last month, is designed to help medical professionals build apps and technologies to assist with various kinds of research.

On Tuesday, Apple announced that it is making ResearchKit available to medical researchers so that they can begin developing new apps. The first wave of ResearchKit-based apps, which are designed to be used for studying asthma, diabetes, breast cancer, cardiovascular disease and Parkinson’s disease, have so far enrolled over 60,000 iPhone users, Apple said.

“Studies that historically attracted a few hundred participants are now attracting participants in the tens of thousands,” said Jeff Williams, Apple’s senior vice president of operations, in a statement Tuesday.

The IBM partnership highlights the increasing focus that the tech sector is putting on health care. Several companies have introduced health-centric gadgets, while others see an opportunity to mine patient data or collect readings on individuals to predict when they’ll get sick and to tailor treatment.

Apple rival Samsung has made a big push in health with its mobile devices, including heart rate monitors and health-focused apps in its Galaxy line of smartphones and Gear Fit. It has also unveiled efforts to develop new sensors and a cloud-based platform for collecting health data.

The Apple Watch, Apple’s foray into the wearables market, is positioned in part as a health and fitness device. It includes features such as activity trackers and vibrating reminders to stand up if you’ve been sitting too long. The device’s Activity app gives you a view of your daily activity, including how many calories you’ve burned, how much exercise you’ve done and how often you’ve stood up to get a break from sitting.

IBM also plans to use HealthKit to build a suite of wellness apps designed to help companies work with their employees to better manage their health needs, from general fitness to acute diseases.

Also partnering with IBM and Apple on the new unit are Johnson & Johnson and Medtronic, a medical device manufacturer.

Originally posted via “Apple partners with IBM on new health data analysis”

Originally Posted at: Apple partners with IBM on new health data analysis

Dec 07, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data interpretation  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Can big data help you get a good night’s sleep? by analyticsweekpick

>> After trying its own data center, Zynga retreats to the cloud by analyticsweekpick

>> #FutureOfData with @theClaymethod, @TiVo discussing running analytics in media industry – Playcast – Data Analytics Leadership Playbook Podcast by v1shal

Wanna write? Click Here

[ NEWS BYTES]

>> ‘Big Data’ resource raises possibility of research revolution – Phys.Org Under Big Data

>> Four Ways of Engagement: Voices from Dreamforce #4 – DMN Under Marketing Analytics

>> Court revises deadlines in lawsuit challenging data security of Trump election commission – Inside Cybersecurity (subscription) Under Data Security

More NEWS ? Click Here

[ FEATURED COURSE]

A Course in Machine Learning


Machine learning is the study of algorithms that learn from data and experience. It is applied in a vast variety of application areas, from medicine to advertising, from military to pedestrian. Any area in which you need… more

[ FEATURED READ]

Big Data: A Revolution That Will Transform How We Live, Work, and Think


“Illuminating and very timely . . . a fascinating — and sometimes alarming — survey of big data’s growing effect on just about everything: business, government, science and medicine, privacy, and even on the way we think… more

[ TIPS & TRICKS OF THE WEEK]

Grow at the speed of collaboration
Research by Cornerstone OnDemand pointed out the need for better collaboration within the workforce, and the data analytics domain is no different. A rapidly changing and growing industry like data analytics is very difficult to keep up with as an isolated workforce. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and an increasing ability to cut through the noise. So, embrace collaborative team dynamics.

[ DATA SCIENCE Q&A]

Q:Explain likely differences between administrative datasets and datasets gathered from experimental studies. What are likely problems encountered with administrative data? How do experimental methods help alleviate these problems? What problem do they bring?
A: Advantages:
– Cost
– Large coverage of population
– Captures individuals who may not respond to surveys
– Regularly updated, allow consistent time-series to be built-up

Disadvantages:
– Restricted to data collected for administrative purposes (limited to administrative definitions. For instance: incomes of a married couple, not individuals, which can be more useful)
– Lack of researcher control over content
– Missing or erroneous entries
– Quality issues (addresses may not be updated or a postal code is provided only)
– Data privacy issues
– Underdeveloped theories and methods (sampling methods…)

Source

[ VIDEO OF THE WEEK]

Surviving Internet of Things


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

If you can’t explain it simply, you don’t understand it well enough. – Albert Einstein

[ PODCAST OF THE WEEK]

Andrea Gallego(@risenthink) / @BCG on Managing Analytics Practice #FutureOfData #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

The largest AT&T database boasts titles including the largest volume of data in one unique database (312 terabytes) and the second largest number of rows in a unique database (1.9 trillion), which comprises AT&T’s extensive calling records.

Sourced from: Analytics.CLUB #WEB Newsletter

United States of America’s CTO Wants You to Kick Ass with Big Data

I recently watched an 8-minute TechCrunch interview of United States of America’s Chief Technology Officer, Todd Park, that got me really excited.  It turns out that the Federal government has a lot of free data. In the interview, Mr. Park encourages developers and entrepreneurs to download these data for the purpose of building new products, services, and companies. Park emphasizes that the President of the United States has fully endorsed the idea that key datasets be made available to the public. The Obama administration recently announced its “Big Data Research and Development Initiative,” committing more than $200 million to new Big Data projects. As Park states in the interview, the government wants entrepreneurs to use the free data to “… kick ass and create useful services for people…” I’d like to try.

Free Data from Data.Gov

So, being the data lover that I am, I examined the different types of data sets on the data.gov site. The data cover a broad range of topics, from Energy and Education to Safety and Health, each including various types of data sets on a given topic. If you like data, have a flair for product development or just like solving problems, I highly recommend you browse the list of free data sets available for download.

I downloaded six data sets from the health.gov site.  Each data set contained unique metrics for each hospital. The six data sets were:

  1. Survey of Patient’s Hospital Experience: Percent of respondents who indicated top box response (e.g., “always;” overall rating of 9-10; Yes, Definitely recommend.) across seven customer experience questions and two patient loyalty questions.
  2. General Hospital Information: Describes the hospital type and the owner.
  3. Outcome Measures: Includes three mortality rates and three readmission rates for: heart attack, heart failure, and pneumonia
  4. Process of Care Measures: 12 measures related to surgical care improvement
  5. Hospital Acquired Condition (HAC) Measures:  Percent of patients who acquire HAC.
  6. Medicare Spend per Patient: This measure shows whether Medicare spends more, less, or about the same per Medicare patient treated in a specific hospital, compared to how much Medicare spends per patient nationally.

My Big Data and Patient Experience Management

Analyzing each separate data set would provide insight about the metrics contained in that data set. What is the distribution of hospital types? What is the average patient rating across hospitals? What is the typical mortality rate across all hospitals? What is the average Medicare spend across hospitals? While the answers to these questions do provide value, the true value of Big Data lies in understanding the relationships (in a statistical sense) among different variables. By understanding relationships among different metrics, you can build predictive models that help explain the reasons behind the numbers (e.g., Are mortality rates related to patient satisfaction? Do efficient hospitals deliver better service?).

To understand the relationships among different variables, I merged the six data sets into one Big Data set; in its basic form, this super data set included 4,610 hospitals for which I had all the metrics from each data set, including patient satisfaction, mortality rate, and Medicare spend. Using this Big Data set, I will be able to examine how the variables are related to each other, building predictive models of patient satisfaction/loyalty ratings. The analysis of these different metrics may help hospitals understand how to deliver a better patient experience through customer experience management practices.
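
A minimal sketch of that merge, assuming pandas and hypothetical file and key names (the actual data.gov extracts use their own identifiers):

```python
from functools import reduce
import pandas as pd

# Hypothetical file names for the six downloaded data sets; each has one row per hospital,
# keyed by a common provider identifier (assumed here to be "provider_id").
files = ["patient_survey.csv", "hospital_info.csv", "outcome_measures.csv",
         "process_of_care.csv", "hac_measures.csv", "medicare_spend.csv"]
frames = [pd.read_csv(f) for f in files]

# Outer-join on the hospital key so no hospital is silently dropped, then keep
# only hospitals that have values for every metric set.
merged = reduce(lambda left, right: left.merge(right, on="provider_id", how="outer"), frames)
complete = merged.dropna()
print(len(merged), "hospitals merged;", len(complete), "with all metrics")

# With one wide table, cross-data-set questions become one-liners, e.g. (column names hypothetical):
print(complete["mortality_heart_attack"].corr(complete["overall_rating_top_box"]))
```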

My Analytics Plan

In upcoming posts, I will present the analysis of these hospital data. I am not an expert in patient care, but I do understand the metrics well enough to give it the ol’ college try. In my analyses, I will try to accomplish a few things. Here are three that immediately come to mind.

  1. Create Meaningful Patient Metrics. To accomplish this, I will look at many metrics simultaneously via a factor analysis. This approach will help me see if I can aggregate/combine some questions together into a single metric (e.g., average all seven patient experience ratings into one metric). The ultimate goal is to create a metric that is reliable, valid and useful.
  2. Understand Predictors of Patient Satisfaction.  I will use correlational and regression analysis to understand the drivers of patient loyalty (see the sketch after this list). In addition to using patient experience ratings in the analyses, I will also be able to include objective hospital metrics (e.g., mortality rates, process measures, Medicare spend) to understand many more factors that could impact patient loyalty.
  3. Understand Merits of Different Hospital Metrics. How do you measure the quality of a hospital? Is patient satisfaction/loyalty the best hospital metric? Is mortality rate? By simultaneously looking at different performance metrics for many hospitals, we can understand what each metric means in the context of all other metrics. Creating an overall hospital quality metric can only be accomplished when we understand how all metrics are related to each other.
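
A rough sketch of steps 1 and 2, reusing the hypothetical `complete` table from the merge sketch above (scikit-learn and statsmodels assumed; all column names are placeholders):

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

experience_cols = [c for c in complete.columns if c.startswith("hcahps_")]  # the 7 experience items
loyalty_col = "recommend_top_box"

# 1. Factor analysis: do the experience items load on a single underlying factor?
fa = FactorAnalysis(n_components=1).fit(complete[experience_cols])
print(pd.Series(fa.components_[0], index=experience_cols))  # loadings on the one factor

# 2. Regression: which hospital metrics predict patient loyalty?
predictors = complete[experience_cols + ["mortality_heart_attack", "medicare_spend_ratio"]]
model = sm.OLS(complete[loyalty_col], sm.add_constant(predictors)).fit()
print(model.summary())
```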

If you have any ideas on how I can analyze these data, I would love to hear them.

I will be watching The Health Data Initiative (HDI) Forum (The Health Datapalooza) (June 5 and 6) via webcast to learn what other entrepreneurs are doing in the area of healthcare data. The HDI is a public-private collaboration that encourages innovators to utilize health data to develop applications that raise awareness of health system performance and spark community action to improve health.

Source: United States of America’s CTO Wants You to Kick Ass with Big Data by bobehayes