3 ways to boost revenue with data analytics

Financial management

In a mere decade, the physician practice revenue cycle has been transformed. Gone are the days when most patients had $10 or $20 co-payments and their insurance companies generally paid claims in full. Physicians can no longer order lab work and tests according to their preference without considering medical necessity. And as patients shoulder rising care costs, they have become payers themselves, and they’re not quite accustomed to this role.

All of these factors have led to an increasingly complex and challenging revenue cycle — one that requires innovation. “Doing more with less” may be a cliché, but it rings true for physician practices striving to thrive financially while providing the highest quality care. However, with the myriad of new initiatives and demands vying for their time, revenue cycle managers and practice leadership may ask, “Is it even possible to do more with less?”

Surprisingly, the answer is “yes” for most practices. Fortunately, you can achieve this goal by leveraging something you already have, or can obtain, within the four walls of your practice: knowledge.

Not many practices can afford to purchase technology strictly for analytics and business intelligence. Additionally, in an environment where challenges such as health reform and regulatory demands take substantial time and attention, practices don’t have the luxury of adding resources to tackle such efforts. Nonetheless, practices can jump-start their analytics efforts and fuel more informed decisions via their clearinghouse. By reviewing clearinghouse reports — both standard and custom — you can identify revenue cycle trends, spot problems and test solutions such as process improvements.

Here’s how you can leverage data to achieve revenue cycle improvement goals such as decreasing days in accounts receivable (A/R), reducing denials and optimizing contract negotiations with payers.

1. Reduce denials and rejections
Effectively managing denials and rejections has always been one of physician practices’ greatest revenue cycle challenges. The more denials and rejections a practice has, the more likely key metrics such as days in A/R are to underperform, since the practice isn’t getting paid in a timely manner. Denials and rejections are just two of many causes of cash flow delays, but once the reasons behind them are identified, practices can eliminate unproductive rework, improve days in A/R and increase profitability because payment comes in more quickly. These basic revenue cycle challenges, coupled with more stringent medical necessity requirements and value-based reimbursement, make denial management more important than ever.
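For reference, days in A/R is commonly calculated as total accounts receivable divided by average daily charges. A minimal sketch of that arithmetic, with invented dollar figures:

```python
# Days in A/R = total receivables / average daily charges.
# The figures below are made up for illustration.
total_ar = 450_000.0           # outstanding accounts receivable ($)
gross_charges_90d = 900_000.0  # gross charges over the last 90 days ($)

avg_daily_charges = gross_charges_90d / 90
days_in_ar = total_ar / avg_daily_charges
print(round(days_in_ar, 1))  # 45.0
```

A practice tracking this number monthly can see directly whether denial-reduction efforts are shortening the payment cycle.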

Since ineligibility is often a leading cause of denials, a denial reduction strategy begins in the front office with quality eligibility information. An automated eligibility process gives front-office staff the data they need while also reducing errors. Having staff check eligibility before patients are seen sets the stage for a more informed discussion of patient financial responsibility while also ensuring proper claims submission and reducing write-offs. Denial reports by reason are another important tool; they can help practice managers identify staff or processes that require additional training.

A customized rejection report can help your team stay abreast of changing payer requirements and identify emerging patterns. Your clearinghouse should be able to generate a quarterly or monthly report that shows the most common reasons for claims rejections. Make sure the report details this information by practice location; staff at high-performing locations may be able to offer tips and advice to other offices with higher rejection rates.
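In spirit, such a report is just a tally of rejection reasons by location. A minimal sketch, with invented records rather than any real clearinghouse export format:

```python
from collections import Counter, defaultdict

# Hypothetical rejection records; field names and reasons are
# illustrative, not any specific clearinghouse's export format.
rejections = [
    {"location": "Downtown", "reason": "Invalid member ID"},
    {"location": "Downtown", "reason": "Invalid member ID"},
    {"location": "Downtown", "reason": "Missing NPI"},
    {"location": "Westside", "reason": "Invalid member ID"},
]

# Tally rejection reasons per practice location.
by_location = defaultdict(Counter)
for rec in rejections:
    by_location[rec["location"]][rec["reason"]] += 1

# Report each location's most common rejection reason.
for loc, counts in sorted(by_location.items()):
    reason, n = counts.most_common(1)[0]
    print(f"{loc}: {reason} ({n})")
```

The per-location breakdown is what makes the comparison between high- and low-performing offices possible.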

Practice leadership can email the report and an analysis of patterns and trends to the entire team. An excellent tool to educate managers, coders and billing staff, this email can highlight areas for improvement or where additional training is required. This analysis should be simple and easy to comprehend, providing a quick snapshot of rejections along with practical ideas for improvements. The goal is for staff to be able to make adjustments to day-to-day work processes simply by reviewing the email. It can even generate some healthy competition as teams at different locations strive to make the greatest improvements.

2. Identify problematic procedures and services
In an era of value-based reimbursement, knowing which codes are prone to reimbursement issues can help your practice navigate an increasingly tricky landscape for claims payment. This information can be particularly helpful as you acclimate your practice to each payer’s value-based methodology, such as bundled payments or shared savings. A report showing denials by code and per physician can generate awareness regarding potentially problematic claims submission. It can facilitate team education regarding coding conventions, medical necessity rules and payer requirements.
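A denials-by-code-and-physician report reduces to a simple count table. A sketch with hypothetical CPT codes and physician names:

```python
from collections import defaultdict

# Hypothetical denial records as (code, rendering physician) pairs;
# the codes and names are invented for illustration.
denials = [
    ("99214", "Dr. Lee"), ("99214", "Dr. Lee"),
    ("99214", "Dr. Patel"), ("80053", "Dr. Patel"),
]

# Build a code-by-physician denial count table.
table = defaultdict(int)
for code, physician in denials:
    table[(code, physician)] += 1

# Flag any code/physician pair with more than one denial for follow-up.
flagged = [(c, p, n) for (c, p), n in table.items() if n > 1]
print(flagged)  # [('99214', 'Dr. Lee', 2)]
```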

3. Improve contract negotiations
Clearinghouse reports aren’t just useful for education and improvements within your practice; they can also provide valuable insights as you review payer contracts and prepare for negotiations. In payer-specific reports, look for trends such as the average amount paid on specific codes over time. Compare these averages with your other payers, and go into negotiations armed with this data.
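Pulling those payer averages out of remittance data is a grouping exercise. A sketch with invented payment records:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical remittance data: (payer, CPT code, year, paid amount).
payments = [
    ("PayerA", "99213", 2014, 72.0), ("PayerA", "99213", 2014, 70.0),
    ("PayerA", "99213", 2015, 68.0),
    ("PayerB", "99213", 2015, 81.0),
]

# Group paid amounts per payer/code/year for side-by-side comparison.
avg = defaultdict(list)
for payer, code, year, amount in payments:
    avg[(payer, code, year)].append(amount)

for key in sorted(avg):
    print(key, round(mean(avg[key]), 2))
```

Comparing the same code across payers and across years is exactly the kind of trend data worth bringing into a negotiation.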

A recent survey by the College of Healthcare Information Management Executives (CHIME) indicates that data analytics is the top investment priority for senior executives at large health systems, trumping both accountable care and ICD-10. Their reason: quality improvement and cost reduction are best achieved by evaluating organizational data.

Physician practices can obtain the necessary data to optimize revenue without making costly technology investments. Whether your practice has two physicians or 200, the black-and-white nature of claims data can be invaluable. It can help you evaluate revenue cycle performance, identify problems, drive process changes and ultimately improve cash flow, simply by coupling your newfound knowledge with analytical and problem-solving skills.


Source: 3 ways to boost revenue with data analytics by analyticsweekpick

#FutureOfData with Rob(@telerob) / @ConnellyAgency on running innovation in agency – Playcast – Data Analytics Leadership Playbook Podcast


In this podcast, Rob Griffin from Almighty(X), a Connelly partner company, sat down with Vishal Kumar to discuss how to run innovation in a media agency.

Here’s Rob’s Bio:
Driving transformational innovation within marketing and advertising. Pushing creative and media technology limits. Helping brands take ownership of their technology, data, and media for greater transparency and accountability. Putting the agent back in the agency. Been working in digital marketing and advertising since 1996. A Bostonian. A die hard Celtics fan. Dad. Speaker. Writer. Advisor. Skier. Comic book fan. Lover of good eats.

Originally Posted at: #FutureOfData with Rob(@telerob) / @ConnellyAgency on running innovation in agency – Playcast – Data Analytics Leadership Playbook Podcast by v1shal

Africa: Is Big Data the Solution to Africa’s Big Issues?

At the peak of the Ebola epidemic in West Africa, a Kenyan start-up created an SMS-based reporting system that allowed communities in Sierra Leone to alert the government to new infections and response efforts in different areas of the country.

Echo Mobile would then forward the texts sent by citizens and health workers to the Central Government Co-ordination Unit, which analyzed the data through a system developed by IBM’s Africa research lab.


The data has helped the government map the spread of Ebola and quickly respond to new infections while at the same time managing the epidemic in the affected communities. Echo Mobile has demonstrated how the continent can leverage simple data to respond to real situations and create precise, effective solutions in good time.


While the most accepted definition of big data is simply data sets so massive that they require supercomputers to analyze and make sense of, IBM has deconstructed the term with the 4 Vs that data must exhibit to qualify as ‘big data’ – volume, variety, veracity and velocity.

IBM estimates 2.5 quintillion bytes of data, or 2.5 billion gigabytes, is generated every day as a result of a world increasingly dependent on the internet and connected devices. It is further estimated that 90 percent of the world’s data has been created in the last two years. For perspective, Google announced that 100 hours of video were uploaded to YouTube every minute in 2014.
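Those figures are easy to sanity-check: 2.5 quintillion bytes per day works out to 2.5 billion gigabytes in decimal units:

```python
# Sanity-check the article's figure: 2.5 quintillion bytes per day
# expressed in gigabytes (decimal units, 1 GB = 10**9 bytes).
bytes_per_day = 2.5e18
gigabytes = bytes_per_day / 1e9
print(f"{gigabytes:.1e} GB/day")  # 2.5e+09 GB/day, i.e. 2.5 billion
```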

A huge variety of data – from CCTV cameras, social media, voice and text – is churned out every second from just as many sources, and at nearly the rate at which it is forgotten.

“But it is not enough to have all this data if you cannot verify its authenticity and that’s where veracity comes in,” explains Cory Wiegert, IBM Software Group’s Product Director for Africa. “By these standards (4Vs), you will find that all data is big data.”

Wiegert says the end game of big data is to find context and meaning by deploying intelligent analytics that enable users to make better decisions. He gives the example of IBM’s cloud application Watson, which has super analytic capabilities and gives users sophisticated visualizations.

“Watson feeds on volumes of data. We can feed Watson with loads of medical data – from oncology journals to patient files – so doctors get a more detailed picture when treating cancer patients or in research,” explains Wiegert.

Big data in Africa

While the 4Vs threshold captures big data in mature markets, emerging markets in Africa present a unique challenge to data scientists. Verifying the authenticity of data and a lack of an entrenched data collection and data-driven decision-making culture complicates the roll-out of big data projects. However, there are pockets of change across the continent.

“The business community in Africa is starting to take interest in big data. Through social media analytics, businesses are getting insights on what consumers are saying about their brands and services. This ultimately leads to innovation and improved service delivery as businesses adapt to the needs of consumers,” says Wiegert.

IDG Connect research revealed that Kenya and Nigeria are ahead of the curve in adopting big data solutions, with 75 percent of respondents in the process of deploying, or planning to deploy, big data projects.

Still, the capacity to implement such projects in these two countries is low, pointing to a lack of awareness of the full ROI that big data can deliver.

Odang Madung, co-founder of Odipo Dev, a Nairobi-based data startup, says sectors that are growing their user bases can immediately reap the benefits of data analysis.

“Very many industries could benefit depending on how you think of it, but the ones that are especially ripe for the challenge are telecommunication, finance, retail and media companies,” says Madung.

Kenya Power, for instance, recently deployed an automated system that will not only consolidate customer data collection from 10 different sources, but also mine and analyze customer data.

The analytics solution gives Kenya Power the ability to perform complex queries on the data, yielding better insights into the varying needs of customers across different regions.

Mobile operators receive loads of data per day in the form of voice, internet data and texts. Privacy issues aside, allowing data scientists to comb through a particular data set can help tailor solutions to specific regions.

A 2008-2009 study by a team of researchers from the Harvard School of Public Health, KEMRI and Carnegie Mellon University revealed how the incidence of malaria spread from the Lake Victoria region to the rest of the country. The researchers monitored the movement of 15 million Kenyans using 11,920 cell towers and compared that data with Ministry of Health records showing the number of people with malaria.

While the insights on the correlation between the movement of people and malaria prevalence were useful, creating timely and precise interventions for at-risk communities was of particular interest to the researchers. MIT Technology Review notes the research is the largest attempt to use data from cell phones as an epidemiological tool.

However, according to James Gicheru of Dimension Data, for African countries to move from piecemeal temporary projects to wide scale continuous deployment, the foundation of automated processes needs to be laid first.

“The health sector will have to take significant strides in further embracing IT. For example, the first step would be to have a consolidated national healthcare system. This would go a long way in providing insight to the government for planning purposes and medical research,” says Gicheru.

Madung adds another angle, saying data by itself has limitations, and that context is needed to gain the maximum benefit from unstructured data.

“Big data in some way needs big theory. Data science teams must include at least one person conversant with the domain they are dealing with. This kind of data can quite often lead people to spurious conclusions, and that can be mitigated with proper domain expertise and context,” says Madung.

Better results

Mbwana Alliy, Managing Partner of Savannah Fund, sees big data deepening financial services in Africa, where companies like MoDE and First Access are creating intelligent credit risk scoring through mobile money systems, “where banks in Africa have been too cautious in the past given lack of either collateral or credit history.”

Alliy says big data, combined with machine learning, can transform education and health in the continent.

“Whilst the growth of mobile devices such as tablets will help bring content to students, there is a big data opportunity to deliver test-taking and content systems that measure and adapt to students’ challenges and learning,” says Alliy.

Alliy is of the opinion that big data is not only a useful tool that can transform businesses and governments, but is also evolving into the core business in some areas.

“Big data is now disrupting the taxi industry because of the way it can efficiently match and predict the demand and supply of transportation services… Uber is really a big data company disguised as a taxi-ordering app.”

Dr. Gilbert Saggia, Kenya Country Manager for Oracle, more or less agrees with Alliy, predicting that companies will convert big data into data capital.

“Data is now a kind of capital. It’s as necessary for creating new products, services and ways of working as financial capital. For CEOs, this means securing access to, and increasing use of, data capital by digitizing and datafying key activities with customers, suppliers, and partners before rivals do,” says Saggia.

In agriculture, big data analysis is allowing farmers in parts of Africa to get better yields by capturing accurate data on rainfall, soil, market prices and other variables, enabling better decision-making.

But in spite of the potential of big data in Africa, a cultural and structural paradigm change needs to happen in the continent. Governments need to fast-track automation of processes, allow researchers to access data sets within the law and more importantly, act decisively on the outcomes of big data analysis.


Originally Posted at: Africa: Is Big Data the Solution to Africa’s Big Issues? by analyticsweekpick

IBM and Hadoop Challenge You to Use Big Data for Good

Big Data is about solving problems by bringing technology, data and people together. Sure, we can identify ways to get customers to buy more stuff or click on more ads, but the ultimate value of Big Data is in its ability to make this world a better place for all. IBM and Hadoop recently launched the Big Data for Social Good Challenge for developers, hackers and data enthusiasts to take a deep dive into real world civic issues.


Individuals and organizations are eligible to participate in the challenge. Participants, using publicly available data sets (IBM’s curated data sets or others – here are the data set requirements), can win up to $20,000. Participants must create a working, clickable, interactive data visualization using the Analytics for Hadoop service on IBM Bluemix. The official rules page is here.

Go to the Big Data for Social Good Challenge page to learn more about how to enter the challenge and how entries will be judged (full disclosure: I’m a judge).

Source: IBM and Hadoop Challenge You to Use Big Data for Good

5 Steps to Proofing with Big Data


A Big Data project is “BIG” in terms of the resources it requires. So if a business lacks adequate resources but wants to venture into one, how should it go about it? Proofing a big data project is tricky and should be planned with utmost caution. The following five points will serve as a guide to help businesses run better big data proofs of concept.

1. Use Your Own Data, But Let Them Model It:
The first rule for running an effective Big Data POC is to use your own real, production data for the evaluation. If that is not possible, develop a reasonably representative data set that can be handed over to vendors. Also, no matter how tempting it is to model the data yourself, let the vendors play with and model it. You may have inherent biases and want the model to cater to your immediate needs; instead, convey your requirements to the vendors and let them carve out the best solution for all current business scenarios.

2. Prepare for realistic Hardware:
First, understand that this is a Big Data POC: it has to accommodate ever-increasing data demands and must be scalable. So the POC should involve hardware that can easily scale to fit your business needs. Hardware is often the party pooper when it comes to implementation, so make sure to get it right. Go into deep discussion with your vendors about your data, its likely growth and your business requirements. As a rule of thumb, a POC data set should be at least 2-4TB and 18%-25% of your production load. What makes good hardware is another topic for another day, but work out the kinks around hardware scalability with your vendors.
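The sizing rule of thumb above can be captured in a small helper. The 2-4TB floor and the 18%-25% fractions come from the article; everything else in the sketch is an assumption:

```python
def poc_dataset_size_tb(production_tb: float) -> tuple[float, float]:
    """Suggested POC data set range in TB: at least 2-4TB, and
    roughly 18%-25% of the production load, whichever is larger."""
    low = max(2.0, production_tb * 0.18)
    high = max(4.0, production_tb * 0.25)
    return low, high

lo, hi = poc_dataset_size_tb(40.0)
print(f"POC target: {lo:.1f}-{hi:.1f} TB")  # POC target: 7.2-10.0 TB
```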

3. Include All Of Your Workload:
Another key issue is how the workload is represented in the data set. Make sure to include all your workflows in the data; this will help the modeler ensure that insights are generated across your current business processes. The more you plan upfront, the better, faster and cheaper the trajectory. It is a common perception in the Big Data space that 80% of the work is data preparation, so give it its due attention; the cost of a shortcut is huge and will come back to bite you. Split your workload between immediate pain points/needs, the mission-critical path, and other surrounding scenarios.

4. Let Them Change & Work With Them:
One point where many businesses go wrong is continually exerting influence on vendors to steer their play. This is not the time or place for that. You are investing not only in vendors’ help but also in their perspective. Make sure they have enough room to play with your load and workflows, and work with them to figure out the costs and kinks around their findings. By giving them room, you ensure no pitfalls lurk in your workflows, and you get to evaluate each vendor’s capability in handling your load.

5. Grab The Wheel & Take Her On A Ride:
It is not all a waiting-and-paying game for you. Every now and then, make provision to drive the thing yourself and experience what is coming. This will not only help you understand where your big data proof is going, but will also give your vendors valuable pointers on what to expect from the model. So plan your test-drives and inform vendors ahead of time so they can plan accordingly.

Building a proof of concept is no small task; it should be a learning curve that everyone commits to. Plan your schedule accordingly and, if possible, afford multiple vendors the chance to work with your data; this gives you a safety net on what works best. A proof is expensive if not done properly upfront, so never shy away from giving it its due. Otherwise, as the common saying goes: it will always cost you more and take longer.


What to Look for in a Healthcare Big Data Analytics Vendor

Healthcare big data analytics is a booming business, which is both a good and a bad thing for providers seeking to bulk up their infrastructure to supplement their EHRs with sophisticated tools for clinical analytics, population health management, and predictive insights.

The number of up-and-coming big data vendors is growing every day as providers recognize the need to treat data as a resource instead of a burden, and picking a winner out of the pack isn’t always easy for healthcare organizations constrained by finances and concerned about developing long-term, effective partnerships.

If you understand your healthcare big data analytics technology options, are preparing to put your team into action, and are ready to move forward with a strategy to harness big data as a way to drive quality improvements and organizational efficiencies, it’s time to dive into the murky world of vendor selection.

HealthITAnalytics.com explores what to look for in a healthcare big data analytics vendor in order to ensure that a provider gets the right technology for its needs in the short term while keeping options open for shifting and changing strategic goals.

Matching what you have to what you want

As specialists trying to participate in the EHR Incentive Programs have learned to their cost, one size doesn’t fit all when it comes to health IT initiatives.  A large, well-known corporation may boast brand recognition and a client list a mile long, but not all healthcare organizations – or big data sets – are created equal.

Healthcare organizations must have a clear idea of what their data sets look like before they can match their needs and goals to a service provider.  Those that have invested heavily in structuring their EHR input may wish to begin their big data programs with general clinical analytics, as many hospitals do.  Others focused more on research, complex cases, or bolstering their clinical decision support might want to turn to companies that offer cognitive computing or natural language processing that can comb through bulky narrative text.

Providers must also examine their existing infrastructure and decide whether they can build upon technologies already in place, or if they would prefer to rip everything out and start again.  Can the vendor accommodate your legacy systems?  Do you need to invest in basic infrastructure like a data warehouse or master patient index in order to benefit from your potential vendor’s wares?  What are the costs involved in bringing your infrastructure up to baseline, and how long will it take to see a satisfactory return on these investments?

The majority of healthcare organizations do not feel fully prepared to tackle these questions at the moment, but that is quickly changing as experience replaces trepidation.  Healthcare big data analytics is a messy business at the best of times, but don’t let an overeager vendor trivialize how much work must be done in order to get the most out of a contract.

A commitment to interoperability and data standards

Vendors must treat interoperability as more than a buzzword these days as federal agencies, consumers, payers, and patients all crack down on data siloes that make big data analytics such a headache.  After Congress raised questions about vendors who actively block the type of information sharing that is vital for care coordination and population health management and the ONC responded with a widely-read report on the matter, vendors have started to change their tune on interoperability.

The rise of interoperability coalitions like Carequality and the CommonWell Health Alliance may make it a little easier for healthcare providers to identify vendors who are committed to health information exchange, but even the combined might of both organizations does not include a majority of the big data analytics companies on the market.

It is up to healthcare providers to ask about the foundations of a vendor’s technologies and how they will interact with other products, providers, and partners.  A few important questions to ask include:

• Is your product built on open standards or proprietary architecture?  Does it accept APIs, and is anyone actively developing them?

• How easy will it be for my organization to participate in large-scale analytics or health information exchange with a state or local entity, my accountable care organization, public health departments, and research organizations?

• How will your product interface with my existing health IT systems?  What sort of user experience can my clinicians and other staff expect?

• Have you considered the growing importance of medical device integration and the Internet of Things?  How will your technology adapt to the need to integrate additional data sources as patient-generated health data becomes more critical to providing quality care?

Transparent business practices and pricing structures

Taking the pledge for interoperability is just one part of having sound business practices that will encourage long-term partnerships.  While the ONC’s data blocking report may have reportedly spooked some vendors into dropping data exchange fees, the question of who has the rights to demand cash for patient data in motion and at rest has sparked some serious debates.

In 2013, the ONC released a guide for providers looking to negotiate EHR replacement contracts, urging them to pay attention to terms that would limit the transfer of patient information to a new system or cut off access to data during a dispute.  The advice about contract negotiation applies equally to an EHR system or a big data technology, each of which can be licensed for use on an organization’s own technology or provided as a service in the cloud.

The ONC warns providers to pay close attention to liability language that may exonerate the vendor from any responsibility should patient harm arise from unexpected downtime, a privacy violation, or an error or omission in the data.  “Developer contract language often includes indemnification language that shifts liability to you without regard to the cause of the problem or whose ‘acts or omissions’ may have given rise to the claim,” the guide says.

“You may want to negotiate with the EHR technology developer a mutual approach to indemnification that makes each party responsible for its own acts and omissions, so that each party is responsible for harm it caused or was in the best position to prevent,” the ONC suggests.

The guide also suggests courses of action for dispute resolution, intellectual property issues, warranties, and confidentiality agreements.  Most vendors are willing to negotiate these terms to some degree, but be wary of those who insist on an all-or-nothing approach. Before signing on the dotted line, providers should be sure they are clear about their expectations and responsibilities, as well as ensuring they understand the pricing structures for data storage and transfer without falling victim to hidden fees or sudden hikes in a payment plan.

A balance of track record and innovation

Healthcare big data analytics is all about discovering novel and ingenious ways to use information, but providers investing millions of dollars in new infrastructure want to be sure that they aren’t throwing money down the drain.  Despite the general enthusiasm around embracing new ideas for analytics, executive leaders are still a relatively conservative bunch.

This year’s HIMSS Leadership Survey indicated a very high level of board room support for expanding innovative health IT and data analytics capabilities, yet more than a third of organizational leaders would prefer if that innovation had been tested at another organization first.  Just 24 percent of respondents said that their executive leaders were “open to trying ‘bleeding edge’ technology,” which puts big data analytics purchasers in a quandary.  After all, someone has to be the first one to try something new – and to possibly reap the rewards of being adventurous.

But investing in start-up technology companies with big dreams and little real-world experience can be a risky proposition for providers who are looking to stretch every dollar they invest.  Venture capital investment in population health management and analytics companies is through the roof, but not every outfit that receives funding gets bought by a major player or scores a huge IPO.

Healthcare organizations should look for vendors who have secured adequate funding for their products, have working, bug-free examples of their software or hardware to display, offer robust customer support services, have firm timelines and plans for implementation, and don’t make promises they seem unlikely to be able to keep.

The ability to expand and grow with you as strategic plans change

Healthcare organizations are constantly being bombarded with new initiatives, shifting goals for federal mandates, and major changes to health IT programs, reimbursement structures, and quality improvement goals.  As the industry begins to embrace value-based payments and care structures driven by the need to provide high quality services and produce better outcomes, organizational needs and goals must be flexible.

Vendors have to be flexible, too, and be able to provide the right insights at the right time for the task at hand.  While technology turnovers are inevitable as new capabilities and standards move through the market, healthcare providers are looking for products that can carry them through at least a few years of turmoil without requiring a complete overhaul.

Healthcare providers can help themselves make the right choices by having a solid strategic vision for their organization over the next three to five years as meaningful use winds down and accountable care heats up.  Providers may wish to ask themselves:

  • How will I tackle population health management and the increasingly expensive proposition of caring for patients with complex chronic disease needs?  Will our patient demographics change significantly over the next few years?  How can we be proactive about addressing their needs?
  • How will the shift to value-based reimbursement drive the need for improved operational efficiencies within my organization, and how do I think big data will help?
  • What data exchange and interoperability capabilities do I need to ensure care coordination across the continuum?  How can my business partners and I work together to bring data-driven healthcare insights to our community?
  • What patient safety and care quality goals are we hoping to meet?  How can gaining deeper insights into our clinical care produce better patient outcomes?
  • What revenue cycle management issues do we need to address?  Can we turn patient behavior data into better collections, or will an investment in preventative care keep high-cost services to a minimum?
  • How can we improve our data integrity and data governance to maximize our investment in healthcare big data analytics?  Do we need to retrain our EHR users, hire more health information management professionals, or build a dedicated team of data scientists?

Healthcare big data analytics is such a rapidly expanding field that capabilities that seem commonplace today didn’t exist five years ago, and will probably be outdated five years from now.  But understanding your organizational objectives will help you make the best possible decisions with the information available at the moment, and hopefully set up your big data program for long-term future success.

Choosing the right vendor is a critical component of seeing the benefits of big data, and providers should not underestimate the degree to which open communication during this type of ongoing partnership will be required.

After thoroughly considering how a technology purchase will impact their goals, providers should look for stable, responsible, capable, and innovative vendors that offer high quality products with transparent, reasonable pricing structures if they wish to be pioneers in the field of big data.

Originally posted via “What to Look for in a Healthcare Big Data Analytics Vendor”

Using Driver Analysis to Improve Employee Loyalty

Researchers have shown a consistent relationship between employee attitudes and customer attitudes. Specifically, they have found that satisfied/loyal employees, compared to dissatisfied/disloyal employees, have more satisfied customers. Examining different bank branches, Schneider & Bowen (1985) found that branches with satisfied employees have customers who are more satisfied with service and are less likely to churn compared to branches with dissatisfied employees. Companies must consider employees’ needs and attitudes as part of their overall Customer Experience Management (CEM) strategy. Employees, after all, impact everything customers see, feel, and experience. From marketing and sales to service, employees impact each phase of the customer life cycle, either strengthening or weakening your company’s relationship with the customer.

Ensuring employees are satisfied and loyal is essential to building long-lasting relationships with your customers. In my prior post, I presented an employee survey that you can use to ensure you are providing your employees with the necessary tools, information, work environment and support for them to be satisfied with and successful at their job. In this week’s post, I will demonstrate how to analyze the resulting data from that employee survey. The goal of the analysis is to help you prioritize efforts to improve the quality of the employee relationship.

The Optimal Employee Survey

Your optimal employee relationship survey needs to include a set of questions that are designed to help you improve the employee experience at work and employee loyalty. I have created an employee survey, the Employee Relationship Diagnostic, that measures the four key areas regarding the employee relationship. These sections and their questions are:

  1. Employee Loyalty – 3 questions (overall sat, recommend, intent to leave)
  2. Employee Experience – 26 employee experience questions for work attributes across the employee life cycle
  3. Relative Performance – 2 questions asking about competitive ranking and reasons behind ranking
  4. Company-Specific Questions – (e.g., reasons driving ratings, demographics)

This employee survey is designed to help companies gain key employee insights in 4 areas: 1) Determining employee loyalty and satisfaction levels; 2) Identifying reasons behind dis/loyalty; 3) Prioritizing improvement efforts; 4) Gaining competitive benchmark.

Analyzing the Employee Survey Data: Two Key Pieces of Information

After the employee survey is conducted and the employees have provided their feedback, the next step is analyzing the survey data. We will focus on two of the sections of the survey: Employee Loyalty and Employee Experience. Using the Employee Relationship Diagnostic, here are the measures:

  1. Employee Loyalty: Measures that assess the likelihood of engaging in positive behaviors. I use three questions to measure employee loyalty: 1) Overall satisfaction, 2) Likelihood to recommend and 3) Likelihood to leave (reverse coded). Using a 0 (Not at all likely) to 10 (Extremely likely) scale, higher ratings indicate higher levels of employee loyalty. A single employee loyalty score, the Employee Loyalty Index (ELI), is calculated by averaging the responses across the three loyalty questions.
  2. Satisfaction with the Employee Experience:  Measures that assess the quality of the employee experience. The employee survey includes 26 specific employee experience questions that fall into five general work areas: 1) senior management, 2) focus on the customer, 3) training, 4) performance management and 5) Compensation. Using a 0 (Extremely Dissatisfied) to 10 (Extremely Satisfied) scale, higher ratings indicate a better employee experience (higher employee satisfaction).
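The ELI calculation in item 1 is simple enough to sketch in code. Here is a minimal Python version (the function name and the sample ratings are mine, not part of the Diagnostic); note that "likelihood to leave" must be reverse-coded before averaging:

```python
def employee_loyalty_index(overall_sat, recommend, intent_to_leave):
    """Employee Loyalty Index: mean of the three loyalty ratings (0-10 scale).
    'Likelihood to leave' is reverse-coded so higher always means more loyal."""
    return (overall_sat + recommend + (10 - intent_to_leave)) / 3

# An employee who rates satisfaction 9, recommendation 8,
# and likelihood to leave 2 gets an ELI of (9 + 8 + 8) / 3.
eli = employee_loyalty_index(9, 8, 2)
```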

Summarizing the Data

You need to understand only two things about each of the 26 employee experience questions: 1) How well you are performing and 2) The impact on employee loyalty (e.g., how important it is in predicting employee loyalty):

  1. Performance:  The level of performance is summarized by a summary statistic for each employee experience question. Different approaches provide basically the same results; pick one that senior executives are familiar with and use it. Some use the mean score (sum of all responses divided by the number of respondents). Others use the “top-box” approach which is simply the percent of respondents who gave you a rating of, say, 9 or 10 (on the 0-10 scale).  So, you will calculate 26 performance scores, one for each work attribute. Low scores reflect a poor employee experience while high scores reflect good employee experience.
  2. Impact:  The impact on employee loyalty can be calculated by simply correlating the ratings of the work attribute with the employee loyalty ratings. This correlation is referred to as the “derived importance” of a particular work attribute. So, if the survey has measures of 26 work attributes, we will calculate 26 correlations. The correlation between the satisfaction scores of a work attribute and the employee loyalty index indicates the degree to which performance on the work attribute has an impact on employee loyalty behavior. Correlations can be calculated using Excel or any statistical software package. Higher correlations (max is 1.0) indicate a strong relationship between the employee experience and employee loyalty (e.g., the work attribute is important to employees). Low correlations (near 0.0) indicate a weak relationship between the employee experience and employee loyalty (e.g., the work attribute is not important to employees).
Figure 1. Employee Loyalty Driver Matrix helps you prioritize improvement initiatives.
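The impact calculation described above is an ordinary Pearson correlation. A minimal Python sketch (the sample ratings below are invented for illustration; in practice you would correlate each of the 26 attributes with the ELI across all respondents):

```python
def pearson_correlation(xs, ys):
    """Pearson correlation: the 'derived importance' of a work attribute
    when xs are attribute satisfaction ratings and ys are loyalty scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Five employees' ratings of one work attribute vs. their ELI scores
attribute_ratings = [3, 5, 6, 8, 9]
loyalty_scores = [4, 5, 7, 8, 10]
impact = pearson_correlation(attribute_ratings, loyalty_scores)
```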

Graphing the Results: The Loyalty Driver Matrix

So, we now have the two pieces of information for each work attribute: 1) Performance and 2) Impact. We plot these two values, the performance index and the derived importance, for each work attribute.

The abscissa (x-axis) of the Loyalty Driver Matrix is the performance index (e.g., mean score, top box percentage) of the work attributes. The ordinate (y-axis) of the Loyalty Driver Matrix is the impact (correlation) of the work attribute on employee loyalty.

The resulting matrix is referred to as a Loyalty Driver Matrix (see Figure 1). By plotting all 26 data points, we can visually examine all work attributes at one time, relative to each other.

Understanding the Loyalty Driver Matrix: Making Your Business Decisions

The Loyalty Driver Matrix is divided into quadrants using the average score for each of the axes. Each of the work attributes will fall into one of the four quadrants. The business decisions you make about improving the employee experience will depend on the quadrant in which each work attribute falls:

  1. Key Drivers: Work attributes that appear in the upper left quadrant are referred to as Key Drivers. Key drivers reflect work attributes that have both a high impact on employee loyalty and low performance ratings relative to the other work attributes. These work attributes are good candidates for employee experience improvement efforts because there is ample room for improvement and we know these work attributes are linked to employee loyalty; when they are improved, you will likely see improvements in employee loyalty.
  2. Hidden Drivers: Work attributes that appear in the upper right quadrant are referred to as Hidden Drivers. Hidden drivers reflect work attributes that have a high impact on employee loyalty and have high performance ratings relative to other work attributes. These work attributes reflect the company’s strengths that keep the employee base loyal. Consider using these work attributes in recruitment and training collateral.
  3. Visible Drivers: Work attributes that appear in the lower right quadrant are referred to as Visible Drivers. Visible drivers reflect work attributes that have a low impact on employee loyalty and have high performance ratings relative to other work attributes. These work attributes reflect the company’s strengths. These areas may not impact employee loyalty but they are areas in which you are performing well. Consider using these work attributes in recruitment and hiring collateral.
  4. Weak Drivers: Work attributes that appear in the lower left quadrant are referred to as Weak Drivers. Weak drivers reflect work attributes that have a low impact on employee loyalty and have low performance ratings relative to other work attributes. These work attributes are the lowest priorities for investment. They are of low priority because, despite the fact that performance is low in these areas, these areas do not have a substantial impact on whether or not employees will be loyal toward your company.
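The quadrant assignments above are mechanical once you have the two numbers per attribute. Here is a Python sketch, assuming (as described above) that the average of each axis serves as the cut point; the attribute names and values are hypothetical:

```python
def classify_drivers(performance, impact):
    """Assign each work attribute to a Loyalty Driver Matrix quadrant,
    using the mean of each axis as the cut point."""
    perf_cut = sum(performance.values()) / len(performance)
    imp_cut = sum(impact.values()) / len(impact)
    labels = {}
    for attr in performance:
        high_perf = performance[attr] >= perf_cut
        high_imp = impact[attr] >= imp_cut
        if high_imp and not high_perf:
            labels[attr] = "Key Driver"      # high impact, low performance
        elif high_imp:
            labels[attr] = "Hidden Driver"   # high impact, high performance
        elif high_perf:
            labels[attr] = "Visible Driver"  # low impact, high performance
        else:
            labels[attr] = "Weak Driver"     # low impact, low performance
    return labels

performance = {"training": 5.0, "compensation": 8.0,
               "communication": 4.0, "management": 9.0}
impact = {"training": 0.6, "compensation": 0.5,
          "communication": 0.7, "management": 0.1}
labels = classify_drivers(performance, impact)
```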
Figure 2. Results of employee loyalty metrics.


A software company wanted to understand how their employees felt about their work environment. Using an employee survey, they solicited feedback from all employees and received completed surveys from nearly 80% of them. The results of the employee loyalty questions appear in Figure 2. While employee loyalty appears good, we see that there is room for improvement.

Applying driver analysis to this set of data resulted in the Loyalty Driver Matrix in Figure 3. The results of this driver analysis show that Career opportunities, Training and Company communications are key drivers of employee loyalty; these work attributes are the top candidates for potential employee experience improvement efforts; they have a large impact on employee loyalty AND there is room for improvement.

Figure 3. Employee Loyalty Driver Chart

While the Loyalty Driver Matrix helps steer you in the right direction with respect to making improvements, you must consider the cost of making improvements. Senior management needs to balance the insights from the feedback results with the cost (labor hours, financial resources) of making improvements happen. You maximize ROI when you minimize costs while maximizing employee loyalty. Senior executives of this software company might find that improving communications requires relatively little investment but would result in significant improvements in employee loyalty.


Loyalty Driver Analysis is a business intelligence solution that helps companies understand and improve the health of the employee relationship. The Loyalty Driver Matrix is based on two key pieces of information: 1) Performance of the work attributes and 2) Impact of those work attributes on employee loyalty. Using these two key pieces of information for each work attribute, senior executives are able to make better business decisions that improve employee loyalty, which in turn improves customer loyalty and accelerates business growth.

Originally Posted at: Using Driver Analysis to Improve Employee Loyalty by bobehayes

Apple partners with IBM on new health data analysis

Apple is part of a collective formed by IBM to develop new technology that will help health care companies analyze patient data collected from millions of wearable Apple devices.

IBM on Monday unveiled Watson Health Cloud, a cloud-based platform that will allow health researchers to not only store and share patient data but also tap IBM’s data mining and analytics capabilities. IBM’s platform, which harnesses the same cognitive computing power that made Watson a household name to millions of “Jeopardy” fans, draws on the vast amounts of consumer health data that can be collected using Apple’s ResearchKit and HealthKit, frameworks that help developers create apps that can gather and share medical information about their users.

“Our deep understanding and history in the health care industry will help ensure that doctors and researchers can maximize the insights available through Apple’s HealthKit and ResearchKit data,” John E. Kelly III, senior vice president for IBM research and solutions portfolio, said in a statement. “IBM’s secure data storage and analytics solutions will enable doctors and researchers to draw on real-time insights from consumer health and behavioral data at a scale never before possible.”

Apple unveiled HealthKit during its Worldwide Developer Conference in June. The software lets consumers track health-related data and serves as a hub for that information. ResearchKit, which was unveiled last month, is designed to help medical professionals build apps and technologies to assist with various kinds of research.

On Tuesday, Apple announced that it is making ResearchKit available to medical researchers so that they can begin developing new apps. The first wave of ResearchKit-based apps, which are designed to be used for studying asthma, diabetes, breast cancer, cardiovascular disease and Parkinson’s disease, have so far enrolled over 60,000 iPhone users, Apple said.

“Studies that historically attracted a few hundred participants are now attracting participants in the tens of thousands,” said Jeff Williams, Apple’s senior vice president of operations, in a statement Tuesday.

The IBM partnership highlights the increasing focus that the tech sector is putting on health care. Several companies have introduced health-centric gadgets, while others see an opportunity to mine patient data or collect readings on individuals to predict when they’ll get sick and to tailor treatment.

Apple rival Samsung has made a big push in health with its mobile devices, including heart rate monitors and health-focused apps in its Galaxy line of smartphones and Gear Fit. It has also unveiled efforts to develop new sensors and a cloud-based platform for collecting health data.

The Apple Watch, Apple’s foray into the wearables market, is positioned in part as a health and fitness device. It includes features such as activity trackers and vibrating reminders to stand up if you’ve been sitting too long. The device’s Activity app gives you a view of your daily activity, including how many calories you’ve burned, how much exercise you’ve done and how often you’ve stood up to get a break from sitting.

IBM also plans to use HealthKit to build a suite of wellness apps designed to help companies work with their employees to better manage their health needs, from general fitness to acute diseases.

Also partnering with IBM and Apple on the new unit are Johnson & Johnson and Medtronic, a medical device manufacturer.

Originally posted via “Apple partners with IBM on new health data analysis”

United States of America’s CTO Wants You to Kick Ass with Big Data

I recently watched an 8-minute TechCrunch interview of United States of America’s Chief Technology Officer, Todd Park, that got me really excited.  It turns out that the Federal government has a lot of free data. In the interview, Mr. Park encourages developers and entrepreneurs to download these data for the purpose of building new products, services, and companies. Park emphasizes that the President of the United States has fully endorsed the idea that key datasets be made available to the public. The Obama administration recently announced their “Big Data Research and Development Initiative,” which commits more than $200 million to Big Data projects. As Park states in the interview, the government wants entrepreneurs to use the free data to “… kick ass and create useful services for people…” I’d like to try.

Free Data from Data.Gov

So, being the data lover that I am, I examined the different types of data sets on the data.gov site. The data cover a broad range of topics, from Energy and Education to Safety and Health, each including various types of data sets on a given topic. If you like data, have a flair for product development or just like solving problems, I highly recommend you browse the list of free data sets available for download.

I downloaded six data sets from the health.gov site.  Each data set contained unique metrics for each hospital. The six data sets were:

  1. Survey of Patients’ Hospital Experience: Percent of respondents who gave the top-box response (e.g., “always,” an overall rating of 9-10, “yes, definitely recommend”) across seven customer experience questions and two patient loyalty questions.
  2. General Hospital Information: Describes the hospital type and the owner.
  3. Outcome Measures: Includes three mortality rates and three readmission rates, for heart attack, heart failure, and pneumonia
  4. Process of Care Measures: 12 measures related to surgical care improvement
  5. Hospital Acquired Condition (HAC) Measures:  Percent of patients who acquire HAC.
  6. Medicare Spend per Patient: This measure shows whether Medicare spends more, less, or about the same per Medicare patient treated in a specific hospital, compared to how much Medicare spends per patient nationally.

My Big Data and Patient Experience Management

Analyzing each separate data set would provide insight about the metrics contained in each data set. What is the distribution of hospital types? What is the average patient rating across hospitals? What is the typical mortality rate across all hospitals? What is the average Medicare spend across hospitals? While the answers to these questions do provide value, the true value of Big Data lies in understanding the relationships (in a statistical sense) among different variables. By understanding relationships among different metrics, you can build predictive models that help explain the reasons behind the numbers (e.g., Are mortality rates related to patient satisfaction? Do efficient hospitals deliver better service?).

To understand the relationships among different variables, I merged the six data sets together into one Big Data set; so, in its basic form, this super data set included 4610 hospitals on which I had all the metrics from each data set, including patient satisfaction, mortality rate, and Medicare spend. Using this Big Data set, I will be able to examine how the variables are related to each other, building predictive models of patient satisfaction/loyalty ratings. The analysis of these different metrics may help hospitals understand how to deliver a better patient experience through customer experience management practices.
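The merge itself is a standard inner join on a shared hospital identifier. A minimal Python sketch (the field names, such as provider_id, are hypothetical; the actual data.gov files use their own column names):

```python
from collections import Counter

def merge_datasets(datasets, key="provider_id"):
    """Inner-join several lists of row dicts on a shared hospital identifier,
    keeping only hospitals that appear in every data set."""
    counts = Counter()
    merged = {}
    for rows in datasets:
        for row in rows:
            merged.setdefault(row[key], {}).update(row)
            counts[row[key]] += 1
    return [merged[k] for k, c in counts.items() if c == len(datasets)]

# Toy example: only hospital A1 appears in both data sets, so only
# A1 survives the merge with fields from each source combined.
surveys = [{"provider_id": "A1", "overall_rating": 9},
           {"provider_id": "B2", "overall_rating": 7}]
outcomes = [{"provider_id": "A1", "mortality_rate": 0.12}]
combined = merge_datasets([surveys, outcomes])
```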

My Analytics Plan

In upcoming posts, I will present the analysis of these hospital data. I am not an expert in patient care but I do understand the metrics well enough to give it the ol’ college try. In my analyses, I will try to accomplish a few things. Here are three that immediately come to mind.

  1. Create Meaningful Patient Metrics. To accomplish this, I will look at many metrics simultaneously via a factor analysis. This approach will help me see if I can aggregate/combine some questions into a single metric (e.g., averaging all seven patient experience ratings into one metric). The ultimate goal is to create a metric that is reliable, valid and useful.
  2. Understand Predictors of Patient Satisfaction.  I will use correlational and regression analysis to understand the drivers of patient loyalty. In addition to using patient experience ratings in the analyses, I will also be able to include objective hospital metrics (e.g., mortality rates, process measures, Medicare spend) to understand many more factors that could impact patient loyalty.
  3. Understand Merits of Different Hospital Metrics. How do you measure the quality of a hospital? Is patient satisfaction/loyalty the best hospital metric? Is mortality rate? By simultaneously looking at different performance metrics for many hospitals, we can understand what each metric means in the context of all other metrics. Creating an overall hospital quality metric can only be accomplished when we understand how all metrics are related to each other.
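For item 2 above, the simplest form of driver analysis is ordinary least squares with a single predictor. A minimal Python sketch (in practice you would use a statistics package and multiple predictors, e.g., regressing patient loyalty on mortality rate, process measures and Medicare spend at once):

```python
def simple_regression(xs, ys):
    """Ordinary least squares with one predictor: returns (slope, intercept)
    for predicting ys (e.g., patient loyalty) from xs (e.g., a hospital metric)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x
```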

If you have any ideas on how I can analyze these data, I would love to hear them.

I will be watching The Health Data Initiative (HDI) Forum (The Health Datapalooza) (June 5 and 6) via webcast to learn what other entrepreneurs are doing in the area of healthcare data. The HDI is a public-private collaboration that encourages innovators to utilize health data to develop applications that raise awareness of health system performance and spark community action to improve health.

Source: United States of America’s CTO Wants You to Kick Ass with Big Data by bobehayes

Out of the Loop on the Internet of Things? Here’s a Brief Guide.

Q: What does the Internet of Things mean for small business?

A: From smart thermostats to cars, the Internet of Things (IoT) is an ecosystem of devices that kicked off Web 3.0. We asked Andy Smith, general partner at Center Electric, a San Francisco-based venture capital firm focused exclusively on investments in IoT startups, to tell us how it will affect small businesses. 

What exactly is the Internet of Things? 

It’s two things. The first is “smart” connected things: devices with a sensor for motion detection or light, a processor and usually a wireless connection to the Internet. The second is the components, software and services that make these things useful. With this integration of devices and analytics, a business will know the best time to run a sale or promotion, add staff hours, improve deliveries or react quickly to external trends.

For instance, a system of sensors like Apple’s iBeacons, which notice who has entered a store, might correlate timely trends, current wait times for cashiers, pricing, merchandising and promotional activities.

How can this benefit the typical small business?

Take the case of a dry cleaner. Garments could be tracked with washable radio-frequency identification (RFID) tags. Tile sensors attached to each garment would prevent loss and notify a customer’s smartphone when their laundry is ready.

IoT tied to a loyalty program in a coffee shop would allow the staff to greet customers by name and know their drink of choice. RFID-based inventory management keeps the shelves stocked, and pressure sensors at the front door could track traffic, occupancy, staffing and even music to reflect the environment of the space.

In an insurance office, files and documents could be tagged to audibly prevent misfiling or loss. The file will tell your smartphone or computer when and where it’s been misfiled.

With so many devices connected to the Internet, what are the security risks?

Hackers could hijack vulnerable IoT devices to block access to a third-party site, but these problems are already commonplace in the PC world today. The same care that you put into protecting your computer systems needs to apply to IoT devices. Your best bet is to have a well-documented manual process that backs up a possible IoT process should something go wrong.

Can I sit out the trend if I don’t want to invest the time and money? 

Much like mobile technology, IoT technology is not for everybody, at least not at the outset. But reasons to adopt early include new revenue growth, cost reduction or substantial increases in customer satisfaction. You may also have to embrace it if your vendors and service providers start integrating IoT into their own workflows and expect you to adopt their tech. Since your business should benefit from a vendor adopting IoT, they should prove to you how the technology better serves your needs, not just theirs.

Originally posted via “Out of the Loop on the Internet of Things? Here’s a Brief Guide.”