Jan 31, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Cover image: Fake data (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> A case for Computer Algorithms and Recursion in Data Science Education by vinny

>> Simplifying Data Warehouse Optimization by analyticsweekpick

>> What’s a CFO’s biggest fear, and how can machine learning help? by wwcheng

Wanna write? Click Here

[ NEWS BYTES]

>> Using Big Data to Give Patients Control of Their Own Health – Singularity Hub (under Big Data)

>> Big Data Analytics in Banking Market 2025: Global Demand, Key Players, Overview, Supply and Consumption Analysis – Honest Facts (under Big Data Analytics)

>> Manual intervention is hindering the customer experience – Chain Store Age (under Customer Experience)

More NEWS ? Click Here

[ FEATURED COURSE]

CS229 – Machine Learning


This course provides a broad introduction to machine learning and statistical pattern recognition. … more

[ FEATURED READ]

Machine Learning With Random Forests And Decision Trees: A Visual Guide For Beginners


If you are looking for a book to help you understand how the machine learning algorithms “Random Forest” and “Decision Trees” work behind the scenes, then this is a good book for you. Those two algorithms are commonly u… more

[ TIPS & TRICKS OF THE WEEK]

Keeping Biases in Check During the Last Mile of Decision Making
Today, a data-driven leader, data scientist or domain expert is constantly put to the test by having to help the team solve problems with their skills and expertise. Believe it or not, part of that decision tree is derived from intuition, which adds a bias to our judgement and taints the resulting suggestions. Most skilled professionals understand and handle these biases well, but in a few cases we give in to small traps and find ourselves caught in biases that impair our judgement. So it is important to keep intuition bias in check when working on a data problem.

[ DATA SCIENCE Q&A]

Q:How to efficiently scrape web data, or collect tons of tweets?
A: * Python example
* Requesting and fetching the webpage into the code: httplib2 module
* Parsing the content and getting the necessary info: BeautifulSoup from bs4 package
* Twitter API: the Python wrapper for performing API requests. It handles all the OAuth and API queries in a single Python interface
* MongoDB as the database
* PyMongo: the Python wrapper for interacting with the MongoDB database
* Cron jobs: a time-based scheduler for running scripts at specific intervals; spacing requests out this way helps avoid “rate limit exceeded” errors

Source
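For readers who want to see how these pieces fit together, here is a minimal Python sketch of the stack described in the answer above. The target URL, database name and cron schedule are hypothetical placeholders, not part of the original answer.

```python
# Minimal sketch of the scraping pipeline: fetch, parse, store (illustrative only).
import httplib2                      # request and fetch the web page
from bs4 import BeautifulSoup        # parse the HTML content
from pymongo import MongoClient      # persist results in MongoDB

http = httplib2.Http()
response, content = http.request("http://example.com/articles", "GET")  # hypothetical URL

soup = BeautifulSoup(content, "html.parser")
links = [a.get("href") for a in soup.find_all("a") if a.get("href")]

client = MongoClient()               # assumes a local MongoDB instance
db = client["scraping"]              # hypothetical database name
db.pages.insert_one({"url": "http://example.com/articles", "links": links})

# A cron entry such as `*/15 * * * * python scrape.py` would rerun this script
# every 15 minutes, spreading requests out to stay under any rate limits.
```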

[ VIDEO OF THE WEEK]

#HumansOfSTEAM feat. Hussain Gadwal, Mechanical Designer via @SciThinkers #STEM #STEAM


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Everybody gets so much information all day long that they lose their common sense. – Gertrude Stein

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @DavidRose, @DittoLabs


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

The Hadoop (open source software for distributed computing) market is forecast to grow at a compound annual growth rate of 58%, surpassing $1 billion by 2020.

Sourced from: Analytics.CLUB #WEB Newsletter

True Test of Loyalty – Article in Quality Progress

Read the study by Bob E. Hayes, Ph.D. in the June 2008 edition of Quality Progress magazine titled The True Test of Loyalty. This Quality Progress article discusses the measurement of customer loyalty. Despite its importance in increasing profitability, customer loyalty measurement hasn’t kept pace with its technology. Using advocacy, purchasing and retention indexes to manage loyalty is statistically superior to using any single question alone. These indexes helped predict the growth potential of wireless service providers and PC manufacturers. You can download the article here.

Source: True Test of Loyalty – Article in Quality Progress by bobehayes

Jan 24, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Cover image: Statistics (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> 2016 Trends in Big Data: Insights and Action Turn Big Data Small by jelaniharper

>> The Point of Advanced Machine Learning: Understanding Cognitive Analytics by jelaniharper

>> Data center location – your DATA harbour by martin

Wanna write? Click Here

[ NEWS BYTES]

>> Video Data Security. The view from the experts. – Security Today (press release) (blog) (under Data Security)

>> 3 virtualization infrastructure design rules to shape your deployment – TechTarget (under Virtualization)

>> Don’t Miss the Data Train – MarTech Advisor (under Social Analytics)

More NEWS ? Click Here

[ FEATURED COURSE]

Hadoop Starter Kit

image

Hadoop learning made easy and fun. Learn HDFS, MapReduce and an introduction to Pig and Hive with FREE cluster access…. more

[ FEATURED READ]

Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking

image

Written by renowned data science experts Foster Provost and Tom Fawcett, Data Science for Business introduces the fundamental principles of data science, and walks you through the “data-analytic thinking” necessary for e… more

[ TIPS & TRICKS OF THE WEEK]

Finding success in your data science career? Find a mentor
Yes, most of us don’t feel the need, but most of us really could use one. Because many data science professionals work in isolation, getting an unbiased perspective is not easy, and it is often hard to see how a data science career will progress. A network of mentors addresses these issues: it gives data professionals an outside perspective and an unbiased ally. It’s extremely important for successful data science professionals to build a mentor network and use it throughout their careers.

[ DATA SCIENCE Q&A]

Q:What is your definition of big data?
A: Big data is high volume, high velocity and/or high variety information assets that require new forms of processing
– Volume: big data doesn’t sample, just observes and tracks what happens
– Velocity: big data is often available in real-time
– Variety: big data comes from texts, images, audio, video…

Differences between big data and business intelligence:
– Business intelligence uses descriptive statistics on high-density data to measure things, detect trends, etc.
– Big data uses inductive statistics (statistical inference) and concepts from non-linear system identification to infer laws (regression, classification, clustering) from large, low-density data sets, revealing relationships and dependencies or predicting outcomes and behaviors

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek: Big Data at Work: Paul Sonderegger


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

We chose it because we deal with huge amounts of data. Besides, it sounds really cool. – Larry Page

[ PODCAST OF THE WEEK]

Unconference Panel Discussion: #Workforce #Analytics Leadership Panel


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Three-quarters of decision-makers (76 per cent) surveyed anticipate significant impacts in the domain of storage systems as a result of the “Big Data” phenomenon.

Sourced from: Analytics.CLUB #WEB Newsletter

Making Big Data Work: Supply Chain Management

In recent decades, companies have looked to technology, lean manufacturing, and global production to increase efficiency and reduce costs. But these tactics are leading to diminishing returns.

Many companies have moved production offshore, for instance. However, the attractiveness of that opportunity is diminishing as differences in global manufacturing costs between countries such as China and the U.S. have narrowed over the past ten years. (See How Global Manufacturing Cost Competitiveness Has Shifted over the Past Decade, BCG Data Point, May 2014.) At the same time, supply chains have grown more complicated—many spanning multiple continents and involving external suppliers—while customer demands have gotten more complex. As a result, companies are bringing production closer to home markets (“nearshoring”) and sometimes “reshoring” production all the way back home to high-labor-rate countries. (See The U.S. as One of the Developed World’s Lowest-Cost Manufacturers: Behind the American Export Surge, BCG Focus, August 2013.)

The combination of large, fast-moving, and varied streams of big data and advanced tools and techniques such as geoanalytics represents the next frontier of supply chain innovation. When they are guided by a clear understanding of the strategic priorities, market context, and competitive needs of a company, these approaches offer major new opportunities to enhance customer responsiveness, reduce inventory, lower costs, and improve agility.

Companies can optimize distribution, logistics, and production networks by using powerful data-processing and analysis capabilities. They can also improve the accuracy of their demand forecasts, discover new demand patterns, and develop new services by sharing data with partners across the supply chain. In addition, they can increase asset uptime and expand throughput, engage in preventive maintenance of production assets and installed products, and conduct near real-time supply planning using dynamic data feeds from production sensors and the Internet of Things.

Three High-Potential Opportunities

But with so much available data and so many improvable processes, it can be challenging for executives to determine where they should focus their limited time and resources. In our work with supply chain operations across a range of industries, we see three opportunities that offer high potential in the near term. Companies that exploit them can generate significant revenues and profits, as well as reduce costs markedly, lower cash requirements, and boost agility.

Visualizing Delivery Routes. Logistics management challenges all but the most sophisticated specialists in “last-mile delivery.” Traditional routing software at advanced delivery companies can show drivers exactly where and how they should drive in order to reduce fuel costs and maximize efficiency. The most flexible systems can plan a truck’s route each day on the basis of historical traffic patterns. But many ordinary systems still leave a lot to be desired, producing significant slack in schedules and, in many cases, lacking the ability to dynamically visualize and calibrate routes at the street level.

Now, add the difficulty of aligning the deliveries of two or more business units or companies, each of which manages its own delivery system but must work with the others as one. We frequently find that by using big data and advanced analytical techniques to deal with tough supply-chain problems such as these, companies can identify opportunities for savings equal to 15 to 20 percent of transportation costs. Recent advances in geoanalytical mapping techniques, paired with the availability of large amounts of location data and cheap, fast, cloud-based computing power, allow companies to dynamically analyze millions of data points and model hundreds of potential truck-route scenarios. The result is a compelling visualization of delivery routes—route by route and stop by stop.

Consider the challenges experienced during the premerger planning for the combination of two large consumer-products companies. To better model the merger of the companies’ distribution networks, the two companies layered detailed geographic location data onto delivery data in a way that made it possible for them to visualize order density and identify pockets of overlap. The companies learned that they shared similar patterns of demand. (See Exhibit 1.) Vehicle-routing software also enabled rapid scenario testing of dozens of route iterations and the development of individual routes for each truck. Scenario testing helped the companies discover as much as three hours of unused delivery capacity on typical routes after drivers had covered their assigned miles.

[Exhibit 1]

Splitting the fleet between two local depots in one major city would reduce the number of miles in each route and allow trucks to deliver greater volume, lowering the effective cost per case. After the merger, trucks would be able to make the same average number of stops while increasing the average drop size by about 50 percent. The savings from a nationwide combination and rationalization of the two networks were estimated at $40 million, or 16 percent of the total costs of the companies combined. All this would come with no significant investment beyond the initial cost of developing better modeling techniques.

By establishing a common picture of the present and a view of the future, the geoanalysis also delivered less quantifiable benefits: the results built confidence that the estimated savings generated as a result of the merger would reflect reality when the rubber met the road and would also create alignment between the two organizations prior to the often difficult postmerger-integration phase. However, results such as these are only the beginning. New visualization tools, combined with real-time truck monitoring and live traffic feeds from telematics devices, open up even more exciting opportunities, such as dynamic rerouting of trucks to meet real-time changes in demand.
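To make the order-density analysis concrete, here is a toy Python sketch, not the companies' actual tooling, that bins synthetic delivery points from two hypothetical networks onto a coarse latitude/longitude grid and counts the cells both networks serve.

```python
# Illustrative only: synthetic delivery points for two hypothetical networks,
# binned on a coarse lat/lon grid to compare order density and spot overlap.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def fake_orders(center_lat, center_lon, n):
    """Generate n synthetic delivery points scattered around a city center."""
    return pd.DataFrame({
        "lat": rng.normal(center_lat, 0.05, n),
        "lon": rng.normal(center_lon, 0.05, n),
    })

net_a = fake_orders(41.88, -87.63, 5_000)   # hypothetical network A
net_b = fake_orders(41.90, -87.65, 5_000)   # hypothetical network B

def density(orders):
    # Round coordinates to roughly 1 km cells and count orders per cell.
    cells = orders.assign(cell_lat=orders.lat.round(2), cell_lon=orders.lon.round(2))
    return cells.groupby(["cell_lat", "cell_lon"]).size().rename("orders")

overlap = pd.concat([density(net_a), density(net_b)], axis=1, keys=["a", "b"]).dropna()
print(f"{len(overlap)} grid cells are served by both networks")
```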

Pinpointing Future Demand. Forecasting demand in a sprawling manufacturing operation can be cumbersome and time consuming. Many managers have to rely on inflexible systems and inaccurate estimates from the sales force to predict the future. And forecasting has grown even more complicated in the current era of greater volatility in demand and increasing complexity in product portfolios.

Now, companies can look at vast quantities of fast-moving data from customers, suppliers, and sensors. They can combine that information with contextual factors such as weather forecasts, competitive behavior, pricing positions, and other external factors to determine which factors have a strong correlation with demand and then quickly adapt to the current reality. Advanced analytical techniques can be used to integrate data from a number of systems that speak different languages—for example, enterprise resource planning, pricing, and competitive-intelligence systems—to allow managers a view of things they couldn’t see in the past. Companies can let the forecasting system do the legwork, freeing the sales force to provide the raw intelligence about changes in the business environment.
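As a rough illustration of scoring external drivers against demand, the sketch below fits a simple regression on made-up weekly data; the drivers (price, temperature, a competitor promotion flag) and their effects are invented for the example, not results from any company discussed here.

```python
# Hedged sketch: score hypothetical external drivers against weekly demand.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 156
df = pd.DataFrame({
    "price": rng.normal(10, 1, weeks),
    "temperature": rng.normal(20, 8, weeks),
    "competitor_promo": rng.integers(0, 2, weeks),
})
# Synthetic demand so the example is self-contained.
df["demand"] = (500 - 20 * df.price + 3 * df.temperature
                - 40 * df.competitor_promo + rng.normal(0, 15, weeks))

X, y = df[["price", "temperature", "competitor_promo"]], df["demand"]
model = LinearRegression().fit(X, y)
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:>18}: {coef:+.1f} units of demand per unit change")
```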

Companies that have a better understanding of what they are going to sell tomorrow can ship products whenever customers request them and can also keep less stock on hand—two important levers for improving operational performance and reducing costs. Essentially, with better demand forecasting, companies can replace inventory with information and meet customers’ demands in a much more agile way. We find that companies that do a better job of predicting future demand can often cut 20 to 30 percent out of inventory, depending on the industry, while increasing the average fill rate by 3 to 7 percentage points. Such results can generate margin improvements of as much as 1 to 2 percentage points.

For example, a global technology manufacturer faced significant supply shortages and poor on-time delivery of critical components as a result of unreliable forecasts. Salespeople were giving overly optimistic forecasts, whose effects rippled through the supply chain as the manufacturer ordered more than was really needed to ensure adequate supply. In addition, the company’s suppliers ordered too much from their own component suppliers. As a result, inventories started to increase across the value chain.

To understand the causes of poor forecast performance, the company used advanced tools and techniques to analyze more than 7 million data points, including shipment records, historical forecasting performance, and bill-of-material records. The company also ran simulations comparing forecast accuracy with on-time shipping and inventory requirements to identify the point of diminishing returns for improved accuracy. The underlying pattern of demand proved complex and highly volatile, particularly at the component level. Root cause analysis helped identify the sources of the problem, which included the usual delays and operational breakdowns, as well as more subtle but equally powerful factors such as misaligned incentives and an organization structure with too many silos.

In response, the company redesigned its planning process, dedicating more time to component planning and eliminating bottlenecks from data flows and IT processing. Furthermore, by improving the quality of the data for the component planners, the company was able to reduce the time wasted chasing data and fixing errors. And it developed more sophisticated analytical tools for measuring the accuracy of forecasts.

On the basis of these and other organizational and process improvements, the company expects to improve forecast accuracy by up to 10 percentage points for components and 5 percentage points for systems, resulting in improved availability of parts and on-time delivery to customers. The changes are expected to yield an increase in revenues, while lowering inventory levels, delivering better customer service, and reducing premium freight costs.

Simplifying Distribution Networks. Many manufacturers’ distribution networks have evolved over time into dense webs of warehouses, factories, and distribution centers sprawling across huge territories. Over time, many such fixed networks have trouble adapting to the shifting flows of supplies to factories and of finished goods to market. Some networks are also too broad, pushing up distribution costs. The tangled interrelationships among internal and external networks can defy the traditional network-optimization models that supply chain managers have used for years.

But today’s big-data-style capabilities can help companies solve much more intricate optimization problems than in the past. Leaders can study more variables and more scenarios than ever before, and they can integrate their analyses with many other interconnected business systems. Companies that use big data and advanced analytics to simplify distribution networks typically produce savings that range from 10 to 20 percent of freight and warehousing costs, in addition to large savings in inventories.

A major European fast-moving-consumer-goods company faced these issues when it attempted to shift from a country-based distribution system to a more efficient network spanning the continent. An explosion in the volume and distribution of data across different systems had outstripped the company’s existing capacity, and poor data quality further limited its ability to plan.

The company used advanced analytical tools and techniques to design a new distribution network that addressed these rising complexities. It modeled multiple long-term growth scenarios, simulating production configurations for 30 brands spread across more than ten plants, each with different patterns of demand and material flows. It crunched data on 50,000 to 100,000 delivery points per key country and looked at inventory factors across multiple stages. Planners examined numerous scenarios for delivery, including full truck loads, direct-to-store delivery, and two-tier warehousing, as well as different transport-rate structures that were based on load size and delivery direction.

Unlocking insights from this diverse data will help the company consolidate its warehouses from more than 80 to about 20. (See Exhibit 2.) As a result, the company expects to reduce operating expenses by as much as 8 percent. As the number of warehouses gets smaller, each remaining warehouse will grow bigger and more efficient. And by pooling customer demand across a smaller network of bigger warehouses, the company can decrease the variability of demand and can, therefore, hold lower levels of inventory: it is volatile demand that causes manufacturers to hold more safety stock.

[Exhibit 2]
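The pooling effect described above can also be illustrated numerically. The sketch below uses synthetic daily demand and compares total safety stock, taken as proportional to the demand variability seen at each warehouse, for 80 versus 20 warehouses; the figures are illustrative, not the company's.

```python
# Rough numerical illustration of demand pooling across fewer warehouses.
import numpy as np

rng = np.random.default_rng(2)
n_customers, n_days = 8_000, 365
daily_demand = rng.poisson(5, size=(n_customers, n_days))   # synthetic demand

def total_safety_stock(n_warehouses):
    # Assign customers evenly to warehouses and sum demand per warehouse per day.
    groups = np.array_split(daily_demand, n_warehouses)
    per_warehouse_std = [g.sum(axis=0).std() for g in groups]
    # Treat safety stock as proportional to the demand standard deviation.
    return sum(per_warehouse_std)

for n in (80, 20):
    print(f"{n} warehouses -> total safety stock ~ {total_safety_stock(n):,.0f} units")
```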
How to Begin

Operations leaders who want to explore these opportunities should begin with the following steps.

Connect the supply chain from end to end. Many companies lack the ability to track details on materials in the supply chain, manufacturing equipment and process control reliability, and individual items being transported to customers. They fail to identify and proactively respond to problems in ways that increase efficiency and address customers’ needs. In order to have big data to analyze in the first place, companies must invest in the latest technologies, including state-of-the-art sensors and radio-frequency identification tags, that can build transparency and connections into the supply chain. At the same time, companies should be careful to invest in areas that add the highest business value.

Reward data consistency. Many companies struggle to optimize inventory levels because lot sizes, lead times, product SKUs, and measurement units are entered differently into the various systems across the organization. While big-data systems do not require absolutely perfect data quality and completeness, a solid consistency is necessary. The problem is that in many companies, management doesn’t assign a high priority to the collection of consistent data. That can change when leaders make the impact of poor data clear and measure and reward consistent standards.

Build cross-functional data transparency. The supply chain function depends on up-to-date manufacturing data, but the manufacturing function may tightly guard valuable reliability data so that mistakes will be less visible. The data could also help customer service, which might inform customers proactively of delayed orders when, for example, equipment breaks down. Data about production reliability, adherence to schedules, and equipment breakdowns should be visible across functions. To encourage people to be more transparent, management might assemble personnel from different functions to discuss the data they need to do their jobs better.

Invest in the right capabilities. Many operations leaders still don’t understand how this new discipline can provide a competitive advantage or how to convert big data into the best strategic actions. Hiring a team of top-shelf data scientists to do analytics for its own sake is not the answer, however. Companies need to both partner with others and develop their own internal, diverse set of capabilities in order to put big data into a strategic business context. Only then will they be able to focus on the right opportunities and get the maximum value from their investments.


Companies that excel at big data and advanced analytics can unravel forecasting, logistics, distribution, and other problems that have long plagued operations.

Those that do not will miss out on huge efficiency gains. They will forfeit the chance to seize a major source of competitive advantage.




Originally Posted at: Making Big Data Work: Supply Chain Management

Data Analytics and Organization Maturity


I was on a conference call with a mid-size company whose leadership was curious to learn what stage their analytics capability is in. Sure, it is a maturity model problem. A maturity model provides a useful framework of five to seven stages of evolution: you enter at one stage and exit at the top. Some frame it as a journey from Chaotic to Predictive, others from Emerging to Leading, and so on. You can skin the cat any way you want. More often than not, though, these maturity models are complicated to understand and require groundwork before you can use them to gauge your organization’s maturity. Every business is different, and so are its data analytics journey, capability and maturity. So why not use something simpler that provides a quick litmus test of where your capability as an analytics-driven company stands?

Data analytics maturity is a capability that is closely tied to the culture of the analytics team, and maturity is closer to human development than we think. What better way to frame it than through a cycle we are all accustomed to: human maturity. No, I will not dig deep into biology and enumerate every stage of development. Let’s keep it simple and use five broad containers. The objective is to give you something that is easy to visualize and to map your organization’s analytics capability against: a relatively fast test that gives you a quick perspective and a direction for further investigation.

Infancy: Yes, the most chaotic stage of them all. You know analytics is important, but you are all over the place: little synchronization, too much randomness and repeated work. This stage mostly appears early on, when you are first building analytics capabilities. The good news is that, just as for humans, it is short-lived. You get past it and start doing the things that are required for your survival.

Childhood: As suggested above, this is the stage at which your survival instincts get sharper than in infancy. You know the few things that are important to you and do them right, while the rest remains random chaos. This is where most non-analytics-driven businesses sit: they do bare-bones analytics just to get through their daily chores.

Adolescence: The fun age, as we all know it, full of confusion, energy, friends and collaboration. This is a good time in human growth as well as in data analytics maturity. You handle your daily chores well and still make time to explore new avenues. You are open to risk and start making calculated, bold moves. This is the fun, aspirational age in analytics maturity as well, and it is exactly why so many of us miss our teenage years. An analytics-driven business wants to stay at this age to maximize the ROI on its analytics spend.

Youth: This is the typical analytics-driven company. You have mastered the art of survival and you prioritize well, but you are less open to risk and have sharper biases and preferences. This is a stage that slows businesses down: you want to embrace change, but you find it difficult. The good news is that you still retain some adolescent traits that keep things interesting, so you can be more experimental and change-friendly if you want to be.

Adulthood: This is the 800-pound-gorilla problem: all the wisdom, but barely any appetite for change, risk or agility. This is the stage where most big businesses are stuck, with long turnaround cycles and the least willingness to take risks, move quickly or adopt new approaches. It is the stage most businesses aspire to avoid; they acquire, embrace churn and hire new talent just to keep their strategies fresh and change-friendly. An analytics capability should never sit at this maturity level.

By the way, this is not a progression where you enter at the first stage and leave at the last. These are five containers in which your analytics capability may sit. The journey is to identify the container that fits your competitive landscape and then find your way to it.

From these five containers it is not difficult to see that a data analytics capability should always stay in its adolescence. Teams, processes and logistics should continually embrace agility, change, adoption, scale and risk. This is what opens new horizons for any business, and especially for data-driven ones. The good news is that every business has the ability to swing back to its adolescent stage; all it takes is a change in mindset, which at worst happens slowly and gradually.

Originally Posted at: Data Analytics and Organization Maturity

Jan 17, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Cover image: Tour of Accounting (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ NEWS BYTES]

>> Innovation is empowering Taiwan’s cyber security capabilities – Networks Asia (under cyber security)

>> How companies can detect cyber attacks early to minimise damage – Business Matters (under cyber security)

>> Global Predictive and Prescriptive Analytics Market 2018: Expansions, Key Drivers, Trends, Challenges, And Forecast … – Market News Today (under Prescriptive Analytics)

More NEWS ? Click Here

[ FEATURED COURSE]

Intro to Machine Learning


Machine Learning is a first-class ticket to the most exciting careers in data analysis today. As data sources proliferate along with the computing power to process them, going straight to the data is one of the most stra… more

[ FEATURED READ]

Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking


Written by renowned data science experts Foster Provost and Tom Fawcett, Data Science for Business introduces the fundamental principles of data science, and walks you through the “data-analytic thinking” necessary for e… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle that data can lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric: the metric that matters most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation, and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q:Explain likely differences between administrative datasets and datasets gathered from experimental studies. What are likely problems encountered with administrative data? How do experimental methods help alleviate these problems? What problem do they bring?
A: Advantages:
– Cost
– Large coverage of population
– Captures individuals who may not respond to surveys
– Regularly updated, allowing consistent time series to be built up

Disadvantages:
– Restricted to data collected for administrative purposes (limited to administrative definitions; for instance, income may be recorded for a married couple rather than for the individuals, which can be more useful)
– Lack of researcher control over content
– Missing or erroneous entries
– Quality issues (addresses may not be updated or a postal code is provided only)
– Data privacy issues
– Underdeveloped theories and methods (sampling methods…)

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @ScottZoldi, @FICO


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

With data collection, ‘the sooner the better’ is always the best answer. – Marissa Mayer

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Dr. Nipa Basu, @DnBUS


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

AT&T’s largest database boasts titles including the largest volume of data in a single database (312 terabytes) and the second-largest number of rows in a single database (1.9 trillion); it comprises AT&T’s extensive calling records.

Sourced from: Analytics.CLUB #WEB Newsletter

@TimothyChou on World of #IOT & Its #Future Part 1 #FutureOfData #Podcast

[youtube https://www.youtube.com/watch?v=ezNX6XYozIc]

In this first part of a two-part podcast, @TimothyChou discussed the Internet of Things landscape. He laid out how the internet has always really been an internet of things rather than an internet of people, and he sheds light on the Internet of Things across the themes of things, connect, collect, learn and do workflows. He also builds an interesting case for moving from achieving precision to introducing optimality.

 

Timothy’s Recommended Read:
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark http://amzn.to/2Cidyhy
Zone to Win: Organizing to Compete in an Age of Disruption by Geoffrey A. Moore http://amzn.to/2Hd5zpv

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Timothy’s BIO:
Timothy Chou’s career spans academia, successful (and not so successful) startups and large corporations. He was one of only a few people to hold the President title at Oracle: as President of Oracle On Demand he grew the cloud business from its very beginning, and today that business is over $2B. He wrote about the move of applications to the cloud in 2004 in his first book, “The End of Software”. Today he serves on the board of Blackbaud, a nearly $700M vertical application cloud service company.

After earning his PhD in EE at the University of Illinois he went to work for Tandem Computers, one of the original Silicon Valley startups. Had he understood stock options he would have joined earlier. He’s invested in and been a contributor to a number of other startups, some you’ve heard of like Webex, and others you’ve never heard of but were sold to companies like Cisco and Oracle. Today he is focused on several new ventures in cloud computing, machine learning and the Internet of Things.

About #Podcast:
The #FutureOfData podcast is a conversation starter that brings leaders, influencers and leading practitioners onto the show to discuss their journeys in creating the data-driven future.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Originally Posted at: @TimothyChou on World of #IOT & Its #Future Part 1 #FutureOfData #Podcast by admin

20 Best Practices for Customer Feedback Programs: Applied Research

Below is the final installment of the 20 Best Practices for Customer Feedback Programs. Today’s post covers best practices in Applied Research.

Figure 5. Common types of linkages among disparate data sources

Applied Research Best Practices

Customer-focused research using the customer feedback data can provide additional insight into the needs of the customer base and increases the overall value of the customer feedback program. This research extends well beyond the information that is gained from the typical reporting tools that summarize customer feedback with basic descriptive statistics.

Loyalty leaders regularly conduct applied research using their customer feedback data. Typical research projects include creating customer-centric business metrics, building incentive compensation programs around customer metrics, and establishing training criteria that have a measured impact on customer satisfaction. Sophisticated research programs require advanced knowledge of research methods and statistics; deciphering signal from noise in the data requires more than the inter-ocular test (eyeballing the data).

Figure 6. Data model for financial linkage analysis

Loyalty leaders link their customer feedback data to other data sources (see Figure 5 for financial, operational, and constituency linkages). Once the data are merged (see Figure 6 for a data model for financial linkage), analyses can be conducted to help us understand the causes (operational, constituency) and consequences (financial) of customer satisfaction and loyalty. Loyalty leaders can use the results of these types of studies to:

  1. Support business case of customer feedback program (financial linkage)
  2. Identify objective, operational metrics that impact customer satisfaction and manage employee performance using these customer-centric metrics (operational linkage)
  3. Understand how employees and partners impact customer satisfaction to ensure proper employee and partner relationship management (constituency linkage)
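As a minimal illustration of a financial linkage analysis, the sketch below merges hypothetical survey scores with account-level revenue and computes their correlation; the column names and figures are invented for the example.

```python
# Hypothetical sketch of a financial linkage analysis: merge survey scores
# with account-level revenue and check how strongly they move together.
import pandas as pd

feedback = pd.DataFrame({          # customer feedback source
    "customer_id": [1, 2, 3, 4, 5],
    "satisfaction": [9, 6, 8, 4, 7],
})
financials = pd.DataFrame({        # financial source (e.g., annual revenue)
    "customer_id": [1, 2, 3, 4, 5],
    "revenue": [120_000, 45_000, 98_000, 30_000, 76_000],
})

linked = feedback.merge(financials, on="customer_id")
print(linked["satisfaction"].corr(linked["revenue"]))   # Pearson correlation
```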

A list of best practices in Applied Research appears in Table 6.

Table 6. Best Practices in Applied Research
Best Practices | The specifics…
15. Ensure results from customer feedback collection processes are reliable, valid and useful | Conduct a validation study of the customer feedback program. Verify the reliability, validity and usefulness of customer feedback metrics to ensure you are measuring the right things. This assessment needs to be one of the first research projects conducted to support (and dispute any challenges regarding) the use of these customer metrics to manage the company. This research will help you create summary statistics for use in executive reporting and company dashboards; summary scores are more reliable and provide a better basis for business decisions compared to using only individual survey questions.
16. Identify linkage between customer feedback metrics and operational metrics | Demonstrate that operational metrics are related to customer feedback metrics so that these operational metrics can be used to manage employees. Additionally, because of their reliability and specificity, these operational metrics are good candidates for use in employee incentive programs.
17. Regularly conduct applied customer-focused research | Build a comprehensive research program using the customer-centric metrics (and other business metrics) to get deep insight into business processes. Customer feedback can be used to improve all phases of the customer lifecycle (marketing, sales, and service).
18. Identify linkage between customer feedback metrics and business metrics | Illustrate that financial metrics (e.g., profit, sales, and revenue) are related to customer feedback metrics. Oftentimes, this type of study can be used as a business case to demonstrate the value of the customer feedback program.
19. Identify linkage between customer feedback metrics and other constituencies’ attitudes | Identify factors of constituency attitudes (e.g., employee and partner satisfaction) that are linked to customer satisfaction/loyalty. Use these insights to properly manage employee and partner relationships to ensure high customer loyalty. Surveying all constituencies in the company ecosystem helps ensure all parties are focused on the customers and their needs.
20. Understand customer segments using customer information | Compare customer groups to identify key differences among groups on customer feedback metrics (e.g., satisfaction and loyalty). This process helps identify best practices internally among customer segments.
Copyright © 2011 Business Over Broadway
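Best practice 15 in Table 6 depends on verifying that multi-item summary scores are reliable. A common check is Cronbach’s alpha (Cronbach, 1951); the sketch below computes it for a small, made-up set of loyalty ratings.

```python
# Hedged sketch: Cronbach's alpha for a multi-item summary score,
# using made-up ratings (rows = respondents, columns = survey items).
import numpy as np

items = np.array([
    [9, 8, 9],
    [6, 5, 7],
    [8, 8, 8],
    [4, 5, 3],
    [7, 6, 7],
], dtype=float)

k = items.shape[1]                                  # number of items in the scale
item_variances = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
total_variance = items.sum(axis=1).var(ddof=1)      # variance of the summed score
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")            # ~0.9+ suggests strong internal consistency
```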

Summary

Loyalty leaders are excellent examples of customer-centric companies. Compared to their loyalty lagging counterparts, loyalty leading companies embed customer feedback throughout the entire company, from top to bottom. Loyalty leaders use customer feedback to set the vision and manage their business; they also integrate the feedback into daily business processes and communicate all processes, goals and results of the customer program to the entire company. Finally, they integrate different business data (operational, financial, customer feedback), to reveal deep customer insights through in-depth research.

Take the Customer Feedback Programs Best Practices Survey

You can take the best practices survey to receive free feedback on your company’s customer feedback program. This self-assessment survey assesses the extent to which your company adopts best practices throughout their program. Go here to take the free survey: http://businessoverbroadway.com/resources/self-assessment-survey.

References

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334.

Hayes, B.E. (2011). Lessons in loyalty. Quality Progress, March, 24-31.

Hayes, B.E., Goodden, R., Atkinson, R., Murdock, F. & Smith, D. (2010). Where to Start: Experts weigh in on what all of us can learn from Toyota’s challenges. Quality Progress, April, 16-23.

Hayes, B. E. (2009). Beyond the ultimate question: A systematic approach to improve customer loyalty. Quality Press. Milwaukee, WI.

Hayes, B. E. (2008a). Measuring customer satisfaction and loyalty: Survey design, use and statistical analysis methods (3rd ed.). Quality Press. Milwaukee, WI.

Hayes, B. E. (2008b). Customer loyalty 2.0: The Net Promoter Score debate and the meaning of customer loyalty, Quirk’s Marketing Research Review, October, 54-62.

Hayes, B. E. (2008c). The true test of loyalty. Quality Progress. June, 20-26.

Keiningham, T. L., Cooil, B., Andreassen, T.W., & Aksoy, L. (2007). A longitudinal examination of net promoter and firm revenue growth. Journal of Marketing, 71 (July), 39-51.

Morgan, N.A. & Rego, L.L. (2006). The value of different customer satisfaction and loyalty metrics in predicting business performance. Marketing Science, 25(5), 426-439.

Nunnally, J. M. (1978). Psychometric Theory, Second Edition. New York, NY. McGraw-Hill.

Reichheld, F. F. (2003). The One Number You Need to Grow. Harvard Business Review, 81 (December), 46-54.

Reichheld, F. F. (2006). The ultimate question: driving good profits and true growth. Harvard Business School Press. Boston.

 

 

Originally Posted at: 20 Best Practices for Customer Feedback Programs: Applied Research

Jan 10, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Cover image: Big Data knows everything (Source)

[ AnalyticsWeek BYTES]

>> Data Modeling Tomorrow: Self-Describing Data Formats by jelaniharper

>> Why has R, despite quirks, been so successful? by analyticsweekpick

>> Transcending the Limits of Analytics with Artificial Intelligence by jelaniharper

Wanna write? Click Here

[ NEWS BYTES]

>> Costa Rica to Modernize Power Grid with Itron IoT Solution – IoT Evolution World (blog) (under IoT)

>> Recruiting in the age of the cyber security skills gap: challenges to overcome – Information Age (under cyber security)

>> Big data used to predict the future – Science Daily (under Big Data)

More NEWS ? Click Here

[ FEATURED COURSE]

Machine Learning


6.867 is an introductory course on machine learning which gives an overview of many concepts, techniques, and algorithms in machine learning, beginning with topics such as classification and linear regression and ending … more

[ FEATURED READ]

How to Create a Mind: The Secret of Human Thought Revealed


Ray Kurzweil is arguably today’s most influential—and often controversial—futurist. In How to Create a Mind, Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse… more

[ TIPS & TRICKS OF THE WEEK]

Finding success in your data science career? Find a mentor
Yes, most of us don’t feel the need, but most of us really could use one. Because many data science professionals work in isolation, getting an unbiased perspective is not easy, and it is often hard to see how a data science career will progress. A network of mentors addresses these issues: it gives data professionals an outside perspective and an unbiased ally. It’s extremely important for successful data science professionals to build a mentor network and use it throughout their careers.

[ DATA SCIENCE Q&A]

Q:Is it better to design robust or accurate algorithms?
A: A. The ultimate goal is to design systems with good generalization capacity, that is, systems that correctly identify patterns in data instances not seen before
B. The generalization performance of a learning system strongly depends on the complexity of the model assumed
C. If the model is too simple, the system can only capture the actual data regularities in a rough manner. In this case, the system has poor generalization properties and is said to suffer from underfitting
D. By contrast, when the model is too complex, the system can identify accidental patterns in the training data that need not be present in the test set. These spurious patterns can be the result of random fluctuations or of measurement errors during the data collection process. In this case, the generalization capacity of the learning system is also poor. The learning system is said to be affected by overfitting
E. Spurious patterns, which are only present by accident in the data, tend to have complex forms. This is the idea behind the principle of Occam’s razor for avoiding overfitting: simpler models are preferred if more complex models do not significantly improve the quality of the description for the observations
Quick response: Occam’s Razor. It depends on the learning task. Choose the right balance
F. Ensemble learning can help balance bias and variance (several weak learners together form a strong learner); see the sketch below
Source
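As a quick illustration of point F above, the sketch below compares a single decision tree with a bagged ensemble of trees on a noisy synthetic classification problem; the dataset and parameters are arbitrary, chosen only to make the bias/variance trade-off visible.

```python
# Minimal sketch: a single deep tree vs. a bagged ensemble on noisy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# flip_y adds label noise, which a single deep tree tends to overfit.
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)
bagged_trees = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

print("single tree :", cross_val_score(single_tree, X, y, cv=5).mean())
print("bagged trees:", cross_val_score(bagged_trees, X, y, cv=5).mean())
```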

[ VIDEO OF THE WEEK]

Discussing #InfoSec with @travturn, @hrbrmstr(@rapid7) @thebearconomist(@boozallen) @yaxa_io


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

We chose it because we deal with huge amounts of data. Besides, it sounds really cool. – Larry Page

[ PODCAST OF THE WEEK]

@chrisbishop on futurist's lens on #JobsOfFuture #FutureofWork #JobsOfFuture #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, not including using big data to reduce fraud and errors and boost the collection of tax revenues.

Sourced from: Analytics.CLUB #WEB Newsletter

Unmasking the Problem with Net Scores and the NPS Claims

I wrote about net scores last week and presented evidence that showed net scores are ambiguous and unnecessary.  Net scores are created by taking the difference between the percent of “positive” scores and the percent of “negative” scores. Net scores were made popular by Fred Reichheld and Satmetrix in their work on customer loyalty measurement. Their Net Promoter Score is a difference score between the percent of “promoters” (ratings of 9 or 10) and percent of “detractors” (ratings of 0 thru 6) on the question, “How likely would you be to recommend <company> to your friends/colleagues?”

This resulting Net Promoter Score is used to gauge the level of loyalty for companies or customer segments. In my post, I presented what I believe to be sound evidence that mean scores and top/bottom box scores are much better summary indices than net scores. Descriptive statistics like the mean and standard deviation provide important information that describe the location and spread of the distribution of responses. Also, top/bottom box scores provide precise information about the size of customer segments. Net scores do neither.
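A small numeric example makes the ambiguity concrete. The sketch below, using made-up ratings, builds two distributions that yield the same Net Promoter Score yet have very different means and top-box percentages.

```python
# Illustrative only: two rating distributions with identical NPS but different means.
import numpy as np

def summarize(ratings):
    r = np.array(ratings)
    promoters = np.mean(r >= 9)     # ratings of 9 or 10
    detractors = np.mean(r <= 6)    # ratings of 0 through 6
    return {
        "NPS": round(100 * (promoters - detractors)),
        "mean": round(r.mean(), 2),
        "top box (9-10) %": round(100 * promoters),
    }

sample_a = [10] * 30 + [8] * 40 + [0] * 30    # extreme detractors
sample_b = [9] * 30 + [7] * 40 + [6] * 30     # mild detractors

print(summarize(sample_a))   # NPS = 0, mean = 6.2
print(summarize(sample_b))   # NPS = 0, mean = 7.3, a very different picture
```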

Rob Markey, the co-author of the book, The Ultimate Question 2.0  (along with Fred Reichheld), tweeted about last week’s blog post.

Rob Markey’s Tweet

I really am unclear about how Mr. Markey believes my argument is supporting (in CAPS, mind you) the NPS point of view. I responded to his tweet but never received a clarification from him.

So, I present this post as an open invitation for Mr. Markey to explain how my argument regarding the ambiguity of the NPS supports their point of view.

One More Thing

I never deliver arguments shrouded behind a mask of criticism.  While my analyses focused on the NPS, my argument against net scores (difference scores) applies to any net score; I just happened to have data on the recommend question, a common question used in customer surveys. In fact, I even ran the same analyses (e.g., comparing means to net scores) on other customer loyalty questions (e.g., overall sat, likelihood to buy), but I did not present those results because they were highly redundant to what I found using the recommend question. The problem of difference scores applies to any customer metric.

I have directly and openly criticized the research on which the NPS is based in my blog posts, articles, and books. I proudly stand behind my research and critique of the Net Promoter Score. Other mask-less researchers/practitioners have also voiced concern about the “research” on which the NPS is based. See Vovici’s blog post for a review. Also, be sure to read Tim Keiningham’s interview with Research Magazine in which he calls the NPS claims “nonsense”. Yes. Nonsense.

Just to be clear, “Nonsense” does not mean “Awesome.”

Source: Unmasking the Problem with Net Scores and the NPS Claims by bobehayes