Aug 31, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

[Image: Complex data]

[ AnalyticsWeek BYTES]

>> Focus on success, not perfection: Look at this data science algorithm for inspiration by analyticsweekpick

>> Customer Churn or Retention? A Must Watch Customer Experience Tutorial by v1shal

>> Why So Many ‘Fake’ Data Scientists? by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Facebook Will Use Artificial Intelligence to Find Extremist Posts – New York Times Under Artificial Intelligence

>> Google Cloud Expands into Australia | Fortune.com – Fortune Under Cloud

>> Phenom People Named an IDC Innovator in New Talent Discovery Report – SYS-CON Media (press release) Under Talent Analytics

More NEWS? Click Here

[ FEATURED COURSE]

Artificial Intelligence


This course includes interactive demonstrations which are intended to stimulate interest and to help students gain intuition about how artificial intelligence methods work under a variety of circumstances…. more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython


Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been on distinguishing its three characteristics (the infamous three Vs), we need to be cognizant of the fact that data have meaning: the numbers in your data represent something of interest, an outcome that is important to your business. That meaning bears directly on the veracity of your data.

[ DATA SCIENCE Q&A]

Q:What is the Law of Large Numbers?
A: * A theorem that describes the result of performing the same experiment a large number of times
* Forms the basis of frequency-style thinking
* It says that the sample mean, the sample variance and the sample standard deviation converge to what they are trying to estimate
* Example: rolling a fair die has an expected value of 3.5; over a large number of rolls, the sample average converges to 3.5
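
To see the convergence in action, here is a minimal Python sketch (the seed and sample sizes are arbitrary illustration choices, not from the original answer):

import random

# The running mean of fair die rolls approaches the expected value 3.5.
random.seed(0)
for n in (100, 10_000, 1_000_000):
    mean = sum(random.randint(1, 6) for _ in range(n)) / n
    print(f"{n:>9} rolls: mean = {mean:.3f}")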

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Nathaniel Lin (@analytics123), @NFPA

Subscribe to YouTube

[ QUOTE OF THE WEEK]

What we have is a data glut. – Vernor Vinge

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with  John Young, @Epsilonmktg

 #BigData @AnalyticsWeek #FutureOfData #Podcast with John Young, @Epsilonmktg

Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

In 2015, a staggering 1 trillion photos will be taken and billions of them will be shared online. By 2017, nearly 80% of photos will be taken on smartphones.

Sourced from: Analytics.CLUB #WEB Newsletter

The Competitive Advantage of Managing Relationships with Multi-Domain Master Data Management

Not long ago, it merely made sense to deploy multi-domain Master Data Management (MDM) systems. The boons for doing so included reduced physical infrastructure, less cost, fewer points and instances of operational failure, more holistic data modeling, less complexity, and a better chance to get the proverbial ‘single version of the truth’—especially compared to deploying multiple single-domain hubs.

Several shifts in the contemporary business and data management climate, however, have intensified those advantages so that it is virtually impossible to justify the need for multiple single-domain platforms where a solitary multi-domain one would suffice.

The ubiquity of big data, the popularity of data lakes, and the emerging reality of digital transformation have made it vital to locate both customer and product data (as well as other domains) in a single system so that organizations “now have the opportunity to build relationships between those customers and those products,” Stibo Systems VP of Product Strategy Christophe Marcant remarked. “The pot of gold at the end of the rainbow is in managing those relationships.”

And, in exploiting them for competitive advantage.

Mastering Relationship Management
Understanding the way that multi-domain MDM hubs facilitate relationship management requires a cursory review of MDM in general. On the one hand, these systems provide access to all of the relevant data about a particular business domain, which may encompass various sources. The true value, however, is in the mastering capabilities of these hubs, which enforce governance protocols and data quality measures by keeping those data uniformly consistent. They tend to redundancies, different spellings for the same customer, data profiling, metadata management, and lifecycle management, in addition to enforcing standards for the completeness and recency of data and their requisite fields. Managing these measures inside of a single platform, as opposed to integrating data beforehand with external tools for governance and quality, enables organizations to apply these standards repeatedly and consistently. Conversely, the integration required for external solutions is frequently “a one time activity: data cleansing and then publishing it to the MDM platform,” Marcant said. “And then that’s it; it’s not going back because it would be another project, another integration, yet another workflow, and yet another opportunity to be out of sync.”
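
To make one of those mastering tasks concrete, here is a minimal sketch of flagging “different spellings for the same customer” with plain string similarity; the names and the 0.8 threshold are invented for illustration, and this is not Stibo’s implementation:

from difflib import SequenceMatcher

# Candidate duplicate customer records, flagged for steward review.
customers = ["Acme Corp.", "ACME Corporation", "Beta Industries", "Acme Corp"]

def similar(a, b, threshold=0.8):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

for i, a in enumerate(customers):
    for b in customers[i + 1:]:
        if similar(a, b):
            print("possible duplicate:", a, "<->", b)

A production hub would use richer matching (phonetics, address and identifier cross-checks), but the principle is the same.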

The Multi-Domain Approach
Such integration limitations do not exist with the quality and governance mechanisms within MDM, in which different types of data are already integrated. Moreover, when deploying multi-domain hubs there are fewer points of integration and fewer workflows related to synchronicity, because data of different domains (such as product and customer) are housed together. The changing climate in which digital transformation, big data, and data lakes have gained prominence has also resulted in much greater utility produced from identifying and understanding relationships between data—both across and within domains. Multi-domain MDM facilitates this sort of relationship management so that organizations can determine how product data directly correlates to customer data, and vice versa. According to Marcant, a well-known book retailer uses such an approach to understand its products and customers to “better tailor what they offer them.”

Connecting the Dots between Domains with Data Modeling and Visualizations
Understanding the relationships between data across conventional domains happens at both granular and high levels. At the former, there are far fewer constraints on data modeling when utilizing a multi-domain platform. “For example, if you’re modeling products, then having the opportunities to model your suppliers, and possibly the market, the location where you make this product available… now you have the opportunity to track information that these are the intersections between suppliers and markets and products,” Marcant noted. He stated that outside of customers and products, the most relevant domains among Stibo’s customers include location, suppliers (supply chain management), and assets.

The ability to represent relationships with modern visualizations produces a degree of transparency and insight that is also an integral part of managing those relationships in multi-domain MDM. “It’s the ability to visualize information in a graphical manner,” Marcant observed. The charts and linear connections facilitated by competitive multi-domain MDMs exist across those domains, and are applicable to myriad use cases. “Being able to visualize relationships between people and organizations and departments is important,” Marcant said. “If you do a merger and acquisition you want to literally see on your screen this chart and be able to map a node to another node.” The visual manifestations of those relationships are a pivotal output of the relative lack of modeling constraints when deploying multi-domain MDM.
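
As a toy illustration of that node-to-node view (entity names invented; this does not depict Stibo’s product), a graph library such as networkx can render cross-domain links directly:

import networkx as nx
import matplotlib.pyplot as plt

# Edges link master-data records across domains: customer, product,
# supplier, and location.
g = nx.Graph()
g.add_edge("Customer: Acme Corp", "Product: Widget X")
g.add_edge("Product: Widget X", "Supplier: Shenzhen Plant")
g.add_edge("Supplier: Shenzhen Plant", "Location: APAC")

nx.draw_networkx(g, node_color="lightblue")
plt.axis("off")
plt.show()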

Extending MDM’s Relevancy
Ultimately, the modeling liberties and visualizations associated with multi-domain MDM are responsible for extending the relevancy of Master Data Management systems. That relevancy is broadened with this approach by incorporating the domains that are most apposite to customer or product domains, and by visually rendering that relevance in the form of relationships across domains. It also provides a level of velocity unmatched by the conventional point solutions for integration, data quality and governance that accompany single-domain MDM hubs. That expedience is heightened with in-memory computing capabilities, which manifest in MDM via quickness in searching, profiling, onboarding and exporting data—and in producing results relevant for certain business processes and functions. “That speed is not only cost-saving in terms of labor, but really what it means down the road is that if you work faster, your product is going to be available for sale earlier,” Marcant mentioned.

Preparing for Digital Transformation
Of all the factors influencing the advantages of multi-domain MDM, digital transformation may indeed be the most formidable. Its impact on customer expectations is that “every single one of the consumers, they think their interactions at the store and online has to be consistent…that what they touch here, is reflected in like fashion there on the screen,” commented Marcant. As such, the management of relationships between the various domains of MDM is a valuable way of implementing that consistency, and of keeping ahead of the developments within and across the domains that yield competitive advantage. Organizations benefit from understanding how geographic location relates to supply chain concerns, and how those in turn influence their customer and product information. This advantage is reinforced (if not produced) by the comprehensive system of reference in multi-domain MDM systems and by their penchant for charting the relationships that exist between the many facets of master data today.

Source by jelaniharper

Aug 24, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

[Image: Productivity]

[ AnalyticsWeek BYTES]

>> The Big Data Problem in Customer Experience Management: Understanding Sampling Error by bobehayes

>> The 37 best tools for data visualization by analyticsweekpick

>> Big Data: Would number geeks make better football managers? by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> DOJ Leverages Big Data Analytics to Combat Opioid Fraud, Abuse – Health IT Analytics Under Big Data Analytics

>> F-Secure acquires Digital Assurance; to improve cyber security needs – Deccan Chronicle Under Cyber Security

>> The Hospital Tech Laboratory: Quality Innovation in a New Era of Value-Conscious Care – AJMC.com Managed Markets Network Under Streaming Analytics

More NEWS? Click Here

[ FEATURED COURSE]

Introduction to Apache Spark


Learn the fundamentals and architecture of Apache Spark, the leading cluster-computing framework among professionals…. more

[ FEATURED READ]

Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th Edition


The eagerly anticipated Fourth Edition of the title that pioneered the comparison of qualitative, quantitative, and mixed methods research design is here! For all three approaches, Creswell includes a preliminary conside… more

[ TIPS & TRICKS OF THE WEEK]

Grow at the speed of collaboration
A study by Cornerstone OnDemand pointed out the need for better collaboration within the workforce, and the data analytics domain is no different. A rapidly changing and growing industry like data analytics is very difficult for an isolated workforce to keep up with. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and a greater ability to cut through the noise. So, embrace collaborative team dynamics.

[ DATA SCIENCE Q&A]

Q:Do you know a few “rules of thumb” used in statistical or computer science? Or in business analytics?

A: Pareto rule:
– 80% of the effects come from 20% of the causes
– 80% of the sales come from 20% of the customers

Computer science: “simple and inexpensive beats complicated and expensive” – Rod Elder

Finance, rule of 72:
– Estimate the time needed for an investment to double
– $100 at a rate of 9%: 72/9 = 8 years

Rule of three (Economics):
– There are always three major competitors in a free market within one industry
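
As a quick check of the rule of 72 (a hypothetical sketch, not part of the original answer), compare the approximation against the exact doubling time log(2)/log(1 + r):

import math

# Rule-of-72 approximation vs. exact doubling time for annual rate r (in %).
for rate in (3, 6, 9, 12):
    approx = 72 / rate
    exact = math.log(2) / math.log(1 + rate / 100)
    print(f"{rate}%: rule of 72 = {approx:.1f} yrs, exact = {exact:.1f} yrs")

At 9% the approximation gives 8.0 years against an exact 8.04, which is why the shortcut survives.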

Source

[ VIDEO OF THE WEEK]

#FutureOfData Podcast: Peter Morgan, CEO, Deep Learning Partnership

Subscribe to YouTube

[ QUOTE OF THE WEEK]

Data that is loved tends to survive. – Kurt Bollacker, Data Scientist, Freebase/Infochimps

[ PODCAST OF THE WEEK]

#GlobalBusiness at the speed of The #BigAnalytics

Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

Market research firm IDC has released a new forecast that shows the big data market is expected to grow from $3.2 billion in 2010 to $16.9 billion in 2015.

Sourced from: Analytics.CLUB #WEB Newsletter

How Big Data And The Internet Of Things Improve Public Transport In London

Transport for London (TfL) oversees a network of buses, trains, taxis, roads, cycle paths, footpaths and even ferries which are used by millions every day. Running these vast networks, so integral to so many people’s lives in one of the world’s busiest cities, gives TfL access to huge amounts of data. This is collected through ticketing systems as well as sensors attached to vehicles and traffic signals, surveys and focus groups, and of course social media.

Lauren Sager-Weinstein, head of analytics at TfL, spoke to me about the two key priorities for collecting and analyzing this data: planning services, and providing information to customers. “London is growing at a phenomenal rate,” she says. “The population is currently 8.6 million and is expected to grow to 10m very quickly. We have to understand how they behave and how to manage their transport needs.”

“Passengers want good services and value for money from us, and they want to see us being innovative and progressive in order to meet those needs.”

Oyster prepaid travel cards were first issued in 2003 and have since been expanded across the network. Passengers effectively “charge” them by converting real money from their bank accounts into “Transport for London money,” and the cards are then swiped to gain access to buses and trains. This enables a huge amount of data to be collected about the precise journeys that are being taken.

Journey mapping

This data is anonymized and used to produce maps showing when and where people are traveling, giving both a far more accurate overall picture and more granular analysis at the level of individual journeys than was possible before. As a large proportion of London journeys involve more than one method of transport, this level of analysis was not possible in the days when tickets were purchased from different services, in cash, for each individual leg of the journey.

That isn’t to say that integrating state-of-the-art data collection strategies with legacy systems has been easy in a city where public transport has operated since 1829. For example, on London Underground (Tube) journeys, passengers are used to “checking in and checking out” – tickets are validated (by automatic barriers) at the start and end of a journey. On buses, however, passengers simply check in. Traditionally tickets were purchased from the bus driver or inspector for a set fee per journey. There is no mechanism for recording where a passenger leaves the bus and ends their journey – and implementing one would have been impossible without inconveniencing the customer.

“Data collection has to be tied to business operations. This was a challenge to us, in terms of tracking customer journeys,” says Sager-Weinstein. TfL worked with MIT, just one of the academic institutions with which it has research partnerships, to devise a Big Data solution to the problem. “We asked, ‘Can we use Big Data to infer where someone exited?’ We know where the bus is, because we have location data and we have Oyster data for entry,” says Sager-Weinstein. “What we do next is look at where the next tap is. If we see the next tap follows shortly after and is at the entry to a tube station, we know we are dealing with one long journey using bus and tube.”

“This allows us to understand load profiles – how crowded a particular bus or range of buses are at a certain time, and to plan interchanges, to minimize walk times and plan other services such as retail.”
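
The inference Sager-Weinstein describes can be sketched in a few lines: attribute the exit to the route stop nearest the passenger’s next tap. Everything below (field names, coordinates, the distance helper) is an assumed illustration, not TfL’s actual system:

from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def infer_exit_stop(route_stops, next_tap_loc):
    # Assume the passenger left the bus at the stop closest to wherever
    # their card tapped in next (e.g. a Tube station gate).
    return min(route_stops, key=lambda s: haversine_km(s["loc"], next_tap_loc))

route = [
    {"name": "Stop A", "loc": (51.500, -0.120)},
    {"name": "Stop B", "loc": (51.505, -0.130)},
    {"name": "Stop C", "loc": (51.510, -0.140)},
]
print(infer_exit_stop(route, (51.5045, -0.1310))["name"])  # -> Stop B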

Unexpected events

Big Data analysis also helps TfL respond in an agile way when disruption occurs. Sager-Weinstein cites an occasion when Wandsworth Council was forced to close Putney Bridge – crossed by 870,000 people every day – for emergency repairs.

“We were able to work out that half of the journeys started or ended very close to Putney Bridge. The bridge was still open to pedestrians and cyclists, so we knew those people would be able to cross and either reach their destination or continue their journey on the other side. They either live locally, or their destination is local.”

“The other half were crossing the bridge at the half-way point of their journey. In order to serve their needs we were able to set up a transport interchange and increase bus service on alternate routes. We also sent them personalized messages about how their journey was likely to be affected. It was very helpful that we were able to use Big Data to quantify them.”

This personalized approach to providing travel information is the other key priority for TfL’s data initiatives. “We have been working really hard to really understand what our customers want from us in terms of information. We push information from 23 Twitter accounts and provide online customer services 24 hours a day.”

Personalized travel news

Travel data is also used to identify customers who regularly use specific routes and send tailored travel updates to them. “If we know a customer frequently uses a particular station, we can include information about service changes at that station in their updates. We understand that people are hit by a lot of data these days and too much can be overwhelming, so there is a strong focus on sending data which is relevant,” says Sager-Weinstein.
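
A minimal pandas sketch of that idea (column names and data are invented): count taps per anonymized card and station, and keep the pairs frequent enough to warrant targeted updates:

import pandas as pd

taps = pd.DataFrame({
    "card_id": ["c1", "c1", "c1", "c2", "c2", "c1"],
    "station": ["Putney", "Putney", "Bank", "Oval", "Oval", "Putney"],
})

# Cards that tap at a station at least three times get updates for it.
counts = taps.groupby(["card_id", "station"]).size().rename("n_taps").reset_index()
print(counts[counts["n_taps"] >= 3])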

“We use information from the back-office systems for processing contactless payments, as well as Oyster, train location and traffic signal data, cycle hire and the congestion charge. We also take into account special events such as the Tour de France and identify people likely to be in those areas. 83% of our passengers rate this service as ‘useful’ or ‘very useful’.” Not bad when you consider that complaining about the state of public transport is considered a hobby by many British people.

TfL also provides its data through open APIs for use by third-party app developers, meaning that tailored solutions can be developed for niche user groups.
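
For instance, TfL’s Unified API at api.tfl.gov.uk serves line status as plain JSON over HTTPS; the endpoint and field names below reflect that API as I understand it, so treat this as a sketch and verify against the current developer documentation:

import requests

# Fetch the current status of the Victoria line from TfL's open API.
resp = requests.get("https://api.tfl.gov.uk/Line/victoria/Status", timeout=10)
resp.raise_for_status()
for line in resp.json():
    for status in line["lineStatuses"]:
        print(line["name"], "-", status["statusSeverityDescription"])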

Its systems currently run on a number of Microsoft and Oracle platforms, but the organization is currently looking into adopting Hadoop and other open source solutions to cope with growing data demands going forwards. Plans for the future include increasing the capacity for real-time analytics and working on integrating an even wider range of data sources, to better plan services and inform customers.

Big Data has clearly played a big part in re-energizing London’s transport network. But importantly, it is clear that it has been implemented in a smart way, with eyes firmly on the prize. “One of the most important questions is always ‘why are we asking these questions?’” explains Sager-Weinstein. “Big Data is always very interesting but sometimes it is only interesting. You need to find a business case.”

“We always try to come back to the bigger questions – growth in London and how we can meet that demand, by managing the network and infrastructure as efficiently as possible.”

To read the full article on Forbes, click here.

Originally Posted at: How Big Data And The Internet Of Things Improve Public Transport In London by analyticsweekpick

June 5, 2017 Health and Biotech analytics news roundup

First analysis of AACR Project GENIE data published: The dataset was released earlier this year. Among other results, the analysis showed that many tumors have mutations that are ‘clinically actionable.’

Database aims to personalize chemotherapy and reduce long-term heart risks: Treatments for breast cancer can result in cardiovascular disease. University of Alberta researchers will make risk profiles for this outcome and match them with genetic information.

Stamford Health’s plunge into analytics has closed gaps, opened new doors: The hospital used Tableau to improve reporting rates and to connect disparate systems.

At Big Data in Biomedicine, reexamining clinical trials in the era of precision health: Traditional trials are expensive and time-consuming, and are not necessarily the best tool for examining certain questions. Researchers may have to use observational studies more and find creative ways to make current studies larger.

Source: June 5, 2017 Health and Biotech analytics news roundup

Aug 17, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

[Image: Convincing]

[ AnalyticsWeek BYTES]

>> Big Data Analytics Bottleneck Challenging Global Capital Markets Ecosystem, Says TABB Group by analyticsweekpick

>> What Crying Baby Could Teach Big Data Discovery Solution Seekers? by v1shal

>> Big data analytics startup Sqrrl raises $7M by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Keeping Clients on the Straight and Narrow – WealthManagement.com Under Risk Analytics

>> Veriluma’s big prediction for prescriptive analytics – Computerworld Australia Under Prescriptive Analytics

>> Rig contractor to pay up to $100M for big data tech firm – FuelFix (blog) Under Big Data

More NEWS? Click Here

[ FEATURED COURSE]

The Analytics Edge


This is an Archived Course
EdX keeps courses open for enrollment after they end to allow learners to explore content and continue learning. All features and materials may not be available, and course content will not be… more

[ FEATURED READ]

The Industries of the Future


The New York Times bestseller, from leading innovation expert Alec Ross, a “fascinating vision” (Forbes) of what’s next for the world and how to navigate the changes the future will bring…. more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle data can lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric: the metric that matters the most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation, and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q:Explain the difference between “long” and “wide” format data. Why would you use one or the other?
A: * Long: one column containing the values and another column listing the context of the value, e.g. columns Fam_id, year, fam_inc

* Wide: each different variable in a separate column, e.g. columns Fam_id, fam_inc96, fam_inc97, fam_inc98

Long vs. wide:
– Data manipulations (summarizing, filtering) are generally easier when data is in the long format
– Some programs require one format or the other
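
A small pandas illustration of the two shapes, using the Fam_id/fam_inc columns from the answer above (values invented):

import pandas as pd

wide = pd.DataFrame({
    "Fam_id": [1, 2],
    "fam_inc96": [40_000, 52_000],
    "fam_inc97": [41_500, 53_100],
    "fam_inc98": [43_000, 55_000],
})

# Wide -> long: one row per (family, year) observation.
long = wide.melt(id_vars="Fam_id", var_name="year", value_name="fam_inc")
long["year"] = long["year"].str.replace("fam_inc", "19").astype(int)
print(long.sort_values(["Fam_id", "year"]))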

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @MichOConnell, @Tibco

Subscribe to YouTube

[ QUOTE OF THE WEEK]

Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world. – Atul Butte, Stanford

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData with Jon Gibs(@jonathangibs) @L2_Digital

Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, not including using big data to reduce fraud and errors and boost the collection of tax revenues.

Sourced from: Analytics.CLUB #WEB Newsletter

The New Analytics Professional: Landing A Job In The Big Data Era

Along with the usual pomp and celebration of college commencements and high school graduation ceremonies we’re seeing now, the end of the school year also brings the usual brooding and questions about careers and next steps. Analytics is no exception, and with the big data surge continuing to fuel lots of analytics jobs and sub-specialties, the career questions keep coming. So here are a few answers on what it means to be an “analytics professional” today, whether you’re just entering the workforce, you’re already mid-career and looking to make a transition, or you need to hire people with this background.

The first thing to realize is that analytics is a broad term, and there are a lot of names and titles that have been used over the years that fall under the rubric of what “analytics professionals” do: The list includes “statistician,” “predictive modeler,” “analyst,” “data miner” and — most recently — “data scientist.” The term “data scientist” is probably the one with the most currency – and hype – surrounding it for today’s graduates and upwardly mobile analytics professionals. There’s even a backlash against over-use of the term by those who slap it loosely on resumes to boost salaries and perhaps exaggerate skills.


Labeling the Data Scientist

In reality, if you study what successful “data scientists” actually do and the skills they require to do it, it’s not much different from what other successful analytics professionals do and require. It is all about exploring data to uncover valuable insights, often using very sophisticated techniques. Much like success in different sports depends on a lot of the same fundamental athletic abilities, so too does success with analytics depend on fundamental analytic skills. Great analytics professionals exist under many titles, but all share some core skills and traits.

The primary distinction I have seen in practice is that data scientists are more likely to come from a computer science background, to use Hadoop, and to code in languages like Python and R. Traditional analytics professionals, on the other hand, are more likely to come from a statistics, math or operations research background, are likely to work in relational or analytics server environments, and to code in SAS and SQL.

Regardless of the labels or tools of choice, however, success depends on much more than specific technical abilities or focus areas, and that’s why I prefer the term “data artist” to get at the intangibles like good judgment and boundless curiosity around data. I wrote an article on the data artist for the International Institute for Analytics (IIA). I also collaborated with the IIA and Greta Roberts from Talent Analytics to survey a wide range of analytics professionals. One of our chief goals in that 2013 quantitative study was to find out whether analytics professionals have a unique, measurable mind-set and raw talent profile.

A Jack-of-All Trades

Our survey results showed that these professionals indeed have a clear, measurable raw talent fingerprint that is dominated by curiosity and creativity; these two ranked very high among 11 characteristics we measured. They are the qualities we should prioritize alongside the technical bona fides when looking to fill jobs with analytics professionals. These qualities also happen to transcend boundaries between traditional and newer definitions of what makes an analytics professional.

This is particularly true as we see more and more enterprise analytics solutions getting built from customized mixtures of multiple systems, analytic techniques, programming languages and data types. All analytics professionals need to be creative, curious and adaptable in this complex environment that lets data move to the right analytic engines, and brings the right analytic engines to where the data may already reside.

Given that the typical “data scientist” has some experience with Hadoop and unstructured data, we tend to ascribe the creativity and curiosity characteristics automatically (you need to be creative and curious to play in a sandbox of unstructured data, after all). But that’s an oversimplification, and our Talent Analytics/International Institute of Analytics survey shows that the artistry and creative mindset we need to see in our analytics professionals is an asset regardless of what tools and technologies they’ll be working with and regardless of what title they have on their business card. This is especially true when using the complex, hybrid “all-of-the-above” solutions that we’re seeing more of today and which Gartner calls the Logical Data Warehouse.

Keep all this in mind as you move forward. The barriers between the worlds of old and new, open source and proprietary, structured and unstructured are breaking down. Top-quality analytics is all about being creative and flexible with the connections between all these worlds and making everything work seamlessly. Regardless of where you are in that ecosystem or what kind of “analytics professional” you may be or may want to hire, you need to prioritize creativity, curiosity and flexibility – the “artistry” – of the job.

To read the original article on Forbes, click here.

Source: The New Analytics Professional: Landing A Job In The Big Data Era by analyticsweekpick

Benchmarking the share of voice of Coca-Cola, Red Bull and Pepsi

Today we’re comparing three soft drink brands: Coca Cola, Pepsi and Red Bull. All are big names in the beverages industry. We’ll use BuzzTalk’s benchmark tool to find out which brand is talked about the most and how people feel about that brand. As you probably know, it’s not enough that people talk about your brand. You want them to be positive and enthusiastic.

Coca Cola has the largest Share of Voice

In order to benchmark these brands we’ve created three Media Reports in BuzzTalk, all set up the same way. We include news sites, blogs, journals and Twitter for the time period starting 23 September 2013. In these reports we didn’t include printed media.

[Image: Soft drinks share of buzz]
As you can see, Coca Cola (blue) is the dominant brand online. Nearly 45% of the publications mention Coca Cola. Red Bull (green) and Pepsi Cola (red) follow close to each other at 29% and 26%.
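
The share-of-voice arithmetic is simply each brand’s publication count over the total; a toy computation (counts invented to roughly match the reported shares):

# Hypothetical publication counts per brand.
counts = {"Coca Cola": 4480, "Red Bull": 2900, "Pepsi": 2620}
total = sum(counts.values())
for brand, n in counts.items():
    print(f"{brand}: {100 * n / total:.1f}% share of voice")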

Benchmarking the Buzz as not all buzz is created equal

Coca Cola doesn’t dominate everywhere on the web. If we take a closer look, the dominance of Coca Cola is predominantly caused by its share of tweets. When we zoom in on news sites, we notice it’s Red Bull that has the biggest piece of the pie. On blogs (not shown) Coca Cola and Red Bull match up.

[Image: Buzz by content type]

Is Coca Cola’s dominance on Twitter due to Beliebers?

About 99.6% of Coca Cola-related publications are on Twitter. Most of these tweets relate to the Coca-Cola.FM radio station in South America in connection with Justin Bieber. On 12 November Coca Cola streamed a concert by this young pop star, and what we’re seeing here is the effect of ‘Beliebers’ on the share of voice.

[Image: Coca Cola hashtag – Justin Bieber]

The Coca Cola Christmas effect can still be detected

The Bieber effect is even stronger than Christmas (42,884 versus 2,764 tweets).

[Image: Coca Cola hashtag – Xmas]

Last year we demonstrated what marks the countdown to the holidays: the release of the new Coca Cola TV commercial. What we noticed then was a sudden increase in the mood state ‘tension’. In the following graph you can see it’s still there (Coca Cola is still in blue).

[Image: Coca Cola ‘tension’ over time, November]
The mood state ‘tension’ relates to both anxiety and excitement. It’s the emotion we pick up during large product releases. If this is the first time you’re reading about mood states, we recommend reading this blogpost as an introduction. Mood states are an interesting add-on to sentiment, to be used in predictions about human behavior. The ways in which actual predictions can be made are the subject of ongoing research.

How do we feel about these brands?

Let’s examine some more mood states and see whether we can find a mood state that’s clearly associated with a brand. As you can see in the graphs below, each soft drink brand gets its fair share of the mood state ‘tension’. Tension is not specific to Coca Cola, though it is more prominent during the countdown towards Christmas.

[Image: Mood states by brand]
Pepsi Cola evokes the most ‘confusion’ and slightly more ‘anger’. The feelings of confusion are often related to feeling guilty after drinking (too much) Pepsi.

[Image: How do we feel about these brands]

Red Bull generates the most mood states, as it dominates not only for fatigue but also – to a lesser extent – for depression, tension and vigor.

Striking is the number of publications for Red Bull in which the mood state ‘fatigue’ can be detected. They say “Red Bull gives you wings” and this tagline has become famous. People now associate tiredness with the desire for Red Bull. But people also blame Red Bull for (still) feeling tired or more tired. At least it’s good to see Red Bull also has its share in the ‘vigor’ mood state department.

To read the original article on BuzzTalk, click here.

Originally Posted at: Benchmarking the share of voice of Coca-Cola, Red Bull and Pepsi

Aug 10, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

[Image: Data storage]

[ AnalyticsWeek BYTES]

>> May 15, 2017 Health and Biotech analytics news roundup by pstein

>> 100 Greatest Quotes On Leadership by v1shal

>> Six Ways to Define Big Data by bobehayes

Wanna write? Click Here

[ NEWS BYTES]

>> How Yahoo’s Internal Hadoop Cluster Does Double-Duty on Deep … – The Next Platform Under Hadoop

>> Batting blight with big data – Phys.Org Under Big Data

>> interRel’s State of Business Analytics Survey Identifies Five Key … – Broadway World Under Business Analytics

More NEWS? Click Here

[ FEATURED COURSE]

CPSC 540 Machine Learning


Machine learning (ML) is one of the fastest growing areas of science. It is largely responsible for the rise of giant data companies such as Google, and it has been central to the development of lucrative products, such … more

[ FEATURED READ]

Hypothesis Testing: A Visual Introduction To Statistical Significance


Statistical significance is a way of determining if an outcome occurred by random chance, or did something cause that outcome to be different than the expected baseline. Statistical significance calculations find their … more

[ TIPS & TRICKS OF THE WEEK]

Data aids, not replaces, judgement
Data is a tool and a means to help build consensus and facilitate human decision-making, not replace it. Analysis converts data into information; information, via context, leads to insight. Insights lead to decisions, which ultimately lead to outcomes that bring value. So data is just the start; context and intuition also play a role.

[ DATA SCIENCE Q&A]

Q:Explain the difference between “long” and “wide” format data. Why would you use one or the other?
A: * Long: one column containing the values and another column listing the context of the value, e.g. columns Fam_id, year, fam_inc

* Wide: each different variable in a separate column, e.g. columns Fam_id, fam_inc96, fam_inc97, fam_inc98

Long vs. wide:
– Data manipulations (summarizing, filtering) are generally easier when data is in the long format
– Some programs require one format or the other
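
Complementing the wide-to-long melt sketch shown earlier in this digest, here is the reverse reshaping with pandas pivot (values invented):

import pandas as pd

long = pd.DataFrame({
    "Fam_id": [1, 1, 2, 2],
    "year": [1996, 1997, 1996, 1997],
    "fam_inc": [40_000, 41_500, 52_000, 53_100],
})

# Long -> wide: one column per year, one row per family.
wide = long.pivot(index="Fam_id", columns="year", values="fam_inc")
print(wide)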

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek: Big Data at Work: Paul Sonderegger

Subscribe to YouTube

[ QUOTE OF THE WEEK]

You can have data without information, but you cannot have information without data. – Daniel Keys Moran

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Juan Gorricho, @disney

Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

The world’s data volume is equivalent to every person in the world having more than 215m high-resolution MRI scans a day.

Sourced from: Analytics.CLUB #WEB Newsletter

Asking the Right Customer Experience Questions

Earlier this month, I spoke at the CustomerThink Customer Experience Summit 2011, a free virtual summit featuring Customer Experience researchers and practitioners sharing leading-edge practices to engage with today’s empowered customers. The speakers showed how you can create a compelling customer experience that gives your organization a competitive advantage. For my talk, Asking the Right Customer Experience Questions, I presented best practices for relationship-based surveys for Voice of Customer (VoC) programs. Based on an earlier blog post, my talk proposed a set of survey questions that improves how you measure, and ultimately improve, the health of the customer relationship.

The Optimal Customer Relationship Survey

It turns out short surveys are just as good as lengthy surveys. Your optimal customer relationship survey should have about 20 questions. Here are the best practices:

  1. Measure different types of customer loyalty (retention, advocacy and purchasing). Consider how your customers can engage in different types of loyalty behaviors and include loyalty questions to reflect these different ways.  This section should have 4-6 customer loyalty questions.
  2. Use general customer experience questions instead of specific customer experience questions. Specific customer experience questions add very little to our understanding of customer loyalty. This section should include about 7 general customer experience questions.
  3. Measure your relative performance. Your industry ranking has an impact on how much your customers spend. To increase your customers’ share of wallet, ask them about how you perform relative to your competitors.  This section should have about 3 questions.
  4. Consider additional questions. Before you add any additional questions, consider how you are going to use the resulting data. If you do not know how you will use the data, you probably do not need those questions. To segment your customers, consider adding, at most, 5 psychographic and demographic questions, like age (B2C), job level (B2B), job role (B2B) and education level.

You can view my Customer Experience Summit presentation below and register here to watch all presentations from the Summit.

Source