Jul 18, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

[Image: Weak data] Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> [Step-by-Step] Using Talend to Bulk Load Data in Snowflake Cloud Data Warehouse by analyticsweekpick

>> Jan 24, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

>> Best & Worst Time for Cold Call by v1shal

Wanna write? Click Here

[ FEATURED COURSE]

CPSC 540 Machine Learning


Machine learning (ML) is one of the fastest growing areas of science. It is largely responsible for the rise of giant data companies such as Google, and it has been central to the development of lucrative products, such … more

[ FEATURED READ]

The Industries of the Future


The New York Times bestseller, from leading innovation expert Alec Ross, a “fascinating vision” (Forbes) of what’s next for the world and how to navigate the changes the future will bring…. more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning: the numbers in your data represent something of interest, an outcome that is important to your business. That meaning bears directly on the veracity of your data.

[ DATA SCIENCE Q&A]

Q:What is random forest? Why is it good?
A: Random forest (intuition):
– Underlying principle: several weak learners combined provide a strong learner
– Builds several decision trees on bootstrapped training samples of the data
– On each tree, each time a split is considered, a random sample of m predictors is chosen as split candidates out of all p predictors
– Rule of thumb: at each split, m = √p
– Predictions: by majority vote across the trees

Why is it good?
– Very good performance (decorrelates the features)
– Can model non-linear class boundaries
– Generalization error for free: no cross-validation needed; the out-of-bag samples give an unbiased estimate of the generalization error as the trees are built
– Generates variable importance
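
To make the m = √p rule and the "free" out-of-bag (OOB) error estimate concrete, here is a minimal sketch using scikit-learn. The library choice, dataset, and parameters are illustrative assumptions, not part of the original answer:

```python
# Minimal random forest sketch: bootstrapped trees, m = sqrt(p) split
# candidates, and the out-of-bag estimate of generalization error.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

clf = RandomForestClassifier(
    n_estimators=200,
    max_features="sqrt",  # consider m = sqrt(p) predictors at each split
    bootstrap=True,       # each tree trains on a bootstrapped sample
    oob_score=True,       # "generalization error for free", no CV needed
    random_state=0,
).fit(X, y)

print(f"OOB accuracy estimate: {clf.oob_score_:.3f}")
print("Variable importances:", clf.feature_importances_.round(3))
```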

Source

[ VIDEO OF THE WEEK]

#FutureOfData Podcast: Peter Morgan, CEO, Deep Learning Partnership

Subscribe to YouTube

[ QUOTE OF THE WEEK]

Processed data is information. Processed information is knowledge. Processed knowledge is wisdom. – Ankala V. Subbarao

[ PODCAST OF THE WEEK]

#FutureOfData with @theClaymethod, @TiVo discussing running analytics in media industry

 #FutureOfData with @theClaymethod, @TiVo discussing running analytics in media industry

Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

By 2020, we will have over 6.1 billion smartphone users globally (overtaking basic fixed phone subscriptions).

Sourced from: Analytics.CLUB #WEB Newsletter

Big, Bad Data: How Talent Analytics Will Make It Work In HR


Here’s a mind-blowing fact to spark up the late-summer doldrums: research from IBM shows that 90% of the data in the world today has been created in the last two years alone. I find this fascinating.

Which means that companies have access to an unprecedented amount of information: insights, intelligence, trends, future-casting. In terms of HR, it’s a gold mine of Big Data.

This past spring, I welcomed the ‘Industry Trends in Human Resources Technology and Service Delivery Survey,’ conducted by the Information Services Group (ISG), a leading technology insights, market intelligence and advisory services company. It’s a useful study, particularly for leaders and talent managers, offering a clear glimpse of what companies investing in HR tech expect to gain from their investment.


Not surprisingly, there are three key benefits companies expect to realize from investments in HR tech:

• Improved user and candidate experience

• Access to ongoing innovation and best practices to support the business

• Speed of implementation to increase the value of technology to the organization.

It’s worth noting that what drives the need for an improved user interface, access, and speed is the nature of the new talent surging into the workforce: people for whom technology is nearly as much a given as air. They grew up with technology, are completely comfortable with it, and not only expect it to be available, they assume it will be, as well as easy to use and responsive to their situations, with mobile and social components.

According to the ISG study, companies want HR tech to offer strategic alignment with their business. I view this as more about enabling flexibility in talent management, recruiting and retention — all of which are increasing in importance as Boomers retire, taking with them their deep base of knowledge and experience. And companies are looking more for the analytics end of the benefit spectrum. No surprise here that the delivery model will be through cloud-based SaaS solutions.

Companies also want:

• Data security

• Data privacy

• Integration with existing systems, both HR and general IT

• Customizability, to align with internal systems and processes.

Cloud-based. According to the ISG report, more than 50% of survey respondents have implemented or are implementing cloud-based SaaS systems. It’s easy, it’s more cost-effective than on-premise software, and it’s where the exciting innovation is happening.

Mobile/social. That’s a given. Any HCM tool must have a good mobile user experience, from well-designed mobile forms and ease of access to a secure interface.

They want it to have a simple, intuitive user interface – another given. Whether accessed via desktop or mobile, the solution must offer a single, unified, simple-to-use interface.

They want it to offer social collaboration tools, which is particularly key for the influx of millennials coming into the workplace, who expect to be able to collaborate via social channels. HR is no exception here. While challenging from a security and data protection angle, it’s a must.

But the final requirement the study reported is, in my mind, the most important: Analytics and reporting. Management needs reporting to know their investment is paying off, and they also need robust analytics to keep ahead of trends within the workforce.

It’s not just a question of Big Data’s accessibility, or of sophisticated metrics, such as the Key Performance Indicators (KPIs) that reveal the critical factors for success and measure progress made towards strategic goals. For organizations to realize the promise of Big Data, they must be able to cut through the noise, and access the right analytics that will transform their companies for the better.

Given what companies are after, as shown in the ISG study, I predict that more and more companies are going to recognize the benefits of using integrated analytics for their talent management and workforce planning processes. Talent analytics creates a powerful, invaluable amalgam of data and metrics; it can identify the meaningful patterns within them and, whatever challenges and opportunities an organization faces, inform the decision makers of the right tactics and strategies to move forward. It will take talent analytics to synthesize Big Data and metrics to make the key strategic management decisions in HR. Put another way, it’s not just the numbers, it’s how they’re crunched.

Article originally appeared HERE.

Source: Big, Bad Data: How Talent Analytics Will Make It Work In HR

How Big Data Is Changing The Entertainment Industry!

Big Data is here – The latest buzzword of the Information Technology Industry!

The world is generating a humongous amount of data every second, and rapid advances in technology are making analysis of that data a cakewalk. Big Data is influencing every aspect of our lives and will continue to grow bigger and better. Retailers will push us to buy extra chips and soft drinks from the nearest outlet as we watch a T20 match with friends while our favorite teams play. They will even recommend a CD of our favorite party songs and encourage us to donate a dollar to the charity we visit most often. Use cases are emerging in disease prevention, share trading, marketing, and beyond.

Big Data is changing the sports and entertainment industries as well. Both are driven by fans and their word of mouth. Engagement with the audience is key, and Big Data is creating opportunities to drive this engagement and influence audience sentiment.

IBM worked with a media company and ran its predictive models on the social buzz for the movie Ram Leela. According to reports, IBM predicted a 73% chance of success for the movie based on the right selection of cities. Similar rich analysis of social data was conducted for Barfi and Ek Tha Tiger. All these movies were runaway successes at the box office.

Hollywood uses Big Data big time! Social media buzz can predict box office success; more importantly, based on how a movie is trending, strategies can be formulated to ensure its favorable positioning. All science!

Netflix is the best case study of analyzing user behavior and hitting the jackpot! Its original show House of Cards was commissioned solely on the basis of big data about the preferences of its customers.

Shah Rukh Khan’s Chennai Express, one of the biggest box office grossers of 2013, used Big Data and analytics solutions to drive social media and digital marketing campaigns. IT services company Persistent Systems helped the Chennai Express team with the right strategic inputs. Chennai Express-related tweets generated over 1 billion cumulative impressions, and the total number of tweets across all hashtags exceeded 750 thousand over the 90-day campaign period. Persistent Systems CEO Siddhesh Bhobe said, “Shah Rukh Khan and the success of Chennai Express have proved that social media is the channel of the future and that it presents unique opportunities to marketers and brands, at an unbeatable ROI (return on investment).”

Lady Gaga and her team browse through our listening preferences and sequences to optimize playlists for maximum impact at live events. Singapore-based Big Data analytics firm Crayon has worked with leading Hindi film industry producers to understand what kind of music to release to create the right buzz for a movie.

Sports is another area where big data is making a big impact. FIFA 2014 champions Germany have been using SAP’s Match Insights software, and it has made a big difference to the team. Data was crunched on player-position ‘touch maps’, passing ability, ball retention, and even metrics such as ‘aggressive play’. Kolkata Knight Riders, an IPL team, crunched 25 data points per ball to determine the consistency of players; it helped in auctions as well as ongoing training.

Big Data can definitely be a boon to the entertainment and sports industries. It can improve the profitability of movies, always a high-risk business: everything from green-lighting the story to cast selection to the timing of release can be informed by data. It can also help sporting leagues pick the right players, allowing talent to win!

Entertainment industry leaders need to collaborate with leading big data startups and visionaries to create new uses and deliver new success stories!

Originally posted via “How Big Data Is Changing The Entertainment Industry!”

Originally Posted at: How Big Data Is Changing The Entertainment Industry! by analyticsweekpick

Jul 11, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

[Image: Trust the data] Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Marketing Analytics – Success Through Analysis by analyticsweekpick

>> Closer Than You Think: Data Strategies Across Your Company by analyticsweek

>> CISOs’ newest fear? Criminals with a big data strategy by analyticsweekpick

Wanna write? Click Here

[ FEATURED COURSE]

Data Mining


Data that has relevance for managerial decisions is accumulating at an incredible rate due to a host of technological advances. Electronic data capture has become inexpensive and ubiquitous as a by-product of innovations… more

[ FEATURED READ]

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World


In the world’s top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Mast… more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption. One of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up to create awareness within the organization. An aware organization goes a long way in helping get quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q:Explain selection bias (with regard to a dataset, not variable selection). Why is it important? How can data management procedures such as missing data handling make it worse?
A: * Selection of individuals, groups or data for analysis in such a way that proper randomization is not achieved
Types:
– Sampling bias: systematic error due to a non-random sample of a population causing some members to be less likely to be included than others
– Time interval: a trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all the variables have similar means
– Data: “cherry picking”, when specific subsets of the data are chosen to support a conclusion (citing examples of plane crashes as evidence that airline flight is unsafe, while ignoring the far more common flights that complete safely)
– Studies: performing experiments and reporting only the most favorable results
– Can lead to inaccurate or even erroneous conclusions
– Statistical methods can generally not overcome it

Why can data handling make it worse?
– Example: individuals who know or suspect that they are HIV positive are less likely to participate in HIV surveys
– Missing-data handling amplifies this effect, since estimates end up based mostly on HIV-negative respondents
– Prevalence estimates will therefore be inaccurate (underestimated)
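
The HIV survey example can be made concrete with a small simulation. This is a minimal sketch with assumed participation rates, not figures from the answer:

```python
# Simulate selection bias: HIV-positive individuals respond less often,
# so a naive complete-case estimate understates true prevalence.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_prevalence = 0.05
positive = rng.random(n) < true_prevalence

# Assumed response rates: positives respond 30% of the time, negatives 80%.
responds = np.where(positive, rng.random(n) < 0.3, rng.random(n) < 0.8)

naive_estimate = positive[responds].mean()  # complete-case analysis
print(f"true: {true_prevalence:.3f}  naive: {naive_estimate:.3f}")
# Expected naive value: 0.05*0.3 / (0.05*0.3 + 0.95*0.8) ~= 0.019 -- biased low.
```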

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Juan Gorricho, @disney

Subscribe to YouTube

[ QUOTE OF THE WEEK]

Data are becoming the new raw material of business. – Craig Mundie

[ PODCAST OF THE WEEK]

@EdwardBoudrot / @Optum on #DesignThinking & #DataDriven Products #FutureOfData #Podcast

Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

94% of Hadoop users perform analytics on large volumes of data not possible before; 88% analyze data in greater detail; while 82% can now retain more of their data.

Sourced from: Analytics.CLUB #WEB Newsletter

Don’t Let your Data Lake become a Data Swamp

In an always-on, competitive business environment, organizations are looking to gain an edge through digital transformation. Consequently, many companies feel a sense of urgency to transform across all areas of their enterprise—from manufacturing to business operations—in the constant pursuit of continuous innovation and process efficiency.

Data is at the heart of all these digital transformation projects. It is the critical component that helps generate smarter, improved decision-making by empowering business users to eliminate gut feelings, unclear hypotheses, and false assumptions. As a result, many organizations believe building a massive data lake is the ‘silver bullet’ for delivering real-time business insights. In fact, according to a survey by CIO review from IDG, 75 percent of business leaders believe their future success will be driven by their organization’s ability to make the most of their information assets. However, only four percent of these organizations said they are set up to successfully benefit from their information with a data-driven approach.

Is your Data Lake becoming more of a hindrance than an enabler?

The reality is that all these new initiatives and technologies come with a unique set of generated data, which creates additional complexity in the decision-making process. To cope with the growing volume and complexity of data and alleviate IT pressure, some are migrating to the cloud.

But this transition—in turn—creates other issues. For example, once data is made more broadly available via the cloud, more employees want access to that information. Growing numbers and varieties of business roles are looking to extract value from increasingly diverse data sets, faster than ever—putting pressure on IT organizations to deliver real-time data access that serves the diverse needs of business users looking to apply real-time analytics to their everyday jobs. However, it’s not just about better analytics—business users also frequently want tools that allow them to prepare, share, and manage data.

To minimize tension and friction between IT and business departments, moving raw data to one place where everybody can access it sounded like a good move. The concept of the data lake, first coined by James Dixon in 2010, envisioned a large body of raw data in a natural state, where different users come to examine it, delve into it, or extract samples from it. However, organizations are increasingly beginning to realize that all the time and effort spent building massive data lakes has frequently made things worse due to poor data governance and management, resulting in the formation of so-called “Data Swamps”.

Bad data clogging up the machinery

The same way data warehouses failed to manage data analytics a decade ago, data lakes will undoubtedly become “Data Swamps” if companies don’t manage them correctly. Putting all your data in a single place won’t in and of itself solve a broader data access problem. Leaving data uncontrolled, un-enriched, unqualified, and unmanaged will dramatically hamper the benefits of a data lake, as it will still be usable only by a limited number of experts with a unique set of skills.

A successful system of real-time business insights starts with a system of trust. To illustrate the negative impact of bad data and bad governance, consider Dieselgate. The Dieselgate emissions scandal highlighted the difference between real-world and official air pollutant emissions data. In this case, the issue was not a problem of data quality but of ethics, since some car manufacturers misled the measurement system by injecting fake data. This resulted in fines for car manufacturers exceeding tens of billions of dollars and consumers losing faith in the industry. After all, how can consumers trust the performance of cars now that they know the system of measure has been intentionally tampered with?

The takeaway in the context of an enterprise data lake is that its value will depend on the level of trust employees have in the data contained in the lake. Failing to control data accuracy and quality within the lake will create mistrust amongst employees, seed doubt about the competency of IT, and jeopardize the whole data value chain, which then negatively impacts overall company performance.

A cloud data warehouse to deliver trusted insights for the masses

Leading firms believe governed cloud data lakes represent an adequate solution for overcoming some of these more traditional data lake stumbling blocks. The following four-step approach helps modernize a cloud data warehouse while providing better insight across the entire organization.

  1. Unite all data sources and reconcile them: Make sure the organization has the capacity to integrate a wide array of data sources, formats and sizes. Storing a wide variety of data in one place is the first step, but it’s not enough. Bridging data pipelines and reconciling them is another way to gain the capacity to manage insights. Verify the company has a cloud-enabled data management platform combining rich integration capabilities and cloud elasticity to process high data volumes at a reasonable price.
  2. Accelerate trusted insights to the masses: Efficiently manage data with cloud data integration solutions that help prepare, profile, cleanse, and mask data while monitoring data quality over time regardless of file format and size. When coupled with cloud data warehouse capabilities, data integration can enable companies to create trusted data for access, reporting, and analytics in a fraction of the time and cost of traditional data warehouses. (A minimal sketch of this profiling step follows the list.)
  3. Collaborative data governance to the rescue: The old schema of a data value chain where data is produced solely by IT in data warehouses and consumed by business users is no longer valid.  Now everyone wants to create content, add context, enrich data, and share it with others. Take the example of the internet and a knowledge platform such as Wikipedia where everybody can contribute, moderate and create new entries in the encyclopedia. In the same way Wikipedia established collaborative governance, companies should instill a collaborative governance in their organization by delegating the appropriate role-based, authority or access rights to citizen data scientists, line-of-business experts, and data analysts.
  4. Democratize data access and encourage users to be part of the Data Value Chain: Without making people accountable for what they’re doing, analyzing, and operating, there is little chance that organizations will succeed in implementing the right data strategy across business lines. Thus, you need to build a continuous Data Value Chain where business users contribute, share, and enrich the data flow in combination with a cloud data warehouse multi-cluster architecture that will accelerate data usage by load balancing data processing across diverse audiences.
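
As referenced in step 2, here is a minimal sketch of a profile/cleanse/mask pass using pandas. The tool choice, file name, and column are illustrative assumptions; the article names no specific library:

```python
# Profile, cleanse, and mask a raw file before it lands in the governed lake.
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical raw extract

# Profile: data-quality metrics worth monitoring over time.
profile = {
    "rows": len(df),
    "null_rate": df.isna().mean().round(3).to_dict(),
    "duplicate_rows": int(df.duplicated().sum()),
}
print(profile)

# Cleanse: drop exact duplicates and normalize an assumed 'email' column.
df = df.drop_duplicates()
df["email"] = df["email"].str.strip().str.lower()

# Mask: pseudonymize the identifier before sharing broadly.
df["email"] = pd.util.hash_pandas_object(df["email"], index=False)
```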

In summary, think of data as the next strategic asset. Right now, it’s more like hidden treasure at the bottom of many companies. Once modernized, shared, and processed, data will reveal its true value, delivering better and faster insights to help companies get ahead of the competition.

The post Don’t Let your Data Lake become a Data Swamp appeared first on Talend Real-Time Open Source Data Integration Software.

Source: Don’t Let your Data Lake become a Data Swamp by analyticsweek

Are You Headed for the Analytics Cliff?

When was the last time you updated your analytics—or even took a hard look? Don’t feel guilty if it’s been a while. Even when there are minor indicators of trouble, many companies put analytics projects on the backburner or implement service packs as a Band-Aid solution.

What companies don’t realize, however, is that once analytics begin to fail, time is limited. Application teams that are not quick to act risk losing valuable revenue and customers. Fortunately, if you know the signs, you can avoid a catastrophe.

>> Related: Blueprint to Modern Analytics <<

Are you headed for the analytics cliff? Keep an eye out for these clear indicators that your analytics is failing:

Sign #1: Long Queue of Ad Hoc Requests

Is your queue of ad hoc requests constantly getting longer? Most companies start their analytics journeys by adding basic dashboards and reports to their applications. This satisfies users for a short period of time, but within a few months, users inevitably want more. Maybe they want to explore data on their own or connect new data sources to the application.

Eventually, you end up with a long queue of ad hoc requests for new features and capabilities. When you ignore these requests, you risk unhappy customers and skyrocketing churn rates. If you’re struggling to keep up with the influx—much less get ahead of it—you may be heading for the analytics cliff.

Sign #2: Unhappy Users & Poor Engagement

Are your customers becoming more vocal about what they don’t like about your embedded analytics? Dissatisfied customers and, in turn, poor user engagement are a clear indication something is wrong. Ask yourself these questions to determine if your application is in trouble:

  • Basic adoption: How many users are regularly accessing the application’s dashboards and reports?
  • Stickiness: Are users spending more or less time in the embedded analytics?
  • The eject button: Have you seen an increase in users exporting data outside of your application to do their own analysis?

The more valuable your embedded dashboards and reports are, the more user engagement you’ll see. Forward-thinking application teams are adding value to their embedded analytics by going beyond basic capabilities.

Sign #3: Losing Customers to Competitors

When customers start abandoning your application for the competition, you’re fast approaching an analytics cliff. Whether you like it or not, you’re stacked against your competitors. If they’re innovating their analytics while yours stay stagnant, you’ll soon lose ground (if you haven’t already).

Companies that want to use embedded analytics as a competitive advantage or a source of revenue can’t afford to put off updates. As soon as your features start to lag behind the competition, you’ll be forced to upgrade just to catch up. And if your customers have started to churn, you’ll be faced with the overwhelming task of winning back frustrated customers or winning over new ones.

Sign #4: Revenue Impact

All the previous indicators were part of a slow and steady decline. By this point, you’re teetering on the edge of the analytics cliff. Revenue impact can come in many forms, including:

  • Declining win rate
  • Slowing pipeline progression
  • Decreasing renewals
  • Drop in sales of analytics modules

A two percent reduction in revenue can be an anomaly, or an indication of a downward trend. Some software companies make the mistake of ignoring such a small decrease. But even slowing rates of growth can be disastrous. According to a recent McKinsey study, “Grow Fast or Die Slow,” company growth yields greater returns and matters more than margins or cost structure. If a software company grows less than 20 percent annually, they have a 92 percent chance of failure. Revenue impact—no matter how small—is a sign that it’s definitely time to act.

To learn more, read our ebook: 5 Early Indicators Your Analytics Will Fail >

 

Source

Jul 04, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

[Image: Accuracy check] Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> The Methods UX Professionals Use (2018) by analyticsweek

>> How Data Science Is Fueling Social Entrepreneurship by analyticsweekpick

>> 10 Visualizations of Juicebox by analyticsweek

Wanna write? Click Here

[ FEATURED COURSE]

Baseball Data Wrangling with Vagrant, R, and Retrosheet


Analytics with the Chadwick tools, dplyr, and ggplot…. more

[ FEATURED READ]

Superintelligence: Paths, Dangers, Strategies


The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but … more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle that data can lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric. This is the metric that matters the most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation, and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q:How to detect individual paid accounts shared by multiple users?
A: * Check geographical region: Friday morning a login from Paris and Friday evening a login from Tokyo
* Bandwidth consumption: if a user goes over some high limit
* Counter of live sessions: if they have 100 sessions per day (about 4 per hour), that seems like more than one person can do
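
A minimal sketch of how these heuristics might be combined; every threshold and helper here is an illustrative assumption, not part of the answer:

```python
# Flag accounts whose login pattern or usage suggests credential sharing.
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_likely_shared(logins, daily_sessions, gb_per_day):
    """logins: list of (timestamp, lat, lon), sorted by time."""
    for (t1, la1, lo1), (t2, la2, lo2) in zip(logins, logins[1:]):
        hours = (t2 - t1).total_seconds() / 3600
        # Impossible travel between consecutive logins (faster than a jet).
        if hours > 0 and haversine_km(la1, lo1, la2, lo2) / hours > 900:
            return True
    return daily_sessions > 100 or gb_per_day > 50  # assumed high limits

logins = [
    (datetime(2019, 7, 5, 9, 0), 48.86, 2.35),     # Paris, Friday morning
    (datetime(2019, 7, 5, 19, 0), 35.68, 139.69),  # Tokyo, Friday evening
]
print(is_likely_shared(logins, daily_sessions=12, gb_per_day=3))  # True
```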

Source

[ VIDEO OF THE WEEK]

@TimothyChou on World of #IOT & Its #Future Part 1 #FutureOfData #Podcast

Subscribe to YouTube

[ QUOTE OF THE WEEK]

You can use all the quantitative data you can get, but you still have to distrust it and use your own intelligence and judgment. – Alvin Toffler

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with John Young, @Epsilonmktg

Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

Walmart handles more than 1 million customer transactions every hour, which is imported into databases estimated to contain more than 2.5 petabytes of data.

Sourced from: Analytics.CLUB #WEB Newsletter

The UX of Dating Websites & Apps

Online dating websites are one of the primary ways people find dates and even future spouses. These sites represent the bulk of a 3 billion dollar dating services industry.

In fact, around 30% of recent marriages started online, but it’s not like finding a date is as easy as filtering choices on Amazon and having them delivered via drone the next day (not yet at least).

Dating can be hard enough, but in addition to finding the right one, you also have to deal with things like Nigeria-based scams (and not the one with the Prince!).

Even when someone’s not directly trying to steal your money, can you really trust the profiles? By one estimate, over 80% of profiles studied contained at least one lie (usually about age, height, or weight).

Online dating isn’t all bad though. There is some evidence that the online dating sites actually do lead to marriages with slightly higher satisfaction and slightly lower separation rates. It could be due to the variety of people, those mysterious algorithms, or just a self-selection bias.

To understand the online dating user experience, we conducted a retrospective benchmark on seven of the most popular dating websites.

  • eHarmony (www.eharmony.com)
  • Hinge (mobile app)
  • Match.com (www.match.com)
  • OkCupid (www.okcupid.com)
  • Plenty of Fish (www.pof.com)
  • Tinder (www.tinder.com)
  • Zoosk (www.zoosk.com)

Full details are available in the downloadable report. Here are the highlights.

Study and Participant Details

We asked 380 participants who had used one of the seven dating websites in the past year to reflect on their most recent experience with the service.

Participants in the study answered questions about their prior experience, and desktop website users answered the 8-item SUPR-Q and the Net Promoter Score. In particular, we were interested in visitors’ attitudes toward the site, problems they had with the site, and reasons they used the website.

Measuring the Dating Website UX: SUPR-Q

The SUPR-Q is a standardized measure of the quality of a website’s user experience and is a good way to gauge users’ attitudes. It’s based on a rolling database of around 150 websites across dozens of industries.

Scores are percentile ranks and tell you how a website experience ranks relative to the other websites. The SUPR-Q provides an overall score as well as detailed scores for subdimensions of trust, usability, appearance, and loyalty. Its ease item can also predict an accurate SUS equivalent score.

The scores for the six dating websites (excluding the Hinge app) in the perception study were below average at the 43rd percentile (scoring better than 43% of the websites in the database). SUPR-Q scores for this group range from the 19th percentile (Plenty of Fish) to the 69th percentile (eHarmony).

Distrust and Disloyalty

The top improvement area for all the dating websites was trust. Across the websites, the average trust score was in the 23rd percentile. Participants expressed the highest trust toward eHarmony—but even its trust score was only slightly above average (54th percentile). Plenty of Fish had the lowest trust score (5th percentile), followed by Tinder (10th). These lower trust scores are consistent with the studies we found that cite false information and even scams.

The NPS reflects trust scores. eHarmony, the most trusted website, was also the most likely to be recommended with an NPS of 11% while the least trusted site, Plenty of Fish, had the lowest NPS (-46%). Overall, the average NPS score was a paltry -23%.
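
For context, an NPS like those above is computed from the standard 0-10 likelihood-to-recommend item: the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch with hypothetical ratings:

```python
# Net Promoter Score: % promoters minus % detractors.
def nps(ratings):
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 7, 6, 5, 3, 2, 6]  # hypothetical responses
print(f"NPS: {nps(ratings):+.0f}%")  # 3 promoters - 5 detractors -> -20%
```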

High Mobile App Usage

Not surprisingly, mobile app usage for dating services is high. 77% of participants reported visiting a dating service using a mobile app while only 61% said they log on using a desktop or laptop computer.

Most participants reported visiting their dating website on a desktop or laptop computer a few times per year, while mobile app users said they log on a few times a week or a few times a month. 19% of Match.com participants reported using the mobile app as much as once a day.
 

“The app is definitely more easy to use and intuitive, while the website seems more like an afterthought.” —Tinder user

 

“It’s one of the few instances of ‘websites turned into apps’ that I actually find value in.” —OkCupid user

 

Across the dating services, over half of participants reported they were looking for a serious relationship and just under half said they were looking for a casual relationship.

Reasons for using the dating services were similar for the website and app, except that 42% of desktop website users said they were looking for a friendship while only 29% of mobile app users were; this was a statistically significant difference.

Most Lack Dating Success

While over half of participants reported visiting dating sites to find a serious relationship, only 22% said they’ve actually found a relationship through the service. Specifically, OkCupid and Tinder users had the highest dating success in the group; 35% of OkCupid and 30% of Tinder users reported finding a relationship through the service.

Figure 1: Percent of respondents by site who report being in a relationship with a person they met on the website or app in the last year.

 

“I liked answering a lot of questions which would increase the match percentages I’d be able to find.” —OkCupid user

 

Only 9% of Zoosk users said they have found a relationship using the service. Zoosk users’ top issues with the site were dishonest users and fake profiles, poor matches, and active users who don’t respond.

 

“May not be great for a serious relationship.” —Zoosk user

 

“I keep getting referrals that are far outside my travel zone.” —Zoosk user

Dating Scams and Dishonest Users

Participants reported worrying about dishonest users and scams. On average, only 33% agreed that other users provide honest information about themselves and 41% said they are afraid of dating scams. These were the top issues reported by OkCupid and Plenty of Fish users.

 

“There are tons of fake/spam profiles.” —OkCupid user

 

“More scams than anything.” —Plenty of Fish user

 

“There are many suspicious profiles that seem like catfishing.” —Plenty of Fish user

 

Providing honest information on the site was found to be a significant key driver and explains about 17% of the dating site user experience. Other key drivers included brand attitude (22%), nicely presented profiles (10%), ease of creating and managing profiles (9%), intuitive navigation (9%), and ease of learning about other people (8%).

Figure 2: Key drivers of the online dating website user experience (SUPR-Q scores).

Together, these six components are key drivers of the dating website user experience and account for 75% of the variation in SUPR-Q scores.
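
A key-driver analysis of this kind is typically a regression of the outcome on standardized driver items, with explained variance read off R². Here is a minimal sketch on synthetic data; the weights merely echo the percentages above, and none of the study's raw data is used:

```python
# Key-driver analysis sketch: six driver items regressed on the outcome.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 380  # matches the study's sample size

shares = np.array([0.22, 0.17, 0.10, 0.09, 0.09, 0.08])  # variance shares
drivers = rng.normal(size=(n, 6))  # standardized driver items
suprq = drivers @ np.sqrt(shares) + rng.normal(scale=np.sqrt(0.25), size=n)

model = LinearRegression().fit(drivers, suprq)
print(f"Variance explained (R^2): {model.score(drivers, suprq):.2f}")  # ~0.75
```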

 

Safety and Protection Resources

Across the dating services, 18% of participants reported having an issue with another user in the past. Plenty of Fish had the highest instance of issues with other users at 40%; however, 74% of those participants said there were resources available on the site to deal with this.

 

“I blocked the person because he was being very disrespectful.” —Plenty of Fish user

 

“I blocked the person who was harassing me.” —Plenty of Fish user

 

Using a dating service comes with obvious safety concerns and it’s felt by a fair number of users. Across the websites, only 54% of participants agreed that they feel safe using the site. Tinder had the lowest agreement to this item, only 38%. 

 

“There is not more protection from the terrible men that are on there.” —Plenty of Fish user

 

“They do not seem to screen out people with criminal backgrounds. Found local sex offenders in this app. It is also difficult to unsubscribe.” —Plenty of Fish user

 

“Somebody posted improper material in profile and I reported it to admin.” —OkCupid user

 

Plenty of Fish had the highest rate of unwanted images, with 61% of women reporting at least one unwanted image compared to 35% of men. eHarmony and Tinder had similar but slightly lower unwanted image rates.

 

Poor Matching Algorithms

While the right algorithm can help create a match, participants reported algorithms often fell short. Less than half of participants on Match.com, Plenty of Fish, Tinder, and Zoosk agreed with the statement “the site is good at matching me with people” and only 14% of Tinder users said the site asks meaningful questions about users.

 

“Sometimes it’s hard to sort the matches by compatibility.” —Match.com user

 

“I find that its match system doesn’t help a great deal in finding whether someone is well suited for you, and it is rather glitchy, with people appearing after thumbing them down.” —OkCupid user

 

“Poor quality of fish on the site.” —Plenty of Fish user

Full details are available in the downloadable report.

Summary

An analysis of the user experience of seven dating websites found:

  1. Dating is hard; the user experience is probably harder. Current users find the dating website experience below average, with SUPR-Q scores falling at the 43rd percentile. eHarmony was the overall winner for the retrospective study at the 69th percentile, with Plenty of Fish scoring the lowest at the 19th. The top improvement area across the sites was trust. eHarmony also had the highest NPS (11%) while Plenty of Fish had the lowest (-46%).
  2. Participants prefer using mobile apps. 77% of participants reported using the dating service mobile app. The majority of participants reported visiting the dating services a few times per week on their mobile device. 19% of Match.com users said they use the app every day. Participants reported using the app more frequently than the website for each of the dating services.
  3. High hopes with modest success. Over half of participants reported visiting dating sites to find a serious relationship, but only 22% said they have found a relationship through the service. Specifically, OkCupid and Tinder had the highest dating success; 35% of OkCupid and 30% of Tinder users reported finding a relationship. Only 9% of Zoosk users said they found a relationship using the site.
  4. Users are concerned about dating scams and dishonest users. Participants reported worries regarding scams on the dating sites. On average, only 33% agreed that other users provide honest information about themselves and 41% said they are afraid of dating scams. Providing honest information on the site was found to be a significant key driver and explains about 17% of the dating site user experience.


Source by analyticsweek

2018 Trends in Cloud Computing: The Data Layer

The cloud has proven the most effective means of handling the influx of big data typifying the IT concerns of contemporary organizations. A recap of The Top 10 Technology Trends to Watch: 2018 to 2020 from Forrester states, “The public cloud is a juggernaut that is reinventing computing and the high-tech industry itself…applications are built faster in the public cloud, where they can scale, reach customers, and connect to other apps.”

Although many of the drivers for the burgeoning adoption rates of the cloud have not changed, a shift in focus of their applicability will emerge in earnest during the coming year. The Internet of Things and Artificial Intelligence will continue to push organizations into the cloud, although they’ll be influenced more by their respective edge capabilities and intelligent automation of bots.

Moreover, there have been a number of developments related to security and privacy that are mitigating these conventional inhibitors of cloud deployments. The viability of the public cloud will also contend with advances in hybrid cloud models. The hybrid trend may well prove the most prescient of the cloud’s impact in the near future, as it transitions into a unified access mechanism to control all data—not necessarily where they are, but where they can be used.

“The cloud is obviously going to continue to grow as the default delivery model for new,” NTT DATA Services Vice President of ERP Application Services Simon Spence acknowledged. “Its evolution or journey is slowly making its way through the entire packaged applications space.”

Security and Privacy
A number of well-publicized security breaches have renewed the emphasis on cyber security, highlighting a concern that has traditionally hampered cloud adoption rates. In response to these events and to the ongoing need for secure, private data, a number of tools have appeared to reinforce security so that “security is likely better in the cloud than it has been currently in the existing world,” Spence remarked. The key to implementing such security is to facilitate it in layers so that if one is breached there is another to reinforce it. This approach is a key benefit to utilizing public cloud providers. “All of a sudden, instead of somebody trying to attack you, they have to attack Amazon [Web Services], and then they have to go find you at Amazon,” Spence said. “That’s going to be pretty difficult.” Each instance of hybridization in which organizations use a private cloud within a public cloud provides an additional layer of protection.

At the Data Layer
It’s also essential for organizations to protect the actual data instead of simply relying on external measures for cloud security. Forms of tokenization and encryption grant these benefits, as do certain aspects of blockchain. There are also security tools Forbes referred to as cloud data protection, which encrypt “sensitive data before it goes to the cloud with the enterprise (not the cloud provider) maintaining the keys. Protects from unwelcomed government surveillance and helps remove some of the biggest impediments to cloud adoption—security, compliance, and privacy concerns.” Protecting data where they reside is also the main concept for implementing security with semantic standards. Organizations can fortify data by adding triple attributes to them, which consist of “arbitrary key value pairs for every triple,” Franz CEO Jans Aasman noted. “You can build any security model by starting with the right key value pairs, and then on top apply security filters.” Implementing these measures before replicating data to the cloud—or even for data generated there—makes these off-premise deployments much more secure.
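
A minimal sketch of the client-side pattern just described: encrypt before data leaves for the cloud, with the enterprise rather than the provider holding the key. The Fernet recipe from the Python cryptography package is an illustrative choice; the article names no specific library:

```python
# Client-side encryption: the cloud provider only ever sees ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # stays in the enterprise's key management system
f = Fernet(key)

ciphertext = f.encrypt(b"sensitive customer record")
# ... upload `ciphertext` to the cloud provider; the key never leaves ...
assert f.decrypt(ciphertext) == b"sensitive customer record"
```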

Hybrid Architecture
Hybrid cloud models take the form of any combination of public clouds, private clouds, and on-premise deployments. They either augment existing physical infrastructure with the cloud’s or the infrastructure of a public cloud with a private one. The propagation of hybrid models in the coming year puts an even greater emphasis on the data involved and the architecture required for accessing that data. According to Spence, “Really what becomes the glue and what you have to focus on then becomes data. You have to make sure you can integrate the data across public, private [clouds], and on-premises.” One of the chief concerns related to the proper architecture of hybrid models spanning on-premise and cloud workflows is ensuring that the infrastructure—and how it operates—is aligned.

The most pressing need for hybrid clouds may be applications involving edge computing in the IoT. In this case, Gartner indicated “organizations expect to apply the same operating style with their private infrastructure as in the public cloud. The operation of private infrastructure therefore has to evolve to take on the same model as public cloud services.” In 2018 organizations with hybrid models will attempt to keep pace with evolving cloud infrastructure via options for infrastructure automation. These include tools such as Infrastructure as Code (IaC) and containers, which prime organizations for automation. “The need to get that enterprise architecture done is more critical in today’s world than it ever was before because of the multiple different software providers and…delivery mechanisms,” Spence added.

Data-as-a-Service
DaaS should expand its relevance to the enterprise in the next couple of years, partly due to the proliferation of AI services leveraging advanced machine learning techniques in the cloud. Conventionally, DaaS capabilities have garnered less attention than more flagship cloud offerings such as Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS). The preceding section on hybrid architecture implies the vitality of IaaS to organizations leveraging the cloud. However, DaaS usage is sure to increase in the coming years, partially because of the AI options found in the cloud. Gartner defines DaaS as “the data and services that will drive advanced analytics that will inform decision making across the business.” The wealth of AI-based SaaS options should drive the need for DaaS, since the latter can provide the sort of unstructured big data the former is adept at analyzing.

The Internet of Things
The IoT will solidify itself as one of the most meaningful enterprise applications in 2018. What will continue to expand next year is the propensity to facilitate an edge computing model with it. Computing at the cloud’s edge reduces demands on bandwidth and centralized models by transmitting the results of analytics into centralized locations, as opposed to the entirety of raw data for computations. This capability is particularly useful when dealing with the scale of the IoT’s data. “That’s a lot of data,” affirmed Biotricity CEO Waqaas Al-Siddiq. “You don’t want to download that and put it into your servers. You want it in the cloud so you can access it, cross-reference it, and pull down the information that you want.” Edge computing is gaining credence partly due to the intersection of the IoT and AI, and partly due to its inherent boons. The latter include quicker response times, improved customer satisfaction, and less network traffic. The former pertains to advancements in smart homes and smart cities, as well as increasing adoption rates of AI. The synthesis of these developments has resulted in today’s situation, in which Gartner noted that “specific use cases of AI moving ‘to the edge’ are already emerging.”
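
A minimal sketch of the edge pattern described above, with all names and numbers illustrative: summarize readings locally and transmit only the analytic result rather than the raw stream:

```python
# Edge computing sketch: reduce a raw sensor window to a tiny cloud payload.
import json
import statistics

def summarize_at_edge(readings):
    """Collapse a window of IoT readings into summary statistics."""
    return {
        "count": len(readings),
        "mean": round(statistics.fmean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }

raw_window = [71.2, 71.5, 70.9, 88.4, 71.1]  # e.g., one minute of sensor data
payload = json.dumps(summarize_at_edge(raw_window))
print(payload)  # a few dozen bytes upstream instead of the full raw stream
```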

Another effect of this trend is the expansion of IoT use cases, which are coming to include facets of the management of supply chain networks, workflows, and inventory control. The growing reliance on the IoT will also force organizations to modernize their business applications. According to Spence, application modernization is “heavily driven by business needs focused on allowing companies to meet their new, changing business strategies. In a lot of those, the business driven stuff, we see them tied to ‘I need to put IoT in place’, or ‘I’m looking to do digital transformation’.”

Long Term Trajectory
The long term trajectory of cloud computing is certain. It will connect the most salient aspects of modern data management as the medium in which aspects of big data, AI, and the IoT are accessed. That access layer will also involve hybrid architecture with on-premise deployments, as well as data stemming from the cloud via DaaS and other service models. The viability of the cloud’s centrality to data management, of course, is predicated on the refined security models which underpin it.

The next step in the cloud’s role, it seems, directly correlates to that centrality. By tying together all enterprise assets—those on-premises, in public and private clouds, even from multiple providers—in a single access layer, it can enable possibilities which were previously inconceivable. The most significant cloud trend, then, is the one in which cataloguing techniques function as centralized controllers to make distributed computing environments local for users. Such a global fabric of data should make considerable strides in 2018 to becoming a widespread reality.

“It’s kind of like in the past you might have gone and got a house from a particular software provider; they designed it, they built it for you, and you went to one contractor,” Spence said. “Now in today’s world, you’re really more the general contractor and you’re picking and choosing the various subcontractors that you’re bringing in and you’ve got to make sure those things tie together.”

Source: 2018 Trends in Cloud Computing: The Data Layer by jelaniharper

Jun 27, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

[Image: Conditional Risk] Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Assess Your Data Science Expertise by bobehayes

>> Six Practices Critical to Creating Value from Data and Analytics [INFOGRAPHIC] by bobehayes

>> How to Choose a Database for Your Predictive Project by analyticsweek

Wanna write? Click Here

[ FEATURED COURSE]

Deep Learning Prerequisites: The Numpy Stack in Python


The Numpy, Scipy, Pandas, and Matplotlib stack: prep for deep learning, machine learning, and artificial intelligence… more

[ FEATURED READ]

Storytelling with Data: A Data Visualization Guide for Business Professionals


Storytelling with Data teaches you the fundamentals of data visualization and how to communicate effectively with data. You’ll discover the power of storytelling and the way to make data a pivotal point in your story. Th… more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption. One of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up to create awareness within the organization. An aware organization goes a long way in helping get quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q:Given two fair dices, what is the probability of getting scores that sum to 4? to 8?
A: * Total: 36 combinations
* Of these, 3 involve a score of 4: (1,3), (3,1), (2,2)
* So: 3/36=1/12
* Considering a score of 8: (2,6), (3,5), (4,4), (6,2), (5,3)
* So: 5/36
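
A quick brute-force check of this answer (a minimal sketch):

```python
# Enumerate all 36 equally likely outcomes of two fair dice.
from itertools import product

rolls = list(product(range(1, 7), repeat=2))
p4 = sum(a + b == 4 for a, b in rolls) / len(rolls)
p8 = sum(a + b == 8 for a, b in rolls) / len(rolls)
print(p4, p8)  # 0.0833... (3/36 = 1/12) and 0.1388... (5/36)
```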

Source

[ VIDEO OF THE WEEK]

Understanding #Customer Buying Journey with #BigData

Subscribe to YouTube

[ QUOTE OF THE WEEK]

I’m sure the highest-capacity storage device will not be enough to record all our stories; because every time with you is very valuable data.

[ PODCAST OF THE WEEK]

Discussing Forecasting with Brett McLaughlin (@akabret), @Akamai

Subscribe 

iTunes | Google Play

[ FACT OF THE WEEK]

In 2015, a staggering 1 trillion photos will be taken and billions of them will be shared online. By 2017, nearly 80% of photos will be taken on smartphones.

Sourced from: Analytics.CLUB #WEB Newsletter