Jan 17, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Tour of Accounting  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ NEWS BYTES]

>> Innovation is empowering Taiwan’s cyber security capabilities – Networks Asia Under cyber security

>> How companies can detect cyber attacks early to minimise damage – Business Matters Under cyber security

>> Global Predictive and Prescriptive Analytics Market 2018: Expansions, Key Drivers, Trends, Challenges, And Forecast … – Market News Today Under Prescriptive Analytics

More NEWS? Click Here

[ FEATURED COURSE]

Intro to Machine Learning


Machine Learning is a first-class ticket to the most exciting careers in data analysis today. As data sources proliferate along with the computing power to process them, going straight to the data is one of the most stra… more

[ FEATURED READ]

Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking


Written by renowned data science experts Foster Provost and Tom Fawcett, Data Science for Business introduces the fundamental principles of data science, and walks you through the “data-analytic thinking” necessary for e… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle that data can lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric: the metric that matters most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation, and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q: Explain likely differences between administrative datasets and datasets gathered from experimental studies. What problems are likely to be encountered with administrative data? How do experimental methods help alleviate these problems, and what problems do they bring?
A: Advantages of administrative data:
– Cost
– Large coverage of the population
– Captures individuals who may not respond to surveys
– Regularly updated, allowing consistent time series to be built up

Disadvantages:
– Restricted to data collected for administrative purposes (limited to administrative definitions; for instance, income may be recorded for a married couple rather than for individuals, which would be more useful)
– Lack of researcher control over content
– Missing or erroneous entries
– Quality issues (addresses may not be updated, or only a postal code is provided)
– Data privacy issues
– Underdeveloped theories and methods (sampling methods…)

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @ScottZoldi, @FICO

Subscribe to YouTube

[ QUOTE OF THE WEEK]

With data collection, ‘the sooner the better’ is always the best answer. – Marissa Mayer

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Dr. Nipa Basu, @DnBUS

Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

The largest AT&T database boasts titles including the largest volume of data in one unique database (312 terabytes) and the second largest number of rows in a unique database (1.9 trillion), which comprises AT&T’s extensive calling records.

Sourced from: Analytics.CLUB #WEB Newsletter

@TimothyChou on World of #IOT & Its #Future Part 1 #FutureOfData #Podcast

Youtube: https://www.youtube.com/watch?v=ezNX6XYozIc

In this first part of a two-part podcast, @TimothyChou discussed the Internet of Things landscape. He laid out how the internet has always really been an internet of things rather than an internet of people, and shed light on IoT as it spreads across the themes of things, connect, collect, learn, and do workflows. He also built an interesting case for achieving precision before introducing optimality.

 

Timothy’s Recommended Read:
Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark http://amzn.to/2Cidyhy
Zone to Win: Organizing to Compete in an Age of Disruption by Geoffrey A. Moore http://amzn.to/2Hd5zpv

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Timothy’s BIO:
Timothy Chou’s career spans academia, successful (and not so successful) startups, and large corporations. He was one of only a few people to hold the President title at Oracle. As President of Oracle On Demand he grew the cloud business from its very beginning; today that business is over $2B. He wrote about the move of applications to the cloud in 2004 in his first book, “The End of Software”. Today he serves on the board of Blackbaud, a nearly $700M vertical application cloud service company.

After earning his PhD in EE at the University of Illinois he went to work for Tandem Computers, one of the original Silicon Valley startups. Had he understood stock options he would have joined earlier. He’s invested in and been a contributor to a number of other startups, some you’ve heard of like Webex, and others you’ve never heard of but were sold to companies like Cisco and Oracle. Today he is focused on several new ventures in cloud computing, machine learning and the Internet of Things.

About #Podcast:
The #FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners onto the show to discuss their journeys in creating the data-driven future.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Originally Posted at: @TimothyChou on World of #IOT & Its #Future Part 1 #FutureOfData #Podcast by admin

20 Best Practices for Customer Feedback Programs: Applied Research

Below is the final installment of the 20 Best Practices for Customer Feedback Programs. Today’s post covers best practices in Applied Research.

Figure 5. Common types of linkages among disparate data sources

Applied Research Best Practices

Customer-focused research using customer feedback data can provide additional insight into the needs of the customer base and increase the overall value of the customer feedback program. This research extends well beyond the information gained from typical reporting tools that summarize customer feedback with basic descriptive statistics.

Loyalty leaders regularly conduct applied research using their customer feedback data. Typical research projects include creating customer-centric business metrics, building incentive compensation programs around customer metrics, and establishing training criteria that have a measured impact on customer satisfaction. Sophisticated research programs require advanced knowledge of research methods and statistics. Deciphering signal from noise in the data requires more than the inter-ocular test (eyeballing the data).

Figure 6. Data model for financial linkage analysis

Loyalty leaders link their customer feedback data to other data sources (see Figure 5 for financial, operational, and constituency linkages). Once the data are merged (see Figure 6 for the financial linkage data model; a minimal merge-and-correlate sketch also appears after the list below), analysis can be conducted to help us understand the causes (operational, constituency) and consequences (financial) of customer satisfaction and loyalty. Loyalty leaders can use the results of these types of studies to:

  1. Support business case of customer feedback program (financial linkage)
  2. Identify objective, operational metrics that impact customer satisfaction and manage employee performance using these customer-centric metrics (operational linkage)
  3. Understand how employees and partners impact customer satisfaction to ensure proper employee and partner relationship management (constituency linkage)
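
To make the linkage idea concrete, here is a minimal Python/pandas sketch of a merge-and-correlate linkage analysis. It is an illustration rather than part of the original article: the table and column names (customer_id, satisfaction, annual_revenue, and so on) are hypothetical placeholders for whatever keys and metrics your feedback, operational, and financial systems actually expose.

```python
import pandas as pd

# Hypothetical extracts keyed by a shared customer_id: feedback scores,
# operational metrics, and financial outcomes.
feedback = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "satisfaction": [9, 6, 8, 4],
    "likelihood_to_recommend": [10, 5, 9, 3],
})
operations = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "avg_resolution_hours": [4.0, 22.5, 6.0, 30.0],
})
financials = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "annual_revenue": [120_000, 45_000, 98_000, 30_000],
})

# Merge the disparate sources into one "linked" analysis table.
linked = feedback.merge(operations, on="customer_id").merge(financials, on="customer_id")

# Simple linkage analysis: correlate operational and financial metrics
# with the customer feedback metric.
print(linked.drop(columns="customer_id").corr()["satisfaction"])
```

In practice the merge keys, the level of aggregation (account, region, time period), and the statistical model would follow the data model sketched in Figure 6.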

A list of best practices in Applied Research appears in Table 6.

Table 6. Best Practices in Applied Research
15. Ensure results from customer feedback collection processes are reliable, valid and useful: Conduct a validation study of the customer feedback program. Verify the reliability, validity and usefulness of customer feedback metrics to ensure you are measuring the right things. This assessment needs to be one of the first research projects conducted to support (and dispute any challenges regarding) the use of these customer metrics to manage the company. This research will help you create summary statistics for use in executive reporting and company dashboards; summary scores are more reliable and provide a better basis for business decisions than individual survey questions alone.
16. Identify linkage between customer feedback metrics and operational metrics: Demonstrate that operational metrics are related to customer feedback metrics so that these operational metrics can be used to manage employees. Additionally, because of their reliability and specificity, these operational metrics are good candidates for use in employee incentive programs.
17. Regularly conduct applied customer-focused research: Build a comprehensive research program using the customer-centric metrics (and other business metrics) to get deep insight into business processes. Customer feedback can be used to improve all phases of the customer lifecycle (marketing, sales, and service).
18. Identify linkage between customer feedback metrics and business metrics: Illustrate that financial metrics (e.g., profit, sales, and revenue) are related to customer feedback metrics. Oftentimes, this type of study can be used as a business case to demonstrate the value of the customer feedback program.
19. Identify linkage between customer feedback metrics and other constituencies’ attitudes: Identify factors of constituency attitudes (e.g., employee and partner satisfaction) that are linked to customer satisfaction/loyalty. Use these insights to properly manage employee and partner relationships to ensure high customer loyalty. Surveying all constituencies in the company ecosystem helps ensure all parties are focused on the customers and their needs.
20. Understand customer segments using customer information: Compare customer groups to identify key differences among groups on customer feedback metrics (e.g., satisfaction and loyalty). This process helps identify best practices internally among customer segments.
Copyright © 2011 Business Over Broadway
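
As a companion to best practice 15 (summary scores are more reliable than individual questions), here is a small, hedged illustration of estimating internal-consistency reliability with Cronbach's alpha (Cronbach, 1951) in Python. The ratings matrix is invented for illustration; the only point is that a set of related items with high alpha can reasonably be averaged into a summary score for executive reporting.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical ratings on three related loyalty questions (0-10 scale).
ratings = np.array([
    [9, 10, 9],
    [6,  5, 6],
    [8,  9, 8],
    [3,  4, 2],
    [7,  7, 8],
])

# A high alpha suggests the items can be combined into one summary score.
print(round(cronbach_alpha(ratings), 2))
```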

Summary

Loyalty leaders are excellent examples of customer-centric companies. Compared to their loyalty-lagging counterparts, loyalty-leading companies embed customer feedback throughout the entire company, from top to bottom. Loyalty leaders use customer feedback to set the vision and manage their business; they also integrate the feedback into daily business processes and communicate all processes, goals and results of the customer program to the entire company. Finally, they integrate different business data (operational, financial, customer feedback) to reveal deep customer insights through in-depth research.

Take the Customer Feedback Programs Best Practices Survey

You can take the best practices survey to receive free feedback on your company’s customer feedback program. This self-assessment survey assesses the extent to which your company adopts best practices throughout its program. Go here to take the free survey: http://businessoverbroadway.com/resources/self-assessment-survey.

References

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334.

Hayes, B.E. (2011). Lessons in loyalty. Quality Progress, March, 24-31.

Hayes, B.E., Goodden, R., Atkinson, R., Murdock, F. & Smith, D. (2010). Where to Start: Experts weigh in on what all of us can learn from Toyota’s challenges. Quality Progress, April, 16-23.

Hayes, B. E. (2009). Beyond the ultimate question: A systematic approach to improve customer loyalty. Quality Press. Milwaukee, WI.

Hayes, B. E. (2008a). Measuring customer satisfaction and loyalty: Survey design, use and statistical analysis methods (3rd ed.). Quality Press. Milwaukee, WI.

Hayes, B. E. (2008b). Customer loyalty 2.0: The Net Promoter Score debate and the meaning of customer loyalty, Quirk’s Marketing Research Review, October, 54-62.

Hayes, B. E. (2008c). The true test of loyalty. Quality Progress. June, 20-26.

Keiningham, T. L., Cooil, B., Andreassen, T.W., & Aksoy, L. (2007). A longitudinal examination of net promoter and firm revenue growth. Journal of Marketing, 71 (July), 39-51.

Morgan, N.A. & Rego, L.L. (2006). The value of different customer satisfaction and loyalty metrics in predicting business performance. Marketing Science, 25(5), 426-439.

Nunnally, J. M. (1978). Psychometric Theory, Second Edition. New York, NY. McGraw-Hill.

Reichheld, F. F. (2003). The One Number You Need to Grow. Harvard Business Review, 81 (December), 46-54.

Reichheld, F. F. (2006). The ultimate question: driving good profits and true growth. Harvard Business School Press. Boston.

 

 

Originally Posted at: 20 Best Practices for Customer Feedback Programs: Applied Research

Jan 10, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Big Data knows everything  Source

[ AnalyticsWeek BYTES]

>> Data Modeling Tomorrow: Self-Describing Data Formats by jelaniharper

>> Why has R, despite quirks, been so successful? by analyticsweekpick

>> Transcending the Limits of Analytics with Artificial Intelligence by jelaniharper

Wanna write? Click Here

[ NEWS BYTES]

>> Costa Rica to Modernize Power Grid with Itron IoT Solution – IoT Evolution World (blog) Under IOT

>> Recruiting in the age of the cyber security skills gap: challenges to overcome – Information Age Under cyber security

>> Big data used to predict the future – Science Daily Under Big Data

More NEWS? Click Here

[ FEATURED COURSE]

Machine Learning


6.867 is an introductory course on machine learning which gives an overview of many concepts, techniques, and algorithms in machine learning, beginning with topics such as classification and linear regression and ending … more

[ FEATURED READ]

How to Create a Mind: The Secret of Human Thought Revealed


Ray Kurzweil is arguably today’s most influential—and often controversial—futurist. In How to Create a Mind, Kurzweil presents a provocative exploration of the most important project in human-machine civilization—reverse… more

[ TIPS & TRICKS OF THE WEEK]

Finding success in your data science? Find a mentor
Yes, most of us don’t feel the need, but most of us really could use one. Because most data science professionals work in isolation, getting an unbiased perspective is not easy, and it is often hard to see how a data science career will progress. A network of mentors addresses these issues: it gives data professionals an outside perspective and an unbiased ally. It is extremely important for successful data science professionals to build a mentor network and use it throughout their careers.

[ DATA SCIENCE Q&A]

Q: Is it better to design robust or accurate algorithms?
A: A. The ultimate goal is to design systems with good generalization capacity, that is, systems that correctly identify patterns in data instances not seen before
B. The generalization performance of a learning system strongly depends on the complexity of the model assumed
C. If the model is too simple, the system can only capture the actual data regularities in a rough manner. In this case, the system has poor generalization properties and is said to suffer from underfitting
D. By contrast, when the model is too complex, the system can identify accidental patterns in the training data that need not be present in the test set. These spurious patterns can be the result of random fluctuations or of measurement errors during the data collection process. In this case, the generalization capacity of the learning system is also poor. The learning system is said to be affected by overfitting
E. Spurious patterns, which are only present by accident in the data, tend to have complex forms. This is the idea behind the principle of Occam’s razor for avoiding overfitting: simpler models are preferred if more complex models do not significantly improve the quality of the description of the observations
Quick response: Occam’s Razor. It depends on the learning task; choose the right balance
F. Ensemble learning can help balance bias and variance (several weak learners together = a strong learner)
Source
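
The underfitting/overfitting/ensemble points above can be seen in a few lines of scikit-learn. This is an illustrative sketch rather than part of the quoted answer: a depth-1 tree underfits, an unconstrained tree tends to overfit (large train/test gap), and a random forest ensemble usually lands in between by trading a little bias for much lower variance. The dataset here is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification problem with some noise.
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "too simple (underfits)": DecisionTreeClassifier(max_depth=1, random_state=0),
    "too complex (overfits)": DecisionTreeClassifier(max_depth=None, random_state=0),
    "ensemble (balances bias/variance)": RandomForestClassifier(n_estimators=200,
                                                                random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    # Compare training accuracy with held-out accuracy to see the gap.
    print(f"{name}: train={model.score(X_train, y_train):.2f} "
          f"test={model.score(X_test, y_test):.2f}")
```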

[ VIDEO OF THE WEEK]

Discussing #InfoSec with @travturn, @hrbrmstr(@rapid7) @thebearconomist(@boozallen) @yaxa_io

Subscribe to YouTube

[ QUOTE OF THE WEEK]

We chose it because we deal with huge amounts of data. Besides, it sounds really cool. – Larry Page

[ PODCAST OF THE WEEK]

@chrisbishop on futurist’s lens on #JobsOfFuture #FutureofWork #JobsOfFuture #Podcast

Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, not including using big data to reduce fraud and errors and boost the collection of tax revenues.

Sourced from: Analytics.CLUB #WEB Newsletter

Unmasking the Problem with Net Scores and the NPS Claims

I wrote about net scores last week and presented evidence showing that net scores are ambiguous and unnecessary. Net scores are created by taking the difference between the percent of “positive” scores and the percent of “negative” scores. Net scores were made popular by Fred Reichheld and Satmetrix in their work on customer loyalty measurement. Their Net Promoter Score is the difference between the percent of “promoters” (ratings of 9 or 10) and the percent of “detractors” (ratings of 0 through 6) on the question, “How likely would you be to recommend <company> to your friends/colleagues?”

This resulting Net Promoter Score is used to gauge the level of loyalty for companies or customer segments. In my post, I presented what I believe to be sound evidence that mean scores and top/bottom box scores are much better summary indices than net scores. Descriptive statistics like the mean and standard deviation provide important information that describe the location and spread of the distribution of responses. Also, top/bottom box scores provide precise information about the size of customer segments. Net scores do neither.
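
Here is a small, illustrative Python sketch of the ambiguity argument (mine, not from the original analysis): two hypothetical rating distributions can produce exactly the same net score while their means and promoter/detractor segment sizes tell quite different stories.

```python
import numpy as np

def summarize(ratings):
    """NPS, mean, and top/bottom box percentages for 0-10 recommend ratings."""
    r = np.asarray(ratings)
    promoters = np.mean(r >= 9)   # ratings of 9 or 10
    detractors = np.mean(r <= 6)  # ratings of 0 through 6
    return {
        "NPS": round(100 * (promoters - detractors)),
        "mean": round(r.mean(), 2),
        "top box %": round(100 * promoters),
        "bottom box %": round(100 * detractors),
    }

# Two hypothetical samples of likelihood-to-recommend ratings.
sample_a = [10, 10, 9, 7, 7, 6, 5, 3]   # 3 promoters, 3 detractors
sample_b = [9, 9, 9, 8, 8, 0, 0, 1]     # 3 promoters, 3 detractors
print(summarize(sample_a))
print(summarize(sample_b))
```

Both samples yield a net score of 0, yet sample B has a much lower mean; that difference is exactly the information a difference score throws away.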

Rob Markey, the co-author of the book, The Ultimate Question 2.0  (along with Fred Reichheld), tweeted about last week’s blog post.

Rob Markey’s Tweet

I really am unclear about how Mr. Markey believes my argument is supporting (in CAPS, mind you) the NPS point of view. I responded to his tweet but never received a clarification from him.

So, I present this post as an open invitation for Mr. Markey to explain how my argument regarding the ambiguity of the NPS supports their point of view.

One More Thing

I never deliver arguments shrouded behind a mask of criticism.  While my analyses focused on the NPS, my argument against net scores (difference scores) applies to any net score; I just happened to have data on the recommend question, a common question used in customer surveys. In fact, I even ran the same analyses (e.g., comparing means to net scores) on other customer loyalty questions (e.g., overall sat, likelihood to buy), but I did not present those results because they were highly redundant to what I found using the recommend question. The problem of difference scores applies to any customer metric.

I have directly and openly criticized the research on which the NPS is based in my blog posts, articles, and books. I proudly stand behind my research and critique of the Net Promoter Score. Other mask-less researchers/practitioners have also voiced concern about the “research” on which the NPS is based. See Vovici’s blog post for a review. Also, be sure to read Tim Keiningham’s interview with Research Magazine in which he calls the NPS claims “nonsense”. Yes. Nonsense.

Just to be clear, “Nonsense” does not mean “Awesome.”

Source: Unmasking the Problem with Net Scores and the NPS Claims by bobehayes

@JohnNives on ways to demystify AI for enterprise #FutureOfData

Youtube: https://www.youtube.com/watch?v=daiVHrsZQMU
iTunes: http://math.im/itunes

In this podcast @JohnNives discusses ways to demystify AI for the enterprise. He shares his perspective on how businesses should engage with AI and some of the best practices and considerations for adopting AI in their strategic roadmap. This podcast is great for anyone seeking to learn about ways to adopt AI in the enterprise landscape.

John’s Recommended Listen:
FutureOfData Podcast http://math.im/itunes
War and Peace by Leo Tolstoy, narrated by Frederick Davidson (Blackstone Audio, Inc.) https://amzn.to/2w7ObkI

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Jean’s BIO:
Jean-Louis (John) Nives serves as Chief Digital Officer and the Global Chair of the Digital Transformation practice at N2Growth. Prior to joining N2Growth, Mr. Nives was at IBM Global Business Services, within the Watson and Analytics Center of Competence. There he worked on Cognitive Digital Transformation projects related to Watson, Big Data, Analytics, Social Business and Marketing/Advertising Technology. Examples include CognitiveTV and the application of external unstructured data (social, weather, etc.) for business transformation.
Prior relevant experience includes executive leadership positions at Nielsen, IRI, Kraft and two successful advertising technology acquisitions (Appnexus and SintecMedia). In this capacity, Jean-Louis combined information, analytics and technology to create significant business value in transformative ways.
Jean-Louis earned a Bachelor’s Degree in Industrial Engineering from University at Buffalo and an MBA in Finance and Computer Science from Pace University. He is married with four children and lives in the New York City area.

About #Podcast:
The #FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners onto the show to discuss their journeys in creating the data-driven future.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Source: @JohnNives on ways to demystify AI for enterprise #FutureOfData by admin

Jan 03, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Human resource  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ NEWS BYTES]

>> Protecting Big Data, while Preserving Analytical Agility – Security Boulevard Under Big Data

>> How Restaurant Apps Can Improve The Customer Experience? – Customer Think Under Customer Experience

>> Why Fund Managers Are Investing in Big Data – GuruFocus.com Under Big Data

More NEWS? Click Here

[ FEATURED COURSE]

Hadoop Starter Kit


Hadoop learning made easy and fun. Learn HDFS, MapReduce and introduction to Pig and Hive with FREE cluster access…. more

[ FEATURED READ]

Hypothesis Testing: A Visual Introduction To Statistical Significance


Statistical significance is a way of determining whether an outcome occurred by random chance or whether something caused that outcome to differ from the expected baseline. Statistical significance calculations find their … more

[ TIPS & TRICKS OF THE WEEK]

Keeping Biases Checked during the last mile of decision making
Today a data-driven leader, data scientist, or data expert is constantly put to the test by helping their team solve problems with their skills and expertise. Believe it or not, part of that decision tree is derived from intuition, which adds a bias to our judgement and taints the suggestions. Most skilled professionals understand and handle these biases well, but in a few cases we give in to small traps and can find ourselves caught in biases that impair our judgement. So it is important to keep intuition bias in check when working on a data problem.

[ DATA SCIENCE Q&A]

Q: Give examples of bad and good visualizations.
A: Bad visualization:
– Pie charts: difficult to make comparisons between items when area is used, especially when there are many items
– Color choice for classes: abundant use of red, orange and blue. Readers may think the colors mean good (blue) versus bad (orange and red) when they are merely associated with specific segments
– 3D charts: can distort perception and therefore skew the data
– Line styles in a line chart: dashed and dotted lines can be distracting (prefer solid lines)

Good visualization:
– Heat map with a single color: some colors stand out more than others, giving more weight to that data; a single color with varying shades shows intensity better
– Adding a trend line (regression line) to a scatter plot helps the reader see trends

Source
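
For readers who want to try these suggestions, here is a minimal matplotlib sketch (an illustration, not from the quoted source) of two of the "good" patterns: a single-hue heat map where shade encodes intensity, and a scatter plot with a fitted trend line. The data are randomly generated.

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Single-hue heat map: varying shades of one color convey intensity
# without implying "good vs bad" categories.
grid = rng.random((8, 8))
im = ax1.imshow(grid, cmap="Blues")
fig.colorbar(im, ax=ax1)
ax1.set_title("Single-color heat map")

# Scatter plot with a fitted trend line to highlight the relationship.
x = rng.uniform(0, 10, 50)
y = 2.0 * x + rng.normal(0, 3, 50)
slope, intercept = np.polyfit(x, y, 1)
ax2.scatter(x, y, alpha=0.7)
ax2.plot(np.sort(x), slope * np.sort(x) + intercept, color="black")
ax2.set_title("Scatter plot with trend line")

plt.tight_layout()
plt.show()
```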

[ VIDEO OF THE WEEK]

@AnalyticsWeek Panel Discussion: Big Data Analytics

Subscribe to YouTube

[ QUOTE OF THE WEEK]

Torture the data, and it will confess to anything. – Ronald Coase

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with John Young, @Epsilonmktg

Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

A quarter of decision-makers surveyed predict that data volumes in their companies will rise by more than 60 per cent by the end of 2014, with the average of all respondents anticipating a growth of no less than 42 per cent.

Sourced from: Analytics.CLUB #WEB Newsletter

2017 Trends in Data Modeling

The projected expansion of the data ecosystem in 2017 is creating substantial, systemic challenges for organizations attempting to exploit the most effective techniques available for maximizing data utility.

The plenitude of cognitive computing options, cloud paradigms, data science, and mobile technologies for big data has demonstrated its business value in a multitude of use cases. Pragmatically, however, its inclusion alongside conventional data management processes poses substantial questions on the back end pertaining to data governance and, more fundamentally, to data modeling.

Left unchecked, these concerns could potentially compromise any front-end merit while cluttering data-driven methods with unnecessary silos and neglected data sets. The key to addressing them lies in the implementation of swiftly adjustable data models which can broaden to include the attributes of the constantly changing business environments in which organizations compete.

According to TopQuadrant Executive VP and Director of TopBraid Technologies Ralph Hodgson, the consistency and adaptability of data modeling may play an even more critical role for the enterprise today:

“You have physical models and logical models, and they make their way into different databases from development to user acceptance into production. On that journey, things change. People might change the names of some of the columns of some of those databases. The huge need is to be able to trace that through that whole assembly line of data.”

Enterprise Data Models
One of the surest ways to create a flexible enterprise model for a top down approach to the multiple levels of modeling Hodgson denoted is to use the linked data approach reliant upon semantic standards. Although there are other means of implementing enterprise data models, this approach has the advantages of being based on uniform standards applicable to all data which quickly adjust to include new requirements and use cases. Moreover, it has the added benefit of linking all data on an enterprise knowledge graph which, according to Franz CEO Jans Aasman, is one of the dominant trends to impact the coming year. “We don’t have to even talk about it anymore,” Aasman stated. “Everyone is trying to produce a knowledge graph of their data assets.”
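
As a toy illustration of the linked-data idea (my sketch, not TopQuadrant's or Franz's tooling), the Python rdflib library can express facts from different domains as triples against a shared namespace and query them with SPARQL. The namespace, classes, and predicates below are hypothetical; a real deployment would follow the organization's own ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical enterprise namespace standing in for a governed ontology.
EX = Namespace("http://example.com/enterprise#")

g = Graph()
g.bind("ex", EX)

# Link records from two "domains" (CRM and billing) to the same customer node.
g.add((EX.customer42, RDF.type, EX.Customer))
g.add((EX.customer42, EX.name, Literal("Acme Corp")))
g.add((EX.customer42, EX.hasAccount, EX.account7))
g.add((EX.account7, EX.annualRevenue, Literal(120000)))

# A new requirement (say, a support-ticket domain) is just more triples;
# the existing model does not have to be redesigned.
g.add((EX.customer42, EX.raisedTicket, EX.ticket99))

# Query across the linked data with SPARQL.
results = g.query("""
    PREFIX ex: <http://example.com/enterprise#>
    SELECT ?name ?revenue WHERE {
        ?c a ex:Customer ; ex:name ?name ; ex:hasAccount ?acct .
        ?acct ex:annualRevenue ?revenue .
    }
""")
for name, revenue in results:
    print(name, revenue)
```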

The merit of a uniform data model for multiple domains throughout the enterprise is evinced in Master Data Management platforms as well; one can argue the linked data approach of ontological models merely extends that concept throughout the enterprise. In both cases, organizations are able to avoid situations in which “they spend so much time trying to figure out what the data model looks like and how do we integrate these different systems together so they can talk,” Stibo Systems Director of Vertical Solutions Shahrukh Arif claimed. “If you have it all in one platform, now you can actually realize that full value because you don’t have to spend so much time and money on the integrations and data models.”

Data Utility Models
The consistency of comprehensive approaches to data modeling are particularly crucial for cloud-based architecture or for incorporating data external to the enterprise. Frequently, organizations may encounter situations in which they must reconcile differences in modeling and metadata when attaining data from third-party sources. They can address these issues upfront by creating what DISCERN Chairman and CEO Harry Blount termed a “data utility model”, in which “all of the relevant data was available and mapped to all of the relevant macro-metadata, a metamodel I should say, and you could choose which data you want” from the third party in accordance with the utility model. Actually erecting such a model requires going through the conventional modeling process of determining business requirements and facilitating them through IT—which organizations can actually have done for them by competitive service providers. “Step one is asking all the right questions, step two is you need to have a federated, real-time data integration platform so you can take in any data in any format at any time in any place and always keep it up to date,” Blount acknowledged. “The third requirement is you need to have a scalable semantic graph structure.”

Relational Data Modeling (On-Demand Schema)
Data modeling in the relational world is increasingly impacted by the modeling techniques associated with contemporary big data initiatives. Redressing the inherent modeling disparities between the two is largely a means of accounting for semi-structured and unstructured data in relational environments primarily designed for structured data. Organizations are able to hurdle this modeling issue through the means of file formats which derive schema on demand. Options such as JSON and Avro are ideal for those who “want what is modeled in the big data world to align with what they have in their relational databases so they can do analytics held in their main databases,” Hodgson remarked.

One of the boons of utilizing Avro is the complete traceability it provides for data in relational settings, even when that data originated from more contemporary unstructured sources associated with big data. The Avro format, and other file formats in this vein, allow modelers to bridge relational schema requirements with the lack of such schema intrinsic to most big data. According to Hodgson, Avro “still has the ontological connection, but it still talks in terms of property values and columns. It’s basically a table in the same sense you find in a spreadsheet. It’s that kind of table but the columns all align with the columns in a relational database, and those columns can be associated with a logical model which need not be an entity-relationship model. It can be an ontology.”
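
A brief, hedged sketch of the schema-on-demand point using the fastavro package: an Avro container file carries its own record schema, so column-like fields can be recovered at read time and aligned with relational columns or a logical model. The Customer schema and field names below are invented for illustration.

```python
import io
from fastavro import parse_schema, reader, writer

# Hypothetical record schema with column-like fields.
schema = parse_schema({
    "name": "Customer",
    "type": "record",
    "fields": [
        {"name": "customer_id", "type": "long"},
        {"name": "region", "type": "string"},
        {"name": "annual_revenue", "type": ["null", "double"], "default": None},
    ],
})

records = [
    {"customer_id": 1, "region": "EMEA", "annual_revenue": 120000.0},
    {"customer_id": 2, "region": "APAC", "annual_revenue": None},
]

buf = io.BytesIO()
writer(buf, schema, records)      # the schema travels with the data
buf.seek(0)

avro_reader = reader(buf)
print(avro_reader.writer_schema)  # schema recovered "on demand" at read time
for rec in avro_reader:
    print(rec)
```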

Predictive Models
Predictive models have been widely impacted by cognitive computing methods and other aspects of data science, although these two realms of data management are not necessarily synonymous with classic statistically trained predictive models. Still, the influx of algorithms associated with various means of cognitive computing is paramount to the creation of predictive models that demonstrate their full utility on unstructured big data sets at high velocities. Organizations can access entire libraries of machine learning and deep learning models from third-party vendors through the cloud and readily deploy them with their own data, or extend them: “As a platform, we allow customers to build their own models or extend our models in service of their own specific needs,” indico Chief Customer Officer Vishal Daga said.

The result is not only a dramatic reduction in the overall cost, labor, and salaries of hard to find data scientists to leverage cognitive computing techniques for predictive models, but also a degree of personalization—facilitated by the intelligent algorithms involved—enabling organizations to tailor those models to their own particular use cases. Thus, AI-centered SaaS opportunities actually reflect a predictive models on-demand service based on some of the most relevant data-centric processes to date.

Enterprise Representation
The nucleus of the enduring appositeness of data modeling is the increasingly complicated data landscape—including cognitive computing, a bevy of external data sources heralded by the cloud and mobile technologies in big data quantities—and the need to effectually structure data in a meaningful way. Modeling data is the initial step to gleaning its meaning and provides the basis for all of the different incarnations of data modeling, regardless of the particular technologies involved. However, there appears to be a burgeoning sense of credence associated with doing so on an enterprise-wide scale as “Knowing how data’s flowing and who it’s supporting, and what kind of new sources might make a difference to those usages, it’s all going to be possible when you have a representation of the enterprise,” Hodgson commented.

Adding further conviction to the value of enterprise data modeling is the analytic output facilitated by it. All-inclusive modeling techniques at the core of enterprise-spanning knowledge graphs appear well-suited for the restructuring of the data sphere caused by the big data disruption—particularly when paired with in-memory, parallel processing graph-aware analytics engines. “As modern data diversity and volumes grow, relational database management systems (RDBMS) are proving too inflexible, expensive and time-consuming for enterprises,” Cambridge Semantics VP of Engineering Barry Zane said. “Graph-based online analytical processing (GOLAP) will find a central place in everyday business by taking on data analytics challenges of all shapes and sizes, rapidly accelerating time-to-value in data discovery and analytics.”

 

Source

Dec 27, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data analyst  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Data Matching with Different Regional Data Sets by analyticsweekpick

>> Jeff Palmucci / @TripAdvisor discusses managing a #MachineLearning #AI Team by v1shal

>> The Methods UX Professionals Use (2018) by analyticsweek

Wanna write? Click Here

[ NEWS BYTES]

>> Financial Contrast: LendingClub (LC) and Marchex (MCHX) – Fairfield Current Under Social Analytics

>> Unraveling the Data Analytics Advantage – CDOTrends Under Business Analytics

>> Cna Financial Corp (NYSE:CNA) Institutional Investor Sentiment Analysis – The Cardinal Weekly (press release) Under Sentiment Analysis

More NEWS? Click Here

[ FEATURED COURSE]

Lean Analytics Workshop – Alistair Croll and Ben Yoskovitz


Use data to build a better startup faster in partnership with Geckoboard… more

[ FEATURED READ]

The Industries of the Future


The New York Times bestseller, from leading innovation expert Alec Ross, a “fascinating vision” (Forbes) of what’s next for the world and how to navigate the changes the future will bring…. more

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not as much a tech challenge as it is an adoption challenge. Adoption has its roots in the cultural DNA of any organization. Great data-driven organizations weave the data-driven culture into their corporate DNA. A culture of connection, interaction, sharing and collaboration is what it takes to be data driven. It’s about being empowered more than it is about being educated.

[ DATA SCIENCE Q&A]

Q: Explain what a local optimum is and why it is important in a specific context, such as K-means clustering. What are specific ways of determining if you have a local optimum problem? What can be done to avoid local optima?

A: * A solution that is optimal within a neighboring set of candidate solutions
* In contrast with global optimum: the optimal solution among all others

* K-means clustering context:
It’s proven that the objective cost function will always decrease until a local optimum is reached.
Results will depend on the initial random cluster assignment

* Determining if you have a local optimum problem:
Tendency of premature convergence
Different initialization induces different optima

* Avoid local optima in a K-means context: repeat K-means and take the solution that has the lowest cost

Source
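
A short scikit-learn sketch of the points above (illustrative, not from the quoted source): running K-means from different random initializations can converge to different local optima with different final costs, and keeping the lowest-cost solution, which is what the n_init parameter automates, mitigates the problem. The data are synthetic blobs.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=5, random_state=42)

# Run K-means several times with a single random initialization each;
# different starts can end at different local optima (different inertia).
costs = []
for seed in range(10):
    km = KMeans(n_clusters=5, n_init=1, random_state=seed).fit(X)
    costs.append(km.inertia_)
print("costs across restarts:", [round(c, 1) for c in costs])

# Keep the restart with the lowest cost (n_init does this repetition for you).
best = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print("best inertia:", round(best.inertia_, 1))
```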

[ VIDEO OF THE WEEK]

@DrewConway on creating socially responsible data science practice #FutureOfData #Podcast

Subscribe to YouTube

[ QUOTE OF THE WEEK]

It’s easy to lie with statistics. It’s hard to tell the truth without statistics. – Andrejs Dunkels

[ PODCAST OF THE WEEK]

@DrewConway on fabric of an IOT Startup #FutureOfData #Podcast

Subscribe: iTunes | GooglePlay

[ FACT OF THE WEEK]

The largest AT&T database boasts titles including the largest volume of data in one unique database (312 terabytes) and the second largest number of rows in a unique database (1.9 trillion), which comprises AT&T’s extensive calling records.

Sourced from: Analytics.CLUB #WEB Newsletter

What Happens When You Put Hundreds of BI Experts in One Room?

Last week we wrapped up the second day of our global, two-day client conference: Eureka!. Our sold-out event brought together hundreds of business leaders and analytics professionals from around the globe to listen to thought-provoking presentations and engage in discussions about the evolution of the analytics industry.

You may be wondering why we chose to call our client conference “Eureka!”. I’m glad you asked. A “eureka moment” is an “aha!” moment, a moment when something clicks and finally makes sense. In hearing and sharing stories, experiences, and perspectives with industry veterans and peers, it was our hope that attendees experienced moments of surprise and enlightenment.

Unsurprisingly, some of the hottest topics at Eureka! were the shift to embedding analytics everywhere, the impact of AI and augmented analytics on businesses, and how to drive transformational change with analytics.

Eureka!

Embedding Analytics Everywhere

In his opening day keynote, Sisense CEO, Amir Orad, emphasized the importance of lowering the barrier to analytics and empowering everyone to use data to make decisions. Providing analytics to everyone, everywhere means catering to the different ways people understand data. This means moving beyond desktop dashboards and offering insights naturally throughout our lives.

Continuing with the non-traditional side of analytics, Amir pointed to three organizations using analytics in unique ways:

  1. Celestica, a global electronics manufacturer, leverages analytics to reduce its carbon footprint. Within just four months of implementing analytics, it saw a reduction of 1,041 metric tons of CO2e. That’s enough energy to power 110 homes for a full year!
  2. Skullcandy, the incredibly popular maker of headphones, earbuds, and other audio and wireless products, has used analytics in their business to virtually eliminate fraudulent returns.
  3. Indiana Donor Network, the organ and tissue donation network for the state of Indiana, has used analytics to increase skin donations by 70% and cornea donations by a whopping 224%.

Solidifying the need to embed analytics everywhere in order to transform industries was Sham Sokka of Philips, who spoke about revolutionizing patient care by delivering relevant data and analytics to the right individual at each stage of client care. “We fully believe in this concept of data democratization,” Sham said. “Not everyone is a data scientist so you want to have a platform that can serve simple data to a patient but complex data to an administrator. Getting the right data to the right person is super critical.”

AI and Augmented Analytics

There’s no doubt that artificial intelligence and augmented analytics are going to continue to impact every aspect of analytics – from data prep to insight discovery.

In her keynote, Jen Underwood of Impact Analytix discussed the unprecedented pace of continuous technological change we’re currently witnessing. When organizations adopt augmented analytics, Jen said, they see a multitude of benefits, including:

  1. Empowering the masses: Rather than providing analytics for only around 30% of an organization, augmented analytics makes discovering insight easy enough for everyone.
  2. Saving time: Augmented data prep automates and accelerates the process, applies reinforcement learning while humans drive algorithms, and helps improve data quality for faster results.
  3. Revealing hidden patterns: Augmented analytics can find patterns in your data that a human might never detect – or detect when it’s too late – using manual techniques.
  4. Improving accuracy: With the ability to apply statistical significance, uncertainty, and risk model estimates, augmented analytics takes into account aspects of data prep and modeling that manual approaches may miss.

Joining in on the topic of artificial intelligence, professor and author Avi Goldfarb gave a keynote that had participants glued to their chairs. His session demonstrated how artificial intelligence will affect business, public policy, and society in virtually all fields. The point he drove home? Prediction isn’t useful unless you can do something with it. What’s useful about AI and prediction is the ability to take action and create a feedback loop – that’s where the competitive edge comes into play.

Transformational Change

Advancements in technology are great but it’s the changes they bring to organizations that make all the difference in the real world. In his session, Bill Janczak from Indiana Donor Network told his organization’s story of transformation through the implementation of analytics.

Eureka!

As a small organization with a small IT budget, Indiana Donor Network has a large mission – to help people during their time of need. Run traditionally like a non-profit, Indiana Donor Network realized that changing their behavior and adding in analytics was the missing piece to ensuring organs make it to the right place at the right time. Using analytics they were able to make some major, important changes:

  1. Within hours they can now catch errors and common data entry challenges that would normally take around 30-45 days to find. This led to improved matches for organ transplants.
  2. They are now able to monitor which donor outreach programs are successful and which are not in order to focus their activities and spend their resources on programs that actually drive more awareness and donor authorization so that more people can be helped in the long run.

We’ve Struck Gold!

The last two days were a whirlwind of bright ideas, futuristic visions, and practical applications of analytics to improve businesses around the globe. If the excitement in the room surrounding all of the technological transformations was any indication, I’d say the future for analytics is bright.

I’d like to extend a quick thank you to all of our speakers and customers for contributing to an awesome, fascinating, and fun event. Until next year!

Originally Posted at: What Happens When You Put Hundreds of BI Experts in One Room? by analyticsweek