May 24, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data interpretation (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ NEWS BYTES]

>> The case for one giant, multibillion-dollar cloud contract for DoD – C4ISRNet Under Cloud

>> Streaming Analytics Market Is Constantly Growing On Account Of the Increasing Operational Efficiency And Production … – Expert Consulting Under Streaming Analytics

>> ‘Salah’s statistics in his debut season at Anfield are quite astonishing’ – how the papers saw Liverpool FC’s … – Daily Post North Wales Under Statistics

More NEWS ? Click Here

[ FEATURED COURSE]

Tackle Real Data Challenges


Learn scalable data management, evaluate big data technologies, and design effective visualizations…. more

[ FEATURED READ]

The Future of the Professions: How Technology Will Transform the Work of Human Experts


This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want … more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture: spread awareness to drive adoption
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption, and one of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way toward quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q: Explain Tufte’s concept of ‘chart junk’.
A: Chart junk is all visual elements in charts and graphs that are not necessary to comprehend the information represented, or that distract the viewer from this information.

Examples of unnecessary elements include:
– Unnecessary text
– Heavy or dark grid lines
– Ornamented chart axes
– Pictures
– Background
– Unnecessary dimensions
– Elements depicted out of scale to one another
– 3-D simulations in line or bar charts

Source
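To make this concrete for Python users, here is a minimal matplotlib sketch of stripping chart junk from a simple line chart; the data and styling choices are invented for illustration.

```python
import matplotlib.pyplot as plt
import numpy as np

# Illustrative data only
years = np.arange(2010, 2018)
revenue = np.array([3.2, 3.8, 4.1, 4.9, 5.6, 6.0, 6.8, 7.5])

fig, ax = plt.subplots()
ax.plot(years, revenue, color="steelblue", linewidth=2)

# Strip chart junk: no top/right spines, no heavy grid lines,
# plain unornamented axes and labels, no 3-D effects or pictures
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.grid(False)
ax.set_xlabel("Year")
ax.set_ylabel("Revenue ($M)")
ax.set_title("Revenue over time")

plt.tight_layout()
plt.show()
```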

[ VIDEO OF THE WEEK]

@AnalyticsWeek: Big Data at Work: Paul Sonderegger

Subscribe to YouTube

[ QUOTE OF THE WEEK]

With data collection, ‘the sooner the better’ is always the best answer. – Marissa Mayer

[ PODCAST OF THE WEEK]

#FutureOfData Podcast: Peter Morgan, CEO, Deep Learning Partnership

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

29 percent report that their marketing departments have ‘too little or no customer/consumer data.’ Even when marketers do collect data, it is often not suited to real-time decision making.

Sourced from: Analytics.CLUB #WEB Newsletter

How To Turn Your Data Into Content Marketing Gold

With every brand out there becoming a publisher, it’s harder than ever to make your content stand out. Each day, you have a choice. You can play it safe and do what everyone else is doing: re-blog the same industry studies and curate uninspired listicles. Or you can be original and craft a story that only you can tell. The good news for most of you: there is content gold right under your nose. Used correctly, it will enable you to create truly compelling content that is not only shareable but also sets you apart from your peers.

The gold is your own data.

This data is often used to inform your business strategies and tactics, such as assessing which headlines performed better or what time of day you should tweet. And while those things are important, we’re talking about a close cousin of those efforts. This is about looking at the data your team has gathered and analyzed, and identifying original insights that you can craft into engaging stories to fuel your content marketing.

Visage, a platform meant to help marketers create branded visual content, conducted a survey of 504 marketers to see just how well they are taking advantage of this opportunity for original data storytelling. 75% of those surveyed are directly responsible for creating content, and 75% work at companies with 10 or fewer people in the marketing department. Here’s what they found out:

1. Everyone is creating a lot of content

Most organizations (73%) publish original content on at least a weekly basis, and many (21%) are publishing content multiple times per day. Most brands are doing this because they know that if you aren’t sharing your latest thinking with the digital world (or at least being entertaining), your brand doesn’t exist for most people outside of your family.


2. It’s still not enough

Relatively few modern marketers believe that their organization creates enough original content. The fact is, as anyone who has rescheduled dates on an editorial calendar knows, getting into a publishing rhythm is hard. We can get enamored or overwhelmed by other brands who we see publishing a high volume of content. In such a state, it’s easy to play copycat and fall into regurgitating news and curating stories covered by other people. But your real challenge is differentiating from competitors and earning the trust of potential customers. So you need to use your limited resources to give your content a shot at standing out and being remembered. Otherwise, it will be just one little drop flowing past in the social river.


3. Marketers are sitting on gold

Visage’s survey found that 41% of organizations are doing original market research more than once per year. Conducting a quick survey or poll is one powerful way to create a fresh, original story that hasn’t been told before. Start with a small experiment aimed at helping you understand your own market better, and keep your ideal customer profile in mind as you write your questions. The advantage to this approach is that you can structure your data collection and save yourself the time and money associated with cleaning up and organizing outside data. Finally, format your questions to gather the information and answers that you know your audience will find valuable.


4. Marketers aren’t using their data to its full potential.

The biggest shocker was that 60% of respondents claim to be sitting on interesting data, but only 18% are publishing it externally. There are many valid reasons to keep your internal data private (e.g., security, competitive advantage), but you don’t need to take an all-or-nothing approach. For example, there’s a big opportunity to share aggregated trends and behaviors. Spotify does this with their music maps, and OkCupid does this with their OKTrends blog.


5. They see the opportunity

Brand marketers aren’t just hoarding this gold. 82% of companies said it was important or extremely important that their marketing team learn to tell better data stories. You might notice the growing number of situations that require you to communicate with data in your own work, even just in your own internal reports and presentations.


6. The struggle is real

So, if so many marketers are sitting on interesting data and think it is important to craft original stories from it – why isn’t it happening? As the survey showed, many marketers don’t feel they have the skills or tools to craft the story from their data. Only 34% feel their teams have above average data literacy. Even when the data is cleaned, analyzed and ready to be visualized, modern marketers still have a hard job to do. Your audience needs context, and a strong narrative is a key ingredient of communicating with data. Often, the most successful data stories come as a result of combining powerful talents – the journalist working with a graphic designer, or a content marketer working closely with a data analyst. Get both sides of the brain firing in your content creation, even if you need to combine forces.


7. How to get started

Like any new marketing initiative, success in crafting original data stories as a means of differentiating your brand will take time and money. Start where you are and do what you can, even if it feels microscopic at first. If the prospect of getting rolling with your own data seems overwhelming, get some practice with public data available from credible sources like the Census Bureau or Pew Research. The cool news is that it’s easier than ever to get started with a plethora of great tools and educational material on the web.

Data storytelling is a skill that modern marketers can and must learn. If you are committed to creating original content that makes your brand shine, consider the precious gold insights that are ready to be mined from your data to provide tangible value to your audience.

To read the original article on NewsCred, click here.

Source: How To Turn Your Data Into Content Marketing Gold

Measuring The Customer Experience Requires Fewer Questions Than You Think

Figure 1. Three Phases of the Customer Lifecycle

A formal definition of customer experience, taken from Wikipedia, states that customer experience is: “The sum of all experiences a customer has with a supplier of goods or services, over the duration of their relationship with that supplier.” In practical terms, customer experience is the customer’s perception of, and attitude about, different areas of your company or brand across the entire customer lifecycle (see Figure 1 to right).

We know that the customer experience has a large impact on customer loyalty. Customers who are satisfied with the customer experience buy more, recommend you and are easier to up/cross-sell than customers who are dissatisfied with the customer experience. Your goal for the customer relationship survey, then, is to ensure it includes customer experience questions asking about important customer touchpoints.

Table 1. General and Specific Customer Experience Questions. In practice, the survey asks customers to rate their satisfaction with each area.

Customer Experience Questions

Customer experience questions typically account for most of the questions in customer relationship surveys. There are two types of customer experience questions: General and Specific. General questions ask customers to rate broad customer touchpoints. Specific customer experience questions focus on specific aspects of the broader touchpoints.  As you see in Table 1, general customer experience questions might ask the customers to rate their satisfaction with 1. Product Quality, 2. Account Management, 3. Technical Support and so on. Specific customer experience questions ask customers to rate their satisfaction with detailed aspects of each broader customer experience area.

I typically see both types of questions in customer relationship surveys for B2B companies. The general experience questions are presented first and then followed up with specific experience questions. I have seen customer relationship surveys with as few as five customer experience questions and others with 50 or more.

Figure 2. General Customer Experience Questions

General Customer Experience Questions

Here are some general customer experience questions I typically use as a starting point for helping companies build their customer survey. As you can see in Figure 2, these general questions address broad areas across the customer lifecycle, from marketing and sales to service.

While specific customer experience questions are designed to provide a greater understanding of customer loyalty, it is important to consider their usefulness. Given that we already have general customer experience questions in our survey, do we need the specific questions? Do the specific questions help us explain differences in customer loyalty beyond what we know from the general questions?

Customer Experience Questions Predicting Customer Loyalty

To answer these questions, I analyzed four different B2B customer relationship surveys, one from each of four companies ranging from midsize to large enterprises. Their semi-annual customer surveys included a variety of loyalty questions and both general and specific customer experience questions. The four companies had different combinations of general (5 to 7) and specific (0 to 34) customer experience questions.

Figure 3. Impact of General and Specific Customer Experience Questions on Customer Loyalty (overall sat, recommend, buy again). Percent of variability is based on stepwise regression analysis.

The goal of the analysis was to show whether the inclusion of specific experience questions added to our understanding of customer loyalty differences beyond what the general experience questions explained. The results of the analysis are presented in Figure 3.  Through step-wise regression analysis, I first calculated the percent of variance in customer loyalty that is explained by the general customer experience questions (green area). Then, I calculated the percent of variance in customer loyalty explained by the specific questions above what the general questions explained (blue area). Clearly, the few general experience questions explain a lot of the variability in customer loyalty (42% to 85%) while the specific customer experience questions account for very little extra (2% to 4%).
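As a rough sketch of this two-step logic in Python (not the author’s exact stepwise procedure, and with synthetic data standing in for the proprietary surveys), the incremental variance explained can be computed by fitting the general questions first and then adding the specific ones; all column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Synthetic ratings: the specific items largely echo the general ones
general = pd.DataFrame(rng.integers(0, 11, (n, 3)),
                       columns=["product_quality", "account_mgmt", "tech_support"])
specific = general + rng.normal(0, 1.5, (n, 3))
specific.columns = ["reliability", "responsiveness", "ease_of_use"]
loyalty = general.mean(axis=1) + rng.normal(0, 1, n)  # loyalty rating

# Step 1: variance in loyalty explained by general questions alone
r2_general = sm.OLS(loyalty, sm.add_constant(general)).fit().rsquared

# Step 2: variance explained after adding the specific questions
X_full = sm.add_constant(pd.concat([general, specific], axis=1))
r2_full = sm.OLS(loyalty, X_full).fit().rsquared

print(f"R^2 from general questions only: {r2_general:.2f}")
print(f"Delta R^2 added by specific questions: {r2_full - r2_general:.2f}")
```

On data like this, where the specific items mostly restate the general ones, the delta should come out small, mirroring the 2% to 4% pattern reported above.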

Efficient Customer Relationship Surveys

We may be asking customers too many questions in our relationship surveys. Short relationship surveys, using general experience questions, provide great insight into understanding how to improve customer loyalty. Asking customers about specific, detailed aspects about their experience provides very little additional information about what drives customer loyalty.

Customers’ memories are fallible.  Given the non-trivial time between customer relationship surveys (up to a year between surveys), customers are unable to make fine distinctions regarding their experience with you (as measured in your survey). This might be a good example of the halo effect, the idea that a global evaluation of a company/brand (e.g., great product) influences opinions about their specific attributes (e.g., reliable product, ease of use).

Customers’ ratings of general customer experience areas explain nearly as much of the difference in customer loyalty as the general and specific questions combined. Short relationship surveys give customers an efficient way to provide their feedback on a regular basis. Not only do these short relationship surveys provide deep customer insight into the causes of customer loyalty, they also enjoy higher response rates and show that you are considerate of customers’ time.

Source: Measuring The Customer Experience Requires Fewer Questions Than You Think by bobehayes

May 17, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Big Data knows everything (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Customer Service Excellence in 8 steps! by martin

>> Underpinning Enterprise Data Governance with Machine Intelligence by jelaniharper

>> Navigating Big Data Careers with a Statistics PhD by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> Startup Dremio Promises Improved Big Data Access For Business Analytics With New Release – CRN Under Business Analytics

>> Teladoc taps IBM Watson machine learning for second opinion service – Healthcare IT News Under Machine Learning

>> Amazon or no, banks are in for big changes, one analyst says … – MarketWatch Under Financial Analytics

More NEWS ? Click Here

[ FEATURED COURSE]

Machine Learning


6.867 is an introductory course on machine learning which gives an overview of many concepts, techniques, and algorithms in machine learning, beginning with topics such as classification and linear regression and ending … more

[ FEATURED READ]

Hypothesis Testing: A Visual Introduction To Statistical Significance


Statistical significance is a way of determining if an outcome occurred by random chance, or did something cause that outcome to be different than the expected baseline. Statistical significance calculations find their … more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture: spread awareness to drive adoption
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption, and one of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way toward quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q: What do you think about the idea of injecting noise into your data set to test the sensitivity of your models?
A: * The effect is similar to regularization: it helps avoid overfitting
* It is used to increase robustness

Source
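A minimal sketch of the idea, using a public scikit-learn dataset rather than any particular production model: perturb the test features with increasing Gaussian noise and watch how accuracy degrades. A robust model’s score should decay gracefully rather than collapse.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)

# Inject noise scaled to each feature's spread and re-score the model
rng = np.random.default_rng(0)
for scale in (0.0, 0.1, 0.5, 1.0):
    noisy = X_te + rng.normal(0.0, scale * X_te.std(axis=0), X_te.shape)
    print(f"noise scale {scale:.1f}: accuracy = {model.score(noisy, y_te):.3f}")
```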

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Eloy Sasot, News Corp

Subscribe to YouTube

[ QUOTE OF THE WEEK]

It is a capital mistake to theorize before one has data. Insensibly, one begins to twist the facts to suit theories, instead of theories to suit facts. – Arthur Conan Doyle

[ PODCAST OF THE WEEK]

@ChuckRehberg / @TrigentSoftware on Translating Technology to Solve Business Problems #FutureOfData #Podcast

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

73% of organizations have already invested or plan to invest in big data by 2016

Sourced from: Analytics.CLUB #WEB Newsletter

@chrisbishop on futurist’s lens on #JobsOfFuture

[youtube https://www.youtube.com/watch?v=L1nLwjiB32A]

@chrisbishop on futurist’s lens on #JobsOfFuture #FutureofWork #JobsOfFuture #Podcast

In this podcast, Christopher Bishop, Chief Reinvention Officer at Improvising Careers, talks about his journey as a multimodal careerist and his past as a rock star. He shares some of the hacks and best practices that businesses could adopt to work better through the new age of work, worker, and workplace. This podcast holds lots of thought-leadership perspective for future HR leaders.

Chris’s Recommended Reads:
The Industries of the Future by Alec Ross https://amzn.to/2rPjQlo
Disrupted: My Misadventure in the Start-Up Bubble by Dan Lyons https://amzn.to/2k1RAIT
Breakout Nations: In Pursuit of the Next Economic Miracles by Ruchir Sharma https://amzn.to/2KwERcy
How We Got to Now: Six Innovations That Made the Modern World by Steven Johnson https://amzn.to/2L7Dn9v
The New Rules of Work: The Modern Playbook for Navigating Your Career by Alexandra Cavoulacos and Kathryn Minshew https://amzn.to/2rMjU5F

Podcast Link:
iTunes: http://math.im/jofitunes
GooglePlay: http://math.im/jofgplay

Chris’s BIO:
Christopher Bishop has had many different careers since he graduated from Bennington College with a B.A. in German literature. He has worked as a touring rock musician (played with Robert Palmer), jingle producer (sang on the first Kit Kat jingle “Gimme A Break”) and Web site project manager (developed Johnson & Johnson’s first corporate Web site). Chris also spent 15 years at IBM in a variety of roles including business strategy consultant and communications executive driving social media adoption and use of virtual worlds.

Chris is a member of the World Future Society and gave a talk at their annual conference in Washington, D.C. last summer on “How to Succeed at Jobs That Don’t Exist Yet.” In addition, he’s on the Board of TEDxTimesSquare and gave a talk on *Openness* at the New York event in April 2013.

Chris writes, consults and speaks about “improvising careers” at universities and industry conferences.

About #Podcast:
#JobsOfFuture podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#JobsOfFuture #Leadership #Podcast #Future of #Work #Worker & #Workplace

Source: @chrisbishop on futurist’s lens on #JobsOfFuture

Ashok Srivastava(@aerotrekker) @Intuit on Winning the Art of #DataScience #FutureOfData #Podcast

[youtube https://www.youtube.com/watch?v=I5yZfhd-ZQY]

Ashok Srivastava(@aerotrekker) @Intuit on Winning the Art of #DataScience #FutureOfData

Youtube: https://youtu.be/I5yZfhd-ZQY
iTunes: https://apple.co/2FAZgz2

In this podcast, Ashok Srivastava (@aerotrekker) talks about how the code to creating a great data science practice runs through #PeopleDataTech, and he suggests how to handle unreasonable expectations of reasonable technologies. He shares his journey through culturally diverse organizations and how he successfully built data science practices. He describes his role at Intuit and some of the AI/machine learning focus areas in his current role. This podcast is a must for all data-driven leaders, strategists, and aspiring technologists tasked with growing their organizations and building a robust data science practice.

Ashok’s Recommended Read:
Guns, Germs, and Steel: The Fates of Human Societies – Jared Diamond Ph.D. http://amzn.to/2C4bLMT
Collapse: How Societies Choose to Fail or Succeed: Revised Edition – by Jared Diamond http://amzn.to/2C3Bu8f

Podcast Link:
iTunes: http://math.im/itunes
GooglePlay: http://math.im/gplay

Ashok’s BIO:
Ashok N. Srivastava, Ph.D. is the Senior Vice President and Chief Data Officer at Intuit. He is responsible for setting the vision and direction for large-scale machine learning and AI across the enterprise to help power prosperity across the world. He is hiring hundreds of people in machine learning, AI, and related areas at all levels.

Previously, he was Vice President of Big Data and Artificial Intelligence Systems and the Chief Data Scientist at Verizon. He is an Adjunct Professor at Stanford in the Electrical Engineering Department and is the Editor-in-Chief of the AIAA Journal of Aerospace Information Systems. Ashok is a Fellow of the IEEE, the American Association for the Advancement of Science (AAAS), and the American Institute of Aeronautics and Astronautics (AIAA).

Ashok has a range of business experience including serving as Senior Director at Blue Martini Software and Senior Consultant at IBM.

He has won numerous awards, including the Distinguished Engineering Alumni Award, the NASA Exceptional Achievement Medal, IBM Golden Circle Award, the Department of Education Merit Fellowship, and several fellowships from the University of Colorado. Ashok holds a Ph.D. in Electrical Engineering from the University of Colorado at Boulder.

About #Podcast:
#FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Wanna Join?
If you or anyone you know wants to join in,
Register your interest @ http://play.analyticsweek.com/guest/

Want to sponsor?
Email us @ info@analyticsweek.com

Keywords:
#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Source

May 10, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Pacman (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Dec 21, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

>> “Putting Data Everywhere”: Leveraging Centralized Business Intelligence for Full-Blown Data Culture by jelaniharper

>> Finance Best Practices Are Changing—Is Your Organization Keeping Pace? by analyticsweekpick

Wanna write? Click Here

[ NEWS BYTES]

>> 6 Tips for Keeping IoT Devices Safe – Security Sales & Integration Under IoT

>> Software AG ramps up Australian push with IoT platform – IoT Hub Under Streaming Analytics

>> Customer experience in a new dimension: 3D Augmented Reality App Mercedes cAR and Virtual Reality goggles … – Automotive World (press release) Under Customer Experience

More NEWS ? Click Here

[ FEATURED COURSE]

Python for Beginners with Examples


A practical Python course for beginners with examples and exercises…. more

[ FEATURED READ]

The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t


People love statistics. Statistics, however, do not always love them back. The Signal and the Noise, Nate Silver’s brilliant and elegant tour of the modern science-slash-art of forecasting, shows what happens when Big Da… more

[ TIPS & TRICKS OF THE WEEK]

Winter is coming, warm your Analytics Club
Yes and yes! As we head into winter, what better time to talk about our increasing dependence on data analytics to help with our decision making. Data- and analytics-driven decision making is rapidly working its way into our core corporate DNA, yet we are not building practice grounds to test those models fast enough. Snug-looking models can have hidden nails that induce uncharted pain if left unchecked. This is the right time to start thinking about putting an Analytics Club (a Data Analytics CoE) in your workplace to lab out best practices and provide a test environment for those models.

[ DATA SCIENCE Q&A]

Q: What is a star schema? Lookup tables?
A: The star schema is a traditional database schema with a central (fact) table (the “observations”, with database “keys” for joining with satellite tables, and with several fields encoded as IDs). Satellite tables map IDs to physical names or descriptions and can be “joined” to the central fact table using the ID fields; these tables are known as lookup tables, and are particularly useful in real-time applications, as they save a lot of memory. Sometimes star schemas involve multiple layers of summarization (summary tables, from granular to less granular) to retrieve information faster.

Lookup tables:
– Arrays that replace runtime computations with a simpler array indexing operation

Source
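A toy version in pandas (table and column names invented) may make this concrete: the central fact table carries IDs and measures, and the small lookup tables resolve those IDs to names.

```python
import pandas as pd

# Central fact table: IDs plus measures
facts = pd.DataFrame({
    "product_id": [1, 2, 1, 3],
    "region_id":  [10, 10, 20, 20],
    "units_sold": [5, 3, 7, 2],
})

# Satellite (lookup) tables: ID -> name/description
product_lookup = pd.DataFrame({"product_id": [1, 2, 3],
                               "name": ["Widget", "Gadget", "Gizmo"]})
region_lookup = pd.DataFrame({"region_id": [10, 20],
                              "region": ["East", "West"]})

# Joining on the ID fields resolves the encoded fields
report = (facts.merge(product_lookup, on="product_id")
               .merge(region_lookup, on="region_id"))
print(report)

# A less granular summary layer, as described above
print(report.groupby("region")["units_sold"].sum())
```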

[ VIDEO OF THE WEEK]

@JohnTLangton from @Wolters_Kluwer discussed his #AI Lead Startup Journey #FutureOfData #Podcast

Subscribe to YouTube

[ QUOTE OF THE WEEK]

War is 90% information. – Napoleon Bonaparte

[ PODCAST OF THE WEEK]

@JohnNives on ways to demystify AI for enterprise #FutureOfData #Podcast

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

73% of organizations have already invested or plan to invest in big data by 2016

Sourced from: Analytics.CLUB #WEB Newsletter

Eradicating Silos Forever with Linked Enterprise Data

The cry for linked data began innocuously enough with the simple need to share data. It has reverberated among countless verticals, perhaps most ardently in the health care space, encompassing both the public and private sectors. The advantages of linked enterprise data can positively affect any organization’s ROI and include:

  • Greater agility
  • More effective data governance implementation
  • Coherent data integration
  • Decreased time to action for IT
  • Increased trust in data

Still, the greatest impact that linked data has on the enterprise is its capacity to vanquish, once and for all, the silo-based culture that still persists and stands squarely in the way of a true data culture taking hold.

According to TopQuadrant Managing Director David Price, for many organizations, “The next natural step in cases where they have data about the same thing that comes from different systems is to try to make links between those so they can have one single view about sets of data.”

And, if those links are managed correctly, they may very well lead to the proverbial single version of the truth.

From Linked Open Data…
The concept of linked enterprise data stems directly from linked open data, which has typically operated at the nexus between the public and private sectors (although it can involve either one singularly) and enabled organizations to link to and access data that are not theirs. Because of the uniform approach of semantic technologies, that data is exchangeable with virtually any data management system that utilizes smart data techniques. “As long as we make sure that all of the data that we put in a semantic data lake adheres to standard RDF technology and we use standard ontologies and taxonomies to format the data, they’re already integrated,” said Franz CEO Jans Aasman. “You don’t have to do anything; you can just link them together.” Thus, organizations in the private sector can readily integrate public sector linked open data into their analytics and applications in a time frame that largely bypasses typical pain points of integration and data preparation.
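Here is a small sketch of that “already integrated” claim using the rdflib Python library; the vocabulary and resources are invented for illustration. Two sources that share a vocabulary are merged by simply loading their triples into one graph, after which a single query spans both.

```python
from rdflib import Graph

source_a = """
@prefix ex: <http://example.org/> .
ex:aspirin a ex:Drug ; ex:treats ex:headache .
"""
source_b = """
@prefix ex: <http://example.org/> .
ex:aspirin ex:manufacturedBy ex:AcmePharma .
"""

g = Graph()
g.parse(data=source_a, format="turtle")
g.parse(data=source_b, format="turtle")  # "linking" is just loading more triples

# One query now spans both sources, with no ETL or schema mapping step
query = "SELECT ?p ?o WHERE { <http://example.org/aspirin> ?p ?o }"
for row in g.query(query):
    print(row.p, row.o)
```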

Such celerity could prove influential in massive data sharing endeavors like the Panama Papers investigation, in which data were exchanged across international borders and numerous databases to help journalists track down instances of financial fraud. Price is leading TopQuadrant’s involvement in the Virtual Construction for Roads (V-CON) project, in which the company is contributing to an IT system that harmonizes data for road construction between a plethora of public and private sector entities in Holland and Sweden. When asked if TopQuadrant’s input on the project was based on integrating and linking data among these various parties, Price commented, “That’s exactly where the focus is.”

…to Linked Enterprise Data Insight
Linked data technologies engender an identical effect when deployed within the enterprise. In cases in which different departments require the same data for different purposes, or in instances in which there are multiple repositories or applications involving the same data, linked enterprise data can provide a comprehensive data source comprised of numerous tributaries relevant for all applications. “The difference is this stuff is enabled to also allow you to extract all the data and make it available for anybody to download it… and that includes locally,” Price commented. “You get more flexibility and less vendor lock-in by using standards.” In what might be the most compelling use case for linked enterprise data, organizations can also link all of their data–stemming from internal and external sources–for a more profound degree of analytics based on relationship subtleties that semantic technologies instinctively perceive. Cambridge Semantics VP of Marketing John Rueter weighed in on these benefits when leveraged at scale.

“That scale is related to an almost sort of instantaneous querying and results of an entire collection of data. It has eliminated that linear step-wise approach of multiple links or steps to get at that data. The fact that you’re marrying the combination of scale and speed you’re also, I would posit, getting better insights and more precise and accurate results based upon the sets of questions you’re asking given that you’ve got the ability to access and look at all this data.”

Agile Flexibility
Linked enterprise data allows all data systems to share ontologies—semantic models—that readily adjust to include additional models and data types. The degree of flexibility they facilitate is underscored by the decreased amounts of data preparation and maintenance required to sustain what is in effect one linked system. Instead of addressing modeling requirements and system updates individually, linked enterprise data systems handle these facets of data management holistically and, in most instances, singularly. Issuing additional requirements or updating different databases in a linked data system necessitates doing so once in a centralized manner that is simultaneously reflected in the individual components of the linked data systems. “In a semantic technology approach the data model or schema or ontology is actually its own data,” Price revealed. “The schema is just more data and the data in some database that represents me, David Price, can actually be related to different data models at the same time in the same database.” This sort of flexibility makes for a much more agile environment in which IT teams and end users spend less time preparing data, and more reaping their benefits.

Data Governance Ramifications
Although linked enterprise data doesn’t formally affect data governance, defined as the rules, roles, and responsibilities upon which sustainable use of data depends, it greatly improves its implementation. Whether ensuring regulatory compliance or reuse of data, standards-based environments furnish consistent semantics and metadata that are understood in a uniform way—across as many different systems as an enterprise has. One of the most pivotal points for implementing governance policy is ensuring that organizations are utilizing the same terms for the same things, and vice versa. “The difference our technology brings is that things are much more flexible and can be changed more easily, and the relationships between things can be made much more clear,” Price remarked about the impact of linked data on facilitating governance. Furthermore, the uniform approach of linked data standards ensures that “the items that are managed are accurate, complete, have a good definition that’s understandable by discipline experts, and that sometimes have a more general business glossary definition and things like that,” he added.

Security
There are multiple facets of data governance that are tied to security, such as who has the authority to view which data and how. In a linked data environment such security is imperative, particularly when sharing data across the public and private sectors. Quite possibly, security measures are reinforced even more in linked data settings than in others, since they are fortified by conventional security methods and those particular to smart data technologies. The latter involves supplementing traditional data access methods with semantic statements or triples; the former includes any array of conventional methods to protect the enterprise and its data. “The fact that you use a technology that enables things to be public doesn’t mean they have to be,” Price said. “Then you put on your own security policies. It’s all stored in a database that can be secured at various levels of accessing the database.”
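As a hypothetical illustration (the vocabulary is invented and this is not any vendor’s actual API), an access rule can itself be stated as a triple and checked alongside the data it protects:

```python
from rdflib import Graph, Namespace

SEC = Namespace("http://example.org/security#")

g = Graph()
g.parse(data="""
@prefix sec: <http://example.org/security#> .
sec:salaryData sec:viewableBy sec:HRRole .
""", format="turtle")

def can_view(role, dataset):
    """The access policy is just another triple in the graph."""
    return (dataset, SEC.viewableBy, role) in g

print(can_view(SEC.HRRole, SEC.salaryData))  # True
print(can_view(SEC.Intern, SEC.salaryData))  # False
```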

Eradicating Silos
Implicit in all of the previously mentioned benefits is the fact that linked enterprise data effectively eradicates the proliferation of silos which has long complicated data management as a whole. Open data standards facilitate much more fluid data integration while reducing the time spent on data preparation, shifting the emphasis to insight and action. This ability to rid the enterprise of silos transcends verticals, a fact which Price readily acknowledged. “Our approach to the V-Con project is that although the organizations involved in this are National Roads Authority, our view is that the problem they are trying to solve is a general one across more than the roads industry.” In fact, it is applicable to the enterprise in general, particularly one attempting to sustain its data management in a long-term, streamlined manner to deliver both cost and performance boons.

Source

May 03, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Data security (Source)

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> The Value of Opinion versus Data in Customer Experience Management by bobehayes

>> The Case For Pure Play Virtualization by analyticsweekpick

>> Steph Curry’s Season Stats in 13 lines of R Code by stattleship

Wanna write? Click Here

[ NEWS BYTES]

>> How Machine Learning can help Cryptocurrency Traders Maximize their Gains – Cryptovest Under Machine Learning

>> Mulvaney response to CFPB data security gaps baffles cyber experts – American Banker Under Data Security

>> No experience + hiring freeze + political donor = $121000 job? – The Boston Globe Under Data Security

More NEWS ? Click Here

[ FEATURED COURSE]

Machine Learning


6.867 is an introductory course on machine learning which gives an overview of many concepts, techniques, and algorithms in machine learning, beginning with topics such as classification and linear regression and ending … more

[ FEATURED READ]

The Misbehavior of Markets: A Fractal View of Financial Turbulence


Mathematical superstar and inventor of fractal geometry, Benoit Mandelbrot, has spent the past forty years studying the underlying mathematics of space and natural patterns. What many of his followers don’t realize is th… more

[ TIPS & TRICKS OF THE WEEK]

Save yourself from a zombie apocalypse of unscalable models
One living, breathing zombie in today’s analytical models is the conspicuous absence of error bars. Not every model is scalable or holds its ground as data grows. The error bars attached to almost every model should be duly calibrated: as business models rake in more data, error bars keep them sensible and in check. If error bars are not accounted for, we make our models susceptible to failure, leading us to a Halloween we never want to see.

[ DATA SCIENCE Q&A]

Q: What is a star schema? Lookup tables?
A: The star schema is a traditional database schema with a central (fact) table (the “observations”, with database “keys” for joining with satellite tables, and with several fields encoded as IDs). Satellite tables map IDs to physical names or descriptions and can be “joined” to the central fact table using the ID fields; these tables are known as lookup tables, and are particularly useful in real-time applications, as they save a lot of memory. Sometimes star schemas involve multiple layers of summarization (summary tables, from granular to less granular) to retrieve information faster.

Lookup tables:
– Arrays that replace runtime computations with a simpler array indexing operation

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @Beena_Ammanath, @GE

Subscribe to YouTube

[ QUOTE OF THE WEEK]

The world is one big data problem. – Andrew McAfee

[ PODCAST OF THE WEEK]

#FutureOfData with @CharlieDataMine, @Oracle discussing running analytics in an enterprise

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

94% of Hadoop users perform analytics on large volumes of data not possible before; 88% analyze data in greater detail; while 82% can now retain more of their data.

Sourced from: Analytics.CLUB #WEB Newsletter

Turning Business Users into Citizen Data Scientists

The data scientist position may well be one of the most multi-faceted jobs around, involving aspects of statistics, practical business knowledge, interpersonal skills, programming languages, and many other wide-sweeping qualifications.

For business end users of data-driven processes, however, these professionals often seem like glorified IT personnel: the new hires that users go to, and wait upon, to get the data required to do their jobs.

Today, analytics platforms featuring conversational, interactive responses to questions can eliminate the backlog of demands for data while transforming business users into citizen data scientists, capable of performing enough lower-level data science functions to conduct their own analytics.

Moreover, these platforms equip users with the means to modify the data and answer their own questions as needed, giving them a greater sense of ownership of, and perhaps even pride in, the data that affects their jobs.

Ben Szekely, Cambridge Semantics Vice President of Solutions and Pre-Sales, reflected that, “Because business users are getting answers back in real time they’re able to start making judgments about the data, and they’re developing a level of trust and intuition about the data in their organization that wasn’t there before.”

Real-Time Answers, Ad-Hoc Questions

The most immediately demonstrable facet of a citizen data scientist is the ability to answer one’s own data-centric questions autonomously. Dependence on external IT personnel or data scientists is not required with centralized data lake options enhanced by smart data technologies and modern query mechanisms. This combination, which leverages in-memory computing, parallel processing, and the power of the cloud to scale on demand, exploits the high-resolution copy of data assets linked together within a semantic data lake. Users are able to issue their own questions and answers of the resulting enterprise knowledge graph through either a simple web browser interface or their favorite self-service BI tool of choice—the latter of which is likely already in use at their organization. “They’re getting their answers through a real-time conversation and interaction with the content, versus going and asking someone and getting back a Powerpoint deck,” Szekely mentioned. “That’s a very static thing which they can’t converse with or really understand necessarily, or [understand] how the answer was come to.”

Understanding Answers and Data

Full-fledged data scientists are able to trust in data and analytics results because they have an intimate knowledge of those data and the processes they underwent to supply answers to questions. Citizen data scientists can have that same understanding and readily gain insight into data provenance. The underlying graph mechanisms powering these options deliver the full lineage of the data’s sources, transformations, and use, so citizen data scientists can retrace the data’s journey to their analytics results. Even lay business users can understand how to traverse a knowledge graph for these purposes, because all of the data modeling is done in terms predicated on business concepts and processes—as opposed to arcane query languages or IT functions. “We talk about the way things are related to basic concepts and properties,” Szekely said. “You don’t have to be able to read an ER diagram to understand the data. You just have to be able to look at basic names and relationships.” Those names and relationships are described in business terms to maximize end user understanding of data.
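A hypothetical sketch of such a traversal with rdflib (the business vocabulary is invented): data lineage reads as relationships between business concepts rather than as joins over cryptic table names.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix biz: <http://example.org/business#> .
biz:Q2Report  biz:derivedFrom biz:SalesFeed .
biz:SalesFeed biz:derivedFrom biz:CRMSystem .
""", format="turtle")

# "Where did this report's numbers come from?" as a property-path
# traversal over the knowledge graph (one or more derivedFrom hops)
for row in g.query("""
    PREFIX biz: <http://example.org/business#>
    SELECT ?source WHERE { biz:Q2Report biz:derivedFrom+ ?source . }
"""):
    print(row.source)
```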

Selecting and Modeling Data

Another key function of the data science position is determining which sources are relevant for questions, and modeling their data so that a particular application can extract value from them. Citizen data scientists can also perform this basic functionality autonomously with a number of automated data modeling features. Relational technologies, for example, require considerable time for constructing data models, calibrating additional data to fit predefined schemas, and successfully mapping it all together. They require data scientists or IT to “build that monolithic data warehouse model and then map everything in it,” Szekely acknowledged. Conversely, smart data lakes enable users to begin analyzing data as soon as they are ingested, without having to wait for data to be formatted to fit the schema requirements of the repository. There are even basic data cleaning and preparation formulas to facilitate this prerequisite for citizen data scientists. According to Szekely, “You can bring in new data and we’ll build a model kind of automatically from the source data. You can start exploring it and looking at it without doing any additional modeling. The modeling comes in when you want to start connecting it up to other sources or reshaping the data to help with particular business problems.”

Enterprise Accessible Data Science

Previously, data science was relegated to the domain of a select few users who functioned as gatekeepers for the rest of the enterprise. However, self-service analytics platforms are able to effectively democratize some of the rudimentary elements of this discipline so business users can begin accessing their own data. By turning business users into citizen data scientists, these technologies are helping to optimize manpower and productivity across the enterprise.

 

Source by jelaniharper