Aug 29, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Accuracy check  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> How can you reap the advantages of Big Data in your enterprise? Services you can expect from a Remote DBA Expert by thomassujain

>> Why Organizations Are Choosing Talend vs Informatica by analyticsweekpick

>> Predicting UX Metrics with the PURE Method by analyticsweek

Wanna write? Click Here

[ FEATURED COURSE]

CS229 – Machine Learning

image

This course provides a broad introduction to machine learning and statistical pattern recognition. … more

[ FEATURED READ]

The Industries of the Future

image

The New York Times bestseller, from leading innovation expert Alec Ross, a “fascinating vision” (Forbes) of what’s next for the world and how to navigate the changes the future will bring…. more

[ TIPS & TRICKS OF THE WEEK]

Data Have Meaning
We live in a Big Data world in which everything is quantified. While the emphasis of Big Data has been on distinguishing the three characteristics of data (the infamous three Vs), we need to be cognizant of the fact that data have meaning. That is, the numbers in your data represent something of interest, an outcome that is important to your business. Understanding what those numbers represent is central to assessing the veracity of your data.

[ DATA SCIENCE Q&A]

Q:How do you control for biases?
A: * Choose a representative sample, preferably by a random method
* Choose an adequate size of sample
* Identify all confounding factors if possible
* Identify sources of bias and include them as additional predictors in statistical analyses
* Use randomization: by randomly recruiting or assigning subjects in a study, all our experimental groups have an equal chance of being influenced by the same bias

Notes:
– Randomization: in randomized controlled trials, research participants are assigned by chance, rather than by choice, to either the experimental group or the control group.
– Random sampling: obtaining data that is representative of the population of interest
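
A minimal sketch of the randomization step (the two-arm split and the subject IDs are purely illustrative, not tied to any particular study):

import random

def randomize(subjects, seed=42):
    """Randomly assign subjects to a control or experimental arm, so any
    unmeasured bias is spread evenly across both groups."""
    rng = random.Random(seed)
    shuffled = subjects[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# Hypothetical subject pool, purely for illustration
groups = randomize([f"subject_{i}" for i in range(20)])
print(len(groups["experimental"]), len(groups["control"]))   # 10 10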

Source

[ VIDEO OF THE WEEK]

@AnalyticsWeek: Big Data Health Informatics for the 21st Century: Gil Alterovitz

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Data matures like wine, applications like fish. – James Governor

[ PODCAST OF THE WEEK]

#FutureOfData with @CharlieDataMine, @Oracle discussing running analytics in an enterprise

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

This year, over 1.4 billion smart phones will be shipped – all packed with sensors capable of collecting all kinds of data, not to mention the data the users create themselves.

Sourced from: Analytics.CLUB #WEB Newsletter

The Methods UX Professionals Use (2018)

The wide range of UX methods is one of the things that makes UX such an interesting field.

Some methods have been around for decades (like usability testing), others are more recent additions, while some seem to be just slight variations on other existing methods.

We’ve been tracking and analyzing the methods UX professionals report using for a few years by analyzing the results of the UXPA salary survey.

The recently completed 2018 UXPA salary survey provides one of the more comprehensive and unbiased pictures of UX method usage and how it’s changed over time. The survey this year contains data from over 1,300 respondents from 51 countries and is comparable to the historical data from 2016, 2014, and 2011, with similarly sized samples.

Here is an overview of how common methods are used and how their use has changed over time. Differences of greater than about 5% across years or methods are statistically significant.
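
As a rough check of that rule of thumb, here is a minimal two-proportion z-test sketch, assuming roughly 1,300 respondents in each survey year (the focus-group figures from the User Research section below are used as the example):

from math import sqrt
from scipy.stats import norm

def two_prop_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - norm.cdf(abs(z)))

# Focus groups: 37% in 2016 vs 32% in 2018, assuming ~1,300 respondents each year
z, p = two_prop_z(0.37, 1300, 0.32, 1300)
print(round(z, 2), round(p, 4))   # z is about 2.7, p < 0.01: a ~5-point gap is significant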

Usability Testing

As has been the case since 2011, usability testing is the most-used method, with three-quarters of respondents using it. Usability testing has evolved in the last decade with many more options to provide unmoderated and remote moderated testing, in addition to traditional lab-based testing. It’s not surprising this core method remains popular in 2018.

Expert Reviews

The term Heuristic Evaluation is often used interchangeably with expert review (much to the chagrin of Rolf Molich). Both expert reviews and heuristic evaluations are analytic techniques that require an evaluator to review an interface against a set of guidelines or principles. This method is frequently used because it’s faster and less expensive than many other methods. 62% of respondents report using expert reviews, which is slightly down from 64% in 2016 and showing a downward trend since 2011.

Benchmarking

Benchmarking a website using standardized metrics and against competitors is an excellent tool to understand how design changes impact the user experience. I strongly recommend the approach for tracking interface changes over time. If you need help, I recently wrote a detailed and practical book for running a benchmark study.

Almost half of respondents report benchmarking or running competitive studies (which is good), but usage is down a bit since 2011.

Eye Tracking

Eye tracking continues to remain a niche method with only 10% of respondents reporting using it. While eye tracking has gotten a lot cheaper in the last decade, its usage is down since we started tracking it in 2011 when 16% reported using it. Its more limited use may reflect the considerable time and training needed to include eye tracking. For example, when we conduct eye tracking studies for our clients, we allot around 10 minutes of processing and analysis time for every one minute of video.

User Research

User research is a broad term that encompasses many methods both qualitative and quantitative. Not surprisingly, then, most respondents report doing some form of user research. The percentage has remained relatively consistent since 2011 (see the figure below), with the exception of focus groups, which dropped statistically from 2016 (37%) to 2018 (32%). With focus groups losing traction among UX professionals, it’s not surprising this method continues to lose popularity (which is probably a good thing as other methods are usually more appropriate in UX). Surveys remain a popular UX research method despite some criticism and I suspect they will remain so as they continue to be integrated into other methods like unmoderated studies.

Defining Users & Requirements

Understanding who the users are and what they’re trying to accomplish with an interface can be essential to improving the user experience. Personas have remained a popular method for defining users despite criticism of how they’re implemented. It’s not surprising then that one of our most popular articles is on how to make them more scientific.

The number of respondents gathering requirements and conducting a contextual inquiry also dropped slightly from 2016. Task analysis continues to show a decline with its usage dipping from 52% in 2011 to 39% this year.

Information Architecture

We’ve consistently found that the ability to find products and information easily is what differentiates good websites and products from bad ones. There’s a science to findability and it involves a number of methods and techniques. The UXPA survey asks about one popular method, card sorting (which is included in our MUIQ platform), which has remained relatively popular since we began tracking it in 2011.

Prototyping & Design

Design is a fundamental part of the user experience. Not surprisingly, more than 70% of respondents report using some form of prototype or wireframe. We test high-fidelity prototypes built from InVision quite frequently on desktop and mobile in our MUIQ app. We’ve also found UX metrics to be reliable, even when used on these early-stage prototypes.

Accessibility

Accessibility remains a more niche activity for this group of respondents but there seems to be a steady growth in assessing accessibility since 2011. In the past seven years, it’s increased from 14% to 21% (which is also a good thing).

Strategy, Training, & Design Thinking

Strategy, project management, and training in UX have been consistently common activities since 2014, with more than a third of participants doing some project management, training, content strategy, and strategic consulting each year. The most common activity remains conceptual design, which is even showing a slight increase in 2018 compared to 2016.

Summary

The results of the 2018 UXPA salary survey reveal that the core methods in UX remain popular today, including usability testing, expert reviews, personas, card-sorting, and prototyping. Some methods like focus groups and eye tracking have continued to lose popularity over the last seven years, while accessibility reviews have increased in usage.

Source by analyticsweek

How the age of Big Data made statistics the hottest job around

Big business’s need to pull insights from terabyte-sized databases spells boom times for data scientists.

Two years ago, Ritchie Bros. Auctioneers faced an interesting dilemma: The Vancouver-based company, which organizes auctions for industrial equipment, was accumulating massive amounts of information on its customers and the items it was listing for sale, but it had no one on staff who could really dive deep and make sense of it all. The company wanted to take a more data-driven approach to pricing, because it needed a better idea of what the market would pay for equipment. When structuring a deal for a used oilfield drill for an upcoming auction in Texas, for example, staff look at what similar equipment sold for in that area in the recent past and use that as a guide. But final sale prices depend on other factors, too. The quality of used equipment can vary wildly, and the resource sector that often buys these products is volatile, meaning demand can fluctuate without warning. In order to develop a more accurate, responsive pricing model, Ritchie Bros. needed help. “The problems we were having were too complex for the skills we had,” says Jeremy Coughlin, director of business intelligence.

So Coughlin went on a hunt for data scientists: experts who can derive insights from large, complex data sets. He checked LinkedIn, scoured online forums and attended industry networking events. He experienced first-hand something that a growing number of Canadian companies are now learning: Data experts are in short supply. Many businesses are struggling to find talent, even as more people enter the field. The number of data professionals in Canada—people employed as statisticians, mathematicians and actuaries—has increased by 48% over the past five years, making it the fastest-growing job category in the country. And demand isn’t letting up. A survey conducted last year by IDC found that 53% of large Canadian organizations said lack of talent was the biggest impediment to successful completion of big data projects.

Consulting firm McKinsey and Co. estimates that the U.S. currently faces a shortage of up to 190,000 people with analytical expertise and 1.5 million managers with the skills to understand and act on what big data can reveal. Universities are jumping on the data trend and attempting to alleviate the talent squeeze by introducing programs to train a new generation of data scientists. “There are not many programs right now, and the output of students is very low,” says Uwe Glässer, a professor of computer science at Simon Fraser University, which recently launched a data science program. Until there are more qualified grads in the field, companies will have to seek out talent.

Coughlin’s six-month search ended at a university, but not with a student: Instead, he hired a professor who taught business intelligence and data mining at the University of British Columbia. Ritchie Bros. now has a small team of data experts, and one of their duties is to develop a more predictive approach to pricing. When the company auctions that oilfield drill, for example, the goal is for its pricing model to forecast demand in the near future based on different factors, such as the price of oil, leaving Ritchie Bros. less vulnerable to market surprises.

Many of the data scientists employed today have jumped ship from the academic world; they’re among the relative few who know what to do with the massive, unstructured sets of data the world is now awash in. IBM estimates that we create 2.5 quintillion bytes of data every day, and because of the quantity of data and the sophisticated algorithms required to decipher it, run-of-the-mill business analysts can’t cope with it effectively.

When Steve Woods needed to hire a data scientist last year, he knew it would be hard to find someone and that potential hires were unlikely to be posting resumés and scouring job boards. Woods is the co-founder of a Toronto-based startup called Nudge, which is developing a software platform to help business professionals keep track of their weak ties, meaning people they don’t keep in touch with regularly. The software can, for example, recommend that you send a specific news article to a particular contact in order to restart a conversation. The underlying software has to sift through and make sense of a lot of information—your relationship to your contacts, what their interests are, what you’ve talked about in the past and what’s happening in the world—so it can recommend what to talk about and when. Woods knew he needed a data scientist to make it all work. “I think that was probably the third conversation in terms of folks to hire for the team,” he says.

He was fortunate enough to be fairly well-connected in the Canadian tech world and started asking around. Eventually he found Zoe Katsimitsoulia, who had a PhD in computational biology from the University of Oxford and was working at another Toronto company. Katsimitsoulia says she joined Nudge because it sounded like an interesting challenge. “There’s a lot more immediate feedback, and that’s pretty rewarding,” she adds. “In academia, you can end up doing research for years.”

Indeed, Nudge’s experience shows how companies can woo data scientists, beyond throwing money at them. (Salaries are rising fast, likely as companies fight for talent. According to Statistics Canada, wages jumped by 38% since 2009.) Data scientists typically like to tackle big challenges, and businesses often provide an opportunity to do so. They also like to see their work tied to business results so they know they’re having an impact.

That’s how Indigo had success in hiring its small team of data scientists. “We do have to make sure the type of work they do is really compelling,” says Sumit Oberai, the company’s chief information officer. For the data team, the main focus is on improving recommendations for customers. The company collects data on online purchases and merges this information with customers’ in-store purchases, if they regularly use their loyalty cards. The data scientists are constantly fine-tuning the algorithms that analyze this information to provide personal book recommendations.

The work has revealed a few surprising insights, such as a segment of book buyers who enjoy both romance and sports fiction. Recommendations drive about 2% of sales today, according to Oberai, but Indigo hopes to boost that number as more customers use its loyalty program and mobile app. The company is increasingly crunching data to determine the optimal product mix at individual stores and the ideal locations to set up new outlets (it relied on data to figure out the best place to open its first American Girl doll boutique last year). “Even five years ago, that would have been done predominantly intuitively,” Oberai says.

Despite the talent grab, not every company is aware of what data scientists can do for them. Chris Bildfell graduated in 2013 with a PhD in astronomy and astrophysics from the University of Victoria and had his heart set on staying in academia. But with the slim job prospects, he eventually applied to a company called Mobify, which makes a mobile shopping platform for retailers. Mobify wasn’t even looking for a data scientist; it had posted an opening for a junior analyst. “I knew the job wasn’t really me, but I just wanted to get my foot in the door and show them what I could do,” he says. Bildfell essentially created the role of data scientist for himself and now analyzes huge piles of information to uncover insights for Mobify. For example, he wrote an algorithm that predicts the page you’re likely to visit next on a retailer’s mobile website in order to preload it. He also crunches data to see how the company’s servers are operating, to deliver analytics to customers and even to check how efficiently Mobify hires new staff.

The role of the data scientist is ultimately a big one, and so most organizations are breaking the job down into different components, says Paul Lovell, vice-president of professional services at SAS Canada. For example, a company may employ people who understand the IT and logistical aspects of data management, and others who figure out the business applications for its reams of information, allowing data scientists to focus purely on the analytical aspects. “There are people who can do it all,” Lovell says, “but they’re as rare as hen’s teeth.” Bildfell says it’s also possible to train certain employees to be data scientists. In his view, they need to be masters of statistics, have some programming experience and possess novel problem-solving skills. “Someone who has two of these skills can improve the third and get from a junior position to a full data scientist role,” he says.

Meanwhile, post-secondary institutions are providing a more formal path to the job. In addition to SFU’s new program, Carleton University has established a collaborative master’s program in data science, and others, like Ryerson University and the University of Toronto, have launched certificate programs for managers and executives.

The push to beef up training suggests the shortage won’t last forever, but anyone looking to enter the field in the near-term can expect to find jobs waiting, says Oberai. “This will be an absolutely booming industry for the next 10 years.”

Originally posted via “How the age of Big Data made statistics the hottest job around”

Source

Aug 22, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Conditional Risk  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Eurovision Data: Should Scandinavia Trust the Swedes? by analyticsweek

>> Why data is no longer just an IT function by analyticsweekpick

>> The Fate of Data Science: Into the Hands of Many by jelaniharper

Wanna write? Click Here

[ FEATURED COURSE]

Learning from data: Machine learning course

image

This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applicati… more

[ FEATURED READ]

On Intelligence

image

Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one strok… more

[ TIPS & TRICKS OF THE WEEK]

Winter is coming, warm your Analytics Club
Yes and yes! As we head into winter, what better time to talk about our increasing dependence on data analytics to help with decision making. Data- and analytics-driven decision making is rapidly sneaking its way into our core corporate DNA, yet we are not building practice grounds to test those models fast enough. Snug-looking models can hide nails that cause uncharted pain if they go unchecked. This is the right time to start thinking about putting an Analytics Club [Data Analytics CoE] in your workplace to help lab out best practices and provide a test environment for those models.

[ DATA SCIENCE Q&A]

Q:What is: collaborative filtering, n-grams, cosine distance?
A: Collaborative filtering:
– Technique used by some recommender systems
– Filtering for information or patterns using techniques involving collaboration of multiple agents: viewpoints, data sources.
1. A user expresses his/her preferences by rating items (movies, CDs, etc.)
2. The system matches this user’s ratings against other users’ and finds the people with the most similar tastes
3. The system then recommends items that those similar users have rated highly but that this user has not yet rated (see the sketch below)
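
A minimal user-based sketch of those three steps, using a toy ratings matrix (the users, items and scores are invented purely for illustration):

import numpy as np

# Hypothetical ratings matrix: rows = users, columns = items, 0 = not yet rated
ratings = np.array([
    [5, 4, 0, 1],   # target user
    [5, 5, 4, 1],   # similar taste
    [1, 0, 5, 4],   # different taste
], dtype=float)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

target = ratings[0]
# Step 2: find the most similar other user
sims = [cosine(target, other) for other in ratings[1:]]
best = 1 + int(np.argmax(sims))
# Step 3: recommend items the similar user rated highly that the target hasn't rated
recommendations = [j for j in range(ratings.shape[1])
                   if target[j] == 0 and ratings[best, j] >= 4]
print(best, recommendations)   # most similar user and the item(s) to recommend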

n-grams:
– Contiguous sequence of n items from a given sequence of text or speech
– Example: ‘Andrew is a talented data scientist’
– Bi-grams: ‘Andrew is’, ‘is a’, ‘a talented’, …
– Tri-grams: ‘Andrew is a’, ‘is a talented’, ‘a talented data’, …
– An n-gram model models sequences using the statistical properties of n-grams; see: Shannon Game
– More concisely, an n-gram model gives P(X_i | X_{i-(n-1)}, …, X_{i-1}): a Markov model
– In other words, each word depends only on the last n-1 words

Issues:
– when facing infrequent n-grams
– solution: smooth the probability distributions by assigning non-zero probabilities to unseen words or n-grams
– Methods: Good-Turing, Backoff, Kneser-Ney smoothing

Cosine distance:
– How similar are two documents?
– Perfect similarity/agreement: 1
– No agreement : 0 (orthogonality)
– Measures the orientation, not magnitude

Given two vectors A and B representing word frequencies:
cosine-similarity(A, B) = ⟨A, B⟩ / (||A|| · ||B||)
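
A short plain-Python sketch of both definitions, reusing the example sentence above and two made-up documents (standard library only; the documents are invented for illustration):

from collections import Counter
from math import sqrt

def ngrams(tokens, n):
    """Contiguous n-item windows over a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def cosine_similarity(a, b):
    """cos(A, B) = <A, B> / (||A|| * ||B||) over word-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) | set(b))
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

tokens = "Andrew is a talented data scientist".split()
print(ngrams(tokens, 2))   # bi-grams: ('Andrew', 'is'), ('is', 'a'), ...

doc1 = Counter("data science is fun".split())
doc2 = Counter("data science is hard".split())
print(round(cosine_similarity(doc1, doc2), 3))   # 0.75: similar but not identical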

Source

[ VIDEO OF THE WEEK]

@CRGutowski from @GE_Digital on Using #Analytics to #Transform Sales #FutureOfData #Podcast

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

What we have is a data glut. – Vernor Vinge

[ PODCAST OF THE WEEK]

Understanding Data Analytics in Information Security with @JayJarome, @BitSight

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Brands and organizations on Facebook receive 34,722 Likes every minute of the day.

Sourced from: Analytics.CLUB #WEB Newsletter

Simplifying Data Warehouse Optimization

When I hear the phrase “Data Warehouse Optimization”, shivers go down my spine.  It sounds like such a complicated undertaking.  After all, data warehouses are big, cumbersome and complex systems that can store terabytes and even petabytes of data that people depend on to make important decisions on the way their business is run.  The thought of any type of tinkering with such an integral part of a modern business would make even the most seasoned CIOs break out into cold sweats.

However, the value of optimizing a data warehouse isn’t often disputed.  Minimizing costs and increasing performance are mainstays on the “to-do” lists of all Chief Information Officers.  But that is just the tip of the proverbial iceberg.  Maximize availability.  Increase data quality.  Limit data anomalies.  Eliminate depreciating overhead.  These are the challenges that become increasingly more difficult to achieve when stuck with unadaptable technologies and confined by rigid hardware specifications.

The Data Warehouse of the Past

Let me put it into some perspective.  Not long ago many of today’s technologies (e.g. Big Data analytics, Spark engines for processing, and cloud computing and storage) didn’t exist, yet the reality of balancing the availability of quality data with the efforts required to cleanse and load the latest information proved a constant challenge.  Every month, IT was burdened with loading the latest data into the data warehouse for the business to analyze.  However, often the loading itself took days to complete and if the load failed, or worse, the data warehouse became corrupted, recovery efforts could take weeks.  By the time last month’s errors were corrected, this month’s data needed to be loaded.

It was an endless cycle that produced little value.  Not only was the warehouse out-of-date with its information, but it was also tied up in data loading and data recovery processes, thus making it unavailable to the end user.  With the added challenges of today’s continuously increasing data volumes, a wide array of data sources and more demands from the business for real-time data in their analysis, the data warehouse needs to be a nimble and flexible repository of information, rather than a workhorse of processing power.

Today’s Data Warehouse Needs

In this day and age, CIOs can rest easy knowing that optimizing a data warehouse doesn’t have to be so daunting.  With the availability of Big Data Analytics, lightning-quick processing with Apache Spark, and the seemingly limitless and instantaneous scalability of the cloud, there are surely many approaches one can take to address the optimization conundrum.  But I have found the most effective approach to simplifying data warehouse optimization (and providing the biggest return on investment) is to remove unnecessary processing (i.e. data processing, transformation and cleansing) from the warehouse itself.  By removing the inherent burden of ETL processes, the warehouse gains a near-instantaneous increase in availability and performance.  This is commonly referred to as “Offloading ETL”.

This isn’t to say that the data doesn’t need to be processed, transformed and cleansed.  On the contrary, data quality is of utmost importance.  But relying on the same systems that serve up the data to be responsible for processing and transforming the data is robbing the warehouse of its sole purpose: providing accurate, reliable and up-to-date analysis to end-users in a timely fashion, with minimal downtime.  By utilizing Spark and its in-memory processing architecture, you can shift the burden of ETL onto other in-house servers designed for such workloads. Or better yet, shift the processing to the cloud’s scalable infrastructure and not only optimize your data warehouse, but ultimately cut IT spend by eliminating the capital overhead of unnecessary hardware.
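
A minimal PySpark sketch of that ETL-offloading pattern (the file path, cleansing rules, JDBC URL and table names are all hypothetical; the point is simply that extraction and transformation run on the Spark cluster, and only the cleansed result lands in the warehouse):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-offload").getOrCreate()

# Extract: read raw files from inexpensive storage instead of staging them in the warehouse
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform and cleanse in Spark's in-memory engine (illustrative rules only)
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("amount").cast("double") > 0)
         .withColumn("order_date", F.to_date("order_date")))

# Load: only the cleansed result is written to the warehouse (hypothetical JDBC target)
(clean.write
      .format("jdbc")
      .option("url", "jdbc:postgresql://warehouse.example.com/dw")
      .option("dbtable", "analytics.orders_clean")
      .mode("append")
      .save())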

Talend Big Data & Machine Learning Sandbox

In the new Talend Big Data and Machine Learning Sandbox, one such example illustrates how effective ETL Offloading can be.  Utilizing Talend Big Data and Spark, IT can work with business analysts to perform Pre-load analytics – analyzing the data in its raw form, before it is loaded into a warehouse – in a fraction of the time of standard ETL.  Not only does this give business users insight into the quality of the data before it is loaded into the warehouse, it also allows IT a sort of security checkpoint to prevent poor data from corrupting the warehouse and causing additional outages and challenges.

[youtube https://www.youtube.com/watch?v=CzLUdbUHKPg]

Optimizing a data warehouse can surely produce a fair share of challenges.  But sometimes the best solution doesn’t have to be the most complicated.  That is why Talend offers industry-leading data quality, native Spark connectivity and subscription-based affordability, giving you a jump-start on your optimization strategy.  Further, data integration tools need to be as nimble as the systems they are integrating.  Therefore, leveraging Talend’s future-proof architecture means you will never be out of style with the latest technology trends, giving you peace of mind that today’s solutions won’t become tomorrow’s problems.

Download the Talend Big Data and Machine Learning Sandbox today and dive into our cookbook. 

The post Simplifying Data Warehouse Optimization appeared first on Talend Real-Time Open Source Data Integration Software.

Originally Posted at: Simplifying Data Warehouse Optimization by analyticsweekpick

5 Best Practices for Hospital Executives Using Data Analytics: How to Improve Outcomes, Cut Costs

How can a healthcare provider reduce the amount of resources used to treat cardiovascular patients while simultaneously increasing the quality of care? Tom Rohleder, PhD, and other researchers at the Mayo Clinic’s Center for the Science of Health Care Delivery in Rochester, Minn., tackled that challenge using computer models and statistical analysis.

Working with the clinic’s cardiovascular surgery group, Dr. Rohleder — the associate scientific director of Mayo’s Health Systems Engineering Program — and his colleagues leveraged the fact that groups of patients with common characteristics could be managed more efficiently using standardized care protocols.

As they developed a discrete-event computer simulation model to predict minimum bed needs to achieve high-level care, the researchers found incorporating surgery growth and new recovery protocols, smoothing surgery schedules and transferring long-stay patients out of the intensive care unit could reduce bed needs by 30 percent for cardiovascular surgical patients in the ICU and progressive care unit at Mayo, according to the March 2013 case study.
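
As a loose illustration of that kind of simulation thinking (this is not Mayo's model; the surgery volume and length-of-stay figures below are invented), a tiny Monte Carlo sketch shows how shorter recovery protocols translate into fewer beds needed at the peak:

import random

def simulate_peak_beds(days=365, surgeries_per_day=10, mean_los_days=3.0, seed=1):
    """Estimate peak simultaneous bed occupancy for recovering surgical patients.
    All parameters are illustrative, not clinical figures."""
    rng = random.Random(seed)
    discharges = [0] * (days + 60)       # future discharge counts, with slack at the end
    occupied, peak = 0, 0
    for day in range(days):
        occupied -= discharges[day]      # beds freed by today's discharges
        for _ in range(surgeries_per_day):
            los = max(1, round(rng.expovariate(1.0 / mean_los_days)))
            discharges[min(day + los, len(discharges) - 1)] += 1
            occupied += 1
        peak = max(peak, occupied)
    return peak

# Shorter recovery (e.g. new protocols) lowers the peak bed requirement
print(simulate_peak_beds(mean_los_days=3.0), simulate_peak_beds(mean_los_days=2.0))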

“That kind of general approach is the kind of thing that’s being implemented and identified across Mayo Clinic,” Dr. Rohleder says of the protocols. “You create what we call ‘focused factories’ that deal with groups of patients with common clinical characteristics, identified by data analysis. These focused factories efficiently deliver high quality patient care. That frees up the time for dealing with the more complex cases.”

That case study serves as just one example of how hospitals and health systems have begun to use data analytics and health systems engineering to identify ways to reduce costs and improve care, a growing concern as healthcare reform and economic pressures push providers to spend less and deliver better services.

As hospital and health system CFOs face big questions about the industry’s future due to the unknowns of healthcare reform, data analytics can help them strengthen cost management efforts, up productivity and otherwise prepare for and adapt to the shifting healthcare landscape, says Phil Gaughan, senior director of operational improvement at Truven Health Analytics.

“Most CFOs are still asking themselves, ‘How much cost reduction and process improvement is going to be enough if healthcare reform really matures in the direction that most analysts believe it will?'” Mr. Gaughan says.

However, hospital CFOs and other executives diving into data analytics can also bump into questions and obstacles to successful implementation, ranging from resistance to suggested changes in practice to not knowing how small or big to start. Mr. Gaughan and Dr. Rohleder offer some best practices for making the most out of data analysis.

1. Emphasize patient safety and quality as top priorities. People can get the wrong idea about a data analytics initiative launched to improve operations, Dr. Rohleder says.

“Sometimes you get pigeonholed as an efficiency expert, which can sometimes mean downsizing,” he says of being a systems engineer. “I think that one of the things that we knew we had to do was make it clear that whenever we’re doing an analysis, we’re factoring in patient quality [and] patient safety. When we’re doing our systems engineering work, even though we are talking about becoming more efficient, we’re not doing it at the expense of the patient.”

At Mayo, the systems engineers and analysts simply present reports concerning what the different tradeoffs are for potential strategies, he says. Clinical professionals make the final decision.

2. Find the right talent, and make sure they understand each other. According to Dr. Rohleder, the most important factor for successful data analysis is having people who are experts in the field on board, while also ensuring they understand the medical side of operations.

“I think the reason it’s worked pretty well at Mayo Clinic is we’ve partnered the systems engineering people with the clinical people,” he says.

Mayo keeps its systems engineering staff informed about the constraints of the medical environment, he says.

3. Pay attention to both internal and external data. An internal productivity monitoring system can provide close to real-time data on resource requirements and is a fundamental part of using analytics to improve performance and reduce costs, Mr. Gaughan says. However, providers must look outward as well as inward to assess themselves. He says hospitals and health systems should compare themselves to peers and competitors alike.

“It’s critical to have external data to determine how effective we are in managing our resources relative to the industry,” he says, speaking to how a healthcare provider would see the situation. “I need to keep a very close eye on how am I doing from a cost management and productivity management perspective relative to my competitors.”

4. Start small. Healthcare providers commonly make the mistake of “biting off too big of a chunk” in the beginning when it comes to data analytics, Mr. Gaughan says.

As an example of how to do things right, he recalls the work conducted by Newton-Wellesley Hospital in Newton, Mass. Newton-Wellesley decided to initially focus only on reducing its housekeeping department spending using benchmarks provided by Truven. The hospital tasked its experts with that project and realized about $300,000 in savings in that department.

“They had their proof of concept, and they could share that same process with other departments,” Mr. Gaughan says. “The real principle here is to start smaller. You can’t turn a hospital or health system or an iceberg around overnight. Test the theory, get those early gains and use that as a launching point to convince people that there’s something here worth looking at.”

5. Communicate your objectives clearly and educate your staff. Making data analytics pay off for a hospital or health system takes more than just finding the right experts and giving them a manageable task. In order for the initiative to work, Mr. Gaughan says hospital executives have to be visibly involved and support the project in order for their staff to follow suit. Administrators should also clearly communicate their objectives to everyone in the organization.

Dr. Rohleder stressed the role of education in carrying out systems engineering and data analytics projects. He says executives and data experts should give everyone in the organization a chance to learn about the analytics tools and how those tools can help the health system.

Conclusion
Overall, hospitals that embrace data analysis as a healthcare reform tool will likely stay strong despite declining reimbursement levels and other economic pressures, says Dr. Rohleder.

“The data analysis and the kinds of methods that we apply — like the example I gave with cardiovascular surgery where we actually enhanced the patient quality and safety while at the same time reduced the resources we apply to them — is exactly where healthcare is going to go in the future,” he says. “Using our data, using an evidence-based approach, it gives us that edge to be able to achieve those ends.”

Note: This article originally appeared in Becker Hospital Review. Click for link here.

Originally Posted at: 5 Best Practices for Hospital Executives Using Data Analytics: How to Improve Outcomes, Cut Costs by analyticsweekpick

Aug 15, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
SQL Database  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Mastering Deep Learning with Self-Service Data Science for Business Users by jelaniharper

>> Jun 13, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..) by admin

>> Is withholding your data simply bad science, or should it fall under scientific misconduct? by analyticsweekpick

Wanna write? Click Here

[ FEATURED COURSE]

Introduction to Apache Spark

image

Learn the fundamentals and architecture of Apache Spark, the leading cluster-computing framework among professionals…. more

[ FEATURED READ]

On Intelligence

image

Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one strok… more

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not so much a tech challenge as it is an adoption challenge. Adoption has its roots in the cultural DNA of any organization. Great data-driven organizations weave the data-driven culture into their corporate DNA. A culture of connection, interaction, sharing and collaboration is what it takes to be data driven. It’s about being empowered more than it’s about being educated.

[ DATA SCIENCE Q&A]

Q:Do you know / used data reduction techniques other than PCA? What do you think of step-wise regression? What kind of step-wise techniques are you familiar with?
A: data reduction techniques other than PCA?:
Partial least squares: like PCR (principal component regression) but chooses the principal components in a supervised way. Gives higher weights to variables that are most strongly related to the response

step-wise regression?
– the choice of predictive variables are carried out using a systematic procedure
– Usually, it takes the form of a sequence of F-tests, t-tests, adjusted R-squared, AIC, BIC
– at any given step, the model is fit using unconstrained least squares
– can get stuck in local optima
– Better: Lasso

step-wise techniques:
– Forward-selection: begin with no variables, adding them when they improve a chosen model comparison criterion
– Backward-selection: begin with all the variables, removing them when it improves a chosen model comparison criterion

Better than reduced data:
Example 1: If all the components have a high variance: which components to discard with a guarantee that there will be no significant loss of the information?
Example 2 (classification):
– One has 2 classes; the within class variance is very high as compared to between class variance
– PCA might discard the very information that separates the two classes

Better than a sample:
– When number of variables is high relative to the number of observations
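
A small scikit-learn sketch of the two alternatives discussed above, run on synthetic data (the feature counts, the choice of five selected variables and the random seed are arbitrary):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                        # 100 observations, 20 predictors
y = X[:, 0] * 3 + X[:, 1] * 2 + rng.normal(size=100)

# Forward stepwise-style selection: add variables while they improve the CV score
forward = SequentialFeatureSelector(LinearRegression(),
                                    n_features_to_select=5,
                                    direction="forward").fit(X, y)
print(np.flatnonzero(forward.get_support()))          # indices of the kept predictors

# Partial least squares: components are chosen in a supervised way, unlike PCA/PCR
pls = PLSRegression(n_components=2).fit(X, y)
pred = pls.predict(X).ravel()
print(round(float(np.corrcoef(pred, y)[0, 1]), 3))    # in-sample fit of the 2-component model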

Source

[ VIDEO OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @DavidRose, @DittoLabs

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Data matures like wine, applications like fish. – James Governor

[ PODCAST OF THE WEEK]

Venu Vasudevan @VenuV62 (@ProcterGamble) on creating a rockstar data science team #FutureOfData #Podcast

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Walmart handles more than 1 million customer transactions every hour, which is imported into databases estimated to contain more than 2.5 petabytes of data.

Sourced from: Analytics.CLUB #WEB Newsletter

How CFOs Can Harness Analytics

Businesses are collecting more data than ever from their own operations, supply chains, production processes, employees, and customer interactions.

But information alone isn’t knowledge.

Data is nothing more than virtual garbage unless it can be translated into actionable insight and effective business outcomes.

CFOs and The Information Age

CFOs are the financial spokespeople of their organizations, historically responsible for financial planning, record-keeping, and reporting. Their role is to balance risk through budget management, cost-benefit analysis, forecasting, and funding knowledge.

As purveyors of finance, these basic duties haven’t changed much.

But as data volume and variety increase, CFOs and financial professionals must leverage these new information streams to identify trends and incorporate analytics into the decision making process. When it comes to seeking out strategic advice, CEOs turn to CFOs 72 percent of the time.

To help shape the strategic direction of the company, CFOs not only have to interpret vast troves of data, but they have to do it faster and more accurately than the competition.

Business Intelligence for CFOs

A Video Guide to Business Intelligence

Companies that adopt a data-driven culture are the most successful. Seventy-six percent of executives from top-performing companies cite data collection as very important or essential, and data-driven companies are three times more likely to rate themselves as substantially ahead of their peers in financial performance.

CFOs have always used valuable information to identify growth opportunities either from existing or new customers, products, and markets. This task has become more complex however due to the amount of sales, operations, employee, website, and customer data that is now being generated across multiple channels.

To bring these disparate silos together and eliminate late nights hunched over reports (that are often out of date by the time they’re analyzed), CFOs need business intelligence and analytics tools. In fact, Gartner research revealed 78 percent of CFOs consider BI and analytics as a top technology initiative for the finance department — beating out even financial management applications.

Though many organizations consider data a priority, they’re still struggling to make progress with BI and analytics. In order to avoid this common problem, let’s examine how business intelligence software helps CFOs harness data and analytics to increase customer engagement and grow profits.

Self-service Insight

Capturing data is important, but using information to propel the business forward is more important. BI software alleviates the headache of collecting and consolidating profit and loss numbers from every department or business unit for comparison and analysis. By integrating various ERP, CRM, marketing, HR, and back-end data sources into a BI platform, all company data is compiled into one central location. This means no more tracking company performance from spreadsheets or waiting on IT to run complex reports.

BI software puts data at your fingertips through the use of dashboards, which present easy to analyze views of selected metrics. Dashboards help improve the BI user experience and make business intelligence approachable. They simplify complex data sets, reveal patterns, and provide you with a way to monitor business performance at a glance, which enables fact-based decision-making.

Below is a live visualization of a mock CFO dashboard created with Tableau. This particular dashboard is used to view your business composition and monitor changes as they occur. Currently, the segment composition is calculated by net sales. By using the “Select Measure” filter in the top left, you can switch your measure to gross profit, operating income, or net profit. You can also use the rest of the filter panel or drill down by clicking anywhere in the visualization.

 

By aggregating important data, dashboards and self-service reporting tools help you identify hidden trends or missed opportunities. For example, rather than just a report that shows profits increased, BI lets you pinpoint why profits increased. To gain context, you can “drill down” to see exactly what is causing the spike in revenue. With a real-time financial view across the company, you can discover which efforts are suffering and which are exceeding, then allocate resources accordingly.
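
A pandas-style sketch of that kind of drill-down (the segments, regions and figures are invented; a BI dashboard performs the same aggregation interactively):

import pandas as pd

# Hypothetical transaction-level data behind a "profits are up" headline number
sales = pd.DataFrame({
    "segment":   ["Retail", "Retail", "Online", "Online", "Online"],
    "region":    ["East",   "West",   "East",   "West",   "West"],
    "net_sales": [120_000,  90_000,   150_000,  310_000,  280_000],
})

# Top level: which segment drove the total?
print(sales.groupby("segment")["net_sales"].sum())

# Drill down: within the leading segment, which region caused the spike?
online = sales[sales["segment"] == "Online"]
print(online.groupby("region")["net_sales"].sum().sort_values(ascending=False))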

Financial Visualizations

End-to-end business intelligence is critical. CFOs are already trained to discern patterns and implications, so visualization tools that save time and simplify processes are important. BI dashboards allow CFOs to build interactive reports that allow them to get fast answers to business questions such as “What’s driving sales growth?” or “Where are we spending resources?”

Business Intelligence and Analytics Software In Action

Data visualizations turn your financial key performance indicators (KPIs) into clear graphics, which help you track performance and assess risk. Visualization also helps quickly compare different pieces of data with auto-generated charts. These typically include: tables, pie charts, bar graphs, heat maps, scatter plots, and gauges. Our minds often respond better to pictures than to rows of numbers: business managers who use visual data discovery tools are 28 percent more likely to find timely information than peers who only use managed reporting and dashboards.

Advanced Analytics

Descriptive analytics is when you use historical data to pinpoint the reason behind a success or failure. Business intelligence software now goes beyond this to help you shift from a historical perspective to a forward-looking perspective.

Predictive analytics uses technologies such as forecasting, data mining, and simulations to tell you what is likely to happen. For example, predictive analytics can identify sales fluctuations and product popularity, which can be used to forecast inventory needs at a particular retail store. It can also identify purchase behavior and demographic information about your most valuable clients, which can be used to determine how much money should be invested to gain business from them or similar prospects. Advanced prescriptive analytics goes even further by recommending the best course of action based on the knowledge you currently have.
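
As a bare-bones illustration of the forecasting side, here is a linear trend fitted to made-up weekly unit sales (a real predictive model would also account for seasonality, promotions and store-level features):

import numpy as np

weeks = np.arange(1, 13)                              # 12 weeks of history
units = np.array([210, 220, 215, 230, 240, 238, 250, 262, 259, 270, 281, 290])

# Fit a straight-line trend and project demand for the next four weeks
slope, intercept = np.polyfit(weeks, units, deg=1)
future_weeks = np.arange(13, 17)
forecast = slope * future_weeks + intercept
print(np.round(forecast))                             # rough inventory needs for weeks 13-16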

_____

Business analytics software uses the data you already have to open up new ways of looking at business and operational information. CFOs can use this technology to scale performance, assess future risk, and hone their financial metrics.

Don’t let your data become virtual garbage.

The right business intelligence tools can help make data your company’s most valuable asset.

To read the original article on Technology Advice, click here.

Source: How CFOs Can Harness Analytics by analyticsweekpick

Why data is no longer just an IT function

Data – or at least the collection, storage, protection, transfer and processing of it – has traditionally been seen as the role of a modern data-driven technical division. However, as data continues to explode in both volume and importance, it is not enough to gather huge amounts of disparate data into a data lake and expect that it will be properly consumed. With data becoming the defining factor of a business’s strategy, this valuable gold dust needs to be in the hands of the right business function, in the right form, at the right time, to be at its most effective. This means that traditional roles within the organization need to adapt, as CIOs and CTOs oversee digital transformation projects across the business landscape.

The aim of digital transformation is to create an adaptive, dynamic company that is powered by digital technology – it is the perfect marriage of the business and IT function and requires both to collaborate to successfully harness the data at a company’s disposal. This will be imperative to deliver the types of rapid growth and customer-centric developments that modern businesses are determined to achieve. In recent years, the groundwork for this has already been delivered in the increasing use of cloud within businesses – which the Cloud Industry Forum revealed earlier this year stands at 88% in the UK, with 67% of users expecting to increase their cloud usage over the coming years. However, while the cloud provides the perfect platform for scalable, agile digitization, three further challenges stand between organizations and digital transformation success, and the business and IT functions need to work together to ensure their business emerges victorious at the other end.

Challenge 1: Business Wants Data, But IT Can’t Keep Up

With cloud applications, sensors, online data streams and new types of technology emerging week on week, businesses are seeing an explosion of data – both in volume and variety. At the same time, consumers are expecting the very latest products, with personalized services, in real-time. The data businesses have access to can help but frequently ends up siloed, out of context, or of bad quality. Industry estimates predict that working on flawed data costs a business in the region of 10x more than working on perfect data.

Traditionally, employees within the business have maintained this data, but this is no longer feasible in the face of the sheer volume of information that businesses receive. Instead, businesses will need to be empowered by modern technologies such as Big Data and machine learning to ensure that as much of the data preparation, cleansing and analysis as possible is guided or automated. Without a combined landscape of high-quality data, businesses risk missing opportunities by simply failing to analyze their own data successfully… or even deriving improper insights and taking related actions.

Being data-driven is a mandate for modern business, and the strain cannot be placed on IT to simply keep pace with the latest technological innovations. Instead, the business function must support in creating a digital strategy, focused on the latest business objectives, in order for the company to succeed.

Challenge 2: Digitization is Changing the Job Description

In the not-too-distant past, IT resources were centralized, with a core IT organization managing on-premises data using legacy systems. While this was an effective way of keeping data safe and organized, it resulted in the data being hard to access and even harder to use. As recently as 2015, BARC statistics stated that from a sample of over 2,000 responses, 45% of business users say their companies have less than 10% of employees using business intelligence (BI).

However, in today’s data-centric world where surveys estimate that 38% of overall job postings require digital skills, empowering 10% of employees to be self-sufficient with data is nowhere near enough. Furthermore, Gartner research asserts that by 2019, citizen data scientists will surpass data scientists in terms of the amount of advanced analysis they produce. Everyone throughout the business, from the CIO to the business process analyst, increasingly needs data right at their fingertips. These roles need access to data to ensure they can strategize, execute and deliver for the business with the most relevant and up-to-date insights available. This means the business must fully equip its employees, at every level, to empower their decision-making with highly available and insightful data. As well as providing self-service technologies and applications which provide a turnkey solution to mining insight from data, this involves using training and internal communications to define a data-driven culture throughout business divisions.

Challenge 3: The threats to data, and to businesses, are increasing by the day

The knee-jerk reaction to this might be to make as much data as possible available to as many people as possible. However, any well-versed CIO knows this is not viable. With regulations like the GDPR, organizations have an increasing obligation to make sure only the right people have access to every piece of information or place their entire organization at risk. This is especially important given a backdrop where 71% of users admit to having access to data they should not according to the Ponemon Institute.

The solution to this is successfully implemented self-service IT solutions, which automates functions such as data access requests and data preparation. This is fundamental to allowing business employees quicker access to the right data, as well as providing clear lineages of who accessed what information, when – which will be crucial to monitor under the GDPR. At the same time, automated data preparation tools are essential to reduce the burden on the IT team, performing manual cleansing and formatting tasks. This, in turn, will enable the IT team to focus on delivering new technologies for the organization, rather than troubleshooting legacy issues.

The rise of the cloud has created the possibility for every person in every business to be data driven – but to date, this has not been the case. Instead, organizations experience siloing and limits on innovation. The key is creating an approach to data that is built with the business objectives in mind. A successful digital transformation project is centered on achieving real business outcomes, which is then operationalized by IT – making both vital players in evolving the role and use of data within an organization.

The post Why data is no longer just an IT function appeared first on Talend Real-Time Open Source Data Integration Software.

Originally Posted at: Why data is no longer just an IT function by analyticsweekpick

Aug 08, 19: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

image
Tour of Accounting  Source

[ LOCAL EVENTS & SESSIONS]

More WEB events? Click Here

[ AnalyticsWeek BYTES]

>> Unifying the Data Tech Stack: Across the Edge, Analytics and Operational Tiers by analyticsweekpick

>> How to Build a Dedicated Usability Lab by analyticsweek

>> Bridging the Gap Between Dev and Ops [Infographic] by analyticsweek

Wanna write? Click Here

[ FEATURED COURSE]

Tackle Real Data Challenges

image

Learn scalable data management, evaluate big data technologies, and design effective visualizations…. more

[ FEATURED READ]

Data Science from Scratch: First Principles with Python

image

Data science libraries, frameworks, modules, and toolkits are great for doing data science, but they’re also a good way to dive into the discipline without actually understanding data science. In this book, you’ll learn … more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption, and one of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way in helping secure quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q:Is it better to have 100 small hash tables or one big hash table, in memory, in terms of access speed (assuming both fit within RAM)? What do you think about in-database analytics?
A: Hash tables:
– Average case O(1) lookup time
– Lookup time doesn’t depend on size

Even in terms of memory:
– O(n) memory
– Space scales linearly with number of elements
– Lots of dictionaries won’t take up significantly less space than a larger one

In-database analytics:
– Integration of data analytics in data warehousing functionality
– Much faster and corporate information is more secure, it doesn’t leave the enterprise data warehouse
Good for real-time analytics: fraud detection, credit scoring, transaction processing, pricing and margin analysis, behavioral ad targeting and recommendation engines
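
A quick sketch of that comparison (the key counts and the bucketing rule are arbitrary; exact timings depend on the machine, but both layouts give constant-time lookups):

import timeit

N = 1_000_000
big = {i: i for i in range(N)}

# 100 small tables holding the same keys, bucketed by key % 100
small = [dict() for _ in range(100)]
for i in range(N):
    small[i % 100][i] = i

lookup_big = lambda: big[123_456]
lookup_small = lambda: small[123_456 % 100][123_456]

# Both are O(1); per-lookup cost is essentially independent of table size
print(timeit.timeit(lookup_big, number=100_000))
print(timeit.timeit(lookup_small, number=100_000))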

Source

[ VIDEO OF THE WEEK]

#FutureOfData Podcast: Peter Morgan, CEO, Deep Learning Partnership

Subscribe to  Youtube

[ QUOTE OF THE WEEK]

The temptation to form premature theories upon insufficient data is the bane of our profession. – Sherlock Holmes

[ PODCAST OF THE WEEK]

#DataScience Approach to Reducing #Employee #Attrition

Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Every second we create new data. For example, we perform 40,000 search queries every second (on Google alone), which works out to roughly 3.5 billion searches per day and 1.2 trillion searches per year. In August 2015, over 1 billion people used Facebook in a single day.

Sourced from: Analytics.CLUB #WEB Newsletter