Big, Bad Data: How Talent Analytics Will Make It Work In HR


Here’s a mind-blowing fact to spark up the late-summer doldrums: research from IBM shows that 90% of the data in the world today has been created in the last two years alone. I find this fascinating.

That means companies have access to an unprecedented amount of information: insights, intelligence, trends, future-casting. In terms of HR, it’s a gold mine of Big Data.

This past spring, I welcomed the ‘Industry Trends in Human Resources Technology and Service Delivery Survey,’ conducted by the Information Services Group (ISG), a leading technology insights, market intelligence and advisory services company. It’s a useful study, particularly for leaders and talent managers, offering a clear glimpse of what companies investing in HR tech expect to gain from their investment.


Not surprisingly, there are three key benefits companies expect to realize from investments in HR tech:

• Improved user and candidate experience

• Access to ongoing innovation and best practices to support the business

• Speed of implementation to increase the value of technology to the organization.

It’s worth noting that driving the need for an improved user interface, access, and speed is the nature of the new talent surging into the workforce: people for whom technology is nearly as much a given as air. They grew up with technology, are completely comfortable with it, and not only expect it to be available, they assume it will be easy to use and responsive to their situations, with mobile and social components.

According to the ISG study, companies want HR tech to offer strategic alignment with their business. I view this as more about enabling flexibility in talent management, recruiting and retention — all of which are increasing in importance as Boomers retire, taking with them their deep base of knowledge and experience. And companies are looking more for the analytics end of the benefit spectrum. No surprise here that the delivery model will be through cloud-based SaaS solutions.

Companies also want:

• Data security

• Data privacy

• Integration with existing systems, both HR and general IT

• Customizability, to align with internal systems and processes.

Cloud-based. According to the ISG report, more than 50% of survey respondents have implemented or are implementing cloud-based SaaS systems. It’s easy, it’s more cost-effective than on-premise software, and it’s where the exciting innovation is happening.

Mobile/social. That’s a given. Any HCM tool must have a good mobile user experience, from well-designed mobile forms and ease of access to a secure interface.

They want it to have a simple, intuitive user interface – another given. Whether accessed via desktop or mobile, the solution must offer a single, unified, simple-to-use interface.

They want it to offer social collaboration tools, which is particularly key for the influx of millennials coming into the workplace, who expect to be able to collaborate via social channels. HR is no exception here. While challenging from a security and data protection angle, it’s a must.

But the final requirement the study reported is, in my mind, the most important: Analytics and reporting. Management needs reporting to know their investment is paying off, and they also need robust analytics to keep ahead of trends within the workforce.

It’s not just a question of Big Data’s accessibility, or of sophisticated metrics, such as the Key Performance Indicators (KPIs) that reveal the critical factors for success and measure progress made towards strategic goals. For organizations to realize the promise of Big Data, they must be able to cut through the noise, and access the right analytics that will transform their companies for the better.

Given what companies are after, as shown in the ISG study, I predict that more and more companies will recognize the benefits of using integrated analytics for their talent management and workforce planning processes. Talent analytics creates a powerful, invaluable amalgam of data and metrics; it identifies the meaningful patterns within them and, for whatever challenges and opportunities an organization faces, informs decision makers on the right tactics and strategies to move forward. It will take talent analytics to synthesize Big Data and metrics into the key strategic management decisions in HR. Put another way, it’s not just the numbers, it’s how they’re crunched.


Source: Big, Bad Data: How Talent Analytics Will Make It Work In HR

How Big Data Is Changing The Entertainment Industry!

Big Data is here – The latest buzzword of the Information Technology Industry!

The world is generating a humongous amount of data every second, and rapid advances in technology are making analysis of such data a cakewalk. Big Data is influencing every aspect of our lives and will continue to grow bigger and better. Retailers will push us to buy extra chips and soft drinks from the nearest outlet as we watch a T20 match with friends while our favorite teams are playing. They will even recommend a CD of our favorite party songs and encourage us to donate a dollar to a charity we often visit. Disease prevention, share trading, marketing efforts and a lot of other use cases are emerging.

Big Data is changing the sports and entertainment industries as well. Both are driven by fans and their word of mouth. Engagement with the audience is key, and Big Data is creating opportunities for driving this engagement and influencing audience sentiment.

IBM worked with a media company and ran its predictive models on the social buzz for the movie Ram Leela. According to the reports, IBM predicted a 73% chance of success for the movie based on the right selection of cities. Similarly rich analysis of social data was conducted for Barfi and Ek Tha Tiger. All these movies were runaway successes at the box office.

Hollywood uses Big Data big time! Social media buzz can predict box office success; more importantly, based on how a movie is trending, strategies can be formulated to ensure its favorable positioning. All science!

Netflix is the best case study of analyzing user behavior and hitting the jackpot! The Netflix original show ‘House of Cards’ was commissioned solely on the basis of big data analysis of its customers’ preferences.

Shah Rukh Khan’s Chennai Express, one of the biggest box office grossers of 2013, used Big Data and analytics solutions to drive social media and digital marketing campaigns. IT services company Persistent Systems helped the Chennai Express team with the right strategic inputs. Chennai Express-related tweets generated over 1 billion cumulative impressions, and the total number of tweets across all hashtags was over 750 thousand over the 90-day campaign period. Persistent Systems CEO Siddhesh Bhobe said, “Shah Rukh Khan and the success of Chennai Express have proved that social media is the channel of the future and that it presents unique opportunities to marketers and brands, at an unbeatable ROI (return on investment).”

Lady Gaga and her team browse through our listening preferences and sequences to optimize the playlist for maximum impact at live events. Singapore-based Big Data analytics firm Crayon has worked with leading Hindi film industry producers to understand what kind of music to release to create the right buzz for a movie.

Sports is another area where Big Data is making a big impact. FIFA World Cup 2014 champion Germany has been using SAP’s Match Insights software, and it has made a big difference to the team. Data was crunched relating to player position ‘touch maps’, passing ability, ball retention and even metrics such as ‘aggressive play’. Even the Kolkata Knight Riders, an IPL team, have used analytics to determine the consistency of players based on 25 data points per ball. It helped in auctions as well as ongoing training.

Big Data can definitely be a boon to the entertainment and sports industries. It can improve the profitability of movies, always a high-risk business: everything from green-lighting the story to cast selection to the timing of release can be informed by data. It can also help pick the right players for sporting leagues, allowing talent to win!

Entertainment Industry leaders need to collaborate with the leading big data startups and visionaries to create new uses and deliver new success stories!


Originally Posted at: How Big Data Is Changing The Entertainment Industry! by analyticsweekpick

Don’t Let your Data Lake become a Data Swamp

In an always-on, competitive business environment, organizations are looking to gain an edge through digital transformation. Subsequently, many companies feel a sense of urgency to transform across all areas of their enterprise—from manufacturing to business operations—in the constant pursuit of continuous innovation and process efficiency.

Data is at the heart of all these digital transformation projects. It is the critical component that helps generate smarter, improved decision-making by empowering business users to eliminate gut feelings, unclear hypotheses, and false assumptions. As a result, many organizations believe building a massive data lake is the ‘silver bullet’ for delivering real-time business insights. In fact, according to a survey by CIO review from IDG, 75 percent of business leaders believe their future success will be driven by their organization’s ability to make the most of their information assets. However, only four percent of these organizations said they have set up a data-driven approach that successfully benefits from their information.

Is your Data Lake becoming more of a hindrance than an enabler?

The reality is that all these new initiatives and technologies come with a unique set of generated data, which creates additional complexity in the decision-making process. To cope with the growing volume and complexity of data and alleviate IT pressure, some are migrating to the cloud.

But this transition—in turn—creates other issues. For example, once data is made more broadly available via the cloud, more employees want access to that information. Growing numbers and varieties of business roles are looking to extract value from increasingly diverse data sets, faster than ever—putting pressure on IT organizations to deliver real-time data access that serves the diverse needs of business users looking to apply real-time analytics to their everyday jobs. However, it’s not just about better analytics—business users also frequently want tools that allow them to prepare, share, and manage data.

To minimize tension and friction between IT and business departments, moving raw data to one place where everybody can access it sounded like a good move. The concept of the data lake, first coined by James Dixon in 2014, envisioned a large body of raw data in a more natural state, where different users come to examine it, delve into it, or extract samples from it. However, organizations are increasingly realizing that all the time and effort spent building massive data lakes has frequently made things worse due to poor data governance and management, resulting in the formation of so-called “Data Swamps”.

Bad data clogging up the machinery

The same way data warehouses failed to manage data analytics a decade ago, data lakes will undoubtedly become “Data Swamps” if companies don’t manage them in the correct way. Putting all your data in a single place won’t in and of itself solve a broader data access problem. Leaving data uncontrolled, un-enriched, unqualified, and unmanaged will dramatically hamper the benefits of a data lake, as it will still only be usable by a limited number of experts with a unique set of skills.

A successful system of real-time business insights starts with a system of trust. To illustrate the negative impact of bad data and bad governance, consider Dieselgate. The Dieselgate emissions scandal highlighted the difference between real-world and official air pollutant emissions data. In this case, the issue was not a problem of data quality but of ethics, since some car manufacturers misled the measurement system by injecting fake data. This resulted in fines for car manufacturers totaling tens of billions of dollars and consumers losing faith in the industry. After all, how can consumers trust the performance of cars now that they know the system of measure has been intentionally tampered with?

The takeaway in the context of an enterprise data lake is that its value will depend on the level of trust employees have in the data contained in the lake. Failing to control data accuracy and quality within the lake will create mistrust amongst employees, seed doubt about the competency of IT, and jeopardize the whole data value chain, which then negatively impacts overall company performance.

A cloud data warehouse to deliver trusted insights for the masses

Leading firms believe governed cloud data lakes represent an adequate solution to overcoming some of these more traditional data lake stumbling blocks. The following four-step approach helps modernize the cloud data warehouse while providing better insight across the entire organization.

  1. Unite all data sources and reconcile them: Make sure the organization has the capacity to integrate a wide array of data sources, formats and sizes. Storing a wide variety of data in one place is the first step, but it’s not enough. Bridging data pipelines and reconciling them is another way to gain the capacity to manage insights. Verify the company has a cloud-enabled data management platform combining rich integration capabilities and cloud elasticity to process high data volumes at a reasonable price.
  2. Accelerate trusted insights to the masses: Efficiently manage data with cloud data integration solutions that help prepare, profile, cleanse, and mask data while monitoring data quality over time, regardless of file format and size (a minimal sketch of this kind of profiling and cleansing pass follows this list). When coupled with cloud data warehouse capabilities, data integration can enable companies to create trusted data for access, reporting, and analytics in a fraction of the time and cost of traditional data warehouses.
  3. Collaborative data governance to the rescue: The old schema of a data value chain where data is produced solely by IT in data warehouses and consumed by business users is no longer valid.  Now everyone wants to create content, add context, enrich data, and share it with others. Take the example of the internet and a knowledge platform such as Wikipedia where everybody can contribute, moderate and create new entries in the encyclopedia. In the same way Wikipedia established collaborative governance, companies should instill a collaborative governance in their organization by delegating the appropriate role-based, authority or access rights to citizen data scientists, line-of-business experts, and data analysts.
  4. Democratize data access and encourage users to be part of the Data Value Chain: Without making people accountable for what they’re doing, analyzing, and operating, there is little chance that organizations will succeed in implementing the right data strategy across business lines. Thus, you need to build a continuous Data Value Chain where business users contribute, share, and enrich the data flow in combination with a cloud data warehouse multi-cluster architecture that will accelerate data usage by load balancing data processing across diverse audiences.
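
To make the profiling and cleansing in step 2 concrete, here is a minimal sketch in Python using pandas; the column names, quality rules, and masking scheme are illustrative assumptions, not the workflow of any particular integration product.

import hashlib
import pandas as pd

def profile_and_cleanse(df: pd.DataFrame) -> pd.DataFrame:
    """Profile a raw extract, then apply simple quality rules before loading."""
    # Profiling: null rate per column and count of exact duplicate rows.
    print("Null rate per column:\n", df.isna().mean())
    print("Duplicate rows:", df.duplicated().sum())

    # Cleansing: drop exact duplicates and rows missing the business key.
    cleansed = df.drop_duplicates().dropna(subset=["customer_id"])

    # Masking: replace the identifier with a one-way hash so downstream
    # analysts never see the raw value.
    cleansed = cleansed.assign(
        customer_id=cleansed["customer_id"].map(
            lambda v: hashlib.sha256(str(v).encode()).hexdigest()[:12]
        )
    )
    return cleansed

# Toy extract standing in for a raw source feeding the warehouse.
raw = pd.DataFrame({
    "customer_id": ["C1", "C1", None, "C2"],
    "amount": [10.0, 10.0, 5.0, 7.5],
})
print(profile_and_cleanse(raw))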

In summary, think of data as the next strategic asset. Right now, it’s more like a hidden treasure at the bottom of many companies. Once modernized, shared and processed, data will reveal its true value, delivering better and faster insights to help companies get ahead of the competition.

The post Don’t Let your Data Lake become a Data Swamp appeared first on Talend Real-Time Open Source Data Integration Software.

Source: Don’t Let your Data Lake become a Data Swamp by analyticsweek

Are You Headed for the Analytics Cliff?

When was the last time you updated your analytics—or even took a hard look? Don’t feel guilty if it’s been a while. Even when there are minor indicators of trouble, many companies put analytics projects on the backburner or implement service packs as a Band-Aid solution.

What companies don’t realize, however, is that once analytics begin to fail, time is limited. Application teams that are not quick to act risk losing valuable revenue and customers. Fortunately, if you know the signs, you can avoid a catastrophe.

>> Related: Blueprint to Modern Analytics <<

Are you headed for the analytics cliff? Keep an eye out for these clear indicators that your analytics are failing:

Sign #1: Long Queue of Ad Hoc Requests

Is your queue of ad hoc requests constantly getting longer? Most companies start their analytics journeys by adding basic dashboards and reports to their applications. This satisfies users for a short period of time, but within a few months, users inevitably want more. Maybe they want to explore data on their own or connect new data sources to the application.

Eventually, you end up with a long queue of ad hoc requests for new features and capabilities. When you ignore these requests, you risk unhappy customers and skyrocketing churn rates. If you’re struggling to keep up with the influx—much less get ahead of it—you may be heading for the analytics cliff.

Sign #2: Unhappy Users & Poor Engagement

Are your customers becoming more vocal about what they don’t like about your embedded analytics? Dissatisfied customers and, in turn, poor user engagement are a clear indication something is wrong. Ask yourself these questions to determine if your application is in trouble:

  • Basic adoption: How many users are regularly accessing the application’s dashboards and reports?
  • Stickiness: Are users spending more or less time in the embedded analytics?
  • The eject button: Have you seen an increase in users exporting data outside of your application to do their own analysis?

The more valuable your embedded dashboards and reports are, the more user engagement you’ll see. Forward-thinking application teams are adding value to their embedded analytics by going beyond basic capabilities.

Sign #3: Losing Customers to Competitors

When customers start abandoning your application for the competition, you’re fast approaching an analytics cliff. Whether you like it or not, you’re stacked against your competitors. If they’re innovating their analytics while yours stay stagnant, you’ll soon lose ground (if you haven’t already).

Companies that want to use embedded analytics as a competitive advantage or a source of revenue can’t afford to put off updates. As soon as your features start to lag behind the competition, you’ll be forced to upgrade just to catch up. And if your customers have started to churn, you’ll be faced with the overwhelming task of winning back frustrated customers or winning over new ones.

Sign #4: Revenue Impact

All the previous indicators were part of a slow and steady decline. By this point, you’re teetering on the edge of the analytics cliff. Revenue impact can come in many forms, including:

  • Declining win rate
  • Slowing pipeline progression
  • Decreasing renewals
  • Drop in sales of analytics modules

A two percent reduction in revenue can be an anomaly, or an indication of a downward trend. Some software companies make the mistake of ignoring such a small decrease. But even slowing rates of growth can be disastrous. According to a recent McKinsey study, “Grow Fast or Die Slow,” company growth yields greater returns and matters more than margins or cost structure. If a software company grows less than 20 percent annually, they have a 92 percent chance of failure. Revenue impact—no matter how small—is a sign that it’s definitely time to act.

To learn more, read our ebook: 5 Early Indicators Your Analytics Will Fail >

 

Source

The UX of Dating Websites & Apps

Online dating websites are one of the primary ways people find dates and even future spouses. These sites represent the bulk of a $3 billion dating services industry.

In fact, around 30% of recent marriages started online, but it’s not like finding a date is as easy as filtering choices on Amazon and having them delivered via drone the next day (not yet at least).

Dating can be hard enough, but in addition to finding the right one, you also have to deal with things like Nigeria-based scams (and not the one with the Prince!).

Even when someone’s not directly trying to steal your money, can you really trust the profiles? By one estimate, over 80% of profiles studied contained at least one lie (usually about age, height, or weight).

Online dating isn’t all bad though. There is some evidence that the online dating sites actually do lead to marriages with slightly higher satisfaction and slightly lower separation rates. It could be due to the variety of people, those mysterious algorithms, or just a self-selection bias.

To understand the online dating user experience, we conducted a retrospective benchmark on seven of the most popular dating websites.

  • eHarmony (www.eharmony.com)
  • Hinge (mobile app)
  • Match.com (www.match.com)
  • OkCupid (www.okcupid.com)
  • Plenty of Fish (www.pof.com)
  • Tinder (www.tinder.com)
  • Zoosk (www.zoosk.com)

Full details are available in the downloadable report. Here are the highlights.

Study and Participant Details

We asked 380 participants who had used one of the seven dating websites in the past year to reflect on their most recent experience with the service.

Participants in the study answered questions about their prior experience, and desktop website users answered the 8-item SUPR-Q and the Net Promoter Score question. In particular, we were interested in visitors’ attitudes toward the site, problems they had with the site, and reasons they used the website.

Measuring the Dating Website UX: SUPR-Q

The SUPR-Q is a standardized measure of the quality of a website’s user experience and is a good way to gauge users’ attitudes. It’s based on a rolling database of around 150 websites across dozens of industries.

Scores are percentile ranks and tell you how a website experience ranks relative to the other websites. The SUPR-Q provides an overall score as well as detailed scores for subdimensions of trust, usability, appearance, and loyalty. Its ease item can also predict an accurate SUS equivalent score.
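
As a rough illustration of how a percentile-rank score works (with made-up reference numbers, not the actual SUPR-Q database), a raw score can be ranked against a set of scores from other websites:

from scipy import stats

# Hypothetical raw scores from a rolling database of other websites.
reference_scores = [55, 62, 70, 48, 81, 66, 59, 73, 64, 77]

# A percentile rank expresses a site's raw score as (roughly) the
# percentage of reference sites scoring at or below it.
site_score = 65
percentile = stats.percentileofscore(reference_scores, site_score)
print(f"This site scores better than about {percentile:.0f}% of the reference sites")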

The scores for the six dating websites (excluding the Hinge app) in the perception study were below average at the 43rd percentile (scoring better than 43% of the websites in the database). SUPR-Q scores for this group range from the 19th percentile (Plenty of Fish) to the 69th percentile (eHarmony).

Distrust and Disloyalty

The top improvement area for all the dating websites was trust. Across the websites, the average trust score was in the 23rd percentile. Participants expressed the highest trust toward eHarmony—but trust scores were still only slightly above average (54th percentile). Plenty of Fish had the lowest trust score (5th percentile), followed by Tinder (10th). These lower trust scores were consistent with the studies we found that cite false information and even scams.

The NPS results mirror the trust scores. eHarmony, the most trusted website, was also the most likely to be recommended, with an NPS of 11%, while the least trusted site, Plenty of Fish, had the lowest NPS (-46%). Overall, the average NPS was a paltry -23%.
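
For context on how numbers like these are derived, NPS is computed from 0–10 “how likely are you to recommend” ratings: the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch with made-up ratings:

def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), on 0-10 ratings."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Hypothetical ratings from ten respondents.
ratings = [10, 9, 8, 7, 6, 5, 9, 3, 8, 10]
print(f"NPS: {net_promoter_score(ratings):.0f}")  # 4 promoters, 3 detractors -> NPS 10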

High Mobile App Usage

Not surprisingly, mobile app usage for dating services is high. 77% of participants reported visiting a dating service using a mobile app while only 61% said they log on using a desktop or laptop computer.

Most participants reported visiting their dating website on a desktop or laptop computer a few times per year, while mobile app users said they log on a few times a week or a few times a month. 19% of Match.com participants reported using the mobile app as much as once a day.
 

“The app is definitely more easy to use and intuitive, while the website seems more like an afterthought.” —Tinder user

 

“It’s one of the few instances of ‘websites turned into apps’ that I actually find value in.” —OkCupid user

 

Across the dating services, over half of participants reported they were looking for a serious relationship and just under half said they were looking for a casual relationship.

Reasons to use the dating services were similar for the website and app, except that 42% of desktop website users said they are looking for a friendship while only 29% of mobile app users are looking for friends. Interestingly, this was a statistically significant difference.

Most Lack Dating Success

While over half of participants reported visiting dating sites to find a serious relationship, only 22% said they’ve actually found a relationship through the service. Specifically, OkCupid and Tinder users had the highest dating success in the group; 35% of OkCupid and 30% of Tinder users reported finding a relationship through the service.

Figure 1: Percent of respondents by site who report being in a relationship with a person they met on the website or app in the last year.

 

“I liked answering a lot of questions which would increase the match percentages I’d be able to find.” —OkCupid user

 

Only 9% of Zoosk users said they have found a relationship using the service. Zoosk users’ top issues with the site were dishonest users and fake profiles, poor matches, and active users who don’t respond.

 

“May not be great for a serious relationship.” —Zoosk user

 

“I keep getting referrals that are far outside my travel zone.” —Zoosk user

Dating Scams and Dishonest Users

Participants reported worrying about dishonest users and scams. On average, only 33% agreed that other users provide honest information about themselves and 41% said they are afraid of dating scams. These were the top issues reported by OkCupid and Plenty of Fish users.

 

“There are tons of fake/spam profiles.” —OkCupid user

 

“More scams than anything.” —Plenty of Fish user

 

“There are many suspicious profiles that seem like catfishing.” —Plenty of Fish user

 

Providing honest information on the site was found to be a significant key driver and explains about 17% of the dating site user experience. Other key drivers included brand attitude (22%), nicely presented profiles (10%), ease of creating and managing profiles (9%), intuitive navigation (9%), and ease of learning about other people (8%).

Figure 2: Key drivers of the online dating website user experience (SUPR-Q scores).

Together, these six components are key drivers of the dating website user experience and account for 75% of the variation in SUPR-Q scores.
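
A key-driver analysis of this kind typically regresses the overall experience score on the individual driver ratings, with the R² giving the share of variation explained. The sketch below shows the mechanics on synthetic data; it is not the study’s actual data or exact method.

import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic ratings for six drivers (e.g., honesty of other users, brand attitude, ...).
drivers = rng.normal(size=(n, 6))
# Synthetic overall-experience score built from the drivers plus noise.
weights = np.array([0.45, 0.50, 0.30, 0.28, 0.27, 0.25])
overall = drivers @ weights + rng.normal(scale=1.0, size=n)

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones(n), drivers])
coef, *_ = np.linalg.lstsq(X, overall, rcond=None)

predicted = X @ coef
ss_res = np.sum((overall - predicted) ** 2)
ss_tot = np.sum((overall - overall.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"Share of variation in the overall score explained by the drivers: {r_squared:.0%}")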

 

Safety and Protection Resources

Across the dating services, 18% of participants reported having an issue with another user in the past. Plenty of Fish had the highest instance of issues with other users at 40%; however, 74% of those participants said there were resources available on the site to deal with this.

 

“I blocked the person because he was being very disrespectful.” —Plenty of Fish user

 

“I blocked the person who was harassing me.” —Plenty of Fish user

 

Using a dating service comes with obvious safety concerns, and they are felt by a fair number of users. Across the websites, only 54% of participants agreed that they feel safe using the site. Tinder had the lowest agreement with this item, at only 38%.

 

“There is not more protection from the terrible men that are on there.” —Plenty of Fish user

 

“They do not seem to screen out people with criminal backgrounds. Found local sex offenders in this app. It is also difficult to unsubscribe.” —Plenty of Fish user

 

“Somebody posted improper material in profile and I reported it to admin.” —OkCupid user

 

Plenty of Fish had the highest rate of unwanted images, with 61% of women reporting at least one unwanted image compared to 35% of men. eHarmony and Tinder had similar but slightly lower unwanted image rates.

 

Poor Matching Algorithms

While the right algorithm can help create a match, participants reported algorithms often fell short. Less than half of participants on Match.com, Plenty of Fish, Tinder, and Zoosk agreed with the statement “the site is good at matching me with people” and only 14% of Tinder users said the site asks meaningful questions about users.

 

“Sometimes it’s hard to sort the matches by compatibility.” —Match.com user

 

“I find that its match system doesn’t help a great deal in finding whether someone is well suited for you, and it is rather glitchy, with people appearing after thumbing them down.” —OkCupid user

 

“Poor quality of fish on the site.” —Plenty of Fish user

Full details are available in the downloadable report.

Summary

An analysis of the user experience of seven dating websites found:

  1. Dating is hard; the user experience is probably harder. Current users find the dating website experience below average, with SUPR-Q scores falling at the 43rd percentile. eHarmony was the overall winner for the retrospective study at the 69th percentile, with Plenty of Fish scoring the lowest at the 19th. The top improvement area across the sites was trust. eHarmony also had the highest NPS (11%) while Plenty of Fish had the lowest (-46%).
  2. Participants prefer using mobile apps. 77% of participants reported using the dating service mobile app. The majority of participants reported visiting the dating services a few times per week on their mobile device. 19% of Match.com users said they use the app every day. Participants reported using the app more frequently than the website for each of the dating services.
  3. High hopes with modest success. Over half of participants reported visiting dating sites to find a serious relationship, but only 22% said they have found a relationship through the service. Specifically, OkCupid and Tinder had the highest dating success; 35% of OkCupid and 30% of Tinder users reported finding a relationship. Only 9% of Zoosk users said they found a relationship using the site.
  4. Users are concerned about dating scams and dishonest users. Participants reported worries regarding scams on the dating sites. On average, only 33% agreed that other users provide honest information about themselves and 41% said they are afraid of dating scams. Providing honest information on the site was found to be a significant key driver and explains about 17% of the dating site user experience.



Source by analyticsweek

2018 Trends in Cloud Computing: The Data Layer

The cloud has proven the most effective means of handling the influx of big data typifying the IT concerns of contemporary organizations. A recap of The Top 10 Technology Trends to Watch: 2018 to 2020 from Forrester states, “The public cloud is a juggernaut that is reinventing computing and the high-tech industry itself…applications are built faster in the public cloud, where they can scale, reach customers, and connect to other apps.”

Although many of the drivers for the burgeoning adoption rates of the cloud have not changed, a shift in focus of their applicability will emerge in earnest during the coming year. The Internet of Things and Artificial Intelligence will continue to push organizations into the cloud, although they’ll be influenced more by their respective edge capabilities and intelligent automation of bots.

Moreover, there have been a number of developments related to security and privacy that are mitigating these conventional inhibitors of cloud deployments. The viability of the public cloud will also contend with advances in hybrid cloud models. The hybrid trend may well prove the most prescient of the cloud’s impact in the near future, as it transitions into a unified access mechanism to control all data—not necessarily where they are, but where they can be used.

“The cloud is obviously going to continue to grow as the default delivery model for new,” NTT DATA Services Vice President of ERP Application Services Simon Spence acknowledged. “Its evolution or journey is slowly making its way through the entire packaged applications space.”

Security and Privacy
A number of well-publicized security breaches have renewed the emphasis on cyber security, highlighting a concern that has traditionally hampered cloud adoption rates. In response to these events and to the ongoing need for secure, private data, a number of tools have appeared to reinforce security so that “security is likely better in the cloud than it has been currently in the existing world,” Spence remarked. The key to implementing such security is to facilitate it in layers so that if one is breached there is another to reinforce it. This approach is a key benefit to utilizing public cloud providers. “All of a sudden, instead of somebody trying to attack you, they have to attack Amazon [Web Services], and then they have to go find you at Amazon,” Spence said. “That’s going to be pretty difficult.” Each instance of hybridization in which organizations use a private cloud within a public cloud provides an additional layer of protection.

At the Data Layer
It’s also essential for organizations to protect the actual data instead of simply relying on external measures for cloud security. Forms of tokenization and encryption grant these benefits, as do certain aspects of blockchain. There are also security tools Forbes referred to as cloud data protection, which encrypt “sensitive data before it goes to the cloud with the enterprise (not the cloud provider) maintaining the keys. Protects from unwelcomed government surveillance and helps remove some of the biggest impediments to cloud adoption—security, compliance, and privacy concerns.” Protecting data where they reside is also the main concept for implementing security with semantic standards. Organizations can fortify data by adding triple attributes to them, which consist of “arbitrary key value pairs for every triple,” Franz CEO Jans Aasman noted. “You can build any security model by starting with the right key value pairs, and then on top apply security filters.” Implementing these measures before replicating data to the cloud—or even for data generated there—makes these off-premise deployments much more secure.
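
As a simple illustration of the “encrypt before it goes to the cloud, with the enterprise keeping the keys” idea described above (not the specific products mentioned), here is a sketch using the widely used Python cryptography package; the upload call is a hypothetical placeholder.

from cryptography.fernet import Fernet

# The enterprise generates and retains the key; it never leaves the premises.
key = Fernet.generate_key()
cipher = Fernet(key)

sensitive_record = b'{"customer_id": "12345", "notes": "..."}'

# Encrypt locally, then ship only the ciphertext to the cloud provider.
ciphertext = cipher.encrypt(sensitive_record)
# upload_to_cloud(ciphertext)  # hypothetical upload call, not a real API

# On retrieval, the enterprise decrypts with its own key.
assert cipher.decrypt(ciphertext) == sensitive_record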

Hybrid Architecture
Hybrid cloud models take the form of any combination of public clouds, private clouds, and on-premise deployments. They either augment existing physical infrastructure with the cloud’s or the infrastructure of a public cloud with a private one. The propagation of hybrid models in the coming year puts an even greater emphasis on the data involved and the architecture required for accessing that data. According to Spence, “Really what becomes the glue and what you have to focus on then becomes data. You have to make sure you can integrate the data across public, private [clouds], and on-premises.” One of the chief concerns related to the proper architecture of hybrid models spanning on-premise and cloud workflows is ensuring that the infrastructure—and how it operates—is aligned.

The most pressing need for hybrid clouds may be applications involving edge computing in the IoT. In this case, Gartner indicated “organizations expect to apply the same operating style with their private infrastructure as in the public cloud. The operation of private infrastructure therefore has to evolve to take on the same model as public cloud services.” In 2018 organizations with hybrid models will attempt to keep pace with evolving cloud infrastructure via options for infrastructure automation. These include tools such as Infrastructure as Code (IaC) and containers, which prime organizations for automation. “The need to get that enterprise architecture done is more critical in today’s world than it ever was before because of the multiple different software providers and…delivery mechanisms,” Spence added.

Data-as-a-Service
DaaS should expand its relevance to the enterprise in the next couple of years, partly due to the proliferation of AI services leveraging advanced machine learning techniques in the cloud. Conventionally, DaaS capabilities have garnered less attention than more flagship cloud offerings such as Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS). The preceding section on hybrid architecture implies the vitality of IaaS to organizations leveraging the cloud. However, DaaS usage is sure to increase in the coming years, partially because of the AI options found in the cloud. Gartner defines DaaS as “the data and services that will drive advanced analytics that will inform decision making across the business.” The wealth of AI-based SaaS options should drive the need for DaaS, since the latter can provide the sort of unstructured big data the former is adept at analyzing.

The Internet of Things
The IoT will solidify itself as one of the most meaningful enterprise applications in 2018. What will continue to expand next year is the propensity to facilitate an edge computing model with it. Computing at the cloud’s edge reduces demands on bandwidth and centralized models by transmitting only the results of analytics to centralized locations, as opposed to the entirety of the raw data used for computations. This capability is particularly useful when dealing with the scale of the IoT’s data. “That’s a lot of data,” affirmed Biotricity CEO Waqaas Al-Siddiq. “You don’t want to download that and put it into your servers. You want it in the cloud so you can access it, cross-reference it, and pull down the information that you want.” Edge computing is gaining credence partly due to the intersection of the IoT and AI, and partly due to its inherent boons. The latter include quicker response times, improved customer satisfaction, and less network traffic. The former pertains to advancements in smart homes and smart cities, as well as increasing adoption rates of AI. The synthesis of these developments has resulted in today’s situation, in which Gartner noted that “specific use cases of AI moving ‘to the edge’ are already emerging.”
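
The bandwidth argument is easy to see in a toy example: an edge node can summarize a window of raw sensor readings locally and transmit only the aggregate. The payload shape below is an illustrative assumption, not any particular IoT platform’s format.

import json
import statistics

def summarize_readings(readings):
    """Reduce a window of raw sensor readings to a small aggregate payload."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# A minute of hypothetical per-second temperature readings collected at the edge.
raw_readings = [21.3 + 0.01 * i for i in range(60)]

payload = json.dumps(summarize_readings(raw_readings))
print(payload)                         # the small aggregate sent upstream
print(len(json.dumps(raw_readings)))   # size of the raw payload it replaces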

Another effect of this trend is the expansion of IoT use cases, which are coming to include facets of the management of supply chain networks, workflows, and inventory control. The growing reliance on the IoT will also force organizations to modernize their business applications. According to Spence, one of the forces for application modernization is “heavily driven by business needs focused on allowing companies to meet their new, changing business strategies. In a lot of those, the business driven stuff, we see them tied to ‘I need to put IoT in place’, or ‘I’m looking to do digital transformation’.”

Long Term Trajectory
The long term trajectory of cloud computing is certain. It will connect the most salient aspects of modern data management as the medium in which aspects of big data, AI, and the IoT are accessed. That access layer will also involve hybrid architecture with on-premise deployments, as well as data stemming from the cloud via DaaS and other service models. The viability of the cloud’s centrality to data management, of course, is predicated on the refined security models which underpin it.

The next step in the cloud’s role, it seems, directly correlates to that centrality. By tying together all enterprise assets—those on-premises, in public and private clouds, even from multiple providers—in a single access layer, it can enable possibilities which were previously inconceivable. The most significant cloud trend, then, is the one in which cataloguing techniques function as centralized controllers to make distributed computing environments local for users. Such a global fabric of data should make considerable strides in 2018 to becoming a widespread reality.

“It’s kind of like in the past you might have gone and got a house from a particular software provider; they designed it, they built it for you, and you went to one contractor,” Spence said. “Now in today’s world, you’re really more the general contractor and you’re picking and choosing the various subcontractors that you’re bringing in and you’ve got to make sure those things tie together.”

Source: 2018 Trends in Cloud Computing: The Data Layer by jelaniharper

Big data and digital skills essential for future retail success


The Sector insights: skills and performance challenges in the retail sector (PDF) report by the UK Commission for Employment and Skills warned that retailers must embrace modern technology in the supply chain through to the shop floor.

The report said that big data harvested from customer loyalty schemes and online activity can play a significant role in marketing products to customers and boosting sales.

The retail sector has been at the forefront of big data use, which helped it to weather the recent recession by using information gleaned from data analysis to boost supply chain efficiency and revenues.

However, the report said that properly embracing such technology requires retailers to have employees with the right digital skills.

“This requires new ICT-related skills to take advantage of new business opportunities facilitated by social media for advertising and marketing,” the report said.

“Larger retailers are leading the uptake of new, innovative technologies which are creating shifts in the way that customer service is delivered and managed, changing the profile of the marketing function to incorporate an increased focus on data.

“This leads to a pressing need to attract and retain appropriately skilled workers in order to respond to these changes.

“Smaller retailers are at risk of being left behind unless they recognise the impact of these changes and respond by investing appropriately in their own skills and knowledge, to think more strategically about their business, and embrace appropriate new technologies.”

The report explained that the pace of technology is putting the retail sector at risk of misaligning the skills it has with the needs of businesses looking to use IT to drive performance.

Handling hardware

The report also found that the integration of hardware such as beacon technology, self-service tills and virtual ‘browse and order’ hubs is forcing the need for shop workers and managers to develop skills that make use of new technologies that help the business and improve customer service.

“Retailers will need to continue to upskill existing staff to respond to the growing use and sophistication of technology,” the report said.

“Findings from the primary research confirm that in-store technologies are also requiring diversification and a higher-level skills base on the shop-floor.

“For example, staff are increasingly required to not only use more advanced technologies, but to interact with customers face to face and to guide them through the retailer’s online presence using mobiles and tablets.”

This is being driven in part by the need to have staff that can deal with customers who are better informed about a company’s products and prices because they have access to multiple shopping options provided through online retail and mobile shopping apps and services.

But the report warned that improving the digital skills of shop floor workers may infringe on their selling ability.

“There is a risk that the focus on IT skills and product knowledge can overshadow the importance of sales skills,” it said.

Furthermore, the report suggested that the skills gap is more pertinent with the older generation of retail workers who will need to be taught new skills, while at the same time businesses will need to keep attracting younger, digital-savvy workers.

“New technology requires workers to have up-to-date IT skills, which can be a challenge for older workers who are less likely to have good IT skills than younger workers,” it said.

“To continue to attract younger workers, the opportunity to use and develop technology-based skills and knowledge within a retail career should be promoted.”

The opportunities in the retail world to tap into big data and other technologies are well known, but finding the right skills amid the UK’s digital skills gap will not be easy for some retailers, despite Tesco’s dismissal of such challenges.

Furthermore, the recent launch of Apple Pay in the UK has brought more contactless payment options into the retail world, and the sector is likely to see increasing use of technology in physical outlets.

Alternatively, technology could replace staff completely, as seen with IBM Watson Analytics used to generate big data in London’s unmanned Honest Café coffee shops.

Note: This article originally appeared in V3.

Source

Making Sense of the 2018 Gartner Magic Quadrant for Data Integration Tools

It’s an exciting time to be part of the data market.  Never before have we seen so much innovation and change in a market, especially in the areas of cloud, big data, machine learning and real-time data streaming.  With all of this market innovation, we are especially proud that Talend was recognized by Gartner as a leader for the third time in a row in their 2018 Gartner Magic Quadrant for Data Integration Tools and remains the only open source vendor in the leaders quadrant.

According to Gartner’s updated forecast for the Enterprise Infrastructure Software market, data integration and data quality tools are the fastest growing sub-segment, growing at 8.6%. Talend is rapidly taking market share in the space with a 2017 growth rate of 40%, more than 4.5 times faster than the overall market.

The Data Integration Market: 2015 vs. 2018

Making the move from challengers to market leaders from 2015 to today was no easy feat for an emerging leader in cloud and big data integration. It takes a long time to build a sizeable base of trained and skilled users while maturing product stability, support and upgrade experiences. 

While Talend still has room to improve, seeing our score improve like that is exciting recognition of all the investments Talend has made.

Today’s Outlook in the Gartner Magic Quadrant

Mark Byer, Eric Thoo, and Etisham Zaidi are not afraid to change things up in the Gartner Magic Quadrant as the market changes, and their 2018 report is proof of that.  Overall, Gartner continued to raise their expectations for the cloud, big data, machine learning, IoT and more.  If you read each vendor’s write up carefully and take close notes, as I did, you start to see some patterns. 

In my opinion, the latest report from Gartner indicates that, in general, you have to pick your poison: you can have a point solution with less mature products and support and a very limited base of trained users in the market, or go with a vendor that has product breadth, maturity, and a large base of trained users, but with expensive, complex and hard-to-deploy solutions.

Talend’s Take on the 2018 Gartner Magic Quadrant for Data Integration Tools

In our minds, this has left a really compelling spot in the market for Talend as the leader in the new cloud and big data use cases that are increasingly becoming the mainstream market needs. For the last 10+ years, we’ve been on a mission to help our customers liberate their data. As data volumes continue to grow exponentially along with growth in business users needing access to that data, this mission has never been more important. This means continuing to invest in our native architecture to enable customers to be the first to adopt new cutting-edge technologies like serverless and containers, which significantly reduce total cost of ownership and can run on any cloud.

Talend also strongly believes that data must become a team sport for businesses to win, which is why governed self-service data access tools like Talend Data Preparation and Talend Data Streams are such important investments for Talend.  It’s because of investments like these that we believe Talend will quickly become the overall market leader in data integration and data quality. As I said at the beginning of the blog, our evolution has been a journey and we invite you to come along with us. I encourage you to download a copy of the report,  try Talend for yourself and become part of the community.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. 
GARTNER is a federally registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

 

The post Making Sense of the 2018 Gartner Magic Quadrant for Data Integration Tools appeared first on Talend Real-Time Open Source Data Integration Software.

Source: Making Sense of the 2018 Gartner Magic Quadrant for Data Integration Tools by analyticsweekpick

Helping Humans with Healthcare Analytics


Healthcare: everyone needs it, it’s a rapidly technologizing industry, and it produces immense amounts of data every day. To get a sense of where analytics fit into this vital market, I got on the phone with Hamza Jap-Tjong, CEO and Co-Founder of GeriMedica Inzicht, a GeriMedica subsidiary. GeriMedica is a multi-disciplinary electronic medical record (EMR) company servicing the elderly care market and as such, their SaaS platform is filled with data of all kinds. Recently, they rolled out analytics that practitioners could use to improve the quality of care (vs the prior main use case in healthcare analytics, which was done by the billing and finance departments). This helps keep practitioners focused on helping patients vs spending (wasting) hours in a software product. Hamza and I spoke about the state of healthcare analytics, how it can improve care for patients, and where the industry is going.

The State of Healthcare Analytics

As previously mentioned, the healthcare industry creates tons of data every day from a wide array of sources.

“I think tons of data might be an understatement,” says Hamza, citing a Stamford study. “They were talking about data on the scale of exabytes (where each exabyte is a billion gigabytes). Where doesn’t that data come from? Fitbits, iPhones, fitness devices on your person… healthcare data is scattered everywhere: not only treatment plans and records created by practitioners, but also stored in machines (X-rays, photographs, etc.).”

Data is the new oil, but without the right tools, the insights locked in that data can’t help anyone. At present, few healthcare organizations (let alone frontline practitioners) are taking advantage of the data at their disposal to improve patient care. Moreover, these teams are dealing with amounts of information so vast that they are impossible to make sense of without help (like from a BI or analytics platform). They also can’t combine these datasets to gain a complete picture without help, either. Current software offerings, even if they have some analytical capabilities for the data that they capture, often can’t mash it up with other datasets.

“In my opinion, we could really improve the data gathering,” Hamza says. “As well as the way we use that data to improve patient care. What we know is that when you look at doctors, nurses, physical therapists, everybody close to the care process, close to the patient, is hankering for data and insights and analytics and we see that there isn’t at the moment a tool that is good enough or easy enough for them to use to gain the insights that they are looking for.”

Additionally, the current generation of medical software has a high barrier to entry/learning curve when it comes to getting useful insights out. All these obstacles prevent caregivers from helping clients as much as they might with easier-to-use analytics.

Improving Patient Care (and Improving Analytics for Practitioners)

Analytics and insight-mining systems have huge potential to improve patient care. Again, healthcare data is too massive for humans to handle unaided. However, there’s hope: Hamza mentioned that AI systems were already being used in medical settings to aggregate research and present an array of options to practitioners without them having to dig through numerous sources themselves.

“A doctor or a nurse does not work nine-to-five. They work long shifts and their whole mindset is focused on solving the mystery and helping the patient. They do not have time to scour through all kinds of tables and numbers. They want an easy-to-understand dashboard that tells a story from A-to-Z in one glance and answers their question.”

This is a huge opportunity for software and analytics companies to help improve patient care and user experience. Integrating easy-to-understand dashboards and analytics tools within medical software lowers the barrier to entry and serves up insights that practitioners can use to make better decisions. The next step is also giving clinicians tools to build their own dashboards to answer their own questions.

The Future of Healthcare Analytics

Many healthcare providers might not know how much analytics could be improving their lives and the care they give their patients. But they certainly know that they’re spending a lot of time gathering information and putting it into systems (and, again, that they have a ton of data). This is slowly changing today and will only accelerate as time goes on. The realization of how much a powerful analytics and BI system could help them with data gathering, insight harvesting, and providing better care will drive more organizations to start using a software’s analytics capabilities as a factor in their future buying decisions.

Additionally, just serving up insights won’t be enough. As analytics become more mainstreamed, users will want the power to dig into data themselves, perform ad hoc analyses, and design their own dashboards. With the right tools and training, even frontline users like doctors and nurses can be empowered to become builders, creating their own dashboards to answer the questions that matter most to them.

“We have doctors who are designers,” Hamza says. “They are designing their own dashboards using our entire dataset, combining millions of rows and records to get the answers that they are looking for.”

Builders are everywhere. Just as the healthcare space is shifting away from only using analytics in financial departments and putting insights into the hands of frontline practitioners, the right tools democratize the ability to create new dashboards and even interactive analytics widgets and empower anyone within an organization to get the answers and build the tools they need.

Creating Better Experiences

When it comes to the true purpose of healthcare analytics, Hamza summed it up perfectly:

“In the end, it’s all about helping end users create a better experience.”

The staggering volume of data that the healthcare industry creates presents a huge opportunity for analytics to find patterns and insights and improve the lives of patients. As datasets become more massive and the analytical questions become more challenging, healthcare teams will rely more and more on the analytics embedded within their EMR systems and other software. This will lead them to start using the presence (or lack thereof) and quality of those analytics when making buying decisions. Software companies that understand this will build solutions that answer questions and save lives; the ones that don’t might end up flatlining.

Source

What’s the True Cost of a Data Breach?

The direct hard costs of a data breach are typically easy to calculate. An organization can assign a value to the human-hours and equipment costs it takes to recover a breached system. Those costs, however, are only a small part of the big picture.

Every organization that has experienced a significant data breach knows this firsthand. Besides direct financial costs, there are also lost business, third-party liabilities, legal expenses, regulatory fines, and damaged goodwill. The true cost of a data breach encompasses much more than just direct losses.

Forensic Analysis. Hackers have learned to disguise their activity in ways that make it difficult to determine the extent of a breach. An organization will often need forensic specialists to determine how deeply hackers have infiltrated a network. Those specialists charge between $200 and $2,000 per hour.

Customer Notifications. A company that has suffered a data breach has a legal and ethical obligation to send written notices to affected parties. Those notices can cost between $5 and $50 apiece.

Credit Monitoring. Many companies will offer credit monitoring and identity theft protection services to affected customers after a data breach. Those services cost between $10 and $30 per customer.

Legal Defense Costs. Customers will not hesitate to sue a company if they perceive that the company failed to protect their data. Legal costs between $500,000 and $1 million are typical for significant data breaches affecting large companies. Companies often mitigate these high costs with data breach insurance because it covers liability and notification costs, among others.

Regulatory Fines and Legal Judgments. Target paid $18.5 million after a 2013 data breach that exposed the personal information of more than 41 million customers. Advocate Health Care paid a record $5.5 million fine after thieves stole an unsecured hard drive containing patient records. Fines and judgments of this magnitude can be ruinous for a small or medium-sized business.

Reputational Losses. Quantifying the value of lost goodwill and standing within an industry after a data breach is difficult, yet that lost goodwill can translate into losing more than 20 percent of regular customers, plus revenue declines exceeding 30 percent. There’s also the cost of missed new business opportunities.

The total losses that a company experiences following a data breach depend on the number of records lost. The average per-record loss in 2017 was $225. Thus, a small or medium-sized business that loses as few as 1,000 customer records can expect to realize a loss of $225,000. This explains why more than 60 percent of SMBs close their doors permanently within six months of experiencing a data breach.
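
The arithmetic behind that estimate is simple to reproduce. The sketch below uses the per-record figure above plus the low end of the notification and credit-monitoring costs cited earlier, purely as a back-of-the-envelope illustration rather than a formal loss model.

def estimated_breach_loss(records_lost, per_record_cost=225.0,
                          notification_cost=5.0, monitoring_cost=10.0):
    """Rough direct-loss estimate: average per-record losses plus the low end
    of the notification and credit-monitoring costs cited above."""
    return records_lost * (per_record_cost + notification_cost + monitoring_cost)

# A small business losing 1,000 customer records: $225,000 in per-record
# losses plus roughly $15,000 in notifications and credit monitoring.
print(f"${estimated_breach_loss(1000):,.0f}")  # $240,000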

Knowing the risks, companies can focus on devoting their cyber security budget to prevention and response. The first line of defense is technological, including network firewalls and regular employee training. However, hackers can still slip through the cracks, as they’re always devising new strategies for stealing data. A smart backup plan includes a savvy response and insurance to cover the steep costs if a breach occurs. After all, the total costs are far greater than just business interruption and fines; your reputation is at stake, too.

Originally Posted at: What’s the True Cost of a Data Breach? by thomassujain