Given that we see fake profiles and chatbots that misfire and miscommunicate, we would like your thoughts on whether there should be some sort of government registry for bots so that consumers know whether they are legitimate. If we had a registry for trolls and/or chatbots, would that reassure people that they are dealing with a legitimate business, or let them know when a profile, troll, or bot is fake? Is it time for a Good Housekeeping seal of approval for AI?
These are provocative questions, and they are so new and so undefined that I am not sure there is a single answer. What do you think? Who should create such standards? Perhaps we should start by categorizing the types of AI?
How do we hire data scientists at SAS, since we are not unique in our search for a rare talent type that continues to be in high demand? This post is the last in a series on finding data scientists, based on best practices at SAS and illustrated with some of our own "unicorns." You can read my first blog post on calling them unicorns and for tips 1 and 2 on finding them in an MS in Analytics program or from a great program you may not have heard of. You can read tips 3 and 4 on how to find this kind of talent outside the traditional STEM academic disciplines. And tips 5, 6, and 7 detail the value we've found in intern programs, social networks, and sponsorship of foreign nationals.
This last post focuses on less tangible aspects, related to curiosity, clarity about what kind of data scientist you need, and having appropriate expectations when you hire.
8. Look for people with curiosity and a desire to solve problems
As I blogged previously, Greta Roberts of Talent Analytics will tell you that the top traits to look for when hiring analytical talent are curiosity, creativity, and discipline, based on a study her organization did of data scientists. It is important to discover whether your candidates have these traits, because they separate candidates who find practical solutions from those who may get lost in theory. My boss Radhika Kulkarni, the VP of Advanced Analytics R&D at SAS, self-identified this pattern when she arrived at Cornell to pursue a PhD in math. This realization prompted her to switch to operations research, which she felt would allow her to investigate practical solutions to problems, which she preferred to more theoretical research.
That passion continues today, as you can hear Radhika describe in this video on moving the world with advanced analytics. She says, "We are not creating algorithms in an ivory tower and throwing it over the fence and expecting that somebody will use it someday. We actually want to build these methods, these new procedures and functionality, to solve our customers' problems." This kind of practicality is another key trait to evaluate in your job candidates, in order to avoid the pitfall of hires who are obsessed with finding the "perfect" solution. Often, as Voltaire observed, "Perfect is the enemy of good." Many leaders of analytical teams struggle with data scientists who haven't yet learned this lesson. Beating a good model to death for that last bit of lift leads to diminishing returns, something few organizations can afford in an ever-more competitive environment. As an executive customer recently commented during the SAS Analytics Customer Advisory Board meeting, there is an "ongoing imperative to speed up that leads to a bias toward action over analysis. 80% is good enough."
9. Think about what kind of data scientist you need
Ken Sanford describes himself as a talking geek, because he likes public speaking. And he's good at it. But not all data scientists share his passion and talent for communication. This preference may or may not matter, depending on the requirements of the role. As this Harvard Business Review blog post points out, the output of some data scientists will be to other data scientists or to machines. If that is the case, you may not care if the data scientist you hire can speak well or explain technical concepts to business people. In a large organization or one with a deep specialization, you may just need a machine learning geek and not a talking one! But many organizations don't have that luxury. They need their data scientists to be able to communicate their results to broader audiences. If this latter scenario sounds like your world, then look for someone with at least the interest and aptitude, if not yet fully developed, to explain technical concepts to non-technical audiences. Training and experience can work wonders to polish the skills of someone with the raw talent to communicate, but don't assume that all your hires must have this skill.
10. Don't expect your unicorns to grow their horns overnight
Annie Tjetjep relates development for data scientists to frozen yogurt, an analogy that illustrates how she shines as a quirky and creative thinker, in addition to working as an analytical consultant for SAS Australia. She regularly encounters customers looking for data scientists who have only chosen the title, without additional definition. She explains: "…potential employers who abide by the standard definitions of what a 'data scientist' is (basically equality on all dimensions) usually go into extended recruitment periods and almost always end up somewhat disappointed - whether immediately because they have to compromise on their vision or later on because they find the recruit to not be a good team player…. We always talk in dimensions and checklists but has anyone thought of it as a cycle? Everyone enters the cycle at one dimension that they're innately strongest or trained for and further develop skills of the other dimensions as they progress through the cycle - like frozen yoghurt swirling and building in a cup…. Maybe this story sounds familiar… An educated statistician who picks up the programming then creativity (which I call confidence), which improves modelling, then business that then improves modelling and creativity, then communication that then improves modelling, creativity, business and programming, but then chooses to focus on communication, business, programming and/or modelling - none of which can be done credibly in Analytics without having the other dimensions. The strengths in the dimensions were never equally strong at any given time except when they knew nothing or a bit of everything - neither option being very effective - who would want one layer of froyo? People evolve unequally and it takes time to develop all skills and even once you develop them you may choose not to actively retain all of them."
So perhaps you hire someone with their first layer of froyo in place and expect them to add layers over time. In other words, don't expect your data scientists to grow their unicorn horns overnight. You can build a great team if they have time to develop as Annie describes, but it is all about having appropriate expectations from the beginning.
Free Download of Research Report on the Patient Experience
I spent the past few months conducting research on and writing about the importance of the patient experience (PX) in US hospitals. My partners at TCELab have helped me summarize these studies into a single research report, Improving the Patient Experience. As far as I am aware, this series of studies is the first to integrate these disparate US hospital data sources (e.g., patient experience, health outcomes, process of care, and Medicare spending per patient) and apply predictive analytics to identify the reasons behind a loyal patient base.
While this research covers US hospitals as a whole, individual hospitals still need to dig deeper into their own patient experience data to understand what they must do to improve. This report is a good starting point for hospitals looking to improve the patient experience and increase patient loyalty. Read the entire press release about the research report, Improving the Patient Experience.
Get the free report from TCELab by clicking the image or link below:
Big data is threatening to crush local democracy across the country, and if it succeeds, it may distort local transit and infrastructure development for decades to come.
As Uber has sought to dominate the local taxi industry from Delhi to New York City, the company has deployed its multi-billion-dollar venture capital war chest to fight politicians across the country and world, often ignoring local laws as it introduced its app and drivers into the heavily regulated taxi industry. In New York City, a bill has been introduced to limit the growth of the company locally while the City Council studies the implications for the local taxi industry.
Yesterday, Uber added an attack ad against the City's mayor Bill De Blasio on the front page of its hailing app, melding its attempt to control local taxi service with seeking control of local politics. In doing so, it highlights the danger of letting multi-billion-dollar global corporations control any part of local transit or other infrastructure, since it gives them a stake in distorting local politics as well.
Uber may be a young company, but it has entered old-style politics with a vengeance, hiring David Plouffe, the former strategist for President Obama's 2008 election campaign, to help direct a team of 250 lobbyists operating in at least 50 cities and states around the country. On top of vast financial resources for traditional lobbying, it controls an equally important resource: data and communication with voters throughout local constituencies. In local political fights, Uber has used email and its app real estate to launch multiple attacks on political opponents.
An Opening Salvo in the Politics of Local Logistics
The fight over Uber is not about who runs local taxis, but really about who will control local transit and related infrastructure in the future. Uber has made it clear its ambitions go far beyond taxis to encompass what Inc. magazine calls "the future of logistics."
The data Uber collects on users and local transportation can be converted into delivery services or, as writer Ken Roose explains, "like Amazon, it can become something akin to an all-purpose utility - it'll just be a way you get things and go places." Uber has already launched a prototype "Uber Cargo" delivery business in Hong Kong and food delivery and courier services in other cities.
This ties into plans by companies like Google and Tesla to introduce driverless cars and a rush of tech companies to control the logistics and information related to local economies and commerce. Driverless taxis are the obvious long-term next step for a company like Uber and could reshape urban transportation as fundamentally as the original introduction of the automobile.
The big data companies are already gearing up for the politics of controlling the next generation of urban infrastructure. Google, for example, now spends more on lobbying than any other company, a large part of it ($18.2 million) on federal lobbying, but the company has also built a network of state lobbyists to help it in local fights like legalizing its driverless car project.
Driverless cars combined with Uber data (and Google is a major investor in Uber) could remake urban transit, especially as local laws and infrastructure are changed to accommodate them. Analysts are already discussing how the "'transportation cloud'…will quickly become [the] dominant form of transportation - displacing far more than just car ownership, it will take the majority of users away from public transportation as well."
History of Global Corporations Distorting Local Transit Development
With so much at stake, the danger of letting big data money gut local democratic decision-making is obvious. We only need look to the history of how the auto industry used its political muscle to literally pave the way for destroying local mass transit in multiple cities and pushing highways and suburbanization. Some of that movement was going to happen naturally, but the auto companies sped the process along and deepened it with actions such as buying up local trolley systems and converting them to bus systems.
City streets, which had been a resource shared by cars, bikes, and pedestrians, were converted to car-only use. Where car drivers had once been held criminally liable for any pedestrian killed by their car, car companies launched major lobbying campaigns to create a new crime, "jaywalking," that put the responsibility on pedestrians to stay out of cars' way.
Like Uber, the car companies used the communication infrastructure of the day, in that case wire services for reporters, to seize control of the public debate on use of streets. The National Automobile Chamber of Commerce encouraged reporters to send basic details of traffic accidents to its service and receive back a complete article to print the next day, with the articles shifting the blame for accidents to pedestrians.
The result of decades of car industry lobbying was the gutting of much of urban America.
Fighting Monopoly as a Political Problem
With momentous political decisions facing local governments as big data, driverless cars, and other technologies reshape local transit and logistics, we need to worry not only about big data monopolies distorting local industries at the economic level, but also about their political power distorting political decision-making.
Uber is backed by an array of economic players, from Google to Goldman Sachs, and local politicians should recognize that legalizing Uber is not just adding another economic player to the local economy. It adds a political player willing to spend billions of dollars with the goal of establishing a global logistics behemoth, and seemingly willing to waste any politician who gets in its way.
If Uber is going to misuse its economic power to try to control local political institutions — including real estate on its apps as political attack ads — local governments should feel justified in restricting its growth until both the potential economic and political problems of an emerging taxi monopolist are addressed. And it raises the broader issue of how big data's political power needs to be restrained to ensure communities get to decide how best to use technology, rather than the technology companies deciding how best to use communities for their own economic interests.
Note: This article originally appeared in Huffington Post. Click for link here.
Janet Amos Pribanic, Chief Operating Officer, John Daniel Associates, Inc. (Janet's Profile)
Business Analytics is changing rapidly. Traditional BI is being challenged due to the rate at which we are not only collecting data, but wanting to leverage that data for business advantage.
A recent survey of over 40 technology vendors' clients by one of the largest IT research firms showed that ultimately, the customer experience matters. The value given to the customer is what matters, always! So, how do we get there?…
Even if not perfect, get it in the hands of the business fast
When we evaluate successful analytics customers, many of them started with a very inexpensive (read: free trial) solution and leveraged that experience to build a successful solution and architecture. What better way to learn than to fail (or partially fail) and take that experience forward? The term "fail" here does not necessarily mean a solution has been built and thrown away. It means that what has been learned via a trial process or prototype has given us valuable insight into what is most meaningful moving forward with analytics and a successful analytics architecture. A functional architecture, or methodology, is the best place to start. Here are the 5 steps:
Identify the business problem we are out to solve with data (analytics)
Gather the data
Build the model to support the business
View and explore the data
Deploy the operational insights
And note: this is an iterative process. There is value in getting this out there even if it is not exactly right. Get it out there fast: you will gather feedback sooner, and the business will gain better insight because they can see results and take corrective action faster. In addition, disparities and incorrect business rules are exposed faster, and with that exposure the organization can take action and leverage powerful, inexpensive architectures, like Hadoop, to take in massive amounts of data and store it very cheaply. Now you are prepared to take on strategic business analytics.
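The five steps above can be sketched as a simple loop. This is only a toy illustration of the shape of the process; the step functions and the data are hypothetical placeholders, not a real pipeline:

```python
# A minimal sketch of the 5-step functional methodology described above.
# All functions and data here are illustrative placeholders.

def identify_problem():
    return "late vendor deliveries"  # step 1: the business problem

def gather_data():
    return [5, 12, 3, 18, 7]  # step 2: toy data, days late per delivery

def build_model(data):
    return sum(data) / len(data)  # step 3: toy "model", average lateness

def explore(data, model):
    return [d for d in data if d > model]  # step 4: deliveries worse than average

def deploy(insights):
    # step 5: operationalize, e.g. flag deliveries for corrective action
    return f"flag {len(insights)} deliveries for corrective action"

problem = identify_problem()
data = gather_data()
model = build_model(data)
outliers = explore(data, model)
action = deploy(outliers)
print(action)  # flag 2 deliveries for corrective action
```

In a real deployment, the loop repeats: feedback from the deployed insights refines the problem definition, the data gathered, and the model.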
Let's look at a possible example. You have identified and gathered data around a problem in your supply chain. After gathering the data, the next move is to explore it. You sample the data in the data set, test it and analyze it. You develop hypotheses which are then tested. For example, you might do analysis to figure out which two or three vendors are late in deliveries, resulting in customer satisfaction issues.
After you determine why those vendors are not performing, you operationalize the insights you've gained, building them into your business logic and workflows. If ABC Company is consistently late on deliveries within the supply chain and contributes to two of the actions that map to the supply chain (challenge) model, begin corrective action before additional clients are affected.
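The exploration step for this example might look like the following sketch, which finds the vendors with the worst late-delivery rates. The vendor names and delivery records are made up for illustration:

```python
from collections import defaultdict

# Hypothetical delivery records: (vendor, days_late); 0 means on time.
deliveries = [
    ("ABC Co", 4), ("ABC Co", 6), ("ABC Co", 0),
    ("XYZ Ltd", 0), ("XYZ Ltd", 1),
    ("Acme", 7), ("Acme", 9),
]

# Tally late and total deliveries per vendor.
late_counts = defaultdict(lambda: [0, 0])  # vendor -> [late, total]
for vendor, days_late in deliveries:
    late_counts[vendor][1] += 1
    if days_late > 0:
        late_counts[vendor][0] += 1

# Late-delivery rate per vendor, then the two or three worst offenders.
late_rates = {v: late / total for v, (late, total) in late_counts.items()}
worst = sorted(late_rates, key=late_rates.get, reverse=True)[:2]
print(worst)  # ['Acme', 'ABC Co']
```

These flagged vendors become the hypotheses to test further before operationalizing any business rule.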
Once insights are operationalized, it is important to close the model feedback loop. The models you build could (and likely will) change over time; the factors that caused this year's supply chain challenges may not hold a year from now, as market and other factors change. Re-test your hypotheses to see if the model still holds or needs adjustment. For example, as new forms of vendor interaction are introduced, the supply chain variables may also change over time.
Advanced analytics should become the mainstay of your secret sauce. The point is to use all of your resources effectively: data modelers and the many people with business domain knowledge.
Get analytics out there fast, even if not perfect: it's a counterintuitive way to make rapid progress. Take in only as much data as you need, analyze and create operational models, and refine those models. Then take those models and begin to build your functional architecture.
To deliver successful analytics, you will also need to plan and staff correctly. To do advanced analytics at scale, there are two approaches from a staffing perspective:
Hire lots of expensive data modelers
Leverage people in your company who are technically savvy and have strong business acumen
The first way is certainly challenging. In fact, it cannot scale, because there is not an abundant supply of data modelers to hire, even if you had unlimited resources.
The second way takes a different and proven successful approach. Do not be constrained by the technology that enables analytics but instead focus on the strength of logic that powers analytics.
In practical terms, ask data modelers to partner with you to research ways to solve business problems, and then have them build consumable models that solve those problems. With models that serve as templates, businesspeople can perform analytics on their own, working with the models the data modelers developed, and then extend those models to new areas as business logic dictates.
Your company will benefit significantly from this secret sauce. Choose to make it part of your core competency!
I have been doing some work on how investment professionals can use customer feedback as part of their valuation process. I include a case study of an investment firm that used customer feedback to help confirm the valuation of a target company (it did), and also to decide where to start in terms of managing the business to secure its future.
Investment professionals take a huge risk when they purchase or make a significant investment in a business. To identify and minimize their investment risk, these professionals conduct due diligence of the business. Due diligence is an investigation or audit of a potential investment. Investors typically examine such matters as the business's finances, proprietary information, employees, insurance, equipment and property, and litigation claims, to name a few. (Entrepreneur.com offers these due diligence questions and a downloadable checklist; Forbes has its checklist; Inc.com has its own checklist and even offers some advice for conducting due diligence.)
While some due diligence efforts include an examination of customer data, they typically focus on identifying the number and types of customers (e.g., where they are located, their size). Even when customer feedback is included in the due diligence process, the sellers hand-pick a few customers to be interviewed by the buyer, resulting in potentially biased information about the health of the business and an inflated perceived value. If customer feedback is to be of value, a more rigorous approach is needed. In this post, I outline a more in-depth approach to using systematic customer feedback in the due diligence process.
1. Ask a Representative Sample of Customers
When soliciting customer feedback, take steps to ensure the feedback is representative of all possible feedback from the population of customers. Ask for a complete customer list from the seller and randomly select the customers you want to give you feedback. If you are particularly concerned about specific customer segments, use stratified random sampling (random selection occurs within each customer segment) to ensure you get enough respondents for the segments in question.
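Stratified random sampling can be sketched in a few lines. This toy example assumes a customer list carrying a segment label; the segment names and sizes are hypothetical:

```python
import random
from collections import defaultdict

# Hypothetical customer list with a segment label per customer.
customers = (
    [{"id": i, "segment": "enterprise"} for i in range(40)]
    + [{"id": i, "segment": "smb"} for i in range(40, 140)]
)

def stratified_sample(customers, per_segment, seed=0):
    """Randomly select `per_segment` customers within each segment."""
    rng = random.Random(seed)  # fixed seed for a reproducible draw
    by_segment = defaultdict(list)
    for c in customers:
        by_segment[c["segment"]].append(c)
    sample = []
    for members in by_segment.values():
        sample.extend(rng.sample(members, min(per_segment, len(members))))
    return sample

sample = stratified_sample(customers, per_segment=10)
print(len(sample))  # 20: ten from each segment
```

Drawing a fixed number per segment guarantees enough respondents from small segments that simple random sampling might under-represent.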
While a census is unnecessary to get a reliable picture of the entire customer base, I recommend that, when possible, you invite all customers to provide feedback. For B2C companies, surveys need to be targeted to the buyer of the products/services. For B2B companies, due to the nature of the buying process, surveys need to be targeted to all parties who are directly and indirectly involved in buying the company's products/services (e.g., decision makers and decision influencers).
Verify the quality of the sample of customers by comparing the demographic make-up of the sample to that of the entire customer base. The extent to which the sample is representative of the population will determine the quality of the inferences you are able to make about the population. To make any meaningful conclusions about the value of the target company, the customers you ask need to be a representative sample of the population of customers.
2. Ask about Customer Loyalty
The value of a company is directly impacted by customer loyalty: the greater the customer loyalty, the higher the company's value. Customers can increase the value of a company by engaging in three different types of customer loyalty behaviors. As illustrated in the Customer Loyalty Measurement Framework, these three types are: 1) retention loyalty (valuable customers stay around for a long time), 2) advocacy loyalty (customers tell family/friends about the company to drive new customer growth) and 3) purchasing loyalty (customers increase their share-of-wallet to drive average revenue per user/customer (ARPU) growth).
Because customers can exhibit their loyalty to a company in different ways, you need to ask the right loyalty questions for your specific needs. Does the target company have a history of high defection rates? If so, ask customers about their intention of staying. Does it have stagnant ARPU growth? If so, ask customers about their intention of buying different products. Does it historically have low new-customer growth? If so, ask customers about their intention of recommending the company to their friends.
As a starting point, consider including a loyalty question for each of the three general types of loyalty behaviors: retention, advocacy and purchasing (see the RAPID loyalty approach):
Likelihood to switch providers (retention)
Likelihood to renew service contract (retention)
Likelihood to recommend (advocacy)
Overall satisfaction (advocacy)
Likelihood to purchase different solutions from <Company Name> (purchasing)
Likelihood to expand use of <Company Name’s> products throughout company (purchasing)
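Responses to the six questions above can be rolled up into a score per loyalty type. The sketch below uses hypothetical 0-10 ratings and short keys standing in for the questions; note that "likelihood to switch" is negatively keyed, so it is reverse-scored before averaging:

```python
# Hypothetical 0-10 ratings keyed by short names for the six questions above.
responses = [
    {"switch": 2, "renew": 9, "recommend": 9, "overall_sat": 8, "buy_different": 4, "expand": 3},
    {"switch": 3, "renew": 8, "recommend": 7, "overall_sat": 7, "buy_different": 5, "expand": 4},
]

# "Likelihood to switch" is negatively keyed: reverse-score it first.
for r in responses:
    r["switch"] = 10 - r["switch"]

loyalty_types = {
    "retention": ["switch", "renew"],
    "advocacy": ["recommend", "overall_sat"],
    "purchasing": ["buy_different", "expand"],
}

def mean_by_type(responses, loyalty_types):
    """Average all ratings belonging to each loyalty type."""
    return {
        ltype: sum(r[q] for r in responses for q in qs) / (len(responses) * len(qs))
        for ltype, qs in loyalty_types.items()
    }

means = mean_by_type(responses, loyalty_types)
print(means)
```

A profile like this one (strong retention and advocacy, weak purchasing) mirrors the case study results later in the post.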
3. Ask about Customer Experience (CX)
Customer loyalty is impacted by the customer experience. According to Wikipedia, the customer experience (CX) is the sum of all experiences a customer has with a supplier of goods and services over the duration of that relationship. Customers who are satisfied with their experience with the supplier stay longer, recommend more, and buy more from the supplier compared to customers who are less satisfied with their experience.
Ask customers about their experience with the company. While you could ask literally hundreds of CX questions about each specific aspect of their experience, research shows that you only need a few CX questions to understand what drives loyalty. For example, ask customers how satisfied they are with the target company in each of these areas:
Ease of doing business
Communications from the Company
Future Product/Company Direction
4. Ask about Relative Performance
Companies do not perform in a vacuum; competitors are vying for the same customers and limited prospects as the target company you are purchasing. If the target company has plenty of competitors in its space, you need to understand where it ranks relative to the competition. After all, top-ranked companies receive a greater share of wallet than their bottom-ranked competitors. All things equal, a company that is ranked lowest is less valuable than a company that is ranked highest.
Ask customers how the company compares to its competitors. Toward that end, the Relative Performance Assessment (RPA), a competitive analytics solution, helps investors understand the relative ranking of the target company and identify ways to increase that ranking and, consequently, share of wallet. In its basic form, the RPA method requires two questions:
What best describes Company’s performance compared to the competitors you use?
Please tell us why you rank Company's performance the way you do. This question allows each customer to indicate the reasons behind his/her response about your ranking. The content of the customers' comments can be examined to identify underlying themes to help diagnose the reasons for high or low rankings.
To understand the value of the company you are purchasing, you need to know how you measure up to the competition. More importantly, after the purchase, the RPA will help you know what you need to do to improve your ranking in the industry.
5. Ask about Company-Specific Issues
Investors may have a need to ask additional questions that are specific to the target company. These questions, driven by specific business needs, can include demographic questions (if not included in their CRM system), open-ended questions, and targeted questions. Typical questions in B2B relationship surveys include:
Time as a customer
Job function (e.g., Marketing, Sales, IT, Service)
Level of influence in purchasing decisions of <Company Name> solutions (Primary decision maker, Decision influencer, No influence)
Include one or two open-ended questions that allow respondents to provide additional feedback in their own words. Depending on how the questions are phrased, customers’ remarks can provide additional insight about the health of the customer relationship. Text analytics help you understand both the primary content of words as well as the sentiment behind them. To understand potential improvement areas, a question I commonly use is:
If you were in charge of <Company Name>, what improvements, if any, would you make?
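Open-ended answers to a question like the one above can be mined even with a crude keyword approach. This sketch only illustrates the idea behind content and sentiment analysis; the word lists and comments are hypothetical, and real text analytics tooling would do far more:

```python
# Crude keyword-based sentiment scoring of open-ended survey comments.
# Word lists and comments are hypothetical, for illustration only.
positive = {"great", "helpful", "reliable", "easy"}
negative = {"slow", "confusing", "expensive", "late"}

comments = [
    "Support is helpful but shipping is slow",
    "Great product, easy to use",
    "Pricing is confusing and deliveries are late",
]

def sentiment(comment):
    """Count positive minus negative keyword hits in one comment."""
    words = {w.strip(".,").lower() for w in comment.split()}
    return len(words & positive) - len(words & negative)

scores = [sentiment(c) for c in comments]
print(scores)  # [0, 2, -2]
```

Even this rough signal separates clearly happy, mixed, and unhappy responses; proper text analytics would add theme extraction on top.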
Customer relationship surveys can be used to collect feedback about specific topics that are of interest to executive management. Give careful consideration to asking additional questions. As with any survey question, you must know exactly how the data from the questions will be used to improve customer loyalty. Some popular topics of interest include measuring 1) perceived benefits of solutions and 2) perceived value. Some sample questions are:
How much improvement did you experience in productivity due to <Company Name’s> solutions?
Satisfaction with price of the solution given the value received
Next, I will present an example of how one investment firm used customer feedback to help in their due diligence process.
An investment firm wanted to expand its portfolio of companies by purchasing an existing B2B company. As part of the due diligence process, the investment firm worked with the target company to acquire its customer email list for a Web-based customer survey. The investment firm used the Customer Relationship Diagnostic (CRD) to collect customer feedback. The CRD is a brief survey that asks customers about different types of customer loyalty, satisfaction with general CX touch points, relative performance, and a few company-specific questions.
About 70% of customers responded to the survey; respondents consisted primarily of decision makers and decision influencers (~80%) and of Managers, Directors or Executives (~70%).
Case Study: Loyalty Results
Customer loyalty results are located in Figure 1. As you can see, customers reported moderate levels of customer loyalty for most of the loyalty questions (e.g., advocacy and retention). For purchasing loyalty, customers reported low likelihood of buying different products and low likelihood of expanding the use of the target company’s solutions.
Case Study: CX Results
Results of the CX ratings can be found in Figure 2. Based on the survey results, the customers were moderately satisfied with their experiences across the touch points, except for Communications from the Company and Future Product/Company Direction.
Between 20% and 50% of the customers said they were dissatisfied with each of the seven customer touch points.
Case Study: Relative Performance Assessment Results
Results of the Relative Performance Assessment ratings are located in Figure 3. Only 42% of the customers indicated that the company was better than the competition; almost 60% indicated that the company was the same as or worse than most other competitors.
After re-scaling the values of the 5-point rating scale (1 = worst to 5 = best) to a 0-100 scale, I estimated that the target company falls roughly at the 54th percentile in its industry; that is, the company's performance is typical compared to its competition.
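The re-scaling itself is a simple linear map from the 1-5 scale onto 0-100. A sketch with hypothetical ratings (not the study's actual data):

```python
def rescale_1_5_to_0_100(x):
    """Linearly map a 1-5 rating (1 = worst, 5 = best) onto a 0-100 scale."""
    return (x - 1) / 4 * 100

# Hypothetical relative-performance ratings from a survey.
ratings = [3, 4, 2, 3, 3, 4, 3, 2, 3, 4]
scaled = [rescale_1_5_to_0_100(r) for r in ratings]
mean_score = sum(scaled) / len(scaled)
print(mean_score)  # 52.5, i.e. roughly the middle of the pack
```

A mean near 50 on this scale corresponds to the "typical performer" interpretation used in the case study.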
Case Study: Determining Dollar Value of Loyalty
To estimate the expected revenue gains/losses of the target company, I worked with the investment firm to translate the customer loyalty ratings into dollar values. We employed subject matter experts (SMEs) and analyzed existing financial reports of the target company to arrive at our best estimates of expected annual revenue gains through new customers (~$300k) and through existing customers purchasing new/different products (~$160k), and estimated the annual revenue at risk due to churn (customers who stop using the company) at ~$450k.
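Using the case study's rounded figures, netting the expected gains against the revenue at risk is one simple way to summarize them. The netting step is my illustration of the arithmetic, not a figure reported by the study:

```python
# The case study's rounded annual estimates, in dollars.
gain_new_customers = 300_000   # expected revenue from new customers
gain_existing = 160_000        # expected revenue from expanded purchases
revenue_at_risk = 450_000      # revenue at risk due to churn

# One simple summary: expected gains minus revenue at risk.
net_expected = gain_new_customers + gain_existing - revenue_at_risk
print(net_expected)  # 10000
```

On these numbers the expected gains only just offset the churn risk, which is consistent with the "middle of the pack" picture from the survey.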
Case Study: The Decision
Overall, the customer feedback confirmed the valuation of the company. While the target company was perceived to be in the middle of the pack in its industry (ranked at the 54th percentile) and the future direction of its products/company appeared dismal (50% were dissatisfied), investors believed they had a management team that could address these shortcomings. The investment firm decided to buy the company.
Case Study: Where to Make Improvements
The investors now became the business owners and, consequently, needed to manage the business to secure its future. The survey results were analyzed to help decide where best to allocate resources: in areas that would improve customer loyalty (and revenue) while minimizing improvement costs.
Using driver analysis on the existing data, the investment firm found that there were three key drivers of customer loyalty: 1) product quality, 2) communications from the company and 3) future product/company direction. Again, using SMEs, we were able to estimate the ROI for improving each of the three key drivers. It turns out that the greatest ROI for CX improvements would be achieved by improving communications from the company and future product/company direction.
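Driver analysis of this kind typically correlates each touch-point rating with the loyalty measure and treats the strongest correlates as candidate drivers. A minimal sketch with made-up ratings (the real analysis used the survey data and may have relied on a different model, such as regression):

```python
import statistics

# Hypothetical per-customer ratings on a 1-5 scale; not the case-study data
loyalty         = [5, 4, 2, 5, 3, 1, 4, 2]
product_quality = [5, 4, 2, 4, 3, 1, 5, 2]
ease_of_use     = [3, 4, 3, 2, 4, 3, 3, 4]

def pearson(x, y):
    """Pearson correlation of two equal-length rating lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (statistics.pstdev(x) * statistics.pstdev(y))

# A touch point whose ratings track loyalty closely is a stronger candidate driver
print(round(pearson(loyalty, product_quality), 2))  # strong positive correlation
print(round(pearson(loyalty, ease_of_use), 2))      # much weaker relationship
```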
Benefits of Using Customer Feedback in your Due Diligence Process
You can significantly enhance your due diligence process through a systematic approach of collecting and analyzing customer feedback. Using the questions I proposed above, here are some benefits you can achieve when you use customer feedback as part of your due diligence process when purchasing a company:
Identify investment opportunities others miss and avoid investing in poor opportunities. Discover the quality of products and services from the people who matter: The customers.
Estimate revenue gains/losses. Using survey data and financial data, you can estimate annual revenue at risk due to customer churn and revenue growth due to new customers and expanding relationships with current customers.
Understand your competitive advantage/disadvantage. Your relative performance will impact how much incremental money your customers will spend with you. Collecting customer feedback can help you identify what you need to do to beat your competition and improve your growth.
Understand the ROI of different improvement efforts.
Investors can gain valuable insight about a target company they are buying by simply asking customers the right questions. Be sure you ask a representative sample of customers so the feedback you get is meaningful and reflects the entire customer base. Ask customers about different types of loyalty behaviors in which they are likely to engage. This feedback can help you estimate revenue gains and risks. Ask customers about their customer experience to identify company strengths as well as potential problems. Ask customers about the company’s relative performance compared to other companies. This insight can help you understand the competitive landscape in the company’s industry and identify ways to improve/maintain your competitive advantage.
When purchasing a company, a systematic approach to surveying the customers (and analyzing the data correctly) can significantly augment the information in your due diligence process and provide a lot of insight about the value of the company. Asking the customers of the target company could mean the difference between acquiring a valuable company or a lemon.
Learn more about the Customer Relationship Diagnostic (CRD) for your due diligence
Oracle has bolstered its database portfolio with the Oracle Data Integrator (ODI), a piece of middleware designed to help analysts sift through big data across a variety of sources.
As the name suggests, the ODI effectively eases the process of linking data in different formats and from diverse databases and clusters, such as Hadoop, NoSQL and relational databases.
This enables Oracle customers to conduct analysis on large and varied datasets without dedicating time and resources to preparing big data in an integrated and secure way prior to analysis.
In effect, the ODI allows huge pools of data to be treated as just another data source to be used alongside more regularly accessed data warehouses and structured databases.
Jeff Pollock, vice president of product management at Oracle, claimed that ODI lets customers use extract, transform and load (ETL) tooling effectively without learning the code needed to carry out such operations by hand.
“Oracle is the only vendor that can automatically generate Spark, Hive and Pig transformations from a single mapping which allows our customers to focus on business value and the overall architecture rather than multiple programming languages,” he said.
Avoiding the need for proprietary code means that the ODI can be run natively with a company’s existing Hadoop cluster, bypassing the need to invest in additional development.
Cluster databases like Hadoop and Spark have traditionally been geared towards programmers with knowledge of the coding needed to manipulate them. On the flipside, analysts would mostly use software tools to carry out enterprise-level data analytics.
The ODI gives the non-code savvy analyst the ability to harness Hadoop and other data sources without requiring the coding knowledge to do so.
It also means that a company’s developers need not retrain to handle multiple databases. Oracle is touting this as a way for companies to save money and time on big data analysis.
IBM just released the results of a global study on how businesses can get the most value from Big Data and analytics. They found nine areas that are critical to creating value from analytics. You can download the entire study here.
The IBM Institute for Business Value surveyed 900 IT and business executives from 70 countries from June through August 2013. The 50+ survey questions were designed to help translate concepts relating to generating value from analytics into actions.
Nine Levers to Value Creation
The researchers identified nine levers that help organizations create value from data. They compared Leaders (those who identified their organization as substantially outperforming their industry peers) with the rest of the sample, and found that the Leaders (19% of the sample) implement the nine levers to a greater degree than the non-Leaders. These nine levers are:
Source of value: Actions and decisions that generate results. Leaders tend to focus primarily on their ability to increase revenue and less so on cost reduction.
Measurement: Evaluating the impact on business outcomes. Leaders ensure they know how their analytics impact business outcomes.
Platform: Integrated capabilities delivered by hardware and software. Sixty percent of Leaders have predictive analytic capabilities, as well as simulation (55%) and optimization (67%) capabilities.
Culture: Availability and use of data and analytics within an organization. Leaders make more than half of their decisions based on data and analytics.
Data: Structure and formality of the organization's data governance process and the security of its data. Two-thirds of Leaders trust the quality of their data and analytics. A majority of Leaders (57%) adopt enterprise-level standards, policies and practices to integrate data across the organization.
Trust: Organizational confidence. Leaders demonstrate a high degree of trust between individual employees (60% between executives, 53% between business and IT executives).
Sponsorship: Executive support and involvement. Leaders (56%) oversee the use of data and analytics within their own departments, guided by an enterprise-level strategy, common policies and metrics, and standardized methodologies, compared to the rest (20%).
Funding: Financial rigor in the analytics funding process. Nearly two-thirds of Leaders pool resources to fund analytic investments. They evaluate these investments through pilot testing, cost/benefit analysis and forecasting KPIs.
Expertise: Development of and access to data management and analytic skills and capabilities. Leaders share advanced analytics subject matter experts across projects; their analytics employees have formalized roles, clearly defined career paths and experience investments to develop their skills.
The researchers state that each of the nine levers has a different impact on the organization's ability to deliver value from data and analytics; that is, all nine levers distinguish Leaders from the rest, but each lever impacts value creation in different ways. The Enable levers need to be in place before value can be seen through the Drive and Amplify levers. The nine levers are organized into three levels:
Enable: These levers form the basis for big data and analytics.
Drive: These levers are needed to realize value from data and analytics; lack of sophistication within these levers will impede value creation.
Amplify: These levers boost value creation.
Recommendations: Creating an Analytic Blueprint
Next, the researchers offered a blueprint on how business leaders can translate the research findings into real changes for their own businesses. This operational blueprint consists of three areas: 1) Strategy, 2) Technology and 3) Organization.
Strategy is about the deliberateness with which the organization approaches analytics. Businesses need to adopt practices around Sponsorship, Source of value and Funding to instill a sense of purpose to data and analytics that connects the strategic visions to the tactical activities.
Technology is about the enabling capabilities and resources an organization has available to manage, process, analyze, interpret and store data. Businesses need to adopt practices around Expertise, Data and Platform to create a foundation for analytic discovery to address today’s problems while planning for future data challenges.
Organization is about the actions taken to use data and analytics to create value. Businesses need to adopt practices around Culture, Measurement and Trust to enable the organization to be driven by fact-based decisions.
One way businesses are trying to outperform their competitors is through the use of analytics on their treasure trove of data. The IBM researchers were able to identify the necessary ingredients to extract value from analytics. The current research supports prior research on the benefits of analytics in business:
Analytic innovators 1) use analytics primarily to increase value to the customer rather than to decrease costs/allocate resources, 2) aggregate/integrate different business data silos and look for relationships among once-disparate metrics, and 3) secure executive support around the use of analytics that encourages sharing of best practices and data-driven insights throughout their company.
To extract value from analytics, businesses need to focus on improving the strategic, technological and organizational aspects of how they treat data and analytics. The research identified nine areas, or levers, that executives can use to improve the value they generate from their data.
You may have heard the term "big data" in reference to companies like Netflix, Google or Facebook. It's the collection of all those little data points about your choices and decision-making process that allows companies to know exactly what movie you're in the mood for when you plop down on your couch with a bowl of popcorn after a long day. Recently, big data has also made a foray into the educational realm. Whether through information gathered through standardized testing or the use of adaptive learning systems, big data is well on its way to completely transforming K-12 education.
Here are 10 ways Big Data is changing K-12 education:
1. Different pace of learning
One of the main challenges that educators currently face is adapting their instruction so it accommodates many different students who learn at different paces. The tools used to collect data, like intelligent adaptive learning systems, are designed to shift the pace of instruction depending on the prior knowledge, abilities and interests of each student. Teachers, in turn, can use this data to inform their pace of instruction going forward.
2. Sharing information
When students change schools or move across state lines, it has often been a challenge for their new teachers to get a firm grasp of what they have covered and which content areas may need more attention. The Common Core standards make data interchangeable across schools and districts.
3. Pinpoint problem areas
A unique feature of big data is that it allows teachers and administrators to pinpoint academic problem areas in students as they learn rather than after they take the test. For example, if a student is working through an adaptive learning program and the data collected reveals that he or she needs more help understanding the fundamental concepts behind fractions, teachers or the adaptive learning system can set aside time to work individually with that student to address and overcome the problem.
4. Need for analysts
Of course, the collection of all of this data isn't helpful for anyone if it just sits there; school districts are beginning to need analysts to interpret it all. Disparate data sets must be linked so that decision makers in a school district can view, sort and analyze the information to develop both long- and short-term plans for improving education. School districts may also need to set up workshops to show teachers how they can use all of this data effectively.
5. Different means of educational advancement
Traditionally, readiness for educational advancement has been determined more by age than whether or not the student was ready to learn more challenging material. Gifted students may be advanced, but they often stay in the same class as their peers because information about what they know can only be collected sporadically. Big data allows teachers and administrators to get a continuous sense of where students are falling academically, and whether or not they are ready to advance.
6. Smooth transitions
The collection of data is not only allowing for smoother transitions between schools, but also between grade levels. Access to databases detailing exactly what students know could prove quite useful to school districts that are in the process of implementing the Common Core State Standards. Because the CCSS are changing academic requirements, some students find that they've inadvertently missed learning something important because it was shifted to the grade below. Data can pinpoint this problem so it can be addressed.
7. Personalized activities
Personalized learning has become a much-heralded approach to education, and big data is helping teachers tailor activities to individual learners. Technology, in particular, is playing a central role. Tech-savvy students can use computer games and adaptive learning programs to complete educational activities that are interactive and take their skill level into account.
8. Using analytics
One significant change that schools are seeing is the increasing use of analytics to inform their approaches. For example, big data can be analyzed to create plans to improve academic results, decrease dropout rates and influence the day-to-day decision making of administrators and teachers.
9. Engage parents and students
It's extremely important for parents to be involved in their children's education, and big data is providing a means of engaging both parents and students. If, at parent/teacher conferences, educators can pinpoint exactly where a child is excelling and where more work is needed, and can provide data to back up those claims, parents will have a clearer understanding of what they can do to help their children succeed in school.
10. Customized instruction
Perhaps most exciting for teachers and students alike is the ability for customized instruction that big data provides. This differs greatly from the approach to education in the past, when teachers would deliver one lesson and expect all students to understand, even if they learned in very different ways.
Is your school using big data? What changes are you seeing?
The role of data and analytics in business continues to grow. To make sense of their plethora of data, businesses are looking to data scientists for help. The job site indeed.com shows continued growth in "data scientist" positions. To better understand the field of data science, we studied hundreds of data professionals.
In that study, we found that data scientists are not created equal. That is, data professionals differ with respect to the skills they possess. For example, some professionals are proficient in statistical and mathematical skills while others are proficient in computer science skills. Still others have strong business acumen. In the current analysis, I want to determine the breadth of talent that data professionals possess to better understand the possibility of finding a single data scientist who is skilled in all areas. First, let's review the study sample and the method of how we measured talent.
Assessing Proficiency in Data Skills
We surveyed hundreds of data professionals about their skills in five areas: Business, Technology, Math & Modeling, Programming and Statistics. Each skill area included five specific skills, totaling 25 different data skills in all.
For example, in the Business Skills area, data professionals were asked to rate their proficiency in such specific skills as "Business development" and "Governance & Compliance (e.g., security)." In the Technology Skills area, they were asked to rate their proficiency in such skills as "Big and Distributed Data (e.g., Hadoop, Map/Reduce, Spark)" and "Managing unstructured data (e.g., NoSQL)." In the Statistics Skills area, they were asked to rate their proficiency in such skills as "Statistics and statistical modeling (e.g., general linear model, ANOVA, MANOVA, spatio-temporal, Geographical Information System (GIS))" and "Science/Scientific Method (e.g., experimental design, research design)."
For each of the 25 skills, respondents were asked to rate their level of proficiency on a six-point scale, running from Don't Know through Fundamental Awareness, Novice, Intermediate and Advanced up to Expert.
The different levels of proficiency are defined around the data scientist's ability to give, or need to receive, help. In the instructions to the data professionals, the "Intermediate" level of proficiency was defined as the ability "to successfully complete tasks as requested." We used that proficiency level (i.e., Intermediate) as the minimum acceptable level of proficiency for each data skill. The proficiency levels below the Intermediate level (i.e., Novice, Fundamental Awareness, Don't Know) were defined by an increasing need for help on the part of the data professional. Proficiency levels above the Intermediate level (i.e., Advanced, Expert) were defined by the data professional's increasing ability to give help or be known by others as "a person to ask."
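The cutoff rule described above, with Intermediate as the minimum acceptable level, amounts to an ordered lookup. A short sketch; the level names come from the study, and their ascending ordering is an assumption consistent with the text:

```python
# Proficiency levels in ascending order (ordering assumed from the text)
LEVELS = ["Don't Know", "Fundamental Awareness", "Novice",
          "Intermediate", "Advanced", "Expert"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def meets_minimum(rating, minimum="Intermediate"):
    """True if a self-rated proficiency is at or above the cutoff level."""
    return RANK[rating] >= RANK[minimum]

print(meets_minimum("Advanced"))  # True  -- at or above the cutoff
print(meets_minimum("Novice"))    # False -- needs help to complete tasks
```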
We looked at the level of proficiency for the 25 different data skills across four different job roles. As seen in Figure 1, data professionals tend to be skilled in areas that are appropriate for their job role (see green-shaded areas in Figure 1). Specifically, Business Management data professionals show the most proficiency in Business Skills. Researchers, on the other hand, show the lowest level of proficiency in Business Skills and the highest in Statistics Skills.
For many of the data skills, the typical data professional does not have the minimum level of proficiency to be successful at work, no matter their role (see yellow- and red-shaded areas in Figure 1). These data skills include the following: Unstructured data, NLP, Machine Learning, Big and distributed data, Cloud management, Front-end programming, Optimization, Graphic models, Algorithms and Bayesian statistics.
In Search of the Elite Data Scientist
There are a couple of ways an organization can build its data science capability. It can either hire a single individual who is skilled in all data science areas or hire a team of data professionals who have complementary skills. In both cases, the organization has all the skills necessary to use data intelligently. However, the likelihood of finding a data professional who is an expert in all five skill areas is quite low (see Figure 2). In our sample, we looked at three levels of proficiency: Intermediate, Advanced and Expert. We found that only 10% of the data professionals indicated they had at least an Intermediate level of proficiency in all five skill areas. The picture looks bleaker when you look for data professionals who have Advanced or Expert proficiencies in data skills. The chance of finding a data professional with Advanced skills or better in all five skill areas drops to less than 1%. There were no data professionals who rated themselves as Experts in all five skill areas.
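The all-five-areas statistic can be computed by checking, per respondent, whether every skill area clears the chosen cutoff. A sketch with made-up respondents (the reported figures, 10% at Intermediate and under 1% at Advanced, come from the actual survey data):

```python
# Hypothetical respondents: highest proficiency rank per skill area, where
# 3 = Intermediate, 4 = Advanced, 5 = Expert (ranks assumed for illustration)
respondents = [
    {"Business": 4, "Technology": 3, "Math": 5, "Programming": 3, "Statistics": 4},
    {"Business": 2, "Technology": 5, "Math": 4, "Programming": 5, "Statistics": 3},
    {"Business": 3, "Technology": 3, "Math": 3, "Programming": 3, "Statistics": 3},
]

def share_with_min_level(data, cutoff):
    """Fraction of respondents at or above `cutoff` in every skill area."""
    hits = sum(all(v >= cutoff for v in r.values()) for r in data)
    return hits / len(data)

print(share_with_min_level(respondents, 3))  # at least Intermediate in all five areas
print(share_with_min_level(respondents, 4))  # at least Advanced in all five areas
```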
We looked at proficiency differences across five industries: Consulting (n = 52), Education / Science (n = 50), Financial (n = 52), Healthcare (n = 50) and IT (n = 95). We identified data professionals who had an advanced level of proficiency across the different skills. We found that data professionals in the Education / Science industry have more advanced skills (54% have at least an advanced level of proficiency in at least one skill area) compared to data professionals in the Financial (37%) and IT (34%) industries.
The term "data scientist" is ambiguous. There are different types of data scientists, each defined by the skills they possess across five skill areas: Business, Technology, Programming, Math & Modeling and Statistics. So, when somebody tells you they are a data scientist, be sure you know what type they are.
Finding a data professional who is proficient in all data science skill areas is extremely difficult. As our study shows, data professionals rarely possess proficiency in all five skill areas at the level needed to be successful at work. The chance of finding a data professional with Expert skills in all five areas (or even in 3 or 4 skill areas) is akin to finding a unicorn; they just don't exist. There were very few data professionals who even had the basic minimum level of proficiency (i.e., the Intermediate level) in all five skill areas. Additionally, our initial findings on industry differences in skill proficiency suggest that skilled data professionals might be easier to find in specific industries. These industry differences could impact recruitment and management of data professionals. An under-supply of data science talent in one industry could require companies to use more dramatic recruitment efforts to attract data professionals from outside the industry. In industries where there are plenty of skilled data professionals, companies can be more selective in their hiring efforts.
Optimizing the value of business data depends on the skills of the data professionals who process the data. We took a skills-based approach to understanding how organizations can extract value from their data. Based on our findings, we recommend that organizations avoid trying to find a single data professional whose skills span the entire spectrum of data science. Rather, a better approach is to build up your data science capability by forming teams of data professionals who have complementary skills.