7 Keys To Market Share Growth And Sustainability

In a rapidly changing world that is more social and more open, new ways of doing business have emerged, and the definition of doing business has fundamentally changed. Any business that sticks to its legacy ways will starve and lose market share to the competition. At the root of this shift are changes in customer behavior, customer expectations, technology, regulatory complexity and the competitive landscape. Businesses need to understand the impact of these changes and prepare accordingly: they should build strategies that proactively mitigate the effects and prepare for the future. Otherwise they will be at the mercy of competitors, new and old, struggling to maintain profitability until they eventually go bust. Understanding this wave of change and steering in the right direction is essential to keeping market share, and the business itself, afloat.

In this wave of change, there are certain things businesses can do to fundamentally change how they approach their processes and gain market share. The following key areas are good starting points.

Understand customer landscape and needs:
Focusing on customers’ needs is the most important thing a business can do to gain market share. For customer-focused companies like Apple and Amazon, keeping customers happy and satisfied is what separates them from the herd. Customer expectations and needs have shifted in recent years, and companies need to constantly re-evaluate their approach so they can read, meet and exceed them. That means setting up a continuous process to monitor the customer landscape, customers’ needs and their impact on the business. It keeps the business close to its customers, which is a mantra for success.

Be a thought leader:
Finding a niche and staying a thought leader in it is another crucial step toward a sustainable footprint for business growth. The most successful companies are leaders in their space; laggards have some room to maneuver, but only at the mercy of the gap between demand and supply. It is important to find a core area and excel in it. Companies that do innovate, lead and build a brand of trust and credibility with their customers. Building thought leadership is therefore essential to setting a business up for success.

Be the network:
It is rightly said that “the one who builds the network owns the network.” Companies need to facilitate open communication within the organization and with external partners, stakeholders and the wider community. Building a system that enables this communication is a win-win for everyone: it keeps the innovation pipeline full, keeps the dialog going, guards against blind spots and builds better business connections. The network can also serve as a channel for fostering innovation across the company.

Learn to love data:
Data is your friend: it is impartial and insightful. Make it your ally and leverage it in every business decision. Data brings facts and gut-free metrics into decisions, making success more predictable and informed; informed decisions always beat decisions based on hunches. Data can help a company prepare for the future, understand customer expectations, analyze industry trends and build predictive models for its biggest questions. There are many successful use cases for data in today’s competitive landscape, and a company that stays close to its data will rarely struggle to maintain a healthy market share.

Make partners not competitors:
It is a strong statement, but a realistic one. In today’s economy, no one wins in isolation, especially when there are hardly any boundaries left. Partnering is how things get done: people understand the importance of shorter cycle times and fast time to market, and strategic partners and new business models are what make them a reality. So it pays to build a system with more partners than competitors.

Make friends not customers:
To outdo the competition, you need to stay close to your customers and improve retention. The best way to do that is to give your products and services a personal touch, so customers feel special, more like friends. Customers who feel this way reach out with questions and suggestions, and listening and responding builds strong loyalty. In turn, they become brand advocates, generating leads, providing references and helping your business grow. This creates a bond with a positive multiplier effect, especially in the socially connected world we live in.

Build a platform on which others can build:

As discussed earlier, innovation can be a cornerstone of differentiation and growth. Crowdsourcing is now a well-established concept and can be a gateway to countless innovations. Companies can open up parts of their business and let stakeholders, partners and customers disrupt them. Companies from Google to Apple, for instance, have opened their platforms so that others can build on them and disrupt the marketplace.


Source: 7 Keys To Market Share Growth And Sustainability

Data science’s limitations in addressing global warming

Data science is not a magic bag of tricks that can somehow find valid patterns under all circumstances. Sometimes the data itself is far too messy to analyze comprehensively in any straightforward way. Sometimes, it’s so massive, heterogeneous and internally inconsistent that no established data-scientific approach can do it justice.

When the data in question is big, the best-laid statistical models can only grasp pieces of its sprawling mosaic. As this recent article notes, that’s often the case with climate change data, which is at the heart of the global warming debate. Authors James H. Faghmous and Vipin Kumar state their thesis bluntly: “Despite the urgency, data science has had little impact on furthering our understanding of our planet in spite of the abundance of climate data….This…stems from the complex nature of climate data as well as the scientific questions climate science brings forth.”

What’s most instructive about their discussion is how they peel back the methodological onion behind statistical methods in climate-data analysis. The chief issues, they argue, are as follows:

  • Historically shallow data: Modern climate science is a relatively new interdisciplinary field that integrates the work of scientists in meteorology, oceanography, hydrology, geology, biology and other established fields. Consequently, unified climatological data sets that focus on long-term trends are few and far between. Also, some current research priorities (such as global warming) have only come onto climatology’s radar over the past decade or so. As the authors note, “some datasets span only a decade or less. Although the data might be large—for example, high spatial resolution but short temporal duration—the spatiotemporal questions that we can ask from such data are limited.”
  • Spatiotemporal scale-mixing: As a closed system, the planet and all of its dynamic components interact across all spatial scales, from the global to the microscopic, and on all temporal scales, from the geological long-term to the split-second. As the authors note, “Some interactions might last hours or days—such as the influence of sea surface temperatures on the formation of a hurricane—while other interactions might occur over several years (e.g., ice sheets melting).” As anybody who has studied fractal science would point out, all these overlapping interactions introduce nonlinear dynamics that are fearsomely difficult to model statistically.
  • Heterogeneous data provenance: Given the global scope of climate data, it’s no surprise that no single source, method or instrumentation can possibly generate all of it, either at any single point in time or over the long timeframes necessary to identify trends. The authors note that climate data comes from four principal methodologies, each of them quite diverse in provenance: in situ (example: local meteorological stations), remote sensed (example: satellite imaging), model output (example: simulations of climatic conditions in the distant past) and paleoclimatic (examples: core samples, tree rings, lake sediments). These sources cover myriad variables that may be complementary or redundant with each other, further complicating efforts to combine them into a unified data pool for analysis. In addition, measurement instrumentation and data post-processing approaches change over the years, making longitudinal comparisons difficult. The heterogeneous provenance of this massive data set frustrates any attempt to ascertain its biases and vet the extent to which it meets consistent quality standards. Consequently, any statistical models derived from this mess will suffer the same intrinsic issues.
  • Auto-correlated measurements: Even within a very constrained spatiotemporal domain, statistical modeling can prove tricky. That’s because adjacent climate-data measurements often aren’t statistically independent of each other. Unlike the canonical example of rolling a die, where the outcome of each roll is independent of the others, climate-data measurements are often highly correlated, especially when they’re near each other in space and time. Statisticians refer to this problem as “auto-correlation,” and it wreaks havoc with standard statistical modeling techniques, making it difficult to isolate the impacts of different independent variables on the outcomes of interest.
  • Machine learning difficulties: In climatological data analysis, supervised learning is complicated by the conceptual difficulty of defining what specific data pattern describes “global warming,” “ice age,” “drought” and other trends. One key issue is where you put the observational baseline. Does the training data you’re employing describe just one climatological oscillation in a long-term cycle, or a longer-term trend? How can you know? If you instead use unsupervised learning, your model may fit historical data patterns well yet suffer from the statistical problem known as “overfitting”: capturing the idiosyncrasies of the historical record so closely that it generalizes poorly, while being so complex and opaque that domain scientists can’t map its variables clearly to well-understood climatological mechanisms. This can make the model useless for predictive and prescriptive analyses.
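The auto-correlation issue above is easy to demonstrate with a toy simulation (my sketch, not from the article): in an AR(1) process, each measurement is a fraction of the previous one plus independent noise, so adjacent samples are strongly correlated and the effective number of independent observations is far smaller than the raw sample count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process: each "measurement" is 0.9 times the
# previous one plus fresh noise, mimicking temporally adjacent
# climate readings that are not independent.
n, phi = 5000, 0.9
x = np.zeros(n)
noise = rng.standard_normal(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise[t]

# Lag-1 autocorrelation: close to phi, far from the ~0 an i.i.d. sample gives.
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]

# For an AR(1) series the effective sample size shrinks by (1 - phi) / (1 + phi).
ess = n * (1 - phi) / (1 + phi)
print(f"lag-1 autocorrelation: {lag1:.2f}")
print(f"effective sample size: about {ess:.0f} of {n}")
```

With phi = 0.9, roughly 5 percent of the nominal sample size remains effectively independent, which is why standard significance tests that assume independence badly overstate confidence on such data.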

In spite of all those issues, the authors don’t deny the value of data-scientific methods in climatological research. Instead, they call for a more harmonious balance between theory-driven domain science and data-driven statistical analysis. “What is needed,” they say, “is an approach that leverages the advances in data-driven research yet constrains both the methods and the interpretation of the results through tried-and-true scientific theory. Thus, to make significant contributions to climate science, new data science methods must encapsulate domain knowledge to produce theoretically-consistent results.”

These issues aren’t limited to climate data; they apply to other heterogeneous data domains as well. For example, social-network graph analysis is a young field that has historically shallow data and attempts to analyze disparate sources, both global and local. How can data scientists effectively untangle intertwined sociological and psychological factors, given that auto-correlations in human behavior, sentiment and influence run rampant?

If data science can’t get its arms around global warming, how can it make valid predictions of swings in the climate of world opinion?

Originally posted via “Data science’s limitations in addressing global warming”


Source: Data science’s limitations in addressing global warming

See what you never expected with data visualization

Written by Natan Meekers

A strong quote from John Tukey explains the essence of data visualization:

“The greatest value of a picture is when it forces us to notice what we never expected to see.”

Tukey was a famous American mathematician who truly understood data – its structure, patterns and what to look for. Because of that, he was able to come up with some great innovations, like the box plot. His powerful one-liner is a perfect introduction to this topic, because it points out the value of seeing things that we never expected to see.

With the large amounts of data generated every day, it’s impossible to keep up by looking at numbers alone. Applying simple visualization techniques helps us “hear” what the data is telling us. This works because the brain has two halves: the left side is logical, the mathematician; the right side is creative, the artist.
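Tukey’s box plot, mentioned above, is itself a small algorithm: it draws the median, the quartiles, and “fences” at 1.5 times the interquartile range, flagging anything beyond them as an outlier. A minimal sketch with made-up numbers:

```python
import numpy as np

# Made-up sample with one obvious outlier.
data = np.array([12, 15, 14, 16, 18, 17, 15, 14, 13, 16, 40])

q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1

# Tukey's rule: points beyond 1.5 * IQR past the quartiles are outliers.
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr
outliers = data[(data < lower_fence) | (data > upper_fence)]

print(f"median={median}, quartiles=({q1}, {q3}), outliers={outliers}")
```

The value 40 falls above the upper fence and would be drawn as a lone point past the whisker, exactly the kind of “what we never expected to see” a glance at raw numbers can miss.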

Mercedes-Benz, the luxury carmaker, illustrated the value of visualization in its “Whole Brain” campaign in 2012. Ads showed how the two opposing parts of the brain complement each other. They juxtaposed the left side responsible for logic and analysis with the creative and intuitive right side. Through visualization, the campaign communicated that Mercedes-Benz, like the brain, is a combination of opposites. Working together, they create technological innovation, breakthrough engineering, inspiring design and passion.

Mercedes ad depicting left and right brain functions

Visualizing data, i.e. engaging both the left and right sides of the brain, lets you optimize decision-making and speed up ad-hoc analysis. That helps you see trends as they occur and take immediate action when needed.

The most impressive thing is that accurate and informative visualizations are just a click away, even for business users. NO technical background or intensive training is required. With the self-service capabilities of modern tools, you can get much more value out of your data just by pointing and clicking.

Data visualization plays a critical role in a world where so much data is pouring in from so many sources every day. It helps us to understand that data more easily. And we can detect hidden patterns, trends or events quicker than ever before. So start using your data TODAY for what it’s really worth.

To read the original article on SAS Voices, click here.


3 ways to boost revenue with data analytics

Financial management

In a mere decade, the physician practice revenue cycle has been transformed. Gone are the days when most patients had $10 or $20 co-payments and their insurance companies generally paid claims in full. Physicians can no longer order lab work and tests according to their preference without considering medical necessity. And as patients shoulder rising care costs, they have become payers themselves, and they’re not quite accustomed to this role.

All of these factors have led to an increasingly complex and challenging revenue cycle, one that requires innovation. “Doing more with less” may be a cliché, but it rings true for physician practices striving to thrive financially while providing the highest quality care. With the myriad new initiatives and demands vying for their time, however, revenue cycle managers and practice leadership may ask, “Is it even possible to do more with less?”

Surprisingly, the answer is “yes” for most practices. Fortunately, you can achieve this goal leveraging something you already have, or can obtain, within the four walls of your practice: knowledge.

Not many practices can afford to purchase technology strictly for analytics and business intelligence. Additionally, in an environment where challenges such as health reform and regulatory demands take substantial time and attention, practices don’t have the luxury of adding resources to tackle such efforts. Nonetheless, practices can jump-start their analytics efforts and fuel more informed decisions via their clearinghouse. By reviewing clearinghouse reports — both standard and custom — you can identify revenue cycle trends, spot problems and test solutions such as process improvements.

Here’s how you can leverage data to achieve revenue cycle improvement goals such as decreasing days in accounts receivable (A/R), reducing denials and optimizing contract negotiations with payers.

1. Reduce denials and rejections
Effectively managing denials and rejections has always been one of physician practices’ greatest revenue cycle challenges. The more denials and rejections a practice has, the worse key metrics such as days in A/R tend to look, because the practice isn’t getting paid in a timely manner. Denials and rejections are just two of many causes of cash flow delays, but once the reasons behind them are identified, practices can eliminate unproductive work, improve days in A/R and increase profitability because payment comes in more quickly. These basic revenue cycle challenges, coupled with more stringent medical necessity requirements and value-based reimbursement, make the landscape even more demanding.
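For reference, days in A/R is conventionally computed as total outstanding receivables divided by average daily gross charges. A quick sketch, using the standard formula but entirely hypothetical figures:

```python
# Hypothetical figures for illustration only.
total_ar = 180_000.00        # outstanding accounts receivable
gross_charges = 540_000.00   # total charges over the period
period_days = 90

# Average daily charges over the period, then days of charges tied up in A/R.
average_daily_charges = gross_charges / period_days
days_in_ar = total_ar / average_daily_charges

print(f"Days in A/R: {days_in_ar:.1f}")
```

Here the practice carries 30 days of charges in receivables; every denial that delays payment pushes that number higher.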

Since ineligibility is often a leading cause for denials, a denial reduction strategy begins in the front office with quality eligibility information. An automated eligibility process provides front-office staff the data they need while also reducing errors. Allowing staff to check eligibility before patients are seen will set the stage for a more informed discussion regarding patient financial responsibility while also ensuring proper claims submission and reducing write-offs. Denial reports by reason are also an important tool; they can help practice managers identify staff or processes that require additional training.

A customized rejection report can help your team stay abreast of changing payer requirements and identify emerging patterns. Your clearinghouse should be able to generate a quarterly or monthly report that shows the most common reasons for claims rejections. Make sure the report details this information by practice location; staff at high-performing locations may be able to offer tips and advice to other offices with higher rejection rates.
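If your clearinghouse exports raw claim records rather than a finished report, the same location-by-reason breakdown can be produced in a few lines of pandas. The column names below are illustrative; a real export’s fields will vary by clearinghouse.

```python
import pandas as pd

# Illustrative claim-level export.
claims = pd.DataFrame({
    "location": ["Main St", "Main St", "Oak Ave", "Oak Ave", "Oak Ave"],
    "status":   ["rejected", "paid", "rejected", "rejected", "paid"],
    "reason":   ["eligibility", None, "eligibility", "invalid code", None],
})

rejections = claims[claims["status"] == "rejected"]

# Count of rejections per location and reason, most common first.
report = (rejections.groupby(["location", "reason"])
          .size()
          .sort_values(ascending=False))
print(report)

# Rejection rate by location, to spot offices that need attention.
rate = (claims.assign(rej=claims["status"].eq("rejected"))
        .groupby("location")["rej"].mean())
print(rate)
```

The per-location rate is what lets high-performing offices be identified so their staff can share tips with locations that reject more often.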

Practice leadership can email the report and an analysis of patterns and trends to the entire team. An excellent tool to educate managers, coders and billing staff, this email can highlight areas for improvement or where additional training is required. This analysis should be simple and easy to comprehend, providing a quick snapshot of rejections along with practical ideas for improvements. The goal is for staff to be able to make adjustments to day-to-day work processes simply by reviewing the email. It can even generate some healthy competition as teams at different locations strive to make the greatest improvements.

2. Identify problematic procedures and services
In an era of value-based reimbursement, knowing which codes are prone to reimbursement issues can help your practice navigate an increasingly tricky landscape for claims payment. This information can be particularly helpful as you acclimate your practice to each payer’s value-based methodology such as bundled payments or shared savings. A report showing denials by code and per physician can generate awareness regarding potentially problematic claims submission. It can facilitate team education regarding coding conventions, medical necessity rules and payer requirements.

3. Improve contract negotiations
Clearinghouse reports aren’t just useful for education and improvements within your practice; they can also provide valuable insights as you review payer contracts and prepare for negotiations. In payer-specific reports, look for trends such as the average amount paid on specific codes over time. Compare these averages with your other payers, and go into negotiations armed with this data.
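The payer-trend analysis described above amounts to a pivot over remittance data. A sketch with hypothetical remittance lines (payer names, codes and amounts are invented):

```python
import pandas as pd

# Hypothetical remittance lines: payer, CPT code, service year, amount paid.
paid = pd.DataFrame({
    "payer":  ["A", "A", "B", "B", "A", "B"],
    "code":   ["99213", "99213", "99213", "99213", "99214", "99214"],
    "year":   [2016, 2017, 2016, 2017, 2017, 2017],
    "amount": [72.0, 70.0, 80.0, 82.0, 105.0, 112.0],
})

# Average amount paid per code, per year, with one column per payer -
# the side-by-side comparison to bring into contract negotiations.
trend = paid.pivot_table(index=["code", "year"], columns="payer",
                         values="amount", aggfunc="mean")
print(trend)
```

In this toy data, payer A’s payment on 99213 is drifting down while payer B’s is drifting up; that is exactly the kind of trend worth raising at the negotiating table.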

A recent survey of College of Healthcare Information Management Executives (CHIME) members indicates that data analytics is the top investment priority for senior executives at large health systems, trumping both Accountable Care and ICD-10. Their reason: quality improvement and cost reduction are best achieved by evaluating organizational data.

Physician practices can obtain the necessary data to optimize revenue without making costly technology investments. Whether your practice has two physicians or 200, the black-and-white nature of claims data can be invaluable. It can help you evaluate revenue cycle performance, identify problems, drive process changes and ultimately improve cash flow, simply by coupling your newfound knowledge with analytical and problem-solving skills.

Originally posted via “3 ways to boost revenue with data analytics”

Source: 3 ways to boost revenue with data analytics by analyticsweekpick

Jan 04, 18: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


Correlation-Causation cartoon

[ AnalyticsWeek BYTES]

>> Analyzing Big Data: A Customer-Centric Approach by bobehayes

>> July 24, 2017 Health and Biotech analytics news roundup by pstein

>> What Do Customers Hate Most About Bad Customer Service [Infographics] by v1shal



 2017 Inmar Analytics Forum Draws Diverse Audience of Leaders in Retail, Manufacturing and Healthcare – EconoTimes Under  Prescriptive Analytics

 Dineequity Inc (NYSE:DIN) Institutional Investor Sentiment Analysis – KL Daily Under  Sentiment Analysis

 ProgrammableWeb’s best cognitive, analytics, development APIs of 2017 – ZDNet Under  Analytics



The Analytics Edge


This is an Archived Course
EdX keeps courses open for enrollment after they end to allow learners to explore content and continue learning. All features and materials may not be available, and course content will not be…


On Intelligence


Jeff Hawkins, the man who created the PalmPilot, Treo smart phone, and other handheld devices, has reshaped our relationship to computers. Now he stands ready to revolutionize both neuroscience and computing in one stroke…


Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to comparable enterprise adoption, and one of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users and community members need to step up and create awareness within the organization. An aware organization goes a long way toward quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.


Q: When would you use random forests vs. SVM, and why?
A: * For a multi-class classification problem: SVM requires a one-against-all approach, which is memory-intensive
* When you need to know variable importance: random forests can compute it directly
* When you need a model fast: an SVM takes long to tune, since you must choose an appropriate kernel and its parameters (for instance sigma and epsilon)
* In a semi-supervised learning context (random forests with a dissimilarity measure): SVM works only in a supervised learning mode
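These trade-offs are easy to see in scikit-learn (a sketch, not part of the original answer): the random forest handles a 3-class problem natively and exposes variable importances, while the SVM needs its kernel and parameters chosen up front. One caveat: scikit-learn’s SVC actually uses a one-vs-one scheme for multi-class internally, rather than one-against-all.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # a 3-class problem
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Random forest: multi-class out of the box, plus feature importances.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("RF accuracy:", rf.score(X_te, y_te))
print("Feature importances:", rf.feature_importances_)

# SVM: the kernel and its parameters (C, gamma) must be chosen or tuned;
# multi-class is handled internally via one-vs-one.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("SVM accuracy:", svm.score(X_te, y_te))
```

Both models score well on this easy data set; the point is that the forest gives the importance ranking for free, while getting the best out of the SVM means a hyperparameter search.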



Rethinking classical approaches to analysis and predictive modeling




Everybody gets so much information all day long that they lose their common sense. – Gertrude Stein


@BrianHaugli @The_Hanover ?on Building a #Leadership #Security #Mindset #FutureOfData #Podcast





In 2015, a staggering 1 trillion photos will be taken and billions of them will be shared online. By 2017, nearly 80% of photos will be taken on smart phones.

Sourced from: Analytics.CLUB #WEB Newsletter

#FutureOfData with Rob(@telerob) / @ConnellyAgency on running innovation in agency – Playcast – Data Analytics Leadership Playbook Podcast

#FutureOfData with Rob(@telerob) / @ConnellyAgency on running innovation in agency

In this podcast, Rob Griffin of Almighty(X), a Connelly partner company, sat down with Vishal Kumar to discuss how to run innovation in a media agency.

Here’s Rob’s Bio:
Driving transformational innovation within marketing and advertising. Pushing creative and media technology limits. Helping brands take ownership of their technology, data, and media for greater transparency and accountability. Putting the agent back in the agency. Been working in digital marketing and advertising since 1996. A Bostonian. A die hard Celtics fan. Dad. Speaker. Writer. Advisor. Skier. Comic book fan. Lover of good eats.

Originally Posted at: #FutureOfData with Rob(@telerob) / @ConnellyAgency on running innovation in agency – Playcast – Data Analytics Leadership Playbook Podcast by v1shal

Africa: Is Big Data the Solution to Africa’s Big Issues?

At the peak of the Ebola epidemic in West Africa, a Kenyan start-up created an SMS-based reporting system that allowed communities in Sierra Leone to alert the government to new infections and the response in different areas of the country.

Echo Mobile would then forward the texts sent by citizens and health workers to the Central Government Co-ordination Unit, which analyzed the data through a system developed by IBM’s Africa research lab.


The data helped the government map the spread of Ebola and respond quickly to new infections while managing the epidemic in the affected communities. Echo Mobile demonstrated how the continent can leverage simple data to respond to real situations and create precise, effective solutions in good time.


While the most accepted definition of big data is literally massive data sets that need supercomputers to analyze and make sense of, IBM has deconstructed the term with the 4 Vs that data must have to qualify as ‘big data’: volume, variety, veracity and velocity.

IBM estimates that 2.5 quintillion bytes of data, or 2.5 billion gigabytes, is generated every day as the world grows increasingly dependent on the internet and connected devices. It is further estimated that 90 percent of the world’s data has been created in the last two years. For scope, Google announced that 100 hours of video were uploaded to YouTube every minute in 2014.

A huge variety of data, from CCTV cameras, social media, voice, text and more, is churned out every second from countless sources, often as quickly as it is forgotten.

“But it is not enough to have all this data if you cannot verify its authenticity and that’s where veracity comes in,” explains Cory Wiegert, IBM Software Group’s Product Director for Africa. “By these standards (4Vs), you will find that all data is big data.”

Wiegert says the end-game of big data is to find context and meaning by deploying intelligent analytics that enable users to make better decisions. He gives the example of IBM’s cloud application Watson, whose super analytic capabilities give users sophisticated visualizations.

“Watson feeds on volumes of data. We can feed Watson with loads of medical data, from oncology journals to patient files, so that doctors get a more detailed picture when treating cancer patients or in research,” explains Wiegert.

Big data in Africa

While the 4Vs threshold captures big data in mature markets, emerging markets in Africa present a unique challenge to data scientists. Verifying the authenticity of data is hard, and the lack of an entrenched culture of data collection and data-driven decision-making complicates the roll-out of big data projects. However, there are pockets of change across the continent.

“The business community in Africa is starting to take interest in big data. Through social media analytics, businesses are getting insights on what consumers are saying about their brands and services. This ultimately leads to innovation and improved service delivery as businesses adapt to the needs of consumers,” says Wiegert.

IDG Connect research revealed that Kenya and Nigeria are ahead of the curve in adopting big data solutions, with 75 percent of respondents deploying or planning to deploy big data projects.

Still, capacity to implement the projects in these two countries is low, pointing to a lack of awareness of the full ROI of big data.

Odang Madung, co-founder of Odipo Dev, a Nairobi-based data startup, says sectors that are growing their user bases can immediately reap the benefits of data analysis.

“Very many industries could benefit depending on how you think of it, but the ones that are especially ripe for the challenge are telecommunication, finance, retail and media companies,” says Madung.

Kenya Power, for instance, recently deployed an automated system that will not only consolidate customer data collection from 10 different sources, but also mine and analyze customer data.

The analytics solution gives Kenya Power the ability to perform complex queries on the data, yielding better insights into the varying needs of customers across different regions.

Mobile operators receive loads of data per day in the form of voice, internet data and texts. Privacy issues aside, allowing data scientists to comb through a particular data set can help tailor solutions specific to regions.

A 2008-2009 study by researchers from the Harvard School of Public Health, KEMRI and Carnegie Mellon University revealed how malaria spread from the Lake Victoria region to the rest of the country. The researchers monitored the movement of 15 million Kenyans using 11,920 cell towers and compared that data with Ministry of Health records showing the number of people with malaria.

While the insights on the correlation between people’s movement and malaria prevalence were useful, what particularly interested the researchers was creating timely, precise interventions for communities at risk. MIT Technology Review notes the research is the largest attempt to use cell phone data as an epidemiological tool.

However, according to James Gicheru of Dimension Data, for African countries to move from piecemeal, temporary projects to wide-scale continuous deployment, the foundation of automated processes needs to be laid first.

“The health sector will have to take significant strides in further embracing IT; the first step, for example, would be a consolidated national healthcare system. This would go a long way toward providing the government insight for planning purposes and medical research,” says Gicheru.

Madung adds another angle: data by itself has limitations, and unstructured data needs context to yield its maximum benefit.

“Big data in some way needs big theory. Data science teams must include at least one person conversant with the domain in which they are working. This kind of data quite often lets people come up with spurious conclusions, and that can be mitigated with proper domain expertise and context,” says Madung.

Better results

Mbwana Alliy, Managing Partner of Savannah Fund, sees big data deepening financial services in Africa, where companies like MoDE and First Access are creating intelligent credit risk scoring through mobile money systems and, “where banks in Africa have been too cautious in the past given lack of either collateral or credit history.”

Alliy says big data, combined with machine learning, can transform education and health in the continent.

“Whilst the growth of mobile devices such as tablets will help bring content to students, there is a big data opportunity to deliver test taking and content systems that measure and adapt to students’ challenges and learning,” says Alliy.

Alliy is of the opinion that big data is not only a useful tool that can transform businesses and governments but is also evolving into the core business in some areas.

“Big data is now disrupting the taxi industry because of the way it can efficiently match and predict the demand and supply of transportation services… Uber is really a big data company in disguise of a taxi ordering app.”

Dr. Gilbert Saggia, Kenya Country Manager for Oracle, more or less agrees with Alliy, predicting companies will convert big data into data capital.

“Data is now a kind of capital. It’s as necessary for creating new products, services and ways of working as financial capital. For CEOs, this means securing access to, and increasing use of, data capital by digitizing and datafying key activities with customers, suppliers, and partners before rivals do,” says Saggia.

In agriculture, big data analysis is allowing farmers in parts of Africa to get better yields by capturing accurate data on rainfall, soil, market prices and other variables, which enables better decision-making.

But in spite of the potential of big data in Africa, a cultural and structural paradigm change needs to happen on the continent. Governments need to fast-track the automation of processes, allow researchers to access data sets within the law and, more importantly, act decisively on the outcomes of big data analysis.

Originally posted via “Africa: Is Big Data the Solution to Africa’s Big Issues?”


Dec 28, 17: #AnalyticsClub #Newsletter (Events, Tips, News & more..)



[ AnalyticsWeek BYTES]

>> Semantic Technology Unlocks Big Data’s Full Value by analyticsweekpick

>> Can Big Data Tell Us What Clinical Trials Don’t? by analyticsweekpick

>> Media firms are excelling at social: Reach grows by 236% by analyticsweekpick



>> China to publish unified GDP data in fraud crackdown: statistics bureau – Reuters (Under: Statistics)

>> Putting the “AI” in ThAInksgiving – TechCrunch (Under: Machine Learning)

>> Accelerite Takes On VMware, Nutanix with Hybrid Cloud Platform – SDxCentral (Under: Hybrid Cloud)



Process Mining: Data science in Action


Process mining is the missing link between model-based process analysis and data-oriented analysis techniques. Through concrete data sets and easy to use software the course provides data science knowledge that can be ap… more


Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 4th Edition


The eagerly anticipated Fourth Edition of the title that pioneered the comparison of qualitative, quantitative, and mixed methods research design is here! For all three approaches, Creswell includes a preliminary conside… more


Finding success in your data science career? Find a mentor
Yes, most of us don’t feel the need, but most of us really could use one. As most data science professionals work in isolation, getting an unbiased perspective is not easy. Many times, it is also not easy to understand how a data science career will progress. A network of mentors addresses these issues easily: it gives data professionals an outside perspective and an unbiased ally. It’s extremely important for successful data science professionals to build a mentor network and use it throughout their careers.


Q:You have data on the durations of calls to a call center. Generate a plan for how you would code and analyze these data. Explain a plausible scenario for what the distribution of these durations might look like. How could you test, even graphically, whether your expectations are borne out?
A: 1. Exploratory data analysis:
* Histogram of durations
* Histograms of durations per service type, per day of week, per hour of day (durations can be systematically longer from 10am to 1pm, for instance), per employee, etc.

2. Distribution: likely lognormal

3. Test graphically with a QQ plot: sample quantiles of log(durations) vs. normal quantiles
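As a sketch of this plan (using simulated rather than real call-center data), the lognormal check can also be done numerically: if durations are lognormal, the ordered values of log(durations) should line up with theoretical normal quantiles.

```python
# Sketch of the analysis plan above, on simulated call durations.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
# Simulated call durations in seconds; assumed lognormal for this sketch.
durations = rng.lognormal(mean=5.0, sigma=0.6, size=5000)

# 1. Exploratory step: histogram counts (these would normally be plotted).
counts, bin_edges = np.histogram(durations, bins=50)

# 2. QQ check: if durations are lognormal, log(durations) is normal, so
#    its order statistics should match theoretical normal quantiles.
log_d = np.sort(np.log(durations))
probs = (np.arange(1, len(log_d) + 1) - 0.5) / len(log_d)
normal_q = np.array([NormalDist().inv_cdf(p) for p in probs])

# On a QQ plot we would draw normal_q vs. log_d and look for a straight
# line; numerically, a correlation near 1 indicates a good fit.
r = np.corrcoef(normal_q, log_d)[0, 1]
print(f"QQ correlation: {r:.3f}")
```

The same code applied to real durations would reveal departures from lognormality (for example, a heavy tail of very long calls) as curvature away from the straight line.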



#BigData @AnalyticsWeek #FutureOfData with Jon Gibs(@jonathangibs) @L2_Digital




The world is one big data problem. – Andrew McAfee


#BigData @AnalyticsWeek #FutureOfData #Podcast with @MPFlowersNYC, @enigma_data





Market research firm IDC has released a new forecast that shows the big data market is expected to grow from $3.2 billion in 2010 to $16.9 billion in 2015.

Sourced from: Analytics.CLUB #WEB Newsletter

IBM and Hadoop Challenge You to Use Big Data for Good

Big Data is about solving problems by bringing technology, data and people together. Sure, we can identify ways to get customers to buy more stuff or click on more ads, but the ultimate value of Big Data is in its ability to make this world a better place for all. IBM and Hadoop recently launched the Big Data for Social Good Challenge for developers, hackers and data enthusiasts to take a deep dive into real world civic issues.


Individuals and organizations are eligible to participate in the challenge. Participants, using publicly available data sets (IBM’s curated data sets or others – here are the data set requirements), can win up to $20,000. Participants must create a working, clickable, and interactive data visualization utilizing the Analytics for Hadoop service on IBM Bluemix. The official rules page is here.

Go to the Big Data for Social Good Challenge page to learn more about how to enter the challenge and how the challenge will be judged (full disclosure: I’m a judge).  Also, check out your competition below.

Source: IBM and Hadoop Challenge You to Use Big Data for Good