Build an Affordable $600 eSports Gaming PC to Play CS: GO, DotA 2, LoL and Overwatch

eSports games have become more and more popular among both fans and gamers. As a result, many people have started dreaming of building eSports careers, and many gamers have pretty solid plans to achieve that goal. What does every gamer need to play eSports disciplines at the appropriate level, and to be able to challenge top LoL (League […]

The post Build an Affordable $600 eSports Gaming PC to Play CS: GO, DotA 2, LoL and Overwatch appeared first on TechSpective.

Source: Build an Affordable $600 eSports Gaming PC to Play CS: GO, DotA 2, LoL and Overwatch by administrator

Where Chief Data Scientist & Open Source Meets – @dandegrazia #FutureOfData #Podcast


In this podcast, @DanDeGrazia from @IBM spoke with @Vishaltx from @AnalyticsWeek about the intersection of the chief data scientist role and open source. He sheds light on some of the big opportunities in open source and how businesses could work together to advance data science. Dan also shared the importance of smooth communication for success as a data scientist.

Dan’s Recommended Read:
The Five Temptations of a CEO, Anniversary Edition: A Leadership Fable by Patrick Lencioni
What Every BODY is Saying: An Ex-FBI Agent’s Guide to Speed-Reading People by Joe Navarro, Marvin Karlins

Podcast Link:

Dan’s BIO:
Dan has almost 30 years of experience working with large data sets. Starting with the unusual work of analyzing potential jury pools in the 1980s, Dan also did some of the first PC-based voter registration analytics in the Chicago area, including putting the first complete list of registered voters on a PC (as hard as that is to imagine today, a 50-megabyte hard drive on a DOS system was staggering). Interested in almost anything new and technical, he worked at the Chicago Board of Trade, where he taught himself BASIC to write algorithms while working as an arbitrageur in financial futures. After the military, Dan moved to San Francisco, where he worked at several small companies and startups designing and implementing some of the first PC-based fax systems (who cares now!), enterprise accounting software, and early middleware connections using 3GL/4GL languages. Always pursuing the technical edge cases, Dan worked for Infobright, a column-store database startup, in the US and EMEA; at Lingotek, an In-Q-Tel-funded company working on large-data-set translations; and at big data analytics companies like Datameer, before taking his current position as Chief Data Scientist for Open Source in the IBM Channels organization. Dan’s current just-for-fun project is an app that will record and analyze bird songs and provide the user with information on the bird and the specifics of the current song.

About #Podcast:
The #FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Want to sponsor?
Email us @

#FutureOfData #DataAnalytics #Leadership #Podcast #BigData #Strategy

Source by v1shal

November 14, 2016 Health and Biotech analytics news roundup

Here’s the latest in health and biotech analytics:

Data Specifics Identified for Prediagnostic Heart Failure Detection: IBM researchers analyzed machine learning models that predict heart failure (paper). Among other findings, they worked out that models perform best with shorter prediction windows.

Will Google Take Over the Medical Industry? Big Questions at CO’s Healthcare Conference: In the keynote speech at the Pulse Healthcare Conference, Andrew Quirk pointed to many new players entering the healthcare industry. Panels at the conference covered topics like patient experiences and the future of hospitals.

Accelerating cancer research with deep learning: Georgia Tourassi is head of Health Data Science at Oak Ridge National Laboratory. Her group is using deep neural networks to extract useful diagnostic data, such as the location of a tumor, from clinical reports.

A student innovation to tackle cognitive challenges in health informatics wins this year’s Sysmex Award: The New Zealand diagnostics company gave the award to Daniel Surkalim, a University of Auckland student. He proposed using “graphical relational integrated databases” to make it easier for providers to access electronic health data.

Originally Posted at: November 14, 2016 Health and Biotech analytics news roundup

Making Magic with Treasure Data and Pandas for Python

Mirror Moves, by John Hammink

Originally published on Treasure Data blog.

Magic functions, a mainstay of IPython and Jupyter, streamline common tasks by saving you typing. Magic functions are functions preceded by a % symbol, and they have been introduced into pandas-td as of version 0.8.0! Toru Takahashi from Treasure Data walks us through.

Treasure Data’s magic functions wrap an additional % around the original function name, making them callable as cell magics with %%. Let’s explore further to see how this works.

Until now

We start by creating a connection, importing the relevant libraries, and issuing a basic query, all from Python (in Jupyter). Using the sample data, it would look like this:

import os
import pandas_td as td

# Initialize connection (TD_API_KEY must be set in your environment)
con = td.connect(apikey=os.environ['TD_API_KEY'], endpoint='')
engine = con.query_engine(database='sample_datasets', type='presto')
# Read a Treasure Data query into a DataFrame
df = td.read_td('select * from www_access', engine)

With the magic function

We can now write merely this, using the %%td_presto cell magic:

%%td_presto
select count(1) as cnt
from nasdaq

If you add the table name nasdaq after %%td_use, you can also see the schema:


Even better, you can tab-complete the stored column names:


As long as %matplotlib inline is enabled, you can throw a query into the %%td_presto --plot magic and immediately visualize it!
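A complete cell of this kind might look like the following sketch (the aggregate query and the use of the nasdaq sample table are illustrative assumptions on our part, not taken from the original screenshots):

```sql
%%td_presto --plot
-- hypothetical aggregation over the sample nasdaq table
select symbol, count(1) as cnt
from nasdaq
group by 1
order by 2 desc
```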


Very convenient!

How to enable it

Set the TD_API_KEY environment variable:
export TD_API_KEY=1234/abcd…

You can then load the magic commands automatically! You’ll want to save the following to ~/.ipython/profile_default/

c = get_config()
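For reference, a fuller version of that startup file might look like the sketch below; the file name ipython_config.py and the extension module name pandas_td.ipython are assumptions based on standard IPython configuration, not details stated in the original post:

```
# ~/.ipython/profile_default/ipython_config.py (assumed file name)
c = get_config()

# Load the pandas-td IPython extension at startup so its %%td_* magics
# are available in every session (extension module name assumed).
c.InteractiveShellApp.extensions = ['pandas_td.ipython']
```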

Let’s review

Loading your data:

Querying your data with presto:

Accessing stored columns:


Stay tuned for many more useful functions from pandas-td! These tools, including pandas itself as well as Python and Jupyter, are always changing, so please let us know if anything works differently than what’s shown here.

Magic, by John Hammink

Originally Posted at: Making Magic with Treasure Data and Pandas for Python by john-hammink

July 31, 2017 Health and Biotech analytics news roundup

Scientists use new data mining strategy to spot those at high Alzheimer’s risk: The researchers were able to split patients into different subgroups, which may help future clinical trials.

Amazon has a secret health tech team called 1492 working on medical records, virtual doc visits: The group is apparently looking both at new methods and leveraging current technology.

Protein Libraries Pave the Way for New Treatment Options: Researchers can make large numbers of proteins from the DNA that codes for them, enabling quicker study of molecular biological processes.

Collaborate or Collapse: Why Working Together is Essential for the Life Science Industry: There are substantial barriers to working together, but there are currently some initiatives to make it easier.

Originally Posted at: July 31, 2017 Health and Biotech analytics news roundup

The Relationship Between Survey Response Rates and Survey Ratings

When soliciting feedback from customers through formal surveys, we only receive a percentage of completed or returned surveys. This percentage (number of people who answered the survey divided by the number of people in the sample) is referred to as the response or completion rate. In practice, I have seen response rates as low as 10% and as high as 80% across a variety of different surveys and target populations (e.g., employee and customer). How important is the response rate?
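As a quick sanity check of that definition, here is a minimal sketch (the numbers are illustrative, not from any particular survey):

```python
def response_rate(completed, sampled):
    """Response (completion) rate: surveys returned divided by people sampled."""
    return completed / sampled

# e.g. 32 completed surveys out of 100 sampled patients
print(response_rate(32, 100))
```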

I recently got my hands on free US government data on patient survey ratings for over 3,800 US hospitals. The Federal government, specifically the Centers for Medicare & Medicaid Services (CMS) and the Agency for Healthcare Research and Quality (AHRQ), funded the development of this standardized patient survey, HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems), to publicly report the patient’s perspective of hospital care.

The HCAHPS data include a variety of data for each of the 3800 hospitals, including:

  1. Patient ratings: The reported data reflect patient ratings of their inpatient experience across 10 different areas, eight touch points (e.g., nurse communication, pain management) and two loyalty-related questions (e.g., overall quality rating and recommend).  Scores on these metrics can range from 0 (low) to 100 (high) and reflect the percent of patients who provided “top box” ratings. For the current analysis, I created a Patient Advocacy Loyalty index by averaging the two loyalty-related questions. I also used the other eight customer experience ratings.
  2. Survey response rate: These data are reported as the simple response rate. I created five segments of hospitals based on their response rates. These five segments are: 1) 20% or less, 2) between 21% and 30%, 3) between 31% and 40%, 4) between 41% and 50% and 5) 51% or greater.
  3. Number of completed surveys: This variable is reported as one of three levels: 1) less than 100 completed surveys, 2) 100-299 completed surveys and 3) 300 or more completed surveys.
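The five response-rate segments can be sketched as a small helper function (a hypothetical illustration of the bucketing described above, with rates expressed as fractions):

```python
def response_rate_segment(rate):
    """Bucket a survey response rate (a fraction in [0, 1]) into the five
    segments used in the analysis; thresholds follow the article's definitions."""
    if rate <= 0.20:
        return "20% or less"
    elif rate <= 0.30:
        return "between 21% and 30%"
    elif rate <= 0.40:
        return "between 31% and 40%"
    elif rate <= 0.50:
        return "between 41% and 50%"
    else:
        return "51% or greater"

# e.g. the overall average response rate of 32% falls in the middle segment
print(response_rate_segment(0.32))
```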
Patient Loyalty by response rates
Figure 1 Patient Advocacy is related to survey response rates


The average survey response rate across all 3,848 hospitals was 32%. That is, for every 100 patients who are asked to complete the survey, 32 actually provide feedback.

I compared patient advocacy ratings across the different levels of response rates and number of completed surveys. These analyses are visually depicted in Figure 1. As you can see, there are a couple of interesting findings:

  1. Number of completed surveys was only slightly related (R² < .01) to patient loyalty. Hospitals that had fewer than 100 completed surveys had slightly higher patient loyalty scores than hospitals that had more than 100 completed surveys.
  2. Response rate was strongly related (R² = .32) to patient loyalty. Hospitals that had lower survey response rates had significantly and substantially lower patient advocacy ratings compared to hospitals with higher survey response rates. In fact, there is about a 25-point difference between hospitals with the lowest response rates (Patient Advocacy Loyalty ~ 60) and the highest response rates (Patient Advocacy Loyalty ~ 85). By the way, I found a similar pattern of results using the other patient experience metrics (see Figure 2); hospitals with lower response rates had patients who reported poorer experiences compared to hospitals with higher response rates.
Patient Experience by Response Rates
Figure 2. Patient Experience Ratings by Response Rates
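For readers who want to reproduce the kind of R² statistics reported above, here is a minimal, dependency-free sketch of the squared Pearson correlation (the data below are synthetic, purely for illustration, not the HCAHPS data):

```python
def r_squared(x, y):
    """Coefficient of determination (squared Pearson correlation) between
    two equal-length numeric sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Perfectly linear synthetic data yields an R-squared of 1.0
print(r_squared([1, 2, 3, 4], [2, 4, 6, 8]))
```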

Why is there a relationship between survey response rate and survey ratings? PRC, a consulting firm that specializes in healthcare survey research, makes the claim that response rates may cause rating differences. They hint that, to improve your patient ratings, you need to have a higher response rate. While the representativeness of the sample of survey respondents is paramount to drawing conclusions about the population, I am skeptical that merely improving your response rate will increase your ratings.

Perhaps response rate is just another measure of the quality of the customer/patient relationship. The findings suggest that patients who are dissatisfied with their hospital experience are less likely to complete a survey. If true, hospitals with truly dissatisfied patients will have lower ratings and lower response rates.

Potential Problems with HCAHPS Data?

The HCAHPS data are collected by many different survey vendors (in fact, 44 approved survey vendors are responsible for collecting the patient survey data) using three different data collection methods: 1) telephone only, 2) mail only, and 3) mixed mode (telephone and mail). There is some research showing that methodological factors impact response rates. For example, two researchers found a higher patient survey response rate for face-to-face recruitment (76.7%) or data collection (76.9%) compared to mail recruitment (66.5%) or data collection (67%).

Using the HCAHPS patient ratings for hospital reimbursement purposes would require that differences across the various vendors/methods be minimal. It would be interesting (necessary?) to see if there are differences across 44 approved survey vendors and data collection methods with respect to the response rates, other survey process metrics and survey ratings. Understanding the reason behind the strong relationship between response rates and survey ratings is paramount to establishing the validity of the survey ratings.


Survey response rate was significantly and substantially related to survey ratings. Specifically, hospitals that had a higher survey response rate received higher patient ratings on their hospital experience. I will try to explore this issue in upcoming blog posts.

Large survey vendors may be in a good position to study the relationship between survey process measures (e.g., response rates) and survey ratings; these vendors have multiple accounts on which they have both types of metrics. It would be interesting to see if the current finding generalizes to other industries. Additionally, identifying the reasons behind the relationship between response rates and survey ratings would be essential to understanding the validity of the survey ratings.

Originally Posted at: The Relationship Between Survey Response Rates and Survey Ratings by bobehayes

Agile Data Warehouse Design for Big Data

21 Big Data Master Data Management Best Practices

On Nov 14th, 2013, the Big Data Analytics, Discovery & Visualization meetup hosted “Agile Data Warehouse Design for Big Data” by Jim Stagnitto & John DiPietro from a2c.

Here’s the synopsis:


Jim Stagnitto and John DiPietro of the consulting firm a2c will discuss Agile Data Warehouse Design – a step-by-step method for data warehousing / business intelligence (DW/BI) professionals to better collect and translate business intelligence requirements into successful dimensional data warehouse designs.


The method utilizes BEAM✲ (Business Event Analysis and Modeling) – an agile approach to dimensional data modeling that can be used throughout analysis and design to improve productivity and communication between DW designers and BI stakeholders. BEAM✲ builds upon the body of mature “best practice” dimensional DW design techniques, and collects “just enough” non-technical business process information from BI stakeholders to allow the modeler to slot their business needs directly and simply into proven DW design patterns.


BEAM✲ encourages DW/BI designers to move away from the keyboard and their entity relationship modeling tools and begin “white board” modeling interactively with BI stakeholders.  With the right guidance, BI stakeholders can and should model their own BI data requirements, so that they can fully understand and govern what they will be able to report on and analyze.


The BEAM✲ method is fully described in Agile Data Warehouse Design, a text co-written by Lawrence Corr and Jim Stagnitto.


About the speaker:

Jim Stagnitto, Director of a2c’s Data Services Practice

Data Warehouse Architect: specializing in powerful designs that extract the maximum business benefit from Intelligence and Insight investments.

Master Data Management (MDM) and Customer Data Integration (CDI) strategist and architect.

Data Warehousing, Data Quality, and Data Integration thought leader: co-author with Lawrence Corr of “Agile Data Warehouse Design”, guest author of Ralph Kimball’s “Data Warehouse Designer” column, and contributing author to Ralph Kimball and Joe Caserta’s book “The Data Warehouse ETL Toolkit”.


John DiPietro, Chief Technology Officer at A2C IT Consulting

John DiPietro is the Chief Technology Officer for a2c. Mr. DiPietro is responsible for setting the vision, strategy, delivery, and methodologies for a2c’s Solution Practice Offerings for all national accounts. The a2c CTO brings with him an expansive depth and breadth of specialized skills in his field.


Sponsor Note:

Thanks to:

Microsoft NERD for providing an awesome venue for the event.

A2C IT Consulting for providing the food and drinks.

Cognizeus for providing a book to give away in the raffle.

Here’s the youtube link for the presentation:

And Slideshare:

Source: Agile Data Warehouse Design for Big Data by v1shal

Achieving tribal leadership in 5 easy steps

Using tribal leadership to improve culture and build world-class customer experience

Before we delve into the core of this blog, let me take a moment to shed some light on tribal leadership and what it means.

Every organization is made up of tribes: groups of 20 to 150 people who are bound together by familiarity and shared work. Tribes are the little-acknowledged, basic building block of any large human effort. David Logan, a faculty member at USC, described tribal leadership in a video (attached below for your listening pleasure). He categorized tribes into five stages:


·       Stage 1: “Life sucks” for everyone, and therefore it is okay for me to behave badly to get my way. Only about 2% of workplace tribes fall into this category.

·       Stage 2: “My life sucks.” People can see that life is okay for some other people, but at this stage they have little to no motivation to change because they believe their life (or their work) is bad. It’s all “their” fault. The authors claim about 25% of workplace tribes operate in this mode.

·       Stage 3: “I am great, you are not.” This is where the majority of corporations live (50% of workplace tribes). These organizations promote individual excellence and hire the best and brightest.

·       Stage 4: “We are great, they are not.”  A shift from individual competition to the entire tribe competing against other tribes.  In organizational settings, Stage 4 is a combination of having common goals and values as well as a common “enemy tribe” to compare against.  This represents 22% of workplace tribes.

·       Stage 5: “Life is great.”  The pinnacle of workplace tribes, they seek and promote good life for everyone.  Values are the central glue that holds the tribe together – and violation of those values can rip the tribe apart if the leader lets the violation stand.  There are no tribal competitors, not because they don’t exist, but because the tribe is striving to make an impact (on the world) rather than striving to win (against another tribe).


Each stage pretty much functions as it reads. Per David Logan, we all belong to one tribe or another, and we only understand one stage up or down from our own. He also suggests that good leaders should be able to lead tribes at every stage.


Sounds pretty neat, huh? I am certain that Zappos, at Stage 4 of tribal leadership, is not surprised by its pioneering position as a customer-centric company. And it isn’t tough to imagine Apple as a Stage 5 tribal organization.



The best customer-centric organizations are the ones that deliver world-class products. So it is important to help companies achieve Stage 5 tribes. This promotes brands with strong, deep cultural roots that care about making products that not only make customers happy but also change the world.


The following steps will help a company build a culture that progresses toward Stage 5 tribes.


Understand/Identify tribes that live or could live in your organization:

Before we can groom a company to improve its corporate culture, we need to identify the various tribes that exist. Knowing each tribe and how its people are aligned will not only teach us about the existing corporate culture but also give us the trajectory needed to progressively improve it.

Promote free speech and open communication to help people align:

It is also important to let employees, vendors, and clients align themselves with tribes without any external influence. Therefore, every effort should be made to embrace free speech and open communication. This helps everyone align with their respective tribes and verticals, which in turn clarifies the company’s overall positioning and reveals the weak and strong areas of the business.


Create a culture to identify tribe leaders:

Another important task is to identify tribe leaders and help them in every possible way to lead and manage their tribes. This will not only help align the tribes better but also surface leadership that works in favor of building a strong culture. A tribe leader normally carries outsized influence within the tribe and could therefore lead the way for the organization.


Connect tribes to network and learn from each other:

Tribes should also be given a chance to learn from tribes at higher stages and see what it takes to bridge the gap. Bringing tribal leaders together, or hosting networking sessions between tribes, could do the job. Done properly, this could shift thinking within tribes and lift them to higher stages, thereby improving the overall corporate culture.


Embrace tribal leadership on management level:

Nothing is possible without leadership buy-in. It is of the utmost importance that leadership/management buy into the idea of building tribes and tribal leadership. Leadership should also make sure that customer centricity is an integral part of the corporate DNA.


The five steps above are a good start toward a corporate culture that embraces effective leadership. This helps build a stronger corporate culture that facilitates creating world-changing, groundbreaking products: products that people love and cherish, thereby delivering great customer centricity.


Please leave your suggestions and critiques in the comments section below. I would love to have an interactive discussion on this and would appreciate any follow-up.


Talend increases its investments in Research & Development in Nantes

Almost three years ago to the day, Talend opened its fourth global research and development center, and its second in France, in Nantes. It was clear to Talend from the very beginning that this new innovation center would not be a simple satellite of existing centers, but a key element in our strategy and overall R&D efforts.

Dedicated to innovative cloud, big data, and machine-learning technologies, this center plays a key role in our research and development efforts, creating new products, adding new features and services, increasing functionality, and improving our cloud data integration infrastructure. 


A winning investment!

Today, Talend is increasing its investments in France with the expansion of its research and development center in Nantes. This new innovation center of more than 2,600 m² will make it possible to support Talend’s strong growth while also strengthening our foothold in the region’s digital ecosystem.

In 2016, when this center of excellence was opened, our objective was to recruit up to 100 engineers by the end of 2018. This target has been exceeded, with the recruitment of 120 engineers. We are now planning to increase our workforce in Nantes to 250 by 2022. 

We are proud of our ability to attract and retain the best talent to our R&D team, to create a challenging but also rewarding environment where employees can thrive, solve complex issues, and find innovative ways to address current and future challenges in data integration, processing, and governance.

At Talend, we apply agile development methodologies, work with the latest technologies, and have created a modern, flexible, and automated software development process that allows us to deliver high-quality applications and quickly adapt to market changes and the new requirements of our customers and partners.


A local footprint

Today, our team is moving into a new office space where they will have every opportunity to thrive in an environment conducive to innovation and collaboration. We also hope that this innovation hub will contribute to the development of the digital economy in Nantes and the broader region. We will therefore also have the pleasure of opening this space to the booming local technology scene by organizing regular meetings, events, meetups, and hackathons.

By establishing ourselves in Nantes, we chose a dynamic, innovative city and region with a quality of life recognized by those who live there. Nantes benefits from a highly developed digital ecosystem with many startups and innovative companies. And what better example to illustrate this than Talend’s acquisition in November 2017 of Restlet, a Nantes-based leader in cloud API design and testing.

But the area of Nantes also benefits from a pool of students and leading engineering schools that are recognized internationally for the quality of their training. We will work closely with these educational centers of excellence to create joint programs around new cloud and big data technologies, work-linked training, or through the sharing of our expertise around open source technologies such as Apache Spark, Apache Beam, or Hadoop.

It is with great pride and emotion that I would like to thank all of Talend’s employees (developers, DevOps, UX designers, and other automation specialists), the public stakeholders who have supported us and made our expansion a success, and the digital and educational ecosystem for the opportunities we are given to exchange and learn together.


The post Talend increases its investments in Research & Development in Nantes appeared first on Talend Real-Time Open Source Data Integration Software.

Originally Posted at: Talend increases its investments in Research & Development in Nantes

Nick Howe (@Area9Nick) talks about fabric of learning organization to bring #JobsOfFuture #podcast


In this podcast, Nick Howe (@NickJHowe) from @Area9Learning talks about the transforming landscape of learning. He sheds light on some of the challenges in learning, on ways learning could match the evolving world and its needs, and on tactical steps businesses could adopt to create a world-class learning organization. This podcast is a must for any learning organization.

Nick’s Recommended Read:
The End of Average: Unlocking Our Potential by Embracing What Makes Us Different by Todd Rose
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

Podcast Link:

Nick’s BIO:
Nick Howe is an award-winning Chief Learning Officer and business leader with a focus on the application of innovative education technologies. He is the Chief Learning Officer at Area9 Lyceum, one of the global leaders in adaptive learning technology, a Strategic Advisor to the Institute for Simulation and Training at the University of Central Florida, and a board advisor to multiple EdTech startups.

For twelve years Nick was the Chief Learning Officer at Hitachi Data Systems where he built and led the corporate university and online communities serving over 50,000 employees, resellers and customers.

With over 25 years of global sales, sales enablement, delivery, and consulting experience with Hitachi, EDS Corporation, and Bechtel Inc., Nick is passionate about the transformation of customer experiences, partner relationships, and employee performance through learning and collaboration.

About #Podcast:
The #JobsOfFuture podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Want to sponsor?
Email us @

#JobsOfFuture #Leadership #Podcast #Future of #Work #Worker & #Workplace

Source: Nick Howe (@Area9Nick) talks about fabric of learning organization to bring #JobsOfFuture #podcast