Aug 27, 20: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

Trust the data  Source

[ AnalyticsWeek BYTES]

>> Customer Loyalty Feedback Meets Customer Relationship Management by bobehayes

>> See what you never expected with data visualization by analyticsweekpick

>> System tests: 10 traps to avoid by analyticsweekpick

Wanna write? Click Here

[ FEATURED COURSE]

Probability & Statistics


This course introduces students to the basic concepts and logic of statistical reasoning and gives the students introductory-level practical ability to choose, generate, and properly interpret appropriate descriptive and… more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython


Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Grow at the speed of collaboration
Research by Cornerstone On Demand points to the need for better collaboration within the workforce, and the data analytics domain is no different. A rapidly changing and growing industry like data analytics is very difficult for an isolated workforce to keep up with. A good collaborative work environment facilitates a better flow of ideas, improved team dynamics, rapid learning, and a growing ability to cut through the noise. So, embrace collaborative team dynamics.

[ DATA SCIENCE Q&A]

Q: What is a decision tree?
A: 1. Take the entire data set as input.
2. Search for a split that maximizes the ‘separation’ of the classes. A split is any test that divides the data in two (e.g., variable2 > 10).
3. Apply the split to the input data (divide step).
4. Re-apply steps 1 and 2 to each of the divided subsets.
5. Stop when a stopping criterion is met.
6. (Optional) Prune the tree back where the splitting went too far (called pruning).

Finding a split: methods vary, from greedy search (e.g., C4.5) to randomly selecting attributes and split points (random forests).

Purity measures: information gain, Gini coefficient, chi-squared values.

Stopping criteria: methods vary, from a minimum node size to a particular confidence in the prediction or a purity threshold.

Pruning: reduced-error pruning, out-of-bag error pruning (ensemble methods).
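The steps above can be sketched directly in code. This toy implementation (a minimal illustration, not any particular library's algorithm) uses the Gini coefficient as the purity measure, a greedy split search, and a minimum-size stopping criterion:

```python
import numpy as np

def gini(y):
    """Gini impurity of an integer label array."""
    if len(y) == 0:
        return 0.0
    p = np.bincount(y) / len(y)
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Greedy search over all (feature, threshold) splits (step 2)."""
    best, best_score = None, gini(y)  # must improve on the unsplit node
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.sum() == 0 or (~left).sum() == 0:
                continue
            score = (left.sum() * gini(y[left])
                     + (~left).sum() * gini(y[~left])) / len(y)
            if score < best_score:
                best_score, best = score, (j, t)
    return best

def build_tree(X, y, min_size=2):
    """Steps 1-5: recursively split until a stopping criterion fires."""
    if len(y) < min_size or gini(y) == 0.0:
        return {"leaf": int(np.bincount(y).argmax())}
    split = best_split(X, y)
    if split is None:
        return {"leaf": int(np.bincount(y).argmax())}
    j, t = split
    left = X[:, j] <= t
    return {"feature": j, "threshold": t,
            "left": build_tree(X[left], y[left], min_size),
            "right": build_tree(X[~left], y[~left], min_size)}

def predict(node, x):
    while "leaf" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["leaf"]

# Toy data: the class flips once feature 0 exceeds 10
X = np.array([[2.0], [5.0], [8.0], [12.0], [15.0], [20.0]])
y = np.array([0, 0, 0, 1, 1, 1])
tree = build_tree(X, y)
print(predict(tree, np.array([3.0])), predict(tree, np.array([14.0])))  # 0 1
```

Pruning (step 6) is omitted for brevity; production implementations add it along with many efficiency tricks.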

Source

[ VIDEO OF THE WEEK]

Discussing Forecasting with Brett McLaughlin (@akabret), @Akamai


Subscribe on YouTube

[ QUOTE OF THE WEEK]

Data really powers everything that we do. – Jeff Weiner

[ PODCAST OF THE WEEK]

Dave Ulrich (@dave_ulrich) talks about role / responsibility of HR in #FutureOfWork #JobsOfFuture #Podcast


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, not including using big data to reduce fraud and errors and boost the collection of tax revenues.

Sourced from: Analytics.CLUB #WEB Newsletter

Artificial Intelligence Will Do What We Ask. That’s a Problem.

By teaching machines to understand our true desires, one scientist hopes to avoid the potentially disastrous consequences of having them do what we command.

The danger of having artificially intelligent machines do our bidding is that we might not be careful enough about what we wish for. The lines of code that animate these machines will inevitably lack nuance, forget to spell out caveats, and end up giving AI systems goals and incentives that don’t align with our true preferences.

A now-classic thought experiment illustrating this problem was posed by the Oxford philosopher Nick Bostrom in 2003. Bostrom imagined a superintelligent robot, programmed with the seemingly innocuous goal of manufacturing paper clips. The robot eventually turns the whole world into a giant paper clip factory.

Such a scenario can be dismissed as academic, a worry that might arise in some far-off future. But misaligned AI has become an issue far sooner than expected.

The most alarming example is one that affects billions of people. YouTube, aiming to maximize viewing time, deploys AI-based content recommendation algorithms. Two years ago, computer scientists and users began noticing that YouTube’s algorithm seemed to achieve its goal by recommending increasingly extreme and conspiratorial content. One researcher reported that after she viewed footage of Donald Trump campaign rallies, YouTube next offered her videos featuring “white supremacist rants, Holocaust denials and other disturbing content.” The algorithm’s upping-the-ante approach went beyond politics, she said: “Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.” As a result, research suggests, YouTube’s algorithm has been helping to polarize and radicalize people and spread misinformation, just to keep us watching. “If I were planning things out, I probably would not have made that the first test case of how we’re going to roll out this technology at a massive scale,” said Dylan Hadfield-Menell, an AI researcher at the University of California, Berkeley.

YouTube’s engineers probably didn’t intend to radicalize humanity. But coders can’t possibly think of everything. “The current way we do AI puts a lot of burden on the designers to understand what the consequences of the incentives they give their systems are,” said Hadfield-Menell. “And one of the things we’re learning is that a lot of engineers have made mistakes.”

A major aspect of the problem is that humans often don’t know what goals to give our AI systems, because we don’t know what we really want. “If you ask anyone on the street, ‘What do you want your autonomous car to do?’ they would say, ‘Collision avoidance,’” said Dorsa Sadigh, an AI scientist at Stanford University who specializes in human-robot interaction. “But you realize that’s not just it; there are a bunch of preferences that people have.” Super safe self-driving cars go too slow and brake so often that they make passengers sick. When programmers try to list all goals and preferences that a robotic car should simultaneously juggle, the list inevitably ends up incomplete. Sadigh said that when driving in San Francisco, she has often gotten stuck behind a self-driving car that’s stalled in the street. It’s safely avoiding contact with a moving object, the way its programmers told it to — but the object is something like a plastic bag blowing in the wind.

To avoid these pitfalls and potentially solve the AI alignment problem, researchers have begun to develop an entirely new method of programming beneficial machines. The approach is most closely associated with the ideas and research of Stuart Russell, a decorated computer scientist at Berkeley. Russell, 57, did pioneering work on rationality, decision-making and machine learning in the 1980s and ’90s and is the lead author of the widely used textbook Artificial Intelligence: A Modern Approach. In the past five years, he has become an influential voice on the alignment problem and a ubiquitous figure — a well-spoken, reserved British one in a black suit — at international meetings and panels on the risks and long-term governance of AI.

Stuart Russell giving a TED talk.

Stuart Russell, a computer scientist at the University of California, Berkeley, gave a TED talk on the dangers of AI in 2017.

Bret Hartman / TED

As Russell sees it, today’s goal-oriented AI is ultimately limited, for all its success at accomplishing specific tasks like beating us at Jeopardy! and Go, identifying objects in images and words in speech, and even composing music and prose. Asking a machine to optimize a “reward function” — a meticulous description of some combination of goals — will inevitably lead to misaligned AI, Russell argues, because it’s impossible to include and correctly weight all goals, subgoals, exceptions and caveats in the reward function, or even know what the right ones are. Giving goals to free-roaming, “autonomous” robots will be increasingly risky as they become more intelligent, because the robots will be ruthless in pursuit of their reward function and will try to stop us from switching them off.

Instead of machines pursuing goals of their own, the new thinking goes, they should seek to satisfy human preferences; their only goal should be to learn more about what our preferences are. Russell contends that uncertainty about our preferences and the need to look to us for guidance will keep AI systems safe. In his recent book, Human Compatible, Russell lays out his thesis in the form of three “principles of beneficial machines,” echoing Isaac Asimov’s three laws of robotics from 1942, but with less naivete. Russell’s version states:

  1. The machine’s only objective is to maximize the realization of human preferences.
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behavior.

Over the last few years, Russell and his team at Berkeley, along with like-minded groups at Stanford, the University of Texas and elsewhere, have been developing innovative ways to clue AI systems in to our preferences, without ever having to specify those preferences.

These labs are teaching robots how to learn the preferences of humans who never articulated them and perhaps aren’t even sure what they want. The robots can learn our desires by watching imperfect demonstrations and can even invent new behaviors that help resolve human ambiguity. (At four-way stop signs, for example, self-driving cars developed the habit of backing up a bit to signal to human drivers to go ahead.) These results suggest that AI might be surprisingly good at inferring our mindsets and preferences, even as we learn them on the fly.

“These are first attempts at formalizing the problem,” said Sadigh. “It’s just recently that people are realizing we need to look at human-robot interaction more carefully.”

Whether the nascent efforts and Russell’s three principles of beneficial machines really herald a bright future for AI remains to be seen. The approach pins the success of robots on their ability to understand what humans really, truly prefer — something that the species has been trying to figure out for some time. At a minimum, Paul Christiano, an alignment researcher at OpenAI, said Russell and his team have greatly clarified the problem and helped “spec out what the desired behavior is like — what it is that we’re aiming at.”

How to Understand a Human

Russell’s thesis came to him as an epiphany, that sublime act of intelligence. It was 2014 and he was in Paris on sabbatical from Berkeley, heading to rehearsal for a choir he had joined as a tenor. “Because I’m not a very good musician, I was always having to learn my music on the metro on the way to rehearsal,” he recalled recently. Samuel Barber’s 1967 choral arrangement Agnus Dei filled his headphones as he shot beneath the City of Light. “It was such a beautiful piece of music,” he said. “It just sprang into my mind that what matters, and therefore what the purpose of AI was, was in some sense the aggregate quality of human experience.”

Robots shouldn’t try to achieve goals like maximizing viewing time or paper clips, he realized; they should simply try to improve our lives. There was just one question: “If the obligation of machines is to try to optimize that aggregate quality of human experience, how on earth would they know what that was?”

A robot arranging things on a table.

In Scott Niekum’s lab at the University of Texas, a robot named Gemini learns how to place a vase of flowers in the center of a table. A single human demonstration is ambiguous, since the intent might have been to place the vase to the right of the green plate, or to the left of the red bowl. However, after asking a few queries, the robot performs well in test cases.

Scott Niekum

The roots of Russell’s thinking went back much further. He has studied AI since his school days in London in the 1970s, when he programmed tic-tac-toe and chess-playing algorithms on a nearby college’s computer. Later, after moving to the AI-friendly Bay Area, he began theorizing about rational decision-making. He soon concluded that it’s impossible. Humans aren’t even remotely rational, because it’s not computationally feasible to be: We can’t possibly calculate which action at any given moment will lead to the best outcome trillions of actions later in our long-term future; neither can an AI. Russell theorized that our decision-making is hierarchical — we crudely approximate rationality by pursuing vague long-term goals via medium-term goals while giving the most attention to our immediate circumstances. Robotic agents would need to do something similar, he thought, or at the very least understand how we operate.

Russell’s Paris epiphany came during a pivotal time in the field of artificial intelligence. Months earlier, an artificial neural network using a well-known approach called reinforcement learning shocked scientists by quickly learning from scratch how to play and beat Atari video games, even innovating new tricks along the way. In reinforcement learning, an AI learns to optimize its reward function, such as its score in a game; as it tries out various behaviors, the ones that increase the reward function get reinforced and are more likely to occur in the future.
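The reinforce-what-worked loop can be illustrated with tabular Q-learning, a standard reinforcement-learning algorithm (the chain environment below is an invented toy, far simpler than an Atari game):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain MDP: states 0..3; action 1 moves right, action 0 moves left.
# Reaching state 3 yields reward +1 and ends the episode.
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.3  # learning rate, discount, exploration

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for _ in range(500):  # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current Q, sometimes explore
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # reinforce: move Q toward the reward plus the discounted next-state value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2

# The learned greedy policy moves right from every non-terminal state
print([int(Q[s].argmax()) for s in range(n_states - 1)])  # [1, 1, 1]
```

Each update nudges the value of the action just taken toward the observed outcome, so behaviors that raise the score become more likely to be chosen again.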

Russell had developed the inverse of this approach back in 1998, work he continued to refine with his collaborator Andrew Ng. An “inverse reinforcement learning” system doesn’t try to optimize an encoded reward function, as in reinforcement learning; instead, it tries to learn what reward function a human is optimizing. Whereas a reinforcement learning system figures out the best actions to take to achieve a goal, an inverse reinforcement learning system deciphers the underlying goal when given a set of actions.

A few months after his Agnus Dei-inspired epiphany, Russell got to talking about inverse reinforcement learning with Nick Bostrom, of paper clip fame, at a meeting about AI governance at the German foreign ministry. “That was where the two things came together,” Russell said. On the metro, he had understood that machines should strive to optimize the aggregate quality of human experience. Now, he realized that if they’re uncertain about how to do that — if computers don’t know what humans prefer — “they could do some kind of inverse reinforcement learning to learn more.”

With standard inverse reinforcement learning, a machine tries to learn a reward function that a human is pursuing. But in real life, we might be willing to actively help it learn about us. Back at Berkeley after his sabbatical, Russell began working with his collaborators to develop a new kind of “cooperative inverse reinforcement learning” where a robot and a human can work together to learn the human’s true preferences in various “assistance games” — abstract scenarios representing real-world, partial-knowledge situations.

One game they developed, known as the off-switch game, addresses one of the most obvious ways autonomous robots can become misaligned from our true preferences: by disabling their own off switches. Alan Turing suggested in a BBC radio lecture in 1951 (the year after he published a pioneering paper on AI) that it might be possible to “keep the machines in a subservient position, for instance by turning off the power at strategic moments.” Researchers now find that simplistic. What’s to stop an intelligent agent from disabling its own off switch, or, more generally, ignoring commands to stop increasing its reward function? In Human Compatible, Russell writes that the off-switch problem is “the core of the problem of control for intelligent systems. If we cannot switch a machine off because it won’t let us, we’re really in trouble. If we can, then we may be able to control it in other ways too.”

Dorsa Sadigh and a robot.

Dorsa Sadigh, a computer scientist at Stanford University, teaches a robot the preferred way to pick up various objects.

Drew Kelly for the Stanford Institute for Human-Centered Artificial Intelligence

Uncertainty about our preferences may be key, as demonstrated by the off-switch game, a formal model of the problem involving Harriet the human and Robbie the robot. Robbie is deciding whether to act on Harriet’s behalf — whether to book her a nice but expensive hotel room, say — but is uncertain about what she’ll prefer. Robbie estimates that the payoff for Harriet could be anywhere in the range of −40 to +60, with an average of +10 (Robbie thinks she’ll probably like the fancy room but isn’t sure). Doing nothing has a payoff of 0. But there’s a third option: Robbie can query Harriet about whether she wants it to proceed or prefers to “switch it off” — that is, take Robbie out of the hotel-booking decision. If she lets the robot proceed, the average expected payoff to Harriet becomes greater than +10. So Robbie will decide to consult Harriet and, if she so desires, let her switch it off.

Russell and his collaborators proved that in general, unless Robbie is completely certain about what Harriet herself would do, it will prefer to let her decide. “It turns out that uncertainty about the objective is essential for ensuring that we can switch the machine off,” Russell wrote in Human Compatible, “even when it’s more intelligent than us.”
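The arithmetic behind the off-switch game is easy to verify. Assuming, purely for illustration, that Robbie's belief about the payoff is uniform on the stated −40 to +60 range (the model itself fixes only the range and the +10 average), letting Harriet veto the negative cases lifts the expected payoff above +10:

```python
import numpy as np

rng = np.random.default_rng(42)

# Robbie's belief about Harriet's payoff for the booking: uniform on
# [-40, +60] (an illustrative choice). Doing nothing always pays 0.
payoffs = rng.uniform(-40, 60, size=1_000_000)

act_anyway = payoffs.mean()                # book regardless: ~ +10
ask_first = np.maximum(payoffs, 0).mean()  # Harriet vetoes the losses: ~ +18

print(round(act_anyway), round(ask_first))  # 10 18
```

A rational Robbie therefore prefers to ask, precisely because its uncertainty leaves room for Harriet to correct it.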

These and other partial-knowledge scenarios were developed as abstract games, but Scott Niekum’s lab at the University of Texas, Austin is running preference-learning algorithms on actual robots. When Gemini, the lab’s two-armed robot, watches a human place a fork to the left of a plate in a table-setting demonstration, initially it can’t tell whether forks always go to the left of plates, or always on that particular spot on the table; new algorithms allow Gemini to learn the pattern after a few demonstrations. Niekum focuses on getting AI systems to quantify their own uncertainty about a human’s preferences, enabling the robot to gauge when it knows enough to safely act. “We are reasoning very directly about distributions of goals in the person’s head that could be true,” he said. “And we’re reasoning about risk with respect to that distribution.”

Recently, Niekum and his collaborators found an efficient algorithm that allows robots to learn to perform tasks far better than their human demonstrators. It can be computationally demanding for a robotic vehicle to learn driving maneuvers simply by watching demonstrations by human drivers. But Niekum and his colleagues found that they could improve and dramatically speed up learning by showing a robot demonstrations that have been ranked according to how well the human performed. “The agent can look at that ranking, and say, ‘If that’s the ranking, what explains the ranking?’” Niekum said. “What’s happening more often as the demonstrations get better, what happens less often?” The latest version of the learning algorithm, called Bayesian T-REX (for “trajectory-ranked reward extrapolation”), finds patterns in the ranked demos that reveal possible reward functions that humans might be optimizing for. The algorithm also gauges the relative likelihood of different reward functions. A robot running Bayesian T-REX can efficiently infer the most likely rules of place settings, or the objective of an Atari game, Niekum said, “even if it never saw the perfect demonstration.”
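The general shape of the ranking idea can be sketched with a linear reward model trained on pairwise comparisons (a Bradley-Terry-style loss); the features, data, and training loop below are an invented toy, not the published Bayesian T-REX code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each demonstration is summarized by a toy feature vector (3 features).
# A hidden "true" reward weights feature 0 positively and feature 2
# negatively; the learner never sees these weights, only the ranking.
true_w = np.array([2.0, 0.0, -1.0])
demos = rng.normal(size=(20, 3))
demos = demos[np.argsort(demos @ true_w)]  # ranked worst -> best

# Learn reward weights so that, for each sampled pair (i worse than j),
# the predicted return of j beats that of i (pairwise logistic loss).
w = np.zeros(3)
lr = 0.1
for _ in range(2000):
    i, j = sorted(rng.choice(20, size=2, replace=False))  # i ranked below j
    diff = demos[j] - demos[i]
    p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(model prefers j over i)
    w += lr * (1.0 - p) * diff           # gradient step on the log-likelihood

# The learned reward agrees with the true one about what matters
print(w[0] > 0, w[2] < 0)
```

Because the loss only compares demonstrations to each other, the learner can extrapolate a reward even when no single demonstration is perfect, which is the point Niekum makes above.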

Our Imperfect Choices

Russell’s ideas are “making their way into the minds of the AI community,” said Yoshua Bengio, the scientific director of Mila, a top AI research institute in Montreal. He said Russell’s approach, where AI systems aim to reduce their own uncertainty about human preferences, can be achieved with deep learning — the powerful method behind the recent revolution in artificial intelligence, where the system sifts data through layers of an artificial neural network to find its patterns. “Of course more research work is needed to make that a reality,” he said.

Russell sees two major challenges. “One is the fact that our behavior is so far from being rational that it could be very hard to reconstruct our true underlying preferences,” he said. AI systems will need to reason about the hierarchy of long-term, medium-term and short-term goals — the myriad preferences and commitments we’re each locked into. If robots are going to help us (and avoid making grave errors), they will need to know their way around the nebulous webs of our subconscious beliefs and unarticulated desires.

A driver uses a driving simulator at Stanford University’s Cyber and Artificial Intelligence Boot Camp for Congressional Staffers.

In the driving simulator at Stanford University’s Center for Automotive Research, self-driving cars can learn the preferences of human drivers.

Rod Searcey

The second challenge is that human preferences change. Our minds change over the course of our lives, and they also change on a dime, depending on our mood or on altered circumstances that a robot might struggle to pick up on.

In addition, our actions don’t always live up to our ideals. People can hold conflicting values simultaneously. Which should a robot optimize for? To avoid catering to our worst impulses (or worse still, amplifying those impulses, thereby making them easier to satisfy, as the YouTube algorithm did), robots could learn what Russell calls our meta-preferences: “preferences about what kinds of preference-change processes might be acceptable or unacceptable.” How do we feel about our changes in feeling? It’s all rather a lot for a poor robot to grasp.

Like the robots, we’re also trying to figure out our preferences, both what they are and what we want them to be, and how to handle the ambiguities and contradictions. Like the best possible AI, we’re also striving — at least some of us, some of the time — to understand the form of the good, as Plato called the object of knowledge. Like us, AI systems may be stuck forever asking questions — or waiting in the off position, too uncertain to help.

“I don’t expect us to have a great understanding of what the good is anytime soon,” said Christiano, “or ideal answers to any of the empirical questions we face. But I hope the AI systems we build can answer those questions as well as a human and be engaged in the same kinds of iterative process to improve those answers that humans are — at least on good days.”

However, there’s a third major issue that didn’t make Russell’s short list of concerns: What about the preferences of bad people? What’s to stop a robot from working to satisfy its evil owner’s nefarious ends? AI systems tend to find ways around prohibitions just as wealthy people find loopholes in tax laws, so simply forbidding them from committing crimes probably won’t be successful.

Or, to get even darker: What if we all are kind of bad? YouTube has struggled to fix its recommendation algorithm, which is, after all, picking up on ubiquitous human impulses.

Still, Russell feels optimistic. Although more algorithms and game theory research are needed, he said his gut feeling is that harmful preferences could be successfully down-weighted by programmers — and that the same approach could even be useful “in the way we bring up children and educate people and so on.” In other words, in teaching robots to be good, we might find a way to teach ourselves. He added, “I feel like this is an opportunity, perhaps, to lead things in the right direction.”

Source: Quanta Magazine

Source

How Skanska Builds a Foundation of Trust and Transparency, Part 2

According to a recent study from Autodesk and FMI, a high-trust organization possesses many distinct traits, the ability to share information openly and easily being one of them. In fact, the study estimates that up to 66% of high-trust companies have a single source for sharing project data. A big part of this is collaboration, with high-trust companies making it a core value in how they do business.

Collaboration and transparency are two big reasons why Skanska has built such a strong reputation across the construction industry. With the help of technology, the company improves communication and visibility, starting in the earliest phases of the project and through closeout.

In Part 1 of our series, we highlighted how Skanska creates a solid foundation of trust between all project stakeholders. In Part 2 of this series, Pamela Monastra, Senior Vice President and Head of Communications, Skanska USA Building, again speaks with Steve Stouthamer, Executive Vice President, Project Planning Services, Skanska USA Building, about how technology can help improve transparency and collaboration. All of this helps the company to further solidify the trust it has worked hard to build.

Watch the video or read the transcript below.

 

Transcript:

Pamela Monastra: How do we start building trust with our customers?

Pamela Monastra: Let’s talk a little bit about the individual. How do they actually play a role in building trust and construction? 

Steve Stouthamer: I think it’s the way you behave, right? I’m using that term pretty broadly, but I mean it to say: how committed are you to the work at hand and to achieving the objectives of your customer and your design team?

Pamela Monastra: I’d like to know more about how technology has impacted this journey of building trust.

Steve Stouthamer: When I was a young field engineer, I had a big roll of drawings and another two big volumes of specifications. The only way I could move those around was to carry them. You would lug those into the field, roll them out, mark things up. You’d try to use your memory and notes to go back and send faxes.

Now, think about today. We can go out with a mobile device and have the entire design at our fingertips, including a model. We can photograph an issue. We can FaceTime with our design team on the spot. We can resolve things much more quickly, and that’s really exciting.

Another way technology has affected us, which I distinctly remember, was from my time as General Manager here in our office. We had received a set of drawings that were called “hundred percent drawings,” and we like to really go through those and make sure no information is missing. The more we can root out during design, the less likely we’ll have changes in the field.

We took those design documents and used Navisworks Clash Detection, and quickly identified several hundred issues that needed to be resolved in the design, where mechanical systems intersected structural and architectural elements. The time that took was hours, not weeks, and it was days, not weeks, for the designer to make changes to remove a lot of those things. It’s tremendous.

Pamela Monastra: Technology, great advancements in our industry. Are there issues?

Steve Stouthamer: There are lots of emerging technologies hitting the market. I’ve seen slides where it’s just compounded like tenfold every year, and it can overwhelm our teams a little bit. Right now, we see more products to solve a single function. I think we’d like to see fewer products that solve multiple functions, and that can help us with many things. It can help us with data collection, too, as we do not have to rely on so many systems to speak to one another. That would be my caution.


 

To learn more about how construction professionals think about trust and what you can do to elevate trust across your organization, download our report, “Trust Matters: The High Cost of Low Trust”.

DOWNLOAD REPORT

The post How Skanska Builds a Foundation of Trust and Transparency, Part 2 appeared first on Autodesk Construction Cloud Blog.

Source

Aug 20, 20: #AnalyticsClub #Newsletter (Events, Tips, News & more..)

[  COVER OF THE WEEK ]

Pacman  Source

[ AnalyticsWeek BYTES]

>> The Usability of Dashboards (Part 1): Does Anyone Actually Use These Things? [Guest Post] by analyticsweek

>> Making AI Routine, Repeatable and Reliable by analyticsweekpick

>> Data Sources for Cool Data Science Projects: Part 1 by michael-li

Wanna write? Click Here

[ FEATURED COURSE]

Tackle Real Data Challenges


Learn scalable data management, evaluate big data technologies, and design effective visualizations… more

[ FEATURED READ]

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython

image

Python for Data Analysis is concerned with the nuts and bolts of manipulating, processing, cleaning, and crunching data in Python. It is also a practical, modern introduction to scientific computing in Python, tailored f… more

[ TIPS & TRICKS OF THE WEEK]

Analytics Strategy that is Startup Compliant
With the right tools, capturing data is easy, but not being able to handle that data can lead to chaos. One of the most reliable startup strategies for adopting data analytics is TUM, or The Ultimate Metric. This is the metric that matters most to your startup. Some advantages of TUM: it answers the most important business question, it cleans up your goals, it inspires innovation, and it helps you understand the entire quantified business.

[ DATA SCIENCE Q&A]

Q: What is the difference between supervised learning and unsupervised learning? Give concrete examples.

A: * Supervised learning: inferring a function from labeled training data
* Supervised learning: predictor measurements are associated with a response measurement; we fit a model that relates the two, either to better understand the relation between them (inference) or to accurately predict the response for future observations (prediction)
* Supervised learning methods: support vector machines, neural networks, linear regression, logistic regression, extreme gradient boosting
* Supervised learning examples: predict the price of a house based on its area and size; churn prediction; predict the relevance of search engine results.
* Unsupervised learning: inferring a function to describe the hidden structure of unlabeled data
* Unsupervised learning: we lack a response variable that can supervise our analysis
* Unsupervised learning methods: clustering, principal component analysis, singular value decomposition
* Unsupervised learning examples: find customer segments; image segmentation; classify US senators by their voting records.
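The contrast can be sketched in a few lines of plain Python. The data below is made up for illustration: the supervised half fits a least-squares line using labels (house prices) to learn a predictor, while the unsupervised half runs a tiny one-dimensional k-means that discovers customer segments with no labels at all.

```python
# --- Supervised: labeled training data (house size -> price) ---
# Fit y = a*x + b by ordinary least squares, then predict for a new house.
sizes = [50.0, 80.0, 120.0, 200.0]          # predictor (square meters)
prices = [100.0, 160.0, 240.0, 400.0]       # response / labels (in $k)

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) \
    / sum((x - mean_x) ** 2 for x in sizes)
b = mean_y - a * mean_x
predicted_price = a * 100.0 + b             # predict for a 100 m^2 house
print(round(predicted_price, 1))            # -> 200.0 (prices here are exactly 2 * size)

# --- Unsupervised: no labels, find hidden structure (customer segments) ---
# A tiny 1-D k-means (k=2) on monthly spend: no response variable supervises it.
spend = [10.0, 12.0, 11.0, 95.0, 100.0, 98.0]
c1, c2 = min(spend), max(spend)             # naive initial centroids
for _ in range(10):
    g1 = [s for s in spend if abs(s - c1) <= abs(s - c2)]
    g2 = [s for s in spend if abs(s - c1) > abs(s - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
print(sorted([round(c1, 1), round(c2, 1)]))  # two discovered segments
```

The supervised model needed the `prices` column to learn anything; the k-means step never saw a label and still recovered the low-spend and high-spend groups.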

Source

[ VIDEO OF THE WEEK]

George (@RedPointCTO / @RedPointGlobal) on becoming an unbiased #Technologist in #DataDriven World #FutureOfData #Podcast


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

Data are becoming the new raw material of business. – Craig Mundie

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with @ScottZoldi, @FICO


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Akamai analyzes 75 million events per day to better target advertisements.

Sourced from: Analytics.CLUB #WEB Newsletter

Voices in AI – Episode 110: A Conversation with Didem Un Ates

[voices_in_ai_byline]

About this Episode

On Episode 110 of Voices in AI, Byron speaks with Didem Un Ates, Senior Director of AI Customer and Partner Engagement at Microsoft, about artificial intelligence.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today, my guest is Didem Un Ates. She is with Microsoft and her title is Senior Director of AI Customer and Partner Engagement. She’s been there for several years. She holds two degrees, including one in electrical engineering, from the University of Pennsylvania, and she has an MBA from Columbia as well. She’s joining us from London. Welcome to the show, Didem!

Didem Un Ates: Hi, Byron. Thanks for having me.

I always like to start with the same group of questions, which begins with: What is artificial intelligence and why exactly is it artificial? What’s artificial about it and what is intelligence for that matter?

Thank you. The way I try to explain it to my customers, partners and other individuals like students at schools – universities or high schools – is basically: artificial intelligence is a way of mimicking our brain. Intelligence makes sense of things around us. It’s how we process our environment, how we make sense of it, make these connections between the past, the present and the future; that’s called general intelligence. Then we also have specific intelligences, which are very specific functions like object recognition or speech recognition. ‘Artificial’ is trying to mimic this with technology, with algorithms.

Well, it’s interesting you’re saying the word ‘mimic.’ Is that to imply it’s not actual intelligence? It’s just doing something that can emulate intelligence or do you actually think it’s smart?

No, it’s definitely smart and it’s – in some cases, the specific intelligence that I referred to, some call it weak AI, is actually already smarter than humans in those areas. Microsoft actually was the first to surpass human intelligence in speech recognition, translation, object recognition etc. Yes, some of these functional areas are already very smart and even smarter than humans, but the general AI, or the strong AI as some like to call it, is around a five year old’s intelligence level. That’s why I call it ‘mimic.’

When you say – you think we’re at a five year old [level] for general intelligence, is that really the case? It seems to me that we have this one trick that’s been working pretty well for a while, which is machine-learning, where we take a bunch of data about the past and we study it and we make projections into the future. That seems to be a really – not a very generalized tool. There are a lot of things where the future’s not like the past. The word ‘banana’ is said the same way tomorrow and yesterday so it’s a really good thing that you could do that.

Things like creativity and other sorts of things we associate with general intelligence, are they even solvable that way? When you say we’re at a five year old, that means maybe next year we’ll be at a six year old, at a seven, and then in 15 or 20 years, we’ll be at a teenager. Is there a limit to our one little trick we know here and what it’s going to be able to do?

These are great questions, and I think similar to you, Byron, I’m obsessively reading about AI and trying to get different perspectives on the experts, mostly at the universities but also the industry. To me, when I say – or when we read about the general intelligence is around the age of a five year old human being right now, all it means is as we improve the algorithms around AI and ML, we mimic the human learning – human brain and it is at the level of a five year old human being. Some of course predict actually that general AI are actually racing to reach an adult human mind. It’s my personal view, not Microsoft’s view by far, but my personal view is: yes, AI/ML will reach adult intelligence, but when this will happen is a big question.

To your point about teenager years, some predict that it will be happening in five years; others are saying it won’t happen in a century. The average at least in my reading and research is somewhere around 15 to 25 years. This is completely my own – let’s say doing my own homework. This is quite serious because it has many implications in terms of let’s say, automation or impact on society, jobs, changes, exciting things coming, and also lots of integrations that we should proactively manage in terms of responsible ethical AI, which we are, as Microsoft, very, very, serious about.

Listen to this episode or read the full transcript at www.VoicesinAI.com

[voices_in_ai_link_back]

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source by analyticsweekpick

Aug 13, 20: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

image
SQL Database  Source

[ AnalyticsWeek BYTES]

>> The Hazards of Bad Data in Customer Experience Management [INFOGRAPHIC] by bobehayes

>> Artificial Intelligence- A silver lining to help fight climate change by administrator

>> How mobile consumers are using customer service apps [Infographics] by v1shal

Wanna write? Click Here

[ FEATURED COURSE]

Deep Learning Prerequisites: The Numpy Stack in Python

image

The Numpy, Scipy, Pandas, and Matplotlib stack: prep for deep learning, machine learning, and artificial intelligence… more

[ FEATURED READ]

The Black Swan: The Impact of the Highly Improbable

image

A black swan is an event, positive or negative, that is deemed improbable yet causes massive consequences. In this groundbreaking and prophetic book, Taleb shows in a playful way that Black Swan events explain almost eve… more

[ TIPS & TRICKS OF THE WEEK]

Keeping Biases Checked during the last mile of decision making
Today a data-driven leader, data scientist, or data-driven expert is constantly put to the test by helping their team solve problems using their skills and expertise. Believe it or not, part of that decision tree is derived from intuition, which adds a bias to our judgment and taints the suggestions. Most skilled professionals understand and handle these biases well, but in a few cases we fall into tiny traps and can find ourselves caught in biases that impair our judgment. So it is important to keep intuition bias in check when working on a data problem.

[ DATA SCIENCE Q&A]

Q: How to clean data?
A: 1. First: detect anomalies and contradictions
Common issues:
* Tidy data (Hadley Wickham’s paper):
column names are values, not names, e.g. 26-45…
multiple variables are stored in one column, e.g. m1534 (males aged 15-34)
variables are stored in both rows and columns, e.g. tmax, tmin in the same column
multiple types of observational units are stored in the same table, e.g. a song dataset and a rank dataset in the same table
a single observational unit is stored in multiple tables (can be combined)
* Data-type constraints: values in a particular column must be of a particular type: integer, numeric, factor, boolean
* Range constraints: numbers or dates must fall within a certain range; they have minimum/maximum permissible values
* Mandatory constraints: certain columns can’t be empty
* Unique constraints: a field must be unique across a dataset: the same person must have a unique Social Security number
* Set-membership constraints: the values in a column must come from a set of discrete values or codes: a gender must be female or male
* Regular expression patterns: for example, a phone number may be required to match the pattern (999)999-9999
* Misspellings
* Missing values
* Outliers
* Cross-field validation: certain conditions involving multiple fields must hold. For instance, in laboratory medicine the differential white blood cell counts must sum to 100 (they are all percentages); in a hospital database, a patient’s date of discharge can’t be earlier than the admission date
2. Clean the data using:
* Regular expressions: misspellings, regular expression patterns
* KNN-impute and other missing values imputing methods
* Coercing: data-type constraints
* Melting: tidy data issues
* Date/time parsing
* Removing observations
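A few of the checks listed above (type coercion, range and set-membership constraints, a regex pattern, and mean-imputation of missing values) can be sketched in plain Python. The records below are made up for illustration:

```python
import re

PHONE_RE = re.compile(r"^\(\d{3}\)\d{3}-\d{4}$")   # the (999)999-9999 pattern
VALID_GENDERS = {"female", "male"}

raw = [
    {"age": "34",  "gender": "male",   "phone": "(555)123-4567"},
    {"age": "-5",  "gender": "female", "phone": "555-1234"},       # bad age + phone
    {"age": None,  "gender": "Male",   "phone": "(555)987-6543"},  # missing + case
]

cleaned, ages = [], []
for row in raw:
    rec = {}
    # Data-type + range constraint: age must coerce to an int in [0, 120]
    try:
        age = int(row["age"])
        rec["age"] = age if 0 <= age <= 120 else None
    except (TypeError, ValueError):
        rec["age"] = None
    # Set-membership constraint (after normalizing case)
    g = (row["gender"] or "").lower()
    rec["gender"] = g if g in VALID_GENDERS else None
    # Regular-expression pattern constraint
    rec["phone"] = row["phone"] if PHONE_RE.match(row["phone"] or "") else None
    cleaned.append(rec)
    if rec["age"] is not None:
        ages.append(rec["age"])

# Impute missing ages with the mean of the valid ones
mean_age = sum(ages) / len(ages)
for rec in cleaned:
    if rec["age"] is None:
        rec["age"] = mean_age

print(cleaned)
```

In practice this is the kind of logic libraries like pandas wrap for you (coercion, pattern matching, imputation), but the constraint checks themselves are the same.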

Source

[ VIDEO OF THE WEEK]

#GlobalBusiness at the speed of The #BigAnalytics


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

You can have data without information, but you cannot have information without data. – Daniel Keys Moran

[ PODCAST OF THE WEEK]

@AngelaZutavern & @JoshDSullivan @BoozAllen discussed Mathematical Corporation #FutureOfData


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Decoding the human genome originally took 10 years to process; now it can be achieved in one week.

Sourced from: Analytics.CLUB #WEB Newsletter

Driving innovation in global health: 2019 trends

If you’ve watched the recently launched Netflix original series about Bill Gates, you are well aware that huge advances have been made in global health. Living in a developed or developing nation, it’s hard to imagine people still risking their health every time they drink water or go to the toilet. However, when talking about global health, you’re forced to look at the big picture. The truth is, even though there are a lot of ideas for driving innovation in this sector, there’s still a lot more to be done. So, whether you’re a gym expert or a leading scientist, we can all pitch in and make the world a better place.

Registered progress

No matter how you put it, there’s no way to deny the progress made in the past. If we look back to the 19th century, every country had its struggles with poverty and sickness. However, the agricultural and industrial revolutions created the conditions to improve the situation considerably. Rough data shows that in the last 15 years maternal mortality was cut in half, and the same can be said about child mortality. Beyond the statistics, though, the real victories are the ones that see illnesses eradicated forever. That’s the case with smallpox, which claimed over 300 million victims in the 20th century and was officially declared eradicated in 1980.

Going back to the series mentioned in the introduction, various charities and institutions are driving health innovation even further. The next big challenge is tackling polio. Back in 1988, there were 350,000 cases of polio worldwide. Thanks to a sustained effort from the World Health Organization, the Bill & Melinda Gates Foundation, and all the stakeholders involved, polio is now very close to being eradicated as well. And that’s the case for many issues that developing countries face when it comes to health, medicine, and hygiene.

So, as there’s no doubt that innovation played a significant role in all the progress registered so far, what are the exact factors that helped us achieve it? And how can we use them to improve things even more as we keep discovering new technology and better ways to handle old problems? Everyone is involved in making sure progress keeps being made.

Digital revolution: driving medicine into a new age

Originally Posted at: Driving innovation in global health: 2019 trends by administrator

Social Sentiment Company ZenCity Raises $13.5M for Expansion

The Israeli company ZenCity, which helps local governments assess public opinion by combining 311, social media analysis and other open sources on the Internet, has announced $13.5 million in new funding — its largest funding round to date.

A news release today said the money will go toward improving ZenCity’s software, adding partnerships and growing the company’s footprint in the market. The funding round was led by the Israeli venture capital firm TLV Partners, with participation from Salesforce Ventures.

Founded in 2015, ZenCity makes software that collects data from public sources such as social media, local news channels and 311 requests. It then runs this data through an AI tool to identify specific topics, trends and sentiments, from which local government agencies can get an idea of the needs and priorities of their communities.

“Zencity is literally the only way I can get a true big-picture view of all discourse taking place, both on our city-owned channels and those that are not run by the city,” attested Belen Michelis, communications manager for the city of Meriden, Conn., in a case study on the company’s website. “The ability to parse through the chatter from one place is invaluable.”

The latest investments more than doubled ZenCity’s funding, according to Crunchbase, which shows that the company has amassed $21.2 million across three rounds in four years, each larger than the last: $1.7 million announced September 2017, $6 million in September 2018 and $13.5 million today. In May 2018, ZenCity also scored $1 million from Microsoft’s venture capital arm by winning the Innovate.AI competition for Israel’s region.

At the time of that competition, ZenCity counted about 20 customers in the U.S. and Israel. Today’s announcement said the company has over 150 local government customers in the U.S., ranging in size from the city of Los Angeles to the village of Lemont, Ill., with fewer than 20,000 residents.

ZenCity CEO Eyal Feder-Levy said in a statement that his company’s software has a role to play in this moment in history, when city governments are testing new responses to unfolding crises, such as COVID-19 mitigation measures or grants to help local businesses.

“Now more than ever, this investment is further proof of local governments’ acute need for real-time resident feedback,” he said. “The ability to provide municipal leaders with actionable data is a big step in further improving the efficiency and effectiveness of their work.”

Source: Social Sentiment Company ZenCity Raises $13.5M for Expansion

Aug 06, 20: #AnalyticsClub #Newsletter (Events, Tips, News & more..)


[  COVER OF THE WEEK ]

image
Ethics  Source

[ AnalyticsWeek BYTES]

>> Marketing Analytics – Success Through Analysis by analyticsweekpick

>> Copado Adds Government-Specific DevOps Tools to Salesforce by analyticsweekpick

>> Consider The Close Variants During Page Segmentation For A Better SEO by thomassujain

Wanna write? Click Here

[ FEATURED COURSE]

Applied Data Science: An Introduction

image

As the world’s data grow exponentially, organizations across all sectors, including government and not-for-profit, need to understand, manage and use big, complex data sets—known as big data…. more

[ FEATURED READ]

The Industries of the Future

image

The New York Times bestseller, from leading innovation expert Alec Ross, a “fascinating vision” (Forbes) of what’s next for the world and how to navigate the changes the future will bring…. more

[ TIPS & TRICKS OF THE WEEK]

Fix the Culture, spread awareness to get awareness
Adoption of analytics tools and capabilities has not yet caught up to industry standards. Talent has always been the bottleneck to achieving comparable enterprise adoption. One of the primary reasons is a lack of understanding and knowledge among stakeholders. To facilitate wider adoption, data analytics leaders, users, and community members need to step up and create awareness within the organization. An aware organization goes a long way toward quick buy-ins and better funding, which ultimately leads to faster adoption. So be the voice that you want to hear from leadership.

[ DATA SCIENCE Q&A]

Q: How do you know if one algorithm is better than another?
A: * In terms of performance on a given data set?
* In terms of performance on several data sets?
* In terms of efficiency?
In terms of performance on several data sets:

– “Does learning algorithm A have a higher chance of producing a better predictor than learning algorithm B in the given context?”
– “Bayesian Comparison of Machine Learning Algorithms on Single and Multiple Datasets”, A. Lacoste and F. Laviolette
– “Statistical Comparisons of Classifiers over Multiple Data Sets”, Janez Demsar

In terms of performance on a given data set:
– One wants to choose between two learning algorithms
– Need to compare their performances and assess the statistical significance

One approach (Not preferred in the literature):
– Multiple k-fold cross validation: run CV multiple times and take the mean and sd
– You have: algorithm A (mean and sd) and algorithm B (mean and sd)
– Is the difference meaningful? (Paired t-test)

Sign test (classification context):
Simply count the number of times A has a better metric than B and assume this count comes from a binomial distribution. Then we can obtain a p-value for the null hypothesis H0: A and B are equal in terms of performance.

Wilcoxon signed-rank test (classification context):
Like the sign test, but the wins (A is better than B) are weighted and assumed to come from a symmetric distribution around a common median. Then we obtain a p-value for H0.

Other (without hypothesis testing):
– AUC
– F-Score
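The sign test is simple enough to sketch directly in plain Python. The per-fold accuracies below are made up for illustration; under H0 the number of wins for A follows a Binomial(n, 0.5) distribution, so the p-value is a straightforward binomial tail sum.

```python
from math import comb

# Accuracy of algorithms A and B on the same 10 CV folds (hypothetical numbers)
acc_a = [0.81, 0.79, 0.84, 0.80, 0.83, 0.82, 0.85, 0.78, 0.84, 0.81]
acc_b = [0.78, 0.80, 0.81, 0.77, 0.80, 0.79, 0.83, 0.79, 0.80, 0.78]

wins_a = sum(a > b for a, b in zip(acc_a, acc_b))
ties   = sum(a == b for a, b in zip(acc_a, acc_b))
n = len(acc_a) - ties                      # ties are dropped

# Under H0 (A and B equal), wins ~ Binomial(n, 0.5); two-sided p-value
k = max(wins_a, n - wins_a)
p_value = sum(comb(n, i) for i in range(k, n + 1)) * 2 / 2 ** n
print(wins_a, round(p_value, 4))           # 8 wins out of 10 -> p ~ 0.1094
```

Here A wins 8 of 10 folds, yet the p-value (~0.11) does not reject H0 at the usual 0.05 level, which is exactly why a formal test beats eyeballing the win count. The Wilcoxon signed-rank variant additionally weights each win by the size of the accuracy gap.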

Source

[ VIDEO OF THE WEEK]

#FutureOfData with Rob(@telerob) / @ConnellyAgency on running innovation in agency


Subscribe to  Youtube

[ QUOTE OF THE WEEK]

I’m sure the highest-capacity storage device will not be enough to record all our stories; because every time with you is very valuable data.

[ PODCAST OF THE WEEK]

#BigData @AnalyticsWeek #FutureOfData #Podcast with Joe DeCosmo, @Enova


Subscribe 

iTunes  GooglePlay

[ FACT OF THE WEEK]

Within five years there will be over 50 billion smart connected devices in the world, all developed to collect, analyze and share data.

Sourced from: Analytics.CLUB #WEB Newsletter

UI Fraud Rises as Relief Funds Go to Bad Actors (Contributed)


The federal government moved quickly to stand up programs to blunt the economic impact of the COVID-19 pandemic. This was a tremendous undertaking that provided needed financial support to hundreds of millions of Americans. Still, it would be unrealistic to think that such a large-scale effort could be implemented so quickly without any glitches. Unfortunately, fraudsters, both domestically and worldwide, saw this as an opportunity, impacting not only CARES Act funds but Unemployment Insurance (UI) assets, as well. The scope of the issue is vast: Washington state reports up to $650 million in UI money stolen by fraudsters and Maryland has identified more than $500 million in UI fraud.

The Small Business Administration’s (SBA) Paycheck Protection Program (PPP) is intended to keep businesses open by providing forgivable loans to employers to keep workers on the job. However, SBA has been challenged by numerous applications from nonexistent small businesses — duly registered at the state level — claiming they have employees to keep on payroll. Also, some actual small businesses falsified their qualifications for PPP loans, misrepresenting the number of employees and expenses.

While states have no vested interest in PPP funds themselves, fraud has an impact down to the local level:

An employer applies for PPP funds but tells the employees to go on unemployment, causing them to unwittingly commit UI fraud;
A fake company uses stolen identities to apply for a PPP loan, when those individuals are actually employed elsewhere; or
A false, stolen or synthetic identity is used to apply for UI, connecting this fake persona to a real or fake company

Through these techniques and more, fraudsters can directly affect state and local resources and tax revenues, while delaying UI payments to legitimate applicants.

“Pay-and-chase” could potentially lead to the recovery of a portion of the lost funds, but historically, a large percentage of fraudulently obtained dollars is never recovered. Pay-and-chase also has its own costs: The original money is gone, and now you have to spend more — in time, resources and personnel — to try to recoup it. Of course, there is also the deterrent value of chasing down fraudsters, but with limited resources available to auditors, the likelihood is that the majority of that money is unrecoverable.

Stop Fraud at the Front Door

Businesses don’t commit fraud; the people who run those businesses — legitimate or otherwise — do. It’s essential to make sure the applicants are who they say they are, that their businesses are genuine, and that their employees actually exist and work for them.

True, banks are important parties in the loan application process, but the issue starts with state registration, where new businesses register with the Secretary of State and other offices at the city or county level. It’s relatively easy for fraudsters to use stolen identities and fabricated information to create a realistic business entity, complete with management personnel and officers. It’s up to agencies to determine if any or all of the information submitted is true. Historically, this has been tough to do: Research by LexisNexis Risk Solutions shows that only 50 percent of small businesses have a credit history, and half of those with a history only have thin files. Once the business is registered, that information can flow to federal agencies, including the SBA as it reviews PPP loan applications.

The same set of identity issues need to be dealt with when processing UI applicants. Unfortunately, it’s very easy to create an identity that looks real. Online resources exist to falsify an ID or driver’s license, utility bills and pay stubs, so that an applicant can appear legitimate. Stolen or synthetic identities are also being used, which adds to the confusion, as some or all of the information being used about that person is real. The result is that, ultimately, stimulus and UI funds can end up in the wrong hands, leaving the government to recover it from someone who doesn’t exist. These identities may also be used to obtain assistance and benefits through additional state-run programs, as well as to apply for UI and assistance in other states altogether, creating the issue of dual participation.

The Answer Is Data

Preventing fraud requires a judicious, intelligent process that screens applications for business registrations and UI at the earliest possible stage. Most states have systems in place for this, not only for approving business licenses and UI, but also for disaster contingency programs, which require funds to flow quickly from the state or locality to people and businesses. The current environment, however, has made things more complicated, since offices may have limited resources and many applications are completely online to ensure social distancing. But whether in person or digitally, vetting the identities of applicants with confidence can only be done with a comprehensive set of accurate, up-to-date data sources.

Connecting a person’s physical identity — their address, birthdate, Social Security number, etc. — with their digital life — their online activity and where, when and how they interact online — is crucial for building risk scores, which support well-informed decisions on how best to apply limited resources to the issue of fraud. A system that provides real-time identity intelligence and pattern recognition in near-real time would not slow down the application process; in fact, it would improve turnaround time, since less manual vetting is needed.

With the PPP program’s extension, the ability to apply for loan forgiveness, and the likelihood of another round of stimulus on the way, the opportunities for continued PPP fraud may be growing, further straining citizens, public resources and the economy. A multi-layered physical and digital identity intelligence solution, powered by comprehensive government and commercial data sources, means approvers can more quickly and more accurately sort legitimate applicants from scammers by automating the process. And that helps ensure that funds go where they are needed most to support hardworking people in every state. Hopefully these critical safeguards will be implemented in disaster-ready solutions so we are not chasing taxpayer money next time.

Andrew McClenahan, a solutions architect for LexisNexis Risk Solutions, leads the design and implementation of government agency solutions that uphold public program integrity and provides consultative services to systems integrators and government agencies on operations and data architecture issues to promote efficiency and economy in programs. McClenahan has spent much of his 25-year career in public service, including roles as the director of the Office of Public Benefits Integrity at the Florida Department of Children and Families, law enforcement captain with the Florida Department of Environmental Protection, and various sworn capacities with the Tallahassee Police Department. He is a Certified Public Manager, a Certified Welfare Fraud Investigator, and a Certified Inspector General Investigator.

Source by analyticsweekpick