The heterogeneous complexities of big data present the foremost challenge in delivering that data to the end users who need it most. Those complexities are characterized by:
- Disparate data sources: The influx of big data has multiplied the number of data sources almost exponentially, both external and internal. Moreover, the quantity of sources required today is made more complex by…
- Multiple technologies powering those sources: For almost every instance in which SQL is still deployed, there is seemingly another application, use case, or data source that involves an assortment of alternative technologies. Moreover, accounting for the plethora of technologies in use today is frequently aggravated by contemporary…
- Architecture and infrastructure complications: Despite the numerous advantages of cloud, on-premises, and hybrid deployments, contemporary enterprise architecture and infrastructure are increasingly ensnared in processes that protract time to value for accessing data. The dilatory nature of this reality is only worsened in the wake of…
- Heightened expectations for data: As data becomes ever more entrenched in the personal lives of business users, the traditionally lengthy turnaround times of business intelligence and data insight are becoming less tolerable. According to Dremio Chief Marketing Officer Kelly Stirman, "In our personal lives, when we want to use data to answer questions, it's just a few seconds away on Google… And then you get to work, and your experience is nothing like that. If you want to answer a question or want some piece of data, it's a multi-week or multi-month process, and you have to ask IT for things. It's frustrating as well."
However, a number of recent developments within the ever-shifting data landscape have substantially accelerated self-service BI and certain aspects of data science. The end result is that despite the variegated factors characterizing today's big data environments, "for a user, all of my data looks like it's in a single high-performance relational database," Stirman revealed. "That's exactly what every analytical tool was designed for. But behind the scenes, your data's spread across hundreds of different systems and dozens of different technologies."
Conventional BI platforms were routinely hampered by the ETL process, a prerequisite for integrating and loading data into tools whose schemas were at variance with those of source systems. The ETL process was significant for three reasons. It was the traditional way of transforming data for application consumption. It was typically the part of the analytics process that absorbed a significant amount of time and skill, because it required manually writing code. Furthermore, it resulted in multiple copies of data, which could be extremely costly to organizations. Stirman observed: "Each time you need a different set of transformations you're making a different copy of the data. A big financial services institution that we spoke to recently said that on average they have eight copies of every piece of data, and that consumes about 40 percent of their entire IT budget, which is over a billion dollars." ETL is one of the facets of the data engineering process that monopolizes the time and resources of data scientists, who are frequently tasked with transforming data before they can analyze it.
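The copy proliferation described above can be sketched in a few lines. This is an illustrative toy, not any vendor's pipeline: each ETL step materializes a full new physical copy of the data, so one logical dataset quickly becomes several physical ones.

```python
# Toy ETL flow: every transformation step writes out a new physical copy.
import copy

def etl_step(dataset, transform):
    """Apply a transform and materialize the result as a new physical copy."""
    return [transform(copy.deepcopy(row)) for row in dataset]

source = [{"name": "Ada", "revenue": "1200"}, {"name": "Grace", "revenue": "900"}]

# Step 1: cast revenue to an integer -- one new copy of the data.
typed = etl_step(source, lambda r: {**r, "revenue": int(r["revenue"])})

# Step 2: add a derived column -- another full copy.
enriched = etl_step(typed, lambda r: {**r, "tier": "high" if r["revenue"] > 1000 else "low"})

# Three physical datasets now exist for one logical dataset.
copies = [source, typed, enriched]
print(len(copies))  # 3
```

Scale this pattern across an enterprise's transformation needs and the "eight copies of every piece of data" figure Stirman cites becomes easy to believe.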
Modern self-service BI platforms eschew ETL with automated mechanisms that provide virtual (rather than physical) copies of data for transformation. Each subsequent transformation is applied to the virtual replication of the data with swift in-memory technologies that not only accelerate the process but eliminate the need to dedicate resources to physical copies. "We use a distributed process that can run on thousands of servers and take advantage of the aggregate RAM across thousands of servers," Stirman said. "We can execute these transformations dynamically and give you a great high-performance experience on the data, even though we're transforming it on the fly." End users can enact this process visually, without writing scripts.
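The virtual-copy idea can be illustrated with a minimal sketch. The class below is hypothetical (not Dremio's actual API): a "transformation" only extends a recipe attached to the one physical dataset, and the recipe is executed on the fly when results are requested.

```python
# Minimal sketch of a virtual dataset: transformations are recorded as a
# recipe and applied lazily at query time, so no extra physical copies exist.

class VirtualDataset:
    def __init__(self, source, transforms=()):
        self._source = source              # the single physical dataset
        self._transforms = list(transforms)

    def transform(self, fn):
        # Returns a new *virtual* dataset: only the recipe grows, not the data.
        return VirtualDataset(self._source, self._transforms + [fn])

    def rows(self):
        # Transformations execute on the fly as rows are read.
        for row in self._source:
            for fn in self._transforms:
                row = fn(row)
            yield row

base = VirtualDataset([{"qty": 3, "price": 4.0}, {"qty": 1, "price": 9.5}])
with_total = base.transform(lambda r: {**r, "total": r["qty"] * r["price"]})

print([r["total"] for r in with_total.rows()])  # [12.0, 9.5]
```

A real engine distributes this lazy evaluation across many servers' RAM, but the design principle is the same: derived datasets are recipes, not copies.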
Today's self-service BI and data science platforms have also expedited time to insight by making data more available than traditional solutions did. Virtual replications of datasets are useful in this regard because they are stored in the underlying BI solution rather than in the actual data source. Thus, these platforms can access that data without retrieving it from the initial source and incurring the delays inherent in architectural complexity or slow source systems. According to Stirman, the more of these "copies of the data in a highly optimized format" a self-service BI or data science solution has, the faster it is at retrieving relevant data for a query. Stirman noted this approach is similar to Google's, in which there are not only copies of web pages available but also "all these different ways of structuring data about the data, so when you ask a question they can give you an answer very quickly." Self-service analytics solutions that optimize their data copies in this manner produce the same effect.
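The acceleration pattern described here can be sketched as follows. The names are illustrative, not any product's API: an aggregate is computed once from a (simulated) slow source system and materialized inside the platform, and subsequent queries are answered from that optimized copy without touching the source.

```python
# Sketch of query acceleration via an optimized copy kept in the BI layer.
import time

def slow_source_scan():
    """Stand-in for a slow or remote source system."""
    time.sleep(0.05)  # simulated source-system latency
    return [("us", 10), ("eu", 7), ("us", 5)]

class AcceleratedStore:
    def __init__(self):
        self._materialized = None

    def totals_by_region(self):
        if self._materialized is None:
            # The first query pays the source cost and materializes a copy.
            totals = {}
            for region, amount in slow_source_scan():
                totals[region] = totals.get(region, 0) + amount
            self._materialized = totals
        # Later queries hit the optimized copy directly.
        return self._materialized

store = AcceleratedStore()
print(store.totals_by_region())  # {'us': 15, 'eu': 7}
print(store.totals_by_region())  # served from the copy, no source scan
```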
Competitive platforms in this space are able to account for the multiplicity of technologies the enterprise must contend with in a holistic fashion. They do so by continuing to prioritize SQL as the preferred query language, rewriting it into the language relevant to the source data's technology even when that language isn't SQL. By rewriting SQL into the query languages of the host of non-relational technology options, users effectively have "a single, unified future-proof way to query any data source," Stirman said. Thus, they can query any data source without understanding its technology or its query language, because the self-service BI platform does. In those instances in which "those sources have something you can't express in SQL, we augment those capabilities with our distributed execution engine," Stirman remarked.
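To make the rewriting idea concrete, here is a heavily simplified sketch that translates a restricted SQL WHERE clause into a MongoDB-style filter document. A real engine parses full SQL and targets many backends; this toy handles only `col = value` and `col > value` predicates joined by AND.

```python
# Toy SQL-to-MongoDB rewriter: a user writes familiar SQL predicates, and
# the platform emits the non-relational query language behind the scenes.
import re

def rewrite_where(where_clause):
    """Translate e.g. "age > 30 AND city = 'Paris'" into a Mongo-style filter."""
    filt = {}
    for predicate in where_clause.split(" AND "):
        m = re.match(r"(\w+)\s*(=|>)\s*'?([^']+)'?", predicate.strip())
        column, op, value = m.groups()
        value = int(value) if value.isdigit() else value
        # "=" maps to direct equality; ">" maps to Mongo's $gt operator.
        filt[column] = value if op == "=" else {"$gt": value}
    return filt

print(rewrite_where("age > 30 AND city = 'Paris'"))
# {'age': {'$gt': 30}, 'city': 'Paris'}
```

The user only ever sees the SQL side; the backend-specific filter is an implementation detail, which is what makes the approach "future-proof" as new source technologies appear.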
The crux of self-service platforms for BI and data science is that by eschewing ETL for quicker forms of transformation, leveraging in-memory technologies to access virtual copies of data, and rewriting familiar relational queries into the languages of non-relational technologies, users can rely on their tool of choice for analytics. Business end users can choose Tableau, Qlik, or any other preferred tool, while data scientists can use R, Python, or any other popular data science platform. The fact that these solutions facilitate these advantages at scale and in cloud environments adds to their viability. Consequently, "You log in as a consumer of data and you can see the data, and you can shape it the way you want to yourself without being able to program, without knowing these low-level IT skills, and you get the data the way you want it through a powerful self-service model instead of asking IT to do it for you," Stirman said. "That's a fundamentally very different approach from the traditional approach."