
Qlik was an early pioneer in developing a substantial market for a visual discovery tool that enables end users to easily access and manipulate analytics and data. Its QlikView application uses an associative experience that takes an in-memory, correlation-based approach to present a simpler design and user experience for analytics than previous tools. Driven by sales of QlikView, the company's revenue has grown to more than $500 million, and from its origins in Sweden it has built a global presence.

At its annual analyst event in New York the business intelligence and analytics vendor discussed recent product developments, in particular the release of Qlik Sense. It is a drag-and-drop visual analytics tool targeted at business users but scalable enough for enterprise use. Its aim is to give business users a simplified visual analytic experience that takes advantage of modern cloud technologies. Such a user experience is important; our benchmark research into next-generation business intelligence shows that usability is an important buying criterion for nearly two out of three (63%) companies. A couple of months ago, Qlik introduced Qlik Sense for desktop systems, and at the analyst event it announced general availability of the cloud and server editions.

According to our research into business technology innovation, analytics is the top initiative for new technology: 39 percent of organizations ranked it their number-one priority. Analytics includes exploratory and confirmatory approaches to analysis. Ventana Research refers to exploratory analytics as analytic discovery and segments it into four categories that my colleague Mark Smith has articulated. Qlik's products belong in the analytic discovery category. Users can investigate data sets with the tool in an intuitive and visual manner, often performing root-cause analysis and decision support. This software market is relatively young, and competing companies are evolving and redesigning their products to suit changing tastes. Tableau, one of Qlik's primary competitors, which I wrote about recently, is adapting its current platform to developments in hardware and in-memory processing, focusing on usability and opening up its APIs. Others have recently made their first moves into the market for visual discovery applications, including Information Builders and MicroStrategy. Companies such as Actuate, IBM, SAP, SAS and Tibco are focused on incorporating more advanced analytics in their discovery tools. For buyers, this competitive and fragmented market creates a challenge when comparing analytic discovery offerings.

A key differentiator is Qlik Sense's new modern architecture, which is designed for cloud-based deployment and embedding in other applications for specialized use. Its analytic engine plugs into a range of Web services. For instance, the Qlik Sense API enables the analytic engine to call a data set on the fly and allows the application to manipulate data in the context of a business process. An entire table can be delivered to Node.js, which extends the JavaScript API to offer server-side features and enables the Qlik Sense engine to take on an almost unlimited number of real-time connections by not blocking input and output. Previously, developers could write PHP scripts and pipe SQL to get the data, but the resulting applications were viable yet complex to build and maintain. Now all they need is JavaScript and HTML. The Qlik Sense architecture abstracts the complexity and allows JavaScript developers to make use of complex constructs without intricate knowledge of the database. The new architecture can decouple the Qlik engine from the visualizations themselves, so Web developers can define expressions and dimensions without going into the complexities of the server-side architecture. Furthermore, by decoupling the services, developers gain access to open source visualization technologies such as d3.js. Cloud-based business intelligence and extensible analytics are becoming a hot topic. I have written about this, including a glimpse of our newly announced benchmark research on the next generation of data and analytics in the cloud. From a business user perspective, these types of architectural changes may not mean much, but for developers, OEMs and UX design teams, they allow much faster time to value through a simpler component-based approach to utilizing the Qlik analytic engine and building visualizations.
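To make the decoupling concrete, here is a minimal sketch in Python of the pattern described: a thin service layer asks an analytics engine for an aggregated table and returns plain JSON that any front end (a d3.js chart, for example) can render. The endpoint URL, payload shape and field names are hypothetical assumptions for illustration, not the Qlik Sense API itself.

```python
# Sketch of a decoupled front end / analytic engine pattern.
# ENGINE_URL and the request/response shapes are hypothetical.
import json
import requests

ENGINE_URL = "https://analytics.example.com/api/hypercube"  # hypothetical endpoint

def fetch_sales_by_region(session_token: str) -> list[dict]:
    """Request an aggregated table from the engine and return JSON-ready rows."""
    payload = {
        "dimensions": ["Region"],
        "measures": [{"field": "Sales", "aggregation": "sum"}],
    }
    resp = requests.post(
        ENGINE_URL,
        json=payload,
        headers={"Authorization": f"Bearer {session_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    # The front end only ever sees simple records; it never touches the engine.
    return [
        {"region": row["Region"], "sales": row["Sales"]}
        for row in resp.json()["rows"]
    ]

if __name__ == "__main__":
    print(json.dumps(fetch_sales_by_region("demo-token"), indent=2))
```

The point of the sketch is the separation of concerns: the visualization code consumes flat JSON and needs no knowledge of how the engine computed it.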

The modern architecture of Qlik Sense, together with the company's ecosystem of more than 1,000 partners and a professional services organization that has completed more than 2,700 consulting engagements, gives Qlik a strong competitive position. The service partner relationships, including those with major systems integrators, are key to the company's future since analytics is as much about change management as technology. Our analytics research consistently shows that people and processes lag behind technology and information in analytics performance. Furthermore, in our benchmark research into big data analytics, the benefits most often mentioned as achieved are better communication and knowledge sharing (24%), better management and alignment of business goals (18%), and gaining competitive advantage (17%).

As tested on my desktop, Qlik Sense shows an intuitive interface with drag-and-drop capabilities for building analysis. Formulas are easy to incorporate as new measures, and the palette offers a variety of visualization options that automatically fit to the screen. The integration with QlikView is straightforward in that a data model from QlikView can be saved seamlessly and opened intact in Qlik Sense. The storyboard function allows multiple visualizations to be built into narratives and annotations to be added, including links to the data. For instance, annotations can be added to specific inflection points in a trend line or to outliers that may need explanation. Since the approach is all HTML5-based, the visualizations are ready for deployment to mobile devices and responsive to various screen sizes, including newer smartphones, tablets and the new class of so-called phablets. In the evaluation of vendors in our Mobile Business Intelligence Value Index, Qlik ranked fourth overall.

In the software business, of course, technology advances alone don't guarantee success. Qlik has struggled to clarify the positioning of its next-generation product and to explain that it is not a replacement for QlikView. QlikView users are passionate about keeping their existing tool because they have already designed dashboards and calculations with it. Vendors should not underestimate user loyalty and adoption. Therefore Qlik now promises to support both products for as long as the market continues to demand them. The majority of R&D investment will go into Qlik Sense as developers focus on surpassing the capabilities of QlikView. For now, the company will follow a bifurcated strategy in which the tools work together to meet the needs of various organizational personas. To me, this is the right strategy. There is no issue in being a two-product company, and the revised positioning of Qlik Sense complements QlikView on both the self-service side and the developer side. Qlik Sense is not yet as mature a product as QlikView, but from a business user's perspective it is a simple and effective analysis tool for exploring data and building different data views. It is simpler because users do not need to script the data in order to create the specific views they deem necessary. As the product matures, I expect it to become more than an end user's visual analysis tool, since the capabilities of Qlik Sense lend themselves to web-scale approaches. Over time, it will be interesting to see how the company harmonizes the two products and how quickly customers adopt Qlik Sense as a stand-alone tool.

For companies already using QlikView, Qlik Sense is an important addition to the portfolio. It will allow business users to become more engaged in exploring data and sharing ideas. Even for those not using QlikView, with its modern architecture and open approach to analytics, Qlik Sense can help future-proof an organization's current business intelligence architecture. For those considering Qlik for the first time, the choice may be whether to bring in one or both products. Given QlikView's proven track record, a combination of the two may be the better near-term solution for some organizations. Partners, content providers and ISVs should consider Qlik Branch, which provides resources for embedding Qlik Sense directly into applications. The site provides developer tools, community efforts such as d3.js integrations and synchronization with GitHub for sharing and branching of designs. For every class of user, Qlik Sense can be downloaded for free and tested directly on the desktop. Qlik has made significant strides with Qlik Sense, and it is worth a look for anybody interested in the cutting edge of analytics and business intelligence.

Regards,

Ventana Research

Tableau Software introduced its latest advancements in analytics and business intelligence software, along with its future plans, to more than 5,000 attendees at its annual user conference in its hometown of Seattle. The enthusiasm of the primarily millennial-age crowd reflected not only the success of the young company but also its aspirations. The market for what Ventana Research calls visual and data discovery has grown rapidly, and Tableau along with it; that growth is likely to continue.

The company focuses on the mission of usability, which our benchmark research into next-generation business intelligence shows to be a top software buying criterion for more organizations (63%) than any other. Tableau introduced advances in this area including analytic ease of use, APIs, data preparation, storyboarding and mobile technology support as part of its user-centric product strategy. Without revealing specific timelines, executives said that the next major release, Tableau 9.0, likely will be available in the first half of 2015, as the CEO outlined in his keynote.

Chief Development Officer and co-founder Chris Stolte showed upcoming ease-of-use features such as the addition of Excel-like functionality within workbooks. Users can type a formula directly into a field and use auto-completion or drag and drop to bring in other fields that are components of the calculation. The new calculation can be saved as a metric and easily added to the Tableau data model. Other announced features included table calculations, geographic search capabilities and radial and lasso selection on maps. The live demonstration between users onstage was seamless, creating flows the audience could follow and showing impressive navigation capabilities.

Stolte also demonstrated upcoming statistical capabilities. Box plots have been available since Tableau 8.1, but the capabilities have now been extended for comparative analysis across groups and for creating basic forecast models. Comparative descriptive analytics has been improved with drill-down features and calculations within tables. This is important since analysis between and within groups is necessary to use descriptive statistics to reveal business insights. Our research into big data analytics shows that some of the most important analytic approaches are descriptive in nature: Pivot tables (48%), classification or decision trees (39%) and clustering (37%) are the methods most widely used for big data analytics.

When it comes to predictive analytics, however, Tableau is still somewhat limited. Companies such as IBM, Information Builders, MicroStrategy, SAS and SAP have focused more resources on incorporating advanced analytics in their discovery tools; Tableau has to catch up in this area. Forecasting of basic trend lines is a first step, but if the tool is meant for model builders, then I'd like to see more families of curves and algorithms to fit different data sets, such as those with seasonal variations. Business users, Tableau's core audience, need automated modeling approaches that can evaluate the characteristics of the data and produce adequate models. How different stakeholders communicate around the statistical parameters and models is also unclear to me. Our research shows that summary statistics and model comparisons are important capabilities for administering and sharing predictive analytics. Overall, Tableau is making strides in both descriptive and predictive statistics and in making them intuitive for users.
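As an illustration of the kind of seasonal model meant here, the following sketch fits Holt-Winters exponential smoothing next to a simple linear trend on synthetic monthly data. It is a minimal example of the technique using statsmodels, not a representation of Tableau's forecasting engine.

```python
# Compare a plain linear trend with a seasonal Holt-Winters model.
# The data is synthetic: trend plus a 12-month seasonal cycle plus noise.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
months = pd.date_range("2012-01-01", periods=36, freq="MS")
trend = np.linspace(100, 160, 36)
season = 15 * np.sin(2 * np.pi * np.arange(36) / 12)
sales = pd.Series(trend + season + rng.normal(0, 3, 36), index=months)

# Linear trend only: what a basic trend line captures.
coeffs = np.polyfit(np.arange(36), sales.values, deg=1)
trend_forecast = np.polyval(coeffs, np.arange(36, 48))

# Holt-Winters with an additive trend and additive seasonal component.
hw = ExponentialSmoothing(sales, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
hw_forecast = hw.forecast(12)

print("Linear trend, next 12 months:", np.round(trend_forecast, 1))
print("Holt-Winters, next 12 months:", np.round(hw_forecast.values, 1))
```

On seasonal data the straight trend line misses the within-year swings that the seasonal model reproduces, which is the gap the paragraph above is pointing at.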

Presenters also introduced new data preparation capabilities for Excel imports, including the ability to view delimiters, split columns and even un-pivot data. The software also has some ability to clean up spreadsheets, such as getting rid of headers and footers. Truly dirty data, such as survey data captured in spreadsheets or created with custom calculations and nesting, is not the target here. The data preparation capabilities can't compare with those provided by companies such as Alteryx, Informatica, Paxata, Pentaho, Tamr or Trifacta. However, it is useful to quickly upload and clean a basic Excel document and then visualize it in a single application. According to our benchmark research on information optimization, data preparation (47%) and checking data for quality and consistency (45%) are the primary tasks on which analysts spend their time.
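For readers who want to see what these clean-up steps look like in code, here is a minimal pandas sketch of the same operations: dropping header and footer rows, splitting a delimited column and un-pivoting a cross-tab layout. The file name and column names are hypothetical; this shows the general technique, not Tableau's implementation.

```python
# Spreadsheet clean-up sketch: skip banner/footer rows, split a combined
# column, and un-pivot month columns into tidy rows.
import pandas as pd

# Skip a two-row banner at the top and a one-line footer note at the bottom.
df = pd.read_excel("regional_sales.xlsx", skiprows=2, skipfooter=1)

# Split a combined "Region - Manager" column into two separate fields.
df[["Region", "Manager"]] = df["Region - Manager"].str.split(" - ", expand=True)
df = df.drop(columns=["Region - Manager"])

# Un-pivot month columns (Jan, Feb, ...) into rows suitable for visualization.
tidy = df.melt(id_vars=["Region", "Manager"],
               var_name="Month", value_name="Sales")
print(tidy.head())
```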

Storytelling (which Tableau calls Storypoints) is an exciting area of development for the company. Introduced last year, it enables users to build a sequential narrative that includes graphics and text. New developments enable the user to view thumbnails of different visualizations and easily pull them into the story. Better control over calculations, fonts, colors and text positioning was also introduced. While these changes may seem minor, they are important to this kind of application. A major reason that most analysts take their data out of an analytic tool and put it into PowerPoint is to have this type of control and ease of use. While PowerPoint remains dominant when it comes to communicating analytic results in business, a new category of tools is challenging Microsoft's preeminence in this area. Tableau Storypoints is one of the easiest to use in the market.

API advancements were discussed by Francois Ajenstat, senior director of product management, who suggested that in the future anything that can be done on Tableau Server will be possible through APIs. In this way different capabilities will be exposed so that other software can use them (Tableau visualizations, for example) within the context of their workflows. In addition, Tableau has added REST APIs including JavaScript capabilities, which allow users to embed Tableau in applications to do such things as customize dashboards. The ability to visualize JSON data in Tableau is also a step forward, since exploiting new data sources is the fastest way to gain business advantage from analytics. This capability was demonstrated using government data, which is commonly packaged in JSON format. As Tableau continues its API initiatives, we hope to see more advances in exposing APIs so Tableau can be integrated into governance workflows, which can be critical to enterprise implementations. APIs also can enable analytic workflow tools to more easily access the product so statistical modelers can understand the data prior to building models. While Tableau integrates on the back end with analytic tools such as Alteryx, the visual and data discovery work that must precede statistical model building is still a challenge. Having more open APIs will open up opportunities for Tableau's customers and could broaden its partner ecosystem.
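In the spirit of the government-data demonstration mentioned above, here is a short sketch of preparing JSON for visualization: pulling nested JSON records and flattening them into a tabular form that a tool like Tableau could consume. The URL, response shape and field names are hypothetical assumptions; the example shows the general flattening technique, not how Tableau handles JSON internally.

```python
# Flatten nested JSON records into a flat table for visualization.
import pandas as pd
import requests

resp = requests.get("https://data.example.gov/api/permits.json", timeout=30)
records = resp.json()["results"]  # assumed response shape: {"results": [ {...}, ... ]}

# json_normalize flattens nested objects such as {"location": {"city": ...}}
# into dotted columns like "location.city".
flat = pd.json_normalize(records)
flat.to_csv("permits_flat.csv", index=False)
print(flat.columns.tolist()[:10])
```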

The company made other enterprise announcements such as Kerberos security, easier content management, the ability to integrate seamlessly with Salesforce.com, and a parallelized approach to accessing very large data sets. These are all significant developments. As Tableau advances its enterprise vision and continues to expand on the edges of the organization, I expect it to compete in more enterprise deals. The challenge the company faces is still one of the enterprise data model. Short of displacing the enterprise data models that companies have invested in over the years, it will continue to be an uphill battle for Tableau to displace large incumbent BI vendors. Our research into information optimization shows that integration with security and user-access frameworks is the biggest technical challenge for optimizing information. For a deeper look at Tableau's battle for the enterprise, please read my previous analysis.

Perhaps the most excitement from the audience came from the introduction of Project Elastic, a new mobile application with which users can automatically turn an email attachment in Excel into a visualization. The capability is native, so it works in offline mode and provides a fast and responsive experience. The new direction bifurcates Tableau's mobile strategy, which heretofore was a purely HTML5 approach introduced with Tableau 6.1. Tableau ranked seventh in our 2014 Mobile Business Intelligence Value Index.

Tableau has a keen vision of how it can carve out a place in the analytics market. Usability has a major role in building a following among users that should help it continue to grow. The Tableau cool factor won't go unchallenged, however. Major players are introducing visual discovery products amid discussion about the need for more governance of data in the enterprise and in cloud computing; Tableau will likely have to blend into the fabric of analytics and BI in organizations. To do so, the company will need to forge significant partnerships and open its platform for easy access.

Organizations considering visual discovery software in the context of business and IT should include Tableau. For enterprise implementations, they should evaluate whether Tableau can support the broader manageability and reliability requirements of larger-scale deployments. Visualization of data continues to be a critical method for understanding the challenges of global business, but it should not be the only analytic approach taken for all types of users. Tableau is on the leading edge of visual discovery and should not be overlooked.

Regards,

Ventana Research

A few months ago, I wrote an article on the four pillars of big data analytics. One of those pillars is discovery analytics, in which visual analytics and data discovery combine to meet the needs of the business and the analyst. My colleague Mark Smith subsequently clarified the four types of discovery analytics: visual discovery, data discovery, information discovery and event discovery. Now I want to follow up with a discussion of three trends that our research has uncovered in this space. (To reference how I'm using these four discovery terms, please refer to Mark's post.)

The most prominent of these trends is that conversations about visual discovery are beginning to include data discovery, and vendors are developing and delivering such tool sets today. It is well known that while big data profiling and the ability to visualize data give us a broader capacity for understanding, there are limitations that can be addressed only through data mining and techniques such as clustering and anomaly detection. Such approaches are needed to overcome statistical interpretation challenges such as Simpson's paradox. In this context, we see a number of tools with different architectural approaches tackling this obstacle. For example, Information Builders, Datameer, BIRT Analytics and IBM's new SPSS Analytic Catalyst tool all incorporate user-driven data mining directly with visual analysis. That is, they combine data mining technology with visual discovery for enhanced capability and more usability. Our research on predictive analytics shows that integrating predictive analytics into the existing architecture is the most pressing challenge (for 55% of organizations). Integrating data mining directly into the visual discovery process is one way to overcome this challenge.
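Here is a brief sketch of the two data mining techniques named above, clustering and anomaly detection, applied to the same kind of data a visual tool would profile. The data is synthetic and scikit-learn is used purely for illustration; this is not any vendor's embedded implementation.

```python
# Clustering plus anomaly detection on synthetic two-segment customer data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Two customer segments (e.g., order frequency vs. annual spend) plus a few extremes.
segment_a = rng.normal(loc=[20, 200], scale=[3, 20], size=(200, 2))
segment_b = rng.normal(loc=[60, 800], scale=[5, 50], size=(200, 2))
outliers = np.array([[150.0, 5000.0], [5.0, 4000.0], [120.0, 50.0]])
X = np.vstack([segment_a, segment_b, outliers])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
anomalies = IsolationForest(contamination=0.01, random_state=0).fit_predict(X)

print("Cluster sizes:", np.bincount(clusters))
print("Flagged anomalies:", int((anomalies == -1).sum()))
```

Where a scatter plot alone can hide group structure (the Simpson's paradox problem), the cluster labels and anomaly flags give the analyst the group-level context needed to interpret the picture correctly.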

The second trend is renewed focus on information discovery (i.e., search), especially among large enterprises with widely distributed systems as well as the big data vendors serving this market. IBM acquired Vivisimo and has incorporated the technology into its PureSystems and big data platform. Microsoft recently previewed its big data information discovery tool, Data Explorer. Oracle acquired Endeca and has made it a key component of its big data strategy. SAP added search to its latest Lumira platform. LucidWorks, an independent information discovery vendor that provides enterprise support for open source Lucene/Solr, adds search as an API and has received significant adoption. There are different levels of search, from documents to social media data to machine data, but I won't drill into these here. Regardless of the type of search, in today's era of distributed computing, in which there's a need to explore a variety of data sources, information discovery is increasingly important.

The third trend in discovery analytics is a move to more embeddable system architectures. In parallel with the move to the cloud, architectures are becoming more service-oriented, and the interfaces are hardened in such a way that they can integrate more readily with other systems. For example, the visual discovery market was born on the client desktop with Qlik and Tableau, quickly moved to server-based apps and is now moving to the cloud. Embeddable tools such as D3, which is essentially a visualization-as-a-service offering, allow vendors such as Datameer to include an open source library of visualizations in their products. Lucene/Solr represents a similar embedded technology in the information discovery space. The broad trend we're seeing is toward RESTful architectures that promote a looser coupling of applications and therefore require less custom integration. This move runs in parallel with the decline of Internet Explorer, the rise of new browsers and the ability to render content using JavaScript Object Notation (JSON). This trend suggests a future for discovery analysis embedded in application tools (including, but not limited to, business intelligence). The environment is still fragmented and in its early stage. Instead of one cloud, we have a lot of little clouds. For the vendor community, which is building more platform-oriented applications that can work in an embeddable manner, a tough question is whether to go after the on-premises market or the cloud market. I think that each vendor will have to make its own decision based on customer needs and its own business model constraints.
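To ground the loosely coupled, REST-plus-JSON pattern described above, here is a minimal sketch: a small service exposes aggregated data as JSON, and an embeddable library such as d3.js renders it in the browser. Flask is used only as a convenient example; the route and data are hypothetical.

```python
# Minimal JSON service that a d3.js (or any other) front end could consume.
from flask import Flask, jsonify

app = Flask(__name__)

SALES_BY_REGION = [
    {"region": "North", "sales": 1200},
    {"region": "South", "sales": 950},
    {"region": "West", "sales": 1430},
]

@app.route("/api/sales-by-region")
def sales_by_region():
    # A browser client can call d3.json("/api/sales-by-region") and bind
    # these records to chart elements, with no coupling to how they were computed.
    return jsonify(SALES_BY_REGION)

if __name__ == "__main__":
    app.run(port=5000)
```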

Regards,

Tony Cosentino

VP and Research Director

Actuate recently announced BIRT Analytics Version 4.2, part of its portfolio of business intelligence software. The new release places several techniques used by analytics professionals behind a user-friendly interface that does not require advanced knowledge of statistics. Beyond the techniques themselves, release 4.2 focuses on guiding users through processes such as campaign analytics and targeting.

With this release, Actuate builds on what I have already assessed in BIRT Analytics and extends support for more advanced analytics within functions such as marketing. For these users, a handful of analytical techniques cover the majority of use cases. Our benchmark research into predictive analytics shows that classification trees (used by 69% of participants), regression techniques (66%), association rules (49%) and k-nearest neighbor algorithms (36%) are the techniques used most often. While BIRT Analytics uses Holt-Winters exponential smoothing rather than linear regression for forecasting, and k-means for clustering, the key point is that it addresses the most important uses in the organization through a nontechnical user interface. Using techniques like regression or supervised learning algorithms increases complexity, and such analysis often requires formidable statistical knowledge from the user. In addition to the techniques mentioned above, BIRT Analytics reduces complexity by offering Venn diagram set analysis, a geographic mapping function, and the ability to compare attributes using z-score analysis. A z-score is a standardized unit of measure (relative to the distribution parameters mu and sigma) that represents how far a particular measurement lies from the mean. The higher the absolute value of the z-score, the more significant the attribute. This analysis is a simple way of showing things such as the likelihood that a particular email campaign segment will respond to a particular offer; such knowledge helps marketers understand what drives response rates and build lift into a marketing campaign. With this analytical tool set, the marketer or front-line analyst is able to dive directly into cluster analysis, market basket analysis, next-best-offer analysis, campaign analysis, attribution modeling, root-cause analysis and target marketing analysis in order to affect outcome metrics such as new customer acquisition, share of wallet, customer loyalty and retention.
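For readers unfamiliar with the calculation, here is a minimal sketch of the z-score comparison described above: how far each campaign segment's response rate sits from the mean across segments. The segment names and rates are made up for illustration and have nothing to do with BIRT Analytics' internals.

```python
# z = (x - mu) / sigma, computed for each segment's response rate.
import numpy as np

segments = ["Loyal", "Lapsed", "New", "Discount seekers", "Gift buyers"]
response_rates = np.array([0.082, 0.021, 0.047, 0.065, 0.030])

mu = response_rates.mean()
sigma = response_rates.std(ddof=1)  # sample standard deviation
z_scores = (response_rates - mu) / sigma

for name, z in zip(segments, z_scores):
    flag = " <- stands out" if abs(z) > 1 else ""
    print(f"{name:18s} z = {z:+.2f}{flag}")
```

Segments with large positive or negative z-scores are the ones a marketer would investigate first when deciding where a campaign is likely to lift response.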

Actuate also includes the iWorkflow application in release 4.2. It enables users to set business rules based on constantly calculated measurements and their variance relative to optimal KPI values. If a value falls outside the critical range, the system can start an automated process or send a notification prompting manual intervention. For instance, if an important customer satisfaction threshold is not being met, the system can notify a customer experience manager to take corrective action. In the same way, the iWorkflow tool allows users to preprogram distribution of analytical results across the organization based on particular roles or security criteria. As companies work to link market insights with operational objectives, Actuate ought to integrate more tightly with applications from companies such as Eloqua, Marketo and salesforce.com. Today this integration has to be done manually, which prevents the automation of closed-loop workflows in areas such as campaign management and customer experience management. Once this is done, however, the tool becomes more valuable to users. The ability to embed analytics into the workflows of the applications themselves is the next challenge for vendors of tools for visual discovery and data discovery.
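The rule-based pattern described here is straightforward to sketch: a KPI is recalculated on a schedule, and when it falls outside its acceptable range a notification or automated process is triggered. The code below illustrates the general workflow idea under assumed thresholds and names; it is not Actuate's iWorkflow API.

```python
# Generic KPI threshold rule that fires a callback when a value breaches its range.
from dataclasses import dataclass
from typing import Callable

@dataclass
class KpiRule:
    name: str
    lower: float
    upper: float
    on_breach: Callable[[str, float], None]

    def evaluate(self, value: float) -> None:
        if not (self.lower <= value <= self.upper):
            self.on_breach(self.name, value)

def notify_customer_experience_manager(kpi: str, value: float) -> None:
    # In a real deployment this might send an email or start an automated process.
    print(f"ALERT: {kpi} at {value:.2f} is outside its acceptable range")

rule = KpiRule("Customer satisfaction", lower=4.0, upper=5.0,
               on_breach=notify_customer_experience_manager)
rule.evaluate(3.6)  # breaches the lower bound, triggers the alert
rule.evaluate(4.4)  # within range, no action
```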

Other enhancements to BIRT Analytics address data loading and data preparation. The data loader adds a drag-and-drop capability for mapped fields, can incorporate both corporate data and personal data from the desktop and automates batch loading. New preprocessing techniques include scaling approaches and data mapping. The abilities to load data into the columnar store from different information sources and to manipulate the data in databases are important areas that Actuate should continue to develop. Information sources will always be more important than the tools themselves, and data preprocessing is still where most analysts spend the bulk of their time.

BIRT Analytics has been overlooked by many companies in the United States since the company's roots are in Spain, but the technology offers capabilities on par with many of the leaders in the BI category, and some that are even more advanced. According to our business technology innovation benchmark research, companies are instituting new technology because of bottom-line considerations such as improvements in business initiatives (60%) and in processes (57%). Furthermore, usability is the top evaluation criterion for business intelligence tools in almost two-thirds (64%) of companies, according to our research on next-generation business intelligence. These are among the reasons we are seeing mass adoption of discovery tools such as BIRT Analytics. Those looking into discovery tools, and especially marketing departments that want to put a portfolio of analytics directly into the hands of the marketing analyst and the data-savvy marketer, should consider BIRT Analytics 4.2.

Regards,

Tony Cosentino

VP and Research Director

Just about all the CIOs I speak with are at an inflection point in their careers. Some are just biding time before retirement, but many are emerging CIOs who are driven more by a business imperative than a technological one. Today, market and cultural pressures are forcing CIOs to move quickly and be flexible. In many ways, this is antithetical to the posture of IT, which can often be described as slow and methodical. This posture, however, is no longer sustainable in the era of the six forces of business technology innovation that Ventana Research tracks in our BTI benchmark research.

Well-read publications such as the Harvard Business Review, the New York Times and Wired Magazine are espousing the virtues of big data and analytics. CEOs are listening and demanding that their organizations be adaptive and flexible – and most have iPads. This fact should not be underestimated, because everything you do on an iPad is easy. To bring social, local and mobile intelligence together, organizations face the challenge of slow descriptive analytics and what my colleague Mark Smith rightly calls pathetic dashboard environments. Our next-generation business intelligence benchmark research shows organizations rate the importance of business intelligence as very high, but satisfaction levels are low and trending downward. At the same time, usability is growing to be the top buying criterion for business intelligence by an astounding margin over functionality (64% vs. 49%). The driver of these numbers is that expectations are being set at the consumer level by apps such as Google and Yelp!, and business intelligence applications are not living up to these expectations.

This is a difficult situation for CIOs because IT has heavy investments in business intelligence tools and in the SQL skills that often underlie support for them. At the same time, they need to think about what the business needs as much as what they currently have in their environment.

In the 1990s, CIOs faced a similar situation with ERP systems and the OLAP tools that were being deployed on the client side of organizations' technology architectures. In that case, CIOs often needed to think from the perspective of the CFO and form a close partnership with the operational finance team. This was a natural partnership because both areas are numbers-driven and tools-oriented. In today's environment, however, the CIO must partner with the CMO and iPad-carrying executives to drive competitive advantage through analytics and big data. The rapid evolution of big data technologies and what I have described as the four pillars of big data analytics are being adopted by the business, in many cases outside the scope of the CIO. Much to the chagrin of IT, those executives and marketers do not want spreadsheets and "pathetic" dashboards. They want visual discovery, they want search, they want prescriptive analytics, and they want results.

Finally, CIOs must grapple with the fact that the business must be involved in building out IT, since they can no longer have tight centralized control of all technology. Organizations have many different applications sprouting up, from visual discovery tools to business analytics, many of them part of the growing use of cloud computing. CIOs cannot even get a baseline on the company's current technology environment because they often have no idea what's happening outside of the data center. CIOs need to start learning from business users about the new technologies they're using and collaborate with them on how to put the business tools together with what is running in the data center.

CIOs have been metaphorically barricaded in the data center, and the data center has been a cost center, not a profit center, and certainly not an investment center. CIOs must now reach out to business users to show value. To fulfill the value promise, they must help business users while making sure they can deliver on the table stakes: trusted data, secure data and governance around the data. This requires an organizational cadence that can result only from a marriage of the business side with the IT side.

Regards,

Tony Cosentino

VP and Research Director

SAS Institute held its 24th annual analyst summit last week in Steamboat Springs, Colorado. The 37-year-old privately held company is a key player in big data analytics, and company executives showed off their latest developments and product roadmaps. In particular, LASR Analytical Server and Visual Analytics 6.2, which is due to be released this summer, are critical to SAS’ ability to secure and expand its role as a preeminent analytics vendor in the big data era.

For SAS, the competitive advantage in big data rests in predictive analytics, and according to our benchmark research into predictive analytics, 55 percent of businesses say the challenge of architectural integration is a top obstacle to rolling out predictive analytics in the organization. Integration of analytics is particularly daunting in a big-data-driven world, since analytics processing has traditionally taken place on a platform separate from where the data is stored, but now the two must come together. How data is moved into parallelized systems and how analytics are consumed by business users are key questions in the market today that SAS is looking to address with LASR and Visual Analytics.

Jim Goodnight, the company's founder and plainspoken CEO, says he saw the industry changing a few years ago. He speaks of a large bank doing a heavy analytical risk computation that took upwards of 18 hours, which meant that the results were not ready in time for the next trading day. To gain competitive advantage, the time window needed to be reduced, but running the analytics in a serialized fashion was a limiting factor. This led SAS to begin parallelizing the company's workhorse procedures, some of which were first developed upwards of 30 years ago. Goodnight also discussed the fact that parallelizing these statistical models is no easy task. One of the biggest hurdles is getting the mathematicians and data scientists who are building these elaborate models to think in terms of the new parallelized architectural paradigm.

Its Visual Analytics software is a key component of the SAS big data analytics strategy. Our latest business technology innovation benchmark research [http://www.ventanaresearch.com/bti/] found that close to half (48%) of organizations present business analytics visually. Visual Analytics, which was introduced early last year, is a cloud-based offering running on the LASR in-memory analytic engine and the Amazon Web Services infrastructure. This web-based approach allows SAS to iterate quickly without worrying a great deal about revision management while giving IT a simpler server management scenario. Furthermore, the web-based approach provides analysts with a sandbox environment for working with and visualizing big data analytics in the cloud; the analytic assets can then be moved into a production environment. This approach will also eventually allow SAS to combine data integration capabilities with its data analysis capabilities.

With descriptive statistics being the ante in today's visual discovery world, SAS is focusing Visual Analytics to take advantage of the company's predictive analytics history and capabilities. Visual Analytics 6.2 integrates predictive analytics and rapid predictive modeling (RPM) to do, among other things, segmentation, propensity modeling and forecasting. RPM makes it possible for models to be generated via sophisticated software that runs through multiple algorithms to find the best fit based on the data involved. This type of commodity modeling approach will likely gain significant traction as companies look to bring analytics into industrial processes and address the skills gap in advanced analytics. According to our BTI research, the skills gap is the biggest challenge facing big data analytics today, as participants identified staffing (79%) and training (77%) as the top two challenges.
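A rough sketch of the commodity-modeling idea is shown below: run several candidate algorithms over the same data and keep the best-fitting one by cross-validated score. This illustrates the general approach with scikit-learn on synthetic data, not SAS's RPM implementation.

```python
# Try several algorithms and select the one with the best cross-validated AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
          for name, model in candidates.items()}

best = max(scores, key=scores.get)
for name, auc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} AUC = {auc:.3f}")
print("Selected model:", best)
```

The appeal for business users is that the selection step is automated; the trade-off, as noted above, is that someone still needs to manage, track and periodically refresh whatever model wins.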

Visual Analytics’ web-based approach is likely a good long-term bet for SAS, as it marries data integration and cloud strategies. These factors, coupled with the company’s installed base and army of loyal users, give SAS a head start in redefining the world of analytics. Its focus on integrating visual analytics for data discovery, integration and commodity modeling approaches also provides compelling time-to-value for big data analytics. In specific areas such as marketing analytics, the ability to bring analytics into the applications themselves and allow data-savvy marketers to conduct a segmentation and propensity analysis in the context of a specific campaign can be a real advantage. Many of SAS’ innovations cannibalize its own markets, but such is the dilemma of any major analytics company today.

The biggest threat to SAS today is the open source movement, which offers big data analytic approaches such as Mahout and R. For instance, the latest release of R includes facilities for building parallelized code. While academics working in R often still build their models in a non-parallelized, non-industrial fashion, the current and future releases of R promise more industrialization. As integration of Hadoop into today’s architectures becomes more common, staffing and skillsets are often a larger obstacle than the software budget. In this environment the large services companies loom larger because of their role in defining the direction of big data analytics. Currently, SAS partners with companies such as Accenture and Deloitte, but in many instances these companies have split loyalties. For this reason, the lack of a large in-house services and education arm may work against SAS.

At the same time, SAS possesses blueprints for major analytic processes across different industries as well as horizontal analytic deployments, and it is working to move these to a parallelized environment. This may prove to be a differentiator in the battle versus R, since it is unclear how quickly the open source R community, which is still primarily academic, will undertake the parallelization of R’s algorithms.

SAS partners closely with database appliance vendors such as Greenplum and Teradata, with which it has had longstanding development relationships. With Teradata, it integrates into the BYNET messaging system, allowing for optimized performance between Teradata’s relational database and the LASR Analytic Server. Hadoop is also supported in the SAS reference architecture. LASR accesses HDFS directly and can run as a thin memory layer on top of the Hadoop deployment. In this type of deployment, Hadoop takes care of everything outside the analytic processing, including memory management, job control and workload management.

These latest developments will be of keen interest to SAS customers. Non-SAS customers who are exploring advanced analytics in a big data environment should consider SAS LASR and its MPP approach. Visual Analytics follows the "freemium" model that is prevalent in the market, and since it is web-based, any instances downloaded today can be automatically upgraded when the new version arrives in the summer. For the price, the tool is certainly worth a test drive for analysts. For anyone looking into such tools who foresees the need to include predictive analytics, it should be of particular interest.

Regards,

Tony Cosentino
VP and Research Director

Big data analytics is being offered as the key to addressing a wide array of management and operational needs across business and IT. But the label "big data analytics" is used in a variety of ways, confusing people about its usefulness and value and about how best to implement it to drive business value. The uncertainty this causes poses a challenge for organizations that want to take advantage of big data in order to gain competitive advantage, comply with regulations, manage risk and improve profitability.

Recently, I discussed a high-level framework for thinking about big data analytics that aligns with former Census Director Robert Groves’ ideas of designed data on the one hand and organic data on the other. This second article completes that picture by looking at four specific areas that constitute the practical aspects of big data analytics – topics that must be brought into any holistic discussion of big data analytics strategy. Today, these often represent point-oriented approaches, but architectures are now coming to market that promise more unified solutions.

Big data and information optimization: This is the intersection of big data analytics and traditional approaches to analytics. Analytics performed by database professionals often differs significantly from analytics delivered by line-of-business staffers who work in more flat-file-oriented environments. Today, advancements in in-memory systems, in-database analytics and workload-specific appliances provide scalable architectures that bring processing to the data source and allow organizations to push analytics out to a broader audience, but how to bridge the divide between the two kinds of analytics is still a key question. Given the relative immaturity of new technologies and the dominance of relational databases for information delivery, it is critical to examine how all analytical assets will interact with core database systems. As we move to operationalizing analytics on an industrial scale, current advanced analytical approaches break down because they require pulling data into a separate analytic environment and do not leverage advances in parallel computing. Furthermore, organizations need to determine how they can apply existing skill sets and analytical access paradigms, such as business intelligence tools, SQL, spreadsheets and visual analysis, to big data analytics. Our recent big data benchmark research shows that the skills gap is the biggest issue facing analytics initiatives, with staffing and training cited as obstacles in more than three-quarters of organizations.

Visual analytics and data discovery: Visualizing data is a hot topic, especially in big data analytics. Much of big data analysis is about finding patterns in data and visualizing them so that people can tell a story and give context to large and diverse sets of data. Exploratory analytics allows us to develop and investigate hypotheses, reduce data, do root-cause analysis and suggest modeling approaches for our predictive analytics. Until now the focus of these tools has been on descriptive statistics related to SQL or flat file environments, but now visual analytics vendors are bringing predictive capabilities into the market to drive usability, especially at the business user level. This is a difficult challenge because the inherent simplicity of these descriptive visual tools clashes with the inherent complexity that defines predictive analytics. In addition, companies are looking to apply visualization to the output of predictive models as well. Visual discovery players are opening up their APIs in order to directly export predictive model output.

New tools and techniques in visualization, along with the proliferation of in-memory systems, give companies the means of sorting through and making sense of big data. But exactly how these tools work, the types of visualizations that are important to big data analytics and how they integrate into our current big data analytics architecture are still key questions, as is the issue of how search-based data discovery approaches fit into the architectural landscape.

Predictive analytics: Visual exploration of data cannot surface all patterns, especially the most complex ones. To make sense of enormous data sets, data mining and statistical techniques can find patterns, relationships and anomalies in the data and use them to predict future outcomes for individual cases. Companies need to investigate the use of advanced analytic approaches and algorithmic methods that can transform and analyze organic data for uses such as predicting security threats, uncovering fraud or targeting product offers to particular customers.

Commodity models (a.k.a. good-enough models) are allowing business users to drive the modeling process. How these models can be built and consumed at the front line of the organization, with only basic oversight by a statistician or data scientist, is a key area of focus as organizations endeavor to bring analytics into the fabric of the organization. The increased load on back-end systems is another key consideration if the modeling follows a dynamic, software-driven approach. How these models are managed and tracked is yet another consideration. Our research on predictive analytics shows that companies that update their models more frequently have much higher satisfaction ratings than those that update them less often. The research further shows that in more than half of organizations, competitive advantage and revenue growth are the primary reasons predictive analytics is deployed.

Right-time and real-time analytics: It’s important to investigate the intersection of big data analytics with right-time and real-time systems and learn how participants are using big data analytics in production on an industrial scale. This usage guides the decisions that we make today around how to begin the task of big data analytics. Another choice organizations must make is whether to capture and store all of their data and analyze it on the back end, attempt to process it on the fly, or do both. In this context, event processing and decision management technologies represent a big part of big data analytics since they can help examine data streams for value and deliver information to the front lines of the organization immediately. How traditionally batch-oriented big data technologies such as Hadoop fit into the broader picture of right-time consumption still needs to be answered as well. Ultimately, as happens with many aspects of big data analytics, the discussion will need to center on the use case and how to address the time to value (TTV) equation.

Organizations embarking on a big data strategy must not fail to consider the four areas above. Furthermore, their discussions cannot cover just the technological approaches, but must include people, processes and the entire information landscape. Often, this endeavor requires a fundamental rethinking of organizational processes and questioning of the status quo.  Only then can companies see the forest for the trees.

Regards,

Tony Cosentino
VP and Research Director

SiSense gained a lot of traction last week at the Strata conference in San Jose as it broke records in the so-called 10x10x10 Challenge – analyzing 10 terabytes of data in 10 seconds on a $10,000 commodity machine – and earned the company's Prism product the Audience Choice Award. The Israel-based company, founded in 2005, has venture capital backing and is currently running at a profit, with customers in more than 50 countries, including marquee names such as Target and Merck. Prism, its primary product, provides the entire business analytics stack, from ETL capabilities through data analysis and visualization. From the demonstrations I've seen, the toolset appears relatively user-friendly, which is important because usability is the top buying criterion in 63 percent of organizations, according to our next-generation business intelligence benchmark research.

Prism comprises three main components: ElastiCube Manager, Prism BI Studio and Prism Web. ElastiCube Manager provides a visual environment with which users can ingest data from multiple sources, define relationships between data and perform transforms via SQL functions. Prism BI Studio provides a visual environment that lets customers build dashboards that link data and provide users with dynamic charts and interactivity. Finally, Prism Web provides web-based functionality for interacting with and sharing dashboards and for managing user profiles and accounts.

At the heart of Prism is the ElastiCube technology, which can query large volumes of data quickly. ElastiCube uses a columnar approach, which allows for fast compression. With SiSense, queries are optimized by the CPU itself; that is, the system decides in an ad hoc manner the most efficient way to use both disk and memory. Most other approaches on the market lean either toward a pure-play in-memory system or toward a columnar approach.
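A conceptual sketch of why a columnar layout suits analytic scans follows: an aggregate over one field touches a single contiguous column rather than every record, and low-cardinality columns repeat heavily, which is what makes them compression-friendly. This illustrates the general idea only, not ElastiCube's internals.

```python
# Row-oriented vs. column-oriented layout for a simple aggregation.
import numpy as np

n = 1_000_000
# Row-oriented: each record is a tuple of (customer_id, region_code, sales).
rows = [(i, i % 5, float(i % 100)) for i in range(n)]

# Column-oriented: one array per field, compact and compression-friendly
# (the low-cardinality region column repeats heavily, so it encodes well).
customer_id = np.arange(n)
region_code = customer_id % 5
sales = (customer_id % 100).astype(np.float64)

# Summing sales by scanning rows touches all three fields of every record...
row_total = sum(r[2] for r in rows)
# ...while the columnar version scans only the one array it needs.
col_total = sales.sum()
assert row_total == col_total
print("Total sales:", col_total)
```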

The company's approach to big data analytics reveals the chasm in understanding that exists between descriptive analytics and more advanced analytics such as we see with R, SAS and SPSS. When SiSense speaks of big data analytics, it means the ability to consume and explore very large data sets without predefining schemas. By doing away with schemas, the software does away with the need for a statistician, a data mining engineer or, for that matter, an IT person. Instead, organizations need analysts with a good understanding of the business, the data sources with which they are working and the basic characteristics of those data sets. SiSense does not do sophisticated predictive modeling or data mining, but rather root-cause and contextual analysis across diverse and potentially very large data sets.

SiSense today has a relatively small footprint and faces an uphill battle against entrenched BI and analytics players for enterprise deployments, but its easy-to-download-and-try approach will help it gain traction with analysts who are less loyal to the BI that IT departments have purchased. SiSense Vice President of Marketing Bruno Aziza, formerly with Microsoft's Business Intelligence group, and CEO Amit Bendov have been in the industry for a fair amount of time and understand this challenge. Their platform's road into the organization is more likely through business groups than through IT. SiSense's competitors are therefore Tableau and QlikView on the discovery side, while products like SAP HANA and Actuate's BIRT Analytics, with their in-memory plus columnar approaches, are the closest competitors in terms of the technological approach to accessing and visualizing large data sets. This ability to access large data sets in a timely manner without the need for data scientists can help overcome the top challenges BI users have in the areas of staffing and real-time access, which we uncovered in our recent business technology innovation research.

SiSense has impressive technology and is getting some good traction. It bears consideration by departmental and mid-market organizations that need to perform analytics across growing volumes of data without relying on an IT department to support them.

Regards,

Tony Cosentino
VP and Research Director

LogiXML has been around for more than a decade, but has seen particularly robust growth in the past year. Recent announcements show the company with better than 100-percent year-over-year growth, driven by a 97 percent license renewal rate and new customer growth in SMB, departmental and OEM deployments. The 158-percent growth for the embedded analytics group for the fourth quarter on a year-over-year basis was particularly strong.

LogiXML categorizes its products into Logi Info, its flagship product targeted primarily at IT and developers; Logi Ad Hoc, targeted as a self-service tool for business users; and a suite of the two bundled with ETL capabilities. Just a few weeks ago, LogiXML released Logi Info v11. The product offers enhanced visualization capabilities, making analytics easier and more robust for end users. Building on an already solid product, the changes in Logi Info seem to be more evolutionary than revolutionary, and appear to be focused on keeping up with functionality necessary to compete in today's BI market. With that said, the core product has advantages as simple, highly embeddable software. Its light footprint and ease of use are competitive advantages, allowing companies to quickly and easily put business intelligence capabilities into the hands of end users, as well as into the applications themselves. Usability is by far the number one criterion for companies choosing a business intelligence application, according to our benchmark research on next-generation business intelligence.

LogiXML's ease of use for developers enables companies to achieve compelling time to value by embedding intelligence directly into applications, and it takes advantage of the market trend toward agile development. As I've written on numerous occasions and most recently in Big Data Faces a Chasm of Misunderstanding, the key challenge for organizations is overcoming the lack of an analytics-driven culture. This challenge is different for IT and for business users, as IT focuses on things such as standards, security and information management, while business users focus more on gaining specific insights for business decisions. My colleague Mark Smith has also written about this environment and specifically about how BI tools are failing business. LogiXML's ability to put BI directly into applications in an agile way allows the company to counter many of these arguments, since applications fit directly into the role-based decision environments that drive companies today.

Interestingly, the company is said to be rebranding in 2013. Rebranding is always a large effort and often pays off well for a consumer brand or a broad business brand, but given LogiXML’s size and core advantage in the OEM market, the return on such a move is questionable. Perhaps LogiXML is looking to drive its brand into the business user side of the equation with an Intel Inside-type of strategy, or target business users themselves. Either way, the end-game here is unclear given a limited data discovery portfolio and no clear value path to sell directly to business end users. For LogiXML, this path is still primarily through IT and secondarily through business. Furthermore, many new business intelligence and analytic brands coming on the market also target the business end user, so the company will need to make a significant investment in basic awareness-building in order to gain parity with the current brand equity.

Branding aside, the challenge to the company's continued growth trajectory is twofold. First, it is reaching an inflection point in its number of employees. As a company reaches about 200 employees, which LogiXML is approaching now, company culture often transforms itself. With solid leadership and a strong roadmap, LogiXML will be able to overcome this first challenge. The second challenge is that it competes in a highly competitive segment against all the major stack players as well as the more cost-oriented open source players. The expansion of the market from business intelligence to business analytics, including data discovery and predictive analytics, introduces a host of new players and dilutes the broader opportunity as customers settle into their approaches.

In sum, LogiXML is often able to provide the path of least resistance and solid time to value for companies looking to quickly and effectively roll out business intelligence capabilities. In particular, as SaaS providers and OEM relationships continue to gain steam, the embeddable nature of LogiXML's platform and the ability to construct applications at an elemental level give the company a good market position from which to advance. Like many others in the increasingly crowded BI space, it will need to capitalize on its current market momentum and solidify its position before others start to invade more of its addressable space. Organizations looking for an integrated business intelligence platform and tools should assess the improvements in this latest release of LogiXML.

Regards,

Tony Cosentino

VP and Research Director
