
Actuate recently announced BIRT Analytics Version 4.2, part of its portfolio of business intelligence software. The new release places several techniques used by analytics professionals behind a user-friendly interface that does not require advanced knowledge of statistics. Beyond the techniques themselves, release 4.2 focuses on guiding users through processes such as campaign analytics and targeting.

With this release, Actuate builds on what I have already assessed in BIRT Analytics and extends support for more advanced analytics to business functions such as marketing. For these users, a handful of analytical techniques cover the majority of use cases. Our benchmark research into predictive analytics shows that classification trees (used by 69% of participants), regression techniques (66%), association rules (49%) and k-nearest neighbor algorithms (36%) are the techniques used most often. While BIRT Analytics uses Holt-Winters exponential smoothing for forecasting rather than linear regression, and k-means for clustering, the key point is that it addresses the most important uses in the organization through a nontechnical user interface. Techniques such as regression or supervised learning algorithms increase complexity, and such analysis often requires formidable statistical knowledge from the user. In addition to the techniques mentioned above, BIRT Analytics reduces complexity by offering Venn diagram set analysis, a geographic mapping function, and the ability to compare attributes using z-score analysis. A z-score is a standardized unit of measure, relative to the model's mean (mu) and standard deviation (sigma), that represents how far from the mean a particular measurement lies. The higher the absolute value of the z-score, the more significant the attribute. This analysis is a simple way of showing things such as the likelihood that a particular email campaign segment will respond to a particular offer; such knowledge helps marketers understand what drives response rates and build lift into a marketing campaign. With this analytical tool set, the marketer or front-line analyst can dive directly into cluster analysis, market basket analysis, next-best-offer analysis, campaign analysis, attribution modeling, root-cause analysis and target marketing analysis in order to affect outcome metrics such as new customer acquisition, share of wallet, customer loyalty and retention.
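
To make the z-score calculation concrete, here is a minimal sketch in Python, assuming a handful of hypothetical email campaign segments and their response rates; it illustrates the arithmetic described above, not BIRT Analytics' implementation.

```python
import statistics

# Hypothetical response rates (%) for email campaign segments
response_rates = {"segment_a": 2.1, "segment_b": 4.8, "segment_c": 1.9,
                  "segment_d": 3.2, "segment_e": 7.5}

mu = statistics.mean(response_rates.values())      # model mean
sigma = statistics.stdev(response_rates.values())  # model standard deviation

# z = (x - mu) / sigma: how many standard deviations a segment sits from the mean
z_scores = {seg: (rate - mu) / sigma for seg, rate in response_rates.items()}

# Segments with the largest absolute z-scores stand out most from the average
for seg, z in sorted(z_scores.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{seg}: z = {z:+.2f}")
```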

Actuate also includes the iWorkflow application in release 4.2. It enables users to set business rules based on constantly calculated measurements and their variance relative to optimal KPI values. If a value falls outside of the critical range, the system can start an automated process or send a notification prompting manual intervention. For instance, if an important customer satisfaction threshold is not being met, the system can notify a customer experience manager to take action that corrects the situation. In the same way, the iWorkflow tool allows users to preprogram distribution of analytical results across the organization based on particular roles or security criteria. As companies work to link market insights with operational objectives, Actuate ought to integrate more tightly with applications from companies such as Eloqua, Marketo and salesforce.com. Today this integration has to be done manually, which prevents the automation of closed-loop workflows in areas such as campaign management and customer experience management. Once this is done, however, the tool will become more valuable to users. The ability to embed analytics into the workflows of the applications themselves is the next challenge for vendors of tools for visual discovery and data discovery.
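
As a rough illustration of the kind of threshold rule iWorkflow supports, the sketch below checks a KPI against an acceptable range and triggers a notification when the value falls outside it; the range, function names and notification path are hypothetical and are not Actuate's API.

```python
def check_kpi(value, lower=70.0, upper=100.0, notify=print):
    """Hypothetical rule: escalate when a customer satisfaction score
    falls outside its acceptable range; otherwise do nothing."""
    if not (lower <= value <= upper):
        notify(f"KPI out of range ({value}); notifying customer experience manager")
        return "escalated"
    return "ok"

# A score of 64.5 breaches the lower bound, so the notification path fires
print(check_kpi(64.5))
```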

Other enhancements to BIRT Analytics address data loading and data preparation. The data loader adds a drag-and-drop capability for mapped fields, can incorporate both corporate data and personal data from the desktop and automates batch loading. New preprocessing techniques include scaling approaches and data mapping. The abilities to load data into the columnar store from different information sources and to manipulate the data in databases are important areas that Actuate should continue to develop. Information sources will always be more important than the tools themselves, and data preprocessing is still where most analysts spend the bulk of their time.

BIRT Analytics has been overlooked by many companies in the United States since the roots of the company are in Spain, but the technology offers capabilities on par with many of the leaders in the BI category, and some of its capabilities are even more advanced. According to our business technology innovation benchmark research, companies are instituting new technology because of bottom-line considerations such as improvements in business initiatives (60%) and in processes (57%). Furthermore, usability is the top evaluation criterion for business intelligence tools in almost two-thirds (64%) of companies, according to our research on next-generation business intelligence. These are among the reasons we are seeing mass adoption of discovery tools such as BIRT Analytics. Those looking into discovery tools, and especially marketing departments that want to put a portfolio of analytics directly into the hands of the marketing analyst and the data-savvy marketer, should consider BIRT Analytics 4.2.

Regards,

Tony Cosentino

VP and Research Director

PivotLink is a cloud-based provider of business intelligence and analytics that serves primarily retail companies. Its flagship product is Customer PerformanceMETRIX, which I covered in detail last year. Recently, the company released an important update to the product, adding attribution modeling, a form of advanced analytics that allows marketers to optimize spending across channels. For retailers these capabilities are particularly important. The explosion of purchase channels introduced by the Internet and competition from online retailers are forcing a more analytic approach to marketing as organizations try to decide where marketing funds can be spent to best effect. Our benchmark research into predictive analytics shows that achieving competitive advantage is the number-one reason for implementing predictive analytics, chosen by about two-thirds (68%) of all companies and by even more retail organizations.

Attribution modeling applied to marketing enables users to assign relative monetary and/or unit values to different marketing channels. With so many channels among which to spread their limited resources, it is difficult for marketers to defend the dollars they allot to a channel if they cannot provide analysis of the return on the investment. While attribution modeling has been around for a long time, the explosion of channels that creates what PivotLink calls omnichannel marketing is a relatively recent phenomenon. In the past, marketing spend focused on just a few channels such as television, radio, newspapers and billboards. Marketers modeled spending through a type of attribution called market mix models (MMM). These models are built around aggregate data, which is adequate when there are just a few channels to calibrate but breaks down in the face of a broader environment. Furthermore, the MMM approach does not allow for sequencing of events, which is important in understanding how to direct spending to impact different parts of the purchase funnel. Newer data sources combined with attribution approaches like the ones PivotLink employs increase visibility into consumer behavior at the individual level, which enables a more finely grained approach. While market mix models will persist when only aggregate data is available, the collection of data in multiple forms (as with big data) will expand the use of individual-level models.

PivotLink’s approach allows marketers and analysts to address an important part of attribution modeling: how credit is assigned across channels. Until now, the first click and the last click typically have been given the greatest weight. The problem is that the first click can give undue weighting to the higher part of the funnel and the last click undue weighting to the lower end. For instance, customers may see a display advertisement and become aware of an offer, but later do a search and buy shortly after. In this instance, the last-click model would likely give too much credit to the search and not enough credit to the display advertisement. While PivotLink does enable assignment by first click and last click (and by equal weighting as well), the option of custom weighting is the most compelling. After choosing that option from the drop-down menu, the marketer sees a slider in which weights can be assigned manually. This is often the preferred method of attribution in today’s business environment because it provides more flexibility and often better reflects the reality of a particular category; however, domain expertise is necessary to apportion the weights wisely. To address this challenge, the PivotLink software offers guidance based on industry best practices on how to weight the credit assignment.
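
To make the idea of custom credit assignment concrete, here is a minimal sketch assuming a simple position-based scheme: fixed weights for the first and last touches, with any middle touches sharing the remainder. The channel names and weights are hypothetical, and this is not PivotLink's algorithm.

```python
def assign_credit(touchpoints, first_w=0.4, last_w=0.4):
    """Split one conversion's credit across an ordered list of channels:
    fixed shares for the first and last touch, the rest spread evenly
    over the middle touches (a hypothetical weighting scheme)."""
    if len(touchpoints) == 1:
        return {touchpoints[0]: 1.0}
    if len(touchpoints) == 2:
        total = first_w + last_w
        return {touchpoints[0]: first_w / total, touchpoints[1]: last_w / total}
    middle_w = (1.0 - first_w - last_w) / len(touchpoints[1:-1])
    credit = {}
    for i, channel in enumerate(touchpoints):
        w = first_w if i == 0 else last_w if i == len(touchpoints) - 1 else middle_w
        credit[channel] = credit.get(channel, 0.0) + w
    return credit

# The journey from the example above: display ad -> search -> purchase via email
print(assign_credit(["display", "search", "email"]))
# {'display': 0.4, 'search': 0.2, 'email': 0.4}
```

Unlike a pure last-click model, this keeps meaningful credit on the display advertisement at the top of the funnel while still rewarding the closing touch.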

Being based in the cloud, PivotLink is able to maintain an aggressive release cycle. Rapid product development is important for the company as its competitive landscape becomes more crowded, with on-premises analytics providers porting their applications to the cloud and larger vendors looking at the midmarket for incremental growth. PivotLink can counter this by continuing to focus on usability and on analytics applications for vertical industries. Attribution modeling is an important feature, and I expect to see PivotLink roll out other compelling analytics as well. Retailers looking for fast time-to-value in analytics and an intuitive system that requires neither a statistician nor IT involvement should consider PivotLink.

Regards,

Tony Cosentino

VP and Research Director

Datameer, a Hadoop-based analytics company, had a major presence at the recent Hadoop Summit, led by CEO Stefan Groschupf’s keynote and panel appearance. Besides announcing its latest product release, which is an important advance for the company and its users, Datameer’s outspoken CEO put forth contrarian arguments about the current direction of some of the distributions in the Hadoop ecosystem.

The challenge for the growing ecosystem surrounding Hadoop, the open source processing paradigm, has been in accessing data and building analytics that serve business uses in a straightforward manner. Our benchmark research into big data shows that the two most pressing challenges to big data analytics are staffing (79%) and training (77%). This so-called skills gap is at the heart of the Hadoop debate since it often takes someone with not just domain skills but also programming and statistical skills to derive value from data in a Hadoop cluster. Datameer is dedicated to addressing this challenge by integrating its software directly with the various Hadoop distributions to provide analytics and access tools, which include visualization and a spreadsheet interface. My coverage of Datameer from last year covers this approach in more detail.

At the conference, Datameer made the announcement of version 3.0 of its namesake product with a celebrity twist. Olympic athlete Sky Christopherson presented a keynote telling how the U.S. women’s cycling team, a heavy underdog, used Datameer to help it earn a silver medal in London. Following that introduction, Groschupf, one of the original contributors to Nutch (Hadoop’s predecessor), discussed features of Datameer 3.0 and what the company is calling “Smart” analytics, which include a variety of advanced analytic techniques such as clustering, decision trees, recommendations and column dependencies.

Our benchmark research into predictive analytics shows that classification trees (used by 69% of participants) and association rules (49%) are two of the techniques used most often; both are included in the Datameer product. (Note: Datameer utilizes k-means, an unsupervised clustering approach, rather than k-nearest neighbor, which is a supervised classification approach.) Both on stage and in a private briefing, company spokespeople downplayed the specific techniques in favor of their usability and examples of business use for each of them. Clustering of Hadoop data allows marketing and business analytics professionals to view how data groups together naturally, while decision trees help analysts see how sets group and break down into subsets from a linear perspective rather than from a framed Venn diagram perspective. In this regard clustering is more of a bottom-up approach and decision trees more of a top-down approach. For instance, in a cluster analysis, the analyst combines multiple attributes at one time to understand the dimensions on which the data group. This can inform broad decisions about strategic messaging and product development. In contrast, with a decision tree, one can look, for instance, at all sales data to see which industries are most likely to buy a product, then follow the tree to see what size of companies within an industry are the best prospects, and then which subset of buyers within those companies are the best targets.
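
As a rough illustration of the bottom-up clustering described here, the sketch below runs k-means on a few made-up customer attributes using scikit-learn; the data, feature choices and library are assumptions for illustration, not Datameer's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer attributes: [annual spend in $K, store visits per month]
customers = np.array([[12, 1], [95, 8], [88, 7], [15, 2], [110, 9], [9, 1]])

# Let the data group itself into two natural clusters (the bottom-up view)
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)

print(model.labels_)           # which cluster each customer falls into
print(model.cluster_centers_)  # the average profile of each cluster
```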

Datameer’s column dependencies can show analysts the relationships between different columns. The output appears much like a correlation matrix, but uses a technique called mutual information. The key benefit of this technique over a traditional correlation approach is that it allows comparison between different types of variables, such as continuous and categorical variables. However, there is a trade-off in usability: The numeric output is not represented by the correlation coefficient with which many analysts are familiar. (I encourage Datameer to give analysts a quick reference of some type to help interpret the numbers associated with this lesser-known output.) Once the output is understood, it can be useful in exploring specific relationships and testing hypotheses. For instance, a company can test the hypothesis that it is more vertically focused than competitors by looking at industry and deal close rates. If there is no relationship between the variables, the hypothesis may be dismissed and a more horizontal strategy pursued.
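
To illustrate the kind of hypothesis test described, here is a minimal sketch that computes mutual information between two categorical columns, industry and deal outcome, using scikit-learn; the data and library choice are assumptions, not Datameer's column-dependencies implementation.

```python
from sklearn.metrics import mutual_info_score

# Hypothetical deals: the account's industry and whether the deal closed
industry = ["retail", "retail", "finance", "finance", "health",
            "health", "retail", "finance", "health", "retail"]
outcome  = ["won",    "won",    "lost",    "lost",    "lost",
            "won",    "won",    "lost",    "lost",    "won"]

# Mutual information is non-negative; values near zero suggest the columns
# are unrelated, so a vertical-focus hypothesis would find little support
mi = mutual_info_score(industry, outcome)
print(f"mutual information: {mi:.3f}")
```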

The other technique Datameer spoke of is recommendation, also known as next-best-offer analysis; it is a relatively well-known technique popularized by Amazon and other retailers. Recommendation engines can help marketing and sales teams increase share of wallet through cross-sell and up-sell opportunities. While none of these four techniques is new to the world of analytics, the novelty is that Datameer allows this analysis directly on Hadoop, which incorporates new forms of data including Web behavior data and social media data. While many in the Hadoop ecosystem focus on descriptive analysis related to SQL, Datameer’s foray into more advanced analytics pushes the Hadoop envelope.

Aside from the launch of Datameer 3.0, Groschupf and his team used Hadoop Summit to espouse the position that the SQL approach of many Hadoop vendors is a mistake. The crux of the argument is that Hadoop is a sequential access technology (much like a magnetic cassette tape) in which a large portion of the data must be read before the correct data can be pulled off the disk. Groschupf argues that this is fundamentally inefficient and that current MPP SQL approaches do a much better job of processing SQL-related tasks. To illustrate the difference, he characterized Hadoop as a freight train and an analytic appliance database as a Ferrari; each, of course, has its proper uses. Customers thus should decide what they want to do with the data from a business perspective and then choose the appropriate technology.

This leads to another point Groschupf made to me: that the big data discussion is shifting away from technical details toward a business orientation. In support of this point, he showed me a comparison of the Google search terms “big data” and “Hadoop.” The latter was more common in the past few years, when it was almost synonymous with big data, but now generic searches for big data are more common. Our benchmark research into business technology innovation shows a similar shift in buying criteria, with about two-thirds (64%) of buyers naming usability as the most important priority. A number of Ventana Research blogs, including this one, have focused on the trend toward outcome-based buying and decision-making.

For organizations curious about big data and what they can do to take advantage of it, Datameer can be a low-risk place to start exploring. The company offers a free download version of its product so you can start looking at data immediately. The idea of time-to-value is critical with big data, and this is a key value proposition for Datameer. I encourage users to test the product with an eye toward uncovering interesting data that was never available for analysis before. This will help build the business case for big data, especially in a bootstrap funding environment where money, skills and time are short.

Regards,

Tony Cosentino

VP and Research Director

Hadoop Summit is the biggest event on the West Coast centered on Hadoop, the open source technology for large-scale data processing. The conference organizers, Hortonworks, estimated that more than 2,400 people attended, which if true would be double-digit growth from last year. Growth on the supplier side was even larger, which indicates the opportunity this market represents. Held in Silicon Valley, the event attracts enterprise customers, industry innovators, thought leaders and venture capitalists. Many announcements were made – too many to cover here. But I want to comment on a few important ones and explain what they mean to the emerging Hadoop ecosystem and the broader market.

Hortonworks is a company spun off by the architects of Yahoo’s Hadoop implementation. Flush with $50 million in new venture funding, the company announced the preview distribution of Apache Hadoop 2.0. This represents a fundamental shift away from the batch-only approach to processing big data of the previous generation. In particular, YARN (Yet Another Resource Negotiator; the Yahoo roots are evident in this name) promises to solve the challenge of multiple workloads running on one cluster. YARN replaces the previous generation’s MapReduce job scheduler. In that system, a MapReduce job sees itself as the only tenant of HDFS, the Hadoop file system, and precludes any other workload. In YARN, MapReduce becomes a client of the resource manager, which can allocate resources according to differing workload needs. According to Bob Page, product manager at Hortonworks, and Shaun Connolly, VP of corporate strategy, this mixed-workload capability opens the door to additional ISV plug-ins, including advanced analytics and stream processing. Integrating workloads is an important step forward for advanced analytics; our benchmark research into predictive analytics shows that the biggest challenge to predictive analytics for more than half (55%) of companies is integrating it into the enterprise architecture. Furthermore, stream processing opens the door to a variety of uses in operational intelligence, such as fraud prevention and network monitoring, that have not been possible with Hadoop. The company plans general availability of Apache Hadoop 2.0 in the fall. Beyond YARN, the new version of Hadoop will bring Hive on Tez for SQL query support, high availability, snapshots, disaster recovery and better rolling upgrade support. Hortonworks simultaneously announced a certification program that allows application providers to be certified on the new version. This next major release is a significant step toward the enterprise readiness of Hadoop, and Hortonworks, which depends on the open source releases for what it commercializes and licenses to customers, will now be better able to compete against rivals that have built their own proprietary extensions to Hadoop as part of their offerings.

As noted, various vendors announced their own Hadoop advances at the summit. For one, Teradata continues to expand its Hadoop-based product portfolio and its Unified Data Architecture, which I covered recently. The company introduced the Teradata Appliance for Hadoop as well as support for Hadoop on Dell’s commodity hardware. While another Hadoop appliance in the market may not be big news, the commitment of Teradata to the Hadoop community is important. Its professional services organization works in close partnership with Hortonworks, and the two will now offer full scoping and integration services along with the current support services. This enables Teradata to maintain its trusted-advisor role within accounts while Hortonworks can take advantage of a robust services and account management structure to help create new business.

Quentin Clark, Microsoft’s VP for SQL Server, gave a keynote address acknowledging the sea change that is occurring as a result of Hadoop. But he emphasized Microsoft’s entrenched position with Excel and SQL Server and the ability to use them alongside Hortonworks for big data. It’s a sound if unfortunate argument that spreadsheets are not going away soon; in our latest benchmark research into spreadsheets, 56 percent of participants said that user resistance is the biggest obstacle to change. At the same time, Microsoft has challenges in big data, such as providing a truly usable interface beyond Microsoft Excel, which I recently discussed. At Hadoop Summit, Microsoft reiterated announcements already made in May that were covered by my colleague. They included Hortonworks Data Platform for Windows, which makes Windows a first-class operating system for Hadoop alongside SQL Server, and HDInsight running on Azure, Microsoft’s cloud platform. The relationship will help Hortonworks overcome objections about the security and manageability of its platform, while Microsoft should benefit from increased sales of its System Center, Active Directory and Windows software. Microsoft also announced the HDP Management Packs for System Center, which make Hadoop easier to manage on Windows or Linux and utilize the Ambari API for integration. Perhaps the most interesting demonstration from Microsoft was the preview of Data Explorer. This application provides text-based search across multiple data sources, after which the system can import the various data sources automatically, independent of their type or location. Along with companies like Lucidworks (which my colleague Mark Smith recently discussed) and Splunk, Microsoft is advancing in the important area of information discovery, one of the four types of big data discovery Mark follows.

Datameer made the important announcement of version 3.0 of its namesake flagship product with a celebrity twist. Olympic athlete Sky Christopherson presented a keynote telling how the U.S. women’s cycling team, a heavy underdog, used Datameer to help it earn a silver medal in London. Following that, Stefan Groschupf, CEO of Datameer and one of the original contributors to Nutch (Hadoop’s predecessor), discussed advances in 3.0, which include a variety of advanced analytic techniques such as clustering, decision trees, recommendations and column dependencies. The ability to do these types of advanced analytics and visualize the data natively on Hadoop is not currently available in the market. My coverage of Datameer from last year can be found here.

Splunk announced Hunk, a tool that integrates exploration and visualization in Hadoop and will enable easier access for ‘splunking’ Hadoop clusters. In this tool, Splunk introduces a virtual indexing technology in which indexing occurs in an ad-hoc fashion as it is fed into a columnar data store. This enables analysts to test hypotheses through a “slice and dice” approach once the initial search discovery phase is completed. Sanjay Meta, VP of marketing for Splunk, explained to me how such a tool enables faster time-to-value for Hadoop. Currently there are multiple requests for data resting in Hadoop, but it takes a data scientist to access them. By applying Splunk’s tools to the Hadoop world, the data scientists can move on to more valuable tasks while users trained in Splunk can register and address such requests. Hunk is still somewhat technical in nature and requires specific Splunk training, but the demonstration showed a no-code approach that through the user-friendly Splunk interface returns robust descriptive data in visual form, which can then be worked with in an iterative fashion. My most recent analysis of Splunk can be found here.

Pentaho also made several announcements. The biggest in terms of market impact is that it has become the sole ETL provider for Rackspace’s Hadoop-as-a-service initiative, which aims to deliver a full big data platform in the cloud. Pentaho also announced the Pentaho Labs initiative, which will be the R&D arm for the Pentaho open source community. This move should lift both the enterprise and the community, especially in the context of Pentaho’s recent acquisition of Webdetails, a Portuguese analytics and visualization company active in Pentaho’s open source community. The company also announced the Adaptive Big Data Layer, which provides a series of plug-ins across the Hadoop ecosystem including all of the major distributions. And a new partnership with Splunk enables read/write access to the Splunk data fabric. Pentaho also is providing tighter integration with MongoDB (including its aggregation framework) and the Cassandra DBMS. Terilyn Palanca, director of Pentaho’s big data product marketing, and Rebecca Shomair, corporate communications director, made the point that companies need to hedge their bets within the increasingly divergent Hadoop ecosystem and that Pentaho can help them reduce risk in this regard. Mark Smith’s most recent analysis of Pentaho can be found here.

In general, what struck me most about this exciting week in the world of Hadoop was the divergence of philosophies and incentives at work in the market. The distributions of MapR, Hortonworks, Cloudera and Pivotal (Greenplum) continue to compete for dominance with varying degrees of proprietary and open source approaches. Teradata is also becoming a subscription reseller of Hortonworks HDP to provide even more options to its customers. Datameer and Platfora are taking pure-play integrated approaches, while Teradata, Microsoft and Pentaho are looking at ways to marry the old with the new by guarding current investments and adding new Hadoop-based capabilities. Another thing that struck me was that no business intelligence vendors had a presence other than the visual discovery provider Tableau. This is curious given that many vendors at the summit talked about responding to the demands of business users for easier access to Hadoop data. This is something our research shows to be a buying trend in today’s environment: Usability is the most important buying criterion in almost two out of three (64%) organizations. Use cases say a lot about usability, and the number of people talking on stage and off about their Hadoop experiences increased dramatically this year. I recently wrote about how much has changed in the use of big data in one year, and the discussions at Hadoop Summit confirmed my thoughts. Hortonworks and its ecosystem of partners now have a greater opportunity to meet a new generation of big data and information optimization needs. At the same time, we are still in the early stages of turning this technology to business use, which requires a focus on use cases and on gaining benefits on a continuous basis. Disruptive innovations often take decades to be fully embraced by organizations and society at large. Keep in mind that it was only in December 2004 that Google Labs published its groundbreaking paper on MapReduce.

Regards,

Tony Cosentino

VP and Research Director

Roambi, a supplier of mobile analytics and visualization software, announced the release of a cloud-based version of its product, which allows the company to move beyond the on-premises approach where it is established and into the hands of more business users. Roambi Business enables users to automate data import, create models and refresh data on demand. Furthermore, the company announced a North America Partner Program along with the cloud release; this will encourage ISVs and solution partners to develop for the new product. The move to the cloud is a big one for the company, giving it access to a new market in which companies need to deliver business intelligence (BI) to their increasingly mobile workforces.

The challenge of mobility and operating across smartphones and tablets is coming to the forefront of the BI industry, as indicated by our business technology innovation benchmark research, in which mobile technology is ranked as the second-most important innovation (by 15%) in a virtual tie with collaboration; analytics is the only higher ranked innovation (at 39%). With tablet sales likely to surpass PC sales in just a couple of years, the trend toward mobile devices will continue to gather momentum, and vendors of BI applications will need to provide them for these platforms. One way companies and vendors alike are addressing this challenge is to move applications to the cloud.

Roambi was one of the first to embrace industry trends in data visualization and mobile BI, but until now it focused on larger corporations and on integrating with on-premises approaches to BI. The company has been successful with deployments in 10 of the Fortune 50 and in eight of the 10 largest pharmaceutical companies. This presence in industries such as pharmaceuticals makes sense in that many of the early uses of mobile BI have been in retail and in sales specifically.

Roambi first caught our attention for having a user-friendly approach to BI that helps mobile workers improve productivity by accessing various forms of information through a handheld device. In particular, Microsoft Excel and BI applications can be ported onto mobile devices in the form of report visualizations and flipped through using the native swipe gestures of Apple iOS devices. The broad access of a cloud platform extends the firm’s focus on usability, which our benchmark research into next-generation BI finds to be the top evaluation criterion for 64 percent of potential customers. My colleague Mark Smith has written more on the user-focused nature of Roambi’s products.

Roambi Business is multitenant software as a service hosted on Amazon Web Services. It offers a no-integration API approach to data movement. That is, the API utilizes REST protocols for data exchange. When a request is sent from the Roambi application, responses are returned in JavaScript Object Notation (JSON). (The exception is that results for particularly large request sets are returned in IEEE 754 format.) In this way, the API acts as a conduit to transfer data from any JSON-compliant system, including Excel, salesforce.com, Google spreadsheets and BI applications. Once data is transferred into the Roambi file system, the publishing tool allows users to quickly turn the data into a user-friendly form that is represented on the mobile device. As well, the product empowers an administrator to define user rights, and single sign-on is provided through the industry-standard SAML 2.0. Security, a big concern for mobile BI, is addressed in a number of ways including remote wipe, file expiration, application passcodes and file recall.
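
To give a sense of the kind of JSON exchange described, here is a minimal, purely hypothetical sketch of pushing a small data set to a REST endpoint with Python's requests library; the URL, field names and authentication token are placeholders, not Roambi's documented API.

```python
import json
import requests  # third-party HTTP library

# Hypothetical payload: a small table destined for a mobile dashboard
payload = {
    "title": "Weekly Sales",
    "columns": ["region", "revenue"],
    "rows": [["East", 125000], ["West", 98000], ["Central", 143000]],
}

# Placeholder endpoint and token -- not Roambi's actual API
response = requests.post(
    "https://api.example.com/v1/datasets",
    headers={"Authorization": "Bearer <token>",
             "Content-Type": "application/json"},
    data=json.dumps(payload),
    timeout=30,
)
response.raise_for_status()
print(response.status_code)
```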

For the cloud version of the software, Roambi has redesigned the entire publishing engine to be HTML5-compliant but still iOS-oriented in that it takes advantage of native iOS gestures. The redesign of the publishing tool set extends to Roambi Flow, an application that enables power users to assemble and group information for presentations, publications or applications. (An example of such output is a briefing book or a digital publication.) This feature is particularly important since a specific data-driven storyline distributed to a group of users often is needed to produce a decision. Currently, the cumbersome cut-and-paste process revolves around data and content produced in Excel and Word and put into PowerPoint for ultimate dissemination through an organization.

A couple of features are not yet available in the cloud edition. Push notifications are important, and with the new architecture I expect to see them soon. According to our next-generation business intelligence benchmark research on mobile BI, alerts and notifications are the most important capability (ranked so by 42% of organizations), so they should be a big part of mobile BI. And while some interactivity with visualizations is not available in the first release of the cloud edition, the flow of reviewing data is simple and it is easy to examine metrics.

Roambi will face strong competition from other BI vendors aggressively improving their own mobile BI offerings. Vendors of visual discovery software, of traditional BI and of integrated stacks each have a unique position that takes advantage of features like data mashup and broad integration capabilities. The battle for this market will be won only over time, but Roambi has a unique position of its own in terms of ease of use and time-to-value. In fact, the company’s strategic focus on design and the user experience coincides with current business priorities and buying trends in the market.

With Roambi’s flagship product now in the cloud, technology that previously was configured mostly by teams in larger companies is easily available to anyone, including ISVs. Furthermore, the cloud computing approach allows easier access for the business, requires fewer technical resources and reduces the potential financial impact. For companies looking to deploy business intelligence and analytics quickly to mobile devices while providing ease of use and the ability to communicate not only graphically but with a storyline approach, Roambi is worth a look to see how simple business intelligence can be on mobile technology.

Regards,

Tony Cosentino

VP and Research Director
