
Banking giant JP Morgan raised eyebrows in 2012 when it revealed that it had lost a substantial amount of money because of poorly conceived trades it had made for its own account. The losses raised questions about the adequacy of its internal controls, and broader questions about the need for regulations to reduce systemic risk to the banking system. At the heart of the matter were the transactions made by “the London Whale,” the name counterparties gave to JP Morgan’s trading operation in the City because of the outsized bets it was making. Until that point, JP Morgan’s Chief Investment Office had been profitable and apparently well controlled. In the wake of the discovery of the large losses racked up by “the Whale,” JP Morgan launched an internal investigation into how it happened, and released the findings of the task force established to review the losses and their causes [PDF document].

One of the key points that came out of the internal investigation was the role of desktop spreadsheets in creating the mess. “The Model Review Group noted that the VaR [Value at Risk] computation was being done on spreadsheets using a manual process and it was therefore ‘error prone’ and ‘not easily scalable.’” The report also cited as an inherent operational issue the process of copying and pasting data into analytic spreadsheets “without sufficient quality control.” This form of data entry in any critical enterprise function is a hazard because the data sources themselves may not be controlled. After the fact it is impossible to positively identify the source of the data, and (unless specifically noted) properties of that data, such as its time stamps, will also be indeterminate. Both provenance and timing are sensitive issues when marking portfolios to market (that is, determining their value for periodic disclosure purposes), especially when the portfolio contains thinly traded securities.
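The computation at the center of the report, value at risk, is at its core a fairly simple statistical summary of position and market data; the hazard lies in how that data and the surrounding formulas are maintained. For reference only, here is a minimal sketch of a historical-simulation VaR calculation in Python. The figures and the 95% confidence level are invented for the example, and this is not a representation of JP Morgan’s model.

```python
import numpy as np

# Invented one-day portfolio P&L in $ millions. In the process the report
# describes, figures like these were pasted into the spreadsheet by hand,
# which is exactly where untracked data-entry errors can creep in.
daily_pnl = np.array([-1.2, 0.4, 0.9, -2.1, 0.3, 1.1, -0.7, 0.6, -1.5, 0.8])

confidence = 0.95  # a common convention for one-day VaR

# Historical-simulation VaR: the loss at the chosen percentile of observed P&L.
var_95 = -np.percentile(daily_pnl, (1 - confidence) * 100)

print(f"One-day {confidence:.0%} VaR: ${var_95:.2f} million")
```

The arithmetic is trivial; the risk comes from the manual, uncontrolled way the inputs and formulas around it are assembled and changed.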

The report notes, “Spreadsheet-based calculations were conducted with insufficient controls and frequent formula and code changes were made.” In particular, in response to a recommendation by the Internal Audit group for greater clarity and documentation of how securities prices were arrived at, the individual responsible for implementing the recommendation modified a spreadsheet in a way that inadvertently introduced material calculation errors, which slipped through because the modifications were not subject to a vetting process. Absent a thorough audit of a spreadsheet, these sorts of errors are difficult to spot.

This was not the first time a financial institution has incurred serious losses because of faulty spreadsheets. One of the first big debacles took place more than 20 years ago when a faulty spreadsheet caused First Boston to take a multimillion-dollar hit trading collateralized debt obligations, which had only recently been invented.

Spreadsheet mistakes are quite common. Our recent spreadsheet benchmark research confirmed that errors in data and formulas are common in users’ most important spreadsheets. There’s even an association, The European Spreadsheet Risks Interest Group, that tracks the fallout from spreadsheet errors. Yet, even when millions of dollars or euros are at stake, the misuse of spreadsheets persists.

It’s a good thing that spreadsheet hazards are intangible, or they might have been banned or heavily regulated long ago in the United States by the Occupational Safety and Health Administration (OSHA) or similar bodies in other countries. All kidding aside, the London Whale incident raises the question, “Why do people persist in using desktop spreadsheets when they pose this magnitude of risk?”

Our research finds that the answer is ease of use. This is particularly true in situations like the one described by the JP Morgan task force. In the capital markets portion of the financial services industry it’s common for traders, analysts, strategists and risk managers to use desktop spreadsheets for analysis and modeling. Spreadsheets are handy for these purposes because of the fluid nature of the work these individuals do and their need to quickly translate ideas into quantifiable financial models. Often, these spreadsheets are used and maintained mainly by a single individual and undergo frequent modification to reflect changes in markets or strategies. For these reasons, more formal business intelligence tools have not been an attractive option for these users. It’s unlikely that these individuals could be persuaded to take the time to learn a new set of programming skills, and the alternative – having to communicate concepts and strategies to someone who can translate them into code – is a non-starter. Moreover, these tools can be more cumbersome to use for these purposes, especially for those who have worked for years translating their concepts into a two-dimensional grid.

Desktop spreadsheets have become a bad habit when they are used in situations where both the risk of errors and the consequences of those errors are high. Increasingly, however, they are a habit that can be broken without too much discomfort. The task force recommended more controls over the spreadsheets used for portfolio valuation. One way of doing this is simply to add vetting and sign-off before a spreadsheet is used, controls to prevent unauthorized changes, and periodic audits thereafter to confirm the soundness of the file. This classic approach, however, is less secure and more time-consuming than it needs to be. Organizations can and should use at least one of three approaches to achieve better control of the spreadsheets they use for important processes. First, tools available today can automate the process of inspecting even complex spreadsheets for suspicious formulas, broken links, cells that contain a fixed value rather than a formula, and other structural sources of errors. Second, server-based spreadsheets retain the familiar characteristics of desktop spreadsheets yet enable greater control over their data and formulas, especially when integrating external and internal data sources (say, using third-party feeds for securities pricing or parameters used in risk assessments). Third, multidimensional spreadsheets enable organizations to create libraries of formulas that can be easily vetted and controlled. When a formula needs updating, changing the source formula changes every instance in the file. Some of these applications can be linked to enterprise data sources, eliminating the risks of copy-and-paste data entry, and because they are multidimensional it is easy to save multiple risk scenarios to the same file for analysis.
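To illustrate the first approach, the kind of automated inspection such tools perform can be sketched in a few lines of Python with the openpyxl library. The workbook name, the assumption that column D should contain formulas, and the simple heuristics are all invented for this example; commercial spreadsheet-auditing tools apply far more extensive rule sets.

```python
from openpyxl import load_workbook

# A hypothetical valuation workbook; the file name and layout are assumptions.
wb = load_workbook("portfolio_valuation.xlsx", data_only=False)

issues = []
for ws in wb.worksheets:
    for row in ws.iter_rows():
        for cell in row:
            value = cell.value
            if value is None:
                continue
            if isinstance(value, str) and value.startswith("="):
                # Formulas that reference other workbooks are a common source
                # of silently broken links when files are moved or renamed.
                if "[" in value:
                    issues.append((ws.title, cell.coordinate, "external workbook reference"))
            elif isinstance(value, (int, float)) and cell.column_letter == "D":
                # Example heuristic: column D is assumed to hold calculated
                # values, so a hard-coded constant there is suspicious.
                issues.append((ws.title, cell.coordinate, "constant where a formula is expected"))

for sheet, coord, problem in issues:
    print(f"{sheet}!{coord}: {problem}")
```

Even a rudimentary scan like this, run automatically before a spreadsheet is signed off, catches the kind of silent structural change that a manual review easily misses.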

Spreadsheets are a remarkable productivity tool, but they have limits that users must respect. Desktop spreadsheets are seductive because they are easy to set up. They are especially seductive in capital markets operations because they also are easy to modify and manipulate. However, these same qualities make it just as easy to build in errors with grave consequences that can be nearly impossible to spot.

A decade ago, there were few practical alternatives to desktop spreadsheets. Today, there are many, and therefore fewer good reasons not to find and use them. The issues uncovered by the “London Whale” episode are far from unique. Only when a disaster occurs and the fallout is made public do people see the consequences, but by then it’s too late. Executives, especially in risk management functions, must become more knowledgeable about spreadsheet alternatives so they can eliminate systemic risks in their internal operations.

Regards,

Robert Kugel

SVP Research

For the past couple of years I’ve been pointing to the importance of in-memory computing to the future of business applications. It’s an integral part of Ventana Research’s business and finance research agenda for 2013, and it’s one of the core technologies that senior executives should have an appreciation for because it can transform all core business processes, especially those that are analytic in nature.

SAP just announced that its SAP Business Suite can now run on its HANA in-memory computing platform. HANA became generally available as a standalone database in mid-2011, and SAP states that it has almost a thousand customers. I have already written about how SAP has taken HANA into finance with new applications that use this technology. In-memory databases and processing use main memory rather than hard drives for data storage, which enables much faster response times. Online transaction processing systems such as Business Suite collect data about accounting entries, sales calls and inventory movements. With disk-based systems, the data created by these transactions winds up in multiple tables and databases. Getting information from these data stores often involves a delay because of the physical process of reading and writing disk-based storage and (especially in larger companies) because of the need to process data in batches. By keeping all of the data in main memory and applying massively parallel processing techniques, HANA can execute queries and perform analyses far faster than disk-based systems; SAP says it can be 10 to 1,000 times faster.
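The underlying effect is easy to demonstrate at a much smaller scale. The sketch below uses Python’s built-in sqlite3 module to run the same aggregate query against a disk-based database and an in-memory copy of the same table. It is only a toy illustration of where the data lives; HANA’s performance also depends on columnar storage and massively parallel processing, and the table, file name and row counts here are invented.

```python
import sqlite3
import time

def build(conn):
    # A toy table standing in for transactional data collected by an OLTP system.
    conn.execute("DROP TABLE IF EXISTS sales")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        ((f"region_{i % 50}", float(i % 1000)) for i in range(500_000)),
    )
    conn.commit()

def timed_query(conn, label):
    start = time.perf_counter()
    conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()
    print(f"{label}: {time.perf_counter() - start:.3f}s")

# Disk-based database: reads and writes go through the storage layer.
disk = sqlite3.connect("sales_demo.db")
build(disk)
timed_query(disk, "disk-based")

# In-memory database: the same data held entirely in main memory.
# On a toy data set the gap is modest because the operating system caches
# recently used pages; at the scale of an ERP system, with columnar storage
# and parallel processing added, the difference becomes dramatic.
mem = sqlite3.connect(":memory:")
build(mem)
timed_query(mem, "in-memory")
```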

In-memory computing can make the analytical applications associated with Business Suite much more interactive in working with very large data sets, which in turn enables analysts in every part of the business to work faster and smarter. ERP systems in the 1990s evolved from earlier ones created to help manufacturers and other businesses that deal in physical goods to optimally manage their inventories. With disk-based databases, inventory-related calculations can take hours to complete, especially in larger companies. This may not be an issue when things are running smoothly, but it can be problematic if, say, there is a supply chain disruption. With an in-memory system it’s possible to quickly perform analyses of such an incident’s impact on future deliveries, work through alternative allocations based on customer value and calculate the financial impact of each option. Every business can benefit from in-memory computing because, for example, it can transform a monthly budget review into a more collaborative, interactive and forward-looking activity. Instead of focusing mainly on past events, organizations can make changes to forecasts, examine the impact of alternative future actions and immediately see how the changes affect revenues, expenses, cash flow and the balance sheet. Most companies don’t work this way today because, with systems that use disk storage, it can take minutes, hours, days or even weeks to get answers to even a straightforward business question.
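As a rough illustration of that kind of interactive what-if analysis, the sketch below uses Python and pandas to recompute the financial impact of an alternative assumption against an invented three-month forecast. The figures, the scenario and the column names are all assumptions for the example; the point is simply that when the data is held in memory, reworking a scenario is immediate rather than a batch job.

```python
import pandas as pd

# An invented monthly forecast, standing in for data drawn from the ERP system.
forecast = pd.DataFrame({
    "month": ["Apr", "May", "Jun"],
    "units": [10_000.0, 11_000.0, 12_500.0],
    "price": [42.0, 42.0, 43.5],
    "unit_cost": [28.0, 28.5, 29.0],
})

def financials(df):
    # Derive revenue, expenses and operating cash flow from the forecast drivers.
    out = df.copy()
    out["revenue"] = out["units"] * out["price"]
    out["expenses"] = out["units"] * out["unit_cost"]
    out["operating_cash"] = out["revenue"] - out["expenses"]
    return out

baseline = financials(forecast)

# What-if: a supply disruption cuts June volume by 20% and raises unit cost by 5%.
scenario = forecast.copy()
june = scenario["month"] == "Jun"
scenario.loc[june, "units"] *= 0.8
scenario.loc[june, "unit_cost"] *= 1.05

impact = (financials(scenario)[["revenue", "expenses", "operating_cash"]].sum()
          - baseline[["revenue", "expenses", "operating_cash"]].sum())
print(impact)  # change versus baseline for each line item
```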

Companies can deploy HANA on premises or use SAP’s cloud platform as my colleague has already assessed. As with other platform-as-a-service offerings, the latter approach is designed to give organizations the ability to create and deploy scalable applications at a lower cost, and readily support mobile devices.

The press conference held to showcase the announcement highlighted John Deere as an early adopter. That company is considering how it can use SAP Business Suite on HANA to provide users of its farm equipment with a broader set of information-based services. The raw material for such services is the sensor-generated data that these machines routinely collect about engine performance, soil conditions and the weather, to name just three. This is a proven business strategy. Jet engines are routinely monitored in flight to detect patterns and anomalies that require attention when the plane lands. This is possible because of the relatively small number of aero engines in operation at any time, and cost-effective because of the heavy expense associated with aircraft downtime. As the cost of tracking and analyzing sensor-generated data drops and information is available in real time, it becomes increasingly feasible for machinery and device manufacturers to offer monitoring and data services to customers to generate revenue, promote customer loyalty and enhance customer satisfaction.

While highlighting all of the advantages of HANA as the computing platform for Business Suite, SAP was quick to emphasize that customers are free to keep whatever SQL database they currently use, and detailed the ways in which it is attempting to keep the choice of database a non-issue from a technical standpoint.

Applications that use in-memory storage and processing are not new, but the scope and scale of Business Suite and its centrality to running any business make this a noteworthy step in business computing. SAP is offering services that will enable customers to accelerate adoption of HANA. It will offer a rapid-deployment solution designed to enable customers to go live in less than six months, and that includes a full set of preconfigured software, implementation services, training and content for a fixed price.

For the moment, SAP is ahead of its rivals, but it is unlikely to enjoy this lead for long; other vendors have plans to offer in-memory applications. Despite the manifold benefits of in-memory systems, I don’t see these systems generating meaningful incremental demand over the next two years, as companies are risk-averse and will want to evaluate the experience of early adopters. Thereafter, I expect a renaissance in business software driven by in-memory and other technologies as well as a generational shift in the expectations and demands of software users.

Regards,

Robert Kugel – SVP Research
