Wednesday, November 29, 2006

Analyzing [what is in] the process

In my post yesterday Process analytics is more than a pretty graph, I talked about the use of Process Analytics tools to do more than just monitor workload, rate of processing and so on. This fitted into the Execute & Analyze phase of a business process optimization lifecycle. James Taylor commented that:
Process analytics is also more than analyzing the process!

In my post I implied this a little by touching on the importance of strong analysis tools that provide information against business KPIs and objectives, rather than just process metrics. Though, as James says, there is far more to it than that.
For instance, if I can predict that an account is at risk of going into collections I can route it differently. This is improving my PROCESS with ANALYTICS but it is not about analyzing the process.

This is the transition into the next state in the lifecycle: Manage & Improve. For example, goal management could fit in here, driving the automated routing of many process items based on business KPIs. Now accompany this with business rules, and complex decision analysis at an individual level, as discussed by James in his post. Use the right tools and work can be automatically routed both in bulk and individually to meet the complex requirements of enforcement, processing, performance and business goals.
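
To make that a little more concrete, here is a rough sketch in Java of what goal-driven routing might look like under the covers. Everything here is illustrative - the class names, queue names and the 0.8 risk threshold are made up, not taken from any product - but it shows a predictive score and a simple business rule steering individual items, rather than every case following the same path.

    // A minimal, illustrative sketch (class and queue names are hypothetical):
    // a predictive score plus a simple business rule decide where each account
    // is routed next, rather than every item following the same path.
    public class GoalDrivenRouter {

        static class AccountCase {
            final String accountId;
            final double balance;
            final double collectionsRiskScore; // e.g. from a predictive model, 0..1

            AccountCase(String accountId, double balance, double collectionsRiskScore) {
                this.accountId = accountId;
                this.balance = balance;
                this.collectionsRiskScore = collectionsRiskScore;
            }
        }

        // High predicted risk of collections goes to a proactive-contact queue,
        // high-value accounts to a priority queue, everything else to the pool.
        static String route(AccountCase c) {
            if (c.collectionsRiskScore > 0.8) {
                return "proactive-contact-queue"; // improving the process WITH analytics
            }
            if (c.balance > 100000) {
                return "high-value-queue";        // protecting a business KPI, not a process metric
            }
            return "standard-pool";
        }

        public static void main(String[] args) {
            AccountCase[] work = {
                new AccountCase("A-1001", 2500, 0.92),
                new AccountCase("A-1002", 150000, 0.10),
                new AccountCase("A-1003", 8000, 0.35)
            };
            for (AccountCase c : work) {
                System.out.println(c.accountId + " -> " + route(c));
            }
        }
    }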

Much of the benefit of process analytics beyond looking at workload requires a fairly 'full-bodied' view of process. For analytics to work well - managing the business goals, enforcement, and so on - I don't believe a process can be viewed just as abstract work items bouncing around a workflow, touching people and systems. Process analytics needs to work alongside a formal business process that manages fully laden process instances (sketched in code after this list):
  • Containing complete, descriptive business metadata
  • Linking to entities in other systems and providing access to their data
  • Managing and referencing content, documents, discussions and tasks
  • Enforcing the delivery of work to appropriate people, systems and services
  • Making available specific data that is required for your analysis and management aims
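
As a purely illustrative sketch (this is not any vendor's object model, just my own shorthand), a 'fully laden' process instance might carry something like the following alongside its routing state:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative only: what a 'fully laden' process instance might carry,
    // beyond the bare routing state of an abstract work item.
    public class ProcessInstance {

        private final String caseId;

        // Complete, descriptive business metadata (customer, product, value, SLA...).
        private final Map<String, Object> businessMetadata = new HashMap<String, Object>();

        // Links to entities held in other systems, referenced by system + key
        // rather than copied wholesale into the process.
        private final Map<String, String> entityReferences = new HashMap<String, String>();

        // Managed content: documents, discussions and tasks attached to the case.
        private final List<String> contentIds = new ArrayList<String>();

        // Where the work must be delivered next (a role, group or named user).
        private String assignedQueue;

        public ProcessInstance(String caseId) {
            this.caseId = caseId;
        }

        public void putMetadata(String name, Object value) { businessMetadata.put(name, value); }
        public void linkEntity(String system, String key)  { entityReferences.put(system, key); }
        public void attachContent(String contentId)        { contentIds.add(contentId); }
        public void deliverTo(String queue)                { this.assignedQueue = queue; }

        // Expose the specific data the analytics layer needs, as a flat snapshot.
        public Map<String, Object> analyticsSnapshot() {
            Map<String, Object> snapshot = new HashMap<String, Object>(businessMetadata);
            snapshot.put("caseId", caseId);
            snapshot.put("queue", assignedQueue);
            snapshot.put("attachmentCount", contentIds.size());
            return snapshot;
        }

        public static void main(String[] args) {
            ProcessInstance claim = new ProcessInstance("CASE-2006-0042");
            claim.putMetadata("customerTier", "platinum");
            claim.putMetadata("claimValue", Double.valueOf(12500));
            claim.linkEntity("policy-admin-system", "POL-778899");
            claim.attachContent("scanned-claim-form.tif");
            claim.deliverTo("claims-assessment-pool");
            System.out.println(claim.analyticsSnapshot());
        }
    }

The point is that the analytics layer can ask the instance for a business-meaningful snapshot, rather than trying to reverse-engineer meaning from bare workflow identifiers.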

With tools that support this level of meaningful business process, 'analyzing the process' implicitly becomes 'analyzing what is in the process' - real work cases, customers and accounts. Direct access to that valuable business information enables process analytics to positively drive the process based on the data. Sounds easy? I'm sure there is a lot I still need to focus on in this area to get a full picture of how this actually works in practice!

Monday, November 27, 2006

Process analytics is more than a pretty graph

One of the technology areas I've been enjoying getting my head around in my new role is Business Process Optimization as it relates to BPM. As I'm starting to understand it, the optimization of business processes can be represented as a lifecycle, stepping through three main phases:

  1. Model & Simulate
  2. Execute & Analyze
  3. Manage & Improve
In the dim and distant past, long before I ever considered joining Global 360, and perhaps when my knowledge of the BPMS market was limited by the constraints of the products I worked with, I had a pretty good discussion about Model & Simulate. I'll probably return to this discussion in the future, since I think there are still some legs in the thinking around simulation where integration (or SOA) is involved.

As for Manage & Improve, in my past I naturally assumed that this was just a function of making sure that the process was well designed and roles flexibly assigned so that fluctuations in workload were well balanced across the available workforce. That is only half the story, but the need to Execute & Analyze effectively is still the foundation of effective processes.

In simple workflow environments a quick report and a simple graph can provide all that is necessary in terms of 'analytics'. They can show a manager at a glance where work is building up in a process. But in high-volume, complex business process environments that have constraints applied through contractual Quality of Service agreements or a need to provide exceptional customer service, the analytical capabilities of a system need to be a lot greater. Examples could be credit card dispute resolution, call center customer services, life insurance application processing, or brokerage account opening.

In these complex environments, managers need up-to-date data that can represent work sliced and diced across many dimensions. This enables them to see not only that there is a large mass of work collecting in one activity in the process, but whether that places their highest value clients or service contracts at risk. True process analytics tools understand the structure and 'flow' of work in business processes, enabling them to produce OLAP cubes for complex analysis. And since they do this by capturing an event stream representing work being processed and routed, the data can be taken offline and analyzed without the huge processing impact on the live system that complex database queries would have.
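
As a very rough sketch of that idea (none of this is a real analytics product's API - the event fields and dimensions are invented), events captured as work is processed can be aggregated offline into counts per dimension, which is essentially the cube the manager then slices:

    import java.util.HashMap;
    import java.util.Map;

    // Rough sketch: capture process events as they happen, then aggregate them
    // offline by whatever dimensions matter (activity, client tier, product...),
    // so the live system never runs the heavy queries itself.
    public class ProcessEventCube {

        static class ProcessEvent {
            final String activity;    // where in the process the work is sitting
            final String clientTier;  // a business dimension, e.g. "platinum"
            final double caseValue;

            ProcessEvent(String activity, String clientTier, double caseValue) {
                this.activity = activity;
                this.clientTier = clientTier;
                this.caseValue = caseValue;
            }
        }

        // Aggregate: (activity, clientTier) -> count of waiting cases.
        static Map<String, Integer> sliceByActivityAndTier(ProcessEvent[] events) {
            Map<String, Integer> cube = new HashMap<String, Integer>();
            for (ProcessEvent e : events) {
                String cell = e.activity + "/" + e.clientTier;
                Integer current = cube.get(cell);
                cube.put(cell, current == null ? 1 : current + 1);
            }
            return cube;
        }

        public static void main(String[] args) {
            ProcessEvent[] stream = {
                new ProcessEvent("dispute-review", "platinum", 12000),
                new ProcessEvent("dispute-review", "standard", 300),
                new ProcessEvent("dispute-review", "platinum", 9500),
                new ProcessEvent("final-approval", "standard", 150)
            };
            // A manager can now see not just that work is piling up in
            // dispute-review, but that most of it belongs to platinum clients.
            System.out.println(sliceByActivityAndTier(stream));
        }
    }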

Now that managers can see and respond to predefined analytics, as well as having the tools that enable them to simply visualize the data sliced according to their own local requirements, the job of analytics is done, right? Not really. Process analytics also enables slices of data to be visualized over time, so that trends can be spotted or the impact of specific conditions (for example, a spike in the volume of high value work) can be assessed.

Being able to understand how a business process responds under real conditions seems like the ultimate proof of performance. On a crazy day, when everyone is working flat out, a manager may not be able to work out from 'gut-feel' alone how well his process is responding to a big spike in demand. Given the data and easy-to-drive analysis tools after the fact, he or she can quantitatively understand what was different from other days, what went well and where improvements could be made.

I'm really just a beginner in this business process optimization world, but I understand that business process execution can be approached in several ways: just get work through and out of my sight, or get work done in a way that really benefits the business. With experience, Key Performance Indicators (KPIs) can be developed that provide the manager with 'at a glance' metrics showing whether the process is running to plan. The aim of the business is not necessarily to hammer out 10,000 cases an hour, but to beat the true goals of the business that the manager's teams should be bonused on - be that profitability, customer satisfaction, value of new business, etc.
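
For example - and these thresholds and names are entirely made up - a KPI could measure the share of high-value cases closed within their service target, which is a lot closer to what a team might actually be bonused on than raw throughput:

    // Illustrative only: a KPI that measures how well the process serves the
    // business goal (high-value cases finished within their service target),
    // rather than raw throughput.
    public class KpiExample {

        static class CompletedCase {
            final double value;        // business value of the case
            final double hoursToClose; // how long it took to complete

            CompletedCase(double value, double hoursToClose) {
                this.value = value;
                this.hoursToClose = hoursToClose;
            }
        }

        // Percentage of high-value cases (over the threshold) closed within target.
        static double highValueOnTimePercent(CompletedCase[] cases,
                                             double valueThreshold,
                                             double targetHours) {
            int highValue = 0;
            int onTime = 0;
            for (CompletedCase c : cases) {
                if (c.value >= valueThreshold) {
                    highValue++;
                    if (c.hoursToClose <= targetHours) {
                        onTime++;
                    }
                }
            }
            return highValue == 0 ? 100.0 : (100.0 * onTime) / highValue;
        }

        public static void main(String[] args) {
            CompletedCase[] today = {
                new CompletedCase(50000, 4), new CompletedCase(200, 1),
                new CompletedCase(75000, 30), new CompletedCase(60000, 6)
            };
            // "At a glance": are we looking after our most valuable work?
            System.out.printf("High-value on-time: %.0f%%%n",
                    highValueOnTimePercent(today, 25000, 24.0));
        }
    }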

With a business process that has been optimized based on quantitative experience applied to real metrics, and with analytics that have been built to be meaningful in the heat of the moment, a manager can really work to exceed true business objectives.

Friday, November 24, 2006

Feature ticklist trap

A great post on Tyner Blain by Scott Sehlhorst comments on the Fifteen Ways to Shut Down (Windows Vista). This enormous set of ways to shut down a PC led Scott to talk about how software that has every feature under the sun to satisfy every requirement actually makes everyone unhappy. Quoting Joel on Software:

Inevitably, you are going to think of a long list of intelligent, defensible reasons why each of these options is absolutely, positively essential. Don’t bother. I know. Each additional choice makes complete sense until you find yourself explaining to your uncle that he has to choose between 15 different ways to turn off a laptop.


As I work with a well established product that has many customers, in a company that prides itself on customer satisfaction, the desire to provide functionality for every customer requirement and request is a challenge. There is little option in this environment but to implement the functionality, so it is fortunate that the development team have a huge amount of experience and handle this balancing act with the appearance of ease.

Even for a skilled team like this, the holistic approach that Scott talks about can be valuable to help ensure excessive complexity is not introduced to the software. He highlights ways to handle structured requirements that focus on the interaction by personas (categories of users), rather than directly trying to satisfy each requirement in turn:
Multiple requirements can lead to multiple, locally-optimized solutions or features. A holistic view needs to be taken to assure that we aren’t introducing too much complexity with these variations of similar features. Interaction design gives us a big-picture perspective to make sure we aren’t making our software too hard for our target users to use.

This persona-based design is valuable not just for making sense of a vast array of requirements, but more generally so that business analysts and software developers can understand what they are proposing and developing, beyond the individual feature / functionality of a standalone application, business process or composite solution. A while back I attended a Pragmatic Marketing course, Requirements That Work, which introduces this really well and made for an enjoyable, interactive classroom day. I'd recommend it to anyone who has to work with software requirements.

The advantage that many enterprise software applications have is that they are supported, configured and customized to meet the needs of their own user base. This enables IT departments to hide the unnecessary features and options from their users. Windows Vista, Office and Outlook don't have this luxury. They need to meet the requirements of a disparate, varied audience, with a range of skills and needs.

It seems that Vista needs to learn from the potential failings (and subsequent enlightenment) of the Linux world - that if a user doesn't understand an option or function they will ignore it, maybe select the default, then avoid using the software in future. The Debian OS gives skilled users everything, complete with the associated complexity; Ubuntu, which dubs itself 'Linux for human beings', gives 80% of users the defaults they would have selected anyway, and hides the rest. Debian is not well adopted by average desktop users. Ubuntu seems to be doing better in that arena.

Microsoft needs Windows Vista to be a success, and it will only be that if users see it as a worthwhile step forward. But it needs to fight its way back out of the feature ticklist trap if it wants to ensure user adoption. I hope that there are not 15 ways to buy and upgrade the OS as well. If there are, I may never get to the point of selecting one of the 15 ways to shut it down.

Monday, November 20, 2006

SOA for viewable documents

As customers extend their SOA strategies more and more, a question seems to be arising - is SOA a good fit for documents (like PDFs, TIFFs, Word Docs) and other binary content? Of course SOA powered by an Enterprise Service Bus (ESB) or some other mechanism for composing services can handle binary data; it's just a question of whether it really makes sense to push all of this data through it.

My background is in document imaging. Imaging and high volume document management systems typically have built up extremely functional image and document viewing capabilities over the course of their often lengthy existences. These viewing capabilities were built with the following design criteria:

  • Responsiveness - how fast can a user be presented the specific information they are requesting so they can continue working.
  • Server performance - do not request more data from the server than is really required by the user to view the document. Don't send unnecessary resolution, color or pages, depending on predefined user requirements.
  • Network performance - in the days before even 100Mb networks were commonplace, managing network usage was important. In large scale, or distributed implementations it still is.
  • Seamless presentation of multiple types - documents come in many different formats, and for easy processing it does not make sense for the user to have to navigate different native applications, let alone deal with the load time of some of them.
  • Onion-skin annotation and redaction - the ability to mark up any type of document you can view is essential in some environments, without damaging the original document.

Viewer technology was largely based on a thick-client paradigm to allow it to meet most of these requirements. Stellent (to be acquired by Oracle) offers a range of image viewer technology as its Outside In product line, which has been the backbone of many thick-client image viewer apps. Spicer offers a range of viewers, especially focusing on complex CAD formats. There are others as well, but their number is limited.

Even now, there are very few applications that can present thin-client views of documents and meet the previous design criteria. Daeja is one third-party Java applet that can be integrated to meet this type of requirement for image and PDF documents. Global 360 has powerful thin-client image viewing, annotation and capture to support its BPMS and Content products.

The thing with all of these options is that they have traditionally been designed to plug directly into their imaging repository, either through proprietary TCP/IP file transfers or, for the thin-client versions, as standard HTTP GET requests. And that is just for the image viewing; the upload of annotations was specific to the application. This does not fit well with an SOA in which organizations take the approach to extremes and insist that the ESB sits between all end-user applications and their servers, using pure SOAP web services.
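
To illustrate why that matters (the URL pattern and the numbers below are invented for the example), a purpose-built viewer can ask the repository for exactly one page at the resolution the user needs, while the same image pushed through a plain SOAP body is typically carried as base64 text, roughly a third bigger before the envelope and the ESB hops are even counted:

    // Illustrative comparison (the URL pattern and numbers are invented):
    // a viewer's direct HTTP GET can request exactly the page and resolution it
    // needs, while the same image wrapped in a plain SOAP message is typically
    // carried as base64 text (ignoring attachment optimizations), inflating the
    // payload by roughly a third.
    public class DocumentRetrievalSketch {

        // The kind of request a purpose-built thin-client viewer makes:
        // one page, at a resolution chosen for the user's task.
        static String directViewUrl(String docId, int page, int dpi) {
            return "http://imaging.example.internal/view?doc=" + docId
                    + "&page=" + page + "&dpi=" + dpi;
        }

        // Size of the same bytes once base64-encoded inside a SOAP body
        // (4 output characters for every 3 input bytes, before XML overhead).
        static long base64Size(long rawBytes) {
            return 4 * ((rawBytes + 2) / 3);
        }

        public static void main(String[] args) {
            long tiffPageBytes = 60 * 1024; // a single compressed TIFF page, say 60 KB

            System.out.println("Direct GET: " + directViewUrl("claim-4711", 3, 150));
            System.out.println("Raw page size:       " + tiffPageBytes + " bytes");
            System.out.println("Base64 in SOAP body: " + base64Size(tiffPageBytes)
                    + " bytes (plus envelope and ESB hops)");
        }
    }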

Many of the advantages of these viewers come from the smart ways they access their servers to get the best performance. It seems like a poor use of resources to build an SOA layer between a viewer application and its related repository server just to enforce a dogmatic approach to SOA - unless it is really justifiable to need to reuse any viewer technology (of which, as I say, there is limited choice), or to allow a single viewer to access any repository (a complex proposition to do well).

Even if ESBs can handle these types of binary files effectively and efficiently, I am struggling to see whether this is really a pragmatic approach to SOA. There must be some value to doing this that I have missed, probably because of my outdated background in this area.

Saturday, November 18, 2006

Customer relationship management from Bankwatch

Colin on Bankwatch has written a nice summary of CRM and how it relates to banks: Customer relationship management | Wikipedia

As he says:
The vision for CRM is entirely relevant for Banks, but is way ahead of either capacity to afford, or certainly capacity to implement due to disparate systems.
As Colin talks about in Its not about transactions, its about relationships! CRM can provide better relationships between banks and customers, and this can lead to strong upsell of services over time.

This is the key point of CRM as I see it. For a vendor to help customers feel comfortable across the many different interaction points they use, and the many different call center personnel they may speak to, the vendor needs to put enough knowledge of the customer's profile in front of its staff for each interaction to go smoothly.

CRM can provide this, as well as ensuring that information held across the many disparate systems containing customer data is easily accessible and presented usefully to the vendor on demand.

Integration of systems is essential, and being able to make a good guess, in advance, of what is needed from which system will make any call center worker's job easier. BPM, SOA and CRM together can be very valuable if the technology helps the vendor's users, and doesn't just add more complexity to their jobs.

Thursday, November 16, 2006

Integration of BPMS and Portals

A recurring question is the integration of BPMS and Portals - how would organizations benefit from the interaction of these two technologies?

'Integration' is a word often avoided by Portal vendors. Vignette preferred the word 'surfacing', since it reflects more closely what is happening when an application's data is presented in the Portal. The chosen data or predefined portlet (a visual application component) is pulled through the Portal and 'surfaced' on the screen.

With a strong Enterprise Portal, formal integration is not required - just the configuration of what is displayed where. That said, the portal is often implemented as a precursor to a broader SOA platform, becoming the central meeting point for many system integration activities that provide composite applications through 'integration on the glass'.

Outside of this, Enterprise Portal value is sold on:

  • Providing end-users with a single, consistent point to interact with systems and find information
  • Enabling personalization of appearance and applications, to help users feel comfortable with the system and more rapidly find information relevant to them
  • Implementing a consistent technical and visual framework for integrating disparate systems and information services
  • Simplifying, delegating and enforcing administration of Intranet and Web Sites, taking the burden off IT and web designers
The value associated with Portals can be related to many systems deployed in organizations, including the BPMS. A BPMS can incorporate some of the features of a Portal into the native BPMS application, such as consistent presentation, delegated administration and componentized user interfaces - though the latter often demands web page development to achieve.

A BPMS may benefit from being surfaced in a broader Enterprise Portal system, providing consistent access to BPM and the other resident information systems, potentially showing information from both on the same page. For example, this could provide the end-user with contextual information from a knowledge base, automatically displayed based on the type of process or activity he or she is working on. In flexible, browser-based BPMS applications there is of course no reason why this type of functionality could not be integrated by IT, but the value of the Portal is that once each system has been integrated, it can be reused in other parts of the application without web developers getting involved.

The previous example implies that the most valuable way of surfacing a BPMS in a Portal is for the presentation of end-user work lists, individual work items and associated data. For users who work with BPM occasionally, for example for Expense Claim Approvals, pulling the BPMS into a broader Employee Intranet may make it easier for users to find and use the system. At the other extreme - heads-down users processing large volumes of work, or customer-service focused users - a demand for Portal presentation needs to be carefully assessed to ensure that it truly offers value and does not in fact hinder fast, effective and efficient interaction with the BPMS. Composite display and interaction may slow the system and detract from the user experience.

From another point of view, supervisors or administrators of BPM may benefit from the flexibility and user-driven configuration of applications available through Portals. Being able to simply configure the most important process analytics charts for his department may help a supervisor monitor the performance of his staff quickly and effectively and spot potential red flags. Being able to display this information alongside information from other systems enables a better contextual view of what is happening, and why.

Portal 'integration' can be approached in several ways:

  • Surface main web page components directly in the Portal
  • Use the Portal's application / portlet designer
  • Develop highly customized portlets using JSP
For example, Global 360's Java BPMS provides browser-based presentation components flexible enough to be surfaced directly in an Enterprise Portal. The portal's own portlet/application design tools can call the full BPMS Web Services or Java Client API to provide highly configurable displays of BPMS data in ways not envisaged by the original user interface. Finally, web developers can easily use familiar JSP page design to create standards-compliant (JSR-168) portlets for highly customized presentation.
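
To give a flavour of that last option, here is a minimal JSR-168 portlet sketch that surfaces a user's work list. The portlet lifecycle calls are the standard ones from the spec; the work list service is a hypothetical stand-in for whatever the BPMS Web Services or Java Client API actually exposes.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.portlet.GenericPortlet;
    import javax.portlet.PortletException;
    import javax.portlet.RenderRequest;
    import javax.portlet.RenderResponse;

    // Minimal JSR-168 portlet sketch that surfaces a user's BPMS work list.
    // The portlet lifecycle calls are standard; WorkListService is a hypothetical
    // stand-in for the BPMS Web Services or Java Client API.
    public class WorkListPortlet extends GenericPortlet {

        // Hypothetical facade over the BPMS API - not a real product class.
        interface WorkListService {
            String[] openItemsFor(String userId);
        }

        private final WorkListService workList = new WorkListService() {
            public String[] openItemsFor(String userId) {
                // In a real portlet this would call the BPMS; stubbed for the sketch.
                return new String[] { "Expense claim 1234", "Account opening 5678" };
            }
        };

        protected void doView(RenderRequest request, RenderResponse response)
                throws PortletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();

            String userId = request.getRemoteUser(); // the portal's authenticated user (may be null)
            out.println("<h3>My work items</h3><ul>");
            for (String item : workList.openItemsFor(userId)) {
                out.println("<li>" + item + "</li>");
            }
            out.println("</ul>");
        }
    }

Once a portlet like this exists, it can be dropped onto any page the portal administrator chooses, which is exactly the reuse argument made above.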

True Enterprise Portals can provide a lot of value to an organization, and in the right scenarios a BPMS should be surfaced within a portal. The right scenario is typically not one where the BPMS is used for very high volume, human-centric processing or data entry and the Portal is just being used to provide a consistent user interface and slightly simplified page layout capabilities. If a BPMS vendor insists on the importance of a portal for a high-volume processing application, treat that as a red flag. The BPMS that the salesman is pushing is probably not really designed for your high-volume requirements, and is more likely designed to look pretty for managers approving occasional expense claims.

Technorati tags: BPMS Portal

Monday, November 13, 2006

Virtualization - more bits to break?

Virtualization or Virtual Machines (VMs) hosting Operating Systems in production environments... My reaction to this is: why would you do it? Why run what is effectively an operating system emulator inside an otherwise decent operating system?

I understand the benefits of VMWare (and Microsoft's new Virtualization) type environments for demonstration, development and maybe even QA. And the VMWare player for 'safe' Internet browsing is a clever application of the technology. The main advantage here is that a VM image is a self-contained environment that can be copied as just a bunch of standard files from one machine to another, enabling easy backup and snapshotting of an environment at a point in time. This is great for QA Testers and Sales Engineers everywhere. But in a production environment, where you want to squeeze every little bit of juice out of that expensive piece of hardware, I'm struggling with the idea.

I have heard IT shops suggest that VMWare can be an incredible boon for them. Now they have the opportunity to really use large, multiprocessor servers, running many systems, each within its own self-contained 'virtual machine'. This provides some advantages:

  • Limit the resources an individual system uses, for both performance and licensing
  • Enable the deployment of otherwise conflicting applications on a single box
  • Provide a level of manageability of servers, being able to start and stop VMs independently

Each of these reasons carries some advantages. I'm just not convinced that the overhead that a Virtual Machine hosted Operating System carries really helps organizations get the best out of their servers. My experience has been that a VM represents another layer of software to fail, either in terms of the hosted operating system, or just the VM environment - more moving parts means more bits to break. And the hosted VM/OS consumes resources on top of the host OS. Unless this really is an insignificant amount, it seems hardware intensive. Of course, IBM has been successfully doing this for quite a while with Dynamic Logical Partitioning (LPAR), which is tried and trusted (but also supported directly by the hardware).

So as customers ask me whether Global 360 products will be officially supported on VMWare, I still ponder the question. Sure, we support our products on a range of operating systems, including IBM AIX LPARs. If you want to run the products on a supported OS, that is fine - we don't typically state what the underlying hardware should be, as long as it meets the minimum spec that any off-the-shelf hardware is likely to meet (of course, you need a bigger box to support 10,000 BPMS users!). The problem is that specifically asking whether we support VMWare implies that we need to QA this environment as well - an expensive proposition, as any software vendor that understands the costs of adding platform support will appreciate.

This is how I am thinking of addressing the VMWare question. For systems that do not require specific hardware attachments (like scanners or optical jukeboxes) I'm starting to believe that it is safe to assume that VMWare is equivalent to just another hardware platform. But I'm really really struggling to find out, behind the murmur of requests: How many real production systems of a decent scale are out in IT-land that are running on VMWare? Do they run well, all of the time? How well are they supported by the application vendors? Oh, so many questions!


Friday, November 10, 2006

Master Data Management - just another load of inaccurate data?

As I continue my offline delve into all things SOA and BPM, the concept of Master Data Management (MDM) keeps coming up. In the past I just assumed that MDM was about data warehousing and maybe even a grand vision for CRM - really just a way of collecting all of the information you have about your customers that is spread across a bunch of disparate legacy, COTS and home grown systems, and having it available whenever you need it. When you bring SOA and BPM into the MDM picture, things seem to get complicated.

Pre-SOA the problem with disparate systems was that applications typically required swivel-chair integration through re-keying portions of data from one system to another, or at best providing occasional batch loads of information. A single accurate view of a customer's information rarely existed, and in any organization that has not replaced all its systems you will see how much data is spread around. For a different example, when you start a new job, look at how many times you enter the same personal information onto different forms. Each form exists to simplify the entry of your data into a separate system, and the duplicated information on each is an insight into this swivel-chair IT world.

When BPM was brought into the picture, traditional workflow systems typically added to the issue by copying data into a process instance at the start and never sending updates back. Where workflow did use live customer data, it tended to extract it from a point system, completely disregarding the reliability of that data and focusing more on the simplicity of accessing that system, given the lack of other good integration mechanisms.

Customer data reflects our current knowledge of our customers, and should be affected by everything we do with them - every transaction that is made, every interaction that we have and any background tasks that are going on. If every system that records customer data for itself is not effectively synchronized with the others, even SOA is going to struggle to pull disparate systems into meaningful and accurate business services. To me this seems like a fundamentally unreliable piece of SOA: each service has to rely not only on the actions backend systems perform, but also has to understand the data that a system uses and how that may be inconsistent with another system used by the same service.

MDM provides a way to synchronize and pull data together from the underlying systems into a central place, and this consistent and current layer of master data does appear to have some value. I can also see that it is useful to be able to build new business logic on top of reliable master data, abstracted once from all the underlying sources of unreliable and disjointed data. This makes data reusable and new business logic easier to build and more reliable.

The problem is that I don't see how MDM helps SOA. SOA needs to work with disparate backend systems largely intact, benefiting from the logic they already provide. It should not be trying to replicate or rebuild the business logic in underlying systems, since if you are going to do that you might as well rip and replace those systems, not duplicate the logic in the integration layer.

To round it all out, SOA in combination with BPM needs to be aware of data inconsistency when using disparate backend systems. To be effective, all processes and services should ensure that up-to-date data is fed back into the backend systems that own it as a result of processes and service calls. For BPM this requires strong data modeling and integration (with SOA interoperability) to prevent process data duplication - something we have not seen in traditional systems, but that I'm seeing more of now.
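
A crude sketch of that 'reference, don't copy' idea (every interface here is invented for illustration): the process instance holds only a customer key and reads or writes the owning system through a service at the moment it needs the data, so the change flows straight back to the system that owns it.

    import java.util.HashMap;
    import java.util.Map;

    // Crude sketch of 'reference, don't copy': the process instance holds only a
    // customer key, and reads or writes the owning system through a service at
    // the moment it needs the data. Every interface here is invented.
    public class ProcessDataSketch {

        static class Customer {
            final String id;
            String mailingAddress;
            Customer(String id, String mailingAddress) {
                this.id = id;
                this.mailingAddress = mailingAddress;
            }
        }

        // Hypothetical service in front of the system that owns the record.
        interface CustomerDataService {
            Customer fetch(String customerId);
            void update(Customer customer);
        }

        // The process carries a reference, not a snapshot taken at the start.
        static class AddressChangeProcess {
            private final String customerId;
            private final CustomerDataService service;

            AddressChangeProcess(String customerId, CustomerDataService service) {
                this.customerId = customerId;
                this.service = service;
            }

            void execute(String newAddress) {
                Customer current = service.fetch(customerId); // read the live record
                current.mailingAddress = newAddress;
                service.update(current);                      // feed the change straight back
            }
        }

        public static void main(String[] args) {
            // In-memory stand-in for the owning backend system.
            final Map<String, Customer> backend = new HashMap<String, Customer>();
            backend.put("C-42", new Customer("C-42", "1 Old Street"));

            CustomerDataService service = new CustomerDataService() {
                public Customer fetch(String customerId) { return backend.get(customerId); }
                public void update(Customer customer)    { backend.put(customer.id, customer); }
            };

            new AddressChangeProcess("C-42", service).execute("99 New Road");
            System.out.println("Backend now holds: " + backend.get("C-42").mailingAddress);
        }
    }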

Maybe MDM can be useful as a component of SOA/BPM, but right now I'm struggling with how it doesn't just become another layer of data that disagrees with everything else in the enterprise.

Wednesday, November 01, 2006

"You've Got Work"

When put on the sharp end of a sales meeting showing some process management, it has been typical for customers to ask me whether the workflow engine notifies users through email that they have been assigned a new work item.

Although this shouldn't have been a tough question to answer, it always seemed to be an indicator of some deeper issue: notifying an individual user through email that he or she has been delivered a work item implies to me (and perhaps other seasoned workflow professionals) that the user rarely gets new work items. In which case, is the BPMS I am proposing really the right solution for a customer who is hinting (with the email question) that he doesn't really need the capabilities of a low-latency, highly scalable, heads-down, production workflow product?

What was my answer (beyond 'yes, we do that')? To me, in structured business processes that are being worked by teams of people, email as a notification mechanism is all wrong. In high volume workflows, process instances (work items) should be delivered to groups of users working in a pool, not addressed or delivered to individuals as email notification suggests. Pool working is essential for handling peaks in load and avoiding the need to reassign work because an individual is on vacation for a day, and it reflects the common skillset of the workers. For this model, email is the wrong approach.
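
A simple sketch of the distinction, with all names invented: work delivered to a pool is pulled by whichever qualified worker is free next, while the expense-claim style of process has to resolve one specific individual - typically the originator's own manager - before it can deliver anything.

    import java.util.LinkedList;
    import java.util.Queue;

    // Simple sketch of pool delivery versus individual delivery; all names invented.
    public class WorkDeliverySketch {

        // Pool model: items sit on a shared queue for a skill group, and whoever
        // is free pulls the next one - no per-person notification needed.
        static class WorkPool {
            private final Queue<String> items = new LinkedList<String>();

            void deliver(String workItem)  { items.add(workItem); }
            String pullNext(String worker) {
                String item = items.poll();
                return item == null ? null : worker + " takes '" + item + "'";
            }
        }

        // Person-to-person model: the next performer is resolved from the
        // originator, e.g. my expense claim must go to my manager, not any manager.
        static String resolveApprover(String originator) {
            if ("alice".equals(originator)) {
                return "alices-manager"; // would come from an org directory in practice
            }
            return "default-manager";
        }

        public static void main(String[] args) {
            WorkPool disputes = new WorkPool();
            disputes.deliver("Dispute 8811");
            disputes.deliver("Dispute 8812");
            System.out.println(disputes.pullNext("agent-1"));
            System.out.println(disputes.pullNext("agent-2"));

            System.out.println("Expense claim from alice routes to: " + resolveApprover("alice"));
        }
    }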

Of course, my background was the high-volume, heads-down workflows of health insurance coding (data entry and payment decisions), financial services new account applications, and call center dispute and enquiry handling. In these environments the only way to process work effectively and efficiently is to use a BPMS that is designed to keep the work flowing through the system.

My background, until I took a short spell with a corporate "compliance and governance" focus, was not based on deploying systems to ensure delivery of my travel expense claim to my manager, then accounts payable. That required a very different structure of workflow delivery. As an employee submitting a claim, my expense work item didn't follow a different logical path from anyone else's - it went from me, to my boss, to AP. But only my boss could do the initial acceptance of the claim, not another manager at his level. My boss was not a heads-down worker for administrative tasks. In fact, he would have been happier if he could have accepted my expense claim through his Blackberry (and given the delay before he got to a real PC sometimes, so would I!). There are many business processes that fall under this 'person to person' enforced delivery model and could benefit from automated workflow delivery - not really for efficiency, but more for the enforcement of policies and processes.

Given this, is an enterprise, high-volume BPMS the wrong tool for processes like expense claims, SEC reporting and HR recruitment? In some cases the answer is yes - some BPMSs do not have the capability to effectively manage processes that must direct work to individuals rather than groups. They can do it, but it is a stretch for them to maintain the relationship between the originator of a piece of work and the next person it must be delivered to: my expense claim must go to my boss, not any old manager. Customization is not a good answer for what should really be out-of-the-box processes.

Then there are some BPMSs that can do a great job of these personal delivery workflows, despite their enterprise heritage - I use Global 360's Case Manager product for my expense claims. The weak link there is ensuring that my boss picks up his email notification from the mass of other stuff in his mailbox.

An email notification requirement can be an indicator that the workflow does not require an enterprise BPMS. But selecting the right enterprise BPMS provides the other positive features that come with a product of that class - IT essentials like scalability, high availability, failover, enterprise management, auditability, rapid deployment, flexible platform support, and integration with third-party systems. These are things that years of hammering in call centers and financial services mailrooms have taught enterprise BPMSs (and that it will take document review workflow products years to learn). Why should your HR or AP group be allowed to select a less dependable solution than an enterprise BPMS?
