Free or low-cost sources of unstructured information, such as Internet news and online discussion sites, provide detailed local and near real-time data on disease outbreaks, even in countries that lack traditional public health surveillance. To improve public health surveillance and, ultimately, interventions, we examined 3 primary systems that process event-based outbreak information: Global Public Health Intelligence Network, HealthMap, and EpiSPIDER. Despite similarities among them, these systems are highly complementary because they monitor different data types, rely on varying levels of automation and human analysis, and distribute distinct information. Future development should focus on linking these systems more closely to public health practitioners in the field and establishing collaborative networks for alert verification and dissemination. Such development would further establish event-based monitoring as an invaluable public health resource that provides critical context and an alternative to traditional indicator-based outbreak reporting.
I got an update on the Oracle Business Rules product recently. Oracle is an interesting company - they have the components of decision management but do not yet have them under a single umbrella. For instance, they have in-database data mining (blogged about here), the Real Time Decisions (RTD) engine, event processing rules and so on. Anyway, this update was on business rules.
One year ago I penned Event Processing in Twitter Space, and today parts of the net are buzzing about Twitter.
In a nutshell, Twitter is a one-to-many communications service that uses short messages (140 characters or fewer). Following on the heels of the blogging phenomenon, Twitter has been used primarily for microblogging and group communications.
Twitter, and Twitter-like technologies, hold great promise in many areas. For example, you could be subscribed to the @tsunamiwarning channel on your dream island vacation and get instant updates on potential disasters. A team of people working in network management could subscribe to the @myserverstatus channel and receive updates on the health of their company's IT services. Passengers could subscribe to the @ourgatestatus channel and follow up-to-date information on their flight.
Twitter was created to answer the simple question, “What are you doing now?”
Bruce makes an interesting comment on business rules too: that "routing logic in process gateways" is not "business rules". That doesn't really hold up: certainly some gateways will be process-housekeeping decisions of little interest to the business user, but others will surely embed business-critical decisions. On the other hand, it has long been acknowledged that a best practice for BPM is to delegate such business decisions to a managed decision service - hence the explicit new business rule (aka decision) task in BPMN 2.0, and, in the CEP world, the option for tools like TIBCO BusinessEvents to invoke a decision managed by its Decision Manager tool.
Actually, the conceptual model of an EPN (event processing network) can be thought of as a kind of data flow (although I prefer the term event flow - what is flowing is really events). The processing unit is the EPA (Event Processing Agent). There are indeed two types of input to an EPA, which can be called "set-at-a-time" and "event-at-a-time". Typically, SQL-based languages are more geared to "set-at-a-time", while other language styles (like ECA rules) work "event-at-a-time". From a conceptual point of view, an EPA gets events over channels; one input channel may be of a "stream" type, while in another the events flow one by one. Since some functions are naturally set-oriented, others are naturally event-at-a-time oriented, and an application may not fall neatly into either category, it makes sense to have a kind of hybrid system, with the EPN as the conceptual model on top of both of them...
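A minimal sketch of the two input styles, with invented event types and thresholds rather than any particular product's API, might look like this:

```python
# Hypothetical sketch of the two EPA input styles described above.

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Event:
    kind: str
    value: float


class EventAtATimeAgent:
    """ECA-style agent: reacts to each event as it arrives."""

    def on_event(self, event: Event) -> None:
        if event.kind == "temperature" and event.value > 100:
            print(f"alert: high temperature {event.value}")


class SetAtATimeAgent:
    """Stream/SQL-style agent: operates on a whole window (set) of events."""

    def on_window(self, window: Iterable[Event]) -> None:
        values = [e.value for e in window if e.kind == "temperature"]
        if values:
            print(f"window average temperature: {sum(values) / len(values):.1f}")


# A trivial "channel" feeding both agents: one event-by-event, one per window.
events: List[Event] = [Event("temperature", v) for v in (95.0, 101.5, 98.2)]

eca_agent = EventAtATimeAgent()
for e in events:
    eca_agent.on_event(e)

window_agent = SetAtATimeAgent()
window_agent.on_window(events)
```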
The main characteristic to be aware of in these tools is that BE is primarily rule-based (using an embedded rule engine), whereas BW and iProcess are orchestration / flow engines. In BE we can use a state diagram to indicate a sequence of states which may define what process / rules apply, but this is really just another way of specifying a particular type of rules (i.e. state transition rules).
The main advantages to specifying behavior as declarative rules are:
Handling complex, event-driven behavior and choreography
Iterative development, rule-by-rule
The main advantages of flow diagrams and BPMN-type models are:
Ease of understanding (especially for simpler process routes)
Process paths are predetermined and can therefore be guaranteed.
In combination these tools provide many of the IT capabilities required in an organization. For example, a business automation task might use BW to consolidate information from multiple existing sources, with human business processes, such as handling process exceptions, managed by iProcess. BE is used to consolidate (complex) events from systems to provide business information, to feed into or drive both BW and iProcess, and to monitor end-to-end system and case performance.
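Returning to the earlier point that a BE state diagram is really just another way of writing state-transition rules, a minimal sketch of that idea (the states, events, and transitions are invented for illustration) could be:

```python
# Hypothetical sketch: a state diagram expressed as declarative
# state-transition rules of the form (current_state, event) -> next_state.

TRANSITIONS = {
    ("submitted", "validated"): "approved",
    ("submitted", "rejected"): "closed",
    ("approved", "shipped"): "fulfilled",
}


def apply_event(state: str, event: str) -> str:
    """Return the next state, or stay in the current state if no rule matches."""
    return TRANSITIONS.get((state, event), state)


state = "submitted"
for event in ["validated", "shipped"]:
    state = apply_event(state, event)
    print(f"after {event!r}: {state}")
```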
On Event Processing Agents implies a “new” event processing reference architecture with terms like,
(1) simple event processing agents for filtering and routing,
(2) mediated event processing agents for event enrichment, transformation, validation,
(3) complex event processing agents for pattern detection, and
(4) intelligent event processing agents for prediction, decisions.
Frankly, while I generally agree with the concepts, I think the terms in On Event Processing Agents tend to add to the confusion, because they follow, almost exactly, the same reference architecture (and terms) as MSDF, illustrated again below to aid the reader.
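For readers who prefer code to diagrams, a toy sketch of the four agent roles chained into a pipeline is shown below; the event shapes, thresholds, and the "intelligent" decision are invented stand-ins, not any product's API:

```python
# Hypothetical sketch of the four agent roles named above, chained into
# a small pipeline: filter/route -> enrich -> detect pattern -> decide.

from typing import Dict, List, Optional

Event = Dict[str, object]


def simple_agent(event: Event) -> Optional[Event]:
    """Filtering and routing: drop events we do not care about."""
    return event if event.get("type") == "order" else None


def mediated_agent(event: Event) -> Event:
    """Enrichment / transformation / validation: add a derived field."""
    enriched = dict(event)
    enriched["priority"] = "high" if event.get("amount", 0) > 1000 else "normal"
    return enriched


def complex_agent(history: List[Event]) -> Optional[Event]:
    """Pattern detection across several events: three high-priority orders."""
    highs = [e for e in history if e.get("priority") == "high"]
    if len(highs) >= 3:
        return {"type": "derived", "pattern": "burst_of_high_value_orders"}
    return None


def intelligent_agent(derived: Event) -> str:
    """Prediction / decision (here just a stand-in rule, not a real model)."""
    if derived["pattern"] == "burst_of_high_value_orders":
        return "escalate_to_fraud_review"
    return "ignore"


history: List[Event] = []
for raw in [{"type": "order", "amount": 1500},
            {"type": "heartbeat"},
            {"type": "order", "amount": 2000},
            {"type": "order", "amount": 1200}]:
    event = simple_agent(raw)
    if event is None:
        continue
    history.append(mediated_agent(event))
    derived = complex_agent(history)
    if derived:
        print(intelligent_agent(derived))
```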
The success of Service-Oriented Architecture (SOA) has created the foundation for information and service sharing across application and organizational boundaries. Through the use of SOA, organizations are demanding solutions that provide vast scalability, increased reusability of business services, and greater efficiency of computing resources. More importantly, organizations need agile architectures that can adapt to rapidly changing business requirements without the long development cycles that are typically associated with these efforts. Event-Driven Architecture (EDA) has emerged to provide more sophisticated capabilities that address these dynamic environments. EDA enables business agility by empowering software engineers with complex processing techniques to develop substantial functionality in days or weeks rather than months or years. As a result, EDA is positioned to enhance the business value of SOA.
The purpose of this white paper is to describe the approach employed to overcome the significant technical challenges required to design a dynamic grid computing architecture for a US government program. The program required optimization of the overall business process while maximizing scalability to support dramatic increases in throughput. To realize this goal, an architecture was developed to support the dynamic placement and removal of business services across the enterprise.
Multiple channels and types of events…
… executing in multiple Inference Agents (Event Processing Agents on an Event Processing Network)…
… where Events drive Production Rules with associated (shared) data…
… and event patterns (complex events) are derived from the simple events and also drive Production Rules via inferencing…
… to lead to “real-time” decisions.
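A minimal sketch of that chain - simple events asserted into shared working memory, production rules deriving a complex event, and the derived event driving a further rule via inferencing - might look like this (the facts and rules are invented for illustration):

```python
# Minimal sketch of events driving production rules over shared working
# memory, with a derived complex event that in turn drives a further rule,
# i.e. naive forward chaining.

working_memory = set()


def rule_login_burst(memory):
    """If three failed logins are present, derive a complex event."""
    fails = {f for f in memory if f[0] == "failed_login"}
    if len(fails) >= 3 and ("complex", "possible_intrusion") not in memory:
        return ("complex", "possible_intrusion")
    return None


def rule_lock_account(memory):
    """If the derived complex event is present, decide to lock the account."""
    if ("complex", "possible_intrusion") in memory and ("action", "lock_account") not in memory:
        return ("action", "lock_account")
    return None


RULES = [rule_login_burst, rule_lock_account]

# Incoming simple events.
for event in [("failed_login", n) for n in range(3)]:
    working_memory.add(event)
    # Fire rules until nothing new is inferred (naive inference loop).
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            new_fact = rule(working_memory)
            if new_fact:
                working_memory.add(new_fact)
                changed = True
                print("derived:", new_fact)
```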
It is from this operational asymmetry that the need for complexity in event processing arises. In other words, as distributed networks grow in complexity, it is difficult to determine causal dependence when trying to diagnose a distributed networked system. Most who work in a large distributed network ecosystem (cyberspace) understand this. The CEP notion of “the event cloud” was an attempt to express this complexity and uncertainty (in cyberspace).
Rob sees three key areas where rules can help:
Tighter warranty controls
Claims processing is improved because financial limits, detailed coverage types, materials return and more can be automated and rapidly changed when necessary. The rules also allow “what-if” testing and impact analysis.
Better built vehicles
The decision making is tracked very closely thanks to rules so you can analyze specific repair types, specific VINs and so on. More effective parts return and generally better information also contribute.
Lower cost repairs
Rules allow goodwill repairs, labor-only repairs and specific kinds of repairs to be managed very precisely. Rules-driven decisioning can reduce the variation of costs between dealers and help intervene, rejecting or editing claims that seem overly expensive. The ability of rules to deploy data mining and predictive analytics can also really help here.
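As a toy illustration of that kind of rules-driven claims decisioning, consider the sketch below; the financial limits and variance thresholds are invented and not taken from Rob's examples:

```python
# Hypothetical sketch of rules-driven warranty claim decisioning; the
# limits and thresholds are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Claim:
    repair_type: str
    amount: float
    dealer_avg_amount: float  # historical average for this repair across dealers


def decide(claim: Claim) -> str:
    # Financial limit per repair type (invented figures).
    limits = {"labor_only": 500.0, "goodwill": 250.0, "standard": 2000.0}
    if claim.amount > limits.get(claim.repair_type, 0.0):
        return "reject: over financial limit"
    # Flag claims well above the cross-dealer average for review.
    if claim.amount > 1.5 * claim.dealer_avg_amount:
        return "refer: cost variance review"
    return "approve"


print(decide(Claim("labor_only", 450.0, 400.0)))  # approve
print(decide(Claim("goodwill", 300.0, 200.0)))    # reject: over financial limit
print(decide(Claim("standard", 900.0, 500.0)))    # refer: cost variance review
```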
JT has posted his view on rules and decisions and how they relate. Given that James talks more about services than events, I thought it would be worth reviewing his post from both a Complex Event Processing and a TIBCO BusinessEvents event processing platform perspective.
“Decision Services:
Support business processes by making the business decisions that allow a process to continue.
Support event processing systems by adding business decisions to event correlation decisions (they are often called Decision Agents in this context).
Allow crucial and high-maintenance parts of legacy enterprise applications to be externalized for reuse and agility.
Can be plugged into a variety of systems using Enterprise Service Bus approaches.”
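A minimal sketch of such a decision service, callable both from a business process and from an event-correlation step acting as a "decision agent", might look like this (the interface and thresholds are invented, not James's or TIBCO's):

```python
# Minimal sketch (invented interface) of a decision service that can be
# invoked from a business process step or from an event-correlation agent.

from typing import Dict


def credit_limit_decision(request: Dict[str, float]) -> Dict[str, object]:
    """A stateless business decision: the caller supplies all needed facts."""
    score, exposure = request["score"], request["exposure"]
    approved = score >= 650 and exposure < 50_000
    return {"approved": approved, "reason": "policy v1 (illustrative thresholds)"}


# Called from a business process (request/reply):
print(credit_limit_decision({"score": 700, "exposure": 10_000}))

# Called as a "decision agent" after event correlation has built the request:
correlated_events = [{"type": "purchase", "amount": 30_000},
                     {"type": "purchase", "amount": 25_000}]
request = {"score": 640, "exposure": sum(e["amount"] for e in correlated_events)}
print(credit_limit_decision(request))
```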
As I was reading the Smart Roads. Smart Bridges. Smart Grids. feature in the WSJ, I couldn't help but notice all the event processing scenarios and patterns. As you would expect, all the scenarios start with sensors and instrumentation and involve event (instrumentation) transmission, detection and processing. The order, frequency and end actions of transmission, detection and processing vary by scenario. What follows are excerpts from the article in the domains of smart transportation and grids and my associated event processing observations.
Some definitions key on the question of the probability of encountering a given condition of a system once characteristics of the system are specified. Warren Weaver has posited that the complexity of a particular system is the degree of difficulty in predicting the properties of the system if the properties of the system’s parts are given. In Weaver’s view, complexity comes in two forms: disorganized complexity, and organized complexity. [2] Weaver’s paper has influenced contemporary thinking about complexity. [3]
Folks who have been in event processing fields like network management (NMS) or security management for many years have a very high expectation for processing complex events. Most of the network and security management platforms on the market have basic rule-based processing available “out of the box” and most of these platforms have had the capability to process events in near-real time for decades. Adding a new “rules-based event processing platform” to the network and security management software mix does little to add any additional capability and certainly does not solve any nagging complex detection problems.
- Leave anything related to transport and communication to other layers.
- Use this revised CEP to express and execute event-relevant logic, the purpose of which is to translate the ambient events into relevant business events.
- Have these business events trigger business processes (however lightweight you want to make them).
- Have these business processes invoke decision services implemented through decision management to decide what they should be doing at every step.
- Have the business processes invoke action services to execute the actions decided by the decision services.
- All the while generating business events or ambient events.
- Etc.
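A compressed, hypothetical sketch of that layering - ambient events translated into business events, a lightweight process invoking a decision service, and an action service carrying out the result - could look like this (all names and values are invented):

```python
# Compressed sketch of the layering described above:
# ambient events -> business events -> lightweight process -> decision
# service -> action service. All names are invented for illustration.

def to_business_event(ambient):
    """Event-relevant logic: translate an ambient reading into a business event."""
    if ambient["sensor"] == "till" and ambient["value"] > 10_000:
        return {"name": "large_cash_sale", "amount": ambient["value"]}
    return None


def decision_service(business_event):
    """Decide what the process should do at this step."""
    return "notify_compliance" if business_event["amount"] > 15_000 else "log_only"


def action_service(action, business_event):
    """Execute the action decided above (here, just print)."""
    print(f"{action}: {business_event}")


# A lightweight "process": wire the layers together for each ambient event.
for ambient in [{"sensor": "till", "value": 20_000}, {"sensor": "door", "value": 1}]:
    business_event = to_business_event(ambient)
    if business_event:
        action_service(decision_service(business_event), business_event)
```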
- First, event management is primarily about the identification and generation of business events from the ambient events, similar to what Carole-Ann and I had written in previous posts.
- Second, IBM wants to introduce high-level EPLs to express the logic for that processing that are business-centric, something very similar to what Business Rules Languages and approaches are in the business rules management area.
Truviso continuously analyzes massive volumes of dynamic information—providing comprehensive visibility and actionable insights for any event, opportunity or trend on-demand. Truviso empowers decision-makers with continuous:
Analysis - always-on, game-changing analytics of streaming data
Visibility - dynamic, web-based dashboards and on-demand applications
Action - extensible, data-driven actions and alerts
Truviso's approach is based on years of pioneering research, leveraging industry standards, so it's fast to implement, flexible to configure, and easy to modify as your needs change over time.
Function answers the question --- what is being done?
Technique answers the question --- how is something being done?
Application answers the question --- what is the problem being solved?
Examples:
Business Activity Monitoring (BAM) is an application type; it solves the problem of controlling business activities in order to optimize the business, deal with exceptions, etc.
Business Rules are a type of technique, which can be used to infer facts from other facts or rules (inference rules), or to determine an action when an event occurs and a condition is satisfied (ECA rules), and more (there are at least half a dozen types of rules, which are techniques to do something).
Event Processing is really a set of functions which does what the name indicates --- process events --- the processing can be filtering, transforming, enriching, routing, detecting patterns, deriving, and more.
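As a concrete toy example of the ECA technique mentioned above, the sketch below fires an action when an event of a given type arrives and its condition holds (the rule itself is invented):

```python
# Hypothetical sketch of an ECA (event-condition-action) rule: when an
# event of a given type occurs and its condition holds, run the action.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class EcaRule:
    event_type: str
    condition: Callable[[Dict], bool]
    action: Callable[[Dict], None]


rules: List[EcaRule] = [
    EcaRule(
        event_type="stock_level",
        condition=lambda e: e["quantity"] < 10,
        action=lambda e: print(f"reorder {e['sku']}"),
    )
]


def on_event(event: Dict) -> None:
    for rule in rules:
        if rule.event_type == event["type"] and rule.condition(event):
            rule.action(event)


on_event({"type": "stock_level", "sku": "A-42", "quantity": 3})   # fires
on_event({"type": "stock_level", "sku": "B-7", "quantity": 50})   # does not fire
```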
CEP is intelligent software that is essentially the next step in algorithmic trading – it sifts through market events looking for possible patterns and acts on them. A recent study into banks’ IT spending patterns by consultancy Aite Group suggested that while budgets as a whole were likely to shrink by 5%, CEP investment remains on an upward trajectory: 36% of respondents to the survey intended to spend more on CEP this year than in 2008.
Adam Honore, senior analyst at Aite and author of the report, says: “We’re still bullish on the potential for CEP across financial services. Once one group successfully deploys a CEP application, word spreads and more technology groups look at CEP to help solve their issues.”
Various EP applications besides algorithmic trading are mentioned: external surveillance by regulators, risk management, auditing, market depth analysis, and more.
I don’t know whether you said that a CEP application must necessarily have a model. It may have one, or it may not. A rule-based approach (in its general acceptation) is not considered a model. In AI terminology, rules are considered “shallow knowledge”, while models are considered “deep knowledge”. Shallow knowledge expresses people’s experience and links symptoms to causes directly, while deep knowledge establishes the links using a model, and the model can be interpreted. Shallow knowledge is very helpful in many cases, and, like deep knowledge, it also allows detecting situations. Of course, the cooperation of both is desirable to build more powerful systems. I did a rapid search, and below are 3 entries for reference:
Rule processing is just a style of computation. Of course it is used in BRMSs, but it is also used in CEP. CEP systems typically employ rules-based processing to infer higher-order events by matching patterns across many event streams within the event ‘cloud’. BRMSs use rule processing to match patterns within data tuples representing business-oriented data. CEP systems may support the use of advanced analytics to manage predictive analysis, reasoning under uncertainty and other requirements in relation to the event cloud. Some of the better BRMSs offer similar analytics in regard to processing business data.
I believe that the general consensus among those who study this kind of thing, is that any decision made wholly by a computer is an operational decision, even if it affects the behavior/tasks of many people or sub-components. Online decisions, being a subset of automated decisions, would then be operational in nature.
The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation.
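A tiny sketch of that definition - several records describing the same real-world object merged into one consistent record - might look like this; the merge policy (latest non-null value wins) is just one illustrative choice:

```python
# Tiny sketch of data fusion as defined above: records describing the same
# real-world object are merged into a single, consistent representation.

records = [
    {"id": "cust-1", "name": "ACME Corp",        "phone": None,       "updated": 2007},
    {"id": "cust-1", "name": "ACME Corporation", "phone": "555-0101", "updated": 2008},
]


def fuse(same_object_records):
    fused = {}
    # Take the latest non-null value for each field.
    for record in sorted(same_object_records, key=lambda r: r["updated"]):
        for field, value in record.items():
            if value is not None:
                fused[field] = value
    return fused


print(fuse(records))
# {'id': 'cust-1', 'name': 'ACME Corporation', 'phone': '555-0101', 'updated': 2008}
```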
The “big elephant in the room” in the ongoing CEP dialog is that most of the current (CEP) software on the market is not capable of machine learning and statistical analysis of dynamic real-time situations. Software vendors have been promoting and selling business process automation solutions and calling this approach “CEP” when, in fact, nothing is new. There is certainly no “technology leap” in these systems, as sold today.
Most BREs today are deployed as “decision services”, and are used in “stateless” transactions to make “decisions” as a part of a business process. A CEP application is instead processing multiple event streams and sources over time, which requires a “stateful” rule service optimized for long running. This is an important distinction, as a stateful BRE for long-running processes needs to have failover support - the ability to cache its working memory for application restarting or distribution. And of course long-running processes need to be very particular over issues like memory handling - no memory leaks allowed!
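A minimal sketch of that stateful style, in which a long-running rule service checkpoints its working memory so a restarted or failover instance can resume, might look like this (the snapshot mechanism and the rule are invented for illustration):

```python
# Hypothetical sketch of a long-running, stateful rule service that
# periodically snapshots its working memory so it can be restarted (or
# moved to another node) without losing accumulated event state.

import json
from pathlib import Path

SNAPSHOT = Path("working_memory.json")


class StatefulRuleService:
    def __init__(self):
        # Restore working memory on restart if a snapshot exists.
        self.memory = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else []

    def on_event(self, event: dict) -> None:
        self.memory.append(event)
        # A trivial multi-event rule over accumulated state.
        total = sum(e["amount"] for e in self.memory if e["type"] == "withdrawal")
        if total > 1000:
            print("pattern matched: cumulative withdrawals exceed limit")
        self.checkpoint()

    def checkpoint(self) -> None:
        # Cache working memory so a failover instance can pick it up.
        SNAPSHOT.write_text(json.dumps(self.memory))


service = StatefulRuleService()
for amount in (400, 450, 300):
    service.on_event({"type": "withdrawal", "amount": amount})
```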
The one, really big, difference between Complex Event Processing and traditional BRMS tools is that the former is loosely associated with EDA and decisions that are based on multiple events, whereas the latter is more associated with conventional request-reply SOA and automating decisions made in managed business processes.
J. Llinas, C. Bowman, G. Rogova, A. Steinberg, and F. White. In P. Svensson and J. Schubert (Eds.), Proceedings of the Seventh International Conference on Information Fusion (FUSION 2004), pages 1218--1230, 2004.
D. Rosca, S. Greenspan, M. Feblowitz, and C. Wild. In Proceedings of the Third IEEE International Symposium on Requirements Engineering, January 1997.