Free or low-cost sources of unstructured information, such as Internet news and online discussion sites, provide detailed local and near real-time data on disease outbreaks, even in countries that lack traditional public health surveillance. To improve public health surveillance and, ultimately, interventions, we examined 3 primary systems that process event-based outbreak information: Global Public Health Intelligence Network, HealthMap, and EpiSPIDER. Despite similarities among them, these systems are highly complementary because they monitor different data types, rely on varying levels of automation and human analysis, and distribute distinct information. Future development should focus on linking these systems more closely to public health practitioners in the field and establishing collaborative networks for alert verification and dissemination. Such development would further establish event-based monitoring as an invaluable public health resource that provides critical context and an alternative to traditional indicator-based outbreak reporting.
The main characteristic to be aware of in these tools is that BE is primarily rule-based (using an embedded rule engine), whereas BW and iProcess are orchestration/flow engines. In BE we can use a state diagram to indicate a sequence of states that may determine which processes or rules apply, but this is really just another way of specifying a particular type of rule (i.e., state transition rules).
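To make this concrete, here is a minimal sketch (plain Python, not BE's own rule language; the order-handling states and events are invented) of a state diagram recast as declarative state transition rules:

```python
# A minimal sketch (not TIBCO BE's rule language) of a state diagram
# recast as declarative state transition rules: each rule pairs a
# (current state, event) condition with a next-state action.
# The order-handling states and events are hypothetical.

TRANSITION_RULES = {
    ("received", "validated"): "approved",
    ("received", "rejected"): "declined",
    ("approved", "shipped"): "fulfilled",
}

def apply_event(state, event):
    """Fire the matching transition rule, or stay put if none matches."""
    return TRANSITION_RULES.get((state, event), state)

state = "received"
for event in ("validated", "shipped"):
    state = apply_event(state, event)
print(state)  # fulfilled
```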
The main advantages to specifying behavior as declarative rules are:
Handling complex, event-driven behavior and choreography
Iterative development, rule-by-rule
The main advantages of flow diagrams and BPMN-type models are:
Ease of understanding (especially for simpler process routes)
Process paths are predetermined and can therefore be guaranteed.
In combination these tools provide many of the IT capabilities required in an organization. For example, a business automation task might use BW to consolidate information from multiple existing sources, with iProcess managing the human side of the process, such as handling exceptions. BE can be used to consolidate (complex) events from many systems into business information, to feed or drive both BW and iProcess, and to monitor end-to-end system and case performance.
On Event Processing Agents implies a “new” event processing reference architecture, with terms such as:
(1) simple event processing agents for filtering and routing,
(2) mediated event processing agents for event enrichment, transformation, validation,
(3) complex event processing agents for pattern detection, and
(4) intelligent event processing agents for prediction and decisions.
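For illustration only, the first three agent types can be sketched as stages over an event stream; this is an assumed, much-simplified rendering (the event fields are invented, and the intelligent/predictive tier is omitted for brevity):

```python
# A rough sketch (simplified, assumed semantics) of the first three agent
# types as stages over a stream of event dicts.

from collections import deque

REGION_LOOKUP = {"router-7": "eu-west"}  # hypothetical reference data

def simple_agent(events):
    """Filtering and routing: forward only events worth processing."""
    return (e for e in events if e["severity"] >= 3)

def mediated_agent(events):
    """Enrichment/transformation/validation: annotate each event."""
    for e in events:
        e["region"] = REGION_LOOKUP.get(e["source"], "unknown")
        yield e

def complex_agent(events, n=3):
    """Pattern detection: infer a higher-order event when n consecutive
    events come from the same source."""
    recent = deque(maxlen=n)
    for e in events:
        recent.append(e["source"])
        if len(recent) == n and len(set(recent)) == 1:
            yield {"type": "alert_storm", "source": e["source"], "region": e["region"]}

raw = [{"source": "router-7", "severity": 4} for _ in range(3)]
for derived in complex_agent(mediated_agent(simple_agent(raw))):
    print(derived)  # {'type': 'alert_storm', 'source': 'router-7', 'region': 'eu-west'}
```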
Frankly, while I generally agree with the concepts, I think the terms in On Event Processing Agents tend to add to the confusion, because they follow, almost exactly, the same reference architecture (and terms) as MSDF, illustrated again below to aid the reader.
It is this operational asymmetry that creates the requirement for complexity in event processing. In other words, as distributed networks grow in complexity, it becomes difficult to determine causal dependence when trying to diagnose a distributed networked system. Most who work in a large distributed network ecosystem (cyberspace) understand this. The CEP notion of “the event cloud” was an attempt to express this complexity and uncertainty (in cyberspace).
JT has posted his view on rules and decisions and how they relate. Given that he talks more about services than events, I thought it would be worth reviewing his post from both a Complex Event Processing perspective and a TIBCO BusinessEvents event processing platform perspective.
”Decision Services:
Support business processes by making the business decisions that allow a process to continue.
Support event processing systems by adding business decisions to event correlation decisions (they are often called Decision Agents in this context).
Allow crucial and high-maintenance parts of legacy enterprise applications to be externalized for reuse and agility.
Can be plugged into a variety of systems using Enterprise Service Bus approaches.”
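As a purely illustrative sketch (not TIBCO's or any vendor's API; the eligibility rule and request fields are invented), a decision service is essentially a stateless function that either a process step or an event correlation layer can call:

```python
# A minimal sketch of a stateless decision service callable from either
# a business process or an event processing layer. The discount rule
# and request fields are hypothetical.

def discount_decision(request):
    """Return a business decision; no state is kept between calls,
    so the same service can back a process step or a decision agent."""
    eligible = request["annual_spend"] > 10_000 and not request["overdue_invoices"]
    return {"eligible": eligible, "discount_pct": 15 if eligible else 0}

# Called from a process engine...
print(discount_decision({"annual_spend": 12_500, "overdue_invoices": False}))
# ...or from an event correlation rule reacting to a derived event.
```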
Some definitions key on the probability of encountering a given condition of a system once its characteristics are specified. Warren Weaver posited that the complexity of a particular system is the degree of difficulty in predicting its properties when the properties of its parts are given. In Weaver’s view, complexity comes in two forms: disorganized complexity and organized complexity. [2] Weaver’s paper has influenced contemporary thinking about complexity. [3]
Folks who have been in event processing fields like network management (NMS) or security management for many years have very high expectations for processing complex events. Most of the network and security management platforms on the market have basic rule-based processing available “out of the box,” and most of these platforms have been able to process events in near-real time for decades. Adding a new “rules-based event processing platform” to the network and security management software mix adds little capability and certainly does not solve any nagging complex detection problems.
- Leave anything related to transport and communication to other layers.
- Use this revised CEP to express and execute event-relevant logic, the purpose of which is to translate the ambient events into relevant business events.
- Have these business events trigger business processes (however lightweight you want to make them).
- Have these business processes invoke decision services, implemented through decision management, to decide what they should be doing at every step.
- Have the business processes invoke action services to execute the actions decided by the decision services.
- All the while, generate business events or ambient events.
- Etc.
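Read as an architecture, that layering can be wired up in a few lines; the toy below is a sketch under invented names, not anyone's product API:

```python
# A toy wiring of the layering above; every function and field name here
# is hypothetical.

def translate(ambient):
    """CEP layer: turn an ambient (raw) event into a business event."""
    if ambient.get("reading", 0) > 100:
        return {"type": "threshold_breach", "sensor": ambient["sensor"]}
    return None  # not business-relevant

def decision_service(business_event):
    """Decision management: decide what the process should do next."""
    return "dispatch_technician" if business_event["type"] == "threshold_breach" else "log_only"

def action_service(action, business_event):
    """Action layer: execute whatever the decision service decided."""
    print(f"executing {action} for {business_event['sensor']}")

def business_process(business_event):
    """A lightweight process triggered by a business event."""
    action = decision_service(business_event)
    action_service(action, business_event)

for ambient in [{"sensor": "pump-3", "reading": 140}]:
    business_event = translate(ambient)
    if business_event:
        business_process(business_event)  # executing dispatch_technician for pump-3
```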
- First, event management is primarily about the identification and generation of business events from the ambient events, similar to what Carole-Ann and I had written in previous posts.
- Second, IBM wants to introduce high-level, business-centric EPLs to express the logic for that processing, something very similar to what business rules languages and approaches are in the business rules management area.
CEP is intelligent software that is essentially the next step in algorithmic trading: it sifts through market events looking for possible patterns and acts on them. A recent study of banks’ IT spending patterns by the consultancy Aite Group suggested that while budgets as a whole were likely to shrink by 5%, CEP investment remains on an upward trajectory: 36% of respondents to the survey intended to spend more on CEP this year than in 2008.
Adam Honore, senior analyst at Aite and author of the report, says: “We’re still bullish on the potential for CEP across financial services. Once one group successfully deploys a CEP application, word spreads and more technology groups look at CEP to help solve their issues.”
I don’t know whether you said that a CEP application must necessarily have a model. It may or may not. A rule-based approach (in its generally accepted sense) is not considered a model. In AI terminology, rules are considered “shallow knowledge,” while models are considered “deep knowledge.” Shallow knowledge expresses people’s experience and links symptoms to causes directly, while deep knowledge establishes the links through a model, and that model can be interpreted. Shallow knowledge is very helpful in many cases and, like deep knowledge, it also allows detecting situations. Of course, the cooperation of both is desirable to build more powerful systems.
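A toy example may help; assuming an invented telephone-fault domain, shallow knowledge maps symptoms straight to causes, while deep knowledge reasons over a causal model that can itself be interpreted:

```python
# A toy contrast (invented telephone-fault domain) between shallow,
# rule-based knowledge and deep, model-based knowledge.

# Shallow knowledge: experience captured as direct symptom -> cause links.
SHALLOW_RULES = {("lights_off", "no_dial_tone"): "power_failure"}

def shallow_diagnose(symptoms):
    return SHALLOW_RULES.get(tuple(sorted(symptoms)))

# Deep knowledge: a small causal model that can be traversed and explained.
CAUSAL_MODEL = {
    "power_failure": {"lights_off", "no_dial_tone"},
    "line_cut": {"no_dial_tone"},
}

def deep_diagnose(observed):
    """Return every cause whose predicted effects are all observed."""
    return [c for c, effects in CAUSAL_MODEL.items() if effects <= observed]

print(shallow_diagnose({"no_dial_tone", "lights_off"}))  # power_failure
print(deep_diagnose({"no_dial_tone", "lights_off"}))     # ['power_failure', 'line_cut']
```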
Rule processing is just a style of computation. Of course it is used in BRMSs, but it is also used in CEP. CEP systems typically employ rules-based processing to infer higher-order events by matching patterns across many event streams within the event “cloud.” BRMSs use rule processing to match patterns within data tuples representing business-oriented data. CEP systems may support the use of advanced analytics for predictive analysis, reasoning under uncertainty, and other requirements in relation to the event cloud. Some of the better BRMSs offer similar analytics for processing business data.
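To illustrate that rule-matching style (a simplified sketch, not any particular engine; the streams and thresholds are invented), a CEP rule might correlate two event streams to infer a higher-order event:

```python
# A simplified sketch of rule-style CEP: correlate two event streams to
# infer a higher-order event. Streams and thresholds are hypothetical.

from collections import defaultdict

orders = [{"account": "A1", "amount": 9_500}, {"account": "A2", "amount": 120}]
logins = [{"account": "A1", "geo": "unfamiliar"}]

# Rule: large order AND unfamiliar login on the same account
# within the window -> infer a 'possible_fraud' event.
facts = defaultdict(dict)
for o in orders:
    facts[o["account"]]["large_order"] = o["amount"] > 5_000
for l in logins:
    facts[l["account"]]["odd_login"] = l["geo"] == "unfamiliar"

higher_order = [
    {"type": "possible_fraud", "account": acct}
    for acct, f in facts.items()
    if f.get("large_order") and f.get("odd_login")
]
print(higher_order)  # [{'type': 'possible_fraud', 'account': 'A1'}]
```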
The “big elephant in the room” in the ongoing CEP dialog is that most of the current (CEP) software on the market is not capable of machine learning and statistical analysis of dynamic real-time situations. Software vendors have been promoting and selling business process automation solutions and calling this approach “CEP” when, in fact, nothing is new. There is certainly no “technology leap” in these systems, as sold today.
Most BREs today are deployed as “decision services” and are used in “stateless” transactions to make “decisions” as part of a business process. A CEP application, by contrast, processes multiple event streams and sources over time, which requires a “stateful” rule service optimized for long-running operation. This is an important distinction: a stateful BRE for long-running processes needs failover support, that is, the ability to checkpoint its working memory so the application can be restarted or distributed. And of course long-running processes need to be very careful about issues like memory handling: no memory leaks allowed!
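A minimal sketch of that distinction, with all names invented: a stateful rule service that accumulates working memory across events and checkpoints it so the application can restart or fail over:

```python
# A minimal sketch (all names invented) of a stateful rule service whose
# working memory is checkpointed for failover/restart.

import json
import pathlib

class StatefulRuleService:
    def __init__(self, checkpoint="working_memory.json"):
        self.checkpoint = pathlib.Path(checkpoint)
        # Restore working memory after a restart, if a checkpoint exists.
        self.memory = json.loads(self.checkpoint.read_text()) if self.checkpoint.exists() else {}

    def on_event(self, event):
        """Accumulate state across events; fire a rule on a running total."""
        acct = event["account"]
        self.memory[acct] = self.memory.get(acct, 0) + event["amount"]
        self.checkpoint.write_text(json.dumps(self.memory))  # persist for failover
        if self.memory[acct] > 10_000:
            return {"type": "limit_breached", "account": acct}
        return None

svc = StatefulRuleService()
for e in [{"account": "A1", "amount": 6_000}, {"account": "A1", "amount": 5_000}]:
    alert = svc.on_event(e)
print(alert)  # {'type': 'limit_breached', 'account': 'A1'}
```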