Rob sees three key areas where rules can help:
Tighter warranty controls
Claims processing is improved because financial limits, detailed coverage types, materials return and more can be automated and rapidly changed when necessary. The rules also allow “what-if” testing and impact analysis.
Better built vehicles
Because rules track decision making closely, you can analyze specific repair types, specific VINs and so on. More effective parts return and generally better information also contribute.
Lower cost repairs
Rules allow goodwill repairs, labor-only repairs and specific kinds of repairs to be managed very precisely. Rules-driven decisioning can reduce cost variation between dealers and support intervention, rejecting or editing claims that seem overly expensive. The ability of rules to incorporate data mining and predictive analytics can also really help here.
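The kind of rules-driven adjudication described above can be sketched as a few explicit checks. This is a minimal, hypothetical illustration; the thresholds, field names and decision codes are invented for the example, not taken from any real claims system:

```python
# Hypothetical sketch of rules-driven warranty claim adjudication.
# Thresholds and field names are illustrative only.

def evaluate_claim(claim, financial_limit=1500.0, parts_return_above=500.0):
    """Apply simple warranty rules to a claim dict; return a decision code."""
    if claim["amount"] > financial_limit:
        return "REFER"   # overly expensive: route to a human adjuster
    if claim["amount"] > parts_return_above and not claim.get("parts_returned"):
        return "PEND"    # hold until the replaced parts are returned
    if claim["repair_type"] == "goodwill" and not claim.get("approved_by_manager"):
        return "REJECT"  # goodwill repairs need explicit approval
    return "APPROVE"

print(evaluate_claim({"amount": 300.0, "repair_type": "labor-only"}))  # APPROVE
```

Because each rule is a named, isolated condition, a limit can be changed (or a new rule added) without touching the rest of the logic, which is the "rapidly changed when necessary" property the text describes.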
Truviso continuously analyzes massive volumes of dynamic information, providing comprehensive visibility and actionable insights for any event, opportunity or trend on demand. Truviso empowers decision-makers with continuous:
Analysis - always-on, game-changing analytics of streaming data
Visibility - dynamic, web-based dashboards and on-demand applications
Action - extensible, data-driven actions and alerts
Truviso's approach is based on years of pioneering research, leveraging industry standards, so it's fast to implement, flexible to configure, and easy to modify as your needs change over time.
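The "always-on" analysis and alerting pattern described above can be illustrated with a tiny sliding-window monitor. This is a generic sketch of continuous stream analysis, not Truviso's actual implementation; the window size and threshold are invented for the example:

```python
# Minimal sketch of continuous analysis over a stream: a sliding-window
# average with a threshold alert. Parameters are illustrative only.
from collections import deque

class SlidingWindowMonitor:
    def __init__(self, window=5, alert_threshold=100.0):
        self.window = deque(maxlen=window)  # old values fall out automatically
        self.alert_threshold = alert_threshold

    def ingest(self, value):
        """Process one event; return (current average, alert flag)."""
        self.window.append(value)
        avg = sum(self.window) / len(self.window)
        return avg, avg > self.alert_threshold

monitor = SlidingWindowMonitor(window=3, alert_threshold=50.0)
for value in [10, 20, 120]:
    avg, alert = monitor.ingest(value)
print(avg, alert)  # 50.0 False
```

Each incoming event updates the analysis incrementally rather than re-querying stored data, which is what distinguishes streaming analytics from batch reporting.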
One year ago I penned Event Processing in Twitter Space, and today parts of the net are buzzing about Twitter.
In a nutshell, Twitter is a one-to-many communications service that uses short messages (140 characters or fewer). Following on the heels of the blogging phenomenon, Twitter has been used primarily for microblogging and group communications.
Twitter, and Twitter-like technologies, hold great promise in many areas. For example, you could be subscribed to the @tsunamiwarning channel on your dream island vacation and get instant updates on potential disasters. A team of people working in network management could subscribe to the @myserverstatus channel and receive updates on the health of their company's IT services. Passengers could subscribe to the @ourgatestatus channel and follow up-to-date information on their flight.
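The channel-subscription model in these examples is a classic publish/subscribe fan-out. The sketch below is a hypothetical, in-process illustration of that pattern, not Twitter's architecture; the class and channel names are invented:

```python
# Hypothetical sketch of a Twitter-style one-to-many channel: subscribers
# register callbacks on a named channel and each short message fans out.
from collections import defaultdict

class ChannelHub:
    MAX_LEN = 140  # Twitter's original message limit

    def __init__(self):
        self.subscribers = defaultdict(list)  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        if len(message) > self.MAX_LEN:
            raise ValueError("message exceeds 140 characters")
        for callback in self.subscribers[channel]:
            callback(message)

hub = ChannelHub()
received = []
hub.subscribe("@myserverstatus", received.append)
hub.publish("@myserverstatus", "web-02: disk 92% full")
print(received)  # ['web-02: disk 92% full']
```

The publisher never knows who is listening, which is why one @tsunamiwarning message can reach one subscriber or a million without any change on the sending side.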
Twitter was created to answer the simple question, “What are you doing now?”
It is this operational asymmetry that makes complexity in event processing necessary. In other words, as distributed networks grow in complexity, it is difficult to determine causal dependence when trying to diagnose a distributed networked system. Most who work in a large distributed network ecosystem (cyberspace) understand this. The CEP notion of "the event cloud" was an attempt to express this complexity and uncertainty (in cyberspace).
The one really big difference between Complex Event Processing and traditional BRMS tools is that the former is loosely associated with EDA and with decisions based on multiple events, whereas the latter is more associated with conventional request-reply SOA and with automating decisions made in managed business processes.
Rule processing is just a style of computation. Of course it is used in BRMS, but it is also used in CEP. CEP systems typically employ rules-based processing to infer higher-order events by matching patterns across many event streams within the event "cloud". BRMSs use rule processing to match patterns within data tuples representing business-oriented data. CEP systems may support the use of advanced analytics for predictive analysis, reasoning under uncertainty and other requirements relating to the event cloud. Some of the better BRMSs offer similar analytics for processing business data.
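Inferring a higher-order event from a pattern across a stream, as described above, can be sketched concretely. The scenario (failed logins aggregated into a "possible brute force" event) and all names are invented for illustration; real CEP engines express such patterns declaratively rather than in hand-written loops:

```python
# Illustrative sketch of rules-based CEP: infer a higher-order event when a
# pattern (three failed logins from one host within 60 s) matches the stream.
from collections import defaultdict

def detect_bruteforce(events, window=60, count=3):
    """events: iterable of (timestamp, host, kind). Yields inferred events."""
    recent = defaultdict(list)  # host -> timestamps of recent failures
    for ts, host, kind in events:
        if kind != "login_failed":
            continue
        # keep only failures inside the time window, then add this one
        recent[host] = [t for t in recent[host] if ts - t <= window] + [ts]
        if len(recent[host]) >= count:
            yield (ts, host, "possible_bruteforce")

stream = [(0, "h1", "login_failed"), (10, "h1", "login_failed"),
          (20, "h2", "login_ok"), (30, "h1", "login_failed")]
print(list(detect_bruteforce(stream)))  # [(30, 'h1', 'possible_bruteforce')]
```

The inferred event is itself an event and could feed further rules, which is how CEP builds up abstraction layers over the raw event cloud.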
Some definitions key on the question of the probability of encountering a given condition of a system once characteristics of the system are specified. Warren Weaver has posited that the complexity of a particular system is the degree of difficulty in predicting the properties of the system if the properties of the system’s parts are given. In Weaver’s view, complexity comes in two forms: disorganized complexity, and organized complexity. [2] Weaver’s paper has influenced contemporary thinking about complexity. [3]
The success of Service-Oriented Architecture (SOA) has created the foundation for information and service sharing across application and organizational boundaries. Through the use of SOA, organizations are demanding solutions that provide vast scalability, increased reusability of business services, and greater efficiency of computing resources. More importantly, organizations need agile architectures that can adapt to rapidly changing business requirements without the long development cycles typically associated with these efforts. Event-Driven Architecture (EDA) has emerged to provide more sophisticated capabilities that address these dynamic environments. EDA enables business agility by empowering software engineers with complex processing techniques to develop substantial functionality in days or weeks rather than months or years. As a result, EDA is positioned to enhance the business value of SOA.
The purpose of this white paper is to describe the approach employed to overcome the significant technical challenges of designing a dynamic grid computing architecture for a US government program. The program required optimization of the overall business process while maximizing scalability to support dramatic increases in throughput. To realize this goal, an architecture was developed to support the dynamic placement and removal of business services across the enterprise.
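The dynamic placement and removal of services described above implies some form of registry that routing can consult at run time. The sketch below is a hedged, generic illustration of that idea, not the program's actual architecture; all class, service and node names are invented:

```python
# Hedged sketch of dynamic service placement: a registry that lets business
# services be added to or removed from nodes at run time, so load can be
# rebalanced without redeploying. Names are illustrative only.
from collections import defaultdict

class ServiceRegistry:
    def __init__(self):
        self.placements = defaultdict(set)  # service name -> set of node ids

    def place(self, service, node):
        self.placements[service].add(node)

    def remove(self, service, node):
        self.placements[service].discard(node)

    def route(self, service):
        """Pick a node currently hosting the service (a real router would
        load-balance; this just takes the first in sorted order)."""
        nodes = self.placements.get(service)
        if not nodes:
            raise LookupError(f"no node hosts {service!r}")
        return sorted(nodes)[0]

registry = ServiceRegistry()
registry.place("claims-validation", "node-a")
registry.place("claims-validation", "node-b")
registry.remove("claims-validation", "node-a")
print(registry.route("claims-validation"))  # node-b
```

Because callers resolve the service name through the registry on each request, instances can appear and disappear across the grid without clients changing, which is the scalability property the paper targets.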
The “big elephant in the room” in the ongoing CEP dialog is that most of the current (CEP) software on the market is not capable of machine learning and statistical analysis of dynamic real-time situations. Software vendors have been promoting and selling business process automation solutions and calling this approach “CEP” when, in fact, nothing is new. There is certainly no “technology leap” in these systems, as sold today.