RuleXpress is a repository-based tool that can be used offline or in a multi-user environment. Models are stored in a central repository and can be checked out to a local copy and then merged back. Within the tool, the key organizing principle is that of a community - a group of people who share the same understanding of their vocabulary and rules. Within a community you can have projects, but the focus of the tool is on vocabulary/rule management as an ongoing task. The key activities are to manage vocabulary and rules or, more specifically, terms, the fact model, rules, decision tables and rule groups.
Various EP applications are mentioned besides algorithmic trading: external surveillance by regulators, risk management, auditing, market depth analysis and more.
Let's say you've identified a microdecision or two with economic leverage. What can you do to improve it? There are many possible interventions, and it's important not to always reach for the same one. One approach is to automate it entirely. This is the focus of James Taylor and Neil Raden's book Smart Enough Systems, and of Taylor's blog on enterprise decision management. If the decision is structured enough, that may be a good idea.
Folks who have been in event processing fields like network management (NMS) or security management for many years have very high expectations for processing complex events. Most of the network and security management platforms on the market have basic rule-based processing available “out of the box,” and most of these platforms have been able to process events in near real time for decades. Adding a new “rules-based event processing platform” to the network and security management software mix does little to add capability and certainly does not solve any nagging complex detection problems.
As I was reading the Smart Roads. Smart Bridges. Smart Grids. feature in the WSJ, I couldn't help but notice all the event processing scenarios and patterns. As you would expect, all the scenarios start with sensors and instrumentation and involve event (instrumentation) transmission, detection and processing. The order, frequency and end actions of transmission, detection and processing vary by scenario. What follows are excerpts from the article in the domains of smart transportation and grids and my associated event processing observations.
The Open Group Architecture Framework (TOGAF) is a framework - a detailed method and a set of supporting tools - for developing an enterprise architecture. It may be used freely by any organization wishing to develop an enterprise architecture for use within that organization (see Conditions of Use).
TOGAF is developed and maintained by members of The Open Group, working within the Architecture Forum (refer to www.opengroup.org/architecture). The original development of TOGAF Version 1 in 1995 was based on the Technical Architecture Framework for Information Management (TAFIM), developed by the US Department of Defense (DoD). The DoD gave The Open Group explicit permission and encouragement to create TOGAF by building on the TAFIM, which itself was the result of many years of development effort and many millions of dollars of US Government investment.
Starting from this sound foundation, the members of The Open Group Architecture Forum have developed successive versions of TOGAF and published each one on The Open Group public web site.
If you are new to the field of enterprise architecture and/or TOGAF, we recommend that you read the Executive Overview (refer to Executive Overview), where you will find answers to questions such as:
What is enterprise architecture?
Why do I need an enterprise architecture?
Why do I need TOGAF as a framework for enterprise architecture?
I had an interesting chat with Miko Matsumura, VP and Deputy CTO of Software AG, the other day. While we ranged widely, the official topic was Software AG’s launch of AlignSpace. AlignSpace is a hosted “Social BPM” solution supporting collaborative process discovery. The idea is that it will combine:
Social networking (around process definitions)
Collaborative design of processes
Translation of a wide variety of process models
A process marketplace
There aren’t many specific details yet (the site has an overview of these things but no details), but Miko discussed some of the key characteristics he felt an offering would need in order to deliver on this idea of social BPM:
Easy to use, with a low barrier to entry, so that those with process know-how but not technical skills (in modeling, for instance) can participate.
A pricing model that lets people participate, even those with a fairly small role.
Widespread access so that everyone can participate
Independence so that companies are not excluded because of their technology or standards choices.
Community - it is not enough to have “social media” features, it must actually build a community around processes
A marketplace of experts, skills and information must be created so people can buy and sell process expertise.
Multiple channels and types of events…
… executing in multiple Inference Agents (Event Processing Agents on an Event Processing Network)…
… where Events drive Production Rules with associated (shared) data…
… and event patterns (complex events) are derived from the simple events and also drive Production Rules via inferencing…
… to lead to “real-time” decisions.
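As a rough illustration of that chain (a sketch only, with made-up names rather than any particular CEP or rules product), the following Python fragment shows simple events driving production rules in an inference agent, a complex event derived from a pattern over those events, and that derived event feeding back through the same rules to reach a decision:

```python
# Sketch only: events from multiple channels flow through an inference agent that
# applies production rules over shared data; a derived (complex) event is fed back
# in and drives further rules, producing a "real-time" decision.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Event:
    kind: str
    data: dict

@dataclass
class InferenceAgent:
    """One Event Processing Agent: production rules over shared working memory."""
    rules: List[Callable[[Event, Dict, "InferenceAgent"], None]]
    memory: Dict = field(default_factory=dict)      # shared data the rules read/write
    derived: List[Event] = field(default_factory=list)

    def process(self, event: Event) -> None:
        for rule in self.rules:
            rule(event, self.memory, self)

# --- example production rules (illustrative names and thresholds) ---
def count_trades(event, memory, agent):
    if event.kind == "trade":
        memory["trades"] = memory.get("trades", 0) + 1
        if memory["trades"] >= 3:                   # pattern over simple events
            agent.derived.append(Event("burst", {"count": memory["trades"]}))
            memory["trades"] = 0

def decide_on_burst(event, memory, agent):
    if event.kind == "burst":                       # complex event drives a rule
        print(f"real-time decision: throttle orders after {event.data['count']} trades")

agent = InferenceAgent(rules=[count_trades, decide_on_burst])
for e in [Event("trade", {}), Event("trade", {}), Event("trade", {})]:
    agent.process(e)
for ce in agent.derived:                            # feed derived events back in
    agent.process(ce)
```

In a full Event Processing Network there would of course be several such agents chained together, each consuming the events or derived events of the others.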
A few days ago I came across an article that explained, in a very vivid way, why the business rules approach is so important. There are still only a few experts who assume this level of relevance for business rules management (BRM). Anyone seriously considering the use of BRM, that is, of automatable business rules, will ask themselves three decisive questions:
1. What are the benefits of business rules that justify an additional investment?
2. Why not simply code the rules directly?
3. Will the rules work reliably and, above all, integrate smoothly into the system?
Technically, the BPM/business rules approach places process logic in the BPM suite and decision logic in the business rules management system (BRMS). The process logic in a BPM suite sequences and controls activities and launches and cancels processes. Control is achieved with timers and exception handlers. Processes can be designed to recover from errors, restart processes and coordinate activities. The BRMS effectively designs, organizes and executes the logic behind a process decision. An effective BRMS can handle decision logic of any depth and complexity, including computationally complex and dense logic.
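To make that split concrete, here is a minimal sketch (assumed function names and fields, not any particular BPM suite or BRMS API) in which the process layer only sequences activities and handles exceptions, while every decision is delegated to a separate rule function:

```python
# Sketch of the separation described above: process logic sequences and controls,
# decision logic lives behind a single decision call that a BRMS would own.

def credit_decision(application: dict) -> str:
    """Decision logic (the BRMS side): rules of any depth can live here."""
    if application["amount"] > 50_000 and application["score"] < 650:
        return "refer"
    if application["score"] < 600:
        return "decline"
    return "approve"

def loan_process(application: dict) -> None:
    """Process logic (the BPM side): sequencing, control, exception handling."""
    try:
        outcome = credit_decision(application)      # the process invokes the decision
        if outcome == "approve":
            print("activity: open account")
        elif outcome == "refer":
            print("activity: route to underwriter")
        else:
            print("activity: send decline letter")
    except KeyError as missing:
        print(f"exception handler: incomplete application, missing {missing}")

loan_process({"amount": 75_000, "score": 640})      # -> activity: route to underwriter
```

The point of the separation is that the decision rules can be changed and redeployed without touching the process that calls them.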
Check out some of the new apps over at Chartbeat. Very cool real-time analytics with alerts. I wonder when Woopra will announce something similar?
When will Google Analytics become real-time and provide alerts for its millions of users?
Hats off to the chaps over at Chartbeat for beating them to the game! Well done.
The book is out: Yahoo! Web Analytics: Tracking, Reporting, and Analyzing for Data-Driven Insights.
His philosophy is that you should focus on three different but equally important tasks: A) Collecting Data, B) Reporting on Data and C) Deriving Insight from Data. Depending on one's vantage point, one or more of the chapters will be in focus. He has divided the book into three parts to reflect these broad tasks.
Part 1, “Advanced Web Analytics Installation,” consists of Chapters 1 through 5. The focus is on data collection. True competitive advantage in web marketing comes from collecting the right data, but also, and no less importantly, from configuring your web analytics tool in such a way that you can derive insight from the data. Part 1 features detailed code examples that webmasters or developers can apply directly. Marketing people and executives will learn what opportunities they can demand from this tool. He also shows you how to add reporting dimensions to the predefined report structures for fantastic filtering and segmentation opportunities.
Part 2, “Utilizing an Enterprise Web Analytics Platform,” encompasses Chapters 6 through 10, where he focuses on reports. Creating reports is an easy feat, but remember that reports are never better than the data you collect. You need an exceedingly good understanding of how to work with your data. Part 2 is less technical than the first part. In it he teaches you to use your reporting toolbox to provide targeted answers to specific questions, such as “How much revenue did we make from first-time organic search visitors from Canada last week?” For this and many other questions you’ll encounter there is no standard report, but you will know how to get this answer and hundreds of others when you’re through with this section.
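Purely as an illustration of the kind of ad hoc segmentation he means (this is not the book's Yahoo! Web Analytics reporting, just a hypothetical pandas query over raw visit data with assumed column names), the Canada question could be answered like this:

```python
# Illustrative only: answering "revenue from first-time organic search visitors
# from Canada last week" from a raw table of visits, one row per visit.
import pandas as pd

visits = pd.DataFrame([
    {"date": "2009-06-01", "first_visit": True,  "source": "organic search",
     "country": "Canada", "revenue": 120.0},
    {"date": "2009-06-02", "first_visit": False, "source": "organic search",
     "country": "Canada", "revenue": 80.0},
    {"date": "2009-06-02", "first_visit": True,  "source": "paid search",
     "country": "Canada", "revenue": 45.0},
])
visits["date"] = pd.to_datetime(visits["date"])

last_week = visits["date"] >= visits["date"].max() - pd.Timedelta(days=7)
segment = visits[last_week
                 & visits["first_visit"]
                 & (visits["source"] == "organic search")
                 & (visits["country"] == "Canada")]
print(segment["revenue"].sum())    # revenue for this specific segment
```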
Part 3, “Actionable Insights,” encompasses Chapters 11 through 13 and focuses on how to take action on your data to optimize your web property. Having gone through the effort of implementing the data collection and reporting strategies in Parts 1 and 2, you will have gained enough insight to start an optimization process. Part 3 introduces you to optimization using a set of actionable insights. This is merely an appetizer, and the handful of optimizations he presents are not, by any means, the only ones you can pursue. But the ideas and attitude behind them can most definitely be copied and carry you down other optimization avenues. Think of this section as an idea catalog. One of the most important questions he tackles in this section is paid search optimization.
In 2009, Web analytics managers have a multitude of different tools to choose from for deployment at their corporation. Tools from industry leaders such as Omniture, WebTrends, Unica, CoreMetrics, Google, and Yahoo are among the most popular, while options from smaller players like ClickTracks and Woopra exist as well. In theory, you deploy a tool, customize it to fit your needs, start analyzing the reports, and it all goes swimmingly, right?
Then why have many corporations already chewed through two, maybe even three, tools over the last several years, or deployed multiple tools in an attempt to arrive at where they need to be: delivering comprehensive and systematic analysis to their business community, helping to drive action from insight, and taking the mantras of “competing on analytics” and “data-driven culture” to the next level? Several factors create a disconnect between the promise of a tool and its successful use, and ultimately cause the tool to fail:
Enterprise architecture is a management practice that was initially developed within the IT discipline to manage the complexity of IT systems, as well as the ongoing change constantly triggered by business and technology developments.
Today, one of the primary reasons EA is adopted in organizations worldwide is to promote alignment between business requirements and IT solutions. EA is expanding into other business disciplines, as well: to enable business strategy development, improve business efficiency, facilitate knowledge management and assist with organizational learning, to name a few.
In order to effectively implement EA in organizations, architects are increasingly looking for best practices and frameworks to assist them. One of the few architecture frameworks publicly available to guide architects in their implementation is TOGAF. Put simply, TOGAF is a comprehensive toolset for assisting in the acceptance, production, use and maintenance of enterprise architectures. It is based on an iterative process model supported by best practices and a reusable set of existing architectural assets. Since it was developed by members of The Open Group Architecture Forum more than 10 years ago, TOGAF has emerged as arguably the de facto standard framework for delivering enterprise architecture.