Below are the new features of TOGAF 9. The bolded text is what was provided by the Open Group. The regular text is my commentary on it.
Modular structure – I am a firm believer that enterprise processes are modular pieces that should be orchestrated to address a specific set of concerns. It is good to see that TOGAF feels the same way.
Promotes greater usability & encourages incremental adoption – This is somewhat lofty and largely subject to implementation details. I do agree that the guidance provided promotes reusability, which is reinforced by the first bullet on modular structure.
Supports evolutionary release management –
Content framework – This is a significant step in the right direction. The content framework provides architects with a map of the information that is needed. From what I have seen so far, there isn’t a great amount of detail here, but I am sure more is to come.
Extended guidance on using TOGAF – The TOGAF book was expanded greatly with new guidance that extends the base concepts of TOGAF and supports new features.
Explicit consideration of architectural styles – In the guidance there are linkages between the TOGAF ADM and Service Oriented Architecture (SOA). I am hoping that this isn’t a tight coupling. If you are interested in my thoughts on architectural styles, I wrote a post on this not too long ago; see What is an Architecture Style?
SOA and Security – This could be interesting, but only if done right. The Open Group needs to strike a balance between too much developer-level detail (what OASIS & W3C provide) and high-level, nebulous guidance (what analyst firms provide) that isn’t actionable. What could be of great value is if the Open Group embarked on true architectural patterns and styles that would aid SOA and EA architects in choosing the right architectural strategies.
Further detail added to the Architecture Development Method (ADM)
This book ties together all of the relevant modern standards for business process analysis and modeling and shows how to apply them in practice.
At the core of the book is a business process methodology that combines the various standards in a practical and harmonious way. You will learn which standards exist, what they can be used for and how, and which possibilities, but also which limitations, come with them in practice. The foundation is BPMN (Business Process Modeling Notation), OSM (...), BMM (...), SBVR (...) and UML (...), with each standard deliberately covered only as far as is necessary for working with business processes.
You will learn how strategies, business rules, and business processes can be represented, and which structuring options are available to enterprise architects.
The book is aimed at business analysts, process designers, operations organizers, and related roles.
Jess/CLIPS rule set to demonstrate the use of Bayes' theorem in handling dependencies between uncertain beliefs. This rule set uses a very simple scenario in which a number of jars hold combinations of black and white jelly beans. As the test progresses, jars are randomly selected and beans are removed. The rule set processes a sequence of these bean draw events. Before each draw, the system uses Bayes' theorem to determine the probability of a black or a white bean being drawn from specific jars. The program answers questions of the form "If the next bean drawn is black, what is the probability that it will be drawn from jar 3?"
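For readers without Jess/CLIPS handy, here is a minimal sketch of the same Bayes' theorem calculation in Python; the jar contents and the uniform prior over jars are invented assumptions for illustration:

```python
# Hypothetical jar contents: each jar holds a mix of black and white beans.
jars = {
    "jar1": {"black": 4, "white": 6},
    "jar2": {"black": 6, "white": 4},
    "jar3": {"black": 8, "white": 2},
}

def posterior(jars, color):
    """P(jar | next bean is `color`), assuming each jar is equally
    likely to be selected (the prior)."""
    prior = 1.0 / len(jars)
    # Likelihood: P(color | jar) for every jar.
    likelihood = {
        name: counts[color] / (counts["black"] + counts["white"])
        for name, counts in jars.items()
    }
    # Evidence: P(color) = sum over jars of P(color | jar) * P(jar).
    evidence = sum(likelihood[name] * prior for name in jars)
    # Bayes' theorem: P(jar | color) = P(color | jar) * P(jar) / P(color).
    return {name: likelihood[name] * prior / evidence for name in jars}

# "If the next bean drawn is black, what is the probability that it
# will be drawn from jar 3?"
print(posterior(jars, "black")["jar3"])  # 0.444... with these made-up counts
```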
As I was reading the Smart Roads. Smart Bridges. Smart Grids. feature in the WSJ, I couldn't help but notice all the event processing scenarios and patterns. As you would expect, all the scenarios start with sensors and instrumentation and involve event (instrumentation) transmission, detection and processing. The order, frequency and end actions of transmission, detection and processing vary by scenario. What follows are excerpts from the article in the domains of smart transportation and grids and my associated event processing observations.
Some definitions key on the question of the probability of encountering a given condition of a system once characteristics of the system are specified. Warren Weaver has posited that the complexity of a particular system is the degree of difficulty in predicting the properties of the system if the properties of the system’s parts are given. In Weaver’s view, complexity comes in two forms: disorganized complexity, and organized complexity. [2] Weaver’s paper has influenced contemporary thinking about complexity. [3]
Overall this looks like a very strong release, especially with some of the core engine enhancements around temporal reasoning, support for XSDs and declarative type modeling. If the commercial vendors did not think they had real competition on their hands, Mark and his team will prove them wrong with 5.0. Drools 5.0 is not yet ready for release (they are hoping for a November release), but those of you who like playing with, and contributing to, code that is nearly ready can get it from the downloads page (scroll down). Michael Neale posted Drools 5.0 M2 New and Noteworthy Summary recently and Drools 5.0 M1 - New and Noteworthy before that. You can get periodic updates on the world of Drools from Mark and Michael on their blog.
Let’s start by recapping decision services. Decision services are services, generally stateless ones, that answer business questions for other services. Decision services typically have no side effects, so they can be called whenever they are needed without the caller worrying that something will change in the system. This means that database updates, event generation, or other actions taken as a result of the decision are performed by the caller, not by the decision service. This is not 100% true, but it works as a general rule. To work, decision services need to contain all the logic and algorithms necessary to make the decision correctly.
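As a minimal sketch of this separation of concerns (the function names, eligibility rule, and thresholds are all invented for illustration), the decision service computes an answer while the caller owns every side effect:

```python
# Hypothetical decision service: a pure function of its inputs, no side effects.
def credit_limit_decision(credit_score: int, annual_income: float) -> dict:
    """Answer a business question; never touch the database or emit events."""
    approved = credit_score >= 650 and annual_income >= 30_000
    limit = min(annual_income * 0.2, 10_000) if approved else 0.0
    return {"approved": approved, "limit": limit}

# Stand-ins for the caller's infrastructure.
def save_account(app_id, limit):
    print(f"saving account {app_id} with limit {limit}")

def publish_event(name, app_id):
    print(f"publishing {name} for {app_id}")

# The caller performs the database update and event generation that
# follow from the decision; the decision service stays side-effect free.
def handle_application(app_id: str, credit_score: int, annual_income: float):
    decision = credit_limit_decision(credit_score, annual_income)
    if decision["approved"]:
        save_account(app_id, decision["limit"])
        publish_event("AccountOpened", app_id)
    return decision

print(handle_application("app-42", 700, 55_000))
```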
Continuing some posts on next generation warranty systems in the build-up to speaking at the Warranty Chain Management conference, I thought I would contrast how current generation warranty systems handle critical decisions with how next generation systems do so.
Let's say you've identified a microdecision or two that has economic leverage. What can you do to improve it? There are many possible interventions, and it's important not to always use the same one. One approach is to automate the decision entirely. This is the focus of James Taylor and Neil Raden's book Smart Enough Systems, and of Taylor's blog on enterprise decision management. If the decision is structured enough, that may be a good idea.
1. Decisions are the unit of work to which BI initiatives should be applied.
2. Providing access to data and tools isn't enough if you want to ensure that decisions are actually improved.
3. If you're going to supply data to a decision-maker, it should be only what is needed to make the decision.
4. The relationship between information and decisions is a choice organizations can make, ranging from "loosely coupled," which is what happens in traditional BI, to "automated," in which the decision is made through automation.
5. "Loosely coupled" decision and information relationships are efficient to provision with information (hence many decisions can be supported), but don't often lead to better decisions.
6. The most interesting relationship involves "structured human" decisions, in which human beings still make the final decision, but the specific information used to make the decision is made available to the decision-maker in some enhanced fashion.
7. You can't really determine the value of BI or data warehousing unless they're linked to a particular initiative to improve decision-making. Otherwise, you'll have no idea how the information and tools are being used.
8. The more closely you want to link information and decisions, the more specific you have to get in focusing on a particular decision.
9. Efforts to create "one version of the truth" are useful in creating better decisions, but you can spend a lot of time and money on that goal for uncertain return unless you are very focused on the decisions to be made as a result.
10. Business intelligence results will increasingly be achieved by IT solutions that are specific to particular industries and decisions within them.
Folks who have been in event processing fields like network management (NMS) or security management for many years have very high expectations for processing complex events. Most of the network and security management platforms on the market have basic rule-based processing available “out of the box,” and most of these platforms have had the capability to process events in near-real time for decades. Adding a new “rules-based event processing platform” to the network and security management software mix does little to add any additional capability and certainly does not solve any nagging complex detection problems.
A number of architecture frameworks exist, each of which has its particular advantages, disadvantages, and relevance for enterprise architecture. Several are discussed in Other Architectures and Frameworks.
However, there is no accepted industry standard method for developing an enterprise architecture. The Open Group's goal with TOGAF is to work towards making the TOGAF ADM just such an industry standard method, one that can be used for developing the products associated with any recognized enterprise framework the architect feels is appropriate for a particular architecture. The Open Group's vision for TOGAF is as a vehicle and repository for practical, experience-based information on how to go about the process of enterprise architecture, providing a generic method with which specific sets of deliverables, specific reference models, and other relevant architectural assets can be integrated.
To illustrate the concept, this section provides a mapping of the various phases of the TOGAF ADM to the cells of the well-known Zachman Framework.
The Zachman Framework is a domain-neutral classification scheme for developing information systems, conceived by John Zachman in 1987.
It serves as a guide that suggests which aspects should be considered from which perspectives in order to successfully set up the IT architecture of an enterprise. This kind of modeling supports both the documentation and the planning of such a project, for example when one wants to trace which decisions led to which technical implementations.
The Zachman Framework is a framework for enterprise architecture, which provides a formal and highly structured way of viewing and defining an enterprise.
The Framework in practice is used for organizing enterprise architectural "artifacts" in a way that takes into account both:
who the artifact targets (for example, business owner and builder), and
what particular issue (for example, data and functionality) is being addressed.
These artifacts may include design documents, specifications, and models.[3]
The Framework is, in essence, a matrix.[4] It is named after its creator John Zachman, who first developed the concept in the 1980s at IBM. It has been updated several times since.[5]
- leave anything related to transport and communication to other layers
- use this revised CEP to express and execute event-relevant logic, the purpose of which is to translate the ambient events into relevant business events
- have these business events trigger business processes (however lightweight you want to make them)
- have these business processes invoke decision services, implemented through decision management, to decide what they should be doing at every step
- have the business processes invoke action services to execute the actions decided by the decision services
- all the while generating business events or ambient events
- etc.
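A minimal sketch of that chain (every name, event type, and threshold below is invented for illustration; real CEP, BPM, and decision management tooling would stand in for each function):

```python
# Hypothetical ambient-event -> business-event -> process -> decision -> action chain.

def detect_business_events(ambient_events):
    """CEP layer: translate raw ambient events into relevant business events."""
    for event in ambient_events:
        if event["type"] == "card_swipe" and event["amount"] > 1000:
            yield {"type": "LargePurchase", "payload": event}

def fraud_decision(purchase):
    """Decision service: answers 'what should we do?', nothing more."""
    return "block" if purchase["amount"] > 5000 else "allow"

def block_card(card_id):
    """Action service: executes the action the decision service chose."""
    print(f"blocking card {card_id}")

def review_purchase_process(business_event):
    """Lightweight business process triggered by a business event."""
    decision = fraud_decision(business_event["payload"])
    if decision == "block":
        block_card(business_event["payload"]["card"])

ambient = [
    {"type": "card_swipe", "card": "c-1", "amount": 250},
    {"type": "card_swipe", "card": "c-2", "amount": 7200},
]
for business_event in detect_business_events(ambient):
    review_purchase_process(business_event)
```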
For those unfamiliar with business-driven architecture, I believe the most viable, agile architectures will be composed of a blend of architecture strategies, including (but not limited to) service-oriented architecture, event-driven architecture, process-based architecture, federated information, enterprise integration and open source adoption.
A common task in many event processing systems is to detect patterns of events.
When combined, these patterns can form a situation consisting of multiple patterns over time.
A detected instance of a situation is therefore a specific sequence of events.
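As a minimal illustration of that idea (the event types and the pattern itself are invented), a situation can be detected by matching a sequence of event types over a stream:

```python
# Hypothetical situation defined as an ordered sequence of event types.
PATTERN = ["login_failed", "login_failed", "login_succeeded"]

def detect_situations(events, pattern=PATTERN):
    """Yield each window of events whose types match the pattern in order."""
    types = [e["type"] for e in events]
    for i in range(len(events) - len(pattern) + 1):
        if types[i:i + len(pattern)] == pattern:
            yield events[i:i + len(pattern)]

events = [
    {"type": "login_failed",    "user": "bob"},
    {"type": "login_failed",    "user": "bob"},
    {"type": "login_succeeded", "user": "bob"},
]
for situation in detect_situations(events):
    print("detected situation:", situation)
```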
The CEP module receives or intercepts a flurry of events and processes them with the objective of figuring out what those events are relevant to; it triggers the appropriate business processes or decision services.
The BPM module receives the request for a given process to be applied to a higher-level entity (an application, a document...); it automates the steps defined in the business process.
The BRMS module is invoked with a given context to apply business rules; it makes a business decision.
- First, event management is primarily about the identification and generation of business events from the ambient events, similar to what Carole-Ann and I had written in previous posts.
- Second, IBM wants to introduce high-level EPLs to express the logic for that processing that are business-centric, something very similar to what Business Rules Languages and approaches are in the business rules management area.