Tuesday, June 2, 2009

ORF 2009 Super Early Bird Discount Extension


October Rules Fest 2009 has extended the Super Early Bird discount of 20% off the normal registration fee through midnight, June 6th, 2009. Regular registration is $500, so that represents a $100 savings. If you have any questions, check out http://www.OctoberRulesFest.org for more info on speakers, agenda, etc. The hotel should be determined no later than Friday of this week, June 5th. Anyway, be SURE to register THIS WEEK to get the best savings. :-)

Parking at ANY hotel in downtown Dallas is astronomical (though still cheaper than New York, San Francisco, London or Paris), so it might be more economical to park-and-ride. Directions on how to do that will be posted on the ORF web page at a later date.


Thursday, May 28, 2009


Well, ORF 2009 is ready to rock and roll in downtown Dallas again, only this year the conference will be in the center of the downtown Dallas restaurant district rather than on the outskirts like last year. Just point Google Earth to 1400 Commerce Street in downtown Dallas and you'll see what I mean. Everything from McDonald's to The French Room to the "ultra cool" Fuse restaurant. After we check them out, the ORF web site will have an ever-expanding list of top (not necessarily expensive) restaurants within a two- or three-block walking radius.

We won't have the same space as last year so, counting the 30+ speakers that we will have, there will be room for only about 170 attendees - first come, first served. So, register early and you will still get your 20% discount. There is still no extra charge for tutorials if you are attending the conference, and this year's tutorials have a great lineup. As a matter of fact, the entire agenda is great. As you can see from the speaker bios, both the returning speakers and the new speakers for this year are strictly "top drawer" - the best of the best.

If your company is interested in becoming a sponsor, contact info@OctoberRulesFest.org. Again, remember that there is room this year for only 170 attendees. Maybe 180, but that's about it. More on the hotel as it develops, but it is down to a choice of only two really great hotels for the same price as last year - with more goodies.

So, register early and ensure that you have a great spot at the Second Annual October Rules Fest. Most of all, like last year, we're going to have FUN and we're going to enjoy ourselves.


Thursday, March 12, 2009

RETE Topology Cost Function and dynamic typing

I thought the general public might find this paper interesting. A few weeks back I posted a paper on duck and dynamic typing in RETE, which explores some of the challenges of supporting duck/dynamic typing.

The first half of the paper covers the typing issue. The second half explains a topology cost function. I've been using this approach for several years, but didn't bother to formalize it until now. A little bit of history on the topology cost function: back in 2001-2003, Said Tabet and I were asked by numerous people to quantify and qualify RETE performance. One of the things we did was to compile some rules in JESS and show which nodes would be visited when a set of facts was asserted. We did this manually at the time, since JESS doesn't provide a topology cost function out of the box.
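The node-counting idea can be sketched in a few lines. This is a hypothetical illustration only - the `Node`, `assert_fact`, and test names below are mine, not Jess's API or the paper's notation - but it shows the raw material a topology cost function works from: how many times each node in the network is visited as facts are asserted.

```python
# Hypothetical sketch of RETE node-visit counting; names are illustrative,
# not taken from Jess or any real engine.
from collections import defaultdict

class Node:
    def __init__(self, name, test, children=None):
        self.name = name              # label used in the visit report
        self.test = test              # predicate applied to an asserted fact
        self.children = children or []

def assert_fact(node, fact, visits):
    """Propagate a fact through the network, tallying each node visited."""
    visits[node.name] += 1
    if node.test(fact):               # only matching facts flow downward
        for child in node.children:
            assert_fact(child, fact, visits)

# A tiny two-level network: one type test feeding two attribute tests.
leaf_a = Node("age>30", lambda f: f.get("age", 0) > 30)
leaf_b = Node("status=gold", lambda f: f.get("status") == "gold")
root = Node("type=customer", lambda f: f.get("type") == "customer",
            [leaf_a, leaf_b])

visits = defaultdict(int)
for fact in [{"type": "customer", "age": 42},
             {"type": "customer", "status": "gold"},
             {"type": "order"}]:
    assert_fact(root, fact, visits)

# Visit counts are the input to a topology cost function, e.g.
# cost = sum(visits[n] * weight[n] for each node n).
print(dict(visits))
```

Running this shows the third fact stopping at the type test while the first two fan out to both leaves - exactly the kind of per-node traffic we used to tally by hand.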

I was inspired to write this stuff down when Johan Lindberg asked for more clarification on duck type RETE performance. Thanks to Johan and Joe for their invaluable assistance revising the paper. The paper is available here.


Thursday, February 19, 2009

BIZRULES looking for Rule Burst Guru


While this is NOT a jobs board, we probably should start one for rulebase consultants. For now, here is one that I know about:

BIZRULES is looking for a Haley Rules Modeler *

The Haley Rule Modeler designs and implements Haley Rules using the Haley data model. The Haley Rule Modeler is expected to have extensive experience implementing the Haley rules engine, particularly in the context of Siebel implementations.

* by "Haley Rules" the client means
- RuleBurst Rule Engine (BRE)
- SoftLaw Rule Engine (BRE) / Expert System (ES)
- Haley Office Rules
- Haley Expert Rules
- Haley Business Rule Engine (BRE)

This is for a long term project. RULEBURST/SOFTLAW experts anywhere in the world (Australia, UK, Canada, USA, etc.) are welcome to apply for this challenging opportunity! Relocation available for top candidates.

If you are interested and have experience with RULEBURST / SOFTLAW / HALEY RULES contact us or send your resume to


+1 305.994.9510

Call or email Rolando directly - not to me.


Monday, February 9, 2009

New Algorithm From Dr. Forgy


I published a blog post on Dr. Forgy's new algorithm at http://javarules.blogspot.com/2009/02/new-algorithm-from-dr-forgy.html which contains very few details, since there are few details out yet. BUT, you heard it here first. :-)


Friday, January 2, 2009

New Benchmarks for 2009


Again, right now, for 1Q2009 (or maybe for next year) we are looking for new benchmarks. If I can't find any, we'll have to rehash the old ones with a few new twists:

1. Waltz-50
2. WaltzDB-16
3. WaltzDB-200
4. 10K Telecom
5. 100K Telecom
6. MicroBenchmarks
7. Sudoku
8. 64-Queens

The 10K and 100K Telecom benchmarks do NOT exist yet. The 10K is almost finished but, for reasons of being an over-worked, out-of-work geek, I have not had time to work on it since about June or July of 2008. However, 1Q2009 is focused on Benchmarks 2009, second only to the normal first priorities of God, Family and Job. After that, the secondary focus will be ORF 2009 until October. The WaltzDB-200 has to be written for every engine except OPSJ and Drools, but it is just a matter of writing two Java classes and that part is pretty simple. It follows the Manners 4, 8, 12, 16, 64, 128, 256 method where we just keep adding more and more data. There is the problem of the code generator, of course.
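The "keep adding more data" approach can be sketched as a small data generator. To be clear, this is not the actual Manners or WaltzDB-200 generator - the fact format and field names below are made up for illustration - but it shows the pattern: the data file has the same shape at every size, and scaling the benchmark just means emitting more facts.

```python
# Hypothetical Manners-style data generator; the fact format is illustrative,
# not the real Manners/WaltzDB file format.
import random

def generate_guests(n, hobbies=3, seed=42):
    """Emit n guest facts, each with a sex and a fixed number of hobbies."""
    rng = random.Random(seed)         # fixed seed -> reproducible data sets
    facts = []
    for i in range(1, n + 1):
        sex = rng.choice(["m", "f"])
        # pick `hobbies` distinct hobby ids from a slightly larger pool
        for h in rng.sample(range(1, hobbies + 3), hobbies):
            facts.append(f"(guest (name n{i}) (sex {sex}) (hobby h{h}))")
    return facts

# Scale the data set the way Manners does: 4, 8, 16, ... guests.
for size in (4, 8, 16):
    print(f"; {size} guests -> {len(generate_guests(size))} facts")
```

The fixed seed matters: every engine being benchmarked must see the identical data set, or the comparison is meaningless.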

But, back to the topic at hand: What we need are benchmarks that will stress the entire rulebase, not just a few rules. Waltz is good in that it is a general problem but what we (OK, Dr. Forgy found it...) found was that, like Manners before it, only a couple of rules were being run most of the time. Fortunately, those two rules did stress the engine rather than just building a Fibonacci-type recursive algorithm as did Manners.

What we need mostly is consensus. Everyone agrees that what we have is not "sufficient" but no one has (as yet) defined "sufficiency." Certainly, we can talk more about this at ORF 2009 and discuss what a real performance benchmark would contain and what it would stress. The 10K and 100K Telecom benchmarks are not "true" benchmarks - they simply determine how long a particular engine takes to process rows and rows of data from a decision table. BUT they are all that I have for those engines that cannot process "normal" if-then-else rules.
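What a row-processing "benchmark" measures can be shown in a few lines. The decision table and the trivial stand-in "engine" below are entirely hypothetical - no real product works this way, and the Telecom rule content is not public - but the shape of the measurement is the same: push N rows through a first-match table and report wall-clock throughput.

```python
# Illustrative sketch of what a 10K-row decision-table timing measures.
# The table, rules, and "engine" are stand-ins, not any real product.
import time

# A single-hit decision table as ordered (condition, action) pairs.
TABLE = [
    (lambda row: row["minutes"] > 1000, "heavy-user"),
    (lambda row: row["intl"],           "intl-plan"),
    (lambda row: True,                  "basic"),      # default row
]

def classify(row):
    """First matching row wins, as in a single-hit decision table."""
    for cond, action in TABLE:
        if cond(row):
            return action

# 10K synthetic call-record rows.
rows = [{"minutes": i % 2000, "intl": i % 7 == 0} for i in range(10_000)]

start = time.perf_counter()
results = [classify(r) for r in rows]
elapsed = time.perf_counter() - start
print(f"10K rows in {elapsed:.4f}s ({len(rows)/elapsed:,.0f} rows/s)")
```

Note what this does and does not stress: it measures raw row throughput, but no row's result ever affects another row, so nothing like a RETE join network is exercised - which is exactly why I call these "not true benchmarks."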

Comments? Suggestions? Questions? Help is needed if we, the rulebase community, are to move forward and standardize on anything. What we don't want is a simple "toy" benchmark that won't really stress the engine. I have a feeling that either Sudoku or 64-Queens might be our last resort for consensus.

Most of you have blogging rights here - USE THEM! If you don't have blogging rights, apply for them and tell me which USA$1M project you have completed to qualify you for this august position and I shall certainly put you on the list of bloggers for ExSCG. :-)


Wednesday, December 17, 2008

Rulebase Systems Benchmarks 2009


With the First Quarter Benchmarks approaching, I have been getting things set up to do the dreaded "Waltz" benchmarks. For the spreadsheet vendors, this holds no fear whatsoever since they can't really do this kind of thing using decision tables. But for vendors of tools such as Blaze Advisor, JRules, Jess, Drools, CLIPS, CLIPS/R2 and OPSJ this means, once again, submitting themselves to the never-ending battle between engineering and marketing.

Daniel Selman came up with a suggestion: why not do something like http://www.spec.org/jAppServer2004/ ? I took a look and, quite frankly, I'm not impressed, for several reasons. (1) Each vendor has to be "trusted" to run the benchmark and report accurate results. I'm just too old and too jaded to trust vendors not to be swayed by marketing and the CxO gang into fudging their reports and then daring anyone to challenge them on the results. (2) Application server benchmarks are very generic and do not test the complexity of a rulebased system engine. (3) The results are confusing, given the plethora of engines, numbers of cores used, etc., so it's difficult to declare a "winner." Maybe this is what the marketing guys love about it; everyone is a winner because you can claim almost anything.

Here's the problem as I see it: if I claim to be 2.5m tall and nobody has a way to measure what I'm saying, then how can you dispute it? Or, if everyone has their own measuring stick, then everyone can claim whatever they wish. I still favor one source, one measuring stick (however flawed) and one clear "winner" with rankings. The source is still open and anyone can do whatever they like in the way of challenging the results, but at least we would know who was fastest on whatever test we ran.

Now, all I'm asking is that you take a look at the SPEC benchmarks and see what you think. Most of you are already "approved" bloggers on this site so don't comment, just blog. If you have not been approved, just drop me an email and I'll put you on the list. But, let's get this done and out of the way before the end of the year if possible.