Imagine the following description of a doggo:
- If there's a bone nearby and I'm hungry, I'll eat it.
- If I'm hungry (but there is no bone in sight), I will wander.
- If I'm not hungry, but I'm sleepy, I will sleep.
- If I'm not hungry and not sleepy, I'll bark and walk.
We have four propositions that we might implement as an FSM, for example. Clearly, each proposition implies a state, and each state can transition to any of the others. Something is not quite right here, even if we can't yet tell what it is.
It is well known that FSMs are best suited to defining behaviors that are:
- Local in nature (in each state, only an enumerable set of transitions is allowed)
- Sequential in nature (actions are carried out one by one)
The poor doggo described above isn't local, if you'll pardon the pun. If we look closer, we can see that each of the doggo's states can yield any other state, so the model is not local at all. There are no fixed sequences either: all the dog actually does is act according to a set of priorities, or rules.
Luckily, there is a way to model this kind of prioritized, global behavior: the rule system (RS). It lets us model many behaviors, including random and changing ones, that would be too complicated to express as FSMs.
At the core of an RS, there is a set of rules that drive our AI's behavior. Each rule has the form:
Condition => Action
The condition is also known as the left-hand side (LHS) of the rule, whereas the action is the right-hand side (RHS). Thus, we specify the circumstances that activate the rule and the particular actions to carry out when the rule is active. For our dog example, a more formal specification of the system would be:
(Hungry) && (Bone nearby) => Eat it
(Hungry) && (No bone nearby) => Wander
(Not hungry) && (Sleepy) => Sleep
(Not hungry) && (Not sleepy) => Bark and walk
Notice how we enumerated the rules and separated the conditions from the actions in a more or less formal way.
The execution of an RS is really straightforward. We test the LHS of each rule (the condition) in order and then execute the RHS (the action) of the first rule that is activated. This way, RSs imply a sense of priority: a rule closer to the top of the list takes precedence over a rule closer to the bottom.
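The execution cycle can be sketched as follows (hypothetical `run_rules` helper, assuming rules are priority-ordered `(predicate, action)` pairs over a state dictionary). Note how the priority ordering lets later rules state simpler conditions: "wander" no longer needs to check for the absence of a bone, because "eat" already took precedence.

```python
def run_rules(rules, state):
    """Fire the first rule whose condition holds; rules are priority-ordered."""
    for condition, action in rules:
        if condition(state):
            return action          # the first match wins
    return None                    # no rule applied this cycle

# The doggo's rules, simplified thanks to priority ordering.
rules = [
    (lambda s: s["hungry"] and s["bone_nearby"], "eat"),
    (lambda s: s["hungry"], "wander"),           # only reached if no bone
    (lambda s: s["sleepy"], "sleep"),            # only reached if not hungry
    (lambda s: True, "bark and walk"),           # default rule
]

print(run_rules(rules, {"hungry": True, "bone_nearby": False, "sleepy": False}))
# prints "wander"
```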
RSs, as opposed to FSMs, provide a global model of behavior. At each execution cycle, all rules are candidates for testing, so there are no implied sequences. This makes them better suited for certain kinds of AI.
Specifically, RSs are the better tool when we need to model behavior driven by guidelines. We encode the guidelines as rules, placing the more important ones closer to the top of the list so they are tried first.