Godot Rules

Game Units AI system
based on
facts and rules

(a module for the Godot game engine)
It's rather difficult to make a computer game that attracts players again and again without boring them.

Thus, every game developer wants to produce new games and update old ones as fast as possible, to stay afloat in a crowded and highly competitive market.
We're glad to present an approach that lets you express game logic in one of the most convenient ways:
in the language of facts and rules.
What's in the box?
GodotRules is an extension of the well-known Godot game engine that embeds CLIPS, the famous expert system shell developed at NASA.
This approach opens up vast possibilities: you can define an AI that changes its own behavior in the middle of the game.
It also keeps the game's behavior human-readable and inexpensive to modify.
Catch on
(defrule catch-the-ball
  (object (is-a Robot)
          (name ?rob)
          (position goalkeeper)
          (state IDLE))
  (object (is-a Ball)
          (name ?ball)
          (state IN-GAME))
  ?fact <- (?ball is near ?rob)
  =>
  (retract ?fact)
  (.. -> ?rob catch ?ball
      -> ?rob kick ?ball into field))

Module design principles

Rest upon Godot's design philosophy
Rules are used to manipulate Godot's base building blocks: Nodes and Scenes.
Easily embeddable into an existing project
In a few simple steps, it endows your game's units with a new type of AI.
Separation of Logic and Data
No more wasting time hunting for the right place in the code to modify.
Human-readable Rules
As close to natural language as possible.
Tight integration with common tooling
(a CLIPS-lang extension for Atom, VS Code, etc.)
If you want to get anywhere in life, don't break the rules — make the rules
— CLIPS User's Guide
More examples of usage:
Doggo example
Imagine the following description of some doggo:

- If there's a bone nearby and I'm hungry, I'll eat it.
- If I'm hungry (but there is no bone in sight), I will wander.
- If I'm not hungry, but I'm sleepy, I will sleep.
- If I'm not hungry and not sleepy, I'll bark and walk.

We have four propositions that we might implement in the form of an FSM (finite-state machine), for example.

Clearly, each proposition implies a state, and each state can transition to any of the others.

Something is not quite right here, but we can't tell what it is.

It is well known that FSMs are best suited to behaviors that are:

- Local in nature (for every given state, only a limited set of transitions is allowed)

- Sequential in nature (actions are carried out one by one).

The poor doggo described above isn't local, if you'll pardon the pun. Looking closer, we can see that any of the doggo's states can yield any other state, so the model is not local at all. There are no fixed sequences either. All the dog really does is act according to some priorities, or rules.

Luckily, there is a way to model this kind of prioritized, global behavior: a rule system (RS). It lets us model many behaviors, including random and changing ones, that would be too complicated to model as FSMs.

At the core of an RS, there is a set of rules that drive our AI's behavior. Each rule has the form:

Condition => Action

The condition is also known as the left-hand side (LHS) of the rule, whereas the action is the right-hand side (RHS). Thus, we specify the circumstances that activate the rule and the particular actions to carry out when the rule fires. For the dog example above, a more formal specification of the system would be:

(Hungry) && (Bone nearby) => Eat it
(Hungry) && (No bone nearby) => Wander
(Not hungry) && (Sleepy) => Sleep
(Not hungry) && (Not sleepy) => Bark and walk

Notice how we enumerated the rules and separated the condition from the actions in a more or less formal way.

The execution of an RS is really straightforward. We test the LHS of each expression (the conditions) in order and then execute the RHS (the action) of the first rule that is activated. This way, RSs imply a sense of priority. A rule closer to the top will have precedence over a rule closer to the bottom of the rule list.

RSs, as opposed to FSMs, provide a global model of behavior. At each execution cycle, all rules are tested, so there are no implied sequences. This makes them better suited for some AIs.

Specifically, RSs provide a better tool when we need to model behavior that is based on guidelines. We model directions as rules, placing the more important ones closer to the top of the list, so they are executed first.
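One caveat when moving from this conceptual model to CLIPS: CLIPS does not fire rules by their order in the file. Priority is declared explicitly with the `salience` declaration (higher salience fires first). A minimal sketch, with a hypothetical fact name:

```clips
;; Sketch: declaring priority explicitly in CLIPS.
;; "bone-nearby" is an illustrative fact; salience 100 pushes this
;; rule toward the top of the agenda.
(defrule eat-bone-first
  (declare (salience 100))
  (hungry)
  (bone-nearby)
  =>
  (printout t "Eat the bone" crlf))
```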
Soldier example
Let's look at a more involved example, so we can give some more advice on RSs. Imagine that we need to create the AI for a soldier in a large squadron. The rule system could be:

1. If in contact with an enemy => combat
2. If an enemy is closer than 10 meters and I'm stronger than him => chase him
3. If an enemy is closer than 10 meters and he's stronger than me => escape him
4. If we have a command from our leader pending => execute it
5. If a friendly soldier is fighting and I have a ranged weapon => shoot at the enemy
6. => Stay still

Again, just six rules are sufficient to describe the behavior of a relatively complex system. Now, the clever placement of the rules could allow some elegant design. If the soldier is in the middle of combat with an enemy but is given an order by the leader, he will ignore the order until he kills the enemy, because the _combat_ rule is higher on the priority list than the "follow order" rule.

Herein lies the beauty of RSs. Not only can we model behaviors, but we can also model behavior layering, or how we process the concept of relevance.

In addition, notice how the last rule does not have a condition; it is more of a "default action." It is quite common for the last rule to be one that is always true.

Once designed, RSs are very easy to actually code.
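For instance, the six soldier rules above might be sketched in CLIPS as follows, with salience values encoding the priority list. Every fact name and action here is a hypothetical placeholder, not the module's actual API:

```clips
;; Illustrative sketch: the soldier rule system in plain CLIPS.
;; Higher salience = higher priority, mirroring the numbered list.
(defrule combat
  (declare (salience 60))
  (enemy-in-contact)
  =>
  (printout t "Fight the enemy" crlf))

(defrule chase
  (declare (salience 50))
  (enemy-within-10m)
  (stronger-than-enemy)
  =>
  (printout t "Chase the enemy" crlf))

(defrule escape
  (declare (salience 40))
  (enemy-within-10m)
  (not (stronger-than-enemy))
  =>
  (printout t "Escape" crlf))

(defrule follow-order
  (declare (salience 30))
  (pending-order ?order)
  =>
  (printout t "Execute order: " ?order crlf))

(defrule support-fire
  (declare (salience 20))
  (friendly-fighting)
  (has-ranged-weapon)
  =>
  (printout t "Shoot at the enemy" crlf))

(defrule default-action
  (declare (salience 10))
  =>
  (printout t "Stay still" crlf))
```

The last rule has an empty LHS; in CLIPS such a rule matches the implicit initial fact and thus acts as the always-true default.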