Assessment of AI System

As I’ll say every time, this will not be finished; the primary purpose of this project is to see what does and doesn’t work for my legitimate projects. These are just dense notes and they’re not really well organized, but here’s what I have so far. TLDR at the bottom.

I don’t particularly like reading about complicated topics like AI, especially when they tend to be very dry and reference math that is way over my head. It’s more than likely that what I’m presenting here is somewhat common; I just wouldn’t know.

Let’s say you’ve made a simulated life form that has basic needs and desires it wants to fulfill. How does it fulfill them? The typical solution involves putting all of the individual needs on a scale, say 1-100. If one hits zero, the creature dies, or suffers a penalty, or whatever. So at any point in time the creature should try to accomplish tasks that raise its lowest need.
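As a minimal sketch of that baseline (the need names and values are illustrative, not from any real project), picking the lowest need could look like:

```python
def pick_need(needs: dict) -> str:
    """Return the most urgent need, i.e. the one with the lowest value."""
    return min(needs, key=needs.get)

# Illustrative needs on the 1-100 scale described above.
needs = {"hunger": 46, "thirst": 45, "energy": 80}
print(pick_need(needs))  # -> thirst
```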

Problem: context. If a creature has a thirst level of 45 and a hunger level of 46 then it should go get a drink, right? Well, neither of these values is particularly dire, so sure. But what if the creature is sitting in front of a berry bush? Should it really not eat just because it’s slightly more thirsty? No.

In a vacuum it should get a drink, but if we put a berry bush right in front of it then it should be able to assess its situation and adjust its behavior accordingly. So we apply a weight to all of those 1-100 values: ease of accomplishment.
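One way to sketch that weight (a hypothetical scoring scheme, not the author’s exact formula): multiply each need’s urgency by an ease-of-accomplishment factor, so a nearby berry bush can beat a slightly lower thirst value.

```python
def weighted_scores(needs: dict, ease: dict) -> dict:
    # urgency = how far the need is from full (100); ease scales it up or down
    return {n: (100 - level) * ease.get(n, 1.0) for n, level in needs.items()}

needs = {"thirst": 45, "hunger": 46}
ease = {"hunger": 2.0, "thirst": 0.5}  # berry bush right here, water far away
scores = weighted_scores(needs, ease)
best = max(scores, key=scores.get)
print(best)  # -> hunger, despite thirst being slightly lower
```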

How do we get those weights? First by probing our senses: what can we immediately see? We pair that with a rudimentary memory system. When the creature sees a stimulant such as food or water, it stores an entry in a set holding information about that stimulant: location, favorability, etc.
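A rough sketch of that memory set, assuming a frozen dataclass so entries are hashable (the fields are guesses at what “location, favorability, etc.” might contain):

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen -> hashable, so instances can live in a set
class StimulantMemory:
    kind: str            # e.g. "food" or "water"
    location: tuple      # (x, y) where it was last seen
    favorability: float  # how desirable this stimulant is

memory: set = set()
memory.add(StimulantMemory("food", (12, 4), 0.8))
memory.add(StimulantMemory("food", (12, 4), 0.8))  # duplicate sightings collapse
print(len(memory))  # -> 1
```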

I’m a little torn here. Would these things really remember the exact spot they last saw an apple on the floor? Probably not. They’d probably vaguely remember the area where they saw food. I have a system for that too; it’s just not plugged into anything. Basically, when a stimulant is sensed, the creature associates world grid sections with that stimulant rather than remembering its exact position. Some hybrid system is probably ideal, but if I had to choose just one, then in our case I’d go with the location-based memory rather than the specific item memory.
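The area-based variant could bucket sightings into grid cells instead of exact points; the cell size here is an arbitrary assumption:

```python
CELL = 10  # width of one world-grid section (arbitrary)

def grid_cell(pos: tuple) -> tuple:
    return (pos[0] // CELL, pos[1] // CELL)

area_memory: dict = {}  # stimulant kind -> set of grid cells it was seen in

def remember_area(kind: str, pos: tuple) -> None:
    area_memory.setdefault(kind, set()).add(grid_cell(pos))

remember_area("food", (12, 4))
remember_area("food", (17, 9))  # same cell: only "somewhere around there" is kept
print(area_memory["food"])      # -> {(1, 0)}
```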

Anyway, seeing something such as the berry bush is a stimulant. And the stimulants are what trigger the more dynamic AI. Stimulants can be a wide range of things. If needs decay past a certain threshold, that can be considered a stimulant. If the sun sets and the creature is not in its home then a feeling of being exposed can be a stimulant. The darker it gets the more intense the stimulant becomes. The weights are up to the designer but ideally this would cause truly dynamic creatures rather than robotic simulations. Without weights, when the sun starts to set a creature would compare its need to find shelter to its current task. If all the creatures had the same “stats” then at a certain point in time, under identical needs and situations, all the creatures would act the same and go home.
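The “darker it gets, the more intense the stimulant” idea can be sketched as an intensity function; the linear ramp is my assumption, since the post leaves the weights to the designer:

```python
def exposure_intensity(darkness: float, at_home: bool) -> float:
    """Exposure stimulant grows with darkness (0.0 = noon, 1.0 = pitch black)."""
    if at_home:
        return 0.0           # sheltered creatures feel no exposure
    return darkness * 100.0  # linear ramp onto the same 1-100 scale as the needs

print(exposure_intensity(0.5, at_home=False))  # half-dark -> 50.0
print(exposure_intensity(0.9, at_home=True))   # dark, but safe at home -> 0.0
```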

So if a creature needs to drink but the sun is setting, then fear of being exposed could overpower the need to drink. Pair that with “personality” weights and you could end up with a creature that is so afraid of the dark it would rather risk starving to death than go out at night. Alternatively, you could wind up with a creature whose desire to eat is consistently stronger than its desire to avoid darkness. You could tie these personality traits into their “genes” and track how different traits persist over generations. Weights allow this dynamic behavior to happen. This could definitely be expanded continually, but once I’ve run enough simulations under what I’ve outlined here I’ll likely stop. Socialization is incredibly interesting and it could fit into this stimulant system, but I’m not sure how well it would be integrated.
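A sketch of how a personality gene could scale a stimulant and pass to offspring; the averaging-plus-mutation scheme is purely hypothetical:

```python
import random

def effective_fear(base_fear: float, fear_gene: float) -> float:
    # personality weight scales the raw stimulant: a gene of 2.0 doubles the fear
    return base_fear * fear_gene

def inherit_gene(parent_a: float, parent_b: float, rng: random.Random) -> float:
    # child gene = parent average plus a small random mutation
    return (parent_a + parent_b) / 2 + rng.uniform(-0.1, 0.1)

rng = random.Random(42)
child = inherit_gene(1.8, 0.6, rng)  # one fearful parent, one bold one
# child gene lands in [1.1, 1.3], so this creature fears the dark more than average
print(effective_fear(40.0, child) > effective_fear(40.0, 1.0))  # -> True
```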

TLDR: It’s late and I’m not going to go back and proofread that. But basically the AI works like this: in a vacuum it weighs all of its needs and then chooses the one that needs the most attention. (I throw in RNG on top of that, but that’s up to the designer.)

Once it has chosen what task (need) it should fulfill, it starts to do it. While performing the task it can receive stimulants, such as seeing food or having its needs decay. When a stimulant occurs it weighs that stimulant against its current task. So if it’s on the way to get a drink but it crosses paths with an apple, it will turn to eat if it is hungry enough. (Again, I insert RNG here to make these behaviors seem a little more organic.) Or if it is playing to satisfy its entertainment need but its hunger decays, it will weigh the hunger against its current task of playing to see if it should stop and go eat.
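The interrupt check described above (weigh the stimulant against the current task, with a dash of RNG) might look roughly like this; the jitter amount is an arbitrary assumption:

```python
import random

def should_switch(current_priority: float, stimulant_priority: float,
                  rng: random.Random, jitter: float = 0.1) -> bool:
    # Noise keeps identical creatures from all flipping at exactly the same point.
    noisy = stimulant_priority * (1 + rng.uniform(-jitter, jitter))
    return noisy > current_priority

rng = random.Random(1)
# Walking to water (priority 58) when an apple appears (priority 60):
# with ±10% jitter the answer can go either way near the boundary.
decision = should_switch(58, 60, rng)
```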

6 Likes

An AI system always knows where every object lies within its virtual environment. But you can weight the distance from the fluffy into the calculation, since going to a river that is 500 units away might be too far to quench its thirst when there is an apple 5 units away from the fluffy. So eating that apple would be prioritized over drinking, even if the fluffy is slightly thirstier than it is hungry. But if thirst becomes critical, the algorithm needs to prioritize finding water.
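That trade-off can be sketched as a distance-discounted score plus a hard override for critical needs; the formula and threshold are my assumptions, not the poster’s:

```python
CRITICAL = 10  # below this level a need overrides everything else (arbitrary)

def score(need_level: float, distance: float) -> float:
    # lower need and closer resource -> higher score
    return (100 - need_level) / (1 + distance)

def pick(options: dict) -> str:
    """options maps need name -> (current level, distance to nearest resource)."""
    critical = [n for n, (level, _) in options.items() if level <= CRITICAL]
    if critical:
        return min(critical, key=lambda n: options[n][0])
    return max(options, key=lambda n: score(*options[n]))

# An apple 5 units away beats a river 500 units away...
print(pick({"thirst": (44, 500), "hunger": (46, 5)}))  # -> hunger
# ...until thirst becomes critical.
print(pick({"thirst": (8, 500), "hunger": (46, 5)}))   # -> thirst
```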

2 Likes

We’re talking about fluffies, so I think the proper term is AU.

It stands for artificial unintelligence, obviously.

5 Likes

Actually, my system stores its last seen location. So if it saw an apple on a hill and another creature moved the apple then the first creature would still think it was located on the hill.

And yep, that second part of your comment already happens which gives some nice results.

2 Likes

That’s great! So is the AI searching for the item in the vicinity after returning to the last seen location?

You could also add weighted beacons for things to simulate senses. For instance, an apple has a weight of 50 that decreases by 1 for every unit you are away from it, based on “smell”, whereas an apple tree would have a routing node attached to it to simulate “knowing” that it produces apples. Possibly on a countdown timer, so they’re not always going back to the same spot for food. That way, if there’s a need, the AI routes to the nearest known nodes for resources before scouting new sources.
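A sketch of that falloff, using the same numbers as the example above (the function name is hypothetical):

```python
def smell_strength(base_weight: float, distance: float,
                   falloff: float = 1.0) -> float:
    # beacon weight drops by `falloff` per unit of distance, never below zero
    return max(0.0, base_weight - falloff * distance)

print(smell_strength(50, 10))  # apple smelled from 10 units away -> 40.0
print(smell_strength(50, 60))  # too far away to smell at all -> 0.0
```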

Yes, it goes into a very basic “explore” behavior which picks a random point in the area and then tells the AI to go there. If someone wanted to get really technical they could integrate scents into this, but that’s way beyond where I’m at right now.
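That explore step could be as simple as sampling a random point around the last seen location; the search radius and RNG handling here are assumptions:

```python
import random

def explore_target(center: tuple, radius: float, rng: random.Random) -> tuple:
    # pick a random point within a square of side 2*radius around `center`
    return (center[0] + rng.uniform(-radius, radius),
            center[1] + rng.uniform(-radius, radius))

rng = random.Random(7)
x, y = explore_target((12, 4), 20.0, rng)  # wander near the remembered apple spot
```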

I was actually thinking about something like this last night. I was debating whether or not something like the berry bush should be one actor with high food weights or if it should be several dozen berry cluster actors that “overwhelm” the system. If it’s RNG based then having dozens of stimulants in something like a tree or bush could cause interesting behaviors. But it’d likely make them harder to control. I’m not sure but based on what I have now I’ll probably go with the first option.

1 Like