Artificial Intelligence


68 replies to this topic

#1 Malcolm Ryan

    Newbie

  • Member
  • 4 posts

Posted 20 May 2007 - 12:54 AM

What's the state of play with regard to the AI for this project? I'm a big fan of stealth games at least in concept, but I've usually found the AI to be disappointing.

As it so happens, I am an AI researcher myself, and I've given a fair bit of thought to making better stealth AIs. What is the interface for bot-writing like on this project? If it's reasonably accessible, I might be able to contribute something.

Malcolm

Edited by Malcolm Ryan, 20 May 2007 - 12:55 AM.


#2 greebo

    Heroic Coder

  • Root
  • 16054 posts

Posted 20 May 2007 - 05:00 AM

Do you have any programming experience in that regard?

#3 Crispy

    Uber member

  • Member
  • 4996 posts

Posted 20 May 2007 - 06:11 AM

The AI is basically functional but needs a lot of work. We're always looking for capable AI programmers!

AI is done partly using Doom 3 script, and partly using C++. Doom 3 script isn't nearly as powerful as C++, but it's more convenient for many tasks. The syntax will be reasonably familiar to anyone who knows C++ or Java.

Standard Doom 3 AI is done using a state machine, with each state having its own function in the AI script. This was plenty for Doom 3 AI, who basically just stood around doing nothing until the player came into the room, and then blindly attacked until they died.

We started out using the same system, but it became too clumsy so we decided to switch to a task-based system. The script can post tasks onto a priority queue. The AI will execute these tasks in order of priority. This has been half-completed; we're currently in the process of redesigning the way in which various stimuli (like seeing an enemy or hearing something suspicious) get posted on to the priority queue, so as to suit the task-based approach a bit better.
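The task-based approach described above can be sketched in a few lines. This is just an illustrative toy in Python, not the actual TDM/Doom 3 code; all class and task names are invented:

```python
import heapq
import itertools

class TaskQueue:
    """Toy sketch of a priority-based AI task queue."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker: FIFO among equal priorities

    def post(self, priority, task):
        # heapq is a min-heap, so negate: larger priority pops first
        heapq.heappush(self._heap, (-priority, next(self._order), task))

    def next_task(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = TaskQueue()
q.post(10, "patrol")
q.post(90, "attack_enemy")
q.post(50, "investigate_sound")
```

Stimuli (a sight, a sound) would simply call `post` with an appropriate priority, and the AI's think loop pops `next_task` each frame.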

Are you this Malcolm Ryan?
My games | Public Service Announcement: TDM is not set in the Thief universe. The city in which it takes place is not the City from Thief. The player character is not called Garrett. Any person who contradicts these facts will be subjected to disapproving stares.

#4 Malcolm Ryan

    Newbie

  • Member
  • 4 posts

Posted 20 May 2007 - 08:12 PM

Do you have any programming experience in that regard?


I have a PhD in Artificial Intelligence and I currently lecture in Game Design at the University of New South Wales.

#5 Malcolm Ryan

    Newbie

  • Member
  • 4 posts

Posted 20 May 2007 - 08:31 PM

The AI is basically functional but needs a lot of work. We're always looking for capable AI programmers!

AI is done partly using Doom 3 script, and partly using C++. Doom 3 script isn't nearly as powerful as C++, but it's more convenient for many tasks. The syntax will be reasonably familiar to anyone who knows C++ or Java.

Ick. I hate custom scripting languages. They are almost invariably written by people who have no actual experience in language design and therefore they tend to reproduce all the same bad features over again. Haven't they heard of Python, Ruby or Lua? <sigh>

We started out using the same system, but it became too clumsy so we decided to switch to a task-based system. The script can post tasks onto a priority queue. The AI will execute these tasks in order of priority. This has been half-completed; we're currently in the process of redesigning the way in which various stimuli (like seeing an enemy or hearing something suspicious) get posted on to the priority queue, so as to suit the task-based approach a bit better.

How are you keeping track of where the agents think the player might be? I've done a bit of work on tracking. I've been thinking about how you might go about giving guards fairly realistic searching behaviour, without making them either overly stupid or unenjoyably smart. (The purpose of any enemy AI being to put up a good fight and then lose.)

Are you this Malcolm Ryan?

Yes. You work at ANU right? I'll be visiting RSISE next month. Perhaps we can catch up? Drop me an email if you're interested.

#6 Ishtvan

    Programmer

  • Development Role
  • 14860 posts

Posted 20 May 2007 - 08:59 PM

Ick. I hate custom scripting languages. They are almost invariably written by people who have no actual experience in language design and therefore they tend to reproduce all the same bad features over again. Haven't they heard of Python, Ruby or Lua? <sigh>

IMO the D3 scripting language can basically be considered C++, minus some features (like structures and arrays :( ). So if you're familiar with C++ it shouldn't be that bad. We can also add features to it if needed, since that's all exposed in the Doom 3 C++ source.

How are you keeping track of where the agents think the player might be? I've done a bit of work on tracking. I've been thinking about how you might go about giving guards fairly realistic searching behaviour, without making them either overly stupid or unenjoyably smart. (The purpose of any enemy AI being to put up a good fight and then lose.)

SophisticatedZombie has done the most work on that, so he could give the best answer.

In short, the AI get "alert" stimuli from suspicious events in the form of sight, sound, touch, and "environmental." We're defining environmental stimuli as things out of place in their environment, like a dead body on the floor, something expensive missing, an arrow stuck in the wall, or a bunch of chairs flipped over and desk drawers opened in a high security area.

Sight, sound and touch alerts all have a location from which they originated (although we're planning to make the location of a sound less well resolved the farther away it is and the more apertures it had to travel through to get to the AI). Again, you'd have to ask SophisticatedZombie, but I believe the current behavior is this:

They initially walk toward the location of the stimulus. As they are walking, they are "thinking" about which areas to search once they get there. When thinking, they access an existing pathfinding grid which has conveniently divided up the area around them into a grid of nodes. The AI ranks each node based on how likely it is that someone is hiding there. This is determined by: 1. How close the node is to the stimulus location, 2. Visibility of that node from their current position. Visibility is determined by checking if their line of sight to the node is occluded, and if not how dark it is at that node. Less visibility means it's more likely that someone could be hiding there.

They then physically walk over these nodes in order of their "hiding spot probability" rank, with a bit of randomization and random wandering about the nodes.
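The ranking described above (closeness to the stimulus, plus low visibility) could be sketched like this. The field names and the 50/50 blend are hypothetical, chosen just to illustrate the idea:

```python
import math

def rank_hiding_spots(nodes, stimulus_pos):
    """Rank search nodes: near the stimulus and hard to see = likely hiding spot."""
    def score(node):
        closeness = 1.0 / (1.0 + math.dist(node["pos"], stimulus_pos))
        # occluded nodes can't be seen at all; otherwise, darker = better hiding
        hiddenness = 1.0 if node["occluded"] else node["darkness"]
        return closeness * (0.5 + 0.5 * hiddenness)
    return sorted(nodes, key=score, reverse=True)

nodes = [
    {"pos": (1, 0), "occluded": False, "darkness": 0.0},  # lit and visible
    {"pos": (1, 0), "occluded": True,  "darkness": 0.0},  # behind a crate
    {"pos": (5, 0), "occluded": True,  "darkness": 0.0},  # hidden but far away
]
ranked = rank_hiding_spots(nodes, (0, 0))
```

The guard would then walk the sorted list in order, with the randomization mentioned above layered on top.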

I think that's the status of the system right now. We're finding that it's still a bit too easy to hide from AI doing this, so having them cheat is still a possibility. There's also the issue that if you're hiding in a small dark spot surrounded by light spots and the AI get alerted enough to search near you, you're probably screwed.

There's definitely room for improvement when they get multiple alerts (right now I think they just start the process anew for each alert), and when they're in combat with the player and the player runs away (right now they lose the player too easily). Any ideas on searching behavior you have would certainly be welcome. :)

#7 Crispy

    Uber member

  • Member
  • 4996 posts

Posted 21 May 2007 - 12:58 AM

Ick. I hate custom scripting languages. They are almost invariably written by people who have no actual experience in language design and therefore they tend to reproduce all the same bad features over again. Haven't they heard of Python, Ruby or Lua? <sigh>

Evidently not... But then, this is id software we're talking about. They do tend to reinvent the wheel a bit, as you'll discover if you browse the Doom 3 SDK for a while. :)

How are you keeping track of where the agents think the player might be? I've done a bit of work on tracking. I've been thinking about how you might go about giving guards fairly realistic searching behaviour, without making them either overly stupid or unenjoyably smart. (The purpose of any enemy AI being to put up a good fight and then lose.)

Ishtvan's summary is pretty much spot-on. We could definitely do with improved tracking behaviour; it's pretty easy to escape from the AI at the moment.

Yes. You work at ANU right? I'll be visiting RSISE next month. Perhaps we can catch up? Drop me an email if you're interested.

Depends what you mean by "work"... :) Anyway, I'll send you an email in a sec.


@New Horizon: Time to give Malcolm some permissions? Then we can continue the discussion in a private forum.

#8 Crispy

    Uber member

  • Member
  • 4996 posts

Posted 30 May 2007 - 02:30 AM

*cough*

So, can we push ahead with this? I'd like to hear Malcolm's ideas. :)

#9 Nyarlathotep

    Advanced Member

  • Member
  • 1200 posts

Posted 30 May 2007 - 07:32 PM

You're not the only one who does. :)

#10 dracflamloc

    Member

  • Member
  • 15 posts

Posted 05 June 2007 - 07:33 AM

Yes, I'm interested to hear his thoughts from a fresh perspective, to compare with my own now that I've been browsing the SVN a bit.

#11 New Horizon

    Mod hero

  • Active Developer
  • 13853 posts

Posted 05 June 2007 - 12:21 PM

Sorry, I wish someone had PM'd me about this. I was quite busy with work during the week that this was discussed. I'll set up some permissions now.

#12 Malcolm Ryan

    Newbie

  • Member
  • 4 posts

Posted 05 June 2007 - 07:41 PM

*cough*

So, can we push ahead with this? I'd like to hear Malcolm's ideas. :)

Sorry, I've been at work on some other projects.

There's a lot of AI research on tracking which might be useful here, but it would require some adaptation. I'm thinking of something like a particle filter. It would keep track of a collection of possible locations for the player, weighted by likelihood. The weights are updated based on sensing: increasing them with positive evidence (sightings, noises) and decreasing them with negative evidence (seeing empty space where the player might have been). It should also be possible to do things like combining the knowledge of different guards and adding evidence from things like open doors or extinguished lamps.

Of course, the problem here is not to find the player's location (which we already know) but to simulate the guards' lack of knowledge in a realistic way. I think a scheme like this could give the guards an apparently "rational" way of working out where the player may be.

The question then remains of how the guards should use this knowledge, but that will require more thought.
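To make the particle-filter idea concrete, here is a bare-bones sketch. Everything here (class name, weights, radii, scaling factors) is invented purely for illustration; a real implementation would also resample particles and model player movement between updates:

```python
import math
import random

class PlayerTracker:
    """Candidate player positions ("particles") carrying likelihood weights."""

    def __init__(self, last_seen, n=200, spread=5.0):
        self.particles = [((last_seen[0] + random.uniform(-spread, spread),
                            last_seen[1] + random.uniform(-spread, spread)), 1.0)
                          for _ in range(n)]
        self._normalise()

    def _normalise(self):
        total = sum(w for _, w in self.particles)
        self.particles = [(p, w / total) for p, w in self.particles]

    def _scale_near(self, pos, radius, factor):
        self.particles = [(p, w * factor if math.dist(p, pos) < radius else w)
                          for p, w in self.particles]
        self._normalise()

    def hear_noise(self, pos, radius=3.0):
        self._scale_near(pos, radius, 5.0)   # positive evidence: boost nearby particles

    def see_empty(self, pos, radius=3.0):
        self._scale_near(pos, radius, 0.1)   # negative evidence: suppress them

    def best_guess(self):
        return max(self.particles, key=lambda pw: pw[1])[0]
```

Combining the knowledge of several guards would then just mean multiplying their evidence into the same particle set.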

#13 Baddcog

    Mod hero

  • Development Role
  • 5360 posts

Posted 05 June 2007 - 10:09 PM

Sounds pretty cool.

This is probably obvious to you, but if the locations have weights applied and the AI navigate towards the higher weights, the lower weights should fall off as the others build. Seems fairly realistic to me. Maybe the dropped weights would be put on a backburner, so the AI stop calculating them once they have... sorry, tired...

Say you have 4 locations the AI notices, each with a weight of 5 (I assume this is what you have in mind; I don't program so I'm guessing). A noise is heard at one of them, so its weight goes up to 8. Maybe to save resources each AI can only have 20 points of weight. That means 3 points need to be subtracted from the other 3 locations, maybe by distance: "that shadow is furthest from the noise, I'll look there last", so it loses 2 points to sit at 3. The next furthest loses 1 point.

Now they are 8, 5, 4, 3. If a location gets to 3 it is on the backburner: the AI will store it but not go there unless all other places have been checked, or a noise is made there. Otherwise it would head for the 8 first, check, then go to the 5...

?
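Baddcog's fixed-budget idea could be sketched like this (his numbers; the function itself is hypothetical). Instead of hand-picking which locations shed points by distance, this simpler variant rescales everything proportionally so the total always stays at the budget:

```python
def bump_weight(weights, index, amount, budget=20.0):
    """Raise one location's weight, then rescale so the total stays fixed."""
    weights = list(weights)
    weights[index] += amount          # the noise bumps this location
    scale = budget / sum(weights)     # everything else falls off proportionally
    return [round(w * scale, 2) for w in weights]

# four locations at weight 5 each; a noise bumps the first one by 3
weights = bump_weight([5, 5, 5, 5], index=0, amount=3)
```

The backburner rule then becomes a simple threshold test on the rescaled weights.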
Dark is the sway that mows like a harvest

#14 Crispy

    Uber member

  • Member
  • 4996 posts

Posted 05 June 2007 - 10:27 PM

@NH: Yeah, guess I should have PMed you earlier. Sorry.

@Malcolm: No worries; thanks for taking the time, I know first-hand how busy things can get at university. :)

This is more sophisticatedZombie's area, but to me this sounds similar to what the searching routines do already, except better formalised and with more consideration given to the weightings. As Ishtvan said above:

When thinking, they access an existing pathfinding grid which has conveniently divided up the area around them into a grid of nodes. The AI ranks each node based on how likely it is that someone is hiding there. This is determined by: 1. How close the node is to the stimulus location, 2. Visibility of that node from their current position. Visibility is determined by checking if their line of sight to the node is occluded, and if not how dark it is at that node. Less visibility means it's more likely that someone could be hiding there.

This is pretty similar to what you describe, except for how the weightings are managed; each new alert essentially causes all the weightings to reset, whereas your approach calls for maintaining and adjusting the weightings over time. This shouldn't be too hard to implement, I hope.

As you say, the question then arises: What does the AI do with this information? At the moment it just sorts the nodes in order of weighting and investigates each of them in turn (with some optimisations for favouring nearby spots over distant ones, so that less time is spent travelling around). Off the top of my head I can't think of any better way to do it.
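The "maintain rather than reset" change could be as simple as blending the old weights with the new alert's evidence instead of overwriting them. This is illustrative code, not what's in SVN; the blend factor is an assumption:

```python
def update_on_alert(old_weights, evidence, retain=0.5):
    """Blend prior suspicion per node with the new alert's evidence."""
    # retain=0.0 reproduces the current reset-on-every-alert behaviour;
    # values near 1.0 give the guard a long memory of earlier alerts
    nodes = set(old_weights) | set(evidence)
    return {n: retain * old_weights.get(n, 0.0)
               + (1.0 - retain) * evidence.get(n, 0.0)
            for n in nodes}

suspicion = {"alcove": 0.8}  # left over from an earlier alert
suspicion = update_on_alert(suspicion, {"doorway": 1.0})
```

After the update the alcove is still worth checking, just less urgently than the doorway.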

@Baddcog: I would guess that the important thing is the relative weight of each location, so you wouldn't necessarily need to reduce the weights of other locations just because the weight of one location increased. This is more of an implementation detail than a high-level concern anyway.

#15 demagogue

    Mod hero

  • Active Developer
  • 5415 posts

Posted 06 June 2007 - 10:25 PM

This is all very interesting .... too bad it'll probably all be moved to the private forums soon.

I studied a bit of Paul Glimcher's lab work on "vision & risk/decision-making" in the brain, if you are interested in how the brain's vision system addresses these exact kinds of issues ... in particular I studied eye saccades, which are a kind of search routine. It sounds very similar to the way Malcolm was explaining it, relative-weighted areas of the visual field getting updated as evidence comes in ... (well, more complicated, as you could imagine ... You could just google Paul Glimcher, vision, and NYU to find his papers if you want more detail on how LIP, the brain area he studies, does it).

The one major element it adds to what you were saying is that it isn't just "likelihood" alone, but (likelihood of a hit) * (expected payoff, if it's a hit) ... in effect, relative expected utility of a hit (REU = L*P). You've already thought about likelihood well (shadows before well lit areas, heard sounds+, etc), so I don't need to really discuss it. And you've also got a system where the weights are relative to one another built into the hierarchy of choice based on weight, so I don't really need to talk about that, either.

As for payoff, in most cases the expected payoff is probably the same -- a "hit" will be the same thief every time; there he is. It's not like some shadows are more likely to carry more thieves than other shadows so go to those first (the traditional way P works). If the task is just to find that one guy, then that's it. So it might not apply so well.

But just so I don't make a completely irrelevant point, another possible way to think about payoff is the same thief in different situations. E.g., there might be situations where the guard has a better chance of catching the thief by surprise or at least off-guard ... that's a higher payoff for the guard. ... So you might tweak the scales a little, so that evidence indicating that a certain approach might catch the thief off guard (e.g., coming from behind the thief rather than in front of him) bumps up the P factor in the weight a little more than usual for that direction relative to the other (of course, the L factor might still outweigh it for the other direction). Or an approach which better cuts off the thief's exit.

Or, vice versa, things that lessen the expected payoff for the guard might get bumped down, e.g., if the direction puts the guard in a particularly vulnerable position if he catches the thief, relative to catching him from another position. Or, e.g., distance/work to get to the spot might also tweak the payoff, insofar as it leaves him more vulnerable to fight vs. another approach (need to think about that; if fatigue were a factor it definitely would, but I don't think you'll have that, or some approaches making fighting easier or harder for him, not sure.)

The point is that these wouldn't be independent factors affecting the weighting, but a multiplier to the factors you already have (which are most all, in effect, "likelihood" factors), that have the potential to tweak a little in either direction with the right evidence, otherwise it's just set at "1". Although maybe this graduated-approach you have to weighting--bits of evidence tweak the amount this way or that--amounts to the same thing in effect.

My examples are just me thinking on the fly; take them with a grain of salt. You need to think the idea through. You might decide it's more important to get a hit at all than to worry about the relative value of that hit when it happens (well, really this is about the same hit from one place relative to another place: from a more or less advantageous position for the guard, coming from behind the thief, cutting off his exit, etc.).

But anyway, because so much of the literature always has this equation of REU = L*P for rational saccade/search behavior, it's worth spending a little time thinking about how the P (expected payoff) factor might work for you, as well as the L factor that you've been working with ... maybe in a much different way than my examples. Since there's just one hit, maybe it's not all that important as it would be in other situations (e.g., where there are other potential hits at play), but it might come up in other ways like I was trying to think out. It's just worth thinking about, that's all I'm saying, outsider that I am. -_- :)
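demagogue's point reduces to simple arithmetic: rank candidate spots by relative expected utility, REU = L * P. With P left at its default of 1.0 this collapses to the pure-likelihood ranking already described in the thread; the numbers below are invented for illustration:

```python
def reu(likelihood, payoff=1.0):
    """Relative expected utility of searching a spot: REU = L * P."""
    return likelihood * payoff

# a dark spot in front of the thief vs. a dimmer spot behind him,
# where approaching from behind earns a surprise bonus to the payoff
front = reu(likelihood=0.8)                  # P defaults to 1.0
behind = reu(likelihood=0.6, payoff=1.5)     # less likely, but higher payoff
```

Here the payoff multiplier flips the ranking: the guard prefers the approach from behind even though the thief is less likely to be there.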

Edited by demagogue, 07 June 2007 - 02:19 PM.


#16 sophisticatedZombie

    Advanced Member

  • Member
  • 615 posts

Posted 09 June 2007 - 09:29 PM

Have we gotten any more information from the applicants? We have talked in the past about having the AI try to consider possible movement of the thief, modelling its possible paths from the source of the stimulus. It sounds like some of the applicants have worked in that area, so it would be possible for them to jump into that part of the system.

#17 New Horizon

    Mod hero

  • Active Developer
  • 13853 posts

Posted 10 June 2007 - 09:53 PM

Have we gotten any more information from the applicants? We have talked in the past about having the AI try to consider possible movement of the thief, modelling its possible paths from the source of the stimulus. It sounds like some of the applicants have worked in that area, so it would be possible for them to jump into that part of the system.


I haven't heard anything more yet, but do try private messaging Malcolm, perhaps the two of you could discuss some things.

#18 Tels

    Mod hero

  • Member
  • 15024 posts

Posted 01 November 2007 - 05:02 AM

I have a PhD in Artificial Intelligence and I currently lecture in Game Design at the University of New South Wales.


God, now I really feel inadequate. :/

Hm, maybe I could move to New South Wales and hear some lectures? Would depend on which continent NSW is on :)
"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." -- George Bernard Shaw (1856 - 1950)

"Remember: If the game lets you do it, it's not cheating." -- Xarax

#19 Dram

    Disco Inferno

  • Campaign Dev
  • 7462 posts

Posted 01 November 2007 - 05:37 AM

God, now I really feel inadequate. :/

Hm, maybe I could move to New South Wales and hear some lectures? Would depend on which continent NSW is on :)


Australia ;) I'm here too

#20 Tels

    Mod hero

  • Member
  • 15024 posts

Posted 01 November 2007 - 06:09 AM

Australia ;) I'm here too


Oh yeah, now I just need to find a way to move to Australia. Does studying game theory at a university count? :D

#21 Crispy

    Uber member

  • Member
  • 4996 posts

Posted 01 November 2007 - 06:16 AM

If you're in the US, there are dedicated gamedev schools that would require less drastic moves. :)

Who am I kidding, come and live in Australia, it's a better place to live IMO. :P

#22 Dram

    Disco Inferno

  • Campaign Dev
  • 7462 posts

Posted 01 November 2007 - 06:51 AM

If you are planning on actually moving to Aus, avoid Sydney housing, as it's -really- expensive, at least in the city anyway. Apparently Sydney has the highest house prices in relation to average pay in the world, which really sucks cos I live here.

#23 Tels

    Mod hero

  • Member
  • 15024 posts

Posted 01 November 2007 - 07:27 AM

If you're in the US, there are dedicated gamedev schools that would require less drastic moves. :)


I am in Europe (despite what my profile on the left says :) and there is NO way I am moving in any way or shape towards North America in the current climate :/

Who am I kidding, come and live in Australia, it's a better place to live IMO. :P


As seen from Europe, it is pretty much "dreamland" (since I've never been there :), but I bet it's different once you are there :) Or maybe not :)

#24 New Horizon

    Mod hero

  • Active Developer
  • 13853 posts

Posted 01 November 2007 - 09:36 AM

Hi Intruder,

I'll have you set up in the application forum shortly.

#25 Nyarlathotep

    Advanced Member

  • Member
  • 1200 posts

Posted 01 November 2007 - 02:27 PM

If you're in the US, there are dedicated gamedev schools that would require less drastic moves. :)

Who am I kidding, come and live in Australia, it's a better place to live IMO. :P

This American concurs. He just wishes that your politics weren't even more reactionary than Shrub's.



