Jared Koohestani

Video Games, Virtual Simulations, and AI Development




Video Games 


At their core, video games can be broken down into a few main components (a minimal loop sketch follows this list): 

  1. Game logic: the brain of the game 

  2. Visual design: what you see on the screen 

  3. Physics engine: the realistic physics within the game 
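
To make that breakdown concrete, here is a minimal sketch of how those pieces might fit together in a game loop. The class and method names (GameLogic, PhysicsEngine, Renderer, world.running) are illustrative, not taken from any particular engine.

```python
# Minimal, illustrative game loop showing how the main components interact.

class GameLogic:
    def update(self, world, dt):
        # Decide what happens next: scoring, win/lose conditions, NPC decisions.
        pass

class PhysicsEngine:
    def step(self, world, dt):
        # Move objects, resolve collisions, apply gravity.
        pass

class Renderer:
    def draw(self, world):
        # Turn the current world state into what you see on screen.
        pass

def game_loop(world, dt=1 / 60):
    logic, physics, renderer = GameLogic(), PhysicsEngine(), Renderer()
    while world.running:
        logic.update(world, dt)   # game logic: the "brain" of the game
        physics.step(world, dt)   # physics engine: realistic movement
        renderer.draw(world)      # visual design: what you see
```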


Now within this main structure you also have: 

  1. Audio design: sound effects, music, etc. 

  2. Networking for multiplayer functionality 

  3. A.I. that controls NPCs. Professor Chih-Pu breaks this down beautifully in our A.I. discussion, but essentially you have: 

    1. Rule-based A.I. - an "if this, then that" structure. If a player comes within 30 feet of the NPC, then the NPC will attack the player (sketched in code after this list). 

    2. Script-based A.I. - which can take the form of linear or branched dialogue. 

    3. Data-fed, model-based A.I. - trained on vast amounts of data, it essentially learns patterns and relationships within the data to generate new dialogue, control characters, and make decisions within the game. 

    4. Reinforcement learning A.I. - which learns and adapts through experience within the game. 
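
To make the first two categories concrete, here is a rough sketch of the rule-based check described above plus a tiny branched dialogue tree. Names like distance_to, attack, and patrol are made up for illustration, not from any real engine.

```python
# Rule-based A.I.: "if the player comes within 30 feet, then attack."
def npc_update(npc, player):
    if npc.distance_to(player) <= 30:   # the "if this" condition
        npc.attack(player)              # the "then that" action
    else:
        npc.patrol()

# Script-based A.I.: a tiny branched dialogue tree.
dialogue = {
    "start":  {"text": "Halt! Friend or foe?",
               "choices": {"Friend": "friend", "Foe": "foe"}},
    "friend": {"text": "Pass, traveler.", "choices": {}},
    "foe":    {"text": "Then defend yourself!", "choices": {}},
}

def run_dialogue(node="start"):
    while True:
        entry = dialogue[node]
        print(entry["text"])
        if not entry["choices"]:        # leaf node: conversation ends
            return
        choice = input(f"Choose {list(entry['choices'])}: ")
        node = entry["choices"].get(choice, node)
```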

 

A.I. models have been learning through experience within games for a while now. StarCraft II, Minecraft, Atari games, and Doom have all been used to train A.I. 

 


Simulation Solutions


At their core, simulation solutions are no different from video games, but I want to shed some light on possible applications beyond classic rule-based and script-based A.I. integration. 


This is a bird's-eye view, since we could spend hours on the specifics of application. This presentation is designed to engage you in critical thought about how the evolution of A.I. can expand the possibilities for enhancing your services and setting you apart from the crowd. 


Reinforcement learning A.I. in military simulation programs can present dynamic opponents and allies. A.I. can populate virtual cities with realistic enemy combatants and civilians, and A.I.-controlled characters can adapt their behavior based on the trainees' actions. A.I. can learn from its own experience and adapt to the environment, which can give a scenario the unpredictability it may need beyond rule-based and script-based A.I., because the real world doesn't necessarily follow rules and scripts. 
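
As a rough illustration of how a reinforcement learning opponent differs from a scripted one, here is a minimal tabular Q-learning sketch. The states, actions, rewards, and the env object are hypothetical stand-ins for whatever a simulation engine would actually expose.

```python
import random
from collections import defaultdict

# Minimal Q-learning sketch: the NPC learns which action works best in each
# situation from experience, rather than following a fixed rule or script.
ACTIONS = ["advance", "take_cover", "flank", "retreat"]
q_table = defaultdict(float)          # (state, action) -> learned value

def choose_action(state, epsilon=0.1):
    # Mostly exploit what has worked so far, occasionally explore something new.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def learn(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # Standard Q-learning update: nudge the value toward the reward plus the
    # discounted best future value.
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += alpha * (reward + gamma * best_next
                                         - q_table[(state, action)])

def train(env, episodes=1000):
    # Hypothetical training loop; `env` stands in for the simulation engine.
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = choose_action(state)
            next_state, reward, done = env.step(action)
            learn(state, action, reward, next_state)
            state = next_state
```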


First responder applications can be implemented similarly. Imagine large-scale crisis scenarios where A.I. goes beyond scripts. The A.I. can manage large numbers of virtual casualties with varying injuries and medical needs. Trainees can practice triage protocols, allocate resources, and coordinate a response, but the A.I. model doesn't only control the NPCs; it can also govern casualty conditions based on treatment decisions. Improper care can simulate a worsening of the casualty's condition, triggering secondary effects like internal bleeding or shock, and potentially a virtual death if left untreated. You witness the impact of your decisions. 
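
Here is a hedged sketch of what "the A.I. governs casualty conditions" could look like: a simple progression where untreated conditions worsen over time and correct treatment stabilizes the casualty. The condition names, treatments, and probabilities are invented for illustration.

```python
import random

# Illustrative casualty model: untreated conditions can worsen each step and
# trigger secondary effects; proper treatment stabilizes the casualty.
PROGRESSION = {
    "bleeding": "shock",            # untreated bleeding can progress to shock
    "shock": "cardiac_arrest",
    "cardiac_arrest": "deceased",
}
CORRECT_TREATMENT = {"bleeding": "tourniquet", "shock": "fluids",
                     "cardiac_arrest": "cpr"}

class Casualty:
    def __init__(self, condition):
        self.condition = condition
        self.stable = False

    def treat(self, treatment):
        # Correct treatment stabilizes; the wrong one leaves the casualty at risk.
        self.stable = (treatment == CORRECT_TREATMENT.get(self.condition))

    def tick(self):
        # Called each simulation step: untreated casualties may deteriorate.
        if not self.stable and self.condition in PROGRESSION:
            if random.random() < 0.3:   # chance of worsening per step
                self.condition = PROGRESSION[self.condition]

# Example: a correctly treated casualty stays stable; left untreated, the same
# casualty could progress to shock or worse over a few steps.
casualty = Casualty("bleeding")
casualty.treat("tourniquet")
for _ in range(5):
    casualty.tick()
print(casualty.condition)   # "bleeding" (stable), since treatment was correct
```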


You might be asking yourself: what's wrong with script- and rule-based A.I. in a modular training program? Nothing is wrong with it. Humans can design and program aspects of these simulations through linear or branched pathways to achieve training goals; however, I want to present how implementation might change as A.I. evolves.


Machine learning and reinforcement learning have several advantages when appropriately applied, so here are my top three. 

 

  1. Scalability: Creating human responses through pre-programmed scenarios can be immensely complex and time-consuming; reinforcement learning or machine learning A.I. can streamline training scenario creation. 

  2. Complexity of scenarios: Making training more realistic and unpredictable. 

  3. Personalized training: A.I. can identify a trainee's zone of proximal development and then tailor scenarios to maximize progress within that zone (a simple sketch follows this list). 
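
As a rough sketch of the personalized-training idea, here is one simple way a system might keep scenario difficulty inside a trainee's zone of proximal development by tracking recent success rates. The thresholds and the success metric are invented for illustration.

```python
# Illustrative adaptive-difficulty sketch: keep the trainee challenged but not
# overwhelmed by adjusting scenario difficulty from recent performance.
def adjust_difficulty(difficulty, recent_results, low=0.4, high=0.8, step=1):
    """recent_results: list of booleans for the trainee's last few scenarios."""
    if not recent_results:
        return difficulty
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate > high:        # too easy: push toward the edge of ability
        return difficulty + step
    if success_rate < low:         # too hard: back off so learning can happen
        return max(1, difficulty - step)
    return difficulty              # in the productive zone: hold steady

# Example: 5 successes out of 6 -> raise difficulty from 3 to 4.
print(adjust_difficulty(3, [True, True, False, True, True, True]))
```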


 

Find me on LinkedIn

Email me at

