Stay Silent is a VR strategic FPS (currently in Early Access) where players shoot at each other while using invisibility cloaking devices and compete for map objectives.
I did most of the audio work on this project: all systems and sound design, production, tool design, integration, and management, collaborating with the programmers, with one colleague helping on music composition and another on communication channels.
Using Unity and Wwise, I was responsible for the audio side of the project, responding to the team's development with audio design, audio systems, and a workflow suited to VR.

The primary audio focus was on guns and a variety of items and weapons, as well as a sense of space, using these to pinpoint your invisible opponent's location. Later in the project, UI and a slower pace became a larger part of the soundscape as development leaned toward inventory holograms and a skill system for tools, shifting toward a more strategy-based shooter with less direct interaction with the environment itself.
With VR design still an evolving challenge, I had to find solutions on my end to integrate and iterate in Unity without relying too heavily on the programmer assigned to audio.
Using Wwise within Unity, I built the following:

- Wwise Reflect and Auro3D plugins to create dynamic early reflections and reverberation for weapon and prop interactions, driven by geometry I configured and passed to Wwise from Unity, plus HRTF processing on a selection of audio buses.
- Ambisonics format and processing for ambient backgrounds and for specific emitters where it served the feeling (mostly sounds located on the player and large sources that benefit from an even energy spread).
- Event and Switch structures to limit the number of Events and names the programmer had to manipulate.
- Two main scripted tools to address integration time and responsibility issues, and emitter performance costs when computing spatial position and reflections.
- Multiple parameters to modify and blend sound layers alongside the player's gameplay, adding detail for immersion during movement and filtering sources depending on the field of vision.
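The field-of-vision filtering in the last point can be sketched language-agnostically. The actual implementation lives in Unity C# and drives a Wwise game parameter (e.g. via the Wwise Unity integration's `AkSoundEngine.SetRTPCValue`), but the core idea is just mapping the angle between the listener's gaze and each emitter to a filter amount. The function name and value range below are illustrative, not the project's actual code:

```python
import math

def fov_filter_value(listener_pos, listener_forward, emitter_pos,
                     max_filter=100.0):
    """Map gaze-to-emitter angle to a 0..max_filter low-pass amount:
    0 when facing the source directly, max_filter when it is behind."""
    # Direction from listener to emitter, normalized.
    dx = emitter_pos[0] - listener_pos[0]
    dy = emitter_pos[1] - listener_pos[1]
    dz = emitter_pos[2] - listener_pos[2]
    length = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    to_emitter = (dx / length, dy / length, dz / length)
    # Cosine of the angle between gaze and source: 1 in front, -1 behind.
    dot = sum(f * e for f, e in zip(listener_forward, to_emitter))
    # Remap [-1, 1] -> [max_filter, 0].
    return (1.0 - dot) * 0.5 * max_filter

# An emitter straight ahead is unfiltered; one behind gets full filtering.
print(fov_filter_value((0, 0, 0), (0, 0, 1), (0, 0, 5)))   # 0.0
print(fov_filter_value((0, 0, 0), (0, 0, 1), (0, 0, -5)))  # 100.0
```

Updating this value per frame per emitter and feeding it to a game parameter lets the mix subtly de-emphasize sources outside the player's view, which matters in VR where head direction is the main aiming and listening cue.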


You can follow this link to read about these designs and functions in more detail, as well as audio development choices, within the limits of the NDA before the game's release.

Here are also some isolated assets I created:



To be released mid-2020