360 VR video showcasing

AI_

L1: Registered
Jan 13, 2019
10
5
Hi there, I'm the current owner and dev for Jump Academy, and I'm working on a project that streamlines map showcasing in 360 VR. The motivation was to showcase jump maps and individual jumps for informative purposes, but I figured this may be useful for the Source mapping community in general.

Here is an example output of what I have so far (testing with CS:GO, since its generally better texture quality helps with qualitative evaluation):

View: https://www.youtube.com/watch?v=PVEXCNy70eU


Note: there is still some work needed on the stitching so the sides of the 3D cube map blend together better.

This is all done with freely available software (SourceMod, Python, OpenCV, FFmpeg), so there is no need for commercial software like Adobe Premiere. The recording can be done in-game, so there is no need to port the map to run in Source Filmmaker either.

Please let me know if you're interested, and I'll keep you posted about development and releases.
 

AI_

L1: Registered
Jan 13, 2019
10
5
The hardest part is probably getting the prerequisites installed and working: it needs Metamod and SourceMod running on the local game server, plus Python libraries for communicating with the local server and, later, for image processing.

After that, this is the procedure so far:

1. Create the camera paths in-game using the SourceMod plugin, via an in-game menu and drag-and-drop path nodes. This can take around 10-15 minutes per map depending on pickiness, and the path can be replayed live. There's also an option to have the camera point in the direction it is traveling along the path, for those who just want a regular fly-through video without the VR:

View: https://www.youtube.com/watch?v=DCdXzvN5rGU


2. Run the Python recording script while the game is running the map. It will talk to the local server and automatically start/stop recording (using the hotkey for your video recorder, e.g. FRAPS or Nvidia ShadowPlay) for each of the 6 view angles needed for the cube map. The SourceMod plugin controls the player camera during this whole process. This outputs 6 video files. (A rough sketch of this automation is below, after the list.)

3. Run the Python stitching script with these 6 files as input. Currently this outputs all the frames as PNG images after converting them to the equirectangular projection that most VR viewers/editing software understand. It runs at about 2 seconds per frame on CPU at 4K output resolution on my rig with an Intel i7-3770K clocked at 4.3 GHz. (The core of the projection is sketched below as well.)

4. Run FFmpeg with the frames as input to produce the final video at the desired frame rate and bitrate. (Example invocation below.)
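
For those curious about the plumbing, here are some simplified sketches of what steps 2-4 do. These are illustrations of the general approach only, not the actual project code, and the ports, hotkeys, and file names in them are assumptions.

For step 2, assuming (hypothetically) that the SourceMod plugin signals the script over a local TCP socket and that the recorder toggles on Alt+F9 (ShadowPlay's default record hotkey):

Code:
# Hypothetical start/stop automation: listen for "start"/"stop" lines from
# the plugin and forward them to the recorder as a hotkey press.
import socket
import pyautogui

HOST, PORT = "127.0.0.1", 27050  # placeholder port

with socket.create_server((HOST, PORT)) as server:
    conn, _ = server.accept()
    with conn, conn.makefile() as stream:
        for line in stream:
            if line.strip() in ("start", "stop"):
                pyautogui.hotkey("alt", "f9")  # toggle the recorder

For step 3, the core idea is: for every output pixel, turn its longitude/latitude into a view direction, pick the cube face by the dominant axis, and sample that face where the direction intersects it. The u/v sign conventions below are arbitrary and may need flipping to match how the six views were captured:

Code:
# Minimal cube-map-to-equirectangular resampling for one frame.
import numpy as np

def cube_to_equirect(faces, out_w, out_h):
    """faces: dict of six square (n, n, 3) uint8 images keyed by the axis
    each camera looks along: '+x', '-x', '+y', '-y', '+z' (up), '-z' (down)."""
    n = next(iter(faces.values())).shape[0]
    # Longitude/latitude of every output pixel centre.
    lon, lat = np.meshgrid(
        (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi,
        np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi)
    # Unit view direction per pixel.
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    ax, ay, az = np.abs(x), np.abs(y), np.abs(z)
    out = np.zeros((out_h, out_w, 3), np.uint8)
    with np.errstate(divide='ignore', invalid='ignore'):
        # (pixel mask, face key, in-plane u, in-plane v) per cube face.
        planes = [
            ((ax >= ay) & (ax >= az) & (x > 0), '+x',  y / ax, -z / ax),
            ((ax >= ay) & (ax >= az) & (x < 0), '-x', -y / ax, -z / ax),
            ((ay >= ax) & (ay >= az) & (y > 0), '+y', -x / ay, -z / ay),
            ((ay >= ax) & (ay >= az) & (y < 0), '-y',  x / ay, -z / ay),
            ((az >= ax) & (az >= ay) & (z > 0), '+z',  y / az,  x / az),
            ((az >= ax) & (az >= ay) & (z < 0), '-z',  y / az, -x / az),
        ]
        for mask, key, u, v in planes:
            # Map [-1, 1] face coords to pixel indices and sample.
            col = np.clip(np.nan_to_num((u + 1) / 2 * n), 0, n - 1).astype(int)
            row = np.clip(np.nan_to_num((v + 1) / 2 * n), 0, n - 1).astype(int)
            out[mask] = faces[key][row[mask], col[mask]]
    return out

And step 4 is a plain FFmpeg invocation (the rate values here are placeholders; see later posts on what YouTube VR actually needs):

Code:
# Encode the numbered frames into the final video.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "60",        # output frame rate
    "-i", "frames/%06d.png",   # assumed frame naming pattern
    "-c:v", "libx264",
    "-b:v", "250M",            # target bitrate
    "-pix_fmt", "yuv420p",     # broad player compatibility
    "output.mp4",
], check=True)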
 

AI_

L1: Registered
Jan 13, 2019
10
5
You can also skip the video recording process entirely if you just want to process a single frame from screenshots of the 6 faces. This would be useful if this site adds support for VR previews in the browser without going through YouTube. There should be public JS libraries for static-image VR.
 

AI_

L1: Registered
Jan 13, 2019
10
5
The script now supports calling HLAE to dump the frames. HLAE also allows us to force a non-integer FOV value so we can better align the cube faces. The issue with having different exposure levels for each cube surface was also fixed.
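
For anyone running into the same exposure problem with other tooling, a simple post-processing way to even out per-face exposure (illustrative only, not necessarily how my script fixed it) is to scale each face's mean luminance toward a shared reference before stitching:

Code:
# Rescale each cube face so its mean luminance matches the average
# across all six faces (a crude but simple exposure equalizer).
import numpy as np
import cv2

def match_exposure(faces):
    """faces: list of six BGR uint8 images."""
    means = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).mean() for f in faces]
    target = float(np.mean(means))
    return [
        np.clip(f.astype(np.float32) * (target / m), 0, 255).astype(np.uint8)
        for f, m in zip(faces, means)
    ]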


It's not perfect, but I think this is the best I can do for now.
 

Fantaboi

Gone and one day forgotten
aa
Mar 11, 2013
892
1,050
@Fantasma did this 360 video for our map Shoreleave:

View: https://youtu.be/yCiMAyY2sxY


I know he put a lot into the backend of it to get it to work; I'm kind of curious what he thinks of your method.
Looks really cool, and very similar to my own method. The major change is using SourceMod instead of SFM, which eliminates 90% of the issues my approach has. I'm interested in how this will impact particles when stitching.
And for more sharing, here's my blog post I did of the endeavour, which may hold something useful. Feel free to message me on Discord if you want to discuss it more.
 
Last edited:

AI_

L1: Registered
Jan 13, 2019
10
5
Thanks guys. These were actually the videos and tutorials that motivated me to start this project in the first place.

Previously (and in my videos above) I followed the procedure of recording each camera direction as a separate video, then synchronizing the frames before stitching. But after working on this a bit more, SourceMod lets me control the camera direction per tick and record frames per tick with HLAE. So at the moment I am experimenting with freezing the camera rig, recording through each camera angle at 1 tick each, then unfreezing it to move forward one tick, and repeating. So even if there are particles, we may be able to minimize discontinuity when recording this way.
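
Assuming the dump comes out as one flat numbered frame sequence cycling through the six directions (the actual file naming depends on the HLAE setup), regrouping it into per-face streams before stitching is just slicing:

Code:
# Split an interleaved frame dump (face0, face1, ..., face5, face0, ...)
# into one ordered list of frame paths per cube face.
from pathlib import Path

frames = sorted(Path("dump").glob("*.png"))  # hypothetical dump folder
face_streams = [frames[i::6] for i in range(6)]
# Sanity check: every face should have the same number of frames.
assert all(len(s) == len(face_streams[0]) for s in face_streams)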

My latest experiment with this tick-tock-like mechanism allowed me to record in SBS stereo:


Stereo in VR seems like it would be painful, so I'll defer that for later. I will be redoing the VR video next using this method.
 
Last edited:

AI_

L1: Registered
Jan 13, 2019
10
5
Here is the output in VR using the new method. The stitching is finally seamless now.

This time the camera rig's angle follows the direction of travel instead of the usual axis lock, so there's less neck-turning involved when viewing with VR goggles.


What do you guys think? Do you prefer this or axis lock?
 

AI_

L1: Registered
Jan 13, 2019
10
5
Here's the normal axis-aligned VR output with the new method. The stitches are also seamless here now.

 

AI_

L1: Registered
Jan 13, 2019
10
5
Here's what it looks like after re-enabling particles and moving things like clouds, trees, and chickens:


Unfortunately, it turns out letting 6 ticks go by per camera snapshot is still quite significant because everything else ends up moving 6x faster than normal.

Also, testing the video bitrate, it looks like upwards of 360 Mbps is needed to not look like utter crap at lower resolutions after YouTube's processing. The video in the previous post was at 250 Mbps and looks very distorted going from 4K down to even just 1440p. VR seems very sensitive to video compression artifacts.
 
Last edited:

Da Spud Lord

Occasionally I make maps
aa
Mar 23, 2017
1,339
994
Unfortunately, it turns out letting 6 ticks go by per camera snapshot is still quite significant because everything else ends up moving 6x faster than normal.
"host_timescale 0.17"?
 

AI_

L1: Registered
Jan 13, 2019
10
5
Unfortunately, no. The game's tick rate does not change with that, so we end up with the same problem.

I tried changing the camera angles between ticks, but the engine does not support this. It might be possible on HLAE's end with STV/GOTV demo playback instead of a live server, though; I'll do some experimenting with that.
 

AI_

L1: Registered
Jan 13, 2019
10
5
I have started porting my code for generating VR images over to a public repo, starting with something a fellow over at Mapcore requested: a way to convert an equirectangular image back into individual cube faces, for use in e.g. a skybox for a map.

Here's the Python script, for those interested: https://github.com/geominorai/source-panorama-tools/blob/master/eqr_to_cube.py

Example input: https://en.wikipedia.org/wiki/File:MKZD_Bratcevo_2015-10_Equirectangular.jpg
Output files: (see attachment)
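
The linked script is the full implementation; for those who just want the gist, the core mapping for a single face boils down to something like this (simplified sketch, assuming square faces at 90-degree FOV, extracting the face that looks along +x):

Code:
# Sample one 90-degree-FOV cube face out of an equirectangular image.
import numpy as np
import cv2

def equirect_to_face(eqr, n):
    h, w = eqr.shape[:2]
    # Face-plane coordinates in [-1, 1] for an n x n output.
    u, v = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    # View direction for each face pixel (image plane at x = 1),
    # converted to longitude/latitude.
    x, y, z = np.ones_like(u), u, -v
    lon = np.arctan2(y, x)
    lat = np.arctan2(z, np.hypot(x, y))
    # Back to equirectangular pixel coordinates for cv2.remap.
    map_x = ((lon + np.pi) / (2 * np.pi) * w).astype(np.float32)
    map_y = ((np.pi / 2 - lat) / np.pi * h).astype(np.float32)
    return cv2.remap(eqr, map_x, map_y, cv2.INTER_LINEAR)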
 

Attachments

  • cube_faces.png (209.6 KB)