MilOps: The Original Concept – part 2

Continuing from where we left off at the end of the previous post, we're going to dig a little deeper into technology history in order to answer the question 'Can it be done?'.

A lot has changed in this millennium in the PC world. In line with Moore's law, transistor counts (and with them, performance) have roughly doubled every two years over this period. For comparison: in 2000 my PC had a Pentium II processor running at 233 MHz, 64 MB of RAM and a hard drive with a capacity of 2 GB. It also had an nVidia Riva TNT2 graphics card with 16 MB of VRAM.

According to the Steam Hardware & Software Survey of March 2018, the most common Windows PC configuration today has a 3+ GHz quad-core CPU, 8 GB of RAM, a GeForce GTX 1060 GPU, 2 GB of VRAM and, to top it off, a 1 TB hard drive.

If we look at the performance increase since 2000, the numbers are pretty impressive:

  • The CPU runs at a clock speed 15 times higher and has 4 times as many cores, so a naive calculation would put it at about 60 times more powerful. However, modern CPUs do considerably more work per clock cycle than the old Pentium II, so the actual increase in performance is more along the lines of 200 times (see the back-of-the-envelope sketch after this list).
  • The amount of RAM has increased by a factor of 128, and the speed at which it can be accessed has gone up considerably as well.
  • Hard drive space has increased by a factor of 512 and VRAM has seen a 128-fold increase in capacity; like regular RAM, both can also be accessed much faster nowadays.
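For those who like to check the math, here's a tiny back-of-the-envelope sketch of where those factors come from. The clock speeds and capacities are the ones quoted above; the per-cycle (IPC) gain is a rough assumption rather than a measured figure.

    #include <cstdio>

    int main() {
        // Back-of-the-envelope comparison of the 2000 PC vs. the 2018 Steam-survey PC.
        const double cpu_clock_2000 = 233.0;   // MHz, Pentium II
        const double cpu_clock_2018 = 3500.0;  // MHz, ~3.5 GHz quad-core
        const int    cores_2018     = 4;
        const double ipc_gain       = 3.5;     // rough assumption: ~3-4x more work per cycle

        double clock_ratio = cpu_clock_2018 / cpu_clock_2000;   // ~15x
        double naive_cpu   = clock_ratio * cores_2018;          // ~60x
        double real_cpu    = naive_cpu * ipc_gain;              // ~200x

        printf("CPU: clock %.0fx, naive %.0fx, with IPC ~%.0fx\n",
               clock_ratio, naive_cpu, real_cpu);
        printf("RAM:  %.0fx\n", 8.0 * 1024 / 64);   // 64 MB -> 8 GB
        printf("HDD:  %.0fx\n", 1024.0 / 2);        // 2 GB -> 1 TB
        printf("VRAM: %.0fx\n", 2.0 * 1024 / 16);   // 16 MB -> 2 GB
        return 0;
    }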

The CPU-GPU performance gap

The most dramatic increase in number-crunching performance, however, is found in the GPU on the graphics card: where the TNT2 ran at 125 MHz and could process 2 pixels at a time, the GTX 1060 runs at 1400 MHz (11x faster) and can run 1280 shaders in parallel (640x as many). At the moment, the most powerful desktop Intel processor is about 10x SLOWER than the fastest nVidia GPU in terms of raw compute performance, and that gap is expected to keep widening.
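The "about 10x" figure comes from comparing theoretical peak compute throughput: units that can work in parallel, times operations per clock, times clock speed. Here is a minimal sketch of that kind of calculation; the core counts, shader counts and clocks below are illustrative assumptions, not exact product specs.

    #include <cstdio>

    int main() {
        // Very rough peak single-precision throughput estimate (illustrative figures).
        const double cpu_cores     = 8.0;    // high-end desktop CPU
        const double cpu_clock_ghz = 4.0;
        const double cpu_flops_clk = 32.0;   // e.g. 2 FMA units x 8 floats x 2 ops

        const double gpu_shaders   = 3584.0; // high-end GPU of the era
        const double gpu_clock_ghz = 1.5;
        const double gpu_flops_clk = 2.0;    // one FMA = 2 ops per shader per clock

        double cpu_gflops = cpu_cores * cpu_clock_ghz * cpu_flops_clk;    // ~1000 GFLOPS
        double gpu_gflops = gpu_shaders * gpu_clock_ghz * gpu_flops_clk;  // ~10750 GFLOPS
        printf("CPU ~%.0f GFLOPS, GPU ~%.0f GFLOPS, ratio ~%.0fx\n",
               cpu_gflops, gpu_gflops, gpu_gflops / cpu_gflops);
        return 0;
    }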

A lot of this performance increase over the years has gone into higher visual fidelity and display resolutions in games in general. For strategy games, it has allowed many of them to transition from 2D to 3D, add more effects and make battlefields more dynamic. It has also allowed the number of units controlled by the player to grow dramatically in RTS games like the Total War series or Planetary Annihilation, and allowed players to build huge, bustling cities in a game like Cities Skylines.

Based on our experience working with multi-core CPUs and using GPUs for various high-performance, non-graphics tasks, we are convinced that today's hardware is powerful enough, and has enough memory, to simulate an operational-sized battle at this level of detail on a decent PC. That doesn't mean it is easy: the technical challenges of building a scalable simulation engine that can actually exploit the potential of today's hardware are considerable, but they can be overcome.
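To give a flavour of what exploiting that potential looks like, here is a minimal, hypothetical sketch (not actual MilOps engine code) of a data-parallel update loop: every entity is advanced independently each tick, so the work spreads naturally across all available CPU cores, and the same per-entity pattern maps well onto a GPU.

    #include <algorithm>
    #include <execution>
    #include <vector>

    struct Entity {
        float x = 0.0f, y = 0.0f;    // position on the map
        float vx = 0.0f, vy = 0.0f;  // current velocity
    };

    // Advance every entity by one simulation step, spread across all cores.
    void tick(std::vector<Entity>& entities, float dt) {
        std::for_each(std::execution::par, entities.begin(), entities.end(),
                      [dt](Entity& e) {
                          // Per-entity work: just movement here; pathing, spotting,
                          // combat resolution, supply consumption, etc. in practice.
                          e.x += e.vx * dt;
                          e.y += e.vy * dt;
                      });
    }

    int main() {
        std::vector<Entity> entities(100000);  // an operational-sized force
        for (int frame = 0; frame < 60; ++frame)
            tick(entities, 1.0f / 60.0f);
    }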


Our approach is to build an engine that simulates combat at a tactical level with more detail and realism than your average RTS game, but generally less detail than a pure tactical battle simulator. This allows us to scale up to the desired size and put you, the player, in the commander's seat to command this mass of men and machines. By using real-world terrain and map data, you will be able to read the lay of the land just as the actual commanders back then could, and your unit will consist of its full establishment of troops. This means you're not just commanding the combat elements: you control the full "traveling circus", including supply troops, field kitchens, repair workshops, field hospitals and much more, each with their specific role to play.
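As a purely illustrative sketch (not our actual data model), a unit's full establishment could be represented as a hierarchy of combat and support elements along these lines:

    #include <string>
    #include <vector>

    // Illustrative only: a unit is more than its combat elements; the
    // supporting "tail" travels with it and has its own role to play.
    enum class Role { Combat, Supply, FieldKitchen, RepairWorkshop, FieldHospital };

    struct Element {
        std::string name;   // e.g. "1st Rifle Company"
        Role role;
        int personnel;
        int vehicles;
    };

    struct Unit {
        std::string designation;         // e.g. "Infantry Battalion"
        std::vector<Element> elements;   // combat and support elements alike
    };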

So you think you can be a General? It’s time to find out.

Comments and reactions to this blog entry can be made on our forum.
