In the good old days your program was responsible for doing everything that needed to be done during its execution, either by code you wrote yourself or by library code others wrote that you added to your program. The only other thing running in the computer was the code that read in your compiled program - if you were lucky. Some computers required code to be entered through front-panel switches before they could do anything more (the original "bootstrap" process), and on some you had to enter your whole program that way.
It was quickly found that it was nice to have resident code capable of loading and executing programs. Later it was found that computers were powerful enough to run several programs at the same time by having the CPU switch between them, especially if the hardware could help, but with the added complexity of keeping the programs from stepping on each other's toes (for instance, how do you handle multiple programs trying to send data to the printer at once?).
All this resulted in a large amount of helper code being moved out of the individual programs and into the "operating system", with a standardized way of invoking the helper code from user programs.
And that is where we are today. Your programs run at full speed, but whenever they need something managed by the operating system they call helper routines provided by the operating system, so that code is neither needed nor present in the user programs themselves. This includes writing to the display, saving files, accessing the network, etc.
Microkernels have been written that provide just what is needed for a given program to run without a full operating system. This offers some advantages to experienced users while giving up most of the others. You may want to read the Wikipedia page about it - https://en.wikipedia.org/wiki/Microkernel - if you want to know more.
I experimented with a Microkernel capable of running a Java Virtual Machine, but later found that the sweet spot for that is Docker.