that doesn't change how much memory it takes to compile, nor the relative amount of memory the program needs to do the same task
i personally believe that, as a programmer, if you don't have a grasp of the architecture of the hardware then you can't write good code for it
i don't like the architecture of the intel/amd type of processor; the motorola is a much more generous design, with more registers and a more logically laid out set of fast short-term storage elements
if your code doesn't respect the limitations of the hardware it will develop mysterious problems: code that looks perfect in the language but isn't a good fit for the hardware
to me that is the centre of the Go philosophy: humans learn how to write code for the machine. that is the polar opposite of the C++ philosophy, where the machine interprets the code, usually very imperfectly and thus slowly, to finally arrive at decent performance
functional is nice, but it would only work really well on a simplified architecture with very large register files, which is what the hardware physically is now, just not how it exposes itself. functional style doesn't let you actively choose where your data is stored, so if you blow the memory budget of one cache level you are punished with a big memory copy or a slow access, and lesser performance
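to make the cache point concrete, here is a minimal Go sketch, not anything from the discussion above: it sums the same 4096x4096 matrix twice, once walking rows and once walking columns. the matrix size and the crude timing are illustrative assumptions, but the column walk blows the cache budget and gets punished with extra memory traffic, even though both loops look equally "perfect" in the language

```go
// cache_sketch.go - illustrative only; sizes and timing method are assumptions.
package main

import (
	"fmt"
	"time"
)

const n = 4096

func main() {
	m := make([][]int64, n)
	for i := range m {
		m[i] = make([]int64, n)
	}

	// Row-major walk: memory is touched sequentially, so each cache line
	// fetched from RAM is fully used before the next one is needed.
	start := time.Now()
	var sum int64
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			sum += m[i][j]
		}
	}
	fmt.Println("row-major:   ", time.Since(start), sum)

	// Column-major walk: the same arithmetic, perfectly legal in the
	// language, but each access lands in a different cache line, so the
	// hardware punishes it with far more memory traffic.
	start = time.Now()
	sum = 0
	for j := 0; j < n; j++ {
		for i := 0; i < n; i++ {
			sum += m[i][j]
		}
	}
	fmt.Println("column-major:", time.Since(start), sum)
}
```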
i'm sure there are many ways to improve computer programming, but any that disrespects the hardware's limitations is punished with slow compilation and higher runtime memory use, which means less performance and more chances of bugs slipping through due to a long edit/test cycle
no single model fits the hardware, because the hardware is highly heterogeneous in its design, probably in order to fit all these divergent philosophies of program organisation