You’re right about “beefier”; I was thinking “higher quality,” not beefier: cleaner voltage output, not so much the wattage. [More a matter of watching out for overall “motherboard” health, especially when messing around.]
Motherboards can suffer the equivalent of “brownouts” with cruddy power supplies. We get those at our house in the summertime: barely noticeable dips in voltage, usually in the hottest parts of summer. They’re within acceptable limits, but more poorly made things, whether cheap motors or motherboards, can suffer from the voltage drops.
BUT – you might need a beefier power supply to run the extra fans or cooling units. There have always been higher-quality air cooling systems available for computers; there used to be companies that ONLY sold air-conditioning units specially made for computers, along with better-quality internal cooling systems.
[back in the ancient days of room-size computers, they had special cold rooms built for the computers]
Still, though, we’ve been dealing with Moore’s Law for some time now. CPUs once needed minimal cooling; then, through the evolution of the Pentium chipsets, I watched it go from a small 1/4-inch heatsink, to a 1/2-inch heatsink, to heatsinks with fans, to heatsinks with more advanced cooling technologies built in. Cheaper computers had the same cheap fans, and laptops barely have any at all as the quest for skinnier and skinnier took place.
Thankfully, though, reduced-instruction-set CPUs have come along; they run more efficiently within the available space and heat limits, allowing greater power in a smaller space with smaller power needs.
As they push the physical limits of what the standard https://en.wikipedia.org/wiki/Von_Neumann_architecture computer can handle in smaller and smaller spaces, we’re bound to run into a few physics problems in the process, but they’ve done a great job getting the most out of the least space.
I’m most impressed by the strides made in memory, giving more space for data manipulation; in 3D matrix transforms done directly on higher-end video chipsets; and in solid-state and even magnetic storage media. Really impressed.
I think in the near future, memory requirements will change the style of CPU that’s been in use for a very long time now, and the kinds of things being done on video cards for display will be incorporated into data storage and regular system memory.
When THAT happens – when there is a 3D array-space available at a fundamental level – that’s when computers will do absolutely unbelievable things… stuff we’re straining to do today [like rendering a 3D image, or being forced to cheat in game engines by using mathematical formulas rather than being able to directly address points in 3D space].
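To make that “cheat” concrete, here’s a toy sketch in Python (the sphere and the grid sizes here are made up for illustration): one way directly stores and addresses points in a 3D voxel grid, the other answers the same question with a formula and stores nothing.

```python
import numpy as np

# Direct addressing: store every point of a tiny 3D scene in a voxel grid.
size = 64
voxels = np.zeros((size, size, size), dtype=bool)

# Fill in a solid sphere by setting points in 3D space directly.
z, y, x = np.ogrid[:size, :size, :size]
center, radius = size // 2, size // 4
voxels[(x - center) ** 2 + (y - center) ** 2 + (z - center) ** 2 <= radius ** 2] = True

# The "cheat": store nothing, answer each query with a formula instead.
def inside_sphere(px, py, pz):
    return (px - center) ** 2 + (py - center) ** 2 + (pz - center) ** 2 <= radius ** 2

# Both agree about the center point, but the grid costs size**3 cells of
# memory while the formula costs nothing to store.
print(bool(voxels[32, 32, 32]), inside_sphere(32, 32, 32))
```

Engines lean on the formula side precisely because hardware punishes the grid side: memory for full 3D grids blows up cubically.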
I’m looking forward to the future. What we have today is nice but we can go much further.
They’ve also made strides with layering, sort of like how they do it on maps. It’s not quite 3D space, but, like a series of transparencies laid on top of one another, it allows for more room in less space and more points connected to one another. What sucks for me is that I can picture future computers, how the ternary logic works, in my head… but I don’t have the skills to make it happen. Thankfully, I’m not the smartest guy in the world, so there are undoubtedly thousands of people working on the problem right now who have the exact same ideas, but with better resources.
So, I just have to wait.
I’m not putting down “what’s available today”. What’s available today is great. I just keep expecting ‘more’.
My computer right now is rendering a large Minecraft map using Overviewer. This laptop is cheap: $400, over two years old, 2 cores, 8 gigs of memory. It’s ancient stuff now.
It estimates 240 hours to go on the processing.
I know the current specs for better 2015 computers and have a rough idea how much faster they’d be at processing the same thing with the same code.
I could distribute it among several computers in parallel to make it even faster.
But the problem is more fundamental than clock speed. It’s the architecture. We’re trying to fit 3D (or 4D, when dealing with time, as with 3D video) into a TWO DIMENSIONAL architecture.
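That flattening is easy to see in code. Memory is really addressed as one long line, so a “3D” array is a flat block with an index formula layered on top; this sketch uses the standard row-major layout that C and NumPy use under the hood (the grid dimensions are just example numbers):

```python
# Memory is addressed linearly, so a "3D" array is really a flat one
# with an index formula layered on top.
W, H, D = 4, 3, 2          # width, height, depth of a small grid
flat = [0] * (W * H * D)   # one long 1D block of storage

def index(x, y, z):
    # Row-major flattening: stepping in z jumps a whole W*H "slice".
    return x + y * W + z * W * H

flat[index(2, 1, 1)] = 99
print(flat[2 + 1 * 4 + 1 * 12])  # same cell, index computed by hand
```

Every 3D access pays for that multiplication and for the fact that neighbors in z are far apart in the flat line, which is exactly the mismatch between 3D data and a linear architecture.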
The whole architecture itself needs changing: the logic, the math, the way chips are designed. They’re working on it, though, so I’m not worried. I just keep waiting. I won’t be happy ’til the stuff is in consumers’ hands and we take it for granted, like smartphones. We barely think about them, but that’s where some of the greatest advancements have come in overall system design.