nullplan wrote:
A server CPU however doesn't care about power use (as much) and is more interested in throughput than latency.
Power consumption is very much a concern for servers; hell, a standard way of rating server efficiency is performance per watt, often with purchase price folded in on top of that (performance per watt per dollar).
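To make that metric concrete, here's a toy back-of-envelope in Python. Every number is invented for illustration, and throughput is crudely approximated as cores times clock, so don't read real SKUs into it:

[code]
# Toy efficiency comparison. All figures are invented; throughput is
# crudely approximated as cores * clock, ignoring IPC, memory, etc.
def perf_per_watt(cores, clock_ghz, watts):
    return cores * clock_ghz / watts

# A high-clock desktop-style part vs. a low-clock, many-core server part:
print(f"desktop-style: {perf_per_watt(8, 4.0, 150):.2f} GHz-cores/W")   # 0.21
print(f"server-style:  {perf_per_watt(32, 2.5, 200):.2f} GHz-cores/W")  # 0.40

# The slower-clocked, wider part does nearly twice the work per watt,
# which is exactly the trade servers make. Dividing by purchase price
# gives the "per dollar" term, which decides whether the premium for
# the bigger part is actually worth paying.
[/code]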
It is also one of the reasons why server CPUs are clocked somewhat lower than high-performance computing CPUs (the two are not at all the same topic, mind you) or workstation CPUs (still another matter, though they generally clock in the same range as HPC chips), which in turn are clocked lower than most consumer CPUs. Your typical server-class Xeon or Epyc chip has a base clock somewhere under 3 GHz, with the ones aimed at HPC or (for Xeon, since AMD's Threadripper line is a separate badge) workstations usually clocked 200-500 MHz higher. While the Coffee Lake and third-generation Epyc lines have been pushing that up towards 3.5 GHz or even 4 GHz for some SKUs, they are still clocked lower than all but the lowest-end contemporary Core, Ryzen, or Threadripper chips meant for the desktop.
While the other main reasons for this - longevity and stability - are in many ways the more serious concerns, both of them are tightly tied to cooling and to the CPU's TDP, which naturally enough is determined in part by (and in turn limits) power draw.
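There's a simple physical reason the clock/power relationship is so lopsided: dynamic CPU power goes roughly as P ≈ C·V²·f, and since hitting higher frequencies generally means raising the core voltage, power falls off considerably faster than linearly as you back the clock down. A rough sketch - the capacitance constant and the voltage curve below are both invented for illustration:

[code]
# Rough dynamic-power model: P ~ C * V^2 * f. The constant c and the
# voltage-vs-frequency line below are made up purely for illustration.
def dynamic_power(freq_ghz, c=30.0):
    voltage = 0.8 + 0.1 * freq_ghz   # assumption: higher clocks need more volts
    return c * voltage**2 * freq_ghz

for f in (2.5, 3.0, 3.5, 4.0):
    print(f"{f} GHz -> ~{dynamic_power(f):.0f} W")
# 2.5 GHz -> ~83 W ... 4.0 GHz -> ~173 W: dropping the clock by ~37%
# roughly halves the power here, because the voltage drops with it.
[/code]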
Oh, and cooling itself is a significant cost center when you have hundreds or thousands of CPUs in tightly-packed blade racks and have to dump the waste heat not only out of the systems but out of the whole room - if not the entire building - through a specialized HVAC system.
Simply put, power costs are a key part of a datacenter's bottom line.
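To put a rough number on that (every figure below is an assumption picked to keep the arithmetic easy, but the shape of it is realistic):

[code]
# Back-of-envelope yearly power bill for a modest server room.
# Every figure here is an assumed example value.
servers = 1000        # machines in the room
watts_each = 400      # average draw per machine, CPUs plus everything else
pue = 1.5             # power usage effectiveness: facility power / IT power;
                      # the extra 0.5 is mostly cooling and power delivery
usd_per_kwh = 0.10    # assumed electricity price

it_kw = servers * watts_each / 1000
facility_kw = it_kw * pue
yearly_usd = facility_kw * 24 * 365 * usd_per_kwh
print(f"IT load: {it_kw:.0f} kW, facility: {facility_kw:.0f} kW, "
      f"power bill: ~${yearly_usd:,.0f}/year")
# -> IT load: 400 kW, facility: 600 kW, power bill: ~$525,600/year
[/code]

Half a million dollars a year, a third of it spent just moving heat around, is exactly the kind of line item that makes performance per watt a first-class purchasing criterion.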