OSDev.org
https://forum.osdev.org/

data center eras
https://forum.osdev.org/viewtopic.php?f=11&t=30689
Page 1 of 1

Author:  ggodw000 [ Mon Aug 15, 2016 6:01 pm ]
Post subject:  data center eras

I am preparing a presentation and found the following list from CCNA to be interesting:
data center 1.0 - 1960 - mainframe
data center 2.0 - 1980 - low-end servers
data center 3.0 - 2000 - virtualization

Now I have to say this is an oversimplification because, for example, various aspects, elements, and components of data center virtualization appeared over a span of roughly 1952-2013, and it looks like 2000 was chosen as a midpoint, or the point at which it started to pick up mainstream adoption.

Then I made up these two and wonder whether my points have any validity:

data center 4.0 - 2004 - hyperconvergence (Lego building-block approach to data center configuration and expansion)
data center 5.0 - future?
It is either:
Google data center (AI, water cooling)
Ethereum (decentralized virtual machine)

Author:  Kazinsal [ Mon Aug 15, 2016 6:27 pm ]
Post subject:  Re: data center eras

The disconnect of storage and compute is a really big thing that people often overlook when thinking about what comprises a modern datacentre. The CPU and RAM for your virtualization environment are often going to be in completely separate physical host machines from your disk space, and your disk space is likely going to be collected and partitioned into various storage pools based on availability, redundancy, speed, etc.
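
To make the pooling part concrete, here is a rough Python sketch (pool names, media types, and figures are invented for illustration, not from any real product) of how disk space might be grouped into pools and a new volume matched to one by capacity and speed:

Code:
# Hypothetical sketch: disk space grouped into pools, and a new volume
# matched to a pool by free capacity and media speed. Names/figures invented.
from dataclasses import dataclass

@dataclass
class StoragePool:
    name: str
    redundancy: str   # e.g. "raid10", "raid6"
    media: str        # e.g. "ssd", "10k-sas", "nearline"
    free_gb: int

POOLS = [
    StoragePool("gold",   "raid10", "ssd",       4_000),
    StoragePool("silver", "raid6",  "10k-sas",  20_000),
    StoragePool("bronze", "raid6",  "nearline", 80_000),
]

def pick_pool(size_gb, want_ssd=False):
    """Return the first pool with enough free space (and SSD media if required)."""
    for pool in POOLS:
        if pool.free_gb >= size_gb and (not want_ssd or pool.media == "ssd"):
            return pool
    return None

print(pick_pool(500, want_ssd=True).name)   # -> "gold"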

Author:  ggodw000 [ Mon Aug 15, 2016 6:50 pm ]
Post subject:  Re: data center eras

Kazinsal wrote:
The disconnect of storage and compute is a really big thing that people often overlook when thinking about what comprises a modern datacentre. The CPU and RAM for your virtualization environment are often going to be in completely separate physical host machines from your disk space, and your disk space is likely going to be collected and partitioned into various storage pools based on availability, redundancy, speed, etc.


Yes, I think that is a good point. Although the name says hyperconvergence (which most places advertise as bringing compute, storage, and network together in a single node), it looks more like a convergence of storage onto compute and network:
SAN/NAS -> local disk drive.

But I am not sure about this part: according to what you said, CPU and RAM are going in completely the opposite direction (diverging)? Because CPU and RAM are the only components that are not virtualized (or translated, right?). Everything else, such as network cards and graphics, is virtualized and represented by software. (Of course there are exceptions, e.g. SR-IOV/VDI, which move back to hardware for performance, but let's put those outside the scope.)


Author:  Kazinsal [ Tue Aug 16, 2016 10:13 am ]
Post subject:  Re: data center eras

The idea is that you separate your compute resources from your storage resources, both physically and logically, now that we have commercially available, extremely high-bandwidth links (e.g. 10 Gigabit Ethernet, 10Gig + LACP, 40 Gigabit Ethernet, Fibre Channel) that we can use to link huge arrays of mass storage (storage area networks) to dozens of clustered compute units (each composed of a CPU and some amount of RAM, often split into dedicated control and shared virtualization memory).

When you're working in the land of 10 gigabits per second and higher, you don't need to have your storage physically present alongside your compute.
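
Just to put rough, back-of-the-envelope numbers on that (ballpark drive figures and nominal line rates, ignoring protocol overhead), a quick Python sketch:

Code:
# Back-of-the-envelope: nominal link bandwidth vs. rough single-drive
# throughput. All figures are ballpark assumptions, not benchmarks.
links_gbit = {"10GbE": 10, "2x10GbE + LACP": 20, "40GbE": 40}
drives_mb_s = {"7.2k HDD": 180, "SATA SSD": 550}

for link, gbit in links_gbit.items():
    link_mb_s = gbit * 1000 / 8   # gigabits/s -> approx. megabytes/s
    for drive, mb_s in drives_mb_s.items():
        print(f"{link}: ~{link_mb_s:.0f} MB/s, about {link_mb_s / mb_s:.1f}x one {drive}")

Even a single 10GbE link works out to several local drives' worth of bandwidth, which is why the storage no longer has to sit in the same chassis as the compute.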

Author:  ggodw000 [ Tue Aug 16, 2016 1:27 pm ]
Post subject:  Re: data center eras

Kazinsal wrote:
The idea is that you separate your compute resources from your storage resources, both physically and logically, now that we have commercially available, extremely high-bandwidth links (e.g. 10 Gigabit Ethernet, 10Gig + LACP, 40 Gigabit Ethernet, Fibre Channel) that we can use to link huge arrays of mass storage (storage area networks) to dozens of clustered compute units (each composed of a CPU and some amount of RAM, often split into dedicated control and shared virtualization memory).

When you're working in the land of 10 gigabits per second and higher, you don't need to have your storage physically present alongside your compute.

Hmm, I am afraid it is going the opposite way. Yes, it used to be (and often still is) that way: SAN/NAS storage separate from compute so that multiple servers can access it. However, I think this becomes problematic when you need to expand fast, deploy fast, configure fast, etc.

So with hyperconvergence, it is coming back to local disk again. However, storage is of course still redundant, and data HA is handled by software-defined storage, so if any node (server) in the data center goes south, taking its compute/network/storage with it, one or more copies are always somewhere else. This way data center configuration/expansion/contraction can happen fast and become Lego-like.
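
To illustrate what I mean by the software-defined storage part, here is a toy Python sketch (the placement policy and node names are completely made up) showing that every block lands on more than one node, so losing a whole node still leaves a copy elsewhere:

Code:
# Toy sketch of replica placement in a software-defined storage layer:
# every block is written to N distinct nodes, so losing one node
# (compute + local disks) still leaves at least one copy elsewhere.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2

def place(block_id, nodes=NODES, replicas=REPLICAS):
    """Deterministically pick `replicas` distinct nodes for a block."""
    start = int(hashlib.md5(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

def survives_failure(block_id, failed_node):
    """True if at least one replica lives outside the failed node."""
    return any(n != failed_node for n in place(block_id))

replicas = place("vm42-disk0-block17")
print(replicas)                                      # two distinct nodes
print(survives_failure("vm42-disk0-block17", replicas[0]))  # True

Real SDS layers do this with far smarter placement (failure domains, rebalancing, rebuild on failure, etc.), but the basic idea is the same.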

https://www.youtube.com/watch?v=mGpGG_6l38k
