What exactly is a data center?
Before we get into the history, let’s define the term data center. A data center is a physical facility that houses computer systems and related components, such as power supplies, backup systems that provide redundancy, and communication equipment. All of this is kept in a controlled, secure environment.
Companies use data centers for demanding tasks such as data security, virtual servers, cloud computing, hosting, load balancing, storage, and more.
Arguably the first data center was built in the USA in 1946 to house ENIAC (Electronic Numerical Integrator and Computer). The US Army used it for defense calculations such as artillery firing tables. The machine predated transistors: it contained almost 18,000 vacuum tubes, 7,200 crystal diodes, and 10,000 capacitors. It was huge, occupying 167.2 square meters (1,800 square feet)!
During the 1960s, vacuum tubes were replaced with transistors. IBM made the strongest push at the time with its System/360 line of mainframes. In the same era, virtualization technology was invented, and mainframes began to multitask.
In 1964, the first supercomputer was introduced: the CDC 6600, with sustained performance of 1 MFLOPS and a peak of 3 MFLOPS.
The 1970s started with a bang: the introduction of Intel’s 4004 processor (1971), the first commercially available microprocessor. As a general-purpose programmable chip, it could become the “brain” of many different devices through customized software.
Two years later, the Xerox Alto arrived, featuring the first graphical user interface. This computer was far ahead of its time and even came with a three-button mouse.
In 1977, Chase Manhattan Bank deployed the first commercial LAN, ARCnet. It supported up to 255 computers at a data rate of 2.5 Mbps.
Just a year later, the American software company SunGard established the first commercial disaster recovery business.
In the 1980s, the massive, expensive mainframes were on their way out, replaced by cheaper, easier-to-maintain PCs.
The American computer manufacturer Sun Microsystems created the Network File System (NFS) protocol in 1984. With it, client computers could access files over the network in much the same way they accessed local storage.
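NFS is still in everyday use on Unix-like systems. As an illustration (the server name and export path below are hypothetical examples, not from the original text), a client can mount a remote NFS export so it appears as a local directory:

```
# Mount the export /export/home from the server "fileserver"
# onto the local directory /mnt/home (requires root privileges):
mount -t nfs fileserver:/export/home /mnt/home

# To make the mount persistent across reboots, an /etc/fstab
# entry with the same information can be used instead:
# fileserver:/export/home  /mnt/home  nfs  defaults  0  0
```

Once mounted, programs read and write files under /mnt/home exactly as they would any local path, which is precisely the transparency NFS introduced.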
The 1990s were the era of the dot-com boom. Internet usage was increasing rapidly, and so was the demand for better connectivity. Data centers gained popularity as a result, with new and larger facilities emerging, and the data-center service model became common among companies.
Around the turn of the millennium, power consumption began to cause maintenance issues: that generation of data centers simply used too much power. This started a trend toward improving efficiency, building better cooling systems, and reducing consumption.
In 2002, Amazon launched its web services platform, AWS, which grew to include cloud computing, storage, and more. Ten years later, 38% of businesses were already in the cloud.
Today, data centers are moving to a new subscription-based client-server model. Companies choose this model to reduce costs: rather than purchasing expensive hardware and constantly upgrading it, they use cloud services, where a third party is responsible for the hardware resources and often for IT support as well.
The future points toward low-power, long-lasting client devices that connect to the cloud (data centers), where all of the heavy processing is done.