In the 1980s the growth of client/server LAN architectures continued while that of mainframe computing environments declined. The advent of the IBM PC in 1981 and the standardization and cloning of this architecture led to an explosion of PC-based LANs in businesses and corporations around the world, particularly with the release of the IBM PC AT hardware platform in 1984. The number of PCs in use grew from 2 million in 1981 to 65 million in 1991. Novell, which appeared on the scene in 1983, soon became a major player in file and print servers for LANs with its Novell NetWare platform.
However, the biggest development in LAN networking in the 1980s was the continued evolution and standardization of Ethernet. While the DIX consortium worked on Ethernet standards in the late 1970s, the IEEE with its Project 802 initiative initially tried to develop a single unified LAN standard. When it became clear that this goal was unattainable, Project 802 was divided into a number of separate working groups, with 802.3 focusing on Ethernet, 802.4 on Token Bus, and 802.5 on Token Ring technologies and standards. The work of the 802.3 group resulted in the first Ethernet standard, called 10Base5 or thicknet, which was almost identical to the version developed by DIX. 10Base5 was called thicknet because it used thick coaxial cable; the name itself encodes 10-Mbps baseband signaling over 500-meter segments. In 1985 the 802.3 standard was extended to include 10Base2 using thin coaxial cable, commonly called thinnet, where the "2" denotes segments of roughly 200 meters (185 meters in practice).
Through most of the 1980s, coaxial cable was the main form of cabling used for implementing Ethernet. A company called SynOptics Communications, however, developed a product called LattisNet that was designed for transmitting 10-Mbps Ethernet over twisted-pair wiring in a star topology connected to a central hub or repeater. This wiring was cheaper than coaxial cable and was similar to the twisted-pair cabling already used in residential and business telephone systems. LattisNet was such a commercial success that in 1990 the 802.3 committee approved a new standard called 10BaseT for Ethernet running over twisted-pair wiring. 10BaseT soon superseded the coaxial forms of Ethernet because of its ease of installation and because its hierarchical star-wired topology was a good match for the architectural topology of multistory buildings.
In other Ethernet developments, fiber-optic cabling, first developed in the early 1970s by Corning, found its first commercial networking application in Ethernet in 1984. (The technology was standardized as 10BaseFL in the early 1990s.) In 1988 the first transatlantic fiber-optic undersea cable was laid, greatly increasing the capacity of transatlantic communication systems.
Ethernet bridges became available in 1984 from DEC and were used both to connect separate Ethernet LANs into large networks and to reduce traffic bottlenecks on overloaded networks by splitting them into separate segments. Routers could be used for similar purposes, but during the 1980s bridges generally offered better price and performance, as well as less complexity. Again, market developments preceded standards: work on the IEEE 802.1D bridging standard began in 1987, but the standard was not finalized until 1990.
In the UNIX arena, the development of the Network File System (NFS) by Sun Microsystems in 1985 resulted in a proliferation of diskless UNIX workstations having built-in Ethernet interfaces. This development helped drive the demand for Ethernet and accelerated the evolution of Ethernet bridging technologies into today's switched networks. By 1985 the rapidly increasing numbers of UNIX hosts and LANs connected to the ARPANET began to transform it from what had been mainly a network of mainframe and minicomputer systems into something like what it is today. The first UNIX implementation of TCP/IP came in version 4.2 of Berkeley's BSD UNIX, from which other vendors such as Sun Microsystems quickly derived their own TCP/IP implementations. Although PC-based LANs rapidly grew in popularity in business and corporate settings during the 1980s, UNIX continued to dominate in academic and professional high-end computing environments as the mainframe environment declined.
IBM introduced its Token Ring networking technology in 1985 as an alternative LAN technology to Ethernet. IBM had submitted the technology to the IEEE in 1982, and the 802.5 committee standardized it in 1984. IBM soon supported the integration of Token Ring with its existing SNA networking services and protocols for IBM mainframe computing environments. The initial Token Ring specifications delivered data at 1 Mbps and 4 Mbps, but IBM dropped the 1-Mbps version in 1989 when it introduced a newer 16-Mbps version. Interestingly, no formal IEEE specification exists for 16-Mbps Token Ring--vendors simply adopted IBM's technology for the product. Efforts were made to develop high-speed Token Ring, but these have since been abandoned, and today Ethernet reigns supreme.
Also in the field of local area networking, in 1982 the American National Standards Institute (ANSI) began standardizing the specifications for Fiber Distributed Data Interface (FDDI). FDDI was designed to be a high-speed (100 Mbps) fiber-optic networking technology for LAN backbones on campuses and industrial parks. The final FDDI specification was completed in 1988, and deployment in campus LAN backbones grew during the late 1980s and the early 1990s. But today FDDI is considered legacy technology and has been superseded in most places by Fast Ethernet and Gigabit Ethernet (GbE).
In 1983 the ISO developed an abstract seven-layer model for networking called the Open Systems Interconnection (OSI) reference model. Although some commercial networking products were developed based on OSI protocols, the standard never really took off, primarily because of the predominance of TCP/IP. Other standards from the ISO and ITU that emerged in the 1980s included the X.400 electronic messaging standards and the X.500 directory recommendations, both of which held sway for a while but have now largely been superseded: X.400 by the Internet's Simple Mail Transfer Protocol (SMTP) and X.500 by the Lightweight Directory Access Protocol (LDAP).
A major event in the telecommunications/WAN field in 1984 was the divestiture of AT&T as the result of the seven-year antitrust suit brought against AT&T by the U.S. Justice Department. AT&T's 22 Bell operating companies were formed into 7 new RBOCs (only 4 are left today). This meant the end of the old Bell telephone system; the RBOCs soon formed the Bellcore telecommunications research establishment to provide the centralized research services previously supplied to them by Bell Laboratories, which remained with AT&T. The United States was then divided into Local Access and Transport Areas (LATAs), with intra-LATA communication handled by local exchange carriers (the Bell Operating Companies or BOCs) and inter-LATA communication handled by inter-exchange carriers (IXCs) such as AT&T, MCI, and Sprint Corporation.
The result of the breakup was increased competition, which led to new WAN technologies and generally lower costs. One of the first effects was the offering of T1 services to subscribers in 1984. Until then, this technology had been used only for backbone circuits for long-distance communication. New hardware devices were offered to take advantage of the increased bandwidth, especially high-speed T1 multiplexers, or muxes, that could combine voice and data in a single communication stream. The year 1984 also saw the development of digital Private Branch Exchange (PBX) systems by AT&T, bringing new levels of power and flexibility to corporate subscribers.
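The T1 line rate itself falls out of simple time-division multiplexing arithmetic: 24 voice channels, each carrying one 8-bit sample per frame at 8000 frames per second, plus one framing bit per frame. A quick back-of-the-envelope check in Python:

    # T1 bandwidth from first principles (TDM arithmetic).
    channels = 24          # DS0 voice channels per T1 frame
    bits_per_sample = 8    # one PCM sample per channel per frame
    frames_per_sec = 8000  # 8-kHz voice sampling rate
    framing_bits = 1       # one framing bit per 193-bit frame

    payload = channels * bits_per_sample * frames_per_sec  # 1,536,000 bps
    overhead = framing_bits * frames_per_sec               #     8,000 bps
    print(payload + overhead)  # 1544000 -> the familiar 1.544 Mbps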
The Signaling System #7 (SS7) digital signaling system was deployed within the PSTN in the 1980s, first in Sweden and later in the United States. SS7 made new telephony services available to subscribers, such as caller ID, call blocking, and automatic callback.
The first trials of ISDN, a fully digital telephony technology that runs on existing copper local loop lines, began in Japan in 1983 and in the United States in 1987. All major metropolitan areas in the United States have since been upgraded to make ISDN available to those who want it, but ISDN has not caught on in the United States as a WAN technology as much as it has in Europe.
The 1980s also saw the standardization of SONET technology, a high-speed physical layer (PHY) fiber-optic networking technology developed from time-division multiplexing (TDM) digital telephone system technologies. Before the divestiture of AT&T in 1984, local telephone companies had to interface their own TDM-based digital telephone systems with the proprietary TDM schemes of long-distance carriers, and incompatibilities created many problems. This provided the impetus for creating the SONET standard, which was finalized in 1989 through a series of Comité Consultatif International Télégraphique et Téléphonique (CCITT; anglicized as International Telegraph and Telephone Consultative Committee) standards known as G.707, G.708, and G.709. By the mid-1990s almost all long-distance telephone traffic in the United States used SONET on trunk lines as the physical interface.
The 1980s brought the first test implementations of Asynchronous Transfer Mode (ATM) high-speed cell-switching technologies, which could use SONET as the physical interface. Many concepts basic to ATM were developed in the early 1980s at the France-Telecom laboratory in Lannion, France, particularly in the PRELUDE project, which demonstrated the feasibility of end-to-end ATM networks running at 62 Mbps. The CCITT standardized the 53-byte ATM cell format in 1988, and the new technology was given a further push with the creation of the ATM Forum in 1991. Since then, use of ATM has grown significantly in telecommunications provider networks and has become a high-speed backbone technology in many enterprise-level networks around the world. However, the vision of ATM on users' desktops has not been realized because of the emergence of cheaper Fast Ethernet and GbE LAN technologies and because of the complexity of ATM itself.
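That 53-byte cell format is rigidly fixed: a 5-byte header followed by exactly 48 bytes of payload. The Python sketch below packs a cell using the UNI header layout (GFC, VPI, VCI, payload type, CLP, and a CRC-8 header checksum); it is an illustration of the format under those assumptions, not a production implementation:

    def hec(header4: bytes) -> int:
        # Header Error Control: CRC-8 (polynomial x^8 + x^2 + x + 1) over the
        # first 4 header bytes, XORed with 0x55 as the HEC rule specifies.
        crc = 0
        for byte in header4:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc ^ 0x55

    def atm_cell(gfc: int, vpi: int, vci: int, pti: int, clp: int,
                 payload: bytes) -> bytes:
        assert len(payload) == 48, "ATM payload is always exactly 48 bytes"
        # UNI header: GFC(4 bits) VPI(8) VCI(16) PTI(3) CLP(1) = 32 bits,
        # followed by the 8-bit HEC to complete the 5-byte header.
        word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pti << 1) | clp
        header4 = word.to_bytes(4, "big")
        cell = header4 + bytes([hec(header4)]) + payload
        assert len(cell) == 53
        return cell

    print(len(atm_cell(0, 1, 100, 0, 0, bytes(48))))  # 53

The fixed cell size is the point: hardware switches can forward small, uniform cells at predictable rates, which is what made ATM attractive for mixing voice and data traffic.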
The convergence of voice, data, and broadcast information remained a distant vision throughout the 1980s and was even set back because of the proliferation of networking technologies, the competition between cable and broadcast television, and the slow adoption of residential ISDN. New services did appear, however, especially in the area of commercial online services such as America Online (AOL), CompuServe, and Prodigy, which offered consumers e-mail, bulletin board systems (BBSs), and other services.
A significant milestone in the development of the Internet was the 1982 decision to switch ARPANET's networking protocol from NCP to TCP/IP. On January 1, 1983, NCP was turned off permanently--anyone who had not migrated to TCP/IP was out of luck. ARPANET, which by then connected several hundred systems, was soon split into two parts, ARPANET and MILNET.
The first international use of TCP/IP took place in 1984 at the Conseil Européen pour la Recherche Nucléaire (CERN), a physics research center located in Geneva, Switzerland. TCP/IP was designed to provide a way of networking different computing architectures in heterogeneous networking environments. Such a protocol was badly needed because of the proliferation of vendor-specific networking architectures in the preceding decade, including "homegrown" solutions developed at many government and educational institutions. TCP/IP made it possible to connect diverse architectures such as UNIX workstations, VMS minicomputers, and Cray supercomputers into a single operational network. TCP/IP soon superseded proprietary protocols such as Xerox Network Systems (XNS), ChaosNet, and DECnet. It has since become the de facto standard for internetworking all types of computing systems.
CERN was primarily a research center for high-energy particle physics, but it became an early European pioneer of TCP/IP and by 1990 was the largest subnetwork of the Internet in Europe. In 1989 a CERN researcher named Tim Berners-Lee developed the Hypertext Transfer Protocol (HTTP) that formed the basis of the World Wide Web (WWW). And all of this developed as a sidebar to the real research that was being done at CERN--slamming together protons and electrons at high speeds to see what fragments would appear!
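The protocol Berners-Lee devised was deliberately minimal: a client opens a TCP connection, sends a short textual request, and reads back a document until the server closes the connection. A sketch of an HTTP/1.0-style exchange in Python (example.com is a placeholder host used for illustration):

    import socket

    # Open a TCP connection to a web server and speak early-style HTTP:
    # one textual request, one response, then the connection closes.
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())  # e.g. 'HTTP/1.0 200 OK'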
Also important to the development of Internet technologies and protocols was the introduction of the Domain Name System (DNS) in 1984. At that time, ARPANET had more than 1000 nodes, and trying to remember their numerical IP addresses was a headache. DNS greatly simplified that process. Two other Internet protocols were introduced soon afterwards: the Network News Transfer Protocol (NNTP) was developed in 1987, and Internet Relay Chat (IRC) was developed in 1988.
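What DNS automated is easy to demonstrate today; in Python, the resolver is a single call (example.org is a placeholder name, and the address returned will vary):

    import socket

    # Map a human-friendly name to a numeric IP address -- the lookup that,
    # before DNS, meant consulting a centrally maintained hosts file.
    print(socket.gethostbyname("example.org"))  # e.g. '93.184.216.34'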
Other systems paralleling ARPANET were developed in the early 1980s, including the research-oriented Computer Science NETwork (CSNET), and the Because It's Time NETwork (BITNET), which connected IBM mainframe computers throughout the educational community and provided e-mail services. Gateways were set up in 1983 to connect CSNET to ARPANET, and BITNET was similarly connected to ARPANET. In 1989, BITNET and CSNET merged into the Corporation for Research and Educational Networking (CREN).
In 1986 the National Science Foundation NETwork (NSFNET) was created. NSFNET networked the five national supercomputing centers together using dedicated 56-Kbps lines. This connection was soon seen as inadequate and was upgraded to 1.544-Mbps T1 lines in 1988. In 1987, NSF and Merit Network agreed to jointly manage the NSFNET, which had effectively become the backbone of the emerging Internet. By 1989 the Internet had grown to more than 100,000 hosts, and the Internet Engineering Task Force (IETF) was officially created to administer its development. In 1990, NSFNET officially replaced the aging ARPANET and the modern Internet was born, with more than 20 countries connected.
Cisco Systems was one of the first companies in the 1980s to develop and market routers for Internet Protocol (IP) internetworks, a business that today is worth billions of dollars and is a foundation of the Internet. Hewlett-Packard was Cisco's first customer for its routers, which were originally called gateways.
In wireless telecommunications, the first analog cellular telephone systems were deployed in Norway and Sweden in 1981, and systems soon followed in France, Germany, and the United Kingdom. The first U.S. commercial cellular phone system, named the Advanced Mobile Phone Service (AMPS) and operating in the 800-MHz frequency band, was introduced in 1983. By 1987 the United States had more than 1 million AMPS cellular subscribers, and higher-capacity digital cellular phone technologies were being developed. The Telecommunications Industry Association (TIA) soon developed specifications and standards for digital cellular communication technologies.
A landmark event that was largely responsible for the phenomenal growth in the PC industry (and hence the growth of the client/server model and local area networking) was the release of the first version of Microsoft's text-based, 16-bit MS-DOS operating system in 1981. Microsoft, which had become a privately held corporation with Bill Gates as president and chairman of the board and Paul Allen as executive vice president, licensed MS-DOS version 1 to IBM for its PC. MS-DOS continued to evolve and grow in power and usability until its final version, MS-DOS 6.22, which was released in 1993. (I still carry around a DOS boot disk wherever I go in case I need it--don't you?) Anyway, one year after the first version of MS-DOS was released in 1981, Microsoft had its own fully functional corporate network, the Microsoft Local Area Network (MILAN), which linked a DEC 2060, two PDP-11/70s, a VAX 11/750, and a number of MC68000 machines running XENIX. This type of setup was typical of the heterogeneous computer networks that characterized the early 1980s.
In 1983, Microsoft unveiled its strategy to develop a new operating system called Windows with a graphical user interface (GUI). Version 1 of Windows, which shipped in 1985, used a system of tiled windows that allowed users to work with several applications simultaneously by switching between them. Version 2, released in 1987, added support for overlapping windows and for expanded memory.
Microsoft launched its SQL Server relational database server software for LANs in 1988. Its current version, SQL Server 2000, is an enterprise-class application that competes with other major database platforms such as Oracle and DB2. IBM and Microsoft jointly released their 16-bit OS/2 operating system in 1987 and followed a year later with OS/2 1.1, which added the Presentation Manager GUI.
In miscellaneous developments, IBM researchers developed the Reduced Instruction Set Computing (RISC) processor architecture in 1980. Apple Computer introduced its Macintosh computing platform (the successor to its Lisa system) in 1984; the Macintosh's windowing GUI was a precursor to Microsoft Windows. Apple also introduced the 3.5-inch floppy disk in 1984. Sony Corporation and Philips developed CD-ROM technology in 1985. (Recordable CD-R technologies were developed in 1991.) IBM released its AS/400 midrange computing system in 1988, and it continues to be popular to this day.